SQLblog.com - The SQL Server blog spot on the web

Davide Mauri

A place for my thoughts and experiences on SQL Server, Business Intelligence and .NET

  • SQL Server Interpreter for Apache Zeppelin 0.6.2

    I’ve updated the code-base to Apache Zeppelin 0.6.2 and I’ve also finished a first simple-but-working implementation of autocomplete support (you can activate it using Ctrl + .). Right now the autocomplete is based on the keywords specified here:

    Reserved Keywords (Transact-SQL)

    It’s not much, I know, but it’s something, at least. The next step will be to read schemas, tables and columns from the SQL Server catalog views. And maybe extract the list of keywords from…somewhere else, to have more complete coverage.
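
    Reading object metadata for that kind of autocomplete should boil down to a query over the catalog views, something along these lines (just a sketch; the query the interpreter will actually use may differ):

    SELECT
        s.[name] AS schema_name,
        t.[name] AS table_name,
        c.[name] AS column_name
    FROM sys.tables AS t
    INNER JOIN sys.schemas AS s ON s.[schema_id] = t.[schema_id]
    INNER JOIN sys.columns AS c ON c.[object_id] = t.[object_id]
    ORDER BY s.[name], t.[name], c.column_id;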

    I’ve also removed the additional interpreters that may not be useful if you just plan to use it against T-SQL/TDS-compatible engines (SQL Server, Azure SQL and Azure DW), and configured the defaults so that it is ready to use with SQL Server right from the beginning.

    The code — along with compilation/install/basic usage instructions — is available on GitHub:

    Apache Zeppelin 0.6.2 for SQL Server

    Right now I’ve tested it only on Ubuntu Linux 16.04 LTS 64-bit. It should also work on native Windows but, since I haven’t tried it yet on that platform, I don’t know what challenges you may face in getting the full stack (Java, Maven, Node, etc.) working in order to compile and run it.

    At the beginning of next week I’ll release a small tutorial to show how you can use Apache Zeppelin for SQL Server on your Windows machine using Docker. I plan to do a few tutorials on the subject, since I find Apache Zeppelin very useful and I’m sure many other SQL Server folks will love it once they start to play with it.

    At some point I’ll also release just the binary package, so that you don’t have to compile it yourself (but hey, we love Linux now, don’t we?) and so that it can run on Windows as well. For now, though, I find the Docker container approach so much better than anything else (it “just runs” and I can do everything via GitHub and Docker Hub) that I’ll stick with it for a while.

  • Azure Functions to Schedule SQL Azure operations

    One of the things that I miss a lot when working on SQL Azure is the ability to schedule jobs, something that one normally does via SQL Server Agent when running on premises.

    To execute scheduled tasks on Azure, Microsoft recommends using Azure Automation. While this is surely one way to solve the problem, I find it a bit too complex for my needs. First of all, I’m not a PowerShell fan, and Azure Automation is all about PowerShell. Secondly, I just need to schedule some SQL statements to be executed, and I don’t really need all the other nice features that come with Azure Automation. With Azure Automation you can automate pretty much *all* the resources available on Azure, but my interest, for now, is only in SQL Azure. I need something simple. As simple as possible.

    Azure Functions + Dapper are the answer. Azure Functions can be triggered via CRON settings, which means that a job scheduler can easily be built.
    Here’s an example of a CRON trigger (in function.json):

    {
        "bindings": [
            {
                "name": "myTimer",
                "type": "timerTrigger",
                "direction": "in",
                "schedule": "0 30 4 * * *"
            }
        ],
        "disabled": false
    }

    CRON format is detailed here: Azure Function Timer Trigger. As a simple guideline, the format is:

    {second} {minute} {hour} {day} {month} {day of the week}

    The sample above tells the Azure Function to execute every day at 04:30. To turn such an expression into something more readable, tools like

    https://crontranslator.appspot.com/

    are available online. If you use such tools, just keep in mind that many don’t support seconds, so you have to remove them from the expression before using the tool.

    Dapper is useful because it makes executing a query a real breeze:

    // Execute() is a Dapper extension method on IDbConnection (requires "using Dapper;")
    using (var conn = new SqlConnection(_connectionString))
    {
        conn.Execute("<your query here>");
    }
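
    The statement itself can be anything you would normally put in a SQL Server Agent job step. Just as a hypothetical example (the table name is invented), a nightly cleanup could be something like:

    -- purge rows older than 30 days from a hypothetical audit table
    DELETE FROM dbo.AuditLog
    WHERE CreatedOn < DATEADD(DAY, -30, SYSUTCDATETIME());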

    To use Dapper in an Azure Function, a reference to its NuGet package has to be put in the project.json file:

    {
        "frameworks": {
            "net46": {
                "dependencies": {
                    "Dapper": "1.50.2"
                }
            }
        }
    }

    It’s also worth mentioning that Azure Functions can be called via HTTP or Web Hooks, and thus also via Azure Logic Apps or Slack. This means that complex workflows that automatically respond to certain events can be put in place very quickly.

  • Temporal Tables

    I delivered a talk about “SQL Server 2016 Temporal Tables” for the Pacific Northwest SQL Server User Group at the beginning of October. Slides are available on SlideShare here:

    http://www.slideshare.net/davidemauri/sql-server-2016-temporal-tables

    and the demo source code is — of course — available on GitHub:

    https://github.com/yorek/PNWSQL-201610

    The ability to automatically keep previous versions of data is really a killer feature for a database, since it lifts the burden of doing such a really-not-so-simple task from developers and bakes it directly into the engine, in a way that won’t even affect existing applications if you need to use it in legacy solutions.

    The feature is useful even for really simple use cases, and it opens up a nice set of analytics options. For example, I’ve just switched the feature on for a table where I need to store the status of an object that has to pass through several steps to be fully processed. Instead of going through the complexity of managing the validity interval of each row, I’ve just asked the developer to update the row with the new status and that’s it. Now, by querying the history table, I can understand which status takes the longest, on average, to be processed.
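
    In practice, switching the feature on for an existing table and running that kind of analysis boils down to a few statements like these (table and column names here are invented for the example):

    ALTER TABLE dbo.WorkItem ADD
        ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL
            CONSTRAINT DF_WorkItem_ValidFrom DEFAULT SYSUTCDATETIME(),
        ValidTo DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL
            CONSTRAINT DF_WorkItem_ValidTo DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999'),
        PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);

    ALTER TABLE dbo.WorkItem
        SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.WorkItem_History));

    -- average time (in minutes) spent in each status, taken from the history table
    SELECT [Status], AVG(DATEDIFF(MINUTE, ValidFrom, ValidTo)) AS AvgMinutesInStatus
    FROM dbo.WorkItem_History
    GROUP BY [Status];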

    That’s great: with less time spent on technical plumbing, more time can be spent on other, more interesting activities (like optimizing the code to improve performance where the analysis shows it is not as good as expected).

  • Azure SQL Database DTU Calculator

    One of the most common questions when you start to use SQL Azure is which level of service you actually need. In the cloud every wasted resource is a tangible additional cost, so it is good to choose the service level that best fits your needs, no more and no less. You can always scale it up later if needed.

    The "problem" is that the level is measured in DTU - Database Transaction Units - which a value that represents a mix of CPU, memory and I / O. The problem is that it is very difficult, if not impossible, to calculate this value for an existing on-premises server, so that you can have a compare it with the performance of your well-known on-premises server.

    Well, it *was* impossible. Now you can, thanks to this tool:

    Azure SQL Database DTU Calculator

    developed by Justin Henriksen, a Solution Architect specializing in Azure, which simplifies the estimation effort a lot. After running a PowerShell script to collect some metrics on the on-premises server, you upload the collected values to that site to get an idea of what DTU level is optimal in case you want to move that database or server to the cloud.

    Of course, the more representative your workload is of a real-world scenario, the better the estimate will be: keep this in mind before taking any decision. In addition to this website, there are also two very useful links to better understand which level of service is best suited to your situation:

    Enjoy!

  • Operator progress changes in LQS

    This may have gone unnoticed, since August is usually a “slow” month, but with the August release there has been a major change in how SQL Server Management Studio shows Live Query Statistics data.

    The operator-level percentage shown in Live Query Statistics is now the ratio between actual and estimated rows, which means the value can go way beyond 100% (for example, 5,000 actual rows against an estimate of 1,000 shows up as 500%). The purpose of this approach is to make it easier to spot places where cardinality estimation got it wrong for some reason, so that you can go and try to understand the problem and fix the query in order to improve performance or reduce resource usage.

    A detailed post on this topic by Pedro Lopes of the SQL Tiger team is here:

    https://blogs.msdn.microsoft.com/sql_server_team/operator-progress-changes-in-lqs/

    Now that Management Studio also follows a monthly release schedule, the posts by the SQL Server Release Services team about SSMS really need to be read carefully, just to be sure not to miss these small-but-huge-impact changes:

    https://blogs.msdn.microsoft.com/sqlreleaseservices/tag/ssms/

  • Apache Zeppelin for SQL Server/Azure via Docker

    For those of you interested in Big Data: I've just released the first version of a working Docker image that simplifies *a lot* the usage of Apache Zeppelin in a Windows environment.

    As you may know, I'm working on a SQL Server / SQL Azure interpreter for Apache Zeppelin, in order to have a good mainstream tool for interactive data exploration and visualization on the SQL Server platform as well.

    I've just finished a new version of the SQL Server interpreter, rebuilt from scratch, now much cleaner than the first alpha version I released months ago, and I also decided to use Docker to avoid the "Linux pains" :) for everyone who just wants to use Zeppelin and is not interested in *building* it.

    Here's a screenshot of the working container:

    [screenshot of the running container]

    If you want to try it (and/or help with development, documentation, and so on) you can use the Docker image here:

    https://hub.docker.com/r/yorek/zeppelin-sqlserver/

    Supporting Docker is especially important since it makes it *really really* easy to deploy the container to Azure and connect it to SQL Azure / Azure DW or SQL Server in an Azure VM. No manual build needed anymore.

    https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-docker-machine/

    Enjoy!

  • Changing the BI architecture. From Batch to Real-time, from Bulk Load to Message Processing

    In a world of Microservices, CQRS and Event Sourcing, it is more and more common to have the requirement that the BI/BA solution you’re developing be able to deal with incoming information (more precisely, messages and events) in almost real time.

    It’s actually a good exercise to try to understand how you can turn your “classic” batch-based solution into a message-based one, even if you’re still following the batch approach, because this new approach will force you to figure out how to deal with incremental and concurrent updates: problems that can help you renew and refactor your existing ETL solution to make it ready for the future. I really believe in the idea of continuous improvement, which means that every “x” months you should totally review an entire process of the existing solution, in order to see how it can be improved (which can mean: making it faster, or cheaper, or easier to maintain, and so on).
     
    It’s my personal opinion that if everything could be managed using events and messages, even the ETL process would be *much* simpler and more straightforward than what it typically is today, and to start going down that road we need to stop thinking in batches.

    This approach is even more important in the cloud, since it allows greater efficiency (and favors the usage of PaaS instead of IaaS) and helps to build a cheaper solution. In the workshop I’m going to deliver at SQL Nexus I’ll show that this, today, is something that can easily be done on Azure.

    All of this also fits perfectly in the Lambda Architecture, a generic architecture for building real-time business intelligence and business analytics solutions.

    If you’re intrigued by these ideas, or you’re simply facing the problem of moving an existing BI solution to the cloud and/or making it less batch and more real-time, the “Reference Big Data Lambda Architecture in Azure” workshop at SQL Nexus at the beginning of May is what you’re looking for.

    Here’s the complete agenda: nearly 7 hours of theory and a lot of demos to show how well everything blends together, with practical information that allows you to start using what you’ve learned right from the day after:

    • Introduction to Lambda Architecture
    • Speed Layer:
      • Event & IoT Hubs
      • Azure Stream Analytics
      • Azure Machine Learning
    • Batch Layer:
      • Azure Data Lake
      • Azure Data Factory
      • Azure Machine Learning
    • Serving Layer:
      • Azure SQL Data Warehouse / Azure SQL Database
      • Power BI

    See you in Copenhagen!

    PS

    In case you’re wondering, everything is also possible on-premises, obviously with different technologies. Way less cool, but who cares, right? We’re here to do our job with the best solution for the customer, and even if it’s not the coolest one, it may well do its job anyway. Yeah, I’m talking about SSIS: pretty old by now, but still capable of impressive things, especially if you use it along with Service Broker or RabbitMQ in order to create a real-time ETL solution.

  • Slides and Demos of my DevWeek sessions are online

    I’ve put the slide decks and the demos used in my sessions at DevWeek 2016 on SlideShare and GitHub.

    If you were there or you’re simply interested in the topics, here’s the info you need:

    Azure ML: from basic to integration with custom applications

    In this session, Davide will explore Azure ML from the inside out. After a gentle introduction to Machine Learning, we’ll look at the Microsoft offering in this field and all the features it provides, creating a simple yet 100% complete Machine Learning solution. We’ll start from something simple and then move on to some more complex topics, such as integration with R and Python and IPython Notebooks, up to Web Service publishing and usage, so that the created ML solution can be integrated with batch processes or even used in real time from LOB applications. Does all of this sound cool to you? Well, it is, since with ML you can really give that “something more” to your customers or employees that will help you make the difference. Guaranteed at 98.75%!

    Dashboarding with Microsoft: Datazen & Power BI

    Power BI and Datazen are two tools that Microsoft offers to enable Mobile BI and dashboarding for your BI solution. Guaranteed to generate the WOW effect and to make new friends among the C-level managers, both tools fit in the Microsoft BI vision and offer some unique features that will surely help end users make more informed decisions. In this session, Davide will show how to work with them, how they can be configured and used, and we’ll also build some nice dashboards to start getting confident with the products. We’ll also publish them to make them available on any mobile platform on the planet.

    Event Hub & Azure Stream Analytics

    Being able to analyse data in real time will surely be a very hot topic in the near future. Not only for IoT-related tasks, but as a general approach to user-to-machine or machine-to-machine interaction. From product recommendations to fraud detection alarms, a lot of stuff would be perfect if it could happen in real time. Now, with Azure Event Hubs and Stream Analytics, it’s possible. In this session, Davide will demonstrate how to use Event Hubs to quickly ingest new real-time data and Stream Analytics to query data on the fly, in order to do a real-time analysis of what’s happening right now.

    SQL Server 2016 JSON

    You want JSON? You finally have JSON support within SQL Server! The much-asked-for, long-awaited feature is finally here! In this session, Davide will show how JSON support works within SQL Server, what the pros and cons are, the capabilities and the limitations, and will also take a look at the performance of JSON vs. an equivalent relational(ish) solution for solving the common “unknown-schema-upfront” and “I-wanna-be-flexible” problems.

  • SQL Nexus 2016 Agenda Online

    It’s here and it’s fantastic:

    http://www.sqlnexus.com/agenda.html

    Here are my picks:

    • SQL Server Integration Services (SSIS) in SQL Server 2016 – Matt Masson
    • Beautiful Queries – Itzik Ben Gan
    • From SQL to R and beyond - Thomas Huetter
    • Fun with Legal Information in SQL Server: Data Retrieval - Matija Lah
    • Big Data in Production - Brian Vinter
    • Integrate Azure Data Lake Analytics - Oliver Engels
    • DBA Vs. Hacker: Protecting SQL Server - Luan Moreno Maciel
    • Identity Mapping and De-Duplicating - Dejan Sarka
    • SQL Server 2016 and R Engine-powerful duo - Tomaž Kaštrun
    • Dynamic Search Conditions - Erland Sommarskog
    • Normalization Beyond Third Normal Form - Hugo Kornelis
    • Responding to Extended Events in near real-time - Sartori Gianluca

    See you there!

  • SQL Nexus 2016 in Copenhagen

    From the 2nd to the 4th of May, in Copenhagen, the SQL Nexus conference will take place, and it looks like it is going to be one of those events that, if you live in Europe, you really cannot miss.


    Just visit the website to see how awesome the speaker roster is and, even if the agenda is not there yet, you can already feel that it is going to be *really* interesting:

    http://www.sqlnexus.com

    Now, besides the following pre-conference workshop:

    Reference Big Data Lambda Architecture in Azure
    The Lambda Architecture is a new generic, scalable and fault-tolerant data processing architecture, that is becoming more and more popular now that big data and real-time analytics are frequently requested by end users, enabling them to make informed decisions more precisely and quickly. During this full-day workshop we'll see how the Azure Data Platform can perfectly support such an architecture and how to use each technology to build it. From Azure IoT Hub and Azure Stream Analytics to Azure Data Lake and Power BI, we'll build a small Lambda-Architecture solution so that you'll be able to become confident with it and its implementation using Azure technologies.

    which I’ll deliver with my friend Allan Mitchell and which I’ve already mentioned before, I’m happy to announce that I’ll also have a regular session on Machine Learning, a topic I really love:

    Azure ML: from basic to integration with custom applications
    In this session, Davide will explore Azure ML from the inside out. After a gentle introduction to Machine Learning, we’ll look at the Microsoft offering in this field and all the features it provides, creating a simple yet 100% complete Machine Learning solution.
    We’ll start from something simple and then move on to some more complex topics, such as integration with R and Python and IPython Notebooks, up to Web Service publishing and usage, so that the created ML solution can be integrated with batch processes or even used in real time from LOB applications.
    Does all of this sound cool to you? Well, it is, since with ML you can really give that “something more” to your customers or employees that will help you make the difference. Guaranteed at 98.75%!

    See you there!

  • Using Apache Zeppelin on SQL Server

    At the beginning of February I started an exploratory project to check whether Apache Zeppelin could be easily extended in order to interact with SQL Server and SQL Azure. In the last week I’ve been able to get everything up and running. Given that I hadn’t used Java, JDBC or Linux since the nineties, when I was at university, I’m quite pleased with what I achieved (in just a dozen hours of no sleep). Here’s Zeppelin running a notebook connected to SQL Azure:

    [screenshot: a Zeppelin notebook running against SQL Azure]

    If you want to test it too, you just have to get the source code from the fork I’ve created on GitHub and follow the documentation in order to build it. I’ve just run through the tutorial I’ve put up and, in 15 minutes (max) from when you log in to your Ubuntu 15.10 installation, you should be able to have a running instance of Zeppelin with the SQL Server interpreter.

    Here’s the document that describes everything you need to do:

    https://github.com/yorek/incubator-zeppelin/blob/master/README.md

    Now, you may be wondering why you should be interested in Zeppelin at all. Well, if you’re into Data Science you already know how important the ability to interactively explore data is. And with SQL Server 2016 able to run R code natively, the ability to do some interactive exploratory work is even more important, for yourself and for the business users you work with. With Zeppelin (just like with Jupyter), creating an interactive query is as simple as this:

    [screenshot: an interactive query in a Zeppelin paragraph]
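
    In a Zeppelin note, a paragraph bound to the SQL Server interpreter is really just the query itself. Assuming %sqlserver is the name of the interpreter binding in the fork, it would look like this:

    %sqlserver
    SELECT TOP (10) [name], create_date
    FROM sys.databases
    ORDER BY create_date DESC;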

    But even if you aren’t into Data Science, Apache Zeppelin is really useful, because I think the lack of a nice online environment to query SQL Azure is quite annoying. I love SQL Server Management Studio, but sometimes I just need to write a quick-and-dirty query to see if everything is going the right way or, even better, I’d like to create a (maybe not so) simple dashboard with data stored in SQL Azure or SQL Data Warehouse. And maybe I don’t have my laptop with me, and all I have is a browser.

    Well, Apache Zeppelin is just perfect for all these needs, and it is actually much more than that. Its future looks very promising, so having it on the Microsoft Data Platform will make our beloved SQL Server / SQL Azure / SQL Data Warehouse / Azure Data Lake even more enjoyable.

    Right now this version is a sort of alpha and it works only on SQL Server and SQL Azure (I haven’t tested it yet on Azure SQL Data Warehouse, but it should work). It “just works” since, as I said at the beginning, this was more an experiment than anything else. Now that I know it is feasible, I’ll rewrite the SQL Server support for Zeppelin (called an “interpreter”) from scratch, since for this attempt I started from the PostgreSQL interpreter and, as a result, the code is not so good (it’s more a patchwork of “let’s try if this works” things)…even if it does the job. So if you download the source and take a look at the code…just keep this in mind, please :-).

    Enjoy it and, as usual, feedback is more than welcome. (And help, of course!)

    PS:

    Support for Azure Data Lake is not there yet. It will come ASAP, but I don’t know when yet. :-)

  • Devweek 2016

    I’m really happy to announce that I’ll be back in London, at the DevWeek 2016 Conference, in April. I’ll be talking about

    Though the conference name may imply that it’s dedicated to developers, in reality there are *a lot* of interesting sessions on databases, Big Data and, more generally, the Data Management and Data Science area.

    Here’s the Agenda

    http://devweek.com/agenda

    I’ll be there along with another well-known name on this blog, Dejan Sarka, just to make sure that BI / Big Data / Data Science and the like are well represented among all those developers. :-)

    See you there!

  • (Initial) Conference Plan for 2016

    2016 has not started yet and it already looks exciting to me! I already have plans for several conferences and I’d like to share them with you all, in case you’re interested in some of the topics.

    I’ll be presenting at Technical Cloud Day, a local Italian event, on January 26th, and I’ll be speaking about:

    • Azure Machine Learning
    • Azure Stream Analytics

    If you’re interested (and speak Italian) here’s the website:

    http://www.technicalcloudday.it/

    I’ll also be presenting at some international events, like:

    SQL Konferenz

    where I’ll deliver my “classic” Agile Data Warehousing workshop during the pre-con days:

    • Why a Data Warehouse?
    • The Agile Approach
    • Modeling the Data Warehouse
      • Kimball, Inmon & Data Vault
      • Dimensional Modeling
      • Dimension, Fact, Measures
      • Star & Snowflake Schema
      • Transactional, Snapshot and Temporal Fact Tables
      • Slowly Changing Dimensions
    • Engineering the Solution
      • Building the Data Warehouse
        • Solution Architecture
        • Naming conventions, mandatory columns and other rules
        • Views and Stored Procedure usage
      • Loading the Data Warehouse
        • ETL Patterns
        • Best Practices
      • Automating Extraction and Loading
        • Making the solution automatable
        • BIML
    • Unit Testing Data
    • The Complete Picture
      • Where Big Data comes into play?
    • After the Data Warehouse
      • Optimized Hardware & Software
    • Conclusion

    You can find more here:

    http://sqlkonferenz.de/agenda.aspx

    I’ll also have a regular session dedicated to SSISDB and its internals: SSIS Monitoring Deep Dive. I’ll show what’s inside and how you can use that knowledge to build (and improve) something like my SSIS Dashboard: http://ssis-dashboard.azurewebsites.net/
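
    To give an idea of what “inside” means: most of the monitoring data lives in the catalog views of the SSISDB database, so a query along these lines (a minimal sketch) already lists the most recent package executions:

    -- most recent package executions recorded in the SSISDB catalog
    -- status is a numeric code (e.g. 4 = failed, 7 = succeeded)
    SELECT TOP (10)
        execution_id, folder_name, project_name, package_name,
        status, start_time, end_time
    FROM SSISDB.catalog.executions
    ORDER BY execution_id DESC;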

    SQL Nexus

    This is a new Nordic conference where, along with Allan Mitchell, I’ll be presenting a new (super-cool, IMHO) workshop. We’ll discuss the Lambda Architecture, a new generic reference architecture for building real-time analytics solutions, and how it can be built using the features that Azure offers. We’ll show how to use Azure Event Hubs, Stream Analytics, Data Lake, Power BI and many other cool technologies from Azure.

    You can find more details here:

    Reference Big Data Lambda Architecture in Azure
    The Lambda Architecture is a new generic, scalable and fault-tolerant data processing architecture, that is becoming more and more popular now that big data and real-time analytics are frequently requested by end users, enabling them to make informed decisions more precisely and quickly. During this full-day workshop we'll see how the Azure Data Platform can perfectly support such an architecture and how to use each technology to build it. From Azure IoT Hub and Azure Stream Analytics to Azure Data Lake and Power BI, we'll build a small Lambda-Architecture solution so that you'll be able to become confident with it and its implementation using Azure technologies.

    http://www.sqlnexus.com/pre--and-main-conference.html

    Well, if you’re interested in one or more of these topics, you know where to go now. Bye!

  • Custom Data Provider in Datazen

    Playing with Datazen in the last few days, I had to solve a quite interesting problem that took me some time, but it also allowed me to dig deeper into the Datazen architecture in order to find a way to go past its (apparent) limits.

    Here’s the story, as I’m sure it will be useful to someone else too.

    One of our current customers has a quite complex Analysis Services dynamic security setup. Besides applying security based on who is accessing the data, they also want to apply security based on how that data is accessed. To satisfy this requirement, a specific extension to Excel (their client of choice) has been developed, and it uses the CustomData() MDX function.

    So, here’s the problem: how can I specify a value for the CustomData property in the SSAS connection string in Datazen, given that there is no such property exposed by default by the native SSAS data provider?

    Luckily Datazen supports custom data providers, so it’s quite easy to create a new one that exposes the properties you need:

    http://www.datazen.com/docs/?article=server/managing_data_provider_schemas

    I tried to go down the “Overriding built-in data providers” road, but I wasn’t able to make it work: I added the “CustomData” property to a file that overrides the default SSAS data provider settings, but in the end “CustomData” was the only property I was able to see in the overridden native provider. So I created a new SSAS data provider instead, and that’s it, everything works perfectly:

    <dataproviderschema>
        <id>MSSSAS</id>
        <enabled>true</enabled>
        <name>SSAS.EPSON</name>
        <type>ssas</type>
        <properties>
            <property>
                <name>Provider</name>
                <value>MSOLAP</value>
            </property>
            <property>
                <name>Data Source</name>           
            </property>
            <property>
                <name>Initial Catalog</name>
            </property>
            <property>
                <name>CustomData</name>
                <value>{00000000-0000-0000-0000-000000000000}</value>
            </property>
        </properties>
    </dataproviderschema>

    Be aware that Datazen does *a lot* of caching, so you’ll have to stop the Core service BEFORE you edit/create the XML file, otherwise you may find it overwritten with cached data; also be sure to run IISRESET on your web server, otherwise you can easily get mad trying to understand why what you’ve just done is not showing up in the UI.

    Besides the caching madness, everything works great.

    Hope this helps!

  • Configuring Pass-Through Windows Authentication in Datazen

    I’ve been working with Datazen lately (I’m working with a customer that literally fell in love with it) and one of the last things we tried, as part of a POC before going into real development, is integration with Windows Authentication.

    It’s really easy to do: you just follow the instructions here (in the section “Authentication Mode”)

    http://www.datazen.com/docs/?article=server/installing_server

    and it just works. As the documentation suggests, you just have to specify the domain name and that’s it.

    Of course, after that, you may also want to enable pass-through authentication, so that when a user tries to access a dashboard via the HTML interface, Datazen will use their logon credentials, without going through an additional logon screen.

    Here things can get tricky if you just follow the documentation here:

    http://www.datazen.com/docs/?article=server/configuring_integrated_windows_authentication

    which is correct, but only to a certain degree. Everything in it is right; it just omits one *very* important thing that you have to know to make sure it works as expected: you have to provide ALL FOUR SETTINGS (Server, UserName, Domain, Password) in order to make it work.

    If you forgot to do it during installation, no problem, you can do it later by setting the

    • ad_server
    • ad_username
    • ad_domain
    • ad_password

    configuration values as explained here:

    http://www.datazen.com/docs/?article=server/server_core_settings

    After that, the magic happens and everything works perfectly.

    PS

    Of course, you need to have Kerberos authentication and delegation configured correctly, but that’s another story.
