
Maria Zakourdaev

  • Using Google Charts API to Visualize Schema Changes

     

    Last week I worked on a new email report using Google Charts and liked it so much that I decided to share it here with anyone who finds it useful.

    I have a Schema Changes Audit table which is maintained by a DDL trigger. A record is added to this table every time anyone changes an object on the server.

    [images: the alert email showing a Google bar chart of schema changes]

    If you are not familiar with Google Charts, you can read my old post about how it works here. It is easy to use, very customizable and FREE. The visualization above uses the Google Bar Chart.

    To keep things short, I am using Transact-SQL to build an HTML image tag. The email body above contains this HTML:

    -------------------------------------------------------------------------

    <img src="http://chart.apis.google.com/chart?
    cht=bvg&
    chs=660x250&
    chco=CF9036,90062D,67E13B,82088D,319CBA&
    chd=t:0,1|1,3|3,2|18,5|4,8&
    chds=0,19&
    chxt=x&
    chxs=0,ff0000,12,0,lt&
    chxl=0:|Sep%202|Sep%203&
    chm=N,000000,0,-1,10|N,000000,1,-1,10|N,000000,2,-1,10|N,000000,3,-1,10|N,000000,4,-1,10&
    chdl=SendBarReport%20(by%20Maria)|TestAuditTable%20(by%20Diana)|TestAUditTable%20(by%20Maria)|TestAUditTable%20(by%20Roy)|V1%20(by%20Diana)
    " />

    To test the chart in a regular browser, take the source of the IMG tag and remove the line breaks:

    http://chart.apis.google.com/chart?cht=bvg&chs=660x250&chco=CF9036,90062D,67E13B,82088D,319CBA&chd=t:0,1|1,3|3,2|18,5|4,8&chds=0,19&chxt=x&chxs=0,ff0000,12,0,lt&chxl=0:|Sep%202|Sep%203&chm=N,000000,0,-1,10|N,000000,1,-1,10|N,000000,2,-1,10|N,000000,3,-1,10|N,000000,4,-1,10&chdl=SendBarReport%20(by%20Maria)|TestAuditTable%20(by%20Diana)|TestAUditTable%20(by%20Maria)|TestAUditTable%20(by%20Roy)|V1%20(by%20Diana)

    ------------------------------------------------------------------------

    Here is a script that produces the HTML code and sends out the email. Take into consideration that, if a wide variety of objects is changing, you should render this report for one day only; otherwise the chart will be too big.

     

    Create a stored procedure that builds the HTML and sends it

     

    To make sure that your server can send emails, you will need to enable the mailing functionality (see the snippet below). Here you can find a nice article on how to configure a mail profile.
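
    If Database Mail has never been used on the instance, the feature itself also has to be switched on first; a minimal sketch (the mail account and profile still have to be created separately, for example through the Database Mail Configuration Wizard):

    -- Enable the Database Mail extended stored procedures on the instance
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'Database Mail XPs', 1;
    RECONFIGURE;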

    /*
    EXEC SendBarReport 'yourmailgoeshere', 1, 1
    */

    ALTER PROCEDURE [dbo].[SendBarReport]
                  @MailRecipient varchar(256),
                  @PrintMode     bit = 1,
                  @MailMode      bit = 1
    AS
    BEGIN

    DECLARE @Legend varchar(max);
    DECLARE @ChartSeries varchar(max) = '';
    DECLARE @ChartColors varchar(max) = '';
    DECLARE @AxisLabels varchar(max) = '';
    DECLARE @ChartLegend varchar(max) = '';
    DECLARE @DataValueMarkers varchar(max) = '';
    DECLARE @Min int, @Max int;
    DECLARE @html varchar(max) = '';

    -- Prepare the data for the chart
    SELECT ChangeDate   = CAST(ChangeDate AS Date),
           Change       = ObjectName + ' (by ' + LoginName + ')',
           NumOfChanges = COUNT(1)
    INTO #FinalData
    FROM TestAuditTable
    GROUP BY CAST(ChangeDate AS Date), ObjectName, LoginName;

    -- Get distinct dates for the X axis
    SELECT @AxisLabels = @AxisLabels + LEFT(DATENAME(m,ChangeDate),3) + ' ' + CONVERT(varchar(5),DAY(ChangeDate)) + '|'
    FROM #FinalData
    GROUP BY ChangeDate
    ORDER BY ChangeDate

    -- Get MIN and MAX values to scale the chart correctly
    SELECT @Min = MIN(NumOfChanges) - 1,
           @Max = MAX(NumOfChanges) + 1
    FROM #FinalData;

    -- Prepare the chart legend, dynamically generate colors, define data value markers and chart series
    -- Chart series must be in the following format: |series1_val1,series1_val2,series1_val3|series2_val1,series2_val2,series2_val3|... where each part between | | belongs to a separate value on the X axis
    SELECT @ChartLegend = @ChartLegend + Change + '|',
           @ChartColors = @ChartColors + CONVERT(varchar(6),LEFT(newid(),6)) + ',',
           @DataValueMarkers = @DataValueMarkers + 'N,000000,' + CAST(ROW_NUMBER() OVER ( ORDER BY Change) - 1 AS VARCHAR(10)) + ',-1,10|',
           @ChartSeries = @ChartSeries + ( SELECT CAST(ISNULL(NumOfChanges,0) AS varchar(50)) + ','
                                           FROM #FinalData f4
                                           RIGHT JOIN (SELECT DISTINCT ChangeDate, f2.Change
                                                       FROM #FinalData
                                                       OUTER APPLY (SELECT DISTINCT Change FROM #FinalData ) f2
                                                       ) f3
                                                ON f3.ChangeDate = f4.ChangeDate and f3.Change = f4.Change
                                           WHERE f3.Change = f1.Change
                                           ORDER BY f3.ChangeDate
                                           FOR XML PATH ('')
                                         ) + '|'
    FROM #FinalData f1
    GROUP BY Change
    ORDER BY Change;

    -- Remove the trailing "|" or "," from the dynamically built strings
    set @ChartSeries = SUBSTRING(@ChartSeries,1,LEN(@ChartSeries)-2);
    set @ChartSeries = REPLACE(@ChartSeries,',|','|');
    set @ChartColors = SUBSTRING(@ChartColors,1,LEN(@ChartColors)-1);
    set @AxisLabels = SUBSTRING(@AxisLabels,1,LEN(@AxisLabels)-1);
    set @ChartLegend = SUBSTRING(@ChartLegend,1,LEN(@ChartLegend)-1);
    set @DataValueMarkers = SUBSTRING(@DataValueMarkers,1,LEN(@DataValueMarkers)-1);

    -- Prepare HTML
    set @html =
    '<img src="http://chart.apis.google.com/chart?
    cht=bvg&
    chs=660x250&
    chdlp=t&
    chco='+@ChartColors+'&
    chd=t:'+@ChartSeries+'&
    chds='+CAST(@Min as varchar)+','+CAST(@Max as varchar)+'&
    chxt=x&
    chxs=0,ff0000,12,0,lt&
    chxl=0:|'+@AxisLabels+'&
    chm='+@DataValueMarkers+'&
    chdl='+@ChartLegend+'&
    " />'

    IF @PrintMode = 1 BEGIN
           PRINT @html
    END

    -- Send email using the prepared HTML as the email body
    IF @MailMode = 1 BEGIN
           EXEC msdb.dbo.sp_send_dbmail
           @recipients = @MailRecipient,
           @subject = 'Schema Changes',
           @body = @html,
           @body_format = 'HTML'
    END

    END

    Audit table script

     

    CREATE SEQUENCE dbo.GetNextNumber AS int START WITH 1 INCREMENT BY 1;

    CREATE TABLE dbo.TestAuditTable(
           ID         int NOT NULL DEFAULT NEXT VALUE FOR dbo.GetNextNumber PRIMARY KEY,
           ChangeDate datetime NULL,
           ObjectName sysname,
           LoginName  sysname);

    P.S.

    Please, don't say "Have you noticed that the Google Charts API has been deprecated?"

    They announced the deprecation of this tool about two years ago. We can still enjoy this awesome API: Google will not make any changes to it during the next year and will announce if they intend to discontinue it or change it. As soon as that sad day arrives, we will adjust ourselves to their newer API, https://google-developers.appspot.com/chart/interactive/docs/gallery, which is based on JavaScript classes.

    Yours,

    Maria

  • Choosing SEQUENCE instead of IDENTITY? Watch your step.

     

    On a nice sunny day, getting this error message can be really frustrating. In fact, it is frustrating on any day, especially if you are doing something really simple.

    [image: the error message]

    Most of you at this moment are yelling at your monitor "DDL Trigger!!!!". Yes. I have a simple DDL trigger for auditing purposes.

    If you use the script below for the auditing table, any DDL statement will get a pretty red error notification and its results will be discarded.

    CREATE DATABASE TestDB;
    USE TestDB;
    CREATE SEQUENCE GetNextNumber AS int START WITH 1 INCREMENT BY 1;

    CREATE TABLE dbo.TestAuditTable(
           ID         int NOT NULL DEFAULT NEXT VALUE FOR GetNextNumber PRIMARY KEY,
           ChangeDate datetime NULL,
           ObjectName sysname,
           LoginName  sysname);

     

    You see, the table is pretty basic; I even use a SEQUENCE instead of IDENTITY, which is, in fact, the main problem.

     

     

    CREATE TRIGGER TRG_DDLServer
    ON ALL SERVER
    FOR DDL_EVENTS
    AS
    DECLARE @error varchar(256);
    DECLARE @LoginName nvarchar(max);
    DECLARE @ObjectName nvarchar(max);

    SELECT @ObjectName = EVENTDATA().value('(/EVENT_INSTANCE/ObjectName)[1]','nvarchar(max)'),
           @LoginName  = EVENTDATA().value('(/EVENT_INSTANCE/LoginName)[1]','nvarchar(max)');

    BEGIN TRY
           INSERT INTO TestDB.dbo.TestAuditTable (ChangeDate,ObjectName,LoginName)
           VALUES(getdate(),@ObjectName,@LoginName);
    END TRY
    BEGIN CATCH
           SET @error = 'Failed to execute DDL trigger: ' + ERROR_MESSAGE()
           ROLLBACK;
           RAISERROR (@error,18,1) WITH LOG
    END CATCH
    GO

     

    In addition to the unclear error message, a nice memory dump is logged in the Error Log each time some DDL comes through.

     

    [image: memory dump entries in the SQL Server Error Log]

     

    Searching for the problem, I spent a few hours of my life granting various types of permissions on the sequence object to the user I was working with and to the guest user, adding WITH EXECUTE AS to the trigger, and many other things. Regular inserts into the audit table worked. I even created an INSERT trigger and an INSTEAD OF INSERT trigger, and all of this worked perfectly, apart from the insert issued from the DDL trigger.

     

    After a long fight, my stubborn friend Yoni Nakache discovered that if you change the execution context by wrapping the insert statement inside the DDL trigger in EXEC or sp_executesql, the DDL no longer produces the error.

     

    BEGIN TRY
           EXEC ('INSERT INTO TestDB.dbo.TestAuditTable (ChangeDate,ObjectName,LoginName)
                  VALUES(getdate(),'''+@ObjectName+''','''+@LoginName+''');')
    END TRY

     

    More brainstorming around the subject has revealed the truth.

    Server-level DDL triggers do not have a default schema. You must add the schema name to the sequence reference inside the table's DEFAULT definition. I admit I frequently forget to add the schema name when I am using the dbo schema.

     

    CREATE TABLE dbo.TestAuditTable(
          ID         int NOT NULL DEFAULT NEXT VALUE FOR dbo.GetNextNumber PRIMARY KEY,
          ChangeDate datetime NULL,
          ObjectName sysname,
          LoginName  sysname
    );

     

    The fact that SQL Server does not produce a clear error message looks like a bug. Apparently the QA plan for the SEQUENCE feature did not include DDL triggers, neither in SQL Server 2012 nor in SQL Server 2014.

     

    May all your errors turn into experiences that will cause you to evolve.

     

    Yours,

    Maria

  • Memory is a new disk

    In the database world, disk-based data stores are slowly being replaced by memory-based data stores. Memory prices are becoming more affordable, and operational databases can usually fit entirely into memory.

    According to Gartner (the research company that provides independent technology reports), by the end of 2015 all enterprise DBMSs will use memory optimizations, and most of this transformation will happen this year.

    I am playing with Hekaton tables these days and thinking about which of my customers it might be relevant for. Most of them prefer the new stuff and are quite eager to put new terms on their CVs. They like to say "Why SQL Server? Everyone is using Redis as an in-memory database, it's free and works blazingly fast. And the other department is using a Couchbase cluster. We don't want to stay behind…" In such situations I need to step outside of the wardrobe where I'm hiding and peek around. The DBMS market keeps growing and many great new technologies are being introduced. I believe that by learning other technologies we better understand the advantages and disadvantages of our favorite ones. It is good to know what our competitors are doing. I also want to be capable of identifying those situations where SQL Server is the better solution and to have enough expertise to explain why.

    Here is the Gartner Magic Quadrant from October 2013 for operational (OLTP) DBMS providers, some already have in-memory optimizations:

    [image: Gartner Magic Quadrant for Operational DBMS, October 2013]

    The big vendors, Microsoft, Oracle, SAP and IBM, are still Leaders in the field; all others are way behind. I believe that the true Leaders are always aware of all the trends in the database world; they adjust to them and adopt the new ideas, which sometimes means buying emerging breakthrough solutions.

    Oracle has finally understood that having several database products is not easy for its customers. They have announced an In-Memory option for Oracle 12c; it should be available this July. As opposed to TimesTen, which is a separate caching product, the In-Memory option will be fully integrated into the 12c version. It will use an in-memory column-store format, which is usually used for data analysis, but they say this feature will also speed up OLTP workloads. They will keep the data simultaneously in two formats, the old row-based one and the in-memory column-based one; both formats will be transactionally consistent.

    SAP has had its High-performance Analytic Appliance (HANA) DBMS since 2011. It supports two formats, storing both row-based tabular data and column-based tables in memory. Naturally, the tabular orientation is recommended for OLTP environments and the column store for OLAP-like queries. SAP HANA can be scaled out by adding nodes that share the same storage; in that case the master node stores the row-based data and statistics about the rest of the data.

    IBM DB2 10.5 has BLU acceleration. This is a memory-resident index that helps find data stored in columnar form on disk. There is no scale-out solution yet, but they claim one will come soon. IBM BLU is not really for OLTP environments, as opposed to SQL Server: for writes IBM recommends "to commit 10,000 rows at a time". Transactions use locks to make changes to the data and latches to access shared structures.

    Microsoft SQL Server 2014 has a new query engine, In-Memory OLTP ("Hekaton"), where the row-based data resides completely in memory while a copy of the data is kept on disk for durability purposes. Indexes on memory-resident tables live only in memory. Queries can span all three existing query engines: the traditional relational query engine, where the data sits on disk; the column-store query engine, where data uses both memory and disk; and this third new query engine that manages memory-resident data. Hekaton tables do not use locks and latches for concurrency and locking management; they use a combination of timestamps and row versioning.
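
    For illustration only, here is a minimal sketch of what a memory-optimized (Hekaton) table definition looks like in SQL Server 2014; the filegroup, path and table names are hypothetical, and the database must get a MEMORY_OPTIMIZED_DATA filegroup first:

    -- Hypothetical names; a memory-optimized filegroup is a prerequisite
    ALTER DATABASE TestDB ADD FILEGROUP TestDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;
    ALTER DATABASE TestDB ADD FILE (NAME = 'TestDB_mod', FILENAME = 'C:\Data\TestDB_mod')
        TO FILEGROUP TestDB_mod;
    GO

    CREATE TABLE dbo.SessionState (
        SessionId  int       NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        UserId     int       NOT NULL,
        LastUpdate datetime2 NOT NULL
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA); -- rows live in memory, a copy is persisted for durability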

    Here are some of the proven alternative technologies on the DBMS market that are worth noticing:

    Aerospike: a key-value, row-based NoSQL DBMS with in-memory indexes and data on SSD. It has many customers in gaming and advertising-driven sites. Key-value stores are very simple and powerful structures if you always use key-matching searches. They are extremely fast and can scale easily by adding more servers to the cluster. The main downside of the key-value structure is the limited ability to search inside the value. Aerospike, unlike many of its competitors, allows the creation of secondary indexes on the value and stores them in memory. Redis, for instance, is a quite popular and really fast in-memory key-value store, but it is not scalable beyond one node and does not support secondary indexes on the value; probably due to those two limitations it's not yet in the Gartner quadrant. Aerospike uses latches and short-term locks to ensure isolation between transactions.

    VoltDB: an all-in-memory NewSQL relational database, designed to run as a cluster of servers with tables partitioned across the cluster. Transactions in VoltDB run in a serial timestamp order, do not use locks or latches, and SQL is translated into Java code. They have many customers in telecom, utilities and financial services.

    MemSQL: a distributed, in-memory, row-based NewSQL relational DBMS. It stores all data in memory in a row format and maintains a write-ahead log for durability purposes. SQL is compiled into C++ code. Data distribution is implemented using an aggregator node, which splits the query and sends it to all leaf nodes for processing; the aggregator node then combines the results before sending them to the client. Recently they added a new column-store table type, which must reside on SSD. This is somewhat in contradiction to the current trend, where everyone who adds column-based technology makes it memory-resident. This IMDBMS is used by customers such as Comcast and Zynga.

    MongoDB and Couchbase are the most popular document-based data stores. Their greatest advantage is a well-known open schema in which working with JSON documents becomes very easy. Neither of them is really an in-memory product; they work with data on disk. MongoDB, however, has a mode of work where it uses memory-mapped files to access the data on disk. This is in fact an operating system feature: the files appear as regular files, but they can sit entirely in memory. Couchbase is "almost" an in-memory database: it caches as much as possible in RAM, but the data is persisted to disk.

    There are many developers eager to play with the new technologies. However, the biggest challenge for a new DBMS is to convince organizations to put aside well-known products and start using the new solution. I don't think this will be an easy task, despite the fact that many of them are great products. They can be extremely fast, but it takes years to understand all the users' requirements and provide the complete, flexible solution that will suit more than just a few cases.

    Enterprise solutions are like LEGO. You can build anything you want out of it. However, there are situations when all your kid wants is just a toy car. He doesn't care whether it can be transformed into a toy plane or a castle, and doesn't intend to invest the time needed to build it.

    Yours,

    Maria

  • The Distributor. Think and Rethink every thing.

     

    The key player in the transactional replication topology is the Distributor. Misconfiguration of the Distributor can lead to increased load on the production server, can interfere with the regular application activities and even cause production databases to become inaccessible. In addition, its configuration greatly impacts the replicated data latency. Changing the Distributor’s configuration after setting up the replication can be problematic and can impact application activity.

    The main aspects to be considered when planning a replication design are:

    • Number of publications on each publisher in the future replication topology
    • EPS in the replicated tables
    • Number of planned publishers
    • Distance between Publisher, Distributor and Subscribers
    • Number of subscribers for each publication
    • Size of the published Databases

    Most of the answers to the above questions will lead to the decision of whether you want a dedicated Distributor server or to configure the Publisher or the Subscriber to perform the Distributor role as well. Both approaches have their pros and cons.
    Keep in mind that a dedicated Distributor server will require one more SQL Server license and, most likely, additional monitoring application licenses. To lower the costs, the Distributor can be configured on top of the Standard edition of SQL Server which, luckily for us, supports two-node failover clustering to make sure your Distributor is always up.

    Number of publications
    For each publication there is a Log Reader Agent that monitors and detects data changes on the tables marked for replication. The Log Reader Agent is actually an independent executable, running on the server that performs the Distributor role. The larger the number of publications, the more such processes you will find in the server's memory space.
    If you plan to configure your Publisher to act as a Distributor, take into consideration that those processes will eat up resources on your production server.

    [image: replication agent executables on the Distributor]

    EPS in the replicated tables
    All relevant transactions are read off the database transaction log file by the Log Reader and moved to the distribution database on the Distributor. Each subscriber's distribution service reads those transactions and applies them to the subscription database. Every time a subscriber pulls data from the distribution database, the IO subsystem and the buffer pool are serving those requests. The bigger the number of subscribers, the higher this load will be. This is another argument for having a dedicated Distributor with its own buffer pool and dedicated storage.
    Moreover, it's a good idea to isolate the distribution database files from other production databases. We have seen a noticeable impact on a production server whose database file LUN was located on the same NetApp array as the Distributor database files' LUN.

    Number of publishers
    Each Publisher can be served by only one Distributor. If the number of publications on the Publisher is low and the number of data changes is not significant, you can avoid extra costs and make each Publisher perform the Distributor role as well. However, when you have several Distributors in your environment, it gets less convenient to monitor the distribution databases, since the monitoring code needs to be maintained in several locations.
    For instance, we have a procedure that produces a list of subscriptions with a problematic status: uninitialized, expired, or in a retrying or failed state (a sketch of such a check follows below).
    If one of the Publishers has enough spare resources, it can play the Distributor role for the rest of the Publishers. Just make sure you configure the permissions correctly so that only allowed people can access the distribution database.
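
    Such a check might look roughly like the hedged sketch below, run inside each distribution database (MSdistribution_agents and MSdistribution_history are the documented distribution-database tables; the one-hour window is arbitrary):

    -- Distribution agents whose last runs reported Retry (5) or Failure (6)
    SELECT a.publication, a.subscriber_db, h.runstatus, h.[time], h.comments
    FROM dbo.MSdistribution_agents a
    JOIN dbo.MSdistribution_history h ON h.agent_id = a.id
    WHERE h.runstatus IN (5, 6)
      AND h.[time] > DATEADD(hour, -1, GETDATE());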

    Distance between the Publisher, the Distributor and the Subscribers, and the number of Subscribers
    The Distributor server must have a reliable network connection to the Publisher and should be located as close to it as possible. This way the Log Reader agents will read the transaction logs fast enough, so the logs will not keep growing and endanger the production databases. Even in Simple recovery mode, transaction log VLFs are not marked as free until all the transactions have been processed by the Log Reader.
    The subscription service, named the "Distribution Agent", is also an executable. This agent can be configured in one of two ways:

    • pull subscriptions (agent is running on the Subscriber server)
    • push subscriptions (agent is running on the Distributor server)

    If Subscribers are located far from the Distributor, it's better to configure pull subscriptions in order to minimize the network utilization when moving the initial copies of the tables (snapshots).
    If Subscribers are close to the Distributor and it's important to lower the load on the Subscriber server, push subscriptions are also an option. Just remember that there is already a bunch of exe files running on the Distributor; by configuring push subscriptions you are adding another executable per publication for every subscriber.
    It is common to create publications at the database level. However, if there are several tables with high EPS, separate publications can be created for each table. In such cases the distribution agents for those publications run in parallel and improve the data replication speed. However, this will increase the number of executables on the Distributor.
    If the Subscriber server has more spare resources and is located near the production server, you can configure the Distributor on the Subscriber, which can be another way to cut the licensing costs.

    Size of the published databases
    Of course, the size of a replicated database does not impact the latency of the data; EPS does. The higher the number of data changes, the longer it will take to replicate the data. Small but busy databases will most probably have a higher replication delay than big "lazy" datasets with a low number of changes. However, when you initialize your publication you first need to move the whole tables marked for replication to the Subscriber, and only then start replicating the transactions.
    At this point the database and table sizes make a whole world of difference. If the tables are small you can initialize the replication using the Snapshot Agent. Every replicated table will be BCPed to a csv file, copied to the snapshot folder on the Distributor and then pulled by the Subscriber or pushed by the Distributor to the Subscriber, depending on the subscription type you have chosen. The larger the tables, and the more of them there are, the longer the process will take, especially when the Subscriber is located far away. In such a case you can initialize your subscription using a database backup/restore, which can take less time to copy to the Subscriber.
    Still, in the case of "big data" you may have to wait hours and even days until your hundreds of gigabytes have been copied to the Subscriber. Meanwhile, your distribution database will keep all the transactions since the replication initialization point. Watch its size closely in order to avoid the situation where it takes up all the disk space on the drive.

    Other random thoughts:

    Loopback detection for BiDirectional Transactional Replication 
    If you are using bidirectional transactional replication, you are most probably familiar with the loopback detection mechanism. The Distributor uses it to make sure that a transaction that originated on server A and was replicated to server B will not be replicated back to server A. The loopback detection logic is part of the sys.sp_MSget_repl_commands procedure and is evaluated when the subscription agent pulls the transactions from the Distributor. Unfortunately, this means that all transactions that arrived at the Distributor from server A and were applied to server B will return to the Distributor anyway and will stay in the distribution database until the Subscriber discards them.
    By the way, the @loopback_detection property of the sp_addsubscription system procedure is true by default; you don't need to turn it on.
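
    For reference, a hedged sketch of where that parameter sits when the subscription is created (the publication, server and database names are placeholders):

    -- @loopback_detection shown explicitly only for visibility; per the note above it defaults to 'true'
    EXEC sp_addsubscription
         @publication        = N'MyPublication',
         @subscriber         = N'ServerB',
         @destination_db     = N'MyDatabase',
         @subscription_type  = N'push',
         @loopback_detection = N'true';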

    Tempdb on the Distributor
    The workload on the Distributor server is not too different from other production servers. Transaction data is stored in two tables, dbo.MSrepl_transactions and dbo.MSrepl_commands. Replication agent activities are logged into the dbo.MSlogreader_history and dbo.MSdistribution_history tables. The process of pulling transactions via the sys.sp_MSget_repl_commands procedure uses several temporary variables, and it gets executed by every subscriber of every publication. The more publications × subscribers your Distributor is serving, the larger the number of objects created inside tempdb. In addition, for each publisher there is a Replication monitoring refresher job. It executes the sys.sp_replmonitorrefreshdata procedure in a continuous loop, which uses several temporary tables. To minimize the contention on the GAM/SGAM pages you should have multiple tempdb data files, as you most probably do on the rest of your production servers.

    Distribution database
    If you have decided to create a dedicated Distributor server, there is another important decision to make: whether you want one distribution database for all Publishers or separate distribution databases. A Publisher can be linked to only one distribution database, but a distribution database can serve multiple Publishers.
    Environments with a low level of data changes can share one distribution database. For busy or potentially busy databases it's better to create a separate distribution database per publisher (not per publication!); a sketch follows below. The fewer rows each transaction on the published database inserts, updates or deletes, the closer the EPS on the distribution database will be to that of the published database. For instance, a single UPDATE that changes 10 rows in the published table will be translated into 10 separate UPDATE statements inserted into the distribution database.
    When several publishers share the same distribution database, its EPS can get quite high. Separating the databases makes it possible to place the data files on different physical disks in order to avoid the impact of one server's load on another.
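
    A hedged sketch of how an additional, per-publisher distribution database might be created on the Distributor and a Publisher linked to it (all names and paths below are placeholders):

    -- Create a dedicated distribution database on its own drives, then link a specific Publisher to it
    EXEC sp_adddistributiondb
         @database      = N'distribution_PublisherA',
         @data_folder   = N'E:\ReplData',
         @log_folder    = N'F:\ReplLog',
         @security_mode = 1;   -- Windows authentication

    EXEC sp_adddistpublisher
         @publisher         = N'PublisherA',
         @distribution_db   = N'distribution_PublisherA',
         @working_directory = N'\\DistributorServerName\ReplSnapshots';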

    It's difficult to move a Publisher out of a shared distribution database to its own private distribution database after the replication is already up and running. In order to change the distribution database, all publications of the specific publisher must be dropped and recreated, and every database must be copied to all Subscribers again.

    Distribution databases hide inside the system databases folder. When you are creating a new distribution database you can name it whatever you like, so don't be surprised when you see something like this in the system databases folder.

    [image: custom-named distribution databases in the system databases folder]

    To get a list of the distribution databases, use the is_distributor property of sys.databases:

    select name
    from sys.databases
    where is_distributor = 1

    Distributor password
    In order to configure a Publisher to use the Distributor, you first need to enable the Publisher on the Distributor

    [images: enabling the Publisher in the Distributor Properties dialog]

    and after that you can configure the Distributor on the Publisher:

    exec sp_adddistributor @distributor = N'DistributorServerName', @password = N'password'

    If you don't know the Distributor password, you have two options. The first is to find someone who knows it. The second is to change it (in the same dialog shown above).

    The problem here is that this password is used by the Publishers to carry out configuration changes on the replicated objects. For that purpose a linked server called "repl_distributor" is created on the Publisher. It is not configured for data access, and its security settings are set to use the abovementioned password. If you have inherited a Distributor server from someone who forgot to write this password down, you can change the linked server security settings on all Publishers to use a new password. We usually use a regular SQL Server login for this purpose, to avoid trouble if anyone decides to change the Distributor password by mistake.

    [images: repl_distributor linked server security settings]

    The Snapshot folder
    There is a snapshot folder that you are requested to define when setting up the Distributor. As I mentioned earlier, even if you are not setting up snapshot replication, one of the ways to initialize transactional replication is by using a table snapshot. In addition, every time you add or change articles, they will be transferred to the Subscriber using exactly the same snapshot mechanism. First of all, the snapshot folder should be a network path (and not a local path, as the wizard suggests) so pull subscriptions can access it. You must grant write permissions to the Snapshot Agent's user and read permissions to the Distribution Agent's user.
    If there is a need to separate the table snapshots of different publications, an alternative snapshot folder can be set up for each publication. To improve the snapshot transfer speed between the Distributor and the Subscriber you may want to compress the table snapshots, which can be done on the alternate folder only (a sketch of both settings follows below). Note that the compression process takes time, increases the snapshot creation time and is not suitable for unreliable networks, because the copy cannot be resumed in case of a network failure.
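
    A hedged sketch of how both of these might be set per publication (the publication name and UNC path are placeholders; depending on the publication state a snapshot invalidation may be required, hence the extra parameter):

    -- Point one publication at its own snapshot folder and compress the snapshot files there
    EXEC sp_changepublication
         @publication = N'MyPublication',
         @property    = N'alt_snapshot_folder',
         @value       = N'\\FileServer\ReplSnapshots\MyPublication',
         @force_invalidate_snapshot = 1;

    EXEC sp_changepublication
         @publication = N'MyPublication',
         @property    = N'compress_snapshot',
         @value       = N'true',
         @force_invalidate_snapshot = 1;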

    All replication agents can be customized in many ways. We can run custom scripts before or after the snapshot creation, output all logging information to text files, or change the number of threads for the BCP operations to improve performance, but all of this is a topic for another blog post.

    Yours,
    Maria

    “Do what you do so well that they will want to see it again and bring their friends.” / Walt Disney

  • Grouping events into intervals using hedgehog rolling style

    "I have a challenging SQL problem for you," yelled one of my coworkers one morning, entering my office and knocking over, on his way, a few pictures, a SQL Server system views map and a visitor's chair. "Much more interesting than all your alerts and twitter conversations!" ("Well," I thought to myself, "it might beat the alerts, but not the twitter conversations.")

    The problem he presented is as follows:

    There is a table containing the history of devices' (phones or tablets) connections to hotspots, as shown below:

    [image: sample rows from the connection events table]

    The idea was to bind the above-mentioned events together into groups. Each group, or we can call it an interval, should represent the time that a device was connected to a specific access point. Each time the device moves to a new access point, a new interval should start. If there were no events for a specific device for longer than 30 minutes, the next event should start a new interval, even if it was reported on the same access point as before.

    Queries will eventually slice the data to see how many users were on the same access point at a specific time, the average time devices stayed connected to a specific access point, and other similar business questions.

    The matter was indeed an attention-grabbing one, since the desired result cannot be achieved with regular GROUP BY logic.

    First of all, here is some test data:

    SELECT *
    INTO test_events
    FROM ( VALUES ( 1,  1, 10, '20130101 10:01'),
                  ( 3,  2, 10, '20130101 10:03'),
                  ( 4,  3, 10, '20130101 10:04'),
                  ( 5,  1, 10, '20130101 10:05'),
                  ( 6,  1, 11, '20130101 10:09'),
                  ( 7,  2, 12, '20130101 10:06'),
                  ( 8,  3, 10, '20130101 10:10'),
                  ( 9,  1, 11, '20130101 10:40'),
                  ( 10, 3, 10, '20130101 10:47'),
                  ( 11, 3, 10, '20130101 10:52'),
                  ( 12, 3, 10, '20130101 11:05'),
                  ( 13, 2, 10, '20130101 10:53')
         ) vals (id, hardware_id, access_point, datein);

    [image: the test data]

    The most trivial and easy algorithm would be to travel through the table, ordered by device and time. For each new event in the result set, the access point and hardware_id should be analyzed. If the access point is different from the device's previous event, or more than 30 minutes have passed since the previous event, we consider it an interval start and use its own id as the interval id. Keeping that interval id, we move on to the next event. If the access point is the same, we use the previous event's interval id.

    This logic is easily implemented using a cursor. I haven't used cursors for the last 10 years, but I wondered what the execution times would be.

    I added two columns to the table: IntervalId, to group the events into intervals, and IntervalDateEnd, to record when each interval ends, in order to satisfy the above-mentioned queries.

    ALTER TABLE test_events ADD IntervalId int;
    ALTER TABLE test_events ADD IntervalDateEnd datetime;

    The first cursor used the aforesaid logic as is and updated the table on every iteration. On a table with 250,000 rows I stopped it after 4 hours, when less than half of the rows had been updated.

    My next cursor implementation was a bit different. First of all, I created a temporary table. Instead of updating the table on every iteration, I added a new row to the temporary table, capturing the event id and IntervalId. When the cursor finished, I updated the main table by joining it to the temporary results table. This solution took 1 minute (~35 seconds for the cursor and ~30 seconds for the table update) on the table with 250,000 rows and 10 minutes on the table with 500,000 rows. The test was performed on a fairly slow VM, so the execution times reflect that.

    Cursor execution is quite heavy, so if you are using cursors, please make sure you are using them wisely. The two cursors differ only in how they write the interval mapping back to the table.

    -----------Better cursor----------
    DECLARE @id int, @accessPoint int, @hardware_id varchar(50), @timestamp datetime
    DECLARE @parentId int, @parentAccessPoint int, @ParentHardwareID varchar(50), @ParentTimeStamp datetime

    CREATE TABLE #Mapping ( Id int, IntervalId int, IntervalDateEnd datetime);

    DECLARE EventLoop CURSOR READ_ONLY STATIC FOR
            SELECT id, access_point, hardware_id, datein
            FROM test_events
            ORDER BY hardware_id, datein;

    OPEN EventLoop

    FETCH NEXT FROM EventLoop
    INTO @id, @accessPoint, @hardware_id, @timestamp

    WHILE @@FETCH_STATUS = 0
    BEGIN
           IF ( @hardware_id = @ParentHardwareID
                AND @accessPoint = @parentAccessPoint
                AND DATEDIFF(mi,@ParentTimeStamp,@timestamp) <= 30 ) BEGIN
                  -- Same device, same access point, within 30 minutes: continue the parent interval
                  INSERT INTO #Mapping(Id,IntervalId) SELECT @id, @parentId
           END
           ELSE BEGIN
                  -- New interval: the event becomes its own interval starter
                  INSERT INTO #Mapping(Id,IntervalId) SELECT @id, @id
                  -- Close the previous interval
                  UPDATE #Mapping
                    SET IntervalDateEnd = @ParentTimeStamp
                    WHERE id = @parentId

                  SELECT @parentId = @id
           END

           SELECT @parentAccessPoint = @accessPoint,
                  @ParentHardwareID  = @hardware_id,
                  @ParentTimeStamp   = @timestamp;

           FETCH NEXT FROM EventLoop
           INTO @id, @accessPoint, @hardware_id, @timestamp
    END

    -- Close the last interval
    UPDATE #Mapping
    SET IntervalDateEnd = @ParentTimeStamp
    WHERE id = @parentId

    CLOSE EventLoop;
    DEALLOCATE EventLoop;

    -- Apply the mapping to the main table in a single update
    UPDATE e
    SET e.IntervalId = m.IntervalId,
        e.IntervalDateEnd = m.IntervalDateEnd
    FROM test_events e
      JOIN #Mapping m
        ON e.id = m.id

    -----------BAD cursor----------
    DECLARE @id int, @accessPoint int, @hardware_id varchar(50), @timestamp datetime
    DECLARE @parentId int, @parentAccessPoint int, @ParentHardwareID varchar(50), @ParentTimeStamp datetime

    DECLARE EventLoop CURSOR READ_ONLY STATIC FOR
           SELECT id, access_point, hardware_id, datein
           FROM test_events
           ORDER BY hardware_id, datein;

    OPEN EventLoop

    FETCH NEXT FROM EventLoop
    INTO @id, @accessPoint, @hardware_id, @timestamp

    WHILE @@FETCH_STATUS = 0
    BEGIN
           IF ( @hardware_id = @ParentHardwareID
                AND @accessPoint = @parentAccessPoint
                AND DATEDIFF(mi,@ParentTimeStamp,@timestamp) <= 30 ) BEGIN
                  -- Continue the parent interval: the main table is updated on every iteration
                  UPDATE test_events
                    SET IntervalId = @parentId
                  WHERE id = @id
           END
           ELSE BEGIN
                  -- New interval starter
                  UPDATE test_events
                    SET IntervalId = @id
                  WHERE id = @id
                  -- Close the previous interval
                  UPDATE test_events
                    SET IntervalDateEnd = @ParentTimeStamp
                    WHERE id = @parentId

                  SELECT @parentId = @id
           END

           SELECT @parentAccessPoint = @accessPoint,
                  @ParentHardwareID  = @hardware_id,
                  @ParentTimeStamp   = @timestamp;

           FETCH NEXT FROM EventLoop
           INTO @id, @accessPoint, @hardware_id, @timestamp
    END

    -- Close the last interval
    UPDATE test_events
    SET IntervalDateEnd = @ParentTimeStamp
    WHERE id = @parentId

    CLOSE EventLoop;
    DEALLOCATE EventLoop;

    I habitually try to avoid cursors, because in most cases they require more memory and produce many more locks. Therefore I needed to find a true set-based solution which, I believed, would additionally have much better execution times.

    That's where I came up with a four-step logic.

    1st step. Find all interval starters.

    By thinking about the data, you can see that each event can have only one of two possible statuses: start the interval or continue the interval. We can determine this by comparing each event to the previous event reported for the same device. I can achieve this by using the LAG() window function to bring in the previous event. Here you can find more information about the analytic functions.

    Here is a query:

    UPDATE r1
    SET r1.IntervalId = r2.IntervalId
    FROM test_events r1
    JOIN (SELECT id, IntervalId =
                 CASE WHEN
                      lag(access_point,1) OVER (PARTITION BY hardware_id ORDER BY datein) = access_point
                      and
                      DATEDIFF (mi,
                                lag(datein,1) OVER (PARTITION BY hardware_id ORDER BY datein),
                                datein
                                ) < 30
                      THEN NULL
                      ELSE id
                 END
          FROM test_events
          WHERE IntervalId IS NULL) r2
      on r1.id = r2.id
    WHERE r1.IntervalId IS NULL

    Unfortunately, windowed functions can be used only in the SELECT or ORDER BY clause; otherwise I could have used it directly in the SET clause of the UPDATE and the query would have been even lighter.

     

    [image: interval starters marked in the table]

     

    This step took 7 seconds on 250,000 rows and 10 seconds on 500,000 rows.

     

    Now I need to link the rest of the events to their intervals. In order to do that, I can use this simple query which, for each event that continues an interval, finds the closest preceding event that is an interval starter, using the sequential id column that is the PK of the table.

     

    UPDATE e1
    SET IntervalId = (SELECT top 1 IntervalId
                      FROM test_events e2
                      WHERE e2.hardware_id = e1.hardware_id and e2.access_point = e1.access_point and e2.id < e1.id
                            and IntervalId is not null
                      ORDER BY e2.datein desc)
    FROM test_events e1
    WHERE IntervalId IS NULL

    Although at first look this query seems better than the cursor, it is, in fact, not entirely a set-based query. For each event in the table, the subquery searches for the interval starter by going back to the table. This step took 2 minutes on the table with 250,000 rows and 6 minutes on the table with 500,000 rows.

    Which meant I needed another solution.

    If you take a look at the example below, you can clearly see that the intervals belong to one of two categories:

      • Closed intervals: there are further appearances of the same hardware_id on a different access_point (marked in blue)
      • Open intervals: there are no further appearances of the same hardware_id on a different access_point (marked in green)

    [image: closed and open intervals highlighted in the test data]

    2nd step. Link the events within the closed intervals.

    For each interval starter, find the starting time of the next interval. We can easily find the next value by using the LEAD() window function.

    Together with the current timestamp, they define the interval boundaries: from CurrentTime up to, but not including, NextIntervalStartTime.

    This step took 15 seconds on the 250,000-row table and 25 seconds on the 500,000-row table.

    UPDATE e1
    SET e1.IntervalId = e2.IntervalId
    FROM test_events e1
      JOIN (SELECT *, NextEventStart = LEAD(datein) OVER (PARTITION BY hardware_id ORDER BY datein)
            FROM test_events
            WHERE IntervalId IS NOT NULL ) e2
        ON e1.hardware_id = e2.hardware_id and e1.access_point = e2.access_point
           and e1.datein >= e2.datein and e1.datein < e2.NextEventStart
    WHERE e1.IntervalId IS NULL

    [image: events linked within the closed intervals]

    3rd step. Link the events within the open intervals.

     

    For each interval starter we need to find the last timestamp for the same device on the same access point. The analytic function LAST_VALUE() helps us achieve that. Take into consideration that the default range of this function includes only preceding events, so you need to explicitly specify the RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING option.

    Together with the current timestamp, they define the interval boundaries, which lie between CurrentTime and LastTimeForThisDevice.

    UPDATE e1
    SET e1.IntervalId = e2.IntervalId
    FROM test_events e1
      JOIN (SELECT *, LastTimeForThisDevice = LAST_VALUE(datein) OVER (PARTITION BY hardware_id, access_point ORDER BY datein RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)
            FROM test_events ) e2
        on e1.hardware_id = e2.hardware_id and e1.access_point = e2.access_point
           and e1.datein >= e2.datein and e1.datein <= e2.LastTimeForThisDevice
    WHERE e1.IntervalId IS NULL

     

    This query took 25 seconds on 250,000 rows and 45 seconds on 500,000 rows.

     

    [image: events linked within the open intervals]

    4th step. Grab all the Interval End Dates.

    If we want to find the interval end dates, we need one more query, which took 30 seconds on 500,000 rows.

    UPDATE e1
    SET e1.IntervalDateEnd = e2.IntervalDateEnd
    FROM test_events e1
      JOIN (SELECT id, IntervalDateEnd = LAST_VALUE(datein) OVER (PARTITION BY hardware_id, IntervalId ORDER BY datein RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)
            FROM test_events) e2
        ON e1.id = e2.id

    The total set-based solution time is about 2 minutes. Now, the next step would be to identify good indexes that will support the queries (a candidate is sketched below), and crawler updates that will take care of fresh data without slowing down the inserts or causing page fragmentation.
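
    As a starting point (to be verified against the actual execution plans), an index along these lines would support the PARTITION BY hardware_id ORDER BY datein windows and the WHERE IntervalId IS NULL filters used above:

    -- Covers the window-function sort and keeps access_point / IntervalId available without key lookups
    CREATE NONCLUSTERED INDEX IX_test_events_hardware_datein
        ON test_events (hardware_id, datein)
        INCLUDE (access_point, IntervalId);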

    You may think that, since the (simpler to write) second cursor took only(!) 10 minutes to finish, there would not be much reason for all the hassle. However, a set-based solution is far more scalable, which is extremely important these days, when every small table tends to turn into BIG data.

    Forcing a set-based technology to use iterative functionality is not graceful at all. Think about hedgehogs, who would never use their tiny paws to descend from a climb; they typically roll into a ball and roll down.

    May all your TSQL solutions roll.

    [image: baby hedgehog]

  • PASS summit 2013. We do not remember days. We remember moments.

     

    "Business or pleasure?" barked the security officer in the Charlotte International Airport.
    "I’m not sure, sir," I whimpered, immediately losing all courage. "I'm here for the database technologies summit called PASS”.
    "Sounds boring. Definitely a business trip."

    Boring?! He couldn’t have been more wrong.

    If he only knew about the countless meetings throughout the year where I waved my hands at my great boss and explained again and again how fantastic this summit is and how much I learned last year. One by one, the drops of water began eating away at the stone. He finally approved of my trip just to stop me from torturing him.

    Time moves as slow as a turtle when you are waiting for something.
    Time runs as fast as a cheetah when you are there.
    PASS has come...and passed.

    It's been an amazing week. Enormous sqlenergy has filled the city, filled the convention center and the surrounding pubs and restaurants. There were awesome speakers, great content, and the chance to meet the most inspiring database professionals from all over the world.

    Some sessions were unforgettable. Imagine a fully packed room with more than 500 people in awed silence, catching each and every one of Paul Randal's words. His tremendous energy and deep knowledge were truly thrilling.

    No words can describe Rob Farley's unique presentation style, captivating and engaging the audience. When the precious session minutes were over, I could tell that the many random puzzle pieces of information that his listeners knew had been suddenly combined into a clear, cohesive picture.

    I was amazed as always by Paul White's great sense of humor and his phenomenal ability to explain complicated concepts in a simple way.

    The keynote by the brilliant Dr. DeWitt from Microsoft in front of the full summit audience of 5000 deeply listening people was genuinely breathtaking.

    The entire conference throughout offered excellent speakers who inspired me to absorb the knowledge and use it when I got home.

    To my great surprise, I found that there are other people in this world who like replication as much as I do. During the Birds of a Feather Luncheon, SQL Server MVP Ted Krueger was writing a script for replicating the food to other tables.

    I learned many things at PASS, and not all of them were about SQL. After three summits, this time I finally got the knack of networking. I actually went up and spoke to people, and believe me, that was not easy for an introvert. But this is what the summit is all about. Sqlpeople. They are the ones who make it such an exciting experience.

    I will be looking forward to the next year. Till then I have my notes and new ideas.

    How long was the summit? Thousands of unforgettable moments.

  • Look back and realize how far you came

     

    [image]

    It appeared out of nowhere while I was having a strong argument with the ton of dust that had taken over my computer room. My former favorite book, "Advanced Transact-SQL for SQL server 2000" brought an upset look to my face, the one I usually get from our NetApp admin. He usually gives me such a look when I ask him for more disk space.

    At the time of SQL 2000 there were far fewer good SQL books available than today, and amazing SQL programming tricks lived on the pages of Itzik's book; and, similar to Rubeus Hagrid's "The Monster Book of Monsters", it attacked anyone who attempted to open it, unleashing wild and exciting brilliant ideas.

    Nowadays there are a lot of SQL books, but not all of them are worth reading. Some of them should be strongly avoided, and if you touch them you should immediately wash your hands and have a drink or three.
    A few months ago I was asked to technically review a new SQL Server book. Reading its misleading claims about how the optimizer should be configured and how it behaves, as a book owner you'd be faced with very challenging alternatives, from building an army of origami planes out of it to using it as a desktop monitor stand.

    With the overwhelming growth of internet resources I read fewer and fewer hardcopy books. Google, like a giant octopus, pulls out answers from thousands of excellent blog posts. With me being such a neglectful owner, Itzik's awesome book had found itself other admirers… Several generations of stubborn mice were learning various techniques during all those years. I hope the gastronomic flavors of this book were no less unique than the elegant solutions it reveals. The traces of their passion for knowledge resulted in two comfortable mouse holes where a few generations of tiny SQL-educated furred professionals were born. They had no idea that "The most powerful database engine in existence" is not that powerful anymore.

    My 2014 oriented brain has tried to pretend it's not interested in this book anymore and it can be thrown away, but I revolted.

    While my kids, seizing the moment, were using my favorite lipstick and creatively painting abstract art where normal kids have a face, I sat on the floor paging through the torn pages as the great features of the past fell to the floor like rustling autumn leaves.

    ..."Sql_variant data type is new to SQL 2000"...

    ..."To solve problematic situations, SQL 2000 introduced two new functions, SCOPE_IDENTITY () and IDENT_CURRENT()"...

    ..."User defined functions (UDFs), introduced with SQL 2000, have an answer for many needs."...

    ..."Whenever you think of BLOB, think of text, ntext and image. Since version 7.0, you can store character datatypes up to 8000 bytes in length, in version 6.x you were limited to 255 bytes."...

    ..."The BIGINT data type is new to SQL server 2000"...

    … "The project of distributed partitioned views was code-named Coyote. For many a night with sunrise, the developer could hear the coyotes' calls, and they sure have come up with something they can be proud of".

    Why, how time passes by so quickly… In 10 years' time we will smile when reading about columnar and in-memory database technologies. I wonder what the top features of SQL 2024 will be?

    [image]

  • Alert visualization recipe: Get out your blender, drop in some sp_send_dbmail, Google Charts API, add your favorite colors and sprinkle with html. Blend till it’s smooth and looks pretty enough to taste.

     

    I really like database monitoring. My email inbox has a constant flow of different types of alerts coming from our production servers, with all kinds of information, sometimes more useful and sometimes less. Usually database alerts look really simple; it's usually a plain text email saying "Prod1 Database data file on Server X is 80% used. You'd better grow it manually before some query triggers the AutoGrowth process".

    Imagine you could have received an email like the one below. In addition to the alert description, it could also include the database file growth chart over the past 6 months. Wouldn't that give you much more information about whether the data growth is natural or extreme? That's truly what data visualization is for.

    Believe it or not, I have sent the graph below from a SQL Server stored procedure without buying any additional data monitoring/visualization tool.

    [image: alert email with a database file growth chart]

     

    Would you like to visualize your database alerts like I do? Then, like me, you'd love Google Charts.

    All you need is to know a little HTML and to have a mail profile configured on your SQL Server instance, regardless of the SQL Server version.

    First of all, I hope you know that the sp_send_dbmail procedure has a great parameter @body_format = ‘HTML’, which allows us to send rich and colorful messages instead of boring black and white ones. All that we need is to dynamically create HTML code.

    This is how, for instance, you can create a table and populate it with some data:

    DECLARE @html varchar(max)

    SET @html = '<html>'
    + '<H3><font id="Text" style="color: Green;">Top Databases: </H3>'
    + '<table border="1" bordercolor="#3300FF" style="background-color:#DDF8CC" width="70%" cellpadding=3 cellspacing=3>'
    + '<tr><font color="Green"><th>Database Name</th><th>Size</th><th>Physical Name</th></tr>' +

    CAST( (SELECT TOP 10
                                td = name,'',
                                td = size * 8/1024 ,'',
                                td = physical_name 
                FROM sys.master_files  
                ORDER BY size DESC
                FOR XML PATH ('tr'),TYPE
    ) AS VARCHAR(MAX))
    +
    '</table>'

    EXEC msdb.dbo.sp_send_dbmail
    @recipients = 'PutYourEmailHere',
    @subject ='Top databases',
    @body = @html,
    @body_format = 'HTML'

    This is the result:

    [image: the resulting HTML table in the email]

     

    If you want to add more visualization effects, you can use Google Charts Tools https://google-developers.appspot.com/chart/interactive/docs/index, a free and rich library of data visualization charts that are also easy to populate and embed.

    There are two versions of Google Charts:

    Image-based charts: https://google-developers.appspot.com/chart/image/docs/gallery/chart_gall

    This is the old version; it's officially deprecated, although it will be up for the next few years or so. I really enjoy using this one because the charts can be viewed within the email body. For mobile devices, you need to change the "Load remote images" property in your email application configuration.

     

    Charts based on JavaScript classes: https://google-developers.appspot.com/chart/interactive/docs/gallery

    This API is newer, with rich and highly interactive charts, and it’s much easier to understand and configure. The only downside is that these charts cannot be viewed within the email body. Outlook, Gmail and many other email clients, as part of their security policy, do not run any JavaScript placed within the email body. However, you can still enjoy this API by sending the report as an email attachment.

    Here is an example that uses the old version of the Google Charts API, sending the same top-databases report as in the previous example, but instead of a simple table this script builds a pie chart right from the T-SQL code:

    DECLARE @html  varchar(8000)

    DECLARE @Series  varchar(800),@Labels  varchar(8000),@Legend  varchar(8000);
       
    SET @Series = '';
    SET @Labels = '';
    SET @Legend = '';

    SELECT TOP 5 @Series = @Series + CAST(size * 8/1024 as varchar) + ',',
                            @Labels = @Labels +CAST(size * 8/1024 as varchar) + 'MB'+'|',
                            @Legend = @Legend + name + '|'
    FROM sys.master_files
    ORDER BY size DESC

    SELECT @Series = SUBSTRING(@Series,1,LEN(@Series)-1),
            @Labels = SUBSTRING(@Labels,1,LEN(@Labels)-1),
            @Legend = SUBSTRING(@Legend,1,LEN(@Legend)-1)

    SET @html =
      '<H3><font color="Green"> '+@@ServerName+' top 5 databases : </H3>'+
       '<br>'+
       '<img src="
    http://chart.apis.google.com/chart?'+
       'chf=bg,s,DDF8CC&'+
       'cht=p&'+
       'chs=400x200&'+
       'chco=3072F3|7777CC|FF9900|FF0000|4A8C26&'+
       'chd=t:'+@Series+'&'+
       'chl='+@Labels+'&'+
       'chma=0,0,0,0&'+
       'chdl='+@Legend+'&'+
       'chdlp=b"'+
       'alt="'+@@ServerName+' top 5 databases" />'
                
    EXEC msdb.dbo.sp_send_dbmail @recipients = 'PutYourEmailHere',
                                @subject = 'Top databases',
                                @body = @html,
                                @body_format = 'HTML'

    This is what you get. Isn’t it great?

    image

    Chart parameters reference:

    chf     chart fill (bg - background; s - solid)
    cht     chart type (p - pie)
    chs     chart size (width x height)
    chco    series colors
    chd     chart data string (1,2,3,2)
    chl     pie chart labels (a|b|c|d)
    chma    chart margins
    chdl    chart legend (a|b|c|d)
    chdlp   chart legend position (b - bottom of chart)

     

    Line graph implementation is also really easy and powerful

    DECLARE @html varchar(max)
    DECLARE @Series varchar(max)
    DECLARE @HourList varchar(max)

    SET @Series = '';
    SET @HourList = '';

    SELECT @HourList = @HourList + SUBSTRING(CONVERT(varchar(13),last_execution_time,121), 12,2)  + '|' ,
                 @Series = @Series + CAST( COUNT(1) as varchar) + ','
    FROM sys.dm_exec_query_stats s 
       CROSS APPLY sys.dm_exec_sql_text(plan_handle) t
    WHERE last_execution_time >= getdate()-1
    GROUP BY CONVERT(varchar(13),last_execution_time,121)
    ORDER BY CONVERT(varchar(13),last_execution_time,121)

    SET @Series = SUBSTRING(@Series,1,LEN(@Series)-1)

    SET @html =
    '<img src="http://chart.apis.google.com/chart?'+
    'chco=CA3D05,87CEEB&'+
    'chd=t:'+@Series+'&'+
    'chds=1,350&'+
    'chdl= Proc executions from cache&'+
    'chf=bg,s,1F1D1D|c,lg,0,363433,1.0,2E2B2A,0.0&'+
    'chg=25.0,25.0,3,2&'+
    'chls=3|3&'+
    'chm=d,CA3D05,0,-1,12,0|d,FFFFFF,0,-1,8,0|d,87CEEB,1,-1,12,0|d,FFFFFF,1,-1,8,0&'+
    'chs=600x450&'+
    'cht=lc&'+
    'chts=FFFFFF,14&'+
    'chtt=Executions from '
    +(SELECT CONVERT(varchar(16),min(last_execution_time),121)
             FROM sys.dm_exec_query_stats
             WHERE last_execution_time >= getdate()-1)
    +' till '
    +(SELECT CONVERT(varchar(16),max(last_execution_time),121)
        FROM sys.dm_exec_query_stats)
    +
    '&'+
    'chxp=1,50.0|4,50.0&'+
    'chxs=0,FFFFFF,12,0|1,FFFFFF,12,0|2,FFFFFF,12,0|3,FFFFFF,12,0|4,FFFFFF,14,0&'+
    'chxt=y,y,x,x,x&'+
    'chxl=0:|1|350|1:|N|2:|'+@HourList+'3:|Hour&'+
    'chma=55,120,0,0" alt="" />'

    EXEC msdb.dbo.sp_send_dbmail
    @recipients = 'PutYourEmailHere',
    @subject ='Daily number of executions',
    @body = @html,
    @body_format = 'HTML'


    image

    Chart parameters reference:

    chco    series colors
    chd     series data
    chds    scale format
    chdl    chart legend
    chf     background fills
    chg     grid lines
    chls    line styles
    chm     shape markers
    chs     chart size
    cht     chart type
    chts    chart title style
    chtt    chart title
    chxp    axis label positions
    chxs    axis label styles
    chxt    visible axes
    chxl    axis labels
    chma    chart margins

    If you don’t mind getting your charts as an email attachment, you can enjoy the JavaScript-based Google Charts, which are even easier to configure and have much more advanced graphics. In the example below, the sp_send_dbmail procedure uses the @query parameter; the query is executed at the time sp_send_dbmail runs, and the HTML result of that execution is attached to the email.

    DECLARE @html varchar(max),@query varchar(max)
    DECLARE @SeriesDBusers  varchar(800);
       
    SET @SeriesDBusers = '';

    SELECT @SeriesDBusers = @SeriesDBusers +  ' ["'+DB_NAME(r.database_id) +'", ' +cast(count(1) as varchar)+'],'
    FROM sys.dm_exec_requests r
    GROUP BY DB_NAME(database_id)
    ORDER BY count(1) desc;

    SET @SeriesDBusers = SUBSTRING(@SeriesDBusers,1,LEN(@SeriesDBusers)-1)

    SET @query = '
    PRINT ''
    <html>
      <head>
        <script type="text/javascript" src="
    https://www.google.com/jsapi"></script>
        <script type="text/javascript">
          google.load("visualization", "1", {packages:["corechart"]});
           google.setOnLoadCallback(drawChart);
          function drawChart() {
                         var data = google.visualization.arrayToDataTable([
                           ["Database Name", "Active users"],
                           '+@SeriesDBusers+'
                         ]);
     
                         var options = {
                           title: "Active users",
                           pieSliceText: "value"
                         };
     
                         var chart = new google.visualization.PieChart(document.getElementById("chart_div"));
                         chart.draw(data, options);
          };
        </script>
      </head>
      <body>
        <table>
        <tr><td>
            <div id="chart_div" style="width: 800px; height: 300px;"></div>
            </td></tr>
        </table>
      </body>
    </html>
    '''

    EXEC msdb.dbo.sp_send_dbmail
       @recipients = 'PutYourEmailHere',
       @subject ='Active users',
       @body = @html,
       @body_format = 'HTML',
       @query = @Query,
       @attach_query_result_as_file = 1, 
       @query_attachment_filename = 'Results.htm'

    After opening the email attachment in the browser you get this kind of report:


    image

    In fact, the above is not only for database alerts. It can also be used for application reports when you need a level of customization that you cannot achieve using standard methods like SSRS.

    If you need more information on how to customize the charts, the Google Charts documentation linked above is a good place to start.

    Use the above examples as a starting point for your procedures and I’d be more than happy to hear of your implementations of the above techniques.

    Yours,

    Maria

  • Love your enemies.. it pisses them off.

     

    There is always a threat from somewhere. Old-style DBAs are standing on shaky ground.

    Five years ago I remember defending SQL Server against the domineering Oracle DBAs. The argument was always about “who is better”. Management Studio was a great weapon against sqlplus, and you could always play the “DDL is not part of the transaction” card. With no Oracle DBAs around, I must admit that Oracle is a pretty amazing product, although it requires a great deal of tuning.

    Then came the era of NoSQL. It’s not easy to compete against such fast loads, easy scale-up and fast development. The NoSQL solutions support use cases that do not really fit into the relational model, like graphs or unstructured data. On the other hand, eventual consistency does not suit everyone, joins between entities are hard and there are stability problems with many NoSQL products. Twitter apparently has been trying to move from MySQL to Cassandra for over a year. In August 2011 Foursquare reported an 11-hour downtime because of MongoDB.

     

    A few years ago a new trend/threat arrived in our lands: analytical databases.

    There are quite a lot of rather expensive products positioning themselves as revolutionary technology that can easily handle an overwhelming amount of Big Data querying and analytical processing: self-tuning, with no need for query tuning, prepared aggregations or calculations. There is no locking or logging overhead, so data loading into an analytical DBMS is claimed to be extremely fast. On the other hand, these DBMSs admit they are not built for OLTP loads; massive updates and deletes are their weak point.

    The Data Warehouse team at my company has come to the decision that they need such an analytical database solution. The vision is a scalable, self-tuning database that will ingest a terabyte or two on a weekly basis and will allow any number of concurrent users to execute ad-hoc analytical queries over any period of time.

    I strongly believe that with a proper database design SQL Server 2012 can provide good performance under a heavy analytical workload. However, I was really curious to understand how the analytical databases achieve their self-tuning performance magic. There is nothing like the competition to show us where we have weaknesses that need strengthening.

    First of all, here is a great map of all the technologies from the Big Data puzzle http://www.bigdatamap.net/technologies/ .

    In order to get a better understanding of the strong and weak points of the analytical databases world, I will drill down a little bit into the technologies that are standing behind the buzzwords.

    There are 4 categories of products inside the analytic databases part of the Data Repositories world:

    - In-Memory solutions (like SAP HANA, XtremeData, Kognitio etc.):

    - When querying the DBMSs from this group, you work primarily against tables in memory. These DBMSs are the fastest solutions in the analytical database world. An in-memory DBMS relies primarily on memory optimization algorithms, which are much simpler than disk access. There are no locking/concurrency issues and queries are amazingly fast. With these systems you can expect sub-second results on a truly ad-hoc DWH workload with unpredictable queries coming through, sometimes six pages of code long. I asked whether there are partitions on the tables and the Kognitio representative was quite surprised. “What for?” he asked me. “Well, how do you delete the old data?” “You just write a DELETE statement and the data gets deleted,” he answered. That’s it. Just use a DELETE statement.

    - There are several flavors of in-memory products, each requiring a different memory-to-data ratio. For instance, SAP HANA loads 100% of the database into memory, which makes this solution extremely expensive for databases of hundreds of terabytes. Kognitio, on the other hand, allows you to choose which data goes into memory, usually 20% - 40%, while the rest of the data stays in the persistency layer. For instance, if your queries usually access only the last month of data, you can keep a sliding window of this data in memory and, in case any unpredicted query comes in, the system will load the missing data off the disk. However, on each cluster node 60% of the memory is used for data and the other 40% for user activity and temporary calculations; when summed up, you still end up with quite a pricey solution.

    - All products in this category save the data to the disk as well. This persistency layer can be their proprietary files, Hadoop or any other DBMS.

    - One might think it would take a long time after server downtime to get all the data back into memory. Kognitio states that it takes only 5 minutes for each 10 TB of data.

    - Watch out for some limitations that come along with the in-memory architecture. Some products are not available during maintenance tasks, like backups or new node additions (XtremeData), and must be taken down. This does not suit systems that must be up 24/7.

    - Disk-oriented solutions (Vertica, Greenplum, Netezza, Sybase IQ, VectorWise, Microsoft PDW, etc.)

    - Disk-oriented products are very well optimized for working with the data on disk. The main trend in this category is storing the data in a columnar way, which allows a high compression ratio. However, some products are row-based, like Netezza. Microsoft PDW (V2, which is based on SQL 2012) allows both flavors of storage, row-based and columnar. Its release is actually in February 2013, so we’ll see how it competes with the others.

    - Disk-oriented solutions usually do not rely on there being a lot of RAM.

    - Since the world around us is row-based, all the columnar products load their data from row-based files. They inject the incoming data into some write-optimized row store; a background process afterwards converts the inserted data into read-optimized, sorted columnar storage.

    - Because each column in a columnar table is stored in a separate file, the fewer columns a query requests, the faster it performs. Wider queries usually require some sort of manual optimization. Vertica, for instance, keeps several copies of the table, called projections. The projections, like covering indexes, contain columns grouped together or columns from different tables (for faster joins), partly nullifying the benefits of the compression. This is a hybrid columnar/row-based storage solution.

     - Hybrid DBMS products ( ParAccel, Exasol, etc. )

    - These products are built on a disk-oriented technology, usually columnstores, but provide the best performance when given as much RAM as they can get. For instance, ParAccel submitted a TPC-H benchmark on a system with a 288% data-to-memory ratio.

    - Daemons running on the Hadoop nodes (Cloudera Impala)

    - A kind of Hive alternative, these solutions run on the same physical cluster as Hadoop, creating a seamless environment. Kognitio is considering such an architecture as a future enhancement of their product.

    - Cloudera Impala is in public beta; the general release is expected sometime in 2013. According to some web forum posts, the fastest Impala queries take more than 1 second. Impala was tested and survived a load of 100 concurrent queries. Another test showed Impala running 10 cloned copies of a query with roughly 25% performance degradation.

     

    If you are evaluating an analytical database solution for your Big Data, the points below can help you better understand the different flavors of the products:

    - Product scalability. Scalability is about whether the product is an MPP (massively parallel processing) system or a single-node system. MPP systems are usually built as a cluster of commodity servers where the table data is distributed across the cluster according to the administrator’s choice, like some sort of partitioning. Cluster nodes combined into a single, very powerful analytic machine provide vast amounts of processing power, and new nodes can easily be added to the cluster when needed. Vertica, ParAccel, Exasol and Greenplum are MPP solutions. Sybase IQ and VectorWise are single-node systems, though VectorWise promises that an MPP version is coming out soon.

    - Single point of failure. While exploring an MPP product, it’s good to know whether it’s a shared-nothing system or whether there is a main node (ParAccel, Netezza or Greenplum) that is in charge of producing query plans and processing the serial parts of the execution plans. For instance, a COUNT DISTINCT operation requires running a distinct sort again on the merged results. The main node might become a bottleneck in the future when the system load increases.

    - Replication factor/data redundancy. Most MPP solutions replicate every subset of the data to an additional node for redundancy. If any node goes down, there is always another copy of the data, which means the cluster continues to work. The question is what kind of algorithm they use to store the second copy. Two-phase commit is too pricey because it slows down the inserts, so most of the products take an optimistic approach. Each one implements that approach differently, but the data is usually replicated by a background process. What happens if the node fails after the data was inserted and before it got replicated?

    - Watch out for the limitations. VectorWise can support databases up to 30TB on a single node. Another common limitation for columnar solutions is character data type length. SAP HANA has a 2 billion record limit in a single partition.

    - Data Encryption. Most of the analytical databases that I have looked into did not have a data encryption feature.

    - Row-to-columnar conversion. Querying freshly loaded data is definitely worth a check, especially on the columnar storage solutions. Your queries will probably be merging data from the write-optimized row-based storage and the column-based files.

    - Data modifications. Vertica and Exasol lock the entire table during row modification operations (updates/deletes), blocking other modifying processes. I must note here that this does not impact read flows against that table.

    - Management tools. All this can sound great until you realize that your main tool will be the command line, as with ParAccel. In that case you will probably need to purchase some third-party management tool, like Toad. Make sure there is an ODBC/JDBC provider.

    - Language support. An analytical solution should usually support ANSI SQL, including analytical extensions. Many products support other languages, like R, Java, MDX etc.

    - Development/QA environment. It’s worth checking whether it is possible to restore the production multi-node database to a single-node server for development or testing purposes.

    - Hadoop connectivity. Many products have implemented this quite useful feature, called external tables. It is a very tight product integration, making Hadoop data visible to user queries without pre-loading it. The integration might not be so tight for some products, though: under the hood they simply connect to Hadoop using Hive, and performance is slow in such cases. Kognitio, on the other hand, has an agent installed on the Hadoop cluster nodes and thus can “push down the predicates” – ask Hadoop to filter the data prior to loading.

    - Number of customers. If there are only 10 customers behind the revolutionary technology, would you consider paying a huge amount of money for it?

     

    Machine-generated data grows at an amazing speed, and therefore the requirements for DBMS products are changing. SQL Server is evolving as well, with the new columnstore index feature in SQL 2012. In the next version more features from the analytical databases world will be implemented, like in-memory technology <Hekaton>, a Hadoop external tables feature <Polybase> and truly updatable columnar data storage implemented using a new column-based clustered index.
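    As a quick illustration of the SQL 2012 columnstore feature mentioned above, here is a minimal sketch with hypothetical table and column names (note that in SQL 2012 a nonclustered columnstore index makes the table read-only until the index is dropped or disabled):

    -- Hypothetical fact table, for illustration only
    CREATE TABLE dbo.FactSales
    (
          SaleDate    date   NOT NULL,
          ProductId   int    NOT NULL,
          CustomerId  int    NOT NULL,
          Quantity    int    NOT NULL,
          Amount      money  NOT NULL
    );
    GO

    -- SQL 2012 nonclustered columnstore index over the analytical columns
    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
    ON dbo.FactSales (SaleDate, ProductId, CustomerId, Quantity, Amount);
    GO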

    Every software solution has its strengths, and there is no fit-for-everyone solution. There are many shades of grey besides black and white.

    We are starting a POC of several analytical DBMSs. I will surely test the same load on SQL 2012. I accept the architectural limitation that it does not scale out, and I also understand that until Hekaton comes to town, SQL Server cannot compete with the in-memory products. However, I cannot wait to compare the performance of the columnstore index against the other columnstore products.

    Have you been part of any performance benchmark comparing a columnar non-MPP product and SQL 2012?


  • Real tortoises keep it slow and steady. How about the backups?

     

    … Four tortoises were playing in the backyard when they decided they needed hibiscus flower snacks. They pooled their money and sent the smallest tortoise out to fetch the snacks.
    Two days passed and there was no sign of the tortoise.
    "You know, she is taking a lot of time", said one of the tortoises.
    A little voice from just outside the fence said, "If you are going to talk that way about me, I won't go."

    image

    Is it too much to ask of a quite expensive 3rd party backup tool to be way faster than the SQL Server native backup?

    Or at least to save a respectable amount of storage by producing really smaller backup files? By “really smaller”, I mean at least getting a file half the size.

    After Googling around in an attempt to understand what other “SQL people” are using for database backups, I see that most people use one of three tools, which are the main players in the SQL backup area:

    • LiteSpeed by Quest
    • SQL Backup by Red Gate
    • SQL Safe by Idera

    The feedback about those tools is truly emotional and enthusiastic. However, while reading the forums and blogs I wondered: is it possible that many people have simply been accustomed to using the above tools since SQL 2000 and 2005?
    This is easy to understand, because a 300GB database backup, for instance, using a regular SQL 2005 backup statement would have run for about 3 hours and produced a ~150GB file (depending on the content, of course).
    Then you take a 3rd party tool which performs the same backup in 30 minutes and produces a 30GB file, leaving you speechless; you run to management to persuade them to buy it because it is definitely worth the price.
    In addition to the increased speed and disk space savings you would also get backup file encryption and virtual restore - features that are still missing from the SQL Server.

    But if you, like me, don’t need these additional features and only want a tool that performs a full backup MUCH faster AND produces a far smaller backup file (like the gain you observed back in the SQL 2005 days), you will be quite disappointed. The SQL Server backup compression feature has totally changed the market picture.

    Medium size database.

    Take a look at the table below and check out how my SQL Server 2008 R2 compares to the other tools when backing up a 300GB database.

    It appears that, when talking about backup speed, SQL 2008 R2 compresses and performs the backup in similar overall times to all three other tools; the 3rd party tools’ maximum compression level takes twice as long.
    The backup file gain is not that impressive except at the highest compression levels, but the price you pay for those is a very high CPU load and a much longer run time. Only SQL Safe by Idera was quite fast with its maximum compression level, but for most of the run time it used 95% CPU on the server.
    Note that I have used two types of destination storage, an 11-disk SATA aggregate and a 53-disk FC aggregate, and, obviously, on the faster storage the backup was ready in half the time.

    image image

    Looking at the above results, should we spend money, bother with another layer of complexity and software middle-man for the medium sized databases? I’m definitely not going to do so. 

    Very large database

    As a next phase of this benchmark, I have moved to a 6 terabyte database which was actually my main backup target.

    image

    Note how using multiple files enables the SQL Server backup operation to use parallel I/O and remarkably increases its speed, especially when the backup device is heavily striped. SQL Server supports a maximum of 64 backup devices for a single backup operation, but most of the speed is gained when using one file per CPU core; in the case above that means 8 files for a server with two quad-core CPUs. The impact of additional files is minimal.
    However, SQLsafe doesn’t show any speed improvement between 4 files and 8 files.
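    As an illustration of the multi-file approach (a sketch only, reusing the placeholder paths from the backup statements further down), the striped native backup to 8 files looks like this:

    BACKUP DATABASE <DBNAME> TO
          DISK = '\\<networkpath>\par1.bak',
          DISK = '\\<networkpath>\par2.bak',
          DISK = '\\<networkpath>\par3.bak',
          DISK = '\\<networkpath>\par4.bak',
          DISK = '\\<networkpath>\par5.bak',
          DISK = '\\<networkpath>\par6.bak',
          DISK = '\\<networkpath>\par7.bak',
          DISK = '\\<networkpath>\par8.bak'
    WITH FORMAT, COMPRESSION, STATS = 10;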

    Of course, with such huge databases every half percent of compression translates into noticeable numbers. Saving almost 470GB of space may turn the backup tool into quite a valuable purchase. Still, the backup speed and high CPU usage are variables that should be taken into consideration.

    As for us, backup speed is more critical than storage, and we cannot allow a production server to sustain 95% CPU for such a long time.
    Bottom line, 3rd party backup tool developers: we are waiting for a breakthrough release.

    There are a few unanswered questions, like the restore speed comparison between different tools and the impact of multiple backup files on restore operation. Stay tuned for the next benchmarks. 
     

    Benchmark server:

    • SQL Server 2008 R2 sp1
    • 2 Quad CPU
    • Database location: NetApp FC 15K Aggregate 53 discs
    • Backup destination volumes: two physical NetApps FC 15K Aggregate 53 discs, 4 files on each volume. 

    Backup statements:

    No matter how good the UI is, we need to run the backup tasks from inside SQL Server Agent to make sure they are covered by our monitoring systems. I have used the extended stored procedures (command line execution is also an option; I haven’t noticed any impact on backup performance).

    Native SQL backup:

    backup database <DBNAME> to
    disk= '\\<networkpath>\par1.bak' ,
    disk= '\\<networkpath>\par2.bak',
    disk= '\\<networkpath>\par3.bak'
    with format, compression

    LiteSpeed:

    EXECUTE master.dbo.xp_backup_database
    @database = N'<DBName>',
    @backupname= N'<DBName> full backup',
    @desc = N'Test',
    @compressionlevel=8,
    @filename= N'\\<networkpath>\par1.bak',
    @filename= N'\\<networkpath>\par2.bak',
    @filename= N'\\<networkpath>\par3.bak',
    @init = 1

    SQL Backup (Red Gate):

    EXECUTE master.dbo.sqlbackup
    '-SQL "BACKUP DATABASE <DBNAME>
    TO DISK= ''\\<networkpath>\par1.sqb'',
    DISK= ''\\<networkpath>\par2.sqb'',
    DISK= ''\\<networkpath>\par3.sqb''
    WITH
    DISKRETRYINTERVAL = 30,
    DISKRETRYCOUNT = 10,
    COMPRESSION = 4,
    INIT"'

    SQL safe (Idera):

    EXECUTE master.dbo.xp_ss_backup
    @database = 'UCMSDB',
    @filename = '\\<networkpath>\par1.bak',
    @backuptype = 'Full',
    @compressionlevel = 4,
    @backupfile = '\\<networkpath>\par2.bak',
    @backupfile = '\\<networkpath>\par3.bak'

    If you still insist on using 3rd party tools for the backups in your production environment with the maximum compression level, you will definitely need to consider limiting CPU usage, which will increase the backup operation time even more:

    • RedGate: use the THREADPRIORITY option (values 0 – 6)
    • LiteSpeed: use @throttle (a percentage, like 70%) – sketched below
    • SQL safe: the only thing I have found was the @Threads option.
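    A minimal sketch of the LiteSpeed variant, assuming @throttle can simply be added to the xp_backup_database call shown earlier (check your LiteSpeed version’s documentation for the exact parameter list):

    EXECUTE master.dbo.xp_backup_database
    @database = N'<DBName>',
    @backupname = N'<DBName> full backup',
    @compressionlevel = 8,
    @throttle = 70,                -- assumption: cap LiteSpeed at roughly 70% CPU
    @filename = N'\\<networkpath>\par1.bak',
    @init = 1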

     

    Yours,

    Maria

  • Is your TRY worth catching?

     

    The very useful TRY/CATCH error handling construct is widely used to catch all execution errors that do not close the database connection. Its biggest downside is that, in the case of multiple errors, the TRY/CATCH mechanism will only catch the last error.

    An example of this can be seen during a standard restore operation. In this example I attempt to perform a restore from a file that no longer exists. Two errors are being fired: 3201 and 3013:

    image

    Assuming that we are using the TRY and CATCH construct, the ERROR_MESSAGE() function will catch the last message only:

    image
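    A minimal sketch of the same experiment, with a hypothetical database name and a backup file path that does not exist:

    BEGIN TRY
          -- Both 3201 (cannot open backup device) and 3013 (RESTORE terminating abnormally) are raised here
          RESTORE DATABASE TestDB
          FROM DISK = 'C:\Backups\FileThatDoesNotExist.bak';
    END TRY
    BEGIN CATCH
          -- Only the last error, 3013, is visible in the CATCH block
          SELECT ERROR_NUMBER()  AS ErrorNumber,
                 ERROR_MESSAGE() AS ErrorMessage;
    END CATCH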

    To work around this problem you can prepare a temporary table that will receive the statement output. Execute the statement through the xp_cmdshell stored procedure: connect back to the SQL Server using the command line utility sqlcmd and redirect its output into the previously created temp table.

    image

    After receiving the output, you will need to parse it to understand whether the statement finished successfully or failed. It’s quite easy to accomplish as long as you know which statement was executed. In the case of generic executions you can query the output table and search for patterns like “Msg%Level%State%” that are usually part of the error message.

    Furthermore, you don’t need TRY/CATCH in the above workaround, since the xp_cmdshell procedure always finishes successfully and you can decide whether to fire the RAISERROR statement or not.
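    A minimal sketch of the workaround described above (the server name, database name and file path are placeholders, and xp_cmdshell must be enabled on the instance):

    CREATE TABLE #CmdOutput (Line nvarchar(4000) NULL);

    -- Run the statement through sqlcmd and capture its full output, including every error message
    INSERT INTO #CmdOutput (Line)
    EXEC master.dbo.xp_cmdshell
         'sqlcmd -S <servername> -E -Q "RESTORE DATABASE TestDB FROM DISK = ''C:\Backups\FileThatDoesNotExist.bak''"';

    -- Search the captured output for the usual error message pattern
    IF EXISTS (SELECT 1 FROM #CmdOutput WHERE Line LIKE '%Msg%Level%State%')
          RAISERROR ('The restore statement failed; see the captured output for details.', 16, 1);

    DROP TABLE #CmdOutput;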

     

    Yours,

    Maria

  • SQLCMD Mode: give it one more chance

     

    - Click on me. Choose me. - asked one forgotten feature when some bored DBA was purposelessly wandering through the Management Studio menus at the end of her long and busy working day.

    - Why would I use you? I have heard of no one who does. What are you for? - wondered the aged and wise DBA, perplexed. At least that DBA thought she was aged and wise, though each day tried to prove to her that she wasn't.

    - I know you. You are quite lazy. Why would you do additional clicks to move from window to window? From tool to tool? This is irritating, isn't it? I can run Windows system commands, SQL statements and much more from the same script, from the same query window!

    - I have all my tools that I‘m used to, I have Management Studio, Cmd, Powershell. They can do anything for me. I don’t need additional tools.

    - I promise you, you will like me, – the thing continued to whine.

    - All right, show me. – she gave up. It’s always this way, she thought sadly, - easier to agree than to explain why you don’t want to.

    - Enable me and then think about anything that you always couldn’t do through the management studio and had to use other tools.

    - Ok. Google for me the list of greatest features of SQL SERVER 2012.

    - Well... I’m not sure... Think about something else.

    - Ok, here is something easy for you. I want to check whether a folder exists or whether a file is there. Though I can easily do this using xp_cmdshell …

    - This is easy for me. – rejoiced the feature.

    By the way, having menu items talk to you usually means you should stop working and go home. Or drink coffee. Or both. Well, the aged and wise DBA wasn’t thinking about the weirdness of the situation at that moment.

    - After enabling me, – said the unfairly forgotten feature (it thought of itself in such a manner), – you can use any command line command in the same Management Studio query window by adding two exclamation marks !! at the beginning of the script line to denote that you want to run a cmd command:

    image

    - Just keep in mind that when using this feature, you are actually running the commands ON YOUR computer and not on the SQL Server that the query window is connected to. This is the main difference from xp_cmdshell, which executes commands on the SQL Server itself. Bottom line: use UNC paths instead of local paths.

    - Look, there is much more than that. - The SQLCMD feature was getting excited. - You can get the IP of your servers; create, rename and drop folders. You can see the contents of any file anywhere and even start different tools from the same query window:

    image
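    (In place of the screenshot, here is a minimal sketch of what such a query window might contain once SQLCMD Mode is enabled from the Query menu; the paths are placeholders:)

    !!ipconfig
    !!md "\\<networkpath>\NewFolder"
    !!dir "\\<networkpath>\NewFolder"
    !!type "\\<networkpath>\NewFolder\SomeScript.sql"
    !!notepad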

    The not-so-aged-and-wise DBA was getting interested: - I also want to run different scripts on different servers without changing the connection of the query window.

    - Sure, sure! That is another great feature SQLCMD Mode provides, giving more power to querying. Use ":" to access additional commands, like :connect, which allows you to change the connection:

    image
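    (Again in place of the screenshot, a minimal sketch; the server names are hypothetical:)

    :connect DWHStagingServer
    SELECT @@SERVERNAME AS CurrentServer;
    GO

    :connect ReportServer
    SELECT @@SERVERNAME AS CurrentServer;
    GO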

    - Now imagine you have one script with all your changes: creating a staging table on the DWH staging server, adding a fact table to the DWH itself and updating stored procedures on the server where the reporting database is located.

    - Now, give me more challenges!

    - Script out a list of stored procedures into the text files.

    - You can do that easily by using the :out command, which writes the query results into the specified text file. The output can be the code of a stored procedure or any data. Actually, this is the same as changing the query output to a file instead of the grid.

    image
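    (A minimal sketch of :out; the file path and procedure name are hypothetical, and the results go to the file instead of the results pane:)

    :out C:\Scripts\usp_MyProcedure.sql
    SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.usp_MyProcedure'));
    GO
    :out stdout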

    - Now, take all of the scripts and run all of them, one by one, on a different server.

    - Easily

    - Come on... I’m sure that you can not...

    - Why not? Naturally, I can do it using the :r command, which opens a script and executes it. Look, I can also use the :setvar command to define a scripting variable in SQLCMD mode. Just note that you have to leave an empty line between :r commands, otherwise it doesn’t work, although I have no idea why.

    image
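    (A minimal sketch of :setvar and :r, with hypothetical paths and server names; note the empty line between the :r commands, as mentioned above:)

    :setvar ScriptFolder "\\<networkpath>\Scripts"

    :connect DWHStagingServer
    :r $(ScriptFolder)\CreateStagingTable.sql

    :connect ReportServer
    :r $(ScriptFolder)\UpdateReportProcs.sql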

    - Wow.- She was really impressed. - Ok, I’ll go to try all those…

    - Wait, wait! I know how to Google the SQL Server features for you! This example will open the Chrome browser with search results for “SQL Server 2012 top features” (change the path to suit your PC):

    image

    “Well, this can probably be useful stuff; maybe this feature really is unfairly forgotten”, thought the DBA while walking through the dark, empty parking lot to her lonely car. “As someone really wise once said: ‘It is what we think we know that keeps us from learning. Learn, unlearn and relearn.’”

  • Maintenance plans love story

    There are about 200 QA and DEV SQL Servers out there. 

    There is a maintenance plan on many of them that performs a backup of all databases and removes the backup history files.

    First of all, I must admit that I’m no big fan of maintenance plans in particular or the SSIS packages in general. 

    In this specific case, if I ever need to change anything in the way backup is performed, such as the compression feature or perform some other change, I have to open each plan one by one. This is quite a pain.

    Therefore, I have decided to replace the maintenance plans with a stored procedure that will perform exactly the same thing.  Having such a procedure will allow me to open multiple server connections and just execute an ALTER PROCEDURE whenever I need to change anything in it. There is nothing like good ole T-SQL.

    The first challenge was to remove the unneeded maintenance plans. Of course, I didn’t want to do it server by server.  I found the procedure msdb.dbo.sp_maintplan_delete_plan, but it only has a parameter for the maintenance plan id and it has no other parameters, like plan name, which would have been much more useful. Now I needed to find the table that holds all maintenance plans on the server. You would think that it would be msdb.dbo.sysdbmaintplans but, unfortunately, regardless of the number of maintenance plans on the instance, it contains just one row.

    clip_image002[8]  

    After a while I found another table: msdb.dbo.sysmaintplan_subplans. It contains the plan id that I was looking for, in the plan_id column, as well as the agent’s job id which executes the plan’s package:

    clip_image004[8]

    That was all I needed and the rest turned out to be quite easy.  Here is a script that can be executed against hundreds of servers from a multi-server query window to drop the specific maintenance plans.

    DECLARE @PlanID uniqueidentifier

     

    SELECT @PlanID = plan_id

    FROM msdb.dbo.sysmaintplan_subplans

    Where name like 'BackupPlan%'

     

    EXECUTE msdb.dbo.sp_maintplan_delete_plan @plan_id=@PlanID

     

    The second step was to create a procedure that will perform  all of the old maintenance plan tasks: create a folder for each database, backup all databases on the server and clean up the old files. The script is below. Enjoy.

     

    ALTER PROCEDURE BackupAllDatabases

                                      @PrintMode BIT = 1

    AS

    BEGIN

     

           DECLARE @BackupLocation VARCHAR(500)

           DECLARE @PurgeAferDays INT

           DECLARE @PurgingDate VARCHAR(30)

           DECLARE @SQLCmd  VARCHAR(MAX)

           DECLARE @FileName  VARCHAR(100)

          

           SET @PurgeAferDays = -14

           SET @BackupLocation = '\\central_storage_servername\BACKUPS\'+@@servername

          

           SET @PurgingDate = CONVERT(VARCHAR(19), DATEADD (dd,@PurgeAferDays,GETDATE()),126)

          

           SET @FileName = '?_full_'

                         + REPLACE(CONVERT(VARCHAR(19), GETDATE(),126),':','-')

                         +'.bak';

     

           SET @SQLCmd = '

                  IF ''?'' <> ''tempdb'' BEGIN

                         EXECUTE master.dbo.xp_create_subdir N'''+@BackupLocation+'\?\'' ;

     

                         BACKUP DATABASE ? TO  DISK = N'''+@BackupLocation+'\?\'+@FileName+'''

                         WITH NOFORMAT, NOINIT,  SKIP, REWIND, NOUNLOAD, COMPRESSION,  STATS = 10 ;

     

                         EXECUTE master.dbo.xp_delete_file 0,N'''+@BackupLocation+'\?\'',N''bak'',N'''+@PurgingDate+''',1;

                  END'

     

           IF @PrintMode = 1 BEGIN

                  PRINT @SQLCmd

           END

          

           EXEC sp_MSforeachdb @SQLCmd

          

    END

  • Transactional replication: are you sure your data is totally synchronized?

     

    There are those rare times, when your replication solution is working perfectly. No consistency errors, no errors at all.
    Or so it seems.

    Then, all of a sudden, you catch one application which is writing to the read-only subscriber. Next, a developer complains that some data is missing from the subscriber.

    Similarly, you always wonder, after consistency errors have appeared and been solved, whether the data is really synchronized. In some specific cases, the publication can be dropped by mistake and re-created, with replication continuing without reporting any errors.

    Is there a native way to make sure all data is synchronized between the publisher and the subscriber?

    Validate subscriptions using Replication Monitor

    If you open the Replication Monitor application and right click on your publication, you will notice the “Validate subscriptions…” feature.

    image

    Click on it, choose the subscription that requires validation and select the way validation will be performed.

     image

    These options are quite critical since they directly impact the speed of the validation process. Here you can choose whether the tables themselves will be queried or row count information will be extracted from the sysindexes system view.

    The default option is to compare the numbers from sysindexes and compare actual row counts only if differences are found.

    You can also choose to compare the checksums of the data. In this case, the validation process will take a long time.

    The result of the comparison will appear in the distribution service details:

    clip_image005

    You will see all tables that were scanned by the validation process and the row counts.

    clip_image007

    If a table is out of sync you will see the error as below

    clip_image009

     

    What’s going on behind the scenes?

    Here is the query that is being executed against each table in the database if you choose to compare actual count and the checksum:

    SELECT count_big(*), sum (convert(numeric, binary_checksum(*) ) )

    FROM <tablename>  WITH (TABLOCK HOLDLOCK)

     

    All the above is nice when you have only one publication to validate. What if you have hundreds of publications?

    As you might have expected, this UI executes a stored procedure on the publisher. The stored procedure is called sp_publication_validation and its main parameters are:

    • @rowcount_only

      - 1 - Perform a rowcount check only (default)

      - 2 - Perform a rowcount and binary checksum

    • @full_or_fast

      - 0 - Full count using count(*)

      - 1 - Fast count using the sysindexes view

      - 2 - Compare the subscriber and publisher using the fast count and, if the results are not equal, use a count on the table. If the rows field in sysindexes is NULL, a full count is used. (default)

     

    USE <published database>

    GO

    EXEC sp_publication_validation  @publication =  'PublicationName'

                 ,  @rowcount_only =  2

                 ,  @full_or_fast =  2

    You can execute this stored procedure over all publications on the server.

    I have put together a simple script to do this. In order to use it, you need to define a linked server to the Distributor server, so you can get the list of publications from the MSpublications table which is located in the Distribution database.

     

    CREATE PROCEDURE ValidateSubscriptions

                            @PrintMode  int = 0

    AS

    BEGIN

         

          DECLARE @SQLCmd  varchar(max);

         

          SET @SQLCmd= ''

     

          SELECT @SQLCmd = @SQLCmd + 'EXEC ['+publisher_db+'].dbo.sp_publication_validation 

          @publication = '''+publication+''',

          @rowcount_only =  2,

          @full_or_fast =  2;' + CHAR(10) + CHAR(13) +

          ' WAITFOR DELAY ''00:01:00'';' + CHAR(10) + CHAR(13)

          FROM DISTRIBUTOR.DistributionDB.dbo.MSpublications p

                JOIN DISTRIBUTOR.master.sys.servers s

                      ON p.publisher_id = s.server_id 

          WHERE s.name =  @@servername

          IF @PrintMode = 1 BEGIN

                PRINT  @SQLCmd

          END

         

          EXEC (@SQLCmd)

     

    END

    All the validation results (for all publications) will be inserted into the MSdistribution_history table which is located in the Distribution database on the Distributor server.

    Viewing it can be done using the following query:

     

    USE DistributionDB

    GO

     

    SELECT      time              Time,

                s.Name            PublisherServer,

                a.publication     Publication,

                s2.name           SubscriberServer,

    SUBSTRING(comments,8,CHARINDEX('''',comments,8)-8) TableName,

                comments          Comments

          FROM dbo.MSdistribution_history  h

           JOIN dbo.MSdistribution_agents   a

             ON h.agent_id = a.id

                      JOIN master.sys.servers s

                        ON a.publisher_id = s.server_id

                            JOIN master.sys.servers s2

                              ON a.subscriber_id = s2.server_id        

          WHERE comments like 'Table %'

    The best part is that if any article fails validation you will immediately see it in the error log

     

    EXEC master..xp_readerrorlog 0,1,'failed data validation'

    clip_image011

    The above means that if you have a routine that notifies you whenever there are unusual errors in the SQL Server Error Log, it will catch validation failures as well.

     

    May all your articles pass their validation successfully.

    Yours, Maria

  • Unfairly forgotten partitioned views can help us make our partitioned table design better

    Partitioning is a very important feature for scalable applications; it allows breaking up huge monolithic tables in order to ensure their scalability and manageability within a single instance. For example, data deletion and loading can be very fast if done by partition. Reorganizing, optimizing and rebuilding indexes can also be done by partition, which is much less intrusive and faster.

    Also, when talking about performance, joins become faster when using tables that are partitioned. Smaller b-trees for each partition make partition access fast, especially when limiting rows by a partition key, accessing only a subset of data. Another performance benefit can be achieved when using partition level lock escalation.

    There are two partitioning designs in SQL Server. In a partitioned view, the member tables are combined horizontally by a UNION ALL view, so from the user’s perspective the data comes from one table. A natively partitioned table, a feature introduced in SQL Server 2005, is treated as a single object by the relational engine, yet is handled as multiple objects by the storage engine.

    clip_image002

    When comparing native partitioning to a partitioned view, one of the greatest improvements is that using a partitioned table results in relatively small query plans. PVs in general had enormous query plans because each table in the view was a separate object with separate statistics. Due to the fact that natively partitioned tables are treated by the relational engine as a single object, fewer permutations are considered during the optimization phase, meaning a faster optimization time. Also, auto-parameterization doesn’t work with PVs because the rules of auto-parameterization are quite strict. 

    Adhoc SQL that uses a single partitioned table has many more opportunities to get auto-parameterized and produce a re-usable adhoc query plan.

     In PVs, tables are accessed sequentially.  When using native partitioning many operations can be done in parallel. 

    However, there are many challenges when migrating from an environment that uses partitioned views to native partitioning. The usual way of upgrading such an environment is to create a new partitioned table and then to switch the data in, partition by partition. At some point in time, the old view and the new table flip-flop names. During such an upgrade, from the user’s perspective, data is not available or, even worse, partially available, and the reports are not correct.

    After the upgrade, maintenance tasks are also more cumbersome when performed against a partitioned table. Adding new indexes to a huge table with billions of rows takes about 4 hours, and the table is locked during this time. Same story with data type changes on columns. Also consider primary key changes: you need to drop the PK, which also takes about 3 hours, and then add a new one – another 4 hours (a PK change is usually a bad thing anyway, since it indicates the database design is not correct, but we all know that it happens and sometimes we need to add an additional column to the PK). Also, there is no scale-out for partitioned tables, meaning all partitions must reside on the same server in the same database.

    Here, surprisingly, our good old, unfairly forgotten partitioned views can help us make our partitioning design even better.

    Consider the following scenario. You have a partitioned view with many big tables. Instead of switching the data partition by partition into a new table, you simply add the new partitioned table under the same partitioned view. No long Sch-M locks, no partial data availability and no downtime. All the new data is inserted into the new partitioned table. When the time comes, the old partitions get purged, leaving the partitioned view with only the natively partitioned table underneath. If you need to support new indexes and other lengthy metadata changes, you can simply add a new partitioned table to the view with the required metadata changes, which means zero downtime for your environment. If you need a distributed architecture across servers, you can also do that using partitioned views.
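    A minimal sketch of this scenario, with hypothetical table and column names: the old monolithic table and the new natively partitioned table are kept under one partitioned view via CHECK constraints on the partitioning column.

    -- Old monolithic table keeps the historical data (hypothetical objects, for illustration only)
    ALTER TABLE dbo.Sales_Old
          ADD CONSTRAINT CK_Sales_Old_Date CHECK (SaleDate < '20130101');
    GO

    -- New natively partitioned table receives all new data
    CREATE PARTITION FUNCTION pfSalesDate (date)
          AS RANGE RIGHT FOR VALUES ('20130101','20130201','20130301');
    CREATE PARTITION SCHEME psSalesDate
          AS PARTITION pfSalesDate ALL TO ([PRIMARY]);

    CREATE TABLE dbo.Sales_New
    (
          SaleDate    date   NOT NULL CHECK (SaleDate >= '20130101'),
          ProductId   int    NOT NULL,
          Amount      money  NOT NULL,
          CONSTRAINT PK_Sales_New PRIMARY KEY (SaleDate, ProductId)
    ) ON psSalesDate (SaleDate);
    GO

    -- The partitioned view presents both tables as a single object to the users
    CREATE VIEW dbo.Sales
    AS
    SELECT SaleDate, ProductId, Amount FROM dbo.Sales_Old
    UNION ALL
    SELECT SaleDate, ProductId, Amount FROM dbo.Sales_New;
    GO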

    clip_image004

    Native Partitioning is a great feature of SQL Server, but partitioned views can make it even better helping scaling out and performing maintenance tasks with zero downtime.
