THE SQL Server Blog Spot on the Web


Damian Widera

  • SQL Server 2014 CU4 is available for downloading

    SQL Server 2014 CU4 is available for download :)

    There are some interesting facts:

    ·         66 issues were fixed, broken down by area:

    Area                                                  Fixes
    ----------------------------------------------------- -----
    BI\Analysis Services                                  10
    BI\Collaborative Insights                             1
    BI\Reporting Services                                 11
    Data Quality Services\Cleanse\Export Results          1
    Integration Services                                  1
    Integration Services\Tasks Components                 2
    Integration Services\Tools                            1
    Master Data Services\Client                           1
    Master Data Services\Server                           1
    Master Data Services\Setup                            1
    SQL Connectivity                                      1
    SQL Engine                                            1
    SQL Engine\Column Stores                              4
    SQL Engine\DB Management                              1
    SQL Engine\Extreme OLTP                               1
    SQL Engine\High Availability and Disaster Recovery    2
    SQL Engine\Management Services                        1
    SQL Engine\Programmability                            4
    SQL Engine\Query Execution                            2
    SQL Engine\Query Optimizer                            4
    SQL Engine\Replication                                2
    SQL Engine\Search                                     1
    SQL Engine\Service Broker                             1
    SQL Engine\Spatial                                    1
    SQL Engine\SQLOS                                      1
    SQL Engine\Unstructured Data                          1
    SQL Server Client Tools                               3
    SQL Server Client Tools\Database Performance Tools    1
    SQL Server Client Tools\Powershell                    1
    SQL Server Client Tools\SSMS                          3

     

    ·         The release build is 12.0.2430.0 

    You can find more details on the official page: http://support.microsoft.com/kb/2999197/en-us

    Cheers

    Damian 

     

     

     

  • Transactional replication - how to initialize a subscription from a backup

    Today I had a chance to initialize a subscription in transactional replication from a backup. The database is quite big (120 GB), so I thought it would be better to use this method to start the replication process.

    Unfortunately this is not possible from the replication wizard, so you have to change the publication and subscription definitions manually.

    Here are the steps, assuming you are setting up a new replication:

    1) Create a publication script and manually change the @allow_initialize_from_backup parameter of the sp_addpublication command to "true". In my case it was set to "automatic" because I used the wizard to generate the script. I think it is useful to set the other parameter, @immediate_sync, to "true" as well.

    2) Run the script on the publisher server and then create a backup of the database.

    3) Restore the database on the subscriber.

    4) Go back to the publisher and create a subscription script, but don't run it yet. Look into it and add these parameters to the sp_addsubscription procedure:

    @sync_type = "initialize with backup"

    @backupdevicetype = "disk"  (apart from "disk" you could also use "tape" or "logical")

    @backupdevicename = "path to the backup"

    When you use a logical backup device, put the name of that device in @backupdevicename.
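    Putting steps 1 and 4 together, the relevant fragments of the two generated scripts could look like this. This is only a sketch: the publication, subscriber, database, and path names below are placeholders, and all the other wizard-generated parameters of sp_addpublication and sp_addsubscription stay exactly as the wizard produced them.

    ```sql
    -- Step 1: in the generated publication script, change this parameter
    -- ('MyPublication' is a placeholder for your own publication name)
    EXEC sp_addpublication
        @publication = N'MyPublication',
        @allow_initialize_from_backup = N'true',  -- was 'automatic' in the wizard output
        @immediate_sync = N'true';
        -- ...all other wizard-generated parameters stay unchanged...

    -- Step 4: in the generated subscription script, add the backup parameters
    EXEC sp_addsubscription
        @publication = N'MyPublication',
        @subscriber = N'SubscriberServer',
        @destination_db = N'MyDatabase',
        @sync_type = N'initialize with backup',
        @backupdevicetype = N'disk',              -- 'tape' and 'logical' are also allowed
        @backupdevicename = N'D:\Backups\MyDatabase.bak';
        -- ...all other wizard-generated parameters stay unchanged...
    ```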

     

    I know that most of you are aware that replication can be initialized using this technique. I wrote this blog post so that I don't forget about this "feature" in the future.

    Cheers

     

     

     

     

     

  • Swedish SQL Server User Group - my sessions in September

    I had the great privilege of delivering two sessions in Sweden in September. The first was in Malmö and the second in Stockholm.
    The talks were about wait statistics in SQL Server ("All you should know about waits"). I realized that in 90 minutes I am not able to cover just *ALL* of wait statistics, so I slightly modified the topic to "All IMPORTANT you should know..."
    If you are interested in the presentation deck you can find it here
      
    Thanks to the Swedish MVPs, and especially to Johan Ahlen, Steinar Andersen and Lars Utterstrom, for inviting me to Sweden.
  • Trace flags - TF 7806

    Trace flag 7806 is necessary when a DBA would like to use the dedicated administrator connection (DAC) in SQL Server Express. The DAC is turned off when you install SQL Server, but it is good practice to turn it on after the installation is done. A DBA will really need access to an unresponsive database server, and with the DAC active he or she has a much better chance of getting the work done. However, this feature is not supported in the SQL Server Express edition by default.

    Microsoft added trace flag 7806 to enable this feature in SQL Server Express.

    As this is a global flag, we have to turn it on at the service level: add the parameter -T7806 to the startup parameter list and restart the service.
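    After the restart, a quick way to confirm the flag is active is DBCC TRACESTATUS (a small sketch; -1 asks for the global flag state):

    ```sql
    -- Check whether trace flag 7806 is enabled globally
    DBCC TRACESTATUS (7806, -1);
    -- The Status column should show 1 once the service started with -T7806
    ```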

    After you have done that, you are able to use the DAC in SQL Server Express. For example, you can try to connect using the sqlcmd tool:

    sqlcmd -S localhost\sqlexpress -E -A

    I am assuming that on your local server there is a SQL Server Express instance called "sqlexpress" and that you connect to this instance using your Windows credentials.

    When you are connected, try running this query:

    SELECT S.session_id
    FROM sys.tcp_endpoints AS E
    JOIN sys.dm_exec_connections AS S
        ON E.endpoint_id = S.endpoint_id
    WHERE E.name = 'Dedicated Admin Connection'

     

    One last remark: the DAC can be used only by sysadmins.

  • Trace flags - TF 1117

    I had a session about trace flags this year at the SQL Day 2014 conference, which was held in Wrocław at the end of April. The topic is important to most DBAs, and the reason I did it was that I sometimes forget about various trace flags :). So I decided to prepare a presentation, and I think it is a good idea to write posts about trace flags, too.

    Let's start then - today I will describe TF 1117. I assume that we all know how to set a trace flag using startup parameters, or at the session or query level. I will always note whether a trace flag is local or global, to make sure we know how to use it.

    Why do we need this trace flag? Let's create a test database first. It is a quite ordinary database: it has two data files (4 MB each) and a 1 MB log file. The data files can expand by 1 MB at a time and the log file grows by 10%:

    USE [master]

    GO

    CREATE DATABASE [TF1117]

     ON  PRIMARY

    ( NAME = N'TF1117',

         FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117.mdf' ,

         SIZE = 4096KB ,

         MAXSIZE = UNLIMITED,

         FILEGROWTH = 1024KB

    ),

    ( NAME = N'TF1117_1',

         FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117_1.ndf' ,

         SIZE = 4096KB ,

         MAXSIZE = UNLIMITED,

         FILEGROWTH = 1024KB

    )

     LOG ON

    ( NAME = N'TF1117_log',

         FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117_log.ldf' ,

         SIZE = 1024KB ,

         MAXSIZE = 2048GB ,

         FILEGROWTH = 10%

    )

    GO

    Without TF 1117 turned on, the data files do not grow together. When the first file is full, SQL Server expands it, but the other file is not expanded until it is full itself. Why is that so important? The SQL Server proportional fill algorithm directs new extent allocations to the file with the most available space, so new extents will be written to the file that was just expanded. When TF 1117 is enabled, it causes all files in a filegroup to auto-grow by their specified increment at the same time. That means all files keep the same percentage of free space, so we still have the benefit of evenly distributed IO. TF 1117 is a global flag, so it affects all databases on the instance. Of course, if a filegroup contains only one file, the flag has no effect on it.

    Now let's do a simple test. First, let's create a table in which every row fills a single page. The table definition is pretty simple: two integer columns and one fixed-size character column of 8,000 bytes:

    create table TF1117Tab

    (

         col1 int,

         col2 int,

         col3 char (8000)

    )

    go

    Now I load some data into the table to make sure that one of the data files must grow:

    declare @i int

    select @i = 1

    while (@i < 800)

    begin

          insert into TF1117Tab  values (@i, @i+1000, 'hello')

           select @i= @i + 1

    end

    I can check the actual file sizes in the sys.database_files catalog view:

    SELECT name, (size*8)/1024 'Size in MB'

    FROM sys.database_files 

    GO

     

    As you can see, only the first data file was expanded; the other still has its initial size:

     

    name                  Size in MB

    --------------------- -----------

    TF1117                5

    TF1117_log            1

    TF1117_1              4

    There are also other ways of looking at file auto-growth events. One possibility is to create an Extended Events session, and the other is to look into the default trace file:

     

     

    DECLARE @path NVARCHAR(260);

    SELECT    @path = REVERSE(SUBSTRING(REVERSE([path]),

             CHARINDEX('\', REVERSE([path])), 260)) + N'log.trc'

    FROM    sys.traces

    WHERE   is_default = 1;

    SELECT    DatabaseName,  

                  [FileName],  

                  SPID,  

                  Duration,  

                  StartTime,  

                  EndTime,  

                  FileType =

                            CASE EventClass       

                                 WHEN 92 THEN 'Data'      

                                 WHEN 93 THEN 'Log'  

              END

    FROM sys.fn_trace_gettable(@path, DEFAULT)

    WHERE   EventClass IN (92,93)

    AND StartTime >'2014-07-12' AND DatabaseName = N'TF1117'

    ORDER BY   StartTime DESC;

     

    After running the query I can see which file was expanded and how long the growth took, which might be useful from a performance perspective.
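    The Extended Events alternative mentioned earlier could be sketched like this. This is only a sketch: the session name and target file name are hypothetical, and it assumes SQL Server 2012 or later, where the database_file_size_change event is available.

    ```sql
    -- A minimal Extended Events session that captures file growth events
    -- for the TF1117 test database
    CREATE EVENT SESSION [TrackFileGrowth] ON SERVER
    ADD EVENT sqlserver.database_file_size_change
        (ACTION (sqlserver.database_name, sqlserver.sql_text)
         WHERE (sqlserver.database_name = N'TF1117'))
    ADD TARGET package0.event_file (SET filename = N'TrackFileGrowth.xel');
    GO
    ALTER EVENT SESSION [TrackFileGrowth] ON SERVER STATE = START;
    GO
    ```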

     

     

    Now it's time to turn on trace flag 1117. Because the flag must act globally, I enable it with the -1 argument:

    DBCC TRACEON (1117, -1)

     

    I dropped the database and recreated it. Then I ran the queries again and observed the results. After loading the records, I can see that both files were expanded evenly:

    name                  Size in MB

    --------------------- -----------

    TF1117                5

    TF1117_log            1

    TF1117_1              5

    I also found the information in the default trace. The query returned three rows. The last one relates to my first experiment, when the flag was turned off. The other two rows show that the first file was expanded by 1 MB and that right after that operation the second file was expanded, too. This is what this trace flag is all about :)

     

  • Memories about Tadeusz Golonka

    Today at 10:55 AM, Tadeusz Golonka, my greatest Mentor and Teacher, passed away. I had the opportunity to meet Tadek in person several times over the last years. It was always a great experience to see how he shared his energy and passion. I was always impressed, and I had a lot of new ideas after every such meeting or lecture.

    I can remember the meeting in early 2009 and the brilliant speech he gave for us, the MVP community in Poland. We spent two days together and he talked to us the whole time. He gave us examples of how to share our IT passion with other people and how to be better people for others. He was the greatest Mentor I have ever met - I realized this during that meeting. My greatest dream was, and still is, to be "like Tadek". Many times I went to events just to see and hear him on stage ("in action"). I always wanted to have his energy, empathy and passion. Now I have to live without his good words and advice....

    Let me put here the words that Adam Cogan wrote on Tadek's profile on Facebook. I just can't write about that fatal accident. 

    "The circumstances of Tadeusz Golonka death are too tragic. Tad stood up to offer his seat to an elderly lady, he lost his balance and then he slipped and hit the tram door hard. He then fell out of the tram and hit the metal barriers that separate the tram rails from the street. It was a severe accident....

    .. So horrible.  At first it was a miracle is that he survived... he fought for several days.  My thoughts are with his lovely family. The family have asked for blood donations as a symbolic gift. Tad received a lot of blood.  Thank you Tad, you were a wonderful person. I will remember you as a kind man, a gentleman. "

    RIP Tadeusz - you will never be forgotten. You are with us all the time.

     


     

  • Annual SQL Server conference in Poland - SQLDay 2014

    We had a great 3-day conference this year in Poland. SQLDay (7th edition) is an annual community conference. We started in 2008 as part of the C2C (community to communities) conference, and since 2009 SQLDay has been an independent event dedicated to SQL Server specialists.

    This year we had almost 300 people and speakers like Bob Ward, Klaus Aschenbrenner and Alberto Ferrari. Of course there were also many local Polish leaders (MVPs and an MCM :) ).

    If you are curious how it went in Wroclaw this year, just visit the link http://goo.gl/cgNzDl (or try this one: https://plus.google.com/photos/100738200012412193487/albums/6010410545898180113?authkey=CITqmqmkrKK8Tw)

     

    Visit the conference site: http://conference.plssug.org.pl/ 

  • In-Memory OLTP Sample for SQL Server 2014 RTM

    I have just found a very good resource about Hekaton (the In-Memory OLTP feature in SQL Server 2014).

    On the Codeplex site you can find the newest Hekaton samples - https://msftdbprodsamples.codeplex.com/releases/view/114491.

    The samples we had until now were related to the CTP2 version, but the newest ones work with the RTM version.

    Some issues you might have hit when running the previous samples on the RTM version have been fixed:

    Update (Apr 28, 2014): Fixed an issue where the isolation level for sample stored procedures demonstrating integrity checks was too low. The transaction isolation level for the following stored procedures was updated: Sales.uspInsertSpecialOfferProductinmem, Sales.uspDeleteSpecialOfferinmem, Production.uspInsertProductinmem, and Production.uspDeleteProductinmem.

     

  • SQL Server 2014 has come into the light - the RTM will be available on 1st April

  • Strange behaviour on CHECK constraint

    Last Friday I asked myself: imagine I have a CHECK constraint enabled on a column in a table. What would happen if I try to enable it again? 

    My question is not about "expecting some error" - I expect that the engine will check whether the constraint is already enabled, and if it is, nothing will happen.

    If the constraint was off and then turned on, SQL Server would have to load all the data from the table just to check that the constraint is not violated.

    If the constraint is already on and I issue the command "turn it on" again, I expect that no data is loaded into memory.

     

    Let's start from the beginning. I created a new database for testing purposes. It will be easier to navigate the various DMVs later :)

    CREATE DATABASE CheckTEST

    go

    USE CheckTEST

    GO

     

    Now it's time to create a table - let's call it Months. The table has only one column, called MonthNo, and one CHECK constraint that ensures we have a valid month number, from 1 to 12.

    CREATE TABLE dbo.Months (MonthNo TINYINT)

    GO

    ALTER TABLE dbo.Months ADD CONSTRAINT [Months_ValidMonth] CHECK(MonthNo BETWEEN 1 AND 12)

    GO

     

    Let's add some data 

    INSERT INTO dbo.Months( MonthNo )

    VALUES  ( RAND() *12 )

    GO 2000

     

    I know the INSERT will cause an error when a 0 is generated - but that's OK.

     

    The next step is to check what my constraint looks like:

    SELECT * FROM sys.check_constraints

    WHERE name = 'Months_ValidMonth'

    GO

     

    I can see that the value in the is_disabled column is 0 (OK - the constraint is active), so I am sure that no value violating the constraint can be entered into the table. Of course, I could still insert NULL.

     

    Suppose I turned off the constraint:

    ALTER TABLE dbo.Months NOCHECK CONSTRAINT [Months_ValidMonth] ;

    GO

     

    And turned it back again:

    ALTER TABLE dbo.Months WITH CHECK CHECK CONSTRAINT [Months_ValidMonth] ;

    GO

     

    Nothing happened, and the data is fine. Now I clear all buffers:

    DBCC FREEPROCCACHE

    DBCC DROPCLEANBUFFERS

    GO

     

    For this database I should not see any results in the sys.dm_os_buffer_descriptors DMV (or at least I do not want to see any DATA pages that store the actual data of the table):

    SELECT * FROM sys.dm_os_buffer_descriptors

    WHERE database_id = DB_ID()
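    To narrow this down to pages belonging to the Months table specifically, the DMV can be joined to the allocation metadata. This is a sketch to be run in the CheckTEST database; it assumes the table's in-row data, for which sys.allocation_units.container_id maps to sys.partitions.hobt_id:

    ```sql
    -- Count buffer pool pages that belong to dbo.Months in the current database
    SELECT  bd.page_type, COUNT(*) AS pages_in_buffer
    FROM    sys.dm_os_buffer_descriptors AS bd
    JOIN    sys.allocation_units AS au
            ON bd.allocation_unit_id = au.allocation_unit_id
    JOIN    sys.partitions AS p
            ON au.container_id = p.hobt_id
    WHERE   bd.database_id = DB_ID()
      AND   p.object_id = OBJECT_ID(N'dbo.Months')
    GROUP BY bd.page_type;
    ```

    Right after DBCC DROPCLEANBUFFERS this should return no DATA_PAGE rows for the table.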

     

     

    What will happen if I issue the code once again - but only this part of it?

    ALTER TABLE dbo.Months WITH CHECK CHECK CONSTRAINT [Months_ValidMonth] ;

    GO

     

    I expect that NO DATA pages will be loaded into the buffers. The constraint is already enabled, so why load any data?

    The conclusion of this blog post: I was surprised, but the DATA pages were loaded anyway. What is the reason for that?
