
Louis Davidson

  • Speaking at the Nashville SQL Server User's Group on April 28th.

    And they say you can't go home again… On the 28th of April I will be returning to my old home of Nashville to speak at the Nashville SQL Server User's Group. (And where I moved to was very near the home I lived in when I was in high school.)

    The session I will be doing is a one-off session of demos of some of the new T-SQL features in SQL Server 2016 SP1, using code I have culled from the blogs I have done here on SQLBlog.com: Temporal, Row Level Security, Dynamic Data Masking, and a few queries to demonstrate JSON and some of the other new features like CREATE OR ALTER, DROP … IF EXISTS, etc.

    Here is the abstract:

    A Survey of New T-SQL Features in SQL Server 2016 (SP1)

    Last year, Robert Verell gave a presentation to the group with examples of the new features in SQL Server 2016 at the product level. Our promise then was to give deeper-dive sessions on the features, so this presentation is demo-filled, covering the T-SQL features that are new for 2016. The features covered will include:

    Temporal Tables - The new table structures and syntax that let you capture history of all changes to a table, allowing you to query the table as it existed at a certain point in time in a very straightforward manner.

    Row Level Security - A mechanism to filter a user's access to data at the row level, in a manner that your code (stored procedures and views included) will honor with no changes to your existing code.

    Data Masking - Lets you define a pattern to mask a column's output in your T-SQL statements to obfuscate sensitive data.

    JSON - Allows you to format the output of a query as a JSON document (much like the FOR XML clause does for XML).

    New Syntax and Commands - Even if you don't use any of these new features in your coding, I will also show a selection of new functions and syntax constructs that you will definitely want to know about.
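
    Purely to give a flavor of the syntax the session covers, here is a quick sketch (my own illustration, not one of the session demos; dbo.Demo is a made-up table) of DROP … IF EXISTS, CREATE OR ALTER, and FOR JSON:

    DROP TABLE IF EXISTS dbo.Demo; --no error if the table does not exist
    GO
    CREATE TABLE dbo.Demo
    (
        DemoId  int PRIMARY KEY,
        Name    nvarchar(30) NOT NULL
    );
    GO
    CREATE OR ALTER VIEW dbo.DemoView --creates the module, or alters it if it already exists
    AS
    SELECT DemoId, Name
    FROM   dbo.Demo;
    GO
    SELECT DemoId, Name
    FROM   dbo.Demo
    FOR JSON AUTO; --formats the output as a JSON document, much like FOR XML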

  • SQL Saturday Birmingham #593 is this weekend!

    It has been a while since I was last in Birmingham (Alabama) for a SQL Saturday. They didn't have one last year, and the previous event was on the same weekend as Music City Code, so I wasn't able to attend. This year, I can't make it to SQL Saturday in Richmond due to this event, so I am fairly sure things have balanced themselves out.

    I will be doing two sessions (my two current sessions that I have been doing recently), the first one being my favorite session ever to give: lots of demos that work, and material I actually know by heart. The second needs no prep whatsoever, so I don't hate it either.

    Let Me Finish... Isolating Write Operations

    OLTP databases can be constantly written to and reporting databases are written to at least periodically. In order to ensure consistent results, connections must be isolated from one another while executing, ideally with the lowest possible cost to concurrency.  How this isolation is handled is based on the isolation level, whether the classic lock based or the newer optimistic scheme of the in-memory OLTP engine is used, or even if both engines are enlisted in the same transaction. In this session we will look at examples of how SQL Server isolates reading and writing operations from other writing operations to explore how this may affect your application through error messages and performance hits.

    Then I am doing a second professional development session with Robert Verell, and any other career nerd types that are there for the conference:

    Discuss Your Career (Panel)

    As a database professional, you have plenty of choice in your career path. Should you stay at the same company for 20 years, or change every few years? Much depends on what matters most to you. Do you want higher income, regular work hours, training, vacation time, group insurance, job security, lots of travel, telecommuting, or the possibility to advance to management? The panel will share observations drawn from decades of combined experience. Discussion will include at least one person who has spent over 19 years with the same employer, one person who has changed careers very recently, and everyone in attendance who is willing to give an opinion.

    The goal of this session will be to have a discussion about career matters. If attendees have questions about how to manage their career, we will discuss them; or, if no one has questions, there will be slides with different priorities to discuss. I would note that this session is not specifically about whether or not to change companies, positions, etc. The goal is to discuss the pros and cons of staying in one place, changing frequently, etc. There are so many factors to consider that it is maddening. If you don't learn anything, maybe you will teach someone else something about their career. And if you neither learn nor teach, the prize giveaways start right after this session, so it is a no-lose situation.

  • Music City Code, a great conference that you may have never heard of yet

    For quite a few years, Nashville, and then Chattanooga, had a conference called Devlink. It was low on cost and high on education, and run by the awesome John Kellar (@johnkellar), who wrote the foreword of my 2012 book. It was an amazing conference and was the place where I learned (well, in many cases learned about) a lot of the stuff that exists in the non-Transact-SQL world that I keep my head firmly planted in, along with a great database track where lots of my SQL Community friends presented. Kellar ended the Devlink conference a few years ago, and I, along with a lot of others, was pretty bummed.

    To fill this void, another great person (and fellow Microsoft MVP), Gaines Kergosien (@gainesk), and his team started putting on Music City Code (@musiccitycode, musiccitycode.com), a larger, general-purpose programming conference in Nashville. One of the tough parts about putting on a conference is getting the word out. It isn't like one can advertise on TV and radio, so I want to do my part to help you, as a SQL Server professional, decide to submit to speak and/or attend Music City Code. I actually didn't hear about Music City Code until I met Gaines at a session at the Nashville SQL Server BI user group where I was doing a lightning talk on picking speakers.

    With 3 days of sessions (including longer day-long sessions and typical hour-long ones too), there will be plenty to get excited about. The dates are 1-3 Jun 2017, at Vanderbilt University in Nashville (Music City, y'all). I spoke last year, and plan to submit several sessions again this year (once I figure out what I want to speak about!) The call for speakers (https://www.musiccitycode.com/p/call-for-speakers-4) opened on Feb 1 and the deadline is March 1st, so you will want to submit as soon as you can. Last year, the only thing the conference lacked for me was more sessions on data-oriented topics. There were some really great sessions of general appeal, and a major highlight was that they had Melinda Walker from OneSquigglyLine.com live-draw several of the sessions, including one on IT Community:

    [photo: Melinda Walker's live drawing of the IT Community session]

    There was plenty of learning, fun (did I mention they did a jam session every day at lunch?), and opportunities to meet interesting people, hang out, and talk about computer topics (all for a tremendously amazing price!), like here, where we were discussing "Moving Past Scrum?":

    [photo: discussing "Moving Past Scrum?"]

    Why not give it a try? Just check musiccitycode.com for details.

  • A new (to me, and possibly you) SSMS feature - Query Plan Comparing

    Wow, Microsoft has really changed in culture recently. The new rapid release cycle for SSMS is seemingly paying dividends in a major way. In a recent build of SSMS (okay, perhaps not "recent"; more like October of 2015, according to this blog by the MSSQL Tiger Team), they added the ability to compare query plans. I tried it on a very large query with a lot of differences, and it was kind of hard to follow, but that is true with any large plans. Even using the greatest query plan reading tool of them all, SQL Sentry Plan Explorer, I was lost in the details. But for your typical "make a change to a query/index and see what has changed" case, it is pretty nice.

    As a quick example, take the following query using WideWorldImporters:

    SELECT *
    FROM   Sales.Invoices
            JOIN Sales.InvoiceLines
                ON Invoices.InvoiceId = InvoiceLines.InvoiceId
    WHERE  AccountsPersonID = 3105;

    Let's drop the index on AccountsPersonID first (DROP INDEX FK_Sales_Invoices_AccountsPersonID ON Sales.Invoices;), and execute the query after enabling the actual execution plan. 360 rows will be returned, and you will see the following plan (along with a missing index hint):

    [screenshot: query plan without the index, with a missing index hint]

    Right-click the query plan and save the execution plan somewhere. Next, add back the index (using the script from SSMS, with some formatting help from SQL Prompt):

    CREATE NONCLUSTERED INDEX FK_Sales_Invoices_AccountsPersonID ON Sales.Invoices
    (
        AccountsPersonID ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON USERDATA;
    GO

    EXEC sys.sp_addextendedproperty @name = N'Description',
        @value = N'Auto-created to support a foreign key',
        @level0type = N'SCHEMA', @level0name = N'Sales',
        @level1type = N'TABLE', @level1name = N'Invoices',
        @level2type = N'INDEX',
        @level2name = N'FK_Sales_Invoices_AccountsPersonID';
    GO

    Then execute the query again and you will see the following plan:

    [screenshot: query plan with the index in place]

    Right-click the plan and choose Compare Plan. This will give you an open-file dialog to choose the file we stored earlier, and then a few windows. Graphically, by default it gives you a view that highlights the operators that are similar. In this case, the segments for the join to InvoiceLines are the same except for the cost. In the Compare Options window, you can also have it highlight the dissimilar operations, and it tells you what each of the highlighted items is; in this case it highlighted only Key Lookup operators in red and Index Seeks in blue:

    [screenshot: the plan comparison view]

    Beyond this, in the Properties window you can see specific differences in the plan by selecting each operator. For example, with the SELECT operator selected, as I have in the image above, you will see properties such as the following:

    [screenshot: the Properties window for the compared SELECT operators]

    You can see the Actual Number of Rows highlighted here, along with an explanation of what it means, together with the other bits of information that are compared: estimated number of rows, estimated costs, time to compile, estimated available memory, estimated degree of parallelism, set options, wait stats, etc. Clicking on each operator, you can see comparisons between them as well. Operators with a similar counterpart automatically jump to the matching one in the other plan when you click; for the others, you can choose any two to compare.

    All in all, a dandy amount of information one can find about two query plans, with an easy way to compare two operators. When the query plan gets a lot larger, it can still be hard to pinpoint big differences (I had a query with 50+ operators whose plan differed wildly from SQL Server version to version, so in that case it mainly told me just how different they were!), but one more tool to help look at query plan differences is definitely a great thing!

  • Saving a Few Lines of Code by Performing Multiple Operations in a Single Command

    I was in Mike Byrd's session on in-memory technologies the other day, and I saw him do something I had never seen, which prompted me to write this blog today. It has nothing to do with in-memory, as the title notes, but rather with stacking commands in a single statement in a way I had not seen before. The command was SET STATISTICS. For my entire career, I have put each one on an individual line:

    --create a table for the demos
    CREATE TABLE dbo.Table1
    (
        Table1Id    int PRIMARY KEY
    );
    GO

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    SET STATISTICS PROFILE ON;
    SET STATISTICS XML ON; --Not typical to use both of these, I suppose but it is possible!
    GO
    SELECT *
    FROM   Table1;
    GO
    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;
    SET STATISTICS PROFILE OFF;
    SET STATISTICS XML OFF;

    But then I saw Mike use the following syntax (sans PROFILE and XML):

    SET STATISTICS IO, TIME, PROFILE, XML ON;
    GO
    SELECT *
    FROM   dbo.Table1;
    GO
    SET STATISTICS IO, TIME, PROFILE, XML OFF;

    Wow, that is a lot easier! (And yeah, using SQL Prompt I can set a snippet to take away typing, but this saves space in any case.)

    -------------------------

    Another command that I often use with commas is DROP (or DROP … IF EXISTS). For example, say we add a Table2 object to the example (including a FOREIGN KEY constraint to help make a point):

    CREATE TABLE Table2
    (
        Table2Id    int PRIMARY KEY,
        Table1Id    int REFERENCES dbo.Table1 (Table1Id)
    );


    Now, if we execute the following:

    DROP TABLE Table2, Table1;

    The tables will be dropped. However, it turns out that this form of DROP TABLE does not really execute as a single command. If we recreate the tables and drop the objects in the wrong order (due to the reference from Table2 to Table1), we see an error:

    CREATE TABLE dbo.Table1
    (
        Table1Id    int PRIMARY KEY
    )
    CREATE TABLE dbo.Table2
    (
        Table2Id    int PRIMARY KEY,
        Table1Id    int REFERENCES dbo.Table1 (Table1Id)
    )
    GO

    DROP TABLE dbo.Table1, dbo.Table2
    GO

    This causes the following error message to be thrown:

    Msg 3726, Level 16, State 1, Line 18
    Could not drop object 'Table1' because it is referenced by a FOREIGN KEY constraint.

    For almost any command, you would expect it to behave as an atomic operation. However, in this case, it does not. Table2 is actually dropped:

    SELECT name
    FROM   sys.tables
    WHERE  name in ('Table1','Table2')

    name
    ----------------------
    Table1

    So while it is an excellent tool for saving a few keystrokes, it will not save you from the mistake of mis-ordered objects any more than:

    DROP TABLE Table1;
    DROP TABLE Table2;

    For this, you would need to do something with transactions; most simply, something using a TRY…CATCH block:

    --Recreate Table2, again
    CREATE TABLE dbo.Table2
    (
        Table2Id    int PRIMARY KEY,
        Table1Id    int REFERENCES dbo.Table1 (Table1Id)
    );
    GO

    BEGIN TRY
        BEGIN TRANSACTION;
        DROP TABLE dbo.Table1;
        DROP TABLE dbo.Table2;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        ROLLBACK;
        THROW;
    END CATCH;

    Now you get the error message, and dbo.Table2 is not dropped. But realistically, most of the time when we use a comma-delimited list of tables to drop, it is in an ad-hoc manner, and something like the following is good enough:

    DROP TABLE IF EXISTS dbo.Table1, dbo.Table2;

    Now simply repeat until there are no error messages, and the tables are gone! The only downside is that IF EXISTS suppresses the error for a name that doesn't exist, so if you spell a table name incorrectly, you may go slightly mad.
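
    For example, misspell a name (dbo.Tabel1 here, a deliberate typo) and nothing tells you that nothing was dropped:

    DROP TABLE IF EXISTS dbo.Tabel1; --completes without error, but dbo.Table1 still exists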

  • T-SQL Tuesday #86 My all-time favorite personal SQL Server Enhancement Request: Automatically maintained columns

    I have heard the name T-SQL Tuesday for, like, ever; but I have always been so into my own bundle of insanity that I never took the time to figure out what it is. That changes today, because I read the post on Adam Machanic's (@adammachanic) blog that outlined the rules (here), and I have been keen to get back to blogging regularly.

    When I saw what the topic was this month, hosted by Brent Ozar, I was even more excited: SQL Server Bugs & Enhancement Requests. Over the years, I have posted a lot of enhancement requests on the Connect site, most of them in and around the SQL Server 2005 release. I was super enthusiastic about my requests, and made quite a few. I knew that most would fail, but one has stuck with me for years as a feature I wanted to get into the product.

    Automatically maintained columns. https://connect.microsoft.com/SQLServer/feedback/details/203570/add-automatically-maintained-columns-perhaps-just-update-date-update-user

    The concept is very similar to a DEFAULT constraint, with two differences:

    1. It will work on an UPDATE operation, without specifying DEFAULT.

    2. It could be configured to disallow the user from entering a value.

    My proposed syntax was pretty simple:

    AUTO [WITH OVERRIDE] (scalar expression)

    Now I realize that 10 years ago I didn't take terribly long to consider the syntax; WITH was a terrible thing to add, and AUTO is already a keyword, so I am going to rename it: AUTO_DEFAULT (scalar expression, [option]). Having thought a bit more about this in the years since writing it, I realized there were a few more options that would be nice. I was terrible at syntax parsing in college, but the syntax itself is not important. Temporal in SQL Server 2016 has similar syntax for the new temporal columns, which I got really excited about the first time I saw it: SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL. Maybe in vNext?
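
    For reference, here is roughly what that temporal syntax looks like in a complete declaration (just a sketch; the table and history table names are made up for illustration):

    CREATE TABLE dbo.TemporalExample
    (
        TemporalExampleId int NOT NULL PRIMARY KEY,
        Value int NOT NULL,
        SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
        SysEndTime datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
        PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.TemporalExampleHistory));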

    Possible options could include:

    • ALLOW_OVERRIDE -- let user input override scalar expression.
    • INSERT_ONLY -- only works for the insert operation. This makes it a kind of super default, and protects the column from future changes.

    It could be implemented as a constraint, or like dynamic data masking, just be a part of the table structure and syntax. I will leave that ambiguous as it isn't important.

    The major value of the feature would be to eliminate trigger use for the very typical need to automatically set a value for a column. It would also allow (as Rob Farley noted back in 2009) the OUTPUT clause to work when this is the only need you have for a trigger, and SCOPE_IDENTITY would not be affected like it is with an INSTEAD OF trigger. (Allowing SCOPE_IDENTITY to work with an INSTEAD OF trigger is also high on my list of fixes.)

    As an example, consider a table such as the following:

    CREATE TABLE dbo.Test
    (
         TestId int NOT NULL CONSTRAINT PKTest PRIMARY KEY, --assume a SEQUENCE object named dbo.Test_SEQUENCE exists; the value should always be the next sequence value
         Value int NOT NULL,
         RowCreateTime datetime2(0) NOT NULL, --should always be set on INSERT, and not modifiable on UPDATE
         RowModifiedTime datetime2(0) NOT NULL, --should always be the server time when the row was modified
         RowModifiedUserName nvarchar(128) NOT NULL --should be the original login of the user who modified the row, or whatever the user wants it to be
    );

    The proposed syntax to fill the need would be:

    CREATE TABLE dbo.Test
    (
         TestId int NOT NULL CONSTRAINT PKTest PRIMARY KEY AUTO_DEFAULT (NEXT VALUE FOR dbo.Test_SEQUENCE, INSERT_ONLY),
         Value int NOT NULL,
         RowCreateTime datetime2(0) NOT NULL AUTO_DEFAULT(SYSDATETIME(), INSERT_ONLY),
         RowModifiedTime datetime2(0) NOT NULL AUTO_DEFAULT(SYSDATETIME()),
         RowModifiedUserName nvarchar(128) NOT NULL AUTO_DEFAULT(ORIGINAL_LOGIN(), ALLOW_OVERRIDE)
    );

    Now, if the user executed:

    INSERT INTO dbo.Test(TestId, Value, RowModifiedTime)
    VALUES (-1000, 1, '1900-01-01')

    I would either like an error to occur, saying that TestId and RowModifiedTime cannot be set in a DML statement, or to have it simply override what was specified. RowModifiedUserName is acceptable to modify, since it allows override:

    TestId     Value RowModifiedTime     RowModifiedUserName
    ---------- ----- ------------------- -------------------
             1     1 2017-01-10 16:09:03 domain\louis

    Now, if I execute the following:

    UPDATE dbo.Test
    SET    Value = 2,
           RowModifiedUserName = 'Fred'
    WHERE  TestId = 1;

    The data should change to:

    TestId     Value RowModifiedTime     RowModifiedUserName
    ---------- ----- ------------------- -------------------
             1     2 2017-01-10 16:15:23 Fred

    Now, if I execute the following:

    UPDATE dbo.Test
    SET    Value = 3
    WHERE  TestId = 1;

    The data should change to:

    TestId     Value RowModifiedTime     RowModifiedUserName
    ---------- ----- ------------------- -------------------
             1     3 2017-01-10 16:19:25 domain\louis

    It is definitely possible to write triggers to make this happen, but it is a tedious task; this feels like a DDL configuration that would save thousands of lines of trigger code from needing to be written when you need a value that cannot be overwritten by the user (or, in my case, where you don't trust developers, even yourself, to always get the values changed for all operations without it occurring automatically).
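
    For comparison, here is a minimal sketch of the kind of trigger the feature would eliminate (my own hypothetical trigger, covering only the UPDATE case and ignoring the ALLOW_OVERRIDE behavior, which would take still more code):

    CREATE TRIGGER dbo.Test_MaintainRowModifiedColumns
    ON dbo.Test
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        --overwrite the maintained columns, regardless of what the user specified
        UPDATE t
        SET    RowModifiedTime = SYSDATETIME(),
               RowModifiedUserName = ORIGINAL_LOGIN()
        FROM   dbo.Test AS t
               JOIN inserted AS i
                   ON i.TestId = t.TestId;
    END;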

     
  • If a Tree Falls in the Forest, and No One is Around To Hear It…

    You no doubt have heard this old philosophical question, which completes with, "Does it make a sound?" As nerds, we all know that the answer is that it certainly does make a sound. However, taking the question to a more technical place (how much sound does it make, and what is the impact of that sound?) is far more interesting, and far more complex.

    This brings me to my data question. If an order is processed in a store, but the expected data is not created, did that order ever occur?

    Very often, the staff of a business are very focused on pleasing the customer and making sure they get their product, but due to software limitations they may not end up capturing information about every sale in a satisfactory manner. Most of the blame I have seen lies in software that doesn't meet the requirements of the customer, making the desired details tedious to capture when the process falls outside the norm. Very often the excuse programmers give is that too much of the work to build the system would go toward the atypical cases, but requirements are requirements, and it is generally essential that every action that occurs in a business is captured as data.

    In the same way that we know that the tree made a sound, we know that the order did actually occur. However, without data being captured about the transaction, this lack of evidence may cause several things to occur.

    1. If the customer comes back later with a receipt that has a transaction number that was never written to the database, they may be questioned as if they have stolen the product. Calling a customer a thief is not optimum for most businesses that file taxes.

    2. The income from the sale is likely going to make cash balances off, causing headaches trying to reconcile cash amounts. (Alternatively, income may be stolen. I never did this when I worked in retail, but sometimes an employee takes a sale, fakes the paperwork, and pockets the money. Again, I never did this, and the statute of limitations surely has passed in any case.)

    3. Inventory counts will be off, since product has left the building and hasn't been counted.

    The issues noted are just the short term, functional issues. Generally speaking though, the most important reason we store data is to report on data.

    From a data science perspective, when data is not stored about a transaction, it is impossible to determine how important that transaction was. Every order a customer places is important for the income from the individual purchase, naturally. But it is also important to help predict the next sale. Information like when the sale occurred, whether any discounts or coupons were applied, and the demographics of the purchaser helps to land the next sale, both from that same customer and from future customers.

    Now, consider what happens when incomplete or inaccurate information is entered because the database won't allow fringe cases to be stored. Some things might be acceptable (if you can at least capture the income and inventory changes), but incomplete information can be worse than no information for reporting. If all transactions appear to be of the typical variety, it will look like fringe cases never occur. (And sometimes the fringe cases turn out to be an interesting percentage of what is occurring, leading to opportunities with other similar customers.)

    So poorly created and managed databases could actually be harming future business. While pleasing the customer is always important, equally important is making sure that the systems we are creating as architects and programmers actually mirror the reality that is occurring for our clients. So when the tree falls, not only is someone there to hear it, but the impact of the loss of the tree (reduced oxygen, loss of bird and squirrel habitats, future fire hazards) is calculable and can be turned into actions (like building the squirrels new housing complexes!)

  • My last SQL Event as a Nashvillian… SQL Saturday Nashville #581

    Of course, technically, I live in Thompsons Station, about 20 miles south of Nashville, and technically this event will be held in Murfreesboro, also 20 miles south of Nashville, and 30 miles south of Thompsons Station. But this is splitting hairs. I am leaving the Nashville area for the Chattanooga area (again, 20 miles away, in Cleveland, TN) at the end of the month, and this one is going to be sad for me. It has been a great ride working with Robert Verell (@sqlcowbell), Tamera Clark (@tameraclark), Jon Boulineau (@jboulineau), Kerry Tyler (@airbornegeek) and Daniel Glenn (@danielglenn) these past few years. It stinks leaving the area, but I expect to see these folks quite often in Nashville, Chattanooga, and other SQL Server venues like the Summit (and sooner or later I expect to see a few more of those names at the MVP Summit, assuming I am still attending!)

    This year, I will be doing two sessions. The first is my concurrency session that I premiered last year at SQL Saturday in Nashville. It has improved a lot with stuff I have learned over the year, and it is one of the best-written presentations I have ever done (even if it still tries to teach too much in an hour.)

    Let Me Finish... Isolating Write Operations

    OLTP databases can be constantly written to and reporting databases are written to at least periodically. In order to ensure consistent results, connections must be isolated from one another while executing, ideally with the lowest possible cost to concurrency.  How this isolation is handled is based on the isolation level, whether the classic lock based or the newer optimistic scheme of the in-memory OLTP engine is used, or even if both engines are enlisted in the same transaction. In this session we will look at examples of how SQL Server isolates reading and writing operations from other writing operations to explore how this may affect your application through error messages and performance hits.

    Then I am doing a second professional development session at lunch with Robert Verell:

    Discuss Your Career (Panel)

    As a database professional, you have plenty of choice in your career path. Should you stay at the same company for 20 years, or change every few years? Much depends on what matters most to you. Do you want higher income, regular work hours, training, vacation time, group insurance, job security, lots of travel, telecommuting, or the possibility to advance to management? The panel will share observations drawn from decades of combined experience. Discussion will include at least one person who has spent over 19 years with the same employer, one person who has changed careers very recently, and everyone in attendance who is willing to give an opinion.

    The goal of this session will be to have a discussion about career matters. If attendees have questions about how to manage their career, we will discuss them; or, if no one has questions, there will be slides with different priorities to discuss. I would note that this session is not specifically about whether or not to change companies, positions, etc. The goal is to discuss the pros and cons of staying in one place, changing frequently, etc. There are so many factors to consider that it is maddening. Come with questions, answers, opinions, and the crunchiest lunch choices that Tamera and team have served up. If you don't learn anything, at least you will have something to eat and a seat to sit in while you eat it.

  • Books are DONE…

    Finally, I can say I am done with a couple of books I had been working on, pretty much simultaneously at times, over the past few months:

    Exam Ref 70-762 Developing SQL Databases (https://www.microsoftpressstore.com/store/exam-ref-70-762-developing-sql-databases-9781509304943)

    I wrote the first half of the book, with Stacia Varga finishing things out. It was a challenging project for several reasons, but I am very proud of the final product. If you are studying for the exam, you should find what you need in the book to help you pass the exam (no promises, and you shouldn't think the book is all you need… experience is important too!) It was interesting to get back to working with exam material. My favorite contract work I have ever done has been working on exams, writing questions and objective domains (the outline of which became the backbone of the book.)

    Pro SQL Server Relational Database Design and Implementation (http://www.apress.com/us/book/9781484219720)

    Finally, the book is versionless. We (Jessica Moss reprises her role of contributor, writing one of the chapters) cover many new features of SQL Server 2016 (memory-optimized tables, columnstore indexes, Dynamic Data Masking, Row Level Security, etc.), but the core of the book is still relational database design. In the future, we will consider a new edition when things change tremendously. Fortunately, SQL Server 2016 SP1 came out as I was making final edits, so the notes about licensing restrictions could be removed (if you don't know what I mean, read this blog from Aaron Bertrand: https://sqlperformance.com/2016/11/sql-server-2016/big-deal-sp1). You may also know that SQL Server vNext CTP 1.1 is already out (https://msdn.microsoft.com/en-us/library/mt788653.aspx), so things are moving more and more rapidly, and writing books to keep up with minor (if awesome) new features is almost impossible. But I still feel that books are the best way to learn a complete concept.

     

    After the holidays, I will be kicking into gear again, with SQL Saturday Nashville coming up just after the first of the year (where I will be presenting) and several other writing tasks starting. But for now, it is awesome to sit on my couch and experience the output of other writers (on my TV, naturally). Happy Holidays to everyone!

  • Quickie - SSIS Expression Function - REPLACENULL

    When I am not designing a database, I am usually loading one with data (a good percentage of my time is spent designing data warehouse structures, along with relational ones). Using SSIS is often a challenge, for many reasons, but the most challenging part for me is the expression language. In a Derived Column transformation, a common task is to turn a NULL value into a surrogate value (we use -1 for Unknown, -2 for Not Applicable). The expression for this prior to SQL Server 2012 was:

    (ISNULL(ColumnName) ? "ReplaceWithThisValue" : ColumnName)

    I looked that up every..single..time I used it. "?" means THEN…not IF? ":" means ELSE? Huh? I know this comes from one of those cool languages that I have never mastered, but as I was searching for the syntax again a few days ago, I found REPLACENULL. I had never seen this function before, so I figured I might not be the only one. (And perhaps if a commenter feels like telling me how dumb I am for not knowing about other new expression features, I will not be offended.) REPLACENULL won't replace every use of these and the other symbols one must use in SSIS expressions, but it does replace one of the more common ones. So what was:

    ISNULL(column_dim_key) ? -2 : column_dim_key

    Becomes:

    REPLACENULL(column_dim_key, -2)

    Sure, it isn't anything more than the Transact-SQL ISNULL function (which itself is oddly named), but it is so much easier to understand, at least for me (and for at least one influential Microsoft customer, I imagine.) Now if they will just give us the ability to use a CASE expression!
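
    For what it is worth, the closest you can get to CASE today is nesting the conditional operator, which gets ugly fast. A sketch (hypothetical column, mapping NULL to -2 and negative values to -1):

    ISNULL(column_dim_key) ? -2 : (column_dim_key < 0 ? -1 : column_dim_key)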

  • Curious - How does STRING_SPLIT (2016) handle empty/NULL Input?

    If you haven't seen STRING_SPLIT, it is pretty awesome: a built-in version of something we have all been hand-coding for many years, which you can read about in Erland Sommarskog's great article here, along with many other concepts involved in sending a set of data to a stored procedure.

    As a quick starting example, consider the string of characters: A,B,C;D. Using a separator of ',', this will give you three separate values, and the STRING_SPLIT function will return those parsed values as rows in a tabular output.

    DECLARE @StringToSplit varchar(30) = 'A,B,C;D'; --could be varchar(max)

    SELECT *
    FROM   STRING_SPLIT(@StringToSplit,',');


    This returns: 

    value
    ------------------------------
    A
    B
    C;D


    Note: BOL (https://msdn.microsoft.com/en-us/library/mt684588.aspx) doesn't specify an output value length for the column, but when I added an "INTO hold" clause, the column length varied by input: the column was varchar when the variable was varchar, and nvarchar when it was nvarchar. The output type for my call was varchar(30).
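
    If you want to check the output type yourself, here is a sketch of the sort of test I did (dbo.Hold is just a scratch table):

    DECLARE @StringToSplit varchar(30) = 'A,B,C;D';

    SELECT value
    INTO   dbo.Hold
    FROM   STRING_SPLIT(@StringToSplit,',');

    --inspect the type and length that STRING_SPLIT produced
    SELECT c.name, t.name AS typeName, c.max_length
    FROM   sys.columns AS c
           JOIN sys.types AS t
               ON t.user_type_id = c.user_type_id
    WHERE  c.object_id = OBJECT_ID('dbo.Hold');

    DROP TABLE dbo.Hold;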

    But what about the two versions of an empty value: '' (zero-length/empty string) and NULL? My NULL sense told me that the NULL input would return a single row with NULL, and the empty string a single empty-string row. Of course, I was wrong, and it makes sense why. (A row of NULL would be really annoying, especially if you want to use the output as an exclusion list, because A NOT IN (B, NULL) always evaluates to UNKNOWN, never TRUE.)

    For example, say the output could include NULL. You could end up with something like the following, where even though the input value of A is not in the NOT IN list, no rows are returned:

    SELECT *
    FROM   (VALUES ('A')) AS List(Col)
    WHERE  Col NOT IN ( SELECT *
                        FROM   (VALUES ('B'), (NULL)) AS Excludes(Col) );

    Change NULL to 'C' and you get the result you expect.
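
    (If you do need an exclusion list that might contain NULLs, the usual workaround, sketched here, is NOT EXISTS, since the equality test simply never matches on the NULL row:)

    SELECT *
    FROM   (VALUES ('A')) AS List(Col)
    WHERE  NOT EXISTS ( SELECT *
                        FROM   (VALUES ('B'), (NULL)) AS Excludes(Col)
                        WHERE  Excludes.Col = List.Col );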

    Ok, so let's look at the output. What does '' (empty-string) return?

    DECLARE @StringToSplit varchar(30) = '';

    SELECT *
    FROM   STRING_SPLIT(@StringToSplit,',');

    Just as expected, a single row with an empty value (which is easier to see if you keep the rows-affected message on, because the next output looks the same but is a different result):

    value
    ------------------------------


    (1 row(s) affected)

    Any length of value between the separators will return a value. BOL notes that if you have spaces between separators, you will get outputs of various lengths, so you can do the following:

    DECLARE @StringToSplit varchar(30) = ', ,  ,   ,  bob  '; --could be varchar(max)

    SELECT *
    FROM   STRING_SPLIT(@StringToSplit,',');

    And you will get 5 rows: one empty string, one a single space, one two spaces, one three spaces, and one '  bob  '. Using RTRIM and LTRIM, you can deal with the spaces at the front and back of your strings. But what about NULL (and not the string value 'NULL', which will confuse you as you are writing early in the morning)?

    DECLARE @StringToSplit varchar(30) = NULL;

    SELECT *
    FROM   STRING_SPLIT(@StringToSplit,',');

    This returns:

    value
    ------------------------------

    (0 row(s) affected)

    A NULL input returns 0 rows, which makes some sense, since there is no string value to split; but, as with everything involving NULL, it may not be exactly what you expect. In this case, the output is more than likely what you would have wanted in the first place, which is definitely not always the case with NULL.

  • Post PASS Summit Resolutions 2016

    So, I am finally back and lightly rested from the two big weeks of SQL learning this year (PASS Summit and MVP Summit), and like pretty much every year, I am physically exhausted and feeling my age. So many sessions, so many miles put on my feet. But my brain recovers from the ordeal faster than my feet, and it is like: "wow, I am going to need more hours in the week!" Still, I want to set the expectation dial to around a 6, not 11, because from what I hear, there is more to life than SQL (and yes, I did chuckle to myself when I said it).

    So here we go, my resolutions for the 2017 year:

    1.  Keep involved locally. My wife and I are leaving the Nashville area this year, moving to Cleveland, TN (south of Nashville, unlike that other more famous Cleveland!) So I will hopefully get involved in the Chattanooga group, but who knows what I will work out. I will probably make it to a meeting or two in Nashville this year too.

    2.  Write more. Pretty much anything, but I want to keep writing books if I can. I have one coming out later this year, and one more early next year. One project I really hope to start on this year is upgrading the DMV book, starting with blogs on the DMVs that need to be added, and hopefully getting ideas from the community, and probably pick up a cowriter or two before it is all said and done.

    On the blogging front, I will try to do my best to put out something every week (well, every other week or so, to be completely honest), even when I am working on a larger bit of material like a book. This will probably be the hardest resolution to keep, because I get pretty focused on one thing at a time and can’t find the mental energy to work on something else while another is waiting. I have tons of unwritten blogs waiting for me and I will try to complete at least some of them. Something I will do at least a few times is to try to participate in T-SQL Tuesday and other blogging challenges now and again.

    3. Continue to speak, but do so "economically." I almost always foot my own bill to go speak at a conference, except for the PASS Summit which is part of my training budget each year. Part of our move from Nashville will be a reduction in family income, so I might slow down a little. Not that I plan to quit or anything, and have already put in for Nashville and Birmingham, and will certainly submit to Atlanta and Chattanooga, and probably an event or two more if I can split costs/travel with others.

    I also plan to submit to the PASS conference this year, something I haven’t done in a few years. This year I plan to do it with a bit of a twist. More on that in a bit.

    4. Start recording my old presentations and putting them (possibly) on YouTube. I planned on doing this a while back, calling them “director’s cut” presentations (the point being that I am not restrained to 1 exact hour, so I won’t cut any demos or commentary for time.) Then I had my surgeries and just never did. I was reminded of this when I was at the MVP Summit, talking to another person who used YouTube for a similar sort of purpose.

    5. Upgrade my website to advertise my talents better. My current website kinda stinks, but it is functional enough. I would like to make it a showcase of my work, including more snippets, the aforementioned videos, and blogged articles, and to expand some of the examples from my book and presentations for Azure SQL DB, and in some cases In-Memory OLTP (naturally starting as blogs!)

    6. Learn something new. There is a specific technology that I am thinking I want to learn very well, but I can't mention it yet. See #2 for what I will do once it has been released.

    7. Continue to stay involved with the PASS Program Committee, reviewing and commenting on abstracts for people before they submit them. Note that I said I plan to submit to the PASS Summit. Last year I felt it was a bit of a conflict of interest, as I reviewed 80+ abstracts and easily could have stolen ideas (and a few sessions I saw seemed like they were stealing MY ideas, even if I had never told anyone). To make sure there isn’t a conflict of interest, I will post my abstracts to my blog before the call goes out, so no one can say I stole their idea (and I won't mind if people steal mine, as attending PASS without speaking is kind of relaxing.)

    In addition, I hope to use my abstracts as a screening test for people joining the Abstract Coaching team (with some real good righting to see what people seeen.) So I can get the benefit of the team's skills, just like several past team members have (and are highly recommended to do! It is usually a skilled person who knows that other skilled persons can make them better!)

    8. Get more fit. At the PASS Summit this year I walked more than I ever have, for a total of 57,215 steps (an average of 8,173 steps a day, or approximately 3.5 miles a day, based on what my Microsoft Band 2 tells me), though I did use my cane quite a bit. I even attended more of the parties than normal, and only used an Uber once to get somewhere other than the airport. It still knocked the heck out of me, but I am getting there. I doubt I can ever write resolutions without saying "get more fit", but hopefully I can always say I did improve in the follow-up.

    9. Keep stretching my comfort zone. This year I wore a kilt, and it was not exactly easy for me. But I got through it, and it didn't kill me. With the PASS Summit in 2017 starting on Halloween, if #sqlcostume becomes a thing, I have a costume idea that will be fun, and will again stretch my comfort zone in a way that will not be scary, or cause anyone nightmares either.

    10. Start working on following through on my secret resolution from last year. Because I ended up working on an extra book and having knee surgery, I never did the leg work (ba boom ching). My idea is to start a conference at a higher level, particularly so speakers can go and learn stuff, along with the people who don't feel like they are getting anything out of conferences. If you steal my idea, you have to let me attend for at least half price… Or free.

    11. If I am not having fun, disregard any of the previous items except #8 (because that one is truly important). I enjoy this stuff most of the time, but sometimes it wears me the heck out. I occasionally just want to turn on the television and watch a football game. While I am writing this there is a game on my hotel TV as I head out of town. I had to check the TV to see who was winning. Seattle. Seattle is winning. (As I edit myself, I will note that Seattle won, and now I am watching Carolina up. Football is great TV to write while you watch.)

    ---------------------

    So there you have it. I am going to try to follow through with these resolutions, but who knows. Life is full of surprises: exciting (like getting a new book to write), not exciting (like having your hip break), or really sad (like a younger acquaintance of ours passing away suddenly). You don't have a clue what is going to happen, and you are never in control of what is going to happen. Obviously we can only influence the future; we are never in control of it.

    Finally, I ask myself why I write resolutions. I remember something Scott Adams wrote a long time ago about writing down goals (I can't find the reference I remember anymore). Writing them down, and transmitting them to your web browser, helped me make them real last year. Will it work this year as well as last? See you in early October next year for the follow-up, and you will see.

  • After PASS Summit 2016 Recap (As seen by me!)

    In my last blog entry, I promised to blog about the PASS Summit each night when I got back to the room. This was a failure for two reasons: 1. I was always out at night and then exhausted. 2. I forgot the keyboard to my Surface Pro. I tweeted about it, and was picked on by the @surface Twitter account:

    [screenshot: the @surface Twitter exchange]

    But I did tweet about the event rather frequently, as it is much easier to capture your ideas and comments in 140 character spurts (and favoriting other posts saves typing too.) If you want to read all of the tweets about the summit, look for the #sqlsummit hashtag on Twitter.

    The first day was the Microsoft Keynote. It was led by Joseph Sirosh and while there wasn't a tremendous amount of stuff that directly excites this relational engine programmer (though love was shown for how awesome the SQL Server Engine is, both on prem and in the cloud), some of the stuff that was shown was really cool:

    1. Showing the various sources of data you can use with Polybase

    2. SQL Server on Linux - Not that I will ever use this, but it could be a boon to SQL Server usage as time passes (and for a relational programmer, you would not really notice much of a change anyhow)

    3. Azure Analysis Services is coming soon

    4. Azure SQL DW has had some tools created to make it easier to get started with (RedGate has a free tool at http://www.red-gate.com/products/azure-development/data-platform-studio/), and as Tim Ford tweets here: (https://twitter.com/sqlagentman/status/791316930703478784), you can get a free month of SQL DW to give it a try.

    The biggest takeaway was just how much data is going to affect our lives as time passes. Last year, my reaction was that the keynote was a bit creepy, with the mapping of DNA and predicting of health. This year, it was a couple of examples that were really cool, including some apps, websites, a few game examples, and sentiment analysis of the book War and Peace (https://twitter.com/drsql/status/791320039303491584) by Julie Koesmarno.

    An interesting turn of technology was the push toward "intelligent database" platforms. Something many of my colleagues have discussed for years has been to leverage the data tier to get work done faster, and more reliably. What had always been missing in those scenarios was scaling out; hence we were constantly limited by how much we could do on a single computer. Two things have changed since those days. 1. A single computer can do as much work as most organizations need it to. 2. Scaling out is far easier when dealing with read-intensive scenarios. There was a demo of SQL Server 2016 handling millions of transactions, a workload orders of magnitude heavier than most realities (and we are talking fraud detection for major credit card companies).

    However, the most moving demo finished out the keynote, and it was the closest to creeped out that I got. There was a computer guessing ages, (I think) gender, etc. Then the computer was describing the surroundings. Then the computer was reading a menu at a restaurant. And then you realize this was a computer helping a blind man. Wow. That was just an amazing use of technology.

    If you want to know what Joseph Sirosh (the Corp VP for the Data Group at Microsoft) felt were the top five announcements, he shared it here: https://twitter.com/josephsirosh/status/790950683138596865. Stuff I didn't mention was really outside of what I know (ok, I admit it, care) about (I do only have so much time!)

    -------------

    After this I attended several pretty great sessions:

    • Why Datatype Choice Matters from Andy Yun, where he covered some of the internals of datatypes. The best part for me was the statement that "NULL isn't a value, it is a state of being unknown, undefined.  Hence the null bitmap in the physical record of a row." While I have written about NULL probably hundreds of times, it is good to be reminded of this point, that NULL isn't really a value, even though it does feel like it.
    • Building an SSRS monitoring system with Stacia Varga (a cowriter on this book). She covered a lot of stuff about logging that I may never use, but one thing I learned about that I might directly use is logman.exe, which lets you capture perfmon counters. There is an article here about capturing SSRS statistics: https://msdn.microsoft.com/en-us/library/ms159809.aspx).
    • Then Tom LaRock and Karen Lopez duked it out again, talking about time bombs you may have lurking in your database code. You know, like NULLs no one understands, or identity columns that no one pays attention to until the values run out.

    ----------------

    Something I am keen to learn more about came in two sessions: Buck Woody on the first day, and Dr. Cecilia Aragon. Data Science. I don't know if I could, or would want to, become a data scientist. But in either case, it leads me down the path of wanting to make sure that the databases I create are ready to be a source of some of that data. I have always been a proponent of tailoring my OLTP database designs to capture every detail possible. For example, cause and effect, whether direct (such as a shipment for an order) or indirect (such as a follow-on order that the customer tells you about, or reaches through a link from, a previous order). Data Science is about learning more about everything, and the more answers you can provide an algorithm, the better it can help you see others behaving the same way. Capturing additional data that isn't needed immediately is not always something that is greeted by developers with a hearty smile, but it is almost always going to be useful.

    Buck Woody pointed out a website (http://tylervigen.com/spurious-correlations) that has some excellent, messed-up correlations that you can make using data, such as "Per capita consumption of chicken" and "Total US crude oil imports":

    [chart: spurious correlation between per-capita chicken consumption and US crude oil imports]

    I eat a lot of hot chicken to try to help, but I am only one person!  These correlations were highlighted even more by Dr. Aragon, who had a couple of very interesting quotes that piqued my interest:

    "Data science is driven more by intellectual ecosystems and software ecosystems than by hardware"

    (Paraphrasing)"Humans not gaining exponentially greater cognitive capacity. "

    "Big data is data that is 2 orders of magnitude greater than you are accustomed to"

    For me, these three quotes really put Data Science in perspective. People are now, and have always been, very intelligent, regardless of how things seem at times. But what we really lack is the ability to process concepts quickly. People make algorithms, and could slog through data manually, but we would rather let computers whip through the data and give us decisions. Will there ever be a time when machines make correlations that are completely wrong, but act on them anyhow? It reminds me of Robot Santa Claus on Futurama, who judged everyone naughty: the person who was naughty, and the person who told on the person.

    Will we ever make a machine that can come up with algorithms, and understand what is a meaningful correlation without some human logic? Heaven knows that every person who creates a machine won't be good at heart, but could machines ever be machines without people?

    It does all remind me of the Pete Townshend song "Man and Machines" from the Iron Man album:

    "Man makes machines
    To man the machines
    That make the machines
    That make the machines
    Make a machine
    To make a machine
    And man and machine
    Will make a machine
    To break the machines
    That make the machines..."

    On Singularity Hub, I was reading an article about AI, which, while not exactly the same thing, has many of the same problems. There was a statement:

    "Based on deep neural nets, the AI impressively mastered nostalgic favorites such as Space Invaders and Pong without needing any explicit programming — it simply learned through millions of examples."

    If you stop at "without needing any explicit programming", this sounds pretty creepy. But if you give the computer an example of a successful solution, perhaps even millions of them, and combine this with the fact that computers don't make tiny mistakes (you know, the thing that makes games fun!), it isn't that the computer can learn by itself. It is just that it can try, fail, adjust, and repeat a LOT faster than people. But it still takes a human to guide the process.

    -----------------

    The second keynote had two major parts. First was the PASS business stuff. We have more chapters and more members, and we want orders of magnitude more people. One way of pushing in this direction is, much like the MVP program did, to include the entire data platform. PASS no longer means Professional Association of SQL Server; it is just PASS. New logo too:

    [image: the new PASS logo]

    The little symbols represent what PASS encompasses as an organization, and who we are as PASS members. Interesting enough, but I always worry that things are going to go sideways and we will end up with a different community of mostly the same people. Time will tell.

    The second part was an excellent keynote address by Dr. David DeWitt. It had some interesting details and comparisons of online data warehouse products, but it was a lot broader than that: a good overview of internals that can only help your career. I won't cover anything about it; go here (http://www.sqlpass.org/summit/2016/PASStv.aspx) and watch it. Best quote for me: "SQL Server, Best Query Engine". But other companies are doing great stuff too.

    ----------------

    Then I went to a discussion about the way sessions are chosen. PASS chooses sessions in a very interesting way, and really, I think they do a good job. No matter how you do it, someone's feelings will get hurt, unless you use my favorite method for SQL Saturday session choosing: everyone gets a session, as long as they haven't angered the organizers in some meaningful manner. Best way to anger the organizers: don't show up, without a good excuse. Yes, it happens too often. And that, along with harassing others at an event (or multiple events), is something that takes a while to get over. Best way: be apologetic, attend events, and don't be a jerk again.

    -----------------

    The other big thing that happens on the second day is that a group of folks wears kilts to PASS to show support for Women in Technology. This year, I was one of those people. It was not a normal thing for me, and not something I expect to do for a SQL Saturday unless for something special. Want to see the picture? Click this link to see Jamie Wick's tweet of a picture that was taken: https://twitter.com/Jamie_Wick/status/791739875439456256

    -----------------

    Friday, we woke up to a rather interesting sight for a PASS conference, even more interesting than myself in a kilt: the sun came out.

    I attended one more regular session of note: Tim Mitchell's deep dive into the SSISDB catalog, where I knew most everything already, but using data taps to capture intermediate results (like you might use a temp table in a SQL query batch) was very nice. I hope to run a series of blogs about some work I have done with the SSISDB catalog over the next year or so. Another interesting idea: using SSISDB versions for regression testing. Run once, deploy new, run again, compare results, then revert.

    The other thing I went to was Speaker Idol, supporting my man Robert (@SQLCowbell) Verell. We co-lead the Nashville SQL User Group, and it helps us if Robert gets a speaking slot :) Robert was the wild-card/runner-up of his day (there are three rounds of four, with a final round of four to complete the event), and he did a great job. I really felt nervous for all of the people who participated, because what pressure! I have long sweated the process of speaking, because of all of those eyes staring at you, seemingly expecting perfection (actually just expecting to learn a few new bits and pieces). And here, you have 10 eyes staring at you, this time actually expecting perfection. In the end he didn't win, but he certainly didn't embarrass himself, since he made the finals despite having a yellow background for text in SSMS that is still burned into my eyes.

    ------------------

    Then it just sort of ended… No fanfare, just a walk down to the Moore Theatre to catch Ian Anderson do his Jethro Tull rock opera. I hadn't even noticed there being concerts I cared about in the area, and prior to this year I would never have wandered that far from the hotel most nights, but I discovered the ease of Uber while there, which made walking less scary (since I occasionally aggravate my knee when walking as much as I did this week!) While there, I ran into Rodney Kidd, who had a lot of great stories about music, walking back from the show (we were both at the Homewood Suites.) Add that to the stories that Pat Phelan shared at breakfast that morning about his cool experiences, and I had a great time even outside of the event.

    Well, can't wait until next year!

  • Post PASS Summit 2016 Resolutions - Followup

    Last year I started a new tradition for myself that will last until I specifically retire from the SQL Server community (something that, I practically promise, isn't coming in next year's resolutions). As this is the end of the PASS year, with the Summit coming up next week, it is time to see how I did with regard to what I said I would do. I didn't do as badly as I generally do with my New Year's resolutions, but as with those, still not perfect.

    ---------------------------

    1. Don’t submit to speak again next year – As much as I appreciate the surprise from a few people that I wasn’t speaking, and the kind words from a few people about liking my sessions, this was my favorite year. I am not a natural speaker, so I spend a lot of time preparing and I miss a lot. This was the first year I actually ventured out with people for a meal or two, caught a movie, and attended sessions in all but 2 slots (I saw my SQL friend Joe Webb, whom I hadn’t seen in forever, and started talking to him and 5 other people who kept stopping by. Worth it!)

    The easiest of them all. I did not submit, and there were two reasons. First, it is awesome to go to Seattle for the week and just attend meetings, talk to people, etc. The second is in regard to number 8: since I have been working with the program committee, helping to comment on abstracts (and unlike the other members of the team, I see them all), it feels weird to be in competition with those folks.

    2. Don’t stop submitting to speak at SQL Saturdays/User Groups – As much as I didn’t miss speaking at PASS, I do love speaking at smaller venues where I can do most of my prep at home, and where the room size doesn’t give me heart palpitations. I plan to submit to 4-6 more events, and I have 2 or 3 user group presentations planned. I also have hopes of possibly doing one pre-con this year, and I will submit a few (it won’t kill me if I don’t get chosen; it is a lot of work!). I also hope to submit for SQL In The City next year if RedGate does it (and I see the call for speakers before it closes).

    I spoke at quite a few SQL Saturdays, even doing a precon in Tampa. I also spoke at Music City Code, which is a very nice replacement for Devlink.

    3. Possibly attend a SQL Saturday and don’t submit to speak – It might be nice to come to one and just hang out. I would be more than happy to work a booth for PASS or RedGate or whoever would want me to!

    Did this in Chattanooga, and it kind of reminded me how I felt at the Summit last year. It definitely helped that I was able to go to the speaker dinner with Robert Verell, as it is always great to spend the evening talking to my peers.

    4. Get a kilt and a costume to wear for PASS next year – Okay, so this one is probably a long shot (consider this my “lose weight” resolution that may not make it, because I will have to lose weight for the first part to occur), but I have a fantastic costume idea that was inspired by Bob Pusateri’s (@SQLBob) costume this year (nothing to do with Dilbert, but that is all I will say).

    I decided not to do the costume, because it just didn't seem like it was that necessary, but I did end up getting a kilt, which I am not 100% sure I will actually wear on #sqlkilt day on Thursday. It all depends on how well I can figure out the look. The following is a very tiny picture of how the kilt looks, just to prove that I did purchase one :)

    [image: the kilt]

    5. Get well enough to walk without making me and others feel bad – I know that I made some people uncomfortable this year watching me try to stand up after sitting down for a few minutes, but I felt better this year than I have in 6 years or more. It was just that my muscles tighten when I sit for a while, and it hurt really badly.

    This one is the most complicated. I have definitely made a lot of progress. I have dropped another 20-30 pounds since last year, and I have improved my stamina greatly. However, I had surgery to remove a torn meniscus in July and there turned out to be a lot of arthritis. In some respects it is now a bit harder to walk than even before the surgery (not atypical after they get in there and remove and shave down stuff.) Honestly it will probably look like I am worse off this year as I will mostly be walking with a cane, but while things are worse in some areas, I should be able to do much more walking than I have ever been able to do for a PASS Summit.

    In the next year or two I will likely be having a knee replacement, so it will be a while before the idea of walking without looking like I am in pain is a reality (and based on what I know from others, knee replacement is yet another year's recovery, which I am not relishing!)

    6. Keep being involved locally – I may not be the primary chapter leader for Nashville this year, because there are so many great people who deserve a swing at it. I don’t know what I will be doing for SQL Saturday, but I will certainly be there if I am in town.

    I did okay here. Our leadership team all kind of hit a wall of time near the end of the year, as did I, after picking up an additional writing project that will be coming out later this year or early next. It has really taken away my time to work on the user group, and at the same time we have had a bunch of issues with venues. But we have some great leaders on the team, and I know we can get things up and going again.

    7. Finish my book early so I can bring some to give away at PASS Summit next year – I will have four chapters done by next week, and hopefully will be done with all of the logical design stuff before the start of the year. Then I have to work in columnstore, in-memory, and all of the other database design changes that are in 2014 and 2016. Sounds easy and like a ton of work at the same time (thankfully, I use very few screen shots).

    This was a bit of a failure for two reasons. First, I got another writing project that took some of my time. Second, there were some (no-fault) issues with technical editing that slowed things down a bit. In any case, the book is in the final stages now and will be out later this year. The website claims it will be published on November 22, but I still have work to do (the writing is all done!): http://www.apress.com/9781484219720

    8. Continue volunteering for the PASS Program Committee – I have greatly enjoyed working with the program committee for the past several years, and last year we started a service that allows submitters to get a review of their abstracts before submitting them for the year. This year I hope we can expand the service to help more people who are great speakers get the writing stuff out of the way.

    The abstract review team did some more great work this year, and we will hopefully do it again for the coming year, doing our best to make it easier for submitters and reviewers, ideally getting people who want abstract help more opinions than ever. We will see… that depends on PASS budgets, IT resources, and other things I don't have control over.

    9. Blog somewhat more – And not just these post-PASS blogs, either. I started blogging about new stuff in SQL Server 2016 until I had to get to work on the book (and some other writing projects), and I will minimally continue that process as SQL Server 2016 matures. After the book is finished, I also hope to get some mastery of the stuff that Azure offers (more about that in the next blog I will publish).

    I did fall off once I started writing my books, but the blogs I did write on several of the new features in SQL Server 2016 paid major dividends. I was able to use the blogs in both books, refining them quite a bit (I don't edit my blogs that well, and I probably never will!) but keeping the base structure of the examples. Writing is really fun to me, though sometimes a bit more time consuming than I hope, keeping me stuck at a desk rather than working on #5.

    10. Mystery Project – I have started shopping around an idea to some folks to see if I can do something kinda different and interesting in the coming year (or so). I have a lot of details to work out, but I want to wait until some other things have passed before I do too much else. (I may send out a survey on Twitter in the coming months to see what more people think, too.)

    The mystery project is still a mystery. About the time I was planning to work on making this happen, I got the second book, which took up a few months of my time.

    ---------------------------

    Overall, not a terrible first pass at fulfilling PASS resolutions, and who knows what the next ones will say. After the PASS Summit and the MVP Summit a week later, I will make my resolutions for the upcoming year. I know what I am currently thinking I want to do, but the reason I started the PASS Resolutions is that these two conferences have always inspired me to do more and more through the years. 

    I will make one resolution right here, though, and this is in regard to the Summit itself: I will be blogging about the Summit this year again. In past years I tried to blog each day of the Summit, but I had slacked off the past few years in favor of tweeting. This year, I will definitely be tweeting as much as possible, but when I get back to the room each night, I will try to recap what I have seen that may be interesting to you.

  • Where have I been? Where will I be?

    SQL Saturday Orlando has been postponed. I won't be able to make it for the make-up day, so that part of this blog has changed. Not the part about me loving the folks down there. They are still awesome!

    Once again I have blogged so little that I don't even show up in the list on SQLBlog.com, but it is about time to start blogging again.

    Where have I been?

    Good question that probably no one has actually asked. But let me show you. I have been stuck here, fingers attached to a keyboard:

    [image: my desk]

    Over the past months, I have been sitting behind this desk, writing and editing. My "Pro SQL Server Relational Database Design and Implementation" book (http://www.apress.com/9781484219720) is nearing completion. Jessica Moss and I turned in our last edits after tech review today. I am turning in the source code and a bonus chapter on triggers (where I include a really messy natively compiled trigger that duplicates a MUCH simpler trigger using interpreted code!).
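    To give a flavor of that contrast, here is a sketch of my own (not the chapter's actual code; the Demo.Widget table and its columns are hypothetical) showing how simple the interpreted version of such a trigger can be:

        -- stamp the time of the change on every modified row
        CREATE TRIGGER Demo.Widget$UpdateRowModifiedTimeTrigger
        ON Demo.Widget
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            UPDATE Demo.Widget
            SET    RowLastModifiedTime = SYSDATETIME()
            FROM   Demo.Widget
                   JOIN inserted
                        ON inserted.WidgetId = Demo.Widget.WidgetId;
        END;

    The natively compiled equivalent must be declared WITH NATIVE_COMPILATION and SCHEMABINDING and wrap its body in a BEGIN ATOMIC block, all while working within a much smaller supported T-SQL surface area, which is where the messiness comes from.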

    Second (and this is the first time I have mentioned it publicly), I am writing half of a book with another awesome person: https://www.amazon.com/Exam-Ref-70-762-Developing-Databases/dp/1509304916/, but that has really been a lot of work in a very short time period. I am getting close to finished with my part on that one too.

    Where will I be?

    This Saturday, I will be at SQL Saturday Orlando, doing my "Let Me Finish… Isolating Write Operations" session.  The abstract for this session is:

    OLTP databases can be constantly written to and reporting databases are written to at least periodically. In order to ensure consistent results, connections must be isolated from one another while executing, ideally with the lowest possible cost to concurrency. How this isolation is handled is based on the isolation level, whether the classic lock based or the newer optimistic scheme of the in-memory OLTP engine is used, or even if both engines are enlisted in the same transaction. In this session we will look at examples of how SQL Server isolates reading and writing operations from other writing operations to explore how this may affect your application through error messages and performance hits.
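    To give a taste of the sort of thing the demos cover, here is a minimal sketch (not the session's actual demo code; dbo.Account is a hypothetical table) of a writer blocking a reader under the default isolation level, and the optimistic alternative:

        -- Connection 1: start a write and leave the transaction open
        BEGIN TRANSACTION;
        UPDATE dbo.Account SET Balance = Balance - 100 WHERE AccountId = 1;

        -- Connection 2: under the default READ COMMITTED (without
        -- READ_COMMITTED_SNAPSHOT), this SELECT waits on Connection 1's
        -- exclusive lock
        SELECT Balance FROM dbo.Account WHERE AccountId = 1;

        -- Connection 2, the optimistic way: with ALLOW_SNAPSHOT_ISOLATION ON
        -- for the database, the same read returns the last committed value
        -- instead of blocking
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        SELECT Balance FROM dbo.Account WHERE AccountId = 1;

        -- Connection 1: release the lock
        COMMIT TRANSACTION;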

    I love the Orlando folks, and every year I hope my vacation plans (made 11 months in advance) sync up with their event (If you know me at all, you probably know I don't give up a day at Disney World for anything else. I mean, I am speaking while my wife is at Disney World AND at the same time as my UT Vols are playing Texas A&M…)

    Is this the last time I do this session? Maybe. I do really like it, and it is the most natural session I have ever done (particularly for a session with demos!). But I have done it a bunch of times this year, and I generally like to write something new every year (except perhaps for my Database Design Fundamentals chestnut). I am thinking of something around Row Level Security/Dynamic Data Masking; RLS for sure is a pretty cool topic that I can certainly see applying some day. The last time I jumped on a session idea about a new feature it was sequences… that one did not get as many attendees as some other sessions! I wouldn't hate doing a fun session some day, too.
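    As a preview of why RLS appeals to me, here is a hedged, minimal sketch (the Security schema, dbo.[Order] table, and SalesPersonName column are hypothetical names of my own): the whole feature boils down to an inline table-valued predicate function bound to a table by a security policy:

        -- predicate function: a row qualifies when its SalesPersonName
        -- matches the current database user
        CREATE FUNCTION Security.SalesPersonName$SecurityPredicate
            (@SalesPersonName sysname)
        RETURNS TABLE
        WITH SCHEMABINDING
        AS
        RETURN (SELECT 1 AS SecurityPredicateResult
                WHERE @SalesPersonName = USER_NAME());
        GO

        -- bind the predicate to the table; existing queries, views, and
        -- stored procedures are filtered with no changes to their code
        CREATE SECURITY POLICY Security.Order_SecurityPolicy
            ADD FILTER PREDICATE
                Security.SalesPersonName$SecurityPredicate(SalesPersonName)
            ON dbo.[Order]
        WITH (STATE = ON);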

    Then I will be at the PASS Summit and MVP Summit in late October and early November, before hibernating until around the

    My next blog will be a follow-up on last year's resolutions I made after PASS. If it arrives in time, I might even model the outcome of one of the resolutions as well.
