THE SQL Server Blog Spot on the Web


SELECT Hints, Tips, Tricks FROM Hugo Kornelis WHERE RDBMS = 'SQL Server'

  • How TOP wrecks performance (part 2)

    In my previous post, I showed how the vanilla version of TOP can really help performance by nudging the optimizer into picking an execution plan that is optimized for returning just a few rows instead of the entire result set. But I also showed that adding WITH TIES causes a cardinality misestimate, which may lead to suboptimal or downright terrible plan choices. Luckily, this is fixed in SQL Server 2014 and later, provided you use the new cardinality estimator.


    In this post I will show how the PERCENT option causes even worse performance, and how you can avoid that performance hit.




    Adding PERCENT to a TOP clause does exactly what it says on the label. Here is an example – very similar to the example used in the previous part, but I changed the ORDER BY for reasons I will explain later in the post.


    SELECT   TOP(2) PERCENT CustomerID, OrderDate, SubTotal

    FROM     Sales.SalesOrderHeader

    ORDER BY SalesOrderID;


    This query returns 630 rows. Why? Because without the TOP clause, the query would return 31,465 rows (the number of rows in Sales.SalesOrderHeader), and two percent of that is 630 (rounded up – TOP PERCENT always rounds up).  Unlike TOP without PERCENT, the value for TOP with PERCENT can be a fractional number, e.g. TOP (2.5) PERCENT in the previous query would return 787 rows.
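    The rounding rule is easy to verify yourself. Here is a quick sanity check in plain Python (just an illustration of the arithmetic, nothing SQL Server specific):

    ```python
    from fractions import Fraction
    from math import ceil

    def top_percent_rows(percent, total_rows):
        """Row count returned by TOP (percent) PERCENT: always rounded up."""
        # Fraction avoids floating-point surprises right at a rounding boundary
        return ceil(Fraction(str(percent)) * total_rows / 100)

    print(top_percent_rows(2, 31465))    # 630  (2% of 31,465 = 629.3)
    print(top_percent_rows(2.5, 31465))  # 787  (2.5% of 31,465 = 786.625)
    ```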


    Because SQL Server does not know how many rows the query without TOP would return, it can only satisfy the TOP PERCENT requirement by first executing the query as if there is no TOP to get that row count (which, for this simple query, means reading the entire table), and then return the required number of rows. Here is the execution plan for the query above:


    Even though I said that SQL Server needs to first read the entire table to get the number of rows and then return to the table to get the 630 expected rows, the plan shows just a single clustered index scan operator accessing the table. So how does this work? The answer is the table spool operator.


    Let’s step through the plan, starting (as SQL Server does) at the top left operator. The SELECT operator will call the Top operator to get a row, and Top will immediately relay that GetNext row request to the Table Spool operator. Because this spool operator is running in eager spool mode, it will call the Clustered Index Scan operator not once, but over and over again until all rows are read. All rows returned are then “spooled” by the Table Spool operator – i.e., they are written to a worktable, a temporary data structure located in tempdb, from where they can later be retrieved and served (while preserving the order). The Clustered Index Scan operator itself is running as an ordered scan (as shown in the operator properties), which is the reason why this query does not need a Sort operator to satisfy the ORDER BY clause.


    Once all rows have been read from the clustered index and stored in the worktable, the Table Spool will return the first row to the Top operator. And, even though this is not visible in the plan, the Top operator also gets access to the total number of rows that is now stored in the worktable. This allows it to compute the number of rows it should return, and then call the Table Spool operator that number of times. For each of those calls, the Table Spool will not call out to the Clustered Index Scan operator again, but simply read the next row from the worktable and return it.


    At first sight this table spool may appear to be superfluous. Why not, after getting a count of rows from the table, get back directly to the base table to get the required results? In the case of this simple query that would indeed make sense. But most queries are more complex: they join a bunch of tables together, filter out part of the results based on a complex WHERE clause, perhaps also do grouping and aggregating to get the final result – those queries can be very expensive and long-running, and in that case you really do not want SQL Server to execute the query first in its entirety to just get a row count, and then execute it again to return the requested rows. The Table Spool pattern is used to prevent that double cost, and the overhead of creating the worktable is considered small enough to not bother using a different pattern for simpler queries.
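    The count-then-replay pattern described above can be sketched in a few lines of Python (this is my own illustration of the pattern, not actual SQL Server internals):

    ```python
    from math import ceil

    def eager_spool(child_rows):
        """Sketch of an eager Table Spool: drain the child completely into a
        worktable on the first request, then serve rows (and the row count)
        from that stored copy without ever calling the child again."""
        worktable = list(child_rows)      # one full pass over the child input
        return worktable, len(worktable)  # rows in original order, plus the count

    def top_percent(child_rows, percent):
        """How the Top operator can use the spool: learn the total first,
        then replay only the required number of rows from the worktable."""
        worktable, total = eager_spool(child_rows)
        return worktable[:ceil(total * percent / 100)]

    rows = range(1, 31466)                # stand-in for the 31,465-row scan
    print(len(top_percent(rows, 2)))      # 630
    ```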


    But there is a catch, and it is a big one. The optimizer seems to think that storing rows in a temporary table is “relatively” cheap, but in reality this turns out not to be the case: the mere process of storing those rows is very expensive. You can see that by rerunning the query with the STATISTICS IO set option enabled:



    SELECT   TOP(2) PERCENT CustomerID, OrderDate, SubTotal

    FROM     Sales.SalesOrderHeader

    ORDER BY SalesOrderID;


    Switch to the Messages tab after executing this, and you will see that the SalesOrderHeader table has 689 logical reads – a full scan of the entire table, as expected. But you will also see 63,730 logical reads for the worktable – that seems a bit excessive for returning just 630 rows!


    Spool, the incredible performance sink


    It took me some time before I realised that these logical reads are not produced when reading from the worktable, but when writing to it. I was able to verify this theory by changing the TOP(…) PERCENT parameter to either a very high value (e.g. 99.99999), so that all rows from the table are returned, or to a very low value (e.g. 0.00001) to return just a single row. The logical reads change, but only a bit: the number of logical reads on the worktable varies from 63,727 to return a single row to 63,888 to return them all. This indicates that over 63,720 reads are generated while building the spool, and only between one or two and 170 or so to read anywhere from one row up to all 31,465 rows from it.


    In order to understand why so many logical reads are generated when populating the spool, I tried variations on the same query. When I added a WHERE clause to limit the number of rows going into the spool,  I found that the number of logical reads decreases by two for every row filtered out. When I added more columns to the SELECT list, or used larger tables as the source (e.g. SalesOrderDetail, or a cross join between two tables), that number at one point changes to a difference of three logical reads per row removed. Upon seeing these ratios, I finally figured out what is causing these excessive reads – apparently, the worktable is created with an internal clustered index on top of it (even if the results stored in it can contain duplicates), and every single insert is performed by doing a seek in that index to find the location to add the row. For tiny worktables that seek starts at just a single read, but as the worktable grows, more levels have to be added to the clustered index, causing the number of reads per row to go up to 2, 3, or even more.
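    A rough back-of-the-envelope model illustrates why the per-row cost climbs as the index gains levels. The fanout of 300 rows per page is purely my assumption for illustration – SQL Server does not document the internals of this worktable – but even this crude model lands in the right ballpark:

    ```python
    from math import ceil, log

    def estimated_insert_reads(total_rows, fanout=300):
        """Rough model (my assumption, not documented SQL Server behavior):
        every insert into the spool's internal clustered index costs one
        logical read per index level, and levels are added as the row count
        outgrows the fanout (rows per page)."""
        reads = 0
        for i in range(1, total_rows + 1):
            levels = max(1, ceil(log(i, fanout))) if i > 1 else 1
            reads += levels
        return reads

    # One level for roughly the first 300 rows, two levels after that:
    print(estimated_insert_reads(31465))  # about 62-63 thousand -- the same
                                          # order as the 63,7xx reads observed
    ```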


    The first rewrite


    If the cost of writing to a spool is one, two, or even three logical reads per row inserted, then the overhead it introduces in the plan for the simple query above is way more than the saving. A much better and faster solution is to use a “Do it yourself” TOP PERCENT workaround, as follows:


    DECLARE @cnt bigint;

    SET @cnt =

     (SELECT COUNT(*)

      FROM   Sales.SalesOrderHeader);


    SELECT TOP(CAST(CEILING(2.0 * @cnt / 100) AS int))

             CustomerID, OrderDate, SubTotal

    FROM     Sales.SalesOrderHeader

    ORDER BY SalesOrderID;


    This method avoids the overhead of the worktable. An additional benefit is that the number of matching rows is now found using COUNT(*), giving the optimizer more freedom to use a different plan. In this case, it does indeed choose to use a smaller index for the first query, reducing the number of logical reads. The total amount of work for this batch is now down to 57 logical reads for the first query and 18 logical reads for the second – a huge reduction from the 689 logical reads from the base table plus 63,730 logical reads from the worktable we started with.


    But the proof of the pudding is in the eating, and the proof of the query is (for me) in the actual time used. Switching to SET STATISTICS TIME ON and running both queries several times shows that I did indeed achieve a nice performance improvement. The original query consistently takes approximately 300 ms elapsed time and burns 180 ms CPU time; the combined total of the two queries in the rewrite is just 150 ms elapsed time, and uses a mere 15 ms CPU time.


    There are drawbacks too, though. One is that the query logic now has to be written twice. The SELECT clause in the first is simplified to COUNT(*), but all the remaining logic has to be copied exactly – which makes this a time bomb for future maintenance. This can be slightly remedied by using a CTE, but that makes it a lot harder to understand the logic:


    WITH     TheResults AS

     (SELECT SalesOrderID, CustomerID, OrderDate, SubTotal

      FROM   Sales.SalesOrderHeader)

    SELECT TOP(CAST(CEILING(2.0 * (SELECT COUNT_BIG(*) FROM TheResults) / 100) AS int))

             CustomerID, OrderDate, SubTotal

    FROM     TheResults

    ORDER BY SalesOrderID;


    This version uses exactly the same number of logical reads as the two queries from the other version combined, and almost the same amount of time – but it will probably be a challenge to anyone who has never seen this query pattern before.


    Another drawback of this method is that under high concurrency, the population of the table may change between the two queries. The harder-to-maintain single-query version makes the window of risk smaller, but is not immune – if this is an important requirement, you need to elevate the transaction isolation level to serializable, which will have a negative impact on concurrency.


    The second rewrite


    The method above is not always a good idea. We avoid using the spool by forcing SQL Server to execute the query twice (the CTE variation removes the duplication from the query itself, but not from the execution plan!). That is an improvement over the relatively high cost of the spool, but only if some conditions are met.


    If the optimizer can simplify the first execution of the query (to get the rowcount) to a very cheap plan, and if the percentage returned (and hence the number of rows for which the full rowset has to be produced) is small, then the saving can be huge. But what if we try this with a relatively expensive query, that does not allow simplification to get the rowcount, and we request a high percentage?


    The query below demonstrates just that. In order to get a sufficiently expensive query on a small database such as AdventureWorks I had to go a bit fancy, so this query is not very realistic – but on much bigger databases, even more realistic queries can quickly become just as expensive, or more so.



    SELECT         TOP(10) PERCENT

                   soh.CustomerID, soh.OrderDate, soh.SubTotal, Next1KSales.Total

    FROM           Sales.SalesOrderHeader AS soh

    CROSS APPLY

       (SELECT     SUM(sod2.LineTotal) AS Total

        FROM       Sales.SalesOrderHeader AS soh2

        INNER JOIN Sales.SalesOrderDetail AS sod2

              ON   sod2.SalesOrderID = soh2.SalesOrderID

        WHERE      soh2.SalesOrderID BETWEEN soh.SalesOrderID

                                     AND soh.SalesOrderID + 1000) AS Next1KSales

    WHERE          Next1KSales.Total > 500000

    ORDER BY       soh.SalesOrderID;


    This query takes roughly 118 seconds to run (both elapsed time and CPU time). In the STATISTICS IO results, I see 1,347,920 logical reads on SalesOrderDetail, 801,607 on SalesOrderHeader, and 61,042 on a worktable – this is for storing the nearly 30,000 rows that satisfy the WHERE clause, of which then 10 percent (nearly 3,000 rows) are returned. Just as in the previous example, the number of logical reads for filling the spool is excessive, so let’s try to avoid this. In this case I choose not to duplicate the query in my code, so that means I have to resort to the more esoteric version with a CTE:


    WITH TheResults AS

     (SELECT         soh.SalesOrderID, soh.CustomerID, soh.OrderDate,

                     soh.SubTotal, Next1KSales.Total

      FROM           Sales.SalesOrderHeader AS soh

      CROSS APPLY

         (SELECT     SUM(sod2.LineTotal) AS Total

          FROM       Sales.SalesOrderHeader AS soh2

          INNER JOIN Sales.SalesOrderDetail AS sod2

                ON   sod2.SalesOrderID = soh2.SalesOrderID

          WHERE      soh2.SalesOrderID BETWEEN soh.SalesOrderID

                                       AND soh.SalesOrderID + 1000) AS Next1KSales

      WHERE          Next1KSales.Total > 500000)

    SELECT TOP(CAST(CEILING(10.0 * (SELECT COUNT_BIG(*) FROM TheResults) / 100) AS int))

             CustomerID, OrderDate, SubTotal, Total

    FROM     TheResults

    ORDER BY SalesOrderID;


    Looking at the STATISTICS IO results, I see that I did indeed avoid the worktable and its 61,042 logical reads. But I also see that I have to pay a severe penalty: the number of logical reads for SalesOrderDetail has gone up by 162,901 to 1,510,821, and on SalesOrderHeader the reads have increased by 77,795 to 879,402. So in order to avoid just over 61 thousand logical reads, we had to take a hit of over 240 thousand logical reads on other tables. Does not sound like a winning proposal to me.


    The run times of the second query are a bit strange. For reasons beyond the scope of this post the first query does not use parallelism; the second does go parallel, which improves the elapsed time quite a bit, at the cost of much higher CPU usage. In order to get a better comparison I added a MAXDOP hint to force both plans to be serial. This does not change the logical reads, but it does change the execution times: the second plan now uses 125 seconds (both elapsed and CPU time), approximately a 6% increase over the query that simply uses TOP.


    Luckily, there is yet another way to rewrite the query – a method that is actually almost the same as the execution plan chosen for the original query, except that we force SQL Server to replace the implicit temporary work area of the spool with an explicit temporary table. So I rewrite the original single query as two queries. I first execute the original query without any TOP clause and store the results in a temporary table; I then return the TOP(..) PERCENT rows from that temporary table using the original rewrite – which is now efficient, because it is no longer expensive to fetch the same rows multiple times.


    SELECT         soh.SalesOrderID, soh.CustomerID, soh.OrderDate,

                   soh.SubTotal, Next1KSales.Total

    INTO           #TheResults

    FROM           Sales.SalesOrderHeader AS soh

    CROSS APPLY

       (SELECT     SUM(sod2.LineTotal) AS Total

        FROM       Sales.SalesOrderHeader AS soh2

        INNER JOIN Sales.SalesOrderDetail AS sod2

              ON   sod2.SalesOrderID = soh2.SalesOrderID

        WHERE      soh2.SalesOrderID BETWEEN soh.SalesOrderID

                                     AND soh.SalesOrderID + 1000) AS Next1KSales

    WHERE          Next1KSales.Total > 500000



    SELECT TOP(CAST(CEILING(10.0 * (SELECT COUNT_BIG(*) FROM #TheResults) / 100) AS int))

             CustomerID, OrderDate, SubTotal, Total

    FROM     #TheResults

    ORDER BY SalesOrderID;


    DROP TABLE #TheResults;


    The query above produces the same result as all previous queries, but now in the fastest possible way. Of course the first query still has to do the work of evaluating the entire query, which as always requires 1,347,920 reads on SalesOrderDetail and 801,607 reads on SalesOrderHeader; these cannot be avoided. (Unless we put in the work to completely rewrite the query, which is probably quite possible in this case, but would defeat the point of this blog post.) There are however no logical reads for inserting into the temporary table, because in this case the temporary table is a heap, so SQL Server does not have to search for the right location when inserting a row.


    The second query uses just 374 logical reads to read the data from this temporary table twice. There is a hidden cost, though, that you can see in the execution plan: because the data in a heap is not sorted, the data has to be read completely twice; the first time in order to count the rows, and the second time in order to feed them all into a Sort operator. The Sort operator typically does not cause I/O activity because the sort is done in memory; however, it does cost a lot of CPU, and a lot of memory.


    The query above forces serial execution for the first query, for better comparison to the other versions. It uses approximately 118 seconds – similar to the query with TOP, which is not very surprising because the overhead of the spool operator, though large in relation to the number of rows it has to store, is insignificant in relation to the total amount of work done for the rest of the query. When running both queries multiple times and calculating the average elapsed and CPU time, I expect the query with TOP to be a few tenths of a second slower than the version that inserts into the temporary table. When I allow this query to go parallel, it runs much faster on my system (only 48 seconds elapsed), at the cost of burning way more CPU (over 350 seconds CPU time on my 8-core test system). I would allow the parallelism if this query is critical or if my server has sufficient unused CPU cycles; on a heavily used system that is constantly under CPU pressure I would force the serial plan.


    The second query, despite the sort, takes almost no time: just a quarter of a second elapsed time, and less than a tenth of a second CPU time.


    It always depends


    At the start of this post I promised that I would explain why I changed the ORDER BY for this post from the one I used in the previous post. The reason is obvious as soon as you run the query below:


    SELECT   TOP(2) PERCENT CustomerID, OrderDate, SubTotal

    FROM     Sales.SalesOrderHeader

    ORDER BY CustomerID;


    The SET STATISTICS IO results do still include a worktable, but in this case the number of logical reads in that table is zero. Is this some hidden magical Table Spool optimization at work, or is there a different explanation? To answer that question we will have to look at the execution plan for this query:


    As you can see, there is no Table Spool at all. It is replaced by a Sort operator. There are some surprising similarities between the Table Spool (Eager Spool) in the first plan and the Sort operator in this plan. Both will consume the entire input before even producing the first row of output. As a result, they both know how many rows there are, and they can (and will) both transmit this information to the Top operator so that it can compute the number of rows for the specified percentage. And this is why no additional Table Spool is needed in the plan above, because all rows are effectively already “spooled” in the Sort.


    There are also two important differences between these operators; the first being that the Table Spool preserves the order of the rows and the Sort operator (obviously) doesn’t. But in the context of this blog I am mostly interested in the second difference: the Table Spool stores the data it reads in a worktable in tempdb, but the Sort stores it in memory (using tempdb only when the allocated memory is insufficient). And that is why we don’t see any I/O on the worktable in the last example, and why this query, despite the much higher estimated cost, in reality runs a lot faster than the first example.




    For a query that uses TOP with the PERCENT option, the optimizer needs a way to know the number of rows before it can return the correct subset. This requires evaluating the entire query as if there were no TOP, storing the results, and then returning rows from that stored result set. If a Sort operator is needed to satisfy the ORDER BY of the query then this Sort operator gives us the facility to store and count the rows “for free”; in all other cases the optimizer will inject a Table Spool operator in the plan for this purpose.


    However, the Table Spool is in practice way more expensive than what the optimizer thinks it costs. In many cases, it would actually be more efficient to first count the number of rows in the result without saving them, and then run the query again with a Top operator to limit the number of rows returned. Unfortunately, because the optimizer does not fully grasp the actual cost of a Table Spool, this optimization can only be achieved by rewriting the query. In this post I have showed two patterns for this; one that is relatively easy to understand but requires duplicating the code, and a second pattern that doesn’t duplicate code by using a much harder to understand construction.


    Sometimes, especially when returning a large percentage of the results from an expensive query that cannot be simplified when only the count of rows is requested, the rewrite pattern I provided backfires. In such cases an alternative form of rewrite is possible where the results of the entire query are first stored in a temporary table, and then the first form of the TOP (…) PERCENT rewrite is used with that temporary table as the source. Though this form is still somewhat faster than just using TOP (…) PERCENT, it does not save quite as much as the other rewrites do in other conditions, so this form should only be used when you have proven that you need it.


    And obviously, performance is not the only relevant metric for your code, and often far from the most important metric. This is clearly a case where there is a trade-off between performance and maintainability. If you need to use TOP (…) PERCENT on a resultset that will never measure more than just a few thousand rows, then the performance difference is probably not enough to warrant the rewrite. But if you find that one of your key queries is using TOP (…) PERCENT on a huge resultset and the query does not perform as well as you would like, then hopefully the content of this blog post has given you the tools to tackle that problem.


    (And please, for sake of the mental sanity of whoever gets to inherit your code – if you do decide to use any of the rewrite patterns I provided, then please add lots of comments to your code, to explain exactly what you do, and why you are doing it).

  • How TOP wrecks performance (part 1)

    The TOP keyword in the SELECT clause may not be ANSI standard, but I am sure we have all used it. But did you know that it comes with two optional keywords? And have you used them? They are PERCENT and WITH TIES.




    Let’s first look at the basic functionality of TOP, and how it affects performance – which, in fact, contrary to the title of this post, is usually quite good.


    Here is a simple example, using the AdventureWorks2012 sample database (which can be downloaded from Codeplex). Let’s start with a very basic example:


    SELECT   CustomerID, OrderDate, SubTotal

    FROM     Sales.SalesOrderHeader

    ORDER BY CustomerID;


    For this query, based on the available indexes, the query optimizer has two obvious choices (and probably hundreds of less obvious ones). One is to scan the table (using the clustered index on SalesOrderID) and then sort the results on CustomerID. The other is to do an ordered scan of the index on CustomerID, which eliminates the sort from the plan, but introduces a lookup to fetch the OrderDate and SubTotal columns. Based on the available statistics, the query optimizer estimates that the cost of doing a lookup for each of the 30K+ rows in the table exceeds the cost of a sort, so the plan I get for this query uses the first option:


    Adding a TOP operator to this query will result in a very nice performance boost. Here are the modified query and the plan I got after adding just a vanilla TOP clause:


    SELECT   TOP(3) CustomerID, OrderDate, SubTotal

    FROM     Sales.SalesOrderHeader

    ORDER BY CustomerID;


    The obvious and predictable change is the injection of a Top operator that limits the result set to just the first three rows. But as you see, the rest of the plan also changed shape. This is because the optimizer knows that plans are executed left to right. The Top operator, when called by the SELECT operator, calls the Nested Loops operator to get a row; returns it to the SELECT operator; and then sits waiting until it is called again – which normally will happen fairly quickly, unless the SELECT operator has to wait for the application or network to send out the row. The same thing happens two more times, but when the SELECT operator calls the Top operator for the fourth time, the Top operator will immediately return an “end of data” condition without ever bothering to call its descendant Nested Loops node. No matter how many rows the SalesOrderHeader table has, the TOP clause guarantees that there will never be more than three lookups. The Sort operator we had in the first plan was cheaper than 31,465 lookups, but is far more expensive than just three lookups.
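    The row-by-row call pattern described above can be illustrated with a small Python sketch (my own stand-ins, not SQL Server code). Note how the child operator is asked for exactly three rows, no matter how many it could produce:

    ```python
    def clustered_index_scan(n, calls):
        """Stand-in for the scan + lookup branch: records how many rows
        its parent actually requests before yielding each one."""
        for i in range(1, n + 1):
            calls.append(i)
            yield i

    def top(child, n):
        """Sketch of the Top operator: after n rows it reports end-of-data
        without ever calling its child again."""
        for _ in range(n):
            row = next(child, None)
            if row is None:
                return          # child ran out of rows early
            yield row

    calls = []
    result = list(top(clustered_index_scan(31465, calls), 3))
    print(result)      # [1, 2, 3]
    print(len(calls))  # 3 -- only three rows were ever pulled from the scan
    ```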


    So vanilla TOP is actually good for performance. It not only reduces the amount of data returned, reducing load on the network; it also allows the optimizer to make choices that are optimal for the specified lower number of rows rather than for the full result set.




    Unfortunately, both the WITH TIES and the PERCENT option change this. Let’s first focus on WITH TIES. Adding this option to a TOP clause has the same effect you sometimes see in sports results: the top three athletes are listed, but if numbers three and four finish in the exact same time or have the exact same score, both are considered to be in the number three spot and both are included in the results. However, if numbers two and three finish in the same time, the number four is NOT included anymore – so this is not like DENSE_RANK.


    Because the concept of ties requires the concept of an order, the WITH TIES option requires that the query has an ORDER BY clause. (But using any TOP clause without ORDER BY is kind of an abomination anyway). Adding WITH TIES to our previous query has only a minimal impact on the plan:


    SELECT   TOP(3) WITH TIES CustomerID, OrderDate, SubTotal

    FROM     Sales.SalesOrderHeader

    ORDER BY CustomerID;


    If you run this query you will see that there are still only three rows returned, but the actual number of rows returned from the Nested Loops operator to the Top operator is shown as four. Where does that extra row come from? Well, if you remember the results of the original query, you will know that there is no tie between rows #3 and #4. The way you saw that is to look at both rows and compare the customer number. And that is also exactly how SQL Server handles this. On the fourth call, the Top operator will request another row from its descendant and compare the value of its ORDER BY column(s) to the value those columns had in the third row. If they are the same, it will return the row, wait for the next call, and then repeat this until the value changes; only then will it return an “end of data” condition and stop calling its descendants. In this example the value changed directly on that fourth call, so the number of rows processed is just one more than the number in the TOP expression. If you use TOP(4) WITH TIES, you will get six rows in the result and seven rows processed, because now the Top operator knows there are no more ties after reading the seventh row.
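    The WITH TIES logic can be sketched as follows – a Python illustration of the semantics described above, assuming the input already arrives in ORDER BY order (the sample finish times are made up for the demo):

    ```python
    def top_with_ties(sorted_rows, n, key):
        """Sketch of TOP (n) WITH TIES over input that is already sorted on
        `key`: return n rows, then keep reading while the key still equals
        the key of row n; stop at the first different value."""
        out = []
        last_key = None
        for row in sorted_rows:
            if len(out) < n:
                out.append(row)
                last_key = key(row)
            elif key(row) == last_key:
                out.append(row)   # tied with row n: also included
            else:
                break             # first different value: stop calling the child
        return out

    # Finish times: D and E are tied, so TOP(4) WITH TIES returns five rows.
    times = [("A", 10), ("B", 10), ("C", 20), ("D", 30), ("E", 30), ("F", 40)]
    print(top_with_ties(times, 3, key=lambda r: r[1]))  # A, B, C (no tie after C)
    print(top_with_ties(times, 4, key=lambda r: r[1]))  # A, B, C, D, E
    ```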


    So far, all behavior is exactly as expected – with one exception. In the execution plan above, the estimated number of rows is three. This is clearly not correct; there is no way SQL Server can ever satisfy this query without processing at least four rows (to verify that there is no tie), or more (if there is one). Now you may think this is just a minor difference, and in this specific case it is – one row off for the TOP(3) WITH TIES and three rows off for TOP(4) WITH TIES will probably not have a major impact on plan choice. But the difference can be much bigger, as demonstrated by this query and execution plan:


    -- Create index (for the demo)

    CREATE INDEX IX_SalesOrderHeader_TerritoryID

    ON Sales.SalesOrderHeader(TerritoryID);



    SELECT   TOP(3) WITH TIES CustomerID, OrderDate, SubTotal

    FROM     Sales.SalesOrderHeader

    ORDER BY TerritoryID;



    -- Drop index (we don't really need it)

    DROP INDEX Sales.SalesOrderHeader.IX_SalesOrderHeader_TerritoryID;



    The difference between an estimate of 3 and an actual of 4,595 is way too big, and in this case does actually result in a bad plan choice. The lookup is estimated to execute three times, but in reality it executes 4,595 times. The result is a whopping 14,091 logical reads on a table that is only 689 pages in size. In this case, a plan with a clustered index scan and a sort would have been much faster.


    There is good news, though. All the above tests were done on my main machine, which is SQL Server 2012 SP1. I also ran a test on SQL Server 2014, and the new cardinality estimator gave me much better results. The estimated row counts for the input to the Top now actually take data distribution and statistics into account when a query uses TOP WITH TIES. For the query above, the number of rows coming into Top operator was estimated as 3146.5 when I enabled the new cardinality estimator, which is much closer to the actual number (4595). As a result, the plans chosen now have a much better chance of being optimal for the actual data.


    One error that was not fixed in SQL Server 2014 is that, even with WITH TIES, the estimated number of rows read by the Top operator is still equal to the estimated number of rows returned. Since Top with WITH TIES can only stop returning rows to its parent after seeing a row with different values come back from its child node, the estimated input should always be one more – but now we are really talking about an off-by-one error, which for estimates and plan choice is only a minor issue.




    So far we have seen that a vanilla TOP operator can give you great performance benefits, but a TOP operator with the WITH TIES option poses a severe danger. In SQL Server 2012 and before, as well as in SQL Server 2014 with the old cardinality estimator, the data distribution of the ORDER BY column(s) is not taken into account, which can result in a gross misestimate, especially if the ORDER BY columns include lots of duplicates. These wrong estimates can easily result in very bad plan choices, and hence bad performance.


    The new cardinality estimator in SQL Server 2014 fixes this. It still has an off-by-one error, but that is just a minor issue that will hardly ever cause serious performance issues.


    In the next episode, I will focus on the PERCENT option of TOP, and show how this option will wreck performance in an even worse way.
  • Give speakers feedback. It makes a difference.

    I sometimes struggle with finding a catchy title for a blog post. It has to grab the attention, and clarify what the post is about. If you decided to spend time reading my words, I do not want you to finish thinking “if I had known this was the content, I would have done something better with my time”.

    For this post, I think the title just sums up the entire post. If you are short on time, you can stop reading now. Seriously.

    Feedback from SQL Server Days

    Last week I received the feedback from my presentation at the SQL Server Days conference in Belgium, three weeks ago. That was quick – which is great! My kudos to the organization for this very fast turnaround.

    As always when I receive feedback for a presentation, the first thing I do is read the comments that attendees have written on the evaluations. I am very happy to say that a lot of people took the opportunity to explain their marks or to give me some tips. Some of them were very positive. One was about the chilly room temperature (and I can only agree to the comment). Some were very sweet. But there were also several comments that I consider to be pointers to me as to where I can improve. Allow me to share a few.

    “The session itself was very good however I expected other content.”

    By itself, I would consider this a cue to revisit the abstract. But some of the other feedback changes how I interpret this. In this case, I think I spent too much time on content that was not advertised.

    “Unfortunately, the time was too short for this session to cover all the speaker wanted to share with us.”

    I get that a lot, and I know. I always either run over or have to speed up near the end, no matter how much time I am given. But in this case, again, the other comments put a different perspective on the feedback – they show me clearly where I go wrong and what causes me to run out of time.

    “The first part showing the old stuff was too long. One or two of the worst examples was probably enough.”

    And this one was followed by four other comments basically saying the same thing. Yes. That message is very clear. When five out of thirteen comments tell me the same thing and two other comments are probably a side effect of the same root cause, I would be a very lousy presenter if I did not take it seriously.

    “The subject was different from the one announced. This session was a part of the precon I followed some years ago with Bob Beauchemin. I didn't learn anything new in this session.”

    Okay, this comment has nothing to do with the previous one, but I just have to single it out and comment on it. I do not understand the first part; the session title and abstract were all on the web site and on the printed overviews – but I did replace a speaker who had to cancel about a month in advance, so maybe this attendee used a very old version of the schedule?

    The rest of the comment is nice. I never attended any precon by Bob, but I do know him, I have attended some of his sessions, and I deeply admire his technical knowledge and presentation skills. If a session written entirely by myself is sufficiently similar to Bob’s material that attendees think I copied it, then that is one of the best compliments I can get. I’ll take it!

    (For the record: I have never copied other people’s sessions. I do sometimes copy bits and pieces – always with permission of the original author, and I always attribute the material to the original author.)

    Message received

    To understand the comments, you need to know the session. In the session, I show how the OVER clause enables us to solve certain problems. The outline of the session is that I first present the problems with their solutions in SQL Server 2000 (before the OVER clause), then give alternative solutions for SQL Server 2005, 2008, and 2008R2 (where the basic version of the OVER clause is available) and finally present the simple and clean solutions that have become possible in SQL Server 2012, when the windowing extensions to the OVER clause were introduced. In between those demo blocks, I introduce the syntax of the OVER clause.
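As a hedged illustration of the contrast the session draws (the table `dbo.Sales` and its columns are hypothetical, not taken from the actual demos): a running total that required a correlated subquery or self-join before the windowing extensions becomes a single window aggregate in SQL Server 2012:

```sql
-- SQL Server 2000 style: correlated subquery, re-scanning the data once
-- per row -- correct, but verbose and increasingly slow on larger tables.
SELECT s1.SalesDate, s1.Amount,
       (SELECT SUM(s2.Amount)
        FROM   dbo.Sales AS s2
        WHERE  s2.SalesDate <= s1.SalesDate) AS RunningTotal
FROM   dbo.Sales AS s1;

-- SQL Server 2012 style: a windowed SUM with an explicit frame does the
-- same work in a single pass over the data.
SELECT SalesDate, Amount,
       SUM(Amount) OVER (ORDER BY SalesDate
                         ROWS BETWEEN UNBOUNDED PRECEDING
                                  AND CURRENT ROW) AS RunningTotal
FROM   dbo.Sales;
```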

    After reading the comments and with the benefit of 20/20 hindsight, I have to agree. I think I spent about half the available time on SQL Server 2000, 30% on SQL Server 2005-2008R2, and 20% on the SQL Server 2012-2014 stuff. That is not good. There are only very few people left who support SQL Server 2000, and I wasted a lot of people’s time by going over syntax and solutions for that version for half an hour.

    Root cause analysis

    When I need to fix a bug in my code, I do not just dive in head down and start fixing. I have learnt to first step back, analyze the situation, and find the root cause of the problem. Otherwise I end up wasting time fixing a symptom, rather than the problem.

    I do the same here.

    Obviously, I never set out to make this a session on writing big and ugly queries for SQL Server 2000. I wanted to demonstrate the strength of the OVER clause. And I must also admit that I have been struggling with this session from the first time I did it. The first delivery of this session had a different order – I presented the problems one by one, and kept jumping between SQL Server versions each time I moved to the next problem. After that first delivery I left the stage with an unsatisfied feeling; I never got into the flow while presenting, felt awkward all the time, and knew I had to do some serious rewriting or remove the session from my portfolio. I did actually get some fairly decent feedback to that session, but my own mind told me it was no good, and there’s no point in arguing with my harshest critic.

    The version I delivered in Belgium is already the 2.0 version, and a version I have delivered a few times before. When presenting I did feel the flow I normally feel, and I was convinced that, while not my best session, this session is good enough. Until last week. :)

    So how did a session that I designed to highlight functionality introduced in SQL Server 2005 and enhanced in SQL Server 2012 end up spending (wasting) half the available time on SQL Server 2000? Now that the attendees of SQL Server Days have forced me to think about it, I think I can tell. I wanted to convince the audience of the strength of the OVER clause by demonstrating just how much easier some common problems are solved now, as compared to before. That choice was what drove the session design, both the 1.0 and the 2.0 versions. Take a common problem, show the ugly code that was needed to solve this problem back when I was still a young Adonis (“cough”), then show how OVER makes the world a better place. But the problem here is that the queries for SQL Server 2000 are very complex, and very convoluted. To me, it felt wrong to show complex code without giving any explanation. So when presenting the demos, I take the time to highlight how the queries are built, why they are built that way, and why they could not be simplified.

    With very complex queries for four different problems, all those explanations add up, and end up taking way too much time. Time spent on explanations that benefit nobody. What a waste!

    Fix it (Felix)

    Now that I know where I went wrong, I can set out to fixing it. One option is of course to retire the session, but that is not an option I ever seriously considered. The OVER clause is a great feature, and I have a good story to tell about it. I just need to find the right way to tell it.

    My first plan was to do another total overhaul and create a version 3.0 of this session. I already had some rough ideas forming in my head. Not a totally new session yet, just first thoughts – more time is needed before this will evolve into a new session.

    And then I realized that I have no time for this. In just a week from now, I will be in Portland, at SQL Saturday Oregon, and I will present there. The chosen session? You got it: “Powerful T-SQL Improvements that Reduce Query Complexity” – the same session I did at the SQL Server Days. One week is not enough time for a full overhaul. Plus, I have to deliver a session that matches the abstract (and hence the expectations of the attendees) next week, and a full overhaul cannot guarantee that.

    So instead of a 3.0 version, I will update my existing version to a 2.5 version. The rough outline will remain the same, but I will severely cut back on the “old” part. I will not show all the demos for that, and definitely not explain as much as I did before. I will only briefly outline why the queries needed to be so complex, without going into details. I will then spend more time on the SQL Server 2005 stuff, because this is still relevant to the large number of people still working on 2005, 2008, and 2008R2. But I plan to spend way more time than before on the enhancements that have become available in SQL Server 2012. In the next week, I will go over my slides and demo code and see what I need to change there in order to support the 2.5 version – probably not much (it’s just a point release, after all), but I need to make sure that slides and demos will help me keep the pace just right, not get in my way.

    A complete rewrite may still be needed. That depends on the feedback I receive next week, both from my harshest critic (that little voice in the back of my head), and from the attendees. I don’t know if there will be official feedback forms in Portland, but I accept feedback from all sources: conversations, emails, comments on my blog, twitter, and whatever other means you can think of. If the gist of the feedback is that the session is great, I will leave it as is, but if attendees think it is still not as good as it should be, I will definitely start working on that 3.0 version.

    Bottom line

    I started this post by saying that you don’t need to read it – just the title is enough. If you ignored me and continued reading until here, you will probably see that I was right.

    Giving speakers feedback does make a difference. Your feedback has opened my eyes to the flaws in my session on the OVER clause, and has convinced me that I need to change it. And it will again be your feedback on the updated version that will tell me whether my changes were sufficient, or more work is needed.

    Speakers give you a lot. They sacrifice their time (a lot of it – only those who ever created a presentation from scratch will really know just how much time that takes!) to share their knowledge with you, often for no more reward than a free entrance pass to the conference. The least you can do as a reward is to give them your feedback. Be positive when they deserve it. And be constructive when they can improve. If you don’t tell us, we will never know!

  • Execution plans, laundry, and a giveaway

    In just a week from now, SQL Saturday #337 will kick off in Portland, Oregon. And I will be there – the third time already for me to be speaking in Portland.

    For a European, Portland is not the most logical location. But the organization over there is a pretty smart bunch of people. They figure that being close to Seattle gives them a great opportunity – so whenever they get the chance, they will reserve a Saturday just before the PASS Summit in Seattle for their own event. And then they make sure to organize this event so extremely well that people, speakers as well as attendees, will want to travel a few days early and make Portland their first stop.

    SQL Mastery Sessions

    But this year, the crew of SQL Saturday Oregon tried something new. They have added a day of pre-cons (or, as they call it, SQL Mastery Sessions) to their program. On Friday, October 31, three speakers will deliver a full-day session, allowing them to dive much deeper in their favorite subjects.

    For the BI fans, Mark Tabladillo will present on “Self-Service BI with Power BI”. I do not know Power BI (or any BI at all), but I do know Mark and I am sure that the session will rock. Based on the abstract, I think it will be a “from zero to hero” type of day. If you want to be the one that brings self-service BI into your organization, go and sign up for this session!

    If you are more DBA oriented, then you should definitely consider going to another rock star in the SQL world: Argenis Fernandez. His session “SQL Server Internals and Data Recovery” promises to give you a real deep-dive insight in how SQL Server stores data, how fate and bad karma can cause that data to become corrupted, and (in my opinion the most important skill a DBA should have, and one you hope never to have to apply to your production database) how to salvage your company’s valuable data if the unthinkable happens. Argenis will correct me if I’m wrong (see the comments below), but based on the abstract I suspect that this session is not for the faint of heart, nor for the beginning DBA. But if you are at least at medior level and want to learn how to recover from almost every imaginable disaster, then this is the session where you should be!

    Both sessions sound great to me, but I am spared the challenge of choosing – because I will be presenting the third SQL Mastery Session, aimed at a target audience of both database developers and DBAs at intermediate level. In “Understanding Execution Plans”, I will explain how to read an execution plan, starting at the very beginning (like: what is an execution plan and where do I find it), and then exposing intimate details of what SQL Server is doing when your query is running. You will learn how a merge join operator processes a right anti semi join, how an exchange operator shoves data between threads, how aggregation relates to my laundry, and hundreds of other things – but most of all, you will learn knowledge that you can apply to slow queries at work, to get a better understanding of what makes them slow, and how you could fix it.

    First the bad news

    Earlier this week I was told that my SQL Mastery Session is already sold out. That is great for me – nothing strokes the ego as much as seeing that there are many people who do not run away screaming at the idea of spending a day locked up with me in a classroom, listening to me talking geek, but are actually prepared to spend real dollars for the privilege. But for you, the reader, this may be bad news – especially if the paragraphs above have whetted your appetite and you now also want to be there. But don’t despair: you can sign up for the waiting list and hope that a seat opens up. And there is another option as well.

    Then the good news

    The good news is that the kind people of the SQL Saturday Oregon organization have given me permission to give away one seat to my session. If you win, you not only get a $129 value admission ticket, you even get to bypass the waiting list to find yourself in the last available seat, reserved especially for you! Sounds good? Yeah, thought so. So, how do you apply?

    Easy. You just have to help the SQL Saturday Oregon team and me to get all three sessions sold out. Spread the word, do the marketing! Here are the simple steps:

    1. Write on Twitter, Facebook, your blog, a web forum, or any other public place why you think that everyone should sign up for the SQL Mastery sessions in Oregon. Be original, be creative, and most of all: be convincing!

    2. Send me an email, with a link to your post. Send it from an address where you will be able to receive replies – otherwise I will not be able to contact you.

    3. Wait, pray, and hope.

    On Monday, October 27, I will write all email addresses on paper slips, throw them in a bowl, and draw the winner. To reward effort, I do reserve the right to put in two, three or even four slips for entries that are especially creative, convincing, and/or original. This is determined on a purely subjective basis by an independent jury consisting of only me. But every entry will result in at least one slip in the bowl, so every participant has a chance to be the lucky winner!

    I will pass the email address of the winner to the SQL Saturday Oregon team, who will then contact him or her to work out the details.

    Everyone’s a winner

    Only one person can win the main prize. But everyone who enters in the competition will receive a consolation prize. I will not disclose what that prize is at this time – just watch your email next week, to see the surprise.

  • Database Mail … and then the SMTP Server changed

    The database I inherited when I started my current job sends out lots of mails. It does so using Database Mail, and it has lots of mail accounts and mail profiles defined. I know that several of those profiles are used, I suspect that some are not, and I have no idea about the rest – one day I will find the time to clean up, but so far there have always been more pressing matters to attend to.

    But today the mail administrator told me that due to a change in architecture, SQL Server had to start using a different SMTP server for sending mails. Quite an easy task if you have just a single profile – just a few clicks in the Database Mail Configuration Wizard, and done. But repeating those same mouse-clicks for every profile in the list was not my idea of a morning well spent, so I decided that I’d have to script this. (This should be easy – we have just a single SMTP server, so I could hit every single mail account and did not have to bother with exceptions).

    Usually, scripts for such a task are very easy – just type a well-chosen search string in your favorite search engine, check the first two or three hits, and you’ll have a script. Usually even more than one. Carefully inspect the script (just because it’s on the internet does not mean it’s safe!), copy the script, paste into SSMS, make adjustments for your own situation, do one more inspection just to be sure – and then hit the F5 button and say “time to grab a coff … oh wait, it’s already done”.

    In this case I had no luck. Maybe I used the wrong search phrase, or maybe there is a way to accomplish this that is so easy that nobody ever bothered blogging about it and I am the only one who managed to overlook the option. Or maybe nobody has ever tried to automate the task of changing SMTP servers.

    Bottom line, I did not find a pre-made script for this task, so I put in the effort to write one, and then decided to share it with you.

    Note that the script below was tested on SQL Server 2008R2 only. Also note that it will update all mail accounts to use the new SMTP server. If you have a more complex setup with multiple servers and only some need to change, you will have to add the correct filtering criteria to the WHERE clause.

    DECLARE @NewServer sysname = 'NotTelling.mail', -- New SMTP server
            @OldServer sysname = 'MySecret.mail',   -- Old SMTP server
            @account_id int;

    -- Cursor over all mail accounts that use the old SMTP server.
    DECLARE Cursor_MailAccounts CURSOR LOCAL FAST_FORWARD
    FOR SELECT account_id
        FROM   msdb.dbo.sysmail_server
        WHERE  servername = @OldServer;             -- Add extra logic here

    OPEN Cursor_MailAccounts;

    FETCH NEXT
    FROM  Cursor_MailAccounts
    INTO  @account_id;

    WHILE @@FETCH_STATUS = 0
    BEGIN;
        -- Point this account at the new SMTP server.
        EXECUTE msdb.dbo.sysmail_update_account_sp
                    @account_id = @account_id,
                    @mailserver_name = @NewServer;
        FETCH NEXT
        FROM  Cursor_MailAccounts
        INTO  @account_id;
    END;

    CLOSE Cursor_MailAccounts;
    DEALLOCATE Cursor_MailAccounts;
    (And remember, just because you found it on the internet doesn’t mean it’s safe!)
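After running the script, a quick check confirms that no account still points at the old SMTP server (assuming the same single-server setup; the server name below is the placeholder used in the script above):

```sql
-- Should return no rows once all accounts have been updated.
SELECT account_id, servername
FROM   msdb.dbo.sysmail_server
WHERE  servername = 'MySecret.mail';  -- the old SMTP server
```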

    With that done, my next task was to fix the mails being sent from SSIS packages. They use an SMTP connection that is defined in the package, not SQL Server’s own Database Mail, so I had to open and edit them by hand. Luckily, I have only four packages active, and only three of them have a Send Mail task, so I did not see any need to automate this.

  • PASS Summit Abstract Feedback

    Last week, Adam posted the feedback he received from PASS on the abstracts he had submitted for the PASS Summit 2014. I was very happy he did – not only because the post itself was a good read, but also because I had not seen the PASS announcement on making the feedback available (I did see it in a PASS community mail a few days later, though).

    I can imagine that not everyone will want to see this feedback, but I do. I love harsh criticism, as long as it’s fair – that’s how I can learn and improve. (The written comments are always my favorite part of the session feedback from conferences where I have spoken). So I have followed Adam’s lead and requested a copy of the feedback on my abstracts. And I will continue to follow Adam, by posting the feedback here – that way, all of the community can learn from my mistakes, instead of just me.

    Understanding Execution Plans [full day, level 400] (not accepted)

    For troubleshooting long running queries, looking at the execution plan is often a good starting point. Once you know how the query is executed, you know why it's slow and what you can do to speed it up.
    But what if the execution plan is just beyond your understanding? What if it uses operators you have seen before, but do not really understand? What if you look at the execution plan, but just don't see the problem?
    In this full-day workshop, you will learn everything you need to be able to read and understand any execution plan. We'll start with an overview of execution plans as a whole, and then dive in and look at all the components, and how they fit together. This will increase your understanding on why the optimizer picks a plan, and what you can do to make it pick a better plan.

    Whether you have read your share of execution plans or whether you wouldn't know where to find them, this workshop will teach you everything you need to know about execution plans. Attend this workshop if you want to hone your tuning skills!
    Throughout the day, we will have exercises to help you get an even better understanding of the theory. In order to get the most out of the day, attendees are encouraged to bring their own laptop, with SQL Server (any version), Management Studio, and the AdventureWorks sample database installed.

    • This sounds a great topic and a good advanced level. I notice it was presented at 300 level in Copenhagen and this time it's aimed at 400 level. Hopefully it will not start too advanced for the audience. - great abstract. maybe more demo time??
      One of the learning outcomes is "Understand how to read execution plans." - hopefully at level 400 people who know how to do this already
    • Good abstract, very good idea for a session.
    • The topic is very important for any developer and looks like the presenter will be very well trained on this session as it is being presented in three other major conferences.
    • well written abstratc with good level of details. interetsing topic - but i dont see ay references to updates in sql server 2014. goals are clear. demo % could be higher for this topic.
    • Yay for audience participation.
      In the future, when noting this session has been presented at SQL Saturdays, please note if it was presented as a full-day pre-conference or as a normal session as normal SQL Saturday sessions do not match the length of a full session.

    Setting the level in an abstract is a constant struggle for me. Is a session only 500 level if only MVPs have a chance of understanding it, and if people walk out with brain damage? If a presenter is able to take the most complex subject matter and explain it in a way that can be understood by everyone, is this a 200 or 300 level session because the intended audience can be relatively new, or is it still a 500 level session because the subject matter is deep and advanced? Because of this struggle, I often adjust the level when submitting the same session. In Copenhagen, I got feedback that it was more advanced than the audience expected, so I adjusted. The goal of the session is to ensure that even people who have little experience with execution plans can learn to read them (hence the learning outcome), and then continue to dig deeper where most sessions stop (hence the advanced level). This can be seen as sitting uncomfortably between two stools, or as killing two birds with one stone. I think that the session achieves the latter – but I apparently fail to explain that well enough in the abstract.
    Even for a full day session, time is a constraint. I had to choose between more demos and more room for new content, or more coverage of the basic content. I chose the latter (even the SQL 2012 content is just the last 30 minutes or so), and I stand by that choice, but I realize that everyone has their own preferences.
    When I put in where I presented the session before, I assumed it would be obvious that these were pre-cons for SQL Saturdays – otherwise it would not be the same session. But it was apparently not clear at all. I should have been more specific.

    Lessons learned:
    Make sure the abstract reflects that people can come in with little previous knowledge, but will still leave with advanced understanding.
    Be careful with assumptions.


    A New Method for Effective Database Design [full day, level 200] (not accepted)

    Your databases may be normalized and may perform well … but is the data actually correct?
    Hidden pitfalls abound even in seemingly-mature database designs. Complexity, overabstraction, and miscommunication, can create situations where data simply can’t meet expectations.
    In this full-day seminar, you will learn how to use a requirements-based methodology to translate you users’ real business problems into a rock-solid database design. You will learn exactly what questions to ask, how to ask them, and how to transform the answers into a fully normalized database that accurately captures all the business requirements.

    If you have ever experienced problems due to bad database design, attend this seminar to make sure that never happens again.

    • Good outcomes - sounds appealing
    • One typo "to translate you" rather than "your".  I really like this session idea, though I can't really foresee what the "new" method might be.  Definitely sounds like the only method I know of. Expect it would be quite good.
    • This is a very important topic.
    • Seems the presenter can pull this off quite nicely.
    • Great topic
    • interesting topic. decent abstract. clear goals. low % of demos for a full day session
    • Abstract is for a full-day session and the submitter notes they have presented the session at SQL Saturdays and SQLBits.  One is not able to compare these due to the massive difference in the session length.

    Hmmm, this is a challenge. The abstract apparently fails to make it clear that the method I present here is indeed very different from traditional database design methods. I remember a previous version of the abstract that was rather boring and technical and got rejected all the time. I tried to spiffy it up and make it sound more appealing, but when I now reread it, it does sound a lot like the marketing speak I see and hear from advocates of other methods. I’ll have to redo this one for the next time, and find a middle ground between accurate but boring, and spiffy marketing speak.
    I agree with the demo remark, and I would love to do more demos, and maybe some classroom exercises, but the subject is simply too big for that. Maybe if I can present both Monday and Tuesday? ;-)
    And I again caused confusion by including references to previous deliveries without explicitly mentioning that it was a precon at those events as well. Too bad: with the mostly positive other comments, I feel that this is the only reason that this session was not selected.

    Lessons learned:
    Abstract should contain some “near-marketing” speak to make it sound appealing, but I have gone overboard. I failed to really explain what is new and special about the presented method.
    And, again, make very clear that the other deliveries were precons as well.


    T-SQL User-Defined Functions, or: Bad Performance Made Easy [general session, level 300] (accepted)

    User-defined functions in SQL Server are very much like custom methods and properties in .Net languages. At first sight, they seem to be the perfect tool to introduce code encapsulation and reuse in T-SQL. So why is this feature mostly avoided by all T-SQL gurus?
    The reason is performance. In this session, you will learn how user-defined functions feed the optimizer with misleading and insufficient information, how the optimizer fails to use even what little information it has, and how this can lead to shocking query performance.
    However, you will also see that there is a way to avoid the problems. With just a little extra effort, you can reap the benefits of code encapsulation and reuse, and still get good performance.

    • Should be a good session - I would attend.
    • Very good abstract, and a decent idea for a session. I like that it includes positive benefits along with the bad effects as functions aren't all bad in all cases. I think you have a typo in the notes, as  you say it got a 4.73 on a 15 scale :)
    • This session looks very good. We always hear how UDFs are bad for performance but never what we can do about it. Having a session that shows that can be very good.
    • Well written with clear and concise goals.
    • good topic. great abstract. clear goals. exciting to see session with 100% demos

    Thanks for the kind words. And yeah, that 4.73 was on a 1-5 scale, missed the dash.
    I hope the last reviewer will not be disappointed when the actual demo percentage is about 95% (the PASS submission form has a dropdown for the demo percentage with 0%, 25%, 50%, 75%, or 100%, so I had to round).

    Lessons learned:
    Looks like this abstract, apart from the typo, can remain unchanged.


    Now Where Did THAT Estimate Come From? [general session, level 500] (not accepted)

    The SQL Server Query Optimizer makes its plan choices based on estimated rowcounts. If those estimates are wrong, the optimizer will very likely produce a poor plan. And there's nothing you can do about it. Or is there?
    In this session, you will learn exactly where these estimates come from. You will gain intimate knowledge of how statistics are built and maintained and how they are used to estimate row counts. But you will also learn how filters and joins influence those estimates.
    Though the focus of this session is on understanding the cause of bad estimates, you will also learn ways to fix the problems and get better estimates - and hence, better performing queries.

    • 500 level session and the pre-requisits seem to make it clear that prior knowledge is essential. I hope the presenter can do justice to this - that would be my main concern.
    • I wish the title make it more clear that we were talking about the optimizer, but for a 500 level session, perhaps that isn't needed. "Though the focus of this session is on understanding the cause of bad estimates" I guess I see that, but really the session is about where estimates come from based on the rest of the abstract, and where a bad one comes from is just an offshoot of that.
    • The abstract looks solid. The goals are aligned with the abstract and also look solid.
    • Abstract gives a clear picture of what to expect.  Goals could be better defined.  Is this truly a 500 level?
    • Type-o in Session Prerequisite, "sold" should be "solid".
    • Doubtful if this 500
    • interesting title. good topic - revevant for senior dbas or developers interesting in an indepth understanding of execution plans. abstract has good level of depth goals are terse but convey the necessary details. high % of demo is good.

    Again the level problem, and I must admit that I was, and still am, in doubt on whether to present this as level 400 or 500.
    The “can do justice” remark makes me wonder if this is from a reviewer who got only the abstract, or the abstract and my name. In other words, are these doubts based on the abstract, or based on a perception of my understanding of and ability to explain the subject? I would love to get into contact with this reviewer (mail me, please!). No hard feelings, I am just curious to understand where this opinion comes from and how I can improve.
    The remark on “talking about the optimizer” is interesting. That is NOT what I talk about in this session, but I see what part of the abstract results in this misconception. I need to rewrite that. And yes, you are completely right that the focus is on where estimates come from, not just bad estimates, so I will definitely steal those words for my v2 abstract.

    Lesson learned:
    Focus the abstract on the actual content, not on related content.


    Normalization Beyond Third Normal Form [general session, level 400] (not accepted)

    Do you believe the myths that “Third Normal Form is good enough”, or that “Higher Normal Forms are hard to understand”?
    Do you believe the people who claim that these statements are myths?
    Or do you prefer to form your own opinion?

    If you take database design seriously, you cannot afford to miss this session. You will get a clear and easy to understand overview of all the higher Normal Forms: what they are, how to check if they are met, and what consequences their violations can have. This will arm you with the knowledge to reject the myths about higher Normal Forms. But, more important: it will make you a better designer!

    • This is a good academic session - but not sure about it's real world application - a totlal theoretical discussion (based on the fact it has 0% demos) may not hold peoples interest for too long.
    • Database design is my favorite topic, and one I think db devs need to know more about. My only problem with this abstract is that it feels way overreaching to cover all of these normal forms at a 100 level where 1, 2 and 3 aren't even going to be understood all that well.
    • Looks very interesting. I suppose this is something we all should learn about. Looks the abstract covers a lot of material and while is 100% theory it is theory that must be learned anyway.
    • Love the topic. Abstractand Goals tend to "preach" one side of the story.Real-world examples would be helpful.
    • Not sure this is 100 level,
    • routine topic. abstract and topic dont show anything new or exciting for sql 2012 or 2014. The sessions seems all theory and has no demos. This may not be of much appeal and interest to audience
    • Zero demonstration time.
    • While the abstract is presented as a 100-level; the abstract description and goals are 200-level.

    I always prepare abstracts in a Word document, and then copy the content into the submission form. In this case I must have made a mistake in the level selection: I intended to submit this as a level 400 session, not as level 100. Oops.
    What is a demo? Is it only a demo if the presenter sits behind the laptop, opens Management Studio, and executes T-SQL code? Or can you consider applying normalization rules to a concrete example of a database design, even if it’s on pre-made PowerPoint slides, a demo too? The reviewers seem to say “yes” to the last option; I thought “no” when I submitted.
    The “abstract and topic dont show anything new or exciting for sql 2012 or 2014” comment is the only remark in all the feedback that I can’t use in any way to improve myself. The subject of database design is not tied to a version of SQL Server, or even to SQL Server itself. I could present this session at a conference on any relational database and it would be equally useful.

    Lessons learned:
    When the session submission form is completed, “check, check and check again” is not sufficient; I probably should wait a day and then check three times more (or just learn not to make such sloppy mistakes).
    All real-world examples (or made-up but realistic examples) are demos. Even if I am still in PowerPoint.


    Inside the Nonclustered Columnstore Index [general session, level 400] (not accepted)

    You have heard the marketing speak. You may have seen the demos. And maybe you have even played with the feature: Columnstore indexes, the amazing "go faster" switch in SQL Server 2012!
    But do you know how they work? Come to this session to find out. We will lift the hood to give you a fascinating in-depth view of how Columnstore indexes are built and read, how batch-mode execution operates, and how these features work together to achieve an amazing performance boost.

    • Topic should be well received
    • Thanks for the abstract, it's good to have someone speaking on columnstore Index, also it would be great if we can include some real examples and demo.since there is no any demo.

    I must have made another mistake – not visible in the feedback, but in the mail I got with the results this was listed as a half-day session. I entered this as a normal (75 minute) session in my Word document.
    No demo (or rather 10%, but I had to round to either 0 or 25) is a result of my choice to cover the internals in depth. Most internals can’t be shown in demos. I love seeing lots of demos in the sessions I attend, so I fully understand the reviewer. But I am also convinced that adding more demos would not improve the quality of this specific session. If that reduces the chance of being accepted, then so be it. I’d rather present the best possible session at only a few events than a watered-down version at more places.
    That being said, I probably should not have submitted this session, and I will not submit it anymore to other conferences. The nonclustered columnstore was SQL Server 2012, and is now sort of obsolete. SQL Server 2014 was not released yet when I submitted my abstracts, but I knew it was coming and I knew that it would bring the improved, clustered version of the columnstore index – this session is old news. I love this session and I regret having to decommission it. But this session is now about as relevant as a session on the internals of English Query.

    Lessons learned:
    Once more: check, check, double check – and then check again.
    No matter how much I love a session, when its time has come I must no longer submit it.

    Powerful T-SQL Improvements that Reduce Query Complexity [general session, level 300] (not accepted)

    We’ve all dealt with nightmare queries: huge, twisted monsters that somehow work, despite being ugly and unmanageable. The time has come to tame these beasts, and the solution is available now, in SQL Server 2012.
    New T-SQL functions offer out-of-the-box solutions for many problems that previously required complex workarounds. Paging, Running totals, Moving aggregates, YTD, and much more comes at the power of your fingertips in SQL Server 2012. The only thing you need to do is learn the syntax. And that is exactly what this session is all about: a thorough description and explanation of the syntax, and loads of demos to demonstrate how you can use all these new features.
    Attend this session to boldly take SQL Server where it has never gone before!

    • I like the abstract it informs attendees of some of the things that they can expect to learn.
    • The title of the session is kind of vague, since whereas the windowing functions are useful TSQL improvements, they are not the only ones that are available throughout time. Would have gotten a better review if the title was more specific to windowing functions and 2012.
    • Looks good. If selected I would encourage the presenter to include/replace some aspects of it with SQL 2014 features that also help to improve how queries are written.
    • interesting topic and catchy session name. abstract is very well written and points out sql server 2012 features - relevant and current. good level of details in the goals and good balance of demo %
    • The session name is somewhat misleading for what appears to be a "Windowing Functions" presentation.

    Thanks for calling the title catchy. I like it too. But the comments of three other reviewers make me painfully aware that this title fails to deliver on the most important function of a title: it does not give the conference visitor an “at a glance” idea of the broad subject matter. Yes, all abstracts are available. But how many conference attendees read them all? I don’t, that’s for sure! I use the titles to filter down to just two or three promising sessions, then read the abstracts (if I have time – otherwise I pick based on title alone!). There were indeed more improvements in SQL Server 2012, and there are even more in SQL Server 2014. Based on the title, I myself would expect to see them covered.

    Lesson learned:
    I already knew this but failed to apply the lesson: session titles should be catchy, but should also set a correct expectation for the audience.


    And that concludes another long post. Thanks to all the volunteers who spent countless hours on the thankless task of reading through hundreds of abstracts, commenting on them, and selecting between sessions that are almost impossible to compare, with nothing more to go on than a few hundred words in the abstract. Your hard work is much appreciated; the PASS Summit would not be possible without people like you. So thanks!

    And I’d like to conclude this post with the same words that Adam has at the end of his: Have any additional feedback for me? Post it below!

  • SQLPass NomCom election: Why I voted twice

    Did you already cast your votes for the SQLPass NomCom election? If not, you really should! Your vote can make a difference, so don’t let it go to waste.

    The NomCom is the group of people that prepares the elections for the SQLPass Board of Directors. With the current election procedures, their opinion carries a lot of weight. They can reject applications, and the order in which they present candidates can be considered a voting advice. So use care when casting your votes – you are giving a lot of influence to the people you choose.

    Because there are three seats open for elections, every PASS member with a completed profile on the PASS site gets a maximum of three votes. I used only two, and I have a good reason for that. I believe that there is something fundamentally broken in the NomCom selection and election process.

    For the past few years, PASS has been actively trying to morph from an “international” organization, run only from the USA, into a truly international organization. As a result of changes to the bylaws and to the procedure for Board of Directors elections, the current Board of Directors now for the first time in history (as far as I know) has two European representatives – but sadly no one from any other non-North-American area. A lot of work still has to be done, and a lot of time still has to pass, before we can expect a truly international Board of Directors.

    The process for the NomCom election appears to be a step back in this regard. Three seats are to be elected. Two are truly open, the third is reserved … for a US/Canada representative. I do not understand this. Does PASS really need a guaranteed US/Canada seat on the NomCom to get a truly international representation? Wouldn’t it have made much more sense to reserve at least one seat for “non-US/Canada”?

    It is also relevant to consider that the actual NomCom consists of five persons. Three are elected by the members, one is the Immediate Past President of PASS, Bill Graziano, and I could not find anything about the fifth seat but my assumption is that this seat is reserved for founding partner and primary sponsor Microsoft. So that means that two seats are already filled with representatives from the US, and the reserved US/Canada seat in the elections means that North America is guaranteed a 3 out of 5 majority in the NomCom. And depending on the votes, US/Canada may also get the last two seats.

    With only two candidates for the NomCom who are not from the US or Canada, you might consider all of this a moot point. From a practical point of view, the reserved seat for US/Canada will never make a difference (though a reserved seat for non-US/Canada might). But purely out of principle, I want to voice my opinion that this process is broken. (And then we can also speculate on whether there would have been more international candidates if there had been no reserved seats – if I knew that the rules of the election favored my opponents and reduced my chances, I would think twice before even applying.)

    I am convinced that all NomCom candidates will try to put the best candidates forward, and that none of them will favor candidates from their own region. But if PASS wants to become a truly international organization, it needs to have international representatives in all bodies, including the NomCom. Ideally, that is achieved by having completely open elections for all bodies – no seats should be reserved for regions. But I understand that the current membership and the past record of election participation from members from different regions would cause such a setup to end up with all North American representatives, so I do see the need for reserving seats for under-represented areas – as a temporary measure.

    What I do not understand is the reason for reserving a seat for a region that is already way over-represented. And that’s why I decided to go on strike against candidates from the US or Canada, voting only for candidates from the rest of the world. Nothing personal, and nothing related to the actual candidates – just making a point, and maximizing the chance of both open seats going to non-North-American candidates. There were only two such candidates – and that is why I used only two of my three votes.

    There are now less than two days left before voting closes. If you have not already, I urge you to go out and vote now. Use your votes. Use them wisely.

  • SQL TuneIn Zagreb 2014 – Session material

    I spent the last few days in Zagreb, Croatia, at the third edition of the SQL TuneIn conference, and I had a very good time there. Nice company, good sessions, and awesome audiences.

    I presented my “Understanding Execution Plans” precon to a small but interested audience on Monday. Participants have received a download link for the slide deck.

    On Tuesday I had a larger crowd for my session on cardinality estimation. The slide deck and demo code used for that presentation will be available through the conference website, but for those who cannot wait, I have also attached them to this blog post.

    The organizers of the event have already announced their plans to host a fourth edition. And if I have any say in the matter, I will visit again.

  • TechDays 2014 – Session material

    Last week in The Hague, I had the honor of presenting two very different sessions at the Dutch TechDays conference: a deep-dive session on the internals of the SQL Server 2012 nonclustered columnstore index, and a very developer-oriented session on the bare basics of performance tuning. To my delight, both times the room was filled with very interested people, asking great questions and, I guess, enjoying my presentations.

    All sessions were recorded, and I have been told that in due time, all will be available on Channel 9. But what if you can’t stand the wait? What if you just want to quickly browse through my slide deck without having to endure my crappy jokes? Or what if you want to play around with my demo code but are of the rightful opinion that manually copying my code while constantly pausing and unpausing the video would be an utter waste of time?

    The answer to each of those problems is: you simply download the slides and demo code from the attachment to this blog post. Simple, huh?

  • Fake statistics, and how to get rid of them

    There are two ways to test how your queries behave on huge amounts of data. The simple option is to actually use them on huge amounts of data – but where do you get that if you have no access to the production database, and how do you store it if you happen not to have a multi-terabyte storage array sitting in your basement? So here’s the second best option: you cheat.

    Luckily, SQL Server has a feature that allows you to do just that. I must warn you that this feature is undocumented and unsupported. You should never under any circumstances use this feature on a production server, unless explicitly directed to by qualified Microsoft staff. Using it, sparingly, on a test box is okay. But as an undocumented feature, there is no guarantee that it will always work, or that it will continue to work in future versions.

    With that disclaimer out of the way, let’s take a look at the “cheat” option. As you probably know, the Query Optimizer relies on statistics for its decisions. If those statistics are wrong, the decisions will probably be wrong. Here is where we find the leverage to cheat: if we can force SQL Server to use statistics that are representative of the production database rather than the test database, we will get the execution plan it would generate on production. The actual performance will probably still be lots faster, but at least we can look at the plan and use our understanding of execution plans to check that we get scans where we want scans, seeks where we want seeks, and all the right types of joins.

    Fake statistics

    The undocumented feature that we can use here actually has a very simple and easy-to-remember syntax. You just run an UPDATE STATISTICS command, adding the options “WITH ROWCOUNT=xxxx, PAGECOUNT=xxxx” to force SQL Server to store the numbers you mention as the rowcount and pagecount. Clear the plan cache, then test your queries and check how they behave with the simulated number of rows and pages. I could give a much longer description and examples of the usage, but others have already done so – see for instance this post from the horse’s mouth (aka the Query Optimization Team), this post from Benjamin Nevarez, or if you are able to read German this excellent post from Uwe Ricken.
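    To make the syntax concrete, here is a minimal sketch of such a cheat session (remember: test servers only!). The table name is from AdventureWorks; the row- and pagecounts are invented numbers, chosen only to mimic a large production table:

    ```sql
    -- Undocumented and unsupported: never run this on a production server!
    -- Pretend the table holds 50 million rows on 1 million pages.
    UPDATE STATISTICS Sales.SalesOrderHeader
    WITH ROWCOUNT = 50000000, PAGECOUNT = 1000000;

    -- Clear the plan cache so new compilations use the faked numbers
    -- (again: test servers only!).
    DBCC FREEPROCCACHE;

    -- Now request the execution plan for your query and check whether you
    -- still get the scans, seeks, and join types you want at this size.
    SELECT CustomerID, SUM(SubTotal)
    FROM   Sales.SalesOrderHeader
    GROUP BY CustomerID;
    ```

    The query itself still runs against the small test data, so timings are meaningless; it is only the plan shape that now reflects the simulated size.
    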

    … and how to get rid of them

    But what all those blog posts fail to mention is what happens later. Of course, if you just create a small test table, slap in a few sample rows, fake big statistics, check the execution plan and then drop the table, that is not an issue. But what if you have a test database that contains a lot of carefully crafted test data, and that is used for other tests as well? You may not want to run all the tests with those artificially bumped rowcounts!

    In my case, I ran into this in the AdventureWorks sample database, my loyal companion for presentations and demos. I will present a pre-conference seminar on Execution Plans in Exeter (March 21), in Copenhagen (March 28), and in Zagreb (June 2). For my preparations, I wanted to force a hash spill warning, so I artificially lowered the rowcount for the SalesOrderDetail table to just 200. That worked just fine – this nifty feature can be used to mimic every rowcount, not just the big numbers.

    Because I use this sample database for other purposes as well, I wanted to immediately reset the statistics to their normal values. I figured rebuilding all the statistics on the table with the FULLSCAN option would do the trick. Right? Wrong! The UPDATE STATISTICS did run, and it took a while to scan all rows in the table (as requested) – but then it still retained the rowcount and pagecount values that I had forced earlier! And there is no visible indication at all – executing the UPDATE STATISTICS … WITH FULLSCAN statement simply runs for a while, then reports success.

    It was actually mere coincidence that I later did some other tests on the same table, and just happened to notice that the estimated rowcount for this table was still 200. It then took me at least an hour of searching the internet and trying many different options (including using zero or a blank value for the rowcount and pagecount parameters, combining different other options of the UPDATE STATISTICS command, clearing or not clearing the plan cache, and a few other things I don’t even remember). None of them worked. I could of course count the actual number of rows and pages and use those values, and that would have worked because my AdventureWorks database never changes – but for a test database where the data can actually change over time, this option would not work.

    In the end, I finally found a method that works. But it is really a sledgehammer approach, and I prefer not having to do this on large tables on a regular basis: ALTER TABLE Sales.SalesOrderDetail REBUILD. After running that statement, I found that the statistics on the SalesOrderDetail table had finally reverted to their normal behavior. Until the next time I need to fake a different number.
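    For reference, the full sequence described above looks like this; the FULLSCAN step is included only to show the statement that does not undo the fake counts:

    ```sql
    -- Fake a tiny rowcount to force a hash spill warning (test box only!):
    UPDATE STATISTICS Sales.SalesOrderDetail WITH ROWCOUNT = 200;

    -- This scans every row and reports success... but silently keeps the
    -- faked rowcount of 200:
    UPDATE STATISTICS Sales.SalesOrderDetail WITH FULLSCAN;

    -- The sledgehammer that actually restores the real counts:
    ALTER TABLE Sales.SalesOrderDetail REBUILD;
    ```
    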


    On a development server, using the WITH ROWCOUNT and WITH PAGECOUNT options of UPDATE STATISTICS is a fine method to simulate large numbers of rows, or to simulate the effect of wrong statistics. But unfortunately, the numbers given stick for far longer than I like; this is not documented anywhere, and not easy to undo. Rebuilding the table and indexes appears to be the only solution.

    If you have found other ways to return from fake row- and pagecounts to the normal statistics behavior, post a comment and share your knowledge!

  • Database Design training – for free?

    When I started this blog, my plan was to focus on two main subject areas. One of them is SQL Server, T-SQL, and performance. The other is database design.

    Looking back over my post history, I honestly cannot say I delivered on the second area. Not because I have nothing to say about database design, but because I found it doesn’t lend itself to a blog. In the SQL Server / T-SQL area, there are subjects that can be isolated and described in a single post, or in a short series. In database design, I do not see such subjects. Everything is related, and the only good way to cover this is end to end – starting at requirements gathering, through modeling, normalization, more normalization, and ending at the final database design.

    That cannot be covered in a blog post. So I found a different place: Pluralsight. Pluralsight is a company that provides online training on IT-related subjects. Excellent training, delivered by the best specialists available, for a fraction of the price of classroom training. And with the additional benefit of being able to pause and rewind as often as you want.

    Relational Database Design

    I was delighted when Pluralsight offered me the opportunity to record a course on relational database design for them. I must admit that creating the course took more time than I had ever anticipated – but it is now finished and available in the Pluralsight course library. It is a very comprehensive course – to see the entire course, end to end, you will need to set aside almost a full work day: 7 hours and 33 minutes. But luckily, you can consume the course in small portions, watching a few clips when you have time and stopping to return later when you’ve had enough.

    The first module is the introduction, in which I give a historic overview of how storage hardware and methods have evolved to where they are now. If you are pressed for time, you might consider skipping this module. Knowing the past can help understand the present better, but this module is not required to understand the rest of the course.

    In module 2, I explain the IDEF1X notation for ER diagrams, that I use throughout the course. Module 3 presents you with techniques you can use to gather the information you need from the domain experts, and module 4 shows how to use that information to create a first draft of the data model. Module 5 then explains how to convert between the ER diagram and the relational database representation of the data model – I personally prefer to postpone this conversion until after the normalization, but not everyone likes to work that way; hence my decision to cover the conversion first.

    Normalization is a subject that is far more complex than many people realize – yet much easier to do than many people fear. I use two (long!) modules to cover basic normalization, up to third normal form; and then a third long module to go over all the so-called “higher” normal forms – often considered to be unnecessary, too hard to do, and too complex to understand. I hope you’ll disagree with those qualifications as much as I do after watching this module.

    IDEF1X may be popular, but it is far from the only ER diagramming method. In the last module, I cover several other methods, showing you how much they are alike and where you’ll find the important differences. After seeing it, you should be able to work with any ER diagramming method you may encounter.

    I personally think that this is a very good course on relational database design. But I might be just a bit biased. So I suggest that you go ahead, check it out, and be your own judge!

    Cheap …

    My relational database design course is not only (in my opinion) very good – it is also extremely affordable. Have you ever looked at the cost of a classroom course on database design, then decided that you simply cannot afford it? The good news is – Pluralsight is cheaper! Prices for individual subscriptions start at just $29 per month, or $299 per year for a basic subscription – which already gives unlimited access to the full course library. Or you can pay $49 per month / $499 per year for a “plus” subscription, that adds access to exercise files, assessments, certificates, and offline viewing.

    And for that low price, you do not get access to just my course on relational database design – the full course library of Pluralsight already contains over a thousand titles, with new titles being added almost every day. Readers of this blog should be excited to see over fifty courses in the SQL Server subject area, by renowned experts such as Paul Randal, Jonathan Kehayias, Leonard Lobel, Glenn Berry, and many others. And if you want to look beyond the borders of SQL Server, Pluralsight has courses on almost every other subject (as long as it is related to dev and IT).

    … or free?

    But the title of my blog promises “free”, not “cheap”. Thirty bucks per month may be a steal, but it is not free. Was I lying? Of course not! You can check my relational database design course (and more) for free. How? Simple!

    Method 1: available to all, but a bit limited. Go to the Pluralsight website, and sign up for a free trial. This costs you absolutely nothing, and it gives you access to all the Pluralsight courses for a whole week. There is a catch, though – your viewing time is limited to 200 minutes. So you can use the trial to watch one of the shorter courses completely, to get a very extensive taster of a longer course, or to browse around and watch snippets of many courses – that is up to you!

    Using this method is an easy way to watch almost half of the relational database design course for free. But what about the rest? Enter method 2!

    Method 2: sufficient to watch the entire course, but not available to everyone. Pluralsight has allowed me to give out a few one-week training passes. Like the free trial, these give access to all courses in the course library, for a full week. Unlike the free trial, they are not time limited. If you have the stamina, you can get a whole 168 hours of training out of them. (I do not advise this. I am not responsible for the effects on mental and physical health if anyone thinks they should even try this. Please don’t). But with a bit of planning, a week should be enough to consume the entire relational database design course, and still have time left to check out some of the other courses!

    So how do you get such a free pass? The easiest method is to post a comment to this blog, explaining why I should pick you. On February 10, I will choose the five best, most convincing, or maybe just the funniest comments, and reward them with a training pass!

    So go to the Pluralsight site, sign up for that free trial, check out my course, then return here and post a comment. Convince me!

  • Parameterization and filtered indexes (part 2)

    In my previous post, I demonstrated how the presence of a filtered index can get in the way of successful parameterization, possibly resulting in too many query compilations and procedure cache bloat. I suggest reading that first, because I will pick up right where I left off.


    Use the Force, Luke


    If you read the previous post very carefully, you will have noticed that I wrote that if a plan is unstable, it is “unsafe for simple parameterization” – which suggests that SQL Server will not have such inhibitions when using forced parameterization. Let’s have a look. Assuming you still have the indexes I created in the previous blog post, execute the script below: it changes the parameterization setting for AdventureWorks2012 to forced, clears out the plan cache (remember not to do this on a production system!), executes the query that would not successfully parameterize when the database was set to simple parameterization, and inspects the plan cache. You will see that this time, the query was indeed parameterized. If you also add the code to watch the “Unsafe Auto-Params/sec” counter, you will see no changes to its value.


    USE AdventureWorks2012;
    GO

    ALTER DATABASE AdventureWorks2012
    SET PARAMETERIZATION FORCED;
    GO

    -- Clear the plan cache. Never do this on a production system!
    DBCC FREEPROCCACHE;
    GO

    SELECT SUM(OrderQty)
    FROM   Sales.SalesOrderDetail
    WHERE  ProductID = 706;
    GO

    WITH XMLNAMESPACES
         (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
    SELECT      cp.objtype AS ObjectType,
                cp.usecounts AS UseCount,
                st.[text] AS QueryText,
                cp.plan_handle AS PlanHandle,
                stmt.value('@ParameterizedPlanHandle',
                           'varchar(64)') AS ParameterizedPlanHandle,
                qp.query_plan AS [Execution Plan]
    FROM        sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    CROSS APPLY qp.query_plan.nodes('/ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple') AS batch(stmt)
    WHERE       qp.dbid = DB_ID()
    AND         st.[text] NOT LIKE '%ExcludeMe%';
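    If you want to follow along with the “Unsafe Auto-Params/sec” counter without opening Performance Monitor, a query like the following can read it (a sketch of my own, not code from the original demo; note that the counter is cumulative, so compare its value before and after running the test):

    ```sql
    -- Read the cumulative value of the Unsafe Auto-Params/sec counter.
    -- Run before and after the test; the difference is the number of
    -- failed auto-parameterization attempts in between.
    SELECT [object_name], counter_name, cntr_value
    FROM   sys.dm_os_performance_counters
    WHERE  counter_name = 'Unsafe Auto-Params/sec';
    ```
    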



    At first sight, this may appear to be a very simple and elegant solution for the problem. Just set all databases to forced parameterization and then sit back and relax, waiting for the inevitable flood of happy emails from end users who wish to thank you for the tremendous performance boost you just gave them.


    But wait. When has solving a problem in SQL Server ever been this simple? There must be a catch, right? But where?


    A new use case


    In order to see the problem, I will use a different database – so let’s first clean up the mess we made in AdventureWorks2012 and restore it to its original state, so that other bloggers can use it for their demos without my stuff getting in the way:


    ALTER DATABASE AdventureWorks2012
    SET PARAMETERIZATION SIMPLE;
    GO

    DROP INDEX Sales.SalesOrderDetail.ix_ProductID_Over_1000;
    DROP INDEX Sales.SalesOrderDetail.ix_ProductID_Incl_OrderQty;

    A very common use case of filtered indexes is in combination with “soft deletes” – data that is no longer valid is not physically removed from the table, but kept (often to maintain a history of events). A special column, usually called “is_deleted”, is used to mark this information as historic only. Since the majority of queries are only interested in current data, you will find the predicate “is_deleted = 0” in almost every query – which makes this an ideal candidate for a filtered index. And since we are now under the impression that filtered indexes really require forced parameterization, we will change that setting right away. Here is the script to create this scenario in a sample database (it may take some time to run!):


    USE tempdb;
    GO

    IF EXISTS (SELECT * FROM sys.databases WHERE name = 'FilterDemo')
    BEGIN
        DROP DATABASE FilterDemo;
    END;
    GO

    CREATE DATABASE FilterDemo;
    GO

    ALTER DATABASE FilterDemo
    SET PARAMETERIZATION FORCED;
    GO

    USE FilterDemo;
    GO

    CREATE TABLE dbo.AllThings
       (ThingID int NOT NULL,
        ThingName varchar(20) NOT NULL,
        is_deleted bit NOT NULL DEFAULT (0),
        LotsOfStuff char(2000) DEFAULT ('Placeholder'),
        CONSTRAINT PK_AllThings
            PRIMARY KEY (ThingID)
       );

    DECLARE @i int = 0;
    WHILE @i < 5000
    BEGIN
        SET @i += 1;
        INSERT dbo.AllThings
               (ThingID,
                ThingName,
                is_deleted)
        VALUES (@i,
                CHAR(65 + RAND() * 26) + CHAR(65 + RAND() * 26) + CHAR(65 + RAND() * 26), -- Random name
                CASE WHEN RAND() < 0.99 THEN 1 ELSE 0 END);  -- Most products are deleted
    END;

    CREATE INDEX ix_Name_NotDeleted
    ON dbo.AllThings (ThingName)
    INCLUDE (is_deleted)    -- Always include the columns you filter on!
    WHERE is_deleted = 0;



    Once the script is done, let’s get a quick count of the number of things that have not been deleted. I used a formula that results in 99% of the things being deleted, but there is a random factor involved. The actual number of non-deleted things should be close to 50. It was 58 on my system, which I found by running this query:





    SET STATISTICS IO ON;

    SELECT COUNT(*)
    FROM   dbo.AllThings
    WHERE  is_deleted = 0;


    But if you switch to the messages tab, you will see a very disturbing number. I expect SQL Server to execute this query by simply scanning the filtered index, as this index exactly contains all the rows it needs to count. But the output from STATISTICS IO shows that a total of 1256 logical reads have been made. Over twelve hundred reads to count just 58 rows? How is that possible? Let’s take a look at the execution plan to find out:


    As you see, the filtered index is not used at all; the optimizer chose to scan the clustered index instead, wading through all 5,000 “things” in my table to find just those 58 that were not deleted.


    The reason for this is simple. I have enabled forced parameterization. So I told SQL Server that, no matter the consequences, it should always replace constant values with parameters. So the plan that the optimizer was forced to compile was not for the query I typed, but for this query instead:


    DECLARE @1 int = 0;


    SELECT COUNT(*)
    FROM   dbo.AllThings
    WHERE  is_deleted = @1;


    And the optimizer has to produce a plan that will always return the correct results, for any possible value of the parameter. Of course, when executing this query with parameter value 1, scanning the clustered index is the only possible way to return correct results, so the plan choice that was made was indeed the only possible choice. By enabling forced parameterization, we have effectively prevented the optimizer from using any filtered index at all (except through views).
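    The parenthetical above deserves a quick illustration. When the filter is part of a view definition, it is not a constant in the submitted query text, so forced parameterization has nothing to replace, and the filtered index remains a candidate. A minimal sketch – the view name CurrentThings is made up for this example:

    ```sql
    -- Hypothetical view: the predicate lives in the view definition,
    -- not in the query text, so forced parameterization cannot touch it.
    CREATE VIEW dbo.CurrentThings
    AS
    SELECT ThingID, ThingName, is_deleted
    FROM   dbo.AllThings
    WHERE  is_deleted = 0;
    GO

    -- Queries through the view can still use ix_Name_NotDeleted.
    SELECT COUNT(*)
    FROM   dbo.CurrentThings;
    ```

    Whether this is a workable design depends on your application, of course – but it is the one escape hatch left open under forced parameterization.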


    More force?


    In a case such as this, where we know that we will always use is_deleted = 0 in the query predicate, it can become very tempting to find ways to convince the optimizer to choose the query plan we want without giving up on the forced parameterization plan. Let’s first see what we can achieve by parameterizing this query ourselves and applying the OPTIMIZE FOR hint:


    DECLARE @1 int = 0;


    SELECT COUNT(*)
    FROM   dbo.AllThings
    WHERE  is_deleted = @1
    OPTION (OPTIMIZE FOR (@1 = 0));


    Okay, I admit, it was a rather desperate attempt and I didn’t really expect much of it. The OPTIMIZE FOR hint tells the optimizer that I want a plan that gives the best performance for that particular value, but the results still have to be correct for other values. So the optimizer will still consider what would happen if I supply other values, and will reject the filtered index because of that.


    So, back to the original query, and now use more force. I know that this query will always perform better when using the filtered index – so if the optimizer fails to see that, I will just force it. Applying an index hint does just that. It may be called a hint, but it is really a directive; if you hint an index, it WILL be used no matter what. So this should help, right?



    SELECT COUNT(*)
    FROM   dbo.AllThings WITH (INDEX = ix_Name_NotDeleted)
    WHERE  is_deleted = 0;


    No, it does not help. The result is an error message. We have now given the optimizer a totally impossible task. We told it to parameterize the query, no matter what, so it did. It now has to produce an execution plan for the parameterized query, and that query will be reused with different values. You and I know that the value will really never be different, but SQL Server does not, and it still has to guarantee correct results. But then we also told SQL Server that it really has to use an index that does not include some of the rows that, for some values of the parameter, may have to be returned. Like I said – an impossible task, and SQL Server responds just like you and I would do, by throwing up its hands in despair and giving up.


    The simple solution


    Luckily, there is a solution to this problem. It’s simple – simple parameterization, to be precise. Just set the parameterization option back to its default setting of simple, and you will get a much better behavior.







    ALTER DATABASE FilterDemo
    SET PARAMETERIZATION SIMPLE;

    SELECT COUNT(*)
    FROM   dbo.AllThings
    WHERE  is_deleted = 0;


    Now the query takes just two logical reads. And the execution plan looks as expected: a scan of the filtered index, that’s all.




    When you look only at the plan cache, forced parameterization may look like manna from heaven. But when you look further, you will see that setting the parameterization option to forced is probably not a good idea at all. It may appear to solve some issues, but you get bigger issues in return – filtered indexes that might boost performance tremendously are ignored, and if they are hinted it can even cause errors. Do you know all the code that is running on your system? Are you sure that none of your application developers has ever added an index hint? Do you want to find out the hard way?


    Simple parameterization in combination with filtered indexes may not always play well with the plan cache. But I would think very long and hard, and do a close inspection of all objects in the database and all queries used before even considering switching to forced parameterization.

  • Parameterization and filtered indexes (part 1)

    Parameterization is a process where SQL Server slightly modifies a query in the parse phase: it replaces constant values by parameters. So if you submit

    -- Query 1


    SELECT COUNT(*)

    FROM   Sales.SalesOrderDetail

    WHERE  ProductID = 706;

    the parser will replace this by (and fool the optimizer into thinking you submitted):

    DECLARE @1 smallint = 706;


    SELECT COUNT(*)

    FROM   Sales.SalesOrderDetail

    WHERE  ProductID = @1;

    You can verify this very easily. If you look at the actual execution plan and hover over the left-most icon (the SELECT), a popup window opens that shows (among other things) the query that was fed into the optimizer – but beware, this can be misleading, as will be demonstrated later in this post. I will show a more reliable method shortly. Don’t forget to disable the “Include Actual Execution Plan” option now, as it causes some extra work that would influence the output of the queries below.


    The benefit of this is that a second execution of the same query with a different value will reuse the same plan. This saves compilation time, and less procedure cache space is consumed for these queries. However, you do run the risk of getting bad plans because of parameter sniffing with a bad value. Because that risk increases with query complexity, SQL Server’s default behavior is to only parameterize very simple queries (“simple parameterization”), but you can opt to use “forced parameterization” instead, by setting the database option PARAMETERIZATION FORCED.
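    The setting itself is a database-level option. A sketch of how you would switch it (shown against AdventureWorks2012, the database used throughout this post – do not flip this on a production database without testing):

    ```sql
    -- Switch to forced parameterization...
    ALTER DATABASE AdventureWorks2012
    SET PARAMETERIZATION FORCED;

    -- ...and back to the default of simple parameterization.
    ALTER DATABASE AdventureWorks2012
    SET PARAMETERIZATION SIMPLE;
    ```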


    Parameterization in action


    Before adding filtered indexes to the mix, let’s start with some experiments. First, verify that you are on your development server and not accidentally logged in to a production box. Then run the query below. The first four lines disable the “Optimize for ad hoc workloads” option, a server-wide setting that should probably be enabled on every production box, but is not as relevant for development – and would complicate this blog post. (The same principles apply; it is just a bit harder to demonstrate.) The fifth line instantly clears out the plan cache, a very bad thing to do in production, but great for experiments such as this.

    -- Query 2

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'optimize for ad hoc workloads', 0;
    RECONFIGURE;

    DBCC FREEPROCCACHE;
    Let’s take a look at the contents of the plan cache – it should be empty, so ideally this query will return an empty result set.

    -- Query 3


    SELECT      cp.objtype AS ObjectType,

                cp.usecounts AS UseCount,

                st.[text] AS QueryText,

                cp.plan_handle AS PlanHandle,


                stmt.value('(@ParameterizedPlanHandle)[1]',
                           'varchar(64)') AS ParameterizedPlanHandle,

                qp.query_plan AS [Execution Plan]

    FROM        sys.dm_exec_cached_plans AS cp

    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp

    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st

    CROSS APPLY qp.query_plan.nodes('/ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple')

                                                 AS batch(stmt)

    WHERE       qp.dbid = DB_ID()

    AND         st.[text] NOT LIKE '%ExcludeMe%';

    The filter on “ExcludeMe” is a trick to ensure that the entry for the query itself does not show up. Other unrelated queries can also pop up, even when you are on a development system that nobody else is using – internal management tasks and installed tools (such as intellisense or third party database management tools) may fire queries that will all show up. Most of these should be excluded because of the filter on DB_ID (but do note that as a consequence, you MUST run this query in the same database where you are running your experiments – which for this blog post will be AdventureWorks). If the remaining unrelated queries bother you, find a part of the query text that identifies those queries and that never occurs in your own queries and add an extra filter on st.[text] to exclude those queries as well.


    Now rerun query 1 above, then inspect the plan cache again (using query 3), and you should see a result similar to this:


    The “Adhoc” plan is not a real execution plan. It is a placeholder that links the original text of the query before parameterization to the actual plan, called a “Prepared” plan. Check the values in the PlanHandle and ParameterizedPlanHandle columns to see this. Or click the content of the Execution Plan column to see a graphical representation of each plan. If you submit Query 1 a few more times, the UseCount of the Adhoc plan goes up, because in case of an exact match between the text of a submitted query and the text of a query in the plan cache, the same plan will be reused. The UseCount of the Prepared plan does not go up, even though this is the plan that is actually executed – a weird way of counting, but documented (though not entirely correct).


    To see parameterization in action, let’s count orders for a different product:

    -- Query 4


    FROM   Sales.SalesOrderDetail

    WHERE  ProductID = 710;

    Go back to query 3 and run it to see what this has done to the plan cache. Here are the results I got:


    The raw text of the query is different, so a new “Adhoc” entry has been made. But after parameterizing the query text, SQL Server found that it already had a plan, so it could skip the compilation process and start executing right away. This saves lots of CPU cycles for the (relatively expensive) compilation process. It also reduces plan cache bloat, because the fake plan for the unparameterized query takes up a lot less space than a full execution plan (add the column cp.size_in_bytes to the SELECT list in query 3 if you want to see how much it saves). As you can see, the UseCount of the parameterized plan does go up this time, which conflicts with the remark in Books Online that it is “not incremented when parameterized queries find a plan in the cache”. I guess it actually represents the number of distinct unparameterized plans that have been linked to this parameterized plan.
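    If you want to see the space difference for yourself, the size_in_bytes column mentioned above can simply be added to the SELECT list of query 3. A stripped-down sketch:

    ```sql
    -- Compare the size of the Adhoc shell with that of the Prepared plan.
    SELECT      cp.objtype       AS ObjectType,
                cp.size_in_bytes AS SizeInBytes,
                st.[text]        AS QueryText
    FROM        sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    ORDER BY    cp.size_in_bytes DESC;
    ```

    On my system, the Adhoc shells are an order of magnitude smaller than the full Prepared plan – exact numbers will vary, so check your own output.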


    Assuming you use the default of simple parameterization, a more complicated query should not use this mechanism. We can verify this using the same method. After clearing the plan cache (using the last line of query 2 above), I added some needless complexity to the previous query:

    -- Query 5

    SELECT     COUNT(*)

    FROM       Sales.SalesOrderDetail AS sd

    INNER JOIN Sales.SalesOrderHeader AS sh

          ON   sh.SalesOrderID = sd.SalesOrderID

    WHERE      sd.ProductID = 710;

    The join to SalesOrderHeader does nothing, and will even be eliminated by the query optimizer; the actual execution plan is still the same as before. But if you now run query 3 to inspect the plan cache contents, you will get just a single row in the result:


    In this case, there is just an “Adhoc” plan that contains the original query text, including the hard-coded product ID value. It does not reference a different, parameterized plan; and when you click the Execution Plan column, you will indeed see a real execution plan. Change the product ID in query 5 back to 706 and execute it, and a new row will be added to the plan cache – again containing a full plan. So in this case, the optimizer had to do all the work to produce a plan. Imagine hundreds of queries like this on a busy production system, and you can see how all those compilations could impact overall performance, and bloat the plan cache. Using forced parameterization changes this, but at the risk of potentially introducing parameter sniffing issues – so pick your poison.


    As you can see, parameterization works as designed, and it can be quite useful. Depending on your workload, you might want to consider enabling forced parameterization. But that is not the subject of this blog post!


    Filtered indexes


    The feature “filtered indexes” was first introduced in SQL Server 2008. This feature can be very useful, despite its known limitations (hint: if you filter on any columns not included in the index, always INCLUDE them – that solves 95% of the problems). But it has an effect on parameterization and plan reuse that many people are not aware of. To illustrate this, let’s just create a filtered index that should not affect my demo queries at all – an index that includes only products with a much higher product number.

    -- Query 6

    CREATE INDEX ix_ProductID_Over_1000

    ON Sales.SalesOrderDetail (ProductID)

    WHERE ProductID > 1000;

    Run this query to create the index, then clear the plan cache, activate the option to include the actual execution plan and then run query 1 again. Looking at the plan, you might think that nothing changed – just as before, the value 706 has been replaced by the placeholder @1 in the “Statement” property of the “SELECT” iterator. But as I said, this is not always a reliable source of information – let’s get a second opinion from the plan cache by using query 3 again. Here is the result:


    The two STATISTICS XML queries are the result of including the actual execution plan (if you don’t believe me, clear the procedure cache, then rerun query 1 and you will see only that single line). Just a single Adhoc plan, no ParameterizedPlanHandle, no Prepared plan – the picture is clear, the plan was not parameterized. The information in the execution plan is misleading.


    So how can a filtered index that would never have been used for this query have this effect on parameterization? The explanation lies of course in the existence of a filtered index that has the ProductID column in its predicate – the same column that is also used in the predicate of the query. Because of the different values, the filtered index may look irrelevant to this query. But if the parameterization were successful, the same plan would ALSO be used for this query:

    -- Query 7


    SELECT COUNT(*)

    FROM   Sales.SalesOrderDetail

    WHERE  ProductID = 1706;

    After parameterization, this query looks the same as query 1. If you look at the actual execution plan of this query, you will see that this query not only CAN, but also WILL use the filtered index we created earlier. If SQL Server had allowed parameterization to take place, this option would not have been available. To understand the misleading information in the actual execution plan, we have to understand the actual order of events, and looking at the right performance counter can help here (thanks to Alejandro Mesa, who helped me understand this). Let’s clear out that procedure cache again, and now run the batch below – it is query 1 again, sandwiched between two copies of a query that reads the “Unsafe Auto-Params/sec” counter.

    -- Query 8

    SELECT cntr_value

    FROM   sys.dm_os_performance_counters

    WHERE  [object_name] LIKE '%:SQL Statistics%'

    AND    counter_name = 'Unsafe Auto-Params/sec';



    SELECT COUNT(*)
    FROM   Sales.SalesOrderDetail
    WHERE  ProductID = 706;


    SELECT cntr_value

    FROM   sys.dm_os_performance_counters

    WHERE  [object_name] LIKE '%:SQL Statistics%'

    AND    counter_name = 'Unsafe Auto-Params/sec';


    If you follow along, you will see that the value of this performance counter goes up by one. This is an artifact of a chain of events that starts with one component (I assume the parser – I guess someone like Paul White would probably know, or immediately attach a debugger to find out) deciding to parameterize the query – this is the version of the plan you see in the graphical execution plan. The next component (probably the optimizer) notices that the plan is not stable – meaning that different values of the constant can result in different plans being optimal. That makes it unsafe for simple parameterization, so the parameterized version is rejected, and the original, non-parameterized version is compiled, stored in the plan cache, and executed instead.


    Unfortunately, this mechanism of rejecting unsafe parameterizations may get a bit too enthusiastic. Let’s first create one more index – this time not filtered, but with an included column:

    -- Query 9

    CREATE INDEX ix_ProductID_Incl_OrderQty

    ON Sales.SalesOrderDetail (ProductID)

    INCLUDE (OrderQty);

    If I now want to find the total ordered quantity instead of just the number of rows for a product, I would modify my original query 1 as follows:

    -- Query 10

    SELECT SUM(OrderQty)

    FROM   Sales.SalesOrderDetail

    WHERE  ProductID = 706;

    This query will never use the filtered index. Change the value 706 to something above 1000 and check the execution plan if you don’t believe me. Then use a hint to force the filtered index and compare the new execution plan with the old one to see the reason. The new index includes the OrderQty column, so it covers this query. The filtered index does not include this column, so a lookup into the clustered index has to be added to the plan, making it much more expensive (about twice the cost, according to the optimizer’s estimates). For this particular query, the filtered index is totally irrelevant – so now the plan is stable again, and there is no need to reject the parameterization attempt, right? Well, you can use any of the methods outlined above (clearing the cache, running the query, then looking at the cache contents; or watching the “Unsafe Auto-Params/sec” when submitting the query) and you will see that the parameterization is still rejected. The decision to reject or accept a parameterized query is not made after a full optimization, but after just a quick glance at the start of the optimization process. By the time the optimizer has explored enough options to realize that there is a single plan that is optimal for all values, even those included in the filtered index, the decision has already been made and it cannot be undone.
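    For reference, the hinted variant I suggested above for comparing plans would look like this (using a value the filtered index actually covers):

    ```sql
    -- Force the filtered index; the plan now needs an extra lookup into the
    -- clustered index to fetch OrderQty, roughly doubling the estimated cost.
    SELECT SUM(OrderQty)
    FROM   Sales.SalesOrderDetail WITH (INDEX = ix_ProductID_Over_1000)
    WHERE  ProductID = 1706;
    ```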



    If you have issues with a bloated plan cache or too much CPU time going to compiling queries, check for filtered indexes. Their habit of getting in the way of successfully parameterizing queries may be a root cause of your problems.
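    A quick way to take that inventory is to ask the catalog views; sys.indexes flags filtered indexes and stores their predicate:

    ```sql
    -- List all filtered indexes in the current database, with their predicates.
    SELECT OBJECT_NAME(i.object_id) AS TableName,
           i.name                   AS IndexName,
           i.filter_definition      AS FilterPredicate
    FROM   sys.indexes AS i
    WHERE  i.has_filter = 1;
    ```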


    But that’s not the whole story! In the next part, I will show you how to convince SQL Server to change the default behavior of not parameterizing queries if a filtered index is present – but I will also demonstrate how this can result in even worse performance, or even runtime errors!
  • Understanding Execution Plans

    To me, it feels as if 2014 is a long time away. But it isn’t. Sinterklaas has already dropped off his presents and is probably already back in his castle in Spain. Christmas is almost upon us. And before we know it, we’ll be washing oliebollen down with Champagne.

    That also means that I need to get cracking on preparing my precon seminar on execution plans. What precon seminar you say? Oh right – I didn’t tell you yet. The good folks organizing SQL Saturday #269 in Exeter (UK) (on March 22) have decided to extend the event with a full training day on Friday March 21. For that purpose, they invited eight of the best SQL Server speakers in the world, and me, to spend a whole day pouring our knowledge into other people’s heads.

    The full program for this excellent day can be found here – and believe me, if I was not speaking that day, I would be trying to clone myself and visit at least four of them!

    My session on that day, as you can see, focuses on execution plans. So what can you expect if you sign up? Here is a short overview.

    I will start with the bare basics. What is an execution plan, what is its role in a SQL Server database, and most of all – how can you get to see it, how do you start reading it (Left to right? Right to left?), and how do you get to those extremely interesting details that Microsoft has carefully hidden from view? Why is an estimated plan not estimated, why is an actual plan actually mostly an estimated plan? What is an iterator, what does it do, and how does it interact with other operators?

    After that, we’ll dive into the details. What is a seek, what is a scan? How do you notice when a seek secretly scans, and why should you care? Why do people say scans are bad, and why are they wrong? (Or, maybe even more important, WHEN are they wrong, and when not?)

    Next up are the joins. Inner, outer, loop, merge, hash – but also semi, and anti semi. Yes, those are all join types, and we will explore them all. How do they work, what are the benefits and what are the costs of all of them? How can you influence the type of join chosen, and why would you want to?

    All that, and much more, will be included in my seminar. And that’s just what I have planned for before lunch. At the end of the day, you can expect to have a good working knowledge of almost every iterator that you can encounter in an execution plan. I will also explain what has changed in execution plans in SQL Server 2012 and SQL Server 2014. And to ensure that nobody falls asleep from my constant talking, I will give you all some exercises in between, challenging you to immediately apply what you just learned. These exercises may not be the kind of exercises you expect – but trust me, you will find them fun and refreshing, and they will also teach you how to apply your knowledge to all kinds of problems.

    Does this sound interesting or useful to you? I hope so – why else are you on a SQL Server blog site? So get cracking – click this link and sign up for my precon right now. (Or for one of the other precons – they are all great!) If you do so soon enough, you can still apply for the special “Early Bird” rate of only £150, which is valid until December 15. But don’t despair if you see this post later – from December 16 until February 28, you still pay only £185 – still a steal for a whole day of training! (Even the £200 last-minute rate that applies from March 1st is great value – but honestly, why wait?)

    I am looking forward to visiting Exeter this March. I hope to see all of you there. Either in my precon – or if you choose to attend one of the other precons, then maybe in one of the many sessions that will take place the next day.

  • SQLRally and SQLRally - Session material

    I had a great week last week. First at SQLRally Nordic, in Stockholm, where I presented a session on how improvements to the OVER clause can help you simplify queries in SQL Server 2012 enormously. And then I continued straight on into SQLRally Amsterdam, where I delivered a session on the performance implications of using user-defined functions in T-SQL.

    I understand that both events will make my slides and demo code downloadable from their website, but this may take a while. So those who do not want to wait can download the material from this blog post.

    Both SQLRally events have recorded all their sessions. It will obviously take a while to edit and publish all those recordings – but those who missed my session and want to check it out with my explanations know that if they wait a while, they can watch the recording online on the SQLRally websites.

    I once more would like to thank all volunteers who organized these events, all the sponsors who helped fund them … and most of all, all attendees who made my time in Stockholm and Amsterdam amazing. You were a great crowd, both during my session and in the many chats I had during the breaks.
