A place where I share my experience on SQL Server and related technologies
Microsoft Press (@MicrosoftPress) gives you the opportunity to download many eBooks for free.
There is an eBook for every taste :) To find and download the most interesting eBook for you, point your browser here.
What is the evolution of the Microsoft Data Platform? Find out on May 25 and 26, 2016 at the "Evolution of the Data Platform" edition of the 24 Hours of PASS!
Register now: thanks to the sponsors, the 24 Hours of PASS is presented at no cost to you!
The complete schedule of the event is available here.
No matter where in the world you follow the event from, the important thing is to know that it will be 24 hours of continuous training on the evolution of the Microsoft Data Platform!
The 24 Hours of PASS also offers a sneak peek at what you can expect from the next PASS Summit, which will be held in Seattle, WA from October 25 to 28, 2016.
This article focuses on a new feature of SQL Server 2016 known as the Query Store. We will talk about performance issues related to query plan choice changes and how the Query Store can help us identify queries that have become slower.
Have you ever experienced your system slowing down, or even going completely down? What happens? Everyone expects you to fix the problem as soon as possible, and the boss is standing over your desk! Have you ever upgraded an application to the latest version of SQL Server and then faced a plan change that slowed the application down? Query plan choice changes can cause these problems. A performance problem related to the tempdb system database is not as hard to fix as one related to a query plan change. For tempdb, you can move it to a faster drive or increase the number of tempdb files; but when you have to find the slow-running queries, you first have to figure out why they are slow!
Detecting and fixing slow-running queries takes a long time, because you have to look into the plan change as well as the locks that occur, and a question arises in your mind: what did the former plan look like? So you try to answer this question, but has the Data Collector been activated on the server? If it hasn't, the only place you have to investigate slow-running queries is the plan cache, and it may not be suitable for troubleshooting: when memory pressure occurs on the server, the queries you are looking for may have already been evicted from the cache. Finally, once the issue is in your hands, can you modify the query text? If you cannot, do you know the system stored procedures to create and manage a Plan Guide?
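When you cannot touch the query text, a Plan Guide lets you attach a hint to it from the outside. Here is a minimal sketch using the sp_create_plan_guide and sp_control_plan_guide system stored procedures; the Sales.Orders table and the query text are hypothetical examples, not part of the original article.

-- Attach an OPTION (RECOMPILE) hint to a query we cannot modify
-- (the Sales.Orders table and the statement text are hypothetical)
EXEC sp_create_plan_guide
    @name = N'PG_Orders_Recompile',
    @stmt = N'SELECT * FROM Sales.Orders WHERE CustomerID = @p0;',
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = N'@p0 int',
    @hints = N'OPTION (RECOMPILE)';

-- Drop the plan guide when it is no longer needed
EXEC sp_control_plan_guide @operation = N'DROP', @name = N'PG_Orders_Recompile';

The statement text passed in @stmt must match the application's statement exactly, which is one of the reasons plan guides are fiddly to manage compared to the Query Store.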
Suppose you are given a query with two predicates, one of which has the highest selectivity: which plan is the best? Suppose you are given a query with two joined tables, for example table A joined to table B. What is the best way to implement the join: A to B, or the opposite? Now imagine a query that joins 80 tables. Each color in the following picture represents an execution plan generated and evaluated by the Query Optimizer for the same query. In practice, there are hundreds or thousands of possible plans for a query of medium complexity.
Picture 1 – Execution Plans (Projects PICASSO: http://dsl.serc.iisc.ernet.in/projects/PICASSO)
The SQL Server Query Optimizer considers many plans, and when your data changes, it might select a different one. Usually, when the data crosses a plan-choice boundary, performance stays approximately the same, but sometimes the actual performance is visibly different.
What does the Query Store do for you?
The Query Store stores all the plan choices and the related performance metrics for each query, it identifies queries that have become slower recently, and it allows DBAs to force an execution plan easily! If you force an execution plan for a particular query, the Query Store makes sure your choice survives server restarts, upgrades, failovers and query recompiles.
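In practice, forcing a plan is a single call once you have located the regressed query in the Query Store views. A minimal sketch; the search string and the query_id/plan_id values are hypothetical, you read the real ones from the views first:

-- Find the plans recorded for a given query text (the search string is hypothetical)
SELECT q.query_id, p.plan_id, qt.query_sql_text
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
WHERE qt.query_sql_text LIKE N'%Orders%';

-- Force plan 3 for query 42 (hypothetical ids), and undo it later if needed
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 3;
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 3;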
How does the Query Store capture data?
Every time SQL Server compiles a query, a compile message is sent to the Query Store and the execution plan of the query is stored. In the same way, when a query is executed, an execute message is sent to the Query Store and the runtime statistics are stored after the query completes. The Query Store aggregates this information according to the time granularity you have chosen; the aggregation is done in memory (because we don't want to kill the server) and then, based on the database option DATA_FLUSH_INTERVAL_SECONDS, the aggregated data is written to disk asynchronously in the background, similar to the checkpoint mechanism.
Picture 2 – How the Query Store captures data
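The capture and flush behavior described above is configured per database. A minimal sketch, assuming a hypothetical database name; the interval and size values are just examples, not recommendations:

-- Enable the Query Store and tune how often aggregated data
-- is flushed from memory to disk
ALTER DATABASE MyDatabase
SET QUERY_STORE = ON
    (OPERATION_MODE = READ_WRITE,
     DATA_FLUSH_INTERVAL_SECONDS = 900,  -- flush to disk every 15 minutes
     INTERVAL_LENGTH_MINUTES = 60,       -- granularity of runtime stats aggregation
     MAX_STORAGE_SIZE_MB = 100);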
I have just told you that the aggregation is done in memory and not on disk, so suppose an unexpected shutdown occurs: how much in-memory data would be lost? If you keep less data in memory, you will have a higher I/O cost, but less information will be lost in case of an unexpected shutdown; conversely, if you keep more data in memory, you will lose more data in case of an unexpected shutdown, but you will have a lower I/O cost. This setting is your choice, and the important thing is that you have the possibility to choose! If you want to see the data captured by the Query Store, you need a tool that combines both the in-memory and the on-disk statistics. Each DMV related to the Query Store joins in-memory and on-disk data. For example, the sys.query_store_runtime_stats catalog view merges in-memory and on-disk data into a unified place, so you can use it in your scripts or in your applications.
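For example, here is a sketch of how you might read the aggregated runtime statistics, ordering by average duration to surface the heaviest plans:

-- Average duration per plan, per aggregation interval
SELECT rs.plan_id,
       rs.runtime_stats_interval_id,
       rs.count_executions,
       rs.avg_duration
FROM sys.query_store_runtime_stats AS rs
ORDER BY rs.avg_duration DESC;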
Bear in mind that when memory pressure occurs on the server, some in-memory data will be flushed to disk in order to release memory for other work... read the complete article here.
Additional resources about SQL Server Query Store are available here on docs.com:
Enjoy the Query Store!
The most important online event on SQL Server and the Microsoft Data Platform is back! The 24 Hours of PASS is coming back with a great edition focused on the Evolution of the Data Platform. Mark your calendar for May 25 and 26, 2016 when 24 back-to-back webinar sessions will be presented live!
These are the main topics that will be covered:
- SQL Server 2016
- Cortana Analytics
- Azure ML
- Azure HD Insights
The Call for Speakers is now open, and starting from mid-April the registration will be open too! Stay connected!
No matter where in the world you follow the event from, the important thing is to know that it will be 24 hours of continuous training on SQL Server, Microsoft Azure and related technologies!
You may need to know which is the biggest table (in terms of disk space usage) in the database you are currently connected to. Sometimes, this information is very useful for checking the indexing strategy of those tables.
There are several methods to find the biggest table of a database. One of them is the standard report "Disk Usage by Top Tables" exposed by SQL Server Management Studio. Another method uses the T-SQL language: you can execute the sp_spaceused system stored procedure for each table contained in your database, store all the partial results in a temporary table, and order those results by the "data" column to find the biggest table of the database.
Using the sp_spaceused system stored procedure, we have to accept a row-by-row solution; in fact, we perform one call to sp_spaceused for each table contained in the database. As the number of tables increases, the number of calls to the stored procedure grows with it.
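The row-by-row approach can be sketched as follows; the cursor and the temporary table are one possible implementation, shown here only to illustrate the per-table cost:

-- Collect sp_spaceused output for every user table into a temp table
CREATE TABLE #spaceused
    (name sysname, rows varchar(20), reserved varchar(20),
     data varchar(20), index_size varchar(20), unused varchar(20));

DECLARE @table nvarchar(400);
DECLARE tables_cursor CURSOR FOR
    SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + N'.' + QUOTENAME(name)
    FROM sys.tables;

OPEN tables_cursor;
FETCH NEXT FROM tables_cursor INTO @table;
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO #spaceused EXEC sp_spaceused @table;  -- one call per table
    FETCH NEXT FROM tables_cursor INTO @table;
END;
CLOSE tables_cursor;
DEALLOCATE tables_cursor;

-- The biggest tables by the "data" column
SELECT * FROM #spaceused
ORDER BY CONVERT(bigint, REPLACE(data, ' KB', '')) DESC;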
An alternative solution is the following Common Table Expression, based on the internal code of the sp_spaceused system stored procedure. It allows us to obtain the result with a single execution, in a set-based way.
WITH spaceused AS
(
    SELECT
         ps.object_id
        ,reservedpages = SUM(ps.reserved_page_count)
        ,it_reservedpages = SUM(ISNULL(its.it_reserved_page_count, 0))
        ,usedpages = SUM(ps.used_page_count)
        ,it_usedpages = SUM(ISNULL(its.it_used_page_count, 0))
        ,pages = SUM(CASE
                         WHEN (ps.index_id < 2) THEN (ps.in_row_data_page_count + ps.lob_used_page_count + ps.row_overflow_used_page_count)
                         ELSE ps.lob_used_page_count + ps.row_overflow_used_page_count
                     END)
        ,row_Count = SUM(CASE WHEN (ps.index_id < 2) THEN ps.row_count ELSE 0 END)
    FROM
        sys.dm_db_partition_stats AS ps
        JOIN sys.objects ON sys.objects.object_id = ps.object_id
        OUTER APPLY
        (
            SELECT
                 p.reserved_page_count AS it_reserved_page_count
                ,p.used_page_count AS it_used_page_count
            FROM
                sys.internal_tables AS it
                JOIN sys.dm_db_partition_stats AS p ON p.object_id = it.object_id
            WHERE
                it.parent_id = ps.object_id
                AND it.internal_type IN (202, 204, 211, 212, 213, 214, 215, 216)
        ) AS its
    WHERE
        sys.objects.type IN ('U', 'V')
    GROUP BY
        ps.object_id
)
SELECT
     name = OBJECT_NAME(object_id)
    ,rows = CONVERT(char(11), row_Count)
    ,reserved = LTRIM(STR(reservedpages * 8, 15, 0) + ' KB')
    ,it_reserved = LTRIM(STR(it_reservedpages * 8, 15, 0) + ' KB')
    ,tot_reserved = LTRIM(STR((reservedpages + it_reservedpages) * 8, 15, 0) + ' KB')
    ,data = LTRIM(STR(pages * 8, 15, 0) + ' KB')
    ,data_MB = LTRIM(STR((pages * 8) / 1000.0, 15, 0) + ' MB')
    ,index_size = LTRIM(STR((CASE WHEN usedpages > pages THEN (usedpages - pages) ELSE 0 END) * 8, 15, 0) + ' KB')
    ,it_index_size = LTRIM(STR((CASE WHEN it_usedpages > pages THEN (it_usedpages - pages) ELSE 0 END) * 8, 15, 0) + ' KB')
    ,tot_index_size = LTRIM(STR((CASE WHEN (usedpages + it_usedpages) > pages THEN ((usedpages + it_usedpages) - pages) ELSE 0 END) * 8, 15, 0) + ' KB')
    ,unused = LTRIM(STR((CASE WHEN reservedpages > usedpages THEN (reservedpages - usedpages) ELSE 0 END) * 8, 15, 0) + ' KB')
FROM
    spaceused
ORDER BY
    reservedpages DESC;
Are you wondering how we can access the internal code of sp_spaceused? It is simple: we can do that using the sp_helptext system stored procedure!
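For instance:

-- Print the T-SQL source code of the sp_spaceused system stored procedure
EXEC sp_helptext 'sp_spaceused';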
Microsoft Press (@MicrosoftPress) has recently announced the publication of the Second Preview of the eBook Introducing Microsoft SQL Server 2016!
In this second preview, the authors Stacia Varga (@_StaciaV_), Denny Cherry (@mrdenny), and Joseph D'Antoni (@jdanton) introduce three new security features: Always Encrypted, Row-Level Security, and Dynamic Data Masking; they discuss the enhancements that enable you to better manage performance and storage: TempDB configuration, Query Store, and Stretch Database. Finally, they discuss the improvements to Reporting Services, the AlwaysOn Availability Groups enhancements, and R.
The PDF versions of the eBook are available here:
Enjoy the book and SQL Server 2016!
The release of the latest three versions of SQL Server (2008 R2, 2012 and 2014) has been accompanied by the publication of their respective eBooks. Today, a post on the Microsoft Press Blog has announced the publication of the Preview Edition of the eBook Introducing Microsoft SQL Server 2016.
This preview eBook contains only three chapters: Chapters 2, 4 and 7.
Chapter 2 describes the new features in the Data Protection and Security areas: Always Encrypted, Row-Level Security, and Dynamic Data Masking. Chapter 4 describes the new implementations (and some extensions to existing functionality) in the SQL Server engine that allow you to better manage growing data volumes, query performance and the solutions known as the "Hybrid Cloud". Chapter 7 describes the most important features implemented in SQL Server Reporting Services (SSRS); reading this chapter, you will easily realize why SQL Server 2016 represents a milestone for both Reporting Services and Mobile BI.
The PDF version of the eBook is available here:
Thanks to the authors: Stacia Varga, Denny Cherry, and Joseph D'Antoni.
Enjoy the book!
The series of free eBooks "The SQL Server and .NET Blog eBook Series" by Artemakis Artemiou (@artemakis) collects dozens of articles explaining how to solve practical problems in these areas:
Developing SQL Server (The SQL Server and .NET eBook Series)
This book is for database developers and architects who develop SQL Server databases and related database processes. The book features tens of articles that provide detailed information on how to develop in SQL Server. However, the book is not intended to be a step-by-step comprehensive guide. Additionally, it assumes at least intermediate-level experience with SQL Server development and knowledge of basic database principles.
Administering SQL Server (The SQL Server and .NET eBook Series)
This book is for database administrators and architects who monitor and administer SQL Server instances in order to keep them operating to the highest possible level of stability and performance. The book suggests several techniques that can be used for ensuring a healthy SQL Server instance.
Tuning SQL Server (The SQL Server and .NET eBook Series)
This book is for database administrators and architects who monitor and tune SQL Server instances in order to keep them operating to the maximum possible performance and stability. The book suggests several techniques that can be used for ensuring a performant SQL Server instance.
These three eBooks cannot be missing from your digital library!
Slides and demo files of the sessions we presented at SQL Saturday #454 in Turin are available for download through the schedule page.
This is a good occasion to say "thanks" to Politecnico di Torino (which hosted the event), the organizers, the sponsors and the speakers. A big thanks also to everyone who attended: you made this a great event!
We had a lot of fun :) and the photos posted on Twitter prove it!
The PASS Summit is back! This year, the most important event around the world for the SQL Server Data Platform will be held again in Seattle, WA from October 27 to 30, 2015, preceded by two pre-conference days on October 25 and 26, 2015.
Why did I write "the most important event around the world for the SQL Server Data Platform"? Because the numbers of the PASS Summit 2014 are self-explanatory! Last year, over 5,000 data geeks attended the PASS Summit, which has become the landmark event for the entire Microsoft Data Platform. Take a look at the 2014 numbers shown in the following pictures; they are really impressive, aren't they?
Every year, more and more IT professionals, technicians, analysts and data scientists consider this event a unique opportunity to connect with people who share your passion, share your experience and the problems you face in your job, and learn more and more about the Microsoft Data Platform, so you can always choose the best technology to win the challenges of the market. Don't forget that you also have the opportunity to meet and talk to people like Conor Cunningham (one of the principal architects of the relational engine of SQL Server and SQL Azure), Dr. David DeWitt (one of the biggest experts in parallel databases) and Mark Souza (General Manager in the Microsoft Data Platform Group). Are you thinking that these people are too busy to talk with you? None of these big names walks away if you try to meet and talk to them!
If you are undecided about the quality of the content provided at the PASS Summit, you can watch the sessions of the previous years on PASS TV. You will have no doubts about the quality of the sessions :)
Are you a beginner or a first-timer at the PASS Summit? Don't worry, there are lots of level 100 and 200 sessions, and you can count on the "First Timers Guidebook" that will arrive soon!
What kind of sessions can you expect to find at the PASS Summit 2015? Find out through the recorded sessions of the last 24 Hours of PASS event (September 2015), named "Summit 2015 Preview Edition"; each video is available on the detail page of the related session.
You can find all the information you need on the PASS Summit 2015 website; for your convenience, here are some useful links:
See you there!