SQLBI is a blog dedicated to building Business Intelligence solutions with SQL Server.
You can follow me on Twitter: @marcorus
The latest version of Power Pivot Utils, v1.05, has a new feature that deserves an immediate upgrade: you can now list unused columns!
Thanks to Idan Cohen and Amir Hefetz for their contributions to the macros developed by Bertrand d'ARBONNEAU!
If you are using Power BI Desktop or Power Pivot in Excel 2016, you should learn when and how you can use GROUPBY instead of SUMMARIZE. The SUMMARIZE function is very powerful and internally very complex, so it’s easy to run into scenarios where you get unexpected results or performance issues. The new GROUPBY function (also available in SSAS Tabular 2016) can be a better choice in a number of cases, even if it is not equivalent: for example, it does not “enforce” a join as you can do using SUMMARIZE (see here for more details).
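To make the difference concrete, here is a minimal sketch of the same grouping written with both functions (the Sales table, the related Product table, and their columns are just assumptions about the model): SUMMARIZE can reference the column to aggregate directly, whereas GROUPBY requires an iterator over CURRENTGROUP.

EVALUATE
SUMMARIZE ( Sales, 'Product'[Color], "Total Quantity", SUM ( Sales[Quantity] ) )

EVALUATE
GROUPBY ( Sales, 'Product'[Color], "Total Quantity", SUMX ( CURRENTGROUP (), Sales[Quantity] ) )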
I recently wrote an article about one situation where GROUPBY is absolutely the best choice: nested grouping. The example is pretty simple: you want to SUMMARIZE the result of another SUMMARIZE… well, that is not possible, but you can do it using GROUPBY.
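For example, this is a minimal sketch of a nested grouping (again assuming hypothetical Sales and Product tables): the inner SUMMARIZE computes the quantity by category and color, and the outer GROUPBY finds, for each category, the largest color quantity by aggregating the [Qty] extension column through CURRENTGROUP, which is the step SUMMARIZE cannot perform on the result of another SUMMARIZE.

EVALUATE
GROUPBY (
    SUMMARIZE ( Sales, 'Product'[Category], 'Product'[Color], "Qty", SUM ( Sales[Quantity] ) ),
    'Product'[Category],
    "Max Qty per Color", MAXX ( CURRENTGROUP (), [Qty] )
)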
Once you get used to GROUPBY, I also suggest you check your skills with the DAX Puzzle about GROUPBY we published a few weeks ago. And if you already solved it, try the new puzzle published less than two weeks ago about “last date” – not related to GROUPBY behavior, but still good food for thought!
In a couple of weeks, I will travel to Australia for 3 weeks, visiting Melbourne and Sydney for a number of events and training classes. This is the list of topics for public events you can attend (with links to registration pages – remember that SQL Saturday conferences are free events, but you still have to register first!):
UPDATE 2016-02-08 - added a session on Feb 16 in Melbourne!
You can follow the links to see more detailed information about each event. Please note that the early-bird discounted price for the Mastering DAX and Optimizing DAX workshops will expire at the end of this week (Feb 6) – if you are in Australia (or close enough) and you don’t want to miss these unique dates for an immersive DAX training, hurry up and get the discounted price!
I really look forward to coming back to Australia!
I recently wrote an article about how to Create Static Tables in DAX Using the DATATABLE Function. This new DAX feature (the DATATABLE function) will likely be used in the upcoming Analysis Services Tabular 2016, but it is not used in either Power BI or Power Pivot to create static tables. For this reason, I think it is useful to do a quick recap of all the methods available if you need a table with fixed, static data in your data model.
- Power Pivot:
- Linked Tables: you can create a table in Excel with static data and import it into a data model using the Linked Table feature. However, this is technically not a “static” table: it can be refreshed with new data, and the table in Excel can be dynamic, as shown in the article describing the “Linkback Tables in Power Pivot for Excel 2013”.
- Paste data from Clipboard: if you copy a table to the clipboard and paste it into Power Pivot, the data model gets a corresponding table whose data definition is also stored in the model metadata. You cannot modify the content later, unless you use the Paste Replace feature of Power Pivot. Technically, the content is included in the XML defining the data model, but you do not have any access to it in Power Pivot.
- Power BI Desktop:
- Enter Data: the Enter Data feature in Power BI Desktop allows you to enter data manually in a table, and you can also paste data from the clipboard using this user interface, which lets you modify the content of this static table later, after the paste operation. The content is stored in a string using a compressed and encoded JSON format. Such a string is extracted in an M transformation using nested calls to Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText(…))))
- Calculated Table using the DATATABLE function: you can also create a calculated table using DATATABLE, as explained in the article I previously mentioned (see the sketch after this list).
- Analysis Services 2012/2014/2016 (until model version 1103):
- Paste data from Clipboard: similar to what you do in Excel Power Pivot. The only difference is that you can modify the static content defined in the data model by manipulating the .BIM file with a text editor. The content of the table is defined in an XML schema.
- Linked Tables imported from an Excel data model: they are defined in the data model exactly as you would define a static table using the Paste Data from Clipboard feature.
- Analysis Services 2016 (from model version 1200):
- Paste data from Clipboard: this feature is not yet implemented (in CTP 3.2) and you get the error message “TOM does not yet support pushed tables” if you try to use it. Because model version 1200 does not store data in XML, I **suppose** they will implement the static table as a calculated table using the DATATABLE function, just as you might already do manually in Power BI Desktop. I will update this blog post when a new CTP or the RTM supports this feature.
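For reference, this is a minimal sketch of the DATATABLE approach as a calculated table in Power BI Desktop (the Segments table and its values are purely hypothetical):

Segments =
DATATABLE (
    "Segment", STRING,
    "Min Price", CURRENCY,
    "Max Price", CURRENCY,
    { { "Low", 0, 100 }, { "Medium", 100, 500 }, { "High", 500, 999999 } }
)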
I hope this recap will help you make a decision about which technique to use in this scenario.
When you use Power Pivot, there are a number of operations requiring more than one click (and sometimes much more than that):
- Create a pivot table based on the current data model (at least 2 or 3 clicks)
- Create a table of existing measures (including their DAX expressions)
- Create a table of existing calculated columns (including their DAX expressions)
- Format the DAX expressions extracted from measures and calculated columns with DAX Formatter (one copy/paste operation each)
- Display memory usage by table and column (you can use VertiPaq Analyzer, which provides a lot of detailed information, but you have to restore the data model in Analysis Services to use it – or you can use a VBA macro, but it’s still more than one click if you have to copy the macro)
The good news is that a student I had in Paris at a Mastering DAX workshop invested time in creating a collection of VBA macros, exposed to the user in a clean and nice ribbon called Power Pivot Utilities.
When I saw this ribbon, I thought it was so useful that it should be shared with a wider audience, so I asked its author, Bertrand d’ARBONNEAU (email@example.com), how to do that. After a short conversation, we decided to publish it on SQLBI, so you can already download Power Pivot Utilities from the Tools section. We will keep the version on the web site up to date whenever Bertrand releases fixes and updates.
I am pretty sure many people will appreciate this tool: enabling VertiPaq Analyzer on Power Pivot is a top requested feature, and this tool is a nice replacement while waiting for a major upgrade to VertiPaq Analyzer, which requires more work.
Kudos to Bertrand for his work!
I have experienced (on different customers’ databases) some performance issues related to security in SSAS Tabular models. The symptom is that certain queries, pivot tables, or reports perform well when you connect as a database administrator, but end users see bad performance no matter what selection and filter they use.
I wrote an article, Security Cost in Analysis Services Tabular, which explains how security is implemented and the different techniques that can be used, depending on the size (number of rows) of the tables involved. It’s a tough start to the year, and certainly not an “easy read”, but I wanted to write down the information that will help me (and many others!) remember what is going on and evaluate possible solutions.
In my experience, a common situation is a star schema where you apply security on many dimensions. Such an approach creates multiple joins in the storage engine queries generated for any measure, because rows in the fact table have to be filtered according to the security, even if the dimension is not included in the filters of the report. There are no silver bullets for this issue; specific optimizations vary depending on the data volume and security rules. However, knowing where the problem lies is always the first step towards the solution.
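Just to illustrate the pattern (with hypothetical table and column names), a role might define a DAX row filter on each secured dimension:

'Customer'[Country] = "Italy"
'Product'[Brand] = "Contoso"

With filters like these, every storage engine query for a measure over the fact table has to join Sales with both Customer and Product in order to apply the security, even when the report does not group or filter by those tables.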
This short blog post is a note that could help me in the future, or anyone who is interested in writing a client tool for Analysis Services.
When you establish a session to Analysis Services, you can run both DAX and MDX statements. In DAX, you can create a measure that is local to a query by using the following syntax:
DEFINE MEASURE Sales[Sales Amount] = SUMX ( Sales, Sales[Quantity] * Sales[Unit Price] )
You have to execute this statement every time you run the same query, even when you do that within the same session.
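As a reminder of the full pattern, the definition has to be embedded in every query, for example (a minimal sketch; the 'Date'[Calendar Year] column is just an assumption about the model):

DEFINE
    MEASURE Sales[Sales Amount] = SUMX ( Sales, Sales[Quantity] * Sales[Unit Price] )
EVALUATE
ADDCOLUMNS (
    VALUES ( 'Date'[Calendar Year] ),
    "Sales", [Sales Amount]
)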
If you have administrative permissions, you might deploy such a measure to the data model, but this would affect all users and is not interesting in the scenarios I’m considering. However, you can use a mix of MDX and DAX syntax to create a measure that has session scope:
CREATE SESSION MEASURE Sales[Sales Amount] = SUMX ( Sales, Sales[Quantity] * Sales[Unit Price] );
You might expect this syntax to work only in MDX statements, but it also works in any following DAX query within the same session.
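For example, after the CREATE SESSION MEASURE statement above has been executed once, a later DAX query in the same session can reference the measure without any DEFINE block (same hypothetical Date table as before):

EVALUATE
ADDCOLUMNS (
    VALUES ( 'Date'[Calendar Year] ),
    "Sales", [Sales Amount]
)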
As you might imagine, this is an interesting way to “inject” measures into an existing report. I originally evaluated this approach to create DAX measures for an Excel PivotTable connected to Analysis Services Tabular. However, the “MDX Calculated Measure” feature in Excel cannot be used for this purpose, because it requires MDX syntax in the measure definition.
Maybe some other tools/developers will find this information useful.
2015 has been an amazing year for Power BI. One year ago, I was commenting on this blog about the public preview of Power BI dashboards. A few days ago, Microsoft released the December update of Power BI Desktop and several updates to the Power BI service. In one year, we moved from a situation where Microsoft seemed (and actually was) too slow in the BI market, to a weekly (and sometimes daily) wave of announcements that is becoming hard to follow even for enthusiastic early adopters (but don’t misunderstand me, read my lips: D O N ‘ T S T O P R E L E A S I N G N E W F E A T U R E S !!).
It’s clear that Microsoft is moving quickly to capture the cloud market for BI services, which is still largely unexplored (after all, most companies keep their data on premises), and it is the only player providing a credible story of integration between cloud and on-premises BI services (see the “The Box is Back” and “Happy Together” slides used at the last PASS Summit). I finally see both strategy and execution working together. And execution is happening at the right speed, which has to be very fast nowadays.
There is a long list of features, improvements, technical details, and future directions that justify this analysis, but there is still a long road ahead before good comments become actual revenue and adoption in production. However, there are key changes I have seen this year that strongly support a positive outlook for the entire Microsoft BI stack of technologies:
- No more “by design” answers: first and foremost, the entire Microsoft BI team removed the “by design” answer for bug reports and suggestions. The Power Query team started this positive attitude of treating feedback as something to use for real. Now the entire Microsoft BI team listens to the community, considers feedback, and prioritizes features based on the comments received. Don’t underestimate this point. This, together with increased transparency, is the foundation of the credibility that fuels word of mouth and community support. Other Microsoft products don’t have this asset, and they should do something to get it. Microsoft is behaving like a startup here, and the results are visible. One year ago, this was a promise. Today, it is a fact.
- Constant and continuous innovation: weekly and monthly releases create a push to keep the software up to date. This impacts Power BI Desktop in particular, and backwards compatibility is no longer a constraint on innovation. There are pros and cons in this, but for now the pros outweigh the cons.
- DataZen acquisition: I’ve seen many comments that were not convinced by this move, because it created confusion and overlaps with other Microsoft products. Now that Microsoft has released SQL Server 2016 CTP 3.2, which includes the first version of DataZen (now named “mobile report type”) in Reporting Services, most of these concerns are over. However, I always thought that the DataZen acquisition was an excellent strategic move. Microsoft bought a fast-growing, privately held company, acquiring the technology and the development team in a single move. While most of the comments were about the effective value of the technology and the overlap between DataZen and the Power BI mobile app, my appreciation was more about buying a company to remove it from the market, so that other possible big competitors wouldn’t have a chance to buy it. Doing that when the company was mature enough to have a real customer base (and a real working product), but small enough to stay off the radar of many analysts, is an additional bonus. Great move.
- Custom visuals in Power BI: disclaimer: I might be biased, having inspired the creation of Synoptic Panel. But I think that making the visual components an open source part of Power BI, and enabling a large community to contribute to a public gallery, is a very smart move. If the community contributes, this is good. But even if the community did not respond, Microsoft opens a door to releasing software that complements its product (Power BI) without incurring the cost of an official software release (whose biggest costs are internationalization and documentation). Think about the cost of releasing and supporting software in hundreds of languages (think of Office) and you should quickly realize why this move is smart. Not to mention the number of opportunities it opens to Microsoft partners, leveraging the existing ecosystem.
- Adoption of “R”: is support for the “R” language really required for a tool like Power BI? Let me state an unpopular opinion: there are too many buzzwords in the BI arena, and it’s not the first time. It has always been the same, only the buzzwords change. We started with DSS (Decision Support Systems), we recently heard about big data, I’ve seen that BA (business analytics) rocks more than BI (business intelligence), you cannot live without a data scientist, and yes, you need “R” in your company. The list could continue for pages. But is this really true? I am convinced that “R” is very good in certain domains, and at the same time I see it used (or even just considered) when it is completely useless. But I am a consultant, I don’t sell licenses, I help customers build solutions that actually work, possibly reducing development cost and time. “R” is an option, but it is not necessarily important or relevant in many scenarios. But can Microsoft ignore the importance of buzzwords and trends in the market? Absolutely not. So adopting “R” for data sources and visualizations (see the latest announcement about R Visuals in Power BI) is another great move that will help Microsoft sales (and this is true for all Microsoft partners, too).
In one year I have seen a multiple of what I saw in the previous ten years. In reality, Microsoft is leveraging many assets that were already in-house (the Tabular engine, Power Pivot, Power Query, Power View, and many other Azure-based services), whose development required several years. But the weak point has always been the “last mile” of data visualization and presentation. Other players were years ahead in this area. And I think they are still ahead of Microsoft. You might have a scenario that already fits well with Power BI. Maybe your scenario will be “Power BI friendly” in a few months. But the trend is clear and the improvement is continuous. So, when I think about the considerations I will make in 12 months, I am very positive, and I am also scared about the number of new features I will have to learn. Since this is what I (probably we) have asked for for many years, I cannot complain!
I always take a look at the stock prices of companies to check whether financial results and expected outcomes correspond to my point of view. Unfortunately, it’s not fair to compare Microsoft with specialized companies that work only in the BI market. So we cannot draw any conclusion by observing that Microsoft (MSFT) is at its highest point since the Tableau IPO (DATA), whereas Qlik (QLIK) hasn’t added much value for stockholders in the last 3 years (source: http://www.nasdaq.com/symbol/msft/stock-comparison).
What can we expect in 2016? I expect Power BI to continue growing as a data visualization platform, as an ecosystem, and in adoption. However, the upcoming Microsoft SQL Server 2016 will be a huge release for the BI services. It will be the most important release since 2005, and this is mainly because of the new SQL Server Reporting Services (see the Microsoft Business Intelligence reporting roadmap to read more). SQL Server Analysis Services is also a very important part of this release, being the first major upgrade of Analysis Services Tabular since SQL Server 2012 (see the latest announcements for SQL Server 2016 Analysis Services CTP 3.2).
Without going into detail, there are many new features that will make Tabular a more “enterprise ready” platform. It’s not that the current version is not good (I have helped many companies in the successful adoption of Tabular), but there are many improvements in productivity and performance that completely justify the upgrade even before considering the new features available in the new version. There will be a lot to write about that, and I really look forward to starting in a few weeks.
The magic keyword will be “Hybrid BI”. It’s already happening. I expect most if not all companies to move to a “hybrid” approach for their BI solutions, where certain parts are implemented on premises and others in the cloud. And the reason to stay on premises is not always preserving legacy investments; it could be the need for top hardware performance for in-memory databases. Above a certain size, you need to make an accurate hardware choice, where spending more is not necessarily better. In the very long term, I expect everything to move to the cloud, but the reality is that the two approaches will live together for a very long time, and Power BI is a good candidate to play a major role in this scenario.
Last week Amir Netz presented on the Power BI blog a detailed introduction to the Synoptic Panel written by Daniele Perilli. This component won the Power BI best visual contest in October, but since then we have received a lot of feedback and Daniele has worked on improving the editor and the available features. The big news is that the Synoptic Panel now supports the SVG format. You can still start from a bitmap and draw the areas you want to bind to data in Power BI, but now the result of this job is an SVG file created by Synoptic Designer. The graphical features of the designer are limited, so if you want a real arc, circle, or complex shape, you can create an SVG file with an online editor (such as http://editor.method.ac/ – but you can find many others) and then import it into Synoptic Designer. Since the SVG includes areas, the designer simply imports these entities and allows you to edit their names.
Now, beside the technical details, why is the impact of this feature huge? Well, practically any vector format can be converted to SVG, so if you have a technical drawing of a building, or any floor plan built with professional drawing software… chances are you can now import them into Power BI! But what if you don’t think this affects you?
Well, you might need to display data on a map: by country, territory, state, city, district, whatever… Yes, we have the Map and Filled Map components in Power BI, but they are good if you want to display exact geographical details and if you are online. If you want to create custom areas, remove certain details, or use the map offline (or without Internet) in Power BI Desktop, you could not do that before.
For example, if you want to display USA data by state, you might want to move Alaska and Hawaii to a different position on your map, as in this example. And the good news is that you don’t have to create this file from scratch: we imported this example from Wikimedia, which already has a good SVG that you can import into Synoptic Designer: https://commons.wikimedia.org/wiki/File:Blank_US_Map.svg
The time you have to spend creating a new map depends on the quality of the initial file. The USA map is a very good example, because there is already one entity for each state, and each state has its 2-letter state code associated with it.
- Go to http://synoptic.design/
- Paste the URL of the SVG file (https://upload.wikimedia.org/wikipedia/commons/3/32/Blank_US_Map.svg) and click GET (or browse for it on your local drive if you have it there)
- You will now see all the states in the editor, and selecting each one highlights the corresponding area in the right pane. You can type the complete name of each state here, or change the code that identifies the state. You can store two names: one to bind data in Power BI, and the other to display the name of the area in the visual (if blank, the name used to bind data is displayed by default).
- You can also clear an area if you don’t need it. If you want to change the drawing, you should edit the SVG file first in an external editor (such as http://editor.method.ac/). Please note that certain states in this specific map have an additional dash character you might want to remove.
- Click the EXPORT TO POWER BI button (bottom right), and then press the DOWNLOAD SVG button
If you find a good map that you think could be useful to other people, please click SUBMIT TO GALLERY, too! We already have a number of maps available, and you can quickly use them by navigating to Gallery / Country/Territory.
You can also browse the gallery directly from the Synoptic Panel component. Once you choose the map, you bind a column containing state names to Legend, the measure to display to Values (it appears in the popup window when you hover over a state), and the measure that defines the color saturation to Saturation Values (you can configure limits in the Format properties). In the following example you see the Quantity measure used for both Values and Saturation Values, while the mouse is over Florida.
I really look forward to seeing new territories published in the gallery!
While I am teaching Mastering DAX and Optimizing DAX classes, I am always available to speak at local user group events. This week, I will present an updated version of the “Toolkit for DAX Optimization” session in Amsterdam, on Thursday, November 26, 2015.
At this link you can register (for free) and see logistics details (the user group event is after work, starting at 17:30 with some food served – presentations will start at 18:30 and I will speak at 19:00). Please note there are very few seats still available, so register only if you are sure you can attend!
I already presented this session at PASS Summit 2015, but I will have more time and I’d like to show more details about how to use these tools with Power BI, too. And, of course, I will be open to Q&A!
This is the description of the content of this session.
Toolkit for DAX Optimization
How do you optimize a DAX expression? This session introduces you to useful tools that help measure performance, gather data to find the bottlenecks, and write new optimized versions of DAX. Starting from SQL Profiler, you learn which events are relevant for DAX and how to collect them in different environments (SQL Server Analysis Services [SSAS], Power Pivot, Power BI). Explore DAX Studio, which simplifies and speeds up the data-collection process and makes it easy to find bottlenecks in the storage engine and formula engine. The goal of this session is to provide a methodology to measure the performance of your DAX measures, find the bottlenecks, and identify the main reasons for performance issues.
I will focus on tools more than on optimization techniques, but considering the very good feedback I received at PASS Summit, it seems the session works very well – I hope you will enjoy it, too!
I recently wrote a short article on SQLBI describing how to create a DAX measure that searches for items that have all of the items selected in a slicer, instead of any of them. For example, you might want a measure that displays how many customers bought at least one product in each of the categories selected in a slicer. You cannot change the behavior of the slicer itself, which applies a filter to the calculation, but using DAX you can iterate the selected items and make sure that each one exists in another set of items (such as the products bought by a customer). I think it could be an interesting technique for other applications as well (I would be happy to read your ideas in the comments!). You can read the complete article on SQLBI.
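As a minimal sketch of the idea (assuming a Sales table related to hypothetical Product and Customer tables; the column names are assumptions as well), you can count the customers for whom the number of distinct categories purchased matches the number of categories selected in the slicer:

Customers Buying All Selected Categories :=
VAR SelectedCategories = COUNTROWS ( VALUES ( 'Product'[Category] ) )
RETURN
    COUNTROWS (
        FILTER (
            VALUES ( Sales[CustomerKey] ),
            -- Categories actually bought by the customer, within the slicer selection
            CALCULATE ( COUNTROWS ( SUMMARIZE ( Sales, 'Product'[Category] ) ) ) = SelectedCategories
        )
    )

The article walks through the reasoning behind this kind of iteration in more detail.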
David DeWitt and Rimma Nehme delivered the day 2 keynote at PASS Summit 2015, talking about the Internet of Things. Many PASS attendees know David for delivering some of the best keynotes ever: technical, and without vaporware and marketing stuff. So this is a good time to discuss the real state of the Internet of Things (IoT) these days.
It’s easy to talk about IoT, but it’s hard to enter this complex and heterogeneous world. There are many types of devices, and a simple categorization is consumer vs. industrial, which have very different requirements and features (not to mention cost, power, and standards used). The key to connecting IoT devices is the cloud, but communication happens in two directions: from device to cloud, and from cloud to device. These require different technologies and often different devices, because many of them have only a single role (sending data or receiving commands, sensors or actuators, although certain devices might do both).
As you can imagine, this is a real source of Big Data. I often see Big Data used to manage data generated in a structured relational database, which seems nonsense to me. But with the volume of data generated by IoT, this technology makes perfect sense. Of course, Azure has a lot of technologies that help you manage this amount of data, but since the topic of this blog is Business Intelligence, I’m more interested in what happens when you want to analyze the data.
Here, a few technologies with a certain history (if you know their former names) come into play. Data Mining (also known as Machine Learning these days) can be fundamental to making predictions based on previous behaviors. David is great at providing simple examples to explain the concept: a boiler has a pressure sensor and you have to open the valve before the boiler explodes. You can train an algorithm to predict boiler failure, or you can provide built-in intelligence in the algorithm, with a predefined pressure limit at which to open the valve. The analysis of this data requires real-time stream analysis; relying on the cloud for this real-time analysis would generate too much traffic and would also have higher latency (dangerous for this type of application).
Here comes a new “definition”: fog computing, also known as edge computing. The idea is not to move the data to the computation, but to move the computation to the data. However, IoT is a database problem, and it is not managed in this way these days. And this is the main point of the keynote: proposing a PolyBase for IoT that includes:
- Declarative language: today IoT is based on imperative languages, whereas the goal is to introduce a declarative language, such as IoT-SQL (imagine adding WINDOW and ACTION clauses to a classical SQL query, so that it can act on a range of time, triggering an action when certain conditions happen)
- Complex object modeling: define a standard structure to identify IoT locations in a hierarchical structure (imagine an object model with an API to navigate hierarchies of objects, traversing paths and the like – similar to many MDX statements we know well)
- Scalable metadata management: a simple abstraction (metadata, statistics, access privileges), unified to access different devices; metadata includes collections of standard and extended attributes
- Discrete & continuous queries: different query types, such as ExecuteOnce (like standard SQL), ExecuteForever (a continuous flow of responses from the device), and ExecuteAction (like ExecuteForever plus an action to execute under defined conditions)
- Multi-purpose queries: here is the smart idea. With a definition of the process at a higher level, the decision of moving the work to the cloud or to the edge (in the fog) is made by the query optimizer, creating a real query plan that distributes the actual work to different parts of the system depending on the requests
This approach is really ambitious, but an important part of it is the idea of embedding security in the system. If you think about the future of IoT, security is of paramount importance. I don’t know whether this will be the future of IoT, but this speech raised points that have to be faced, sooner rather than later.
You will be able to watch this keynote soon at PASStv on demand.
I’m sitting at the bloggers’ table at the Day 1 keynote of PASS Summit 2015, trying to recap the important news for the BI community.
- SQL Server Analysis Services 2016 CTP 3 will be available in a few days here, but Microsoft already announced the news in this release:
- Previously announced news (most of them already described in this blog and/or available in Power BI Desktop):
- Extended Events for SSAS available in SSMS
- Parallel partition processing
- New DAX functions
- DirectQuery over SQL Server 2014
- New DAX functions and syntax (including variables)
- New SQL Server 2016 compatibility level: it’s no longer based on XMLA (which was created for Multidimensional) and we have a new Tabular Object Model (TOM). No problem for existing models if they keep the 1103 compatibility level, but once you migrate to the new model, you have to use the new API (and you also get access to the new features). The API for TOM is available in JSON, just as in the Power BI APIs.
- JSON-based tabular scripting language: while there is still a single file for the entire model, the code is more readable and (most important) it’s easier to manage in a version control system, possibly merging changes applied by different developers to the same data model
- Bi-directional cross filtering: just as in Power BI Desktop, with an equivalent diagram view layout
- MDX support for DirectQuery: this is *BIG NEWS*. The implication is that you will be able to use Excel pivot tables (generating MDX queries) with DirectQuery. But wait, there is more about DirectQuery…
- Oracle, Teradata, and APS supported in DirectQuery: while I’m still confident that an in-memory database is very useful when you have a certain amount of data, or for certain calculations, the ability to create real-time dashboards connecting your relational database to a thin semantic layer is wonderful. Just be careful: you have to use compatibility level 1103 for this feature in CTP3 (this will certainly change before RTM).
- SSDT for Visual Studio 2015 available: this is simply necessary and expected. The good news is that you don’t need to download 1 GB of files if you already have VS2015. The important news here is the better performance of SSDT when you design a data model (but I want to test it… again, it’s easy to improve on what we have today), and finally we’ll be able to use the Visual Studio Configuration Manager to create test and production deployment configurations. Really, really needed!
- Improved DAX formula editing: now, it’s easy to do something better than what we had in the past! I’m really demanding here and I hope there will be a way to integrate DAX Formatter (I will investigate that this week). By the way, the new development environment has formula fixup – you rename a measure and an automatic refactor happens, just as in Excel 2016 and Power BI Desktop.
- DBCC for Analysis Services: you can check the consistency of your tabular and multidimensional models.
The keynote was more about SQL Server 2016 as a platform, on premises and in the cloud. SQL Server is now better than Oracle in the Gartner quadrant, in the TPC-H benchmark (for data warehouses), and for security (it is the database with the lowest number of vulnerabilities in the last 5 years). This would have been a joke 10 years ago. There are also many improvements, such as integration with R, and many security, scalability, reliability, and performance improvements (the in-memory technology in SQL Server is always improving).
You can watch PASS keynote on PASStv: http://www.sqlpass.org/summit/2015/PASStv/Keynote.aspx
That’s all for today’s keynote. More news later this week watching sessions, I hope!
I don’t usually blog about news, unless there is a relevant impact on future directions. Last week we saw a huge number of announcements for Power BI (Custom Visuals are now available to anyone through an online gallery, you can also import them into Power BI Desktop, and there are a number of improvements in the Power BI service and a new version of the Personal Gateway).
I am sure we will see a number of announcements this week at PASS Summit 2015. If you look at the public schedule of sessions, you can see that we will finally learn more about Analysis Services and Reporting Services, which are expected to show new features in SQL Server 2016. We will finally talk about Corporate BI again, even if we should talk about Hybrid BI these days.
The keynote will start at 8:15 AM Pacific Time on Wednesday, October 28, 2015. I will cover announcements made during the keynote on twitter (@marcorus) and with more details on this blog.
If you are in Seattle this week, Alberto and I will present these sessions:
I hope Darren will be able to release an update to DAX Studio, which is one of the tools I will use in my session. I already have an early build to show in the session, just in case, but the goal is to make it broadly available before the session.
UPDATE 2015-10-26: DAX Studio 2.3.0 is available!
If you use Analysis Services Tabular, you should dedicate a good amount of time to hardware selection. Unfortunately, throwing money at an expensive hardware configuration could be a very bad idea, resulting in your $1,500 desktop running faster than your expensive $100,000 server. Moreover, if you use virtualization you have to be very careful about certain settings that might affect performance in a very bad way. When I say this, I’m describing queries running up to 2-3 times slower than in optimal conditions. So why spend time gaining 10% when you have a bigger issue to solve?
I described the main best practices in the article Optimize Hardware Settings for Analysis Services Tabular on SQLBI. This is the result of helping many companies detect hardware bottlenecks and plan the right hardware configuration. My experience says that the time you spend to correctly allocate the budget has a huge return on investment. Usually you cannot change the CPU or the RAM of a brand new server, so this step is critical. The next step is to check that the hardware configuration is correct. It’s incredible how many times I discovered that the BIOS settings of an expensive server were the reason for slow performance, so now it’s my first priority when I see a benchmark with suspicious numbers (compared to the tech specs of the CPU).
Now, a common objection I hear is that “we have a standardized hardware and virtualization platform”. I completely understand that, but I like to remind people that the goal is to get a better return on investment, and standardization has the ultimate goal of reducing costs. So we start by evaluating the cost of a solution that is compliant with the standards, but allocates different hardware to a specific workload. The result of this is spending less (on hardware and licenses) and getting more (performance).
I’d like to hear your stories about that – write your experience in the comments!