
SQLBI - Marco Russo

SQLBI is a blog dedicated to building Business Intelligence solutions with SQL Server.
You can follow me on Twitter: @marcorus

  • New formula to compute new customers in #dax

    Two years ago, Alberto Ferrari and I spent time discussing techniques to calculate new and returning customers, and we wrote the result of our study in the New and Returning Customers article on the DAX Patterns web site and in the book.

    During one of the Mastering DAX courses, a student presented a better solution, and Alberto published an article describing this approach, which is much faster than the ones we used before. We also worked to backport this algorithm to "legacy" DAX, without using the new features available only in Power BI Desktop, Excel 2016, and SSAS 2016 (we call this newer version "DAX 2015" to quickly identify it). The performance is equally amazing, and you can read the full story (and the DAX code) in the Computing New Customers in DAX article on SQLBI.
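
    To give an idea of what the pattern computes, this is a minimal sketch of the classic (non-optimized) approach, assuming a Sales table with a CustomerKey column related to a Date table; the table and column names are illustrative, and the faster formula is the one described in the article:

    New Customers :=
    COUNTROWS (
        FILTER (
            VALUES ( Sales[CustomerKey] ),        -- customers with sales in the current period
            CALCULATE (
                COUNTROWS ( Sales ),              -- sales of the same customer...
                FILTER (                          -- ...on any date before the current period
                    ALL ( 'Date' ),
                    'Date'[Date] < MIN ( 'Date'[Date] )
                )
            ) = 0                                 -- no previous sales: the customer is new
        )
    )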

    In the future we will also update the DAX Patterns web site, but the idea is to wait for a larger effort to update the existing patterns using the new syntax available in DAX 2015.

  • Leverage Analyze in Excel for your Power Pivot model

    Microsoft recently released the Analyze in Excel feature, which allows you to connect an Excel pivot table to a data model published on Power BI. I think (and hope) that Microsoft will enhance this feature and remove its current limitations. For example, it does not allow you to connect to a live model based on an on-premises SSAS Tabular server through the Power BI Enterprise gateway. However, this feature is also very interesting for any Power Pivot user.

    When you have a data model in Power Pivot, you have to create all the reports within the same Excel workbook. In the end, you have a single file containing everything: the data model, the data, and the reports. This is fine at the beginning, but as soon as you create more complex data models and increase the number of reports – in a word, when the model grows – you face a few problems. Any edit operation on a Power Pivot model could require time to refresh pivot tables and measures, so the editing experience becomes slower. You cannot separate the data from the reports into two different files to split maintenance responsibility. If you have created several reports based on the same Power Pivot data model, you already know this story. It would be nice to split the data model (and the data) from the reports. Now you can, using Power BI and the Analyze in Excel feature. Power BI is your best friend here.

    This is the list of the operations you have to do:

    1. Download and install Power BI Desktop – you have to be a local administrator of your machine to do that, but this is the only step requiring such a permission.
    2. Open Power BI Desktop.
    3. Import the Power Pivot data model in Power BI Desktop using the feature File / Import / Excel Workbook Contents, and then schedule a refresh if you need that.
    4. Publish the Power BI Desktop data model to the Power BI service.
    5. Open the data model from Excel using the Analyze In Excel feature of the Power BI service.
    6. Create the same reports you had in the original Excel file using new pivot tables.

    At the end, you will have an Excel file connected to an external data model, which can refresh automatically through a schedule you control. Before Power BI, you had to use Power Pivot for SharePoint to do that.

    You might ask why I am not suggesting publishing the Power Pivot data model straight to the Power BI service, without creating a Power BI Desktop file in between. The reason is that it is much simpler to work with two separate files, and Power BI Desktop provides the ability to use bidirectional filters in relationships, a feature that is not available in Power Pivot for Excel. However, this approach requires you to plan in advance the moment you “detach” the data model from Excel if you defined synonyms for Q&A.

    At the moment of writing, you cannot modify synonyms in Power BI, neither in Power BI Desktop nor in the Power BI service. However, if you created synonyms in Excel, they are kept in the data model in Power BI Desktop, even if you cannot modify them (for now, I hope!). If you want to modify the synonyms for Q&A in the Excel file, you have to repeat the cycle and import the Power Pivot data model into Power BI again, losing any changes you made before. I am not saying that this way of working should be suggested to everyone, but you might be interested if you are willing to pay the maintenance cost of this solution, which already provides the best of both worlds: define synonyms for Q&A in Power Pivot, and create relationships with bidirectional filters in Power BI Desktop. When editing synonyms becomes possible in Power BI Desktop and/or the Power BI service, this article will immediately become obsolete.

    At the moment, you should consider that changing the synonyms in Power Pivot will require you to overwrite the changes you applied to the data model in Power BI Desktop. Thus, you might want to apply any change to the data model in Power Pivot (for example, adding or removing columns and measures) instead of using Power BI. You might apply only bidirectional filters in Power BI, but you will have to repeat that every time you import the data model into Power BI Desktop from Excel again. And you would lose any report designed in Power BI Desktop by doing that, so you should edit reports in the Power BI service only… Editing synonyms in Power BI will definitely streamline the process, when it becomes possible.

    One feature that I would like for this scenario is the ability to automatically change all the connections of the pivot tables based on a Power Pivot data model, replacing them with the corresponding connection string in the ODC file returned by Power BI when you click Analyze In Excel. This is also a long-awaited feature for those who want to publish a Power Pivot data model to an on-premises Analysis Services Tabular server, because it is not easy to manipulate the file so that the connection string is refactored in all the queries of the dataset. I am sure this would improve the productivity of Excel users facing this issue, and it would increase the number of users of the Power BI service.

    Conclusion

    You can use synonyms today in Q&A, but you have to create and edit the model in Power Pivot, exporting it to Power BI only to edit relationships using types not available in Power Pivot (such as the bidirectional filter).

  • A warm welcome to Power BI Embedded #powerbi

    I wrote this blog post with Alberto Ferrari.

    A few days ago, during the Build conference, Microsoft announced the availability, although still in preview, of Power BI Embedded. In a few words, Power BI Embedded lets you embed Power BI reports in your application, taking advantage of the tools in Power BI to enrich the analytical power of your application. Users do not have to authenticate with the Power BI service. In fact, they do not even need a Power BI account. Your application performs the necessary steps to authenticate a user and then uses App Tokens to request report rendering. App Tokens are generated by the Power BI service when your application requests them, providing the necessary keys associated with your Azure subscription.

    Power BI Embedded is priced per render. Currently, the price is 2.50 USD per 1,000 renders, and the first 1,000 renders in a month are free. A render is the production of a visual in a report, so a single report requires one or more renders (a dense report will be more expensive).

    The first step to use Power BI Embedded is to activate the service in Azure by creating a Power BI Workspace Collection. A workspace collection, as its name implies, is a set of workspaces. A workspace can contain datasets, reports, and dashboards, just like the Power BI workspaces you use today when you are connected to the Power BI service.

    You can add items to a Power BI Embedded workspace by uploading a PBIX file generated with Power BI Desktop. Connection strings, the data model, measures and calculated columns, visual interactions, and all other functionalities of Power BI are uploaded along with the model and available for your reports. This is, at least, the experience during the preview. As soon as the product reaches general availability, the development experience might be different and, as of today, there is no information about how it will work when released (hopefully you will be able to automate the creation of the model and its data refresh). Nevertheless, in this early phase, the option of using Power BI Desktop for development is awesome, as it leverages a well-established technology to author the reports, although it means that – as of today – you cannot add a dashboard to a workspace collection, because Power BI Desktop does not have a dashboard authoring feature; it stops at reports. Another limitation that exists during the preview is that you cannot refresh data unless you upload a new version of the PBIX file. If you need to work with live data, today the only option is to leverage DirectQuery connections, along with all the limitations that come with them.

    Once the workspace collection is in place, your application needs to connect to the service and generate an App Token by providing the needed key (which you have to store in the application itself). Once the application has an App Token, it can interact with the user the way it needs and, when it is time to produce a dashboard, it provides the App Token to the service again requesting the rendering of one or more reports. Power BI takes care of handling the report rendering and visual interactions.

    In order for the entire reporting system to work, the Power BI service needs to be able to connect to the data source. This is very easy if the data source is already in the cloud, but it requires some additional refresh steps if the data source is on premises and not connected to the service. In that case, you have to publish an updated version of the PBIX file to the Power BI cloud service using the Power BI Embedded API.

    So far, so good, it looks like Microsoft created yet another library to show dashboards inside an app. Why do we believe this is huge, not just yet another reporting app?

    • It uses the very same tools your users are probably using in Power BI to gather insights.
    • It does not have a starting price: you pay for what you use. The more customers use your application, the more you pay. With a pay-per-use approach, you can limit the initial investment needed to build a real analytical system into your application.
    • It does not provide just a simple reporting tool. It provides a modeling tool where you build a data model through a Power BI Desktop file, along with measures, ETL steps with Power Query, and calculated columns and tables. Thus, in order to provide analytics in your application, you do not need to focus on how to build a given report. Instead, you focus on how to build an analytical model on top of which, later, you will build reports.

    This latter change is, in our opinion, the most important of all. A simple shift in the way you think about reporting inside an application might open up a whole set of new features and help democratize Business Intelligence. For example, it is easy to imagine standard reporting provided inside the application and, for advanced users that require more power, ISVs providing Power BI Desktop files that can be customized and later deployed on Power BI to perform custom analytics.

    It is important to note that this preview is not a complete solution for every ISV. Today there are no APIs to programmatically create a PBIX file. This severely limits the ability to automate the creation of a custom data model based on parameters defined by the ISV's application. An API to create a PBIX file, or to manipulate a data model published on the server, would be an important advancement toward a fully functional ecosystem for ISVs. We hope this is only the first step in that direction.

  • How to fix #dax time intelligence functions in #powerbi when date table uses a surrogate key

    I wrote an article about Time Intelligence in Power BI Desktop to explain how to fix measures using DAX functions for time intelligence when you have a data model where the relationship uses surrogate keys. The short description is that all the time intelligence functions in DAX make two assumptions: you have a "full" date table (all days of each year you want to handle must be present), and you use the "Mark as Date Table" setting to identify the date column of the date table. However, the latter is not possible in Power BI Desktop, because such a setting is not available yet (as of the end of February 2016).

    There are a number of possible workarounds while we wait for such a feature in a future version of Power BI Desktop. This problem has a limited impact because, when you use the date column in the relationship, the "Mark as Date Table" setting is not strictly necessary. However, it is a good idea to understand why this behavior exists and how the setting affects the DAX engine by reading the article, even if you don't have this problem today.
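
    As a rough sketch of the kind of fix the article discusses (the names here are illustrative, assuming a Sales Amount measure and a Date table related to Sales through a surrogate key), adding ALL ( 'Date' ) as a CALCULATE filter restores the behavior that "Mark as Date Table" would otherwise provide:

    Sales PY :=
    CALCULATE (
        [Sales Amount],
        SAMEPERIODLASTYEAR ( 'Date'[Date] ),
        ALL ( 'Date' )    -- removes the existing filter on the Date table, which is not
                          -- removed automatically when the relationship uses a surrogate
                          -- key and the table is not marked as a date table
    )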

  • Possible #powerpivot issue removing Excel 2016 going back to Excel 2013

    Today I observed an issue on a notebook where Excel 2016 was installed and then removed, going back to Excel 2013.

    After this operation, workbooks with a Power Pivot data model still worked correctly when navigating data in PivotTables, but any operation in the Power Pivot window (such as adding columns or tables, or editing measures) failed. The error was TYPE_E_CANTLOADLIBRARY, and it was caused by a type library that was not removed when uninstalling Excel 2016.

    I solved the problem by removing the following key from the registry:

    [HKEY_CLASSES_ROOT\TypeLib\{00020813-0000-0000-C000-000000000046}\1.9]

    (WARNING: DON'T TOUCH THE WINDOWS REGISTRY IF YOU DON'T KNOW WHAT YOU ARE DOING – use the following link to get more detailed instructions, and always make a backup first).

    I found this article that describes the problem as a possible issue in all versions of Excel!

  • New features in Power Pivot Utils v 1.05: list unused columns #powerpivot #dax

    The latest version of Power Pivot Utils, v 1.05, has a new feature that deserves an immediate upgrade: you can list unused columns!

    (screenshot: the Power Pivot Utils ribbon)

    Thanks to Idan Cohen and Amir Hefetz for their contribution to the macros developed by Bertrand d'ARBONNEAU!

  • GROUPBY vs SUMMARIZE in #dax #powerbi #powerpivot

    If you are using Power BI Desktop or Power Pivot in Excel 2016, you should learn when and how you can use GROUPBY instead of SUMMARIZE. The SUMMARIZE function is very powerful and internally very complex, so it’s easy to find scenarios where you get unexpected results or you have performance issues. The new GROUPBY function (also available in SSAS Tabular 2016) can be a better choice in a number of cases, even if it’s not the same and, for example, it does not “enforce” a join as you can do using SUMMARIZE (see here for more details).

    I recently wrote an article about one situation where GROUPBY is absolutely the best choice: when you have nested grouping. An example is pretty simple: you want to SUMMARIZE the result of another SUMMARIZE… well, it’s not possible, but you can do that using GROUPBY.
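
    Just to give the idea, here is a minimal sketch of nested grouping with GROUPBY, assuming a Sales table with Quantity and Unit Price columns related to Date and Product tables (all names are illustrative):

    EVALUATE
    GROUPBY (
        GROUPBY (                        -- inner grouping: amount by year and brand
            Sales,
            'Date'[Calendar Year],
            'Product'[Brand],
            "Amount", SUMX ( CURRENTGROUP (), Sales[Quantity] * Sales[Unit Price] )
        ),
        'Date'[Calendar Year],           -- outer grouping: best brand amount by year
        "Max Brand Amount", MAXX ( CURRENTGROUP (), [Amount] )
    )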

    Once you get used to GROUPBY, I also suggest checking your skills with the DAX Puzzle about GROUPBY we published a few weeks ago. And if you already solved it, try the new puzzle published less than two weeks ago about “last date” – not related to GROUPBY behavior, but still good food for thought!

  • Conferences and Training in Australia Feb/Mar 2016 #dax #sqlsaturday

    In a couple of weeks, I will travel to Australia for 3 weeks, visiting Melbourne and Sydney for a number of events and training sessions. This is the list of topics in public events you can attend (with links to registration pages – remember that SQL Saturday conferences are free events, but you still have to register first!):

    UPDATE 2016-02-08 - added a session on Feb 16 in Melbourne!

    You can follow the links to see more detailed info about each event. Please note that the early-bird discounted price for the Mastering DAX and Optimizing DAX workshops will expire at the end of this week (Feb 6) – if you are in Australia (or close enough) and you don’t want to miss these unique dates for an immersive DAX training, hurry up and get the discounted price!

    I really look forward to coming back to Australia!

  • Create static tables in Power BI, Power Pivot, and Analysis Services #powerbi #powerpivot #ssas #tabular

    I recently wrote an article about how to Create Static Tables in DAX Using the DATATABLE Function. This new DAX feature (the DATATABLE function) will likely be used in the upcoming Analysis Services Tabular 2016, but it is not used by either Power BI or Power Pivot to create static tables. For this reason, I think it could be useful to do a quick recap of all the methods available if you need a table with fixed static data in your data model.

    • Power Pivot
      • Linked Tables: you can create a table in Excel with static data and import it into a data model using the Linked Table feature. However, this is technically not a “static” table: it can be refreshed with new data, and the table in Excel can be dynamic, as shown in the article describing the “Linkback Tables in Power Pivot for Excel 2013”.
      • Paste data from Clipboard: if you copy a table to the clipboard and paste it in Power Pivot, the data model has a corresponding table that also contains the data definition in the model metadata. You cannot modify the content later, unless you use the Paste Replace feature of Power Pivot. Technically, the content is included in the XML defining the data model, but you do not have any access to it in Power Pivot.
    • Power BI Desktop:
      • Enter Data: the Enter Data feature in Power BI Desktop allows you to enter data manually in a table, and you can also paste data from the clipboard using this user interface, which allows you to modify the content of this static table later, after the paste operation. The content is stored in a string using a compressed and encoded JSON format. Such a string is extracted in an M transformation using nested calls to Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText(…))))
      • Calculated Table using the DATATABLE function: you can also create a calculated table using DATATABLE, as explained in the article I previously mentioned (see the sketch after this list).
    • Analysis Services 2012/2014/2016 (until model version 1103):
      • Paste data from Clipboard: similar to what you do in Excel Power Pivot. The only difference is that you can modify the static content defined in the data model by manipulating the .BIM file with a text editor. The content of the table is defined in an XML schema.
      • Linked Tables imported from an Excel data model: they are defined in the data model exactly as you would define a static table using the Paste Data from Clipboard feature.
    • Analysis Services 2016 (from model version 1200):
      • Paste data from Clipboard: such a feature is not yet implemented (in CTP 3.2), and you get the error message “TOM does not yet support pushed tables” if you try to execute it. Because model version 1200 does not store data in XML, I suppose they will implement the static table as a calculated table using the DATATABLE function, just as you might already do manually in Power BI Desktop. I will update this blog post when a new CTP or the RTM supports such a feature.
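
    As a quick illustration of the DATATABLE option mentioned in the list above, here is a minimal sketch of a calculated table with static data (the table name, column names, and values are just an example):

    Segments =
    DATATABLE (
        "Segment",   STRING,
        "Min Price", CURRENCY,
        "Max Price", CURRENCY,
        {
            { "LOW",    0,   100 },
            { "MEDIUM", 100, 500 },
            { "HIGH",   500, 99999 }
        }
    )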

    I hope this recap will help you decide which technique to use in this scenario.

  • Power Pivot Utils: a nice collection of VBA macros #powerpivot #dax

    When you use Power Pivot, there are a number of operations that require more than one click (sometimes many more):

    • Creating a pivot table based on the current data model (at least 2 or 3 clicks)
    • Creating a table of existing measures (including their DAX expressions)
    • Creating a table of existing calculated columns (including their DAX expressions)
    • Formatting the DAX expressions extracted from measures and calculated columns with DAX Formatter (one copy/paste operation each)
    • Displaying memory usage by table and column (you can use VertiPaq Analyzer, which provides a lot of detailed information, but you have to restore the data model in Analysis Services to use it – or you can use a VBA macro, but it is still more than one click if you have to copy the macro)

    The good news is that a student I had in Paris at a Mastering DAX workshop invested time to create a collection of VBA macros that are shown to the user in a clean and nice ribbon called Power Pivot Utilities.


    When I saw this ribbon, I thought it was so useful that it should be shared with a wider audience, so I asked its author, Bertrand d'ARBONNEAU (bertrand.d-arbonneau@gemalto.com), how to do that. After a short conversation, we decided to publish it on SQLBI, so you can already download Power Pivot Utilities from the Tools section. We will keep the version on the web site up to date as Bertrand releases fixes and updates.

    I am pretty sure many people will appreciate this tool: enabling VertiPaq Analyzer on Power Pivot is a top requested feature, and this tool is a nice replacement while waiting for a major upgrade of VertiPaq Analyzer, which requires a longer effort.

    Kudos to Bertrand for his work!

  • Performance issues related to security roles in Analysis Services Tabular #ssas #tabular

    I have experienced (on different customers' databases) some performance issues related to security in SSAS Tabular models. The symptoms are that certain queries, pivot tables, or reports have good performance when you connect as a database administrator, but end users see bad performance no matter what selection and filter they use.

    I wrote an article, Security Cost in Analysis Services Tabular, which explains how security is implemented and the different techniques that could be used, depending on the size (number of rows) of the tables involved. It’s a tough start to the year: it is certainly not an “easy read”, but I wanted to write down the information that will help me (and many others!) remember what is going on and evaluate possible solutions.

    In my experience, a common situation is a star schema where you apply security on many dimensions. Such an approach creates multiple joins in the storage engine queries generated for any measure, because rows in the fact table have to be filtered according to the security rules, even if the dimension is not included in the filters of the report. There is no silver bullet for this issue; specific optimizations vary depending on the data volume and security rules involved. However, knowing where the problem is, is always the first step toward the solution.
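
    As a minimal illustration of that scenario (the table and column names are hypothetical), a role that secures several dimensions of a star schema defines one DAX row filter per dimension table, and each of them ends up filtering the fact table through its relationship:

    -- Row filter on the Customer dimension
    = Customer[Country] = "Italy"

    -- Row filter on the Store dimension, in the same role
    = Store[StoreType] = "Retail"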

  • Create DAX measures with a session’s scope #dax #ssas #tabular

    This short blog post is a note that could help me in the future, or anyone who is interested in writing a client tool for Analysis Services.

    When you establish a session to Analysis Services, you can run both DAX and MDX statements. In DAX, you can create a measure that is local to a query by using the following syntax:

    DEFINE MEASURE Sales[Sales Amount] = SUMX ( Sales, Sales[Quantity] * Sales[Unit Price] )

    You have to execute this statement every time you run the same query, even when you do that within the same session.

    If you have administrative permission, you might deploy such a measure to the data model, but this would affect all the users and is not interesting in the scenarios I’m considering. However, you can use a mix of MDX and DAX syntax to create a measure that has a session scope:

    CREATE SESSION MEASURE Sales[Sales Amount] = SUMX ( Sales, Sales[Quantity] * Sales[Unit Price] );

    You might expect this syntax to work only in MDX statements, but it also works in any subsequent DAX query within the same session.
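
    For example, once the CREATE SESSION MEASURE statement above has been executed, a plain DAX query sent over the same session can reference the measure directly (a minimal sketch, assuming a Date table in the model):

    EVALUATE
    ADDCOLUMNS (
        VALUES ( 'Date'[Calendar Year] ),
        "Sales Amount", [Sales Amount]    -- session-scoped measure defined earlier
    )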

    As you might imagine, this is an interesting way to “inject” measures into an existing report. I originally evaluated this approach to create DAX measures for an Excel PivotTable connected to Analysis Services Tabular. However, the “MDX Calculated Measure” feature in Excel cannot be used for this purpose, because it requires MDX syntax in the measure definition.

    Maybe some other tools/developers will find this information useful.

  • The State of Power BI #powerbi

    2015 has been an amazing year for Power BI. One year ago, I was commenting on this blog about the public preview of Power BI dashboards. A few days ago, Microsoft released the December update of Power BI Desktop and several updates of the Power BI service. In one year, we moved from a situation where Microsoft seemed (and actually was) too slow in the BI market, to a weekly (and sometimes daily) wave of announcements that is becoming hard to follow even for enthusiastic early adopters (but don’t misunderstand me, and read my lips: D O N ‘ T   S T O P   R E L E A S I N G   N E W   F E A T U R E S !!).

    It’s clear that Microsoft is moving quickly to capture the cloud market of BI services, which is still largely unexplored (after all, most companies keep their data on premises), and it is the only player providing a credible story of integration between cloud and on-premises BI services (see the “The Box is Back” and “Happy Together” pictures used at the last PASS Summit). I finally see strategy and execution working together. And execution is happening at the right speed, which has to be very fast nowadays.

    There is a long list of features, improvements, technical details, and future directions that justify this analysis, but there is still a long road ahead before good comments become actual revenue and adoption in production. However, there are key changes I have seen this year that strongly support a positive outlook for the entire Microsoft BI stack:

    • No more “by design” answer: first and foremost, the entire Microsoft BI team removed the “by design” answer for bug reports and suggestions. The Power Query team started with this positive attitude of treating feedback as something to act on for real. Now the entire Microsoft BI team listens to the community, considers feedback, and prioritizes features based on the comments received. Don’t underestimate this point. This, together with increased transparency, is the foundation of the credibility that drives word of mouth and community support. Other Microsoft products don’t have this asset, and they should do something to get it. Microsoft is behaving like a startup here, and the results are visible. One year ago, this was a promise. Today, it is a fact.
    • Constant and continuous innovation: weekly and monthly releases create a push to keep the software up to date. This particularly impacts Power BI Desktop, and backwards compatibility is no longer a constraint on innovation. There are pros and cons in this, but for now the good outweighs the bad.
    • DataZen acquisition: I’ve seen many comments that were not convinced by this move, because it was creating confusion and overlaps with other Microsoft products. Now that Microsoft has released SQL Server 2016 CTP 3.2, which includes the first version of DataZen (now named “mobile report type”) in Reporting Services, most of these considerations are over. However, I always thought that the DataZen acquisition was an excellent strategic move. Microsoft bought a fast-growing company that was not publicly traded, acquiring the technology and the development team in a single move. While most of the comments were about the effective value of the technology and the overlap between DataZen and the Power BI mobile app, my appreciation was more about buying a company to remove it from the market, so that other possible big competitors wouldn’t have a chance to buy it. Doing that when the company was mature enough to have a real customer base (and a real working product), but small enough to stay off the radar of many analysts, is an additional bonus. Great move.
    • Custom visuals in Power BI: disclaimer: I might be biased, having inspired the creation of the Synoptic Panel. But I think that making the visual components an open source part of Power BI and enabling a large community to contribute to a public gallery is a super smart move. If the community contributes, this is good. But even if the community did not respond, Microsoft has opened a door to releasing software that complements its product (Power BI) without incurring the costs related to an official software release (the highest of which are internationalization and documentation). Think about the cost of releasing and supporting software in hundreds of languages (think of Office) and you should quickly realize why this move is smart. Not to mention the number of opportunities that it opens to Microsoft partners, leveraging the existing ecosystem.
    • Adoption of “R”: is the support of the “R” language really required for a tool like Power BI? Let me state an unpopular opinion: there are too many buzzwords in the BI arena, and it’s not the first time. It has always been the same, just with changing buzzwords. We started with DSS (Decision Support Systems), we recently heard about big data, I’ve seen that BA (business analytics) rocks more than BI (business intelligence), you cannot live without a data scientist, and yes, you need “R” in your company. The list could continue for pages. But is this really true? I am convinced that “R” is very good in certain domains, and at the same time I see it used (or even just considered) where it is completely useless. But I am a consultant, I don’t sell licenses; I help customers build solutions that actually work, possibly reducing development cost and time. “R” is an option, but it is not necessarily important or relevant in many scenarios. But can Microsoft ignore the importance of buzzwords and trends in the market? Absolutely not. So adopting “R” for data sources and visualizations (see the latest announcement for R Visuals in Power BI) is another great move that will help Microsoft sales (and this is true for all Microsoft partners, too).

    I have seen in one year a multiple of what I had seen in the previous ten years. In reality, Microsoft is leveraging many assets that were already in-house (the Tabular engine, Power Pivot, Power Query, Power View, and many other Azure-based services), whose development required several years. But the weak point has always been the “last mile” of data visualization and presentation. Other players were years ahead in this area, and I think they are still ahead of Microsoft. You might have a scenario that already fits well with Power BI. Maybe your scenario will be “Power BI friendly” in a few months. But the trend is clear and the improvement is continuous. So, when I think about the considerations I will make in 12 months, I am very positive, and I am also scared about the number of new features I will have to learn. Since this is what I (probably we) have asked for over many years, I cannot complain!

    I always take a look at the stock prices of companies to check whether financial results and expected outcomes correspond to my point of view. Unfortunately, it’s not fair to compare Microsoft with other specialized companies that work only in the BI market. So we cannot draw any conclusion by observing that Microsoft (MSFT) is at its highest point since the Tableau IPO (DATA), whereas Qlik (QLIK) didn’t add much value for stockholders in the last 3 years (source: http://www.nasdaq.com/symbol/msft/stock-comparison).


    What can we expect in 2016? I expect Power BI to continue growing as a data visualization platform, as an ecosystem, and in adoption. However, the upcoming Microsoft SQL Server 2016 will be a huge release for the BI services. It will be the most important release since 2005, mainly because of the new SQL Server Reporting Services (see the Microsoft Business Intelligence reporting roadmap to read more). SQL Server Analysis Services is also a very important release, being the first major upgrade of Analysis Services Tabular since SQL Server 2012 (see the latest announcements for SQL Server 2016 Analysis Services CTP 3.2).

    Without going into detail, there are many new features that will make Tabular a more “enterprise ready” platform. It’s not that the current version is not good – I have helped many companies in successful adoptions of Tabular – but there are many improvements in productivity and performance that completely justify the upgrade even before considering the new features available in the new version. There will be a lot to write about that, and I really look forward to starting in a few weeks.

    The magic keyword will be “Hybrid BI”. It’s already happening. I expect most if not all companies to move to a “hybrid” approach for their BI solutions, where certain parts are implemented on premises and others in the cloud. And the reason to stay on premises is not always preserving legacy investments; it could be the need for top hardware performance for in-memory databases. Beyond a certain size, you need to make an accurate hardware choice, where spending more is not necessarily better. In the very long term, I expect everything to move to the cloud, but the reality is that the two approaches will live together for a very long time, and Power BI is a good candidate to play a major role in this scenario.

  • Display data on customized maps in #powerbi (also offline) leveraging SVG format

    Last week Amir Netz presented, in the Power BI blog, a detailed introduction to the Synoptic Panel written by Daniele Perilli. This component won the Power BI best visual contest in October, but since then we have received a lot of feedback, and Daniele has kept improving the editor and the available features. The big news is that the Synoptic Panel now supports the SVG format. You can still start from a bitmap and draw the areas you want to bind to data in Power BI, but now the result of this job is an SVG file created by Synoptic Designer. The graphical features of the designer are limited, so if you want a real arc, circle, or complex shape, you can create an SVG file with an online editor (such as http://editor.method.ac/ – but you can find many others) and then import it into Synoptic Designer. Since the SVG includes areas, the designer simply imports these entities and allows you to edit their names.

    Now, beyond the technical details, why is the impact of this feature huge? Well, practically any vector format can be converted to SVG, so if you have a technical drawing of a building, or any floor plan built using professional drawing software… chances are you can import it into Power BI now! But what if you don’t think this impacts you?

    Well, you might need to display data on a map: by country, territory, state, city, district, whatever… Yes, we have the Map and Filled Map components in Power BI, but they are good only if you want to display exact geographical details and you are online. If you wanted to create custom areas, remove certain details, or work offline (without an Internet connection) in Power BI Desktop, you were unable to do that.

    For example, if you want to display USA data by state, you might want to move Alaska and Hawaii to a different position on your map, as in this example. And the good news is that you don’t have to create this file from scratch: we imported this example from Wikimedia, which already has a good SVG file that you can import in Synoptic Designer: https://commons.wikimedia.org/wiki/File:Blank_US_Map.svg

    The time you have to spend to create a new map depends on the quality of the initial file. The USA map is a very good example, because there is already one entity for each state, and each state has the 2-letter state code associated with it.

    1. Go to http://synoptic.design/
    2. Paste the URL of the SVG file (https://upload.wikimedia.org/wikipedia/commons/3/32/Blank_US_Map.svg) and click GET (or browse from your local drive if you have it there)
    3. Now you will see all the states in the editor; selecting each one highlights the corresponding area in the right pane. You can type the complete name of each state here, or change the code used to identify the state. You can store two names: one to bind data in Power BI, and the other to display the name of the area in the visual (if blank, the name used to bind data will be used by default).
    4. You can also clear an area if you don’t need it. If you want to change the drawing, you should edit the SVG file first in an external editor (such as http://editor.method.ac/). Please note that certain states in this specific map have an additional dash character you might want to remove.
    5. Click the EXPORT TO POWER BI button (bottom right), and then press the DOWNLOAD SVG button.

    If you find a good map that you think could be useful to other people, please click SUBMIT TO GALLERY, too! We already have a number of maps available, and you can quickly use them by navigating to Gallery / Country/Territory.


    You can also browse the gallery directly from the Synoptic Panel component. Once you choose the map, you bind a column containing state names to Legend, the measure to display to Values (it will appear in a popup window when you move the mouse over a state), and the measure that defines the color saturation to Saturation Values (you can configure limits in the Format properties). In the following example, the Quantity measure is used for both Values and Saturation Values, while the mouse is over Florida.

    (screenshot: the Synoptic Panel showing the USA map, with the popup for Florida)

    I really look forward to seeing new territories published in the gallery!

  • User group session in Amsterdam on Nov 26, 2015: Toolkit for #dax optimization

    While I am teaching Mastering DAX and Optimizing DAX classes, I am always available to speak at local user group events. This week, I will present an updated version of the “Toolkit for DAX Optimization” in Amsterdam, on Thursday, November 26, 2015.

    At this link you can register (for free) and see the logistics details (the user group event is after work, starting at 17:30 with some food – presentations start at 18:30 and I will speak at 19:00). Please note there are very few seats still available, so register only if you are sure you can attend!

    I already presented this session at PASS Summit 2015, but I will have more time and I’d like to show some more detail about how to use these tools with Power BI, too. And, of course, I will be open to Q&A!

    This is the description of the content of this session.

    Toolkit for DAX Optimization
    How do you optimize a DAX expression? This session introduces you to useful tools to help measure performance, gather data to find the bottlenecks, and write new optimized versions of DAX. Starting from SQL Profiler, you learn which events are relevant for DAX and how to collect them in different environments (SQL Server Analysis Services [SSAS], Power Pivot, Power BI). Explore DAX Studio, which simplifies and speeds up the data-collection process and makes it easy to find bottlenecks in the storage engine and formula engine. The goal of this session is to provide a methodology to measure the performance of your DAX measures, find bottlenecks, and identify the main reasons for performance issues.

    I will focus on tools more than on optimization techniques, but considering the very good feedback I received at PASS Summit, it seems the session works very well – I hope you will enjoy it, too!
