
SQLBI - Marco Russo

SQLBI is a blog dedicated to building Business Intelligence solutions with SQL Server.
You can follow me on Twitter: @marcorus

Optimize memory in #powerpivot and #ssas #tabular

Microsoft published an interesting article about how to optimize memory consumption of a PowerPivot Data Model in Excel 2013. All these suggestions are also valid for SSAS Tabular. I also wrote an article Checklist for Memory Optimizations in PowerPivot and Tabular Models with a summary of the best practices.

The short list of things to do is very valuable:

  • Remove columns that are not necessary for analysis
    • The identity column (PK) of a fact table
    • Timestamps, GUIDs, and other information useful for auditing and replication, but with no value for analysis
  • If a column has too many distinct values and cannot be removed (e.g., a transaction ID in a fact table used for drillthrough), consider splitting the column into multiple distinct parts.
    • Each one of the parts will have a small number of unique values, and the combined total will be smaller than the original unified column.
    • Always separate date and time into two columns, instead of keeping the original datetime.
    • In many cases, you also need the distinct parts to use as slicers in your reports. When appropriate, you can create hierarchies from parts like Hours, Minutes, and Seconds.
    • Keep only the granularity you really need.
  • Normalize columns, keeping only those with the lowest number of distinct values
    • For example, if you have quantity, price, and total line amount, import quantity and price and calculate the total line amount as SUMX( Sales, Sales[quantity] * Sales[price] ), instead of importing line amount and using SUM( Sales[line amount] ).
  • Reduce the precision of numbers to reduce distinct values (e.g., round to integers if decimal values are not relevant).
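To see why splitting a high-cardinality column helps, here is a minimal Python sketch (the table size and column contents are hypothetical) comparing the distinct values of a combined datetime column against separate date and time columns:

```python
from datetime import datetime, timedelta

# Hypothetical fact table: one row per minute over 30 days.
rows = [datetime(2013, 4, 1) + timedelta(seconds=s)
        for s in range(0, 30 * 86400, 60)]

# A single combined datetime column: every value is distinct.
combined_cardinality = len(set(rows))

# Split into separate date and time columns.
date_cardinality = len({r.date() for r in rows})
time_cardinality = len({r.time() for r in rows})

print(combined_cardinality)                  # 43200 distinct datetimes
print(date_cardinality + time_cardinality)   # 30 dates + 1440 times = 1470
```

The two small dictionaries (30 + 1440 entries) are far cheaper to store than one dictionary with 43,200 entries, which is exactly the effect the list above is after.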

The reason is that VertiPaq compresses data at the column level, creating a dictionary for each column and storing, for each row, only the number of bits required to index the dictionary. More details are in the article Optimizing High Cardinality Columns in VertiPaq, which I wrote a few months ago, and in the SSAS 2012 Tabular book.
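As a rough illustration of this idea (a toy sketch, not the actual VertiPaq algorithm), dictionary encoding can be modeled like this:

```python
import math

def dictionary_encode(values):
    """Toy dictionary encoding: replace each value with an integer
    index into a dictionary of the distinct values, and report how
    many bits per row are needed to address that dictionary."""
    dictionary = list(dict.fromkeys(values))   # distinct values, first-seen order
    index = {v: i for i, v in enumerate(dictionary)}
    encoded = [index[v] for v in values]
    bits_per_row = max(1, math.ceil(math.log2(len(dictionary))))
    return dictionary, encoded, bits_per_row

# A column with 3 distinct values needs only 2 bits per row,
# no matter how many rows the table has.
dictionary, encoded, bits = dictionary_encode(["EUR", "USD", "EUR", "GBP"] * 1000)
print(len(dictionary), bits)  # 3 2
```

This is why cardinality, not row count, drives memory: halving the distinct values in a column can shave a bit off every single row.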

A useful macro to analyze memory consumption and quickly identify the most expensive tables and columns in a PowerPivot workbook is available on Kasper de Jonge's blog, in the post What is eating up my memory the PowerPivot / Excel edition. There is also a version for a Tabular database in his post What is using all that memory on my Analysis server instance.

Published Wednesday, April 03, 2013 1:55 PM by Marco Russo (SQLBI)



Tommy said:

Hi Marco,

I faced the following error when trying to "Process Default" the tables in my tabular model, both during development in SSDT-BI (Visual Studio 2013) and in the deployed model on the SSAS engine.

"The operation has been cancelled because there is not enough memory available for the application. If using a 32-bit version of the product, consider upgrading to the 64-bit version or increasing the amount of memory available on the machine.

The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation."

The model was imported from my Power Pivot workbook developed in Excel 2013 64-bit. The fact table in the model has approximately 26 million rows.

I searched Google for solutions. I tried modifying the memory settings under the SSAS engine properties; I also tried partitioning the fact table into partitions of approximately 800,000 rows. None of these efforts succeeded.

My workstation has 16 GB of physical memory. I run SQL Server on a VM with a maximum of 12 GB of memory allocated.

Any advice on what is actually causing the problem?



September 5, 2014 10:41 AM

Marco Russo (SQLBI) said:

It's not easy to diagnose an out-of-memory issue without looking at counters and the physical machine. However, if the problem is memory, try removing other databases on the same server, just to check whether another database is eating your memory.

September 5, 2014 7:15 PM

Tommy said:

Hi Marco,

My SSAS does not have any databases other than the new one I deployed from SSDT-BI. And the database consists of only two tables, say Table A and Table B. Table A is a fact table with a calculated column. The calculated column in Table A does a DAX calculation to pick up some data from Table B.

And I closed SSDT-BI before starting to process the table in SSAS, so the workspace database is gone and not eating memory.

Despite this, I still face the not-enough-memory issue.

My server runs in a VM, and the VM has nothing installed other than Windows Server 2012 R2 and SQL Server 2014 Enterprise, which I configured during installation with the Database Engine, SSAS, SSIS, and SSRS.

I wonder if the not-enough-memory issue arises because I am using SQL Server 2014, which implements in-memory OLTP alongside in-memory OLAP. Does that mean all the databases (and tables) of the Database Engine are loaded into memory as well, so the Database Engine competes with SSAS for memory?

Even after limiting the Database Engine memory to only 2 GB, the issue is still not solved.

September 6, 2014 8:22 PM

Marco Russo (SQLBI) said:

The number of rows in a table is not the most important parameter: the number of columns and the density (number of unique values) of each column matter more.

Partitioning doesn't help much: you reduce the number of rows to process in a single operation, but the results still have to be stored in memory.

The problem might be the compression. SSAS uses segments of 8 million rows each, whereas Power Pivot uses 1 million rows per segment. The DefaultSegmentRowCount setting is explained here:

However, if you have this problem with a table, either you have many columns in this table or several columns have unique values for each row, creating big pressure to the process.

Try analyzing the size of the table and its columns with the BISM Server Memory Report by Kasper de Jonge:

Maybe you can eliminate unnecessary columns to reduce table size.

September 7, 2014 10:43 AM

Nick Singleton said:

Marco - thanks for taking the time to answer my questions...  I am trying to optimize my Tabular Model for compression.  How do I handle composite primary keys?  I have read that SSAS Tabular doesn't support composite keys.  Therefore, I have been concatenating the two columns into a new key.  The concatenated keys have a lot of unique values.  I don't really see a way around this.  Suggestions?

December 8, 2014 9:00 AM

Marco Russo (SQLBI) said:

A relationship with more than 1-2 million unique keys could generate performance issues. The point is whether you really need that or not. If you are using composite keys, I worry you are creating a relationship on something like order number + line number (or similar), which means you are not trying to denormalize your model towards a star schema, which is the best approach for a large Tabular model.

December 8, 2014 9:06 AM

Nick Singleton said:

I need composite keys because I am importing thousands of databases with the same data model.  The composite key would be along the lines of TransactionID & ContextID (each database has its own ContextID).  

I was reading your post: .  Based on your article, I think I should keep the two columns separate rather than concatenating them, and create a DAX calculated column for the key that will be used for joins.

Also, I'm really interested in seeing the performance of a 3NF model.  You are saying that performance for a star schema is better.  Tabular is very appealing to me because Excel can write queries against a 3NF data model that end users are familiar with.

December 9, 2014 11:34 AM

Marco Russo (SQLBI) said:

Nick, as I said, it is a problem of performance. 3NF with millions of rows in the columns that define relationships means slow performance. Changing the schema would simplify queries and improve performance. But start with your model and consider optimization when you see performance is no longer good.

December 9, 2014 1:24 PM

Abolade Baba said:

Hi Marco,

I called a stored procedure to generate a table to power my analysis in Power Pivot. The result set contained about 5 million rows and some 10 columns; one is a date column I intend to use as a timeline, and two of the columns have about 300 and 1,000 unique values respectively. After completing the calculated columns I need and interacting with the resulting spreadsheets, response time is very slow.

I envisage that this workbook will grow monthly, with about 400k rows added via a monthly refresh. My concern is the response time whenever I try to change the timeline value from one month to another. I was wondering what I could improve to make the response time acceptable. Currently it takes about 6-7 minutes to get a response when anything is changed interactively in the report.

Thanks for your help.


March 23, 2015 8:58 AM

Marco Russo (SQLBI) said:

You are experiencing very slow performance that is not justified by the numbers you provided. I cannot say what the issue is, but you should try to verify performance with fewer columns and simpler calculations. Also check that the hardware is not too old - Power Pivot is very sensitive to CPU clock and RAM speed - but 6-7 minutes is too long for a query.

March 23, 2015 9:42 AM


Abolade Baba said:

I have a laptop that is about 5 years old, with 8 GB of RAM and a Core i5 processor. I will experiment with fewer columns and see how my results compare. Thanks.

March 24, 2015 8:42 PM

Marko Milatovic said:

I need help with the Workbook Size Optimizer.

I tried to optimize my workbook, but an error popped up:

"The workbook can't be optimized if there are unsaved files. Save the file and try again."

Please help. Thanks.

December 16, 2015 8:11 AM

Marco Russo (SQLBI) said:

Marko, I cannot help you with the Workbook Size Optimizer add-in - Microsoft hasn't released updates for it in a while. These days I restore the Power Pivot model to SSAS Tabular and run VertiPaq Analyzer to find the most expensive columns.

January 2, 2016 1:58 PM

Pedro said:

Hi Marco,

When you say that we should split time and date, I already do that. However, I do it with calculated columns, while hiding the original datetime column.

Your suggestion is different, right? I assume you suggest deleting the original datetime field altogether. This means we would have to write a query that splits the datetime field at processing time, correct?



November 2, 2016 8:02 AM

Marco Russo (SQLBI) said:

Yes, you should avoid importing the datetime column at all - you can use Power Query / M to split it, if you cannot do that on your data source.

January 19, 2017 2:13 PM

About Marco Russo (SQLBI)

Marco Russo is a consultant, writer, and trainer specialized in Business Intelligence with Microsoft technologies. He runs the SQLBI.COM website, dedicated to distributing resources useful for BI developers, such as Integration Services components, Analysis Services models, tools, and technical information. Marco is certified as MCT, MCDBA, MCSD.NET, MCSA, MCSE+I.
