
SQLBI - Marco Russo

SQLBI is a blog dedicated to building Business Intelligence solutions with SQL Server.
You can follow me on Twitter: @marcorus

Optimize memory in #powerpivot and #ssas #tabular

Microsoft published an interesting article about how to optimize memory consumption of a PowerPivot Data Model in Excel 2013. All these suggestions are also valid for SSAS Tabular. I also wrote an article, Checklist for Memory Optimizations in PowerPivot and Tabular Models, with a summary of the best practices.

The short list of things to do is very valuable:

  • Remove columns that are not necessary for analysis
    • Identity column (PK) of a fact table
    • Timestamps, GUIDs, and other information useful for auditing and replication, but with no value for analysis
  • If a column has too many distinct values and cannot be removed (e.g., a transaction ID in a fact table kept for drillthrough), consider splitting the column into multiple distinct parts.
    • Each part will have a small number of unique values, and the combined total will be smaller than the original unified column.
    • Always separate date and time into two columns instead of keeping the original datetime.
    • In many cases, you also need the distinct parts to use as slicers in your reports. When appropriate, you can create hierarchies from parts like Hours, Minutes, and Seconds.
    • Keep only the granularity you really need.
  • Normalize columns, keeping only those with the lowest number of distinct values
    • For example, if you have quantity, price, and total line amount, import only quantity and price and calculate the total line amount as SUMX( Sales, Sales[quantity] * Sales[price] ) instead of importing line amount and using SUM( Sales[line amount] ) (see the sketch after this list).
  • Reduce the precision of numbers to reduce distinct values (e.g., round to integers if decimal digits are not relevant).
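
As a minimal sketch of the normalization tip, assuming the Sales table with [quantity] and [price] columns used in the example above (the measure name is mine), the total can be defined as a measure so that the [line amount] column never has to be imported:

    -- Total computed at query time from two low-cardinality columns,
    -- instead of importing the high-cardinality line amount column
    Sales Amount :=
    SUMX ( Sales, Sales[quantity] * Sales[price] )

Quantity and price usually have far fewer distinct values than their product, so the two imported columns compress much better than a single line amount column would.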

The reason is that VertiPaq compresses data at the column level, creating a dictionary for each column and storing, for each row, only the number of bits required to hold the index into the dictionary. More details are in the article Optimizing High Cardinality Columns in VertiPaq I wrote a few months ago and in the SSAS 2012 Tabular book.
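
As a rough illustration, a plain dictionary encoding needs an index of about ceil( log2( number of distinct values ) ) bits per row: a column with 1,000 distinct values costs roughly 10 bits per row, whereas a column that is unique on every row also requires a dictionary as large as the table itself. The following DAX sketch only estimates this for a suspect column in your own model (Sales[TransactionID] and the measure names are hypothetical, and the bit estimate ignores the other encodings VertiPaq may choose):

    -- how many entries the dictionary of the column has to hold
    Distinct Transaction IDs :=
    DISTINCTCOUNT ( Sales[TransactionID] )

    -- approximate index width in bits per row, assuming plain dictionary encoding
    Approx Index Bits :=
    CEILING ( LOG ( [Distinct Transaction IDs], 2 ), 1 )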

A useful macro to analyze memory consumption and quickly identify the most expensive tables and columns in a PowerPivot workbook is available on Kasper de Jonge's blog post What is eating up my memory the PowerPivot / Excel edition. There is also a version for a Tabular database in his post What is using all that memory on my Analysis server instance.

Published Wednesday, April 03, 2013 1:55 PM by Marco Russo (SQLBI)


Comments

 

Tommy said:

Hi Marco,

I faced the following error when I tried to "Process Default" the tables in my tabular model, both during the development session in SSDT-BI (Visual Studio 2013) and on the model deployed to the SSAS engine.

"The operation has been cancelled because there is not enough memory available for the application. If using a 32-bit version of the product, consider upgrading to the 64-bit version or increasing the amount of memory available on the machine.

The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation."

The model was imported from my PowerPivot workbook developed using Excel 2013 64-bit. The fact table in the model has approximately 26 million rows.

I searched Google for solutions. I tried modifying the memory settings under the SSAS engine properties; I also tried partitioning the fact table into partitions of approximately 800,000 rows. None of these efforts succeeded.

My workstation has 16 GB of physical memory. I run SQL Server on a VM with a maximum of 12 GB of memory allocated.

Any advice on what is actually causing the problem?

Regards,

Tommy

September 5, 2014 10:41 AM
 

Marco Russo (SQLBI) said:

It's not easy to diagnose an out-of-memory issue without looking at counters and the physical machine. However, if the problem is memory, try removing other databases on the same server, just to check whether other databases are eating your memory.

September 5, 2014 7:15 PM
 

Tommy said:

Hi Marco,

My SSAS instance does not have any databases other than the new (and only) one I deployed from SSDT-BI. The database consists of only two tables, say Table A and Table B. Table A is a fact table that has a calculated column. The calculated column in Table A does a DAX calculation to pick up some data from Table B.

I also closed SSDT-BI before starting to process the table in SSAS, so that the workspace database is gone and not eating memory.

Despite that, I still face the not-enough-memory issue.

My server runs in a VM, and the VM has nothing installed other than Windows Server 2012 R2 and SQL Server 2014 Enterprise, which I configured during installation to include the Database Engine, SSAS, SSIS, and SSRS.

I wonder if the not-enough-memory issue arises because I am using SQL Server 2014, which implements in-memory OLTP together with in-memory OLAP. Does that mean all the databases (and tables) of the Database Engine are loaded into memory as well, so that the Database Engine competes with SSAS for memory?

Even when I tried limiting the Database Engine memory to only 2 GB, it still did not help solve the issue.

September 6, 2014 8:22 PM
 

Marco Russo (SQLBI) said:

The number of rows in a table is not the most important parameter. The number of columns and the density (number of unique values) of each column are more important.

Partitioning doesn't help much: you reduce the number of rows handled by a single process operation, but the results of all partitions still have to be stored in memory.

The problem might be the compression. SSAS uses segments of 8 million rows each, whereas Power Pivot uses 1 million rows per segment. The DefaultSegmentRowCount setting is explained here:

http://www.purplefrogsystems.com/blog/2013/12/ssas-tabular-performance-defaultsegmentrowcount/

However, if you have this problem with a table, either the table has many columns or several columns have unique values on each row, creating big pressure during processing.

Try analyzing the size of the table and its columns with the BISM Server Memory Report by Kasper de Jonge: http://www.powerpivotblog.nl/what-is-using-all-that-memory-on-my-analysis-server-instance/

Maybe you can eliminate unnecessary columns to reduce table size.

September 7, 2014 10:43 AM


About Marco Russo (SQLBI)

Marco Russo is a consultant, writer, and trainer specializing in Business Intelligence with Microsoft technologies. He runs the SQLBI.COM website, which is dedicated to distributing resources useful to BI developers, such as Integration Services components, Analysis Services models, tools, and technical information. Marco is certified as MCT, MCDBA, MCSD.NET, MCSA, MCSE+I.
