I have seen this effect on many production servers but never quite believed it: there is much more RAM than the cube size, yet SSAS is continually losing data from its cache and asking for physical IO. In fact, SSAS relies on the Windows File Cache to avoid much of its IO, but the Windows File Cache does not hold the entire cube either.
To demonstrate the effect, I set up a server with 36GB of RAM, dedicated to SSAS, hosting an 11GB MOLAP cube. Then, over a period of 19 hours, I had client tasks query the cube in a semi-random fashion. The cube was not updated during this period. If SSAS or the Windows File Cache were caching effectively, there should not have been much more than 11GB of read IO on the SSAS data drive over those 19 hours. Instead, the system reached an equilibrium in which it was requesting over 20MB of read IO per second. Added up, that is roughly 1.3 terabytes read from an 11GB cube (20MB * 3,600 seconds * 19 hours). Clearly, the system was not caching the cube effectively. This is very different from SQL Server, which would quite happily keep an 11GB database in memory, effectively eliminating further read IO.
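As a quick sanity check on that arithmetic, here is a trivial C snippet (a sketch only, plugging in the observed figures from the test above):

    /* Sanity check of the read IO figures quoted above (values as observed in the test). */
    #include <stdio.h>

    int main(void)
    {
        double mb_per_sec = 20.0;   /* observed average physical read IO */
        double hours      = 19.0;   /* duration of the test              */
        double cube_gb    = 11.0;   /* size of the MOLAP cube            */

        double total_gb = mb_per_sec * 3600.0 * hours / 1024.0;   /* ~1,336 GB */
        double total_tb = total_gb / 1024.0;                      /* ~1.3 TB   */

        printf("Total read: %.0f GB (%.1f TB), about %.0f times the cube size\n",
               total_gb, total_tb, total_gb / cube_gb);
        return 0;
    }

In other words, the cube was re-read from disk well over a hundred times during the test.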
My server was running Windows Server 2008 and SQL Server 2008 R2 with default configuration settings. The SSAS database was located on the d: drive, and nothing else was located on that drive.
Here is a Perfmon chart with some of the key counters during my test.
SSAS (msmdsrv.exe) did not use more than 12GB of memory during the test, which is well below the default LowMemoryLimit of 65% (about 23GB on a 36GB server).
During the 19 hours, SSAS requested, on average, 102MB/second of read IO. The Windows File Cache satisfied about 80% of these read requests (1 - 20/102), so physical disk reads averaged only 20MB/second.
The Perfmon chart above shows some of the key counters. Notice:
- The Process (msmdsrv) Bytes/sec y-axis scale is in MB, often going off the scale at 100MB/sec. These are not necessarily physical IOs, as the Windows File Cache satisfies many of them.
- PhysicalDisk Bytes/sec averages 20.1MB/second over the 19 hours.
- Windows Cache Bytes sits between 0.6GB and 1.1GB.
- The red line (Cache Copy Reads/sec) is the IO requested of the Windows File Cache - the reads it managed to satisfy by finding the page in its cache.
- See how Cache Copy Reads/sec absorbs IO requested by the msmdsrv process, reducing PhysicalDisk Bytes/sec. The comparison is not straightforward, since the cache counter is in reads/sec while the PhysicalDisk and Process counters are in bytes/sec, but the relationship is visible. (A sketch for sampling these counters programmatically follows this list.)
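For anyone who wants to watch these counters outside Perfmon, below is a minimal C sketch using the Windows PDH API to sample them once a second. The counter paths and instance names ("msmdsrv", "1 D:") are assumptions for my setup, and I have used the Process "IO Read Bytes/sec" counter as the per-process read rate; adjust them for your server, and note that counter names are localised on non-English systems.

    /* Minimal PDH sketch: sample the msmdsrv read rate, the physical disk read rate
       and the file cache copy-read rate once a second. Counter paths and instance
       names ("msmdsrv", "1 D:") are assumptions - adjust them for your own server. */
    #include <windows.h>
    #include <pdh.h>
    #include <stdio.h>
    #pragma comment(lib, "pdh.lib")

    int main(void)
    {
        PDH_HQUERY   query;
        PDH_HCOUNTER procReads, diskReads, cacheReads;

        if (PdhOpenQueryW(NULL, 0, &query) != ERROR_SUCCESS) return 1;

        PdhAddCounterW(query, L"\\Process(msmdsrv)\\IO Read Bytes/sec",     0, &procReads);
        PdhAddCounterW(query, L"\\PhysicalDisk(1 D:)\\Disk Read Bytes/sec", 0, &diskReads);
        PdhAddCounterW(query, L"\\Cache\\Copy Reads/sec",                   0, &cacheReads);

        PdhCollectQueryData(query);                /* first sample primes the rate counters */

        for (int i = 0; i < 60; i++) {
            Sleep(1000);
            PdhCollectQueryData(query);

            PDH_FMT_COUNTERVALUE p, d, c;
            PdhGetFormattedCounterValue(procReads,  PDH_FMT_DOUBLE, NULL, &p);
            PdhGetFormattedCounterValue(diskReads,  PDH_FMT_DOUBLE, NULL, &d);
            PdhGetFormattedCounterValue(cacheReads, PDH_FMT_DOUBLE, NULL, &c);

            printf("msmdsrv reads %6.1f MB/s  physical disk %6.1f MB/s  cache copy reads %7.0f/s\n",
                   p.doubleValue / (1024.0 * 1024.0),
                   d.doubleValue / (1024.0 * 1024.0),
                   c.doubleValue);
        }
        PdhCloseQuery(query);
        return 0;
    }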
However, you can clearly see that SSAS (or the system) is performing much more IO than would be necessary if the entire 11GB cube were cached in memory.
There are a few takeaways from this exercise:
- Be thankful that we have the Windows File Cache, and consider tuning it for SSAS.
- Try to put SSAS databases on very fast storage (ideally solid-state disks). If you cannot avoid the IO, at least make it fast.
- Partition large cubes. This helps reduce logical IO requests, and to the extent that it groups the popular data together, it will probably help the Windows File Cache hold the hot data more effectively.
- Hopefully, in a future version, either SSAS or the Windows File Cache will cache more of the cube.
Please help get this fixed by voting on https://connect.microsoft.com/SQLServer/feedback/details/760107/multidimensional-cube-is-not-retained-in-memory-even-though-there-is-plenty-of-ram
Richard, this is really strange to me. Are you sure that your server is not paging?
You should track many more performance counters, including SSAS private bytes, the SSAS counters (especially memory), and the system memory counters (paging in particular).
Hi Unknown,
I was tracking every Perfmon counter. I only showed the salient counters in the picture above, otherwise it would be quite unreadable. The system was not paging: it had free memory all day, and the paging file is on the c: drive.
Why don't you try to reproduce the results? All you need is an 11GB cube and a server with at least 2-3 times that much RAM.
Hi Richard,
I tried to leave a comment yesterday but it hasn't shown up...? Anyway, I was going to suggest that you're running into the problem I describe here: http://cwebbbi.wordpress.com/2009/02/04/arbitrary-shaped-sets-and-the-storage-engine-cache/
Chris
Hi Chris,
I think the arbitrary shaped sets have something to do with it, but I would still like an 11GB cube to be cached on a 36GB server.
Thanks Richard,
Much appreciated, as this is a constant concern for us.
Cheers
If you want to see the Analysis Services cache manager improved to cache more of the cube in memory, vote at https://connect.microsoft.com/SQLServer/feedback/details/760107/multidimensional-cube-is-not-retained-in-memory-even-though-there-is-plenty-of-ram
Have you looked at amd.com/ramdrive? I'm on an SSD i7 with 16GB of RAM, and I'm just dying for faster performance on these SSRS / SSAS subreports.
I've been using this RAMDRIVE product for about a month with absolutely zero problems, and I'd love it if someone could help me benchmark it :)
PS - I know this sounds ridiculous, but I honestly still use SQL 2000 (on Win7) for development, and then I upsize to 2008 R2 :)
Richard,
Thank you very much for your post. I faced exactly the same problem as you. Moreover, I have an MDX calc that runs intensively through a measure group; it looks like this (part of it):
FILTER
(
    NonEmpty(
        IIF(
            [ТоргТочки].[ТоргТочки_Id].CurrentMember.Level IS [ТоргТочки].[ТоргТочки_Id].[(All)],
            EXISTS( [ТоргТочки].[ТоргТочки_Id].[ТоргТочки_Id].Members, <> ),
            EXISTING [ТоргТочки].[ТоргТочки_Id].[ТоргТочки_Id].Members
        ),
        [Measures].[РезультатN_Count]
    ),
    [Measures].[РезультатN_Sum] > 0 AND [Measures].[Оборот] > 0
)
This measure group has 500,000 rows and is about 50MB on disk (looking at the files in the DATA folder).
During execution of this request, I see IO load from msmdsrv.
Question: is it so difficult for Analysis Services to load the whole 50MB into memory before executing the request? Instead, it makes a lot of reads from disk...
I tried 5 different RAM drive products, and none of them gave any improvement. They all have the same problem: sequentially copying big files from RAM disk to RAM disk is fast (500MB/sec), but copying a set of small files (e.g. the files of an AS database) from RAM disk to RAM disk drops to about 50MB/sec.
At least, that was my result.
So, if someone knows of RAM disk software that works well with small files, we would all be grateful if you name it here.
Richard,
Sorry for being late to the game. I just read your Connect item, which linked to this post.
Your issue sounds most frustrating - I feel your pain.
I am no SSAS expert, but your issue was intriguing, so I have a few questions for you (mainly to clarify the problem statement in your original post):
- Can you please share the exact numerical values you use for your SSAS LowMemoryLimit, TotalMemoryLimit, and HardMemoryLimit settings (if you have specified explicit values to override the defaults), and which ones rely on the default values? SSAS Books Online says the defaults are 65(%), 80(%), and 0 (zero - use the mid-point between low and total, i.e. 72-73%), respectively.
I understand that your post states:
- 36 GB dedicated server
- 11 GB cube
- 65% LowMemory(Limit?) is mentioned - not sure if that is explicitly set or a default value. If in effect, it should result in an actual low memory limit of 23.4 GB (65% of 36 GB).
- No mention of Total or Hard memory limit configuration settings (explicit or default) - possibly what was meant by "default configuration setting" at the start of one paragraph in your post.
After reading an interesting two-post series on SSAS memory configuration and a specific performance problem (see links: http://greg.blogs.sqlsentry.net/2009/06/analysis-services-memory-limits.html and http://greg.blogs.sqlsentry.net/2009/06/analysis-services-memory-limits-part-ii.html), I learned that these SSAS memory configuration values represent a percentage of server memory if they are <= 100, or bytes of memory if they are > 100. Yes - that is bytes, not KB, MB, GB, or any other storage unit measure! It surprised me too. The root cause in that post series was explicit SSAS memory configuration values that assumed KB instead of bytes, resulting in inadequate memory for SSAS, which was the source of the performance problems.
Is it possible that you explicitly set one or more of these SSAS memory configuration values thinking that the unit measure was KB, MB, or GB, but SSAS is expecting bytes and the setting is way lower than what you had thought?
As an example: Say you had manually calculated 65% of 36 GB as 23.4 GB, truncated it to 23, and set LowMemoryLimit to 23 (thinking it was GB for the storage unit measure). SSAS would have interpreted the 23 as a percentage of 36 GB (and not as 23 GB), resulting in a configured low memory limit value of 8.3 GB (23% of 36 GB) - less than the 11 GB needed by your cube.
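To illustrate that interpretation rule, here is a small C sketch (my own illustration of the documented convention, not SSAS code) that converts a configured memory-limit value into the effective number of bytes - values up to 100 are treated as a percentage of physical memory, values above 100 as raw bytes:

    /* Illustration of how an SSAS memory-limit setting is interpreted:
       values <= 100 are a percentage of total physical memory, values > 100 are bytes.
       This is my own sketch of the documented rule, not SSAS code. */
    #include <stdio.h>

    static double effective_limit_bytes(double setting, double physical_bytes)
    {
        if (setting <= 100.0)
            return physical_bytes * setting / 100.0;   /* treated as a percentage */
        return setting;                                /* treated as raw bytes     */
    }

    int main(void)
    {
        double ram = 36.0 * 1024 * 1024 * 1024;        /* 36 GB server */
        double gb  = 1024.0 * 1024 * 1024;

        /* LowMemoryLimit = 23, intended as "23 GB" but read as 23% */
        printf("Setting 23          -> %.1f GB effective\n",
               effective_limit_bytes(23.0, ram) / gb);            /* ~8.3 GB  */

        /* Default LowMemoryLimit = 65 (%) */
        printf("Setting 65          -> %.1f GB effective\n",
               effective_limit_bytes(65.0, ram) / gb);            /* ~23.4 GB */

        /* Explicit byte value, e.g. 25,125,558,681 */
        printf("Setting 25125558681 -> %.1f GB effective\n",
               effective_limit_bytes(25125558681.0, ram) / gb);   /* ~23.4 GB */

        return 0;
    }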
Another alternative to try: set each of the three memory limit configuration values to their appropriate byte values, based on manual calculations of the 65%, 80%, and 73% default percentage values. The resulting memory limit values in bytes should be:
* 25,125,558,681 bytes: 23.4 GB - 65% of 36 GB
* 30,923,764,531 bytes: 28.8 GB - 80% of 36 GB
* 28,239,409,971 bytes: 26.3 GB - 73% of 36 GB
I'm guessing that the memory configuration values in bytes should not include comma thousands separators. I only include the comma thousands separators here for readability.
The premise of explicitly setting the memory limits as byte values is that the percentage values might be resulting in miscalculated limits (arithmetic overflow, etc.). Just a theory.
I hit the reply size limit for your blog, so I will break this into a separate reply.
Scott R.
Richard,
Continued from prior post ...
- Can you confirm that no other SSAS memory configuration settings are being explicitly used besides the three named above? It appears that there are other memory "knobs" to play with, but the docs say not to mess with them without specific guidance from Microsoft. I am guessing that you haven't set any other memory configuration values, but I wanted to specifically ask and close that loop.
- Can you confirm that you are using the x64 64-bit versions of Windows Server and SSAS, on supported 64-bit server hardware / VMs? This is probably a dumb question, as you spoke of 64-bit in general terms in your post, but I'm asking to close the loop, since you never specifically stated the platform for each of these components. Hopefully it is easy to confirm and eliminate as a possible root cause.
When you mentioned that 2-3 GB cubes did not seem to have these issues, it got me thinking about the 2-3 GB memory limits of 32-bit platforms from days gone by, and wondering whether any of the installed hardware / software might be a different platform than expected or assumed.
I look forward to your feedback and a better understanding of your "puzzle". Thanks again for sharing your experiences. We all learn from these efforts.
Scott R.
Hi Scott,
This is absolutely a feature of SSAS. Many very capable SSAS technicians have asked me similar questions, incredulous that memory caching in SSAS could be so poor in an x64 environment. I have spent the time to convince them of my findings and, on occasion, had them reproduce it.
So I apologise for this brief answer; suffice it to say that this is the behaviour of SSAS, and I would very much like to see it improved.
Hi Richard,
Recently, I stumbled across a section explaining the economic memory management model (in MSFT SQL Server 2008 Analysis Services Unleashed) and immediately thought of this post ;-)
(According to the text) regardless of memory utilization (relative to the defined low/total/hard limits) cleaner threads will still activate on a recurring basis, traverse the list of "memory holders" and send out requests to give up some shrinkable memory. Upon receiving a request, the "memory holder" will calculate the "tax" it owes and pay up (read: release shrinkable memory). I think this may be partly responsible for the churn you're seeing.
So unless you disable the cleaner threads entirely (which I'm not sure is possible), I don't think you can cache an entire cube - and that assumes you are also able to eliminate the arbitrary shaped-sets issue that Chris wrote about some time ago.
Perhaps you might try messing with the "LimitSystemFileCachePeriod" setting in the msmdsrv.ini file. The default value is 1000, which I can only assume is milliseconds - but really I have no idea and have never messed with this property.
Hi Richard,
We raised a support request for this one a few years ago and got nowhere after spending a lot of effort proving what you illustrate here.
We ended up taking the workaround on the Connect entry: using solid state storage and over-provisioning memory.
Unfortunately, the over-provisioning isn't enough anymore, and the machine can't take more memory.
So we are trying to go lateral...
http://support.microsoft.com/kb/976618
It describes using the GetSystemFileCacheSize and SetSystemFileCacheSize API functions to set the maximum or minimum size of the working set of the system file cache.
It seems to work in a development environment.
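For reference, here is a minimal C sketch of what that KB article describes (my own illustration, not the KB's code): it reads the current system file cache working-set limits and then imposes a hard maximum. It needs to run elevated, assumes a 64-bit build, and the 4GB cap is an arbitrary example value, not a recommendation.

    /* Sketch of the KB976618 approach: read the system file cache working-set limits
       and impose a hard maximum. Run elevated; 64-bit build assumed; the 4 GB cap is
       an arbitrary example value, not a recommendation. */
    #include <windows.h>
    #include <stdio.h>
    #pragma comment(lib, "advapi32.lib")

    static BOOL EnablePrivilege(LPCWSTR name)
    {
        HANDLE token;
        TOKEN_PRIVILEGES tp = { 0 };
        if (!OpenProcessToken(GetCurrentProcess(),
                              TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
            return FALSE;
        if (!LookupPrivilegeValueW(NULL, name, &tp.Privileges[0].Luid)) {
            CloseHandle(token);
            return FALSE;
        }
        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        BOOL ok = AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL) &&
                  GetLastError() == ERROR_SUCCESS;
        CloseHandle(token);
        return ok;
    }

    int main(void)
    {
        SIZE_T minSize = 0, maxSize = 0;
        DWORD flags = 0;

        if (GetSystemFileCacheSize(&minSize, &maxSize, &flags))
            printf("current: min=%llu max=%llu flags=0x%lx\n",
                   (unsigned long long)minSize, (unsigned long long)maxSize, flags);

        /* SetSystemFileCacheSize requires SeIncreaseQuotaPrivilege */
        if (!EnablePrivilege(L"SeIncreaseQuotaPrivilege")) {
            printf("could not enable SeIncreaseQuotaPrivilege (run elevated)\n");
            return 1;
        }

        /* Example only: hard-cap the file cache working set at 4 GB, keep the current minimum. */
        SIZE_T cap = (SIZE_T)4 * 1024 * 1024 * 1024;
        if (!SetSystemFileCacheSize(minSize, cap, FILE_CACHE_MAX_HARD_ENABLE)) {
            printf("SetSystemFileCacheSize failed: %lu\n", GetLastError());
            return 1;
        }
        printf("file cache maximum hard-capped at 4 GB\n");
        return 0;
    }

Presumably the FILE_CACHE_MIN_HARD_ENABLE flag could be used in the same way to enforce a minimum, which is closer to what we would actually want in this scenario.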
Any thoughts?
W
Thanks Acius,
That adds more information to the issue. I will look into it when I get a few free minutes. However, I'm not confident that it will be a good workaround: it appears to fix the issue of the file cache taking too much memory, whereas my experience has been lots of free RAM with SSAS and the file cache not using it. Ideally, SSAS would have a good cache (like SQL Server's) to hold cells and aggregations, limited in size. SSAS also relies on the Windows File Cache to hold data in memory, so there is an overlap, which adds inefficiency. I wouldn't mind so much if the Windows File Cache held all the cube data files, so that all SSAS file calls were served from the cache - but that isn't my experience.
Perhaps the SetSystemFileCacheSize API could be used to set a high limit on the file cache.
Thank you for the contribution. It does get us closer to a solution.