• Bug
  • Status: Closed
  • Resolution: Fixed
  • drb
  • Reporter: sourceforgetracker
  • September 21, 2009
  • Watchers: 0
  • September 22, 2009

Description

I have a “diskPersistent” cache config, which works fine until the DiskStore tries to flush all accumulated entries to disk.

Our application performs a cache “dispose” during its shutdown process. However, the dispose operation itself seems to use so much memory that the application server crashes with an OutOfMemoryError. Note that the application itself works fine without any memory problems; the problem is the shutdown process.

I noticed that dispose actually goes to flushSpool(), which tries to compute the in-memory index first. In our case this index structure itself might be quite big, which might be the cause of the memory overflow (just a guess).

Is it possible to make this process more scalable, e.g. by removing elements from the cache while building the index?

Regards, Sergey.

Sourceforge Ticket ID: 1369073 - Opened By: nobody - 29 Nov 2005 11:50 UTC

Comments

Sourceforge Tracker 2009-09-21

Logged In: YES user_id=693320

The problem was caused by spoolToDisk putting each element into the DiskStore and then, when finished, clearing the MemoryStore. Because the put initially places the Elements in a spool list, which is also in memory, there is a large memory spike until the spoolThread can catch up with its disk writes.
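To make the spike concrete, here is a minimal sketch of the old pattern, using a hypothetical, simplified memory store and spool list rather than ehcache's actual DiskStore internals: every element is queued on the spool before the memory store is cleared, so for a moment both collections reference the full cache contents.

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Simplified model of the pre-fix behaviour (names are illustrative only).
class SpoolAllThenClear {

    static void disposeOldWay(Map<Integer, byte[]> memoryStore, List<byte[]> diskSpool) {
        // 1. Queue every element for the (asynchronous) disk writer thread.
        //    The spool list lives on the heap, so heap usage roughly doubles
        //    until the writer drains it.
        for (byte[] element : memoryStore.values()) {
            diskSpool.add(element);
        }
        // 2. Only now is the memory store released.
        memoryStore.clear();
    }

    public static void main(String[] args) {
        Map<Integer, byte[]> memoryStore = new LinkedHashMap<>();
        for (int i = 0; i < 1_000; i++) {
            memoryStore.put(i, new byte[10_000]); // roughly 10 MB of cached data
        }
        List<byte[]> diskSpool = new ArrayList<>();
        disposeOldWay(memoryStore, diskSpool);
        System.out.println("Elements still held in memory by the spool: " + diskSpool.size());
    }
}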

I have fixed the issue by putting the Element in the disk store and removing it from the memory store before moving on to the next one. A simple change, really. testMemoryEfficiencyOfFlushWhenOverflowToDisk in CacheTest shows the effect. That test would cause an OutOfMemoryError on a standard 64MB JVM before the change.
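Below is a hedged sketch of the fixed pattern under the same simplified model (the names and the serialization to a temp file are illustrative, not the real DiskStore code): each element is written out and removed from the memory store before the next one is processed, so at most one element is duplicated at a time.

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified model of the post-fix flush: write, then immediately release.
class FlushOneAtATime {

    static void disposeNewWay(Map<Integer, byte[]> memoryStore, File diskStoreFile)
            throws IOException {
        try (ObjectOutputStream out =
                     new ObjectOutputStream(new FileOutputStream(diskStoreFile))) {
            Iterator<Map.Entry<Integer, byte[]>> it = memoryStore.entrySet().iterator();
            while (it.hasNext()) {
                Map.Entry<Integer, byte[]> entry = it.next();
                out.writeObject(entry.getValue()); // put into the "disk store"
                it.remove();                       // release from memory immediately
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Map<Integer, byte[]> memoryStore = new LinkedHashMap<>();
        for (int i = 0; i < 1_000; i++) {
            memoryStore.put(i, new byte[10_000]);
        }
        File diskStoreFile = File.createTempFile("diskstore", ".data");
        disposeNewWay(memoryStore, diskStoreFile);
        System.out.println("Memory store size after flush: " + memoryStore.size());
    }
}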

Thanks for your bug report. The fix is in trunk and will be in ehcache-1.2beta6. Comment by: gregluck - 9 Apr 2006 10:53 UTC

Fiona OShea 2009-09-22

Re-opening so that I can properly close out these issues and have the correct Resolution status in Jira.