• Type: Bug
• Status: Closed
• Priority: 1 Critical
• Resolution: Not a Bug
• Project: ehcache-core
• Assignee: cdennis
• Reporter: passover
• Created: January 28, 2011
• Votes: 0
• Watchers: 0
• Updated: March 23, 2011
• Resolved: February 14, 2011

Attachments

Description

It’s quite strange that the maxElementsInMemory parameter suddenly appears to stop being honored, but only when diskPersistent is set to true. The element count in memory keeps increasing until the JVM runs out of memory. All I do is keep putting elements into the cache; the code is in the attachment.
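The attached test code is not reproduced in this report. Purely as a hypothetical sketch of the scenario described, using the Ehcache 2.x programmatic API (the cache name, element sizes, 300MB spool and per-put flush are assumptions drawn from the comments below, not taken from the attachment), it might look something like this:

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import net.sf.ehcache.config.CacheConfiguration;

public class DiskPersistentGrowthSketch {

    public static void main(String[] args) {
        // Relies on the default (failsafe) Ehcache configuration, which points the
        // disk store at java.io.tmpdir.
        CacheManager manager = CacheManager.create();

        CacheConfiguration config = new CacheConfiguration("test", 1000) // maxElementsInMemory
                .overflowToDisk(true)
                .diskPersistent(true)          // the setting that triggers the reported growth
                .diskSpoolBufferSizeMB(300)    // 300MB spool, as discussed in the comments below
                .eternal(true);
        Cache cache = new Cache(config);
        manager.addCache(cache);

        // Keep putting elements: the "in memory" figure keeps climbing because it also
        // counts elements queued in the disk spool, not just the memory store.
        for (int i = 0; ; i++) {
            cache.put(new Element(Integer.valueOf(i), new byte[1024]));
            cache.flush(); // per-put flush, as noted in the comments below
            if (i % 10000 == 0) {
                System.out.println("memory store size = " + cache.getMemoryStoreSize());
            }
        }
    }
}
```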

Comments

Fiona O'Shea 2011-01-31

If a fix is needed, please also merge it to the 2.4.x branch.

Chris Dennis 2011-02-01

I’ve had a first-pass look at this. The reason he’s seeing what looks like invalid behavior is that he has a 300MB disk spool configured. The in-memory count includes both the elements in the memory store and the elements sitting in the spool that haven’t been written to disk yet (which is a lot of elements when you have a 300MB spool). He’s also doing some other slightly ill-advised things in his test that make matters worse (i.e. they slow down the spool thread and prevent it from keeping up). I’m going to look into it in more detail to make sure nothing is actually going wrong, but I wouldn’t consider this any kind of showstopper for Fremantle.
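Assuming the count being watched comes from Cache.getMemoryStoreSize() (an assumption, since the attachment is not shown here), a small helper like the following, dropped into the sketch above, would make the effect Chris describes visible: the memory figure keeps climbing while the spool backs up, whereas the disk-store figure only advances as the writer thread catches up.

```java
// Hypothetical monitoring helper (not from the attachment); assumes the figure being
// watched is Cache.getMemoryStoreSize() on the disk-persistent cache sketched above.
static void logStoreSizes(final net.sf.ehcache.Cache cache) {
    new Thread(new Runnable() {
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                // The memory figure counts the memory store plus elements still queued in
                // the disk spool; the disk figure only advances as the writer drains it.
                System.out.println("memory store (incl. spool backlog) = " + cache.getMemoryStoreSize()
                        + ", disk store = " + cache.getDiskStoreSize());
                try {
                    Thread.sleep(1000L);
                } catch (InterruptedException e) {
                    return;
                }
            }
        }
    }, "cache-stats").start();
}
```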

Chris Dennis 2011-02-14

The behavior you are seeing is not caused by a failure to respect the maxElementsInMemory setting, but by what the statistic you are watching actually means. When an element is pushed from the in-memory store to the disk store, it is placed on a queue that is drained by an asynchronous disk-writer thread. That queue is softly bounded: it is allowed to grow until it reaches the disk spool size in the configuration (300MB in your test code), after which the threads putting into the cache are throttled to allow the disk writer to catch up. The in-memory element count you are seeing is the sum of the elements in the memory store and the elements still waiting in the spool. In a typical situation where the cache is not being aggressively loaded, the writer keeps the queue small, but in your case the cache is being filled constantly.

In addition, after each cache write you are requesting that the cache be flushed. In a disk-persistent cache this adds an index-write operation to the queue, which triggers a write of the key set to disk. Note that the cache.flush() calls are asynchronous: the call returns before the flush has completed, so it does not throttle the putting threads at all. Since this happens on every put, it slows the disk-writer thread down, and the queue consequently backs up and occupies a lot of heap (up to 300MB). Because the queue size is only softly limited, hitting capacity merely throttles the puts rather than blocking them, so it is entirely possible that the pattern of continuous flushing slows the disk writer down enough that it cannot keep up even while the putting threads are being throttled, in which case the queue growth is unbounded.
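As a hedged illustration (reusing the cache and manager variables from the sketch under the description, which are themselves assumptions rather than the attached code), the put loop without the per-put flush would look like this:

```java
// Same put loop as the sketch above, with the per-put cache.flush() removed so the
// asynchronous disk writer can drain the spool.
for (int i = 0; i < 1000000; i++) {
    cache.put(new Element(Integer.valueOf(i), new byte[1024]));
}
// For a disk-persistent cache, a single flush (or a clean CacheManager.shutdown())
// at the end is enough to persist the index.
cache.flush();
manager.shutdown();
```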

If you remove the cache.flush() calls and reduce the spool size to a value that can fit in the heap (alongside your configured in-memory elements), then you should see much better behavior.
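Putting the two suggestions together, an illustrative version of the recommended setup might look like the following; the 30MB spool size is an example value chosen here, not one taken from this thread.

```java
// Illustrative configuration along the lines suggested: a spool small enough to fit in
// the heap alongside the configured in-memory elements, and no flush() in the put loop.
// The 30MB figure is an assumption, not a value from this thread.
CacheConfiguration fixed = new CacheConfiguration("test", 1000) // maxElementsInMemory
        .overflowToDisk(true)
        .diskPersistent(true)
        .diskSpoolBufferSizeMB(30)
        .eternal(true);
Cache cache = new Cache(fixed);
manager.addCache(cache); // same CacheManager as in the sketch under the description
```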