• Type: Bug
  • Status: Closed
  • Resolution: Fixed
  • Assignee: drb
  • Reporter: sourceforgetracker
  • Created: September 21, 2009
  • Votes: 0
  • Watchers: 0
  • Updated: September 22, 2009
  • Resolved: September 22, 2009

Description

I wasn’t able to comment in any way on the old bug (ID: 2824181), so I created a new one.

I don’t think it’s a problem with the ConcurrentHashMap in MemoryStore. The problem does not show up with the following cache settings in my original JUnit test: Cache cache = new Cache("someName", 200000, true, true, 100000, 1000000);
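
For reference, here is a minimal sketch of the kind of test that exercises the stale-read symptom. The original test is not attached to this ticket, so the element count, keys, and assertions below are assumptions; only the Cache constructor arguments come from the report, with maxElementsInMemory lowered to force overflow to disk:

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Element;

    public class StaleGetSketch {
        public static void main(String[] args) {
            CacheManager manager = CacheManager.create();
            // A small maxElementsInMemory forces elements to overflow to the
            // DiskStore, which is where the stale reads appear.
            Cache cache = new Cache("someName", 100, true, true, 100000, 1000000);
            manager.addCache(cache);

            for (int i = 0; i < 10000; i++) {
                cache.put(new Element("key" + i, "v1"));
            }
            for (int i = 0; i < 10000; i++) {
                cache.put(new Element("key" + i, "v2")); // overwrite every entry
            }
            for (int i = 0; i < 10000; i++) {
                Element e = cache.get("key" + i);
                if (e != null && !"v2".equals(e.getValue())) {
                    System.out.println("stale value for key" + i + ": " + e.getValue());
                }
            }
            manager.shutdown();
        }
    }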

The difference from the original test is the value of maxElementsInMemory. So it seems the “get old value” problem lies in overflowing the MemoryStore to disk. I’ve tried to track this problem down myself (unfortunately I don’t know the code well enough), and my bet (more of a hunch) for the source of the problem is the spool in the DiskStore.
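
To make the hunch concrete: in an overflow-to-disk design, elements evicted from memory are parked in a spool map until a background thread writes them to disk. Here is a toy model of one way such a spool can serve stale data; the class and variable names are illustrative, not the actual Ehcache code, and the real interleaving may differ:

    import java.util.HashMap;
    import java.util.Map;

    // Toy model: a late spool flush clobbers a newer write to disk.
    public class SpoolClobberSketch {
        public static void main(String[] args) {
            Map<String, String> disk  = new HashMap<String, String>();
            Map<String, String> spool = new HashMap<String, String>();

            spool.put("k", "v1");   // old value evicted from memory, queued for disk
            disk.put("k", "v2");    // a newer put for the same key reaches disk first
            disk.putAll(spool);     // the spool flush runs late and overwrites v2
            System.out.println(disk.get("k")); // prints "v1": a later get sees the old value
        }
    }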

IMO, the synchronized on MemoryStore::put() is too heavy-handed a way to get rid of the problem. If a “synchronized” has to be used as a temporary measure, it’s better to put it on DiskStore::put() (I have not tested whether that fixes the problem).
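
The suggested temporary measure would look roughly like the following; the method body is hypothetical, since the actual DiskStore internals are not shown in this ticket:

    // Hypothetical sketch, not the actual Ehcache source.
    public synchronized void put(Element element) {
        // Serializing access here would stop a concurrent spool flush from
        // interleaving with the write of a newer element.
        spool.put(element.getKey(), element);
    }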

One extra note: I’ve found that the spool variable is initialized to “new HashMap()”, while everywhere else, when the spool needs to be cleared, “Collections.synchronizedMap(new HashMap())” is used. I think it would be a good idea to use the synchronizedMap in the initializer as well (or use the unsynchronized one in “removeAll()” and “swapSpoolReference()” and lock externally); see the sketch below. Sourceforge Ticket ID: 2827708 - Opened By: oninofaq - 27 Jul 2009 10:44 UTC
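
A minimal sketch of the initializer fix described in the note above, assuming a spool field roughly like the one in DiskStore (the surrounding code is hypothetical):

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical excerpt, not the actual Ehcache source.
    class DiskStoreSketch {
        // Initialize the spool the same way the clearing paths create it, so
        // every map the field ever references is a synchronized map.
        private Map spool = Collections.synchronizedMap(new HashMap());

        void removeAll() {
            // matches the clearing code described in the ticket
            spool = Collections.synchronizedMap(new HashMap());
        }
    }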

Comments

Fiona OShea 2009-09-22

Re-opening so that I can properly close out these issues and have the correct Resolution status in Jira.