• Bug
  • Status: Closed
  • Resolution: Fixed
  • drb
  • Reporter: sourceforgetracker
  • September 21, 2009
  • Watchers: 0
  • September 22, 2009

Description

Hi

We are trying to insert 1 million records, one at a time, in a for loop. Ehcache is configured to hold 4000 records in memory and overflow the rest to disk.
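The ticket does not include the ehcache.xml used, but the cache dump below names the cache and its settings. A minimal sketch of a configuration matching that dump (the element and attribute names follow the Ehcache 1.x configuration schema; treat it as a reconstruction, not the reporter's actual file):

```xml
<ehcache>
    <!-- Overflow files are written under this directory -->
    <diskStore path="java.io.tmpdir"/>

    <!-- Values taken from the cache dump in the report:
         4000 elements in memory, eternal, overflow to disk,
         no disk persistence across restarts -->
    <cache name="diskOverflow"
           maxElementsInMemory="4000"
           eternal="true"
           overflowToDisk="true"
           diskPersistent="false"
           diskExpiryThreadIntervalSeconds="120"/>
</ehcache>
```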

The program hits an OutOfMemoryError unless I increase -mx beyond the default 64 MB. It is a test program: the main method just calls insert() to load the records, with no other logic.

Shouldn't the memory be managed so that the JVM does not grow beyond the default heap, especially when the disk overflow feature is used?

java -mx256m -verbose:gc PerfTestCache -a all -c 1000000
[GC 511K->141K(1984K), 0.0063770 secs]
[ name = diskOverflow status = 2 eternal = true overflowToDisk = true
  maxElementsInMemory = 4000 timeToLiveSeconds = 0 timeToIdleSeconds = 0
  diskPersistent = false diskExpiryThreadIntervalSeconds = 120
  hitCount = 0 memoryStoreHitCount = 0 diskStoreHitCount = 0
  missCountNotFound = 0 missCountExpired = 0 ]

public void insert() {
    timerStart();
    try {
        for (int i = 0; i < count; i++) {
            ehcache.setData("" + i, "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA" + i);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    timerEnd();
}

Sourceforge Ticket ID: 1118920 - Opened By: vs007 - 8 Feb 2005 21:36 UTC

Comments

Fiona OShea 2009-09-22

Re-opening so that I can properly close out these issues and record the correct Resolution status in Jira.