• Type: Bug
• Status: Closed
• Priority: 2 Major
• Resolution: Fixed
• Project: ehcache-core
• Assignee: alexsnaps
• Reporter: pomelko
• Created: February 10, 2012
• Votes: 0
• Watchers: 4
• Updated: August 13, 2012
• Resolved: March 05, 2012

Attachments

Description

In our application, based on Spring 3.1 + Hibernate 3.6.2 + Ehcache 2.5.0, we have a problem with an entity cache region that grows beyond the maxEntriesLocalHeap limit.

Elements that represent removed entities remain in the cache and cannot be evicted by new elements (when the maxEntriesLocalHeap limit is reached).

Hibernate is using net.sf.ehcache.hibernate.SingletonEhCacheRegionFactory, set via the hibernate.cache.region.factory_class property.

Region definition in ehcache xml:
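
For illustration only, a region definition of this shape (the entity class name and values here are placeholders, not those from the original report):

    <cache name="com.example.EntityImpl"
           maxEntriesLocalHeap="1000"
           eternal="false"
           timeToLiveSeconds="300"
           memoryStoreEvictionPolicy="LRU"/>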

It can be easily reproduced by repeatedly creating, saving, and deleting an entity (test in attachment):

    EntityImpl entity = new EntityImpl();
    session.save(entity);
    session.delete(entity);

The problem does not occur when create/save/load is repeated.

It seems to work after changing the Hibernate provider to net.sf.ehcache.hibernate.SingletonEhCacheProvider (via hibernate.cache.provider_class) or after downgrading to Ehcache 2.4.7; in both cases the tests finish successfully.

Comments

Pawel Omelko 2012-03-01

Hello Alex,

Did you run the included test to reproduce the described behavior? What I need to know now is the classification: is it a bug, or is something wrong in the configuration?

Thanks for help.

Alexander Snaps 2012-03-01

I’ve had a look at your code (and will attach a slightly modified, Maven-based version). As far as I can tell, the issue is related to the explicit flush you do on the session. For some reason Hibernate believes the delete needs to happen, so (using the READ_WRITE strategy) it locks these entries in the cache. I believe, though, that it never unlocks them afterwards. As a result you end up with a SoftLock for each of the entries in the cache. These push the cache above the threshold because, as of Ehcache 2.5, they get pinned, and they only get unpinned once unlocked. I believe this unlock never happens. As I see it now, there is no bug in our code, but potentially one in Hibernate’s (3.6.10 exposes the same behaviour). Avoiding the flush() call will make it go away…
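
A minimal sketch of the failing pattern (hypothetical entity and session setup; the attached test is the authoritative reproducer):

    import org.hibernate.Session;
    import org.hibernate.Transaction;

    Session session = sessionFactory.openSession(); // sessionFactory: a configured SessionFactory
    Transaction tx = session.beginTransaction();
    for (int i = 0; i < 10000; i++) {
        EntityImpl entity = new EntityImpl();
        session.save(entity);
        session.delete(entity);
        session.flush(); // the explicit flush: each flushed delete leaves a pinned SoftLock behind
    }
    tx.commit();
    session.close();

With the READ_WRITE strategy, every flushed delete locks its cache entry, and the pinned SoftLocks accumulate past maxEntriesLocalHeap.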

Pawel Omelko 2012-03-02

It’s not that simple to avoid the explicit flush. Take thousands of entities modified while scrolling: you have to flush and clear the session every so many iterations to prevent an OutOfMemoryError. And in any case, before the session is closed, a flush has to be done for the DB changes to be applied. So avoiding the flush call is not a solution.

The same situation occurs when I delete an entity using org.springframework.orm.hibernate3.HibernateTemplate. There I’m not using an explicit flush; the flush is done by Spring, for example when flushMode is set to FLUSH_AUTO.
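
A sketch of that Spring path (hypothetical wiring, following the description above):

    import org.springframework.orm.hibernate3.HibernateTemplate;

    HibernateTemplate template = new HibernateTemplate(sessionFactory);
    template.setFlushMode(HibernateTemplate.FLUSH_AUTO);
    // no explicit session.flush() in application code: per the comment above,
    // Spring performs the flush itself, so the same SoftLock pinning is triggered
    template.delete(entity);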

I’m not quite sure what ‘For some reason Hibernate believes the delete needs to happen’ means. Deleting entities is exactly what my test does.

And what now? Is there anything more you can do with this bug, or should I report it in the Hibernate JIRA?

Thanks for your help anyway!

Alexander Snaps 2012-03-02

The explicit flush is the issue. Now, I don’t know exactly what your production system does, but the test does:

    session.save(entity);
    session.delete(entity);

in a loop. So all these saves don’t necessarily have to go to the database (probably also depending on the @Id generator…). Anyway, in your test, if you remove the explicit flush the problem isn’t present: there are no entities in the database, and none in the cache.

Are you saying your production code does something like:

    Session session = db.createSession(); // db: hypothetical helper returning a Hibernate Session
    session.setFlushMode(FlushMode.MANUAL);
    for (int j = 0; j < i; j++) { // i: total number of entities to write
      EntityImpl entity = new EntityImpl();
      session.save(entity);
      if (j % 100 == 99) {
        // flush and clear every 100 entities to bound session memory
        session.flush();
        session.clear();
      }
    }
    session.close();

Alexander Snaps 2012-03-02

Actually, I was wrong, sorry! When the manual flush is removed, the deletes are plainly ignored.

Alexander Snaps 2012-03-02

I’ve done some more investigation. Give me another couple of hours and I’ll get back to you… I think I might have narrowed the issue down.

Alexander Snaps 2012-03-02

I confirm this is a bug in our code base. It was introduced with 2.5 and its support for pinning. The Hibernate CacheRegion implementation we provide does not properly handle the unpinning. I’ll work on providing a patch as soon as possible.

Alexander Snaps 2012-03-05

Fixed on trunk, and merged back to the 2.5.x branch (r5345). The fix consists of two parts:

  • Properly unpinning SoftLocks that aren’t currently held;
  • Checking capacity even when doing replaces in the MemoryStore, so that putting an unlocked SoftLock (when deleting) or the actual CacheEntry (when updating) after the transaction completes (READ_WRITE strategy) actually honours the capacity limit (see the sketch below).
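
To illustrate the idea, a toy model only (not the actual MemoryStore code): pinned entries can push the store past its capacity, so the eviction check must also run when a pinned SoftLock is replaced by its unpinned form, not only on fresh puts.

    import java.util.Iterator;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Toy model of the fixed behaviour (not Ehcache source): a bounded store
    // whose entries can be pinned, with the capacity check run on every put,
    // including puts that replace an existing mapping.
    final class PinnableStore<K> {
        private final int capacity;
        private final Map<K, Boolean> entries = new LinkedHashMap<K, Boolean>();

        PinnableStore(int capacity) { this.capacity = capacity; }

        // Put or replace; 'pinned' models a READ_WRITE SoftLock holding the entry.
        void put(K key, boolean pinned) {
            entries.put(key, pinned);
            checkCapacity(); // the fix: evaluated on replaces too, not only on new puts
        }

        private void checkCapacity() {
            // Evict unpinned entries until the store is back under its limit.
            Iterator<Map.Entry<K, Boolean>> it = entries.entrySet().iterator();
            while (entries.size() > capacity && it.hasNext()) {
                if (!it.next().getValue()) {
                    it.remove();
                }
            }
        }
    }

In this model, put(key, true) for each flushed delete grows the store past its capacity (pinned entries are not evictable), and the later put(key, false) for the unlock is a replace; running checkCapacity() on that replace is what brings the store back under the limit.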