• Type: New Feature
• Status: Closed
• Priority: 2 Major
• Resolution: Fixed
• Components: ehcache-core, ehcache-terracotta
• Assignee: hsingh
• Reporter: amiller
• Created: December 07, 2009
• Votes: 0
• Watchers: 1
• Updated: January 17, 2013
• Resolved: January 26, 2010

Description

Currently, Terracotta clustered caches can specify coherentReads="false", which skips the lock acquire and just looks for a local copy of the value first. This is valid in cases where there isn’t any stale data (read-only reference data) or where you just don’t care about stale data.

We could add a complementary coherentWrites flag that turns on concurrent write locks. In cases of partitioned data, that might always be OK. This flag would use concurrent locks instead of write locks in the ConcurrentDistributedMap (you can currently pass this on the constructor).

However, you might also want some way to turn this on and off dynamically so that you can bulk load in concurrent write mode and then switch back to write locks after that.
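
For illustration only, here is a rough sketch of how such a dynamic switch might be driven from application code during a bulk load. The setCoherentWrites() method below is hypothetical (it simply mirrors the existing coherentReads attribute) and is not part of the current Ehcache API; the cache name and loader class are likewise placeholders:

import java.util.Map;

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class BulkLoader {
    // Load a batch of entries using concurrent (incoherent) write locks,
    // then restore normal write locking afterwards.
    public static void bulkLoad(CacheManager manager, Map<String, String> data) {
        Cache cache = manager.getCache("referenceData");
        cache.setCoherentWrites(false);    // hypothetical: puts take concurrent locks
        try {
            for (Map.Entry<String, String> entry : data.entrySet()) {
                cache.put(new Element(entry.getKey(), entry.getValue()));
            }
        } finally {
            cache.setCoherentWrites(true); // hypothetical: back to write locks
        }
    }
}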

Comments

gluck 2009-12-07

Use Case 1: All nodes are read-only except for one batch loader, which is invoked from cron once per day.

Use Case 2: Terracotta is configured in durable, HA mode where three nodes write to their own primary and mirrors. In this case the application is partitioned at the CacheManager level and the user does not want clustering (this is a real use case).

In both cases the coherentWrites="false" setting would be left permanently on.

A dynamic feature would additionally allow this setting to be changed programmatically at runtime, but that need has not come up in the use cases I have been looking at.

Alex Miller 2009-12-17

By the way, I tried to hack this in to test MyLife by hard-coding the write lock to a CONCURRENT lock in the CDM under Ehcache, and it failed because concurrent writes cannot be nested. I didn’t investigate any further than that, but it implies to me that we would need to take a thorough look at where locks are taken and be very careful to avoid nested writes if we used concurrent mode.

Here’s the stack trace in question:

Exception in thread "Thread-27" java.lang.UnsupportedOperationException: Don't currently support nested concurrent write transactions
    at com.tc.object.tx.ClientTransactionManagerImpl.begin(ClientTransactionManagerImpl.java:105)
    at com.tc.object.bytecode.ManagerImpl.lock(ManagerImpl.java:719)
    at com.tc.object.bytecode.ManagerUtil.beginLock(ManagerUtil.java:208)
    at org.terracotta.collections.BasicLockStrategy.beginLock(BasicLockStrategy.java:27)
    at org.terracotta.collections.ConcurrentDistributedMapDso.beginLock(ConcurrentDistributedMapDso.java:973)
    at org.terracotta.collections.ConcurrentDistributedMapDso.putNoReturn(ConcurrentDistributedMapDso.java:271)
    at org.terracotta.collections.ConcurrentDistributedMapDsoArray.putNoReturn(ConcurrentDistributedMapDsoArray.java:117)
    at org.terracotta.collections.ConcurrentDistributedMap.putNoReturn(ConcurrentDistributedMap.java:189)
    at org.terracotta.cache.impl.DistributedCacheImpl.putNoReturn(DistributedCacheImpl.java:363)
    at org.terracotta.modules.ehcache.store.ClusteredStore.put(ClusteredStore.java:96)
    at net.sf.ehcache.Cache.put(Cache.java:861)
    at net.sf.ehcache.Cache.put(Cache.java:796)
    at com.mylife.wsfy.cache.impl.terracotta.WsfyCountDataLoader.loadDataFromFile(WsfyCountDataLoader.java:60)
    at com.mylife.wsfy.cache.impl.terracotta.WsfyCountDataLoader.access$000(WsfyCountDataLoader.java:15)
    at com.mylife.wsfy.cache.impl.terracotta.WsfyCountDataLoader$WsfyCountThread.run(WsfyCountDataLoader.java:107)
    at java.lang.Thread.run(Thread.java:619)

Fiona OShea 2010-01-26

We did something for this; is this resolved?

Saravanan Subbiah 2010-01-26

We are now providing bulk-loading interfaces in Ehcache, which do this.
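
For reference, a minimal sketch of what that bulk-load usage looks like, assuming the node-coherence methods that ship with Ehcache 2.x (setNodeCoherent() and waitUntilClusterCoherent()); the surrounding class and method names are illustrative only:

import net.sf.ehcache.Cache;
import net.sf.ehcache.Element;

public class CoherenceToggle {
    // Switch the local node to incoherent (bulk-load) mode, load the data,
    // then switch back and wait for the whole cluster to become coherent again.
    public static void load(Cache cache, Iterable<Element> elements) {
        cache.setNodeCoherent(false);          // incoherent mode: faster, unlocked puts
        try {
            for (Element element : elements) {
                cache.put(element);
            }
        } finally {
            cache.setNodeCoherent(true);       // restore coherent mode for this node
            cache.waitUntilClusterCoherent();  // block until all nodes are coherent
        }
    }
}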

Himadri Singh 2010-02-22

Verified in rev 1916