• Type: Bug
• Status: Closed
• Resolution: Fixed
• Assignee: drb
• Reporter: sourceforgetracker
• Created: September 21, 2009
• Votes: 0
• Watchers: 0
• Updated: September 22, 2009
• Resolved: September 22, 2009

Description

There is a gap in the typical intent of clustering/replicating by invalidating elements rather than copying them. When replicateUpdatesViaCopy=false is set, it should apply to puts of new elements as well as updates of existing elements, since one cache cannot know whether that keyed element exists in the other caches.

For example, JVM1 has a reference to userId 123 that was cached 5 minutes ago, and JVM2 has no reference at all to userId 123. JVM2 then receives a call to set userId 123 to a new object, without any get of the previous object. There is no way to configure ehcache so that JVM2 invalidates userId 123 in JVM1 in this situation; it can only copy, not invalidate.
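To make the scenario concrete, here is a minimal sketch of JVM2's side (the cache name and value are hypothetical, and it assumes an ehcache.xml on the classpath that configures RMI replication with replicateUpdatesViaCopy=false on both JVMs):

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Element;

    public class StalePutScenario {
        public static void main(String[] args) {
            // Loads the default ehcache.xml from the classpath.
            CacheManager manager = CacheManager.getInstance();
            Cache users = manager.getCache("users"); // hypothetical cache name

            // A fresh put on JVM2, with no prior get of key "123". JVM2
            // cannot know that JVM1 cached an older object under "123"
            // five minutes ago, so this fires notifyElementPut, which
            // copies the element to JVM1 instead of invalidating its
            // stale entry.
            users.put(new Element("123", "replacement user object"));

            manager.shutdown();
        }
    }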

Copying won’t work for us, since we have some object-initialisation routines that won’t survive replication via copy.

A more logical, consistent, and safe implementation would be to have notifyElementPut honour the replicateUpdatesViaCopy=false flag, as notifyElementUpdated already does in RMISynchronousCacheReplicator and RMIAsynchronousCacheReplicator.
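Concretely, the change might look like the following sketch of notifyElementPut (the fields and helper methods, such as replicatePuts, replicatePutNotification, and replicateRemovalNotification, are assumed from the existing replicator internals; this is an illustration of the proposal, not the shipped code):

    // Inside RMISynchronousCacheReplicator / RMIAsynchronousCacheReplicator.
    public void notifyElementPut(Ehcache cache, Element element) throws CacheException {
        if (notAlive() || !replicatePuts) {
            return;
        }
        if (replicateUpdatesViaCopy) {
            // Current behaviour: ship the whole element to the peers.
            if (element.isSerializable()) {
                replicatePutNotification(cache, element);
            }
        } else {
            // Proposed behaviour: tell the peers to drop whatever
            // (possibly stale) copy they hold under this key.
            if (element.isKeySerializable()) {
                replicateRemovalNotification(cache,
                        (java.io.Serializable) element.getObjectKey());
            }
        }
    }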

This is in fact the only way to make invalidation work properly for us in this context, and I can’t imagine why you would want the inconsistency of invalidating updates but not creates/inserts. If separate behaviour for puts is warranted for some reason, there should be a replicatePutsViaCopy=false flag.
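With such a flag, the RMI replicator configuration in ehcache.xml might look like this (the property names follow the existing replicate*ViaCopy convention used by RMICacheReplicatorFactory; treat the exact naming as an assumption until checked against the release):

    <cache name="users" maxElementsInMemory="10000" eternal="false"
           timeToLiveSeconds="300">
        <cacheEventListenerFactory
            class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
            properties="replicateAsynchronously=true,
                        replicatePuts=true,
                        replicatePutsViaCopy=false,
                        replicateUpdates=true,
                        replicateUpdatesViaCopy=false,
                        replicateRemovals=true"/>
    </cache>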

Sourceforge Ticket ID: 2684466 - Opened By: kmashint - 12 Mar 2009 04:32 UTC

Comments

Sourceforge Tracker 2009-09-21

Hi

I think I agree with you.

In your example JVM2 would normally have the reference, because you would have distributed caching. But it is possible, and sometimes desirable, to set up unidirectional replication, in which case the extra behaviour for puts is useful.

I have implemented this in trunk. It will be in ehcache-1.6 beta 4, out soon. Comment by: gregluck - 28 Mar 2009 05:27 UTC

Fiona OShea 2009-09-22

Re-opening so that I can properly close out these issues and have the correct Resolution status in Jira.