Sharing Mondrian cache between applications¶
New test scenarios¶
- How does CDC behave when two Pentaho instances share the same cache cluster?
- If we set the two Hazelcast maps to use 90% of memory, does this result in an OutOfMemoryError?
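The second question hinges on eviction: per-map limits are independent, so two maps each allowed 90% of the heap can still exhaust it unless entries are evicted. As a sketch, a hazelcast.xml fragment in Hazelcast 2.x-era syntax (the map name and percentage are illustrative) that caps a map by used-heap percentage and evicts instead of growing unbounded:

```xml
<map name="mondrianCache">
  <!-- Evict least-recently-used entries once this map's share of the heap
       is reached; without an eviction policy the map grows until the JVM
       throws OutOfMemoryError. -->
  <eviction-policy>LRU</eviction-policy>
  <max-size policy="USED_HEAP_PERCENTAGE">45</max-size>
</map>
```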
Test Lite & Instances¶
- Lite member, no other members in cluster
Hazelcast keeps retrying and apparently offers no option to time out and exit gracefully.
This is by design: http://code.google.com/p/hazelcast/issues/detail?id=784
CDC must handle this situation: http://redmine.webdetails.org/issues/526
Expectation: CDC should turn the lite member into a full Hazelcast node in the same JVM. Settings should be adjusted so that the full member does not consume more than ?% of the JVM memory.
Actual Behavior: CDC launches a new standalone full member. The memory size for the new node is either inferred from the map configuration (if the maps are configured with absolute memory sizes) or read from settings.xml. In the meantime the application keeps running, although more slowly, since gets and puts have been changed to use a timeout.
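The timeout behavior described above can be sketched with plain java.util.concurrent primitives. This is an illustrative pattern only, not CDC's actual code; the CacheBackend interface and the millisecond budget are hypothetical names:

```java
import java.util.concurrent.*;

// Sketch: wrap a potentially-blocking cache get in a timeout so the caller
// degrades to a cache miss instead of hanging while Hazelcast retries.
class TimeoutCache {
    interface CacheBackend { Object get(Object key); }

    private final CacheBackend backend;
    private final ExecutorService pool = Executors.newCachedThreadPool();

    TimeoutCache(CacheBackend backend) { this.backend = backend; }

    /** Returns the cached value, or null (a miss) if the backend is unresponsive. */
    Object get(Object key, long timeoutMillis) {
        Callable<Object> task = () -> backend.get(key);
        Future<Object> f = pool.submit(task);
        try {
            return f.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            f.cancel(true);          // give up on this lookup; treat as a miss
            return null;
        } catch (InterruptedException | ExecutionException e) {
            return null;
        }
    }

    void shutdown() { pool.shutdownNow(); }
}
```

On a healthy cluster the Future completes immediately; when Hazelcast is stuck retrying, the caller gets a null (a cache miss) after the timeout instead of blocking.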
- No memory limit set; memory then fills up
The full member dies with java.lang.OutOfMemoryError: Java heap space.
The situation then reverts to the first case.
Actual Behavior: When a node dies, if it was launched by our scripts, the script itself relaunches it.
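The relaunch loop can be sketched as a small supervisor. The real CDC scripts are not shown here, so the class name, command line, and retry delay are all illustrative:

```java
import java.io.IOException;

// Sketch of a supervisor that relaunches the cache node's JVM whenever it
// exits abnormally (e.g. after java.lang.OutOfMemoryError). A zero exit
// status is treated as a clean shutdown and stops the loop.
class NodeSupervisor {
    static void supervise(String... command) throws IOException, InterruptedException {
        while (true) {
            Process node = new ProcessBuilder(command).inheritIO().start();
            int exit = node.waitFor();
            if (exit == 0) break;              // clean shutdown: stop supervising
            System.err.println("node exited with status " + exit + "; relaunching");
            Thread.sleep(5000);                // back off before restarting
        }
    }

    public static void main(String[] args) throws Exception {
        // e.g. supervise("java", "-Xmx512m", "-jar", "cache-node.jar");
        supervise(args);
    }
}
```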
- Lite member, other members in cluster
Expectation: CDC should connect to the cluster and act as a lite member.
Actual Behavior: CDC launches as a lite member and connects to the cluster.
- Lite member, connected to cluster - cluster then goes offline
Expectation: CDC should turn the lite member into a full member. At the very least, it should not block, and should keep working without using the cache.
Actual Behavior: CDC launches a new standalone node (as in the first case). If for some reason this does not succeed, it does not block and keeps working.
- Lite member, no other members in cluster, and a cluster is started later
Expectation: CDC connects to the cluster. Ideally, it should shut down the full node it launched at startup.
Actual Behavior: CDC connects to the cluster. The full node CDC launched is shut down.
- Cache cluster is active, Mondrian starts up.
Expectation: Mondrian starts using the elements already in the cache.
Actual Behavior: Mondrian does not sync with the cache cluster, even though the API is fully implemented on our side (getSegmentHeaders is never called).
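For reference, the expectation rests on Mondrian's pluggable segment-cache SPI (mondrian.spi.SegmentCache), whose getSegmentHeaders() Mondrian would call to discover reusable segments. The sketch below is a simplified stand-in using String headers and byte[] bodies rather than Mondrian's real SegmentHeader/SegmentBody types:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for the segment-cache SPI; not the real Mondrian interface.
interface SegmentCacheSketch {
    List<String> getSegmentHeaders();   // what Mondrian would call at startup
    byte[] get(String header);
    boolean put(String header, byte[] body);
    boolean remove(String header);
}

// A map-backed implementation; in CDC this would delegate to the Hazelcast map.
class MapBackedSegmentCache implements SegmentCacheSketch {
    private final Map<String, byte[]> cache = new ConcurrentHashMap<>();
    public List<String> getSegmentHeaders() { return new ArrayList<>(cache.keySet()); }
    public byte[] get(String header) { return cache.get(header); }
    public boolean put(String header, byte[] body) { cache.put(header, body); return true; }
    public boolean remove(String header) { return cache.remove(header) != null; }
}
```

The scenario above expects Mondrian to call getSegmentHeaders() on startup and then get() the segments it finds; the observed behavior is that the call never happens.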
- Cache has been populated by Mondrian and is then cleared from somewhere else (another Mondrian instance or autonomous cache controls)
Expectation: Mondrian gets a cache miss, runs the query, and repopulates the cache.
Actual Behavior: Mondrian gets a cache miss and runs the query, but does not repopulate the cache - it seems to mark the cache as invalid.
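The expected miss-then-repopulate behavior in this last scenario is a standard read-through pattern; a minimal sketch with illustrative names (not Mondrian's API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Expected behavior: a miss triggers the query and the result is written
// back, so an external cache flush costs exactly one recomputation. The
// reported bug is that the write-back step stops happening after a flush.
class ReadThroughCache {
    private final Map<String, Object> cache = new HashMap<>();
    int queriesRun = 0;  // instrumentation for the example

    Object getOrCompute(String header, Function<String, Object> runQuery) {
        Object value = cache.get(header);
        if (value == null) {                 // miss, e.g. segment flushed externally
            value = runQuery.apply(header);  // re-run the query
            queriesRun++;
            cache.put(header, value);        // repopulate: the next get should hit
        }
        return value;
    }

    void externalFlush(String header) { cache.remove(header); }
}
```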