Greg Luck benchmarked ehcache vs memcached:
The results are that puts and gets are 500 to 1000 times faster in ehcache compared with memcached. The grey bar, barely visible, is ehcache with the cache all in memory. The blue bar is with 9,900 of the cache items in the disk store. Even the disk performance of ehcache is way faster than memcached. Of course, memcached is entirely in memory.
Interesting, to say the least, but I think it's a false comparison. Apparently, ehcache stores the cache locally and then syncs it with the cluster in the background.
Did Greg use getMulti? If he didn't, memcached performance will CLEARLY suffer from gigabit ethernet latency, since every individual get pays a full network round trip. A 1000x performance difference is certainly possible.
Greg should re-run the benchmark with getMulti.
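To see why getMulti matters so much, here's a back-of-the-envelope model (not a real benchmark) of the round-trip arithmetic. The latency numbers are assumptions I picked for illustration: roughly 0.2 ms for a gigabit ethernet round trip and a couple of microseconds for an in-process get.

```java
// Round-trip cost model: individual gets vs. one getMulti vs. local cache.
// The latencies below are assumed values for illustration, not measurements.
public class RoundTripModel {
    public static void main(String[] args) {
        double rttMs = 0.2;      // assumed network round trip (gigabit ethernet)
        double localMs = 0.002;  // assumed in-process get
        int keys = 100;

        double naive = keys * rttMs;  // one round trip per get()
        double multi = rttMs;         // getMulti: one round trip for all keys
        double local = keys * localMs;

        System.out.printf("100 individual gets: %.1f ms%n", naive);
        System.out.printf("one getMulti:        %.1f ms%n", multi);
        System.out.printf("100 local gets:      %.1f ms%n", local);
        System.out.printf("local vs naive remote: %.0fx%n", naive / local);
    }
}
```

Under these assumptions, a naive loop of individual gets is two orders of magnitude slower than either getMulti or a local cache, which is why a benchmark without getMulti mostly measures network latency, not memcached.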
Also, a flat-out put() comparison won't be fair either, since there is no putMulti in memcached right now. I talked to Brad Fitzpatrick about this; it should be possible without a protocol update, but I have yet to implement it in the Java client.
The reason this isn’t an issue in practice is that most memcached installs are 99% read and 1% write. That said, putMulti would be really attractive for benchmarking or installs that are more like 50/50 read/write.
Local in-process caching will always be faster than remote caching, since objects don't need to be serialized/deserialized or shipped over the network on puts and gets.
One could use a local in-process cache to further buffer memcached but with Java there’s one critical problem – memory size estimation.
With a local in-memory LRU cache, the JVM has no way to tell the cache to shrink when the heap fills up, so you could potentially run into OutOfMemoryErrors at runtime.
Using weak (or soft) references is a potential solution, but then cache pruning becomes non-deterministic from the cache's perspective: the garbage collector decides what to clear, and there is no way to tell it to remove lower-priority items first.
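For contrast, here's the classic deterministic approach: a count-bounded LRU built on LinkedHashMap. This is a minimal sketch, not ehcache's implementation. It evicts the least recently used entry first, exactly the priority ordering weak references can't give you, but the bound is a number of entries, not bytes, which is the memory size estimation problem in a nutshell.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal count-bounded LRU cache (a sketch, not any library's code).
// Eviction is deterministic -- least recently used goes first -- but the
// limit is an entry count, not a byte count, so heap usage is still only
// an estimate.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder=true gives LRU iteration order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // touch "a" so "b" becomes least recently used
        cache.put("c", "3"); // evicts "b"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```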
Taking all this into account, using getMulti with memcached and without a local LRU cache is fine for most production applications.