be the fastest you can be, memcached

New memcached client based on SWIG/libmemcached. 15 to 150 times faster than memcache-client, depending on the architecture. Full coverage, benchmarks.

tell me

Some nice results from OS X x86:

                                     user     system      total
set:ruby:noblock:memcached       0.100000   0.010000   0.110000
set:ruby:memcached               0.150000   0.140000   0.290000
set:ruby:memcache-client        18.070000   0.310000  18.380000
get:ruby:memcached               0.180000   0.140000   0.320000
get:ruby:memcache-client        18.210000   0.320000  18.530000
missing:ruby:memcached           0.290000   0.170000   0.460000
missing:ruby:memcache-client    18.110000   0.330000  18.440000
mixed:ruby:noblock:memcached     0.380000   0.340000   0.720000
mixed:ruby:memcached             0.370000   0.280000   0.650000
mixed:ruby:memcache-client      36.760000   0.700000  37.460000

Ubuntu/Xen AMD64 was similar to the above, while RHEL AMD64 was more like 20x. It’s weird how much better Ruby performance was on RHEL.

I’ll try to push a little more Ruby into C, because we’re already down to counting single dispatches. For any deep object, most of the time is spent in Marshal.
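
For example, a quick way to see where the time goes (an illustrative sketch; the object shape and iteration counts are arbitrary):

    require 'benchmark'

    # Arbitrary shallow vs. deep values; dump time grows quickly with object depth.
    shallow = 'a short string'
    deep    = { :users => (1..100).map { |i| { :id => i, :tags => %w(a b c) } } }

    Benchmark.bm(8) do |x|
      x.report('shallow') { 10_000.times { Marshal.dump(shallow) } }
      x.report('deep')    { 10_000.times { Marshal.dump(deep) } }
    end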

features

Built-in non-blocking IO, consistent key modulus, cross-language hash functions, append/prepend/replace operators, thread safety, and other fancy stuff. CAS (compare and swap) coming as soon as libmemcached finishes it.

The API is not compatible with Ruby-MemCache/memcache-client, but it’s pretty close. Don’t drop it into Rails just yet.
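
A minimal usage sketch, for reference (the :no_block option name here is an assumption taken from the feature list; check the gem's docs for the exact flags):

    require 'memcached'

    # The :no_block option name is an assumption; the rest follows the gem's
    # basic get/set/delete API.
    cache = Memcached.new('127.0.0.1:11211', :no_block => true)

    cache.set('color', 'blue')   # values are marshaled before being stored
    cache.get('color')           # => "blue"
    cache.delete('color')

    begin
      cache.get('color')         # a miss raises instead of returning nil
    rescue Memcached::NotFound
      nil
    end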

28 responses

  1. There is a world of difference between libmemcache and libmemcached. The latter is much more stable, as touted by the memcache community. Here is a rundown.

    Thanks Evan. We have lots of non-Rails applications where I can find a use for your library.

  2. A world of difference is a bit much, I’d say. A new, fancier implementation doesn’t make a world of difference to me. And I am constantly annoyed at the willingness of the open source community to reinvent stuff instead of working on existing solutions. Maybe libmemcached is better than libmemcache? Then that effort could have been directed at porting caffeine to libmemcached instead, don’t you think?

  3. I know about caffeine. You must be Martin Kihlgren, one of the authors. Caffeine does not ship with any license, which means no one can legally use or modify it.

    There is no C-level API compatibility between libmemcache and libmemcached, so a port would end up being a complete rewrite anyway.

    I’ve added caffeine to the benchmark suite:

                                         user     system      total
    set:ruby:noblock:memcached       0.100000   0.010000   0.110000
    set:ruby:memcached               0.160000   0.130000   0.290000
    set:ruby:caffeine                2.210000   0.360000   2.570000
    set:ruby:memcache-client        18.080000   0.330000  18.410000
    get:ruby:memcached               0.220000   0.140000   0.360000
    get:ruby:caffeine                2.170000   0.370000   2.540000
    get:ruby:memcache-client        18.200000   0.310000  18.510000
    missing:ruby:memcached           0.320000   0.170000   0.490000
    missing:ruby:caffeine            2.050000   0.350000   2.400000
    missing:ruby:memcache-client    18.130000   0.310000  18.440000
    mixed:ruby:noblock:memcached     0.340000   0.350000   0.690000
    mixed:ruby:memcached             0.380000   0.280000   0.660000
    mixed:ruby:caffeine              3.900000   0.740000   4.640000
    mixed:ruby:memcache-client      36.310000   0.590000  36.900000
    

    Note to others: if you’re trying to install caffeine on OS X, you need to run $ port install libmemcache ossp-uuid first.

  4. Evan, you have a stash of Dr. Pepper inside your computer case, right? It took you less than 48 hours to get to this point! Congrats! (I remember your single line of “working on a SWIG interface to libmemcached…”)

    Did you take a drug test lately? ;-)

    Talk later man, again, congrats; I’ll drop some (Windows) testing later.

  5. Oh man, Windows. Good luck with that. I don’t know if Brian is even supporting Windows yet in libmemcached.

    Thanks for the compliments. No soda, I just drink coffee.

  6. You do know that coffee contains caffeine? :D

    Oh, what a mishap not to include the license. Maybe it didn’t matter, given the need for a complete rewrite anyway. I guess it should be GPL or something if he bothers adding it now.

    Kudos for getting this done in any event!

  7. Yeah. But the ratio is so different. C vs. Ruby time on plain Opterons was 1:20, while on Xen Opterons it was 1:100.

    My guess is that the hypervisor slows down the stack or the cache in some non-linear way (does that even make sense?). Or maybe it’s a socket issue.

    Does anyone have benches for Ruby on a bare server, and Ruby on the same server as a single Xen guest? Maybe I’ll try it.

  8. Evan:

    Actually I am not the author, even though we worked in the same office for some time.

    Nice that you did the benchmarks; that sort of validates the claim that libmemcached is better than libmemcache (though AFAIK libmemcached didn’t exist when caffeine was created, unfortunately).

    As Albert said, shoddy of us not to include the license. I guess it should be the same license as libmemcache. But I can happily say that that specific piece of code was never my problem :) (But now I added an MIT license anyway!)

    Also, the specific piece of functionality in caffeine that you should take an extra look at is probably the branch-invalidation that is mentioned in that blog (further explained at http://lists.danga.com/pipermail/memcached/2006-July/002551.html) – it allows one single call to the memcache cluster to invalidate any number of keys. Very nice and useful.

    Chris:

    Yes, the naming is unfortunate (I again blame others :) but the fact of the matter is that the two gems do completely different things. Our AR caching was done by CachedSupermodel, which is based upon cached_model but does (at least when we created it) a hell of a lot more.

  9. Yeah, libmemcached is new; development began about 4 months ago.

    I saw the namespace feature and wondered if it required double lookups. It does. That’s a great approach for some situations and a bad one for others.
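
    Roughly, the namespace trick works like this (a sketch of the general pattern, not caffeine’s actual code):

    # Every read costs two lookups: one for the namespace's generation and one
    # for the real key. Bumping the generation invalidates the whole branch.
    def generation_for(cache, namespace)
      cache.get("gen:#{namespace}")
    rescue Memcached::NotFound
      cache.set("gen:#{namespace}", 1)
      1
    end

    def namespaced_get(cache, namespace, key)
      cache.get("#{namespace}:#{generation_for(cache, namespace)}:#{key}")
    end

    def invalidate(cache, namespace)
      # Old keys are never deleted; they just become unreachable and expire.
      cache.set("gen:#{namespace}", generation_for(cache, namespace) + 1)
    end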

    For example, instead of a namespace, Interlock stores an invalidation tree. This means a write is really a read/write, and an invalidation is really a large series of deletes—but plain reads still use single lookups. But asynchronous delete in the new client will eliminate the invalidation penalty.

    Anyway, I think it’s the kind of thing that belongs in a higher-level library. I want to keep the client itself as close to libmemcached as I can.

  10. I have compiled memcached-1.2.4.tar.gz and libmemcached-0.14 on Debian 4.0.

    When I try to install the memcached gem, I get this error:

    ...
    make: *** [libmemcached_wrap.o] Error 1
    
  11. This quick hack was enough to get me up and running in Rails, at least for testing (put this in environment.rb):

    require 'memcached'
    class CacheWrapper < Memcached
      def get(k)
        begin
          super
        rescue Exception   # note: swallows every error, not just cache misses
          nil
        end
      end
      alias_method :[], :get
    end
    CACHE = CacheWrapper.new('127.0.0.1:11211')
    
  12. Yeah, that is basically the plan. You probably want to just rescue Memcached::NotFound rather than every possible exception.
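
    Something like this, maybe (just a sketch):

    require 'memcached'

    class CacheWrapper < Memcached
      def get(key)
        super
      rescue Memcached::NotFound   # only misses return nil; real errors still raise
        nil
      end
      alias_method :[], :get
    end

    CACHE = CacheWrapper.new('127.0.0.1:11211')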

  13. To install on OS X I had to do it slightly differently from the method in the docs.

    $ sudo port install libmemcached memcached

    Which will install:

    libmemcached @0.15_0
    memcached @1.2.4_1

    Then to install the gem:

    sudo gem install memcached --no-rdoc --no-ri -- --with-opt-dir=/opt/local

    Wow…it looks really fast from my benchmarks.

  14. Throwing exceptions for normal behaviour is horrendously slow in Ruby, especially at the deeper levels of code where memcache is often called in Rails apps. Ruby will attempt to build a full backtrace every time an exception is thrown.

    Unless there’s some other way to signal a miss, I’d suggest moving back to returning nil for “not found”; those backtraces will eat up most of the performance gains from switching to libmemcached.
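
    A toy benchmark shows the effect (illustrative only; exact numbers will vary):

    require 'benchmark'

    # Raise and rescue at different stack depths; the cost grows with the
    # number of frames Ruby has to record in the backtrace.
    def deep(levels, &block)
      levels.zero? ? yield : deep(levels - 1, &block)
    end

    Benchmark.bm(10) do |x|
      [0, 200].each do |depth|
        x.report("depth #{depth}") do
          10_000.times do
            begin
              deep(depth) { raise 'miss' }
            rescue RuntimeError
            end
          end
        end
      end
    end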

  15. You’re right. I added a --recursion option to run the benchmark in deep stacks, and adding 200 methods to the stack makes it 8x slower.

    I really want an out-of-band channel for cache misses, so I added a default option to Memcached.new that disables backtraces in NotFound. This doubles the speed of the miss benchmark and puts it more in line with a hit.

    Getting a partial backtrace in there would require some C hax.

    Previously I had been blaming the sluggish miss speed on the overhead of building a begin/rescue block, but that was totally bogus.

  16. Thanks for testing that. Do all the tests pass?

    There’s a bugfix I need that’s currently only in the Mercurial tip, so I’m waiting for 0.19 before I release a new version.

  17. Hey :) I just looked in on this entry again after a long time.

    Evan: Yes, double lookups can of course be a problem. But if you have potentially hundreds of entries in your memcache cluster that need to be invalidated very often, then a double lookup is not a big cost, since each lookup is so blazingly fast (especially in libmemcached, as you have so pertinently benchmarked :) compared to doing hundreds (in our case sometimes thousands) of invalidations in one request.

    So yes, perhaps the feature belongs in a higher-level library, but since there aren’t many features one would want to add that way, and since this one isn’t trivial to implement, perhaps it would make sense to have it in the Ruby memcached gem anyway? Otherwise I fear it would never be implemented, and never used, and to me that would be sad :/

  18. I guess I think it’s something that should belong in a caching library like Interlock or Cache_fu, not in the memcached client itself.

    In Interlock, invalidations happen only on writes. I always prefer front-loading the writes instead of amortizing the invalidation cost over the reads except in unusual circumstances.

  19. I’m using this to make it feel like what I’m used to. Anyone see any gotchas?

    require 'memcached'

    # Convert symbol keys to strings and add hash-style access.
    class Memcached
    
      alias :orig_get :get
      alias :orig_set :set
    
      def get(k)
        k = k.to_s if k.is_a? Symbol
        orig_get(k)
      end
    
      def set(k,v)
        k = k.to_s if k.is_a? Symbol
        orig_set(k,v)
      end
    
      protected :orig_get, :orig_set
      alias :[] :get
      alias :[]= :set
    end
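
    With those aliases in place, usage looks like this (assuming a local memcached on the default port):

    cache = Memcached.new('127.0.0.1:11211')
    cache[:color] = 'blue'   # the symbol key is converted to the string "color"
    cache[:color]            # => "blue"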