I’m pleased to release Interlock, a Rails plugin for maintainable and high-efficiency caching. Documentation is here.
what it does
Interlock uses memcached to make your view fragments and associated controller blocks march along together. If a fragment is fresh, the controller behavior won't run. This eliminates duplicate effort from your request cycle. Your controller blocks run so infrequently that you can use regular ActiveRecord finders and not worry about object caching at all.
Interlock automatically tracks invalidation dependencies based on the model lifecycle, and supports arbitrary levels of scoping per block.
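The core mechanic can be sketched in plain Ruby, using a hash in place of memcached. This is a toy illustration of the idea, not Interlock's actual API; all names here are invented:

```ruby
# Toy sketch of fragment-gated controller work: if the rendered
# fragment is still cached, skip the expensive controller block entirely.
CACHE = {}

def behavior_then_render(key, &expensive_block)
  # Fragment is fresh: return it without running the controller block.
  return CACHE[key] if CACHE.key?(key)
  # Fragment is stale: run the block, re-render, and store the fragment.
  data = expensive_block.call
  CACHE[key] = "<div>#{data}</div>"
end

calls = 0
loader = lambda { calls += 1; "post body" }

first  = behavior_then_render("posts/1", &loader)  # runs the block
second = behavior_then_render("posts/1", &loader)  # cache hit; block skipped
```

After both calls, the loader has only run once; the second request served the fragment straight from the cache.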
production-ready
Interlock has full C0 test coverage and has been used in production on CHOW for five or six weeks already. We’ve seen 3-4x speedups on controllers that were already accelerated with Sphinx and cache_fu.
If you do have any problems (gosh), please report them on the forum instead of the blog comments.
Anyway, go read the docs; everything is there. In closing, here is a lolcat I made with my own kitten. He’s new:
Your fragment caching is 3-4x faster than cache_fu’s fragment caching?
Yeah, because pieces of the controller don't run at all if the corresponding fragment is fresh.
If you are only caching ERb rendering and have an empty controller then there’s no change, obviously. The speedup is extremely app-dependent.
One of the most interesting things is that we can now replace get_cache calls in the controller with find and performance doesn't really change.

Okay, so cache_fu code like this pastie, ported to Interlock, will give a 3-4x speedup?
No, since you moved your controller logic into the view. Your controller action is empty.
That’s a great idiom, although it partly removes the C from MVC. One way to look at Interlock is that it encourages you to write code like that, but with regular action blocks. It also provides straightforward invalidation and scope management.
How would you handle a view where the whole page refers to one object and related objects, but there’s one personalized block, say, somewhere in the middle. :)
The other case where I think I like cache_fu's finer-grained approach better is dealing with lots of dependent sub-objects. So, for example, I have the prototypical blog app, but it also has comments and ratings (a la Digg or Slashdot) for those comments.
How would I expire the Post view when a rating on a comment was changed?
You can just tag the whole block to the personalized object:
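The original code sample didn't survive here, so this is a hedged sketch of what that might look like; the :tag option and current_user helper are my assumptions, so check the Interlock docs for the exact option names:

```erb
<%# Hypothetical: scope the whole fragment to the current user, so each
    visitor gets an independently cached copy of the page. %>
<% view_cache :tag => current_user.id do %>
  ... whole page, including the personalized block ...
<% end %>
```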
If that is too slow (try it first!), you can put a broadly scoped block on each side of the personalized section, with a finely scoped block around that. ERb is just strings, so you don't have to worry about your tags closing inside or outside of view_cache blocks.

For the expiry issue, you can just make the block that references Comment depend on Comment:
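A sketch of that dependency declaration in the controller, hedged because the exact behavior_cache signature here is my recollection of the plugin's API rather than a quote from its docs:

```ruby
# Hypothetical sketch: declaring Comment as an invalidation dependency
# means any Comment save or destroy expires this cached block.
def show
  behavior_cache Comment do
    @post = Post.find(params[:id], :include => :comments)
  end
end
```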
This will expire the block when any comment is changed. That seems excessive, but keep in mind that for most read-heavy web apps that is still going to eliminate 90% of your loads.
If you really, really want to know how to write a custom expiration callback, see here.
Thanks Evan. I can see why you don’t want to encourage it, but I’m wondering if there’s a simple extension to deal with the general case here. Eventually, expiring all Posts when a Comment is rated will get prohibitive. That assumes my site gets as popular as I’m hoping it will, I guess, so first things first.
I came to Rails after working for a while at a big web site that had a simple, AR-like persistence layer that had fine grained expirations, so I know the pitfalls and benefits pretty well (or so I think ;-) )
I’ll dig into the plugin at some point and see if what I’m thinking makes any sense. For example, I’m just wondering if there’s a way to walk back up the associations and mark simple 1-many and 1-1 associations to expire each other.
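Outside the plugin, the association-walking idea could be approximated with an explicit ActiveRecord callback that expires the parent's fragment when a nested record changes. Everything below is illustrative: the cache key format, the CACHE constant, and the model names are all invented for this sketch:

```ruby
class Rating < ActiveRecord::Base
  belongs_to :comment

  # Hypothetical: walk one level up the association chain and expire
  # the cached fragment for the owning post whenever a rating changes.
  after_save :expire_post_fragment

  private

  def expire_post_fragment
    post_id = comment.post_id
    CACHE.delete("views/posts/show/#{post_id}")  # invented key format
  end
end
```

This trades Interlock's automatic class-level invalidation for hand-maintained per-record callbacks, which is exactly the bookkeeping the plugin is trying to avoid.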
Thanks for the quick response.
Oh, one other thought. In the example I left above, I’m assuming you wouldn’t use something like cache_fu behind that layer? You said on Chow you sped up pages that already used cache_fu, so it seems like you still have cache_fu in the code?
It’s worth considering. It’s also the kind of thing where if you provide an API, and say it’s for performance, everyone will jump right to use it even when it’s unnecessary :P .
There would definitely be a way to walk the tree, but that assumes every load you do is :include-ing associated records. This is not always the case, and it would invalidate a lot of fresh caches needlessly.

You are right that the cache_fu there is kind of legacy. Since you have already eliminated so many reads, and you are hopefully only reading when some invalidation has occurred anyway, you can just let the DB do its work.
The big win in that case is not speed (it's slower), but rather that you don't have to track per-record invalidations (or worse, custom finder caches).
Hah! That’s their problem then. :-)
All I was thinking was something between params[:id] auto-invalidation and invalidating when any of "these other classes" change… that seems wasteful on any site with real traffic.
Yeah, I understand the concern. So far we haven’t needed it, and we get plenty of real traffic. Low-hanging fruit and all that.
Thanks for posting this, Evan. This looks awesome. =)
Evan, yet another contribution that I’ll end up using a lot. Thanks.