qcon presentation

My QCon presentation is available.

Improving Running Components at Twitter

Some choice Tweets:

  • philwills: Evan Weaver on scaling twitter at #qcon was full of
    interesting stuff and good questions from audience.
  • markhneedham: fascinating reading these stats about #twitter
    from Evan Weaver’s talk #qcon
  • jurgenonland: sitting at a presentation from Evan Weaver @
    #qcon, wow he must be verry unhappy at his work
  • szegedi: Listening to Evan Weaver talking about Twitter system
    architecture & tuning. Getting to learn from these experiences is priceless.
  • oudenampsen: Was just by Evan Weaver of twitter. Gave the impression that any time he could commit suicide. However interesting.

My presentation abilities have gone from “bad” to “tolerable”, so I’m relatively satisfied with the situation. Clearly I need to be more engaging.

11 responses

1. So why are so few (if any) startups using LDAP? I would think LDAP would be a perfect short- and long-term solution for Twitter. They could put all of their users in LDAP, and then put all of the people a user is following into a group (each user gets a group). Now, when you want to know who someone is following, that is one query (no joins like in a DB!), and if you want to know who is following them, again, it is one query.

    LDAP (like Sun’s LDAP which is free) scales beautifully (it was designed for this, just ask the telcos), and you can easily put all of this info in memory. Sun’s 6.3 version handles large groups like this very well. (I do not work for Sun.) And if you want open source, they could use OpenDS or Fedora DS (but I don’t know how well Fedora would do with huge groups).

    Looking at those slides makes my head spin in terms of what they have tried to engineer with memcache, etc. It just seems like they are trying to fit square pegs into round holes.
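    The one-query lookups the comment describes can be sketched without a live directory by just constructing the LDAP searches. The directory layout here (a `uid` entry per user under `ou=people`, and a hypothetical `cn=following-<user>` static group per user under `ou=groups`) is an assumption for illustration, not anything Twitter actually ran:

    ```python
    # Hypothetical DIT: one entry per user under ou=people, and one static
    # group per user (cn=following-<user>) listing everyone that user follows.

    BASE_PEOPLE = "ou=people,dc=example,dc=com"
    BASE_GROUPS = "ou=groups,dc=example,dc=com"

    def user_dn(username):
        return f"uid={username},{BASE_PEOPLE}"

    def following_query(username):
        """Who does <username> follow? One base-scope read of their group,
        returning its 'member' attribute."""
        group_dn = f"cn=following-{username},{BASE_GROUPS}"
        return (group_dn, "base", "(objectClass=groupOfNames)", ["member"])

    def followers_query(username):
        """Who follows <username>? One subtree search over the groups,
        a single indexable equality match on 'member' -- no join."""
        return (BASE_GROUPS, "sub", f"(member={user_dn(username)})", ["cn"])
    ```

    Either tuple maps directly onto a single `search` call in any LDAP client library; the follower direction relies on the server indexing the `member` attribute.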

  2. Why don’t you just use X, X works on my blog. Amateurs.

    Good slides man, and don’t let the london weather (or locals) get you down :)

  3. Great presentation!

    On slide 40 of your presentation, you state that, “network memory, at web scale, is lower latency than local computation.” Could you elaborate on that? What do you mean by web scale and what qualities of “web scale” changes the traditional latency ordering?

  4. @Chris Farnham: I was not at the presentation and did not hear the audio, but from looking through the slides, my assumption is that it is faster to get the cached results from network memory than to recompute them locally. Once you have computed something locally, you want to cache it on the network, so that other nodes can pick it up rather than recompute the same result.
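    The pattern this comment describes is cache-aside: compute once, publish the result to a shared cache, and let every other node read it back instead of redoing the work. A minimal sketch, with a plain dict standing in for a networked cache such as memcached (the class and key names are illustrative, not Twitter's):

    ```python
    # Cache-aside sketch. NetworkCacheStub stands in for a shared cache
    # like memcached; get/set mirror its basic interface.

    class NetworkCacheStub:
        def __init__(self):
            self.store = {}
        def get(self, key):
            return self.store.get(key)
        def set(self, key, value):
            self.store[key] = value

    cache = NetworkCacheStub()
    computations = 0  # counts how often the expensive local work runs

    def timeline(user):
        """Return a user's assembled timeline, caching it on first use."""
        global computations
        key = f"timeline:{user}"
        cached = cache.get(key)
        if cached is not None:
            return cached          # hit: served from network memory
        computations += 1          # miss: expensive local computation
        result = f"timeline for {user}"
        cache.set(key, result)     # publish so other nodes can reuse it
        return result
    ```

    Calling `timeline("ev")` twice performs the computation only once; the second call is a cache hit. The slide's claim is that at Twitter's fan-out, that remote hit costs less than redoing the aggregation locally.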
