Tuesday, October 16, 2012

Massive Geekage - RICON 2012

Taking a couple of days off to attend the distributed-systems geekfest that is RICON 2012, at the W Hotel in San Francisco. It isn't particularly well focused on my core skill set, but I really enjoy being challenged to think about different architectures, and I'm interested in the scalability problems faced by some of the other folks at the conference. Plus, I've been thinking about a design for a time-series database tool that would use Riak, and I'm hoping to get some useful design criticism from people with experience.

I don't intend to live-blog the festivities, but after a few sessions and a couple of really helpful conversations with other attendees, I can see a couple of clear trends. The concept of "eventual consistency" dominates the discussions (as you might expect), but more importantly it really drives design. Hellerstein's keynote emphasized work he and his team are doing to formalize this: defining both an analysis methodology and languages and tools to identify the areas in your code where you genuinely require transactions and consistency. As an Oracle DBA, my immediate reaction was to scoff ("the first time your code really needs consistency is when you want to read() data" - well, duh). But as I reflect more and listen to the other discussions, I'm warming to the ideas. Frankly, the change of thinking reminds me of another phase change I encountered when we first started using AWS, when the exhortation to "challenge your storage" kept coming back to haunt my attempts at design.

Another idea I've heard several times today relates to what I like to think of as the master-data problem. The core idea is to separate your read-heavy data from your write-heavy data, particularly for things like logging and notification data, but also for the transaction base. This probably won't hold for really important data like financial transactions, but if your problem set allows for any latency between writing data and reading it back (which would obviously be bad in an accounting application), there's likely an opportunity to apply some of Hellerstein's concepts to refactoring your world.
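To make the split concrete, here's a toy sketch in Python. The plain dicts stand in for a k/v cache and a primary transactional store, and the class and its refresh interval are invented for illustration, not drawn from any real system:

```python
import time

class SplitStore:
    """Route reads to a cheap cache, writes to the authoritative store."""

    def __init__(self, refresh_interval=86400):
        self.primary = {}        # write-heavy, authoritative store
        self.read_cache = {}     # read-heavy, possibly stale copy
        self.refresh_interval = refresh_interval
        self.last_refresh = 0.0

    def write(self, key, value):
        # All writes land in the primary store only.
        self.primary[key] = value

    def read(self, key):
        # Reads come from the cache; staleness up to refresh_interval
        # is acceptable by assumption.
        self._maybe_refresh()
        return self.read_cache.get(key)

    def _maybe_refresh(self):
        # Periodically snapshot the primary store into the read cache.
        now = time.time()
        if now - self.last_refresh >= self.refresh_interval:
            self.read_cache = dict(self.primary)
            self.last_refresh = now
```

The whole trick is that read() never touches the write path; how stale the cache may get becomes an explicit design knob instead of an accident.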

In the stuff I support at work, the most obvious dimension for a refactor is to move the master data (client organizations, those organizations' users, the entitlements for those users, etc.) out of the custom application and into a back-office system like (name your favorite) CRM. In the external web applications, that master data need never be updated by the users themselves (i.e. nobody ever changes the name of the organization they work for). So despite that data being incredibly important to everything that goes on in the application, it is never actually involved in a transaction. To put it another way, that data could be productively queried once per day, cached in a k/v store of some sort, and not updated the rest of the day.
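Here's roughly what that daily job might look like, again as a Python sketch. fetch_orgs_from_crm() and the kv mapping are placeholders for whatever CRM export and k/v store (Riak, Redis, whatever) you actually have:

```python
import json

def fetch_orgs_from_crm():
    # Placeholder for a real CRM export: orgs, their users, entitlements.
    return [
        {"org_id": "org-1", "name": "Acme Corp",
         "users": [{"user_id": "u-17", "entitlements": ["read", "submit"]}]},
    ]

def refresh_master_data(kv):
    # Cache each record under a predictable key. The web tier reads
    # these keys all day; nothing but this job ever writes them, so
    # no transactions are needed.
    for org in fetch_orgs_from_crm():
        kv["org:" + org["org_id"]] = json.dumps(org)
        for user in org["users"]:
            kv["user:" + user["user_id"]] = json.dumps(
                {"org_id": org["org_id"],
                 "entitlements": user["entitlements"]})

# A dict standing in for the k/v store:
kv = {}
refresh_master_data(kv)
print(json.loads(kv["user:u-17"]))
```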

More to the point, even our transaction data is amenable to an eventually-consistent design. Our clients use a well-defined process workflow, so at any given point in the process we have a solid idea of who might be changing data and what data they would be touching. As I'm coming to see it, this fits eventual consistency with some very minor changes to the way we think about our data: not to the data model itself, simply to the way we make it available for people to use. It's a pretty small step from marshalling-for-Angular to a well-defined conflict-resolution model.
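As a toy example of what that resolution model might look like: if the workflow gives you a total order of stages, then when an eventually-consistent store hands back sibling versions of a record, you can keep the one furthest along and break ties on a timestamp. The stage names and record shape here are invented, not our actual schema:

```python
# Hypothetical workflow stages, earliest to latest.
WORKFLOW_ORDER = ["draft", "submitted", "reviewed", "approved"]

def resolve_siblings(siblings):
    """Pick a winner among conflicting versions of one record."""
    def rank(record):
        # Prefer the later workflow stage; tie-break on timestamp.
        return (WORKFLOW_ORDER.index(record["stage"]),
                record["updated_at"])
    return max(siblings, key=rank)

siblings = [
    {"stage": "submitted", "updated_at": 1350345600, "payload": "v2"},
    {"stage": "draft",     "updated_at": 1350349200, "payload": "v1"},
]
assert resolve_siblings(siblings)["payload"] == "v2"
```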

Now in our case, Riak is overkill. But I really love the party game where you challenge one of the assumptions of your product and think about how redesigning around a different principle (or set of principles) would change your code base, support profile, team skills, scalability, or profitability (dream on, overhead boy). As Hellerstein's ideas have infiltrated deeper into my thinking, I've found some really useful ways to simplify how our applications work with our data, and maybe nudged our future path a bit toward something that would allow eventual consistency rather than being built around traditional transactions.

One last thought. In a previous post, I hinted at an evolving set of ideas about NoSQL tools and where they fit in the ecosystem. Prior to this conference, those were simply half-formed "obvious" ideas. But the final presentation of the conference was Dr. Eric Brewer talking about the road ahead for NoSQL as a toolset (apologies for not pointing to his presentation; I'll update this when I locate it). He laid out virtually everything that had been rattling around in my head, with clear motivations and directions, as well as the occasional hint about how little code a given change might require. Given that Brewer is "the guy", and that he presented my case better than I could have, I'm unlikely to take those notions any further. It's clear that the people at the head of this movement see everything I'm seeing, so I intend to spend my scarce spare time advancing some smaller features within the broader set.

As I finish up this post, the conference is over and long gone. I've caught up on sleep, followed up with people I met, and even written some design docs for my pet time-series project (coming soon!). I can't fully express how much I enjoyed it; the Basho folks went way above and beyond in producing this one. I found myself constantly in the presence of people smarter and more experienced than me, with all kinds of ideas about systems and architectures. I used vacation time and my own money to attend, and received value back in spades. There's really no way to rave too much about it. If they put the conference on again next year, I'll be there.
