Microservices vs David Heinemeier Hansson
Hearing that David Heinemeier Hansson had slammed microservices in his RailsConf 2015 keynote was a bit disorienting, as I was developing a microservice at the time. I leaned over to the other senior developer and said: “What’s his deal?” Toby Tripp, who had just watched the keynote, explained that when David’s team implemented microservices, they had terrible trouble managing all the versions across three systems. “Isn’t that why we use a lookup call to get the URI-map?” I asked.
“Yep,” he said.
“That, and never remove a returned key/value pair; only add new ones until you can prove they’re no longer used,” he added.
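Tripp’s additive rule can be sketched in Ruby. The resource names and URI templates here are made up for illustration; the point is that a newer release of the URI-map only ever gains keys, plus a release-time check that enforces it:

```ruby
# Hypothetical URI-maps from two releases of the same service.
# Keys are only ever added, never removed, so old clients keep working.
MAP_RELEASE_1 = {
  "clients"  => "/api/clients/{id}",
  "holdings" => "/api/holdings/{id}"
}

MAP_RELEASE_2 = MAP_RELEASE_1.merge(
  "reports" => "/api/reports/{id}"   # new entry; nothing removed or renamed
)

# Release-time sanity check: every key/URI pair the old map served
# must still be present, unchanged, in the new one.
def backward_compatible?(old_map, new_map)
  old_map.all? { |key, uri| new_map[key] == uri }
end
```

Wiring a check like this into the deploy script makes the rule enforceable rather than a thing you remember to do.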
I returned to adding in some caching to our service calls.
Later, I watched the video, which is here:
David has a meandering style but he mostly discusses microservices between 43:00 and 52:00. The “what’s in your backpack” stuff is all about his earlier zombie apocalypse metaphor that you can watch if you like. He claims his team implemented microservices right, loved them on day 5 but hated ‘em in year 2 (paraphrase). I don’t understand how I will regret my microservice architecture decision if we implement HATEOAS (Hypermedia as the Engine of Application State). You may remember HATEOAS as the thing from REST that all the eggheads complained was missing from Rails’ “RESTful-routes.” “Sure, the routes were REST but what about the discoverable URI-map?” they may have said. They were, of course, right (jerks).
I’m not saying HATEOAS is perfect. I was in the middle of adding LOTS of caching to our client so it almost never has to call our service for one of the two calls you must, at a minimum, make with a HATEOAS service. Why two calls?
- Get the URI-map (a map of how to call the service for all of its resources)
- Make the actual call (using the above map)
- What? There’s a third? Yup: that second call could return another URI-map
- And it could keep going
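The two-call flow above can be sketched in Ruby. The host, entry path, resource names, and the `{id}` template convention are all assumptions; the shape of the flow is what matters — the client hard-codes nothing but the entry point:

```ruby
require "json"
require "net/http"
require "uri"

SERVICE_ROOT = "http://localhost:9292" # hypothetical host

# Call 1: fetch the URI-map from the service's single well-known entry point.
def fetch_uri_map
  JSON.parse(Net::HTTP.get(URI("#{SERVICE_ROOT}/")))
end

# Expand a URI template from the map into a concrete path.
def resolve(uri_map, resource, id)
  uri_map.fetch(resource).sub("{id}", id.to_s)
end

# Call 2: the actual request, built only from what the map told us.
def fetch_resource(uri_map, resource, id)
  JSON.parse(Net::HTTP.get(URI("#{SERVICE_ROOT}#{resolve(uri_map, resource, id)}")))
end
```

A recursive client would check whether call 2 returned data or another URI-map and repeat as needed.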
I know it seems insane to make two or more HTTP calls for every single request, but the benefits outweigh the costs. Old calls will always work. Fewer bugs. More confident software. Also, you can cache the hell out of all the URI-map requests on the client side. When you need to expire that cache, you can do it upon release of new functionality because, this being a microservice, you own both sides of the interface. Just work it into your deploy script and you’re done. Extra bonus: no URI ever has a stupid v7 in it.
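That deploy-keyed cache might look like this. The `release_id` is a hypothetical value stamped by the deploy script; because a new release means a new key, deploying is what expires the cache:

```ruby
# Client-side URI-map cache that is expired by deploying, not by a TTL.
class UriMapCache
  def initialize(&fetcher)
    @fetcher = fetcher   # the real URI-map HTTP call goes here
    @cache   = {}
  end

  # One fetch per release: a new release_id from the deploy script
  # misses the cache and re-fetches the (possibly updated) map.
  def uri_map(release_id)
    @cache[release_id] ||= @fetcher.call
  end
end
```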
The last call is a bit of a bummer over HTTP. Even servers right next to each other in the same cage see meaningful latency for many services, so HATEOAS isn’t for every service. Although, with virtual machines, I run my service in another VM on the same physical box, which somewhat mitigates the network latency. Double extra bonus: my service is purely functional, so I can cache all of its responses forever. That bold claim, however, is another blog post.
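“Cache forever” falls out of purity almost for free. Here `backend` stands in for the real HTTP call and is hypothetical; since the same request always yields the same response, the memo never needs an expiry:

```ruby
# Wrap a purely functional backend in a memo with no expiry at all.
# Safe only because identical requests always produce identical responses.
def memoize_forever(backend)
  cache = {}
  lambda do |request|
    # Hash#fetch with a block only computes (and stores) on a cache miss.
    cache.fetch(request) { cache[request] = backend.call(request) }
  end
end
```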
Anyway, Tripp is leaving my team to go be the king of software at some company I refuse to promote out of spite. How would you like to work at Backstop Solutions? I’m looking for a senior Ruby dev with a bunch of Clojure experience but, to be honest, I don’t really expect to find one. So if you’ve got some decent Ruby skills and can learn Clojure while pair programming you should apply. https://www.backstopsolutions.com/careers
Mention you read this article in the interview and you’ll get to see a man in a crazy shirt blush.