My name is James Adam, and I’ve been using Ruby, and Rails, for a long time now.
I’d like to share with you some of my personal history with Rails.
Hopefully it won’t be too self-indulgent, and hopefully I won’t bore you before I get to the actually “important” bit at the end. I also want to apologise — every time I come to the US, I seem to get a cold, so please excuse me if I need to cough.
Anyway, let’s get started. Sit back, as comfortably as you can.
You can feel your limbs and eyelids getting heavier. All your worries are drifting away. Let me gently regress you all the way back to 2005.
I’d actually discovered Ruby a few years earlier, and I’d totally fallen in love with it. I rewrote most of the code I was using for my research in Ruby. I was so excited about Ruby when I discovered it that I sent this in an email to my friends, who’d all got “proper” jobs writing software, most of them using Java at the time.
The nature of PhDs is that when you finish, you’ve become a deep expert in a topic that’s so narrow that it can be hard to explain how it is connected to anything outside it. So when I finished my PhD in early 2005, I was worried that I was going to be in a similar state professionally — having fallen in love with a language which I’d never be able to use for work, and instead having to return to Java or C++ or something like that to get a job.
I was incredibly lucky. Just as I was finishing my thesis, this excited Danish guy had started talking about a web framework he was writing, with the weird name “Ruby on Rails”.
Long story short, a job in London got posted to the Ruby mailing list, and so I took the seven-hour train journey down from Scotland for an interview, and within a few months I was in my first job, being paid to use Ruby and a very early version of Rails (I think it was 0.9 at the time).
Working on multiple applications
Our team was like a mini agency within a much bigger media company, and the company used a tangled mess of Excel spreadsheets in various departments, in all sorts of weird and wonderful ways. It was our job to build nice, clean web applications to replace them. We were a small team, never more than five developers while I was there, but we’d work on a whole range of applications, all at the same time.
These days, and particularly with Rails 5.2, which was released a few days ago, Rails provides pretty much everything you need to write a web application, but it’s easy to forget a time when Rails had almost none of the features we take for granted now.
A time before Active Storage, before encrypted secrets, before Action Cable, before Turbolinks, the Asset Pipeline, before resources or REST, before even Rack!
These are the headline features for Rails in Spring 2005, when version 0.13 was released. Migrations were brand new! Can you even imagine Rails without migrations?
In lots of ways, Rails 0.13 is very similar to modern Rails — it has the same MVC features: models with associations, controllers with actions, ERB templates, and Action Mailer — but in many other ways it was actually closer to Sinatra than to what we have in Rails today. To illustrate: the whole framework, including the dependencies and gems it used, was around 45,000 lines of code. By comparison, if you add Sinatra and ActiveRecord to an empty Gemfile today and run bundle install, with core dependencies you end up with around 100,000 lines of code.
But the core philosophy of Rails was exactly the same then as it is now — convention over configuration, making the most of Ruby’s expressiveness, and providing most things that most people needed to build a web application, all in one coherent package.
So it’s Summer 2005, Rails is at 0.13, and in our team we are building all these applications, and we realise that we’re building the same basic features again and again.
Specifically, things like code to manage authentication — only allowing the application to be used by certain people — and code to manage authorisation — only allowing a subset of those people to access certain features, like admin sections and so on. At the time, I think we had at least four applications under simultaneous development, which is around one app per developer, and it just seemed counter-productive for each of us to have to build the same feature, each in slightly different ways, making it harder for us to move around between the applications, and increasing the chance that we’d each introduce our own bugs and so on. After all, a big part of the Rails philosophy has always been the concept of DRY — don’t repeat yourself.
And these authentication and authorisation features weren’t particularly complicated; they had the same concerns you would expect (login, logout, forgot password, a user model, some mailer templates and so on). So it occurred to us that what we really needed to do was to write this once, and then share it between all the applications we were building, so they could all benefit from new features and bug fixes, and we’d be able to move between the applications more easily.
In modern Rails, we typically share code by creating gems, and those gems integrate with Rails using a mechanism called “Railties”, but that was only introduced at the end of 2009 (on New Year’s Eve, actually).
Even using just regular gems was problematic, because this was before Bundler, and before we’d figured out how to reliably lock down gem versions for an application. Gems had to be installed system-wide, so if you had multiple applications running on the same host, upgrading the gems for one application could end up breaking another. Gem freezing also didn’t appear in Rails until version 0.14.
So without any existing mechanisms, we had to roll up our sleeves and invent something ourselves.
So in the late summer of 2005, we extracted all of the login and authorisation code, including controllers and models and views and so on, into a separate repository and then wrote a little one-file patch library, which was loaded when Rails started. This library added paths to the controllers and models into the load paths for Rails, and made a few monkey-patches to Rails’ internals to get everything else working nicely.
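In spirit, the load-path part of that patch boiled down to something like this — a rough sketch, not the actual code (the helper name is mine; the directory layout follows Rails conventions):

```ruby
# A simplified sketch (not the real implementation) of what the original
# one-file "engines" patch did: for each engine, push its MVC directories
# onto the load path, so Rails' dependency loading could find controllers,
# models and helpers there, just as it did for the main application.
def add_engine_paths(engine_root, load_paths = $LOAD_PATH)
  %w[app/controllers app/models app/helpers lib].each do |dir|
    path = File.join(engine_root, dir)
    load_paths.unshift(path) if File.directory?(path)
  end
  load_paths
end
```

The monkey-patches on top of this mostly taught Rails to also look inside engine directories for things like views, which couldn’t be handled by the load path alone.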
Originally this one-file patch was called “frameworks”, but fairly soon after it got renamed to “engines”. The name “engines” was actually the idea of Sean O’Halpin, who was my boss at the time and has been programming with Ruby for even longer than me.
A “plugin” was just a folder of code that contained a “lib” directory and a file called “init.rb”, and the “plugins mechanism” would just iterate through subdirectories of “vendor/plugins”, trying to load files called init.rb if it could find them. Very simple.
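That whole mechanism can be sketched in a few lines — this is a simplified reconstruction, not the code Rails actually shipped:

```ruby
# A simplified sketch of the early Rails "plugins mechanism": scan
# vendor/plugins, push each plugin's lib/ directory onto the load path,
# then evaluate its init.rb if one exists.
def load_plugins(rails_root)
  plugins_dir = File.join(rails_root, "vendor", "plugins")
  Dir.glob(File.join(plugins_dir, "*")).sort.each do |plugin|
    lib  = File.join(plugin, "lib")
    init = File.join(plugin, "init.rb")
    $LOAD_PATH.unshift(lib) if File.directory?(lib)
    eval(File.read(init), TOPLEVEL_BINDING, init) if File.file?(init)
  end
end
```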
We spotted this feature being added to Rails, and so I emailed Jamis and David to say “hey, we’ve been working on something similar — if you were interested, we’d be happy to contribute!”
We got a very nice reply, the gist of which was “that sounds very interesting, but perhaps you can package that up as a plugin itself and we’ll see how people use it”. So that’s exactly what we did, resulting in the “Engines Plugin”, which let you treat other plugins as these vertical MVC slices which could be shared between applications. This is the first homepage for it, hosted on RubyForge.
I also released the first engine, the “Login Engine”, which wrapped up code from one of the original authentication generators we’d used, along with a few tweaks that we found useful.
That was November 1st, 2005.
The engines plugin is released; the controversy begins
People got pretty excited. Like, super-excited. Which was really exciting for me too.
There was enthusiasm for the demonstration engine that I released, and it seemed that people understood the idea behind what we were doing. Someone tried to turn Typo, which was the first really popular Rails-based blogging platform, into an engine, so they could add blogs into their application.
A lot of people got pretty enthusiastic specifically about not having to write the same old login stuff again and again. Some people, I think, hoped that they would never have to write another login system again ever, and that the simple one we released would work seamlessly for everyone.
But then somebody got so excited that they started talking about “engines within engines, depending on other engines”, and I think it was this idea that ultimately pushed DHH over the edge; about 10 days later he wrote the first post about engines on the official Rails blog.
In the post, David talked about his distrust of the dream of component-based development, and that it’s better to build your own business logic than try to adapt something that someone else wrote, and that we shouldn’t expect to be able to plug in or swap out these high-level components like forums and galleries and whatever, and never have to build them ourselves.
And I agreed with him, but tried to clarify that what engines were great for was extracting code that you’ve written, and sharing it around a bunch of apps that you’re working on at the same time, as long as those features were relatively isolated from the rest of the application. And I think David agreed with that too.
I could see his perspective: when you just have a single application to work on, like, say … Basecamp, then chances are that you can and should develop as much of the business logic yourself as you can. But if you’re working on 3 or 5 or 10 applications, at the same time, then chances are that the balance of value vs cost of sharing starts to tip the other way.
I was pretty happy with that conversation — it seemed like we generally understood and agreed about the potential benefits and dangers of what I was proposing. I’d shared an idea, and David had merely expressed a bit of caution, but a bunch of people had become super excited by it, maybe a little too excited, and on the other side a LOT of other people took David’s post as a declaration that the idea was fundamentally bad and to be avoided at all costs.
And so that’s basically how things played out for the next three years. Every now and then someone would write a blog post saying “Rails needs something like engines” or “engines are actually pretty useful” and they’d be met with the reaction “Didn’t you know that engines are ‘too much software’ (whatever that means), and like, really bad?”
And so I’d write a comment or another blog post trying to be reasonable and say “well, it’s more complicated than that”, and occasionally the author might add a little clarification to their post, but by that point it was too late, and you’d have people commenting that Rails engines are actually evil.
I call this time the wilderness years.
The Wilderness Years
During this time I tried to respond to the criticisms of the engines concept, with varying degrees of success. It was occasionally… quite frustrating.
I spoke at a bunch of conferences about plugins, and sometimes engines, and also tried to gently steer the development of the plugin architecture in Rails to reduce the amount of patching that the engines plugin needed to do, by adding things like controlling plugin loading order, exposing Rails configuration and initialisation hooks to plugins, and stuff like that.
Plugins became very popular, and went from being shared as links on a wiki page to having their own directory you could search and comment on.
Here’s an example of a fun plugin that I wrote for one of those presentations. See, I’m having fun!
And if I did mention engines in those presentations, I tried to explain that there were valid use cases — sure, you could use them in a terrible way, but that doesn’t mean people should never use them. I hoped that those presentations, if not actually changing anyone’s mind, might at least have softened people a little to the idea that engines might not be 100% terrible.
But then on the official plugin directory you’d get someone tagging the engines plugin as “shit”, and the cycle would start again. (I never did find out who that was.)
Some people would go to lengths to explain why “Rails Engines” were bad, but I’d try to write a short comment to respond to each of their points and hopefully clear up any misconceptions about what the engines plugin did and what engines were good for.
In this particular case, though, what was super confusing is that the same people then released their own plugins trying to basically do the same thing!
The wilderness period lasted so long that some companies even wrote engines-like plugins without realising that engines even existed! (Brian and I actually had a conversation afterwards, and talked about merging the projects).
Rails 3: an evolution
So this is all happening between 2006 and 2008, during which a new Ruby web framework appeared, called Merb.
It was designed to be extremely fast — largely because it didn’t do very much — and to be particularly good at handling many simultaneous requests and things like file uploads. Unlike Rails, which was at the time a relatively tightly-coupled set of frameworks, Merb was designed to be extremely modular, so it could (for example) support multiple ORM frameworks. It was also designed to have clear and stable internal APIs, since much of the Merb framework was written as optional plugins.
One of the developers most involved in Merb was Yehuda Katz, who, eagle-eyed people will have spotted, was generally sympathetic to the concept of “engines”, and so it’s probably not surprising that in 2008 Merb introduced its implementation of the idea, called “Merb slices”, to a generally positive response from the Ruby community.
But it’s not a huge surprise that this is how the most popular Rails podcast at the time chose to frame that.
And I don’t blame the presenters for thinking or saying that; it was just a representation of the opinion in the community as a whole at the time.
This is a painting of Sisyphus, who in Greek mythology was cursed by the gods to roll an immense boulder up a hill, only for it to roll back down as it neared the top, repeating this for eternity. These days it’s a common image invoked to describe tasks that are both laborious and futile.
A surprising development
So we come to the end of 2008. Rails is about to reach version 2.3. The controversy had largely died down — people who got some value out of working with engines were pretty happy, I hope! And the people who thought they were evil seemed to have forgotten that they existed.
So you can imagine my surprise when I received this email from DHH with the subject line, “I repent”.
I think I actually became giddy at the time. Rails core had decided that engines weren’t evil, and that they were going to be integrated into the framework. My work… was done.
OK, not really. Without going into a huge amount of detail, in Rails 2.3, plugins absorbed some of the core engines behaviour. They could provide controllers, views and most other types of code. This was released in Rails 2.3, in March 2009.
At the same time, Merb and Rails decided to merge, and Rails 3 would be the end result. The goal of doing this was, in part, to establish some clear, stable APIs within Rails that other libraries and plugins could rely on, so that they didn’t break when Rails was upgraded. This was a fairly significant rewrite of a lot of the core parts of Rails in order to create those APIs.
Yehuda and Carl Lerche did much of the work, and as part of it they decided that, rather than having a Rails application with these “engine” things inside it that looked like a Rails application and got access to the same hooks and config and so on, the outer application itself should just be a Rails engine with a few extra bells and whistles. So I guess the “engines inside of engines” person actually got their wish!
This was released as Rails 3.0, in 2010.
Finally, with Rails 3.1, released in August 2011, the last two bits of work that the engines plugin did — managing migrations and assets from engines — became part of Rails, and the plugin that I had written was officially deprecated.
The Hype/Hate Cycle
You have probably heard of the Gartner Hype Cycle, which is a way of understanding how technology trends evolve.
We have the initial creation or discovery of the technology, then the peak of inflated expectations, where everyone is excited about having jetpacks or living on the moon, but then the trough of disillusionment, when it turns out it’s actually much harder to build a jetpack than we thought, and there are a lot of things we need to build one that aren’t ready yet. But eventually the technology starts to climb the slope of enlightenment, as we figure all those little things out and iron out the problems so we don’t set our legs on fire and so on, and we finally get to the plateau of productivity, when zipping around on our jetpacks seems pretty ordinary and we look back at old movies of people moving around using their legs and laugh about how quaint they seem.
And I think we can use a similar cycle to understand how the Rails community reacted to the “engines” concept too. For engines, it took just under six years from idea to acceptance.
We have the same starting point, and the same peak of inflated expectations (“I’ll never need to write login code again!!”), but then we enter what I like to call the TROUGH OF RECEIVED OPINIONS, where some big names in the community have been like “woah woah woah”, and we personally haven’t actually tried using the technology, but we’ve heard it sucks, and so basically it’s the worst thing ever. And then for about three years, we scrabble up the SLOPE OF FEAR, UNCERTAINTY AND DOUBT, where people find themselves thinking “hey, wouldn’t it be great if I could share this slice of functionality between all my apps?”, but when they try, they get bogged down in all the blog posts, often from years ago, saying “no! It sucks!”, and so they give up. And then, finally, we reach the plateau of “oh — are those still a thing?”
And as you can see, at the end of the cycle, we are just about neutral. We’re basically back where we started, but at the very least I can finally put the boulder down and stop pushing it up the hill :-)
If you’d like a nice summary, I found this quote in the book “Rails 3 in Action”, which was published around the same time.
Engines: there when you need them
So, what changed in 2008? Well, I think it’s quite simple in retrospect. Rails was originally extracted from Basecamp, the software that DHH built and still works on today. At the start of Rails’ life, Basecamp was the only Rails application that David worked on, but between 2005 and 2008, 37signals added another three flagship applications, along with a few other smaller ones like Writeboard and Ta-da List.
Their small team — I think it was four developers at that point — had to build and support all those applications… at the same time… that… sounds… familiar, doesn’t it? :)
What it basically says is that, with some of the tools Rails gives you, it’s definitely possible to get into a mess. But instead of protecting us from misusing them by keeping them from us altogether, we should trust ourselves to use those tools and approaches sensibly. Concerns are one example of a “sharp knife” — some people think they encourage sweeping complexity under the rug, while others think that, used appropriately, they’re not a problem and the benefits outweigh the risks.
And that’s exactly what the engines concept is: a sharp knife. For around 6 years, it was a little too sharp to be included in Rails’ silverware drawer, but it seems like perhaps now we can be trusted with it. And these days, lots of popular libraries are engines.
Devise, which is an extremely popular authentication library, is an engine. The Spree e-commerce platform is an engine, and you can get content management systems like Refinery CMS, which is an engine too. Even the new Active Storage feature in Rails is implemented internally as an engine.
Welcome back to 2018
OK, that’s the end of our trip back to 2005, and we’re now back in the present. This is a good moment to take a stretch.
But before we start the third act, I wanted to mention one little thing that has nothing to do with Rails, or engines. Most of what I’ve talked about happened at least ten years ago, and when I was writing this talk, I wanted to make sure that I hadn’t inadvertently distorted how things played out in my memory.
All of the comments and posts I’ve used are real, but when I tried to find all these original newsgroup posts and articles and blogs and so on that were written at the time, what I found was that almost all of the sites I have referenced are either totally gone or only partially available (e.g. all the discussion comments on the rubyonrails blog have disappeared, Loud Thinking is gone, even RubyForge is gone…)
I think that history is interesting and important, and it’s kind of mind-boggling that without archive.org, information that’s only ten years old might otherwise be basically gone forever. So if you can, please support archive.org. They accept donations at their website, and I genuinely believe they are providing one of the most valuable services on the web today.
The History of Rails
OK, back to Rails. So at the start of this talk, I did say that I didn’t want this to be too self-indulgent, or to paint myself as some misunderstood genius or hero, finally proven right.
I am sure there are many other stories like this, in many other Open Source projects. But what I think is interesting about this journey is that it shows that the history of Rails can be viewed as a history of opinions.
Rails is “opinionated software”, which is great because it saves us a lot of time by allowing us to offload lots of decisions about building software, in exchange for accepting some implicit constraints. Following those constraints is what we sometimes call the “Rails Way”.
Some of those opinions are about how we use Ruby as a programming language — about how you should be able to express behaviour at the level of a line of code. An example of this is the methods that Rails adds to objects like String and Array.
Objects in a Rails application tend to have a lot of methods. Some people believe that it’s better to try to minimise the number of methods on an object, but it’s Rails’ opinion that the tradeoff is worth it, in order to be more expressive. Neither is wrong or evil. They are just two different opinions.
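To make that tradeoff concrete, here is a simplified re-implementation of one such core extension, Active Support’s `String#squish` — written here as a plain monkey-patch, which is not the real Active Support code, but behaves the same way:

```ruby
# A simplified version of the kind of core extension Active Support adds
# to String: collapse every run of whitespace to a single space and trim
# both ends. (The real implementation differs; the behaviour is the same.)
class String
  def squish
    strip.gsub(/\s+/, " ")
  end
end

"  hello \n  world ".squish # => "hello world"
```

Adding a method to every String in the program is exactly the kind of thing the “fewer methods” camp objects to — and exactly the kind of expressiveness the Rails camp is happy to pay for.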
Other opinions are at a more architectural level, and are ultimately about how we ought to structure the applications we build when using Rails.
If you build your URLs and controllers in terms of REST and resources, you’ll be able to use a lot more of the abstractions and high-level mechanisms that Rails provides. But if you like to add lots of custom actions into your controllers, Rails can’t stop you, and it won’t stop you, but you’ll have to do a lot more work yourself to wire things up.
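As a sketch, the two styles look something like this in a routes file (the `articles` resource and the custom action names are hypothetical examples, not from the talk):

```ruby
Rails.application.routes.draw do
  # The resourceful style: one line gives you the conventional routes
  # (index, show, new, create, edit, update, destroy), and Rails' URL
  # helpers, form builders and so on all know how to work with them.
  resources :articles

  # The custom-action style: Rails won't stop you, but each extra route
  # and its helpers now need wiring up by hand.
  get  "articles/:id/archive", to: "articles#archive"
  post "articles/:id/feature", to: "articles#feature"
end
```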
But that’s not the same as saying “if you don’t use resources, your code is bad” — it’s just the guiding opinion that Rails has.
What might not be obvious, though, is that over the 14 years of Rails’ life so far, those opinions haven’t always existed, and haven’t always stayed the same.
I think a particularly interesting example is Active Resource.
It used to be a part of the Rails framework, an evolution of the “actionwebservice” framework, which used to support SOAP, Dynamic WSDL generation, XML-RPC, and all the acronyms that David mentioned as “WS-deathstar” in his keynote yesterday.
Active Resource let you save and load remote data using JSON over HTTP, using the same Ruby methods as you’d use on a regular Active Record model. It made it easy to build things like microservices and so, I think, acted as a signal that you could and should do that. It was removed in Rails 4.0, which might be one of the first indications of the current opinion that a Majestic Monolith is a more productive way to work overall.
The only constant is change
The purpose of highlighting these changes of opinion is not to say that DHH, or anyone who is or was in Rails Core is frequently wrong; it’s to show that even in the world of opinionated software, opinions can change.
Fundamentally, what is and isn’t in Rails is driven by the needs of the people who write it, and to a greater or lesser extent, that means people building applications like Basecamp. But not everybody is building an application like that. I think more and more of us are working on Rails applications that have been around for a long time, in some cases ten years or more, and those kinds of applications have different needs and experience different pressures than one where the developer controls all the requirements and is free to rewrite it if they choose to, at almost any time.
Right now there are differing opinions in the community about what the future of Rails might include.
The majestic monolith vs. microservices; concerns and callbacks vs. smaller object composition; system tests vs. unit testing & stubs… these tensions are good. We need people to be pushing at the boundaries of the Rails Way, to figure out what’s next.
If we just sit back and wait for a relatively small group of people to tell us what the future of Rails looks like, then it will only be a future that suits their particular, unavoidably-limited contexts.
In 2014, 37signals changed their name to Basecamp and returned to maintaining a single application, so some of the motivation from within Rails Core for things like engines is naturally going to diminish. And that’s understandable: it’s an itch they may no longer have. But I wonder how many other software itches there are, which Basecamp doesn’t experience, but hundreds or thousands of other applications and developers do.
We need more voices sharing their experiences, good and bad, with the current Rails Way and we need people to build things like Trailblazer, ROM, and Hanami, and dry-rb, and then others to try using them and learning from them.
Probably none of these projects will ever usurp Rails, but they might contain ideas about how to build software, or how to structure web frameworks, which are new and useful. And like Merb, they might end up influencing the direction Rails takes towards something better for many of us. They might already have found some conceptual compression, to use the phrase from David’s Keynote, that we can adopt or adapt.
And there’s no reason why the people doing that exploration can’t be you, because who else is going to do it? You are the Rails community. You work with Rails all the time. Who better than you to spot situations where a new technique or approach might help? Who better than you to try to distil that experience into beautiful, expressive code that captures a common need?
You can be one of the crazy ones. The misfits. The rebels. The troublemakers. The round pegs in square holes. The ones who see things differently.
As it says at the bottom of the “Rails Doctrine”:
“We need disagreement. We need dialects. We need diversity of thought and people. It’s in this melting pot of ideas we’ll get the best commons for all to share. Lots of people chipping in their two cents, in code or considered argument.”
Life in the Big Tent
OK, wonderful. Rails now embraces all manner of opinions under its big tent. But what happens when you have your idea, but people don’t quite understand it immediately, and things get a little out of control and suddenly people are decrying it as evil?
I feel for you, genuinely, because when I released engines, the main way people expressed these kinds of opinions was in the comment form of a blog. But we are now living in the age of the tweet, where many people don’t think twice about unleashing a salvo of 280 character hot takes out into the world. I’m not sure that we live in an age of “considered opinion” at the moment.
So what can we do? Well, two things.
Firstly, as consumers of open source technology, I think we could all try our best to avoid sharing opinions like that. If you’ve had a bad experience with a technology or a technique, then that’s totally valid and you can and absolutely should share that experience. But don’t do it in a tweet, or if you MUST do it in a tweet, try to at least be balanced.
Even better, start a blog, or post on Medium, and write as much as you can about your experience and your context, and share a link to THAT on twitter.
Secondly, if you are lucky and generous enough to actually try to contribute a new idea to this, or any other community, try not to become demotivated if people don’t understand the point at first. This is that first blog post about Rails on the 37signals blog, in early 2004. Look at the first comment.
What this shows is that the value of why you’re doing something differently, is often not immediately obvious to people. You will have to patiently explain it. Sometimes again and again, maybe for years and years. And you won’t be able to convince everyone, but you might reach someone who finds it interesting or useful, who might then reach someone else, and before you know it, lots of people are getting value from your little idea, and it could end up making a big difference after all.
The subjective value of ideas, and how to stay sane
There’s one last thing I’d like to say. When you make something, and it receives criticism, especially on the internet, from strangers, it can be very hard to deal with, sometimes so much so that we might stop creating things altogether, or never even try.
When I was researching this talk, I stumbled across an old blog post, from 2006 — actually by DHH, would you believe it — that captured a good way of dealing with situations like this. I’m going to paraphrase it.
View your idea, or the thing you’ve made, as a pearl, not a diamond. When someone responds to your idea and points out all the flaws, the situations where it might not work for them, that’s OK, because what they’re asking for is a diamond. They want you to give them something they consider flawless. They want something perfect. But you need to try to remember that, however that want is expressed (constructively, vitriolically, or anywhere in between), it’s not your job to make a diamond for them.
Instead, all you can offer them is the pearl you’ve made, and if that’s not good enough, then:
Future relevancy protection: As Tim Garnett correctly points out, a lot of discussion of Refinements suffers from not being clear about which version of Ruby was current at the time of writing. When I gave this talk, the latest version of Ruby was 2.2.3, but I believe the content is still relevant for 2.3.
Chances are, you’ve heard of refinements, but never used them.
The Refinements feature has existed as a patch and subsequently an official part of Ruby for around five years, and yet to most of us, it only exists in the background, surrounded by a haze of opinions about how they work, how to use them and, indeed, whether or not using them is a good idea at all.
I’d like to spend a little time taking a look at what Refinements are, how to use them and what they can do.
But don’t get me wrong - this is not a sales pitch for refinements! I am not going to try to convince you that they will solve all your problems and that you should all be using them!
The title of this presentation is “Why is nobody using refinements?” and that’s a genuine question. I don’t have all the answers!
My only goal is that, by the end of this, both you and I will have a better understanding of what they actually are, what they can actually do, when they might be useful and why it might be that they’ve lingered in the background for so long.
What are refinements?
Simply put, refinements are a mechanism to change the behaviour of an object in a limited and controlled way.
By change, I mean add new methods, or redefine existing methods on an object.
By limited and controlled, I mean that adding or changing those methods does not have an impact on other parts of our software which might interact with that object.
Refinements are defined inside a module, using the refine method.
This method accepts a class – String, in this case – and a block, which contains all the methods to add to that class when the refinement is used. You can refine as many classes as you want within the module, and define as many methods as you’d like within each block.
To use a refinement, we call the using method with the name of the enclosing module.
And that’s really all there is to refinements – two new methods, refine, and using.
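Concretely, a complete sketch might look like this. The Shouting module and its shout method appear later in the talk’s walkthrough; the method body here is my own assumption.

```ruby
# A module containing a refinement of String: String gains a shout
# method, but only where the refinement is activated with `using`.
module Shouting
  refine String do
    def shout
      upcase + "!"   # body assumed for illustration
    end
  end
end

using Shouting        # activate the refinement for the code that follows
puts "hello".shout    # => HELLO!
```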
However, there are some quirks, and if we really want to properly understand refinements, we need to explore them. The best way to approach this is by considering a few more simple examples.
Now we know that we can call the refine method within a module to create refinements, and that’s all relatively straightforward, but it turns out that where and when you call the using method has a profound effect on how the refinement behaves with our code.
We’ve seen that invoking using inside a class definition works. We activate the refinement, and we can call refined methods on a String instance:
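A sketch of that working case, using the same assumed Shouting refinement:

```ruby
module Shouting
  refine String do
    def shout
      upcase + "!"   # body assumed
    end
  end
end

class Thing
  using Shouting
  # The refinement is active inside the class body:
  SHOUTED = "hello".shout
end

puts Thing::SHOUTED   # => HELLO!
```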
Our class uses the refinement, but when we pass a block to a method in that class, suddenly it breaks.
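A sketch of that failing case (the Runner class name is invented; the shout body is assumed as before):

```ruby
module Shouting
  refine String do
    def shout
      upcase + "!"   # body assumed
    end
  end
end

class Runner
  using Shouting

  def run
    yield
  end
end

begin
  # The block is written out here, where the refinement is not active:
  Runner.new.run { "hello".shout }
rescue NoMethodError => e
  puts e.message   # undefined method `shout' for a String
end
```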
So what’s going on here? For many of us this is quite counter-intuitive; after all, we’re used to being able to re-open classes, or share behaviour between super- and sub-classes, but it seems like that only works intermittently with refinements?
It turns out that the key to understanding how and when refinements are available relies on another aspect of how Ruby works that you may have already heard of, or even encountered directly.
The key to understanding refinements is understanding lexical scope.
To understand lexical scope, we need to learn about some of the things that happen when Ruby parses our program.
Let’s look at the first example again:
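Here it is as a runnable sketch (shout body assumed):

```ruby
module Shouting
  refine String do
    def shout
      upcase + "!"   # body assumed
    end
  end
end

class Thing
  using Shouting
  puts "hello".shout   # works: the refinement is active in this scope
end

begin
  "hello".shout        # back at the top level, it is not
rescue NoMethodError
  puts "no shout method out here"
end
```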
As Ruby parses this program, it is constantly tracking a handful of things to understand the meaning of the program. Exploring these in detail would take a whole presentation in itself, but for the moment, the one we are interested in is called the “current lexical scope”.
Let’s “play computer” and follow Ruby as it processes our simple program here.
The top-level scope
When Ruby starts parsing the file, it creates a new structure in memory – a new “lexical scope” – which holds various bits of information that Ruby uses to track what’s happening at that point. We call this the “top-level” lexical scope.
When we encounter a class (or module) definition, as well as creating the class and everything that involves, Ruby also creates a new lexical scope, nested “inside” the current one.
We can call this lexical scope “A”, just to give it an easy label. Visually it makes sense to show these as nested, but behind the scenes this relationship is modelled by each scope linking to its parent. “A”’s parent is the top level scope, and the top level scope has no parent.
As Ruby processes all the code within this class definition, the “current” lexical scope is now A.
When we call using, Ruby stores a reference to the refinement within the current lexical scope. We can also say that within lexical scope “A”, the refinement has been activated.
We can see now that there are no activated refinements in the top-level scope, but our Shouting refinement is activated for lexical scope A.
Next, we can see a call to the method shout on a String instance. The details of method dispatch are complex and interesting in their own right, but one of the things that happens at this point is that Ruby checks to see if there are any activated refinements in the current lexical scope that might affect this method.
In this case, we can see that for current lexical scope “A”, there is an activated refinement for the shout method on Strings, which is exactly what we’re calling.
Ruby then looks up the correct method body within the refinement module, and invokes that instead of any existing method.
And there, we can see that our refinement is working as we hope.
So what about when we try and call the method later? Well, once we leave the class definition, the current lexical scope returns to being the parent, which is the top-level one.
Then we find our second String instance and a method being called on it.
Once again, when Ruby dispatches for the shout method, it checks the current lexical scope – the top-level one – for the presence of any refinements, but in this case there are none. Ruby behaves as normal: it calls method_missing, which raises a NoMethodError by default.
Calling using at the top-level
If we instead call using Shouting outside of the class, at the top level, our use of the refined method both inside and outside the class works perfectly.
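As a sketch (same assumed Shouting refinement):

```ruby
module Shouting
  refine String do
    def shout
      upcase + "!"   # body assumed
    end
  end
end

using Shouting           # activated at the top level of the file

class Thing
  def greet
    "hello".shout        # works: this scope is nested in the top level
  end
end

puts Thing.new.greet     # => HELLO!
puts "hello".shout       # => HELLO!
```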
This is because once a refinement is activated, it is activated for all nested scopes; calling using at the top level activates the refinement in the top-level scope, which means it will be activated in any nested scopes, including “A”. And so, our call to the refined method within the class works too.
So this is our first principle of how refinements work:
When we activate a refinement with the using method, that refinement is active in the current and any nested lexical scopes.
However, once we leave that scope, the refinement is no longer activated, and Ruby behaves as it did before.
Lexical scope and re-opening classes
Let’s look at another example from earlier. Here we define a class, and activate the refinement, and later re-open that class and try to use it. We’ve already seen that this doesn’t work; the question is why.
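A runnable sketch of that situation (shout body assumed):

```ruby
module Shouting
  refine String do
    def shout
      upcase + "!"   # body assumed
    end
  end
end

class Thing
  using Shouting
  puts "hello".shout   # first definition: works
end

class Thing            # re-opened: a brand-new lexical scope
  begin
    "hello".shout
  rescue NoMethodError
    puts "the refinement is not active here"
  end
end
```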
Watching Ruby build its lexical scopes reveals why this is the case. Once again, the first class definition gives us a new, nested lexical scope A. It’s within this scope, that we activate the refinements. Once we reach the end of that class definition, we return to the top level lexical scope.
When we re-open the class, Ruby creates a nested lexical scope as before, but it is distinct from the previous one. Let’s call it B to make that clear.
While the refinement is activated in the first lexical scope, when we re-open the class, we are in a new lexical scope, and one where the refinements are no longer active.
So our second principle is this:
Just because the class is the same doesn’t mean you’re back in the same lexical scope.
This is also the reason why our example with subclasses didn’t behave as we might’ve expected:
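A sketch of the subclass case (Parent and Child are invented names; shout body assumed):

```ruby
module Shouting
  refine String do
    def shout
      upcase + "!"   # body assumed
    end
  end
end

class Parent
  using Shouting
end

class Child < Parent    # a subclass, but a completely separate lexical scope
  def greet
    "hello".shout
  end
end

begin
  Child.new.greet
rescue NoMethodError
  puts "the refinement is not inherited"
end
```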
It should be clear now, that the fact that we are within a subclass actually has no bearing on whether or not the refinement is activated; it’s entirely determined by lexical scope. Any time Ruby encounters a class or module definition via the class (or module) keywords, it creates a new, fresh, lexical scope, even if that class (or module) has already been defined somewhere else.
This is also the reason why, even when activated at the top-level of a file, refinements only stay activated until the end of that file – because each file is processed using a new top-level lexical scope.
So now we have another two principles about how lexical scope and refinements work.
Just as re-opened classes have a different scope, so do subclasses. In fact:
The class hierarchy has nothing to do with the lexical scope hierarchy.
We also now know that every file is processed with a new top-level scope, and so refinements activated in one file are not activated in any other files – unless those other files also explicitly activate the refinement.
Lexical scope and methods
Let’s look at one more of our examples from earlier:
Here we are activating a refinement within a class, and defining a method in that class which uses the refinement. Later, we create an instance of the class and call our method.
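A sketch of that example (Person is an invented name; shout body assumed):

```ruby
module Shouting
  refine String do
    def shout
      upcase + "!"   # body assumed
    end
  end
end

class Person
  using Shouting

  def greet
    "hello".shout
  end
end

# Called from the top level, where the refinement is NOT active:
puts Person.new.greet   # => HELLO!
```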
We can see that even though the method gets invoked from the top level lexical scope – where our refinement is not activated – our refinement still somehow works. So what’s going on here?
When Ruby processes a method definition, it stores with that method a reference to the current lexical scope at the point where the method was defined. So when Ruby processes the greet method definition, it stores a reference to lexical scope A with that method:
When we call the greet method – from anywhere, even a different file – Ruby evaluates it using the lexical scope associated with its definition. So when Ruby evaluates "hello".shout inside our greet method, and tries to dispatch to the shout method, it checks for activated refinements in lexical scope “A”, even if the method was called from an entirely different lexical scope.
We already know that our refinement is active in that scope, and so Ruby can use the method body for “shout” from the refinement.
This gives us our fourth principle:
Methods are evaluated using the lexical scope at their definition, no matter where those methods are actually called from.
Lexical scope and blocks
A very similar process explains why our block example didn’t work. Here’s that example again – a method defined in a class where the refinement is activated yields to a block, but when we call that method with a block that uses the refinement, we get an error.
We can quickly see which lexical scopes Ruby has created as it processed this code. As before, we have a nested lexical scope “A”, and the method defined in our class is associated with it:
However, just as methods are associated with the current lexical scope, so are blocks (and procs, lambdas and so on). When we define that block, the current lexical scope is the top level one.
When the run method yields to the block, Ruby evaluates that block using the top-level lexical scope, and so Ruby’s method dispatch algorithm finds no active refinements, and therefore no shout method.
Our final principle
Blocks – and procs, lambdas and so on – are also evaluated using the lexical scope at their definition.
With a bit of experimentation, we can also demonstrate to ourselves that even blocks evaluated using tricks like instance_eval or class_eval retain this link to their original lexical scope, even though the value of self might change.
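For example, this sketch (shout body assumed) shows a block surviving an instance_eval with its refinement intact:

```ruby
module Shouting
  refine String do
    def shout
      upcase + "!"   # body assumed
    end
  end
end

using Shouting

shouty = proc { "hello".shout }

# instance_eval changes self, but the block keeps the lexical scope of
# its definition, where the refinement is active:
puts Object.new.instance_eval(&shouty)   # => HELLO!
```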
This link from methods and blocks to a specific lexical scope might seem strange or even confusing right now, but we’ll see soon that it’s precisely because of this that refinements are so safe to use.
But I’ll get to that in a minute. For now, let’s recap what we know:
Lexical scope & principles recap
Refinements are controlled using the lexical scope structures already present in Ruby.
You get a new lexical scope any time you do any of the following:
enter a different file
open a class or module definition
run code from a string using eval
As I said earlier: you might find the idea of lexical scope surprising, but it’s actually a very useful property for a language; without it, many aspects of Ruby we take for granted would be much harder, if not impossible, to implement. Lexical scope is part of how Ruby resolves references to constants, for example, and it’s also what makes it possible to pass blocks and procs around as “closures”.
We also now have the five basic principles that will enable us to explain how and why refinements behave the way they do:
Once you call using, refinements are activated within the current, and any nested, lexical scopes
The nested scope hierarchy is entirely distinct from any class hierarchy in your code; subclasses and superclasses have no effect on refinements; only nested lexical scopes do.
Different files get different top-level scopes, so even if we call using at the very top of a file, and activate it for all code in that file, the meaning of code in all other files is unchanged.
Methods are evaluated using the current lexical scope at their point of definition, so we can call methods that make use of refinements internally from anywhere in the rest of our codebase.
Blocks are also evaluated using the lexical scope, and so it’s impossible for refinements activated elsewhere in our code to change the behaviour of blocks — or indeed, any other methods or code — written where that refinement wasn’t present.
Right! So now we know. But why should we even care? What are refinements actually good for? Anything? Nothing?
Let’s try to find out.
Let’s use refinements
Now, another disclaimer: these are just some ideas – some less controversial than others – but hopefully they will help frame what refinements might make easier or more elegant or robust.
The first will not be a surprise, but I think it’s worth discussing anyway.
Monkey-patching is the act of modifying a class or object that we don’t own – that we didn’t write. Because Ruby has open classes, it’s trivial to redefine any method on any object with new or different behaviour.
The danger that monkey-patching brings is that those changes are global – they affect every part of the system as it runs. As a result, it can be very hard to tell which parts of our software will be affected.
If we change the behaviour of an existing method to suit one use, there’s a good chance that some distant part of the codebase – perhaps hidden within Rails or another gem – is going to call that method expecting the original behaviour (or its own monkey-patched behaviour!), and things are going to get messy.
Say I’m writing some code in a gem, and as part of that I want to be able to turn an underscored String into a camelized version. I might re-open the String class and add this simple, innocent-looking method to make it easy to do this transformation.
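The talk’s exact method isn’t reproduced here, but a naive camelize along these lines triggers precisely the crash shown below:

```ruby
# A naive monkey-patch (body assumed): fine for my gem's own inputs...
class String
  def camelize
    split("_").map(&:capitalize).join
  end
end

puts "echo_service".camelize        # => EchoService

# ...but ActiveSupport's camelize also converts "/" into "::", so when
# Rails calls our version on a path-like string:
puts "admin/admin_helper".camelize  # => Admin/adminHelper
# constantize then receives "Admin/adminHelper", which is not a valid
# constant name, and blows up.
```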
Unfortunately, as soon as anyone tries to use my gem in a Rails application, their test suite is going to go from passing, not to failing but to ENTIRELY CRASHING with a very obscure error:
/app/.bundle/gems/ruby/2.1.0/gems/activesupport-4.2.1/lib/active_support/inflector/methods.rb:261:in `const_get': wrong constant name Admin/adminHelper (NameError)
from /app/.bundle/gems/ruby/2.1.0/gems/activesupport-4.2.1/lib/active_support/inflector/methods.rb:261:in `block in constantize'
from /app/.bundle/gems/ruby/2.1.0/gems/activesupport-4.2.1/lib/active_support/inflector/methods.rb:259:in `each'
from /app/.bundle/gems/ruby/2.1.0/gems/activesupport-4.2.1/lib/active_support/inflector/methods.rb:259:in `inject'
from /app/.bundle/gems/ruby/2.1.0/gems/activesupport-4.2.1/lib/active_support/inflector/methods.rb:259:in `constantize'
from /app/.bundle/gems/ruby/2.1.0/gems/activesupport-4.2.1/lib/active_support/core_ext/string/inflections.rb:66:in `constantize'
from /app/.bundle/gems/ruby/2.1.0/gems/actionpack-4.2.1/lib/abstract_controller/helpers.rb:156:in `block in modules_for_helpers'
from /app/.bundle/gems/ruby/2.1.0/gems/actionpack-4.2.1/lib/abstract_controller/helpers.rb:144:in `map!'
from /app/.bundle/gems/ruby/2.1.0/gems/actionpack-4.2.1/lib/abstract_controller/helpers.rb:144:in `modules_for_helpers'
from /app/.bundle/gems/ruby/2.1.0/gems/actionpack-4.2.1/lib/action_controller/metal/helpers.rb:93:in `modules_for_helpers'
You can see the error at the top there - something to do with constant names or something? Looking at the backtrace I don’t see anything about a camelize method anywhere?
This crash illustrates the two big problems with monkey-patching. The first is breaking API expectations: we can see that Rails has some expectation about the behaviour of the camelize method on String, which we obviously broke when we added our own monkey-patch elsewhere.
The second is that monkey patching can make it far harder to understand what might be causing unexpected or strange behaviour in our software.
Refinements in Ruby address both of these issues.
If we change the behaviour of a class using a refinement, we know that it cannot affect parts of the software that we don’t control, because refinements are restricted by lexical scope.
We’ve seen already that refinements activated in one file are not activated in any other file, even when re-opening the same classes. If I wanted to use a version of camelize in my gem, I could define and use it via a refinement, but anywhere that refinement wasn’t specifically activated – which it won’t be anywhere inside of Rails, for example – the original behaviour remains.
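A sketch of that approach (the module name is invented; the camelize body is the same naive one as before):

```ruby
module Camelizing
  refine String do
    def camelize
      split("_").map(&:capitalize).join
    end
  end
end

# Inside my gem's files only:
using Camelizing
puts "echo_service".camelize   # => EchoService

# Rails never activates this refinement, so ActiveSupport's own
# String#camelize is untouched everywhere else.
```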
It’s actually impossible to break existing software like Rails using refinements. There’s no way to influence the lexical scope associated with code without editing that code itself, and so the only way we can “poke” some refinement behaviour into a gem is by finding the source code for that gem and literally inserting text into it.
This is exactly what I meant by limited and controlled at the start.
Refinements also make it easier to understand where unexpected behaviour may be coming from, because they require an explicit call to using somewhere in the same file as the code that uses that behaviour. If there are no using statements in a file, we can be confident – assuming nothing else is doing any monkey-patching – that Ruby will behave as we would normally expect.
This is not to say that it’s impossible to produce convoluted code which is tricky to trace or debug – that will always be possible – but if we use refinements, there will always be a visual clue that a refinement is activated.
Onto my second example.
Managing API changes
Sometimes software we depend on changes its behaviour. APIs change in newer versions of libraries, and in some cases even the language can change.
For example, in Ruby 2, the behaviour of the chars method on Strings changed from returning an enumerator to returning an Array of single-character strings.
Imagine we’re migrating an application from Ruby 1.9 to Ruby 2 (or later), and we discover that some part of our application depends on calling chars on a String and expects an enumerator to be returned.
If some parts of our software rely on the old behaviour, we can use refinements to preserve the original API, without impacting any other code that might have already been adapted to the new API.
Here’s a simple refinement which we could activate for only the code which depends on the Ruby 1.9 behaviour:
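Something along these lines (the names are my own; the point is restoring the Ruby 1.9 return value):

```ruby
module OldChars
  refine String do
    def chars
      each_char          # returns an Enumerator, as chars did in 1.9
    end
  end
end

using OldChars
p "abc".chars.class      # => Enumerator
p "abc".chars.to_a       # => ["a", "b", "c"]
```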
The rest of the system remains unaffected, and any dependencies that expect the Ruby 2 behaviour will continue to work into the future.
My third example is probably familiar to most people.
One of the major strengths of Ruby is that its flexibility can be used to help us write very expressive code, and in particular supporting the creation of DSLs, or “domain specific languages”. These are collections of objects and methods which have been designed to express concepts as closely as possible to the terminology used by non-programmers, and often designed to read more like human language than code.
Adding methods to core classes can often help make DSLs more readable and expressive, and so refinements are a natural candidate for doing this in a way that doesn’t leak those methods into other parts of an application.
RSpec as a DSL
RSpec is a great example of a DSL for testing. Until recently, this would’ve been a typical example of RSpec usage:
One hallmark of RSpec is the emphasis on writing code that reads fluidly, and we can see that demonstrated in the line developer.should be_happy, which, while valid Ruby, reads more like English than code. To enable this, RSpec used monkey-patching to add a should method to all objects.
Recently, RSpec moved away from this DSL, and while I cannot speak for the developers who maintain RSpec, I think it’s fair to say that part of the reason was to avoid the monkey-patching of the Object class.
However, refinements offer a compromise that balances the readability of the original API with the integrity of our objects.
It’s easy to add a should method to all objects in your spec files using a refinement, but this method doesn’t leak out into the rest of the codebase.
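A minimal sketch of the idea follows. To be clear, this is not RSpec’s actual implementation: the module name and the callable-matcher shorthand are inventions to keep the example self-contained.

```ruby
# A toy `should` offered as a refinement instead of a monkey-patch.
module Shoulds
  refine Object do
    def should(matcher)
      raise "expectation failed" unless matcher.call(self)
    end
  end
end

using Shoulds
5.should ->(n) { n > 3 }   # passes silently
# In any file without `using Shoulds`, Object#should doesn't exist.
```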
The compromise is that you must write using RSpec at the top of every file, which I don’t think is a large price to pay. But you might disagree, and we’ll get to that shortly.
RSpec isn’t the only DSL that’s commonly used, and you might not even have thought of it as a DSL – after all, it’s just Ruby. You can also view the routes file in a Rails application as a DSL of sorts, or the query methods ActiveRecord provides. In fact, the Sequel gem optionally lets you write queries more fluently by using a refinement to add methods and behaviour to strings, symbols and other objects.
DSLs are everywhere, and refinements can help make them even more expressive without resorting to monkey-patching or other brittle techniques.
Onto my last example.
Internal access control
Refinements might not just be useful for safer monkey-patching or implementing DSLs.
We might also be able to harness refinements as a design pattern of sorts, and use them to ensure that certain methods are only callable from specific, potentially-restricted parts of our codebase.
For example, consider a Rails application with a model that has some “dangerous” or “expensive” method.
```ruby
module UserAdmin
  refine User do
    def purge!
      associated_records.delete_all!
      delete!
    end
  end
end

# in app/controllers/admin_controller.rb
class AdminController < ApplicationController
  using UserAdmin

  def purge_user
    User.find(params[:id]).purge!
  end
end
```
By using a refinement, the only places we can call this method are where we’ve explicitly activated that refinement.
From everywhere else – normal controllers, views or other classes – even though they might be handling the same object – the very same instance, even – the dangerous or expensive method is guaranteed not to be available there.
I think this is a really interesting use for refinements – as a design pattern rather than just a solution for monkey-patching – and while I know there could be some obvious objections to that suggestion, I’m certainly curious to explore it a bit more before I decide it’s not worthwhile.
So those are some examples of things we might be able to do with refinements. I think they are all potentially very interesting, and potentially useful.
So, finally, to the question I’m curious about. If refinements can do all of these things in such elegant and safe ways, why aren’t we seeing more use of them?
Why is nobody using refinements?
It’s been five years since they appeared, and almost three years since they were officially a part of Ruby. And yet, when I search GitHub, almost none of the results are actual uses of refinements.
In fact, some of the top hits are gems that actually try to “remove” refinements from the language!
You can see in the description: “No one knows what problem they solve or how they work.”! Well, hopefully we at least have some ideas about that now.
I actually asked another of the speakers at RubyConf 2015 — who will remain nameless — what they thought the answer to my question might be, and they said: “Because they’re bad.”
I don’t find this answer very satisfying. Why are they bad?
I asked them why, and they replied
“Because they’re just another form of monkey patching, right?”
Well – yes, sort of, but also… not really.
And just because they might be related in some way to monkey-patching – does that automatically make them bad, or not worth understanding?
I can’t shake the feeling that this is the same mode of thinking that leads to ideas like “meta-programming is too much magic” or “using single or double quoted strings consistently is a *very important thing*” or that something – anything – you type into a text editor can be described as “awesome” when that word should be reserved exclusively for moments in your life like seeing the Grand Canyon for the first time, and not when you install the latest gem or anything like that.
I am… suspicious… of “awesome”, and so I’m also suspicious of “bad”.
I asked another friend if they had any ideas about why people weren’t using refinements, and they said “because they’re slow”, again, as if it was a fact.
A little benchmarking suggests otherwise: “TL;DR: Refinements aren’t slow. Or at least they don’t seem to be slower than ‘normal’ methods.”
So why aren’t people using refinements? Why do people have these ideas that they are slow, or just plain bad?
Is there any solid basis for those opinions?
As I told you right at the start, I don’t have a neatly packaged answer, and maybe nobody does, but here are my best guesses, based on tangible evidence and my understanding of how refinements actually work.
1. Lack of understanding?
While refinements have been around for almost five years, the refinements you see now are not the same as those that were introduced half a decade ago. Originally, they weren’t strictly lexically scoped, and while this provided some opportunity for more elegant code than what we’ve seen today – think not having to write using at the top of every RSpec file, for example – it also broke the guarantee that refinements cannot affect distant parts of a codebase.
It’s also probably true that lexical scope is not a familiar concept for many Ruby developers. I’m not ashamed to say that even though I’ve been using Ruby for over 13 years, it’s only recently that I really understood what lexical scope is actually about. I think you can probably make a lot of money writing Rails applications without ever really caring about lexical scope, and yet, without understanding it, refinements will always seem like confusing and uncontrollable magic.
The evolution of refinements hasn’t been smooth, and I think that’s why some people might feel like “nobody knows how they work or what problem they solve”. It doesn’t help, for example, that a lot of the blog posts you’ll find when you search for “refinements” are no longer accurate.
Even the official Ruby documentation is actually wrong!
This hasn’t been true since Ruby 2.1, I think, but this is what the documentation says right now. Nudge to the ruby-core team: issue 11681 might fix this…
UPDATE: since giving the presentation, this patch has been merged!
I think some of this … “information rot” can explain a little about why refinements stayed in the background.
There were genuine and valid questions about early implementation and design choices, and I think it’s fair to say that some of the excitement about this new feature of Ruby was dampened as a result. But even with all the outdated blog posts, I don’t think this entirely explains why nobody seems to be using them.
So perhaps it’s the current implementation that people don’t like.
2. Adding using everywhere is a giant pain in the ass?
Maybe the idea of having to write using everywhere goes against the mantra of DRY - don’t repeat yourself - that we’ve generally adopted as a community. After all, who wants to have to remember to write using RSpec or using Sequel or using ActiveSupport at the top of almost every file?
It doesn’t sound fun.
And this points at another potential reason:
3. Rails (and Ruby) doesn’t use them
A huge number of Ruby developers spend most if not all of their time using Rails, and so Rails has a huge amount of influence over which language features are promoted and adopted by the community.
$ fgrep 'refine ' -R rails | wc -l # => 0
Rails contains perhaps the largest collection of monkey-patches ever, in the form of ActiveSupport, but because it doesn’t use refinements, no signal is sent to developers that we should – or even could – be using them.
Now: You might be starting to form the impression that I don’t like Rails, but I’m actually very hesitant to single it out. To be clear: I love Rails – Rails feeds and clothes me, and enables me to fly to Texas and meet all y’all wonderful people. The developers who contribute to Rails are also wonderful human beings who deserve a lot of thanks.
I also think it’s entirely possible, and perhaps even likely, that there’s just no way for Rails to use refinements as they exist right now to implement something at the scale of ActiveSupport. It’s possible.
But even more than this, nothing in the Ruby standard library itself uses refinements!
4. Refinements are just too weird?
You can also get into some really weird situations if you try to include a module into a refinement, where methods from that module cannot find other methods defined in the same module.
But this doesn’t necessarily mean that refinements are broken; all of these are either by design, or a direct consequence of lexical scoping.
Even so, they are unintuitive and it could be that aspects like these are a factor limiting the ability to use refinements at the scale of, say, ActiveSupport.
5. Refinements solve a problem that nobody has?
As easy as it is for me to stand up here and make a logical and rational argument about why monkey-patching is bad, and wrong, and breaks things, it’s impossible to deny the fact that even since you started reading this page, software written using libraries that rely heavily on monkey-patching has made literally millions of dollars.
So maybe refinements solve a problem that nobody actually has. Maybe, for all the potential problems that monkey patching might bring, the solutions we already have for managing those problems – test suites, for example – are already doing a good enough job at protecting us.
But even if you disagree with that – which I wouldn’t blame you for doing – perhaps it points at another reason that’s more compelling. Maybe refinements aren’t the right solution for the problem of monkey-patching. Maybe the right solution is actually something like: object-oriented design.
6. The rise of Object-oriented design
I think it’s fair to say that over the last two or three years, there’s been a significant increase in interest within the Ruby community in “Object Oriented Design”, which you can see in the presentations that Sandi Metz, for example, has given, or in her book, or discussion of patterns like “Hexagonal Architectures”, and “Interactors”, and “Presenters” and so on.
The benefits that O-O design tends to bring to software are important and valuable – smaller objects with clearer responsibilities, that are easier and faster to test and change – all of this helps us do our jobs more effectively, and anything which does that must be good.
And, from our perspective here, there’s nothing you can do with refinements that cannot also be accomplished by introducing one or more new objects or methods to encapsulate the new or changed behaviour.
For example, rather than adding a “shout” method to all Strings, we could introduce a new class that only knows about shouting, and wrap any strings we want shouted in instances of this new class.
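For instance, a minimal sketch (the class name is invented):

```ruby
# A small wrapper object whose only job is shouting, instead of a
# refinement on String itself.
class Shouter
  def initialize(string)
    @string = string
  end

  def shout
    @string.upcase + "!"
  end
end

puts Shouter.new("hello").shout   # => HELLO!
```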
I don’t want to discuss whether or not this is actually better than the refinement version, partly because this is a trivial example, so it wouldn’t be realistic to use, but mostly because I think there’s a more interesting point.
While good O-O design brings a lot of tangible benefits to software development, the cost of “proper O-O design” is verbosity; just as a DSL tries to hide the act of programming behind language that appears natural, the introduction of many objects can – sometimes – make it harder to quickly grasp what the overall intention of code might be.
And the right balance of explicitness and expressiveness will be different for different teams, or for different projects. Not everyone who interacts with software is a developer, let alone someone trained in software design, and so not everybody can be expected to adopt sophisticated design principles with ease.
Software is for its users and sometimes the cost of making them deal with extra objects or methods might not be worth the benefit in terms of design purity. It is – like so many things – often subjective.
To be clear – I’m not in any way arguing that O-O design is not good; I’m simply wondering whether or not it being good necessarily means that other approaches should not be considered in some situations.
So what’s the right answer?
And those are the six most plausible reasons I could come up with as to why nobody is using refinements. So which is the right answer? I don’t know. There’s probably no way to know.
I think these are all potentially good, defensible reasons why we might have decided collectively to ignore Refinements.
However… I am not sure any of them are the answer that most accurately reflects reality. Unfortunately, I think the answer is more likely to be the first one we encountered on this journey:
Because other people told us they are “bad”.
Let me make a confession.
When I said “this is not a sales pitch for refinements”, I really meant it. I’m fully open to the possibility that it might never be a good idea to use them. I think it’s unlikely, but it’s certainly possible.
And to be honest, it doesn’t really bother me either way!
What I do care about, though, is that we might start to accept and adopt opinions like “that feature is bad”, or “this sucks”, without ever pausing to question them or explore the feature ourselves.
Sharing opinions is good. Nobody has the time to research everything; that would be unrealistic, and one of the benefits of being in a community is that we can draw on each other’s experiences. We can use our collective experience to learn and improve. This is definitely a good thing.
But if we just accept opinions as facts, without even asking “why”… I think this is more dangerous. If nobody ever questioned opinions presented as fact, then we’d still believe the world was flat!
It’s only by questioning opinions that we make new discoveries, and that we learn for ourselves, and that — together — we make progress as a community.
The “sucks”/“awesome” binary can be an easy and tempting shorthand, and it’s even fun to use – but it’s an illusion. Nothing is ever that clear cut.
There’s a great quote by a British journalist and doctor called Ben Goldacre, that he uses any time someone tries to present a situation as starkly either good or bad:
“I think you’ll find it’s a bit more complicated than that.”
This is how I feel whenever anyone tells me something “sucks”, or is “awesome”. It might suck for you, but unless you can explain to me why it sucks, then how can I decide how your experience might apply to mine?
One person’s “suck” can easily be another person’s “awesome”, and they are not mutually exclusive. It’s up to us to listen and read critically, and then explore for ourselves what we think.
And I think this is particularly true when it comes to software development.
Explore for yourselves
If we hand most, if not all, of the responsibility for that exploration to the relatively small number of people who talk at conferences, or have popular blogs, or who tweet a lot, or who maintain these very popular projects and frameworks, then we are relying on a very limited perspective, compared to the enormous size of the Ruby community.
I think we have a responsibility not only to ourselves, but also to each other, to our community, not to use Ruby only in the ways that are either implicitly or explicitly promoted to us, but to explore the fringes, and wrestle with new and experimental features and techniques, so that as many different perspectives as possible inform the question of “is this good or not”.
If you’ll forgive the pun, there are no constants in programming – the opinions that Rails enshrines, even for great benefit, will change, and even the principles of O-O design are only principles, not immutable laws that should be blindly followed for the rest of time. There will be other ways of doing things. Change is inevitable.
So we’re at the end now. I might not have been able to tell you precisely why so few people seem to be using refinements, but I do have one small request.
Please – make a little time to explore Ruby. Maybe you’ll discover something simple, or maybe something wonderful. And if you do, I hope you’ll share it with everyone.
Certainly, the QR code is convenient, but it’s also opaque, and if you just scan it and move on with the setup, you’ll never see it again.
This bit me when I had my phone replaced earlier this year, only to find that my authenticator app hadn’t stored any backup of the credentials it needed to generate tokens1. I tried to log in to one of my newly-secured 2FA services (Gandi, in this case), only to find myself unable to generate the security code, and, with no way to log in, no way to re-add the credentials to the authenticator app on the phone!
In this specific case I managed to get back in by calling the service and verifying my identity over the phone, but it’s a hassle that I’d rather not repeat.
So, to avoid this, here’s what I do now.
Get the secret key
Instead of scanning the QR code, ask for the secret key:
Take a note of this somewhere (I store it in 1Password, for example). Now if you ever need to somehow re-add this service to your authenticator application, you can use this code.
Once you have the code, you’re free to switch back to the QR code (which contains exactly the same information) for a more convenient way of getting that data into your app.
2FA using the command-line
One secondary benefit of having the code available is that you don’t need to pull out your phone to generate an authentication code. Using a program called oathtool, you can generate codes in the Terminal:
$ oathtool --base32 --totp "YOURCODE..."
You can even get the code ready to paste straight into the web form:
$ oathtool --base32 --totp "YOURCODE..." | pbcopy
Now you can simply switch back to the webpage and paste the code using a single keystroke. Boom.
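If you’re curious what oathtool is doing under the hood, TOTP is simple enough to sketch in a few lines of Ruby. This is purely illustrative (the method names are my own, and oathtool remains the convenient option), but it shows that the Base32 secret really is everything an authenticator app stores:

```ruby
require "openssl"

# An illustrative TOTP (RFC 6238) sketch. The helper names here are
# mine, not from any library; don't use this in place of a real tool.
BASE32_ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"

def base32_decode(encoded)
  bits = encoded.upcase.delete("=").each_char.map do |char|
    BASE32_ALPHABET.index(char).to_s(2).rjust(5, "0")
  end.join
  bits = bits[0, bits.size - bits.size % 8] # drop leftover padding bits
  [bits].pack("B*")
end

def totp(secret, time: Time.now.to_i, period: 30, digits: 6)
  counter = [time / period].pack("Q>")  # 64-bit big-endian time step
  hmac = OpenSSL::HMAC.digest("SHA1", base32_decode(secret), counter)
  offset = hmac.bytes.last & 0x0f       # dynamic truncation (RFC 4226)
  code = hmac[offset, 4].unpack1("N") & 0x7fffffff
  (code % 10**digits).to_s.rjust(digits, "0")
end

# RFC 6238's test secret ("12345678901234567890" in Base32) at time 59:
totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", time: 59) # => "287082"
```

With your own secret key, `totp(secret)` should produce the same six digits your phone shows, for the current 30-second window.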
Sharing 2FA with colleagues
Another benefit of storing the code is that you can give other people access to the same credentials without everyone needing to be present when the QR code is on the screen (which might not even be possible if they are remote).
Instead, you just need to (securely) share the code with them, and they can add it to their authenticator app and start accessing the 2FA-secured service. Admittedly, this is more cumbersome than scanning a convenient QR code.
This may have been because I didn’t encrypt the backup for my phone in iTunes, which means (I believe) that no passwords will be saved, but it could also be a legitimate security measure by the app itself. ↩
If you haven’t seen the video, then I’d still strongly encourage you to read the blog post. While I can now see the inspiration for wanting to discuss these ideas1, the post really does stand on its own.
I’m not going to re-explain it here, so yes, really, go read it now.
What I found really interesting here was the idea of building new enumerators by re-combining existing enumerators. I’ll use a different example, one that is perhaps a bit too simplistic (there are more concise ways of doing this in Ruby), but hopefully it will illustrate the point.
Let’s imagine you have an Enumerator which enumerates the numbers from 1 up to 10:
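The original listing for that enumerator doesn’t appear above, but a minimal version might look like this (a reconstruction of mine, not Tom’s exact code):

```ruby
# A hypothetical reconstruction of the `numbers` enumerator from 1 to 10
numbers = Enumerator.new do |yielder|
  1.upto(10) { |number| yielder.yield number }
end

numbers.to_a # => [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```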
You can now use that to do all sorts of enumerable things like mapping, selecting, injecting and so on. But you can also build new enumerables using it. Say, for example, we now only want to iterate over the odd numbers between 1 and 10.
We can build a new Enumerator that re-uses our existing one:
> odd_numbers = Enumerator.new do |yielder|
    numbers.each do |number|
      yielder.yield number if number.odd?
    end
  end
=> #<Enumerator: #<Enumerator::Generator:0x007fc0b38de6b0>:each>
So, that’s quite neat (albeit somewhat convoluted compared to 1.upto(10).select(&:odd?)). To extend this further, let’s imagine that I hate the lucky number 7, so I also don’t want that to be included. In fact, somewhat perversely, I want to stick it right in the face of superstition by replacing 7 with the unluckiest number, 13.
Yes, I know this is weird, but bear with me. If you have read Tom’s post (go read it), you’ll already know that this can also be achieved with a new enumerator:
> odd_numbers_that_arent_lucky = Enumerator.new do |yielder|
    odd_numbers.each do |odd_number|
      if odd_number == 7
        yielder.yield 13
      else
        yielder.yield odd_number
      end
    end
  end
=> #<Enumerator: #<Enumerator::Generator:0x007fc0b38de6b0>:each>
> 6.times { puts odd_numbers_that_arent_lucky.next }
1
3
5
13
9
StopIteration: iteration reached an end
In Tom’s post he shows how this works, and how you can further compose enumerators to produce new enumerations with specific elements inserted at specific points, or elements removed, or even transformed, and so on.
A hidden history of enumerable transformations
What I find really interesting here is that somewhere in our odd_numbers enumerator, all the numbers still exist. We haven’t actually thrown anything away permanently; the numbers we don’t want just don’t appear while we are enumerating.
The enumerator odd_numbers_that_arent_lucky still contains (in a sense) all of the numbers between 1 and 10, and so in the tree composition example in Tom’s post, all the trees he creates with new nodes, or with nodes removed, still contain (in a sense) all those nodes.
It’s almost as if the history of the tree’s structure is encoded within the nesting of Enumerator instances, or as if those blocks passed to Enumerator.new act as a runnable description of the transformations to get from the original tree to the tree we have now, invoked each time any new tree’s children are enumerated over.
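You can see this re-running behaviour directly with a small example of my own (not Tom’s): a counter in the innermost block records how many times it executes, and each fresh enumeration of the derived enumerator runs the whole chain again.

```ruby
runs = 0

# The original enumerator; `runs` counts how often its block executes
numbers = Enumerator.new do |yielder|
  runs += 1
  1.upto(10) { |number| yielder.yield number }
end

# A derived enumerator composed on top of the first
odd_numbers = Enumerator.new do |yielder|
  numbers.each { |number| yielder.yield number if number.odd? }
end

odd_numbers.to_a # => [1, 3, 5, 7, 9]
odd_numbers.to_a # enumerating again re-invokes the original block...
runs             # => 2
```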
I think that’s pretty interesting.
Notes on ‘Catamorphisms’
In the section on Catamorphisms (go read it now!), Tom goes on to show that recognising similarities in some methods points at a further abstraction that can be made – the fold – which opens up new possibilities when working with different kinds of structures.
What’s interesting to me here isn’t anything about the code, but about the ability to recognise patterns and then exploit them. I am very jealous of Tom, because he’s not only very good at doing this, but also very good at explaining the ideas to others.
Academic vs pragmatic programming
This touches on the tension between the ‘academic’ and ‘pragmatic’ nature of working with software. This is something that comes up time and time again in our little sphere:
Now I’m not going to argue that anyone working in software development should have a degree in Computer Science. I’m pretty sympathetic with the idea that many “Computer Science” degrees don’t actually bear much direct resemblance to the kinds of work that most software developers do2.
Ways to think
What I think university study provides, more than anything else, is exposure and training in ways to think that aren’t obvious or immediately accessible via our direct experience of the world. Many areas of study provide this, including those outside of what you might consider “science”. Learning a language can be learning a new way to think. Learning to interpret art, or poems, or history is learning a new way to think too.
Learning and internalising those ways to think give perspectives on problems that can yield insights and new approaches, and I propose that that, more than any other thing, is the hallmark of a good software developer.
Going back to the blog post which, as far as I know, sparked the tweet storm about “programming and maths”, I’d like to highlight this section:
At most academic CS schools, the explicit intent is that students learn programming as a byproduct of learning CS. Programming itself is seen as rather pedestrian, a sort of exercise left to the reader.
For actual developer jobs, by contrast, the two main skills you need these days are programming and communication. So while CS still does have strong ties to math, the ties between CS and programming are more tenuous. You might be able to say that math skills are required for computer science success, but you can’t necessarily say that they’re required for developer success.
What a good computer science (or maths, or any other logic-focussed) education should teach you are ways to think that are specific to computation, algorithms and data manipulation, which then:

- provide the perspective to recognise patterns in problems and programs that are not obvious, or even easily intuited, and might otherwise be missed;
- provide experience applying techniques to formulate solutions to those problems, and refactorings of those programs.
Plus, it’s fun to achieve that kind of insight into a problem. It’s the “a-ha!” moment that flips confusion and doubt into satisfaction and certainty. And these insights are also interesting in and of themselves, in the very same way that, say, study of art history or Shakespeare can be.
So, to be crystal clear, I’m not saying that you need this perspective to be a great programmer. I’m really not. You can build great software that both delights users and works elegantly underneath without any formal training. That is definitely true.
Back to that quote:
the ties between CS and programming are more tenuous … you can’t necessarily say that they’re required for developer success.
All I’m saying is this: the insights and perspectives gained by studying computer science are both useful and interesting. They can help you recognise existing, well-understood problems, and apply robust, well-understood and powerful solutions.
That’s the relevance of computer science to the work we do every day, and it would be a shame to forget that.
In the last 15 minutes or so of the video, the approach Tom uses to add a “child node” to a tree is interesting but there’s not a huge amount of time to explore some of the subtle benefits of that approach ↩
Which is, let’s be honest, a lot of “Get a record out of a database with an ORM, turn it into some strings, save it back into the database”. ↩
I’ve been toying around with Docker for a while now, and I really like it.
The reason why I became interested, at first, was because I wanted to move the Printer server and processes to a different VPS which was already running some other software (including this blog), but I was a bit nervous about installing all of the dependencies.
What if something needed an upgraded version of LibXML, but upgrading for one application ended up breaking a different one? What if I got myself in a tangle with the different versions of Ruby that different applications expect?
If you’re anything like me, servers tend to be supremely magical things; I know enough of the right incantations to generally make the right things appear in the right places at the right time, but it’s so easy to utter the wrong spell and completely mess things up1.
Docker provides an elegant solution for these kinds of worries, because it allows you to isolate applications from each other, including as many of their dependencies as you like. Creating images with different versions of LibXML, or Ruby, or even PostgreSQL is almost trivially easy, and you can be confident that running any combination will not cause unexpected crashes and hard-to-trace software version issues.
However, while Docker is simple in principle, it’s not trivial to actually deploy with it, in the pragmatic sense.
What I mean is getting to a point where deploying a new application to your server is as simple (or close enough to it) as platforms like Heroku make it.
Now, to be clear, I don’t specifically mean using git push to deploy; what I mean is all the orchestration that needs to happen in order to move the right software image into the right place, stop existing containers, start updated containers and make sure that nothing explodes as you do that.
But, you might say, there are packages that already help with this! And you’re right. Here are a few:
- Dokku, the original minimal Heroku clone for Docker
- Orchard, a managed host for your Docker containers
- Flynn, a larger platform for running and scaling apps across multiple machines

…and many more. I don’t know if you’ve heard, but Docker is, like, everywhere right now.
So why not use one of those?
That’s a good question
Well, a couple of reasons. I think Orchard looks great, but I like using my own servers for software when I can. That’s just my personal preference, but there it stands, so tools like Orchard (or Octohost and so on) are not going to help me at the moment.
Dokku is good, but I’d like to have a little more control of how applications are deployed (as I said above, I don’t really care about git push to deploy, and even in Heroku it can lead to odd issues, particularly with migrations).
Flynn isn’t really for me, or at least I don’t think it is. It’s for dev-ops running apps on multiple machines, balancing loads and migrating datacenters; I’m running some fun little apps on my personal VPS. I’m not interested in using Docker to deploy across multiple, dynamically scaling nodes; I just want to take advantage of Docker’s isolation on my own, existing, all-by-its-lonesome VPS.
But, really more than anything, I wanted to understand what was happening when I deploy a new application, and be confident that it worked, so that I can more easily use (and debug if required) this new moving piece in my software stack.
So I’ve done some exploring
I took a bit of time out from building Harmonia to play around more with Docker, and I’d like to write about what I’ve found out, partly to help me make it concrete, and partly because I’m hoping that there are other people out there like me, who want some of the benefits that Docker can bring without necessarily having to learn so much about using it to its full potential in an ‘enterprise’ setting, and without having to abandon running their own VPS.
There ought to be a simple, easy path to getting up and running productively with Docker while still understanding everything that’s going on. I’d like to find and document it, for everyone’s benefit.
If that sounds like the kind of thing you’re interested in, and would like reading about, please let me know. If I know there are enough people interested in the idea, then I’ll get to work.
It was really interesting to read the news about the potential cancellation of Wicked Good Ruby, a Ruby conference that was due to run for a second time in Boston this year:
Rather than half-ass a conference we’ve decided to cancel it. All tickets will be fully reimbursed. I’ve already reached out to those that have purchased with how that will be done.
There’s lots of interesting stuff in this thread, but one point that jumped out to me was the financial risk involved:
Zero sponsors. […] I had 2 companies inquire about sponsorship. One asked to sponsor but never paid the sponsorship invoice after 82 days.
Last year we lost $15k. In order to made it worth our effort this year we needed to make a profit. The conference we had in mind required a $50k sponsorship budget with an overall budget of close to $100k. (last year’s conference cost about $125k) Consider to date, after 6 months, we have received $0 in sponsorship the financial risk was too high.
Since announcing this a few hours ago I’ve been contacted by 3 other regional conference organizers. They are all having similar issues this year. Sponsorship is incredibly difficult to come by […] I didn’t get the sense they were going to bail but I think this is a larger issue than just Boston.
With costs in the tens or even hundreds of thousands of dollars, it’s really no wonder that organisers might have second thoughts.
But does running a conference really need to come with such an enormous financial risk? With Ruby Manor, our biggest budget was less than £2,000 (around $3,000). Here’s why:
- We don’t use expensive ‘conference’ venues…
- … so we aren’t locked into their expensive conference catering.
- We run a single track for a single day, so we only need one main room.
- We meticulously pare away other expenses like swag, t-shirts, catering and so on, where it’s clear that attendees are perfectly capable of handling those themselves.
Now, it could be that there are a few factors unique to Ruby Manor that make it difficult, or even impossible, for other conferences to follow the same model. For instance, holding the event in the center of a big city like London means there’s already a wide range of lunch options for attendees, right outside the venue.
Another problem could be the availability of university and community venues as an alternative to conference centres. I really don’t know if it’s possible or not to rent spaces like this in other cities or countries. A quick look at my local university indicates that it’s totally feasible to rent a 330-seat auditorium for less than $1,000, and even use their coffee/tea catering for a total of less than $3,0001, all-in.
I would be genuinely fascinated for other conferences to start publishing their costing breakdown. LessConf have already done this, and even though I might not have chosen to spend quite so much money on breakfasts and surprises, I genuinely do applaud their transparency for the benefit of the community.
I think I’ve come up with a plan: reduce the conference from 2 nights to 1 night. Cut out many of the thrills that I was planning. This effectively would reduce our operational costs from ~$100k to around ~$50k. This would also allow us to reduce the ticket prices (I would reimburse current tickets for the difference).
I genuinely wish the organisers the best of luck. It’s a tough gig, running a conference. That said, $50,000 is still an enormous amount of money, and I cannot help but feel that it’s still far higher than it needs to be.
Every hour you spend as a conference organiser worrying about sponsorships or ticket sales or other financial issues, is an hour that could be spent working on making the content and the community aspects as good as they can be.
Let’s not forget: the only really important part of a conference is getting a diverse group of people with shared interests into the same space for a day or two. Everything else is just decoration. A lot of the time, even the presentations are just decoration. It’s getting the community together that’s important.
I realise that many people expect, consciously or otherwise, some or all of the peripheral bells and whistles that surround the core conference experience. For some attendees, going to a conference might even be akin to a ‘vacation’ of sorts. Perhaps a conference without $15,000 parties feels like less of an event… less of an extravaganza.
But consider this: given the choice between a glitzy, dazzling extravaganza, and a solid conference that an organiser can run without paralysing fear of financial ruin, I know which I would choose, and I know which ultimately benefits the community more.
To be clear, they require that you cannot make a profit on events they host, but from what I can tell, most conference organisers don’t wish to run their events for profit anyway. ↩
This irregularly-kept blog thing is getting so irregular it’s almost regularly irregular. Lest you fool yourself into presuming any ability to predict when I might write here (i.e. “never”), let me thwart your erroneous notion by serving up this rambling missive, unexpected and with neither warning nor warrant!
Take that, predictability! Chalk another one up for chaos.
I live in Austin now, or at least, for the moment
In summer last year, I moved from London to Austin. My other half had already been living in Austin for two years, and I’d been flying back and forth every other month or so, but 24 months of that is quite enough, and so I bit the bullet and gave up my London life for Stetsons and spurs. It’s been quite an adventure so far, although I do miss a few people back in the UK.
We don’t know how long we’ll stay here, but for the moment I’m enjoying the weather immensely. You might think it’s a balmy summer in London when temperatures regularly reach 20°C, but during Austin’s summer the temperature never drops below 20°C, even at night. Can you even conceive of that?
Farewell Go Free Range, Howdy Exciting
Leaving the UK also marked the beginning of the end of the Free Range experiment for me. Ever since I started kicking the ideas for Free Range around in late 2008, I’ve always considered it an experiment. I wish I could say that it had been a complete success; it was very successful in a number of important ways, but if you’re going to pour a lot of energy into trying to make something for yourself, it really needs to be moving in the direction you feel is valuable.
I wrote a bit more on the Exciting blog and the Free Range blog. I could say a lot more about this – indeed, I have actually written thousands of words in various notepads and emails – but I’ll save that for another time; my memoirs, maybe. It takes a lot of effort to keep a small boat moving in a consistent direction through uncharted waters.
So: I’ve spun out a lot of the projects I was driving into a new umbrella-thing called Exciting, and that’s the place to look at what I’m working on.
Right, but what are you doing exactly?
Well, I’m still doing the occasional bit of client work, and I have some tinkering projects that I need to write more about, but this year I have mostly been1 working on Harmonia. Although it was built for Free Range, it’s still both a product and a way of working that I believe lots of teams and companies could benefit from. I’ve added a fair bit of new functionality (more calendar and email functionality, webhooks, Trello integration) and fixed a whole bunch of usability issues, but the lion’s share of the work has been in working out how to communicate what Harmonia does and how it could help you.
I really cannot overstate the amount of effort this takes. It’s exhausting. Some of that is doubtless because I am a developer by training, and so I’ve had to learn about design, user experience, copywriting, marketing, and everything in between, as I’ve gone along. It’s very much out of my comfort zone, but it’s simply not possible to succeed without those skills.
You can build the best product in the world, but if nobody knows it exists or can quickly determine whether or not it’s something they want… nobody will ever use it.
The good thing is that the web is the ideal medium for learning all this, because you can do it iteratively. I’ve completely redesigned the Harmonia landing page four times since it became my main focus, and each time I think it’s getting better at showcasing the benefits that it can bring. It still needs more work though; as of writing this I think it’s too wordy and the next revision will pare it down quite a bit.
It’s also a real challenge to keep up this effort while working alone. It’s hard to wrangle a group of people into a coherent effort, but once you succeed then it provides a great support structure to keep motivation up and to share and develop ideas. Working by yourself, you don’t have to deal with anyone else’s whims or quirks, but you also only have yourself to provide motivation and reassurance that what you’re working on is valuable, and that the decisions you’re making are sensible.
What’s more, it can be challenging to stay motivated in a world seemingly filled by super-confident, naturally-outgoing entrepreneur types. I don’t operate with the unshakable confidence that what I’m building is awesome, or that my ideas are worth sharing, or that they even have any particular merit.
Confronted with the boundless crowd of what-seems-typical Type A entrepreneurs that fill the internet, prolifically, with their reckons and newsletters and podcasts and blog posts and courses, I often wonder about the simpler life of just being an employee; of offloading the risk onto someone else in exchange for a steady income and significantly less crippling self-doubt. I’m sure I’m not the only person who feels this way, whose skin crawls as yet another service or blog post or person is cheaply ascribed the accolade “awesome” when really, I mean, really, is it? Is it?
But… there’s still a chance that I might be able to carve out a niche of my own, and maybe build another little company of friends working on small things that we care about, and investing in our future collectively. I still believe that’s a possibility.
Anyway, so that’s what I’m doing, mostly: crushing it2.
Couldn’t resist this turn of phrase; see here if you don’t immediately get it. ↩
I’ve been trying to largely ignore the recent TDD discussion prompted by DHH’s RailsConf 2014 keynote. I think I can understand where he’s coming from, and my only real concern with not sharing his point of view is that it makes it less likely that Rails will be steered in a direction which makes TDD easier. But that’s OK, and if my concern grows then I have opportunities to propose improvements.
I don’t even really mind what he’s said in his latest post about unit testing the Basecamp codebase. There are a lot of Rails applications – including ones I’ve written – where a four-minute test suite would’ve been a huge triumph.
I could make some pithy argument like:
… but let’s be honest, four minutes for a substantial and mature codebase is pretty excellent in the Rails world.
So that is actually pretty cool.
A lot of that speed is no doubt because Basecamp is using fixtures: test data that is loaded once at the start of the test run, and then reset by wrapping each test in a transaction and rolling back before starting the next test.
This can be a benefit because the alternative – assuming that you want to get some data into the database before your test runs – is to insert all the data required for each test, which can potentially involve a large tree of related models. Doing this hundreds or thousands of times will definitely slow your test suite down.
(Note that for the purposes of my point below, I’m deliberately not considering the option of not hitting the database at all. In reality, that’s what I’d do, but let’s just imagine that it wasn’t an option for a second, yeah? OK, great.)
So, fixtures will probably make the whole test suite faster. Sounds good, right?
The problem with fixtures
I feel like this is glossing over the real problem with fixtures: unless you are using independent fixtures for each test, your shared fixtures have coupled your tests together. Since I’m pretty sure that nobody is actually using independent fixtures for every test, I am going to go out on a limb and just state:
Fixtures have coupled your tests together.
This isn’t a new insight. This is pain that I’ve felt acutely in the past, and was my primary motivation for leaving fixtures behind.
Say you use the same ‘user’ fixture between two tests in different parts of your test suite. Modifying that fixture to respond to a change in one test can now potentially cause the other test to fail, if the assumptions either test was making about its test data are no longer true (e.g. the user should not be an admin, the user should only have a short bio, and so on).
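To make that concrete, here’s a sketch with invented names and data, using a plain hash to stand in for a shared Rails fixture, showing how one shared record couples two otherwise-unrelated tests:

```ruby
# A plain hash standing in for a shared 'users' fixture (hypothetical data)
FIXTURES = {
  alice: { name: "Alice", admin: false, bio: "Short bio" }
}

# From the admin tests: assumes alice is not an admin
def non_admins_cannot_see_dashboard
  !FIXTURES[:alice][:admin]
end

# From the profile tests, far away in the suite: assumes a short bio
def short_bios_render_on_one_line
  FIXTURES[:alice][:bio].length < 40
end

non_admins_cannot_see_dashboard # => true
short_bios_render_on_one_line   # => true

# Make alice an admin with a long bio to satisfy some new test
# elsewhere, and both of these unrelated tests start failing.
```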
If you use fixtures and share them between tests, you’re putting the burden of managing this coupling on yourself or the rest of your development team.
Going back to DHH’s post:
Why on earth would you run your entire test harness for every single line change in a particular model? If you have so little confidence in the locality of your changes, the tests are indeed telling you that the system has overly high coupling.
What fixtures do is introduce overly high coupling in the test suite itself. If you make any change to your fixtures, I do not think it’s possible to be confident that you haven’t broken a single test unless you run the whole suite again.
Fixtures separate the reason test data is like it is from the tests themselves, rather than keeping them close together.
I might be wrong
Now perhaps I have only been exposed to problematic fixtures, and there are techniques for reliably avoiding this coupling or at least managing it better. If that’s the case, then I’d really love to hear more about them.
Or, perhaps the pain of managing fixture coupling is objectively less than the pain of changing the way you write software to both avoid fixtures AND avoid slowing down the test suite by inserting data into the database thousands of times?
I’ve been playing with Raygun.io over the last day or so. It’s a tool, like Honeybadger, Airbrake or Errbit, for managing exceptions from other web or mobile applications. It will email you when exceptions occur, collapse duplicate errors together, and allows a team to comment and resolve exceptions from their nicely designed web interface.
I’ve come to the conclusion that integrating with something like this is basically a minimum requirement for software these days. Previously we might’ve suggested an ‘iterative’ approach of emailing exceptions directly from the application before later using one of these services, but I no longer see the benefit in postponing a better solution when it’s far simpler to work with one of these services than it is to set up email reliably.
It seems pretty trivial to integrate with a Rails application – just run a generator to create the initializer complete with API key. However, I had to do a bit more work to hook it into a Rack application (which is what Vanilla is). In my config.ru:
# Setup Raygun and add it to the middleware
require 'raygun'

Raygun.setup do |config|
  config.api_key = ENV["RAYGUN_API_KEY"]
end

use Raygun::RackExceptionInterceptor

# Raygun will re-raise the exception, so catch it with something else
use Rack::ShowExceptions
The documentation for this is available on the Raygun.io site, but at the moment the documentation link on their site points to a gem which, more confusingly, isn’t actually the gem you will have installed. Reading the documentation in the gem README also reveals how to integrate with Resque, to catch exceptions in background jobs.
One thing that’s always worth checking when integrating with exception reporting services is whether or not they support SSL, and thankfully it looks like that’s the default (and indeed only option) here.
The Raygun server also sports a few plugins (slightly hidden under ‘Application Settings’) for logging exception data to HipChat, Campfire and the like. I’d like to see a generic webhook plugin supported, so that I could integrate exception notification into other tools that I write; thankfully that’s the number one feature request at the moment.
My other request would be that the gem should try not to depend on activesupport if possible. I realise that for usage within Rails this is a non-issue, but for non-Rails applications, loading ActiveSupport can pull in a number of other gems that bloat the running Ruby process. As far as I can tell, the only methods from ActiveSupport that are used are Hash#blank? (which is effectively the same as Hash#empty?) and String#starts_with? (which is just an alias for Ruby’s built-in String#start_with?). Pull request submitted.
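For reference, here’s a quick sketch of the plain-Ruby equivalents of those two ActiveSupport methods:

```ruby
# Hash#blank? from ActiveSupport behaves like Hash#empty? for hashes:
{}.empty?                     # => true
{ key: "value" }.empty?       # => false

# String#starts_with? is just ActiveSupport's alias for the built-in
# String#start_with?, which accepts one or more candidate prefixes:
"raygun".start_with?("ray")   # => true
"raygun".start_with?("air")   # => false
```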
I was lucky enough to be gifted a Twine by my colleagues at Go Free Range last weekend, and I took the opportunity to put together a very simple service that demonstrates how it can be used.
If you haven’t heard of Twine, it’s a hardware and software platform for connecting simple sensors to the internet, and it makes it really very easy to do some fun things bridging the physical and online worlds.
On the hardware side, there’s a simple 2.7” square that acts as a bridge between your home WiFi network and a set of sensors.
Some of the sensors are built in to the square itself: temperature, orientation and vibration can be detected without plugging anything else in. You can also get external sensors, which connect to the square via a simple 3.5mm jack cable. If you buy the full sensor package, you’ll get a magnetic switch sensor, a water sensor and a ‘breakout board’ that lets you connect any other circuit (like a doorbell, photoresistor, button and so on) to the Twine.
Connecting the Twine to a WiFi network is elegant and features a lovely twist: you flip the Twine on its “back”, like a turtle, and it makes its own WiFi network available.
Connect to this from your computer, and you can then give the Twine the necessary credentials to log on to your home network, and once you’re done, flip it back onto its “belly” again and it will be ready to use. I really loved this simple, physical interaction.
On the software side, Twine runs an online service that lets you define and send ‘rules’ to your connected Twine units. These rules take the form of when <X> then <Y>, in a similar style to If This Then That. So, with a rule like when <vibration stops> then <send an email to my phone>, you could pop the Twine on top of your washing machine and be alerted when it had finished the final spin cycle.
As well as emailing, the Twine can flash its LED, tweet, send you an SMS, call you, or ping a URL via GET or POST requests, including some of the sensor information.
Supermechanical, the company that launched Twine about a year and a half ago via Kickstarter, maintains a great blog with lots of example ideas of things that can be done.
All technology tends towards cat…
Just as the internet has found its singular purpose as the most efficient conduit for the sharing of cat pictures, so will the Internet of Things realise its destiny by becoming entirely focussed on physical cats, in all their unpredictable, scampish glory.
It’s neat having something in your house tweet or send you an email, but I like making software, so I decided to explore building a simple server that the Twine could interact with, and thus “Pinky Status” was born:
What follows is a quick explanation of how easy it was.
I hooked up the magnetic switch sensor to the Twine, and then used masking tape to secure the sensor to the side of the catflap, and then the magnet to the flap itself.
That way, when “Pinky” (that’s our cat) opens the flap, the magnet moves away from the switch sensor and it enters the ‘open’ state. It’s not pretty, but it works.
Next, we need a simple rule so that the Twine knows what to do when the sensor changes:
When the sensor changes to open, two things happen. Firstly, I get an email, which I really only use for debugging and I should probably turn it off, except that it’s pretty fun to have that subject line appear on my phone when I’m out of the house.
Secondly and far more usefully, the Twine pings the URL of a very, very simple server that I wrote.
The key part is at the very bottom – when the Twine makes its POST request, the server simply creates another Event record with an alternating status (‘in’ or ‘out’), and then some logic in the view (not shown) can tell us whether the cat is in or out of the house.
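A minimal sketch of that alternating-status logic in plain Ruby might look like the following (the EventLog class and the assumption that the cat starts indoors are mine, not taken from the original server):

```ruby
# Sketch of the server's core logic: each POST from the Twine records
# an event whose status alternates between 'out' and 'in'.
class EventLog
  def initialize
    @events = []
  end

  # Called once per POST from the Twine. The first flap event is assumed
  # to be the cat going out; each subsequent event flips the status.
  def record_event
    status = (@events.last == "out") ? "in" : "out"
    @events << status
    status
  end

  # The view derives the cat's current location from the latest event;
  # with no events yet, we assume she's indoors.
  def current_location
    @events.last || "in"
  end
end
```

Duplicate events (the head-through-the-flap problem mentioned below) could be handled by ignoring a POST that arrives within a few seconds of the previous one.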
In more recent versions of the code I’ve moved to Rails because it’s more familiar, but also slightly easier to do things like defend against duplicate events (normally when the cat changes her mind about going outside when her head is already through the flap) and other peripheral things.
But don’t be put off by the mention of Rails – it really was as trivial as the short script above, showing some novel information derived from the simple sensor attached to the Twine. Deploying a server is also very easy thanks to tools like Heroku.
A few hours’ idle work, and the secret life of our cat is now a little less mysterious than it was. I really enjoyed how quick and painless the Twine was to set up, and I can highly recommend it if you’re perhaps not comfortable enough to dive into the deep sea of Arduinos, soldering and programming in C, but would still like to paddle in the shallower waters of the “internet of things”.