Why is nobody using Refinements?

Written on November 15 2015 at 18:11 ∷ permalink

This is a transcript of a presentation I gave at RubyConf 2015; the actual slides are here; the video is from confreaks.

Chances are, you’ve heard of refinements, but never used them.

The Refinements feature has existed as a patch, and subsequently as an official part of Ruby, for around five years, and yet to most of us, it only exists in the background, surrounded by a haze of opinions about how refinements work, how to use them and, indeed, whether or not using them is a good idea at all.

I’d like to spend a little time taking a look at what Refinements are, how to use them and what they can do.


But don’t get me wrong - this is not a sales pitch for refinements! I am not going to try to convince you that they will solve all your problems and that you should all be using them!

The title of this presentation is “Why is nobody using refinements?” and that’s a genuine question. I don’t have all the answers!

My only goal is that, by the end of this, both you and I will have a better understanding of what they actually are, what they can actually do, when they might be useful and why it might be that they’ve lingered in the background for so long.

What are refinements?

Simply put, refinements are a mechanism to change the behaviour of an object in a limited and controlled way.

By change, I mean add new methods, or redefine existing methods on an object.

By limited and controlled, I mean that adding or changing those methods does not have an impact on other parts of our software which might interact with that object.

Let’s look at a very simple example:

module Shouting
  refine String do
    def shout
      self.upcase + "!"
    end
  end
end

Refinements are defined inside a module, using the refine method.

This method accepts a class – String, in this case – and a block, which contains all the methods to add to that class when the refinement is used. You can refine as many classes as you want within the module, and define as many methods as you’d like within each block.

To use a refinement, we call the using method with the name of the enclosing module.

class Thing
  using Shouting

  "hello".shout # => "HELLO!"
end

When we do this, all instances of that class, within the same scope as our using call, will have the refined methods available.

Another way of saying that is that the refinement has been “activated” within this scope.

However, any strings outside that scope are left unaffected:

"hello".shout # => NoMethodError: undefined method 'shout' for "Oh no":String

Changing existing methods

Refinements can also change methods that already exist.

module TexasTime
  refine Time do
    def to_s
      return "Afternoon, y’all!" if hour > 12
      super
    end
  end
end

When the refinement is active, it is used instead of the existing method (although the original is still available via the super keyword, which is very useful).

require "time"

class RubyConf
  using TexasTime

  Time.parse("12:00").to_s # => "2015-11-16 12:00:00"
  Time.parse("14:15").to_s # => "Afternoon, y’all!"
end

Anywhere the refinement isn’t active, the original method gets called, exactly as before.

Time.parse("14:15").to_s # => "2015-11-16 14:15:00"

And that’s really all there is to refinements – two new methods, refine, and using.

However, there are some quirks, and if we really want to properly understand refinements, we need to explore them. And the best way of approaching this, is by considering a few more simple examples.

Using using

Now we know that we can call the refine method within a module to create refinements, and that’s all relatively straightforward, but it turns out that where and when you call the using method has a profound effect on how the refinement behaves with our code.

We’ve seen that invoking using inside a class definition works. We activate the refinement, and we can call refined methods on a String instance:

class Thing
  using Shouting
  "hello".shout # => "HELLO!"
end

But we can also move the call to using somewhere outside the class, and still use the refined method as before.

using Shouting
class Thing
  "hello".shout # => "HELLO!"
end

In the examples so far we’ve just been calling the refined method directly, but we can also use them within methods defined in the class.

class Thing
  using Shouting
  def greet
    "hello".shout
  end
end

Thing.new.greet # => "HELLO!"

Again, even if the call to using is outside of the class, our refined behaviour still works.

using Shouting
class Thing
  def greet
    "hello".shout
  end
end

Thing.new.greet # => "HELLO!"

But this doesn’t work:

class Thing
  using Shouting
  def greet
    "hello"
  end
end

Thing.new.greet.shout # => NoMethodError

We can’t call shout on the String returned by our method, even though that String object was created within a class where the refinement was activated.

And here’s another broken example:

class Thing
  using Shouting
end

class Thing
  "hello".shout # => NoMethodError
end

We’ve activated the refinement inside our class, but when we reopen the class and try to use the refinement, we get NoMethodError again.

If we nest a class within another where the refinement is active, it seems to work:

class Thing
  using Shouting

  class OtherThing
    "hello".shout # => "HELLO!"
  end
end

But it doesn’t work in subclasses:

class Thing
  using Shouting
end

class OtherThing < Thing
  "hello".shout # => NoMethodError
end

Unless they are also nested classes:

class Thing
  using Shouting

  class OtherThing < Thing
    "hello".shout # => "HELLO!"
  end
end

And even though nested classes work, if you try to define a nested class using the “double-colon” or “compact form”, our refinements have disappeared again:

class Thing
  using Shouting
end

class Thing::OtherThing
  "hello".shout # => NoMethodError
end

Even blocks might seem to act strangely:

class Thing
  using Shouting
  def run
    yield "hello"
  end
end

Thing.new.run do |string|
  string.shout
end # => NoMethodError

Our class uses the refinement, but when we pass a block to a method in that class, suddenly it breaks.

So what’s going on here? For many of us this is quite counter-intuitive; after all, we’re used to being able to re-open classes, or share behaviour between super- and sub-classes, but it seems like that only works intermittently with refinements?

It turns out that the key to understanding how and when refinements are available relies on another aspect of how Ruby works that you may have already heard of, or even encountered directly.

The key to understanding refinements is understanding about lexical scope.

Lexical scope

To understand about lexical scope, we need to learn about some of the things that happen when Ruby parses our program.

Let’s look at the first example again:
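class Thing
  using Shouting

  "hello".shout # => "HELLO!"
end

"hello".shout # => NoMethodError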

As Ruby parses this program, it is constantly tracking a handful of things to understand the meaning of the program. Exploring these in detail would take a whole presentation in itself, but for the moment, the one we are interested in is called the “current lexical scope”.

Let’s “play computer” and follow Ruby as it processes our simple program here.

The top-level scope

When Ruby starts parsing the file, it creates a new structure in memory – a new “lexical scope” – which holds various bits of information that Ruby uses to track what’s happening at that point. We call this the “top-level” lexical scope.

When we encounter a class (or module) definition, as well as creating the class and everything that involves, Ruby also creates a new lexical scope, nested “inside” the current one.

We can call this lexical scope “A”, just to give it an easy label. Visually it makes sense to show these as nested, but behind the scenes this relationship is modelled by each scope linking to its parent. “A”’s parent is the top level scope, and the top level scope has no parent.

As Ruby processes all the code within this class definition, the “current” lexical scope is now A.

When we call using, Ruby stores a reference to the refinement within the current lexical scope. We can also say that within lexical scope “A”, the refinement has been activated.

We can see now that there are no activated refinements in the top-level scope, but our Shouting refinement is activated for lexical scope A.

Next, we can see a call to the method shout on a String instance. The details of method dispatch are complex and interesting in their own right, but one of the things that happens at this point is that Ruby checks to see if there are any activated refinements in the current lexical scope that might affect this method.

In this case, we can see that for current lexical scope “A”, there is an activated refinement for the shout method on Strings, which is exactly what we’re calling.

Ruby then looks up the correct method body within the refinement module, and invokes that instead of any existing method.

And there, we can see that our refinement is working as we hope.

So what about when we try and call the method later? Well, once we leave the class definition, the current lexical scope returns to being the parent, which is the top-level one.

Then we find our second String instance and a method being called on it.

Once again, when Ruby dispatches for the shout method, it checks the current lexical scope – the top-level one – for the presence of any refinements, but in this case, there are none. Ruby behaves as normal, which is to call method_missing, and this will raise an exception by default.

Calling using at the top-level

If we call using Shouting outside of the class, at the top level, our use of the refined method both inside and outside the class works perfectly.

This is because once a refinement is activated, it is activated for the current scope and all nested scopes, so calling using at the top level activates the refinement in the top-level scope, which means it will be activated in any nested scopes, including “A”. And so, our call to the refined method within the class works too.

So this is our first principle of how refinements work:

When we activate a refinement with the using method, that refinement is active in the current and any nested lexical scopes.

However, once we leave that scope, the refinement is no longer activated, and Ruby behaves as it did before.

Lexical scope and re-opening classes

Let’s look at another example from earlier. Here we define a class, and activate the refinement, and later re-open that class and try to use it. We’ve already seen that this doesn’t work; the question is why.

Watching Ruby build its lexical scopes reveals why this is the case. Once again, the first class definition gives us a new, nested lexical scope A. It’s within this scope that we activate the refinement. Once we reach the end of that class definition, we return to the top-level lexical scope.

When we re-open the class, Ruby creates a nested lexical scope as before, but it is distinct from the previous one. Let’s call it B to make that clear.

While the refinement is activated in the first lexical scope, when we re-open the class, we are in a new lexical scope, and one where the refinements are no longer active.

So our second principle is this:

Just because the class is the same, doesn’t mean you’re back in the same lexical scope.

This is also the reason why our example with subclasses didn’t behave as we might’ve expected:
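class Thing
  using Shouting
end

class OtherThing < Thing
  "hello".shout # => NoMethodError
end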

It should be clear now that the fact that we are within a subclass actually has no bearing on whether or not the refinement is activated; it’s entirely determined by lexical scope. Any time Ruby encounters a class or module definition via the class (or module) keywords, it creates a new, fresh, lexical scope, even if that class (or module) has already been defined somewhere else.

This is also the reason why, even when activated at the top-level of a file, refinements only stay activated until the end of that file – because each file is processed using a new top-level lexical scope.
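Sketching that out with two files (the file names here are just for illustration):

# shouting.rb
module Shouting
  refine String do
    def shout
      self.upcase + "!"
    end
  end
end

# a.rb
require_relative "shouting"
using Shouting

"hello".shout # => "HELLO!"

# b.rb
require_relative "shouting"
require_relative "a"

"hello".shout # => NoMethodError – the using call in a.rb doesn’t leak into this file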

So now we have another two principles about how lexical scope and refinements work.

Just as re-opened classes have a different scope, so do subclasses. In fact:

The class hierarchy has nothing to do with the lexical scope hierarchy.

We also now know that every file is processed with a new top-level scope, and so refinements activated in one file are not activated in any other files – unless those other files also explicitly activate the refinement.

Lexical scope and methods

Let’s look at one more of our examples from earlier:
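class Thing
  using Shouting
  def greet
    "hello".shout
  end
end

Thing.new.greet # => "HELLO!"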

Here we are activating a refinement within a class, and defining a method in that class which uses the refinement. Later, we create an instance of the class and call our method.

We can see that even though the method gets invoked from the top level lexical scope – where our refinement is not activated – our refinement still somehow works. So what’s going on here?

When Ruby processes a method definition, it stores with that method a reference to the current lexical scope at the point where the method was defined. So when Ruby processes the greet method definition, it stores a reference to lexical scope A along with it.

When we call the greet method – from anywhere, even a different file – Ruby evaluates it using the lexical scope associated with its definition. So when Ruby evaluates "hello".shout inside our greet method, and tries to dispatch to the shout method, it checks for activated refinements in lexical scope “A”, even if the method was called from an entirely different lexical scope.

We already know that our refinement is active in that scope, and so Ruby can use the method body for “shout” from the refinement.

This gives us our fourth principle:

Methods are evaluated using the lexical scope at their definition, no matter where those methods are actually called from.

Lexical scope and blocks

A very similar process explains why our block example didn’t work. Here’s that example again – a method defined in a class where the refinement is activated yields to a block, but when we call that method with a block that uses the refinement, we get an error.
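class Thing
  using Shouting
  def run
    yield "hello"
  end
end

Thing.new.run do |string|
  string.shout
end # => NoMethodError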

We can quickly see which lexical scopes Ruby has created as it processed this code. As before, we have a nested lexical scope “A”, and the method defined in our class is associated with it.

However, just as methods are associated with the current lexical scope, so are blocks (and procs, lambdas and so on). When we define that block, the current lexical scope is the top level one.

When the run method yields to the block, Ruby evaluates that block using the top-level lexical scope, and so Ruby’s method dispatch algorithm finds no active refinements, and therefore no shout method.

Our final principle

Blocks – and procs, lambdas and so on – are also evaluated using the lexical scope at their definition.

With a bit of experimentation, we can also demonstrate to ourselves that even blocks evaluated using tricks like instance_eval or class_eval retain this link to their original lexical scope, even though the value of self might change.
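For example, something like this (just a sketch – the GREETER constant is mine, purely for illustration):

class Thing
  using Shouting

  # a block written here, where the refinement is active
  GREETER = proc { "hello".shout }
end

# instance_eval changes self, but the block keeps the lexical scope it was written in:
Object.new.instance_eval(&Thing::GREETER) # => "HELLO!"

# whereas a block written out here, where no refinement is active, can’t see it:
Thing.new.instance_eval { "hello".shout } # => NoMethodError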

This link from methods and blocks to a specific lexical scope might seem strange or even confusing right now, but we’ll see soon that it’s precisely because of this that refinements are so safe to use.

But I’ll get to that in a minute. For now, let’s recap what we know:

Lexical scope & principles recap

  • Refinements are controlled using the lexical scope structures already present in Ruby.
  • You get a new lexical scope any time you do any of the following:
    • enter a different file
    • open a class or module definition
    • run code from a string using eval

As I said earlier: you might find the idea of lexical scope surprising, but it’s actually a very useful property for a language; without it, many aspects of Ruby we take for granted would be much harder, if not impossible to produce. Lexical scope is used as part of how Ruby understands references to constants, for example, and also what makes it possible to pass blocks and procs around as “closures”.
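As a tiny illustration of the constants point (the names here are made up):

module Outer
  GREETING = "hi"

  class Inner
    def greet
      GREETING # found by walking the lexical scope chain (Inner, then Outer), not the class hierarchy
    end
  end
end

Outer::Inner.new.greet # => "hi"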

We also now have the five basic principles that will enable us to explain how and why refinements behave the way they do:

  1. Once you call using, refinements are activated within the current, and any nested, lexical scopes
  2. The nested scope hierarchy is entirely distinct from any class hierarchy in your code; subclasses and superclasses have no effect on refinements; only nested lexical scopes do.
  3. Different files get different top-level scopes, so even if we call using at the very top of a file, and activate it for all code in that file, the meaning of code in all other files is unchanged.
  4. Methods are evaluated using the current lexical scope at their point of definition, so we can call methods that make use of refinements internally from anywhere in the rest of our codebase.
  5. Blocks are also evaluated using the lexical scope, and so it’s impossible for refinements activated elsewhere in our code to change the behaviour of blocks — or indeed, any other methods or code — written where that refinement wasn’t present.

Right! So now we know. But why should we even care? What are refinements actually good for? Anything? Nothing?

Let’s try to find out.

Let’s use refinements

Now, another disclaimer: these are just some ideas – some less controversial than others – but hopefully they will help frame what refinements might make easier or more elegant or robust.

The first will not be a surprise, but I think it’s worth discussing anyway.


Monkey-patching is the act of modifying a class or object that we don’t own – that we didn’t write. Because Ruby has open classes, it’s trivial to redefine any method on any object with new or different behaviour.

The danger that monkey-patching brings is that those changes are global – they affect every part of the system as it runs. As a result, it can be very hard to tell which parts of our software will be affected.

If we change the behaviour of an existing method to suit one use, there’s a good chance that some distant part of the codebase – perhaps hidden within Rails or another gem – is going to call that method expecting the original behaviour (or its own monkey-patched behaviour!), and things are going to get messy.

Say I’m writing some code in a gem, and as part of that I want to be able to turn an underscored String into a camelized version. I might re-open the String class and add this simple, innocent-looking method to make it easy to do this transformation.

class String
  def camelize
    split("_").map(&:capitalize).join
  end
end

"ruby_conf_2015".camelize # => "RubyConf2015"

Unfortunately, as soon as anyone tries to use my gem in a Rails application, their test suite is going to go from passing, not to failing but to ENTIRELY CRASHING with a very obscure error:

/app/.bundle/gems/ruby/2.1.0/gems/activesupport-4.2.1/lib/active_support/inflector/methods.rb:261:in `const_get': wrong constant name Admin/adminHelper (NameError)
  from /app/.bundle/gems/ruby/2.1.0/gems/activesupport-4.2.1/lib/active_support/inflector/methods.rb:261:in `block in constantize'
  from /app/.bundle/gems/ruby/2.1.0/gems/activesupport-4.2.1/lib/active_support/inflector/methods.rb:259:in `each'
  from /app/.bundle/gems/ruby/2.1.0/gems/activesupport-4.2.1/lib/active_support/inflector/methods.rb:259:in `inject'
  from /app/.bundle/gems/ruby/2.1.0/gems/activesupport-4.2.1/lib/active_support/inflector/methods.rb:259:in `constantize'
  from /app/.bundle/gems/ruby/2.1.0/gems/activesupport-4.2.1/lib/active_support/core_ext/string/inflections.rb:66:in `constantize'
  from /app/.bundle/gems/ruby/2.1.0/gems/actionpack-4.2.1/lib/abstract_controller/helpers.rb:156:in `block in modules_for_helpers'
  from /app/.bundle/gems/ruby/2.1.0/gems/actionpack-4.2.1/lib/abstract_controller/helpers.rb:144:in `map!'
  from /app/.bundle/gems/ruby/2.1.0/gems/actionpack-4.2.1/lib/abstract_controller/helpers.rb:144:in `modules_for_helpers'
  from /app/.bundle/gems/ruby/2.1.0/gems/actionpack-4.2.1/lib/action_controller/metal/helpers.rb:93:in `modules_for_helpers'

You can see the error at the top there - something to do with constant names or something? Looking at the backtrace I don’t see anything about a camelize method anywhere?

Now we all know what caused the problem, but if someone else had written that code, I very much doubt it would be so obvious. And this is exactly the problem that Yehuda Katz identified with monkey patching in his blog post about refinements almost exactly five years ago.

Monkey-patching vs refinements

Monkey-patching has two fundamental issues:

The first is breaking API expectations. We can see that Rails has some expectation about the behaviour of the camelize method on String, which we obviously broke when we added our own monkey-patch elsewhere.

The second is that monkey patching can make it far harder to understand what might be causing unexpected or strange behaviour in our software.

Refinements in Ruby address both of these issues.

If we change the behaviour of a class using a refinement, we know that it cannot affect parts of the software that we don’t control, because refinements are restricted by lexical scope.

We’ve seen already that refinements activated in one file are not activated in any other file, even when re-opening the same classes. If I wanted to use a version of camelize in my gem, I could define and use it via a refinement, but anywhere that refinement wasn’t specifically activated – which it won’t be anywhere inside of Rails, for example – the original behaviour remains.
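Something like this, say (reusing the same naive camelize from above; the module name is just for illustration):

module GemStringHelpers
  refine String do
    def camelize
      split("_").map(&:capitalize).join
    end
  end
end

# inside my gem’s own files only:
using GemStringHelpers

"ruby_conf_2015".camelize # => "RubyConf2015"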

It’s actually impossible to break existing software like Rails using refinements. There’s no way to influence the lexical scope associated with code without editing that code itself, and so the only way we can “poke” some refinement behaviour into a gem is by finding the source code for that gem and literally inserting text into it.

This is exactly what I meant by limited and controlled at the start.

Refinements also make it easier to understand where unexpected behaviour may be coming from, because they require an explicit call to using somewhere in the same file as the code that uses that behaviour. If there are no using statements in a file, we can be confident – assuming nothing else is doing any monkey-patching – that Ruby will behave as we would normally expect.

This is not to say that it’s impossible to produce convoluted code which is tricky to trace or debug – that will always be possible – but if we use refinements, there will always be a visual clue that a refinement is activated.

Onto my second example.

Managing API changes

Sometimes software we depend on changes its behaviour. APIs change in newer versions of libraries, and in some cases even the language can change.

For example, in Ruby 2, the behaviour of the chars method on Strings changed from returning an enumerator to returning an Array of single-character strings.

# Ruby 1.9
"hello".chars # => #<Enumerator: "hello".each_char>

# Ruby 2.0+
"hello".chars # => ["h", "e", "l", "l", "o"]

Imagine we’re migrating an application from Ruby 1.9 to Ruby 2 (or later), and we discover that some part of our application depends on calling chars on a String and expects an enumerator to be returned.

If some parts of our software rely on the old behaviour, we can use refinements to preserve the original API, without impacting any other code that might have already been adapted to the new API.

Here’s a simple refinement which we could activate for only the code which depends on the Ruby 1.9 behaviour:

module OldChars
  refine String do
    def chars; each_char; end
  end
end

"hello".chars # => ["h", "e", "l", "l", "o"]

using OldChars

"hello".chars # => #<Enumerator: "hello".each_char>

The rest of the system remains unaffected, and any dependencies that expect the Ruby 2 behaviour will continue to work into the future.

My third example is probably familiar to most people.


One of the major strengths of Ruby is that its flexibility can be used to help us write very expressive code, and in particular supporting the creation of DSLs, or “domain specific languages”. These are collections of objects and methods which have been designed to express concepts as closely as possible to the terminology used by non-programmers, and often designed to read more like human language than code.

Adding methods to core classes can often help make DSLs more readable and expressive, and so refinements are a natural candidate for doing this in a way that doesn’t leak those methods into other parts of an application.

RSpec as a DSL

RSpec is a great example of a DSL for testing. Until recently, this would’ve been a typical example of RSpec usage:

describe "Ruby" do
  it "should bring happiness" do
    developer = Developer.new(uses: "ruby")
    developer.should be_happy

One hallmark is the emphasis on writing code that reads fluidly, and we can see that demonstrated in the line developer.should be_happy, which while valid Ruby, reads more like English than code. To enable this, RSpec used monkey-patching to add a should method to all objects.

Recently, RSpec moved away from this DSL, and while I cannot speak for the developers who maintain RSpec, I think it’s fair to say that part of the reason was to avoid the monkey-patching of the Object class.

However, refinements offer a compromise that balances the readability of the original API with the integrity of our objects.

module RSpec
  refine Object do
    def should(expectation)
      expectation.matches?(self) # matchers like `eq` respond to #matches?
    end
  end
end

using RSpec

123.should eq 123 # => true
false.should eq true # => false

It’s easy to add a should method to all objects in your spec files using a refinement, but this method doesn’t leak out into the rest of the codebase.

The compromise is that you must write using RSpec at the top of every file, which I don’t think is a large price to pay. But, you might disagree and we’ll get to that shortly.

RSpec isn’t the only DSL that’s commonly used, and you might not even have thought of it as a DSL – after all, it’s just Ruby. You can also view the routes file in a Rails application as a DSL of sorts, or the query methods ActiveRecord provides. In fact, the Sequel gem actually does, optionally, let you write queries more fluently by using a refinement to add methods and behaviour to strings, symbols and other objects.

DSLs are everywhere, and refinements can help make them even more expressive without resorting to monkey-patching or other brittle techniques.

Onto my last example.

Internal access control

Refinements might not just be useful for safer monkey-patching or implementing DSLs.

We might also be able to harness refinements as a design pattern of sorts, and use them to ensure that certain methods are only callable from specific, potentially-restricted parts of our codebase.

For example, consider a Rails application with a model that has some “dangerous” or “expensive” method.

module UserAdmin
  refine User do
    def purge!
      # ... the dangerous or expensive work, e.g. destroying the user and all their data
    end
  end
end

# in app/controllers/admin_controller.rb
class AdminController < ApplicationController
  using UserAdmin

  def purge_user
    User.find(params[:id]).purge!
  end
end
By using a refinement, the only places we can call this method are where we’ve explicitly activated that refinement.

From everywhere else – normal controllers, views or other classes – even though they might be handling the same object – the very same instance, even – the dangerous or expensive method is guaranteed not to be available there.

I think this is a really interesting use for refinements – as a design pattern rather than just a solution for monkey-patching – and while I know there could be some obvious objections to that suggestion, I’m certainly curious to explore it a bit more before I decide it’s not worthwhile.

So those are some examples of things we might be able to do with refinements. I think they are all potentially very interesting, and potentially useful.

So, finally, to the question I’m curious about. If refinements can do all of these things in such elegant and safe ways, why aren’t we seeing more use of them?

Why is nobody using refinements?

It’s been five years since they appeared, and almost three years since they were officially a part of Ruby. And yet, when I search GitHub, almost none of the results are actual uses of refinements.

In fact, some of the top hits are gems that actually try to “remove” refinements from the language!

You can see in the description: “No one knows what problem they solve or how they work.”! Well, hopefully we at least have some ideas about that now.

Breaking bad

I actually asked another of the speakers at RubyConf2015 — who will remain nameless — what they thought the answer to my question might be, and they said:

“Because they’re bad, aren’t they.”

As if it was a fact.

Now, my initial reaction to this kind of answer is somewhat emotionally charged, but my actual answer was more like:

“Are they? Why do you think that?”

So I don’t find this answer very satisfying. Why are they bad?

I asked them why, and they replied

“Because they’re just another form of monkey patching, right?”

Well – yes, sort of, but also… not really.

And just because they might be related in some way to monkey-patching – does that automatically make them bad, or not worth understanding?

I can’t shake the feeling that this is the same mode of thinking that leads to ideas like “meta-programming is too much magic” or “using single or double quoted strings consistently is a *very important thing*” or that something – anything – you type into a text editor can be described as “awesome” when that word should be reserved exclusively for moments in your life like seeing the Grand Canyon for the first time, and not when you install the latest gem or anything like that.

I am… suspicious… of “awesome”, and so I’m also suspicious of “bad”.


I asked another friend if they had any ideas about why people weren’t using refinements, and they said “because they’re slow”, again, as if it was a fact.

And if that were true, then that would be totally legitimate… but it turns out it’s not:

“TL;DR: Refinements aren’t slow. Or at least they don’t seem to be slower than ‘normal’ methods”

So why aren’t people using refinements? Why do people have these ideas that they are slow, or just plain bad?

Is there any solid basis for those opinions?

As I told you right at the start, I don’t have a neatly packaged answer, and maybe nobody does, but here are my best guesses, based on tangible evidence and my understanding of how refinements actually work.

1. Lack of understanding?

While refinements have been around for almost five years, the refinements you see now are not the same as those that were introduced half a decade ago. Originally, they weren’t strictly lexically scoped, and while this provided some opportunity for more elegant code than what we’ve seen today – think not having to write using at the top of every RSpec file, for example – it also broke the guarantee that refinements cannot affect distant parts of a codebase.

It’s also probably true that lexical scope is not a familiar concept for many Ruby developers. I’m not ashamed to say that even though I’ve been using Ruby for over 13 years, it’s only recently that I really understood what lexical scope is actually about. I think you can probably make a lot of money writing Rails applications without ever really caring about lexical scope, and yet, without understanding it, refinements will always seem like confusing and uncontrollable magic.

The evolution of refinements hasn’t been smooth, and I think that’s why some people might feel like “nobody knows how they work or what problem they solve”. It doesn’t help, for example, that a lot of the blog posts you’ll find when you search for “refinements” are no longer accurate.

Even the official Ruby documentation is actually wrong!

This hasn’t been true since Ruby 2.1, I think, but this is what the documentation says right now. Nudge to the ruby-core team: issue 11681 might fix this.

UPDATE: since giving the presentation, this patch has been merged!

I think some of this … “information rot” can explain a little about why refinements stayed in the background.

There were genuine and valid questions about early implementation and design choices, and I think it’s fair to say that some of the excitement about this new feature of Ruby was dampened as a result. But even with all the outdated blog posts, I don’t think this entirely explains why nobody seems to be using them.

So perhaps it’s the current implementation that people don’t like.

2. Adding using everywhere is a giant pain in the ass?

Maybe the idea of having to write using everywhere goes against the mantra of DRY - don’t repeat yourself - that we’ve generally adopted as a community. After all, who wants to have to remember to write using RSpec or using Sequel or using ActiveSupport at the top of almost every file?

It doesn’t sound fun.

And this points at another potential reason:

3. Rails (and Ruby) doesn’t use them

A huge number of Ruby developers spend most if not all of their time using Rails, and so Rails has a huge amount of influence over which language features are promoted and adopted by the community.

$ fgrep 'refine ' -R rails | wc -l # => 0

Rails contains perhaps the largest collection of monkey-patches ever, in the form of ActiveSupport, but because it doesn’t use refinements, no signal is sent to developers that we should – or even could – be using them.

UPDATE: Very shortly before this presentation was given, the first use of refinements actually was added internally within Rails.

Now: You might be starting to form the impression that I don’t like Rails, but I’m actually very hesitant to single it out. To be clear: I love Rails – Rails feeds and clothes me, and enables me to fly to Texas and meet all y’all wonderful people. The developers who contribute to Rails are also wonderful human beings who deserve a lot of thanks.

I also think it’s easily possible, and perhaps even likely, that there’s just no way for Rails to use refinements as they exist right now to implement something at the scale of ActiveSupport. It’s possible.

But even more than this, nothing in the Ruby standard library itself uses refinements!

$ fgrep 'refine ' -R ruby-2.2.3/ext | wc -l # => 0

Many new language features, like keyword arguments, won’t see widespread adoption until Rails and the Ruby standard library starts to promote them.

Rails 5 has adopted keyword arguments and so I think we can expect to see them spread into other libraries as a result.

Without compelling examples of refinements from the libraries and frameworks we use every day, there’s nothing nudging us towards really understanding when they are appropriate or not.

4. Implementation quirks

There are a number of quirks or unexpected gotchas that you will encounter if you try to use refinements at scale.

For example, even when a refinement is activated, you cannot use methods like send or respond_to? to check for refined methods. You also can’t use them in convenient forms like Symbol#to_proc.

using Shouting
"hello".shout # => “HELLO"
"hello".respond_to?(:shout) # => false
"hello".send(:shout) # => NoMethodError
["how", "are", "you"].map(&:shout) # => NoMethodError

You can also get into some really weird situations if you try to include a module into a refinement, where methods from that module cannot find other methods defined in the same module.
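Roughly this kind of thing (a sketch – the module and method names here are mine):

module Helpers
  def excited
    "#{loud} :)" # calls another method from this same module
  end

  def loud
    upcase
  end
end

module Excitement
  refine String do
    include Helpers
  end
end

using Excitement

"hello".loud    # => "HELLO"
"hello".excited # => NoMethodError for `loud` – the included method’s own lexical scope never activated the refinement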

But this doesn’t necessarily mean that refinements are broken; all of these are either by design, or a direct consequence of lexical scoping.

Even so, they are unintuitive and it could be that aspects like these are a factor limiting the ability to use refinements at the scale of, say, ActiveSupport.

5. Refinements solve a problem that nobody has?

As easy as it is for me to stand up here and make a logical and rational argument about why monkey-patching is bad, and wrong, and breaks things, it’s impossible to deny the fact that even since you started reading this page, software written using libraries that rely heavily on monkey-patching has made literally millions of dollars.

So maybe refinements solve a problem that nobody actually has. Maybe, for all the potential problems that monkey patching might bring, the solutions we already have for managing those problems – test suites, for example – are already doing a good enough job at protecting us.

But even if you disagree with that – which I wouldn’t blame you for doing – perhaps it points at another reason that’s more compelling. Maybe refinements aren’t the right solution for the problem of monkey-patching. Maybe the right solution is actually something like: object-oriented design.

6. The rise of Object-oriented design

I think it’s fair to say that over the last two or three years, there’s been a significant increase in interest within the Ruby community in “Object Oriented Design”, which you can see in the presentations that Sandi Metz, for example, has given, or in her book, or discussion of patterns like “Hexagonal Architectures”, and “Interactors”, and “Presenters” and so on.

The benefits that O-O design tends to bring to software are important and valuable – smaller objects with clearer responsibilities, that are easier and faster to test and change – all of this helps us do our jobs more effectively, and anything which does that must be good.

And, from our perspective here, there’s nothing you can do with refinements that cannot also be accomplished by introducing one or more new objects or methods to encapsulate the new or changed behaviour.

For example, rather than adding a “shout” method to all Strings, we could introduce a new class that only knows about shouting, and wrap any strings we want shouted in instances of this new class.

class Shouter
  def initialize(string)
    @string = string
  end

  def shout
    @string.upcase + "!"
  end
end

shouted_hello = Shouter.new("hello")
shouted_hello.shout # => "HELLO!"

I don’t want to discuss whether or not this is actually better than the refinement version, partly because this is a trivial example, so it wouldn’t be realistic to use, but mostly because I think there’s a more interesting point.

While good O-O design brings a lot of tangible benefits to software development, the cost of “proper O-O design” is verbosity; just as a DSL tries to hide the act of programming behind language that appears natural, the introduction of many objects can – sometimes – make it harder to quickly grasp what the overall intention of code might be.

And the right balance of explicitness and expressiveness will be different for different teams, or for different projects. Not everyone who interacts with software is a developer, let alone someone trained in software design, and so not everybody can be expected to easily adopt sophisticated principles with ease.

Software is for its users and sometimes the cost of making them deal with extra objects or methods might not be worth the benefit in terms of design purity. It is – like so many things – often subjective.

To be clear – I’m not in any way arguing that O-O design is not good; I’m simply wondering, whether or not it being good necessarily means that other approaches should not be considered in some situations.

So what’s the right answer?

And those are the six reasonable reasons that I could come up with as to why nobody is using refinements. So which is the right answer? I don’t know. There’s probably no way to know.

I think these are all potentially good, defensible reasons why we might have decided collectively to ignore Refinements.

However… I am not sure any of them are the answer that most accurately reflects reality. Unfortunately, I think the answer is more likely to be the first one we encountered on this journey:

Because other people told us they are “bad”.


Let me make a confession.

When I said “this is not a sales pitch for refinements”, I really meant it. I’m fully open to the possibility that it might never be a good idea to use them. I think it’s unlikely, but it’s certainly possible.

And to be honest, it doesn’t really bother me either way!

What I do care about, though, is that we might start to accept and adopt opinions like “that feature is bad”, or “this sucks”, without ever pausing to question them or explore the feature ourselves.

Sharing opinions

Sharing opinions is good. Nobody has the time to research everything. That would not only be unrealistic, but one of the benefits of being in a community is that we can benefit from each other’s experiences. We can use our collective experience to learn and improve. This is definitely a good thing.

But if we just accept opinions as facts, without even asking “why”… I think this is more dangerous. If nobody ever questioned an opinion as fact, then we’d still believe the world was flat!

It’s only by questioning opinions that we make new discoveries, and that we learn for ourselves, and that — together — we make progress as a community.

The “sucks”/“awesome” binary can be an easy and tempting shorthand, and it’s even fun to use – but it’s an illusion. Nothing is ever that clear cut.

There’s a great quote by a British journalist and doctor called Ben Goldacre, that he uses any time someone tries to present a situation as starkly either good or bad:

“I think you’ll find it’s a bit more complicated than that.”

This is how I feel whenever anyone tells me something “sucks”, or is “awesome”. It might suck for you, but unless you can explain to me why it sucks, then how can I decide how your experience might apply to mine?

One person’s “suck” can easily be another person’s “awesome”, and they are not mutually exclusive. It’s up to us to listen and read critically, and then explore for ourselves what we think.

And I think this is particularly true when it comes to software development.

Explore for yourselves

If we hand most, if not all responsibility for that exploration to the relatively small number of people who talk at conferences, or have popular blogs, or who tweet a lot, or who maintain these very popular projects and frameworks, then that’s only a very limited perspective compared to the enormous size of the Ruby community.

I think we have a responsibility not only to ourselves, but also to each other, to our community, not to use Ruby only in the ways that are either implicitly or explicitly promoted to us, but to explore the fringes, and wrestle with new and experimental features and techniques, so that as many different perspectives as possible inform on the question of “is this good or not”.

If you’ll forgive the pun, there are no constants in programming – the opinions that Rails enshrines, even for great benefit, will change, and even the principles of O-O design are only principles, not immutable laws that should be blindly followed for the rest of time. There will be other ways of doing things. Change is inevitable.

So we’re at the end now. I might not have been able to tell you precisely why so few people seem to be using refinements, but I do have one small request.

Please – make a little time to explore Ruby. Maybe you’ll discover something simple, or maybe something wonderful. And if you do, I hope you’ll share it with everyone.

Docker on your Server - legal fun

Written on December 17 2014 at 15:17 ∷ permalink

A few months ago, I was contacted by IP & trademark lawyers about the “Docker on your Server” e-book project that I wrote about earlier. Here’s the meat of their message, which was conveniently sent to me in a PDF attachment.

We recently learned that you run a website titled “Docker On Your Server” that is accessible at the internet address http://www.dockeronyourserver.com/. The domain name of your site includes Docker’s trademark DOCKER, and your site reproduces Docker’s distinctive whale logo, both without authorization or license. Based upon these uses, Docker is concerned that its current and prospective customers are likely to be confused into believing that your website is affiliated with, associated with, licensed, sponsored, or endorsed by Docker, which is not true. Docker does not permit the registration or use of domain names that include DOCKER as the first word in the domain name, and does not permit use of the Docker Whale Logo in a manner that suggests sponsorship or association. Your unauthorized use constitutes trading on the goodwill associated with Docker and Docker’s marks, and is actionable under the applicable laws.

Specifically, I was asked to make the following changes:

  • Discontinue use of the domain name dockeronyourserver.com, including configuring your servers or domain servers so that requests directed to dockeronyourserver.com return a 404 type error, or cause immediate redirection (without displaying a page) to a permissible domain name.
  • Select and use a different domain name that does not include the DOCKER trademark as the first word of the domain name. For example, “usingdockeronyourserver.com” is acceptable.
  • Change the title of the e-book to “Using Docker on Your Server,” or a similar title in which DOCKER is not the first word. This is intended to preclude any suggestion of sponsorship, licensing or association.
  • In all web pages that you make available, or e-books or other electronic or print publications, remove the Docker Whale logo. Display and use the Docker name only in plain typography, not using Docker’s logo or typography.

The logo request is fair enough; using it was an oversight on my part, although to be fair I started this project before Docker’s logo had gone from being a nice piece of open-source branding to the trademark of a growing, VC-funded enterprise.

Putting a different word in front of the domain seems a more tenuous request to me. Docker don’t own the word ‘docker’, or its usage. They can only object to its usage if it appears to infringe on their trademark, and here they are arguing that basically any domain that starts with ‘docker’ is an infringement. That seems wrong to me, but I still complied.

I bought a second domain (thanks, Docker), made some changes, and replied:

Thanks for your letter. While I disagree that my site was confusing, I’m obviously just an individual with far less legal power than a fully-funded startup. So, I believe I have now complied with the requests made in the letter. Namely:

  • The site is now ‘usingdockerwithyourserver.com’
  • The original site URL now redirects without an interstitial page
  • The book title is now prefixed with “Using”
  • Use of Docker logo and logotype has been removed.

I trust this will satisfy your legal team’s desire to enforce trademarks. If not… well, let me know.

Here’s what the site looked like then.

Backing Up 2-Factor Authentication Credentials

Written on October 21 2014 at 13:05 ∷ permalink

My ex-colleagues at Free Range wrote about sharing 2-factor authentication (2FA) credentials yesterday, which reminded me of a little trick that I like to use when setting up 2FA with a mobile authenticator: don’t use the QR code.


Certainly, the QR code is convenient, but it’s also opaque, and if you just scan it and move on with the setup, you’ll never see it again.

This bit me when I had my phone replaced earlier this year, only to find that my authenticator app hadn’t stored any backup of the credentials it needed to generate tokens1. I tried to log in to one of my newly-secured 2FA services (Gandi, in this case), only to find myself unable to generate the security code, and without any way to log in, no way to re-add the credentials to the authenticator app on the phone!

In this specific case I managed to get back in by calling the service and verifying my identity over the phone, but it’s a hassle that I’d rather not repeat.

So, to avoid this, here’s what I do now.

Get the secret key

Instead of scanning the QR code, ask for the secret key:

Take a note of this somewhere (I store it in 1Password, for example). Now if you ever need to somehow re-add this service to your authenticator application, you can use this code.

Once you have the code, you’re free to switch back to the QR code (which contains exactly the same information) for a more convenient way of getting that data into your app.
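For what it’s worth, the QR code is usually just an encoding of an otpauth URI that bundles the secret up with a label and an issuer – something along these lines, where all the values are placeholders:

otpauth://totp/Example:you@example.com?secret=YOURCODE&issuer=Example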

2FA using the command-line

One secondary benefit of having the code available is that you don’t need to pull out your phone to generate an authentication code. Using a program called oathtool, you can generate codes in the Terminal:

$ oathtool --base32 --totp "YOURCODE..."
=> 529038

You can even get the code ready to paste straight into the web form:

$ oathtool --base32 --totp "YOURCODE..." | pbcopy

Now you can simply switch back to the webpage and paste the code using a single keystroke. Boom.

Sharing 2FA with colleagues

Another benefit of storing the code is that you can give other people access to the same credentials without everyone needing to be present when the QR code is on the screen (which might not even be possible if they are remote).

Instead, you just need to (securely) share the code with them, and they can add it to their authenticator app and start accessing the 2FA-secured service. Admittedly, this is more cumbersome than scanning a convenient QR code

… which is why the post on the Free Range blog prompted me to write this :)

  1. This may have been because I didn’t encrypt the backup for my phone in iTunes, which means (I believe) that no passwords will be saved, but it could also be a legitimate security measure by the app itself.

Notes on 'Notes on "Counting Tree Nodes"'

Written on July 17 2014 at 16:11 ∷ permalink

Having now finished watching Tom’s episode of Peer to Peer, I finally got around to reading his Notes on “Counting Tree Nodes” supplementary blog post. There are a couple of ideas he presents that are so interesting that I wanted to highlight them again here.

If you haven’t seen the video, then I’d still strongly encourage you to read the blog post. While I can now see the inspiration for wanting to discuss these ideas1, the post really does stand on its own.

Notes on ‘Enumerators’

Here’s the relevant section of the blog post. Go read it now!

I’m not going to re-explain it here, so yes, really, go read it now.

What I found really interesting here was the idea of building new enumerators by re-combining existing enumerators. I’ll use a different example, one that is perhaps a bit too simplistic (there are more concise ways of doing this in Ruby), but hopefully it will illustrate the point.

Let’s imagine you have an Enumerator which enumerates the numbers from 1 up to 10:

> numbers = 1.upto(10)
=> #<Enumerator: 1:upto(10)>
> numbers.next
=> 1
> numbers.next
=> 2
> numbers.next
=> 3
# ... and so on, until ...
> numbers.next
=> 9
> numbers.next
=> 10
> numbers.next
StopIteration: iteration reached an end

You can now use that to do all sorts of enumerable things like mapping, selecting, injecting and so on. But you can also build new enumerables using it. Say, for example, we now only want to iterate over the odd numbers between 1 and 10.

We can build a new Enumerator that re-uses our existing one:

> odd_numbers = Enumerator.new do |yielder|
    numbers.each do |number|
      yielder.yield number if number.odd?
    end
  end
=> #<Enumerator: #<Enumerator::Generator:0x007fc0b38de6b0>:each>

Let’s see it in action:

> odd_numbers.next
=> 1
> odd_numbers.next
=> 3
> odd_numbers.next
=> 5
> odd_numbers.next
=> 7
> odd_numbers.next
=> 9
> odd_numbers.next
StopIteration: iteration reached an end

So, that’s quite neat (albeit somewhat convoluted compared to 1.upto(10).select(&:odd?)). To extend this further, let’s imagine that I hate the lucky number 7, so I also don’t want that to be included. In fact, somewhat perversely, I want to stick it right in the face of superstition by replacing 7 with the unluckiest number, 13.

Yes, I know this is weird, but bear with me. If you have read Tom’s post (go read it), you’ll already know that this can also be achieved with a new enumerator:

> odd_numbers_that_arent_lucky = Enumerator.new do |yielder|
    odd_numbers.each do |odd_number|
      if odd_number == 7
        yielder.yield 13
      else
        yielder.yield odd_number
      end
    end
  end
=> #<Enumerator: #<Enumerator::Generator:0x007fc0b38de6b0>:each>
> odd_numbers_that_arent_lucky.next
=> 1
> odd_numbers_that_arent_lucky.next
=> 3
> odd_numbers_that_arent_lucky.next
=> 5
> odd_numbers_that_arent_lucky.next
=> 13
> odd_numbers_that_arent_lucky.next
=> 9
> odd_numbers_that_arent_lucky.next
StopIteration: iteration reached an end

In Tom’s post he shows how this works, and how you can further compose enumerators to produce new enumerations with specific elements inserted at specific points, or elements removed, or even transformed, and so on.


A hidden history of enumerable transformations

What I find really interesting here is that somewhere in our odd_numbers enumerator, all the numbers still exist. We haven’t actually thrown anything away permanently; the numbers we don’t want just don’t appear while we are enumerating.

The enumerator odd_numbers_that_arent_lucky still contains (in a sense) all of the numbers between 1 and 10, and so in the tree composition example in Tom’s post, all the trees he creates with new nodes, or with nodes removed, still contain (in a sense) all those nodes.

It’s almost as if the history of the tree’s structure is encoded within the nesting of Enumerator instances, or as if those blocks passed to Enumerator.new act as a runnable description of the transformations to get from the original tree to the tree we have now, invoked each time any new tree’s children are enumerated over.

I think that’s pretty interesting.

Notes on ‘Catamorphisms’

In the section on Catamorphisms (go read it now!), Tom goes on to show that recognising similarities in some methods points at a further abstraction that can be made – the fold – which opens up new possibilities when working with different kinds of structures.

What’s interesting to me here isn’t anything about the code, but about the ability to recognise patterns and then exploit them. I am very jealous of Tom, because he’s not only very good at doing this, but also very good at explaining the ideas to others.

Academic vs pragmatic programming

This touches on the tension between the ‘academic’ and ‘pragmatic’ nature of working with software. This is something that comes up time and time again in our little sphere.

Now I’m not going to argue that anyone working in software development should have a degree in Computer Science. I’m pretty sympathetic with the idea that many “Computer Science” degrees don’t actually bear much of a direct resemblance to the kinds of work that most software developers do2.

Ways to think

What I think university study provides, more than anything else, is exposure and training in ways to think that aren’t obvious or immediately accessible via our direct experience of the world. Many areas of study provide this, including those outside of what you might consider “science”. Learning a language can be learning a new way to think. Learning to interpret art, or poems, or history is learning a new way to think too.

Learning and internalising those ways to think give perspectives on problems that can yield insights and new approaches, and I propose that that, more than any other thing, is the hallmark of a good software developer.

Going back to the blog post which, as far I know, sparked the tweet storm about “programming and maths”, I’d like to highlight this section:

At most academic CS schools, the explicit intent is that students learn programming as a byproduct of learning CS. Programming itself is seen as rather pedestrian, a sort of exercise left to the reader.

For actual developer jobs, by contrast, the two main skills you need these days are programming and communication. So while CS still does have strong ties to math, the ties between CS and programming are more tenuous. You might be able to say that math skills are required for computer science success, but you can’t necessarily say that they’re required for developer success.

What a good computer science (or maths or any other logic-focussed) education should teach you are ways to think that are specific to computation, algorithms and data manipulation, which then

  • provide the perspective to recognise patterns in problems and programs that are not obvious, or even easily intuited, and might otherwise be missed.
  • provide experience applying techniques to formulate solutions to those problems, and refactorings of those programs.

Plus, it’s fun to achieve that kind of insight into a problem. It’s the “a-ha!” moment that flips confusion and doubt into satisfaction and certainty. And these insights are also interesting in and of themselves, in the very same way that, say, study of art history or Shakespeare can be.

So, to be crystal clear, I’m not saying that you need this perspective to be a great programmer. I’m really not. You can build great software that both delights users and works elegantly underneath without any formal training. That is definitely true.

Back to that quote:

the ties between CS and programming are more tenuous … you can’t necessarily say that they’re required for developer success.

All I’m saying is this: the insights and perspectives gained by studying computer science are both useful and interesting. They can help you recognise existing, well-understood problems, and apply robust, well-understood and powerful solutions.

That’s the relevance of computer science to the work we do every day, and it would be a shame to forget that.

  1. In the last 15 minutes or so of the video, the approach Tom uses to add a “child node” to a tree is interesting, but there’s not a huge amount of time to explore some of the subtle benefits of that approach.

  2. Which is, and let’s be honest, a lot of “Get a record out of a database with an ORM, turn it into some strings, save it back into the database”.

Docker on your Server

Written on July 01 2014 at 21:22 ∷ permalink

TL;DR: Register at http://www.dockeronyourserver.com if you’re interested in an e-book about using Docker to deploy small apps on your existing VPS.

I’ve been toying around with Docker for a while now, and I really like it.

The reason I became interested, at first, was that I wanted to move the Printer server and processes to a different VPS which was already running some other software (including this blog), but I was a bit nervous about installing all of the dependencies.

Dependency anxiety

What if something needed an upgraded version of LibXML, but upgrading for one application ended up breaking a different one? What if I got myself in a tangle with the different versions of Ruby that different applications expect?

If you’re anything like me, servers tend to be supremely magical things; I know enough of the right incantations to generally make the right things appear in the right places at the right time, but it’s so easy to utter the wrong spell and completely mess things up1.

Docker isolation

Docker provides an elegant solution for these kinds of worries, because it allows you to isolate applications from each other, including as many of their dependencies as you like. Creating images with different versions of LibXML, or Ruby, or even PostgreSQL is almost trivially easy, and you can be confident that running any combination will not cause unexpected crashes and hard-to-trace software version issues.

However, while Docker is simple in principle, it’s not trivial to actually deploy with it, in the pragmatic sense.

Orchestration woes

What I mean is getting to a point where deploying a new application to your server is as simple (or close enough to it) as platforms like Heroku make it.

Now, to be clear, I don’t specifically mean using git push to deploy; what I mean is all the orchestration that needs to happen in order to move the right software image into the right place, stop existing containers, start updated containers and make sure that nothing explodes as you do that.

But, you might say, there are packages that already help with this! And you’re right. Here are a few:

  • Dokku, the original minimal Heroku clone for Docker
  • Flynn, the successor to Dokku
  • Orchard, a managed host for your Docker containers
  • …and many more. I don’t know if you’ve heard, but Docker is, like, everywhere right now.

So why not use one of those?

That’s a good question

Well, a couple of reasons. I think Orchard looks great, but I like using my own servers for software when I can. That’s just my personal preference, but there it stands, so tools like Orchard (or Octohost and so on) are not going to help me at the moment.

Dokku is good, but I’d like to have a little more control of how applications are deployed (as I said above, I don’t really care about git push to deploy, and even in Heroku it can lead to odd issues, particularly with migrations).

Flynn isn’t really for me, or at least I don’t think it is. It’s for dev-ops running apps on multiple machines, balancing loads and migrating datacenters; I’m running some fun little apps on my personal VPS. I’m not interested in using Docker to deploy across multiple, dynamically scaling nodes; I just want to take advantage of Docker’s isolation on my own, existing, all-by-its-lonesome VPS.

But, really more than anything, I wanted to understand what happens when I deploy a new application, and to be confident that it works, so that I can more easily use (and debug, if required) this new moving piece in my software stack.

So I’ve done some exploring

I took a bit of time out from building Harmonia to play around more with Docker, and I’d like to write about what I’ve found out, partly to help me make it concrete, and partly because I’m hoping that there are other people out there like me, who want some of the benefits that Docker can bring without necessarily having to learn so much about using it to its full potential in an ‘enterprise’ setting, and without having to abandon running their own VPS.

There ought to be a simple, easy path to getting up and running productively with Docker while still understanding everything that’s going on. I’d like to find and document it, for everyone’s benefit.

If that sounds like the kind of thing you’re interested in, and would like to read about, please let me know. If I know there are enough people interested in the idea, then I’ll get to work.

Sign up here: http://www.dockeronyourserver.com

  1. You might consider this to be an early example of what I mean.

Reducing risk when running a conference

Written on June 25 2014 at 14:37 ∷ permalink

It was really interesting to read the news about the potential cancellation of Wicked Good Ruby, a Ruby conference that was due to run for a second time in Boston this year:

Rather than half-ass a conference we’ve decided to cancel it. All tickets will be fully reimbursed. I’ve already reached out to those that have purchased with how that will be done.

There’s lots of interesting stuff in this thread, but one point that jumped out to me was the financial risk involved:

Zero sponsors. […] I had 2 companies inquire about sponsorship. One asked to sponsor but never paid the sponsorship invoice after 82 days.


Last year we lost $15k. In order to made it worth our effort this year we needed to make a profit. The conference we had in mind required a $50k sponsorship budget with an overall budget of close to $100k. (last year’s conference cost about $125k) Consider to date, after 6 months, we have received $0 in sponsorship the financial risk was too high.


Since announcing this a few hours ago I’ve been contacted by 3 other regional conference organizers. They are all having similar issues this year. Sponsorship is incredibly difficult to come by […] I didn’t get the sense they were going to bail but I think this is a larger issue than just Boston.

With costs in the tens or even hundreds of thousands of dollars, it’s really no wonder that organisers might have second thoughts.

But does running a conference really need to come with such an enormous financial risk? With Ruby Manor, our biggest budget was less than £2,000 (around $3,000). Here’s why:

  • We don’t use expensive ‘conference’ venues…
  • … so we aren’t locked into their expensive conference catering.
  • We run a single track for a single day, so we only need one main room.
  • We meticulously pare away other expenses like swag, t-shirts, catering and so on, where it’s clear that attendees are perfectly capable of handling those themselves.

Now, it could be that there are a few factors unique to Ruby Manor that make it difficult, or even impossible, for other conferences to follow the same model. For instance, holding the event in the center of a big city like London means there’s already a wide range of lunch options for attendees, right outside the venue.

Another problem could be the availability of university and community venues as an alternative to conference centres. I really don’t know if it’s possible or not to rent spaces like this in other cities or countries. A quick look at my local university indicates that it’s totally feasible to rent a 330-seat auditorium for less than $1,000, and even use their coffee/tea catering for a total of less than $3,0001, all-in.

I would be genuinely fascinated for other conferences to start publishing their costing breakdown. LessConf have already done this, and even though I might not have chosen to spend quite so much money on breakfasts and surprises, I genuinely do applaud their transparency for the benefit of the community.

In the end it seems there’s hope for Wicked Good:

I think I’ve come up with a plan: reduce the conference from 2 nights to 1 night. Cut out many of the thrills that I was planning. This effectively would reduce our operational costs from ~$100k to around ~$50k. This would also allow us to reduce the ticket prices (I would reimburse current tickets for the difference).

I genuinely wish the organisers the best of luck. It’s a tough gig, running a conference. That said, $50,000 is still an enormous amount of money, and I cannot help but feel that it’s still far higher than it needs to be.

Every hour you spend as a conference organiser worrying about sponsorships or ticket sales or other financial issues is an hour that could be spent making the content and the community aspects as good as they can be.

Let’s not forget: the only really important part of a conference is getting a diverse group of people with shared interests into the same space for a day or two. Everything else is just decoration. A lot of the time, even the presentations are just decoration. It’s getting the community together that’s important.

I realise that many people expect, consciously or otherwise, some or all of the peripheral bells and whistles that surround the core conference experience. For some attendees, going to a conference might even be akin to a ‘vacation’ of sorts. Perhaps a conference without $15,000 parties feels like less of an event… less of an extravaganza.

But consider this: given the choice between a glitzy, dazzling extravaganza, and a solid conference that an organiser can run without paralysing fear of financial ruin, I know which I would choose, and I know which ultimately benefits the community more.

  1. To be clear, the university requires that you not make a profit on events it hosts, but from what I can tell, most conference organisers don’t wish to run their events for profit anyway.

Let me tell you a bit about what I'm doing

Written on June 23 2014 at 22:03 ∷ permalink

This irregularly-kept blog thing is getting so irregular it’s almost regularly irregular. Lest you fool yourself into presuming any ability to predict when I might write here (i.e. “never”), let me thwart your erroneous notion by serving up this rambling missive, unexpected and with neither warning nor warrant!

Take that, predictability! Chalk another one up for chaos.

I live in Austin now, or at least, for the moment

In summer last year, I moved from London to Austin. My other half had already been living in Austin for two years, and I’d been flying back and forth every other month or so, but 24 months of that is quite enough, and so I bit the bullet and gave up my London life for Stetsons and spurs. It’s been quite an adventure so far, although I do miss a few people back in the UK.

We don’t know how long we’ll stay here, but for the moment I’m enjoying the weather immensely. You might think it’s a balmy summer in London when temperatures regularly reach 20°C, but during Austin’s summer the temperature never drops below 20°C, even at night. Can you even conceive of that?

Farewell Go Free Range, Howdy Exciting

Leaving the UK also marked the beginning of the end of the Free Range experiment for me. Ever since I started kicking the ideas for Free Range around in late 2008, I’ve always considered it an experiment. I wish I could say that it had been a complete success, and it was very successful in a number of important ways, but if you’re going to pour a lot of energy into trying to make something for yourself, it really needs to be moving in the direction you feel is valuable.

I wrote a bit more on the Exciting blog and the Free Range blog. I could say a lot more about this – indeed, I have actually written thousands of words in various notepads and emails – but I’ll save that for another time; my memoirs, maybe. It takes a lot of effort to keep a small boat moving in a consistent direction through uncharted waters.

So: I’ve spun out a lot of the projects I was driving into a new umbrella-thing called Exciting, and that’s the place to look at what I’m working on.

Right, but what are you doing exactly?

Well, I’m still doing the occasional bit of client work, and I have some tinkering projects that I need to write more about, but this year I have mostly been1 working on Harmonia. Although it was built for Free Range, it’s still both a product and a way of working that I believe lots of teams and companies could benefit from. I’ve added a fair bit of new functionality (more calendar and email features, webhooks, Trello integration) and fixed a whole bunch of usability issues, but the lion’s share of the work has been figuring out how to communicate what Harmonia does and how it could help you.

I really cannot overstate the amount of effort this takes. It’s exhausting. Some of that is doubtless because I am a developer by training, and so I’ve had to learn about design, user experience, copywriting, marketing, and everything in between as I’ve gone along. It’s all very much out of my comfort zone, but it’s simply not possible to succeed without those skills.

You can build the best product in the world, but if nobody knows it exists or can quickly determine whether or not it’s something they want… nobody will ever use it.

The good thing is that the web is the ideal medium for learning all this, because you can do it iteratively. I’ve completely redesigned the Harmonia landing page four times since it became my main focus, and each time I think it’s getting better at showcasing the benefits that it can bring. It still needs more work though; as of writing this I think it’s too wordy and the next revision will pare it down quite a bit.

Going solo

It’s also a real challenge to keep up this effort while working alone. It’s hard to wrangle a group of people into a coherent effort, but once you succeed, it provides a great support structure to keep motivation up and to share and develop ideas. Working by yourself, you don’t have to deal with anyone else’s whims or quirks, but you also only have yourself to provide motivation and reassurance that what you’re working on is valuable, and that the decisions you’re making are sensible.

What’s more, it can be challenging to stay motivated in a world seemingly filled by super-confident, naturally-outgoing entrepreneur types. I don’t operate with the unshakable confidence that what I’m building is awesome, or that my ideas are worth sharing, or that they even have any particular merit.

Confronted with the boundless crowd of what-seems-typical Type A entrepreneurs that fill the internet, prolifically, with their reckons and newsletters and podcasts and blog posts and courses, I often wonder about the simpler life of just being an employee; of offloading the risk onto someone else in exchange for a steady income and significantly less crippling self-doubt. I’m sure I’m not the only person who feels this way, whose skin crawls as yet another service or blog post or person is cheaply ascribed the accolade “awesome” when really, I mean, really, is it? Is it?

But… there’s still a chance that I might be able to carve out a niche of my own, and maybe build another little company of friends working on small things that we care about, and investing in our future collectively. I still believe that’s a possibility.

Anyway, so that’s what I’m doing, mostly: crushing it2.

  1. Couldn’t resist this turn of phrase; see here if you don’t immediately get it.

  2. Sarcasm (e.g. this).

The problem with using fixtures in Rails

Written on April 30 2014 at 14:13 ∷ permalink

I’ve been trying to largely ignore the recent TDD discussion prompted by DHH’s RailsConf 2014 keynote. I think I can understand where he’s coming from, and my only real concern with not sharing his point of view is that it makes it less likely that Rails will be steered in a direction which makes TDD easier. But that’s OK, and if my concern grows then I have opportunities to propose improvements.

I don’t even really mind what he’s said in his latest post about unit testing the Basecamp codebase. There are a lot of Rails applications – including ones I’ve written – where a four-minute test suite would’ve been a huge triumph.

I could make some pithy argument like:

Sorry, I couldn't resist

… but let’s be honest, four minutes for a substantial and mature codebase is pretty excellent in the Rails world.

So that is actually pretty cool.

Using fixtures

A lot of that speed is no doubt because Basecamp is using fixtures: test data that is loaded once at the start of the test run, and then reset by wrapping each test in a transaction and rolling back before starting the next test.

This can be a benefit because the alternative – assuming that you want to get some data into the database before your test runs – is to insert all the data required for each test, which can potentially involve a large tree of related models. Doing this hundreds or thousands of times will definitely slow your test suite down.
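
To make that concrete, here’s a rough sketch of the kind of setup such a test might need; the models, attributes and methods are invented purely for illustration:

class LineItemTotalTest < ActiveSupport::TestCase
  def setup
    # Every single test pays the cost of persisting this whole graph of
    # records, even though it only really cares about one line item.
    @account   = Account.create!(name: "ACME")
    @user      = @account.users.create!(email: "sam@example.com")
    @project   = @account.projects.create!(name: "Rebrand")
    @invoice   = @project.invoices.create!(user: @user, number: "INV-1")
    @line_item = @invoice.line_items.create!(description: "Design", amount_cents: 10_000)
  end

  test "totals include every line item" do
    assert_equal 10_000, @invoice.total_cents
  end
end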

(Note that for the purposes of my point below, I’m deliberately not considering the option of not hitting the database at all. In reality, that’s what I’d do, but let’s just imagine that it wasn’t an option for a second, yeah? OK, great.)

So, fixtures will probably make the whole test suite faster. Sounds good, right?

The problem with fixtures

I feel like this is glossing over the real problem with fixtures: unless you are using independent fixtures for each test, your shared fixtures have coupled your tests together. Since I’m pretty sure that nobody is actually using independent fixtures for every test, I am going to go out on a limb and just state:

Fixtures have coupled your tests together.

This isn’t a new insight. This is pain that I’ve felt acutely in the past, and was my primary motivation for leaving fixtures behind.

Say you use the same ‘user’ fixture between two tests in different parts of your test suite. Modifying that fixture to respond to a change in one test can now potentially cause the other test to fail, if the assumptions either test was making about its test data are no longer true (e.g. the user should not be an admin, the user should only have a short bio, and so on).
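
As a minimal sketch of how that plays out, imagine one shared fixture leaned on by two unrelated tests; the fixture contents and the model methods here are invented for illustration:

# test/fixtures/users.yml defines a single shared user, something like:
#   sam:
#     admin: true
#     bio: "Short and sweet"

class AdminReportTest < ActiveSupport::TestCase
  test "admins can generate reports" do
    assert users(:sam).can_generate_reports? # silently assumes sam is an admin
  end
end

class ProfileCardTest < ActiveSupport::TestCase
  test "the bio fits on the profile card" do
    assert users(:sam).bio_fits_on_card? # silently assumes sam has a short bio
  end
end

Flip admin to false, or lengthen the bio, to suit some new test elsewhere, and one of these tests can start failing for reasons that have nothing to do with the code it covers.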

If you use fixtures and share them between tests, you’re putting the burden of managing this coupling on yourself or the rest of your development team.

Going back to DHH’s post:

Why on earth would you run your entire test harness for every single line change in a particular model? If you have so little confidence in the locality of your changes, the tests are indeed telling you that the system has overly high coupling.

What fixtures do is introduce overly high coupling in the test suite itself. If you make any change to your fixtures, I do not think it’s possible to be confident that you haven’t broken a single test unless you run the whole suite again.

Fixtures separate the reason the test data is the way it is from the tests themselves, rather than keeping the two close together.

I might be wrong

Now perhaps I have only been exposed to problematic fixtures, and there are techniques for reliably avoiding this coupling or at least managing it better. If that’s the case, then I’d really love to hear more about them.

Or, perhaps the pain of managing fixture coupling is objectively less than the pain of changing the way you write software to both avoid fixtures AND avoid slowing down the test suite by inserting data into the database thousands of times?

That’s certainly possible. I am skeptical though.

Shooting errors with Raygun.io

Written on August 29 2013 at 03:05 ∷ permalink

I’ve been playing with Raygun.io over the last day or so. It’s a tool, like Honeybadger, Airbrake or Errbit, for managing exceptions from other web or mobile applications. It will email you when exceptions occur, collapse duplicate errors together, and allows a team to comment and resolve exceptions from their nicely designed web interface.

I’ve come to the conclusion that integrating with something like this is basically a minimum requirement for software these days. Previously we might’ve suggested an ‘iterative’ approach of emailing exceptions directly from the application before later using one of these services, but I no longer see the benefit in postponing a better solution when it’s far simpler to work with one of these services than it is to set up email reliably.

It seems pretty trivial to integrate with a Rails application – just run a generator to create the initializer complete with API key. However, I had to do a bit more work to hook it into a Rack application (which is what Vanilla is). In my config.ru:

# Setup Raygun and add it to the middleware
require 'raygun'
Raygun.setup do |config|
  config.api_key = ENV["RAYGUN_API_KEY"]
end

use Raygun::RackExceptionInterceptor

# Raygun will re-raise the exception, so catch it with something else
use Rack::ShowExceptions

The documentation for this is available on the Raygun.io site, but at the moment the actual documentation link on their site points to a gem which, more confusingly, isn’t actually the gem that you will have installed. Reading the documentation in the gem README also reveals how to integrate with Resque, to catch exceptions in background jobs.

One thing that’s always worth checking when integrating with exception reporting services is whether or not they support SSL, and thankfully it looks like that’s the default (and indeed only option) here.

The Raygun server also sports a few plugins (slightly hidden under ‘Application Settings’) for logging exception data to HipChat, Campfire and the like. I’d like to see a generic webhook plugin supported, so that I could integrate exception notification into other tools that I write; thankfully that’s the number one feature request at the moment.

My other request would be that the gem should try not to depend on activesupport if possible. I realise for usage within Rails, this is a non-issue, but for non-Rails applications, loading ActiveSupport can introduce a number of other gems that bloat the running Ruby process. As far as I can tell, the only methods from ActiveSupport that are used are Hash#blank? (which is effectively the same as Hash#empty?) and String#starts_with? (which is just an alias for the Ruby-default String#start_with?). Pull request submitted.
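
For what it’s worth, the plain-Ruby equivalents are tiny; a quick sketch of the substitutions (not the gem’s actual code):

# Hash#blank? behaves like #empty? when the receiver is a Hash
{}.empty?                      # => true

# String#starts_with? is an ActiveSupport alias for Ruby's own #start_with?
"raygun".start_with?("ray")    # => true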

Monitoring our cat with Twine

Written on August 19 2013 at 20:34 ∷ permalink

I was lucky enough to be gifted a Twine by my colleagues at Go Free Range last weekend, and I took the opportunity to put together a very simple service that demonstrates how it can be used.

The Twine

If you haven’t heard of Twine, it’s a hardware and software platform for connecting simple sensors to the internet, and it makes it really very easy to do some fun things bridging the physical and online worlds.


On the hardware side, there’s a simple 2.7” square that acts as a bridge between your home WiFi network and a set of sensors.

The twine bridge

Some of the sensors are built in to the square itself: temperature, orientation and vibration can be detected without plugging anything else in. You can also get external sensors, which connect to the square via a simple 3.5mm jack cable. If you buy the full sensor package, you’ll get a magnetic switch sensor, a water sensor and a ‘breakout board’ that lets you connect any other circuit (like a doorbell, photoresistor, button and so on) to the Twine.


Connecting the Twine to a WiFi network is elegant and features a lovely twist: you flip the Twine on its “back”, like a turtle, and it makes its own WiFi network available.

Twine setup

Connect to this from your computer, and you can then give the Twine the necessary credentials to log on to your home network, and once you’re done, flip it back onto its “belly” again and it will be ready to use. I really loved this simple, physical interaction.


On the software side, Twine runs an online service that lets you define and store ‘rules’ for your connected Twine units. These rules take the form of when <X> then <Y>, in a similar style to If This Then That. So, with a rule like when <vibration stops> then <send an email to my phone>, you could pop the Twine on top of your washing machine and be alerted when it had finished the final spin cycle.

Twine rules


As well as emailing, the Twine can flash its LED, tweet, send you an SMS, call you, or ping a URL via GET or POST requests including some of the sensor information.

Supermechanical, the company that launched Twine about a year and a half ago via Kickstarter, maintains a great blog with lots of example ideas of things that can be done.

All technology tends towards cat

Just as the internet has found its singular purpose as the most efficient conduit for the sharing of cat pictures, so will the Internet of Things realise its destiny by becoming entirely focussed on physical cats, in all their unpredictable, scampish glory.

It’s neat having something in your house tweet or send you an email, but I like making software, so I decided to explore building a simple server that the Twine could interact with, and thus, “Pinky Status” was born:

Pinky Status

What follows is a quick explanation of how easy it was.

The sensor

I hooked up the magnetic switch sensor to the Twine, then used masking tape to secure the sensor to the side of the catflap and the magnet to the flap itself.

Catflap sensor

That way, when “Pinky” (that’s our cat) opens the flap, the magnet moves away from the switch sensor and it enters the ‘open’ state. It’s not pretty, but it works.

The Rule

Next, we need a simple rule so that the Twine knows what to do when the sensor changes:

Pinky Status Twine Rule

When the sensor changes to open, two things happen. Firstly, I get an email, which I really only use for debugging and I should probably turn it off, except that it’s pretty fun to have that subject line appear on my phone when I’m out of the house.

Secondly and far more usefully, the Twine pings the URL of a very, very simple server that I wrote.

A simple service

Here’s the code, though it’s probably clearer to look at an earlier Sinatra version than at the current Rails implementation:

require "rubygems"
require "bundler/setup"
require "sinatra"
require "data_mapper"

DataMapper::setup(:default, ENV['DATABASE_URL'] || 'postgres://localhost/pinky-status')

class Event
  include DataMapper::Resource
  property :id, Serial
  property :source, Enum[:manual, :twine], default: :twine
  property :status, Enum[:in, :out]
  property :created_at, DateTime

  def self.most_recent
    all(order: [:created_at.desc]).first
  end

  def self.most_recent_status
    most_recent ? most_recent.status : nil
  end

  def self.next_status
    if most_recent_status
      most_recent_status == :in ? :out : :in
    end
  end
end

# Finalise the models and create any missing tables before handling requests
DataMapper.finalize
DataMapper.auto_upgrade!

get "/" do
  @events = Event.all
  @most_recent_status = Event.most_recent_status
  erb :index
end

post "/event" do
  Event.create!({created_at: Time.now, status: Event.next_status}.merge(params[:event] || {}))
  redirect "/"
end
The key part is at the very bottom – when the Twine makes a POST request, the server simply creates another Event record with an alternating status (‘in’ or ‘out’), and then some logic in the view (not shown) can tell us whether or not the cat is in or out of the house.

In more recent versions of the code I’ve moved to Rails because it’s more familiar, but also slightly easier to do things like defend against duplicate events (normally when the cat changes her mind about going outside when her head is already through the flap) and other peripheral things.
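
For the curious, the duplicate-event guard can be as simple as ignoring a trigger that arrives too soon after the previous one. Here’s a rough sketch assuming ActiveRecord, with a made-up ten-second window; it’s not the actual Pinky Status code:

class Event < ActiveRecord::Base
  DUPLICATE_WINDOW = 10 # seconds; tuned by guesswork to one indecisive cat

  def self.record_from_twine!
    last = order(created_at: :desc).first

    # Treat a second trigger within the window as the flap "bouncing",
    # not as the cat genuinely passing through again.
    return last if last && last.created_at > DUPLICATE_WINDOW.seconds.ago

    create!(source: "twine", status: next_status, created_at: Time.now)
  end

  def self.next_status
    last = order(created_at: :desc).first
    last && last.status == "in" ? "out" : "in"
  end
end

The Sinatra version above could grow the same guard, of course; Rails just made these peripheral touches a little quicker to add.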

But don’t be dissuaded by Rails - it really was as trivial as the short script above to get something up and running that shows some novel information derived from the simple sensor attached to the Twine. Deploying a server is also very easy thanks to tools like Heroku.


A few hours’ idle work and the secret life of our cat is now a little bit less mysterious than it was. I really enjoyed how quick and painless the Twine was to set up, and I can highly recommend it if you’re perhaps not comfortable enough to dive into the deep sea of Arduinos, soldering and programming in C, but would still like to paddle in the shallower waters of the “internet of things”.