Objects on Rails

Table of Contents


Objects on Rails (Revision 10)

Copyright © Avdi Grimm. All rights reserved.

What is this?

This is the complete text of Objects on Rails, a "developer's notebook" documenting some guidelines, techniques, and ideas for applying classic object-oriented thought to Ruby on Rails applications. This book is aimed at the working Rails developer who is looking to grow and evolve Rails projects while keeping them flexible, maintainable, and robust. The focus is on pragmatic solutions which tread a "middle way" between the expedience of the Rails "golden path", and rigid OO purity.

About the author

I'm Avdi Grimm. I've been doing large-scale object-oriented software development my entire programming career, first in aerospace and networking systems, and later on for web applications. I've been hacking in Ruby for over a decade, and I still love how fun and effortless it makes the job of building programs out of objects. I've spoken at a few conferences about Ruby code construction, I'm a co-host of the Ruby Rogues podcast, and I blog about software at Virtuous Code. If you like, you can follow me on Twitter.

I'm also a freelance consultant working with the Code Benders collective. If you have a software project and you want to move fast without sacrificing quality, you should get in touch with us.

Feedback & Discussion

Got questions, comments, errata—or just want to discuss object-oriented programming with other Rubyists? Come on over to the book's email discussion group.

Acknowledgments

When I first envisioned this project I saw it as a "mini-book", maybe fifty pages at most. As I worked through the examples it exploded in size. If it weren't for all the help and support I received along the way I probably would have given up on it halfway through.

Of course, this book wouldn't have existed in the first place without Ruby and Rails. Thanks to Matz and the Ruby core team for creating a language which made object-oriented programming fun again. And to David Heinemeier Hansson and the whole Rails team, for a framework that makes building web applications suck less.

A huge thank you to all the beta readers who gave me feedback on drafts of the book: Adam Guyot, Alan Gardner, Alex Chaffee, Amos King, Andre Bernardes, Andrew Premdas, Andrew Vit, Andrew Wagner, Ants, Arvind Laxminarayan, Assembler Ben, Bradley Grzesiak, Brendon Murphy, Brian Jolly, Bruno Lara Tavares, Chris McGrath, Chris Zwickilton, Conrad Taylor, Craig Savolainen, Dan Bernier, Dan Croak, Dan Dorman, Daniel Dosen, Daniel Schierbeck, David Jacques, David Laribee, Diabolo, Emmanuel Gomez, François Beausoleil, Gabriel Malkas, Greg S, Hugh Kelsey, James Ladd, James Mead, Jefferson Jean Martins Girao, Jim Gay, Joe Van Dyk, Joel Meador, Jon Olsson, Jonas Pfenniger, Joshua Flanagan, Katrina Owen, Kerry Buckley, Kevin Rutherford, Leo Cassarani, Loren Norman, Manuel Enrique Vidaurre Arenas, Mark Kocera, Martin Samson, Max Justus, Michael Greenly, Michel Barbosa, Mislav Marohnić, Nick Gauthier, Nicolas Sanguinetti, Nikolay Sturm, Noel Rappin, Paul M., Peter Jaros, Piotr Sarnacki, Raphael, Rob Sharp, Ryan Bates, Sammy Larbi, Scott Smith, Srdjan Pejic, Steve Klabnik, Steve Tooke, Steven Harman, Tim Craft, TJ Singleton, Tony Semana, and Wael M. Nasreddine.

Special thanks to George Anderson and Larry Marburger, my Athos and Aramis, for constantly encouraging me and being willing sounding-boards. To my fellow CodeBenders, especially Matt Kern, Dan Kubb, and Piotr Solnica: thanks for putting up with me spending so much of my time writing. And thanks to my assistant Mandy Moore, who made it possible to spend more of my time writing and less of it dealing with administrative headaches.

Finally and most importantly, thank you to Stacey for unwaveringly supporting the projects that preoccupy me; for the fresh AeroPress coffee in the morning after every up-til-5AM writing session; and for reminding me to eat every now and then. And to Lily, Josh, Kashti, and Ebba, for being the best kids ever.

Introduction

Hi there! Welcome to Objects on Rails. This text is a step-by-step walkthrough following the construction of a simple web application using Ruby on Rails. It differs from other such walkthroughs in that it attempts to apply a strongly Object-Oriented philosophy to the process.

"Wait a sec!" you say. "Isn't Rails already Object-Oriented?" Well, yes and no. While Rails is written in a thoroughly OO language, and built on some solid OO patterns, there are aspects of conventional Rails application development which depart significantly from OO practices.

Which is not necessarily a problem in and of itself. Lots of programs get by just fine with (for instance) a mix of OO and Functional programming styles. But experience has shown that these same not-so-OO Rails practices—such as models which violate the Single Responsibility Principle (SRP), or complex business logic stuffed into helpers—are a common source of development delays in maturing Rails applications.

In this text we'll build a basic blog application (yes, another one). You'll be my virtual pair-programmer as I work through the problems the application presents by applying what I've learned of Object-Oriented design over the years. I'll do my best to explain my reasoning, step you through the code piece by piece, and point you to further information about the patterns and guidelines I'm following.

In some parts the "Objects" aspect of the text will just mean small tweaks to standard Rails practices. In other areas we'll take extended side-tracks from the so-called "golden path". Some techniques I apply will be tried-and-true chestnuts I use in every project I work on; others will be tools I only haul out occasionally. In a few cases I'll use this exercise as an opportunity to work through ideas I haven't had the opportunity to fully flesh out until now. Sometimes that might mean we go down a dead-end siding; when that happens I'll let you know, and do my best to explain the thinking which led up to it.

What this is not

This is not a Rails tutorial
Familiarity with Rails and Ruby is assumed.
This is not a Rails critique
This is not all about how "Rails is wrong". Rails is a terrifically powerful framework for quickly assembling web applications. I'm interested in how to better use the tools Rails provides, not so much in how to subvert or replace them.
This is not comprehensive
This text is effectively a snapshot of some of my Rails development preferences and ideas circa late 2011/early 2012. It doesn't capture every possible application of OO patterns or SOLID principles to Rails development.
This is not a rule book or a "best-practices" manual
The last thing I want is for anyone to treat the approach shown here as a set of rules for how to build a Rails application "right". I hope that you'll consider the patterns and idioms presented here, and select the ones that speak to you and make sense for your application. If nothing else, I hope to give you some food for thought.

About the approach

This text takes the form of a walkthrough. We'll build an app step by step using Test-Driven Development (TDD), at most steps presenting test code followed by implementation code.

However, while we will use TDD at the unit test level to drive development, there is a major piece missing from our TDD stack. Ordinarily, when building a "real" app, I would drive each feature from the outside in using acceptance tests, typically written using Cucumber. However, in the interests of brevity, I omit the acceptance testing component in this document. For the curious, I've included the Cucumber acceptance tests in Appendix B.

About the Code

With a few exceptions, all the code samples are from the working demo codebase I built as I wrote the text. If you have the "deluxe edition" you have access to a full copy of the source code, including revision history. I've taken the liberty of eliding some of the code samples in the text to show only the bits that are interesting or relevant to the discussion; this means, for instance, that some require statements have been omitted.

This text uses Ruby 1.9. Among other things, I make extensive use of the new "stabby lambda" syntax:

# Ruby 1.8
lambda{|x,y| x + y}
# Ruby 1.9
->(x,y){ x + y}

Stabby lambdas

If you're not familiar with the changes from 1.8 to 1.9, I highly recommend Peter Cooper's comprehensive walkthrough on the subject.

Also note that I use the "Weirich convention" for choosing between {} and do...end delimiters around blocks. So blocks which are evaluated for their result value, I surround with braces ({}); whereas blocks which are evaluated for their side-effects get do...end.
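For example (a quick illustration of the convention, not code from our app; posts is just assumed to be some enumerable collection):

# Evaluated for its result value: braces
titles = posts.map { |post| post.title }

# Evaluated for its side effects: do...end
posts.each do |post|
  puts post.title
end

The Weirich block convention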

A note on scale

The challenge in writing about code patterns is to come up with examples that are simple and clear enough that the supporting code doesn't get in the way of understanding the specific technique being demonstrated, while still hopefully avoiding examples that feel completely contrived. Unfortunately, if you succeed in that, you are often confronted with a new problem: the example problems you carefully distilled down to their essence now seem so simple, so trivial, that whatever refactoring or abstraction you're trying to illustrate seems superfluous and a waste of effort.

The application I work through in these pages is a deliberately simple one, and many of the techniques I demonstrate may seem like massive overkill for the task at hand. Please understand, as you read through the examples, that these are patterns and idioms intended to make the development and evolution of large-scale applications more tractable. While they may seem less than compelling in the context of a "toy" app, hopefully you can visualize how they might be helpful for larger-scale development.

Why OOP?

Why bother with these techniques? What's wrong with the way we've always written Rails applications?

The biggest reason—scratch that, the only reason—is to make our apps easier to change. The only constant in life is change, and that goes double for software projects. Markets change, requirements change, external dependencies change, and platforms change. As I've written about at length elsewhere, Rails is reaching a point at the time of writing where a lot of projects are starting to mature, and a lot of developers are realizing their projects aren't nearly as easy to modify as they used to be.

It's risky for me to give some specific example, e.g. "by following these guidelines, you'll be able to easily change to MongoDB in the future!". Inevitably someone will say "hah, my app will never need to switch to MongoDB, therefore I don't need these techniques!"

Attempts to predict which parts of a codebase will need to change, and to structure it accordingly, have ended badly more often than not. Part of the nature of change is that you often don't know beforehand what is going to need change. In this text I'm not going to attempt to say "using such and so technique will make such and so component easier to change". In fact, I would encourage you not to spend too much time thinking about what is most likely to change. Much like premature optimization, premature change management usually misses the mark.

Amongst all this uncertainty, there are some basic principles that have proven, over decades of Object-Oriented software development, to make software generally more flexible and amenable to change. Principles such as:

  • Small objects with a single, well-defined responsibility.
  • Small methods that do only one thing.
  • Limiting the number of types an object collaborates with.
  • Strictly limiting the use of global state and singletons (that includes limiting the use of class-level methods).
  • Small object interfaces with simple method signatures.
  • Preferring composition over inheritance.

These rules of thumb, practiced habitually, tend to lead to more flexible codebases which can adapt to any type of change adroitly; whether the change is a data model which better resembles the problem domain; a new data storage backend; or a re-structuring of the app into a half-dozen mini-apps.

So the answer to why, in the end, is "because things change". Some good habits and sound architectural guidance early on in a project can save a lot of headaches down the road.

With that intent in mind, let's jump in!

Yet another frickin' blog app

Let's write a new blog app in Rails, since no one's ever done that before!

We'll use Rails version 3.0, running on Ruby (MRI) 1.9.2.

$ rails new bloog --skip-test-unit --skip-prototype
      create  
      create  README
      create  Rakefile
      create  config.ru
      create  .gitignore
      create  Gemfile
      create  app
      # ...

(Note: if you are following along at home, be sure not to name your project "blog". We'll be defining a class named Blog later, which will clash with the Blog application class Rails generates in that case.)

I guess we should start with the home page. Let's add a route:

root to: "blog#index"

Now we'll need a controller for that route to work:

$ rails g controller blog index
      create  app/controllers/blog_controller.rb
       route  get "blog/index"
      invoke  erb
      create    app/views/blog
      create    app/views/blog/index.html.erb
      invoke  helper
      create    app/helpers/blog_helper.rb

Hm, what should the view look like?

Well, we are presenting a blog. So presumably we'll have an object to represent the blog.

<!-- app/views/blog/index.html.erb -->

<h1><%= @blog.title %></h1>
<h2><%= @blog.subtitle %></h2>

The first view

OK, now we know we need a blog object in the view. Let's create it in the controller:

# app/controllers/blog_controller.rb
class BlogController < ApplicationController
  def index
    @blog = Blog.new
  end
end

The blog controller

Looks like we need a Blog class next.

# app/models/blog.rb
class Blog
  def title
    "Watching Paint Dry"
  end
  def subtitle
    "The trusted source for drying paint news & opinion"
  end
end

The Blog model

At this point we can load the page:

Loading the page for the first time

Adding blog entries

A blog without entries isn't very useful. Let's add blog posts to the app. Since we're adding something more complex than just static strings, we'll TDD it. Just to prove there's nothing up our sleeves, we'll use MiniTest/Spec.

# spec/models/blog_spec.rb
require 'minitest/autorun'
require_relative '../../app/models/blog'
describe Blog do
  before do
    @it = Blog.new
  end
  it "has no entries" do
    @it.entries.must_be_empty
  end
end

The first test

Running this spec results in a failure:

$ ruby spec/models/blog_spec.rb 
Loaded suite spec/models/blog_spec
Started
E
Finished in 0.001575 seconds.
  1) Error:
test_0001_should_have_no_entries(BlogSpec):
NoMethodError: undefined method `entries' for #<Blog:0x894b044>
    spec/models/blog_spec.rb:10:in `block (2 levels) in <main>'
1 tests, 0 assertions, 0 failures, 1 errors, 0 skips

To make it pass, we add an entries attribute to Blog:

class Blog
  attr_reader :entries

  def initialize
    @entries = []
  end
  # ...
end

Adding entries to the blog

When we run the tests again, they pass:

$ ruby spec/models/blog_spec.rb 
Loaded suite spec/models/blog_spec
Started
.
Finished in 0.002619 seconds.
1 tests, 2 assertions, 0 failures, 0 errors, 0 skips

You may have noticed that we're not using any kind of Rails integration for setting up and running the tests. We're not even relying on Rails constant autoloading. This is intentional. By writing "plain old tests" which don't rely on any special Rails helpers, we keep the tests isolated and force ourselves to be deliberate about creating any dependencies between our objects. We anticipate that this will have a salutary effect on the object design which emerges from our TDD process.

As a welcome side effect, the tests run ridiculously fast.

Placeholder blog entries

OK, now we have an entries attribute on Blog, but there's nothing in it. Let's add some temporary example blog posts in the BlogController.

def index
  @blog = Blog.new
  post1 = @blog.new_post
  post1.title = "Paint just applied"
  post1.body = "Paint just applied. It's a lovely orangey-purple!"
  post1.publish
  post2 = @blog.new_post(title: "Still wet")
  post2.body = "Paint is still quite wet. No bubbling yet!"
  post2.publish
end

Placeholder entries

You may have noticed that we're calling a #new_post method which doesn't exist yet, followed by some other methods on the return value which also don't exist. Now that we know what code we need, let's make it exist.

Making new entries

First, let's specify that #new_post method. Clearly it needs to return some kind of blog post object which is associated with the Blog object. However, we want to keep our tests isolated, and we only want to test one model at a time. So we'll make the process by which new posts are created easy to swap out:

class Blog
  # ...
  attr_writer :post_source
  # ...
  private
  def post_source
    @post_source ||= Post.public_method(:new)
  end
end

A post source

#public_method, if you're unfamiliar with it, instantiates a call-able Method object. When the object's #call method is invoked it will be as if we called the named method on the original object. The "public" in the name refers to the fact that unlike #method, #public_method respects public/private boundaries and will not generate a Method object for a private method.

In this case, Post.public_method grabs a reference to a Method object representing the (not yet written) Post.new method. During normal operation, Blog will use this method reference (the equivalent of calling Post.new) to generate post objects. But we can substitute any call-able object when testing the class.
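If the mechanics are unfamiliar, here's a tiny irb-style illustration (not part of the app's code):

# A Method object bundles a receiver with a method name, and responds to #call
add_to_three = 3.public_method(:+)
add_to_three.call(4)   # => 7

# Post.public_method(:new) is the same idea: calling the resulting
# Method object is equivalent to calling Post.new directly.

Method objects in brief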

Now we'll make some assertions about how Blog#new_post should behave:

# spec/models/blog_spec.rb
require 'ostruct'
describe Blog do
  # ...
  describe "#new_post" do
    before do
      @new_post = OpenStruct.new
      @it.post_source = ->{ @new_post }
    end
    it "returns a new post" do
      @it.new_post.must_equal @new_post
    end
    it "sets the post's blog reference to itself" do
      @it.new_post.blog.must_equal(@it)
    end
  end
end

Specifying Blog#new_post

Here, we substitute a lambda which simply returns an OpenStruct for the #post_source.

Making these pass is straightforward:

class Blog
  # ...
  def new_post
    post_source.call.tap do |p|
      p.blog = self
    end
  end
end

Implementing Blog#new_post

Aside: subject and let

After I completed most of the examples in this text someone pointed out to me that recent versions of MiniTest have borrowed the let and subject methods from RSpec. I'm not going to go through and update every example, but I think it's worth demonstrating them briefly, since they make specs more concise and declarative.

Here's an elided version of blog_spec.rb using subject and let:

describe Blog do
  subject       { Blog.new(->{entries}) }
  let(:entries) { [] }

  it "has no entries" do
    subject.entries.must_be_empty
  end
  # ...
end

Using MiniTest's "subject" and "let"

As you can see, @it has been replaced with subject, and @entries is now entries.

Besides removing the need for a before block, let and subject have some other useful properties: they are lazily instantiated and memoized. Meaning that if a test doesn't use entries at all, the object will never be instantiated. And if the test references entries more than once, the definition block will only be run once, and the value it returns will be reused. If you have objects which are expensive to create, this can make your tests run a little faster.

Note that to get access to let and subject you will need either the gem version of MiniTest, or Ruby 1.9.3.

Posts vs. Entries

Hold on a sec. Aren't we getting our terms confused here? First we said a Blog has "entries". But then we started talking about "posts". Shouldn't we pick one or the other?

In fact, this choice to use multiple terms is deliberate. The dark side of having sensible framework conventions is that after a while, those conventions turn into assumptions. In this case, if we called the entries collection posts instead, there's a good chance we'd start mentally conflating it with the Post class. Anything in blog.posts is a Post object, end of story.

This is one of those subtle assumptions that can lead to big problems. For instance, if we assume blog.new_post is equivalent to Post.new, we might start to just skip the Blog part and write Post.new(...) or Post.create(...) whenever we want a new blog entry.

Now imagine some time passes, and we add the ability for a Blog to have many different types of posts—photos, embedded videos, etc.—each represented by a different class such as PhotoPost or VideoPost. A call to blog.new_post(...) looks at the arguments and chooses the right type of object to instantiate. Unfortunately, we hard-coded references to Post everywhere, and now we have to go back and change them all.

These kinds of assumptions don't just lead to extra work; they can introduce security holes. Let's say we made the "blog entries are Posts" assumption, and as a result we coded various controller actions to look like this:

def update
  @post = Post.find(params[:id])
  # ...
end

Then one day we decide to add the ability to host multiple blogs. Instead of a singleton Blog instance, there are multiple blogs, each owned by a different user. The controllers are all updated to make sure that actions modifying a Blog can only be made by the user who owns that blog.

The upgrade goes easily, and everything is working great. Then one day a clever user realizes that if he can guess the ID of someone else's post, he can modify its content! Why is this possible? Because all those calls to Post.find() bypassed the scoping that accessing posts through a Blog instance would have imposed, by fetching them directly from the class instead.
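To make the difference concrete, here's a hypothetical sketch of that multi-blog scenario (current_user and the entries association are assumptions for illustration, not part of our app):

# Unscoped: any user who can guess an ID can load any post
@post = Post.find(params[:id])

# Scoped through the owning blog: posts belonging to someone
# else's blog are simply never found
@post = current_user.blog.entries.find(params[:id])

Unscoped vs. scoped lookups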

These are not hypothetical issues; I've seen them in production codebases. Using different terms for "the blog's entries" and "a Post object" doesn't automatically fix the problem. But if they mentally trip us up a bit, that might just be the nudge we need to remember that the entries managed by a blog are not necessarily equivalent to the set of all Post records. The topic of this text is looking at Rails projects with a fresh perspective, and playing with naming is one way to do that.

The Post class

It's pretty obvious that our next step needs to be creating a Post class. Let's specify its behavior.

# spec/models/post_spec.rb

require 'minitest/autorun'
require_relative '../../app/models/post'

describe Post do
  before do
    @it = Post.new
  end

  it "starts with blank attributes" do
    @it.title.must_be_nil
    @it.body.must_be_nil
  end

  it "supports reading and writing a title" do
    @it.title = "foo"
    @it.title.must_equal "foo"
  end

  it "supports reading and writing a post body" do
    @it.body = "foo"
    @it.body.must_equal "foo"
  end

  it "supports reading and writing a blog reference" do
    blog = Object.new
    @it.blog = blog
    @it.blog.must_equal blog
  end

  describe "#publish" do
    before do
      @blog = MiniTest::Mock.new
      @it.blog = @blog
    end

    after do
      @blog.verify
    end

    it "adds the post to the blog" do
      @blog.expect :add_entry, nil, [@it]
      @it.publish
    end
  end
end

Specifying the Post class

Next we satisfy the specification:

# app/models/post.rb
class Post
  attr_accessor :blog, :title, :body
  def publish
    blog.add_entry(self)
  end
end

Implementing the Post class

Why "publish"?

If you've written a few Rails apps you may be wondering why we're calling the method which makes a new blog entry #publish instead of #save.

One of the central elements of object-oriented design is capturing the language of the domain in our models. Think for a minute about the language of blogging. No one says "I saved a blog post the other day". They say "I published a blog post" or maybe "I posted a blog entry". By calling the method #publish, we are continuing to build a system which echoes our mental model of the domain.

Consider how we might extend this program in the future. We might add scheduled posts, which appear some period of days later than they are first saved. We might also add a draft state for posts, where they are saved but they are only visible to the blog owner.

Our choice of the verb #publish fits right into this extended workflow:

post.save_draft
# followed by ...
post.schedule
# or...
post.publish

This is not a coincidence. Choosing appropriate domain language for program elements often means we don't need to rename as many things as we add more features down the road.

Adding entries to the blog

Driving out Post has revealed that we need one more method on Blog, one which will actually add the post to the blog. We'll quickly spec it out and add it.

describe Blog do
  describe "#add_entry" do
    it "adds the entry to the blog" do
      entry = Object.new
      @it.add_entry(entry)
      @it.entries.must_include(entry)
    end
  end
end

Specifying Blog#add_entry
class Blog
  # ...
  def add_entry(entry)
    entries << entry
  end
  # ...
end

Implementing Blog#add_entry

Looking back at our demo code in the BlogController, we remember that in making a second post, we changed things up a little and passed in the title as an argument:

post2 = @blog.new_post(title: "Still wet")

Let's modify Blog#new_post to support this syntax. First, the spec:

# ...
it "accepts an attribute hash on behalf of the post maker" do
  post_source = MiniTest::Mock.new
  post_source.expect(:call, @new_post, [{x: 42, y: 'z'}])
  @it.post_source = post_source
  @it.new_post(x: 42, y: 'z')
  post_source.verify
end
# ...

Specifying Blog#new_post

And then the implementation:

# ...
def new_post(*args)
  post_source.call(*args).tap do |p|
    p.blog = self
  end
end
# ...

Implementing Blog#new_post

Now we're passing the arguments along, but we still need to implement keyword arguments on the Post initializer.

describe Post do
  # ...
  it "supports setting attributes in the initializer" do
    it = Post.new(title: "mytitle", body: "mybody")
    it.title.must_equal "mytitle"
    it.body.must_equal "mybody"
  end
  # ...
end

Specifying keyword arguments for new posts
class Post
  # ...
  def initialize(attrs={})
    attrs.each do |k, v|
      send("#{k}=", v)
    end
  end
  # ...
end

Implementing keyword arguments

Now we just need to update the views to show our posts.

<!-- app/views/blog/index.html.erb -->
<h1><%= @blog.title %></h1>
<h2><%= @blog.subtitle %></h2>
<%= render partial: "entry", collection: @blog.entries %>

Showing blog entries on the index page
<!-- app/views/blog/_entry.html.erb -->
<article>
  <header>
    <h3><%= entry.title %></h3>
  </header>
  <p><%= entry.body %></p>
</article>

The Blog entry partial

Reloading the page, we can see our demo entries.

Showing entries

Submitting posts

This is progress, but a blog with static entries doesn't do us a lot of good. We need to be able to submit new entries.

First we'll add a "New post" link.

<!-- app/views/layouts/application.html.erb -->
<!-- ... -->
<div class="sidebar two columns">
  <nav>
    <ul>
      <li><%= link_to "New post...", new_post_path %></li>
    </ul>
  </nav>
</div>
<!-- ... -->

Adding a "New post" link

For that new_post_path call to work we need a route:

# config/routes.rb
# ...
resources :posts
# ...

Routes for posts

And for the route to work we need a controller:

# app/controllers/posts_controller.rb
class PostsController < ApplicationController
  def new
    @post = @blog.new_post
  end
end

The posts controller

Looks like we need the @blog object in the PostsController as well as the BlogController. Time to factor the code that sets it out into the ApplicationController:

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  # ...
  before_filter :init_blog
  private
  def init_blog
    @blog = Blog.new
  end
end

Making the blog instance available from the PostsController

That's enough to make the link render but we need somewhere for it to go. We'll create a quick "new post" form:

<!-- app/views/posts/new.html.erb -->
<h1>New Post</h1>
<%= form_for @post do |f| %>
  <%= f.text_field :title %>
  <%= f.text_area :body %>
  <%= f.submit %>
<% end %>

A form for new posts

Using ActiveModel

We're almost there, but in order to construct paths and render forms, Rails has certain expectations about the protocols that a model object will respond to—protocols that our basic Post class doesn't know about. The easiest way to make it compliant is to add a couple of modules from ActiveModel. We also need to implement one method ourselves: #persisted?. For now, it's sufficient to just return false.

class Post
  extend ActiveModel::Naming
  include ActiveModel::Conversion

  # ...

  def persisted?
    false
  end

  # ...
end

Augmenting Post with ActiveModel modules

With that change, we can click on our "New post…" link and see a new post form.

New post form

The Post creation action

So far so good. Now let's make submitting the form work. We need to add a PostsController#create action.

class PostsController < ApplicationController
  # ...
  def create
    @post = @blog.new_post(params[:post])
    @post.publish
    redirect_to root_path, notice: "Post added!"
  end
  # ...
end

The PostsController #create action

We can get rid of the demo posts in BlogController now.

class BlogController < ApplicationController
  def index
  end
end

No more placeholder entries

Making the Blog object into a Singleton

There's just one little problem remaining: a new blog object—and hence a new, blank list of posts!—is created with every request. We need to make a single blog object last across requests.

Our app only supports a single blog at the moment, so we'll just store a single, app-wide Blog instance in a constant, set up in an initializer.

# config/initializers/blog.rb
THE_BLOG = Blog.new

Setting up the blog singleton in an initializer

Now we change the before filter which sets the @blog variable to use that constant:

class ApplicationController < ActionController::Base
  # ...
  def init_blog
    @blog = THE_BLOG
  end
end

Using the blog singleton object

And now we can submit new posts!

New post added

Object Trees and Lone Wolves

Let's take one last look at our controller action for submitting new posts.

def create
  @post = @blog.new_post(params[:post])
  @post.publish
  redirect_to root_path, notice: "Post added!"
end

The PostsController #create action, again

Notably absent from this action is any direct reference to the Post class. We talked about this a little already in Posts vs. Entries. But I want to look at this from one more angle before moving on.

Object-oriented programs tend, more often than not, to evolve into a roughly tree-shaped structure. This is not surprising, since it mimics how we tend to think of the world around us as well as the world inside our programs. A website has a blog, a blog has categories, categories contain entries, an entry has tags and comments, and so on.

It's easy to go nuts with this, of course. When I first learned about OOP there was a huge emphasis on using it to break the world down into hierarchical ontologies which made for pretty UML diagrams. The problem with this view of OOP is that most complex systems don't have a single natural hierarchy to them. To use the blog example, we could also say a blog has authors, authors have entries, and the entries might have categories associated with them. This is no more "correct" than the version where categories are the concept which "contain" entries.

But the point is that we naturally break our systems into hierarchies, which practically speaking means trees of objects. In a tree of objects, each object mediates access to its "leaf" or "branch" objects. So you might access a blog object from a site object, a category from the blog object, and an article from the category object.

(I should probably clarify something at this point: I'm talking here about "trees" in the sense of "has-many" and "belongs-to", not in the sense of "is-a" inheritance trees.)

This pattern has some attractive properties. Having "parent" (I use the term loosely) objects mediate access to "child" objects gives us a natural "seam" in our design. At the seam, we can do a number of things:

  1. Control access based on authorization information—as we saw in "Posts vs. Entries".
  2. Pre-load child objects with a reference back to their parent. The #new_post method above does this, enabling the post object to publish itself.
  3. Save a reference to the child object in the parent. ActiveRecord's autosave facility does this. When we are careful to access child objects from the parent, ActiveRecord is able to persist all new, modified, or deleted child objects automatically when the parent object is saved.
  4. Decide the actual class of the object to be instantiated, based on the parameters or the state of the parent.

And the great thing about having the seam is that we don't have to think about any of those concerns at the beginning. We can add them in at the seam point transparently, as the need arises.

Every time we put in an explicit reference to a class, rather than creating or accessing the object via its "parent" in the tree, we are implicitly rejecting all of these advantages. Sometimes this may be what we want. More often, it's a mistake.

Consider these common, seemingly innocuous lines:

@post = Post.new(params[:post])
@post = Post.create(params[:post])
@post = Post.find(params[:id])

Lone wolf instantiations

Every one of these creates an object which is a "lone wolf". It's an object with no family, no ties to its community. If this object was arrested and had a bail hearing, it would be considered a flight risk. It is an object which believes it will never be part of something bigger than itself.

It's also an object which makes our tests more painful. How many times have you seen test setup that looks like this:

# RSpec
post = stub(:post)
Post.stub(:new).and_return(post)

Or:

# Mocha
Post.any_instance.stubs(:foo)

These lines just add extra work and noise to tests. Having to stub #new, or override a method on every instance of an object, is a hamfisted, shotgun-blast approach to testing. And it is usually made necessary as a result of "lone wolf" object creation.

Seams are useful things to have in growing programs. Rejecting them has serious implications for extensibility, as well as for security and correctness. That's why I regard any bare references to a class as a red flag, especially in Rails controller actions. I feel a lot more comfortable when I can clearly see the tree structure—trunk to limb, limb to branches, branches to twigs, twigs to leaves.

Getting the tests running again

Unfortunately, our changes to the Post model have broken our tests.

$ ruby spec/models/post_spec.rb 
app/models/post.rb:2:in `<class:Post>': 
  uninitialized constant Post::ActiveModel (NameError)
        from app/models/post.rb:1:in `<top (required)>'
        from spec/models/post_spec.rb:2:in `require_relative'
        from spec/models/post_spec.rb:2:in `<main>'

Our nicely isolated tests don't know where to find ActiveModel.

We could fix this by requiring ActiveModel somewhere in the test setup. But we don't actually need ActiveModel for the tests to pass. And we really like how fast the tests run with so few dependencies. Is there some way we can continue to keep our models as lightweight as possible while testing their behavior, and only use dependencies like ActiveModel when running as part of an application? Let's find out.

Our first crack at this problem might be to simply define empty versions of the needed modules in the test file, before requiring the post.rb file.

# spec/models/post_spec.rb

# ...
module ActiveModel
  module Naming; end
  module Conversion; end
end

require_relative '../../app/models/post'
# ...

Ghost modules

This gets the test passing again. But this approach is problematic. Let's say we had these tests running as part of a Rake task which also included full-stack tests. As a result, the task loaded the full Rails environment. Depending on load order, this file with its empty definitions of ActiveModel::Naming and ActiveModel::Conversion might cause ActiveSupport to think that those modules had already been loaded—and therefore never load the real versions. This is definitely not what we want.

Stubbing out modules

What we really need is a way to conditionally create empty or "stub" modules only if a) they are not already defined; and b) they are not auto-loadable. Here's a method which does just that.

# spec/spec_helper_lite.rb
def stub_module(full_name)
  full_name.to_s.split(/::/).inject(Object) do |context, name|
    begin
      context.const_get(name)
    rescue NameError
      context.const_set(name, Module.new)
    end
  end
end

stub_module

This method uses #const_get to attempt to reference the given module. If the module is defined, or if calling #const_get causes it to be auto-loaded, the method does nothing more. But if #const_get fails to turn up the module, it defines an anonymous empty module to act as a placeholder.

Here it is being used to stub out modules in our Post spec:

# ...
require_relative '../spec_helper_lite'
stub_module 'ActiveModel::Conversion'
stub_module 'ActiveModel::Naming'
require_relative '../../app/models/post'
# ...

Using stub_module

The tests are once again passing:

$ ruby spec/models/post_spec.rb                            
Loaded suite spec/models/post_spec
Started
......
Finished in 0.000530 seconds.
6 tests, 7 assertions, 0 failures, 0 errors, 0 skips

Adding timestamps

Two features that are pretty much required for a blog are 1) time-stamped posts; and 2) listing posts in reverse-chronological order. So far our blog supports neither of these. Time to fix that.

Once again, we'll take an outside-in approach and first find a place in the views to display the (as-yet nonexistent) timestamp.

<!-- app/views/blog/_entry.html.erb -->
<article>
  <header>
    <p><time pubdate="pubdate"><%= entry.pubdate %></time></p>
    <h3><%= entry.title %></h3>
  </header>
  <p><%= entry.body %></p>
</article>

Adding timestamps to the view

An entry's publishing timestamp should start out blank and then be filled in once it is published. Let's spec that out.

# ...
describe "#pubdate" do
  describe "before publishing" do
    it "is blank" do
      @it.pubdate.must_be_nil
    end
  end

  describe "after publishing" do
    before do
      @it.blog = stub!
      @it.publish
    end
    it "is a datetime" do
      @it.pubdate.class.must_equal(DateTime)
    end
  end
end
# ...

Specifying publishing timestamps

Note the use of stub!. MiniTest's built-in mocking was becoming insufficient for our needs, so we've supplemented it with rr, a succinct but powerful test double library. Here's the setup for that:

# spec/spec_helper_lite.rb
require 'rr'

class MiniTest::Unit::TestCase
  include RR::Adapters::MiniTest
end

rr setup

We also need to add rr to our Gemfile:

# Gemfile
# ...
group :development, :test do
  # ...
  gem 'rr'
  # ...
end
# ...

rr in the Gemfile

Implementing #pubdate is just a matter of adding a new attribute accessor:

class Post
  attr_accessor :blog, :title, :body, :pubdate
  # ...

Adding Post#pubdate

The timestamp isn't much help if it doesn't use the current time. Let's add a spec asserting that it does.

# ...
describe "#pubdate" do
  # ...

  describe "after publishing" do
    before do
      @clock = stub!
      @now = DateTime.parse("2011-09-11T02:56")
      stub(@clock).now(){@now}
      @it.blog = stub!
      @it.publish(@clock)
    end

    # ...

    it "is the current time" do
      @it.pubdate.must_equal(@now)
    end
  end
end
# ...

Specifying timestamp correctness

That's a lot of test setup; any more and we'd want to find a way to refactor the tests.

Now, besides creating a stubbed blog instance, we're also creating a @clock stub. We create a fixed @now time for the clock to respond with when #now is called on it. Then we pass the clock into the Post#publish method and assert that it uses the @now time to set its #pubdate attribute.

Sensible defaults for injected dependencies

Wait a second… does this mean the app will always have to pass a clock object in to Post#publish now? Won't this break our other tests where we pass nothing to #publish?

Sensible defaults to the rescue! Let's update the Post#publish method to make this test pass:

# ...
def publish(clock=DateTime)
  self.pubdate = clock.now
  blog.add_entry(self)
end
# ...

A default clock

We add a clock parameter, and make it default to DateTime. That way in the absence of any parameter, the method will just take its timestamp from the system clock via DateTime.

Why make it possible to pass the clock in? We might turn the question around: our tests up until now have been very careful to isolate our System Under Test (SUT) from any external dependencies, so why make an exception for the system clock? By making it possible to pass a clock object in, we make it very easy to test the behavior of #publish deterministically, without resorting to heavy-handed clock-overriding libraries such as Timecop. To quote Growing Object Oriented Software, Guided by Tests:

we've seen so many systems that are impossible to test because the developers did not isolate the concept of time.

But there are other advantages to passing the clock in, which we'll discuss in the next section.

OMG Dependency Injection!

In constructing carefully isolated tests, we have now used dependency injection twice. First, we used setter injection to strategize how Blog objects create new entries:

class Blog
  # ...
  attr_writer :post_source
  # ...
  private
  def post_source
    @post_source ||= Post.public_method(:new)
  end
end

Setter injection

And then moments ago we used parameter injection to pass in a clock object to Post#publish:

# ...
def publish(clock=DateTime)
  self.pubdate = clock.now
  blog.add_entry(self)
end
# ...

Parameter injection

There is a lot of bad press around the Dependency Injection pattern these days. Most of it probably stems from the experiences people have had with heavyweight DI frameworks in Java and C#. As you can see, though, at its core Dependency Injection is just about making it possible to pass an object's collaborators in from the outside. As we've just seen, Ruby makes it very easy to make dependencies inject-able while still having sensible built-in defaults for those dependencies.

Is all this care taken to make dependencies inject-able solely in order to satisfy our need for isolated tests? Well, certainly that's what drives us to provide for DI in the first place. But what's interesting about this discipline of strict isolation is the type of object design it pushes us towards.

Let's say we wanted to add the ability to post-date or pre-date some posts. Currently, Post sets its pubdate at publish time, so we can't just set the date when we create the post. Of course, we could add some new behavior to Post to implement custom pubdate setting; but because we've made the clock inject-able, we can implement custom publish dates without making any changes at all to Post:

fixed_clock = OpenStruct.new(now: DateTime.parse(params[:pubdate]))
@post.publish(fixed_clock)

A fixed clock

Here, we've just used OpenStruct to create a quick ad-hoc object which responds to the #now method with a fixed date.

We could get as fancy as we wanted with custom clocks. We could implement a delay so that posts have a review period before going live:

class DelayClock
  def now
    DateTime.now + 24.hours
  end
end
# ...
@post.publish(DelayClock.new)

A delayed clock

All without making any changes to the Post class.

According to the Single Responsibility Principle (SRP), a class should have one, and only one, reason to change. A corollary of this principle is that when a new requirement requires changes to more than one class or module, at least one of them probably has too many responsibilities.

Consider that if we had not made the clock an inject-able dependency, implementing post-dating and pre-dating would have meant changing the code in (at least) two places: once in the controller, and once in the Post model. By contrast, in our current design we are able to implement the feature by changing the code in only one place.

Our tests influenced us to factor out the responsibility for determining a post's timestamp into a discrete object. By letting our discipline of test isolation drive our design, we arrived at a system that respects SRP without even really thinking about it.

Injecting only the dependencies we need

You might have wondered why we inject a callable @post_source instead of simply having a @post_class variable which defaults to Post.

For one thing, it makes our test setup easier. If we injected a "post class" mock, we'd have to do one of two things:

  1. Stub .new on the real Post class, which would override the method globally and potentially interfere with any test code which also calls Post.new.
  2. Create a @post_class stub and a @post stub, and stub the former to return the latter.

By contrast, all we need to do to stub the callable version is this:

blog.post_source = ->{ new_post }

As usual with mocks and stubs, however, the relative ease of passing in a callable, compared with stubbing a class's .new method, points to a deeper design decision. When we say that Blog calls Post.new, we are implying that Blog has Post as a collaborator, and may potentially call any method on Post. The destinies of these two classes are now entwined.

By saying, instead, that Blog#new_post depends only on "some callable which will return a post when called", we are explicitly holding off from binding Blog to the Post class interface. We are choosing to make the minimum necessary connection between the two classes. If, down the road, we add a call to another Post class method, say, Post.find, our isolated tests will flag it and remind us to make a conscious choice about adding the new dependency.

Oh by the way, that post_source? It's a factory, as in "the Factory pattern". I didn't want to scare you off with big pattern talk, so I snuck it in under an assumed name ;-)
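To see why that matters, here's a sketch of how the post source might later grow into a factory that picks a post class based on the attributes, without touching Blog#new_post or any of its callers (PhotoPost and the :photo_url attribute are hypothetical, borrowed from the earlier thought experiment):

class Blog
  # ...
  private
  def post_source
    # Only the factory needs to know which class to instantiate;
    # callers keep asking the blog for "a new post".
    @post_source ||= ->(attrs = {}) {
      attrs.key?(:photo_url) ? PhotoPost.new(attrs) : Post.new(attrs)
    }
  end
end

A post source acting as a factory

Controllers would keep calling @blog.new_post(...) exactly as before; the knowledge of the different post classes stays in one place.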

Sorting and limiting posts

OK, now we have timestamps on the posts. On a proper blog, posts are listed in reverse-chronological order, with the most recent post at the top. Let's implement sorting by timestamp, and while we're at it, let's limit the display to the ten most recent posts.

Spec:

# spec/models/blog_spec.rb
# ...
describe "#entries" do
  def stub_entry_with_date(date)
    OpenStruct.new(pubdate: DateTime.parse(date))
  end
  it "is sorted in reverse-chronological order" do
    oldest = stub_entry_with_date("2011-09-09")
    newest = stub_entry_with_date("2011-09-11")
    middle = stub_entry_with_date("2011-09-10")
    @it.add_entry(oldest)
    @it.add_entry(newest)
    @it.add_entry(middle)
    @it.entries.must_equal([newest, middle, oldest])
  end
  it "is limited to 10 items" do
    10.times do |i|
      @it.add_entry(stub_entry_with_date("2011-09-#{i+1}"))
    end
    oldest = stub_entry_with_date("2011-08-30")
    @it.add_entry(oldest)
    @it.entries.size.must_equal(10)
    @it.entries.wont_include(oldest)
  end
end
# ...

Specifying entry limiting and ordering

To implement this, we remove the entries attribute reader we created earlier, and substitute a "real" method:

# app/models/blog.rb
# ...
def entries
  @entries.sort_by{|e| e.pubdate}.reverse.take(10)
end
# ...

Sorting and limiting entries

We also have to change the #add_entry method to reference the @entries collection directly, now that #entries returns a modified copy:

# ...
def add_entry(entry)
  @entries << entry
end
# ...

With that, our blog is starting to behave more like the real thing:

Sorted entries

Running the tests again, we find that one of them has broken because of a missing #pubdate method on the plain Object.new we used as a stand-in for a blog entry. Changing it to an rr stub object is sufficient to make the test pass again.

describe Blog do
  describe "#add_entry" do
    it "adds the entry to the blog" do
      entry = stub!
      # ...
    end
  end
  # ...
end

Stubbing with rr

Adding validation

Blog posts, at the very least, should have a title. Let's add a validation to enforce this constraint.

Here's the specification:

# ...
it "is not valid with a blank title" do
  [nil, "", " "].each do |bad_title|
    @it.title = bad_title
    refute @it.valid?
  end
end

it "is valid with a non-blank title" do
  @it.title = "x"
  assert @it.valid?
end 
# ...

Specifying entry title validation

We could manually implement a #valid? method here. But we know that Rails needs more than just a #valid? method in order to present validation failures in a user-friendly way. And besides, why write that method when it's a one-liner using ActiveModel?

class Post
  # ...
  include ActiveModel::Validations
  validates :title, presence: true
  # ...
end

Implementing entry title validation with ActiveModel

Now that we're using ActiveModel to satisfy our own expectations as well as Rails' expectations, we can no longer stub out the ActiveModel modules when running in isolation. We must use the real thing.

These lines will have to go:

stub_module 'ActiveModel::Conversion'
stub_module 'ActiveModel::Naming'

And we have to add a requirement for ActiveModel to the model file:

require 'active_model'

When running in the full app, this won't be necessary. But we need it in order to continue running our tests outside of the Rails environment. By explicitly requiring ActiveModel only in files which need it, we don't saddle unrelated tests with the extra load time.

While we're adding validation to the Post class, let's also modify the contract of #publish to only add posts to Blog when the post is valid, and to return false when validation fails.

# ...
describe "#publish" do
  # ...
  describe "given an invalid post" do
    before do @it.title = nil end

    it "wont add the post to the blog" do
      dont_allow(@blog).add_entry
      @it.publish
    end

    it "returns false" do
      refute(@it.publish)
    end
  end
end
# ...

Specifying publishing guards
class Post
# ...
def publish(clock=DateTime)
  return false unless valid?
  self.pubdate = clock.now
  @blog.add_entry(self)
end
# ...
end

Implementing publishing guards

With these changes in place, we can update the PostsController to handle validation failures:

# ...
def create
  @post = @blog.new_post(params[:post])
  if @post.publish
    redirect_to root_path, notice: "Post added!"
  else
    render "new"
  end
end
# ...

Making the PostsController validation-aware

With that change and some tweaks to the view (not shown), we now get an error message when we try to submit a blog post with no title.

Validation error message

Introducing the Exhibit Pattern

No blog is complete without the ability to post funny cat pictures. We'd like to add the ability to attach a picture URL to posts. In addition, we want to present posts differently if they have a picture URL. The "body" text will become the picture caption.

As before, we'll start at the view level and work inward. We'll add a picture URL field to the new post form:

<!-- app/views/posts/new.html.erb -->
<%= form_for @post do |f| %>
  <!-- ... -->
  <%= f.label :image_url, "Picture URL:" %>
  <%= f.text_field :image_url %>
  <!-- ... -->
<% end %>

Adding a picture URL

And we'll create partials for both displaying text-only entries and for displaying picture entries. Here's the one for picture entries:

<!-- app/views/posts/_picture_body.html.erb -->
<figure>
  <img src="<%= post.image_url %>"/>
  <figcaption><%= post.body %></figcaption>
</figure>

A partial for picture entries

We're using the HTML5 <figure> and <figcaption> tags to mark up a picture semantically.

We also go ahead and add an image_url attribute to the Post model.

# ...
attr_accessor :blog, :title, :body, :image_url, :pubdate
# ...

Adding an image URL to the Post model

Now the question is how to ensure that the correct partial is rendered based on the type of post. Initially, we might think to do something like this:

<!-- app/views/blog/_entry.html.erb -->
<!-- ... -->
<% if entry.image_url.present? %>
  <%= render "/posts/picture_body", post: entry %>
<% else %>
  <%= render "posts/text_body", post: entry %>
<% end %>
<!-- ... -->

Conditional logic in the view

But this raises some warning flags. Logic in views is almost always bad news, even logic as simple as this. Speaking from my own experience, a lot of the technical debt I've seen in Rails projects has been in convoluted view code. If possible, it would be nice to avoid going down that road so early in the project.

And anyway, from an Object-Oriented Design perspective this just feels wrong. Remember those beginning OO examples, where you send a "draw" message to a "shape" object, and if it's a Circle it will draw a circle, and if it's a square it will draw a square? What's the point of using an OO language if we can't use polymorphism, and instead fall back on conditionals everywhere?
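Here's that toy example in Ruby, just to make the contrast explicit (a throwaway illustration, nothing to do with our app):

class Circle
  def draw
    "drawing a circle"
  end
end

class Square
  def draw
    "drawing a square"
  end
end

# No class-checking conditionals; each shape knows how to draw itself
[Circle.new, Square.new].map { |shape| shape.draw }
# => ["drawing a circle", "drawing a square"]

Polymorphism instead of conditionals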

What we'd like to be able to write in our view template code is something like this:

<% entry = exhibit(entry, self) %>
<article>
  <!-- ... -->
  <%= entry.render_body %>
</article>

The view code we'd like to write

Conceptually, what we have are two post types: a "picture post", and a "text post". The core of OO philosophy is representing discrete concepts as objects. So let's take these two concepts and represent them as objects.

But what kind of objects? The models in an MVC application are supposed to be presentation-agnostic—they shouldn't know anything about how to display themselves. And we know we don't want to put business logic into the views. It seems like we need a third kind of object between a Model and a View.

Exhibit A

To satisfy our need for an object which mates a model and a view, we'll use what I've taken to calling an Exhibit object. If the Model is concerned with storing and manipulating business data, and the View is concerned with displaying it, you can think of the Exhibit as standing between them deciding which data to show, and in what order. It may also provide some extra presentation-specific information (such as the specific URLs for related resources) which the business model has no knowledge of by itself.

The Exhibit object is so named because it is like a museum display case for an artifact or a work of art. It does not obscure any of the features of the object being presented. Rather, it tries to showcase the object in the best light to a human audience, while also presenting meta-information about the object and cross-references to other objects in the museum's collection.

Technically, exhibit objects are a type of Decorator specialized for presenting models to an end user. In fact, I briefly considered calling them "Presenter Decorators", but that term is a bit unwieldy, as well as being a little too easy to confuse with other "Presenter" terms (on which more later).

We write a spec for the exhibit we need:

# spec/exhibits/picture_post_exhibit_spec.rb
require_relative '../spec_helper_lite'
require_relative '../../app/exhibits/picture_post_exhibit'

describe PicturePostExhibit do
  before do
    @post = OpenStruct.new(
      title:   "TITLE", 
      body:    "BODY", 
      pubdate: "PUBDATE")
    @context = stub!
    @it = PicturePostExhibit.new(@post, @context)
  end

  it "delegates method calls to the post" do
    @it.title.must_equal "TITLE"
    @it.body.must_equal "BODY"
    @it.pubdate.must_equal "PUBDATE"
  end

  it "renders itself with the appropriate partial" do
    mock(@context).render(
      partial: "/posts/picture_body", locals: {post: @it}){
      "THE_HTML"
    }
    @it.render_body.must_equal "THE_HTML"
  end
end

Specifying an Exhibit for a picture post

We define stubs for both a "post" object, and a "context" object. The @post stub stands in for a Post instance. The @context object stands in for the Rails template object which is the context that all views are rendered in. When you call helpers like #render or #form_for in a Rails view, you're calling them on the template object.

Then we specify that the exhibit must 1) act as a "pass-through" object, forwarding any methods it doesn't know about on to the model object; and 2) know how to use the context to render an appropriate post body partial. The pass-through property satisfies the "transparency" requirement of the Decorator pattern, of which Exhibit is a special case.

Next we write an implementation which satisfies this spec.

# app/exhibits/picture_post_exhibit.rb
require 'delegate'
class PicturePostExhibit < SimpleDelegator
  def initialize(model, context)
    @context = context
    super(model)
  end

  def render_body
    @context.render(partial: "/posts/picture_body", locals: {post: self})
  end
end

Implementing the picture post Exhibit

We've defined a new directory, app/exhibits, for Exhibit objects. In it, we've created a PicturePostExhibit class.

This class inherits from SimpleDelegator. SimpleDelegator is a Ruby standard library class which has a very simple job: forwarding all calls to an underlying object. It's not very useful in and of itself, but as a basis for defining Decorator objects it is quite handy.
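
To get a feel for what SimpleDelegator does on its own, here's a quick illustrative snippet (not part of the app):

require 'delegate'

decorated = SimpleDelegator.new("paint")
decorated.upcase    # => "PAINT" -- the call is forwarded to the wrapped String
decorated.length    # => 5

SimpleDelegator forwarding in isolation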

In the exhibit initializer, we save the view context in an instance variable. Then we call the SimpleDelegator initializer with super, to set up delegation to the model object.

In the #render_body method, we use the saved @context to render a partial for a picture-type post.

The exhibit for text-only posts is nearly identical. Because it's so similar, I'll omit the spec for it and just show the implementation:

# app/exhibits/text_post_exhibit.rb
require 'delegate'
class TextPostExhibit < SimpleDelegator
  def initialize(model, context)
    @context = context
    super(model)
  end

  def render_body
    @context.render(partial: "/posts/text_body", locals: {post: self})
  end
end

An Exhibit for text posts

The only difference here is a different partial being rendered.

Now we need an easy way to wrap a model object in the appropriate exhibits (if any). Let's spec out a helper to do that:

# spec/helpers/exhibits_helper_spec.rb
require_relative '../spec_helper_lite'
require_relative '../../app/helpers/exhibits_helper'

stub_class 'PicturePostExhibit'
stub_class 'TextPostExhibit'
stub_class 'Post'

describe ExhibitsHelper do
  before do
    @it = Object.new
    @it.extend ExhibitsHelper
    @context = stub!
  end

  it "decorates picture posts with a PicturePostExhibit" do
    post = Post.new
    stub(post).picture?{true}
    @it.exhibit(post, @context).must_be_kind_of(PicturePostExhibit)
  end

  it "decorates text posts with a TextPostExhibit" do
    post = Post.new
    stub(post).picture?{false}
    @it.exhibit(post, @context).must_be_kind_of(TextPostExhibit)
  end

  it "leaves objects it doesn't know about alone" do
    model = Object.new
    @it.exhibit(model, @context).must_be_same_as(model)
  end
end

Specifying the exhibit helper

By its nature, ExhibitsHelper is a piece of code which will have to reference a lot of different classes, both model classes and exhibit classes. In order to avoid having test dependencies on all those class definitions, we define and use a #stub_class test helper which is almost identical in definition to the #stub_module method we wrote before.
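
For reference, here is a minimal sketch of what such a #stub_class helper might look like, assuming all it needs to do is conjure up an empty stand-in constant when the real class isn't already loaded:

def stub_class(name)
  # Define an empty placeholder class unless the real one is already loaded
  Object.const_set(name, Class.new) unless Object.const_defined?(name)
end

A possible #stub_class helper (sketch)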

Here's helper code which satisfies the spec:

# app/helpers/exhibits_helper.rb
module ExhibitsHelper
  def exhibit(model, context)
    # Doing a string comparison because of Rails class-reloading weirdness
    case model.class.name
    when 'Post'
      if model.picture?
        PicturePostExhibit.new(model, context)
      else
        TextPostExhibit.new(model, context)
      end
    else
      model
    end
  end
end

Implementing the exhibit helper

Hey, I thought we were getting *rid* of conditionals! That's just a giant mass of conditionals! This is true. Unfortunately, it's not always possible to completely eliminate type-based conditionals. What we can do is isolate the conditionals to a single place, rather than scattering them all over our view code. That's exactly what we're trying to do here. Anywhere we might have done an if...then...else in a view template based on an object's class or traits, we can instead add an exhibit to handle the conditional behavior polymorphically. All the conditionals are consolidated in this one helper method, which decides which exhibit(s) to apply to a given object.

The helper code above uses a #picture? predicate method on Post objects. Let's quickly implement that.

Spec:

# ...
describe "#picture?" do
  it "is true when the post has a picture URL" do
    @it.image_url = "http://example.org/foo.png"
    assert(@it.picture?)
  end
  it "is false when the post has no picture URL" do
    @it.image_url = ""
    refute(@it.picture?)
  end
end
# ...

Spec for a #picture? predicate

Implementation:

# ...
def picture?
  image_url.present?
end
# ...

The #picture? predicate

Because we'll probably be using the #exhibit helper method all over the place in the future, we'll put it in our ApplicationController:

class ApplicationController < ActionController::Base
  # ...
  helper :exhibits
  # ...
end

Adding the exhibits helper to the ApplicationController

Now, with our tiny homegrown exhibit framework in place, we rewrite the blog entry partial.

<% entry = exhibit(entry, self) %>
<article>
  <header>
    <p><time pubdate="pubdate"><%= entry.pubdate %></time></p>
    <h3><%= entry.title %></h3>
  </header>
  <%= entry.render_body %>
</article>

Using the entry exhibit in a view

No more conditionals in the view! Just a simple method call to #render_body which Does The Right Thing.

Here's how it looks when we post a picture:

Posting a picture

What about Presenters?

Now, if you follow trends in the Rails community at all, right about now you're probably exclaiming "hang on, aren't you talking about Presenters?".

The truth is, in the first few drafts this whole section was about Presenters. Then I started researching the history of the Presenter concept in Ruby and Rails. And the more I read, the less certain I became about my terminology.

The idea of a Presenter was first introduced to the Rails community by Jay Fields in a series of blog articles in 2006 and 2007. Fields summarizes the Presenter pattern as follows:

The Presenter pattern addresses bloated controllers and views containing logic in concert by creating a class representation of the state of the view. An architecture that uses the Presenter pattern provides view specific data as attributes of an instance of the Presenter. The Presenter's state is an aggregation of model and user entered data.

The Presenter as described by Fields is reminiscent of the intermediate representation in the "Two Step View" pattern:

…a logical screen structure that is suggestive of the display elements yet contains no HTML.

The canonical example for this Presenter is a report or summary, where a number of disparate elements from different models need to be brought together on a single page. Rather than have the controller fetch each element individually, it instead instantiates a Presenter object which aggregates the separate elements into a single unit customized for that particular view.

Over time Fields and others elaborated and extended the Presenter concept, to include view-specific attributes (such as a list of possible values for a <select> tag); non-persisted fields (such as the "confirm password" field on an account creation page); validations and error messages (thereby keeping user-facing error strings out of the models); and, in some versions, the ability to accept updated data and save the new values back to their respective tables.

More recently, various programmers, including your author, have evolved a more granular variation on the Presenter idea, where individual model instances are wrapped with an object whose responsibility is to adapt the model for presentation. Often (but not always) these wrappers are proper Decorators, passing through any un-overridden method to the underlying model instance.

At first these differences seemed like minor variations on a theme; but the more I thought about it the more I realized they are two largely orthogonal concepts.

"Classic" presenters are oriented towards a particular view; they are, in Jay Fields' words, "class versions of your views". They have names like ReportPresenter or OrderCompletionPresenter. In contrast, this second generation of presenters are oriented primarily towards specific models. They have names like UserPresenter or PicturePostPresenter. They enable a particular model instance to render itself to a page.

Taking all this into consideration, I realized that if I persisted in calling these things "Presenters", I'd be—at best—further muddying the semantic waters. As I mentioned earlier, I considered calling them "Presenter Decorators", but that still seemed like it could lead to confusion. At the same time I didn't want to simply call them "Decorators" because I think they are a sufficiently specialized kind of Decorator to deserve their own designation. Eventually I settled on "Exhibit", a term which as far as I know is unencumbered by previous pattern associations.

When we tease these two concepts of "Presenter" and "Exhibit" into separate entities, we realize that they are complementary patterns that could easily work together in an application. This text deals mainly with Exhibits, but it's easy to imagine an app in which Exhibits are aggregated together under a combined Presenter for a particular page.
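
As a rough sketch of how the two could cooperate, a page-level Presenter might gather up exhibited models for a template to consume (the FrontPagePresenter name here is purely hypothetical):

# A hypothetical page-level Presenter which aggregates exhibited models
class FrontPagePresenter
  include ExhibitsHelper

  def initialize(blog, context)
    @blog    = blog
    @context = context
  end

  # Each entry is wrapped in whatever exhibits apply to it
  def entries
    @blog.entries.map { |entry| exhibit(entry, @context) }
  end
end

Sketch of a Presenter aggregating Exhibits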

Note: By introducing the "Exhibit" terminology, my intention is not to impose my own naming on anyone else's work. I simply want to keep my ideas from further polluting the already overburdened "Presenter" namespace. Others are welcome to use the "Exhibit" term if they find it helpful, but in the context of this text I am only applying the term to the presentational decorators described here.

Exhibit Object Characteristics

For the purposes of clarity, here's a rundown of the essential characteristics of an Exhibit object.

An Exhibit object:

  • Wraps a single model instance.
  • Is a true Decorator. All unrecognized messages are passed through to the underlying object. This facilitates a gradual migration to the use of Exhibits to encapsulate presentation knowledge, since they can be layered onto models without any change to the existing views. It also enables multiple Exhibits to be layered onto an object, each handling different aspects of presentation.
  • Brings together a model and a context. Exhibits need a reference to a "context" object—either a controller or a view context—in order to be able to render templates as well as construct URLs for the object or related resources.
  • Encapsulates decisions about how to render an object. The tell-tale of an Exhibit is telling an object "render yourself", rather than explicitly rendering a template and passing the object in as an argument.
  • May modify the behavior of an object. For instance, an Exhibit might impose a scope on a Blog#entries association which only returns entries that are visible to the current user (as determined from the Exhibit's controller context). Or it might reformat the return value of a #social_security_number method to include dashes and have all but the last four digits obscured: ***-**-5678. A sketch of this kind of behavior-modifying exhibit follows this list.
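
Here is a small hypothetical sketch of such a behavior-modifying exhibit (the class and method names are invented for illustration, and for simplicity it skips the context argument):

require 'delegate'

# A hypothetical exhibit which masks part of the model's data for display
class SocialSecurityExhibit < SimpleDelegator
  def social_security_number
    digits = __getobj__.social_security_number
    "***-**-#{digits[-4..-1]}"   # obscure all but the last four digits
  end
end

A sketch of a behavior-modifying exhibit

Because exhibits are true decorators, a wrapper like this can be layered on top of other exhibits wrapping the same model.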

Refactoring the exhibits

The two exhibits we defined earlier are nearly identical. Clearly, they are ripe for refactoring. Let's take care of that. We'll move the commonalities into an Exhibit base class:

# app/exhibits/exhibit.rb
require 'delegate'
class Exhibit < SimpleDelegator  
  def initialize(model, context)
    @context = context
    super(model)
  end
end

An Exhibit base class

Now our exhibits are a lot slimmer:

require_relative 'exhibit'
class PicturePostExhibit < Exhibit
  def render_body
    @context.render(partial: "/posts/picture_body", locals: {post: self})
  end
end

The refactored PicturePostExhibit

The specs we wrote earlier help ensure that we haven't broken anything by performing this refactoring.

Before we move on, let's add a couple of extras to the Exhibit base class.

class Exhibit < SimpleDelegator  
  # ...
  def to_model
    __getobj__
  end
  def class
    __getobj__.class
  end
end

Making an Exhibit look more like a model

We don't have time to demonstrate it here, but these will help prevent certain "gotchas" down the road. The first one defines #to_model to return the wrapped model (the strange #__getobj__ method is how SimpleDelegator refers to its underlying object). The second one is a flat-out fib: it redefines #class to return the class of the original model, instead of the class of the exhibit. Together, these methods will help ensure that Rails helpers such as #form_for don't get confused when they encounter models wrapped in exhibits.

Refactoring #exhibit

A little while ago we wrote this:

# app/helpers/exhibits_helper.rb
module ExhibitsHelper
  def exhibit(model, context)
    # Doing a string comparison because of Rails class-reloading weirdness
    case model.class.name
    when 'Post'
      if model.picture?
        PicturePostExhibit.new(model, context)
      else
        TextPostExhibit.new(model, context)
      end
    else
      model
    end
  end
end

The ExhibitsHelper, again

On one hand, this code makes it very easy to see, in one place, what exhibits will be applied to a given object. On the other hand, even with just three exhibits it's already a nasty nested conditional. We wouldn't want to keep adding to it without refactoring it somehow.

Let's look at one possible refactoring. First, we rewrite the ExhibitsHelper#exhibit method to delegate to a class method on the Exhibit base class.

# ...
def exhibit(model, context)
  Exhibit.exhibit(model, context)
end
# ...

Delegating #exhibit

Next we implement Exhibit.exhibit to iterate through a list of exhibits, giving each one an opportunity to wrap the provided object.

class Exhibit < SimpleDelegator
  # ...
  def self.exhibit(object, context)
    exhibits.inject(object) do |object, exhibit|
      exhibit.exhibit_if_applicable(object, context)
    end
  end
  # ...
end

Searching a list of potential exhibits

This code bears a strong resemblance to the Chain of Responsibility pattern. It differs from the traditional version of that pattern in that it doesn't return as soon as the first exhibit capable of wrapping the object is found.

We define Exhibit.exhibit_if_applicable to query a .applicable_to? predicate against the given object, and instantiate itself if the result is affirmative.

class Exhibit < SimpleDelegator
  # ...
  def self.exhibit_if_applicable(object, context)
    if applicable_to?(object)
      new(object, context)
    else
      object
    end
  end
  # ...
end

Exhibit.exhibit_if_applicable

Note that .exhibit_if_applicable is an example of "Tell, Don't Ask". This keeps the .exhibit logic nicely focused on one and only one thing: giving each exhibit an opportunity to apply itself to the object at hand.

In Exhibit, .applicable_to? will simply return false. Subclasses will have to implement it to match applicable objects.

def self.applicable_to?(object)
  false
end

Exhibit.applicable_to?

We add a .applicable_to? predicate to each concrete Exhibit subclass, implementing the appropriate matching semantics for that exhibit. For instance, the PicturePostExhibit.applicable_to? method checks to see if the given object is a picture Post.

class PicturePostExhibit < Exhibit
  def self.applicable_to?(object)
    object.is_a?(Post) && object.picture?
  end
  # ...
end

PicturePostExhibit.applicable_to?

Likewise for the other two exhibits:

class TextPostExhibit < Exhibit
  def self.applicable_to?(object)
    object.is_a?(Post) && (!object.picture?)
  end
  # ...
end
class LinkExhibit < Exhibit
  # ...
  def self.applicable_to?(object)
    object.is_a?(Post)
  end
  # ...
end

TextPostExhibit.applicable_to? and LinkExhibit.applicable_to?

Finally, we define Exhibit.exhibits to return a hard-coded list of exhibits to try.

class Exhibit < SimpleDelegator
  def self.exhibits
    [
     TextPostExhibit,
     PicturePostExhibit,
     LinkExhibit
    ]
  end
  # ...
end

A hard-coded list of exhibits

Hard-coding this list is a little non-DRY. We could instead define an .inherited callback on Exhibit which would automatically add exhibits to an internal list as they are loaded.
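
For comparison, that auto-registration approach might look something like the following sketch (not what we'll actually use):

class Exhibit < SimpleDelegator
  # Accumulate exhibit subclasses automatically as they are defined
  def self.exhibits
    @exhibits ||= []
  end

  def self.inherited(subclass)
    Exhibit.exhibits << subclass
    super
  end
end

Sketch of an .inherited-based exhibit registry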

The advantage of hard-coding the list is that it ensures a consistent and obvious ordering of exhibits. We know which exhibits may be applied, and we know the order in which they will be tried. There are advantages to both approaches, but personally, based on experience debugging auto-generated lists of classes, I lean towards the explicit list. Even though it duplicates a little bit of knowledge.

Many models to many exhibits

Our original #exhibit helper and our new refactored version share a common property which might not be immediately obvious: They both enable a many-to-many relationship between business models and exhibits. That is, some model objects may have no exhibits which apply to them. Other models may have multiple exhibits "stacked" on them. Some exhibits may be applicable only to a single model class, whereas others might be common across many types of model.

This is an important point. A big part of the power of the Exhibit pattern is being able to freely vary both sides of the Exhibit/Model relationship. It gives us two independent "axes" of change—one for business decisions, one for presentation decisions.

This is why we haven't implemented any kind of convention-based exhibit discovery—e.g. we don't automatically look up a PostExhibit for a Post object. The strength of the pattern lies in the ability to independently vary the business model and the view model, so we don't want to arbitrarily constrain ourselves by binding business models and exhibits in a one-to-one, lockstep relationship.

Do we need helpers?

Our use of exhibits raises the question: what do we need view helpers for? Anything?

In my experience helpers in Rails apps tend to devolve into large, disorganized bags of unrelated methods. Often these methods repeat the same conditional business logic over and over again. For instance, how many times have you seen helper code like this:

if current_user.logged_in?
  # ...
else
  # ...
end

Checking that the user is logged in

Thinning out helpers by taking some of their presentation responsibilities away is not a bad thing, in my view.

That said, I don't think helpers are completely useless. They are a good place to put general rendering methods which aren't tied to any particular model. For instance, we could write a helper for displaying HTML5-style images with captions:

module FigureHelper
  def figure(image_path, caption)
    content_tag(:figure) do 
      image_tag(image_path) +
        content_tag(:figcaption, caption)
    end
  end
end

A helper for rendering HTML5 figures

This helper generates markup that looks like this:

<figure>
  <img alt="freshpaint.jpg" src="http://example.org/f.jpg" />
  <figcaption>Fresh paint</figcaption>
</figure>

Generated figure markup

This is pretty generic code, and I think it works well in a helper.
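
In our app, for instance, a view could render a captioned post image with something like <%= figure(post.image_url, post.title) %> (an illustrative call, not code we're adding to the project).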

Making the data stick around

Posts, time stamps, reverse-chronological sorting, image posting… we're well on our way to a working blog engine. But I feel like there's something missing. Some little detail, if I could just put my finger on it…

…oh yeah, persistence! It would probably be good if our blog posts lasted longer than the run-time of the application server.

If you've been following along wondering "where's the ActiveRecord?", this is where we get to it. Now that we have figured out what our domain model looks like, it's time to start serializing the models to a database.

But first, a little philosophizing.

The trouble with ActiveRecord

ActiveRecord is an Object-Relational Mapper (ORM) based on the Active Record pattern from Patterns of Enterprise Application Architecture. As an ORM, it is semi-orthogonal to the business logic of your application. ORMs handle the loading and saving of objects to records in a database. The behavior of those objects, apart from persistence, is (theoretically) outside of the ORM's responsibilities.

In practice, real world Rails-based projects tend to be almost inextricably coupled to the ActiveRecord library. And not just to ActiveRecord; ActiveRecord-based apps also tend to have very tight inter-coupling between the various models in the system. In pathological cases, controllers and even views are also closely married to the details of ActiveRecord and database schema.

Part of this is doubtless due to the way ActiveRecord integrates with models. By declaring an is-a relationship between ActiveRecord and model classes, your models are no longer just domain models; they effectively are ActiveRecord. One result of this tight coupling is that novice and intermediate Rails developers are often surprised to find out that it's even permissible to have model objects which do not inherit from ActiveRecord::Base. And even after they learn this they sometimes still exile their non-ActiveRecord models to the lib/ ghetto, denying them their true place in app/models.

Melon Collie and the Infinite Protocol

Consider the case of the #find method. By inheriting from ActiveRecord::Base, you declare that your model supports #find. Find supports (at last count) four modes (:id, :first, :last, and :all), each of which can take any of twelve different options. Some of the options, such as :conditions, can accept an effectively limitless range of values.

#find is, in effect, an infinite protocol. This presents some serious difficulties. Many Rails developers have discovered, for instance, that it is very difficult to write meaningful ActiveRecord mock objects in their tests. If they strictly specify all of the #find arguments that their method-under-test must pass, they are essentially dictating the implementation of the method in the test. If, on the other hand, they stick with pure stubs which will accept any possible call to #find, their tests are less brittle, but also less useful because they don't actually specify much.

As a result, a lot of developers resort to running all of their unit tests as what are, in effect, integration tests, with "real" collaborator objects and full database interactions. The result, on the testing side, is slow tests. The result on the application code side is classes that freely call #find and friends on a half-a-dozen different collaborator classes—thus ensuring that future refactorings will be a slow and tedious process akin to un-teasing thickly matted dreadlocks.

One day, after years of witnessing and addressing the technical debt incurred in various maturing Rails codebases as a result of ActiveRecord-inspired tight coupling, I had an epiphany. What if we stopped treating ActiveRecord as the backbone of our model classes, and instead, programmed as if ActiveRecord were merely a private implementation detail?

And this is why we have, so far, programmed this application without once touching ActiveRecord. We've worked through the object representation of domain concepts—blog, posts, publishing, etc—using traditional object-oriented analysis and development.

Now we'll add persistence to the mix. We'll use ActiveRecord, because it is both convenient and powerful. But we'll attempt to do it in a way that treats it as an internal concern to our models, not as the backbone of our design.

Adding ActiveRecord

Looking at our models, it's pretty clear that we need a "posts" table to hold blog posts. So we'll start by creating a migration to create that table.

class CreatePosts < ActiveRecord::Migration
  def self.up
    create_table :posts do |t|
      t.datetime :pubdate
      t.string :title
      t.text :body
      t.string :image_url
      t.timestamps
    end
  end
  def self.down
    drop_table :posts
  end
end

Migration to add a "posts" table

Once we run this migration, we have a place to keep our blog posts. Now we need to make the Post model store itself there.

We need to make several changes to the Post and Blog code.

require 'date'
require 'active_record'
class Post < ActiveRecord::Base
  validates :title, presence: true
  attr_accessor :blog
  def picture?
    image_url.present?
  end
  def publish(clock=DateTime)
    return false unless valid?
    self.pubdate = clock.now
    @blog.add_entry(self)
  end
end

Adding ActiveRecord to Post
  • Post now inherits from ActiveRecord::Base. We require active_record for when we are running tests in isolation.
  • The various individual ActiveModel mixins are gone, subsumed into ActiveRecord::Base.
  • No more attribute accessors for title, body, and image_url. Those are handled by ActiveRecord now.
  • No more initializer. Its former functionality is rendered redundant by the AR initializer.
  • #persisted? is gone too, for the same reason.

Moving on to Blog:

class Blog
  # ...

  def initialize(entry_fetcher=Post.public_method(:all))
    @entry_fetcher = entry_fetcher
  end

  # ...

  def entries
    fetch_entries.sort_by{|e| e.pubdate}.reverse.take(10)
  end

  # ...

  def add_entry(entry)
    entry.save
  end

  private

  def fetch_entries
    @entry_fetcher.()
  end

  # ...
end

Updating Blog to use ActiveRecord
  • The @entries instance variable, which used to point to an array of entries, is gone. In its place is an @entry_fetcher variable. We're using this variable to make the strategy for finding blog entries an injectable dependency. This will make testing the class easier.
  • Since posts live in the database now, the default method for fetching a list of entries is to call Post.all.
  • Apart from using the entry fetcher instead of referencing the @entries list directly, the entries method has changed surprisingly little. Because the result of ActiveRecord's #all is Enumerable, we can still use the same sorting and filtering methods we used before. It's not efficient, but it works for now.
  • #add_entry, instead of adding the post to an internal list, now calls #save on the passed entry.

Why does Blog do the saving?

That last item warrants some more discussion. Originally, Blog#add_entry was needed because Blog maintained an internal list of entries. But now that posts are stored in the DB, can't we just call #save on them directly from Post#publish, and do away with Blog#add_entry?

Here's the thing: while the data storage strategy has changed, the conceptual model of the application ought to stay the same. And that model is that a Blog is currently the top-level object in the app, and it is responsible for creating and maintaining a list of blog entries.

Does it really matter who does the saving? Consider this: suppose one day we decide to add a feature where our blog will send notifications to our social networking accounts (Twitter, Facebook, etc.) whenever a new post is published. This publishing of notifications should probably be accessible from the top-level blog object, since it will presumably have references to the needed account information objects. Let's say there's a Blog#spam_social_networks(entry) method.

If Posts are responsible for saving themselves, the Blog object will have no way of knowing when a new post goes up, and therefore needs to be broadcast. Which means we'd probably wind up adding an after_save hook in Post, something like this:

class Post < ActiveRecord::Base
  # ...
  after_save :broadcast_entry
  # ...
  private

  def broadcast_entry
    blog.spam_social_networks(self)
  end
end

After-save hook to notify social networks

The trouble is, spamming social networks is almost entirely orthogonal to a Post's primary responsibility of representing a blog post. The origins of many a bloated model can be traced back to this kind of gradual responsibility creep.

When we keep the conceptual responsibility of adding a new post on the Blog, there's no need for callbacks:

# ...
def add_entry(entry)
  entry.save
  spam_social_networks(entry)
end
# ...

Notifying social networks from the Blog model

That's shorter and (I'd argue) a better place for the code. The larger point here is that by first building up our domain models divorced from persistence concerns, we came up with a design that closely matches our mental picture of the problem. As a result, new features that are still consistent with our original conception of the problem space tend to slot in neatly.

(You might be objecting "but wait! Now Blog has two responsibilities!" This is true, and a fair point. We can optimistically imagine that #spam_social_networks is only an entry point to a third object whose sole responsibility is sending out notifications.)
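
That imagined collaborator might look something like this sketch (the SocialNetworkNotifier class and the #post_status call on account objects are hypothetical):

# A hypothetical collaborator whose only job is broadcasting new entries
class SocialNetworkNotifier
  def initialize(accounts)
    @accounts = accounts
  end

  def announce(entry)
    @accounts.each do |account|
      account.post_status("New post: #{entry.title}")   # hypothetical account API
    end
  end
end

Sketch of a dedicated notification object

Blog#spam_social_networks would then simply delegate to an instance of this notifier.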

Updating the Blog tests

As you might imagine, we need to make a number of changes to the Post and Blog specs to adapt them to these changes. We'll start with Blog.

First of all, in creating a Blog instance to test, we now supply our own entries list instead of letting it reach out to Post for the list.

# ...
before do
  @entries = []
  @it = Blog.new(->{@entries})
end
# ...

Injecting blog entries in tests

And instead of asserting that #add_entry adds an item to an internal list, we now assert that it calls #save to add the entry:

# ...
describe "#add_entry" do
  it "adds the entry to the blog" do
    entry = stub!
    mock(entry).save
    @it.add_entry(entry)
  end
end
# ...

Asserting entries are saved

If you recall, the tests for Blog also specify that it must return only 10 entries from #entries, and they must be sorted in reverse-chronological order. We could inject a fake entries collection into the object and continue to test it as we did before. But this would make for a fragile test. We probably want to change the Enumerable code to native ActiveRecord filtering/limiting calls at some point in the future. At that point our specs would break.

One option is that we simply remove the specs when that happens, since we trust that ActiveRecord will implement sorting and filtering correctly. But do we trust ourselves to call ActiveRecord correctly?

Instead, what we'll do is move these specs from the current isolated unit test into a separate Blog integration test suite. This suite will hit the actual database.

# spec/models/blog_integration_spec.rb
require_relative '../spec_helper_full'

describe Blog do
  include SpecHelpers
  before do
    setup_database
    @it = Blog.new
  end

  after do
    teardown_database
  end

  describe "#entries" do
    def make_entry_with_date(date)
      @it.new_post(pubdate: DateTime.parse(date), title: date)
    end

    it "is sorted in reverse-chronological order" do
      oldest = make_entry_with_date("2011-09-09")
      newest = make_entry_with_date("2011-09-11")
      middle = make_entry_with_date("2011-09-10")

      @it.add_entry(oldest)
      @it.add_entry(newest)
      @it.add_entry(middle)
      @it.entries.must_equal([newest, middle, oldest])
    end

    it "is limited to 10 items" do
      10.times do |i|
        @it.add_entry(make_entry_with_date("2011-09-#{i+1}"))
      end
      oldest = make_entry_with_date("2011-08-30")
      @it.add_entry(oldest)
      @it.entries.size.must_equal(10)
      @it.entries.wont_include(oldest)
    end
  end
end

Separating out an integration test for the Blog

This spec will continue to specify the expected order and collection size regardless of how the selection is accomplished inside Blog.

Separating out integration tests

This is a technique I often use in the apps I work on. Separating unit tests from integration tests puts a clear divider between the tests that verify that our database interactions are doing what we think they are doing, and the tests that specify what logic our models should implement.

It also makes it very easy to run only the fast, isolated tests; or only the slow, DB-bound tests. Keeping as many of our tests as possible in super-fast isolation means we can complete the red-green-refactor cycle in seconds rather than minutes.

You may have noticed some new methods being called in the before and after blocks. These ensure that the database contents are blown away before and after test runs. Here are the definitions:

# spec/spec_helper_full.rb
require_relative 'spec_helper_lite'
require_relative '../config/environment.rb'
module SpecHelpers
  def setup_database
    DatabaseCleaner.strategy = :transaction
    DatabaseCleaner.clean_with(:truncation)
    DatabaseCleaner.start
  end
  def teardown_database
    DatabaseCleaner.clean
  end
end

Database setup/teardown for tests

Updating the Post tests

The majority of changes we make to the Post tests are removals. For instance, this test asserts that we can pass attributes into Post#new:

it "supports setting attributes in the initializer" do
  it = Post.new(title: "mytitle", body: "mybody")
  it.title.must_equal "mytitle"
  it.body.must_equal "mybody"
end

An obsolete Post test

We are reasonably confident that this functionality Just Works in ActiveRecord, so we trash the test.

Throwing away tests… does this mean that the test was a waste of time? No, it served its purpose. We're using tests primarily for the sake of driving design, so even if we threw them all out right now they would still have played their part. Of course, it's also nice to have them around to catch regressions; but deleting the odd test should not be cause for consternation.

Mocking ActiveRecord

Post is now an ActiveRecord::Base derivative, which means it's going to be trying to talk to the database all the time. How can we continue to test it in isolation? We'll use a couple of strategies to make that work.

First, remember how we said we were going to treat ActiveRecord as an implementation detail rather than as an essential part of the model? Now we put those words into action. Here's the top-level setup block for Post tests:

before do
  # ...
  @it = Post.new(title: "TITLE")
  @ar = @it
  # ...
end

An alias for mocking ActiveRecord

In this setup block, we take a second reference to the object being tested and call it @ar. It's actually the same object, but we'll use it for creating mocks and stubs of ActiveRecord-provided methods. We want to treat ActiveRecord as just another collaborator, and the @ar alias helps us make that delineation more "real".

Here's an example where we use the alias:

# ...
before do
  stub(@ar).valid?{false}
end

it "wont add the post to the blog" do
  dont_allow(@blog).add_entry
  @it.publish
end
# ...

Using the ActiveRecord alias

We want to simulate the case where the object is invalid. Since validity checking is provided by ActiveRecord, we treat it as an external dependency and stub it out with stub(@ar).valid?{false}. Then we attempt to publish the post, and verify that in an invalid state the post will not be added to the blog.

Stubbing out the Database with NullDB

Secondly, in order to avoid the overhead of connecting to a real database, we use NullDB to set up a do-nothing database connection before running the specs.

# ...
before do
  setup_nulldb
  # ...
end

after do
  teardown_nulldb
end
# ...

Setting up NullDB

These helpers are defined in spec_helper_lite.rb:

module SpecHelpers
  def setup_nulldb
    schema_path = File.expand_path('../db/schema.rb', 
                                   File.dirname(__FILE__))
    NullDB.nullify(schema: schema_path)
  end

  def teardown_nulldb
    NullDB.restore
  end
end

NullDB helpers

Rake tasks for testing

Now that we have both isolated unit tests and integration tests, it seems like a good time to set up some Rake tasks for running them. We want shortcuts for running just the unit tests, just the integration tests, or all of the above.

# lib/tasks/test.rake
require 'rake/testtask'
namespace 'test' do |ns|
  test_files             = FileList['spec/**/*_spec.rb']
  integration_test_files = FileList['spec/**/*_integration_spec.rb']
  unit_test_files        = test_files - integration_test_files
  desc "Run unit tests"
  Rake::TestTask.new('unit') do |t|
    t.libs.push "lib"
    t.test_files = unit_test_files
    t.verbose = true
  end
  desc "Run integration tests"
  Rake::TestTask.new('integration') do |t|
    t.libs.push "lib"
    t.test_files = integration_test_files
    t.verbose = true
  end
end
# Clear out the default Rails dependencies
Rake::Task['test'].clear
desc "Run all tests"
task 'test' => %w[test:unit test:integration]

Rake tasks for testing

We define a FileList, test_files, which expands to all of the project's tests. Then we define a subset which expands to only the integration tests. Remember, we kept them in separate files ending in _integration_spec.rb, making them easy to match. Finally, we subtract the integration tests from the full set of tests to get a list of unit_test_files.

Once we have our file lists, it's a straightforward matter of declaring some Rake::TestTask tasks for each set, and a top-level test task which depends on both of them.

Using ActiveRecord objects as data access objects

In the code above we constructed a Chinese wall between the bits of the model that ActiveRecord provides, and the bits that we provide. Some Rails practitioners prefer to set up a stricter division between business logic and storage logic.

In order to accomplish this, they create separate business model objects which keep an internal reference to an ActiveRecord object. The ActiveRecord object is kept intentionally "skinny", containing only associations, scopes, and validations. The business model object delegates its storage to the AR object, but handles everything else internally. The ActiveRecord object becomes a way to get at the stored data, and nothing more.
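
Here's a rough sketch of that division (the class names are hypothetical, and the table-name mapping is elided): the ActiveRecord class stays skinny, while the business model delegates its persistence to it.

# A hypothetical sketch of the "skinny AR object" division
class PostRecord < ActiveRecord::Base
  # Data access only: associations, scopes, and validations live here
  validates :title, presence: true
end

class Post
  # Business model; no ActiveRecord inheritance
  def initialize(record = PostRecord.new)
    @record = record
  end

  def title
    @record.title
  end

  def publish(clock = DateTime)
    @record.pubdate = clock.now
    @record.save
  end
end

Sketch of a business model wrapping a data access object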

Piotr Solnica has a great post about this pattern. Personally, I think this is a promising technique for separating concerns. But I also think it may be a bit heavyweight for some apps. In the code above I've tried to strike a middle ground, using convention more than hard object divisions to separate the concerns, and not straying too far from Rails norms. A little later on, once we get into tagging, we'll revisit this idea of using ActiveRecord as just a thin layer over database rows.

Concealing ActiveRecord behind a FigLeaf

So far, our attempts to treat ActiveRecord as a private implementation detail have been by convention, rather than enforced by the framework. It would be nice if we could get some validation that we are in fact obeying the rules we've set for ourselves.

I've written a tool to make this possible, called FigLeaf. The code is fairly short, although it may be a bit daunting if you haven't done a lot of Ruby metaprogramming.

# Tools for making inherited interfaces private to a class.
module FigLeaf
  module Macros
    # Given a list of classes, modules, strings, and symbols, compile
    # a combined list of methods. Classes and modules will be queried
    # for their instance methods; strings and symbols will be treated
    # as method names. 
    #
    # Once the list is compiled, make all of the methods private.
    #
    # Takes an optional options hash, which can include the following options:
    #
    # - :ancestors is a boolean determining whether to consider
    #   ancestor classes and modules.
    #
    # - :except is a list of classes, modules, and method names which
    #   will be excluded from treatment.
    def hide(*stuff)
      hide_methods(self, [Object], *stuff)
    end

    # Like #hide, only hides methods at the class/module level.
    def hide_singletons(*stuff)
      hide_methods(singleton_class, [Class], *stuff)
    end

    # The shared bits of #hide and #hide_singletons
    def hide_methods(mod, except_defaults, *stuff)
      options = stuff.last.is_a?(Hash) ? stuff.pop : {}
      include_ancestors  = options.fetch(:ancestors){false}
      except             = Array(options.fetch(:except){except_defaults})
      protect            = Array(options[:protect])
      except_methods     = collect_methods(true, *except)
      protect_methods    = collect_methods(true, *protect)
      methods_to_hide    = collect_methods(include_ancestors, *stuff)
      (methods_to_hide - except_methods).each do |method_name|
        mod.module_eval do 
          next unless method_defined?(method_name)
          if protect_methods.include?(method_name)
            protected method_name
          else
            private method_name
          end
        end
      end
    end

    # Given a list of classes, modules, strings, and symbols, compile
    # a combined list of methods. Classes and modules will be queried
    # for their instance methods; strings and symbols will be treated
    # as method names. +include_ancestors+ determines whether to
    # include methods defined by class/module ancestors.
    def collect_methods(include_ancestors, *methods_or_modules)
      methods_or_modules.inject([]) {|methods, method_or_module|
        case method_or_module
        when Symbol, String
          methods << method_or_module.to_sym
        when Module # also includes classes
          methods.concat(method_or_module.instance_methods(include_ancestors))
        when Array
          methods.concat(method_or_module)
        else
          raise ArgumentError, "Bad argument: #{method_or_module.inspect}"
        end
      }
    end
  end

  def self.clothe(other)
    other.extend(Macros)
  end

  def self.included(other)
    clothe(other)
    other.singleton_class.extend(Macros)
  end

  def self.extended(object)
    clothe(object.singleton_class)
  end
end

FigLeaf

In a nutshell, FigLeaf enables us to selectively make public methods inherited from other classes and modules private. The objects can still call these methods internally, but external classes are prevented from doing so. To get an idea of how it works, we'll go ahead and apply it to the Post class.

class Post < ActiveRecord::Base
  include FigLeaf
  hide ActiveRecord::Base, ancestors: true,
       except: [Object, :init_with, :new_record?, 
                :errors, :valid?, :save]
  hide_singletons ActiveRecord::Calculations, 
                  ActiveRecord::FinderMethods,
                  ActiveRecord::Relation
  # ...

Using FigLeaf

In this code, we hide the entire ActiveRecord::Base interface, with just a few carefully chosen exceptions like #valid? and #save. We also hide a bunch of the more common class-level methods that ActiveRecord adds, like .find, .all, and .count, by calling #hide_singletons with the modules which define those methods.

Now, if we jump into the console and try to call common ActiveRecord methods on it, we are denied access:

ruby-1.9.2-p0 > Post.find(1)
NoMethodError: private method `find' called for #<Class:0xa1a4a50>
ruby-1.9.2-p0 > Post.new.destroy
NoMethodError: Attempt to call private method

We've explicitly exposed the #valid? and #errors methods. Those are methods which we exercise in our specs, so they are part of the public contract of Post. We've also decided to expose #save as-is.

We still have some test failures as a result of introducing FigLeaf. Our blog_integration_spec.rb is now failing because Blog tries to use Post.all to fetch blog entries.

app/models/blog.rb:5:in `public_method': 
  method `all' for class `Class' is private (NameError)

We hesitate to expose Post.all. #all is another "infinite protocol" method; exposing it as part of our class interface is making quite a large promise to our collaborators. Instead, we decide to expose a named scope which gives Blog exactly what it needs, and no more.

class Post < ActiveRecord::Base
  # ...
  def self.most_recent(limit=10)
    all(order: "pubdate DESC", limit: limit)
  end
  # ...
end

A named scope for the most recent posts

We then change Blog to use this scope when fetching entries.

def initialize(entry_fetcher=Post.public_method(:most_recent))
  @entry_fetcher = entry_fetcher
end

Using the Post.most_recent scope

The sorting and limiting code in Blog is now redundant:

def entries
  fetch_entries.sort_by{|e| e.pubdate}.reverse.take(10)
end

The old code for sorting and limiting entries

We remove it, confident that our integration tests will let us know if the change breaks the intended semantics of the method.

def entries
  fetch_entries
end

All our tests are once again passing. And we now have an extra bulwark against tight coupling to ActiveRecord APIs.

Let me be very clear: I'm not trying to introduce Java-like bondage & discipline back into a dynamic language. FigLeaf is not intended as a hammer to keep your coworkers or your library clients in line. It's not as if that would work, anyway; the strictures that it adds are easy enough to circumvent.

FigLeaf's intended role is more along the lines of the "rumble strips" along highways which give you a jolt when you veer off into the shoulder. It provides a sharp reminder when you've unthinkingly introduced a new bit of coupling to an interface you are trying to keep isolated from the rest of the codebase. Then, you can consciously make the decision whether to make that method public, or find a different way of going about what you were doing.

Exiting Eden

The FigLeaf technique demonstrated above is an incremental approach to partitioning business logic from the persistence mechanism. It's an aid to help you think about the two concerns separately, without departing too far from Rails conventions.

If you and your app are ready for a bigger step towards true separation of concerns, you may want to look into the Data Mapper pattern, as described in "Patterns of Enterprise Application Architecture". Note that this is not the same as the DataMapper project. Just as ActiveRecord is an implementation of the Active Record pattern in Ruby, so the DataMapper gem is a (partial) implementation of the Data Mapper pattern.

The Data Mapper pattern completely separates business models from persistence concerns. In it business models have no knowledge of how to save themselves; instead, mapper objects map model properties to database columns. This separation gives you the ability to make substantial changes to your persistence strategy without affecting your business logic, and vice-versa.
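
In outline, and with an entirely invented mapper API (this is not the DataMapper gem's interface), the pattern looks something like this:

# Sketch of the Data Mapper idea; the connection API here is invented
class Post
  attr_accessor :id, :title, :body   # a plain domain object with no persistence knowledge
end

class PostMapper
  def initialize(connection)
    @connection = connection
  end

  def insert(post)
    post.id = @connection.insert("posts", title: post.title, body: post.body)
  end

  def find(id)
    row = @connection.select_one("posts", id: id)
    post = Post.new
    post.id, post.title, post.body = row[:id], row[:title], row[:body]
    post
  end
end

Sketch of the Data Mapper pattern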

To my knowledge, there is no complete library implementation of the Data Mapper pattern in Ruby. The Ruby DataMapper project does not yet enable full separation of business objects and mapper objects. That said, it is still a fantastic library which in many ways exceeds ActiveRecord. Given the choice, I'll always pick DataMapper over ActiveRecord for a new project. The only reason I didn't pick it for this text is that I didn't want to introduce too many new concepts at one time. And the future of DataMapper is bright: the developers are making steady progress towards a 2.0 release which promises to finally give the Ruby world a full-fledged Data Mapper pattern implementation.

Default content

We've decided (for no good reason other than exposition) to give Post bodies a default value. Specifically, when we write a new entry and fail to provide a body, it should be filled in with the text "Nothing to see here".

# spec/models/post_integration_spec.rb
# ...
  it "defaults body to 'Nothing to see here'" do
    post = make_post(body: '')
    post.body.must_equal("Nothing to see here")
  end
# ...

Specifying default body content

To make this pass, we define a private method to do the defaulting.

# app/models/post.rb
# ...
  private

  def set_default_body
    if body.blank?
      self.body = 'Nothing to see here'
    end
  end
# ...

Implementing default content for the post body

Now we just need to hook it into the persistence process. We could do this by adding a before_validation hook.

before_validation :set_default_body

We could do that. Or… we could treat this class like any other class, and override the method we need to modify.

# app/models/post.rb
# ...
  def save(*)
    set_default_body
    super
  end
# ...

Overriding ActiveRecord#save

This satisfies our needs just as well, and is less "magical". When exactly does #set_default_body get called? Why, from the top of #save, of course. It's right there in the code.

Navel-gazing objects

I include this example because of a tendency I've noticed for Rails models to overuse hooks. ActiveRecord hooks are a variation on the Observer pattern, and the point of Observer is to enable other objects to be notified on an object's lifecycle events. Not so that the object can stare at itself in the mirror all day.

If we need to intercept an ActiveRecord-provided method, we can just intercept it. There are some exceptions, like after_find, which let us hook into Rails machinery that might otherwise be difficult to override. But for simple cases, we can do the simple thing and override the method we want to change. Following a few rules will keep us from inadvertently breaking things in the process:

  1. Unless you care about the value of arguments, use a single * as the method parameters so that the override doesn't interfere with the original method's protocol.
  2. Always call super, unless you are intentionally canceling the default behavior.
  3. Always call super without parentheses, unless you want to explicitly change the arguments going to the parent method. Leaving out the parentheses tells Ruby to re-use the arguments which were passed into the current method.
  4. Remember to return the result of super, either by making super the last call in the method, writing return super, or saving the return value in a local variable and then returning the local at the end. If you need to do some processing after the call to super, but you don't want to save the return value in a local, you can use #tap:
    def foo(*)
      super.tap do
        # ... after-super processing
      end
    end
    
    

Exhibits for REST

Now that we've got persistence working, let's add a rudimentary RESTful API to our blog.

Really RESTful APIs make heavy use of hyperlinking in the resource representations they serve. For instance, a JSON representation of a blog post might look something like this:

{
    "title": "Flakes",
    "body": "Uh oh, the paint is starting to flake!",
    "links": [
        {
            "rel": "next",
            "href": "http://example.org/blog/posts/3",
        },
        {
            "rel": "prev",
            "href": "http://example.org/blog/posts/1"
        },
        {
            "rel": "up",
            "href": "http://example.org/blog/"
        }
    ]
}

Example JSON representation of a blog post

Constructing hyperlinked responses like this one presents us with a problem. Normally, when rendering HTML representations, we render links to other resources using the various routing helpers (such as url_for or #post_url) that Rails provides for us inside of view templates. But when we render JSON data, there typically is no view context.

For instance, here's an implementation of PostsController#show that uses Rails' #respond_with method:

def show
  @post = Post.find(params[:id])
  respond_with(@post)
end

Using respond_with to render a post

(If you're looking at Post.find and calling out "Lone Wolf object!", good for you! We'll address that code smell in an upcoming section.)

In the absence of a template at posts/show.json, a request for a post in JSON format will result in a call to Post#to_json.

We'd like Post#to_json to provide a fully hyperlinked JSON representation as in the example above. But Post doesn't know anything about routing… and that's the way it should be! Once again, we need an object to mesh together information from a model and information from the framework. Once again, we need an Exhibit.

We could write an exhibit specifically for converting Post objects to JSON. But it would be tedious writing exhibits for each kind of model that we come up with, and in most cases the logic will probably be the same. So instead, we'll write a generic LinkExhibit which will work for most objects we throw at it.

We'll pick three simple link types to start with:

  • prev
  • next
  • up

These are three of the standard link types defined in the HTML4 spec. When we serve a JSON representation of a blog post, we want to include a link to the next chronological post, the preceding post, and the "parent" resource (up), which is the blog itself.

The LinkExhibit class will take a model object and a template object, and adorn the model with next_url, prev_url, and up_url methods. It will also augment the #to_json method to include a list of links in the JSON Hyper-Schema style.

The spec for LinkExhibit isn't that exciting, so I'll omit it. Here's the code:

class LinkExhibit < Exhibit
  RELATIONS = %w[next prev up]
  def prev_url
    @context.url_for(prev)
  end
  def next_url
    @context.url_for(self.next)
  end
  def up_url
    @context.url_for(up)
  end
  def links_hash
    {
      "links" => RELATIONS.map { |rel|
        {"rel" => rel, "href" => send("#{rel}_url")}
      }
    }
  end
  def serializable_hash(*args)
    super.merge(links_hash)
  end
  def to_json(options={})
    serializable_hash(options).to_json
  end
end

LinkExhibit

In order to generate URLs, the LinkExhibit relies on the model to respond to three methods, unsurprisingly called #prev, #next, and #up. These methods are expected to return the model object with the specified relationship to the receiver.

Implementing these methods for Post is straightforward:

# ...
def self.first_before(date)
  first(conditions: ["pubdate < ?", date],
        order:      "pubdate DESC")
end
def self.first_after(date)
  first(conditions: ["pubdate > ?", date],
        order:      "pubdate ASC")
end
# ...

def prev
  self.class.first_before(pubdate)
end

def next
  self.class.first_after(pubdate)
end

def up
  blog
end
# ...

Post navigation methods

Remember, we're avoiding exposing ActiveRecord built-in methods in the Post public API, so we have to define explicit class-level finder methods to retrieve the posts preceding and following a given date. By defining explicit finders (.first_before and .first_after) with constrained parameters, we keep the Post interface manageable.

For now, since we're only exposing Post objects as JSON, we make this Exhibit applicable only to Post objects.

class LinkExhibit < Exhibit
# ...
  def self.applicable_to?(object)
    object.is_a?(Post)
  end
# ...
end

LinkExhibit applicability

There's no reason it couldn't wrap any other type of object which implements #up, #next, and #prev, however.
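
If we ever wanted that, one option (just a sketch) would be to base the predicate on the object's capabilities rather than on its class:

def self.applicable_to?(object)
  [:up, :next, :prev].all? { |message| object.respond_to?(message) }
end

A capability-based applicability check (sketch)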

Now we modify PostsController to expose a #show method.

class PostsController < ApplicationController
  respond_to :html, :json
  include ExhibitsHelper
  # ...
  def show
    @post = exhibit(Post.find_by_id(params[:id]), self)
    respond_with(@post)
  end
end

Adding a #show action to the PostsController

We've made a few modifications here. First, we've used respond_to to indicate that this controller can serve resources using JSON representations. Then we've included the ExhibitsHelper so as to give the controller access to #exhibit. Finally, we've implemented a #show method which finds the indicated Post, runs it through exhibit() to wrap it in all appropriate exhibits, and then uses respond_with to hook into Rails' automatic content negotiation system.

If you're following along carefully, you might be wondering about using ExhibitsHelper#exhibit in a Controller, where before we only used it in views. In particular, does it work to pass a controller as the context argument to ExhibitsHelper#exhibit?

The context parameter can really be anything which responds to the various helper methods used by exhibits. In this case, the only exhibit being called upon is LinkExhibit, and the only helper method it needs from the context is #url_for. Since #url_for is available in controllers as well as in views, everything works out.

Well, almost. One of the things LinkExhibit will be calling #url_for on is a post's #up relation, which is the global Blog instance. As you'll recall, Blog is just an ordinary object, not an ActiveRecord. As such, Rails has no idea how to convert it to a route in #url_for. Let's fix that.

First, we need to give Blog a model name.

# app/models/blog.rb    
# ...
def self.model_name
  ActiveModel::Name.new(self)
end
# ...

Giving Blog an ActiveModel::Name

This is how we tell Rails that Blog's name is… "Blog". Now that Rails officially knows its name, when we pass a Blog object to url_for, Rails will look for a blog_url helper. So all we have to do now is define that helper.

# app/controllers/application_controller.rb

# ...
def blog_url(*)
  root_url
end
# ...

A routing helper for the Blog model

We're all set up now to serve out JSON versions of blog posts. When we create a few posts and then point curl at http://localhost:3000/posts/2.json, here's what we get:

{
  "body": "This is the second post. Establishing a pattern here!", 
  "created_at": "2011-11-14T23:53:16Z", 
  "id": 2, 
  "image_url": "", 
  "links": [
      {
          "href": "http://localhost:3000/posts/3", 
          "rel": "next"
      }, 
      {
          "href": "http://localhost:3000/posts/1", 
          "rel": "prev"
      }, 
      {
          "href": "http://localhost:3000/", 
          "rel": "up"
      }
  ], 
  "pubdate": "2011-11-14T23:53:16Z", 
  "title": "Second post", 
  "updated_at": "2011-11-14T23:53:16Z"
}

Generated JSON representation of a post

That's just a beginning. I'm sure you can imagine how we might expand this out to include a JSON version of the home page, which contains links to individual posts, and so on.

Adding tags

OK, now let's add the ability to annotate posts with tags.

What sort of functionality does tagging entail? Let's describe some basic use cases:

Tagging a post

Before saving a new post, the user types some keywords into a "tags" field. They might separate the keywords with spaces, commas, or other non-word characters. They might accidentally enter a tag twice. When the post is saved, it is displayed with its list of tags. The displayed tags are separated by commas, and have had any duplicates removed.

Seeing a list of all tags

A visitor to the blog sees a list of all unique tags that have been applied to any post in the blog sidebar.

Filtering by tag

When a visitor clicks on one of the tags, he or she sees a "filtered" view of the blog showing only posts which have been tagged with that keyword.

There are plenty of other ways to use tags, but this is enough to get us started.

An Object Model for Tags

Looking at the list of use cases, it seems clear that we'll need some kind of object that represents a list of tags. Let's start with that.

describe TagList do
  # ...
end

The most basic behavior we can specify is how the TagList will behave with no tags in it.

describe "given a blank string" do
  before do
    @it = TagList.new("")
  end

  it "is empty" do
    @it.must_be_empty
  end

  it "stringifies to the empty string" do
    @it.to_s.must_equal ""
  end

  it "arrayifies to the empty array" do
    @it.to_a.must_equal []
  end
end

Specifying TagList behavior with no tags

TagList should assist us in converting from the space- or comma-separated strings that users type in.

describe "given tags separated by commas or whitespace" do
  before do 
    @it = TagList.new("barley, hops water, yeast")
  end

  it "is not empty" do
    @it.wont_be_empty
  end

  it "stringifies to a comma separated list" do
    @it.to_s.must_equal "barley, hops, water, yeast"
  end

  it "arrayifies to a list of strings" do
    @it.to_a.must_equal %w[barley hops water yeast]
  end
end

Specifying tag normalization

It should also eliminate any duplicates.

describe "given duplicate tags" do
  before do
    @it = TagList.new("barley, hops, barley")
  end

  it "eliminates duplicates" do
    @it.to_a.must_equal %w(barley hops)
  end
end

describe "given duplicate mixed case tags" do
 before do
   @it = TagList.new("barley, hops, BarlEy")
 end

 it "eliminates duplicates ignoring case" do
   @it.to_a.must_equal %w(barley hops)
 end
end

Specifying TagList duplicate handling

It should normalize the tags to lowercase.

describe "given mixed-case tags" do
  before do 
    @it = TagList.new("Barley, hOps, YEAST")
  end

  it "lowercases the tags" do
    @it.to_a.must_equal %w(barley hops yeast)
  end
end

Specifying tag case normalization

It shouldn't be tripped up by being instantiated with nil.

describe "given nil" do
  before do 
    @it = TagList.new(nil)
  end
  it "is empty" do
    @it.must_be_empty
  end
end

Specifying TagList behavior when initialized with nil

We'll need to be able to combine tag lists together if we're going to show an overview of all tags in use on the blog.

describe "#+" do
  it "combines tag lists into one" do
    result = TagList.new("foo, bar") + TagList.new("baz, buz")
    result.must_equal(TagList.new("foo, bar, baz, buz"))
  end
end

Spec for combining tag lists

That tag overview should probably be in alphabetical order, so we'll want the tag list to be able to return a sorted version of itself.

describe "#alphabetical" do
  before do
    @it = TagList.new("foo, bar, baz, fuz")
    @result = @it.alphabetical
  end
  it "returns the tags in alpha order" do
    @result.to_a.must_equal %w(bar baz foo fuz)
  end
  it "returns another tag list" do
    @result.must_be_kind_of TagList
    @result.wont_be_same_as @it
  end
end

Specifying TagList sorting

Finally, we'll specify a handy conversion method to quickly turn things that aren't tag lists into tag lists.

describe "TagList()" do
  describe "given a TagList" do
    it "returns the same tag list" do
      list = TagList.new("")
      TagList(list).must_be_same_as(list)
    end
  end
  describe "given an array" do
    before do
      @it = TagList(%w[foo bar])
    end
    it "returns a tag list" do
      @it.must_be_kind_of(TagList)
    end
    it "contains the given tags" do
      @it.to_a.must_equal(%w[foo bar])
    end
  end
end

Specifying TagList conversions

The converter has a similar look and feel to Ruby's built-in conversion methods such as String, Array, and Integer.
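
For instance, here's a quick comparison of Ruby's capitalized conversion functions and ours, with illustrative return values (the TagList() calls assume we're inside something that includes Conversions):

Array(nil)                    # => []
Integer("42")                 # => 42

list = TagList.new("foo")
TagList(list).equal?(list)    # => true (an existing TagList passes straight through)
TagList(%w[foo bar]).to_a     # => ["foo", "bar"]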

Implementing these requirements takes considerably less space than we needed to spec them out:

require 'forwardable'
module Conversions
  private
  def TagList(value)
    return value if value.is_a?(TagList)
    TagList.new(value)
  end
end
class TagList
  extend Forwardable
  include Enumerable
  attr_reader :tags
  def_delegators :tags, :empty?, :to_a, :each
  def initialize(tags)
    case tags
    when Array
      @tags = tags
    else
      @tags = tags.to_s.split(/\W+/)
    end
    @tags.each(&:downcase!)
    @tags.uniq!
  end
  def to_s
    tags.join(", ")
  end
  def to_ary
    @tags
  end
  def +(other)
    self.class.new(to_a + other.to_a)
  end
  def ==(other)
    to_a == Array(other)
  end
  def alphabetical
    self.class.new(tags.sort)
  end
end

The TagList implementation

Our TagList implementation behaves much like an Array, and in fact it is built on top of an internal Array called @tags which holds the actual tag strings. Some of its Array-style methods, like #empty? and #each, don't need any special treatment, so TagList passes them straight on to the underlying Array using the Forwardable library. Other methods have more tag-specific behavior, and are explicitly implemented.
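
A quick interactive sanity check, with values matching the specs above:

list = TagList.new("Barley, hops barley")
list.empty?   # => false
list.to_a     # => ["barley", "hops"]
list.to_s     # => "barley, hops"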

Conversions

You may have noticed a funny little module named Conversions in the code above. We anticipate that we will want access to the TagList() converter method from more than one class or module. But adding it to the global namespace would be bad form. So instead we define a module to act as a namespace for conversion methods. We'll reopen this module and add other converter methods to it as our codebase expands. Any class needing access to conversions will then be able to include the Conversions module and have access to all defined converter methods.
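
Usage then looks something like this (PostImporter is a made-up example, not a class we actually build):

class PostImporter
  include Conversions

  def import(row)
    # the private TagList() converter is available here
    row[:tags] = TagList(row[:tags])
    # ...
  end
end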

Attaching the TagList to a Post

Now how do we attach our tag list class to a Post object? We'll start out with a naive solution which just serializes the tags to a column in the posts table. In order to do that, we create a migration for a new tags column.

class AddTagsToPosts < ActiveRecord::Migration
  def self.up
    add_column :posts, :tags, :string
  end
  def self.down
    remove_column :posts, :tags
  end
end

A migration to add tags to posts

Now we need to tell Post to represent its new tags attribute as a TagList instead of as a raw string. We do that using ActiveRecord's composed_of facility:

composed_of :tags, class_name: 'TagList', mapping: %w(tags tags),
                   converter: ->(value) { TagList(value) }

Using composed_of to add tags to posts

This incantation tells ActiveRecord to mediate access to the tags attribute using a TagList. When a new Post is created, ActiveRecord will initialize a TagList object, passing it the raw tags data. When it comes time to write the record back to the database, ActiveRecord will use the TagList's own tags attribute as the new value of the tags field. Recall that in TagList, #tags is an accessor to the underlying Array instance.

The :converter option tells ActiveRecord what to do when some code calls post.tags= with a new value. In this case, it will convert the given value into a TagList.
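
In practice the effect looks like this (an illustrative session; exact values depend on the data):

post = Post.new
post.tags = "barley, hops"   # the :converter lambda turns the string into a TagList
post.tags.class              # => TagList
post.tags.to_s               # => "barley, hops"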

TagList represents itself internally using an array, but the tags column we just created is a simple string field. In order to safely write an array into a string field and get it out again as an array, we need to tell ActiveRecord to serialize the field:

serialize :tags

Now whenever the tags field is written to the database, the value (an array provided by TagList) will first be serialized into YAML format. When it is read out again, it will be parsed from the YAML back into an array, and the array will be fed back into a new TagList.

We could have serialized the TagList object itself to the tags column. But serializing application objects to YAML can lead to headaches down the road. We have to ensure that the TagList code is loaded before accessing that field, something that can be surprisingly tricky when running in development mode with Rails' class autoloading enabled. And if we ever changed the representation of TagList, we could find ourselves in versioning hell as we try to load TagList objects which were serialized before the change. It's all-around easier to only serialize Ruby built-ins like Arrays and Hashes.
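
Concretely, the round trip through the string column looks something like this (exact YAML formatting can vary slightly between Ruby versions):

require 'yaml'

%w[barley hops].to_yaml                 # => "---\n- barley\n- hops\n"
YAML.load("---\n- barley\n- hops\n")    # => ["barley", "hops"]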

Making Post more tag-aware

Now we can attach tags to an individual post, but we also need to be able to get a list of all the tags in use, and to find all posts with a given tag. In order to drive out this functionality, we create a new integration spec suite for the Post class, to complement its existing unit-level spec suite.

describe Post do
  include SpecHelpers
  before do
    setup_database
    @blog = Blog.new
  end
  after do
    teardown_database
  end
  def make_post(attrs)
    attrs[:title] ||= "Post #{attrs.hash}"
    post = @blog.new_post(attrs)
    post.publish.must_equal(true)
    post
  end
  describe ".all_tags_alphabetical" do
    before do
      @post_tags = [
                    nil,        # make sure nils are handled
                    %w(barley yeast),
                    %w(yeast hops),
                    %w(water)
                   ]
      @post_tags.each do |tags|
        make_post(title: tags.inspect, tags: tags)
      end
      @it = Post.all_tags_alphabetical
    end
    it "returns a unique, alphabetized list of all tags" do
      @it.must_equal TagList(%w(barley hops water yeast))
    end
  end
  describe ".tagged" do
    it "filters the collection by tag" do
      duck  = make_post tags: %w[billed feathered]
      robin = make_post tags: %w[reddish feathered]
      fox   = make_post tags: %w[reddish furred]
      platypus = make_post tags: %w[billed furred]

      reddish = Post.tagged("reddish")
      reddish.size.must_equal 2
      reddish.must_include(robin)
      reddish.must_include(fox)
      furred = Post.tagged("furred")
      furred.size.must_equal 2
      furred.must_include(fox)
      furred.must_include(platypus)
    end
  end
end

An integration test for tags

These new specs are satisfied with a trio of new class-level methods on Post:

class Post
  LIMIT_DEFAULT=10
  # ...
  def self.most_recent(limit=LIMIT_DEFAULT)
    order("pubdate DESC").limit(limit)
  end

  def self.all_tags_alphabetical
    all_tags.alphabetical
  end

  def self.all_tags
    except(:limit).map(&:tags).reduce(TagList.new([]), &:+)
  end
  # ...
end

Adding tag query methods to Post

That last method is worth a second look. Remember that we defined the + operator on TagList to combine two tag lists into one. That comes in handy now, as we are able to use #reduce to very succinctly combine an arbitrary number of tag lists into one master list.
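
Here's the same idea in miniature, outside of ActiveRecord, with illustrative values:

lists = [TagList.new("barley, hops"), TagList.new("hops, yeast"), TagList.new(nil)]
lists.reduce(TagList.new([]), &:+).to_s   # => "barley, hops, yeast"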

Oh, and if you're wondering about except(:limit), that's a bit of a kludge. In our master layout, in the blog sidebar, we want to show all tags in the database even when only a subset of posts (say, the ten most recent) are currently being shown. except(:limit) simply throws away any LIMIT clause in the current scope, so as to retrieve the tags of all posts in the database.

Accepting and displaying tags

In order to add tags to posts we need a place to enter them. We add a new field to the "new post" form:

<%= f.label :tags, "Tags:" %>
<%= f.text_field :tags %>

Adding tags to the new post form

We also update the blog entry partial to display any tags that are associated with a blog post.

<p class="entry_tags">Tags: 
  <span class="tags"><%= entry.tags %></span>
</p>

Adding tags to the entry partial

Remember that entry.tags will return a TagList, and TagList#to_s is defined to format the tags separated by commas. So this should look fine when rendered.

We also want to show a top-level list of tags that shows all tags in use on the blog. We add a new section to the sidebar in the main application layout:

<!-- ... -->
<h4>Tags</h4>
<nav>
  <ul>
    <%= render partial: "/tags/tag_item",
               collection: @blog.tags %>
  </ul>
</nav>
<!-- ... -->

Adding tags to the main layout

The tags/tag_item partial is just a thin wrapper around the tags/tag partial:

<li><%= render partial: "/tags/tag", object: tag_item %></li>

Filtering posts by tag

The tags/tag partial renders a link to a tag-filtered view of the blog:

<%= link_to tag.to_s, root_path(tag: tag.to_s) %>

To make this work, we make a small addition to the BlogController.

class BlogController < ApplicationController
  def index
    if params[:tag].present?
      @blog = @blog.filter_by_tag(params[:tag])
    end
  end
end

Enabling the front page to be filtered by tag

If a :tag parameter is supplied to the index action, it puts a filtered version of the blog into @blog. We define Blog#filter_by_tag as follows:

def filter_by_tag(tag)
  FilteredBlog.new(self, tag)
end

Blog#filter_by_tag

Then we define FilteredBlog as a decorator which wraps the main Blog instance and filters its #entries by a given tag.

class Blog
  # ...
  class FilteredBlog < DelegateClass(Blog)
    include ::Conversions
    def initialize(blog, tag)
      super(blog)
      @tag = tag
    end
    def entries
      Taggable(super).tagged(@tag)
    end
  end
end

The FilteredBlog decorator

This class is an implementation detail of Blog, and will not be used by any other code, so we just nest it inside the Blog class rather than giving it its own file.

Wondering about the DelegateClass(Blog) bit? It's a very close relative of SimpleDelegator, which we've already used. SimpleDelegator is a generic delegator base class which can work when wrapped around any underlying object. DelegateClass(klass), on the other hand, generates a delegator base class customized specifically for wrapping objects of the passed klass. In practice, it doesn't make a huge difference; but delegates based on DelegateClass may be a little more efficient since they don't have to use #method_missing to intercept method calls. There are some other minor differences; for instance, the class generated by DelegateClass() responds to .public_instance_methods with a more accurate list than the SimpleDelegator version. Since we know that FilteredBlog will always be wrapping a Blog object, we can use DelegateClass() instead of SimpleDelegator.
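
Here's a standalone sketch of the difference, using a made-up Greeter class rather than anything from the blog:

require 'delegate'

class Greeter
  def hello; "hello"; end
end

# SimpleDelegator wraps anything; calls reach the target via #method_missing.
SimpleDelegator.new(Greeter.new).hello                      # => "hello"

# DelegateClass(Greeter) generates a base class with real delegating methods
# defined for Greeter's public interface.
class GreeterDecorator < DelegateClass(Greeter)
end

GreeterDecorator.new(Greeter.new).hello                     # => "hello"
GreeterDecorator.public_instance_methods.include?(:hello)   # => true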

At this point, we have a bare-bones but still useful post-tagging functionality. We can add tags to a post, see the keywords a post has been tagged with, and see a list of all tags on the front page. And when we click on one of the tags, we are presented with a subset of posts which are tagged with that keyword.

Filtering by tag

Extracting a Taggable role

:composed_of enabled us to keep most of the tagging code inside of TagList and out of Post. But there is still a fair amount of tagging-specific code in the Post class. This is troubling for two reasons:

  1. Right now it's just tagging. But what about when we add other functionality, like post revision control, or authorization? Will every new feature that we add result in adding another dozen lines of code to Post? What ever happened to the Single Responsibility Principle?
  2. What if we decide we want to tag entities other than posts? Will we be duplicating this code for every class that can be tagged?

We might try to pull the tagging "facet" into a module. For each new feature, we could include a new module in Post:

class Post
  include Taggable
  include RevisionControlled
  include Permissible
  # etc...
end

A Taggable module

And in fact, this is how many Rails projects address the issue of ever-expanding class files. But does this really address the root problem? We're still adding more and more responsibilities to Post objects. The only difference is, now it's harder to find the definition of any given Post method (or validation, or before-filter…) because it might be in any of a half-dozen different files.

Using a mixin module

Instead of using modules included in the class, let's see if we can extract out the "taggable" responsibility in a way that keeps it as orthogonal as possible to the item being tagged.

We'll start by defining a TaggableRecord mixin module. This module will represent the taggable "role" that a model object may assume. When injected into an object using Object#extend, this module will intercept the #tags and #tags= methods.

module TaggableRecord
  def tags
    _tag_list
  end
  def tags=(new_tags)
    @_tag_list = TagList.new(new_tags)
  end
  # ...
end

The TaggableRecord module

This module will also intercept calls to #save. Before calling the object's original #save method, TaggableRecord updates the object's tags field. It uses the ActiveRecord-provided #[]= method to write the new tags value so as to avoid calling the TaggableRecord#tags= method.

module TaggableRecord
# ...
  def save(*, &block)
    self[:tags] = _tag_list.to_a
    super
  end
# ...
end

Overriding ActiveRecord#save in TaggableRecord

TaggableRecord also uses the ActiveRecord-provided #[] to initially load up its TagList with values from the original record.

module TaggableRecord
  # ...
  private

  def _tag_list
    @_tag_list ||= TagList.new(self[:tags])
  end
end

TagList initialization in TaggableRecord

This module will be included into objects which already have their own state and methods. So we prefix our private instance variable and method names with an underscore to make naming collisions less likely.

Since up until now we've used external decorators (e.g. SimpleDelegator) to adorn objects with new functionality, you may be wondering why we're using a module now. In this case, we need the tight integration that only a module can give us. For instance, by intercepting #save within the object, rather than in an outside wrapper, we also implicitly intercept any other methods which use #save—such as #create. In this case, that's exactly the behavior we want.

(If you're still not clear on the trade-offs between decoration and dynamic module extension, I've included a longer discussion in Appendix C.)
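
To see why this kind of in-object interception matters, here's a tiny plain-Ruby sketch with no ActiveRecord involved (Record and Auditable are made up purely for illustration). Because the extended module sits in front of the object's own methods, anything that calls #save internally (such as the stand-in #create_and_save below) picks up the new behavior automatically:

module Auditable
  def save
    puts "auditing before save"
    super
  end
end

class Record
  def save
    puts "saving"
    true
  end

  def create_and_save   # stands in for a method that calls #save internally
    save
  end
end

record = Record.new.extend(Auditable)
record.create_and_save
# >> auditing before save
# >> saving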

Class-level Taggable methods

That takes care of the instance-level functionality; but a lot of the tagging-related code on Post is at the class level. We define a module for that next:

module TaggableRelation
  def all_tags_alphabetical
    all_tags.alphabetical
  end
  def all_tags
    except(:limit).map{|e| Taggable(e).tags}.reduce(TagList.new([]), &:+)
  end
  def tagged(tag)
    select{|e| Taggable(e).tags.include?(tag)}
  end
end

The TaggableRelation module

These are pretty much exactly as they were in Post.

As we decided earlier, we don't want to have Post always carrying this tagging baggage even when it isn't needed. We need a way to quickly apply the taggable "hat" to objects on a just-in-time basis. For that we define another global conversion method:

def Taggable(item)
  case item
  when ::Class, ::ActiveRecord::Relation
    item.extend(::TaggableRelation)
  else 
    item.extend(::TaggableRecord)  
  end
  item.extend(::Taggable)
end

A Taggable() conversion method

This conversion method lets us apply tagging functionality to record instances, relations, and classes by simply calling Taggable(object_to_be_made_taggable).
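
A few hypothetical call sites, assuming the modules defined above:

Taggable(Post).all_tags_alphabetical        # a class gains TaggableRelation
Taggable(Post.most_recent).tagged("hops")   # a relation gains TaggableRelation
Taggable(post).tags = "barley, hops"        # a record gains TaggableRecord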

Divesting Post of tagging

We are now able to remove a bunch of code from Post:

# ...
composed_of :tags, class_name: 'TagList', mapping: %w(tags tags),
                   converter: ->(value) { TagList(value) }
serialize :tags
def self.all_tags_alphabetical
  all_tags.alphabetical
end
def self.all_tags
  except(:limit).map(&:tags).reduce(TagList.new([]), &:+)
end
def self.tagged(tag)
  select{|e| e.tags.include?(tag)}
end
# ...

Cleaning tag-related code from the Post class

In fact, the only tagging-related code we can't get rid of is the line that tells ActiveRecord to serialize the tags field:

serialize :tags

Applying the Taggable role

Now we walk through the codebase, applying the Taggable() role anywhere we need to treat a Post or a collection of posts as taggable objects. For instance, in the _entry partial we apply it before rendering the tag list:

<article>
  <header>
    <p><time pubdate="pubdate"><%= entry.pubdate %></time></p>
    <h3><%= entry.title %></h3>
    <p class="entry_tags">Tags: 
      <span class="tags"><%= Taggable(entry).tags %></span>
    </p>
  </header>
  <%= entry.render_body %>
</article>

Applying the Taggable role in a template

And in the PostsController we apply it to a post which is about to be saved, so that any tags which were written to the new post are correctly serialized.

# ...
def create
  @post = Taggable(@blog.new_post(params[:post]))
  if @post.publish
    redirect_to root_path, notice: "Post added!"
  else
    render "new"
  end
end
# ...

Applying the Taggable role in the PostsController

Note that wherever we use Taggable(), we use its return value as the "taggable" entity. We do this even though we know that Taggable() extends its argument with a module, altering it in place. We could just as well do this:

Taggable(some_object)
some_object.tags = "foo, bar"
# ...

Why use the return value? We do it for consistency and implementation hiding. We happen to know (because we just wrote it) that Taggable() actually modifies its argument in place. But in other cases, we wrote conversion methods which don't modify their argument; instead, they return a wrapped object or a brand new object.

By consistently using the return value of conversion methods throughout our codebase—whether we need to or not—we free ourselves from the mental burden of having to remember how a particular conversion works. Not only that, but we future-proof our code this way: if, at some point, we decide we do want to use a decorator instead of a module, we can alter the implementation of Taggable() and know that it will continue to work wherever we've used it.

Refactoring to a separate ActiveRecord model

We don't have to spend much time with our new tagging system to realize that our naive implementation is grossly inefficient at scale. To search across or list all tags in the blog, we are forced to load every single blog entry. If this blog engine is going to compete with WordPress it's definitely going to need a faster tags implementation.

We decide to give tags some database tables of their own. In order to keep tags nice and generic, we'll create a tags table which stores the actual tag keyword, and an item_tags table which will polymorphically map from tags to taggable items (such as posts).

We write a migration that creates the new tables, migrates the old tags data to the new tables, and then removes the tags field from the posts table.

class AddTagTables < ActiveRecord::Migration
  class Post < ActiveRecord::Base; serialize :tags; end # tags is still a serialized column here
  class Tag < ActiveRecord::Base; end
  class ItemTag < ActiveRecord::Base
    belongs_to :tag
    belongs_to :item, polymorphic: true
  end
  def self.up
    create_table :tags do |t|
      t.string :name
      t.timestamps
    end
    create_table :item_tags do |t|
      t.integer :item_id
      t.string  :item_type
      t.integer :tag_id
    end
    Post.find_each do |post|
      Array(post.tags).each do |tag|
        tag_record = Tag.create!(name: tag.to_s)
        ItemTag.create!(item: post, tag: tag_record)
      end
    end
    remove_column :posts, :tags
  end
  def self.down
    raise ActiveRecord::IrreversibleMigration, "Cannot be reversed"
  end
end

Migrating tag attributes to a "tags" table

Note that we define any ActiveRecord models we need for the data migration within the context of the migration. This will enable the migration to continue working even if we change or remove those models in future revisions.

Using ActiveRecord as a Row Data Gateway

Our new tables are strictly implementation details; we still intend to work with tags in terms of our tried-and-true TagList class. We define some bare-bones ActiveRecord models for the new tables, with no business logic and just enough code to set up their relationships. In order to underscore the fact that these are not full-fledged business models, we put the files in a new directory called app/data.

# app/data/tag.rb
class Tag < ActiveRecord::Base
  has_many :item_tags
end

The Tag class
# app/data/item_tag.rb
class ItemTag < ActiveRecord::Base
  belongs_to :tag
  belongs_to :item, polymorphic: true
  delegate :name, to: :tag
end

The ItemTag join model

While technically ActiveRecord classes, we'll use these classes more like Row Data Gateways - thin wrappers around a row of data.

Before we forget, we remove the one remaining vestige of "tagginess" from Post:

# ...
serialize :tags
# ...

Now Post contains no tag-related code whatsoever.

Constructing a TagStorage repository

Currently, our TaggableRecord role module uses a record's #[] and #[]= methods to read and write tags directly on the record. That won't do anymore. We rewrite TaggableRecord to use a "tag storage" object for reading and writing tags instead.

module TaggableRecord
  attr_accessor :_tag_storage
  def tags
    @_tag_list ||= TagList.new(_tag_storage.load)
  end
  def tags=(new_tags)
    @_tag_list = TagList.new(new_tags)
  end
  def save(*, &block)
    super.tap do |successful|
      if successful
        _tag_storage.store(tags.to_a)
      end
    end
  end
end

Rewriting TaggableRecord to use a tag storage object

What's a tag storage object? Defining it is our next job. From our definition in TaggableRecord, we know it needs to respond to two methods: #load and #store.

First of all, it will keep a reference to the item which it is storing tags for.

class TagStorage
  attr_reader :item
  def initialize(item)
    @item = item
  end
  # ...
end

Beginning the TagStorage class

Loading tags will map across an ItemTag collection to get the names of all the tags applied to the item.

# ...
def load
  item_tags.map(&:name)
end
# ...

TagStorage#load

#item_tags is simply a memoizing layer on top of #fetch_item_tags:

# ...
def item_tags
  @item_tags ||= fetch_item_tags
end
# ...

TagStorage#item_tags

#fetch_item_tags is where the actual tag loading happens. We create a scope which encompasses all ItemTag records which have a type and ID corresponding to the item being tagged. We include the tags table in the query, since we know we'll be needing the tag names.

# ...
def fetch_item_tags
  ItemTag.where(item_type: item.class, item_id: item.id).includes(:tag)
end
# ...

TagStorage#fetch_item_tags

That takes care of loading tags. Storing tags is a little more involved. Our #store method must find the difference between the tags currently stored for the item, and the tags that have been set using the item's #tags attribute. Then it must create and delete ItemTag and Tag records accordingly.

# ...
def store(tags)
  current_tags  = item_tags.map(&:name)
  new_tags      = Array(tags)
  remove_tags(current_tags, new_tags)
  add_tags(current_tags, new_tags)
end
# ...

TagStorage#store

#add_tags determines which tags have been added, and creates the needed ItemTag mappings:

# ...
def add_tags(current_tags, new_tags)
  new_tags = new_tags - current_tags
  new_tags.each do |tag|
    item_tags << ItemTag.create!(item_tag_attributes(tag))
  end
end
# ...

TagStorage#add_tags

It uses a helper method #item_tag_attributes to generate the attributes for new ItemTag records:

# ...
def item_tag_attributes(t)
  tag = Tag.find_or_create_by_name(t)
  {item: item, tag: tag}
end
# ...

TagStorage#item_tag_attributes

Finally, #remove_tags goes through the cached list of item tag mappings, and removes any that are no longer needed from both the cached list and the database.

# ...
def remove_tags(current_tags, new_tags)
  removed_tags = current_tags - new_tags
  item_tags.each do |item_tag|
    if removed_tags.include?(item_tag.name)
      item_tag.delete
      item_tags.delete(item_tag)
    end
  end
end
# ...

TagStorage#remove_tags

Whew! That was a fair amount of code. But now we have a way to store tags in the database for arbitrary objects: all we need is a type and an id, and we can store tags against it.
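
Used directly it looks roughly like this (in practice only TaggableRecord ever talks to it; post here is assumed to be a saved, as-yet-untagged record):

storage = TagStorage.new(post)
storage.store(%w[barley hops])   # creates Tag and ItemTag rows as needed
storage.load                     # => ["barley", "hops"]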

Putting on the finishing touches

Since TaggableRecord now needs a TagStorage object, we have to update our Taggable conversion.

def Taggable(item, tag_storage=::TagStorage.new(item))
  return item if item.kind_of?(::Taggable)
  item.extend(::Taggable)
  case item
  when ::Class, ::ActiveRecord::Relation
    item.extend(::TaggableRelation)
  else 
    item.extend(::TaggableRecord)
    item._tag_storage = tag_storage
  end
  item
end

Updating Taggable() to use TagStorage

The Taggable() code reminds us: what about wrapping classes and relations? How does that change now that we are storing tags in the database?

The #all_tags accessor doesn't change much, except now it's querying ItemTag instead of the current class/relation:

# ...
def all_tags
  TagList(ItemTag.where(item_type: klass).includes(:tag).map(&:name))
end
# ...

Updating TaggableRelation#all_tags to query ItemTag

Now that tagged items are no longer expected to have a tags field, we need to wrap the underlying #new method in one which understands and handles the tags parameter. Otherwise if we tried to create e.g. a new Post with parameters which included tags, it would complain that no such attribute exists.

def new(attrs={}, &block)
  attrs = attrs.dup
  tags  = attrs.delete(:tags)
  Taggable(super(attrs, &block)).tap do |item|
    item.tags = tags
  end
end

Implementing TaggableRelation#new to handle "tags" attribute

The last TaggableRelation method we need to update is the one that enables us to get a list of all items tagged with a particular keyword. This method turns out to be a bit of a doozy.

def tagged(tag)
  joins("JOIN item_tags ON item_tags.item_id = #{table_name}.id AND " \
        "item_tags.item_type = \"#{klass.name}\"").
    joins("JOIN tags ON item_tags.tag_id = tags.id").
    where("tags.name = ?", tag)
end

Updating TaggableRelation#tagged

I will concede that this is a bit nuts. In trying to avoid putting any requirements whatsoever on the tagged classes—enabling classes like Post to be blissfully unaware that tagging even exists—we've been forced to assemble a pretty gnarly query. Simply requiring classes which may be tagged to declare a has_many :through relationship to the tags table would have vastly simplified this.

Sometimes going "off the rails" results in pain like this. Is it worth it? That's a decision only you and your team can make, in the context of a given project.

We've now migrated tag storage from a per-record field to a set of separate tables. While we wrote a fair amount of new code for this, it's worth reflecting on what we didn't change:

  1. We didn't change any of the views.
  2. We didn't change any of the controllers.
  3. We didn't change any of the helpers.
  4. We didn't change the Blog model.
  5. We removed one line from Post.

By pulling "taggability" into a discrete role, rather than an inherent attribute of the tagged objects, we've decoupled what is tagged from how it is tagged. Our tagging implementation can change independently of other concerns. If tomorrow we decided to change to using Redis for our tag store, the change might not be small, but it would be isolated. Most of the app wouldn't know or care about the change. To quote Kent Beck in Smalltalk Best Practice Patterns:

When you can extend a system solely by adding new objects without modifying any existing objects, then you have a system that is flexible and cheap to maintain.

Reconsidering Taggable

Implementing the Taggable role has been an instructive exercise. I don't know about you, but I learned a lot while writing it.

Looking back over the code we wrote for Taggable, some of it is undeniably awkward. And for an aspect like taggability, which many would consider an intrinsic property of a blog post, it honestly feels like overkill. I think if I were to write a blog engine right now, based on these experiences, I would keep Taggable as an ordinary module and include it into the Post class.

However, I don't think this completely invalidates the technique. There are more truly orthogonal concerns which I think would lend themselves well to this dynamic role-extension approach. As an example, consider adding security to the application. It would be nice if we could write our models without thinking about the crosscutting concern of security, and then dynamically extend them with an AccessControlled role at the controller level. Since such a role might well want to intercept ActiveRecord methods like #save, the dynamic module extension technique would be well-suited to it.

The bottom line is this: your classes don't need to grow linearly with your requirements, even if they are your app's "core" classes. It is possible to build up your business objects as composites of small, orthogonal pieces. Whether through decoration, composition, or module extension, you have the power to put a stop to unchecked method creep today, and implement your next feature as a small, focused unit.

Respecting controller privacy

There's something that has been bugging me about our views.

Rails tries to make view templates look kind of like fancy controller methods, right down to sharing instance variables with the controller. In order to maintain this ruse Rails resorts to the kludgy expedient of copying instance variables one by one from the controller to the view template on render. Many have noted this as a particularly egregious violation of encapsulation. Until now, we haven't addressed this violation in our blog app.

Let's remedy the situation now. We'll start with the @blog variable which is set in ApplicationController. We'll continue to set the variable, but instead of setting it in a before filter we'll expose it using a method named #blog.

# ...
 private

 def blog
   @blog ||= THE_BLOG
 end
 # ...

A controller accessor for the Blog instance

#blog is private here, so that it won't be exposed as a controller action. Unfortunately this means it won't be available to views, either. That's easily fixed, however:

# ...
 def blog
   @blog ||= THE_BLOG
 end
 helper_method :blog
 # ...

Exposing ApplicationController#blog to views with helper_method

helper_method tells Rails to make the method available to views and helpers.

We make a similar change to PostsController:

# ...
def show
  @post = exhibit(Post.find_by_id(params[:id]), self)
  respond_with(@post)
end

private
attr_reader :post
helper_method :post
# ...

Exposing the current post as a controller accessor method

We continue to set @post directly in the controller actions, but for the purpose of views we expose it using a method #post.

Now we go through our views looking for "@" signs and replacing direct instance variable references with method calls. E.g.

<h1>New Post</h1>
<%= form_for Taggable(@post) do |f| %>
<!-- ... -->

…becomes:

<h1>New Post</h1>
<%= form_for Taggable(post) do |f| %>
<!-- ... -->

This is more than a feel-good change. We gain a couple of pragmatic benefits from calling methods instead of accessing instance variables:

  1. Easier refactoring to partials. Well-written partials reference "@"-less local variables set with the :locals, :object, or :collection keys rather than directly accessing instance variables. Since the views now use the "@"-free form, they can be easily broken up into partials without altering their content.
  2. More informative error messages. How many times have you struggled with mysterious "NoMethodError on NilClass" failures in views? A missing or misspelled instance variable will default to nil, which isn't very helpful for tracking it down. A missing or misspelled method, on the other hand, will raise an exception which tells you exactly what method couldn't be found.

This is the cheapest, most trivial way to begin distancing views from controllers. If you are interested in exploring more advanced OO approaches to rendering views, check out the Further Reading section.

Jealously guarding collections

The changes we just made enable us to fix another nagging issue. Back in the Exhibits for REST chapter, we did something we said we were going to try to avoid. We accessed a Post object directly from the Post model, instead of going through the Blog.

class PostsController < ApplicationController
  # ...
  def show
    @post = exhibit(Post.find_by_id(params[:id]), self)
    respond_with(@post)
  end
  # ...
end

Accessing a post via the Post class

Since we added a #blog method to ApplicationController, we now have access to that method from all controllers. We now change PostsController#show to find the requested post through the blog.

class PostsController < ApplicationController
  # ...
  def show
    @post = exhibit(blog.post(params[:id]), self)
    respond_with(@post)
  end
  # ...
end

Accessing a post via the Blog instance

For this to work, we need a Blog#post query method.

class Blog
  # ...
  def post(id)
    entries.find_by_id(id)
  end
  # ...
end

The Blog#post query method

Now the Post object loaded by PostsController#show is no longer a Lone Wolf.

Objects as lending libraries

Note what we didn't write in the controller:

# ...
@post = exhibit(blog.entries.find_by_id(params[:id]), self)
# ...

This version would have satisfied the "tree of objects" architectural style we're shooting for. But there are a couple of problems with it.

For one thing, it taunts the Law of Demeter. It sets up structural coupling between PostsController and Blog. It duplicates the knowledge that Blog has a collection of entries which responds to #find_by_id and returns a Post, thus making that structure just a little bit harder to change.

But another, and perhaps even bigger problem is that it makes it harder for Blog to exercise control over its entries collection.

Imagine a lending library. A lending library has various collections: books, videos, periodicals. The library will allow patrons to use and even borrow the items in its collection; that's its purpose. But the library doesn't allow them to just take items willy-nilly.

Instead, the library requires patrons to enter during visiting hours. They must register for a library card before borrowing items. They must come to the counter and present their card before taking an item out of the library. The books they borrow have stickers or perhaps even an RFID tag identifying them as part of the library's collection.

Now imagine the library as an object. Would the library we've just described prefer that a book be withdrawn like this?

library.books.find_by_isbn('161293031X')

That's a bit like putting the bookshelf outside the front door of the library and allowing passersby to take books off the shelves as they please.

I suspect the electronic librarians working inside our virtual library would much rather the patrons used an interface like this:

library.borrow_book('161293031X', library_card)

OK, I admit, this is getting to be a pretty stretched-out metaphor. The point is, when client code accesses an object's children through a collection object, rather than directly through the parent object, the parent is no longer the mediator. Remember the advantages we cited earlier of having a parent object mediate access to its children:

  1. The ability to control access based on authorization information.
  2. The opportunity to pre-load child objects with a reference back to their parent.
  3. The opportunity to keep a list of child objects in the parent, for autosave or other purposes.
  4. The ability to decide the concrete type of child object to return, or, in this example, what collection(s) to search for the child.

It's not impossible for a parent object to mediate access to its children even with a collection object as a middleman. There are schemes involving proxy objects and callbacks which can enable the parent object to stay "in the loop" with regard to child access.

It's easier and introduces less structural coupling, however, to simply have all child access go through the parent object. That may mean, in some cases, that the parent simply delegates finder methods directly to an underlying collection, as we did in Blog#post(id). But even that simple act of delegation introduces a seam where more involved processing can be introduced later on, with no change to the code's clients.
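
For example (purely hypothetical, and assuming Post exposed a #blog= writer), Blog#post could later grow richer behavior without any of its callers changing:

def post(id)
  entries.find_by_id(id).tap do |post|
    post.blog = self if post   # hand the child a reference back to its parent
  end
end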

In short, we prefer to let parent objects mediate access to their children, rather than having clients pick through their collection associations directly.

Toward self-rendering objects

We've taken some small steps towards wrapping our models with the "smarts" to render themselves as HTML or JSON, but there's still room for improvement. Here is our partial for rendering entries as it stands now:

<% entry = exhibit(entry, self) %>
<article>
  <header>
    <p><time pubdate="pubdate"><%= entry.pubdate %></time></p>
    <h3><%= entry.title %></h3>
    <p class="entry_tags">Tags: 
      <span class="tags"><%= Taggable(entry).tags %></span>
    </p>
  </header>
  <%= entry.render_body %>
</article>

The entry partial

As you'll recall, earlier we avoided putting logic in the view by calling entry.render_body and delegating the decision of which body partial to render to the Exhibit. While there is no branch logic in this partial, there's still more code than we'd like.

Given our preference, we'd rather the template looked something like this:

<!-- app/views/posts/_post.html.erb -->
<article>
  <header>
    <p><time pubdate="pubdate"><%= post.pubdate %></time></p>
    <h3><%= post.title %></h3>
    <div class="entry_tags">Tags: 
      <%= post.tags.render(self) %>
    </div>
  </header>
  <%= post.render_body(self) %>
</article>

An ideal entry partial

There are a few differences in this version:

  1. No entry = exhibit(entry, self) boilerplate at the beginning.
  2. Now that we have a more fleshed-out Post model, which this partial is specific to, we've replaced the entry terminology with post. We've also moved the file to the more canonical location app/views/posts/_post.html.erb.
  3. No more remembering to put Taggable() around the post whenever we want to work with tags.
  4. Speaking of tags, instead of rendering the tag list as a string directly, we now tell the list to render itself within the view template.

Let's see what we need to do in order to make this refactored view template work.

Exhibiting from inside the controller

First of all, having to explicitly exhibit() models inside each view is a drag. To avoid this, we start exhibiting our models at the controller level.

We're actually already doing this in the action for showing a single post:

# app/controllers/posts_controller.rb
# ...
def show
  @post = exhibit(blog.post(params[:id]), self)
  respond_with(@post)
end
# ...

Exhibiting a post at the controller level

So that's alright. But now that we are using a controller as the context for the exhibited model, instead of a view context, we have a problem: render_body no longer works.

Remember that render_body relies on having a reference to a view template in order to work:

class PicturePostExhibit < Exhibit
  def render_body
    @context.render(partial: "/posts/picture_body", locals: {post: self})
  end
end

Review of PicturePostExhibit#render_body

Yes, ActionController provides a #render method too, but it's a very different beast from the #render found in views. It expects to be called once per action in order to determine what top-level rendering action to take.

So, we change #render_body to accept a template argument:

# ...
def render_body(template)
  template.render(partial: "/posts/picture_body", locals: {post: self})
end
# ...

Adding a template argument to PicturePostExhibit#render_body

And then we explicitly pass in the current view context object in the view:

<article>
  <!-- ... -->
  <%= post.render_body(self) %>
</article>

Explicitly passing the view context to #render_body

From now on, we'll use this pattern of explicitly passing in the template object which the exhibit should use to render itself.

Incidentally, this is a classic example of Double Dispatch. Quoting Smalltalk Best Practice Patterns:

How can you code a computation that has many cases, the cross product of two families of classes? […] The solution is adding a layer of messages that get both objects involved in the computation.

In our case, the two families of cases are:

  1. The specific Exhibit which might be wrapped around a given model object; and
  2. The template object within which the model may be presented.

The first dispatch, the call to #render_body, enables the post body partial to be determined polymorphically. The second dispatch, the call to template.render, farms the specific details of finding and rendering the chosen partial back out to the template object.

Exhibiting the blog object

It was easy enough to exhibit a single Post object at the controller level. It gets more complicated when we look at rendering multiple blog posts in the context of the front page.

In that context, we can't directly exhibit entries at the controller level, because the controller only supplies a blog object to the view. Then, consistent with our object tree, we access the blog entries via the blog object using the #entries accessor.

We'll have to start at the root of the tree by first exhibiting the blog object itself at the controller level. We alter ApplicationController to always decorate the blog before returning it:

class ApplicationController < ActionController::Base
  include ExhibitsHelper
  # ...
  def blog
    @blog ||= exhibit(THE_BLOG)
  end
  # ...
end

Exhibiting the Blog instance at the controller level

This alone, however, is not quite enough to ensure the blog object is always exhibited. Recall that earlier we made it possible to present only a subset of posts on the blog home page by filtering by tag:

class BlogController < ApplicationController
  # ...
  def blog
    if params[:tag].present?
      super.filter_by_tag(params[:tag])
    else
      super
    end
  end
end

Filtering the blog by tag

If there's no filter, we get the exhibited blog from ApplicationController#blog, and all is well. If there is a filter, however, we get a raw Blog::FilteredBlog instance - with no exhibit wrapped around it!

The naive solution is to keep calling exhibit() every time we have a new non-exhibited object to deal with:

exhibit(super.filter_by_tag(params[:tag]))

This solution is ugly, verbose, and unsustainable. The more code we write, the more often we'll wind up accidentally leaving out an exhibit() call and introducing a bug.

We need a way to recursively exhibit objects, such that the exhibited object's "children" are automatically exhibited as well.

The exhibit_query macro

Our first cut at this problem is to write an explicit wrapper for Blog#filter_by_tag in a brand-new BlogExhibit class:

class BlogExhibit < Exhibit
  # ...
  def filter_by_tag(*)
    exhibit(super)
  end
end

Exhibiting the result of BlogExhibit#filter_by_tag

This method simply calls the underlying model's #filter_by_tag method and then wraps the result in a call to exhibit() before returning it.

We know we're going to need this for more than Blog#filter_by_tag. So once we are confident this works, we extract it to a "macro" method:

class Exhibit
  # ...
  def self.exhibit_query(*method_names)
    method_names.each do |name|
      define_method(name) do |*args, &block|
        exhibit(super(*args, &block))
      end
    end
  end
  private_class_method :exhibit_query
  # ...
end

A generic exhibit_query macro

We call the macro exhibit_query. "Query", in this case, is used in the sense of "command/query separation". It refers to a method whose purpose is to return an object, rather than to effect some change in the system state. exhibit_query advises such methods so that their return value will be wrapped in the appropriate exhibit(s) (if any).
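
Concretely, exhibit_query :filter_by_tag defines the same method we just wrote by hand; it is as if we had written:

def filter_by_tag(*args, &block)
  exhibit(super(*args, &block))
end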

We then replace our explicit version in BlogExhibit with a call to the new macro:

class BlogExhibit < Exhibit
  # ...
  exhibit_query :filter_by_tag
end

Using the exhibit_query macro

Finishing the BlogExhibit

We finish out our new BlogExhibit with a predicate to determine if it is applicable.

class BlogExhibit < Exhibit
  def self.applicable_to?(object)
    object_is_any_of?(object, 'Blog', 'Blog::FilteredBlog')
  end
  # ...
end

BlogExhibit applicability

This definition makes use of a new Exhibit.object_is_any_of? helper to match on any of a list of class names. Here's the definition of Exhibit.object_is_any_of?:

def self.object_is_any_of?(object, *classes)
  # What with Rails development mode reloading making class matching
  # unreliable, plus wanting to avoid adding dependencies to
  # external class definitions if we can avoid it, we just match
  # against class/module name strings rather than the actual class
  # objects.
  # Note that '&' is the set intersection operator for Arrays. 
  (classes.map(&:to_s) & object.class.ancestors.map(&:name)).any?
end
private_class_method :object_is_any_of?

The Exhibit.object_is_any_of? helper predicate

Note that we made both this method and the exhibit_query "macro" private with private_class_method. Since they are both intended exclusively for use inside exhibit class definitions, there is no reason to clutter up the class' external interface with them.

Rendering the list of posts

Let's circle back and remind ourselves what we're trying to accomplish right now. Inside the template for the BlogController#index action we'd like to be able to write something like this:

<!-- ... -->
<%= render partial: "/posts/post", collection: blog.entries %>
<!-- ... -->

So far, we've been trying to find a way for the exhibited blog object to recursively confer exhibited status on its children, the blog entries.

We have the first part of the puzzle now. In order for blog.entries to also be exhibited, we need to make it an "exhibited query" just as we did with Blog#filter_by_tag.

class BlogExhibit < Exhibit
  # ...
  exhibit_query :filter_by_tag, :entries
end

Exhibiting the entry list

Is this all we need to do? Alas, no. The return value of Blog#entries is a collection (an ActiveRecord::Relation instance, if you really want to know). We've made sure that the collection itself will go through the exhibiting process; but the individual elements of the collection will still come back as bare, unadorned Post objects. Objects which have no idea what to do when someone sends them the #render_body message.

An exhibit for collections

What we need is a type of Exhibit which will wrap a collection, and ensure that any elements accessed from within the collection will also be exhibited.

We begin to spec out an EnumerableExhibit class, using Ruby's Enumerable, Array, and Hash classes as a guideline for what methods a collection exhibit should be expected to handle.

Here's a sample of the spec:

describe EnumerableExhibit do
  # ...
  subject { EnumerableExhibit.new(model, context) }
  let(:model) { ["e1", "e2", "e3"] }
  let(:context) { Object.new }

  before do
    # #exhibit is part of the superclass interface, not this class'
    # interface, so it is fair game for stubbing
    stub(subject).exhibit {|model|
      @last_exhibited = model
      "exhibit(#{model})"
    }
  end

  describe "#each" do
    it "exhibits each element" do
      results = []
      subject.each do |e| results << e end
      results.must_equal(["exhibit(e1)", "exhibit(e2)", "exhibit(e3)"])
    end
  end

  # ...

  describe "#grep" do
    it "exhibits the result set" do
      subject.grep(/[12]/).must_equal('exhibit(["e1", "e2"])')
    end
  end

  describe "#select" do
    it "exhibits each result" do
      subject.select{|e| /[23]/ === e}.must_equal('exhibit(["e2", "e3"])')
    end
  end

  # ...

  describe "#[]" do
    it "exhibits the result" do
      subject[1].must_equal("exhibit(e2)")
    end
  end

  # ...

  describe "#group_by" do
    it "exhibits the result" do
      subject.group_by{|e| e == "e2"}.
        must_equal({ true  => 'exhibit(["e2"])',
                     false => 'exhibit(["e1", "e3"])'})
    end
  end

  # ...
end


Sample of the EnumerableExhibit spec

This is just a small subset of the full spec; we want to be very careful to avoid surprises with this class, so we carefully spec out the behavior of every common, and some not-so-common, Enumerable and Array method, as well as one or two methods from ActiveRecord::Relation.

For some of the methods, getting the desired behavior is as simple as making the EnumerableExhibit Enumerable itself, and defining #each.

class EnumerableExhibit < Exhibit
  include Enumerable
  # ...
  def each(*)
    super do |e|
      yield exhibit(e)
    end
  end
  # ...
end

Making EnumerableExhibit Enumerable

We implement #each to wrap each element in exhibit() before yielding. Because they are implemented in terms of #each, methods like #map and #inject will now work exactly as expected, also wrapping each element in exhibit() before returning them.

Other accessor methods which return single elements, such as #[] and #fetch, we are able to quickly add using our handy exhibit_query macro.

class EnumerableExhibit < Exhibit
  # ...
  exhibit_query :[], :fetch
  # ...
end

Using exhibit_query in the EnumerableExhibit implementation

However, there is another set of methods which are not so straightforward. As an example, consider Enumerable#select, aka Enumerable#find_all. This method takes a one-argument block and returns all elements for which the block evaluates to true.

At first, it seems like we could just rely on the free implementation of #select that we get from Enumerable. And indeed, there's nothing broken about this version. Each matching element (if any) will be wrapped in appropriate exhibits.

The problem is that "filter" methods like select are often used as part of chains of calls, each link in the chain winnowing down and/or transforming the elements until the desired result set is reached. And it really doesn't make sense to be wrapping exhibits around every single element at intermediate points in the chain. In addition, for some types of filter chains, having the individual elements wrapped in exhibits may mess up the filter logic in non-obvious ways. What we really want is to leave the individual elements yielded to the #select's block to be "pristine", and for the method to return a new EnumerableExhibit which is wrapped around the result set.

In order to achieve this behavior, we write another macro along the lines of exhibit_query, this time called exhibit_enum. The job of exhibit_enum is to wrap an underlying collection method such that:

  1. It has "stock" block behavior - no wrapping elements in exhibit() before yielding them to the block.
  2. The return value is run through the exhibiting process in order to wrap the result set in a new EnumerableExhibit result set.
  3. Optionally, the return value is post-processed with a custom block to account for more complex result formats.

Here's the macro:

class EnumerableExhibit < Exhibit
  # ...
  def self.exhibit_enum(*method_names, &post_process)
    post_process ||= ->(result){exhibit(result)}
    method_names.each do |method_name|
      define_method(method_name) do |*args, &block|
        result = __getobj__.public_send(method_name, *args, &block)
        instance_exec(result, &post_process)
      end
    end
  end
  private_class_method :exhibit_enum
  # ...
end

The exhibit_enum macro

We then list out the methods to be wrapped in this fashion.

class EnumerableExhibit < Exhibit
  # ...
  exhibit_enum :select, :grep, :reject, :to_enum, :sort, :sort_by, :reverse
  # ...
end

Using exhibit_enum in EnumerableExhibit

As noted above, a few methods require special post-processing for the return value. For example, Enumerable#partition returns an array of arrays instead of a simple array. We handle these cases separately:

class EnumerableExhibit < Exhibit
  # ...
  exhibit_enum :partition do |result|
    result.map{|group| exhibit(group)}
  end
  # ...
end

Post-processing the results of EnumerableExhibit#partition

Here's the source of the complete EnumerableExhibit class.

require_relative 'exhibit'

class EnumerableExhibit < Exhibit
  include Enumerable

  def self.applicable_to?(object)
    # ActiveRecord::Relation, surprisingly, is not Enumerable. But it
    # behaves sufficiently similarly for our purposes.
    object_is_any_of?(object, 'Enumerable', 'ActiveRecord::Relation')
  end

  # Wrap an Enumerable method which returns another collection
  def self.exhibit_enum(*method_names, &post_process)
    post_process ||= ->(result){exhibit(result)}
    method_names.each do |method_name|
      define_method(method_name) do |*args, &block|
        result = __getobj__.public_send(method_name, *args, &block)
        instance_exec(result, &post_process)
      end
    end
  end
  private_class_method :exhibit_enum

  exhibit_query :[], :fetch, :slice, :values_at, :last
  exhibit_enum :select, :grep, :reject, :to_enum, :sort, :sort_by, :reverse
  exhibit_enum :partition do |result|
    result.map{|group| exhibit(group)}
  end
  exhibit_enum :group_by do |result|
    result.inject({}) { |h,(k,v)|
      h.merge!(k => exhibit(v))
    }
  end

  def each(*)
    super do |e|
      yield exhibit(e)
    end
  end

  # `render '...', :collection => self` will call #to_ary on this
  # before rendering, so we need to be prepared.
  def to_ary
    self
  end
end

The complete EnumerableExhibit implementation

Transitive exhibited-ness

It's taken some effort, but we've got some pretty nifty functionality on our hands now. Given a line of code like the following:

entry = exhibit(blog).entries.first

The "exhibited" nature of blog will be conferred upon entries, and then upon the chosen entry. Each object in the chain will be transparently wrapped in the appropriate Exhibit objects (if any). In effect, we've made the "exhibited" property transitive from parent objects to children.

This template code now works:

<!-- ... -->
<%= render partial: "/posts/post", collection: blog.entries %>
<!-- ... -->

What this means is that we can now be confident that no matter where the /posts/post partial is rendered, either on a single-post page or as part of the front-page index, its post local will refer to an exhibited object. We can safely call post.render_body(self) and know that the right body partial will be rendered.

Telling the post to render itself

When we look at the code to render a post body:

<article>
  <!-- ... -->
  <%= post.render_body(self) %>
</article>

Rendering a post body

…a question naturally springs to mind. If we can tell an exhibited Post to render its body… why not tell it to render the whole post the same way? Making this work will be our next task.

It turns out to be fairly straightforward. First, we add a method #to_partial_path to Exhibit. This method's job is to return an appropriate partial path (as in render partial: path) for the exhibited model.

class Exhibit < SimpleDelegator
  # ...
  def to_partial_path
    if __getobj__.respond_to?(:to_partial_path)
      __getobj__.to_partial_path
    else
      partialize_name(__getobj__.class.name)
    end
  end
  # ...
end

Implementing Exhibit#to_partial_path

We first check to see if the underlying model has its own idea of what partial should be used. We do this because as of Rails 3.2, #to_partial_path is a part of the ActiveModel API. If a model implements the method, we want to defer to it by default.

Otherwise, we munge the model's class name into a partial path using a helper method #partialize_name. That method is defined as follows:

class Exhibit < SimpleDelegator
  private
  # ...
  def partialize_name(name)
    "/#{name.underscore.pluralize}/#{name.demodulize.underscore}"
  end
  # ...
end

Generating a default partial path

Given the class name "Post", this will return "/posts/post", which is exactly what we want.
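
The same logic also handles namespaced class names, assuming ActiveSupport's default inflections; for instance:

partialize_name("Post")        # => "/posts/post"
partialize_name("Admin::Post") # => "/admin/posts/post"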

Now that we know the name of the partial to render, the #render method itself is trivial.

class Exhibit < SimpleDelegator
  # ...
  def render(template)
    template.render(:partial => to_partial_path, :object => self)
  end
  # ...
end

Exhibit#render

We can now use this in app/views/posts/show.html.erb to render the post object:

<%= post.render(self) %>

Entry collection, render thyself

OK, that's cool, but can we use the same thing for rendering a list of posts? Like, say, on the blog front page?

Not quite yet. Here's what we'd like to write:

<!-- ... -->
<%= blog.entries.render(self) %>

We know we can tell an exhibited Post to render itself, but here we're rendering a collection of posts.

Hm, "collection"… didn't we just finish writing an exhibit class for collections? And doesn't it now inherit #render from Exhibit?

It does indeed, but if we try the code above right now, it tries to render the collection of posts with a partial named /active_record/relation, and fails when it can't find that partial. Let's customize EnumerableExhibit to do something a little smarter.

class EnumerableExhibit < Exhibit
  # ...
  def render(template)
    inject(ActiveSupport::SafeBuffer.new) { |output,element|
      output << element.render(template)
    }
  end
  # ...
end

Implementing EnumerableExhibit#render to render each element

Instead of rendering a partial, this version of #render iterates over the elements of the underlying collection, rendering each one and appending the results to a buffer. Remember that #inject is implemented in terms of our #each method, which exhibits the elements before yielding them. We use an ActiveSupport::SafeBuffer instead of a String so that Rails will render the resulting string as raw HTML instead of escaping it.

Once we upgrade to Rails 3.2 we'll probably be able to switch this to use a simple render self and rely on Rails to ask each item for a partial path using #to_partial_path. But for right now this works well enough.
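
One possible shape for that future version, sketched here under the assumption that Rails 3.2's collection rendering will ask each exhibited element for its #to_partial_path:

class EnumerableExhibit < Exhibit
  # ...
  def render(template)
    # Rails 3.2+ can render a collection directly; #to_a goes through our
    # #each, so every element is already exhibited and supplies its own
    # #to_partial_path.
    template.render(to_a)
  end
  # ...
end

For now, though, we'll stick with the #inject version above.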

Now we can write this:

<!-- ... -->
<%= blog.entries.render(self) %>

And it renders blog posts to the front page.

Rendering a TagList

Back at the beginning of this chapter, we said we'd like to be able to write the following code as part of our app/views/posts/_post.html.erb partial:

<!-- ... -->
<div class="entry_tags">Tags: 
  <%= post.tags.render(self) %>
</div>
<!-- ... -->

Telling a TagList to render itself

The first step to making this work the way we want is to make sure the tags collection gets exhibited. Which means we need to make the tags collection accessible in the first place. Remember, #tags is added by the Taggable wrapper; it's not an intrinsic feature of Post.

We'll kill two birds with one stone:

class PostExhibit < Exhibit
  # ...
  def tags
    exhibit(Taggable(to_model).tags)
  end
  # ...
end

Exposing tags in PostExhibit

Calling #tags on an exhibited Post now gets us the exhibited TagList associated with that post.

TagList is Enumerable, so it will have EnumerableExhibit added to it. Unfortunately, this doesn't give us quite what we need. EnumerableExhibit works well for rendering collections of render-able objects. But the tags in TagList are just strings, and we don't have a StringExhibit. Nor are we sure we want one.

Instead, we'll create a dirt-simple TagListExhibit.

# app/exhibits/tag_list_exhibit.rb
class TagListExhibit < Exhibit
  def self.applicable_to?(object)
    object_is_any_of?(object, 'TagList')
  end
end

TagListExhibit

Wait a sec… that doesn't actually do anything, does it? In fact, we don't need it to do much. So long as it comes after EnumerableExhibit in the list of exhibits, it will override that exhibit's #render by dint of being the "outermost" exhibit. Which means it will use the default Exhibit#render, which means it will look for a partial named /tag_lists/tag_list. Which we then dutifully provide:

<!-- app/views/tag_lists/_tag_list.html.erb -->
<ul class="tags">
  <%= render partial: "/tags/tag_item", collection: tag_list %>
</ul>

A partial for tag lists

The /tags/tag_item partial, as you may recall from earlier, renders /tags/tag inside an <LI> tag, and /tags/tag renders the tag as a link to all posts with that tag.

<!-- app/views/tags/_tag_item.html.erb -->
<li><%= render partial: "/tags/tag", object: tag_item %></li>

<!-- app/views/tags/_tag.html.erb -->
<%= link_to tag.to_s, root_path(tag: tag.to_s) %>

Now anywhere we exhibit a TagList, we can tell it to #render itself and it will generate an unordered list. With a little CSS (not shown) we can make this work anywhere we show a list of tags.

Bringing it all back home

We opened this chapter with some code showing how we would like to write the /posts/post template. Let's take a look at it again, alongside the original template, now that we've done the work to make the new version render successfully.

<!-- old -->
<% entry = exhibit(entry, self) %>
<article>
  <header>
    <p><time pubdate="pubdate"><%= entry.pubdate %></time></p>
    <h3><%= entry.title %></h3>
    <p class="entry_tags">Tags: 
      <span class="tags"><%= Taggable(entry).tags %></span>
    </p>
  </header>
  <%= entry.render_body %>
</article>

The old post template
<!-- new -->
<article>
  <header>
    <p><time pubdate="pubdate"><%= post.pubdate %></time></p>
    <h3><%= post.title %></h3>
    <div class="entry_tags">Tags: 
      <%= post.tags.render(self) %>
    </div>
  </header>
  <%= post.render_body(self) %>
</article>

The new post template

Let's also take a look at the revised /blog/index template within which this template is rendered.

<!-- app/views/blog/index.html.erb -->
<h1><%= blog.title %></h1>
<h2 class="tagline"><%= blog.subtitle %></h2>
<%= blog.entries.render(self) %>

The new blog/index template

Let's recap what we did:

  1. We moved exhibiting of the post and blog object into their respective controllers. No more cluttering up templates with calls to exhibit().
  2. By making "exhibited-ness" transitive, and making collections such as ActiveRecord::Relation exhibit-able, we made it possible to render the collection of blog entries with a straightforward:
    blog.entries.render(self)
    
    
  3. We extracted the wrapping of the post object with Taggable() into the post exhibit object.
  4. We made TagList exhibit-able, making the rendering of a list of tags have the exact same form as the rendering of the list of blog entries: post.tags.render(self).

With these changes, we've taken a big step towards a "component", or "widget" style of page rendering. By enabling objects in the template to render themselves to the page, we've engaged the full power of polymorphism. We've introduced a significantly greater degree of flexibility to vary what is shown, how it is shown, and where it is shown independently.

But we've still respected many Rails conventions. We still look for view templates in the conventional places. We still use the plain vanilla Rails RHTML views, along with the standard tag helpers, to take care of the actual rendering of HTML tags to the browser. We still follow the standard Rails workflow of: set up an object in the controller, then render its attributes in a template.

In short, as with the other parts of this text we've tried to find a middle way. A little more decoupled and flexible than vanilla Rails style, but still close enough that someone new to the project could learn their way around within a few days.

Summary

Rather than a recap, this section is structured as a list of scenarios, with pointers back to the relevant sections.

Scenarios

You are beginning a new application

Work from the outside in. Define what your screens should look like, and let that drive out your business objects.

You need business objects that don't exist yet in order to flesh out views

Use placeholder objects until the real objects exist.

You are writing view templates

Respect controller privacy by accessing instance variables using accessor methods.

A view requires an ActiveRecord-style object in order to function correctly

Use ActiveModel to make non-ActiveRecord models compatible with Rails helpers.

You are writing business models

Start with plain objects. Leave persistence for later on. Listen to the language of the domain. Organize objects into a roughly tree-like structure with a single root. Empower models to mediate access to their children.

An object needs a way to make new instances of another model

Don't hard-code the dependency on another class. Instead, inject a callable factory that the object can use to manufacture objects. Use sensible defaults to keep client code from having to always supply the dependency.
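
A minimal sketch of the idea, with hypothetical names (the factory is just something callable, defaulted so most callers never notice it):

class Blog
  # Injection point for tests or for alternative post types
  attr_writer :post_source

  def new_post
    post_source.call.tap { |post| post.blog = self }
  end

  private

  def post_source
    @post_source ||= Post.public_method(:new)
  end
end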

A model collaborates with another model or a collection of other models

Don't hard-code assumptions about the class of a collaborator.

A set of model methods are only needed in certain contexts

Consider factoring those methods out into a discrete role.

You are writing unit tests. You want them to be fast and to enforce good encapsulation

Keep your tests isolated from Rails and from classes other than the one under test. Stub out modules and classes that the objects under test reference but don't actually use in the context of the test. Use dependency injection to preserve encapsulation. Inject only the minimum required interface.

A non-ActiveRecord model needs to perform validations

Use ActiveModel::Validations to add validations to arbitrary models.
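
For example, a plain Ruby object can pick up validations with nothing more than the module include (a hypothetical model, not from the demo app):

class EmailSubscription
  include ActiveModel::Validations

  attr_accessor :address
  validates :address, presence: true
end

sub = EmailSubscription.new
sub.valid?            # => false
sub.errors[:address]  # => ["can't be blank"]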

A model must be displayed using different HTML depending on its state

Instead of introducing logic to the view, make an Exhibit object for each different state of the model.

You are developing a complex view with many models

Consider using the Presenter pattern to aggregate the needed models into a single object representing the whole view.

The logic matching Exhibits to models is getting complicated

You need to output similar chunks of HTML in more than one view

If the HTML is not related to a specific model, use a helper.

You need an object to persist between requests

Turn the object into an app-wide Singleton initialized at start-up.
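
A rough sketch of what that can look like; the constant name and title here are purely illustrative:

# config/initializers/blog.rb
THE_BLOG = Blog.new.tap do |blog|
  blog.title = "Watching Paint Dry"
end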

You need to persist a model to the database

You need to test that a database query functions correctly

Use an integration test separate from your isolated unit tests.

A model needs to hook into various persistence lifecycle events

Prefer to override ActiveRecord methods rather than using callbacks, if possible.
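
For instance, setting a default publication date could be done by overriding #save rather than registering a before_save callback (an illustrative sketch, not the demo app's code):

class Post < ActiveRecord::Base
  def save(*)
    self.pubdate ||= Time.current
    super
  end
end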

You want to more fully separate your business models from ActiveRecord

You want to add a child collection without adding a new table

You have replaced custom code with framework features, breaking the unit tests

Throw away your tests. They have served their purpose.

You want to display the same view with various different scopes applied to the model

You want to expose a rich hypermedia RESTful API

Conclusion

Well, that brings us to the end of our Object-Oriented Rails walkthrough. There's a lot more ground I could cover, but this is already a lot longer than I expected it to be.

I've tried to cram a number of ideas for how to build web applications using Object-Oriented methods and Test-Driven Design into this text. Some of them are techniques I use daily; some I use occasionally; and a few I came up with as I was writing this. My hope is that your imagination has been sparked by one or two of these techniques, and you'll be inspired to try them out, or try variations, on your own projects.

Feelin' the burn

Pain don't hurt.

—Dalton, in "Roadhouse"

A lot of the patterns we've looked at have been significantly more work than following the traditional Rails development process. For instance, we went to an awful lot of effort to avoid exposing common ActiveRecord methods on the Post class.

Is all this extra effort worth it? Well, that really depends on the application. In some ways a blog app was a terrible choice for demonstrating these techniques, since if there's one type of app the Rails "golden path" is well suited for, it's a blog. I chose a blog application as the demo because I wanted a problem domain anyone would be familiar with, so as to keep the focus on the coding strategies. As a result, a lot of those strategies felt like overkill in this context.

But in my experience there are a lot of more complex Rails projects out there which could have benefited from a little more "pain" in the form of careful design decisions. Many of these pain points are really canaries in a coal mine, letting us know that our code is becoming overly coupled, our interfaces too large, our objects saddled with too many responsibilities. Like muscles which have sat too long in one position, we don't realize how knotted-up they've become until we stand up and stretch. Disciplines like isolated tests, or FigLeaf, keep us on our toes and keep those muscles from knotting up in the first place.

TMTOWTDI

I want to be clear that the techniques I've demonstrated here aren't "the Right Way" to do Rails; or even necessarily "the Avdi Grimm Way". More than showing you a set of patterns and practices, what I hope you'll take away from this is that Rails is not a set of rigid walls confining your project. Your project is not a Rails project; it is a software project, which uses Rails to satisfy the need of projecting your business objects onto the web. Rails is not the framework; it is a part of your overall project architecture.

As a project scales, its architecture must evolve and grow to support it. That may mean building new abstractions upon the foundation Rails gives you. This should not be a cause for consternation and fear, but a cause for excitement! One, because your project is successful enough to need new abstractions to support its continued evolution. Two, because you're working in Ruby and Rails, and it's all just objects, all the way down.

So don't be afraid to build up your project's architecture. And in the process, don't be shy about lifting idioms and patterns from the Object-Oriented literature. There is very little new under the sun; whatever problem you are facing, chances are Kent Beck already solved it in Smalltalk. Ruby didn't invent OO; it just made it ten times more fun. Likewise, Rails didn't invent web programming patterns; it just stripped away all the ceremony and boilerplate.

at_exit

It's been a ton of work but a lot of fun writing this; I hope you get some value from it. If you have questions, corrections, suggestions, or objections, please don't hesitate to get in touch. I'm always interested in seeing how other developers tackle the problems of sustainably growing and evolving a codebase.

Thanks for reading, and happy hacking!

Appendix A: Further reading

There has been a surge of interest in applying classic Object Oriented principles to Rails development lately. Here are some starting points for further reading.

General

  • The definitive compilation of web application patterns, and the source of many of the patterns found in Rails, is Patterns of Enterprise Application Architecture, by Martin Fowler. Every web application developer should have a copy of this book within easy reach.
  • No object-oriented programming book list is complete without Kent Beck's Smalltalk Best Practice Patterns. Cleverly disguised as a book about Smalltalk, this book is really a comprehensive manual for constructing elegant, expressive code in any OO language.
  • Steven Baker is writing an eBook on applying SOLID principles to Rails code, called Solid Rails.
  • Steve Klabnik wrote an article called "The Secret to Rails OO Design".
  • In September 2011 The Ruby Rogues podcast (of which I am a member) interviewed Jim Weirich on the topic of "Object Oriented Programming with Rails".
  • My fellow CodeBender Piotr Solnica has written on the topic of "Making ActiveRecord Models Thin".
  • Gary Bernhardt has been offering up a wealth of information on clean OO design and writing fast, isolated tests in his Destroy All Software video series.
  • Greg Brown has been writing some great stuff on this topic; for instance, here's an article on applying SOLID Design Principles to Ruby code based on his experience writing the Prawn PDF library.
  • Nicholas Henry: "Rails is Not Your Application".
  • Dan Croak wrote a comprehensive overview of Decorator implementations in Ruby
  • Nick Gauthier gave a presentation on "Ruby and the Web" in February 2012 at B'More on Rails. In it, he talks about how Rails is actually closer to a Model 2 architecture than to Model-View-Controller. He also explores what a more deliberately Object-Oriented web framework in Ruby might look like.
  • Gary Bernhardt has an alternative take on a Ruby web framework called Raptor. Quoting the README: "Raptor is an experimental web framework that encourages simple, decoupled objects. There are no base classes and as little 'DSL' as possible."

Fast Tests and Mock Objects

Data-Context-Interaction

A number of Rails practitioners have begun exploring the application of the Data-Context-Interaction (DCI) pattern to Rails apps. DCI is the brainchild of the same people who came up with MVC, and offers some novel techniques for breaking down functionality along use-case boundaries. Here are some reading suggestions to get you started learning about DCI.

Presenters, decorators, and view models

Views

  • Effigy replaces your view templates with honest-to-goodness objects. Among other interesting implications, this means that instead of layouts you simply have a base class which defines boilerplate HTML, and then individual view subclasses override the parts of the base which they wish to customize.
  • decent_exposure enables you to declaratively set up your controller interfaces, and enforces the use of those interfaces.
  • Apotomo is a view component framework for Rails.
  • The Two-Step View pattern from Patterns of Enterprise Application Architecture is a robust pattern for decoupling models from views.

Rails

  • Rails 3 took massive strides towards being a modular framework which allows you to mix and match pieces, or swap in your own pieces when the stock ones aren't sufficient for your app. For understanding how the Rails puzzle fits together, and how to hook your own code into it, there is no better guide than Crafting Rails Applications, by José Valim.

Appendix B: Acceptance Tests

Here is the acceptance test suite for the demo application. Step implementations and supporting files can all be found in the demo application source code.

Feature: Basic Blog
  As a drying paint enthusiast
  I want to publish blog entries
  So that my friends can enjoy my fascinating hobby

  Scenario: Visit home page
    When I go to the home page
    Then I should see the blog title

  Scenario: Post a text entry
    When I go to the home page
     And I start a new post
     And I fill in the title "First Post!"
     And I fill in the body "I just painted a fence!"
     And I submit the entry
     And I return to the home page
    Then I should see a post with title "First Post!"
     And the post body should be:
     """
     I just painted a fence!
     """

  Scenario: Post a photo
    When I post an entry with these values:
    | name      | value                            |
    | title     | Check it out, I painted the cat! |
    | image_url | http://example.com/madcat.jpg    | 
    Then I should see a post with title "Check it out, I painted the cat!"
     And the post should show image with URL "http://example.com/madcat.jpg"

  Scenario: Post many entries
    When I post the following entries in order:
    | title  | body           | tags |
    | Post A | This is post A | a,z  |
    | Post B | This is post B | b,z  |
    | Post C | This is post C | c,x  |
    Then I should see the following entries in order:
    | title  | body           |
    | Post C | This is post C |
    | Post B | This is post B |
    | Post A | This is post A |
    And I should see tags: a,b,c,x,z
    When I look at posts tagged "z"
    Then I should see the following entries in order:
    | title  | body           |
    | Post B | This is post B |
    | Post A | This is post A |

The acceptance suite

Appendix C: Decoration vs. Dynamic Module Extension

This section is adapted from a blog post originally published January 31, 2012.

Having trouble choosing between Decorators and dynamically adding modules to objects? Let's examine the pros and cons.

Composing an adventure

Consider an adventure game, with objects representing player characters.

class Character
  # ...
end

Character class

A Character can be described:

class Character
  # ...

  # Prints the character description seen in the sample output below
  def describe
    puts "You are a dashing, rugged adventurer."
  end

  # ...
end

Character#describe

A Character can look, listen, and smell his environment:

class Character
  # ...
  # Each sense prints what it perceives; #listen and #smell follow the
  # same pattern as #look (see the sample output below)
  def look
    list("You can see", ["a lightning bug", "a guttering candle"])
  end
  # ...
end

Character sense methods
<<definitions>>
cohen = Character.new
cohen.describe
cohen.look
cohen.listen

You are a dashing, rugged adventurer.
You can see a lightning bug.
You can see a guttering candle.
You hear a distant waterfall.

The character can also consult all of his senses at once:

class Character
  # ...
  # Consult all of the senses at once
  def observe
    look
    listen
    smell
  end
  # ...
end

Character#observe
<<definitions>>
cohen = Character.new
cohen.observe

You can see a lightning bug.
You can see a guttering candle.
You hear a distant waterfall.
You smell egg salad.

Characters can have various effects conferred upon them by items, potions, etc. A simple example is a hat:

require 'delegate'
class BowlerHatDecorator < SimpleDelegator
  def describe
    super
    puts "A jaunty bowler cap sits atop your head."
  end
end

BowlerHatDecorator

At each turn of the game, the Character object will be decorated with whatever effects are currently active, and then a user command will be performed:

<<definitions>>
cohen = BowlerHatDecorator.new(Character.new)
cohen.describe

You are a dashing, rugged adventurer.
A jaunty bowler cap sits atop your head.

Seeing in the dark

A more interesting effect is conferred by an infravision potion. It enables your character to see in the dark.

class InfravisionPotionDecorator < SimpleDelegator
  def describe
    super
    puts "Your eyes glow dull red."
  end

  def look
    super
    look_infrared
  end

  def look_infrared
    list("You can see", ["the ravenous bugblatter beast of traal"])
  end
end

InfravisionPotionDecorator

While the character is experiencing the effects of an infravision potion, his powers of observation increase:

<<definitions>>
cohen = InfravisionPotionDecorator.new(Character.new)
cohen.describe
cohen.look

You are a dashing, rugged adventurer.
Your eyes glow dull red.
You can see a lightning bug.
You can see a guttering candle.
You can see the ravenous bugblatter beast of traal.

There's just one little problem that crops up when the #observe method is called.

<<definitions>>
cohen = InfravisionPotionDecorator.new(Character.new)
cohen.observe

You can see a lightning bug.
You can see a guttering candle.
You hear a distant waterfall.
You smell egg salad.

Hey, where'd that bugblatter beast go?

The Character#observe method calls #look—but since the wrapped object has no knowledge whatsoever of the InfravisionPotionDecorator, it calls the original definition of #look, not the one which also calls #look_infrared.

Now, granted, this flaw actually works out in our intrepid adventurer's favor, since the ravenous bugblatter beast of Traal is so stupid it thinks that if you can't see it, it can't see you. But never mind that: it's still a bug, and bugs must be blattered.

A solution that's all wet

We could patch this flaw by overriding #observe as well in the decorator:

class InfravisionPotionDecorator < SimpleDelegator
  def observe
    look
    listen
    smell
  end
end

Overriding #observe in the InfravisionPotionDecorator

Yuck! This is the exact same implementation as in Character, just copied and pasted so that the correct implementation of #look will be called. Clearly this is non-DRY. But even worse, we've introduced a nasty variety of connascence. Every time we introduce a new Character method which calls #look, we'll have to cull through every single effect decorator which overrides #look, adding copy-and-pasted versions of the new method so that it doesn't accidentally ignore the effect-wrapped version. Double yuck!

Modules to the rescue

In Ruby, there is an easy solution: extend the character with a module instead of a decorator.

module InfravisionPotionModule
  def describe
    super
    puts "Your eyes glow dull red."
  end

  def look
    super
    look_infrared
  end

  def look_infrared
    list("You can see", ["the ravenous bugblatter beast of traal"])
  end
end

InfravisionPotionModule
<<definitions>>
cohen = Character.new.extend(InfravisionPotionModule)
cohen.observe

You can see a lightning bug.
You can see a guttering candle.
You can see the ravenous bugblatter beast of traal.
You hear a distant waterfall.
You smell egg salad.

This time the overridden method is added directly to the object via its singleton class. So even the object's own unmodified methods get the new infravision version of #look.
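
You can see the effect in the object's method lookup chain; the module is inserted ahead of Character for this one instance (a hypothetical console session, with object addresses abbreviated):

cohen = Character.new.extend(InfravisionPotionModule)
cohen.singleton_class.ancestors.first(3)
# => [#<Class:#<Character:0x...>>, InfravisionPotionModule, Character]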

Sadly, by enabling him to see the monster we have sealed our protagonist's fate. But at least we fixed the bug!

Other solutions

That's not the only way to fix the problem. We might, for instance, decompose our Character into individual body parts, with separate attributes for eyes, nose, and ears. The Character could then delegate the individual senses to their respective organs:

require 'forwardable'
class Character
  extend Forwardable

  attr_accessor :eyes
  attr_accessor :ears
  attr_accessor :nose

  def_delegator :eyes, :look
  def_delegator :ears, :listen
  def_delegator :nose, :smell
end

Decomposing Character into body parts

A potion of infravision might then replace the character's eyes with infrared-enhanced ones:

class InfravisionPotionDecorator < SimpleDelegator
  class EyesDecorator < SimpleDelegator
    # ...
  end

  def initialize(character)
    super(character)
    character.eyes = EyesDecorator.new(character.eyes)
  end
end

Decomposing the InfravisionPotionDecorator

…but this is an awful lot of code and ceremony. It might make sense someday, but right now it feels like massive overkill. The module extension approach, by contrast, is only a small change from our original version.

Are decorators overrated?

So what can we learn from this? When composing objects, is it always better to use module extension rather than decoration?

In a word, no. For one thing, decoration is a simpler structure to understand. Given object A wrapped in object B wrapped in object C, it's easy to reason about how method calls will be handled. They'll always go one way: a method in object A will never reference a method in B or C. By contrast, method calls in a module-extended object can bounce around the inheritance hierarchy in unexpected ways.

A second consideration is that once you've extended an object with a module, its behavior is changed for all clients, including itself. You can't interact with the "unadorned" object anymore. You might extend an object for your own purposes, then pass it to a third-party method which doesn't understand the modified behavior of the object and barfs as a result.

Finally, there's a performance penalty. While it varies from implementation to implementation, dynamically extending objects can slow down your code as a result of the method cache being invalidated. Of course, as with all performance-related guidelines, be sure to profile before making any code changes based on this point.

Conclusion

Decoration and module extension are both viable ways to compose objects in Ruby. Which to use is not a simple black-or-white choice; it depends on the purpose of the composition.

For applications where you want to adorn an object with some extra functionality, or modify how it presents itself, a decorator is probably the best bet. Decorators are great for creating Presenters, where we just want to change an object's "face" in a specific context.

On the other hand, when building up a composite object at runtime out of individual "aspects" or "facets", module extension may make more sense. Judicious use of module extension can lead to a kind of "emergent behavior" which is hard to replicate with decoration or delegation.

Appendix D: Test Helper Organization

In the course of this text we introduced a variation on the traditional Rails spec/spec_helper.rb scheme for common spec/test setup code. Instead of a single helper file, we broke the helpers up based on the type of test, so that fast, isolated tests wouldn't be slowed down by unneeded setup when run by themselves.

We introduced this code in a glancing, piecemeal fashion as we constructed the tests, and you might have been left a little unclear as to exactly what goes where. In this section I'll lay out the test helper files and explain the purpose of each.

I've also reorganized and tweaked the test helpers in my demo codebase since completing the main text. This section reflects the more recent version of the helpers.

spec/spec_helper_lite.rb

The purpose of this file is to supply the prerequisites for purely isolated model tests, without burdening the tests with excessive startup time. Notably, it eschews both the Rails environment and Bundler setup, which means that any needed gems must be explicitly required. It also means the tests must be run in an environment where the correct gem versions are available, e.g. in the context of an RVM gemset which has been initialized using bundle install.

Minitest is explicitly specified with gem, in order to get the newer gem version of Minitest instead of the one that comes with Ruby.

RR ("Double Ruby") is also required, for mocking and stubbing.

Finally, this is the file which defines stub_module and stub_class, which enable us to stub out references to other classes and modules without actually loading the code for them.

ENV['RAILS_ENV'] ||= 'test'
gem 'minitest' # demand gem version
require 'minitest/autorun'
require 'rr'
require 'ostruct'
$: << File.expand_path('../lib', File.dirname(__FILE__))
class MiniTest::Unit::TestCase
  include RR::Adapters::MiniTest
end
def stub_module(full_name, &block)
  stub_class_or_module(full_name, Module)
end
def stub_class(full_name, &block)
  stub_class_or_module(full_name, Class)
end
def stub_class_or_module(full_name, kind, &block)
  full_name.to_s.split(/::/).inject(Object) do |context, name|
    begin
      # Give autoloading an opportunity to work
      context.const_get(name)
    rescue NameError
      # Defer substitution of a stub module/class to the last possible
      # moment by overloading const_missing. We use a module here so
      # we can "stack" const_missing definitions for various
      # constants.
      mod = Module.new do
        define_method(:const_missing) do |missing_const_name|
          if missing_const_name.to_s == name.to_s
            value = kind.new
            const_set(name, value)
            value
          else
            super(missing_const_name)
          end
        end
      end
      context.extend(mod)
    end
  end
end

spec_helper_lite.rb

spec/spec_helper_nulldb.rb

This file builds on spec_helper_lite.rb and adds NullDB helpers for stubbing out database interactions. It was originally part of spec_helper_lite.rb, but I separated it out to its own file when I removed Bundler setup from spec_helper_lite.rb.

require "bundler/setup"
require_relative 'spec_helper_lite'
module SpecHelpers
  def setup_nulldb
    require 'nulldb'
    schema_path = File.expand_path('../db/schema.rb', File.dirname(__FILE__))
    NullDB.nullify(:schema => schema_path)
  end
  def teardown_nulldb
    NullDB.restore
  end
end

spec_helper_nulldb.rb

spec/spec_helper_full.rb

This file adds the full Rails environment to the setup already done in spec_helper_lite.rb. It also sets up DatabaseCleaner for integration tests.

require_relative 'spec_helper_lite'
require_relative '../config/environment.rb'
module SpecHelpers
  def setup_database
    DatabaseCleaner.strategy = :transaction
    DatabaseCleaner.clean_with(:truncation)
    DatabaseCleaner.start
  end
  def teardown_database
    DatabaseCleaner.clean
  end
end

spec_helper_full.rb
