Posts Tagged ‘Testing’

Double-Blind Test-Driven Development in Rails 3: Part 2

February 1, 2011

  1. Simple Tests
  2. Double-Blind Tests
  3. Making it Practical with RSpec Matchers

The last article in this series defined the concept of double-blind test-driven development, but didn’t get much into real-world examples. In this article, we’ll explore several such examples.

The Example Application

This article includes a sample app that you can download using the link above. Be sure to check out the tag "double_blind_tests" to see the code as it appears in this article; the next article will involve a lot of refactoring. I limited my samples to the model layer, where 100% coverage is a very realistic goal and where double-blind testing is likely to provide the greatest benefit.

I chose a simple high school scheduling app with teachers, the subjects they teach, students, and courses. In this case, I'm defining a course as a student's participation in a subject. Teachers teach (i.e., have) many subjects. Students take (have) many subjects, via courses. The course record contains that student's grade for the given subject.
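Spelled out as model code, those relationships look roughly like this (a sketch based on the description above; the actual sample app may differ in detail):

class Teacher < ActiveRecord::Base
  has_many :subjects
end

class Subject < ActiveRecord::Base
  belongs_to :teacher
  has_many :courses
  has_many :students, :through => :courses
end

class Student < ActiveRecord::Base
  has_many :courses
  has_many :subjects, :through => :courses
end

class Course < ActiveRecord::Base
  belongs_to :student
  belongs_to :subject
end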

The database constraints are intentionally strict, and most of the validations in the models ensure that these constraints are respected in the application layer. We don’t want the user seeing an error page because of bad data. Depending on the application, that can be worse than actually having bad data creep in.

Associations

Here’s an example of a has_many association:

# excerpt from spec/models/teacher_spec.rb

describe Teacher do
  it "has many subjects" do
    teacher = Factory.create :teacher
    teacher.subjects.should be_empty

    subject = teacher.subjects.create Factory.attributes_for(:subject)
    teacher.subjects.should include(subject)
  end
end

In order to factor out our own assumptions, we have to ask what they are. Here, the assumption is that adding a subject to the teacher's subject list works because of the has_many relationship. So we first test that teacher.subjects is, in fact, empty when we expect it to be. Then we're free to test that adding a subject works as we expect.

Here’s a belongs_to association:

# excerpt from spec/models/subject_spec.rb

describe Subject do
  it "belongs_to a teacher" do
    teacher = Factory.create :teacher

    subject = Subject.new
    subject.teacher.should be_nil
    
    subject.teacher = teacher
    subject.teacher.should == teacher
  end
end

Again, we're challenging the assumption that the association is nil by default, by testing it before verifying that we can add a teacher. This verifies that it's a true belongs_to association, and not simply an instance method. That's the kind of thing that can and will change over the life of an application.

Validations

Let’s test validates_presence_of:

# excerpt from spec/models/teacher_spec.rb

describe Teacher do
  describe "name" do
    it "is present" do
      error_message = "can't be blank"
      
      teacher = Teacher.new :name => 'Joe Example'
      teacher.valid?
      teacher.errors[:name].should_not include(error_message)

      teacher.name = nil
      teacher.should_not be_valid
      teacher.errors[:name].should include(error_message)

      teacher.name = ''
      teacher.should_not be_valid
      teacher.errors[:name].should include(error_message)
    end
  end
end

This example was actually explained in detail in the last article. Validate that the error doesn’t already exist before trying to trigger it. Don’t just test the default value when you create a blank object, test the likely possibilities. Refactor the error message to DRY up the test and add readability. And finally, test by modifying the object you already created (as little as possible) rather than creating a new object from scratch for each part of the test.
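For completeness, the model code that satisfies this test is just the standard presence validation:

# excerpt (assumed) from app/models/teacher.rb

validates_presence_of :name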

A more complex version is needed to validate the presence of an association:

# excerpt from spec/models/subject_spec.rb

describe Subject do
  describe "teacher" do
    it "is present" do
      error_message = "can't be blank"

      teacher = Factory.create(:teacher)
      subject = Factory.create(:subject, :teacher => teacher)
      subject.valid?
      subject.errors[:teacher].should_not include(error_message)
    
      subject.teacher = nil
      subject.should_not be_valid
      subject.errors[:teacher].should include(error_message)
    end
  end
end

While the test is more complex, the code to satisfy it is not:

# excerpt from app/models/subject.rb

validates_presence_of :teacher

Next, let's test validates_length_of:

# excerpt from spec/models/teacher_spec.rb

describe Teacher do
  describe "name" do
    it "is at most 50 characters" do
      error_message = "must be 50 characters or less"
      
      teacher = Teacher.new :name => 'x' * 50
      teacher.valid?
      teacher.errors[:name].should_not include(error_message)
      
      teacher.name += 'x'
      teacher.should_not be_valid
      teacher.errors[:name].should include(error_message)
    end
  end
end

And here’s the model code that satisfies the test:

# excerpt from app/models/teacher.rb

validates_length_of :name, :maximum => 50, :message => "must be 50 characters or less"

While you can definitely start to see a pattern in validation testing, this introduces a new element. Instead of freshly setting the name attribute to be 51 characters long, we test the valid edge case first and then add *just* enough to make it invalid – one more character.

This does two things: it verifies that our edge case was as “edgy” as it could be, and it makes our test less brittle. If we wanted to change the test to allow up to 100 characters, we’d only have to modify the test name and the initial set value.

Now let's validate a number's range using validates_numericality_of:

# excerpt from spec/models/teacher_spec.rb

describe Teacher do
  describe "salary" do
    it "is at or above $20K" do
      error_message = "must be between $20K and $100K"
      
      teacher = Teacher.new :salary => 20_000
      teacher.valid?
      teacher.errors[:salary].should_not include(error_message)

      teacher.salary -= 0.01
      teacher.should_not be_valid
      teacher.errors[:salary].should include(error_message)
    end

    it "is no more than $100K" do
      error_message = "must be between $20K and $100K"

      teacher = Teacher.new :salary => 100_000
      teacher.valid?
      teacher.errors[:salary].should_not include(error_message)
      
      teacher.salary += 0.01
      teacher.should_not be_valid
      teacher.errors[:salary].should include(error_message)
    end
  end
end

And here’s the code that satisfies the test:

# excerpt from app/models/teacher.rb

validates_numericality_of :salary, :message => "must be between $20K and $100K",
  :greater_than_or_equal_to => 20_000, :less_than_or_equal_to => 100_000

We’re doing the same here as in our testing of name’s length. We’re setting the edge value that’s *just* within the allowed range, then adding or subtracting a penny to make it invalid. I split up the top and bottom edge tests, because it’s better to test as atomically as possible – one limit per test.

Defaults

Another tricky database constraint to test for is a default value:

# excerpt from spec/models/course_spec.rb

describe Course do
  describe "grade_percentage" do
    it "defaults to 1.0" do
      course = Course.new :grade_percentage => nil
      course.grade_percentage.should be_nil
      
      course = Course.new :grade_percentage => ''
      course.grade_percentage.should be_blank
      
      course = Course.new :grade_percentage => 0.95
      course.grade_percentage.should == 0.95
      
      course = Course.new
      course.grade_percentage.should == 1.0
    end
  end
end

In this case, we can't avoid recreating the model from scratch, because of the nature of the implementation. There's no actual code in the model that makes this happen; it's purely in the database schema. Why should we test it, then? Because we test any behavior we're going to rely on in the application. The fact that this model behavior is implemented at the database level (and therefore isn't purely TDD) is a small inconvenience.
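For context, that default would live in a migration along these lines (a sketch only; the column types and options are assumptions, not the sample app's actual schema):

class CreateCourses < ActiveRecord::Migration
  def self.up
    create_table :courses do |t|
      t.references :student, :null => false
      t.references :subject, :null => false
      t.decimal :grade_percentage, :default => 1.0

      t.timestamps
    end
  end

  def self.down
    drop_table :courses
  end
end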

What's the assumption our double-blind test is verifying in this case? That the value is only set in the absence of other values being explicitly assigned. Testing with nil and blank values verifies that the default doesn't override them – it only kicks in when nothing is assigned at all. I also test an arbitrary (but valid) value as the anti-assumption test before finally verifying that the default is set to the correct value.

Most default tests verify only that the correct default value is set – the double-blind version verifies that it’s acting only as a default value in all cases.

Summary

The point of double-blind testing is bullet-proof tests that can't reasonably be thwarted by antagonistic coding – whether that comes from your anti-social pairing partner, or from yourself several months down the road. The bottom line is this: test all assumptions.

That being said, this is very time consuming, and we can see a ton of repetition even in this small test suite. What we need is a way to get back to speedy testing before our boss/client notices it now takes an hour to implement one validation.*

*Even if you work for a government owned/regulated institution that actually digs that kind of non-agile perversion, you WILL eventually go insane. Even in this small sample app, the voices in my head had to talk me off a building ledge twice.

The answer lies in RSpec matchers, which are easy to implement, and can grow with your application. The benefit is not just speedier development – it’s also consistency across your application. We’ll explore that in the last article of this series.
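As a rough preview, a custom matcher along these lines (the matcher name and details here are hypothetical; the real ones come in part 3) can wrap the whole double-blind presence pattern:

# spec/support/validation_matchers.rb (hypothetical sketch)

RSpec::Matchers.define :require_presence_of do |attribute|
  match do |model|
    error_message = "can't be blank"

    # no error while the attribute is set
    model.valid?
    clean_before = !model.errors[attribute].include?(error_message)

    # error appears once the attribute is removed
    model.send("#{attribute}=", nil)
    model.valid?
    errors_after = model.errors[attribute].include?(error_message)

    clean_before && errors_after
  end
end

With something like that in place, the whole presence spec collapses to a one-liner such as Teacher.new(:name => 'Joe Example').should require_presence_of(:name).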

Double-Blind Test-Driven Development in Rails 3: Part 1

January 31, 2011

This is a three-part series introducing the concept of double-blind test-driven development in Rails. This post defines the concept itself, and lays the groundwork by showing the way tests are more commonly written. The next couple posts will show how to double-blind test various common rails elements, and how to make this added layer of protection automatic and quick.

  1. Simple Tests
  2. Double-Blind Tests
  3. Making it Practical with RSpec Matchers

Looking at a rails application that was built with test-driven development, you might expect to see something like this:

# spec/models/teacher_spec.rb

describe Teacher do
  it "has many subjects" do
    teacher = Factory.create :teacher
    subject = teacher.subjects.create Factory.attributes_for(:subject)

    teacher.subjects.should include(subject)
  end
  
  describe "name" do
    it "is present" do
      teacher = Teacher.new

      teacher.should_not be_valid
      teacher.errors[:name].should include("can't be blank")
    end
    
    it "is at most 50 characters" do
      teacher = Teacher.new :name => 'x' * 51
      
      teacher.should_not be_valid
      teacher.errors[:name].should include("must be 50 characters or less")
    end
  end
end

Truth be told, if you're seeing this in the wild, the app is probably doing pretty well. This level of testing works great during the early stages of an app, when things are simple. But as things grow and/or multiple developers become involved, you need more.

Consider models where the associations and validations stretch into the dozens of lines. The more careful and specific you are about validations, the easier it is to get conflicting or overlapping validations. I actually came up with the concept of double-blind testing while retro-testing models in a client app that previously had no validation specs.

What is Double-Blind Testing?

In the world of scientific studies, you always need a control group. One set of participants gets the latest and greatest new diet pill, while the other gets a placebo. Researchers used to think this was good enough (and it was probably pretty funny to watch the placebo users rave about their shrinking waistlines). But it turns out studies like this still allowed some bias – as researchers observed the effects, their *own* preconceived notions tainted the results. Enter the double-blind study.

In a double-blind study, the researchers themselves are unaware of which participants are in the control group, and which are being tested. Both sides are “blind”. They may have lost funny patient anecdotes, but they gained research reliability.

Applying the Lessons of Double-Blind Studies to Test-Driven Development

As I said, in the early stages of an app the tests I showed above work great, as long as you’re using TDD and the red-green-refactor cycle. This means you write the test, run it, and it fails. Then you write the simplest code that will make the test pass, run the test again, and confirm that it passes. Most testing tools will literally show red or green as you do this. Then, as you start to amass tests, you’re free to refactor your code (abstracting common code into helper methods, changing for readability, etc) and run the tests again at any time. You will see failures if you broke anything. If not, you’ve more or less guaranteed your code refactoring works properly.

The problem comes in when you start changing old code, or adding tests to code that wasn't test-driven in the first place. What I'm calling double-blind testing is this:

each test needs to verify the object’s behavior before testing what changes.

As an example, let’s rewrite one of the tests from above:

# original test

describe "name" do
  it "is present" do
    teacher = Teacher.new

    teacher.should_not be_valid
    teacher.errors[:name].should include("can't be blank")
  end
end

# modified to be double-blind

describe "name" do
  it "is present" do
    error_message = "can't be blank"

    teacher = Teacher.new :name => 'Joe Example'
    teacher.valid?
    teacher.errors[:name].should_not include(error_message)

    teacher.name = nil
    teacher.should_not be_valid
    teacher.errors[:name].should include(error_message)

    teacher.name = ""
    teacher.should_not be_valid
    teacher.errors[:name].should include(error_message)
  end
end

This is the basic pattern for all double-blind testing. We’re not leaving anything to chance. In the original version, we expected our object to be invalid, we treated it as such, and we got the result we expected. Do you see the problem with this?

Here’s an exercise: can you make the original test pass, even though the object validation is not working correctly? There’s actually a style of pair programming that routinely does exactly this. One developer writes the test, and the other writes just enough code to make it pass, with the good-natured intention of tripping up the first developer whenever possible. If you wrote the original test, I could satisfy it by just adding the error message to every record on validation, regardless of whether it’s true! Your test would pass, but the app would fail.
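For illustration, here's the kind of antagonistic model code that would do it (hypothetical, obviously; don't ship this):

# passes the original spec while breaking the app: every Teacher gets
# the "can't be blank" error on name, whether it's blank or not
class Teacher < ActiveRecord::Base
  validate :always_blame_the_name

  private

  def always_blame_the_name
    errors.add(:name, "can't be blank")
  end
end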

The test is now "double-blind" in the sense that we as testers have factored out our own expectations from the test. In the original version, we simply assumed the error message wouldn't be there until we set the object up a certain way – and that kind of unchecked assumption can be bad. It may sound far-fetched or paranoid*, but in large codebases your original tests are often abused in this very way. The "you" that writes new code today is often at odds with the "you" from three months ago who wrote the older code with a different understanding of the problem at hand.

*Plus, everybody knows it’s not paranoia when the world really is out to get you. I’ve discussed this at length with the voices in my head, and they all agree. Except Javier. That guy’s a jerk.

Now that I’ve laid out the justification, let’s take a closer look at how the test changed. The first thing I did was create a version of the object that I believe should NOT trigger the error message. Then I run through two cases that should. You can see right away, I was forced to be more *specific* about what should trigger an error. Instead of just a blank object with no values set, I’ve proactively set the attribute in question to both nil and blank. A key element here is to try to work with the *same* object, modifying between tests, rather than creating a new object each time. My test wouldn’t have been as specific if I’d just recreated a blank Teacher object and run a single validation check.

Also, with the increased code comes an increased chance of typos. We don't want to DRY test code up too much, because a good rule is to keep your tests as readable (non-abstract) as possible. But I've specified the error message at the top of the test, and reused that string over and over. I did this in a way that DRYs up the code and adds readability. You can see at a glance that all three checks are looking for the same error.

Finally, notice that the first time I run the object's validation, I'm not asserting that it should be valid. If I had written teacher.should be_valid at that first check in the double-blind test, I'd have to take the extra time to make sure every other part of the object was valid. Not only is this time-consuming, it's very brittle: any future validation would break this test.

If you use factories often, you may suggest setting it up that way, since a factory-generated object should always be valid. Then you could assert validity. However, this only slows down your test suite. It's enough just to run valid? on the object, which triggers all the validation checks and loads up our errors hash.

Summary

I believe this is a new concept – I was already coding most of my tests this way, but it didn’t dawn on me how valuable it was until I started retro-testing previously testless code. The value showed itself right away.

I would love to hear feedback on this – if you think it’s unnecessary (I tend to be very rainman-ish about my testing code) or even detrimental. However, if you think it’s too much work, I ask you to hold your criticism until you’ve read part 3 of this article, where I show how to use your own RSpec matchers to greatly speed this process.

My RailsConf Takeaway – Kata and Code Retreats

June 16, 2010

I attended RailsConf last week, and it was amazing. I met a lot of amazing people (including a handful of personal geek heroes) and became truly fired up about my personal craft of programming. Serendipity played a big part, as a series of events lined up to help me form my new strategy for being a better developer.

On the plane ride out, I read the first half of Malcolm Gladwell’s Outliers. Chapter 2 had me spellbound. The title: The 10,000 Hour Rule. In study after study, the author showed that to truly master a craft, only a nominal amount of intelligence and “natural” talent are required. For instance, any IQ above 120 should be more than capable. After that basic filter, what separates the masters from the rest is *practice*. About 10,000 hours, by most studies’ estimates. That could be practicing baseball, music (not just performing, but practicing to intentionally get better), or chess. You get the idea.

Fast forward to the Ignite RailsConf event Sunday night, where 14 speakers had 5 minutes each to give a lightning presentation, with no overtime. Three of those speakers mentioned the 10,000 hour rule. Later in the week, Yehuda Katz gave a keynote where he said he’d only been programming since 2004! Just six years, about half my professional career, and he’d mastered Ruby and Rails to the level of being a core member of the Rails team. How? Finding “impossible” challenges, and doing them. This is, of course, the essence of practice – making a concerted effort to complete tasks that were previously out of your ability.

But it gets better. BohConf (the official unconf of RailsConf) hosted a mini code retreat Tuesday afternoon. Code retreats, as defined by Corey Haines, blend Dave Thomas’ concept of Code Kata with pair programming and repetition. You pair with someone, and work to solve a common programming problem, most typically Conway’s Game of Life. You have about 30 minutes to write a purely test-driven solution. Then you reflect on what you were able to get done, delete the code, find a new coding partner, and start all over! This is practice and feedback combined!

Over the week, the idea rolled around my head. And during my extended 12-hour plane trip home. And all weekend. I need to be practicing, in order to master my craft. I need to create my own collection of code kata – “fundamentals” I can practice over and over until they become second nature. I also want to pair program more, and perform my kata in public where I can be open to feedback.

I have a plan for this, which I’ve already put in motion. Stay tuned :)

UPDATE: I have posted my first (official) kata at vimeo. Read my blog post about it.

Route Testing with Shoulda

January 27, 2010

I received a comment on a recent article from someone who saw the default routes in my config/routes.rb example and suggested I remove them. Yes! This makes total sense from several angles.

Between RESTful routes and named routes, there’s no good reason not to explicitly name all of your routes. Also, you don’t get the goodness of say, “pets_path” or “pet_path(@pet)” if you just let the default routes handle PetsController for you. Finally, if you’re doing TDD (test-driven development) then why aren’t you testing your routes as well?
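In Rails 2 terms, a single explicit declaration buys you those helpers for free (using the pets example from above):

map.resources :pets   # gives you pets_path, pet_path(@pet), new_pet_path, and friends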

Route testing is easy, and the tests themselves run super fast. One app of mine has almost 600 routing assertions that run in just 2 seconds. It’s fun to watch the dots for your routing tests zoom across the screen! Let’s start by adding a test file to our system with the standard boilerplate:

# test/unit/routing_test.rb
require 'test_helper'

class RoutingTest < ActionController::TestCase

end

We’ll start with some TestUnit basics before we get into how Shoulda can make life easier. There are three assertions to learn:

def test_generates_user_index
  assert_generates '/users', :controller => 'users', :action => 'index'
end

def test_recognizes_user_index
  assert_recognizes({:controller => 'users', :action => 'index'}, '/users')
end

def test_routes_user_index
  assert_routing '/users', :controller => 'users', :action => 'index'
end

The first assertion, assert_generates, verifies that if url generators like url_for are passed this hash of controller, action, and whatever else, the correct route (/users in this case) is generated. The next assertion, assert_recognizes, does just the opposite, ensuring that a route of /users ends up calling the index action of the users controller. The third version (assert_routing) does both! It combines the two previous assertions into one, and this is what you'll want most of the time.

Using Shoulda to DRY up your tests

Here are the above tests, in Shoulda form:

require 'test_helper'

class RoutingTest < ActionController::TestCase
  context "routing users" do
    should "generate /users" do
      assert_generates '/users', :controller => 'users', :action => 'index'
    end

    should "recognize users" do
      assert_recognizes({:controller => 'users', :action => 'index'}, '/users')
    end
  
    should "generate and recognize users" do
      assert_routing '/users', :controller => 'users', :action => 'index'
    end
  end
end

While these tests are simple, they’re not very DRY. Just imagine, you’ll need to have seven of these tests *just* for the most basic RESTful route. Instead, install my shoulda routing macros:

script/plugin install git@github.com:bellmyer/shoulda_routing_macros.git

Now here’s an example routing file:

# config/routes.rb
ActionController::Routing::Routes.draw do |map|
  # a simple resource
  map.resources :chickens

  # a resource with extra actions
  map.resources :users, :collection => {:thankyou => :get}, :member => {:profile => :get}

  # nested resources
  map.resources :owners do |owners|
    owners.resources :pets
  end

  # singleton resource
  map.resource :session
  map.ltp_contact_link '/ltp/:code', :controller => 'ltp_contacts', :action => 'new'
end

And this is how easy they are to test:

# test/unit/routing_test.rb
require 'test_helper'

class RoutingTest < ActionController::TestCase
  # simple
  should_map_resources :chickens

  # with extra actions
  should_map_resources :users, :collection => {:thankyou => :get}, :member => {:profile => :get}

  # nested
  should_map_resources :owners
  should_map_nested_resources :owners, :pets

  # singleton
  should_map_resource :session
end

There’s no longer a good excuse to use default routes, or not test your routes in detail. Enjoy!

Restful Controller Tests with Shoulda – Stubbing

January 20, 2010

View the Source Code

This is part 2 of a 5 part series on restful controller tests, using Shoulda as the foundation. Here are all of them so far:

  1. The Basics
  2. Stubbing for Speed

Stubbing

My initial set of controller tests makes a great foundation. Those tests lay out exactly how to take advantage of Shoulda to create tests for three different roles of user: anonymous visitor, member, and admin. Now we'll use the Mocha Ruby gem to speed up our tests by eliminating database calls.

Here’s my (amateurish) video of the changes, followed by a more detailed explanation:

We’ll start by ensuring that rails will require the gem. Add this to config/environments/test.rb:

config.gem 'mocha'

Then on the commandline:

rake gems:install RAILS_ENV=test

Mocha allows us to stub (or “fake”) methods on an object, and track how often they’re called during a test. For example, if you stub an object’s save method in a create action, you can test what happens when saving succeeds (returns true) or fails (returns false), with no messy database interaction.

While I won’t get into a primer on Mocha itself, I will say that almost every database interaction can be removed from functional tests, with the exception of user authentication. I usually leave that in, and that’s my one and only use for fixtures. Fixtures are fast, but cumbersome if you begin adding multiple records for every model in your application. I use factories instead, but they’re not as fast for loading the database. Factories are another chapter in the Functional Tests saga, however.

That said, let’s get on to reviewing the stubbed versions of my tests. Starting with the admin context, our setup changes from this:

    setup do
      @valid = Factory.build(:setting).attributes
      @setting = Factory :setting
      login_as :admin
    end

to this:

    setup do
      @setting = Factory.build :setting
      @setting.id = 1001

      Setting.stubs(:find).returns(@setting)
      Setting.stubs(:find).with(:all, anything).returns([@setting])
      
      login_as :admin
    end

Instead of creating a new setting, we're using our factory to build an unsaved one. We're giving it an id, since saving would normally have done this. Finally, we're stubbing out the find method, both for an individual setting and for all settings, so they return the one we've built. Logging in as admin will be our only database hit.

Now for the actions. The index, show, and new action tests stay exactly the same, except the first two are no longer hitting the database – we’ve stubbed out the find method to automatically return our fake setting. On to the create method, in the “with valid data” context. Here’s the original:

context "with valid data" do
  setup do
    post :create, :setting => @valid
  end
        
  should_assign_to :setting, :class => Setting
  should_redirect_to("setting page"){setting_path(assigns(:setting))}
  should_set_the_flash_to "Setting was successfully created."
        
  should "create the record" do
    assert Setting.find_by_name(@valid['name'])
  end
end

And here’s the stubbed version:

context "with valid data" do
  setup do
    Setting.any_instance.expects(:save).returns(true).once
    Setting.any_instance.stubs(:id).returns(1001)
          
    post :create, :setting => {}
  end

  should_assign_to :setting, :class => Setting
  should_redirect_to("setting page"){setting_path(1001)}
  should_set_the_flash_to "Setting was successfully created."
end

We’ve stubbed any instance of Setting to return true upon save, without actually saving. No database hit, and we still get to test everything. We’ve even dropped our “create the record” test, and stopped passing in valid data, because the expectation we set in the setup handles this. To be more paranoid, we could pass in valid data and check that those params end up in the Setting.new call, but I think that’s overkill since it’s such a basic step.
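If you did want that extra bit of paranoia, the setup might look something like this. This is a sketch only: params arrive with string keys and values, so matching the entire hash exactly tends to need massaging, and the 'name' attribute is assumed to exist in the Setting factory.

setup do
  @valid = Factory.attributes_for :setting

  # has_entry is a Mocha parameter matcher; match on one known attribute
  # rather than the whole (stringified) params hash
  Setting.expects(:new).with(has_entry('name' => @valid[:name])).returns(@setting)
  @setting.expects(:save).returns(true).once
  @setting.stubs(:id).returns(1001)

  post :create, :setting => @valid
end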

Next, the “with invalid data” context. First the original:

context "with invalid data" do
  setup do
    post :create, :setting => {}
  end
  
  should_assign_to :setting, :class => Setting
  should_respond_with :success
  should_render_with_layout :settings
  should_render_template :new
  should_not_set_the_flash
end

And the new version:

context "with invalid data" do
  setup do
    Setting.any_instance.expects(:save).returns(false).once
    post :create, :setting => {}
  end
  
  should_assign_to :setting, :class => Setting
  should_respond_with :success
  should_render_with_layout :settings
  should_render_template :new
  should_not_set_the_flash
end

The only changes here are that we’re setting a stubbed expectation of a call to save, forcing a return value of false, and again we don’t need to pass in valid data. Now we can run all the same tests, because we’ve forced the code down the failure branch.

Our edit tests stay the same, with the exception again of no database hit thanks to our stubbed finders. Next up is the update action, and we’ll start with the valid data context. Here’s the original:

context "with valid data" do
  setup do
    put :update, :id => @setting.id, :setting => {:name => 'Bob'}
  end
  
  should_assign_to(:setting){@setting}
  should_redirect_to("setting page"){setting_path(assigns(:setting))}
  should_set_the_flash_to "Setting was successfully updated."
  
  should "update the record" do
    @setting.reload
    assert_equal 'Bob', @setting.name
  end
end

And here’s the new version:

context "with valid data" do
  setup do
    @setting.expects(:update_attributes).returns(true).once
    put :update, :id => @setting.id, :setting => {}
  end
  
  should_assign_to(:setting){@setting}
  should_redirect_to("setting page"){setting_path(assigns(:setting))}
  should_set_the_flash_to "Setting was successfully updated."
end

Much like a successful create, we stub out the update to succeed and run all the same tests, except the check that the record actually changed. Now the invalid data context. Here's the original:

context "with invalid data" do
  setup do
    put :update, :id => @setting.id, :setting => {:name => nil}
  end
  
  should_assign_to :setting, :class => Setting
  should_respond_with :success
  should_render_with_layout :settings
  should_render_template :edit
  should_not_set_the_flash
end

And the new version:

context "with invalid data" do
  setup do
    @setting.expects(:update_attributes).returns(false).once
    put :update, :id => @setting.id, :setting => {}
  end
  
  should_assign_to :setting, :class => Setting
  should_respond_with :success
  should_render_with_layout :settings
  should_render_template :edit
  should_not_set_the_flash
end

And much like our unsuccessful create, we stub out the update to return false and run the same tests! The final action that changes is destroy, and it's very easy. First the original:

context "destroying" do
  setup do
    delete :destroy, :id => @setting.id
  end
      
  should_assign_to(:setting){@setting}
  should_redirect_to("index"){settings_path}
  should_not_set_the_flash
      
  should "delete the record" do
    assert !Setting.find_by_id(@setting.id)
  end
end

And the new version:

context "destroying" do
  setup do
    @setting.expects(:destroy).once
    delete :destroy, :id => @setting.id
  end
    
  should_assign_to(:setting){@setting}
  should_redirect_to("index"){settings_path}
  should_not_set_the_flash
end

This time we're only stubbing out the call to destroy, expecting it to happen once. All the tests stay the same, except we no longer test that a record has actually been removed. No record was ever in the database, so we can't. Plus, a failing call to destroy is so rare that it doesn't even let you know via the return value.

The admin context is the only one that changes with these upgrades; members and visitors never get this far in our examples. This will generally make your functional tests over twice as fast, which is huge when you're faced with mounting test times. I believe I was able to cut testing time from 90 seconds to just under 30 by implementing stubbing across an application's test suite.

If you view the source code for this part of the series, you’ll see a lot of repetition in the tests. We’ll DRY (Don’t Repeat Yourself) that code in the next chapter in the series.

Shoulda Macro for a Cleaner Uniqueness Test

January 19, 2010

Shoulda macros are so neat and tidy, aren’t they? I love kicking off my unit tests with quick-and-deadly validation and association tests. Here’s an example:

require 'test_helper'

class CouponTest < ActiveSupport::TestCase
  should_validate_presence_of :code, :name, :description

  context "validating uniqueness" do
    setup do
      Factory :coupon
    end

    should_validate_uniqueness_of :code
  end

  should_belong_to :user
end

Do you see anything wrong with this picture? should_validate_uniqueness_of necessarily requires that you already have a record in the database. As much as I hate database interaction in my tests, it's a necessary evil here. The macro works by copying the attributes of an existing record and validating the copy to see if you get any uniqueness errors. So, for this one test, I have to set up a context with a Factory call, because I'm not about to create a record for all the other tests that don't need it.
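Conceptually (this is not Shoulda's actual implementation, just the gist), the check amounts to something like this:

existing  = Coupon.first
duplicate = Coupon.new(:code => existing.code)

duplicate.valid?
assert duplicate.errors.on(:code)   # the uniqueness error should show up here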

Wouldn't it be nice if you could call should_validate_uniqueness_of and it would create the record for you, if one didn't already exist? It could use the factory you've already set up. And it would still work the old way if you don't have factories, or don't enjoy DRY coding.

Add this macro to your application:

# test/shoulda_macros/validation_macros.rb
module Test
  module Unit
    class TestCase
      class << self
        alias_method :svuo_original, :should_validate_uniqueness_of
        
        def should_validate_uniqueness_of(*attributes)
          class_name = self.name.gsub(/Test$/, '')
          klass = class_name.constantize
          model_sym = class_name.underscore.to_sym
          context "with a record in the database" do
            setup do
              Factory model_sym unless klass.count > 0
            end
            
            svuo_original *attributes
          end
        end
      end
    end
  end
end

Here, we're overriding the default Shoulda macro. Before we call the original, we set up a context and create the record using our factory, unless a record already exists. That means it will always "just work", with the exact same syntax and all of the options of the original should_validate_uniqueness_of.

Thoughtbot can’t create the macro like this by default, because it assumes you have factory_girl installed. But if you do use factories, this upgrade will take some of the ugly out of your tests:

require 'test_helper'

class CouponTest < ActiveSupport::TestCase
  should_validate_presence_of :code, :name, :description
  should_validate_uniqueness_of :code

  should_belong_to :user
end

Nice, eh? As usual, all you need to do to install/create a new shoulda macro is drop a ruby file into the test/shoulda_macros folder, with macro methods defined. You don’t even need to reopen TestCase the way I did, unless you’re overriding an existing macro.

Restful Controller Tests with Shoulda

December 30, 2009

View the Source Code

This is part 1 of a 5 part series on restful controller tests, using Shoulda as the foundation. Here are all of them so far:

  1. The Basics
  2. Stubbing for Speed

After porting eight MVC stacks from an older application to its newer home, and revamping all the tests along the way, I developed a great rhythm in cranking out functional tests for restful resources. I took the time to use the best that modern shoulda has to offer, and came up with (what I feel) is a great set of functional tests.

First, each controller action gets its own context, with several tests executed in each. But wait, there’s more! Do you have multiple roles? I usually have visitor (not logged in), member (logged in, normal user) and admin (logged in, superuser). That means triple the testing, because I want to confirm that the application is behaving the correct way under each circumstance, and the behavior is often very different.

Next, a little background. For authentication this app is using restful_authentication and role_requirement. My basic testing suite consists of shoulda, factory_girl, and woulda. I’m also using autotest, mocha, test_benchmarker, and redgreen to speed up my tests and development cycle, but I’ll cover those in a different article. This is about the basics.

Let’s begin (finally) with a restful controller for site settings – admins have all access, but members and visitors have none. This will allow me to demonstrate how I handle testing each role, and it’s easy to grant more privileges to a role by copying over the tests from say, admin to member.

Admin Tests

I find TDD is easier when I start with the least restrictive role and work backward, adding restrictions to my controller as I go along. That said, here are my admin tests. I first set up the admin context like so:

  context "as admin" do
    setup do
      @setting = Factory.create(:setting)
      @valid = Factory.build(:setting).attributes
      login_as :admin
    end

    ...
  end

I’ve created a basic setting (using factories), and a hash of valid attributes I can use for create and update actions. I’ve also logged in as admin. If you don’t understand contexts and setup, a context is a group of tests, and the setup for that context is done before every test within. Contexts can also be nested – and they will be.

Index

  context "getting index" do
    setup do
      get :index
    end
      
    should_assign_to(:settings){[@setting]}
    should_respond_with :success
    should_render_with_layout
    should_render_template :index
    should_not_set_the_flash
  end

Using shoulda’s awesome macros, I setup a simple get :index which will run before each shoulda macro, then go nuts testing that @settings is assigned an array with my one and only existing setting. The page responds with success, it renders the default layout with the index template, and doesn’t set the flash. These five tests are what you should think about for every action, under every role. This is the foundation, and you add stuff from here as you add more complicated functionality to the basic scaffold.

New

  context "getting new" do
    setup do
      get :new
    end
      
    should_assign_to :setting, :class => Setting
    should_respond_with :success
    should_render_with_layout
    should_render_template :new
    should_not_set_the_flash
  end

This is very similar to index above. Now, I'm testing that @setting (singular) is set, with a single object of class Setting. This is as specific as I can test right now, because the object was newly created in the action – so I can't test that a specific setting was assigned. The rest is nearly identical to the index action, except for the template I expect to be rendered.

Create

  context "posting create" do
    context "with valid data" do
      setup do
        post :create, :setting => @valid.merge('name' => 'Slappy')
      end
        
      should_assign_to :setting, :class => Setting
      should_redirect_to("index page"){settings_path}
      should_set_the_flash_to "Your setting was successfully saved."

      should "create the record" do
        assert Setting.find_by_name('Slappy')
      end
    end
      
    context "without valid data" do
      setup do
        post :create, :setting => {}
      end
        
      should_assign_to :setting, :class => Setting
      should_respond_with :success
      should_render_with_layout
      should_render_template :new
      should_not_set_the_flash
    end
  end

Now we’re getting a little more complex. We have two outcomes to think about: a successful create, and a failed one. We have to react differently, so we have to test for both.

The first sub-context tests for success. We’re passing in a valid hash of attributes. Notice our tests look different this time – and easier on the fingers. Instead of 5 basic tests, we have 3. We still check that a Setting object was created/assigned, but instead of checking response codes and rendered content, we’re expecting the action to redirect to the index page. And this time, we are expecting a flash message to be set. Finally, I’m checking that the record was actually created. I forced the name attribute to be ‘Slappy’ because no matter how my test suite changes in the future, that name is unlikely to already be in the database.

The next sub-context tests for failure – most likely from invalid input on the part of the user. It happens. We’ve passed an empty attribute list to the action, which should ensure failure if we have at least one required attribute with validates_presence_of in the model. In my next article I’ll cover mocks and stubs for a cleaner way. Our tests are back to the familiar – a setting object should be assigned, and we should expect an HTTP response of success. Our create wasn’t successful, but that’s not what we’re testing here. We’re testing the HTTP response, and success indicates there wasn’t an error at the HTTP level. Next, we’re rendering the new template again, for the user’s second try. We’re not setting the flash though, because you’re probably going to list off the individual errors anyway.

Edit

  context "getting edit" do
    setup do
      get :edit, :id => @setting.id
    end
    
    should_assign_to(:setting){@setting}
    should_respond_with :success
    should_render_with_layout
    should_render_template :edit
    should_not_set_the_flash
  end

This is back to the short-and-sweet familiar. This is almost like our new action, with a couple exceptions. First, we can now specify (by passing a block to the should_assign_to macro) the specific setting we’re expecting, because we specified it in our request. Also, we’re expecting the edit template to be rendered.

Update

  context "putting update" do
    context "with valid data" do
      setup do
        put :update, :id => @setting.id, :setting => {:name => 'Slappy'}
      end
      
      should_assign_to(:setting){@setting}
      should_redirect_to("index page"){settings_path}
      should_set_the_flash_to 'Your knowledge base was successfully updated.'

      should "update the record" do
        assert Setting.find_by_name('Slappy')
      end
    end
    
    context "with invalid data" do
      setup do
        put :update, :id => @setting.id, :setting => {:name => ""}
      end
      
      should_assign_to(:setting){@setting}
      should_respond_with :success
      should_render_with_layout
      should_render_template :edit
      should_not_set_the_flash
    end
  end

You can see we’re doing the same things here that we did with our create action. When calling update however, we can pass only the attributes we want to change. I check that it *has* changed, too.

We have to pass a hash with a piece of bad data to get a failure, and in this case just assume that name is an attribute that can’t be blank.

Destroy

  context "deleting" do
    setup do
      delete :destroy, :id => @setting.id
    end
    
    should_assign_to(:setting){@setting}
    should_redirect_to("index page"){settings_path}
    should_set_the_flash_to "Your knowledge base was successfully deleted."
    
    should "delete the record" do
      assert !Setting.find_by_id(@setting.id)
    end
  end

This is pretty self-explanatory. We’re assigning, redirecting, and spitting out a nice flash message. We’re also doing a hard check to make sure the setting object is gone.

Members

From here, the tests get a lot easier to absorb. I’ll list the whole member context here:

context "as a member" do
  setup do
    login_as :quentin
  end
  
  context "attempting to get index" do
    setup do
      get :index
    end
    
    should_not_assign_to :settings
    should_respond_with 401
    should_render_with_layout false
    should_not_set_the_flash
  end

  context "attempting to get new" do
    setup do
      get :new
    end
    
    should_not_assign_to :setting
    should_respond_with 401
    should_render_with_layout false
    should_not_set_the_flash
  end

  context "attempting to create" do
    setup do
      post :create, :setting => {}
    end
    
    should_not_assign_to :setting
    should_respond_with 401
    should_render_with_layout false
    should_not_set_the_flash
  end

  context "attempting to get edit" do
    setup do
      get :edit, :id => 1
    end
    
    should_not_assign_to :setting
    should_respond_with 401
    should_render_with_layout false
    should_not_set_the_flash
  end

  context "attempting to update" do
    setup do
      put :update, :id => 1, :setting => {}
    end
    
    should_not_assign_to :setting
    should_respond_with 401
    should_render_with_layout false
    should_not_set_the_flash
  end

  context "attempting to delete" do
    setup do
      delete :destroy, :id => 1
    end
    
    should_not_assign_to :setting
    should_respond_with 401
    should_render_with_layout false
    should_not_set_the_flash
  end
end

The only setup I need is to login as a regular user, in this case the default “quentin” that is created by restful_authentication. Notice every action has basically the same four tests: nothing is assigned, a 401 (unauthorized) HTTP error is returned, no layout is rendered, and no flash is set. In my action calls, I can use fake id numbers and empty attribute hashes, because they’ll never be checked anyway. So, no need to create an actual setting object.

By default, role_requirement shows a blank page, but I changed that to display a special 401.html file I created. This is a person who is already logged in, so we know who they are (authentication), but they don't have permission to be where they are (authorization).

Visitors

context "as a visitor" do
  context "attempting to get index" do
    setup do
      get :index
    end
    
    should_not_assign_to :settings
    should_redirect_to("login"){new_session_path}
    should_set_the_flash_to "You need to login to do this."
  end

  context "attempting to get new" do
    setup do
      get :new
    end
    
    should_not_assign_to :setting
    should_redirect_to("login"){new_session_path}
    should_set_the_flash_to "You need to login to do this."
  end

  context "attempting to create" do
    setup do
      post :create, :setting => {}
    end
    
    should_not_assign_to :setting
    should_redirect_to("login"){new_session_path}
    should_set_the_flash_to "You need to login to do this."
  end

  context "attempting to get edit" do
    setup do
      get :edit, :id => 1
    end
    
    should_not_assign_to :setting
    should_redirect_to("login"){new_session_path}
    should_set_the_flash_to "You need to login to do this."
  end

  context "attempting to update" do
    setup do
      put :update, :id => 1, :setting => {}
    end
    
    should_not_assign_to :setting
    should_redirect_to("login"){new_session_path}
    should_set_the_flash_to "You need to login to do this."
  end

  context "attempting to delete" do
    setup do
      delete :destroy, :id => 1
    end
    
    should_not_assign_to :setting
    should_redirect_to("login"){new_session_path}
    should_set_the_flash_to "You need to login to do this."
  end
end

Visitors are different from members without access. We want to be nicer and assume visitors just aren’t logged in yet, if they try to access protected actions. So we simply redirect them to the login page with a nice message. Just as with members, there’s no need to create actual objects in the database, or give actual id numbers or attribute hashes, since the action will never get far enough to check.

Conclusion

Once I developed this basic set of tests, I was pleased with how consistent my controller coding became. I don’t have a generator for these, I code them by hand every time. It keeps me sharp, and doesn’t take that long when you get into the swing of it. This isn’t the final draft, however. I’ve since modified this template with mocking and stubbing to greatly improve the speed, cutting runtime to a quarter of what it was. I’ll write about that approach in my next article.

Files

Here are Pastie links to my sample controller test and controller files. My controller is mostly vanilla scaffold with all the format junk stripped out, and a couple of key protected methods at the end.

settings_controller_test.rb
settings_controller.rb

Setting the flash in tests

December 27, 2009

I recently needed to test a thankyou page for a form. The previous action sets a flash value, flash[:email], and then redirects to this page. Testing the page by itself requires that this flash be set. Here’s how to do it:

get :thankyou, {}, nil, {:email => 'test@test.com'}

This will ensure that when the page is called, it has the correct flash elements set. This is important because we’re calling the page directly, not redirecting from a previous action that would have set the flash automatically.

The test methods to call actions (get, post, put, delete) actually take four parameters, even though we typically only use two. The first is the action, and the second is the parameter list. We normally don’t put curly braces around the parameter list because Ruby is smart enough to know that if it sees a bunch of key-value pairs at the end of a method call, it should scoop them up into a single hash. We can’t do that for our example because parameter, session, and flash data need to be passed in different hashes.
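For reference, here's the same kind of call with all four arguments spelled out; the params and session values below are made up purely for illustration:

# get(action, params, session, flash)
get :thankyou, {:ref => 'newsletter'}, {:user_id => 42}, {:email => 'test@test.com'}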

For more details visit the same page I did, A Guide to Testing Rails Applications.

Running your Rails Test Database in Memory (RAM)

August 4, 2009

I recently read a blog post by Amr Mostafa that benchmarked running MySQL databases in memory. I’ve been trying to figure out how to do this, and he had the answer: use the tmpfs filesystem, which runs in memory, to store your database.  I’ll have to figure out just how difficult that is later, since I’m not a super DBA…or even really a DBA at all.
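The rough recipe (untested on my end, and the mount point and size here are guesses) would be something like this:

# create a RAM-backed mount point
sudo mkdir -p /mnt/mysql_ram
sudo mount -t tmpfs -o size=512M tmpfs /mnt/mysql_ram

# then point MySQL's datadir (or just the test database) at /mnt/mysql_ram
# in my.cnf, re-initialize the data directory there, and restart MySQL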

Amr is not a Rails developer, and the purpose of his benchmark was to simulate regular web traffic.  His results seemed ambiguous, but I noticed something missing in his trial: writes to the database.  His benchmark tests only used select statements, which read from the database.  While this is the majority of most database usage, I think the performance gain during writes would tip the scales decidedly in favor of running MySQL in memory, if you can afford the RAM.

This has an added benefit for us Ruby on Rails developers: we could potentially use it to dramatically increase the speed of our Test Driven Development, especially for those of us (should be all of us) using autotest!  TDD requires running tests every few minutes, or even seconds.  And writing to the database is a much bigger piece of the puzzle in tests, since the database is recreated from scratch before every test.

For a lot of us, test suites are manageable.  Autotest only runs tests for changed files during normal development, occasionally running the entire test suite.  This, coupled with judicious mocking, stubbing and unit testing techniques can keep most test suites under control.  But larger apps use increasingly more tests, and higher-level tools like RSpec can be especially resource-intensive.

I tried to contact Mostafa about running some benchmarks for write speed, but my comment was flagged as spam!  I did get a smaller message through, so hopefully I'll hear back.  If so, I'll post a link to his thoughts/results.  Until then, I may dabble with this myself and see what amateurish benchmarks I can come up with.

