The Five Golden Rules of Bumper Sticker Influence

May 21, 2013

This is a departure from my site’s usual topic of Ruby and Ruby Accessories, but I’m nothing if not a renaissance man.

At one time or another, we all attempt to influence others via the humble bumper sticker. A younger, more idealistic version of myself even sported “Don’t Mess With Texas” once upon a time. I’m not sure exactly what I was trying to convey, except that I was from Texas and it was important that every Nebraska driver knew that.

This graphic indicates that the driver is from Texas and, while there is no explicit rule about messing with the driver, their home state is clearly off limits.

Since the sweet siren song of bumper sticker activism (i.e., something for nothing) will lure us all at some point, it’s important to at least understand the mechanics of bumper sticker influence, and how much influence each individual sticker carries. This science can be boiled down to Five Golden Rules.

Let’s say you have something important to say, and I’m a fellow driver destined to drive behind you at some point. You’ve decided that the best way to change my entire world view is through a bumper sticker. It’s perfect – you drive in front of hundreds of cars every day, and you can champion a noble cause while listening to Kesha on the radio with the windows rolled up tightly. Of course, you don’t like Kesha, but there’s a train-wreck attraction there that others simply won’t understand.* Anyhow, on to the rules:

  1. As a fellow human with a pulse, you begin with a base level of Credibility.
  2. This Credibility Score is automatically cut in half by the mere act of displaying bumper stickers.
  3. The Influence Quotient for each bumper sticker is a share of the remaining Credibility Score, proportional to the surface area of the given bumper sticker. In other words, if you have a small sticker and a large one, the larger sticker gets the lion’s share of the Influence.
  4. If you spell out any message using “coexist” style religious symbols, the value of any adjacent sticker is flipped to negative.
  5. Finally, since you and I don’t know each other, you started this process with a Credibility of zero. Use this as the basis of your calculations.
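
Since this is ostensibly a Ruby blog, here’s how the rules shake out in code. A sketch only – the sticker areas are made-up inputs, and Rule 4 is left as an exercise for the reader:

```ruby
# Rules 2, 3, and 5 as a method. Areas are in square inches (invented);
# Rule 5 says strangers start with a base credibility of zero.
def influence_quotients(sticker_areas, base_credibility = 0)
  credibility = base_credibility / 2.0   # Rule 2: halved for having stickers at all
  total_area  = sticker_areas.sum.to_f
  # Rule 3: each sticker gets a share proportional to its surface area
  sticker_areas.map { |area| credibility * (area / total_area) }
end

influence_quotients([12, 48])  # => [0.0, 0.0] -- Rule 5 wins every time
```

Run the math however you like; zero divided proportionally is still zero.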

Assuming a grid pattern in rush hour traffic, there might be as many as eight cars surrounding yours at any given time. Of those eight, five are doing the same thing you are. Tick tock, don’t stop.

Kansas City, Google Fiber, and the Great Divide

November 28, 2012

On the left is the Missouri side of Kansas City, with a Time Warner Cable truck in the driveway. On the right is the Kansas side, with a Google Fiber truck installing service in a building being used to house a number of new startups.

For those of you not familiar with Kansas City, it’s an interesting metro area made up of two distinct municipalities: the Kansas side, or “KCK”, and the Missouri side, known as “KCMO”.  The city is literally divided by a street called “State Line Road”, and that’s where I took the above picture.

In practice, the two sort of blend together to make one big city.  For the most part, residents move from one to the other in the course of their day without really even noticing.  There are even state universities with local campuses that offer in-state tuition to residents of the metro area on both sides of the line.

What’s in a Line?

One of the biggest differences I’ve noticed since moving here five years ago is how cable/broadband services operate.  Time Warner Cable spans the entire metro, but the Kansas side has the benefit of a couple smaller players in the cable market.  Even though I currently have Time Warner, I benefit greatly from these competitors.  How? TWC is drastically different on each side of the line – both in price, and service.

When I first moved to town from Omaha, I had to start a job immediately. I didn’t have time to house hunt for my family first, so I rented a temporary place of my own on the Missouri side, close to my employer.  Setting up cable was a nightmare.  It took three scheduled appointments, with the infamous four-hour windows, before they actually showed up and installed the service.  That was just for internet, which is as simple as it gets.  No set-top boxes or routing cable to multiple rooms.  And it was relatively expensive.

It took 3 months to find a place on the Kansas side for my wife and kids to move down as well.  When the big day came, I called TWC on a Friday to inform them I’d need service switched over on Monday.  They promptly gave me an appointment to make sure I wouldn’t go a day without service in the new house.  And I didn’t.  To top it off, I called them an hour before the appointment to ask them to throw a TV package on as well, and they were more than accommodating.  I don’t remember the specifics, but even with adding TV service, the monthly rate went up very little if at all.

And Then There Was Fiber

It would be fun to say the Missouri side finally got its fair share with Fiber moving to town, but Google chose to focus on the Kansas side for now.  And thus, the picture.  I was walking back from lunch today, down State Line Road itself, when I saw a Time Warner truck sitting in a driveway on the Missouri side, and a Google Fiber truck sitting in the driveway of “Homes for Hackers” on the Kansas side.

It’s amazing to see how one company (Time Warner) can treat people on opposite sides of a street so differently, purely because of the presence/absence of competition.  Believe it or not, it’s what makes me a hardcore capitalist.  Competition raises the bar, and it can come from anywhere seemingly overnight.  Whether you’re an entrepreneur, a nine-to-fiver, or somewhere in between, the lesson here is “do your best”.  Don’t make it so easy for somebody to swoop in and put you out of business because you were resting on your laurels.

Why Aren’t You Building Angry Ruby Robots?

October 25, 2012

If you’re reading this article, you’re wasting valuable time that could be spent building robots, in Ruby, that battle each other to the death.  I can hear you already: “Does such a thing actually exist??”  Well no, not really.  But there are two consolation prizes:

  • it will exist in the very near future, and
  • until it does, I’ve created a simplified version to whet your appetite.

Rubots!

“Rubots” is the word you use when you’re referring to Ruby Robots but don’t want to waste precious rubot-coding time pronouncing the full phrase.  My goal is simple: to create Ruby-based games where you can code your own player classes to battle against the sample players provided, other players you’ve created, or for the most fun: players created by your Rubyist friends!

The popularity of Rails has brought many people to the Ruby camp.  Sadly, many of these people don’t learn how to code Ruby outside of the Rails environment.  I got my start through Rails, and there were times I wasn’t sure where one ends and the other begins.  I want to get programmers comfortable coding pure Ruby, and there’s no better way than the promise of digital violence.

Prisoner’s Dilemma

My first iteration of this concept is a Rubot implementation of the classic game theory exercise, Prisoner’s Dilemma.  The gist is that you’re one of two prisoners who have been placed in separate rooms and questioned by the authorities for a crime.  You have to decide whether to cooperate with your partner in crime by not saying anything, or betray them by cutting a deal.  There are different rewards/consequences based on how each of the two players decides to act.
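
For reference, the classic payoff matrix looks like this in Ruby. These are the textbook values, in years of prison time; the gem’s actual scoring may differ:

```ruby
# [my_move, their_move] => [my_sentence, their_sentence] in years (lower is better).
PAYOFFS = {
  [:cooperate, :cooperate] => [1, 1],  # both stay quiet: light sentences
  [:cooperate, :betray]    => [3, 0],  # I stay quiet, they cut a deal
  [:betray,    :cooperate] => [0, 3],  # I cut a deal, they stay quiet
  [:betray,    :betray]    => [2, 2],  # we rat each other out
}

PAYOFFS[[:betray, :cooperate]]  # => [0, 3] -- the betrayer walks free
```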

You can download it here:

https://github.com/bellmyer/prisoners_dilemma

The README is pretty comprehensive, so I won’t rehash all of it here.  The basic idea is that you create player classes that decide whether to cooperate with, or betray, their opponents.  The game is played in multiple turns, so you can base your decision on how you and your opponent have behaved in previous turns, or any other criteria you want to consider.  Maybe your prisoner gets grumpy around nap time, and from 2-4pm it only betrays its opponent :)
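
A player class might look something like this tit-for-tat strategy. Note that the parent class and the move/history interface here are my own shorthand for illustration – see the README for the gem’s actual API:

```ruby
# Minimal stand-in for the gem's parent class, just so this example runs alone.
class Player; end

# Tit-for-tat: cooperate on the first turn, then mirror the opponent's last move.
class TitForTat < Player
  def move(my_history, opponent_history)
    opponent_history.empty? ? :cooperate : opponent_history.last
  end
end

TitForTat.new.move([], [])                   # => :cooperate (opening move)
TitForTat.new.move([:cooperate], [:betray])  # => :betray (mirrors the betrayal)
```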

This is meant to be played tournament style, and so I’ve included a basic round-robin script that loads all player classes in the project directory and pits them against each other in a battle royale.  This is a great activity for Ruby groups, especially among beginners.  Player classes inherit from a prefabbed parent class, and simple strategies can be implemented with even a basic understanding of the language.

Enjoy, and please provide feedback.  This will be the first of (hopefully) many games offered in this same style.  Rubots unite!

The Sifteo Platform, From a Developer’s Perspective

September 28, 2011

Sifteo Cubes are a novel idea first publicized in a TED talk in 2009.  In January 2011, Sifteo invited a limited number of people to join the early release program by pre-purchasing their starter kits.  The $100 kits sold out quickly, and customers received them about three months later.  I was one of those customers.

I don’t have, or spend, a lot of time on games.  But I was intrigued by their soon-to-be-available SDK.  As a developer, I was interested in seeing if this platform “had legs” by trying it out. I also wanted to get in on the ground floor as a third-party developer.

The SDK (Software Development Kit)

Fast forward to September. I’m personally tired of waiting for the Sifteo SDK, and my cubes have collected dust for the last six months:

The current location of my Sifteo cubes, for the last several months

This is the box of stuff I don't quite have a place for, in my office. I walked past it today, remembered that I had them, and decided to see if they ever got around to releasing that SDK.

I’ve been waiting since I ordered my early release cubes the first week of the year. I was led to believe the SDK was just around the corner. My interest was primarily in development, so they got my $100+ under false pretenses. I’m not saying they lied or did this on purpose, but the end result is the same to me.

Part of me wonders if they didn’t want to compete with third-party developers on the core apps they had in the works. This is based on their failure to answer even simple questions until recently – such as, what language/platform will be used? I couldn’t think of a simpler and more readily available answer to give. Developers resorted to studying screen shots of code to take their best guess. Java and Python were the front-runners of the guessing game, but both were wrong: C# is the winner.

Sifteo has traditionally been very tight-lipped about their SDK release schedule, claiming several times that they didn’t have a date in mind, wanted the SDK to be solid, and so it would take however long it would take. Of course, this was well *after* developers like myself had paid to be part of the early release program. Again, even if they had the best of intentions, my end result is the same as if they’d deliberately misled me.

Signing up for their SDK-specific mailing list, I only received a couple e-mails over the months. They were general marketing blasts targeted to consumers. It turns out, I was really just signing up for their general mailing list. Ironically, I’ve since learned that their blog has been posting SDK-related info, that I *didn’t* receive via e-mail. I didn’t know they’d publicized a September SDK release until today (09/28/2011). Of course, it’s starting to look like it was just as well, since it doesn’t appear that they’ll actually *have* a September release.

The latest word from Sifteo staff is that the SDK should be ready within a week, pushing the release to early October:

Sifteo SDK release pushed back to October, according to staff

Sifteo staff's latest estimate of the SDK release date

The explanation given is that they have a small development team, and SDK development/documentation took a back seat to consumer-facing issues. That’s a reasonable response, mostly, but it would have been more useful and appreciated months ago.

The Product

The cubes themselves are novel and fun, but limited. Far from the “tiny blocks with brains” that Time Magazine called them, they are simply small color screens with motion sensors and Bluetooth connectivity. They don’t run apps; your computer does. The distinction is lost on many people, which is why I think reviewers have overlooked it thus far. Your computer must be nearby and running the Siftrunner application. You can’t switch games on the cubes themselves – you must go back to your computer to do this, as well. The cubes are merely new peripherals, like a mouse and small screen rolled into one. My kids can’t play them on long car rides, unless I bring my laptop to actually run the applications the whole time.

Bluetooth means you can’t stray too far from your computer. And despite the fact that the apps are actually running on your computer, which presumably has internet access, there’s no support for using your internet connection. This would open a whole new dimension of game play. Because of their simplicity, any game played on the cubes could easily be played against online players. The bandwidth requirements would be very small.

Product Comparison

Let’s put on our Consumer Reports hat for a moment, and contrast Sifteo with a similar handheld platform: Cube World. My oldest son was into these for a couple years.  Similar in size, they have a low-resolution black-and-white display which allows you to interact with funny stick people.

Like Sifteo cubes, Cube World cubes can detect motion. Roll your cube over and over, and the tiny person trapped inside rolls around as if bound by gravity.  Keep it up, and he actually vomits digital blocks.  Put multiple cubes together, and the stick people can interact.  They have different games and toys, and seeing the interaction (sometimes playful, sometimes antagonistic) is fun.

Value

Cube World lacks Sifteo’s color display, multiple games, wireless connectivity, and complexity. But the cubes are also not tethered to a computer, and the price ($25 a pair when they were in production) allowed you to buy a dozen cubes for the price of the Sifteo starter kit ($150 for three cubes and a charger).

With limited input options (movement, and placing cubes in proximity to one another), Sifteo cubes are limited as a gaming platform.  For another $20, you can walk into your local Sears and buy a brand new Nintendo 3DS.  I haven’t played a Sifteo game yet that keeps my interest for more than half an hour.  Speaking as a parent with a lot of road trip experience, you get a lot more peace and quiet for your dollar with the DS.

Summary

Suffice to say, I’ve been disappointed in Sifteo so far. They’ve done a great job of marketing. They got my $100 commitment before they ever had a physical product to sell, and they’ve gotten a lot of publicity from the media.

MIT degrees and TED exposure can only *start* a business. At some point, you have to deliver the goods. I can’t speak to consumer satisfaction. That will play itself out soon, since I believe cubes have started shipping to the general public. From a developer’s perspective, however, Sifteo’s debut falls short.

Show Intent with Better Naming

March 8, 2011

I had an interesting experience at a code retreat with the creator, Corey Haines. I created some code that I felt was really perfect. I didn’t think there was room for much improvement, but it only took Corey a few seconds in passing to find a flaw. It starts with this list of rules for simple design:

  1. Passes tests – the code should be test-driven, and the tests should all pass.
  2. No duplication – often known as DRY – don’t repeat yourself. Every distinct piece of information in the system should have one (and only one) representation in the code.
  3. Expresses intent – the code should be self-explanatory.
  4. Small – methods, classes, indeed the entire application shouldn’t be any bigger than absolutely necessary.

My Original Version

I won’t explain what this code is supposed to do. That might defeat the point. See if you can figure out which principle I violated with this code. I’ll say it’s not Rule #1, but showing the tests would take up too much room.

def new_status current_status, neighbor_count
  return :alive if neighbor_count == 3
  return current_status if neighbor_count == 2
 
  :dead
end

The Problem

Corey asked me one question: what if one of the requirements changes? And there it was. In an attempt to do the most in the fewest lines, I’d over-refactored the method. Not only had I made the method brittle if business requirements should change in the future, I’d factored out the intent of the method itself.

As usual, one good software practice begets another. Test-driven development results in smaller, simpler methods for instance. And in this case, showing intent in your code reduces brittleness. So how do you accomplish this?

The Solution

Express the problem domain in the code itself. Here’s my example, reworked:

def new_status current_status, neighbor_count
  return :dead if overpopulated?(neighbor_count) ||
    underpopulated?(neighbor_count)

  return :alive if population_perfect?(neighbor_count)
 
  current_status
end
 
def overpopulated? neighbor_count
  neighbor_count > 3
end
 
def underpopulated? neighbor_count
  neighbor_count < 2
end
 
def population_perfect? neighbor_count
  neighbor_count == 3
end

This code is longer, but it shows intent much more clearly. You don’t even need to know what “overpopulated” is to understand what the method is doing. But if you want to know, or need to change it, it’s easy. In fact, we’re passing neighbor_count around a lot, so it looks like it’s time to abstract this into a class:

class Cell
  def initialize current_status, neighbor_count
    @current_status = current_status
    @neighbor_count = neighbor_count
  end
 
  def next_status
    return :dead if overpopulated? || underpopulated?
    return :alive if population_perfect?
 
    @current_status
  end
 
  private
 
  def overpopulated?
    @neighbor_count > 3  
  end
 
  def underpopulated?
    @neighbor_count < 2
  end
 
  def population_perfect?
    @neighbor_count == 3
  end
end

Now the code is much more readable, and understandable. We’re now clearly showing intent. And now, just for fun, the tests:

class CellTest < Test::Unit::TestCase
  def test_should_die_when_alive_and_overpopulated
    cell = Cell.new :alive, 4
    assert_equal :dead, cell.next_status
  end
 
  def test_should_die_when_alive_and_underpopulated
    cell = Cell.new :alive, 1
    assert_equal :dead, cell.next_status
  end
 
  def test_should_live_when_alive_and_perfect_population
    cell = Cell.new :alive, 3
    assert_equal :alive, cell.next_status
  end
 
  def test_should_stay_alive_by_default_when_alive
    cell = Cell.new :alive, 2
    assert_equal :alive, cell.next_status
  end
 
  def test_should_die_when_dead_and_overpopulated
    cell = Cell.new :dead, 4
    assert_equal :dead, cell.next_status
  end
 
  def test_should_die_when_dead_and_underpopulated
    cell = Cell.new :dead, 1
    assert_equal :dead, cell.next_status
  end
 
  def test_should_live_when_dead_and_perfect_population
    cell = Cell.new :dead, 3
    assert_equal :alive, cell.next_status
  end
 
  def test_should_stay_dead_by_default
    cell = Cell.new :dead, 2
    assert_equal :dead, cell.next_status
  end
end

Code Retreat in Boulder, Colorado

February 26, 2011

I’m in beautiful downtown Boulder, getting ready to attend a code retreat with Ruby greats like Corey Haines, Chad Fowler, Dave Thomas*, Mike Clark, Michael Feathers and many more.

Last night I hung out with some KC friends and we set up our dev environments for the event. I got motivated, and created a base environment on GitHub you can download. It runs your tests automatically using Watchr every time you save your code file, and if you’re on a Mac it even takes a screen shot at each save! Now you can go back and relive the magic. Maybe string them together into a video with a little commentary, and boom – easy post-retreat blog video.

Use the link above, and let me know if it was useful!

*not the Wendy’s guy, as my wife likes to ask. You’d think since the world is down to just one living, notable Dave Thomas that joke would get a little old. I think the people who grew up watching Wendy’s commercials will also have to die out first :)

Double-Blind Test-Driven Development in Rails 3: Part 3

February 2, 2011

  1. Simple Tests
  2. Double-Blind Tests
  3. Making it Practical with RSpec Matchers

This is the last article in this series describing the concept of double-blind test-driven development. This style of testing can add time to development, but this can be cut significantly using RSpec matchers.

If you’re not familiar with matchers, they’re the helpers that give RSpec its English-like syntax, and they can be a powerful tool for speeding up all of your test-driven development – whether you follow the double-blind method or not.

If you’re using RSpec, you’re already using its built-in matchers. Say we have a Site model, and its url method takes the host attribute and prepends the ‘http://’ protocol. Here’s a likely test:

describe Site, 'url' do
  it "should begin with http://" do
    site = Site.new :host => 'example.com'
    site.url.should eq('http://example.com')
  end
end

The eq() method in the code above is the matcher. You can pass it to any of RSpec’s should or should_not methods, and it will magically work.

But the magic isn’t that hard, and you can harness it yourself for custom matchers that conform to your application.

The Many Faces of Custom RSpec Matchers

While I don’t want this article to turn into a primer on custom RSpec matchers (it’s a little off-topic), I’ll give you the three styles of defining them, and explain my recommendations. There are simple matchers, the Matcher DSL, and full RSpec matcher classes.

Let’s start by writing a test we want to run:

it "should be at least 5" do
  6.should be_at_least(5)
end

This test should always pass, provided we’ve defined our matcher correctly. The first way to do this is the simple matcher:

def be_at_least(minimum)
  simple_matcher("at least #{minimum}"){|actual| actual >= minimum}
end

As you might guess, actual represents the object that “.should” whatever – in this case “.should be_at_least(5)”. This version makes a lot of assumptions, including the auto-creation of generic pass and fail messages.

If you want a little more control, you can step up to RSpec’s Matcher DSL. This is the middle-of-the-road option for creating custom matchers:

RSpec::Matchers.define :be_at_least do |minimum|
  match do |actual|
    actual >= minimum
  end

  failure_message_for_should do |actual|
    "expected #{actual} to be at least #{minimum}"
  end

  failure_message_for_should_not do |actual|
    "expected #{actual} to be less than #{minimum}"
  end

  description do
    "be at least #{minimum}"
  end
end

Now we’re rocking custom failure messages, and test names. This is pretty cool, and honestly how I started out doing matchers. It’s also how I started out doing the matchers for double-blind testing.

The problem is that by skipping the creation of actual matcher classes, we lose the ability to do things like inheritance. Not a big deal if our matchers stay simple, but they won’t. Not if we use them as often as we should! I found myself re-defining the same helper methods in each matcher I defined this way.

So let’s see just how daunting a full-fledged custom matcher class really is:

module CustomMatcher  
  class BeAtLeast
    def initialize(minimum)  
      @minimum = minimum
    end  
  
    def matches?(actual)  
      @actual = actual
      @actual >= @minimum
    end  
  
    def failure_message_for_should  
      "expected #{@actual} to be at least #{@minimum}"  
    end  
  
    def failure_message_for_should_not  
      "expected #{@actual} to be less than #{@minimum}"  
    end  
  end  
  
  def be_at_least(expected)  
    BeAtLeast.new(expected)  
  end  
end  

This isn’t so bad! We’re defining a new class, but you can see it doesn’t have to inherit from anything, or use any unholy Ruby voodoo to work.

We just have to define four methods: initialize, matches? (which returns true or false), and the two failure message methods. Along the way, we set some instance variables so we can access the data when we need it. Finally, we define a method that creates a new instance of this class, and that’s what RSpec will rely on.

You can add as many other methods as these four will rely on. But you also get other benefits over the DSL. You can use inheritance, moving common methods up the chain so you only have to define them once, instead of in each matcher definition. You can also write setup/teardown code in your parent classes, make default arguments a breeze, and standardize any error handling. I do all of these in the matchers I created for the example app.
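
As a sketch of what that inheritance buys you, here’s a minimal parent class in the style of my validate_presence_of matcher. The method names mirror that example, but the internals here are my guess at one reasonable implementation, not the gem’s actual code:

```ruby
# Hypothetical parent class for double-blind validation matchers.
module DoubleBlindMatchers
  class ValidationMatcher
    def initialize(attribute, options = {})
      @attribute = attribute
      @options   = default_options.merge(options)
    end

    # RSpec calls matches? with the object under test.
    def matches?(object)
      @object  = object
      @failure = nil
      match            # subclasses define the actual expectations
      @failure.nil?
    end

    def failure_message_for_should
      @failure
    end

    private

    # Shared helper: record the first failed expectation's message.
    def check(passed, message = "expectation failed")
      @failure ||= message unless passed
    end

    # Subclasses override this to supply defaults like :message and :with.
    def default_options
      {}
    end
  end
end
```

With the shared plumbing up the chain, each concrete matcher only has to define default_options and match.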

The bottom line is this: defining your own matcher classes directly really DRYs up your matchers, and that always makes life simpler. I think it’s the only way to go for serious and heavy RSpec users. It allows the class for my validate_presence_of matcher to be this short and sweet:

module DoubleBlindMatchers
  class ValidatePresenceOf < ValidationMatcher
    def default_options
      {:message => "can't be blank", :with => 'x'}
    end

    def match
      set_to @options[:with]
      @object.valid?
      check !@object.errors[@attribute].include?(@options[:message]), shouldnt_exist
      
      set_to nil
      check !@object.valid?, valid_when('nil')
      check @object.errors[@attribute].include?(@options[:message])
      
      set_to ""
      check !@object.valid?, valid_when("blank")
      check @object.errors[@attribute].include?(@options[:message])
    end
  end
  
  def validate_presence_of expected, options = {}
    ValidatePresenceOf.new expected, options
  end
end

And the Teacher model, which grew considerably during our double-blind testing, now looks like this (in its entirety):

# spec/models/teacher_spec.rb

require 'spec_helper'

describe Teacher do
  it {should have_many :subjects}
  
  it {should validate_presence_of :name}
  it {should validate_length_of :name, :maximum => 50, :message => "must be 50 characters or less"}
  
  it {should validate_presence_of :salary}
  it {should validate_numericality_of :salary, :within => (20_000..100_000), :message => "must be between $20K and $100K"}
end

Summary

Now that you’ve seen my entire proposal for double-blind testing, let me know what you think. Be cruel if you must; it’s the only way I’ll learn. I’ll do my best to explain (not defend) my reasoning, and keep an open mind to changes.

I’ll also be publishing my double-blind matchers as a gem so you can add them to your project.

Double-Blind Test-Driven Development in Rails 3: Part 2

February 1, 2011

  1. Simple Tests
  2. Double-Blind Tests
  3. Making it Practical with RSpec Matchers

The last article in this series defined the concept of double-blind test-driven development, but didn’t get much into real-world examples. In this article, we’ll explore several such examples.

The Example Application

This article includes a sample app that you can download using the link above. Be sure to check out the tag “double_blind_tests” to see the code as it appears in this article. The next article will have a lot of refactoring. I limited my samples to the model layer, where 100% coverage is a very realistic goal and the benefit is likely to be greatest.

I chose a simple high school scheduling app with teachers, the subjects they teach, students, and courses. In this case, I’m defining a course as a student’s participation in a subject. Teachers teach (i.e., have) many subjects. Students take (have) many subjects, via courses. The course record contains that student’s grade for the given subject.

The database constraints are intentionally strict, and most of the validations in the models ensure that these constraints are respected in the application layer. We don’t want the user seeing an error page because of bad data. Depending on the application, that can be worse than actually having bad data creep in.

Associations

Here’s an example of a has_many association:

# excerpt from spec/models/teacher_spec.rb

describe Teacher do
  it "has many subjects" do
    teacher = Factory.create :teacher
    teacher.subjects.should be_empty

    subject = teacher.subjects.create Factory.attributes_for(:subject)
    teacher.subjects.should include(subject)
  end
end

In order to factor out our own assumptions, we have to ask what they are. The assumption is that the subject we add to the teacher’s subject list works because of the has_many relationship. So we’ll first test that teacher.subjects is, in fact, empty when we assume it would be. Then we’re free to test that adding a subject works as we expect.

Here’s a belongs_to association:

# excerpt from spec/models/subject_spec.rb

describe Subject do
  it "belongs_to a teacher" do
    teacher = Factory.create :teacher

    subject = Subject.new
    subject.teacher.should be_nil
    
    subject.teacher = teacher
    subject.teacher.should == teacher
  end
end

Again, we’re challenging the assumption that the association is nil by default, by testing against it before verifying that we can add a teacher. This tests that this is a true belongs_to association, and not simply an instance method. This is the kind of thing that can and will change over the life of an application.

Validations

Let’s test validates_presence_of:

# excerpt from spec/models/teacher_spec.rb

describe Teacher do
  describe "name" do
    it "is present" do
      error_message = "can't be blank"
      
      teacher = Teacher.new :name => 'Joe Example'
      teacher.valid?
      teacher.errors[:name].should_not include(error_message)

      teacher.name = nil
      teacher.should_not be_valid
      teacher.errors[:name].should include(error_message)

      teacher.name = ''
      teacher.should_not be_valid
      teacher.errors[:name].should include(error_message)
    end
  end
end

This example was actually explained in detail in the last article. Validate that the error doesn’t already exist before trying to trigger it. Don’t just test the default value when you create a blank object, test the likely possibilities. Refactor the error message to DRY up the test and add readability. And finally, test by modifying the object you already created (as little as possible) rather than creating a new object from scratch for each part of the test.

A more complex version is needed to validate the presence of an association:

# excerpt from spec/models/subject_spec.rb

describe Subject do
  describe "teacher" do
    it "is present" do
      error_message = "can't be blank"

      teacher = Factory.create(:teacher)
      subject = Factory.create(:subject, :teacher => teacher)
      subject.valid?
      subject.errors[:teacher].should_not include(error_message)
    
      subject.teacher = nil
      subject.should_not be_valid
      subject.errors[:teacher].should include(error_message)
    end
  end
end

While the test is more complex, the code to satisfy it is not:

# excerpt from app/models/subject.rb

validates_presence_of :teacher

testing validates_length_of:

# excerpt from spec/models/teacher_spec.rb

describe Teacher do
  describe "name" do
    it "is at most 50 characters" do
      error_message = "must be 50 characters or less"
      
      teacher = Teacher.new :name => 'x' * 50
      teacher.valid?
      teacher.errors[:name].should_not include(error_message)
      
      teacher.name += 'x'
      teacher.should_not be_valid
      teacher.errors[:name].should include(error_message)
    end
  end
end

And here’s the model code that satisfies the test:

# excerpt from app/models/teacher.rb

validates_length_of :name, :maximum => 50, :message => "must be 50 characters or less"

While you can definitely start to see a pattern in validation testing, this introduces a new element. Instead of freshly setting the name attribute to be 51 characters long, we test the valid edge case first and then add *just* enough to make it invalid – one more character.

This does two things: it verifies that our edge case was as “edgy” as it could be, and it makes our test less brittle. If we later wanted to allow names up to 100 characters, we’d only have to modify the test name and the initial value we set.
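Stripped of Rails, the edge-plus-one idea looks like this; a minimal plain-Ruby sketch, where NameHolder is a hypothetical stand-in for the model:

```ruby
# A plain-Ruby stand-in for a model with a maximum-length rule.
class NameHolder
  MAX_LENGTH = 50
  attr_accessor :name

  def initialize(name)
    @name = name
  end

  def valid?
    name.to_s.length <= MAX_LENGTH
  end
end

# Build the value exactly at the boundary, confirm it passes...
holder = NameHolder.new('x' * NameHolder::MAX_LENGTH)
holder.valid?   # => true

# ...then add just one character to tip it over the edge.
holder.name += 'x'
holder.valid?   # => false
```

If the limit ever changes, only the MAX_LENGTH constant moves; the shape of the check stays put.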

Validating a number’s range using validates_numericality_of:

# excerpt from spec/models/teacher_spec.rb

describe Teacher do
  describe "salary" do
    it "is at or above $20K" do
      error_message = "must be between $20K and $100K"
      
      teacher = Teacher.new :salary => 20_000
      teacher.valid?
      teacher.errors[:salary].should_not include(error_message)

      teacher.salary -= 0.01
      teacher.should_not be_valid
      teacher.errors[:salary].should include(error_message)
    end

    it "is no more than $100K" do
      error_message = "must be between $20K and $100K"

      teacher = Teacher.new :salary => 100_000
      teacher.valid?
      teacher.errors[:salary].should_not include(error_message)
      
      teacher.salary += 0.01
      teacher.should_not be_valid
      teacher.errors[:salary].should include(error_message)
    end
  end
end

And here’s the code that satisfies the test:

# excerpt from app/models/teacher.rb

validates_numericality_of :salary, :message => "must be between $20K and $100K",
  :greater_than_or_equal_to => 20_000, :less_than_or_equal_to => 100_000

We’re doing the same here as in our testing of name’s length. We’re setting the edge value that’s *just* within the allowed range, then adding or subtracting a penny to make it invalid. I split up the top and bottom edge tests, because it’s better to test as atomically as possible – one limit per test.
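The same boundary discipline can be sketched without a framework; here in_range is a hypothetical stand-in for the numericality validation, and a penny is the smallest meaningful step for a currency value:

```ruby
# Check both edges of the range, then nudge each by one cent.
MIN_SALARY = 20_000.00
MAX_SALARY = 100_000.00

in_range = ->(salary) { salary.between?(MIN_SALARY, MAX_SALARY) }

in_range.call(MIN_SALARY)          # => true  (lower edge is valid)
in_range.call(MIN_SALARY - 0.01)   # => false (a penny below fails)
in_range.call(MAX_SALARY)          # => true  (upper edge is valid)
in_range.call(MAX_SALARY + 0.01)   # => false (a penny above fails)
```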

Defaults

Another tricky database constraint to test for is a default value:

# excerpt from spec/models/course_spec.rb

describe Course do
  describe "grade_percentage" do
    it "defaults to 1.0" do
      course = Course.new :grade_percentage => nil
      course.grade_percentage.should be_nil
      
      course = Course.new :grade_percentage => ''
      course.grade_percentage.should be_blank
      
      course = Course.new :grade_percentage => 0.95
      course.grade_percentage.should == 0.95
      
      course = Course.new
      course.grade_percentage.should == 1.0
    end
  end
end

In this case, we can’t avoid recreating the model from scratch, because of the nature of the implementation. There’s no actual code in the model that makes this happen; it’s purely in the database schema. Why should we test it, then? Because we test any behavior we’re going to rely on in the application. The fact that this model behavior is implemented at the database level (and therefore not purely TDD) is a small inconvenience.
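For context, a default like this would live in a migration rather than the model; a hypothetical excerpt (not from the article’s code) might read:

```ruby
# hypothetical migration excerpt: the default belongs to the schema, not the model
create_table :courses do |t|
  t.decimal :grade_percentage, :default => 1.0
  t.timestamps
end
```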

What’s the assumption our double-blind test is verifying in this case? That the value is only set in the absence of an explicit assignment. Testing with nil and blank values verifies that the default doesn’t override them; it only kicks in in the complete absence of any assignment. I also test an arbitrary (but valid) value as the anti-assumption test before finally verifying that the default is set to the correct value.

Most default tests verify only that the correct default value is set – the double-blind version verifies that it’s acting only as a default value in all cases.

Summary

The point of double-blind testing is bullet-proof tests, that can’t be reasonably thwarted by antagonistic coding – whether that’s your anti-social pairing partner, or yourself several months down the road. The bottom line is this: test all assumptions.

That being said, this is very time-consuming, and we can see a ton of repetition even in this small test suite. What we need is a way to get back to speedy testing before our boss/client notices it now takes an hour to implement one validation.*

*Even if you work for a government owned/regulated institution that actually digs that kind of non-agile perversion, you WILL eventually go insane. Even in this small sample app, the voices in my head had to talk me off a building ledge twice.

The answer lies in RSpec matchers, which are easy to implement, and can grow with your application. The benefit is not just speedier development – it’s also consistency across your application. We’ll explore that in the last article of this series.
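As a preview of where that’s headed: RSpec’s matcher protocol only requires an object that responds to matches? and failure_message, so a hand-rolled double-blind presence matcher could be sketched like this (a hypothetical illustration, not the matcher built in the next article):

```ruby
# A minimal matcher object following RSpec's protocol: matches? runs the
# double-blind checks, failure_message explains what went wrong.
class ValidatePresenceOf
  ERROR = "can't be blank"

  def initialize(attribute)
    @attribute = attribute
  end

  def matches?(model)
    model.valid?
    # The error must be absent on a properly initialized object...
    return false if model.errors[@attribute].include?(ERROR)

    # ...and must appear once the attribute is blanked out.
    model.send("#{@attribute}=", nil)
    !model.valid? && model.errors[@attribute].include?(ERROR)
  end

  def failure_message
    "expected model to validate the presence of #{@attribute}"
  end
end

def validate_presence_of(attribute)
  ValidatePresenceOf.new(attribute)
end
```

With something like that in place, a spec could collapse the whole double-blind dance into one line, along the lines of teacher.should validate_presence_of(:name).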

Double-Blind Test-Driven Development in Rails 3: Part 1

January 31, 2011

This is a three-part series introducing the concept of double-blind test-driven development in Rails. This post defines the concept itself, and lays the groundwork by showing the way tests are more commonly written. The next couple posts will show how to double-blind test various common rails elements, and how to make this added layer of protection automatic and quick.

  1. Simple Tests
  2. Double-Blind Tests
  3. Making it Practical with RSpec Matchers

Looking at a rails application that was built with test-driven development, you might expect to see something like this:

# spec/models/teacher_spec.rb

describe Teacher do
  it "has many subjects" do
    teacher = Factory.create :teacher
    subject = teacher.subjects.create Factory.attributes_for(:subject)

    teacher.subjects.should include(subject)
  end
  
  describe "name" do
    it "is present" do
      teacher = Teacher.new

      teacher.should_not be_valid
      teacher.errors[:name].should include("can't be blank")
    end
    
    it "is at most 50 characters" do
      teacher = Teacher.new :name => 'x' * 51
      
      teacher.should_not be_valid
      teacher.errors[:name].should include("must be 50 characters or less")
    end
  end
end

Truth be told, if you’re seeing this in the wild, the app is probably doing pretty well. This level of testing works great during the early stages of an app, when things are simple. But as things grow and/or multiple developers become involved, you need more.

Consider models where the associations and validations stretch into the dozens of lines. The more careful and specific you are about validations, the easier it is to get conflicting or overlapping validations. I actually came up with the concept of double-blind testing while retro-testing models in a client app that previously had no validation specs.

What is Double-Blind Testing?

In the world of scientific studies, you always need a control group. One set of participants gets the latest and greatest new diet pill, while the other gets a placebo. Researchers used to think this was good enough, and probably pretty funny to watch the placebo users rave about their shrinking waistlines. But it turns out studies like this still allowed some bias – as researchers observed the effects, their *own* preconceived notions tainted results. Enter the double-blind study.

In a double-blind study, the researchers themselves are unaware of which participants are in the control group, and which are being tested. Both sides are “blind”. They may have lost funny patient anecdotes, but they gained research reliability.

Applying the Lessons of Double-Blind Studies to Test-Driven Development

As I said, in the early stages of an app the tests I showed above work great, as long as you’re using TDD and the red-green-refactor cycle. This means you write the test, run it, and watch it fail. Then you write the simplest code that will make the test pass, run the test again, and confirm that it passes. Most testing tools will literally show red or green as you do this. Then, as you amass tests, you’re free to refactor your code (abstracting common code into helper methods, rewriting for readability, etc.) and run the tests again at any time. You’ll see failures if you broke anything; if not, you’ve more or less guaranteed your refactoring works properly.

The problem comes in when you start changing old code, or adding tests to processes that didn’t initially happen. What I’m calling double-blind testing is this:

each test needs to verify the object’s behavior before testing what changes.

As an example, let’s rewrite one of the tests from above:

# original test

describe "name" do
  it "is present" do
    teacher = Teacher.new

    teacher.should_not be_valid
    teacher.errors[:name].should include("can't be blank")
  end
end
# modified to be double-blind

describe "name" do
  it "is present" do
    error_message = "can't be blank"

    teacher = Teacher.new :name => 'Joe Example'
    teacher.valid?
    teacher.errors[:name].should_not include(error_message)

    teacher.name = nil
    teacher.should_not be_valid
    teacher.errors[:name].should include(error_message)

    teacher.name = ""
    teacher.should_not be_valid
    teacher.errors[:name].should include(error_message)
  end
end

This is the basic pattern for all double-blind testing. We’re not leaving anything to chance. In the original version, we expected our object to be invalid, we treated it as such, and we got the result we expected. Do you see the problem with this?

Here’s an exercise: can you make the original test pass, even though the object validation is not working correctly? There’s actually a style of pair programming that routinely does exactly this. One developer writes the test, and the other writes just enough code to make it pass, with the good-natured intention of tripping up the first developer whenever possible. If you wrote the original test, I could satisfy it by just adding the error message to every record on validation, regardless of whether it’s true! Your test would pass, but the app would fail.

The test is now “double-blind” in the sense that we as testers have factored our own expectations out of the test. In this case, we expected the error message not to appear until we initialized the object a certain way, and an unverified expectation like that is exactly where tests go wrong. It may sound far-fetched or paranoid*, but in large codebases your original tests are often abused in this very way. The “you” that writes new code today is often at odds with the “you” from three months ago who wrote the older code with a different understanding of the problem at hand.

*Plus, everybody knows it’s not paranoia when the world really is out to get you. I’ve discussed this at length with the voices in my head, and they all agree. Except Javier. That guy’s a jerk.

Now that I’ve laid out the justification, let’s take a closer look at how the test changed. The first thing I did was create a version of the object that I believe should NOT trigger the error message. Then I run through two cases that should. You can see right away, I was forced to be more *specific* about what should trigger an error. Instead of just a blank object with no values set, I’ve proactively set the attribute in question to both nil and blank. A key element here is to try to work with the *same* object, modifying it between assertions, rather than creating a new object each time. My test wouldn’t have been as specific if I’d just recreated a blank Teacher object and run a single validation check.

Also, with the increased code comes an increased chance of typos. We don’t want to DRY test code up too much, because a good rule is to keep your tests as readable (non-abstract) as possible. But I’ve specified the error message at the top of the test and reused that string over and over. I did this in a way that DRYs the code and adds readability. You can see at a glance that all three checks look for the same error.

Finally, the first time I run the object’s validation, notice I’m not asserting that it should be valid. If I had written teacher.should be_valid on line 8 of the double-blind test, I’d have to take the extra time to make sure every other part of the object was valid. Not only is this time-consuming, it’s very brittle. Any future validations would break this test.

If you use factories often, you might suggest setting the object up that way, since a factory-generated object should always be valid. Then you could assert validity. However, that only slows down your test suite. It’s enough just to run valid? on the object, which triggers all the validation checks and loads up our errors hash.

Summary

I believe this is a new concept – I was already coding most of my tests this way, but it didn’t dawn on me how valuable it was until I started retro-testing previously testless code. The value showed itself right away.

I would love to hear feedback on this – if you think it’s unnecessary (I tend to be very rainman-ish about my testing code) or even detrimental. However, if you think it’s too much work, I ask you to hold your criticism until you’ve read part 3 of this article, where I show how to use your own RSpec matchers to greatly speed this process.

Legacy Database Column Names in Rails 3

January 28, 2011

If you work with legacy databases, you don’t always have the option of changing column names when something conflicts with Ruby or Rails. A very common example is having a column named “class” in one of your tables. Rails *really* doesn’t like this, and like the wife or girlfriend who really hates your new haircut, it will complain at every possible opportunity:

# trying to set the poorly named attribute
ruby-1.9.2-p0 > u = User.new :class => '1995'
NoMethodError: undefined method `columns_hash' for nil:NilClass
# trying to set a different attribute that is only guilty by association
ruby-1.9.2-p0 > u = User.new :name
NoMethodError: undefined method `has_key?' for nil:NilClass
# trying to set the attribute later in the game
ruby-1.9.2-p0 > u = User.new
 => #<User id: nil, name: nil, class: nil, created_at: nil, updated_at: nil> 
ruby-1.9.2-p0 > u.class = '1995'
NoMethodError: undefined method `private_method_defined?' for nil:NilClass

Like the aforementioned wife/girlfriend, you’re not going anywhere until this issue is resolved. Luckily, Brian Jones has solved this problem for us with his gem safe_attributes. Rails automatically creates accessors (getter and setter methods) for every attribute in an ActiveRecord model’s table. Trying to override crucial methods like “class” is what gets us into trouble. The safe_attributes gem turns off the creation of any dangerously named attributes.

Just do this:

# app/models/user.rb
class User < ActiveRecord::Base
  bad_attribute_names :class
end

After adding the gem to your Gemfile, pass bad_attribute_names the list of offending column names, and it will keep Rails from trying to generate accessor methods for them. Now, this does come with a caveat: you don’t have those accessors. Let’s try to get/set our :class attribute:

ruby-1.9.2-p0 > u = User.new
 => #<User id: nil, name: nil, class: nil, created_at: nil, updated_at: nil> 
ruby-1.9.2-p0 > u.class = '1995'
 => "1995" 
ruby-1.9.2-p0 > u
 => #<User id: nil, name: nil, class: "1995", created_at: nil, updated_at: nil> 
ruby-1.9.2-p0 > u.class
 => User(id: integer, name: string, class: string, created_at: datetime, updated_at: datetime) 

The setter still works (I’m guessing that it was still created because there wasn’t a pre-existing “class=” method) and we can verify that the object’s attribute has been properly set. But calling the getter defaults to…well, the default behavior.

The answer is to always use this attribute in the context of a hash. You can send the object a hash of attribute names/values, and that works. This means your controller creating and updating won’t have to change. Methods like new, create, update_attribute, update_attributes, etc will work fine.

If you want to just set the single value (to prevent an immediate save, for example) do it like this:

ruby-1.9.2-p0 > u[:class] = '1996'
 => "1996" 
ruby-1.9.2-p0 > u
 => #<User id: nil, name: nil, class: "1996", created_at: nil, updated_at: nil> 

Basically, you can still set the attribute directly, instead of going through the rails-generated accessors. But we’re still one step away from a complete solution. We want to be able to treat this attribute like any other, and that requires giving it a benign set of accessors (getter and setter methods). One reason to do this is so we can use standard validations on this attribute.

Adding accessors to our model is this simple:

# add to app/models/user.rb

def class_name= value
  self[:class] = value
end
  
def class_name
  self[:class]
end

We’re calling the accessors “class_name”, and now we can use that everywhere instead of the original attribute name. We can use it in forms:

# example, not found in code

<%= f.text_field :class_name %>

Or in validations:

# add to app/models/user.rb

validates_presence_of :class_name

Or when creating a new object:

# example, not found in code

User.create :class_name => 'class of 1995'

If you download the code, these additions are test-driven, meaning I wrote the tests for those methods before writing the methods themselves, to be sure they worked properly. I encourage you to do the same.
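If you want to see the accessor-wrapping mechanics outside of Rails entirely, the whole pattern boils down to this plain-Ruby sketch (Record is hypothetical, standing in for the ActiveRecord model):

```ruby
# A reserved-name attribute stored behind hash-style access, with benign
# class_name accessors delegating to it, so the dangerous name "class"
# is never used as a method name.
class Record
  def initialize
    @attributes = {}
  end

  def [](key)
    @attributes[key]
  end

  def []=(key, value)
    @attributes[key] = value
  end

  def class_name=(value)
    self[:class] = value
  end

  def class_name
    self[:class]
  end
end

record = Record.new
record.class_name = 'class of 1995'
record[:class]      # => "class of 1995"
record.class_name   # => "class of 1995"
record.class        # => Record (the real method is untouched)
```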

Good luck!