Jake Scruggs

writes ruby/wears crazy shirts

Recently I was on a project where a bunch of code had been written before we arrived. It was quite a struggle to get the application under test. After a number of months the team hit 50% and then we just stayed there. We had a hard time getting client developer buy-in on the push upward from 50%. I didn't really understand this attitude at first, but after talking with the devs, I realized that the tests were mostly a nuisance for them. They saw it like this:

"If I have to gut a few pages, as part of a changing requirement, now I also have to spend a day fixing the stupid tests. And the tests never really catch any bugs, so what was the point? All the tests are doing is slowing me down."
Since the coverage was low and many of the test writers were new to unit tests, we didn't really have a lot of protection from bugs. But we also had a sizable suite to maintain. They were feeling all the pain of keeping a test suite running but seeing none of the benefits. Now, as a guy who'd been on projects with good test coverage, I could see how the suite was making things better: first, I had a roughly 50% chance of being able to use the tests as documentation for any class I opened up. And second, the tests were catching some problems before they made it into the repository -- but not enough for the client devs to notice.

We never did convince them that writing tests allowed them to go faster in the long run, and it made me wonder: how many teams get to 30-60% coverage and give up because the benefits are too subtle and the pain is too great?

As a traveling consultant I've developed a pretty poor opinion of American cars. Sliding doors that won't close, unfathomable glove boxes, ugly/boring interiors, boring/ugly exteriors, crazy placements for the radio/ac/heating/trunk release buttons, plastic doors with tails of extra material hanging off poorly drilled holes... The other day I was sitting in a Chrysler 300 talking with one of our tech leads about how he really wishes the rental company hadn't upgraded him and would've much preferred a Hyundai Elantra -- a car that costs half as much as the 300. It's a bit depressing.

I spent the last week writing a log parser of all things. The web site I'm currently working on is the sign-up page for a large company, so they are very concerned about how far the average Joe gets into the process before giving up. If everybody bails on the address page, then perhaps it's too complicated or unresponsive or... something. Up till now we've been redesigning pages based on some data and a lot of guesswork, so we thought it might be about time to get some serious data on every session, every day. Now, the first thing you learn about parsing Rails logs is that they are designed to be human readable but not so much computer parsable. An Apache request is one line. A Rails request log has lots more info, but is multi-line. And since we run a lot of Mongrel instances at once, all these requests are interleaved, so sorting them out is an intense computation (we actually ran out of memory a few times before we did some optimization).
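To make the interleaving problem concrete, here's a toy de-interleaving pass. This is my sketch, not our actual parser: the "[pid]" prefix format is invented for illustration, and real Rails/Mongrel logs need real regexes. Lines are grouped per process, and a new request starts at each "Processing" line, so multi-line requests come back out contiguous.

```ruby
# Toy sketch: the "[pid]" line prefix is a made-up format for illustration.
# Group lines by process, starting a fresh request whenever a "Processing"
# line begins a new one, so interleaved multi-line requests come out whole.
def group_requests(lines)
  requests_by_pid = Hash.new { |hash, pid| hash[pid] = [] }
  lines.each do |line|
    next unless line.chomp =~ /\A\[(\d+)\]\s+(.*)\z/
    pid, rest = $1, $2
    # start a new request for this process on its first or "Processing" line
    requests_by_pid[pid] << [] if rest =~ /\AProcessing/ || requests_by_pid[pid].empty?
    requests_by_pid[pid].last << rest
  end
  requests_by_pid.values.flatten(1)
end
```

The real thing was much hairier than this, but the grouping-by-process idea is what kept memory use sane once we optimized.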

Our first pass was mostly a ruby script that called off to a few classes and produced a csv file that our analysts could play with. But they wanted more data -- data only Apache could provide. So we added an Apache log parser and merged the results. And there were bugs: stuff wasn't logged, and sessions seemed to end at the starting page or somehow start in the middle (both of which should be impossible). And the code was complicated and hard to understand. At this point we realized that we had been treating this task like a second class citizen -- a task to be automated but not really cared about. So we bit the bullet and refactored the log parser to be more object oriented. Now there's a log collector that has lots of requests. Each request mostly delegates to its Apache request or its Rails request. Along the way things became more clear and the bugs started to become obvious, including one I have to share with you:

Remember how I said many sessions seemed to start in the middle while others ended at the beginning? Well, it turns out that a typical telemarketer will sign up one user and then head back to the first page to sign up another. At which point Rails grabs the session id of the previous session and logs it before processing the request (which wipes the session, but too late for the logging). So it looks like the previous session ended at the beginning. Of course, the next session has had its beginning stolen, so it looks like it started in the middle. Since Rails does this bit of logging before it even looks at our code, we needed to adjust our parser to ignore the false session id and look for the real one in an extra bit of logging we added in.

Anyway, now that we have our own custom log parser, we can get some awesome info on our app for debugging weird stuff that only happens under load, and the analysts get a csv file they can import into Excel to generate all sorts of cool graphs and statistics. Yes, it did mean a week of writing regular expressions and some crazy command line statements (like: cat access.log | grep | awk | sort | uniq -c | sort ) but we now have a very sharp, well tested tool.
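The refactored shape described above -- a collector holding requests, each request delegating to its Apache half or its Rails half -- can be sketched roughly like this. The class and method names here are illustrative, not our actual code:

```ruby
# Illustrative names only -- this is the shape of the refactoring, not the
# real parser. An Apache request knows Apache things, a Rails request knows
# Rails things, and a Request delegates each question to the half that can
# answer it.
class ApacheRequest
  def initialize(fields); @fields = fields; end
  def response_code; @fields[:code]; end
end

class RailsRequest
  def initialize(fields); @fields = fields; end
  def session_id; @fields[:session_id]; end
  def action; @fields[:action]; end
end

class Request
  def initialize(apache_request, rails_request)
    @apache_request, @rails_request = apache_request, rails_request
  end

  # delegate to whichever log actually recorded the answer
  def response_code; @apache_request.response_code; end
  def session_id; @rails_request.session_id; end
  def action; @rails_request.action; end
end

class LogCollector
  attr_reader :requests
  def initialize; @requests = []; end
  def add(request); @requests << request; end

  # group requests back into user sessions for the csv report
  def sessions
    @requests.group_by { |request| request.session_id }
  end
end
```

Once each question had exactly one home, bugs like the stolen-session one above practically pointed at themselves.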

In my last post I talked about using RSpec without Rails, but since just about all my Ruby programming involves Rails, I should probably get into how to specify Rails code.

RSpec on Rails: Models

So if you're new to RSpec and you want to get started quickly, head on over to rspec.rubyforge.org, get RSpec and Spec::Rails, and then, after you've created a new Rails project and rspec-ed it (all detailed on the website), you can run:

ruby script/generate rspec_scaffold person name:string

Which will get you a bunch of stuff to play with. Of course, you'll need to set up your database.yml and run rake db:migrate if you want things to work.

Fire up your favorite IDE and find the person_spec.rb and you should see something like:

require File.dirname(__FILE__) + '/../spec_helper'

describe Person do
  before(:each) do
    @person = Person.new
  end

  it "should be valid" do
    @person.should be_valid
  end
end

Not the world's most rigorous set of specs, but then our model doesn't have much behavior at the moment. You'll notice that the auto-generated spec uses RSpec's built-in interrogative feature: 'be_valid' makes RSpec look for 'valid?'. Running the spec with spec formatting ('-fs') produces the following output:

- should be valid

Finished in 0.028878 seconds

1 example, 0 failures

But what's interesting is that if I remove the string between 'it' and 'do', the output remains the same. If you leave off the name of a spec, RSpec guesses a name for you by looking at what you're asserting.

But let's say I want to specify that a Person is invalid without a name. Here's how I would do that:

it "should be invalid without a name" do
  @person.should have(1).error_on(:name)
  @person.should_not be_valid
end

As you can see, there's some built-in sugar to make checking errors more readable. Now that I have a failing spec, I can add the following line to my Person model to make it pass:

validates_presence_of :name

Now let's do something a little more interesting:

it "should run inside a transaction when making rich" do
  Person.should_receive(:transaction).and_yield
  @person.make_rich
  @person.cash.should == 1000000.00
end

What I'm saying here is that a Person should have a method that makes them rich, and furthermore that the rich-making should happen inside a transaction. RSpec has a mature mocking/stubbing framework built into it. 'and_yield' can even specify what to yield, so you could return an object that you created in the spec and then check to make sure some things happened to it. Of course, before I get too much further, I should write some code to make the spec pass:

def make_rich
  Person.transaction do
    self.cash = 1000000.00
  end
end
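To see what that 'and_yield' remark means in practice: a stubbed method can hand your block an object you created in the spec, and afterward you check what happened to it. Here's a hand-rolled illustration of the mechanism in plain Ruby -- this is my sketch of the idea, not RSpec's implementation or API (in a real spec you'd write something like Person.should_receive(:transaction).and_yield):

```ruby
# Hand-rolled sketch of the idea behind RSpec's and_yield (illustrative
# only): the fake method yields an object the test controls, so the test
# can inspect that object afterward.
class FakeTransaction
  attr_reader :yielded_log

  def transaction
    @yielded_log = []    # the object we choose to yield
    yield @yielded_log   # hand it to the code under test
    @yielded_log
  end
end

fake = FakeTransaction.new
fake.transaction { |log| log << :cash_set }  # "code under test" records its work
```

After the block runs, fake.yielded_log holds [:cash_set], so a test can assert both that the block was invoked and what was done to the yielded object.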

Now what if you already have a codebase with a lot of Test::Unit tests but you want to start using RSpec? Well, you could just start writing specs in their own spec folder, and RSpec's default Rails task will run both your tests and your specs. But that's not your only option. You can use RSpec inside your tests like so:

require File.dirname(__FILE__) + '/../../test_helper'
require 'spec/test_case_adapter'

class PersonTest < Test::Unit::TestCase
  def setup
    @person = Person.new(:name => "Jake")
  end

  def test_person_can_have_name_set
    @person.name.should == "Jake"
  end
end

Now if you do this you won't have access to all the features of RSpec: the reports, the mocking framework, and some custom matchers won't work (for instance, '@person.should have(1).error_on(:name)' isn't available), but it's a low impact way of trying out RSpec before you commit.

Next time we'll look at view specing with RSpec.

I'm going to try and turn my recent presentation on RSpec into a series of blog posts -- This being the first. Before I get on with it I would like to thank the Houston Ruby and Rails Users Group for having me. They asked some pretty insightful questions and were generally a great audience. I'd also like to thank ThoughtWorks for paying for my flight, rental car, and lost billable hours so I could do this presentation.

RSpec without Rails

So in order to show you how RSpec works, I'm going to start with specifying a class that deals with string case like so:

require File.dirname(__FILE__) + '/../spec_helper'
require 'string_caser'

describe StringCaser do
  it "should upcase a string" do
    caser = StringCaser.new("A String")
    caser.upcase.should == "A STRING"
  end
end

After requiring a spec_helper and the file I intend to spec, I set up a 'describe' block. Inside the block there is a single spec (or example), which is also a block. (If you've looked at RSpec before you might wonder what happened to context and specify -- they've been replaced by 'describe' and 'it'; for more explanation see Describe it with RSpec.) In the example I declare the name of the spec to be "should upcase a string" -- the name doesn't execute and could be anything you like, though by convention they all start with "should." Then I create a new StringCaser and give it a string. On the next line I call caser.upcase, which should equal "A STRING". Instead of having to remember which is expected and which is actual, the RSpec way is to simply do what you want and then call 'should' on the output. Much more readable. If you're wondering, should and should_not can be called on any Ruby object.

So if I run the above spec without creating the class file, I'll get an error on the require. So let's create a file called string_caser.rb:

class StringCaser
  def initialize(content)
    @content = content
  end

  def upcase
    @content
  end
end

And if I run the spec like so:

spec string_caser_spec

I get:

'StringCaser should upcase a string' FAILED
expected "A STRING", got "A String" (using ==)

Finished in 0.029125 seconds

1 example, 1 failure

Because I forgot to upcase the return value. Oh well, easily fixed:

def upcase
  @content.upcase
end

Now run rspec so that it formats the output like so:

spec string_caser_spec -fs

And I get this output:

- should upcase a string

Finished in 0.036034 seconds

1 example, 0 failures

Let's add another spec right below the previous one:

it "should know if a string is all lowercase" do
  caser = StringCaser.new("a lowercase string")
  caser.should be_lower_case
end

Now I'm saying that a StringCaser instance should know if its content is all lowercase. But this time I'm using RSpec's built-in support for interrogatives. When RSpec sees 'be_' in a matcher (the thing that comes after should), it looks for a method named after whatever follows 'be_', with a '?' at the end. In this case 'be_lower_case' makes RSpec look for a method called 'lower_case?' and calls it. There's a lot of nice syntactic sugar like this in RSpec -- way more than I could ever hope to cover. Now 'lower_case?' doesn't exist yet, so I get an error, and then I write this code:

def lower_case?
  @content.downcase == @content
end

And I'm back to passing. Btw, you can call rspec with -c to print your output in color, if you like.

Now for something a little more interesting. I want StringCaser to throw a RuntimeError if it receives an object that does not respond to 'upcase.' Here's the spec:

[1, 3.4, Object.new].each do |bad_thing|
  it "should raise a Runtime Error when passed a #{bad_thing.class} because it doesn't respond to 'upcase'" do
    lambda { StringCaser.new(bad_thing) }.should raise_error(RuntimeError)
  end
end

Cool, huh? Sure, I could do this in Test::Unit, but it would have to be one test with a generic name. In RSpec I can name on the fly (yes, I could use some Ruby magic to define methods at runtime in Test::Unit, but think of the readability, or lack thereof).
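For comparison, that "Ruby magic" version in Test::Unit would look something like this sketch (the test class, method names, and the repeated StringCaser definition are mine, just to keep the example self-contained):

```ruby
require 'test/unit'

# Assumed from earlier in the post: StringCaser rejects anything that
# can't upcase.
class StringCaser
  def initialize(content)
    raise RuntimeError unless content.respond_to?(:upcase)
    @content = content
  end
end

class StringCaserErrorTest < Test::Unit::TestCase
  [1, 3.4, Object.new].each do |bad_thing|
    # define_method builds one test per bad input at class-definition time,
    # but the generated names are far less readable than RSpec's
    # on-the-fly descriptions
    define_method("test_raises_when_passed_a_#{bad_thing.class.name.downcase}") do
      assert_raise(RuntimeError) { StringCaser.new(bad_thing) }
    end
  end
end
```

It works, but compare "test_raises_when_passed_a_float" to a sentence that names the class and the reason -- that's the readability gap I mean.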

So now I add some code to StringCaser:

def initialize(content)
  raise RuntimeError unless content.respond_to?(:upcase)
  @content = content
end

and when I run my specs with spec string_caser_spec -fs, I get:

- should upcase a string
- should know if a string is all lowercase
- should raise a Runtime Error when passed a Fixnum because it doesn't respond to 'upcase'
- should raise a Runtime Error when passed a Float because it doesn't respond to 'upcase'
- should raise a Runtime Error when passed a Object because it doesn't respond to 'upcase'

Finished in 0.035137 seconds

5 examples, 0 failures

Which is nice. There's also a way to get this output as an html file.

Now I find that as I'm writing unit tests I often end up with a bunch of different setups, or one big complicated setup, only a fifth of which is important to any one test. Either case kinda sucks, but RSpec gives me a way to segregate my setups like so:

describe "StringCaser with all lower case contents" do
  before(:each) do
    @caser = StringCaser.new("lowercase")
  end

  it "should return true when lower_case? is called on it" do
    @caser.should be_lower_case
  end

  # More specs for this case
end

describe "StringCaser with some upper case contents" do
  before(:each) do
    @caser = StringCaser.new("MiXcAsE")
  end

  it "should return false when lower_case? is called on it" do
    @caser.should_not be_lower_case
  end

  # More specs for this case
end

Now I have a place to write specs for each context. This tends to drive out more specs and produces a more complete suite. Warning: this does not mean I'm endorsing huge setup methods. A spec should be readable in and of itself. And yet, I think a short setup that does exactly what the 'describe' says is a useful thing.

Next time I'll talk about testing/specing/examplifying Rails Models.

Aug 6, 2007

RSpec Resources

Later this evening I'm giving a talk on RSpec at the Houston Ruby and Rails Group.

In case somebody misses a link, I'm posting them here:

RSpec's homepage: rspec.rubyforge.org

RSpec Users Mailing list: rubyforge.org/mailman/listinfo/rspec-users

David Chelimsky's Blog: blog.davidchelimsky.net

If you install RSpec as a plugin to your Rails project, then look inside: for some nice spec examples.

Once you've got RSpec and RSpec on Rails installed in your project, running:

ruby script/generate rspec_scaffold person name:string phone_number:integer cash:decimal
from the base of your project will get you a bunch of generated specs to look at and play with.