Jake Scruggs

writes ruby/wears crazy shirts

Sep 13, 2007

Metrics for Rails

Everyone thinks they write good code -- it's just part of human nature. You can't do something every day and not secretly suspect that you're good at it. Self-delusion is a powerful thing, so you need to use metrics to take a hard look at your code.

On my current project, we've just added a daily metrics build (run every day at midnight by CruiseControl.rb) that takes a look at our code in three ways:

  • Code coverage with Rcov
  • Cyclomatic complexity with Saikuro
  • And um..., Flogging with Flog
Rcov is a code coverage tool; paired with the rails_rcov plugin, it adds a bunch of rake tasks to your build so you can figure out which lines of code are run by your tests... and which are not.
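For a sense of what line coverage actually measures, here's a minimal sketch using Ruby's built-in Coverage API (stdlib, Ruby 1.9+) rather than Rcov itself -- the idea is the same, and the file and method names below are made up for the demo:

```ruby
require 'coverage'
require 'tmpdir'

# Write a tiny stand-in for application code (hypothetical example).
path = File.expand_path('pricing_demo.rb', Dir.tmpdir)
File.write(path, <<~RUBY)
  def price(qty)
    if qty > 100
      qty * 0.9
    else
      qty * 1.0
    end
  end
RUBY

Coverage.start        # begin recording which lines execute
require path
price(5)              # exercises only the else branch

cov = Coverage.result[path]
# cov holds per-line hit counts (nil for lines with no executable code):
# the discount branch on line 3 never ran -- exactly the kind of
# untested line a coverage report would highlight.
```

Run the whole test suite instead of a single call and you get the same "which lines never ran" picture Rcov reports.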

Saikuro computes cyclomatic complexity which "measures the number of linearly independent paths through a program's source code." Methods with more paths are harder to understand/debug/modify.
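As an illustration (my own toy method, not Saikuro output): complexity starts at 1 and goes up by one for every branch point -- each if/elsif, each when, each loop, each && or ||.

```ruby
def shipping_cost(weight, express)
  return 0 if weight <= 0   # +1 (guard clause)
  cost = weight * 2
  cost += 10 if express     # +1
  if weight > 50            # +1
    cost *= 1.5
  end
  cost
end
# Cyclomatic complexity = 1 + 3 = 4: four linearly independent
# paths, so at least four test cases to exercise them all.
```

That "number of paths equals number of tests you need" intuition is why high-complexity methods are the first candidates for refactoring.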

And Flog is cyclomatic complexity with an attitude. It scores ruby methods with an "ABC metric: Assignments, Branches, Calls, with particular attention placed on calls."
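Here's a hand-annotated sketch of what the ABC elements look like on a toy method -- the real Flog parses the code and applies its own weights, so treat this as the idea, not actual Flog output:

```ruby
Item = Struct.new(:price, :qty)   # made-up example data

def total(line_items)
  sum = 0                          # an Assignment
  line_items.each do |item|        # a Call ('each')
    sum += item.price * item.qty   # an assignment plus Calls ('price', 'qty', '*')
  end
  sum                              # no branches, so this method scores low
end

total([Item.new(3, 2), Item.new(5, 1)])
```

A method stuffed with conditionals, eval, and long call chains racks up a much higher score than this one.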

Why do we use both Saikuro and Flog? Well, Flog catches Ruby-specific complexities that cyclomatic complexity doesn't (for instance, calls to eval are given particular weight), and it picks up methods that Saikuro misses (we use metaprogramming to define a fair number of methods, and Saikuro seems to miss anything not defined with a 'def'). But Flog outputs a flog score, which isn't very familiar to most developers, while cyclomatic complexity is a relatively well-understood computer science term.
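The metaprogramming case looks like this: methods built with define_method have no literal 'def' for a parser-based tool to find (the Converter class and its rates are a made-up example):

```ruby
class Converter
  RATES = { miles_to_km: 1.609, lbs_to_kg: 0.4536 }

  # Generates one method per rate -- no 'def' keyword appears in the
  # source for miles_to_km or lbs_to_kg, which (per the post) is why a
  # source-scanning tool can overlook them entirely.
  RATES.each do |name, factor|
    define_method(name) { |value| value * factor }
  end
end

Converter.new.miles_to_km(10)
```

A tool that walks the parse tree looking for 'def' nodes reports zero methods here, even though two perfectly real ones exist at runtime.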

Also, Flog has the scariest-looking software website ever. I can't believe those guys have the chutzpah to put such pictures on a site that advertises them as consultants for hire.

So how did our code look? Pretty good -- 96% code coverage overall, but some methods need testing love. And some of our code had pretty high cyclomatic complexity or Flog numbers, so we'll need to write some developer tasks to fix the problems uncovered. (Every iteration we work on a few developer tasks in addition to the development, bug, and production tasks.)

But the metrics build isn't done; in the future I'd like to:
  • Find duplicate code with CPD.
  • Use Heckle to see if our tests are any good. (Heckle changes your code (not permanently) and then re-runs your tests -- if they don't fail then you've got bad tests)
  • Fail the build on bad metrics numbers.
  • Figure out how to integrate the results Panopticode-style with all the cool visualizations it offers.
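The Heckle idea can be sketched by hand: mutate the code, re-run the tests, and expect a failure. The adult? method and the "mutant" below are made-up stand-ins for the mutations Heckle generates automatically:

```ruby
def adult?(age)
  age > 18
end

# The kind of mutant a mutation tester might generate: > flipped to >=.
def mutated_adult?(age)
  age >= 18
end

# A weak test: never probes the boundary, so it passes for both the
# original and the mutant -- a surviving mutation means bad tests.
weak = ->(f) { f.call(30) == true && f.call(10) == false }

# A stronger test: checks age 18 itself, so the mutant fails it.
strong = ->(f) { weak.call(f) && f.call(18) == false }

weak.call(method(:adult?))            # passes
weak.call(method(:mutated_adult?))    # also passes (mutation survived)
strong.call(method(:adult?))          # passes
strong.call(method(:mutated_adult?))  # fails (mutation caught)
```

If every mutation Heckle makes gets caught by a failing test, the suite is actually pinning down behavior rather than just executing lines.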