Archive for the 'Writing Reliable, Bug-Free Code' Category

Cucumber and WATIR: kick-starting your testing

Friday, January 16th, 2009

I think Cucumber is fast becoming indispensable for my testing. The point of it is that you can write documentation that your client understands and then prove that the application does what it says. When coupled with WATIR you can show that it really works in an actual browser – you can even show it in Internet Explorer, Firefox and Safari.

There are a number of issues with using WATIR, the main one being speed: because WATIR scripts the browser itself, an extensive test suite will take a long time to run.

But my first hurdle was actually getting the code running. Unlike using Cucumber with Webrat, where your Cucumber steps have access to your Rails code (so you can play around with the response object), with WATIR you run the application in a separate process and then poke the browser in the same way a real user would. Of course, unlike Webrat, you get to test your JavaScript too.

However, I didn’t want to have to remember to start my application before running the tests (and probably forgetting to put it into the test environment or running on the wrong port) – so I came up with the following for your features/support/env.rb.


system 'rake db:test:clone'
system 'rm log/test.log'
system 'ruby script/server -p 3001 -e test -d'

at_exit do 
  system "kill `ps aux | grep mongrel_rails | grep -e '-p 3001' | grep -v grep | awk '{ print $2 }'`"
end

Before do
  @browser = Watir::Safari.new
  @browser.set_fast_speed
  @base_url = 'http://localhost:3001'
end

This clones your development database, gets rid of the test log (we’ll need to look through that to track down any errors, so we want to keep it small) and then starts your app in test mode on port 3001 (so it doesn’t clash with the version you’ve already got open in development). The Before block sets up WATIR and configures a base URL that my steps use elsewhere. And the at_exit hook looks for the server we started and kills it.

The next step is to add a platform instruction – so you can choose to test with Safari, Firefox (if I can ever get the FireWatir plugin installed) or Internet Explorer (where I guess the kill statement will need a bit of alteration).
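A sketch of what that platform instruction might look like: pick the driver from an environment variable. The `BROWSER` variable and the mapping below are my assumptions, not part of the post; the method returns the class name as a string so the example works without the Watir gems installed.

```ruby
# Sketch: choose which Watir driver to use based on ENV['BROWSER'].
def browser_class_name(platform = ENV['BROWSER'])
  case (platform || 'safari').downcase
  when 'safari'  then 'Watir::Safari'
  when 'firefox' then 'FireWatir::Firefox'
  when 'ie'      then 'Watir::IE'
  else raise ArgumentError, "Unknown browser: #{platform}"
  end
end
```

In a real features/support/env.rb you would require the matching gem and instantiate the chosen class inside the Before block instead of hard-coding Watir::Safari.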

An alternative is to tell mongrel to start and write a pidfile to a known place, and then replace the ps/grep line with:


  pid = File.read(File.expand_path(RAILS_ROOT + "/log/mongrel.pid")).chomp
  `kill #{pid}`

Thanks to Caius for that version.

Writing tests for your controllers improves the design of your models

Saturday, December 20th, 2008

I’ve recently been updating some old code – partly written by someone else, partly written by myself. At the time, I thought I had written this code really well; looking back on it now, it looks awful. Fair enough – I’ve learnt a lot, and I want to look back on old code and shudder. But this app also has very poor test coverage, and the few tests it does have are quite unwieldy due to an over-reliance on fixtures.

So I’ve been reworking them all using RSpec, my fork of RSpec-Rails and my Object Factory (which means I can avoid fixtures).

Most of the work involves writing a spec that mimics the current behaviour (by inspecting the code and trying to match all paths through it), then refactoring the code, using the spec to prove that I haven’t broken it.

But some parts have some really horrible code (and lots of it) in the controllers. As you probably know, Skinny Controllers is the Rails way – your application logic belongs in your models (as they are your application) – the controller should just find or create the relevant model, ask it to do something and then render the results.

Because of this, I opted to just rewrite the actions in question.

To do this I started by writing a Cucumber feature describing things from the user’s point of view. Actually writing the steps that match the feature was a lot of work; because Cucumber is a full stack test you have to deal with all the dependencies that your individual action has (for example, are you logged in with the correct permissions with all associated objects created and in the right state?).

Then I wrote a controller spec. Controller specs in RSpec should use mock objects; you don’t really want to test the models, you just want to prove that the controller finds or creates the right model, asks it to do something and renders the correct output at the end.

So a typical spec looks something like this (note that this is not RESTful as it was an existing part of the application that I am about to change):


  it "should process an item" do
    @item = mock_model Item, :work_type => :buy_stuff, :quantity => 2

    on_getting :process_item, :id => '1' do
      Item.should_receive(:find).with('1').and_return(@item)
      @stuff = mock_model Stuff
      @item.should_receive(:process).and_return(@stuff)
    end

    assigns[:stuff].should == @stuff
    response.should be_success
    response.should render_template('admin/orders/process_item')
  end

This basically says:

  • We have an item, of type “buy stuff” with a quantity
  • When the process_item action is called, we expect that the controller will try to find the item with the given id
  • Then we should call process on the item and it should give us some stuff
  • The stuff should be stored in an instance variable, called stuff
  • And a page should be successfully rendered using the process_item template

That’s a pretty succinct explanation of what the controller should do – I can’t think of many ways of making that skinnier. It also bears no resemblance to the actual implementation of the action – which currently looks something like this:


def process_work_item
  @item = Item.find(params[:id])
  case @item.product.class.to_s.underscore.to_sym
  when :buy_stuff
    @stuff = Stuff.build_item(@item)
    @stuff.setup_new
    render :action => :process_stuff
  when :update_stuff
    @item.stuff.prepare_for_update
    render :action => :update_stuff
  else
    render :status => 404, :text => "Item product type #{@item.product.class.to_s} unknown."
  end
rescue ActiveRecord::RecordNotFound
  render_not_found
end

Pretty complicated – and as new types of item are added we need to add more and more clauses to that case statement.

First things first – I’ve said that we should call “process” on the item class. So I add a pending spec to the Item specification – this is to remind me that I’ve got some work to implement later on.


  it "should process itself based upon its work type"

Then I rework the controller so that the controller spec passes.


    @item = Item.find(params[:id])
    @stuff = @item.process
    render :template => "admin/orders/process_item"

Pretty simple huh?

What we have done is shifted the logic from the controller into this new “process” method on the Item. We have made it so that the controller knows virtually nothing about the item – all it knows is how to find it and that it has a process method. All the implementation details are now hidden within the Item, out of the way of the outside world.
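As a sketch of where that logic might end up (the private method names below are illustrative, not the app’s real code), the Item can dispatch on its own work type, so adding a new type of item means adding a method to the model rather than another branch in the controller:

```ruby
# Illustrative only: Item#process dispatches to a private method
# named after the work type.
class Item
  attr_reader :work_type

  def initialize(work_type)
    @work_type = work_type
  end

  def process
    send("process_#{work_type}")
  end

  private

  def process_buy_stuff
    "built stuff"    # stand-in for Stuff.build_item(self) etc.
  end

  def process_update_stuff
    "prepared stuff for update"
  end
end
```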

Through the use of mocking we can ignore actual implementations and concentrate on presenting ourselves as simply, and minimally, as possible to the outside world. This reduces coupling, increases flexibility and makes our code easier to read. Don’t you agree?

Coping with the VAT Change

Tuesday, November 25th, 2008

There seems to be a lot of wailing and gnashing of teeth about the upcoming VAT change – especially as it is only a 13-month change and the rate will revert to 17.5% in 2010.

However, it ought to be really simple (although I realise that this may be a bit late for some computer systems).

Never hard-code the VAT rate

Have a Tax class that the rest of the system can ask when it needs the tax rate. That way, you’ve only got one place to make the change.
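A minimal sketch of such a class (the class name, constant and method are my assumptions for illustration):

```ruby
# One place that knows the current rates; change a figure here and
# the whole system picks it up.
class Tax
  RATES = { :standard => 0.15, :low => 0.05, :zero => 0.0 }

  def self.rate(band = :standard)
    RATES.fetch(band)
  end
end
```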

Store the rate against your invoice lines

Add a field to your invoice lines that stores the rate applicable for that line. Populate this when the line is created. That way, an invoice with a tax point of 30th November 2008 will have lines with 17.5% stored against them, while those created on 1st December 2008 will have 15%. This keeps your most vital financial documents accurate over time.

Calculate the VAT as late as possible

When calculating the VAT on an invoice do not do this:



value_excluding_vat = 0
vat = 0
value_including_vat = 0

for each line in my invoice
  value_excluding_vat += line.value
  vat += line.value * line.tax_rate
  value_including_vat += line.value + (line.value * line.tax_rate)

The problem here is that you are calculating the VAT on a per line basis and then summing the values. This will lead to rounding errors – where your line values are a penny out from your totals.

Instead you need to place each line into a “bucket” for its tax rate, sum all values for a given rate and then calculate the tax on that. That way you are working on the largest numbers possible, reducing the risk of small decimal place discrepancies throwing your totals out.



value_excluding_vat = 0

for each line in my invoice
  bucket = the_bucket_for line.tax_rate
  bucket.value += line.value
  value_excluding_vat += line.value

vat = 0

for each bucket in my collection of buckets
  vat += bucket.value * bucket.rate

value_including_vat = value_excluding_vat + vat
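The bucket approach might translate into Ruby along these lines. This is a sketch: the `Line` struct and method name are mine, and I’ve used BigDecimal to avoid floating-point surprises with money.

```ruby
require 'bigdecimal'

# Group lines into buckets by tax rate, sum each bucket, then
# calculate and round the VAT once per bucket.
Line = Struct.new(:value, :tax_rate)

def invoice_totals(lines)
  net = lines.sum(BigDecimal("0"), &:value)
  vat = lines.group_by(&:tax_rate).sum(BigDecimal("0")) do |rate, bucket|
    (bucket.sum(BigDecimal("0"), &:value) * rate).round(2)
  end
  [net, vat, net + vat]
end
```

Two lines of 10.00 and 5.00 at 17.5% give VAT of 2.63 on the 15.00 bucket, rather than 1.75 + 0.88 = 2.63 by luck or 1.75 + 0.87 = 2.62 by rounding each line.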

Have a Tax Band and Tax Rate model that deals with changes over time

If you really want to have a “minimal administration” system for dealing with tax rate changes then you should switch your data model to have multiple Tax Band objects (as remember, there are multiple VAT bands – “standard” is 17.5/15%, “low” is 5%, “zero-rated” is 0% and “exempt” is also 0% but needs to be accounted for separately to “zero-rated”).

A Data Model for dealing with variable tax bands and variable tax rates

Each Tax Band object should have a set of Tax Rate objects hanging off it – where each Tax Rate has a percentage rate and a start and end date. That way you can ask your “Standard Rate” tax object “what is the percentage on the 30th November 2008?” and it can give you the correct answer.

That makes dealing with a change like this simple – you go to the “Standard Rate” object, find its current rate object and set its end date to 30th November 2008. Then you add a new 15% rate object to the “Standard Rate”, setting its start date to 1st December 2008 and its end date to 31st December 2009. Lastly, add another 17.5% rate object, with a start date of 1st January 2010 and no end date.
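A sketch of that data model in plain Ruby – in the real application these would be ActiveRecord models, and the names (`percentage_on`, `starts_on`, `ends_on`) are my assumptions:

```ruby
require 'date'

TaxRate = Struct.new(:percentage, :starts_on, :ends_on)

class TaxBand
  def initialize(name, rates)
    @name, @rates = name, rates
  end

  # Find the rate whose date range covers the given date.
  # An open-ended rate has ends_on set to nil.
  def percentage_on(date)
    rate = @rates.find do |r|
      date >= r.starts_on && (r.ends_on.nil? || date <= r.ends_on)
    end
    rate && rate.percentage
  end
end

standard = TaxBand.new("Standard Rate", [
  TaxRate.new(17.5, Date.new(1991, 4, 1), Date.new(2008, 11, 30)),
  TaxRate.new(15.0, Date.new(2008, 12, 1), Date.new(2009, 12, 31)),
  TaxRate.new(17.5, Date.new(2010, 1, 1), nil)
])
```

Asking `standard.percentage_on(Date.new(2008, 11, 30))` gives 17.5, while the next day gives 15.0.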

Update the rest of your systems to always ask the relevant tax object for the correct rate and tax rate changes will never be a headache again.

Acceptance Testing in Ruby, Rails, RSpec and Cucumber

Friday, November 21st, 2008

I’ve written up a new post at the Brightbox blog detailing how we are using RSpec and Cucumber to build acceptance tests for the next generation Brightbox systems.

Showing the Git branch in your bash prompt

Wednesday, October 29th, 2008

Safe and Secure - Image by frko: http://www.sxc.hu/photo/962334


My first adventure in source control was many years ago. It was my first proper job and I was the sole developer in a tiny company. To keep the source code safe, it was all stored on a network share, and the file server was backed up at least once a day.

The problems started when two other developers joined the team. Within a week we repeatedly hit the issue of two people editing the same file at the same time and one of them losing their changes. So we devised our own source control system. Every file had a piece of cardboard, about 2cm by 10cm, with its name written on it. These were stuck to a wall with Blu-Tack. Anyone who wanted to edit a file had to stand up, go to the board, take the piece of cardboard for that particular file and stick it to their monitor. That gave us an at-a-glance view of who was working on what and made sure nobody’s changes were overwritten.

Since then I’ve used Visual SourceSafe (don’t laugh). To my mind, it’s not completely awful, but I could never get my head around branching and merging. I then moved to Subversion, which is both incredibly simple to use and free. There I understood branching and merging, but it was just a bit too long-winded to use.

However now I, like all the Rails-kids, use git. Which is both great and awful. It’s taken me weeks to even begin to get my head around it. I still lose code every now and then. But it’s so easy to branch and merge; for the first time, branching is an integral part of the way I work. There is danger in branching though. With svn it’s easy to see which branch you are in – it’s the folder name. In git, it’s not so easy.

So, my extremely smart colleague, David Smalley, showed me this amendment to your .bashrc (or in my case .profile):

function parse_git_branch {
  git branch --no-color 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/(\1)/'
}
 
export PS1="\u@\h:\w\$(parse_git_branch)\$ "

The function asks git which branch we are currently in. We then set PS1, the variable for the command prompt, asking it to show the username, host, path and branch.

So, if you are not in a git-managed folder you see:

rahoulb@monster:/Volumes/src$

And if you are in a git-managed folder you see:

rahoulb@monster:/Volumes/src/bb-billing(master)$

So now you have no excuse!

Photo by frko

Telling Stories with RSpec

Thursday, October 16th, 2008

Last night I gave a talk at Geekup about RSpec and RSpec User Stories.  

 

Telling Stories With RSpec

 

Thanks to Ashley Moran for talking it through with me.

UPDATED to use Slideshare to display the slides.

The Specification is the Documentation Part Two

Tuesday, August 5th, 2008

Two (related) thoughts on “The Specification is the Documentation”.

One of the things that I like to do, when developing, is to start with a sketch (you know, with 95g/m2 paper and a 6B pencil) of how the UI will look. There are two reasons for this. Firstly, it helps communication with the client – they can see something concrete, while it’s also blatantly apparent that there’s a long way to go yet. Secondly, it puts me in the position of starting from the ideal position and working “downwards” to what is possible, rather than starting from what is easy and working “upwards” to what is nice. In other words it forces me to raise my standards.

Writing my specifications with the helper methods as described is the coding equivalent. I start with a high-level, abstract, description of the problem I am trying to solve (“given a logged in user and three gizmos, expect the gizmos to be tagged when I go to the ‘tag-gizmos’ page”). I then work on setting up the environment (implementing the helpers) and the getting the specification to pass (implementing the code). Again, I start with the ideal position and work downwards as I implement.

The second thing is that it is one of those techniques where I am sure that I am on the right lines. How can I tell? Because as soon as I wrote my first specification in this way it “felt right”. After I had written a couple more I wanted to go back to every piece of code I had ever written and rewrite it all in this new way.

The Specification is the Documentation

Friday, August 1st, 2008

In a former life I used to write “functional specifications”. These were long, dense, hard-to-read documents that detailed what an application (not yet written) was supposed to do. I would spend (literally) weeks typing these things up; the customer would read it, think they understood it, and I would quote them based upon the document. And then the project would go over budget as all the tiny subtle details became apparent about two weeks before deadline day. But, even worse, the document would slowly become out of date, as changes were made to the application in response to feedback while the spec stayed unchanged.

As you might expect, this led to a general disillusionment with functional specifications. What was the point if they didn’t help with the budget and didn’t reflect the actual application?

But when I read about Test-Driven Development, the key thing that struck me was that the tests became a living embodiment of what the application is supposed to do. They are a specification; but not only that, they are a specification that has to remain up to date.

Only one problem: they are written in code, making them meaningless to the client. In fact, often, some tests were so obscure they were only meaningful to me as I was writing them. Come back six months later and try to figure out why that change has made a test fail? No idea.

Which is why RSpec and its Stories are so exciting. A Story is a text document that describes the required behaviour of the application. Read that again: a text document, written in English. So your clients can read them – and can help to write them. You then supply some Ruby code that matches the sentences in your stories and associates them with code. That code is run, testing your application and proving that it does what it is supposed to (providing you’ve written your test code correctly, of course).

Story: measure progress towards registration goals
  As a conference organizer
  I want to see a report of registrations
  So that I can measure progress towards registration goals
  Scenario: one registration shows as 1%
    Given a goal of 200 registrations
    When 1 attendee registers
    Then the goal should be 1% achieved 

  Scenario: one registration less than the goal shows as 99%
    Given a goal of 200 registrations
    When 199 attendees register
    Then the goal should be 99% achieved

Now, I have to admit, I’ve not used RSpec Stories in anger yet. But it has had a strong effect on my “unit” tests.

The difference between stories and unit tests is that stories test your full stack: go to ‘/’, fill out the text fields, click the submit button; it should insert into the database successfully and then show these three records on the ‘/whatever’ page. Unit tests work piecemeal: one will check that the form is shown when you go to ‘/’, a separate test will check that your record can be inserted into the database, and another will prove that ‘/whatever’ asks the database for three records.

But, having read about Stories, the way I write my unit tests has changed. Check this controller specification out:

describe ReportsController, "at the admin site" do
  it "should show this month by default" do
    given_the_admin_site
    given_a_system_user

    when_getting :index do

      expect_to_find_orders_for Date.today

    end

    assigns[:date].should be_today
    assigns[:orders].should == @orders
  end
end

Not quite plain English. But, when it comes to maintenance, using given, expect and when as prefixes for your helpers makes a world of difference.

 

UPDATE: now using block syntax around the “when_getting” statement

Setting up a mock object to test a :dependent => :destroy association in RSpec and Rails

Thursday, July 10th, 2008

One of the great advantages of using mock objects to test and specify your objects is that you concentrate solely on the thing you are testing.  

If you weren’t using mocks to test that a controller re-shows the “new” form when given an invalid object, you would do post :create, :model => { ... } where … is a set of fields that are invalid. This means that, when writing your spec, you have to remember what it is that makes that model invalid. It also means that, every time you change that model and its rules for validity, you potentially have to amend the controller test as well. In other words, you are not actually testing the controller in isolation – you are also testing the model.

Using mocks lets you write the following:


  @model = mock Model
  Model.should_receive(:new).and_return(@model)
  @model.should_receive(:save).and_return(false)
  post :create, :model => { :some => :fields }
  response.should render_template('/models/new')

In other words, it doesn’t matter what parameters you send to create. The Model class will return a mock instance of the model, and the mock instance will return false on save (you can actually get save! to raise an ActiveRecord::RecordInvalid – but I’ve had some difficulties with that). The real model is no longer part of the test and you are concentrating on the behaviour of the controller alone.

This concentration on what is important is a vital advantage of using mocks.

But how does this work on testing models?

I wanted to test that a default value was copied from one field to another under certain circumstances. So I set up a method called set_default_value and wrote some specs to ensure that it was working. Then I wrote the following:


  it "should set the default value before validation" do
    @model = Model.new
    @model.should_receive(:set_default_value)
    @model.valid?
  end

This failed, as it should do. Then I set a before_validation :set_default_value on the model and the spec passed. Each spec concentrates purely on what is important – a couple to show that set_default_value works under different circumstances, and one to show that it is called when it is supposed to be.

What about testing a :dependent => :destroy association?

Unfortunately, that can’t be done without saving at least one of the objects in the association (which means knowing enough about it to make it valid). But, as David Chelimsky (Mister RSpec) points out on the RSpec mailing list, you can do it using mock objects for part of the association. Which was a relief as my “child” object was complex with a whole set of interdependencies of its own.
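Stripped of RSpec and ActiveRecord, the shape of that trick looks something like this. All the names here are illustrative; in the real spec the children would be RSpec mocks carrying should_receive(:destroy) expectations, and the parent would be a saved ActiveRecord object with :dependent => :destroy doing the destroying.

```ruby
# A hand-rolled mock standing in for a complex child model: it records
# whether destroy was called, instead of needing to be valid and saved.
class MockChild
  attr_reader :destroyed

  def destroy
    @destroyed = true
  end
end

# Stand-in for the real parent; its destroy plays the role of
# ActiveRecord's :dependent => :destroy behaviour.
class Parent
  attr_accessor :children

  def destroy
    children.each(&:destroy)
  end
end
```

Only the parent needs to be real; the children just need to promise that destroy gets called on them.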

To DRY or not DRY?

Wednesday, July 9th, 2008

A very interesting article about how DRY you should be in your specs.  

http://lindsaar.net/2008/6/24/tip-24-being-clever-in-specs-is-for-dummies

Personally I agree with everything said. Readability comes first, even at the expense of efficiency and DRY; “be nice to those who have to maintain the code”. The really interesting thing, though, is that the example is actually quite DRY – it’s about how you organise and order the code, rather than about repeating yourself.