Archive for the 'Writing Reliable, Bug-Free Code' Category

Another reason to love Ruby…

Friday, December 2nd, 2011

Nyan cat tests your code

How we work

Sunday, February 27th, 2011

We have published a bit more information about how we work – our Beautiful Code doctrine, the process that we follow and how we convert your ideas into tangible features.


Thursday, February 24th, 2011

One of the greatest things about RSpec is that the specifications you write not only prove that your code does what it should, but also provide a starting point for your technical documentation.

Which means you are secure enough in your development to add functionality like this:

ApplicationHelper greeting
- should say 'morning' between 4am and 12pm
- should say 'afternoon' between 12pm and 5pm
- should say 'evening' between 5pm and 9pm
- should say 'night' between 9pm and 4am

One of those tiny details that most people won’t even notice – but makes a subliminal impression on your users (in this case, greeting them with a “Good morning” or “Good evening” depending upon the time).
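A helper along those lines might look like this (a sketch only – the post shows just the spec names, so the method body here is an assumption):

```ruby
# Hypothetical implementation of the greeting helper described by the
# specs above. The hour ranges mirror the four specifications.
module ApplicationHelper
  def greeting(time = Time.now)
    case time.hour
    when 4...12  then "morning"    # 4am up to (but not including) 12pm
    when 12...17 then "afternoon"  # 12pm up to 5pm
    when 17...21 then "evening"    # 5pm up to 9pm
    else              "night"      # 9pm round to 4am
    end
  end
end
```

In a view you would then write something like `Good #{greeting}` – four specs, one case statement, and the greeting always matches the clock.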

Always write tests

Thursday, September 2nd, 2010

One of the things that stands out about Ruby and Rails developers is that the vast majority obsess over “behaviour-driven development” (or its similar predecessor, “test-driven development”).

At its simplest, this gives you a suite of automated tests to run against your code – the idea being that you have proof that your system does what it is supposed to. Bugs are caught early and, most importantly, a new feature cannot break existing features without you knowing about it.

But there is a much bigger advantage to behaviour-driven development.

The very act of writing your tests before writing your code clarifies what you are trying to achieve and results in designs and code that are actually simpler – and hence easier to maintain.

I was reminded of this when trying to build a data importer. My colleague wrote a huge piece of code that seemed to work at first. But upon further investigation, it appeared to get some internal references wrong – and as the import took an hour to run, debugging became a pain.

So I started over. I wrote out a series of names for my specification: it should import a single product, it should import all variations of that product, it should import the images for a product, it should update an existing product.

These bullet points are a simple description of what it was supposed to achieve.

I then implemented the first bullet point in code – I took a copy of the import file, stripped out most of the data until I found a relevant piece, loaded that and then called the importer. My test simply checked that the results were what was expected. The first run failed – because I didn’t have any code in my importer. So I implemented the simple “import a product” case and the test passed.

Then I implemented the second bullet point. The first test passed, the second failed. I added the code to get the new test to pass – two passing tests and a very simple piece of code for the importer. I continued until all the tests were implemented – fantastic. The importer worked.

But the most important thing is that, because it was built incrementally, it was simple. Even better, if it did start getting messy I could rewrite and rearrange the code to simplify it – and still be sure it was working, as the tests proved it.
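Stripped of the Rails specifics, the loop looks like this (the Importer class and the CSV format are invented for illustration – the real import file isn’t shown in the post):

```ruby
require "csv"

# A toy importer grown one specification at a time. The class and the
# CSV format are invented for illustration.
class Importer
  attr_reader :products

  def initialize
    @products = {}
  end

  # Creates or updates a product per row; a repeated name is an update.
  def import(csv_text)
    CSV.parse(csv_text, headers: true).each do |row|
      @products[row["name"]] = { price: row["price"].to_f }
    end
  end
end

importer = Importer.new

# "It should import a single product" – the first bullet point.
importer.import("name,price\nWidget,9.99\n")
raise "import failed" unless importer.products["Widget"][:price] == 9.99

# "It should update an existing product" – a later bullet point.
importer.import("name,price\nWidget,12.50\n")
raise "update failed" unless importer.products["Widget"][:price] == 12.50
```

Each assertion was written before the code that makes it pass, so the importer never grew beyond what the next expectation required.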

Writing the tests does not take much time – in fact it saves time as you have to explain to yourself what you are doing before you begin. But even more importantly, it results in clean, simple code. Which saves even more time (and money) in the long run, as that becomes cheaper to maintain.

It happens to the best of us

Monday, October 5th, 2009

We just had some customers report a bug. Not good. We didn’t get an exception email. All the tests passed. We couldn’t see anything untoward in the log files. But it was there. We could reproduce it, both in staging and in production. Not good at all.

But the weirdest thing was we couldn’t figure out the cause. Well, I could see why the code was failing (after adding some extra log messages). But ‘git blame’ said those lines of code were unchanged in twelve months. Why hadn’t people complained before? Why hadn’t we noticed it?

After much hunting through log files we found the point when the feature last worked. It coincided with a deployment. That deployment was our Rails 2.3.4 forms vulnerability fix. And the bug was in a form – a missing form parameter that earlier versions of Rails ignored, the newer Rails was choking on.

But why didn’t the tests catch it?

After more hunting I saw that the Cucumber test that exercised the form didn’t have a “When I press the Update button” step. And the subsequent tests were passing, even though the update button hadn’t been pressed.

So I added the step in and made the feature pass. Then deployed it as an emergency fix.
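The broken scenario amounted to something like this (the wording is invented – the real feature isn’t quoted in the post); without the press step, the subsequent assertions ran against the unchanged page:

```gherkin
Scenario: Updating the record
  Given I am on the edit page
  When I fill in "Name" with "New name"
  # this was the missing step:
  And I press "Update"
  Then I should see "New name"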

However, what are the lessons to learn here (as there are always some)?

  • Firstly, testing cannot catch everything.
  • Secondly, the cracks in your tests are where the bugs are.
  • Thirdly, we probably need some sort of peer review for tests. I feel that this is more important than for code, because once the tests are right you can refactor the code without worry.
  • Fourthly, you really need to log everything. Absolutely everything. Don’t worry about your huge log files – that’s what `logrotate` is for. Get it written down so that one time when you have an obscure bug, you’ll be able to find it easily.

Using Cucumber to estimate a project

Friday, May 15th, 2009

Writing estimates up-front is a really tricky part of client work.

From the customer’s point of view it’s pretty essential. You need to know how much you are spending before the work begins so you don’t get stung.

From the developer’s point of view it’s pretty difficult to do because you don’t know how long things will take until you know what you’re building. And you don’t know the details of what you are building until they become apparent, which is normally while you are doing the work.

For this particular client I wanted to get this piece of work out of the way as quickly as possible and the requirements were deceptively complicated. As trying to deal with intangibles is a recipe for disaster, and time was short, I was pretty sure that whatever estimate I came up with would be totally inaccurate.

So to deal with the intangibles I thought I may as well get stuck in with the actual work. And what’s the first thing that I do? Write a Cucumber story.

I went through the requirements documents (which after weeks of to and fro were actually quite detailed and refined) and broke them down into four features, each with a number of scenarios. Each scenario was then broken down into individual steps – however, I did not write any step definition (Ruby) files. This is Cucumber as customer documentation, not as integration testing.

We agreed that the features met the requirements (the client was very impressed with the level of detail and the clarity that Cucumber offered) and I simply went through each feature, scenario and step and came up with a price for each step individually. It’s still guesswork, but it’s guesswork on a small scale, so you’re less likely to be way off. Total the lot and there’s your estimate.
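In toy form the arithmetic is trivial (the step names and prices here are invented for illustration):

```ruby
# Price each Cucumber step individually, then total the lot.
# The steps and prices are invented for illustration.
step_prices = {
  "Given a signed-in customer"                 => 40,
  "When they upload a CSV of products"         => 120,
  "Then the products appear in the admin list" => 60,
}

estimate = step_prices.values.inject(0) { |total, price| total + price }
puts "Estimate: #{estimate}"
```

The point is not the sum but the granularity: each number is a small guess about a small, concrete thing, so the errors tend to cancel out rather than compound.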

And even better, once the client has agreed the price and you’ve started development, you’ve got a development plan, a measure of progress and proof that it all works – all just a simple `rake features` away.

Image by lemon drop

Switching off transactions for a single spec when using RSpec

Friday, May 8th, 2009

I have just written a load of test code that needed to verify that a particular set of classes behaved correctly when a transaction was rolled back.

However, the rest of my suite relied on transactional fixtures (which is Rails’ badly named way of saying that a transaction is started before each test and rolled back at the end, leaving your test database in a pristine state before the next case is run).

In particular, my spec_helper.rb had the following:

Spec::Runner.configure do |config|
  config.use_transactional_fixtures = true
  # stuff
end

The code being tested looked something like this:

  begin
    Model.transaction do
      do_something      # may fail with an exception
      do_something_else # may fail with an exception
    end
  rescue Exception => ex
    do_some_recovery_stuff
  end

I had a spec for the successful path (checking that the outcomes of do_something and do_something_else were what I expected).

However when I tried the same for the failure paths, the outcomes matched the successful path. The time-tested debugging method of sticking some puts statements in various methods showed that do_some_recovery_stuff was being called as expected. But the outcomes were still wrong.

And the reason? Transactions. This was a Rails 2.2 project, running on MySQL (InnoDB). RSpec/Test::Unit starts a transaction before the specification clause runs (and rolls it back on completion), so when the Model.transaction statement is reached, the spec is actually starting a second transaction, nested within RSpec/Test::Unit’s. Which means that when the inner transaction is rolled back, the database doesn’t actually do anything – there’s still an outer transaction that may or may not be rolled back. (I think Rails 2.3 corrects this behaviour, so that if you roll back an inner transaction the outer transaction reflects the correct state, but I’m not 100% on that.)

So I had a choice – move the (production) app to Rails 2.3 to fix this one bug (which is very urgent) or figure out how to switch the outer transaction off for these particular steps. Google wasn’t very helpful – lots of stuff on how RSpec extends Test::Unit, lots of stuff on how Rails extends Test::Unit to add the fixtures (and transaction) support. But no concrete example on how to actually switch it off.

After much playing around (overriding methods like use_transaction?(method_name) and runs_in_transaction?) I eventually stumbled across the answer. And it’s pretty simple.

Set your default to be config.use_transactional_fixtures = true in spec_helper.rb. Then, for the specs that are not transactional, create a describe block and add the following:

describe MyClass, "when doing its stuff" do
  self.use_transactional_fixtures = false
  it "should do this"
  it "should do that"
  it "should do the other"
end

The only thing to be aware of is you may well need an after :each block to clean up after yourself.

Fixing bugs in untested code

Wednesday, May 6th, 2009

When you’ve got an application that has little or no test coverage it can be quite daunting making changes. What if you alter X and it breaks Y? Without running through the entire app by hand how will you know what you’ve broken?

Well you won’t.

Even worse, what if your client reports a bug in that application? That makes things worse still, doesn’t it?

No. It’s actually an opportunity. Because even if your boss thinks “testing is a great idea, we’ll start on the next project when we’ve got more time”, a bug fix is one of those things where the time to fix varies. So take advantage of that.

Start by writing a test that reproduces some functionality in the same area that you know works.

This won’t be easy. You need to get the database into the correct state and set up the session correctly. To save you some time, try using an object factory – with the correct configuration you can concentrate on creating just the models you need, without having to fill the entire database with test data.
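A hand-rolled sketch of the object-factory idea (in practice you would use a gem such as factory_girl or machinist; Product and its attributes are invented here):

```ruby
# Minimal hand-rolled object factory: every attribute gets a sensible
# default, and a test overrides only the attributes it cares about.
# Product and its attributes are stand-ins for your real models.
Product = Struct.new(:name, :price, :published)

PRODUCT_DEFAULTS = { name: "Widget", price: 9.99, published: true }.freeze

def build_product(overrides = {})
  attrs = PRODUCT_DEFAULTS.merge(overrides)
  Product.new(attrs[:name], attrs[:price], attrs[:published])
end

# The test then reads as a statement of intent: only the price matters.
cheap = build_product(price: 0.99)
```

The factory gems differ in API shape, but the principle is the same: defaults live in one place, and each test spells out only what it is actually testing.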

Take a look at the code you are testing as you are writing the tests. But make sure your test checks the outcomes of that code, not the implementation – when using mocks it’s pretty easy to end up effectively rewriting the real code as a series of should_receive(:something) calls. Which looks great until you come to refactor, at which point it becomes a nightmare.

Get your test to pass. Remember this is a feature that works so it shouldn’t be that hard to get it to pass. And you are building some important foundations as you are setting up a configuration for your object factory a model at a time.

Once you’ve got the first test working, prove that it’s doing what it’s supposed to be doing. Comment out bits of the implementation and watch the test fail. If it doesn’t fail then you’ve got a problem – your test is testing something other than that particular implementation.

Now we’ve got a test that really checks your existing code. I like to add another test that checks the error handling in that existing code as well (there is error handling in your existing code, isn’t there?). Follow the same process as before – test the outcomes (probably catching an exception if you’re at the model level, probably looking for a particular redirect and flash message at the controller level), and then prove that it works by commenting code out. Hopefully this should be pretty quick to write, as most of the hard work was done when setting up the original test.

And finally we can get to the meat of it. Write a test that reproduces your bug (that is, your test exercises your app in the way expected and your app should fail). Again, most of your setup work should already be done, so it shouldn’t take too long. Now run the test and watch it fail. If it doesn’t fail then your test isn’t right.

After all of this, we are finally in a position to fix the bug.

Your test should prove that it’s been fixed. And prove that the bug won’t reappear in a future version. But you’ve also taken an important first step to wrapping your application in tests, making your life easier in future.

Bug of the day: cookie not being saved? Check your host-name.

Monday, April 27th, 2009

Lesson learnt today.

Underscores are not valid in a host name – the standards (RFC 952 and RFC 1123) allow only letters, digits and hyphens. However, your browser (and DNS server and all the links in between) will accept underscores in the host name.

But, when you have Safari set to “only accept cookies for sites that I visit” the underscore in the cookie’s host will cause Safari to silently discard the cookie. Leaving you with a strange log-in bug that seems to defy explanation.

The solution: make your host name compliant with the standard. And until you can do that, switch Safari to “accept all cookies”.
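A quick way to check a host name before it bites you (a sketch along RFC 1123 lines; real validation has more edge cases, such as label and total length limits):

```ruby
# Rough hostname label check per RFC 952/1123: letters, digits and
# hyphens only, with no leading or trailing hyphen. Underscores fail.
VALID_LABEL = /\A[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\z/i

def valid_hostname?(host)
  labels = host.split(".")
  !labels.empty? && labels.all? { |label| label =~ VALID_LABEL }
end

valid_hostname?("my-app.example.com")  # => true
valid_hostname?("my_app.example.com")  # => false
```

Running this against your staging and production host names takes seconds and would have caught this particular bug before Safari did.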

Quick Tip: make it easier to debug your full-stack acceptance tests

Tuesday, March 24th, 2009

Spanner in the works


One of the issues when using Selenium or Watir to power your full-stack acceptance testing (apart from the time it takes for the test suite to run) is that stuff happens within your browser, fails, and then Cucumber happily moves on to the next test before you get a chance to look at what went wrong.

If you are just using plain old Webrat you can pepper your code with puts statements so you can check the value of variables, the existence of HTML elements and the flow of code as it happens. But with Selenium or Watir, you need to run your app separately from Cucumber, normally in a hidden background process, so the output of your puts statements is lost in the ether (or an empty pipe).

After a particularly annoying and hard-to-trace bug, related to an interaction between form content and JavaScript, I came up with an extremely simple debugging tool.

Just add the following into one of your steps files:

When /^I pause$/ do
  STDIN.gets
end

Then, find the feature that is causing you grief and insert a “when I pause” step at the appropriate time.

When I do this
And I do that
And I pause
And I press "Save"
Then I see my newly created object

Cucumber will power your app, poking it until it gets to the “when I pause” step. It will then pause, waiting on STDIN for you to hit return – giving you time to open your inspector window and poke around in the form as the tests see it.

In this particular case, my steps file had an incorrectly named element within it – all it took was an inspection of the element in question and I saw the error. Hours of frustration wiped out by one of the simplest commands there is.

Spanners by woodsy