Archive for the 'Designing Great Software' Category

Balsamiq Mockups

Friday, April 24th, 2009

Recently I’ve been playing around with Balsamiq Mockups. This is an application that lets you chuck screen designs (for web pages, desktop applications or iPhone applications) together very simply and quickly.

Balsamiq Mockups


The UI is pretty straightforward – create a screen, select some controls (via drag/drop or by typing in a name) and place them where you want them. Each control can then be roughly edited (for example, the table control lets you put comma-separated values in) so that you can make it look something like the expected data.

I have to say that it’s a pretty good application. It’s the first thing I’ve found that comes anywhere near the convenience of a whiteboard (or even better a piece of A3 and a 6B pencil). Even more amazing is that it’s also the first Adobe AIR application that doesn’t want to make me gouge my own eyes out in frustration at its non-standard user interface.

So, overall I have to recommend it (and if you ask nicely, they may even give you a free copy).

Is it better than a piece of paper? No.

Is it better than everything else I’ve tried on a computer? Yes.

Complexity is everywhere

Wednesday, February 11th, 2009

I’ve been a fan of Basecamp for years.  Ever since I heard about it (all the way back in 2005) I’ve encouraged its use whenever possible.  It has pretty much become the de facto standard for web developers across the world.  Part of its appeal is its unstructured nature – it’s basically a series of messages with task lists and dates.  Use what you want, how you want.  Each task item has three variables – its title, the task list it belongs to and its position in that list (and they recently added a comments stream so you can discuss the item).  

However, sometimes its laissez-faire approach is too unstructured.  So you may move to a bug-tracker.  The big problem there is that, no matter how simple you think things are, no matter how many fields you make optional, there is always an issue of complexity.  What takes precedence?  The issue priority or the version it has been assigned to?  Or does an overdue date take precedence over both of those?  

You see, adding anything increases the complexity exponentially – going from three variables in Basecamp to eight or nine in a bug-tracker (and that’s a simple bug-tracker, not like the beast that is Bugzilla) means you have to define what each of those fields means to you and how they interact.  

Even the most apparently simple piece of software has this inherent complexity – look at the massive variation in Twitter clients, despite Twitter being nothing more than a single 140-character text field.  All of which shows why software development can sometimes be very, very hard to get right.

The five day product launch

Friday, February 6th, 2009

The launch of is exciting for a couple of reasons.  The obvious reasons are that this is something that we, as Ruby developers, needed.  It gets the Brightbox name out there.   And it’s also nice to get people together and give something to “the community”.  

But personally, what I like best about it, is that the entire site was built, from original idea to launch in less than five days.  

Last weekend, Caius and I had chatted about how we needed to test our stuff against Ruby 1.9.  John suggested automatically downloading and testing gems and reporting the results back to a website “a bit like the wine project does”.  David popped up in our Jabber room a couple of hours later saying “I’ve had an idea … … list stuff that does and doesn’t work with Ruby 1.9”.  Jeremy liked the idea … “I could probably skin it tomorrow night”.  

And so an evening of hacking by David gets a basic framework in place – he pulls the gems from Rubyforge and adds a comments model.  I take over on Monday, putting a basic HTML interface together, with gravatars and captchas, which Jeremy then makes look nice.  A blind alley over user registration, a few problems with Ferret, but by Tuesday we were pretty much done.  Deploy to the live site on Wednesday and then we just needed to test a few gems to seed the display.  Come Thursday and we all tweet about the new site … and is launched.

From our point of view, it was great to do.  We still had our normal “day” jobs to deal with, but the excitement of a brand new project (with the added pressure of knowing that there were probably other people with the same idea looking to launch soon) was fantastic.  

And there’s still loads to do (as you can see from the discussion ongoing on the Brightbox forums) but I have to say I’m really pleased with how things have worked out so far.  We kept it agile, we kept it focussed and we kept it fun!

MVC: A brief history of Models, Views and Controllers

Saturday, January 31st, 2009

Any Rubyist doing web development knows about models, views and controllers. The MVC paradigm is embedded in the structure of Rails and Merb and encouraged by Ramaze and Sinatra. If you’re a Mac developer or an iPhone bod then MVC is common practice there as well. Same goes for Sproutcore and even Microsoft is getting in on the act.

But where does MVC come from?

Well, like most things I enjoy in the world of software development, MVC has its roots in Smalltalk. However, things used to look slightly different to the MVC we know and love. In Smalltalk, the basic Object has a feature known as “dependencies” built-in; nowadays, we would call this the observer pattern. Basically, any object can register an interest in any other object and will be notified whenever the target changes.

target addDependent: listener.

Later, when something changes within target, it can decide to notify all its dependents (observers):

self changed: #SomeArbitraryMessage.

and the listener has its update: aSymbol method called.
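The same dependency mechanism can be sketched in Ruby – the module and class names here are mine, purely for illustration, but the shape mirrors the Smalltalk messages above:

```ruby
# A rough Ruby sketch of Smalltalk's built-in "dependencies" feature
# (what we now call the observer pattern).
module Dependents
  def add_dependent(listener)
    dependents << listener
  end

  # Mirrors Smalltalk's `self changed: #SomeArbitraryMessage`
  def changed(aspect)
    dependents.each { |d| d.update(aspect) }
  end

  def dependents
    @dependents ||= []
  end
end

# A target object that notifies its dependents when it changes.
class Counter
  include Dependents

  def increment
    @count = (@count || 0) + 1
    changed(:incremented)
  end
end

# A listener with the `update` method that gets called back.
class Listener
  attr_reader :seen

  def update(aspect)
    (@seen ||= []) << aspect
  end
end

counter = Counter.new
counter.add_dependent(listener = Listener.new)
counter.increment
listener.seen  # => [:incremented]
```

The target never knows anything concrete about its listeners – only that they respond to `update` – which is exactly what lets the view observe the model without the model knowing about views.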

Building this notification mechanism right into the core of the system means that it can be used everywhere – especially when designing user-interfaces (remember, Smalltalk was the GUI for the Xerox Star, which his Steveness bought and adapted for the Lisa). Models were the system components, the pieces of the system that actually do the work. Views represented portions of the screen and controllers represented the keyboard and the mouse. These parts were arranged as follows:

MVC in Smalltalk


The view would add itself as a dependent to the model and display the relevant aspects of the model on-screen. The user would see this, manipulate it via the mouse and keyboard, which triggers the controller. The controller sends messages to the model, which triggers its notifications, prompting the dependent view to redraw itself. In other words, the controller and view are independent objects that know nothing about each other – the only tie between them is the controller’s knowledge of the model and the dependency mechanism.

Contrast this to the MVC we know today:

MVC in Rails


Here the user pokes the controller, which in turn sends messages to the model. The model does its thing and then the controller extracts the relevant information and passes it to the view, which is then rendered to the user. In this manner, the controller lives up to its name, orchestrating the entire cycle – and is the centre of all the coupling (as it knows about the view and the model).

So why did things change (first in Smalltalk and then the following MVC frameworks)?

The first problem is that the controller takes input and the view handles output – as separate objects. Which, especially for desktop applications, doesn’t really work; if you click on a text-field you expect different behaviour to clicking on a scrollbar.

Secondly, imagine an array of models being shown in a list box. The list box has the concept of a current selection, so your array model (a model of models) needs to have a current selection too, so that the view knows which is which. But what if the same list is rendered in two places at the same time? If you point both views at the same array model, they would both share the same selection. So you actually end up with something more like:

Getting Complicated


As you can see, it gets nasty pretty quickly. And the controller is simply passing its messages straight through to the array model, which is doing the real routing – the controller is pretty much redundant.

So if you swap things around, merge the controller and array model (into some sort of array controller) and then add multiple array controllers – one per view – to deal with the selection issue, suddenly you have the newer-style object layout. Requests go into one of the controllers, which routes them to the model, then parcels up the results and invokes the view.

Evolution in action.

Cucumber and WATIR: kick-starting your testing

Friday, January 16th, 2009

I think Cucumber is fast becoming indispensable for my testing. The point of it is that you can write documentation that your client understands and then prove that the application does what it says. When coupled with WATIR you can show that it really works in an actual browser – you can even show it in Internet Explorer, Firefox and Safari.

There are a number of issues with using WATIR – the main one being speed: because WATIR scripts the browser itself, an extensive test suite takes a long time to run.

But my first hurdle was actually getting the code running. Unlike using Cucumber with Webrat, where your Cucumber steps have access to your Rails code (so you can play around with the response object), with WATIR you run the application in a separate process and then poke the browser in the same way a real user would. Of course, unlike Webrat, you get to test your Javascript too.

However, I didn’t want to have to remember to start my application before running the tests (and probably forgetting to put it into the test environment or running on the wrong port) – so I came up with the following for your features/support/env.rb.

system 'rake db:test:clone'
system 'rm log/test.log'
system 'ruby script/server -p 3001 -e test -d'

at_exit do
  system "kill `ps aux | grep mongrel_rails | grep -e '-p 3001' | grep -v grep | awk '{ print $2 }'`"
end

Before do
  @browser = Watir::Safari.new  # I'm driving Safari here (via the safariwatir gem)
  @base_url = 'http://localhost:3001'
end

This clones your development database, gets rid of the test log (as we’ll need to look through that to track down any errors, so we want to keep it small) and then starts your app in test mode on port 3001 (so it doesn’t clash with the version you’ve already got open in development). The Before clause sets up WATIR and configures a base url that my steps use elsewhere. And the at_exit hook looks for the server we started and kills it.

Next step is to add a platform instruction – so you can choose to test with Safari, Firefox (if I can ever get the Firewatir plugin installed) and Internet Explorer (where I guess the kill statement will need a bit of alteration).

An alternative is to tell mongrel to start and write a pidfile to a known place, and then replace the ps/grep line with:

  # assuming mongrel was told to write its pid to log/mongrel.pid
  pid = File.read(File.join(RAILS_ROOT, "log", "mongrel.pid")).chomp
  `kill #{pid}`

Thanks to Caius for that version.

The curious case of beauty in Ruby (or Rails vs Merb part 2)

Saturday, December 27th, 2008

I’m sure you’ve all heard the Rails 3 announcement. When I first found out, my initial reaction was “fuck me“. But shortly after I was filled with a feeling of dread and general unease. And I didn’t know why…

Firstly, a bit of history.

I first tried programming on a Commodore Vic 20, and then after that a C64. C64 BASIC was very simple – if you wanted to do anything beyond PRINT statements you needed to POKE values into registers and control the hardware directly. Great for learning how things actually worked. And, to be fair, I was shit at it.

But I do remember reading an article on a system called “Smalltalk” and its “Object Oriented Programming”. Suddenly, programming made sense. It read a bit like English. You sent messages to the thing that knows how to answer your question. It was like talking to people. You ask Dave a football question. You ask George a music question. Cos Dave knows crap all about music and George knows nothing about football.

But, in those days, Smalltalk cost a fortune; there was no way a child like me could get hold of a Smalltalk environment. So instead, I got hold of Turbo Pascal 6 With Objects (thanks Dad). It was not Smalltalk but it read a bit like English and it had objects. I played about with Turbo Pascal, went to university (where I didn’t do computing but did do some C++) and then got a job doing Delphi (Turbo Pascal for the 90s). This object-oriented stuff really worked for me; I put a lot of effort into writing classes that had really simple public interfaces and code that read like English. And Pascal (the language underlying Delphi) was great for that. Then I discovered Java, which meant I could write Delphi-like code but without having to deal with memory management. I also discovered PHP, Python and Ruby. None of which clicked with me; dynamic typing made me nervous (and PHP and Ruby seemed a bit ugly).

However, I needed an ORM for a Delphi project and I thought I should try to copy an open source project. Whilst searching I discovered Rails and thought “this is the one to copy”. But a day into my “copy ActiveRecord into Delphi” plan I thought “this is just like Smalltalk”. Why make an inferior copy when I can use something that’s not far off the Holy Grail? Writing an application on Rails had the same effect on me as my original discovery of Smalltalk – it read like English, it felt fantastic. So I gave up on Delphi and became a Rails programmer.

What I liked about Rails was its emphasis on happiness. When I wrote Rails code I felt like I was writing beautiful prose. I would go back and refactor it until it read correctly. This was not like pure Ruby, which was often ugly. No; Rails had this idea about beauty in code that really got me excited. It made me happy. It also made decisions for me – put your code here, test it like this, set up your database this way. But Rails had performance problems – so Merb was born. A ground-up rewrite of many of Rails’ ideas but with an emphasis on configurability and performance.

Maybe it’s the Engine Yard connection (I turned Engine Yard down for a job because it didn’t “feel right”) – and now I work for Brightbox, one of their competitors – but for some reason, every time I tried Merb I just couldn’t get into it. It was weird. Structurally and functionally it was the same as Rails – but it was Rails plus performance plus options. And I didn’t like it. I never got past the tutorials. Merb emphasises clear and understandable code and was tested with RSpec (which I love). Rails is hard to understand and uses Test::Unit (which is ugly). But I love Rails and I can’t get into Merb. I just couldn’t figure out why.

Until today.

Mr Hanson did a blog post on his first piece of Rails/Merb integration. And something stood out to me. As he was describing Merb’s provides/display functionality I noticed that I didn’t really “get it”. provides made sense, but how did that relate to display? Mr H addresses that directly:

There were a couple of drawbacks with the provides/display duo, though, that we could deal with at the same time. The first was the lack of symmetry in the method names. The words “provides” and “display” doesn’t reflect their close relationship and if you throw in the fact that they’re actually both related to rendering, it’s gets even more muddy.

And then he describes the Rails 3 version of the same functionality. Instead of provides/display it becomes respond_to/respond_with. In particular display @users becomes respond_with @users.

It’s only a tiny thing. Logically and functionally, they are exactly the same. But DHH’s version has an emphasis on the words that are used. How they couple together (display/provide versus respond_to/respond_with).

And there is the reason I was uneasy about Rails 3. What if Rails lost this emphasis on the human factors – how the words mesh together – in the search for performance? Merb is written functionally, Rails is written emotionally – Merb is about performance, Rails is about feelings.

But DHH has made me feel much better about Rails 3 – he has shown that he will take Merb constructs and Railsify them, humanise them. Because, although code is executed by computers, it is written, and more importantly, read by people like me.

If you find this useful then please take a look at some of my other writing – or recommend me on Working with Rails. Cheers.

Writing tests for your controllers improves the design of your models

Saturday, December 20th, 2008

I’ve recently been updating some old code – partly written by someone else, partly written by myself. At the time, I thought I had written this code really well; looking back on it now, it looks awful. Fair enough, I’ve learnt a lot – I want to look back on old code and shudder. But there is also very poor test coverage on this app, and the tests that do exist are quite unwieldy due to an over-reliance on fixtures.

So I’ve been reworking them all using RSpec, my fork of RSpec-Rails and my Object Factory (which means I can avoid fixtures).

Most of the work involves writing a spec that mimics the current behaviour (by inspecting the code and trying to match all paths through it), then refactoring the code, using the spec to prove that I haven’t broken it.

But some points have some really horrible code (and lots of it) within the controllers. As you probably know, Skinny Controllers is the Rails way – your application logic belongs in your models (as they are your application) – the controller should just find or create the relevant model, ask it to do something and then render the results.

Because of this, I opted to just rewrite the actions in question.

To do this I started by writing a Cucumber feature describing things from the user’s point of view. Actually writing the steps that match the feature was a lot of work; because Cucumber is a full stack test you have to deal with all the dependencies that your individual action has (for example, are you logged in with the correct permissions with all associated objects created and in the right state?).

Then I wrote a controller spec. Controller specs in RSpec should use mock objects; you don’t really want to test the models, you just want to prove that the controller finds or creates the right model, asks it to do something and renders the correct output at the end.

So a typical spec looks something like this (note that this is not RESTful as it was an existing part of the application that I am about to change):

  it "should process an item" do
    @item = mock_model Item, :work_type => :buy_stuff, :quantity => 2
    Item.should_receive(:find).with('1').and_return(@item)
    @stuff = mock_model Stuff
    @item.should_receive(:process).and_return(@stuff)
    on_getting :process_item, :id => '1' do
      assigns[:stuff].should == @stuff
      response.should be_success
      response.should render_template('admin/orders/process_item')
    end
  end

This basically says:

  • We have an item, of type “buy stuff” with a quantity
  • When the process_item action is called, we expect that the controller will try to find the item with the given id
  • Then we should call process on the item and it should give us some stuff
  • The stuff should be stored in an instance variable, called stuff
  • And a page should be successfully rendered using the process_item template

That’s a pretty succinct explanation of what the controller should do – I can’t think of many ways of making that skinnier. It also bears no resemblance to the actual implementation of the action – which currently looks something like this:

def process_work_item
  @item = Item.find(params[:id])
  case @item.product.class.to_s.underscore.to_sym
  when :buy_stuff
    @stuff = Stuff.build_item(@item)
    render :action => :process_stuff
  when :update_stuff
    render :action => :update_stuff
  else
    render :status => 404, :text => "Item product type #{@item.product.class} unknown."
  end
rescue ActiveRecord::RecordNotFound
  render :status => 404, :text => "Item not found"
end

Pretty complicated – and as new types of item are added we need to add more and more clauses to that case statement.

First things first – I’ve said that we should call “process” on the item. So I add a pending spec to the Item specification – this is to remind me that I’ve got some work to implement later on.

  it "should process itself based upon its work type"

Then I rework the controller so that the controller spec passes.

def process_item
  @item = Item.find(params[:id])
  @stuff = @item.process
  render :template => "admin/orders/process_item"
end

Pretty simple huh?

What we have done is shifted the logic from the controller into this new “process” method on the Item. We have made it so that the controller knows virtually nothing about the item – all it knows is how to find it and that it has a process method. All the implementation details are now hidden within the Item, out of the way of the outside world.

Through the use of mocking we can ignore actual implementations and concentrate on presenting ourselves as simply, and minimally, as possible to the outside world. This reduces coupling, increases flexibility and makes our code easier to read. Don’t you agree?

Coping with the VAT Change

Tuesday, November 25th, 2008

There seems to be a lot of wailing and gnashing of teeth about the upcoming VAT change, especially as it is only a 13-month change and the rate will revert to 17.5% in 2010.

However, it ought to be really simple (although I realise that this may be a bit late for some computer systems).

Never hard-code the VAT rate

Have a Tax class that the rest of the system can ask when it needs the tax rate. That way, you’ve only got one place to make the change.
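A minimal sketch of the idea (the class and constant names are my own, and the rate is the post-change standard rate): one place in the system knows the current rate, and everything else asks it.

```ruby
# Single source of truth for the VAT rate - change it here, and only here.
class Tax
  # The standard VAT rate as of 1st December 2008.
  STANDARD_RATE = 0.15

  def self.rate
    STANDARD_RATE
  end
end

# Anything needing the rate asks Tax rather than hard-coding 15%:
vat = 100.00 * Tax.rate  # => 15.0
```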

Store the rate against your invoice lines

Add a field to your invoice lines that stores the rate applicable for that line. Populate this when the line is created. That way, an invoice with a tax point of 30th November 2008 will have 17.5% stored against its lines, while those created on 1st December 2008 will have 15% stored against them. This keeps your most vital financial documents accurate over time.

Calculate the VAT as late as possible

When calculating the VAT on an invoice do not do this:

value_excluding_vat = 0
vat = 0
value_including_vat = 0

for each line in my invoice
  value_excluding_vat += line.value
  vat += line.value * line.tax_rate
  value_including_vat += line.value + (line.value * line.tax_rate)

The problem here is that you are calculating the VAT on a per line basis and then summing the values. This will lead to rounding errors – where your line values are a penny out from your totals.

Instead you need to place each line into a “bucket” for its tax rate, sum all values for a given rate and then calculate the tax on that. That way you are working on the largest numbers possible, reducing the risk of small decimal place discrepancies throwing your totals out.

value_excluding_vat = 0

for each line in my invoice
  bucket = the_bucket_for line.tax_rate
  bucket.value += line.value
  value_excluding_vat += line.value

vat = 0

for each bucket in my collection of buckets
  vat += bucket.value * bucket.rate

value_including_vat = value_excluding_vat + vat
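To see the rounding problem concretely, here is a small Ruby demonstration (the line values are made up): three invoice lines of £0.10 each at 17.5% VAT come out a penny different depending on whether you round per line or per bucket.

```ruby
require 'bigdecimal'

RATE  = BigDecimal("0.175")
lines = [BigDecimal("0.10")] * 3  # three lines of 10p each

# Wrong: round the VAT on each line, then sum the rounded values.
per_line_vat = lines.inject(BigDecimal("0")) do |sum, value|
  sum + (value * RATE).round(2)   # 0.0175 rounds up to 0.02 per line
end

# Right: sum the net values for the rate first, then calculate VAT once.
net          = lines.inject(BigDecimal("0")) { |sum, value| sum + value }
bucketed_vat = (net * RATE).round(2)  # 0.30 * 0.175 = 0.0525 -> 0.05

puts per_line_vat.to_s("F")  # 0.06 - a penny out
puts bucketed_vat.to_s("F")  # 0.05 - calculated on the largest number possible
```

Note the use of BigDecimal rather than floats – financial arithmetic on binary floating point introduces its own discrepancies on top of the rounding issue.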

Have a Tax Band and Tax Rate model that deals with changes over time

If you really want a “minimal administration” system for dealing with tax rate changes then you should switch your data model to multiple Tax Band objects (remember, there are several VAT bands – “standard” is 17.5/15%, “low” is 5%, “zero-rated” is 0% and “exempt” is also 0% but needs to be accounted for separately from “zero-rated”).

A data model for dealing with variable tax bands and variable tax rates

Each Tax Band object should have a set of Tax Rate objects hanging off it – where each Tax Rate has a percentage rate and a start and end date. That way you can ask your “Standard Rate” tax object “what is the percentage on the 30th November 2008?” and it can give you the correct answer.

That makes dealing with a change like this simple – you go to the “Standard Rate” object, find its current rate object and set its end date to 30th November 2008. Then you add a new 15% rate object to the “Standard Rate”, setting its start date to 1st December 2008 and its end date to 31st December 2009. Lastly, add another 17.5% rate object, with a start date of 1st January 2010 and no end date.

Update the rest of your systems to always ask the relevant tax object for the correct rate and tax rate changes will never be a headache again.
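The whole model can be sketched in a few lines of Ruby – the class names and the first rate’s start date are assumptions of mine, purely for illustration:

```ruby
require 'date'

# A dated rate: percentage plus the period it applies to.
# A nil ends_on means "open-ended".
TaxRate = Struct.new(:percentage, :starts_on, :ends_on) do
  def covers?(date)
    date >= starts_on && (ends_on.nil? || date <= ends_on)
  end
end

# A band ("Standard Rate", "Low Rate", ...) holding its rates over time.
class TaxBand
  def initialize(name)
    @name  = name
    @rates = []
  end

  def add_rate(percentage, starts_on, ends_on = nil)
    @rates << TaxRate.new(percentage, starts_on, ends_on)
  end

  # "What is the percentage on this date?"
  def percentage_on(date)
    rate = @rates.find { |r| r.covers?(date) }
    rate && rate.percentage
  end
end

standard = TaxBand.new("Standard Rate")
standard.add_rate(17.5, Date.new(1991, 4, 1), Date.new(2008, 11, 30))
standard.add_rate(15.0, Date.new(2008, 12, 1), Date.new(2009, 12, 31))
standard.add_rate(17.5, Date.new(2010, 1, 1))  # no end date

standard.percentage_on(Date.new(2008, 11, 30))  # => 17.5
standard.percentage_on(Date.new(2009, 6, 1))    # => 15.0
```

The three `add_rate` calls are exactly the administrative steps described above: close off the old rate, add the temporary 15% rate, and queue up the reversion for 2010.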

Acceptance Testing in Ruby, Rails, RSpec and Cucumber

Friday, November 21st, 2008

I’ve written up a new post at the Brightbox blog detailing how we are using RSpec and Cucumber to build acceptance tests for the next generation Brightbox systems.

Telling Stories with RSpec

Thursday, October 16th, 2008

Last night I gave a talk at Geekup about RSpec and RSpec User Stories.  


Telling Stories With RSpec


Thanks to Ashley Moran for talking it through with me.

UPDATED to use Slideshare to display the slides.