
Lessons learned from a recent implementation

November 17, 2011

For much of the past year I have been very busy as the development and implementation phases of my project have been in full swing. A quieter period following delivery has given me a chance to reflect on how things went before the next phase, which has now started and will keep us all busy through the coming winter.

The project was overwhelmingly successful: we implemented two new applications and a series of significant enhancements to an existing large billing system, all on time. Unfortunately, some of the gloss was taken off by a series of problems with each of the major billing system release implementations, which are worth examining to see what we can improve next time.

The problems we encountered were varied and often quite peripheral to the system’s core functionality – in practice we had very few bugs. One of the main sources of grief throughout the project was the timely provision of test environments – a long story, the short version of which is that the outsourced infrastructure providers couldn’t deliver anything in time and we had to test as best we could with what we had. The root cause of the implementation problems usually stemmed from this: generally something hadn’t been fully tested because of the lack of a test environment. We tried to compensate by adding extra manual checks, but inevitably some things slipped through. The client Project Manager understood all of this, and the decision was taken to go ahead regardless (correctly – we had to meet a regulatory deadline).

Each of these events created a lot of excitement among the many client managers in Change Management and Service Delivery, who as stewards are naturally suspicious of any change to their lovely shiny production systems. They demand perfection, so when we failed to provide it they all made a big fuss – even though they knew the true situation before we went live. The time spent dealing with the politics was disproportionate to the actual impact.

I wrote earlier about the similarities between Conducting and Project Management – here is another one. A common joke in musical circles is that as long as you start a performance well and end it well, nobody will notice the mistakes in between, and this is just as true of an IT project. Each release was viewed as a “failure” even though, by any dispassionate evaluation, it was emphatically not.

In every case my team was blamed, whether it was technically our fault or not. The contract clearly spelled out a dependency on the client to provide adequate environments, most of the testing that was not carried out was the client’s responsibility rather than ours, and we always received formal client signoff. Despite that, the finger of blame has been pointed at us, not elsewhere within the client’s organisation.

A summary of the lessons from our experience:

  • Outsourced infrastructure management hinders progress (and I am quite sanguine about whether my employer would have been any different)
  • Visual checks, however careful, are no substitute for rehearsals
  • As the supplier, my project team sustains reputational damage when things go wrong whatever the contract says
  • I should have paid more attention to managing internal client stakeholders, especially those with no particular interest in the overall success of the project – even if it’s really the client PM’s responsibility

There are some positives too; for another time, maybe.