
Archive for March, 2010

Release Management Lessons Learned – benefits of a working process

March 28, 2010

Some more lessons learned from my stint as release manager, this time looking at the benefits of having a well-defined but flexible process, which made sure that problems were surfaced early.

Our release process sought to verify that everything was in a fit stage to proceed, and to make sure that we didn’t forget anything important. My notes are drawn from a test release late in 2009, and highlight a few of the things that we found in a normal “problem-free” release.

The most important thing about this release was that, despite being a major one, no significant bugs were found, so no subsequent defect-fix releases were needed. The development teams had learned from my constant harrying of them through the year; this was the first time we managed a clean release during my year doing the QA role.

Benefit of getting it right first time

Quite apart from saving the overhead of fixing, testing, repackaging and retesting a bug fix release, delivering a high quality release through test at the first attempt gave us other very clear and tangible benefits. Firstly, we didn’t have to worry about code branches (the development team had already started work on the next release). Secondly, I was able to produce a draft Production Release Note for this release in time for User Acceptance Testing. Because I didn’t have to merge the content of two or more test releases, manual errors were eliminated – previously I had usually missed something out during each merge. UAT was then carried out against the Production Release Note, and also failed to find any defects.

A working process doing its job

Having a rigorous process proved itself time and again by identifying mistakes and omissions at an early stage. I’ve picked the following examples of relatively small things that were caught early, which contributed to what was ultimately a successful release. Of course, I also found quite a lot of small QA issues that are of no interest to anyone outside the team.

One very valuable check was to audit all changes in the source code repository since the last release, to make sure both that all expected changes were present and that nothing unexpected had been included. On this occasion the development team had made a minor change to a component and checked it back in to the repository, but hadn’t itemised it in the release note. The source control log correctly identified the defect number, which was in the release’s scope. On investigation I found a reference to this particular code item buried deep in the detail of the defect, and added it to the release note.
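For illustration, here is a rough sketch of the kind of audit I mean – not our actual checkout script. The tag and module names are placeholders.

#!/bin/sh
# Hedged sketch only: list everything that changed in CVS between the
# previous release tag and the head of the trunk. REL_PREV and "app" are
# hypothetical placeholders for the real tag and module names.

PREV_TAG=REL_PREV
MODULE=app

# "-s" gives a one-line summary per changed file rather than a full diff.
cvs rdiff -s -r "$PREV_TAG" -r HEAD "$MODULE" > "changes_since_${PREV_TAG}.txt"

# Each entry in the listing can then be checked off against the release note;
# anything left over needs tracing back to a defect or excluding from the tag.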

Our test team used to share the builds around between them, so each time the instructions were seen by a relatively fresh pair of eyes. As a result we usually found any obvious errors in the release note before the end users followed them at UAT. As usual, we also found minor defects and suggested improvements to the documentation relating to changes made some time ago that had not been spotted at the time. Normally, by the time the instructions reached the users they were pretty accurate.

Another issue arose that hadn’t been considered before: how best to assign security privileges for the new functions added in this release. Following go-live, the business users had taken responsibility for maintaining system access via a set of UI screens, and we had stopped updating our build scripts. This release added so much new functionality at once that granting access manually would have been too onerous, so we realised that instead of simply listing the new functions, we had to write a script to add them in dynamically. We recorded the business’s initial access requirements in a defect, which allowed us to test the script in the usual way. Including a check in the process made sure we addressed this before it was too late.
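The sketch below gives a flavour of the set-based approach, though it is not our actual script – the table and column names (app_functions, app_roles, role_functions), the release identifier and the connection string are all hypothetical. The idea is simply that everything added since the last release is granted to the agreed roles in one pass, rather than being listed by hand.

#!/bin/sh
# Hedged sketch only: grant every function introduced by this release to a
# default business role. All object names and the connection string are
# hypothetical placeholders.
sqlplus -s app_owner/password@PRODDB <<'SQL'
WHENEVER SQLERROR EXIT FAILURE
INSERT INTO role_functions (role_id, function_id)
SELECT r.role_id, f.function_id
FROM   app_functions f
CROSS JOIN app_roles r
WHERE  f.release_version = 'R2009.4'   -- placeholder release identifier
AND    r.role_name       = 'STANDARD_USER'
AND    NOT EXISTS (SELECT 1
                   FROM   role_functions rf
                   WHERE  rf.role_id     = r.role_id
                   AND    rf.function_id = f.function_id);
COMMIT;
EXIT
SQL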

A final small issue that came to light was that one of the migration SQL scripts didn’t run from the ‘@’ prompt in SQL*Plus. This showed that our development tools didn’t place the same restrictions on the code as the production environment, and it led to a slight tightening of the unit test procedure.
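A minimal sketch of the kind of extra unit-test step this implies – assuming, purely for illustration, that the migration scripts sit in a migration directory and using a placeholder connection string – is to run each script exactly as the production DBA will, from SQL*Plus via the ‘@’ command, rather than from a more forgiving GUI tool:

#!/bin/sh
# Hedged sketch only: run every migration script through SQL*Plus the way it
# will be run in production. Directory name and connection details are
# hypothetical placeholders.
for script in migration/*.sql; do
    echo "Running $script from the SQL*Plus '@' prompt..."
    sqlplus -s test_user/password@TESTDB <<SQL
WHENEVER SQLERROR EXIT FAILURE
@$script
EXIT
SQL
    if [ $? -ne 0 ]; then
        echo "FAILED: $script does not run cleanly under SQL*Plus" >&2
        exit 1
    fi
done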

Nothing dramatic here, but hopefully a few illustrations of things that would have been annoying if we’d only uncovered them at the last minute, the difference between a smooth release and a fraught one.

More

From a completely different perspective, a review of an Agile project from David Larsen.

Release Management Lessons Learned – some problems and what we did about them

March 18, 2010

Last year, I was QA and Release Manager for the project I was working on at the time. Now that I’ve moved on, I’m making some notes on things to remember for another time.

I am always interested to read about other people’s experiences and what worked for them, so in the same spirit I am offering up some observations of issues that we found while forming releases, and what we did about them. We aren’t always very good at recording lessons learned, either formally as part of our job or even privately, which is a shame as it can be a very worthwhile exercise.

During the year our internal processes eventually became quite tight. I have mentioned previously that I created simple checklists to ensure we carried out all the steps needed to prepare a release. Despite this, new issues still arose nearly every time. Those described below occurred during one of the later production releases and are typical of the sort of thing we found.

Development process issues

We had two significant issues with our code that surfaced immediately following a release into production, which – embarrassingly – required an immediate Hotfix:

1. Missed branch – lack of formal QA process

We reintroduced a critical defect that had been fixed in Production by a Hotfix to the previous release. It turned out that the Hotfix code branch hadn’t been incorporated back into the main code base. The development team leader was aware of it and thought it was done, but in fact had confused it with a different Hotfix.

The immediate failing in this case was with the Hotfix release process. What should have happened when the code branch took place (a couple of months before) was to raise a defect to merge it back into the main code base. Then, even if the development team forgot about it, the QA checks would have spotted it.
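To make the mechanics concrete, here is a rough sketch of what the merge itself might have looked like in CVS – the branch name, module name and defect number are placeholders, not the real ones:

#!/bin/sh
# Hedged sketch only: merge a hotfix branch back into the CVS trunk once a
# defect has been raised to track the merge. All names and the defect number
# are hypothetical placeholders.

MODULE=app
HOTFIX_BRANCH=HOTFIX_2_3_1

# Start from a clean checkout of the trunk...
cvs checkout "$MODULE"
cd "$MODULE" || exit 1

# ...then pull in everything committed on the hotfix branch.
cvs update -j "$HOTFIX_BRANCH"

# Resolve any conflicts, re-run unit tests, then commit against the tracking
# defect so the QA audit can see the fix went back into the main code base.
cvs commit -m "Defect 1234: merge $HOTFIX_BRANCH back into the trunk"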

On this occasion, the ultimate cause was lack of familiarity with the process; I was on annual leave and my role was performed by someone else. At the time the original Hotfix was applied, we didn’t have a standard checklist for Hotfixes, and a defect wasn’t raised because my stand-in didn’t think to do it. By the time this issue surfaced, we had already closed this gap; the Hotfix checklist included a specific check to prevent this very issue.

2. UI usability issue – failure of testing process

A bug fix made in this release introduced serious usability issues, which really should have been spotted during testing. It went all the way through the test process, including User Acceptance Testing, without detection.

The problem here was that the original defect description was incorrect, and subsequent attempts at clarification served only to muddy the waters. It was unclear exactly what was being tested, making it harder for a review to identify that the test was inadequate.

You could argue this might be symptomatic of a more widespread process failure, within both my project team and the client’s; fortunately it was an isolated incident. Following this, the test team started ensuring that all defects contained a clear functional statement, and were not afraid to challenge any defect that didn’t.

At heart, this reflected a very common cause of project delivery problems – poorly specified requirements.

Our existing processes should have prevented both of these issues, but we didn’t quite have the right checks in place to ensure they were being followed.

Production release process issue

During the release database build the client DBA didn’t carry out the instructions as documented and set up some tables incorrectly. He hadn’t been involved in the Production Support rehearsal of the implementation; the action for the next release was to ensure the same personnel were included in the dry run.

A second point – instructions are notoriously difficult to write well, and can easily be misunderstood by a new reader even if everyone else thinks they are crystal-clear. If the dry run involves the same team who will actually carry it out, any confusion can be clarified (and if necessary, the instructions improved) when it doesn’t matter.

Self-improving process

The end result was some embarrassment. It didn’t do much harm to the team’s reputation in the long term, but we can always do without it.

This was why I added a step towards the end of the release checklist to “write up notes of release including any issues encountered for later dissemination”. By reminding myself to do it, we were able to prevent silly mistakes from happening a second time.

Even when you think you have everything under control, there is always scope for improvement.

An example checklist for a software release

March 8, 2010

I mentioned previously how I found using a checklist helped enormously when I was responsible for managing releases last year. As we identified issues I added them to the checklist for next time, and after a few months the process was pretty tight and a contributory factor to the overall success of the project.

As time went by I developed several checklists for different purposes; of these the one I used most often was to release a build from the development team to our test team. I hope it might be a useful starting point for others, so I’m setting it out below.

Of course, it’s not fully comprehensive. It reflects the issues we experienced on our project – another project might have a very different set of experiences and would need to add other checks or remove some of mine. In summary I was seeking to minimise errors in the following areas:

  • QA – ensuring the development team had followed our processes and provided a full audit trail;
  • software content – confirming that all components expected in the release were present, and nothing else;
  • packaging the release and its accompanying documentation.

We followed a traditional waterfall process, but with short iterations (often a release every week or two). I’ve removed most application-specific detail; however, we had two development teams, one for the application and one for the reporting suite, and I’ve retained the distinction as it is quite a common one.

Test Release Checklist

  • Check all new code has Peer Reviews and Unit Tests.
  • Update all defects in Quality Centre (QC) by checking against the list of fixed defects supplied by the team leaders, setting ‘Fixed in Version’ to the current release.
  • Derive a list in QC to use in the release note (use Export All to save to Excel), which also makes it easy to update at the end. It needs formatting in Excel – add a grid and format cells to align Top – then save it for later use in the Release Note.
  • Check all QC defects can be traced back to a Unit Test. Check also whether the Unit Tests have sufficient coverage (e.g. if a change adds an extra branch to the code, that both branches are tested, not just the new case).
  • Consider if defects should be a Change Request, and whether they involve a spec update (which implies a CR).
  • Are there any dependencies on other systems – upstream or downstream – or infrastructure? If so will need to liaise with client team to ensure everything is aligned for Production release.
  • Prepare Release Note.
  • Check out in CVS – directly on the UNIX box by running a checkout script, which also produces a listing of changed files since previous release.
  • Check all expected changes are in CVS.
  • Check all updated files are expected for release; if anything else has been identified, decide on an action – either trace it back to a defect and add it to the release if necessary, or exclude it.
  • Save a listing of all the CVS changes since the last release for the record.
  • Check release notes/emails from the application teams. Check versions are consistent between the emails and CVS. If nothing apart from versions has changed then no further action is needed; however, if configuration settings have been updated then the Installation Guide needs a change and the new settings need to be specified in the Release Note.
  • Are there any Hotfixes since last tagged release? If so, do any involve anything other than a DML update? See Hotfix log.
  • DDL changes – do a diff on the database DDL build scripts (see the sketch after this list). Ensure all changes are included in the migration script (and vice versa) and are traceable to a defect. Ensure we are not dropping any production data (e.g. User Roles, Reference Data) unless this has been agreed and tested. Also check whether there have been any Hotfixes with DDL since the last tagged release.
  • Reporting components – are there any changes to parameters, LOVs etc to feed into Release Note or Installation Guide?
  • Raise any defects required to cover gaps, e.g. document updates outstanding (and note in section 2.3 of release note).
  • Run the CVS check again to confirm that no last-minute changes have been introduced.
  • Tag release (excluding anything that isn’t required).
  • Check a couple of different files in the release to confirm the tag has worked properly, especially any files/directories that have been retagged manually.
  • Update defects in QC.
  • Send out email formally releasing build.
  • Update Release Summary sheet in Log.
  • Write up notes of release including any issues encountered for later dissemination.
  • Make any updates identified in Release/QA documentation.
  • A couple of days after release, send out review email to project team leaders of how the release went.
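
As an illustration of the DDL comparison step referenced above, something along these lines would do the job – the module path, tag name and script locations are placeholders rather than our actual layout:

#!/bin/sh
# Hedged sketch only: compare the DDL build scripts against the previous
# release and flag anything destructive. Tag, module path and migration
# script location are hypothetical placeholders.

PREV_TAG=REL_PREV

# Produce a unified diff of the DDL build scripts since the last release.
cvs rdiff -u -r "$PREV_TAG" -r HEAD app/database/ddl > ddl_changes.diff

# Every hunk should be matched by a line in the migration script and be
# traceable to a defect; review ddl_changes.diff alongside migration/*.sql.

# Flag anything that might drop production data (e.g. User Roles, Reference
# Data) so it can be confirmed as agreed and tested before release.
grep -inE 'DROP |DELETE FROM|TRUNCATE ' ddl_changes.diff migration/*.sql || true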

Are Annual Performance Reviews Pointless?

March 1, 2010

In my employer’s organisation, few of us look forward to February. It is the month of Annual Performance Reviews, an activity viewed without enthusiasm by reviewers and reviewees alike. It doesn’t help that it has to be fitted in alongside the day job, so if you have more than a few to do (and write up afterwards) they are quite disruptive and a lot of effort.

The process followed by my employer is common, especially among consultancy firms. We all obtain written feedback from our direct line managers, which is reviewed against a set of objective criteria and some personal objectives set at the previous year’s meeting. How well we are judged to have done contributes to our next pay rise (if there is one…). We then have a short chat about future career aspirations (which the company is often not in much of a position to satisfy) and more objectives are set for the next year. In theory we are all supposed to work towards meeting our objectives through the year, which aren’t set in stone and are varied to suit changing circumstances. Of course most people don’t actually give them a second thought until about a month before the next review.

Many people argue that this sort of review should be dispensed with altogether, as counterproductive. The majority of my colleagues view them as at best a box-ticking exercise, and at worst a charade.

Many of the arguments given against reviews are against specific aspects of the process, rather than against carrying out a review as such. One common complaint is the practice of ranking all staff, especially if the outcome has to fit a forced bell curve distribution. This is divisive and demoralising, especially as most people over-rate their own performance. William M Fox argues that effort should only be made to solicit ratings for the extreme performers. My own experience is that we spend a lot of time arguing over the difference between those on either side of the above/below average dividing line, only for them all to be dragged down arbitrarily later when some HR bod decides we still don’t have enough “below average”. This group are the ones whose morale is most undermined by this type of process, despite often having put in a perfectly acceptable performance, usually barely distinguishable from that of their peers who are judged “above average”.

By contrast, I have found that if the performance rating is based on a more objective measurement (e.g., have I met agreed objectives?) it is more widely accepted and I have been able to sell the idea to staff.

A more convincing argument against annual reviews is that all feedback on performance should be given directly, at the time. This is hardly a new idea – for example it’s espoused in The One Minute Manager written in 1982. But it’s much easier said than done. I’ve rarely received any direct comments on my work as I’ve done it, and most of the time (even now) I’ve really had no idea how my day-to-day performance measures up. Of course the reason for this is obvious – most managers are under a lot of time pressure, and however well intentioned, never find time for quick discussions with their staff.

This reason alone makes a strong case in favour of holding Annual Performance Reviews – managers are required, annually, to discuss and record the performance of their teams, even if they do nothing during the rest of the year. It’s also a good thing to be able to have a chat about our own work with someone who knows us, especially if (following the model used by many consultancy firms) they are somebody independent who can give a wider perspective.

Of course, as reviewees, we can always take matters into our own hands. I surprised my own manager last year by requesting a one-to-one – he was quite relieved when he found that I had no motive except to review how I was doing against my job specification and objectives and to see if there was anything I should work on to improve. Of course, most of us are too lazy to do this unless we are forced to; even if we aren’t it’s hard to find time and easy to keep putting it off.

So, just as usefully, an annual review forces us once a year to interrupt our own busyness and take stock, when we might otherwise never get round to it. I have found it a good opportunity to identify a few things I can work on in the year ahead, and I don’t necessarily feel I need to discuss them all with my manager.

So, however imperfect, annual reviews are worthwhile if we use them to measure our own progress and see how we can improve in the year ahead. If you are required to have one, make the best of it!

More information

Kenneth Blanchard and Spencer Johnson (1982), The One Minute Manager, Collins/Fontana (1983 edition)

The trouble with Performance Reviews, an article in Business Week setting out some ways to do it better.