Society – in the UK at least – seems obsessed these days with performance. And I’m not referring to the exploits of our athletes in Team GB during London 2012.
No, what I’m concerned about is the focus we seem to have on performance targets and monitoring. Now don’t get me wrong. I do agree that there has to be accountability for many of the things in which society invests. But there are some areas where performance monitoring and appraisal have been taken a little too far in my opinion. I have to be a little circumspect, I guess, since my elder daughter Hannah is a PhD psychologist specializing in industrial and organizational psychology and particularly in issues relating to performance.
In the public eye
Before the Olympic Games finally got under way, there was great concern that the company that had been contracted to provide security cover had not been able to recruit the necessary number of staff, or to train those it had recruited in time. Thankfully, all went quiet and appears to have gone smoothly as the Games commenced.
But the company in question, G4S, had contracted to provide all these people, yet only came clean a couple of weeks before the Games were due to open that they could not fulfil the terms of the contract. Now it’s not my role here to criticize how G4S managed its contract. What I do find hard to understand is how its failure to deliver was not picked up until so close to the Games. I would have expected the contract to have some very carefully defined targets and a set of milestones that had to be achieved by certain dates. These would have permitted adequate monitoring of contract progress. Maybe these were in place, but no-one bothered to monitor what was going on.
Now we have the furore about GCSE English exam results, and the moving of grade boundaries. Not only will this change in marking practice affect individual students; lower grades also threaten the pass-rate targets that schools are expected to achieve if their government funding is not to be adversely affected. Again, targets, targets, targets.
My own experiences with performance setting and monitoring have been concerned with two aspects. First, as with most employees these days I guess, I’ve had to undergo an annual performance appraisal. And second, as Director for Program Planning and Communications at IRRI between 2001 and 2010 I had responsibility for developing the institute’s Medium Term Plan and helping colleagues to define/refine their annual research targets, as well as respond to the increasingly idiotic and meaningless questions raised by the small-minded accountant hired by the CGIAR Secretariat in Washington, DC who hadn’t the first clue about scientific research (either basic, applied or for development) and was, in reality, the proverbial ‘bean counter’.
It was the late 1980s, and I was working at the University of Birmingham as a lecturer in plant biology. The Thatcher government had made any salary increase for academic staff contingent on the introduction of a performance appraisal system – something that was very new in academic circles. We had training courses – both for those supervising staff and for those being appraised. I have to admit I was dead set against this new-fangled approach. It seemed to me that if you were found not to be performing as expected, there were already measures in place to help you do better. If, on the other hand, everything was going well, you might get a pat on the back, and that seemed about it.
I think I surprised myself when, after the first round of performance appraisal, I became a convert. I had found the whole exercise worthwhile and, following a complete reorganization in the School of Biological Sciences into four research groups (I was in the Plant Genetics group), had a better understanding of my niche in the School – and that it was appreciated by my head of group. It was also very useful to be able to have a frank and unconstrained discussion, one-on-one, with my head of group, Dr (later Professor) Mike Kearsey. From this experience, I became convinced that performance appraisal should be more about personal development rather than a means primarily to set remuneration policy and merit increases, although it surely plays a part.
So I was rather shocked when I moved to IRRI in 1991 to find a system of forced ranking, where local staff expected to be rated ‘excellent’ just for doing their job, simply because salary increases were tied to the outcome of the appraisal cycle. In fact during the 19 years I was at IRRI, I think I must have been through more than half a dozen different appraisal systems – and to my mind, none of them was particularly satisfactory. I was able to have some aspects of the development criteria I’d experienced at Birmingham brought into the IRRI system, however, and I think they were appreciated by staff at all levels.
Getting staff performance appraisal just right is a tricky issue, and I do not count myself an expert by any stretch of the imagination. But I think I can recognise a system that is just not delivering – either for the individual staff members or the organization.
Performance targets and monitoring
The days when a researcher could follow his or her scientific curiosity are long gone. Just ask anyone who has had to write a research proposal – for basic research, applied research, or research for development – and the problem of crystal-ball gazing emerges. Scientists are often asked, as one of the criteria for evaluation, what the impact of their research is likely to be 10, 20 or 50 years down the road. For many, this is an impossible question.
But in the fields of research that I have been associated with for several decades, the success of any grant submission rests on the ability to demonstrate clearly what the outcomes and impact are expected to be, and to plot a pathway (through milestones) to achieving them. I don’t have much of a problem with that; after all, this type of research is not done for its own sake, but has the ultimate aim of improving people’s livelihoods. But while a targets and monitoring scheme can be a framework to assess the benefit-cost of research investment, it had, in my opinion, become a millstone around the collective necks of the international agricultural research community, imposed from above by a group of donors whose staff (well, some of them at least – the ones calling the shots) had little understanding of the nature, complexities and constraints of carrying out research for development, often in rather challenging conditions.
Among the beefs I had with that accountant in DC were, first, the ambiguity of the monitoring metrics – which allowed interpretation and therefore gaming of the system among research centers (after all, the ranking that performance monitoring brought about had a direct impact on the next year’s funding) – and second, the complete lack of understanding that even though a research project had not met its targets to the letter, there could nevertheless have been significant impact on the ground. It was the numbers that mattered. And I’m afraid I did, on more than one occasion, let my frustration with the system get the better of me, and interact with the Secretariat folks in less than my usual courteous way.
I worry that research for development is increasingly being devised and carried out to a formula, and the performance targets and monitoring are only exacerbating the problem. As I said from the outset, I have no issues with performance assessment per se. But when these exercises take away significant valuable time from active researchers in order to feed into a bureaucratic system (for certain months of the year I was spending over 50% of my time responding to external performance monitoring and auditing requests and having to ask researchers to take time away from their work to meet the deadlines which were imposed on us) then the balance is wrong.
Since retiring I’ve fortunately not had to deal with these issues any more – and it was the increased bureaucracy of international agricultural research that finally persuaded me to retire. I’m sure this won’t be the end of it. The CGIAR has gone through a major reform and reorganization program, and I’m sure it will have to devise new (better, though probably no less complicated) performance monitoring schemes in order to justify the shape, feel, direction, and expense of spending several years navel-gazing before moving its agricultural research agenda forward.
I’ve never thought of myself as a cynic. Unfortunately in the last 18 months before I retired I felt myself developing a cynical outlook, and I didn’t like what I saw. Time to get out. I’m happier now.
Postscript (20 March 2014)
Just after the beginning of 2014 I received an email from an old friend, Sirkka Immonen, who works for the CGIAR Independent Evaluation Arrangement, based in Rome. Sirkka and a colleague had made an analysis of the CGIAR’s performance management system, which they had published in the journal Evaluation. One of their compelling conclusions is ‘. . . that the CGIAR’s PM [Performance Measurement] experiment failed against all the intended purposes. There were inherent difficulties in developing a set of annual indicators with high validity in reflecting the kind of performance that research institutions are expected to demonstrate, on outputs, outcomes and impacts. The system therefore was dominated by simpler observations related to quantitative records and institutional issues with unclear connections to performance of research organizations.’ The whole article is certainly worth a read. And after I had read it myself, I did feel somewhat vindicated for the stance that I had taken and the many concerns that I had raised while trying to implement what I then considered a flawed system in the research context.
Immonen, S and LL Cooksy (2014). Using performance measurement to assess research: Lessons learned from the international agricultural research centres. Evaluation 20(1), 96–114. DOI: 10.1177/1356389013517444