The IPBES review proposal – how not to evaluate a science-policy interface

[Background]

by Carsten Neßhöver, NeFo & UFZ Science-Policy Expert Group

As a new institution, IPBES should be checked carefully on its performance in terms of effectiveness, relevance, credibility and legitimacy. To this end, the plenary decided to conduct a review early in the process. The proposal now under discussion at IPBES-4 would not allow for a proper evaluation, as it misses several baselines regarding independence, transparency and resources. It also seems to ignore the scientific expertise available for analysing science-policy interfaces (SPIs).

Remember “ClimateGate”? Back in 2009/2010, the IPCC came under heavy attack due to some (in the end minor) mistakes in its reports and hacked emails from experts involved, but also due to its often non-transparent way of operating. This led to a broad external evaluation [pdf] of its work by the InterAcademy Council (and other bodies), with proposals for improving its structures and processes (though in the end, not all of them were taken up).

For IPBES, it was decided early on (Decision IPBES/2/5) [pdf] that it should be evaluated by an independent external body, in order to do justice to the complex nature of its work (a set of interlinked functions and a quite demanding work programme). At IPBES-2 it was decided that this review should comprise a mid-term and an end-term review during the first work programme, running until 2018.

At IPBES-3, a first idea for such a review was presented, and only now, for IPBES-4, is a concrete proposal available (IPBES/4/16) [pdf] – leaving little time for a mid-term review.

The proposal

To account for the time constraints, the document proposes that a (smaller) mid-term review should already be presented to the next plenary (IPBES-5) in 2017, and the end-term review just a year later, in 2018 (for IPBES-6), in order to inform the set-up of a new work programme. As time is too short for a call for institutions to host the review board, it is proposed to hand this task directly to one global scientific organisation (ICSU), supported by another (ISSC).

A call for experts is not intended; rather, these two organisations would select the experts based on a set of criteria (see the annex to document IPBES/4/16) [pdf]. This external review would be complemented in parallel by an internal one conducted by the MEP and the Bureau. Few resources would be provided: 5 and 10 experts would conduct the mid-term and end-term review, respectively. The experts would only visit the next plenary and would be supported by a few person-months to fund a project manager at the hosting institution (3 months for the mid-term review, 6 months for the end review).

What makes a good review of a science-policy interface like IPBES?

When analysing science-policy interfaces (SPIs), you want to check whether they reach their overall goals and are perceived as credible, relevant and legitimate (“CRELE”). The early constitutional document of IPBES (the Busan Outcomes [pdf]) also stresses this. To properly address these dimensions, you need to look at the goals/functions of the SPI (e.g., for IPBES, the four goals and the objectives of the work programme based on them), its structures (for IPBES: plenary, MEP and Bureau), its processes (for IPBES: the procedures for conducting its work) and its outputs (the reports and other products).

Ultimately, the goal is to achieve outcomes, e.g. via changed policies or changed behaviour. The figure below illustrates these dimensions. It also adds another attribute recently proposed in addition to CRELE (Sarkki et al. 2015): iterativity, the perspective that SPI activities need to be reflexive in order to ensure institutional learning and improvement, and to support learning by the actors involved.

Thus, when applied to IPBES, this conceptual model shows that an evaluation needs to look at quite different elements of the work so far: how the overall goals are translated into activities, and how the IPBES bodies (the structure) and processes (e.g., conducting assessments) fit together to achieve high-quality (“credible”) but also “relevant” outputs – and, in the end, outcomes.

[Figure: Dimensions for analysing science-policy interfaces (based on work in the SPIRAL project; see also Young et al. 2013, p. 39ff.)]

Consequences for the IPBES review

The review proposal remains quite superficial regarding what should be evaluated. It distinguishes between the administrative and scientific functions mentioned in earlier IPBES documents, but does not make explicit links to the original goals – assessments, identification of knowledge needs, policy tools and methodologies, and capacity building. The different functions also fall under the responsibility of different IPBES bodies, so an evaluation along these lines might become quite blurry regarding the structural elements of IPBES.

Moreover, the proposed review would place the (independent) external review side by side with an internal one by the MEP and the IPBES Bureau, with unclear links between the two (and no clear separation of what is evaluated by whom). This would strongly reduce the credibility of the review process, as independence is key to its acceptance by IPBES’ stakeholders and governments.

In addition, a detailed review of an SPI needs time and resources to really look inside the process: the work of its bodies and its procedures. This can only be achieved by evaluators with an appropriate background who are allowed, and equipped, to participate in the process with a clear mandate (e.g. attending meetings). Further, the evaluators need to be able to gather information directly from those involved, e.g. via (group) interviews and/or questionnaires. The review proposal foresees such methods only to a very limited extent. Instead, most of the reviewers’ conclusions would have to be based on the analysis of IPBES documents.

This procedure will be insufficient and puts at risk the credibility, and probably the relevance, of the IPBES review, as it may fail to reveal the crucial points that might need adaptation for an effective IPBES in the future. Furthermore, a number of scientific analyses of IPBES already exist and should be taken into account in the review (see the list of papers on the NeFo website).

Another crucial element is the independence of the review board: handing it over to an organisation that has been lobbying for IPBES for many years (surely for good reasons) and that has strong personal ties to IPBES bodies might not be the appropriate option here. Too much pragmatism in review processes will do more harm than good, as it makes the process prone to attack.

In short, the proposed review process will need to undergo major changes with regard to openness, ambition and methods in order to become a credible and relevant review. It deserves the same high scientific standards that IPBES maintains in its assessment processes: using the best available knowledge from experts with suitable and broad expertise.