Opinion: Giving scientific advice to government during a crisis

Covid-19 has drawn attention to the role that science and expert advice play in policymaking during an emergency, writes Professor Brian Collins (UCL Civil, Environmental & Geomatic Engineering) in an article co-authored with Kieron Flanagan from the University of Manchester.

This has exposed the Government’s Chief Scientific Advisor and Chief Medical Officer, and the way their advice feeds into government policy, to unprecedented levels of attention. At the same time, with the lines between policy and science being blurred, questions are being asked about what evidence informs policy and how.

UCL Public Policy has brought together two experts on public policy to present their views on how evidence informs policy, and on the challenges of doing so effectively and transparently during a crisis.

How it works

Dr. Kieron Flanagan - Senior Lecturer in Science and Technology Policy - Manchester Institute of Innovation Research, University of Manchester

How scientific expertise is sought, offered, taken up and used by policymakers has been the focus of much research in the broad area of science policy and governance.

Much of this work focuses on high-profile controversies such as GM food, or on policy disasters such as BSE, rather than on the day-to-day operation of scientific advice for policy. There is much we can learn from extreme cases, though it’s a mistake to assume that those cases are necessarily representative of how these advice-to-policy systems function more broadly. Partly driven by responses to previous disasters and controversies, the UK has developed a sophisticated system for integrating advice into policy. It draws on substantial in-house expertise within government, coupled with independent advice from standing or ad-hoc expert advisory committees and with research and analysis commissioned from independent researchers, and it includes a specific approach to expert advice for emergency response.

One interesting feature of these systems is how ‘national’ they are. Each government seeks and receives advice largely from its own national advisory system, even where that advice concerns challenges that cross borders and inevitably draws on the global scientific literature. A question that researchers investigating these systems have often asked themselves is: how do different national governments arrive at different responses to the same regulatory or policy issue? The predominantly national nature of the advisory process is, perhaps, part of the answer.

Global and national

The Covid-19 response illustrates this national versus global tension, with different decisions in different countries stemming from different approaches to modelling and analysis. Whilst international co-ordination and information sharing do happen in many areas of policy and regulation (for example, with respect to global standard setting), they are less evident in fast-moving, highly uncertain and rapidly unfolding emergencies. We can see the important role being played by the World Health Organisation in the present pandemic, but even here governments filter that advice through their own national advisory systems.

At the same time, the conditions and challenges of responding to a global challenge will be different in different places, and there will also be differences in what responses are feasible. Inevitably, advice about emergency response must be context specific. This makes national advisory systems indispensable - but at the same time places a great premium on having the right expertise involved.

Amongst other critics, the editor of The Lancet, Richard Horton, has argued that the epidemiological, public health and behavioural science expertise drawn on in the current incarnation of SAGE should have been bolstered with practical experience of intensive medical care and the treatment of severe respiratory problems. Others have criticised the SAGE process for not following the spirit of the post-BSE guidance on transparency in scientific advice to government. Time will tell how valid such criticisms are.

The public inquiry into the handling of BSE found that the UK’s early responses were undermined by a lack of practical experience with meat processing practices, compounded by secrecy in the process. As a result, the eminent scientists on the committee that advised on the initial BSE response made some recommendations that were impossible to implement in practice. This did not filter back to advisors and policymakers for some time, and precious time was lost in taking more practical action. The findings of the BSE Inquiry were major drivers of subsequent reform of UK practice, resulting in the advisory system we have today. We shall see what lessons are learned from the Covid-19 experience.

When an emergency erupts

Professor Brian Collins, CB, FREng - Professor of Engineering Policy, UCL - Former member of SAGE, the Scientific Advisory Group for Emergencies

There has been a lot of commentary on the role of scientific advice in helping governments across the world develop policies to take us through the Covid-19 pandemic with minimum impact. SAGE - the Scientific Advisory Group for Emergencies - is the UK body set up to coordinate such advice. Ten years ago, in my role as Chief Scientific Advisor at the Department for Transport, I sat on SAGE when the UK was dealing with the eruption of the Icelandic volcano that had a significant effect on civil aviation across Europe. I was subsequently a member of a team that advised the Government Chief Scientific Advisor and the Government on how SAGE might be better prepared.

Emergencies are situations that we hadn’t thought would happen or, if we had, ones that occur at a scale or speed that we’re unprepared for. Clearly it would be good if we imagined plausible situations and then put in place contingency plans to deal with them. In the UK, the Civil Contingencies Secretariat in the Cabinet Office has responsibility for identifying such situations in the form of a National Risk Register that prioritises risks according to likelihood and impact.

The most recent risk register, from 2017, shows that a pandemic was judged to have the highest impact and to be one of the most likely risks to occur. It’s therefore reasonable for the Covid-19 SAGE to have expected a high degree of preparedness for such an event across all the services and activities that would be affected. Clearly this means the NHS and public health bodies, but also care services throughout the community and healthcare equipment suppliers, to pick out obvious groups. It would appear that such an expectation was ill-founded.

Private influence

The bodies responsible for responding to the impact of Covid-19 are not just those of central government; indeed, following the privatisation of a number of relevant services in the last two decades of the last century, commercial and arm’s-length bodies are now responsible for delivering many of the services affected. Although their authority over their operations and their priorities for investment are, to some extent, set by central government, the public and the media seem still to hold central government accountable for outcomes. This lack of alignment between responsibility, authority and accountability is producing confusing policy messages and investment priorities for politicians, citizens and commercial operators. The governance context in which SAGE is currently working has thus been made confusing by privatisation, and more complex and possibly less effective by this misalignment, since the advice can be used in a wide range of contexts by different actors who are not necessarily aligned to a common purpose.

SAGE produces advice, not policy. Policy generation is informed by many other factors, mainly political and in some cases party political. Policy decisions may look as if they are ignoring SAGE-generated advice; this perception is amplified by the fact that, whilst SAGE advice and evidence are made public as soon as possible, policy generation and decision-making processes are not. This means that the link between scientific advice and policy isn’t clear, nor are the trade-offs made against other factors, resulting in potential damage to the trustworthiness of the enacted policies.

Science in practice

Although there will always be complaints from scientific communities not represented on SAGE, the scientific expertise that goes into government is as wide in disciplinary coverage as is practicably possible. Academics are brought up to be competitive and adversarial - the principle of peer review - but in these situations some degree of consensual behaviour is essential. From my own experience with the Icelandic volcano ten years ago, experts on every aspect of the event were available immediately and provided inputs. They ranged from volcanologists to experts in aircraft structures, from atmospheric physicists providing model-based insights to satellite sensors providing data, and from civil aviation safety regulators to national economists. The science was holistic and available thanks to decades of investment in the science base. This is still the case for Covid-19, a much greater challenge than the eruption of a volcano in Iceland.

The lack of systemic alignment of the various elements I have described makes SAGE less effective than it might be. The scientific evidence is as complete as it can be, and it evolves and improves as data from the emergency become available. This is how evidence works in an evolving situation, and this needs to be recognised. But the lack of transparency in policymaking confuses the actors involved, and this dilutes the sense of common purpose. It may make less effective the actions being taken by many to deliver on the mission of minimising the impact of Covid-19 on society as a whole.

Better governance of the context in which SAGE sits and operates would deliver better outcomes for the nation, making it more resilient to possible and probable future shocks.

This article was originally published in Wonkhe on 27 April.