It’s easy to understand how evidence leads to better policy outcomes. When a new drug is approved, we expect it’s because it has been rigorously tested. In the same vein, when governments implement new policies, whether it’s the new food guide or changes to the Canada child benefit program, these changes should also be informed by the best available evidence. But in practice, this process is more complicated than we think.
I recently attended the 2019 ‘What Works Global Summit’ in Mexico City. The conference brought together leaders in evidence-based decision-making to explore how evidence can be used to design, implement and review policies and programs for better outcomes on a range of issues including the environment, health, and global development.
During the mostly international program, it became clear that Canada is well respected both for the systems we have in place to support the use of evidence by governments at home, and the work we’ve done to advance the field in developing countries. Examples include our government’s well-established methods for public consultation, and important work like a rapid-response evidence team in Uganda (originally funded by Canada’s International Development Research Centre).
As a country, we have a strong reputation in the global evidence-use community. However, there is a lot for us to learn from this fast-growing, global field.
Here are a few of the main themes that came up over the conference.
How automation can help us use evidence
While there is growing interest in producing rigorous evidence to support policy-making, products like systematic reviews (highly reliable syntheses of existing studies) and evidence maps (analyses that chart existing knowledge as well as gaps in understanding) are only getting harder to produce. These products require an intensive search of the literature on a given topic, and with scientific papers being published at such a rapid rate, conducting these analyses is becoming increasingly expensive. Additionally, as more and more literature feeds into these reviews, the risk grows that selection bias will skew the outcomes of the study.
Could automation play a role in making it easier, faster, and cheaper to use evidence? Speakers outlined some ways automation can help, particularly around text-mining to identify authors, and to produce evidence maps showing which fields are over/under represented and where gaps exist.
An example of how this type of tech is already in use was a case study in which over 500,000 scientific abstracts were mined for research demonstrating a relationship between agriculture and deforestation, using an extensive dictionary of causal verbs (i.e. words that indicate a relationship between the two variables, such as ‘make’ or ‘force’). The search identified 27 studies showing a link between the two topics, allowing the researchers to focus their attention on those specific studies instead of having to pore through stacks of papers themselves.
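To make the idea concrete, here is a minimal sketch of causal-verb mining over abstracts. The corpus, term lists, and function name are all hypothetical, and the real project would use far richer dictionaries and linguistic processing; this just shows the basic filter: keep an abstract only if it mentions both topics and a causal verb.

```python
import re

# Hypothetical mini-corpus of abstracts (illustrative only).
ABSTRACTS = [
    "Expanding agriculture forces deforestation in tropical regions.",
    "We survey deforestation rates across three decades.",
    "Cattle farming makes forest clearing more likely near roads.",
]

# Toy dictionaries; a real study would use hundreds of entries.
CAUSAL_VERBS = {"make", "makes", "force", "forces", "drive", "drives", "cause", "causes"}
AGRICULTURE_TERMS = {"agriculture", "farming", "cattle", "crop"}
FOREST_TERMS = {"deforestation", "forest"}

def flags_causal_link(abstract: str) -> bool:
    """Return True if the abstract mentions both topics and a causal verb."""
    tokens = set(re.findall(r"[a-z]+", abstract.lower()))
    return (
        bool(tokens & AGRICULTURE_TERMS)
        and bool(tokens & FOREST_TERMS)
        and bool(tokens & CAUSAL_VERBS)
    )

# Researchers would then read only the flagged abstracts.
hits = [a for a in ABSTRACTS if flags_causal_link(a)]
```

On this toy corpus, the first and third abstracts are flagged (both name an agriculture term, a forest term, and a causal verb) while the purely descriptive second one is filtered out, which is exactly the narrowing-down the case study describes.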
One group has also been hosting evidence synthesis hackathons to create new open source tools to support the full use of new technology in the evidence process. They’ve hosted a few hacks so far, and the next event will be in Ottawa at the upcoming Collaboration for Environmental Evidence conference in June 2020.
We need to engage citizens
It’s one thing to have researchers and policy-makers involved in the evidence-based decision-making process. Engaging citizens is a whole other ballgame. How can leaders work with citizens to pressure governments to make sure that evidence is used — and how do we involve citizens early on, in the evidence-gathering process itself?
During my talk at the summit, I highlighted a Canadian example of how, in recent years, scientists and researchers have become far more active in advocating for governments to use and prioritize evidence. We also heard a Kenyan example of how their new constitution enabled an unprecedented amount of public consultation around new wildlife conservation legislation, creating more citizen ownership of, and support for, implementing the new legislation. An example from Ghana was also discussed, where citizens worked closely with civil society organizations to improve data collection around local sanitation. Their engagement resulted in better evidence and, ultimately, better policy outcomes.
What counts as evidence?
Previously, the global ‘What Works’ community focused narrowly on the most rigorous form of evidence: systematic reviews. Systematic reviews are the gold standard in evidence, but they can be time-consuming, often taking a year to produce a single review. Again, this means they’re also expensive and not very helpful when policy-makers need information on a tight timeline. This year, there was a lot of talk about the need to expand the summit’s focus to include other types of evidence, like the above-mentioned evidence maps and rapid response evidence services.
The Centre for Rapid Evidence Synthesis in Uganda is a really exciting example of this. Policy-makers can call the centre with a question, and the centre’s staff will work with them to clarify and frame it. Then, they send back a rapid evidence synthesis brief, anywhere from one to 28 days later. This is much faster than a full systematic review, and because the service is demand-driven (each brief is a product the decision-maker actually requested), it sees a high degree of uptake.
Canada is already seen as a leader for the structures we have in place to ensure that evidence is used in decision-making, but with growing interest in the field and technology offering new disruptions, this isn’t a time to rest on our laurels. With continued attention and investment, we can ensure better policy outcomes for Canadians, and that we remain a global leader in this field.