* Any views expressed in this opinion piece are those of the author and not of Thomson Reuters Foundation.
As our understanding of the reliability and strength of factors predicting slavery outcomes continues to improve, policies, programmes and interventions can be better tailored and targeted
James Cockayne is director of the United Nations University Centre for Policy Research and Project Director of Delta 8.7.
The best global estimate we have is that more than 40 million people were victims of modern slavery or forced labour in 2016. With Target 8.7 of the UN Sustainable Development Goals, countries committed to take effective measures to eradicate forced labour, modern slavery and human trafficking by 2030. If governments are serious about meeting that target, around 9,000 people would need to be removed from such bondage each and every day. But as we enter 2019, we simply don’t know how close we are to achieving that aggressive rate of reduction, or which interventions are most effective at achieving it.
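That daily figure is a rough back-of-the-envelope calculation; assuming roughly 12 years between the start of 2019 and the 2030 deadline, and holding the 40 million estimate constant, the implied rate is:

\[
\frac{40{,}000{,}000 \ \text{people}}{12 \ \text{years} \times 365 \ \text{days/year}} \approx 9{,}100 \ \text{people per day}
\]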
Launching a report a year ago on the effectiveness of key parts of the UK’s anti-slavery activities, the UK’s Auditor General, Sir Amyas Morse, noted: “To combat modern slavery successfully … government will need to build much stronger information and understanding of perpetrators and victims than it has now.” A year on, while that information base continues to be built, it has become clearer that specific challenges relating to data collection and sharing will hinder advancement in this field.
Until relatively recently, data on these widespread human rights abuses was fairly meagre. As modern slavery and human trafficking have become a greater area of international concern, so too have research and analysis into their causes. As our understanding of the reliability and strength of factors predicting slavery outcomes continues to improve, policies, programmes and interventions can be better tailored and targeted. For policymakers, this means that anti-slavery work will become both more effective and more efficient, strengthening the case for investment in anti-slavery efforts.
A recent paper by Jacqueline Joudo Larsen and Pablo Diego-Rossel has made substantial strides in modelling slavery risk. As they acknowledge, though, there are limits to our current understanding of risk, and to the predictive capabilities of existing models. Individual factors such as age, gender, employment status, and feelings about household income and about one’s life may be predictors of slavery risk, but further research is needed to establish with greater certainty whether these or other factors are in fact predictive. The more data we have about modern slavery, the more accurate predictive models will become.
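To make that kind of individual-level modelling concrete, here is a minimal sketch that fits a logistic regression to synthetic data. Everything in it is an assumption for illustration – the feature names, the invented outcome and the model choice are not drawn from Joudo Larsen and Diego-Rossel’s actual data or methodology:

```python
# Purely illustrative sketch of individual-level risk modelling on synthetic data.
# Feature names, outcome and coefficients are invented for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical individual-level predictors (all synthetic).
age = rng.integers(15, 65, n)
is_female = rng.integers(0, 2, n)
is_unemployed = rng.integers(0, 2, n)
income_struggle = rng.integers(0, 2, n)   # e.g. "finding it difficult on present income"

# Synthetic outcome: risk loosely tied to the predictors, plus noise.
logit = -3.0 + 0.8 * is_unemployed + 0.6 * income_struggle + 0.02 * (40 - age)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, is_female, is_unemployed, income_struggle])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
print("Coefficients:", dict(zip(
    ["age", "is_female", "is_unemployed", "income_struggle"], model.coef_[0])))
```

The point of the sketch is simply that a model of this kind becomes more reliable as more, and better, data is fed into it.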
Even where good data is available, there are real and continuing barriers to sharing it. Key parts of the data and methods that we rely on for such research are proprietary. Businesses remain a key source of funding for slavery risk analysis, and may in fact become an even bigger one as countries impose new reporting and due diligence obligations on companies. There is a real risk that this leads to a fragmented evidence base trapped behind corporate walls, holding back our understanding of slavery risk. That would make anti-slavery investments less effective and less efficient. Everyone loses out.
A more effective approach would be to encourage the development of common methodologies and open data, and to invest in systems for data sharing and collective learning. To invest, in other words, in science. The good news is that the trend is clearly in the right direction. The Call to Action to End Forced Labour, Modern Slavery and Human Trafficking includes important commitments to data sharing; the International Conference of Labour Statisticians (ICLS) recently adopted new survey standards that will lead to better, more comparable data on forced labour prevalence in the years ahead; and the Alliance 8.7 Pathfinder process will allow countries to benefit quickly from new scientific insights.
Nonetheless, the pace of progress is slow – too slow to meet Target 8.7. The new survey standards set out by the ICLS will deliver high-quality, comparable data – but only in five to ten years. And the survey methods underlying this approach, and the existing Global Estimates, cost millions of dollars to execute. This is one reason that governments have begun experimenting with other techniques, such as multiple systems estimation (MSE) – but MSE is also constrained by what data is already available.
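For readers unfamiliar with MSE, the intuition can be shown with the simplest two-list (capture-recapture) case, sketched below using the Chapman estimator. Real applications to modern slavery combine several administrative lists and model the dependence between them; every number here is invented:

```python
# Minimal two-list capture-recapture sketch of the idea behind MSE.
# Real MSE studies use several lists and account for list dependence.

def chapman_estimate(n1: int, n2: int, overlap: int) -> float:
    """Estimate total population size from two overlapping lists.

    n1, n2  -- number of victims appearing on each list
    overlap -- number of victims appearing on both lists
    """
    return (n1 + 1) * (n2 + 1) / (overlap + 1) - 1

# Hypothetical example: two agencies' case lists with a known overlap (all invented).
police_list = 300   # victims known to police
ngo_list = 220      # victims known to NGOs
on_both = 40        # victims appearing on both lists

estimated_total = chapman_estimate(police_list, ngo_list, on_both)
print(f"Estimated total victims (observed and unobserved): {estimated_total:.0f}")
```

The estimate is only as good as the lists it draws on – which is exactly why MSE remains constrained by the data that already exists.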
Like many high-cost, slow-moving analytical processes, slavery prevalence estimation and risk analysis is ripe for digital disruption that can accelerate our knowledge in this area and translate it into policy. The Global Fund to End Modern Slavery is already experimenting with using social media and mobile technology for survey dissemination, promising significant cost reductions. Delta 8.7 has used machine learning to estimate official development spending commitments to fighting slavery and, with partners, will be bringing together computational scientists and machine learning experts to drive further work in this space. And computational analysis offers a path to rapid acceleration of the kinds of modelling begun by Joudo Larsen and Diego-Rossel.
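By way of illustration only, one machine-learning approach to spotting slavery-related spending is to classify free-text project descriptions. The toy pipeline below is a hypothetical sketch and does not reflect Delta 8.7’s actual data or methods:

```python
# Purely illustrative sketch: classify aid project descriptions as
# slavery-related or not. The tiny training set and labels are invented.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labelled project descriptions (1 = slavery-related, 0 = not).
descriptions = [
    "support for victims of human trafficking and forced labour",
    "strengthening labour inspection to combat child labour",
    "rural road rehabilitation and maintenance programme",
    "primary school construction and teacher training",
    "shelter and reintegration services for trafficking survivors",
    "agricultural irrigation infrastructure upgrade",
]
labels = [1, 1, 0, 0, 1, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(descriptions, labels)

# Score new, unlabelled spending records (also invented).
new_records = [
    "hotline and legal aid for survivors of forced labour",
    "urban water supply network expansion",
]
for text, prob in zip(new_records, classifier.predict_proba(new_records)[:, 1]):
    print(f"{prob:.2f}  {text}")
```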
We are on the verge of significant scientific breakthroughs in understanding what is likely to make someone vulnerable to modern slavery – and in tailoring programming and policies accordingly. There are still debates to be had over the methodologies and techniques needed to achieve these breakthroughs. But these discussions must be encouraged as they will ultimately build strong common methodologies, cultivate trust in data-sharing, and support the uptake of digital tools for unlocking evidence on modern slavery risk. Achieving Target 8.7 requires continued focus on the science of anti-slavery, and a willingness to think laterally – and digitally.