Affiliated organization: World Wide Web Foundation
Type of publication: Report
Date of publication: June 2017
All over the world, AI systems filter email spam, recommend things for people to buy, provide legal advice on everything from parking tickets to asylum applications, and in some places can determine whether you are paid a visit by the police.
The report provides a detailed overview of the risks and opportunities that artificial intelligence (AI) poses for low and middle income countries, as well as the key elements that can be leveraged to maximise the benefits and minimise the risks generated by AI.
The opportunities
AI is already enabling a wave of innovation across many sectors of the global economy. It helps businesses use resources more efficiently (e.g. through automated planning, scheduling, optimised workflows, optimised supply chains, optimised logistical pathways) and enables entirely new business models to be developed, often built around AI’s powerful ability to interrogate large data sets. Many businesses in low and middle income countries will benefit from these AI capabilities, translating into greater opportunities for small entrepreneurs to develop new businesses.
Across Africa, micro-credit platforms, while sometimes controversial, are leveraging AI to define how to measure risk when potential clients do not have a traditional credit ‘footprint’. AI is also used for fraud detection and operational optimisations as part of these platforms.
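As a purely illustrative sketch of this kind of alternative-data risk scoring (the behavioural features, synthetic data and model choice below are assumptions for illustration, not taken from any actual platform), a lender without traditional credit histories might train a simple classifier on signals such as mobile-money activity:

```python
# Illustrative sketch only: a toy credit-risk scorer trained on hypothetical
# "alternative data" features (e.g. mobile-money activity) instead of a
# traditional credit history. Not based on any specific platform's model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical behavioural features a lender might observe.
monthly_topups = rng.poisson(8, n)            # airtime top-ups per month
wallet_balance = rng.gamma(2.0, 30.0, n)      # average mobile-money balance
bills_on_time  = rng.uniform(0, 1, n)         # share of bills paid on time

# Synthetic "repaid the loan" label, loosely tied to the features.
logit = -1.5 + 0.05 * monthly_topups + 0.01 * wallet_balance + 2.0 * bills_on_time
repaid = rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([monthly_topups, wallet_balance, bills_on_time])
X_tr, X_te, y_tr, y_te = train_test_split(X, repaid, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]      # estimated probability of repayment
print(f"AUC on held-out applicants: {roc_auc_score(y_te, scores):.2f}")
```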
These advancements promise to provide further dynamism to local economies by reducing the transaction costs associated with a lack of information. This also applies to basic government data: there are expectations that AI may help to cost-effectively improve the quality of national statistics (for example on employment and wealth) that are needed for good economic planning and policy-making.
There are plenty of instances where AI is being used to improve delivery of public services and public goods in low and middle income countries.
In other cases, AI has been used to improve police coverage, such as in managing traffic. In Uganda, AI is used to advise individuals and emergency vehicles on optimal routes, to dynamically redeploy a limited number of traffic police officials, and to analyse possible reconfigurations of the road network to remove bottlenecks.
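At its core, the routing part of such a system is a shortest-path computation over a road graph weighted by current travel times. The minimal sketch below illustrates the idea with Dijkstra's algorithm; the junction names and travel times are hypothetical, and the Ugandan system's actual implementation is not described in the report.

```python
# Minimal sketch of the routing idea: given current travel times on road
# segments, compute the fastest route with Dijkstra's algorithm.
# Junction names and times are invented for illustration.
import heapq

def fastest_route(graph, start, goal):
    """graph: {node: [(neighbour, minutes), ...]} -> (total_minutes, path)."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, cost in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (minutes + cost, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical junctions with current travel times in minutes.
roads = {
    "Hospital": [("JunctionA", 4), ("JunctionB", 9)],
    "JunctionA": [("JunctionB", 3), ("Market", 12)],
    "JunctionB": [("Market", 5)],
}
print(fastest_route(roads, "Hospital", "Market"))
# -> (12, ['Hospital', 'JunctionA', 'JunctionB', 'Market'])
```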
In other cases it has been used for environmental ends. In Kenya, for example, the World Wildlife Fund (WWF) supports the use of an AI device deployed alongside drones. After nine months, over a dozen hunters had been apprehended in the Maasai Mara.
AI has also been used in agriculture, including to identify crop disease with a smartphone. Mcrops, developed in Uganda, is a tool for diagnosing viral diseases in cassava plants.
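The general technique behind this kind of smartphone diagnosis is image classification: a model maps a photo of a leaf to a disease class. The sketch below is not the Mcrops implementation; the network architecture, image size, class labels and dummy training data are assumptions chosen only to illustrate the approach.

```python
# Generic sketch of leaf-photo classification with a small convolutional
# network, trained here on random dummy data purely to show the workflow.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 3  # e.g. healthy / mosaic disease / brown streak (illustrative labels)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),       # a resized smartphone photo
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy images and labels stand in for a real labelled dataset of leaf photos.
images = np.random.rand(32, 128, 128, 3).astype("float32")
labels = np.random.randint(0, NUM_CLASSES, size=32)
model.fit(images, labels, epochs=1, verbose=0)

print(model.predict(images[:1], verbose=0).round(2))  # class probabilities for one photo
```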
Finally, AI has been used to prevent and predict natural disasters. The Red Cross/Red Crescent Climate Centre has an on-going project with Togo’s Nangbéto Dam, which frequently overspills, causing great disruption to the livelihoods of people living downstream. In the past models were poor at predicting the likelihood of overspill, but using a combination of crowdsourced information (including by mobile phone) and AI techniques, an improved model of overspill prediction was developed.
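Conceptually, such a model combines physical measurements with crowdsourced signals in a classifier that flags high-risk days. The toy sketch below is not the Climate Centre's model; every feature, threshold and data point is a synthetic assumption used only to illustrate the approach.

```python
# Toy sketch of overspill prediction: combine rainfall, reservoir level and
# crowdsourced SMS reports into a classifier that flags likely overspill days.
# All data and relationships below are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
days = 2_000

rainfall_mm     = rng.gamma(2.0, 10.0, days)      # upstream rainfall
reservoir_level = rng.uniform(0.4, 1.0, days)     # fraction of dam capacity
sms_reports     = rng.poisson(rainfall_mm / 10)   # crowdsourced reports of rising water

# Synthetic target: overspill is more likely with a full reservoir, heavy rain
# and many reports from people living near the river.
risk = 0.03 * rainfall_mm + 3.0 * reservoir_level + 0.2 * sms_reports
overspill = risk + rng.normal(0, 0.5, days) > 4.0

X = np.column_stack([rainfall_mm, reservoir_level, sms_reports])
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X[:1500], overspill[:1500])

print(f"Held-out accuracy on synthetic days: {model.score(X[1500:], overspill[1500:]):.2f}")
```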
AI-based automated translation and voice recognition systems could have significant impact in countries with multiple languages. This is the case for numerous low and middle income countries, including India, Indonesia and Nigeria. This would particularly benefit marginalised groups who experience disproportionate rates of illiteracy.
The risks
A lot of attention has been given to the upheaval of employment markets that AI will cause in high-income countries. However, the World Bank's World Development Report (2016) estimates that the ‘share of occupations that could experience significant automation is actually higher in lower income countries than in higher ones, where many of the jobs susceptible to automation have already disappeared’; this concerns about two thirds of all jobs.
While the impact on the employment market for many Indian men could be significant, the picture for women across the world could be even more devastating: across 17 countries in the Middle East, Northern Africa and Sub-Saharan Africa, just 14% of women were in full-time formal employment (an indicator of a ‘good’ job), compared with 33% of men.
There is also the risk of ‘brain drain’ in the AI space. Sharma Punit describes the case of 25-year-old Tushar Chhabra, co-founder of Cron Systems, which builds internet of things-related solutions for the defence sector. He is quoted as saying that an Indian Institute of Technology (IIT)-educated engineer based in the US and working on AI for seven years “asked for Rs2.5 crore [~$375,000] per annum as salary. As a start-up you cannot afford that price.”
This could result in a situation where value produced in low and middle income countries is extracted into high income countries, echoing the exploitation of minerals and natural resources in Africa by Western countries in the nineteenth century.
There is also a risk that there will be over-reliance on AI. It is important to recognise the limitations of data analysis. AI today is capable of recognising patterns, and large and diverse datasets can throw up many patterns indeed. Some are meaningful, others are not. Correlation does not equal causation. This should be borne in mind as our use of AI for data analysis increases, especially when used to inform public policy.
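A small synthetic experiment makes the point concrete: with enough unrelated variables, some will correlate strongly with any outcome purely by chance, so a ‘pattern’ surfaced by an AI system is not automatically evidence of a causal relationship. The numbers below are invented for illustration only.

```python
# Why "more patterns" is not "more insight": among many unrelated indicators,
# some will correlate strongly with any target purely by chance.
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_features = 50, 2_000

target = rng.normal(size=n_samples)               # e.g. some policy outcome
noise = rng.normal(size=(n_samples, n_features))  # indicators unrelated to it

# Correlation of each unrelated indicator with the target.
corrs = np.array([np.corrcoef(noise[:, j], target)[0, 1] for j in range(n_features)])
print(f"Strongest spurious correlation: {np.abs(corrs).max():.2f}")
print(f"Indicators with |r| > 0.3 by chance alone: {int((np.abs(corrs) > 0.3).sum())}")
```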
Low and middle income countries traditionally have larger informal economic sectors than richer countries, with many workers paid in cash. This makes it difficult to identify the income tax base and to collect income tax effectively, which has often led these countries to rely on flat consumption taxes, such as VAT, that are easier to collect. If automated agents, such as chatbots or mechanical robots, perform the majority of work, the potential tax base is eroded further, leading to lower government revenues.
There are several ways in which AI could undermine democracy in low and middle income countries. Authoritarian regimes could use AI for surveillance, for example by identifying and targeting political opponents based on personal data. These risks could become greater as smartphone penetration increases.
There are also concerns that AI may be used to spread ‘fake news’ and misinformation around election periods. There have been concerns expressed about how this has allegedly happened around the recent Brexit referendum in the UK and the Presidential election in the USA, through the microtargeting of individual voters with persuasive information based on an assessment of interests, personality type, and other criteria.
The context
The quality and quantity of available data is critical to the success of AI systems. Huge volumes of data are now available in low and middle income countries thanks to the vastly expanded number of mobile phone users.
The sectors where there is the most need for action, like education, health and food security, are not always the sectors where large amounts of data are generated in a helpful format. In Ghana, for example, many records are still paper-based; on the other hand, banks and mobile phone companies collect reams of useful data across low and middle income countries.
This demand for more data increases the indirect risks associated with AI and its enabling ecosystem. These developments worsen existing concerns about privacy and raise new ones. Furthermore, high levels of corruption in some developing countries, together with weak data infrastructure through which data is more likely to leak, mean that there is work to be done to secure data properly.
Some of the optimism about the application of AI in developing countries rests on the ubiquity of mobile phones. Yet, across low and middle income countries, internet and mobile penetration varies significantly between urban and rural areas, age groups and genders.
AI may be good at identifying problems and recommending solutions, but the actual implementation of those solutions (e.g. medical treatment) may require technical, economic and socio-political infrastructure that is lacking or weak in many low and middle income countries. Before designing solutions to be rolled out, it is essential to ensure that the key elements of an enabling infrastructure, such as the governance institutions, policies and laws required for an effective roll-out, are in place.
In order to maximise the benefits of AI it is vital that populations in low and middle income countries have the skills to develop and deliver programs. This is the case for all levels of society. For poor communities, STEM skills could be a path to economic empowerment. For programs intended to work under government supervision, there is the need to be mindful of the limited government capacity in many low and middle income countries.
AI developers in low and middle income countries are not well plugged into the larger-scale coordinated networks of AI development. The ‘Partnership for AI’, for example (an initiative including Amazon, Facebook, Google, Microsoft, IBM, and Apple focused on establishing best practices for artificial intelligence systems and educating the public) has not successfully engaged actors from the developing world.
Moreover, concerns about bias are compounded by the severe lack of diversity in the AI field, raising fears that bias may be considered less of a problem or may not be identified when it occurs. Kate Crawford has written compellingly about what she terms ‘artificial intelligence’s white guy problem’, whereby a lack of representation can limit the perspectives and experiences of AI’s creators, leading to a greater possibility of “like me” bias.
Potential areas for action
Create bridges between developers in low and middle income countries and high income countries. Provide economic support to AI developers from low and middle income countries to attend global AI conferences where many of the informal networks are built and sustained. Provide the necessary resources (technical, financial and human) to embed closer relationships, collaboration and partnerships between AI initiatives in low and middle income countries.
Ensure the interests of low and middle income countries are represented in key debates and decisions relating to AI. Advocate for the specific circumstances of low and middle income countries to be considered in global efforts to tackle news silos and fake news. Advocate for a more inclusive ‘Partnership on AI’ and IEEE that actively involve developers from a variety of low and middle income countries. Support and further develop existing South-South collaboration efforts and initiatives on AI.
Facilitate access to open, good quality data to enable the development of AI technologies, while ensuring personal data is not misused. Promote the transparent and accountable use of personal data, and ensure proper data protection standards are in place (by governments and companies). Promote access to free, open, anonymised, curated datasets so that ethical developers have access to good data sets to train AI systems while also ensuring privacy.
Maximise the opportunities for AI to be used for the public good, with a particular focus on marginalised groups. Support the development of impactful tools that use AI to improve the delivery of public services and public goods, in particular those that allow delivery of services to marginalised groups. Advocate for governments to adopt these AI tools to deliver public services and public goods, particularly to marginalised groups. Ensure systems of liability, accountability (including the ‘right to an explanation’), justification, and redress for decisions made on the basis of AI.
The Wathinotes are either summaries of publications selected by WATHI that are faithful to the original abstracts, modified versions of the original abstracts, or extracts chosen by WATHI for their relevance to the theme of the Debate. When publications and their abstracts are available only in French or in English, WATHI translates the selected extracts into the other language. All Wathinotes link to the original, full-length publications, which are not hosted on the WATHI website, and are intended to promote the reading of these documents, the fruit of research by academics and experts.