By Jovan Kurbalija and Sorina Teleanu
The summer of 2017 was the summer of artificial intelligence (AI), as developments in the field of AI and robotics picked up pace globally. On the technological side, Internet companies continued to invest in AI technologies aimed at enhancing their products, while researchers explored ways of addressing concerns such as privacy, ethics, and accountability in AI decisions. Countries elaborated strategic AI plans aimed at positioning themselves at the forefront of developments. Alerts about the risks of AI for society seemed to become more alarming, while parliaments and governments discussed the economic, social, and ethical implications of AI. More details on some of these developments in our Summer Diary of Artificial Intelligence…
Major breakthroughs in AI
AI algorithms involve judgements and decision-making, replacing similar human processes. But, while humans can explain why they make certain decisions, this is not (yet) the case with AI systems. Concerns about discrimination and bias in decisions made by AI systems are pushing researchers to keep looking for ways to make algorithms ‘accountable’. In one example of such work, researchers at the Massachusetts Institute of Technology (MIT) are one step closer to finding a way to determine why an AI system makes one decision over another (e.g. why a driverless car makes certain decisions while on the road). Having looked at how artificial neural networks process information, they found that individual neurons in such networks could be highly correlated with high-level concepts involved in decision-making (such as identifying patterns and concepts in an image). In practical terms, identifying the specific neuron in an artificial neural network that is responsible for a certain decision could help explain why the AI system made that particular decision.
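The underlying idea can be illustrated with a minimal sketch: measure how strongly each neuron's activations correlate with the presence of a high-level concept across many inputs. Note that the data below is entirely synthetic and the correlation approach is a simplified stand-in for the MIT researchers' actual method, used here only to convey the intuition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data for illustration: activations of 50 neurons over 200 inputs,
# plus a binary label saying whether each input contains some high-level
# concept (e.g. 'the image shows a road sign').
n_inputs, n_neurons = 200, 50
concept = rng.integers(0, 2, size=n_inputs).astype(float)
activations = rng.normal(size=(n_inputs, n_neurons))
# Artificially make neuron 7 respond strongly when the concept is present.
activations[:, 7] += 3.0 * concept

def concept_correlation(acts, labels):
    """Pearson correlation between each neuron's activation and the label."""
    acts_c = acts - acts.mean(axis=0)
    labels_c = labels - labels.mean()
    cov = acts_c.T @ labels_c / len(labels)
    return cov / (acts.std(axis=0) * labels.std())

corr = concept_correlation(activations, concept)
best = int(np.argmax(np.abs(corr)))
print(f"Neuron most aligned with the concept: {best} (r = {corr[best]:.2f})")
```

In this toy setup the procedure singles out the neuron that was made to track the concept; in a real network, finding such concept-aligned neurons is what would let researchers link an individual decision back to an interpretable cause.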
Efforts are also being made to enhance the efficiency of AI systems, as well as to make them safer. Researchers from OpenAI and DeepMind have been working on an AI algorithm that learns from human feedback, in an attempt to address problems associated with reinforcement learning ‒ an area of machine learning in which AI systems are rewarded for taking the right actions to complete a task. The researchers proposed a method in which a reward predictor is trained on human judgements, and its predictions are fed back into the reinforcement learning algorithm to shape the agent’s behaviour. DeepMind is also working on new technologies for empowering AI systems with imagination. In one such proposed technology, an imagination-based planner could perform imagination steps before taking an action (proposing an imagined action and evaluating it). Another proposal describes imagination-augmented agents, which could interpret predictions from a learned environment model to develop plans for decisions and actions.
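The human-feedback idea can be sketched in a few lines. In the sketch below (synthetic and simplified, not the researchers' actual implementation), a human repeatedly compares pairs of behaviour segments, and a simple linear reward predictor is fitted so that preferred segments receive higher predicted reward, using a logistic (Bradley-Terry) preference model; the simulated human and all feature vectors are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each behaviour segment is summarised by a feature
# vector; a (simulated) human compares pairs of segments and picks the one
# they prefer. The reward predictor r(x) = w . x is fitted so that preferred
# segments get higher predicted reward.
dim = 4
true_w = np.array([1.0, -0.5, 2.0, 0.0])    # stand-in for the human's taste

def human_prefers_a(a, b):
    return true_w @ a > true_w @ b           # simulated human judgement

w = np.zeros(dim)                            # learned reward parameters
lr = 0.5
for _ in range(2000):
    a, b = rng.normal(size=dim), rng.normal(size=dim)
    pref = 1.0 if human_prefers_a(a, b) else 0.0
    # P(a preferred) = sigmoid(r(a) - r(b)); gradient step on log-likelihood.
    p = 1.0 / (1.0 + np.exp(-(w @ a - w @ b)))
    w += lr * (pref - p) * (a - b)

# Check how often the learned reward ranks segments the way the human does.
agree = np.mean([
    (w @ a > w @ b) == human_prefers_a(a, b)
    for a, b in (rng.normal(size=(2, dim)) for _ in range(500))
])
print(f"agreement with simulated human: {agree:.0%}")
```

In full human-feedback reinforcement learning, a reward predictor of this kind replaces the hand-written reward function inside the reinforcement learning loop, so the agent optimises behaviour the human actually prefers.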
In the field of business applications, Amazon has launched a new service that uses AI to help identify and protect sensitive data stored in Amazon Web Services (AWS). The service, called Amazon Macie, relies on machine learning to discover and classify sensitive data such as personally identifiable information, and protect it from breaches, data leaks, and unauthorised access. Access to such data is continuously monitored in order to detect any suspicious activity, based on access patterns. Detailed alerts are generated when risks of unauthorised access or inadvertent data leaks are identified. In addition, Facebook has explained how it combines AI and human expertise to ‘keep terrorist content off’ the social media platform.
AI standards under development
Continuing its efforts to address ethics and privacy concerns related to AI systems, the Institute of Electrical and Electronics Engineers (IEEE) has launched a project for the creation of a standard for a personal data AI agent. The standard will describe the technical elements needed to create and grant access to a personalised AI that will comprise inputs, learning, ethics, rules, and values controlled by individuals. It will be aimed at educating policymakers and the private sector on the need to create mechanisms allowing individuals to train personal AI agents to manage the use of their personal data. This work will be in line with the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems.
Big AI strategies in the making
While AI technologies continue to evolve at a fast pace and find more and more applications in various areas, countries increasingly recognise that they need to keep up with this evolution and, indeed, take advantage of it. China, for example, has elaborated an AI development plan which is set to transform the country into ‘the world’s premier AI innovation center’ by 2030. As intermediary milestones, companies and research facilities are expected to match the level of other countries such as the USA by 2020, while by 2025, AI is expected to become ‘a key impetus for economic transformation’. Financial investments in achieving these goals have risen to a total of $150 billion.
Taiwan is also set to invest $527 million in the development of AI, as part of a plan whose main focus will be on the creation of research and development labs. Funds will also be used to support initiatives in five areas: AI-based public services, adding value to sectors like medicine and finance using AI, helping companies incorporate AI, promoting public participation in AI development, and building up innovation capacity for smart robotics.
AI for good
Ethical issues are at the core of AI developments. AI companies are trying to ‘do good’ or at least to make sure that they ‘do not do evil’.
The AI for Good Global Summit, organised in early June by the International Telecommunication Union (ITU), in partnership with other UN agencies, looked at the main trends in the search for good and useful applications of AI. Representatives of governments, intergovernmental bodies, industry, civil society, and the research community discussed the role of AI in addressing global challenges such as poverty, hunger, and health, as well as the implications of AI developments for ethics, privacy, and security.
In line with its commitment to use AI to promote sustainable development around the world, Microsoft has launched two new initiatives. The AI for Earth initiative will focus on AI projects in the areas of agriculture, water, biodiversity, and climate change, while the Microsoft Research AI lab will focus on identifying solutions for AI challenges, as well as developing ‘a more general, flexible AI’.
Alarms about AI risks
Tesla CEO Elon Musk has warned that AI ‘is a fundamental existential risk for human civilization’, and called for precautionary and proactive government intervention. This is not the first time Musk has issued such warnings about the possible dangers of what experts call ‘general AI’ (i.e. complex AI systems that can replicate humans’ intellectual capabilities). But some voices criticise him for distracting attention from challenges related to the current ‘narrow AI’ already in use in areas such as self-driving cars.
Musk was also among the signatories of a letter sent to the UN on the dangers of autonomous weapons. Representatives of over 100 companies working in the field of AI and robotics have told the UN that ‘lethal autonomous weapons [...] can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways’. They have asked the organisation ‘to find a way to protect us all from these dangers’. The letter comes in the context of a decision taken by the UN Conference of the Convention on Certain Conventional Weapons to establish a Group of Governmental Experts on Lethal Autonomous Weapon Systems. The group was supposed to have its first meeting in August, but this was postponed until November.
But also some optimism
According to a survey conducted by the World Economic Forum, millennials believe that technologies (including AI and robotics) are creating jobs instead of destroying them.
Parliaments and governments in search of policy solutions for AI challenges
In the context of both pessimistic and optimistic views about AI’s future, governments are trying to better understand the implications of AI and to identify policy solutions for addressing them.
In the UK, the House of Lords created a Select Committee on Artificial Intelligence, to explore the economic, ethical, and social implications of AI. The Committee, which has to deliver its report by 31 March 2018, has recently launched a public call for evidence on the implications of AI, asking for public input on issues such as the current state of AI and the pace of development, the impact of AI on society, ethical implications, and the role of government.
Germany’s Federal Ministry of Transport and Digital Infrastructure has adopted an action plan for the implementation of a set of ethics guidelines for the programming of automated and networked driving systems. The guidelines contain principles such as: automated and networked driving is ethically required if the systems cause fewer accidents than human drivers; in the event of danger, the protection of human life always has top priority; in the case of unavoidable accidents, any qualification of people according to personal characteristics (age, sex, physical or mental constitution) is not permitted; and drivers should have full control over what personal information is collected from their vehicles.
In South Korea, the government has announced plans to reduce tax benefits previously granted to the automation industry, in what seems to be a policy aimed at making up for lost income taxes as workers are gradually replaced by robots.
The US Economics and Statistics Administration analysed the employment impact of autonomous vehicles, and concluded that such vehicles are expected to have a potentially profound impact on labour demand. While workers in motor-vehicle operator jobs might have difficulty finding alternative employment, on-the-job drivers (who use vehicles to deliver services or travel to work sites, for example) are more likely to benefit from greater productivity and better working conditions offered by autonomous vehicles. But the study also underlined that it is still not clear to what extent autonomous vehicles could eliminate certain occupations, resulting in job losses.
* * *
The summer of Artificial Intelligence announces a busy autumn as well. At the Geneva Internet Platform and DiploFoundation, we will continue to provide updates on the major developments in the field of AI (via the GIP Digital Watch observatory) as well as reflections on trends and implications. In particular, we will focus on the intersection between AI and policy, on issues such as ethics and various ‘futures’, ranging from AI and the future of work to AI and the future of wars. More will follow…