A recent debate in the European Commission seeks to establish EU rules for the fast-growing field of robotics, particularly around how to settle legal questions of compliance with ethical standards and liability for accidents involving automated systems such as driverless cars.
This is an issue of our time: machine learning has moved from the research lab into mainstream consumer and industrial markets, as cheap computing and the vast amounts of data generated by connected devices give machines new intelligent capabilities.
New intelligent systems
Examples range from the acclaimed Amazon Alexa and Google Home, which enable voice-activated home appliances, to the myriad online help, image and voice translation services, to advances in automated driving, robot-assisted surgery and cancer image screening.
A case in point is energy and utility management: Google's recent 40% reduction in data centre energy wastage using deep learning algorithms is testament to the stunning power of machine learning and artificial intelligence (AI) to optimize complex systems far beyond human abilities.
But while this is only the start, the EU's question over the visibility and accountability of automation is an important one, because the ability of these algorithms to "self-learn" is rapidly outpacing regulation of the consequences for human life when automation goes wrong. Who is to blame: the programmer, the manufacturer, or the consumer? If the functionality is invisible to the user, how can they know what it is doing? What rules and certifications are needed to regulate automation, and to protect humans, businesses and society, as these capabilities expand far beyond simple feedback and response?
New Asimov laws?
Isaac Asimov famously wrote the three laws of robotics back in 1942: a robot may not harm a human or allow a human to come to harm; it must obey orders from a human unless that violates the first law; and it must protect itself, except where that violates the other two laws. These are fine outcome-based goals, yet implementing them means the designers and users must know where the line of danger or ethics lies.
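As a toy illustration (every name here is hypothetical, not a real safety API), the three laws can be read as a strict priority ordering over constraints, which makes the core difficulty plain: the code is trivial, but deciding what counts as "harm" is not.

```python
# Toy sketch: Asimov's three laws as a priority-ordered permission check.
# An action is modelled as a dict of boolean judgements; in practice those
# judgements ("does this harm a human?") are exactly the unsolved part.

def permitted(action):
    """Decide whether an action is allowed, checking the laws in priority order."""
    # First Law (highest priority): no harm to a human, whether by
    # acting or by standing by while harm occurs.
    if action.get("harms_human") or action.get("harm_by_inaction"):
        return False
    # Second Law: obey a human order -- any conflict with the First Law
    # has already been excluded by the check above.
    if action.get("human_order"):
        return True
    # Third Law: self-preservation, but only where the laws above allow it.
    return not action.get("endangers_self")
```

The ordering of the `if` statements is the whole design: a lower law can never override a higher one, mirroring Asimov's hierarchy.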
So while the EU is contemplating a "kill switch" on automata, it still has a long way to go in defining acceptable use, controls, and the visibility of these programs and robots so as not to put humans in harm's way. That may be the extreme case: most scenarios will involve routine choices and automated transactions by machines that improve lifestyles, increase efficiency and manage energy. But equally, the scope for misuse and faults to creep in is exactly what the EU, the UK and many others are now discussing, because systems assurance and safety by design need to be evident in automation that will increasingly be the norm in our lives and business operations.
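The "kill switch" itself is the easy part to sketch; the hard part is everything around it. A minimal pattern, with all names purely illustrative, is a control loop that consults an externally settable stop flag before every actuation and fails safe once it is tripped:

```python
import threading

# Minimal "kill switch" sketch: a human operator or watchdog can trip the
# switch at any time, and the control loop refuses all further actions.
# Names are hypothetical; a real system would also need fail-safe behaviour
# if the check itself (sensor, network, power) fails.

class KillSwitch:
    def __init__(self):
        self._stopped = threading.Event()  # thread-safe one-way flag

    def trip(self):
        """Halt the machine; intended to be callable from any thread."""
        self._stopped.set()

    def active(self):
        return self._stopped.is_set()

def control_loop(switch, actions, actuate):
    """Run queued actions, stopping immediately once the switch is tripped."""
    done = []
    for action in actions:
        if switch.active():
            break  # fail safe: no further actuation after the trip
        actuate(action)
        done.append(action)
    return done
```

The design choice worth noting is that the switch is checked before each action rather than once at startup, so a trip takes effect within one loop iteration.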
Leading the intelligent enterprise
This will impact safety, productivity, the shape of the labor market and the future of the digital ecosystem as automation becomes a central strategy for all industries: smart cities, digital utility grids, connected transport, telehealth, 5G and the hyperconnected economy.