Artificial Intelligence Needs Ethical Controls

Artificial Intelligence (AI) promises to make our decision-making and daily habits more efficient, and to push back the natural limits on how we conceive and carry out higher-level tasks, from everyday choices to our careers and education. Optimism is high among AI companies and their investors: the industry is projected to reach between $36.5 billion and $100 billion USD in annual revenue by 2025, and investment was projected to grow 300% in 2017.

AT&T Foundry, Ericsson, and RocketSpace took a close look at this landscape in their report, The Future of Artificial Intelligence in Consumer Experience. This piece is the third in a series highlighting the perspectives and accomplishments of pioneering startups and companies disrupting the AI industry today, including Talent Sonar.

The report makes five projections:

1. Humans Have More Room to be Human

2. Be Everywhere as Data is Everywhere

3. Connectivity Instantly Powers Your Own Adventure

4. Consumers Go from One Click to Zero Clicks

5. Ethical AI Controls for Bias

Number 5 is central to the work Talent Sonar does. AI raises real ethical dilemmas, and human-driven bias controls are vital: they let companies account for these risks and mitigate them by curating AI datasets and algorithms. Left alone, AI algorithms learn from existing data sets and therefore reproduce the biases those data sets contain. By becoming conscious of this, technologists and platform developers can actively work to mitigate bias over the long run.

In talent acquisition specifically, we know that AI can perpetuate existing hiring patterns, producing a team whose members may all be smart and talented but are likely to be very similar to one another. With deliberate effort to build new datasets, however, a hiring process can be redesigned to attract diverse applicants and create a workforce whose breadth of diversity drives a company's success.

This post was published on the now-closed HuffPost Contributor platform.