The Global Search for Education: AI, Algorithms and What We Should All Be Thinking About

"Algorithms are as biased as the humans who designed or commissioned them with a certain intention. We should therefore spark an open debate about the goals of software systems with social impact." -- Ralph Müller-Eiselt

Biased algorithms are everywhere, so at a critical moment in the evolution of machine learning and AI, why aren't we talking about the societal issues they pose?

In her book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Cathy O'Neil points out that “big data processes codify the past” but they do not “invent the future.” How do we feel about machines influencing our human institutions? Who protects the quality of life when algorithms are in charge? O’Neil argues that the human touch is essential to “embed better values into our algorithms.”

Ralph Müller-Eiselt is an expert in education policy and governance and heads the Bertelsmann Foundation's taskforce on policy challenges and opportunities in a digitalized world. In his latest “Ethics of Algorithms” project (he is co-author of Die Digitale Bildungsrevolution; English title: Education’s Digital Revolution), he takes a close look at the consequences of algorithmic decision-making and artificial intelligence in society and education. He joins The Global Search for Education to talk about AI, algorithms and what we should all be thinking about.

"It is up to us to determine whether AI in education will be a catalyst for strengthening social equity – or for weakening it." -- Ralph Müller-Eiselt

Ralph, how do we ensure that algorithms are always conceived to achieve a positive impact for societies and education, rather than posing a danger or a risk?

Algorithms are as biased as the humans who designed or commissioned them with a certain intention. We should therefore spark an open debate about the goals of software systems with social impact. It is up to us as a society to decide where such systems should be used and to make sure that they are designed with the right purposes in mind. Secondly, we must remember that even algorithms designed with good intentions can produce bad results. The larger their potential effects on individual participation in society, the more important it is to carry out a preventive risk assessment and – once automated decision-making is in use – a comprehensive evaluation to verify the intended results. Involving neutral third parties in this process can significantly help to build trust in software-based decision-making.

How do we assess whether or not they are accomplishing what is intended?

Transparent accountability is key when it comes to assessing algorithm-based applications and tools. This does not mean that we need to make the code of algorithms publicly accessible. In fact, that would do little to help most affected individuals understand how algorithm-informed decisions are being made. Instead, we need mechanisms like self-explanatory statements of purpose for algorithms, which can then be verified through evaluations by neutral experts who are granted access to the relevant information and data. These evaluations should be designed as holistically as possible in order to check whether algorithms are actually serving their intended purposes and to reveal their real-life risks and opportunities.

"While there are vast opportunities for algorithm-informed advice on competence-oriented curricular choices and job options, we may not close our eyes before the dangers of targeting weak customers, standardized discrimination and large-scale labor market exclusion." -- Ralph Müller-Eiselt

How do you see algorithms and AI adapting to the evolving education systems?

The digital era offers a number of potential benefits for education. Many of them depend inherently on the use of connected data – be it personalizing learning, overcoming motivational barriers through gamification, providing orientation in the jungle of opportunities, or, not least, matching individual competencies with labor market demands. The use of algorithms and AI in the education sector is still in its initial phase, with a lot of trial and error to be observed. But technology can and almost certainly will help advance all of these areas in the very near future. Since this could affect education at considerable scale, policy makers should not wait for these developments and react afterwards, but actively shape regulation now to sustain the public good. It is up to us to determine whether AI in education will be a catalyst for strengthening social equity – or for weakening it.

How can we personalize AI to adapt to every classroom and child's needs?

Personalizing learning to better develop individual capabilities is one of the main opportunities of digital learning. Algorithm-based applications and AI can democratize access to personalized education that, for cost-related reasons, was previously only available to a limited number of people. But there is a fine line between the promise and the peril of AI in education. While there are vast opportunities for algorithm-informed advice on competence-oriented curricular choices and job options, we must not close our eyes to the dangers of targeting vulnerable customers, standardized discrimination and large-scale labor market exclusion.

Since AI is made by humans, is there a risk that algorithms and AI will not work accurately in an educational setting due to human error? How will mistakes in AI impact the learning experience?

Algorithms are only as good as the humans who designed them. Human error can enter an algorithm at many stages: from collecting and selecting the data, through programming the algorithm, to interpreting its output. For example, if an algorithm uses historical data that is biased in a certain direction due to discriminatory patterns of the past, the algorithm will learn from those patterns and most likely even strengthen that discrimination when it is used at scale. Such unintended errors need to be strictly avoided and constantly checked for, since they would widen social inequalities in the education sector.
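To make this point concrete, here is a minimal, purely hypothetical sketch (not drawn from the interview): it simulates past admission decisions in which equally qualified applicants from one group faced a stricter cut-off, trains a standard scikit-learn logistic regression on those decisions, and shows that the model reproduces the same gap for new applicants.

```python
# Hypothetical illustration only: how a model trained on biased historical
# decisions reproduces that bias for new, equally qualified applicants.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical qualifications (same score distribution).
group = rng.integers(0, 2, size=n)          # 0 = group A, 1 = group B
score = rng.normal(70, 10, size=n)

# Historical decisions: group B faced a stricter, discriminatory cut-off.
cutoff = np.where(group == 0, 65, 75)
admitted = (score > cutoff).astype(int)

# A model trained on these records "learns" the old pattern,
# because group membership helps explain the historical outcomes.
model = LogisticRegression().fit(np.column_stack([score, group]), admitted)

# Apply the trained model to new applicants drawn from the same distribution.
new_group = rng.integers(0, 2, size=n)
new_score = rng.normal(70, 10, size=n)
predicted = model.predict(np.column_stack([new_score, new_group]))

for g, label in [(0, "group A"), (1, "group B")]:
    rate = predicted[new_group == g].mean()
    print(f"Predicted admission rate, {label}: {rate:.2f}")
# Group B's predicted rate comes out markedly lower, although both groups
# are equally qualified: the historical discrimination has been codified.
```

The specific model is beside the point: any system that learns from records of discriminatory decisions and is then applied at scale can reproduce and entrench that pattern, which is why the constant checks described above matter.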

"For policy makers, it is now high time to proactively shape this field towards more social equity. And those being involved in the actual design and development of algorithms should take the time to reflect about their social responsibility and create common standards for professional ethics in this field." -- Ralph Müller-Eiselt

How can these issues be minimized?

As explained in more detail above, we need preventive risk assessments and constant, comprehensive evaluation of algorithm-based applications by neutral third parties. We should also spark a broader public debate and raise awareness of the uses, opportunities and risks of algorithms in education. For policy makers, it is now high time to proactively shape this field towards more social equity. And those involved in the actual design and development of algorithms should take the time to reflect on their social responsibility and create common standards for professional ethics in this field.

Do AI and algorithms need to be adjusted for different educational systems globally? How important will it be to incorporate cultural differences into the design of AI?

What most education systems in the world have in common is that they aim to empower and support people in developing their individual capabilities and talents – in short, to create equality of opportunity. However, the ways to approach and achieve this aim are manifold, and all of them have their strengths and weaknesses. What works in one place does not necessarily work in another social context. In the same way, algorithm- and AI-based applications need to be adjusted to the particular socio-cultural setting in which they are employed.

(All photos are courtesy of CMRubinWorld)

C. M. Rubin and Ralph Müller-Eiselt

Join me and globally renowned thought leaders including Sir Michael Barber (UK), Dr. Michael Block (U.S.), Dr. Leon Botstein (U.S.), Professor Clay Christensen (U.S.), Dr. Linda Darling-Hammond (U.S.), Dr. Madhav Chavan (India), Charles Fadel (U.S.), Professor Michael Fullan (Canada), Professor Howard Gardner (U.S.), Professor Andy Hargreaves (U.S.), Professor Yvonne Hellman (The Netherlands), Professor Kristin Helstad (Norway), Jean Hendrickson (U.S.), Professor Rose Hipkins (New Zealand), Professor Cornelia Hoogland (Canada), Honourable Jeff Johnson (Canada), Mme. Chantal Kaufmann (Belgium), Dr. Eija Kauppinen (Finland), State Secretary Tapio Kosunen (Finland), Professor Dominique Lafontaine (Belgium), Professor Hugh Lauder (UK), Lord Ken Macdonald (UK), Professor Geoff Masters (Australia), Professor Barry McGaw (Australia), Shiv Nadar (India), Professor R. Natarajan (India), Dr. Pak Tee Ng (Singapore), Dr. Denise Pope (U.S.), Sridhar Rajagopalan (India), Dr. Diane Ravitch (U.S.), Richard Wilson Riley (U.S.), Sir Ken Robinson (UK), Professor Pasi Sahlberg (Finland), Professor Manabu Sato (Japan), Andreas Schleicher (PISA, OECD), Dr. Anthony Seldon (UK), Dr. David Shaffer (U.S.), Dr. Kirsten Sivesind (Norway), Chancellor Stephen Spahn (U.S.), Yves Theze (Lycee Francais U.S.), Professor Charles Ungerleider (Canada), Professor Tony Wagner (U.S.), Sir David Watson (UK), Professor Dylan Wiliam (UK), Dr. Mark Wormald (UK), Professor Theo Wubbels (The Netherlands), Professor Michael Young (UK), and Professor Minxuan Zhang (China) as they explore the big picture education questions that all nations face today.

C. M. Rubin is the author of two widely read online series for which she received a 2011 Upton Sinclair award, “The Global Search for Education” and “How Will We Read?” She is also the author of three bestselling books, including The Real Alice in Wonderland, is the publisher of CMRubinWorld and is a Disruptor Foundation Fellow.

Follow C. M. Rubin on Twitter: www.twitter.com/@cmrubinworld
