The semester may be over for University of Delaware students, but the "Blog Blog Project" continues! This blog comes from Courtney Griffin, a Junior in Electrical Engineering. She, along with nine other Engineering students, worked with Communication and Political Science students this semester to create technology for the greater good. This blog examines the fears that many people have about advances in technology.
Humans have always been scared of the unknown. When Christopher Columbus sailed across the ocean, people were frightened he was going to sail right off the edge of the earth. The unknown carries a stigma for an obvious reason: we cannot be sure of, and do not understand, its consequences. Psychology research suggests we generally like to be able to anticipate consequences. That's why the act of falling can be so frightening; we don't know what to expect when we land.
Because this fear extends to all things novel, it's unsurprising that the lightning-speed development of technology is terrifying to some people. Technology plays a significant role in every aspect of our lives; it is what separates modern society from an archaic past. Its open-endedness and potential for great change make it nearly impossible to gain a complete understanding of its effects. Without that overarching understanding, many of us develop a distorted sense of how powerful these advances could become. As a result, we fear what we don't know, and artificial intelligence is often Enemy Number One.
Artificial intelligence is, in essence, programming a piece of technology "to mimic human intelligent behavior." As of right now, technology helps us complete many tasks far faster than we could alone. However, humans can recognize problems or stresses that a computer might not be able to see. Over the past few decades, and continuing into the future, programmers have been designing software to bridge that gap. In other words, a computer will think less strictly than typically expected and more broadly, similar to how our own brains work. For example, years before autocorrect, when we typed a word incorrectly it remained incorrect until we physically changed it ourselves. Now a computer can recognize when a word is misspelled or doesn't make sense in context and will suggest something more appropriate. This virtual recognition may seem insignificant, but it is a huge development in the way computers are designed, because the computer is beginning to recognize human error.
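To make this concrete, here is a toy sketch of how a program might spot a misspelled word and suggest alternatives. It is only an illustration, not the code any real autocorrect product ships: the tiny dictionary and the two-edit cutoff are assumptions made up for this example. It compares the typed word against known words using edit distance (the number of single-character changes needed to turn one word into another):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance, computed row by row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Illustrative mini-dictionary; real systems use word frequencies too.
DICTIONARY = ["their", "there", "these", "three", "the"]

def suggest(word: str, max_distance: int = 2) -> list[str]:
    """Return dictionary words within max_distance edits, closest first."""
    scored = [(edit_distance(word, w), w) for w in DICTIONARY]
    return [w for d, w in sorted(scored) if d <= max_distance]

print(suggest("thier"))  # "their" appears among the close suggestions
```

The point of the sketch is the shift it represents: instead of passively storing whatever we type, the program measures how far our input is from something sensible and offers a correction.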
The very reason most pieces of technology were invented was to improve on the way something was originally done and execute it more efficiently than humans could alone. Think email for long-distance communication or digital cameras for quicker photo-capturing. The main flaws in such programs are the errors made by the humans designing them. Theoretically, with artificial intelligence, computers will be able to complete everything they could before but with essentially no error, because they will be able to detect what is a mistake and what is not. Computers and their successors will be better and better designed to recognize and understand our patterns. For instance, voice-recognition software is coded to pick up inflections in a voice, distinguishing a question from a statement or one similar word from another. It can hear through various accents much the way humans can, something that, years ago, was unfathomable. This is where it gets scary.
Alongside this comes the ability for machines to reprogram themselves and, essentially, to learn. In other words, if a problem is not solved, the computer, without human intervention, will be able to work out the best way to handle the situation on its own and teach itself a new approach to it.
From this stems a deep concern that scientists may unintentionally be developing a virtual consciousness. There is a lot of controversy about what consciousness is or entails; whether it lies in the conceptual act of decision-making or in the physical gray matter of the brain is unclear. Clearly there is still a lot to learn in the field of artificial intelligence, as well as many differing views on its potential growth. Either way, it is a development to be wary of, because this state of technology is completely unprecedented and, perhaps most frightening, unknown.