Virtual Reality is a powerful medium. Its capacity for illusion is so strong that it has been used, for example, to help patients with phantom limb pain. Thrive Global posted an article earlier this summer that inquired into the possibilities of using VR for mental health.
“It’s really about illusion,” says Alex Miller, a computer scientist creating virtual realities for the neurology department at the University of Pennsylvania: you manipulate what a patient sees in their virtual self and their virtual world, and their brain will literally incorporate these things into the body image. While still early, results indicate that using the Penn neurology games does indeed reduce the intensity of phantom limb pain.
Oxford psychiatrist and VR specialist Daniel Freeman told Thrive Global: “VR could become the method of choice for psychological treatment — out with the couch, on with the headset.”
Because of the embodied quality mentioned above, VR's power resides in its unique ability to immerse: one embodies the experience one sees. But what are the pitfalls of such possibilities?
What is the possible “dark side” of VR/AR, especially in conjunction with AI? And how can we manage these more difficult and potentially risky aspects?
To address some of these concerns, below is another snippet from the Tech for Good event at AWE (Augmented World Expo, in Silicon Valley), held under the auspices of the Virtual World Society (an organization founded by the grandfather of VR, Tom Furness, focused on building better lives with VR) and DigitalRaign (a new tech-impact community and accelerator focused on tech for good).
Phil Lelyveld of the USC Entertainment Technology Center was also part of the special Tech for Good track at AWE. In his presentation he pointed out that artificial intelligence will inform our world view from here on out, and will therefore also reshape Virtual Reality and Augmented Reality, as it already does: Facebook is already integrating AI into AR (see the article in Wired for details). Additionally, the Internet of Things will be connected to VR/AR/MR and AI, which in turn will inform the robotics sector. Given the sheer scope of this movement, and the many businesses working in the new-technologies sector, it is crucial to understand that security protections in VR/AR/MR may be severely lacking. This lack of security may become relevant to everyone in the very near future, if it is not already. Addressing this challenge will add complexity and cost, factors that, as Phil points out, have until now not been properly addressed.
What are possible solutions to this complex set of challenges?
-give people control over their own data
-create a marketplace for the exchange of data
-make privacy and anonymity a starting point
The most urgent concern rests with the possible impact of combining AI with VR/AR/MR.
Here Phil suggests several possibilities:
-an AI audit:
Is the data used good, reliable, and reasonable?
What is the goal for the data?
Is there a discrepancy caused by a lack of diversity, or are there errors in assumptions and design, that could help or hurt specific populations?
Is there long-term monitoring set up to check for feedback loops that may bloom into biases over long-term use?
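To make the audit concrete, the questions above can be captured as a simple checklist that a team works through before and after deployment. This is only a minimal sketch: the `AuditItem` and `AIAudit` names and the structure are my own illustration, not part of Lelyveld's proposal.

```python
from dataclasses import dataclass, field

@dataclass
class AuditItem:
    # One audit question and its review status (hypothetical structure)
    question: str
    answered: bool = False
    notes: str = ""

@dataclass
class AIAudit:
    # Checklist mirroring the audit questions listed above
    items: list = field(default_factory=lambda: [
        AuditItem("Is the data used good, reliable, and reasonable?"),
        AuditItem("What is the goal for the data?"),
        AuditItem("Could errors in assumptions, design, or diversity "
                  "help or hurt specific populations?"),
        AuditItem("Is long-term monitoring in place to catch feedback "
                  "loops that may bloom into biases?"),
    ])

    def open_questions(self):
        # Questions still requiring review
        return [item.question for item in self.items if not item.answered]

audit = AIAudit()
audit.items[0].answered = True  # data quality has been reviewed
print(len(audit.open_questions()))  # 3 questions remain open
```

The point of the sketch is the last question: unlike a one-off review, the monitoring item can never be permanently checked off, which is why long-term oversight belongs in the audit itself.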
AI is a new technology, and while it has been established that machines can now learn to learn, no one really knows the outcome yet: “we can build these models, but we don’t know how they work” (Joel Dudley, Deep Patient team leader, Mount Sinai Hospital, New York, quoted in MIT Technology Review).
Why is this important? It leaves the human being open to emotional manipulation in VR/AR/MR. Research shows that the concept of the self is very fragile, both through embodiment and through the impact of these platforms on self-image and self-worth (Mel Slater, ICREA Research Professor at the University of Barcelona, Spain, and leader of the Experimental Virtual Environments (EVENT) Lab for Neuroscience and Technology).
The factors that make VR/AR/MR so appealing in therapeutic settings make them equally vulnerable to this dark side: manipulation, or the abuse of data for gain. The AI audit proposed by Phil Lelyveld is a clear start toward setting up the necessary boundaries and checks in the field.