Between the unveiling of Pepper, the introduction of Otonaroid and Kodomoroid and the (very justified) hype surrounding Cynthia Breazeal’s Jibo campaign, the Uncanny Valley has been in the news quite a bit lately. Of course, as a roboticist and a human, the subject greatly piques my interest. I’m thrilled to see the field tackling questions like what robots should look like, how people will react to humanoid robots as permanent fixtures in the home and whether it is even moral or ethical to pursue something like a mechanized human clone.
In addition to the strictly HRI aspect of the Uncanny Valley’s familiarity disconnect, I’ve read about it in relation to sexual attraction, evolution and even Twitter bots (really).
One article that really struck me takes the Uncanny Valley to its furthest conceivable limit. In ‘The Uncanniest Valley’, Steven Kotler of Forbes imagines what happens not only when robotic mimicry of human looks is perfected, but also when perfect emulation of the human experience follows. What happens when our robots and A.I. outperform us, outlive us and, finally, come to know us better than we know ourselves? Kotler posits that this will, perhaps, cause us “a nearly unstoppable fear reaction—a brand new kind of mortal terror, the downstream result of what happens when self loses its evolutionarily unparalleled understanding of self.”
One can certainly imagine an uncertain and unprecedented future once this threshold is crossed. Where do we go from there? Either we accept ‘evolution’, letting nature take its course and placing our fate in the hands of something greater, or we use it as a learning experience. Kotler is extremely optimistic, writing: “It’s not hard to imagine that our journey to this valley will be fortuitous. For certain, the better we know ourselves—and it doesn’t really matter where that knowledge comes from—the better we can care for and optimize ourselves.” Considering the advancements in robotics, A.I., memory storage and sensors over just the past 10 years, the general scenarios above seem plausible (though whether true A.I. is 10, 50 or 100 years away remains under heavy debate). Whether it will be as beneficial is hard to say; after all, there has been nothing like it before.
One glaring question raised by the article is: why does the animal kingdom not know such fear in the face of humans? We, perhaps, understand animals better than they understand themselves, humans being (mostly) of greater intelligence. In the article, however, ‘knowing oneself’ is not simply knowing ‘about’ oneself (physical characteristics, dietary needs, pet peeves). Rather, it is knowing, from the perspective of the entity, and limited or enhanced by all of its sensory traits, what it is to be that being.
I believe we are on the cusp of achieving different, fully immersive levels of experience. Emerging VR devices such as the Oculus Rift and mechanical-assistive machines such as exoskeletons will allow us to explore existence from the sensory perspective of other, dissimilar beings. By their powers combined, it is now possible to alter every human sense at once. Augmenting or subduing input (sensory deprivation or ‘sensorship’, as I call it) in specific combinations will allow us to craft tailor-made experiences of other beings.
Using such methods, we might hope to gain insight into a multitude of worlds that have hitherto been hidden from us. This could bring us greater knowledge and, especially, greater empathy as a dominant species. Who knows, it might even lessen the impact of the Uncanniest Valley by giving us a leg up on our future all-knowing creations.