Thursday, August 09, 2018 by Rhonda Johansson
When Alice fell down the rabbit hole, she got everything she asked for and more: cute little rabbits with gloves on their hands, caterpillars that could talk, and mean, nasty flowers that thought she was a weed. It’s a comical story, but one that could become reality in the future. The expression “going down the rabbit hole” is sometimes used to describe just how far we are willing to push the limits. On a basic literary level, it speaks of the beginning of a fanciful adventure we cannot understand, but one that will change our lives forever. It is with this in mind that computer scientists warn of a potential danger we may not even be aware of: our tinkering with artificial intelligence could lead to an external brain or A.I. system that we will no longer have the ability to control.
A recent editorial published on TechnologyReview.com, MIT’s resource for exploring new technologies, warned of the pace at which we are advancing technology. Recent algorithms are being designed at such a remarkable speed that even their creators are astounded.
“This could be a problem,” writes Will Knight, the author of the report. Knight describes a 2016 milestone: a self-driving car that was quietly released onto the roads of New Jersey. Chip maker Nvidia differentiated its model from those of companies such as Google, Tesla, and General Motors by having the car rely entirely on an algorithm that taught itself how to drive after “watching” a human do it. Nvidia’s car successfully taught itself how to drive, much to the delight of the company’s scientists.
Nevertheless, Nvidia’s programmers were unsettled by how much (and how fast) the algorithm learned the skill. Clearly, the system was able to gather information and translate it into tangible results, yet exactly how it did this was not known. The system was designed so that information from the vehicle’s sensors was fed into a huge network of artificial neurons, which would then process the data and deliver an appropriate command to the steering wheel, brakes, or other systems. These responses matched those of a human driver. But what would happen if the car did something totally unexpected, say, smash into a tree or run a red light? Complex behaviors like these could potentially occur, and even the scientists who built the system would struggle to explain why.
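The pipeline described above, sensor readings flowing through layers of artificial neurons and emerging as a driving command, can be sketched in a few lines of code. Everything here is illustrative: the layer sizes, the tanh activation, and the four made-up sensor values are assumptions for the sketch, not details of Nvidia’s actual system.

```python
import math
import random

random.seed(0)

def make_layer(n_in, n_out):
    """One fully connected layer: a random weight matrix (untrained)."""
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(layer, inputs):
    """Each neuron outputs tanh of its weighted sum of the inputs."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in layer]

# Pretend the car reports four sensor readings (hypothetical values).
sensors = [0.2, -0.5, 0.9, 0.1]

hidden = forward(make_layer(4, 8), sensors)   # hidden layer of 8 neurons
steering = forward(make_layer(8, 1), hidden)  # single steering output

print(round(steering[0], 3))
```

Because every neuron is just a weighted sum squashed by tanh, the final output always lands in [-1, 1], which is why it can be read directly as a steering command (negative for left, positive for right). Training would consist of nudging the random weights until the output matches what a human driver did, which is exactly the part that is hard to inspect afterward.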
Nvidia’s underlying A.I. technology is based on the concept of “deep learning,” which, until recently, scientists were not sure could be applied to robots. The theory of an external or artificial “thinking” brain is nothing new; it has colored our imaginations since the 1950s. The lack of materials and the sheer manual labor needed to input all the data, however, prevented the dream from coming to fruition. Nevertheless, advancements in technology have resulted in several breakthroughs, including the Nvidia self-driving car. Already there are aspirations to develop self-thinking robots that can write news, detect schizophrenia in patients, and approve loans, among other things.
Is it exciting? Yes, of course it is; but scientists are worried about the unspoken implications of this growth. The MIT editorial says that “we [need to] find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur — and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.”
In an effort to keep these systems in check, some of the world’s largest technology firms have banded together to create an “A.I. ethics board.” As reported on DailyMail.co.uk, the companies involved are Amazon, DeepMind, Google, Facebook, IBM, and Microsoft. The coalition calls itself the Partnership on Artificial Intelligence to Benefit People and Society and operates under eight ethical tenets. The group’s objective is to ensure that advancements in technology empower as many people as possible, and that each member remains actively engaged in the development of A.I. and accountable to its broad range of stakeholders.
Just how far down the rabbit hole are we, as a society, planning to go? You can learn a little more when you visit Robotics.news.