Living brain cells inside robotic brains are now successfully learning and making their own decisions (BBC News). We are creating artificial intelligence (AI: robots that make their own decisions) which will supersede us and make human extinction possible. We can see the beginnings of ‘human versus robot law’ being documented as governments pre-empt the AI explosion, a UN spokesperson is publicly calling for debates on AI, and human rights groups are concerned that it will soon be out of human hands. Whilst this discussion is based on easily available and reputable online sources, many people are not aware or not interested, dismissing AI as sci-fi. However, many academics, scientists, researchers and roboticists are taking it seriously. For instance, Cambridge University in England has set up a Centre for the Study of Existential Risk to examine AI (its homepage is quoted further below).
Photo by Steve Jurvetson (http://flic.kr/p/bowfcN): Could you spot the robot at a glance?
It’s strange that the Romo robot, which uses your iPhone as its brain, has more comments and shares online than the articles discussing robots which are going to completely change the world. Why isn’t the rapid increase in AI spread all over our screens and the front pages of our newspapers? Perhaps there’s not enough interest? The Machine Intelligence Research Institute (MIRI) also warns that an uncontrolled explosion in AI could lead to human extinction. It seems to me that, on its current course, it will be rather like human evolution: apes, after all, watched us make fire and become sophisticated communicators, leaving them behind. We are at the dawn of creating robots or humanoids capable of succeeding us, and the impact will be felt on many levels: economic, social, cultural and beyond. Although it may seem like fantasy, many researchers and academics are trying to make provisions and assess the risks so that there is a controlled AI explosion. MIRI started on the campus of the University of California, Berkeley. Its team members have gained qualifications from, or been employed in, renowned environments such as Harvard University, Oxford University and Google Research (see MIRI).
The ‘robo sapien’ (as dubbed by the New York Times), aka ‘the Terminator’, has been released by DARPA (the Defense Advanced Research Projects Agency) to assist in natural disasters or nuclear power plant rescues. Some have commented that it looks like a prototype infantryman (CNET), and one can see that DARPA was established for ‘maintaining the technological superiority of the U.S. military’ (DARPA). AI in the military already exists, but I wonder how advanced the technology really is.
Robots taking the place of human employment: even nurses, primary care doctors and lawyers
Soon robots will be the preferred workforce: why rent human labour when you can own robots? You won’t wake up tomorrow and find the streets filled with independently thinking robots, but they are being introduced already, and today’s high-tech innovations will someday be viewed as rather rudimentary. They won’t just take over a single sector, like farming in the industrial revolution or the assembly lines of the automobile industry. Your job won’t be safe even if you are a lawyer or journalist, and within a predicted 50 years even a primary care doctor won’t be safe. It won’t take a robot years of study to do a better job, and robots won’t take sick days or need regular breaks either. They will be cheap, highly skilled and extremely productive. Capitalist economies have started to find new innovations that will inevitably lead to dramatic change. AI will most likely go beyond our control; its developmental direction and outcome depend on market demands.
Projects like the one at the Bristol Robotics Laboratory are working hard to make ‘robots trustworthy’. Many researchers have growing concerns that once robots make it out of the lab into mass production, and their engineering is common knowledge, corners may be cut (www.brl.ac.uk).
The New America Foundation’s podcast ‘Will robots steal your job?’ (below) discusses emerging advances used by businesses and how computers are no longer just beating the very best human chess players. The systems now used in science and cutting-edge technologies are ‘producing data we cannot understand because we are fundamentally limited.’ Systems will continue to use ‘algorithms they find useful’, which will soon ‘break away from us.’ Technology is already superseding us in new ways.
Human devolution and extinction: will AI break away from us, leaving humans behind as we did with apes?
“At some point, this century or next, we may well be facing one of the major shifts in human history ... when intelligence escapes the constraints of biology ... Nature didn’t anticipate us, and we in our turn shouldn’t take AGI (Artificial General Intelligence) for granted.
“The critical point might come if computers reach human capacity to write computer programs and develop their own technologies. This, [Irving John ‘Jack’] Good’s ‘intelligence explosion’, might be the point we are left behind, permanently, to a future-defining AGI.” (See more at www.cam.ac.uk.)
Are Governments pre-empting the AI explosion?
The emergence of human versus robot law
It surprises me that an interesting article written by Dr Tony Hirst on June 11th 2013, ‘Naughty robot: Where’s your human operator?’, has no comments at all over a month later. Dr Hirst, writing for the Open University, asks fundamental questions such as: in years to come, who should be held responsible for a robot committing a murder? If robots have responsibilities, should they also have rights? He questions whether we are seeing the first signs of ‘robot law’ (that is, law for ‘self-regulating and independent decision-making entities’) versus ‘human law’.
Driverless cars are taking to the roads, and although fully autonomous ones will be too expensive for most right now, auto-pilot ones are expected to become commonplace. Different US states are regulating these cars in different ways. Florida and California consider the operator to be the person who engages the vehicle’s autopilot, even when a person is not present.
The US is also considering laws around autonomous unmanned vehicles, specifically drones. Whilst a Tennessee Senate bill is looking at surveillance issues, other states are particularly looking at restricting drones from carrying weapons, because they can be used to kill someone remotely. Considering the laws on autonomous cars, where a human operator is considered to have responsibility, Dr Hirst asks: who is responsible when LARs (Lethal Automated Robots) make their own lethal-force decisions?
“Traditional command responsibility is only implicated when the commander ‘knew or should have known that the individual planned to commit a crime yet he or she failed to take action to prevent it or did not punish the perpetrator after the fact.’ It will be important to establish, inter alia, whether military commanders will be in a position to understand the complex programming of LARs sufficiently well to warrant criminal liability.”
Dr Hirst also highlights human rights campaign groups that warn:
“There is clearly a strong case for approaching the possible introduction of LARs (Lethal Automated Robots) with great caution. If used, they could have far-reaching effects on societal values ... there is widespread concern that allowing LARs to kill people may denigrate the value of life itself. ... If left too long to its own devices, the matter will, quite literally, be taken out of human hands. ...” (read more).
What can be done to control the AI explosion?
Technology has aided humans in scientific discovery, prolonged life and helped us drive our economies forward. At this point, though, we should be considering the risks that rapidly growing AI exposes us to. Why was there so much concern over cloning a sheep (which dominated headlines), and yet there is no fierce debate taking place at all levels of society on this subject? Despite billions being invested in projects, there seems to be a lack of discussion. Even if, in the future, the comparison of robot to human brains could be likened to that of rat to human brains, right now we have the advantage of being the dominant species: the ability to discuss, debate and prepare. It is in our nature to fight for survival, after all.
Currently, living brain cells in robotic brains are merely learning motor skills, rather like an infant, but they are autonomous and continue to teach themselves. Other robots have the ability to learn, improve and store such vast amounts of information that they will break away from us. We have created the building blocks for an AI explosion. There is debate over how long it will take before we are entirely superseded, and as with any prediction the estimates are not definite. Some researchers predict 16 years, others 30 to 100 or more; the truth is we cannot be certain, but it is likely to happen in the relatively near future.
On a brighter note, there is some hope that robots will eventually be able to empathise with humans, and the first steps are being taken by roboticists such as David Hanson. The video below showcases how realistic-looking robots are currently being built: they understand speech, perceive and recognise people, and show emotion. The hope is that when AI matches human intelligence, programmed empathy will be fundamental.
Please comment and share.