Sunday, May 18, 2014

Ban on Terminator Robots Postponed at United Nations Convention

Update on the move to ban killer robots.

[Image credit: NASA]
Nicholas West
Activist Post

Ethics is very often the last concern of science, especially in military endeavors.

Drones and robots are finally becoming front-page news after a series of warnings from prominent scientists and researchers who are beginning to see the darker side of what is being unleashed upon humanity.

These warnings led the U.S. military itself to seek out more information about creating moral, ethical robots that could thwart any potential for runaway assassin robots within its ranks.

The uptick in concern culminated with the United Nations recently holding a four-day Convention to air the issue further and gather comments from those in favor of autonomous robots as well as those opposed. At the end of the Convention, countries were able to vote on a pre-emptive ban of this technology.

Weaponized drones are proliferating across the planet at a rapid pace, which has led military researchers to conclude that all countries will have armed drones within 10 years. Coupled with this are advancements in robotics and artificial intelligence that aim to give life and autonomy to our robotic creations. There is a movement afoot in artificial intelligence that is even introducing survival of the fittest to robots in an effort to create a rival to nature.
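
To make that "survival of the fittest" idea concrete: evolutionary robotics typically scores a population of candidate controllers, keeps the best performers, and refills the population with mutated copies of the survivors. The sketch below is a generic, hypothetical illustration of that loop; the target weights and fitness function are invented for this example and are not drawn from any actual robotics project.

import random

# A toy evolutionary loop: evolve a list of "controller weights" toward
# a target behavior. TARGET and fitness() are invented for illustration.
TARGET = [0.5, -0.2, 0.9, 0.1]

def fitness(genome):
    # Higher is better: negative squared error against the target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Each weight drifts slightly with probability `rate`.
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=50, generations=100):
    population = [[random.uniform(-1, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # "Survival of the fittest": keep the top half, discard the rest.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Refill the population with mutated copies of random survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print("best genome:", [round(g, 3) for g in best])

The unsettling part, from this article's perspective, is that nothing in such a loop understands what it is optimizing; it simply rewards whatever survives.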

What are the rules in a robot/human society? 

Human rights organizations, non-profit groups, and even some universities like Cambridge have been vocal for some time about the threat of "terminator robots." They have largely been shouted down by the corporate-military complex as Luddites who just can't comprehend the wonders of science and the vast potential of cooperating with and/or merging with machines. Futurists such as Ray Kurzweil, a director of engineering at Google, only see an inevitable transcendental age of Spiritual Machines where the next stage of human evolution increasingly incorporates a mechanized component to strengthen resilience and perhaps even provide immortality.



This wave of new technology has already arrived in the medical field with DNA nanobots, the creation of synthetic organisms, and other genies waiting to escape the bottle. These developments mark a fundamental transformation in our relationship to the natural world and must be addressed with the utmost application of the precautionary principle.

So far that has not happened, but prominent scientists such as Stephen Hawking and those who work in the field of artificial intelligence are beginning to speak out about another side to these advancements that could usher in "unintended consequences." 

The military was forced to respond. 
The US Department of Defense, working with top computer scientists, philosophers, and roboticists from a number of US universities, has finally begun a project that will tackle the tricky topic of moral and ethical robots. This multidisciplinary project will first try to pin down exactly what human morality is, and then try to devise computer algorithms that will imbue autonomous robots with moral competence — the ability to choose right from wrong. As we move steadily towards a military force that is populated by autonomous robots — mules, foot soldiers, drones — it is becoming increasingly important that we give these machines — these artificial intelligences — the ability to make the right decision. Yes, the US DoD is trying to get out in front of Skynet before it takes over the world. How very sensible. 
This project is being carried out by researchers from Tufts, Brown, and the Rensselaer Polytechnic Institute (RPI), with funding from the Office of Naval Research (ONR). ONR, like DARPA, is a wing of the Department of Defense that mainly deals with military R&D. 
[...]
Eventually, of course, this moralistic AI framework will also have to deal with tricky topics like murder. Is it OK for a robot soldier to shoot at the enemy? What if the enemy is a child? Should an autonomous UAV blow up a bunch of terrorists? What if it’s only 90% sure that they’re terrorists, with a 10% chance that they’re just innocent villagers? What would a human UAV pilot do in such a case — and will robots only have to match the moral and ethical competence of humans, or will they be held to a higher standard?
[...]
The commencement of this ONR project means that we will very soon have to decide whether it’s okay for a robot to take the life of a human...
(Source)
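
The 90-percent scenario in the excerpt is, at its core, a thresholding problem: the machine fires only when the expected harm of shooting falls below the expected harm of holding fire. Here is a minimal, hypothetical sketch of such a rule; the probabilities and cost weights are invented for illustration and describe no real targeting system.

# Toy threshold rule for the excerpt's 90% dilemma. All numbers are
# hypothetical; this describes no real targeting system.

def expected_harm(p_hostile, cost_miss=1.0, cost_innocent=100.0):
    # Harm expected if the machine fires: chance the target is innocent
    # times the (much larger) assigned cost of killing an innocent.
    fire = (1 - p_hostile) * cost_innocent
    # Harm expected if it holds fire: chance of letting a real threat go.
    hold = p_hostile * cost_miss
    return fire, hold

def decide(p_hostile):
    fire, hold = expected_harm(p_hostile)
    return "fire" if fire < hold else "hold"

for p in (0.90, 0.99, 0.999):
    fire, hold = expected_harm(p)
    print(f"P(hostile)={p}: fire-harm={fire:.2f}, hold-harm={hold:.3f} -> {decide(p)}")

Note where the morality actually lives in this toy: in a single number, cost_innocent, chosen in advance by whoever writes the code. That is precisely the concern raised below.
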
Problem-Reaction-Solution?

One could argue that assigning the military to be the arbiter of morality might be the ultimate oxymoron. Moreover, this has all the trappings of the drone "problem," in which unchecked proliferation is now being "solved" by the very entities who see increased proliferation, with a bit more discretion, as the only solution.


Full story HERE
