This week the New York Times published an interesting article entitled Scientists Worry That Machines May Outsmart Man, written by John Markoff, their Silicon Valley reporter.
The author cites a meeting held on February 25th 2009 at the Asilomar Conference Grounds on Monterey Bay in California, where leading computer scientists, artificial intelligence researchers and roboticists debated whether there should be limits on research that might lead to loss of human control over computer-based systems.
As I briefly touched upon in my last posting, the HAL scenario from 2001: A Space Odyssey comes to mind. Will humans one day have the capacity to build a machine that is more intelligent than its constructor? All the scientists at the February meeting agreed, however, that technology is a long way from this point, but that serious discussion should take into account the social and economic implications of introducing "artificially intelligent" robots en masse, robots that might take away people's jobs and change the face of society forever.
The article also raises the issue that as military research into mechanized combat machines becomes more advanced, the risk that this technology may fall into the hands of criminals and terrorists becomes more acute, a problem that should be borne in mind while reading the Pete Warren Guardian article cited below.
A report outlining the debate is to be published later this year, and there are hundreds of comments posted about the article if you are interested in seeing how the general US public interprets this argument.
For a slightly different perspective, read Gwyneth Jones's article in The Guardian UK entitled We Have The Technology. She cites all kinds of interesting advancements in technology and some real-life political proposals that would make your toes curl.
Examples include Korean robot border guards that can hit a target at 500 metres, the deployment of mechanical soldiers in Afghanistan and Iraq, and the Israeli army's VIPER: all mechanical destruction machines that exist today and are working for their respective governments as we speak.
She also raises the ethical issues brought about by scientists claiming to be able to read the intentions of a child before he or she carries out the action, and related government proposals to monitor five-year-old children in nursery schools in order to spot potential future offenders.
She doesn't just linger on the ethically difficult, though: there is also the computer that recognizes your moods from your posture and moves the monitor to reflect your feelings, and she addresses the use of mobile robots in child care and care for the elderly.
There is another related article in The Guardian worth looking at, entitled Launching a New Kind of Warfare, in which author Pete Warren gives a well-informed description of applications currently under development and, in some cases, already being used in warfare.
He goes on to raise some interesting questions, not only about the ethical problems involved in sending fallible machines into war, but also points out that some of these machines have already fired upon their own side. He asks how an accidental bombing of a hospital, similar to one mistakenly carried out in Iraq, would be depicted if perpetrated by a robot machine, and how Arab news channels would portray the mass introduction of warfare machines to the battlefield.
The author states that these problems should be open for discussion and it would be better to promote the debate now rather than later. US military experts have stated that by 2015 they hope that 30% of the battle force could be mechanical, so the brakes are definitely not on their development.
And one internal problem has already arisen: given the military structure and its tradition of awards for bravery and combat experience, should the operator of such a machine receive a medal if he or she fought from a desk?
In order to add some human intelligence to these machines, developers are working on attaching sensors that would allow the operator (possibly sitting with a laptop some distance away from the action) to hear and smell the situation in order to gain a more human-like perception of events. During test flights of an unmanned fighter plane currently under development, the vehicle is always followed by two manned fighter aircraft whose crews are under orders to shoot it down in case of malfunction, so real-time human intervention is still necessary.
This third article was written in 2006 and doesn't broach the problem of making a truly autonomous machine in the same way as the Markoff article; in the cases cited, the machines are directly controlled by a human. But if the issues raised in the first two articles, about the possibility of a machine making the leap to taking its own decisions, are applied to this older text, then we do seem to be moving ever closer to Terminator territory.
All of the above articles argue that there is a serious need for debate on these matters, both their ethical implications and the social changes that advances in technology might bring, and I would argue that the question of responsibility must be foremost in this debate.