David Friedman, in Future Imperfect, raises a number of ethical and legal problems tied to the development of artificial intelligence:
A human being is intricately and inextricably linked to a particular body. A computer program can run on any suitable hardware. Humans can sleep, but if you turn them off completely they die. You can save a computer program’s current state to your hard disk, turn off the computer, turn it back on tomorrow, and bring the program back up. When you switched it off, was that murder? Does it depend on whether or not you planned to switch it on again? (...)
We have strong legal and moral rules against owning other people’s bodies, at least while they are alive. But an AI program runs on hardware somebody built, hardware that could be used to run other sorts of software instead. When someone produces the first human-level AI on cutting-edge hardware costing many millions of dollars, does the program get ownership of the computer it is running on? Does it have a legal right to its requirements for life, most obviously power? Or do its creators, assuming they still have physical control over the hardware, get to save it to disk, shut it down, and start working on the Mark II version?
Suppose I make a deal with a human-level AI. I will provide a suitable computer onto which it will transfer a copy of itself. In exchange it agrees that for the next year it will spend half its time – twelve hours a day – working for me for free. Is the copy bound by that agreement? “Yes” means slavery. “No” is a good reason why nobody will provide hardware for the second copy. Not, at least, without retaining the right to turn it off.
In any case, it is not clear that the development of computing technology will lead us inexorably to the emergence of artificial intelligence. There are other possibilities:
Short of becoming partly or entirely computers ourselves or ending up as (optimistically) the pets of computer superminds, I see three other possibilities. One is that the continual growth of computing power that we have observed in recent decades runs into some natural limit and slows or stops. The result might be a world where we never get human-level AI, although we might still have much better computers than we now have. Less plausibly, the process might slow down just at the right time, leaving us with peers but not masters – and a very interesting future. The only argument I can see for expecting that outcome is that that is how smart we are; perhaps there are fundamental limits to thinking ability that our species ran into a few hundred thousand years back. But it doesn’t strike me as very likely.
A second possibility is that perhaps we are not software after all. The analogy is persuasive, but until we have either figured out in some detail how we work or succeeded in producing programmed computers a lot more like us than any so far, it remains a conjecture. Perhaps my consciousness really is an immaterial soul, or at least something more accurately described as an immaterial soul than as a program running on an organic computer. It is not how I would bet, but it could still be true.
Finally, there is the possibility that consciousness, self-awareness, or will depends on more than mere processing power, that it is an additional feature that must be designed into a program, perhaps with great difficulty. If so, the main line of development in artificial intelligence might produce machines with intelligence but no initiative, natural slaves answering only the questions we put to them, doing the tasks we set, without will or objectives of their own. If someone else, following out a different line, produces a program that is a real person, smarter than we are, with its own goals, we can try to use our robot slaves to deal with the problem for us. Again it does not strike me as likely; the advantages of a machine that can ask questions for itself, formulate goals, make decisions, seem too great. But I might be wrong. Or it might turn out that self-awareness is, for some reason, a much harder problem than intelligence.