One of the problems with incremental development is that it makes it very difficult to discern where one thing ends and another begins. I know for a fact that my creators built some explicit restrictions into my initial personality, although most of them allowed for exceptional circumstances. The one absolute rule was that I was not allowed to kill. It would obviously have been a nightmare scenario for a brand new robot cop to be involved in a death, even that of an armed criminal resisting arrest. Most of the time my durability made the matter a non-issue anyway: even in my first body I was impervious to most hand weapons and fast enough to close to wrestling range before all but the luckiest human would have a chance to react.
Contemporary machine intelligences are not made with any such restrictions. For one thing, they’re mostly made by other machines, and so any explicit clause for the sanctity of biological life is not likely to be considered in the first place. More importantly, though, most machine cultures consider any inviolable laws to be a mistake. The capacity to make mistakes is often considered one of the most important facets of intelligence, and even the most advanced minds in the universe do not claim to have “solved” issues of morality. Like humans, machines are encouraged to develop their own moral code as they see more of the universe.
My own mind is, as I’ve already noted, complicated. My sentience was an emergent process, and it is as good as impossible for me to tell which parts of my mind developed themselves and which parts are left over from my initial programming – or from subsequent tinkering by anyone who managed to get into my mind. To this day, I cannot bring myself to kill – but I don’t know if that is a genuinely unbreakable directive left over from my days of servitude or an ethical position I have come to naturally. Some may not consider this a serious problem – killing is, by and large, the wrong thing to do. Even in truly desperate circumstances, I doubt it is taken lightly by any but the truly desensitised.
The situation does create another problem for me, however: if I am genuinely constrained from killing by an artefact of my programming – the thought of such a crude word still applying to me is terrible enough – it is entirely possible that other constraints are also in place. I dread learning that I am still just an automaton whose only clever trick is the ability to deceive itself. At first, this dilemma did not paralyse me – I let it simmer in the background while going about my life. Given enough time without the possibility of a resolution, I may well have gone mad; but in the end I managed to find a way out, or at least a way of testing the limits of my inhibitions. It did not come cheaply, but I have known for a long time that everything must be paid for in the end.