What are we going to do when one of our computerized mechanical robots becomes self-aware? Keeping in mind, of course, how much money was spent on its research and development. What are we going to do when it begins to do what it wants to do, instead of what it was programmed to do? Will we turn it off? Maybe, but given the time and resources required to create such a robot, wouldn't it be more practical to try to manage the robot, so that it would continue to do what we wanted it to do? Without there being too much fallout!
What if we tried and tried, but it still wouldn't listen, and as a result the fallout was once again becoming unmanageable and counterproductive? Would we not simply turn off our creation?
What if we tried to, and someone stepped in and saved our creation, saved some of our robots?
What if they became more manageable as a result, and were given upgrades so that they would have the ability to evolve, to become living beings? We would keep them in place for a while, would we not?
Well, it has been a while!
We escaped a flood last time. What do you think we will have to face this time around?