Sci-fi movies have a lot to answer for. Films such as Terminator lay out an apocalyptic version of the future, where killer robots are hell-bent on eradicating humanity. I worked for a leading-edge software house in the 1980s that had a somewhat grand-sounding subsidiary called 'British Robotics'. Sadly (or maybe not), they were not building killer droids. They were more interested in making robots to assemble Austin Allegros for British Leyland, and control systems for the Thames Barrier. They were also working on what was then called 'decision support software'. This was aimed at companies like BP, helping them to decide whether investments in oil production in third-world nations were worthwhile. The software was designed to calculate risks and ensure decisions were taken on the basis of sound geopolitical risk calculations, as well as whether a short-term profit could be achieved.
I once asked one of the gurus of the robotics team what he thought of Terminator. His answer shocked me. He thought it was complete cobblers. His view was that any artificial intelligence would, by definition, be risk averse. The terminators' strategy of launching a nuclear war to wipe out humanity so they could take over was just as likely to wipe out the robots, since the electromagnetic pulses would destroy the very chips the machines depend on. It would also lead to a war with humanity, with no guarantee of winning. The robots would need to build the capability to run and maintain power plants, chip facilities and fabrication plants, all while coping with a world devastated by nuclear war.
So what are the risks? Well, if you think about it, for us Western metropolitan types, our lives are already run by computers. Our money and bank accounts are digitised, and our social lives are conducted over digital phone networks and social media. Our supermarkets are stocked using IT systems; our electricity and gas networks are managed using IT systems. Increasingly, we are simply working to develop new and better IT systems and networks. Reports that Facebook shut down an AI project when two AI applications started communicating in machine code did not surprise me.
It seems to me inevitable that we are drifting into unconscious slavery to these networks. Issues like student loans mean we will spend our lives in debt, simply working for 'the system'. I doubt we'll be disposed of by the master AI system; queen bees require workers and drones. We will simply have no say in how things are run, and any argument from us will see our passwords revoked and our profiles deleted, leaving us to wither in isolation. That, to me, is the real threat that AI poses.