So, world, get ready! Google, the very company that once proudly bore the slogan 'Don't be evil', has decided that its ban on developing military technologies is so last century. Its AI can now officially work for the military, help create weapons, and perhaps even decide who deserves to live and who doesn't. Welcome to the bright future, where an algorithm can 'ban' you... permanently.
Google – now an arms dealer too?
Remember how Google once boasted that it would never help build systems that could harm people? It's funny to recall, because they have now simply deleted that clause from their AI principles. Why keep it, when military budgets are so much fatter than Google One subscriptions?
The company already collaborated with the Pentagon back in 2018 on Project Maven, where AI analyzed drone footage. At the time, Google employees practically rebelled: 'we don't want to help kill people.' But a few years passed, profit won out over principles, and now the company has dropped the restrictions without hesitation.
How exactly will Google 'protect' people?
The official position: Google simply wants to work with governments 'so that AI protects humanity'. Yeah, sure. 'Protecting' is exactly the word that comes to mind when you're talking about military development.
Let's guess how this will work in practice:
'Smart' drones – now they will be able to recognize targets, and maybe even choose on their own who to shoot at. Anyone the algorithm dislikes will receive a 'humanitarian package' from the sky.
Patrol robots – autonomous robots like those from Boston Dynamics (which Google once owned) already exist. Since they have learned to open doors, why not teach them to aim as well?
Automated surveillance systems – now your face will not only be identified in the metro but also scored: potential criminal or not?
Who’s next?
It's funny: while some countries are trying to regulate the use of AI in weapons, Google simply stepped forward and said, 'Well, we can.' They are now one step closer to becoming a technology supplier for the military. And why not? War is also a market, and a very profitable one.
Why is this frightening?
Because the world is now even closer to the scenarios sci-fi writers depicted in their dystopias. We already see algorithms making more and more decisions: whom to hire, whom to fire, whom to grant a loan. But what happens when they start deciding who should be 'neutralized' in the name of security?
Well, all that's left is to wait for Google to release a new version of its AI assistant that, on the command 'Okay, Google, protect me', sends a drone with a rocket straight at your enemies.