We live in a world where AI is already widely used in weapons systems by a number of countries.
Drones are a prime example: AI can select and engage targets without human intervention, and loitering munitions (‘kamikaze drones’) identify and strike targets on their own. Also in development are ‘swarming’ technologies, in which multiple AI-controlled drones operate in coordination.
But there’s much more: Missile Defense Systems use AI to automatically detect and engage incoming missiles or aircraft; AI-Enabled Targeting Systems identify targets in conflict zones; autonomous Naval Systems (unmanned ships) are in use; and in DARPA’s Air Combat Evolution (ACE) Program, AI has piloted an actual F-16 in flight.
In 2018, Google released a statement laying out its AI principles, which included a pledge not to allow its AI to be used for technologies that ‘cause or are likely to cause overall harm’, for weapons, for surveillance, or for anything that ‘contravenes widely accepted principles of international law and human rights’.
Now, Google has announced ‘updates’ to its AI Principles, and all the previous vows not to use AI for weapons and surveillance are gone.
There are now three principles listed, starting with ‘Bold Innovation’…
Complete Article: www.thegatewaypundit.com

‘TECH DYSTOPIA: Google Drops Pledge Not To Use AI for Weapons or Mass Surveillance Systems’, by Paul Serran, The Gateway Pundit
Google’s decision to abandon its commitment not to use AI for military and surveillance purposes raises critical ethical questions about the future of technology in governance and security.
