1LoverofGod
Who would you trust more in a high-stress, high-stakes military excursion? A trained human soldier with the limitations of hunger, thirst, lack of sleep, and emotion—or a soulless, autonomous artificial intelligence system acting according to its programming? That’s a question militaries around the world—including the US military—are already facing as AI quickly advances into every industry. But the big question is this: who gets to do the programming?
So what exactly is autonomous AI when it comes to the military? Well, according to a Christian Post article, the Congressional Research Service defines these systems as:
a special class of weapon systems that use sensor suites and computer algorithms to independently identify a target and employ an onboard weapon system to engage and destroy the target without manual human control of the system.
In other words, it’s a lethal system that gets to make its “own” decisions outside of human control, all based on its fallible human programming. Now, “the U.S. military does not have weapons completely controlled by artificial intelligence in its inventory,” but it’s likely such systems are coming soon—elsewhere if not here. So that gets us back to our question—who gets to do the programming?
No AI code is worldview-free. As we've seen with AI like ChatGPT, the worldview of those doing the programming influences the end result. It's not neutral! (Nothing is!) And the same rules apply to lethal weapon systems. Whoever develops the algorithm incorporates their worldview, their beliefs, into the programming, which then determines how the system behaves and what decisions it makes. So worldview matters, hugely!
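To make that concrete, here is a purely hypothetical Python sketch. The rules, thresholds, and names below are illustrative assumptions, not any real targeting system; the point is simply that the values a developer hard-codes end up deciding what the machine does.

```python
# Hypothetical illustration only: every rule, threshold, and category below is an
# assumption chosen by the "programmer," not a description of any real weapon system.

from dataclasses import dataclass


@dataclass
class Contact:
    confidence: float      # sensor confidence that this is a valid military target (0 to 1)
    near_civilians: bool   # whether civilians are detected nearby
    is_retreating: bool    # whether the contact appears to be withdrawing


def engage_decision(contact: Contact) -> str:
    # Every constant and rule here is a value judgment made by a human developer.
    CONFIDENCE_THRESHOLD = 0.90  # How certain must the system be? The programmer decides.

    if contact.confidence < CONFIDENCE_THRESHOLD:
        return "hold fire"       # One team might set 0.90; another might accept 0.60.
    if contact.near_civilians:
        return "hold fire"       # Whether this rule exists at all reflects the developer's ethics.
    if contact.is_retreating:
        return "hold fire"       # Some worldviews would not include this restraint.
    return "engage"


# Same sensor data, but the outcome hinges entirely on the rules coded above.
print(engage_decision(Contact(confidence=0.95, near_civilians=True, is_retreating=False)))  # hold fire
```

Change one threshold or delete one rule and the same machine makes a different life-or-death call. That is the sense in which the programmer's beliefs get baked into the system.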