I disagree with this statement. There are times in certain peer fights when everything in front of you that meets certain criteria needs to die. Those situations have been few and far between for the last century due to the nature of war, but that doesn't mean they don't exist.
Multiple generations of attack pilots have learned that anything meeting X, Y, Z conditions beyond a particular line needs to be removed from the battle space. If you need examples, ask any skid pilot who flew armed reconnaissance missions during OIF 1. AI can certainly perform that function autonomously with minimal human validation. It's hard to argue human ethics and morals in war after the highway of death in the Gulf War and the march up to Baghdad in 2003. History has shown us that humans are just as violent with or without computers or AI.
You can shoot at a brick on a computer screen if you have PID solved per the ROE. Sometimes PID means confirmed presence of enemy and confirmed absence of friendly. That is an area that is being worked through. But sometimes it's more complicated than that (oh look, that SOF or three-letter-agency team that doesn't talk to anyone about anything they're doing found themselves in the robot kill zone; guess they're dead now...). The attack pilot still has the ability to say, "something doesn't feel right here," and come off safe. AI isn't going to do that. IPOE isn't always right, and the AI only has the info that is given to it.
And minimal human validation is still human validation. Robot: "There is something moving over here, can I shoot it?" Human: "Yes".
How many times have we put the thing on the thing and pressed the thing, only for the bomb to go exactly where it was programmed to: right to the wrong place? In some cases, friendlies died.
And we're working the EMCON piece as well.