
NEWS AI versus F-16 Pilot

Swanee

Cereal Killer
pilot
None
Contributor
Just imagine if the logic was hostile = shoot and somehow all the friendlies get dec’d hostile in a shooting platform. Interpretation of sensors is not always right, be it eyeballs or electronic means.

From what I've seen, and what I believe, we're a long way off on the ROE front from allowing AI to actually engage targets without real-time human approval.

We're closer on the technology front, and doctrine always follows technology. But I'm reasonably confident that automated weapons engagement systems will have a human in the loop for generations to come. There is an ethical and moral argument there as well.

Everything needs an abort code/plan.
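
If you want to picture what I mean, here's a purely notional sketch (the names and logic are my own invention, not anything from a real system) of a human-in-the-loop engagement gate with an abort path:

    # Purely hypothetical sketch; not any real weapons-system logic.
    # Shows a human-in-the-loop gate plus a mandatory abort check.
    from dataclasses import dataclass

    @dataclass
    class Track:
        track_id: str
        classification: str  # "hostile", "friendly", "unknown" per sensors, which can be wrong

    def request_engagement(track: Track, human_approves, abort_signaled) -> str:
        # Sensor classification alone is never sufficient to fire.
        if track.classification != "hostile":
            return "HOLD"
        # Real-time human approval is required before any engagement.
        if not human_approves(track):
            return "HOLD"
        # The abort code/plan: checked again at the last possible moment.
        if abort_signaled():
            return "ABORT"
        return "ENGAGE"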
 

Hair Warrior

Well-Known Member
Contributor
But I'm reasonably confident that automated weapons engagement systems will have a human in the loop for generations to come. There is an ethical and moral argument there as well.
Do you mean for the United States only? Or do you apply that globally?
 

RedFive

Well-Known Member
pilot
None
Contributor
Interpretation of sensors is not always right, be it eyeballs or electronic means.
Sensors are fallible. Wires fray, lenses get dirty, vibration affects them, bullets sever connections... let's not even talk about EMPs. These devices we create, be it a Tesla, a missile, or a futuristic man-killing drone, are only as good as their sensors. Only as good as the inputs sent to their programs. Only as good as the programmers who wrote the code.

You can train a computer to pick out all the aromas in a glass of wine. You can program it to recommend what foods to pair it with. You can even let the AI run wild and try to make its own conclusions about how it tastes and what grapes to mix together to improve next year's wine. But it can never tell you if it's good.

And we want to let these things make decisions on whether or not to take a human life? Pffffft.


Cheers!
 

Hotdogs

I don’t care if I hurt your feelings
pilot
From what I've seen, and what I believe, we're a long way off on the ROE front from allowing AI to actually engage targets without real-time human approval.

We're closer on the technology front, and doctrine always follows technology. But I'm reasonably confident that automated weapons engagement systems will have a human in the loop for generations to come. There is an ethical and moral argument there as well.

Everything needs an abort code/plan.

I disagree with this statement. There are times in certain peer fights when everything in front of you that meets certain criteria needs to die. Those situations have been few and far between for the last century due to the nature of war, but that doesn’t mean they don’t exist.

Multiple generations of attack pilots learned that anything meeting X, Y, Z conditions beyond a particular line needs to be removed from the battle space. If you need examples, ask any skid pilot who flew armed reconnaissance missions during OIF 1. AI can certainly perform that function autonomously with minimal human validation. It's hard to argue human ethics and morals in war after the Highway of Death in the Gulf War and the march up to Baghdad in 2003. History has shown us that humans are just as violent with or without computers or AI.
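
To spell out what that kind of rule reduces to, here's a rough, entirely made-up illustration (the conditions and the "line" are placeholders I invented, not real ROE):

    # Entirely invented illustration; conditions X, Y, Z and the "line"
    # stand in for whatever the actual ROE would specify.
    FIRE_LINE_X = 100.0  # notional coordinate of the "particular line"

    def meets_engagement_criteria(contact: dict) -> bool:
        beyond_the_line = contact["position_x"] > FIRE_LINE_X
        condition_x = contact["emitting_hostile_radar"]
        condition_y = contact["moving_toward_friendlies"]
        condition_z = not contact["responds_to_iff"]
        return beyond_the_line and condition_x and condition_y and condition_z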
 

Swanee

Cereal Killer
pilot
None
Contributor
I disagree with this statement. There are times in certain peer fights when everything in front of you that meets certain criteria needs to die. Those situations have been few and far between for the last century due to the nature of war, but that doesn’t mean they don’t exist.

Multiple generations of attack pilots learned that anything meeting X, Y, Z conditions beyond a particular line needs to be removed from the battle space. If you need examples, ask any skid pilot who flew armed reconnaissance missions during OIF 1. AI can certainly perform that function autonomously with minimal human validation. It's hard to argue human ethics and morals in war after the Highway of Death in the Gulf War and the march up to Baghdad in 2003. History has shown us that humans are just as violent with or without computers or AI.

You can shoot at a brick on a computer screen if you have PID solved per the ROE. Sometimes PID means confirmed presence of enemy and confirmed absence of friendly. That is an area that is being worked through. But sometimes it's more complicated than that (oh look, that SOF or three-letter-agency team that doesn't talk to anyone about anything they're doing just found themselves in the robot kill zone; I guess they're dead now...). The attack pilot still has the ability to say, "Something doesn't feel right here," and come off safe. AI isn't going to do that. IPOE isn't always right, and the AI only has the info that is given to it.

And minimal human validation is still human validation. Robot: "There is something moving over here, can I shoot it?" Human: "Yes".

How many times have we put the thing on the thing and pressed the thing, only for the bomb to go exactly where it was programmed to: right to the wrong place? In some cases, friendlies died.

And we're working the EMCON piece as well.
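
Putting that PID standard in rough pseudocode (entirely notional, just my wording of it):

    # Notional pseudocode for the PID standard described above:
    # confirmed presence of enemy AND confirmed absence of friendlies,
    # plus an actual human "yes", with a "come off safe" override.
    def cleared_to_engage(enemy_confirmed: bool,
                          friendlies_absent_confirmed: bool,
                          human_says_yes: bool,
                          something_feels_wrong: bool) -> bool:
        # The "something doesn't feel right here" call is the human
        # judgment an AI working only from IPOE inputs can't make.
        if something_feels_wrong:
            return False  # come off safe
        pid_solved = enemy_confirmed and friendlies_absent_confirmed
        # Minimal human validation is still human validation.
        return pid_solved and human_says_yes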
 

Swanee

Cereal Killer
pilot
None
Contributor
In your assessment, do any state actors not fit your description? (rational thought and value on human life)

Potentially. NK, Iran, some others... We've made mistakes with Patriot batteries in auto mode shooting down friendlies. And we all know that Iran really screwed the pooch by shooting down the airliner.

I'm not sure what you're getting at; even the bad guys will abuse fertilizer and diesel fuel.
 

Hair Warrior

Well-Known Member
Contributor
Potentially. NK, Iran, some others... We've made mistakes with Patriot batteries in auto mode shooting down friendlies. And we all know that Iran really screwed the pooch by shooting down the airliner.

I'm not sure what you're getting at- even the bad guys will abuse fertilizer and diesel fuel.
I would put Russia and China at the top of the list. They are building autonomous weapons and they are prepared to use them, with less regard for collateral damage than US/NATO. Russia has already used/tested some UGVs operationally in Syria, according to open sources. Both also have the technology and resources to do it, at a level Iran and DPRK don’t possess.

I would argue we can partly judge their threshold for collateral damage by how they have previously conducted warfare and heavy-handed societal control (for Russia: Chechnya, Syria, Crimea/E. Ukraine, and the assassinations of dissidents and ex-spies at home and abroad; for China: Tiananmen Square, the treatment of the Uighurs, and SCS militarization).

https://www.bbc.com/news/world-europe-34797252

 