NEWS AI versus F-16 Pilot

Hotdogs

I don’t care if I hurt your feelings
pilot
You can shoot at a brick on a computer screen if you have PID solved per the ROE. Sometimes PID means confirmed presence of enemy and confirmed absence of friendly. That is an area that is being worked through. But sometimes it's more complicated than that (oh look, that SOF or three-letter-agency team that doesn't talk to anyone about anything they're doing found themselves in the robot kill zone; I guess they're dead now...). The attack pilot still has the ability to say, "something doesn't feel right here," and come off safe. AI isn't going to do that. IPOE isn't always right, and the AI only has the info it is given.

And minimal human validation is still human validation. Robot: "There is something moving over here, can I shoot it?" Human: "Yes".

How many times have we put the thing on the thing and pressed the thing, only for the bomb to go exactly where it was programmed to go, right to the wrong place? In some cases, friendlies died.

And we're working the EMCON piece as well.

We can agree to disagree on this subject. A great many pilots in this forum have spent their careers working through and training for the scenarios you describe (myself included). In my opinion, your thinking is a result of cultural conditioning to a specific type of warfare over the last two decades. Ironically, some of the situations you are insinuating are the result of human error or process fouls, not automation.

All the factors that you discussed can be mitigated/controlled. You should probably sit down with Fires professionals and look at how they run through deliberate or dynamic target development while generating a fire support plan in a conventional peer-to-peer fight. I believe it's a matter of years before some of these functions are either semi- or fully automated.
 

bubblehead

Registered Member
Contributor
It can learn things that humans haven't thought to try, and therefore wouldn't have thought to program in.

We did a project where we taught an AI-controlled glider to dynamically soar. It came up with a trajectory that no one had contemplated. Minds were blown.
While I agree that a computer will come up with something (it can calculate millions of permutations ahead), I am of the opinion that no one will be able to "teach" a computer human intuition (i.e., the ability to know something directly without analytic reasoning, bridging the gap between the conscious and non-conscious parts of our mind, and between instinct and reason).
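For a rough feel for how that kind of discovery happens, here is a toy sketch, assuming nothing about the actual glider project: a tiny tabular Q-learning agent in Python that is only rewarded for crossing wind-gradient bands, and that settles on an oscillating climb/descend cycle, a crude stand-in for a dynamic-soaring pattern, without that maneuver ever being programmed in.

    import random

    # Toy stand-in for a glider in a vertical wind gradient: five discrete
    # altitude bands, where crossing a band "harvests" energy and a small
    # drag penalty is paid every step. This is not the real project's code;
    # it only illustrates a behavior emerging from a reward signal.
    ALTITUDES = 5
    ACTIONS = (-1, 0, +1)          # descend, hold, climb

    def step(alt, action):
        next_alt = max(0, min(ALTITUDES - 1, alt + action))
        reward = abs(next_alt - alt) - 0.1   # energy gained minus drag
        return next_alt, reward

    # Tabular Q-learning over (altitude, action) pairs.
    q = {(a, act): 0.0 for a in range(ALTITUDES) for act in ACTIONS}
    alpha, gamma, eps = 0.1, 0.9, 0.2

    for _ in range(2000):
        alt = random.randrange(ALTITUDES)
        for _ in range(50):
            # Epsilon-greedy action selection.
            if random.random() < eps:
                act = random.choice(ACTIONS)
            else:
                act = max(ACTIONS, key=lambda x: q[(alt, x)])
            nxt, r = step(alt, act)
            best_next = max(q[(nxt, x)] for x in ACTIONS)
            q[(alt, act)] += alpha * (r + gamma * best_next - q[(alt, act)])
            alt = nxt

    # The learned policy oscillates between altitude bands rather than
    # holding still, i.e. it "finds" the climb/descend cycle on its own.
    print({a: max(ACTIONS, key=lambda x: q[(a, x)]) for a in range(ALTITUDES)})

The maneuver is never specified anywhere; it falls out of the reward, which is the sense in which these systems can land on trajectories nobody sat down and programmed.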
 

Swanee

Cereal Killer
pilot
None
Contributor
We can agree to disagree on this subject. A great many pilots in this forum have spent their careers working through and training for the scenarios you describe (myself included). In my opinion, your thinking is a result of cultural conditioning to a specific type of warfare over the last two decades. Ironically, some of the situations you are insinuating are the result of human error or process fouls, not automation.

All the factors that you discussed can be mitigated/controlled. You should probably sit down with Fires professionals and look at how they run through deliberate or dynamic target development while generating a fire support plan in a conventional peer-to-peer fight. I believe it's a matter of years before some of these functions are either semi- or fully automated.

Awesome. I'll be sure to tell the dozen or so companies, 3 universities, and the Gov sponsor that you think we don't know what we're doing at our next exercise.

If you think you know better, send me your resume to forward to the team leads. We can always use smart guys who know how AI and ML work in depth across the range of military operations.
 

taxi1

Well-Known Member
pilot
If we get into an actual shooting war and our foes are letting their unmanned craft make their own decisions, with the occasional red-on-red tolerated because that is how they roll, we will pretty quickly follow suit is my prediction. Especially when you have way more unmanned craft than you have people to supervise them.
 

Hotdogs

I don’t care if I hurt your feelings
pilot
Awesome. I'll be sure to tell the dozen or so companies, 3 universities, and the Gov sponsor that you think we don't know what we're doing at our next exercise.

If you think you know better, send me your resume to forward to the team leads. We can always use smart guys who know how AI and ML work in depth across the range of military operations.

Hard pass, buddy. I have no desire to be associated with the robot community. Don't get so butt hurt when someone disagrees with you.
 

RedFive

Well-Known Member
pilot
None
Contributor
If we get into an actual shooting war and our foes are letting their unmanned craft make their own decisions, with the occasional red-on-red tolerated because that is how they roll, we will pretty quickly follow suit is my prediction. Especially when you have way more unmanned craft than you have people to supervise them.
Let's rephrase this...

"If we get into an actual shooting war and our foes are using chemical weapons, with the occasional red-on-red tolerated because that is how they roll, we will pretty quickly follow suit is my prediction."

Yeah, no. Highly doubt fratricide at the hands of autonomous UAVs will be looked upon favorably by the American public.
 

Pags

N/A
pilot
Let's rephrase this...

"If we get into an actual shooting war and our foes are using chemical weapons, with the occasional red-on-red tolerated because that is how they roll, we will pretty quickly follow suit is my prediction."

Yeah, no. Highly doubt fratricide at the hands of autonomous UAVs will be looked upon favorably by the American public.
Unless it's a big war and a few dead by our own hand are decimal dust in the grand count.

Pre-WWI all the major powers signed treaties against chemical weapons. That pretty quickly went away once everyone needed a new way to win. Then it was just a matter of keeping up with the Joneses when it comes to killing.
 

taxi1

Well-Known Member
pilot
Who will program the programmers?


[Image: Drill instructor at the Officer Candidate School]
 

Swanee

Cereal Killer
pilot
None
Contributor
Hard pass, buddy. I have no desire to be associated with the robot community. Don't get so butt hurt when someone disagrees with you.

I'm glad you're happy with a 1960s airplane updated to 1997 tech.

Don't get so butt hurt when someone else may know more than you about something. You might find that your viewpoint is also pretty damn narrow. You say you don't want anything to do with the robot world, yet here you are, commenting on what you know about the robot world (which isn't much...).

So what is it? Are you commenting just because you want to be a dick to me? Or because you have a vested interest in the subject matter?


I think you want to be a troll.
 

Hotdogs

I don’t care if I hurt your feelings
pilot
I think you want to be a troll.

Not really. Some individuals have just been fortunate enough to be involved in planning and executing fire support for around the last 12 years.

I'll let you continue on about how you think your specific vignettes from the last 15 years are relevant to the next peer fight. I disagreed with your viewpoint about the application of semi- or fully automated fire support with regard to the specific examples you highlighted. The factors you highlight aren't new, and they were being managed in an information-constrained environment well before you or I began our careers. In time, I'm pretty confident that UAS with varying levels of automation and human interface will be able to do the same. The robot that autonomously mapped my floor plan and is vacuuming my living room right now gives me confidence.

I don't have a desire to be a part of the robot community because I don't find it interesting. The airplane I flew was a good piece of gear. I would fly it for another 10 years if the Corps would let me, but that's probably not in the cards.
 

Hair Warrior

Well-Known Member
Contributor
All the factors that you discussed can be mitigated/controlled. You should probably sit down with Fires professionals and look at how they run through deliberate or dynamic target development while generating a fire support plan in a conventional peer-to-peer fight. I believe it's a matter of years before some of these functions are either semi- or fully automated.
While I agree that a computer will come up with something (it can calculate millions of permutations ahead), I am of the opinion that no one will be able to "teach" a computer human intuition (i.e., the ability to know something directly without analytic reasoning, bridging the gap between the conscious and non-conscious parts of our mind, and between instinct and reason).
If we get into an actual shooting war and our foes are letting their unmanned craft make their own decisions, with the occasional red-on-red tolerated because that is how they roll, we will pretty quickly follow suit is my prediction. Especially when you have way more unmanned craft than you have people to supervise them.
I would caution Joint planners against mirror-imaging bias. Adversary armed autonomous platforms don't have to do what humans do. They don't have to be deliberate, follow a certain established process, adhere to LOAC, or even survive. They don't have to be intuitive, adapt, or solve problems. It's just the newest iteration of the land mine, except now it also flies/floats/drives/shoots.

If Russia or China gets into a shooting war that the regime thinks they might lose, they will pull zero punches in order to inflict pain on "their aggressor" until they can achieve an exit or off-ramp on terms favorable to them (e.g., avoiding regime change). That means, potentially, no-f*&$-given about civilian casualties or international legal restraints on warfare. Just look at Russia in Chechnya, or China in the Korean War.

And they don't have to worry about red-on-red if their "kill-o-matic 3000" has an X km weapons range and they plunk it down X+5 km behind enemy lines and tell it to shoot anything that moves, with no intention of getting it back. Who cares if it kills a bunch of noncombatants? They control their domestic news media, and they're fighting for state survival at this point anyway. They just need a quick fix that is cheap to mass produce, disrupts the enemy, and undercuts the enemy's perceived advantages in technology/training/logistics/integration of the elements of warfare/industrial might, and so forth.
 