AI, UAS & the end of manned platforms . . . .

Flash

SEVAL/ECMO
Super Moderator
Contributor
The ‘folks’ aren’t the ones designing and operating the systems. Yeah, people up the chain and defense-blog wonks have ideas and opinions, but just as with manned platforms, that doesn’t mean that’s what’s actually getting fielded.

Having just been involved with a significant study of current and future options for a particular system, I can tell you that there were enough folks who thought UAVs were the solution right now that it was a little worrying; many smart folks seemed blind to the limitations we talk about here. I don't think it'll be long before those ideas push the needle past the 'bright idea' stage into reality.
 

Uncle Fester

Robot Pimp
Super Moderator
Contributor
Flash said:
Having just been involved with a significant study of current and future options for a particular system, I can tell you that there were enough folks who thought UAVs were the solution right now that it was a little worrying; many smart folks seemed blind to the limitations we talk about here. I don't think it'll be long before those ideas push the needle past the 'bright idea' stage into reality.

I’m well aware of the bright ideas floating around out there. What people think UAS can do, vice feasible near-term capabilities... well, there’s a significant mismatch. The thing is, the money isn’t being invested. No bucks, no Buck Rogers. If/when money starts being budgeted toward these bright ideas, then I’ll believe it. Otherwise it’s just PowerPoint slides. In the meantime, we’ve got some very good capabilities that can be fielded soon - and no one’s willing to even pay for that.
 

Swanee

Cereal Killer
pilot
Contributor
Uncle Fester said:
we’ve got some very good capabilities that can be fielded soon - and no one’s willing to even pay for that.

This. We're still constantly fighting over who is going to pay for this stuff. Everyone wants it, but everyone points their fingers at the other guys when it comes to paying for it.
 

sparky

Member
Uncle Fester said:
I’m well aware of the bright ideas floating around out there. What people think UAS can do, vice feasible near-term capabilities... well, there’s a significant mismatch. The thing is, the money isn’t being invested. No bucks, no Buck Rogers. If/when money starts being budgeted toward these bright ideas, then I’ll believe it. Otherwise it’s just PowerPoint slides. In the meantime, we’ve got some very good capabilities that can be fielded soon - and no one’s willing to even pay for that.
Amen.

Though there's a spectrum of vehicles and applications, and of missions and roles. I'm likely not aware of all the bright ideas floating out there, but a pretty common tangle of threads with UxVs is the assumption that we'll have autonomy and positive control and secure links, all at once.

I'm skeptical we'll get widespread autonomous weapons release across large swaths of those spectra. Professional discussions about UUVs seem to give short shrift to how challenging it is to have secure up- and downlinks, and I, at least, am not ready to let HAL 4500 (because we're USN and we'll buy the budget-discounted cousin of HAL 9000) take torpedo shots until we're committed to unrestricted war at sea ... and even then we'd better have correct, uncorrupted, and unhackable signature libraries. Air vehicles maybe sooner than later - positive control up to a point, then let those UAVs have at Red - but I really only see that working well for us when we're doing the unrestricted-warfare thing.

Mission tanking and ISR/T, sure - and though I loved the VQs, I'd gladly welcome our new Heavy Reconnaissance machine overlords to conduct much of that mission set. Actually, I'd see Heavy Recon as a pretty great autonomous-capable UAV mission - if links are denied, we just get the data offboard once back home. We just need a machine we trust to navigate in the face of Red MIJI.

The real rub is man-in-the-loop operations and getting data up and down the links - VAQ can already deny comms pretty well, and we're likely only getting better, soon. Years of operating UAVs in cooperative or undenied airspace have set many leadership and influencer mindsets to the notion that UxVs can do anything, anywhere, anytime. Where we know how to deny links/comms/sensors now, Red can probably figure it out soon, if not already.
 

sparky

Member
There will always be a way to defeat technology, and if someone told me their system was hackproof I would ask them if a bridge in Brooklyn came along with that promise.
Ha! Preach it! Someone(s) near enough to any project or tech or system is some combination of [greedy, unhappy, horny] enough that no system is proof against exploitation and defeat.

Unless you allow the machines to develop and code and implement autonomously, and we've got the documentary Battlestar Galactica to inform how well _that_ line of implementation goes. ;)
 

LFCFan

*Insert nerd wings here*
There will always be a human in the loop. Even if the technology is there, the political will to accept the risk of giving a robot weapons release authority isn’t. And I can tell you that there’s a long way to go before even routine operations are totally autonomous.

Right now UAS reduces the risk to human pilots. That’s all. It’s good for that.

If a lot of human mistakes in combat are caused by fear or fatigue (which machines don't get), then couldn't there be a point where our AI is so good that it is better than humans at making these decisions, and thus it would be irresponsible not to give it weapons release authority? Not that we are anywhere near that point, and we may not be in our lifetimes.
 

Uncle Fester

Robot Pimp
Super Moderator
Contributor
LFCFan said:
If a lot of human mistakes in combat are caused by fear or fatigue (which machines don't get), then couldn't there be a point where our AI is so good that it is better than humans at making these decisions, and thus it would be irresponsible not to give it weapons release authority? Not that we are anywhere near that point, and we may not be in our lifetimes.

I’m not sure I agree with your assessment that “a lot” of mistakes are due to fatigue or fear. Especially in air engagements, which is what we’re talking about, most errors are due to mis-identification, and an AI is just as prone to those errors as a human, if not more so. And even when/if the technology does reach that point - and you’re correct that we’re nowhere near it now - someone has to sign for every bomb dropped. A lot more goes into clearance to engage a target than simply IDing it, and who’s going to take the legal responsibility for whatever the robot decides to kill? AI has great potential to assist a human operator, but the legal and ethical reasons not to hand HAL 9000 the trigger are way bigger than the technological hurdles.
 

LFCFan

*Insert nerd wings here*
Uncle Fester said:
I’m not sure I agree with your assessment that “a lot” of mistakes are due to fatigue or fear. Especially in air engagements, which is what we’re talking about, most errors are due to mis-identification, and an AI is just as prone to those errors as a human, if not more so. And even when/if the technology does reach that point - and you’re correct that we’re nowhere near it now - someone has to sign for every bomb dropped. A lot more goes into clearance to engage a target than simply IDing it, and who’s going to take the legal responsibility for whatever the robot decides to kill? AI has great potential to assist a human operator, but the legal and ethical reasons not to hand HAL 9000 the trigger are way bigger than the technological hurdles.

I am familiar with the process of dropping bombs, and I agree that the legal and ethical problems are harder than the technological ones when it comes to giving a machine weapons release authority. Once we've cracked the technological ones well enough to even seriously entertain the thought of AI on the battlefield, there will likely be a decent amount of legal precedent about things like self-driving cars, robot surgeons, etc., that will form the foundation upon which AI in combat can be built. I even predict that in the future there will be lawyers who specialize in defending "owners" of AI (time will tell if that leans more on the designers or the customers).

Self-driving cars will eventually have to make life-and-death decisions, if they aren't already: do I swerve to avoid that drunk driver and run over a kid on the sidewalk? How many other people are in this car, or in the car with the drunk driver? You might be tempted to say that these can be pre-programmed, but nonetheless the machine has to decide which course of action best satisfies whatever "ethics" went into the AI, and that will be a life-and-death decision. Eventually we're going to get comfortable with machines making life-and-death decisions, and that will happen long before they are ready for combat from a technological standpoint.
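To make that concrete, here's a purely hypothetical sketch (toy numbers, invented names, nothing like a real autonomy stack) of what "deciding which course of action best satisfies the ethics" reduces to: scoring candidate maneuvers and picking the least bad one. The weights are where the designers' ethics live.

    # Hypothetical sketch: a self-driving car choosing the "least bad" maneuver.
    # The weights ARE the ethics - someone had to pick them ahead of time,
    # but the machine still makes the life-and-death call at runtime.

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        maneuver: str              # e.g. "brake straight", "swerve right"
        p_harm_occupants: float    # estimated probability of harming people in the car
        p_harm_bystanders: float   # estimated probability of harming people outside
        n_occupants: int
        n_bystanders: int

    def expected_harm(o: Outcome, w_occupant: float = 1.0, w_bystander: float = 1.0) -> float:
        """Expected number of people harmed, weighted by the designers' 'ethics'."""
        return (w_occupant * o.p_harm_occupants * o.n_occupants
                + w_bystander * o.p_harm_bystanders * o.n_bystanders)

    def choose_maneuver(options: list[Outcome]) -> Outcome:
        # The machine picks whichever course of action minimizes expected harm.
        return min(options, key=expected_harm)

    options = [
        Outcome("brake straight", p_harm_occupants=0.6, p_harm_bystanders=0.0,
                n_occupants=2, n_bystanders=0),
        Outcome("swerve right",   p_harm_occupants=0.1, p_harm_bystanders=0.5,
                n_occupants=2, n_bystanders=1),
    ]
    print(choose_maneuver(options).maneuver)  # -> "swerve right"

Whoever picks w_occupant and w_bystander is making the ethical call years before the crash - but the machine is still the one deciding at runtime.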

When I first thought about why keeping weapons release authority with humans might be unethical, I had ground combat in mind: a scared 19-year-old grunt who hadn't slept much for days doing something stupid like deliberately shooting a civilian out of fear, or a JTAC reading a 9-line and screwing something up - hence my comment about fear and fatigue. But as I thought more about it, I realized that there might come a time in the far future where AI can execute the targeting process, whether deliberate or dynamic, and make fewer mistakes than a human (by mistake I mean both false negatives and false positives about whether or not to fire). At that point, would it not be irresponsible to let a human make the decision if they are more likely to screw the pooch than HAL 9000? Again, I don't see this happening in our lifetimes, and I think that people who say things like "the last fighter pilot has already been born" are wrong (even if they are just thinking about UAVs with humans in the loop), but I do think eventually we're going to let our thinking machines do a lot of thinking for us, in ways that are unthinkable to us at the start of the 21st century, simply because they will make better decisions than we can.
 

Uncle Fester

Robot Pimp
Super Moderator
Contributor
LFCFan said:
Self-driving cars will eventually have to make life-and-death decisions, if they aren't already: do I swerve to avoid that drunk driver and run over a kid on the sidewalk? How many other people are in this car, or in the car with the drunk driver? You might be tempted to say that these can be pre-programmed, but nonetheless the machine has to decide which course of action best satisfies whatever "ethics" went into the AI, and that will be a life-and-death decision. Eventually we're going to get comfortable with machines making life-and-death decisions, and that will happen long before they are ready for combat from a technological standpoint.

When I first thought about why keeping weapons release authority with humans might be unethical, I had ground combat in mind: a scared 19-year-old grunt who hadn't slept much for days doing something stupid like deliberately shooting a civilian out of fear, or a JTAC reading a 9-line and screwing something up - hence my comment about fear and fatigue. But as I thought more about it, I realized that there might come a time in the far future where AI can execute the targeting process, whether deliberate or dynamic, and make fewer mistakes than a human (by mistake I mean both false negatives and false positives about whether or not to fire). At that point, would it not be irresponsible to let a human make the decision if they are more likely to screw the pooch than HAL 9000? Again, I don't see this happening in our lifetimes, and I think that people who say things like "the last fighter pilot has already been born" are wrong (even if they are just thinking about UAVs with humans in the loop), but I do think eventually we're going to let our thinking machines do a lot of thinking for us, in ways that are unthinkable to us at the start of the 21st century, simply because they will make better decisions than we can.

With the huge difference that robo-taxis or Amazon drones aren't going to be sent out with the explicit purpose of finding and killing humans. In 99.999% of "life or death" situations for one of those, the correct answer will be "don't let the human get hurt," and you're simply talking about reaction time and a minimal-harm decision tree, not an ethical or even legal ROE decision per se.

That JTAC is still responsible for the cleared-hot call even if he's tired and scared. Sometimes shit happens, and just like a human, an AI is only as good as the information it's given. I have been in situations, and heard secondhand of others, where all the information in the world said dropping was the 'correct' call, but it was still wrong. I have also seen and heard of others where the pilot or operator decided not to shoot because of that "something's wrong/this doesn't look right" intuitive brain-tickle, and that turned out to be the right call. And even if the technology one day reaches the point where that indefinable intuition is programmable to a Turing-test level of fidelity... I'm no JAG, but I can tell you that trying to abdicate responsibility for killing a target to a robot is never going to fly, even if one day an engineer can swear up and down that 99% of the time the T-800 will find and terminate the correct Sarah Connor. No one is ever going to be comfortable arguing that "well, most of the time, HAL 9000 makes the right decision."

At most I can see a day when AI-augmented ROE decision making becomes more of the norm; i.e., the robot says "this looks like what you said to look for, can I kill it?" But taking the human out of the weapons-release loop is never going to happen, even once it becomes technologically feasible.
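For the record, here's roughly what I mean by AI-augmented, as a toy sketch (all names invented, obviously not any real system): the machine nominates, and release is structurally impossible without an affirmative human decision.

    # Toy sketch of AI-augmented ROE: the robot may nominate a track
    # ("this looks like what you said to look for"), but weapons release
    # requires an affirmative human decision. All names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Track:
        track_id: str
        classifier_label: str   # what the AI thinks it is
        confidence: float       # 0.0 - 1.0

    def ai_nominate(track: Track, commanders_intent: str, threshold: float = 0.9) -> bool:
        """The AI's whole job: flag tracks that match what the human said to look for."""
        return track.classifier_label == commanders_intent and track.confidence >= threshold

    def weapons_release(track: Track, human_consent: bool) -> str:
        # The human stays in the loop: no consent, no release, ever.
        if not human_consent:
            return f"HOLD FIRE on {track.track_id}: no human clearance"
        return f"CLEARED HOT on {track.track_id}"

    track = Track("T-042", classifier_label="SA-21 TELAR", confidence=0.95)
    if ai_nominate(track, commanders_intent="SA-21 TELAR"):
        # The machine asks; only the operator can answer.
        print(weapons_release(track, human_consent=True))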
 

sparky

Member
@UncleFester speaks truth about the realities of autonomous weapons release, at least over much of the UxV mission set. I definitely agree that CAS and BAI won't see people out of the loop anytime soon (except maybe for deploying countermeasures).

I envision exceptions for deep-strike UAVs - when jammed and in harm's way, I'd expect a UAV doing SEAD/DEAD or hitting a well-defined target (even if mobile) deep in Red real estate will someday soon be cleared to engage on its own. Probably within well-defined limits: maybe matching pre-allocated signatures from a signature library that controls part of the ROE, and only after positive control got that UAV to where it pushes to target. But that's where we're headed if we leverage carrier-compatible UAVs, which we might find very useful against peers.
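As a guess at the shape of those limits (purely a notional sketch, not any fielded logic): engage autonomously only if the link is actually denied, the vehicle is inside its pre-briefed target basket, and the emitter matches a pre-allocated signature from the library.

    # Hypothetical sketch of the "cleared to engage on its own" gate for a
    # deep-strike SEAD/DEAD UAV. Nothing here reflects a fielded system;
    # the point is that autonomy only kicks in inside pre-briefed limits.

    ALLOCATED_SIGNATURES = {"SIG-117", "SIG-204"}   # pre-allocated from the signature library

    def inside_target_basket(position: tuple[float, float],
                             basket: tuple[float, float, float, float]) -> bool:
        """Pre-briefed lat/lon box the UAV was positively controlled into."""
        lat, lon = position
        lat_min, lat_max, lon_min, lon_max = basket
        return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

    def autonomous_engage_authorized(link_denied: bool,
                                     emitter_signature: str,
                                     position: tuple[float, float],
                                     basket: tuple[float, float, float, float]) -> bool:
        # All three limits must hold; otherwise the UAV holds fire and
        # waits for the man in the loop to come back.
        return (link_denied
                and emitter_signature in ALLOCATED_SIGNATURES
                and inside_target_basket(position, basket))

    # Jammed, matching signature, inside the basket -> cleared to engage.
    print(autonomous_engage_authorized(True, "SIG-117", (34.1, 45.2),
                                       (34.0, 34.5, 45.0, 45.5)))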

UUVs are outside our UAV discussion, but there's a common cognitive dissonance there too. Lots of influencers and deciders seem to believe we can somehow have stealthy vehicles that communicate effectively enough to keep a person in the decision loop while getting weapons on target in other-than-permissive areas ... pick no more than two and I think we've got a workable solution.
Though my favorite part of the high-endurance USV/UUV decision process is: who gets to play Maytag Repairman when an expensive toy breaks on deployment?

Please forgive the party foul for mentioning naval mines. We're the Good Guys(tm); we don't plan offensive or defensive mining operations anymore, even if mine warfare is a right answer to some grand tactical questions.
But if we did, I'd expect that to be a job for the T-400 (the T-800's "less sophisticated" cousin) driving into some opposed chokepoint or harbor.

SURFOR and the maritime community need to face that contingency in ASuW too. We've got advocates for distributed lethality expressed as armed UAVs and USVs (like Ghost-class optionally manned platforms) carrying ASCMs. Where LOS comms relays like lasers can't keep at least a person informed in the loop and RF doesn't work for us, we might need to allow armed UxVs to push to a target basket and engage autonomously. I was around Harpoon enough to shudder at the thought we'd allow a UxV to put threats to white shipping in the air or water, though the Raytheon, Lockmart, and Boeing dudes say we're much better now than when early-block Harpoon was a threat to pretty much anything with a solid return in the terminal phase.
 