If AI is so frigging smart, what kind of watch, coffee, and after-market G1 flight jacket does he/she/they/it recommend? Asking for some friends...
You can shoot on a brick on a computer screen if you have PID solved per the ROE. Sometimes PID means confirmed presence of enemy and confirmed absence of friendly. That is an area that is being worked through. But sometimes it's more complicated than that (oh look, that SOF or three-letter-agency team that doesn't talk to anyone about anything they're doing found themselves in the robot kill zone; I guess they're dead now...). The attack pilot still has the ability to say, "something doesn't feel right here," and come off safe. AI isn't going to do that. IPOE isn't always right, and the AI only has the info it's given.
And minimal human validation is still human validation. Robot: "There is something moving over here, can I shoot it?" Human: "Yes".
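In software terms, that "minimal validation" pattern is just an approval gate between detection and engagement. Here's a minimal sketch; the names, the confidence threshold, and the whole structure are my own hypothetical illustration, not any real system's design:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    track_id: str
    confidence: float  # classifier confidence that this is a valid target

def request_engagement(det: Detection, human_approves) -> bool:
    """Autonomy may nominate a target, but only a human decision releases.

    `human_approves` is a callable standing in for the operator's yes/no;
    the machine can never engage unless it returns True.
    """
    if det.confidence < 0.9:          # machine-side filter: don't even ask
        return False
    return bool(human_approves(det))  # the human is the final gate

# A rubber-stamp operator ("Yes.") still counts as human validation,
# which is exactly the concern raised above.
approved = request_engagement(Detection("trk-042", 0.97), lambda d: True)
print(approved)  # True
```

The gate technically keeps a human in the loop, but notice that nothing in the code forces the operator to actually evaluate the detection before saying yes.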
How many times have we put the thing on the thing and pressed the thing, only for the bomb to go exactly where it was programmed to: right to the wrong place? In some cases, friendlies died.
And we're working the EMCON piece as well.
While I agree that a computer will come up with something (it can calculate millions of permutations ahead), I am of the opinion that no one will be able to "teach" a computer human intuition (i.e., ability to know something directly without analytic reasoning, bridging the gap between the conscious and non-conscious parts of our mind, and also between instinct and reason).

It can learn things that humans haven't thought to try, and therefore wouldn't have thought to program in.
We did a project where we taught an AI-controlled glider to dynamically soar. It came up with a trajectory that no one had contemplated. Minds were blown.
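For readers wondering what "teaching" a controller looks like in practice: projects like that typically use reinforcement learning, where the agent discovers a policy from reward alone rather than from a programmer's playbook. Below is a deliberately tiny tabular Q-learning sketch of that idea (the toy 1-D "glider" world and all names are my own illustration, not the actual project): nobody codes the trajectory in; the agent finds the move-toward-the-thermal policy itself.

```python
import random

# Toy 1-D world: states 0..5; action 0 moves left, 1 moves right.
# Reaching state 5 ("the thermal") gives reward +1 and ends the episode.
N_STATES, GOAL = 6, 5
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: q[state][action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit, sometimes explore
            a = rng.randrange(2) if rng.random() < EPS else max((0, 1), key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            # Q-learning update: nudge Q(s,a) toward r + gamma * max Q(s',.)
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

def greedy_rollout(q):
    s, path = 0, [0]
    for _ in range(20):  # step cap avoids infinite loops under a bad policy
        s, _, done = step(s, max((0, 1), key=lambda x: q[s][x]))
        path.append(s)
        if done:
            break
    return path

q = train()
print(greedy_rollout(q))  # learned policy marches right: [0, 1, 2, 3, 4, 5]
```

Scale the state space up from 6 cells to a continuous flight-dynamics model and you get the flavor of how a learned soaring trajectory can land outside anything the engineers had contemplated.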
We can agree to disagree on this subject. A great many pilots in this forum have spent their careers working through and training for the scenarios you describe (myself included). In my opinion, your thinking is a result of cultural conditioning to a specific type of warfare over the last two decades. Ironically, some of the situations you are alluding to were the result of human error or process fouls, not automation.
All the factors you discussed can be mitigated/controlled. You should probably sit down with Fires professionals and look at how they run through deliberate or dynamic target development while generating a fire support plan in a conventional peer-to-peer fight. I believe it's a matter of years before some of these functions are either semi- or fully automated.
Awesome. I'll be sure to tell the dozen or so companies, three universities, and the Gov sponsor that you think we don't know what we're doing at our next exercise.
If you think you know better, send me your resume to forward to the team leads. We can always use smart guys who know in depth how AI and ML work across the range of military operations.
Let's rephrase this...

If we get into an actual shooting war and our foes are letting their unmanned craft make their own decisions, with the occasional red-on-red tolerated because that is how they roll, we will pretty quickly follow suit is my prediction. Especially when you have way more unmanned craft than you have people to supervise them.
Unless it's a big war and a few dead by our own hand are decimal dust in the grand count.

Let's rephrase this...

"If we get into an actual shooting war and our foes are using chemical weapons, with the occasional red-on-red tolerated because that is how they roll, we will pretty quickly follow suit is my prediction."
Yeah, no. Highly doubt fratricide at the hands of autonomous UAVs will be looked upon favorably by the American public.
Who will program the programmers?

CS professors.

Garbage in, garbage out.
Hard pass, buddy. I have no desire to be associated with the robot community. Don't get so butthurt when someone disagrees with you.
I think you want to be a troll.
I would caution Joint planners against mirror-imaging bias. Adversary armed autonomous platforms don't have to do what humans do. They don't have to be deliberate, follow a certain established process, adhere to LOAC, or even survive. They don't have to be intuitive, adapt, or solve problems. It's just the newest iteration of the land mine, except now it also flies/floats/drives/shoots.

If Russia or China gets into a shooting war the regime thinks it might lose, they will pull zero punches in order to inflict pain on "their aggressor" until they can achieve an exit or offramp favorable to their own terms (e.g., avoiding regime change). That means, potentially, no-f*&$-given about civilian casualties or international legal restraints on warfare. Just look at Russia in Chechnya, or China in the Korean War.

And they don't have to worry about red-on-red if their "kill-o-matic 3000" has an X km weapons range and they plunk it down X+5 km behind enemy lines and tell it to shoot anything that moves, with no intention of getting it back. Who cares if it kills a bunch of noncombatants? They control their domestic news media, and they're fighting for state survival at this point anyway. They just need a quick fix that is cheap to mass produce, disrupts the enemy, and undercuts the enemy's perceived advantages in technology/training/logistics/integration of the elements of warfare/industrial might/and so forth.

If we get into an actual shooting war and our foes are letting their unmanned craft make their own decisions, with the occasional red-on-red tolerated because that is how they roll, we will pretty quickly follow suit is my prediction. Especially when you have way more unmanned craft than you have people to supervise them.