
The US Military's AI 'Swarm' Initiatives Speed Pace of Hard … – Slashdot





AI written by the military would _never_ adopt a “kill them all and let God sort them out” attitude?
Hopefully AI will take all the troublemakers out via rapture and leave the rest of us to a hundred years of peace…
What could possibly go wrong?
war plan selection USA 1st strike!
From an essay I wrote on this in 2010: https://pdfernhout.net/recogni… [pdfernhout.net]
“There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those “security” agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure.”
Lex Fridman had an in-depth interview with the guy a few days ago, with fresh takes on the latest geopolitical events. Well worth a watch. There are also short clips on the Lex Clips channel.
https://www.youtube.com/watch?… [youtube.com]
Lex Luthor hates superman.
Certainly you can find that attitude within the ranks of any military, but it doesn’t always govern the high-level strategic decisions, or weapons development. Nations, and by extension their armies, can have goals other than mass slaughter.
Take, for example, the US invasion of Afghanistan. The first part, large-scale military operation with the goal of regime change. They achieved it with 10,000 Taliban casualties and 2,000 civilian casualties. Yes, there were a few “kill them all” types who deliberately t
…may skip safety to gain an advantage or to save a buck. Suppose they send a swarm after us, but our counter-swarm is remote controlled. They set off an EMP burst to screw up all radio, and bots are left to fight on their own. The more autonomous ones would have a big advantage.
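The comms-jamming scenario above can be sketched in a few lines: a remote-controlled drone with a dead link has nobody to decide for it, while an autonomous one keeps running its onboard loop. This is a hypothetical illustration; the mode names and actions are invented, not from any real system.

```python
# Hypothetical sketch of why jamming favors autonomy: without a radio link,
# a teleoperated drone can only hold or return, while an autonomous drone
# keeps executing its mission. All names and actions are illustrative.

def next_action(mode: str, link_ok: bool, target_visible: bool) -> str:
    if mode == "remote":
        # A teleoperated drone with no link has no operator to decide for it.
        return "engage" if (link_ok and target_visible) else "hold_or_return"
    if mode == "autonomous":
        # The onboard decision loop keeps working through the jamming.
        return "engage" if target_visible else "search"
    raise ValueError(f"unknown mode: {mode}")

# Under an EMP/jamming scenario (link_ok=False):
print(next_action("remote", link_ok=False, target_visible=True))      # hold_or_return
print(next_action("autonomous", link_ok=False, target_visible=True))  # engage
```

The asymmetry is the whole point of the comment: the side that skipped the human-in-the-loop "safety" keeps fighting after the link dies.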
move to def con 2!
you will find little sympathy with this line of thought around here. they literally define themselves as the saviors, and some actually believe that.
then of course if you are convinced that you are part of the good guys, anything goes, and everything is the other’s fault.
the us does this not because it is bad, but simply because it can. it’s the way power works, and the us harbors a ruthless and very powerful elite that can afford to fuck up here and there, again and again, and nothing ever happens, except lots of people die and they continue making money. so if you live today in an area some of these motherfuckers want to fuck around with, you’re fucked, simple as that.
but that’s still localized risk, bad luck living just there. the principal risk to global peace is that china is getting close to being a counterweight to that, and the us will not tolerate it, and all hell will break loose. rather soonish, i would expect, and i hope to be wrong.
Hamas is the product of 75 years of occupation and Apartheid.
Putin attacked Ukraine because NATO wanted to put nukes there right on their border, and they reacted as strongly as the USA did when the USSR wanted to put nukes in Cuba.
North Korea’s regime is the product of the US war in Korea.
So, I guess your sarcastic question could be answered “yes, they are all made to do bad things by foreign interference”.

Putin attacked Ukraine because NATO wanted to put nukes there right on their border, and they reacted as strongly as the USA did when the USSR wanted to put nukes in Cuba.

Taking Ukraine would simply put their border closer to NATO’s nukes. It is a nonsensical argument.
lol yea coz that’s an accurate analogy.

North Korea’s regime is the product of the US war in Korea.

I’m confused, didn’t South Korea suffer the same war?
Ok so sit down this may come as a shock:
The two sides in war often don’t come out with the same result.
So what you mean is that the side that aligns with the USA gets a much better future. I get it. So the USA are the goodies. I find your position a bit Manichean, but I cannot really argue with it, I mean, Japan, West Germany as opposed to East Germany… yup, you are mostly right.
Haha OMG posting a Ben Shapiro video as a “lesson in Middle Eastern history” has got to be the funniest thing I’ve seen this year.

STOP MAKING ENEMIES. problem solved.

Don’t you guys realise that the U.S. is the primary driver of conflict the world over?

$$$$ before humanity.

The USA is creating enemies by allowing women to go out in public without a bag over their head, to hold a job, learn to read and write, go see a physician without a male relative escorting them, or just generally have a life outside of their home. I won’t say that the USA is the primary driver of international conflict but I can agree that it is a large contributor. If the USA stopped making enemies then that would not likely do much to end conflict because then these women-hating assholes that have been
I wish that would work.
Yes, the US has often acted in such a way as to make enemies. Often the alternative was isolationism, but by no means always.
That said, the domination of “the petro dollar” has caused the US to act in many ways that have created enemies. And so has an earlier paranoid hatred of anything that called itself communist. (I still see no reason for the “Vietnam War” other than the US unilaterally breaking the Geneva agreement.)
But not making enemies isn’t sufficient. Some folks will cha
> Don’t you guys realise that the U.S. is the primary driver of conflict the world over?
USA is far from perfect, but without us, dictators would have their way with the world. We let Germany and Japan be independent democracies after they lost. Dictators wouldn’t do this.
That being said, funding Israel is a big mistake. Israel swipes land and has about 30 go-to excuses for doing such. Israel wants land more than peace, and that’s a recipe for perpetual conflict. (Giving the land back won’t create instant
That’s not being a hypocrite, that’s being self-centered. Just like all the countries that *do* decide to develop nuclear weapons anyway are self-centered. The world would be a lot safer if nobody had nuclear weapons. (OTOH, there would probably be lots more wars.) But it’s really hard to stuff a genie back into the bottle.
We are seeing remote controlled drones used in Ukraine to great effect, and Russia seems to be able to do very little about them. The smaller ones are almost impossible to detect and shoot down, and very cheap to procure. The Ukrainians are literally buying them from AliExpress, strapping grenades to them, and sending them in through windows and doors.
When they become autonomous, swarms of suicide drones will be deployed in a similar fashion.
Well-equipped militaries will simply flatten buildings like Israel
Ah yes. Our enemies might do this, therefore we must do it before they do!
The warmonger’s motto.
We already have autonomous weapons, and we’ve been doing a pretty good job keeping them in check so far. It’s not a perfect record but far from something that I’d think is a problem.
Take the Phalanx CIWS system as an example. This system doesn’t have an IFF system because anything moving fast enough towards a friendly asset to be a threat is fired upon. There’s exceptions written in so friendly aircraft can approach safely, typically by following a path designated for a safe approach, a path kept guarded so an enemy can’t use it to send in missiles or something.
In the airspace over Ukraine it’s just generally a “no fly zone” and anything airborne that can’t be ruled out as a bird is shot down automatically. This has resulted in some “blue on blue” incidents, mostly on the Russian side, but that’s a risk taken even without automated systems.
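The engagement rule described above can be sketched as a couple of checks: fire on anything fast and inbound, unless it is inside the designated safe-approach corridor. This is a toy illustration, not real CIWS logic; the thresholds, field names, and corridor bearings are all invented.

```python
# Toy sketch of a kinematics-only engagement rule with a guarded safe
# corridor, as described above. All thresholds and names are illustrative,
# not taken from any real weapon system's documentation.

from dataclasses import dataclass

SPEED_THRESHOLD_MS = 150.0  # assumed: faster inbound contacts count as threats


@dataclass
class Contact:
    speed_ms: float     # closing speed toward the defended asset, m/s
    bearing_deg: float  # bearing of the contact from the asset
    closing: bool       # True if range to the asset is decreasing


def in_safe_corridor(bearing_deg: float, corridor=(40.0, 60.0)) -> bool:
    """Friendly aircraft approach only through this guarded bearing window."""
    lo, hi = corridor
    return lo <= bearing_deg <= hi


def should_engage(c: Contact) -> bool:
    # No IFF: the only "identification" is kinematics plus the corridor rule.
    if not c.closing:
        return False
    if c.speed_ms < SPEED_THRESHOLD_MS:
        return False
    return not in_safe_corridor(c.bearing_deg)


# A fast inbound contact off-corridor is engaged; one on the corridor is not.
print(should_engage(Contact(speed_ms=300, bearing_deg=120, closing=True)))  # True
print(should_engage(Contact(speed_ms=300, bearing_deg=50, closing=True)))   # False
```

The point of the sketch is that everything hinges on keeping that corridor guarded: the rule has no notion of "friendly," only of where and how fast.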
The primary rule that protects friendly forces and noncombatants from automated killing machines is that we don’t use them where “blue on blue” or “blue on green” events could happen. In that case a human is put in the decision loop. That’s not foolproof because not every human will get it right, but it does mean that we have a person capable of more complex decision making than some rigid algorithm.
What is making automated systems important is that weapons can move at much greater speeds than in the past, meaning if a person is in the loop they may not be able to process the threat quickly enough to respond. If they do respond in time then it may be because the human is using an overly simplistic decision tree on whether to fire or not. An example of that is telling a sailor on a ship to fire upon any radar contact that comes from shore. That’s going to be effective in protecting against enemy forces on land and will not put friendly aircraft coming from other ships at risk, but it could mean someone fleeing the war in a Cessna ends up getting shot to pieces. Crazy things like this have happened.
https://en.wikipedia.org/wiki/… [wikipedia.org]

One of the more notable events occurred on Midway when the pilot of an RVNAF Cessna O-1 dropped a note on the deck of the carrier. The note read “Can you move these helicopter to the other side, I can land on your runway, I can fly 1 hour more, we have enough time to move. Please rescue me. Major Buang, Wife and 5 child.” Midway’s commanding officer, Captain L.C. Chambers ordered the flight deck crew to clear the landing area; in the process an estimated US$10 million worth of UH-1 Huey helicopters were pushed overboard into the South China Sea. Once the deck was clear Major Buang approached the deck, bounced once and then touched down and taxied to a halt with room to spare.[24] Major Buang became the first RVNAF fixed-wing pilot to ever land on a carrier. A second Cessna O-1 was also recovered by USS Midway that afternoon.[6]:121

This is why China doesn’t build large aircraft carriers, because an aircraft carrier is a big fat target, and there is a lot of symbolic value if one is sunk, rendered inoperable, or the flight deck rendered unusable.

What are you talking about? China wants aircraft carriers badly, and apparently has, or had in the past, plans to have 6 “large” carriers and 8 “light” carriers.
https://en.wikipedia.org/wiki/… [wikipedia.org]
China doesn’t have many aircraft carriers now because it takes a lot of money and time to build one, especially for someone that hasn’t built any before. Another problem is getting enough suitable aircraft to make the carrier effective, and have competent pilots to fly them.

There is going to be a time where a fleet of ships with drones on them will have more firepower than a present-day carrier group, even bringing back the SLAM as a deterrent option.

An aircraft carrier that has unmanned air
There is going to be a time where a fleet of ships with drones on them
This is not a new idea. The US has about a hundred frigates and cruisers of this kind and has had for fifty years. A bunch of submarines too. They’re pretty handy, but the US still builds carriers. Shockingly, they launch more capable drones off of those.
Also, China has bought several carriers over the years, then they started building their own. They’re currently on version 4.
You say “drone” and “swarm” and geeks lose their minds. Sudden
I’d rather have a military AI make a fire decision than a cop in the South.
Those racist “thin blue line” bastards just want to kill minorities.
Military AIs take out actual threats.
The “thin blue line” is a way to make “us vs them” legitimate. It’s not. Cops that live this way should be fired or shot by military AIs.
If you’re an “us vs them” person in law enforcement, just shoot yourself. Nobody will cry.
Community policing is better. Not much better, but better. But we’re doing our best to reverse those gains and remove the educational requirement for cops so we can start hiring high school bullies again.
So somebody involved is a Stargate fan and has a sense of humour.
Because we’re all looking to create an unstoppable swarm of tiny machines bent on destroying all biological life in the galaxy.
> There is little dispute among scientists, industry experts and Pentagon officials
Scientists are people who follow the principles of science.
Industry “experts” are just people employed who claim qualifications.
Pentagon officials are dipshits with a starched shirt and some medals.
NONE of them are qualified to opine on anything much. The so-called “scientists” might IF and ONLY IF that science field was related to what they opine about.
AI is not a thing. There’s no “intelligence” in AI. All there is is
So, from your argument, you are not qualified to opine about intelligence.
AI is a name. Names are always valid. A black cat can be named Grey, and that’s still a valid name.
The real problem with AI is that it’s not a good clade. Very different things are given the same name. This is still valid English use, but it *is* confusing.
As for “intelligence”, there is no commonly accepted meaning. People just use the “I know it when I see it” test. (And if you mean IQ, there are AIs with a higher one than you
Land mines, anti-tank mines, and anti-shipping mines are fully autonomous and have been for over a century.
Their “AI” is simple: if someone passes nearby, explode.
Although modern anti-shipping mines are often designed to ignore decoys and take out high-value assets.
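The century-old "AI" described above, plus the decoy-filtering refinement for modern anti-shipping mines, fits in a few lines. This is a toy illustration; the trigger radii and signature threshold are invented for the sketch.

```python
# Toy sketch of the proximity-trigger logic described above, and the
# decoy-filtering refinement of modern anti-shipping mines. All thresholds
# and signature values are invented for illustration.

def simple_mine(distance_m: float, trigger_radius_m: float = 5.0) -> bool:
    """A century-old mine's entire 'AI': explode if anything gets close."""
    return distance_m <= trigger_radius_m


def influence_mine(distance_m: float,
                   acoustic_signature: float,
                   min_signature: float = 0.8,
                   trigger_radius_m: float = 50.0) -> bool:
    """Modern anti-shipping mine: ignore small decoys, wait for a big target."""
    return distance_m <= trigger_radius_m and acoustic_signature >= min_signature


print(simple_mine(3.0))           # True: tripped by anything nearby
print(influence_mine(30.0, 0.2))  # False: decoy-sized signature is ignored
print(influence_mine(30.0, 0.95)) # True: high-value acoustic signature
```

The second function is the whole "autonomy" debate in miniature: the weapon is already deciding, on its own, which targets are worth exploding for.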

This reminds me of a couple things, things that might not exactly follow where you were going.
First thing this brings to mind is something of a joke, which may have some basis in reality. An admiral is in his office on the flagship of a navy flotilla when the relative calm is broken by a large explosion. He gets up to run to the bridge to see what has happened. A mine had exploded off to the side of the ship but did only superficial damage to the thick hull, even so this was a considerable danger since h
This is how you get an AI overlord. Naming it fucking “Replicator”… the fuck.
OpenAI: Behold.. ChatGPT!
People: OMG.. autocomplete will destroy civilization
Pundits: ChatGPT is an existential threat to humanity
People: F-35 F*** yeah!
People: Oh no.. AGI.. oh no.. LLM.. oh no
People: F-22 shoot them out of the sky!
Meanwhile in the military: let’s create swarms of AI robots to destroy civilization
Military laughing at us: haha.. they’re scared of autocomplete, but don’t even blink an eye on the literal civilization ending weapons we routinely create. They even cheer us on! Haha.
