
Cruise's CEO Resigns – Slashdot





could have prevented this mishap.
Who tried to save a few pennies by omitting such an essential camera?
A wiper, asswipe.

The problem isn’t the cameras, it’s the AI.
I saw a video showing a similar issue with a Tesla.
They were doing tests with a cutout of a small child and having it jump out into the road to see if the car would stop, and it didn’t do great. But in the final test with the latest “FSD” it came to an almost complete stop, only lightly bumping the “child.” This was just enough to knock over the cutout, and when the cutout fell to the ground the Tesla lost sight of it, forgot it was there, and proceeded to drive over it.
The trouble is these systems are getting fairly good at seeing objects and tracking them, but when something vanishes they don’t really have the judgment to understand where they went, or the nuanced memory required to say oh, that person who just appeared in front of me, they didn’t go anywhere and are probably lying in front of my car.
Maybe you can nab that situation with enough special cases, but what about the small child who just vanished behind a parked car? Or the deer running towards the road that dropped out of sight because it ran down into the ditch?
Self-driving is still a very difficult problem.
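The missing judgment described above — an object that vanishes right in front of the bumper probably didn’t go anywhere — can be sketched as a simple heuristic. This is purely illustrative; the names and thresholds are invented, and it is not any vendor’s actual logic:

```python
# Illustrative "object permanence" heuristic: when a tracked object
# vanishes, guess where it went instead of forgetting it existed.
# All names and thresholds here are invented for the sketch.

def classify_disappearance(last_pos_m, last_seen_s, now_s):
    """last_pos_m: (x, y) metres from the front bumper when last seen."""
    distance = (last_pos_m[0] ** 2 + last_pos_m[1] ** 2) ** 0.5
    recently = (now_s - last_seen_s) < 2.0
    if recently and distance < 3.0:
        # It vanished seconds ago, right at the bumper: a fallen
        # pedestrian (or cutout) is far likelier than teleportation.
        return "possibly_under_vehicle"
    return "occluded_or_gone"

# The cutout test described above: last seen 1.5 m away, 0.5 s ago.
print(classify_disappearance((1.5, 0.0), last_seen_s=10.0, now_s=10.5))
# → possibly_under_vehicle
```

Even a crude rule like this only handles the one case it names; as the comment notes, the child behind the parked car and the deer in the ditch each need their own judgment call.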
It’s not all algorithms. It’s all data. AI is really just studying lots of data, looking for patterns, and hoping that if it finds a pattern in the “good” data, then following that pattern will get a “good” result. What this means is that they don’t have enough data about collisions with objects for their model to learn that it should stay stopped, or that the data they do have that says “nothing detected, hit the gas” is strong enough to override it.
It’s not semantics. There are no algorithms involved, really. Nobody is adding logic like “if (camera_sees_object()) then brake_hard()”. The entire point of AI and machine learning is NOT to write those kinds of rules, because the complexity is such that it can’t be done. Instead, it’s all pattern matching. That’s what machine learning is.
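To make that distinction concrete, here is a toy contrast — everything in it invented for illustration: the explicit rule nobody writes, versus a “policy” that is nothing but pattern matching against stored examples (a 1-nearest-neighbour lookup standing in for a trained network):

```python
# Hypothetical contrast between a hand-written rule and a learned policy.
def rule_based(camera_sees_object: bool) -> str:
    # The explicit if/then logic the comment says nobody writes:
    return "brake_hard" if camera_sees_object else "proceed"

# A "learned" policy is just a function fit to data: here a trivial
# 1-nearest-neighbour over (distance_m, closing_speed_mps) examples.
TRAINING = [
    ((2.0, 5.0), "brake_hard"),
    ((50.0, 0.0), "proceed"),
    ((10.0, 8.0), "brake_hard"),
    ((30.0, 1.0), "proceed"),
]

def learned_policy(features):
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(TRAINING, key=lambda ex: sq_dist(ex[0], features))
    return label

print(learned_policy((3.0, 6.0)))   # → brake_hard (nearest example agrees)
print(learned_policy((40.0, 0.5)))  # → proceed
```

The learned version only behaves well where its examples cover the situation — which is exactly the collision-aftermath gap the comment describes.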

The ‘AI’ can only do what it was programmed to do. Contrary to what the ‘AI’ abbreviation suggests, there is no intelligence involved here; it is all algorithms. This is on the developers. If a living object was detected and any sort of defensive maneuver was initiated, the fail-safe should be to stop and not move until the vehicle operator/nanny clicks OK on a bazillion proceed prompts.

And then you block the street which holds up emergency vehicles.
Or you’re in an intersection and you’re now getting t-boned.
Even a fail-safe is tough.
It’s not ok to drag a person around, unless it is. It’s not ok to stop, unless it is. It’s not ok to pull over, unless it is. It’s not ok to accelerate unless it is.
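That context-dependence can be made concrete with a toy decision table — purely illustrative, with invented names, and with the genuinely hard part (populating the context correctly) left out entirely:

```python
# Toy, invented decision table for a fail-safe after a collision.
# The real difficulty is not this table but filling in `context`.
def failsafe_action(context: dict) -> str:
    if context.get("person_possibly_under_vehicle"):
        return "stop_and_wait_for_help"        # never drag someone along
    if context.get("in_intersection"):
        return "clear_intersection_then_stop"  # stopping here invites a T-bone
    if context.get("blocking_emergency_vehicle"):
        return "pull_over"
    return "stop_in_place"

print(failsafe_action({"in_intersection": True}))
# → clear_intersection_then_stop
```

By this sketch, the Cruise vehicle chose the pull-over branch only because nothing in its context said a person might be underneath: every rule is only as good as the perception feeding it.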
Human judgment. AIs don’t have it.

The majority of people who hit something they did not see would attempt to pull over, just like the AI did.

Except the AI did see it.
It just “forgot”, because the moment something goes out of view for a couple of seconds it effectively doesn’t exist for the AI.
The person (assuming the person they hit was unconscious and couldn’t scream) would also notice their car’s driving felt off, and would quickly surmise that the thing they hit was still underneath, and stop. The AI wouldn’t put those things together. It would just think “oh, I have a mechanical error, better pull over!”
Waymo seems to be far ahead of everyone else. If you look at their videos from 6-7 years ago, they show how their system tracks objects even when they are occluded. When it sees a pedestrian it predicts the path they will take, and if it then loses sight of them it still has a good idea of where they are likely to be.
I find it interesting that Waymo identified the need for this long ago, but other companies either decided they didn’t need it, or it just didn’t occur to them and they thought that vision alone would be enough. Tesla seem to be in the latter category, with the assumption being that with enough cameras they can always see everything they need to know about.
I don’t think this is a fair characterization.
Although it appears the techniques in use differ, it’s safe to say that Google/Waymo had a headstart, and have gotten around to solving more problems.
For instance, for Tesla, if you listen to the first talk by Andrej Karpathy years ago, he was talking about modeling more things with neural networks, but they still haven’t released a version where planning is under network control. So until now, their system hasn’t actually learned to improve the driving itself.
I think relying on neural networks to do things like path prediction is a bad idea. Waymo uses algorithms and they have proven to be reliable. They doubtless consume a lot less energy too, and don’t require a supercomputer and gigawatt hours of electricity to train either.
Recognition with just cameras is a real issue too. They can train their NNs to recognize certain objects and certain situations, but not give it the general intelligence needed to understand things that are well outside the training data.
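The occlusion handling described earlier in the thread — predict a pedestrian’s path, then coast on the prediction when sight is lost — can be sketched with a constant-velocity track. This is a toy stand-in under made-up names, not Waymo’s (or anyone’s) actual tracker:

```python
# Minimal sketch: a track that coasts through occlusion instead of
# being deleted the moment detections stop arriving.
class Track:
    def __init__(self, pos, vel):
        self.pos, self.vel = pos, vel
        self.frames_unseen = 0

    def predict(self, dt=0.1):
        # Coast on the last known velocity even with no detection.
        self.pos = (self.pos[0] + self.vel[0] * dt,
                    self.pos[1] + self.vel[1] * dt)

    def update(self, detection):
        if detection is None:
            self.frames_unseen += 1   # occluded: keep predicting, don't delete
        else:
            self.frames_unseen = 0
            self.pos = detection

ped = Track(pos=(0.0, 0.0), vel=(1.2, 0.0))  # pedestrian walking at 1.2 m/s
for _ in range(20):          # 2 s hidden behind a parked car, no detections
    ped.predict()
    ped.update(None)
print(ped.pos)               # ≈ (2.4, 0.0): still a decent guess of where they are
```

Real systems layer uncertainty on top (the longer the occlusion, the wider the region the pedestrian could be in), but even this toy version never “forgets” that the pedestrian exists.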
“they can always see everything they need to know about.”
Unless it’s a semi trailer moving across the road; then, not so much.
Far ahead in the number of NHTSA investigations, crashes, and running people over, you mean. All of these American companies are playing fast and loose with safety, and the consequences are gradually catching up with them. Allowing these companies to self-regulate is never going to fly.

I find it notable how Waymo stays under the radar, so it does seem like they’re being quite a bit safer.
I still think occlusion is a fundamentally tough problem. Consider someone carrying a large umbrella, the car could easily see that as a separate object to track (even a potential person). Then, the person folds up the umbrella and poof, the object is gone.
Now, instead of an umbrella assume it was a toddler who got picked up (and has vanished according to the car), and they might get put back down at any
Who gives a shit about a CEO? Who should give a shit about a CEO? No one.
He moved fast and broke his career
What is this Robotaxi BS? GM can’t even do ADAS in their cars. Make ADAS a thing, and then get autonomous done.
Exactly.
The industry fallaciously believed they could leapfrog 10-20 years of technological development, social acclimation, and legal precedent, and go from “automobiles with no driver assistance” to “fully autonomous vehicles”. It was a stupid assertion 10 years ago and it’s a stupid assertion today in the age of “AI” (for very loose definitions of “AI”). Here are the problems:
1. Driving is HARD.
Choosing to take actions while driving is easy, but taking in the MASSIVE amount of data that humans do while d
Musk also keeps promising it’s just around the corner too.
I’m sure more controlled environments, like planes and trains, and maybe long-haul road, will make headway but the average car is not that.
Planes and trains are pretty much fully-automated as-is. Long-haul road is coming, already seen plenty of driverless trucks on the 15 between Barstow and Vegas.
Planes are fully automated in the portions of the route they fly far away from anything, or where an ATC has cleared a path for them to use. Trains are automated because they are tracked by a centralized system at all times. Neither setup works very well for cars on congested public roads that are governed by the traffic rules that humans use.
That’s why we should use elevated PRT hanging from a monorail. Then the cars don’t have to be self-driving, the system can drive them, you eliminate the loss of the pneumatic tires, instead of these long ribbons of asphalt that cost so much to build and maintain you have pylons and a narrow ribbon of steel. Cars and roads brought us a long way, but they ceased to make sense decades ago. We have the technology for self driving, and it is called rail.
The issue with planes and trains is the same as this robotic thought. AIs today are great at mimicking your sympathetic nervous system. If you could do it automatically without thinking, an AI can probably do it too. As soon as things go outside the norm though, as soon as you actually need to think, because you’ve not encountered this scenario before, the AI is screwed and it won’t even know it.
What do you think an AI would have done in Sully’s place? What about for AirFrance
Sully: Birds!
AI: Branta canadensis, a.k.a. Canada geese.
OS: Both engines appear to be offline: Abort, Retry, Fail?
A video, which TechCrunch also viewed, showed the robotaxi braking aggressively and coming to a stop over the woman. The DMV’s order of suspension stated that Cruise withheld about seven seconds of video footage, which showed the robotaxi then attempting to pull over and subsequently dragging the woman 20 feet…
A woman died in that hit and run, meaning someone is facing serious criminal charges.
Cruise withheld the 7 seconds from the DMV; did they also withhold it from the detectives? Because presumably that
Can you give a source for this? The original article stated that the condition of the woman in the hospital was critical. The latest I could find was from October 24, from the company’s statement: “First and foremost, our thoughts are with the individual, and we are hoping for their complete recovery.”
How’s the lawsuit from that woman coming along? With regulators and new liability requirements coming in soon, it sounds like he’s decided he’s made enough money and it’s a good idea to skedaddle.
Would probably sell any stock he has too, except he would have to file announcements with the SEC that would draw attention to it.
I wonder how much of this is because of GM? I have no idea if this guy started out virtuous or not, but maybe GM came in and told him to take some shortcuts which he didn’t like – and now there are suits against the company which he doesn’t feel like he ought to defend.
According to the DMV, Cruise representatives showed video footage of the crash captured by the robotaxi’s onboard cameras only leading up to the point where the driverless vehicle made its first complete stop after braking hard.
“Cruise did not disclose that any additional movement of the vehicle had occurred after the initial stop of the vehicle,” the DMV said in its Order of Suspension. The DMV alleges it received from Cruise the full footage of the video on Oct. 13 — 11 days after
Autonomous vehicles are a luxury promoted as necessity.
There is enough money in play that casualties are not a moral concern but merely a financial risk when they generate negative publicity.
has consequences….
How much did he finally walk away with from this scam?
