This isn’t a fully autonomous vehicle in the vein of a Google car, though – the primary feature is what Tesla calls Autosteer, which keeps the car in its current lane once you’re already on the road and manages speed and distance from the car ahead. On the call, Elon Musk was careful to call out Autosteer as a “beta” feature – drivers are told to keep their hands on the wheel, even when the function is engaged. “We want people to be quite careful” at first, Musk said, while admitting that “some people” may take their hands off the wheel regardless. “We do not advise that,” he added. An upcoming version 7.1 will add the ability to send the car off to a garage on its own and come back to pick you up, another feature teased when Musk first announced autopilot capabilities last year.
Am I the only one who feels a little uncomfortable about a function like this being designated ‘beta’, but still sent to every Tesla driver? People – including myself, and yes, even Tesla drivers – are idiots, and I don’t trust them to follow Musk’s advice at all.
Reminds me of the Infiniti Q50 from 2014 or even a year earlier.
Which already had active lane control:
https://www.youtube.com/watch?v=zY_zqEmKV1k
But as you can see at the end, be careful!
Simple automation like cruise control and active cruise control is useful because it can handle the accelerator for you. Fully self-driving cars are obviously useful as well. But what is the use of a steering feature that doesn’t actually free you from steering the car? This would make driving more difficult, not less, because you have to rest your hands on the wheel without actually turning it. Same with lane changes: how is it easier to hit the turn signal than to just turn the wheel directly?
Also really not keen on the beta label applying to life or death features like this.
Edited 2015-10-15 00:18 UTC
However, this does illustrate how quickly the self-driving car is coming, and Google is unlikely to have any part of it. Tesla is a few years out, but so is every other car manufacturer out there. The Google car will just be another failed experiment out of Google.
Your comment makes more sense in the context of the hardware that is actually in the Tesla, and yes, it could be that Tesla has a more solid approach than Google.
The next generation of the machine-learning hardware in the Tesla makes the Google LIDAR approach look positively ancient, and makes you wonder why they are doing it that way.
The things Musk has been saying about Tesla’s approach being better make real sense when you see what this hardware can actually do.
The car handles checking in all directions for other cars and their speeds so as to avoid collisions? That addresses a very common failure: people changing lanes without noticing an incoming car, or misjudging the other car’s speed or distance.
Here’s a video: https://www.youtube.com/watch?v=3yCAZWdqX_Y
I can just feel his nervousness. It takes all of his self control to not touch the wheel and that’s pretty much how I would feel using it.
I don’t mind steering on my own if I am behind the wheel, but I think the coolest potential of self-driving cars is being able to rent a car and have it come get you, or having the car drop you off without needing to find a place to park it.
It has the potential to challenge the need for car ownership for a lot of people. We have a car, an SUV, a van, and I have a motorcycle. The sticker price for all 4 vehicles is about $100k, plus maintenance, fuel, registration, insurance, etc. That is a lot of money for a middle-class family to afford, and all of them spend nearly all of their time just parked.
Converting to Zipcar or automated Uber or something would probably save us a fortune on car ownership because you are splitting the costs with a lot of other people.
As someone who works in tech, though, I still have less confidence in the reliability of full automation than most other people I know. But this is something we’ll get to the bottom of for sure in the transition phase of “assisted driving”, before we remove the requirement for a human copilot to oversee control. I feel like it needs a red “oh shit” button for passengers to hit if nobody is in the driver’s seat, one that decelerates the car if it looks like something is going horribly wrong.
Isn’t that the point, though? This is an interim step, and maybe they are trusting drivers a bit much at this point; I guess we’ll see. But the entire reason for the move toward self-driving cars is that people are idiots…
Anyway, we don’t have concrete numbers, but all estimates I have seen put the number of Teslas on the road today at somewhere around 40,000. Total. There are about 20,000 car accidents a day, in the US alone. Even if this causes every one of them to crash in the next month it will barely cause a blip statistically.
I’m much more concerned about the 20,000 accidents per day – almost all of them caused by driver error, in cars that don’t have any kind of assisted driving…
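The scale argument above can be made concrete with a back-of-envelope calculation. A minimal sketch, assuming roughly 250 million registered US vehicles (that figure is my assumption; the 40,000 Teslas and 20,000 accidents/day come from the thread):

```python
# Rough estimate: if Teslas crashed at the same per-vehicle rate as
# the overall US fleet, how many Tesla accidents would we expect per day?
teslas_on_road = 40_000        # estimate cited in the thread
us_vehicles = 250_000_000      # assumed US registered-vehicle count
accidents_per_day = 20_000     # US daily accident figure from the thread

tesla_share = teslas_on_road / us_vehicles           # ~0.016% of the fleet
expected = accidents_per_day * tesla_share
print(f"expected Tesla accidents/day: {expected:.1f}")  # about 3.2
```

Even a several-fold elevated crash rate for Autosteer cars would be hard to distinguish from noise against 20,000 accidents a day.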
But any Tesla crash would be front-page news.
Real mainstream car manufacturers are years ahead of Google and Tesla when it comes to self-driving technology. Yet when some Silicon Valley tech company introduces a poorer version of the same feature on their toy cars, the press get an erection.
BHP and Rio Tinto have had fully autonomous 400 tonne dump trucks operating in their iron ore mines for the best part of a decade without a single accident.
Yeah, because industrial scale robotics, operating on private property and executing pre-programmed operations controlled by large centralized computer systems costing millions of dollars is the same thing as what Google and Tesla are building…
What, do you think they walk up to the trucks, point that-a-way, and tell them to go move rocks? They are robots. They are only autonomous in a very, very limited sense. Yeah, they can detect obstacles and avoid them, back up and dump when required, stuff like that. Take one off its home mine and it won’t even operate – it needs a central computer to feed it programming…
It’s barely even related technology, much less years ahead. Try getting into one and telling it you want to go to Melbourne, and see how that works out for you…
Lane control, which is what this article is about, has shipped in high-end cars for several years now. The big car manufacturers also have a detector in the steering wheel, so the car will sound an alarm and start to slow down if you let go.
So the GP is right: this is only news because it comes out of an IT darling. It is a standard feature that Tesla is just late with, and has a shitty implementation of.
It’s not just “lane control” though, it will speed up, slow down, change lanes etc.
Every car company is currently working on automation, but it’s not something available in very many cars.
There is a good comment on it here: https://www.reddit.com/r/teslamotors/comments/3owdql/we_should_build…
Most car companies require you to keep a hand on the steering wheel for their lane keep assist systems; many don’t integrate adaptive cruise or the ability to read speed limits. The Model S, by contrast, is a fleet-wide push that continually updates and improves.
The other systems are frozen and will not receive updates without buying a more recent car.
The lane keep assist in some cars just bounces the car back into the lane off an invisible boundary; many don’t even stay centered or combine with adaptive cruise.
This puts Tesla slightly ahead of the rest of the auto industry. Other than Tesla, the S-Class probably leads the pack, and it has a $95k base price.
After the ~$10k in tax incentives and the ~$20k in fuel savings over 10 years, the Model S is cheaper than its sticker price implies.
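The effective-cost claim above is simple arithmetic. A minimal sketch (the $85k example configuration is hypothetical; the $10k incentive and $20k fuel-savings figures come from the comment, and financing costs and discounting are ignored):

```python
def effective_cost(sticker: float,
                   tax_incentive: float = 10_000,   # ~$10k from the comment
                   fuel_savings: float = 20_000) -> float:  # ~$20k over 10 years
    """Sticker price minus the offsets described in the comment."""
    return sticker - tax_incentive - fuel_savings

# e.g. a hypothetical $85k configuration nets out to $55k over a decade
print(effective_cost(85_000))  # 55000.0
```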
The fact that it has to be compared against cars like the S-Class or 7 Series BMW in luxury/features, cars like the Nissan GT-R in acceleration, and a small SUV in space/storage speaks volumes about it. It is unprecedented in center of gravity, safety, and updates after you buy it as well.
The Model S seats 7, and its cargo room (31.6 cu.ft.) is more than double that of a Toyota Camry (15 cu.ft.); it’s closer to what you would find in an SUV. A GMC Acadia is only 24.1 cu.ft., and most vans are about 30 cu.ft. and weigh 2,000 lbs more than the S.
If you have an issue with the car the Tesla dealer comes to pick it up from you and trades you for a loaner so you don’t have to take time out of your schedule to bring it in. It leads the industry there as well.
They really do get a lot of things right.
Also a big one: they don’t operate in traffic.
A modern mine site is an extremely controlled environment, nothing at all like the public highways.
And the mine operations, every movement of every truck, are constantly monitored in real-time (GPS tracking, multiple cameras, etc.) from a control room with 60 highly-trained personnel with the power to shut down any piece of machinery with the push of a button. I can’t see Tesla, Apple, or Google doing that.
I don’t disagree with your point, but the efforts of Rio Tinto are not comparable to Google’s.
Not quite. I worked at Rio Tinto until recently; whilst they are working towards fully autonomous operation, their trucks in the Pilbara of Western Australia are currently driven from a large office of drivers near Perth Airport!
Remote controlled with GPS pre-defined driving paths and tracking for the boring bits.
“Mining company Rio Tinto uses huge self-automated trucks on mines in the Pilbara region of Western Australia that are programmed to drive themselves and navigate mine roads and intersections using sensors, GPS, and radar guidance systems. The trucks self-drive but are overseen by a controller in Perth, 1800 kilometres away.”
http://www.theage.com.au/it-pro/business-it/forget-selfdriving-goog…
Hi,
Frequently laws don’t catch up with technology until after that technology is obsolete anyway.
What are the current legal ramifications here? For example, can you get charged with driving without due care if the autopilot is engaged and it runs over a carelessly placed child? Can we talk to people on smartphones and just pretend the car is driving itself when it isn’t? What if you’re drunk?
– Brendan
In most states, if you are in a vehicle while intoxicated and the keys are in it, that is enough to get a DUI. You don’t even have to be driving it at the time… It wouldn’t matter one whit that the car had “autopilot”.
I’m sure we are less than 5 years away from some dumbass trying to use assisted driving as an alibi for a DUI, assuming it hasn’t happened already (I wouldn’t be surprised at all). I guarantee it absolutely will not work.
Anyway, when and if we ever get to the point of having true self-driving vehicles (which we do not have in any way, shape, or form), I would wager that laws would get passed pretty damn quick to indemnify drivers (passengers?) of all responsibility in the event of an accident during automated driving. But we are nowhere near that point yet…
The simplest solution is: if automated cars crash less than humans, they will be easier to insure.
Thus the manufacturer can just take on the liability:
http://www.bbc.com/news/technology-34475031
Another recent article claims it’s a bad idea, and that it’s better to slowly inch towards more and more autonomous driving:
http://www.roboticstrends.com/article/why_self_driving_cars_should_…
In UK law you are in control of the vehicle if you are in it with the keys. So even being drunk and asleep on the back seat means you are in control of the vehicle under the influence.
No, you are not the only one.
And I feel even more uncomfortable about a car that is “FCC-approved” but then gets a software update that changes its functionality so drastically. I was assuming those updates would also need to be “FCC-approved”, but now that we suddenly see beta features appear, I am just baffled.
(I have no idea which agency actually approves cars, hence the “FCC-approved”)
In the US, it’s a combination of the National Highway Traffic Safety Administration (relevant to this feature, as it doesn’t affect emissions) and the Environmental Protection Agency. There can also be state laws in play, which affect the legality of using such features.
(In California and other states that follow California emissions, the California Air Resources Board sets even stricter emissions limits than the EPA does (the way the laws are written, California was granted an exception such that they could set their own standards). Still, it doesn’t affect this (other than that their standards result in other automakers buying zero-emissions vehicle credits from Tesla, meaning Tesla has more funds available for R&D).)
I’ve been driving a car with adaptive cruise control for one year, and I’ve come to trust its emergency brake function after it basically saved my day *twice*.
Drivers are idiots, which is why we need this.
They are basically building in features that have been present in the BMW 3 Series (and its competitors) for years. So this isn’t anything particularly new.
In terms of the ‘beta’ aspect, this isn’t going to all Tesla drivers/cars, only those that have signed up to their beta program. From my understanding, the beta testers who were selected are also given special training on the features. Version 6.2 was released in January; this current release is v7.
This system goes beyond most lane control systems, however, in that it can work when lane markings disappear, by following the car ahead, as I understand it.
I agree, but this is already part of BMW’s system using the traffic system
http://www.bmw.com/com/en/insights/technology/connecteddrive/2013/d…
However, as I am no expert in lane control systems, it may well do it in a new and clever way which is beyond me
Edited 2015-10-15 11:17 UTC
The fact that it’s already in other cars is probably why they’re releasing it like this. There’s bound to be a host of legal issues around any kind of automation, so letting companies with bigger wallets and reputations go first in some things may be a better plan.
Odds are the software can do much more but they’re probably not ready to risk enabling those features on open roads yet.
The Tesla uses the EyeQ3 chipset from Mobileye, which can do a lot more than what Tesla are doing with it now, so it will be interesting to see if they will take full advantage of it. Tesla are the first ones to use it. BMW might be using an older version.
The chipset is clearly meant for longer-duration autonomous driving in cities and on highways. It’s fully machine-learning based.
After watching this presentation, I’m not really sure that Google’s lidar approach is the correct one, because this chipset can do things that Google’s car can’t:
https://youtu.be/0UzVBTgHqSQ?t=18m13s
It shows that computer vision is basically a solved problem. It can detect a large number of things; computational performance and camera quality are the remaining issues.
And it matches up with what Musk has said about self-driving.
Their next-generation chipset is crazy: support for 8 cameras and multiple radars and lidars, processing all that input at 30 frames per second.
I’ll take a beta autonomous car over the average driver any day!
In the event I buy a car with this tech, could I get a discount or rebate, given that I like to drive and have no use for this tech?
I think we already pay for too many things we end up not using (like the MS computer tax, useless politicians, some bank services, …).
Typical Silicon Valley nonsense of releasing something to the public, but slapping the word “beta” on it because they think it somehow absolves them of liability when it, in this case, kills somebody.
I remember my classes in AI from a LONG time ago, and the instructor posed a question about driverless cars (which this is leading to).
Imagine you are in a small two-lane tunnel, no barrier between you and the oncoming car. Suddenly an 8-year-old girl appears out of nowhere (let’s just say she was beamed there by Star Trek means, came from the future, or dropped from the ceiling). Your computer system knows that the occupants of the vehicle will be fine if it hits the child. It knows that the child will be killed. It knows that if it diverts to the wall, you, the sole occupant of the vehicle, will be killed; or if it veers into oncoming traffic, the other occupant, in a Citroën 2CV, will be instantly killed and his/her remains will never be identified in the following fire, but you will be fine… most likely…
What does the ethics model of the AI do?
Is it designed to protect the occupant over other people?
Who would be liable for this programming?
In that example I would maintain lane position and attempt emergency braking, pedestrian be damned.
At the end of the day, she is in your path of travel and should not be. The people in the other lane are innocent, and in some cases so are you.
Part of the reason I say this is that I was driving in a nearby city, minding my own business, and someone jumped out in front of me from between 2 cars, where it was difficult to see them enter the roadway, and then proceeded not to even look at me after I had to slam on my brakes and honk at them.
Exactly the same thing happened to me at the same section of town another time. I was talking to a friend once and he mentioned he ran into the same thing as well.
There is a soup kitchen right there, and poor/homeless people will intentionally enter traffic without warning, trying to get hit for insurance reasons. By standing between a couple of vans before jumping out into the lane, they prevent drivers from seeing them clearly in advance.
Now, if in both of these situations the AI in my car had decided to plow me (and my friend) into a fixed object or oncoming traffic, I would be fucking pissed. “Pedestrians always have the right of way”, blah blah, but that doesn’t entitle anyone to jump out from parked cars into moving traffic on purpose.
Even if the person doing this is 8, it’s a tragic mistake, but it’s still their mistake at the end of the day, and I think the car should keep lane position or risk significantly compounding the problem. If you plow into the passenger car, how do you know it doesn’t contain a whole family? Is the car behind them going to be able to perform emergency braking in time not to rear-end the car you just plowed into head on?
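As a sketch, the keep-your-lane policy argued for above (never swerve into another lane; brake as hard as possible in-lane) could look like the following. The `Obstacle` type and its fields are hypothetical illustrations, not anything from a real autonomous-driving stack:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    in_current_lane: bool   # is the obstacle blocking our own lane?
    distance_m: float       # distance ahead, metres

def respond(obstacles: list[Obstacle]) -> str:
    """Emergency-brake if anything blocks our lane, otherwise continue.
    Swerving is never an option, so occupants of adjacent lanes are
    never endangered by our evasive action."""
    if any(o.in_current_lane for o in obstacles):
        return "emergency_brake"   # stay in lane, maximum braking
    return "continue"

print(respond([Obstacle(True, 12.0)]))   # emergency_brake
print(respond([Obstacle(False, 5.0)]))   # continue
```

The design choice is exactly the one the comment defends: the car compounds no one else's risk, even when braking alone cannot avoid the collision.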
I think these examples are relatively few but I can think of a couple scenarios I was in where I would seriously challenge AI to do a better job.
1. I was driving at a brisk pace at night and a line of deer ran out in front of my car. I had no chance of stopping in time, but I saw a gap between 2 of them, floored it, and squeezed my car through the gap.
2. I was in bumper-to-bumper 40 MPH traffic, and a car on the far side of a parked box truck was trying to cross the road with no visibility of oncoming traffic, instead of just using the traffic light. He pulled out into my path and I had zero chance of avoiding an accident, so I slammed my car into the front quarter panel of his car instead of his driver’s door, because that’s about all I could do. An AI might have improved on my reaction time by 300 ms or so, but I’m not sure it would have made the same call to avoid injury to the driver.
3. A kid on a bicycle was riding down a steep hill, aiming for the sidewalk next to me. His intention was not to cross my path, but I moved over to the turn lane just in case. He hit the curb, flew over his handlebars, and landed in the lane I had just been driving in. An AI would likely have hit that kid.
4. I was trying to get up a steep hill and the roads were very slippery. I got a running start but didn’t make it all the way up. At first I stopped moving forward, then I started sliding backwards with my foot on the brake. The road was banked, and if I had sat there with my foot on the brake I was going into the ditch. I e-braked to spin the car around and drove safely down the hill. An AI would have had my car in that ditch.
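On the 300 ms reaction-time point in example 2, it’s worth quantifying what that actually buys at 40 MPH. A quick sketch (the speed and the ~300 ms figure come from the example; the MPH-to-m/s conversion factor is standard):

```python
MPH_TO_MS = 0.44704            # metres per second per mile-per-hour
speed_ms = 40 * MPH_TO_MS      # 40 MPH from example 2 -> ~17.9 m/s
reaction_gain_s = 0.3          # the ~300 ms an AI might shave off

distance_saved = speed_ms * reaction_gain_s
print(f"{distance_saved:.1f} m")  # ~5.4 m of extra stopping room
```

A bit over five metres of extra stopping room: enough to matter in some crashes, but no substitute for the judgment calls described above.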
Across my personal driving history, I can think of just one case where an AI would have avoided an accident I didn’t, and a handful where the opposite would be true, despite the fact that I often drove at extremely reckless speeds.