Government and independent testing labs aren’t the way to certify robocar safety


The reality is often a mix. For example, in the USA, both NHTSA (a government agency) and the Insurance Institute for Highway Safety do external crash tests. This generated the “NCAP” star rating system. Europe has embraced this, and it’s become one of their key paths to safety certification. NCAP compliance is voluntary, but car vendors feel that if they don’t earn the NCAP stars, it will hurt sales significantly, so it’s not really that voluntary.

The Europeans expressed the view that the older American approach, which combines self-certification to the ISO 26262 functional safety standard, government-mandated compliance with the Federal Motor Vehicle Safety Standards, and the voluntary NCAP star ratings, is misguided. All other forms of transportation have testing and certification done by a government oversight body. In their view, the recent 737 crashes show what happens when that isn’t done right, so they want more of it, with better enforcement.

Any set of standardized tests will cover only a tiny fraction of what must be tested. It’s hard to imagine them doing otherwise. The human driver’s license is an example, especially in the USA. That test barely covers anything, and terrible teenage drivers routinely pass it. We accept this because we figure human beings inherently know how to do some things without proving it in a test, and because we don’t want to make the test that hard. In some countries, it is a great deal harder.

Consider the industry leader, Waymo. Waymo has now been testing their vehicles for 15 million miles on the roads. They’ve also recorded over 10 billion “miles” in simulation. (Unlike the real road miles, which are 99.9% boring, ordinary driving, the simulation miles tend to be special situations designed to stress the software.)

Even with all that, Waymo is barely ready to feel safe about deploying. Last year they announced they would operate a real service by the end of the year, and that they were now operating vehicles with no safety driver inside, or with a safety monitor unable to grab the wheel. In reality, they’ve done only a very limited amount of that unmanned operation, and they began only a very limited commercial service. (My theory is that the Uber accident reduced public tolerance and made Waymo decide to be more conservative.)

At best, a certification lab would confirm that the vehicle it was testing had the basics down. It would confirm that the team was not obviously negligent or missing a key ability. That’s worthwhile, but it’s not a sign the vehicle is ready to truly hit the road without supervision. Uber’s car might have failed such a test, but Uber’s car wasn’t trying to be approved for unmanned or commercial operation. It was still in the prototype testing phase, operated by an arguably negligent safety driver.

Standardized tests, particularly simulations, are a poor tool for certification. That’s because the vendors will want, and get, access to the simulation scenarios. They will want to test on them in advance. If they fail any of the standard tests, they will fix the problem. They won’t submit a car until it will get a perfect score. That forces work to improve the car, but it doesn’t give much information on how good the car is. We just know it passed the test it was designed to pass.

The only way to truly test is to try the vehicle on situations it has never seen before, situations that are hard to handle. Once a situation is used in a test, vendors will know about it and put it in their own suites of tests. This means a certifier has to keep coming up with large numbers of new, meaningful and realistic tests. There are lots of situations — when it comes to robocars, you find “corner cases” even in the middle of the block — but there are limits.

And they have to do it often. Many car vendors will be putting out new software releases frequently. During testing, they produce new builds every day. Out in production, releases will still happen at least once a month, and more often when a safety bug is discovered that needs fixing. It’s not practical to create a brand new test for every company every month. You can readily create simulation scenarios that are variations of existing ones, but you want realistic tests, things that might happen in the real world. You don’t want to fail or pass a car based on how it handles something that will never really happen.
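The kind of parameterized scenario variation described above can be sketched in a few lines. This is a toy illustration, not any vendor’s or certifier’s actual format; the scenario fields and the ranges they vary over are assumptions chosen to stay within plausible real-world values:

```python
import random
from dataclasses import dataclass

@dataclass
class CutInScenario:
    # A hypothetical "another car cuts in ahead" test scenario.
    ego_speed_mps: float      # speed of the car under test
    gap_m: float              # gap left when the other car cuts in
    cut_in_speed_mps: float   # speed of the cutting-in car
    rain: bool                # simple weather flag

def variations(base: CutInScenario, n: int, seed: int = 0) -> list:
    """Generate n randomized variants of a base scenario, each
    perturbed within plausible bounds so no run is exactly a
    test the vendor has already tuned for."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        out.append(CutInScenario(
            ego_speed_mps=base.ego_speed_mps * rng.uniform(0.8, 1.2),
            gap_m=max(2.0, base.gap_m + rng.uniform(-5.0, 5.0)),
            cut_in_speed_mps=base.cut_in_speed_mps * rng.uniform(0.9, 1.1),
            rain=rng.random() < 0.2,   # rain in ~20% of variants
        ))
    return out

base = CutInScenario(ego_speed_mps=25.0, gap_m=12.0,
                     cut_in_speed_mps=22.0, rain=False)
suite = variations(base, n=100)
```

The catch the article points out still applies: randomizing parameters of a known formula is cheap, but vendors can tune against the formula itself, and the variants are only useful to the extent they stay realistic.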

It’s also worth noting that those billions of simulation miles done by Waymo are not at all like the simulators AVL imagines, or that Nvidia just showed off. Those are full “pre-perception” simulations, which attempt to create a video-game style virtual world, then feed the car’s software fake camera images, fake LIDAR scans and fake radar data, and let the computer try to drive a virtual car. There are three different levels of this, which all attempt to simulate as much as possible, to test as much of the car as possible.

This is useful, but it’s much, much faster to do post-perception simulation. In this approach, you don’t create the visible virtual world, but you do create the underlying representation of it. The simulator knows where all the things in the world are, at an abstract level, but does not render them just to have the software try to understand them. Instead, it drops down a level and replaces the perception system with a module that simply reports the same things the perception system would have reported. For example, it might say, “It is 85% probable there is a car at this location going in this direction. It’s 90% probable there’s a pedestrian there…” and so on. What is tested is what the car does when it receives this information. This is only a partial test, but it’s so much faster that you can do vastly more testing this way. You do a mix of all the methods, but you will do most of your miles in this post-perception way, and it is my understanding that most of Waymo’s billions of miles are of this sort.
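A minimal sketch of the post-perception idea: the simulator fabricates the reports a perception stack would have produced and hands them straight to the planner, skipping rendering and sensor processing entirely. The message fields and the trivial time-to-collision planner here are illustrative assumptions, not Waymo’s or anyone’s actual interfaces:

```python
from dataclasses import dataclass

@dataclass
class PerceivedObject:
    """What a perception stack would report: no pixels or point
    clouds, just a labeled object with a confidence estimate."""
    kind: str            # "car", "pedestrian", ...
    probability: float   # 0.85 means 85% confident it exists
    distance_m: float    # distance ahead of the ego vehicle
    closing_mps: float   # closing speed (positive = approaching)

def plan(objects: list) -> str:
    """Toy planner consuming perception output directly. In
    post-perception simulation, only this layer and below get
    exercised; the perception system itself is bypassed."""
    for obj in objects:
        if obj.probability < 0.5:
            continue  # ignore low-confidence detections
        if obj.closing_mps > 0 and obj.distance_m / obj.closing_mps < 2.0:
            return "brake"   # under 2 seconds to impact
    return "cruise"

# A simulated "frame" fed to the planner with no rendering at all:
frame = [
    PerceivedObject("car", 0.85, 30.0, 20.0),        # 1.5 s away
    PerceivedObject("pedestrian", 0.90, 50.0, 1.0),  # far, slow
]
print(plan(frame))  # → "brake"
```

Because each frame is a handful of numbers rather than rendered images and point clouds, millions of such frames can be run far faster than real time, which is why most simulated miles end up in this form.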

Pre-perception, or “rendered,” simulation looks much nicer as a demo, because it offers a realistic video-game style view for people to admire. This is good in that it tests all aspects of the car, including perception, but not so good in that it really tests how well that system works on a fake world, not the real world. It’s definitely possible that a system might be very good at spotting video-game pedestrians but not as good at identifying real-world ones. In that case, you’ve actually gotten bad information from this type of simulation.

• “Vehicle in the loop,” where a real car is placed on rollers so it can spin its wheels, in a big warehouse where robots dressed up to look like cars and pedestrians move around it in the same relative ways they would in reality. Ideally, real sensors are used, though it may be necessary to fake radar and LIDAR. For cameras, you may point the cameras at a fixed screen.

These all sound good, though #3 is not possible in some cases (such as Waymo’s car, which figures out where it is based on the laser-illuminated texture of the road zooming by under its wheels, along with a number of other problematic sensors). But they are slow and expensive, and suffer from all the problems described. They must run in “real time,” which can actually be a big limitation compared to basic or post-perception simulation, which can be scaled up to do the serious amount of testing actually needed.

Outsiders don’t know how to test

It’s very different for an outside lab to run a post-perception simulation. This can only happen in close collaboration with the team. The reality is that the team knows far better than anybody else how to test its vehicle. In addition, as these teams are devising entirely new ways to be safe that never existed before, outsiders may have no way of truly evaluating their safety. Standards can only encode conventional wisdom and existing best practices. They don’t cover innovation. There is no well-understood test for the quality of innovation in safety; you need to come up with a new test.

A test as proposed by AVL, though, would not reveal enough. A car might pass such tests and still have serious problems and create too much risk. The only way to minimize that risk is with testing designed by the maker of the car, and testing on real roads in real situations. What’s needed is to make sure the makers’ interests are aligned with the public’s interest, and that there are strong incentives not to cheat, lie or be sloppy. That’s not a solved problem, but it’s where the attention belongs.