Lei Feng network note: this article is by Huo Ju, from his public account "false reason", and is republished here with the author's authorization.
The Tesla owner's death in May, caused by the car's "Autopilot" system, was only disclosed because of the National Highway Traffic Safety Administration (NHTSA) investigation opened on June 30, and it then became a global news story. By now everyone has seen the accounts of the accident and the technical issues involved, so I won't repeat what the media have already reported. I just want to give my view of the matter. This accident does not involve only one company and one technology; it also involves the relationship between technology and people, and between corporate and social responsibility.
I should declare up front that, as someone too lazy to drive, I have no prejudice against autonomous driving technology; on the contrary, I look forward to it. But technically and logically, I do not believe autopilot systems can be applied to family vehicles in the short term. A reminder to fans of Tesla and Elon Musk: this article will make you uncomfortable to some degree, so you may want to close the page now and then unsubscribe from this account.
Tesla's system combines several technologies. The camera portion comes from the Israeli company Mobileye, and what Mobileye actually develops is "assisted driving", not "automatic driving". Mobileye's own company profile reads: "Mobileye is a leader in algorithms and chip systems for visual information processing, targeting the driver assistance system (DAS) market", and "Mobileye technology makes road travel safer, reduces traffic accidents, and saves lives, while having the potential to reshape the driving experience through autonomous driving." What does this tell us? That Mobileye has a clear understanding of its own technical level: it assists human drivers; it does not drive automatically. Although the company mentions the possibility of fully automatic driving in the future, it makes clear that this is not what it offers today.
The technology provided by Mobileye is the most advanced part of Tesla's "Autopilot" stack. The other components, millimeter-wave radar and ultrasonic sensors, are nothing new. For the past five years, if you bought a mid-priced car with a better options package, radar-based features such as collision warning, rear-end-collision prevention, and even automatic braking have been practically standard.
Can putting these pieces together produce a revolutionary technological innovation? They do provide some convenience, but that is still very far from real innovation. Mobileye's system is used in cooperation with nearly every major car manufacturer, from BMW and Toyota to GM, all of which have fairly mature systems. According to Mobileye's own data, its EyeQ chip shipped as a first-generation commercial product in 2007, and by March 2014 was installed on 3.3 million vehicles across 160 models from 18 automakers. This is not a new thing at all; other manufacturers simply do not dare to call it "Autopilot".
If we don't count the radio-guided cars of the 1920s, then starting in the 1980s Carnegie Mellon University (CMU) and the U.S. Department of Defense began developing computer-vision-based autonomous driving, a history of some thirty years. The two historical videos below (see the original post for links) show the results of CMU's Navlab work from 1984 to 1994 and an experiment from 1997. In the 1997 video you can see their vehicle driving fully autonomously on the freeway, looking no different from today's demonstrations. What the intervening years of technological development have contributed is roughly the growth of training data and the miniaturization of high-performance chips and GPUs. These changes do make the technology cheaper and more widespread; Mobileye's achievement in these years, for example, has mainly been making it small and inexpensive. These are all good business projects, but to call them scientific breakthroughs is still too optimistic.
Tesla itself has actually emphasized many times that this is only a driver assistance system, but it could not stop the media and fans from eagerly celebrating it, suddenly treating the application of this technology as a technological revolution. A few years ago Feng Dahui satirized the pursuit and worship of several celebrities within the tech circle by coining the word "Qiao Kaimubei", in which "Mu" refers to "Iron Man" Musk. That strange worship has now spread to ordinary users. Out of blind faith in Tesla and Musk, owners seem to believe that Tesla has mastered the most advanced automatic driving technology, that this technology is fully ready, and that it has a lower error rate than human driving, so they really did start letting the system drive on its own without touching the steering wheel. On YouTube you can find so-called "testing" videos of enthusiastic owners taking their hands off the wheel to eat, shave, and even play games on their phones. Tesla clearly knew this was going to be a problem. It tried to upgrade the system and add a series of restrictions to Autopilot mode, for example capping the speed at 45 MPH, but when owners protested and even threatened to sue, it gave up in the face of the strong opposition. This is very likely another example of a company being held hostage by its users.
If you have a basic understanding of computer vision and machine learning, you will understand why these systems can only assist. We can use the earlier match between AlphaGo and Lee Sedol to help explain. The two cases differ in some ways, so the comparison is not exact, but it roughly illustrates the problem. In that match, the computer's input and output were still handled by people: a person placed the stones for AlphaGo, and the board state was fed back into the system. This gave AlphaGo an interference-free environment with clean, accurate input data, so the system's stability was high. In a car, the input comes from cameras, radar, and sensors. Setting aside the accuracy of the AI itself and speaking only of computer vision: even in a relatively stable test environment, with ample computing resources and time, accuracy is still below 100%. In a car, where computing power is limited, timing requirements are extreme, and road conditions change constantly, accuracy drops further. Both millimeter-wave and laser radar have their own operating limits and cannot always be correct under weather, rain, snow, fallen leaves, dust, and so on.
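To make the point concrete, here is a toy sketch, entirely hypothetical and nothing like a real perception stack, of how sensor noise alone erodes the accuracy of even a trivially simple detector:

```python
import random

random.seed(0)

def classify(contrast):
    # Toy "obstacle detector": flags an obstacle when measured contrast
    # exceeds a fixed threshold. Real perception systems are far more
    # complex, but share the property that inputs near the decision
    # boundary flip class under small perturbations.
    return contrast > 0.5

def accuracy(noise_std, trials=10_000):
    correct = 0
    for _ in range(trials):
        true_contrast = random.uniform(0.0, 1.0)        # ground truth
        truth = true_contrast > 0.5
        measured = true_contrast + random.gauss(0.0, noise_std)  # sensor noise
        if classify(measured) == truth:
            correct += 1
    return correct / trials

for noise in (0.0, 0.05, 0.2, 0.5):
    print(f"noise_std={noise:.2f}  accuracy={accuracy(noise):.3f}")
```

With zero noise the toy detector is perfect; as the simulated sensor noise grows (rain, glare, dust), accuracy falls well below 100%, which is the article's point about uncontrolled road environments.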
As for the part beyond data processing, although the algorithms differ in some respects, the basic principles are the same. The technology popular in recent years is deep learning, the same family of techniques used in the AlphaGo match. If you remember the game AlphaGo lost, you should remember the feeling at the time: onlookers felt directly that "AlphaGo's level dropped abruptly, from a master beyond 9-dan to an ordinary club player, even making elementary mistakes." Yes, in such a system this will happen at some point, and every link in the chain, from image input to the final computed result, can cause it. So anyone with a reasonably clear understanding of the technology should now be afraid to shave while Tesla's so-called "Autopilot" drives.
Unlike a human, when a machine goes wrong, its mistakes are more abrupt and more extreme. The intuitive result is that it may work normally 99.9% of the time, but in the abnormal 0.1% it suddenly cannot complete even basic functions, and the error is hard to correct. This is not like the relatively gentle degradation of human performance; a normal person very rarely turns from genius to idiot in an instant. Worse, such an error exists in every copy of the same software version at the same time and is fully reproducible. One hundred human drivers will not all make the same mistake, but one hundred vehicles running the same software version will have the same problem under the same circumstances, and the impact will be enormous.
The media said that "neither the human driver nor the camera recognized the white truck." This description paints the accident too kindly. In this case the human driver was not seriously driving the vehicle, so of course he did not recognize the white truck. But could an attentive driver have recognized the situation? The incident happened in daytime with good visibility, so normally the answer should be yes. Even if he had not seen the truck in advance, there was still a chance to brake when close, or to steer toward a non-lethal target such as a barrier, instead of what the Autopilot system did after its missed detection: it did not slow down at all and drove straight into the truck. After passing under the trailer, with the driver already dead, the car still could not stop and traveled several hundred meters further.
Beyond the problems of algorithms and technology, anyone who has written software for even a few days understands that software always has bugs. There is a long road from algorithm to final product, requiring a great deal of packaging and productization work, all of which introduces bugs. These systems are also built on other existing technologies, such as operating systems and standard libraries, which have bugs of their own. When a bug appears, will the software handle the situation correctly? I'm afraid that is not something to be optimistic about either.
The current transportation system is a highly fault-tolerant system built around humans. A newcomer who is familiar with the basic traffic rules can take to the road after passing an exam, and the other people on the road will adapt to his presence based on how he drives. People tend to yield to obvious beginners, and to cars with out-of-town plates. These norms are a culture established over the long history of driving, not hard rules. Remember the joke? "If the wipers of the car next to you suddenly start on a bright sunny day, it's probably a novice who meant to signal a right turn." Low-skill or even momentarily mistaken human driving is usually predictable and perceivable by the other participants on the road, so it rarely causes terrible results. Dropping a large number of self-driving vehicles into such a fault-tolerant system is likely to be a disaster, because when the machine goes wrong, its behavior becomes very strange, and even an experienced human driver can hardly predict it. A self-driving vehicle may obey traffic regulations precisely 99% of the time and still produce unpredictable situations. The only way to solve this problem is to exclude human drivers from the traffic system entirely and rely 100% on autonomous driving, but that is plainly unviable legally and politically.
In light of all this, a fairly definite conclusion is possible: at this stage, people and machines still complement each other and catch each other's bugs. The more reliable arrangement is to use assisted driving as an emergency backup, a last remedy when the human fails. It should not be the reverse, with the human compensating for the machine's errors, because it is not easy for a person to switch from a relaxed state to a stressed one. If the machine is in charge, the person will stay relaxed for long stretches, and it is almost impossible to expect a timely, correct reaction at the instant the machine fails.
Tesla's actions in this regard, whether naming its ADAS (Advanced Driver Assistance System) "Autopilot", or its long-term indulgence and even encouragement of users' crazy behavior on YouTube, are all very irresponsible. Many people push the problem onto the user, saying the vendor clearly disclosed the risk; but before anything went wrong, everyone was cheering about how cool the technology is. I believe the manufacturer also understands that once such a system reaches users with no background knowledge, they will not care about risk warnings. The name "Autopilot" shapes the user's perception, and with the community and the media amplifying it, nothing can stop it. To put on a conspiracy-theory hat for a moment: the manufacturer may even have deliberately acquiesced in all this in order to collect more Autopilot mileage data. Before this incident, the media loved to say things like "Tesla's Autopilot mileage has risen rapidly and has already exceeded what Google accumulated over several years..." This behavior is like handing a machine gun to a monkey; God knows what they will do with it. The final impact of this incident is not yet known and awaits the findings of the NHTSA investigation; in the worst case it could affect the development of the entire industry.
What is more absurd is that after this happened, some Tesla fans portrayed the accident as a "dedication to science". This kind of beautification is very tiresome. The victim was a casualty of media-fueled fanaticism and of the mutual reinforcement within the fan community; it has nothing to do with science itself. Those hands-off-the-wheel "testing" videos contributed nothing to science; they were just a few people showing off.
Road traffic is a quite subtle activity that depends on a shared understanding among everyone on the road. Applying this technology, at its current level, on a large scale to ordinary vehicles endangers not only the owner but everyone else on the road. I drive according to the traffic rules, assuming everyone else follows roughly the same rules; who would expect a buggy vehicle in the next lane? A professional test would take extra care to prevent accidents, which is not so frightening, but ordinary users are not like that: they drag everyone on the road into this dangerous game. This is precisely why the traditional car manufacturers are not keen on this. Toyota, GM, and BMW have all invested heavily in driver assistance systems and have mature products, but with their enormous installed base of vehicles, pushing such a feature to users would carry risk on an entirely different scale.
Another line the media likes to push is that "automatic driving has a lower accident rate than human driving." For driving, comparing this kind of average is not meaningful. A considerable share of deaths in human driving accidents is due to drunk driving, inattention, speeding, and failure to wear seat belts. Take my province, Ontario, as an example. In 2014 the motor-vehicle death rate there was 3.52 per 100,000 population, which is quite low for North America; driving habits here are good and disciplined, yet these factors still account for a considerable share. In Ontario's 2014 fatal motor-vehicle accident data, 24.9% involved drunk driving, 17.9% involved inattentive driving, 17.0% involved speeding, and 12.5% involved not wearing a seat belt, and that is before counting other traffic violations. Since the United States has an even greater share of these problems, for a driver who commits none of these violations, the accident rate is far lower than the media's exaggerations suggest.
What's more, individual differences among humans are large: some people go twenty years without a ticket, while others have accident after accident. How can averaging these two groups produce a human accident rate that is meaningfully comparable to autopilot's? Here the individual matters more than the aggregate statistic. Meanwhile, auxiliary systems such as seat-belt warnings, or in-vehicle alcohol detectors for drivers with a drinking history, can prevent many potential accidents and thus save more people. Even strict traffic laws, increased enforcement, and more training are very useful; if the goal is only to reduce the accident rate, these measures are far more effective than trying to replace human drivers with automatic driving.
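As a back-of-envelope check on the Ontario figures above (treating the contributing-factor categories as non-overlapping, which real crash data are not, so this is only a rough illustration):

```python
# Ontario 2014 figures quoted above: 3.52 motor-vehicle deaths per
# 100,000 population, with contributing factors of 24.9% drunk driving,
# 17.9% inattentive driving, 17.0% speeding, 12.5% no seat belt.
# Assuming (simplistically) that these categories do not overlap:
death_rate = 3.52  # deaths per 100,000 population
avoidable_fraction = 0.249 + 0.179 + 0.170 + 0.125  # = 72.3%

remaining = death_rate * (1 - avoidable_fraction)
print(f"avoidable fraction: {avoidable_fraction:.1%}")
print(f"rate excluding those factors: {remaining:.2f} per 100,000")
```

Under that simplifying assumption, a driver who commits none of those violations faces a fatality rate of under 1 per 100,000 rather than 3.52, which is the article's point about averages hiding individual variation.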
Then consider the moral dilemmas of driverless vehicles: in certain situations the system must choose whose risk to accept, the occupants', the oncoming driver's, or nearby pedestrians'. From this angle, unmanned systems for cargo or special purposes seem more feasible. With no one in such a vehicle, if it drives off a cliff in a dangerous situation, only the vehicle and its cargo are lost, and no one is injured or killed. For autopilot systems mixed into ordinary human traffic, it is hard to find a good solution to this ethical dilemma.
After this incident, countries should gradually formulate regulations governing driverless operation. At a minimum, such vehicles should carry clear markings so that others know driverless operation is possible; the other human drivers on the road have a right to know this and to adopt their own defensive or avoidance strategies. Alternatively, an inter-vehicle communication system could be standardized and installed on both automated and human-driven cars, so that every vehicle can understand the situation of the vehicles around it. Such a networked system, privacy aside, would be very helpful to traffic in general, though it is not specific to automation; human driving would benefit from it too.
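A sketch of what such a periodic vehicle-to-vehicle broadcast might carry. The field names here are invented for illustration; real standards such as SAE J2735's Basic Safety Message define a richer, binary-encoded format:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class VehicleStatus:
    # Hypothetical broadcast fields, invented for this sketch.
    vehicle_id: str
    lat: float
    lon: float
    speed_mps: float
    heading_deg: float
    control_mode: str  # "human" or "automated", the disclosure argued for above

# Encode a status message as JSON for broadcast...
msg = VehicleStatus("demo-001", 43.6532, -79.3832, 27.5, 90.0, "automated")
payload = json.dumps(asdict(msg))
print(payload)

# ...and decode it on a receiving vehicle.
decoded = VehicleStatus(**json.loads(payload))
assert decoded == msg
```

The `control_mode` field is the key point: it would let nearby human drivers (or their cars) know that the vehicle beside them may be under automated control.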
Putting these factors together, I think that in the short term two things are possible: stricter and more advanced driver assistance systems, and a small number of purely automated special-purpose vehicles. But letting the vehicle drive itself while the person in the driver's seat shaves or eats is unlikely to become widespread. As for Volvo's claim that after 2020 it will offer automatic driving that requires no human monitoring, so far I remain skeptical.
Humanity always falls into this pattern: in one historical period technology is badly underestimated, and in the next it is overestimated. We are now in an overestimation cycle. The last one was probably the 1980s; one piece of evidence is that the science fiction of that era was set around the year 2000 and assumed humanity could establish a colony on Mars within twenty years... In reality, by 2000 we did not even have the mobile internet.
Title image: Ever wondered how Google's self-driving cars see the world? Here's all you need to know
Tesla accident latest development: the driver may be watching DVD
Ontario accident data
Lei Feng network note: when reprinting, please retain the author's complete information and do not delete any content.