Category Archives: Selfdriving

Does the Car of the Future Require a “Talking Car” Mandate? – Competitive Enterprise Institute (blog)

http://ift.tt/2utuBuz

The Competitive Enterprise Institute is hosting an event later today on a proposal to require that all new cars in the United States be outfitted with vehicle-to-vehicle (V2V) communications technology that relies on a protocol called dedicated short-range communications (DSRC).

This Image Is Why Self-Driving Cars Come Loaded with Many Types of Sensors – MIT Technology Review

http://ift.tt/2tDJsWt

Autonomous cars often proudly claim to be fitted with a long list of sensors—cameras, ultrasound, radar, lidar, you name it. But if you’ve ever wondered why so many sensors are required, look no further than this picture.

You’re looking at what’s known in the autonomous-car industry as an “edge case”—a situation where a vehicle might have behaved unpredictably because its software processed an unusual scenario differently from the way a human would. In this example, image-recognition software applied to data from a regular camera has been fooled into thinking that images of cyclists on the back of a van are genuine human cyclists.

This particular blind spot was identified by researchers at Cognata, a firm that builds software simulators—essentially, highly detailed and programmable computer games—in which automakers can test autonomous-driving algorithms. That allows them to throw these kinds of edge cases at vehicles until they can work out how to deal with them, without risking an accident.

Most autonomous cars overcome issues like the baffling image by using different types of sensing. “Lidar cannot sense glass, radar senses mainly metal, and the camera can be fooled by images,” explains Danny Atsmon, the CEO of Cognata. “Each of the sensors used in autonomous driving comes to solve another part of the sensing challenge.” By gradually figuring out which data can be used to correctly deal with particular edge cases—either in simulation or in real life—the cars can learn to deal with more complex situations.
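The cross-check Atsmon describes can be illustrated with a toy example. The sketch below is purely illustrative (the function, thresholds, and data are invented, not Cognata's or any automaker's code): it flags a camera detection whose lidar returns are nearly coplanar, as they would be for a cyclist printed on the flat back of a van.

```python
# Illustrative only: cross-checking a camera detection against lidar depth.
# A real cyclist produces depth variation (rider, bike, gaps to background);
# a printed cyclist on a flat van door does not.

from statistics import pstdev

def is_likely_flat_decal(depths_in_box, min_depth_spread=0.5):
    """Flag a camera detection whose lidar returns are nearly coplanar.

    depths_in_box: lidar range readings (metres) sampled inside the
    detection's bounding box. A spread below min_depth_spread suggests
    the "object" is painted on a flat surface.
    """
    if len(depths_in_box) < 2:
        return False  # too little lidar evidence to overrule the camera
    return pstdev(depths_in_box) < min_depth_spread

# A decal on a van: all returns come from roughly the same plane.
print(is_likely_flat_decal([8.02, 8.05, 8.01, 8.04]))  # True
# A real cyclist: returns span rider, bike, and background.
print(is_likely_flat_decal([7.9, 8.4, 9.1, 12.6]))     # False
```

The point is not the particular threshold but the redundancy: neither sensor alone resolves the edge case, while a simple disagreement check between the two does.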

Tesla was criticized for its decision to use only radar, camera, and ultrasound sensors to provide data for its Autopilot system after one of its vehicles failed to discern a truck trailer from a bright sky and ran into it, killing the driver of the Tesla. Critics argue that lidar is an essential element in the sensor mix—it works well in low light and glare, unlike a camera, and provides more detailed data than radar or ultrasound. But as Atsmon points out, even lidar isn’t without its flaws: it can’t tell the difference between a red and green traffic signal, for example.

The safest bet, then, is for automakers to use an array of sensors, in order to build redundancy into their systems. Cyclists, at least, will thank them for it.

(Read more: “Robot Cars Can Learn to Drive without Leaving the Garage,” “Self-Driving Cars’ Spinning-Laser Problem,” “Tesla Crash Will Shape the Future of Automated Cars”)

Lyft Is Getting Serious About Self-Driving Cars, But Its Rivals Have a Big Head Start – The Drive

http://ift.tt/2eI4Ra9

Lyft has discussed plans for self-driving cars numerous times in the past, but the ride-sharing company is finally getting serious about them. On Friday, it announced plans to launch its own autonomous-car development program. It’s a major (and necessary) step toward meeting Lyft’s ambitious autonomous car goals. The company previously said the “majority” of its rides will be in self-driving cars by 2021, and it hopes to give 1 billion rides a year in autonomous electric cars by 2025. But when it comes to actual autonomous driving technology, Lyft’s rivals have a major head start.

As in most other things, Lyft’s main rival here is Uber. While it is currently fending off a legal challenge from Waymo, the larger ride-sharing company has been testing self-driving cars on public roads for some time now, even giving customers rides on a limited basis. That gives it a multi-year head start on Lyft, a powerful advantage, as is the company’s deep purse.

On the other hand, Lyft doesn’t even have any of its own self-driving cars. What Lyft does have is partnerships with other companies. General Motors and Jaguar-Land Rover have invested in Lyft, and the ride-sharing company is working with both Waymo (the former Google self-driving car project) and NuTonomy on autonomous driving. NuTonomy self-driving cars will provide the first autonomous Lyft rides as part of a pilot program in Boston set to start later this year.

However, it’s unclear how Lyft will expand upon the work being done in these partnerships. The company is building a new research facility in Palo Alto, California, and plans to hire “hundreds” of engineers, but has been vague on what work will actually be done. Luc Vincent, Lyft’s lead engineer, told The Verge that Lyft won’t develop its own lidar, a crucial sensor for self-driving cars.

Lyft also faces the same problem as other tech companies trying to break into the autonomous-car game: It doesn’t make cars. Regardless of what autonomous driving tech it develops on its own, Lyft will need to partner with an automaker to get it on the road—which is where the company’s team-ups with GM and JLR could come in handy. 

Assuming self-driving cars ever become commercially viable, and Lyft can develop its own version of the technology, the company faces the additional hurdle of competition in the marketplace. Lyft will have to distinguish its autonomous ride-sharing service from services offered by Uber, and possibly automakers like Ford and Tesla as well—both of which have expressed interest in starting their own car-sharing services.

It makes sense for Lyft to be interested in self-driving cars. Analysts predict autonomous driving will make ride sharing even more profitable, and Lyft’s main rival is already investing in the technology. But with so many other companies already working on self-driving cars, Lyft may be entering the race too late to become a leader.

Autonomous driving: Is hyperbole overwhelming reality? – Shanghai Daily (subscription)

http://ift.tt/2uQrfEN

RIDING the gold rush of artificial intelligence, self-driving cars are headed toward the wild, wild west.

Whenever my friends ask me my opinion on how close we are to autonomous driving, I wonder if they have considered all the implications of letting an unseen hand take the steering wheel.

I don’t mean to sound like a die-hard conservative, as the automotive industry itself has often been depicted, but one can never be too careful when confronting life-changing technology.

It is a bit chilling to see the bloodbath created by remotely hacked self-driving cars in the movie “Fast and Furious 8” and to witness robots outsmarting and rebelling against their creators in the HBO series “Westworld.”

Both popular dramas tell a cautionary tale about the much-hyped buzzword “artificial intelligence.” The concept is now in a complex stage of real-world evolution, and some of its developers cannot wait to declare that the revolution is “on”: hail, the world’s fourth industrial revolution!

Today’s auto industry headlines are dominated by carmakers in a rush to deploy artificial intelligence in their products or to acquire artificial intelligence startups. Silicon Valley is the new Detroit. Coding is the new word in vehicle engineering. Car owners dream about taking a nap while motoring to a destination.

“If I had a self-driving car, then why would I need my husband?” my friend Yasmin joked.

I refrained from mentioning that her job as a bank teller might also become obsolete as artificial intelligence moves into the financial realm.

My friend Andrew, who works in the venture capital business, said financing for artificial intelligence has skyrocketed in the past year. In China, where the public is particularly open-minded about technological innovation, the artificial intelligence industry is expected to be valued at 1 trillion yuan (US$147 billion) by 2030, with autonomous driving as one of its vanguard applications.

“This is one of the biggest futuristic gambles, and nobody wants to miss the party,” Andrew said. “The self-driving car is all geared up with money, and there is no going back as I see it.”

The ongoing unbridled, run-amok growth of artificial intelligence frightens many, including Elon Musk, whose Tesla prides itself on being an aggressive pioneer in autonomous driving. His Mars emigration plan is designed as a refuge in case artificial intelligence colonizes Earth.

He and the world-renowned physicist Stephen Hawking keep sounding alarm bells over artificial intelligence, suggesting that our own aggression or ignorance might spell the end of human civilization as we know it.

A quandary

Developed to help human beings surpass their own limits, artificial intelligence is destined to become a quandary. Either it makes too little progress to prove its usefulness, or it eventually takes off on its own by outpacing its creator in evolution. The invincible AI-enabled AlphaGo, which famously beat the best player in the world, is the latter scenario in a nutshell.

Though the thought of rogue robots going about killing people still seems too ridiculous to contemplate, some harsh reality begins to sink in when people see a fatal Tesla crash with a disconnected driver relying on autopilot functions.

That’s why I was really stunned to see Baidu testing its self-driving car on a busy Beijing highway. The event was live-streamed at Baidu’s artificial intelligence developer conference earlier this month. It was really courageous of Baidu founder and CEO Robin Li to ride shotgun, putting his personal safety at risk. Not to mention that Baidu didn’t have a license to do this stunt on public roads.

In response to his controversial ride, Li later said he believes robocars will eventually be safer than those driven by humans. It is a common view held by the auto industry and other developers of artificial intelligence, who believe that machines can operate rationally, free of emotional distractions and driver fatigue.

What if the autonomous cars, designed to mimic us, learn road rage and the foul language that goes with it? After all, when Microsoft created a chatbot to conduct automated discussions on Twitter, the machine learned sexist and racist comments from waggish users only hours after its launch.

“It is possible for autonomous vehicles to develop their own characters, just like their makers,” said my friend Eric, who is developing robotic wrestlers. “It all depends on their learning programs. I personally think it is inevitable that machines will develop their own consciousness and then their own emotions because we are now striving to create a new, elevated version of ourselves.”

If so, how would they react to moral conundrums? It is a classic thought experiment in ethics to have to choose one’s priorities in the face of a predictable accident, leaving the rest to god’s mercy.

Would it make us feel better if we left the decisions completely to cars? Who should bear legal responsibility for self-driving cars? Do the engineers who pre-program a car’s reactions in an accident have the right to play god? These questions demand answers before it is too late, in a new technology frontier that is, for now, largely a law unto itself.

“Do you think self-driving cars will lead us to a dystopia like in science fiction?” I asked Eric.

“Well, I guess we will get what we deserve,” Eric said. “Where there are no rules, the only boundary is our own conscience.”

Daimler, Bosch Testing Autonomous Parking at the Mercedes-Benz Museum – The Drive

http://ift.tt/2v0iKYh

Autonomous driving is often pitched as a way to make life easier and more convenient, but not everyone views driving as a chore. Parking, on the other hand, isn’t most people’s idea of fun. Consequently, autonomous parking might be easier for the public to accept. Daimler and Bosch are aiming to find out through a pilot program at the Mercedes-Benz Museum in Stuttgart, Germany. The two longtime partners have teamed up on an autonomous parking system that lets cars drive themselves into a parking garage without any human intervention.

The service is currently running as a demonstration only, although Daimler hopes to offer it to customers beginning in 2018. Right now, it works much like a car-sharing service, with users reserving a car via a smartphone app. The car drives itself out of the garage to a pickup point, and a human driver takes it from there. The driver then drops the car off at a designated spot, and it drives itself back into the parking garage.
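The hand-off sequence described above amounts to a small state machine. The toy sketch below makes that explicit; the state and event names are invented for illustration, as Daimler and Bosch have not published their implementation.

```python
# A toy state machine for the reservation-and-hand-off flow described above.
# States and events are illustrative, not Daimler/Bosch terminology.

TRANSITIONS = {
    ("parked", "reserve"): "driving_to_pickup",        # app reservation made
    ("driving_to_pickup", "arrive"): "awaiting_driver",
    ("awaiting_driver", "driver_takes_over"): "human_driven",
    ("human_driven", "drop_off"): "returning_to_garage",
    ("returning_to_garage", "arrive"): "parked",
}

def step(state, event):
    """Advance the car's state; unrecognized events leave it unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "parked"
for event in ["reserve", "arrive", "driver_takes_over", "drop_off", "arrive"]:
    state = step(state, event)
print(state)  # "parked": a full reservation cycle returns the car to the garage
```

Modeling the flow this way also makes the safety property easy to state: the car only ever moves autonomously in the two garage-facing states, never while a human is expected to be in control.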

This system relies not only on cars with some degree of autonomous driving capability, but also on sensors embedded in the parking garage itself. Provided by Bosch, the sensors monitor other vehicles in the area, and warn the car of potential obstacles.

Daimler hopes to offer autonomous parking to customers in 2018, pending regulatory approval. It’s unclear whether the feature will be available on actual customer cars, or solely on cars made available through a car-sharing service. Either way, customers may be limited to certain parking garages that are equipped with the necessary sensors.

In the future, autonomous parking could have major efficiency benefits, Daimler believes. The company claims autonomous parking could allow up to 20 percent more vehicles to fit into the same space. Since doors won’t need to be opened to let passengers out, cars will be able to park closer together. Having cars drive directly to a designated space rather than cruise around looking for one could also save fuel.

Nexar releases 55K street pics from 80 countries to spur autonomous driving – TechCrunch

http://ift.tt/2uptEoN

Nexar has released a dataset that it says is the world’s largest photo set featuring geographically diverse images for automotive tech development, for an open competition. There are 55,000 tagged photos in the set, taken from over 80 countries, in a variety of lighting and weather conditions. Each of the photos is taken from street level, using Nexar’s community-based V2V dashcam app for iOS and Android, and the goal of the release is to help drive the development of autonomous driving perception models that can handle a wide range of weather, road, and country conditions.

The release of Nexar’s image set, which it calls NEXET, is part of a challenge issued by the company to researchers to create a perception system for self-driving cars that’s able to work in a range of different settings, across geographical borders, while delivering consistent performance in all cases.

Nexar says its goal is to address a significant gap in much current research, which uses training imagery that comes either from very circumscribed real-world areas or from simulations and lab-based environments. Any software developer knows that there are issues you only come across when dealing with real-world conditions, and that’s definitely true for training autonomous driving systems, which still face a huge hurdle in terms of addressing edge cases. With an iPhone app, outlier use cases have relatively low stakes; with driving, they could mean the difference between life and death.
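The kind of cross-border consistency the challenge asks for is typically measured by holding out entire countries at evaluation time, so a model is scored on places it never saw in training. A minimal sketch of that split (the metadata field names here are invented; NEXET's actual schema may differ):

```python
# Hypothetical sketch of a held-out-country evaluation split.
# Field names ("country", "weather") are assumptions, not NEXET's schema.

def split_by_country(samples, held_out):
    """Partition tagged samples into train/test by country of capture."""
    train = [s for s in samples if s["country"] not in held_out]
    test = [s for s in samples if s["country"] in held_out]
    return train, test

samples = [
    {"id": 1, "country": "US", "weather": "clear"},
    {"id": 2, "country": "IL", "weather": "rain"},
    {"id": 3, "country": "JP", "weather": "night"},
    {"id": 4, "country": "US", "weather": "rain"},
]

train, test = split_by_country(samples, held_out={"JP"})
print(len(train), len(test))  # 3 1
```

A model that scores well on its training countries but poorly on the held-out ones exhibits exactly the geographic generalization gap Nexar is trying to close.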

Nexar’s whole goal is to build an Advanced Driver Assistance System that combines data from multiple streams via consumer devices around the world, and its competition is designed to help it further its own efforts. But the ultimate value to the industry is also apparent, and it’s rare to come across this size and type of dataset in the wild.
