Posted by Evan Ackerman, March 2012  –  Last time we put our lives in the hands of a robot car, it managed to park itself without crashing or abducting us. Robot cars also know how to drive like maniacs, and even how to powerslide. These are all very neat tricks — tricks that might save your life one day. But what’s going to happen when all cars are this talented? Efficiency. Scary, scary efficiency.

It’s not just the sensor-driven skills soon to be common in individual cars that will shape the future of automotive transportation; it’s also the ability of cars to communicate with one another, sharing constant updates about exactly where they are and where they’re going. With enough detailed information shared at a fast enough pace among all the vehicles on the road, things like traffic lights become completely redundant:

Seriously, just watching this simulation (which comes from Peter Stone, a computer scientist at the University of Texas at Austin) makes me more than a little nervous. I’d have to go through that intersection with my eyes closed and probably screaming, but on the upside, I’d get through it without stopping, saving time and gas and (as long as all the robots behave themselves) actually preventing accidents.
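
How would such an intersection actually avoid collisions? Stone’s simulations are built around reservations: each approaching car requests the space-time slots it needs to cross, and an intersection manager grants the request only if nothing conflicts. Here is a minimal Python sketch of that idea; the tile grid, timesteps, and method names are our own illustrative assumptions, not the actual system:

```python
# Toy reservation-based intersection manager. The intersection is divided
# into tiles; a reservation claims the (tile, timestep) cells a car will
# occupy. A request is granted only if no cell is already taken.

class IntersectionManager:
    def __init__(self):
        self.reserved = {}  # maps (tile, timestep) -> vehicle_id

    def request(self, vehicle_id, trajectory):
        """trajectory: list of (tile, timestep) cells the car would occupy."""
        if any(cell in self.reserved for cell in trajectory):
            return False  # conflict: slow down and re-request a later slot
        for cell in trajectory:
            self.reserved[cell] = vehicle_id
        return True

manager = IntersectionManager()
print(manager.request("car_A", [(3, 10), (4, 11)]))  # True: slots granted
print(manager.request("car_B", [(4, 11), (5, 12)]))  # False: tile 4 taken at t=11
```

A car whose request is denied doesn’t have to stop; it eases off, asks again for a slightly later slot, and usually threads through without ever standing still, which is exactly what the simulation shows.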

So, how close are we to something like this? It’s hard to say. In a lot of ways, we’re just about there: we have cars that can drive themselves just about as reliably as a human can, and many automakers are working on inter-car communication. But as we’ve discussed before, a lot of legal and social issues stand in the way of widespread adoption, and it’s going to take a concerted effort to build a framework in which progress can happen safely.

How Google’s Self-Driving Car Works

POSTED BY: Erico Guizzo

http://spectrum.ieee.org/automaton/robotics/artificial-intelligence

Once a secret project, Google’s autonomous vehicles are now out in the open, quite literally, with the company test-driving them on public roads and, on one occasion, even inviting people to ride inside one of the robot cars as it raced around a closed course.

Google’s fleet of robotic Toyota Priuses has now logged more than 190,000 miles (about 300,000 kilometers), driving in city traffic, busy highways, and mountainous roads with only occasional human intervention. The project is still far from becoming commercially viable, but Google has set up a demonstration system on its campus, using driverless golf carts, which hints at how the technology could change transportation in the near future.

Stanford University professor Sebastian Thrun, who leads the project, and Google engineer Chris Urmson discussed these and other details in a keynote speech at the IEEE International Conference on Intelligent Robots and Systems in San Francisco last month.

Thrun and Urmson explained how the car works and showed videos of the road tests, including footage of what the on-board computer “sees” [image below] and how it detects other vehicles, pedestrians, and traffic lights.

Google has released details and videos of the project before, but this is the first time I have seen some of this footage — and it’s impressive. It actually changed my views of the whole project, which I used to consider a bit far-fetched. Now I think this technology could really help to achieve some of the goals Thrun has in sight: Reducing road accidents, congestion, and fuel consumption.

Watch:

Urmson, who is the tech lead for the project, said that the “heart of our system” is a laser range finder mounted on the roof of the car. The device, a Velodyne 64-beam laser, generates a detailed 3D map of the environment. The car then combines the laser measurements with high-resolution maps of the world, producing different types of data models that allow it to drive itself while avoiding obstacles and respecting traffic laws.
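
To give a rough sense of how laser returns become a map: each beam reports a range at a known angle, which becomes a point in the car’s frame and marks a cell in an occupancy grid. The 2D toy below is our own simplification; the actual sensor works in 3D across its 64 beams, producing on the order of a million points per second:

```python
# Minimal 2D occupancy-grid sketch (illustrative resolution and size).
import math
import numpy as np

CELL_M = 0.5                                 # meters per grid cell
GRID = np.zeros((100, 100), dtype=np.uint8)  # 50 m x 50 m around the car
ORIGIN = (50, 50)                            # the car sits at the center

def mark_return(angle_rad, range_m):
    """Convert one laser return (angle, range) into an occupied cell."""
    x = range_m * math.cos(angle_rad)
    y = range_m * math.sin(angle_rad)
    row = ORIGIN[0] + int(round(y / CELL_M))
    col = ORIGIN[1] + int(round(x / CELL_M))
    if 0 <= row < GRID.shape[0] and 0 <= col < GRID.shape[1]:
        GRID[row, col] = 1                   # an obstacle was seen here

mark_return(0.0, 10.0)  # e.g., something 10 meters dead ahead
```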

The vehicle also carries other sensors: four radars, mounted on the front and rear bumpers, that allow the car to “see” far enough to deal with fast traffic on freeways; a camera, positioned near the rear-view mirror, that detects traffic lights; and a GPS receiver, inertial measurement unit, and wheel encoder, which together determine the vehicle’s location and keep track of its movements.
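
None of those position sensors is trustworthy alone: GPS is noisy, and dead reckoning from the encoder and IMU drifts. Production systems fuse them with Kalman-style filters; the toy complementary filter below, with made-up weights, conveys the basic idea:

```python
import math

def fuse_step(pose, wheel_dist, gyro_dheading, gps_xy=None, blend=0.05):
    """pose: (x, y, heading). Dead-reckon one step from the wheel encoder
    and IMU, then nudge the position toward GPS when a fix is available.
    The small blend weight reflects how noisy a single GPS fix is."""
    x, y, th = pose
    th += gyro_dheading              # IMU: measured change in heading
    x += wheel_dist * math.cos(th)   # encoder: distance rolled this step
    y += wheel_dist * math.sin(th)
    if gps_xy is not None:
        x = (1 - blend) * x + blend * gps_xy[0]
        y = (1 - blend) * y + blend * gps_xy[1]
    return (x, y, th)
```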

Here’s a slide showing the different subsystems (the camera is not shown):

Two things seem particularly interesting about Google’s approach. First, it relies on very detailed maps of the roads and terrain, something that Urmson said is essential to determine accurately where the car is. Using GPS-based techniques alone, he said, the location could be off by several meters.
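
One standard way to close that gap, and presumably the flavor of what Urmson described, is map matching: slide the current sensor snapshot around the GPS-suggested spot in the prior map and keep the offset that aligns best. A minimal sketch with toy occupancy grids and a plain correlation score, not Google’s actual method:

```python
import numpy as np

def refine_pose(prior_map, live_scan, gps_cell, search=3):
    """prior_map: occupancy grid recorded earlier. live_scan: smaller grid
    built from the current laser sweep. gps_cell: (row, col) suggested by
    GPS, possibly several cells off. Returns the nearby cell where the
    live scan lines up best with the prior map."""
    h, w = live_scan.shape
    best_score, best_cell = -np.inf, gps_cell
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = gps_cell[0] + dr, gps_cell[1] + dc
            if r < 0 or c < 0 or r + h > prior_map.shape[0] or c + w > prior_map.shape[1]:
                continue  # candidate window falls off the map
            score = np.sum(prior_map[r:r + h, c:c + w] * live_scan)
            if score > best_score:
                best_score, best_cell = score, (r, c)
    return best_cell
```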

The second thing is that, before sending the self-driving car on a road test, Google engineers drive along the route one or more times to gather data about the environment. When it’s the autonomous vehicle’s turn to drive itself, it compares the data it is acquiring to the previously recorded data, an approach that helps it differentiate pedestrians from stationary objects like poles and mailboxes.
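
The comparison itself can be as simple as a diff: anything occupied now that was free during the recording run is probably something that moves. The sketch below is our guess at the flavor of the approach, not Google’s pipeline:

```python
import numpy as np

def dynamic_cells(recorded_grid, current_grid):
    """Both arguments are occupancy grids of the same area (1 = occupied).
    Returns a mask of cells newly occupied since the recording drive."""
    return (current_grid == 1) & (recorded_grid == 0)

recorded = np.array([[1, 0, 0],
                     [1, 0, 0]])  # a pole, otherwise open road
current = np.array([[1, 0, 1],
                    [1, 0, 0]])   # the pole, plus something new
print(dynamic_cells(recorded, current))  # True only where the newcomer is
```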

The video above shows the results. At one point you can see the car stopping at an intersection. After the light turns green, the car starts a left turn, but there are pedestrians crossing. No problem: It yields to the pedestrians, and even to a guy who decides to cross at the last minute.

Sometimes, however, the car has to be more “aggressive.” When going through a four-way intersection, for example, it yields to other vehicles based on road rules; but if other cars don’t reciprocate, it advances a bit to signal its intention to the other drivers. Without programming that kind of behavior, Urmson said, it would be impossible for the robot car to drive in the real world.
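
That kind of polite assertiveness is easy to picture as a small state machine. The states, the two-second patience threshold, and the creep action below are our own illustrative assumptions about how such logic might be structured:

```python
def four_way_stop_step(state, has_right_of_way, wait_time_s, intersection_clear):
    """One decision step at a four-way stop. Returns (new_state, action)."""
    if state == "STOPPED":
        if has_right_of_way and intersection_clear:
            return "GOING", "proceed"
        if has_right_of_way and wait_time_s > 2.0:
            # nobody is yielding: inch forward to signal intent
            return "CREEPING", "advance slowly"
        return "STOPPED", "hold"
    if state == "CREEPING":
        if intersection_clear:
            return "GOING", "proceed"
        return "STOPPED", "yield again"  # another car committed first
    return "GOING", "proceed"
```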

Clearly, the Google engineers are having a lot of fun (fast forward to 13:00 to see Urmson smiling broadly as the car speeds through Google’s parking lot, the tires squealing at every turn).

But the project has a serious side. Thrun and his Google colleagues, including co-founders Larry Page and Sergey Brin, are convinced that smarter vehicles could help make transportation safer and more efficient: Cars would drive closer to each other, making better use of the 80 percent to 90 percent of empty space on roads, and also form speedy convoys on freeways. They would react faster than humans to avoid accidents, potentially saving thousands of lives. Making vehicles smarter will require lots of computing power and data, and that’s why it makes sense for Google to back the project, Thrun said in his keynote.

Urmson described another scenario they envision: Vehicles would become a shared resource, a service that people would use when needed. You’d just tap on your smartphone, and an autonomous car would show up where you are, ready to drive you anywhere. You’d just sit and relax or do work.

He said they put together a video showing a concept called Caddy Beta that demonstrates the idea of shared vehicles — in this case, a fleet of autonomous golf carts. He said the golf carts are much simpler than the Priuses in terms of on-board sensors and computers. In fact, the carts communicate with sensors in the environment to determine their location and “see” the incoming traffic.

“This is one way we see in the future this technology can . . . actually make transportation better, make it more efficient,” Urmson said.

Watch:

The Evolution of Self-Driving Vehicles

Thrun and Urmson acknowledged that there are many challenges ahead, including improving the reliability of the cars and addressing daunting legal and liability issues. But they are optimistic (Nevada recently became the first U.S. state to make self-driving cars legal). All the problems of transportation that people see as a huge waste, “we see that as an opportunity,” Thrun said.

PS: Don’t miss the first part of the keynote, in which Thrun and Urmson describe their experiences at the DARPA autonomous vehicle challenges.

Robotics Trends for 2012

POSTED BY: Erico Guizzo & Travis Deyle

March 20, 2012

What’s in store for robotics in 2012? Nearly a quarter of the year is already behind us, but we thought we’d spend some time looking at the months ahead and make some predictions about what’s going to be big in robotics.

Or at least what we think is going to be big. Lacking divine powers (or a time machine) to peek into the future, we relied on our experience as longtime observers of the robotics landscape, covering the field here on Automaton and on Hizook, another leading robotics blog. To make sure our forecasts aren’t too far off, we asked a group of roboticists with different backgrounds for their predictions for 2012. This “panel of experts” provided invaluable insight, and after we tabulated everyone’s suggestions we narrowed it all down to the final 12. It was not an easy task. So many great ideas. (Thanks, panelists!)

In making our selection, we tried to avoid the “perennial trends”—areas like environmental robotics, entertainment and toy robots, and others that are always buzzing with activity. We focused on emerging areas and we “followed the money,” looking at where funding is going. For example, the National Robotics Initiative, spearheaded by the U.S. National Science Foundation, will put a lot of resources into robots that can collaborate with people. DARPA, for its part, has multiple programs that involve manipulation and bionic devices. Europe’s Framework Programmes are funding the development of “cognitive systems and robots” that can assist people in everyday tasks. And in Asia, the decades-long funding of healthcare robots for older adults has intensified.

Again, we’re not trying to present a comprehensive survey of the state of robotics research in 2012. We’re hoping that the trends described here are helpful as a “heat map” that highlights promising areas and technologies in robotics and AI. We know that many readers will disagree with our choices. Some will feel outraged at our selections. Others will just want to learn more. In any case, we want to hear from you. Write us or leave a comment below. And as predictions go, only one thing looks certain: Robotics is going through an amazing time, and things should only get more exciting.

#1 CO-ROBOTS: ROBOTS AS CO-WORKERS AND CO-INHABITANTS

Robots typically fall somewhere on a spectrum between direct teleoperation and full autonomy. Unfortunately, teleoperation can be cumbersome, and full autonomy is often elusive. Somewhere in the middle lies a compelling trade-off, where humans and co-robots collaborate to perform practical tasks, such as delivering medication to a person (pictured below). Co-robots are at the heart of the $70 million National Robotics Initiative (NRI), and they represent a definitive step toward robots migrating out of factories and academic labs and into our everyday lives. According to the NRI, co-robots must be safe, relatively cheap, easy to use, available everywhere, and interact with humans to “leverage their relative strengths in the planning and performance of a task.” “A lot of us are turning our attention in that direction,” said a researcher from our panel of experts. (No wonder so many U.S. roboticists spent the latter portion of 2011 drafting proposals!) The program’s scope is broad, but the key aspect of co-robots is clear: Co-robots must interact with humans. So expect to see a big jump in activity in human-robot interaction in all its myriad forms throughout 2012.

A study by Georgia Tech’s Human Factors and Aging Laboratory used the PR2 robot to interact with older adults in the Aware Home. Photo: Keith Bujak

#2 3D SENSING: THE KINECT REVOLUTION CONTINUES

Last year, a curious adornment started appearing on many robots’ heads. It was Kinect, the now-popular Microsoft 3D sensor. Cheap and easy to use, Kinect made 3D mapping and motion sensing accessible, and the robotics community embraced it wholeheartedly (see one example in the photo, below). “People have been searching for a low cost alternative to laser rangefinders, and now we have one (for indoor use, at least),” one of our panelists told us, adding that she expects to see a “surge” in usage. Indeed, the Kinect 2, which may appear sometime this year, will feature higher resolution and frame rate, allowing the device, if you believe the rumors, to read lips. New types of cameras also promise to expand the possibilities of 3D sensing. So-called “computational cameras”—like the Lytro, based on technology developed at Stanford—capture both intensity and angle of light and allow for refocusing already-snapped pictures and the creation of 3D images. This new wave of 3D sensors may not only give robots better “eyes,” but they could also provide an effective way of “3D scanning” everyday objects, generating libraries that robots would access to finally understand this thing we call “the real world.” “3D sensing is already hot,” one of the panelists commented, “but with the Kinect and the next generation of similar cheap sensors, the sky is the limit.”
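
Part of what makes the Kinect such a gift to roboticists is how little math separates its depth image from a usable 3D point cloud: just the pinhole camera model. Here is a minimal sketch, using rough, commonly quoted intrinsics for the original Kinect (treat the numbers as assumptions):

```python
import numpy as np

FX = FY = 575.0         # approximate focal length, in pixels
CX, CY = 319.5, 239.5   # optical center of the 640x480 depth image

def depth_to_points(depth_m):
    """depth_m: 480x640 array of depths in meters (0 = no reading).
    Returns an Nx3 array of (x, y, z) points in the camera frame."""
    v, u = np.nonzero(depth_m)   # pixel rows and columns with valid depth
    z = depth_m[v, u]
    x = (u - CX) * z / FX        # back-project through the pinhole model
    y = (v - CY) * z / FY
    return np.column_stack((x, y, z))
```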

Microsoft’s Roborazzi robot features a Kinect for navigation and a camera for snapping pictures of people. Photo: Bill Crow/Microsoft

#3 CLOUD ROBOTICS: THE FORECAST CALLS FOR CLOUDS

Several research groups are exploring the idea of robots that rely on cloud-computing infrastructure to access vast amounts of processing power and data. This approach, which some are calling “cloud robotics,” would allow robots to offload compute-intensive tasks like image processing and voice recognition and even download new skills instantly. A lot of activity should be happening in this area this year. Or as one researcher put it to us, “the cloud will explode.” In particular, Google has a small team creating robot-friendly cloud services that, if they become popular among roboticists, could be a tectonic shift in the field (imagine every robot using a “Google Maps for Robots” for navigation). In Europe, a major project is RoboEarth, whose goal is to develop a “World Wide Web for robots,” a giant cloud-enabled database where robots can share information about objects, environments, and tasks (see photo of a test of the system, below). Many other projects are taking shape, and we expect that in 2012 “cloud robotics” will sound less like a buzzword and more like a serious research domain.
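
The programming pattern behind all of this is mundane, which is exactly the point: the robot ships a heavy job over the network and acts on the reply. Here is a sketch of that round trip; the endpoint URL and response fields are entirely hypothetical, not any real Google or RoboEarth API:

```python
import json
import urllib.request

def recognize_in_cloud(image_bytes):
    """Send a camera frame to a (hypothetical) recognition service and
    return its parsed answer, e.g. {"object": "mug", "score": 0.91}."""
    req = urllib.request.Request(
        "https://example.com/robot/recognize",  # hypothetical endpoint
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(req, timeout=5.0) as resp:
        return json.loads(resp.read())
```

The flip side is that the robot needs a fallback for when the network is slow or unreachable, which is why offloading suits perception and planning better than, say, low-level balance control.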

A robot connected to RoboEarth serves a drink to a patient during a trial of the system. Photo: RoboEarth.org/TU Eindhoven

#4 COMPLIANT ACTUATION: ROBOTS WITH A SOFT TOUCH

When robots interact with humans, safety is a key concern. Conventional position-controlled arms like the ones that dominate factories just won’t cut it. Making robots with a soft touch is key to a future where humans and robots can share spaces and collaborate closely. For this reason we expect to see numerous improvements to compliant actuation and tactile sensing technologies in 2012. Examples include better series elastic actuators (used by robots like the Meka M1 in the photo below) and tactile skin. In addition, we expect researchers to think “outside the box”—developing new, clever types of compliant systems far removed from electromechanical motors. For example, in 2011 we saw a number of soft-bodied robots that, much like their meat-bag biological counterparts, are inherently squishy. These include iRobot’s Hexapod JamBot, based on particle jamming actuators, and OtherLab’s Ant-Roach Pneubot, an inflatable robot made of fabric and pneumatic actuators. These and other projects point to a tantalizing future filled with compliant and soft-bodied robots that we expect to start taking shape in 2012.
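
The canonical example is the series elastic actuator (SEA): a spring sits between the motor and the load, so measuring the spring’s deflection gives the output force directly, and the controller can servo force rather than position. A toy control step, with illustrative constants:

```python
SPRING_K = 300.0  # spring stiffness in N/m (illustrative value)

def sea_force_step(motor_pos_m, load_pos_m, desired_force_n, gain=0.01):
    """Estimate output force from spring deflection, then command a small
    motor displacement that drives the force error toward zero."""
    measured_force = SPRING_K * (motor_pos_m - load_pos_m)
    force_error = desired_force_n - measured_force
    motor_cmd = gain * force_error  # wind up or relax the spring slightly
    return measured_force, motor_cmd
```

The appeal is that such a system fails soft: an unexpected bump simply deflects the spring instead of transmitting the motor’s full stiffness into whatever, or whomever, the robot touched.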

The Meka Robotics M1 compliant robot with custom series elastic actuators. Photo: Meka Robotics

#5 SMARTPHONE-BASED ROBOTS: THE NEW ROBOT BRAINS

Almost every robot these days needs a combination of sensors, CPU, display, and network connectivity. Smartphones and tablets offer a combination of sensors, CPU, display, and network connectivity. Do you see where this is going? At one end of the spectrum, iRobot has demonstrated a remote presence prototype robot called Ava, which uses a tablet to control its mobile base. At the other end of the spectrum, two Seattle engineers quickly raised over US $100,000 on the fundraising website Kickstarter with the promise of developing a cute little smartphone-powered robot called Romo (pictured below). These are just two examples of a trend that we believe has earth-shattering potential for robotics. Mobile devices—based on Apple’s iOS and Google’s Android—are riding an extraordinary, unprecedented wave of innovation. We think (we hope) that robotics can take advantage of this same wave. The result would be many more robots moving out of the lab and into the marketplace. “More and more smarts in cellphones, like [Apple’s voice assistant] Siri, will impact robot toys and research this year,” one of our panel members told us. Indeed, thanks to smartphones, robots will only get smarter.

Romotive, a Seattle start-up, created a tracked mobile robot powered by a smartphone. Photo: Romotive

#6 LOW-COST MANIPULATION: A ROBOT ARM YOU CAN AFFORD

With the ever-improving economics of computing and developments like the popularization of 3-D sensors, the overall cost of a robot is dropping precipitously. Except for one thing: Actuators seem to be holding up the show. This is most evident with grippers and high degree-of-freedom arms for manipulation. We’re not talking about hobby servo solutions; we’re talking about what many roboticists want. They want a powerful system for mobile manipulation: human-type form factor, compliant actuation, respectable payload (at least 5 kilograms). Oh, and did we mention cheap? Like all for less than $5000. In other words, could a new robot arm do for manipulation what the Kinect is doing for 3D sensing? An ICRA 2011 paper by Stanford researchers on a low-cost compliant manipulator suggests it is possible (system pictured below). And indeed, there is a fair amount of funding in this direction—from NSF’s NRI to DARPA’s three manipulation-centric programs (ARM-H, ARM-S, and M3). As one researcher explained to us, all this funding should lead to “interesting new hand designs and autonomous manipulation systems.”

Stanford researchers developed a low-cost compliant 7-DOF arm. Photo: Morgan Quigley

#7 SELF-DRIVING VEHICLES: COMING TO A STREET NEAR YOU

Okay, you probably won’t see autonomous vehicles driving near you anytime soon, unless you live in Silicon Valley, where Google has been extensively testing its famous self-driving Toyota Prius (shown below during a demo at TED) and now also a fleet of autonomous golf carts. But one thing’s for sure: Autonomous vehicles have proliferated in the past few years, with projects in the United States, Germany, France, Italy, the U.K., and China. Last year, Nevada became the first U.S. state to permit autonomous cars to be legally driven on public roads (though some speculate that Europe might prove more friendly to this type of vehicle than the United States). Either way, autonomous driving features are already showing up in regular mass-produced cars. Some models of the Prius now have a driving assist function that keeps the car centered in its lane, and another function can park the car all by itself. Though carmakers will insist these are not autonomous driving features, it’s clear that cars are becoming more robotic. Furthermore, it’s likely that autonomous vehicles will drive another trend as well: As one panelist explains, we should start to “map, and perhaps even instrument, our environment” to help autonomous cars and robots navigate. “This is a shift,” he says. “The emphasis used to be solely on local algorithms and computation. Folks are starting to realize that this is not the low-hanging fruit.” Driving, we’ll soon be able to say, is so 20th century.
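
Lane centering, for what it’s worth, is one of the simpler of these features to sketch: a camera-based lane detector reports the car’s lateral offset and heading error, and the controller steers against both. The gains and sign convention below are assumptions, not any carmaker’s tuning:

```python
def steering_angle(lateral_offset_m, heading_error_rad, kp=0.2, kh=0.8):
    """Positive offset means the car sits right of the lane center;
    steer left in proportion to the offset and the heading error."""
    return -(kp * lateral_offset_m + kh * heading_error_rad)

print(steering_angle(0.5, 0.0))  # drifted half a meter right: steer left
```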

Google demonstrates its self-driving car at the TED 2011 conference. Photo: Steve Jurvetson/Flickr

#8 FACTORY ROBOT HELPERS: THE FUTURE OF MANUFACTURING

Last year, an announcement from electronics manufacturing giant Foxconn took the robotics community by surprise. The Taiwanese company said it was going to add 1 million robots to its assembly lines over the next three years. One million robots is a lot of robots—in fact, it’s double the current industrial robot population. Many uncertainties remain about Foxconn’s plans, including whether it can pull it off. What’s clear, though, is that there’s a huge need for flexible, capable, safe manufacturing robots—a new generation of industrial machines very different from the big, expensive manipulators in existence. And it looks like this new generation is beginning to arrive. Examples include Kawada Industries’ Nextage, ABB’s FRIDA (pictured below), and Yaskawa’s Motoman SDA10D. But the factory robot everyone wants to see? It’s the mysterious robot that Rodney Brooks is developing at his super-secretive startup Heartland Robotics. Is 2012 the year he’ll unveil the machine? We hope so.

The ABB FRIDA concept is a 14-axis dual arm robot designed for working alongside human workers in manufacturing environments. Photo: ABB

#9 RAPID PROTOTYPING: A 3D PRINTER IN EVERY HOME

Rapid prototyping is incredibly useful; being able to quickly fabricate a part can save thousands of dollars, eliminate days of waiting, and allow you to figure out whether your neat design is indeed brilliant—or a flop. There is one rapid prototyping device (a robot in its own right) that will likely make a significant impact in 2012: MakerBot Industries’ Thing-O-Matic 3D printer (pictured below). 3D printers are now a mainstay of academic and industrial robotics labs, but they’re expensive (some cost over $20,000). The Thing-O-Matic (and its ilk) makes 3D printing available to the masses, retailing for as little as $1000. Low-cost 3D printers could very well be the next big trend in home robots. MakerBot forecast that 10,000 units would be sold in 2011, but we’re guessing they shattered their estimates: They were already ahead of schedule in March of 2011, and raised $10 million in venture capital to expand their efforts. At the forefront of the DIY “maker movement,” 3D printers like the MakerBot are fulfilling the “personal fabricator” sci-fi visions set forth in Neal Stephenson’s “The Diamond Age,” allowing people to mock up, share, and refine digital products—using websites like Shapeways and Thingiverse—for home fabrication. Our bet is that 3D printing is going to keep moving in one direction: up.

One of the open-source 3D printers created by MakerBot. Photo: Bre Pettis/Flickr

#10 UNMANNED AERIAL VEHICLES: CROWDED SKIES

Small UAVs, and in particular quadrotors, were huge in 2011. “They provide an entry path to UAVs for people who couldn’t have worked on them in the past,” one researcher told us. “I would expect interest in that to continue and grow.” Last year was just the beginning. In 2012, we expect to see additional UAV uptake by both professional and “citizen” researchers looking for inexpensive robot platforms (pictured in the photo below is a system used by ETH Zurich researchers). Consider DIY Drones, a popular website for UAV enthusiasts; it boasts more than 20,000 members who design, build, and fly their own autonomous UAVs. Buyers looking for already-assembled models have lots of options too. Walking through the mall during the holiday season, we saw literally a dozen stores selling UAVs, from Air Swimmers RC blimps (under $40) to small smartphone-controlled helicopters (about $100) to Parrot’s AR.Drone quadrotor ($300). Brisk holiday sales created thousands of new UAV enthusiasts who, bolstered by ever-expanding open software and hardware resources, will work hand-in-hand with professional researchers to unveil some amazing flying machines in 2012.

A quadrotor helps assemble a tower as part of an art project by ETH Zurich roboticists and architects. Photo: Markus Waibel

#11 TELEPRESENCE ROBOTS: YOUR AVATAR IN THE REAL WORLD

Telepresence robots—mobile machines that act as your stand-in at a remote location—first became prominent in 2010, when Silicon Valley start-up Anybots introduced one of the first commercial offerings, an alien-looking robot called QB. In 2011, more robots hit the market, including the Vgo from Vgo Communications and the Jazz from French company Gostai (pictured below). “It’s an important technology with a strong business case, and will save both time and money for business travelers,” one researcher told us. Indeed, we think that 2012 will be a milestone for telepresence robots, and robotics in general: By the end of the year, hundreds of QBs, Vgos, Jazzes, and others will be roaming around offices—a place where robots were nonexistent—all over the world. It’s a first in robot history. And in 2012 a new entrant promises to make this market even more competitive: It’s likely that Suitable Technologies, a Willow Garage spin-off, will introduce its much-awaited remote presence system as well. So expect to see more telepresence robots near you—if you don’t become one yourself.

The Jazz telepresence robot by Gostai. Photo: Gostai

#12 BIONICS: THE LINE BETWEEN HUMANS AND MACHINES GETS BLURRIER

Cyborgs and other man-machine hybrids have long captured people’s imagination. We’re still far from the technology envisioned in science fiction like “The Six Million Dollar Man” and “RoboCop,” but researchers have made significant progress in the past two years. Areas like robotic prostheses and brain-machine interfaces seem to be building lots of momentum, and we expect to see some promising milestones in 2012. In particular, exoskeletons are literally strutting out of the lab. This year, Ekso Bionics (formerly Berkeley Bionics) will begin selling its robotic suit, first to rehab clinics in the United States and Europe, hoping to have a model ready for at-home physical therapy by the middle of 2012 (see photo of a “test pilot,” below). At the same time, a DARPA-sponsored project by Johns Hopkins University and the University of Pittsburgh has been testing a brain implant that allows patients to control an advanced robotic arm with their thoughts alone. Many other groups are also working on technologies that promise to blur the line between humans and machines; it won’t happen overnight, but the promise is not just science fiction anymore—it’s real.

A “test pilot” tries the exoskeleton created by Ekso Bionics. Photo: Ekso Bionics

A version of this article appeared in the March 2012 issue of IEEE Robotics & Automation Magazine.

The authors are thankful for the timely and thoughtful feedback from a number of researchers: Raffaello D’Andrea, Tiffany Chen, Matei Ciocarlie, Steve Cousins, Aaron Edsinger, Kaijen Hsiao, Charles C. Kemp, Masaaki Kumagai, Matt Mason, Hai Nguyen, Daniela Rus, Bruno Siciliano, Stefano Stramigioli, Gaurav Sukhatme, Russ Tedrake, Andrea Thomaz, and Holly Yanco.

About the authors

Erico Guizzo (e.guizzo@ieee.org) is a senior associate editor at IEEE Spectrum, covering robotics and other topics. He’s the editor of Automaton, IEEE Spectrum’s popular robotics blog. An IEEE Member, he has a background in electrical engineering and holds a master’s degree in science writing from MIT.

Travis Deyle (tdeyle@hizook.com) is the founder of Hizook.com. He holds a PhD in electrical engineering from Georgia Tech, where he worked in the Healthcare Robotics Lab, and is currently an NSF Computing Innovation postdoctoral fellow at Duke University, working on robots, wirelessly powered sensors, and quirky actuators.
