Here's What It's Like to Ride in a Google Self-Driving Car in Austin
The first thing I noticed after T.C. Ricks turned on Google's self-driving vehicle was the sound. On top of the Lexus SUV's purring engine, a quiet rhythmic thrum came from the black, siren-shaped sensor atop the car. That sensor, along with others tacked around the vehicle, was constantly scanning our surroundings, attempting to differentiate static objects from moving ones — the pedestrian cutting behind the hedges, the bicyclist careening around a parked car.
Google expanded testing of its self-driving technology to Austin over the summer, and has since brought 14 test vehicles into town — six Lexuses and eight of its futuristic-looking prototypes — regularly sending them out in the Brentwood and Mueller neighborhoods with a “test driver” behind the wheel and another Google employee in the passenger seat.
From my spot in the backseat, I had a clear view of the passenger seat, where Sharin Kim did what she does most days: peer at her laptop displaying what the car’s sensors are seeing. At first, the screen was almost entirely black because Google has not mapped out the Alamo Drafthouse parking lot on West Anderson Lane to the detail needed for its self-driving software. The company doesn’t allow the car to run on autopilot in areas it hasn’t obsessively mapped, so Ricks drove the vehicle the old-fashioned way out of the lot.
As Ricks maneuvered the car toward the road, I watched Kim’s screen fill with blue lines until it was covered in them. White circles and squares represented static objects like trash cans and signs.
Once the car reached the end of the lot, Ricks set up the autopilot to start.
“Autodriving!” a pleasant female voice declared from somewhere near the dashboard.
And we were off.
The demonstration I observed Wednesday morning was only my second test drive in three-and-a-half years as a transportation reporter. The first was a trip I took down the country’s fastest highway on its opening day. This experience couldn’t have been more different. For one, I wasn’t doing the driving. And instead of zooming at 85 mph, Google’s car never broke 25 mph as it wound through neighborhood streets. Google staffers told me they haven’t taken the car on any Austin highways. The unpredictability of north Austin traffic, with its unique mix of distracted drivers, impatient pedestrians and oblivious bicyclists, makes for a suitable training ground for now.
The car made a left turn into the stop-and-go traffic of Anderson Lane. At first, it was hard to notice anything special taking place. I could see Ricks’ shoulders and head and the steering wheel moving in front of him. I had to lean over to observe that his hands were hovering over the wheel but not touching it.
Ninety seconds into our drive, a rectangular screen installed above the dashboard flashed red. An alarm sounded. “Manual,” the female voice said in a less optimistic tone. I saw “computer_problem” pop up on the screen. Ricks’ hands gripped the steering wheel. I craned my neck, trying to spot whatever had prompted the computer’s caution. Nothing stood out.
The mountain of data just collected will eventually reveal what prompted the car’s software to decide it was better to be safe than sorry and let a human take over. Google’s developers will pore over the data and, from that, learn how to make the car smarter.
Instances like these (it happened one more time during our 15-minute trip) are one reason Google is testing its cars in Austin, where they confront different traffic patterns and challenges than in the more than a million miles of previous testing in California.
Less than a minute later, the car was ready to take over again.
“Autodriving!” we heard again, and we were back in the future.
Over the next 10 minutes, the car made an unhurried loop up Burnet Road and down Shoal Creek Boulevard before returning to where we started. On Kim’s laptop, I watched a simulation of what the car’s sensors were seeing, with each moving object represented by a constantly shifting parallelogram. Yellow boxes were pedestrians. Cars were pink or green. Despite the stripped-down, Atari-esque display, the millions of data points behind each frame made it all seem like something out of the Jetsons.
For the most part, the car provided a smooth ride, though not a completely uneventful one. Along with the two abrupt switches back to manual mode, there was a brief, jarring moment as we approached a stop sign on Greenlawn Parkway. After beginning to slow, the car came to a jerky halt, as if the driver had slammed on the brakes. Google’s software had apparently decided a car in the next lane had veered too close and stopped us short. It was an overreaction, but not a terrible one.
Much has been made of the excessively cautious nature of Google’s test vehicles. They stop short in situations where most human drivers wouldn’t. They can take forever to enter busy intersections, particularly when negotiating an unprotected left turn. At one point in my trip, as the car waited to make a right turn, I noticed an image pop up on Kim’s screen directly in front of where our car was. The “occlusion fence,” as developers call it, was signaling that we didn’t yet have the right-of-way. After a few beats, slightly longer than I might have waited had I been behind the wheel, the “fence” disappeared and the car completed the turn. Ensuring that the “fence” stays up only as long as it’s needed is among the ways Google’s developers are using the testing in California and Austin to teach the car to better balance safety concerns with real-world road conditions.
We arrived back in the parking lot where we started. As car rides go, it was pretty uneventful.
Of course, from Google’s point of view, that’s exactly the point.
Disclosure: Google is a corporate sponsor of The Texas Tribune. A complete list of Tribune donors and sponsors can be viewed here.