seeing what (we want) driverless cars to see (but they don’t)

Just back from the annual conference of the Royal Geographical Society with the Institute of British Geographers, which was held in Cardiff last week – and inspired enough to write my first blog post in quite a while.

[Image: Lidar scan from a driverless car]

Well actually there was a lot that was inspiring at the conference, but here I just want to focus on one thing. The Digital Geographies Working Group sponsored a pre-conference event called Navigating Data Landscapes (the conference theme was landscape). I helped to organise it with Tess Osborne from the University of Birmingham and Sam Hind from the University of Siegen. Sam's contribution was a workshop on YouTube videos showing what driverless cars 'see', and Tess's was a chance to play with a range of virtual and augmented reality devices. Mine was to screen a short film made by speculative architect Liam Young called Where the City Can't See.

At the end of the afternoon, all the participants got together for a panel discussion with James Ash, Clancy Wilmott and Emma Fraser. Here are some of the comments from various contributors that I took away from the afternoon, specifically around the visualities of Lidar scans.

Images of what driverless cars 'see' deploy a cartographic language, not least because Lidar is a technology that maps space by timing pulses of laser light and calculating distances from them. So although Lidar – like photography – is a technology that depends on light, it does not create photographs. This shows in visualisations that layer Lidar scans on top of one another, so that the viewer seems to look through one surface (or rather, through the points scattered across that surface) onto another.
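For anyone curious about the mechanics behind that claim: the core calculation Lidar performs is nothing like photography. It times how long a laser pulse takes to bounce back, halves it, and multiplies by the speed of light. A minimal sketch of that time-of-flight arithmetic (my own illustration, with names of my own invention, not any scanner's actual code):

```python
# A minimal sketch of the time-of-flight calculation behind Lidar ranging.
# This is an illustration, not any particular scanner's firmware: the sensor
# emits a laser pulse, times the reflection, and converts that delay into
# a distance.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_echo(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface, in metres.

    The pulse travels out and back, so the one-way distance is half
    the round trip.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~200 nanoseconds puts the surface ~30 metres away.
print(f"{range_from_echo(200e-9):.1f} m")  # -> 30.0 m
```

The output of a scan, in other words, is a set of distances and angles: coordinates, not pictures.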

This is a spatial sensibility and not primarily a visual one – so the Lidar images aren’t really landscape images. Sam’s preferred term for what they show is ‘terrain’, a more topographic notion.

In fact, it’s quite possible that Lidar tech doesn’t really see anything at all. Which means that driverless cars – that use Lidar technologies to calculate the distances to objects around them – don’t ‘see’ either. The videos that purport to show us what driverless cars see actually show us a highly mediated version of the data that a Lidar scan has generated. It’s a visual imaginary of what humans think a driverless car sees.
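One way to make that mediation tangible: even a toy rendering of a point cloud involves a stack of human choices. In the sketch below (entirely my own, using made-up data rather than a real scan), the viewpoint, the decision to colour points by height, and the palette are all editorial decisions layered onto data that is, in itself, just coordinates:

```python
# A hedged illustration of how much human choice sits between raw Lidar
# data and the images in these videos. The data is just points; everything
# visual below is a decision made by a person, not something the scanner 'sees'.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Fake scan: 1,000 points as (x, y, z) coordinates in metres.
points = rng.uniform([-20, -20, 0], [20, 20, 5], size=(1000, 3))

# Choice 1: a bird's-eye viewpoint (the hovering 'god trick'), not the car's.
# Choice 2: colour the points by height, with a palette picked for effect.
plt.scatter(points[:, 0], points[:, 1], c=points[:, 2], cmap="plasma", s=2)
plt.colorbar(label="height (m)")
plt.title("One of many possible renderings of the same point cloud")
plt.show()
```

Swap the palette, the viewpoint or the variable being coloured and you get an entirely different 'view' from exactly the same data.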

And humans want to think that driverless cars see, because this reassures us that we understand the principles of their operation – that they are like us, in some way.

All of which means that videos of what Lidar scanners see are not actually what Lidar scanners see; they are what humans working with Lidar scan data desire the scanner to be seeing. Thus the videos showing what cars see are actually what humans (want to) see… and another symptom of this is that so many of the images of what driverless cars see aren't from the car's point of view at all, but hover in mid-air instead. From a drone, perhaps – or perhaps just another, rather minor, god trick of seeing something from nowhere. (I'm trying to remember – surely there must be a deity of automobiles in Neil Gaiman's American Gods?)
