almost a new year, almost a new look

One of life’s minor irritations is when a piece of software you use a lot gets automatically updated with a new look, and the new version makes you miss the old one. Sad as it is to admit, Evernote’s recent revamp made me realise that I was actually really quite fond of its old, dark version.

Nonetheless, I’ve been feeling for a while that the theme of my blog – it was called Elegant Grunge, kind of the black nail varnish of WordPress themes – was indeed looking a bit too grunge-y, and possibly hard to read for anyone without a Mac screen.

So here is the new-look visual/method/culture.

Happy holidays.

pre-registration for online image elicitation methods training now live!

Over the past few weeks I’ve been working with Professor Helen Lomax and Dr Nick Mahony to produce three online advanced research training modules in image elicitation methods.  Each module will run once in the next three years, funded by the Economic and Social Research Council, and pre-registration is now live here.  If you have an interest in using image elicitation methods with ‘vulnerable’ research participants, look at module 1 here; if you’re interested in the discourses of elicitation and participation more generally, take a look at module 2 here; and module 3 here looks at a couple of possible futures for digital image elicitation methods.  They’re free and open to all researchers, but are aimed primarily at PhD students.

The Architectural Review’s first online edition with our essay on digital renders

The Architectural Review has just launched its first digital edition: you can see it here.  Among many interesting essays, it has a shortened version of the paper that Monica Degen, Clare Melhuish and I published on digital architectural renders here, retitled “Interfacial relations”, with the byline “Reconceiving the computer-generated render as an interface for human interaction rather than a static object”.  Accurate, if not very snappy.

The full version (which discusses some of the conceptual implications of this sort of imagery more broadly) is:

Rose, Gillian, Monica Degen, and Clare Melhuish. “Networks, Interfaces, and Computer-Generated Images: Learning from Digital Visualisations of Urban Redevelopment Projects.” Environment and Planning D: Society and Space 32, no. 3 (2014): 386–403. doi:10.1068/d13113p.

we’re all cultural studies scholars now?

Rather belatedly, something interesting struck me about the furore in the UK a week or so ago over a tweet sent by Emily Thornberry, now the ex-shadow attorney general. For those of you with short memories, or who don’t follow the UK media, Thornberry’s tweet was criticised for being contemptuous of working-class voters, and she resigned from her shadow cabinet role after a couple of conversations with her party leader.

Here is the tweet (and notice the text as well as the photo):

[Image: Emily Thornberry’s tweet – a photograph of a van and flags, captioned “Image from #Rochester”]

In the online discussion of the tweet, loads of things were happening, of course, but there was something about the whole dynamic of the discussion that I thought was intriguing: the way the tweet swung in and out of being seen as ‘representing’ something.

On the one hand, there were a lot of claims – including by Emily Thornberry herself – that the image was meaningless.  It meant nothing; the scene had just struck her as something that could be shared on Twitter.  Now, we could discuss the conditions under which certain things become noticeable and photographable, of course, but still, given how so many photos are put onto social media in just that way, not as meaning anything, just as a sort of ‘oh look’ statement, ‘I am here’, ‘that is here’, I think her claim has some credibility.  Perhaps it really wasn’t an image that meant anything, it wasn’t symbolic, it was purely descriptive, just a picture of a van and flags, a pure “Image from #Rochester”.

Its status as pure description also prompted a lot of online discussion about how photographs become meaningful, rather than inherently carrying meaning. So on the other hand, huge amounts of online work went into interpreting the meanings the tweet implied. The flags, the van, the location: all were decoded, re-coded, explicated, interpreted, repeatedly, by very, very many people. And so were the processes through which all that interpretive work was being done, because there was also a lot of discussion about the sort of coverage given to the tweet in and by different media outlets.

Noortje Marres has been interviewed on the excellent LSE Impact blog about digital sociology, and a point she makes very well there is that the tools and techniques of  social analysis are now widely distributed among many kinds of social actors.  As she says, “social actors, practices and events are increasingly and explicitly oriented towards social analysis and are actively involved in it (in collecting and analysing data, applying metrics, eliciting feed-back, and so on)”.  She is particularly referring to the digital tools embedded in social media platforms, the internet of things, online transaction databases and the like.

But one of the things that the Thornberry tweet affair made evident to me was that the same might be said of the tools of cultural interpretation. Textual and visual analysis, and an understanding of the significance of the role of the media – in the strict sense of the term – are alive and kicking pretty much everywhere, it felt like, as the vigorous debate unfolded. The furore was a sort of mass cultural studies seminar. And if cultural studies has gone viral, what are the implications for those of us who do it – possibly more carefully, certainly much more slowly – in the academy?

playing with what a ‘photograph’ is now: time, space, object, file, icon, snap

I came across a very interesting essay by Jonathan Massey in the online architecture journal Aggregate last week, on the Norman-Foster-designed building at 30 St Mary Axe in central London, popularly known as The Gherkin (the building, not the paper).  Massey understands the building design as a sustained material engagement with various kinds of “risk imaginaries”, and it’s a very interesting argument.

Embedded in the essay, though, was an image by Bryan Scheib, which I found equally fascinating, though Massey doesn’t discuss it in detail. It’s called ‘The Gherkin’, and is one of a series of images created by Scheib called Tableau Vivants. The series is photographic, in that its images are created from photographs: from the most popular user-uploaded photographs of iconic architecture on Google Images. For each building, Scheib has (presumably) transformed them into black and white images and then superimposed them. As well as The Gherkin, I could identify the Guggenheim Museum in New York and Frank Lloyd Wright’s Fallingwater house; others I either didn’t recognise, or they’d become so blurred by the repeated overlapping photographs that the building itself was hardly visible any more.
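For what it’s worth, the basic operation is easy enough to sketch in code. The snippet below is only a toy illustration of that kind of equal-weight greyscale superimposition – it is not Scheib’s actual workflow, and the folder name, the common canvas size and the simple averaging are all my own assumptions – but it does suggest how little is needed to turn a pile of Google’s photographs into one of these ghostly composites.

```python
from pathlib import Path

import numpy as np
from PIL import Image


def superimpose(photo_dir: str, size=(1200, 1600)) -> Image.Image:
    """Average every .jpg in photo_dir as equally weighted greyscale layers."""
    paths = sorted(Path(photo_dir).glob("*.jpg"))
    if not paths:
        raise ValueError(f"no .jpg files found in {photo_dir}")

    # Accumulate the photos as floats so the running sum doesn't clip at 255.
    stack = np.zeros((size[1], size[0]), dtype=np.float64)
    for path in paths:
        img = Image.open(path).convert("L").resize(size)  # greyscale, common canvas
        stack += np.asarray(img, dtype=np.float64)

    averaged = (stack / len(paths)).astype(np.uint8)
    return Image.fromarray(averaged, mode="L")


if __name__ == "__main__":
    # "gherkin_photos" is a hypothetical folder of downloaded images.
    superimpose("gherkin_photos").save("gherkin_tableau.png")
```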

[Image: Bryan Scheib, ‘The Gherkin’, from the Tableau Vivants series]

These are complex images.  For Scheib, they hark back to the ‘tableau vivant’ photographs that were popular in the late nineteenth century as a form of historical narrative (and which were also often composed of multiple photographs collaged together). He describes his own images as similarly evoking a narrative, as the buildings are photographed again and again, from a similar angle, so that the images “embody a history of documentation and perception”.  For Scheib, then, these images are about temporality.  For Massey, instead, Scheib’s images capture something about spatiality, in particular the spatiality of urban perception, because they show the “consistency and variation in visual representation that characterizes urban icons”.

Both of these interpretations point to some of the effects of these images, for sure.  For me there are others too.  In particular, they also seem to be negotiating the status of the photograph as a particular kind of object.  It’s not at all clear to me that this image of The Gherkin, constituted as it is from photographs, can itself be described as a photograph.  On the one hand, as a collage of other photographs – and collage has always been used by photographers – it must surely be a photo.  But as both Scheib and Massey emphasise in their comments about the mutability of both the temporality and the spatiality of this image, its relation to the object it pictures is much more attenuated than a photograph generally assumes.  And while the image is based entirely on what Google can find, Scheib himself seems to be doing some work to assert the image’s status as precious art object: the transformation of the online photographs into black and white surely speaks to the history of architectural photography that played a large part in constituting Modernist buildings as cultural icons in the first place, and his webpages also show the Tableau Vivant series framed on the wall of a classic white cube gallery.


So these images are sliding about all over the place.  Slippery temporalities, multiple spatialities, embedded in Google and The Gallery… this makes them very typical of so many images now.  And it seems appropriate, therefore, that Scheib isn’t a photographer: he’s an architect.  His website carries several beautiful visualisations of his building projects, which also move apparently seamlessly between what were once distinct visual media and genres.  Proof, if any more were needed, that, if software isn’t exactly taking command, it’s certainly enabling the dissolution of many of the distinctions – between high and low, image and object, then and now – that the photographic bit of our visual culture has depended on for so long.

looking at smarter London as smooth: simplified and friction-free flow

I visited the very interesting Smarter London exhibition at the Building Centre on Store Street, London last week. The exhibition is organised by New London Architecture with a range of other partners, including the Centre for Advanced Spatial Analysis at University College London.  Several large screens hang on the black walls of a dimly lit room, all looping various text and video projections related to London as a ‘smart city’. You can see most of the videos from the exhibition and download a report by the exhibition partners here.

The exhibition is based on a fairly minimal definition of ‘smart’ – “a smart city is one that uses data, digital technologies and analytics to change the way we design, build and manage the city” – and the exhibition is correspondingly diverse, though mostly focussed on various aspects of the built environment.  So, for example, and predictably, a lot of attention is given to big data and its real-time presentation, including dashboards like the Greater London Authority’s London Dashboard (other products are available, including CASA’s City Dashboard) and animated 2D maps showing the distribution of various objects over various timescales, including, in London, buildings, Boris bikes and Blitz bombs. There are also 3D digital models of various cities, including Seattle as well as London; the London one is hooked up to what I assume was a Kinect, so when you stand in front of it you can flap your arms and bank and wheel “like a pigeon” above London (this is going to help planners a lot, apparently).

[Photo: my arm failing to be captured as a pigeon while photographing the installation]

Then there is a range of examples of mapping underground infrastructure, like cables and sewers and train tunnels, a digital model of the Hammersmith flyover generated by laser-beam measurements, visualisations generated by projects using Building Information Modelling, models of pedestrian flow across a bridge, analyses of tweets to show traffic flow… and the report has many more examples of ‘smart’ urban projects, including shopping apps and residential retrofits.

The exhibition clearly demonstrates the sheer diversity of ways in which digital technologies are shaping the design, management and experiencing of urban spaces.  In that sense, it’s a refreshing alternative to the visions of smart cities offered by big corporations like IBM, Cisco and Siemens, all of whom offer a much more integrated approach to the management of urban spaces using big data.

In other ways, however, the technologies, as they appear in this exhibition, visualised in various ways, have quite a few things in common.  One thing that struck me was how rarely pictures of people appeared in this exhibition’s images of ‘smart’.  There’s a clip from a TV news report showing construction workers using an augmented reality app on an iPad, a couple of talking-head experts, a video (screenshot below) of lovely people smiling at an animated 3D model of buildings, and the pedestrian flow model. Other than that, these images either showed people converted into data points, or were entirely people-less.

[Screenshot: an animated 3D model of buildings, from one of the exhibition videos]

The images were also all somewhat abstract. Indeed, in the image above, the glowing 3D city appears as (what looks like) a photograph of a real city fades away. Otherwise there were very few photographs, and very few pictorial digital visualisations (though the report on the exhibition has more of the latter). Instead there were maps, diagrams of different kinds, and rather ‘reduced’ images, like the one above, of urban environments in which buses and buildings became simple cuboid shapes and sewers and tube lines became, literally, lines in empty 3D space. The 3D urban models were more complex, but still very stripped back.  Even when these visualisations showed very complex assemblages of objects, their individual components were simplified.  Most were animated, too, zooming you in and out and around and through buildings.

[Photo: another of the exhibition’s pared-down urban visualisations]

This pared-down visual style is quite striking, and seems to permeate a lot of the commercial advertising for smart city technologies too. It conveys a minimalism, a feeling of efficiency and smoothness, and even a kind of pleasure in blemish-free surfaces and volumes.  There’s also an insistence on smooth flow in their animation.  The point of view in these images glides, swoops, revolves, even moves through walls, with nary a hesitation or trip – in the case of the pigeon, if ‘you’ fly too low, the ‘building’ you’re about to ‘hit’ dissolves into Minecraft-like pieces.  There’s no friction in this world, no nubbly texture or glitchy stumbling.  (Paul Dourish recently tweeted about a whole range of ‘frictions’ that this emphasis on the smoothness of digital technologies obscures – software updates and incompatibilities, dodgy wifi signals and reboots, for example – using the hashtag #truthinadvertising.)   This perfectly echoes Hito Steyerl’s comments about digital images inducing a kind of free-fall effect in their viewers (my last blog post was about her fab book The Wretched of the Screen).

And Steyerl’s question about this mobile point of view could therefore be posed to this emerging visual aesthetic of ‘smart’: is this the latest incarnation of the god-trick of presuming to see everything from everywhere?  Or does it open out the possibilities of seeing things from different points of view?  More radically, perhaps, is this aesthetic suggesting that this is no longer about human spectators at all, since, as the literature on smart/sentient/intelligent cities never tires of pointing out, none of this software and digital infrastructure is visible to the human eye anyway?  In which case, the visitors to this exhibition are as invisible in its field of vision as the people (not) in its visualisations.

visual methods news: a conference and online training

I’m very pleased to announce that the webpage for three online modules offering advanced training in Image Elicitation Methods, hosted by The Open University and funded by the Economic and Social Research Council, is now live.  Bookmark it, because the teaching team – myself, Helen Lomax and Nick Mahony – will be updating it regularly with information about module content and how to register.

The first module is on using image-elicitation methods when working with vulnerable participants, and will be offered in February 2015, with registration for that one opening in early January next year.  The second one is called ‘complicating the rhetoric of participation’, and puts the contemporary emphasis on participation in a wider political and cultural context; that one will run in April.  And in June, the third and final module looks at the future of image-elicitation methods in the context of digital visual culture.

[Screenshot of the Advanced Image Elicitation Methods modules webpage]

And in more exciting, visual-methods-related news, details about the fourth International Visual Methods conference have just been announced.  It will be held at the University of Brighton on 16-18 September 2015, and the webpage is here.  The deadline for panel proposals is 16 January 2015, and for papers the deadline is 30 January 2015.

on tinkering with digital debris: the work of Hito Steyerl

I picked up a copy of Hito Steyerl’s book The Wretched of the Screen at the ICA bookshop in London when she had an exhibition there a few months ago.  She’s a video artist based in Berlin and her work is a brilliant commentary on the mediation of different kinds of visuality by digital technologies.  The most interesting piece in the exhibition for me was called How Not To Be Seen: A Fucking Didactic Educational .MOV File; you can see part 5 here.  It’s very articulate about the co-implication of digital cameras, image resolutions, entertainment and advertising (including, yes, architectural visualisations!).  It’s also kind of funny and downright weird in places, which disrupts the very easy binary model that so much current discussion about digital technologies and big data seems to be falling into, i.e. surveillance versus privacy, privacy versus open access, participation versus surveillance, etc etc etc.  Steyerl makes it all a bit less easy than that; her movie is in some ways actually not very didactic.

The book, which I’ve finally started reading, does a similar thing.  I haven’t read it all yet, but the opening chapters are very stimulating, especially the essay ‘In Defense of the Poor Image’.  Steyerl talks there about the millions and millions of poor-quality images that now circulate: aesthetically banal, low resolution, copied, compressed and altered, valueless debris that whizzes around the internet, “the trash that washes up on the digital economies’ shores” (page 32).  And, she says, it is precisely their compression and speed of travel that make this sort of image ideally suited to the contemporary visual economy, in which attention is minimal and novelty is all.

But Steyerl continues, exploring how their dispersal across circuits of mobility also produces networks of viewers, spectators, who can dip and shift and alter poor images to suit their own ends.  Her account doesn’t evoke a radical grassroots challenging the digital corporations, so much as a rather aimless browsing and tinkering, which may or may not produce effects, further circulations, that might, or might not, be inventive of something new.

This seems to me to be closer to what’s happening now than the us-versus-them analyses that appear so common in discussions of big data and smart cities.  If we need to learn not to see high-end glossy digital images as perfect images of perfect things – as I argue with Monica Degen and Clare Melhuish in our recent paper – maybe we also need to learn not to see poor images as too valueless to have effects.