I visualised how algorithms 'see' urban environments and build detailed profiles of citizens

Machines see better than you think. The Creative Exchange/Unsplash, FAL

Algorithms, software and smart technologies have a growing presence in cities around the world. Artificial intelligence (AI), agent-based modelling, the internet of things and machine learning can be found practically everywhere now – from lampposts to garbage bins, traffic lights and cars. Not only that, these technologies are also influencing how cities are planned, guiding big decisions about new buildings, transport and infrastructure projects.

City-dwellers tend to accept the presence of these technologies passively – if they notice it at all. Yet this acceptance is punctuated by intermittent panic over privacy – take, for example, Transport for London’s latest plans to track passenger journeys across the transport network using wifi, which drew criticism from privacy experts. If there were more widespread understanding of how these technologies work, citizens would be in a better position to judge what data they’re comfortable sharing, and how to better safeguard their privacy as they navigate the city.

That’s why, in a recent study, I set out to unpack how some of the algorithms behind AI and machine learning operate, and the impact they have on familiar urban contexts such as streets, squares and cafes. But instead of trying to explain the mystifying mathematics behind how algorithms work, I started by looking at how they actually “see” the world we live in.

How algorithms ‘see’

If we really want to see what machines see, we need to force ourselves to think like computers. This means discounting everything we usually perceive with our senses and rationalise with our brains, and instead going through a step-by-step process of data acquisition. This is exactly what we tried to demonstrate with The Machine’s Eye: a simulation that shows the steps through which a hypothetical AI system “reads” a physical environment and profiles the people in it.

The simulation begins in pitch black – with no information – and gradually gathers data from a number of interconnected devices: smartphones, microphones, CCTV and other sensors. First, it detects and organises information directly from the physical environment: the dimensions of the room, the type of establishment, the number of people inside, their languages, accents, genders and types of conversation. It then cross-references these data with what can be found about each individual online, mining social media, online posts, databases and personal pages.

Our AI machine is finally able to bring all these data together into an accurate profile of a targeted individual, inferring the likelihood of personal relationships, family prospects, life expectancy, productivity or “social worth” – that is, their contribution to society in financial and social terms, within the context of this fiction.
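
To make that staged process concrete, here is a minimal Python sketch of the pipeline the simulation dramatises: acquire a structured reading of the space, mine online sources for each detected individual, then fuse the two into a profile. Every class, function and value below – EnvironmentReading, mine_online_sources, the “social worth” heuristic – is invented for illustration; it mirrors the structure of the fiction, not any real system.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three stages described above.
# All names and values are fictitious, in keeping with the simulation.

@dataclass
class EnvironmentReading:
    establishment_type: str
    occupant_count: int
    languages_heard: list

@dataclass
class Profile:
    person_id: str
    attributes: dict = field(default_factory=dict)

def acquire_environment(sensor_feeds: dict) -> EnvironmentReading:
    """Stage 1: organise raw feeds (CCTV, microphones, IoT sensors)
    into a structured description of the physical space."""
    return EnvironmentReading(
        establishment_type=sensor_feeds.get("scene_label", "unknown"),
        occupant_count=len(sensor_feeds.get("detected_people", [])),
        languages_heard=sensor_feeds.get("speech_languages", []),
    )

def mine_online_sources(person_id: str) -> dict:
    """Stage 2: cross-reference a detected individual against social
    media, public databases and personal pages (mocked here)."""
    fake_web = {"p-001": {"employer": "Acme Ltd", "followers": 312}}
    return fake_web.get(person_id, {})

def infer_profile(person_id: str, env: EnvironmentReading, online: dict) -> Profile:
    """Stage 3: fuse in-situ observations with mined data and infer
    higher-level attributes such as 'social worth'."""
    attrs = {"observed_in": env.establishment_type, **online}
    # Placeholder heuristic standing in for the statistical inference step.
    attrs["social_worth_score"] = 0.1 * online.get("followers", 0)
    return Profile(person_id, attrs)

feeds = {"scene_label": "cafe",
         "detected_people": ["p-001", "p-002"],
         "speech_languages": ["en", "it"]}
env = acquire_environment(feeds)
print(infer_profile("p-001", env, mine_online_sources("p-001")))
```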

In this simulation, all data are fictitious – the main purpose of the video is to raise awareness about what a truly connected internet of things, operated by an advanced AI system, could hypothetically do.

Scratching the surface

A growing number of companies are exploring how algorithms see the built environment. New York-based Numina, for instance, gathers data about urban flows of cars and pedestrians, showing their traces in real time. And Volvo teamed up with renowned photographer Barbara Davidson to design an automated system in which cameras and on-board sensors captured the presence of people in the busy streets of Copenhagen.

Projects like these combine machine-driven representations of data with a human perspective – but they only scratch the surface of what algorithms really see in our world. In reality, the volume of data detected by sensors and the computational power of the algorithms processing it create a much richer and more nuanced picture of the urban environment.

City Scanner is a smart city project developed by the city of Cambridge, Massachusetts, in partnership with MIT’s Senseable City Lab. Garbage trucks are equipped with a set of sensors which can detect a number of urban performance indicators, from pollution levels in certain areas to gas leaks and potholes.

The sensors send the data gathered during the trucks’ usual routes to a central server, where a series of algorithms combine, sort and analyse the large amount of data, returning an almost real-time picture of the quality of urban life. This constant monitoring of urban performance builds on previous observations, helping to continuously improve the city’s infrastructure and the policies that underpin it.
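
A hedged sketch of what that server-side step might look like: geotagged samples from truck-mounted sensors are binned into a coarse grid and averaged, yielding a rolling, per-block picture of one indicator such as particulate pollution. The grid size, function names and sample values below are assumptions for illustration, not City Scanner’s actual pipeline.

```python
from collections import defaultdict
from statistics import mean

# Illustrative aggregation step: bin geotagged readings into a coarse
# grid and average them per cell. The 0.001-degree cell size (roughly
# one city block) and the PM2.5 samples are invented for this sketch.

def grid_cell(lat: float, lon: float, size: float = 0.001) -> tuple:
    """Snap a coordinate to a grid cell roughly one block across."""
    return (round(lat / size), round(lon / size))

def aggregate(readings: list) -> dict:
    """Combine raw (lat, lon, value) samples into per-cell averages."""
    cells = defaultdict(list)
    for lat, lon, value in readings:
        cells[grid_cell(lat, lon)].append(value)
    return {cell: mean(values) for cell, values in cells.items()}

# Fictitious pollution samples from one truck's route
samples = [(42.3736, -71.1097, 9.1),
           (42.3737, -71.1099, 10.4),
           (42.3801, -71.1201, 22.8)]
print(aggregate(samples))
```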

Likewise, engineers at the courier company UPS have created the ORION (On-Road Integrated Optimisation and Navigation) algorithm. This 1,000-page work not only suggests the quickest pathway between two points in the city, but also gets smarter over time. ORION continuously learns from its own outputs, measuring how fast the trucks travel from A to B and comparing this with a statistical model, in order to improve its accuracy and performance.
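
That feedback loop can be illustrated with a toy model: keep a running travel-time estimate per leg of a route, compare it with what a truck actually measured, and nudge the estimate toward the observation. The exponential moving average below is a deliberately simple stand-in for ORION’s statistical model; the class, learning rate and values are all hypothetical.

```python
# Toy version of "learning from its own outputs": blend each observed
# travel time into a running estimate. ORION itself is far more
# sophisticated; this only sketches the shape of the feedback loop.

class TravelTimeModel:
    def __init__(self, learning_rate: float = 0.2):
        self.estimates = {}          # (origin, destination) -> minutes
        self.lr = learning_rate

    def predict(self, leg: tuple, default: float = 15.0) -> float:
        return self.estimates.get(leg, default)

    def observe(self, leg: tuple, actual_minutes: float) -> None:
        """Move the estimate a fraction of the way toward the observation."""
        current = self.predict(leg)
        self.estimates[leg] = current + self.lr * (actual_minutes - current)

model = TravelTimeModel()
for actual in (18.0, 19.5, 17.2):    # three observed runs of one leg
    model.observe(("depot", "stop_42"), actual)
# The estimate has drifted from the 15.0 default toward the observed times.
print(round(model.predict(("depot", "stop_42")), 1))
```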

Clearly, algorithms can be trained to “see” our cities in more detail, and navigate through them more efficiently, than any human. Yet, at heart, they are mathematical and statistical models, cleverly applied to real-life problems. That’s why it’s so crucial to demystify these black-box technologies: to understand them, and to make the best use of them to improve the urban environment.
