No, we don’t need an app for this

Monday, April 5, 2021

A retail worker, wearing a face mask due to the COVID-19 pandemic, serves a customer as the Selfridges department store reopens for general retail in London on December 2, 2020, as England emerges from a month-long lockdown to combat the novel coronavirus pandemic. PHOTO | AFP


Covid-19 is no time to forget history. This is neither the public health sector, the humanitarian response sector, nor the technology sector’s first disaster – and we’d do well to avoid repeating the mistakes of the past.

As the world braces for the possibility of a prolonged pandemic, the governments managing cases of Covid-19 are ramping up their use of technology. Everyone from the World Health Organisation (WHO) to the UK’s National Health Service (NHS) to the Government of Pakistan is promising to roll out a mobile app to help track the virus – or, really, the people with the virus. To do what, exactly, is less clear.

As someone who has built and deployed mobile technologies in disasters for 10 years, it’s clear to me: this is the wrong idea, and the wrong approach. Most of the proposed apps use a combination of real-time location data and symptom tracking to try to calculate your risk of infection, and communicate that with the government and, sometimes, health researchers.

While a number of these technologies purport to ‘contact trace’ Covid-19, they are, as often as not, used to measure and enforce public lockdowns and quarantines. And while containment is an important goal, the countries often held up as model examples – China, South Korea, and Singapore – have a number of characteristics that suggest... well, that success in containment wasn’t just down to the app.


Take the example of South Korea, largely credited as containing Covid-19. It used a combination of mass public testing, a world-class healthcare system, and an open data portal. Oh, and a patient-tracking mobile application.

Because all of these measures were deployed together, we don’t know what difference the app made in an otherwise robust national response, and we can’t judge whether the negative consequences were worth it.

Some South Koreans expressed discomfort with the way intrusive tracking and health messaging bullied and stigmatised them, and others were harassed after their identities were deduced from supposedly anonymous public data.

Technology will play an important – and often positive – role in this disaster response: it already is. But here are a few of the ways it can go wrong and, no, it’s not (just) because of privacy:

The basic “doing new things” problem. No need to linger here – technology products not only require specific problem definitions and deployment plans, but dedicated people, resources, and testing. Organisations that don’t normally develop or maintain large-scale technology deployments – or integrate them into public service delivery – typically make a lot of mistakes at each stage. In this case, the cost of mistakes is very high.

Snake oil. There are a lot of good intentions in disaster response but, as the proverb goes, they can pave the road to hell – or at least they will without considerably more work. A lot of technology proposals are described broadly as ‘public interest’ or ‘open’, instead of doing the harder work of demonstrating practical value to scientifically critical institutions.

For example, a lot of these applications use location tracking data, under the presumption that location data is a good proxy indicator for risk of transmission. It’s the kind of thing that sounds like it should be true. Unfortunately, it isn’t. One of the effects of the urgency of disasters is that we expedite the trial process for promising new tools, like vaccines. Unlike drug testing, however, technology systems don’t have institutional checks and balances to make sure they solve the intended problem. On top of that, disaster responders mid-crisis have even less ability to separate snake oil from quality products.

“It’s a disaster.” As it says on the tin, disasters are not the ideal time to do much of anything – certainly not to try to teach everyone to calmly do something nuanced, like comparatively self-assess their health risk during a global shutdown, or, you know, download and use an app.

Equality of need, not data. There is always a tension in disaster response between delivering aid based on the greatest need versus delivering aid based on where it is easiest to reach. Technology hasn’t changed much about that, but it has made the digital divide all the more stark when it comes to using technology to monitor or mediate public services. For instance, if a government is using a smartphone application to measure movement or administer services, it can only reach the half of the world that has a smartphone, likely missing the most vulnerable communities.

Using technology to identify needs or access resources is inherently biased toward the users of that specific technology. Technology markets are so fragmented that it’s difficult to deliver relief equally, and pandemic response isn’t meant to skew towards the most fortunate. For example, Android is the world’s most widely deployed operating system, with nearly three billion users – which is still nowhere near enough to reach the world’s seven billion people.