By Matthew Sepehr Mahmoudi
Where is “race” in critical conversations about technology?
Racial injustice in the tech industry is often met by demands for greater diversity. While diversity in STEM spaces is certainly an issue, this moment of intense datafication, known by a number of different names — “surveillance capitalism”, “technological capitalism”, or “information society” (as used by Fuchs, Buckland and Castells respectively) — raises the question: what does it mean to be represented in the digital age?
Focusing on the refugee justice implications of digital technologies developed for refugees and vulnerable migrant populations reveals the crucial nature of representation, which is complex and — for the displaced — often requires the ability to navigate visibility and invisibility for survival. This is increasingly difficult with the introduction of the “body-borders” of biometric surveillance systems, and as digitally generated data on migration and refuge becomes an inescapable reality. In this moment of datafied refuge, we must interrogate refugee technologies to reveal the changing nature of power and how bodies are sorted in the 21st century.
Stenum writes on the difficulty of benefiting from strategies such as “flexible identities” and “de-identification”, as the displaced often ‘operate […] in a space between a bodily and self-identified existence and a governmental representation.’ In other words, the institutional representation of the transient body limits her movement. This isn’t new; digital migration governance can hardly be disentangled from colonialism, with roots in the practice of counting and categorising colonial subjects. Today, mobility governance — the control of (usually racialised) bodies — continues to discipline perceived threats to the carefully curated Western liberal project, though this process is now far more disaggregated, and akin to what Cedric Robinson referred to as Racial Capitalism in his 1983 publication, Black Marxism. While researching the justice implications of digital urban technologies aimed at refugees and under-served communities, it has become increasingly apparent to me that, although the vast majority of tech initiatives — such as those designed for information provision, job-matching, identification and connectivity — produce their tools in the image of marginalised populations, a great many of these (well-intentioned or otherwise) either do not reach their intended populations or end up causing a number of harms. These harms include compromising the privacy of individuals, siphoning people off into precarious forms of labour, misleading individuals on the basis of outdated information, and disrupting (rather than augmenting) existing social relations.
In my doctoral research, I start from the assumption that this particular moment of human movement — the so-called “refugee crisis” — is unique not simply in its scale, but in how the process of movement, throughout journeys and beyond resettlement, is datafied. Historically, this is underpinned by the invention of the passport, innovations in post-9/11 biometric technologies, and the wide adoption of networked consumer products. As smartphones and digital infrastructures are increasingly used to navigate transit (‘digital passage’), for orientation, to access social, legal, and medical services, as well as to connect with jobs and housing, it is no surprise that technology corporations have become significant actors in mobility governance.
This means it has become easier for benign actors — keenly interested academics, NGOs, and rescue coordinators — as well as more sinister ones, including border security agencies seeking to intercept migrants, to track refugees. From mathematical formulas that neatly calculate the number of refugees any given European country should receive, to technical innovations that surveil mobility, there is no shortage of examples of how sociotechnical imaginaries have shaped and determined refugee destinies. Dijstelbloem and Meijer point to digital technologies as a symbolic front, or a political strategy, through which governments can claim to be proactive in their management of borders. My interest in these developments has run along two lines: first, how technological interventions such as these are seen as a way of bypassing the politics of migration and integration (meanwhile deferring power to but a few large tech corporations); and second, how the inequities and discrimination resulting from these interventions have been framed as merely technical questions.
One example is the much-celebrated ID2020 Alliance, a partnership spearheaded by Microsoft and Accenture and funded in part by the Rockefeller Foundation, to set up a blockchain-based digital identity system for displaced individuals. Data surveillance concerns aside (especially for populations who often rely on “invisibility” to escape persecution and claim asylum), it ought to cause some degree of concern that displaced populations were to be serviced by a software giant which, as of 2018, held $19.4 million in active contracts with ICE. The very same cloud infrastructure used by ID2020 to purportedly enable mobility is being used to facilitate the deportation and separation of families in the United States.
In a similar vein, the deployment of IrisGuard’s iris-scanning technology by the UNHCR and the WFP in Jordan’s Za’atari and Azraq refugee camps was celebrated as a prime example of how information and communications technologies for development (ICT4D) were being used to provide access to credit, in the literal blink of an eye. However, fears among affected communities, aid workers and academics alike about the technology’s invasive and obscure data practices risk disincentivising refugees from registering altogether upon arrival at camps, potentially barring them from access to critical services.
Finally, an extensive mapping and interrogation exercise of applications and other digital tools geared towards refugee integration in Berlin revealed startling indicative findings:
1) “refugee tech”, more often than not, does not rely on the input or involvement of displaced individuals at the iteration phase;
2) nor must such tools demonstrate, in order to receive funding, that they are in fact used by refugees;
3) communities have existing channels of both digital and non-digital communication, which are sidelined in favour of developing new and “disruptive” technologies.
These examples show that refugee and asylum-seeker communities — populations who do not possess the same level of protection as citizens — are at risk of being used as experimental sites, where the racialised politics of “integration”, combined with the dominance of the ICT4D narrative, produce iconographies of non-agentic others. As Georgiou puts it, ‘not everyone speaks and is heard in the same way; not everyone is equally represented, even if most are digitally present’ in the digital age.
Let me be clear again: this is not new. A disregard for the human cost borne by racialised non-citizens is one of the oldest features of capitalism, and explains the high degree of experimentalism and the unending search for “innovation”. Capitalism today may appear especially sophisticated, having acquired the means by which ever more aspects of human experience can be commodified and sold; yet this is merely an evolution and intensification of capitalism’s nature, especially when interrogated through the lens of the black radical tradition. This school of thought holds that capitalism is necessarily racial, with Robinson claiming that it emerged ‘not because of some conspiracy to divide workers or justify slavery and dispossession’, but because racial formations were a prerequisite to capitalism as we know it.
Foregrounding technologically mediated marginality in Robinson’s racial capitalism opens up the possibility that even the most well-intentioned interventions are contingent on racialism. It should be unsurprising, then, that ICT4D since the late 20th century — some 300+ years after the emergence of racial capitalism — is reaffirmed by the iconography of racialised, under-served others. Consequently, it has justified the expansion of Western corporate interests and economic logics; established Western technologies as synonymous with development (Granquist 2005); and justified invasive practices under the language of efficiency. To co-opt a Silicon Valley trope, racialism is ‘not a bug, it’s a feature’.
Today, under conditions of datafied refuge, the role of racial capitalism in informing logics of digital access and representation cannot be overstated. Camps and cities alike have become sites of “sandboxing”: spaces where populations are digitally enclosed, and where the realm of socioeconomic life is often subject to experimentation by tech actors. In the digital age, the digital periphery is, in other words, central to the survival of racial capitalism.
The extent to which modern technologies are built on the back of “subaltern” suffering — utilised to expand dominant hegemonic logics of the white and “desirable” way of life, and predicated upon the suppression or “domestication” of radical marginalised voices — has, until recently, been largely overlooked. In December 2018, a Medium article by Dr Julia Powles posited that over-enthusiasm for Artificial Intelligence systems, and for solving their bias-related issues, ‘[…] denies us the possibility of asking: Should we be building these systems at all?’ The consequences of diversifying engineering spaces range from rendering black and brown bodies recognisable to computer vision technologies deployed in surveillance contexts, to digitally gentrifying marginalised communities.
As researchers, we have a role to play in challenging this paradigm. Compromised as they may be by their historical role in reinforcing ethnocentric and orientalist imaginaries, the disciplines of sociology, anthropology, and ethnography at large are well placed to bring to the fore how the developments outlined above, and their attendant power relations, are expressed in the everyday lives of affected communities and contexts. Developing research that privileges othered knowledges — through methodologies along the lines of participatory action research (PAR), critical ethnographic research, and other approaches concerned with data settings rather than datasets — allows us not just to contest the representativeness of undesirable technologies, but to challenge their premises altogether. To include and give space to the epistemological standpoints of communities who are only adversely included in the very systems that target them should be a prime concern for academia in the 21st century. Collectively, we can break through the artificially imposed monolith of the digital periphery — a task paramount to confronting racial capitalism and its curious ways of innovating alterity.
Matt Mahmoudi is a PhD candidate in Development Studies at the University of Cambridge, where he is also Program Lead at TheWhistle.org, an academic spin-out developing and researching digital human rights reporting suites. As Jo Cox Scholar, his research focuses on the technological marginalisation of refugees and asylum seekers, and examines the justice implications of new digital boundaries to life in cities in an era of “datafied refuge”. Matt co-coordinated the Cambridge branch of Amnesty International’s Digital Verification Corps, and co-founded and co-produces Declarations: The Human Rights Podcast at Cambridge’s Centre of Governance & Human Rights. Matt tweets @MattMoudi