Notes from August reading

Introduction – longer notes for all chapters can be found here

The introduction begins by explaining the Clever Hans Effect, in which a questioner gives subtle and unintentional cues to their subject. This is said to reflect how biases get into a system, with the people who create a system becoming entangled within it. Equally, those who study a system can become entangled within it. Even if we think we understand how a system works, we may not understand why it works as it does. This leads to the central question of the book: how is ‘intelligence’ made, and what traps can this create? When it comes to technology, we often see the myth that AI can contain human-like forms of intelligence. AI systems are often seen as a disembodied form of intelligence, removed from the material world. These two beliefs present AI as objective, rather than as something created (and informed) by humans.

It then goes on to address the question: what is AI? This definition matters because it sets the stage for how we value and govern AI systems. AI researchers often give complex explanations, describing it as being concerned with taking the most rational action in any situation. This notion of ‘the most rational action’ suggests that AI could be used for high-stakes decision making. The book argues that AI is neither artificial nor intelligent; instead, it is made up of material resources and labour, as well as historical contexts. AI is also influenced by dominant interests and modes of thinking, making it a registry of power. The industry focus on the technical capabilities of AI (or machine learning, ML) serves to avoid conversations about what AI is being optimised for, for whom, and who gets to make these decisions.

The book is framed as an atlas because atlases are both maps and representations of power, carving out territories of knowledge and (historically) colonial power. Crawford seeks to provide a warts-and-all cartography of the empire of AI, showing the powers behind these systems. The language of an atlas also reflects the God’s-eye view often taken by AI, presenting the world as legible and knowable. The chapters of the book move through many physical spaces of AI (such as mines and warehouses), showing that the vastness of AI is not reducible to a single black box: algorithms are made up as much of their material context as of their code.

It then runs through the chapters of the book. Chapter one looks at the huge environmental impact of the planetary computation networks upon which AI relies. Chapter two examines AI’s use of human labour (particularly ghost workers), with AI creating granular mechanisms for time management in the workplace. Chapter three examines the role of data within AI systems: data becomes infrastructural, rather than personal, aiding the systems of surveillance capitalism. Chapter four looks at AI as an epistemic machinery of classification, reifying normative and oppressive social inequities. Chapter five looks at AI’s use of micro-facial expressions, an essentially unreliable tool used to make judgements about people. Chapter six looks at how AI is used as a tool of state power, including AI’s military history, with military logics shaping the AI systems we see today in municipal government and the private sector. The conclusion assesses how AI functions as a structure of power which combines infrastructure, capital, and labour, widening existing asymmetries of power, with Crawford suggesting means of resistance.

AI is therefore an idea, an infrastructure, an industry, a way of exercising power, and a way of seeing. It is also a manifestation of highly organised capital, backed by vast systems of extraction, with global supply chains. Understanding what AI is requires understanding all of these components. AI is often seen as spectral or invisible but its systems are physical infrastructures which reshape the earth, equally shifting how the world is seen and understood. 

Chapter four: Classification

This chapter begins by discussing phrenology, showing that the ‘data’ used to support it was not scientific in its application but was made to fit a priori assumptions and beliefs. It was nonetheless believed to be an objective science at the time (the early 19th century) and was used to justify white supremacy for decades. This serves to show that data is not simply drawn objectively from the world but involves a process of human measurement and interpretation, foreshadowing the epistemological problems with measurement and classification in AI. Like phrenology, AI has at its core a politics of classification, seeking to render the world legible and create forms of ‘truth’ for technical systems. AI also has a tendency to become invisible within platforms and infrastructures, being taken for granted as ‘correct’. It is not enough to focus on AI bias; rather, fundamental questions should be asked about classification so that AI systems do not remain invisible: how does the classification function? What unspoken theories underlie the classification systems? And how do they interact with those being classified?

When bias in an AI system is highlighted and fixed, there is rarely any discussion of why such biases happen so often and whether this points to a fundamental issue with AI systems. The example given here is Amazon’s automated hiring tool, which systematically disadvantaged women applicants on the basis of gendered language. The system was trained on data from previous successful job applications at Amazon, so the AI only reflected issues which already existed within the company. This statistical ouroboros (in which AI trained on sexist, racist, or otherwise skewed data reproduces similar outcomes) should be challenged, rather than just fixing individual systems.
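To make the ouroboros concrete, here is a minimal sketch (not from the book) of how a model trained on historically skewed decisions absorbs and reproduces that skew. It assumes NumPy and scikit-learn are available; the data is synthetic and the feature names are purely hypothetical.

```python
# A minimal sketch of the "statistical ouroboros": a model trained on
# historically skewed decisions reproduces that skew. All data is synthetic
# and the feature names are hypothetical, chosen only for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Two features: a genuine qualification score and a proxy feature standing
# in for gendered language in a CV (1 = present, 0 = absent).
qualification = rng.normal(size=n)
gendered_language = rng.integers(0, 2, size=n)

# Historical labels: past decisions rewarded qualification but also
# systematically penalised the proxy feature.
hired = (qualification - 1.5 * gendered_language
         + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([qualification, gendered_language])
model = LogisticRegression().fit(X, hired)

# The learned coefficient on the proxy feature is strongly negative:
# the supposedly objective model has absorbed the historical penalty.
print(dict(zip(["qualification", "gendered_language"],
               model.coef_[0].round(2))))

# Two equally qualified applicants, differing only in the proxy feature,
# now receive very different predicted probabilities of being hired.
print(model.predict_proba([[1.0, 0], [1.0, 1]])[:, 1].round(2))
```

Fixing this one model (say, by deleting the proxy feature) does not answer the underlying question of why the training data encodes the penalty in the first place, which is the point the chapter presses.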

It then gives the example of IBM’s facial recognition systems, which performed poorly on non-white and non-male people. This highlighted the fundamental politics of diversity in this system and context, with anyone outside a male/female facial binary being excluded from the dataset. The system is premised on a centralised production of identity, based on currently available and observable data; any variable not immediately understandable by the system is excluded. Priority is given to what the system can do rather than to accurately depicting people. These choices are made by designers, showing the centrality of the classification process even today.

The terms ‘bias’ and ‘classification’ have long histories. Importantly here, the field of machine learning understands bias as a statistical concept, referring to samples which are not representative of the wider population. Bias in this sense is about systematic errors that occur in generalisation and prediction; it is about classification errors. This is not how bias is commonly understood, which is more along the lines of unconscious attitudes that produce behaviours at odds with one’s stated beliefs. Bias is generally seen as a human matter, raising the question of why AI systems skip over this human element in favour of a narrow, technical definition. The datasets which feed AI contain a worldview, and that worldview is replicated in supposedly objective AI systems.
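For contrast with the social sense of the word, the sketch below illustrates bias in the narrow statistical sense the field uses: an estimate becomes systematically wrong when the sample is not representative, with no reference to human attitudes at all. It assumes NumPy is available; the numbers are purely illustrative.

```python
# Bias in the narrow statistical sense: a skewed sampling procedure makes an
# estimate systematically wrong. The values here are illustrative only.
import numpy as np

rng = np.random.default_rng(1)

# A synthetic population with a true mean of roughly 50.
population = rng.normal(loc=50, scale=10, size=100_000)

# A representative random sample estimates the mean well.
random_sample = rng.choice(population, size=500)

# A skewed sampling procedure (only values above 55 are ever observed)
# produces a systematically inflated estimate: a biased estimator.
skewed_sample = rng.choice(population[population > 55], size=500)

print(round(population.mean(), 1))     # ~50.0 (true value)
print(round(random_sample.mean(), 1))  # ~50   (approximately unbiased)
print(round(skewed_sample.mean(), 1))  # ~61   (biased)
```

The gap between this technical definition and the everyday, human sense of bias is exactly the slippage the chapter draws attention to.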

It goes on to give the example of ImageNet in a similar vein, again raising questions about why people are allocated into gender or race categories without their consent. This includes categories which are offensive or insensitive, highlighting that classifications are not neutral but are driven by political ideologies. Categories are often applied by MTurk workers, who are required to make judgements at a rapid pace, showing the fallibility of these classifications even within their own boundaries. Working with shifting and inaccurate classifications raises scientific and ethical issues for AI systems. It also suggests functional issues, in that AI seeks to reduce ways of being and knowing into legible data, restricting knowledge to the logics of the systems themselves.

How should we go about addressing these issues of power masquerading as objective measurement? One of the first things to ask is who gets to choose what information is fed into AI systems for training, and on what basis. Assessments of AI systems have to go beyond coding and computation to look at where the frameworks of maths and engineering themselves cause issues, and to understand how these interact with data, workers, the environment, and users.

Chapter six: State

This chapter goes over the military and intelligence-sector history of AI, using the Snowden archive as its main contemporary source. Snowden revealed an empire of information being developed within the intelligence community. This empire of information is similar to today’s AI sector, but it did not seek to justify itself through consumer utility, instead simply intending to capture all the data it possibly could. The intelligence community had little concern for ethics or laws, seeking to retrofit laws around its technology rather than develop its technology around laws. This ‘capture all’, military-style thinking is reflected in AI today, seen in Facebook’s God’s-eye network view and Amazon’s Ring persistently opening new areas of life to surveillance and data collection. This is not a coincidence: AI research has historically received much of its funding from US intelligence agencies. The logics of the intelligence community fused with the classificatory thinking of AI, with state and corporate actors working together to produce infrastructural warfare devices. AI therefore helps to reshape the traditional roles of the state and expand older forms of geopolitical power.

While AI systems rely on multinational infrastructures, the discourse around AI is adversarial, presented as a winner-takes-all war between geopolitical powers. In America, this has recently been pushed by Ash Carter, Secretary of Defense from 2015 to 2017, who sought American dominance of AI as part of his Third Offset strategy. The Third Offset involves the military partnering with the tech industry and its extractive infrastructures. It sought, for example, to get AI systems onto battlefields even if they were incomplete, as part of Project Maven. Project Maven was specifically intended to create an AI that would allow analysts to select a target and view all of the drone footage of that target, with the ultimate goal of automating drone detection and tracking of enemies. Google initially won the Project Maven contract and kept this secret from the public and most of its employees. When employees did find out in 2018, they demanded the contract be cancelled; when Google did cancel it, Microsoft took it up instead. When these programmes are discussed, the conversation is often shifted to whether the AI can ‘kill people correctly’, rather than the ethics of it happening at all.

AI’s relationship with the state is not confined to the military, though; it is often used at the local level, with key state functions like policing and welfare being outsourced. The example given here is Peter Thiel’s company Palantir. Palantir mixes AI data analysis with generic consulting work to extract bad actors from data, following the NSA model of collecting everything and asking questions later. It is set up and operates like a Silicon Valley tech startup but provides military-style AI services to government bodies like ICE, alongside supermarket chains and local law enforcement. This is another step towards algorithmically driven decisions in law enforcement, removing human agency. It also reifies existing inequities through a feedback loop, in which those already within a criminal justice database are continually surveilled. This is tech-washed as ‘objective’ because it is done by AI. Other, similar examples from other companies are given in the extended notes.

One question that is often ignored in the outsourcing of AI systems is whether or not these technologies should be legally accountable for the harms they produce when used by governments. Generally, states try to disclaim any responsibility for the discriminatory actions of AI, saying they cannot be to blame if they do not understand it. This further removes accountability from these AI systems, with vendors and contractors having little incentive to ensure their systems avoid repeating historical oppressions.

Underlying military and AI logics of targeting is the idea of the signature: essentially, the idea that data can be considered ‘true’ if it is ‘accurate enough’. IBM was contracted during the Syrian refugee crisis to use ML platforms to detect data signatures of refugees who might be connected to jihadism. To do this it created a ‘terrorist credit score’, based on disparate measures and data on refugees, without the refugees’ knowledge. These refugees were treated as a test case for a technical system while being subjected to analysis through the lens of ‘creditworthiness’. Such systems reward predetermined forms of behaviour and exclude or penalise those who do not act normatively, acting as a moralised social classification system which again reifies the status quo. This has been used at the state level too, with two examples given from Michigan, where two AI systems were used with the intention of barring people from accessing welfare and punishing perceived fraudulence. Any data discrepancies were treated as fraud, ultimately penalising over 40,000 people inaccurately. The point made here is that military-style AI systems of classification are punitive systems based on a threat-targeting model, with no place in municipal governance (and likely not in military action either).
