For the July meeting, we decided to do something slightly different: we watched a film (or, rather, a documentary). We watched Coded Bias, the Netflix documentary on biases within artificial intelligence systems. The idea behind watching Coded Bias was two-fold: first, to get an idea of how concerns around normalised data extraction are conveyed to the public; and second, to discuss the ever-present question of ‘what next?’ once we have identified AI’s biases.
In regard to the first point, it was felt that the concerns in the documentary may be specifically American concerns. In Australia, New Zealand, and the UK, many people (even when made aware of such surveillance) do not seem particularly alarmed. When it comes to issues of the border in Australia, much of the public tends to support surveillance measures. This leads on to the second point above: what do we do with this knowledge? Coded Bias still has merit in that it opens up the debate to a wider audience (particularly younger people who may not have any prior knowledge of AI and surveillance), and it specifically gives voice to those traditionally marginalised in both governance and technology.
Critiques were raised, however, specifically around the narrative structure of the documentary. A clear protagonist and (seeming) solution are presented (namely, going to your government with sympathetic lawmakers), which rang a bit hollow for many of us. It was felt that, through presenting a neat solution, Coded Bias may not provide any critical literacy with which viewers can assess AI in their own lives beyond what is specifically shown in the documentary. Perhaps the most pressing concern, however, was the apparent acceptance of AI’s current place in society, seeking to ‘fix’ it rather than more fundamentally challenge the use of AI. This leads us to our next reading, Kate Crawford’s (2021) Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, which does just this.