Minutes from July meeting

Discussion of Coded Bias:

  • We decided to watch Coded Bias initially given that it is a piece of public literature; it is much more accessible to a general audience than most of the reading we have done up until this point. We had hoped this would allow us to look at the broader social and pedagogical challenges of platforms. 
  • This contrasts with our own anxieties around platforms, which are themselves often anxieties about the wider social understanding of, and concern about, platforms. As always, it is easy for us to hold these concerns; the difficulty lies in the questions of ‘so what?’ and ‘what next?’. This is particularly true in education, which generally contains a normative or ethical element. While we may not expect Coded Bias to offer us any new insights, it is worth watching to see how debates are framed around topics like fear and compliance in the context of platforms. Alongside this is the idea of a paranoid pleasure: an interest in viewing how we are crushed and rendered powerless by the juggernaut of platforms. 
  • Based on the documentary, it was felt that public concern around AI and facial recognition was much more prevalent in the US than in Australia (and probably than in the UK as well, despite the UK’s presence in the documentary). The question then becomes: why do we see such a difference in levels of concern across these regions?
  • The 2019 Data61 report (from CSIRO) was then brought up. This report proposed guidelines for AI ethics, but the point raised here was that 2019 was very late to be doing so. At Deakin, the Data Cultures stream (itself part of the Science and Society Network) put out a draft response to the Data61 report, which Deakin adopted as its official response. One core element of this was to move the focus away from a mere cost-benefit analysis (which the Data61 report initially was). This reflects the relaxed attitude towards AI in Australia, in both the public and private sectors.
  • The pro-carceral history of Australia was suggested as playing a role in the acceptance of such technologies here. In general, Australians have a pro-government attitude when it comes to issues of securitisation, especially when it is governance of/against POC. The broad findings of the data on this show that the larger this governance gets (i.e. if it is done by federal bodies and is concerned with the national border), the more people support it. When it is a more local form of governance (the example given being parking tickets), people tend to have more opposition to it, but only really because of its impact on themselves. Overall knowledge of AI appeared to be fairly low as well. This makes it difficult to apply much of Coded Bias to the Australian context, given the completely different starting point we find ourselves at here.
  • Schools were then discussed, with it being said that AI and facial recognition do not come up much in schools generally. This could also be due to how schools are approached, though; the right questions may not be asked of them. 
  • In terms of the UK, it is difficult to pull apart any concern with AI or facial recognition from other, pre-existing systemic issues. The example from Coded Bias was mainly the police using facial recognition in a manner which targets POC (particularly young, male POC), but the racialisation of policing is a long-standing element of the UK’s and (in particular) London’s growth. So to focus on the facial recognition aspect may be to miss the forest for the trees when it comes to systemic political injustice. This mirrored the part of the documentary discussing AI-generated prison sentences; these were previously handed down by judges in a systemically racist manner anyway. To place the brunt of the racism on the algorithm gives the larger system too much lenience when we already know it is built on racism to begin with. 
  • While overall there seems to be a lack of knowledge and concern about AI, ML, etc. (or at least critical theory has a tendency to present it as such), this does not mean that Coded Bias has no real utility. Rather, it was still seen as useful in that it brings the topic to those who may not have knowledge about it and presents the issues in an accessible manner. This includes young people (such as school students) who are subject to AI governance (with the example given of student loan allocation in India favouring people from higher classes), but also those in the corporate world who assume algorithms are ‘objective’. Achieving this is something we keep coming back to as a goal. 
  • Another benefit of the documentary is that it actually gives voice to people from non-dominant groups, as well as people who come together as collective advocacy groups around these issues. 
  • However, Coded Bias did have a narrative structure with a clear protagonist and a resolution via campaigning for legal reform. This solution is not a bad one, and it is likely that we need some sort of collective or public interest body to take action, rather than responsibilising individuals around issues of AI ethics. However, this is not possible for everyone, and the solution of ‘greater legal oversight and regulation’ is unlikely to solve all problems (even if it is still a net positive). If this is the case, does raising awareness without an apparent and real solution simply lead to a fatalistic paranoia around technology? 
  • Even if we cannot propose a direct solution to the problem, providing people with a set of critical media literacy tools would be beneficial. It was felt that Coded Bias did not do this (again, beyond ‘go to your lawmakers, who will fix the problem’). This is effectively the question of ‘what next?’. This is a particularly pertinent question in the context of COVID-19 and schooling, with EdTech often being lauded as the saviour of schooling. There may be some overly optimistic viewpoints on EdTech that we will need to spend the next period of time addressing, before we can make substantive progress with how schools work with (large) EdTech companies. 
  • One practical issue with AI-led governance is what happens when it goes wrong. Can you sue an algorithm? Or those who make it? Or those who choose to enact it? Or is the idea that ‘the AI decided it’ used to defer all responsibility? Part of the problem is that AI is often mythologised and viewed as autonomous. It may be more effective to see AI not as making decisions but as presenting options, from which humans make choices. This attempts to centre human agency (and biases) in any AI-related discourses. This seems like a clear solution predicated on transparency and accountability, but actually enacting it remains difficult.
  • The final question discussed was: where do we accept AI? We (largely) accept the YouTube algorithm or the Spotify algorithm, for example. Where do we draw the line between this and even something like Google Maps? These are seemingly mundane uses of AI, but they have big impacts on how we navigate the world on a day-to-day basis. And this is even before we address issues of systemic injustice. So where do we draw a dividing line for appropriate AI?