Minutes from August Meeting

Discussion of Atlas of AI:

  • There was initially discussion of the Clever Hans example from the introduction, with divergent opinions on what it represented. On one hand, it was seen as reflecting a Bratton-style perspective on big tech as a stack for which no one person is accountable. This ignores people’s willingness to implement abhorrent features in the pursuit of profit, instead seemingly putting it down to a nebulous idea that harm just happens due to some unknowable element of the technology itself. It also ignores that the people who create these programs are often aware of their problems.
  • On the other hand, it was felt that this was not what was to be taken from the Clever Hans story. The point of the story, on this reading, is that ‘intelligence’ is not something objective, stable, or even singular. Instead, it is contextual and can exist in multiple different ways, which serves to challenge the notion of an ‘objective’ AI. This includes highlighting that the ways of knowing we gain through AI are often more about what the technology can currently do (and fitting the data around that) than about fitting the technology around the data. The world (seen through AI) thus gets framed by those who make the AI, who are often private, for-profit companies. It was also felt that chapter two addresses responsibility very directly in discussing the labour and extraction involved in the creation of both hardware and software.
  • There was then discussion of the framework used to critique AI within the book. The discussion of big tech happens in the context of a Foucauldian, will-to-know tradition. This recognises the paradox of reason: the systems we create are simultaneously liberating and oppressive. There is also a recognition here that once we create a system or way of knowing, it is very hard to simply undo it or put it back in the bottle. The problem is that these ways of knowing are premised on asymmetrical gazes, and this inevitably leads to their misapplication by both state and private actors. Ultimately this means the book is interesting in detailing the creation of scientific knowledge in AI, but it still does not provide us with an answer to ‘so what do we do next?’.
  • While this remains a good question (and one that comes up time and time again), it does not mean that we should take current systems for granted. There is still value in rearticulating them in new ways, which we would hope can both broaden readership and generate new perspectives on these issues. 
  • Despite this, there is still the question of ‘what do we do next?’, and it seems that the only obvious solution is the hope of some accountability and transparency via regulation. This is obviously not as easy as it sounds, raising issues like who would hold regulatory bodies accountable and where these meta-regulators would get their information.
  • Another point raised was that AI should not diminish on-the-ground accountability either. For example, the Oracle Advisor used by Centrelink is an AI used to make decisions about claims, but ultimately we should remember that humans have the authority and autonomy to overrule machines (in theory, if not in practice within a bureaucracy).
  • The discussion then turned towards Tencent and the CCP’s encroaching takeover of large Chinese tech companies. This has been conducted under the guise of ‘wellbeing’, meaning that the state presents itself as responsible for citizen wellbeing, including what you are allowed to do on a device and for how long. This is an interesting counterpoint to the western model, in which platforms almost seem to outstrip the state, whereas here political power makes commercial interests align with the state. This is aided by public news in China reporting on corruption within many big tech companies, effectively making it ‘true’ rather than a rumour.
  • There was then a discussion of our threshold for what machine learning should do in the first place, and how transparent this should be. This loops back to the discussion of human accountability, such as with the Oracle Advisor. If a system were developed which targeted specific people (‘if’), those people may complain, but if the response is simply ‘well, we cannot know why this is happening, it is simply the algorithm determining it’, then this would surely be seen as an issue? Yet this is the system we find ourselves in currently with platforms like Facebook. Crawford discusses black-boxing, noting that often even those who design an AI do not understand it, and it is worth asking how such an AI can have value. How can we understand the use and utility of an AI without being able to understand the AI itself?
  • At this point, the differentiation between machine intelligence and human intelligence was discussed, in light of Crawford citing Dreyfus, who effectively claims the two forms of intelligence are incommensurable. This raises issues for both the value of AI and transparency in AI, because it means that, even if we could see the entire code of an AI, we could never really understand it.
  • However, there was again divergence on this reading of Dreyfus. It was also felt that Dreyfus was saying that machine learning is narrow compared to human intelligence: it cannot know or understand things humans can, because it can only draw from a dataset and make standardised assumptions from there. This subordinates machine intelligence to human intelligence, rather than placing the two as equals that are simply mutually unintelligible. This is seen historically, in a desire for accountability and transparency from the pre-platform state around its statistics and their contexts. It is also seen today in the context of lockdowns, in which we need things which may seem old-fashioned but are distinctly human, like trust and community, to tackle the issue. This involves relying on the goodness in people, while understanding that some people will do ‘wrong’, as humans are complex; more complex than AI.
  • There was also a comparison of the domestication of data (making the world legible for AI through datafication and rendering it understandable to people through pattern recognition) with the domestication of corn. Both are human processes in which we learn how to refine the raw qualities of the world for a particular task. Corn, for example, requires specific breeding and nixtamalization to be edible by humans, while data needs to be presented in patterns which we can see for ourselves. This discussion continued in an email chain, which is posted below.