For the August meeting, we returned to a book: Kate Crawford’s (2021) Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Crawford covers the whole process of AI’s creation, from the material extraction of mining, to the invisible labour done by Amazon Mechanical Turk workers, to the intermingling of the military and Silicon Valley. We focused on the introduction, chapter four (classification), and chapter six (the state).
This all led to some interesting discussions around what we got from the book. How Crawford presented ‘intelligence’ as a concept was read differently by different people. Some felt the book hand-waved away the personal responsibility that platforms and companies have for their AI. Others read it as saying that AI is simply not able to understand the world in as nuanced a manner as humans do, necessitating further transparency and oversight of AI by (accountable) people.
As always, we had a difficult conversation around ‘having this information, what do we do next?’. Even if we all agree that greater transparency for AI is desirable and understandable, what does this oversight look like? This involved a discussion of the recent tensions between the Chinese government and Tencent, contrasting this with the western model of the platform outstripping the state.
Overall, much of the conversation revolved around the interaction between the human and the AI, asking: even in the face of greater automation, where do we draw the line and require a human to make decisions? There is, of course, no easy answer to this question. But in the face of increasingly long and frequent lockdowns here in Melbourne, it is worth refocusing on potentially pre-digital modes of organisation, such as a trust in the general goodness of people. I did say that this answer was not an easy one.