Minutes from March meeting

Knox et al

It was felt that it was somewhat difficult to connect this article to the lived experience of teachers. The article itself lacked empirical work in this sense (although that is not necessarily its aim), and research with teachers suggests that technologies are experienced as a gentler form of nudging, rather than as drastically as the article describes. When the article does give examples, they sometimes feel a bit disconnected from schooling anyway, such as the discussion of IoT devices in the home.

The above is not necessarily a problem, because the article deals more with the digital imaginary than with reality. The problem is that this distinction was somewhat unclear in the article. This article is more in the vein of Andrejevic’s cascading logic of automation, dealing with the discourse around education and digitalisation, rather than someone like Neil Selwyn’s more hands-on approach. Knox et al point to smaller steps in digitalisation which lead to a larger form of datafication. Making this distinction renders the article perhaps more useful, because it sets the boundaries for what Knox et al are intending to do.

One particularly interesting example from the article, though, was the wearable devices which claim to read human emotional states. These devices appear to perform functions which teachers already do (and can do in a more complex way than machines can, or at least in a more human way). This makes the technology seem to be a bit of a solution looking for a problem.

The marketing of these technologies is of note too: they promise to be neutral and objective measures of the classroom, problematising the role of the teacher described above (reflecting their solution-seeking-a-problem nature). We see this in apps which promise a measurement of student mindfulness, or in Microsoft’s student well-being app, Reflect. These kinds of technologies are created by people and often fed data from teacher evaluations or other human-generated data; in reality, they repackage traditional forms of knowledge and present them as new, objective student measurements.

These types of apps then go on to create new, additional work for teachers, who have to monitor the data outputs from the technologies — seemingly in contrast to their marketing as making things easier for teachers. This was felt to parallel the Mechanical Turk: presenting itself as automatic and technological while hiding the labour done by people to achieve this goal. A choice is also made about which data gets used to feed these technologies, limiting their ability to assess students. An example was given of a NAPLAN essay-marking technology which could not recognise certain sophisticated forms of writing because the data it had been fed did not include them. Thus, these essays were given poor marks despite (and because of) their sophistication.

It is always worth noting too that this kind of managerialism through quantified metrics is not new in teaching; it is not brought on by big data, merely exacerbated by it (and vice versa, with managerialism promoting datafication too). This raised questions around the deprofessionalisation of teaching, created through this additional managerial labour, and how it fundamentally changes what it means to be a teacher. This is reflected in high attrition rates among pre-service and early-career teachers, prompting the question: are we left only with those who accept or embrace this kind of datafied education?


The Koopman conversation was somewhat shorter and revolved more around ideas of technical democracy; asking how we can reimagine and redesign current information systems to work better for people, rather than for companies. Koopman, of course, suggests that this must happen long before the level of data output — that it must begin (and be a conscious effort) from the early design stages. Koopman promotes the use of technics/experts, and the problem here, as always, is how to achieve this goal through effective real change rather than falling into tokenistic action.

STS suggests non-experts are likely to be important here too, being the people who will be impacted by the data formatting and collection. Thus, bringing experts and non-experts together to diagnose current problems (and problems in future designs) may be a good first step.

The Koopman chapter in general did feel connected to ideas around algorithmic bias: it is perhaps more reasonable to question the data fed to an algorithm, and why this data has a bias, than simply to state that the algorithm itself is the only problem. This is the question of data and its format as raised by Koopman, and it is interesting because these two issues seem so closely linked that they can be hard to tease apart.

One practical means of applying this is in social media participation and moderation. We have seen a lot of discussion recently around the importance of the content of participation, rather than solely the participation itself. If social media platforms tolerate and even encourage bad actors and bigots on their platforms, this has the knock-on effect of excluding those targeted by these bad actors. So by focusing only on participation (not content), we lose the ‘freely given’ element of democratic deliberation which is central to Habermas’ idea. This seems to suggest that the somewhat older internet-moderation idea of open debate, with the best ideas winning on ‘merit’, has failed to work, necessitating new means of moderation which look at the formatting of data and communication as described by Koopman.
