Discussion of Mascheroni & Siibak, and Grandinetti:
- There was some frustration around the Grandinetti article, with the primary question being: who is this actually written for? It did not seem to contain any information that someone likely to be reading First Monday would not already know, and it did not do much in terms of empirical work either (adding a few examples about Zoom was not sufficient in this case).
- It was felt that there was a lack of user perception within the article: students (and staff) were spoken about but never really spoken with. Instead, Grandinetti defaulted to the political economy critique, which is fine but, again, nothing terribly new. It was felt that this approach may have been taken simply because it is easier than doing the fieldwork. This is a critique levelled at political economy work at large: how do we design projects which incorporate user perspectives, rather than imposing perspectives upon the user?
- One issue in formulating such research is the black-boxed aspect of platforms. Platforms intend to act infrastructurally, and thus to remain as hidden as possible. This makes it difficult for both users and researchers to fully understand how platforms are experienced. Researchers hypothesising impacts and then telling users about those impacts is not necessarily ‘authentic’ either. Mark Andrejevic discusses this in Automated Media, arguing that we have moved away from a panoptic form of surveillance towards a more internalised one which is not even consciously considered.
- Ultimately, then, the problem with the Grandinetti article was one of impact: does this writing exist simply to offer other academics a form of moral grandstanding (we see how bad all of these systems are), generating a form of institutional capital, or does it actually make a difference? That the article felt like it could almost have been written by an algorithm itself (one fed on First Monday articles) gives the impression of the former.
- This is a regular frustration within contemporary academic writing on media: the norm has become to conduct a thorough critique of the links between surveillance, capital, and datafication. This has become a new tenet of faith which does not help us shed light on, or do anything about, inequalities.
- The Mascheroni and Siibak book fared better in this regard. One reason is its use of the A-level algorithmic marking controversy in the UK. In this sense, it did at least highlight a practical example of public outrage around algorithmic decision-making.
- However, at the same time, it fails to point out that all the algorithm really did was reflect the inequities which already exist within the education system. This is not meant in some abstract manner: the algorithm effectively ranked students within an institution and assigned grades based on that same institution's grade distributions in previous years. In not discussing this, the book misrecognises the core issue. COVID allows for a discussion of algorithms through the lens of existing inequities, so it is not enough to stop at the critique that ‘the technology is bad’; these technologies speak to existing societal issues.
- One of the issues facing academic writing here is that the problem is twofold: technologies do reflect and reify existing issues, which cannot be overlooked; but, secondarily, technology companies make huge amounts of money selling poor technology to education systems. Both critiques feel important to make; it is drawing clear delineations between the two which feels difficult.
- The latter critique is especially pertinent because it is often difficult to measure outcomes from educational technologies: their internal metrics are difficult to translate into other quantified grading metrics (often by design). This means that, even if an educational technology appears to be good, it is very difficult to ever verify this. Despite this, huge amounts of money are spent on things like Google Classroom rather than on more material resources for teachers, or simply on having more teachers (effectively, smaller classes).
- Instead, we so often see technologies which are solutions looking for problems (or which create problems and position you as a ‘bad parent’ or ‘bad teacher’ if they go unused) that it is difficult not to see this type of technological spending as itself the problem. The pursuit of measurement is endless and dooms us to failure for never measuring enough, and this is worth critiquing. What we strive for, despite its difficulty, is a balance between this critique (currently overrepresented) and empirical work on pre-existing inequities (vastly underrepresented).
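The ranking-based mechanism discussed above in relation to the A-level controversy can be sketched as follows. This is a deliberately simplified illustration of the general principle (rank students within an institution, then fit grades to that institution's historical distribution), not a reproduction of Ofqual's actual model; the `assign_grades` function, the student names, and the grade proportions are all invented for illustration.

```python
# Illustrative sketch (not Ofqual's real model): students are ranked
# within a school, and grades are assigned so as to match that same
# school's historical grade distribution from previous years.

def assign_grades(ranked_students, historical_distribution):
    """ranked_students: list of names, highest-ranked first.
    historical_distribution: {grade: proportion} from past cohorts,
    ordered from highest grade to lowest."""
    n = len(ranked_students)
    grades = {}
    position = 0
    for grade, proportion in historical_distribution.items():
        count = round(proportion * n)
        for student in ranked_students[position:position + count]:
            grades[student] = grade
        position += count
    # Any students left over through rounding receive the lowest grade
    lowest = list(historical_distribution)[-1]
    for student in ranked_students[position:]:
        grades[student] = lowest
    return grades

# A school that historically awarded few top grades caps this year's
# cohort at the same proportions, regardless of individual performance.
cohort = ["Amira", "Ben", "Chloe", "Dev", "Ella", "Femi", "Gita", "Hana"]
history = {"A": 0.25, "B": 0.50, "C": 0.25}
print(assign_grades(cohort, history))
# → {'Amira': 'A', 'Ben': 'A', 'Chloe': 'B', 'Dev': 'B',
#    'Ella': 'B', 'Femi': 'B', 'Gita': 'C', 'Hana': 'C'}
```

The point the sketch makes concrete is the one above: no individual student can outperform their school's past, so the mechanism reproduces institutional inequities by construction.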