The Unseen Teen – Lenhart and Owens – 2021 – longer notes can be found here
This is a study of the health impacts of social media (and internet usage more broadly) on adolescents. However, unlike many other studies, the focus here is on the role of the producers: 25 current or former tech company workers were interviewed regarding user wellbeing. Adolescents (aged between 13 and 17) are the focus of this research as they have special legal status as minors (being classified as ‘vulnerable’). However, the principles of universal design suggest that designing for an edge population (such as minors) can benefit the whole population.
It is difficult to find a consensus on what is meant by either health or wellbeing. In particular, digitally-oriented definitions of health and wellbeing often incorporate issues like equity and justice. From the data gathered here, no consensus could be reached on what constituted ‘healthy’ technology or digital wellbeing. Participants generally defaulted to a narrow, metrics-driven understanding of wellbeing focused on time and attention given to the platform, though some did recognise this as an overly simplistic view. The researchers instead define ‘healthy’ technology as technology that improves (or, at least, does not diminish) the mental, social, and physical health of all of its users. Digital wellbeing is treated as a sense that interaction with an online space can be a positive force in the user’s mental, social, and physical life, including offering users agency to manage their use of the platform.
The definitions offered by the researchers are intended to reject traditional, quantitative notions of ‘healthy’ technology use, recognising it as a contextual issue. This research recognises that different adolescents may have very different experiences on the same platform. Thus, it is important to look at groups who may be at risk on social media platforms, rather than simply the average user. Companies are urged to recognise this too, developing effective harm-anticipation systems in response.
The participant interviews tried to draw out how adolescent wellbeing was integrated into platform design but found that this only really happened late in R&D (if it was considered at all). This raises the question: why is adolescent wellbeing so often ignored by platform companies, despite their public prioritisation of mental health? Interviewees suggest that this is often an active decision, with platforms avoiding designing for any specific social group. It is suggested that laws like COPPA reinforce this, driving companies away from providing child-centred products and toward merely managing their compliance with the law. Despite all of this, we know that adolescents use platforms, and so the failure to design for adolescent users remains worth examining.
Three key reasons for poor platform design for adolescents are suggested here. The first is that companies tend to design for the average user (typically a young, white, adult male), which can leave out any subgroups who do not fit this mould. This ‘average user’ tends to reflect the types of people who work in platform development. Three mechanisms were identified through which the average user becomes constructed. First, business models focus on daily or monthly active users, meaning the largest group is targeted by the product, missing the nuances and harms faced by smaller groups. Second, growth metrics treat users as a monolith, which again edges out the nuances of subgroups on platforms. Third, there is a general lack of diversity in who works in tech, meaning platforms get designed around the experiences of young, white (or Asian) males.
The second key reason for poor platform design (in terms of adolescent health) is the use of strategic ignorance by companies to avoid difficult topics around adolescents. Three main methods of strategic ignorance are described. The first is simply not collecting data, which allows companies to deny that an issue exists at all, e.g. not collecting data about user age means companies can ignore issues around adolescent wellbeing. The second is collecting too much data, meaning relevant data becomes hard to find or can be ‘missed’. The third is unclear lines of responsibility for knowledge, which distributes ignorance across the company: with enough division of labour, no one person or team can be held responsible for a problem, diffusing the responsibility to identify ethical or health concerns around the platform. Workers who do raise these issues are seen as ‘blockers’ of innovation, often being siloed into the Trust and Safety team or ignored altogether.
Thirdly, how platforms and their leaders understand what their platform is ‘for’ and what value it provides shapes the choices made within the company. This ranges from companies who view their platform as a digital commons (thus privileging free speech above all else) to companies who seek to build communities or immersive worlds on their platforms (who tend to conduct more content moderation). Companies with shareholders and investors tended to find it difficult to balance these actors’ desire for profit with a desire to prioritise the user. Company leadership is important too, e.g. if leadership focuses on key performance indicators (KPIs), this tends to move focus away from the human aspect of the platforms. CEOs and founders were also identified here as being confident in their own visions and so unwilling to change or see the faults in their designs. This made it more difficult to pre-emptively account for any harms the platform could cause. Finally, advertisement-based business models are fundamentally at odds with user wellbeing: it is very difficult to change engagement-driving processes without risking financial disaster. Some interviewees identified this as a structural problem with capitalism, but it is suggested here that social platforms could still do better, prioritising adolescent wellbeing in a manner which complements long-term growth.
Levers of change and best practice
Two questions remain here: what can be done by outside regulators to push for change in social platforms? And what should companies (and workers within companies) do to change their organisations for the better?
Five key drivers of change are identified here. The first is simply that negative outside pressure on companies (e.g. media attention) can push change, as there is a fear of negative PR. The second is that publicised tragedies can force company change; while harm should not have to occur before change is enacted, it is often the first step in companies taking harm on their platforms seriously. Again, negative media attention can aid in this, and so there is an imperative for the press to go beyond surface-level narratives, e.g. screen time. The third is that public failures by other companies can spur action, with new start-ups taking the most heed of previous failures in their sector. Fourth, pressure from civil society is important, such as advocacy groups working as external advisors to companies regarding new features. This is, in part, to compensate for the lack of diversity within the platform companies themselves. Finally, regulation can help (or the threat of regulation, at least). However, focusing too much on regulation can sideline conversations around broader platform toxicity. Additionally, if regulation is written or implemented poorly, unintended outcomes can occur, or companies can be pushed out of a sector as it becomes ‘too risky’.
Several suggestions are also made here for platform companies, around remaining agile in responding to new situations and their health- or wellbeing-related consequences. This is framed as benefiting adolescent users but also as a value-added decision for all users, generating long-term profit and lower PR risk. The first is empowering, rather than protecting, adolescent users: working with young people to get their perspectives on platforms and make platforms a safer space in general for younger users. This includes building systems in which adolescents are allowed to make mistakes while remaining accountable for their behaviour (adolescents should be offered more forgiveness than adults).
The culture of the platform itself is important to consider as well. Social platforms are not neutral technologies, and so developers should work to build safe and healthy environments in which communities can grow, which in turn incentivises users to keep engaging. This includes ensuring product teams consider the broad range of user subgroups on the platform, rather than assuming all users are like themselves, and collecting more data to determine which groups require a specific focus. The makeup of product teams is important to achieving this. Employees with expertise in user health and wellbeing should be integrated from the beginning of the development process, rather than tacked on at the end. Additionally, cultivating a diverse workforce would allow for a wider range of experiences when designing platform features. These workers should be made to feel empowered to speak up, rather than left in a precarious position or treated as ‘blockers’. This includes working with unions or encouraging the development of unions.
In terms of outreach, companies should look for employees who have training in ethics or the humanities in order to be more proactive in recognising structural biases. It also means working with outside experts such as advocacy groups to pre-empt any harm caused by features, or to make amends for any features which do cause harm, as there is a growing recognition that companies cannot continue to ignore this. Finally, there should be organisational collaboration between companies to filter out bad actors and promote wellbeing. This currently happens informally, but professionalising it would make the process more effective and could also include outside experts, such as the advocacy groups mentioned previously.
While the user may be the constant focus of social media and gaming in one sense (engagement), less attention is paid to the wellbeing of young users. This report asks, ‘how do social media platform companies think about and design for the wellbeing of young people?’. It is a longitudinal, qualitative research project, interviewing various tech workers.
The key outcomes are:
- Many state that companies treat adolescent use of the platform as an afterthought. Given that adolescents are a large user group, this is said to be a lost opportunity to support their particular developmental needs.
- Defining wellbeing and harms from social media is difficult and lacks consensus. Many focus on screen time rather than more embodied forms of health. There needs to be greater focus from companies and regulators on reducing potential harms for specific minority subgroups.
- Companies design for an imagined ‘average’ user, missing the full range of user experiences. Designing for the average user can mean it fits no one, while designing around smaller subgroups can benefit everyone.
- Many companies use strategic ignorance to abstain from responsibility for the negative impacts of their platforms. This responsibilises subgroups for their own management of wellbeing.
- Company structures, cultures, or incentives don’t promote a focus on user wellbeing. There tends to be a focus on exponential growth, with wellbeing only becoming a focus once harms to adolescents have already become public.
- Actors within and outside of tech companies can improve adolescent wellbeing on social media platforms. Within companies this can include: creating policies for adolescent users that empower them to learn and rehabilitate amongst other young people (via age gating); developing with real people in mind and with a diverse team; integrating wellbeing expertise into every point of the design process; hiring a diverse workforce. For regulators and civil society this includes: applying pressure to these companies to address the issue of wellbeing; and well-thought-out regulation that avoids narrow definitions of harm.