A few weeks ago, 42 states’ Attorneys General filed a lawsuit against Meta alleging that the social media giant intentionally launched addictive products that harm teenagers. Specifically, the lawsuit claims that Meta harmed children through excessive push notifications and alerts, through filters that caused teenage girls to feel ashamed of their bodies, and by collecting teens’ data without their consent.
The case surfaces an internal debate at Meta: the company had data demonstrating the negative ethical consequences of its products, but, despite a few noble objections, those products were launched anyway. If you’re interested in a concise summary of the lawsuit, The Daily had a great episode last week.
My course at Darden, “Technology and Ethics,” is designed to empower the tech employees of the future to intervene when they see ethical breaches like the ones at Meta. In our class, we teach that there are two primary categories of intervention: employees can speak truth to power, the method deployed by managers at Meta who did their best to raise alarms, or they can bake ethical reasoning into the product development process itself. Although the former may occasionally be an effective last resort, the latter is more helpful when trying to ensure companies ship ethical products. Considering ethics before any code is written allows managers to find creative solutions and open up choices beyond “launch” and “do not launch.”
In our class, we teach a framework for evaluating product design choices through an ethical lens. The framework has four steps:
1. Identify the feature itself (e.g. push notifications, notification badges). The easiest part.
2. List the objectives that feature is trying to achieve, in quantitative terms. For push notifications and badges, the objective is user engagement, measured in monthly active users (MAU) or time spent on the app (T/S).
3. List potential harms and negative consequences. Product managers should ask themselves, “What are the potential ethical harms of this design choice? How likely are those harms to occur? Do we have data to assess that likelihood?”
For this step it is essential that managers consult members of their team with different levels of technical expertise and different life experiences to identify a comprehensive list of potential harms. When those people are not available in their immediate circle, they should consult the popular press and, where relevant, academic research. Teams with the time and wherewithal should conduct bespoke research, as Facebook teams did in this case. Although it is impossible to identify every possible consequence, or to mitigate them all, managers should make every effort to build their list. The two primary harms and unintended consequences of push notifications and notification badges are increased user distraction and increased user dopamine levels, which are often associated with depression and anxiety.
4. List potential mitigation strategies. Again, to identify these strategies, managers should leverage the diversity of their teams to come up with the most creative solutions. Some solutions will be ruled out by business objectives or resource constraints. Nevertheless, there are obvious ways to mitigate the harms of notifications and badges, including badging only specific events, or defaulting to an opt-in screen so that users can select when and how they want to be notified. Only when the core development team agrees on a solution that mitigates at least some of the ethical harms should the product enter development (a minimal sketch of the full framework follows below).
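To make the four steps concrete, here is a minimal sketch, in Python, of how a product team might capture the framework as a lightweight pre-development checklist. The class names, the ready_for_development gate, and the example entries for push notifications are illustrative assumptions for this post, not part of any real review tool at Meta or in our course.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Harm:
    description: str          # step 3: a potential harm or negative consequence
    likelihood: str           # e.g. "low", "medium", "high", ideally backed by data
    mitigations: List[str] = field(default_factory=list)  # step 4: agreed mitigations

@dataclass
class EthicalReview:
    feature: str              # step 1: the feature itself
    objectives: List[str]     # step 2: quantitative objectives (e.g. MAU, time spent)
    harms: List[Harm]         # steps 3 and 4

    def ready_for_development(self) -> bool:
        """The product enters development only when every identified harm
        has at least one mitigation the core team has agreed on."""
        return all(h.mitigations for h in self.harms)

# Illustrative example: push notifications and notification badges
review = EthicalReview(
    feature="Push notifications and notification badges",
    objectives=["Monthly active users (MAU)", "Time spent on the app"],
    harms=[
        Harm(
            description="Increased user distraction",
            likelihood="high",
            mitigations=["Badge only specific, user-selected events"],
        ),
        Harm(
            description="Elevated dopamine response associated with anxiety and depression",
            likelihood="medium",
            mitigations=["Default to an opt-in screen for when and how to notify"],
        ),
    ],
)

print(review.ready_for_development())  # True only once every harm has a mitigation
```

The point of the gate at the end is the same as the framework’s: ethics becomes a design input that shapes what gets built, rather than a last-minute veto at the launch decision.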
Meta is getting sued because, despite researching the potential ethical harms of various design choices, it did not incorporate harm reduction and ethical considerations before its teams started writing code. Instead, the mental health research was presented post-development, at the point of the “go vs. no go” decision (either to launch or not to launch), which made the ethical path seem both more costly and inconsistent with business growth.
Ironically, had a framework like ours from Darden been incorporated earlier in the product development process, managers would likely have found “win-win” opportunities to launch these features in ways that mitigated their mental health costs. Considering consequences at the right time would have allowed Meta to get creative and avoid public reproach.
Instead, Meta only considered ethics at the last minute, which led it to take the most extreme path in service of business metrics, suppress internal mental health data, and hope nobody would notice. This is a story we’ve heard before from Team Zuckerberg, have we not?
As we look to a future with generative AI, which presents unknown opportunities and risks, it is essential to bake ethical decision-making into the design of products from the outset. While Meta has questions to answer as a result of this lawsuit, the questions will only get worse if managers don’t use frameworks like the ones we teach at Darden to navigate ethical tradeoffs and find creative solutions before the first line of code is written.