Rationale

I’d like to share the long-form version of my thinking and rationale behind Ethics Litmus Tests. I’ve put headers on each section so feel free to skip around to the bits that interest you, and to ignore those that don’t.

Disclaimer: I am not a moral philosophy expert. Think of me as ethics-curious — reasonably well read, looking for ways to solve real world problems.

Introduction

This project comes from a place of need. I needed a tool for myself, and I needed a tool I could point to for my colleagues and clients. I hadn't found any that felt fit to purpose. It comes from a place of curiosity and exploration. It comes from the impulse to practice, to uncover, to unearth, to compare. To hypothesise, test, assess, and do it again. To move towards confidence while acknowledging the impossibility of perfect certainty. To treat ethics as a science experiment, not an opportunity for self-righteousness. To discard the idea of moral certainty. To tangle with the grey.

It's the opposite of having all the answers. It’s the opposite of the belief that there’s one obvious answer. It looks beyond 'Do the right thing'. It poses questions that lead to more questions. It's up to you to excavate the answers, and through that process make discoveries. It's about the process, not the artefact.

This project is a response to, and rejection of, the long, vague, hand-wavey, un-verifiable, un-implementable ethics frameworks that are in vogue right now. The pretty principles documents. You know the ones I mean. The strategy documents that are, let's be honest, hard to read, let alone turn into work culture.

Most of the principles proposed are so imprecise as to be useful for justifying any strategy you could propose. They're good for signalling only. But if that's where the work ends, then it's worse than nothing: you've provided a tool that has no actual bearing on engineering or design decision making, and, more importantly, you've eaten up the oxygen of an alternative approach that might be more pragmatic.

In my view, the principles proposed in these documents are meaningless without implementation details. We need explicit instructions on how they turn into product or business decisions. What even is 'data privacy', anyway? How do we turn ‘promote the progress of society and human civilization’ into a code test? If lack of diversity in our teams is directly responsible for tech’s moral failings then how much diversity leads to good ethics? How do we measure it? How do we know if we’ve succeeded?

Project inspiration

Three major reference points to acknowledge:

  1. Brian Eno’s Oblique Strategies
    Card decks aren't a new idea, but I particularly like the subtle and provocative ideas in this pack. Eno (with Peter Schmidt) proposed the deck as a way to get past creative blocks and encourage lateral thinking. Poetic, sometimes cryptic in nature, the cards function both as a mental reset and as a cipher for whatever subconscious gem has been brewing but hasn't had a chance to emerge yet.

  2. The Good Place
    The genius of this show is in the scripting — chock-full of pop culture references, and with a light and breezy tone, it nonetheless manages to explore beefy moral philosophy concepts from heavyweights such as Kant, Aristotle, and T. M. Scanlon. I also admire the show’s capacity to broach ethics from a totally non-religious viewpoint (there’s no mention of Heaven or Hell, and religious cosmologies only get a glancing look-in).

  3. Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones by James Clear
    This book highlights the value of small improvements on a daily basis. This feels like an extremely useful frame for ethics work: we want to support learning habits and skills growth, and do it in a way that is small, repeatable, easy to bite off, and attractive.

Why a physical tool?

Good question - why not an app, a spreadsheet, some other digital tool? We’re techies, why make an analogue tool?

I wanted to offer a physical tool that was intuitive, appealing, and would hold space both on our desks and in our minds. You know 'out of sight, out of mind'? I think that applies doubly to ethical conundrums.

We're arguably doing a terrible job of prioritising ethical review in our work, so I felt that a deliberate 'place-holder' was a useful MacGuffin. If/when we reach a Star Trek utopia (luxury space communism, yeah!) and this work is second nature, then fair enough, holding space might not be necessary. But in the meantime it gives us collective permission for, and a collective reminder of, the conversations to be had.

Pricing / product rationale

Making a paid product is a way to (hopefully) spin up income to fund my research and my work producing open source tools for Fair Machine Learning. To ensure that price won't be a barrier to entry, I made the PDF freely available under a Creative Commons licence. Those buying the card pack get the original artwork and the unique test tube container with cork lid. Especially as it's a glass container, I can't bring the price down much and still be able to ship it safely with enough eco-friendly packing (unless I get very high volume sales 😉).

Philosophy Rationale

My approach is not to center one moral philosophy or theory as the correct one, but rather to take a sampler approach and cherry-pick some of the best ideas from different schools of thought. Ethics Litmus Tests do not presume to know the 'right' way forward, but rather offer provocations to get us out of our heads, test a feeling rapidly, and provide a different perspective.

Partly, this approach provides a breadth of ideas, which I thought would be most useful to anyone who might pick up this pack, and would hopefully make room for geographic and cultural differences in ethics. Additionally, I feel that the competitive vibe between moral philosophies is itself a problem, and the opposite of 'doing the work' of practicing our moral intuition.

So my response to catty intellectuals is to vote for “one of everything, please”.

I started with questions that were intuitive and came to mind easily. The first question was one I’ve been intuitively using for years anyway:

“Would this be ok if it happened to Mum?”

After I had more questions than slots available, I started to categorise the prompts to assess their overall spread and coverage. This was a helpful way to improve the breadth of ideas presented in the card pack.

While there are obvious overlaps in these categories, I find them useful as a broad-strokes concept map describing the instincts I hope to hone and refine.

This is the taxonomy I came up with:

Appeal to empathy

To help us consider experiences of technology from another's point of view, and let that guide our decision making. Empathy is an emotional experience, distinct from sympathy, which can be purely cognitive. There's a heap of writing about the power of empathy for breaking past the blinkers of our own experience, as well as some rejection of empathy (as an insufficiently thorough approach to identifying potential failures).

Appeal to humility

Attempts to remind us of our varied cognitive biases, human foibles, and tendency to center our own experiences. Humility requires us to accept the edges of our imagination, our knowledge. Humility asks us to remember the unknown unknowns. For a deep dive, check out my friend Milly’s piece on Humility in tech for the greater good.

Appeal to intellectual honesty

Helping us untangle how much we want things to be true or untrue from the strength or weakness of the arguments for them. These prompts are attempts to keep us honest, with ourselves and with others.

Appeal to Consequentialism

This one does what it says on the box: “Consequentialism is an ethical theory that judges whether or not something is right by what its consequences are.” - Ethics Unwrapped

The challenge of Consequentialism is the challenge of quantifying the value of competing futures - how can you anticipate all the effects any action or policy might have? (Spoiler: you can’t). But nonetheless it can provide a useful frame for thinking about policy, especially if your focus is on reducing the worst harms rather than achieving the highest benefits.

Appeal to Utilitarianism

Utilitarianism is a sub-category of consequentialism, primarily concerned with measuring and comparing the utility of choosing any given policy or action over another. It's worth calling out as it has inspired a number of modern thinkers and is often associated with the Effective Altruism movement. EA and Utilitarianism do have some differences in approach; specifically, “Unlike utilitarianism, effective altruism doesn’t necessarily say that doing everything possible to help others is obligatory, and doesn’t advocate for violating people’s rights even if doing so would lead to the best consequences.” - Effective Altruism

Appeal to Contractualism

Contractualism is concerned with social contracts, the invisible agreements underpinning our everyday interactions with each other. The term can be used in a broad sense, “to indicate the view that morality is based on contract or agreement”, or in a narrow sense, “to refer to a particular view developed in recent years by the Harvard philosopher T. M. Scanlon” - Stanford Encyclopedia of Philosophy

T. M. Scanlon and his book What We Owe to Each Other get several mentions in The Good Place, and in many ways his work has become emblematic of Contractualism.

Appeal to Deontology

Deontology is a moral philosophy made popular by its most famous advocate, Immanuel Kant, whose ethics are grounded in reason rather than religious authority. Deontological ethics consider the will and intention of the actor rather than the outcome of their actions. Another divergence from Consequentialism is that “Deontological theories hold that some acts are always wrong, even if the act leads to an admirable outcome.” - Seven Pillars Institute

Appeal to ‘best self’

One of the philosophies discussed in The Good Place is the idea of the manifold versions of self (literally expressed through reboots, in this case), and the deep question - which is the best one? In Season 2, Ep 9, Chidi has a customary freak-out and has to be talked down by Eleanor:

“So in a way, it doesn’t matter if I was better in version 492 or whatever, because the best version of me is just as much about my effect on the world around me as it is about my own, ego-centric self-image”

Problem definitions

These provocations are included to help us think about the waters we're swimming in, and to make sure we're clear enough on what has gone, or could go, wrong. They also point us towards thinking about the scale, urgency, or importance of the scenario compared to the broader set of problems. This kind of analysis is very useful for ranking and prioritising, and for deciding whether it's a 'now' problem or a 'later' problem.

Classic gotchas

This category holds what I think of as the “Just So” stories of tech: the fallout that happens after we accidentally cc the entire company email list, or the thing gets published ahead of schedule. I didn't deep-dive into specifically technical problems, as these don't generalise well, but rather attempted to evoke the feeling of that time you ssh'd into the server and deleted a whole repo by accident. 😂

Conclusion

This is the beginning, not the ending! Ethics Litmus Tests are not intended to be your only ethics tool, a full-coverage test, or a complete schema for fairness. They're one tool for your toolbox: a way to make getting started easier and more approachable.
