Photo by Lucas George Wendt on Unsplash
Recently, I’ve been thinking about precision and developing frameworks for organisations. I’m from a Western analytic philosophical background - precision and defining terminology were drilled into me as important skills as an undergraduate. I was taught that if you can’t precisely define a term, then it’s very difficult to construct a robust & coherent philosophical argument. This is because your own thinking is likely to be hazy on the subject, and imprecision leads to unclear premises, vague conclusions, and non-universal arguments. I think little has been written on the virtues of precision when it comes to tech ethics, so I want to advocate for it here.
To go into a bit more detail, there are plenty of words used in ethics that are ambiguous: ‘harm’, ‘good’, ‘bad’, to name a critical few. This ambiguity has a use. It tells the audience that you are expressing a certain moral emotion towards a proposition, which can give clarity to your writing. But these ambiguous words can also cause harm (😊) if used incorrectly in a complex system. Without precise definitions of ‘good’ and ‘bad’, it becomes very difficult to know how to behave correctly - it’s very easy to frame any action as being ‘good’ or ‘bad’. Other ambiguous words include common terms like ‘bias’, ‘equality’, ‘freedom’ and ‘respect’. We often find these terms in ethics manifestos – I prefer the frameworks that expand on what they mean by these terms, and I dislike the ones that don’t prioritise this. This is particularly true of manifestos that are supposed to be open to the public, rather than written for a narrow audience.
As ethicists we should be looking to define our terms more precisely. I think a lot of the problems in business and technology ethics come from haziness around concepts like ‘harm’ and ‘misinformation’. The more we can do to clarify and understand these ideas, the better our frameworks will become.
Precise language for humans
However, there’s a difference between me, as a ‘technology ethicist’, having a precise understanding of terminology, and it being useful for me to communicate that precision to others.
So most of this post is about exploring the limits of precision when working as a professional ethicist, particularly when operating in the applied ethical space. I don’t think that precision is that helpful for building frameworks that will be used by humans in the workplace. This is for the following reasons:
- Precise language may not be accessible. For example, very few people are able to articulate the difference between mis-, dis- and malinformation. You may use a precise word, but it may actually make the piece more confusing for people.
- Precise language is dependent on context. The word ‘labour’ in Arendt’s The Human Condition has a subtly different meaning to the word ‘labour’ as understood in political party manifestos. If you take the time to explain what you mean by a commonly used word, the average reader loses interest.
- At an organisational level, there may be words that are understood in a particular context. For example, if you’re working for a washing machine manufacturer, the word ‘digital’ will have different connotations than at a telecommunications company. A digital vs analogue display on an appliance is very different to a smartphone vs a landline. Precision would be irritating if the context makes the meaning obvious.
- To go on a small tangent, some people may have negative emotional associations with a particular word that others do not. I’ve found that lots of people have a very emotional reaction to the word ‘ethics’, which is hard to navigate. In which case, if a precise word has negative emotion attached to it, it could be best to choose another word. Also note that you might have a weird negative reaction to a word yourself, which you may have to get over in order to communicate with others.
The first learning from this is that it is sometimes unhelpful to be completely precise. You need to write for a particular audience’s needs. If this is the case, you’d better define that audience in the ethics document you are writing, just in case the document gets into different hands and is misinterpreted.
The second learning is that it is unrealistic to expect everyone to understand your precise language. So, when you’re building ethical manifestos or frameworks, you’ll use imprecise & accessible language, but crucially, you must then offer an escalation path, so the audience knows who to speak with if they don’t understand something. That individual needs to be skilled in ethical language and theory, and be equipped with a wide range of case studies that can help them make decisions.
Precise language for machines
When it comes to AI and algorithms, we need to be precise. This is because machines don’t have the same reference points as biological organisms. They don’t have consciousness or emotions, and so they cannot understand the ambiguity and references in certain words. When a human uses the word ‘big’, we understand that context matters. We recognise that a big ant and a big mountain are of completely different sizes. We can’t expect that a machine would understand these things – we have to tell the machine that this is the case. Modern machine learning programs can generally pick up this context, but they learn it in a way that is alien to humans. You’ll often hear stories about developers themselves not fully comprehending why an algorithm has given the answer it has. In my view, this means that we have to be very clear when programming machines.
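To make this concrete, here’s a minimal, hypothetical Python sketch (the reference classes, sizes and threshold are all invented for illustration) of how the context a human carries implicitly has to be spelled out explicitly for a machine:

```python
# A human hears "big ant" and "big mountain" and adjusts automatically.
# A machine needs the reference class made explicit: "big" only means
# something relative to what is typical for that kind of thing.

# Typical sizes in metres for each reference class (illustrative values).
TYPICAL_SIZE_M = {
    "ant": 0.005,
    "dog": 0.6,
    "mountain": 2000.0,
}

def is_big(kind: str, size_m: float, factor: float = 2.0) -> bool:
    """Call an object 'big' if it exceeds the typical size for its kind."""
    return size_m > TYPICAL_SIZE_M[kind] * factor

# A 2 cm ant counts as big; a 2 m "mountain" does not,
# even though it is a hundred times larger in absolute terms.
print(is_big("ant", 0.02))        # True
print(is_big("mountain", 2.0))    # False
```

The point isn’t the code itself - it’s that every piece of context a human would supply for free (what counts as typical, how far from typical counts as ‘big’) had to be written down as an explicit, precise rule.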
Precision here can only really come about by testing the system and learning from others’ efforts in this area. When it comes to commercial objectives, a lot of effort is put into iterating to improve. Irritatingly, this level of effort is rarely matched when it comes to ethical objectives, even when meeting the ethical objectives would improve the commercial ones.
If you’re building frameworks for machines, the need for precision is greater than if you’re building a framework for a human.
Precision in arguments
Ok, this is something I think we all need to get better at. Generally, the ethical frameworks I see are a collection of random statements and aspirations. An example might be three areas where the company seeks to mitigate bias, become more transparent and/or improve human wellbeing.
Don’t get me wrong, these are all worthwhile ideas. But it’s hardly an ethical strategy. I need to see something about why transparency is important for that business.
This is something that Patagonia does pretty well. They’ve got particular core values (link accessible in April 2024), some of which are justice, equity and anti-racism. Justice is what I would call an ‘intrinsic value’, whereas ‘transparency’ is an ‘instrumental value’. Transparency will lead to justice – but not all types of transparency will do this. In my view, all ethical manifestos should start with intrinsic values and then show how the instrumental values and actions lead back to them. This is a basic form of a philosophical argument – and it’s why philosophers do need to be part of building these ethical frameworks.
I think that writing about the importance of argument in ethics is a pretty long post in itself – so maybe that’s something I’ll come back to later. For beginners, an argument refers to the way a framework moves from premises to conclusions. But for now, I will just flag that the argument itself should be precise, and this is true regardless of the audience reading the framework/manifesto.
I would say that this is an important post - but it definitely sits in the ‘thought-in-progress’ category. If you’ve got any thoughts about precision in ethics - where it’s useful, where it’s not so useful, what it means and techniques to achieve it - then please post below.