The Edge of Digital Ethics - The Algorithm Does Not Care


Per Axbom
Last month I tweeted this observation, and wrote a similar sentiment in Swedish on LinkedIn:
A dangerous fallacy is that killer robots will take the appearance of monstrous, mechanical machines. Instead, killer robots have (among many other disguises) taken the appearance of web forms automatically determining if people are eligible for financial assistance.
Let me explain what I mean by this. Because I am not against the use of algorithms to search through large troves of data for patterns and answers to specific questions posed by humans. Computers can be amazing tools when searching for data points and alleviating tedious information-retrieval tasks.
But.
Algorithms, by their nature, create a new type of distance between the human performing the action and the human subjected to the action. This creates a world of problems related to transparency, accountability and moral efficacy.
The main argument in defense of algorithms is that they eliminate the obstacle of human inefficiency. Often this sentiment fails to take into account the many benefits of human inefficiency, such as taking time to reflect, taking time to question and taking time to listen.
Woven into the very fabric of human interconnectedness is the moral awareness brought about by sharing physical space, eye contact and microcommunication. If you can tell someone is upset even as they are smiling and their words are saying the opposite, you understand microcommunication.
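To make the opening observation concrete, here is a minimal, entirely hypothetical sketch of the kind of web-form logic I mean. The function name and both thresholds are invented for illustration; the point is that a life-affecting decision is reduced to a couple of numbers, applied instantly and at scale, with no one listening for the context behind them.

```python
# Hypothetical sketch of an automated eligibility check behind a web form.
# The function name and thresholds are invented; no real system is quoted.

def eligible_for_assistance(monthly_income: float, savings: float) -> bool:
    """Return a yes/no decision from two numbers; nothing else is heard."""
    INCOME_CEILING = 1_500   # invented cut-off
    SAVINGS_CEILING = 5_000  # invented cut-off
    return monthly_income < INCOME_CEILING and savings < SAVINGS_CEILING

# An edge case the form never pauses to ask about: a one-off payment
# last month pushes income just over the line, and the answer is simply no.
print(eligible_for_assistance(monthly_income=1_510, savings=200))  # False
```

A caseworker might have asked a follow-up question. The form cannot.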
Factors contributing to algorithmic risk
Distance from subject: The further the decision-maker is from the human subject dealing with the impact of that decision, the easier it is for them to distance themselves from the humanity of that subject. Think: drone warfare.
Time from subject: Distance can also manifest in time. There are algorithms built today that will affect people 10, 20, 30 years in the future. How do we care for someone we cannot yet know, and how do we break free from today's prejudices when they are embedded in code? Think: eugenics.
Actor knowledge of subject: When algorithms are implemented without a full understanding of the problem space, its context and its people, mistakes will be made, as they are in all development. Think: the Convention on the Rights of the Child (and why it was written).
Subject awareness of action: Many people are unaware of how many decisions affecting them are made by algorithms every day. Some make life easier, some harm health and some raise costs (many examples in the links below). Within this unawareness it becomes more and more difficult for people to exercise their rights. To object to the invisible treatment. To judge its fairness. Think: your phone.
Operator awareness of actions: Even more concerning, the people responsible for active algorithms have less and less insight into their workings as time passes, more subcontractors become involved and people switch jobs. How do we control what we ourselves do not understand but are employed to give the appearance of understanding? Who is willing to take the fall for an automated decision that harms? Think: Tay, the AI chatbot.
Regulatory awareness of actions: As subjects and operators themselves lose sight of automated decision-making, so, of course, will any regulatory institutions and their staff. As decisions become faster and more invisible, and individual humans escape accountability, oversight becomes ever more difficult. Think: robots or their makers in prison?
Data contamination: Even when algorithms are designed not to collect information about residential area or gender, research shows time and time again that this information still plays a part, because the sheer volume of data that many algorithms rely on still encodes and reveals the prejudices they are actively trying to avoid (see the sketch after this list). Algorithms are never neutral, and yet that keeps being the biggest smoke-screen argument for their deployment. Think: a thermometer in the hand of a Black person is a gun according to Google.
Capacity for listening: Remember microcommunication. Who is actively listening for indications of harm, misunderstanding and the broader perspective? Did this human require medical care? The scheduling algorithm certainly does not care about anything that does not concern scheduling. Think: “talk to the hand”.
Actions per hour / number of subjects: The sheer volume of people impacted and the frequency of the decision-making will of course also play a part in determining how much potential for danger an algorithm contains, and thus how much effort should go into mitigating and minimising those risks. Think: notifications from all your various inboxes.
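The sketch below illustrates the data contamination point with entirely synthetic, invented data. The system is never shown the sensitive attribute, only a postcode, yet the postcode alone reveals the attribute for most records, so dropping the column removed nothing.

```python
# Synthetic illustration (invented data) of proxy variables: dropping a
# sensitive column does not help when a retained column still encodes it.
import random

random.seed(0)

records = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Invented correlation: 90% of group A lives in one postcode,
    # 90% of group B in the other.
    if group == "A":
        postcode = "111 11" if random.random() < 0.9 else "222 22"
    else:
        postcode = "222 22" if random.random() < 0.9 else "111 11"
    records.append((postcode, group))

# A "neutral" system is only ever shown the postcode -- yet the postcode
# alone recovers the sensitive group for roughly nine records out of ten.
guess = {"111 11": "A", "222 22": "B"}
accuracy = sum(guess[p] == g for p, g in records) / len(records)
print(f"group recovered from postcode alone: {accuracy:.0%}")
```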
All of these risks can be addressed and managed and talked about. Sometimes with the outcome of making things less efficient. Making something less efficient can still make a lot of sense. When the intent is to protect.
But to protect we first need to acknowledge the risks of algorithms on an industry-wide scale. And makers need to assume responsibility. Makers need to see the broader potential for harm and care for all the people (and nature) they are impacting. And regulators need to more clearly determine the direction and the constraints for a sustainable way forward.
We are certainly not there yet.
Thank you for caring,
/Per
P.S. When talking about algorithms I tend to include narrow AI: algorithms designed to change over time, re-calibrating themselves according to a pre-defined machine-learning procedure. They sound intelligent, but they still only focus on completing the specific task they are programmed to do, under a narrow set of constraints and limitations. Also referred to as weak AI, it is still the only type of AI humans have realised. Remember: a human still designed the way the algorithm “learns” and changes itself, but the phenomenon is often used to double down on projecting responsibility onto “the other”.
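As a minimal sketch of that last point (a toy example, not any real system): the “re-calibration” below is plain gradient descent fitting a single weight. Everything about how the program “learns”, the update formula, the step size, the number of steps, was written down by a person, and the program never steps outside that rule or its one narrow task.

```python
# Toy sketch: the "learning" in narrow AI is a rule a human wrote down.

def learn_weight(examples, steps=1000, learning_rate=0.01):
    """Fit y ~ w * x by repeatedly applying a fixed, human-written update."""
    w = 0.0
    for _ in range(steps):
        # The entire "intelligence": nudge w to reduce squared error.
        # A person chose this formula, the step size and when to stop.
        gradient = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
        w -= learning_rate * gradient
    return w

data = [(1, 2), (2, 4), (3, 6)]      # a toy task: y = 2x
print(round(learn_weight(data), 2))  # about 2.0, and it can do nothing else
```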

Algorithmic bias - The Wikipedia definition
Examples
Algorithms: How they can reduce competition and harm consumers - GOV.UK
The Hidden Dangers in Algorithmic Decision Making | by Nicole Kwan | Towards Data Science
What happened when a ‘wildly irrational’ algorithm made crucial healthcare decisions
450,000 Women Missed Breast Cancer Screenings Due to “Algorithm Failure” - IEEE Spectrum
Robodebt scheme - Wikipedia
A Drug Addiction Risk Algorithm and Its Grim Toll on Chronic Pain Sufferers
Fired by Bot: Amazon Turns to Machine Managers And Workers Are Losing Out - Bloomberg
Algorithms are controlling your life - Vox
Algorithms have already taken over human decision making
INFOGRAPHIC: Historical bias in AI systems
Acknowledging harm and addressing it
Contractual terms for algorithms - Innovatie
New Zealand has a new Framework for Algorithms. — NEWZEALAND.AI
When algorithms decide what you pay
Episode 2 of Breaking the Black Box: When Algorithms Decide What You Pay
Who Made That Decision: You or an Algorithm?
The tweets
Per Axbom
A dangerous fallacy is that killer robots will take the appearance of monstrous, mechanical machines. Instead, killer robots have (among many other disguises) taken the appearance of web forms automatically determining if people are eligible for financial assistance.
Per Axbom
It's nice to not have to be held responsible for a decision that hurts another human being. https://t.co/2iw4cIEsMM
På svenska / In Swedish 🇸🇪
DN Debatt: “Full transparency must apply to public algorithms” - DN.SE
Have you been discriminated against by an algorithm? Researchers now demand stricter controls - Computer Sweden
Transparent algorithms in the insurance industry (PDF)
Responsible technology development | KOMET
About BankID
I am currently running a survey about BankID. It consists of five yes/no questions and an optional free-text field. Please answer. Please share. 🙏
Answer: Five questions about the use of another person's BankID.
The uncertainties of BankID