ARTIFICIAL INTELLIGENCE: Ethical AI, It’s Personal!
Artificial Intelligence (AI) has an increasing influence on our everyday lives. It has the profound potential to transform society for good or for bad.
There’s no shortage of policies and papers relating to AI ethics, with over 17 ethical frameworks published in 2021 alone¹. Broadly speaking, they focus on overarching policy guidelines aimed at institutions.
Unfortunately, the majority of these documents are somewhat unwieldy, making it hard for readers to see the wood for the trees. The sheer size of some of these documents can be off-putting, especially for newcomers to the field. While they offer guidance, they lack consistency, often leading to confusion².
This leaves individuals practicing AI with a difficult ethical conundrum to resolve in the face of ever-increasing AI legislation³. It’s time for a “change of approach”: instead of waiting for governments or institutions to prescribe ethical guidance, “take matters into your own hands”!
The Individual AI Practitioner.
Individual decisions made by AI practitioners can have immense commercial and societal impacts, whether intended or not. With the pace of technological innovation far outpacing traditional “top-down” governance, there is a need for AI professionals themselves to take a “bottom-up” approach and follow their own personal ethical code of conduct.
A simple “one-pager” detailing your own AI ethical principles can be far more pragmatic and meaningful than referencing a 100-page framework!
Code of Conduct.
The focus of this code of conduct is the individual AI practitioner. Its foundation is the ethos that “ethics is a necessity”: not only a tool for supporting human agency, but also a counterbalance to avoid maleficence⁴. Simply put, “Promote good outcomes, and avoid bad ones!”
I acknowledge that “trying to do the right thing” isn’t always timely, obvious, or objective, so I propose the following code of conduct to help individuals make consistent ethical decisions.
Foundation.
Floridi’s⁵ “dual advantage” of an ethical approach to AI is the foundation for this code of conduct and forms the bedrock on which the five pillar principles of Human Agency, Integrity, Transparency, Veracity, and Accountability stand.
Q: “Why bother with Ethics for AI?”
A: The dual advantage of ethical behavior “enables organizations to take advantage of the social value that AI technologies offer” (i.e. socially acceptable or preferable opportunities), while simultaneously “enabling organizations to anticipate and avoid or at least minimize costly mistakes” (i.e. courses of action that are socially unacceptable even if legally permissible).
This dual approach also mitigates against opportunities missed or lost through fear of mistakes.⁵
Pillar Principles.
Note:
- See the Appendix for the practical application of these principles.
- A “handy” mnemonic for the five principles: “Human Integrity Transforms Value Add!”
Intended Usage.
Think of this code of conduct as a simple “Ethical Prism” through which AI use cases or scenarios can be viewed.
The five principles act as lenses that contextually separate and help analyze specific ethical considerations.
By asking the questions prompted by these principles, the AI practitioner can contextualize ethical implications during solution development, for example (a minimal checklist sketch follows the note below):
- Would you publish an AI model trained on biased data?
- Can you explain the decision-making of your AI solution?
- Is the solution transparent about how user data is used?
Note: This code of conduct does not purport to be definitive, nor is it meant to be used in isolation.
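To make the prism concrete, here is a minimal sketch of the five principles as a reviewable checklist in Python. The `PRISM` questions and the `open_questions` helper are purely illustrative assumptions of mine, not part of any published framework.

```python
# A minimal sketch of the "ethical prism": each pillar principle
# becomes a prompt to be answered before shipping an AI use case.
# The questions are illustrative, not exhaustive or prescriptive.
PRISM = {
    "Human Agency":   "Does the system preserve human decision-making?",
    "Integrity":      "Are my intentions stated clearly, without obfuscation?",
    "Transparency":   "Is it clear how user data is sourced and used?",
    "Veracity":       "Have the inputs and results been appropriately validated?",
    "Accountability": "Can the decision process be explained and redressed?",
}

def open_questions(answers: dict) -> list:
    """Return the principles that still lack a considered answer."""
    return [principle for principle in PRISM if not answers.get(principle)]

# Example: reviewing a hypothetical movie-recommender use case
answers = {"Transparency": "Data is opt-in; usage is stated in the UI."}
print(open_questions(answers))
# ['Human Agency', 'Integrity', 'Veracity', 'Accountability']
```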
Summary.
Top-down governance of AI ethics is in its infancy and struggling to keep pace with technological innovation. Increasing AI legislation, coupled with vast volumes of material, can make ethical AI a rather nebulous concept.
Following a personal code of conduct based on a succinct set of principles (“Human Integrity Transforms Value Add”) provides a sound rationale for AI ethics. It can act as a simple tool to contextualize and frame consistent analysis. You don’t need elaborate documentation.
The truth is, there may never be a go-to or overarching ethical standard to follow or consult when it comes to AI. America, China, and Europe⁶ are on different ethical wavelengths, while big tech⁷ has competing agendas. Faced with this reality, remember:
“Ethical AI Starts With You!”
Notes & Citations.
About the Author:
Colin Byrne is an IT professional with over 25 years’ industry experience and a recent (2020) MSc in Artificial Intelligence from the University of Limerick.
Check out another of my articles: The Great AI Debate!
References:
[1] 2021, “Ethical AI Frameworks”. Available at: Google Scholar.
[2] 2019, “Establishing the rules for building trustworthy AI”. Available at: Nature Machine Intelligence.
[3] 2021, “The EU’s new Regulation on Artificial Intelligence”. Available at: Arthur Cox.
[4] 2018, “50 million Facebook profiles harvested for Cambridge Analytica in major data breach”. Available at: The Guardian.
[5] 2018, “An Ethical Framework for a Good AI Society”. Available at: Springer.
[6] 2021, “China wants to dominate AI, the U.S. & Europe need each other”. Available at: Politico.
[7] 2021, “Google might ask questions about AI ethics, but it doesn’t want answers”. Available at: The Guardian.
[8] 2021, “A European approach to excellence and trust on Artificial Intelligence”. Available at: Europa.eu.
[9] 2016, “GDPR Regulation”. Available at: Europa.eu.
[10] 2018, “Veracity Model, Methods & Morals”. Available at: Hackernoon.
[11] 2019, “Everyday Ethics for Artificial Intelligence”. Available at: IBM Watson.
[12] 2021, “Data Science Institute (DSI)”. Available at: American College of Radiology.
[13] 2018, “The Art of AI”. Available at: Medium.
[14] 2016, “Artificial Intelligence, for the future of decision making”. Available at: GOV.UK.
Appendix:
Practical examples of the pillar principles.
Human Agency⁵: “AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit, or misguide human autonomy”.
“The principle of autonomy, in the context of AI, means striking a balance between the decision-making power we retain for ourselves and that which we delegate to artificial agents.”⁵
- The AI practitioner should be mindful, when designing systems, to enable mechanisms for human self-determination by building in appropriate safeguards or interlocks (a minimal interlock is sketched after this list).
- Appropriate due diligence is required.
- Delegation of tasks to AI systems may lead to unplanned and unforeseen changes in human behaviors.
- The AI practitioner needs to consider the societal implications of decisions/predictions made by AI systems.
- They should advocate for systems that promote equitable societal outcomes.
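As one illustration of such a safeguard, the following is a minimal, hypothetical sketch of a human-in-the-loop interlock: predictions below a confidence threshold are deferred to a human reviewer rather than acted on automatically. The names (`ReviewQueue`, `CONFIDENCE_THRESHOLD`) and the threshold value are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Illustrative threshold: below this confidence, a human decides.
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class ReviewQueue:
    """Holds low-confidence cases for human review (hypothetical)."""
    pending: List[Tuple[str, float]] = field(default_factory=list)

    def defer(self, case_id: str, confidence: float) -> None:
        self.pending.append((case_id, confidence))

def decide(case_id: str, prediction: str, confidence: float,
           queue: ReviewQueue) -> str:
    """Act on the model's prediction only when confidence is high;
    otherwise preserve human agency by deferring the decision."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction           # automated path
    queue.defer(case_id, confidence)
    return "deferred-to-human"      # human-in-the-loop path

queue = ReviewQueue()
print(decide("case-001", "approve", 0.97, queue))  # approve
print(decide("case-002", "approve", 0.61, queue))  # deferred-to-human
```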
Integrity: “Doing the right thing, even when no one is looking.” (C.S. Lewis)
- The AI practitioner shall maintain integrity and independence in their technical judgements.
- The AI practitioner must not use obfuscation in any form, but should instead strive to make their intentions clear, concise, and coherent.
- Be cognizant of relevant legal requirements, e.g. the EU approach to AI⁸.
- When seeking appraisal of AI systems, real and tangible effort must be made to obtain appropriately diverse perspectives.
Transparency: Data forms the underlying basis of all AI systems. The handling of this data, in and of itself and independent of the decision-making process, poses significant ethical challenges; e.g. European legislation (GDPR, 2018)⁹ provides a framework regulating the processing and movement of user data.
- The AI practitioner should at least be aware of the provenance, consent, and intended application of the data they process (a minimal provenance record is sketched after this list).
- They should be aware of any legal implications relating to the dissemination and processing of data.
- They should distinguish between consent and informed consent, and advocate for mitigating risk in this area.
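By way of illustration, here is a minimal sketch of a provenance/consent record carried alongside a dataset, loosely in the spirit of a “datasheet for datasets”. The `DatasetRecord` fields and the `check_use` helper are hypothetical, not a formal standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DatasetRecord:
    """Minimal provenance/consent metadata to carry with a dataset
    (field names are illustrative, not a formal standard)."""
    name: str
    source: str            # where the data came from
    collected_under: str   # legal basis, e.g. "informed consent"
    permitted_use: str     # what processing the consent covers
    contains_pii: bool

def check_use(record: DatasetRecord, intended_use: str) -> bool:
    """Flag uses that fall outside what the data subjects consented to."""
    return intended_use.lower() in record.permitted_use.lower()

survey = DatasetRecord(
    name="customer-survey-2021",
    source="opt-in web survey",
    collected_under="informed consent",
    permitted_use="service improvement analytics",
    contains_pii=True,
)
print(check_use(survey, "service improvement analytics"))  # True
print(check_use(survey, "targeted advertising"))           # False
```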
Veracity: “Conformity with truth or fact.”
The computing adage “garbage in, garbage out” also applies to AI systems: the quality of the input data directly affects the quality of the output. Simply getting a result isn’t, in and of itself, the end step; results need to be appropriately validated.¹⁰
- The context and impact of an AI system’s predictions should frame the validation requirements, e.g. predicting favorite movies vs. steering an autonomous vehicle (a minimal input-validation sketch follows).
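A minimal sketch of such validation, assuming simple numeric sensor data with known bounds; the checks and bounds are illustrative and would be domain-specific in practice:

```python
import math
from typing import Iterable, List, Optional

def validate_inputs(values: Iterable[Optional[float]],
                    lo: float, hi: float) -> List[str]:
    """Basic 'garbage in' checks before training or inference:
    missing values, NaNs, and out-of-range readings.
    The bounds are illustrative and domain-specific."""
    problems: List[str] = []
    data = list(values)
    if not data:
        problems.append("empty input")
    for i, v in enumerate(data):
        if v is None or math.isnan(v):
            problems.append(f"index {i}: missing/NaN value")
        elif not lo <= v <= hi:
            problems.append(f"index {i}: {v} outside [{lo}, {hi}]")
    return problems

# Example: sensor readings expected to lie in [0, 100]
print(validate_inputs([12.5, float("nan"), 250.0], 0.0, 100.0))
# ['index 1: missing/NaN value', 'index 2: 250.0 outside [0.0, 100.0]']
```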
Bias awareness: “Humans are inherently vulnerable to biases and are also responsible for building AI solutions; consequently, there is real potential for human bias to be embedded in the systems we create.”¹¹
- The AI practitioner should develop systems without intentional bias.
- Strive to minimize bias, both technical/algorithmic and societal.
- Validate early and often (a minimal bias check is sketched after this list).
- The AI practitioner should be aware of applicable domain-specific standards, e.g. modeling standards and certification programmes, and advocate for adherence to them, e.g. medical imaging standardization¹².
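As a sketch of “validate early and often”, the following computes per-group positive-outcome rates, one simple, demographic-parity style signal among many possible fairness checks, and not sufficient on its own. The data and function names are illustrative:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def positive_rates(outcomes: List[Tuple[str, int]]) -> Dict[str, float]:
    """Rate of positive model outcomes (1) per group.
    One simple fairness signal; large gaps between groups
    warrant investigation, though context always matters."""
    counts: Dict[str, int] = defaultdict(int)
    positives: Dict[str, int] = defaultdict(int)
    for group, outcome in outcomes:
        counts[group] += 1
        positives[group] += outcome
    return {g: positives[g] / counts[g] for g in counts}

# (group, model decision) pairs; the data is illustrative only
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = positive_rates(decisions)
print(rates)                                   # approx {'A': 0.67, 'B': 0.33}
gap = max(rates.values()) - min(rates.values())
print(f"parity gap: {gap:.2f}")                # a large gap is a red flag
```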
Accountability: “Refers to the need to explain and justify one’s decisions and actions to their partners, users and others with whom the system interacts.”¹³
- The AI practitioner should be aware of and able to articulate the decision-making process for which they are tasked.
- AI should be designed for humans to easily perceive, detect and understand the decision process.¹¹
- “To ensure accountability, decisions must be derivable from, and explained by, the decision-making algorithms used”.¹³
- Where decision making isn’t derivable, e.g. in deep neural networks, the decision process itself needs to be explained.
- Stakeholder involvement in the system development needs to be clearly documented.
- The AI practitioner needs to be aware of, and advocate for, suitable redress mechanisms.
- Redress mechanisms need to be available where society has legitimate concerns.¹⁴
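To illustrate one such mechanism, here is a minimal sketch of an append-only decision log that could support later explanation and redress. The record fields, file format, and the `log_decision` helper are assumptions for illustration, not a prescribed standard.

```python
import json
import time
from typing import Any, Dict

def log_decision(case_id: str, prediction: Any, confidence: float,
                 model_version: str, inputs: Dict[str, Any],
                 path: str = "decision_log.jsonl") -> None:
    """Append an auditable record of each automated decision.
    A minimal audit trail that supports later explanation and
    redress; the fields and format are illustrative only."""
    record = {
        "timestamp": time.time(),
        "case_id": case_id,
        "model_version": model_version,
        "inputs": inputs,
        "prediction": prediction,
        "confidence": confidence,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a hypothetical loan decision for later review
log_decision("case-001", "approve", 0.97, "v1.2",
             {"income": 48000, "tenure_years": 3})
```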