In one of the talks I attended last year, the term in the title of this newsletter came up: “Ethics Aligned Design”. The gist of the talk was how to build AI that is ethically aligned…and do not ask me aligned with whom, because that is exactly what I want to discuss in this newsletter.
In the past year, two pillars of Artificial Intelligence have generated a lot of “experts”: “Generative AI” and “AI Governance”. These experts are coming out of the woodwork, so as a side note for those interested in these two pillars, do be mindful of who you are following.
Coming back! I had some question marks when the speaker shared this term. Let me explain why, and you will probably see why I am also skeptical of the term “AI Ethics”, or of making Artificial Intelligence more ethical.
Definitions
Firstly, as always, let us set up some definitions for the discussion.
Ethics: a theory or system of moral values (Merriam Webster)
Morals: principles of right and wrong in behavior (Merriam Webster)
Most of us are guilty of using the two interchangeably, but in actual fact morals are the fundamental building blocks of being ethical. Morals are the underlying individual principles that say an action is right or wrong, and things get more complicated once we talk about ethics, which could mean diving into a spaghetti bowl over here.
Discussion
Is a person who eats meat, despite having seen the killing process of animals, considered unethical then? A very young son steals food from a convenience store to feed his ailing mother, and they only have each other in the whole family: is that right, wrong, or moderately wrong? How about a retiree who accidentally hurts a very pushy real estate agent who has been harassing the retiree non-stop to sell his retirement home for a profit (mostly the agent’s)?
Do you have answers to the questions above? Do you think your friends and relatives will back your answers 100% on whether each case is right or wrong? Or somewhere in between?
So here comes the fun part. If we humans are not confident that our answers to the questions above are absolutely right or wrong, should we expect an AI that is designed and engineered by humans to be more ethical than humans?
We humans look at right and wrong through our own background, experience and knowledge, hence the multiple perspectives on any moral conundrum. We cannot expect machines to behave “ethically” either, then.
This begs the question: although some situations are absolutely right or wrong, the real world is more complicated than that, so how can we make machines work for humans more ethically? My belief is that we need to “target” the people designing the AI and ask what we are using AI for. We have to constantly ask ourselves that question when we design any tool, in fact. And this leads me to the following:
1 ) Ethics Aligned Design sounds very fluffy to me. I have strong doubts it will work very well, but we can always incorporate ethics into our design and engineering work.
2 ) I believe in AI Professionals’ Ethics rather than AI Ethics…humans cannot agree on a single ethics framework altogether; I doubt we can build AI that can do otherwise.
What are your thoughts on this?
Thanks for reading till the end! I hope this has sparked more thoughts in you! To continue supporting my work, consider sharing this and getting me a “book” or two over here. :)