The Ethics Codes of AI Companies: Fragmentation, and What Can We Do?
Updated: Feb 6, 2019
By Nancy Nemes - Founder Ms. AI
2018 was the year everyone was talking about ethics in Artificial Intelligence. A broad civic and public debate on the implications of AI spread across the globe. It is the right time to discuss, but how can we oversee all the different efforts? Who decides whether commitments are being fulfilled? And what is ethics, after all?
AI’s close connection with ethics, and hence philosophy, is driven by AI’s core elements: intelligence, action, epistemology, consciousness, even free will. And because this technology is creating "artificial creatures" (according to Wikipedia, artificial animals or artificial people), it needs to be closely researched by philosophers, linguists, cognitive scientists, and neuroscientists, along with AI technologists.
First, let’s take a look at some of the AI Principles published so far.
Microsoft identified 6 ethical principles for their AI:
Fairness: AI systems should treat all people fairly.
Inclusiveness: AI systems should empower everyone and engage people.
Reliability & Safety: AI systems should perform reliably and safely.
Transparency: AI systems should be understandable.
Privacy & Security: AI systems should be secure and respect privacy.
Accountability: AI systems should have algorithmic accountability.
I like that Microsoft puts People at the center of everything they do.
Google published their 7 AI Principles in June 2018. They believe that AI should:
Be socially beneficial.
Avoid creating or reinforcing unfair bias.
Be built and tested for safety.
Be accountable to people.
Incorporate privacy design principles.
Uphold high standards of scientific excellence.
Be made available for uses that accord with these principles.
I love #4, and I wonder about what seems to be a pleonasm in #2, “unfair bias” - can bias also be fair? It depends on how you define fairness. More on algorithmic bias in this excellent article by Cody Marie Wild.
Salesforce came up with 5 Principles on their AI For Good:
Being of benefit: AI technologies should benefit, empower, and create shared prosperity for as many people as possible.
Human value alignment: AI systems should be designed so that their goals and behaviors align with human values. Specifically, they should be designed and operated to remain compatible with human ideals like dignity, rights, freedoms, and cultural diversity.
Open debate between science and policy: There should be constructive and healthy exchange between AI researchers and policymakers.
Cooperation, trust and transparency in systems and among the AI community: Researchers and developers of AI should cooperate for the benefit of all. If an AI system causes harm, it should be possible to ascertain why.
Safety and Responsibility: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.
Salesforce is going a bit deeper, and in December 2018, they hired Paula Goldman to lead their new Office of Ethical and Humane Use. This office will focus on developing strategies to use technology in an ethical and humane way at Salesforce.
Facebook reportedly has a “special” team looking at the ethics of AI to ensure its algorithms are fair and benefit everyone; however, no plans for ethics boards or guidelines on AI ethics are currently public knowledge.
The Future of Life Institute, a charity working to ensure that tomorrow’s most powerful technologies are beneficial for humanity, has created these 7 principles:
Validation: ensuring that the right system specification is provided for the core of the agent given stakeholders’ goals for the system.
Security: applying cybersecurity paradigms and techniques to AI-specific challenges.
Control: structural methods for operators to maintain control over advanced agents.
Foundations: foundational mathematical or philosophical problems that have bearing on multiple facets of safety.
Verification: techniques that help prove a system was implemented correctly given a formal specification.
Ethics: effort to understand what we ought to do and what counts as moral or good.
Governance: the norms and values held by society, which are structured through various formal and informal processes of decision-making to ensure accountability, stability, broad participation, and the rule of law.
MIT has its own AI Ethics Reading Group, which meets regularly to examine the ethics, morals, and explainability of these systems.
The European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) is currently drafting AI Ethics Guidelines, with a final version due in March 2019.
Frameworks are being stacked on frameworks, creating a perplexing tangle.
All of this work is great, but it is also hard to follow (so many sources), unevenly distributed (currently centered primarily in the USA, with Europe so far just a follower), and often presents hypothetical use cases rather than a granular view of HOW TO EXECUTE on that vision in real life.
So how do we see beyond the hype? And what is ethics, after all: is it subjective or objective, relative or absolute?
According to a study by the International Academy of Consciousness, what we need is Cosmoethics. That is, an analysis from a much broader perspective, taking into account every culture, world, or dimension, as well as extra-physical beings and even subtle energies. Because it is so panoptic, and because it asks that you do (think/feel/act) in such a way that it can become an example to all consciousnesses, multidimensionally, it is an ethics of the Cosmos.
Not an easy thing, you see. And to add to the complexity, recent studies have questioned the effectiveness of the above codes of ethics, for example because, so far, software engineers have not changed their behavior based on awareness of such codes. Professor Lucy Suchman, a foundational thinker in Human Computer Interaction who has made fundamental contributions to ethnographic analysis, conversation analysis, and Participatory Design techniques for the development of interactive computer systems, said: “while ethics codes are a good start, they lack real democratic or public accountability”.
What can we do? How do we find our level of Cosmoethics and quality of intention?
Much more is needed in order to develop AI that is truly democratic and accountable to people. First of all, we need open and transparent public dialogue, in every country, on what people want from a given form of AI.
We need to win hearts and minds. We need to drive cultural changes that embrace AI for Good, and decisively and immediately reject any harm in it.
We need collaboration across disciplines and borders: cross-pollination between the humanities and technology. To cite Nicolas Gülzow in his excellent article "Why degrees of cross-pollination determine the speed of innovation": "linear growth models for technology are useless".
We then need globally independent bodies (yes, also in China and Russia!) that establish synchronized, universally acceptable principles and create strong review processes to drive greater accountability, ensuring ethical commitments are honored by everyone (yes, also by China, Russia and Mr. Trump).
AI For All also means that AI for the military should be properly regulated, just like the pharmaceutical industry (more about this in a future blog post).
Rather than dispersed statements, we need synchronicity and independent oversight. The global exchange we are able to have today should make this possible.
Reading and teaching in school about thinkers like Confucius, Lao-tse, Socrates and Plato (along with STEM, of course) will help grow lucid and self-aware generations that are able to span disciplines and positively shape their own evolution.
With Ms. AI, our intent is to connect our minds and hearts to drive inclusiveness, diversity and enthusiasm in AI, all with a lucid dose of Cosmoethics :-)
Looking forward to an exciting 2019!