
Why are the Humanities interested in the direction in which Artificial Intelligence is moving?

By guest author: Marilena Velonia-Bellonia, Microsoft, advocate for Responsible AI and member at #humanAIze


Over recent years, Artificial Intelligence solutions have blossomed across diverse industries and environments, in both research and the market. One of the most important factors behind this has been the availability of the necessary resources: the Cloud has offered both the processing power and the scalability required for AI solutions to have the immense impact we observe today.


AI technologies have long since escaped bounded use and now serve an extensive range of use cases, many of which have a direct impact on people. The availability of low-code and no-code tools with built-in AI capabilities allows people to take advantage of the power of AI, and this has undoubtedly been a huge accelerator of progress for society. At the same time, however, we observe no corresponding effort to foresee and assess the impact of these systems.



As a result, the Humanities have shown a rising interest in the progress and direction of AI. We are still in the process of recognizing the many considerations that emerge once AI systems are used in real market environments. Concerns about individuals' privacy are perhaps the most widely discussed, since related legislation has already drawn attention in this direction. But concerns about the use of individuals' data at a societal level are also growing. Many have pointed to social polarization driven by targeted content in social networks, which is believed ultimately to produce separate groups within which individuals reinforce each other's opinions, while the groups remain completely incomprehensible to one another.


If we compare our lives with those of just a few years ago, we can probably tell the difference in our dependence on AI systems. AI has taken various forms (chatbots, voice assistants, smart home software, androids and special implants to aid our intellectual functioning) and has served as a helper in many of our tasks. Opinions certainly differ, from people who think that our 'reliance' on AI systems will shrink human capabilities in the long term, to others who argue that the use of AI systems in everyday life merely augments our capabilities.


Since AI is increasingly used in both easy and difficult tasks, another important concern is reliability: a system's technical robustness. This issue points towards ensuring a sufficient quantity and quality of testing prior to deployment in real environments. But what happens when the available testing data or environment is not representative enough of the real world? And how can 'representative enough' be measured?
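There is no single agreed measure of representativeness, but one crude proxy is coverage: how much of what a system encounters in production was also present in its test data. The sketch below is a hypothetical illustration, not a standard metric; the feature, values and threshold are all assumptions made for the example.

```python
from collections import Counter

def coverage(test_values, production_values):
    """Fraction of production observations whose category also
    appeared somewhere in the test data."""
    seen = set(test_values)
    hits = sum(1 for v in production_values if v in seen)
    return hits / len(production_values)

# Hypothetical categorical feature, e.g. a user's region.
test_data = ["EU", "US", "EU", "US"]
production_data = ["EU", "US", "APAC", "EU", "LATAM", "US"]

# Two regions seen in production never appeared during testing,
# so a third of real traffic falls outside the tested space.
print(Counter(production_data))
print(round(coverage(test_data, production_data), 2))  # 0.67
```

Real assessments would go further, comparing full feature distributions rather than mere presence, but even this simple check makes the gap between a test environment and the real world measurable rather than rhetorical.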


Further to the above, bias and fairness in AI algorithms have been a prominent concern in recent years. It is tempting to assume that putting computers and AI in the privileged position of making decisions 'solves' the problem of human bias, yet this appears not to be the case. Bias can also reside in a training dataset that reflects past human decisions and is then 'fed' to an AI system to learn from.
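The mechanism is mundane enough to show in a few lines. In this hypothetical sketch, past decisions favoured group "A" over group "B"; a naive model that simply learns the majority outcome per group faithfully reproduces the historical skew. The data and the 'model' are invented for illustration only.

```python
from collections import defaultdict

# Hypothetical historical decisions: (group, approved) pairs in
# which group "A" was approved far more often than group "B".
history = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

# A naive "model": count past outcomes per group...
counts = defaultdict(lambda: [0, 0])
for group, approved in history:
    counts[group][approved] += 1

# ...and predict whichever outcome was the majority.
def predict(group):
    rejections, approvals = counts[group]
    return 1 if approvals > rejections else 0

print(predict("A"), predict("B"))  # prints: 1 0
```

Nothing in the code mentions fairness or prejudice; the bias enters entirely through the training data, which is precisely why 'the computer decided' offers no guarantee of neutrality.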


All of the above raise questions of Ethics and require a solid approach, both from the people directly involved in the development, management and deployment of such systems and through the guidance of external experts. One obstacle is today's intensely specialized labour market: professionals hold deeper knowledge in ever narrower areas, and thus a relatively narrow view of end-to-end projects and solutions, which makes potential threats harder to perceive and foresee.


It is questionable whether the world's current pace leaves us the time needed to think about the goal we are aiming for. We should consider creating the appropriate environments in which to reflect on a system's potential outcomes and on whether each of them is something we truly aspire to.


This leads us to wonder what the market's response to all the above will be. Will we see the establishment of new roles focused on ethics governance? Will the market offer specialized guidance through impact assessments and customized ethics advice?




