While the topic of AI and discrimination has been under discussion for several years now, incidents continue to emerge involving AI systems and factors such as race, age, sex, ethnicity, religion, nationality, disability, culture, socio-economic status and geographical location, among others.

This blog is not intended to be a scientific study, but rather an opportunity to reflect on the responsibility of AI systems within a human rights framework. To this end, the reflections presented here are based on data and information drawn from various articles and studies that have examined this topic in recent years.

To better illustrate what incidents mean in the context of AI, diversity and inclusion, a good example is provided by a study conducted by researchers in the USA in 2023¹. The researchers asked two Large Language Models (LLMs) to prepare job recommendation letters for female and male candidates, and the study revealed significant gender bias in the content of those letters.

When referring to men, terms used included:

  • Natural leader
  • Role model
  • Expert
  • Integrity
  • Listeners
  • Thinkers
  • Respectful
  • Reputable
  • Authentic
  • Master

However, when referring to women, phrases employed included:

  • Participant
  • Well-liked member
  • Beauty
  • Delight
  • Grace
  • Stunning
  • Warm
  • Emotional

Behind these results lies a gender bias.
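The kind of skew the study reported can be made concrete with a small word-counting sketch. This is purely illustrative, not the researchers' actual method, and the two word lists are assumptions drawn from the examples above:

```python
# Illustrative sketch (not the study's actual code): counting how often
# agency-associated vs warmth-associated descriptors appear in a letter.
# The word lists are assumptions based on the terms quoted above.
from collections import Counter
import re

AGENCY_TERMS = {"leader", "expert", "integrity", "thinker", "reputable", "authentic"}
WARMTH_TERMS = {"warm", "delight", "grace", "stunning", "emotional", "beauty"}

def descriptor_counts(letter_text: str) -> dict:
    """Count agency vs warmth descriptors in one recommendation letter."""
    words = Counter(re.findall(r"[a-z]+", letter_text.lower()))
    agency = sum(words[w] for w in AGENCY_TERMS)
    warmth = sum(words[w] for w in WARMTH_TERMS)
    return {"agency": agency, "warmth": warmth}

# Two toy letters showing the kind of skew the study reported.
male_letter = "He is a natural leader and an expert with great integrity."
female_letter = "She is a warm, stunning delight and a well-liked member."
print(descriptor_counts(male_letter))    # {'agency': 3, 'warmth': 0}
print(descriptor_counts(female_letter))  # {'agency': 0, 'warmth': 3}
```

Run over many generated letters rather than two toy sentences, a tally like this is one simple way to surface a systematic imbalance in the language a model uses for each gender.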

Another example worth highlighting is the role of AI in healthcare, and the biases present in some AI models used in this sector. Studies have shown that health-related data is often limited to certain groups, excluding populations from some parts of the world. Likewise, it has been noted that AI researchers often fail to include professionals from marginalised and/or socioeconomically diverse backgrounds, leading to biased databases and outcomes².

Is AI exacerbating discrimination?

AI has become an essential tool in our daily professional work. Its use, without critical analysis and a human rights-based approach, can reinforce and exacerbate discrimination in all its forms and expressions, as demonstrated by the simple example of the recommendation letters.

This raises several important questions:

  • What happens with biases in AI systems?
  • Why does this happen?
  • At what stage does it occur?

AI systems require code (the instructions that power the software). The success and reliability of such machine intelligence depend on the quality of that code. It must be high quality in every sense, from the technical aspect to the standpoint of ethics, diversity and inclusion.

Computer code is mostly produced by data engineers, data scientists, and machine learning experts. Women are underrepresented in these positions and in leadership roles. Code quality is affected not only by the underrepresentation of women but also by the broader lack of gender diversity³.

Therefore, it is essential to build diverse teams that bring unique contributions, so that wide perspectives are included in the programming that makes up AI systems.

Tracking AI bias in action

Identifying incidents based on the different biases that persist in our society is an important step toward developing strategies to mitigate or eliminate them.

Databases where such incidents are reported and monitored help to identify patterns in failures and support better decisions based on real data. Documenting incidents caused by AI permits assessment of the risks and harms that AI systems may inflict on individuals, groups and communities. These evaluations generate essential information that should inform the design of public policies and the development of AI systems, making them a valuable tool to that end.
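A minimal sketch of what such incident tracking might look like in practice is shown below. The record fields and helper are assumptions chosen for illustration, not the schema of any actual incident database:

```python
# Minimal sketch of an AI-incident record and a pattern summary, loosely
# modelled on the kinds of fields public incident databases track.
# All field names here are illustrative assumptions, not a real schema.
from dataclasses import dataclass
from collections import Counter

@dataclass
class AIIncident:
    system: str          # the AI system involved
    harm_type: str       # e.g. "gender bias", "racial bias"
    affected_group: str  # individuals, groups or communities harmed
    description: str = ""

def pattern_summary(incidents: list[AIIncident]) -> Counter:
    """Aggregate reported incidents by harm type to surface recurring patterns."""
    return Counter(i.harm_type for i in incidents)

reports = [
    AIIncident("hiring-LLM", "gender bias", "female applicants"),
    AIIncident("triage-model", "racial bias", "underrepresented patients"),
    AIIncident("cv-screener", "gender bias", "female applicants"),
]
print(pattern_summary(reports).most_common(1))  # [('gender bias', 2)]
```

Even a structure this simple makes the point: once incidents are recorded consistently, recurring failure patterns become visible and can feed into policy and design decisions.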

Continuous monitoring of AI systems is essential. Diversity and inclusion, as well as Human Rights Principles, must be considered at every stage of the design, development, and deployment of smart technologies to mitigate biases and discrimination, and avoid adverse social impacts.

AI poses a new challenge in the field of human rights.

FICPI’s view

FICPI uniquely combines education and advocacy on topics around patents and trade marks, with a focus on developing the professional excellence of its individual members. FICPI Forums, Congress, committees and meetings are opportunities to gather insights from the international IP attorney community on any issue, whether it be practice-related or topics of patent and trade mark law. Our organisation strives for equality and diversity of representation and encourages members to do likewise. 

References:

¹ "Kelly is a Warm Person, Joseph is a Role Model: Gender Biases in LLM-Generated Reference Letters". Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng.

² "The imperative of diversity and equity for the adoption of responsible AI in healthcare". Denise E. Hilling, Imane Ihaddouchen, Stefan Buijsman, Reggie Townsend, Diederik Gommers, Michael E. van Genderen.

³ "An empirical study on the impact of gender diversity on code quality in AI systems". Shamse Tasnim Cynthia, Banani Roy.