
The Ethical Implications of AI: How Machines are Infringing on Human Rights

  • Nikita Ambwani
  • May 6, 2023
  • 8 min read

Guest Author: Nikita Ambwani, reachable on LinkedIn. The views and opinions in the article are the author's own.


In recent years, advances in Artificial Intelligence (AI) have been nothing short of groundbreaking. From self-driving cars to facial recognition technology, machines are now capable of performing tasks that were once exclusively the domain of human beings. But as we continue to push the boundaries of what AI can do, it is becoming increasingly clear that these technologies carry ethical implications, at times challenging fundamental human rights, that cannot be ignored.


“To deny people their human rights is to challenge their very humanity.” - Nelson Mandela

The Scope and Extent of Human Rights Violations Caused by AI


Imagine a world where every step you take and every move you make is watched and analyzed by machines. A world where your face, voice, behavior, and online activities are all scrutinized by sophisticated algorithms that aim to predict what you are going to do next. While some see AI surveillance technologies as highly effective tools for crime prevention and public safety, others warn that they pose serious risks to our privacy, civil liberties, and even human rights. Some fundamental issues concerning human rights and AI are analyzed below -


Issues related to the right to privacy - To defend their right to privacy, every person should be able to learn what personal information about them is kept in automated databases and for what purposes. It also means finding out who controls such data, whether public authorities or commercial organisations. AI puts the protection of this right in jeopardy because data is frequently collected without a person's consent, and because AI's decision-making process is opaque, people lose control over their private information. A telling example: during the COVID-19 pandemic, video conferencing software came into widespread use around the world, with Zoom becoming one of the most popular options. According to a report from April 2020, passwords and email addresses from 530,000 Zoom accounts were traded on the dark web, and a great many such accounts remained available for purchase. Hackers stole the data using credential stuffing. This technique entails making a large number of automated login attempts, using huge lists of credentials obtained from prior data breaches on other websites, in order to access user accounts without authorization. Algorithms assist the perpetrators in carrying out the cyberattack. This well-known incident highlights how the right to privacy is currently at risk, and it is only one of many instances of this right being violated. Legal experts and the organisations charged with enforcing data protection laws believe that, beyond compromising other rights, AI presents significant data security and privacy challenges. These include surveillance, uninformed consent, and violations of an individual's data protection rights (such as the right to access personal data, the right to stop processing that may cause harm or distress, the right not to be subject to a decision based solely on automated processing, etc.).
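
To make the mechanics concrete, here is a minimal Python sketch of one common defense against credential stuffing: checking whether a password has already appeared in known breach corpora via the Have I Been Pwned "Pwned Passwords" range API. The API endpoint is real; the surrounding signup logic is an illustrative assumption, and by design only the first five characters of the password's SHA-1 hash ever leave the client.

```python
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    """Count how often a password appears in known breach corpora,
    using the Pwned Passwords k-anonymity range API: only the first
    five hex characters of the SHA-1 hash are sent over the network."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "HASH_SUFFIX:COUNT"; match our suffix locally.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # A deliberately weak example; a real signup flow would warn or
    # reject whenever the returned count is non-zero.
    print(times_pwned("password123"))
```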


Issues related to the right to life, liberty and security - The expanding use of AI in the criminal justice system threatens the right to be free from arbitrary restrictions on personal liberty. One instance is the use of recidivism risk-scoring software in certain jurisdictions to guide detention decisions throughout the criminal justice process, from setting bail to determining sentences. Defendants have been wrongly classified as high risk by these algorithms, receiving stricter bail conditions, pretrial detention, and longer prison sentences as a result. Detention decisions based on risk-scoring algorithms may also be unlawful or arbitrary, since the algorithms employ inputs that are neither required nor regulated by law. Risk-assessment software is meant to serve only as a tool to help judges decide how long to sentence someone. However, by assigning a defendant a measure of future guilt based on their likelihood of reoffending, such tools may undermine the presumption of innocence necessary for a fair trial. By incorporating preexisting police bias into its algorithms, predictive policing software also runs the risk of incorrectly attributing culpability. Judges reportedly have little insight into how these risk-scoring systems operate, yet many place great trust in the outcomes because they believe the software to be objective. This raises the question of whether legal judgments based on such software can genuinely be regarded as fair.
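
The disparity described above can be audited with very little machinery. The sketch below, on entirely hypothetical records and an illustrative cutoff, compares false positive rates (non-reoffenders flagged as high risk) across two demographic groups, the kind of disparity check that journalists famously applied to the COMPAS tool.

```python
# Hypothetical audit records: (group, risk_score, reoffended).
records = [
    ("A", 8, False), ("A", 7, False), ("A", 9, True),  ("A", 3, False),
    ("A", 6, False), ("B", 8, True),  ("B", 4, False), ("B", 2, False),
    ("B", 9, True),  ("B", 5, False),
]
HIGH_RISK = 7  # illustrative cutoff used by the hypothetical tool

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in `group` wrongly flagged high risk."""
    flagged = innocent = 0
    for g, score, reoffended in records:
        if g == group and not reoffended:
            innocent += 1
            flagged += score >= HIGH_RISK
    return flagged / innocent if innocent else 0.0

for group in ("A", "B"):
    print(f"group {group}: FPR = {false_positive_rate(group):.2f}")
# Unequal FPRs mean one group bears more wrongful pretrial detention.
```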


Issues related to the right to freedom of speech and expression - Government pressure on businesses to tackle alleged terrorist content, defamatory remarks, and "fake news", without supplying clear criteria or definitions, has increased the use of automated systems. Because AI is not error-free and corporations are compelled to remove undesirable content from their platforms rapidly, a great deal of legitimate information is accidentally erased. Following complaints, YouTube removed more than 100,000 videos documenting the atrocities taking place in Syria. YouTube's policy offers an exception for violent content with substantial documentary or educational value, and such clips are often the only records of horrific crimes and human rights violations. Nevertheless, they were destroyed. Under a recent German law, social media networks must remove a range of content within 24 hours of receiving a complaint (or up to 7 days in less clear-cut cases). The loss of privacy amplified by AI surveillance systems also leads people to refrain from expressing themselves freely, with considerably negative consequences for freedom of expression. Facial recognition is a telling illustration: used to identify protesters in public spaces, it could significantly discourage assembly. Because many individuals rely on the protection that anonymity offers to congregate in public and voice their opinions, introducing such a system in nations that restrict free assembly would effectively prevent the enjoyment of this right.
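
The over-removal dynamic has a simple mechanical explanation: under tight statutory deadlines, platforms lower the confidence threshold at which a classifier auto-removes content, and wrongful removals rise with it. The toy Python sketch below, on made-up classifier scores, shows that tradeoff.

```python
# Hypothetical (score, is_actually_violating) pairs from a content classifier.
posts = [
    (0.95, True), (0.88, True), (0.72, False), (0.65, True),
    (0.60, False), (0.55, False), (0.40, False), (0.30, False),
]

def removal_stats(threshold: float) -> tuple[int, int]:
    """Return (violations removed, lawful posts wrongly removed)."""
    caught = sum(1 for s, bad in posts if s >= threshold and bad)
    wrongly = sum(1 for s, bad in posts if s >= threshold and not bad)
    return caught, wrongly

# A cautious threshold misses violations; a deadline-driven low threshold
# sweeps up lawful speech (e.g., war-crime documentation) along with it.
for t in (0.9, 0.6):
    caught, wrongly = removal_stats(t)
    print(f"threshold {t}: removed {caught} violations, {wrongly} lawful posts")
```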


Ethical Issues Connected to the Development of AI


There are a number of ethical matters connected to the development of artificial intelligence (AI). Some of these are specific to AI, while others are more general ethical issues that arise with any technological development. First, there is the question of what goals we should program AI systems to pursue. Do we want AI systems to benefit all of humanity, or to further the interests of a particular group? There is also the question of whether AI systems should be designed to be autonomous or kept subject to human control. Second, there are questions about how AI systems will affect the distribution of power and resources in society. Will they lead to increased inequality, or will they help everyone share in the benefits? Third, there are apprehensions about the safety of artificial intelligence systems. As AI systems gain control over more consequential decisions, there is a risk that they could cause harm, whether through malice or negligence. Fourth, there are questions about privacy and data protection. As AI systems gain access to ever more personal data, there is a risk that this could be used to unfairly manipulate or exploit individuals.


Some of the key ethical issues that AI poses to humans include:

  • Bias and discrimination: AI systems can reflect and reinforce biases that exist in society, leading to unfair or discriminatory outcomes. This is particularly problematic when AI is employed in decisions that significantly influence people's lives, such as employment, credit scoring, or criminal justice.

  • Surveillance and privacy: AI systems collect, store, and analyze vast amounts of personal data, endangering confidentiality and exposing individuals to covert surveillance. This is particularly true when AI is used in areas such as facial recognition, voice recognition, or the tracking of individuals' online activity.

  • Accountability and transparency: AI systems tend to be opaque and difficult to interpret, which makes it hard to hold developers and users accountable for their decisions. The need of the hour is greater transparency and accountability in the development and use of AI systems, along with mechanisms for redress when things go wrong; one minimal building block for such a mechanism is sketched after this list.

  • Safety and security: AI systems can pose risks to safety and security, particularly when they are used in critical infrastructure or in contexts where they can be hacked or manipulated. Safety and security must therefore be designed into AI systems from the outset.
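
As flagged in the accountability bullet above, one minimal, hypothetical building block for redress is an append-only decision log: every automated decision is recorded with enough context (inputs, model version, an explanation) for a human reviewer to reconstruct and contest it later. The Python sketch below illustrates the idea; the field names and the credit-scoring example are assumptions, not any particular system's schema.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    """One auditable record per automated decision: enough context for
    a reviewer to reconstruct and contest the outcome later."""
    decision_id: str
    model_version: str
    inputs: dict       # the features the system actually saw
    outcome: str
    top_factors: list  # whatever explanation the model can surface
    timestamp: float

def log_decision(model_version, inputs, outcome, top_factors,
                 sink="decision_log.jsonl"):
    record = DecisionRecord(
        decision_id=str(uuid.uuid4()),
        model_version=model_version,
        inputs=inputs,
        outcome=outcome,
        top_factors=top_factors,
        timestamp=time.time(),
    )
    with open(sink, "a") as f:  # append-only audit trail
        f.write(json.dumps(asdict(record)) + "\n")
    return record.decision_id

# Hypothetical credit-scoring decision being logged for later review.
log_decision("credit-risk-v2.3", {"income": 42000, "tenure_months": 14},
             outcome="declined", top_factors=["tenure_months"])
```

Keeping the log append-only is the point of the design: a record that can be silently rewritten after a complaint offers no real basis for accountability or redress.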

There are also broader ethical concerns about the role AI plays in society and its impact on our values and way of life. As AI becomes increasingly pervasive in our lives, it is important to consider how it will shape our future and what kind of world we want to live in.


Constructive Solutions for Better Protection of Human Rights

  • It is crucial that strategies take a balanced approach: overcoming the plausible disadvantageous consequences of AI use while leaving room, and a structure, for dealing with issues that cannot be anticipated, given the diversity of contexts in which AI is used. The Council of Europe's Commissioner for Human Rights has released a "10-point recommendation" on human rights and AI, highlighting major areas where action can thwart the adverse effects of AI on human rights. Some of them are:

  • Regulatory authorities should provide a legal framework requiring public authorities to conduct "human rights impact assessments (HRIAs)" when acquiring, developing, and/or deploying artificial intelligence (AI) systems. As part of the HRIA legal framework, public authorities ought to be obligated to perform a self-assessment of current and prospective AI systems. This self-evaluation should examine the potential effects of the AI system on human rights, taking into account the system's nature, context, reach, and objective. To identify, measure, and/or map the effects on human rights and the dangers over time, HRIAs must also include a meaningful external assessment of AI systems, whether by a third-party researcher or auditor with the necessary expertise or by a third-party oversight agency.

  • Any business or public authority employing an artificial intelligence (AI) system in decision-making that significantly affects a person's human rights must disclose that use. Beyond disclosure in plain and understandable terms, people must be able to comprehend how decisions are reached and how they have been verified. Transparency criteria must also make it possible to monitor every aspect of an AI system. This can take the form of an independent, thorough, and effective audit, or the public disclosure of information about the system in question, its procedures, its immediate and long-term impact on individuals' rights, and the steps taken to detect and mitigate those impacts. No AI system should be so complex that it cannot be reviewed and scrutinized by humans, and it is not advisable to adopt technologies that cannot be held to proper standards of accountability and openness.

  • The processing of data in connection with AI systems must be proportionate to the legitimate purpose it serves, and it must always strike a fair balance between the rights and freedoms at stake and the interests served by the development and use of the AI system. Regulatory authorities should establish a legal framework that offers adequate safeguards where AI systems process personal data concerning crimes, criminal investigations and charges, and related security measures; biometric data; and personal data revealing "racial" or ethnic origin, political opinions, trade union membership, health, or sexual orientation. These safeguards must also guard against the use of such data for discriminatory or biased purposes.

  • Other organizations and their teams of experts have also offered numerous recommendations to counter the detrimental effects of AI that leave human rights in a vulnerable position. These include diverse suggestions for governmental authorities and private-sector entities: human oversight before AI decisions are finalized, comprehensive data protection laws, compliance with the "UN Guiding Principles on Business and Human Rights," stronger incentives for accountability, and the promotion of AI-focused research, to name only some of the many proposals.

Conclusion


It is clear that AI technology has the potential to fundamentally change our daily lives and even undermine basic human rights. While this may seem overwhelming, there are actions we can take now to ensure that our future with AI is equitable across demographics. We must work together as a global society to develop standards for the ethical implementation of AI and prioritize transparency, so that all stakeholders understand how decisions, made by algorithms or not, affect those around them. Ensuring fairness throughout the whole process will be key to safely reaping the benefits of artificial intelligence while upholding fundamental human rights.
