
The Ethics of AI: Examining the Implications of NYC's AI Bias Audit Local Law 144

  • Nikita Ambwani
  • Apr 25, 2023
  • 6 min read

Updated: May 6, 2023

Guest Author: Nikita Ambwani, reachable on LinkedIn. The views and opinions are the author's own.


Introduction to New York AI Bias Audit Local Law 144


After the Department of Consumer and Worker Protection delayed the enforcement date of the New York City Bias Audit Law, or Local Law 144, to April 15, 2023, engagement with the law and its revised proposed rules has grown. Among the changes and clarifications made to the proposed rules are new, extended definitions and clarification of how human input figures in the decision-making process. Any AI-driven recruiting tool used in the city must now comply with the ordinance adopted by the New York City Council. This is a crucial step towards ensuring that algorithms do not obstruct the hiring of diverse workers or pose a barrier to equitable employment opportunities.


The law in New York City is among the first in the world to mandate algorithmic audits of technologies used by businesses to make hiring choices. The New York City Council originally presented the bill in February 2020.

Overview of New York AI Bias Audit Local Law 144


Local Law 144 mandates a bias audit of automated employment decision tools (AEDTs) before their actual use. To get a better understanding of what AEDTs are, the statute carefully defines the term, and the definition can be broken down for closer analysis. The law says that AEDTs are “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.” This aspect of the definition is thus all-encompassing and inclusive, and it also covers the circumstantial usage of AEDTs.


It is worth noting that the statute also sets out the processes that are not included under the definition: “a tool that does not automate, support, substantially assist or replace discretionary decision-making processes and that does not materially impact natural persons, including, but not limited to, a junk email filter, firewall, antivirus software, calculator, spreadsheet, database, data set, or other compilation of data.” This part of the definition helps give a clear picture of the law's scope: a recruiting agency that utilises artificial intelligence for purposes other than employment decisions does not fall within the ambit of the statute.


Further on, the statute defines a crucial term: the bias audit that recruiting firms must undergo, defined as “an impartial evaluation that is conducted by an independent auditor.” The statute does not prescribe the format of a bias audit or how bias should be recognised. Bias or adverse effects can be assessed using a variety of metrics, and the lack of adequate justification and uniformity about which metric should be applied, and when, could pose problems, especially for applicants attempting to understand what the numerous metrics represent.
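As a rough illustration of what such a metric might look like, consider the impact ratio described in the DCWP's proposed rules: each category's selection rate divided by the selection rate of the most selected category. The sketch below uses hypothetical category labels and data; it is one possible metric, not a prescribed audit procedure.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Compute selection rate (selected / evaluated) per category."""
    counts = defaultdict(lambda: [0, 0])  # category -> [selected, total]
    for category, selected in outcomes:
        counts[category][1] += 1
        if selected:
            counts[category][0] += 1
    return {c: sel / total for c, (sel, total) in counts.items()}

def impact_ratios(outcomes):
    """Impact ratio: each category's selection rate divided by the
    highest selection rate across all categories."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {c: rate / best for c, rate in rates.items()}

# Hypothetical screening outcomes: (category, was_selected)
outcomes = (
    [("A", True)] * 40 + [("A", False)] * 60 +  # 40% selection rate
    [("B", True)] * 20 + [("B", False)] * 80    # 20% selection rate
)
print(impact_ratios(outcomes))  # → {'A': 1.0, 'B': 0.5}
```

An impact ratio well below 1.0 for a category is one signal an auditor might flag, though the law itself leaves the choice and interpretation of metrics open.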


Principally, Local Law 144 sets out numerous requirements that must be complied with. A few of them are:

  1. The bias audit of the proposed AI tool must have been conducted no more than one year before its use, and a summary of the results of the audit must be made available on the website of the employer or employment agency.

  2. Every applicant or employee living in the city who is being assessed or evaluated for a job by an employer or employment agency must be informed about the use of an automated employment decision tool. This notification must be sent at least 10 business days in advance of the intended use, giving the applicant time to request an alternative selection procedure or accommodation.

  3. Candidates must be informed of the traits and job qualifications that the automated employment decision tool will consider when evaluating them, at least 10 business days before its use. Within 30 days of a written request, they must also be given information about the source of the data and the data retention policy, unless providing this information would violate local, state, or federal law.
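As a rough illustration (not legal advice), the 10-business-day notice window above can be computed by walking back over weekdays from the intended use date. This sketch ignores public holidays, and the function name is hypothetical:

```python
from datetime import date, timedelta

def latest_notice_date(use_date, business_days=10):
    """Walk back the required number of business days (Mon-Fri)
    from the intended use date; public holidays are ignored."""
    d = use_date
    remaining = business_days
    while remaining > 0:
        d -= timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return d

print(latest_notice_date(date(2023, 7, 5)))  # → 2023-06-21
```

In practice an employer would also need to account for holidays and whatever the final rules say about counting notice periods.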


To enforce compliance with the provisions of this law, a civil penalty is imposed on anyone who contravenes its requirements: not more than $500 for a first violation and for each additional violation occurring on the same day as the first, and not less than $500 nor more than $1,500 for each subsequent violation.


Implications of this Law on AI Ethics and Challenges Surrounding AI Bias


Because it aims to address any bias or discrimination that might appear in automated decision-making systems, the New York City AI law is pertinent to AI ethics. The legislation mandates that organisations using automated employment decision tools (AEDTs) notify job applicants and workers of their use, be transparent about it, and perform yearly bias evaluations of the technologies. Most concerns and complaints regarding the use of AI fall broadly into two sorts. First, the concern regarding bias and discrimination: the continued exclusion of certain categories of employees or candidates on the basis of sex, race, and other protected characteristics by AI procedures that are incorrectly thought to be objective. Decision makers and developers have an important role in making sure the AI model is fair. Explaining how bias is to be interpreted, and in what context, should be the foundation of the task. Anomalies between the model's predictions and the true values should be avoided, and fairness thereby ensured. For the duration of the model's deployment, it must be regularly supervised and upgraded so that it continues to provide fair results and its performance does not drift.


Second, the large amount of data collected by and easily available to these systems raises concerns around privacy, protection, and vulnerability. To develop solutions that are useful in the workplace, AI will increasingly rely on data produced by people, that is, employees. Employee monitoring and data collection are now much cheaper, more covert, and more extensive thanks to new technology. Although the legislation provides for greater transparency by requiring that candidates be made aware of the characteristics used in the AI system 10 days prior to its use, it does not explain how the algorithms process those characteristics. The algorithms of AI processes can be difficult to comprehend, given the technical nature of the bias metrics, and unfamiliarity with them can lead to confusion.


The legislation encourages the moral and responsible use of AI technology in the workplace by requiring businesses to be open and honest about how they use AEDTs and to proactively identify and reduce any possible bias in these systems. This is crucial because AI has a tendency to reproduce and perpetuate societal prejudices, which can result in unjust or discriminatory consequences. The law was created to help guarantee that AI is utilised in a way that respects people's rights and dignity and is compatible with ethical values. In general, legislators and regulators are becoming more aware of the need to address ethical challenges in AI and to encourage responsible and accountable use of this technology; the New York City AI law is an illustration of this.


Conclusion


AI systems face numerous multi-disciplinary ethical issues, and only through the collective conscience and efforts of social scientists, technologists, government authorities, and the general public can such issues be addressed and mitigated. Concerns of bias and discrimination, surveillance and privacy, accountability and transparency, and safety and security all bear on the New York City AI law. The law attempts to address the specific issue of bias and discrimination in the context of automated employment decision tools (AEDTs) by requiring employers to disclose and be transparent about their use of such tools to job applicants and employees. Moreover, annual bias audits of the tools are required. The law also seeks to strengthen accountability and transparency in the use of AEDTs by requiring employers to openly explain how the tools work, what data is used to make decisions, and how decisions are made. By fostering openness and accountability, the law intends to guarantee that AEDTs are utilised impartially and without bias. How to solve these challenges in the development and widespread use of AI systems is still being debated, and concerns of privacy and surveillance, accountability and transparency, and safety and security remain central to the larger topic of AI ethics.
