April 6, 2023 | By Jennifer Kirkwood

Under New York City Local Law 144, enacted in December 2021, organizations that source, screen, interview, hire, or promote individuals in New York City must conduct yearly bias audits of their automated employment decision tools.

This new regulation applies to any “automated employment decision tool”: any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence, whether homegrown or supplied by a third party. Organizations must also publish information on their websites about how these tools govern their selection and interview processes, demonstrating how the tools support fairness and transparency and mitigate bias. This requirement aims to increase transparency in organizations’ use of AI and automation in hiring and to help candidates understand how they are evaluated.

As a result of these new regulations, global organizations with operations in New York City may be pausing the rollout of new HR tools, as their CIO or CDO must soon audit any tool that affects hiring in New York.

To address compliance concerns, organizations worldwide should implement bias audit processes so they can continue leveraging the benefits of these technologies. Such an audit offers the chance to evaluate the entire candidate-to-employee lifecycle, covering all relevant personas, tools, data, and decision points. Even simple tools that recruiters use to review new candidates can be improved by incorporating bias mitigation into the AI lifecycle.
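One concrete metric a Local Law 144-style bias audit reports is the impact ratio: each demographic category's selection rate divided by the selection rate of the most-selected category. The sketch below is a minimal illustration of that calculation; the category names and outcomes are hypothetical, and a real audit would use an organization's actual applicant data and the categories the law specifies.

```python
def impact_ratios(outcomes):
    """Compute per-category impact ratios from (category, selected) pairs.

    outcomes: iterable of (category, was_selected) tuples, where
    was_selected is True if the tool advanced/selected the candidate.
    Returns {category: selection_rate / highest_selection_rate}.
    """
    totals, selected = {}, {}
    for category, was_selected in outcomes:
        totals[category] = totals.get(category, 0) + 1
        selected[category] = selected.get(category, 0) + int(was_selected)

    # Selection rate per category, then normalize by the best rate.
    rates = {c: selected[c] / totals[c] for c in totals}
    best = max(rates.values())
    return {c: rates[c] / best for c in rates}

# Hypothetical screening outcomes for two candidate groups.
candidates = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
print(impact_ratios(candidates))
```

Here group_a is selected at a 75% rate and group_b at 25%, giving group_b an impact ratio of about 0.33; a low ratio like this is the kind of disparity an audit would flag for review.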


AI regulations are here to stay

Other states are taking steps to address potential discrimination arising from AI and employment technology automation. For example, California is working to remove facial analysis technology from the hiring process, and Illinois has recently strengthened its facial recognition laws. Washington, D.C., and other jurisdictions are also proposing algorithmic HR regulations. In addition, countries like Canada, China, Brazil, and Greece have implemented data privacy laws.

These regulations have arisen in part from guidelines issued by the US Equal Employment Opportunity Commission (EEOC) on AI and automation, and from data retention laws in California. Organizations should begin auditing their HR and talent systems, processes, vendors, and third-party and homegrown applications to mitigate bias and promote fairness and transparency in hiring. This proactive approach can help reduce the risk of brand damage and demonstrates a commitment to ethical and unbiased hiring practices.

Bias can cost your organization

In today’s world, where human and workers’ rights are critical, mitigating bias and discrimination is paramount.

Executives understand that a brand-disrupting hit resulting from discrimination claims can have severe consequences, including the loss of their own positions. HR departments and thought leaders emphasize that people want to feel a sense of diversity and belonging in their daily work. According to the 2022 Gallup poll on engagement, the top attraction and retention factor for employees and candidates is psychological safety and wellness.

Organizations must strive for a working environment that promotes diversity of thought, leading to success and competitive differentiation. Therefore, compliance with regulations is not only about avoiding fines but is also about demonstrating a commitment to fair and equitable hiring practices and creating a workplace that fosters belonging.

The time to audit is now – and AI governance can help

All organizations must monitor whether they use HR systems responsibly and take proactive steps to mitigate potential discrimination. This includes conducting audits of HR systems and processes to identify and address areas where bias may exist.

While fines can be managed, the damage to a company’s brand reputation can be a challenge to repair and may impact its ability to attract and retain customers and employees.

CIOs, CDOs, Chief Risk Officers, and Chief Compliance Officers should take the lead in these efforts and monitor whether their organizations comply with all relevant regulations and ethical standards. By doing so, they can build a culture of trust, diversity, and inclusion that benefits both their employees and the business as a whole.

A holistic approach to AI governance can help. Organizations that stay proactive and infuse governance into their AI initiatives from the onset can help minimize risk while strengthening their ability to address ethical principles and regulations.

