April 6, 2023 By Jennifer Kirkwood 3 min read

Organizations sourcing, screening, interviewing, hiring or promoting individuals in New York City are required to conduct yearly bias audits on automated employment decision-making tools as per New York City Local Law 144, which was enacted in December 2021.

This regulation applies to any “automated employment decision tool”: any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence, including homegrown and third-party programs. Organizations must also publish information on their websites about how these tools inform their selection and interview processes. Specifically, organizations must demonstrate how their AI tools support fairness and transparency and mitigate bias. This requirement aims to increase transparency in how organizations use AI and automation in hiring and to help candidates understand how they are evaluated.

Because of these new regulations, global organizations with operations in New York City may be pausing the rollout of new HR tools while their CIOs or CDOs audit the tools that affect hiring decisions in New York.

To address compliance concerns, organizations worldwide should implement bias audit processes so they can continue leveraging the benefits of these technologies. Such an audit offers the chance to evaluate the entire candidate-to-employee lifecycle, covering all relevant personas, tools, data, and decision points. Even simple tools that recruiters use to review new candidates can be improved by incorporating bias mitigation into the AI lifecycle.
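As an illustration of what the quantitative core of such an audit looks like, Local Law 144 bias audits center on selection rates and impact ratios for each demographic category (an impact ratio compares a category's selection rate to the highest category's rate, with ratios below 0.8 commonly flagged under the four-fifths rule of thumb). The following is a minimal sketch; the category names and counts are hypothetical example data, not prescribed by the law.

```python
# Illustrative sketch of an impact-ratio calculation, the core metric
# reported per demographic category in a Local Law 144 bias audit.
# Category names and counts below are hypothetical example data.

def impact_ratios(selected, applicants):
    """Return {category: (selection_rate, impact_ratio)}.

    selected / applicants: dicts mapping category -> count.
    impact ratio = category selection rate / highest selection rate.
    """
    rates = {c: selected[c] / applicants[c] for c in applicants}
    top = max(rates.values())
    return {c: (rates[c], rates[c] / top) for c in rates}

# Hypothetical screening results from an automated tool
applicants = {"group_a": 200, "group_b": 150}
selected = {"group_a": 80, "group_b": 45}

for cat, (rate, ratio) in impact_ratios(selected, applicants).items():
    flag = "  <- below 0.8 (four-fifths rule of thumb)" if ratio < 0.8 else ""
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```

In this hypothetical data, group_a's selection rate is 0.40 and group_b's is 0.30, giving group_b an impact ratio of 0.75, which a real audit would investigate further.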

Download the AI governance e-book

AI regulations are here to stay

Other states are taking steps to address potential discrimination in AI and employment technology automation. For example, California is working to remove facial analysis technology from the hiring process, and Illinois has recently strengthened its facial recognition laws. Washington, D.C., and several states are also proposing algorithmic HR regulations. In addition, countries such as Canada, China, Brazil, and Greece have implemented data privacy laws.

These regulations have arisen in part from US Equal Employment Opportunity Commission (EEOC) guidelines on AI and automation, and from data retention laws in California. Organizations should begin auditing their HR and talent systems, processes, vendors, and third-party and homegrown applications to mitigate bias and promote fairness and transparency in hiring. This proactive approach helps reduce the risk of brand damage and demonstrates a commitment to ethical and unbiased hiring practices.

Bias can cost your organization

In today’s world, where human and workers’ rights are critical, mitigating bias and discrimination is paramount.

Executives understand that a brand-disrupting hit resulting from discrimination claims can have severe consequences, including losing their positions. HR departments and thought leaders emphasize that people want to feel a sense of diversity and belonging in their daily work, and according to the 2022 Gallup poll on engagement, the top attraction and retention factor for employees and candidates is psychological safety and wellness.

Organizations must strive for a working environment that promotes diversity of thought, leading to success and competitive differentiation. Therefore, compliance with regulations is not only about avoiding fines but is also about demonstrating a commitment to fair and equitable hiring practices and creating a workplace that fosters belonging.

The time to audit is now – and AI governance can help

All organizations must ensure they use HR systems responsibly and take proactive steps to mitigate potential discrimination. This includes conducting audits of HR systems and processes to identify and address areas where bias may exist.

While fines can be managed, the damage to a company’s brand reputation can be a challenge to repair and may impact its ability to attract and retain customers and employees.

CIOs, CDOs, Chief Risk Officers, and Chief Compliance Officers should take the lead in these efforts and ensure their organizations comply with all relevant regulations and ethical standards. By doing so, they can build a culture of trust, diversity, and inclusion that benefits both their employees and the business as a whole.

A holistic approach to AI governance can help. Organizations that stay proactive and infuse governance into their AI initiatives from the onset can help minimize risk while strengthening their ability to address ethical principles and regulations.

Learn more about data strategy