April 4, 2023 By Jennifer Kirkwood 3 min read

What C-level executives should know about Algorithmic HR

Many organizations use AI and automation across HR practices, including internal sourcing, screening, hiring, promotion, and pay. As these technologies become widely adopted, organizations should monitor whether they could be perpetuating bias and discrimination. New laws and regulations, from EEOC guidance to New York City Local Law 144 (NYC 144), have been introduced to address this concern and promote ethical use of AI in HR.

NYC 144 was passed in December 2021 and takes effect in 2023. The law requires that a bias audit be conducted on any automated employment decision tool before the tool is used. Failure to comply can result in civil penalties of up to USD 500 for a first violation and for each additional violation occurring on the same day as the first, and between USD 500 and USD 1,500 for each subsequent violation. Each day an unaudited tool is used counts as a separate violation.
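At the core of a NYC 144 bias audit is the impact ratio: the selection rate for each demographic category divided by the selection rate of the most-selected category. The sketch below illustrates that calculation; the category names and counts are hypothetical, and a real audit must follow the categories and methodology defined in the law's rules.

```python
# Illustrative impact-ratio calculation for a bias audit.
# Data is hypothetical: category -> (candidates selected, total candidates).

def impact_ratios(outcomes):
    """Map each category to its selection rate divided by the
    highest selection rate across all categories."""
    rates = {cat: sel / tot for cat, (sel, tot) in outcomes.items()}
    top_rate = max(rates.values())
    return {cat: rate / top_rate for cat, rate in rates.items()}

outcomes = {
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% selection rate
}

for category, ratio in impact_ratios(outcomes).items():
    print(f"{category}: impact ratio {ratio:.2f}")
```

A ratio well below 1.0 for a category (under the traditional four-fifths rule of thumb, below 0.80) is a signal that the tool's outcomes warrant closer review.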

Beyond the legal consequences, unethical and biased hiring practices can damage a company’s reputation, limiting its ability to attract customers and talent and to foster shareholder trust. Companies should therefore monitor whether their AI and automation processes could be perpetuating bias and discrimination. The HR industry must understand bias and safeguard the rights of protected classes in the United States. Although automation and AI have been used for processes like resume parsing for over 15 years, the machine learning models and automated processes behind them should be audited regularly to ensure ethical HR practices.

Learn how to develop an AI governance framework

Mishandled data can lead to discrimination

Embedded automation, natural language processing and AI technologies in the hiring process can negatively impact certain candidates by eliminating or highlighting specific candidate attributes in discriminatory ways. It can be difficult for organizations to identify when automation or AI is in use because these technologies are often deeply ingrained in the hiring process. Organizations should therefore examine the technologies embedded in their applications and processes to determine whether they are collecting data about gender, race, disability, ethnicity or other personal information that could be used to discriminate against qualified applicants.
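One practical starting point for that examination is an inventory of the fields an application actually collects. The sketch below flags field names that suggest protected-class data; the field names and the list of sensitive terms are illustrative assumptions, not a legal standard, and a real review would also cover data that is inferred rather than collected directly.

```python
# Hypothetical check: flag application-form fields whose names suggest
# they capture protected-attribute data. Terms and fields are examples only.

SENSITIVE_TERMS = {"gender", "race", "ethnicity", "disability",
                   "age", "religion", "marital_status"}

def flag_sensitive_fields(field_names):
    """Return the fields that appear to collect protected-class data."""
    flagged = []
    for name in field_names:
        lowered = name.lower()
        if any(term in lowered for term in SENSITIVE_TERMS):
            flagged.append(name)
    return flagged

form_fields = ["full_name", "email", "gender_identity",
               "years_experience", "race_ethnicity"]
print(flag_sensitive_fields(form_fields))
```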

To drive fair and ethical hiring practices, organizations should know what information their applications collect, how that data is used, and how it is secured. Guidance from the US Equal Employment Opportunity Commission (EEOC) and the European Union’s upcoming AI Act can serve as valuable references for organizations as they proceed with compliance efforts.

Mishandling data in the hiring process carries real risk. With regulations like NYC 144 in force, organizations should use these technologies with caution and transparency, because errors or lack of oversight can prove costly. Organizations including Walmart, CarMax, Capital One and KPMG have paid significant penalties for discriminatory hiring practices. Such mistakes can result in fines, considerable brand damage, and loss of customers, employees and promising candidates due to the perception of unethical hiring practices.

To help avoid these risks, organizations should begin auditing their hiring processes for bias, prioritizing diversity and equity and maintaining compliance with regulations like NYC 144. By being transparent about their hiring processes and dedicating resources to bias mitigation, companies can build trust with employees, customers and the public while also improving the quality of their hires.

A need for a united C-suite compliance strategy

CHROs should partner with their CXO counterparts to navigate the compliance challenges created by these complex new regulations. Responsibility for compliance overlaps among the CDO, CIO, CPO and CHRO, and each executive can apply their expertise to support it.

The focus areas of CIOs, CPOs and CDOs, such as data security, data privacy, and governance tools and frameworks, provide a strong foundation for CHROs to begin auditing their processes. As domain experts, CHROs are well positioned to implement governance over these processes, address compliance with regulations, educate teams and users on the responsible use of these technologies, and promote fairness, transparency and equal opportunity.

CHROs can also give attention to candidate and employee experiences and to other current and future projects that require inspection against ethical HR standards. CIOs, CDOs, CPOs and CHROs should all examine how automation and AI are used in HR workflows and monitor whether the technology is used responsibly, to avoid the potential costs of fines, brand damage, and loss of trust and talent.

Some example best practices to keep in mind:

  • Execute regular audits to explain how all hiring, promotion and pay decisions are conducted, threading through the entire candidate-to-employee lifecycle
  • Educate HR stakeholders and embed technical and ethical AI resources aligned with the CIO and CDO
  • Be transparent and publish standards so candidates and employees are aware of how their data is being used and stored
  • Vet employment technology with people who understand technical and employment privacy requirements
  • Embed ethical AI practices in ESG and diversity, equity and inclusion strategies
  • Work with key stakeholders on a holistic AI governance framework to establish or refine processes for directing, managing and monitoring your organization’s AI activities 
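The first best practice above, regular audits that can explain how decisions are made, depends on recording those decisions as they happen. The sketch below shows one minimal way to log an automated decision for later review; the tool name and record fields are hypothetical, and a real governance framework would define exactly what must be captured and retained.

```python
# Minimal sketch of audit logging for automated employment decisions.
# Field names and the tool name are illustrative assumptions.

import datetime

def log_decision(log, tool_name, candidate_id, decision, inputs):
    """Append an auditable record of one automated decision."""
    log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool_name,
        "candidate_id": candidate_id,
        "decision": decision,
        "inputs": inputs,  # the features the tool saw, kept for later review
    })

audit_log = []
log_decision(audit_log, "resume_screener_v2", "cand-001", "advance",
             {"years_experience": 6, "skills_match": 0.82})
print(audit_log[0]["decision"])
```

Keeping the inputs alongside each decision is what makes later impact-ratio analysis and bias audits possible without reconstructing history from scratch.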
Learn more about an HR/Talent strategy with trustworthy AI