January 25, 2024 By Jennifer Kirkwood 5 min read

Growing up, my father always said, “do good.” As a child, I thought it was cringeworthy grammar and I would correct him, insisting it should be “do well.” Even my children tease me when they hear his “do good” advice, and I’ll admit I give him a pass on the grammar front.

When it comes to responsible artificial intelligence (AI), organizations should make avoiding harm a central priority. Some organizations may also aim to use AI for “doing good.” But even that goal requires clear guardrails before the “good” can be taken for granted.

Read the “Presidio AI Framework” paper to learn how to address generative AI risks with guardrails across the expanded AI life cycle

As generative AI continues to go mainstream, organizations are excited about the potential to transform processes, reduce costs and increase business value. Business leaders are eager to redesign their business strategies to better serve customers, patients, employees, partners or citizens more efficiently and improve the overall experience. Generative AI is opening doors and creating new opportunities and risks for organizations globally, with human resources (HR) leadership playing a key role in managing these challenges.

Adapting to the implications of increased AI adoption could include complying with complex regulatory and framework requirements such as the NIST AI Risk Management Framework, the EU AI Act, New York City Local Law 144, US EEOC guidance and the White House Executive Order on AI, which directly impact HR and organizational policies, as well as social, job-skilling and collective bargaining labor agreements. Adopting responsible AI requires a multi-stakeholder strategy, as affirmed by top international resources including NIST, the OECD, the Responsible Artificial Intelligence Institute, the Data & Trust Alliance and IEEE.

This is not just an IT role; HR plays a key role

HR leaders now advise businesses about the skills required for today’s work as well as future skills, considering AI and other technologies. According to the World Economic Forum (WEF), employers estimate that 44% of workers’ skills will be disrupted in the next five years. HR professionals are increasingly exploring AI’s potential to improve productivity by augmenting the work of employees and empowering them to focus on higher-level work. As AI capabilities expand, there are ethical concerns and questions every business leader must consider so their AI use does not come at the expense of workers, partners or customers.

Learn the principles of trust and transparency recommended by IBM for organizations to responsibly integrate AI into their operations.

Worker education and knowledge management are now tightly coordinated as a multi-stakeholder strategy with IT, legal, compliance and business operators, and they are an ongoing process rather than a once-a-year check box. HR leaders therefore need to be intimately involved in developing programs that create policies, grow employees’ AI acumen, identify where to apply AI capabilities, establish a responsible AI governance strategy and use tools like AI and automation to help ensure employees are treated thoughtfully and respectfully through trustworthy, transparent AI adoption.

Challenges and solutions in adopting AI ethics within organizations

Although AI adoption and use cases continue to expand, organizations may not be fully prepared for the many considerations and consequences of adopting AI capabilities into their processes and systems. While 79% of surveyed executives emphasize the importance of AI ethics in their enterprise-wide AI approach, less than 25% have operationalized common principles of AI ethics, according to IBM Institute for Business Value research.

This discrepancy exists in part because policies alone cannot keep pace with the prevalence and increasing use of digital tools. Workers’ unapproved use of smart devices and apps such as ChatGPT and other black-box public models has become a persistent issue, and it is often not accompanied by the change management needed to inform workers of the associated risks.

For example, workers might use these tools to write emails to clients using sensitive customer data or managers might use them to write performance reviews that disclose personal employee data. 

To help reduce these risks, it may be useful to embed responsible AI practice focal points or advocates within each department, business unit and functional level. This is an opportunity for HR to drive and champion efforts to head off potential ethical challenges and operational risks.

Ultimately, creating a responsible AI strategy with common values and principles that are aligned with the company’s broader values and business strategy and communicated to all employees is imperative. This strategy needs to advocate for employees and identify opportunities for organizations to embrace AI and innovation that push business objectives forward. It should also assist employees with education to help guard against harmful AI effects, address misinformation and bias and promote responsible AI, both internally and within society.

Top 3 considerations for adopting responsible AI

The top 3 considerations business and HR leaders should keep in mind as they develop a responsible AI strategy are:

Make people central to your strategy

Put another way, prioritize your people as you plot your advanced technology strategy. This means identifying how AI works with your employees, communicating specifically how AI can help them excel in their roles and redefining ways of working. Without education, employees may worry that AI will be deployed to replace them or eliminate the workforce. Communicate directly and honestly with employees about how these models are built. HR leaders should address potential job changes, as well as the new categories of jobs created by AI and other technologies.

Enable governance that accounts for both the technologies adopted and the enterprise

AI is not a monolith. Organizations can deploy it in many ways, so they must clearly define what responsible AI means to them, how they plan to use it and how they will refrain from using it. Principles such as transparency, trust, equity, fairness, robustness and the use of diverse teams, in alignment with OECD or RAII guidelines, should be considered and designed into each AI use case, whether it involves generative AI or not. Additionally, routine reviews for model drift and privacy should be conducted for each model, along with specific diversity, equity and inclusion metrics for bias mitigation.
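To make those routine reviews concrete, here is a minimal sketch of two checks such a review might automate: a population stability index (PSI) to flag distribution drift, and a selection-rate impact ratio as a simple bias metric. The data, bins and thresholds below are hypothetical illustrations, not part of any specific governance standard.

```python
import math

def population_stability_index(expected, actual):
    """PSI between two binned probability distributions (same bins)."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0
    )

def selection_rate(outcomes):
    """Share of positive outcomes (e.g., candidates advanced) in a group."""
    return sum(outcomes) / len(outcomes)

# Drift check: a feature's binned distribution at training time vs. today.
train_dist = [0.25, 0.25, 0.25, 0.25]
live_dist = [0.30, 0.25, 0.25, 0.20]
psi = population_stability_index(train_dist, live_dist)

# Bias check: four-fifths rule comparing selection rates between two groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # hypothetical outcomes, group A
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # hypothetical outcomes, group B
impact_ratio = selection_rate(group_b) / selection_rate(group_a)

print(f"PSI: {psi:.4f}")  # a PSI above ~0.2 is often treated as significant drift
print(f"Impact ratio: {impact_ratio:.2f}")  # a ratio below 0.8 flags potential adverse impact
```

A review cadence might run checks like these monthly per model and escalate any breach of the agreed thresholds to the governance body, alongside the privacy and documentation reviews the text describes.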

Identify and align the right skills and tools needed for the work

The reality is that some employees are already experimenting with generative AI tools to help them with tasks such as answering questions, drafting emails and other routine work. Therefore, organizations should act immediately to communicate their plans to use these tools, set expectations for employees who use them and help ensure that their use aligns with the organization’s values and ethics. Organizations should also offer skill development opportunities that help employees build their AI knowledge and understand potential career paths.

Download the “Unlocking Value from Generative AI” paper for more guidance on how your organization can adopt AI responsibly

Practicing and integrating responsible AI into your organization is essential for successful adoption. IBM has made responsible AI central to its AI approach with clients and partners. In 2018, IBM established the AI Ethics Board as a central, cross-disciplinary body to support a culture of ethical, responsible and trustworthy AI. It comprises senior leaders from research, business units, human resources, diversity and inclusion, legal, government and regulatory affairs, procurement and communications. The board directs and enforces AI-related initiatives and decisions. IBM takes the benefits and challenges of AI seriously, embedding responsibility into everything we do.

I’ll allow my father this one broken grammar rule. AI can “do good” when managed correctly, with the involvement of many humans, guardrails, oversight, governance and an AI ethics framework. 

Watch the webinar on how to prepare your business for responsible AI adoption

Explore how IBM helps clients in their talent transformation journey
