AI works alongside diverse human interests. People make decisions based on any number of contextual factors, including their experiences, memories, upbringing, and cultural norms. These factors give us a fundamental understanding of “right and wrong” in a wide range of contexts, whether at home, in the office, or elsewhere. This is second nature for humans, because we have a wealth of experience to draw upon.
Today’s AI systems do not have these kinds of experiences to draw upon, so it is the job of designers and developers to collaborate in making sure existing values are considered. Care is required to remain sensitive to a wide range of cultural norms and values. As daunting as it may seem to take value systems into account, universal principles share a common core: they are cooperative phenomena. Successful teams already understand that cooperation and collaboration lead to the best outcomes.
Consider the culture that establishes the value systems you’re designing within. Whenever possible, bring in policymakers and academics who can help your team articulate relevant perspectives.
Work with design researchers to understand and reflect your users’ values.
Consider using an Ethics Canvas to map out your understanding of your users’ values and to align the AI’s actions with them. Values will be specific to particular use cases and affected communities. Alignment helps users better understand your AI’s actions and intent.
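One way to make the output of such a mapping exercise concrete is to record, per use case, the values your research surfaced, and to flag proposed AI actions that fail to account for them. The sketch below is only illustrative: the value labels, use cases, and action tags are invented for this example, not drawn from any real Ethics Canvas.

```python
# Hypothetical value-alignment map inspired by an Ethics Canvas exercise.
# All use cases, value labels, and action tags below are invented for
# illustration; a real map would come from research with the affected
# communities.

# Values surfaced for each use case during the mapping exercise.
VALUE_MAP = {
    "home_assistant": {"privacy", "family_safety"},
    "hiring_tool": {"fairness", "transparency"},
}

def check_alignment(use_case, action_tags):
    """Return the mapped values a proposed action fails to address.

    `action_tags` lists the values a proposed AI action claims to uphold;
    any mapped value not covered is flagged for the team to review.
    """
    expected = VALUE_MAP.get(use_case, set())
    return expected - set(action_tags)

# Example: a hiring-tool action that addresses fairness but not transparency.
gaps = check_alignment("hiring_tool", ["fairness"])
# gaps == {"transparency"} — a prompt to revisit the action with the team
```

A table or spreadsheet works just as well; the point is that the mapping is explicit and reviewable, so gaps between community values and AI behavior surface early rather than after deployment.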
“If machines engage in human communities as autonomous agents, then those agents will be expected to follow the community’s social and moral norms. A necessary step in enabling machines to do so is to identify these norms. But whose norms?”