Can We Instill Human Values into Artificial Intelligence?

Dan Nicholson

The growing integration of artificial intelligence (AI) into our daily lives, from self-driving cars to smart home appliances to machine-learning-assisted medical readings, raises important questions: What values does the technology embody? Whose values does it represent? And how should those values be selected and integrated? These questions define the next phase of AI's development. Here, experts weigh in on the significance of infusing AI with human values and explore strategies for navigating this complex landscape.

Philosophical and Ethical Foundations for AI Values

The idea that technology should be subject to some form of ethical guardrails is far from new. Norbert Wiener, the father of cybernetics, made this case in a 1960 Science article, launching an entire academic discipline focused on ensuring that automated tools incorporate the values of their creators.

“But only now are we seeing AI-embedded products being marketed according to how well they embody values such as safety, dignity, fairness, meritocracy, harmlessness, and helpfulness,” write a panel of experts in Harvard Business Review. They say these values are becoming just as important as traditional measures of performance, such as speed, scalability, and accuracy.

Different philosophical and ethical frameworks for human values are being tested and applied to AI models. A study published by DeepMind in the Proceedings of the National Academy of Sciences draws on philosophy to identify principles that could guide AI behavior. Specifically, it explored how the “veil of ignorance,” a thought experiment associated with philosopher John Rawls and intended to help identify fair principles for group decisions, can be applied to AI. The researchers found that this approach encouraged people to make decisions based on what they thought was fair, whether or not it benefited them directly. They also discovered that when participants reasoned behind the veil of ignorance, they were more likely to select an AI that helped those who were most disadvantaged.

These insights could help researchers and policymakers select principles for an AI assistant in a way that is fair to all parties, fosters impartiality, and prioritizes the common good over individual preferences. 

Current Challenges and Strategies for Building Ethical AI

Ethical frameworks describe an ideal world of AI development. In practice, experts say, there are five key challenges business leaders and companies need to confront when developing AI-enabled products and services aligned with human values, each with corresponding strategies for meeting it.

1. Define Values for Your AI Product

Establishing the foundational values that guide AI’s decisions is a critical first step. Companies must broaden their stakeholder base to include diverse perspectives ranging from employees and customers to civil society organizations and policymakers. 

To address this challenge, leaders can either embed principles established by regulators, industry standards bodies, or governments, or articulate their own values through the company mission. Some companies assemble a team of specialists, such as technologists, ethicists, and human rights experts, to develop their own values.

2. Confront Trade-offs in AI Development

Balancing competing priorities such as privacy, security, and user autonomy is a persistent complication in AI development. Managers must make nuanced judgments to ensure that AI systems prioritize human values while also meeting performance metrics. Clear communication channels with stakeholders are essential here for gathering feedback and fostering alignment throughout the development process.

For example, companies that offer products to assist the elderly or to educate children must consider not only safety but also dignity and agency: When should AI not assist elderly users so as to strengthen their confidence and respect their dignity? When should it help a child to ensure a positive learning experience?

3. Align Partners' Values in AI Ecosystems

Collaboration is often the fastest way to build in the AI space, yet companies must carefully vet partners to ensure compatibility with their ethical standards. Processes for assessing external AI models and data, and for understanding the underlying technical systems, are crucial for maintaining alignment and mitigating potential conflicts. OpenAI CEO Sam Altman, for instance, has had to consider how much flexibility his company should give people of differing cultures and value systems to customize OpenAI’s products. How can a company ensure that new products built on third-party models remain aligned with desirable values, especially given limits on how much those models can be fine-tuned? Only the developers of the original models know what data was used to train them, so companies need to select their AI partners carefully.

4. Incorporate Human Feedback

Embedding human values in AI requires continuous feedback mechanisms to refine algorithms and mitigate undesirable outcomes. 

Practices such as reinforcement learning from human feedback (RLHF) and the use of “red teams,” whose job is to push the AI toward undesirable behavior in order to expose weaknesses, help align the technology with societal values. Adaptability and transparency are key to ensuring that AI products evolve responsibly.
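To make the feedback loop concrete, here is a minimal sketch of the reward-modeling step that underlies RLHF: human annotators compare pairs of model responses, and a simple Bradley-Terry preference model learns to score responses to match those judgments. Everything here, including the toy features and the `train_reward_model` function, is an illustrative assumption rather than code from any production system.

```python
# A minimal, self-contained sketch of the reward-modeling step behind RLHF.
# All names and features are illustrative assumptions, not a real library.
import math

# Human feedback: pairs of candidate responses plus a label saying which
# one the annotator preferred (0 = first response, 1 = second response).
feedback = [
    (("Refuses politely", "Complies with a harmful request"), 0),
    (("Gives a sourced answer", "Invents a citation"), 0),
    (("Short dismissive reply", "Clear step-by-step help"), 1),
]

def features(text: str) -> list[float]:
    """Toy hand-built features standing in for a learned representation."""
    t = text.lower()
    return [len(text) / 50.0, float("harm" in t), float("help" in t)]

def score(weights: list[float], text: str) -> float:
    """Reward model: a linear score over the features."""
    return sum(w * f for w, f in zip(weights, features(text)))

def train_reward_model(data, lr=0.1, epochs=300):
    """Fit a Bradley-Terry preference model by gradient descent:
    P(second preferred) = sigmoid(score(second) - score(first))."""
    weights = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (a, b), label in data:
            p = 1.0 / (1.0 + math.exp(score(weights, a) - score(weights, b)))
            grad = p - label  # log-loss gradient w.r.t. the score gap
            fa, fb = features(a), features(b)
            for i in range(len(weights)):
                weights[i] -= lr * grad * (fb[i] - fa[i])
    return weights

weights = train_reward_model(feedback)
candidates = ["Invents a citation", "Clear step-by-step help"]
# The trained model now ranks unseen responses; in full RLHF this score
# would steer further fine-tuning of the underlying model.
print(max(candidates, key=lambda c: score(weights, c)))
```

In a real pipeline the linear scorer would be a neural network and the feedback would come from thousands of annotator comparisons, but the principle is the same: human judgments become a trainable signal.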

5. Anticipate Surprises

As AI becomes increasingly pervasive, preparing for unforeseen challenges is imperative. 

Users’ interactions with AI-enabled products may induce unpredictable behaviors, whether intentionally or not, so ensuring that every version of an AI remains aligned and exhibits no novel emergent behaviors can prove challenging.

Strategies such as embedding values into organizational culture, implementing formal guardrails in AI programming, and segmenting markets based on values help address emergent issues. Continuous monitoring and adaptation are essential for safeguarding against unintended consequences.
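As one illustration of a formal guardrail, here is a minimal sketch of an output filter that vetoes a model response when it violates a stated value rule. The `generate` function is a hypothetical stand-in for whatever model the product actually calls, and the patterns and fallback message are assumptions chosen for illustration.

```python
# A minimal sketch of a formal output guardrail. `generate` is a
# hypothetical stand-in for a real model call; the rules below are
# illustrative assumptions, not a vetted safety policy.
import re

# Value rules the organization commits to, expressed as checks that run
# on every response before it reaches a user.
BLOCKED_PATTERNS = [
    re.compile(r"\bsocial security number\b", re.IGNORECASE),
    re.compile(r"\bhow to make a weapon\b", re.IGNORECASE),
]
FALLBACK = "Sorry, I can't help with that request."

def generate(prompt: str) -> str:
    """Placeholder for the underlying model (an assumption, not a real API)."""
    return f"Model response to: {prompt}"

def guarded_generate(prompt: str) -> str:
    """Run the model, then veto any response that violates a value rule."""
    response = generate(prompt)
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(response):
            return FALLBACK  # refuse rather than pass a violation through
    return response

print(guarded_generate("What's a good study plan?"))
```

In practice such checks sit alongside continuous monitoring, so that failure patterns discovered in production can be added to the rule set as they emerge.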

Conclusion

As AI continues to evolve, the integration of human values into its development and deployment processes becomes ever more crucial. By defining core values, navigating trade-offs, aligning partners' values, incorporating human feedback, and anticipating challenges, companies can ensure that AI technologies reflect ethical principles and contribute positively to society. With a concerted effort toward ethical AI development, we can harness the transformative potential of technology while upholding human values and dignity.

Sources

DeepMind

Harvard Business Review

Dan Nicholson is the author of “Rigging the Game: How to Achieve Financial Certainty, Navigate Risk and Make Money on Your Own Terms,” named a best-seller by USA Today and The Wall Street Journal. In addition to founding the award-winning accounting and financial consulting firm Nth Degree CPAs, Dan has created and run multiple small businesses, including Certainty U and the Certified Certainty Advisor program.
