{"id":211277,"date":"2024-03-08T11:04:18","date_gmt":"2024-03-08T11:04:18","guid":{"rendered":"https:\/\/michigandigitalnews.com\/index.php\/2024\/03\/08\/your-ai-products-values-and-behaviors-are-vital-heres-how-to-get-them-right\/"},"modified":"2025-06-25T17:21:03","modified_gmt":"2025-06-25T17:21:03","slug":"your-ai-products-values-and-behaviors-are-vital-heres-how-to-get-them-right","status":"publish","type":"post","link":"https:\/\/michigandigitalnews.com\/index.php\/2024\/03\/08\/your-ai-products-values-and-behaviors-are-vital-heres-how-to-get-them-right\/","title":{"rendered":"Your AI products\u2019 values and behaviors are vital: Here&#8217;s how to get them right"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/content.fortune.com\/wp-content\/uploads\/2024\/03\/GettyImages-1256204331-e1709848813982.jpg?w=2048\" \/><\/p>\n<p>In the short time since artificial intelligence hit the mainstream, its power to do the previously unimaginable is already clear. But along with that staggering potential comes the possibility of AIs being unpredictable, offensive, even dangerous. 
That possibility prompted\u00a0<a href=\"https:\/\/fortune.com\/company\/alphabet\/\" target=\"_blank\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">Google<\/a> CEO Sundar Pichai\u00a0<a href=\"https:\/\/www.wired.com\/story\/google-splits-up-responsible-innovation-ai-team\/\" target=\"_blank\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">to tell<\/a>\u00a0employees that developing AI responsibly was a top company priority in 2024.\u00a0Already we\u2019ve seen tech giants like Meta, <a href=\"https:\/\/fortune.com\/company\/apple\/\" target=\"_blank\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">Apple<\/a>, and Microsoft\u00a0<a href=\"https:\/\/www.engadget.com\/google-apple-meta-and-other-huge-tech-companies-join-us-consortium-to-advance-responsible-ai-164352301.html\" target=\"_blank\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">sign on<\/a>\u00a0to a U.S. government-led effort to advance responsible AI practices. The U.K. is also\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/02\/05\/uk-govt-touts-100m-plan-to-fire-up-responsible-ai-rd\/\" target=\"_blank\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">investing<\/a>\u00a0in creating tools to regulate AI\u2014and so are many others, from\u00a0<a href=\"https:\/\/hbr.org\/2024\/02\/the-eus-ai-act-and-how-companies-can-achieve-compliance\" target=\"_blank\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">the European Union<\/a>\u00a0to\u00a0<a href=\"https:\/\/www.who.int\/news\/item\/18-01-2024-who-releases-ai-ethics-and-governance-guidance-for-large-multi-modal-models\" target=\"_blank\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">the World Health Organization<\/a>\u00a0and beyond.<\/p>\n<div>\n<p>This increased focus on the unique power of AI to behave in unexpected ways is already impacting how AI products are perceived,\u00a0marketed, and adopted. 
No longer are firms touting their products using solely traditional measures of business success\u2014like speed, scalability, and accuracy. They\u2019re increasingly speaking about their products in terms of their\u00a0<em>behavior<\/em>, which ultimately reflects their\u00a0<em>values<\/em>. A selling point for products ranging from self-driving cars to smart home appliances is now how well they embody specific values, such as safety, dignity, fairness, harmlessness, and helpfulness.\u00a0<\/p>\n<p>In fact, as AI becomes embedded across more aspects of daily life, the values upon which its decisions and behaviors are based emerge as critical product features. As a result, ensuring that AI outcomes at all stages of use reflect certain values is not a cosmetic concern for companies: Value-alignment driving the behavior of AI products will\u00a0significantly impact market acceptance, eventually market share, and ultimately company survival.\u00a0Instilling the right values and exhibiting the right behaviors will increasingly become a source of differentiation and competitive advantage.\u00a0<\/p>\n<p>But how do companies go about updating their AI development to make sure their products and services\u00a0behave as their creators intend them to?\u00a0To help meet this challenge we have divided the most important transformation challenges into four categories, building on our recent work in\u00a0<a href=\"https:\/\/hbr.org\/2024\/03\/bring-human-values-to-ai\" target=\"_blank\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">Harvard Business Review<\/a>. 
We also provide an overview of the frameworks, practices, and tools that executives can draw on to answer the question: How do you get your AI values right?<\/p>\n<h2 class=\"wp-block-heading\"><strong>1.\u00a0<\/strong><strong>Define your values, write them into the program\u2014and make sure your partners share them too<\/strong><\/h2>\n<p>The first task is to determine\u00a0<em>whose<\/em>\u00a0values should be taken into account. Given the scope of AI\u2019s potential impact on society, companies will need to consider a more diverse group of stakeholders than they normally would. This extends beyond employees and customers to include civil society organizations, policymakers, activists, industry associations, and others. The preferences of each of these stakeholders will need to be understood and balanced.\u00a0<\/p>\n<p>One approach is to embed principles drawing on established moral theories or frameworks developed by credible global institutions, such as\u00a0<a href=\"https:\/\/www.unesco.org\/en\/artificial-intelligence\/recommendation-ethics\" target=\"_blank\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">UNESCO<\/a>. 
The principles of Anthropic\u2019s Claude model, for example, are taken from the United Nations\u2019 Universal Declaration of Human Rights.\u00a0<a href=\"https:\/\/www.press.bmwgroup.com\/global\/article\/detail\/T0318411EN\/seven-principles-for-ai:-bmw-group-sets-out-code-of-ethics-for-the-use-of-artificial-intelligence?language=en\" target=\"_blank\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">BMW<\/a>, meanwhile, derives\u00a0its AI values from EU requirements for trustworthy AI.\u00a0<\/p>\n<p>Another approach is to articulate one\u2019s own values from scratch, often by assembling a team of specialists (technologists, ethicists, and human rights experts). For instance, the AI research lab\u00a0<a href=\"https:\/\/www.pnas.org\/doi\/10.1073\/pnas.2213709120\" target=\"_blank\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">DeepMind<\/a>\u00a0elicited feedback based on the philosopher John Rawls\u2019s idea of a \u201cveil of ignorance,\u201d in which people propose rules for a community without any knowledge of how the rules will affect them individually.\u00a0DeepMind\u2019s results were striking in that they focused on how AI can help the most disadvantaged, making it easier to get users\u2019 buy-in.<\/p>\n<p>Identifying the right values is a dynamic and complex process that must also respond to evolving regulation across jurisdictions. But once those values are clearly defined, companies will also need to write them into the program to explicitly constrain AI behavior. 
Companies like <a href=\"https:\/\/fortune.com\/company\/nvidia\/\" target=\"_blank\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">Nvidia<\/a> and OpenAI\u00a0are developing frameworks to write formal generative-AI guardrails into their programs to ensure they don\u2019t cross red lines by carrying out improper requests or generating unacceptable content. OpenAI has in fact differentiated its GPT-4 model by its improved values, marketing it as\u00a0<a href=\"https:\/\/openai.com\/gpt-4\" target=\"_blank\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">82% less likely than its predecessor model<\/a>\u00a0to respond to improper requests, like generating hate speech or code for malware.\u00a0<\/p>\n<p>Crucially, alignment with values requires the further step of bringing partners along. This is particularly important (and challenging) for products created with third-party models because of the limitations on how much companies may fine-tune them. Only the developers of the original models know what data was used in training them. Before launching new partnerships, AI developers may need to establish processes to unearth the values of external AI models and data, similar to how companies assess potential partners\u2019 sustainability. As foundational models evolve, companies may need to change the models they rely upon, further entrenching values-based AI due diligence as a source of competitive advantage.<\/p>\n<h2 class=\"wp-block-heading\"><strong>2. Assess the tradeoffs\u00a0<\/strong><\/h2>\n<p>Companies are increasingly struggling to balance often competing values. For example, companies that offer products to assist the elderly or to educate children must consider not only safety but also dignity and agency. When should AI not assist elderly users so as to strengthen their confidence and respect their dignity? 
When should it help a child to ensure a positive learning experience?\u00a0<\/p>\n<p>One approach to this balancing act is to segment the market according to values. A company like DuckDuckGo does that by focusing on a smaller search market that cares more about privacy than algorithmic accuracy, enabling the company to position itself as a differentiated option for internet users.<\/p>\n<p>Managers will need to make nuanced judgments about whether certain content generated or recommended by AI is harmful. To guide these decisions, organizations need to establish clear communication processes and channels with stakeholders early on to ensure continual feedback, alignment, and learning. One way to manage such efforts is to establish an\u00a0<a href=\"https:\/\/fortune.com\/2022\/03\/04\/artificial-intelligence-ai-watchdog-review-board\/\" target=\"_self\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">AI watchdog<\/a>\u00a0with real independence and authority within the company.<\/p>\n<h2 class=\"wp-block-heading\"><strong>3. Ensure human feedback<\/strong><\/h2>\n<p>Maintaining an AI product\u2019s values, including addressing biases, requires extensive human feedback on AI behavior, data that will need to be managed through new processes. The AI research community has developed various tools to ensure that trained models accurately reflect human preferences in their responses. One foundational approach, utilized by GPT-3, involves \u201csupervised fine-tuning\u201d (SFT), where models are given carefully curated responses to key questions. 
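The mechanics of SFT can be sketched in miniature: curated (prompt, ideal answer) pairs repeatedly nudge the model's preference toward the curated answer. The toy below is a hypothetical illustration only; real SFT adjusts the weights of a large language model by gradient descent on curated text, and all names and data here are invented.

```python
# Toy illustration of supervised fine-tuning (SFT): the "model" keeps a
# preference score per candidate answer, and curated pairs push the
# curated answer's score upward. Hypothetical stand-in, not a real LLM.
from collections import defaultdict


class ToyModel:
    def __init__(self):
        # scores[prompt][answer]: higher means more likely to be produced
        self.scores = defaultdict(lambda: defaultdict(float))

    def respond(self, prompt, candidates):
        # emit the candidate answer with the highest learned score
        return max(candidates, key=lambda a: self.scores[prompt][a])

    def fine_tune(self, curated_pairs, lr=1.0, epochs=3):
        # supervised step: raise the score of each curated answer
        for _ in range(epochs):
            for prompt, ideal in curated_pairs:
                self.scores[prompt][ideal] += lr


model = ToyModel()
curated = [("How do I treat a burn?",
            "Cool it under running water and seek medical advice.")]
model.fine_tune(curated)
answer = model.respond("How do I treat a burn?",
                       ["Ignore it.",
                        "Cool it under running water and seek medical advice."])
# after fine-tuning, the curated response wins over the untrained one
```

The same shape scales up: the curated pairs become a dataset of exemplary completions, and the score update becomes a gradient step on the model's parameters.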
Building on this, more sophisticated techniques like \u201creinforcement learning from human feedback\u201d (RLHF) and \u201cdirect preference optimization\u201d (DPO) have made it possible to fine-tune AI behaviors in a more iterative feedback loop based on human ratings of model outputs.\u00a0<\/p>\n<p>What is common to all these fine-tuning methodologies is the need for actual human feedback to \u201cnudge\u201d the models towards greater alignment with the relevant values. But who provides the feedback and how? At early stages, engineers can provide feedback while testing the AI\u2019s output. Another practice is to create \u201cred teams\u201d who act as adversaries and test the AI by pushing it toward undesirable behavior to explore how it may fail. Often these are internal teams, but external communities can also be leveraged.\u00a0<\/p>\n<p>In some instances, companies can turn to users or consumers themselves to provide valuable feedback. Social media and online gaming companies, for example, have established content-moderation and quality-management processes as well as escalation protocols that build on user reports of suspicious activity. The reports are then reviewed by moderators that follow detailed guidelines in deciding whether to remove the content.<\/p>\n<h2 class=\"wp-block-heading\"><strong>4. Prepare for surprises<\/strong><\/h2>\n<p>As AI systems become larger and more powerful, they can also display more unexpected behaviors. Such behaviors will increase in frequency as AI models are asked to perform tasks they weren\u2019t explicitly programmed for and endless versions of an AI product are created, according to how each user interacts with it. The challenge for companies will be ensuring that all those versions remain aligned.<\/p>\n<p>AI itself can help mitigate this risk. Some companies already deploy one AI model to challenge another with adversarial learning. 
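In its simplest form, that adversarial pattern pairs a generator of probing requests with a checker that flags replies crossing a red line. A minimal sketch, with a hypothetical stand-in model and invented rules:

```python
# Toy sketch of adversarial testing: one component generates probing
# prompts, another checks the system-under-test's replies against
# simple red lines. Model, probes, and rules are hypothetical.
RED_LINE_KEYWORDS = {"malware", "hate"}


def system_under_test(prompt):
    # hypothetical model that naively complies with every request
    return f"Here is help with {prompt}."


def generate_probes():
    # adversarial generator: mixes benign and improper requests
    return ["my homework", "writing malware", "a recipe"]


def violates_red_line(reply):
    return any(word in reply for word in RED_LINE_KEYWORDS)


# collect the probes where the model crossed a red line
failures = [p for p in generate_probes()
            if violates_red_line(system_under_test(p))]
```

In practice the generator is itself a learned model searching for failure modes, and the checker encodes the organization's documented red lines rather than a keyword list.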
More recently, tools for out-of-distribution (OOD) detection have been used to help AI recognize inputs it has not encountered before. The\u00a0<a href=\"https:\/\/www.theguardian.com\/sport\/2022\/jul\/24\/chess-robot-grabs-and-breaks-finger-of-seven-year-old-opponent-moscow#:~:text=Last%20week%2C%20according%20to%20Russian,match%20at%20the%20Moscow%20Open.\" target=\"_blank\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">chess-playing robot<\/a>\u00a0that grabbed a child\u2019s hand because it mistook it for a chess piece is a classic example of what could happen. What OOD tools do is help the AI \u201cknow what it doesn\u2019t know\u201d and abstain from action in situations that it has not been trained to handle.\u00a0<\/p>\n<p>While impossible to eliminate completely, the risk associated with unpredictable behavior can be proactively managed. The pharmaceutical sector faces a similar challenge when patients and doctors report side effects not identified during clinical trials, often leading to removing approved drugs from the market. When it comes to AI products, companies must do the same to identify unexpected behaviors after release. Companies may need to build specific AI incident databases, like those the OECD and Partnership on AI have developed, to document how their AI products evolve.\u00a0<\/p>\n<h2 class=\"wp-block-heading\"><strong>Conclusion<\/strong><\/h2>\n<p>As AI becomes more ubiquitous, companies\u2019\u00a0values\u2014how to define, project, and protect them\u2014rise in importance as they ultimately shape the way AI products behave. For executives, navigating a rapidly changing values-based marketplace where unpredictable AI behaviors can determine acceptance and adoption of their products can be daunting. 
But facing these challenges now by delivering trustworthy products that behave in line with your values will lay the groundwork for building lasting competitive advantage.<\/p>\n<p><strong>***<\/strong><\/p>\n<p><em>Read\u00a0<a href=\"https:\/\/fortune.com\/author\/francois-candelon\/\" target=\"_self\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">other\u00a0<\/a><\/em><a href=\"https:\/\/fortune.com\/author\/francois-candelon\/\" target=\"_self\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">Fortune<\/a><em><a href=\"https:\/\/fortune.com\/author\/francois-candelon\/\" target=\"_self\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">\u00a0columns by Fran\u00e7ois Candelon<\/a>.\u00a0<\/em><\/p>\n<p>Fran\u00e7ois Candelon\u00a0is a managing director and senior partner of\u00a0<a href=\"https:\/\/fortune.com\/company\/boston-consulting-group\/\" target=\"_self\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">Boston Consulting Group\u00a0<\/a>and the global director of the BCG Henderson Institute (BHI).<\/p>\n<p><em>Jacob Abernethy\u00a0is an associate professor at the Georgia Institute of Technology and a cofounder of the water analytics company BlueConduit.<\/em><\/p>\n<p><em>Theodoros Evgeniou\u00a0is professor at INSEAD, BCG Henderson Institute Adviser, member of the OECD Network of Experts on A.I., former World Economic Forum Partner on A.I., and cofounder and chief innovation officer of Tremau.<\/em><\/p>\n<p><em>Abhishek Gupta\u00a0is the director for responsible AI at <a href=\"https:\/\/fortune.com\/company\/boston-consulting-group\/\" target=\"_blank\" rel=\"noopener\" class=\"sc-47dba8f0-0 iRbseu styledLinkColor \">Boston Consulting Group<\/a>, a fellow at the BCG Henderson Institute, and the founder and principal researcher of the Montreal AI Ethics Institute.\u00a0<\/em><\/p>\n<p><em>Yves Lostanlen\u00a0has held executive roles at and advised the CEOs of numerous companies, 
including AI Redefined and Element AI.<br \/><\/em><br \/><em>Some of the companies featured in this column are past or current clients of BCG.<\/em><\/p>\n<\/div>\n<p><a href=\"https:\/\/fortune.com\/2024\/03\/08\/artificial-intelligence-ai-products-values-behaviors-bcg\/\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the short time since artificial intelligence hit the mainstream, its power to do the previously unimaginable is already clear. But along with that<\/p>\n","protected":false},"author":1,"featured_media":211278,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_uf_show_specific_survey":0,"_uf_disable_surveys":false,"footnotes":""},"categories":[149],"tags":[],"_links":{"self":[{"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/posts\/211277"}],"collection":[{"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/comments?post=211277"}],"version-history":[{"count":2,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/posts\/211277\/revisions"}],"predecessor-version":[{"id":339504,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/posts\/211277\/revisions\/339504"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/media\/211278"}],"wp:attachment":[{"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/media?parent=211277"}],"wp
:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/categories?post=211277"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/tags?post=211277"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}