Confluence for 9.3.23
ChatGPT and Google AI for enterprise. What we can learn from how college students use ChatGPT. The Alignment Problem by Brian Christian. Walmart launches GenAI tool for associates.
Welcome back to Confluence. Here’s what has our attention at the intersection of AI and corporate communications:
ChatGPT and Google AI for Enterprise
What We Can Learn From How College Students Use ChatGPT
Book Recommendation: The Alignment Problem by Brian Christian
Walmart Launches GenAI Tool for Associates
ChatGPT and Google AI for Enterprise
With Microsoft 365 Copilot on the horizon, OpenAI’s and Google’s announcements foreshadow an inflection point for the use of generative AI in organizations.
Last week saw two major announcements showing that the race for enterprise market share in the generative AI space is heating up.
First, OpenAI announced ChatGPT Enterprise. It marks the company’s biggest move since the introduction of ChatGPT itself. OpenAI’s announcement notes that ChatGPT Enterprise will include unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, and advanced data analysis capabilities (which most observers are interpreting as a reframing of the Code Interpreter feature). All of these are noteworthy features, but the biggest game-changer is enterprise-grade privacy and security, with assurances that OpenAI will not train on enterprise users’ data. In our conversations with clients, data privacy and security have been the top concerns for employees’ use of generative AI tools, and rightfully so. While we don’t anticipate those concerns going away entirely, we do think these advances in security — and the ability for organizations to define their own terms in contracts with AI providers — will substantially mitigate them. The primary concern holding many organizations back from fully embracing these technologies may not hold them back much longer.
On the heels of the OpenAI announcement, Google unveiled new developments to bolster its enterprise offerings. The announcements, a summary of which you can find here, all aim to strengthen Google’s AI value proposition for large businesses.
This news, paired with the anticipated arrival of Microsoft 365 Copilot later this year or early next, suggests that we’re approaching a major inflection point for the use and availability of generative AI tools within organizations. In most organizations today, the use of generative AI is limited to early adopters who are exploring and experimenting with the publicly available versions of these tools on their own. In the coming months, more organizations will place these tools directly in the hands of their employees. Employee adoption is likely to accelerate, and with it the second- and third-order consequences that adoption will bring.
For corporate communication teams, the time to start preparing is now.
What We Can Learn From How College Students Use ChatGPT
They’re using it in savvier ways than simply writing essays.
As students return to class, one of the more interesting stories to watch this fall will be how higher education — and education at all levels — continues to grapple with generative AI. With that in mind, we returned to a May 2023 essay by Columbia undergrad Owen Kichizo Terry in The Chronicle of Higher Education to see what organizations and corporate communication teams might learn from one of the most enthusiastic (and reported-upon) user bases of ChatGPT: college students. We agree with many of Terry’s points, namely:
The generative AI genie is out of the bottle. Given the general-purpose nature and increasing sophistication of these technologies, strict AI-detection protocols and bans are likely to have diminishing utility.
Rather than fighting these technologies, it will be more productive to identify where they can add value and develop users’ skills. Just as important, we should be equally intentional about where these tools should not be used and develop those skills by other means.
ChatGPT and tools like it are much more than a “fancy autocomplete.” While a student technically could use ChatGPT to write an entire essay, savvier students use it in more sophisticated ways: to help generate and structure arguments, support claims, and much more. As with any tool, it can be used for good or ill. But when used in the right way in the right cases for the right reasons, it can be a force multiplier.
We expect many of the issues, opportunities, practices, and standards of generative AI adoption in one sphere to translate to others. It’s not difficult to draw parallels between the student experience with generative AI and what we can expect to see in the corporate space. Keep watching as the generative AI dynamic in education unfolds.
Book Recommendation: The Alignment Problem by Brian Christian
A thoughtful examination of the complex challenge of aligning AI systems with human values.
Why do AI systems sometimes fail to understand what we ask of them, even when we believe our instructions are clear? How do we ensure they serve our interests, and that their outputs align with human values? Brian Christian explores these questions in The Alignment Problem. We’ve read it, and believe you should too. The real value Christian delivers is the depth with which he explores the various ways that AI systems learn, and the risks and benefits associated with different modes of learning. When we deepen our understanding of how these systems learn and produce outputs, we can make better decisions about how and when to use them.
We think that last point is important for professionals using generative AI to wrestle with. Given the strengths and weaknesses of these technologies, decision-making about their use will be as important, at least in the near term, as proficiency in their use. To make those decisions well, you need to know a bit about how they work and “think.” Skillfulness in application alone won’t help you avoid some of the larger risks or see the larger opportunities.
“As machine-learning systems grow not just increasingly pervasive but increasingly powerful, we will find ourselves more and more often in the position of the ‘sorcerer’s apprentice’: we conjure a force, autonomous but totally compliant, give it a set of instructions, then scramble like mad to stop it once we realize our instructions are imprecise or incomplete—lest we get, in some clever, horrible way, precisely what we asked for.” — Brian Christian
Here’s GPT-4’s summary of the book, FWIW:
"The Alignment Problem: Machine Learning and Human Values" by Brian Christian delves into the challenges of aligning artificial intelligence (AI) systems with human values. The book explores the evolving landscape of machine learning, focusing on the ethical, philosophical, and technical hurdles in creating AI that benefits humanity without causing unintended harm. Christian interviews experts in the field, recounts historical precedents, and presents real-world examples to illustrate the complexity and urgency of the alignment problem.
Key Themes:
Misaligned Objectives: One of the core issues is that machine learning models can often optimize for goals that are not perfectly aligned with human intentions. This can lead to unwanted or even dangerous outcomes.
Ethical and Philosophical Questions: The book discusses the inherent challenges in translating human values and ethical considerations into something that a machine can understand.
Technical Solutions: Various methods for addressing the alignment problem are discussed, such as reward modeling, inverse reinforcement learning, and debate methods, among others.
Case Studies: The book is replete with examples, from Google's language models to self-driving cars, demonstrating the real-world implications of misaligned AI.
Human Collaboration: Christian emphasizes that solving the alignment problem is not just a technical challenge but a collaborative effort that includes insights from philosophy, social science, and even art.
In summary, "The Alignment Problem" offers a comprehensive look at the difficulties of ensuring that machine learning algorithms act in ways that are beneficial to and aligned with human values. It blends technical discussion with ethical considerations, offering both cautionary tales and potential paths forward.
Walmart Launches GenAI Tool for U.S. Campus Associates
The desktop and mobile app brings content generation and other generative AI capabilities to thousands of U.S. employees.
In a harbinger of things to come, Walmart announced the release of a generative AI tool for employees working on corporate campuses in the U.S. Read the LinkedIn post by Donna Morris, EVP and Chief People Officer, here.
Expect this announcement to be one of many in the coming months, as organizations seek to take advantage of the skill augmentation generative AI can afford while solving for questions of security and confidentiality. For many, a custom solution hosted on-premises will be the answer, while others will opt for “as a service but secure” solutions outside the corporate firewall, like Microsoft 365 Copilot or ChatGPT Enterprise.
It’s a progressive decision by an employer of scale and significance. It will also have second-order consequences, as all solutions and innovations create new problems. In any such instance of generative AI adoption, we’d be thinking about:
Principles for use. We’re presuming Walmart has outlined a set of guidelines for associate use of the AI tool — getting these right is important to minimizing some of the risks these tools can create.
Quality assurance. Most excellent creators — be they writers, musicians, photographers, videographers, or others — will quickly admit that editing is at least as important as creation: deciding what among created work should be kept, and creating harmony across that body of work. Corporate communications exists as a professional function, in part, not to centralize content creation so much as to centralize the assurance of quality and alignment in what’s conveyed. When everyone can create content, at greater volume and across a broader range of forms, it raises the question of how an organization ensures that what’s created reflects brand standards, quality standards, alignment, and other matters. This has been an issue since online publication platforms like SharePoint and blog software emerged in organizations in the 2000s, but generative AI will supercharge individuals’ ability to create content. Give everyone a musical instrument and the result can be cacophony or Beethoven — the difference is not just skill, but coordination. Widespread use of generative AI tools will put pressure on the latter.
Attribution and disclosure policy. We wrote about the question of AI use disclosure in an earlier post, and as the Wall Street Journal reports here, some organizations are already wrestling with the question for consumer-facing content. We think it’s an important matter worthy of deep and careful consideration, especially for internal communication. We expect broad AI use and exposure to make employees more sensitive to the origin of what they see, read, or hear. People will want to know what comes from a person, a ghostwriter, or a machine, and this knowledge will be central to what and whom they choose to trust.
Skill atrophy. As people learn where generative AI can automate and replace existing skill sets, other skills will erode — some with important consequences. In the late 1990s, the aviation industry realized that increased automation in cockpits had, contrary to expectations, made some forms of aviation less safe. Pilots had become so reliant on automation that their skills in situational awareness, task management, decision-making in critical moments — even flying the airplane by hand — had atrophied. Outsourcing a task to AI can add value, but the question is, “What other skills may fall into disuse, and what’s the net benefit of that trade?” Keep watching this issue as it evolves — it’s an important one.
Depression of developmental curves. Some skills are learned early in a career and then used less often as a person advances. In our firm, proofreading is one of those skills. While everyone willingly reviews and quality-checks colleagues’ work, we disproportionately rely on more junior colleagues to copyedit. It’s a skill we could immediately, and likely with greater accuracy, outsource today to GPT-4 and Claude 2. But we don’t ask junior colleagues to do this just to save time, or even to teach copyediting or our corporate voice. We do it to teach attention to detail. If we were to outsource copyediting to GPT-4, we’d also eliminate a source of skill development that’s far more critical to their success as advisors down the line than copyediting itself. Could they learn attention to detail in other ways? Yes, but we’d argue their overall developmental path would be depressed from where it would otherwise be if they did lots of copyediting early on. All sorts of tasks foster important developmental skills at a more macro level. As we look to replace “routine” work with automation, it’s wise to ask what larger skills that work may cultivate, and how to ensure development of those skills still occurs as AI use becomes more prevalent.
It’s good to have Walmart leading the way on this, given their scale. As we noted earlier, we’re sure many others will soon follow. The real work over the next few years will be identifying and solving for the second-order consequences generative AI creates in communications. The five topics above have our attention already, and we’re sure many others will emerge in the months to come.
That’s it for now. Before we go, we’ll leave you with something cool — if a bit chilling, given the potential for deepfakes — for music fans.
AI Disclosure: We used generative AI in creating imagery for this post. We also use it selectively as a summarizer of content and as an editor and proofreader.