Confluence for 9.24.23
The "AI Lull." Salesforce, Adobe, and Roblox embed genAI capabilities. Google Bard curbs hallucinations. A visual explanation of LLMs. New paper on prompt optimization.

Welcome back to Confluence. This is our seventh email, and we hope the series is adding value for you. If you’ve missed some of the prior issues, you can always visit the archive. That said, here’s what has our attention at the intersection of AI and corporate communication this 24 September 2023:
The “AI Lull”
Salesforce, Adobe, and Roblox Embed GenAI Capabilities
Google Bard Curbs “Hallucinations”
A Visual Explanation of LLMs
New Paper on Prompt Optimization
The “AI Lull”
This is a common pattern for revolutionary technologies, Cowen argues.
Over the past year, economist Tyler Cowen has been one of the more bullish voices for the positive impact that generative AI will have on business and society. It may seem surprising, then, that in a recent column in Bloomberg, Cowen focused on what he calls the “AI lull”:
Google searches for ChatGPT peaked in April and are now down significantly, as is customer engagement with ChatGPT. Claude 2, a new and wonderful model released in July by Anthropic, hasn’t captured the public imagination the way ChatGPT did.
Cowen places this phenomenon in a larger historical context, citing similar lulls in the early days of electricity and the internet to note that “Every revolutionary technology has a period when it feels not so exciting after all.” These lulls, he argues, are healthy and “provide some time to process the advances in AI and to fathom that large-language models really can perform many tasks at high levels.”
While Cowen’s argument is about the economy and society writ large, it’s consistent with our views on how these advances will unfold in organizations in general and corporate communication in particular. Corporate communication teams have a finite window to process these advances and prepare for their implications, and we are encouraging our clients to continue pressing their teams to build context, knowledge, and skills specific to generative AI ahead of the coming curve of general appreciation.
Do not be deceived. While the hype and marketing may have died down, at least on the retail side, the AI revolution marches on. And it will be just as transformative as advertised. — Tyler Cowen
Salesforce, Adobe, and Roblox Embed GenAI Capabilities
More generative AI integrations are coming to widely-used applications.
Generative AI technology will proliferate in two primary ways: through the introduction of “AI-native” software and applications (think ChatGPT, Claude 2, Midjourney, and a host of others) and through the integration of generative AI into software and applications that millions of people already use. Even as new products and applications hit the market each day, the past two weeks have seen a flurry of developments in the latter category:
Salesforce launched Einstein Copilot, an AI-powered assistant that will be embedded across the Salesforce suite of applications (Slack, Tableau, etc.). As will be the case with most integrations like this, the tool can be trained on internal company data, making its output even more relevant.
Adobe is embedding its previously standalone Firefly generative AI model into its suite of creative applications. With its integration into widely-used applications like Photoshop and InDesign, the technology will be at the fingertips of millions more users.
Roblox announced a new AI assistant coming to its gaming and creative app. Salesforce and Adobe, of course, are enterprise technology companies used by businesses across the world, while Roblox is a mobile gaming app primarily used by kids. Why does it matter? Roblox is one of the most widely-used applications in the world, with over 214 million monthly users. Just as millennials grew up with the internet in the 1990s and Gen Z grew up with mobile computing and social media after that, another generation of young people will be growing up with generative technology embedded into the tools they use—and games they play—every day.
All of that happened between September 8 and September 12. In the coming months, we can expect more developments like these, headlined by the wide release of Microsoft 365 Copilot (more on that in our next edition). In each case, these developments will bring the power of generative AI to more and more people, with less and less friction. A major inflection point in the use of these tools—and the implications of that use—is coming.
Google Bard Curbs “Hallucinations”
It’s a step in the right direction to address one of the biggest limitations to current generative AI technology.
As the race for market share continues to intensify, the major players are announcing new features and capability improvements at a rapid clip. Google recently announced a slew of updates to Bard, its answer to ChatGPT. The most notable of those updates, in our view, is how Google is attempting to curb “hallucinations” (the tendency of genAI tools to “make things up” or get specific facts wrong). You can read more specific details on how Google is attempting to do that here, but the short answer is a new feature that lets users easily ask Bard to fact-check its own responses.
One point we’ve reiterated consistently in conversations within our firm and with clients is that we should not mistake current barriers to this technology for lasting limitations. Right now, hallucination is one of the biggest barriers to people confidently using these tools, and the unpredictability of when and how these errors may occur is part of what gives generative AI a jagged frontier of effectiveness. People are rightfully cautious when it comes to trusting the accuracy of everything genAI tools produce, and we continue to advise that all outputs of genAI tools be subject to human review.
The steps Google is taking with Bard and hallucination are a case in point that the companies developing these technologies view current barriers as problems to solve, not lasting limitations. The hallucination problem isn’t fully solved, of course, but this is a step in the right direction. There’s no doubt that we will continue to see incremental progress like this against the many limitations these tools have today. Don’t let the current limitations blind you to the long-term potential.
A Visual Explanation of Large Language Models
A baseline understanding of LLMs is essential to knowing where they excel and where they don’t.
Large Language Models (LLMs) are one of the foundational technologies underlying the latest generation of genAI tools and applications. LLMs themselves are underpinned by a foundational technology called transformer models. These are complex, sophisticated technologies that can quickly become overwhelming for a non-expert. We are of the view, though, that having a baseline understanding of how these technologies work is essential to knowing how to use them most effectively (and, just as important, knowing what not to use them for).
While there has been no shortage of attempts to translate these technologies into layperson’s terms, the Financial Times has published one of the best ones yet. It’s an excellent visual explanation that takes just a few moments to review, and we recommend it for anyone getting up to speed on the basics of the technology, or someone at an intermediate level of understanding who wants to go deeper. Pair it with Cal Newport’s “What Kind of Mind Does ChatGPT Have?” You’ll be smarter for having read both.
New Paper on Prompt Optimization
It may be good practice to ask your generative AI to “take a deep breath” before responding.
A new paper from Google DeepMind explores means of optimizing large language models through prompt design. The paper is quite technical, so here is GPT-4’s summary¹:
The paper discusses a fresh way to use large language models, like GPT-3 or GPT-4, as problem solvers. Usually, we think of these models as good at generating text—like writing essays or answering questions. But this paper says, "Hey, they can also solve mathematical problems!"
How They Do It?
They introduce a method called "Optimization by PROmpting" or OPRO. Here's how it works:
1. You give the model a problem described in simple, everyday language.
2. The model suggests a bunch of solutions based on what you've told it.
3. You try out these solutions and see how good or bad they are.
4. You tell the model how each solution performed.
5. The model uses this new information to suggest even better solutions next time.
What Problems Can It Solve?
The paper shows that this method can solve various mathematical problems, like figuring out the best route for a delivery truck or finding the line that best fits a set of data points. And it does this quite well, often better than the traditional methods used to solve these problems.
What Does This Mean for You?
Broadens Your View: You might be thinking that generative AI models are only good for writing text or creating content. This paper says, "No, they can do more!" They can help solve real-world problems too.
Easy to Start: You don't need to be a math whiz to use this method. You describe your problem in simple language, and the model does the heavy lifting.
New Toolkit: As you learn more about generative AI, this adds a new tool to your toolkit. Now, you can use these models for both content generation and problem-solving.
Limitations: It's not a magic wand. There are cases where it might not work well, or it might be too expensive computationally. But it's a promising direction.
Step Into the Future: As AI continues to evolve, methods like this are pushing the boundaries of what we think is possible. It's an exciting field, and you're stepping into it at the right time.
So, if you're getting into generative AI, this paper adds a whole new layer of possibilities. It's not just about creating content; it's also about solving real-world problems in an intuitive and innovative way.
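As a rough illustration of the five-step loop GPT-4 describes, here is a minimal Python sketch. It is hypothetical: in a real OPRO run, an LLM would read the scored history and propose new candidate solutions, whereas this toy stands in for the model with random perturbations of the best solution found so far. All function names here are our own for illustration, not from the paper.

```python
import random

def score(x):
    # Toy objective to maximize; the best possible solution is x = 3.
    return -(x - 3) ** 2

def propose(history, rng, n=8):
    # Stand-in for the LLM: suggest new candidates near the best
    # scored solution seen so far (OPRO feeds the scored history
    # back into the model at each step).
    best = max(history, key=lambda pair: pair[1])[0]
    return [best + rng.gauss(0, 0.5) for _ in range(n)]

def opro(steps=20, seed=0):
    rng = random.Random(seed)
    history = [(0.0, score(0.0))]                 # 1. seed with the problem
    for _ in range(steps):
        candidates = propose(history, rng)        # 2. model suggests solutions
        scored = [(c, score(c)) for c in candidates]  # 3-4. try them, record results
        history.extend(scored)                    # 5. results inform the next round
    return max(history, key=lambda pair: pair[1])[0]
```

Running `opro()` on this toy objective converges toward the optimum at 3, which captures the key idea: the “optimizer” never sees a gradient, only a history of solutions and their scores.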
One interesting finding from the paper, though, was its review of different prompts one can use to optimize the large language model’s solution. The top prompt? “Take a deep breath and work on this problem step-by-step.” We often ask GPT-4 to “think step-by-step,” and many of us have gone so far as to include it as part of our standing instructions. But “take a deep breath?” We’d not thought of that, but have started to use it. For the curious, here is GPT-4’s summary of the same paper, using the same prompt, with “Take a deep breath and work on this problem step-by-step” added to the beginning:
Paper Summary
Title: Large Language Models as Optimizers
Authors: Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
Affiliation: Google DeepMind
Abstract Summary:
The paper explores the use of large language models as optimization tools, particularly for problems where traditional gradient-based methods face challenges due to the absence of a gradient.
---
Simplified Explanation:
The paper primarily focuses on using large language models, like the one you're interacting with, for optimization tasks. Optimization is essentially the process of finding the best solution to a problem from a set of possible solutions. Normally, this is done using mathematical techniques that make use of gradients, which are sort of like the "slopes" that point in the direction of the optimal solution. However, not all problems have these gradients, making them hard to solve with traditional methods.
The authors propose that large language models can act as a kind of "substitute" for these traditional optimization methods. They argue that language models have learned a lot about various problems and their solutions during their training, and this knowledge can be used to find good solutions to optimization problems, even when gradients are not available.
What This Means for You:
As someone who is starting to learn about generative AI, understanding the versatility of large language models is crucial. These models are not just for generating text; they can be applied in various other domains, like optimization, as the paper suggests.
Expand Your Horizon: Knowing that these models can be used for optimization broadens the range of applications you can consider. This could be particularly useful if you're looking to apply AI in unconventional ways.
Easier Problem-Solving: If you encounter a problem that's hard to solve with traditional methods, you now know that a language model could potentially be an alternative tool for finding a solution.
Practical Skills: Learning how to harness the optimizing power of language models can give you an edge in various professional settings. You could potentially develop systems that are more efficient or that solve problems that were previously considered too difficult for automated systems.
Interdisciplinary Knowledge: The paper highlights the interdisciplinary nature of AI. As you delve deeper into the subject, you may find that knowledge from different fields—mathematics, computer science, engineering, etc.—can be synergistically combined.
Future of AI: Understanding the capabilities and limitations of current AI models will help you stay ahead of the curve, as you'll be more prepared for upcoming trends and technologies.
To fully grasp the potential of what's laid out in this paper, you'll want to delve into the field of optimization as well. It will enhance your understanding of both AI and the various problems it can solve.
We prefer the latter of the two, and the entire exchange serves as an example of the influence prompts have on generative AI output.
That’s all we have for this edition of Confluence. We will leave you with something cool for audiobook fans …
AI Disclosure: We used generative AI in creating imagery for this post. We also use it selectively as a summarizer of content and as an editor and proofreader.

¹ Our original prompt for the GPT-4 Advanced Data Analysis plugin, after uploading the paper as a PDF file: Please summarize the attached paper for me. Explain it to me in simple terms and assume I am a novice on the subject matter. Also explain what it means to me as a person starting to learn more about, and starting to learn how to use, generative AI.
