Confluence for 9.10.23
Google Gemini is coming. More websites are blocking OpenAI’s access to their content. Most people have no GenAI experience. What it means to use AI as a writing tool. A brief prompt design primer.

Welcome back to Confluence. Here’s what has our attention at the intersection of AI and corporate communication:
Google Gemini Is Coming
More Websites Are Blocking OpenAI’s Access to Their Content
Research Shows Most People Have No GenAI Experience
What It Means to Use AI as a Writing Tool
A Brief Prompt Design Primer
Google Gemini Is Coming
We may again be surprised by what large language models can do.
Google’s next move in the large language model (LLM) space is coming, and it’s called Gemini. Zvi Mowshowitz gives some context here, noting that “By all reports, and as one would expect, Google’s Gemini looks to be substantially superior to GPT-4,” and that “Yes, it will advance capabilities, and those not paying attention who think the whole LLM thing was a ho-hum will notice and freak out …”
How will it be different? It’s hard to say, as no one in the wild has seen or worked with it yet. But projections are that it will be more powerful and capable than GPT-4 (perhaps significantly so), in part because it will have been trained on much more data (including everything in YouTube, Google Scholar, Google Books, and Alphabet’s other assets), and in part because it integrates with a broader technical architecture stemming from Google’s DeepMind and AlphaGo work. We won’t get more technical than that here, but you can read this piece for a good overview of Gemini.
Why does it matter? First, Gemini will be capable of handling data-driven requirements, images, audio, videos, 3D models, and graphs — meaning it should quickly expand the set of skills and tasks people in corporate communication can augment or outsource to generative AI. Second, if we believe the projections, Gemini will have capabilities that will surprise, perhaps even startle, observers much the same way GPT-4 did. People in organizations are starting to wake up to the realities of generative AI, but many are still skeptical. We’ve seen that exposure to what generative AI can do is one way people come to recognize what these tools may mean for them. Seeing is believing, and with Gemini, we may see things that really get our attention.
More Websites Are Blocking OpenAI’s Access to Their Content
If this becomes a trend, it could affect the capabilities of ChatGPT and other tools.
OpenAI and other companies developing large language models depend on massive datasets to train those models. Historically, these datasets have been easy to access via web crawler programs that scour the internet, cataloging the content of countless pages. Google uses these programs to index what’s on different websites and determine when those sites should appear in your search results, and OpenAI uses them to collect vast quantities of text for training the models underneath ChatGPT.
Most organizations allow this indexing because they want their content to appear in search results. But now that companies are using that information to train LLMs, they’re starting to change their posture. Amazon, The New York Times, Shutterstock, and more have now blocked OpenAI’s crawlers from accessing their content and thus from using it to train OpenAI’s AI models. In the short term, this means very little — as large as some of these companies are, the restricted content is still a mere fraction of what is available on the internet. That said, keep an eye on the issue given the potential to create headaches for generative AI companies down the road.
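The blocking mechanism itself is simple: a website publishes rules in a robots.txt file, and well-behaved crawlers check those rules before fetching pages. OpenAI’s crawler identifies itself as “GPTBot.” As a minimal sketch (the example.com URL and the rules shown are illustrative, not taken from any real site), Python’s standard library can demonstrate how such rules work:

```python
from urllib import robotparser

# Illustrative robots.txt that blocks OpenAI's GPTBot crawler
# while still allowing all other crawlers (e.g., search engines).
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

# GPTBot is denied; other crawlers are still permitted.
print(parser.can_fetch("GPTBot", "https://example.com/article"))     # False
print(parser.can_fetch("Googlebot", "https://example.com/article"))  # True
```

This is why a site can stay visible in search results while opting out of model training: the rules are applied per crawler, not site-wide.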
Research Shows Most People Have No GenAI Experience
The technology is advancing faster than expectations.
We think about AI quite a bit, but recent research from Pew shows that we are very much outliers. The research indicates that among those who have heard of ChatGPT, only 24% have used it. If you use generative AI on a regular basis, this figure may feel strikingly low — yet counted across all adults, including those who haven’t heard of ChatGPT, it’s likely much lower still. The other finding that stood out is that only 19% of those who have heard of ChatGPT believe it will have a major impact on their own job. Even though we have no insight into the jobs the respondents hold, this study from OpenAI and Wharton has us believing these respondents have a poor read on the future.
There is a clear disconnect between the awareness people have of these tools and their feel for the pace at which these tools are evolving (and the change that evolution will bring). Generative AI technology is progressing exponentially, not linearly, yet most people likely have linear projections for its future impact. There is a big difference between linear and exponential thinking. The bottom line: with this technology the world has the potential to change quickly and surprisingly. If you or members of your team are part of the 76% who have heard of ChatGPT or Claude 2 or other generative AI tools but haven’t used them, today is an ideal time to start.
What It Means to Use AI as a Writing Tool
We should explore novel uses for generative AI rather than merely using it as a replacement for existing tools.
Ethan Mollick’s most recent One Useful Thing offers interesting and provocative advice on how we can begin to use large language models like GPT-4 and Claude 2.
[There is a lot of advice on] using AI as a thesaurus, or asking it for correction and grammar edits … these suggestions are typical advice I see for using AI: use AI to do stuff that other tools are already doing.
But LLMs are not Google replacements, or thesauruses or grammar checkers. Instead, they are capable of so much more weird and useful help.
What type of things? “Weird things,” according to Mollick.
One [suggestion] for improving writing with AI was to use AI as a thesaurus, a source of synonyms. Thesauruses are old, they have been around longer than printed books - the first versions are over two millennia old - and have become staples of grammar checkers and online dictionaries. As a result, a thesaurus is a comfortable analog. They do a narrow task of suggesting words, and they leave humans in charge of everything else. You could use an AI as a thesaurus, but it isn’t one, since its training goes far beyond finding similar words. Instead, you can take the concept much further. Forget synonyms of words, find synonyms for paragraphs.
I asked GPT-4 to Give me 20 vastly different variations on this [the paragraph above]. Make them as different as possible in style. label each style. Some of the more exciting examples of what it came up with:
Haiku
Word book, ancient too,
More than print, task narrow, small.
AI aims sky-high
Marketing Hype
Unlock the POWER of words with a thesaurus! Your GO-TO tool since before Gutenberg! Level UP your grammar game with this time-tested CLASSIC. Why settle for AI when you could have the ORIGINAL wordsmith's wonder?
And my favorite:
Internet Meme
Thesaurus: "I haz words."
AI: "Hold my data. I haz paragraphs."
Mollick’s point is an important one: these are new tools, with new capabilities we’re only beginning to understand — and according to some, new ways of “thinking” that are so different from our own that they qualify as an alien form of intelligence. While straightforward use cases like grammar-checking and chart design are practical, they likely miss the larger ways in which generative AI can augment our skills. Surfacing these other use cases requires a fair bit of exploration, trial and error, creativity, and (as we’ve said before) understanding a bit about how these models work, their strengths, and their weaknesses. We expect that five years from now we’ll look back and smile at the simple and limited ways we started to use large language models. For now, though, push the limits of creativity and see what they can do.
“These examples will hopefully serve to illustrate something important: the real value of AI comes not from having it emulate old ways of solving problems, but, instead, by helping us unlock new capabilities. Employees at companies don’t need another tool to search their corporate intranets for data, they need a way of skipping the most boring parts of their job while making their remaining work more productive and engaging.”
— Ethan Mollick
A Brief Prompt Design Primer
Becoming skillful at prompting generative AI is essential to getting the most value from it.
We are often asked for advice on crafting effective prompts for large language models like GPT-4 or Claude 2. Prompt design — called “prompt engineering” by some — is the process of optimizing your input prompts to better produce the desired response from an AI language model. It’s important, as a well-crafted prompt improves the overall quality and relevance of AI-generated responses and reduces the likelihood of receiving incorrect or ambiguous information. Ethan and Lilach Mollick have a nice 11-minute video here on prompt design for instructors and students that translates well to the corporate sphere. In addition, here’s some of the guidance we offer our team on writing effective prompts for GPT-4 and Claude 2:
Provide important context: Provide context that should inform the nature of the output. Context should almost always include the role you want the AI to play (“you are a VP-level colleague on a media relations team,” “you are a skeptical employee in a large manufacturing company, working at a plant in Oklahoma”) and the role you play (“I am a leader of an internal communication team coming to you for expert advice”). Context can also include additional information about your intentions, the organization, or the history or precedent of what you’re asking about — more context leads to better outputs.
Be specific and clear: LLMs perform best when given clear and specific instructions. This eliminates ambiguity and helps the model generate a more accurate response. “Write in an authoritative voice,” “Give me three paragraphs on each topic … I will tell you to keep going if you run out of space,” “Offer advice that might be controversial,” “Push me to defend my arguments” are all examples.
Break complex tasks into smaller components and use step-by-step instructions: LLMs may struggle with complex or multi-part questions. Break prompts into smaller, more manageable components to increase the chances of receiving accurate and complete outputs. For example, you can first ask it to analyze and summarize some text. Ask it to modify that output as you prefer, then ask it to create a survey instrument based on the text. When that’s done, ask it to review the survey for methodological and validity risks and suggest changes based on that review. Then ask it to translate the survey into Spanish. When asking an LLM to perform a task or solve a problem, provide step-by-step instructions to guide the model through the process. “I want you to do the following … [list of instructions].” You can also ask the model to “think step-by-step,” which typically prompts it to show its reasoning and build incrementally toward a more accurate result.
Provide examples: Include examples in your prompt to guide the LLM toward the desired format or style of response. “Here’s an example of a typical memorandum in our organization,” “Here’s an example of a typical email we send,” … the model will accept and attempt to emulate almost any example you provide it (just be sure to follow your organization’s security protocols when providing content).
Set constraints: Specify constraints, such as word counts and overall structure, to shape the output’s format. You can also create constraints based on audience (“Explain this to me as if I were 10 years old with a US-based education”), style (“Following the AP style guide …”), or even the stance of the LLM (“be hyper-critical in your review and feedback”).
Guide the tone and style: Explicitly state the tone (e.g., formal, informal, academic, conversational) or style (e.g., concise, detailed, persuasive) you wish the LLM to adopt. You can also upload a sample text in your preferred style and ask it to describe that style. Then use that description in asking for output (“In the ‘professional and engaging style you just described,’ please write a two-paragraph summary of the following policy document …”).
Incorporate verification steps: To increase the likelihood of receiving accurate and reliable information, consider adding a verification step to your prompt. Ask the model to provide sources or evidence to support its response. You can even go so far as to ask the model to critique its own output for weaknesses or inaccuracies.
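If your team builds prompts programmatically rather than typing them into a chat window, the guidance above maps naturally onto a few labeled pieces assembled into one prompt. A minimal sketch in Python (all of the role, task, and constraint text below is illustrative, not a tested recipe):

```python
# Assemble a prompt from the elements described above:
# context/role, a stepwise task, explicit constraints, and a verification step.

context = (
    "You are a VP-level colleague on a media relations team. "
    "I am the leader of an internal communication team coming to you "
    "for expert advice."
)

task = (
    "Draft talking points for an upcoming organizational announcement. "
    "Work step-by-step: first list the key stakeholder concerns, "
    "then draft three talking points addressing each concern."
)

constraints = (
    "Write in an authoritative voice, follow the AP style guide, "
    "and keep each talking point under 25 words."
)

verification = (
    "Finally, critique your own output for weaknesses or inaccuracies "
    "and suggest one improvement."
)

# Join the pieces with blank lines so each element reads as its own paragraph.
prompt = "\n\n".join([context, task, constraints, verification])
print(prompt)
```

Keeping the elements separate like this makes it easy to swap the role, tighten the constraints, or drop the verification step without rewriting the whole prompt.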
That’s it for now. As we close this issue, we’ll leave you with something cool Chase Lean is doing with Midjourney …
AI Disclosure: We used generative AI in creating imagery for this post. We also use it selectively as a summarizer of content and as an editor and proofreader.