Confluence for 1.5.25
The generative AI knowledge gap. Free courses at DeepLearning.AI. Backlash against AI-influencers. A New Year’s Midjourney prompt storm.

Welcome to Confluence. Here’s what has our attention this week at the intersection of generative AI and corporate communication:
The Generative AI Knowledge Gap
Free Courses At DeepLearning.AI
Backlash Against AI-Influencers
A New Year’s Midjourney Prompt Storm
The Generative AI Knowledge Gap
A little education goes a long way.
On December 31, Simon Willison published a blog post titled “Things we learned about LLMs in 2024”. It’s a helpful, comprehensive digest of just how much happened in 2024 and where things stand as we enter the new year. The list centers primarily on important technical observations, like “the GPT-4 barrier was comprehensively broken” and “the rise of inference-scaling ‘reasoning’ models.” Amid those technical-leaning items was a simpler, more human one that we believe is one of the most important dynamics facing organizations’ adoption of generative AI in 2025: “Knowledge is incredibly unevenly distributed.” The knowledge Willison writes of here is specifically knowledge of generative AI: what the tools are, how the models work, how you can use them, and what you can do with them.
As Willison writes:
Most people have heard of ChatGPT by now. How many have heard of Claude?
The knowledge gap between the people who actively follow this stuff and the 99% of the population who do not is vast.
The pace of change doesn’t help either. In just the past month we’ve seen general availability of live interfaces where you can point your phone’s camera at something and talk about it with your voice... and optionally have it pretend to be Santa. Most self-certified nerds haven’t even tried that yet.
Given the ongoing (and potential) impact on society that this technology has, I don’t think the size of this gap is healthy. I’d like to see a lot more effort put into improving this.
While Willison is speaking of the knowledge gap in society writ large, we see the same gap inside teams and organizations. In our experience over the past two years, an essential first step for organizations, teams, and individuals to get real traction with generative AI is to address the knowledge gap. We start every seminar or development session we lead with a show of hands to gauge who in the group uses generative AI on an everyday basis, who uses it occasionally, and who has never used it at all. While the relative proportion of hands we see for each category has shifted quite a bit over the past year (we see more people in the middle and fewer who have never used generative AI), the overall pattern reflects Willison’s observation that knowledge of and experience with generative AI tools remain unevenly distributed.
The temptation for many teams and organizations is to jump straight to use cases — to look for the ways they can immediately begin getting value from these tools and recouping their significant investment in them. The challenge with jumping straight to use cases, however, is that without understanding how these tools work and how to use them, it’s difficult to determine what to use them for and how to do so productively. Once a team is grounded in a shared understanding and vocabulary of this technology — and given the time and space to experiment with it both individually and together — uncovering specific use cases that are directly, immediately relevant to a specific team or organization becomes much easier and more organic.
CIO.com recently ran an article titled “Gen AI in 2025: Playtime is over, time to get practical,” which noted the “pilot fatigue, aimless experimentation, and failure rates” from many organizations’ initial forays into generative AI. We expect the pressure for practical applications of generative AI within organizations, including within corporate communications teams, to steadily increase in 2025. In our view — grounded in our experience within our own firm and with clients — the most efficient way to “get practical” is to first address the knowledge gap. Create the time and space for educational grounding, and the rest will follow much more easily.
Free Courses At DeepLearning.AI
Short courses worth your time.
Speaking of closing the knowledge gap, this week we stumbled across a free online short course at DeepLearning.AI on working with OpenAI’s new o1 reasoning model, which one of us just completed (we know, we know … nerd alert). That course may be of interest to some of you, but of broader interest are two other courses available there:
Generative AI for Everyone — a three-to-five-hour, learn-at-your-own-pace introductory course on generative AI. Here’s the blurb:
Instructed by AI pioneer Andrew Ng, Generative AI for Everyone offers his unique perspective on empowering you and your work with generative AI. Andrew will guide you through how generative AI works and what it can (and can’t) do. It includes hands-on exercises where you’ll learn to use generative AI to help in day-to-day work and receive tips on effective prompt engineering, as well as learning how to go beyond prompting for more advanced uses of AI. You’ll delve into real-world applications and learn common use cases, and get hands-on time with generative AI tools to put your knowledge into action, and gain insight into AI’s impact on both business and society. This course was created to ensure everyone can be a participant in our AI-powered future.
Collaborative Writing and Coding with OpenAI Canvas — a 50-minute short course, also free, on using OpenAI’s “canvas” feature, which is of particular use to those using ChatGPT to create written content. Its blurb:
Explore a new way to write and code with OpenAI Canvas, a user-friendly interface that allows you to brainstorm, draft, and refine text and code in collaboration with ChatGPT. In the short course Collaborative Writing and Coding with OpenAI Canvas, taught by Karina Nguyen, research lead at OpenAI, you’ll learn how to use this tool to enhance your writing and coding. Canvas provides a side-by-side workspace where you and ChatGPT can collaboratively edit and refine text or code. This makes brainstorming, drafting, and iterating on the text feel more natural and effective … This course will teach you how to make the most of the new interactive Canvas workspace and tools to make your writing more flexible, more efficient, and a lot more fun as you go behind the scenes with examples of the different use cases and learn what it takes to train the model that powers Canvas’s features and functionalities.
Both are worth a look.
Backlash Against AI-Influencers
What we can learn from Meta’s attempt at introducing AI-generated users.
Meta’s recent moves in artificial intelligence deserve our attention — not just for what they reveal about the company’s strategy, but for what they signal about the future of digital communication platforms and our role as communication professionals.
The Financial Times recently reported Meta’s expanding integration of AI features across Facebook and Instagram. While some implementations feel familiar — think photo filters and editing tools — others push into more provocative territory. Meta confirmed it had created AI-powered “influencer” accounts, essentially digital personalities built on its open-source Llama models, that had been active for nearly a year.
The recent attention on these personalities sparked predictable pushback, leading Meta to take down these AI-powered accounts. But the broader ramifications matter more than this specific retreat. For corporate communication professionals, we believe there are two key implications to take away from this situation.
First, Meta’s strategy illuminates a broader industry trajectory. A Meta executive told the Financial Times:
We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do. They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform.
This isn’t just Meta’s roadmap — it’s a preview of how major platforms will likely deploy generative AI to capture and retain user attention. We are going to see more instances of companies testing the waters of what users will tolerate when it comes to AI-generated content, not fewer.
Second, and perhaps more crucial for communication professionals, is the reality that audiences respond differently to AI-generated content. While we’re still navigating this challenge in real time, Marshall McLuhan’s insight that “the medium is the message” takes on new relevance. The mere fact that content is AI-generated becomes part of its meaning, influencing how audiences receive and interpret it.
This creates a complex calculus for communication professionals. We must weigh not just whether AI can create certain content, but whether it should. The decision extends beyond questions of quality or efficiency — it demands understanding how the AI origin of content might affect its reception.
As platform owners push the boundaries of AI integration, communication professionals must bring seasoned judgment to these decisions. We’re operating in uncharted territory where social norms and expectations around AI-generated content are still taking shape. Our expertise in understanding audiences and crafting effective messages becomes more valuable, not less, in this context.
A New Year’s Midjourney Prompt Storm
Images too good not to share.
Sometimes people ask where the images at the top of every edition of Confluence come from. They’re made by Midjourney, our preferred generative AI tool for image creation (and one you should know how to use). We use a two-step process, though, as we’ve also created a Claude Project to act as our art director. We give it an idea, and it gives us 10 artistic directions, including a Midjourney prompt for each. This week we gave the Claude Project this prompt:
Something that has to do with the new year, includes "2025" in text.
It quickly generated 10 concepts with prompts for each, from “Japanese Minimalism Direction: Contemporary Japanese design Approach” to “American Regionalism Direction: 1930s American painting Approach.” We then feed those prompts to Midjourney, which creates four images for each prompt. All told, in about five minutes we have, if we wish, 40 images from which to choose (and we can revise them within Midjourney if we like). We share this because it’s a very practical example of how we’re using generative AI to enable a specific use case in our work. We’re using two tools, one to assist the other, to great effect. We also share it because this week we ran all 10 prompts through Midjourney (usually we just pick one or two). All produced impressive and beautiful imagery. We include one image from each of the nine prompts we didn’t select for the header, unedited, here.
And here’s wishing you and yours a fantastic 2025.
We’ll leave you with something cool: A series of videos Ethan Mollick has made with Google’s Veo 2 that create in-genre film snippets of classic video games. “Donkey Kong as a nature documentary” is priceless, as is “Grand Theft Auto as a colorized silent slapstick film.”
AI Disclosure: We used generative AI in creating imagery for this post. We also used it selectively as a creator and summarizer of content and as an editor and proofreader.