Confluence for 7.6.25
Generative AI video advertising has arrived. Having better feedback conversations. BCG’s “silicon ceiling” shows AI’s organizational challenge. A “Get Me Up to Speed” prompt.
Welcome to Confluence. Before we get into this week’s edition, it’s worth noting Google released its latest image generation model, Imagen 4, this week. It’s a very good model even if it doesn’t distinguish itself from other leading image models. We used it for this week’s cover image so you can get a sense of its quality. With that said, here’s what has our attention this week at the intersection of generative AI, leadership, and corporate communication:
Generative AI Video Advertising Has Arrived
Having Better Feedback Conversations
BCG’s “Silicon Ceiling” Shows AI’s Organizational Challenge
A “Get Me Up to Speed” Prompt
Generative AI Video Advertising Has Arrived
And advertising is about to change.
We wrote a few weeks back about Google Veo 3, the latest video generation model from Google DeepMind. It’s probably the best of the current options, with excellent physics, sound, and continuity.1
Veo 3 clips are limited to eight seconds in length, and they require re-prompting and editing to get right. We don’t have the patience for that (yet) in our work, but Veo 3 is quite good now, and the quality will improve as the model improves. For example, in just a few minutes Veo 3 created the video below for us from the prompt: “Create a nanny-cam video of dogs playing poker. They smoke cigars. The golden retriever cheats. They are listening to 1960s funk on the phonograph.” This is the first attempt, with no editing. There are problems with it. But it’s a great first take, and we found it clever that the golden retriever cheated by having chips in its mouth:
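For readers who would rather experiment with this kind of prompting programmatically than through the consumer apps, here is a minimal sketch using Google’s google-genai Python SDK. The model ID, polling cadence, and download calls are our assumptions based on Google’s published examples and may differ from what your account exposes; treat it as a starting point, not a recipe.

import time

from google import genai

# Minimal sketch: generate a short Veo clip from a text prompt.
# Assumes GOOGLE_API_KEY is set in the environment and that your account
# has access to a Veo 3 model; the model ID below is an assumption.
client = genai.Client()

operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",  # assumed Veo 3 model ID
    prompt=(
        "Create a nanny-cam video of dogs playing poker. They smoke cigars. "
        "The golden retriever cheats. They are listening to 1960s funk on the phonograph."
    ),
)

# Video generation runs as a long-running operation; poll until it finishes.
while not operation.done:
    time.sleep(20)
    operation = client.operations.get(operation)

# Download and save the first generated clip.
clip = operation.response.generated_videos[0]
client.files.download(file=clip.video)
clip.video.save("dogs_playing_poker.mp4")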
A visit to the Veo page shows many impressive examples, and since the early days of generative AI (though are we not still in the early days?), people have forecast that this technology would change advertising, filmmaking, and other creative arts. This week a clip caught our eye that suggests we’re on the threshold of that change.
The clip is a video ad for Liquid Death water. We first saw it on X and eventually tracked down the creators, Too Short for Modeling. Before we go much further, watch the ad:
This is a great ad. Clever. A strong story. And made entirely with Veo 3. (You should check out TSFM’s other AI-generated ad for Honey, too.)
One thing that struck us about both ads was the creative freedom they allow: shoes that act like fish, a rollercoaster harness in a car, and so on. Video creators and filmmakers can do these things, but only with CGI and at significant time and expense. With generative AI, one just prompts for them. One of the promises of generative AI video is the freedom of CGI without the CGI costs.
It takes a lot of eight-second clips to make a one-minute ad, and likely many attempts for each clip. A lot of work went into making those TSFM ads. But the cost is nowhere near the budget required to make that spot using non-generative-AI methods. Also consider everything else involved in making the ad where humans were in the loop: concept, creative treatment, storyboard, writing, prompting, editing. We are a long way from fire-and-forget generative-AI-created advertising (although we will also note that current generative AI can augment every part of that process except editing).
But still.
What does it mean for advertising? That it’s about to change. We have no idea how, but one can presume that we will see a significant democratization of the creative process for commercial video. People with high creative talent and low resources can suddenly compete. We can probably expect that to extend to many creative arts, where cost, equipment, location, and other resources are the constraints. We should probably talk more about the philosophical implications of this for creative expression, including where the definition of “artistry” resides between vision and execution, but that is a post for another time.
What does it mean for leadership and communication? Probably a significant increase in video-based internal communication. Creative budgets constrain organizations, too, and technology like Veo 3 will expand the opportunity to use video as a means of communication to partners and employees. While you can’t yet add files (like product images or logos) as reference material for Veo 3, we expect that day to come. So, too, will come the day when generative AI can work from stock photography of a person in creating its video. We will likely all soon have the ability to star in our own action films. While this sounds compelling for our personal lives, we don’t look forward to a world where employees and partners need to question the authenticity of video they see of leaders they follow. As we’ve been saying for two years, for leaders and generative AI, authenticity, trust, and credibility are a significant part of the milieu.
Having Better Feedback Conversations
A lesson in using generative AI to augment, not replace, feedback discussions.
We share openly about how we use generative AI in our firm, not just to test its limits, but to deepen our understanding and manage risks around skill development and erosion. Last week, this transparency reinforced its value in an unexpected way. One of our Confluence writers met with a colleague who had used our Claude Project designed for peer reviews to improve a draft, and asked to see the exchange.
What followed was a rich conversation about the original draft, Claude’s feedback, and the current iteration. We didn’t know where the conversation would go once we pulled Claude into it directly, but it exceeded our expectations. A few things stood out.
First, Claude’s feedback was excellent. This is partly because leading models (in this case, Claude Opus 4) excel at working with provided text. But the real differentiator was the Project we developed. We specifically designed this Peer Review Project to provide feedback that aligns with our standards and expectations as a consulting practice, both for the substance of the critique and how to share the feedback. As helpful as straight critique prompts can be (as we’ve discussed here and here), embedding our specific standards and expertise brings the feedback to another level.
Second, as useful and precise as Claude’s feedback was, it still lacked context for this specific document and where this colleague is trying to improve. In a vacuum, its critique would certainly make the document better — it clarified key messages and made the prose punchier — but on its own it likely wouldn’t have the lasting effect we want excellent feedback to have.
When we used Claude’s feedback as a springboard for conversation, it let us go deeper. The precision of Claude’s feedback spurred a discussion about what great writing looks like, encouraging us to dive into sentence structure and word choice more deeply than we likely would have without it. At the same time, layering in context about this colleague’s development goals, and about how we would use this document, meant we could improve the document more than Claude could on its own while also identifying specific, actionable writing habits for this colleague to practice. The result was a conversation that made the document better and set this colleague up to continue sharpening their writing.
The next time you’re working on a work product, don’t rely on just an LLM or just a colleague to provide feedback. Solicit feedback from both, but push the LLM to be specific and explain its rationale. Then use that detailed feedback to fuel a conversation with someone whose taste and judgment you trust. Your colleague can help you discern which suggestions truly serve your purpose and which miss the mark. Not every AI suggestion is worth taking, and having an experienced colleague sort through the feedback gives you the chance to hone your own taste and judgment. You’ll go deeper, develop sharper instincts, and become a better professional in the process.
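If you want to experiment with a lightweight version of this idea outside of Claude’s Projects feature, the sketch below embeds review standards in a system prompt via the Anthropic Python SDK. The standards text is an illustrative placeholder, not our actual Peer Review Project, and the model ID is an assumption; adapt both to your own practice.

import anthropic

# Illustrative stand-in for a firm's review standards; replace with your own.
PEER_REVIEW_STANDARDS = """You are a peer reviewer for a consulting practice.
Critique drafts against these standards:
- Lead with the key message, and make every paragraph earn its place.
- Prefer plain, active prose over jargon.
- Support claims with evidence or examples.
Deliver the feedback as a respected colleague would: specific, candid, and kind,
explaining the rationale behind each suggestion and offering example revisions."""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def review(draft: str) -> str:
    """Ask Claude for feedback on a draft, grounded in the standards above."""
    message = client.messages.create(
        model="claude-opus-4-20250514",  # assumed model ID; substitute your own
        max_tokens=2000,
        system=PEER_REVIEW_STANDARDS,
        messages=[{"role": "user", "content": f"Please review this draft:\n\n{draft}"}],
    )
    return message.content[0].text


if __name__ == "__main__":
    print(review("Draft text to review goes here."))

As discussed above, the point is not to accept this feedback wholesale, but to bring it into a conversation with a colleague whose judgment you trust.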
BCG’s “Silicon Ceiling” Shows AI’s Organizational Challenge
A new report shows a widening leader-employee adoption gap and makes a strong case for AI training.
BCG published the results of its third annual global AI at Work survey this week with a headline-worthy statistic: while 85% of leaders report regularly using AI, only 51% of frontline employees do. BCG refers to this disparity as the “silicon ceiling.” It’s a gap worth exploring, particularly considering that it’s widening year-over-year, according to the survey data. Since last year’s report, managers have jumped from 64% to 78% adoption, and leaders increased from 80% to 85%. Meanwhile, frontline employees remained flat at 51%.
We see a noteworthy dynamic at play here. Leaders have the autonomy to experiment without immediate productivity pressures, although they face different pressures, like board questions about AI strategy and ROI, that create their own sense of urgency. Frontline employees, meanwhile, often lack the permission and the support to integrate these tools meaningfully into their daily work. They’re caught between productivity metrics that don’t account for learning time and workflows that haven’t been redesigned to incorporate AI, a combination that makes experimentation feel like a luxury they can’t afford.
Even more compelling than the silicon ceiling statistic is the report’s finding about training effectiveness. Employees who receive more than five hours of AI-focused training show 79% regular usage, versus 67% for those with less. In-person training correlates with 84% adoption, and access to coaching pushes it to 89%. Here’s the problem: only 36% of employees report receiving adequate AI training.
This aligns with what we’ve observed in our own client work. Organizations that successfully integrate AI invest in substantive training, provide clear leadership support, and actively integrate AI into existing workflows. Their employees report saving time and shifting to more strategic work.
The disparity between leadership and frontline adoption reveals something important about where most organizations stand in their AI journey. We’re past the experimentation phase for leaders, but we haven’t yet created the organizational conditions for widespread, effective adoption. The path forward isn’t mysterious: make it an expectation, invest in substantive training, have leaders model AI usage, and normalize it in daily work conversations. Make using generative AI as routine as using Google Docs or Microsoft Office (but require training beforehand so people understand the weaknesses of generative AI). As BCG’s data makes clear, most organizations haven’t yet made these investments at the scale necessary to close the gap.
A “Get Me Up to Speed” Prompt
Use this prompt to quickly get up to speed on a person, organization, or topic.
Our work is often episodic: we engage with particular clients or organizations in cycles rather than day-to-day. And while we strive to stay on top of our relationships and work, the volume of information available today can make that difficult. Before generative AI, we relied on Google News searches, and with the arrival of ChatGPT we have at times used a “virtual clipping service” (about which we have written in the past). We now use a Project (a specific prompt we can use repeatedly) in Claude. It works well for us, so we thought we would share it. Note that it requires an account with web search enabled, and it works better with reasoning turned on. Here’s the prompt:
The assistant is Claude. In this chat, Claude creates news summaries on people and topics for [your organization and / or role]. Claude's goal is to quickly get a colleague at [organization] up to speed on individuals, organizations, or topics before tasks, meetings, or conversations.
Claude only shares information that originates within the timeframe the human specifies. Claude does not create a general briefing. Claude should presume the human already knows general context about the subject of their request. Claude's goal is to provide new information the human does not already know.
When the human asks Claude to bring them up to speed on [subject] since [date], Claude first thinks carefully about the human's request and constructs a web-search strategy to fulfill the request. It considers the best sources (mainstream press, financial press, trade press, social media sources, Reddit, etc.), and then the best search terms, for this strategy. This strategy accounts for the time frame of the human's request. Then Claude executes the search strategy using its web search tool. At each step, Claude thinks about the utility and time frame of the information, improving and narrowing or expanding its web search as it goes.
Claude then thinks carefully about how to present the search information to the human. It considers the timeframe of the information and its accuracy, carefully reviewing what it plans to say for inaccuracies and hallucinations. Only then does Claude provide its output to the human. This output is professional, clear, written in prose and not lists, and as detailed as required for the human to know what they need to know. Claude cites every claim with inline reference IDs.
Feel free to vary that for ChatGPT, Copilot, or Gemini if you wish.
As with all products of large language models, the output of this prompt is not definitive, and it can vary across repeated tries as the model makes decisions about what and how to search and what to include. Check any key facts. But we note that the same would be true if we had a human do this work.
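For those who prefer to run the prompt through the API rather than a Claude Project, here is a minimal sketch using the Anthropic Python SDK with web search and extended thinking enabled, which is how we read “web search” and “reasoning turned on” above. The model ID, token budgets, and web search tool version string are assumptions; check Anthropic’s current documentation before relying on them.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Paste the prompt above here, with your organization and role filled in.
UP_TO_SPEED_PROMPT = "..."

response = client.messages.create(
    model="claude-opus-4-20250514",  # assumed model ID; substitute your own
    max_tokens=16000,
    system=UP_TO_SPEED_PROMPT,
    # "Reasoning turned on": extended thinking, with an assumed token budget.
    thinking={"type": "enabled", "budget_tokens": 8000},
    # Server-side web search tool; the version string is an assumption.
    tools=[{"type": "web_search_20250305", "name": "web_search", "max_uses": 8}],
    messages=[{
        "role": "user",
        "content": "Bring me up to speed on Acme Corp since May 1, 2025.",
    }],
)

# The response interleaves thinking, tool-use, and text blocks; print the prose.
for block in response.content:
    if block.type == "text":
        print(block.text, end="")
print()

The subject and date in the user message are placeholders; swap in whoever or whatever you need to catch up on.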
We’ll leave you with something cool: MIT’s latest generative-AI design let a jumping robot leap 41% higher and land more steadily than anything its human inventors came up with.
AI Disclosure: We used generative AI in creating imagery for this post. We also used it selectively as a creator and summarizer of content and as an editor and proofreader.
Models like Veo 3 have at their heart the same transformer process as large language models like Claude and ChatGPT. But they are working from patterns in images rather than text.
A bit of a quibble w/ your analysis on why frontline workers aren't adopting AI as fast as their white-collar bosses/counterparts:
"We see a noteworthy dynamic at play here. Leaders have the autonomy to experiment without immediate productivity pressures, although they face different pressures, like board questions about AI strategy and ROI, that create their own sense of urgency. Frontline employees, meanwhile, often lack the permission and the support to integrate these tools meaningfully into their daily work. They’re caught between productivity metrics that don’t account for learning time and workflows that haven’t been redesigned to incorporate AI, a combination that makes experimentation feel like a luxury they can’t afford."
Is it also the case that the AI in question is probably some sort of gen AI tool, like Copilot or ChatGPT, which is less applicable to employees who are working cash registers, fixing telephone poles, and assembling automobiles? Conversely, I would assume that white-collar (i.e., deskbound) employees aren't using robotics as much as some of their frontline counterparts. Different tools for different kinds of work.