Confluence for 8.18.24
The importance of "breaking the pattern" in AI output. How AI is affecting recruitment. Google announces Gemini Live. A note on quality.

Greetings and welcome to Confluence. Before we get started, we will note that Midjourney continues to improve as a tool for image generation. We will likely write about it next week, but in the meantime, consider the header image for our issue this week. If the fact that you can produce something like that, essentially for free and in seconds, doesn’t blow your mind … well, we don’t know what would. With that said, here’s what has our attention this week at the intersection of generative AI and corporate communication:
The Importance of “Breaking the Pattern” in AI Output
How AI Is Affecting Recruitment
Google Announces Gemini Live
A Note on Quality
The Importance of “Breaking the Pattern” in AI Output
Grasping the full potential of today’s AI models requires looking beyond their default outputs.
Last week, The Atlantic published a piece by Caroline Mimbs Nyce titled “Why Does AI Art Look Like That?” investigating why so much AI-generated imagery looks so similar. Mimbs Nyce notes that, across models, AI-generated art is “stuck with a distinct aesthetic”:
The colors are bright and saturated, the people are beautiful, and the lighting is dramatic. Much of the imagery appears blurred or airbrushed, carefully smoothed like frosting on a wedding cake.
It’s an aesthetic that by now many of us know well, immediately recognizable for what Mimbs Nyce refers to as its “weird, cartoonish look.” Indeed, we’ve used the term “cartoonish” to describe this look for nearly a year now.
The piece goes on to explore the causes of this from multiple angles, which is interesting and worth understanding. But, in our view, the phenomenon demonstrates a more important point, which Mimbs Nyce alludes to when she writes that “A user can get around this algorithmic monotony by using more specific prompts … But when a person fails to specify, these tools seem to default to an odd blend of cartoon and dreamscape.” The emphasis in the quotation is ours, and we could not agree more: it’s the default output of AI image models that has such a similar aesthetic. The models are capable of producing much more interesting, nuanced, and differentiated outputs, but it takes effort and skill on the part of the user.
The same is true for language models. Without an attempt to prompt the model in a specific direction or to provide the model with relevant context, the output will be generic. We hear this time and again when people note that something “just sounds like AI.” This is the linguistic equivalent of the image-aesthetic phenomenon Mimbs Nyce explores. By default, and without prompting, the output of AI models does “just sound like AI.” But they’re capable of much more.
Ethan Mollick calls attention to this in his book Co-Intelligence, when he writes about the power of “breaking the pattern”:
The default output of many of these models can sound very generic, since they tend to follow similar patterns that are common in the written documents that AI was trained on. By breaking the pattern, you can get much more useful and interesting outputs.
To tie it back to Mimbs Nyce’s article, there are very real technical reasons why AI output looks so similar by default. It’s important to distinguish, however, between the default output of these models and their actual potential. As with anything, it takes a bit of effort and ingenuity on the part of the user to get the most out of the tool. When we assess the current tools and models based on their default outputs, we get an incomplete picture of their latent potential — and increase the likelihood that we miss out on how they can help us in concrete ways, right now.
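For readers who work with these models through an API rather than a chat window, the contrast is easy to make concrete in code. Below is a minimal sketch using the Anthropic Python SDK that compares a default, context-free prompt with one that supplies voice, audience, constraints, and source material. The model name and prompt wording are our own illustrative assumptions, not anything prescribed by Mollick or Mimbs Nyce.

```python
# A minimal sketch of "breaking the pattern" with a language model.
# Assumes the anthropic package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()

MODEL = "claude-3-5-sonnet-20240620"  # assumed model name; substitute whatever you use

# Default-style prompt: little direction, little context.
generic = client.messages.create(
    model=MODEL,
    max_tokens=400,
    messages=[{
        "role": "user",
        "content": "Write an announcement about our new employee recognition program.",
    }],
)

# Pattern-breaking prompt: voice, audience, constraints, and source material.
specific = client.messages.create(
    model=MODEL,
    max_tokens=400,
    system=(
        "You are drafting internal communications for a 2,000-person manufacturing "
        "company. Write in plain, warm, second-person language. Avoid corporate "
        "cliches such as 'excited to announce' or 'world-class'. Keep it under 150 words."
    ),
    messages=[{
        "role": "user",
        "content": (
            "Draft an announcement of our new peer-to-peer recognition program. "
            "Key facts: nominations open Monday in the HR portal; any employee can "
            "nominate any other; winners are recognized at the monthly town hall."
        ),
    }],
)

# Print both drafts side by side to compare the default output with the directed one.
print(generic.content[0].text)
print("---")
print(specific.content[0].text)
```

The point isn't the specific wording of the prompts; it's that the second call gives the model something to work with beyond its defaults, which is where the payoff of breaking the pattern shows up.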
How AI Is Affecting Recruitment
A new piece in the Financial Times sheds light on the obstacles recruiters face as more candidates lean on AI.
Today’s AI tools are already remarkably proficient at generating content that is “pretty good.” With minimal effort, users can quickly produce a substantial volume of coherent text that, to a casual reader, sounds perfectly reasonable. This capability, while impressive, carries a host of implications for organizations, and we believe it’s important to flag them when we see them.
A recent Financial Times article raises one such implication: the growing use of AI in job applications. Surveys cited in the piece indicate that approximately half of job applicants are now using AI tools in their application process — from fine-tuning resumes to submitting responses to written candidate assessments created wholly by AI. This trend, combined with the ease of online job submissions, presents a new challenge for recruiters: they’re facing higher volumes of applications, often of diminishing quality.
The piece is worth reading in full. It highlights a common theme: the significant difference between free and paid versions of AI tools and the need for a “cyborg approach” to get to the best possible output.
But the real reason this piece is worth your attention is how it highlights the need to reconsider how we evaluate and select talent. Are our processes aligned with the realities of a world where advanced AI tools are widely available? Are we identifying the skills and potential needed for success in both the present and future, rather than relying on criteria that may have been relevant five years ago?
These aren’t just rhetorical questions. They represent critical challenges that organizations must address to remain competitive. At our firm, we’ve been actively engaging with these issues for the past 18 months, doing our best to make sure we are identifying, selecting, and developing talent for a world in which AI is fully integrated into how we work. If you and your teams haven’t been having these conversations, now is the time to start.
Google Announces Gemini Live
One step closer to everyday AI integration.
Google unveiled Gemini Live on Tuesday, positioning it as an answer to OpenAI's Advanced Voice Mode in ChatGPT. The technology aims to understand context, adapt to individual speech patterns, and engage in multi-turn dialogues. It opens up new avenues for personalized interactions, real-time information access, and dynamic problem-solving across personal and professional domains. Users might have a productive voice conversation with the assistant about what to make for dinner based on a photo of fridge contents, or ask the assistant to scan their inbox for important dates to add to their calendar. While these capabilities are promising, they also bring potential risks that warrant consideration.
As information becomes increasingly accessible through conversational interfaces, the distinction between verified facts and AI-generated content becomes less clear. The lack of visible source citations in Gemini Live's responses prompts questions about information credibility and the potential for misinformation. This underscores what we’ve long known to be true: critical thinking skills and digital literacy will continue to be paramount in an increasingly AI-influenced world.
Privacy considerations add another layer of complexity. As AI assistants like Gemini Live integrate further into our daily routines, they access more personal and potentially sensitive information. The risk of unauthorized access or data breaches through voice commands introduces new security concerns that users and developers need to anticipate.
As AI technology like Gemini Live or Advanced Voice Mode integrates into our daily routines, it will be important to remain attentive. Stay curious about these advancements, exploring their benefits while keeping a critical eye on potential pitfalls. By staying informed and engaged, we'll be better equipped to use AI tools wisely and ensure they improve our lives rather than complicate them.
A Note on Quality
Why we think “But the AI made mistakes” can be the wrong conclusion.
This past week a few of us were fortunate to spend a day on-site with a client’s corporate communication team conducting an in-depth orientation to generative AI and its use cases. It was a great day, and the team was all-in, wanting both to understand how the technology works, including its limitations (which we think is critical to using it well), and to get into the nuts and bolts of how to apply it in their day-to-day work.
In the course of the day, we returned several times to one conversation that we think is worth noting here. It was about quality, and what to make of the varying levels of quality that generative AI can produce. Some of this is subjective — is the text / idea / image the AI produced good enough? — and we think prompting and knowing the relative strengths and weaknesses of generative AI are key to getting the best output you can muster (and you can see our prompt engineering guide here). But some of this is objective: is the citation in the generative AI output correct? Is the story it’s referring to real? Are the names accurate? Because generative AI tools are generating outputs, not searching for and serving them, this dimension of quality is a critical consideration: generative AI can (and very often does) get facts wrong.
Many use this as a reason to dismiss generative AI. “You can’t trust it,” is the claim. But we actually think this is the wrong, and a very limiting, conclusion to draw. Our take, and one we raised several times in our session this past week, is, “Well, whose work can you truly trust?” By that we mean we have all sorts of quality assurance that we apply when working with human colleagues, based on the premise that people can and do make mistakes. We fact-check. We copy edit. We ask for editorial review of creative products. We check the spelling and the math.
The truth is that when working with people we regularly engage in review processes to assure quality, because we know that while we can have great confidence in our colleagues, quality requires review. Yet with generative AI, people seem to hold it to a different standard. Maybe because it’s a computer program underneath, they expect it to be deterministic, like a spell-check program or an Excel formula. But that’s not how these things work. These tools work an awful lot like people, creating output based on what you’ve asked for, and like people, they can make mistakes and get facts wrong.
For us, this doesn’t matter — unless we don’t bother to check. But that’s also true for work our live colleagues produce, and with our colleagues we always check. In fact, we usually have three levels of review: one with a colleague providing peer review for quality of thinking, one self-directed (with the creator using spell check, for example), and finally, a third review, by a colleague, for final quality assurance (facts, typos, names, figures, etc.). Only then does the work go out the door.
We think it’s helpful to hold generative AI to the same standard. Rather than thinking of ChatGPT or Claude as a computer to which you have outsourced work, think of it as a colleague who is doing some of the work. Subject it to the same quality assurance that you’d use with any colleague. Otherwise, you’re walking away from the large amount of value these tools can add to your workflow. Sure, they make mistakes. So do we, so do you. So take advantage of their strengths, and take the appropriate steps to mitigate their quality weaknesses. It’s what we do, and our work is better for it.
We’ll leave you with something cool: a little video showing what our AI leadership coach, ALEX, can do.
AI Disclosure: We used generative AI in creating imagery for this post. We also used it selectively as a creator and summarizer of content and as an editor and proofreader.
I feel like I'm starting to hit my stride in using gen-ai. Last week, I was working with one of my direct reports. She's rolling out a campaign on leading with empathy. Great content, message, and delivery. But I was challenging her to think of ways that we can keep the message alive. We were struggling at first, then I pulled up ALEX and Claude and started brainstorming. In 30 minutes, gen-ai helped us come up with a tagline, a digital giveaway, and ideas on how to expand on our current message to include not only expressing empathy, but to take it a step further and commit to helping solve the customer's problem. Overall, the experience felt productive. For the first time, it felt like we were working with another person on the team.