Confluence for 10.20.2024
The importance of communicating AI strategy. Can someone create a chatbot version of you? A step toward hallucination-free generative AI. More evidence of LLMs taking work from freelancers.
Good day and welcome to Confluence! Before we dive into this week’s content, we’ll point you to Ethan Mollick’s latest Substack post, “Thinking Like an AI.” Part of the challenge in helping people get the most from generative AI is helping them understand how these models work, because, unlike the transmission in your car, you really do need some sense of how these things work if you’re going to do a good job of taking advantage of their strengths and avoiding their weaknesses. We’ve used this article from the Financial Times as a primer in the past, but Mollick’s latest is equally good and perhaps a bit more accessible. Give it a read and share it broadly.
With that said, here’s what has our attention this week at the intersection of generative AI and corporate communication:
The Importance of Communicating AI Strategy
Can Someone Create a Chatbot Version of You?
A Step Toward Hallucination-Free Generative AI
More Evidence of LLMs Taking Work from Freelancers
The Importance of Communicating AI Strategy
A new Gallup survey reveals a chasm between leadership and employee perceptions of AI rollouts.
One of the trends we’ve observed this year is organizations moving from the exploration of generative AI to the implementation of tools at scale. Data are beginning to emerge on how that is playing out, and one recent body of research that captured our attention is Gallup’s AI in the Workplace: Answering Three Big Questions. The three questions? How many U.S. employees are using AI, how they’re using it, and how organizations can improve adoption. The data on each question are interesting, but one finding in particular stood out to us: the gap between leadership and employee perceptions of organizations’ AI strategies and rollouts — and the role communication teams can play to close that gap.
The article cites a recent Gallup survey that found 93% of Fortune 500 CHROs say their organizations have begun “using AI tools and technologies to improve business practices.” Only one-third of employees, however, say their organizations “[have] begun integrating new artificial intelligence (AI) technology or tools to improve business practices.” That’s a massive gap, and Gallup’s conclusion is both obvious and critical for communication teams: “If leaders want to achieve the productivity and innovation gains that AI promises, they need to clearly communicate their plans.”
Here’s Gallup’s full conclusion, with additional data that point to key differences between employees who feel their organization has a plan and those who don’t (the emphasis below is ours):
As mentioned above, while 93% of CHROs say they have begun using AI in their organizations, only 33% of employees have heard about it. And only 15% of employees say their organization has communicated a clear plan or strategy for integrating AI technology into current business practices.
However, when employees strongly agree that there is a clear plan, they are 2.9 times as likely to feel very prepared to work with AI and 4.7 times as likely to feel comfortable using AI in their role.
In other words, only a small part of the workforce is self-motivated early adopters. Most employees won’t feel comfortable with AI until leadership communicates a plan.
We won’t belabor the point, but for communication leaders in organizations that have begun a broad rollout of generative AI tools, now is the time to get involved. The success of sizable generative AI investments may well depend on it.
Another finding is also worth noting: the number of employees who say they are “very prepared” to work with AI actually dropped from 2023 to 2024, with only 6% of employees stating that they are “very comfortable” using AI in their roles, and 16% saying they are “very or somewhat comfortable” (compared to 32% reporting they are “very uncomfortable”). The survey found that 70% of employees say their organization does not have guidance or policies for using AI at work, and that “almost half of workers who use AI at least once a year say their organizations have not offered any sort of training on using AI at work.” As we’ve been advising our clients for many months, it’s essential to establish and communicate guidance about AI use and to train employees to use the technology.
There’s also a growing body of academic research that shows that just giving people access to these tools is not enough for employees to really benefit from them. Unlike a word processor, use cases for generative AI aren’t self-evident. You simply must give people some guidance, because adoption isn’t going to take care of itself. As with any major change initiative, it will require a rigorous change strategy. And communication is at the center of that.
Can Someone Create a Chatbot Version of You?
An example of how social norms and the regulatory environment lag behind the capabilities of generative AI.
Can someone create a chatbot version of you? This isn’t a question many of us have likely asked ourselves. But as an article published this week by Wired shows, the answer is yes. And the implications are unsettling.
The story centers on a father who learned someone had used Character.AI to create an AI chatbot based on his deceased daughter. The example is jarring, and frankly, to us, a bit upsetting. But it also highlights two critical points for communication professionals to keep in mind as we navigate the rapidly evolving landscape of generative AI.
First, social norms around the use of generative AI remain largely undefined. What’s upsetting and off-putting to some may not be to others, and we’re still in the early stages of understanding what this technology can do and how people will respond to various use cases. Just because certain things are possible (say, creating an AI chatbot of a CEO that anyone in the organization can speak with) doesn’t mean we should do them.
Second, just as social norms will lag the technology, so too will the legal and regulatory environment. We simply don’t have the frameworks to regulate what this technology can do today, let alone what it will be capable of doing in the future. The gap between technological capability and legal oversight is widening, leaving a gray area that communication professionals must navigate with care.
Which brings us to the topic of governance. These tools need it, and anyone scaling generative AI as a communication or leadership asset needs to ensure it. This means intellectual property and legal guidance, yes, but it also means oversight of use cases, consideration of implications for skills and talent, and more (topics we find most organizations are spending little to no time thinking about). We should not implement new AI applications simply because they’re available or seem innovative. We need to think through how people will respond to the use of generative AI for specific purposes and the second-order consequences of those uses, and we need to weigh the potential benefits against the risks and ethical implications.
A Step Toward Hallucination-Free Generative AI
And why fewer AI errors should lead to more human oversight, not less.
Because generative AI is a predictive technology (it’s predicting what it should say next, word after word), it can make mistakes and invent facts. We call these mistakes “hallucinations,” but “errors” is a more honest term, and they are an inevitable consequence of how large language models (LLMs) work under the hood.
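To make that concrete, here is a toy sketch of what “predicting the next word” looks like. The context, candidate words, and probabilities below are entirely made up for illustration; this is not how any production model is actually built, just the shape of the idea:

```python
# Purely illustrative: a made-up next-word distribution, not any real model's code or data.
import random

# Hypothetical probabilities a model might assign to the next word after
# the context "The CEO announced ..."
next_word_probs = {"record": 0.55, "layoffs": 0.30, "a": 0.15}

def predict_next_word(probs: dict[str, float]) -> str:
    """Sample the next word in proportion to the model's probabilities."""
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next_word(next_word_probs))  # plausible, but not checked against any source of truth
```

Each word is chosen because it is probable given what came before, not because it has been verified, which is why fluent but false output is always possible.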
Which is why we were interested to see that Primer Technologies just unveiled an update to its AI platform that achieves near-zero hallucination rates when analyzing massive datasets. This achievement marks a significant step toward more trustworthy AI models, especially in high-stakes environments where precision is crucial (it’s worth noting that Primer is a US government contractor, so we’ll be watching what this development means for highly regulated industries). Fewer hallucinations will have big implications for how we integrate generative AI into our workflows. With its recent update, Primer’s platform boasts 99.9% accuracy when retrieving data from large datasets, bringing us that much closer to AI systems we could rely on for critical tasks like drafting press releases or fact-checking sensitive communications with fewer (indeed, virtually zero) hallucinations.
The temptation here is clear: as AI accuracy improves, organizations might be inclined to relax their checks and balances. That’s a risky move. Instead of viewing improved accuracy as an invitation to reduce oversight, we should see it as an opportunity to enhance transparency in our AI-assisted processes. The more communicators integrate AI, the more stakeholders will demand clarity on how we source and verify information. Transparent AI use aligns with corporate values and demonstrates a commitment to ethical practices.
So what’s the path forward? We think it’s much the same as it has been, but with a bit more urgency. We’ve got to harmonize AI efficiency with human oversight. As AI models like Primer's push the boundaries of accuracy, our role as communicators must evolve. We need to ensure that transparency and ethical considerations remain at the forefront of our practices, even as the chances of error continue to diminish.
More Evidence of LLMs Taking Work from Freelancers
And we think this is the beginning of a trend.
If you’re going to think intelligently about the labor implications of generative AI, you need to think in terms of “task overlap,” not “job replacement.” There are few generative AI tools that can complete a human’s job from beginning to end (at least for now), but almost any generative AI tool can either do, or augment, individual tasks that a human completes in the course of their job. This means that the “they’re going to take our jobs” argument, at least for now, is a red herring. The real question is, “How much of your work will they replace or improve?”
That said, we do believe there are some roles that have very high exposure to generative AI tools, and that more than a few of these are in the creative arts freelance market: staff writers, PR talent, graphic artists, speechwriters, etc. Much of that work, we feel, LLMs with strong prompting can do quite well, and we’ve been telling clients for some time that they should be attuned to what they can do in-house with generative AI tools at a lower cost than outsourcing the same work to freelance talent.
Which is why we noted this recent paper, “The Short-Term Effects of Generative Artificial Intelligence on Employment: Evidence from an Online Labor Market.” Summarized in one paragraph by Claude Opus:
This paper studies the short-term effects of the introduction of ChatGPT in November 2022 on employment outcomes of freelancers on a large online labor platform. Using a difference-in-differences research design, the authors find that freelancers offering services in writing-related occupations, which were most affected by ChatGPT’s capabilities, experienced a 2% decrease in the number of monthly jobs and a 5.2% decrease in monthly earnings compared to freelancers in less affected occupations. The negative effects were observed on both the extensive margin (lower probability of getting any job in a month) and intensive margin (fewer jobs and lower earnings conditional on working). The authors find similar adverse effects from the introduction of image-focused AI models on freelancers offering design and creative services. Interestingly, they find no evidence that higher quality freelancers were less impacted by AI, and in fact show that top performers may have been disproportionately affected. The results suggest generative AI models are substituting for human labor and reducing demand for knowledge workers of all quality levels in affected occupations in the short run.
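For readers less familiar with the method, a difference-in-differences design compares how outcomes changed for exposed freelancers (here, writing-related work) before and after ChatGPT’s release against the change for less exposed freelancers over the same period. Below is a minimal sketch of that comparison using simulated data; the variable names, effect sizes, and numbers are our own inventions to illustrate the estimator, not the paper’s data or code:

```python
# Illustrative only: simulated data and a toy difference-in-differences regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "affected": rng.integers(0, 2, n),  # 1 = writing-related occupation (hypothetical flag)
    "post": rng.integers(0, 2, n),      # 1 = observation after ChatGPT's release
})
# Simulate log earnings with a negative effect only for affected freelancers post-release
df["log_earnings"] = (
    5.0
    + 0.10 * df["affected"]
    + 0.05 * df["post"]
    - 0.05 * df["affected"] * df["post"]  # the effect the regression should recover
    + rng.normal(0, 0.2, n)
)

# The coefficient on affected:post is the difference-in-differences estimate
model = smf.ols("log_earnings ~ affected * post", data=df).fit()
print(model.params)
```

The interaction coefficient captures how much more (or less) earnings changed for the exposed group than for the comparison group after the release, which is the comparison behind the paper’s headline numbers.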
The finding about top performers being disproportionately affected caught our attention in particular. At first we didn’t have much of a working theory as to why that might be the case, so we asked Claude 3.5 Sonnet. Here’s what it had to say:
There are a few potential theories that could explain why top performers were disproportionately affected by the introduction of generative AI tools like ChatGPT, according to the findings of this paper:
Skill compression: Generative AI may be narrowing the productivity gap between high and low-skilled workers in certain tasks. As the paper suggests, this is "consistent with the growing experimental evidence that the adoption of LLM differentially benefits low-ability workers relative to high-ability ones." AI tools could be elevating the baseline quality and efficiency for many tasks, reducing the relative advantage of top performers.
Task substitution: The tasks that top performers excel at may be more easily replicable by AI compared to tasks performed by average workers. High-level writing, editing, and creative tasks that were previously the domain of top freelancers may now be feasible for AI to handle.
Client perceptions and expectations: Clients may perceive AI as a substitute for top talent, even if that's not entirely accurate. They may be more willing to use AI tools or lower-cost freelancers supplemented by AI rather than paying premium rates for top performers.
Pricing pressure: The availability of AI tools may be putting downward pressure on pricing, particularly for high-end services. Clients may be less willing to pay premium rates if they believe AI can produce similar quality work.
Overspecialization: Top performers may have highly specialized skills that are more vulnerable to AI disruption in the short term, while more versatile generalists might adapt more quickly.
Early adoption by competitors: Lower-skilled workers or newer entrants to the market may be quicker to adopt and integrate AI tools into their workflows, temporarily gaining an advantage over established top performers.
Volume vs. quality shift: AI may be shifting the market towards higher volume, lower cost work rather than premium quality work, disadvantaging top performers who rely on high-value, specialized projects.
Reputation lag: The reputational advantage of top performers may decrease if clients perceive AI-assisted work from less established freelancers as being of comparable quality.
We think the notion of skill compression is important. Other research shows that generative AI can make everyone better, but that it tends to disproportionately improve the work of lower performers — in this case, giving top creative talent more competition as generative AI improves the work of lower-skilled competitors. We buy that.
Finally, if we were betting folks, we’d say this is the start of a trend. As generative AI becomes more powerful, as corporate users become more proficient, and as organizations start to create agents with specific uses (like creative writing or graphic design within an organization’s brand standards), we expect demand for freelance talent to continue to drop. It’s a signal all professionals should note: some part of your job has exposure to generative AI. In some cases that exposure can make you better. In other cases, generative AI can do the work without you. Our job is to be aware of that exposure, evolve our own work so we continue to do things that only humans can do, and use generative AI to make our only-human work even more powerful and valuable than before.
We’ll leave you with something cool: a three-minute video, with 26 scenes, created from idea to final product in three minutes. Remember what we wrote above about freelance markets?
AI Disclosure: We used generative AI in creating imagery for this post. We also used it selectively as a creator and summarizer of content and as an editor and proofreader.