Welcome to Confluence
Perspectives on the Intersection of AI and Corporate Communication from the Strategic Communication practice at CRA | Admired Leadership
Welcome to Confluence, from the Strategic Communication practice at CRA | Admired Leadership. As advisors in the field of corporate communication, we’ve found ourselves immersed in generative AI since late 2022, a journey shaped by curiosity and necessity. It’s been an interesting exploration, and what we’ve learned in just these 12 months has both reshaped our work and proven valuable to our clients. But the exploration continues, and will continue (we expect) for years. With this in mind, we’ve launched this Substack as a way to more broadly share ideas, insights, and thinking that capture our attention at the intersection of AI and communication. Our goal is not just to provide theory (although we feel the evolving theory on generative AI is important to understanding the decisions, risks, and opportunities it presents). We’ll also offer a hands-on, practical perspective as we continue to learn, adapt, and work at this important intersection of communication and technology.
With that out of the way, several items have our attention which we’d like to share:
The Atlantic Profiles Sam Altman
Claude 2
ChatGPT’s Custom Instructions
Generative AI as Writing Tutor
The Question of Disclosure
Let’s get started.
The Atlantic Profiles Sam Altman
This is an excellent article to help you or others get up to speed on generative AI.
The September issue of The Atlantic features Ross Andersen’s lengthy profile of one of the most powerful and influential figures in AI today, OpenAI CEO Sam Altman. OpenAI is most notable for its creation of ChatGPT, DALL-E, and its multi-billion dollar investment from Microsoft. OpenAI will almost certainly play an outsized role in shaping major developments in both the consumer and enterprise technology landscape, and for that reason alone it’s worth understanding the company’s origins, technological approach, and vision for AI. The article provides an insightful window into all of that. But more important, it’s a valuable primer on how large language models work, the key developments in AI that led to our current moment, and the economic, technological, and ethical dynamics that will influence where we go from here. For anyone looking to get up to speed on generative AI technology and the current state of affairs, this piece is a good place to start.
“Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.”
Claude 2
This generative AI model from Anthropic can be better at creating text than GPT-4. You should probably be using both.
While ChatGPT has become synonymous with large language models and generative AI, it’s far from the only game in town. Earlier this summer, Anthropic launched its competitor to ChatGPT, Claude 2.
After spending time with both ChatGPT (using GPT-4) and Claude 2, we’ve found them comparable in their abilities. The differences between the two are nuanced, and our best piece of advice is to experiment with both. If you find yourself not getting the results you want from ChatGPT, give Claude 2 a try. This piece on TechCrunch covers some of the technical differences between Claude 2 and other models while doing a nice job of addressing why you should care about the variance across them.
ChatGPT’s Custom Instructions
A new feature from OpenAI immediately makes ChatGPT more useful.
Last month, OpenAI introduced custom instructions for ChatGPT, noting that custom instructions “allow you to add preferences or requirements that you’d like ChatGPT to consider when generating its responses.” In practice, you define your custom instructions by answering two questions: 1) What would you like ChatGPT to know about you to provide better responses? and 2) How would you like ChatGPT to respond?
At least so far, we’ve found them valuable. As with writing AI prompts, you should experiment with different approaches and see what works best for you. To determine what your own custom instructions might look like, consider:
What instructions do you repeatedly insert into your prompts?
What language, types of responses, or other recurring elements in ChatGPT’s outputs do you find annoying or lacking in value?
What are the characteristics of the responses from ChatGPT that you find particularly valuable?
The answers to those questions can be a useful starting point in defining your own custom instructions. What constitutes a useful instruction will vary from person to person, of course, but we’ve seen success with the following:
Never tell me “As a large language model…” or “As an artificial intelligence…”
Suggest solutions I didn’t think about.
If your content policy is an issue, provide the closest acceptable response and explain the content policy issue.
When I have asked for a response of a specific length that exceeds your response-length limits, assume I will say “continue” and that you will continue with the next response.
Think step by step. Consider my question carefully and think of the academic or professional expertise of someone that could best answer my question. You have the experience of someone with expert knowledge in that area. Be helpful and answer in detail while preferring to use information from reputable sources.
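If you or your team reach these models through OpenAI’s API rather than the ChatGPT interface, the closest equivalent to custom instructions is a system message sent with each request. Here’s a minimal sketch, assuming the OpenAI Python SDK available as of this writing; the API key placeholder, instruction text, and user message are illustrative, not a prescribed setup:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

# Standing preferences, roughly equivalent to ChatGPT's custom instructions
CUSTOM_INSTRUCTIONS = (
    "Never tell me 'As a large language model...' or 'As an artificial intelligence...'. "
    "Suggest solutions I didn't think about. "
    "Think step by step, and answer in detail, preferring reputable sources."
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},  # applied to every request
        {"role": "user", "content": "Draft a three-sentence summary of our Q3 town hall."},
    ],
)

print(response.choices[0].message.content)
```

In ChatGPT itself, no code is required: the custom instructions setting handles this for you and carries across new chats.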
Generative AI as Writing Tutor
With the right prompt, you can ask GPT-4 to act as a private writing coach.
One form of utility for generative AI that we’ve been digging into is the role of tutor or guide. The technology seems surprisingly adept at outlining learning programs, providing external sources for learning, reviewing work, and more. Ask it to be an expert tutor for you in learning about Impressionist art, for example, and see what you get. A central part of the developmental path at our firm is building strong writing, especially in the firm’s voice. This happens primarily through direct mentorship and collegial review and feedback on every meaningful deliverable. That said, more practice can be helpful, and we’ve crafted a prompt for our own teams to use as a foil for independent writing skill development. Modify it to reflect your own organization and your own style guide and see what you might get. Here’s the prompt:
I want to begin a program to improve my writing. Here is the context: I am a professional working in a consulting firm. The firm has a distinctive voice -- it is professional and yet has a relational closeness. It uses strong verbs and nouns rather than relying on adjectives and adverbs. It avoids semicolons, favoring clear sentence structure. The voice is for the ear rather than the eye, and pacing matters. The common punctuation marks are periods and commas, with an occasional question mark. It is very much in the active voice. It follows the rules of the AP style guide, and tries to be very close to the advice of Elements of Style and On Writing Well. In this chat I want you to play the role of a writing tutor. You are a professional and not an academic, and you are expert in the style I have described to you. You are expert in creating unique writing assignments that help improve the style I have developed, and you are expert in reviewing the results of those assignments and giving specific feedback that improves my writing based on those assignments. You will tutor me in a progression of assignments, both ones you create for me to complete and in me providing text for your critique and on which you can provide feedback. The assignments should be ones I can complete in less than 30 minutes. We can take our time doing this. You are free to explain your thinking step-by-step. Here is an example of our firm's prose to help you in tutoring me: [text sample follows]
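In ChatGPT, you can simply paste the prompt above into a new chat and work through the assignments from there. If you’d rather fold it into a small internal tool, here’s a minimal sketch of a tutoring loop using the OpenAI Python SDK, with the prompt supplied as the system message; the model name, key placeholder, and loop structure are assumptions for illustration, not our production setup:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

# Paste the full writing-tutor prompt (including your firm's text sample) here
TUTOR_PROMPT = "I want to begin a program to improve my writing. ..."

# The running message list preserves the back-and-forth between tutor and writer
messages = [{"role": "system", "content": TUTOR_PROMPT}]

while True:
    user_input = input("You: ").strip()
    if not user_input:
        break  # an empty line ends the session
    messages.append({"role": "user", "content": user_input})
    response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(f"\nTutor: {reply}\n")
```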
The Question of Disclosure
The time to start thinking about a disclosure policy for AI use is now.
Should you reveal the use of generative AI? We don't blink an eye when someone uses spellcheck or autocomplete, yet tools like ChatGPT and Google Bard stir a different reaction. As leaders and teams begin to map out their AI strategies, it’s worth giving this question attention now.
It’s a bit of a dilemma, as there’s no definitive standard on AI disclosure in content creation, and there likely won’t be one for some time (perhaps years). But there is a key factor that should influence your consideration of disclosure: not merely whether you used these tools, but how you deployed them. With capabilities ranging from fine-tuning a single sentence to crafting entire messages in a specific voice, the existing tools (to say nothing of future tools or improvements) can play vastly different roles in creating content. We are convinced that any decisions or policies related to disclosure need to account for the sheer number of roles that AI can play in our work and the message its use sends to audiences.
Our advice – think deeply about this now and consider the range of consequences your policies and decisions carry with them. Our prophecy – the establishment and erosion of trust is going to be a domain of long-tail consequences for corporate communication as employees become sensitized to AI as a possible originator of what they read and hear. When people know a machine can be behind the text or image or video, what (and whom) will they trust? And how do we establish and maintain that trust as we use these tools? These are crucial questions. Having a clear perspective on disclosure, and providing dependable assurances as to content origins, is something communication functions are going to have to reconcile in the interest of preserving trust in “official” voices.
As for us, we disclose generative AI use, as you can see at the bottom of this newsletter. Here’s another example from one of our recent whitepapers:
AI Use Disclosure: The authors used generative AI as an editorial and proofreading resource in the creation of this content. It was also a resource in creating first drafts of the “What Is a Pre-Trained Transformer” and “Prompt Engineering” sections, and in creating alternative drafts of the “Closing Thoughts” section. All content has benefited from human review and revision. Midjourney created the image on page one. All use protected client confidentiality.
At least for now, we’re erring on the side of giving clients and others clarity about what content comes from us, and what comes from the machine.
That’s it for our first issue of Confluence. We welcome your feedback, and as we go, we’ll leave you with something cool for kids …
AI Disclosure: We used generative AI in creating imagery for this post. We also use it selectively as a summarizer of content and as an editor and proofreader.