Confluence for 11.5.23
An AI executive order in the U.S. An AI safety summit in the U.K. Microsoft 365 Copilot goes live for enterprise customers. Ethan Mollick on prompting. Experiments in AI-powered prompting for imagery.

Welcome back to Confluence. Government action on both sides of the Atlantic captured most of the AI-related headlines this week, but those aren’t the only developments worth noting. Here’s what has our attention at the intersection of AI and corporate communication:
An AI Executive Order in the U.S.
An AI Safety Summit in the U.K.
Microsoft 365 Copilot Goes Live for Enterprise Customers
Ethan Mollick on Prompting
Experiments in AI-Powered Prompting for Imagery
An AI Executive Order in the U.S.
The directed actions will shape AI’s technological, economic, and societal trajectory — but don’t expect any immediate impact.
As many readers will know, last Monday the Biden White House issued a wide-ranging executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” You can find a Fact Sheet summary of the order here and the full text here. In the days since the order was issued, the question we’ve heard most from clients is “What does this mean for us?” The short answer (unless you happen to be developing a foundation AI model) is: not much — at least for now.
The order directs roughly 25 actions across eight categories ranging from safety, security, and privacy to consumer rights, worker support, and more. Most or all of the actions will be important to follow as they develop, and they will no doubt shape the trajectory of AI’s continued development and the subsequent economic and societal impact in the coming years. That said, in our view, there are three main takeaways from the Executive Order that Confluence readers should know right now:
You should not expect any immediate changes as a result of the Executive Order. The majority of the actions are about establishing guidance and best practices, conducting research, developing standards, and so on. While many of these actions may eventually affect your organization (more on that below), you should not expect to feel meaningful effects any time soon.
This is not likely to slow down the pace of immediate AI development and progress. Reactions across the technology landscape are, not surprisingly, mixed, with the order’s section on safety and security proving most polarizing. Time will tell how the safety and reporting provisions play out, but in our view the Executive Order will not slow the pace of development in the near term. We should continue to expect rapid capability improvements, both in the foundation models and the applications sitting on top of them. Indeed, the section of the order dedicated to Promoting Innovation and Competition calls for “catalyz[ing] AI research across the United States” and “us[ing] existing authorities to expand the ability of highly skilled immigrants and non-immigrants with expertise in critical areas to study, stay, and work in the United States.”
The broad sweep of the Executive Order reinforces that generative AI is indeed a general purpose technology — and the Executive Branch is treating it as such. Why does that matter? As we’ve written before, general purpose technologies “have pervasive impact, improve over time, and both require and spawn complementary innovation.” The wide-ranging scope of the Executive Order gives implicit recognition to the general purpose nature of this class of technologies. We should continue thinking about them as such and prepare accordingly.
Even with minimal short-term consequences, we believe a few of these actions are worth your attention given what they could mean for organizations in general and communication in particular. We’ll highlight three, though we encourage readers to review the full list themselves and draw their own conclusions:
“Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.” As machine-generated content proliferates and expands into new media (from text and imagery to video and audio), separating human- from machine-generated content will be increasingly important. As with any major technological innovation, we expect norms for this to develop over time, but this provision of the order could accelerate standards for adoption in both the public and private sectors.
“Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection.” The discussion of labor impact and job displacement will almost certainly intensify as the technology improves and diffuses more broadly into the workplace. Organizations can expect an increasing volume of guidance on this in the coming years.
“Produce a report on AI’s potential labor-market impacts, and study and identify options for strengthening federal support for workers facing labor disruptions, including from AI.” This is closely related to the item above. When this report materializes, we can expect it to be a “wake-up call” for those not paying attention, while likely offering few surprises for those following developments closely.
An AI Safety Summit in the U.K.
Political and industry leaders from around the world gathered for a first-of-its-kind summit on AI safety and risk.
In yet another sign that world leaders feel a sense of urgency to do something about AI, representatives from 29 countries met with leaders from industry and academia to start an international conversation about AI regulation. With much of the summit taking place behind closed doors, we can’t know exactly how the conversations unfolded. We do know, however, that at the end of the two days participating countries published the Bletchley Declaration in a sign of cooperation.
It’s worth spending a few minutes on the Declaration in its entirety, even though it lacks teeth and clear next steps. The three things to understand are:
There’s a recognition of the potential for AI to do good in the world.
Frontier models in particular introduce significant risk.
The risks that AI introduces are of international concern and thus require international cooperation to mitigate.
What this signals to us: Just as we saw with President Biden’s Executive Order, there’s a desire to keep pushing forward while acknowledging that progress is not without risk. Governments understand they need to do something, even if they aren’t sure what that is. In the near term, expect these officials to lean heavily on industry experts for guidance on how to deal with AI. Over the long term, we expect calls to involve those with broader and more varied areas of expertise to increase, given the pervasive effects of AI.
Microsoft 365 Copilot Goes Live for Enterprise Customers
The power of GPT-4 now sits within the Microsoft 365 ecosystem.
Organizations can now bring the power of GPT-4 inside their firewalls — if they decide to spend $30 per user per month for at least 300 people. This is a major moment — and a likely inflection point in the adoption of generative AI within organizations — that we’ve been anticipating. Integrating the power of GPT-4 into the tools many of us use each day will open the door to hundreds of new use cases for generative AI in corporate communications. On a more macro level, putting these tools into the hands of hundreds or thousands of users will significantly alter the way communication happens in organizations. What happens, for example, when 5,000 people immediately have the ability to create a pretty good newsletter in two minutes?
While it’s still early days for Copilot, here’s what we’ll be watching:
How many organizations adopt Copilot (and just as importantly, at what scale).
How people are using it and what use cases emerge.
The second- and third-order consequences for communication across the organization (one prediction: as noted above, we expect to see an explosion of content creation as it becomes easier for everyone to create quality content).
When Copilot will be available to organizations that wish to purchase fewer than 300 licenses.
We’ll do our best to keep you posted about what we learn as more and more people gain access over the coming weeks and months. If you’re reading this and have access to Copilot, we’d love to hear about your experience and what you’re seeing in your organization — let us know in the comments.
Ethan Mollick on Prompting
“Just using AI will teach you how to use AI.”
If there’s one thing we’ve emphasized more than anything else with our clients over the course of 2023, it’s this: The most important thing you can do right now to better understand and become more proficient with generative AI is to get in the tools and use them. Ethan Mollick, whom we continue to recommend for his practical and insightful guidance on AI use, reinforces this in his recent post, “Working with AI: Two paths to prompting.”
The two paths to prompting that Mollick refers to are what he calls conversational prompting and structured prompting. To use conversational prompting is “to just talk to the AI to ask for what you want. You can even talk to the AI if you don’t know what you want — just tell it what you might need and see what happens.” Structured prompting, on the other hand, “is about turning the AI into a tool that does a single task well in a way that is repeatable and adapts to its user. Since the AI is not always built to do this, it will take experimentation and effort to make a prompt work somewhat consistently (it is very hard to reach 100% consistency with LLMs).”
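For readers who want to see the distinction concretely, structured prompting can be as simple as a reusable template that turns the model into a single-purpose tool. The sketch below is our own hypothetical illustration, not Mollick’s code — the role, task, and constraints are invented examples:

```python
# A minimal sketch of structured prompting: a fixed template that does one
# task repeatably, with only the variable part (the draft) swapped in each time.
# The role, task, and constraints below are hypothetical examples.

TEMPLATE = """You are an internal communications editor.
Task: Rewrite the draft below as a three-bullet summary for employees.
Constraints:
- Plain language, no jargon.
- Each bullet under 20 words.
Draft:
{draft}
"""

def build_prompt(draft: str) -> str:
    """Fill the structured template with a specific draft."""
    return TEMPLATE.format(draft=draft)
```

Unlike conversational prompting, where each request is composed on the fly, the same template is reused across inputs — which is what makes the results “somewhat consistent,” in Mollick’s terms.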
Even by the high standard of Mollick’s posts, this one is particularly useful and worth reading in its entirety. As Mollick writes,
The lesson is that just using AI will teach you how to use AI. You can become a world expert in the application of AI to your domain by just using AI a lot until you figure out what it is good and bad at. This is one of two reasons that I dislike the emphasis on prompting that pervades much of the discussions of AI: it makes using AI systems seem much harder and more mysterious than it is. Just use it and see where that takes you.
There’s one final point related to prompting that we will once again call out here. Figuring out what AI is good at and where it falters means intentionally pushing its capabilities. If it doesn’t give you exactly what you want after the first prompt, try a different approach. Sometimes you’ll find the true edge of its capabilities; other times you’ll find a better way of asking for and getting what you want.
Experiments in AI-Powered Prompting for Imagery
When ChatGPT meets Midjourney.
As we wrote last week, there are tradeoffs to using DALL-E and Midjourney for image generation. DALL-E, now integrated into ChatGPT and Bing Chat, allows for easy conversational prompting and iteration, while Midjourney consistently produces more impressive and surprising images. In our experiments, we’ve found that you can come close to getting the best of both. How? By using ChatGPT to help create prompts for Midjourney. Below is one example of how we’ve done this. While this example focuses on image generation, there are likely many other use cases and possibilities for AI-powered prompting more generally.
In the example below, we wanted to create an image of a cyborg inspired by Rodin’s The Thinker. We started by asking ChatGPT for some ideas.
Prompt: I want to use Midjourney or DALL-E to create an image. I want the image to look like Rodin's The Thinker, but I want the sculpture to be of a robot instead of a human. What would be some good prompts?
Creating a compelling image using language models like MidJourney or DALL-E involves formulating a prompt that is clear, specific, and evokes the artistic elements you have in mind. Considering you're looking for an image that borrows the iconic pose from Rodin's "The Thinker" but substitutes a robot for the human figure, here are some prompt suggestions:
Direct and Simple: "A robot in the pose of Rodin's The Thinker."
Adding Artistic Style: "A robot sculpted in the style of Rodin's The Thinker."
Incorporating Material Details: "A robot made of stainless steel and circuit boards, posed in the iconic posture of Rodin's The Thinker."
Adding Environment: "A robot sitting on a futuristic pedestal, mimicking the pose of Rodin's The Thinker, with a cybernetic background."
Adding Emotion or Expression: "A contemplative robot in the pose of Rodin's The Thinker, exuding an aura of deep thought."
Emphasizing Artistic Technique: "A pointillism-style painting of a robot captured in the classic pose of Rodin's The Thinker."
Hyperrealistic Approach: "A photorealistic image of a robot, intricately designed with gears and microchips, in the contemplative pose of Rodin's The Thinker."
Abstract Version: "An abstract interpretation of a robot sitting in the same posture as Rodin's The Thinker, made up of geometric shapes."
Cultural Context: "A robot in the pose of Rodin's The Thinker, adorned with symbols of artificial intelligence and machine learning."
Historical Fusion: "A steampunk robot sitting thoughtfully in the pose of Rodin's The Thinker, surrounded by Victorian-era machinery."
Each prompt has different elements to consider, such as the art style, the materials the robot is made from, the environment it's in, or the emotion it's supposed to convey. Feel free to combine or modify these prompts to suit your vision. The more specific you are, the more likely the AI will generate an image that aligns closely with what you're looking for.
The list of ideas was a helpful starting point, and we decided to pursue one of them.
Prompt: Thank you. What are some variations of #6 (emphasizing artistic technique)?
Certainly, emphasizing artistic technique in your prompt can lead to a unique interpretation of your concept. Here are variations based on the idea of portraying a robot in the style of Rodin's "The Thinker," each with a different artistic technique emphasized:
Impressionist Style: "An Impressionist painting of a robot in the pose of Rodin's The Thinker."
Cubist Approach: "A Cubist rendition of a robot posed like Rodin's The Thinker."
Surrealist Twist: "A Surrealist artwork featuring a robot in the contemplative pose of Rodin's The Thinker."
Watercolor Technique: "A watercolor painting of a robot in the iconic pose of Rodin's The Thinker."
Charcoal Sketch: "A detailed charcoal sketch of a robot sitting in the pose of Rodin's The Thinker."
Digital Art: "A digital art piece of a robot mimicking the pose of Rodin's The Thinker with pixelated or vector-based elements."
Mosaic Art: "A mosaic of a robot in the pose of Rodin's The Thinker, created with tiny, colorful tiles."
Stained Glass Interpretation: "A stained-glass artwork depicting a robot in the pose of Rodin's The Thinker."
Woodcut Print: "A woodcut print of a robot in the introspective pose of Rodin's The Thinker."
Graffiti Style: "A graffiti art interpretation of a robot in the iconic pose of Rodin's The Thinker."
Collage Approach: "A collage made of recycled electronic parts, forming a robot in the pose of Rodin's The Thinker."
Ink and Wash: "An ink and wash painting of a robot in the pose of Rodin's The Thinker, capturing the flow and balance between ink and water."
Pop Art: "A Pop Art version of a robot in the pose of Rodin's The Thinker, with bold colors and commercial imagery."
Chiaroscuro Lighting: "A robot in the pose of Rodin's The Thinker, rendered with chiaroscuro lighting to emphasize its form."
Fauvist Colors: "A Fauvist painting of a robot in the iconic pose of Rodin's The Thinker, with wildly expressive and non-naturalistic colors."
In less than two minutes, then, we had a list of 15 prompts with which to experiment in Midjourney. We tried several of these styles before determining that the charcoal sketch (prompt #5) best matched what we were looking for. Here’s the final product:

We’ll leave you with something cool. 45 years after John Lennon started writing it, “Now and Then” was released this week, giving the world a new Beatles song decades after we last heard from them. AI was used to clean up and layer Lennon’s recording, parts of a 1995 recording session that included George Harrison, and contemporary efforts from Paul McCartney and Ringo Starr. Check out the official music video (directed by Peter Jackson) and learn more about it.
AI Disclosure: We used generative AI in creating imagery for this post. We also used it selectively as a creator and summarizer of content and as an editor and proofreader.