The data-driven future of storytelling: MIT’s Deb Roy on the message and the medium
January 19, 2022
The head of MIT’s Center for Constructive Communication talks about how data can help storytellers, what audiences of the future might look like, and why artificial intelligence cannot replace human creativity.
Deb Roy has spent much of his career studying the way humans use technology to communicate. As the director of the MIT Center for Constructive Communication, Roy leads an initiative to design human-machine systems that improve communication across divides and increase opportunity for underheard communities. The Center grew out of Roy’s earlier work at the MIT Media Lab, where he served as director from 2019 to 2021 and established the Laboratory for Social Machines in 2014. From 2013 to 2017, Roy served as Twitter’s chief media scientist.
Roy spoke with McKinsey senior partner Jonathan Dunn, who co-leads the firm’s Consumer Tech and Media practice, about how audiences and storytelling are evolving, and what the future of media and entertainment might look like as a result. The interview, condensed and edited, appears below.
McKinsey: Today we’re talking about the far-out future of storytelling. Like so many things, the pace of change around storytelling accelerated over the course of the pandemic. The pandemic’s narrative evolved within social networks, which is where many of us experienced it during that early March/April period, and there was a lot of blurring between entertainment and reality, as there often is in social experiences. What were the building blocks of these changing dynamics in how audiences are defined?
Deb Roy: Audience is as important a concept as ever. If you are a storyteller, what’s the point unless you have an audience to communicate and connect with? We’ve known for quite a while that the internet and social media have fundamentally transformed what audiences are, how they form, how they behave, and how they process and respond to the storyteller. What was once a pretty reasonable model of an audience, a group of mainly siloed individuals whom you would reach through some kind of broadcast channel, has given way to the realization that the audience is networked. So instead of an audience, you’ve got an audience network.
But as you look more carefully, there are incredibly complex, subtle structures in those networks: various cliques, communities, or clusters. The concept of an enclave, essentially a group of people who have a shared experience or shared interest, describes it best. It’s not that audiences are necessarily cut off from the rest of the environment, as, say, the term “cocoon” would suggest. But there are constricted ways in which information enters and exits the enclave, and there is some understanding of who’s in the enclave and what it is that brings the group together. A person can actually be part of multiple enclaves, so you can’t just map one person into one enclave. And audiences evolve, and they connect to one another.
One of the things we’re always striving to do is take a concept like audience enclaves and make it visible through data and analytics. If you’re a storyteller, there is a lot to be gained from understanding the structure and invisible patterns of those audience networks and being able to characterize them in human terms. Beyond validating that a storyteller knows their audience, there are sometimes interesting surprises to be found, such as unexpected connections to other audiences you’d also want to serve, or the real reasons people find a particular story or storyteller worth tuning into.
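Roy’s point about making enclaves visible through data can be illustrated with a minimal sketch. Everything here is hypothetical (the usernames, the engagement log, and the clustering rule are invented for illustration): it links users who engage with the same content and treats the connected components of that graph as candidate enclaves, a far cruder rule than any production audience analytics.

```python
from collections import defaultdict

# Hypothetical engagement log: (user, content_id) pairs.
engagements = [
    ("ana", "story1"), ("ben", "story1"), ("ben", "story2"),
    ("cam", "story2"), ("dev", "story9"), ("eve", "story9"),
]

# Union-find structure to merge users who engaged with the same content.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(a, b):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb

# Group engagers by content, then merge everyone who shares content.
by_content = defaultdict(list)
for user, content in engagements:
    by_content[content].append(user)

for users in by_content.values():
    for other in users[1:]:
        union(users[0], other)

# Users sharing a root form a candidate "enclave".
enclaves = defaultdict(set)
for user, _ in engagements:
    enclaves[find(user)].add(user)

print(sorted(sorted(e) for e in enclaves.values()))
# [['ana', 'ben', 'cam'], ['dev', 'eve']]
```

Real systems would weight edges by engagement volume and use proper community-detection algorithms, which also allow the overlapping memberships Roy describes; connected components assign each person to exactly one group.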
The network effects of “truth decay”
McKinsey: What does this increased visibility for networks and audience enclaves mean for our understanding of trust and influence going forward?
Deb Roy: The role of trust in networks is an area where there’s growing recognition that something is wrong and getting worse. We all rely on networks of trust to decide how to act and what to believe or not believe. There’s this notion of “truth decay,” a decaying of the knowledge and shared facts that we actually have in common. I do think that’s related to dynamics we can see, to some degree, in how networks are configured and in what and how we’re sharing information.
A critical problem in all of this is how to approach sharing a message or story with a larger audience and build some kind of shared understanding. If you have fragmentation in that network, building common ground will be difficult. If you have highly defined and isolated enclaves where the only information that gets through is highly filtered and distorted, you will have breakdowns in the ability to build shared understanding across boundaries. I think our ability to trust one another is rooted in a foundation of understanding others, recognizing our common humanity, even though you may have many differences and therefore be in a different enclave.
The power and limits of AI/ML storytelling tools
McKinsey: The idea of storytellers better understanding the structure and invisible patterns of audience networks is more or less a fundamental redefinition of audience segmentation. Some may view it as backward: the audience ends up dictating what the storyteller should create, and the analytics that enable this reverse the natural order of creativity. What do you think about that?
Deb Roy: There is a school of thought in which advances in artificial intelligence and machine learning feel almost magical. But it’s worth recognizing the limits of even the most seemingly magical technologies against the human, social, and cultural context in which a story is told, shared, and builds shared meaning among members of an audience. I don’t see any obvious way for all the capabilities of artificial intelligence to capture that context. It’s a strategic error to think of analytics and data as not just informing but taking over the storytelling process. I think you’ll end up with pretty lousy stories that way.
So the question is, what does the ability to make invisible patterns visible from data do for us? How can that help you as a storyteller understand something about your audience that you otherwise would not have seen? And discover connections and possibilities that otherwise would be invisible. I don’t think that challenges or questions the basic assignment of roles, of who’s telling the story, human or machine. But it could really change the possibilities for what kind of stories could be told. And who they could connect with.
McKinsey: Tell us about the power tools you see on the horizon.
Deb Roy: These are tools that help people tell stories, rather than automating the human out of the job of storytelling. For example, we’ve been building a tool that channels AI in service of the storyteller: a machine-learning writing tool that, in a sense, programs itself. You set an objective for it, some kind of target function, and the machine figures out how to optimize that function given examples of training data.
We used pieces of text, such as tweets and news headlines, as training data for the AI, along with the media habits of the people who chose to retweet and engage with each of those tweets. Then we asked whether we could predict those media habits, in particular the political leanings of the people who end up interacting with a piece of content, from the words and phrases it contains. We used more than 100 million retweets to train the AI model, which is where the advances in machine learning really come into play. We were able to build a highly predictive system that, given a piece of text, could predict what kinds of people would engage with it in terms of their political leaning.
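The Center’s actual model was trained on more than 100 million retweets and its architecture isn’t described here; as a hedged illustration of the underlying idea only, here is a toy bag-of-words classifier (the training texts, labels, and smoothing scheme are all invented) that scores a new piece of text against word profiles learned from labeled examples.

```python
import math
from collections import Counter

# Tiny invented training set: (text, leaning of typical engagers).
# Purely illustrative; the real system learned from 100M+ retweets.
train = [
    ("cut taxes and shrink government", "right"),
    ("secure the border now", "right"),
    ("expand healthcare for families", "left"),
    ("climate action and clean energy", "left"),
]

# Count how often each word appears under each label.
counts = {"left": Counter(), "right": Counter()}
for text, label in train:
    counts[label].update(text.split())

def predict(text, smoothing=1.0):
    """Return the leaning whose word profile best matches the text
    (naive-Bayes-style log-probability scoring with add-one smoothing)."""
    vocab = len(set(counts["left"]) | set(counts["right"]))
    scores = {}
    for label, ctr in counts.items():
        total = sum(ctr.values())
        score = 0.0
        for word in text.split():
            p = (ctr[word] + smoothing) / (total + smoothing * vocab)
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("clean energy for families"))  # left
print(predict("shrink the border"))          # right
```

A production version of this idea would need far richer features and calibration, but the shape is the same: learn word-level signals from engagement-labeled text, then score new drafts against them.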
We took that model and built a writer’s tool with it, so that writers might be able to broaden their audience to include people who are different from themselves, people they don’t have the intuition to address on their own. Writers may even have blind spots about the hidden meanings of words and phrases they might otherwise unknowingly use, which prevents them from connecting with certain groups.
We had some of our partners test the tool in A/B experiments: they used its suggestions to rewrite their initial messages and strip out hidden potential triggers and audience turnoffs they had never intended.
And then we ran field experiments and found that in this case the tool let us reach a more politically diverse audience.
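The interview doesn’t describe how audience diversity was measured in those field experiments. One plausible, purely hypothetical way to quantify “a more politically diverse audience” is the entropy of the leaning mix among a message’s engagers, sketched here with invented data.

```python
import math
from collections import Counter

def leaning_entropy(engager_leanings):
    """Shannon entropy (bits) of the leaning mix; higher = more diverse.
    A 50/50 two-way split gives the maximum of 1.0 bit."""
    counts = Counter(engager_leanings)
    n = len(engager_leanings)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical engagement logs for the two message variants.
original  = ["left"] * 9 + ["right"] * 1   # heavily one-sided audience
rewritten = ["left"] * 6 + ["right"] * 4   # closer to balanced

print(round(leaning_entropy(original), 3))   # 0.469
print(round(leaning_entropy(rewritten), 3))  # 0.971
```

Under this invented metric, the rewritten message’s higher entropy would indicate the politically broader reach the experiments reported; a real evaluation would also need significance testing across many messages.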
Casting the roles for machines and automation
McKinsey: There are concerns, though, about using AI that relies on past data, which might reinforce offensive behaviors, views, or norms that were acceptable at some point in the past. How concerned are you about that in the work that you do?
Deb Roy: There’s growing concern about algorithmic bias: about relying on artificial intelligence that learns from historical data reflecting past biases. There’s growing recognition, within technical communities and more generally, that any artificial intelligence technology that relies on machine learning is basically looking back in time, finding patterns in historical data, and making future decisions based on those patterns. The risk is getting locked into our past and continuing, or even amplifying, those patterns into the future. That’s a super-important concern, especially in the context of storytelling and making creative decisions. In some ways, the death of creativity is to end up trapped in just repeating patterns from the past.
There are a number of things that should be done to keep these sorts of problems in check while continuing to explore how we can channel the incredible powers of the technology into human service. One is being thoughtful about what tasks get automated. There is sometimes a default assumption that if you can automate a certain role or job that a person is doing, you should. I disagree with that. I think we need to take a step back and say, “But wait, what’s the overall function here?”
In a creative space where we are trying to create stories that inspire and connect with people, I think having that kind of default assumption can really be counterproductive and go against the creative mission. Beyond just thinking about the division of labor between human and machine, I think we should be asking ourselves continuously, what’s the division of agency? What role do we want each to play? Where is there judgment and decision-making involved that we want to remain under human control?
It might seem in the moment like the machine is making reasonable decisions, and so we hand more and more agency over to it and get rid of the people who used to make those decisions. But by the time we’ve realized we’ve locked ourselves or our organization into a kind of creative dead end, it’s too late, because the people are gone. They’ve moved on to a smarter organization that understands what the right role is for machines. So I think we need to be mindful of the big picture, thinking holistically about the right ways to bring these incredibly powerful technologies into a human-led process. We should not cede agency to the machines, because in the long run, it’s actually not going to be what we want.
The creative value of collective experience
McKinsey: One of the big developments in media in the last few years is a shift from so much media and storytelling being consumed by large groups of audiences to smaller, very intimate audiences. What does that mean for the way that stories are created or consumed?
Deb Roy: Giving every individual what they want, when they want it, where they want it, is almost how we define technological progress, and it is making media consumption more and more individualized. In the early days of any of these technologies, before we had the ability to create differentiated, individualized experiences, we had no choice but to have a collective experience. Once upon a time there were fewer television sets than people, so groups of people gathered to watch TV, which meant agreeing on what to watch together. Today, though, we have individualized delivery, and there’s a cost to that: we lose our sense of connection with others, the shared experience.
There will be an inevitable swing back. It’s not that we’ll give up on individualized services. But there was a valuable unintended consequence to being forced to cluster around that television set when there was only one in the neighborhood. The conversation that happens around it actually fundamentally transforms the value of that content and the experience. That is what we’re seeing now with audiences that are networked. They’re consuming content that comes to them through the network, deciding what to share, and participating as active members of the network. There’s no simple way to predict broadcast or diffusion anymore. It depends on people, and how they’re responding to the content.
I think it’s inevitable that we’re going to see more and more old-school broadcast kinds of content serving these groups, because of the value it creates in building that shared experience. Even if it’s just a sunset you’re looking at, it’s just better when you view it together. That kind of synchronous experience in storytelling offers the same kind of value. So I cannot imagine our future progress will take the form of continued individualization of content. It’s not part of who we are.
Tapping the full potential of the “liquid medium”
McKinsey: One criticism of the evolution of storytelling over the last decade is that although there are so many more stories being told today, and so many technological advances, there’s been very little change in the format of the stories themselves. How do you see format creativity evolving in the long-term future?
Deb Roy: When we talk about the format of storytelling, there’s the specific format of the medium (motion and sound, resolution, aspect ratio) that creates a channel within which you have to fit the story. And then there are the diffusion properties that affect where a story can go, how long it’s going to live on, and how it’s going to morph as it passes through the hands and minds of audience members. That’s maybe not part of the format, but it’s part of the medium as well.
When you put those two together, format and propagation characteristics, the result can be radical. On one hand, it seems like storytelling formats haven’t changed that much; we’re still more or less stuck with the formats of television and film. But the evolution only seems slow. Storytellers on Clubhouse are trapped in that platform’s format, yet there has already been a wild evolution in the art of audio performance there, and it is entirely different from packing a story into a tweet. Then consider how Twitter was just tweets until the tweetstorm came along and changed everything. Each of these little tweaks in how we use these platforms creates a new storytelling format. Perhaps the most dramatic shifts in storytelling can be seen in very short-form videos on TikTok and other social video platforms, where the hook for the story needs to land within a few seconds.
Before the internet, when you wanted to create a new format, you actually had to create a new technology and roll out a physical layer of infrastructure. When we went from telegraph to telephone, and rolled out the radio networks, the movie studios and theaters, and the television networks, we had to build bricks and mortar, buy really expensive equipment, or lay millions of miles of cable. It took decades to roll out a new medium.
But today, now that the internet has been built, it allows any combination of sight, sound, and motion format to be mashed up with any combination of network diffusion properties, so all you have to do is write some code and you can create a new medium. Also, the properties of each social platform are different, and so each offers a different storytelling format, meaning that even though the underlying technology is the same, the medium is different. You can experiment with how ephemeral something is, and decide what modalities you want: Text? Speech? You want images? What format? It’s kind of like we have a liquid medium. Some kids in a garage could come up with a new storytelling format and blanket the country or the world in the blink of an eye.
So yes, on one hand it seems like we haven’t seen much change in storytelling format. But another way to look at it is that formats are changing so constantly that we can’t get a lock on any one of them long enough for storytelling to mature within it before another popular format arrives on the scene. We have entered an era of medium liquidity.
McKinsey: What are the possible implications that “liquid medium” holds for the kind of content that will be created going forward, particularly for all these diffuse audience networks and enclaves?
Deb Roy: It does seem like we’re on a path of cultural fragmentation that is unstoppable. It’s not all bad though. It complicates the process of having a functioning democracy, but on the other hand there’s a kind of accelerated evolution and flourishing of culture and language that is fascinating. I don’t know if it’s nine or 19 years out, but I can imagine an evolution of microcultures that are engaged enough that there will be stories and forms of entertainment that may be largely incomprehensible across groups, even though they’re in the English language. There would be an acceleration of a kind of inside speak, with languages and cultures that just continue to evolve more or less independent of one another. It’s the same way that languages and cultures have always formed, but at a new pace in a networked age.
I think the big surprise in all of this is for the people who built the internet and all of this technology, with the assumption that the more we connect with one another the more we’ll just all sort of speak one language, and there’ll be world peace. But it’s turned out to be a far more complicated story, hasn’t it?
About the author(s)
Deb Roy is a professor of media arts and sciences at the Massachusetts Institute of Technology and the director of the MIT Center for Constructive Communication, visiting professor at Harvard Law School, and co-founder and chair of the nonprofit Cortico. Jonathan Dunn is a senior partner in McKinsey’s New York office who co-leads the firm’s Consumer Technology & Media practice.
McKinsey & Company is an ongoing knowledge partner with the MIT Media Lab.