How one company is unmasking the bad actors who use AI for ‘narrative warfare’


On this episode of Fortune’s Leadership Next podcast, Alan Murray sits down with Wasim Khaled, CEO and cofounder of Blackbird.AI. Khaled's company developed an AI platform that helps companies and security organizations protect themselves against, he said, "disinformation, misinformation, and narrative attacks." The conversation covered the ways AI is used to create warped realities, how companies can fight back against misinformation, and why the major AI platforms haven't focused on the disinformation problem.

Co-host Michal Lev-Ram is off this week.

Listen to the episode or read the transcript below.


Transcript

Alan Murray: Leadership Next is powered by the folks at Deloitte who, like me, are exploring the changing rules of business leadership and how CEOs are navigating this change.

Welcome to Leadership Next, the podcast about the changing rules of business leadership. I'm Alan Murray. Michal Lev-Ram couldn't join me today.

We've got an interesting and unusual treat for you on today's podcast. You know, we normally talk to people who run big cap companies. In this case, we're talking to somebody who has cofounded a startup. It's not even that big a startup right now. It's not a unicorn, maybe a tenth of a unicorn. But it's doing something that's very near and dear to my heart: taking on the pollution of the information ecosystem that has happened in part because of the rise of new technology. His name is Wasim Khaled. The company is Blackbird.AI. And right now he's selling his services to brands that are worried about what he calls narrative attacks, attacks that happen in social media and in the broader information ecosystem. They are often fed by technology, by bots, and in some cases by state actors.

This was a fascinating conversation and probably a very timely one if you've been following what's been going on with Taylor Swift recently, the AI-generated explicit photos that are propagating across the internet. It's an example of the kind of thing that's happening in today's world that does need an answer, and it matters to business. Think about what happened to AB InBev when they did an advertisement with a transgender influencer and the blowback that they got. Target had a similar experience. There are lots of examples. And they may start with a real event, but then they get exacerbated, accelerated, and magnified by various types of technology and by various bad actors in the ecosystem. So this is one you're going to want to listen to. And here it is, Wasim Khaled, the cofounder and CEO of Blackbird.AI.

Wasim Khaled, what a fascinating area. Thank you so much for being with us.

Wasim Khaled: Yeah, thank you for having me. Really looking forward to this.

Murray: So let's start with the basics. Explain what Blackbird.AI is.

Khaled: Sure. So Blackbird provides [Forbes] Global 2000 companies and national security organizations with an AI platform that enables them to understand and protect themselves against disinformation, misinformation, and what we call narrative attacks. And narrative attacks are any kind of assertion in the information ecosystem that can drive harm against a person, place, or thing.

Murray: Yeah, so can you give us a couple of examples of things that you're looking at?

Khaled: Absolutely. So if you think about the information ecosystem today, what has happened is threat actors have figured out how to mount these opportunistic attacks around organizations based on, say, a single social media post. They can take it and manipulate it in a way that drives a narrative that can harm an organization. Now, these threat actors can use something a person says that is true…

Murray: Yeah.

Khaled: …but they recontextualize it or manipulate how that spreads and what the context of the original occurrence was to create that harm.

Murray: So if you think about how this affects a brand, you look at something like AB InBev, which did a social media campaign with a transgender influencer, and it went wild. Is that the kind of thing you're talking about, how that sort of spreads and explodes and expands?

Khaled: That is a textbook narrative attack that started not with disinformation, but just a thing that they did.

Murray: They actually did.

Khaled: Yeah, it was just a marketing campaign that turned into a narrative attack because it inflamed the ideology of hyper agenda-driven communities online, which decided to take it and turn it into an attack vector against that organization. And of course, we're talking about tens of billions of dollars of market value that was lost as a result in a matter of weeks.

Murray: So are you saying things like that don't just happen organically? That there are people behind them who are weaponizing them?

Khaled: Absolutely, and that is the very nature of why we call it a narrative attack. It is not virality. It used to be that, oh, hey, it's social and it goes viral, and people didn't really know what drove it. Well, this is a combination of fabricated virality and fabrication of the context of the narrative to begin with. And so you take that one post and you turn it into something that creates massive harm for the organization.

Murray: Yeah. And Target was another good example; there's actually an increasing number of prominent textbook examples. So if I'm AB InBev or any of the other brands who've had to deal with this, what does Blackbird.AI do for me?

Khaled: Yeah, so we are equipping analysts and threat intelligence teams to get a better sense of the mechanisms behind a narrative attack. Most people perceive it as something that just happened, and we show them, okay, there might be a bot network, for example, that was deployed and is amplifying and driving the reach of that particular narrative. And if you think about it, every organization has a narrative, and every narrative has a counter narrative. What we really enable them to see is the seeding of all of these counter narratives that are tearing down the messaging that the company wants to put out there in the world.

Murray: And is the goal of Blackbird.AI to spot it early so the company can take action? Or do you take the action yourselves? How do you shut it down when you see an attack like this expanding?

Khaled: If you think about mitigation, you can't really mitigate what you can't measure. What companies used to do is take media monitoring tools, and those are a very poor proxy for harm. They look at things like keyword mentions and sentiment. We look at narratives as they're spreading and snowballing, at similar conversations happening on the dark web or social or news or other platforms, and we help companies understand when a narrative might reach a critical juncture where they need to come in and address it, whether that's through litigation or through engaging with their crisis comms firm or their cybersecurity and threat intelligence firms. And then we do have internalized playbooks that we can help them run. But usually it's a SaaS platform, a software platform, that enables them to see things they could never see before, which is the thing we hear from our customers most often. They have playbooks. They just don't know what's happening anymore.

Murray: So they need faster information, right? Peripheral vision.

Khaled: Yeah.

Murray: So you're not so much in the tools business as you are in the alerts business, telling a company: you have a problem here that you need to start to address.

Khaled: Yeah, I mean, you could call it a tool. It is an AI platform that can alert you, but we're not in the human intelligence business where we're going to go out there and solve the problem ourselves from a comms perspective. It is technology enablement and acceleration to detect the problem and get to it before it goes out of control, to find it at the spark stage before it turns into a forest fire, let's say.

Murray: So Wasim, we've seen an interesting example of this kind of narrative attack recently with Taylor Swift, and the AI-generated explicit photos of her, attacks on her and her relationship with Travis Kelce. What could Blackbird.AI do in a case like that?

Khaled: Yeah, you know, our team has used our platform to dig pretty deep on the nature of that entire incident.

Murray: For a client?

Khaled: This was something that we did for research, and we just put it up on our website so people could have a better sense of what happened. Right. I'll say a couple of things about this, because you can go in a lot of different directions, but I will say that, one, these images have been floating around the dark web for many weeks now.

Murray: Right.

Khaled: In fact, a lot of these generative AI images are almost like one-up contests in that space, like who can do a better job of fabricating reality around a particular topic. It then went to a few other kinds of platforms and message boards and then landed on a platform where it started going more viral and people actually saw it. Now our teams, and many in this space, are not surprised by the act of putting those images together, because this has been going on for years. There are entire websites dedicated to deepfake celebrities. And in fact, before generative AI, 90% of what everyone was calling deepfakes were essentially nonconsensual imagery targeted at women. And, you know, the one thing I have to say here is the reason everyone's focusing on this is because it's Taylor Swift. But women have been dealing with this problem, and children have been dealing with this problem, from literal sextortion rings that use these tools to create major harmful events around people.

Now, Taylor Swift has a massive brand. And so for her in particular, this takes her brand and changes it irreparably in some ways, even if people may think it's not real. It still influences you, because we're not really used to seeing this hyper-realistic imagery, video, and audio, and so people's senses and perceptions aren't really ready for these kinds of technologies.

Murray: And so if Taylor Swift, who has become a massive business, hired you, what would you do for her?

Khaled: We'd look at where it started and at the communities that were actually propagating this. But more importantly, we'd be able to give her team the receipts on how this thing went out of control, so they can actually go out and potentially litigate and take a number of other actions that would prevent these kinds of things from going so viral in the future. I think also, if you want to be able to legally mitigate, you want to understand the mechanisms by which this actually occurred, down to even maybe the machinery in the models that were used to create the images.

Murray: How is generative AI going to affect this?

Khaled: Well, I would say if you go back several years, the problem of disinformation and manipulation of online ecosystems has been getting worse and worse, and so has people's ability to perceive reality and think critically about what they see.

Murray: Absolutely.

Khaled: And there's been lots of thought leadership around this. We focus more on what we can do to drive solutions to help people with these problems. I will say generative AI exploding into the world about a year ago has exacerbated this a hundred-x over. Right? And the reason is these tools are getting cheaper and better and easier to use every day. Threat actors all live online, and on LinkedIn, and they're using the same techniques to make their jobs easier as the rest of us. And so their tradecraft has just gotten so much easier and they can test so many more things. The information ecosystem has gotten massively noisier and harder for the average person or business to navigate.

Murray: What's the one headline that we're not seeing about AI?

Khaled: There are so many headlines out there about AI, I am trying to think which one is not really being covered. I think the one that I would like to see, and we're seeing a little bit more play there, is that the degree to which AI can be used for good versus evil is based not on the morality of the AI, but on the morality of the user. It is a human problem, not an AI problem.

Murray: What keeps you up at night when it comes to AI?

Khaled: Well, I would say it's not just AI, but the application of AI to harmful acts, particularly in our space, which is about warped realities. And I would say that the biggest problem today around those warped realities and the use of AI to drive them is that, as a society, human beings aren't yet ready to question every single thing that they see, hear, or read. The fact that we have to question our reality on everything that we look at and try to perceive is extremely disturbing. I also think about our younger children. You know, I'm a father, and I think about the world that they're growing up in, without the ability to know if what they're seeing or looking at is artificial or manipulated. And so that concerns me, and that's really why we do what we do.

Murray: Keeps me up too.

[Music starts.]

I'm here with Jason Girzadas, the CEO of Deloitte US, which is the sponsor of this podcast. Thanks for sponsoring, Jason, and thanks for joining me.

Jason Girzadas: Thank you, Alan. It's a pleasure to be here.

Murray: So rapid technological change creates issues of trust. What can businesses do to make sure they deploy new technology in a way that is trustworthy and ethical, that makes people feel comfortable with what's happening?

Girzadas: There's no doubt that trust is the ultimate differentiator. Once you lose it, it's incredibly difficult to get back. And if you have it, it can be a real asset as it relates to customers and marketplace success overall. So trust is the ultimate differentiator in almost every context. What we're seeing at Deloitte is a number of truly cross-industry, cross-sector forums developing. These are examples of a belief that industry has to come together around building trust and establishing a framework for trust, what the rules of the road and the guidelines are. This is an issue that is squarely on the CEO agenda and on the board agenda. It's really a determinant of long-term success. For most organizations, it is also critical to think very strategically about where their trust level is at the moment and to be mindful of how it evolves based upon the decisions that are made.

Murray: Jason, thanks for your perspective and thanks for sponsoring Leadership Next.

Girzadas: Thank you.

[Music ends.]

Murray: Wasim, how much of your work is for the Defense Department or defense agencies?

Khaled: So national security organizations were actually our inception in the early days, but it's about 15% of the work today. We'll see how that changes as we go on, because, frankly, foreign malign influence and information operations are becoming a bigger and bigger problem, particularly here in the U.S. And there's a lot of interest in being able to better understand that landscape.

Murray: How might this narrative warfare we're talking about influence the coming election? What are we going to see that will undermine the democratic process?

Khaled: So I think in some ways, all politics is, you know, about winning hearts and minds, right? And in a world where everyone gets their information from digital sources, narrative attacks become a method to shift the perceptions of the voter. Right? And so it's really a battle for the mind, a grey matter war, if you will, to push people into a particular way of seeing the world. That gives foreign actors, for example, a major opening into creating distrust and sowing discontent, but now in a very automated and scalable way, where people really won't understand that that's what they're subscribing to. It's a back door to people's minds through the machine. And so that's going to be something that is going to be exploited.

Murray: How much of this, in your view, is caused by geopolitical actors? You know, we saw the effect of Russia in the last couple of elections. There's talk about Iran, and a lot of focus on China and on attacks that are designed to undermine society in the West. How big a factor do you think that is in the problem?

Khaled: It depends on the topic that you are looking at, for sure. But one of the things that we've seen, which I think is particularly unknown to most, is that when we talk about narrative attacks, often what looks like virality has state actors in very unpredictable places amplifying that narrative. And it's not just around politics and elections. It could be something very mundane, like transportation, or incidents that they can utilize to drive harm against an entire country's capability or posture. I'll give you a good example. You remember when there was that story that maybe an alien had been discovered in Mexico, and everybody was talking about it for a couple of weeks? One of the narratives being propagated there, very likely foreign malign influence based on analysis, was, hey, here's another thing that the U.S. government has been hiding from the public. So you take this thing that's kind of gossip-slash-tabloid, and you turn it into something that just continues to erode Americans' trust in their government across so many different topics. It's a developing tradecraft akin to cyber attacks, and in fact the whole information operation is very closely linked to cyber attacks today.

Murray: And it's very scary. I mean, I'm really glad you're focused on this. Look, I've spent my entire life as a journalist, and I believe in the search for facts and the search for truth. And to see what's happened over the course of the last decade, the deterioration of people's trust and of the ability to figure out what is real and what isn't, what is true and what is false. Facts seem to have disappeared as a concept. People seem to think they can believe whatever it is they choose to believe at the moment because of this deterioration of the information ecosystem. So give me something to be hopeful about. What can your technology do to help begin to turn around a problem that, as you say, has been heading south for years, if not decades?

Khaled: Yeah, you know, I would say that there is reason to be hopeful, because the same types of technologies that, frankly, are making the problem worse can be used as a defensive measure or an illuminating product as well. For us, for Blackbird in particular, our mission has always been about fostering trust, safety, and integrity across the information ecosystem. When we set down that path, we also found that trying to do that is expensive. And so we headed toward particularly large organizations, and we've built a business around that. But in fact, on February 14th, this Valentine's Day, actually, we're launching a product that we think could have much broader appeal for the exact problem that you're talking about. Every day, people are bombarded with stories and tweets and messages, and they ask themselves: is the thing I'm looking at real or not? And if so, what are the nuances to it? So our product is something we call Compass. We call it Compass because it helps you see the way a little bit more. It is a context product. You can give it a question, a link, a post, a meme, or even a link to a video, and it will use our generative AI models and all of the risk signals we've developed for our enterprise customers around bot activity and harmful communities, and it will give you as much unbiased, two-sided context as possible, along with the sources it came from, as well as summarizing points, like a ChatGPT. And we think that's something that could be incredible for people who...

Murray: Help the average person discern the truth.

Khaled: When we started, what Naushad [UzZaman], my partner, and I wanted to do was build a consumer product, right? And at the time, we had neither the knowledge, the technology, the capacity, nor, frankly, the capital to be able to put out such a product. And so we've been working on this problem for seven years, and we're finally coming around to getting it out there.

Murray: And your detour toward serving brands and serving government was because they had the money to pay you for the technology.

Khaled: Absolutely. And what's really interesting is we can help those customers with this, too, because this context engine, plugged into that larger enterprise product, also supercharges the things that they can see and understand.

Murray: I guess the question I have is, it's great you're doing this, but why did we have to wait for you to do it? It bugs me that the big foundation models talk about hallucinations. I think that's a euphemism. I mean, some of the stuff is just factually wrong. There was a story in The New York Times about the lawyer who filed a brief that he got from ChatGPT, and it made up court cases. You know, any first-year lawyer who did that would be fired instantly. Same with any journalist who just made things up. I mean, I asked ChatGPT to give me a short bio of myself and it had me writing a book. Not only was it a book I didn't write, it was a book that didn't exist. So why can't these large language models be trained to have more of a respect for the truth, more of a respect for facts? Why do we have to have a second round of models to do that for us?

Khaled: I think part of it is the lack of desire on the part of the builders to deem this important, right. It's not been a priority, beyond putting in just the lowest level of gates to assure the largest engagement and traction.

Murray: I think in some cases some people in Silicon Valley consider it a feature, not a bug. I mean, I heard Marc Andreessen say hallucination is another word for creativity. Well, maybe, but it's also lies. It's untruths. It's things that are totally made up out of thin air. I mean, any person who goes into a business like law or journalism is told at a very early age, don't make stuff up. Why doesn't somebody tell these models, don't make stuff up, deal with facts?

Khaled: Yeah, you know, sorry.

Murray: You can see I get excited about this.

Khaled: Yeah, absolutely. No, I mean, you know, frankly, artificial intelligence as a field in general is unusual in that even some of the builders don't really know exactly why the technology does a particular thing. So you can't actually just tell it to do that. You can only keep getting ahead of the unusual things that you see and try to put gates in place. But that's really not a priority for the builders, right? For us, that's been the priority since day one.

For seven years now, we've thought, okay, misinformation and disinformation are an existential problem for the world, right? So there needs to be somebody who goes out and just makes this their life's work. Part of that answer is, like many entrepreneurs and many startups, right, the reason nobody did it is because no one gave a damn about it enough to say, okay, I'm going to dedicate maybe the next five or ten years or more to make sure that there is a solution, that there is a safeguard for this thing that could, in theory, come back and become a massive global problem for everyone. In fact, in 2016, when we first started talking about this, we envisioned a world where one day there would be some sort of AI-driven computational propaganda machine that could make this a much bigger problem, and here in 2024, we're there. So, frankly, it's the perseverance of bringing together an incredible team that all care.

Murray: Wasim, thank God that you and your colleagues all cared, and thank you for doing what you're doing. Why put it out on Valentine's Day? Is there a message there, or is it a coincidence?

Khaled: It is for the love of what we do.

Murray: Well, we love you for it. Thank you for what you're doing. I hope it catches on. I hope it works, because I think the deterioration of the information ecosystem is a serious social problem. Wasim Khaled, cofounder and CEO of Blackbird.AI, thanks for taking the time to be on Leadership Next.

Khaled: Thank you so much.

Murray: Leadership Next is edited by Nicole Vergara.

Michal Lev-Ram: Our executive producer is Chris Joslin.

Murray: Our theme is by Jason Snell.

Lev-Ram: Leadership Next is a production of Fortune Media.

Murray: Leadership Next episodes are produced by Fortune’s editorial team. The views and opinions expressed by podcast speakers and guests are solely their own and do not reflect the opinions of Deloitte or its personnel. Nor does Deloitte advocate or endorse any individuals or entities featured on the episodes.

This story was originally featured on Fortune.com