Someone made a deepfake of my voice for a scam! (With permission...)
Voice cloning is cheap and easy, and it's one more thing for cybercrime victims to worry about
"I need help. Oh my God! I hit a woman with my car," the fake Bob says. "It was an accident, but I'm in jail and they won't let me leave unless I come up with $20,000 for bail ...Can you help me? Please tell me you can send the money."
It's fake, but it sounds stunningly real. For $1, using an online service available to anyone, an expert was able to fake my voice and use it to create telephone-ready audio files that would deceive my mom.
We've all heard so much about artificial intelligence, or AI, recently. For good reason, there have long been fears that AI-generated deepfake videos of government figures could cause chaos and confusion in an election. But there might be more reason to fear the use of this technology by criminals who want to create confusion in order to steal money from victims.
Already, there are various reports from around North America claiming that criminals are using AI-enhanced audio-generation tools to clone voices and steal from victims. So far, all we have are isolated anecdotes, but after spending a lot of time looking into this recently, and allowing a security researcher to make deepfakes out of me, I am convinced that there's plenty of cause for concern.
Reading these words is one thing; hearing voice clones in action is another. So I hope you'll listen to this week's episode of The Perfect Scam.
Professor Jonathan Anderson, an expert in artificial intelligence and computer security at Memorial University in Canada, does a great job of demonstrating the problem, using my voice, and explaining why there is cause for concern, but perhaps not alarm. His main suggestion: all consumers need to raise their digital literacy and become skeptical of everything they read, everything they see, and everything they hear. It's not so far-fetched; many people now realize that images can be 'photoshopped' to show fake evidence. We all need to extend that skepticism to everything we consume. Easier said than done, however.
Still, I wonder: What kind of future are we building? Also in the episode, AI consultant Chloe Autio offers some suggestions about how industry, governments, and other policymakers can make better choices now to avoid the darkest version of the future that I'm worried about.
I must admit I am still skeptical that criminals are using deepfakes to any great extent. Still, if you listen to this episode, you'll hear Phoenix mom Jennifer DeStefano describe a fake child abduction she endured, in which she was quite sure she heard her own child's voice crying for help. And you'll hear about an FBI agent's warning, and a Federal Trade Commission warning. As Professor Anderson put it, "Based on the technology that is easily, easily available for anyone with a credit card, I could very much believe that that's something that's actually going on."
To listen, click the play button below, or click this link, or find The Perfect Scam wherever you listen to podcasts. Below the play button is a full transcript of the episode if you'd rather read it.
------TRANSCRIPT----------
[00:00:00] Hi, I'm Bob Sullivan and this is The Perfect Scam. This week we'll be talking with people about a new form of the Grandparent Scam. One that plays on victims' heartstrings by using audio recordings of loved ones asking for help abroad. What makes this new is that the voices are actually AI generated, and they sound pretty convincing. In fact, you're listening to one right now. I'm not really Bob Sullivan. I'm an AI generated voice based on about 3 minutes’ worth of recordings of Bob on this program. It sounds pretty convincing though, doesn't it?
(MUSIC SEGUE)
[00:00:42] Bob: Welcome back to The Perfect Scam. I’m your real host, Bob Sullivan. Today we are bringing you a very special episode about a new development in the tech world that, well frankly, threatens to make things much easier for criminals and much harder for you: artificial intelligence, or AI. If you listen to some people, it's already being used to supercharge the common grandparent scam or impostor scam because it's relatively easy and cheap for criminals to copy anyone's voice and make them say whatever the criminals want. What you heard at the beginning of this podcast wasn't me. It was an audio file generated with my permission by Professor Jonathan Anderson, an expert in artificial intelligence and computer security at Memorial University in Canada. And as I said, well, as that voice said, it sounds pretty convincing. It also might sound kind of like a novelty or even a light-hearted joke until you hear what some victims are saying.
(MUSIC SEGUE)
[00:01:46] At about 4:53 pm, I received a call from an unknown number upon exiting my car. At the final ring, I chose to answer it; unknown calls, as we're very familiar with, can often be a hospital or a doctor. It was Brianna sobbing and crying saying, "Mom..."
[00:02:03] Bob: That's the real-life voice of Jennifer DeStefano from Phoenix, who testified before Congress earlier this year about her harrowing experience with a kidnapping scam involving her daughter.
[00:02:15] Jennifer DeStefano: "Mom, these bad men have me. Help me, help me, help me." She begged and pleaded as the phone was taken from her. A threatening and vulgar man took the call over. "Listen here, I have your daughter. You call anybody, you call the police, and I'll drive to Mexico and you'll never see your daughter again." I ran inside and started screaming for help. The next few minutes were every parent's worst nightmare. I was fortunate to have a couple of moms there who knew me well, and they instantly went into action. One mom ran outside and called 911. The kidnapper demanded a million dollars and that was not possible. So then he decided on $50,000 in cash. That was when the first mom came back in and told me that 911 is very familiar with an AI scam where they can use someone's voice. But I didn't process that. It was my daughter's voice. It was her cries; it was her sobs. It was the way she spoke. I will never be able to shake that voice and the desperate cries for help out of my mind.
[00:03:11] Bob: Jennifer DeStefano ultimately tracked down her daughter, who was safe at home, so she didn't send the criminals any money, but she went through those moments of terror and still struggles with the incident, partly because the voice sounded so real. And she's hardly the only one who thinks voice cloning or some kind of technology is actively being used right now by criminals to manipulate victims. Last year, an FBI agent in Texas warned about this use of artificial intelligence by criminals. Then in March, the Federal Trade Commission issued a warning about it. So far, all we have are scattered anecdotes about the use of computer-generated voices in scams, though it's easy to see the potential for this problem. So I wanted to help us all get a better grasp on what's happening, and for this episode, we've spoken to a couple of experts who can help a lot with that. The first is Professor Jonathan Anderson, who you've already met, sort of. Remember, I asked him to make the fake me.
[00:04:09] Jonathan Anderson: I teach students about computer security, so I'm not really a fraud expert. I certainly have some exposure to that kind of thing. I come at it from more of a, of a technical angle, but I guess this all started when there were some local cases of the grandparent scam that were getting run here and they were looking for somebody who is a, a general cybersecurity expert to comment, and, and that's where things started for me talking to people about this issue.
[00:04:33] Bob: So can, can you sort of right-size the problem for me and describe the problem for me at the moment?
[00:04:39] Jonathan Anderson: Sure, I mean, scams are nothing new, of course. And this particular scam of putting pressure on people and saying, oh, your loved one is in a lot of distress, I mean that's not a new thing, but what is new is it's so cheap, easy, convenient to clone someone's voice today that you can make these scams a lot more convincing. And so we have seen instances um, in Canada across the country. There have been people who at least claim that they heard their grandson's voice, their granddaughter's voice, that kind of thing on the phone. And based on the technology that is easily, easily available for anyone with a credit card, I could very much believe that that's something that's actually going on.
[00:05:22] Bob: But wait. I mean I'm sure we've all seen movies where someone's voice is cloned by some superspy. It's easy enough to imagine doing that in some sci-fi world, but now it's easy and cheap to do it. Like how cheap?
[00:05:37] Jonathan Anderson: So some services, uh you can pay $5 a month for a budget of however many words you get to generate and up to so many cloned voices. So a particular service that I was looking at very recently, and in fact, had some fun with cloning your voice, was $5 a month. Your first month is 80% off, and when you upload voice samples, they ask you to tick a box that says, "I promise not to use this cloned voice to conduct any fraudulent activities." And then you can have it generate whatever words you want.
[00:06:11] Bob: Well, you have to click a box, so that takes care of that problem.
[00:06:14] Jonathan Anderson: Yeah, absolutely. So certainly, I don't think this prevents any fraud, but it probably prevents that website from being sued and held liable for fraud perhaps.
[00:06:24] Bob: So did you say $5 a month, 80% off, so it's a dollar?
[00:06:29] Jonathan Anderson: I did, yeah. One US dollar for 30 days of generating, I think 50,000 words in cloned, generated voices.
[00:06:38] Bob: 50,000 words.
[00:06:41] Jonathan Anderson: Uh-hmm, and up to 10 cloned voices that you can kind of have in your menu at any one time, and then if you want to spend a little bit more per month, there are uh, much larger plans that you can go with.
[00:06:53] Bob: That's absolutely amazing.
[00:06:56] Bob: For one dollar you can make someone say anything you want in a voice that sounds realistic? Frankly, I was still skeptical that it was that easy, and definitely skeptical that it can be convincing enough to help criminals operate scams, so I asked Jonathan to simulate how that would work using my voice. And he said, "Easy." What you are about to hear is fake, let's be clear, but here's what a criminal could do with my voice for $1.
[00:07:26] (fake Bob) Steve, I need help. Oh my God! I hit a woman with my car. It was an accident, but I'm in jail and they won't let me leave unless I come up with $20,000 for bail and legal fees. I really need your help, Steve, or I'll have to stay here overnight. Can you help me? Please tell me you can send the money.
I've never felt this way about anybody before, Sandra. I wish we could be together sooner. As soon as I finished up the job on this oil rig, I will be on the first flight out of here. In the meantime, I've sent you something really special in the mail so you know how much I care about you.
I just need some money to help pay for the customs...
Now you've heard me say many times on this podcast to NEVER send gift cards as payment. Scammers will often pose as an official from a government agency and say you need to pay taxes or a fine, or they'll pretend to be tech support from Microsoft or Apple and ask for payment through gift cards to fix something that's wrong with your computer. But this time is an exception to that rule. And I am going to ask you to purchase gift cards so I can help you protect yourself from the next big scam.
[00:08:35] Bob: Okay, real Bob back here. Wow, that’s absolutely amazing, okay so I've been writing about security and, and privacy scary things for a long time and it's pretty common, right, that people make a lot out of, out of things to get attention to themselves and so I'm always kind of on alert for that. And to be honest with you, I was like, okay, how good can these things be? I've listened to these clips. I'm alarmed (chuckles).
[00:08:59] Jonathan Anderson: Yep.
[00:09:00] Bob: I've read a lot about this. I knew this was going on. It was still disturbing to hear my own voice produced this way. Have you done it with yourself?
[00:09:06] Jonathan Anderson: I have. So I've generated voices for myself and a few different like media people that I've spoken with, and I played the one of me, and I thought it sounded quite a lot like me. My wife thought it sounded like a computer imitating me, so I guess that's good. Maybe the sample I uploaded wasn't high quality enough, or it might be that people use radio voice when they are media personalities and maybe that's slightly easier to imitate, I'm not 100% sure. But it, it does feel strange to listen to yourself, but not quite yourself.
[00:09:39] Bob: Yeah, did it make you want to shut the computer and go for a walk or anything?
[00:09:42] Jonathan Anderson: Hah, ah, well, doing computer security research makes me want to shut all the computers and go for a walk.
[00:09:48] Bob: (laughs)
[00:09:50] Bob: Okay, so I had to know, how do these services work? It turns out they do not work the way I imagined they do.
[00:09:59] Bob: My emotional response is that what these really are is like someone has somehow cut and pasted individual words like, like an old ransom note that was magazine words cut together and put on a piece of paper, but that's not what it is, is it?
[00:10:13] Jonathan Anderson: No, so there are services that have pretrained models of what a human voice sounds like in a generic sense, and then that model gets kind of customized by uploading samples of a particular person's voice. So we all remember the, the text--, text to speech engines that didn't sound very good where it said, "You - are - opening - a - web - browser." Well those have improved a lot, but now they can also be personalized to sound like a specific person by uploading a very, very small amount of recording of a voice. So I took about three minutes' worth of recordings of your voice, and of course, anyone who uses the internet would be able to get a lot more than three minutes of the voice of somebody who hosts a podcast or somebody who's a media personality or to be honest, anybody who posts a face--, a Facebook video. And when you upload that information, you're able to personalize this, this model of an AI generator, and then you provide a text, and it does text-to-speech, except instead of sounding like a generic voice, instead of sounding like Siri, it kind of sounds a lot like the person whose voice samples you uploaded. The more sample data you have, the more opportunity there is for that generated voice to sound more like the real person, but you can imagine that if you took those voice samples, and you played them ov--, to somebody over a phone which has kind of a noisy connection, and which really attenuates certain frequencies anyway, you could fool an awful lot of people, and you can make these things say whatever words you want. So you can say, you know, "Help me, Grandma, I've been arrested, and I've totaled the car, and uh here's my lawyer. Can you talk to my lawyer about providing some money so that he can get me out of jail?" And then another voice comes on the line and can answer questions interactively.
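To make the workflow Jonathan just described a bit more concrete, here is a minimal sketch of what interacting with such a voice-cloning service might look like: upload a few minutes of sample audio, get back a cloned-voice identifier, then generate speech from arbitrary text in that voice. The base URL, endpoint paths, field names, and response keys here are hypothetical stand-ins for illustration only; they do not describe the particular service he used or any real provider's API.

# Illustrative sketch only. "api.example-voice-service.com" and all endpoint
# and field names below are hypothetical, not a real provider's API.
import requests

API_KEY = "YOUR_API_KEY"  # issued when you sign up for the (hypothetical) service
BASE = "https://api.example-voice-service.com/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Step 1: upload a few minutes of sample audio to "clone" a voice.
# The service personalizes its pretrained speech model using these samples.
with open("bob_samples.mp3", "rb") as f:
    resp = requests.post(
        f"{BASE}/voices",
        headers=HEADERS,
        files={"samples": f},
        data={"name": "bob-clone"},
    )
voice_id = resp.json()["voice_id"]  # hypothetical response field

# Step 2: ordinary text-to-speech, but conditioned on the cloned voice.
# The model reads arbitrary text in something close to the sampled voice.
resp = requests.post(
    f"{BASE}/text-to-speech/{voice_id}",
    headers=HEADERS,
    json={"text": "Steve, I need help. I'm in jail and I need bail money."},
)
with open("fake_bob.mp3", "wb") as out:
    out.write(resp.content)

The point of the sketch is how little is involved: no splicing of recorded words, just a short sample upload and a text prompt, which is why the resulting audio can say anything at all.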
[00:11:58] Bob: You know that, that's just amazing, I'm so glad you explained it like that. Again, I was picturing now, maybe it's even the movie Sneakers, you know, they have to get a person to say all those words, and then they can like cut and paste them together. This has nothing to do with that. This is like this is a voice with a full vocabulary that, that's just tweaked with little audio particulars that are sampled from, from people's other speaking, so that you can make the person say anything that you want them to say, and sound like they would sound, right?
[00:12:26] Jonathan Anderson: Absolutely. So yeah in, in the Sneakers quote that you referenced, "My voice is my passport, authorize me,"
[00:12:33] Bob: That's it, yes, yeah.
[00:12:34] Jonathan Anderson: It sounded a little bit odd, right? But now you can have whatever the normal vocal mannerisms of a specific person are, not just like the specific tone in which they speak, but also how much they vary their voice; things like that. All of that can be captured, and there are some parameters that you tweak in order to make it sound as lifelike as possible, but if you have high quality voice recordings, you can basically make that voice say anything. And those, those parameters and how much data you have to upload influence exactly how faithful it sounds to the original voice. But even with just a few minutes of sample, it can sound, it can sound pretty darn close.
[00:13:12] Bob: So, you can see, voice cloning technology has come a long way since it was imagined by sci-fi movies of the 1970s and '80s. But you don't have to look too far to see another example of technology that works this way.
[00:13:27] Jonathan Anderson: So I think people used to have this idea of faking images, and, and indeed, faking images used to be a matter of somebody who was really, really good at Photoshop going into the tool and dragging an image from one place into another and then having to fix up all the edges and it was, it was kind of a skilled art form almost, but now you can go to some online AI generation tools and say, give me an image that has this image, but where that person is surrounded by ocean or they're surrounded by prison bars, or they're, they're in a group shot with these other people from this other image. And it's kind of like that for voices as well now where we can generate a voice saying whatever we want it to say, not having to just copy and paste little bits and pieces from different places.
[00:14:12] Bob: Let's face it. Hearing a cloned voice that comes so close to real life is downright spooky. In fact, that spookiness has a name.
[00:14:23] Jonathan Anderson: I guess it, it kind of goes to our fundamental assumptions about what people can do and what machines can do, and there's something uncanny about, well there's the uncanny valley effect, but there's also something uncanny about something that you thought could only come from a human being that isn't coming from a human being. I think that that's deeply disquieting in this sense.
[00:14:49] Bob: Hmm, the, I'm sorry, what was the, the uncommon valley effect? What was that?
[00:14:53] Jonathan Anderson: Oh, so the uncanny valley is this thing where you see this in like animation, whereas animation gets more and more lifelike, people kind of receive it in a very positive way, and but there comes a point where something is so close to lifelike, but not quite, that people turn away from it with like revulsion and disgust. And this is something, it makes people feel really strange, and they really don't like it 'cause it's almost human, but not quite. Yeah, so people have written lots about the uncanny valley, and especially in like CGI and stuff. But the same thing kind of applies to I think the technology we're talking about here, if you have a voice that sounds almost like you, but not quite, it sounds really strange in a way that a purely synthetic voice like Siri or something wouldn't sound strange.
[00:15:44] Bob: That's fascinating, but it's, as you were describing it, it hits right close to home. So that makes a ton of sense. Um, as, as did the two uh audio samples that you sent, um, they were uh disturbing and, and hit, hit the point as clear as could be.
[00:16:01] Bob: Okay, spooky is one thing, unnerving is one thing. But should we be downright scared of the way voice cloning works and what criminals can do with it?
[00:16:13] Jonathan Anderson: Um, in the short term, I think we should be kind of concerned. I think we've come to a place after a long time where when you see an image of something that's really, really implausible, your first thought is not, I can't believe that that event happened. Your first thought is probably, I wonder if somebody photoshopped that. And I think we are soon going to get to a similar place with video and being able to generate images of people moving around and talking that seem pretty natural. And in fact, there are some services that, although it's not quite lifelike yet, they are, they are doing things like that. But definitely with voices, if you hear a voice saying something that you don't think could be plausible, or that seems a little off to you, you should definitely be thinking, I wonder if that's actually my loved one, or if that's somebody impersonating them using a computer.
[00:17:05] Bob: You know, for quite a while now, at least I think a couple of election cycles, there's been concern about you know deep fake videos of political candidates being used. Um, and so that's where my brain was, and I think that's probably where a lot of people's brains were, that sort of like large scale um, political, socioeconomic attack, um, but using this kind of tool to steal money from vulnerable victims on the internet, seems honestly like a more realistic nefarious way to use this technology.
[00:17:37] Jonathan Anderson: It does, absolutely. And those two things can be related, doing mass, individually targeted things that people--, where you fake a video from a political candidate saying, "Won't you, uh, Bob Sullivan, please donate to my campaign," or something. I mean maybe those two things can be married together, but certainly while people were looking at the big picture, they, well sometimes we forget to look at the details and individual people getting scammed out of thousands, or in some cases tens of thousands of dollars, it might not be a macroeconomic issue, but it is a really big deal to a lot of people.
[00:18:10] Bob: When I went to school, media literacy was a big term. You know and now it feels sort of like we need a 21st century digital literacy, which kind of comes down to not bling--, believing anything you see or hear. Um, and, and I guess that's a good thing. I, I have mixed feelings about that though, do you?
[00:18:29] Jonathan Anderson: Yeah, so I mean as a security researcher, or when I teach computer security to students, I will often tell them, now if you come into this class wearing rose-colored glasses, I'm afraid to tell you, you won't be wearing them by the end, 'cause we are going to look at things that people can do, and attacks that people can run, and some pretty nefarious things that people can do on the internet using various technologies. But part of the goal there is if we learn what is possible, then we learn what not to trust. And then if we know what not to trust, then we can think in a more comprehensive and fully grounded way about what we should trust, and why we trust certain things, and what are the appropriate reasons to trust certain things that we hear, see, et cetera.
[00:19:16] Bob: I do think that's good advice. Don't trust everything you read or everything you hear, or everything you see, but let's be honest, that is a very tall order, and frankly, I'm not sure how much I want to live in a world when nobody trusts anything.
[00:19:32] Jonathan Anderson: I mean it's going to be a long time before human nature isn't trusting because we are a fundamentally social creature, et cetera. Even things like text, I mean people have been writing false things down in printed form for hundreds of years. And yet, people still believe all kinds of interesting things that they read on Facebook. So I mean that's, dealing with human nature is well outside of my expertise, and I’m afraid I'm not sure what to do with that, but for this specific kind of scam, there are things that we can do, which look a lot like traditional responses to scams, you know, the, the old adage of, "If it sounds too good to be true, it probably is." Well, if it sounds too bad to be true, then maybe we should check. So in this particular scam for example, you will often have people creating a time-limited, high-pressure situation by calling someone and saying, "I'm in jail right now, and you need to talk to my lawyer, and you need to give them cash so that they can get me out of jail." Now first of all, I don't know a lot of lawyers who send couriers to your door to collect bundles of cash, but there are things that you could do such as, okay, well, where are you in jail? Let me call them back. Do you have a lawyer? Is this lawyer somebody I can find in a phone book, and can I call their office in order to arrange for whatever needs to be arranged for? It's kind of the, the old advice about, if you have credit card fraud, well don't tell the person who called you, call the number on the back of your card so that you know who you're talking to. And some of those kind of simple techniques do allow people to take back control of the conversation, and, and they're not kind of glitzy, exciting techniques, because they're not technologically based, but unfortunately, some of the technologically based things that people propose like trying to identify certain glitches in the AI-generated speech, those aren't going to last, because whatever those glitches are, we're going to be able to smooth them out soon enough. So really, we need a more fundamental approach that goes back to the sort of the first principles of, of scams.
[00:21:37] Bob: So, what about the companies involved? So having a checked box that says, "Fingers-crossed, promise you won't use this for fraud." I don't know if that sounds like enough to me. Should we be demanding more of our tech companies who make this technology available?
[00:21:50] Jonathan Anderson: I want to say yes. (chuckles) But in the long run, I don't know that that's actually going to make a difference. If you prevent companies based in western democracies from producing this kind of technology or if you prevent them from selling it to end-users without, I don't know, making the end-users register themselves with the local police or something, it's not going to stop the technology from being developed, it's just going to push it offshore. I think outlawing artificial voices would be a little bit like outlawing Photoshop. You might well be able to make Adobe stop shipping Photoshop, but you're not going to be able to prevent people around the world who have access to technology from making image manipulation software.
[00:22:35] Bob: There are some situations where companies have developed technologies and simply decided they weren't safe to sell to the general public. That happened at Google and Facebook with facial recognition, for example. What about that as a solution?
[00:22:50] Jonathan Anderson: Yeah, so when you have a large company deciding whether or not they're going to use certain technology with the data that their users have provided them, then that's a slightly different conversation about the, the responsibilities that they have, and the ethics that they have to uphold, and the potential for regulation if they get that wrong, and of course, I guess they want to avoid that at all costs. But when you're looking at something like this where you have a company that is selling individual services to individual users and most of those individual users are probably generating chat bots for their website and other kind of legitimate uses, and a few are scammers, that might be a lot harder to deal with through that kind of a, a good corporate citizen kind of approach.
[00:23:37] Bob: I think the people listening to this topic right now, and in general in the news cycle, you know we are, we are in the "AI is scary" phase, and deep fakes are very, very scary. So what sort of future are we building here?
[00:23:55] Jonathan Anderson: So I understand that when photographs in newspapers were a new thing, that you could find quotes in newspapers from people saying things like, "Well, if they wrote about it in the paper, I'm not going to believe that. But if there's a picture, it really must have happened." And I don't think people have that attitude anymore, because it took a few decades, but people got used to the idea that an image can be faked. And so now when we look at a newspaper, and we see an image, we aren't asking, does that image look doctored, we ask, who gave me this image? And if it's a professional photojournalist with a reputation to uphold at a reputable publication, we believe that the image hasn't been doctored, not because of some intrinsic quality, but because of who it came from. And I think that is kind of a long term, sustainable strategy for this sort of thing where if you can know where, if you can know the source of what you're listening to, if you can know the source or the conduit of who's providing you with a recording, an image, a video, then that's where you need to be making those trust decisions, not based on the quality of the thing itself. I mean it took a while for people to get used to the idea that air brushing images, and then later photoshopping images was a real thing, and it may, unfortunately, take a while for people to get used to this new scam, or I should say this new augmentation of an old scam. And unfortunately, while people are getting used to that idea, there may well be lots of exploitation of that kind of cognitive loophole, but hopefully people will catch up with it soon because just as we've all kind of learned, you shouldn't trust everything you read on the internet, well you also, unfortunately, shouldn't believe everything that you heard over your phone.
(MUSIC SEGUE)
[00:25:50] Bob: Jonathan did a great job of explaining the problem, and a scary good job of using a service to imitate my voice. But there are a lot of people out there trying to figure out what we can do about AI and so-called deep fakes like voice cloning. What can government do to rein in their use by criminals? What can industry associations do? What can the law do? What kind of future do we really want to build? Our next guest is deeply immersed in that conversation right now, and as you can see, it's pretty important we have this conversation right now.
[00:26:26] Chloe Autio: My name is Chloe Autio, and I'm an independent AI policy and governance consultant. So I help different types of organizations, public sector, private sector, some non-profits, sort of understand and make sense of the AI policy ecosystem, and then also understand how to build AI technologies a bit better.
[00:26:47] Bob: Okay, so that term that you just used there, the AI policy governance ecosystem, what is that?
[00:26:54] Chloe Autio: I would define it as sort of the network of actors and stakeholders, who are developing and deploying AI systems, and also those who are thinking about regulating them and the organizations that are engaging with regulators and these companies. So think, you know, civil society, academics, different types of foundations and organizations like think tanks that are doing interesting work and research on new and emerging topics. The AI policy ecosystem, as I describe it, is, you know, like I said, the network of people sort of thinking about how we can usher in the next wave of this technology responsibly and with guard--, guardrails in a way that also promotes its responsible uptake.
[00:27:38] Bob: Okay, so let's face it, right now every story has AI connected to it, just like every story had .com connected to it 25 years ago or whatnot, right. Um, and most of those stories, well a lot of those stories are scary, are about crime, they're about some sort of dystopian future. Um, how scared should we be about AI?
[00:27:59] Chloe Autio: That's a really good question, Bob. And I think a lot of people are thinking about this. I think every stakeholder in the network that I just described is trying to kind of figure out what, what all of this hype about AI actually means to them, or should mean to them, right? Should they be very scared, should we be talking about extinction? Are those fear--, fears real? Or should we kind of be thinking about the harms that actually exist today and maybe there's, there's a middle ground that we should be thinking about. But I, I think, you know, as with any sort of technology, right, there's a lot of hype and a lot of excitement initially, and then we sort of reach that plateau of, okay, how is this really working in the world? And what are the things that we need to be thinking about in relation to deployment and implementation. And I think we're just sort of getting there with generative AI. So some of it remains to be determined, but I do think, and I will say that I think some of these fears, particularly around sort of like longer term risks that we haven't even really defined yet, are, are dominating a little bit of the conversation and, and we need to sort of continue to think about the harms that are happening right now.