The problem of deepfakes is real and growing. That's why We4C's UnCognito offers a cutting-edge solution for detecting audio deepfakes in real time and at scale.
Deepfakes are about to explode in number and sophistication, especially because new generative AI video, audio, and image tools make it easier than ever before to generate and manipulate content.
What’s interesting is that most VCs don’t seem to be paying much attention to the deepfake detection and anti-AI security space. More than $2.7 billion has been invested in consumer generative AI content tools, but only $500 million in deepfake detection (PitchBook). That’s surprising, given that deepfakes can cost companies millions and, according to one study, fake news cost the global economy $78 billion in 2020.
Are investors right?
Maybe deepfake detection tools simply can’t keep up, so we should just make creators and publishers embed provenance data and call it a day. That’s what C2PA, a joint effort among Adobe, Arm, Intel, Microsoft and Truepic, aims to do with its new technical standard.
To dig deeper, I looked into how startups and incumbents are fighting deepfakes (market map below):
There are three major ways that players are addressing deepfakes:
Method #1: Detection tools use various techniques to determine whether an image or video has been manipulated or created by AI. Some of these companies, like BioID, Clarity, and Kroop, use AI models trained on real and fake images to spot the differences.
Others identify specific signs that images, videos, and audio have been manipulated. For example, Intel’s FakeCatcher analyzes patterns of blood flow to detect fake videos. DARPA’s Semantic Forensic project develops logic-based frameworks to find anomalies, like mismatched earrings. Startups working on this include Attestiv, DeepMedia, Duck Duck Goose, Illuminarty, Reality Defender, and Resemble AI.
ID verification tools are a subset of detection tools built to authenticate personal documents and user profiles. They often combine image analysis with liveness detection (i.e., when you’re asked to take a selfie or make a face). AuthenticID, Hyperverge, Idenfy, iProov, Jumio, and Sensity are some of the companies in this space.
Of course, detection-based approaches are inherently reactive, so they have to constantly keep up with evolving generative AI models. But many of these tools boast accuracy rates above 80%, compared with only about 60% for humans.
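To make the artifact-based approach concrete, here is a toy sketch — not any vendor's actual method. Many generated images leave statistical traces in the high-frequency part of an image's spectrum, so one crude signal is how much spectral energy sits outside the low-frequency band. The `0.5` threshold below is purely illustrative; real detectors learn their decision boundaries from labeled real/fake data.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray) -> float:
    """Fraction of spectral energy outside the central low-frequency band."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = spectrum.shape
    # Low-frequency band: the central half of each axis after the shift.
    low = spectrum[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4].sum()
    return float(1.0 - low / spectrum.sum())

def looks_synthetic(image: np.ndarray, threshold: float = 0.5) -> bool:
    # A production detector would learn this decision boundary from
    # labeled data rather than using a hand-picked cutoff.
    return high_freq_energy_ratio(image) > threshold
```

A natural, smoothly varying image concentrates its energy at low frequencies and scores near zero, while noise-like synthetic artifacts push the ratio up — which is exactly the kind of cue a trained model exploits far more robustly than this heuristic.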
Method #2: Certification tools, on the other hand, proactively embed provenance data into image and video files, with a record permanently stored on a blockchain. Truepic allows enterprises to add, verify, and view C2PA content credentials, including at the point of capture on smartphone cameras. Similarly, CertifiedTrue allows users to capture, store, and certify photos for legal proceedings. This information is then recorded on a blockchain, which makes it permanent, public, and unalterable.
The upside is that we’re beginning to establish a standard for content authenticity; the downside is that these programs are opt-in. Authenticating all or even most of the content that exists and will be generated will be a major challenge, though some camera makers, like Canon, are working on embedding authentication at the point of capture.
However, with the proliferation of deepfakes, the paradigm is shifting from “real until proven fake” to “fake until proven real”. Authentication at the hardware level will likely become the only way to prove humanity, since publisher- or social media-level authentication only proves where content first appeared, not whether a human made it.
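The core mechanics of a content credential can be sketched in a few lines. This is a simplified illustration, not the C2PA schema: real credentials are cryptographically signed manifests with a much richer structure, while this toy version just binds a content hash to capture metadata so that any later edit breaks the link. The field names and the `anchor` helper are hypothetical.

```python
import hashlib
import json
import time

def provenance_record(content: bytes, creator: str, device: str) -> dict:
    """Bind a content hash to capture metadata at the point of creation."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "capture_device": device,
        "captured_at": int(time.time()),
    }

def verify(content: bytes, record: dict) -> bool:
    # Recomputing the hash detects any later edit: even a one-byte
    # change to the content breaks the link to the stored record.
    return hashlib.sha256(content).hexdigest() == record["content_sha256"]

def anchor(record: dict) -> str:
    # Stand-in for writing to a ledger: hash the serialized record.
    # A real system would publish this digest on a blockchain so the
    # record itself becomes tamper-evident.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
```

The design point is the direction of trust: the record proves nothing about content it was never bound to, which is why opt-in coverage — who generates records at all — is the hard part, not the cryptography.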
Method #3: Lastly, narrative tracking platforms examine how fraud and disinformation spread across the web, keeping corporations and governments informed of high-risk narratives. This is a bigger-picture approach that tracks the spread of misinformation online and verifies content by examining it in context.
Players include startups like Blackbird.AI and Buster.AI, as well as public-private partnerships like the EU-funded project WeVerify. For example, large companies use Blackbird.AI’s Constellation Dashboard to track online narratives, which are given risk scores, so that they can mitigate misinformation.
No single tool or strategy can completely protect against the impact of deepfakes, so individuals, enterprises, and governments will have to rely on a mix of solutions. That leaves plenty of room for new entrants in the deepfake detection and anti-AI security space.
Here are some key opportunities for builders and investors:
There’s no magic formula for defending against deepfakes. But with deepfakes causing financial and reputational harm to people, organizations, and governments, deepfake detection is an area to watch.
Warren Buffett cautioned the tens of thousands of shareholders who packed an arena for his annual meeting that artificial intelligence scams could become "the growth industry of all time."
Doubling down on his cautionary words from last year, Buffett told the throngs he recently came face to face with the downside of AI.
And it looked and sounded just like him. Someone made a fake video of Buffett, apparently convincing enough that the so-called Oracle of Omaha himself said he could imagine it tricking him into sending money overseas.
The billionaire investing guru predicted scammers will seize on the technology, and may do more harm with it than society can wring good. "As someone who doesn't understand a damn thing about it, it has enormous potential for good and enormous potential for harm and I just don't know how that plays out," he said.
The day started early Saturday with Berkshire Hathaway announcing a steep drop in earnings as the paper value of its investments plummeted and it pared its Apple holdings.
The company reported a $12.7 billion profit, or $8,825 per Class A share, in the first quarter, down 64% from $35.5 billion, or $24,377 per A share, a year ago. But Buffett encourages investors to pay more attention to the conglomerate's operating earnings from the companies it actually owns. Those jumped 39% to $11.222 billion, or $7,796.47 per Class A share, led by the performance of its insurance companies.
None of that got in the way of the fun.
Throngs flooded the arena to buy up Squishmallows of Buffett and former Vice Chairman Charlie Munger, who died last fall. The event attracts investors from all over the world and is unlike any other company meeting. Those attending for the first time are driven by an urgency to get here while the 93-year-old Buffett is still alive.
"This is one of the best events in the world to learn about investing. To learn from the gods of the industry," said Akshay Bhansali, who spent the better part of two days traveling from India to Omaha.
Devotees come from all over the world to vacuum up tidbits of wisdom from Buffett, who famously dubbed the meeting "Woodstock for Capitalists." But a key ingredient was missing this year: It was the first meeting since Munger died. The meeting opened with a video tribute highlighting some of his best known quotes, including classic lines like "If people weren't so often wrong, we wouldn't be so rich." The video also featured skits the investors made with Hollywood stars over the years, including a "Desperate Housewives" spoof where one of the women introduced Munger as her boyfriend and another in which actress Jamie Lee Curtis swooned over him.
As the video ended, the arena erupted in a prolonged standing ovation honoring Munger, whom Buffett called "the architect of Berkshire Hathaway." Buffett said Munger remained curious about the world up until the end of his life at 99, hosting dinner parties, meeting with people and holding regular Zoom calls.
"Like his hero Ben Franklin, Charlie wanted to understand everything," Buffett said.
For decades, Munger and Buffett functioned as a classic comedy duo, with Buffett offering lengthy setups to Munger's witty one-liners. He once referred to unproven internet companies as "turds."
Together, the pair transformed Berkshire from a floundering textile mill into a massive conglomerate made up of a variety of interests, from insurance companies such as Geico to BNSF railroad to several major utilities and an assortment of other companies.
Munger often summed up the key to Berkshire's success as "trying to be consistently not stupid, instead of trying to be very intelligent." He and Buffett also were known for sticking to businesses they understood well.
"Warren always did at least 80% of the talking. But Charlie was a great foil," said Stansberry Research analyst Whitney Tilson, who was looking forward to his 27th consecutive meeting.
Next-gen leaders
Munger's absence, however, created space for shareholders to get to know better the two executives who directly oversee Berkshire's companies: Ajit Jain, who manages the insurance units, and Greg Abel, who handles everything else and has been named Buffett's successor. The two shared the main stage with Buffett this year.
The first time Buffett kicked a question to Abel, he mistakenly said "Charlie?" Abel shrugged off the mistake and dove into the challenges utilities face from the increased risk of wildfires and some regulators' reluctance to let them collect a reasonable profit.
Morningstar analyst Greggory Warren said he believes Abel spoke up more Saturday and let shareholders see some of the brilliance Berkshire executives talk about.
Abel offered a twist on Munger's classic "I have nothing to add" line by often starting his answers Saturday by saying "The only thing I would add."
"Greg's a rock star," said Chris Bloomstran, president of Semper Augustus Investments Group. "The bench is deep. He won't have the same humor at the meeting. But I think we all come here to get a reminder every year to be rational."
A look to the future
Buffett has made clear that Abel will be Berkshire's next CEO, but he said Saturday that he had changed his opinion on how the company's investment portfolio should be handled. He had previously said it would fall to two investment managers who handle small chunks of the portfolio now. On Saturday, Buffett endorsed Abel for the gig, as well as overseeing the operating businesses and any acquisitions.
"He understands businesses extremely well, and if you understand businesses, you understand common stocks," Buffett said. Ultimately, it will be up to the board to decide, but the billionaire said he might come back and haunt them if they try to do it differently.
Overall, Buffett said Berkshire's system of having all the noninsurance companies report to Abel and the insurers report to Jain is working well. He himself hardly gets any calls from managers anymore because they get more guidance from Abel and Jain. "This place would work extremely well the next day if something happened to me," Buffett said.
Nevertheless, the best applause line of the day was Buffett's closing remark: "I not only hope that you come next year but I hope that I come next year."
A high school athletic director was arrested after an AI-generated voice recording of his school's principal making racist comments went viral.
Baltimore County Police arrested the former athletic director of Pikesville High School on Thursday, alleging he used an AI voice clone to impersonate the school’s principal, leading the public to believe Principal Eric Eiswert had made racist and antisemitic comments, according to The Baltimore Banner.
Dazhon Darien was stopped at a Baltimore airport on Thursday morning attempting to board a flight to Houston with a gun, according to the Banner. Investigators determined Darien faked Eiswert’s voice using an AI cloning tool. The AI voice recording, which was circulated widely on social media, made disparaging comments about Black students and the Jewish community.
“Based on an extensive investigation, detectives now have conclusive evidence the recording was not authentic,” the Baltimore County Police said in a press release. “As part of their investigation, detectives requested a forensic analyst contracted with the FBI to analyze the recording. The results from that analysis indicated the recording contained traces of AI-generated content.”
The deepfake reportedly led to public outrage, causing Principal Eiswert to receive a wave of hateful messages and forcing his temporary removal from the school. The school’s front desk was flooded with calls from concerned parents. The Pikesville school district ultimately arranged for a police presence at the school and at Eiswert’s house to restore a sense of safety.
Baltimore Police officials say the former athletic director made the AI recording to retaliate against the school’s principal. A month before the recording went viral, The Banner reports that Eiswert launched an investigation into Darien for potential theft of school funds. Darien authorized a $1,916 payment to the school’s JV basketball coach, who was also his roommate, bypassing proper procedures. Darien submitted his resignation earlier in April, according to school documents.
Police say Darien was the first of three teachers to receive the audio clip the night before it went viral. The Banner reports another teacher who received the recording sent it to students, media outlets, and the NAACP. Police wrote in charging documents that Darien used the school network to search for OpenAI tools and access large language models on multiple occasions. Given how widely used these tools are, however, it’s unclear at this time how investigators were able to pinpoint Darien as the creator of the recording.
The creation of AI-generated audio deepfakes is an increasingly large problem facing the tech world. The Federal Communications Commission took steps in February to outlaw deepfake robocalls after a Joe Biden deepfake misled New Hampshire voters.
In this case, AI experts were able to identify the alleged audio of the Baltimore principal as a fake. However, that determination came two months after the audio went viral, and the damage may have already been done. Deepfakes need to be caught early to minimize harm, but that’s easier said than done.
Deepfakes have long raised concern in social media, elections and the public sector. But now with technology advances making artificial intelligence-enabled voice and images more lifelike than ever, bad actors armed with deepfakes are coming for the enterprise.
“There were always fraudulent calls coming in. But the ability for these [AI] models now to imitate the actual voice patterns of an individual giving instructions to somebody with the phone to do something—these sorts of risks are brand new,” said Bill Cassidy, chief information officer at New York Life.
Banks and financial services providers are among the first companies to be targeted. “This space is just moving very fast,” said Kyle Kappel, U.S. Leader for Cyber at KPMG.
How fast was demonstrated earlier this month when OpenAI showcased technology that can recreate a human voice from a 15-second clip. OpenAI said it would not release the technology publicly until it knows more about potential risks for misuse.
Among the concerns are that bad actors could use AI-generated audio to game voice-authentication software used by financial services companies to verify customers and grant them access to their accounts. Chase Bank was fooled recently by an AI-generated voice during an experiment. The bank said that to complete transactions and other financial requests, customers must provide additional information.
Deepfake incidents in the fintech sector increased 700% in 2023 from the previous year, according to a recent report by identity verification platform Sumsub.
Companies say they are working to put more guardrails in place to prepare for an incoming wave of generative AI-fueled attackers. For example, Cassidy said he is working with New York Life’s venture-capital group to identify startups and emerging technologies designed to combat deepfakes. “In many cases, the best defense of this generative AI threat is some form of generative AI on the other side,” he said.
Bad actors could also use AI to generate photos of fake driver’s licenses to set up online accounts, so Alex Carriles, chief digital officer of Simmons Bank, said he is changing some identity verification protocols. Previously, one step in setting up an account online with the bank involved customers uploading photos of driver’s licenses. Now that images of driver’s licenses can be easily generated with AI, the bank is working with security vendor IDScan.net to improve the process.
Rather than uploading a pre-existing picture, Carriles said, customers now must photograph their driver’s licenses through the bank’s app and then take selfies. To avoid a situation where they hold cameras up to a screen with an AI-generated visual of someone else’s face, the app instructs users to look left, right, up or down, as a generic AI deepfake won’t necessarily be prepared to do the same.
The challenge, Carriles said, is balancing a good user experience against making the process so seamless that attackers can coast through.
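The randomized-pose idea above can be sketched as a simple challenge-response check. This is an illustrative toy, not IDScan.net's or any bank's implementation; the `observed` labels stand in for the output of a pose-estimation model watching the live camera feed.

```python
import secrets

POSES = ("left", "right", "up", "down")

def issue_challenge(length: int = 4) -> list:
    """Server side: pick an unpredictable sequence of head poses.

    Because the sequence is random per session, a pre-rendered deepfake
    video cannot know in advance which movements to show.
    """
    return [secrets.choice(POSES) for _ in range(length)]

def check_response(challenge: list, observed: list) -> bool:
    # `observed` would come from a pose-estimation model analyzing the
    # live camera feed; here it is just a list of pose labels.
    return observed == challenge
```

The security rests entirely on the unpredictability of the challenge: an attacker replaying a fixed video fails as soon as the requested sequence differs from the one they prepared.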
Not all banks are ringing alarm bells. KeyBank CIO Amy Brady said the bank was a technology laggard when it came to adopting voice authentication software. Now, Brady said, she considers that lucky given the risk of deepfakes.
Brady said she is no longer looking to implement voice authentication software until there are better tools for unmasking impersonations. “Sometimes being a laggard pays off,” she said.
Write to Isabelle Bousquette at isabelle.bousquette@wsj.com
Copyright ©2024 Dow Jones & Company, Inc. All Rights Reserved.
Appeared in the April 4, 2024, print edition as 'Deepfakes Are New Threat To Finance'.