“If threat actors can use AI to spin up these attacks for just a few cents on the dollar, we’ve got to make sure that we can also handle that volume on our side,” Tian told Forbes.
In 2022, he founded social engineering defense startup Doppel to do just that. And as cybercriminals harness ever more advanced AI models to turbocharge their attacks, Doppel’s AI systems have helped businesses combat them at scale, more quickly and effectively than before.
The startup has built AI agents (software programmed to autonomously carry out specific tasks) to scour the internet, the dark web and social media for potentially fraudulent activity, flagging everything from copycat websites and fake user accounts to malicious advertisements on Google, Instagram and YouTube. Doppel’s agents screen 100 million alerts of such phishing threats every day, filtering real threats from benign ones and reporting them to platforms to be removed. Tian says they do this with about 90% accuracy, and they’re constantly improving.
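The pipeline itself isn’t public, but the workflow described above (scan many sources for alerts, score each one, and escalate only the likely-real threats for takedown) can be sketched roughly as follows. Everything in this snippet, from the field names to the scoring heuristic and threshold, is an illustrative assumption rather than Doppel’s actual code:

```python
# Illustrative sketch of an alert-triage loop like the one described above.
# All names, sources, heuristics and thresholds are hypothetical, not Doppel's code.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str   # e.g. "web", "dark_web", "social_media", "ads"
    url: str
    brand: str    # the brand or person possibly being impersonated

def score_alert(alert: Alert) -> float:
    """Stand-in for a classifier (human analyst, ML model or LLM agent) that
    estimates how likely the alert is a real impersonation threat."""
    suspicious_keywords = ("login", "verify", "giveaway", "support")
    hits = sum(word in alert.url.lower() for word in suspicious_keywords)
    brand_in_url = alert.brand.lower().replace(" ", "-") in alert.url.lower()
    return min(1.0, 0.25 * hits + (0.3 if brand_in_url else 0.0))

def triage(alerts: list[Alert], threshold: float = 0.7) -> list[Alert]:
    """Filter real threats from benign alerts; the survivors get reported
    to the hosting platform for takedown."""
    return [a for a in alerts if score_alert(a) >= threshold]

if __name__ == "__main__":
    sample = [
        Alert("web", "https://acme-bank-login-verify.example", "Acme Bank"),
        Alert("social_media", "https://fanpage.example/acme-official", "Acme Bank"),
    ]
    for threat in triage(sample):
        print("Report for takedown:", threat.url)
```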
On Friday, Doppel announced $35 million in new funding in a round led by Bessemer Venture Partners. With $55.5 million in venture backing, it’s now valued at $205 million. It’s another milestone for the company Tian cofounded in 2022 with CTO Rahul Madduluri, whom he had met at Uber while working on the company’s flying car moonshot, Uber Elevate.
Initially, the duo intended to combat NFT-related fraud, helping crypto companies track and report counterfeits. But in 2023, Doppel began to broaden its customer base, eventually expanding to other industries.
In its early days, Doppel used contractors in countries like the Philippines and India to sort through thousands of potential threats and decide which ones were malicious. But in September 2024, it found that OpenAI’s new models, those capable of “reasoning,” could perform the same tasks. It replaced those contract workers with a cohort of new AI agents and used them to automate 30% of its security operations. Tian claims AI agents have been able to identify more threats than humans. It’s been transformative for the company’s business.
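Tian hasn’t detailed how Doppel prompts these models, but the general pattern of using an LLM as a triage analyst is simple to sketch. In the snippet below, the model name, prompt wording and one-word output contract are assumptions made for illustration, not details from Doppel:

```python
# Hypothetical sketch: asking a reasoning-capable OpenAI model to triage one alert.
# The model name, prompt and output format are assumptions, not Doppel's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_alert(url: str, impersonated_brand: str) -> str:
    prompt = (
        "You are a security analyst. Decide whether the flagged item below is a "
        "real impersonation threat or benign.\n"
        f"URL: {url}\n"
        f"Brand possibly impersonated: {impersonated_brand}\n"
        "Answer with exactly one word: MALICIOUS or BENIGN."
    )
    response = client.chat.completions.create(
        model="o1-mini",  # placeholder for whichever reasoning model is available
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

print(classify_alert("https://acme-bank-login-verify.example", "Acme Bank"))
```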
The situation is reminiscent of the one Tom Cruise faces in Mission: Impossible — Dead Reckoning, in which the villain is a rogue AI bent on distorting reality, able to impersonate anyone, create fake news and manipulate entire societies through digital deception.
Doppel’s secret sauce is a “threat graph,” a map of the relationships and interactions between social engineering campaigns: telephone numbers, IP addresses, advertiser accounts. It allows the company to better track malicious hackers who use AI as a productivity tool and to help safeguard businesses against future attacks. Tian said he wants to give the good guys the same view so they can play less whack-a-mole and shut down the entire threat actor infrastructure.
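The term “threat graph” maps naturally onto a graph data structure: every indicator observed in a campaign (a phone number, an IP address, an advertiser account, a domain) becomes a node, and indicators that appear in the same campaign get linked. Here is a minimal sketch of that idea, with hypothetical indicator names and no claim to reflect Doppel’s actual schema:

```python
# Minimal sketch of a "threat graph": indicators observed in scam campaigns become
# nodes, and co-occurrence within the same campaign becomes an edge.
# Indicator formats and field names here are hypothetical, not Doppel's schema.
from collections import defaultdict
from itertools import combinations

class ThreatGraph:
    def __init__(self) -> None:
        self.edges: defaultdict[str, set[str]] = defaultdict(set)

    def add_campaign(self, indicators: list[str]) -> None:
        """Link every indicator seen in one campaign (phone numbers, IPs,
        advertiser accounts, domains) to every other indicator in it."""
        for a, b in combinations(indicators, 2):
            self.edges[a].add(b)
            self.edges[b].add(a)

    def related_infrastructure(self, indicator: str) -> set[str]:
        """Everything reachable from one indicator: the full cluster a defender
        could try to take down instead of playing whack-a-mole."""
        seen: set[str] = set()
        stack = [indicator]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(self.edges[node] - seen)
        return seen - {indicator}

graph = ThreatGraph()
graph.add_campaign(["ip:203.0.113.7", "phone:+1-555-0100", "ads:acct-42"])
graph.add_campaign(["ip:203.0.113.7", "domain:acme-support-help.example"])
print(graph.related_infrastructure("ads:acct-42"))
# prints the IP, phone number and domain tied to the same operation
```

Walking the graph outward from one flagged advertiser account surfaces the rest of the actor’s shared infrastructure, which is the point of trading whack-a-mole for wholesale takedowns.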
Since 2023, the problem has only gotten worse.
Doppel has also taken the tech a step further, building an AI-powered agent that can intercept and shut down targeted attacks.
“Now we have a way to let companies watch without adding any humans to the process,” said Tian. The agent monitors attacks in real time, scanning messages and flagging suspicious links, and checks whether targets are actively being manipulated so that exploitation can be stopped before it happens.
The scale of the problem is enormous. In 2024, Google suspended 40 million malicious advertiser accounts from its platform. Many of them featured AI-generated images and audio of public figures and CEOs to deceive people and promote scams. Between April 2024 and April 2025, Microsoft stopped fraud attempts that would have cost its customers $4 billion and blocked 1.6 million bot signup attempts on its Azure platform.
Tian sees this as a critical opportunity. Doppel’s technology, he argues, can show the industry how companies can effectively combat these threats and then apply that knowledge at larger scale. He likened Doppel’s work to Tom Cruise’s mission in the movie: if attackers use AI to spin up new versions of their attacks, defenders can use AI to counter them just as quickly. “This tells us that AI is power in the hands of the good,” he said.