Google opens fire on evil AI
Special attack unit targets ‘threat actors’
as smart machine wars begin to ramp up
Merely gathering intelligence about bad actors using AI-boosted malware isn’t good enough, warns a top Google security executive. Foreseeing that an evil AI nightmare is looming for businesses and governments, Google is already using AI to hunt down and neutralize evil AI activities before they are even launched, she says.
“We look for partners to help us find opportunities to actually do takedowns and disrupt threat actor activity,” Sandra Joyce, vice president of Google’s Global Threat Intelligence Group, told experts at the Cipher Brief Threat Conference at Sea Island, Georgia.
Though AI-enhanced threats on the whole were still manageable, some nasty intrusions have already occurred. In one case, a finance worker paid out $25 million after being deceived by a deepfake chief financial officer on a video call, Joyce said.
Google experts scanning the “dark web” have found that adversary states and criminals can purchase tools that strip safety controls, or “guardrails,” off commercially available AI systems, she told the conferees, who gathered on Oct. 19 and 20. The availability of cheap large language model (LLM) AI systems will assuredly fuel the spread of dangerously ruthless AI tactics, she indicated.
The cyber expert also warned that North Korean tech workers have infiltrated Fortune 500 companies and firms around the globe using AI-forged documents, presenting a grave peril in the event of a conflict.
Joyce also accused “the Russians” of deploying AI-backed malware, and Chinese hackers of trying to use Google’s Gemini AI for nefarious purposes.
Background from Perplexity.ai:
The conference gathers elite former senior intelligence officials, current national security leaders and private sector executives for high-level, non-partisan discussions focused on global and national security threats. Topics covered include cyber threats, foreign malign influence, emerging technologies, AI, espionage, critical infrastructure, gray zone operations, and geopolitical issues involving major players like China, Russia, Ukraine, and the Middle East.
The conference is known for candid, expert-led conversations and exercises, offering insights not available in open sources or public news. Attendance is limited to about 250 professionals to maintain high-level engagement and is by invitation only. The event emphasizes non-political solutions and networking among leaders in government, industry, and security fields.

Transcript of video:
I’ll start by thanking Suzanne and Brad for this incredible conference. I think this is my fifth time coming, and every year it’s flawless execution and amazing content. We’re just so grateful that you do this.

Google Threat Intelligence mission & scale

Let me start off by explaining a little bit about how Google Threat Intelligence operates. Our mission is to protect Google users and customers. And when I say Google, I mean the entire Alphabet ecosystem; when I say users, I mean billions, with a B. When the Mandiant acquisition happened, I came over with Mandiant, and probably the first thing we noticed was different was truly the scale at which Google has visibility into threats. We use that visibility to create a common operating picture, a comprehensive threat picture that we can use to disrupt adversaries.

One threat in particular that we have been focusing on is threats using AI, and threats to AI. The rapid adoption of AI in industry is something I think many of us have never seen; maybe the onset of the internet comes close. But this AI generational reset moment is something that threat actors are starting to pay a lot of attention to.
How adversaries are adopting AI

So what are they doing? They’re using AI, obviously, to do better social engineering: the spear-phishing emails are written better, and the fake images being created are a little more sophisticated. They’re also using AI to help them do what’s called vibe coding, natural-language coding where they just say what they want to do and the AI creates the code for them, so the barrier to entry is lower.

Automated intrusion & AI vuln research

We’re also seeing two things that are very surprising to us in how rapidly they’ve been coming through: automated intrusion activity and AI-powered vulnerability research. I’ll talk a little more about that as we go forward. And finally, AI-powered malware. This is something we have been looking for, and being asked about, for years, and we saw it in the wild for the first time. I’m not talking about somebody doing research in a lab; I’m saying we saw AI-powered malware being used by the Russians, and I’ll tell you more about that.
Deepfakes/voice fraud (e.g., $25M CFO scam)

Social engineering in general is something we’ve seen for many years, including by cyber means, but it’s not just the images and the deepfakes; it’s also the voices that can be created. We’re seeing individuals tricked into doing all kinds of things, including a finance worker paying out $25 million after a video call with a deepfake CFO. What’s interesting is that this is something we have actually red-teamed using our Mandiant folks, and it is very, very effective. So what are the business processes that are going to need to be put into place, and what are the approvals that need to happen, if you could be in a video call with somebody and not know you’re talking to an AI-generated persona? That’s how sophisticated this is getting at this point.
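The question of what approvals survive an undetectable deepfake has at least one practical answer: require evidence from a channel the attacker on the call cannot touch. Below is a minimal Python sketch of such an out-of-band approval gate; the threshold, helper names, and workflow are hypothetical illustrations, not anything Joyce describes Google or Mandiant deploying.

```python
import secrets

# Minimal illustrative sketch (not a real Google/Mandiant control):
# route any large payment approval through a channel the deepfake
# cannot see. Threshold, helpers, and flow are all hypothetical.

APPROVAL_THRESHOLD_USD = 50_000  # illustrative cutoff

def send_to_registered_device(requester: str, code: str) -> None:
    # Stand-in for delivery to a device enrolled at onboarding,
    # never to a number or address supplied during the call itself.
    print(f"[out-of-band] one-time code sent to {requester}'s registered device")

def approve_payment(amount_usd: float, requester: str) -> bool:
    """Gate large transfers behind a second, pre-registered channel."""
    if amount_usd < APPROVAL_THRESHOLD_USD:
        return True  # small payments follow the normal workflow

    # A deepfake on the video call has no access to the registered
    # device, so it cannot read the one-time code back.
    code = secrets.token_hex(4)
    send_to_registered_device(requester, code)
    echoed = input(f"{requester}, read back the code you received: ")
    return echoed.strip() == code
```

The design point is that a convincing face and voice contribute nothing toward approval; only possession of the pre-registered device does.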
Tracking adversary use of Gemini (Iran/China/North Korea)

We also track threat actor use of Gemini, Google’s large language model, and we have watched threat actors use it all across the world. So I’d like to tell you a little about what we’re seeing in the usage of Gemini by what we’ll call the big four. When we look at Iran, for example, they try to be a power user of Gemini, for some reason; I’ve always thought I would love to ask them what features they like, because maybe we could get better at what we’re doing. We’ve watched them use AI to create content, which they really like to localize into English or Hebrew, and we’ve watched them do a lot of research into military platforms; there’s a lot of threat actor interest in anti-drone technology and the F-35. So in some ways they’re using it a little bit like search, but also for coding and scripting tasks.

We’ve watched China-based threat actor groups use AI to do vulnerability research, and they’ve even tried to reverse-engineer a very popular EDR tool that’s used very broadly. When we look at this type of thing, what is really clear to us is that our adversaries are trying to use these tools as much as they can across the entire attack life cycle: understanding the target, doing the research, and trying to develop and weaponize malware, for example. They’re also using AI to enable post-compromise activities. We watched a China-based threat actor actually try to use Gemini to assist them in the next steps of an intrusion. The way this works is that if you’re hacking into a network and you get to a juncture that requires a decision, or you need some technical advice, you can use the LLM to make suggestions about what the next step should be.
NK IT worker infiltration & pre-positioning

We’ve watched North Korean threat actors use AI to generate false documents. If you haven’t heard, there is a real problem right now with North Korean IT workers getting themselves hired at Fortune 500 companies. I know what you’re thinking: how in the world could that possibly happen, and is North Korea even sophisticated enough to pull something like that off? I’m here to tell you they’re doing very, very well, and they are getting themselves hired across the Fortune 500. Now, the FBI has done a wonderful job of rounding up a lot of the workers in the United States, and the intermediaries that are knowingly or unknowingly hosting laptop farms for their use, but now they’re moving to the rest of the world. We have seen this operation across Europe and Asia, and in Europe we watched one North Korean IT worker who had nine different personas and was recommending them to each other for work. Some people say, well, they’re raising money, and yes, it’s going to their nuclear program, and that is terrible. But the other part of the plot we shouldn’t miss is that we now have North Koreans embedded in hundreds, maybe thousands, of companies across the world, and that’s a very effective way to pre-position in the event of some type of conflict. They have privileged access; in some cases they’re given full employee access. And they get away with it because of the forged documents and the AI-generated images they can create. They can do these interviews, or they hire people to do the interviews for them, and it’s working.
AI-powered malware observed in the wild

So I told you there was AI-powered malware and that we saw it in the wild. The Russian military was actually using it in Ukraine, and basically the way it works is that the malware calls out via API to a Chinese large language model, and from there it issues commands. The takeaway here is that this is really going to frustrate static malware analysis: if threat actors can generate commands on the fly, it makes things much more complex.
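Because the commands never exist in the binary, signature writers have less to match on; what static analysis can still catch is the call-out itself. The toy triage script below is a sketch rather than any tool Joyce mentions: it flags samples whose embedded strings reference well-known LLM API hosts. The host list is illustrative and incomplete, and real malware can defeat this by encrypting its strings.

```python
import re
import sys

# Toy static-triage heuristic, not a production detector: flag samples
# whose embedded strings reference well-known LLM API hosts. The host
# list is illustrative and deliberately incomplete.
LLM_API_HOSTS = [
    b"api.openai.com",
    b"api.anthropic.com",
    b"generativelanguage.googleapis.com",  # Gemini API
    b"dashscope.aliyuncs.com",             # Alibaba Qwen API
]

def extract_strings(data: bytes, min_len: int = 6) -> list[bytes]:
    """Pull printable-ASCII runs out of a binary, like the Unix `strings` tool."""
    return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)

def triage(path: str) -> list[bytes]:
    """Return embedded strings that mention a known LLM API host."""
    with open(path, "rb") as f:
        data = f.read()
    return [s for s in extract_strings(data)
            if any(host in s for host in LLM_API_HOSTS)]

if __name__ == "__main__":
    for sample in sys.argv[1:]:
        for hit in triage(sample):
            print(f"{sample}: LLM endpoint string found: {hit.decode(errors='replace')}")
```

That fragility is one reason the defensive story later in the talk centers on using AI against AI rather than on more signatures.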
Why IR volumes haven't spiked—yet

Now, I don’t want to alarm people too much, because yes, we’re seeing this, but it’s just starting. What’s interesting is that we haven’t seen it move through the entire life cycle to the point where there’s a major change in the incident response work we’re seeing. What I mean by that is, we would assume at some point that threat actors using AI are going to get so good at it that they’re going to cause more intrusions, and more sophisticated intrusions, and companies across the world are going to start to raise the alarm and say, “Hey, we’re up against something that’s different.” That hasn’t happened yet. We haven’t seen a surge in incident response work, and we haven’t seen any real difference in the way that companies, organizations, and governments need to defend against this. So this is still about making sure you’ve patched your vulnerabilities. It’s still about not clicking on things. It’s still about the basics at this point. What we anticipate, though, and I think I said this on the panel on Sunday, is that I really do believe we’re in the before times, because as this technology gets better we’re going to see more sophisticated uses, and automated vulnerability scanning and that type of identification done at scale can really create something in the threat landscape that is different than anything we’ve ever seen before.
So, let’s switch to the good news. What are we doing about it? Well, we’re doing quite a bit. My team in particular is now working very closely with Google DeepMind. We’re looking at threats we can see from our vantage point, and we’re making a lot of progress in trying to understand what threat actors are doing.

Now, the landscape is pretty interesting right now, because it’s not just about state versus state. There is tremendous competition between all kinds of organizations, and you may have heard of some very exciting new technologies, cheap LLMs and AI models, coming out that create a real advantage for threat actors. In fact, in our group we monitor the underground, the dark-web marketplaces where threat actors are buying and selling things, and what we saw was a lot of tools being sold to try to bypass guardrails. Gemini’s guardrails, for example, kicked in and really kept threat actors from doing anything serious or anything new and novel. But when threat actors have access to these underground tools, they’re going to get the power of LLMs, even if those tools are full of security holes. They’re going to be able to do the types of things we’re talking about, things that the normal AI models, sold by companies that actually build in security features, are designed to stop. So that’s basically what we see on the horizon.
“AI vs AI” defenses (Big Sleep)

So what do we do about it? It really is about using AI to battle AI. It kind of feels like a movie, but it’s not. What we’re using AI for is an implementation called Big Sleep, something Google DeepMind and Project Zero have worked on together. It is a way for AI to scan for vulnerabilities proactively. In November last year we found our first vulnerability, which was very exciting, because we most likely wouldn’t have found it otherwise, at least not in any reasonable amount of time.

Preempting an SQLite zero-day

Skip forward a few months, and we were actually able to use AI to thwart something a threat actor was trying to do. From what we saw in the underground, we had a few different artifacts, and we saw that there was a zero-day in open-source software called SQLite. We did not know what the vulnerability was, but we knew a threat actor was planning on using it. So we worked with our teams at Google, used the Big Sleep LLM to analyze the source code, and found the vulnerability. We were able to reach out to SQLite and help them get it patched before the threat actor could even move on it. That is the type of AI defense, and the type of really incredible capability, that is going to make a difference in the cyber threat landscape.
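Big Sleep itself is not public, but the core idea, pointing a model at suspect source code and asking for candidate bugs, is easy to sketch. The fragment below uses the public google-generativeai client; the model name, prompt, and placeholder code are illustrative assumptions, and this shows only the general shape of the technique, not the actual Big Sleep implementation.

```python
# Sketch of LLM-assisted vulnerability triage, the general idea behind
# systems like Big Sleep (NOT the Big Sleep implementation itself).
# Assumes the public google-generativeai package and an API key;
# model name and prompt are illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumed to be provisioned separately
model = genai.GenerativeModel("gemini-1.5-pro")

SUSPECT_CODE = r"""
/* paste the C function under review here */
"""

prompt = (
    "You are auditing C code for memory-safety bugs. For the function below, "
    "list any out-of-bounds reads or writes, integer overflows, or unchecked "
    "assumptions, each with the relevant line and a one-sentence rationale.\n"
    + SUSPECT_CODE
)

response = model.generate_content(prompt)
print(response.text)
```

In the SQLite case, the differentiator was intelligence-led targeting: underground artifacts told the team a zero-day existed in SQLite before anyone knew what it was, giving them a narrow place to point the model.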
And we have a lot of other exciting things going on as well that are going to help us in this new and different generational AI reset moment.

The new goal: disruption over intel-sharing

With that, I wanted to end on one note for all my cybersecurity brethren in the room. We have talked a lot in the past decade about info sharing and intel sharing, and I believe that, looking at the statistics, at the rise of ransomware, at the rise of espionage, and at what we’re going to be up against in the AI threat landscape of the future, it’s going to be really critical that we move beyond intel sharing as if that’s the goal. That cannot be the goal. We’re not going to get anywhere; we haven’t gotten anywhere with that mentality. So, repeating what I said Sunday night: Google now has a disruption unit. We look for partners to help us find opportunities to actually do takedowns and disrupt threat actor activity proactively. We do this in legal and ethical ways, and we’re doing it now with partners. I believe this is what industry should be doing already, and it will make a difference if we work together and do this at scale. I’m very excited about this. But we cannot make intel sharing the goal. The goal has to be disruption. Intelligence is for decision makers to do something; it’s not just to be the smartest people in the room about threats.

Partnering for legal/ethical takedowns

So with that, if you have any ideas or you would like to collaborate, we’re already collaborating with many in the room, actually. If you see an opportunity where you think a takedown makes sense, we would love to hear from you, and let’s start a new, refreshed effort to do more to take down all these threats that we’re seeing.

