Prepare for 2025 Cyberthreats with Research Insights from CyberArk Labs

January 14, 2025 Lavi Lazarovitz

The year 2025 started with a bang, with these cybersecurity stories making headlines in the first few days:

As the global threat landscape intensifies, the need for in-depth research and information sharing has never been greater. Our mission at CyberArk Labs is to empower cyber defenders with threat insights that help strengthen their identity security strategies. We’ve got some exciting cybersecurity research underway for 2025, but until it’s ready for prime time, we’re looking back at some of our favorite 2024 projects.

APT29’s Attack on Microsoft: Tracking Cozy Bear’s Footprints

A new chapter in the ongoing geopolitical chaos began with APT29’s attack on Microsoft. It was a stark reminder of the persistent and sophisticated threats from nation-state threat actors—and a glimpse at what would follow in the year ahead. Our research shed light on the infamous APT29 group (aka Cozy Bear, CozyCar, The Dukes, CozyDuke, Midnight Blizzard, Dark Halo, NOBELIUM and UNC2452), its motives and the tactics it continues to use. We deconstructed the Microsoft attack based on disclosed information to show how organizations might prevent similar attacks from happening to them. We highlighted red flags that could indicate password spraying, OAuth abuse and other malicious actions—and prescriptive Identity Threat Detection and Response (ITDR) recommendations.

Read the research here.

Anatomy of an LLM RCE

Much of our AI research in 2023 focused on how threat actors could use AI to influence known attack vectors and compromise human identities. In 2024, we explored the flipside: how they could attack large language models (LLMs) directly. After all, non-human, machine identities are the number one driver of overall identity growth today.

llms manipulation

There’s no question that LLMs are transforming personal and work life. Unfortunately, they’re also highly susceptible to manipulation. Consider how easily LLM-based chatbots can be steered into role-playing scenarios that allow them to bypass security measures, as we did in Operation Grandma. Internal usage of AI models presents new security challenges for which most organizations are unprepared. For instance, manipulated LLMs don’t just create ethical or policy concerns; they can also be used to aid in the compromise of the systems they’re integrated into. Our research demystified this risk by examining a specific LLM Remote Code Execution (RCE) vulnerability we uncovered.

Read the research here.

FuzzyAI: An Open-Source Tool to Help Safeguard Against AI Model Jailbreaks

To help organizations identify and address AI model vulnerabilities, we unveiled FuzzyAI, a new open-source framework that has jailbroken every major tested AI model. We call it FuzzyAI because, at its core is a powerful fuzzer—a tool that reveals software defects and vulnerabilities—capable of exposing vulnerabilities found via more than ten distinct attack techniques, from bypassing ethical filters to exposing hidden system prompts.

We’re excited to share this tool with the community and encourage fellow researchers and organizations to get involved to help uncover new adversarial techniques and advance defense mechanisms.

Explore Fuzzy AI on GitHub.

Game Cheating That Cheats You

For as long as video games have been around, gamers have looked for ways to cheat them. Today, there are plenty of software cheats available online to help players advance faster through game levels and settings. But installing anything on your machine comes with risks—and a game cheat could cheat you. We found proof of this after discovering that Evolve, a popular game cheat, was secretly running malware on players’ machines and stealing their financial data. You wouldn’t think gamers would be particularly interested in cybersecurity research, but what our team found garnered lots of attention.

Read the research here.

White FAANG: Devouring Your Personal Data

Popular websites know a lot about you—even some details you might not realize. This research shows how individual employees’ browsing and internet history can present cyber issues for their employers, personal lives and PII. Detailing how individual browsing history data—downloaded from technology giants like Facebook, Amazon, Apple, Netflix and Google (FAANG)—is easily stolen, we showed how an attacker might abuse this extensive information trove to serve as, for example, an attack vector into employer organizations.

Read the full research.

Unmasking the Cookie-Stealing Malware Threat

Multi-factor authentication (MFA) is a critical identity security protection, but any cybersecurity researcher will tell you it isn’t enough. Many threat actors have shifted from targeting clear text credentials to stealing cookies to hijack web sessions—bypassing MFA controls completely. Our team delved into today’s most popular cookie-stealing malware and prominent infostealer malware families to expose their favorite infection techniques and what you can do about them.

Read the research here.

In related research, we broke down the current state of browser cookies to help people better understand how their information is being stored across popular web browsers, numerous ways attackers can use cookies and best practices for safeguarding cookie data.

Finally, it’s important to understand that cookies go beyond browsers. As web applications and APIs become more complex and diverse—and as more machines and devices communicate with each other over the internet—different forms of session tokens (and associated risks) are emerging. My friend and colleague Shay Nahari, VP of CyberArk Red Team Services, describes the evolution of session-based attacks in this blog post and the Trust Issues podcast.

Explore CyberArk Labs’ Entire Threat Research Library

These research highlights offer just a glimpse into the many vital projects CyberArk Labs worked on in 2024. We invite you to explore the CyberArk Threat Research Blog and check back often to stay at the edge of cutting-edge cyber discoveries this year.

Lavi Lazarovitz is vice president of cyber research at CyberArk Labs.

Previous Article
Securing the Backbone of Enterprise GenAI
Securing the Backbone of Enterprise GenAI

The rise of generative AI (GenAI) over the past two years has driven a whirlwind of innovation and a massiv...

Next Article
7 Key Factors to Consider When Choosing a Modern PAM Solution in 2025
7 Key Factors to Consider When Choosing a Modern PAM Solution in 2025

In 2025, global cybersecurity trends like the rise of Zero Trust, tightening data privacy and AI regulation...