The Attacks Hiding in Your Identity Logs

Every identity-driven attack leaves behind traces. The problem isn’t the absence of signals; it’s that most organizations aren’t looking in the right place.

Security teams tend to focus heavily on endpoints and networks: deploying agents, inspecting traffic, and correlating alerts across infrastructure. Meanwhile, the identity layer, the system that ultimately decides who gets access to what, often receives far less attention beyond basic login-failure monitoring. That gap creates an opportunity, and attackers have become very good at exploiting it.

What makes this particularly frustrating is that the evidence is already there. Authentication logs, permission changes, and behavioral signals generated by identity platforms contain everything needed to spot compromise early. The challenge is less about collecting data and more about knowing what to look for and having the operational discipline to act on it in time.

Below are some of the most relevant attack patterns, how they typically show up in logs, and why they deserve closer scrutiny.

Credential spray campaigns

When attackers go after passwords, they rarely take the obvious route of hammering a single account repeatedly. Instead, they spread their attempts across thousands of accounts, testing one or two common passwords at a time. This technique, known as password spraying, is specifically designed to stay under lockout thresholds. If an account locks after five failed attempts, the attacker will stop at four and move on, repeating the process across the entire directory.

From a detection standpoint, this creates a subtle problem. No single account looks particularly suspicious. The real signal only emerges when you look at the broader pattern: a wave of failures distributed across many users, often coming from a small set of sources within a short window.

Another detail that tends to get overlooked is the use of legacy authentication protocols like IMAP or SMTP. These often bypass MFA entirely and remain enabled longer than they should. A notable example is the 2023 breach of Microsoft’s corporate environment, where a state-sponsored group targeted a test account without MFA using exactly this approach.

The takeaway is straightforward. Monitoring must include every authentication path, not just the modern ones where controls are strongest.
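To illustrate, spray activity can be surfaced by pivoting the analysis from per-account to per-source. The sketch below assumes a simplified, hypothetical log format of (timestamp, source_ip, username, success) tuples; the field layout and thresholds are placeholders, not any particular platform’s schema.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def detect_spray(events, window=timedelta(hours=1),
                 min_accounts=20, max_per_account=4):
    """Flag source IPs showing the spray shape: failures against many
    distinct accounts in a short window, with only a few attempts per
    account -- deliberately below per-account lockout thresholds."""
    by_ip = defaultdict(list)
    for ts, ip, user, success in events:
        if not success:
            by_ip[ip].append((ts, user))

    suspicious = []
    for ip, fails in by_ip.items():
        fails.sort()
        start = 0
        # slide a time window over this source's failures
        for end in range(len(fails)):
            while fails[end][0] - fails[start][0] > window:
                start += 1
            attempts_per_user = defaultdict(int)
            for _, user in fails[start:end + 1]:
                attempts_per_user[user] += 1
            if (len(attempts_per_user) >= min_accounts
                    and max(attempts_per_user.values()) <= max_per_account):
                suspicious.append(ip)
                break
    return suspicious
```

In production this grouping would run over a streaming window keyed by source (or ASN, since sprays often rotate IPs), but the core idea is the same: count distinct targeted accounts per source, not failures per account.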

Logins that defy geography

Some signals are intuitive. If a user logs in from Chicago at 9 AM and then from Tallinn thirty minutes later, something is clearly off. This “impossible travel” scenario is one of the most recognizable indicators of credential compromise.

In practice, detecting it reliably is more complicated than it sounds. A simple distance-over-time rule is not enough. Users travel, they connect through VPNs, and they may legitimately access systems from multiple regions in a single day. Without context, these systems tend to generate a lot of noise.

The more effective implementations build a baseline for each user. They consider where the user typically logs in from, how often they travel, and what kind of network patterns are normal. Alerts are then triggered only when behavior genuinely deviates from that baseline. Without that layer of understanding, teams often end up ignoring the alerts altogether. That can be worse than having no detection at all.
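A minimal version of the distance-over-time check, with a crude stand-in for a per-user baseline, might look like the sketch below. The login tuple format and the baseline of known locations are assumptions for illustration; a real implementation would learn the baseline from history rather than hard-code it.

```python
import math
from datetime import datetime, timedelta

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(logins, baseline, max_kmh=900.0):
    """Flag consecutive logins whose implied ground speed exceeds a
    plausible airliner speed -- unless BOTH points are in the user's
    known-locations baseline (e.g. home office plus a corporate VPN
    egress), which suppresses the most common false positive."""
    alerts = []
    last_seen = {}
    for ts, user, lat, lon in sorted(logins):
        if user in last_seen:
            pts, plat, plon = last_seen[user]
            hours = (ts - pts).total_seconds() / 3600
            dist = haversine_km(plat, plon, lat, lon)
            known = baseline.get(user, set())
            if (hours > 0 and dist / hours > max_kmh
                    and not ((plat, plon) in known and (lat, lon) in known)):
                alerts.append((user, ts))
        last_seen[user] = (ts, lat, lon)
    return alerts
```

The baseline check here is deliberately naive; the point is that the speed rule alone fires on every VPN hop, so some notion of “normal for this user” has to gate the alert.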

MFA fatigue

Multi-factor authentication is still one of the strongest defenses available, but attackers have found ways to work around it by targeting the user instead of the technology. The approach is simple. Once an attacker has a password, they repeatedly trigger MFA push notifications, sometimes dozens of times in quick succession. Eventually, the user may approve one just to stop the interruptions, or because they assume it is legitimate. In more advanced cases, the attacker follows up with a phone call, posing as IT support and guiding the user through the approval.

This is not theoretical. It played a role in the Uber breach, where a contractor was bombarded with requests for over an hour before approving one. Similar techniques were used in incidents involving MGM Resorts and Cisco.

From a detection standpoint, the pattern is fairly distinct. There is a burst of MFA prompts, multiple denials, and then a single approval. If it happens outside normal working hours, it becomes an even stronger indicator that something is wrong. Organizations that have moved to phishing-resistant methods, such as hardware security keys, have largely eliminated this vector. Many environments, however, still rely heavily on push-based MFA.
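The burst-of-denials-then-approval shape is straightforward to express over an event stream. The (timestamp, user, result) event format below is hypothetical; real MFA logs vary by vendor, but the grouping logic carries over.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def mfa_fatigue(events, window=timedelta(minutes=30), min_denials=5):
    """Flag an MFA approval preceded by a burst of denials in a short
    window -- the fatigue signature: many prompts, repeated denials,
    then a single approval. Events are (timestamp, user, result)
    tuples where result is 'approved' or 'denied'."""
    alerts = []
    denials = defaultdict(list)
    for ts, user, result in sorted(events):
        if result == "approved":
            recent = [t for t in denials[user] if ts - t <= window]
            if len(recent) >= min_denials:
                alerts.append((user, ts))
            denials[user].clear()
        else:
            denials[user].append(ts)
    return alerts
```

A production rule would also weight the alert by time of day, since an approval at 2 AM after eight denials is a much stronger signal than the same sequence at lunchtime.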

Privilege changes that do not make sense

Once inside, attackers almost always try to expand their access. That usually involves adding accounts to privileged groups, assigning new roles, or modifying policies in ways that make future activity easier. These actions are logged, but they are mixed in with a large volume of legitimate administrative activity. People change roles, new hires are onboarded, and temporary access is granted all the time. The signal is present, but it is buried in noise.

What makes malicious activity stand out is context. A help desk account assigning Global Administrator rights at 2 AM on a weekend is not typical. An account suddenly modifying credentials tied to a service principal, similar to what was observed in the Midnight Blizzard attack, should raise immediate questions. Effective detection relies on understanding what is normal for each account and flagging deviations. Treating every change as equally suspicious is not practical and quickly leads to alert fatigue.
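A context-aware check over a simplified change log might look like the sketch below. The event format, role names, and the baseline dict of roles each actor has previously granted are all stand-ins for learned per-account behavior, not a real directory API.

```python
from datetime import datetime

# Hypothetical set of roles worth scrutinizing closely.
SENSITIVE_ROLES = {"Global Administrator", "Privileged Role Administrator"}

def suspicious_grants(events, baseline, work_hours=range(8, 19)):
    """Flag sensitive role grants that are off-hours (outside business
    hours or on a weekend) or performed by an actor who has never
    granted that role before. Events are (timestamp, actor, action,
    role) tuples; baseline maps actor -> roles previously granted."""
    alerts = []
    for ts, actor, action, role in events:
        if action != "add_role" or role not in SENSITIVE_ROLES:
            continue
        off_hours = ts.hour not in work_hours or ts.weekday() >= 5
        unusual_actor = role not in baseline.get(actor, set())
        if off_hours or unusual_actor:
            alerts.append((actor, role, ts))
    return alerts
```

The point of the baseline is triage, not blocking: the help-desk account granting Global Administrator on a Sunday night surfaces immediately, while the IAM team’s routine weekday grants stay quiet.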

Compromised sessions that bypass MFA

One of the more concerning trends involves attackers bypassing MFA entirely by hijacking authenticated sessions. Instead of stealing credentials and logging in themselves, they intercept the session after the user has already completed authentication. This is often done using phishing proxies that sit between the user and the legitimate login page. The user enters credentials and completes MFA as usual, but the session token is captured and reused by the attacker.

From a logging perspective, this is difficult to detect because the activity looks legitimate. There are no failed logins and no brute-force attempts. The session behaves like a normal authenticated user.
The clues are subtle and behavioral. A session might suddenly appear from a different IP address. The device fingerprint might change mid-session without a clear explanation. In some cases, the same token may be used from two locations at the same time. These signals only become visible when monitoring continues after authentication, not just at the login event itself.
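One of those post-authentication signals, the same token appearing from two source IPs in close succession, can be sketched over a simplified session log. The (timestamp, token, source_ip) event format is assumed for illustration.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def token_reuse(events, overlap=timedelta(minutes=5)):
    """Flag session tokens observed from two different source IPs
    within a short interval -- consistent with a stolen token being
    replayed while the legitimate session is still active. Events
    are (timestamp, token, source_ip) tuples."""
    by_token = defaultdict(list)
    for ts, token, ip in sorted(events):
        by_token[token].append((ts, ip))

    alerts = []
    for token, seen in by_token.items():
        for (t1, ip1), (t2, ip2) in zip(seen, seen[1:]):
            if ip1 != ip2 and t2 - t1 <= overlap:
                alerts.append(token)
                break
    return alerts
```

In practice an IP change alone is noisy (mobile networks, VPN reconnects), so real detections combine it with device-fingerprint changes and geographic distance, but the key prerequisite is the same: telemetry that continues past the login event.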

The common thread

What makes these attacks challenging is that they do not rely on obviously malicious activity. Attackers are using valid credentials and interacting with systems in ways that, at first glance, appear normal. The indicators of compromise are found in the details: timing does not quite line up, locations do not make sense, privilege changes feel out of place, sessions behave inconsistently. These are not things traditional security tools, which focus on malware or signatures, are designed to catch.

Organizations that treat identity as a core detection surface, with the same level of attention given to endpoints and networks, are far more likely to catch these patterns early. Those that rely solely on preventative controls such as MFA and access policies often discover incidents much later, after damage has already been done.

The data is already there. The real question is whether your processes and tools are set up to actually see it and respond before it is too late.
