Breach and Charge
In the event you haven't been keeping up with the latest security news, another data breach has occurred. This time, it's the SANS Institute.
In an announcement, SANS stated they were the target of a phishing attack–although, somewhat contradictorily, the announcement itself states that it was not a targeted attack. As a result, a user's Office 365 account was compromised, and 500+ emails were forwarded to what is assumed to be an attacker-controlled external email address. At this time, SANS estimates that approximately 28,000 records of customer PII were compromised.
They stated that the exposed data did not include financial information or passwords, but consisted of the following information:
- Work Title
- First name & Last name
- Work phone
- Company name
- Country of residence
Consent Phishing Explained
The style of phishing attack used is a technique referred to as "consent phishing." A linked BleepingComputer article written by Sergiu Gatlan describes consent phishing:
Consent phishing is a variant of application-based attack where the targets are tricked into providing malicious Office 365 OAuth applications (web apps registered by the attackers with an OAuth 2.0 provider) access to their Office 365 accounts.
Once the victims grant the malicious apps permissions to their account data, the threat actors get their hands on access and refresh tokens that allow them to take control of the targets' Microsoft accounts and make API calls on their behalf through the attacker-controlled app.
In a nutshell, this type of phishing attack asks a user to install an OAuth-based application. The application requests a set of permissions to perform various account activities, and the user must consent to those permissions for the malicious application to work.
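Under the hood, that consent request is just an OAuth 2.0 authorization URL, and the permissions being asked for live in its `scope` parameter. Here's a minimal sketch of pulling the requested scopes out of such a URL–the client ID, redirect URI, and domain below are invented for illustration, though the scope names are real Microsoft Graph permission names:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical consent URL of the kind a consent-phishing email might link to.
# The client_id and redirect_uri are made up; the scopes are illustrative
# Microsoft Graph permission names.
consent_url = (
    "https://login.microsoftonline.com/common/oauth2/v2.0/authorize"
    "?client_id=00000000-0000-0000-0000-000000000000"
    "&response_type=code"
    "&redirect_uri=https%3A%2F%2Fattacker.example%2Fcallback"
    "&scope=offline_access%20Mail.Read%20MailboxSettings.ReadWrite"
)

def requested_scopes(url: str) -> list[str]:
    """Return the OAuth scopes a consent URL asks the user to approve."""
    query = parse_qs(urlparse(url).query)
    return query.get("scope", [""])[0].split()

print(requested_scopes(consent_url))
# offline_access keeps refresh tokens flowing to the attacker's app, and
# MailboxSettings.ReadWrite is the kind of scope that permits changing
# mailbox settings such as forwarding.
```

Eyeballing the `scope` parameter of a suspicious consent link is a quick way to see exactly what an unknown app is asking for before anyone clicks "Accept."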
If it helps, think of this consent/approval scheme the same way as when you install a fresh application from a smartphone's app store. Sure, you installed the application–but when you go to perform specific functions, the application doesn't yet have access to, say, read your contacts or access the file system. The user has to consent to the application performing those actions before it can actually perform them.
This is an example of an OAuth application requesting consent to perform a variety of operations against a user account. Image supplied from BleepingComputer.
While the screen capture above probably isn't the exact OAuth app used in the SANS phishing attack, it's likely similar enough.
Take note of the permission Read and write to your mailbox settings. In the breach announcement, the SANS Institute stated that the attackers managed to gain access to some 500+ emails. The attackers achieved this access by configuring a mail forwarding rule, an action that would not have been possible without granting the app permission to write changes to the user's mailbox settings.
According to a post-investigation webinar that has since been posted to YouTube, the phishing email was received on July 24th, and the forwarding rule was detected on August 6th (fast-forward to about 3:55 for a summary of the webinar where I got this information). Additionally, the recorded webinar confirms the mail forwarding rule was active that same day, implying the user clicked the link in the email and consented to having the app installed (jump to the 9:00 mark). With the initial compromise on July 24th and the initial detection on August 6th, that's roughly two weeks that this forwarding rule was left undetected.
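For the curious, the dwell-time arithmetic works out like this (a trivial sketch; the year, 2020, is inferred from the disclosure date):

```python
from datetime import date

# Timeline from the post-investigation webinar; year assumed to be 2020.
compromise = date(2020, 7, 24)  # phishing email received, rule created
detection = date(2020, 8, 6)    # forwarding rule detected

dwell_days = (detection - compromise).days
print(dwell_days)  # 13 days, i.e. just under two weeks
```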
For those who are looking for detailed indicators from this attack, SANS has released a follow-up blog post with that information.
The use of mail forwarding rules as a data exfiltration method is nothing new; in fact, it's a common enough phenomenon that Microsoft has recommended that both Exchange and O365 admins check for it regularly, and has supplied a lot of guidance on how to do so.
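The core of that kind of check is simple: walk each mailbox's inbox rules and flag any that forward or redirect mail outside the organization. Here's a hedged sketch of that logic–the rule objects mimic the shape of Microsoft Graph `messageRule` resources, but the rule names, addresses, and domains are all invented sample data:

```python
# Assumed internal domain for this example; in practice this would be
# your organization's accepted domains.
INTERNAL_DOMAINS = {"example.org"}

def external_forwards(rules):
    """Return (rule name, address) pairs that forward outside the org."""
    flagged = []
    for rule in rules:
        actions = rule.get("actions", {})
        recipients = actions.get("forwardTo", []) + actions.get("redirectTo", [])
        for recipient in recipients:
            address = recipient["emailAddress"]["address"]
            domain = address.rsplit("@", 1)[-1].lower()
            if domain not in INTERNAL_DOMAINS:
                flagged.append((rule["displayName"], address))
    return flagged

# Mock inbox rules shaped like Microsoft Graph messageRule objects.
sample_rules = [
    {"displayName": "Move newsletters",
     "actions": {"moveToFolder": "Newsletters"}},
    {"displayName": ".",  # attackers often pick unremarkable rule names
     "actions": {"forwardTo": [
         {"emailAddress": {"address": "drop@attacker.example"}}]}},
]

print(external_forwards(sample_rules))
```

In a real deployment, the rules would come from the Graph API or Exchange cmdlets rather than a hardcoded list, but the flagging logic is the same.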
Also, I recently discovered an interesting blog post by Matthew Green that details how attackers can hide mail forwarding rules–and how these so-called hidden rules can be detected as well. Finally, this blog post by LogRhythm demonstrates some of the methods that can be used to apply mail forwarding rules to O365 accounts, and what those logs look like when the activity is detected.
That covers how to find suspicious mail forwarding rules, but what about consent phishing and potentially malicious apps that users have enabled? The BleepingComputer blog post that I mentioned above actually provides a number of useful links on how to address this threat. See the section labeled "Consent Phishing Defense Measures" for more details.
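A complementary check is to audit which scopes users have already consented to and flag the high-risk ones. The sketch below assumes grant records shaped like Microsoft Graph `oAuth2PermissionGrant` objects, where `scope` is a space-separated string; the app names and the particular "risky" scope set are illustrative choices, not an official list:

```python
# Illustrative set of scopes worth reviewing; tune to your own risk model.
RISKY_SCOPES = {"Mail.Read", "Mail.ReadWrite",
                "MailboxSettings.ReadWrite", "offline_access"}

def risky_grants(grants):
    """Return (app name, sorted risky scopes) for suspicious consent grants."""
    results = []
    for grant in grants:
        granted = set(grant["scope"].split())
        risky = sorted(granted & RISKY_SCOPES)
        if risky:
            results.append((grant["appDisplayName"], risky))
    return results

# Mock consent grants shaped like Graph oAuth2PermissionGrant objects.
grants = [
    {"appDisplayName": "Team Wiki", "scope": "User.Read"},
    {"appDisplayName": "Mail Helper",
     "scope": "User.Read Mail.Read MailboxSettings.ReadWrite offline_access"},
]

print(risky_grants(grants))
```

Anything this kind of sweep turns up isn't automatically malicious, but an app holding mailbox-settings write access plus `offline_access` deserves a closer look.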
The bottom line is that if you're not already looking at mail forwarding rules or consent phishing as potential threats, now is as good a time as any to start.
The Average According to Whom, Exactly?
The gap between the initial compromise on July 24th and the initial detection on August 6th at the SANS Institute is considerably better than the industry average of 197 days. However, another study by the International Association of Privacy Professionals claims that the time between initial occurrence and discovery is a little over 13 days. Meanwhile, the Verizon DBIR suggests that both results are valid (check out page 34 of the 2020 report, specifically): a lot of organizations are getting much better at turnaround time, while others are really dragging their feet on faster detection.
No matter how you slice it, roughly two weeks is still plenty of time to do damage and acquire a significant amount of data. To the credit of the SANS Institute, however, their transparency while handling this incident has been amazing.
If it happens to everybody, are you preparing for it?
In our field of expertise, we have been exposed to the concept of "assume breach." That is, it isn't a matter of if an organization will be breached, but when. Some call this a defeatist mindset–but in reality, it's a realistic one.
The only difference between someone telling you to plan for an attack and someone telling you that you are completely secure is how convincingly the second person can lie to you. The people telling you to assume a breach aren't accepting defeat; they're telling you to have a plan to address the problem when it occurs.
Having the mitigations I listed above (or putting them into place) is a great start, but consider the following as well:
Do you have an incident response plan?
If so, that's great. If you don't, the people who just got breached wrote a guide on the lifecycle of an incident, and it's as good a starting point as any for developing a plan to meet your organization's needs.
Do you have a plan for interacting with the media (e.g. disclosing the breach)?
You really don't want Brian Krebs to be the one who reports your company was breached because you didn't coordinate on the disclosure or didn’t listen when a third party informed you about exposed data.
Do you have good situational awareness?
Many breaches are the result of lax security on assets that nobody was aware existed. I've listened eagerly to many conversations between penetration testers and incident responders in which initial access was gained through an internet-exposed asset that did not conform to the organization's security policy–whether because it was derelict and forgotten or otherwise.
Software and hardware inventory and asset management is an extremely tall order, and in most cases it's much easier said than done–but that doesn't stop it from being one of the most basic building blocks for keeping your users, and the data you are responsible for, secure.
Be aware of Shadow IT.
While I'm beating the drum on software and hardware asset management, Shadow IT is a significant problem in large organizations. Usually it's the result of users not being aware of IT policy, or of policies being too cumbersome for users to get the assets they need. The result is that IT/Security is unaware that a bunch of unmanaged cloud deployments and servers are housing sensitive data.
Much like asset management, detecting rogue devices on enterprise networks and/or cloud deployments that the IT and security departments don't know about is quite difficult. Work with your user community, make sure they're aware of procurement procedures, explain why the procedure exists (e.g. to keep user and customer data safe), and do your best to keep the process streamlined so that procurement is as pain-free as possible.
Are you cooperating with good samaritans?
If your organization has a presence on social media or a public website, make sure that you provide a contact address (e.g. email@example.com) so that good samaritans or third parties wishing to inform you of vulnerabilities, issues, or possible data exposures have an avenue to do so.
User education continues to be extremely important.
Long, long ago on this very blog, I posted a three-part primer on ransomware defense. Even though the threat that SANS faced wasn't ransomware, many of the defenses and hardening recommendations from that post are still relevant. Specifically, user education and good communication with the user community is extremely important.
Most of that blog post features recommendations towards building a positive user community and better user education. Make friends with your users–they are your eyes and ears. You want them to have the courage to communicate with your IT and security teams when they suspect there is a problem. In this day and age when everyone is more or less required to work from home, having their support is extremely valuable.
I hope this blog post gives you a good idea of what happened in the SANS Institute breach, as well as some guidance on protecting your environment from consent phishing and malicious forwarding rules. Remember to have a plan and be prepared for the worst, but always hope for the best.