Welcome to Hurricane Labs Foundry! I’m Tony Robinson, one of the senior security operations analysts at Hurricane Labs. The goal of this blog post is to inform readers like you(™) about new and innovative Splunk and other technologies around the web, hot information security topics, and various in-house projects that our Splunk and SOC analysts have been working on.
Just like last time, this blog post is in digest format: I’ll give you a brief description of the topic at hand, along with links to supporting material as necessary. So, without further ado, let’s get started!
With thanks to: Tom Kopchak for his contributions.
An article published recently in Vice/Motherboard (Note: Broken links are not being removed as we wish to uphold the intent of this post.) highlights a security nightmare in one of the most unlikely of places: farm tractors. Apparently, tractor hacking is becoming more and more common because manufacturers, such as John Deere, have implemented firmware restrictions that allow parts and service to be installed or performed only by authorized dealers with factory-sanctioned tools. In some cases, nearly any repair or replacement part requires a download from the manufacturer before it will operate.
If this makes you think of restrictions placed on other consumer technology, you’re absolutely correct. So, the next time you look at a tractor, just think of it as a 30,000 pound printer... A printer that’s entirely responsible for the livelihood and viability of a farmer’s business.
This puts farmers in a tough spot. If a tractor breaks down, it’s often not practical for them to take it to an authorized dealer for service. It’s also not uncommon for a repair to be otherwise complete, only to be blocked by the software lockout. To circumvent this, technicians are using cracked John Deere software purchased on the black market in Ukraine.
This software allows for tractor repair technicians to work on the equipment without manufacturer approval. Of course, all of this is in violation of the EULA, which is accepted by the farmer at the time of the equipment purchase (yes, tractors have EULAs now too).
We already have enough problems with malicious firmware for devices designed without security in mind (I’m looking at you, Internet of Things). We’ve seen Internet-connected cars being remotely controlled by unauthorized users, among other things. The last thing we need is to be encouraging people to install compromised firmware simply to operate a product for the purpose for which it was intended.
To Heavy Forward, or Not To Heavy Forward, That is the Question
For Splunk admins, there has been a lot of discussion recently about whether Heavy Forwarders should be used in between Universal Forwarders and Splunk Indexers. For on-site Splunk deployments, we at Hurricane Labs generally believe that having Universal Forwarders sending data directly to an Indexer, or Index Cluster, is the best option for optimal performance (you might be interested in this Splunk “.conf” conference talk for our justification). For Splunk Cloud deployments, however, your use of Heavy Forwarders in between may vary.
For most Splunk Cloud deployments, we generally recommend having the Universal Forwarders send data through a locally deployed Heavy Forwarder before it goes on to Splunk Cloud. This allows you to rapidly deploy new Splunk applications to parse logs or provide additional analytics for your data. Deploying applications on Splunk Cloud systems, by contrast, requires submitting your desired applications through an approval process. This means it tends to take longer to get the applications you want deployed, and in some cases can result in rejected application requests for any number of reasons.
On the other hand, we have found that there are some edge cases where it may make more sense for your Universal Forwarders to transmit directly to Splunk Cloud. We have observed a few cases where users have massive volumes of data they are attempting to transmit to the Splunk Cloud through a Heavy Forwarder, but not enough bandwidth to consistently forward it in a reasonable amount of time. This results in large queues of data waiting to be transmitted. In these instances, it may be preferable to let individual Universal Forwarders handle transmitting logs to Splunk Cloud, which will prevent high volume data sources from saturating a single queue used for all of your collected logs.
In the end, the decision to deploy a Heavy Forwarder as a part of your Splunk Cloud deployment may depend on how much application flexibility you desire, or how much data you are attempting to collect.
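For readers weighing the direct-to-cloud option, here is a rough sketch of what a Universal Forwarder’s outputs.conf might look like when sending straight to Splunk Cloud. The group name, hostnames, and port are placeholders for illustration only; in practice, Splunk Cloud supplies a universal forwarder credentials package that contains the real endpoints and TLS settings.

```ini
# outputs.conf on a Universal Forwarder sending directly to Splunk Cloud.
# Hostnames and the "splunkcloud" group name below are placeholder values;
# use the credentials app provided by your Splunk Cloud instance instead.

[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
# Multiple indexer endpoints allow the forwarder to load-balance on its own,
# rather than funneling everything through a single Heavy Forwarder queue.
server = inputs1.example.splunkcloud.com:9997, inputs2.example.splunkcloud.com:9997
```

The key point is the server list: each Universal Forwarder maintains its own output queue and load-balances across the listed indexers, so one high-volume source can no longer saturate a shared intermediate queue.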
IIS 6.0 Buffer Overflow (CVE-2017-7269)
A buffer overflow in the IIS 6.0 WebDAV service has been observed in the wild. IIS 6.0 ships with Windows Server 2003, which was marked as End of Life in July of 2015. The attack works via crafted PROPFIND requests to a WebDAV-enabled IIS 6.0 deployment, specifically via an extremely long “If:” HTTP header sent in the PROPFIND request.
If you are interested in a proof of concept, here is a link to a GitHub repository. Unfortunately, because Microsoft ended support for Server 2003 nearly two years ago now, it is not likely that we will see an official patch for this issue. However, I stumbled across this blog post that describes how to create a patch on your own. In addition to the lack of an official patch, I have not seen any official Snort or Suricata rules that cover detection for this issue, but I did observe this entry on the Snort community mailing list proposing a rule that can detect attempts to exploit it.
Please be aware that this software is NOT provided by Hurricane Labs, and is NOT offered with any sort of a guarantee implied or otherwise. Our official stance on this issue is to update to a supported version of Windows Server as soon as possible.
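To make the detection idea concrete, here is a minimal sketch of the same logic the proposed Snort rule relies on: flag PROPFIND requests whose “If:” header is abnormally long. The 1024-byte threshold and the raw-request parsing are assumptions for illustration, not an official or validated detection rule.

```python
# Illustrative check for CVE-2017-7269-style requests: a PROPFIND with an
# oversized "If:" header. Threshold is an assumption; legitimate If: headers
# (WebDAV lock tokens) are normally far shorter than this.

IF_HEADER_THRESHOLD = 1024  # bytes

def is_suspicious_propfind(raw_request: bytes) -> bool:
    """Return True if raw_request is a PROPFIND with an oversized If: header."""
    # Split headers off from any body.
    head, _, _ = raw_request.partition(b"\r\n\r\n")
    lines = head.split(b"\r\n")
    if not lines:
        return False
    # Request line looks like: b"PROPFIND / HTTP/1.1"
    method = lines[0].split(b" ", 1)[0].upper()
    if method != b"PROPFIND":
        return False
    for line in lines[1:]:
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"if":
            return len(value.strip()) > IF_HEADER_THRESHOLD
    return False
```

A real IDS rule would of course inspect traffic on the wire rather than parsed requests, but the trigger condition is the same: PROPFIND method plus an “If:” header far beyond any legitimate length.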
Reverse Engineering Malware 101
Have you ever wanted to get elbows deep into malware analysis, but didn’t know where to start? Maybe you have a lab environment all kitted out, but you have no idea where to proceed?
Amanda “Malware Unicorn” Rousseau, a research engineer at Endgame, Inc., an extremely knowledgeable peer, and a great friend of mine, recently released a class titled “Reverse Engineering Malware 101”. As the name implies, this is a beginner’s course covering various facets of malware analysis and how to get started. The amount and quality of the material released cannot be overstated. What’s more, the class is entirely free. Go take a look and get started! You won’t be disappointed.
Revoking Trust in Symantec-issued SSL certificates in Google Chrome
According to many sources, Google’s Chrome web browser is one of the most popular browsers in the world today. As such, it tends to lead the charge when it comes to changes. Google recently announced plans to deprecate and ultimately remove trust in Symantec-issued SSL certificates.
According to a recent blog post on Ars Technica, Google claims that Symantec has issued over 30,000 improper SSL certificates, violating the deep trust placed in Symantec’s Certificate Authorities.
Google first plans to revoke Extended Validation status for all Symantec SSL certificates, then, over time, remove trust in the Symantec Certificate Authorities entirely.
Multiple Cylance Endpoint Bypasses
Black Hills Information Security, a well-respected group of security researchers and penetration testers, has posted a series of guides on how to bypass CylancePROTECT endpoint protection. Penetration testers usually have their hands full bypassing antivirus and other endpoint protection mechanisms on enterprise networks, but some of these bypasses are incredibly simple.
Techniques utilized in the guides range from custom PowerShell payloads and backdoors to simply using Netcat/ncat, or renaming powershell.exe on target systems, to gain execution.
Until Next Time
Keep an eye out for Volume 4 and follow us on Twitter @hurricanelabs for updates!