The Hurricane Labs Foundry: Volume 5

Welcome to another edition of The Hurricane Labs Foundry! I’m Tony Robinson, one of the security operations analysts at Hurricane Labs. The goal of this series is to inform readers about the latest security news and innovation to keep you aware of the new threats and technologies. Additionally, this series discusses various aspects of Splunk deployment, as well as observations and projects from the Hurricane Labs SOC.

The stories presented here are mostly short digests with links to the source material, so readers can get the full scoop and additional context as needed.


Over the past several months I’ve written a book/PDF, which I had previously been distributing for free. On June 5th, I self-published the document via Amazon’s CreateSpace platform.

My book, sized at 8.5" x 11" and weighing in at 600 pages, teaches you how to create a baseline virtual lab environment on any of five commonly used hypervisors: VMware Workstation, VMware Fusion, VMware ESXi, Microsoft Client Hyper-V (note that most of the instructions and documentation also apply to Hyper-V Server), and Oracle VirtualBox. In addition to teaching readers how to build the lab itself, the book shows how to set up supporting infrastructure such as an IPS and a small Splunk instance, and how to configure remote access to the lab VMs for ease of access and maintenance.

If you or your SOC engineers are interested in purchasing a copy for developing your own practice lab environment, the book can be found on Amazon here.



Splunk’s deployment server is a necessary component of a large, distributed Splunk environment. For most use cases, the standard method of configuring server classes in the web interface is an easy way for administrators to lay out the distribution of apps to all of the Universal Forwarders in their environment. For more advanced use cases, Splunk supports reading the whitelist and blacklist for each server class from a CSV file. Combined with Splunk’s native support for CSV files as lookup tables, this lets a Splunk administrator begin building dynamic server classes based on the results of a search.

To facilitate a migration from a standard deployment server to a setup that uses lookup tables, we wrote a small Python script that reads in an existing serverclass.conf and generates a Splunk app containing a modified serverclass.conf, along with lookup tables built from the original whitelist and blacklist entries. It has been useful for consolidating many deployment servers into one centralized configuration without having to do the entire migration by hand.
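The core of the conversion can be sketched roughly as follows. This is a hypothetical re-implementation of the idea, not the actual tool: it parses the `whitelist.N`/`blacklist.N` keys out of a serverclass.conf stanza and emits each list as a single-column CSV suitable for use as a lookup table. The sample stanza and class name are made up for illustration.

```python
# Sketch (illustrative, not the actual Hurricane Labs tool): extract
# whitelist/blacklist entries from serverclass.conf and render them as
# CSV lookup tables.
import configparser
import csv
import io

# A made-up example stanza following serverclass.conf conventions.
SERVERCLASS_CONF = """
[serverClass:linux_hosts]
whitelist.0 = web-*
whitelist.1 = db-*
blacklist.0 = db-legacy
"""

def extract_lists(conf_text):
    """Return {serverclass_name: {'whitelist': [...], 'blacklist': [...]}}."""
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    result = {}
    for section in parser.sections():
        if not section.startswith("serverClass:"):
            continue
        name = section.split(":", 1)[1]
        lists = {"whitelist": [], "blacklist": []}
        for key, value in parser.items(section):
            kind = key.split(".")[0]  # "whitelist.0" -> "whitelist"
            if kind in lists:
                lists[kind].append(value)
        result[name] = lists
    return result

def write_lookup_csv(entries, column_name):
    """Serialize one list as a single-column CSV lookup table."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([column_name])
    for pattern in entries:
        writer.writerow([pattern])
    return buf.getvalue()

classes = extract_lists(SERVERCLASS_CONF)
print(classes["linux_hosts"]["whitelist"])  # ['web-*', 'db-*']
```

The real script additionally rewrites serverclass.conf to point each server class at its generated lookup file, which is the part that makes the classes dynamic.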

If you think this might be useful, or just want to take a look, it is available for free on GitHub. The usual disclaimers apply: Hurricane Labs will not be able to support this tool directly. If you run into problems running it, please open an issue on the GitHub repository. Thanks to Brian T. Glenn for developing this tool and open-sourcing it.


On June 28th, a new strain of self-propagating ransomware was discovered in the wild. This ransomware appears to be modeled after the Petya ransomware, in that it encrypts the victim’s entire disk and, upon reboot, displays a ransom note with instructions on how to decrypt the drive.

The Ukrainian tax software company M.E. Doc was the initial victim; its auto-update infrastructure was compromised and used to seed the initial infections. The ransomware then spread further using the “EternalBlue” exploit released by the Shadow Brokers and patched by Microsoft. Where EternalBlue fails, it falls back to spreading via PsExec and Windows WMIC commands, using credentials harvested with a Mimikatz-style tool. It is therefore extremely important not to allow this malware to gain initial access, given its ability to move laterally through harvested credentials.

At this time, aside from the exploit and/or SMB traffic generated by the ransomware’s lateral movement, there are no other network artifacts to alert on; the ransomware has no command-and-control infrastructure. FireEye has, however, produced a YARA signature that can be used to identify the malware.

As always, an ounce of prevention is worth a pound of cure. The best recommendation we have for fighting this ransomware is to prevent initial infection. M.E. Doc as a company is more or less offline, as Ukrainian authorities seized its server hardware, preventing further spread of the infection from the initial source.

However, as stated above, the existing malware self-propagates, using pre-packaged exploits, credential-harvesting software, and PsExec to continue spreading so long as there are viable targets to attack. If at all possible, ensure that your hosts have been patched against MS17-010, which was released in March of this year.
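As a quick sanity check on patch status, a script along these lines can compare a host’s installed hotfixes (e.g. parsed from `wmic qfe get HotFixID` output on Windows) against the KB numbers associated with MS17-010. The KB list below is an assumption for illustration; confirm the authoritative per-OS list in the MS17-010 bulletin before relying on it.

```python
# Sketch: check an installed-hotfix list against KB numbers commonly
# associated with the MS17-010 security update. These KB IDs are
# illustrative -- verify them against the MS17-010 bulletin for the
# specific OS versions in your environment.
MS17_010_KBS = {
    "KB4012212", "KB4012215",  # assumed Windows 7 / Server 2008 R2
    "KB4012213", "KB4012216",  # assumed Windows 8.1 / Server 2012 R2
    "KB4012214", "KB4012217",  # assumed Server 2012
}

def is_patched(installed_kbs):
    """Return True if any MS17-010-related KB appears in the list."""
    return bool(MS17_010_KBS & {kb.strip().upper() for kb in installed_kbs})

# Example: a host reporting one relevant hotfix and one unrelated one.
print(is_patched(["KB4012212", "KB3000061"]))  # True
print(is_patched(["KB3000061"]))               # False
```

Note that later cumulative updates supersede these standalone KBs, so an empty intersection does not prove a host is vulnerable; treat a miss as a prompt to investigate, not a verdict.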

As with most ransomware attacks, viable backups from offline/segmented backup systems turn a major disaster into a moderately annoying day of restoring from backups. Other recommendations fall in the ‘easier said than done’ category, such as network segmentation, least privilege, and prevention of credential re-use: things security pros have preached for years, but that are extremely hard to implement properly.

As with most cases dealing in ransomware, don’t pay the ransom if you can avoid it. Researchers uncovered a bug that may prevent the hackers responsible for the attack from restoring files larger than 1MB, so even paying may not get those files back.

Since this event was kicked off by compromised automatic update infrastructure, several security professionals are debating the effectiveness and security of automatic updates, with some telling individuals and enterprises to disable them entirely.

From a personal perspective and on your home networks, you should never disable automatic updates, especially from major providers like Adobe and Microsooft. Keeping your systems up to date is one of the best and most basic defenses you can implement. In enterprise environments, however, you should be implementing patch management -- pilot deployments to catch possible issues before they become major company-wide problems, and change control to ensure that patches are applied company-wide in a consistent manner -- scheduled, approved, and with a clear plan for backing out if there are problems.


I found this article via reddit’s “/r/netsec” board. There are a wide variety of proxies and secure web gateways available, each backed by a different security vendor. In most cases, you can use these gateways to enforce a single point of entry and exit for the enterprise network. Many of these web gateways also allow you to specify which “categories” of websites users on your network are and are not allowed to view. For example, you might allow the “news” category so users can visit CNN, ABC, BBC, or other news websites, while disabling the “social media” category to prevent users from wasting bandwidth on sites like Twitter and Facebook.

In most cases, these secure gateways allow you to configure how to handle uncategorized websites -- you can choose to have them blocked by default, or allow users to access them by default. In both real-world malware operations and the adversary simulations that penetration testers perform, attackers constantly need to stand up new sites and domains for command and control, phishing, and so on, in order to gain and maintain initial access. If the target organization’s web gateway/proxy is configured to deny access to uncategorized websites, gaining and maintaining that access becomes much more difficult.

Most vendors that sell web gateways have a submission system for getting uncategorized sites categorized, or incorrectly categorized sites re-categorized as necessary. In a recent experiment, security researchers found that many security vendors do not validate these submissions, which implies the categorization system isn’t as high-fidelity as originally thought. Attackers can simply register their phishing and/or command-and-control domains and have them categorized prior to attacking target organizations.

The best way to combat this issue is to reach out to the vendor that provides your secure web gateway solution and confirm that categorization requests are checked for validity on a regular basis -- and keep a watchful eye out yourself: recently registered websites and domains are fairly suspicious on their own, and a commonplace occurrence in most advanced malware campaigns and/or pen tests.
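Domain age is easy to check programmatically once you have a registration date in hand. A minimal sketch, assuming the creation date has already been parsed out of a WHOIS record, and using an arbitrary 30-day threshold:

```python
# Sketch: flag domains registered within the last N days. The creation
# date is supplied by the caller (e.g. parsed from a WHOIS lookup); the
# 30-day threshold is an arbitrary example, not a recommendation.
from datetime import datetime, timedelta

def is_recently_registered(created, now=None, max_age_days=30):
    """Return True if the domain was registered within max_age_days of now."""
    now = now or datetime.utcnow()
    return (now - created) <= timedelta(days=max_age_days)

# Hypothetical checks, pinned to a fixed "now" for reproducibility.
checked = datetime(2017, 7, 1)
print(is_recently_registered(datetime(2017, 6, 20), now=checked))  # True
print(is_recently_registered(datetime(2016, 1, 1), now=checked))   # False
```

A check like this slots naturally into proxy-log review: enrich outbound destinations with registration dates and prioritize the youngest domains for analyst attention.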


Keep an eye out for Volume 6 and follow us on Twitter @hurricanelabs for updates!