How to Interpret Any Alarms
Sandfly is written very carefully to avoid false alarms. Many of the threats Sandfly detects are so specific to a compromise that they simply do not happen by accident. Other indicators are very suspicious on their own and should be investigated when alerted. In short, if Sandfly is pointing out a problem, it is a good idea to pay attention to it.
Was it an Accident?
When analyzing Sandfly results, it is important to look at the data from this mindset:
Could this have happened by accident?
Sandfly is written to look for issues that are not likely to be accidents. For instance, it is possible to have a false alarm on a Linux process that just happens to run a script out of the /tmp directory. But if you see that a process is running out of /tmp and has open network ports, then you should ask some questions.
Many signs of compromise are obvious on their face and Sandfly is really good at bringing these issues to your attention.
Computers Are Not Spontaneous and Do Not Fix Themselves
A good rule to know about security incidents is this: Computers are not spontaneous.
Computer processes do not just start crashing when they have never had a problem before. Log files do not delete themselves. Network ports do not just start opening and connecting outbound.
If a problem is found by Sandfly and then just vanishes it could be a false alarm. But then again, it may not be. Sandfly may have caught a glimpse of something before the attacker was able to conceal what they just did. Suspicious activity that fixes itself is in itself suspicious.
If you have been running Sandfly for a while and suddenly it reports a problem on a system that has never before had any alerts from Sandfly, it is time to think the worst.
Use Sandfly's Forensic Viewer to Full Effect
When Sandfly detects a problem, know that the sandfly that saw the issue already collected the most critical evidence associated with the event. We do not wait around and leave the evidence to be removed by the attacker.
Sandflies that focus on file threats collect file attribute data. Sandflies that look for suspicious processes collect many important pieces of the rogue process data.
The first thing you should do when you see an alert is simply look at what Sandfly is showing you. For instance, pay attention to all of the following:
- Sandfly's explanation of what it believes is going on.
- Creation times for files, directories and processes.
- User and group names associated with the alert.
- File attributes and hashes.
- Directory attributes, locations and names.
- Process environment variables attached to the running instance.
- Network ports and any network addresses.
Walk-through of a Sandfly-Detected Threat
We will walk through some things you can see with Sandfly when it finds suspicious activity. In the example below, we see a group of alerts generated on a single host. These events all happened around the same time and were part of the same attack chain.
Sandfly will often identify multiple threats associated with the same attack to give you a very deep understanding of what is going on.
Multiple alarms on a host
Here we see multiple threats on this host:
- Hidden directories under a system binary area.
- Suspicious directory names under a system binary area.
- A system binary was renamed to something else to conceal what it is.
- An executable file is hidden under a system binary directory.
- A system shell has been renamed.
- A user has tampered with their history file to conceal activity.
Let us start with the sandfly that found a user history file linked to /dev/null (sandfly_user_history_dev_null). Below is the data shown for this threat that we will draw on for the next few steps:
Sandfly forensic data viewer for a tampered history file
What is Sandfly Telling You?
Sandfly has a plain-English explanation of what it found and why it thinks it is a problem. Usually this will put you onto the issue very quickly. For example you may see something like this below:
A user replaced their history file with a link to /dev/null for anti-forensics
The above makes it very clear what is going on: the user ubuntu linked their history file to /dev/null to conceal what they are doing. But before you run off to investigate, check out the other information in this alert that can help isolate the problem further.
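You can reproduce this particular check by hand on any Linux host. The sketch below is illustrative, not Sandfly's implementation: it creates a throwaway example of the same anti-forensics trick, then flags any history file whose symlink target is /dev/null.

```shell
# Simulate the trick: a "history file" that is really a symlink to /dev/null.
workdir=$(mktemp -d)
ln -s /dev/null "$workdir/.bash_history"

# Flag any history file that is a symlink pointing at /dev/null.
if [ -L "$workdir/.bash_history" ] && \
   [ "$(readlink "$workdir/.bash_history")" = "/dev/null" ]; then
    echo "SUSPICIOUS: history file is linked to /dev/null"
fi

rm -rf "$workdir"
```

On a real host you would run the same test against each user's $HOME/.bash_history (or whatever history file their shell uses).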
Often attackers compromise a host and try to cover their tracks by altering system log files or changing login audit entries. But many times they do not alter the timestamps of the files they modify or create. The reality is that there are so many places an attacker needs to remember to clean up that they almost always leave behind traces that tell you exactly when the compromise happened.
In the case of our attacker that erased their history file and replaced it with a link to /dev/null, we can see the exact moment it happened in UTC time. Sandfly shows you the file link creation times, plus the creation times of the file linked to.
Intruder timestamps on history file they forgot to change
In the above we are interested in when the link was created. Even if the intruder deleted their log file entries when they logged in, you can often still see the approximate time of the compromise from the timestamps they forgot to change. In this example, the file_date_creation_link timestamp shows when the link was made.
The link creation time (2018-05-21 23:05:11Z) would be a great place to start to look for suspicious activity on this host.
Other files, directories, and processes have similar timestamps. You can see when new directories showed up and when new processes started. Sandfly will always try to obtain timestamps where it can to give you a good time window to start investigating the host.
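You can inspect the same timestamp fields by hand with GNU `stat` (the field names like file_date_creation_link are Sandfly's, not stat's, and the temp files below are just stand-ins for real paths):

```shell
# Show the three timestamps Linux keeps for a file (GNU coreutils stat).
f=$(mktemp)
stat -c 'mtime (content modified): %y' "$f"
stat -c 'ctime (metadata changed): %z' "$f"
stat -c 'atime (last accessed):    %x' "$f"

# For a symlink, stat reports on the link itself by default, so the
# ctime below is effectively when the link was created:
ln -s /dev/null "$f.link"
stat -c 'link ctime: %z' "$f.link"
rm -f "$f" "$f.link"
```

Note that ctime is metadata-change time, not true creation time; on filesystems that record a birth time, newer versions of GNU stat expose it via %w.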
User and Group Names
User and group names provide important information about who or what was responsible for suspicious activity. User and group names can reveal compromised accounts and processes, as well as an audit trail of activity to help you identify and isolate threats.
User root owns the .bash_history file
In our example of the altered .bash_history file we see something very strange. The user home directory (file_path_root) containing the history file belongs to "ubuntu," a standard user. Yet the file owner (file_uid_name) is "root." This tells us that the user ubuntu likely obtained root privileges, then altered the history file under the new owner permissions.
You can also see that the group owner (file_gid_name) is the "root" group. This again confirms that the ubuntu user obtained root access and then modified the history file to cover their tracks.
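The same ownership mismatch is easy to check manually. This sketch compares a history file's owner against the owner of the home directory it sits in (a temp directory stands in for a real home like /home/ubuntu):

```shell
# A root-owned history file inside an ordinary user's home is suspicious.
home=$(mktemp -d)              # stands in for /home/ubuntu
touch "$home/.bash_history"

file_owner=$(stat -c %U "$home/.bash_history")
home_owner=$(stat -c %U "$home")

if [ "$file_owner" != "$home_owner" ]; then
    echo "SUSPICIOUS: history owned by $file_owner, expected $home_owner"
else
    echo "owner matches: $file_owner"
fi
rm -rf "$home"
```

Group ownership can be compared the same way with `stat -c %G`.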
File Attributes and Hashes
When possible, Sandfly will always try to obtain full file attributes and cryptographic hashes of any files involved with suspicious activity. This includes things like file creation times, owners, permissions, and other data.
But what about other file attributes? Sandfly helps there as well. In our example Sandfly found a suspicious file that looks like a system binary that has been renamed and hidden. Let us look at that next.
A system shell was found in /run/shm
Here the sandfly is telling us that someone took the system shell /bin/dash (which /bin/sh links to) and moved it to the hidden path "/bin/.../.b".
Even worse, Sandfly shows that this shell is SUID to the file owner which is root (file_is_suid). It is also set to SGID of the group owner which is also root (file_is_sgid).
This means anyone running it immediately has root permissions. You can also see the file_mode, which should be familiar to anyone who knows Unix permissions: this file is mode 6755, meaning both the SUID and SGID bits are set. You can also see the file name and the full path to the file.
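You can hunt for SUID/SGID files by hand with GNU `find`. The sketch below builds the same hidden layout in a temp tree so it is safe to run anywhere; on a real host you would point `find` at /bin, /usr/bin, or the whole filesystem:

```shell
# Recreate the hidden layout from the alert in a throwaway tree.
root=$(mktemp -d)
mkdir -p "$root/bin/..."              # "..." is a legal directory name
touch "$root/bin/.../.b"
chmod 6755 "$root/bin/.../.b"         # SUID + SGID + rwxr-xr-x

# -perm /6000 matches anything with the SUID or SGID bit set.
find "$root" -type f -perm /6000
rm -rf "$root"
```

The `find` command prints the full path of the hidden file, dot-name and all, which is exactly why a filesystem sweep catches files that `ls` users walk right past.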
SUID root shell on a Linux host
This is really strange behavior for several reasons:
- Why is someone making a copy of a system shell outside the system directories?
- Why did they make that copy to a hidden sub-directory under the system binaries?
- Why did they put it under a suspiciously named directory "/bin/..."?
- Why is it named something hidden (".b")?
- Why is it SUID and SGID root?
You can already see this is going nowhere good. A look at the cryptographic hashes seals the deal:
Cryptographic hash of suspicious file
Here Sandfly has taken the MD5, SHA1, SHA256, and SHA512 hashes of the suspect file. It also tells us that the hash matches the file with the name "/bin/dash" (which /bin/sh links to on this host). We know for certain that this file is in fact a system shell that has been renamed.
If you are not sure about a file, you can take the cryptographic hash and run it through your favorite online malware database to see if it matches something dangerous.
In this case, the hash merely matches /bin/dash, which is not malware, but the way the file was renamed and hidden is suspicious and a sign that the system is probably compromised. The fact that it is SUID root also tells us that someone is on this system and left this shell behind as a way to get root privileges whenever they want.
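Confirming that a renamed file is really a system binary is a one-line hash comparison. This sketch simulates the renamed copy with mktemp (on a real host you would hash the actual suspect path from the alert):

```shell
# Compare a suspect file's hash against a known system shell.
suspect=$(mktemp)
cp /bin/sh "$suspect"                 # simulate the renamed, hidden copy

if [ "$(sha256sum "$suspect" | awk '{print $1}')" = \
     "$(sha256sum /bin/sh   | awk '{print $1}')" ]; then
    echo "MATCH: suspect file is byte-identical to /bin/sh"
fi
rm -f "$suspect"
```

The same hash value can be pasted into an online malware database when it does not match any local binary.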
Directory Attributes, Locations and Names
Just as with files, we can also look at the directory attributes. Let us go back up a level and look at the sandfly that found a suspicious directory:
Sandfly found a suspicious directory on Linux
And again Sandfly gives us a helpful explanation of exactly what is going on:
Suspicious directory under /run/shm
A view of the attributes shows the details of the directory:
Suspicious directory owner information
Here we see the suspicious name, who made it, when it was made, and other details like file permissions. You will note that the timestamp is very close to the time the history file was modified, which again brackets our window of intrusion on this host.
Process Attributes
Just like files and directories, processes under Linux have properties that can tell you a lot about who or what is running them. Sandfly spots rogue processes and again collects a lot of information about what is going on to help you build a picture of what to look for and focus on quickly.
Above we see an example of a process that is doing a lot of strange things:
- It is running from /tmp
- It opened a network port.
- It has a strange name (one character).
- It has deleted itself from the disk but is still running.
All of this spells big trouble. It could be a malicious network server, backdoor, or other similar operation. A network process running out of a temp directory is bad enough, but the strange name and the missing binary on disk mean it is trying to hide from malware scanners. Even so, Sandfly is still able to pull up a lot of information.
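The deleted-but-still-running trick is visible directly in /proc. This sketch starts a disposable copy of sleep, deletes the binary from disk, and shows the kernel still tracking the vanished executable (everything here uses throwaway temp paths):

```shell
# A process whose executable was deleted shows "(deleted)" on its
# /proc/<pid>/exe symlink.
bin=$(mktemp)
cp "$(command -v sleep)" "$bin"
chmod +x "$bin"

"$bin" 30 &                  # start it running...
pid=$!
sleep 1                      # give it a moment to exec
rm -f "$bin"                 # ...then delete the binary from disk

readlink "/proc/$pid/exe"    # path now ends in " (deleted)"
kill "$pid" 2>/dev/null
```

On a real incident you would run the readlink against the suspect PID; /proc/<pid>/exe can also be copied out to recover the deleted binary for analysis while the process is still alive.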
Let us start with Sandfly's explanation:
That seems pretty straightforward. Now let us look at the process data.
In the above we can see everything about this process. We see who started it (ubuntu), when it was created (2018-05-22 01:52:17Z), the minutes it has been running (0, it is new!), and the directory where it was started (/tmp).
Going further we can see the command line used to start this process:
We can even see any environment variables that were attached to this process when it started. This can often leak the real IP address of the connection that started the process even if the intruder deleted this data from the logs (redacted for privacy reasons).
Process environment variables can reveal the real IP of the attacker, along with other useful information (IP redacted)
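You can pull the same environment data yourself from /proc, which survives even if the intruder wiped the logs. SSH logins typically leave SSH_CLIENT or SSH_CONNECTION holding the remote IP. Shown here against the current shell's own process for safety; substitute the suspect PID on a real host (reading another user's environ requires root):

```shell
# Environment variables are NUL-separated in /proc/<pid>/environ.
pid=$$                               # use the suspect PID instead
tr '\0' '\n' < "/proc/$pid/environ" \
    | grep -E '^SSH_(CLIENT|CONNECTION)=' \
    || echo "no SSH_* variables for pid $pid"
```

Dumping the full environ (drop the grep) can also reveal the attacker's TERM, LANG, and working directory at the time the process started.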
Network Ports and Any Network Addresses
Taking the above example, we can see the rogue process is listening on local network address 0.0.0.0 which means all addresses on the network interface. Finally we see it has an open TCP port 4444 waiting for inbound connections.
If a system was connected to this process, Sandfly will show you the remote address of the connected host as well.
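To cross-check listening ports by hand, you can read the kernel's socket table directly. In /proc/net/tcp the state column value 0A means LISTEN, and addresses and ports are in hex, so a quick conversion tells you what to look for:

```shell
# Print the local address:port (hex) of every listening TCP socket.
awk 'NR > 1 && $4 == "0A" { print $2 }' /proc/net/tcp

# Port 4444 in hex, to spot it in the output above:
printf '%04X\n' 4444      # prints 115C
```

On most hosts `ss -tlnp` (from iproute2) gives a friendlier view that includes the owning process name, and older systems have `netstat -tlnp`; reading /proc directly is useful when an intruder has replaced those tools.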
Sandfly provides a lot of information on suspicious activity it detects. By looking at the data provided you can quickly and efficiently determine what the problem is, when the problem started, who started it, and how you might go about resolving the issue based on your internal policies.