Tuesday, September 11, 2007

Capturing flow data from your Linksys at home


As a big believer in collecting flow/session data at every NIDS sensor location, I think there should be an easy way to do the same at home without putting a full-time IDS in place. With a trusty Linksys router re-flashed with DD-WRT, one extra package installed on the router, and a suite of flow collection/analysis tools on your primary Linux desktop, we can easily achieve this.

On your Linksys:

  1. First things first. In this scenario we re-flashed a Linksys router with DD-WRT, following these instructions.
  2. Next, via the DD-WRT web interface, we enabled JFFS2 support and SSH, both found under subsections of the Administration tab.
  3. Moving on, update your ipkg configuration with: ipkg update. Then install fprobe via ipkg: ipkg install fprobe.
  4. Finally, add a shell script at /jffs/etc/config/fprobe.startup containing the following command: fprobe -i br0 -f ip 192.168.1.100:9801. Make it executable with chmod 700 fprobe.startup and reboot your router.
A brief discussion of the fprobe command is needed:

  • -i specifies the interface you want to watch flows on. I chose br0, the internal interface.
  • -f specifies a BPF filter. In this scenario, I chose to only create flow records for IP traffic.
  • IP:Port is the remote IP address and UDP port your flow collector will be listening on - this will be set up next on your desktop Linux box.
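
Putting that together, here is a minimal sketch of what /jffs/etc/config/fprobe.startup might contain (using the example collector address and port from above - adjust for your own network):

#!/bin/sh
# Export flow records for IP traffic seen on the internal bridge (br0)
# to the collector on the desktop at 192.168.1.100, UDP port 9801.
fprobe -i br0 -f ip 192.168.1.100:9801
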
On your Linux box:

  1. Install flow-tools from here. All that is needed is a standard: configure; make; make install. One caveat to watch out for: if you use gcc 4.x, you will need the patch available where you downloaded the tarball.
  2. Create a directory to store your flow data: mkdir -p /data/flows/internal
  3. If you run IPTables or some other host-based firewall, make sure to allow UDP 9801 connections from your router (a sample iptables rule follows this list).
  4. Finally, run the following command, and also add it to your system startup (via /etc/rc.local, for example): /usr/local/netflow/bin/flow-capture 192.168.1.100/192.168.1.1/9801 -w /data/flows/internal
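
For step 3, if iptables is your host-based firewall, a rule along these lines should work (this sketch assumes the router is at 192.168.1.1, as in the flow-capture example):

iptables -A INPUT -p udp -s 192.168.1.1 --dport 9801 -j ACCEPT
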
A brief discussion of the flow-capture command is needed:

  • You specify the local IP address you want your collector to listen on, then the address of the flow probe (the router), followed by the UDP port to use - all in local/remote/port format.
  • -w specifies the directory to write flow files out to. By default, flow-capture rotates to a new file every 15 minutes.
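
Once flow-capture has been running for a while, you should see completed flow files (named ft-v05.*) start to appear in the capture directory; a quick sanity check is simply:

ls -l /data/flows/internal
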
So now that we have some flow data being collected on your machine, what are some cool things we can do with it? Looking in flow-tools' default binary directory, /usr/local/netflow/bin, we see numerous flow-* tools. We'll look at a few of them briefly below.

Using flow-print:

flow-print < ft-v05.2007-09-11.080001-0400

The above command will print out the flow records contained in that particular flow file. The columns will contain srcIP/dstIP/protocol/srcPort/dstPort/octets/packets. The octets column is the equivalent of bytes. This is your standard session/flow data.

Adding a "-f 1" flag will produce timestamps among other things. The -f flag allows for numerous types of formatting and additional columns, etc.
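
For example, to get the timestamped output against the same file:

flow-print -f 1 < ft-v05.2007-09-11.080001-0400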

On a side note, standard *nix tools such as awk and grep can be very useful for pulling data out of plain dumps of the flow records.
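
As a rough, hypothetical example, assuming the column layout described above, the following would pull out flows involving port 22 and total up their octets:

flow-print < ft-v05.2007-09-11.080001-0400 | grep ' 22 ' | awk '{ sum += $6 } END { print sum " octets" }'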

Using flow-cat and flow-stat:

Much like with Argus, with flow-tools you chain the various utilities together to get the output you want.

flow-cat ft-v05.2007-09-11.0* | flow-stat -f9 -S2

In the above pipeline, flow-cat concatenates all the files whose names match that pattern. The resulting output is passed to flow-stat for crunching and display. The flow-stat command generates reports, taking a report format via the -f flag and sort options via -S and -s. Our example specified a report keyed on the source IP address, sorted on the octets (i.e., bytes) field (have a look at the man page for flow-stat to see all the various options). Thus, we now have output from all those files showing the *noisiest* source hosts, listed by most bytes transferred.
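
A similar report keyed on destination addresses should just be a matter of changing the report format number - format 8 should correspond to the destination IP report, but check the flow-stat man page to confirm on your install:

flow-cat ft-v05.2007-09-11.0* | flow-stat -f8 -S2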

Utilizing your desktop and a router, things you probably already have at home, you too can watch/collect/analyze flow data to keep a watchful eye on your network - without deploying a dedicated NIDS or NSM sensor.

Monday, September 3, 2007

Writing Prelude LML rules

In this post, we will cover the basics of adding your own rules to Prelude-LML, which is Prelude's own log monitoring analysis engine. Written in highly optimized C, Prelude-LML comes with numerous rules built in for everything from SSH authentication to Netscreen firewalls.

To start we will navigate to the default LML ruleset directory, which is located in /usr/local/etc/prelude-lml/ruleset - unless you specified otherwise.

Our Example

For example purposes, we'll use the made-up syslog entry below to follow along with. It shows the date, a host named some_hostname, and a service called mylogger_service that printed a message.

Aug 22 05:22:05 some_hostname mylogger_service: We have an important message here.

Setting Up

The first thing you want to do is have a look at pcre.rules. The pcre.rules file provides a way to match on criteria that all rules in a given set have in common. Many of these entries are based on services, such as ssh, which allows LML to limit which log messages are processed against which rulesets.

Since our service, mylogger_service, is new and no other specific rulesets apply to it, we'll add a line to pcre.rules so that only our new rule is applied (we will add the rule to its own file, called local.rules, later). Adding the following to pcre.rules will do just this:

regex=mylogger_service; include = local.rules;

What this does is tell LML to process the rules in the local.rules file only if the log entry contains "mylogger_service".


Adding a Rule to its own Ruleset

So now we have prepped our new rule file (local.rules) to receive any log entries that contain the string mylogger_service. Next we need to add rules to local.rules for further processing and alerting on any matches. Here is what we will add to local.rules for our example:

# Detect important messages from the mylogger_service.
# LOG:Aug 22 05:22:05 some_hostname mylogger_service: We have an important message here.
regex=important message; \
classification.text=Important Message Detected; \
id=32001; \
revision=1; \
assessment.impact.type=other; \
assessment.impact.severity=low; \
assessment.impact.description=An important message was detected with the mylogger_service; \
last;


Stepping through this example, we see the following:

- A comment line that describes what this rule is about.
- A LOG line showing an actual example syslog entry of what we are looking for.
- regex, which is what we match on; any valid regular expression can be used here, such as character classes (\w, \d) or wildcards and quantifiers (., *, +).
- classification.text, the main alert text for this rule.
- id, which uniquely identifies this particular rule.
- revision, which you bump by one as you make production edits.
- impact type, which can be values such as admin, user, other, etc.
- impact severity, such as low, medium, or high.
- impact description, a longer description of whatever is referenced in classification.text.
- last, which tells LML to stop further processing once this rule matches.

Many more IDMEF fields may be used, such as references or process names.

When LML processes a syslog entry containing mylogger_service, it will run that entry against all the rules in the local.rules file (which is what we set up in pcre.rules). If the entry also matches our regex of "important message", we will get an alert with a severity of low, a message text of "Important Message Detected", and the various other settings we configured.
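
A quick way to exercise the new rule (assuming prelude-lml is reading your syslog output and has been restarted so it picks up the ruleset changes) is to generate a matching entry yourself with the standard logger utility:

logger -t mylogger_service "We have an important message here."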

Conclusion

This example showed a simple way of adding your own rules to your LML engines. You may have noticed that the top of pcre.rules contains a "best practices" section on creating/adding LML rules. For much more extensive information, look here.
