Tuesday, July 31, 2007

Managing Snort Rules

I know many people like to use Oinkmaster to pull in rule updates, manage rules, etc. - and I am no different. But one thing I tend not to do is maintain numerous Oinkmaster configurations for various hosts. Let me elaborate on a rule management scheme for Snort signatures that I find easier to maintain.

Let's say we have five Snort sensors deployed, watching different types of networks and hosts - and thus requiring vastly different rule tweaks and tuning. Each of these sensors will need some rulesets and individual rules in common with the others and some unique to it. You could let Oinkmaster handle this, or you could use thresholding within Snort itself.

Example Architecture:

  • Download new Snort signatures daily from snort.org and BleedingThreats via Oinkmaster, disabling rules that you globally do NOT use (with disablesid lines).
  • Run these new rules through a loaded Snort configuration (basically a config with most everything turned on) in testing mode ( -T ); a sketch of these first two steps follows this list. Optionally, quarantine the new rules or notify a rule maintainer that new rules are available.
  • If the new rules pass the Snort testing phase, make these rules centrally available - possibly via a revision control repository, scp, or packaged (rpm, deb, etc.).
  • Have your sensors check for new signatures at various intervals (using a revision, timestamp, or version number to determine whether a new ruleset is available). Let Snort restart with the new rules.
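
As a rough sketch of the first two steps, assuming Oinkmaster is installed on the central rule server and the snort.org/BleedingThreats download URLs are already configured, the nightly update and test could look something like this (the paths and SIDs are placeholders of my own):

# oinkmaster.conf - rules that no sensor ever loads are dropped here
disablesid 2002910
disablesid 2003068

# nightly cron job on the central rule server
oinkmaster.pl -C /etc/oinkmaster.conf -o /etc/snort/rules

# test the fresh ruleset against a loaded config before publishing it
snort -T -c /etc/snort/snort-full.conf
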
Here's the per-sensor tweak:
  • Utilize thresholding in Snort to suppress or threshold rules that apply to only some of your sensors (example below). Have a look at the threshold.conf that comes with the Snort tarball.
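
For example, a sensor watching a segment with no web servers might locally silence a web rule, or rate-limit a noisy one, with threshold.conf entries along these lines (the SIDs and addresses are made up):

# silence SID 2003068 entirely on this sensor
suppress gen_id 1, sig_id 2003068
# silence SID 1852 only for one chatty source host
suppress gen_id 1, sig_id 1852, track by_src, ip 10.1.1.54
# allow at most one alert per minute per source for SID 1851
threshold gen_id 1, sig_id 1851, type limit, track by_src, count 1, seconds 60
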
So essentially, this approach relies on one server to centrally update and maintain new Snort signatures. A rule that can be removed from all sensor configurations is dropped at the Oinkmaster level, while rules that affect only some of the sensors are handled at the sensor level in threshold.conf.

Here is the caveat, and where many people take exception to this process: it isn't as efficient on the Snort detection engine itself. When rules are removed at the Oinkmaster level, they are never loaded into the Snort configuration. When a rule is suppressed via Snort thresholding, however, the suppression happens post-detection (i.e. Snort has already matched the rule, and only then decides the alert should be suppressed).

So the moral of the story is: if you have spare cycles on beefy sensors that are not bogged down by the traffic they are watching, you can benefit from the easier administration and from having rule modifications stay close to the sensor where they can be reviewed. If your sensors are already taxed, you should really use Oinkmaster all the way through.

Monday, July 23, 2007

High Availability Prelude Central Services

I recently posted a sample configuration to the Prelude wiki for providing high availability for your central Prelude services.

Basically, the configuration provides a setup across two servers to host these services: Manager, Correlator, Apache, MySQL, and Prewikka. It is purely a fault-tolerant scheme, as opposed to a performance booster, although you could spread the load to increase performance - for example, by serving the web interface from one host while the other handles database interaction for incoming events, or by offloading tasks such as reporting and backups to the secondary.

MySQL v5 is required to avoid potential auto-increment collisions when doing multi-master replication. Other than that, there should be minimal changes needed to use different versions of the other software pieces. Either host in the pair is capable of taking over as the primary - the only caveat is that heartbeat covers machine failure, not service failure. So you will still need application-level monitoring (i.e. Nagios or another SNMP-based solution) in place to be notified of service issues.
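
As an illustration of the auto-increment point, the usual way to keep two MySQL masters from handing out colliding IDs is to stagger the counters in my.cnf on each node (the values below assume a two-node pair):

# my.cnf on the first node (the second node would use auto_increment_offset = 2)
auto_increment_increment = 2
auto_increment_offset    = 1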

Saturday, July 14, 2007

Changes of the Seasons


When the seasons change in New England, there are distinct things you notice - such as the chill in the air as the leaves change in fall, to the rainy months of early spring. Most changes are anticipated, but as New England weather goes, you always expect the unexpected - you can just as easily get snow in April or have warm January days. Where I am going with this is in relation to Network Awareness, the ability to notice changes, additions, etc. from endpoints on your network - whether this is new ports, increased protocol activity, or just actively getting to know the hosts making use of your network.

Much like the weather in New England, the network activity of your hosts is often predictable, but there are also numerous anomalies that appear every day - hosts that shouldn't be running a web server, or increased activity from an IP address. Alerting on and profiling these anomalies is what I am getting at with this Network Awareness approach: basically, utilizing existing tools (nessus, nmap, p0f, etc.), storage (mysql, text, etc.), and custom tools (perl, c, etc.) to build profiles, notice trends, and generate alerts.

Maybe there are open source tools already in this space (do you know of any?), but this is also a task that benefits from the flexibility of a home-grown process, as each network and set of endpoints is so vastly different nowadays.

Things of interest (have any others?):

* Build profiles and store all interesting events in a database, for maintaining history and state and for future correlations

* Analyze various sources of data for various types of items

* Sources of Data:
  • nessus: both for assessing and verifying compliance, provides a baseline
  • nmap: actively profile port openings and OS detection (see the small diff sketch after this list)
  • p0f: passively identify OS
  • tshark: for traffic profiling and statistics
  • pads: passively noticing new services offered
  • argus: counting hosts, ports, traffic, etc.
  • various others, including netics or fl0p
  • custom: for mining logs, running comparisons, etc.
* Important Items:
  • tcp and udp ports
  • ip addresses
  • services offered on those ports
  • identifying operating system usage
  • traffic patterns
  • establishing normal usage profiles on traffic, endpoints, and potentially users of those endpoints
* Establish signatures to build these profiles, notice trends, and spot anomalies
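
Picking nmap from the source list above as one small example, a daily scan archived to disk can be diffed against the previous day's run to flag new open ports (the network, paths, and dates are placeholders):

nmap -sS -p- -oG /var/profiles/scan-$(date +%F).gnmap 192.168.1.0/24
diff <(grep -v '^#' /var/profiles/scan-2007-07-13.gnmap) <(grep -v '^#' /var/profiles/scan-2007-07-14.gnmap)

The grep strips nmap's timestamp comments so only genuine host and port changes show up in the diff; the same diff-against-baseline idea applies to p0f, pads, or argus output.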


This is not an idea based on real-time alerting or analysis, but a crunching of various data to cast a light over areas that deserve attention or investigation. I guess the operative word here is change. Change can be good, especially when making improvements, but in our context, we are looking for those changes that indicate something unauthorized or outside the scope of a security policy. Services and people many times operate in a set pattern with noticeable characteristics...let's find the anomalies.

Friday, July 6, 2007

Brute Forcing SSH Passwords with Hydra


Quite often you may find the need to audit passwords without grabbing a copy of the hashes, or maybe need to generate a simulated brute force attack to test one of your sensors or correlation engines. In from stage left steps THC-Hydra, the self-described "very fast network logon cracker which supports many different services."

If you are familiar with BackTrack, running Hydra from within it is quite easy; it is located under the online password cracking tools. Otherwise, Hydra can be built from source - just make sure the openssl and ssh libraries are installed for it to compile against; as usual, the configure script will let you know which libraries are missing on your system.

Much like nmap and its GUI front-end, Hydra can be run either from the command line or with a simple GTK GUI wrapper. The only changes necessary are to have X working and to specify xhydra instead of just hydra. I'll use the command-line options in this post, as the GUI makes it extremely easy to figure out the options on your own. In fact, the GUI will actually build the hydra command line for you, so you can see how it is configured to run.

Numerous services are supported for cracking in the latest version of Hydra, which is 5.4 at the time of this post. Although we will use ssh2 in this example, other network services such as cvs, ftp, imap, mysql, ldap, and http are also available. So let's move on to running an over-the-air ssh password attack (exercise caution if account lockouts or other account policy settings are in place).

A simple one-off username/password combo:

hydra 192.168.1.25 ssh2 -l foohacker -p bluebird

The above attempts to log in over ssh v2 to 192.168.1.25 as foohacker with a password of bluebird.

Quick alteration to utilize lists:

hydra -M targets.txt ssh2 -L users.txt -P passwords.txt

So...now we have replaced the single setting for each and allowed ourselves to brute force ssh login with a matrix of users, passwords, and hosts. I specify a single item per line in my flat text files when using these lists.
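
For reference, those list files are nothing special - one entry per line, along these lines (the contents are made up):

users.txt:
  root
  foohacker
  admin

passwords.txt:
  bluebird
  letmein
  s3cr3t

targets.txt:
  192.168.1.25
  192.168.1.26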

A couple options worth mentioning:

-f tells hydra to exit once a match is found
-t sets the number of tasks run in parallel; per the readme, experimenting with this setting can result in improved speed - or in disabling the service :)
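
Putting those together with the list files, a run that stops at the first valid pair and logs any matches might look like this (the -o output file and the task count of 8 are just one possible choice):

hydra -M targets.txt ssh2 -L users.txt -P passwords.txt -f -t 8 -o found.txt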

Have a look here and here to learn more about the options, download the source, and view changelogs.

Sunday, July 1, 2007

Prelude Registration Server


As anyone who has used Prelude will know, registering a sensor with a Prelude Manager/Relay is the first step in having your sensor send alerts into your Prelude framework. It is usually a combination of (a) running 'prelude-adduser registration-server' on the manager/relay and (b) running 'prelude-adduser register' on the sensor you are adding, followed by accepting the registration on the manager.

In this post, I will show a quick way of setting up a pseudo-daemonized instance of the Prelude registration server that will auto-accept sensor registrations. This comes in handy when you have a bunch of sensors to register but don't want to keep going back to the manager console to acknowledge each individual registration.

On the manager side, first install the screen utility.

Continuing on the manager machine, I usually create an init script whose process is the following:

/usr/bin/screen -d -m /usr/local/bin/prelude-adduser registration-server prelude-manager --passwd=somepassword --keepalive --no-confirm

What this command says is: have screen fire up the prelude command while detaching the screen session, thus putting it in the background much like a daemonized process (i.e. not running actively in your console). The 'prelude-adduser registration-server' command runs using the prelude-manager analyzer profile. The key additions to the command are the use of a pre-shared password and the --keepalive and --no-confirm options. The pre-shared password is used by the registering sensor, --no-confirm eliminates the need to accept each sensor registration on the manager, and --keepalive keeps the registration server from exiting after a single successful registration.
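
A minimal sketch of such an init script, assuming the path above and a named screen session of my own choosing called regserver, might be:

#!/bin/sh
# /etc/init.d/prelude-regserver - hypothetical wrapper; adjust paths for your distribution
case "$1" in
  start)
    /usr/bin/screen -S regserver -d -m /usr/local/bin/prelude-adduser registration-server prelude-manager --passwd=somepassword --keepalive --no-confirm
    ;;
  stop)
    /usr/bin/screen -S regserver -X quit
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    ;;
esac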

Finally, running the following on the sensors needing to register (in this example, a snort sensor):

prelude-adduser register prelude-snort-profile "idmef:w admin:r" 192.168.1.2 --uid snort --gid snort --passwd=somepassword

The above does the normal sensor registration pieces: specifying the profile in use, the Prelude permissions to grant, and the user/group allowed to access the sensor profile. The important addition is the pre-shared password that was specified on the registration server running on our manager.
