In this post, we will cover the basics of adding your own rules to Prelude-LML, Prelude's own log analysis engine. Written in highly optimized C, Prelude-LML comes with numerous built-in rules for everything from SSH authentication to NetScreen firewall logs.
To start, we will navigate to the default LML ruleset directory, located at /usr/local/etc/prelude-lml/ruleset unless you specified otherwise.
Our Example
For example purposes, we'll follow along with the made-up syslog entry below. It shows the date, a host named some_hostname, and a service called mylogger_service that printed a message.
Aug 22 05:22:05 some_hostname mylogger_service: We have an important message here.
Setting Up
The first thing you want to do is have a look at pcre.rules. This file provides a way to match on criteria that all rules in a given set share; many of these entries are based on services, such as ssh, which allows LML to limit which rules each log message is processed against.
Since our service, mylogger_service, is new and no other specific ruleset applies to it, we'll add a line to pcre.rules so that only our new rule is applied (we will add the rule to its own file, called local.rules, later). Adding the following to pcre.rules does just this:
regex=mylogger_service; include = local.rules;
This tells LML to process the rules in the local.rules file only if the log entry contains "mylogger_service".
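For context, the stock pcre.rules already contains similar per-service entries. The exact patterns and filenames vary by version, so treat the following as an illustration of the format rather than a literal excerpt:

regex=sshd; include = ssh.rules;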
Adding a Rule to its own Ruleset
We have now prepped our new rule file (local.rules) to receive any log entries that contain mylogger_service. Next, we need to add rules to local.rules for further processing and alerting on any matches. Here is what we will add to local.rules for our example:
# Detect important messages from the mylogger_service.
# LOG:Aug 22 05:22:05 some_hostname mylogger_service: We have an important message here.
regex=important message; \
classification.text=Important Message Detected.; \
id=32001; \
revision=1; \
assessment.impact.type=other; \
assessment.impact.severity=low; \
assessment.impact.description=An important message was detected with the mylogger_service; \
last;
Stepping through this example, we see the following:
- A comment line that describes what this rule is about.
- A LOG line that shows an actual example syslog entry for what we are looking for.
- regex, which is what we are matching on; any valid regular expression can be used here, such as character classes (\w, \d) or quantifiers and wildcards (., *, +).
- classification.text, the main alert text for this rule
- id, which uniquely identifies this particular rule
- revision, bump this up by one as you make production edits
- impact type, which can be things such as admin, user, other, etc.
- impact severity, such as low, medium, high
- impact description, a longer description of what most likely is referenced in classification.text
- last, which basically tells LML to stop further processing if this rule matches
Many more IDMEF fields may be used, such as references or process names.
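As an illustration, a reference could be attached to the alert by adding lines like the following to the rule above, before the last; statement. The field names follow the IDMEF paths used throughout the stock rulesets, but the reference name and URL here are made up, and the exact syntax may vary by version:

classification.reference(0).origin=vendor-specific; \
classification.reference(0).name=MYLOGGER-0001; \
classification.reference(0).url=http://example.com/mylogger/0001; \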
When mylogger_service appears in a syslog entry that LML processes, the entry will be run against all the rules in the local.rules file (which is how we set this up in pcre.rules). If the entry also matches our regex of "important message", we will get an alert with a severity of low, a message text of "Important Message Detected", and the various other settings we configured.
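To quickly test the rule, you can generate a matching syslog entry by hand with the standard logger utility (this assumes the host's syslog stream is one of the sources LML reads in your setup):

logger -t mylogger_service "We have an important message here."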
Conclusion
This example showed a simple way of adding rules to your LML engines. You may have noticed, when looking in pcre.rules, that there is a "best practices" section at the top on creating and adding LML rules. For much more extensive information, look here.
Monday, September 3, 2007
Friday, August 17, 2007
Threat Assessments with Argus
A useful practice for both incident response and general discovery is conducting threat assessments using session/flow data. My tool of choice for this is Argus, but any session/flow tool, such as NetFlow or SANCP, will do. For further information beyond this post, the book Extrusion Detection covers traffic threat assessments with both Argus and SANCP in extensive detail. I'll assume you are already familiar with collecting Argus data; if not, have a look at the Argus labels on this blog for articles pertaining to it.
What I'll describe here is what I call a blind threat assessment. By "blind", I mean that I am not looking for particular traffic the way you would when responding to an incident, where you know a victim address and possibly a source address and protocols. In the past, during any downtime I had, I would pick an Argus data file (which I generally rotate either daily or every X number of hours, depending on how busy the sensor collecting the data is) and pick it apart.
Let's move on to an example, reading in your Argus file of choice.
ra -nn -r /data/argus_data.arg
This pulls in and displays all the data in the Argus file, including source/destination IPs and ports, data transferred, etc. Now let's apply some BPF filters: say your mail server is at address 192.168.1.25, and for this assessment you don't care about traffic to or from it.
ra -nn -r /data/argus_data.arg - not host 192.168.1.25
Now gobs of data scroll by on the screen, none of it related to your mail server at that address. Next, we may decide that any web traffic is of no interest to us today, so we append more BPF filters to our current one and continue to whittle down the amount of traffic displayed by the Argus client.
ra -nn -r /data/argus_data.arg - not host 192.168.1.25 and not port 80 and not port 443
Next, you realize you are seeing a bunch of ARP traffic that is of little use to you currently - so let's get rid of it too.
ra -nn -r /data/argus_data.arg - not host 192.168.1.25 and not port 80 and not port 443 and not arp
The basic premise of this blind assessment is to narrow down your view of the data until you reach things you might never otherwise notice, such as a user running a new peer-to-peer client or a rogue MP3 server on your corporate network. You can continue to limit with BPF filters, adding them to the end of your list, or start utilizing rasort to find the larger-bandwidth sessions (maybe you like the noisy stuff). The whole principle of this blind threat assessment is that there is no wrong way of doing it - stumbling randomly across some weird connection and then applying a human's logic to it is something your traditional signature-based NIDS can't do.
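As a sketch of the rasort angle (assuming a version of the argus-clients where rasort takes its sort fields via -m), sorting the remaining records by byte count quickly surfaces the heaviest talkers:

rasort -nn -r /data/argus_data.arg -m bytes - not host 192.168.1.25 and not port 80 and not port 443 and not arp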
You won't always be able to catch everything this way; depending on how much traffic you look at and what you decide to globally eliminate, huge chunks of traffic will never be reviewed. Nonetheless, I feel the occasional manual review adds value, as you usually turn up something interesting that you did not know about. So take fifteen minutes of your day or week, and notice something new.
Tuesday, July 31, 2007
Managing Snort Rules
I know many people like to use Oinkmaster to pull in rule updates, manage rules, etc., and I am no different. But one thing I tend not to do is maintain numerous Oinkmaster configurations for various hosts. Let me elaborate on a rule management scheme for Snort signatures that I find easier to manage.
Let's say we have five Snort sensors deployed, watching various types of networks and hosts, and thus requiring vastly different rule tweaks and tuning. Each of these sensors is going to require both some of the same and some different rulesets and individual rules to be loaded. You could let Oinkmaster handle this, or you could use thresholding within Snort itself.
Example Architecture:
- Download new Snort signatures daily from snort.org and BleedingThreats via Oinkmaster, disabling rules that you globally do NOT use (with disablesid lines).
- Run these new rules through a loaded Snort configuration (basically a config with most everything turned on), in testing mode ( -T ). Potentially quarantine or notify a rule maintainer that new rules are available.
- If the new rules pass the Snort testing phase, make these rules centrally available - possibly via a revision control repository, scp, or packaged (rpm, deb, etc.).
- Have your sensors check for new signatures at various intervals (using revision, timestamp, or version to decipher if a new ruleset is available). Let Snort restart with the new rules.
- Utilize thresholding in Snort to suppress or threshold rules that affect only some of your sensors. Have a look at the threshold.conf that comes along with the Snort tarball; a brief suppression example follows this list.
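To illustrate the per-sensor suppression mentioned in the last item, here are two lines in standard threshold.conf suppress syntax (the SIDs and IP address are made up for illustration):

# Ignore this rule entirely on this sensor
suppress gen_id 1, sig_id 2002910
# Ignore this rule only for events from one known-noisy host
suppress gen_id 1, sig_id 1852, track by_src, ip 192.168.1.50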
Here is the caveat, and where many people take exception to this process: it isn't as efficient on the Snort detection engine itself. When rules are removed at the Oinkmaster level, they are never loaded into the Snort configuration. When suppressed via Snort thresholding, however, it happens post-detection (i.e. Snort has already detected a hit on the rule, and only then decides it should be suppressed).
So the moral of the story is: if you have spare cycles on beefy sensors that are not bogged down by the traffic they are watching, then you can benefit from the ease of administration and from having rule modifications stay close to the sensor for review. If your sensors are already taxed, you should really use Oinkmaster all the way through.
Monday, July 23, 2007
High Availability Prelude Central Services
I recently posted to the Prelude wiki a sample configuration for providing high availability for your central Prelude services.
Basically, the configuration provides a setup across two servers to host these services: Manager, Correlator, Apache, MySQL, and Prewikka. It is purely a fault-tolerance scheme, as opposed to a performance booster, although you could spread the load to increase performance - for example, by splitting the web interface onto one host while the other handles database interaction for incoming events, or by offloading things such as reporting and backups to the secondary.
MySQL v5 is required to avoid potential auto-increment collisions when doing multi-master replication. Other than that, there should be minimal changes needed across versions of the other software pieces. Either host in the pair is capable of taking over as the primary; the only caveat is that heartbeat handles machine failure, not service failure. So you will still need your application-level monitoring (i.e. Nagios or another SNMP-based solution) in place to be notified of service issues.
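On the auto-increment point, a minimal my.cnf sketch for the two multi-master MySQL 5 servers might look like the following, interleaving the increments so the two masters never generate the same key (server-ids and values are illustrative):

# on server 1
server-id = 1
auto_increment_increment = 2
auto_increment_offset = 1

# on server 2
server-id = 2
auto_increment_increment = 2
auto_increment_offset = 2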
Saturday, July 14, 2007
Changes of the Seasons
When the seasons change in New England, there are distinct things you notice - from the chill in the air as the leaves change in fall to the rainy months of early spring. Most changes are anticipated, but as New England weather goes, you always expect the unexpected - you can just as easily get snow in April or have warm January days. Where I am going with this is Network Awareness: the ability to notice changes, additions, etc. from endpoints on your network - whether that is new ports, increased protocol activity, or just actively getting to know the hosts making use of your network.
Much like the weather in New England, the network activity of your hosts is often predictable, but numerous anomalies also appear every day - hosts that shouldn't be running a web server, or increased activity from an IP address. Alerting on and profiling these anomalies is what I am getting at with this Network Awareness approach: basically, utilizing existing tools (nessus, nmap, p0f, etc.), with storage (mysql, text, etc.), and custom tools (perl, c, etc.) to build profiles, notice trends, and generate alerts.
Maybe there are open source tools already in this space (do you know of any?), but it is also a task that benefits from the flexibility of a home-grown process, as each network and set of endpoints is so vastly different nowadays.
Things of interest (have any others?):
* Build profiles and store all interesting events in a database, for maintaining history and state as well as for future correlations
* Analyze various sources of data for various types of items
* Sources of Data:
- nessus: both for assessing and verifying compliance, provides a baseline
- nmap: actively profile port openings and OS detection
- p0f: passively identify OS
- tshark: for traffic profiling and statistics
- pads: passively noticing new services offered
- argus: counting hosts, ports, traffic, etc.
- various others, including netics or fl0p
- custom: for mining logs, running comparisons, etc.
* Types of Items:
- tcp and udp ports
- ip addresses
- services offered on those ports
- identifying operating system usage
- traffic patterns
- establishing normal usage profiles on traffic, endpoints, and potentially users of those endpoints
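As one small, hedged example of the "notice the changes" idea (the tool choice, network range, and file names here are illustrative, not part of any existing kit), you could snapshot open ports with nmap's greppable output and diff successive snapshots:

nmap -p- -oG /var/netaware/ports-today.gnmap 192.168.1.0/24
diff /var/netaware/ports-yesterday.gnmap /var/netaware/ports-today.gnmap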
This is not an idea based on real-time alerting or analysis, but a crunching of various data to cast a light over areas that deserve attention or investigation. I guess the operative word here is change. Change can be good, especially when making improvements, but in our context, we are looking for those changes that indicate something unauthorized or outside the scope of a security policy. Services and people many times operate in a set pattern with noticeable characteristics...let's find the anomalies.
Friday, July 6, 2007
Brute Forcing SSH Passwords with Hydra
Quite often you may find the need to audit passwords without grabbing a copy of the hashes, or maybe you need to generate a simulated brute force attack to test one of your sensors or correlation engines. In from stage left steps THC-Hydra, the self-described "very fast network logon cracker which supports many different services."
If you are familiar with BackTrack, running Hydra from within it is quite easy; it is located under the online password cracking tools. Otherwise, Hydra can be built from source - just make sure the openssl and ssh libraries are installed for it to be compiled against. As usual, the configure script will let you know which libraries are lacking on your system.
Much like nmap and its GUI front-end, Hydra can be run either from the command line or with a simple GTK GUI wrapper. The only change necessary is to have X working and to specify xhydra as opposed to just hydra. I'll use the command-line options in this post, as the GUI makes it extremely easy to figure out the options; in fact, the GUI will actually build the hydra command line for you, so you can see how it is configured to run.
Numerous services are supported for cracking in the latest version of Hydra, which is 5.4 at the time of this post. Although we will use ssh2 in this example, other network services such as cvs, ftp, imap, mysql, ldap, and http are also available. So let's move on to running an over-the-air ssh password attack (exercise caution if you have account lockouts or other account policy settings in place).
A simple one-off username/password combo:
hydra 192.168.1.25 ssh2 -l foohacker -p bluebird
The above attempts to login over ssh v2 to 192.168.1.25 as foohacker with password of bluebird.
Quick alteration to utilize lists:
hydra -M targets.txt ssh2 -L users.txt -P passwords.txt
So now we have replaced each single setting with a list, allowing us to brute force ssh logins with a matrix of users, passwords, and hosts. When using these lists, I specify a single item per line in flat text files.
A couple options worth mentioning:
-f allows you to exit hydra once a match is found.
-t allows you to manipulate the number of tasks run in parallel; per the README, experimenting with this setting can result in improved speed - or in disabling the service. :)
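Putting those options together with the list-based run from earlier (the file names are the same made-up ones as above, and -o is hydra's option for writing found logins to a file):

hydra -M targets.txt ssh2 -L users.txt -P passwords.txt -t 4 -f -o found_logins.txt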
Have a look here and here to learn more about the options, download the source, and view changelogs.
Sunday, July 1, 2007
Prelude Registration Server
As anyone who has used Prelude will know, registering a sensor with a Prelude Manager/Relay is the first step in having your sensor send alerts into your Prelude framework. It is usually a combination of (a) running 'prelude-adduser registration-server' on the manager/relay, and (b) running 'prelude-adduser register' on the sensor you are adding - followed by accepting the registration on the manager, etc.
In this post, I will show a quick way of setting up a pseudo-daemonized instance of the Prelude registration server that will auto-accept sensor registrations. This comes in handy when you have a bunch of sensors to register, yet you don't want to constantly go back to the manager console to acknowledge each individual sensor registration.
On the manager side, first install the screen utility.
Continuing on the manager machine, I usually create an init script whose process is the following:
/usr/bin/screen -d -m /usr/local/bin/prelude-adduser registration-server prelude-manager --passwd=somepassword --keepalive --no-confirm
What this command says is: have screen fire up this prelude command while detaching the screen session, thus putting it in the background much like a daemonized process (i.e. not running actively in your console). The 'prelude-adduser registration-server' command runs using the prelude-manager analyzer profile. The key additions to the command are the use of a pre-shared password and the keepalive and no-confirm options. The pre-shared password is used by the registering sensor, and no-confirm eliminates the need to accept each sensor registration on the manager. The keepalive option keeps the registration server from exiting after a single successful registration on the manager side.
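Here is a minimal sketch of such an init script, assuming the same paths and password as above. The session name, script location, and runlevel handling are left to your distribution's conventions; -S is added so the screen session can be addressed by name on stop:

#!/bin/sh
# /etc/init.d/prelude-regserver - wrap the registration server in a detached screen session
case "$1" in
  start)
    /usr/bin/screen -dmS prelude-regserver \
      /usr/local/bin/prelude-adduser registration-server prelude-manager \
      --passwd=somepassword --keepalive --no-confirm
    ;;
  stop)
    # tell the named screen session to quit, taking the registration server with it
    /usr/bin/screen -S prelude-regserver -X quit
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
exit 0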
Finally, run the following on each sensor that needs to register (in this example, a Snort sensor):
prelude-adduser register prelude-snort-profile "idmef:w admin:r" 192.168.1.2 --uid snort --gid snort --passwd=somepassword
The above does the normal sensor registration pieces: specifying the profile in use, the Prelude permissions to use, and the user/group allowed to access the sensor profile. The important addition is the use of the pre-shared password that was specified in the registration server running on our manager.