Quite some time ago I wrote about this subject: I See You!: Solaris Auditing (BSM). Although plenty of information is out there regarding Solaris Auditing, the post was well received and pretty popular, but I’ve never been happy with where I left it. Many people feel that auditing is “difficult”. Why? Because it’s hard to enable? No, that’s simple: just run bsmconv and you’re done, then edit 2 simple configs in /etc/security to tweak it… what’s hard about that?
I’ll tell you why auditing is a pain in the butt: because for all the dozens (or hundreds) of tutorials out there, almost none of them teach you how to actually use the auditing data. So you’ve got these really great audit trails, but now what? This blog entry is about filling that void, similar to the post I did about actually using BART: Solaris Automated File Integrity Checking: bartlog.
As I said in my previous post, enabling BSM is simple. The bsmconv convenience wrapper in /etc/security will turn on the auditd SMF service and add the following to /etc/system:
set c2audit:audit_load = 1
You reboot and auditing is going. So what about tweaking what it collects?
The following is my recommendation for /etc/security/audit_startup, these policies change the way auditing collects data:
/usr/sbin/auditconfig -setpolicy +cnt
#/usr/sbin/auditconfig -setpolicy +perzone
/usr/sbin/auditconfig -setpolicy +zonename
/usr/sbin/auditconfig -setpolicy +argv
The “+cnt” policy says that even if auditing can’t record data (usually because /var/audit is out of space) the system should keep running. In a super-high-security environment you would remove this so that the box halts if auditing can’t function. Next, the “+zonename” policy adds the zone name to each audit entry; if you use Solaris Containers you want this policy. The “+argv” policy is very important: without it you’ll see commands executing but not their arguments, and typically when you’re auditing for security you aren’t just interested in the command but in how it’s being executed. Additionally, you could add the “+arge” policy to include the environment with each command, but that seems like major overkill to me.
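Note that policy changes don’t require a reboot: auditconfig operates on the live system, so you can confirm that audit_startup took effect, or experiment before committing a policy to the file. A quick sketch, guarded so it is a harmless no-op on non-Solaris hosts:

```shell
#!/bin/sh
# Show the audit policies active right now (Solaris only; the guard makes
# this a no-op elsewhere so the sketch can be run anywhere).
show_policy() {
    if [ -x /usr/sbin/auditconfig ]; then
        /usr/sbin/auditconfig -getpolicy
    else
        echo "no /usr/sbin/auditconfig here (not a Solaris host)"
    fi
}
show_policy
# Policies can also be toggled live, e.g.: auditconfig -setpolicy +argv
```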
Now, just a moment on the “+perzone” policy. By default (meaning, without +perzone) auditd in the global zone will record everything on the box regardless of which zone it occurs in, which is why it’s so important to use the “+zonename” policy. So if zone “oracle1” runs a command, an audit record is written to the audit trails in the global zone. There are at least two potential problems with this: 1) the users inside the zone can’t access the audit trails, and 2) the users inside the zone might not want to be audited. By setting “+perzone” in the global zone, each zone will audit itself and only itself. That means the global zone only records audit events that occur in the global zone. It also means that each zone can choose to enable auditing within itself by enabling the auditd service and tweaking the configs in /etc/security.
Moving on… the other important config is /etc/security/audit_control, which determines what events are audited by default. I recommend the following:
dir:/var/audit
flags:lo,ex
naflags:lo
#plugin: name=audit_syslog.so; p_flags=all
The “flags” line defines which classes we record by default. This can be changed per user in the audit_user file (maybe you really don’t trust a particular user?). “lo” is logins/logouts, including su activity. “ex” is executions. Together these two flags record people coming and going and running commands. I recommend this as the default and strongly suggest you avoid auditing more unless you know what you’re doing. The “naflags” line is like “flags” but applies to events that are “not attributable” to a user (such as a failed login for a user that doesn’t exist). If you need to know more about flags, configs, and syslog, refer to my previous post.
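As an example of that per-user override, the audit_user format is “username:always-audit-flags:never-audit-flags”, and the per-user flags combine with the system-wide flags from audit_control. So to additionally record file writes for one account you don’t trust (the username here is made up, obviously):

```
# /etc/security/audit_user
jdoe:fw:no
```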
Audit Trail Maintenance
Now that auditing is running, you’ll see audit trails in /var/audit (by default). The filename format is “starttime.endtime.hostname”, which signifies that the audit trail is “terminated”, or complete. The currently active trail will instead be named “starttime.not_terminated.hostname”.
There are 2 important tasks in maintaining these audit trails. First, we need to rotate them to keep them from growing too large. Second, we need to move them from the insecure system (otherwise why would you audit it?) to a safe place.
Rotating audit trails is simple: run the “audit -n” command to terminate (close) the existing audit trail and continue auditing to a new file. So the simplest way to rotate daily is to add the following line to the root crontab:
## Rotate the Audit Logs Nightly at Midnight.
0 0 * * * /usr/sbin/audit -n
So now you’re terminating audit trails every day, but you still need to get them off the local system. Some old documentation suggests NFS-mounting /var/audit… I’m not a fan of that idea. Instead, I’d recommend creating a script which runs the “audit -n” command above and then uses sftp or scp or the like to move the terminated trails to a centralized archive location. You might even want to compress them prior to sending, but the idea is simple enough.
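Such a script might look roughly like this. The archive host, destination path, and the choice of scp are all assumptions to adapt to your environment; the filename test keys on the “not_terminated” marker (underscore) that Solaris puts in the active trail’s name:

```shell
#!/bin/sh
# Rotate the audit trail, then compress and ship every terminated trail
# off-box. audit-vault.example.com and its path are placeholders.
AUDIT_DIR=/var/audit
ARCHIVE=audituser@audit-vault.example.com:/archive/$(hostname)

# Terminated trails are named "starttime.endtime.hostname"; the live one
# contains "not_terminated" instead of an end time, so skip it.
is_terminated() {
    case "$1" in
        *not_terminated*) return 1 ;;
        *)                return 0 ;;
    esac
}

if [ -x /usr/sbin/audit ]; then       # only do real work on a Solaris box
    /usr/sbin/audit -n                # close current trail, start a new one
    for trail in "$AUDIT_DIR"/*; do
        is_terminated "$trail" || continue
        gzip "$trail"
        scp "$trail.gz" "$ARCHIVE/" && rm "$trail.gz"
    done
fi
```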
One other approach would be to rotate the audit trail, immediately convert it to XML/HTML/text or whatever, and then move that… but in my experience the raw audit files are much smaller than any report you produce from them, so compressing and storing them raw is probably the best policy.
Please note that the frequency at which you rotate and archive your audit trails depends on the sensitivity of the system. A smart attacker will notice that BSM is enabled and proceed to both disable it and destroy the audit trails. Therefore, in a highly sensitive environment you might archive as frequently as every 5 minutes! How often you archive is up to you and your environment. Every hour? Every day? Every week? It all depends, but I encourage you to spend a couple of minutes thinking about it.
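For the paranoid end of that spectrum, note that Solaris cron has no “*/5” step syntax, so a five-minute archive schedule means enumerating the minutes (the script path here is a hypothetical stand-in for your own archive script):

```
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /usr/local/bin/archive_audit.sh
```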
Okay, so now you’re rotating nightly and thinking about how to centrally archive the audit files. Now what?
Reporting Part 1: The Boring Basics
Here’s what you’ve always been told: use the auditreduce command to process the audit trails and then pipe the output to “praudit” to display it. Boring. Let me clarify this a bit.
praudit can read audit trails and produce ASCII text or XML output. You do not need auditreduce in order to use praudit. The most common invocation is “praudit -ls”, which outputs one ASCII audit record per line. It’s ugly and huge, but it gets the job done. At that point you might use some script to parse the text file, but I discourage that (we’ll see why shortly). Output to ASCII only for debugging, nothing else.
Audit files get big, so the auditreduce command acts as a sort of “grep” for audit trails. It reads the raw audit trail and, based on the arguments, creates a new raw audit trail containing only what you want. For instance, if I only want to see login/logout records, I could do the following:
# auditreduce -c lo /var/audit/someaudittrail > new-lo-audittrail
So, in this way, we might produce several smaller raw audit trails from the big master one. There are lots of other handy options. For instance, each audit record contains a “SID”, or Session ID. A session starts with login, ends with logout, and includes everything in between. So if we found a disturbing command execution, we’d probably want to see everything done during that same session; we could use auditreduce -s 12312312 /var/audit/someaudittrail | praudit -ls to see the entire session. Very handy indeed.
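A few more recipes in the same spirit, using selection options I find myself reaching for most; treat these as sketches and double-check the exact flags against the man page on your release:

```
# Everything audit-user benr did on or after Oct 15th:
# auditreduce -u benr -a 20091015 /var/audit/* | praudit -ls
# Only the execve(2) events, selected by event name:
# auditreduce -m AUE_EXECVE /var/audit/* | praudit -ls
# Login/logout activity within a date range:
# auditreduce -c lo -a 20091015 -b 20091016 /var/audit/* | praudit -ls
```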
I highly recommend you take the time to look through the various search options offered by auditreduce(1M).
Okay, all of this you have probably heard before, so let’s move on to some things you probably haven’t seen.
Reporting Part 2: XSLT
XML makes storing data easier for programs, but it’s only marginally useful for humans. The way we transform an XML document into something more palatable is by creating an XSL stylesheet. Using an XSL Transform (XSLT) engine such as xsltproc, we can turn XML into HTML, plain text, or XSL-FO, which in turn can be converted to print formats like PDF.
Okay, so why the XML review? “praudit -x” will output audit trails as an XML document. Look at the header of that document:
root@quadra ~$ praudit -x /var/audit/20091015192239.20091015200550.quadra | head
<?xml version='1.0' encoding='UTF-8' ?>
<?xml-stylesheet type='text/xsl' href='file:///usr/share/lib/xml/style/adt_record.xsl.1' ?>
<!DOCTYPE audit PUBLIC '-//Sun Microsystems, Inc.//DTD Audit V1//EN' 'file:///usr/share/lib/xml/dtd/adt_record.dtd.1'>
<file iso8601="2009-10-15 12:22:39.506 -07:00">/var/audit/20091015081437.20091015192239.quadra</file>
<record version="2" event="execve(2)" host="quadra" iso8601="2009-10-15 12:22:39.502 -07:00">
<attribute mode="100555" uid="root" gid="bin" fsid="128" nodeid="875" device="0"/>
Do you notice the “xml-stylesheet” tag? Solaris ships not only with a proper XML DTD but also with an XSL stylesheet for translating the audit XML into HTML! Here is how you use it:
root@quadra ~$ praudit -x /var/audit/20091015192239.20091015200550.quadra > myAudit.xml
root@quadra ~$ xsltproc file:///usr/share/lib/xml/style/adt_record.xsl.1 myAudit.xml > myAudit.html
root@quadra ~$ head myAudit.html
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Audit Trail Data</title>
<body bgcolor="#FFFFFF" text="#000000">
<font face="Arial" size="+1"><b>Audit Trail Data</b></font><br />
Using this method we could script a cron job to produce a daily report in human-readable format. Furthermore, Firefox and most other browsers can do XSLT transformations natively, so if you are using a browser on a Solaris system (so that the XSL and DTD are local) you can simply open the XML in your browser and see it rendered as pretty HTML!
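A sketch of such a cron job; the report directory is an assumption, and the Solaris-only tools are guarded so the script can be read (and harmlessly run) anywhere:

```shell
#!/bin/sh
# Render every terminated audit trail to a single daily HTML report.
# /var/reports is a made-up destination; adjust to taste.
STYLE=/usr/share/lib/xml/style/adt_record.xsl.1
REPORT=/var/reports/audit-$(date +%Y%m%d).html

if [ -x /usr/sbin/praudit ] && command -v xsltproc >/dev/null 2>&1; then
    for trail in /var/audit/*; do
        case "$trail" in *not_terminated*) continue ;; esac
        /usr/sbin/praudit -x "$trail" > /tmp/audit.$$.xml
        xsltproc "$STYLE" /tmp/audit.$$.xml >> "$REPORT"
        rm -f /tmp/audit.$$.xml
    done
fi
```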
There are 2 important takeaways here. First, creating useful HTML reports from audit data is really easy; don’t bother parsing the “praudit -ls” ASCII output. Second, and more importantly, you can spend a little time learning XSLT to create your own custom reports!
For example, I really wanted to see the audit report in a single table instead of bulleted lists, so I did just that. It took me about 30 minutes of reading and tinkering to get the basics down, but it was much easier than I expected. Just copy the Solaris-provided XSL and start tweaking it. Please feel free to download and try out my modified XSL: benr_record.xsl. Note that it is intended for “lo”-reduced XML files and is far from perfect; it’s for learning purposes only!
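To give a feel for the kind of change involved: the XML attributes shown earlier (event, iso8601, host) map straight into table cells. A stripped-down template that turns each record element into a table row might look roughly like this (a sketch, not a drop-in replacement for the full stylesheet):

```
<xsl:template match="record">
  <tr>
    <td><xsl:value-of select="@iso8601"/></td>
    <td><xsl:value-of select="@event"/></td>
    <td><xsl:value-of select="@host"/></td>
  </tr>
</xsl:template>
```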
Hack it up and do some fun things. Put the data in the most useful form for your organization, add your logo to the output, etc. If you are feeling really hardcore you can download XSLT design tools such as Altova StyleVision, but personally I found it easier to learn XSLT itself than to use the design tools.
Reporting Part 3: XML & PERL
XSLT is great, but there are limits to what it can do. If you want to create really comprehensive reports you’ll need to parse the XML yourself. The advantage of doing so is that you can loop over the data multiple times to add roll-up statistics, such as a summary of sessions, number of executions, average executions per session, etc. You might be able to approximate this with repeated auditreduce runs, but that’s far more processor-intensive and wasteful.
While you could use any language, being a SysAdmin, I feel most at home with PERL. Thankfully the XML::Simple module is included with Solaris, so going this route means you don’t need to install anything new or potentially unsupported.
So with the power of BSM and PERL’s XML::Simple at my fingertips, I decided to create a tool that could print audit trails in a really pretty and friendly way, and bsm_report is the result. Just look at how beautiful this is:
root@quadra bsm$ ./bsm_report.pl
The Incredable Human Friendly BSM Audit Dumper email@example.com
USAGE: ./bsm_report.pl [-d] ( [-c ] -a ) | (-x /path/to/reduced.xml)
root@quadra bsm$ ./bsm_report.pl -a /var/audit/20091016225822.20091016225943.quadra
Reducing /var/audit/20091016225822.20091016225943.quadra ....
Processing /tmp/.audit-tmp.xml ....
C U D D L E T E C H A U D I T D U M P E R
Audit Begins: 2009-10-16 15:58:22.316 -07:00
Audit Ends: 2009-10-16 15:59:43.587 -07:00
login - ssh (failure) by benr as benr REMOTELY from lappy in zone global (3623559241)
login - ssh (success) by benr as benr REMOTELY from lappy in zone global (3415402787)
execve(2) (success) by benr as benr REMOTELY from lappy in zone global (3415402787) : /bin/cat -s /etc/motd
execve(2) (success) by benr as benr REMOTELY from lappy in zone global (3415402787) : /bin/mail -E
execve(2) (success) by benr as benr REMOTELY from lappy in zone global (3415402787) : cat /etc/shadow
execve(2) (success) by benr as benr REMOTELY from lappy in zone global (3415402787) : cat /etc/passwd
su (failure) by benr as root REMOTELY from lappy in zone global (3415402787)
su (failure) by benr as root REMOTELY from lappy in zone global (3415402787)
su (success) by benr as root REMOTELY from lappy in zone global (3415402787)
execve(2) (success) by benr as root REMOTELY from lappy in zone global (3415402787) : cat /etc/shadow
su logout (success) by benr as root REMOTELY from lappy in zone global (3415402787)
logout (success) by benr as benr REMOTELY from lappy in zone global (3415402787)
I have a couple more improvements to make to it and then you’ll see it get its own page on cuddletech. I hope you can see the advantage of this. While I think bsm_report will be useful for a lot of people, more importantly it provides a useful example from which you can build your own tools.
Perhaps the best way to interact with audit trails is within a real database. Using this same method in PERL you could easily create a tool to pump the audit trail data into MySQL, PostgreSQL, Oracle, or, my favorite, SQLite. Imagine a centralized database for audit data and a PERL script on each node which, from cron, runs every so often to rotate the audit trails, convert to XML, and then read all that data into a centralized database. Nifty goodness.
Reporting Part 4: Existing Software
I noted earlier that BSM seems “hard” because of its DIY nature. While I’m sure hundreds or thousands of Solaris environments have great auditing infrastructures, almost all of those are custom, and folks aren’t sharing their tools, probably because they don’t think anyone would care. I’m trying to change that. But I don’t want to suggest that no other tools exist; there are 3 that I’m aware of:
BSMgui is a simple Java program which can read raw audit files and display them filtered by audit class. Start up the program, “open” an audit file, then click all the audit classes you want and execute a search. Nifty. It’s old, but by no means obsolete!
The BSM Analyzer is a PHP application which gives you a web-driven way to search and report on audit trails. It’s old too, but still valuable. If you (like me) are interested in web-searchable audit files, this is the solution for you, or at least a great example of how to implement one!
Finally, SNARE “from InterSect Alliance, is a proprietary Log Monitoring solution that builds on the open source Snare agents to provide a central audit event collection, analysis, reporting and archival system.” SNARE includes a Solaris Agent which integrates with BSM. I tried it on my Nevada box and had some minor issues but nothing serious. If you need a comprehensive end-to-end multi-platform auditing solution, have a look at it.
I’m certain there are more tools out there, namely in the form of plugins to suites like Tivoli, BMC Patrol, etc, but I won’t explore those here.
Solaris Auditing is extremely powerful, but audit logs are pointless unless you can generate useful reports and store the data in an accessible and intelligible way. I hope you have a new appreciation for the variety of ways in which you can create meaningful and useful reports.
If you’ve created your own in-house tools for BSM auditing, please consider sharing them. They may not be all that sexy, but there is a real need for these kinds of tools.
Furthermore, if you have found this post helpful, please let me know. If it’s popular enough I may convert it into a small book with much more depth.