One change in USENIX from last year is that the technical sessions have been squeezed back into three days. In Boston last year, the technical sessions and tutorials both ran over five days. You pay for the conference by the day, and many attendees (including me) just come for three days. The result was that you had to pick which three days you wanted to attend, knowing you would miss some of the technical sessions. They also ended the conference at noon on Friday.
Happily, this year they are back to the three-day technical session format. That also means the conference ends at 5:30 today instead of noon. However, I can already feel things winding down. The registration desk is closed and hotel employees have begun rearranging the furniture. That, combined with a windy, gray day, is a bit depressing.
Anyway, here's my recap of today's activities. First up for me was the System Administration Guru Session at 9am. I recognized a number of faces from the same session last year in Boston. We discussed topics centering around system and configuration management. These are some of the standard discussions whenever you get a bunch of sysadmins in the same room. David Parter from the University of Wisconsin led the discussion very competently. One topic I brought up was the notion of passive versus active system management. Three of the presentations I attended yesterday (including the ones on Ourmon and NetState) focused on the idea of passively monitoring network packets to determine what OSes and applications are running on a network. It seems to me that the more active your network becomes, the more useful passive detection is. I compare it to the world of submarines: active monitoring (active sonar) is very good for determining what is out there. However, it also has side effects, like announcing your existence to everyone else. Passive sonar, on the other hand, is less effective but much more stealthy. OK, maybe this isn't a great analogy, but it's still fun.
The general consensus was that passive monitoring is valuable, but active monitoring is more reliable (except for catching systems that are rarely on the network, of course). Passive monitoring also has to be supplemented with a statistical approach, since passive tools will occasionally guess wrong. For example, if your fingerprinting tool says a system is Windows XP 90 percent of the time and Mac OS X 10 percent of the time, which answer is right? Probably Windows XP.
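The statistical approach described above amounts to a majority vote over repeated observations. Here's a minimal sketch of the idea; the function name, data shape, and host address are all hypothetical, not from any particular fingerprinting tool:

```python
# Hypothetical sketch: tally passive OS-fingerprint guesses per host and
# report the most frequent answer for each one. Each observation is a
# (host, os_guess) pair, as a passive sniffer might emit over time.
from collections import Counter, defaultdict

def majority_os(observations):
    """Return the most frequently guessed OS for each observed host."""
    tallies = defaultdict(Counter)
    for host, os_guess in observations:
        tallies[host][os_guess] += 1
    # most_common(1) gives the single highest-count (os, count) pair
    return {host: counts.most_common(1)[0][0]
            for host, counts in tallies.items()}

# The 90/10 split from the example: nine XP guesses, one Mac OS X guess.
obs = [("10.0.0.5", "Windows XP")] * 9 + [("10.0.0.5", "Mac OS X")]
print(majority_os(obs))  # -> {'10.0.0.5': 'Windows XP'}
```

A real tool would also want to weight guesses by the fingerprinter's own confidence, but the plain tally already resolves the 90/10 case correctly.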
We also discussed disaster planning. I found that the attendees from the academic world thought about this quite a lot. One reason for this is they have to deal with state auditors who require this sort of planning.
One weakness of the session, I felt, was that it was primarily attended by sysadmins from the academic world. As I discussed in yesterday's report, this can be a problem at USENIX in general. To be fair, there were several other session attendees from the commercial world, and there may have been more that just didn't say anything during the session.
I also learned a couple of things about the tools people are using. The consensus is that Request Tracker (RT) is the most common ticketing system in use, by a wide margin. Also, everyone is using RRDtool to collect system data, but the MRTG front end is not used much anymore. Instead, people are using tools like Cricket.
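For readers who've only driven RRDtool through MRTG, it's worth noting that the tool works fine on its own. A minimal sketch of the command-line workflow, assuming a single made-up "load" metric sampled every five minutes:

```shell
# Create a round-robin database with one GAUGE data source ("load"),
# a 300-second step, a 600-second heartbeat, and one archive keeping
# 288 five-minute averages (one day of data).
rrdtool create load.rrd --step 300 \
    DS:load:GAUGE:600:0:U \
    RRA:AVERAGE:0.5:1:288

# Feed it a sample timestamped "now", then read back the averages.
rrdtool update load.rrd N:1.25
rrdtool fetch load.rrd AVERAGE --start -1h
```

Front ends like Cricket mostly automate generating these create/update calls from a config tree and graphing the results.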
I learned one thing at the 10:30 coffee break: you can have anything to eat that you like, as long as it is a miniature chocolate chip muffin. I think there's some sort of life lesson in that somewhere.
- "Usenix Day Three"