USENIX 2005: Unix in Southern California (Day Three)

This is a report on the third and final day of USENIX 2005. You might want to start at my report on Day one and then move to Day two. Day three of USENIX 2005 has come and is now almost gone. When you get to a conference, it always seems like the days will stretch on forever. Then before you know it, everything is all over and you are on your way home. It’s kind of like summer camp, I guess.

One change in USENIX from last year is that the technical sessions
have been squeezed back into three days. In Boston last year, the
technical sessions and tutorials both ran over five days. You pay for
the conference by the day and many attendees (including me) just come
for three days. The result was that you had to pick which three days
you wanted to attend, knowing you would miss some of the technical
sessions. They also ended the conference at noon on Friday.

Happily, this year they are back to the three-day technical session
format. That also means the conference ends at 5:30 today instead of
noon. However, I can already feel things winding down. The
registration desk is closed and hotel employees have begun rearranging
the furniture. That, combined with a windy, gray day, is a bit
depressing.

Anyway, here’s my recap of today’s activities. First up for me was
the System Administration Guru Session at 9am. I recognized a number
of faces from the same session last year in Boston. We discussed
topics centered on system and configuration management. These
are some of the standard discussions whenever you get a bunch of
sysadmins in the same room. David Parter from the University of
Wisconsin led the discussion very competently. One topic I brought up
was the notion of passive versus active system management. Three of
the presentations I attended yesterday (including the one on Ourmon
and the one on NetState)
both focused on the idea of passively monitoring network packets to
determine what OSes and applications are running on a network. It
seems to me that the more active your network becomes, the more useful
passive detection is. I compare it to the world of submarines: active
monitoring (active sonar) is very good for determining what is out
there. However, it also has side effects, like announcing your
existence to everyone else. Passive sonar, on the other hand, is less
effective but much more stealthy. OK, maybe this isn’t a great
analogy, but it’s still fun.

The general consensus was that passive monitoring is valuable, but
active monitoring is more reliable (except for catching systems that
are rarely on the network, of course). Passive monitoring also has to
be supplemented with a statistical approach, since passive tools will
occasionally guess wrong. For example, if your fingerprinting tool
says a system is Windows XP 90 percent of the time but Mac OS X the
other 10 percent, which answer is right? Probably Windows XP.
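
To make that statistical idea concrete, here is a minimal sketch in Python (the host addresses, OS names, and observation list are all made up, and no particular fingerprinting tool is assumed) of the sort of majority-vote tally you might keep over repeated passive guesses:

    from collections import Counter

    # Hypothetical stream of (host, guessed_os) pairs emitted by a
    # passive fingerprinting tool watching traffic over time.
    observations = [
        ("10.0.0.5", "Windows XP"),
        ("10.0.0.5", "Windows XP"),
        ("10.0.0.5", "Mac OS X"),
        ("10.0.0.5", "Windows XP"),
        ("10.0.0.9", "Linux 2.6"),
    ]

    # Tally the guesses per host and report the most common one, along
    # with how often it was seen, as a rough confidence measure.
    guesses = {}
    for host, os_name in observations:
        guesses.setdefault(host, Counter())[os_name] += 1

    for host, counts in guesses.items():
        best, seen = counts.most_common(1)[0]
        total = sum(counts.values())
        print(f"{host}: {best} ({seen}/{total} observations)")

Real passive tools keep far more state than this, but the principle is the same: trust the answer that wins over time rather than any single packet.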

We also discussed disaster planning. I found that the attendees from
the academic world thought about this quite a lot. One reason for
this is they have to deal with state auditors who require this sort of
planning.

One weakness of the session, I felt, was that it was primarily
attended by sysadmins from the academic world. As I discussed in
yesterday’s report, this can be a problem at USENIX in general. To be
fair, there were several other session attendees from the commercial
world, and there may have been more who just didn’t say anything
during the session.

I also learned a couple of things about the tools people are using.
The consensus is that Request Tracker (RT) is the most common
ticketing system in use, by a wide margin. Also, everyone is using
RRDtool to collect system data, but the MRTG front end is not used
much anymore. Instead, people are using tools like Cricket.
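
For anyone who hasn’t used it, the underlying RRDtool workflow those front ends automate is pretty simple. Here is a minimal sketch in Python (the filename, data-source name, and sample value are just examples) that shells out to the rrdtool command line to create a round-robin database and push one sample into it:

    import subprocess

    # Create a round-robin database storing one GAUGE value ("load1")
    # every 300 seconds, keeping 288 averaged samples (one day at
    # five-minute resolution). The filename is arbitrary.
    subprocess.run([
        "rrdtool", "create", "load.rrd",
        "--step", "300",
        "DS:load1:GAUGE:600:0:U",
        "RRA:AVERAGE:0.5:1:288",
    ], check=True)

    # Push a single sample, timestamped "now" (N).
    subprocess.run(["rrdtool", "update", "load.rrd", "N:0.42"], check=True)

Front ends like MRTG and Cricket mostly handle the create/update/graph cycle for you and add configuration management on top.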

I learned one thing at the 10:30 coffee break: you can have anything
to eat that you like, as long as it is a miniature chocolate chip
muffin. I think there’s some sort of life lesson in that somewhere.

The highlight of the day (well, for me anyway) was a talk by Vernor
Vinge. Some of you may recognize that name as a famous science
fiction author. Turns out he’s also a mathematician and computer
science professor. His talk was entitled “Possible Futures for
Software,” and he outlined what he felt were four possible futures for
software development. Three of these were based on Moore’s law and
the possible outcomes of the exponential growth of computing
power. Would this growth continue indefinitely, plateau, or collapse
catastrophically?

The catastrophic collapse scenario is of course the most interesting
in many ways. Vinge pointed out that we depend heavily on embedded
microprocessors, and those devices are all very physically similar. One
obvious failure mode is the electromagnetic pulse of a high-altitude
nuclear blast. This would cause effects such as the immediate failure
of most modern automobiles due to the destruction of controlling
microprocessors.

His second scenario was called “legacy software forever” and was the
plateau situation. In this case, future civilizations could end up
with literally thousands of years of basically the same software as
we have now. Vinge has written about this in some of his work and
raises an interesting question: when will it be better to dig through
all that accumulated software versus reinventing the wheel?

Another good point was that if hardware reaches this plateau, certain
kinds of regularly-constructed devices could escape that fate for a
while and continue to expand. The result in the future could be a
laptop that is essentially the same as what we have now – but with a
memory card containing everything that has ever been written. That is
both intriguing and depressing at the same time.

The third possible future related to Moore’s law is that computing
power advancements continue forever. This veers completely into
science fiction because Vinge believes there will be some sort of
‘singularity’ where there is a complete break with the past. For
example, the internet itself could become self-aware. While this is a
fun speculation, it’s also so fantastical that I’m not sure you can
draw any useful conclusions from it.

Vinge’s final scenario is “Ubiquitous Law Enforcement”. In this
future, every microprocessor contains a government-owned section that
controls all I/O and any other function the government sees fit to
control. It would be illegal to use a system that did not have this
feature. Vinge noted that while this future is in many ways
terrifying, everyone finds something to like about it. It could
enable completely seamless tax collection as all transactions are
monitored. It would allow authors to reliably collect micropayments
for their work. The big caveat is that someone would have to design
this hardware, and it wouldn’t be the policy makers. It would be the
chip designers, and they could end up as the most powerful people in
the world. Vinge feels someone is guaranteed to attempt to implement
this future (and it is occurring now for things like Digital Rights
Management), but it is the least likely to succeed.

While there are more USENIX activities this afternoon, I don’t have
time to write about them because of scheduling conflicts. Thus, I
will wrap up my report now. First of all, USENIX is an amazing
experience. One key reason for this is that it is self-selecting. Nobody
is forced to go to this conference. People come because they want
to. That leads to a very high level of technical presentations and
discussions.

At the same time, USENIX has some problems that have to be addressed.
The big one is that there aren’t enough young computer professionals here.
The organization has to do a better job of recruiting people. This
should probably start at the college level. If someone goes to USENIX
in college, they are much more likely to ask their employer to send
them after they graduate.

The bottom line is that if you love Unix and computer science, you
should be going to USENIX. I plan on attending next year and I hope
to see some of you there as well. I hope you enjoyed my report and
please feel free to contact me if you have any questions.

About the Author:
Phil Hollenback is a System Administrator at a financial company in Manhattan. When he’s not taking care of computers, he
spends his time skateboarding and working on his website.


