“Servers need what free desktop applications needed before KDE came: a common framework. A common way of configuration, a common way of being configured using GUI tools, a unified view to monitor them… there are also options that many servers have in common, like an IP to listen to or the allowed IP ranges.” Read the editorial here.
http://jfontain.free.fr/moodss/
Linux has its config files all over the place and in different formats.
It would be great if OSes (not just Linux) exposed a framework for installing and configuring all applications. The configs would be broken down into system-wide settings and per-user settings and would be transferable to new versions of the same software. Windows has its registry, but it’s too liberal: anybody can write whatever they want and even interfere with other applications. There needs to be read-only access for outside apps and read-write access for the app itself. And to make things consistent, the entries in such a system would have to fit into predefined pockets like…
Appname = Mozilla
Appversion = 1.50
Appvendor = Mozilla.org
Apptype = web browser
AppDependencies = (external apps that Mozilla requires to run)
yada, yada, yada
If we use such pockets then other apps can know what to reference when looking up external software. For instance, say a plugin requires Mozilla 1.4+. It could search for applications called Mozilla and look up the Appversion field to find the version.
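Purely as an illustration of the idea (the /etc/appregistry path and file layout below are made up for this example, not an existing standard):

# hypothetical lookup: is the installed Mozilla new enough for a 1.4+ plugin?
$ grep '^Appversion' /etc/appregistry/mozilla
Appversion = 1.50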
Linux also suffers from dependency hell. I’m not a Linux developer, but can somebody remind me why, in this day and age, we cannot statically link libraries so that I can download a single file? I know many apps share the same functionality, so sharing the same .so files makes sense, but we have huge HDDs these days. Windows has DLL hell for the same reason, but many apps on Windows install files like mfc42.dll in the application’s own subfolder so they don’t interfere with other apps. At the very least, Linux apps should come with their dependencies included in the main tarball, RPM or whatever. BeOS has a nice install system that just works, but because Linux is on so many platforms it would be difficult to imitate. I would love it if an OS-level installer could automatically run the configure/make/install steps on my behalf and install to the apps folder (where apps belong).
We *can* statically link, but the price you pay is extreme bloat and much too high memory consumption. Instead of loading a library only once, you would load several copies of it into memory at once. Before you go and say something like “But memory is cheap”, I would like to remind you that if you use more than two memory modules in most motherboards, performance suffers. It would also mean larger download sizes, which is bad for modem users, and much larger binaries, which is bad for people with small hard drives.
All in all it would ruin a lot and not really gain a whole lot, IMO. Personally, I would rather see something like autopackage become widely used.
A further problem would arise with software upgrades. Whenever a new version of a library becomes available, you would have to download and reinstall all the applications that statically linked to it – after you have waited for the vendor or author to recompile and relink the application for you.
The real problem with static linking is also security and updating. Suppose a bug/hole in OpenSSL (or any library, for that matter) is found. So you update OpenSSL and restart the services. With static linking, all the applications must be updated as well, and someone needs to rebuild them. Just a lot more work for virtually nothing.
The LSB is there to ensure binary compatibility (at least it will be, once it supports more libraries).
“There is absolutely no need for GUI configuration tools.”
Well, this says it all. Honestly, we can all stick our heads in the sand and relive the ’70s all over again, but there is NO excuse for not having a GUI interface for most things.
GUIs are by far easier to use than some bizarre CLI or config file. Why should we, as users with 1 GHz+ CPUs and 128+ MB of RAM, rummage through our file systems looking for and editing a file to do something?
XML is designed to be a nice format for data exchange, and even human readable, but to say it’s more intuitive than a GUI can be is beyond comprehension. You seriously need to step away from your terminal for a while!
Maybe a few developers could share the burden of designing a common XML config format and a wizard tool to build GUIs for it?
Thank you for your responses. Those are good points.
Part of using libraries, though, is certifying that they are reliable for the task at hand. Any company that doesn’t do this is not accountable to its customers, and that is regrettable. If I use OpenSSL, I’ll test that it works to the point where I can recommend its use in my company’s product.
From the article
The command to get the list of open ports is hard to find out
$ nmap 127.0.0.1
Is that hard ?
The biggest security advantage of Linux, besides being very transparent and based on Unix, is that you have first-class security tools like nmap. Check this well-known list of the most useful security tools. It’s impressive to see how many of them are free software.
http://www.insecure.org/tools.html
You could also have several ‘levels’ for network security. E.g. at home you may want to have your personal webserver running for family members, so you set it to ‘medium’. […]
This kind of vagueness, perhaps inspired by the security levels in Internet Explorer, is really what I don’t want.
By the way, since nobody really understands what these levels mean, not even Microsoft’s developers, there are a lot of exploits that allow you to bypass them.
Just check http://perso.wanadoo.fr/gnumdk/Unpatched_IE/
That said, a common framework is a good idea, but what exactly is needed is not well defined in this article.
“There is absolutely no need for GUI configuration tools.”
Well, this says it all. Honestly, we can all stick our heads in the sand and relive the ’70s all over again, but there is NO excuse for not having a GUI interface for most things.
GUIs are by far easier to use than some bizarre CLI or config file. Why should we, as users with 1 GHz+ CPUs and 128+ MB of RAM, rummage through our file systems looking for and editing a file to do something?
Bla bla bla
If you had seriously used both the CLI and the GUI, you would know that both suck, each in a very different way, and that for this very reason they are very complementary.
The CLI is not outdated just because it appeared first.
re: Several flaws in the article
That will list ONLY the ones listening on the loopback device.
Some will set up their services to listen only where it’s needed, and many/all services can safely listen on the loopback without fearing remote intruders.
Try
netstat -laptn
Sure, but I didn’t know what your IP address at —.bb.online.no is, so I took the example of the loopback device, which always works 😉
The command to get the list of open ports is hard to find out
$ nmap 127.0.0.1
There are several problems with that:
1. It does not help you to find out which process uses which port, or how to configure them
2. You are assuming that all services listen on all interfaces. This is wrong. The correct way of using nmap is to find out all active interface addresses (using ifconfig) and then run nmap against each of them (a rough sketch follows this list)
3. How do you find out that nmap is the right command? One of the advantages of GUIs is that you can find out which functionality is available by browsing the menus. For the command line there’s nothing like that; man pages only help when you already know the command name. It is *much* more difficult to find out that ‘nmap’ will help you than to find the function in a GUI
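For point 2, a rough sketch of scanning every locally configured IPv4 address rather than just loopback (ifconfig output formats differ between systems, so the address extraction is only illustrative):

# collect the inet addresses ifconfig reports and scan each one
$ for addr in $(ifconfig | awk '/inet / {print $2}' | sed 's/addr://'); do
>   nmap "$addr"
> done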
e.g. A GUI configuration tool for Apache would be cumbersome and bulky; it would struggle to offer the flexibility of the simple XML file that is currently used and, frankly, is very intuitive if you actually know anything about Apache
There are two or three different groups of users who configure servers. There are people who do this (almost) every day, and for them having a GUI doesn’t matter at all. I assume that you are in this group.
Then there are people who know Apache and its concepts quite well, but only need to touch an Apache configuration file maybe twice a year. I am in this group, and I hate that every time I need to make a trivial change I find myself searching for the syntax or option name in the manuals. It just costs me time that I would not have to spend if the option were right in front of me in the form of a checkbox that I just have to click. If I had an alternative web server that offered a well-integrated GUI, I would dump Apache. (And yes, there are probably configuration tools that I can install, but if they do not work right out of the box on my distribution, I would spend so much time installing them that I wouldn’t save time anymore.)
1. It does not help you to find out which process uses which port, or how to configure them
You’re right. If you find something you didn’t expect, you have to take a look at the list of daemons running.
2. You are assuming that all services listen on all interfaces. This is wrong. The correct way of using nmap is to find out all active interface addresses (using ifconfig) and then run nmap against each of them
Same as my post above. Thanks for clarifying that.
3. How do you find out that nmap is the right command?
You take a look at my list above.
One of the advantages of GUIs is that you can find out which functionality is available by browsing the menus.
Right, the CLI intuitiveness sucks
You have
$ apropos subject
but it is not perfect.
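For example, something like the following is hit or miss, since it only matches whatever words happen to appear in a command’s man page description:

$ apropos scan | grep -i port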
Don’t get me wrong: bash tends to suck for a lot of things, but it has its advantages.
It is still lighter, more flexible and more powerful than the Mac OS X Finder. The CLI can also be more ergonomic (intuitive != ergonomic; vim is a good example). When something doesn’t work the way it should, the CLI also tends to be informative. OpenSSH is the best admin tool ever.
That’s what I said: both the GUI and the CLI suck, each in a different way. I’m happy to know both, and to use the right tool for what I need.
You’re right. If you find something you didn’t expect, you have to take a look at the list of daemons running.
This assumes that you have no servers running on unusual or dynamically allocated ports, and that you know all services and their daemons. Neither is necessarily the case.
3. How do you find out that nmap is the right command?
You take a look at my list above.
How do I find your list? Is it displayed when I open the console?
You can’t assume that people will find it because it is written down somewhere, it must be on their screen or there must be an easy and logical way to get it on the screen.
Right, the CLI intuitiveness sucks
CLI is great if you know which commands are available and which options you have. If you do not know the name of the command, a CLI is completely useless. And, unlike GUIs, there is no way to find out using trial & error (at least, guessing command names would take a long time – who would guess that you have to type ‘ifconfig’ to see the network configuration?).
KDE is the only unified framework? Oh come on, give me a break! I thought Eugenia liked GNOME.
NO excuse for not having a GUI interface for most things
I have to agree with Charlie (the 1st post). While I don’t have a problem with people using GUI interfaces, their use is typically due to a lack of understanding. When there is a real problem on a server, who do you want solving it? Someone who knows and understands the layout of the filesystem and configuration files, OR someone who primarily uses a GUI tool? There will always be a case where the GUI does not allow the type of change necessary, and you will have to go to the command line and manually edit the file (GASP – in vi!!). To someone used to GUI interfaces this will be a strange and scary experience, not a comfortable one. Like anything, the more comfortable you are, the better equipped you are to deal with an emergency.
I guess this is a long-winded way of restating:
What this editorial really says is, “I want to be able to configure my servers without really knowing what I’m doing.”
Because anyone familiar with the command line would never request a GUI! Neither would we need to rummage through our file systems looking for and editing a file to do something; we already know where everything resides ;).
By the way, I am assuming you are or were a Windows administrator, given your comment: “stick our heads in the sand and relive the ’70s all over”
While individual GUI tools may help occasionally, to state we need a global GUI config tool is to say we want to convolute the existing structure to make it easier for ANYONE to run. That sounds an awful lot like a Windows system to me…
How do I find your list? Is it displayed when I open the console?
That’s what the first poster meant: GUI or CLI tools aside, if you administer your own server that holds sensitive data, you must make some effort yourself to stay informed. You can’t trust Microsoft, Apple, Red Hat, or whoever to have done a perfect job by default.
This link, http://www.insecure.org/tools.html , is one I gently suggest keeping in your bookmarks. It is not specifically about console tools. I contribute to Ethereal (only #2; for the 1.0 version we have to beat Nessus ;-) ), and it has a pretty GUI.
Otherwise, I think we mostly agree.
http://www.webmin.com
The real problem with all this is that it is very easy to design a GUI app but very hard to design a *good* GUI app. When that GUI app has to address all possible security and configuration concerns of a major Linux or Unix server, this task will become nearly impossible. If we want one (1) GUI application to somehow magically tie together all these possibilities and requirements, we are going to need an amazing team of developers to work together long and hard. I believe it will take longer to do the GUI than it took to develop the services that this GUI is supposed to control ;-).
Not that I wouldn’t like a GUI of this sort, but I am just imagining all the possible ways to configure things that I have seen, and trying to imagine how a GUI would handle all that without being the most confusing, bloated, complex piece of software ever seen…
After all these years, Microsoft has not managed to do it for their server, and they are dealing with a much more limited range of services and applications than a Unix server.
Apple has not managed to do it for the Mac XServe, even though they are the “king of the GUI”. (Please, I have had the job of configuring XServes, and it is even more pain than configuring regular FreeBSD or Linux).
What it all really comes back to is the knowledge of the user. Even if we have a wonderful do-everything GUI, it will take a *lot* of work to make it organized, and then it will take a lot of work for users to learn how to use it, and what each configuration option means. In the end, a lot of the “intuitiveness” everyone hopes for will be swallowed up in the need for serious understanding of the system.
I can understand why being able to “feel things out” would be nice for an end-user, but there is no way I want a sysadmin to try to configure his servers by “trial and error.” If you don’t know what you’re doing, then please RTFM, because a bad admin is a danger to the entire internet.
KDE is the only unified framework?
No, but it was the first free framework for desktop apps.
The main advantage of a GUI over a CLI is that a CLI is language specific, and that language is English. A GUI can be tailored for other languages much, much easier than a CLI, plus it is also easier for usability features to be incorporated for disabled users.
…there is no way I want a sysadmin to try to configure his servers by “trial and error.” If you don’t know what you’re doing, then please RTFM, because a bad admin is a danger to the entire internet.
Is an admin a bad admin because he can’t remember whether the option was called ‘SymLinksIfOwnerMatch’ or ‘SymLinksIfOwnerMatches’?
I don’t want to convince anybody to use a GUI if they’re happy with text files, but there is a large group of people who can’t remember the syntax of the configuration files because they so rarely need to touch them. And a GUI saves them a lot of time.
Yes. Also how insecure it is. As the author of the article mentioned, there are security problems with servers. You (Tactless (IP: 81.218.182.—)) are proposing to add one more. Take a look at webmin’s security record.
The assumption of the article is plainly wrong: server tools for everybody. I bet that in a short time there are going to be a lot of open mail relays (the idiot admin did not know how to manage the server – that is already a problem), broken Apache, broken DNS (it is hard enough to keep up with all the updates), and so on and so forth. In other words, it is going to be as bad as with MS. Take a look at Exchange or IIS. And KDE? He must be joking, with such a memory-hungry GUI. Something like twm would do. And a unified framework – does DCOM ring a bell? I am not against GUIs, but the author’s argument is naive.
“By the way, I am assuming you are or were a Windows administrator, given your comment: stick our heads in the sand and relive the ’70s all over”
Now that made me laugh. I’m a Linux user and normally the one bashing M$ for all their evil deeds :0)
There is no need to use a GUI if you’re happy editing files etc. All I’m saying is that if you force everyone to use CLI tools and edit config files in multiple formats, *NIX/BSD will simply become painful to use when someone else (read Apple or M$) comes out with something better.
As for not being able to handle the complexity of it all, then just put in the common options; you can always go edit some files if you’re knowledgeable enough.
Wow, I didn’t rant about hippies, bearded Unix admins, Klingons, Bill Gates, evil ninja monkeys or Kimba the White Lion once *sigh* – see what happens when you get old.
It’s ‘SymLinksIfOwnerMatch’ – it took me all of 10 seconds to look it up in Google. I don’t even know what it is, but I know what the correct spelling is ;-P
Yes, the admin who doesn’t know what an option is because he is too lazy to google it, RTFM, keep notes, or ask a senior is bad. Likewise, programmers who do the same.
- There are already GUI tools. (Pick your favorite from a list of text editors and open up a config file.)
- Config files are generally easy to figure out if you can simply grasp the concept of tokens, delimiters and terminators.
- Config files are generally easy to find: /etc or /home/username
- There is already a monitoring system in place. It’s called procfs.
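For instance, a few raw reads from procfs (exact files and fields vary a little between kernel versions, so treat this as a sketch):

$ cat /proc/loadavg
$ grep MemFree /proc/meminfo
$ cat /proc/net/tcp        # raw socket table; netstat is largely a front end to this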
After the big fat GUI is created, the next thing is the Security for Dummies book.
Security is too important to have someone not knowing what they are doing flipping GUI switches. For that matter, adding command line switches.
How do you find the right command for the job? GUIs to some extent have the same naming problem. How many people think web browser when they hear the word Opera? What is Windows Explorer anyway? Wouldn’t it have been better to call it Folder Explorer or File Explorer? Research is the only way to find the right tool for the job.
If a person is going to use a GUI, then they should also learn to use the command line and be able to read the config files.
I am not a big fan of man pages, but at the same time I’ve noticed that some GUI front ends are also very confusing and less than helpful. Personally I tend to write down commands on flash cards that I have on my desk, but it would be nice if the man pages contained more examples. My vote goes for improved documentation with more examples as well as a standard set of GUI configuration tools for each distribution, but not standard across distributions.
A management system for Linux enterprise servers is interesting, but the danger is that it could remove too much of the control from the power user. So it depends on the type of user – the view or schema perspective into the system. It’s not a bad idea, but the solution has to respect each type of user; if you can do that, then the project can move past the inception phase.
> $ nmap 127.0.0.1
> Is that hard ?
You can bind servers to particular addresses, so doing this will be misleading.
I agree with Tim that having a framework for server apps would be a good thing. Where I disagree with him is in whether this should be something KDE itself is concerned with. Personally I think this is a job better handled by a separate library, with KDE’s involvement (if any) being in developing tools that use this library.
Yeah, GUIs suck. I mean, who needs to use the GUI of OSNews to post comments? Who needs to use Mozilla to browse the web page? Who needs KDE, or whatever, to draw the Mozilla browser?
Unless you are posting these kinds of “GUIs suck” comments from Lynx or another text-based browser, you have absolutely no credibility at all. How many of you have set up and changed your account settings on Slashdot? If you have, that is a GUI – why didn’t you just use Lynx and an HTTP POST to change them, or better yet go into the database and change them directly? I will tell you why: because the GUI is useful, and in most cases it is quicker, and you don’t have to know every “in” and “out” of the system to use it.
My 2 cents,
Nick
——————–
The computer was conceived as a tool to reduce complexity. Some people found this loss of complexity unacceptable, and developed UNIX to reintroduce it.
——————–
The computer was conceived as a tool to reduce complexity. Some people found this loss of complexity unacceptable, and developed UNIX to reintroduce it.
Your statement is inaccurate.
Unix was developed as a much simpler replacement for Multics. As Ritchie said: the Unix design is very simple, but it requires a genius to understand it.
I think what would be neat is to have a common configuration system which could automatically generate GUIs. This may be a bit off, but Wind River has a package for network management. I forget what it’s called… WindManage?
Anyway, the basic idea is you create an XML file, run it through their system, and out the other end you get all the code needed to generate a CLI and web-based configuration system. Correct me if I’m wrong on this; we’re using the Allegro toolkit right now, and switching to WindManage soon… just haven’t gotten to it yet.
Perhaps a similar concept could be used for configuration within the OS. Define options, ways of exposing them and then ‘compile’ it.
That way everyone is happy all the time. GUI people, CLI people, and config file people, and it doesn’t hurt the programmer.
Yamin
WHEN ARE YOU PEOPLE GOING TO GET IT THROUGH YOUR HEADS THAT THERE ARE PEOPLE OUT HERE WHO FIND IT VERY DIFFICULT TO USE PICTURES???????
I’m really starting to get disgusted with this fight. If you haven’t figured out yet that there are very good reasons to use GUIs for some things, and to converse with the computer for other things, and that there is convergence between the two for still other things, then don’t bother trying to administer any computer systems, because you are not qualified.
“Yeah, GUIs suck. I mean, who needs to use the GUI of OSNews to post comments? Who needs to use Mozilla to browse the web page? Who needs KDE, or whatever, to draw the Mozilla browser?”
OK, folks, I’m going to try again.
I am BLIND (almost.) Why is it that I am denied access to Internet services if I choose to use Lynx? Why do I have to use a GUI?
I haven’t posted to OSNEWS lately with Lynx, because I *can* see a little bit, and keyboard support on some browsers is fairly good; however, there are times when browser makers forget about us blinks (their support is inconsistent), and if I don’t happen to have a supportive browser, I prefer to use Lynx.
I plan to try OSNEWS with Lynx this weekend, if my personal machine is working again. If I can’t use it, I will let you know.
Again… computers are eminently usable for dialog. I can *not* understand why people who rely almost entirely on dialog should be denied the Internet, or made to spend outrageous sums of their own (or government) money for some software/hardware combo that does half the job.
I hope osnews.com worked for you.
I have tried to visit this site with Links and it works very well, as the little icon at the bottom of the page says (but you cannot see it).
Having spent a couple of years now in the Linux world, I have come to appreciate console-based administration a great deal. For one, if you use a system regularly and DO know where everything is, a console is objectively faster than a GUI. For another, remote administration is identical to local administration, and the only thing you need to make it so is SSH. For another, you don’t have to run an X server plus window manager on your web/file/etc. server, which must be admitted by all to be yet another security concern as well as a resource sink. For another, you can administer your server without having to succeed in getting X running, which on many machines can be a pain. Something like webmin may be a compromise, but I’ve found myself using webmin only until I know what I want to do. Then I just examine what webmin did and shut it off.
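On the remote-administration point, for example (the hostname and command here are only placeholders):

# the same command works locally or over an SSH session
$ ssh admin@fileserver.example.com 'netstat -tlnp'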
One of the things I appreciate about the *nix world is that you DO have to know something about what you are doing. I just assume that now, and am a much more capable admin as a result. GUIs slow down development, hide functionality, and are basically just a lot more expensive on every level. Why not spend all that effort on learning the program you want to admin? It’s like spending all the time to learn Dreamweaver so that you don’t have to learn HTML or CSS. Now you’re tied to the development of a GUI tool, and can’t take advantage of the underlying code because you don’t know how it works. I’m all for GUIs on the desktop, but I’ll take my servers with CLIs.
Many *nix configuration files are in fact specialized programming languages (some of them even with Turing-machine power). Think about that for a while. You can’t embrace all the infinite flexibility of a programming language in a GUI; it simply doesn’t have enough expressive power (unless you create a GUI text editor for the language 😉 ).
But some [dumb] people can’t understand that. They only know they are unable to edit the config file and shout “I want something simple! I want a GUI!” Well, for simple tasks, simple configuration wizards can be created. But once you need macros, conditions, inclusion, etc., you definitely want to edit the config file directly.
Many *nix configuration files are in fact specialized programming languages
And then it is a good idea that each server invents its own?
You can’t embrace all the infinite flexibility of a programming language in a GUI
I don’t know any server configuration format that has the expressive power you are talking about, but if it were true, it would be an even stronger case for a framework that allows the server to export configuration APIs and takes care of running the scripting language.
Funny you should mention that. I often post through links (doing it right now, in fact) because I am working through an ssh session or when I’m on a rescue CD. There are advantages to the GUI version (mainly higher res and pictures) but I find navigating the text-based system much faster than using a GUI. There are some things that a CLI is just more efficient for:
1) Developing (compiling, etc).
2) File management.
3) Configuration, etc.
4) Starting programs (much faster to hot-key run-command and have it auto-complete after a few characters than to take your hands off the keyboard and mouse through a bunch of menus).
If you do any of the above on a regular basis, you owe it to yourself (and your employer) to make yourself more productive by knowing when to use the CLI.
Anyway, nobody is saying that the GUI is useless. The type of stuff you’re mentioning (posting on OSNews, etc.) are very directed, limited-complexity tasks best handled through a GUI. However, when the task is highly complex, a GUI for it becomes extremely complex as well. Something to completely handle configuration files for server programs would have a UI on the same order of complexity as Maya or Softimage. A command-line interface goes a long way toward dealing with that complexity. That’s why programs like Mathematica or Matlab are still primarily command-line oriented. Even graphical apps like AutoCAD (which need a GUI for obvious reasons) have tight integration with a form of command line.
Okay, I think GUI configuration tools are helpful if done RIGHT. As long as there is a help menu that tells you which file the GUI writes to, and the GUI is well designed, I have no problems with them. I tend to edit my own config files using a text editor, but that’s me and it doesn’t work for everyone. Sorry to put out the flame.
“I hope osnews.com worked for you.
I have tried to visit this site with links and it works very well, as the little icon at the bottom of the page says (but you cannot see it)”
Actually, I can see it, but I wouldn’t have noticed it there until you pointed it out to me. My inclination is to put as much energy into the content as I can.
This is also why I find huge Web ads insulting and annoying. They try to steer people away from the valuable content by forcing their attention onto the ads. Real Networks is one of the worst. They supposedly have a free version of RealAudio, but it takes a half hour of wading through their ads to find the link to it. This is why I have a strong tendency to *not* use their player; if they have to be so damn mercenary about it, why should I? And how can I trust them when the going gets tough?
Anyway, thank you again. I’m going to try Links too, and see what it’s like to use. Maybe it has a bit better of a tool for handling large input like this.
The command to get the list of open ports is hard to find out
$ nmap 127.0.0.1
Is that hard ?
Yes, I’m only one in a slew of people to reply to this, but there are *much* better ways of listing all active, listening services than this.
On BSD operating systems, there is the ‘sockstat’ command. This will list the pid, process name, owner, protocol (TCP/UDP), and what they’re listening on (i.e. the location of the Unix domain socket or the address/port they are bound to) or where they’re connected to. This is the ideal command for this purpose, but unfortunately it’s BSD-specific.
On non-BSD systems, there’s ‘lsof’ which will provide a dump of all file descriptors across all processes that you own, or if run as root, across the whole system. You can then grep the output for LISTEN to obtain a list of all listener sockets, the process name, PID, owner, protocol (TCP/UDP), and what address/port they are bound to.
Unfortunately, lsof is not a standard facility on most systems, but it has been ported to many platforms. On Solaris, for example, it can be easily installed with pkg-get.
These are the sort of tools that should be used to analyze what the active listeners on your system are.
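Roughly, for the two cases above (flags differ a little between versions, so treat these as illustrations):

# FreeBSD: list processes listening on IPv4 sockets
$ sockstat -4 -l

# elsewhere: dump open network descriptors and keep only the listeners
$ lsof -i -n -P | grep LISTEN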
Essential and security critical packages like OpenSSH, LSH and OpenSSL had exploits in the last weeks and this should have convinced the last conservatives that it is not possible to write a complex server in C without having a remote exploit per year
Again, I must cite qmail. It really isn’t that hard to write a secure server in C… I’d say the real problems are in some very faulty design paradigms that the world has attempted to push upon programmers which lead to insecurities.
In regards to the OpenSSH vulnerability, this was determined to result in a denial of service at worst. There is no way of compromising a system through this vulnerability.
In regards to the OpenSSL vulnerability, I blame it on ASN.1, the use of which requires highly complex input validation. ASN.1 is touted as a cure for these sorts of ills, with the notion that ASN.1 parsers can be shared and reused across multiple platforms. Unfortunately, there is little/no code reuse among ASN.1 parsers, with everyone implementing their own.
Code reuse always comes at a price. New code must always be written to integrate two older pieces of code, often performing non-trivial conversions which can lead to new input validation vulnerabilities.
There is no silver bullet for eliminating input validation vulnerabilities. Transitioning to a runtime environment merely offloads a portion of the input validation onto the runtime environment; it does not eliminate the programming errors themselves.
The solution, in my mind, has always been to centralize all input validation into a single portion of the code that can be heavily audited. Provide assumption-free interfaces to then access validated input. Security vulnerabilities are, in general, indicative of bad design, either on the part of the programmer or on the part of the protocol designer.
As for the protocols themselves, one of the key mistakes in the design of virtually every protocol in wide use today has been the use of readable text as opposed to a binary protocol which specifies fixed lengths for all datagrams and their subcomponents. The latter almost trivializes the act of input validation, whereas the former typically requires a complex parser and even more needlessly complex buffering routines.
Consolidating server configuration is certainly possible, but consolidation of server frameworks is not, for the most part. The framework of a server will vary greatly depending on the underlying connection handling and multiplexing model. There have been moves to try to consolidate server frameworks, such as libevent (http://www.monkey.org/~provos/libevent/). However, for something like an ssh server, this library would be virtually useless, as ssh simply forks for each connection, eliminating the need for an event-handling library.
I’ve written a server in C that uses *gasp* its own framework and its own configuration format. What would using XML and something like libevent give me over that? Well, unless I were to use something like Apple’s plist format, XML would not greatly ease the effort of administering the server graphically (and with plists, it would only be of any real use on OS X). Not only would all servers need to use a common format, they would also have to use common option names. And again, this wouldn’t be of any use unless it were supported by multiple servers, and then it still wouldn’t be of any use unless someone wrote tools to manipulate the common configuration format. Standardization would be a monumental effort, centrally coordinated by some regulatory body. Something like that, with a considerable degree of cross-project collaboration, is virtually unheard of in the open source world.
Why not use libevent? Well, the code in my server (http://pdtpd.sourceforge.net/) that abstracts across the same facilities as libevent is under 1000 lines, and not only does it support select, poll, epoll, and kqueues, it supports /dev/poll and WSAEventSelect() on Windows. It also integrates directly into the transactional thread pool mechanism that my server uses to handle requests, whereas with libevent I would still be required to write “glue code” to accomplish the same functionality.
Ultimately, servers are written the way they are for several good reasons, and I don’t think the author of this article has enough experience in server development to really call these practices into question.
Erm, wouldn’t netstat suffice. Works for both unix and inet sockets.
“Erm, wouldn’t netstat suffice. Works for both unix and inet sockets.”
sockstat and lsof both give considerably more information than netstat. The shortcomings of netstat were noted in the article.
While working on an embedded HTTP compression server, we faced the problem of talking to our running application from several UI programs (web, SNMP, and console). The approach we found is now an LGPL package called Run Time Access. You can find it at: http://www.runtimaccess.com
The concept might be extended to more than one service.
Bob Smith