Performance analysis and bottleneck determination in Linux is not rocket science. It requires some basic knowledge of the hardware and kernel architecture, and the use of some standard tools. Taking a hands-on approach, the author walks readers through the different subsystems and their key indicators, to understand which component constitutes a system's current bottleneck.
Just vmstat, iostat and netstat? I was expecting a bit more.
Something I don't get is why the hell anybody worries about performance on Linux and Unix flavors. Just get a fast computer; the OSes are already fast enough for casual usage.
Just my $0.02.
Judging by the lightweight content of this article, it sure isn’t rocket science.
…a rocket scientist
The article covers the subject for a broad audience, so it specifically avoids assuming previous knowledge, and starts from the very basic concepts.
The next steps in performance analysis involve more complex procedures, including application (and maybe kernel) profiling, redesign of the implementation (through the use of RAM-based filesystems, modification of the scheduler and elevator strategies, the use of jumbo frames, etc.), up to the recompilation of applications with architecture-specific optimizations (-march, etc.).
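To give a flavor of what those techniques look like in practice, here is a rough sketch for a 2.6 kernel. The device names, sizes and target CPU below are purely illustrative assumptions, and most of these commands require root:

```shell
# Illustrative examples only: device names, sizes and the target CPU
# are assumptions; adapt them to your system (most require root).

# RAM-based filesystem: mount a tmpfs for hot scratch data
mount -t tmpfs -o size=256m tmpfs /mnt/scratch

# Change the I/O elevator for a disk on a 2.6 kernel
echo deadline > /sys/block/sda/queue/scheduler

# Enable jumbo frames on a gigabit interface
ifconfig eth0 mtu 9000

# Recompile an application with architecture-specific optimizations
gcc -O2 -march=pentium4 -o myapp myapp.c
```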
It would take a whole book to discuss all the techniques that advanced systems administrators utilize for performance analysis and improvement, but again, this was not the goal of this little article.
I'll possibly write a follow-up to this article in the future, to dig deeper into some of these techniques, but don't hold your breath for now.
Best regards,
Flavio
What is dumb is not looking for issues that can be relieved by a quick command or simple adjustment before going off and throwing money at the problem.
It's idiotic to throw faster processors at a problem that more I/O would fix, or vice versa. You need to figure out the root cause before taking steps to rectify it. Otherwise, it's like hacking at a leg to fix a runny nose.
And it is moronic to spend the cost of 10 systems on what a single system with the right adjustment could handle, if only a little time were spent.
Cheap or not, it’s still not free.
Hence the Linux article is more than welcome.
thanks.
Well, if someone would just port DTrace to Linux, life would be much easier for us developers. Of course we could just run Solaris instead…hrm.
It's a good article; I'm enjoying reading it.
To the person who said that performance tuning is dumb: I hope you don't expect everyone to periodically upgrade their systems just to run an OS fast. Systems don't cost $0.02. Having to throw expensive hardware at things to make them run fast is a sign of BloatWare.
Network congestion? How does monitoring your interface in any way equal network congestion?
And how many network devices aren't auto-configuring these days? Would you honestly suggest that a common user try to adjust any of those network settings?
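To be fair, interface statistics do hint at congestion if you watch the error and drop counters. On Linux they can be read straight from /proc/net/dev without any special tooling; a quick sketch (field positions follow the standard /proc/net/dev layout):

```shell
# Print per-interface receive error and drop counts from /proc/net/dev.
# After stripping the colon, fields 4 and 5 are RX errors and RX drops;
# steadily climbing counts suggest a saturated or misconfigured link.
awk 'NR > 2 { sub(/:/, " "); print $1, "rx_errs=" $4, "rx_drop=" $5 }' /proc/net/dev
```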
A very simple article that I don't see being overly informative to the OSNews crowd… but maybe it is.
I'd personally delete the second paragraph; I'm not sure what you're trying to say, or why you mention a sysadmin, considering that any sysadmin who isn't well beyond the scope of this article isn't much of a sysadmin! I think the man/info pages are probably better than this article, but for someone new to Linux this is cool.
Isn't there something similar in Linux? Wasn't there LTT (the Linux Trace Toolkit) or something along those lines?
Heck, wasn't DTrace's source released by Sun? Heck, start working on a port.
I'll possibly write a follow-up to this article in the future, to dig deeper into some of these techniques, but don't hold your breath for now.
I’m looking forward to it.
Me, too.
Thx for the article.
A few clock cycles, if you weren't in KDE. Still a good article. It was also nice to see some positive feedback from readers; I was getting tired of readers nitpicking and flaming authors.
-nX
What if the performance issue you're experiencing happens when you're not at work? These tools are fine when you're in the office, but none of these commands record information for long-term analysis. sysstat should have been included as well (http://perso.wanadoo.fr/sebastien.godard/).
By default, sysstat records information every 10 minutes from 8:00 AM to 7:00 PM, Monday through Friday, including CPU, disk, memory, and network statistics. Just add a crontab entry for either root or adm and adjust it as necessary.
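For reference, that collection schedule corresponds to crontab entries along these lines. The sa1/sa2 paths vary by distribution, so /usr/lib/sa below is an assumption:

```shell
# Sample sysstat crontab (paths vary by distro; /usr/lib/sa is an assumption).
# Collect statistics every 10 minutes, 8 AM to 7 PM, Monday through Friday:
0,10,20,30,40,50 8-18 * * 1-5 /usr/lib/sa/sa1 1 1
# Summarize the day's data into a readable sar report each evening:
0 19 * * 1-5 /usr/lib/sa/sa2 -A
```

Once the data files exist, `sar -u`, `sar -d` and friends can replay any interval after the fact.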
Maybe you could write some kind of script or something to make an automated analysis of vmstat, iostat, etc. For “normal” people, reading vmstat output (and analyzing it) is fairly difficult.
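A minimal version of such a check is easy to sketch in shell and awk. The thresholds below are arbitrary assumptions, and the column numbers assume the usual 16-column vmstat layout (check yours with `vmstat 1 2` first):

```shell
#!/bin/sh
# A minimal automated bottleneck check over vmstat output.
# Thresholds and the canned sample below are illustrative assumptions.
analyze() {
  awk '
    NR > 2 {                 # skip the two vmstat header lines
      so += $8               # swap-out rate
      id += $15; wa += $16   # idle and I/O-wait CPU percentages
      n++
    }
    END {
      if (n == 0) exit
      if (so / n > 10)       print "swapping: likely memory shortage"
      else if (wa / n > 30)  print "high iowait: likely disk bottleneck"
      else if (id / n < 5)   print "cpu saturated"
      else                   print "no obvious bottleneck"
    }'
}

# Canned vmstat sample (a disk-bound system); for a live check, use:
#   vmstat 1 5 | analyze
analyze <<'EOF'
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 1  3      0  12000  34000 210000    0    0  5200   480  900  1200  5  3 12 80
EOF
# prints: high iowait: likely disk bottleneck
```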
I was about to include some information about sysstat and the venerable sar, which is the sysadmin's Swiss Army knife for performance analysis and capacity planning, but a thorough discussion of it would take a full chapter of a book.
Maybe in a future article…
Regards,
Flavio
Flavio,
No doubt writing about sysstat and sar would take some effort, but I see it as worthwhile.
If you’re looking for a whole book about Linux Performance tools, check out:
Optimizing Linux(R) Performance: A Hands-On Guide to Linux(R) Performance Tools
sar and sysstat are covered.
Cheers,
–Phil
Heck, did it even mention top or ps?
Both are simple tools that provide useful info…
Of course, you could script/schedule any of the ones I mention, or the ones mentioned in the article, to run as often or as occasionally as you like, and dump the output to a file…
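Something like this in a cron job gives you cheap long-term history. The log path and the "top ten by CPU" choice are just one way to do it:

```shell
#!/bin/sh
# Append a timestamped snapshot of the busiest processes to a log file.
# The log path and the field list are illustrative assumptions.
LOG=${LOG:-/tmp/perf-snapshot.log}
{
  date
  # procps ps on Linux: sort descending by CPU, keep the header plus 10 rows
  ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -11
  echo
} >> "$LOG"
```

Schedule it with a crontab line such as `*/10 * * * * /usr/local/bin/snapshot.sh` (the script path is hypothetical) and you have a poor man's sar.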
Heck, let's hope for and look forward to the more in-depth articles!
Not a bad intro to vmstat, netstat and the like. A more thorough discussion of top, or at least a pointer to the man pages for sar, etc., would be nice.
Honestly, though, this covers the boring, easy-to-diagnose system problems. I'm still trying to find something about really low-level tuning and analysis. Part of it is just not possible, since Linux really sucks at instrumentation for measuring performance and resource utilization. Now that Solaris has DTrace, it's the Gold Standard(TM) of instrumentability. Hopefully LTT will grow to some fraction of DTrace's usability and versatility.
As for the comment that tuning is dumb: you obviously have never hit hardware limits. We run applications here on quad-processor x86 boxes, and when those boxes are tapped out there's nothing we can do but start tuning. Intel boxes aren't readily available at more than 4-way, and switching architectures is neither cheap nor easy. Hell, the cost jump from a dual to a quad is non-linear, and it only gets worse above that.
I think this is definitely an area where Linux has an awful lot of room to grow. The good side to this is that we’ve seen a large number of interesting and innovative solutions come from the OSS community surrounding the Linux kernel. The bad side is that we haven’t seen any of them relating to this!
Unless you are “locked in” to a particular vendor, HP does have 8-way machines:
http://h18004.www1.hp.com/products/servers/platforms/index-dl.html
I couldn't agree with you more about the lack of instrumentation. I was trying to point this out as a shortcoming the other night and got attacked by the Linux zealots. Compared to Solaris (which I also work with), Linux has some way to go in terms of instrumentation and tools for gathering long-term performance statistics (particularly for applications). More than likely, if Linux had system accounting similar to Solaris's, it would help in your current situation by gathering application-specific resource utilization information without resorting to resource-intensive application monitoring tools (depending on the application).