“Linux has an abundance of excellent terminal applications. Interestingly, I could not find any decent comparison of their text display performance. Since I use the command line a lot, I want text output that is as fast as possible. When you compile a large project, you don’t want the console output to be the limiting factor. I took it upon myself to do a comprehensive comparison of the text throughput of all possible terminals.”
Looks like the gnome devs did a good job on the speed front when they worked on gnome terminal…I’m surprised at xterm though, it always felt fastest to me.
Believe it or not, it isn’t gnome-terminal that is fast, but vte. Behdad Esfahbod is on the gnome performance team and optimized the holy crap out of vte. Vte is the underlying library that displays the actual terminal in gnome-terminal.
http://ftp.acc.umu.se/pub/GNOME/sources/vte/
Like what kind of system the test was conducted on and the quality of the network connection.
I agree – also what is he running along with X? Gnome and KDE both have terminals that most likely “work” better with the respective environments. I know that for sure gnome-terminal starts faster when I’m running gnome, and konsole starts faster when I’m running KDE. Does xterm perform better in TWM or Fluxbox? or does Eterm perform better with E (16 or 17 ?!). I am curious about the various combinations.
And one more thing… why isn’t the standard console listed (i.e. no X at all)? I’d bet that blows away the competition hands down but I could be wrong…
Why would that be the case? These terminals that aren’t native to KDE/GNOME are not going to be using any of the respective toolkits. So I fail to see how that would affect performance. I don’t know if there is any extra layer when communicating with the X server.
Perhaps someone else could shed some light?
Hi, I’m the author of the comparison. I have also done a test of the normal console without X, but had not mentioned it in the text. You can find it in the 6th entry of the chart; it was slower than gnome-terminal and konsole.
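As I understand the method, it boils down to timing a `cat` of a large text file inside the terminal under test; a sketch (the file name and size here are just illustrative):

```shell
# Generate a reasonably large text file (size is arbitrary).
seq 1 100000 > /tmp/bigfile.txt

# Run this inside each terminal under test; the terminal renders while
# cat writes to the tty, so the elapsed (wall-clock) time reflects the
# terminal's display speed, not cat's.
time cat /tmp/bigfile.txt
```

Redirecting the same `cat` to /dev/null gives a baseline showing how little of the time is spent in `cat` itself.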
I did a test on FreeBSD:
gnome-terminal: 0.150 – 0.180
ttyv (VESA 1280×1024): 0.013 – 0.040
~4 to ~14 times faster
EDIT:
ttyv (80×25): 0.021 – 0.024
Edited 2007-09-05 19:03
I did a test on FreeBSD:
gnome-terminal: 0.150 – 0.180
ttyv (VESA 1280×1024): 0.013 – 0.040
~4 to ~14 times faster
It depends what “time” you are using:
I did the test with the external time utility under 100% CPU load, BTW (installing Windows XP in Win4BSD).
# /usr/bin/time cat rfc3261.txt
2.16 real 0.00 user 0.01 sys
# time cat rfc3261.txt
0.002u 0.012s 0:02.45 0.4% 48+1696k 0+0io 0pf+0w
Not sure what all those numbers mean…
FreeBSD 6.2-STABLE #46: Tue Jul 10 13:00:41 PDT 2007
KDE 3.5.7, Konsole 1.6.6
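For what it’s worth, the csh builtin’s compact line decodes roughly as follows (my reading of the tcsh `time` format). The telling part is that user+sys CPU is near zero while elapsed time is about 2.4 seconds, i.e. `cat` spent almost all its time blocked on terminal output:

```shell
# 0.002u 0.012s 0:02.45 0.4% 48+1696k 0+0io 0pf+0w
#  |      |      |       |    |        |     `- page faults + swaps
#  |      |      |       |    |        `- block input + output operations
#  |      |      |       |    `- avg shared + unshared memory (kB)
#  |      |      |       `- CPU share: (user+sys) / elapsed
#  |      |      `- elapsed wall-clock time
#  |      `- system CPU seconds
#  `- user CPU seconds

# Reproduce with a generated file (times will differ per system):
seq 1 5000 > /tmp/sample.txt
time cat /tmp/sample.txt > /dev/null
```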
When you start an application it needs to read various libraries. If these have been read recently, they will be cached in RAM.
gnome-terminal will need things like the GTK libraries. If you have already started nautilus and gnome-panel, then the GTK libraries will be cached.
Same for konsole and the Qt/kdelibs.
This article is measuring the performance of the terminal. The loading of libraries at startup should not affect the outcome as startup time is not measured.
The faster startup has one particular reason. It’s not that the applications are faster within a specific desktop environment.
On startup, heavy apps like konsole or gnome-terminal need several shared libraries to be found and loaded into RAM. But they share most of these libs with other KDE/gnome programs.
You can try this out yourself:
– run ldd /usr/bin/gnome-terminal
– run ldd /usr/bin/gnome-panel
– compare
Here, gnome-terminal needs 75 libraries to be loaded dynamically. gnome-panel needs 72. A diff shows that the two programs share 70 libraries! So if you already have the gnome-panel running, that’s the simple reason why gnome-terminal starts up fast.
Compared to that, xterm only uses 22 shared libraries here. These are mainly X libraries. So if any other X app is already running, chances are good xterm starts fast.
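The comparison described above can be scripted; a sketch using `/bin/ls` and `/bin/cp` as stand-ins so it runs anywhere (substitute gnome-terminal and gnome-panel to reproduce the numbers quoted):

```shell
# List the resolved library names for each binary, one sorted set each.
ldd /bin/ls | awk '{print $1}' | sort -u > /tmp/a.libs
ldd /bin/cp | awk '{print $1}' | sort -u > /tmp/b.libs

# comm -12 prints lines common to both sorted lists, i.e. the shared
# libraries both programs need; counting them gives the overlap.
comm -12 /tmp/a.libs /tmp/b.libs | wc -l
```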
Hi, original author here: I have updated the site and added this information: I used Ubuntu 7.04, Gnome, ATI with fglrx, and a Pentium M.
Sure!!! If he had tested with a gigabit network card installed, the results would change this much!!! O_o
I read the summary, and when he says “terminal applications” I think of applications that run within the terminal, like pine or vi. Then I’m thinking “What could he be timing? Is pine faster than vi? How do you know? Timing ncurses performance? What kind of benchmark is this?”
Then, I pull my head out and “get it”.
The konsole memory consumption mentioned in the article (32 MB) seems very high. I just looked at how much my instance with 3 tabs uses, top reports 16m resident and 12m shared. When I open a new one with just a single tab 15m resident is reported along with the 12m shared. Still a lot for a terminal, but a lot less than the article implies.
Edit: That’s with a 1000 line buffer.
This was measured on Kubuntu Edgy with distro-provided KDE 3.5.7.
I remember a few years ago some similar testing was done, and gnome-terminal was by far the slowest. I believe this was around the time of GNOME 2.0, so apparently some nice improvements have been made since.
Edited 2007-09-05 18:05
Indeed, I’m very pleasantly surprised by how well gnome-terminal performs. I remember back in 2003 or so when people were recommended not to compile large projects in gnome-terminal because it was so slow that it would actually slow things down. This was back when I was using Gentoo, so I did a lot of compiling and any speed-up was welcome.
The konsole memory consumption mentioned in the article (32 MB) seems very high. I just looked at how much my instance with 3 tabs uses, top reports 16m resident and 12m shared.
Of course it seems high:
1. They’re using top (or I assume they are – they don’t say). Not the way to try and measure actual memory usage, because Linux lies a lot about actual memory usage.
2. KDE apps use an awful lot of shared libraries that tend to get included in silly ad-hoc memory analyses like this.
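For what it’s worth, on Linux a less misleading figure is the private (unshared) writable memory that /proc exposes per mapping. A sketch, using the current process as a stand-in (substitute the PID of konsole or gnome-terminal):

```shell
# Sum the private dirty pages from smaps. Shared libraries that top's
# RES column charges to every process are excluded from this figure,
# so it is closer to what the process actually costs you.
awk '/^Private_Dirty:/ { kb += $2 } END { print kb+0 " kB private dirty" }' \
    /proc/self/smaps
```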
BeOS’s Terminal is taking about 1.3 MB on my machine right now, and it has a filled text buffer after compiling Haiku and now running an oft-interrupted bit-torrent download.
My 768MB goes VERY far on BeOS, that is, until I use Firefox, which uses 42MB on its own, and causes the system to reserve about 30MB more, and that is not counting any server’s allocations needed to support the application while running ( does include all libs, though ).
Still, Firefox on Ubuntu sucks up the RAM even worse on this machine. I often run out of memory (and SYSTEM responsiveness) when playing only one or two Flash movies, or just trying to do a lot of browsing. But then Firefox is normally only using 250MB or so; it is the rest of Ubuntu that is sucking up the RAM. Each little daemon and background application needed to provide functionality on Ubuntu uses an abysmal 2MB or so, when there is likely no need for that amount of usage for the feature set.
Is this just how the Linux kernel as compiled and patched by Ubuntu allocates? I think I will be compiling me a new kernel today 🙂
–The loon
What?… I was mostly on-topic.
I believe that this test is flawed. gnome-terminal is the fastest because it skips text when it scrolls. Try scrolling in vi with gnome-terminal; it is the ugliest terminal. The test is only useful for comparing terminals while compiling, not in everyday use.
Compiling is what most distro devs do, programmers do, source-based distros do, package builders do. I call that everyday use.
Most of the time I spend in my Unix system is in an xterm, and scrolling while compiling isn’t that frequent, because normally you only recompile the code you have changed. I spend more time coding in Vim than compiling (the latter *always* runs in the background). Perhaps you don’t use the terminal interactively.
I have another question: does the text output speed really affect compiling speed that much? I mean, if you’d want to know how fast gcc works in a respective terminal app, why not, uh, time THAT and compare those results? Wouldn’t that be more informative on this matter?
“Does the text output speed really affect compiling speed that much?”
No. It’s pretty unlikely that text output speed would be a major limiting factor for the time it takes to compile something.
You’re on the money in your observation. It means nothing; that’s why this article is just beyond stupid. Worrying about how fast your terminal is keeps you away from actual coding, which is probably the root of this guy’s problem.
Well, I wouldn’t call the test stupid, but it’s certainly not very accurate.
The fact is that Gnome terminal is not really running the test as intended because it’s skipping a lot of the text.
Try to scroll a large text file and you’ll see that it’s quite slow.
Konsole is quite fast, as is xterm; both are about as fast and will scroll larger volumes of text smoothly.
Maybe it’s just the difference between our systems?
I ended up learning a bit about things by asking questions here and @ the blog where the article is posted so thanks to all (and to martin of course for the article).
I just ran the test on my MacBook Pro, and it seems that Terminal.app is quite fast, and iTerm.app is fairly decent. Here are the results:
Terminal.app with 10,000 lines buffer: 0.02s
iTerm.app with 10,000 lines buffer: 0.34s
Well, I compile a lot of code and always use either xterm or urxvt. I’m not sure if this test is good enough. Perhaps someone can answer these questions: do gnome-terminal and konsole use lots of CPU time to render text fast, with “skip” tricks and scroll optimizations on X? If so, what is the impact on CPU usage? What about the memory consumed and a possible increase in paging?
If someone really wants to find the best console to use for a project’s “configure/make” steps, I suggest picking a huge project to build, like KDE, and then comparing the results.
And one last thing: PLEASE, developers of autoconf, automake and libtool, CACHE the results of queries. That, for sure, will SPEED UP build times more than anything else.
“If someone really wants to find the best console to use for a project’s “configure/make” steps, I suggest picking a huge project to build, like KDE, and then comparing the results.”
It would be interesting on a scientific/statistical basis (more test values, better-proven results), but the use of a specific terminal won’t speed up compiling. If I remember correctly, an old urban legend claimed that compile processes finished earlier if the output was redirected to /dev/null. I think buffering is what keeps the compiling speed from being limited by the terminal output speed.
Now kidding: the mentioned terminal applications are terminal emulators. Terminal emulators are for children. Real men use real terminals. Real terminals are hardware. Hardware for hard-working men. I’ve still got a DEC VT100 and an EAW P8000 KSR around. 🙂
Performance here is: 0.007u 0.023s 0:00.68 2.9% 16+312k 0+0io 0pf+0w (2 GHz Intel Celeron, Ati Radeon 9600, Driver ati + drm, FreeBSD 5.3/p5, XFree86 4.3.99.15, WindowMaker, XTerm)
Why is xfce4-terminal missing? On my system, it needs only about 60% of the time konsole takes to display the file. Is it so similar to gnome-terminal on the inside that it is not worth listing separately?
xfce4-terminal is a nifty program, but it uses the same rendering widget as gnome-terminal:
http://developer.gnome.org/arch/gnome/widgets/vte.html
Just for fun I did the test with the stock BeOS terminal. It didn’t do too badly, ranging from 1.08 – 1.25 seconds. It does smoothly scroll through all the text though.
So the I/O to the screen is blocking? Sorry, I didn’t RTFA so maybe that’s answered within, but let me ask this.
If you redirect the output to a file and then do tail -f in another process, would the job run faster?
Maybe I’m not understanding this at all. I’ve always had the feeling that console I/O speed would slow down a process, but shouldn’t the output to screen be decoupled (through non-blocking I/O, multithreading, something) so that until user input is required, the process would run unabated?
As a graphical example, if I’m copying a file in KDE, and grab the containing window, or for a better example, cause some kind of event in the window, the transfer doesn’t pause while I do this. The window would go away readily enough when the transfer completed “underneath” my mouse.
Is this event-driven approach not appropriate to the console where programs can redirect I/O. Does this get into the world of message passing and mutexes and such that was eschewed in the early days of Unix, or am I just totally off the mark?
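The decoupling asked about above is easy to arrange by hand: let the job write to a file and let a separate process render it. A minimal sketch (the file name and the stand-in job are illustrative):

```shell
# Stand-in for a long-running job: it writes to a file, so it never
# blocks on terminal rendering, no matter how slow the terminal is.
( for i in 1 2 3; do echo "step $i"; sleep 0.1; done ) > build.log 2>&1 &
job=$!

# Interactively you would run `tail -f build.log` in another terminal
# to watch the output at whatever pace the display can manage.
wait "$job"
cat build.log
```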
Why doesn’t he send the output to a file and tail the file, if the output is so huge that it uses too much memory?
Edited 2007-09-06 05:32
Then you’d be measuring the performance of cat and not the performance of the terminal.
No, what I was asking is: why does he need a fast terminal?
Terminals already scroll faster than you can read, so what’s the point of having the fastest one?
If he just wants to have the status of the compilation very fast, there is no need to display the output:
gcc foo.c >logfile 2>&1 ; echo "$?" is enough.
Strange this should be published now. I was just conducting similar tests trying to find the best terminal last weekend.
In my experience, he’s right; gnome-terminal is FAST, unless you have xcompmgr or compiz running. Then it gets very slow and uses up massive amounts of CPU time (X eats up to 100% of one of my CPUs). But if I run rxvt, or one of its descendants, with the compositing window manager on, it’s much faster than gnome-terminal.
As soon as I kill the compositor, the performance flips, and gnome-terminal is the winner.
Does anybody have a guess as to why? Only gnome-terminal gives me problems with the compositor; xterm, rxvt, etc. don’t seem to be affected.
I am the current maintainer of Konsole (for KDE 4).
The file used in the benchmark is not large enough to get really useful feedback. There can be quite a lot of variation between test runs. My testing found that gnome-terminal/2.18 is an order of magnitude faster at catting very large files than Konsole/KDE 3 and about twice as fast as the current Konsole/KDE 4 build. The really, really expensive part is the text rendering and both “cheat” (compared to xterm) by cutting down the amount they do as much as possible. I’m not sure how he is measuring memory usage, but I make it 9MB of writable physical memory used on the test mentioned (Konsole/KDE 3).
Incidentally, the shiny new System Monitor tool in KDE 4 provides that figure in the ‘Memory’ column, instead of showing the rather less meaningful (and usually larger) VmSize figure. Which is great, because when non-expert reviewers compare a KDE 3 program (using KSysGuard) against its KDE 4 counterpart (using System Monitor) the figure for the KDE 4 program will be much smaller even if in reality the memory usage is the same. Why didn’t we think of that before?
The bottom line though is that if you are running a process which is producing vast amounts of output, it will go noticeably faster if the terminal window on which it is being shown is not visible. Compiling doesn’t usually produce such vast quantities of output that this is a problem, especially if you are building C++ programs where compilation takes much longer.
I personally do not think that catting a large file is the most useful measure of terminal performance, because it does not really affect how smooth and snappy the terminal feels in day-to-day work. Something which does feel slow in Konsole and gnome-terminal compared with xterm is anything that involves scrolling text in a large terminal window, such as moving around a file in Vim, paging through a file or simply browsing output with the scrollbar.
This is where slowdown gets much more irritating, so that is what I put effort into fixing for KDE 4, along with quick startup (it will never be quite as snappy as xterm I expect).
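On the maintainer’s point about file size, concatenating a seed file a number of times is a cheap way to cut run-to-run variance (the repeat count and file names here are arbitrary; the RFC file is the one used earlier in the thread):

```shell
# Build a much larger test corpus by repeating a seed file.
seq 1 10000 > /tmp/seed.txt            # stand-in for rfc3261.txt
for i in $(seq 1 20); do cat /tmp/seed.txt; done > /tmp/big.txt
wc -l < /tmp/big.txt                   # 200000 lines

# Then, inside each terminal under test:
#   time cat /tmp/big.txt
```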
I have used top, how do you measure memory usage?
EDIT: I should have finished reading 🙂
Edited 2007-09-06 17:39
is the plain old console without X..
I think it’s a useful test. Everyday people like me run grep on a 100 MB file not knowing whether it matches a couple of lines or the whole thing 😉 Then you’re smashing Ctrl-C for a minute waiting for the terminal to catch up…
I thought this post was a joke. I used to know a fiction writer who would spend tons of time preparing to write, setting up his work area, the right feng-shui etc. trying to make sure nothing limited him before starting his all important writing tasks. Needless to say his writing was lousy and he was full of himself too.
I’m not saying they’re completely wrong because I get similar numbers here.
However, what I will say is that they are very misleading. To demonstrate, open gnome-terminal and run man gcc. Scroll up and down and witness choppy text. Do the same test in xterm and witness perfectly smooth scrolling, even if you use smooth fonts.
I don’t know how to measure this lag, but it matters a lot more (to me anyway) than the speed at which a terminal can simply dump a volume of text in one go.
Edited 2007-09-08 13:01