Trolltech has announced the availability of the second Qt 4 Technical Preview. Key highlights include support for AT-SPI, bringing accessibility to the Unix and Linux desktop for people with disabilities.
For some reason I can’t get this to compile correctly on Mac OS X; it stops when building the plugins directory. Has anyone had any better luck?
P.S. I have only managed to get Qt 3.3.0 to work; anything later simply won’t compile.
I would have to disagree with Simba; IMO Qt is far easier to use than Gtk. I have developed software using Gtk and Gtkmm, and using signals was a pain in the ass, documentation was poor, and Gtk is incredibly slow, especially when using it on Windows.
Besides, Qt includes lots of extras such as MySQL access, network sockets, and easy-to-use XML parsers, and it also runs so much faster.
Qt has had it too, on MacOS and Windows. Qt4 just brings it to *NIX, using ATK.
/me loves Qt!!
I can’t wait for 4 to be out with the announced speedups, cleaner API, reduced executable size/memory footprint, etc.
I can’t wait for Arthur (painting engine) to draw over Keith’s GL windows in native format :-).
I can’t wait for the new data model to simplify and boost display of my desktop icons :-).
Great revolution Trolltech.
“Gtk is incredibly slow, especially when using it on Windows.”
It isn’t that slow on Windows. Plus, most of the bugs in earlier versions have been fixed in 2.4.x.
And besides, Qt isn’t even an option for cross-platform development on Windows if you are writing GPL software, or if you are a non-profit on a limited budget. Not unless you happen to have $1,300 lying around to drop for a commercial license.
The comment, re D-Bus bindings: “No preprocessor and no special wrapper classes are necessary.”
So, that’s still just the one extra preprocessor need then?
Quote:
“And besides, Qt isn’t even an option for cross-platform development on Windows if you are writing GPL software, or if you are a non-profit on a limited budget. Not unless you happen to have $1,300 lying around to drop for a commercial license.”
That’s not true.
I have a non-commercial version that didn’t cost anything, and anyone can get that: just buy the official book, which includes the CD.
So you do not have to spend a lot of money to do open source or non-profit development with Qt.
The catch is of course that you have to buy that book, but that’s not that expensive, and even worth it.
Yesh. Get over the preprocessor already. If it isn’t already integrated into your Makefiles, do it, and quit complaining.
When C++ gets features like introspection into the language (that’s a ~2007 timeframe, if XTI makes it into C++0x), then Trolltech can ditch the preprocessor. Until then, there is no way to do it without losing important features.
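For anyone who hasn’t seen what the preprocessor actually buys you, here is a minimal sketch of the kind of class moc processes. The Counter class and its members are invented for illustration, but Q_OBJECT, the signals/slots sections and emit are the real Qt extensions being discussed.

```cpp
#include <QObject>

// A hypothetical value holder. Q_OBJECT, "signals:", "slots:" and "emit"
// are the moc-handled extensions; moc expands them into plain ISO C++
// (introspection tables and signal emission code) in a generated file.
class Counter : public QObject
{
    Q_OBJECT
public:
    Counter() : m_value(0) {}
    int value() const { return m_value; }

public slots:
    void setValue(int value)
    {
        if (value != m_value) {
            m_value = value;
            emit valueChanged(value);
        }
    }

signals:
    void valueChanged(int newValue);

private:
    int m_value;
};
```

Connecting two such objects then looks like QObject::connect(&a, SIGNAL(valueChanged(int)), &b, SLOT(setValue(int))); the string-based SIGNAL/SLOT macros rely on the introspection data moc generates.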
I agree Gtk 2.4 has improved a lot on Windows, but I found problems when opening up the FileChooser and when using threads. Occasionally some widgets act strangely, especially when adding child widgets to a textview.
Again, you have got the cost of Qt: although it can be used on Mac and X11 for GPL software, you have to pay to use it commercially, and you have to pay full stop to use it on Windows.
However, the Qt API is simply easier to use. It may use the MOC, which slows down compile time, but it makes writing applications a pleasure, and I found the documentation good and the toolkit easier to learn.
I really hope Gtk improves with version 2.8, when hopefully it will be using Cairo.
“I have a non-commercial version that didn’t cost anything, and anyone can get that: just buy the official book, which includes the CD.”
Trolltech makes no mention at all of any free licenses available for doing Windows development on their Web site. If this information is incorrect, then Trolltech really needs to update the license pages on their Web site, because it makes it look as if the only way for you to develop Windows applications with Qt is to purchase a commercial license. The only free licenses they mention cover only Unix-like systems (UNIX, Linux, Mac OS X).
“However, the Qt API is simply easier to use.”
Well, we will have to agree to disagree on this. Personally I find the GTK+ API much easier to use than the Qt one. This is very possibly because I do a lot more C programming than I do C++ programming, and I come from a Win32 API background (which is a C API as well, but GTK does a much better job of implementing classes in C than the Win32 API does). But either way, the first time I looked at the API documentation for GTK+, everything made sense right away. There were none of those “What is the purpose of this statement?” kind of issues that I had with the Qt documentation.
“…bringing accessibility to the Unix and Linux desktop for people with disabilities.”
That’s a direct quote from the referenced article. It’s not Eugenia’s statement.
But that’s one reason I pointed it out the way I did. It reminds me of Microsoft’s Windows 95 propaganda. “Bringing new technology such as pre-emptive multitasking and multithreading to the desktop”. Yeah… Sure… Except that OS/2 had both for years. Basically, trying to make a big deal about something that has already existed in a competing product for quite some time.
I decided to give Gaim for Windows a test drive (waiting for Trillian 3.0) – had a lot of stability problems with it. For example, the program tanks when I try to view somebody’s buddy info on AIM.
In my experience I consider Gtkmm to be a very good C++ API that complies nicely with modern C++ style. Unfortunately I cannot say the same for Qt. There are quite a few Qt-specific additions to the C++ language (e.g. keywords such as slot, constructs like foreach (variable, container) statement;) and I don’t like that. I also prefer the template-based (and thus strongly typed) libsigc++ way of doing event-driven programming. Just my 2 cents in this debate over the elegance of Qt…
I don’t know wtf you were doing, but Gaim runs flawlessly on Windows for me. I have used just about every version of Gaim since .60 and it has never crashed on me. Looking at the bug reports on sf.net, it looks like there are some people experiencing crashes when viewing infos with weird fonts, however it is a lot less broad than just viewing anyone’s info. Why don’t I see your bug report?
(and thus strongly typed) libsigc++
Correction: you mean statically typed. That means types must be known at compile time. Strongly-typed means that the language enforces type-safety, which C++ doesn’t.
In any case, static typing is the antithesis of polymorphism, and GUIs are inherently polymorphic. Ergo, it makes no sense to use constructs optimized for static typing; rather, it’s best to use a style that embraces and takes advantage of polymorphism. That’s why people love Qt (and Cocoa, which is even more dynamic). They apply dynamic programming to a naturally dynamic problem domain.
Two questions, since I have no experience with GAIM.
First of all, which version of GTK does it use? GTK 1.3 was fairly buggy on Windows, but the 2.4.x versions of GTK are very much improved on Windows.
Second, where did you get the GTK DLLs you are using? I suggest using the “official” ones available directly from the Gimp / GTK project. Some of the third party ports are buggy. Go to http://www.gtk.org and follow the link “GTK+ for Win32”.
“In any case, static typing is the antithesis of polymorphism, and GUIs are inherently polymorphic. Ergo, it makes no sense to use constructs optimized for static typing…”
Well, first of all, static typing is a lot more efficient so it still has a place.
Second of all, I agree with those who say that OOP is rather over-rated, and the benefits we were supposed to get from it have never really materialized. I don’t agree with ESR on very many things, but this is one place where I do. As he points out in his online book, The Art of Unix Programming, OOP can often result in code that is more difficult to maintain and more difficult to understand than procedural code, since programmers often resort to “over classing” to get things done. He also points out that some studies have shown that the overall lifetime cost of maintenance on a C++ program is slightly higher than the overall lifetime cost of maintenance on an equivalent C program.
Whether this is because of inherent problems with the OOP paradigm, or because C++ does such a bad job of implementing OOP, is hard to say, but most likely it is a combination of both. (Raymond, 2003).
Oftentimes, OOP results in unnecessary (and probably inefficient) classes, when simply passing a function a pointer to a structure would probably be simpler and easier to understand.
And this comes back to my original point. I personally find GTK much easier to understand than Qt.
Well, first of all, static typing is a lot more efficient so it still has a place.
I’m not saying it doesn’t have a place, rather, that GUIs aren’t it. Anyway, the performance argument is a shaky one in the face of type inference and advanced compiler techniques. This is particularly true in GUI code, where you’re doing dynamic dispatch anyway (*cough* GObject *cough*). The only difference between having dynamic dispatch in the language, and faking it in something like C, is that in the latter case the compiler doesn’t have the opportunity to optimize it.
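As a rough sketch of the contrast being drawn (all type names invented, nothing GObject- or Qt-specific): the first half fakes dispatch with a hand-written function-pointer table, the second lets the compiler manage it.

```cpp
#include <cstdio>

// Hand-rolled dispatch, roughly the GObject style: the "class" is a struct
// of function pointers, and every call goes through an opaque pointer the
// compiler can say nothing about.
struct WidgetC;
struct WidgetClassC {
    void (*show)(WidgetC *self);
};
struct WidgetC {
    const WidgetClassC *klass;
};
void widget_show(WidgetC *self) { self->klass->show(self); }

// Language-level dispatch: the compiler builds the vtable, checks the types,
// and knows the call site is a virtual dispatch it may be able to optimize.
class Widget {
public:
    virtual ~Widget() {}
    virtual void show() { std::printf("plain widget\n"); }
};

class Button : public Widget {
public:
    void show() { std::printf("button\n"); }
};

int main()
{
    Widget *w = new Button;
    w->show();   // dispatched through the compiler-managed vtable
    delete w;
    return 0;
}
```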
Second of all, I agree with those who say that OOP is rather over-rated, and the benefits we were supposed to get from it have never really materialized. I don’t agree with ESR on very many things, but this is one place where I do.
Is this the same Eric S. Raymond that wrote this article about how wonderful Python is?
http://www.linuxjournal.com/article.php?sid=3882
Guess what: two of the most powerful features of Python are its object-orientation and its dynamic typing. Go write a GUI program in Python or Smalltalk and then tell me about how OOP and dynamic typing aren’t all they’re cracked up to be.
Well said, Simba. OOP is so exceedingly hyped and so abused that it makes my stomach hurt. And C++ is perhaps the worst language to write OOP code in.
But I digress. Nice to see Trolltech incorporating some Gnome accessibility libraries in theirs and also adopting freedesktop standards. I hope to see more of this move towards interoperability from both the GTK+ and Qt devs in the future.
“Guess what: two of the most powerful features of Python are its object-orientation and its dynamic typing.”
But Python doesn’t force you to use OOP. And yes, its dynamic typing is nice for rapid development, but often too inefficient for practical use. ESR is also a big proponent of the idea that compiled languages like C, C++, and FORTRAN have no place in modern application development and should be reserved for systems level programming. This is one reason he is such a big fan of Python. I strongly disagree with this. This line of thinking is why code becomes more bloated and more inefficient with each passing year, and why we need faster and faster processors and more and more memory all the time.
I still prefer to optimize my code as much as possible and that means that for any major project, I will still use a compiled language. And often that language is C. And because of this, I still continue to amaze my friends when I give them a fully functional and even relatively complex Windows application that is entirely self contained and weighs in at only 200K. It will fit on a floppy with nearly 1.2 Mb of space left to spare.
“I personally find GTK much easier to understand than Qt.”
Hmm, do you mean you find Qt harder to grasp than the complex type/signal system in GTK? Then you surely aren’t using C for developing in GTK?
“And C++ is perhaps the worst language to write OOP code in.”
Well, C++ is such an elephant of a language that even the original designers have admitted that there is no way any one programmer can ever understand all there is to know about it, much less keep the entire language and its rules / constructs in their head.
The nice part about C is that, at its core, it is an elegantly simple language, and one can keep the entire language in their head easily. Because of this, one can download a library or GUI toolkit written in C, look through some sample code, and instantly understand what is going on. Because even though the whole of the code might be very complex, the language itself is so simple that it can be kept in one’s head.
This is definitely not true with C++. The language is very complex, and no one can keep the entire thing in their head. So in the case of C++, one can download a library, find some little-known aspect of OOP that is being used, or some non-standard way of using it, and not be able to understand the code without weeding through documentation.
“And yes, its dynamic typing is nice for rapid development, but often too inefficient for practical use.”
So Java Swing, based on static typing – with all that wonderful programmer-friendly event listener subclassing – is fast and efficient? And better than Apple’s Cocoa with all that dynamically typed Objective-C nonsense? I think not.
But Python doesn’t force you to use OOP.
Python is always OOP. Everything is an object. It does force OOP everywhere, you just don’t notice because it’s so transparent. That’s what well-implemented OOP looks like.
And yes, its dynamic typing is nice for rapid development, but often too inefficient for practical use.
The current Python implementation is too inefficient for performance-critical applications. Performance is a property of implementations, not of languages. Anyway, there are lots of practical uses that are not performance-critical.
I still prefer to optimize my code as much as possible and that means that for any major project, I will still use a compiled language. And often that language is C.
What does “compiled language” have to do with anything? We’re talking about dynamic typing and object-orientation. Saying that wanting a “compiled language” means using C is like saying that wanting a car means buying a Ford.
I’d argue Qt’s slot mechanism is both complex and peculiar. The people who don’t like Qt are the people who either don’t like or don’t understand Qt’s slot mechanism, or hate C++.
The people who don’t like GTK+ are the people who don’t know it has a powerful C++ binding that C++ proponents will likely prefer to Qt’s weird way of doing things.
“Hmm, do you mean you find Qt harder to grasp than the complex type/signal system in GTK? Then you surely aren’t using C for developing in GTK?”
Yes, I am developing in C using GTK. And no, I do not find the signal system to be complicated. Now like I said, perhaps this is because I am coming from a Win32 API background. And because of that background, I am used to ways of handling events and messages in C.
“What does “compiled language” have to do with anything? We’re talking about dynamic typing and object-orientation.”
The point was that this is the main reason ESR was singing the praises of Python in the article you referenced. Hence, ESR’s praises of Python and his idea that OOP is over-rated are not in conflict with each other. Python allows one to do non-OOP programming. And ESR’s reasons for liking Python have more to do with its clarity and its rapid speed of development than with just the fact that it supports OOP.
“The people who don’t like GTK+ are the people who don’t know it has a powerful C++ binding that C++ proponents will likely prefer to Qt’s weird way of doing things.”
I’m actually the opposite. One of the big reasons I like GTK+ is specifically because it allows me to program in C and doesn’t make me use C++.
@Mystilleef:
I’d argue Qt’s slot mechanism is both complex and peculiar.
Why?
The people who don’t like GTK+ are the people who don’t know it has a powerful C++ binding that C++ proponents will likely prefer to Qt’s weird way of doing things.
Or people who think that the application of the STL paradigm to the GTK+ bindings is a dumb idea. C++ is a multiparadigm language for a reason. It’s pointless to try to use a single paradigm everywhere just for some irrational desire for conformity.
Yes, I am developing in C using GTK. And no, I do not find the signal system to be complicated. Now like I said, perhaps this is because I am coming from a Win32 API background.
Win32 is one of the worst APIs ever conceived. Saying that GTK+ makes sense coming from Win32 is damning criticism of the API…
“Win32 is one of the worst APIs ever conceived. Saying that GTK+ makes sense coming from Win32 is damning criticism of the API…”
You miss the point yet again. The point is that it might explain why I don’t think GTK+ is complicated or difficult to understand. Because I am coming to it from something that was VERY complicated and difficult to understand.
“The nice part about C is that, at its core, it is an elegantly simple language”
But do you not think that the C code that is necessary to enable classing and inheritance turns out rather complicated and in fact even ugly?
“What does “compiled language” have to do with anything? We’re talking about dynamic typing and object-orientation.”
Simple. The reason Python can do a lot of the things it can do is because it is an interpreted language. In order to handle some of the things that Python handles, you need to have a runtime engine. A compiled language cannot perform the runtime safety checks that an interpreter can, and thus cannot hold the programmer’s hand like an interpreter can.
“But do you not think that the C code that is necessary to enable classing and inheritance turns out rather complicated and in fact even ugly?”
From my perspective as a programmer using the toolkit? No, it is not complicated or ugly, because GTK+ is very well designed and handles most of the messy details for me. If you dig into the GTK+ source code? Yes, it is complicated. But even then, not overly so. And it is understandable by someone who has a reasonable knowledge of C.
The reason Python can do a lot of the things it can do is because it is an interpreted language. In order to handle some of the things that Python handles, you need to have a runtime engine. A compiled language cannot perform the runtime safety checks that an interpreter can
Sure it can. It just inserts the checks into the generated machine code. Lots of compilers do it.
and thus cannot hold the programmer’s hand like an interpreter can.
Having the compiler help out the programmer does not mean it’s holding the programmer’s hand. I always find it comical when C programmers pride themselves on being able to write code free of buffer overflows and memory leaks. If a dumb compiler can do it, it’s not very much of an achievement, is it? High-level programming is all about letting the compiler do the grunt-work a stupid computer can do, and letting the big expensive human brain do the work only it can do.
Why?
Others have already dwelled on the reason. I think a lot more C++ hackers would prefer signals/slots be handled with libsigc++, a template library most are familiar with, rather than a preprocessor.
Personally, I don’t care but I also prefer a template library to Qt’s way of doing it. I don’t enjoy coding in C++ to begin with, so call me biased.
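For readers who haven’t seen it, the libsigc++ style being preferred here looks roughly like this, using the libsigc++ 2.x spelling of the time; the handler and signal names are invented for illustration.

```cpp
#include <sigc++/sigc++.h>
#include <iostream>

// A hypothetical handler for a "clicked" event.
void on_clicked(int button)
{
    std::cout << "clicked with button " << button << std::endl;
}

int main()
{
    // The argument types are part of the template, so mismatches are caught
    // at compile time -- no extra preprocessor pass, no string-based lookup.
    sigc::signal<void, int> clicked;
    clicked.connect(sigc::ptr_fun(&on_clicked));
    clicked.emit(1);
    return 0;
}
```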
Quote:
“The nice part about C is that, at its core, it is an elegantly simple language, and one can keep the entire language in their head easily. Because of this, one can download a library or GUI toolkit written in C, look through some sample code, and instantly understand what is going on. Because even though the whole of the code might be very complex, the language itself is so simple that it can be kept in one’s head.”
I see that differently…
When you use a 3rd-party library or toolkit, you’re using their API too, so saying that code using those libs or toolkits will be very easy to read because C is easy to read is not something I understand.
What I see is that a language itself can be very beautiful, but 3rd-party APIs can be very ugly.
I’m not saying that gtk is an ugly api though.
But neither is Qt. Qt is built with the following in mind:
“Code written with the Qt api must be readable by anyone who doesn’t have knowledge of Qt”.
Personally, I think that’s mostly true, but not always.
As for C or C++
I work with both… I write some GUI programs in C++ using Qt, and I help a friend out with an ircd daemon and services written in C. I prefer C++ over C. C++ might not be the best overall language, but it does the job for me, and I don’t get nightmares if my binaries are a couple of KB bigger than a similar C program.
In the end, everyone uses the tools he likes the best and can give him or her the best results in the shortest time.
If you love something, it doesn’t make the alternative suck by definition, as for someone else, that might be vice versa.
“In the end, everyone uses the tools he likes the best and can give him or her the best results in the shortest time.”
I would guess KDE core developers are more productive using C++ and Qt compared to Gnome core developers using C and Gtk+ though — simply because programming in an OO fashion with an OO language is more natural and less error prone than programming in an OO fashion using a low-level procedural language such as C.
As Rayiner Hashem also said, why not let the compiler do as much work as possible?
“Code written with the Qt api must be readable by anyone who doesn’t have knowledge of Qt”.
This is true, and is reflected in the internal implementation of Qt. The Qt code is beautiful, from the bottom up. This is reflected in the APIs, and makes Qt easy to use.
“The Qt code is beautiful, from the bottom up. This is reflected in the APIs, and makes Qt easy to use.”
Looking at the Qt API and source code made me finally realize what OOP is all about (in a practical sense at least). Perhaps Gtk+ could make you see that too, but I do not think anyone would say it is beautiful…
“Perhaps Gtk+ could make you see that too, but I do not think anyone would say it is beautiful…”
Then you have probably never used gtkmm. Because it is a far better example of OOP design than Qt. And as far as Qt being easy to understand by someone who does not know Qt, I don’t think it is, because it has custom extensions to the C++ language. (To be fair this is because Qt was developed before standard C++.) And those custom extensions do not make a toolkit easy to understand in my opinion.
Gtkmm is not proper OOP because it uses the STL paradigm and the STL is not proper OOP.
As for the C++ extensions: why are people so hung up over syntax? Syntax is trivial. Yeah, so Moc adds a few keywords. Big deal. The semantics behind the keywords are very simple, and easy to understand. Most importantly, the extensions enable a lot of simplification elsewhere in the API. For example, there is no need to hack around C++’s lack of introspection; moc takes care of that.
“Having the compiler help out the programmer does not mean it’s holding the programmer’s hand. I always find it comical when C programmers pride themselves on being able to write code free of buffer overflows and memory leaks.”
Having the compiler help out the programmer at the expense of performance is holding the programmer’s hand. There are other things that the compiler does that are not holding the programmer’s hand, such as automatically using registers when they can increase performance. In this case, it is better left to the compiler (which probably understands the CPU architecture better than the programmer), and leaving it to the compiler increases performance.
However, in the case of automatic memory management, etc., now you have a case where you are sacrificing performance to reduce complexity. In this case, yes, the compiler is holding the programmer’s hand.
“If a dumb compiler can do it, it’s not very much of an achievement, is it?”
Since when is it an issue of achievement? It’s simply an issue of sacrificing performance in favor of removing some complexity from programming. Obviously, we are not going to solve the issue here of whether this is a good idea or not. I personally don’t think it is, and I would rather spend extra time coding and debugging in exchange for better performance. (That and sometimes the fact that you can’t manage your own memory or work directly with pointers in languages like Java results in kludgy ways of doing things that end up more complicated than if one had been able to do direct pointer manipulation.)
But as far as it “not being much of an achievement”, if that is the case, then why have languages that do it for you? Apparently a lot of programmers would rather not do it. And given the amount of problems in software due to memory mismanagement and misuse of pointers, yes, it would seem that a lot of programmers do have problems getting it right. So yes, I think mastering memory management is an achievement for any programmer.
“Yeah, so Moc adds a few keywords.”
Simple. Because this is a big NO. You should not simply add your own keywords to a language. ANSI and ISO standards documents exist for a reason.
I agree with Simba; I don’t think anybody coming to Qt will find it easy to read. That was the reason for my statement above about Qt’s slot/signal mechanism being complex and peculiar. In fact most C++ gurus I know hate it.
I’m not saying Qt sucks or is wrong, but I strongly disagree that it is better than GTK+ or its C++ bindings, or that one is more productive in Qt than one is in GTK+, or even that Qt is more readable than GTK+.
I’d argue the productivity gains between C and C++ are negligible, if present at all. If my experience is anything to go by, C++ plus OO only makes projects more complex and extremely torturous to debug, especially when people start being much too clever for themselves and decide to abuse inheritance to the point where code is not readable, or even followable.
“Gtkmm is not proper OOP because it uses the STL paradigm and the STL is not proper OOP.”
This is not a debate I want to have, since the OOP purists lost this debate awhile ago. When taken to the extreme, OOP requires one to write much more code than non OOP.
And besides, what is proper OOP and what isn’t is largely relative. This is still a very developing paradigm that largely does not work the way we would like it to. As I said, the productivity gains and easier maintenance that OOP was supposed to offer simply have not materialized. And some studies have shown that the overall lifetime maintenance costs of C++ programs are actually slightly higher than those of equivalent C programs.
“I’d argue the productivity gains between C and C++ are negligible, if present at all. If my experience is anything to go by, C++ plus OO only makes projects more complex and extremely torturous to debug, especially when people start being much too clever for themselves and decide to abuse inheritance to the point where code is not readable, or even followable.”
Yep. And this is probably the crux of the problem. OOP programming requires the programmer to be a lot more aware of the “family tree” of the object they are working with. “Does this object have this property already because it inherited it from its great, great, great grandparent somewhere 20 levels up?” that kind of thing. So I think when doing OOP programming, one must spend a lot more time reading documentation because often the inheritance levels are so many layers deep that it is not easily apparent what properties or methods a certain object has, etc. And god forbid you inherit an OOP project from someone else that is poorly documented. Good luck trying to trace the object hierarchy and find out all the methods and properties of any given class. This, in my opinion, adds a great deal of complexity.
Java tried to get around the multiple inheritance problem, but interfaces are arguably worse since they create new problems. That and Java’s use of inner classes and anonymous classes only serves to create code that is more difficult to follow, and often more confusing.
Having the compiler help out the programmer at the expense of performance is holding the programmer’s hand.
That’s a ridiculous criterion. You think C code doesn’t impose a performance penalty? At the end of the day, anything above assembly imposes a performance penalty in return for programmer convenience.
Since when is it an issue of achievement?
C programmers act like it is, doing things like equating the use of a GC to having one’s hand held.
That and sometimes the fact that you can’t manage your own memory or work directly with pointers in languages like Java
It should be noted that C’s ability to manage memory directly and work directly with pointers also results in huge performance penalties. C programmers miss the forest for the trees. They see the extra clock-cycle the array access takes, but ignore the 150 clock-cycle penalty they have to pay on each read() and write() because the kernel has to be separated from unsafe C programs.
But as far as it “not being much of an achievement”, if that is the case, then why have languages that do it for you?
It’s not an achievement because even a brainless compiler can do it. That doesn’t mean it isn’t grunt work that programmers don’t want to do. Look at it this way: washing your dishes is no big achievement. Any ten year old with a sponge can do it. That doesn’t prevent us from using dishwashers to do it.
Deep class trees are a symptom of a bad design. Too many people learned C++ by “I’ll just inherit from this class because it’s kind of like what I’m looking for”.
Modern object-oriented programming favors using interfaces and composition over inheritance.
That said, coming from a background of doing C++ professionally for 6 years, it’s a clusterfsck of a language. The main reason I say that is because I think it has to be one of the worst, if not the worst, of the popular languages for programming on projects with multiple programmers, because of its multi-paradigm nature and its many gotchas in the language proper.
Sometimes just using structs and functions is a whole lot simpler for programmers to wrap their head around than clusterfscks of class hierarchies.
Object-Oriented programming and GUIs are one of the examples of where all the object-oriented hype really isn’t hype.
Simple. Because this is a big NO. You should not simply add your own keywords to a language. ANSI and ISO standards documents exist for a reason.
Moc is written in ISO C++. Its output is ISO C++. Ergo, Qt code is ISO C++ compliant. Fundamentally, moc is absolutely no different from all the macros that GTK+ code uses.
“It’s not an achievement because even a brainless compiler can do it. That doesn’t mean it isn’t grunt work that programmers don’t want to do.”
Your “dumb compiler” also writes machine code. Can you do that? I doubt you can, unless you are one of the people who started programming in the 70s or earlier. But I would certainly hope that you wouldn’t suggest that any programmer who can’t crank out direct machine code is “dumb” because “even a brainless compiler can do that”.
“They see the extra clock-cycle the array access takes, but ignore the 150 clock-cycle penalty they have to pay on each read() and write() because the kernel has to be separated from unsafe C programs.”
But this is never going to change no matter how high level and how “safe” languages get, because ultimately, you end up having to deal with a low level. For example, sure Java might be safe. But what if the runtime engine itself is buggy?
So sad this has become a troll ridden *cough*Simba*cough* flamefest.
Simple. Because this is a big NO. You should not simply add your own keywords to a language. ANSI and ISO standards documents exist for a reason.
Fine, consider Qt a new language if you will. There is the Trolltech standard. It’s also available as GPL so you aren’t held in their proprietary grip. I was taking the Qt tutorial and I found the whole slots mechanism a neat implementation.
I’ve read many times that programmer time is more expensive than hardware so leave as much to the language as possible.
And frankly isn’t programming supposed to be about thinking about the logic and forgetting about the implementation? Let all that nasty system dependent stuff get abstracted away? Really, use efficient algorithms and the rest will take care of itself (this goes for most software, not necessarily embedded or nuclear reactors). There is nothing funnier than seeing first-year students saying that their bubblesort would be faster if they could only write it in assembler instead of Java.
This is not a debate I want to have, since the OOP purists lost this debate awhile ago. When taken to the extreme, OOP requires one to write much more code than non OOP.
To paraphrase your argument: “proof by assertion, proof by assertion, and some more proof by assertion.”
And besides, what is proper OOP and what isn’t is largely relative.
It’s not *that* relative. The STL discourages the use of polymorphism. No matter what your pet definition of OOP is, polymorphism is a part of it.
This is still a very developing paradigm that largely does not work the way we would like it to.
Again, proof by assertion.
As I said, the productivity gains and easier maintenance that OOP was supposed to offer simply have not materialized.
C++ and Java implementations of OOP usually suck (because the languages encourage it). That does not mean that you cannot do good OOP, and Qt is a very good example of that.
“Object-Oriented programming and GUIs are one of the examples of where all the object-oriented hype really isn’t hype.”
You are probably right here. But I think toolkits are probably one of the few examples where object oriented programming makes sense. For most problem domains, I don’t think it really makes sense.
“Too many people learned C++ by “I’ll just inherit from this class because it’s kind of like what I’m looking for.””
This is because many programmers are taught to take abstraction to extreme levels, and the result tends to be very deep levels of inheritance. “A dog is a kind of canine which is a kind of mammal which is a kind of animal which is a kind of life which is a kind of organic collection which is a kind of particle collection, etc.” Yes, some OOP code I have seen is really almost this deeply abstracted and goes well beyond the point where it makes sense to abstract it any further.
Maybe object oriented programming is a more natural way for some people to think about problems. But it isn’t for me. And that is because I think the underlying assumption is flawed–that underlying assumption being that humans think in terms of nouns and not in terms of verbs. ie, they are object oriented and not action oriented.
I don’t think that is a valid assumption for all people. I personally tend to be verb oriented when solving problems, not noun oriented. And because of that, I usually find programming problems easier to solve in a procedural language than in an object oriented language. Now maybe this is just because I learned programming before anyone had even thought of object orientation, so I have a long habit of “trained thinking” when it comes to how to solve a programming problem. And this is why the trend today is to start programmers off right away with object oriented programming so that they have less to “unlearn” (whereas 10 years ago it was thought it was better to start off with procedural programming and then move to object oriented programming).
“There is the Trolltech standard. It’s also available as GPL so you aren’t held in their proprietary grip.”
True. But then you are stuck being forced to license your code a certain way. Neither option is very desirable from my point of view.
And once again, the GPL option is not even available for non UNIX platforms, at least there is no indication on Trolltech’s Web site that it is.
“And frankly isn’t programming supposed to be about thinking about the logic and forgetting about the implementation? Let all that nasty system dependent stuff get abstracted away?”
I think all programmers should be worried about system-level stuff and implementation. You can’t write good algorithms if you aren’t. For example, what if you write a deeply recursive algorithm without understanding what you are doing to the stack in the process? The recursion might seem like a good idea at first, until you realize that your algorithm experiences exponential growth. Suddenly it doesn’t seem like such a great algorithm after all… Especially if you are using recursion to write it.
Your “dumb compiler” also writes machine code. Can you do that? I doubt you can, unless you are one of the people who started programming in the 70s or earlier.
Dude. Writing out machine code isn’t that hard. x86 is well-documented, and if you can program ASM, you can write out machine code. It might be tedious and time-consuming, but it’s *easy*. Just like manual memory management. Look at it this way: nobody is impressed if you balance your checkbook by hand, instead of having a calculator do it. That’s like doing manual memory management (in most cases), or writing out machine code by hand. There is no point in doing what the machine can do itself.
But this is never going to change no matter how high level and how “safe” languages get, because ultimately, you end up having to deal with a low level.
If you didn’t have unsafe C programs writing random trash to memory, you wouldn’t need to protect the kernel. If you didn’t have to protect the kernel, you wouldn’t have those costly user/kernel transitions. Sure, you’d always have a little bit of code to deal with the low level, but only that little bit could crash the system.
For example, sure Java might be safe. But what if the runtime engine itself is buggy?
You only have to debug the runtime engine (or in the case of a native compiler, the code generator) once. You don’t have to deal with safety issues for each and every program.
There is a difference between understanding the low-level details, and having to deal with it for all code you write. The difference between a good C programmer and a good Python programmer is both know exactly what is happening at the low levels of the system, but only the C programmer has to think about it all the time.
You are probably right here. But I think toolkits are probably one of the few examples where object oriented programming makes sense.
Well, isn’t that what we’re talking about here? About how Qt’s object-oriented paradigm is good for GUIs?
This is because many programmers are taught to take abstraction to extreme levels, and the result tends to be very deep levels of inheritance.
These are poor programmers. These same programmers would be writing C code full of buffer overflows and memory leaks.
that underlying assumption being that humans think in terms of nouns and not in terms of verbs. ie, they are object oriented and not action oriented.
Object-orientation has nothing to do with nouns vs verbs. Several object-oriented languages use a verb-noun paradigm. Which one is used is just a matter of syntax, and syntax is trivial. Object-orientation is about how data is packaged and managed. GTK+ and Qt use fundamentally the same model here; the difference is that GTK+ uses C hacks to do it, while Qt uses C++ hacks to do it. My point is that the C++ hacks allow a better implementation of object-orientation.
Now maybe this is just because I learned programming before anyone had even thought of object orientation
OOP has existed since the 1960’s, and was finely developed in Smalltalk by 1980. I doubt you learned to program more than 40 years ago…
“Dude. Writing out machine code isn’t that hard. x86 is well-documented, and if you can program ASM, you can write out machine code.”
Ok… This is getting a bit lame. Anyone who says that programming directly in machine code is easy has probably never actually done it. If I asked you to draw a frame on the screen that I could drag around with my mouse, and asked you to write the entire thing directly in machine code, could you do it? Could you do it with only your existing knowledge?
“Object-orientation has nothing to do with nouns vs verbs.”
The comp-sci textbooks would disagree with you. OOP does have to do with nouns vs. verbs. In OOP, you are encouraged to think in terms of nouns. In procedural programming, you have actions and data that those actions are performed on. In OOP, you have, of course, the same thing. But you are encouraged to think of it as a noun: the object itself is a thing which comprises both its actions and its data.
“OOP has existed since the 1960’s, and was finely developed in Smalltalk by 1980. I doubt you learned to program more than 40 years ago…”
Alright, fine. Before anyone was actually using it for anything. I learned programming in the 1970s. Yes, at this time, Smalltalk was just starting to be talked about. And no one was taking object oriented programming seriously at this point. Hell, they were still trying to convince programmers of the benefits of structured programming at this point.
Let me get this discussion out of the land of abstraction (a rough sketch follows the list). GUIs can take advantage of the following features of OOP:
1) Widgets specified as named members: It’s natural to think of a widget as a class instance with certain named members (properties). Thus, widgets can have properties like position, state, etc. Qt is moving towards a model of using these properties to control the actual position/state of the widget, which seems natural.
2) Inheritance of widget data from parents: Most custom widgets can inherit most of their properties from their parent class, since they share the same basic state (position, etc).
3) Inheritance of widget methods from parents: Most widgets share the same basic functionality, and custom widgets can reuse most of this basic functionality.
4) Polymorphic method dispatch: Code wants to use verbs on objects. If the code wants to move a widget, it doesn’t care if the widget is a menu or a toolbar, it just wants to move it. Polymorphism allows the code to use these generic verbs, and allows the dispatch mechanism to figure out the precise implementation to use. Moreover, GUI code usually needs to deal with heterogeneous sets (list elements might be labels or icons), and polymorphism is critical for this.
5) Introspection: GUI code often does not know the types of widgets that will be in the UI (due to GUI designers or XML GUI builders). It is useful for the code to be able to use introspection to figure these things out.
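To make points 2 through 4 concrete, here is a toy sketch in plain C++ (no real toolkit, all class names invented): position is shared base-class state, move() is inherited behaviour, and draw() is dispatched polymorphically over a heterogeneous set.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Toy widget hierarchy: shared state and behaviour live in the base class,
// concrete widgets only override what differs.
class Widget {
public:
    Widget() : x(0), y(0) {}
    virtual ~Widget() {}
    void move(int dx, int dy) { x += dx; y += dy; }           // point 3: reused as-is
    virtual void draw() const { std::printf("widget at %d,%d\n", x, y); }
protected:
    int x, y;                                                 // point 2: inherited state
};

class Label : public Widget {
public:
    void draw() const { std::printf("label at %d,%d\n", x, y); }
};

class Icon : public Widget {
public:
    void draw() const { std::printf("icon at %d,%d\n", x, y); }
};

int main()
{
    // Point 4: a heterogeneous list; the loop neither knows nor cares which
    // concrete widget it is moving or drawing.
    std::vector<Widget*> items;
    items.push_back(new Label);
    items.push_back(new Icon);
    for (std::size_t i = 0; i < items.size(); ++i) {
        items[i]->move(10, 0);
        items[i]->draw();
    }
    for (std::size_t i = 0; i < items.size(); ++i)
        delete items[i];
    return 0;
}
```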
I will agree with almost all of your points. And of course, introspection is what makes JavaBeans so powerful.
As far as inheritance goes, yes. Of course this is very useful in GUI development, and GTK+ also uses this. I guess I just find casting a button to a GTK_CONTAINER and calling a container_add function, for example, to seem more natural to me than writing something like button.containerAdd(blah).
Ok… This is getting a bit lame. Anyone who says that programming directly in machine code is easy has probably never actually done it.
I thought everybody had done that trick where they enter a COM program in hex into DOS? Generating machine code from an ASM description isn’t hard. It’s a simple mechanical translation. Anybody with an Intel reference manual can do it. It might take time, but time-consuming != hard. It’ll take a long time to add 10,000 numbers by hand, but does that mean addition is hard?
Computers exist to solve tedious problems. Humans are needed to solve *hard* problems. Memory management, machine-code generation, and adding numbers belongs to the former category. Writing complex algorithms belongs to the latter category.
“Computers exist to solve tedious problems. Humans are needed to solve *hard* problems.”
My point is that just because a programmer does not know how to do everything by hand that their compiler can do does not make the programmer “dumb”. No programmer in the world can know everything there is to know about programming. But that does not make them dumb. And yes, some things that the compiler can do are hard to do by hand (or at least require a great deal of knowledge, learning, and practice to do well by hand). Writing good machine code is one of those things. It is relatively hard to do, and it is not something that someone is going to master without a lot of learning and practice.
I guess I just find casting a button to a GTK_CONTAINER and calling a container_add function, for example, to seem more natural to me than writing something like button.containerAdd(blah).
Well, if you consider that button.containerAdd(blah) is just syntactic shorthand for containerAdd(button, blah), then the weakness of the C formulation becomes clear. The C++ formulation recognizes that a verb (containerAdd) can be applied to multiple types of objects, with the specific effect depending on the type. The C formulation ignores that, and says that a verb can act only on a single type. If you want to apply the verb (containerAdd) to a button, then you’ve got to convert it to a container first.
Doesn’t make much sense from the noun-verb formulation, does it?
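The equivalence can be shown in a few lines of plain C++; Container and Button here are invented stand-ins, not real toolkit classes.

```cpp
#include <cstdio>

class Container {
public:
    virtual ~Container() {}
    // Member-function spelling: object.add(child)
    void add(const char *child) { std::printf("adding %s\n", child); }
};

class Button : public Container {};

// Free-function spelling: add(object, child). Same semantics; the "object"
// is simply an explicit first parameter instead of the receiver.
void add(Container &c, const char *child) { c.add(child); }

int main()
{
    Button button;
    button.add("image");   // OO spelling
    add(button, "image");  // procedural spelling; no cast needed, because the
                           // compiler already knows a Button is-a Container
    return 0;
}
```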
My point is that just because a programmer does not know how to do everything by hand that their compiler can do does not make the programmer “dumb”.
I challenge you to find where I said that. Instead, what I said amounted to “if a programmer can do something a compiler can do, he is not necessarily smart.” This is based on the premise that the compiler cannot do anything that requires true intelligence, so his ability to emulate the compiler cannot contribute to his intelligence. It’s obvious that your statement does not follow from mine.
“Well, if you consider that button.containerAdd(blah) is just syntactic shorthand for containerAdd(button, blah), then the weakness of the C formulation becomes clear.”
Perhaps. But as I said, I am sure some of it is force of habit because of years of programming with the procedural paradigm. If something is going to be done to an “object”, then that object should be passed as a parameter to a function. (Ok. Maybe I just don’t like change.) But basically, I am just still much more comfortable with something like container_add((CONTAINER)button, my_image) than with button.containerAdd(my_image).
There is one other thing I like about the first method over the second. It is immediately apparent that a button can be cast to a CONTAINER and that I can therefore use it in any function that expects to be passed a CONTAINER object. With something like button.containerAdd(my_image), it is not immediately apparent that the button is a container. I have no way of knowing whether containerAdd() is inherited from a parent class or not. However, with an explicit cast, I can know just by looking at that single line of code that I can actually use a button with any function that expects a container.
“I challenge you to find where I said that.”
Well, what you actually said was “Even a dumb compiler can do that”, which tends to imply that a programmer who cannot do it must not be very intelligent.
Well, what you actually said was “Even a dumb compiler can do that”, which tends to imply that a programmer who cannot do it must not be very intelligent.
Compilers do not display intelligence. If a programmer cannot do something a compiler can, that doesn’t reflect on his intelligence. Ability to juggle details, maybe, but not his intelligence. The point is that being able to do manual memory management is nothing special if something like a compiler (which has no intelligence) can do it. Just as not being able to do things a compiler can do doesn’t reflect on your intelligence (because, again, a compiler does not display intelligence), being able to do things a compiler can do does not reflect on your intelligence. This point is specifically in response to the idea that using a garbage collector is akin to having your mental hand held, ie: using a garbage collector reflects negatively on your intelligence.
“This point is specifically in response to the idea that using a garbage collector is akin to having your mental hand held, ie: using a garbage collector reflects negatively on your intelligence.”
It could imply more than not understanding memory management (which, even if someone does not understand it, does not mean they are not intelligent. It could just mean they are a beginning programmer). It could also imply laziness, or carelessness. There are any number of things that could necessitate “hand holding” by the compiler.
If something is going to be done to an “object”, then that object should be passed as a parameter to a function.
But object.addContainer(blah) is the exact same thing as addContainer(object, blah). It is even implemented as a parameter under the hood, and the semantics are the exact same in both cases. Some object-oriented languages (Lisp, Dylan) even use the verb(object) syntax.
The meaningful difference isn’t the order of the words, but that in an OOP language a method is generic, and adapts its behavior depending on the parameter, while in a non-OOP language it doesn’t.
The thing about the cast also doesn’t make sense. When writing it, you have to already know that Button inherits from Container, so you can write the cast, just like in the OOP case. When reading it, you know that Button must inherit Container, because otherwise you couldn’t call addContainer on it. The key difference between the two is actually potential for error: the casting makes the C version less safe.
It could imply more than not understanding memory management
You seem to think that memory management is this complex thing that requires deep understanding. It’s not — that’s why a dumb compiler can do it. It’s simple: free memory that’s no longer referenced. It’s tedious for a human to do manually, but it’s a fundamentally simple thing. Being able to write leak-free C code is about as impressive as being able to add 10,000 two-digit numbers together. Neat trick, but doesn’t reflect highly on your intelligence.
It could also imply laziness, or carelessness.
Am I lazy or careless because I don’t write machine code by hand? If not, then it’s not laziness or carelessness to let the machine manage memory for me.
“The thing about the cast also doesn’t make sense. When writing it, you have to already know that Button inherits from Container, so you can write the cast, just like in the OOP case. When reading it, you know that Button must inherit Container, because otherwise you couldn’t call addContainer on it.”
Sure. But suppose I am learning the toolkit and looking at some sample code. That single line of code tells me that a button can be cast to a container, and that I can therefore pass a button to any function that expects a container. But container is not a good example to demonstrate the point I was trying to make. Let’s try this one:
foo_converter((FOO)my_object)
myObject.fooConverter()
The first one makes it immediately apparent that my_object can be cast to FOO and can be used as a parameter to any function that expects an object of type FOO. But what about the second? Does myObject inherit fooConverter from a superclass? Is myObject a type of FOO? Or is fooConverter just a method of myObject and myObject is not derived from class FOO? There is no way to know with the second object simply by looking at the one line of code. With the first line of code, there can be no doubt that my_object can be cast to FOO.
“You seem to think that memory management is this complex thing that requires deep understanding. It’s not — that’s why a dumb compiler can do it. It’s simple: free memory that’s no longer referenced.”
Simply freeing memory is not difficult. But other issues related to memory management (pointers, malloc(), pointer indirection, etc.) are things that new programmers often tend to have problems with. Pointers are widely considered to be one of the most difficult aspects of C for new programmers to become proficient at.
“Am I lazy or careless because I don’t write machine code by hand? If not, then it’s not laziness or carelessness to let the machine manage memory for me.”
I’m not saying that all of it is carelessness or laziness. But it is no secret that pointer issues are one of the biggest sources of bugs in C and C++ programs. Now if pointers are so easy, then why are people screwing them up so often?
I’m not saying that all of it is carelessness or laziness. But it is no secret that pointer issues are one of the biggest sources of bugs in C and C++ programs. Now if pointers are so easy, then why are people screwing them up so often?
So why do you keep spouting this drivel about C being great for GUI programming? Qt widgets can have a parent which is responsible for freeing the children, so you don’t actually need error-prone mallocs, frees or C++ deletes so much. QString has reference counting, which allows you to use value semantics rather than pointers. Qt is very well designed precisely to avoid the problems which you yourself outline above.
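A quick sketch of the two mechanisms mentioned, as Qt normally describes them (the widget choice here is arbitrary):

```cpp
#include <QApplication>
#include <QDialog>
#include <QPushButton>
#include <QString>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    // Parent/child ownership: the dialog owns the button, so deleting the
    // dialog also deletes the button -- no separate delete for the child.
    QDialog *dialog = new QDialog;
    new QPushButton("OK", dialog);

    // Implicit sharing: QString is reference counted, so copies are cheap
    // and it can be passed around by value instead of by pointer.
    QString greeting = QString::fromLatin1("hello");
    QString copy = greeting;           // shares the data until one side is modified
    dialog->setWindowTitle(copy);

    dialog->show();
    int result = app.exec();
    delete dialog;                     // the child button is cleaned up here as well
    return result;
}
```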
“True. But then you are stuck being forced to license your code a certain way. Neither option is very desirable from my point of view.
And once again, the GPL option is not even available for non UNIX platforms, at least there is no indication on Trolltech’s Web site that it is.”
Then buy a commercial license, and license your app any way you want to. If you are REALLY serious about programming, you should be able to pay some money for a quality set of tools. And besides, it’s not THAT expensive! Do carpenters get their tools for free?
And, if you are writing in-house software, the GPL does not limit you in any way.
This story is bringing out the trolls everywhere! (There are some doing the rounds on the dot as well, though there it’s the C++ is better than that crappy C/Glib stuff nonsense).
For my two cents I think C++ is an awful OOP language, and much prefer Java. C is actually a nice language once you get used to it and accept that it’s procedural, but can be a bit laborious at times (particularly with strings). However Qt makes C++ programming like Java so I don’t mind. Also I really like the look of the D-BUS interface, the sooner everyone starts using this the better.
“The first one makes it immediately apparent that my_object can be cast to FOO and can be used as a parameter to any function that expects an object of type FOO. But what about the second? Does myObject inherit fooConverter from a superclass?”
That’s just a matter of coding style. Nobody prevents you from casting the instance to FOO and then calling the method.
Usually this is skipped because it adds code without additional value.
Simba, don’t you think that it is wise to allow the intelligent human being to deal with problems that can only be solved with creativity, instead of letting him deal with tedious low-level issues?
This is the basis of economic growth, and is analogous to work in factories being automated by robots. They can do the dirty work that people had to do by hand in the past.
Following this reasoning I think it is inexcusable that a desktop environment is programmed in C rather than a higher-level language. C++ may not be much of a higher-level language, but at least it has type safety and built-in OO capabilities.
In this regard I think KDE/Qt has a better foundation than Gnome/Gtk (I am thinking in terms of core development here).
I really don’t get the problem with moc. moc is a code generator, you can use it but you don’t have to.
It is like a parser generator. People use them instead of manually writing parsers because they automate otherwise stupid work.
This seems to have wandered into another C vs C++ discussion. For those following along who don’t know the history of such discussions, here are some perhaps interesting points:
There are a few reasons why GTK+ is written in C and exposes a C API. One is that when it was started C++ support on Linux/gcc was quite immature. Another is C++ does not have a stable ABI (yet) so exporting raw C++ objects from a library is generally considered to be a bad thing. Windows does not do this, nor does the Mac, for obvious reasons – it would make it very hard to change the compiler to a newer version or one from a different vendor as C++ is so complex.
If you want a C++ interface to Win32 you’re expected to provide a wrapper library built with your compiler alongside your app.
TrollTech did not think about this problem because back then, Qt was designed as a bought-in component for commercial apps: ie it was meant to be distributed with the apps built upon it. Through a quirk of history, Qt has now become an OS platform component and this has serious implications: C++ binary portability is no joke.
GObject is accused of being complicated. It is complex I will agree, but to paraphrase jwz, it’s complex because your needs are complex. Quite a lot of the GObject code exists to help language binding authors bind GTK+ naturally into their language so they can choose strong/static typing a la C++, or very dynamic typing as in Python.
The equivalent on Windows is COM. GObject is far from perfect, but I’ll take that over COM any day. I think other developers with Win32 coding experience will agree with me.
Which language is the ‘best’ for GUI development is a rather pointless argument to have. Most of the time, developers cannot choose any language they like. They’re constrained by what they know and mostly what the code they are working on is already written in. Very few projects can scrap and rewrite everything in a new, cool language.
There are other factors. Performance is one. If Java apps had the memory and startup footprint of C++ apps I think it’d be rather more successful as a language for GUI apps than it is. Popularity is another. Lisp is often held as the most academically perfect language, but it never got the traction it needed to really take off.
On GTKmm and whether it’s sufficiently OOP or not, I’ll not comment as I’ve not used it. My familiarity with it comes from reading the docs. But, I’ll say that generic programming does not necessarily exclude polymorphism. You can have polymorphic generic containers, if you like. That’s a slightly pointless thing to do as the whole point of generics is to increase type safety, but it’s a tool you can apply when the situation warrants.
On Qt vs GTK+ API, I’ll say this. If you feel like flaming people over it, go write a Win32 app for a month or two, then come back. You’ll find you have a much mellower attitude to these things! GTK+ and Qt are both excellent, mature toolkits with APIs that are modern and well designed relative to the competition.
Even System.Windows.Forms in .NET is only just catching up, IMHO. As for Cocoa, well, the nice thing about GTK+ and Qt is that they are based on popular, well understood languages. I don’t really want to get into a discussion about whether Objective-C is better or worse than any other language, but the world understands C and C++ so having good support for these languages is essential. From what little I’ve seen of the Cocoa API, it came across as suffering from the problem Java has of rather over-academic APIs which are so heavily object-oriented that simple tasks, like listening on a socket, become rather convoluted.
I think I’ll leave it there. Just use whatever feels most natural to you.
Thank you for making some constructive, non-flaming comments.
“Quite a lot of the GObject code exists to help language binding authors bind GTK+ naturally into their language so they can choose strong/static typing a la C++, or very dynamic typing as in Python.”
I think the extra type information in the C++ Qt/KDE API allows you to generate language bindings automatically. That type info is missing in the C GTK+ API, so you need to manually annotate .defs files in Lisp or XML for each release. For instance, the Ruby bindings for Qt/KDE that I released recently are entirely auto-generated. The ‘Smoke’ library that they use is also language-independent; the simplest description of it is ‘a moc on steroids’.
“As for Cocoa, well, the nice thing about GTK+ and Qt is that they are based on popular, well understood languages. I don’t really want to get into a discussion about whether Objective-C is better or worse than any other language, but the world understands C and C++ so having good support for these languages is essential.”
Yes, that’s a perfectly valid point. A further problem is that the dialect on Mac OS X is different from the one on Linux. On Mac OS X, you can mix C++ and Objective-C, but in every other environment you can’t.
“From what little I’ve seen of the Cocoa API, it came across as suffering from the problem Java has of rather over-academic APIs which are so heavily object-oriented that simple tasks, like listening on a socket, become rather convoluted.”
No, I disagree here; the API is excellent and has proven useful for over 15 years of developing large-scale, mission-critical apps. But as you say, Objective-C is unfortunately too far off the mainstream, and writing bindings for other languages is less of a ‘natural fit’ than with a C++-based API.
I’ve been programming for a while, going way back to BBC BASIC, then on to Amiga BASIC, AMOS and the odd dabble in COBOL (no laughing 😉 ), and right now I am learning C after much heartache with Java.
Believe me when I say this, OOP is the most unnatural, non-human way to look at solving a problem. When people solve a problem, they don’t look at things in terms of objects, classes and other crap. They think: OK, what are the steps I have to go through to solve that particular problem? And that is how programming should be: straightforward.
I want to put information in a database, so I ask a series of questions then submit the results to the database at the end of the interrogation.
People think of solving problems in terms of step-by-step instructions; ask question, wait for answer, validate answer, then do something with that answer, either save it or pass it into another set of instructions.
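As a minimal sketch of that “ask, validate, save” flow (the helper names are invented purely for illustration, not any real API):

    #include <iostream>
    #include <string>

    // Invented helpers, purely for illustration.
    std::string askQuestion(const std::string &prompt)
    {
        std::cout << prompt << " ";
        std::string answer;
        std::getline(std::cin, answer);
        return answer;
    }

    bool validateAnswer(const std::string &answer)
    {
        return !answer.empty();
    }

    void saveToDatabase(const std::string &name, const std::string &email)
    {
        // A real program would talk to the database here.
        std::cout << "Saving: " << name << ", " << email << std::endl;
    }

    int main()
    {
        // Step by step: ask the questions, validate, then submit the results.
        std::string name  = askQuestion("Name?");
        std::string email = askQuestion("Email?");

        if (validateAnswer(name) && validateAnswer(email))
            saveToDatabase(name, email);
        else
            std::cout << "Invalid input, nothing saved." << std::endl;

        return 0;
    }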
The commercial license IS quite expensive. Hell, I can buy Visual Studio .NET for less money than Qt wants for just a toolkit. And with that I get several languages, an IDE, and a toolkit.
And as far as the GPL goes (if it were even available for Qt on Windows), yes, it does cause problems for in-house programming, since your workers can basically take your in-house app and post it all over the Internet for others to download if they wish. Granted, for a lot of in-house apps there would probably be no public interest anyway, since the app would be relatively specialized to whatever that company was doing. But still.
I’ve noticed that a lot of people are arguing the merits of Qt and Gtk based on which languages they use; saying that it’s harder to code a Gtk application than a Qt application because Gtk is object-oriented (and it is) but C is not, or that it’s harder to code in Qt because Qt adds new keywords.
Aren’t these arguments rather petty? Have the people who complain about a pre-processor and extra keywords never learned a new language before? Are you unable to admit that maybe having some more keywords could lend itself towards more expressive programming? Are the Qt coders incapable of coding without classes?
C is a language *many* are fluent in, so they will naturally gravitate towards Gtk. As an ex-C-Only programmer, I originally chose Gtk+ as my toolkit of choice based on this.
Unfortunately, many C programmers are also somewhat overzealous purists (my ex-self included). I decided to learn Python one day, and since then have never written another GUI application in C. Then I decided to try PyQt, and now I would never write another application with Gtk.
Both toolkits are, frankly, amazing. The hierarchy for Gtk is fairly easy to work with and very flexible, but with Qt you just get more. Purely Qt programs get simple things like automatic cut/paste in their text boxes, whereas Gtk programs do not (you have to implement it yourself). If you extend your view to the Gnome vs. KDE widgets debate, you’ll find that in general the KDE widgets do a lot more for you and are just as flexible as the Gnome widgets; even some Qt widgets provide more for you (a notable case is QCanvas vs. GnomeCanvas, where QCanvas easily handles translucent PNGs and sprites while GnomeCanvas is more low-level and abstract).
I could live with implementing my own cut/paste or canvas routines, if that were central to my project. Gtk has a lot going for it; a big thing people mention is the promise of free (as in beer) cross-platform (including Windows) development.
But for RAD, Gtk is behind. Glade is nice, but Designer makes it much easier to get the results you want. PyGtk is great, but PyQt has the advantage of wrapping an already object-oriented language, which means I can develop purely off the C++ docs and be completely fine (I admit, though, it’s not too hard to do this with PyGtk and Gtk’s C docs or the excellent PyGtk docs). Classes like QApplication also let you skip a lot of unnecessary steps (of course I want my program to exit when the WM sends the destroy signal).
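For readers who haven’t seen it, here is roughly what that QApplication convenience looks like in a Qt 3 style hello-world (a generic sketch, not the poster’s code): closing the main widget ends the program with no hand-written destroy handling.

    #include <qapplication.h>
    #include <qlabel.h>

    int main(int argc, char **argv)
    {
        QApplication app(argc, argv);      // owns the event loop

        QLabel hello("Hello from Qt", 0);
        app.setMainWidget(&hello);         // closing this widget quits the app
        hello.show();

        return app.exec();                 // enter the event loop
    }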
I simply find myself being more efficient with Qt than Gtk, and for that matter I prefer it. I think Qt is better; but I would hardly say that Gtk is bad.
“Believe me when I say this, OOP is the most unnatural, non-human way to look at solving a problem. When people solve a problem, they don’t look at things in terms of objects, classes and other crap. They think: OK, what are the steps I have to go through to solve that particular problem? And that is how programming should be: straightforward.”
You’re not alone in that feeling. There are definite benefits to object-oriented programming, especially in GUI programming, but one of the greatest hoaxes ever perpetrated on the IT world is the misconception that people naturally think in terms of classes.
That said, I’ll still program in C++ over C most of the time, and C# over both of those just because of the libraries.
“Simba, don’t you think it is wiser to let the intelligent human being deal with problems that can only be solved with creativity, instead of making him deal with tedious low-level issues?”
Once again, this depends on what you are doing. Sometimes, taking the time to do that tedious low level stuff can have a huge effect on performance. I work in a field where performance is very important. I’ve even been known to drop into ASM for optimizing a few really critical operations.
But C++ largely does not isolate you from having to deal with the issues you have to deal with in C anyway. And as I pointed out before, some studies are actually suggesting that overall maintenance costs of C++ programs are higher than for equivalent C programs.
Sure, Java does isolate you from many of these problems, but those who claim that Java code comes anywhere near the speed of C code simply aren’t telling the truth.
“In this regard I think KDE/Qt has a better foundation than Gnome/Gtk (I am thinking in terms of core development here).”
I’m still going to disagree because Gtk gives me the option of working in both languages. And if I do decide I want to use C++, I can use gtkmm, which in my opinion, is better designed than Qt anyway.
“Gtk has a lot going for it; a big thing people mention is the promise of free (as in beer) cross-platform (including Windows) development.”
It is also simply more universally supported, since GNOME is now the standard desktop on most commercial versions of UNIX (and, of course, also the standard on Red Hat). I suspect GNOME was chosen over KDE by Sun, etc. specifically because of the Gtk toolkit. Some of it, I am sure, had to do with licensing issues. But I also suspect some of it was technical: Gtk provides an easier migration path for Motif programmers than Qt does.
I will also admit that one of the reasons for my Gtk bias is that I am not very fond of Trolltech’s commercialism.
Having just looked at the tutorials for GTKmm and knowing a bit of Qt, I agree that GTKmm looks a lot more “standard C++” than Qt, but I find Qt a lot easier to understand.
I have experience in C, scripting languages and Java, so I probably have the “right knowledge” to prefer Qt.
I disagree. I think in classes.
It’s very hard to start writing a program straight off in OOP mode, I grant you; in fact, it’s nearly impossible.
It is, however, extraordinarily easy to design a program in OOP mode. It’s just a matter of how you think about it. I always come up with “things” that the program works on. Each “thing” has a couple of properties and a couple of actions it can do. I then end up arranging connections between my things, which, as the abstraction builds up, comes to resemble in real-world terms what my program does.
Further, inheritance is great. In terms of code saving and the consequent ease of maintenance, it can’t be beaten.
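A toy C++ sketch of both points (the class names are invented): a “thing” with a couple of properties and actions, and a subclass that inherits them instead of rewriting them.

    #include <iostream>
    #include <string>

    // A "thing" with a couple of properties and a couple of actions.
    class Document
    {
    public:
        Document(const std::string &title) : m_title(title), m_dirty(false) {}
        virtual ~Document() {}

        std::string title() const { return m_title; }    // property
        bool isDirty() const { return m_dirty; }          // property

        void rename(const std::string &t) { m_title = t; m_dirty = true; }   // action
        virtual void save()                                                   // action
        {
            std::cout << "Saving \"" << m_title << "\"" << std::endl;
            m_dirty = false;
        }

    private:
        std::string m_title;
        bool m_dirty;
    };

    // The subclass gets title(), isDirty() and rename() for free and only
    // overrides the one action that actually differs.
    class EncryptedDocument : public Document
    {
    public:
        EncryptedDocument(const std::string &title) : Document(title) {}

        virtual void save()
        {
            std::cout << "Encrypting before save..." << std::endl;
            Document::save();
        }
    };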
That said, if you just want to throw something together in a hurry, procedural is the way to go. Likewise for thin database wrappers (e.g. simple fetch-and-write forms). However, if you’re ever going to do any complex programming, OOP really comes into its own.
However, I went to college in 1998 and was taught OOP the whole way (with a mercifully short dip into Haskell), so my fragile little mind may be a little warped 😉
“Further, inheritance is great. In terms of code saving and the consequent ease of maintenance, it can’t be beaten.”
For some things, yes, inheritance is nice to have. But consider my problem domain. I work on problems that are A: So specialized that there is nothing to inherit from, since no one has written anything yet. B: So specialized that nothing else can inherit from them anyway.
So in this case, it seems that classes are just unnecessary overhead for me.
Yes, I know I am probably setting a trap for myself. At some point, chances are I will be saying “I wish I had written that thing 5 years ago as a class, since it would be really useful to inherit its properties and methods right now.”
“It is also simply more universally supported, since GNOME is now the standard desktop on most commercial versions of UNIX (and, of course, also the standard on Red Hat). I suspect GNOME was chosen over KDE by Sun, etc. specifically because of the Gtk toolkit. Some of it, I am sure, had to do with licensing issues. But I also suspect some of it was technical: Gtk provides an easier migration path for Motif programmers than Qt does.
I will also admit that one of the reasons for my Gtk bias is that I am not very fond of Trolltech’s commercialism.”
Ubiquity is another reason to use Gtk; you can expect any Linux user with X to have Gtk installed for The GIMP and probably Gaim and XChat as well; these applications are fairly “killer” and have either no counterpart or inferior counterparts (sorry, Kopete & KSirc).
You do not, however, usually find someone who has Qt installed for only *one* purpose; in my experience people are much more likely to use AbiWord/Gnumeric instead of KOffice (or even OOo), Bluefish instead of Quanta, etc., just because these applications fit into their environment and do not require a “big” (it’s not really that big) library in the form of Qt.
As far as Qt goes, its main killer app is KDE; and of course KDE’s merits can be either largely over-loved (many KDE folks) or largely hated/ignored (many Gnome folks). SuSE and Slackware are the two big distros that default to KDE, though Slackware has the popular Dropline Gnome distribution, which wins over a lot of people, while SuSE/Novell still has to work out whether or not it will migrate to Ximian Gnome.
As a personal preference, I tend to prefer the way that Gnome looks & feels, but I enjoy actually *using* KDE more for various speed and preference related issues. D-BUS && Qt4 can only mean better things for both Gnome & KDE (who will hopefully one day reach tolerable interoperability).
I’m not sure I have a bias either way. Of course I prefer Qt, but I’m not so sure that preference is bias, since the connotation of “bias” is generally that it is unwarranted or that it gets in the way of logical thought. I have used (for at least two months) every version of KDE & Gnome since KDE 2.x and Gnome 1.4, and continue to switch desktops every few months to see what is up with the other camp (currently in KDE 3.3 mode; will try Gnome 2.8 when it is packaged by Pat for Slackware).
By the way, I just want to point out the irony that you cite commercial support as a positive for Gnome/GTK and then cite commercialism as a negative for Trolltech. It doesn’t hamper your argument at all; it’s just semantic humor. I think it was largely the old Qt license that created the need for Gtk/Gnome, and perhaps some disdain for C++ as dirty and over-complicated. I don’t have any experience migrating things from Motif, but I don’t know of many extant Gtk apps that started out in Motif, either.
It’s very hard for me not to program in OOP style. I’m always thinking about the easiest (and thus laziest) way to write code, and this inevitably leads to code reuse: classes and interfaces.
>So I think when doing OOP programming, one must spend a lot more
>time reading documentation because often the inheritance levels are
>so many layers deep that it is not easily apparent what properties or
>methods a certain object has, etc.
So Gtk sucks? Are you sure you know how Gtk works?
Gtk is a fully object-oriented toolkit!
Using Gtk without GObject is stupid and ugly! I know; I learned Gtk without using GObject, and one day I talked to the Gtk/Gnome devels and understood that I was wrong. But GObject is as complex as C++, so…
“By the way, I just want to point out the irony that you cite commercial support as a positive for Gnome/GTK and then cite commercialism as a negative for Trolltech. It doesn’t hamper your argument at all; it’s just semantic humor.”
Well, it’s not really ironic when you consider how I think about it. I would rather work with “truly free” tools when I can. Gtk provides that freedom. Qt does not because I cannot port to Windows without paying a licensing fee.
However, like most programmers, I also live in a world where my programs have to run on commercial operating systems, particularly Windows 2000 and Windows XP. But when “free” tools are supported on these commercial operating systems, it means I am not stuck using proprietary tools for development, even though my programs have to run there. So it is a very good thing when commercial companies adopt open source and “free” toolkits or technologies, because that gives me the freedom to use those “free” tools for developing software, even if that software needs to run on a commercial operating system.
Of course, having a company like Sun adopt an open source technology is also a good argument booster for me when I make my pitch for why I want to use the tools I do. When I am asked “How do we know this open source stuff is any good?”, I can respond by saying “Well, here is a list of major commercial companies that have adopted it and are using it.” My bosses won’t care about technical arguments such as “it’s community reviewed because the source is available.” But they will care about the fact that a major commercial company that provides IT infrastructure for Fortune 500 companies is using this same technology.
“I don’t know of many extant Gtk apps that started out in Motif, either.”
I don’t either. But I suspect that will change since there are tons of GUI system administration tools and such written with Motif that ship on Sun workstations and the like. I suspect Sun is going to have to port those tools to GNOME. And that porting will probably be easier with Gtk than with Qt since these tools are written in C and not C++.
I’m also a Slackware user, but I can get by _without_ using Gnome/Gtk and some of the applications you find killer.
KolourPaint has replaced The GIMP for me (I’m not a very artistic person). If you don’t like KSirc but like XChat, you should try out Konversation! And I don’t see any problems with Kopete; it’s a great application.
If you are looking for a killer application that needs Qt (next to KDE), then it must be K3b. Before K3b existed, I used a mix of command-line tools and Gtk GUI tools. Now that there is K3b, burning CDs or making backups (K3b + KDar) has become a no-brainer.
I’m very curious as to what Qt4 will mean for the next-generation KDE desktop. So curious that I’m gonna spend some hard cash on a new desktop system so that compiling becomes faster!
Currently I’m using a 400 MHz Celeron PC, and compiling Qt-only programs is no problem, even while still running KDE and working from within KDevelop. But trying to develop a KDE version of the same program on those specs is a no-go. It is amazing to see how low the specs needed to use KDE/Qt are, but frustrating to know that developing those same things needs so much power.
“So Gtk sucks? Are you sure you know how Gtk works?
Gtk is a fully object-oriented toolkit!”
See my previous point about the explicit cast making it immediately apparent that a certain object can be used with any function that expects another kind of object. As I said, that is not as apparent with OOP the way Java and C++ implement it, because it is not immediately apparent whether an object inherits something from a superclass or implements it itself.
Now if pointers are so easy, then why are people screwing them up so often?
Because dealing with pointers is tedious detail work, which humans aren’t good at. It’d be the same if people were writing machine code directly. It’s not a hard thing to do: take your ASM instruction, go to your reference manual, and put the right bits in the right places. But it’s tedious, and thus better left to the computer. The computer is good at easy, tedious problems; hence the usefulness of letting it handle memory management.
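A small illustration of the kind of tedious detail work meant here (plain C-style memory handling in a C++ file, invented example): nothing is conceptually hard, but every code path has to remember to release what it allocated.

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    char *duplicate(const char *text)
    {
        char *copy = static_cast<char *>(std::malloc(std::strlen(text) + 1));
        if (!copy)
            return 0;
        std::strcpy(copy, text);
        return copy;
    }

    int main()
    {
        char *first  = duplicate("hello");
        char *second = duplicate("world");

        if (!first || !second) {
            // Easy to get wrong: both buffers must be released on the
            // error path too, or the program leaks.
            std::free(first);
            std::free(second);
            return 1;
        }

        std::printf("%s %s\n", first, second);

        std::free(first);
        std::free(second);
        return 0;
    }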
I’d suggest not knocking OOP until you’ve tried it. Java is not the greatest example of it. Qt is good, because its API manages to work around most of the defects of C++, but something like Smalltalk is even better, because it does OOP naturally.