Has Object-Oriented Programming been hyped too much, or do we suffer from the OO paradigms exposed in languages such as C++, Java, and C#? How about languages that support features such as prototyping and/or multi-dispatch?
I hated OO at university. In fact I began to believe one of the main reasons Design Patterns are big business is primarily because of flaws in the concept of OO design.
Encapsulation and abstraction within procedural programming is all anyone should ever need.
Phillips and standard, manual and cordless. My cordless doubles as a drill, and I also have a corded switchable hammer-drill. I don’t try to screw slotted screws using a Phillips driver, nor use the cordless on something fragile. I never use driver bits in the hammer drill, even though they fit. I’m sure this doesn’t strike anyone as peculiar.
Why is it that people think differently of programming? Should you not have a selection of tools (languages and technologies) and use that which is appropriate for the situation?
“A frequent argument for OOP is it helps with code reusability, but one can reuse code without OOP—often by simply copying and pasting.”
He obviously never worked on code maintenance, and support for applications…
C++ is not the only tool for OOP. What about Python: everything is an object.
Frankly, this is mostly rubbish along with some truths.
The conclusion is quite ridiculous (“hey, let’s wait for the day when computers will understand us directly! Yay! Sounds like a fun idea!”)…
In my view, OOP is extremely nice and useful. And yes, I think that most people “don’t get it”. But I think the fault lies more with the “OO”-branded languages like C++ than with people. C++ is a strange beast, and claiming, as this article does, that “C++ too is a difficult language to use; it’s just not as difficult as C” is quite astonishing. The C language is much less complex than C++.
One thing where I agree with the article: Inheritance is not automatically good and should be used with caution. Delegation is more often than not a much better Design Pattern.
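The delegation idea can be sketched in a few lines of Python (class names invented for illustration): the outer object *has* a collaborator and forwards to it, rather than inheriting from it.

```python
class Engine:
    def start(self):
        return "engine started"

class Car:
    """Car *has an* Engine and delegates to it, rather than inheriting from it."""
    def __init__(self):
        self._engine = Engine()

    def start(self):
        # Forward the call; Car controls exactly what it exposes,
        # and the Engine implementation can change freely behind it.
        return self._engine.start()

print(Car().start())  # engine started
```

The point of the pattern: Car is not forced to expose everything Engine can do, which is exactly the coupling problem inheritance tends to create.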
In my view, OOP really is Smalltalk, Objective-C or Ruby. Not C++. And to me, that explains a lot about why people have trouble with the C++-branded OOP style.
With a true OO language like Smalltalk, things are dynamic, and it’s much easier to work in an “extreme programming” mode (lots of short iterations with refactoring), because by definition your software won’t be static. It will evolve. That’s why static languages like C++ are a pain and why dynamic languages are a much, much better answer (with Smalltalk, you can modify things on the fly while your program is running, etc.). In fact I think the real thing is the message-passing paradigm over the function-call paradigm.
Finally, I would say that with a good programming framework, things are really different — try programming with Cocoa/GNUstep to see what I mean. Use Gorm (the graphical object relationship modeler) to link your objects (or Interface Builder on OS X). Things are a lot faster and simpler to program with Cocoa/GNUstep, and the reason is that it’s a GOOD object-oriented framework.
Of course, bad programming won’t be stopped by OOP (perhaps even to the contrary..)
I agree with those that say to use the right tool for the right job. OO isn’t needed for everything – in many small projects its added complexity can be a hindrance. But copying and pasting as a way to reuse code – give me a break. That is just ridiculous. Also, encapsulation isn’t necessarily a way to literally hide code from other programmers. It also refers to the fact there ought to only be one entry/exit point for a method or data. If you can only set a variable by going through a single method, it becomes much easier to control that method. True, you could do something similar with procedural functions, but the combination of data and methods makes for a cleaner representation, imo.
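The single-entry-point idea can be sketched in Python (class and field names invented for illustration): the only way to change the balance goes through one method, which is the one place to validate, log, or set a breakpoint.

```python
class Account:
    def __init__(self):
        self._balance = 0  # by convention, touched only via deposit()

    def deposit(self, amount):
        # The single entry point for mutation: one place to validate or debug.
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    @property
    def balance(self):
        # Read-only access; there is no public setter.
        return self._balance

acct = Account()
acct.deposit(50)
print(acct.balance)  # 50
```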
Clive, I am interested in where I can read about encapsulation and abstraction within procedural programming. So far I’ve only found these concepts addressed within the context of OO languages.
Personally I haven’t done much programming, period, but I’ve been reading up a lot on (funny you mention it) Design Patterns and the whole philosophy of designing to the interface, not the implementation. The seminal “Gang of Four” book uses C++ and UML as its tools for illustrating DPs, and I assume for good reason. If you can show me how to create meaningful and adaptive interfaces using a non-OO approach that produces readable and maintainable code, I’m all ears.
When you boil it all down, OO is a conceptual tool – one that has gained substantial popularity, not just among hobbyist and novice programmers, but even amongst language gurus like Bruce Eckel, as well as the C++, Java, PHP, Perl, Python, Ruby (… shall I go on?) communities. (Doesn’t the Linux kernel now support the use of C++ within the kernel?) If OO were so terrible, would this be the case?
It seems Richard Mansfield’s article really glosses over OO’s usefulness, and with its sweeping statements and generalizations (unaided by any practical examples or testimony) degrades into one man’s editorial and hearsay. I would be more impressed if someone were to write a more *balanced* article contrasting what OO offers and at what cost, as compared with other viable alternatives.
In my opinion OOP is the best paradigm we have at this time. Granted there are drawbacks to OOP, but when one combines the strengths of OOP for application design with Aspect Oriented approach to program maintenance, they have a winning combination.
GUI components are great time-savers, and they work well. But don’t confuse them with OOP itself. Few people attempt to modify the methods of components. You may change a text box’s font size, but you don’t change how a text box changes its font size.
Welcome to the OOP world!!
That’s exactly why OOP is good. You don’t have to change how a text box changes its font size, because you trust your vendor enough to use his component without messing around with his code. OOP, in this case, provides a very clean interface between your code and your vendor’s code.
This guy for sure deserves respect, based on his curriculum. If he doesn’t like OOP, fine. But his points in this article really don’t make any sense.
One more thing… he never said what language he uses. From the article, I can presume VB in Visual Studio .NET… just to clarify, VB.NET IS OOP!
He seems to have written an entire article about OOP without mentioning polymorphism. Isn’t that one of the main advantages of it: the ability to abstract out the common code from a related set of algorithms through virtual functions and inheritance?
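That point can be sketched in Python (shapes invented for illustration): the common code lives once in the base class, and dynamic dispatch picks the right `area()` at run time.

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self):
        ...

    def describe(self):
        # Common code written once; the right area() is chosen at run time.
        return f"{type(self).__name__} with area {self.area()}"

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

for s in (Square(3), Circle(1)):
    print(s.describe())
```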
Of course full multiple-inheritance C++ OOP is too much for simple amateur coding, but for more complex projects the solutions in pure procedural languages start to look a lot like custom-built versions of the functionality built into OOP languages.
So the author does not like OOP?
He has written loads of books on Visual Basic (the .net kind as well as the older ones).
Should I be surprised? 🙂
When I first read some of the comments here I assumed this was some green kid out of college authoring the article. I mean, who would suggest that copy/pasting code is an effective mechanism for code reuse in a production environment? Who would suggest that group programming is the exception rather than the rule in the real world? Lastly, his examples are ridiculous. He says that most people who knock off quick little apps don’t need OOP. It’s true that their own 30-line program can be procedural in concept. However, it will be leveraging the OOP-based back end of the libraries in question. Consider the ease with which we can manipulate Excel sheets, or merge XML and DB datasets. While the code the end user writes may be procedural, he is leveraging COM objects or .NET objects to get the job done. OOP is what makes all of that magic work. This guy needs to get real.
Nicolas, I actually agree with the article, but I believe that the author has not been exposed to languages such as Dylan, Smalltalk, Python, Lisp (CLOS) and so is trapped in the C++/Java/C# OO paradigm. C++/Java/C# are too static and hamper exploratory programming.
I’ve been playing around with VisualWorks Smalltalk for a couple of days now, and I’m really impressed with the live-environment nature of it and how simple the syntax really is once you get used to it after a couple hours. Don’t ever underestimate the power of tools.
I’ve also been playing around with Slate http://slate.tunes.org/ too. I believe it really expands the power of object-orientation beyond what Smalltalk offers through prototyping and multi-dispatch (PMD).
Here is the Slate VM/compiler guy’s thesis on PMD. http://www-2.cs.cmu.edu/~aldrich/courses/819/salzman-pmd.pdf
The article seemed almost like a troll? Cut-n-paste programming? What!? Then he mentions the move from DOS to Windoze as when OOP started? Almost implying Winblows is OOP? Has he ever USED the Win32 API? That is SO FAR from OOP (MFC isn’t even close either). Not sure what the point of the article was, and on top of that his examples meant to lure you toward procedural code had no substance.
I only agree partly with this article.
When he says the following:
“With the possible exception of GUI components, I’ve never heard of an OOP success story that on close inspection demonstrated OOP’s efficiency.”
I know many examples of applications which do show the efficiency of OO programming, though most of these applications are limited to MacOS because they are programmed in Objective-C (with Apple’s Cocoa Framework):
* Delicious Library
* Apple’s media apps and AppleScript
* OmniGraffle
* Many more…
Integration of applications is a lot easier with object-oriented paradigms, and Apple shows it in their apps. GUI programming is also extremely simple with Apple’s tools. Communication with the GUI happens through actions and outlets (messages sent from the GUI to application code and from application code to the GUI).
Other applications which were programmed in OpenStep were, for example, the first web browser by Tim Berners-Lee and the Doom map editor by John Carmack. The Cocoa API is a (sort of) extended OpenStep API.
Anyone ever seen the Dodge commercial where the man and his wife are seeing a counselor, and he asks them about their vacations? Well, after the guy and his wife leave, he says “What a quack.”
That’s how I feel about this article.
He offers lots of opinion and conjecture, but he cites no direct evidence to support his arguments. He mentions some vague personal experiences (i.e., “Of all the OOP successes I’ve seen…” or “Most big companies do…”. Well, what exactly are those?). He doesn’t give any code examples and he doesn’t refer the reader to any more detailed articles or papers. He also seems to be out of step with what universities are actually teaching — of the universities from which I’ve had contact with students (about 3-4 schools), they all teach OOP from the very beginning and they tend to do it in Java or .NET, so I’m not sure what his tripe about academics being obsessed with C++ was all about. In my experience, I’ve found that commercial software developers (again, a limited experience, but relevant: Microsoft, National Instruments, Raytheon, E-Systems, L-3 Communications) are far more attached to C++ than academics are.
The tone of his article seems to peak when he mentions C++. After that, it seems like he was really writing about the pitfalls of C++ and not OOP in general, even though he worded it that way. The first thing I thought of when he said something about the idea of code reuse not being as good in practice as it is in theory (with regards to OOP, that is) was the Lisp language — the ability to reuse code in Lisp is fantastic, and Lisp-based development houses trend toward much-better-than-average development times.
He also tries to argue that OOP is fine for groups, but that most programming is done by individuals (and that an individual hiding data from himself is silly). Firstly, he doesn’t offer any quantitative data to validate that statement, so it’s just supposition on his part. He either has no idea how many programmers work as individuals or he isn’t sharing his information with us. Beyond that, OOP is not for hiding anything from programmers. It’s for hiding implementations from other implementations. I always thought that was one of the very basic premises of OOP. I spend time programming both in groups and as an individual. In my group projects, using OOP concepts made it very easy from the get-go to design our software and then implement it. We were able to compartmentalize individual work much more easily than we would have been able to had we been using functional or procedural techniques. In my individual projects, I still choose to use OOP techniques for the same reason. My software design process tends to be simpler and the resulting design tends to be more succinct, with clear roles and limitations for each component. Implementation is also easier for me as an individual because I can establish well-defined interfaces between components and not have to be developing lots of different components at a time. Lastly, with an OO-design, it would be much easier for someone else to join me in a project and get up to speed and be productive than it would otherwise.
Perhaps one thing I should add is that the reason (for me, at least) that OOP makes design easier is that OO concepts make it easy for me to take a real-world situation and translate it into logical abstractions that I can describe to a computer. In languages like C where the user can define datatypes that are primarily data (don’t talk to me about function pointers), the programmer is limited in what he can describe in his code (or at least *how* he describes it). In OO languages like Java, he can describe these abstractions to be as simple or as complex as the situation requires. He doesn’t mention any of this in his article, and in my view, it’s a significant benefit of using OOP principles.
This has gotten too long, so I’m done.
One problem with OO can be seen when using inheritance, which can often hinder more than it can help. Some may even argue that the very idea of inheritance is flawed.
Aggregates and interfaces is what I say.
The trouble, and I’ve felt it, is that many OOP people are trying to say that OOP is the answer to everything but scripting (and some of them even want to take simple scripts from us). I know I sound like they are an evil government, and they aren’t. However, if you’ve listened to the encapsulation rhetoric from your CS professors enough you start to be able to see the hot air escape their head as they claim you need an OOP language to program OO. Truth is, encapsulation predates OOP by a lot (even c has scope, and ways to create objects of sorts). About the time they start teaching you about iterators and how it works great, except you can’t do these three things because of “the nature of the language (c++)” you start realizing that c++ is a bit of a hack.
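The point that encapsulation predates OOP can be illustrated with nothing but functions and closures; a toy Python sketch (no class anywhere, yet the state is fully hidden):

```python
def make_counter():
    count = 0  # private state: nothing outside this function can touch it

    def increment():
        nonlocal count
        count += 1
        return count

    def value():
        return count

    # The caller gets only the two entry points, never the variable itself.
    return increment, value

inc, val = make_counter()
inc()
inc()
print(val())  # 2
```

The same trick works in C with `static` file-scope variables and an opaque-pointer API, which is presumably the kind of "objects of sorts" the comment has in mind.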
Everyone does work differently. Personally, I like to have some clue of what my program is telling the machine to do. This is one reason I love to program in c, the other is that I don’t mind wasting hours correcting small and stupid mistakes I made. But I’d never do a lot of things in c, it’d be a waste of time and a lot of extra work to do many things.
There is nothing more annoying, though, than wanting to install a program that requires 46 different packages because it’s written in some odd language and needs 14 different bindings for 4 different C libraries. OK, I’m exaggerating; but you get the idea.
Most people will admit that inheritance is rarely useful, but when it is….
OK, so this guy has written 32 books on computers since 1982. But come on… using C++ to exemplify OOP is wrong.
http://c2.com/cgi/wiki?CeePlusPlus
My take on this is that OOP is a GoodThing but is NoSilverBullet.
The guy makes many valid points, but they are more valid in languages like C++/Java/C#. He singles out C++ for his criticism. It’s hard to plan for change, and these languages impose a static structure on you that makes it even harder.
Here are some points that he makes that make sense:
* Programming has become bloated—ten lines of code are now needed where one used to suffice.
* Wrapping and mapping often use up programmer and execution time as OOP code struggles with various data stores.
* Massive API code libraries are “organized” into often-inexplicable structures, requiring programmers to waste time just figuring out where a function (method) is located and how to employ it.
* The peculiar, inhuman grammatical features in C++ and OOP’s gratuitous taxonomies continue to waste enormous amounts of programming time.
OO was never the panacea that so many people claimed it was, but it’s not even close with the aforementioned languages. If you haven’t programmed in functional languages like Haskell, Lisp, or Dylan, or in other more dynamic OO languages like Smalltalk (which I’m finding to have some functional-language qualities), then you don’t have a reference point for what other languages consider “objects”.
The problem I see with OOP is that it covers so many different ideas, some of them good and some of them bad. The ideas I see commonly attributed to OOP include:
* encapsulation (hiding of implementation)
* polymorphism (multiple implementations of a single interface)
* inheritance (extending implementation while preserving type)
* functions and data grouped (as in C++)
I find that two of these ideas, encapsulation and polymorphism, are good things that cleanly complement each other. The chief advantage of polymorphism is that it encapsulates the required implementation behind an abstract interface. This is a good thing, as it allows the code to be easily extended to support new implementations because the interface is solidified.
The other two ideas aren’t all that good. Three levels of inheritance tends to produce spaghetti code, where it isn’t clear what is being depended on in the base class. And the grouping of functions and data has absolutely no relationship to the good OOP described above, except that single-dispatch languages like C++ require functions to be members of some class.
The other two ideas aren’t all that good. Three levels of inheritance tends to produce spaghetti code, where it isn’t clear what is being depended on in the base class. And the grouping of functions and data has absolutely no relationship to the good OOP described above, except that single-dispatch languages like C++ require functions to be members of some class.
Bingo. I like the way that Dylan separates data and the methods that act on it with methods that form generic functions and multi-dispatch.
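A toy multiple-dispatch sketch in Python, in the spirit of Dylan/CLOS generic functions (this is an illustration, not a real library; the classes are invented): the implementation is chosen by the runtime types of *all* arguments, and no function belongs to any class.

```python
class Generic:
    """A toy generic function that dispatches on the types of every argument."""
    def __init__(self):
        self.methods = {}

    def method(self, *types):
        # Decorator that registers an implementation for a type signature.
        def register(fn):
            self.methods[types] = fn
            return fn
        return register

    def __call__(self, *args):
        # Look up the implementation by the runtime types of all arguments.
        return self.methods[tuple(type(a) for a in args)](*args)

collide = Generic()

class Asteroid: pass
class Ship: pass

@collide.method(Asteroid, Ship)
def _(a, b):
    return "ship hit by asteroid"

@collide.method(Ship, Ship)
def _(a, b):
    return "ships collide"

print(collide(Asteroid(), Ship()))  # ship hit by asteroid
```

(Python’s standard library offers `functools.singledispatch` for the single-argument case; full multi-methods need something like the registry above.)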
I think the guy is a flaming moron. I write group software AND my own private software, and I love objects. Neither do I think they are particularly hard to understand. When I first learned about them it was like an epiphany. Writing objects is clean, elegant, and straightforward.
First off, I don’t know who this “expert” programmer is. I doubt seriously he’s had nearly as much exposure to working in a professional shop of any size as he lets on and I wouldn’t hire him if you put a gun to my head and told me its either hire him or meet my maker.
My own experience (and for the record, I’ve been a senior developer at 2 Fortune 100 companies — one in Texas and one in Los Angeles) says favor composition over inheritance, and so I thought I’d agree with at least the intent of the article. But in the end it’s just pure garbage. Did he actually suggest you just “copy and paste” for code reuse? Has he actually *maintained* any project he’s ever unleashed on his poor, unsuspecting customers? I doubt it. Far more likely he’s one of these types that gets the job, gets in over his head, and bails before anyone has figured out just how badly they’ve been taken.
In short, this article is sheer idiocy.
I think you are totally right.
Sure, I often write small procedural python scripts for specific tasks, but they can only be small because of the backing OO Library…
I also think that you have to work on a larger project to comprehend the usefulness of OOP. Writing simple VB apps is perfectly fine without it.
And one last: you can write object oriented in nearly every language (C is my favorite 🙂
It’s not that hard to reuse existing code in an efficient manner even in a purely procedural shop.
In the largely FORTRAN66-based mainframe environment that I used to work in, those tasks that were common to several programs were usually encoded as an external subroutine and placed in a subroutine library.
That way, any programmer could reuse the subroutine in the future by calling it within their program and including the appropriate object code at compiler link time (and this step was handled behind the scenes automagically by our compilation process).
We had over 1500 of these routines, and many programs were little more than I/O code surrounding a series of external subroutine calls. It made many tasks a LOT easier — why reinvent the wheel when there might be several variants already constructed for you? 🙂
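The subroutine-library style of reuse described above translates directly to a module of plain functions; a minimal Python sketch (routine names invented), where the "program" is little more than I/O around library calls:

```python
# A shared routine library in the procedural style the comment describes:
# common tasks written once, then called from many programs.
import math

def area_circle(r):
    return math.pi * r * r

def hypotenuse(a, b):
    return math.hypot(a, b)

# A "program" is then mostly I/O surrounding a series of library calls.
def report(r, a, b):
    return f"circle={area_circle(r):.2f} hyp={hypotenuse(a, b):.2f}"

print(report(1.0, 3.0, 4.0))  # circle=3.14 hyp=5.00
```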
I have to say that I believe the author makes a good point. To me, a construct like C++ is best used when multiple programmers have to combine their efforts. Otherwise I find it’s usually easier to break a program into pieces. For low- to medium-level routines, you can use assembly or C. For higher-level functionality, you can usually weave in a scripting language, like Python, that has OO elements. Sticking to one language or tool for a project seems crazy to me unless the priority is on usability between programmers.
The strongest thing OO has going for it is not the encapsulation of data, but the encapsulation of behavior. Most design patterns, and the stuff people come up with on their own, allow a pluggable object to direct behavior at key points in the flow of a program. I use several policy-management objects to defer decisions to another implementation. The resulting code is much more flexible. As an aside, an OO wishlist would be:
* Interface inheritance; inherit the interface, not implementation
* Open class definitions; able to add methods to existing classes without subclassing (Objective-C protocols)
* Multi methods; implies functions are not tied to a particular class, which is good, but orthogonal to current design
* Interface specifications for constructors; very useful IMHO, or allow pure abstract static methods
* Type inference; not OO, but c’mon, it’s 2005
* Algebraic types; again, not OO, but c’mon, it’s 2005?
* Real functors a la Lambda expressions; IT’S 2005 ALREADY DAMNIT!
* Open class definitions; able to add methods to existing classes without subclassing (Objective-C protocols)
Try any of the prototype-based languages (Javascript, Self, Prothon, etc).
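For what it’s worth, Python allows the same kind of open class definition: a method can be attached to an existing class at runtime without subclassing, sketched here with an invented class (the technique does not work on built-in types like `str`):

```python
class Document:
    def __init__(self, text):
        self.text = text

# "Open class": add a method to an existing class without subclassing it.
def word_count(self):
    return len(self.text.split())

Document.word_count = word_count

d = Document("the quick brown fox")
print(d.word_count())  # 4
```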
<em>* Interface inheritance; inherit the interface, not implementation</em>
You can always overload the function polymorphically when implementation is inherited; when you inherit interface only, there’s no corresponding way to inherit implementation. If you lose implementation inheritance, you lose the policy class mechanism described by Alexandrescu.
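For reference, here is a rough Python sketch of interface-only inheritance using abstract base classes (names invented): the subclass inherits obligations rather than implementation, so it can "act like a vector" without dragging in a base class's baggage.

```python
from abc import ABC, abstractmethod

class Sequence(ABC):
    """Pure interface: subclasses inherit obligations, not implementation."""
    @abstractmethod
    def __len__(self): ...

    @abstractmethod
    def get(self, i): ...

class TinyVector(Sequence):
    # Implements the interface with its own storage; no behavior is inherited.
    def __init__(self, items):
        self._items = list(items)

    def __len__(self):
        return len(self._items)

    def get(self, i):
        return self._items[i]

v = TinyVector([10, 20, 30])
print(len(v), v.get(1))  # 3 20
```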
I’ve been developing software for about 10 years in various languages (BASIC, 80×86 assembler, C, Pascal, C++, Java, Perl) and it has become pretty clear to me that the one overriding benefit of OOP is that it reduces complexity. All the other benefits are good (to varying degrees) but reducing complexity is the killer. If you find that using OOP is increasing the complexity of your code, then what you are working on is trivial (don’t be insulted, I’ve worked on trivial apps too).
The author says several things that lead me to believe he’s only ever worked on trivial apps, but the one that made me laugh was “hiding information from yourself is silly”. No it is not. Information hiding is very useful, since it says “don’t worry about all this stuff, the only thing you need to remember is this”. It is simply another way to reduce complexity by abstracting many details into a simpler interface. But again, this is useful only if you are working on a more complex application.
There are 2 things I agree with though:
– OOP marketing and hype is unnecessary
– inheritance is overused and too often misused
Some other stuff:
– OOP does owe much of its popularity to Microsoft
– C is very useful and not too complex
– C++ has a terrible and over-complex syntax that has cost the software industry countless $. A better, simpler C++ would have saved several years of developer productivity
– Pascal (as in Borland’s Delphi) is a great language to develop Windows applications in
– multiple inheritance is good
– static typing is necessary for complex software development
– Perl style languages (loose typing) are for scripting and little utilities, sorry, they are not for software development
Cheers
You can always overload the function polymorphically when implementation is inherited; when you inherit interface only, there’s no corresponding way to inherit implementation. If you lose implementation inheritance, you lose the policy class mechanism described by Alexandrescu.
I didn’t mean remove implementation inheritance altogether. I just meant the option to only swipe the interface would be useful. Both mechanisms are useful. Sometimes I want to make a container that can act like an STL vector, but doesn’t bring all its baggage.
http://digitalmars.com/d/
I didn’t mean remove implementation inheritance altogether. I just meant the option to only swipe the interface would be useful. Both mechanisms are useful. Sometimes I want to make a container that can act like an STL vector, but doesn’t bring all its baggage.
In that case I agree.
I’m really tired of this growing prejudice against multiple inheritance. There are some situations where you want several “parent class” behaviors and aggregation/delegation just isn’t flexible enough.
D looks like an interesting language, but without MI I’m not going to waste my time learning it.
Seems to me that we have a particularly bad combination of technologies that are widely used in business, namely relational databases and OO languages. They seem to work against each other.
First off, OOP is not a silver bullet that will solve every problem. Many people seem to mistake that as does the author. OOP is good in some areas and terrible in others.
People have to realize there are a variety of tools available to solve problems. Picking the best one is the challenge.
Unfortunately OOP became a buzz-word and everyone had to do it, thus leading to failures where OOP was a bad choice.
However, totally dismissing OOP is complete ignorance.
Also, cutting and pasting code? As others have mentioned, WTF?! Talk about the worst way to manage code.
I have met managers like this guy and I dread working on projects with them because they are stuck in their beliefs and won’t budge.
First of all, C++ has many flaws due to the fact that it is more or less a “hack” of C. A language will support OO better if it is built around it in the first place (i.e. Java, Smalltalk, Python, .NET, etc.). So using C++ as an example in the article is not really good.
OO really makes it much easier to balance performance with maintainability through interfaces. By using interfaces, the programmer can change the underlying implementations without rewriting too much code (e.g. write a speedier implementation of CharSequence if they think String is too slow).
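That interface idea can be sketched in Python with invented stand-ins for CharSequence and its implementations; a caller coded against the interface needs no changes when the implementation is swapped:

```python
from abc import ABC, abstractmethod

class CharSource(ABC):
    """Hypothetical interface standing in for the Java CharSequence example."""
    @abstractmethod
    def char_at(self, i): ...

class SimpleString(CharSource):
    def __init__(self, s):
        self._s = s
    def char_at(self, i):
        return self._s[i]

class ReversedString(CharSource):
    # A different implementation of the same interface.
    def __init__(self, s):
        self._s = s
    def char_at(self, i):
        return self._s[-(i + 1)]

def first_char(src):
    # Coded against the interface; works with any implementation.
    return src.char_at(0)

print(first_char(SimpleString("abc")), first_char(ReversedString("abc")))  # a c
```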
Of course, seeing that the author is a VB programmer, it would be rather difficult for him to understand the merits of OOP (even with VB.NET, I doubt many VB programmers have changed their habits and started to learn the OOP features in VB.NET).
Seems to me that we have a particularly bad combination of technologies that are widely used in business, namely relational databases and OO languages. They seem to work against each other.
It’s true that it can be drudgery to persist and vivify objects from a database (although there are frameworks to help with this in many languages). However, the conceptual mapping from objects’ data to records is fairly straightforward. In what way do they “work against each other”?
Multiple inheritance (and IMO inheritance in general) looks better on paper than in practice. I believe the title of the article should be “Inheritance Is Much Better in Theory Than in Practice”… Regardless, multiple inheritance _will_ eventually bite you in the ass, which is why it is excluded from most modern languages; delegation is indeed sophisticated enough (in .NET at least) to accomplish anything you’d need multiple inheritance for (multicast delegates are a beautiful thing). I’ve made it a standard practice in my professional career to use interface inheritance over implementation inheritance wherever possible, or at most inherit from a single base class in an extreme case, and I have yet to run into any issues; I believe it leads to cleaner, easier-to-maintain solutions. </.02>
I agree with you on everything but the C++ syntax. Any language, IMHO, including C++, can be used in a way that is easy to understand or impossible to work with. It is the programmer’s responsibility to write code that is clean and well documented.
There are a lot of people who write code, but only a few who can be called programmers…
regardless, multiple inheritance _will_ eventually bite you in the ass, which is why it is excluded from most modern languages
See, this is what I’m asking about. I can’t see what about MI makes it more prone to bite me in the ass than single inheritance. Especially (in C++) if you’re careful to use virtual inheritance properly.
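For comparison, here is the classic diamond in Python, whose method resolution order (C3 linearization) visits the shared base exactly once, roughly the guarantee that careful use of virtual inheritance provides in C++:

```python
# The "diamond": D inherits from B and C, which both inherit from A.
class A:
    def greet(self):
        return ["A"]

class B(A):
    def greet(self):
        return ["B"] + super().greet()

class C(A):
    def greet(self):
        return ["C"] + super().greet()

class D(B, C):
    def greet(self):
        return ["D"] + super().greet()

# super() follows D's linearized MRO (D, B, C, A), so A runs exactly once.
print(D().greet())  # ['D', 'B', 'C', 'A']
```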
In the article, the author states:
“When fast execution and memory conservation were more essential than clarity, zero-based indices, reverse-polish notation, and all kinds of bizarre punctuation and diction rose up into programming languages from the deep structure of the computer hardware itself. Some people don’t care about the man-centuries of unnecessary debugging these inefficiencies have caused. I do.
Efficiency is the goal of OOP, but the result is too often the opposite.
The day when a programming language needs to accommodate the machine rather than the human has long since passed. Indeed, in Microsoft’s Visual Studio .NET suite of languages, the compiled result is identical whether you use Java, C++, or Basic. But professionals and academics haven’t heard this news, or they’re just dismissing it. They continue to favor C++ and other offspring of C. So colleges now turn out thousands of programmers annually who don’t even know that serious alternatives to C++ or OOP exist. Countless academics point to OOP as the reason C++ is superior to C, neglecting to mention that C itself was an inherently painful language to use and that any abstraction would’ve been an improvement. C++ too is a difficult language to use; it’s just not as difficult as C. That’s pretty faint praise.”
This is a contradiction. First he bangs on OOP as overrated and not very useful in real practice. Then he sings the praises of the .NET languages, all of which are fully object oriented. Second, he hammers on code efficiency and whether or not OOP helps with it, but again sings the praises of .NET, which runs in a VM (lots of extra memory usage and CPU cycles).
I agree with the author’s premise – that OOP is overrated and not always practical in the real world. This is one reason I’m not fond of Java, which forces OOP down your throat. Even Bjarne Stroustrup says emphatically that OOP is far from being a panacea, and bills his language invention as a “multi-paradigm” language.
So, in essence, OOP is just another tool in the programmer’s toolbox, one that often gets overused and/or abused out of over-intellectual zealotry or infatuation with programming theory over programming practice.
I’ll have to say this was a good article. He certainly makes many points I recognize from experience.
Many of the comments in here certainly show he is sort of right as well; people bitchslap OO on you as the savior of all, and anyone who says otherwise is a troll, right?
WRONG. But you won’t admit it, I’ll bet.
Pretty much exactly as you described. While it’s not difficult, it is tedious, and you basically have to build yet another abstraction. Since data is pretty much stored in an RDBMS, would it not be wiser to use OO for GUIs and a table-based language for the manipulation of data?
I don’t have enough experience to really speak with any authority. Just that it seems to make better sense than forcing a tool to work. Spending money on frameworks to alleviate the drudgery is just masking the real problem. The problem being that OO and RDBMS are not a good match.
I think the software development world would have benefited greatly from a C++ Lite language. This would be just like C++ but minus: templates, function overloading (including constructors), the -> symbol nonsense (why not always use . and let the compiler take care of the rest?), and header files. C++ as it is now would then be used for libraries (perfect for the STL) and frameworks, and NOT for application development. If you say “just don’t use those features of the language” then you’re out of touch with modern software development. Developers nowadays (unfortunately) spend most of their time maintaining/enhancing old code rather than writing new code.
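A minimal sketch of the -> complaint above (the Account type here is made up purely for illustration): the same logical call needs two spellings depending on whether the caller holds an object or a pointer, a distinction the compiler could in principle resolve on its own.

```cpp
#include <string>

struct Account {
    std::string owner;
    double balance;

    void deposit(double amt) { balance += amt; }
};

double total(Account a, Account* p) {
    a.deposit(10.0);    // object: dot
    p->deposit(10.0);   // pointer: arrow -- same operation, second spelling
    return a.balance + p->balance;
}
```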
Getting the data out of the RDBMS and into strongly typed business objects alleviates many of the problems of meshing OO and RDBMS. Abstraction layers are not necessarily a bad thing, so long as they are warranted and solve a specific domain level problem; in this case getting data into objects that the layers above the business layer can interact with in a more native way. Welcome to n-tier development :-).
“…would it not be wiser to use OO for GUI’s and a table based language for the manipulation of data.”
What “table based language” would you recommend? Why introduce YADSL (yet another domain specific language) when SQL already provides this functionality? In .Net, strongly typed datasets provide a good jumping off point for this kind of functionality and lend themselves greatly to OOP.
Welcome to n-tier development :-).
Makes me think of Java in particular, or more specifically J2EE and its seeming layer upon layer.
As far as the table-based language goes: I read a very good paper by a guy with a disdain for OO. He claimed, as many have, that OO is good for GUIs but not much else. His original criticism can be found here:
http://www.geocities.com/tablizer/oopbad.htm.
He talked about some of the older databases like dBase and later xBase, with their own proprietary database-querying languages. He had some ideas on what a table-based language would look like. Seemed to make sense.
So I am not really sure whether there is currently a viable table-oriented language, but potentially we keep looking at the problem through the current convention (OO) instead of trying something different.
Here is a link to some ideas which seem to work: http://www.geocities.com/tablizer/top.htm. I would enjoy reading your comments on both papers.
You’ll note how well the OOP paradigm fits the OSS paradigm.
http://c2.com/cgi/wiki?GeneraOs
It’s nice when not only the language supports you, but the whole environment.
Yeah, “tablizer” is pretty well known for his criticism of OOP, and makes some very valid points (at least in his programming domain).
I especially agree with the coupling problems that many OOP designs fall prey to.
The problem with integration of RDBMS and OO is two-fold:
1. There is no really good implementation of the relational model in current DBMS products. Even the best SQL systems tend to overcomplicate things profusely, as well as violate many of the basic logical principles of the RM. SQL itself is not truly relational, and if you get a copy of the ANSI SQL spec, you will find that the word ‘relational’ (intentionally) never shows up. While I understand many tend to despise any attempt toward ‘purity’ in the relational model, the current approaches are missing many potential advantages. Read http://www.thethirdmanifesto.com/ for more. Essentially, it is possible for relational systems to have many more object-like capabilities (truly extensible/complex datatypes, inheritance, etc…) than has been realized with current SQL systems.
2. The concept of Object Oriented development is neither a complete nor a formalized logical model. It is more like a gelatinous collection of ideas. Many of these ideas are very good, but some combinations of them can be logically inconsistent, or at least used in inconsistent ways. Most OO developers tend to ‘blame’ the relational model for their woes, when in fact the relational model at least provides a consistent, unchanging approach to the needs of serious data management. For example, almost all OO developers follow the standard practice of “object-relational” modelling, which means that you have a class corresponding to every table, and the class has attributes corresponding to every column in the table. No orthogonality at all. Now, if you make a change in your relational data model, your object model is broken, or vice versa. This is a classically brittle way of approaching things. Thus, as a response, many OO developers proceed to dumb down their data models to the point that they are barely normalized, let alone relational.
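The brittle one-class-per-table pattern being criticized, in a hypothetical sketch (the CUSTOMER table and Customer struct are invented for illustration): every column is mirrored by a field, so renaming or dropping a column breaks this class and everything compiled against it.

```cpp
#include <string>

// Hypothetical table: CUSTOMER(id INT, name VARCHAR, city VARCHAR)
// The "object-relational" mapping described above mirrors it 1:1:
struct Customer {
    int         id;     // mirrors the ID column
    std::string name;   // mirrors the NAME column
    std::string city;   // mirrors the CITY column -- drop it and this breaks
};
```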
When working with a truly capable relational system, rather than the class-to-table correspondence, think about a class-to-datatype correspondence. This way, one could create an object-oriented framework that allows the business logic to be expressed in the database, but presented intelligently to the users by the application layer. Yes, this would take some serious work, and at times require almost an A.I. approach to certain problems, but it can result in far more intelligent software, without slavishly replicating business logic in both layers of your system.
Since you didn’t state a target language for this wishlist, I’m assuming C++.
* Interface inheritance; inherit the interface, not implementation
This would be nice.
* Open class definitions; able to add methods to existing classes without subclassing (Objective-C protocols)
I actually proposed this for functions that do not have to access private members and are not virtual. Since C++ uses vtables, the virtual methods must be defined with the class, and allowing an external function to access any data member just seems wrong.
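What that proposal amounts to can already be approximated today with free functions: a sketch (Temperature and fahrenheit are invented names) where behaviour is “added” to a class without subclassing or friend access, written purely against the public interface.

```cpp
class Temperature {
    double celsius_;            // private state stays private
public:
    explicit Temperature(double c) : celsius_(c) {}
    double celsius() const { return celsius_; }
};

// "Added" behaviour without touching the class definition: a free
// function that only uses the public interface.
double fahrenheit(const Temperature& t) {
    return t.celsius() * 9.0 / 5.0 + 32.0;
}
```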
* Multi methods; implies functions are not tied to a particular class, which is good, but orthogonal to current design.
It would be very nice. But if multimethods are to use vtable lookup, they’ll need to occupy space in the vtable. That means they’ll have to be available at both class definitions, so the vtable’s layout can be guaranteed.
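For context, the usual workaround today: since C++ dispatches virtually on only one argument, a “multimethod” over two hierarchies is commonly emulated with double dispatch. A toy sketch (the shape classes and return codes are invented): the first virtual call fixes the dynamic type of one argument, the second call fixes the other.

```cpp
struct Shape {
    virtual ~Shape() {}
    virtual int collideWith(Shape& other) = 0;  // first dispatch
    virtual int collideWithCircle() = 0;        // second dispatch
    virtual int collideWithSquare() = 0;
};

struct Circle : Shape {
    int collideWith(Shape& other) { return other.collideWithCircle(); }
    int collideWithCircle() { return 1; }  // circle/circle case
    int collideWithSquare() { return 2; }  // circle/square case
};

struct Square : Shape {
    int collideWith(Shape& other) { return other.collideWithSquare(); }
    int collideWithCircle() { return 2; }  // circle/square case
    int collideWithSquare() { return 3; }  // square/square case
};
```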
* Interface specifications for constructors; very useful IMHO, or allow pure abstract static methods
I don’t quite understand this idea.
* Type inference; not OO, but c’mon, it’s 2005
See the ‘auto’ proposal. It allows a limited form of type inference by declaring a variable of type ‘auto’.
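A sketch of what that proposal buys (this is the feature as it was later standardized in C++11; the example data is made up): the declared type is inferred from the initializer instead of being spelled out.

```cpp
#include <map>
#include <string>

int sumAges() {
    std::map<std::string, int> ages;
    ages["ada"]  = 36;
    ages["alan"] = 41;

    int sum = 0;
    // Without inference this iterator type must be written out as
    // std::map<std::string, int>::iterator.
    for (auto it = ages.begin(); it != ages.end(); ++it)
        sum += it->second;
    return sum;
}
```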
* Algeberaic types; again, not OO, but c’mon, it’s 2005?
Can you give me a brief overview of what algebraic types are and how they are useful?
* Real functors a la Lambda expressions; IT’S 2005 ALREADY DAMNIT!
Not in C++. Real lambdas require garbage collection, and requiring garbage collection in a language designed to be one level over assembly is wrong.
Dynamic-extent (stack allocated) lambdas would be great, though.
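The closest current C++ gets to that is a hand-written function object (names here are invented for illustration): it lives on the stack, captures state as members, and needs no garbage collector, which is essentially the dynamic-extent lambda being asked for.

```cpp
#include <algorithm>
#include <vector>

struct AddTo {
    int amount;
    explicit AddTo(int a) : amount(a) {}
    void operator()(int& x) const { x += amount; }
};

int sumAfterAdd(std::vector<int> v, int amount) {
    // The functor is stack-allocated; when for_each returns it simply
    // goes out of scope -- no garbage collection involved.
    std::for_each(v.begin(), v.end(), AddTo(amount));

    int total = 0;
    for (std::vector<int>::size_type i = 0; i < v.size(); ++i)
        total += v[i];
    return total;
}
```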
What a strange read. For a while, I was with him. People often use OO just to use OO, not to save work. And some languages tie you down to OO, like it or not.
He had a good head of steam going, and it seemed that the article would peak and then wrap up nicely.
However, there was a train wreck. He used C++ as the basis of his criticism of OO, and then went on to sing the praises of .Net.
That’s like saying you think people are wasting money and resources buying Explorers, so they should go buy a Hummer.
But I don’t see his tragic fall at the end of the article as a reason to discount some of the points he makes. OO will not solve all programming problems, and there will likely be other trends that surpass OO in popularity. Said trend might be better or worse.
-b
Lots of people badmouth VB, but it is well known that more lines of code were written in VB than in all other languages combined. The purpose of a programming language is to get things done, and real life shows that VB is the best tool for this.
Before I read all these comments, I would like to say that 90% of development is usually internal stuff. Of that, just recently websites and web apps have gained major importance.
If *I* say that most of the development nowadays is so split between major roles, like:
1- Database
2- Backend
3- WebServer – MiddleTier
4- Applets, Flash, HTML/JavaScript
Etc.
I’m not lying. So OOP, from this point of view (little roles that end up making the systems), is trendy. It’s soooo trendy that it’s not even fun.
Many people have come to programming so recently that OOP, Design Patterns, and all these roles are still hard for them to grasp. That’s why they are always studying these terms.
A little while ago, this “OOP world” didn’t exist. Many of you expect that this “OOP world” will be with us forever from now on. The author of the article did say that he expects this trend to be no more in a few years. Maybe he is right, but no one knows for sure what would cause this trend to give way to another. I can come up with a couple of possible turning points:
1- Longhorn will have web-like graphics on the desktop, so all the flashy effects and control will be available from the desktop. People will create such features for the other OSs to compete with Longhorn, so this kind of technology will be everywhere in a few years.
2- The web has been studied. Its stateless nature is being copied. Its technologies will be supported natively in all OSs.
So, “OOP” as we know it, is about to change again. C++, then Java, then C#, then… are solving the OOP puzzle. People have tracked down what’s so interesting in OOP, and it is the use of Interfaces that protect and make available code/programs/webservices.
You, OOP “gurus”, must grow up, when the time is right.
Lots of people badmouth McDonald’s fast food, but it is well known that more McDonald’s food has been consumed than at all other restaurants combined. The purpose of eating is to take in calories, and real life shows that McDonald’s is the best restaurant for this.
Lots of people badmouth VB, but it is well known that more lines of code were written in VB than in all other languages combined. The purpose of a programming language is to get things done, and real life shows that VB is the best tool for this.
One word for you: COBOL.
COBOL actually has a lot in common with VB. It was a shitty technology when it was invented (LISP was already available), it gained a lot of ground because business monkeys could JUST ABOUT understand it, and it outlived its usefulness (having bizarre features grafted onto it – viz. Object Orientation) because of mindspace/implementation-quantity inertia.
So, “OOP” as we know it, is about to change again. C++, then Java, then C#, then… are solving the OOP puzzle. People have tracked down what’s so interesting in OOP, and it is the use of Interfaces that protect and make available code/programs/webservices.
You, OOP “gurus”, must grow up, when the time is right
Well, maybe English isn’t your native language since you hail from Brazil, but to claim that C++, Java, and C# are solving the OOP puzzle shows a very limited understanding of OOP. Interfaces aren’t very interesting, in that almost all languages provide them either implicitly, explicitly, or via things like function pointers in straight C.
Take a look at Dylan, CLOS(Lisp OO), Smalltalk, or Slate for what others are doing in the OO world. C++, Java, and C#’s saving grace are the big libraries and the tools.
I agree with his criticisms of OOP more than I disagree; however, none of the languages mentioned in the paper are “true” OOP languages – they are modern languages that attempt to mimic OOP concepts, which is why learning OOP with, say, C++ is incredibly arduous. C++ is based on C, and indeed was originally called “C with Classes” before becoming C++. From what I know, the only “true” OOP language used on a large scale outside of academia is Smalltalk (and its derivatives), wherein everything is indeed an object. But I digress.
I’m not going to comment on the entire article(s) as it would simply take up too much space, but here are some general thoughts concerning his notion of table oriented programming and OOP disdains:
“Not Table-Friendly”
RDBMS are a good example of a self-perpetuating paradigm which many would argue has long outlived its usefulness at what it does. OOP is quite new compared to the relational (and flat) model of the RDBMS, so it’s no mystery why they don’t coexist in harmony; he’s correct in saying that we are stuck with RDBMS for a while. That’s as much as I’ll agree with, though.
The notion of a “table” needs to be eliminated, as the assumption of one already ties you to the RDBMS model – almost as much as the notion of an object ties you to OOP. OOP has no idea what a table is, in as much as an RDBMS has no idea what an object is, so of course there needs to be some sort of mapping mechanism. This can actually be a good thing, though, as it allows for another “pluggable” layer which can be database agnostic: if the underlying table structure changes, only one logical unit of code needs to change with it.
I also don’t like his take on databases as a persistence mechanism; indeed this is first and foremost what an RDBMS is. The relational theory he mentions exists to provide referential integrity between tables, and to provide for data integrity as well. Most developers I’ve worked with do everything they can to move any type of business logic out of the db and into its own framework via code; again, this allows for more fluid backend changes without impacting the rest of the system. He’s not incorrect in saying that an RDBMS should do as much processing on data as can be done, but this will ultimately lead to scaling problems, and it tightly couples different domains together. He also mentions translating between high-level paradigms (relational and OOP); IMO he’s comparing two completely different kinds of notions. He says it’s spending complexity; I say it’s actually hiding complexity.
.Net has the notion of datasets, which are first class objects used to represent relational data in a more PME (property, method, event) fashion, which any programmer (OOP or not) will immediately feel more comfortable working with. So, I believe adding that layer does indeed improve the system by providing a more consistent (familiar/comfortable) model to program against.
“BLOAT”
I don’t think I’ve ever seen a more ridiculous example (besides some of the code over on dailywtf) in my entire career. I see what he’s trying to get at, but anyone can write bloated code in any language, in any style of programming. Operator overloading (in .Net) alleviates what he’s describing here, and all basic math functions come pre-overloaded (+ - * / etc.).
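The same point in a C++ sketch (the Money type and its representation are invented): the “bloat” boils down to arithmetic drowning in method calls, and operator overloading keeps the domain type while restoring ordinary notation.

```cpp
struct Money {
    long cents;
    explicit Money(long c) : cents(c) {}
};

Money operator+(Money a, Money b) { return Money(a.cents + b.cents); }
Money operator*(Money a, long n)  { return Money(a.cents * n); }

// Reads as arithmetic, not as price.Add(tax).Multiply(qty):
Money totalPrice(Money price, Money tax, long qty) {
    return (price + tax) * qty;
}
```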
This guy seems to want to move everything (back) onto the db server itself. Ideas such as n-tier development didn’t just invent themselves; they were born out of the shortcomings of single-tier solutions. Conceptually he has some very valid (and good) points, but ends up reinventing the wheel. Regardless, thanks for the links… good reading.
Neither multimethods nor prototyping are incompatible with object-oriented programming. So, that part of the synopsis is rather obtuse.
The author suggests copying and pasting as a sensible mechanism for reusability. I’m stunned, really.
The author rambles at length about complexity, when in reality object-oriented programming is largely about simplifying interfaces through encapsulation and ad-hoc polymorphism. And while many languages that provide object-oriented features are complicated, their complexity often arises from other express purposes unrelated to object orientation: mechanisms for static type safety, performance tuning, macros, backwards compatibility with other languages, reliability, and package management.
It does not surprise me that the author was primarily interested in assembly programming and that his brain settled somewhere in the era of procedural programming. A lot of the people who erect straw men for the purpose of lambasting a development methodology tend to be people who have developed their own way of doing things, or adapted to a way of doing things that was common in one period, only to find that the world has moved on without them. The entire uninteresting piece reads almost like it was assembled from one of the jumbled rants of Topmind, and like that famed net.kook, Richard Mansfield is writing zealotry while other people are writing software that makes the world work.
“Well, maybe english isn’t your native language since you hail from Brazil, but to claim that C++, Java, and C# are solving the OOP puzzle shows a very limited understanding of OOP. Interfaces aren’t very interesting, in the fact that almost all languages use them either implicitly, explicitly, or via things like function pointers in straight C.”
When I refer to Interfaces, I mean that some people don’t want to expose all the code or program for someone to code for or with. It’s like showing only the tip of the iceberg of the code or program, and telling you to work with that. That has advantages and disadvantages, thus languages can be created around such decision. I don’t think people are interested in niche languages. I don’t think Microsoft and Sun are interested in niche languages. That’s why all the nice little languages won’t become mainstream anytime soon. (There is still hope 🙂
“Take a look at Dylan, CLOS(Lisp OO), Smalltalk, or Slate for what others are doing in the OO world. C++, Java, and C#’s saving grace are the big libraries and the tools.”
OOP is very natural to me, for I use Ruby and intend to use Ruby professionally. I thank you for mentioning such great languages.
Now I’m no longer of interest to you after revealing my preferred language, because it’s not even vaguely mainstream. 🙂
It’s funny that Ruby is so dynamic that we don’t really have interfaces, but I still think that Interfaces are important to most of you. I don’t want interfaces in Ruby, by the way. I want it all, you know. 🙂
It seems that many of us are on the same page with our criticism of C++ as a problem for OOP.
Brian Hook (of Quake 2, Quake 3, id Software fame) has been having discussions about why C++ has set the software industry back 15 years (http://bookofhook.com/phpBB/index.php); just search for C++ in the forums. Now maybe that’s an exaggeration, maybe not. C++ begat Java, and now we have C#, both of which, IMHO, suffer from many of the static structural problems of C++. For the sake of argument, I’ll blame Sun for giving us Java when they could have given us something much more. In fact, there were supposedly heated discussions between Bill Joy and Gosling regarding what exactly the Java language should entail, with Gosling wanting a very dumbed-down C++ and Joy wanting to go in new directions. Gosling obviously won.
The problem with other languages is the tools support and some of them lack library support. Sun has been very conservative with the Java language and the JVM. I’m hoping the next iteration of Microsoft’s CLR (post 2.0) will address some of the issues that dynamic languages have with generic runtimes.
It sounds like the author treats object-oriented programming (OOP) like structured programming. In structured programming, you start with data structures. Functions are then written to operate on these data structures. If a function needs to operate on something, you just pass in a pointer|reference|whatever. This can certainly be effective, but if you start slinging data around, it can get hard to keep track of where changes are happening, who owns what, etc.
In OOP you have data and operations on that data all wrapped into a single entity. This implies that any operations on an object’s data should be confined to the object itself. Grabbing data members from other classes and futzing with them is not, therefore, object-oriented programming; it’s structured. To put it another way, Tell Don’t Ask (http://c2.com/cgi/wiki?TellDontAsk). I’m certainly not smart enough to say why that makes all the difference, but in my programs it sure seems to.
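The Tell Don’t Ask idea above, in a small sketch (the Monitor class and its three-error policy are invented for illustration): the caller tells the object to act, and the policy lives next to the data it depends on rather than being duplicated in every caller.

```cpp
class Monitor {
    int errorCount_;   // private: nobody outside can futz with this
public:
    Monitor() : errorCount_(0) {}

    // Tell: callers ask the object to record and to decide.
    void recordError() { ++errorCount_; }
    bool needsAlert() const { return errorCount_ >= 3; }
};

// The "ask" (structured) style would instead read the counter out and
// put the >= 3 policy in every caller:
//     if (m.errorCount() >= 3) alert();   // hypothetical getter
```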
As for people not liking object-oriented programming: objects are a powerful abstraction technique. You may not need them. As with all abstractions, objects can obfuscate things when used incorrectly. Also, OOP just seems hard. I’ve been studying the paradigm for six or seven years, and I feel like I’m just starting to get the hang of it. Perhaps that’s the best argument against OOP–it’s just too hard for the majority of us to grasp as quickly as we’re expected to.
Actually, now that I’ve been playing around with Smalltalk for the past few days (the syntax really isn’t that bad – a couple of hours to learn about six precedence rules, for the most part), and I hear that Ruby derives much of its syntax from Smalltalk and is a pure OO language like Smalltalk, I might be interested.
Now I’m no longer of interest to you after revealing my preferred language, because it’s not even vaguely mainstream. 🙂
Trust me, I’m interested. For the past couple of months I’ve been exploring very non-mainstream languages such as Self, Nice, Dylan, Cecil, etc.
Yeah, I hear Ruby really encourages duck typing. It looks like Python has a PEP that introduces “monkey typing”, which basically formalizes an interface for a method via decorators for cross-package considerations.
http://peak.telecommunity.com/DevCenter/MonkeyTyping
Neither multimethods nor prototyping are incompatible with object-oriented programming. So, that part of the synopsis is rather obtuse.
I wrote the synopsis so that’s my fault. I should have clarified by saying How about languages that support other OO features such as prototyping and/or multi-dispatch?
By “structured” programming I really meant to say “procedural”; I just couldn’t remember the term. Thank you deleted for using it in your post.
It is a multi-paradigm language that happens to be quite complicated, and it did not standardize before its adoption had exploded in various infantile forms. Thus a number of frameworks developed in C++, which the industry has carried along for legacy reasons, are themselves bad examples of how to use the language. This has had the effect of promoting the development of bad software in a complicated language, many times by people not adequately trained in its use. That most compilers are still not standards-compliant has not helped the evolution of the language’s usage either, as many vendors have maintained hacks in order to offer their wares to a larger market. If you look at the sheer amount of effort the Boost components at times have to go to in order to work with a variety of compilers, you might just appreciate how much of a pain in the ass it can be to attempt to develop something well.
I would say that C++ has long had to suffer from Microsoft, with its penchant for developing frustratingly crufty APIs and, until about VC++ 7.x, completely retarded compilers. They have, more than anyone else, I think, stunted the development of elegant C++ software and promoted its usage where other languages would often have been preferable. Java and C# then followed in propagating questionable design choices for which “OOP” is constantly blamed.
I realized that I had interpreted the intent of the text incorrectly after I had submitted the comment. The fault was mine.
“Take a look at Dylan, CLOS(Lisp OO), Smalltalk, or Slate for what others are doing in the OO world. C++, Java, and C#’s saving grace are the big libraries and the tools.”
http://www.lispworks.com/
http://smalltalk.cincom.com/index.ssp
Not as big, but no slouch either.
I’ve downloaded trials of Corman Lisp, Dolphin Smalltalk, and I have VisualWorks Smalltalk (cincom) open right now. I love the “live” environments of Dolphin and VW.
While I am not sold on all of tablizer‘s arguments, the old tired OOP vs. RDBMS argument is every bit as hackneyed:
quoth jayson knight:
RDBMS are a good example of a self-perpetuating paradigm which many would argue has long outlived its usefulness at what it does. OOP is quite new compared to the relational (and flat) model of the RDBMS, so it’s no mystery why they don’t coexist in harmony;
Actually, the relational model is in fact newer (not to mention more logically rigorous) than OOP. OOP first appeared in Smalltalk in the early 60s, while the relational model wasn’t discovered until the 70s. OOP grew out of some good thinking, although mixed with quite a bit of ad-hoc experimentation. Meanwhile, the relational model was created as a solution to the problems of ad-hoc hierarchical storage, which plagued legacy database systems. The relational model is not any more ‘flat’ than any other computing concept. Any relation has as many dimensions as it has attributes.
he’s correct in saying that we are stuck w/ RDBMS for a while. That’s as much as I’ll agree with though. The notion of a “table” needs to be eliminated as the assumption of such already ties you in to the RDBMS model
The relational model has no need of the ‘table’, which is really a misnomer for true relations. The relation as a concept is every bit as rich as OOP, with the additional benefit of offering a much better approach to constraints and data integrity than you will ever find in OOP systems.
(I will not bother to quote the rest…)
Basically jayson knight is dragging out the same old tired arguments against the RDBMS that are pretty much the party line for OOP adherents. Oh yes, *everyone* is moving business logic out of the database as fast as they can, because they find it much easier to handle it all in their UML diagram, etc… No, I rather think it is because so many developers have not bothered to learn the relational model except in the most shallow way. In the end, the relational model is all about a *logically consistent* way to manage your data. Ultimately, the most logically consistent way to do that is to make all management completely declarative, allowing the physical storage choices, optimization, etc… to be handled by the implementation of the DBMS. Relational logic is simply the logic of manipulating sets, which is really the only way to make your data truly consistent (any other way leaves you ultimately open to nasty surprises). Exactly how that is implemented is completely beside the point. Thus if you end up handling all your business logic in an OOP system, with a complicated hierarchy of classes, rule-processing, etc…, you might eventually end up re-creating an implementation of the relational model in your application code–probably, to paraphrase Greenspun, an “ad-hoc, informally-specified bug-ridden slow implementation”. 😉
There’s really no conflict between OOP and the relational model; one is about data manipulation while the other is about programming. In fact, you can create a relational DBMS with OOP. Any conflict exists only in the limited imaginations of some OOP users. Usually these are the same ones who think that a table-to-class mapping mechanism is the only way to conceive of interacting with a database.
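A deliberately naive sketch of that last point (Relation, Row, and the string-only values are all invented to keep it short): nothing forces a one-class-per-table design, since a single generic relation type can hold rows of any table, and the schema can change without touching any C++ class definition.

```cpp
#include <map>
#include <string>
#include <vector>

// A row is just a set of attribute/value pairs (all strings here,
// purely for brevity).
typedef std::map<std::string, std::string> Row;

class Relation {
    std::vector<Row> rows_;
public:
    void insert(const Row& r) { rows_.push_back(r); }

    // Count rows where attribute `attr` equals `value`.
    unsigned count(const std::string& attr, const std::string& value) const {
        unsigned n = 0;
        for (std::vector<Row>::const_iterator r = rows_.begin();
             r != rows_.end(); ++r) {
            Row::const_iterator it = r->find(attr);
            if (it != r->end() && it->second == value) ++n;
        }
        return n;
    }
};
```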
OOP first appeared in Smalltalk in the early 60s
Smalltalk wasn’t around in the early 60s. Maybe you were thinking of Simula.
I suppose that’s correct, but it is usually talked about as part of the early history of Smalltalk. Point is, object-oriented programming originated in the early 60s. http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_I.htm…
The article is nonsensical. If you understand OOP, and are willing to put in the time with a pen and paper designing your program, you will end up with a program that is easy to read and maintain.
However people used to procedural languages often find that it’s quite a radical change. Likewise early “industrial” languages like C++ are pretty awful. C# is the best out there at the moment, with Java second (this is industrial OOP languages I’m talking about).
The idea of encapsulation is to make it easy for a programmer to change an implementation while minimising the side effects. It also makes it easier for new programmers to start working without worrying about side-effects. And, in fact, even with inheritance you can still have invisible private helper methods. The use of protected members can get a bit hairy though.
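That encapsulation point in concrete form (the Ledger class is invented for illustration): callers only see record() and balance(), so the private representation can later change – say, from a stored history to a running total – without touching any caller, which is exactly the “minimised side effects” being described.

```cpp
#include <vector>

class Ledger {
    // Private representation: free to change later (running total,
    // file, database row); callers compile unchanged either way.
    std::vector<int> entries_;
public:
    void record(int amount) { entries_.push_back(amount); }

    int balance() const {
        int total = 0;
        for (std::vector<int>::const_iterator it = entries_.begin();
             it != entries_.end(); ++it)
            total += *it;
        return total;
    }
};
```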
Likewise inheritance, when used properly, eases code maintenance by reducing duplicated code and thus duplicated instances of bugs.
The problem with these two is that older programmers don’t entirely understand the reasoning behind them: in particular, I’ve seen inheritance spectacularly abused by programmers who choose to super-charge their classes on the assumption that it’ll make it easier for people to use them (think KDE vs. Gnome).
The only thing I read from this paper is a lack of understanding of a number of concepts, the most important of which is programming.
The first problem in the article is that the author forgets that all programs need to be maintained after their initial creation. This maintenance happens with the code available. Since OOP provides a great way to ensure locality and modularization, I tend to see less impact from changes in object-oriented programs than in procedural ones.
He is right, to a point, that OOP is not the savior everywhere. There are things better implemented outside of an object. An interesting approach here is the object notion as used in Haskell/Clean, where an object is nothing more than data on which certain procedures run (allowing extension of an object without subclassing).
One big mistake, and the centerpiece of the argument, concerns encapsulation and inheritance. He forgets that these are meant for compilers, not for people. In good OOP all code can be readable. Private means that you cannot use it in your program. This ensures (something which is very, very hard in procedural languages) that certain data is only read in a certain way. Many procedural programs use various ways to get at the same data, and this complicates CHANGING the code.
He is right that many people don’t write truly OOP code. It is difficult to write OOP well. This, however, is mostly a design problem. With procedural programming one has a completely different point of view on the program and can start with the main program and iterate through the procedures. In OOP one must first design the main classes of the system, and then implement them.
The point about academics is wrong. I work at a university and can tell you: C++ is taught to please employers, not the academics. In general, the people who work in this area prefer more advanced languages, like functional ones. No one in business uses them, however, so teaching them exclusively to students would be stupid.
Lastly: is OOP perfect? Not at all. It requires too much “repeatable coding” of getters/setters and whatnot, and there are things that should be easier, especially in the area of reuse. But OOP is a tool that extends procedural programming in a way that lets the COMPILER help maintain the correctness of the code.
A similar point is the difference between weakly and strongly typed languages. While weakly typed languages are easy and fast to write, anyone who has used them knows that chasing wrong types makes up a big part of the debugging in these languages; that is something strongly typed languages have the compiler do.
In short: OOP is a way to organize your data and code so that the compiler can better help you check the correctness of your code. And the easiest errors to correct (let alone find) are the ones the compiler detects.
Pascal and Delphi are hardly just a “fad”. Pascal was used throughout the eighties, albeit as a second-best to C, and Delphi was used throughout the nineties and is still used for desktop application development, at which it excels.
Further, refinements made to the Delphi language (single inheritance with interfaces, try-catch-finally blocks, a String primitive, collection classes, and more) had a huge effect on the Java language. In many ways, Java is an expansion of the Delphi language semantics with the C++ language syntax.
With regard to the “real world” comment, the job is meant to be flawless (else why bother testing), and interfaces should be well defined in functional specs and design documents. While these inevitably change, a good project manager should minimize that, and a procedural language is hardly any better here.
It’s interesting that the books published by the author deal with machine language (I wonder how he’d feel if he knew someone had developed an object-oriented assembly language!). He has a point about excess layers of abstraction, a typical abuse of inheritance that is especially visible in the enormous stack now needed for J2EE (JSP, Taglibs, EJBs, Struts, JavaServer Faces, etc.). However, this is due to bad object-oriented programming, not object orientation itself.
Also, I think his scare quotes around the “science” in computer science are particularly ignorant: most languages and language features were developed from principles discovered through computer science.
Is OOP the greatest tool ever invented?
Probably not…
Is it useful given a certain set of circumstances?
Definitely…
Anyone who flat-out says OOP is mostly hype is either:
a. upset they didn’t think of the concept, or
b. NOT GETTING IT!
Given the right set of problems, chances are the concepts of OOP are a great way to solve them, methodically and reliably.
Remember, the concept can be limited by the language you’re trying to use it in! It’s like trying to teach a person to write in Japanese (symbology) without knowing the Japanese language: they’re simply not going to get it!
I use OOP concepts to write code that controls industrial applications; we CANNOT afford a code failure in situations like this (not that it hasn’t happened!).
That all being said, I really like the comment about the right tool for the right job; it hits the mark! Of course, it assumes the user knows how to use the tool properly!
BTW, copy-and-paste code?? Has this person ever really written real code before? Copy and paste belongs in Word and Excel; leave it there!
To think that five hours ago I let myself get entangled in an OOP-vs-procedural flamewar; I thought those were meant to have gone out with the eighties!
Truly, OSNews has become the ultimate troll nursery.
However, I’ll lurk on a bit longer, if only because of the occasional intelligent voices that inexplicably appear from time to time.
LOL.
Let me guess, those intelligent voices are easier to agree with, aren’t they? 🙂
Further, refinements made to the Delphi language (single inheritance with interfaces, try-catch-finally blocks, a String primitive, collection classes, and more) had a huge effect on the Java language. In many ways, Java is an expansion of the Delphi language semantics with the C++ language syntax
Actually, it’s C# that is more of an outgrowth of Delphi and C++ than Java. Anders Hejlsberg, the head dude behind C#, worked at Borland for many years before going to Microsoft. I’ve never used Delphi, but doesn’t it have properties? I’m not sure about delegates.
Actually, this is one of the less troll-filled articles that OSNews puts out, because it doesn’t have to do with open source per se, with all the politics the fanboys want to inject into that. Notice how nothing has been moderated down yet and not many comments have even been reported for abuse.
I find OOP useful (it’s not an end all), but this guy isn’t alone in saying that the OOP marketing machine has been on overdrive for many years. Java and C# are not the end-games of OOP.
Delphi does have properties. Each property can use either a method or an instance variable to get and set its value; usually a method is used for the setter, and the instance variable is read directly for the getter, e.g.
TMyClass = class
private
  FMember : String;
  procedure SetMember (theMember : String);
public
  property Member : String
    read FMember
    write SetMember;
end;
(sorry if the spaces go!)
Delphi also has something like delegates: method pointers, i.e. procedure types declared “of object”, e.g.
TMouseMoveEvent = procedure (x, y : Integer; Sender : TObject) of object;

TMyClass = class
private
  FOnMouseMove : TMouseMoveEvent;
public
  property OnMouseMove : TMouseMoveEvent
    read FOnMouseMove
    write FOnMouseMove;
end;
and then you write
TMyForm = class
  procedure MyMouseMove (x, y : Integer; Sender : TObject);
end;

procedure TMyForm.MyMouseMove (x, y : Integer; Sender : TObject);
begin
  ....
end;

var
  obj : TMyClass;
  form : TMyForm;
begin
  obj := TMyClass.Create;
  form := TMyForm.Create;
  obj.OnMouseMove := form.MyMouseMove;
end.

(Note that because the event type is declared “of object”, the handler has to be a method of some class, not a standalone procedure; also, “class” is a reserved word in Delphi, so it can’t be used as a variable name.)
In addition to this, Delphi uses plain-text files to store the details of the GUI; these are converted to code at compile time. It’s a bit like XAML, except that it works at compile time and uses an INI-type format instead of XML.
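Roughly, such a form-description (.dfm) file looks like this; the component names here are made up for illustration:

```
object MainForm: TMainForm
  Caption = 'Example'
  object OKButton: TButton
    Caption = 'OK'
    OnClick = OKButtonClick
  end
end
```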
C++ has some very rough edges, but what’s the alternative … plain C? Most would answer that you can do the C++ stuff in C with some tricks; apart from those tricks chewing up a lot of your time, that means they have a problem with C++, not with OOP in general. Java already looks a good deal cleaner. In general, I think virtual machines, with properties like just-in-time compilation, solve some of the problems you see in C++, such as binary compatibility.
If you want a language that lets you code relatively low level (no garbage collector, normally compiling to a native executable) but supports object-oriented programming, then C++, Delphi, etc. may not be perfect, but they are still better than the alternatives. Yet, with all the hype around Java-style languages, we don’t see new ones being created.