This series is aimed at programming language aficionados. There’s a lot of very abstract writing about programming languages and a lot of simple-minded “language X sux!” style blog posts by people who know next to nothing about programming. What I feel is sorely missing is a kind of article that deliberately sacrifices the last 10% of precision that makes the theoretical articles dry and long-winded but still makes a point and discusses the various trade-offs involved. This series is meant to fill that void and hopefully start a lot of discussions that are more enlightening than the articles themselves. I will point out some parallels in different parts of computing that I haven’t seen mentioned, as well as analyze some well-known rules of thumb and link to interesting blogs and articles.
Printing “Hello world!” to the screen can be done in a few lines of code in just about any language and is therefore the first program most beginners write.
Of course it doesn’t teach you much about programming, and proponents of language A, in which hello world is cumbersome, will tell you that their language of choice “scales better”, i.e. it makes writing huge, complex programs easier than language B, which makes hello world easy. That means the pain of coding in B grows asymptotically faster than the pain of coding in A for large applications.
Another concept concerning asymptotic behavior is big O notation.
It is a way of describing how much resources (CPU cycles, memory) an algorithm uses for some very big input size “n”.
For example if the algorithm has to sort a list of names alphabetically “n” would be the number of names on that list.
The figure shows O(n^2), O(n) and O(log n).
It is no accident that the curve of O(log n) starts higher:
often more complex algorithms are required to achieve better asymptotic behavior and the additional bookkeeping leads to a slowdown for simple cases (small n).
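To make this concrete, here is a small, contrived sketch (my own example, not taken from any real library): linear search needs no preparation at all and is fine for a handful of elements, while binary search requires the extra bookkeeping of keeping the data sorted, which only pays off once n gets large.

#include <algorithm>
#include <cstddef>
#include <vector>

// O(n): dead simple, no preparation needed -- fine for small n.
bool linear_contains(const std::vector<int>& v, int x)
{
    for (std::size_t i = 0; i < v.size(); ++i)
        if (v[i] == x)
            return true;
    return false;
}

// O(log n) per lookup -- but only if you pay the bookkeeping cost of
// keeping the vector sorted, which is wasted effort for tiny inputs.
bool sorted_contains(const std::vector<int>& sorted_v, int x)
{
    return std::binary_search(sorted_v.begin(), sorted_v.end(), x);
}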
Connecting the dots, language A would be the O(log n) curve that starts high but grows slowly while language B would be O(n^2) which starts low but grows fast. I’m not claiming that there actually are languages for which “the pain of developing the app” grows exactly O(log n) or O(n^2) with the “size of the app” – that wouldn’t make much sense anyway since I haven’t given a precise definition of “pain” and “size”.
You can choose the algorithm based on the complexity of the problem, combining several algorithms into one program and using each where it works best.
One example of this is GMP.
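The general idea looks roughly like this (a toy sketch of my own, using sorting instead of GMP’s multiplication routines): below some threshold the naive algorithm wins because it carries almost no overhead; above it, the asymptotically better one takes over.

#include <algorithm>
#include <cstddef>
#include <vector>

// O(n^2), but with almost no constant overhead -- great for tiny inputs.
void insertion_sort(std::vector<int>& v)
{
    for (std::size_t i = 1; i < v.size(); ++i)
        for (std::size_t j = i; j > 0 && v[j - 1] > v[j]; --j)
            std::swap(v[j - 1], v[j]);
}

// Dispatch on input size, using each algorithm where it works best.
void hybrid_sort(std::vector<int>& v)
{
    const std::size_t THRESHOLD = 16;   // made-up crossover point
    if (v.size() < THRESHOLD)
        insertion_sort(v);              // small case: skip the bookkeeping
    else
        std::sort(v.begin(), v.end());  // large case: better asymptotic behavior
}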
The programming language analog of this is using more than one language, e.g. one for exploratory programming and a different one for implementing the parts that are already well understood and unlikely to change. The right tool for the job and all that. There is however the added cost of having to switch data structures/algorithms or, in the case of programming languages, having to learn and integrate more than one language.
If you want to avoid this cost there are of course languages that are not aimed at a specific problem size or domain and are moderately good/bad at everything, corresponding to the straight O(n) line. Whether or not this makes sense depends on who you ask, see for example Stallman’s “Why you should not use Tcl” and Ousterhout’s reply.
Assuming that it is possible to adapt the language to the problem, e.g. make the language stricter or less strict depending on the size of the problem, what’s the right default?
One of Larry Wall’s principles of language design is that “easy things should be easy and hard things should be possible”. I think the “hard things should be possible” part means that you should not cripple the language so severely that you’re not able to adapt it to the problem. So why should easy things be easy? Well, first of all programs start out small. But more important is the relative cost of adapting the language to the problem:
A couple of lines telling the compiler to be stricter are not even noticeable if your program is a million lines long. On the other hand, if you’re writing a one-liner and need ten lines to tell the anal retentive compiler to shut the fuck up, it’s obviously no longer a one-liner. Since the relative cost of adapting the language is far higher for small problems, a programming language should be optimized for solving small problems while still being able to adapt to larger code sizes.
There’s also a psychological component to it.
It just feels wrong to spend great effort on a minuscule problem, even if it makes writing programs easier on average.
One example is static typing vs duck typing.
If your program is written in a dynamic language and grows beyond a certain point you probably need to make sure that the types of various variables are correct. So the program could even get longer than the same program written in a language with concise static typing.
Still it doesn’t feel nearly as wrong as writing templates in C++:
<oh compiler <please forgive me> but I <your puny human user> am unfortunately unable <to tell you <the exact type of this> > >
I’m sure a lot of people will disagree but to me all this fuss just to express the type equivalent of a shrug is ridiculous. The effort should be roughly proportional to the amount of information you want to convey and if hello world needs more than one line the language designer probably didn’t understand this or designed his language with only big programs in mind.
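Here is a contrived sketch of the kind of ceremony I mean (my own example; upcoming C++ drafts promise ways to shorten this):

#include <map>
#include <string>
#include <vector>

int count_entries(const std::map<std::string, std::vector<int> >& table)
{
    int n = 0;
    // Spelling out the full iterator type even though the compiler already
    // knows it -- the type equivalent of a shrug, written out longhand.
    for (std::map<std::string, std::vector<int> >::const_iterator it = table.begin();
         it != table.end(); ++it)
    {
        n += static_cast<int>(it->second.size());
    }
    return n;
}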
Why languages that are well suited for big programs suck for hello world and vice versa will be examined in the next article.
About the author
I’m Benjamin Schuenemann, a physics student from Germany. Programming languages are one of my hobbies. My main programming language is Python because I don’t forget it quite as fast as C, C++, Matlab or Perl.
I’ll start simply: what?
First, this article’s title is horribly misleading. ‘Hello World’ Considered Harmful – okay, it started out alright, but only 20% of the article is actually about why the author thinks hello world programs are harmful. The 20% that is about it is not very clear, and goes into a complex math discussion that seemed to rely on poor assumptions to begin with.
By the way, Big O notation is not “a way of describing how much resources (CPU cycles, memory) an algorithm uses for some very big input size n.”. It is used to describe the complexity of an algorithm. So don’t invoke computational complexity theory if you don’t really understand it.
The author’s discussion on asymptotic behavior came out of left field and as far as I’m concerned had nothing to do with the point of the article.
Then, finally, the discussion goes into static vs duck typing – WTF? Why? It in no way clearly supports the title of this article.
This article is a mess. It’s amazing it’s on the front page.
For me it’s more like a big wall of text.
agree
“I’ll start simply: what?”
That is totally what I thought.
“I’m Benjamin Schuenemann, a physics student”…
Why do physicists always mistake themselves as computer scientists?
I don’t hold any ill will against the author, but he clearly does not know his Computer Science.
He is attempting to apply methodology/terminology for describing algorithms to programming languages. While that is an interesting abstract idea, it needs to be better defined and he does not take it anywhere meaningful. Here is a much more interesting metric: http://shootout.alioth.debian.org/u64q/shapes.php
Which editor let this slip by?
I’m relieved. Somebody understood my idea and finds it interesting.
Could you take it anywhere meaningful on one page?
Honestly, this is not meant as a “Oh, so Mr computer scientist knows better, well then show me, Mr computer scientist” kind of reply. I would love to see people evolve this idea. That’s why I wrote this article.
First off, sorry for being an elitist bastard. Your presentation of such an idea probably would have been better received in a forum somewhere. That way you could have evolved your idea before presenting it to the world – without the authoritative context implied by an article.
I think your insight, that there are analytical methods for evaluating programming languages, is a valid and true point. The problem is that big-O notation is not very suitable, let me try to explain why.
Specificity: since big-O notation evaluates algorithms, not languages, it is language independent. If you are trying to apply the underlying principle of big-O notation more generically: the axes of the big-O graphs you present, although unlabelled, have cost on the y-axis and problem size on the x-axis. What you should have done was replace cost and problem size with the properties of languages that interest you. I think you will find, however, that one line graph cannot define a language, as the result depends on the program examined.
One approach often taken (see the link in my last post) is creating a scatter plot where the data points are programs written in a language, and looking at trends.
Now, I understand your point is that features of a language that make it suitable for a small project (your example was a dynamic type system) may actually make it less ideal for a large project. But big-O notation cannot really be applied to this problem in a tractable way.
If you are interested in this subject there are hundreds of papers in the IEEE journals of software engineering and ACM journals, they would be a very good place to start and propose many metrics you may find insightful. Definitely check out my original link.
I didn’t mean to imply an authoritative context:
“This series is meant to fill that void and hopefully start a lot of discussions that are more enlightening than the articles themselves.”
Maybe that was too subtle.
Also I wasn’t really trying to compare resource usage/verbosity of programming languages. I was comparing the effort required to solve a set of problems of varying sizes in a fixed programming language to the hardware requirements of a fixed algorithm operating on a set of inputs of varying sizes.
Still your link is interesting and I urge everybody to read it even though it can obviously only measure programming language _implementations_, not the languages themselves. So the maturity of the specific implementation tends to be an important factor.
You’re obviously right in that you cannot literally apply big O notation to programming languages. You could define a set of reference problems. Where it really falls apart is Turing completeness:
You can implement Ruby in Java and vice versa in a finite amount of code so they cannot really scale differently. What this means is discussed in the next article.
I think this kind of writing cannot be published as a paper. I want readers to develop a better intuition about programming languages and to better understand where some rules of thumb come from. You cannot write a paper along the lines of “X is sorta like Y”. You need to find a very small, very limited topic and really beat it to death, write about it till the last corner case and the last bit of ambiguity are removed.
That’s no fun.
I was waiting for somebody more knowledgeable than me to write an article. Nothing happened. So I sat down and wrote the kind of article that _I_ enjoy. If the vast majority of OSNews users are just annoyed by this I basically have two options:
-publish somewhere else
-change the style of the articles
The latter is problematic because you cannot write a good article if you hate what you’re writing. I can make longer or shorter paragraphs or cut back on my use of f words. What I cannot do is completely change the topic and line of reasoning. If people hate informal articles there’s nothing I can do to help them.
But maybe they’re just a vocal minority – I don’t know. This may sound like “Waaah, if you don’t like me I’ll take my ball and play somewhere else!”. That’s not how I feel about this. I enjoy what I’m doing and if just a single person likes my articles it was totally worth it.
Did I misunderstand your advice?
Were you thinking of a specific forum?
Just a few thoughts:
“…Maybe that was too subtle.”
Unfortunately, any article read outside a close group of friends is inherently going to be viewed as attempting to be authoritative.
“Also I wasn’t really trying to compare resource usage/verbosity of programming languages. I was comparing the effort required to solve a set of problems of varying sizes in a fixed programming language….”
Verbosity is somewhat correlated with effort. Not entirely, as I can take the same amount of time to write a 100-line Haskell program as an 800-line Java program that does the same thing. Perhaps exploring the best way to quantify effort would be a good start. Although this is of course hard because humans and their varying skill levels are involved.
“it can obviously only measure programming language _implementations_, not the languages themselves.”
I agree; perhaps this merits a discussion of whether it is possible at all. I would find it unsurprising if measuring implementations is all that is possible. There are of course many other ways to compare languages, such as their position in the Chomsky hierarchy and their family (procedural, functional, logic, etc.).
“You’re obviously right in that you cannot literally apply big O notation to programming languages. You could define a set of reference problems. Where it really falls apart is Turing completeness..”
If you are examining a set of example implementations of an algorithm, it may be valuable as a metric for understanding the “effort” of the average programmer using this language, even if it says nothing about the theoretical construct itself. I would also like to point out that your use of the term “Turing completeness” is incorrect; you may want to look into it in more detail.
My advice concerning presenting ideas such as yours would be to read many of the informal proposals by Edsger Dijkstra. Also, don’t be afraid to take another page if that is the cost of being more precise. The difference between philosophy and theoretical computer science is mathematical formalism; people like it.
“Were you thinking of a specific forum? ”
Check out the IRC channels on Freenode, Usenet groups and Google Groups. I didn’t have a specific one in mind.
If you would like I would be willing to critique your future articles and at least tell you what I would guess people would not like.
Take care.
Thanks for writing the article, but I agree with others that your approach tends to obfuscate your ideas. The central idea, that it’s possible to make a systematic appraisal of the applicability and efficiency of different languages for different types of problem, is interesting. I wonder if you needed to get into the technicalities of Big O, although I confess, I don’t fully understand it. A graph showing that the complexity of different code increases for a given language when implementing different algorithm types might have sufficed as an explanation.
A lot will depend on where you intend to take your future articles and on your intended audience. Will this be an explanation of your idea for the general reader or will it be a treatise that specifies a precise algebraic method of programming language analysis?
Although I’m intrigued by what you’re proposing, I have to agree with the other commenter when I say that, stylistically, the current article is neither fish nor fowl in its approach. Good luck and I’ll look out for your future attempts.
O(n^2) is complex math?
Please don’t think that just because I tried to give a short explanation of big O notation I do not understand asymptotic behavior. In fact I mentioned the phrase and wrote that it’s about growth. Don’t let the first sentence throw you.
Like I said, I wanted to sacrifice some precision to make the article less dry and long winded. Maybe I chose the wrong trade off and got people bitching instead of thinking…
Your article doesn’t make any sense. It is not organized in any coherent fashion. It goes off on wild tangents and draws conclusions out of thin air, and when conclusions aren’t drawn out of thin air, they are based on faulty assumptions. That is why people are bitching.
There is no possible way an intelligent discussion could be started from that article, it’s just too poorly written. Sorry.
It’s been a few years since I had to assess the “big-O” (I hate that name) of various algorithms, but I don’t think it was ever referred to as a way to describe the complexity of an algorithm. It certainly is used in complexity theory, but it was indeed used to describe the resource usage of various algorithms under the worst and average case scenarios.
Or in layman’s terms, it essentially described an algorithm’s efficiency.
It describes the growth of resource requirements (cycles, memory, storage etc.) depending on the growth of input data – so yes, it describes how efficiently an algorithm works. This is, as I mentioned, always connected to some kind of input data, because algorithms are usually considered to process some input data to generate output data. I’m sure many “modern” programmers haven’t ever heard of this. By the way, where is my favourite O(e^x) = exponential of today’s “modern” applications? And I think I missed O(0) = zero and O(1) = constant in the comparison… 🙂
I’ve always understood “big-O” to be an estimate of the number of operations, nothing more.
Of course this has nothing to do with efficiency or resource usage; and “big-O” notation is mostly meaningless unless you know the cost of the operations and the range for “n”.
Of course nobody mentions the cost of operations or the range of “n”, which makes it impossible to compare algorithms in this way. For example, you can’t tell if “k*n” is less than “l*(n^2)” without knowing what “k”, “l” actually are and the expected range for “n”, and if you do know what these values are then you can probably skip the useless “big-O” stuff and provide meaningful information direct from benchmarking.
To use a dodgy estimation in a dodgy way makes things worse. For example, if you can do something in one language with one command (that nobody can ever remember, that takes 3 hours of searching through documentation to figure out), then you can’t assume it’s better than a different language where you need 10 commands to do the same job (where these commands are all frequently used commands that everyone understands).
-Brendan
This is correct, Brendan. There are other notations that describe how the operation count behaves depending on the input amount n, such as Theta and Omega, which pin down other reference points: Omega gives a lower bound and Theta a tight (two-sided) bound. See a comparison here:
http://en.wikipedia.org/wiki/Big_O_notation
O notation gives you an upper-bound estimate of the growth behaviour, Omega a lower-bound estimate, and Theta a tight bound (both at once).
So O is always safe because it makes things look bigger than they are. Example from real life: a computer costs 899,95 Euro. That makes it 900 Euro, which makes it 1000 Euro. 🙂 It’s about relations, not about actual precise values.
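For reference, the usual formal definitions behind these informal descriptions are (standard textbook material):

f(n) = O(g(n))      \iff \exists c > 0, n_0 \text{ such that } f(n) \le c \cdot g(n) \text{ for all } n \ge n_0
f(n) = \Omega(g(n)) \iff \exists c > 0, n_0 \text{ such that } f(n) \ge c \cdot g(n) \text{ for all } n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))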
You can of course calculate the exact number of operations per algorithm in the best and worst case and derive the average case from this, given that you know how many operations the statements really “cost”.
The O notation gives you a hint how resource requirements or time requirements will grow depending on the input – will it grow linearly, quadratically, or even exponentially? This is mostly interesting when you’re developing an algorithm that operates on huge amounts of data, and you want it to be processed as fast as possible. A searching algorithm with quadratic complexity would be bad then, one with logarithmic complexity much better. The O notation helps you to put your algorithms (in case you developed different ones) into complexity classes.
You let n run to infinity and look at how complexity grows. Of course n does matter. Some algorithms that are efficient for huge values of n are – at the same time – not efficient for small values of n. While the describing function can be evaluated for specific k, l and n (as in your example), that doesn’t matter for the O of that function (at least not for n). The O notation is in most cases interesting for questions like “how will complexity develop as n grows large?” The result is something like the advice: “Try to develop it so it runs in linear, or at least logarithmic, time. Your algorithm of quadratic complexity isn’t efficient enough because it will take much too long for the amount of data to be processed.” The O notation operates on a class of functions to characterize how requirements will develop; it doesn’t tell you anything about the real requirements.
I do agree with this, although it cannot be measured in O. 🙂
Quote from near the end of the article:
Still it doesn’t feel nearly as wrong as writing templates in C++:
<oh compiler <please forgive me> but I <your puny human user> am unfortunately unable <to tell you <the exact type of this> > >
I’m sure a lot of people will disagree but to me all this fuss just to express the type equivalent of a shrug is ridiculous. The effort should be roughly proportional to the amount of information you want to convey and if hello world needs more than one line the language designer probably didn’t understand this or designed his language with only big programs in mind.
There are two wrong things implied here.
The first is that you need to use C++ templates (or more generally some advanced C++ concepts) if you want to just write a C++ “Hello world”.
This is wrong; in C++, exactly like in C, a “hello world” app will consist of a few lines:
#include <YourFavoriteLibrary>
int main()
{
your_favorite_function("hello world");
}
See, no templates. Intentionally I made this example library-agnostic to emphasize that if anything you’d be really discussing the standard library, not the language itself.
Even if you use the standard C++ library (which starts making your hello world a not-representative use case as most real-world apps use other libraries for I/O), there still are no templates involved. It would be
#include <iostream>
int main()
{
std::cout << "hello world" << std::endl;
}
So, still no templates around. It’s true that “cout” is an object, not a function, which is harder for a beginner to wrap his mind around, but the benefits are massive enough to justify that (but we’re getting offtopic).
The other part of your comment is about templates syntax:
all this fuss just to express the type equivalent of a shrug is ridiculous.
A template is an undetermined type A that depends on parameters B,C,D,… and the C++ syntax for it is just:
A<B,C,D,…>
so I’m not sure what your critique is. And if you want to make a shortcut for a template notation that you use repeatedly, typedefs are here for that.
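For example (a toy illustration of mine):

#include <string>
#include <utility>
#include <vector>

// One typedef, and the nested template only has to be spelled out once.
typedef std::vector<std::pair<std::string, double> > PriceList;

int main()
{
    PriceList prices;                                   // instead of the full spelling
    prices.push_back(PriceList::value_type("apple", 0.5));
    return 0;
}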
Also do notice that templates are an aspect of C++ that:
1) You don’t need to touch them if you don’t want to, and
2) Basically no other programming language has an equivalent of, so C++ can hardly be blamed for at least offering it. No, C macros don’t allow 0.1% of what templates allow, neither do Java generics, nor do recent Fortran polymorphism improvements.
So what are C++ templates? Basically they are trees of types. Think of A<B,C> as a tree with leaves B,C atop a common trunk A. Then you can produce any tree, with any types as labels, by nesting templates: like A<B,C<D,E> >. Then people realized that C++ allows you to perform any operation (it is Turing complete) on these trees at compilation time. This is what makes C++ so uniquely powerful, but few people understand it.
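A classic toy example of such a compile-time computation (not something you’d write in real code, but it shows the principle):

// The compiler evaluates Factorial<5>::value while instantiating templates.
template <unsigned N>
struct Factorial {
    static const unsigned value = N * Factorial<N - 1>::value;
};

template <>
struct Factorial<0> {                      // the specialization stops the recursion
    static const unsigned value = 1;
};

int main()
{
    static const unsigned f = Factorial<5>::value;   // 120, known at compile time
    return f == 120 ? 0 : 1;
}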
Great comment!
And besides, really, statically typed languages provide such a benefit in both small and large-scale projects that dismissing them as a “hindrance” is rather silly.
Sure, there are lots of reasons to prefer dynamic typing, but there are lots of shortcomings to that, too. As long as you’re ready to deal with them…
I, for one, hope the day I have to maintain a truly large scale Python project never comes.
Yeah, that’s a good explanation of templates.
The compile time mechanics of C++ are even Turing complete IIRC. The problem is that you _have_ to tell the compiler all those things, even if you don’t care and don’t want to care.
Also I was using hello world loosely to refer to smallish programs.
AFAIK, Ada generics are about as powerful as C++’s. I will take ML and variants such as Ocaml’s parametric polymorphism over C++’s generics any day of the week. I do agree that C++’s templates are better than Java’s, Fortran’s and C’s.
I agree completely: I mostly used C in college. So far, for my job, I’ve used Ada, C++ and Java. I far, far prefer Ada to either of the other two. Ada is much, much clearer and more readable than C++. C++ can get nasty: when all those classes, angle brackets, asterisks and ampersands start flying around, it can become all but impossible to figure out what the hell’s going on in a C++ program. I love Ada’s simple, rigid, highly explicit typing system – because you can look at it and tell almost instantly exactly what’s going on, exactly what everything’s type is, and what you’re doing to it.
(I really, really hate C++: I could rant about how convoluted, illegible and… generally horrible that language is for a very long time.)
The first thing you thought he implied is ridiculous. There is no way he implied that. It was just a random rant against the readability of templates. I know they’re powerful and awesome, but used / abused liberally they kill readability.
It’s true that cout is an object and not a function. What is not true is that there are no templates in the above example. The << operator is actually a template parametrised by the argument (the “Hello World” string).
Yes, I’m nitpicking, but I’m actually trying to make a point: templates can be quite tricky to code, but they are usually quite easy to use.
If I’m not mistaken they’re overloaded functions, not templates…
Well, std::cout is of type ostream, which in turn is a typedef for basic_ostream<char>. The template is necessary to allow basic_ostream to work on both normal chars and wide chars (wchar_t).
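Roughly, the relevant declarations in the standard library look like this (heavily simplified – the real template also takes a character-traits parameter; this is a sketch of what the ostream headers declare, not code to compile yourself):

namespace std {
    template <class CharT>
    class basic_ostream;                       // the real class template

    typedef basic_ostream<char>    ostream;    // what cout is
    typedef basic_ostream<wchar_t> wostream;   // the wide-character variant
}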
“My main programming language is Python because I don’t forget it quite as fast as C”
Now that is some interesting statement.
Yeah, I like to be honest and open about my bias.
Exactly what was on my mind! This is the only interesting thing in the all article.
Stupid observation you probably already know: while a lot of Math and Physics researchers usually do a lot of programming, and often become fairly well-versed in one language or another, it usually isn’t C. Fortran is common, and you see your Perl users and Python users, and Ada, for people doing government work. It’s not really surprising that someone who’s in some other, non-CS field wouldn’t be good with C, or other curly-bracket languages.
Sorry if you hate my article.
Like I stated in the introduction I deliberately sacrificed some precision to make it shorter.
I also wanted to combine ideas in new and (at least to me) interesting ways. These two factors make my writing very vulnerable to criticism. I expected that.
I didn’t set out to write the most perfect explanation of big O notation or templates. That was never the point of this article. I wanted to give the noob reader a rough idea of the concepts that’s short enough to be skipped easily by the pros. Everybody knows how to use Google and Wikipedia.
Well, if everybody completely missed the point of this article I guess my writing does suck.
All I’m asking is this:
Don’t think I’m an idiot because I didn’t write a 10 page article that reiterates a simple idea for the 10billionth time. The facts are obvious and you don’t get criticized. It’s easy to write this kind of article. Easy and pointless.
Give me the benefit of the doubt.
I spent countless hours thinking about and rephrasing each and every part of this article.
Another option you have would be to thank the people who pointed out the errors, imprecisions, and flat-out incomprehensible parts of your article, then keep their comments in mind when you write the sequel.
Re-quoting you:
Still it doesn’t feel nearly as wrong as writing templates in C++
all this fuss just to express the type equivalent of a shrug is ridiculous
With such sentences, you’ve been asking for it!
You forgot the “to me”.
But yeah, that probably wasn’t all that smart.
I wasn’t exactly sure where on the scale from “obvious and boring” to “provocative” my article would be.
I prefer a flame war to a shrug any day of the week.
It seems as though a lot of people were left scratching their heads over this one. I was left wondering if it was an analogy, because big O notation has nothing to do with programming languages. It is about algorithms, and algorithms are language neutral (though their implementations clearly aren’t language neutral).
You may want to look at things like context-free grammars and non-deterministic finite automata in order to understand how programming languages work. Once you can explain that in a clear and concise manner for the layman, maybe you’d have a happier following.
It is an analogy.
How is that not obvious?
Eh, I thought it was decent. There isn’t enough discussion of programming languages here. I didn’t realize that there were this many pedantic programmers unable to adapt to a loose, informal treatment of even the subject of the programming language landscape.
But I think you have to define what “effort” is. “Effort” means typing more? or “Effort” means spending time trying to understand awkward language features?
For example… I think that Java (1.4) verbosity is a plus, not an effort. Typing is a very easy task, almost no cost.
Now… try to decipher Perl or C++ code… that’s TRUE effort!!!
Exactly.
Effort is really hard to define because it depends on tons of things:
-your writing/reading speed
-your debugger
-the lifetime of your program
-how often you add features
…
Even if you just consider typing you have to ask yourself: do shift keys count as typing? If they do, Perl doesn’t come out nearly as short.
I’ll touch upon this in a future article.
Of course, if you’re the only person on OSNews who likes my articles I’m not sure how many more there’ll be.
Criticism first: the title is a bit misleading. Now for the praise: thanks for the article. I am way behind the times in relation to programming languages, but what you wrote made passing sense. I suppose things could have been said in better ways, but I didn’t take the time to write it. The f-bomb could have been left out, not that I am so prudish but it really doesn’t fit in a tech article. I do appreciate you taking the time to write something I haven’t seen written before, and I understand you’re trying to say that one should use the right tool for the job rather than using C4 to open the car door because that is what you had and knew about.
The hidden value of “Hello, World!” is that it gives delusions of understanding
Seriously, if you have ever brought up cold iron for the first time, fought to get the memory refresh timing right, debugged the hardware handshake between the chips, gotten vertical sync on the video tube, and had actually seen “Hello, World!” appear you would understand the tremendous amount of stuff that has to just work for that bit of piffle to appear.
A focus on “Hello, World!” as a programming exercise is just a result of lazy programming instructors and not a reflection of any particular language.
P.S.
We are not amused by profanity and don’t need it to keep our attention.
Who is “we”? Please speak for yourself. Personally, I love profanity 🙂
What the heck?
I might as well post an article about black holes on physicsnews.com, why not? I don’t fully understand the topic and I can’t even explain myself or add anything to it, but that didn’t stop this article from being published, did it…
It’s articles like this that make me look for better news sources.
Even if you don’t fully understand black holes, you can give your opinion and even make a point! Clever points of view are always interesting to read.
Hey, Democritus explained atomic theory 2500 years ago… I think he didn’t know so much about physics… xD
But… this article didn’t qualify as a “clever point of view” so it has gotten a whole lot of deserved flaming
And yes, there’s no need (or desire) for profanity in something technical that you want to be taken seriously: if something is fictional in nature, or non-fiction but a historical account, it can make sense, depending on the people and the story, but it is wholly inappropriate in any technical article that isn’t about profanity or something profanity-laden. Ok, I can see someone now attempting to justify profanity in the topic of computer hardware, software, working with them, or developing them, but… please don’t!
I’d be interested in your thoughts on static languages now gaining dynamic capabilities.
For example, adding the dynamic typing of Groovy to Java makes it such an easier language to use, allowing you to mix static and dynamic types and to write “scriptlets” like you do in Python. The same with the beta of C# 4.0, which adds dynamic typing.
Of course, earlier languages like Delphi and even Microsoft Access (VBA) allowed this, so these 2 languages are only 13 years behind.
Will adding dynamics make these languages easier for both hello world and larger, “enterprise” (i hate that word) apps, letting people stick to one language for both?
“Adding features” is THE problem… that’s why C++, C#, Perl, etc sucks.
Programming languages MUST be simple and straightforward (like C and classic Java). If you want features write a library, don’t mess with the language.
Well, yes and no.
Adding features _later on_ makes debugging a nightmare.
That’s the problem with extending languages which is otherwise a very smart evolutionary path – just see the success of C++.
Having features in the language right from the start that noobs don’t use is something entirely different.
Ideally you would put as much as possible into libraries. Google for “growing a language”, you might like it.
Now as for making static languages more dynamic:
I think adding optional static typing to dynamic languages is a better approach IF – and this is critical – IF you have coding guidelines in place and enforced. That’s the short answer, the long one is in an article a few weeks down the road.
you wrote: “Now as for making static languages more dynamic: I think adding optional static typing to dynamic languages is a better approach IF – and this is critical – IF you have coding guidelines in place and enforced. That’s the short answer, the long one is in an article a few weeks down the road.”
So just like Perl 6 is doing now, adding optional static typing. I think the Parrot open-source “Common Language Runtime” and Perl 6 will be huge in about 2-3 years time. I’d be interested in that article.
I’ll check out “growing a language.” You have a good idea for the article, but just need to tighten it up a bit – it tended to meander. Sorry to be critical, but I double-majored in Poetry, and can be a bit anal sometimes 🙂
That’s Java propaganda.
The plethora of features was never a problem with C++. The problem was the crappy standard library, and manual memory management.
C++ is pretty good these days (Qt is free!), and with the next C++ standard we will get new *features* which will make C++ better (i.e. reduce the verbosity of C++ programs, and make hacking it more fun).
These days I see a market for two languages: C++ and Python. With those, you’ve covered pretty much everything and you can just ignore the middle-goers like Java or C#, and the restricted & “manual” world of C.
Ok so it wasn’t very well written (better than my German at least) but I think the article covers a very interesting subject.
Firstly, it is very difficult to make a value judgement about any given programming language without having a lot of experience with it. You can always find an equal number of people loving and hating the language, with good reasons for each. Also, you find a tutorial explaining how in SuperLang you can write some contrived piece of code using only 1/4 as many statements as in Java and also with much more clarity, and it all looks good, but in your heart you know this is just an example specifically chosen to make the language look good.
I’m also interested in the sort of “pain scalability” and static vs. dynamic typing things mentioned in the article. For example, I like Java, it’s a cool, useful and complete language which seems to help one avoid many common problems. However, the massive framework of code dealing with static typing (including e.g. Generics, which look like ass), checked exception handling, and code organization (why does your hello world function have to be in a class!?) is rather bothersome. On the other hand it is also all there for a reason, and as much as I love Ruby, Python etc. there are many times when you shoot yourself in the foot and suddenly realise the benefit of static typing!
Sometimes I get the feeling that the perfect programming language might have been invented already and is out there somewhere, but how on Earth would you know it when you saw it?
The “Why you should not use TCL” link was interesting. I thought it was rather notable that back in 1994, RMS was telling people not to use things for *technical* reasons, rather than resorting to fear tactics involving human slavery. Of course, even then, he was telling people not to use things. But still…
What the world really needs is a “Why you should not listen to RMS” article.
No one has any idea what the article is about.
It seems like a random bunch of points we all kind of think of. There are a few topics in there… none of which are explored at any level to make it worth a read.
My suggestion is to pick one point and write an article on it. Give examples… counter examples… maybe draw a conclusion?
It got kind of interesting when you talked about “easy things should be easy and hard things should be possible”.
Might I suggest you should have focused exclusively on that concept. Show us some concrete examples of what you mean. Heck, you could easily tie that in with dynamic versus static typing.
Big O… that just came out of nowhere without any purpose.
Just my 2 cents.
Bill… I will go with you on both points.
I have never been a good theoretician or some genius programmer. So I’m amazed that most of the commenters here, so knowledgeable in their understanding of formal theories of algorithms, languages and grammars, do not understand, or even try to understand, simple ideas.
I may offend a lot of people here, but I just registered to comment on this article. Since the writer can be shredded to nothing, so can the commenters, I guess. So here it goes.
I found most of the comments many times more idiotic than the article, with no attempt to understand what the writer tried to explain, even if they may have decided it’s not worth much.
Firstly, big O and similar notation has a formal meaning in defining an algorithm’s complexity. But how much intelligence does it take to understand that the basic concept behind it is expressing the often uneven growth of the Y axis (the effort/complexity/resources of solving the problem with a given algorithm) in terms of the increase of the X axis (the size of the problem)? Now, going back to the idea: what’s wrong with saying that there is a big O notation for the effort of a whole programming solution (not a single algorithm) against the problem size in a particular language? It might not be deterministic, and the process of determining it could still be unknown, but the concept is not invalid.
The question of why the apple falls is much more significant than a lot of the advanced math on gravity that came after it. The number of people who can do the math is, I think, far larger than the number who can come up with concepts that are original and meaningful.
As for the rest of the article: why do you need to retype the template parameters on the right-hand side in Java? Why is C++ only now getting the option to define aliases for big template names? Why do you have to write static on a method in a static class in Java?
C++, due to its evolutionary nature and for some unknown reason, has a lot of unnecessary complexity (I am not really an expert); it’s what the expert committee agrees on and is trying to solve with the latest draft, so get off your high horses and acknowledge it. I know C++ templates are an amazingly powerful but often complex thing, but they sure are being made better, so learn to take criticism about them.
Java has an awful lot of verbosity that was never necessary to make it safe, but maybe they were really making it for idiots. A lot of features aren’t there in Java because they had a schedule to catch, so don’t make everything so peachy about it either.
I have mostly programmed in C, and for the last 4 years almost exclusively in Java, so I would assume I would do very badly with dynamic languages. So I’m not bashing static languages for the fun of it, but because I know some of their problems. Type errors making a hash of things in big projects with dynamic typing is an issue to consider. So are reflection, RTTI, open classes, AOP… so nothing’s perfect. So where does the balance lie?
Almost everyone agrees that dynamic typing works fantastically for short projects and static typing is very good/safe on large ones, even though some would say dynamic typing works there too. But can you see the relation between the effort of programming and the size of the problem? It clearly has a similarity to how some algorithms work best on small problems and some on large ones… insertion sort vs. quicksort, anyone? This is the first time I actually read somebody mentioning it in that context, and he gets blown over.
Which features are advantageous for solving a problem often changes as the problem gets larger or smaller, so it’s a very important idea to consider.
Sorry if this is just too long. English is not my native language and I tend to ramble.
May be I missed something but …
The point of a “Hello World” program is NOT to start learning the language.
It is, surely, to give the developer a chance to check that the toolchain (compiler, linker, dependency manager, etc) actually work together in the way that the developer expects.
… or is everyone else also kinda sick of the article poster replying to almost every single well-meant (or poorly meant) bit of criticism posted?
I’ll be waiting for the second part of the article. Also I don’t think the title is THAT inappropriate cuz it’s a play on ‘goto considered harmful’. Also you could’ve avoided the whole “O notation” issue by labelling the axes.
Anyway like others I’m a pro-template guy as well..
I don’t think that the explicit type information is as much of an issue as the terrifying template errors. The explicit type information is what makes C++ so efficient; thus the name “static typing”.
The explicit type information also provides an important utility (once concepts are introduced, preferably with strong typedefs) – it rescues us from the ordeal of cryptic error messages we find in type-inferred languages like Haskell… But there ARE function templates, which use a limited form of type inference anyway…
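For instance, an ordinary function template already deduces the type from its argument (a trivial sketch of mine):

template <typename T>
T twice(T x)
{
    return x + x;             // works for any T that supports operator+
}

int main()
{
    int    a = twice(21);     // T deduced as int
    double b = twice(1.5);    // T deduced as double
    return (a == 42 && b == 3.0) ? 0 : 1;
}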
Also; I’m a fan of Ruby and REBOL, so it’s not like I’m living in programming elegance denial..
No, it’s just about C++ compilers being quite stupid.
The upcoming ‘auto’ keyword is proof of that – it could be (and will be) done; it was just that C++ the language was proceeding at a snail’s pace, probably because of the standardization process.
You could debate the relevance of Ruby and REBOL to programming elegance ;-).
I’m not sure what was on my mind when I said that… I meant that efficiency stems from the compiler’s knowledge of the type. Anyway, yes, I’m aware of the two godsent keywords in C++0x (auto, decltype). Although they unfortunately won’t improve the situation regarding templates; it’d be quite relaxing if I could write something like:
unordered_set dict = {{ 1, "first"s}, …};
Well, I generally stick to the “leave ____ alone” camp, so I jump into action only when something that’s good is disparaged.
As for the relevance, umm…
REBOL = Lisp + Tcl
Ruby = Python + Blocks
Let the debate begin!