Two articles at OnJava: first, “Better, Faster, Lighter Programming in .NET and Java”, and second, “Avoiding J2EE cluster bugs”.
Using inferior tools just so programmers will be forced to understand what they’re doing is a very misguided way of doing things. Consider what engineers do. Sure, they get enough education to calculate the mass of a fully-assembled aircraft by hand, but engineering companies would be stupid if they made an $80,000-a-year engineer do by hand what the computer can do by itself in seconds.
Higher-level languages simply give poor programmers a different way to shoot themselves in the foot. Instead of writing apps that segfault all the time, they write apps that abuse the GC and take all the system’s memory. However, they can save an experienced programmer significant time by allowing the compiler to take care of grunt-work so the programmer doesn’t have to waste his time with it.
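The classic way to “abuse the GC” is unintended object retention: a long-lived collection that keeps everything added to it reachable, so the collector can never reclaim it. A minimal sketch in Java (the class name and buffer sizes are illustrative, not from the original):

```java
import java.util.ArrayList;
import java.util.List;

public class LeakyCache {
    // A static collection is reachable for the life of the program,
    // so nothing added here can ever be garbage-collected.
    static final List<byte[]> cache = new ArrayList<>();

    static void handleRequest() {
        byte[] buffer = new byte[1024]; // per-request scratch space
        cache.add(buffer); // bug: "caching" pins every buffer in memory forever
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            handleRequest();
        }
        // Roughly 10 MB retained that the program will never use again.
        System.out.println("retained buffers: " + cache.size());
    }
}
```

No segfault, no corruption: just a heap that grows until the process dies. The GC is doing its job correctly; the program is telling it the wrong thing.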
It’s freaking 2004, not 1994. I heard your argument back in the late ’80s and early ’90s about switching from assembly to C. The same argument all over again.
Programmers still have to understand the language in order to function in the environment. C# is still a pretty complex language. You could say that in some ways it’s a lot more complex than straight C. So what’s the problem? Something like C# is too complex for C programmers?
You’re not going to make better programmers with C or C#. But for those people who can program well, it gives them a new tool of abstraction along with a consistent, well-thought-out class library.
I’ll say it again: you’re giving the same argument that the assembly language programmers made in the late ’80s and early ’90s.
We have lots of Java and lots of failed software projects. There is no measurable improvement in software quality. One might argue that there have been no compelling applications produced using Java or .NET. That’s not a good start for the world of “safe and sane” coding.
The basic story about “safe and sane” fireworks is that they are safer than real fireworks. But the show really sucks. If you want the crowd to be impressed, use the real thing.
All in all, “safe and sane” programming has nothing to show for itself except giant “make work for brother” sub-industries: training, certificates, exporting jobs, etc.
Isn’t it time to stop the rule of fear? Why does America keep pushing dumbed-down programming languages on the world?
Java is a dumbed-down programming language. That does not mean that safe languages in general are dumbed-down. Nobody has ever accused ML, for example, of being dumbed-down. Further, your analogy sucks. The property that makes fireworks spectacular (a big boom) is the same thing that makes them dangerous. That is not the case for safe languages. No user is “wowed” because his software is written in a language that lets code barf all over its own data structures.
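For contrast, here is what the safety actually buys: in Java, an off-by-one write is caught at the faulting instruction instead of silently landing in whatever structure happens to sit next in memory, as the equivalent pointer arithmetic in C might. A small sketch (the method and array names are illustrative):

```java
public class BoundsCheck {
    // Attempts an off-by-one write; returns true if the JVM stopped it.
    static boolean offByOneWriteCaught() {
        int[] balances = new int[4];
        try {
            // Index 4 is one past the end. In C this store could silently
            // corrupt an adjacent array; the JVM throws at the bad store.
            balances[4] = 1_000_000;
            return false;
        } catch (ArrayIndexOutOfBoundsException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("off-by-one caught: " + offByOneWriteCaught());
    }
}
```

The point of the analogy’s failure: removing this check would make nothing about the program more impressive, only more dangerous.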
A language is used to communicate an idea. The problem is that perfect communication is impossible. There simply does not exist a communication medium efficient enough to convey the full experience of the idea! All we can hope for is to trigger a similar experience in the receiving party so that he/she/it may act on our idea as we intend.
I’d say that a computer language (even though it’s somewhat formal) is subject to the same restriction. It cannot be used to transfer our idea to the computer.
In low-level languages we can express in a formal way what response we want. It is, however, difficult to express an idea. It quickly becomes tedious and error-prone, as an idea is often complex and thus requires a complex response.
Higher-level languages make it easier to express the idea, but somewhat harder to formally express the response.
It is the job of the compiler/interpreter to translate the description of the idea into a description that will trigger the desired response.
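One deliberately tiny illustration of this trade-off: the idea “sum the even numbers” expressed twice in Java, once with the response spelled out step by step, and once closer to the idea itself, with the compiler and library left to work out the response. (The class and method names are mine, for illustration.)

```java
import java.util.List;

public class TwoLevels {
    // Low-level style: the response is spelled out -- iteration order,
    // accumulator, the evenness test -- and the idea must be inferred.
    static int sumEvensLoop(List<Integer> xs) {
        int total = 0;
        for (int i = 0; i < xs.size(); i++) {
            if (xs.get(i) % 2 == 0) {
                total += xs.get(i);
            }
        }
        return total;
    }

    // High-level style: closer to the idea ("keep the evens, sum them");
    // the library decides how the machine actually responds.
    static int sumEvensStream(List<Integer> xs) {
        return xs.stream().filter(x -> x % 2 == 0).mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        List<Integer> xs = List.of(1, 2, 3, 4, 5, 6);
        System.out.println(sumEvensLoop(xs));   // 12
        System.out.println(sumEvensStream(xs)); // 12
    }
}
```

Both trigger the same response; the second is a better description of the idea, at the cost of less direct control over how the response is produced.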