Fatal Exception’s Neil McAllister writes in favor of new programming languages given the difficulty of upgrading existing, popular languages. ‘Whenever a new programming language is announced, a certain segment of the developer population always rolls its eyes and groans that we have quite enough to choose from already,’ McAllister writes. ‘But once a language reaches a certain tipping point of popularity, overhauling it to include support for new features, paradigms, and patterns is easier said than done.’ PHP 6, Perl 6, Python 3, ECMAScript 4 — ‘the lesson from all of these examples is clear: Programming languages move slowly, and the more popular a language is, the slower it moves. It is far, far easier to create a new language from whole cloth than it is to convince the existing user base of a popular language to accept radical changes.’
I find myself agreeing with all this author’s supporting evidence, and yet I’m leaning away from his conclusion. No, I don’t think we need yet another new language syntax.
Personally I’d rather work on a way to make existing languages more interoperable to make the choice of languages less restrictive.
Yeah, me too. He does a good job outlining the problem, but I can’t agree with the solution he proposes. I don’t know if I have a solution myself, except to say that for me personally, the biggest pain in the ass part about learning a new language is learning the APIs/framework behind it. For example, C# is not a hard language to learn, but the .NET framework is a monster.
Thus, I think we should work towards standardizing the APIs/frameworks, then there could be 3,000 different programming languages, and you could hop from one to the other with relative ease.
The ironic part is that without their standard libraries, modern languages are useless. Strip the standard libraries out of the comparison and pretty much any language is of the same effectiveness.
Don’t completely agree, but standard libraries are definitely very important these days.
I definitely don’t agree with this. That any object-oriented language is pretty much of the same effectiveness, I can live with. But support for, for example, closures makes a big difference in effectiveness for particular problems. Functional or declarative languages are also more effective solutions for particular problems.
Even if you don’t consider standard libraries, there are very good reasons for choosing different languages for different types of problems.
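To make the closures point concrete, here is a minimal Python sketch (Python is just an illustrative choice, and `make_accumulator` is an invented example): a closure carries its own captured state, something a language without closures would need a small class and boilerplate for.

```python
def make_accumulator(total=0):
    """Return a closure that keeps a running total across calls."""
    def add(amount):
        nonlocal total          # mutate the captured variable
        total += amount
        return total
    return add

acc = make_accumulator()
acc(10)                         # running total is now 10
acc(5)                          # running total is now 15

# Each closure owns its own captured state, independently of the others.
other = make_accumulator(100)
other(1)                        # running total is now 101
```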
When coding will be 80% of the time a developer spends during software development, then I will agree.
But currently coding is 20% of the time, unless you are an inexperienced developer who likes to make his own mistakes.
Hmm.. if an experienced developer only spends 20% of the time on coding, then what does he do the remaining 80% of the time?
Keeping the inexperienced developers from marching the project towards disaster.
Hmm… either you have way too many inexperienced developers on your team, or you need to replace those inexperienced developers with better ones.
Let’s see…
Architecting
Thinking about a good way to go about coding
Testing
Debugging
New syntax isn’t important. If the differences between programming languages were just syntax then it would be very easy to convert between them and have them interoperate.
We need languages with different semantics.
This is exactly why we need something JVM-ish in the browser. It doesn’t have to be a JVM, maybe even a restricted subset of ECMAScript that can be heavily optimized, which would act as a compile target for other languages. As it stands, updates to JS take too long, and will be unsupported on older browsers.. what’s the point?
Any programmer worth his salt knows that you pick the tools based on the job you’re doing, and right now, the only choice for web development is JS. While JS works fine for some tasks, it isn’t well suited for every single thing you might want to run in a browser, and it never will be.
yourpalal,
“Any programmer worth his salt knows that you pick the tools based on the job you’re doing, and right now, the only choice for web development is JS. While JS works fine for some tasks, it isn’t well suited for every single thing you might want to run in a browser, and it never will be.”
Yes I agree. I often wish we could write code that could run equally well between the server and browser, and even as a native app on the desktop. It sucks having to write code in multiple languages between a client and server to handle web pages. I’ve even contemplated using javascript on the server to avoid logic duplication.
JVM-like technologies benefit from native performance and the same code can be used on the server, client, desktop, anywhere. It would open up new possibilities, such as freezing an application on the desktop, serialising it, and transferring it to run natively on a tablet. Javascript just isn’t up to the task and never will be.
Of course, this kind of power under end user control would scare the crap out of companies promoting walled gardens. So we’re probably stuck with less powerful javascript for a while longer.
“Any programmer worth his salt knows that you pick the tools based on the job you’re doing, and right now, the only choice for web development is JS. While JS works fine for some tasks, it isn’t well suited for every single thing you might want to run in a browser, and it never will be.”
Uhh… unless you mean something other than what you typed, this simply is not true. I could name at least five other languages you can use for web development that are much more common than JS.
Shannara,
It was pretty clear to me he meant in the browser, where the choice of built in languages is indeed limited to javascript.
Edit: Although, now that I think about it, many years ago IE did support vbscript too.
For a browser?
I’d love to be proven wrong, what 5 languages are used client side more than JS? I’ll accept languages that compile to JS in your list too.
I know Python can be compiled to JS, and Java through GWT, but I’m pretty sure that straight-up JS is still the #1 client-side language.
I like what you say. Browsers should natively support the execution of a certain type of bytecode, and come with a standard library. Instead of specifying that a browser support this version of JavaScript, we would just say that it supports this version of the bytecode, and this version of a standard library.
That way, anyone is free to write a compiler from his favorite language to bytecode that is supported on a browser.
The thing that bothers me is that JavaScript is a language that is very hard to execute efficiently. In the beginning, when JavaScript was used for very simple stuff, that was fine. Nowadays people try to build full-blown desktop applications in the browser, and the result is that a great deal of time and energy is put into trying to execute JavaScript efficiently.
Can you imagine where we would be right now if, in the beginning, a specific bytecode set had been supported instead of JavaScript? In a way, nowadays we actually do use JavaScript as a kind of intermediate code (not really bytecode, but close), with for example GWT just compiling Java code into JavaScript: going from a language that as of today can be executed very efficiently into something that is hard to execute efficiently, and then trying to optimise the hard-to-optimise JavaScript.
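The “small bytecode set as a compile target” idea can be sketched in a few lines. This toy stack machine in Python is purely illustrative (the opcodes and the `run` function are invented, not any real browser VM), but real VMs such as the JVM follow the same shape: a small, easily optimised instruction set that any front-end language can compile down to.

```python
def run(program):
    """Interpret a list of (opcode, *args) tuples on a toy stack machine."""
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op}")
    return stack.pop()

# (2 + 3) * 4 -- what a compiler front-end for any source language might emit
program = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
run(program)  # 20
```

The point is that the interpreter (and its optimiser) only ever has to understand the small instruction set, not each source language.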
Hi,
Just because something is “better” doesn’t mean it justifies the costs involved with changing from existing working solutions (developing new tools, retraining programmers, remembering a wide variety of different languages just to be able to maintain old code, etc).
To actually be better (rather than just “better”), something needs to be so much better that the costs of change are justified.
Existing languages aren’t perfect, but they are “good enough”. Slightly better is possible, but so much better that it’s worth changing is (almost?) impossible.
Basically, new languages are advocated by (and adopted by) short-sighted morons – people who only see “better” and fail to recognise or account for the costs of change they inflict on their peers and the industry as a whole. These people need to be hunted down and forced to pay for what they have done.
– Brendan
Totally agree. You should never do a rewrite of a piece of software, because you instantly lose years’ worth of effort.
If all those years of effort took the software in the wrong direction and wound up crippling it more than helping it for future expansion, then yes, a complete rewrite is certainly the better option.
There’s one constant you should always consider as much as possible: progression. If you want your language to survive the test of time, it not only needs to be good, it needs to be expandable/extendable. That is a key element of its basic design. If you ignore it, you will pay the price later.
Even so, rewrites kill companies. No matter how painful it is to stick with the existing version, that’s the version that’s paying the bills.
It’s all very well assigning developers to do the complete rewrite that in five years time will make everything better. But if you’re spending all the effort on the rewrite instead of on keeping the old product going, well… you probably won’t still be around in five years time.
That makes some big assumptions:
-that the software actually is paying the bills
-that it will take substantial time to do a working rewrite
-that the current troubled version is still usable and meeting the required needs
If all of that is true, I would tend to agree. If any of them are false, I can’t say the same.
I worked at one company where they decided to rewrite the old product from scratch. A year later they had a product they couldn’t sell to anyone, while the clients still wanted the old product, with all of its problems.
The old product is still going, has been bought by new customers, and is still paying the bills. If the amount of effort spent on the rewrite had been spent refactoring the product, it would probably have none of the performance problems it currently has.
http://www.joelonsoftware.com/articles/fog0000000069.html
The only thing that proves is that the rewrite was poorly handled. For every case where a rewrite was a disaster, you have a case where it was a great success and the old version was treated with good riddance.
If you create a new piece of crap, all you have is a new piece of crap. The problem isn’t that you made something new, it’s that you made it a piece of crap.
If you have something working, it is ridiculous to do a full rewrite. The only times I can see it making sense are if the current platform is going to be deprecated, or if fundamental core changes need to be made.
Re-factoring and rewriting modules is fine. Slowly changing the architecture is okay as well. It is pretty easy to do this after a few iterations.
Agreed, but that doesn’t mean that rewriting a working application is a good idea because it might be better in the future.
Much of the weird logic these applications have is there for very good reasons.
I’m not talking about doing rewrites when there’s no need for it. When you have poorly designed software that’s in a constant state of maintenance, you can reach a point where maintaining it and dealing with the poor design is no longer the best option — when too much time and too many resources are being committed just to keep it working. In these cases a proper rewrite can easily be the best solution if the end result is relieving the time & resources being spent.
The people who are saying that a rewrite is never a good choice and believe that there are never any circumstances which call for it, are completely nuts. They’re either not coders, not good coders, never had to deal with resource management, or aren’t good at it.
I have a good example of a well-written application that nonetheless requires a total rewrite: lack of documentation leads to weird things happening in production.
In addition, a rewrite can easily remove 80% of the custom code. It is not so much a rewrite as it is distilling the application down to its business features and letting the custom code be replaced by COTS or open-source code.
I’m not saying “never ever ever” rewrite… but in the vast majority of cases I think it is better just to get the original requirements (if any), or map the behaviours, and rewrite that particular module (i.e. method in OOP) from scratch.
lucas_maximus,
Taking a position on whether to rewrite/maintain code can be a precarious decision
Any objections to the following exception: all applications written as Visual Basic 6 VBDs must be rewritten at all costs.
I have a client still maintaining VBDs for their client. The code is in a terrible spaghetti-like mess, typical of VBDs. The problem is, they don’t want to reveal how bad the code is, since they wrote it.
I believe that “Do it right the first time” should become some kind of mantra for computer science.
Good code ages well and is easy to fix. It doesn’t require rewrites.
The thing is, it takes a lot of mental discipline to actively apply this philosophy every day, including the worst ones, where you’re lacking sleep and must ship something that is currently barely finished in one week.
The largest problem with this is that “good enough” is sometimes best: a working but horribly coded project now has some benefits over a well-coded one later.
I have written last-minute hacks that have stood the test of time. Something working is a lot more convincing than vapourware that doesn’t work yet.
Oh come on, this is ridiculous. You can do sensible things the first time, but doing it “right” the first time is a completely different matter.
I wrote a C# library the other day, a nice bit of OO programming, very clean and encapsulated. Today I found a way to make it even cleaner. My first attempt was sensible… tomorrow it will be perfect.
It’s not about writing perfectly clean code the first time you put your hands on a keyboard, it’s about shipping perfectly clean code in the end
Of course, if you have lots of time, you can experiment with low quality code before writing the real thing, and the more time you have the larger-scale these experiments can go (things like GSoC projects come to mind). However, everything that is released has fallen into a vicious circle that makes it much harder to fix unless it is very well coded to begin with.
TL;DR: Hackish code is fast to write and a powerful research and testing tool; it should just not make its way into the final product, IMHO.
No I agree, but you will always find better ways of doing something down the line.
Absolutely agree. Refactor first; rewrite in a different language only if everything else fails, or if it’s just a fun hobby.
I can’t agree more. There are also a number of great application frameworks built on top of certain widely-used languages.
Doesn’t matter whether we need more languages or not. We are sure to get them! As the first comment to that article points out, making a language is easy and it can be a money making venture. So you are sure to see lots of new languages fighting for developers’ attention. And developers also have to keep up with the skills arms race and learn whatever newfangled language/framework/API becomes popular.
I also believe that we are in need of newer languages for specific domains, but at the same time they need to be able to work with the current infrastructure so that the new language doesn’t have to reinvent all the plumbing that current languages already do well. The new language can then be built quickly by adding a preprocessor or interpretive module to the tool mix just as the early C++ preproc did.
One example is to do math in Fortran and use F2C to merge the math code within a C system. Same with Matlab and others.
One weak area has always been concurrent programming of communicating processes over many processors or within hardware. When you model really large parallel systems, they look just like nested blocks of hardware logic where the processes might be software or hardware cells communicating with wires (hdl) or channels (occam/csp).
We can kind of hack around this with C++ class libraries like SystemC, which makes designing hardware really ugly. In the hardware field we already have Verilog and VHDL for designing massively parallel systems, but they don’t really work well with C and cost big $ to use. The SystemC guys came along and said you hardware guys can just use our C++ classes to model wires, events, and so on, and the hardware code would run in a free C simulation environment. What was not free were the tools to convert the SystemC description back to Verilog to make the hardware; money ruins it all.
I ended up writing my own primitive Verilog-subset tool, V2C, that allows hardware logic to be written in almost pure Verilog and compiled to C for cycle simulation; the same code could then go through normal hardware synthesis tools, usually free for FPGAs, so no $ at all.
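To give a feel for what such Verilog-to-C translation does, here is a sketch in Python (not V2C output; the function names are invented for illustration): a synchronous 4-bit counter with reset, expressed as a cycle-at-a-time state update, roughly the shape an `always @(posedge clk)` block takes when compiled to sequential code.

```python
def counter_cycle(state, reset):
    """One clock cycle: compute the next state from current state and inputs.
    Mirrors: always @(posedge clk) if (reset) q <= 0; else q <= q + 1;"""
    if reset:
        return 0
    return (state + 1) & 0xF  # 4-bit wrap-around

def simulate(cycles, reset_at=()):
    """Cycle-based simulation loop: apply the clocked update once per cycle."""
    q = 0
    trace = []
    for t in range(cycles):
        q = counter_cycle(q, t in reset_at)
        trace.append(q)
    return trace

simulate(6, reset_at={3})  # [1, 2, 3, 0, 1, 2]
```

The simulator never models individual signal transitions, only state at clock edges, which is what makes cycle-based simulation so much faster than event-driven simulation.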
Even to this day I don’t think you can get this type of utility without big bucks. I would like to have taken this further by combining subsets of Verilog and C to produce a smaller, united language: in the Verilog syntax you describe the parallelism of logic blocks and signals, and in the C syntax you write behavioural functions to validate those blocks.
The first hardware description language I ever learnt for processor design was APL, from the 60s, and yet many old-timers will know it was also used for business software, where it was likely just as incomprehensible.
Perhaps Go might be a useful tool for hardware guys, since it has a far more natural syntax for describing block modules with multiple I/Os. I still have to finish the tutorial, though.
just blabbering on
Well, people do have to live from something.
As someone who recently got interested in programming and has been doing a lot of reading on it, these things influenced my decision.
Programming is hard to master. You have got to have tutorial material, documentation, cookbooks, etc.
Better to be real good in one or two relevant languages than mediocre in a lot.
Compilers and platform support.
Are there people who are going to be able to fix your code?
Is the enterprise going to have difficulty finding coders for that language?
Frameworks and IDE’s
A publishing house, for instance, will be more interested in writing a PHP 6 book than one on an obscure new language.
I also expect a new version of a current language to be much easier to master than a totally new one.
My vote leans towards the improvement of current languages, although new languages are very interesting.
And what would you think of in-depth redesigns of existing languages that keep similar concepts (and might compile to the same object files or bytecode), but strongly rework the syntax?
Lots of frustration:-)
But there will probably be books and examples on it fairly quickly, compared to a totally new language.
But every single established language was brand new at one stage.
If you have a proper computer science background you’ll be able to pick up new languages/concepts in no time.
The proper way is to master the concepts not the tools.
That’s not the problem. As I see it, the problem is that you usually have a lot of re-usable code, classes, frameworks and engines written in some language and you’d have to re-write or port that code into the new language. And it would be a huge and wasted effort.
If you know what you’re doing, the language doesn’t really matter ***. As also said above, frameworks and libraries matter a lot more than the language itself. I can spend my time better than learning every language reference by heart. If you can be good in a few, then it’s OK to be superficial in the others. And sometimes it’s quite good to be at least superficially knowledgeable in a few languages beyond your chosen ones; it gives you insight and good ideas.
Edit: I’m correcting myself:
*** Except when it does, since there are situations where it’s imperative to choose one language over the other.
“once a language reaches a certain tipping point of popularity, overhauling it to include support for new features, paradigms, and patterns is easier said than done”
Hm… what about Lisp-style languages? The core is very simple and generic, and it is easily and efficiently extensible to support all the features of modern programming languages. And the result can be as fast as C/C++. From what I can see, Lisp is the king of DSLs, and there is no need to redesign everything all the time; just copy and improve macros to meet your current requirements.
Where other languages are built like a cathedral, monolithic and extensible only in ways that look distinctly different from the core language, Lisp is a bazaar where you can add and remove language capabilities as needed.
(((((((((((((((((Yes(I(agree)))))))))))))))))))
Old languages carry around a lot of baggage, and it is very difficult to get rid of this with language updates: you can usually add features, but it is extremely hard to get users to accept removing any. Even minor syntactic changes can be hard to make. This gets worse as a language has more users.
A new language can be designed cleanly without having to worry about backwards compatibility, so you often get a cleaner design with fewer surprising corner cases. Also, you can avoid the cases where the new features interact badly with the old features.
I agree with the previous poster that libraries are important — it is often a lot more work to write extensive libraries than it is to design and implement the language itself. But I also think libraries are in as much need of revision as languages. Languages tend to accumulate libraries and it is very difficult to get rid of the badly designed libraries once they are in widespread use. And it often isn’t enough to rewrite the implementation of these, as it may be the APIs themselves that are badly designed.
Also, when libraries become too big, they hurt performance. The result is that a simple web application equivalent to a HTML form compiles to 2MB of code and requires 80MB to run.
Extensive libraries also make programmers lazy: they would rather use a library that is a poor fit for their problem than code a solution that is better suited, even if it would only take 20 lines of code. In fact, many so-called programmers aren’t programmers at all: they can’t program, they can only string library calls together.
This is also why I think Java and its ilk are badly suited for teaching programming: In most cases your program is 90% library calls and 10% real programming, so students come to believe that programming is stringing library calls together. This is why CMU have dropped OO languages from the first year of their CS programme — instead the students use Standard ML and a (fairly clean) C subset, both with fairly limited libraries.
I believe there are several reasons why programming languages change. They are like people: born, maturing, learning new things (adding new features), and sometimes dying.
I made a programming-language design tool for my graduate thesis: sort of user-friendly, with Pascal syntax, an alternative to “Lex”. Open-source Flex didn’t exist, or existed but wasn’t mainstream yet.
It was in the mid 90s; object-oriented programming languages were on the rise, and 4GL-style tools like PowerBuilder and XBase were starting to leave the scene.
I had to defend to my thesis “judges” why my tool was needed, since there were already many programming languages (the same topic as this post).
I argued several things, but the main idea was that technologies evolve and change, and are not constant. That was before browser scripting languages and functional languages.