“In this article, I’ll attempt to list what I believe to be the most salient points of the economics of programming languages, and describe their effects on existing languages, as well as on those who desire to write and introduce new languages.”
I’d like to address one thing, though:
Lisp, a very elegant language that is widely admired by language aficionados, has often taken the approach of doing everything in Lisp – in the early 80s, computers were even produced that ran Lisp natively: “Lisp Machines”!
Lisp isn’t really any harder to bind to C than Java is. Indeed, most Lisp implementations (even historically) have had very sophisticated foreign-function interfaces for integrating with C code.
Perhaps this desire to be Lisp “all the way down” has cost the language something in terms of its ability to co-opt C’s ecosystem through compatibility, but that’s a complex discussion in its own right and is probably best left for another article.
That said, there is definitely a preference in the Lisp world to do things in Lisp rather than play second fiddle to C. Integrating with C has both advantages and disadvantages, however. Modern language implementations are severely constrained by the fact that the platforms they run on are thoroughly entrenched in the C mindset. The more they integrate with and rely on C, the less they fulfill their own potential: they become slower than they could be, less efficient than they could be, and less productive than they could be.
The Lisp Machine was Lisp without the baggage of C. For example, it had no memory protection (which is actually quite an expensive feature, both in lines of code and in CPU time). It didn’t need memory protection, because Lisp code couldn’t corrupt memory. Its APIs took full advantage of Lisp features, because only Lisp code would be calling them. The OS was integrated with the garbage collector, so you didn’t have any bad interactions between the GC and the virtual memory system, as you do with UNIX or Windows.
It must be remembered that the first Lisp Machines were developed at MIT around the same time the UNIX kernel was first rewritten in C. It was not at all clear then, or during the heyday of the Lisp Machines in the 1980s, that better technologies (and by better, I mean Lisp, Smalltalk, and their brethren – what people would call “managed code” these days) would not win the platform wars. At the time, gambling on making a “better Lisp” seemed much more palatable than making a “more C-friendly Lisp”.
Lisp is harder to interface with C than Java is, because there is no standard Common Lisp FFI. The FFIs of Corman Lisp, CLISP, CMUCL, Allegro CL, LispWorks, and MCL are all different. There is UFFI, but since it strives to expose only functionality universally compatible across the implementations it supports, when any of those FFIs is lacking in some way (function pointers, for instance), you lose. If you extend your scope of “Lisp” further to include the various Schemes, long-obsolete implementations, or semi-new Lisp-like languages, it only becomes more fragmented.
JNI is a distasteful means of writing a binding compared to the FFIs of other languages, including those of certain Common Lisp and Scheme implementations. I’d like to comment further, but I’ll have to do it later.
That’s a good point. I was speaking more from the perspective of interfacing a given Lisp implementation with C. The various Lisp FFIs might not be standardized like Java’s, but they tend to be better.
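To make the contrast concrete, here is roughly what the native side of a JNI binding looks like for something as trivial as getting a C string’s length. This is only a sketch in C: the class name NativeStrings and the method nativeLength are made up for illustration, and a matching native declaration would still be needed on the Java side.

#include <jni.h>
#include <string.h>

/* Native half of a hypothetical Java declaration:
 *   class NativeStrings { static native int nativeLength(String s); }
 * The mangled function name must match the Java class and method exactly. */
JNIEXPORT jint JNICALL
Java_NativeStrings_nativeLength(JNIEnv *env, jclass cls, jstring s)
{
    const char *cs = (*env)->GetStringUTFChars(env, s, NULL);
    if (cs == NULL)
        return -1;                 /* an OutOfMemoryError has already been thrown */
    jint len = (jint) strlen(cs);
    (*env)->ReleaseStringUTFChars(env, s, cs);
    return len;
}

A typical Lisp FFI expresses the same binding as a single declarative form in Lisp itself, with no separately compiled stub to build, load, and keep in sync.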
I had a teacher who said that specialization is the first step towards extinction. This makes sense when you consider that humans are generalists. Being a niche player can help you grow in one area, but not beyond it. Consider PHP: it is, and remains, a niche player. C was also a niche player in its humble beginnings, yet not only is C now used all over the place, its syntax and philosophy have spread. How many C-like languages can you count?
There are lots of Python people who think that a good web framework could be leveraged into Python being used more widely. I think Ruby and Ruby on Rails will probably take that chance from Python, but that is not what’s important. Either way, a niche is nice, but if it doesn’t help leverage your language into other places, it’s just the first step towards extinction.
Specialization is the first step towards adaptation and survival.
Look at nature and how there are millions of lifeforms specialized for a certain environment.
Without the specialization, there would have been no survival.
Man, the so-called “generalist”, is on the verge of destroying the ability of an entire planet to support life beyond the bacterium. If that is not “extinction”, I don’t know what is.
I’ve got to be blunt. Your teacher has been blinded by the Christian/PC bullshit that is taught in Western countries where they dumb down the worker class. Your teacher is teaching you the same stupid bullshit.
Extinction is a result of humans destroying niches. If a life form’s niche is gone, so is the life form. This has little to do with Christian >>bullshit<< as you say. As for humans destroying the planet, well, there are a whole lot of idiots running around, but they are too generalized to go extinct.
But we are talking about programming languages, not biology, don’t you agree?
IMHO, the Lisp Machines (and Xerox Smalltalk) lost against the C team because:
– These machines needed more powerful hardware, which was far too expensive. Throughout the ’80s people tried to implement Lisp & Smalltalk ideas on ordinary hardware, resulting in the awful mess of the early versions of C++. Nowadays that problem has disappeared.
– Lisp was specifically targeted at intelligent programmers (I’m not a skilled Lisp programmer, btw…). Nowadays that problem has worsened.
‘Lisp machines had no memory protection’
In some early Smalltalk implementations, memory protection was used to help the garbage collector intercept outdated references with the semispace copying algorithm…
Nowadays, there is a big push towards virtual machines (from Java, scripting languages, the CLI, VMware…), which makes the idea of having a real Lisp or Smalltalk or whatever machine inside an ordinary UNIX or Windows computer genuinely available and more and more commonplace (Squeak Smalltalk, for example).
You can build a Lisp Machine without warming your soldering iron.
(The same Anonymous guy)
You don’t really need memory protection to implement things like write barriers. Some implementations use the MMU that way, for the simple reason that on existing machines you’re paying the cost of having it anyway, so you might as well use it to make the GC faster. However, if ditching memory protection could improve access latency to the L1/L2 caches by even a few cycles (by getting rid of the TLB lookups), it would be a worthwhile tradeoff, even if it meant implementing write barriers in software.
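For what it’s worth, a software write barrier isn’t much code either. Here is a rough sketch in C of a simple card-marking barrier; the card size, the card_table array, and heap_base are all made-up details, and a real collector would tie them into its own heap layout:

#include <stdint.h>

#define CARD_SHIFT 9                     /* 512-byte cards; the size is arbitrary here */

static uint8_t   card_table[1 << 20];    /* one dirty byte per card */
static uintptr_t heap_base;              /* set when the heap is mapped */

/* Emitted by the compiler around every pointer store into the heap. */
static inline void write_barrier(void **slot, void *new_value)
{
    *slot = new_value;
    card_table[((uintptr_t)slot - heap_base) >> CARD_SHIFT] = 1;   /* mark dirty */
}

The collector then rescans only the dirty cards, which is essentially the same information the MMU trick provides, at the cost of a couple of extra instructions per pointer store instead of a page-protection fault.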
No HW memory protection also makes interprocess communication easy, but it needs an EXTREMELY reliable compiler, and the thing gets worse with multiple threads.
Yep, the compiler needs to be good about enforcing memory protection. The nice thing about the arrangement, though, is that only one piece of code needs to be vetted for possible memory-related exploits: the compiler. All other code can be written without regard to buffer-overflow exploits. According to some studies, that would eliminate about half of all security breaches.
I don’t see why it gets worse with regards to multiple threads though. Could you elaborate?
I’m thinking about a single address space, where processes are like simple plugins and memory can be freely shared:
In a loop through an array, I can check the bounds beforehand and then run the loop (and without HW checking, the thing can be faster than it is on actual CPUs).
But if the program is multithreaded and the array may be shared, the compiler is forced to put synchronization on every access (a performance hit). (Actually it’s the same problem now, but without HW memory protection the consequences are worse.)
(Excuse my English.)
P.S.: About all those people complaining that the X server must be in the kernel to make it faster: does code in ring 0 actually run faster, or is it only the access to the memory of other processes?
“But if the program is multithreaded and the array may be shared, the compiler is forced to put synchronization on every access (a performance hit). (Actually it’s the same problem now, but without HW memory protection the consequences are worse.)”
Existing compilers for high-level languages with multi-threading support don’t insert any synchronization. They don’t guarantee that sort of data integrity. It’s assumed that if you share data with another thread and don’t check it properly, it can crash you. The only thing the compiler has to make sure of is that random data doesn’t get overwritten in the process. I think that with proper typechecking, you can’t get a data overwrite even in the multithreaded case.
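As a rough sketch of what that looks like, here is, written out by hand in C, roughly what a safe-language compiler could emit for a loop over an array; sum_all, struct vec, and the len field are made-up names. The bounds check is hoisted out of the loop, and no synchronization is inserted even if the array is shared:

#include <stddef.h>
#include <stdlib.h>

struct vec {
    size_t len;      /* set once at allocation; the array never shrinks in place */
    int    data[];
};

/* What compiler-generated code for "sum the first n elements of v" might look like:
 * one check before the loop, no per-access check, no locking. */
long sum_all(const struct vec *v, size_t n)
{
    if (n > v->len)                 /* single hoisted bounds check */
        abort();                    /* a safe language would signal an error instead */
    long sum = 0;
    for (size_t i = 0; i < n; i++)  /* body runs with no checks and no synchronization */
        sum += v->data[i];
    return sum;
}

If another thread mutates the elements concurrently, the values read here may be stale or inconsistent, but since len never shrinks and the check already passed, the reads can’t stray outside the array, so no memory gets corrupted; that is the only guarantee the compiler is on the hook for.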
Code in ring 0 doesn’t run any faster. It still has to go through the whole MMU mechanism. The complaint is that switching control to the X server (in ring 3) requires a 3 -> 0 -> 3 switch, while switching to the kernel just requires a 3 -> 0 switch.