Lluis explains how Mono is not mono-language with two excellent screencasts: one demos debugging across Java, Boo, and C#, and the other shows the more practical application of interop between the Boo and Java languages, all running under one runtime, Mono.
there is just some sed script changing the code on the fly from
public class Foo : bar{}
to
public class Foo extends bar{}
😉
All of the languages that interoperate on the runtime only do so by providing different syntax for what are basically the same semantics. Where the languages afford differences from C#, they lose interoperability or have to rely on inefficient mappings. Variants become class hierarchies, higher-order functions become delegates, and so on. Reference types can be null, but in various languages this practice isn’t the norm, and objects consumed might be wrapped in a variant type like option, which can make for some truly baroque type signatures and pattern matching. Some languages have concepts that don’t map cleanly, like various higher-order module systems, multiple inheritance, continuations, restartable exceptions, very-late binding, and so forth.
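To make the option-wrapping point concrete, here is a rough C# sketch (the Option/Some/None names are purely illustrative, not taken from any real compiler) of the kind of class hierarchy such a mapping produces, and what a consumer then has to write:

using System;

// Hypothetical encoding a functional-language compiler might emit for an ML-style
// 'a option; the names Option, Some and None are illustrative only.
public abstract class Option<T>
{
    public abstract bool IsSome { get; }
    public abstract T Value { get; }
}

public sealed class Some<T> : Option<T>
{
    private readonly T val;
    public Some(T val) { this.val = val; }
    public override bool IsSome { get { return true; } }
    public override T Value { get { return val; } }
}

public sealed class None<T> : Option<T>
{
    public override bool IsSome { get { return false; } }
    public override T Value
    {
        get { throw new InvalidOperationException("None carries no value"); }
    }
}

// What a C# caller ends up writing where they would normally just test for null:
//   Option<Customer> c = repository.FindById(42);   // hypothetical API
//   if (c.IsSome) Console.WriteLine(c.Value);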
.NET is less language-agnostic than it is syntax-agnostic. The more you treat the platform as language-agnostic, the more performance or usability concessions you have to absorb.
Of course, the same applies to any language, be it compiled or interpreted. At some point, it still depends on the capabilities of the machine it runs on, whether that’s a virtual machine like Java or .NET uses, or real hardware. After all, the underlying processor doesn’t understand concepts like classes or delegates either.
Anonymous, I agree completely. Just look how C-ish VB became with VB.Net. In the end most languages end up looking like C# wannabes… the CLR is not _so_ “agnostic” as advertised.
But still, Boo is a *very* beautiful language… truly “wrist friendly” as proposed by the author. If I were part of the Python “core team” I’d keep my eyes on it, and borrow an idea or two for Python 3.
Simon: look at how different Lisp and C# are, although both run on the PC. The point that the anonymous poster tried to make, IMO, is that anyone who tried to create Lisp.Net would end up making _a lot_ of concessions, all of them towards the C# way.
Does anyone know what app is used to create the ‘screencasts’?
What about the Forth.NET compiler I tried a year ago and that worked perfectly with Mono? Is Forth close to C#?
M. March, it’s called vnc2swf, as written in the article.
http://freshmeat.net/projects/vnc2swf/
You can compile anything you want to run on the VM, but it isn’t necessarily going to interoperate with anything else. “Disagreeing” with me makes absolutely no technical sense at all.
It’s true that .NET is pretty tied to a particular form of programming: object oriented and statically compiled. But with every revision MSFT is making changes to the CLR to support more flexibility. I would be unsurprised if soon they’ll be able to fully support Scheme/Lisp. They have Jim Hugunin (Mr. IronPython and Jython) working for them now on dynamic language support. Seems like really cool stuff.
dunno, was it faster than good Forth implementations?
Say, Python on CPython (a very simple stack-based machine, without even direct threading) is 20% slower than IronPython on Mono/x86 (which is not feature complete) and is even faster on PPC.
OTOH C# on the same runtime has performance much closer to C++. This basically means that you can run anything on the CLR just as fast as a bad interpreter.
As usual, see
http://www.panopticoncentral.net/archive/2005/03/21/8041.aspx
for a non-biased view on how “everything is C#” in CLR land.
(Notice you can still have reasons to like the CLR, IMO, but not language interop.)
Of course it is easier to write a single-inheritance OO language for the CLR than e.g. a functional language or a dynamic language. But it is also much easier to write a C compiler for the x86 ISA than an optimizing Scheme compiler, because C is much closer to the x86 ISA than Scheme is.
Does that mean that the x86 ISA is not language agnostic?
I wish people would actually take a look at the MSIL bytecode. It provides support for all kinds of concepts that C# and its kind don’t even use yet. For example: if the CLR is just designed to run C# and C# with strange syntax (VB.NET), then why does it have a tailcall instruction that the current C# compiler does not even use?
About IronPython vs CPython: IronPython is currently one of the fastest Python implementations out there, and still people claim that the CLR does not support dynamic languages. What would it take to convince you? Python running at the speed of C++? That won’t happen without a massive research effort, since dynamic languages like Python are much harder to compile efficiently. The CLR has nothing to do with that.
No one uses the VM’s tail call instruction because it reduces performance. Compilers targeting .NET typically explicitly transform tail-recursive functions.
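As a rough illustration (not taken from any particular compiler), the explicit transformation amounts to turning the self tail call into a loop, something like this C# sketch:

static class TailCallSketch
{
    // Tail-recursive form: would need the IL 'tail.' prefix to run in constant stack space.
    static long SumTo(long n, long acc)
    {
        if (n == 0) return acc;
        return SumTo(n - 1, acc + n);
    }

    // What a compiler typically emits instead: the self tail call becomes a jump back
    // to the top of the method, i.e. an ordinary loop.
    static long SumToLoop(long n, long acc)
    {
        while (true)
        {
            if (n == 0) return acc;
            acc = acc + n;
            n = n - 1;
        }
    }
}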
You can implement all manner of language constructs on the VM, which means surprisingly little. The more the semantics of your language differ from C#, the less interoperable the code written in it will be for other consumers of the platform. The more you try to salvage interoperability while maintaining different language constructs, the higher the performance penalty will be for using them. If you use delegates for implementing higher-order functions, then the performance of code written in a language where passing functions is typical will be incredibly poor. Delegates are not “function pointers,” they’re far more heavy-weight than “function pointers.” But if you don’t transform your higher-order functions into delegates, then how is a C# programmer going to make use of your interface? Are you going to generate a bunch of FunctionN interfaces? Well interfaces on the CLR are incredibly expensive too, and no C# programmer is going to want to use that interface. FunctionN classes with virtual functions? Still expensive, and still not desirable for a C# programmer. They won’t use your library; they might as well be programming Java.
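To illustrate the two encodings being argued about, here is a hedged C# sketch of the same higher-order Map exposed once through a delegate and once through the kind of “FunctionN” class mentioned above; Func1 and Function1 are made-up names, not a real library API:

// Delegate-based surface: the shape a C# consumer expects.
public delegate R Func1<A, R>(A arg);

public static class DelegateStyle
{
    public static R[] Map<A, R>(A[] items, Func1<A, R> f)
    {
        R[] result = new R[items.Length];
        for (int i = 0; i < items.Length; i++) result[i] = f(items[i]);
        return result;
    }
}

// "FunctionN"-style surface: closer to how a functional-language compiler might
// represent closures, but awkward and unidiomatic for a C# caller.
public abstract class Function1<A, R>
{
    public abstract R Apply(A arg);
}

public static class ClassStyle
{
    public static R[] Map<A, R>(A[] items, Function1<A, R> f)
    {
        R[] result = new R[items.Length];
        for (int i = 0; i < items.Length; i++) result[i] = f.Apply(items[i]);
        return result;
    }
}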
How are you going to implement multimethods on the CLR in such a way that a C# programmer is going to make use of a library that uses them? Or exporting an interface with covariant return types? A parameterized type constrained by a variant that consists of parameterized interfaces?
The CLR is really not tremendously more “language agnostic” than the JVM, especially when you want to emit verifiable code. Far more languages have backends that target the JVM, but they certainly don’t necessarily interoperate especially well with one another, either.
You can’t hammer a square peg into a round hole.
As for dynamic languages: in a language X, a class A may or may not respond to a message M (it cannot be determined statically, and simply because it responds once doesn’t mean that it will respond the next time), and when it does it may return objects of types B1, B2, …, Bn that implement one of two implicit interfaces. How do you map this soup in such a way that it makes sense to a C# programmer? They sure aren’t going to use C#’s reflection capabilities to deal with this nonsense; they’d rip out their eyes and chop off their fingers.
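For what it’s worth, here is a minimal sketch (all names illustrative) of the reflection ceremony a C# caller would be left with when the only contract is “the object may or may not respond to this message at runtime”:

using System;
using System.Reflection;

static class DynamicCaller
{
    // Look the "message" up at runtime and hope the receiver responds to it.
    static object SendMessage(object receiver, string message, object[] args)
    {
        MethodInfo method = receiver.GetType().GetMethod(message);
        if (method == null)
            throw new MissingMethodException(receiver.GetType().FullName, message);
        // The static return type is object; the caller still has to inspect whatever comes back.
        return method.Invoke(receiver, args);
    }
}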
I agree that it is difficult to get good interoperability with C# if you use advanced or unusual language constructs. But that has little to do with the CLR.
Getting good interoperability between languages with completely different philosophies is genuinely difficult. This is not really surprising.
There is typically a fast way to implement a language construct and a safe and interoperable way. But this is not unusual, and there is a simple solution for this: have a compiler switch or an attribute to tell the compiler whether to use the fast or the interoperable way.
At the boundaries of your library, where interoperability is important, you use the safe, interoperable but slow implementation. In the core of your library, where performance matters, you use the fast way.
If you use two languages with different paradigms (like e.g. functional and OO) in a project, you won’t have thousands of small modules in each language that call each other anyway. Typically you will have a few large parts written in an OO language and a few large parts in a functional language. For example you might want to write the GUI in C# and the core in a functional language such as Haskell. The interface between the two parts will be quite small if the whole thing is well designed. You can still deploy it as a single CLR binary.
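A minimal sketch of that boundary/core split, assuming a hypothetical library with its own internal closure representation (all names here are invented for illustration):

// Boundary: a delegate-based signature so C# callers feel at home.
public delegate R Mapper<A, R>(A arg);

// Core representation: the compiler's own closure shape, kept internal.
internal abstract class Closure<A, R>
{
    internal abstract R Apply(A arg);
}

internal sealed class DelegateClosure<A, R> : Closure<A, R>
{
    private readonly Mapper<A, R> f;
    internal DelegateClosure(Mapper<A, R> f) { this.f = f; }
    internal override R Apply(A arg) { return f(arg); }
}

public static class Library
{
    // Exported, interoperable entry point: converts at the boundary only.
    public static R[] Map<A, R>(A[] items, Mapper<A, R> f)
    {
        return MapCore(items, new DelegateClosure<A, R>(f));
    }

    // Internal fast path: works directly on the internal representation.
    internal static R[] MapCore<A, R>(A[] items, Closure<A, R> c)
    {
        R[] result = new R[items.Length];
        for (int i = 0; i < items.Length; i++) result[i] = c.Apply(items[i]);
        return result;
    }
}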
“If you use delegates for implementing higher-order functions, then the performance of code written in a language where passing functions is typical will be incredibly poor.”
Delegates were slow in .NET 1.1. But they are *much* faster in .NET 2.0. This has been done especially to support functional language constructs like closures.
“How are you going to implement multimethods on the CLR in such a way that a C# programmer is going to make use of a library that uses them?”
Multimethods, AKA multiple dispatch, are supported out of the box by the CLR. I use it in a Visual Basic program to do drag and drop. Languages like VB.Net do not have syntax to support it, but that is trivial to add.
“The CLR is really not tremendously more “language agnostic” than the JVM, especially when you want to emit verifiable code.”
Sorry, but this is just completely false. For example, the JVM does not support out parameters or multiple return values. So the only way to get more than one value out of a function is to create a new object and return it. This is incredibly slow. In .NET you could either use ref/out parameters or just return a struct which is allocated on the stack and is therefore much faster. All this is in the verifiable subset of the MSIL.
I won’t even mention the nonexistent generics support of the JVM.
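A quick C# sketch of the two verifiable options described above (the MinMax/Stats names are just for illustration):

// Option 1: return a stack-allocated struct carrying both values.
public struct MinMax
{
    public int Min;
    public int Max;
}

public static class Stats
{
    public static MinMax GetMinMax(int[] values)
    {
        MinMax r;
        r.Min = int.MaxValue;
        r.Max = int.MinValue;
        foreach (int v in values)
        {
            if (v < r.Min) r.Min = v;
            if (v > r.Max) r.Max = v;
        }
        return r;
    }

    // Option 2: out parameters, also verifiable.
    public static void GetMinMax(int[] values, out int min, out int max)
    {
        MinMax r = GetMinMax(values);
        min = r.Min;
        max = r.Max;
    }
}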
Ok, Anonymous is perfectly right in what they write. However, don’t make it sound like it’s the CLR’s fault. Interoperability, as exemplified, is tricky in itself.
BTW, if a guy managed to deliver Smalltalk.NET (Smallscript, S#) and another one Prolog.NET, it would seem the CLR does a pretty good job. Not perfect, but reasonable. I can see how S# can operate with C#, though the reverse is more complicated, as Anonymous points out.
Prolog for .NET, first Google result:
http://weblogs.asp.net/astopford/archive/2004/03/04/83761.aspx
How lovely, the first one targets Java 1.1 and is thus interoperable with J#.
Notice, I’m not saying in any way you can’t run something on the CLR. I’m just saying that it will take almost no advantage of the CLR, just like languages for the JVM do not take advantage of it (think of SISC, JCL, JRuby, Jython or whatever).
If you port your language of choice to the CLR, you can usually consume the whole framework class library and you get a reasonably efficient JIT compiler, a good garbage collector and all kinds of marshalling services for free.
The only thing you don’t get automatically is the ability for other .NET languages to consume code written in your new language.
Sounds like a good deal to me, and the fact that many interesting new languages (Nemerle, Boo etc) target the CLR seems to confirm this.
If you don’t port your language to Mono, you can reuse all the third-party libraries written five years ago without troubling yourself much about silly framework wars.
> I agree that it is difficult to get good interoperability with C# if you use advanced or unusual
> language constructs. But that has little to do with the CLR.
When the discussion is regarding whether or not Mono is a language-neutral platform, rather than just a vehicle for permutations of syntax for C#, I would say that it has everything to do with the CLR.
> Getting good interoperability between languages with completely different philosophies is
> genuinely difficult. This is not really surprising.
No kidding. It’s almost as if I’ve been pointing it out since yesterday.
> There is typically a fast way to implement a language construct and a safe and interoperable
> way. But this is not unusual, and there is a simple solution for this: have a compiler switch or an
> attribute to tell the compiler whether to use the fast or the interoperable way.
How exactly is this “simple?” This is not simple at all. That’s added complexity for the generation of the subset of items that are easily mapped and does nothing to solve interoperability concerns. How is this not “unusual?” What compilers do this? What compiler do you work on?
> At the boundaries of your library, where interoperability is important, you use the safe,
> interoperable but slow implementation. In the core of your library where performance matters you
> use the fast way.
So in other words you write large quantities of code that cannot be reused by other consumers of the platform. That fosters language neutrality how, exactly? Not to mention that this cannot be easily implemented via compiler switches as you’ve alluded to above. Which classes do I wish to export to the outside, how do I specify this, and how does code that interacts with both “internal” and “external” compilation conventions interoperate?
> If you use two languages with different paradigms (like e.g. functional and OO) in a project, you
> won’t have thousands of small modules in each language that call each other anyway. Typically
> you will have a few large parts written in an OO language and a few large parts in a functional
> language.
Why? The whole point is to write code that can be reused regardless of the implementation language. It isn’t supposed to matter if it’s written in Eiffel, C#, or Standard ML. The whole point of the “language neutrality” of the CLR was to provide me with the ability to write a library in one language, another library in another language, and an application in a third language.
> For example you might want to write the GUI in C# and the core in a functional language such
> as Haskell. The interface between the two parts will be quite small if the whole thing is well
> designed. You can still deploy it as a single CLR binary.
Good luck interfacing code written in Haskell. Haskell’s type system is generally more sophisticated than the CTS, objects and type classes don’t map well on opposite sides of the fence, and AFAIK all GHC .NET backends have been basically abandoned. Disregarding your selection of Haskell for a moment, we’ll move to the more general issue you suggest, which is to devote energy to developing code that is essentially not reusable for other platform consumers. What motivation then do I have to write code in a language X for which interoperability is prohibitive, when I could just write it in C# and have it reused by every C#-with-a-different-syntax, and probably by every fringe language for which interoperating with code written in it isn’t worth the effort, but which will undoubtedly have a CTS FFI? Congratulations, we’ve just replaced C with a platform that’s less portable, less efficient, and has less pre-existing code, while doing nothing to foster development in X.
And from the other side, what motive do I have to target the CLR with my compiler? I waste lots of time mapping the CTS to my language, potentially mutilating my language to support the requirements of the CTS, abandon or reimplement the standard library of my language, abandon or reimplement third-party modules, typically receive significantly worse performance, and replace my FFI with the CTS. What exactly do I get in return in your scenario?
> Delegates were slow in .NET 1.1. But they are *much* faster in .NET 2.0. This has been done
> especially to support functional language constructs like closures.
Delegates in most cases can probably never be especially fast, however given how slow they are any improvement is welcome. These improvements probably have nothing to do with specifically making closures more efficient, but rather higher-order functions in general. Most likely this is to make event handlers in general more efficient, and isn’t motivated by state-capture.
> Multimethods AKA multiple dispatch is supported out of the box by the CLR. I use it in a visual
> basic program to do drag and drop. Languages like VB.Net do not have syntax to support it, but
> that is trivial to add.
Multimethods are not supported “out of the box” by the CLR. I take it you don’t Visitor Pattern much in whatever it is that you do.
You can implement multimethods on top of the CLR, but then as I asked you, how would you implement them such that consumers would actually make use of your library?
> Sorry, but this is just completely false. For example, the JVM does not support out parameters
> or multiple return values. So the only way to get more than one value out of a function is to
> create a new object and return it. This is incredibly slow. In .NET you could either use ref/out
> parameters or just return a struct which is allocated on the stack and is therefore much faster.
> All this is in the verifiable subset of the MSIL.
>
> I won’t even mention the nonexistent generics support of the JVM.
Not that the lack of generics in the JVM prevents the implementation of parametric polymorphism in a language (just as it doesn’t in the CLR, which doesn’t yet have an actual release with generics support), and escape analysis provides stack allocation of objects. Good luck implementing real continuations on either, though. If you’re willing to sacrifice interoperability then there’s not a tremendous difference. The MS implementation of the CLR typically offers a slightly faster target than the JVM, but compared to the typical implementation of any of the pre-existing compiled languages with either or both as a target, they both perform badly.
> If you port your language of choice to the CLR, you can
> usually consume the whole framework class library and you
> get a reasonably efficient JIT compiler, a good garbage
> collector and all kinds of marshalling services for free.
So basically I get a slightly less crappy JVM, but with an efficient implementation for only one platform (mono might have a decent garbage collector someday!), much less pre-existing code (but don’t worry, because some industrious person will port each and every library or tool written for Java to C# given enough time) with better platform interop, and all I have to do is spend a nontrivial chunk of time writing a backend for my compiler (or reimplementing my interpreter), abandon anything that relies on my current FFI, marshal CTS types (or toss its warts into my standard library, or abandon my standard library) and probably break some amount of software actually written in my language.
> The only thing you don’t get automatically is the ability
> for other .NET languages to consume code written in your
> new language.
Thus relegating it to some fringe corner of development that sees little use.
> Sounds like a good deal to me, and the fact that many
> interesting new languages (Nemerle, Boo etc) target the
> CLR seems to confirm this.
Languages with no legacy that are permutations of C# that add features from ML or look to mimic Python in syntax. I don’t see how that translates at all to the “deal” you’ve outlined. F#, SML.NET, A#, Common Larceny, IronPython, Eiffel#, et al are much more similar to that deal.
If you’re writing a new permutation of C# then of course targeting the CLR makes a lot of sense. On the other hand, look how much use Eiffel’s .NET backend has. .NET has really turned Eiffel from a fringe language into the toast of development. If I didn’t think that Microsoft paid for its development, I would almost feel sorry for ISE.
Languages on the .NET framework can choose to do anything they want, and they do not even have to follow any conventions to interop with third-party languages.
There is a basic “interop” layer in the CLI virtual machine called the “Common Language Specification”, which is actually a subset of the things that can be done with C#. For example, the casing of an identifier does not matter in the CLS (it is, for example, illegal to have both a method “Hello” and a method “HELLO”, as they would be considered the same method at the CLS level). Another example is the lack of some data types, like signed bytes.
So CLS provides these interop guides that compiler authors must follow to interop, but they have the ultimate control.
In addition to the CLS code, languages can be “consumers”, “extenders” or “consumers and extenders” depending on how fully they participate in the CLS universe.
The system exposed by the CLS is a subset of the world exposed by the actual type system, for the sake of interoperability. This is helpful because compilers do not have to be modified extensively or new concepts introduced (the limited set of data types is a great example).
Above the CLS compiler authors might choose to support more features, like all the types of the CLI, but that is a decision that a language designer has to make.
So the CLS in a way is just a contract to share library components. In the same way that COM was, or that some compilers and runtimes do: speak a subset of the language and you can participate.
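To make those two CLS rules concrete, here is a small C# illustration; with compliance checking turned on, the compiler should warn about both members (the class itself is made up):

using System;

[assembly: CLSCompliant(true)]

public class Greeter
{
    public void Hello() { }
    public void HELLO() { }            // differs only in case: not CLS-compliant

    public void Store(sbyte value) { } // sbyte in a public signature: not CLS-compliant
}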
But internally every language can do whatever it deems appropriate. The example of tailcalls has been used already; it is an instruction used heavily by functional languages. Nemerle for example requires this instruction.
Languages like JScript were fairly dynamic, and had features like lambda functions, a feature implemented with creative use of the generated code. This feature is now available also to C# developers in the form of anonymous methods and iterators.
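For reference, this is roughly what those two C# 2.0 features look like (a small sketch, not tied to any of the languages above):

using System;
using System.Collections.Generic;

class CSharp2Features
{
    // Anonymous method: the delegate body captures the local 'threshold'.
    static Predicate<int> AtLeast(int threshold)
    {
        return delegate(int x) { return x >= threshold; };
    }

    // Iterator: the compiler generates the state machine behind 'yield return'.
    static IEnumerable<int> Evens(int count)
    {
        for (int i = 0; i < count; i++)
            yield return i * 2;
    }
}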
Those languages built for the .NET Framework *do* share similarities, after all they want to integrate as deeply as possible (VB, C# from MS or the third-party Nemerle and Boo compilers), but that does not force everyone to do this. IronPython on the other hand is first a Python runtime and a CLS citizen, but not a CLI citizen.
Imagination is the limit. If your imagination is limited, or you make decisions based on bumper-stickers, then there is little we can do.
Miguel
Actually, I agree that you get some nice libraries; less so on the good JIT, since you’re not able to make real use of that JIT.
Any direct threaded interpreter made with VMGEN will be faster than, say, current IronPython.
As I initially said, the CLR is a nice thing, just not the wonderful interop platform people say it is.
“What compilers do this?”
F# has a switch which determines whether higher-order functions are passed as delegates or as unsafe function pointers. There are various other examples.
“What compiler do you work on?”
I wrote a generic compiler framework for the CLR and a few simple domain-specific languages. So I do know a bit about the topic. I have not yet written a Turing-complete language for the CLR.
What compiler do *you* work on? No flame, but a serious question.
“Why? The whole point is to write code that can be reused regardless of the implementation language. It isn’t supposed to matter if it’s written in Eiffel, C#, or Standard ML. The whole point of the “language neutrality” of the CLR was to provide me with the ability to write a library in one language, another library in another language, and an application in a third language. ”
You can do that. I already did this in one project. But it is a large library written by one guy and a user interface written by some other guys. The interface between the two parts is small and well defined. The main benefit of using the CLR for both is that you can deploy it as a single, verifiable CLR executable.
“Delegates in most cases can probably never be especially fast, however given how slow they are any improvement is welcome. These improvements probably have nothing to do with specifically making closures more efficient, but rather higher-order functions in general. Most likely this is to make event handlers in general more efficient, and isn’t motivated by state-capture.”
AFAIK the changes in the delegate implementation have been done for better support of anonymous delegates in C#, which do support state capture.
And delegates (at least in CLR 2.0) are reasonably fast.
“Multimethods are not supported “out of the box” by the CLR. I take it you don’t Visitor Pattern much in whatever it is that you do.”
I have problems parsing this sentence. But multimethods *are* supported by the CLR, and no, I don’t mean the visitor pattern. The Type.InvokeMember method does correct multiple dispatch.
“You can implement multimethods on top of the CLR, but then as I asked you, how would you implement them such that consumers would actually make use of your library?”
It is no problem to do this. You would use a combination of reflection and MSIL code generation for good performance. The user would not have to be aware of this.
But since the CLR supports multiple dispatch out of the box, there is no need to do this.
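For what the claim is worth, here is a sketch of the behaviour being referred to: with Type.InvokeMember, the default binder chooses an overload from the runtime types of the arguments (the Shapes class and its overloads are invented for illustration):

using System;
using System.Reflection;

class Shapes
{
    public string Collide(object a, object b) { return "object/object"; }
    public string Collide(string a, object b) { return "string/object"; }
    public string Collide(string a, string b) { return "string/string"; }
}

class Demo
{
    static void Main()
    {
        Shapes s = new Shapes();
        object x = "hello";   // static type object, runtime type string
        object y = "world";

        // The overload is selected from the runtime types of the boxed arguments.
        object result = typeof(Shapes).InvokeMember(
            "Collide",
            BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Instance,
            null, s, new object[] { x, y });

        Console.WriteLine(result);   // expected per the claim above: "string/string"
    }
}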
You seem to think that providing a bridge from functional languages to OO languages destroys their purity. But what good is purity if you can only use your super advanced language in research projects?
IMHO the CLR offers the best chance to get functional programming languages into the mainstream.
> Languages on the .NET framework can choose to do anything
> they want, and they do not even have to follow any
> conventions to interop with third-party languages.
What’s sad is that this is basically the most insightful aspect of your post.
> But internally every language can do whatever it deems
> appropriate. The example of tailcalls has been used already,
> it is an instruction used heavily by functional languages.
> Nemerle for example requires this instruction.
No it doesn’t. Tail call elimination is done manually because Microsoft’s implementation of tailcall performs poorly. The only time Nemerle will emit tailcall is if you pass a command line argument specifically enabling it.
> Languages like JScript were fairly dynamic, and had
> features like lambda functions, a feature implemented with
> creative use of the generated code. This feature is now
> available also to C# developers in the form of anonymous
> methods and iterators.
You really think that the addition of anonymous delegates in C# is a result of JScript? Iterators, too? Ok…
> Imagination is the limit.
Well, Miguel, I think you should use your imagination less when making comments and perhaps stick with reality.
> “What compilers do this?”
> F# has a switch which determines whether higher-order functions are passed as delegates or as
> unsafe function pointers. There are various other examples.
What are some other examples? That’s hardly an especially complete enumeration of ‘typically a fast way to implement a language construct and a safe and interoperable way.’ The F# compiler does also have several other command-line options for specifying compilation behavior, but these aren’t useful in a mixed-compilation setting. You aren’t going to mix and match --closures-as-delegates and --closures-as-virtuals or --unverifiable. It’s worth noting that F# isn’t using delegates for higher-order functions. I actually rather doubt that interoperating with F# will be overly common from the C# side.
> What compiler do *you* work on? No flame, but a serious question.
Actively I work on a handful of compilers for DSLs originally used for developing code for embedded devices for the company that I work for, which are implemented in Standard ML. The compiler framework has a number of backends. Previously I’ve done more interesting work, but I wasn’t looking to go down memory lane with you; I was looking to see if you actually worked on one of the publicly available, ideally open-source compilers so that I could see your idea of ‘simple.’
> You can do that. I already did this in one project. But it is a large library written by one guy and
> a user interface written by some other guys. The interface between the two parts is small and
> well defined. The main benefit of using the CLR for both is that you can deploy it as a single,
> verifiable CLR executable.
This is precisely what I’m not referring to. The promise of the CLR’s ‘language neutrality’ was the flexibility of being able to write reusable software components in a language, not to specifically expose a constrained interface between large codebases or have one-way interoperability. The whole ‘Mono is not mono-language’ demo with Java, C#, and Boo playing nicely together, when they afford fairly limited divergence, is not exactly interesting. In other news I can make Nice and Java play together and use objects from both in SISC. Wow, Java is so special!
> AFAIK the changes in the delegate implementation have been done for better support of
> anonymous delegates in C#, which do support state capture.
Anonymous delegates, other than also offering pseudo-closures in C#, are a syntactic nicety to make using them for event handlers more convenient. Of course there are also numerous higher-order additions to the library in 2.0 that will also be made more convenient to use if delegate performance isn’t completely awful. I think it’s fairly safe to assume that the general class of higher-order functions is the motivation.
>> “You can implement multimethods on top of the CLR, but then as I asked you, how would you
>> implement them such that consumers would actually make use of your library?”
> The Type.InvokeMember method does correct multiple dispatch.
> It is no problem to do this. You would use a combination of reflection and MSIL code generation
> for good performance. The user would not have to be aware of this.
>
> But since the CLR supports multiple dispatch out of the box, there is no need to do this.
That’s pretty funny, since the Type::InvokeMember method is reflection. What I’d like for you to do now is, either with Type::InvokeMember or however else you’d like, to explain how to ‘out of the box’ polymorphically dispatch on multiple arguments. Then you’ll tell us how to work with these multimethods from C# or VB.NET.
> You seem to think that providing a bridge from functional languages to OO languages destroys
> their purity. But what good is purity if you can only use your super advanced language in
> research projects?
I think an apple is an apple. If you need to modify a language in order to interoperate with a platform, then the resultant language is different. So then when SML.NET adds numerous extensions to the language and tosses out pieces of the language proper in order to create what is basically an FFI the CLR isn’t to be praised for its language neutrality. When the F# developers toss out the O in O’Caml and create a new language with a basis in Caml that isn’t great language neutrality. It really has nothing to do with whether or not the language is functional or not.
Since you mentioned it, though, most of those ‘super advanced languages’ that can only be used in ‘research projects’ are far more portable and more efficient than .NET.
> IMHO the CLR offers the best chance to get functional programming languages into the
> mainstream
That seems pretty unlikely. C didn’t. Portable high-performance implementations didn’t. There are more backends for ‘functional’ and ‘dynamic’ languages for the JVM than there are for the CLR, and it didn’t. If anything, C# will just continue to pick up approximations of advanced language features from other languages. Around when that happens people that develop for .NET will see the point in using a language with those features, whereas when they were available in other languages they just couldn’t justify switching languages to use them, because all of the code they wrote in those languages wouldn’t work well from C#.
“Actually, I agree that you get some nice libraries; less so on the good JIT, since you’re not able to make real use of that JIT.”
That depends. A straightforward implementation will have similar performance to an interpreter. But by special-casing often used methods and using these if appropriate you could take advantage of the JIT.
I think there is a project called Psyco for Python which has a similar approach.
Another, more manual way to do this would be to have optional type annotations. But the Psyco approach is superior because it does not require modifications of the language.
Don’t get me wrong: IronPython is quite impressive given its low age. But I think there are some large improvements possible.
“This is precisely what I’m not referring to. The promise of the CLR’s ‘language neutrality’ was the flexibility of being able to write reusable software components in a language, not to specifically expose a constrained interface between large codebases or have one-way interoperability. ”
I guess I never saw this as a realistic possibility in the first place, so I am not as disappointed as you seem to be.
Of course you could write a program where every class is written in a different language. But that would only work if the languages had a similar philosophy, and then there is not much reason for different languages in the first place.
But having large modules written in fundamentally different languages that run in the same process and share resources like thread pools, app domains etc. is still much better than having two processes that communicate over sockets. This is what I was expecting out of the language neutrality of .NET, and that is what I got.
“I was looking to see if you actually worked on one of the publicly available, ideally open-source compilers so that I could see your idea of ‘simple.'”
No. Sorry. My current job has nothing at all to do with compilers.
“Since you mentioned it, though, most of those ‘super advanced languages’ that can only be used in ‘research projects’ are far more portable and more efficient than .NET.”
Even if this were the case, it does not help much. My favorite language, Clean, is efficient and portable, but there are about 10 people on this planet actually using it. I would gladly sacrifice performance if I could use it to develop .NET programs, where you can actually find an interesting and well paid job.
“So then when SML.NET adds numerous extensions to the language and tosses out pieces of the language proper in order to create what is basically an FFI the CLR isn’t to be praised for its language neutrality. When the F# developers toss out the O in O’Caml and create a new language with a basis in Caml that isn’t great language neutrality.”
OK. So some people have bastardized your favorite language to port it to the CLR. But that says nothing about the capabilities of the CLR. There is no proof by example.
When I see the MSIL instruction set I see nothing that prevents me from implementing just about any language I can think of. With the verifiable subset it will be slower than with unsafe MSIL, but still quite fast since there is lots of generally useful stuff such as structs, ref parameters and a generics system even in the verifiable subset. Interoperability with other languages is certainly not automatic, but it is possible.
By the way: you claimed that all CLR languages that have good interoperability are basically “permutations” of C# with different syntax. But Nemerle has a code macro system, so control structures that are hardcoded in C# are actually just Nemerle macros. A significant part of Nemerle is written in Nemerle.
Nemerle is much more expressive than C#. It can do stuff that is absolutely impossible in C#. So Nemerle is clearly not just C# with different syntax.
>> Languages like JScript were fairly dynamic, and had
>> features like lambda functions, a feature implemented with
>> creative use of the generated code. This feature is now
>> available also to C# developers in the form of anonymous
>> methods and iterators.
>
> You really think that the addition of anonymous delegates
> in C# is a result of JScript? Iterators, too? Ok…
I did not say that. You seem to be obsessed to the point of selective understanding.
Either you did not read what I wrote or you are trying to put words in my mouth or you are just stupid.
As for tailcall optimizations: it is not our fault that our JIT compiler is better than Microsoft’s at tailcalls in the particular case of the way it’s used by Nemerle.
I guess the languages you can’t easily implement are those with call-with-current-continuation. IIRC none of the Scheme-for-CLR things has that, still (even if IIRC there should be a paper coming out somewhere).
Call with current continuation will most definitely require unsafe MSIL. But with that it should be possible.
Doing them with verifiable MSIL would be quite slow.
I really don’t have patience to quote your text anymore.
I never thought the CLR would be a panacea of interoperability, which again is my point. It’s not particularly interesting technically.
I have no emotional investment in whatever it is that you seem to be insinuating. So if you can’t respond to what I say, at least be so kind as to not project your own emotional involvement with the CLR onto me.
Standard ML isn’t my “favorite language.”
Write a Clean compiler that targets the CLR if you think the platform is suitable. Enjoy!
I don’t know if you’re really dense or not, but being able to implement something in such a manner that isn’t interoperable isn’t particularly insightful.
Nemerle’s macros aren’t “exportable.” Stray from CLS => less interoperable. Nemerle also isn’t radically different, either.
I guess I never saw this as a realistic possibility in the first place, so I am not as disappointed as you seem to be.
That’s exactly his point. If that’s the case then why is the subject of .Net’s language neutrality constantly brought up as an advantage? It simply doesn’t exist. Try developing a project really utilising C# and then VB.Net and tell us if there were any advantages (especially in terms of typical RAD development – VB’s heritage) to using VB.Net at all. There simply aren’t any because they are just too identical. Microsoft has even had to cripple VB.Net to convince people that you use VB.Net for this and that and C# for more complex stuff. It’s pointless. There is one language for the CLR and .Net, and that’s C#. Even with the intelligence of your average VB developer they seem to be cottoning on to that.
All you’ve got if you don’t have that interoperability between languages of different paradigms is a pointless runtime to run your applications running on top of your native platform that does absolutely nothing for you. What good is that? I think people have been running away with the wet dream of running environments within environments, within runtimes within runtimes……..
I would gladly sacrifice performance if I could use it to develop .NET programs, where you can actually find an interesting and well paid job.
Nuff said.
OK. So some people have bastardized your favorite language to port it to the CLR. But that says nothing about the capabilities of the CLR.
If you have to seriously bastardise any language to port it to the CLR then it says quite a lot about the CLR’s capabilities – namely that its language neutrality is non-existent.
When I see the MSIL instruction set I see nothing that prevents me from implementing just about any language I can think of.
Here’s an incredible thought: you can do whatever you want with any language you want without tying yourself to MSIL.
All you’ve got is a runtime sitting as a middle-man for your language for no reason whatsoever. The only point in the CLR is the interoperability and promised language-neutrality it supposedly brings, but for that interoperability to work you cannot stray at all from the OO/.Net paradigm. To do that you have to bastardise your language to such an extent that it completely loses any interesting features it would otherwise have had, and for research languages (which by their nature, are supposed to be interesting) what the hell is the point?!
For whatever reason, Microsoft seem hell-bent on creating a pointless layer above Windows and native code that everyone external to Microsoft are supposed to use for all kinds of programming.
Incidentally, that’s the main argument Microsoft used to rubbish Java and get out of Java development altogether.
But Nemerle has a code macro system so that control structures that are hardcoded in C# are actually just nemerle macros.
Sounds a bit like Microsoft’s 70% less code mantra crap for .Net 2.0. At some stage you, or someone, has to write the code and it has to do what you want.
Nemerle is much more expressive than C#.
Some of the syntactic sugar, at times, looks interesting but that’s all it is – sugar. There’s no reason you couldn’t add those capabilities to C# (and Microsoft probably will), but of course, Microsoft owns C# so people created another language. There’s just nothing compelling there.
The Pundits are back on OSNews. David is never tired of sticking his foot in his mouth. Not a matter of being ignorant, but making wild claims from the pulpit based on no evidence. The latest jewel is:
Some of the syntactic sugar, at times, looks interesting but that’s all it is – sugar. There’s no reason you couldn’t add those capabilities to C# (and Microsoft probably will), but of course, Microsoft owns C# so people created another language. There’s just nothing compelling there.
He clearly has not looked at Nemerle source code. A good place to start is the compiler itself which is written in Nemerle itself.
Of course, if you are a “pundit” whose opinion is based on how to invoke the method “System.Console.WriteLine”, well, there are only so many ways you can call the library method “System.Console.WriteLine” and they won’t look that different (after all, it’s a library call).
But the *language* itself is vastly different, Nemerle ships with great docs:
http://nemerle.org/Grok_Base_structure_of_programs
“I have no emotional investment in whatever it is that you seem to be insinuating.”
For somebody without emotional investment your postings are surprisingly vitriolic. I don’t believe you.
But I also see no point in further discussion with somebody who resorts to namecalling and fecal language in a technical discussion.
He clearly has not looked at Nemerle source code. A good place to start is the compiler itself which is written in Nemerle itself.
I certainly have looked at it, but not through rose tinted glasses. It’s also the easiest thing in the world to claim no evidence when you’re a bit cornered. What on Earth do you think that post was?
As for my runtimes within runtimes comment – IKVM – WTF??!! That just shows how utterly daft this whole thing has now become.
But the *language* itself is vastly different, Nemerle ships with great docs:
Nemerle is a superset of C# (their words not mine, and I think they’re even being a bit optimistic there). It’s certainly a variation on a theme, but not a terribly significant one. People have been thinking about some of these variations on OO programming and general syntax for years, and they certainly weren’t tried out for the first time running in a CLR although you’ll probably believe they were.
Most of the positive differences in Nemerle are to do with the English/human readability of the code. The usage of ‘when’ is certainly one of the most useful I’ve seen in this case, but again, it’s syntactic nice-to-have sugar. There’s no reason whatsoever you couldn’t adapt C# to add these features – that’s if they’re practical. With a lot of the extended type checking and the concept of things like mutable, you’d really have to see whether a lot of that would be practical in the real world.
I suppose the acid test is whether you would recommend Nemerle as the default language for Mono, and the recommended language that people write Mono-based Gnome, GTK# and other applications in rather than C#? I rather suspect the answer is going to be no.
Of course, if you are a “pundit” whose opinion is based on how to invoke the method “System.Console.WriteLine”, well there are only so many ways you can call the library method
There are only so many ways you can go about designing and developing an object-oriented language (and environment). Language neutrality cannot happen with .Net unless you buy into the OO/.Net paradigm, and if you do that then you lose all the advantages of using any kind of different language! If you’d cared to read at all you’d find that has been the main point of most of the posts here.
You can’t pretend that all this is something that it isn’t (a spade is actually a spade), and if you are then it’s the first sign to start seeking professional help.
I think you’ve spent too long hacking Mono and being immersed in the .Net world rather than developing actual systems with it before buying into its hype. If I were you, I’d be a bit scared about what I’d turned into. Your subject line about says it all.
“Most of the positive differences in Nemerle are to do with the English/human readability of the code. The usage of ‘when’ is certainly one of the most useful I’ve seen in this case, but again, it’s syntactic nice-to-have sugar.”
If you think that the when keyword is worth mentioning as a positive difference of Nemerle you have no idea what the language is about.
Here is a summary of a real new feature of nemerle.
http://nemerle.org/Macros_tutorial
If you think that the when keyword is worth mentioning as a positive difference of Nemerle you have no idea what the language is about.
I think it perfectly encapsulates what that, and other .Net/Mono languages, are about and how much freedom you’ve actually got.
I’ve read it. It doesn’t make any difference whatsoever, because macros here are slightly different, and not very interesting, ways of performing code reuse and another way of simply appearing to write less code (the current flavour of the month). It also carries along with the theme of lumping as much in at compile time as possible. There’s been a lot of this talked about with .Net and in other languages and environments over the years, and that’s been the theme for .Net 2.0 (we’ll certainly see some of that there, especially with databases and SQL). When talking about this, however, people still have to actually write the code within those macros, or whatever else is used, and those macros have to be able to do exactly what you want or they’re useless.
Trust me – it’s not that spectacular.
Is mono really so slow that you have to watch the screen redraw? Or is that just an artifact of the webcast?
Given that you made false claims regarding Nemerle’s implementation, I’m inclined to suggest that you stop sticking your foot in your mouth.
You’re ignorant and don’t respond to my ‘technical’ comments, but instead ramble about what has been done to my “favorite language” or fabricate nonsense about what I think of “purity.” Indeed, this entire discussion started because you were offended by there being nothing particularly interoperable about the platform.
Is mono really so slow that you have to watch the screen redraw? Or is that just an artifact of the webcast?
No, it’s because he’s using vnc2swf (or something like that) to capture from his screen. You always get that redraw effect.
David: So what do you expect in language so you can call it spectacular? Something like:
begin my ultimate program:
do what I want!
end of my ultimate program
It is obvious that stuff must be written at some point, but the difference with macros and other stuff is between writing things by hand and generating them – which is a HUGE difference.
About this “sticking with OO -> one language”: I guess Nemerle has tried to prove, and IMHO succeeded, that the OO and functional paradigms can work together. You get the OO interface (hey! people use it, which means it is good for interfaces, or is simply everybody except you dumb?) and you get functional values, immutability by default, type inference, and pattern matching at the expression level. I agree that at the top level you need the concept of classes, etc. to live well in .NET, but that doesn’t mean you can’t have useful features in your language.
Tailcalls: we do generate jumps instead of recursive calls where we can. I agree that adding tailcall and making it slower than call is the ultimate MS lameness, but on Mono it works better: overall Mono is almost the same speed as MS.NET (even without a generational GC), so you can always use Windows+Mono to get better performance.
To Anonymous: you know, people most often write applications; only some of them are writing libraries. So from their point of view, the ability to consume libraries is much more important than creating interoperable code. And with a good language hosted in .NET you can do exactly that: write the app in a convenient way, but use libs from .NET. And what is more important, hosting a language in .NET is much easier than writing a native compiler (ok, you can always generate C and use some GC available as a library, but this seems a longer and harder way).
I think O’Caml pretty obviously demonstrated that object-oriented, imperative, and functional programming styles can all be accommodated within a single language. Nemerle hasn’t proven anything in that respect.
If interoperability is unimportant to developers, and simply the importing of others’ libraries is essential, then it follows that any language with an FFI would be a suitable language for writing applications. .NET then provides nothing but a lower-performance platform in which to produce these libraries, and only slight improvements over the JVM, which is in the same boat. Further, it offers nothing but an inferior target for developing these languages.
Further there are many portable frameworks for constructing compilers, of which .NET is simply one with an especially disappointing intermediate language.