Jeff Cogswell writes: “I’m about to make a confession. Even though I’ve written several books and articles about C++, I have a secret: C++ isn’t my favorite language. I have lots of languages that I use, each one for different purposes. But the language I consider my all-time favorite is Python.”
From the Microsoft website: “IronPython is the codename for a beta release of the Python programming language for the .NET platform.”
Does that mean it is something like the Visual J++ which Microsoft brought out to dilute Sun’s Java language, and which resulted in the famous lawsuit…?
Seriously, I would take any language implementation coming from Microsoft with a grain of salt.
I suggest you give them the benefit of the doubt. This project was born outside of Microsoft. Then the guys got employed by MS and so it became an MS project.
It comes with the standard suite of python regression tests and pybenches. It passes most of them too!
Of course, IP is not python in the normal sense because you can use it to call into arbitrary CLR classes and even use COM interop to call out into totally foreign code. You can’t do this in CPython, so if you use these features, then you’re not going to be portable.
But, as I understand it, Python is meant to be glue code for binding things together and for doing relatively “scripty” things. If the whole goal of the language is integrating platforms, who do you blame if the code you write isn’t portable?
“…even use COM interop to call out into totally foreign code. You can’t do this in CPython…”
Nitpick, you can do COM in CPython with the win32 extensions module. I recently wrote a python script which uses an Excel COM object to pull out data from a spreadsheet.
Of course, IronPython is not Python in the normal sense because you can use it to call into arbitrary CLR classes. (snip) You can’t do this in CPython, so if you use these features, then you’re not going to be portable.
Well, http://pythonnet.sourceforge.net/ allows calling arbitrary CLR classes from CPython. Also, Python for .NET developers are trying to keep the CLR interface compatible with IronPython.
a mash-up of COBOL and Lisp, with the best parts of either left out.
Too bad the authors of Python hadn’t had to write enough COBOL or Fortran to understand what a bad idea using indentation as a syntactic element is.
Indentation as a syntactic element is bad? Why? Don’t you indent your code anyway to make it readable? So what’s the difference whether it’s a syntactic element or not?
Because there are situations in which it’s actually clearer to have two statements on one line than to separate them.
The canonical example in C is manual loop unrolling.
Your opinion. I would say just the opposite. In fact, there are many, many places in code I have written where I have explicitly commented that the code could have been more syntactically optimized, but I chose NOT to for the sake of clarity, for anyone who has to support my code in the future.
Um, the C loop unrolling example isn’t “syntactically optimized”. It has exactly the same number of syntactic elements if you place the increment statements and action statements on the same line as it does if you don’t.
What it does have is a visual cue that the action and increment statements are tightly coupled. You can’t accomplish this in Python.
For this reason, the indentation rules of Python are not merely enforcing good indentation practice, since they add restrictions that prevent some useful practices.
Python indentation is like Pascal’s banishing of ‘goto’ because Wirth didn’t understand the situations in which goto made code more readable rather than less.
Ummmmm yes it is. Syntactically optimized means that it arranges the statements in a manner that is optimized in some form. In this case it combines the three elements of looping in one statement line.
You can do many similar things in Python, btw. Often I choose to NOT use those kind of options simply because using other options are clearer to programmers, especially those who might not be used to a language’s syntax.
You need to think about stuff like that in a corporate environment when whatever you write could arbitrarily be handed off to ANYONE with any varying degree of experience.
Ummmmm yes it is. Syntactically optimized means that it arranges the statements in a manner that is optimized in some form. In this case it combines the three elements of looping in one statement line.
you’re talking about the for construct itself, not about the example I’m describing.
Alright, maybe I am misinterpreting you. Tell me why it is better/easier?
“Python indentation is like Pascal’s banishing of ‘goto’ because Wirth didn’t understand the situations in which goto made code more readable rather than less.”
You have GOT to be kidding me. Please tell me you are kidding me. Code indentation is extremely important to readability. Perhaps I am misunderstanding you. GOTO statements created confusion by providing a somewhat (in appearance anyway) unstructured method of jumping in and out of code segments. Completely different from what we are talking about with code indentation.
Perhaps I am misunderstanding you.
You’re misunderstanding me.
Wirth banished goto from Pascal because it allowed certain ‘undesirable’ programming constructs. Doing so was an attempt to enforce readability by language design.
Python tries to enforce readability by language design by forcing indentation, including statement termination. But it’s easy to write unreadable code in Python just as it was easy to write spaghetti code in Pascal.
By banishing the goto, Wirth actually made some things harder to write clearly, about which see Knuth’s paper on goto structuring. So, the net effect was that Pascal didn’t prevent unreadable code but did in some instances enforce unreadable code.
By forcing indentation, Python robs the programmer of certain constructs which can be used to indicate code structure, and it causes a different sort of typographic blindness, because of the tab substitution rule.
In other words, Python is like Pascal in that it attempted to do something to make code more readable that didn’t really make the code more readable, but did eliminate some chances for readability.
Where you’re getting confused about what I’m saying is that C allows one to write code that’s at least as readable as Python because it allows one to properly indent. It allows readable code, but it doesn’t attempt to enforce it. Whereas Python attempts to enforce readable code, but in doing so merely leads to different ways to write unreadable code while restricting programmers unnecessarily when there’s a good reason to violate the convention of indentation.
This was a much more descriptive post! I understand what you are trying to say now. I still don’t necessarily agree with what you are saying, but that’s fine.
I still don’t understand this: “Python robs the programmer of certain constructs which can be used to indicate code structure”.
Indentation indicates a block of code, such as a loop, condition or other form of block statement. How does it rob the programmer of constructs that indicate code structure?
Code indentation is great… except when it’s not.
There are times when I explicitly choose not to use indentation, and I get annoyed when a language chooses to tell me, for example, “Use underscores instead of camel-case” or “Use spaces instead of tabs for indentation,” where the reason is based on opinion.
It’s the age-old “thou shalt do it this way” argument with Python, despite the programmer’s wishes or better judgement.
And I have to say this: “GOTO” is sometimes just plain better, for all sorts of context-sensitive reasons. The comparison is because Wirth banished GOTO for a readability **opinion**.
“And I have to say this: “GOTO” is sometimes just plain better, for all sorts of context-sensitive reasons. The comparison is because Wirth banished GOTO for a readability **opinion**.”
I have never used “goto” (although I’ve used “setjmp”, which is even worse).
It can be acceptable to jump into a common error handling block inside a function, but nothing else. And even there, the code can be refactored to avoid it, with other benefits besides eliminating “goto”s.
Other possible uses are in jumping out of deeply nested loops, but if you are doing deeply nested loops, then your code is broken far beyond the use of “goto”.
It can be acceptable to jump into a common error handling block inside a function, but nothing else.
Read Knuth’s paper. There are several valid reasons for using a forward goto.
And even there, the code can be refactored to avoid it, with other benefits besides eliminating “goto”s.
Not always true. Especially when you have code that obtains resources in a certain order, operates on those resources, and then releases them in the reverse order, but has to bail if it can’t obtain all of the resources. The cleanest way to write this code in C is to use forward gotos.
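For what it’s worth, the acquire-in-order / release-in-reverse pattern has a direct analogue in modern Python via try/finally or contextlib.ExitStack, which unwinds its registered cleanup callbacks in LIFO order much like a chain of staged goto labels in C. A minimal sketch (the Resource class and names here are made up for illustration, not a real API):

```python
from contextlib import ExitStack

log = []

class Resource:
    def __init__(self, name, fail=False):
        self.name = name
        if fail:
            # simulate a failed acquisition: bail before any "acquire" is logged
            raise RuntimeError(f"could not acquire {name}")
        log.append(f"acquire {name}")

    def close(self):
        log.append(f"release {self.name}")

def use_resources(fail_second=False):
    # ExitStack runs its callbacks in reverse registration order on exit,
    # whether the block completes normally or raises partway through.
    with ExitStack() as stack:
        a = Resource("A")
        stack.callback(a.close)
        b = Resource("B", fail=fail_second)
        stack.callback(b.close)
        log.append("work")

use_resources()
# log is now: acquire A, acquire B, work, release B, release A
```

If acquisition of the second resource fails, only the already-acquired resource is released, which is exactly the partial-cleanup behavior the forward-goto idiom buys you in C.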
Other possible uses are in jumping out of deeply nested loops, but if you are doing deeply nested loops, then your code is broken far beyond the use of “goto”.
Unless you’re writing code that models N-dimensional mathematics, such as physics simulation code, in which case deeply nested loops in non-vector languages are the only way.
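As an aside, Python offers a goto-free escape hatch for exactly this case: itertools.product collapses N nested loops into a single loop, so one break or return exits every dimension at once. A hedged sketch, with a made-up 3-D search:

```python
from itertools import product

def find_first(grid, target):
    # grid is a 3-D nested list; product() yields every (i, j, k) index
    # combination, so a single return exits all three "loops" at once.
    for i, j, k in product(range(len(grid)),
                           range(len(grid[0])),
                           range(len(grid[0][0]))):
        if grid[i][j][k] == target:
            return (i, j, k)
    return None

grid = [[[0, 0], [0, 0]],
        [[0, 7], [0, 0]]]
assert find_first(grid, 7) == (1, 0, 1)
```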
Like my C teacher said – “You can use GOTO, it’s in the language, right? Why not use it? After you write a million lines of code you can use 1 GOTO. Until then, don’t think about using it.”
Your C teacher should have read Knuth.
Good C programmers commonly use forward goto because it improves readability and simplifies code.
Agreed, in my company our ‘coding standard’ prevents the use of goto, which is quite frustrating..
As it means that often, to return from a function, either you write a lot of ifs, which makes the code unreadable, or you have many different return points, which sometimes induces mistakes if you forget to clean up before returning.
A goto at the end of the function to clean-up and return is a useful idiom..
Ada is more robust…
🙂
Ada is a beautiful language too. I had the opportunity to use it at DEC (there are actually parts of the VMS that are implemented in it).
I also wrote several IS applications with it.
I guess I don’t understand some of the comments people are making about language (A) sucks because of this or language (B) makes bad programmers.
All languages have pluses and minuses. It is how you, the developer, uses them that makes an application sink or swim.
Maybe students just are not being taught proper usage of data structures and algorithms anymore? Maybe Analysis and Design have fallen by the wayside? I don’t know… in “my day” analyzing the problem and designing a solution was far more important than the language chosen to implement the application, altho’ the language selection WAS part of the design.
Personally, I like ALL LANGUAGES. Each one is unique and has character of its own. Young developers should learn to appreciate the nuances of each new language they encounter and how each language can be leveraged in a given situation.
If you love Ada you’d love VHDL.
I had to go look it up. Yes, I like it already!
And PL/SQL…
I really like your comment: I can’t stand PHP nor VB personally, but when a good/experienced programmer (read: understands structure, design, probably has experience with several languages, etc) writes in these languages, it’s usually still “good code”, despite any features or design decisions that make it easy to write “bad code”.
“Too bad the authors of Python hadn’t had to write enough COBOL or Fortran to understand what a bad idea using indentation as a syntactic element is.”
There are many objectionable things in Python, but the indentation seems to be criticized only by people who have never used the language.
The indentation rules in Python are exactly those that any half decent programmer already follows in *every* language (and Python doesn’t impose any specific indentation style, just *any* indentation style).
I think the complaint is more about what happens in large-scale projects when people invariably mess up the indentation guidelines and keep the script from running. I munged up some C code pretty bad the other day, because our policy is to use tabs to indent, but I was working on some code that had 3-space indents (btw — what kind of uncivilized jackass uses 3-space indentation?).
The indentation rules in Python are exactly those that any half decent programmer already follows in *every* language (and Python doesn’t impose any specific indentation style, just *any* indentation style).
The rule that statements are terminated by line endings and that multiple statements cannot appear on a single line pretty much blows that claim out of the water.
There are programming idioms, such as manual loop unrolling, which are clearer when multiple statements are allowed on a line.
Also, the tab replacement rule pretty much negates any advantage that using alignment to indicate block nesting gives you because you can’t tell when reading the code how the indentation is going to work after tab replacement.
You CAN have multiple statements per line in Python.
Know the language before you comment on it.
Especially if you’re going to make the same comment three times.
Sorry, am I repeating myself? Or are you referring to cloudy? I do tend to ramble, sometimes obnoxiously so.
I was agreeing with you with respect to Cloudy. He made essentially the same claim at least three times that I recall noticing. Perhaps it was more or less, because I only skimmed his posts.
Teach me, Obi Wan.
I know you can have multiple *expressions* on a line. Where in the reference manual is the grammar that allows me to have multiple *statements*?
If I’m wrong, then I have learned the language poorly, and I withdraw that particular criticism.
A statement is a group of expressions that end in a result? Not sure if you are indicating a nuance of language here or…? You are not allowed multiple BLOCK statements per line – which is part of the indentation goal of readability.
“There are programming idioms, such as manual loop unrolling, which are clearer when multiple statements are allowed on a line.”
And Python allows multiple statements on a single line, using a “;”. And code blocks can also have just one line. The following code…
for i in xrange(10): print i; print i*2
…is perfectly valid.
“Also, the tab replacement rule pretty much negates any advantage that using alignment to indicate block nesting gives you because you can’t tell when reading the code how the indentation is going to work after tab replacement.”
If you have aligned code, and you replace blocks of “n” spaces by tabs, you will end up with consistent results within a single code block.
As I said, people who complain about the indentation “problem” haven’t actually used the language.
If you have aligned code, and you replace blocks of “n” spaces by tabs, you will end up with consistent results within a single code block.
Yes. But if you aren’t doing the replacement consistently, you can end up with code that looks properly indented when displayed, but that the compiler treats as having different indentation than you think you see.
I’d try to show you an example, but since OSNews’ display software eats tabs and spaces, you can’t actually show valid Python code with it.
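Since the forum software eats whitespace, the hazard can at least be sketched with escape sequences inside a string. (Note this shows CPython 3 behavior, where ambiguous tab/space mixing is rejected outright with TabError; Python 2, current when this thread was written, instead silently expanded tabs to the next multiple of eight columns, which is precisely the kind of misread described above.)

```python
# Two lines that look identically indented in an 8-column editor:
# one indented with a tab, the next with eight spaces.
src = "if True:\n\tx = 1\n        y = 2\n"

try:
    compile(src, "<demo>", "exec")
    consistent = True
except TabError:
    # The tokenizer refuses to guess which block the lines belong to,
    # because the indentation disagrees across possible tab widths.
    consistent = False

assert consistent is False
```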
As I said, people who complain about the indentation “problem” haven’t actually used the language.
I’ve used it. I missed that ‘;’ was allowed as a statement separator, and withdraw the comments about it, but I still hold to the comments about the indentation level not being unambiguously determinable (which Wikipedia agrees with).
“Yes. But if you aren’t doing the replacement consistently, you can end up with code that looks properly indented when displayed, but that the compiler treats as having different indentation than you think you see.”
And why should I do that? There is only one reason that calls for “spaces->tabs” or “tabs->spaces” replacement: when one or the other is enforced in a particular project, and you want to add code that used the “wrong” indentation method.
Problems may come up when replacing inside aligned lines that continue statements (like splitting a “print” over multiple lines), and code may look aligned when it isn’t. But Python doesn’t care about indentation in split statements, so there is no problem there.
I also complained about indentation when I first started with Python, but soon realized that it is a non-issue. People can come up with multiple scenarios where problems could arise, but in practice they never do. I’m still waiting to hear about a real-world situation where indentation in python caused problems.
And to those stating that not using indentation is sometimes desirable… I don’t want to look at your code… ever!
Assuming you’re a C/Pascal/Algol/etc programmer, do you use that indentation anyway? Probably. So it’s just using something you use anyway and making it required.
Except that
a) using end-of-line as end-of-statement means that idioms that put multiple statements on a line for readability are not possible
and
b) the tab substitution rule means that you can’t really tell by inspection what the indentation of a program really is.
Before anyone else moderates the repeat of the same inaccurate information up, may I suggest that people look at “statement list” in the Python Reference Manual. You can find it under Compound statements. In short you can do this:
a[i] = n1; i += 1
a[i] = n2; i += 1
a[i] = n3; i += 1
a[i] = n4; i += 1
if you really must.
Pretend that my brackets and index variable weren’t eaten.
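For completeness, here is the quoted idiom as a self-contained, runnable sketch (the values are illustrative), showing that the Python grammar’s semicolon-separated statement lists do support manual unrolling:

```python
# Unrolled array fill: each line pairs the store with its index bump,
# keeping the two tightly coupled statements visually together.
a = [0] * 4
n1, n2, n3, n4 = 10, 20, 30, 40

i = 0
a[i] = n1; i += 1
a[i] = n2; i += 1
a[i] = n3; i += 1
a[i] = n4; i += 1

assert a == [10, 20, 30, 40]
```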
Yeah, Python has the best parts of Lisp left out, but there is enough in there for the language to be enjoyable to use. Plus, programmers don’t blanch at the mention of Python, so you actually have the chance to use it sometimes.
It seems to me that the people most put off by the idea of indentation-based syntax have always been programmers for whom the word ‘refactor’ might as well be a reference to an obscure foreign capital. I remember mentioning Python’s syntax to a colleague who was completely repulsed. Later I found out that his idea of routines were 100-1000 line tracts of utterly unreadable spaghetti.
Best regards,
Steve
Bad guess in this case. I’ve been a believer in loose coupling, implementation hiding, and doing one thing per function since before the term ‘object oriented’ was invented.
My objections to programming languages that try to enforce good practice date back to Dijkstra’s observation, in the context of Pascal, that you can “write Fortran in any language”.
Readability is a function of programmer discipline, not a feature of a language.
I think readability is a function of language too. Modern C++ is an eminently unreadable language, as is Perl, even with good programmer discipline. Meanwhile, even poorly-written Python is often adequately readable, because there are only so many ways to screw up Python code while having it remain syntactically valid.
a sentence with unusual structure, if I compose such a beast, the language unreadable makes, or my difficult writing it shows?
It’s easy to write poor C++, just as it is easy to write poor but correct English. But it’s not a particularly unreadable language, unless it’s been obfuscated.
About Perl, I’ll agree with you. But that’s because perl’s semantics are so bad that it wouldn’t matter what syntax it had it would be unreadable.
Yes, the ability to construct poor sentences does not imply a poor language, but that does not mean that some languages aren’t less amenable to clear-code than others.
Idiomatic modern C++ is quite hard to read. The choice of template argument delimiters, as well as the choice of scoping delimiter, breaks up the flow of the text to a much greater extent than do the less obtrusive delimiters in Java. The convention of discouraging ‘using’ statements doesn’t help much. Even simple C constructs like loops blow up to absurd sizes in C++ code. Look through code that uses Boost or Loki sometime. These libraries are written by good C++ programmers, even some experts in the field, and their readability is still sub-par.
I have to agree about templates. But then, the existence of templates in C++ is a hack to overcome a deficiency in the original language.
One can make a strong case that poor syntax in a language is usually evidence for poor design in that language, and C++ certainly suffers from plenty of poor design.
I’ll agree with you on that, and further assert that an excess of syntax, as is seen in C++ and Perl, is likely a sign of insufficiently powerful primitives in the core language.
?! Pure personal opinion. I “personally” loved FORTRAN and it was one of my favorite languages to implement IS applications in the late 80’s and early 90’s. It was fast, had a clean layout, simple syntax and on a VAX running VMS could link into and use any graphical library you wanted. You could also implement modern data structures yourself if you needed them.
The biggest problem with FORTRAN is that it creates this single-threaded array-centric mindset in the people who use it. It recently fell on me to maintain some data collection software which was obviously written by a FORTRAN programmer outside our company. It’s so bad. Dynamic memory allocation? Pshaw. Locking? Never heard of it. Win32 message loop? What’s that?
In my (admittedly limited) experience, Java tends to corrupt the mind less than does FORTRAN or C. At least a Java guy will err on the side of too much abstraction, or too much dynamism, or too much locking or thread-safety, because Java kind of programs you to do that. When it comes time to extend the software later, that’s less of a pain to deal with than not enough of these things.
Two things: note that I mentioned WHEN I used FORTRAN (better tools have since arrived on the scene), and note that you do not NEED dynamic memory and multi-threading for MOST APPLICATIONS. If you need any kind of data locking you could either rely on a database or file system, or even use shared, public pages to manage locking.
Please never say that a programming language “creates this” or “creates that”. The people who use it are the ones responsible for HOW it is used. Programming languages do not write programs, people write programs.
The only language I feel has ever successfully used indentation as part of its syntax is Haskell, mostly because the functional style lends itself to small, bite-sized chunks — and I’m not going to argue just how “functional” Python is.
Even in Haskell, though, it’s a “wrapper” for the more common braces/parentheses (C- or Lisp-style) hierarchy and is internally translated into this style before compilation!
Indentation-only makes decent lambdas / anonymous functions nearly impossible in Python, it appears.
Counting nested brackets isn’t exactly fun either.
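A small sketch of that limitation: a Python lambda body must be a single expression, so anything involving statements forces you back to a named, indented def:

```python
# Fine: a lambda whose body is one expression.
double = lambda x: x * 2
assert double(21) == 42

# Not possible: "lambda xs: total = 0 ..." is a SyntaxError,
# because assignment and loops are statements, not expressions.
# The idiomatic workaround is a named function with an indented block:
def running_total(xs):
    total = 0
    for x in xs:
        total += x
    return total

assert running_total([1, 2, 3]) == 6
```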
Great argument. While the block syntax in Python is a hot-button issue, people sometimes forget to consider what it’s replacing. Are curly braces better?
Off the topic of the article a bit, but PythonCard is really nice as well. For all of us aging hippies nostalgic for Hypercard!
http://pythoncard.sourceforge.net/
I consider controlling all of my data bit by bit a beautiful thing in C. That’s me.
I like C, but I also like Python. When I want to code for embedded systems (e.g. AVR or ARM) I use C and bits of assembler. On the other hand when I want to code GUI app I prefer Python + wxPython. I found python easy to learn – it took me 15 minutes.
And I presume you compose your TCP packets by hand, instead of being lazy and letting your TCP/IP stack do it for you?
Yah, it’s taking me about that much time to learn Lua. I am using XML for the graphics. The array work seems real fun like Python.
Usually with UIs you’re working with tons of data objects so it helps to create arrays easily. With C I would just be creating a container. A real tight container.
I get annoyed when people claim that IronPython is Microsoft’s embrace-and-extend tactic against Python, or anything like that. I urge them to explain these mails:
http://mail.python.org/pipermail/python-dev/2006-June/066591.html
http://mail.python.org/pipermail/python-dev/2006-June/066803.html
http://mail.python.org/pipermail/python-dev/2006-June/065991.html
Developers with microsoft.com mail addresses are asking python-dev about the semantics of the Python language. No less. Now say what you want.
The lead on IronPython is Jim Hugunin. He is hardly new to Python, he wrote the original Numeric array package while at MIT and started the Jython project. He also worked at CNRI where he was in contact with Guido van Rossum among others.
If you’re a fan of Python and/or IronPython you should check out boo.
http://boo.codehaus.org/
It is a .NET language that is very similar to Python with a few notable exceptions outlined here:
http://boo.codehaus.org/Gotchas+for+Python+Users
Boo, being a statically typed language, is completely different from Python, which is a dynamically typed language. They share surface similarities, but they are fundamentally different paradigms.
I find mapping existing languages to the .Net framework and the Common Language Specification utterly pointless, simply because you don’t get Python at the end of it but yet another .Net language. All you have are syntactic differences between .Net languages, because what you’re doing is mapping a language to a common language.
The two main languages in the .Net world are C# and VB.Net, and even VB programmers are questioning just what the point of VB is now because there’s just no difference. You might as well just learn C# and be done with it.
I find mapping existing languages to the .Net framework and the Common Language Specification utterly pointless, simply because you don’t get Python at the end of it but yet another .Net language.
And you are completely wrong. IronPython implements the full Python language, not any subset of it. As a result, it is not a “.NET language”, or in more technical terms, “CLS compliant”. That is not its goal.
As a result, it is not a “.NET language”
It is written in C#, therefore it is. Yay! Let’s use one .Net language to write a completely new language using the same .Net framework. Brilliant.
All that happens is that you go round in circles thinking that you’re producing something different when in reality you’re reproducing exactly the same thing.
or in more technical term, “CLS compliant”.
It’s written in C#, so it inherits CLS compliance in some way. I find the notion that somehow it isn’t a .Net language and that it’s somehow different rather silly. If you use the .Net framework and .Net data types then you are bound by their limitations, and that comes out in the language eventually. If you’re not, then your interoperability suffers – which is no bad thing if you want a language that actually seems to do things differently. But since the point of .Net is interoperability……… ;-).
However, if you are merely reimplementing Python in C#, then you have to ask yourself if it wouldn’t be more sensible just to use the regular C implementation of Python. And if you have to fully reimplement a language like Python, then that means your environment is not language neutral ;-).
This was discussed on a Python list some time ago:
http://mail.python.org/pipermail/python-list/2003-December/198588.h…
The goal of ANY .NET languages is not to have a language per se but to allow access to .NET framework.
While most language implementations have macros (or sometimes libraries) which allow compatibility with popular constructs or functions, that’s just a handy tool for legacy code which should be converted.
So the goal of IronPython is not to have a Python implementation per se but to allow people to use Python to code against the .NET framework. This won’t be portable by definition, not as side-effect.
(Please notice that I’m not using Python but I’m using .NET framework)
It’s written in C#, so it inherits CLS compliance in some way. I find the notion that somehow it isn’t a .Net language and that it’s somehow different rather silly.
Neither C#, C++/CLI, VB.NET, nor many other languages that target CLI/CLR are inherently CLS-compliant languages. Each language has features that violate CLS compliance rules, and their respective compilers do not check for CLS compliance by default. Unless you actually take the necessary steps to ensure that your code is CLS compliant, it likely is not.
See the following links and excerpts from MSDN.
Common Language Specification
http://windowssdk.msdn.microsoft.com/en-us/library/12a7a7h3.aspx
“The CLS was designed to be large enough to include the language constructs that are commonly needed by developers, yet small enough that most languages are able to support it. In addition, any language construct that makes it impossible to rapidly verify the type safety of code was excluded from the CLS so that all CLS-compliant languages can produce verifiable code if they choose to do so.”
Writing CLS-Compliant Code
http://windowssdk.msdn.microsoft.com/en-us/library/bhc3fa7f.aspx
Language Interoperability Overview
http://windowssdk.msdn.microsoft.com/en-us/library/a2c7tshk.aspx
“Even though the runtime provides all managed code with support for executing in a multilanguage environment, there is no guarantee that the functionality of the types you create can be fully used by the programming languages that other developers use. This is primarily because each language compiler targeting the runtime uses the type system and metadata to support its own unique set of language features. In cases where you do not know what language the calling code will be written in, you are unlikely to know whether the features your component exposes are accessible to the caller. For example, if your language of choice provides support for unsigned integers, you might design a method with a parameter of type UInt32; but from a language that has no notion of unsigned integers, that method would be unusable.”
I’m afraid that if you think mapping Python or any other language to the CLR is pointless, you’ve missed the point entirely. I would encourage you to consider these points:
1) It is easier than ever to pick the right tool for the job. If I feel that I need the easy expressiveness of Python/Boo for a particular part of my app, I can code that section with one of those languages. If on the other hand I need powerful macro and or functional programming support, Nemerle is a nice choice. For middle of the road stuff, I might do some coding in Java/C#. The *point* is that at the end of the day, it all works together as if it had all been written in just one language.
2) With many different languages come many different ways of representing APIs, which used to make the right-tool-for-the-right-job approach prohibitively costly. With .Net, this is no longer the case. How I interact with OpenGL, collection classes, etc., is the same no matter which .Net language I’m using. Thus the learning curve and the risk are mitigated.
3) The very fact that languages transform into a common denominator means that when NextNewLanguage# comes out to replace the current generation of languages, I’m completely insulated. As long as it devolves down to the CLR, all of my legacy code will work just fine.
4) This is just a higher level implementation of what has been around for a long time, or were you unaware of machine language? What was the point of C, C++, Pascal, etc, etc back in the day when in the end it was all reduced to machine language anyway?
Think about this technology and all of its implications. I am in no way shape or form a Microsoft fan, but I am having the time of my life using Mono because I find that this approach to coding gives me many benefits that in my 20 years of programming have *never* been available. This is not religious, but rather pragmatic. Having been through many ‘this is the end all be all language’ phases, I’m thinking that a framework which elevates the art beyond any particular language not only minimizes my risks, but also has the potential to help me realize better ROI and code reuse.
Best regards,
Steve
It is easier than ever to pick the right tool for the job. If I feel that I need the easy expressiveness of Python/Boo for a particular part of my app, I can code that section in one of those languages. If, on the other hand, I need powerful macro and/or functional programming support, Nemerle is a nice choice.

You’ve utterly missed the point.
The simple question you need to ask yourself is: can you simply and trivially add support for many ‘features’ from one language like Boo or Nemerle to another .Net language like C#? If the answer is yes, a different language is pointless. That’s what is happening with C# as it gets developed further, and you’ll see the alternative .Net languages fall even further by the wayside.
The only differences between .Net languages are syntactic differences simply for the sake of being different.
The *point* is that at the end of the day, it all works together as if it had all been written in just one language.
You’ve just given yourself everything you need to know there. That’s because *it is* one language ;-).
With .Net, this is no longer the case. How I interact with OpenGL, collection classes, etc, etc, etc is the same no matter which .Net language I’m using.
Makes a different language a bit pointless, doesn’t it? That’s because at the core of it it is the same language.
Having a common framework and runtime environment that at the same time allows the use of genuinely different languages is a circle that cannot be squared. Either you enjoy the benefits of having the same framework and runtime environment, or you enjoy the benefits of having a different language created for a different purpose. You can’t have it both ways, I’m afraid, and it’s something many in the .Net world have come to realise.
This is not religious, but rather pragmatic.
Hmmmm. My point is logic. If you can compile many languages to IL then they are logically the same language, separated only by different syntactic sugar that does the same thing.
Having been through many ‘this is the end all be all language’ phases, I’m thinking that a framework which elevates the art beyond any particular language
Any framework which gives you that is giving you one language, as .Net is doing and as you’ve confirmed yourself.
Whenever this is raised, fans of .Net squirm and wriggle like there’s no tomorrow. VB and C# coders out there have accepted there’s no difference between .Net languages, hence all the fuss from VB coders about what the point of VB.Net is. Microsoft has seemingly accepted it, and the promotion of .Net as a language neutral environment seems to have very much fallen by the wayside.
not only minimizes my risks, but also has the potential to help me realize better ROI and code reuse.
*Finger firmly down throat*. That’s a .Net fan comment if ever I heard one.
Let me preface this comment by saying I’m no Microsoft fanboy, but I do think that the IDEA of .NET is probably the best thing Microsoft has ever done, even if the implementation (re: license) is poor. In fact, that’s the only thing that really keeps me off the platform (though I am tracking Mono). It’s exactly what Sun should implement for Java: support multiple languages natively.
Now, I realize that everything compiles down to CIL, but that’s really no different from what GCC does when it uses one of its many front ends to compile down to a ‘middle end’ before compiling down to machine code for the target system.
“My point is logic. If you can compile many languages to IL then they are logically the same language, separated only by different syntactic sugar that does the same thing. ”
Why write code in C or any other language which compiles down to machine code, then? After all, whether you are writing your program in machine code, raw assembly, assembly plus macros, C, C++ or any other language which compiles down to machine code, they are all logically doing the same thing, separated only by different syntactic sugar.
The reason is abstraction. Writing even a mildly complex program in assembly is absolutely mind-blowing (it’s fun, but can be a drawn-out affair). The reason to have multiple languages, and in fact higher-level languages, is to be able to abstract certain elements of the code AND be able to use the best features of different languages in a cohesive way.
This is one thing that CPython really excels at. It is a very good glue language. For instance, it’s much easier to write a GUI app in Python than in C. However, if you have one function that is processor-intensive, you can recode only that module in C and use it from your app.
Now, in this case, everything ends up as CLR bytecode; however, the principle is the same. By allowing different languages, your app can use the features of all of them. Now, I realize that you could code everything in CIL, but that really is the assembly of the CLR.
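The “recode the hot path in C” pattern can be sketched with CPython’s ctypes module, which calls into an already-compiled C library. Here the standard C math library stands in for a hypothetical hand-written C module; this is a minimal sketch assuming a Unix-like system where libm can be located.

```python
import ctypes
import ctypes.util

# Locate and load the C math library. The library name is
# platform-dependent; "libm.so.6" is a common Linux fallback.
libm_path = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libm_path)

# Declare the C signature: double sqrt(double).
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

# The Python side stays the glue; the computation runs as compiled C.
print(libm.sqrt(16.0))  # 4.0
```

The same idea scales up: write the processor-intensive routine in C, compile it to a shared library, and bind it with ctypes (or a C extension module) while the rest of the app stays in Python.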
I’m having a good laugh at someone who’s modding these comments down, presumably because they just don’t like them :-).
Let me preface this comment by saying im no Microsoft fanboy, but I do think that the IDEA of .NET is probably the best thing Microsoft has ever done
Doesn’t really mean anything unfortunately.
Now, I realize that everything compiles down to CIL, but thats really no different then what GCC does when it uses one of its many front ends to compile down to a ‘middle end’ before compiling down to machine for the target system.
Totally and utterly different. The .Net framework, the CLS and the CLR provide an object-oriented framework, and they specify exactly what must be implemented and how, including data types, functions etc. That’s if you want interoperability, which is the whole point of using .Net anyway.
If what you’re saying was true then if you compiled C, C++, Java or Fortran with GCC then they would all use the same data types, functions, framework and classes and would be totally interoperable and interchangeable. They aren’t, for reasons that should be obvious. You’d end up with four languages that all did the same thing, and used the same paradigm, and where you’d have syntax that would be completely equivalent in each, line for line, as in .Net.
Why write code in C or any language which compiles down to machine then?
Because C actually provides an easier, higher-level language that does something different in its own right. There is no underlying framework straitjacketing it in the way that .Net demands for the sake of interoperability, and in turn it doesn’t straitjacket software written with it. There are no standard data types, no standard functions and no standard framework classes to adhere to. C++, Java and Fortran, to take the previous example, all do different things for different purposes, and as such they are not interoperable – not directly, anyway. If they all used the same framework, same data types, same functions and same paradigms, and compiled to exactly the same underlying code, then you’d only pick one language over another because of syntax. That’s not a good reason.
The reason to have multiple languages and in fact, higher level languages is to be able to abstract certain elements of the code
And perfectly sensible too, if you use totally different languages for totally different purposes in the way I’ve described. However, that’s not the case in .Net – unless you implement something totally different in it, but then what’s the point of interoperability and .Net?
AND be able to use the best features of different languages in a cohesive way.
The data types, functions, framework and concepts are so well defined and nailed down that any other higher-level, human-readable language on top of C# becomes pointless. You might as well just implement those new features in C# instead. This is happening with successive versions of C# as it simply acquires many of the features of languages like Boo and Nemerle, and they are implemented in exactly the same way because of the underlying framework. You can reasonably ask if Boo and Nemerle are actually different languages. Once you’ve got up to the level of C#, .Net simply doesn’t allow further abstraction. Within the framework and the concepts of its runtime environment it has been abstracted enough.
However, if you have one function that is processor intensive, you can recode only that module in c and use it in your app.
That’s tough luck, I’m afraid. Once you start to dictate a common base, runtime, framework and set of standards, you end up with each language being so straitjacketed that any differences that made them useful are gone or severely reduced. Interoperability has to be done in other and less exact ways, otherwise you lose all or most of the differences and advantages of the language you’ve chosen.
By allowing different languages, your app can use the features of all languages.
No, you can’t. You can only use languages that have been ported to .Net. If you want to use, in an interoperable way, a language with a different paradigm or that doesn’t implement various concepts in the same way as .Net then you’re out of luck. You can’t put square pegs in round holes, which is what many people claim is possible with .Net.
Now, I realize that you could code everything in CIL but that really is the assembly of CLR.
I think you’re getting your comparisons a bit mixed up. You could quite conceivably replace CIL code with C# since everything interoperability-wise should be CLS compliant, in theory, but that would simply be too restrictive in view of the other, non-compliant, things you may want to do in the runtime.
“Doesn’t really mean anything unfortunately. ”
Wasn’t intended to. Just my personal observation.
” If they all used the same framework, same data types, same functions and same paradigms and compiled to exactly the same underlying code then you’d only pick one language over another because of syntax. That’s not a good reason.”
The problem with that is that not everything is the same. Also, syntax is important, especially if you are porting an existing app.
“That’s tough luck, I’m afraid. Once you start to dictate a common base, runtime, framework and set of standards, you end up with each language being so straitjacketed that any differences that made them useful are gone or severely reduced. Interoperability has to be done in other and less exact ways, otherwise you lose all or most of the differences and advantages of the language you’ve chosen.”
This was an example of what you could do with CPython. On .Net this could translate to a situation where the GUI is already coded in VB.NET but you want to use a different language for the backend.
“No, you can’t. You can only use languages that have been ported to .Net. If you want to use, in an interoperable way, a language with a different paradigm or that doesn’t implement various concepts in the same way as .Net then you’re out of luck. You can’t put square pegs in round holes, which is what many people claim is possible with .Net. ”
Well, the statement was only focused on .Net; however, if you wanted to create a language specifically for a purpose, you might find it useful for that purpose. Remember, CPython (the reference version of Python) is coded in C (which I assume is why C# is used for IronPython).
“I think you’re getting your comparisons a bit mixed up. You could quite conceivably replace CIL code with C# since everything interoperability-wise should be CLS compliant, in theory, but that would simply be too restrictive in view of the other, non-compliant, things you may want to do in the runtime.”
You could always replace assembly with C; however, it would introduce irritations as well.
One example (from Wikipedia):
C#

class HelloWorldApp
{
    static void Main()
    {
        System.Console.WriteLine("Hello, world!");
    }
}

CLI

.method public static void Main() cil managed
{
    .entrypoint
    .maxstack 1
    ldstr "Hello, world!"
    call void [mscorlib]System.Console::WriteLine(string)
    ret
}
As you can see, they are similar in the sense that they do the same thing, but one is very different from the other.
Also, I think IronPython could be very useful in the case where you want to port an existing program to .Net while staying in Python.
-Mike
…didn’t Microsoft decide to create a next generation of the classic VB now, as well?
>..is written in C#, therefore it is. Yay! Let’s use one .Net language to write a completely new language using the same .Net framework. Brilliant.
>All that happens is that you go round in circles thinking that you’re producing something different when in reality you’re reproducing exactly the same thing.
And compiling down to machine code is different?
All our languages, .NET or not, compile down to the same code in the end.
I’m afraid you don’t get it.
And compiling down to machine code is different?
Yes. Common data types, a framework, language standards and a paradigm (OO) are not specified.
All our languages, .NET or not, compile down to the same code in the end.
No, they don’t. The way that the machine code is arrived at is vastly different, reflecting the vastly different tools, languages and concepts used for different purposes.
Trying to argue that .Net is just like compiling to machine code is not a great argument.
Here’s the thing… you can make a language that uses data types which are not CLS-compliant. In fact, most IronPython classes would not be CLS-compliant, so you can’t write a class in IronPython and call it from C#. IronPython classes do not have to follow the class structure of .NET, and they are in fact fully compatible with Python classes.
There’s also an implementation or two of Scheme for .NET, and I don’t see any way you could shoehorn that into CLS compliance. C# also has features that are not CLS-compliant. All of these languages have features that make them cool and change their level of abstraction from the baseline CLS standard in .NET. All this stuff is implemented at the base by .NET data types and compiled by the .NET JIT, but how do you know, from the IronPython language, that classes are really just dictionaries of function objects, which are pointers to DynamicMethods? You don’t! The language hides all of this from you and transforms the .NET type system to build its own type system, much like C transforms machine primitives into a type system. You thus have a Python language that can access .NET and perform on par with the CPython implementation.
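The “classes are really just dictionaries of function objects” idea can be illustrated in plain CPython, where the three-argument form of the built-in type() assembles a class directly from a dictionary of functions. The names here are purely illustrative, and this sketches the general principle rather than IronPython’s actual internals.

```python
def __init__(self, name):
    self.name = name

def greet(self):
    return "hello, " + self.name

# type(name, bases, namespace): the "class" is built from a plain
# dictionary mapping attribute names to function objects.
Person = type("Person", (object,), {"__init__": __init__, "greet": greet})

p = Person("world")
print(p.greet())                          # hello, world
print(Person.__dict__["greet"] is greet)  # True
```

Under the hood the class namespace really is a mapping from names to function objects; method calls are just dictionary lookups plus binding, which is exactly the kind of machinery a dynamic-language implementation can layer over a static type system.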
I’ve just recently started using this language at my job (can’t be specific), but I’ve had a somewhat interesting history with Python…
I used to work for a company that held meetings for Python back in ’99/2000 and had heard Guido’s name mentioned often. There were some contractual issues going on between our company and the Python group at the time that I didn’t know the details of, but I can only assume Guido was getting the bad end of the deal. I had seen very little actual Python code at the time, and the whole >>>/… prompt deeply confused me. This was fairly early in Python’s history.
Now I’ve recently used Eclipse with the pydev plugin, and I have to say that Python (not IronPython) has finally come up with a contender to Java and .NET. I’ve only lightly worked with IronPython because, although I do like VB.NET, I believe Python to be another framework altogether, and it should stay that way. I’ve been trying to get Mono working on a Mac for almost six months, but after I got Python with wxWidgets working, I lost my attention to Mono-on-OSX. Python’s portable, fast, easy to debug, and really reads like an advanced form of objective Basic.
I’ve grown to really like Python, although I did hate it at first. Significant whitespace was quite a hurdle to begin with, but just setting tab options fixes the problem quickly in a good editor. I didn’t understand the “default loading of modules declared after values” for some time; those that know what I’m talking about will understand. But now that I’ve got a decent editor and the wxPython GUI dingies installed, I don’t know that I’d like IronPython, since it wouldn’t work on platforms other than Windows. I do enjoy the .NET framework, but I only use it for C# and VB, so invading my perception of Python would be detrimental.
./yay.py
Check out IPython. It’s a pretty cool extension of the interactive Python interpreter. It’s perhaps most notable for its pysh profile, which is an attempt at seamlessly integrating your native shell commands with interactive Python.
I find that it’s a pretty neat tool… I don’t use it as my default shell, but I reach for it when I’m about to do some complicated system administration stuff. IPython makes it so easy to translate your interactive hacking into a persistent Python script for later use that I have quickly amassed a collection of little helper scripts.
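As a sketch of the kind of helper script described above (the task and all names are made up for illustration), a bit of interactive poking around for disk hogs might get saved off as something like:

```python
#!/usr/bin/env python
# Hypothetical helper script grown out of an interactive session:
# list the largest regular files under a directory.
import os

def largest_files(path=".", count=3):
    """Return (size, path) pairs for the biggest files under path."""
    found = []
    for root, _dirs, names in os.walk(path):
        for name in names:
            full = os.path.join(root, name)
            if os.path.isfile(full):
                found.append((os.path.getsize(full), full))
    return sorted(found, reverse=True)[:count]

if __name__ == "__main__":
    for size, full in largest_files():
        print(size, full)
```

The appeal of the IPython workflow is exactly this: the loop starts life as throwaway lines at the prompt, and once it works, it gets pasted into a file with a function wrapper and kept.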
Here is a cool video interview with Jim Hugunin, creator of IronPython. (The interview is available in .wmv and .mp4 formats.)
http://port25.technet.com/archive/2006/06/01/2565.aspx
Jim Hugunin is a Python guru, so lay to rest any notion that he doesn’t know what he’s talking about.
Interestingly, IronPython started as a project intended to demonstrate that .NET sucked at supporting dynamic languages, but wound up demonstrating the opposite.