C# is a flexible programming language with a rich set of data types. In this sample chapter, you’ll learn the basics of this .NET-centric language (accompanied by instruction in an open source .NET competitor, Mono), with comparisons to other programming languages you may already be familiar with.
I doubt that Mono is anything close to what could be called a competitor. Mono will always be a few steps behind Microsoft. I hope that whatever Microsoft implements next in .NET is protected behind patents, which would hopefully prevent this bad seed called Mono.
Troller..
I’m not so sure:
http://swpat.ffii.org/players/microsoft/
Heise reported on Steve Ballmer’s talk at CeBIT. At a speech event together with Chancellor Schröder, Ballmer said that Microsoft owns lots of patents covering its new .NET standard and that it aims to use them to prevent open source implementations of .NET. The key phrases read, in translation:
Responding to questions about the opening-up of the .NET framework, Ballmer announced that there would certainly be a “Common Language Runtime Implementation” for Unix, but then explained that this development would be limited to a subset, which was “intended only for academic use”. Ballmer rejected speculation about support for free .NET implementations such as Mono: “We have invested so many millions in .NET, we have so many patents on .NET, which we want to cultivate.”
(Original document: http://www.heise.de/newsticker/data/jk-12.03.02-000/ )
Mono is a ways off from being a true “competitor” in the sense of being usable as anything more than a hobby compiler. If Mono can’t properly implement something as standard as delegates (I submitted a Bugzilla report for this and it is fixed in CVS, but I haven’t downloaded it yet), then some of the more advanced features are sure to be a little fussy.
As for C#, I like it and I don’t like it. I have been using Java as my enterprise environment since college, and I have issues with Java as well, but I still think Java is a slightly better implementation of an enterprise-level system. I just recently started to code some enterprise apps in .NET/C# for one client. One of the main things I dislike about C# is the “automatic” nature of the development environment. Visual Studio .NET will “just do” some things when you create an ASP.NET page or C# file. I can’t stand that. It reminds me of IBM’s VisualAge, which had code you couldn’t edit. I quickly dumped that.
Among the things I like about C# are structs and the delegate model. Those are very cool and handy features, and I hope Java adopts something close to them in the future. And the whole idea of boxing and unboxing primitives is really, really cool.
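For anyone who hasn’t run into them, here is a minimal sketch of what the delegate model looks like in C# (the Notify delegate type and the method names are invented purely for illustration):

using System;
// A delegate type declares a method signature; any matching method can be assigned to it.
delegate void Notify(string message);
class DelegateDemo
{
    static void PrintToConsole(string message) { Console.WriteLine(message); }
    static void Main()
    {
        // Bind the delegate to a method, then invoke it like a function.
        Notify notify = new Notify(PrintToConsole);
        notify("Hello from a delegate");
    }
}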
Even so, I think that Java is still the enterprise system. .NET and C# are going to be playing catch-up (even if they have features that Java/J2EE doesn’t have).
I don’t get this reference-type versus value-type nonsense. Normally (always, in Java) a variable is a reference to an object. A reference is basically a pointer that you can’t set to dangerous values by mistake, yes? Why not just let all types be value-types and let the programmer take references (pointers) to them? (Like you do in C/C++, except safe.) The “struct” and “boxing” concepts seem rather unnecessary to me.
you hit the nail on the head there.
Unless MS makes major changes to its implementation of C#, Mono will be in line with anyone who builds a C# system to the ISO standard.
If Java sets the reference up automatically at creation of an object, then what is the difference? You can use a reference like a normal value object; no pointer operators needed.
It seems to me like Mono’s primary use will be the introduction of managed code into GNOME, at which point it will probably be used in conjunction with GTK instead of Windows Forms.
I don’t think we’ll ever see 100% cross-compatibility between Microsoft’s .NET implementation and Mono, but ultimately I don’t think that matters, at least within Ximian’s intended scope of use.
“If Java sets the reference up automatically at creation of an object, then what is the difference? You can use a reference like a normal value object; no pointer operators needed.”
Yes, it makes sense for Java, where everything is a reference. But in C# you also have objects that are not references, like in C++, so why not do it like C++? The “->” operator is not necessary; you could use “.” in both cases.
Quoth the original:
“Normally (always, in Java) a variable is a reference to an object.”
That’s not true. Certain Java types are not references. The built-in types, such as “int”, “long”, and “boolean”, are NOT references. They are value types.
“Value type” == “inline storage” (meaning they’re stack-allocated objects, unless they’re part of a larger object, in which case they’re stored “inline” within it).
.NET allows you to define your own value types, instead of limiting you to only the built-in types. There are some restrictions, though: value types (“struct” types) are implicitly sealed, and thus can’t be used as a base class.
Why have value types? They’re important for some performance scenarios, useful for P/Invoke and unmanaged interop, and can help simplify memory management.
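To make that concrete, here is a minimal sketch of a user-defined value type (the Point type and its fields are invented for illustration):

using System;
// A user-defined value type: implicitly sealed, stored inline, no per-instance heap allocation.
struct Point
{
    public int X;
    public int Y;
    public Point(int x, int y) { X = x; Y = y; }
}
class StructDemo
{
    static void Main()
    {
        // An array of 100 Points is a single allocation; the Points live inline in the array.
        Point[] points = new Point[100];
        points[0] = new Point(3, 4);
        Console.WriteLine(points[0].X);
    }
}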
As for the second question:
“Why not just let all types be value-types and let the programmer take references (pointers) to them?”
That cannot work. Value types are stored inline, which requires that you know how large the type is. This means that you can’t store a (larger) derived type variable into a base type variable. (C++ lets you do this, but “slices” the derived information off, resulting in just the base class information).
If only value types were used, you couldn’t have Object Oriented programming.
(Not that that’s necessarily a bad thing; lots of code is written in a non-OO style. But very few people would live with that limitation today.)
So, why have boxing? To convert a value type into a reference type. This allows you to store value types (such as “int”) into a collection of Objects without needing a wrapper class (such as Java’s java.lang.Integer). This allows greater consistency.
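For example, roughly (using the non-generic ArrayList; the variable names are made up):

using System;
using System.Collections;
class BoxingDemo
{
    static void Main()
    {
        ArrayList list = new ArrayList();
        int n = 42;
        list.Add(n);            // n is boxed into an object; no wrapper class needed
        int m = (int) list[0];  // unboxing converts the object reference back to a value type
        Console.WriteLine(m);
    }
}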
aren’t the primitive types in Java objects?
“Why not just let all types be value-types and let the programmer take references (pointers) to them?”
That cannot work. Value types are stored inline, which requires that you know how large the type is. This means that you can’t store a (larger) derived type variable into a base type variable. (C++ lets you do this, but “slices” the derived information off, resulting in just the base class information).
So don’t do that then, use a reference/pointer variable. Is that a big problem?
What bugs me is that I have to decide from the start whether I want my class to be a reference type or a value type. That does not belong in the class definition. I may want to use my class both ways!
“aren’t the primitive types in Java objects?”
In a loose sense. But not in the sense that primitives are objects in a 100% object oriented language like Smalltalk.
In Java, primitives are not true objects because they cannot have methods. So primitives are not derived from classes. But each primitive has an associated class that contains methods for working on that primitive.
For example, the type int has an associated class, Integer. And class Integer contains methods such as Integer.parseInt() for converting a numeric string into an int. (There are also Double.parseDouble() and Float.parseFloat().)
You also cannot instantiate a primitive with new, which in my opinion makes them not true objects.
Example:
int MyNum = new int();
will cause a compile time error.
Arrays of int, however, are real objects, so the following is valid:
int[] myArray = new int[10];
This statement instantiates an integer array object.
“If Java sets the reference up automatically at creation of an object, then what is the difference? You can use a reference like a normal value object; no pointer operators needed.”
There isn’t really any difference. You just take a performance hit, because Java has to keep track of all of the references that are in use, and the JVM has to take care of dereferencing pointers at runtime.
The benefit of course, is that you save invaluable programming time, and even more invaluable debugging time. Pointers are probably the biggest source of headaches in C and C++ programming.
Alas, you can’t have it both ways (specify at declaration whether you have a reference- or a value-type).
Why? Because, in .NET, the semantics between those two are different. Remember, .NET value-types are stored inline. This allows for a crucial performance optimization: the elimination of memory allocations.
Consider an array of 100 ints (a value-type):
int[] foo = new int[100]; // note: 1 memory allocation
Versus an array of 100 objects:
// 1 allocation
object[] bar = new object[100];
// N allocations
for (int i = 0; i != bar.Length; ++i) bar[i] = new object();
Notice that we need N+1 memory allocations for an array of length N reference types, but we only need 1 memory allocation for value types.
Why is this so?
Let’s move to C++. What’s wrong with this:
class Foo { public: virtual void f(); };
class Bar : public Foo { int n; public: virtual void f(); };
void InvokeArray(Foo* pFoo, int len)
{
    for (int i = 0; i != len; ++i) pFoo[i].f();
}
void DoNotDoThis() { InvokeArray(new Bar[100], 100); }
The problem is that sizeof(Bar) != sizeof(Foo). The C++ compiler assumes that all elements in `pFoo’ (in InvokeArray) are of type `Foo’, and thus increments the pointer by sizeof(Foo). When you pass a `Bar’ array, InvokeArray() will index into the middle of a `Bar’ object, resulting in memory corruption.
See FAQ 21.4 at http://www.parashift.com/c++-faq-lite/proper-inheritance.html#faq-2…
Returning to .NET, the designers wanted to simplify the programming environment. “Gotchas” like the one above were something to remove from the languages and runtime.
The chosen fix was to ensure that an array of a reference type would be able to store sub-types (derived classes), thus requiring that all arrays of reference types suffer from the N+1 allocation strategy pointed out above. This is safe in the face of arrays-of-derived classes, but is not ideal for performance.
So, the solution: have value types, which are implicitly sealed, and which can be stored inline. This allows user-defined types that behave like the built-in types (1 allocation for arrays of N elements), while avoiding the array-of-base-type issue that C++ has.
The downside is that you must, up front, declare whether you’re dealing with a reference type (class) or a value type (struct).
Syntax that would allow you to specify which behavior to have at the point of declaration (heap-allocated or stack-allocated, like C++ allows) would not solve the above array issue. And leaving a hole like that array issue is not a tenable solution.
– Jon
C#/.NET allows you to instantiate primitive types. This is legal C#:
int MyNum = new int();
Additionally, C# allows value-types to have methods, so this is also legal:
int y = int.Parse("42");
Boxing allows implicit conversion to objects:
object o = y;
And unboxing allows you to convert the object reference back into a value type:
int z = (int) o;
Thus is the joy of a unified type system. 🙂
“C#/.NET allows you to instantiate primitive types. This is legal C#:
int MyNum = new int();
Additionally, C# allows value-types to have methods, so this is also legal:
int y = int.Parse("42");”
Hmm… Interesting. So if you do that, is it still a primitive? Or is it actually an integer object derived from a class?
I haven’t really worked with C# at all, so this is all kind of interesting.
“int z = (int) o;”
This last one is basically casting an object to an integer type, right?
Does the compiler enforce any kind of checking on this? To make sure you don’t accidentally cast an object that can’t possibly be cast to an integer?
“Does the compiler enforce any kind of checking on this? To make sure you don’t accidentally cast an object that can’t possibly be cast to an integer?”
I expect it would throw an exception.
C# also has an “as” operator that is like dynamic_cast<> in C++. (But it won’t work for primitive types.)
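Roughly, the difference looks like this (a small sketch; the variables are invented for illustration):

using System;
class AsDemo
{
    static void Main()
    {
        object o = "hello";
        // "as" yields null instead of throwing when the conversion fails,
        // much like dynamic_cast<> on pointers in C++.
        string s = o as string;        // s == "hello"
        Exception e = o as Exception;  // e == null; no exception thrown
        Console.WriteLine(s);
        Console.WriteLine(e == null);
    }
}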
Value types are…weird.
They have two philosophical representations: when stored inline (stack allocated, etc.), they aren’t “real” objects. A “real” object would have an object header (containing a vptr, locking information, etc.); value-types lack this. Thus, a 32-bit int is actually 32 bits in size. Yay. Philosophically, this representation has no base class (as a base class would imply extra per-object size, substitutability {ISA relationships}, etc.). Methods can be invoked on object instances, though, just as you’d expect. But you won’t use the virtual function call mechanism; indeed, it wouldn’t make sense, as (again) value-types are sealed, so the virtual function mechanism isn’t needed (all calls are direct).
The second representation is the boxed representation, which is a “real” object that lives on the GC heap, has a vptr, etc. This representation, philosophically, has a base class of System.ValueType, and boxed value-types are reference types, and thus participate in virtual function calls and all the other wonders of OOP.
Box and unbox operations transition between these two representations.
To drive the difference home, .NET has covariant arrays. A reference-type array, such as string[], can be implicitly converted to an object[] array, without problem. But a value-type array, such as int[], cannot be converted to any other array type. Array covariance applies only to reference types. (To see why, see an earlier post, and the dangers this would allow, as seen in C++.)
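A small sketch of those covariance rules (the types are chosen only for illustration):

using System;
class CovarianceDemo
{
    static void Main()
    {
        // Reference-type arrays are covariant: string[] converts implicitly to object[].
        string[] strings = new string[10];
        object[] objects = strings;       // allowed
        // The cost: stores into the covariant view are checked at runtime.
        // objects[0] = new object();     // would throw ArrayTypeMismatchException
        // Value-type arrays are not covariant: the next line would not compile.
        // object[] boxes = new int[10];
        Console.WriteLine(objects.Length);
    }
}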
So, is an “int” a primitive type? Sort of. “Primitive types” are not very special; they’re value types, and get the same treatment as any other value type. They are special in the sense that they have special tokens in the underlying intermediate language (CIL has int32, char, etc.), but that’s all. As far as the programmer is concerned, they are no different than any other value-type.
Seen in a different way, value-types are a performance hack, glued onto the side of the runtime. They behave strangely (pass-by-value semantics, inline storage, lack of array covariance, etc.) when compared with everything else, but they have their purposes. They integrate with everything else through the box/unbox transitions that the runtime automatically provides.
As for type checking, the compiler will enforce some type checking (when it can), and the runtime will enforce the rest, throwing an InvalidCastException if you try to unbox a type incorrectly:
double d = 42.0;
object o = d;
int n = (int) o; // throws InvalidCastException at runtime: o holds a boxed double, not an int
– Jon
mmm, what is managed code?
Managed code is code that operates within the confines of the Common Language Infrastructure (the CLI, an ECMA standard), as implemented by the Common Language Runtime (CLR), i.e. the .NET runtime.
That is, managed code is code written in an intermediate language (Common Intermediate Language, CIL, or MSIL), which is JITed at runtime, and operates within a garbage-collected environment.
Managed code has other benefits and drawbacks.
Basically, the term “managed code” is used instead of (e.g.) “Java bytecode” because the CLI is intended to target multiple languages, and the emphasis is on the environment itself rather than any particular language interface to that environment. It also allows us to have the term “unmanaged code”, which is anything that doesn’t run within the CLI, which includes everything else (COM, native C libraries, alternative runtime environments such as Python, etc.).
Why use managed code? Because it makes you more productive (compared to C and C++). At least in theory. It is also easier for multiple languages to consume, as long as all the other languages you care about are also managed languages.
Plus, in Longhorn all APIs will be managed, so if you’re staying on Windows you won’t be able to avoid managed code. This doesn’t mean that all Windows code must be managed, as the CLI has decent interop facilities so you can continue to use (and create) unmanaged code. Managed code will be preferred, however.
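As a rough sketch of what that interop looks like, assuming Windows and the Win32 GetTickCount function exported by kernel32.dll:

using System;
using System.Runtime.InteropServices;
class InteropDemo
{
    // P/Invoke declaration: calls an unmanaged Win32 API from managed code.
    [DllImport("kernel32.dll")]
    static extern uint GetTickCount();
    static void Main()
    {
        Console.WriteLine("Milliseconds since boot: " + GetTickCount());
    }
}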
I like C# quite a bit, especially the set/get property accessors, the event mechanism, and the XML doc comments (ok, so I’m a technical writer most of the time, give me a break :-)).
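For anyone who hasn’t seen those features, a minimal sketch (the Counter class is invented; it just shows the syntax of a property, an event, and XML doc comments):

using System;
/// <summary>A counter with a property, an event, and XML doc comments.</summary>
class Counter
{
    int count;
    /// <summary>Raised every time the count changes.</summary>
    public event EventHandler Changed;
    /// <summary>Gets or sets the current count.</summary>
    public int Count
    {
        get { return count; }
        set
        {
            count = value;
            if (Changed != null)
                Changed(this, EventArgs.Empty); // notify subscribers
        }
    }
}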
I also like the .NET Framework quite a bit; all of the things I’ve wanted have been there, and they’re extremely well designed. Using it for XML processing with XPath access on a DOM was extremely natural and easy.
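Something along these lines, for instance (a sketch; the XML and element names are made up):

using System;
using System.Xml;
class XPathDemo
{
    static void Main()
    {
        // Load a small document into the DOM and query it with XPath.
        XmlDocument doc = new XmlDocument();
        doc.LoadXml("<books><book title='A'/><book title='B'/></books>");
        foreach (XmlNode book in doc.SelectNodes("/books/book"))
            Console.WriteLine(book.Attributes["title"].Value);
    }
}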
Visual Studio .NET 2003 is a great platform for developing with C# and the .NET Framework, too; I was quite pleased with it.
That said, I want “complete” C#/.NET implementation(s) for Mac OS X and other Unixes. That would completely remove the need for me to ever touch C++ and Java again.
And Python for .NET was just released, so I’d be very content…
– chrish
Is C# really that different from Java? The differences between them could be synchronized in one version update of Java. I don’t understand why Microsoft has to develop an entirely new VM and language just to push Java out of the picture. Why could they not just use Java? Instead they have to branch an entire area of programming for no reason other than to subvert the technology of another company, because they hold the entire market in checkmate. Look at the CLR: how different is it conceptually from the JVM? They could just as easily have implemented a JVM instead of the CLR.
“Is C# really that different from Java? The differences between them could be synchronized in one version update of Java. I don’t understand why Microsoft has to develop an entirely new VM and language just to push Java out of the picture.”
Because Microsoft doesn’t just want a piece of the pie. They want the whole pie. They want to be able to determine what flavor the pie is, and what it is made out of.
.NET is a key part of their strategy of changing to a subscription-based software delivery model.
I haven’t had the chance to play with .NET programming, but I have evaluated some .NET applications, and I think I will be sticking with J2EE, at least for now. It has been my experience that the JVM outperforms .NET.
But that also brings up another point: .NET might actually help legitimize Java as a platform for programming GUI-based desktop applications. Why? Because .NET has the same performance issues on the desktop that Java has. If those performance issues become “normal”, Java won’t look quite so bad on the desktop anymore.
Certainly Java has made great strides in certain areas. In fact, you often hear that Java is as fast as native code these days thanks to HotSpot technology. That’s true for underlying things such as math operations, and because of that, I enjoy using Java for web site backend programming and such. But when it comes to desktop applications that require a GUI, I still invariably reach for my C++ compiler, because Swing (Java’s GUI API) is still painfully slow. That’s why, despite Java being the world’s most in-demand programming language today, the average end user wouldn’t know it: there are very few end-user applications written in Java.
But .NET has the same problem on desktop applications.
Thanks for the overview. So is Parrot managed code, or, more specifically, is the bytecode produced by compiling for it managed code?
To me the CLI resembles what an ISA is/was for processors. You can have parallel development on both platforms, and you don’t have to follow one platform’s or another’s idiosyncrasies. A bit of a HAL, if you want. Or you can invent your own VM with however many registers you like and then try to emulate them on the real iron. I think this is a nice idea. Processor speeds are getting good enough for this.
I wonder, though, why Java didn’t get this far as a universal platform before .NET.
So why not work a bit on Java for the desktop? Improvements, I mean. What if Java were to follow Flash? I would say: just create code that corresponds to native code through JNI, implement it on as many platforms as the JVM is implemented on, unify the API, and the desktop will take off.
What’s the problem I wonder?