The single most anticipated (and dreaded?) feature of Visual C# 2.0 is the addition of Generics. This article will show you what problems generics solve, how to use them to improve your code, and why you need not fear them.
I do fear Generics. Resurrected C++ templates. No, the fact that C# generics are instantiated at runtime, reducing code bloat, doesn’t comfort me at all. Templates are evil. Generics are evil. In general, all those type-checking gimmicks are evil.
Enter Python now.
Templates are incredibly useful for library authors. In my C++ experience, I’ve personally only written template code once and in a very limited scenario (although I’ve used a lot of code that has used templates).
As for code bloat, it’s really just a tradeoff. In C++ everything is expanded at compile time and will therefore run faster at execution time (no need to expand it). I don’t really know which is better and I would imagine it would depend on your situation.
As for me, I found templates a very useful part of C++. And btw, STL rules.
I’ll second that
those introduced by strong static manifest typing *g*
C# is heading down the convoluted, bloated, inscrutable road of C++, trying to be all things to all people
I started reading the article really interested; only at the end did I realize that generic == template. What a shame.
Have you never heard of template metaprogramming?
Today it is nonsense to speak of code bloat.
BTW: STL and BOOST are based on heavy use of template.
Regards
Gaetano Mendola
> those introduced by strong static manifest typing *g*
Actually, it’s the exact opposite: generics solve problems deriving from the use of non-static typing; that is, they let you avoid non-static typing altogether. Languages without generics will always induce slower code than languages with generics, for the problem cases that generics are aimed at solving.
> Generics solve problems deriving from the use of dynamic typing. Languages without generics will always induce slower code than languages with generics, for the problem cases that generics are aimed at solving.
Say that to Common Lisp compiler developers. Common Lisp is as fast as C.
> I do fear Generics. Resurrected C++ templates. No, the
> fact that C# generics are instantiated at runtime, reducing
> code bloat, doesn’t comfort me at all.
There’s a much more important difference between them: templates are glorified macros, whereas generic classes are in fact just classes with parameters.
What does that mean? When a C++ compiler encounters a template all it can do is check its syntax. It can’t typecheck it, it can’t produce any code, it can’t even check that the functions you use do actually exist.
All that has to be done (over and over again) when
you instantiate the template. Only then will any errors in the template show up.
Also, the template writer might expect certain operations to be defined on the template arguments, but there’s no way for him to express that explicitly. Someone instantiating the template with the wrong arguments will get rather unhelpful type error messages possibly deep inside the template code.
Generic classes, on the other hand, are fully type-checked as soon as the compiler encounters them, and the so-called “derivation constraints” allow generic class writers to express constraints on generic parameters.
As a consequence, users of generic classes don’t need to worry about type errors within the generic class, and if the generic arguments they provide don’t fit the constraints, they get a clear error message.
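To make that concrete, here is a minimal C# sketch (the class is made up for illustration, assuming the announced 2.0 syntax). The body of the generic class is checked once, at its definition, and the constraint is exactly what makes the CompareTo call legal:

using System;
using System.Collections.Generic;

// Checked once, here, at the definition site.
class SortedBag<T> where T : IComparable<T>
{
    private List<T> items = new List<T>();

    public void Add(T item)
    {
        // Legal only because the constraint guarantees CompareTo(T) exists;
        // drop the "where" clause and this is a compile error in the class itself.
        int i = 0;
        while (i < items.Count && items[i].CompareTo(item) < 0)
            i++;
        items.Insert(i, item);
    }
}

class Demo
{
    static void Main()
    {
        SortedBag<int> bag = new SortedBag<int>();  // fine: int is IComparable<int>
        bag.Add(3);
        bag.Add(1);
        // new SortedBag<object[]>() would be rejected here with a clear message
        // that object[] does not satisfy the IComparable<object[]> constraint.
    }
}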
> In general, all those type-checking gimmicks are evil.
> Enter Python now.
Python has typechecking too, only it does it at run-time rather than compile-time. This way it gains in convenience and turn-around times, but loses in performance and robustness. Bugs that a static type checker would have found, have to be found through testing instead.
Even the people from Smalltalk and Self will have something to say. And given that there are now JITs for them too, even the guys from Python and Parrot.
To Fabio:
‘they let you avoid non static typing’
Is this the problem?
They actually let you avoid casts. Casts are a problem implicit with strong static manifest typing.
So generics in C# solve problems generated from C#’s typing.
Moreover, did you notice I said ‘strong static manifest’? People from OCaml, Haskell and so on have had generic functors for years, tightly integrated with the type inference system. And, obviously, OCaml is *fast*.
You may find interesting this paper: http://www.osl.iu.edu/publications/pubs/2003/comparing_generic_prog…
C# generics are much cleaner and also more limited than C++ templates. And because of the way they are translated into MSIL, they will not lead to significant code bloat. Because they can use value types without boxing, they will lead to much faster code. Just compare Dictionary<int,int> to Hashtable.
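Roughly, that comparison comes down to this (a small sketch assuming the announced 2.0 collections):

using System;
using System.Collections;
using System.Collections.Generic;

class BoxingDemo
{
    static void Main()
    {
        // Hashtable stores object, so the int key and value are boxed going in,
        // and coming out needs an unboxing cast that is only checked at run time.
        Hashtable ht = new Hashtable();
        ht[1] = 100;
        int fromHashtable = (int)ht[1];

        // Dictionary<int,int> stores plain ints: no boxing, no casts.
        Dictionary<int, int> dict = new Dictionary<int, int>();
        dict[1] = 100;
        int fromDictionary = dict[1];

        Console.WriteLine(fromHashtable + " " + fromDictionary);
    }
}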
Another thing: this is not just about C#. The MSIL features that make generics possible will be very useful for other languages that have similar features. For example functional languages like F#, SML.NET etc. will probably use them.
All those people who complain about C# should keep in mind that Microsoft Research is full of functional language enthusiasts. So almost everything they do to MSIL will also benefit functional languages.
.NET is the best chance to get functional languages into the mainstream.
> Say that to Common Lisp compiler developers. Common Lisp
> is as fast as C.
Obviously not for the problem cases at hand. It’s simply impossible.
> Common Lisp is as fast as C.
Any benchmarks or other supporting arguments?
Even if a dynamically-typed language is compiled, there still remains the overhead of dynamic type checks.
Being able to pass any kind of data to any function also requires a boxed data representation, introducing more overhead.
Clever program analysis might be able to remove some type checks and perform some unboxing, but it can’t remove them all except in toy examples.
> To Fabio:
> ‘they let you avoid non static typing’
> Is this the problem?
Sure it is. Static typing has many advantages over dynamic typing, some of which are illustrated in the article we’re talking about.
> They actually let you avoid casts.
A nice consequence, yes.
> Casts are a problem implicit with strong static manifest
> typing.
Why so? Nothing forbids you from designing a language with manifest typing that doesn’t need explicit casts. In fact, C++ doesn’t require explicit casts when upcasting, but it needs them when downcasting, for an obvious reason: downcasting is dangerous, and thus the compiler wants you to be sure of what you’re doing. But, in essence, nothing stops you from implementing implicit downcasting either.
In any case, be they explicit or implicit, casts will always be there in dynamically typed languages, or in statically typed languages when there’s no way to apply static typing, because they’re the only way to transform one type into another at runtime. And casts involve a speed penalty. And a speed penalty is a bad thing.
> So generics in C# solve problems generated from C#’s
> typing.
As said: no.
> More, did you noticed I said: ‘strong static manifest’ ?
> people from OCaml, haskell and so on had generic
> functors for years, that are tightly integrated with the
> type inference system. And, obviously, ocaml is *fast*.
You are contradicting yourself now: first you said that generics are there just to avoid casting, which you said is a problem of languages with manifest typing; now you say that generics are also used in languages without manifest typing, basically disproving your point and proving my point that generics have more to do with the general advantage of static typing over dynamic typing than with casts.
Thanks 🙂
Andy,
You’d better google it yourself; there are *tons* of results about this.
A sample link:
http://rover.cs.northwestern.edu/~surana/blog/past/000150.html
Sure it is. Static typing has many advantages over dynamic typing, some of which are illustrated in the article we’re talking about.
Tell us which, I don’t see them, given that:
– you won’t be faster.
– you won’t catch more errors[1].
– you have to write more code.
casts, in dynamically typed languages [..] will always be there, because they’re the only way to transform one type into another, at runtime.
really? can you point me to casts in some (python|lisp|ruby|smalltalk) code?
You are contraddicting yourself now
No I’m not. I said ‘generics solve problems introduced by strong static manifest typing’. And I showed you that, given a better type system (the Haskell type system is Turing complete in itself), you get generic type safety for free.
And, again, There is no need for it.
[1]
No, you won’t. Because if you have int fun(int,int) you still can’t assert the correctness of the function without tests. And obviously once you have tests you don’t need to assert the correctness of the typing.
Ada has the strongest compile-time checks that I can think of, yet an Ariane went BOOM because they made a floating-point casting error. That was funny.
> you better google by yourself
That’s not how it works. You make the assertion, you provide the references, or at least a convincing argument, or at the very least a well-reasoned rebuttal of my arguments.
As for the link you did provide: the LISP code in that ‘almabench’ benchmark in fact has explicit type declarations and inline annotations, e.g.:
C version:
void planetpv (double epoch[2], int np, double pv[2][3])
LISP version:
(declaim (inline planetpv))
(defun planetpv (epoch np pv)
(declare (type (simple-array double-float (2)) epoch)
(type fixnum np)
(type (simple-array double-float (2 3)) pv))
Of course the static type information allows the LISP compiler to emit code as efficient as C, but wasn’t this discussion about dynamic types and not having to write down the types?
wow, this looks a lot like Java Generics. Does Microsoft have no shame?
With that said, this is an extremely welcome change for type-safe languages. You have to have a type-safe way to deal with datatypes that may have different inheritance trees but are related logically. There are patterns that were created to deal with this, but they have gotten so complex (just look at J2EE! Generics are changing v1.5 in unexpected ways).
<shameless_ruby_plug>
Of course if you want the future right now, and not wait for C# v3.0 or JDK v2, use Ruby!
</shameless_ruby_plug>
There is no code bloat with C++ templates over coding an equal number of functions / classes / whatever by hand. There could be some situations where the compiler is not able to optimize the code as well as one could by hand. Then you will explicitly specialize your templates in C++.
The “code sharing” the author talks about is an entirely different thing. Nothing stops you from programming generic code in C++ with other means than templates. You will get the same “code sharing”. Of course you can add a small wrapper about that, using templates.
> wow, this looks a lot like Java Generics. Does Microsoft
> have no shame?
Probably not, but generics weren’t exactly Sun’s idea either. Generics were invented decades ago, and thankfully now seem to finally be making it into mainstream languages after a lot of prodding from functional-language researchers.
That said, there are some small differences between C# and Java generics.
Java doesn’t allow base types (e.g. int, float) as type arguments; the corresponding wrapper classes have to be used instead (Integer, Float).
Also, the Java Virtual Machine will not be modified to include generics whereas the CLR will. Generic Java classes are therefore converted into standard ones, which leads to some unnecessary casts. Also, type arguments do not show up in Java Reflection.
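For example, a small sketch of the reflection difference (assuming the announced .NET 2.0 reflection API):

using System;
using System.Collections.Generic;

class ReflectionDemo
{
    static void Main()
    {
        // In the CLR the type arguments survive to run time and reflection sees them.
        Type t = typeof(Dictionary<string, int>);
        foreach (Type arg in t.GetGenericArguments())
            Console.WriteLine(arg.Name);  // prints "String" then "Int32"
        // With Java's erasure, the runtime type is just the raw class.
    }
}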
>> dev: wow, this looks a lot like Java Generics. Does Microsoft have no shame?
In Java generics are syntactic sugar; they do nothing to improve performance.
> Tell us which, I don’t see them, given that:
>
> – you won’t be faster.
Of course you will. Simply because you don’t need to do runtime checking on types. Is that really difficult to grasp?
> – you won’t catch more errors[1].
No one said you will catch more errors, what’s been said is that you will catch more errors at compile time, rather than run time, which is an enormous benefit.
> – you have to write more code.
How so? You yourself mentioned Haskell: types are inferred, no “more code” to write.
>> casts, in dynamically typed languages [..] will always be
>> there, because they’re the only way to transform one
>> type into another, at runtime.
> really? can you point me to casts in some
> (python|lisp|ruby|smalltalk) code?
What part of “be it implicit or explicit” don’t you understand? Casts are ALWAYS there, even if in dynamically typed languages they are implicit. You need to do lots of boxing/unboxing when downcasting in languages like C++, or always in dynamically typed languages (leaving optimizations aside).
> No I’m not. I said ‘generics solve problems introduced
> by strong static manifest typing’. And I showed you
> that given a better type system (the Haskell type system
> is Turing complete in itself) you get generic type
> safety for free.
I get what you mean now; however, some people prefer to write the types explicitly, so unless you’re saying that Haskell is the best language on earth and everyone should be using it, I don’t see your point. In other words, Haskell may be better than C++ or C#, which doesn’t imply any dynamically typed language is better than C++ or C#.
> And, again, There is no need for it.
For “it” what, generics? Feel free to believe it.
Some of you should stop talking about Common Lisp if you don’t know anything about it. Because minds could be poisoned by your lack of information.
You can write Common Lisp to capture C’s performance, by adding types. You can’t write C to capture Common Lisp’s expressive power, except by writing a Common Lisp compiler.
Common Lisp is not in this discussion. It is agnostic about the whole type/performance religious war. You can choose both sides in the same app. It’s scary to think how much useless crap is said about it on forums.
> In Java generics are syntactic sugar;
Nonsense, the term “syntactic sugar” means features that can be implemented through simple syntactic substitution.
C++ templates could arguably be considered fancy syntactic sugar, although modern compilers do rather more than manipulating text to implement them.
The implementation of generics requires semantic analysis, type-checking and the insertion of casts, i.e. they’re a lot more complicated and powerful than “syntactic sugar”.
> they do nothing to improve performance.
That may be true, but they do greatly improve the expressiveness and static type safety of Java. Plus, you don’t have to type all those annoying casts when taking things out of collections.
IMHO this is one of the biggest sources of bugs.
@Fabio:
Obviously not for the problem cases at hand. It’s simply impossible.
Actually, for the case at hand, it’s *always* possible. A Lisp compiler can always emit a copy-down method (a specialization in C++ parlance) when it has the same info as the C++ compiler — the types of all arguments.
@Andy:
Clever program analysis might be able to remove some type checks and perform some unboxing, but it can’t remove them all except in toy examples.
Experience shows that it doesn’t matter if the compiler can remove all the checks. It only matters if the compiler can remove the checks in the 10% of the code that takes 90% of the runtime. Experience also shows that this is usually the case.
Of course the static type information allows the LISP compiler to emit code as efficient as C, but wasn’t this discussion about dynamic types and not having to write down the types?
The beauty of a language with soft-typing is that it’s up to the programmer to decide the trade-off. When writing performance-intensive code, CL programmers usually adopt the following style: write, debug, and test a straight-forward version of the code. Then, look at the output of your profiler and see what’s taking the most time. If you’ve got a smart compiler (eg: CMUCL/SBCL), look at its optimizer report to see what info it needs that it cannot infer for itself. Lastly, add declarations as necessary to make the code run fast.
Usually, the number of declarations will be a small fraction of the number required in a C program, because the optimizer will infer most of the types. Also, you won’t have to fuss with type declarations while you’re writing the code — you can concentrate on writing *correct* code. Ultimately, the worst-case scenario in a soft-typed language (where the optimizer bombs and cannot infer anything), is the best-case scenario in C++/Java/C# (you have to write all the declarations anyway).
IMHO this is one of the biggest sources of bugs.
IMHO you don’t know what you are talking about
>> andy: Nonsense, the term “syntactic sugar” means features that can be implemented through simple syntactic substitution.
It was my understanding at the time that Java generics did no static type checking. I didn’t mean to mislead anyone about that.
Of course, that doesn’t make the Java troll correct, for we could still say, “wow, Java generics look a lot like C++ generics; doesn’t Sun have any shame?”
>> andy: C++ templates could arguably be considered fancy syntactic sugar, although modern compilers do rather more than manipulating text to implement them.
C++ templates obviously cannot be implemented in terms of existing C++ semantics, so they cannot “arguably be considered syntactic sugar.”
>> andy: The implementation of generics requires semantic analysis, type-checking and the insertion of casts, i.e. they’re a lot more complicated and powerful than “syntactic sugar”.
Implementing generics does not require the insertion of casts; that is what generics are intended to prevent. Casts are by definition *explicit* conversions. I think you mean that generics require the insertion of implicit conversions; however, the only implicit conversions are those that result from trying to construct a vector <int> from a vector <char>, but such conversions are part of the base language and are not specific to the language’s generic constructs.
>> andy: That may be true, but they do greatly improve the expressiveness and static type safety of Java.
Yes, I was wrong on this point.
> Obviously not for the problem cases at hand. It’s simply
> impossible. Actually, for the case at hand, it’s
> *always* possible. A Lisp compiler can always emit a
> copy-down method (a specialization in C++ parlance) when
> it has the same info as the C++ compiler — the types
> of all arguments.
Sorry, but that makes no sense. The point here is that, in the case at hand, the C++ compiler has NO information on the type; it’s the programmer that has to tell it what type the argument is, via casting. The compiler could attempt an automatic downcast itself, but that would involve run-time type checking.
Any dynamically typed language can’t avoid doing runtime type checking in the case at hand, that’s a simple fact.
Fabio: You are right in that runtime type checking causes performance penalty. However, type inference works much better than you think.
With static typing, you *always* specify types, and get speed. With dynamic typing, you can choose not to specify types and get a little slowdown, or you can choose to specify *just a handful* of types, let the compiler do type inference, and get speed equal to static typing.
Ok, above is not accurate since static typing != explicit typing. ML family of languages do static typing but also type inference so that programmers don’t need to specify types. ML parametric types can be considered to correspond to C++ templates or C# generics, just much better.
Premature optimization is the root of all evils, profiler is your friend. *Most of the time*, runtime type checking penalty is not worth the annoying declarations and type gimmicks. When it matters, you can start to specify types. Isn’t this much better?
For me, working a lot with OOP (C#, VB.NET and Java), generics is really a feature I was waiting for…
However, I don’t like the syntax; perhaps it’s a little annoying or difficult for beginners to read.
This one would be much better, for example:
HashTable orders = new HashTable of Order();
Order is a class in this example.
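For comparison, the same declaration in the actual C# 2.0 syntax (Order is the example class from above; keying the collection by an int order number is just for illustration):

using System.Collections.Generic;

class Order { }

class SyntaxDemo
{
    static void Main()
    {
        Dictionary<int, Order> orders = new Dictionary<int, Order>();  // keyed by order number
        orders[1] = new Order();

        List<Order> pending = new List<Order>();
        pending.Add(new Order());
    }
}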
Thanks for raising the level of this discussion from
“LISP has dynamic types and LISP can be as fast as C, so who needs static types”.
I take your point about the compiler only needing to remove the important 10% of runtime checks, and I’m also minded to believe that should mostly be possible. Have you got any supporting evidence on that? And why did the ‘almabench’ programmers feel the need to put type declarations in, given that a small numeric program like that should be particularly easy for automatic type assignment?
But performance is only one issue anyway. The two other big ones are the costs of writing and testing your program initially and then maintaining it later.
Static typing eliminates bugs at compile-time that otherwise have to be found during testing. It does add the extra cost of having to put in types, but how tedious is that really? It’s not like every function call has to be annotated; only function headers and local variables need to be typed.
One has to think about those anyway, and I find it helps if I actually have to write them down.
Static typing does become cumbersome when it is not flexible enough and you need to put in explicit casts, but with the introduction of generics a lot of those will disappear. Hopefully C# and Java won’t stop there and develop even more powerful type systems. (There are enough experimental languages around to take their cues from.)
And any program maintainer, whether it’s the original programmer a few months later or someone else, will be thankful for the documentation that type annotations provide. Comments can help, but they’re often not put in, they’re out of sync with the code, and they can’t be checked by the compiler.
Dynamic typing has its place on the command line, in write-them-and-forget scripts and for rapid prototyping. I don’t think it’s a good choice for programs that need to be maintained over a long time.
The soft-typing approach certainly is an interesting and sensible compromise in the dynamic/static typing debate, allowing a seamless transition from prototyping to production software.
Compiler switches could ensure that programmers really put in the type information (at least for the module interfaces) before releasing the real thing.
> There’s a much more important difference between them:
> templates are glorified macros, whereas generic classes
> are in fact just classes with parameters.
templates in C++ are a lot more powerful than macros. It’s been demonstrated that templates are Turing complete; which is why you can do meta programming in C++.
> What does that mean? When a C++ compiler encounters a
> template all it can do is check its syntax. It can’t
> typecheck it, it can’t produce any code, it can’t even
> check that the functions you use do actually exist.
Correction: these are the MINIMUM requirements for a C++ compiler.
Ever used VC 7.1 or GCC 3.3? If you have, maybe you noticed that they’re quite picky about template syntax (i.e. you have to use the “template” and the “typename” keywords wherever appropriate). This is not just for the fun of typing more code, but to allow any C++ compiler to do a full check of the template code, even when it is not instantiated.
GCC 3.4 implements this, and it’s a pleasure to type template code with it.
> All that has to be done (over and over again) when
> you instantiate the template. Only then will any errors in
> the template show up.
Give GCC 3.4 a try.
> Also, the template writer might expect certain operations
> to be defined on the template arguments, but there’s no
> way for him to express that explicitly. Someone
> instantiating the template with the wrong arguments will
> get rather unhelpful type error messages possibly deep
> inside the template code.
That’s what static_assert() is for.
But then, wrong arguments in any code will lead to errors. Be it template code, or a simple C function.
That’s why documentation is so important.
> Generic classes, on the other hand, are fully type-checked
> as soon as the compiler encounters them, and the so-called
> “derivation constraints” allow generic class writers to
> express constraints on generic parameters.
> As a consequence, users of generic classes don’t need to
> worry about type errors within the generic class, and if
> the generic arguments they provide don’t fit the
> constraints, they get a clear error message.
There is a reason why, in C++, template code (a class or a function) is not instantiated until used. It’s to allow two logical interfaces to live in the same code. For example, it may happen that a class has a member function that makes absolutely no sense when using std::string as a template argument, but that doesn’t keep you from using the rest of the class with std::string.
There are also other concepts that can solve these cases, such as template specialization or substitution-failure-is-not-an-error (“if an invalid argument or return type is formed during the instantiation of a function template, the instantiation is removed from the overload resolution set instead of causing a compilation error” (Boost)).
the worst-case scenario in a soft-typed language (where the optimizer bombs and cannot infer anything), is the best-case scenario in C++/Java/C# (you have to write all the declarations anyway)
That doesn’t make sense. The soft-typed worst-case scenario performs worse, plus you don’t get the documentation. Unless you mean cases where the static type system isn’t flexible enough and requires casts. With generics that will occur a lot less.
> With static typing, you *always* specify types, and get
> speed.
Speed is only one advantage of explicit typing. The others are that more bugs are found at compile-time and that the type annotations also serve as always-up-to-date documentation.
Static type inference finds the compile-time bugs too, but the error messages are often difficult to understand and appear far from the actual bug.
Sorry, but that makes no sense. The point here is that, in the case at hand, the C++ compiler has NO information on the type, it’s the programmer that has to tell it what type the argument is, via casting.
What’re you talking about — casting has nothing to do with templates. The whole point of templates is to allow the programmer to tell the compiler what the types are. At the point where a template is called, the C++ compiler knows the types of all arguments, and emits code specialized for those types. Ie, when you have the generic:
template <typename T>
T genericAdd(T lhs, T rhs) { return lhs + rhs; }
You cannot call it without the compiler knowing what the type of T is at each case. My point is that when a CL compiler doesn’t know the types, the call will still work, because the compiler will emit a runtime dispatch, and when it does know the types, it will emit code specialized for those types.
templates in C++ are a lot more powerful than macros
I’m not arguing with that, hence the admittedly flippant use of “glorified”. Templates are properly integrated with the language, i.e. template instantiation is done at the syntax tree level rather than at the character-stream level. And modern compilers do handle template instantiation cleverly to avoid duplication.
That’s what static_assert() is for.
Admittedly I haven’t heard about that before, but as far as I can see from a quick google query this is a clever macro hack that is not part of the language.
It’s nice if the template writer puts it in to give better error messages, but he’s in no way required to and the static_asserts might be out of sync with the actual requirements of the template.
C# and Java won’t compile a generic class if it uses methods that aren’t guaranteed by the constraints on its arguments.
Still compiling gcc 3.4 …
Have you got any supporting evidence on that?
There are lots of assorted microbenchmarks on comp.lang.lisp. My favorite overall benchmark is: http://www.flownet.com/gat/papers/lisp-java.pdf
The Lisp vs Java results are outdated, but the Lisp vs C++ results should be accurate. The overall finding was that the Lisp programs were faster on average than the C/C++ programs, though the fastest C++ programs were faster than the fastest Lisp programs. What the microbenchmarks suggest is that the Lisp programmer
And why did the ‘almabench’ programmers feel the need to put type declarations in, given that a small numeric program like that should be particularly easy for automatic type assignment?
Different CL compilers have different degrees of intelligence in their type inference. The almabench programmers probably erred on the side of caution, to allow good results from compilers less intelligent than CMUCL/SBCL.
Static typing eliminates bugs at compile-time that otherwise have to be found during testing.
The general consensus among Lisp programmers, and one that fits my experience, is that type checking only catches very simple bugs that are quickly caught the first time a dynamically-typed program is run anyway. The category of subtle typing bugs is rather small in practice.
It does add the extra cost of having to put in types, but how tedious is that really?
Yes, it is. One, because it breaks your train of thought to have to worry about types, but more importantly because refactoring code can cause a ripple effect that requires large amounts of manual type fixups.
One has to think about those anyway, and I find it helps if I actually have to write them down.
Then write them down. The whole idea of soft-typing is that you can write the types you think will really help, while ignoring other types (like the types of loop variables and temporaries).
Dynamic typing has its place on the command line, in write-them-and-forget scripts and for rapid prototyping. I don’t think they’re a good choice for programs that need to be maintained over a long time.
Dynamically typed languages were being used for mission-critical programs before Java was a figment of Bill Joy’s imagination. Ericsson, to this day, uses Erlang (which is dynamically typed), for their mission-critical communications.
Yes, static_assert is kind of a hack, but a very useful one. Of course, it’s not the solution to every problem in C++, but it’s one more tool in your toolbox.
C++ templates are still more powerful than generics, even if they are more complex to use (and more error prone). At least you have the choice of having more powerful tools.
Templates and generics are primarily used by libraries, which is why I don’t see their complexity as such a bad thing; but that’s my personal opinion.
Plus newest compilers are getting good at analysing template code and reporting errors in a human readable fashion.
> Still compiling gcc 3.4 …
Way to go. 🙂
If you want a small code test example for GCC 3.4:
http://users.pandora.be/tfautre/template.cpp
This code compiles with absolutely no error or warning on GCC 3.3 or VC 7.1. But GCC 3.4 will go further and notice that the variable “aa” hasn’t been declared.
I hope that VC 8.0 will also follow GCC 3.4 footsteps. If it does, it’ll be C++ heaven using these two major compilers for coding templates.
IMHO it would be a good thing that the next C++ standard requires these additional checks instead of making them optional.
The fundamental problem with C++ templates is that they are partial solutions to two different problems combined in one feature. They offer genericity as well as metaprogramming, but are not really optimal for either (especially metaprogramming). A much better solution is to have proper procedural macros to handle metaprogramming, then a generics mechanism.
It must be noted that there is nothing in Lisp-like languages quite the same as generics. The importance of generics is not the performance benefit (as I’ve said, a good optimizer can figure most of that out itself), but the ability of the programmer to assert certain invariants about the code.
If you think about it, both type declarations and generics are really just examples of generalized assertions. ‘int i’ is the assertion: “the type of the variable i is an integer in this scope,” while “vector<int> vec” is an assertion that “vec is a container that always contains integers.” There are lots of other generalized assertions that could be useful, such as: “the range of this integer is between 0 and 10.”
Now, different languages all offer some of these features, but I don’t know of any that express them all as facets of one underlying feature. Dylan has its limited types, and CL has its limited integers, which can specify certain constraints, and Eiffel has its contracts, but they are all special-purpose features, not a generalized construct (like a lambda or a type).
That doesn’t make sense. The soft-typed worst-case scenario performs worse, plus you don’t get the documentation.
We’re not arguing the same point. The problem is really two-dimensional, since we’ve got two variables — the amount of type inference the compiler achieved, and the performance required of the program. So let’s fix one of them (performance), and specify that the resultant solution should perform about as well as C.
Then, the best case for soft-typing becomes: the compiler infers everything, and you never have to deal with type declarations. The worst case becomes: the compiler infers nothing, and you have as many type declarations as in the C code. In both cases, the performance is the same; what changes is how much programmer effort was required to get there. The point is that in the worst-case scenario, you put in as much effort as for the C code, for about the same performance. That is, of course, the best (and only!) case for the C code.
> What’re you talking about — casting has nothing to do
> with templates.
Perhaps you could bother to read the article, next time, before commenting, uh?
The situation is this:
1) You’ve got a collection of items
2) You want these items to be all of the same kind
3) You don’t want to write specific code for each kind of item
You’ve got 2 solutions for that:
a) cast everything to the common base class (C++ doesn’t have a common base class by default, so it’s even more cumbersome there) when inserting the items into the collection, and cast them back when extracting them.
This of course involves speed penalties and is error prone at run time, since when casting back you might incur an exception if the object is not of the required type.
b) use generics (a quick sketch of both options follows).
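In C# terms, a rough sketch of the two options, using ArrayList for (a) and List<Order> for (b); Order is just a placeholder class:

using System;
using System.Collections;
using System.Collections.Generic;

class Order { }

class CastVsGenerics
{
    static void Main()
    {
        // (a) cast everything to the common base class (object in .NET)
        ArrayList raw = new ArrayList();
        raw.Add(new Order());
        raw.Add("oops, not an Order");    // compiles without complaint
        Order first = (Order)raw[0];      // cast back out on every read
        // Order second = (Order)raw[1];  // would throw InvalidCastException at run time

        // (b) use generics: the wrong insertion never compiles
        List<Order> orders = new List<Order>();
        orders.Add(new Order());
        // orders.Add("oops");            // compile-time error instead of a runtime surprise
        Order firstTyped = orders[0];     // no cast needed

        Console.WriteLine(first != null && firstTyped != null);
    }
}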
Nobody cares about Lisp anymore. We want modern, robust languages like Java and C# for today’s demanding agile development and information modelling tasks. An enterprise wants an industry-standard language with a proven track record, not some academic toy.
@Fabio:
Perhaps you could bother to read the article, next time, before commenting, uh?
Agh. Such poor reading comprehension! Let’s try this again. Your original comment:
Say that to Common Lisp compiler developers. Common Lisp is as fast as C.
Obviously not for the problem cases at hand. It’s simply impossible.
My point is that a Lisp compiler will be as fast as a C++ compiler (in the case at hand, which I take to mean dealing with generic containers) because the Lisp compiler will use copy-down methods, and the C++ programmer/compiler will use templates.
@Slanger: Please wake me up when your buzzword-compliant “modern” languages get eval() or macros (or both!). I just spent several hours today and yesterday obeying Greenspun’s 10th rule, because C++ (and C# and Java) doesn’t have either. Specifically, I had something that called for the interpreter pattern (go read GoF), which is trivial to implement with eval(), easy to implement with macros, and onerous to implement otherwise.
Modern agile development doesn’t need interpreters or macros. Hullo? Java and C# are compiled languages. What we do need is web development frameworks, enterprise beans, and information modelling to give us the edge over the competition, with a shorter time to market and reduced training costs.
Development in C# or Java is hardly agile. They don’t even have interactive compilers! And C# and Java desperately need macros (*cough* Xen *cough*). And eval() has nothing to do with compiled vs interpreted languages. Quite often, you need to have your program interpret statements in non-source languages. Think of SQL expressions. If you can just plug in a pre-built SQL library, then yay for you, but unless your solution is already built for you, C# and Java make it a pain to do it yourself.
> Nobody cares about Lisp anymore. We want modern, robust languages like
> Java and C# for today’s demanding agile development and information
> modelling tasks. An enterprise wants a industry-standard language with
> a proven track record, not some academic toy.
> Java and C# are compiled languages. What we do need, is web
> development frameworks, enterprise beans, and information modelling to
> give us the edge over the competition, with a shorter time to market
> and reduced training costs.
No kidding. Lisp might have been good for its time, but with innovations like garbage collection, memory safety, bignums, object-oriented programming, XML, Agile design, etc, we can’t keep reinventing the wheel and retrofitting old languages to do new things.
Just look at the pain Python went through for garbage collection! It’s a few years older than Java, but a lot younger than lisp. I’m sure Sun or Microsoft will look at these macros Lisp has and see if they are worth it for general programming.
😉
Development in C# or Java is hardly agile. They don’t even have interactive compilers!
I guess you’ve never used Eclipse or IDEA then. How do you think that “live code” in those IDEs work?
It wouldn’t be a big deal to write a plugin that is comparable to Emacs/Xemacs scratch buffer. It’s probably already out there.
No kidding. Lisp might have been good for its time, but with innovations like garbage collection, memory safety, bignums, object-oriented programming, XML, Agile design, etc, we can’t keep reinventing the wheel and retrofitting old languages to do new things.
You do realize that Lisp has had all those things you’ve mentioned for decades now? Well, back in the day, “XML” was called “s-exprs.” Indeed, it’s C# that’s retrofitting stuff like delegates (lambdas), iterators (lambdas), domain-specific languages (macros), etc. After just a few years of evolution, C# is already more creaky than CL. E.g., there is no point in having delegates in C# 2.0, when it’ll have lambdas too. There is no point in having struct types, when you’ve got optimizers that do that under the hood. CL has its warts too (properties in symbols) but it’s older than I am!
I’m sure Sun or Microsoft will look at these macros Lisp has and see if they are worth it for general programming.
The Microsoft language designers are surprisingly ignorant of Lisp. Which makes sense, Microsoft has a very commercial mentality, and most commercial-types have not ever looked at Lisp beyond a painful intro to Scheme in college…
Looks like Rayiner referred to interactive compiler, not IDE. With common lisp you can redefine code at runtime; functions, classes, etc. No classloader needed.
One thing they keep talking about on usenet is how functions can cache themselves automatically. You can write a command that takes a function name and replaces it with a self-caching version.
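A rough C# 2.0 analogue, much weaker than the Lisp version because it wraps a new delegate instead of redefining the function in place (Fn and Memo are made-up names, not library types):

using System;
using System.Collections.Generic;

// C# 2.0 has no built-in Func<,>, so declare a one-argument function delegate.
delegate TResult Fn<TArg, TResult>(TArg arg);

static class Memo
{
    // Wrap a function so repeated calls with the same argument hit a cache.
    public static Fn<TArg, TResult> Wrap<TArg, TResult>(Fn<TArg, TResult> f)
    {
        Dictionary<TArg, TResult> cache = new Dictionary<TArg, TResult>();
        return delegate(TArg x)
        {
            TResult result;
            if (!cache.TryGetValue(x, out result))
            {
                result = f(x);
                cache[x] = result;
            }
            return result;
        };
    }
}

class MemoDemo
{
    static int SlowSquare(int n)
    {
        Console.WriteLine("computing " + n);
        return n * n;
    }

    static void Main()
    {
        Fn<int, int> square = Memo.Wrap<int, int>(SlowSquare);
        Console.WriteLine(square(4));  // computes and caches
        Console.WriteLine(square(4));  // served from the cache
    }
}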
That last post of mine was a sarcastic response to the troll btw. No worry about bignums, GC was supported for many decades, etc.
Thankfully, I get to avoid Java programming in general. The Java folks are, however, going in the right direction by copying Smalltalk IDEs. How interactive is IntelliJ? Eclipse is certainly nothing close to a Lisp or Smalltalk IDE, but I haven’t ponied up the change for IntelliJ. Can you redefine methods at runtime? Can you evaluate expressions at runtime? How about modify classes at runtime?
Hm, subtlety at 10:24pm is not appreciated
> Hm, subtlety at 10:24pm is not appreciated
Yeah, I feel pretty bad about that. This is a site for people who like cutting edge OS’es, and I don’t get why people put down cutting edge languages. Without doing any research.
But if people need to bust out of that crazy dynamic/static duality, I guess these flamewars are for the best.
Ever notice how the worst is from those guys who know Scheme and think they know Common Lisp? They might look a little similar, but that’s like saying Java looks like C.
Have you looked at one of the 3.0Mx builds of Eclipse lately? It has some pretty good refactoring support and is closing in on IDEA rapidly. Eclipse’s interface needs some more work in my opinion, but since IDEA is Swing-based the fonts look like crap even with the Windows L&F and AA enabled, plus it’s 300 or so bucks to buy, so I’ll probably stick with Eclipse, especially with the abundance of major (i.e. not a hex editor) plugins that are available.
Can you evaluate expressions at runtime?
In c# there is a class that has methods to evaluate expressions on the fly, I don’t know about java, maybe Groovy has something like that.
Can you redefine methods at runtime?
Reflection should take care of that
How about modify classes at runtime?
Not sure, maybe with a custom classloader, but in any case it’s really irrelevant except for SmugLispWeenies http://c2.com/cgi/wiki?SmugLispWeenies like you.
You have to wonder why people are still discovering generics and why there are still people who think they are a bad idea.
All the problems of code bloat are coding problems; they all have known solutions. Just as taking a procedural software design technique like JSP to an extreme can lead to excessive source code, you have to show some level of engineering understanding of the methods and use them at the correct time and in the correct ways.
Generics, and Metaprogramming are just tools that you should use at the right time for the right job.
—
David Allan Finch – ACCU member
See, you miss the point of what I mean by “interactive compiler.” It’s a fundamentally different paradigm than regular development. If you can’t modify methods at runtime, or modify classes at runtime, and need to use a separate class to evaluate code at runtime, then you don’t support interactive development. The whole idea of interactive development is that you don’t work *on* your program, you work *inside* your program, building it bottom-up and testing interactively as you go along. Btw — it’s nothing special about Lisp. Languages that support interactive development are a dime a dozen these days.
> Can you redefine methods at runtime?
> Reflection should take care of that
I’m a Java programmer too. Reflection does not take care of that unless you have a classloader…
> Not sure, maybe with a custom classloader, but in any case it’s really
> irrelevant except for SmugLispWeenies
> http://c2.com/cgi/wiki?SmugLispWeenies like you.
Ooh, haven’t heard that one before a million times. Guess Theo de raadt is a smug unix weenie for flaming people like you?
You don’t know what a classloader does in the language you’re ‘expert’ in, but you have time to dig out some trivial link.
You know, anon, people have all sorts of backgrounds which occasionally bias them in a certain way. While I enjoy lisp, it did take me a while to rewrite all those biases. It wasn’t easy, and partly I did it out of anger about those biases.
Looking thru usenet archives, I notice that programmers used to flame each other hard. Without thinking anything of it. But communication standards are different now. (Worse or better, I don’t know.)
Hey, anonasswipe, I never claimed that I was an expert in Java, but it’s clear from your remarks you aren’t either because reflection using a Method class will let you redefine methods at runtime. Do your homework aol-boy.
Ooh, haven’t heard that one before a million times. Guess Theo de raadt is a smug unix weenie for flaming people like you?
Where the hell did Theo de raadt come into this discussion. Put down the crackpipe, this is not a BSD discussion.
It’s hilarious how people get so worked up over Lisp and claim that Lisp solved every problem under the sun 50 years ago, when Lisp as a production language is a failure. Plain and simple. It had enough time to prove itself and didn’t. I think some people are still crying over the AI winter. Now, why don’t you go cry to your Lisp mentor on Usenet, because you just got bitch-slapped around, AOL boy.
> Hey, anonasswipe,
Got under your skin, eh?
> I never claimed that I was an expert in Java, but
> it’s clear from your remarks you aren’t either because reflection
> using a Method class will let you redefine methods at runtime. Do your
> homework aol-boy.
So you can add code which you just wrote, whilst your system is executing, right? Not only don’t you know Java, you’re a liar too.
> It’s hilarious how people get so worked up over Lisp and claim that
> Lisp has solved every problem under the sun 50 years ago when Lisp as
> a production language is a failure. Plain and simple. It had enough
> time to prove itself and didn’t.
Ever notice that funny language in Emacs? :):)
> I think some people are still crying
> over AIwinter. Now, why don’t you go cry to your lisp mentor on usenet
Sorry I hurt you. You look right riled up!