Explore the design and rationale of the new C++/CLI language introduced with Visual C++ 2005, and use that knowledge to write powerful .NET applications.
It looks like someone affiliated with Mono is writing a WHIRL (the intermediate language for the Open64 compiler, I guess) to CIL translator. I don't know how far along it is, but it would be pretty cool to have an open-source/Unix C++-to-CIL compiler down the road.
After reading this document I must say that I don't see much sense in a C++ compiler for .NET. If I am not mistaken, compile-time programming isn't implemented, and that is one of the most powerful features of C++.
Some things are just not possible in C#, so you have to revert to C++ and expose a managed .NET layer for some APIs. For instance, the old ATL IABContainer/IAddrBook interfaces from MAPI are not accessible directly from C#, because no GUIDs even exist or have been published, so trying to use interop to define a ComVisible interface is no good. Having started in VB and moved to C# a few years ago, I'm finding myself more and more drawn to .NET managed C++, simply because C# isn't so well suited in all instances.
I posted a suggestion on the C# 2.0 beta feedback site, "#include header.h support for 'unsafe' code blocks within C#", so you could write:

#include <wab.h>

using System;

void StandardManagedMethod()
{
    // regular C# code
}

unsafe void MyMethod()
{
    //
    // Write "unsafe" C# code accessing standard non-.NET libraries
    // without having to resort to Interop structures. In this example,
    // Interop in C# is impossible since the structure GUIDs are not
    // published.
    //
    IABContainer *lpContainer;
    IAddrBook *lpAddr;
}
I believe C++/CLI supports (or will support) both .NET generics and C++ templates, so compile-time programming is implemented.
Brandon Bray wrote about them here:
http://blogs.msdn.com/branbray/archive/2003/11/19/51023.aspx
Stan Lippman here:
http://blogs.msdn.com/slippman/archive/2004/08/05/209606.aspx
But, I don’t think meta-programming is the biggest benefit. I think the most appealing feature is C++’s type system. In particular, deterministic destruction.
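To make that concrete, here is a minimal sketch of what deterministic destruction looks like in C++/CLI, as far as the published design goes (the names are illustrative, and the syntax may still shift before release):

using namespace System;

// A ref class with a destructor; in C++/CLI the destructor is mapped
// onto IDisposable::Dispose behind the scenes.
ref class File
{
public:
    File(String^ name) { /* open the file */ }
    ~File() { /* close the file */ }
};

void Work()
{
    File f("log.txt");   // stack semantics: no gcnew, no handle
    // ... use f ...
}   // f's destructor runs right here, at end of scope, rather than
    // whenever the garbage collector gets around to finalizing it.

Compare that with C#, where you'd need a using block (and an IDisposable implementation) to get the same guarantee.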
Anyway, for getting an insight into C++/CLI, Stan Lippman and Herb Sutter’s blogs are excellent.
http://blogs.msdn.com/hsutter/
http://blogs.msdn.com/slippman/
Cool, now I can totally recall my previous post. C++/CLI is definitely something cool.
Prior to VS7, Microsoft's C++ implementation was so wonky it was practically a different language (which was fine with them; a lot of companies write for Windows first, and it helped make porting to other platforms harder).
Then they got beaten up enough over standards conformance that they released VS7.
Now that everyone “knows” that VS speaks something pretty close to real C++, they’re adding extensions again, baiting the trap with easy access to all the neat classes in the .NET framework.
I thought Apple was dumb to push their own language (Objective-C) that made it so much harder to share code. MS is doing the same thing, but they have the market share to actually make it work and really hurt the market. Objective-C is syntactically different, and it's obvious what needs to be changed; VC++ 2005 looks more semantically different, so porting to anything else may be even harder.
Well, I would say that MS is approaching the ISO standard at quite a good speed.
Those extensions for .NET are only valid when you are writing .NET code. They don't appear in native C++, so that shouldn't be a problem.
Anyway, every compiler adds its own extensions to the language, which makes porting a non-trivial job regardless.
It already has a different name: C++/CLI.
Anyone who thinks that's the same as Standard C++ needs some reading comprehension skills, and would likely think Apple's "Objective C++" is C++, or that C++ is the same as C.
Furthermore, C++/CLI will be standardized through ECMA, just as C# and the CLI were; see http://www.ecma-international.org/news/ecma-TG5-PR.htm.
Even more interesting, some of the C++/CLI extensions are being proposed to the C++ standards body, to bring C++/CLI and C++ closer together. One such example is the nullptr keyword; see:
http://std.dkuug.dk/jtc1/sc22/wg21/docs/papers/2003/n1488.pdf
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n1601.pdf
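The classic motivating example from those proposals (the function names here are illustrative): in C++, NULL is just the integer literal 0, so it binds to the wrong overload:

void f(int);
void f(char*);

f(0);        // calls f(int): the literal 0 is an int
f(NULL);     // also calls f(int), since NULL is typically defined as 0
f(nullptr);  // would unambiguously call f(char*)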
As for your contention that C++/CLI is syntactically similar, but semantically different, that is not true, at least from what I’ve seen so far. C++/CLI is intended to be a proper superset of Standard C++, so they should be syntactically similar. (Whether C++/CLI is a proper superset is another matter; implementation issues complicate this, so that you currently can’t have an unmanaged class member in a managed class, as one example.) C++/CLI adds new keywords such as gcnew, property, and “spaced keywords” such as ref class. These are syntactic changes. (You could argue that ref class looks like a semantic change, but it wouldn’t compile under Standard C++, so it’s a syntactic change as well as a semantic change.)
All semantic changes I've seen have to do with the syntactic changes and with integration issues, such as value classes not being able to have default constructors, copy constructors, and destructors. This limitation is due to CLI requirements. It also won't change the semantics of existing C++ code, as a class in existing C++ code will be an unmanaged class, not a managed value class.
In short, there shouldn’t be any semantic changes without a corresponding syntactic change.
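For a flavor of those syntactic additions, here is a minimal sketch (the class and function names are made up, and details may change before the final release):

// ref class declares a CLI reference type; property and gcnew are among
// the new (spaced) keywords layered on top of Standard C++.
ref class Point
{
    int x;
public:
    Point(int x) : x(x) {}
    property int X
    {
        int get() { return x; }
        void set(int value) { x = value; }
    }
};

void Demo()
{
    Point^ p = gcnew Point(3);  // ^ is a handle to a garbage-collected object
    p->X = 5;                   // property access reads like field access
}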
However, this doesn’t help with the “build on Windows first, port to other platforms” problem. If a programmer unwittingly uses C++/CLI language features, this will complicate porting to Standard C++. I don’t consider this to be a major problem, as I doubt it could be any worse than the current library situation, where developers use Windows-specific libraries which don’t exist on other platforms, thus limiting portability.
The biggest impediment to portability will continue to be program dependencies (libraries, platforms, etc.). I don’t see C++/CLI changing this any time soon.
Umm, Paul, Objective-C is only needed for the GUI code; the back end can be ANSI C/C++.
I tend to lean against using MS tools to build software, in order to maintain portability. In other words, I'm big on open, cross-platform standards.
But, as much as MS disgusts me with a lot of their business practices and insecure, unstable software, I have to give them kudos for building great development tools. I also have to give them kudos for making their C++ implementation more standards compliant.
Finally, I look at C++/CLI, given that it will be an ECMA standard, as ultimately good for C++. It means that C++ is still a viable, desirable tool for the most prevalent desktop platform in existence.
As for using MS extensions: if one wants to maintain portability, one should always avoid vendor extensions of any kind, or isolate them in separate modules that can easily be swapped for other platforms' equivalents.
I am a fan of generative programming; I often submit interesting articles from the Code Generation Network
http://www.codegeneration.net
to OSNews, but I have so far failed to see a legitimate use for compile-time programming via templates.
It is a cute hack, sure. But a legitimate technique?
Regards,
Marc
It's something in between a cute hack and a legitimate technique. To the extent that templates do what they were designed for (generic code, parameterized code), they're fine. I'd put stuff like Loki's smart pointers on that list. Some more esoteric uses are useful, but hacky (e.g., C++ numeric libraries that use templates for some compile-time evaluation). And some stuff is just plain unusably hacky, like Boost.Lambda. Lots of stuff in "Modern C++ Design" is usable, but too hacky for my taste. These are really examples of solutions limited utterly by technology rather than by developer capability. The developers in these cases clearly knew what they wanted (Boost.Lambda = lambdas/closures, Loki's visitor pattern utilities = multi-method dispatch, etc.), but the language wouldn't let them do it properly. If C++ really wants to go the route of code generation and metaprogramming, it should do it properly. That means having a proper procedural macro facility that can directly modify a program graph at something higher-level than an AST (because C++'s AST would be farking unusable).
PS> The title of the article is silly. C++ is more powerful than C# 1.0, but the feature sets of C++ and C# 2.0 do not coincide, and C# 2.0 has some very powerful features like lambdas. In any case, I'd say that Nemerle is certainly the most powerful language available for .NET, if only because there is no Common Lisp or Dylan port to it yet.
Templates… A cute hack? Interesting perspective…
Please read Modern C++ Design for the power this “cute hack” permits:
http://www.amazon.com/exec/obidos/tg/detail/-/0201704315/qid=109284…
Templates permit compile-time code generation. Modern C++ Design has several examples where entire class hierarchies are generated with one line of code. That can greatly simplify maintenance.
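For instance, here is a stripped-down sketch in the spirit of the book's GenScatterHierarchy (a toy version of my own, not Loki's actual code): one typedef generates a class that inherits one Holder<T> per type in a typelist.

struct NullType {};

template <class T, class U>
struct Typelist { typedef T Head; typedef U Tail; };

template <class T>
struct Holder { T value_; };

// GenScatterHierarchy: inherits one Unit<T> for every T in the typelist.
template <class TList, template <class> class Unit> struct GenScatterHierarchy;

template <class Head, class Tail, template <class> class Unit>
struct GenScatterHierarchy<Typelist<Head, Tail>, Unit>
    : public Unit<Head>, public GenScatterHierarchy<Tail, Unit> {};

template <template <class> class Unit>
struct GenScatterHierarchy<NullType, Unit> {};

// One line of code generates a class with an int field and a double field:
typedef GenScatterHierarchy<Typelist<int, Typelist<double, NullType> >,
                            Holder> WidgetInfo;

void Demo()
{
    WidgetInfo w;
    static_cast<Holder<int>&>(w).value_ = 42;       // the int unit
    static_cast<Holder<double>&>(w).value_ = 3.14;  // the double unit
}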
The compile-time code generation can also be used to improve execution performance. The Blitz++ library uses automatic loop unrolling, removal of temporaries, and other optimizations, which allow C++ to execute numerical algorithms nearly as fast as Fortran 77/90, instead of at a mere fraction of that speed (as could otherwise be the case), and without introducing special language "warts" such as C99's restrict pointers. See:
http://www.oonumerics.org/blitz/
http://www.oonumerics.org/blitz/manual/blitz02.html#l31
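As a much-simplified toy illustration of the unrolling idea (my own sketch, not Blitz++'s actual implementation), a recursive template can compute a fixed-length dot product with no runtime loop at all:

// Dot<N> unrolls an N-element dot product at compile time.
template <int N>
struct Dot
{
    static double eval(const double* a, const double* b)
    {
        return a[0] * b[0] + Dot<N - 1>::eval(a + 1, b + 1);
    }
};

template <>
struct Dot<0>
{
    static double eval(const double*, const double*) { return 0.0; }
};

// Dot<4>::eval(a, b) expands at compile time to
// a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3] with no loop and no branch.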
Template meta-programming also permits "expression templates", which allow the introduction of new syntax without requiring language changes. One such example is the "lambda"-style programming provided by the Boost Lambda library:
http://www.boost.org/libs/lambda/doc/
This permits you to use the std::for_each algorithm to print out the contents of a container without needing to introduce intermediate “helper” functions/functoids:
// Using the Boost Lambda Library:
std::for_each(a.begin(), a.end(), std::cout << _1 << ' ');
// Alternative, non-expression-template approaches:
// 1. Skip std::for_each and just use a `for' loop:
for (some_type::iterator i = a.begin(); i != a.end(); ++i) {
    std::cout << *i << ' ';
}
// 2. Alternatively, introduce a new, global class/functor:
struct Functor {
    void operator()(int v) { std::cout << v << ' '; }
};
std::for_each(a.begin(), a.end(), Functor());
For many, the notational convenience of lambda-style programming is useful, cleans up the code, and simplifies understanding.
None of these techniques would be possible without template meta-programming, and many would disagree with your assessment that they are merely "cute hacks". In some cases (Blitz++), these "cute hacks" permit the use of C++ where normally another language would have to be used.
All in favor of migrating a tedious, baroque syntax to a new VM and class hierarchy, raise your hands
the nays have it
>> Please read Modern C++ Design for the power this “cute hack” permits:
and please see how the programmers who actually work at your company use templates to completely erode your productivity and sanity.
This book, more than any I can remember, has destroyed what sanity and rational hesitance many C++ programmers had... nothing like contrived examples to make you think you can redesign your code around a brand-new paradigm.
Advice: mail copies of this book to your competitors. You will be very glad you did.
Please read Modern C++ Design for the power this “cute hack” permits:
I’ve read it. It’s powerful. It’s still a hack.
Templates permit compile-time code generation.
So do procedural macros, and unlike templates they are actually designed for code generation, and are much clearer and more powerful as a result.
That can greatly simplify maintenance.
Yes, it can. However, a lot of the examples in Modern C++ Design show correct identification of the problem, the correct general solution to the problem, and then a severely twisted implementation based on the constraints of the C++ template model.
Template meta-programming also permits "expression templates", which allow the introduction of new syntax without requiring language changes. One such example is the "lambda"-style programming provided by the Boost Lambda library:
This exists in name only. Boost.Lambda pushes templates too far, and is nearly unusable. I'm sorry, but lambdas are inherently a simple and elegant concept, and Boost.Lambda is decidedly not. It requires you to use a stylized C++ syntax in which the semantics are similar to, but not identical to, C++, creating a sure recipe for disaster. Beyond that, it carries a huge compile-time performance penalty and produces unreadable compiler errors for the simplest-looking code.
// Using the Boost Lambda Library:
std::for_each(a.begin(), a.end(), std::cout << _1 << ' ');
Interesting you should mention this, yet leave out what happens when you want to modify that slightly, to accumulate each entry in a list. For example, if a is a container of integers, you cannot do:
int sum = 0;
std::for_each(a.begin(), a.end(), sum += _1);
Since “sum” is not a lambda expression, you have to do:
int sum = 0;
std::for_each(a.begin(), a.end(), var(sum) += _1);
There is an entire page of extra semantics in the BLL documentation detailing such pitfalls. Meanwhile, in any language with real lambdas, the syntax and semantics of the lambda are no different from any other code. BLL is more of a cruel joke, showing C++ programmers the power of lambdas without giving them a good way to use it.
C’mon, that’s a little over the top. Templates are a very useful tool. In the wrong hands, of course any tool can be abused. My primary beef with templates is that they can (and often do) cause code bloat, if you’re not careful. But that can be mitigated by using templates only in situations where they make sense, not in every class. Another issue is debugging. Templates can be considerably more difficult to debug, particularly when you nest various types.
These issues are generally not a productivity problem in disciplined shops. However, some people make the mistake of assuming that templates are an all-or-nothing proposition: that if they start using them in one place, templates must spread like a cancer throughout the codebase. Not at all true. My advice is not to throw out the baby with the bath water. Use templates where they make sense. For example, if you're creating generic iterators or container classes that vary only by type, those are situations in which templates make sense. I'm not a big fan of the STL, but it does a reasonably good job of reducing internal dependencies, so that you can use a small subset of classes without dragging everything in.
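As a trivial sketch of that kind of vary-only-by-type situation (Stack here is a made-up example, not STL code):

#include <vector>

// One definition serves every element type; nothing else varies.
template <class T>
class Stack
{
    std::vector<T> items_;
public:
    void push(const T& v) { items_.push_back(v); }
    T pop() { T v = items_.back(); items_.pop_back(); return v; }
    bool empty() const { return items_.empty(); }
};

// Stack<int>, Stack<std::string>, Stack<Widget*>: one template, many types.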
Procedural macros are much worse than templates, as far as productivity goes. Unlike templates, macros are practically impossible to debug without dropping down into the generated assembly language. Not to mention that projects that utilize macros tend to become extremely brittle and difficult to maintain over time. It’s difficult (or impossible) to determine what the code does just by looking at it — you generally have to compile with /P in order to see the preprocessed output. Cyclomatic complexity also tends to spiral, in my experience, as people don’t know how much code is being generated. Bad voodoo.
Bottom line: I wouldn’t measure the worth of templates by a single book — or a single implementation that you’ve come across. When I use templates, I use my own classes that don’t have the cruft and weirdness that you would find in a lot of these class frameworks. Books are generally not a very good place to get working code, either. Not a bad place to start — but I wouldn’t just grab code from a book and throw it into a production system.
Gil wrote: Procedural macros are much worse than templates, as far as productivity goes. Unlike templates, macros are practically impossible to debug without dropping down into the generated assembly language.
I think Rayiner thought of the Scheme macros, which have hardly anything in common with the C/C++ text replacement macros you are referring to.
Half of the problems people have with text-replacement macros in C++ could be solved by giving IDEs a macro-expand capability in the editor.
In any case, C macros aren't "procedural macros." In Common Lisp (and in some Scheme extensions), macros are just regular functions that run at compile time to transform input code before it gets to the compiler. They can be used to do all the things that Johnathan mentioned (including compile-time evaluation in numeric code, something most pattern-matching macro systems can't do). The key advantage of real macros over template metaprogramming is that you don't have a template metalanguage sitting off to the side as in C++; you can use regular code to implement the transformations.
People here are nit-picking about the use of templates: that in certain situations they lead to bloat, or hard-to-debug code, or don't make sense in certain class hierarchies.
People get hung up on a certain technique or library (templates, STL, generics, etc.), figure that it is a universal solution for everything, and try to apply it in situations where it is not best suited.
C++ is a large, powerful, flexible, feature-rich, multi-paradigm language. It's just a matter of applying the right feature/technique/library to the right situation. And there are plenty of materials out there to point the programmer in the right direction, not the least of which is Stroustrup's book. "The C++ Programming Language" is not only the ultimate definition and reference, it is chock-full of the whys and hows of various features and techniques, with wise advice on when and when not to use them, and how to use them effectively.
Just because C++, due to its power, flexibility, and rich features, allows you to blow your leg off does not mean that it's hard or a bad language. With C++'s great features and power comes great responsibility for the programmer.
It’s not a matter of nit-picking. Code-generation, metaprogramming, and domain-specific languages are very good implementation techniques. A lot of C++ people are pushing templates as a good way to implement those techniques. However, templates are a terrible way to implement those techniques. That doesn’t mean that templates are bad, or C++ is bad (both are good within a certain application domain, though I think people tend to overestimate C++’s applicability). What it does mean is that the modern C++ movement shows that programmers do want to use these techniques, and that the current tools available in C++ just aren’t good enough for that.
Very well put, Rayiner.
C++ is not the be-all and end-all; no language is. And, of course, C++, as powerful and flexible as it already is, could stand some improvement in certain areas, or gain better support via standard or vendor libraries. Your post eloquently points this out.
My post was in response to people who take the "C++ sucks" stance because a particular implementation of code generation, metaprogramming, or templates did not work for them.
The most powerful language for .NET is SML.
http://research.microsoft.com/projects/sml.net/
http://www.cl.cam.ac.uk/Research/TSG/SMLNET/
What does powerful mean in the context of programming?
Furthermore, under .NET, C++ = C#
Too much emphasis is on the programming languages and almost nothing on system research and development. A company like Microsoft can’t market open and accessible knowledge, so instead they market a new language and stuff that is irrelevant but is hyped.
What does powerful mean in the context of programming?
Furthermore, under .NET, C++ = C#
It means that whatever language or tool is being described as powerful enables you to do a lot of stuff, and do it very well. You can do a lot of stuff with C++, and I mean a lot:
http://www.research.att.com/~bs/applications.html
Actually, I think not enough emphasis is placed on programming languages. Good programming languages make good systems easier to write. For example, consider all the trouble caused by C's lack of memory safety. CPUs had to invent memory protection because C apps could arbitrarily write all over memory. Operating systems had to separate memory contexts to take advantage of this, and had to separate the memory spaces of system services from userspace services. This resulted in the following developments:
1) Complex VMs that have complex code for allowing protected sharing of memory.
2) Complex IPC mechanisms that try to reduce the performance hit of copying data around protected memory areas all the time.
3) Complex system-call conventions (eg: Linux’s virtual system calls) that try to reduce the overhead of applications making kernel calls.
4) Movement of system code into servers (eg: X, microkernel OS servers, etc) to protect system code from itself.
5) Complex mechanisms (stack-guard, NX bit, etc) to protect apps from their own memory corruption bugs, and to prevent those bugs from being used to break security.
Meanwhile, the proper solution was to fix the language. Get rid of C’s ability to write randomly over memory, maybe reserving some special calls for system-level code only, and you can get rid of all of that crap I mentioned above and make a simple, fast system that puts everything in one address space.
More generally, programming languages constrain not just what you can design, but how you think about designing systems. Poorly-suited programming languages lead to poorly-conceived system designs.
At the low level of system implementation, sometimes the hardware instruction layer has to support a new feature like context switching, but that is a good thing, not a bad thing. However, I wish there was much more unique hardware out there, and that can only come about through open source. The C language has its place at that layer; all of the hardware is a language layer as well, even the digital logic. Moving up the stack you have high-level languages like Perl, Python, etc. I'm just saying that there are plenty of choices, and I'm not impressed by any newfangled high-level language. The .NET/Mono stuff didn't even support multi-paradigm design for a long time, yet people were going ape over it, because it's an entertainment industry.
Maybe some of C should be implemented as hardware instructions. I think it has been a question of cost, but this would be better than having to learn a machine instruction set.
There are two basic problems with that idea:
1) C is actually a poor abstraction at that level. Modern hardware looks very different from what C exposes. For example:
– Modern hardware often has non-uniform memory access times, while C programs treat memory as a single entity.
– Modern hardware supports vector operations, while C doesn’t (natively).
– Modern hardware is parallel and out-of-order, while C assumes that code runs serially and in order.
For the lowest-level of the system, the ideal language (from a performance standpoint) for modern computers would actually be a safe, strongly-typed, concurrent language with functional elements. Memory-safety and strong typing allow for better optimization, while concurrency and functional features allow code to take better advantage of parallel execution units and deep pipelines.
2) By implementing a system using an unsafe language, you impose a massive performance penalty on the entire system. When users are allowed to write unsafe code, that means that you need protection between processes, and that brings into play all that performance-sucking functionality I mentioned earlier. When you don’t allow unsafe code, or run all unsafe code in a sandbox, you can get rid of those features.
For example, take window servers. Programmers put GUIs in userspace servers to protect the GUI from errors in user programs, and to protect the kernel from errors in the GUI. That means that certain things (eg: round-trips between the server and client) become relatively expensive. Also, instead of just sending pointers around, you have to copy data around. Lastly, you have to make some trade-offs (ie: should widgets be client-side or server side?) that you wouldn’t otherwise have to make.
Smart coding can reduce the performance penalty, but you pay for it in complexity and limited flexibility. If you wrote everything in a safe language (either managed code or native-code via safe compilers), you can get rid of that abstraction. You could just put everything in the same address space, and know that errors in one component couldn’t crash another component.
Books like Modern C++ Design, and the use of generic/template functionality to render complex abstractions, are beyond the average programmer's ability to use or comprehend effectively. That doesn't stop many of them from trying, and sometimes failing badly.
Those who call these techniques "hacks" and prefer preprocessor macros for code generation need to step away from the keyboard for a second and consider the expressive power these techniques offer.
To the extent that C# and the .NET platform generally are about dumbing down/homogenizing/automating the production of software, it is quite true that C++ doesn't fit.
The design goals of C++ and the design goals of the C# language are not the same.
And I say that C already is hardware from the point of view that software is a hardware metaphor. I don’t know if it is good or not.
Think of this, though: if you could have the ultimate language, let's say the English language at a high level, then how would that solve our problems?
And on another note, the design goals of C# and C++/CLI are the same, because they are both centered on making money for Microsoft. They were designed to be like Java, and to take back the market share that Sun was pulling away with. The design goal is to achieve that general thing (money) that doesn't solve any problem other than to be general.
If language were such an important concern, then are you saying that if we all learned French, the world would be a peaceful place?
For the last time. I’m not talking about preprocessor macros. I’m talking about procedural macros. In terms of expressiveness (as a number from 1-100), I’d rate them in the following order:
Preprocessor macros – 2: Very limited expressiveness, basically, text-replacement.
Templates – 10: Moderate expressiveness for a very limited number of situations.
Procedural macros – 100: Extremely expressive. In fact, any concept you can express in the host language can be used to express the intention of the macro. Want to grab a text-file off the network via SOAP + XML to implement your macro? No problem!
@Rayiner Hashem, great point. I’ll just chalk this one up as another reason to use managed code.
For the last time. I’m not talking about preprocessor macros. I’m talking about procedural macros.
Since it keeps getting misunderstood, an example: take the logical-or operator '||' (assuming for the moment that C and C++ didn't have it natively). There is no way in C to simulate it using functions or CPP macros, at least not with the right evaluate-once, short-circuit semantics; it could probably be simulated using templates, but at the price of a syntax different from all other operators.
In scheme you would write:
(define-syntax or2
(syntax-rules ()
((or2 a b) ; pattern
(let ((temp a)) ; template
(if temp
temp
b)))))
and then could use (or2 a b) as if it were a native language construct, with having ‘b’ evaluated only if ‘a’ evaluates as false. You can even use a recursive definition
(define-syntax or
(syntax-rules ()
((or) ; OR of zero arguments
#f) ; is always false
((or a) ; OR of one argument
a) ; is equivalent to the argument expression
((or a b c ...) ; OR of two or more arguments
(let ((temp a)) ; is the first or the OR of the rest
(if temp
temp
(or b c ...))))))
and get an or operator taking an arbitrary number of arguments: (or foo) would be accepted as well as (or foo bar baz).
(Scheme examples taken from http://www.cs.utexas.edu/users/wilson/schintro/schintro_130.html ).
Lars, the advantage Scheme has there is that it wasn't designed to be upward compatible with C for acceptance.
Sure, C++ is ugly around the edges. But you can take your C++ compiler and compile C code unless it’s deliberately broken in that regard (hi Linux kernel folks).
Sure, Bjarne could have written a much "cleaner" language than C++ instead, but that would have ended up like so many other "clean" but seldom-used languages... like Scheme.
No offense intended, but C++ never stepped up to the plate claiming to be the be-all, end-all. It claimed to extend C to become more useful and support more paradigms, like OO and generic programming, without breaking compatibility (too much). IMHO, it followed up on that claim quite nicely.
This is not a problem with C++, and it is not a problem with Scheme; there is no language solution. That's what I'm trying to say: these are not real problems, but invented ones. The C++ programmer will sleep easy tonight, and so will the Scheme programmer. But the individual who thinks a certain C++ library isn't good, or isn't designed well, or can't be designed well, has a problem he can't solve, because he has no control over C++ or the industry, or anything except himself and what he can type within time and other constraints.
There is a solution, and it has something to do with eliminating the problem for yourself, and for others who see it as being a problem, but not for those who don’t see or don’t care about it. Does that sound alright?
...but there's one thing that nobody has done yet. Nobody has prioritized control as an important accomplishment. One of the greatest things about open source is that it's a window into the future. You can gain a deeper insight into the present condition of the industry, and therefore into how the future is stacked up. You can also figure out how to knock it all down.
So you find out that you are all alone. Why didn't the C++ committee look at Scheme, put 2 + 2 together, and make a revision to the standard? Well, it's probably not even about that, or maybe nobody knows about Scheme's handling of macros. Nobody on the committee went to MIT. I thought this was a small world; what's happening?
Stroustrup answers his email. See what he has to say.
I said nothing about C++ being ugly or Scheme being nice (in fact I’m an enthusiastic C++ programmer).
I said: “this is what a procedural macro looks like”, and the example just happened to be in Scheme. And there’s nothing in the concept of procedural macros which makes them ineligible for the C++ we all love and cherish – nothing except a lack of imagination in its users.
And while I know the historic reasons for the omission of procedural macros from C++, it is a fact that modern programmers want them, as the 'hackish' use of templates demonstrates.
It’s not an invented problem. It’s a way to use legitimate implementation techniques that C++ doesn’t support properly. C++ is (admirably) a multi-paradigm language. It supports OOP, procedural, and generic programming. Why shouldn’t it support macro-based programming too?
Let me give you a concrete reason. The reason people are trying to get C++ templates to do macro tricks is partially because of people like Alexandrescu. He realized that a lot of the standard design patterns that programmers use could be codified in terms of metaprogramming. Thus, when a programmer wants to use such a design pattern, he can depend on an existing metaprogram instead of rolling his own. This achieves code reuse and eliminates a lot of tedious grunt work. Further, when another programmer comes along to maintain the code, he can be assured that the original program is specified in terms of well-known design patterns, because metaprogramming makes design patterns so much easier to use. The point of giving C++ proper metaprogramming support is so these sorts of things can be implemented cleanly instead of through all sorts of hard-to-read template tricks.
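To make that concrete with a toy sketch (my own illustration, not Loki's actual code), a pattern like Singleton can be written once as a template and then reused everywhere:

// Singleton, codified once as a template instead of hand-rolled per class.
template <class T>
class Singleton
{
public:
    static T& instance()
    {
        static T obj;   // constructed on first use
        return obj;
    }
private:
    Singleton();        // never instantiated; this is pure glue code
};

class Logger { /* ... */ };

// Every use of the pattern is now one well-known expression:
// Logger& log = Singleton<Logger>::instance();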
1) C is actually a poor abstraction at that level. Modern hardware looks very different from what C exposes. …
I think many top-notch developers might disagree with you on this, such as Dennis Ritchie, Ken Thompson, Bjarne Stroustrup, Charles Petzold, and Linus Torvalds, as well as the millions of developers who have successfully used C and C++ over the last 30 years, on all hardware platforms, for things like kernel development, embedded programming, APIs, GUI libraries, shells, userland utilities, editors, databases, compilers, games, large-systems development, and virtually any software imaginable.
C and C++ have achieved near-universal use and success in all kinds of software development, across all platforms, with no corporate marketing to push their use (as Java and .NET have enjoyed). C and C++ have been proven beyond a shadow of a doubt to be very effective tools in a wide range of applications. They are by no means perfect languages, but they have always gotten the job done, in a big way, and continue to do so.
It’s all fine and good to have theoretical qualms with a language and espouse some other real or “ideal” language. But with C and C++, the proof is in the pudding.
I highly doubt they’d disagree with me. They know how processors work, and they know enough to know that the C machine model is no longer a good mapping to how hardware really works. It was at the time it was designed on PDP machines 30 years ago, but machines look very different today.
Now, the fact that C is so popular for low-level work is a matter of inertia and popularity, little more.
No offense, but this exercise in comparing FOTM programming languages ignores the basic fact that software idioms are designed for a purpose. C++ templates were designed for generating type declarations, rather than for extending C++'s syntax the way Scheme macros can.
If I wanted to extend my compiler with my own syntax I would use a language like Scheme or Lisp.
I don't. I never have. In truth, I can't conceive of why I ever would, but that is most likely a failure of my imagination, and I'm very tired at the moment. However, I do want type independence in a library that thousands of other developers can use and understand, today. Therefore I use templates. They are a difficult abstraction, but once grasped they are extremely expressive. And once the abstraction is complete, it is well hidden from users.
It is true that compiler/tool support has never been all that it could be (error messages, etc.). It has improved markedly, but regardless, good (devious) template programmers can use templates themselves to provide more meaningful compile-time errors.
I'm not a reactionary, but all this FOTM language-hopping once a decade is silly. I am quite happy in this idiom for solving the problems I need to solve, and wouldn't trade it for the inherent semantic ambiguity of Scheme procedural macros simply out of academic curiosity or the desire to turn a quick buck through obsolescence.
Actually, by implementing libraries you already extend your language. And by using advanced template mechanisms, you _are_ using something very akin to procedural macros, just in a very complex and semantically ambiguous form.
Btw, it was one of C’s great inventions to implement a good chunk of “the language” in the form of library calls, and thus achieve syntactical coherence between the core language and your own extensions.
Java is faster than C++!