In this article, Bjarne Stroustrup (the father of C++) talks about the next version of the widely used general-purpose programming language. This next version, called C++0x for now, will most likely be finished by 2009. The article discusses new language features, for example concepts – which specify the properties required of a type and can be used with templates. Stroustrup also talks about new standard C++ libraries.
Doesn’t seem like much of an improvement to the standard? Am I missing something? Optional GC is pretty much nothing, when you could use the Hans Boehm GC quite easily.
-Ad
..or a language that doesn’t use old fashioned pointers to begin with
> or a language that doesn’t use old fashioned pointers to begin with
C++ is a multi-paradigm language so it needs pointers.
C++ is a multi-paradigm language so it needs pointers.
What do you need pointers for in multi-paradigm programming? There are plenty that do without:
http://en.wikipedia.org/wiki/Multi-paradigm_programming_language
You can be a multi-paradigm language without supporting pointers. It’s just a matter of which paradigms you want to adopt. I’d argue that the C paradigm of “memory is an array of typeless bytes” is largely useless today, as are pointers.
Yes, C++ is a systems programming language. You do need untyped unsafe access to memory for such a language. This does not mean you need pointers. To do systems-level programming, you need some external ASM anyway (to handle things like writing to I/O ports, or for things like returning from interrupts). So it doesn’t do any harm to stick a “put_byte()” “get_byte()” interface into that external ASM, and do away with pointers entirely!
Edited 2006-01-03 16:34
When doing systems programming the overhead from doing get_byte() and put_byte() 100,000 times instead of dereferencing pointers would actually matter, you see.
Are you sure there is no way the compiler could recognize your simple function and get rid of the function call overhead by inlining it?
I’d like to see a forced inline in C++. If it can’t inline: error. But don’t say this:
<<std-alocator, 1> < dogs < cats < sleeping < with < lions < and the end < is < coming >>>>, std-allocator > something didn’t work dude!
Say this: failed to inline function [name] at [line#]
I suppose that’s implementation-dependent. So redirect that comment to RMS.
Even if it were inlined get_byte() and set_byte() would only end up being a more restrictive syntax for pointers. After all, what do they have to take as a parameter? That’s right, the address to get from or set to. Or in other words, a pointer.
..or a language that doesn’t use old fashioned pointers to begin with
Use of pointers in C++ is officially discouraged in favor of references, which have been supported for 10+ years.
References in C++ are essentially pointers, except you can’t do pointer magic with them. Pointer magic is what leads to pointer hell, most of the time. Unless of course you are a really good C programmer.
But since references in C++ can’t have null values, pointers are really hard to avoid for private interfaces. For public API’s though, they are easy to avoid. References also give you the opportunity of declaring your parameters const (i.e. “const MyType& p”).
There is a general misconception that C++ is an ugly language. It definitely isn’t; it just doesn’t force its users to be good programmers. C++ can be more elegant than Java and C#, even Python, if done properly. The problem is there are so many features and no restrictions, so you need to know what you’re doing to do anything non-horrible.
– Simon
Its ugliness comes from “premature optimisation”. While I think that a language should provide manual memory allocation, unchecked array references, etc., it shouldn’t do so *by default*.
By default a GC should be available, array references should be checked, etc., and only when the programmers have measured that the performance is not satisfactory, and that changing the algorithm is not enough, should they remove the safety features on the (usually small) parts where it matters.
In C++, it’s the opposite: you have to jump through hoops to use a GC, for example.
Plus, a language so complex that compiler writers have a hard time implementing it is definitely ugly in my book.
The author clearly defined the focus of the language.
It’s “systems programming”, not business application development.
It’s good that they’re improving on the present strengths of C++ instead of playing the CLI/Java catch-up game. It’ll allow C++ to be positioned more firmly in its current “niche”.
Not everyone has to be obsessed with world domination.
Won’t computers be able to program themselves by then?
In all seriousness, 2009 is 3+ years away, which is an eternity in computer years. Stroustrup just doesn’t seem to get that C++ is a dying language (outside of compsci at least), and should use his talents to design a new language which is…well, not so 20th century. Sometimes you just need to let stuff go, and get in with the new.
C++ is not a dying language.
There are lots of applications for which no other language is better suited.
Most of the more recent languages are memory and cpu hogs, and don’t get me started on garbage collection.
Not a dying language, agreed.
But try to develop a non-trivial application with a team of 10+ developers, and you are likely to use constructs and techniques such as virtual member functions and reference counting, with mutex protection for the count when multithreading, etc.
This consumes inordinate amounts of CPU and, in fact, converges toward the CPU usage of a similar Java program.
Suddenly, garbage collection becomes less of a cpu hog.
The memory situation is more clear-cut, but again, when you have a long-running application (server stuff), the process reaches its working-set memory usage and stays there. All you have to do is have enough memory to support the working set.
Development effort is also greater in C++, and it takes longer to become proficient at C++ than at Java.
But when you need low level programming, C++ definitely cuts it!
> Stroustrup just doesn’t seem to get that C++ is a dying language (outside of compsci at least)
C++ even starts to be used as a hardware description language (SystemC). It is not what I would call a dying language…
3 years is an eternity in ‘hardware years’ but it is nothing in ‘software years’.
In all seriousness, 2009 is 3+ years away, which is an eternity in computer years.
We love to pretend that, but no. 3 years are nothing, or why are we still using Unix systems written in plain C (and still hype them in the forms of Linux, Solaris and OS X)?
hah. I’ll believe that when 90% of the computer users in the world are running Microsoft’s Singularity.
Not gonna happen.
I believe their time should be spent designing a new C++ from the ground up. All the mistakes made in C++’s original design should be fixed. A simpler syntax should be chosen. I certainly don’t know even half the things which C++ allows, simply because there is only a certain amount of syntax I can remember, and when I do use the more advanced (obscure) abilities of the language it just produces obfuscated code, because it’s not what I or other people are used to seeing.
GC should be an optional part of the language. Templates should be more like generics. I feel (although I’m sure people will disagree) that virtual methods should be the default and non-virtual should be an option. And I’m sure there are many, many things I don’t even know about which should be changed. I personally would like a way of intercepting method calls to objects, like in Objective-C.
Language design has come a long way; there is a need for a “low level” OO language which has been as well thought out as Java was. Lessons can be learnt from the design of Java and Ruby and Python and all the other languages, and incorporated into a new, cleaner, fast language. Yes, backwards compatibility will be a huge problem, but I think it is a necessary evil. Languages such as Python and Java have done fine having no backwards compatibility with other languages.
I agree. Back in the mid- to late 90’s it would have been fair to call me a C++ expert.
Operator overloading, method overloading, and multiple inheritance already made the language pretty complex. But then when templates were introduced, it became just unmanageably complex.
For my programming nowadays, I almost always use Python. If I’m really, really concerned about speed I’ll use C++, but not for anything very complex. If I have to write a really big, complicated app these days, I’d probably go for Java or learn C#.
I don’t think the complexity of templates comes from their syntax, but more from the fact that when doing advanced template stuff, the code rapidly grows very complicated and hard to follow.
I do think that the concept checking they plan to implement may help here, as a lot of the complexity in templates comes from the absence of such checking. First, because it causes even simple errors with template-based libraries to be detected deep in some internal templates, resulting in very cryptic error messages.
And then because a lot of the complexity of, for instance, the STL is caused by the need to provide different implementations of some things depending on, for instance, an iterator type, which is currently done in a complicated and convoluted way involving a lot of template magic to work around the absence of constraints on template type parameters.
I guess template based implementations can become much simpler thanks to this.
As for virtual functions as default… Please, no. One of the strengths of C++ as a low-level language is that you don’t pay for things you don’t use, and in most cases templates allow for compile-time polymorphism where you don’t need virtual functions.
Being able to opt in to solutions that have extra overhead, like virtual functions, rather than opt out, is crucial for a lot of applications.
You don’t want to implicitly get a 4-byte overhead whenever you define a class, or whenever you happen to add a member function to a struct.
These are often small objects meant to live on the stack, to be passed as values, or to be stored in arrays.
C++ is not for everyone. Even though I very much like the flexibility of C++, I do know that I misuse it often. If you are C++ material, you use it or you leave it for some other language. But for some tasks you won’t find any help in those beautiful languages such as Java or Python. That’s why they can be … fine having no backwards compatibility with other languages
I agree that C++ can be made easier to learn and use; the author agrees, and C++0x is moving in that direction.
I find Stroustrup to be one of the most insightful people I know of. His focus on better libraries, and on features that change the way people design (not write) their code, is simply brilliant.
If the C++ community in general focused more on better libraries, as Stroustrup does, I think it would have been a more pleasant language to use. There are of course lots of good libraries out there, but there could have been more.
Managed libraries and even languages don’t replace good coding practices in any scenario, although they help out a lot.
C++ has caught up with the managed approach (better libraries) just as fast as PHP has turned into PHP 5. I firmly believe that interfaces are more for web use, because of their stability in non-accurate scenarios, and pointers for the desktop.
C++ is a beautiful language and C is great. Most C coders will tell you they can’t live without pointers. Using C++ and coming from C#, I can tell you that C++ looks a lot better than C# because you can break up your code more and design it a lot more tightly with a lot fewer restrictions. Also, you feel a lot more comfortable not having to open up a large text editor every single time. I sometimes don’t even compile, because the low restrictions create fewer compiler errors.
.NET is fine mainly for the Internet, which is what it was originally touted for. Most UIs will probably be Internet-based soon anyway, as they already are becoming. For me, I am more into PHP for the Internet and C/C++ for systems and their accompanying UIs with GTK, Qt, and FOX.
“…concepts – which specify the properties required of a type and can be used with templates.”
Is this a form of type safety?
I would say it is a form of “meta-type safety”. It is basically type-safety at the template level.
Currently, when a template wants a type, you can throw just about any type at it.
It is still type safe because C++ is type safe, but if, for instance, you give a string where an iterator is expected, you will not get a straightforward error like “you need to pass an iterator here”, but instead some cryptic one, possibly deep in some internal implementation template, stating that some operation cannot be done or that there is no operator defined that does a particular thing.
This makes template programming look more complicated than it actually is, because even simple mistakes give you complicated-looking errors that can be hard to unravel, especially if you’re not used to sorting through the garbage printed by the compiler.
for (auto p = v.begin(); p!=v.end(); ++p)
cout << *p << endl;
What about
for (auto x : v)
cout << x << endl;
?
C++’s biggest problem is C programmers. I have seen so many .cpp files that were littered with C string functions or C-style casts.
The problem with c strings is that the STL uses them…
ifstream::open(const char *fileName);
It should be using string, that’s a standard part of the STL which would take people’s old const char * and convert it anyway. But it’s not. So, you end up using string and then realizing a few things:
1.) It’s using c strings internally.
2.) It’s not any faster. It doesn’t try to improve the efficiency of many concatenations.
3.) It converts “easily” between c string and string.
4.) It’s not safe, it doesn’t check for over-bounds requests and writes.
Why use it? Because it might have an int indicating size which saves time for people who can’t help but do this: for (int c=0; c < strlen(str); ++c); instead of for (int c=0; str[c]; ++c)?
If I did c++ more often I’d happily use string because I think it can be something useful and good. And who knows, maybe non-gnu implementations of the STL are better? But I can completely understand why people aren’t zinged on STL strings: They’re nothing special. They’ve been, for me, so much less than they can be.
Even though I hardly use C++ anymore, I have to object to the claim that it is hard to learn. Accelerated C++ is one of the best “learn how to program” books I have ever read.
It was said that C++ focuses on system-level programming. How about games? I think it is pretty dominant in that sector as well, isn’t it?
“It was said that C++ focuses on system-level programming. How about games? I think it is pretty dominant in that sector as well, isn’t it?”
It is. And actually, the article does include “game engines” in the list of what he considers “systems programming”.
They’re wrong on their first guideline. C++ is a complicated, horrifically obnoxious, mediocre, half-done, attempt at improving c.
C# is really a much better successor.
Does this mean the next c++ will be a proper superset of c so that it actually works with all c headers and c code?
I really hope c++’s time is coming to a close. Many of the things people love to do in c++ can be done in higher level languages like c#, java, and even python. I suppose most people will probably still want it for things like gaming; but who knows maybe that will even move to things like c# (which lets you use unmanaged memory if you really really want).
But maybe I’m just completely ruined on c++?
“I really hope c++’s time is coming to a close. Many of the things people love to do in c++ can be done in higher level languages like c#, java, and even python. I suppose most people will probably still want it for things like gaming; but who knows maybe that will even move to things like c# (which lets you use unmanaged memory if you really really want). ”
I don’t see games going to C# or anything similar anytime soon. All those JIT+GC languages rely on the “who cares, storage is plentiful and CPUs are powerful” principle.
Games need to do lots of time-consuming and varied stuff, fast, in real time, and often with limited amounts of processing power and memory (game consoles).
A language relying on techniques like GC and JIT that are not deterministic in terms of execution time and memory consumption is a big no-no here.
Also on the server side, there is still a place which isn’t going away from C++ anytime soon: MMORPGs. These are highly interactive, real-time applications with lots of concurrency, and you can’t afford to let anything except rendering be handled exclusively by the client, which means that the server basically has to run a full-fledged version of the game without rendering, but with collision detection, etc.
Add to this that the amount of network updates to be sent back and forth grows quadratically with the number of players gathering in sight range, and that a lot of recent games provide each player with their own instance of some parts of the game world, complete with their own monsters and NPCs, and scalability both in terms of CPU and memory would become a nightmare very fast in a JIT+GC language.
A language relying on techniques like GC and JIT that are not deterministic in terms of execution time and memory consumption is a big no-no here.
All languages are non-deterministic in terms of execution time and memory consumption. It’s called the Halting Theorem
Seriously, though, even C and C++ are practically non-deterministic in timing, especially on a modern OS. Calling malloc() on a GC’ed system might invoke a long garbage collection, while at the same time invoking it in a non-GC’ed system might cause the VM to decide to page something out. Moreover, in order to make manual memory management, well, manageable, C/C++ programmers tend to use techniques like reference counting and freelists, which can be massively non-deterministic.
No matter what language you’re programming in, if you want even soft guarantees about timing, you have to avoid memory allocation in your critical loops. If you follow the guidelines for writing real-time code, you can do it almost as easily in a language with a good GC as in a manually-managed language.
All languages are non-deterministic in terms of execution time and memory consumption. It’s called the Halting Theorem
This is incorrect. The relevant meaning of the halting problem here deals with the existence of a general method for proving termination in programs given a Turing Machine. It is trivial to define languages for which it is not possible to express computations that do not terminate. You are familiar with many, presumably. Secondly this does not mean that it is impossible to prove whether an algorithm terminates or not for any set of inputs, which is something I would hope any of the people here with degrees in Computer Science do as often as possible in appropriate situations.
The general-discussion about the determinism or precision of predicting memory allocation and deallocation is too broad to discuss precisely.
You’re right. I should have been more careful. All Turing-complete languages allow non-deterministic programs to be written. The point being that C++ and Java, both being Turing-complete, are each equally non-deterministic. Happy now?
All languages are non-deterministic in terms of execution time and memory consumption.
It depends how you define deterministic. There’s one huge feature missing in “modern” languages such as Java or C#: Deterministic destruction. Although GC takes care of your memory, it doesn’t help you with your resources. In C# you’re forced to manually Dispose your objects that encapsulate resources or unmanaged objects, or else who knows when the resource gets freed. That’s what makes the language non-deterministic.
C programmers are used to cleaning up after themselves, and there are no language exceptions in C (only Access Violation, Divide by Zero, which shouldn’t be exceptions in C++ either). However, in C# there are language exceptions, and yet there is no support for deterministic destruction, which is a very dangerous combination, which will certainly lead to a disaster in the long run. Yes, there is the using keyword, but it’s only for local objects. Try to store objects in a container, and you’re immediately out of help. Basically you’re forced to create your own version of containers every time you need deterministic destruction, just because the language doesn’t know the concept of destructor.
If used correctly, C++ is very safe and convenient. You just have to use boost::shared_ptr, instead of messing with raw pointers. That eliminates the need for garbage collection and all the mess that the lack of destructors introduced in C# and Java. As far as complexity goes, you just have to buy books that start the discussion with std::vector and boost::shared_ptr, not with char*.
It doesn’t mean native pointers need to be eliminated altogether. In image processing they’re extremely helpful. The array[i++] syntax requires 2 registers, while the *p++ syntax uses only 1, and it makes a huge difference. The Intel Pentium architecture is horribly lacking registers, so in a double or triple embedded loop one can save a lot of CPU cycles by minimizing the number of variables.
C++ is not for everyone, and it’s not perfect, but C# and Java are so not well thought out that they’re absolutely no replacement for C++ for my programming, and they won’t be anytime soon. Come on, there is not even const member function support in C#. You can write trivial 5-line event handlers in C#, but I would immediately reject the idea of using that language for anything inherently complex. It’s a disaster waiting to happen, for not supporting the const keyword, and not having deterministic destructors in the language. If I want to mix the advantages of C# and ISO C++, my best bet is C++/CLI.
It depends how you define deterministic.
There is only one way to define it!
That’s what makes the language non-deterministic.
While that is true, I should point out that with C++, implementations of malloc are non-deterministic in that they can walk arbitrarily long free-lists on free(). Also, techniques like references counting can cause arbitrarily long deletions, resulting from reference chains. Thus, both types of languages are non-deterministic, and you’re reduced to arguing about which one is more usefully deterministic.
If used correctly, C++ is very safe and convenient. You just have to use boost::shared_ptr, instead of messing with raw pointers.
boost::shared_ptr is non-deterministic. A given deletion can result in a walking of the entire heap, just as with a GC. More importantly, all the extra arithmetic during pointer copies results in boost::shared_ptr being quite slow, in terms of mutator throughput, in comparison to a good GC.
It doesn’t mean native pointers need to be eliminated altogether. In image processing they’re extremely helpful. The array[i++] syntax requires 2 registers, while the *p++ syntax uses only 1, and it makes a huge difference.
It’s interesting you mention this. What you’re suggesting is actually generally a performance loss. See AMD’s optimization guide for the K8 processor: http://www.amd.com/us-en/assets/content_type/DownloadableAssets/dwa…, pages 10-12. They point out that using the array notation is preferable to using the pointer notation, because it reduces possible aliasing in the code. Modern optimizers can deal with structured constructs like arrays much more easily than with unstructured constructs like pointers (the same goes for goto vs. functions, actually), so it is generally a performance win to use such constructs when possible. A halfway decent code generator can easily deal with the register usage of pointers versus arrays (and indeed, can convert the latter to the former after optimization). However, even a powerful optimizer can be stymied when trying to do things like loop-nest optimization or auto-vectorization in the face of pointers.
Many GC/JIT languages provide some non-managed functionality. Some even provide hooks into faster languages.
I’m not recommending this style of development for games. But who knows, maybe someone will like it and begin using it.
The problem with managed languages and games has nothing to do with efficiency. Efficiency is fine; this has been shown time and again. The problem is that GC cycles can be devastating to real-time-dependent things like video games. It’s not that GC is so much slower than malloc/free; it’s that it does everything at once, whereas malloc lets the programmer know where he’s doing his work (although he’s still not sure how expensive that malloc will be, he at least knows it won’t be so expensive that it’ll take half a second; GC could be).
Maybe you’re leading to this. But it’s not speed that’s the problem, it’s availability.
Anyway, games are only one class of applications in development today. C++ may be the language for them. But for the other 97% of developers out there …
C++ is a general-purpose programming language with a bias towards systems programming that:
* is a better C
I’ve used a lot more C++ than straight C in my days, but I think a lot of people would disagree with that blanket statement…if I’m reading him right.
There are *lots* of programmers that appreciate the simplicity of C for systems programming, and would rather write low-level libraries in straight C and move way up to Python or Ruby for “scripting” of the application. C++, with all its warts, also brought in lots of complexity.
I still think that C++ is great at certain things though, games being a major one and certain types of systems programming, but C++ bindings have always seemed to be a bigger pain than C bindings because of the ABI. Also, look at how C++ is basically dead on the server side except for hardcore, Google-like systems.
People complaining about C++ complexity, then saying Java is a cleaner, easier language, should try coding in J2EE, particularly EJBs. Now that’s the ultimate in complexity.
C++ has a healthy future ahead of it. It is particularly good for a lot of systems-level programming, games programming, GUI libraries (see the awesome Qt and GTKmm), and over-the-counter shrink-wrapped software.
C++ has been completely ousted in corporate computing, once a C++ stronghold, by Java/J2EE (for lots of reasons), as well as other higher-level interpreted/managed languages like C#, Perl, Python, etc. The ever-dynamic nature of customized corporate apps demands higher-level interpreted/managed languages for the greater productivity, fewer bugs (no memory management), and better security (the sandbox of the interpreter/virtual machine).
Also, I suspect that a lot of desktop software will migrate from C++ to Java and C#, but those languages won’t replace C++ in those arenas because they won’t deliver C++’s speed or lighter resource usage.
As for operating systems and hardware drivers and embedded programming that is and will probably remain the domain of plain old C.
Small to medium sized web apps are the domain of PHP, Ruby, ASP, Perl, etc, while the really big web apps are the domain of J2EE.
So, as usual, it’s the best tool for the job, and C++ is and will remain a very viable tool for particular jobs, just like all the other languages (which would be better for other jobs).
GTKmm is a wrapper. GTK is coded in C.
I used to use C++ a few years ago, and I found it *tolerable* if I followed these guidelines:
– Avoid any other sort of inheritance besides public.
– Avoid multiple inheritance and virtual base classes.
– Avoid fancy template macros (is that what they’re called?).
– Avoid operator overloading (unless you can prove it will *really* make your life much easier).
– Don’t overload new, new [], delete, delete [], and don’t use placement new.
– Avoid RTTI.
– Avoid exception specifications. However, of course *do* use exceptions in general.
– Don’t use unnamed enums.
Using C++ that way, it was still pretty complex, but way nicer than straight C.
These days though, life is much sweeter just using Python.
IMO, any changes they consider making for C++0x should be to simplify and slim down the language.
I agree with all of these, and must add a few of my own:
– Stick to the STL and the Boost libraries when possible.
– Use automatic variables when possible, use smart pointers otherwise.
– Prefer const variables to #define constants.
– Avoid preprocessor macros in general.
– Use references instead of pointers.
Following these guidelines, C++ can be a relatively pleasant language to program in. It’s quite limited in what it’s capable of, and it’s still a syntactic hassle, but it’s bearable. Fortunately, I don’t have to do much C++ any more, and can stick to Lisp and Python, with the occasional Matlab hack thrown in for good measure.
The problem with c strings is that the STL uses them…
ifstream::open(const char *fileName);
“ifstream” does not belong to the STL.
It should be using string, that’s a standard part of the STL which would take people’s old const char * and convert it anyway. But it’s not. So, you end up using string and then realizing a few things:
[…]
4.) It’s not safe, it doesn’t check for over-bounds requests and writes.
You already stated this four weeks ago. And you have already been corrected at that time. That you still post this falsehood is annoying.
You can read up what the facts are in the C++ Standard, beginning at part 21.2 “String classes”. Page 380, approximately.
The rest of your post is littered with strange statements too. But I am not so quick to call them wrong, because you might be using uncommon definitions of words (like “c string”, “many concatenations” and “easily”), so that under your own definitions, your statements are true. It must suffice to say that probably many people do not share your definitions.