“The recently finished C++ ISO standard, with the working name of C++0x, is due to be published this summer, following the finishing touches to the ISO spec language and standards wonks agreed upon in March.”
A better summary of the new features:
http://en.wikipedia.org/wiki/C%2B%2B0x
Also, GCC support:
http://gcc.gnu.org/projects/cxx0x.html
Edited 2011-06-15 08:38 UTC
While I like the additions on their own I am not sure I like C++ as a whole.
What I would have liked to see is a very strong “C++: The Good Parts” recommendation, backed by a lot of commitment, and one that goes way beyond what current guidelines say. 0x would have been a great time for that.
I am not sure that _I_ would start _new_ projects in 0x when fairly mature and much simpler solutions like Go exist for a lot of similar problem spaces.
If D featured Go-style interfaces, or Go featured D-style template modules, I’d be sold. Sadly, neither is true.
There is nothing spectacular about Go interfaces besides not having to write an “implements” somewhere.
I like the language, but if you know your way around programming with interfaces and protocols in other languages, you will easily find similar features.
The thing is that I feel it increases the flexibility of the language, since a change to an interface doesn’t require the implementing class to change accordingly.
The bit about not having to write “implements” is in some ways the same as the “auto” keyword in C# and D: it saves a LOT of retyping and improves interface-implementation segregation.
New interfaces can also be written as per one’s requirements and the older classes needn’t implement it. In general, we write wrappers to take care of this.
IIRC, most of the C++-inspired languages don’t feature the method renaming or deleting abilities of Eiffel, which is where the strength of nominal typing lies. Without those, it’s a constrained form of Go-style interfaces, since the method signatures have to be the same.
Since when? It is like any other programming language
that supports interfaces. If the interface changes somehow, the methods of the types that implement the interface need to be modified accordingly.
Same thing as in any other class with interfaces. I fail to see your point here.
Go is a very nice language, I play a lot with it and have even contributed changes back. But let’s not confuse the language’s marketing with facts.
In particular I’m talking about the “interface segregation principle”. If linked via interfaces, then yes the changes are necessary. But if there are multiple interfaces or if the function accepts the implementation itself, then no changes are necessary.
How is it the same thing? You don’t have to write (most) wrappers in Go to fit older classes into newer interfaces…
FWIW, I was wondering what it’d be like if C++’s (now postponed) concepts could be adapted to runtime somehow. Go’s structurally typed interfaces come close, so naturally I find them interesting.
Edited 2011-06-16 22:36 UTC
Do not wish for that: it would be a large, open and welcoming avenue for bugs and compatibility problems. Imagine this “feature” in a plugin environment: more nightmares than you want.
Well, I’d invoke Murphy’s law: bad designs would spring up either way.
People end up writing bad code be it C++ or Ruby
I’ll be honest though, I often keep modifying my interfaces rapidly but modify the classes much more slowly
Really? How do you add a function to an interface without having to add it to the implementing classes? How can you change a parameter type, or change a function signature without adapting the code that depends on it?
Class C1 implements A1, A2
make change to A2; C1 still works with A1
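A minimal Go sketch of what I mean (all names hypothetical): C1 satisfies both A1 and A2 purely by its method set; if A2 later gained a method C1 lacks, only A2 call sites would complain, while code using C1 through A1 keeps compiling untouched.

```go
package main

import "fmt"

type A1 interface{ Hi() string }

type A2 interface {
	Ho() string
	// Uncommenting a new method here would break only A2 call sites:
	// Extra() string
}

type C1 struct{}

func (C1) Hi() string { return "hi" }
func (C1) Ho() string { return "ho" }

func main() {
	var a A1 = C1{} // no "implements" declaration anywhere
	fmt.Println(a.Hi())
}
```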
“And…”
Why don’t you finish your sentence? What happens when some code that expects the changed A2 is given an old C1 that doesn’t have the change?
And how is that an advantage to have C1 drift away from A2? I guess C1 being an implementation of A2 was motivated by a need. What happens to that need?
I think you are just struggling to justify the unjustifiable. Give us a working example of what you are advocating.
Um, I didn’t say “and..” at any point. My bad though, I should’ve elaborated.
The advantage is that there’s no real disadvantage in exchange for improved flexibility.
Classes themselves exist because you want to construct objects that adhere to an interface.
Interfaces exist to facilitate modularity i.e., they exist in different modules.
The way I see it (right now, at least), the only case where “C1 implements A2; A2 changes” is an issue is when an object of C1 is passed to a method accepting A2 (say, M1) in an invocation (I1). Then:
#1 I1 and M1 are in same module (which implies recompilation either way)
#2 I1 and M1 are in different modules (in which case, only I1 needs to change, not C1)
Mind you, I’m not saying “there’s no case where a class MUST implement an interface”; however, that is better handled by compile-time annotations than made a language restriction.
I don’t think a working example actually exists where an interface hierarchy *is* required. However, a simple example for me would be a LinkedList implementing both Stack and Queue.
In reality, though, a LinkedList does NOT have to implement Stack or Queue. Also, even the slightest change necessitates an unneeded recompilation. On top of that, the size, dependencies and complexity of the class increase.
Edited 2011-06-18 00:37 UTC
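The LinkedList example above can be sketched in Go (names are hypothetical, and the list is a slice for brevity): List is usable as a Stack or a Queue because its method set matches, without ever declaring, or even knowing about, either interface.

```go
package main

import "fmt"

type Stack interface {
	Push(int)
	Pop() int
}

type Queue interface {
	Push(int)
	Dequeue() int
}

type List struct{ items []int }

func (l *List) Push(v int) { l.items = append(l.items, v) }

// Pop takes from the back (LIFO).
func (l *List) Pop() int {
	v := l.items[len(l.items)-1]
	l.items = l.items[:len(l.items)-1]
	return v
}

// Dequeue takes from the front (FIFO).
func (l *List) Dequeue() int {
	v := l.items[0]
	l.items = l.items[1:]
	return v
}

func useStack(s Stack) int { s.Push(1); s.Push(2); return s.Pop() }
func useQueue(q Queue) int { q.Push(1); q.Push(2); return q.Dequeue() }

func main() {
	fmt.Println(useStack(&List{})) // LIFO: 2
	fmt.Println(useQueue(&List{})) // FIFO: 1
}
```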
You can do the same in any language that supports the interface concept.
Plus Go does not provide any solution for collisions, when your change to A2 conflicts with A1.
From what I can remember, only Eiffel (and descendants) provides a complete solution for the problem of method-collision.
And if you do the same in other languages, it will or may cause a runtime or compile-time error. Why? Because unless we’re also compiling code where you pass a C1 object into a method which accepts an A1 object, you are less likely to face an issue. But in Go, no strict declaration is required that C1 *must* implement every method of A1 or A2.
Then I have an exercise for you. Remove the comment from the interface A2 and make the code compile according to your description.
package main

import "fmt"

type A1 interface {
    SayHi(int)
}

type A2 interface {
    SayHo(string)
    //CompileError(int)
}

type C1 struct {
}

func (self *C1) SayHi(count int) {
    fmt.Printf("C says %d\n", count)
}

func (self *C1) SayHo(name string) {
    fmt.Printf("C says %s\n", name)
}

func UseA1(val A1) {
    val.SayHi(30)
}

func UseA2(val A2) {
    val.SayHo("John")
}

func main() {
    c := new(C1)
    c.SayHi(25)
    c.SayHo("Mike")
    UseA1(c)
    UseA2(c)
}
That’s the exact point I’m making.
You tried to pass an object of C1 to an interface of A2.
That is, the class itself has no requirement to adhere to any interface. *Why* should an object of C1 implement A2?
Edited 2011-06-18 14:02 UTC
C1 does implement A2. Otherwise the compiler will throw an error.
I give up discussing Go with you.
I am one of the guys improving Go support on Windows. What have you done with Go so far?
So? What concerns the programmer is whether an object of C1 works in a method accepting A2.
Tell me why this in any sense relevant to the discussion. Or are you trying to say that since you are doing some work, my point is irrelevant as compared to yours?
That’s a pretty big thing IMO. Especially if you’re working with code you can’t change.
Why? The code you write to support an interface needs to all be in the same package.
It is not possible to add interface support to a type that lives in another package.
No it doesn’t…
I mean, I implement Java’s interfaces to get access to functionality. Java’s interfaces (and the classes that operate on these interfaces) are in a different package than my package, aren’t they?
You did not understand my remark.
You can only add methods to types inside the same package where the type is defined.
So if the interface is defined in package A, the type I am using is defined in package B, both I don’t have access to the source code.
Now if the interface changes, you cannot add the new methods to the type without modifying package B as well, because you are not allowed to add methods to your package-B type from package C, for example.
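The usual workaround, sketched below with hypothetical names: you cannot attach methods to a type from another package directly, but you can embed it in a local wrapper and add the method there (here the “foreign” type is faked locally so the example runs standalone).

```go
package main

import "fmt"

// Buffer stands in for a type defined in some other package,
// whose source we cannot modify.
type Buffer struct{ data string }

type Stringer interface{ String() string }

// myBuffer is a wrapper defined in *our* package, so we may
// attach methods to it; Buffer's fields are promoted via embedding.
type myBuffer struct{ Buffer }

func (b myBuffer) String() string { return "buf:" + b.data }

func main() {
	var s Stringer = myBuffer{Buffer{data: "x"}}
	fmt.Println(s.String()) // buf:x
}
```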
Wait, what? I thought Go (or any language with open classes) allows you to do that; keeping structure and behaviour in different packages.
Which proves you have not written Go code…
Of course I haven’t. Where did I ever claim I do? That said, you are yet to show me a case where open interfaces are less preferable to strict interfaces.
Turns out I wasn’t wrong. Casting does take care of this issue. And you said *you* coded in Go?
I too would like to see a much simplified C++ design, instead of even more features added with few things removed. But I fear that we each would like to keep/remove a very different set of features, sort of proving Stroustrup’s point in creating a monster-sized “multi-paradigm” language.
As an example, I like that C++, like C, is good for all OS layers, provided that you are cautious with some features. But many other persons would probably prefer something that’s fine-tuned for application-level development, and totally drops the power and control needed for low-level use in favor of extra comfort.
I doubt it will happen any time soon.
Personally I would like to have a more type-safe systems language available, joining low-level power with high-level expressiveness.
Sadly, most of the languages that tried to displace C++ never fully managed it.
The main reason is that there isn’t any other multi-paradigm language with C++’s power currently available. Maybe Ada would be one, but I can’t see anyone wanting to jump ship to it.
And tool vendors are slowly bringing their C++ tooling to the same level we already have in managed languages.
The LLVM project has already shown how much error reporting and static analysis can be improved if the compiler is integrated into the tooling.
What would you call more type-safe? Something where you can’t joyfully cast a pointer into an integer of the same size and vice versa?
I think that it’s more about legacy. C++ code is everywhere. Powerful C++ compilers are everywhere, and target everything. Documentation about the internals of C++ code generated by various popular compilers is also everywhere.
If I wanted to implement an OS in, say, Pascal Object, I think the most painful part would be missing low-level documentation, not some intrinsic language inferiority.
Yup, I’ve seen it too. Pretty impressive, hope that the GCC team will start to work on it too at some point.
Edited 2011-06-15 12:48 UTC
Classic Mac OS was partially written in Object Pascal. That’s why it was so awesome when introduced, and awesomely bad when replaced by OSX. I think there is a law of the universe somewhere that says Apple is not allowed to use a commonly used programming language for its API.
Actually, earlier versions of classic Mac OS, up to 6 or 7, were written in a mix of assembler and straight Pascal, not Object Pascal. Object Pascal was pretty much a Borland thing (it was a really nice language, though).
Later versions (when PPC was introduced I think) were transitioned to C, but they still used Pascal calling convention for compatibility.
NeXT / OSX is a lot cleaner than classic Mac OS. Give Objective-C a try; I think you might like it.
Get your facts straight. Object Pascal was created by Apple for the Mac and later on adopted by Borland.
You can read about it here,
“Revolution in the Valley: The Insanely Great Story of How the Mac Was Made”
http://www.amazon.com/Revolution-Valley-Insanely-Great-Story/dp/059…
OK, my bad, thought ObjectPascal was just Borland.
Never wrote Object Pascal on old Macs, but wrote a few straight Pascal progs on system 6
I do not think Borland’s Object Pascal (a part of Turbo Pascal) is the same as Apple’s Object Pascal, they just share the same name. Borland’s Object Pascal was later renamed Delphi.
I have, it’s nice. Much better than classic Mac OS. I didn’t mean to criticize it; it’s just different, in the same way that Classic is.
Object Pascal a Borland thing?
The Borland version/dialect is nowadays perhaps the most known “Delphi”, but there is/was an Apple version/dialect of Object Pascal as well.
Exactly! The type of thing that has brought us buffer overruns and pointer errors along with all the related security issues.
Algol, PL/I, Oberon, Modula-2, Modula-3, Ada are all system programming languages done right.
Unfortunately many programmers don’t like static typing that much.
Maybe Go, D or Sing# will manage to change this, but I doubt it.
It is called Object Pascal, and the first version of this Pascal dialect was used by Apple to create the original MacOS.
But then how would you code a memory allocator, which manipulates integer memory locations at the core and returns object pointers at the end, in such a language?
Or how would you operate things like paging hardware?
At the core, for the hardware itself, a pointer is just a long integer. Making a very strong distinction between pointers and integers, without allowing one to switch between the two on occasion like C does, could make low-level hardware manipulation quite difficult…
Thanks and sorry for the confusion. You see, French programming manuals have gone crazy enough to translate the language’s name as “Pascal Objet”, and I remembered it as “Pascal Object”, even though retrospectively it indeed makes little sense.
Edited 2011-06-15 15:30 UTC
Usually there are two possibilities. In some cases it is relegated to an assembly module that provides such operations. But in most cases these types of operations belong to a virtual package/module named system/unsafe/etc.
I think you may find the following links interesting:
http://www-old.oberon.ethz.ch/WirthPubl/ProjectOberon.pdf
http://www-spin.cs.washington.edu/papers/WCS/m3os.ps
http://www.ethistory.ethz.ch/rueckblicke/departemente/dinfk/weitere…
http://www.inf.ethz.ch/personal/wirth/Articles/Modula-Oberon-June.p…
http://www.bitsavers.org/pdf/eth/lilith/ThePersonalComputerLilith_1…
http://marte.unican.es/documentation/fosdem2009-miguel-telleria.pdf
And let’s not forget Multics, developed in PL/I before Unix was even born,
http://www.multicians.org/pl1.html
Having said this, I also like programming in C and C++ a lot. And to be honest, if you program in modern C++ it is easy to keep your code safe.
Thanks for the clarification. I could argue that in C++, I also only use casts in areas where they are either desperately needed or desperately convenient, but then you say yourself that…
One day, I really should try to learn a bit about the STL and see if it offers a solution to C’s longstanding problems with array bound checking. Sure, not checking is faster at the lowest levels, but at the application level things like buffer overflow really shouldn’t exist…
That’s crazy, when I think of it: I have never used more of C++’s standard library than iostream and new/delete. Well, I guess using it mostly for OSdeving and with SDL (which is essentially a C library) helped a bit ^^’
Edited 2011-06-15 17:19 UTC
The STL has been kinda painful so far, at least when using the various algorithms it offers (functors everywhere, ugh). Going to be a lot nicer with C++0x lambdas.
That said, I haven’t been doing manual new/delete for ages in C++, and nobody really should; for application-level code you have the STL containers (which aren’t painful to use by themselves), and for more specialized purposes you should be doing your own containers – http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization is part of why C++ is awesome.
http://www.cplusplus.com/reference/stl/vector/at/
“The difference between this member function and member operator function operator[] is that vector::at signals if the requested position is out of range by throwing an out_of_range exception.”
That said, typically it’s not very useful to do bounds checking when indexing an array. Your algorithm should know whether you’ll exceed the bounds anyway (or it’s a very weird algorithm).
Type safety has nothing to do with nominal typing. OCaml and Haskell have a (more powerful) version of Go’s type system. For the most part, C++’s class system is mostly restricted vtables. Greater flexibility and a consistent type system have nearly always been better for languages…
PS: I have only cursory knowledge of most of these languages. I love C++ for the templates, not so much a fan of the class system
Edited 2011-06-16 22:51 UTC
The “class system”, whatever meaning you intended to put into the phrase, in C++ is exactly the same as in Java with one glaring difference and another less obvious: multiple inheritance and friends.
I don’t know any other OO languages but, as I understand it, a “class system” is only about inheritance, abstraction and encapsulation. I can’t see how these basic bricks of OOP can be implemented in such a strikingly better way that you would be “not fan of the C++ class system”.
Could you elaborate on what you call “class system” and C++’s shortcomings as to that class system?
What makes you think I’m talking about Java as being in any way superior?
Class System is done (almost) right in Eiffel. Method Renaming and Deleting.
The deficiencies? Diamond inheritance and constructor chaining make mistakes easy. The argument given is that “good practices and patterns will obviate this”, but most of those exist only to overcome the shortcomings of the type system.
PS: And to clarify, the reason i say “class system” is because I don’t wish to include templates into it, which are a lot more awesome
Edited 2011-06-17 11:16 UTC
As I wrote earlier: “I don’t know any other OO languages […]”
Once again, what are the said shortcomings? Wikipedia tells me Eiffel also has diamond inheritance and a quick Google search led me to a C# article saying constructor chaining is better than an initialization function. CC also exists in Java and I guess it is inherently tied to the inheritance concept, hence to any OO language. So the deficiencies you point out are also in the same language whose class system was “done (almost) right” to you? I am still puzzled as to what you’re getting at.
Anyway, what’s the language you prefer in terms of OO design?
One thing that C++ lacks, and that opens the door for cool stuff, is metaclasses.
Almost every OO language has metaclasses. Even Simula had them, but Bjarne thought they would be too heavy to have in C++, so we are left with RTTI.
In the OS/2 days, IBM created SOM, which provided metaclasses for C++, but SOM died with OS/2.
http://en.wikipedia.org/wiki/IBM_System_Object_Model
I really enjoyed the productivity I could get out of Smalltalk, but it never caught on for several reasons, although Ruby has a bit of it.
IMHO… runtime metaclasses are mostly a way of getting around the limitations of the language’s OO system, or of a poorly structured library, in exchange for slowing the language down. That’s why Smalltalk and Ruby are much slower than C++… Templates cover this deficiency to some extent, though (if C++ with templates is compared to Java without metaclasses).
Compile-time macros (and metaclasses), though, would be totally awesome. If I remember correctly, D has ’em to some extent.
There are no slow or fast languages. Only slow or fast implementations.
moondevil,
“There are no slow or fast languages. Only slow or fast implementations.”
I’m glad someone other than me is saying it.
Ideally a language ought to be picked on its ability to express the intent of the developer, and the compiler would figure out the details of getting it to run most efficiently on the platform.
The perfect compiler would take any algorithm written in any language and transcode it to the best equivalent algorithm which is known.
Of course we are not quite there yet.
If you’re suggesting that, say, vtables can run as fast as templates, then I wanna see some benchmarks.
Unless you’re talking about aggressive optimisations, which can be applied to BOTH static and dynamic languages.
No.
I am saying that a language is not fast or slow. The way it is defined might allow or disallow certain optimizations to be done, but that is it.
All languages can be implemented as an interpreter, with JIT support or a plain old compiler. It is just a matter of ROI of the implementation.
I am old enough to remember BBS discussions about how slow C was and everyone would code in assembly until the end of the world.
But assembly IS still faster than C. It’s just that C is overall much more efficient to code in than assembly, and computers are fast enough to ignore the difference. Likewise, OO was also considered slow, but computers grew fast enough. Falling back would still show a significant speed boost; it’s just not worth the trouble.
Likewise, I prefer to code in Ruby with its heavy reflection over Go any day (mind you, I still prefer C++ over Go). However, I don’t see why C++ should add reflection. Adding those features would instead slow it down IMO and make it a less likeable alternative to more flexible languages. Besides, Objective-C kinda does the same job…
WELL….
[disclaimer: I’m more of a language enthusiast, so I may make some obvious mistakes due to insufficient understanding]
The reason I said “almost right” is because I haven’t found a single language that covers all the deficiencies (I conjured up) yet.
IIRC there are two problems of diamond inheritance — method collision and multiple constructor invocation.
Method collision is when a single method of a class has at least two inherited implementations from its parent classes [they needn’t share the same superclass]. Eiffel lets you explicitly control how each method is implemented.
However, Eiffel faces the same issue that the same constructor can get called multiple times and may cause problems, as in C++. The solution to this (according to Java) is to use interfaces; however, that doesn’t address method collision and may require a lot of boilerplate code.
And yes, it’s inherently tied to the concept of construction, so multiple parents can cause a mess.
(About the language in terms of OO:) If you mean a practically usable language, it’d be Ruby, I guess, which is so delightful to code in. Among statically typed ones, I prefer C++ over the others, although D is starting to look prettier every day.
If you mean theoretically, maybe CZ; a paper I found long ago titled “Multimethods with Multiple Inheritance without Diamonds” which addresses both the issues.
Ehh, accidentally replied to my own comment. Anyway, here’s my reply to your post..
http://www.osnews.com/permalink?477640
Maybe a good solution would be to define use cases and subsets.
Something like Qt forces you to write very different C++ than when you write low level system stuff. So have Qt-C++ and system-C++.
Maybe the guys who like to define ISO standards are the wrong people to ask for something like that.
I guess once Go gets its concurrent GC, a lot more people will see the light and only face the C++ monster when they really have to. Let’s talk again in 10 years.
Well, in practice everyone already uses a subset of C++, I think. Some are into heavy templating, others get into more traditional inherited class hierarchies, and then there are those who use C++ like a better C with extra tricks like function overloading and a “bool” type…
The problem is that in many projects, the kind that involve multiple sites across the globe, it is very hard to keep everyone on the same subset.
This is the main reason why you end up forcing style guides on the teams.
Still, C++11 is looking like a good improvement.
A lot of features have been added with simplicity as a main goal; for example, “auto” and the range-based for.
For example, to show all items in a vector, in current ISO C++ you need to do something like:
template <typename T>
void show(const vector<T>& v)
{
    for (typename vector<T>::const_iterator i = v.begin(); i != v.end(); i++)
        cout << *i << endl;
}
In the new C++0x you can do also this stuff:
template <typename T>
void show(const vector<T>& v)
{
    for (auto i = v.begin(); i != v.end(); i++)
        cout << *i << endl;
}
or using the range-based loop:
template <typename T>
void show(const vector<T>& v)
{
    for (auto& x : v)
        cout << x << endl;
}
IMHO, simple, beautiful and powerful because they are new features, but they add simplicity too.
While you can totally see it from this point of view, I see it myself as something which makes C++ compilers more difficult to implement, and as such probably less efficient in each individual area.
In the educational example above, you should really use ++i instead of i++. This may seem like nitpicking, but there is a significant performance difference: when using i++, the compiler has to invoke the copy constructor at each iteration.
I thought most modern compilers were smart enough to recognize that you’re not using the unincremented value, and thus do the right thing?
Assuming that the compiler can inline the overloaded ++ operator (which it should in this case), and you have a trivial copy constructor (which again, should be the case), modern compilers should be able to completely remove the copy. Probably.
If the interface is external (from a framework or shared library), the compiler can often not do those kinds of optimizations. With C++ overloading there is no guarantee that ++i and i++ have the same effect when used as procedure calls.
Agree 100%. I taught myself C/C++ in the mid 90s from books written in the late 70s and early 80s at the school library, and old Borland Turbo C compilers. When moving to GCC in the late 90s, I really wanted some *good*, *modern* best-practices guide to using the language. I never found anything of the sort. When I got my first job, I was made fun of for my ancient syntax patterns, even by the graybeards.
The way I improved my style was by reading Dr Dobbs, The C/C++ User’s Journal and C++ Report.
Back in the day when Internet was called Arpanet and the best some of us could wish for was some form of BBS access.
Later on, the nice books from Andrei Alexandrescu, Scott Meyers and of course Bjarne.
Go “fairly mature”?
This link is great to have an overview of the new features
http://www2.research.att.com/~bs/C++0xFAQ.html
Given that to this day not one single compiler fully implements the C99 standard, despite it having been published 12 years ago, how long will it be before any of the new features of the C++ standard can actually be used without tying your code to a specific compiler? If compiler implementation of the new features moves as slowly as it has for C99, we have a long wait ahead of us before we can safely use the new features in the standard.
Edited 2011-06-15 15:50 UTC
Well, GCC is almost there, apart from the concurrency stuff…
http://gcc.gnu.org/projects/cxx0x.html
EDIT: By the way, many thanks to whoever will implement stdint.h in g++. I dream of the day when I’ll be able to remove this ugly GCC-specific hack_stdint.h header of mine from my OS’s code ^^
Edited 2011-06-15 16:02 UTC
And all the good stuff from the rest. They have all the easy stuff done, but the really awesome stuff that would let me cut cookie-cutter code in half is still not done.
Inheriting and delegating constructors and non-static member initializers are what make up most cookie-cutter code when making new classes. I dream of the day when ‘mark, CTRL+C, CTRL+V’ is replaced with ‘mark, DEL’.
Actually the latest versions of g++ have implemented a lot of features of C++0x. They are turned off by default, but you can enable them by adding -std=c++0x to your g++ command line.
You can see the list of the features implemented and not implemented in g++ in this URL:
http://gcc.gnu.org/gcc-4.7/cxx0x_status.html
Edited 2011-06-15 15:58 UTC
C99 has another problem: most compiler vendors do not care.
Most developers are quite happy with C89 features. The majority of developers that care about other features have moved to C++ or managed languages.
At least this is the official excuse from many companies when you ask about C99 support at developer conferences.
Bad excuse. Things like stdint should have been in the C standard from the very beginning. It’s unacceptable to create a systems programming language that doesn’t offer a way to get integers of known size without assembly snippets.
The short/long thing was broken from the beginning. There should have been int for everyday use and fixed-size integers for specific uses to start with.
Edited 2011-06-15 17:09 UTC
You are right, and the worst are Borland and Microsoft. They are the main ones making the point I referred to above.
If you are using Linux or another POSIX system accurate integers have been there for a “long long” time.
As part of the C library, maybe, but not built into the “core” GCC, as it’s supposed to be. The C99 version of stdint.h only appeared in GCC 4.5.something, IIRC, and the C++11 part is yet to be implemented.
May seem like nitpicking, I give you that, but the difference is there when you develop an OS as a hobby and DON’T want stdlib stuff linked into your code.
Edited 2011-06-19 18:54 UTC
Edit: Clarified
POSIX and the stdlib are not the same. One is a system interface, the other the standard library. The C99 stdint.h (which you don’t like) is in the standard library; what I referred to was the system interface.
Anyway it was a joke.. “long long” time?
Edited 2011-06-19 19:08 UTC
Then there’s something I’ve not understood, because I’m pretty sure that GCC will let my C files include something called stdint.h even with the -nostdlib and -ffreestanding flags on.
However, attempting to put a #include <cstdlib> somewhere will result in a “fatal error: cstdlib: No such file or directory”, as planned.
Are you sure that stdint.h is considered a library feature and not a core language feature?
Yeah, got it. I’m happy that many compilers decided to make long 64-bit instead, by the way; “long long” sounded really stupid IMO ^^
Simple things like <stdint.h> work standalone because it is only a header file with no object code behind it.
Last time I had to write a kernel, though, we had to copy stdint.h into the directory we were building the kernel from. It did not work with freestanding back then. I guess things have gotten easier.
Whether that means it is now a core C language feature and not a core C standard library feature, I am… ehmm.. I guess, maybe.. I am not pedantic enough to care.
Edited 2011-06-19 19:34 UTC
I think the change has happened with the GCC 4.5 release. Before that, I couldn’t use stdint.h either. The release notes of GCC 4.5 state :
“<stdint.h> is provided by GCC, or fixed where the system headers provide a nonconforming version, on some but not yet all systems. On systems where types in this header have been defined as char, GCC retains this definition although it is not permitted by C99.”
After having explored the D programming language, I’m a convert. D is very much designed using a “less is more” approach, and I’m sold on it. The fact that it has garbage collection built-in is nice as well.
Simple, elegant and clean.
D is “C++ done right”.
Is D used by any large organizations, or for any large projects?
I don’t think so…. not yet.
However, Andrei Alexandrescu, one of the gurus in the C++ community, has written a book on D – “The D Programming Language”. There is also a very active and friendly D forum at Digital Mars.
There are several D compilers. The original Digital Mars one. GDC, a D front-end to GCC. LDC, an LLVM-based one.
There’s a very good list of D features here –
http://www.d-programming-language.org/comparison.html
Edited 2011-06-16 01:32 UTC
Not to start a flame war, but D is way too heavy. They loaded it up with reserved keywords and just lots of extra crap. D should have been smaller and more orthogonal than C++. Instead they decided to load it up with bloat.
Edited 2011-06-16 04:20 UTC
What the heck are you talking about? It is C++ that has the bloat, with the ten thousand special cases for templates and constructors and operators and overloading and implicit type conversions and so forth and so on. D seems to have simplified a lot of that. I’m not sure what else you are referring to in terms of “bloat” (a term that is thrown around far too often these days — as if we should all be doing 8-bit assembler on some 10,000-transistor chip from the 70s).
The problem with “done right” OSes, tools, and languages is that most of them fall victim to the “worse is better” concept.
http://dreamsongs.com/WorseIsBetter.html
D is in some areas C++ done right, and it has quite a few nice features, but C++ has the tooling and industry support. So unless D provides a few killer features, its adoption will never amount to much.
A language to succeed has either to provide killer features that gather people around it until the language gains momentum, or needs industry push.
D has quite a lot of flaws, actually. More flaws than C++.
For example, the artificial separation between structs and classes.
The list is quite long.
The separation between structs and classes is eerily similar to the policies in .NET. It’s nice to have explicit value type semantics rather than the fingers-crossed “semantics” of C++.
That’s one of the things that keeps me sticking with C++. Garbage collection is an ugly and unsatisfying solution to a minor problem.
I think a better solution is judicious use of reference counting & shallow copying. Then you get to keep speed, determinism, smoothness (no stopping the world), and deterministic destruction (for RAII).
Completely agree.
One friend told me something like: “You do not need garbage collection if you do not produce garbage!”… really true!!
Timmmm,
“That’s one of the things keeping me sticking with C++. Garbage collection is an ugly and unsatisfying solution to a minor problem.”
I think it’s more a matter of opinion.
The problem with delete in a managed language is that the language can no longer vouch for program safety with regard to preventing code from corrupting its own structures. Deleting an object which is still in use could cause stray pointers/references.
With this in mind, does your aversion to garbage collection (and implicitly managed languages) carry over to environments like Perl/PHP/JavaScript for web development? I think using unmanaged C/C++ there would be rather dangerous.
“I think a better solution is judicious use of reference counting & shallow copying. Then you get to keep speed, determinism, smoothness (no stopping the world), and deterministic destruction (for RAII).”
It’s tempting to think so, but it’s not safe to assume that malloc/free is always going to be faster than GC. There are a lot of factors; it came up not long ago:
http://www.osnews.com/comments/24843
Are you thinking that the language ought to enforce reference counting on objects or just that programmers should use it as a design pattern?
One issue with ref counting is cyclic data structures. Though they may be uncommon, I’d hate for any language to be limited in this fashion: Visual Basic used ref counting and was susceptible to memory leaks in cyclic structures.
I personally don’t have any problem using new/delete myself, but I don’t have any serious objections to managed memory if it helps devs produce more reliable code.
Edited 2011-06-18 03:45 UTC
Same thing here.
Actually, C++ was the last systems programming language to be designed without GC, and Bjarne only did it because of the bad experiences with the Simula GC. You can read about it in “The Design and Evolution of C++”.
Luckily the language is powerful enough that reference-counting classes are now part of the standard library. So you get to choose when to use what.
Plus the C++11 standard defined a GC API, so that compiler vendors can provide their own GC if they so desire. You can read more about it here,
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2527.html
http://portal.acm.org/citation.cfm?doid=1542431.1542437