“For the past several years we’ve been working towards migrating from GCC to Clang/LLVM as our default compiler. We intend to ship FreeBSD 10.0 with Clang as the default compiler on i386 and amd64 platforms. To this end, we will make WITH_CLANG_IS_CC the default on i386 and amd64 platforms on November 4th.”
Well… I hope the performance of Clang-compiled FreeBSD 10 will be on par with GCC-compiled FreeBSD 9.
This kind of decision reminds me a lot of the “eat your own dog food” attitude so common in big IT companies… and now the Open Source world is suffering from the same affliction.
I think you’re mixing up your terms.
NIH (Not Invented Here) is an affliction.
Eating your own dogfood is a praise-worthy quality that means that you’re less likely to ship garbage because the people most able to fix problems that annoy them are full-blown users.
Affliction?
Ironic, now that the biggest projects written in C are compiled with something written in C++ (or, in GCC’s case, increasingly written in C++).
There’s nothing ironic about it. Different projects use different languages.
It’s ESPECIALLY ironic now that YOU’RE in the discussion, as one of the people I’ve recently encountered with irrational arguments against C++.
Irrational? Hardly. My argument against C++ is/was always about complexity.
Regardless of your views on my views, my participation has nothing to do with irony. I was merely correcting your misuse of the word.
There exist other languages that promote complexity much more than C++ does: e.g., Java and C, IRONICALLY. But what makes your arguments irrational are the supporting arguments you use. It’s not irrational to argue against complexity, but it is irrational to bring up irrelevant and downright unsafe practices as proof of a language’s unnecessary complexity.
If I may remind you, you plainly stated that it was preferable (to you) to risk buffer overruns with array management than using slightly more complex but safer features of a language.
That is highly irrational.
It has everything to do with irony. I just looked up a couple of explanations of the concept “situational irony”. This situation fits, and even more so with your involvement.
I don’t know – Java does encourage over-design, but C++ encourages nigh-unreadable template fun and magical mystery action-at-a-distance.
Of course, C++ in the right hands is fine, and it does also depend on what you’re coding against (Qt code tends to look good to me), and a dash of C++ can remove a lot of pain from a C project.
Nothing about C++ encourages templates. Most of the useful template stuff is already expressed in the standard library. Programmers who aren’t cowboy coders won’t use metaprogramming – and we must use non-cowboy-coders as a standard candle because cowboy coders can do damage in any language equally.
I would say that RAII prevents most occurrences of mystery action-at-a-distance. Resources have limited scope and are cleaned up once out of scope, reducing chances of mystery action-at-a-distance.
Conversely, Java, lacking destructors and relying on cleanup functions being called explicitly in finally blocks, fails to improve upon C++ as was the aim. Overreliance on inheritance as a way of extending functionality is error-prone, and in Eclipse, for example, often requires looking at the code of the superclass to make sure your extensions don’t break it. That’s extreme mystery action at a distance.
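To make the RAII point above concrete, a minimal sketch (the FileHandle class below is hypothetical, written purely for illustration):

#include <cstdio>
#include <stdexcept>

// Hypothetical RAII wrapper: the resource is acquired in the constructor
// and released in the destructor, so cleanup happens automatically when
// the object goes out of scope.
class FileHandle {
public:
    explicit FileHandle(const char* path) : file_(std::fopen(path, "r")) {
        if (!file_) throw std::runtime_error("could not open file");
    }
    ~FileHandle() { std::fclose(file_); }  // runs on scope exit, even if an exception is thrown

    FileHandle(const FileHandle&) = delete;             // non-copyable
    FileHandle& operator=(const FileHandle&) = delete;

    std::FILE* get() const { return file_; }

private:
    std::FILE* file_;
};

void readConfig() {
    FileHandle cfg("settings.conf");  // acquired here
    // ... use cfg.get() ...
}                                     // released here, with no explicit cleanup call

The scope itself is the cleanup point, which is the contrast with finally blocks, where the release still has to be written out by hand.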
Since Java 7 you can make use of try-with-resources, which pretty much covers the RAII scenarios.
Blame the programmers, not the language.
I can also give examples of C++ frameworks, which rely on inheritance to death, coupled with nice touches of multiple inheritance.
You still need to make it explicit with a statement. RAII operates as soon as you acquire the resource. The benefit of C++ RAII is that it’s automatic.
But you’re the one who brought up extreme mystery action at a distance, which only exists due to programmers, not the language. You also blamed the language for abuses of templates by programmers. You can’t have one standard for criticizing one language and a different standard for another.
I’m blaming the language insofar as at least C++ templates (I much prefer Ada generics, I freely admit) provide a viable alternative. Java generics are tacky add-ons.
Java the LANGUAGE itself has an overreliance on inheritance, so I don’t need to blame the programmers. I would always encourage programmers to write in the way the language designers intended the language to be used (which is inheritance up the yin yang for OO dominant languages), even if the language is badly designed.
WTF are you smoking?!
Is your anger at Java, a tool, so great that you don’t see to whom you’re replying?!
People make mistakes. However, remove that one errant paragraph and my points still stand.
Ok.
I use Java as I use any other language: as a tool, depending on project requirements and what the customer asks for.
I will also take C++ over C, unless required to do otherwise.
As for inheritance abuse in Java, it is true that a lot of people do it, but you can also have quite nice architectures using interfaces instead.
Sadly, many developers don’t know any better, and you tend to be forced to abuse inheritance as soon as you start incorporating a lot of third-party libraries.
In C++ you can find such examples in ATL, MFC, OWL, TurboVision and most game engines.
Java was initially designed around inheritance abuse. Inheritance is all over Java’s oldest APIs; note, for example, that Java’s original Vector class holds Objects (and not a specific type), which means that you can shove any Java Object into them, and that you can’t shove a bare scalar type into them (yes, I know that Integer is an object that just thinly wraps an int). This makes these things extremely type-unsafe; if you want to constrain the types you put in a Java Vector, you have to enforce those constraints yourself, explicitly, in your code, and you always have to explicitly type-cast the things you get out before you use them. Essentially, Sun was trying to use polymorphism to provide a flexible, generic container without actually including generics in the language, and the result was a mess.
And while Java has proper generics now, that heritage is definitely still in the language. Java by design puts a lot of weight on classes, objects and polymorphism.
I might also point out that, at least at my school, we actually teach in some of our courses that “inheritance is a functionality extension and code-reuse mechanism, and you should use the hell out of it.” And while I think that’s insane, I think that view is one of the fundamental assumptions that went into Java.
OOP and inheritance are sometimes the right metaphor, but not always. It’s great in game engines, because you often find yourself building large trees of heterogeneous but semantically related types (i.e. large trees of objects that are renderable, but have different render methods). Ditto for UI frameworks, where you find yourself adding lots of different types that are all widgets. But “look, this technique is used a lot in C++, too!” is not the same as “look, C++ over-uses this technique!” OOP is Java’s central organizing metaphor, whereas C++ is a multi-paradigm language that also supports imperative-style code and template meta-programming where those make more sense.
Everything you said about Java I can say about C++.
I started using C++ in 1992; in those days C++ was still “C with classes”, and everything you complain about in Java is how C++ was used back then.
Everyone was blindly trying to do Smalltalk in C++.
My first compiler with basic support for templates, which were still being standardized, was Turbo C++ 3.1 for Windows 3.x, bought in 1994.
Templates were only working properly across major C++ compilers around 2004, and even then some edge cases of C++98 were not fully implemented across all major compilers. Even afterwards there were many companies that disallowed the use of templates in their production code.
Which means that in many enterprises out there, C++ developers have been writing C++ code the same way you say Java forces people to write it.
Actually, what happened is that many early adopters of Java were C++ developers who were never able to use templates, and they carried on coding in Java as they always had before in C++.
You say yourself that there are better ways to code in Java, but you teach your students otherwise. How can the tool be blamed?
Not exactly. Yes, C++ developed slowly. Yes, early compilers supported different subsets of it, so yes, different developers make wildly different uses of the language even today. And yes, C++ as a language has its own set of flaws, some of them quite severe. But the generic containers in the C++ standard library have been (comparatively) type-safe templates forever, and C++ has been multi-paradigm for most of its history. Java as a platform needed to suffer a dramatic loss of developer mind-share before its developers finally relented, compromised the language’s elegant OO design and grudgingly started to add support for a lot of features that C++ (and Python and Ada and C#) have had forever. We had to wait way too long for generics, and absurdly long for RAII.
Funny, because here in Germany I get called every single week for new Java projects. My employer has lots of Java projects proposals without enough developers to take care of them all.
That does not look like loss of mind-share to me.
Now I agree with you that generics in Java suck, and I also don’t like the way the language is going with the annotation overload that it is getting (@override, @value, …).
For me, Java is just another tool. The language I use, always depends on the project requirements, and is usually already decided by the customer.
I admit that I have no solid empirical measure of language usage or user-base growth, but I have the strong impression that Java as a general-purpose desktop application language is dying. It isn’t being shipped on new Windows machines anymore, new software projects aren’t using Java, the language is fading away. It certainly still has its users — I’ve written an internal-use web service in JSP, and it’s Android’s native language, after all — but I think its days as a general-use language are pretty much over.
I never said that generics in Java sucked! I’m glad they finally got them, long overdue as they were. Given how much more type-safe generics make abstract container classes, I was saying that it was a crying shame that it took Sun so long to break down and add generics to the language!
To me, that’s the story of the growth of Java. They looked at C++ and said “there’s gotta be a simpler way to build a language.” (And for my money, they’re right on that.) So they stripped out a lot of the most confusing features and worst design decisions, and Java was what they were left with. But they took out too much; things like generics, while ugly from a type-flexible pure OOP perspective, end up being net benefits for the language. And they were too slow to add those features back into Java. So you’re left with an under-expressive language that’s missing features that older languages have had since before Java existed.
I mean, Java finally gets what amounts to a Python with-block in Java 7, in 2011? Why the hell did it take that long? And why is the RAII block that we finally do get merged with try-catch blocks? What would’ve been so bad about introducing a with block?
For me, generics in Java suck because, owing to type erasure, they are only halfway to what other languages offer as generics. But yeah, it is still way better than not having them.
As for things taking time to be implemented in Java vs Python, it is always like that when a language is subject to some form of standardization process; look at how slowly the FORTRAN, Ada, C++, C, and OpenGL standards evolved, just to name a few examples.
And compare their evolution with languages and API not subject to standardization processes.
What do you mean? Can you elaborate on this a bit? *curious*
If I may remind you, I plainly stated that there is no risk of buffer overflows if you follow correct practices. At no point in time did I encourage unsafe practices. I spelled out for you exactly what must be done. And even with that in mind, it is not only simpler to do it my way but it also produces easier to read and understand code.
You’ve obviously found poor references for irony. There’s simply no expectation that a C project would not be compiled with a compiler written in C++. Thus there cannot be any irony when it happens (and it happens all the time, further removing it from the realm of irony.)
Similarly, my participation is not ironic because there’s no basis to expect that I wouldn’t participate in correcting a misuse of an overly misused word.
You keep forgetting we live in the real world where even the best programmers make mistakes. Do you know what a mistake is? People don’t have to intentionally diverge from correct practices to make a security error.
You encourage it by enabling the mindset that people should make security a matter of ego and pride at being able to claim mastery over manual buffer management. Encouraging manual buffer management (because of your dislike of C++) is reckless.
From the point of everyone who is not a developer: they don’t care. What they see is a security flaw and their wasted money.
Since when has manual anything been simpler and easier?
What makes it ironic is that Linus Torvalds and people like you have said that C++ is a bad language and that C is preferable to C++. I’m pretty sure the BSD people hold similar views. Then you get compilers written in a language you despise compiling your programs written in a language you prefer over the compiler’s implementation language.
Your preferred language has basically become possible only because of a language you hate, and it’s happened to the biggest projects with the greatest haters. Irony.
And the “expectation” criterion of irony is not an empirical one but a vaguely narrative one. For example, say we’re both watching a movie. Something happens which you don’t expect to follow from what has happened, while I fully expected it because I understand the narrative structure. Does that make it not ironic simply because I expected it, even though you didn’t expect it and it was certainly the writer’s intention for it to be ironic?
Of course not. Irony is a literary or narrative property that applies whether or not an actual person expects it.
Oh hey, the security circus is back in town. I’ve already dealt with the issues you bring up, but here we go again.
Mistakes get checked by others. Don’t release software before that happens.
Using arrays is not a matter of pride, but one of humility. Other people have to read the code. Use something easy to read.
Manual can be easier if it makes things easier to understand. In this case, arrays have special syntax that makes them easier to understand than vectors. Thus even though it’s manual, it’s still easier.
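For anyone following along, the two forms side by side in a minimal sketch (the size and values are invented for illustration):

#include <cstddef>
#include <vector>

// Manual array: the programmer tracks the capacity and must keep every
// loop bound in sync with the declaration.
void fillArray() {
    int values[8];
    for (std::size_t i = 0; i < 8; ++i)
        values[i] = static_cast<int>(i);
}

// std::vector: the container tracks its own size and grows on demand,
// so there is no separate bound to keep in sync.
void fillVector() {
    std::vector<int> values;
    for (std::size_t i = 0; i < 8; ++i)
        values.push_back(static_cast<int>(i));
}

Whether the first version reads more clearly than the second is exactly the disagreement here.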
The language of the compiler is irrelevant. Most projects just use the most popular compiler. It simply makes rational sense to do so, as popular compilers have more extensions, optimizations, and (hopefully) fewer bugs (or they’ll be fixed sooner).
It is entirely possible to compile Linux or FreeBSD with tcc or pcc, both of which are written in C. Therefore, it is not true that C requires C++.
It is also important to point out that using a program is in no way, shape, or form a validation of the language used to write it.
In order to hate, one must have passion. I certainly don’t. I don’t go around trying to stop people from using C++, it’s their choice they can use what they like.
Personally, I find C++ to be an ugly language. It tries to do too many things at once and ends up being overly verbose in everything it does. This makes it harder to read. Which makes it harder to understand.
I much prefer using smaller, more concise languages. Yes, that’s languages plural.
Lastly, irony. Irony is all about expectation. If you don’t understand this then, please, stop using the word. Specifically, irony is about averaged expectation, also known as common sense. Something can be ironic only if the average person wouldn’t have expected that outcome.
Also, expectation is empirical. You can go out and ask people what they expect.
No man, if you don’t think his favorite language is the best language for all projects, then you’re clearly an idiot.
No, my favourite language is C++PythonAda, but no such language exists.
So your favorite language is two parts bloat to one part batteries included?
No wonder we disagree. That’s pretty much the definition of the worst language ever.
I think people are slowly accepting that C++ will eventually replace C in most areas where C is still relevant on the desktop/server.
Mac OS X device drivers are done in C++ (IOKit).
Most of the Win32 APIs since Windows 2000 are actually COM based, and Microsoft has publicly announced that C is only relevant for legacy code and that they would rather focus on C++. Even more so in Windows 8.
Symbian and BeOS are done in C++.
Only Linux and the BSDs still have pure C/ASM kernels. I don’t know about AIX, HP-UX and Solaris.
Now, for embedded systems C still has a place, as many of them are still coded in Assembly and companies are only now slowly moving up to C.
Of course C will exist for decades still, as it does not make sense to rewrite code that works just to change language.
I seriously doubt that; not only from my own experience, but also from what I’ve seen of the language popularity benchmarks, C is holding on as strong as ever (it recently beat Java for the top spot on TIOBE).
Actually it uses a subset of C++ with no exceptions, no templates, no multiple inheritance, etc., which raises the question of why they couldn’t just settle for plain C for those drivers to begin with.
C is here to stay; it’s the lowest common denominator as far as high level languages go, supported by pretty much every platform, and usable from just about any other language.
That doesn’t mean it’s the best choice for every project; there are certainly areas in which other languages like C++, Java, C#, Python, Go, etc. are likely better choices, as they offer a higher level of abstraction.
A particular area in which I wager C will always reign supreme is library/framework code. The reason projects like zlib, flac, libjpeg, png, sdl, audio/video codecs, lzma, etc. are written in C is that it’s A) fast with a small memory footprint and B) callable from just about anything.
Also, none of the ‘new’ languages really compete with C; new languages like Go and Rust are higher level and compete primarily with C++ or even higher-level languages.
It is only the lowest common denominator on the operating systems that happen to have C as their API.
In Symbian you need a C++ compiler; even C code gets compiled by a C++ compiler.
Starting with Windows 8, WinRT comes into the picture as the future direction of the operating system API.
Eventually C++ will be the lowest level API you can get on Windows.
Ever tried to link a program unit written in C++ into a project written in FORTRAN? It’s possible, but ugly. It’s much easier to link a C program unit into a FORTRAN project.
C has a lot of other use cases too, of course, but C-style linking is definitely still the lingua franca of multi-language projects. Most languages and compiler suites support C linking, and if you’re going to mix multiple languages in a single project (which happens, I’ve worked on a project that mixed C, C++, Ada and Fortran), you’re likely to be exporting everything with C linking at the boundaries where those languages meet.
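As an illustration of that boundary pattern, a minimal sketch (the unit and function names are hypothetical): the C++ side keeps its internals, but exports a flat, extern "C" entry point that a C, Ada, or Fortran caller can bind to as an ordinary C symbol.

// mixer.cpp -- hypothetical C++ unit exposing a C-linkage boundary.
#include <numeric>
#include <vector>

namespace {
// Internal implementation, free to use C++ features such as containers.
double averageImpl(const std::vector<double>& samples) {
    if (samples.empty()) return 0.0;
    return std::accumulate(samples.begin(), samples.end(), 0.0) /
           static_cast<double>(samples.size());
}
}  // namespace

// The boundary: no classes, no overloading, no name mangling.
// Foreign-language code binds against this plain C symbol.
extern "C" double mixer_average(const double* samples, int count) {
    return averageImpl(std::vector<double>(samples, samples + count));
}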
I really wish someone would do something about that… I mean it’s not so bad when you can generate the C bindings, but it’s still a pain.
Using JVM/.Net languages you can sort of avoid the issue because they’re all represented the same way… but that doesn’t help me if I want to make Haskell and Python talk.
I could make a library that would know how to dynamically look up information, but that would basically involve embedding Python in Haskell (or vice versa), and it probably wouldn’t perform well. It may almost be less effort to create two languages with similar characteristics and build in the cross-talk functionality (allowing the compiler/interpreter to do optimization).
extern “fortran” ….
Only on the operating systems where the ABI is the C linkage model.
On Lilith, for example, you would need to use the Modula-2 linkage model. In Native Oberon, the ABI is Oberon based. In SPIN it is Modula-3 based, and so forth.
You can argue they are all dead, but let’s take Windows 8. On the platforms where the only Windows ABI is WinRT, like the ARM tablets, the lowest API is COM and the only native-code compiler is C++.
In this case, your languages need to communicate via the COM bindings; there is no C interop any longer.
Sure you can still use C, but it will be C communicating via the COM API.
This means that in the long run, C++ replaces C as the lowest API on Windows, if WinRT is successful.
In Symbian, likewise, if you’re using some C stuff, you’re actually compiling C-like code with a C++ compiler, because the ABI and exposed interfaces are all C++ based.
Again, C as lingua franca only works if C is the language exposed by the operating system. Let’s not forget there were systems programming languages before it, and there will be after it; why should C exist forever?
Unless the C++ program unit has global static instances of classes with default ctors that need to run; then your compiler and linker need to know to find those initializers and run them.
The problem isn’t only the system API, the problem is things like pre-main initialization. Other languages have state that the system needs to keep track of; maybe they have a GC that needs to be updated, maybe they have pre-main initializers that need to happen, maybe they have some other structure that the language run-time is supposed to keep track of. C doesn’t do any of that; when you link in a C library written in C, you know that there’s no implicit pre-main that you need to call, and no GC whose state needs updating, and etc.
(Yes, C libraries can have init routines, but they’re explicit and part of the documented API; if you get someone else’s C++ library, that isn’t supposed to be linked into C or FORTRAN or Ada code, and try to link it in anyway, you have to figure out what your compiler and platform name the premain, hunt it down, and invoke it explicitly, which is a large part of what I was referring to when I said “it was nasty”. And if more than one premain initialization routine was generated, you get to worry about that, and if the order in which they’re called is constrained, you get worry about that.)
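To illustrate the kind of implicit pre-main work being described, a minimal sketch (names hypothetical): the global object below needs its constructor run before anything calls into the unit, and that call is emitted by the C++ toolchain rather than appearing anywhere in the source a foreign host program would see.

// registry.cpp -- a C++ unit with a non-trivial global static.
#include <map>
#include <string>

struct Registry {
    std::map<std::string, int> entries;
    Registry() { entries["default"] = 1; }  // must run before first use
};

// The compiler emits hidden initialization code for this object, and the
// linker normally arranges for it to run before main(); a C or FORTRAN
// host program linking this unit directly gets none of that for free.
Registry globalRegistry;

extern "C" int registry_lookup(const char* key) {
    auto it = globalRegistry.entries.find(key);
    return it != globalRegistry.entries.end() ? it->second : -1;
}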
We’re talking about different things. COM is an object-communication system, not a library linking system. As far as I can understand it (and I’m not a Windows developer), COM would take care of the problem of communicating objects between heterogeneous languages, but it’s not an ABI or linking standard and it wouldn’t take care of actually linking the program units in the first place.
Which, you’re right, would make cross-language linking much easier if you’re working with Microsoft’s native tools and the set of languages that its build system and platform support well, but won’t help you at all if you’re trying to link in an Ada or FORTRAN unit.
And the system-development languages before it blew, which is why C was developed. And the system-development languages used after it have to be restricted; remember the above post about how C++, when used as a kernel language, can’t make use of a laundry list of features? That’s because of the same problem; you can’t use features that would require the generation of implicit premains, or that would refer to state that the underlying system is supposed to maintain (because in a kernel, there is no underlying system). The system-development-safe part of C++ you end up with isn’t much larger or much different than C.
Since when? COM is all about libraries.
Your COM components can exist in separate executables, for more security. In this case a kind of local RPC is used.
However, most COM components are actually dynamic libraries that get dynamically linked with your application and follow the same format as a C++ VMT (vtable). No communication going on here.
In Windows 8 COM got extended. Now with WinRT, COM makes use of .NET metadata for language interoperation.
If Metro succeeds, many Windows developers believe Win32 might be on the legacy path, with WinRT taking over the full spectrum of Windows APIs.
C got developed because of UNIX. If UNIX had failed in the market, most probably no one would be talking about C today.
Once UNIX became successful, everyone wanted to have UNIX-like utilities, and C started to be ported everywhere.
The day operating system vendors start using another language for systems programming, as Microsoft is doing now with Windows 8, C starts to lose its influence in this area as well. At least in the desktop area.
If the operating system vendor does not offer you a C compiler, or C-like APIs, then there is no C interface to talk about.
Your FORTRAN compiler or Ada compiler will need to support another type of interface.
This is nothing new. From what I know, there are no C-like interfaces in mainframe systems, and you are forced to use whatever calling convention the OS vendor decided upon.
I’m not a Windows dev, but from what I’m reading, COM isn’t a calling convention, it’s an object serialization technique. You’re right that it will help different OOP languages on the same underlying platform swap objects easily, but that’s not the only problem I’m talking about. That works because COM support is pushed into the language run-time and underlying platform, and handled by the compiler; things change when one of the languages you’re using isn’t object-oriented or doesn’t have usable COM bindings, like Fortran, Ada or C. Exactly the reason that C is the common language for multi-language linking is that C linking is the lowest common denominator among calling conventions, and that’s still going to be true even on a platform that makes heavy use of COM in its low-level APIs.
“C exists because of Unix” is historically true, but you’re missing the point. The reason that C came into existence was that B and BCPL – and pretty much all low-level system languages – sucked. C became popular because there was a huge need for a portable, high-level language that could still be used for low-level work. C and Unix answered real needs in the market that nobody else did; it’s not as if they became popular by coincidence or clever marketing.
You keep avoiding answering how you would use C linkage if the operating system no longer offered it.
No, the UNIX authors did not like the other system programming languages. Algol 68 and PL/I were two that could have been used.
There were operating systems already written in those languages, so I doubt that they really sucked.
But I was not there, so my conclusion might be completely false.
OK, here: modern Linux doesn’t even strictly use C calling and linkage. The reason that C is used as the linking standard at the seams of some multi-language projects is that C is the lowest-common denominator between different function call models. C-style calling and linking is extremely simple; it can be mapped onto the actual calling-and-linking in use on any platform, and for almost any language, there is a subset of that language that can be mapped onto C calling conventions.
Using COM for object interchange and leaving the rest up to the compiler works until you want to link in a module written in a non-OOP language or compiled by a different compiler suite. Then you’re back to a C-style interface and C calling conventions. Not because it’s the platform’s normal calling and linking convention, but because it’s a patch of common ground between the function-call models used in different compiled languages.
Which isn’t even to say that all multi-language projects will just do everything as extern “C” and call it a day; for some languages and platforms, other techniques (like COM interchange) will be better. But C as the basic, universal model of calling a function will never go away completely, and your eager and self-assured predictions of C’s decline and demise in the coming years are comically premature.
Yeah, I’m going to stick with Thompson and Ritchie’s contemporary assessment of the available languages, especially given that Ritchie was dissatisfied enough with his existing alternatives to create a whole new language.
Also, C had plenty of its own selling points. I’ve written C and FORTRAN, and I can tell you which I’d rather use for a new project. C was quicker and cleaner than anything it was competing against, and its portability was another major asset (FORTRAN compilers had a tendency to have syntax and operators specific to the platform they were designed for).
I don’t know why Algol and PL/I never gained traction. I’ve only heard PL/I mentioned as a historical footnote, but I don’t know how it stacked up against its contemporaries. As for Algol, bear in mind that it had already been released and had largely failed in the market (yes, it was used; yes, it was still around; no, it had not displaced FORTRAN) when C got out into the wild in the late ’70s.
NB that I am just under 30 and was not around for any of the above. I just paid attention in Programming Languages. ^.^
It might take a few generations still, but I am confident that with the change to more strongly typed languages, this will eventually happen. Even C++ has stronger type safety than C.
Just out of curiosity, yesterday I was reading some OS/400 documentation (the system nowadays known as IBM i), and discovered that everything in the OS gets compiled to bytecode and JITted on installation, similar to what .NET does.
In this OS, the only calling convention is bytecode based; there is no C calling convention.
Anyway, maybe I am plain wrong about C’s future, and my bias against it is that of a frustrated Turbo Pascal guy who has seen enough core dumps and pointer tricks gone wrong in his life.
Who knows what the future holds.
36 here; I started using computers back in ’86 at the age of 10.
I was lucky that my university department had lots of literature from the early days of computing, and I also took all the compiler development and systems programming classes. So it was quite nice to experience so much of computing history.
Sadly, I think the younger generation that is starting to learn about computers nowadays will miss quite a few things.
On this, you and I are in complete agreement. I’m teaching some of them, and while there are a few bright spots, on the whole, I’m pretty worried.
http://lists.freebsd.org/pipermail/freebsd-current/2012-September/0…
http://lists.freebsd.org/pipermail/freebsd-current/2012-September/0…
Simple comparison of FreeBSD built by Clang and GCC