LLVM 2.8 has been released. The release notes describe this new, ehm, release in greater detail, so head on over and give it a read.
Back when Snow Leopard shipped and introduced me to Clang/LLVM, C++ wasn’t supported. From reading the release announcement, it looks like this is the first release with C++ & ObjC++ support.
Cool.
The progress and success of the LLVM project is just plain impressive. Who knew there was so much room left for innovation in compilation? One thing that’s a bit disappointing with this release though is the lack of love the LLVM Machine Code subsystem is getting for ELF. Maybe a Linux developer will come along to mirror all that goodness Darwin (Mach-O?) users are getting now. Jealous.
I’d just like to say that I wish people would stop using the word “love” when they mean “attention”.
I know they won’t, but I had to say it anyway. Go ahead & vote me down; I feel much better now. 🙂
Well, they don’t say “uber” any more, and “SKU” seems to be on the way out. This one, as all cliches do, will also come to pass.
The ELF work is now in HEAD, and will probably be mature enough to go into the 2.9 release.
Well, anyone who has used autotools, gdb, or libtool!
I agree though, their rapid progress is impressive. As is the way they have properly modularised everything to allow code re-use.
What about software patents?
Apple is a big contributor to LLVM.
They hold many patents that cite LLVM by name.
This is very dangerous as the licence isn’t GPLv3 and doesn’t protect users from being sued.
The LLVM license allows closed source components to be created and used. However, the main project is committed to open source and requires anyone making contributions to sign a patent agreement:
http://llvm.org/docs/DeveloperPolicy.html#patents
They’ve actually already removed some code from llvm that infringed on a non-contributor’s patent. Hopefully their diligence will pay off and software patents won’t be a major problem for them. It would be lovely if they weren’t a problem in general. ;o)
One word. Research. Try it. It works.
http://en.swpat.org/wiki/Apple_Inc.
http://goo.gl/UfRs
Please stop this nonsense about end-users being sued. End-users cannot be sued for patent infringement in products they are using. I cannot be sued for patent infringements in my TV; the one who will be sued is the manufacturer of the TV.
Maybe he is actually a developer or prospective developer.
The scope of patent law isn’t restricted to manufacturers, it covers any use of the invention. So end users could very well be sued. You could very well be blocked from using a device you designed and built yourself for personal use, if it infringed an existing patent.
This is the reason the major software vendors have to include legal indemnity for IP as part of their license agreement. It would be fairly tough selling software into the fortune 500s if you didn’t, as there would be too much potential liability for them otherwise.
Keywords being designed and built. Not purchased and used.
So then, tell me why no other industry provides this to their end-users? Maybe it’s because other industries are mature and not headed by morons?
Maybe they understand that if you put this into practice the whole capitalist system would collapse because consumers wouldn’t dare purchase any products?
Do consumers in the U.S. really have such a weak position that they have to fear being sued for patent infringements they could not possibly know exist?
What happened to the free market that is supposed to spur innovation?
Depends on the country you’re in: in the Netherlands, patent protection only applies to using the invention ‘in or for a company’ (rijksoctrooiwet 1910 art 30), not to purely personal use.
Please don’t give legal advice when you do not know what the hell you are talking about… End users CAN be (and have been, many times) sued for patent infringements in products they are using. You may feel that is unfair, illogical, stupid, or whatever – but I assure you that is simply the way it is.
If you buy something which does not specifically grant you indemnity by the manufacturer (which legally makes them the target in your stead), you are fair game.
Now, if you want to speak to whether or not you are “likely” to be sued, frankly that has almost nothing to do with your position in the supply chain and everything to do with how deep your pockets are and how likely you are to put up a fight…
And I do mean everything – frankly the target of a lawsuit is generally whoever the lawyers feel they can get to roll over easiest (or has enough money to make it worth while to endure a prolonged fight for a big payoff).
RIAA suits are a perfect example. It’s not patent law, but that isn’t the point. Do the targets of these lawsuits have money? Generally no – but the lawyers make up for it in volume (sue LOTS of people). And the payoff is strictly in settlements – if no one settled there would be absolutely no incentive for them to do this, and I mean no incentive. By the time a case gets to court they have already spent more money than they would ever likely see out of it… because the defendant is generally broke and can’t pay anyway.
My point is this approach can be applied to patents as well. If the situation is just so, and a few lawyers get together and cook up a similar scheme where they think they can squeeze a few thousand people out of a few thousand dollars each in settlements, well if you think you can’t be sued you are sorely mistaken…
Maybe you should take your own advice.
Unless you can give an example that’s bullshit.
Really? Why then does my mobile phone, for example, not give me any indemnification against being sued for patent infringement? When you buy a car do you get indemnification? When you buy a lighter?
These products are no different from software in terms of patent infringement.
Actually, that’s exactly the point. The RIAA lawsuits are about copyright, which is an entirely different thing. It doesn’t take a genius to figure out that illegally redistributing (not necessarily downloading, though) movies is against copyright law. On the other hand, knowing whether a certain product could possibly infringe on some of the millions of patents that exist is an impossible task for a consumer. Consumers are simply not expected to have that kind of detailed and technical knowledge of every product they purchase. I can guarantee you that if a company brought a case to court against Joe Sixpack for infringing on some patent in a product he had purchased, it would be thrown out right away unless it could be proven that the defendant was aware of the infringement prior to purchase. Innocent until proven guilty, isn’t that what you say over there?
If consumers were responsible for researching in detail every product they purchased, the whole capitalist system would collapse, because not a single consumer would dare purchase anything for fear of being sued.
But hey, the American legal system might just be fucked up enough so that you should worry.
I’m having trouble finding a reference, but there was a chain of cases filed by a company called the Lemelson Medical, Education and Research Foundation concerning bar code scanners. They did sue some of the scanner manufacturers, but they also sued businesses that used the scanners (namely some large retail outlets). In the capacity in which the retailers were sued, they were end users of the product (they used it, they didn’t sell it). Why did they go after the retailers? Because they had more money…
The case ended up being thrown out (the patent was unenforceable), but that is not the point. I didn’t say it was easy to sue end-users (it is more difficult). It certainly would not go over well with the media, and judges don’t like plaintiffs that stir things up… I simply said that you can sue end users… US law specifically states that you can.
I never said software was different – it is exactly the same. You may or may not get indemnification when you buy something; you would have to carefully read your license/bill of sale/warranty to find out if you do. But hardware is no different from software – if you are not indemnified by the manufacturer, then you are fair game for legal action in the US. That is just a fact.
No it isn’t. Please read this, and if you don’t understand it read it again (this is taken directly from US law):
That is very black and white, and stipulates quite clearly who action can be taken against in patent suits… It doesn’t matter at all that you didn’t know the thingimabob you bought infringed on a patent. Now, sure, there are consumer protection laws that might trump this kind of thing in court – but that doesn’t preclude an end user from being sued.
It is just that fucked up. That is what I am trying to tell you… I’m not claiming I like it – I’m just trying to make it clear that you are very very wrong about this.
I have never seen anything other than software providing this indemnification.
Yep, and it’s royally fucked up. I expect that law was written in simpler times, when there weren’t millions of patents and so many complicated devices. That, or the people who made this law were idiots.
I don’t see how you can reasonably expect consumers today to know if a product infringes on a patent. That’s just insane.
Yep. Sorry, I usually come to these issues from a non-US standpoint. I’m used to consumers actually having rights.
You mean these “LLVM” patents? http://www.freeishsoftware.org/index.php/component/content/article/…
Turns out the people spreading rumors about there being patents on LLVM are a bunch of liars spreading FUD about LLVM.
No patents about LLVM? And what about this?
If you actually read the patent, you’d find that it’s not actually a patent on LLVM at all; it’s a patent on a certain method of compiling javascript to an intermediate representation such as LLVM IR or even some other representation, meaning the string “LLVM IR” only appears in the patent at all to serve as an example.
The LLVM bit is really not at all even an interesting part of the patent, the patent is on the method used to convert javascript into an IR of the programmer’s choosing – what that IR is is irrelevant.
Edit: Oh, and it is not a patent (at least not yet), it is only a patent application.
GCC is still the best.
Far from being “the best”.
Based on what metric?
I’m not saying it isn’t, as GCC is a very mature and robust compiler suite, with many many years of development behind it, but I’m sure it isn’t better in every metric.
Unless you’re trolling, in which case, MSVC is better than GCC.
Sure hope it’s not about licences, I’m so tired of that.
From my own tests, and from what I’ve read of others’, GCC still generates the fastest code – even without optimizations such as PGO, which llvm lacks. However, clang/llvm compiles quite a bit faster (sure, some of that may be due to not optimizing quite as well as gcc), and the error reporting is imo superior.
The great thing is that clang adopted gcc’s flags etc., so it should work as a drop-in replacement. And as such there’s no need for anyone to put all their eggs in one basket, not even on a per-project basis. You can (or at least will be able to, once clang has sufficient compatibility) use them interchangeably and harness the strengths of each compiler where it suits you.
Well, GCC is a compiler collection (which is what the CC stands for). GCC has a fair number of supported languages and target architectures.
http://en.wikipedia.org/wiki/GNU_Compiler_Collection#Languages
http://en.wikipedia.org/wiki/GNU_Compiler_Collection#Architectures
In contrast, LLVM front ends actually appear to come from GCC:
http://en.wikipedia.org/wiki/Low_Level_Virtual_Machine#Front_ends
… whereas I can’t find any information on what machine architectures are supported, so I would presume it is only x86 and x86_64.
So in terms of at least the metric “what it supports”, GCC takes quite some beating.
Yes, gcc supports more architectures, and the same goes for languages; I doubt this is going to change anytime soon, since llvm’s language/architecture direction is largely that of Apple, which shows in clang/llvm’s history: C -> ObjC -> C++.
As for frontends, llvm piggybacked on gcc with llvm-gcc for quite some time while clang was maturing, but llvm-gcc has now been deprecated in favour of the dragonegg plugin, which allows you to use llvm as a backend for gcc (from gcc 4.5 onwards, iirc).
When you were looking for information, did you consider reading the release notes?
Trolling about compilers, seriously???
More like trolling about licenses. GCC vs LLVM has become a BSD vs GPL proxy flamewar in some circles.
A GPL’d compiler is an issue for various projects, especially since GCC moved to GPLv3.
I don’t know why you’ve been marked down, given that Apple’s reason for supporting LLVM/Clang was so that they can heavily integrate XCode and the compiler together, so that meaningful error messages can be provided to the programmer to help debug their programme. There are many other considerations, but it is amazing how something that seems like common sense is being turned by licence zealots into a religious cause.
GCC is a slow compiler, especially with C++, and it’s getting slower with each new release. It is a memory pig. Error reporting is very poor. The code is overly complex and hard for people to hack on. The license, especially in the newer versions, is unacceptable for a lot of projects.
LLVM is much better in all of these areas.
GCC also does not generate the best code on pretty much any architecture, and on more than enough of the architectures it supports it generates really poor code.
Can you somehow back up this claim?
From my experience, MinGW GCC on Windows with -O3 generates MUCH faster code than MSVC (dunno about 2010 though). I’ve done some numeric computation benchmarks.
Ehh? Not nearly so. I really want to think this is not about you being a BSD fan and gcc being GPL – please say it’s not so.
Here’s where I kind of agree with you, except that I don’t think gcc’s is ‘very poor’ but rather that llvm’s is ‘very good’, as in best of class. It’s one of the things I personally would want the gcc devs to work on, but although there are plans for this ( http://gcc.gnu.org/wiki/Better_Diagnostics?action=fullsearch&contex… ), the corporations that to a large extent direct gcc’s focus (ibm, red hat, novell etc.) are obviously prioritizing optimizations, with the focus of gcc 4.6 being polyhedral optimizations. Nothing wrong with that – certainly there’s a lot of performance to be had, as shown by pocc for example: http://www-roc.inria.fr/~pouchet/software/pocc/ – but again, I’d rather prefer it if they put the optimizations aside for a while and worked on the diagnostics.
Not at all. It’s called doing actual real-world benchmarks: building the kernel and userland, and finding each release is slower.
Well, while I’m not rolling my own kernel, I do compile applications like blender and inkscape regularly (at least on a weekly basis), and my experience doesn’t match yours. While I haven’t noticed any major improvements in compilation speed these past releases, I’ve certainly not noticed any regressions either.
Though this could perhaps depend on which optimization flags you use – optimizations are added regularly to -O3, which could measurably slow down compilation – I certainly haven’t noticed it. Since I’m curious, I’d like to do some benchmarking of my own in this area; can you offer any statistics with which to compare, and also say which gcc versions you are talking about?
Maybe in terms of code generation, but adding a frontend to gcc at first sight seems a nightmare in comparison to llvm (I am a fan of stallman/GPL). I hope they change it or prove me wrong. Switching to C++ sounds interesting and possibly rewarding.
Best at what? I thought it was common knowledge that icc is better for performance.
http://multimedia.cx/eggs/icc-vs-gcc-smackdown-round-3/
GCC is of course more portable but I wouldn’t call it the best.
Outdated common knowledge, then. GCC has improved very quickly over the past 5 years in the performance area, and simultaneously the nature of compiler optimization has shifted: 5 years ago it was mostly about the back-end carefully selecting asm instructions, which ICC is very good at, but nowadays programmers of performance-critical software use more and more intrinsics (especially for SIMD), so to a large extent they control that themselves. In other words, ICC’s superiority in the area of auto-vectorization is becoming irrelevant, as that doesn’t get nearly as good results as intrinsics-based vectorization anyway. Instead, compiler optimization has been moving to a larger scale, with e.g. the advanced loop transformations (polyhedral model) introduced in gcc 4.4, partial loop unrolling, partial function inlining, better constant propagation, etc.
Talk is cheap, real benchmarks are more telling.
The one I provided was from 2009.
Here’s another from 2010:
http://macles.blogspot.com/2010/08/intel-atom-icc-gcc-clang.html
Here is another:
http://www.luxrender.net/forum/viewtopic.php?f=21&t=603
ffmpeg benchmark:
http://geminialpha.blogspot.com/2008/03/icc-vs-gcc-43.html
Here is someone showing how clamav can be recompiled with icc for a significant performance boost:
http://groups.google.com/group/linuxdna/browse_thread/thread/36a354…
I see no reason why I should believe that GCC will create a faster binary in most cases. But if you would like to convince me otherwise then pick some commonly used open source programs and create your own benchmarks.
A lot of these are outdated. They use GCC 4.3, 4.2, and, would you believe it, one (ClamAV) even uses GCC 3.4! ^^’
The sole test using up-to-date GCC is the 2010 one, and it shows that GCC 4.5 is often pretty close to ICC in terms of performance, although compilation is much slower.
Two concerns:
-This was an svn, pre-beta build of GCC, so compilation performance has probably improved a bit since (though one should test this).
-Also, I would like to know how code generated by ICC behaves on AMD processors.
You do have a point about ICC not being outdated, and being faster on at least some Intel processors, though.
It at least shows that the common belief is well founded. If you want to find and show some newer benchmarks then by all means please do, I don’t have an emotional attachment to any compiler.
I don’t have such attachment either, it’s just that I have some sympathy towards GCC because…
-It is the sole C/C++ compiler producing high-performance code which I know of that’s not owned by a megalomaniac mega-corporation or a processor manufacturer.
-It has some very useful extensions.
-It works everywhere and for just about every target.
-It’s heavily covered by documentation on the internet.
-It’s licensed under GPL, which I personally like better than BSD licensing.
Now, if they could only copy LLVM’s nice warnings…
Finally, one that is relevant. GCC 4.5 has been out for quite a while now, and GCC 4.6 is reaching the end of stage one this month. Very impressive results for ICC on the pi_fftc6 test.
This looks good for gcc, since Intel recently hired longtime gcc developers CodeSourcery to work on (amongst other things) improving optimizations in gcc for the Core iX range.
edit: oh, and here is the latest (ever?) smackdown if anyone is interested, still quite outdated though:
http://multimedia.cx/eggs/compiler-smackdown-2010-1-64-bit/
According to what criterion? llvm, clang, etc. make it a whole lot easier to add IDE plugins, write debuggers, static analysers, etc.
Every compiler has its strengths and weaknesses. E.g., compile a very large array of structs: gcc will become a monstrosity, taking enormous amounts of memory, while VC++ effortlessly compiles it in a VM with only 512MB RAM.
It’s not exactly clear-cut. But the fact is that gcc until recently deliberately lacked a decent plugin system, in order to prevent proprietary plugins, and LLVM has forced them to offer such functionality to compete.
Indeed. LLVM is something nice, be it only because it pushes innovation forward in areas which GCC traditionally doesn’t explore (internal structure, error messages, plug-ins…).
However, I’m not sure the memory and resource consumption of LLVM is a good argument. Currently, LLVM optimizes code much less than GCC does (at least on the tests provided here). Chances are that to optimize code as much as gcc -O3 (or maybe the new -Ofast, not sure what it actually does), LLVM would require much more memory than it currently does.
What I’m really surprised about is how quickly they got C++ support in, given that even to this day many compilers fail to conform to C++ standards that have existed for many years. Hopefully more operating systems will jump on board and expand the support further; maybe we’ll see LLVM/Clang become the official compiler for the *BSDs and their ports some time in the future, with many of the GNU/GCC’isms finally removed from open source code and replaced with standards-compliant code that can be compiled with any compiler.
Problem is, the C++ standard lacks some very useful things, like GCC’s __attribute__ ((packed)), which proves to be mandatory under certain circumstances. And a stdint header like that of C99, too – I’m tired of defining uintX_t myself in a compiler-specific fashion.
I know it is a pain in the rear that the standard lacks some niceties, but I’d sooner people wrote code according to the standard, even if it means they have to write an extra 1000 lines of code, simply for the sake of being able to pick up any compiler and have it all work nicely because the programmer chose to stick to the straight and narrow.
How was it done before these features appeared? They wrote it out the long way – time to go back to the good old days instead of looking for quick and dirty shortcuts that create giant clusterfucks when it comes to code portability.
It’s not a matter of the number of LOC; you just can’t define a packed structure in vanilla C++. Nor know the size of the integers you’re using (which is quite idiotic for anything but int, which is supposed to be the machine’s fastest integer type, if you ask me).
When people did not have compiler extensions, they used either known compiler-specific behavior (like my typedef of long as int64_t), gigantic macros that are horrible to debug, or assembly code, which has the worst possible portability. Generally a combination of all three. I don’t see what’s wrong with using compiler extensions, compared to this mess…
When I read this again, I fear it is going to be misinterpreted, so I prefer to say it right away: I know that I can use sizeof() to find the size of integer types. But that doesn’t make the results any less platform- and compiler-specific.
If I use, say, “short” or “long”, there’s no way I can know what the size in bits of those are on a random platform and compiler, which is highly inconvenient in some cases (like if you want to be sure you can store number X in an integer data type, or for low-level development tasks) while not being exactly advantageous in any way. See http://en.wikipedia.org/wiki/64-bit#Specific_C-language_data_models as an example of this horrible mess.
We wouldn’t have this mess if C, back in the day, had been defined with stdint-like fixed-size integer types in the first place, with only “char” and “int” as vaguely-sized types (char because it’s used to store characters anyway, and int because it’s a convenient shortcut for the machine’s fastest integer type), or if its horrible integer types had not been copied by C++ for compatibility reasons.
But well, I can’t go back in time and change this… And thanks to the introduction of stdint.h in the C99 revision, the thing has been fixed there. So I can only hope that this fix is ported to C++ someday. Fixed-size integers are just the way it should have been done in the first place.
Oh, and another almost-mandatory compiler extension to C and its derivatives that’s not properly described by the relevant standards: inline assembly. As awful as GCC’s syntax for it can be, it’s often much better than keeping separate .s files and writing headers for them, because…
-Doing so is overkill for those assembly snippets of less than 10 lines that form most of CPU-specific code.
-When you’re writing ASM, you don’t write portable code anyway.
-It’s better for code clarity.
-Except for that class of tiny CPU-specific code chunks, ASM is often used for high-performance code, that kind of code where even a CALL’s penalty can be too much…
Assembly’s place is in .s files.
I for one vote for not having inline assembly.
I don’t see any problem with having to write a few external functions.
Clang has been committed to FreeBSD -CURRENT, and can compile the kernel + world. Not quite ready for the ports tree, but one can boot and use a FreeBSD system compiled twice by Clang. It’s self-hosting!!
But in terms of compiling the ports, they aren’t there yet, because so many of the ports rely on GNU/GCC’isms that result in compilation failures. Maybe with the rise of a standards-compliant compiler we’ll see open source developers more attuned to writing their code according to those standards rather than using nasty hacks and workarounds.
Here is a sneak peek at FreeBSD 9.0 features:
http://ivoras.sharanet.org/freebsd/freebsd9.html
The big thing I’m looking for, alongside better hardware support, is tickless kernel support, to improve overall battery life/power management. What FreeBSD really needs is to remove HAL and provide a native FreeBSD back end to both GNOME and KDE, rather than the current situation, which is the worst of both worlds and none of the benefits.
GCC is standards-compliant if you ask it to be. Using command lines like this one http://c-faq.com/resources/fn86.html , it can be as pedantic and standards-compliant as possible (and report errors better).
Well, according to Phoronix’s results, a tickless kernel alone won’t magically improve power management; it just paves the way for future power management improvements (like pausing some daemons more or less aggressively when their presence is not absolutely necessary): http://www.phoronix.com/scan.php?page=article&item=651&num=1
Even simpler:
gcc -pedantic -Wall -Wextra
A “standard” is more often than not somebody else’s “nasty hacks and work arounds.”
But it is at least a set of “nasty hacks and workarounds” that at least some group could agree on, and that a whole contingent of programmers has to conform to.
It really makes life easier, and sometimes makes progress slower.