“We are very pleased to announce this release, the first major update to cairo since the original 1.0 release 10 months ago. Compared to cairo 1.0, the 1.2 release doubles the number of supported backends, adding PDF, PostScript & SVG backends to the previous xlib/win32, and image backends.”
“the PDF backend in the cairo 1.2.0 release will emit vectorized output for all operations drawn with the default OVER operator, even when the drawing includes translucence. The PostScript backend emits vectorized output as long as there is no translucency. The SVG backend can handle basically anything with vector output”
Hi-res PDF printing in Firefox here we come. Stick that in your pipe and smoke it IE. Also, it was said a long time ago when Firefox trunk (now ‘minefield’) moved onto Cairo that it is technically very easy to save an image of the entire webpage being viewed (rather than just the viewport). An extension for Firefox already provides this functionality, but it hacks it together using Java. Having a simple “File > Save page as Image/PDF” would be awesome.
That would be sweet.
Well, this is nothing new; it was about time it became available on Linux too. Mac OS X has had this for ages, and Vista's new UI system (can't remember the name right now) has it as well, so I'm afraid IE will have it too
—
Pixel image editor – http://www.kanzelsberger.com
Weeell… we had it first, actually: in 2001 I could simply print to file in Mozilla 0.9.x, then ps2pdf the result, and voilà: instant pdf :)
(that’s what I do even now, actually: I also have a nautilus-action to convert the ps to pdf without touching the CLI)
I'm really impressed by how quickly X11 has advanced on Linux (and of course also on *NIX/BSD) since the death of XFree86 and the birth of xorg.
Graphical *nix was quite bad compared to Windows performance-wise, but it also had some major advantages. The main advantage is of course the network transparency. I really enjoy the ability to run an application remotely with perfect integration. Now if xorg can also be very performant, then Windows has an even more serious competitor, not only on servers but also on the desktop.
The entertaining thing is that the current Cairo/XGLX/AIGLX stack that everyone thinks is so fast and smooth is actually an order of magnitude slower than what was in there before. The main reason it seems faster is optimizations in GNOME, and more importantly, double-buffering of the desktop.
This is of course not intended as a slight to the Cairo folks*, but rather as a dig at the people who blamed X11 for performance all these years. Raw drawing performance wasn’t the problem, and it never was. Synchronization, which was solved as a byproduct of compositing, always was the issue.
*) Their attention to proper API design and high output quality is much preferable to a focus on performance. Moreover, it's not at all surprising if Cairo never becomes faster than core X11. Cairo does a whole lot more than core X11 does, and contrary to popular belief, the core X11 routines were already highly optimized.
That is a great point.
A well defined and designed application or library can always be refined and optimized for speed.
A poorly designed app that runs fast rarely ever gets refactored into a decent design that works well with other software. I mean, look at the Wintendo family of products from Microsoft.
I think X11 dodged a bullet, really.
Also, we should remember that Cairo is still a very young product, unlike X.org, which is very mature. Obviously, the initial focus for Cairo would be on API and design instead of performance. They are only starting to get a little bit of performance optimization in version 1.2. We probably have to wait for version 1.4 (before the release of Firefox 3.0/Mozilla 1.9) to see some significant performance optimizations.
XGLX and AIGLX are aimed at lightening the load on the CPU and speeding up very complex operations (Quartz Extreme and CoreImage type operations), not simple drawing operations.
You are missing one obvious fact: in some respects Cairo already is way faster than X. X had one problem, and it never was raw speed; it was the fact that the underlying technology never really allowed it to utilize the power of modern graphics adapters. Even if Cairo is slower in plain execution speed, its drawing primitives are at a much higher level and can utilize the functionality of modern graphics adapters far better than plain X.
There is a huge difference between issuing a set of commands drawing Bézier curves for one letter and simply issuing "letter C in font A at size B".
There is also a difference between shifting bitmaps over a network layer and just telling the graphics card "draw me polygon Y".
The only thing really faster in XGLX is drawing of composited bitmaps, which is used for anti-aliased text (glyphs are not drawn as curves, as you imply). This is something that has been available via RENDER for a while, though, and is quite independent of Cairo.
As for utilizing the capabilities of modern graphics cards: Cairo can do this in theory, but it doesn't yet do so to the point where the hardware acceleration makes things faster than core X. That's okay, because it doesn't really need to.
Which brings me back to my point. Utilizing the capabilities of modern hardware is really a moot point, because that’s really not what makes current user interfaces feel slow on modern CPUs. OS X still renders all its vector graphics on the CPU, and it feels quite smooth, because how fast you can draw primitives really isn’t the bottleneck.
their website tells me nothing about what it is, what it does, or why I might want it… clearly if I need to ask what it is I’m not l33t enough to need it!
“Cairo is a 2D graphics library with support for multiple output devices. Currently supported output targets include the X Window System, win32, and image buffers. Experimental backends include OpenGL (through glitz), Quartz, XCB, PostScript and PDF file output.”
Right on the main page.
If you use GTK 2.8+ or Gnome 2.10+ then you are already using Cairo and enjoying its benefits!
Cairo is a vector drawing library. It has nothing to do with the end user.
The statement I really like is:
” As with all of cairo’s major releases, cairo 1.2 retains both source and binary compatibility with the cairo 1.0 series. Programs compiled for cairo 1.0 should run with cairo 1.2 without requiring any modification.”
So they don't break API compatibility. That's one of the things lacking in quite a few open-source projects these days.
Have you ever taken the time to find out WHY some projects break API/ABI compatibility? Instead of spouting off senseless crap, how about investigating the decisions behind the changes?
Gstreamer 0.8 to Gstreamer 0.10 is a good example of what I said.
The way OSS projects work out the whole software life cycle is a little bit different. It's a spiral-like model where every previous version can be considered a prototype for the following version.
There is no up-front design and architecture, but continuous refinement between releases based on experience. Of course, at some point the mass of current users starts depending on the project's stability, settles down with the presently working design, and demands compatibility.
Finding the correct balance between the opportunity to make dramatic improvements and satisfying a larger user base is quite hard, but responsive and experienced maintainers know how to choose the best moment to announce a point release.
This part of the life cycle happens behind closed doors in commercial products, before the first release, so it's not possible to compare OSS and commercial release strategies.
The point is that the design of the project should be good enough that there is no reason to break API/ABI compatibility.
And you’re bitching about .80 breaking with .10 – oh, should I start bitching that Microsoft broke compatibility between .NET versions 1.x and 2.x? should I start bitching because Microsoft is breaking compatibility between Windows XP and Vista?
Please, there are perfectly valid reasons, and Gstreamer is hardly a project I would call "ready for prime time", no matter how much the halfwits over at GNOME embraced it.
BTW, if you had a small bit of knowledge of Gstreamer you would realise that .10 is the release that came out after .8 (it's not .10 and .80).
You could start “bitching” about MS .NET 1.x and 2.0 and you would be right to do so. It was a bad design decision on their part.
BTW, Vista doesn't break compatibility with XP/2000/9x. All your applications will continue to work fine. Try running the old Turbo C++ 3.0 on XP someday: you will realise it just works, even though it was made in 1992. Now that's compatibility.
Gstreamer made a bad decision in the beginning, and so they had to break API/ABI compatibility. It could have been avoided if everything had been planned well from the beginning.
It's impossible to future-proof, and expecting binary compatibility to stay forever is just idiotic. While trying to maintain binary compatibility is a good idea, you need to be willing to break it when necessary.
Consider if you figured out some algorithms that would make an application 100x faster, but implementing them would break binary compatibility. What would you choose? You can't perfectly plan out an application unless you never want to have a finished version.
If you can plan for all the API changes you'll ever need ahead of time, then your software probably doesn't do anything interesting. Breaking compatibility once in a while is a hell of a lot better than what Microsoft does, which is to slavishly maintain compatibility so much that huge amounts of workarounds get written to avoid breaking anything.
Yes, I agree with your point, but there are open-source projects that have broken API/ABI compatibility due to bad decisions, and that should not be the case with any modern open-source project.
Well, why don't you help those projects design their API/ABI? Or maintain the old APIs?
As always, criticism is easy; doing is hard.
Breaking compatibility once in a while is a hell of a lot better than what Microsoft does, which is to slavishly maintain compatibility so much that huge amounts of workarounds get written to avoid breaking anything.
Unfortunately, such is the duty and social service of producing Windows; this is something Microsoft has to try to do.
As it seems, you never really worked on a big project, did you?
ABIs/APIs can only sit still if the base requirements stay still. Once the requirements change, even by a bit, ABIs tend to break.
Now, you can maintain compatibility by creating the following code:
if (old_api_client)
        do_things_old_api_style();
else
        do_things_new_api_style();
But it'll complicate your code by an order of magnitude and make it barely readable. Oh, and even worse, it'll make your code more prone to bugs and security exploits.
You -only- need API/ABI compatibility if your library is being used to run legacy closed-source applications that cannot be modified and recompiled to use the new library code… but in the OSS world, we don't have that, do we?
BTW, haven’t you given up on posting the same point, over and over again?
Does that mean that in the FOSS world I have to rewrite a music application if the multimedia backend is not API compatible?
You will be wasting time doing that. Cairo is the way an open source project should be: well planned, and gstreamer showed us how we should plan some things from the beginning and not break API compatibility due to bad decisions.
You can -do- whatever you like.
Don't want to port your application to the new back-end? Help support the old one. (In the case of gstreamer, gstreamer08 is still alive and well, thank you.)
Sorry, I don't buy the "if you design a project well, you won't have to break ABI in the future" mantra. It's a load of ****.
Since I started writing code for a living, I have seen too many projects die of over-planning.
Even worse, I've seen too many projects become unreadable, incomprehensible spaghetti due to having 16 different iterations of the same API, all maintained side by side.
As I said, this is FOSS. Nothing forces you to do anything. Don't like gstreamer? Leave the post button alone and write yourself a new, well-designed audio API.
G.
Don't want to port your application to the new back-end? Help support the old one. (In the case of gstreamer, gstreamer08 is still alive and well, thank you.)
So you now have people supporting two different codebases… Talk about wasted effort.
It seems to me the point of the original poster wasn't "make a perfect design that won't have to be modified EVER", but rather "spend at least some time on design rather than hacking everything together". Not only might hacking everything together go wrong, it may also waste lots of time for everybody (since the maintainers and those relying on the project will have to rewrite their code).
It seems to me the Cairo developers tried to put a vision on paper before rushing to implementation. On the other hand, GStreamer isn’t a 1.0 project, so I guess such breakage was expected. In fact, the main culprits might be those early adopters.
In fact, the main culprits might be those early adopters.
I wouldn't say so. If there weren't early adopters, the gstreamer folks would have much less info on what is needed, what is good, and what is not.
Planning a lib as large as gstreamer is too big a job to be done in one day by one head. You have to learn from your mistakes, and those early adopters can only help you do that.
D-Bus is the same. And look at what they promote: no stable API until 1.0. Now, I've been using D-Bus in my code for quite a while and never had problems when they changed the API; that was part of the deal I accepted when I started using it.
When you use non-stable interfaces, it is best practice to write stable wrappers for your code. These will be shared across your code and will have a stable API. In case of a change you just have to correct the wrapper and everything keeps working as it should; the application itself won't even notice that a different D-Bus (in my example) is underneath.
After 1.0 I will stop using my wrappers; until then this is my way to go. For example, version x.y of my software will simply require D-Bus >= 1.0 rather than 0.20 as it does now, and it won't use the wrapper interfaces either.
ABI/APIs can only sit still if the base requirements stay still. Once the requirement change, even by a bit, ABIs tend to break.
Base requirements changing often are a sign of a badly designed or managed project.
“Base requirements changing often are a sign of a badly designed or managed project.”
Yes, but in most cases, it’s just a sign of OSS’ basic “release early, release often” philosophy.
Don't spend time over-designing things; don't spend time maintaining old ABIs if you want to break them; just let evolution take its course.
Yes, it sucks when gstreamer changes under your feet; it's even worse when I have to change my own kernel module(s) every time someone on lkml has a new brilliant idea.
On the other hand, I’ve been playing with Linux (and now developing exclusively on Linux) since 1996, and I cannot even begin to compare RedHat 1.x (1.01?), the first Linux I ever installed, to Fedora Core 5.
You may not like the way the OSS world does things… But the OSS world must be doing something right, or else, I wouldn’t be typing this message on vim7 running from within Firefox 1.5.0.4 which is running on KDE 3.5.3, which is running under X.org 7.0, which is running on… well, you get the picture.
G.
Good point. I'd also like to point out that some people here are talking about how wonderful it is that a program compiled for Windows 3.11 runs perfectly on Windows XP, yet open-source software inherently doesn't need binary compatibility due to its open nature. Sure, it's nice every now and then, especially for noobs (like me) who can't compile programs, need a binary for their system, and can't find the right .deb file… (although there are some projects working on that)
Binary compatibility is a problem created by the closed-source world and doesn't need to be introduced into the open-source world. The closed-source world tends to bend the rules to its liking and often tries to break compatibility with the open-source software world, so I really see no reason why the open-source community should step forward to help them keep binary compatibility (for example, how a newer kernel breaks nvidia driver compatibility and how that sucks, or how the new X.org 7.1 breaks nvidia compatibility and how that sucks). Why keep a new idea from being used, when it is not the open-source world's problem if something changes?
On my operating system, LoseThos, source code applications are the norm. In fact, things get compiled when you run them. With a few exceptions, binary files are not needed at all. This allows for forward compatibility, without locking things in. Since default parameters, like in C++, are available, functions can be altered without breaking compatibility–a new parameter can be added to functions and functionality changed without affecting old code.
See http://www.losethos.com
I don’t care how amazing your operating system is, your post is still spamming and braggartry. You’re not going to win followers that way.
Actually, although some Windows 3.11 apps run flawlessly on Windows 2000, many of the Win16 apps (and even some Win32 apps designed for Win95) didn't work or even install. I remember Word 2 definitely had some problems, which is an issue since few programs these days support the Word 2 binary format well. So 100% binary compatibility on the Windows platform is a myth, and it ignores the more important issue: file format compatibility. For every program you have, you'll likely have 1000 data files, many of which you'd like to keep without a lossy conversion. The open source world has been far ahead on this issue, since it tends to focus on text formats that can be easily upgraded if necessary, or on standards (like binary PDF or text-based PostScript or RTF or XML). I might have to learn to use a different program today than I did a decade ago, but I still have access to my precious data, which is what counts most.
BTW, a stable source-level API *is* important. POSIX and the X Window System today are the same as they were two decades ago (they both just have some extensions). Programs from the old days, for the most part, still work today, so you can build on your previous knowledge and skill. What happened in the gstreamer case is that the APIs are not at the 1.0 level, so they don't make any guarantees about having a stable API yet (other than the promise that they'd try their hardest to avoid breaking it). gstreamer itself (the last time I checked) wasn't declared an official part of the GNOME platform (it's a "technology preview" in Microsoft parlance) precisely because its API isn't stable yet. GNOME has committed to a stable source API within the 2.x series, so this policy makes sense. GNOME does, however, plan to incorporate it when its API is stable, so work on gstreamer is encouraged.
I guess more of this kinda stuff will be possible! :
http://weblogs.mozillazine.org/roc/archives/images/new-future.png
Rotating a page is a bit overkill IMHO, although I’m sure someone, sometime, may have a use for it. But what I’m really interested in is a zoom facility for Firefox comparable with Opera’s: fast and smooth.
Currently, Firefox only “zooms” by changing font size, which isn’t zoom at all. Well, it depends on how the webmaster designed the page; many of them break when you blow up the fonts even a bit.
Am I the only one that first thought we were talking about Microsoft’s long dead Cairo here?
yes.
Yes
Yes :)
And maybe you guys should go with AGG instead :) …
-pekr-
The problem with AGG ( http://www.antigrain.com/ ), so to speak, seems to be that it's too complex for people to easily understand. Unfortunately, this impression is created by its advanced use of C++ and its very high modularization: it's just out of reach of most *nix geeks.
Another issue is that AGG doesn’t have many backends, but we can trace this issue back to the fact that not many people seem interested in AGG, despite its superior (IMHO) design.
Congrats to the Cairo team. Cairo is leading by example in the areas of solid high-quality design and impeccable ABI compatibility since 1.0.
Some misconceptions I’ve seen in these comments: .NET 2.0 doesn’t break .NET 1.1 applications. In fact, .NET has very good assembly versioning which causes applications compiled for 1.1 to use the 1.1 framework library. Oh, and most importantly—.NET assemblies don’t _have_ ABI. Why? The runtime allocates all the memory between the caller and the callee so that will always work properly. Also, when a programmer references members of structures or classes, the _name_ is used as the identifier, as opposed to a memory address. Same with methods and parameters and delegates and etc etc.
The stuff underneath the hood (the CLI bytecode) didn’t change much either, just a few more features. That didn’t break backwards compatibility. And even if .NET didn’t have the versioning stuff, could we look at the version numbers? 1.1 to 2.0. That’s a major version number change. Version numbers exist to show compatibility. You’re supposed to change the major version number when the ABI changes. Oh, and just one of my personal views (shared by Cairo–see below) is that if your software has a major version number of ‘0’, then nothing is permanent.
One thing (perhaps the only thing) which Microsoft is _damned_ good at… is keeping old shit running. The majority of the work on XP was actually backwards compatibility testing. Too bad they can’t design anything before they make it (.NET was designed by the lead architect of Delphi from Borland by the way–not a surprise that Borland hopped on the bandwagon quick).
As for Cairo, well.. when it was first released, I jumped in and wrote (coincidentally) a .NET binding for it. As I upgraded Cairo with development, I had to continue updating the bindings until Cairo 1.0, when the API froze. I was happy when that finally happened.
The point is, APIs and ABIs alike are supposed to “stabilize” as the requirements set forth are matched and properly implemented. No project should hit the ground running with a perfect ABI that will never change. In fact, I don’t think it’s possible. Also, you just cannot expect your API to remain fresh forever. Requirements change.
Ahh, huge difference between program compatibility and library compatibility.