“Whether X configuration is getting you down or you need a lightweight graphics solution for that embedded app, Richard Drummond thinks SciTech has the answer.” LinuxFormat’s complete review is posted at SciTech’s web site in PDF format.
Going by their figures, I expected some kind of performance improvement under X. I was disappointed, however, to find that with SciTech SNAP my screensaver [Euphoria (OpenGL)] ran very choppily, when it used to run just fine before.
On a related note, RC3 is now available. The chipset support list has been expanded to include 150 chipsets, up about a dozen from RC2.
http://www.scitechsoft.com/products/ent/snap_linux.html
Very interesting.
SNAP has no OpenGL support (yet), so anything that uses OpenGL is going to run slower than with accelerated XFree86 drivers. If you don’t use OpenGL, SNAP may be worth a look; if you do, then you don’t want SNAP.
SciTech SNAP Graphics currently offers OpenGL support in software only, not in hardware as I assume your existing driver does. What this means is that if you require blazingly fast performance from your 3D screensaver, then SciTech SNAP is likely not a solution for you. However, if you are looking for a drop-in-and-forget display driver solution, then happy days are ahead.
By the way, thanks for taking the time to try SciTech Graphics for Linux, we greatly appreciate your time.
Like the manual says… it “currently does not support the XVideo extension or GL direct rendering, thus it doesn’t support hardware acceleration for video playback or 3D graphics.”
SciTech has done a tremendous job creating a generic video card driver. For example, if you are still pulling out hair trying to get Linux to co-operate with your Savage8 video card, this is your best solution. If you go with the VESA solution, you will go bald and blind!
This software will not be a gamer’s choice, but it is an excellent product for the office environment.
I’m just curious how hardware vendors think about SNAP. Is it well supported by the industry? Do vendors write drivers for SNAP (like they do for Windows, and some do for Linux), or is it just the SciTech guys?
I really hope this technology catches on, with users and OS and Hardware vendors.
Oh, and I hope they add 3D soon (OpenGL and D3D – wouldn’t that be cool – D3D on Linux, FreeBSD, OBOS, Mac OS X, etc. 😉 )
Best wishes.
I wonder, how does SciTech’s solution compare with XiG’s Accelerated-X? Forget OpenGL (which XiG has in hardware, and SNAP doesn’t) for the comparison; how do they compare in 2D?
In a nutshell, vendor support is good, but we would obviously like to see more of it. We get excellent cooperation from companies such as IBM, HP, Intel, ATI, and Matrox, and decent support from others (VIA/S3…). In most cases we have long-standing relationships that allow us to gain access to the required hardware and specs early in the development process, and as such we are able to get robust drivers to market shortly after product launch (many times earlier).
As for extending support to other OSes, we have made our SDK available and welcome contributions from OS developers looking to add complete graphics support to their OS – you will be hearing related news on this front in the coming weeks.
At least for me, my video cards seem to be supported equally well in both SNAP and X11. If these drivers had full support for 3D, or maybe supported the Linux framebuffer (now those drivers are really lacking), it would have been a great driver pack. But for X11 usage it isn’t worth the money…
You are correct in that there is not much to offer the consumer style user of Linux right now. Our target market for this initial release is more along the lines of Enterprise Customers, or customers who can’t get XFree86 working properly on their hardware.
As for Linux framebuffer support, I am working on that aspect of it 😉
Is SciTech MGL an alternative to OpenGL?
And does MGL work with the SNAP drivers (or is it one or the other)?
Regarding how we compare to alternate and/or forked versions of X such as XiG: I’m not really certain how to answer this directly, as the two products are so different in approach. For example, looking at XiG I see 9–12 variants of their solution, all offering various levels of functionality, with price points from $39 to $199 per seat. Whereas SciTech SNAP Graphics for Linux comes in one flavor, with all features enabled and support included, for $19.95. The XiG solution is a complete X server replacement and therefore does not work with XFree86. SciTech SNAP Graphics for Linux is designed to work in concert with XFree86.
Additionally, in keeping with our development goal of building a simple, powerful, and compatible graphics solution, SciTech SNAP Graphics makes no product distinction between system types, i.e., laptops, servers, and workstations. SciTech SNAP Graphics supports them all in a single, easy-to-install package. By contrast, XiG has separate versions for various system types and offers multiple combinations of feature support within each product type – I don’t know about you, but the product page confused the heck out of me.
Again, SciTech SNAP Graphics for Linux is designed to simplify things for the end user, whether they manage a single machine at home or 40,000 machines at work.
Since the SciTech developers are probably listening, I have some questions. I’ve been following SNAP ever since Rocklyte announced that Athene would use SNAP on top of a linux kernel (instead of X or DFB). I was intrigued. But after reading about SNAP it doesn’t seem to be what I expected.
What I would really like to see, and I think something that would help Linux gain market share as a desktop OS, is a rootless graphical environment akin to Mac OS X’s Quartz/Cocoa which bypasses X altogether. For a desktop OS, it is completely unnecessary to use the client/server architecture of X. X performs very poorly compared to other windowing systems, and I think the architecture is to blame. We need to see a new environment (like DirectFB, but with decent hardware support), we need to see GTK, Qt, etc. get ported to this environment, and then Linux will be unstoppable – it will be like Mac OS X for inexpensive PC hardware.
Is SNAP such a thing? I originally thought so, but it doesn’t seem to be independent of X. As long as we go through X, performance will be bad. No matter how good the drivers are.
No, MGL is not a replacement for OpenGL (if it were, I would be one happy guy ;). SciTech MGL is a low-level graphics library which can be used to develop anything from multi-OS applications and OSes to high-performance games. Way back in the day, a version of Quake 2 was actually underway which utilized SciTech MGL as its graphics lib… you may be able to find versions of MGL Quake and Hexen II floating around on the web to this day which utilize SciTech MGL.
Well, the good news is that SciTech SNAP is such a thing, as it can be completely independent of X. Again, take a look at the white paper which is available on the SciTech website. However, if you are like me and prefer to have answers rather than pointers to answers, read on ;)
SciTech SNAP is a device driver API created by SciTech to address the issue of writing multi-OS device drivers that can be easily and cost-effectively supported across multiple OSes. The “Graphics” portion is the hardware-specific code, which comes in the form of OS-neutral driver binaries (roughly 200 individual graphics chipset drivers and counting). The “for Linux” portion is simply the shell driver, or the OS-specific code. The real power in this architecture comes when you realize that you can replace Linux with any OS in about 1000 lines of code – of course, testing and certification still need to be done, which is time-consuming but well worth the effort.
SciTech SNAP (API) + Graphics (200 graphics driver binaries) + for Linux (XFree86-specific shell driver) = SciTech SNAP Graphics for Linux – The Simple Driver Solution
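The layering described above can be sketched in code. This is a purely illustrative mock-up, not SciTech's actual API: the class and method names are invented, but the split is the one the post describes, an OS-neutral chipset driver that knows nothing about the host OS, plus a thin per-OS shell that adapts it.

```python
# Hypothetical sketch of the SNAP split: one OS-neutral driver core per
# chipset, plus a thin per-OS "shell" adapter. All names are illustrative.

class ChipsetDriver:
    """OS-neutral binary: knows the hardware, nothing about the OS."""
    def __init__(self, chipset):
        self.chipset = chipset

    def set_mode(self, width, height, depth):
        # In a real driver this would program the card's registers.
        return f"{self.chipset}: mode {width}x{height}x{depth} set"

class LinuxShell:
    """OS-specific shell: maps OS-level requests onto the neutral driver.
    This thin adapter is the part rewritten (~1000 lines) per OS."""
    def __init__(self, driver):
        self.driver = driver

    def handle_xfree86_mode_request(self, w, h, d):
        # Translate an XFree86-style request into the neutral call.
        return self.driver.set_mode(w, h, d)

driver = ChipsetDriver("Savage8")   # one of ~200 chipset binaries
shell = LinuxShell(driver)          # swap this class to port to another OS
print(shell.handle_xfree86_mode_request(1024, 768, 32))
```

Porting to another OS in this model means writing only a new shell class; the chipset drivers are reused unchanged, which is the whole selling point.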
I think that making another X server without any performance improvement is pointless. Why pay for an inferior product if I can use XFree86 for $0?
It would be better to sell drivers for XFree86 (and contribute to XFree86 development too) and easy GUIs for XFree86 configuration.
Why pay if I can use XFree86 for $0?
A great point, and if XFree86-supplied drivers work for your needs, then I can see little reason to ask you to purchase SciTech SNAP Graphics for Linux. Again, the real beneficiaries of this technology are end users who are struggling with misconfigured and/or buggy free driver solutions and simply want a machine that works, or IT professionals who need an easy-to-administer, supported solution that works on all their systems regardless of installed hardware or installed version of XFree86.
With that said, I would hope that if nothing else you would take the time to try the demo – who knows, you might be pleasantly surprised :)
No, MGL is not a replacement for OpenGL
OK, but do MGL and the SNAP drivers interact with each other?
Would I need the SNAP drivers for MGL to support Voodoo cards, for example? I looked at your “Diagram 1: Scitech SNAP Architecture” picture and I couldn’t figure out how MGL would work with the SNAP drivers.
For whatever reason, Red Hat 9 doesn’t auto-detect my monitor correctly and I have the whole virtual desktop thingy, where the desktop is larger than the screen itself so you have to drag the mouse to the edges to see the rest of it – extremely annoying! All previous versions of Red Hat detected my freaking monitor just fine… so, that’s what drove me to try SNAP. SNAP detected everything just dandy, installation was easy, and outside of no 3D or XVideo I liked it. Performance? Felt the same, nothing overly noticeable. XiG’s Accelerated-X really did feel faster in every area, but they’re pricey and installation is kind of involved. Plus the demo only lasts for twenty minutes after installation hell?!? Why not a week or something? Ugh. I dunno what the hell changed in RH9, but it pisses me off.
No – SNAP is a different thing: it offers an XFree86-specific driver (wrapper) to their binary driver system. And this system is independent from the OS itself… so these binaries are, e.g., already working like a charm on OS/2 as well…
So: only ONE driver binary, plus one specific wrapper per OS (like eComStation/OS/2, Linux, and so on).
So if a chip company provides support for one OS via SNAP, all other systems will benefit… or, to put the argument the other way: write us (pay us) one driver, but receive support for Linux, OS/2, BSD….
That should be nice for companies like Matrox and ATI…
Well, and SNAP itself (the product) is very interesting for companies like IBM – e.g., they licensed a special version a long time ago, and as a result “good old OS/2” has state-of-the-art video driver support, with hardware zoom and twin-view functions… 🙂
Now all these binary drivers may be “reused” for Linux as well :-))
And the last really nice thing about SNAP: it’s a dream for end users and system administrators. Just pull out your old card, plug in a new video card, and reboot – no new driver needed :-)…
You may even stay with your old version of XFree86, as SNAP is doing the driver part…
MGL might be a bit off-topic for this discussion, so may I suggest that you contact me directly and I will put you in contact with an engineer who can provide you with more information about working with MGL than you can digest in a single sitting ;)
Yes, MGL works with the SNAP drivers built into SciTech SNAP Graphics for Linux. It can also work without the SNAP driver package installed if you license the drivers to ship with your product (i.e., more intended for embedded/industrial systems developers who want a solution to use without X).
For a desktop OS, it is completely unnecessary to use the client/server architecture of X.
>>>>>>>>>>>
client/server is *free*. There are three ways to do graphics:
The OS X way: Each window has a dedicated buffer, and all rendering is done via the CPU directly into this buffer. This has the advantage of low latency between drawing calls (no context switches), but has the disadvantage that it is hard to effectively use hardware acceleration for drawing. (Before anyone mentions Quartz “Extreme,” take a *hard* look at Apple’s tech docs.)
The Windows way: The graphics server is in the kernel, and drawing is done by sending draw buffers to the kernel. This has the advantage of minimizing data copies and context switches. Also, it is easy to use hardware acceleration, because only one component (the GDI) is in control of the hardware. This has the disadvantages that it is insecure, and that kernel calls are so expensive (hundreds of clock cycles) that you gain little practical advantage from minimizing the copies.
The X11/BeOS/QNX/etc. way: The graphics server is a separate process. Drawing is done by batching commands into a local buffer. When the batch is complete, a kernel call copies the data to the server, and a context switch is incurred switching to the server. This has the advantages that it is easy to use hardware acceleration, and that it preserves stability and security. The disadvantage is that context switches and copies are incurred for each batch.
The last two are the really practical ones, because the first one makes it hard to use (current) hardware. With proper buffering, there is no practical performance difference between the last two. If kernel calls weren’t so expensive, the second one (the GDI method) would be really fast, but kernel calls are getting *slower* (in clock cycles) as time progresses, not faster. So you’re going to have to do lots of buffering anyway, and once you start buffering, you really lose nothing by putting the whole thing into a separate process.
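The batching argument above can be made concrete with a toy model: drawing calls append to a cheap local buffer, and only the flush pays the expensive kernel/IPC crossing. The cost numbers are made-up units chosen purely to show the amortization, not measurements of any real system.

```python
# Toy model of batched drawing: many cheap local appends, one expensive
# crossing per flush. Costs are arbitrary illustrative units.

CONTEXT_SWITCH_COST = 100   # hypothetical cost of one kernel/IPC crossing
PER_COMMAND_COST = 1        # hypothetical cost of one local buffer append

class BatchedClient:
    def __init__(self):
        self.buffer = []
        self.total_cost = 0

    def draw(self, cmd):
        # Cheap local append; no crossing into the server or kernel.
        self.buffer.append(cmd)
        self.total_cost += PER_COMMAND_COST

    def flush(self):
        # One crossing submits the entire batch to the server.
        self.total_cost += CONTEXT_SWITCH_COST
        n = len(self.buffer)
        self.buffer.clear()
        return n

client = BatchedClient()
for i in range(50):
    client.draw(("rect", i))
sent = client.flush()
print(sent, client.total_cost)   # 50 commands for 50 + 100 = 150 units
# Unbatched, the same 50 calls would cost 50 * (1 + 100) = 5050 units.
```

This is why, once buffering is in place, it matters little whether the server sits behind a kernel call (GDI) or behind IPC to a separate process (X11/BeOS/QNX): the crossing cost is paid once per batch either way.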
X performs very poorly compared to other windowing systems
>>>>>>>>>
Really? Got any benchmarks to prove it? The current ones on the net (and also personal ones I’ve conducted) prove otherwise. According to people familiar with the X architecture, the problems lie not in raw drawing speed (which X is really quite good at, thanks to its heavily buffered nature) but in synchronization and redraw handling.
and I think the architecture is to blame.
>>>>>>>>
You and hundreds of other armchair developers who have no real familiarity with the subject. BeOS and QNX prove that client/server GUIs can be blazingly fast. You really want to argue with them?
Hi Rayiner,
Your description of the three methods is accurate and well summarised; however, it must be pointed out that the speed of a GUI has less to do with throughput and more to do with the display management technique itself. During Athene’s development all three techniques were tried, and eventually it was settled that window buffering (as also used in OS X) provides the best user performance in the modern desktop space. If developing games, then the other two techniques are more appropriate, as they provide better hardware access.
The best solution, if you can manage it, is to provide one solution for the desktop (preferably with window buffering) and another for full-screen gaming. Microsoft also discovered this, in that case resulting in the delivery of DirectX. They will never gain the benefits of window buffering without a rewrite of their graphics system, though.
It should also be pointed out that the window buffering technique makes it possible to add translucency, transparency, and alpha blending effects to the desktop with comparative ease. If you try to do that sort of thing when using the other methods, it is nothing short of agonising for the developer (speaking from experience). Since these new interface features are in demand, I think that the use of non-buffered techniques in future window systems is going to be limited to embedded systems.
And yes, X11 is slow. Not for raw performance reasons or the server calls, but mostly because its design means that any graphics data held privately in the client app needs to be constantly copied through to the X server when blitted to the screen. There are ways to get around that (such as opening graphics memory at the server end), but this prevents the programmer from being able to read pixels off the surface very quickly, or even use the CPU on the surface at all. It’s a double-edged sword for X11 coders, and the available solutions like XFDGA and XShmImage are hard to use and/or inadequate for desktop apps. In the next version of Athene/X11, shared imaging will be used to circumvent this issue for us, but X11 will always have problems because its design is so outdated and behind the pace of modern video technology. That’s why Rocklyte chose SNAP as a graphics solution – it does what we need in order to support modern hardware to its full potential. Here’s hoping that X dies sooner rather than later (and Rocklyte will persevere in building a free solution to do just that).
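The window-buffering compositing described in the post above comes down to the standard "over" operator: each window renders into its own buffer, and the compositor blends the buffers, with per-window alpha, into the final image. A toy illustration with one grayscale value per window keeps the math visible; real compositors apply the same formula per pixel and per channel.

```python
# Toy illustration of compositing per-window buffers with alpha.
# One grayscale "pixel" stands in for a whole window buffer.

def over(top, top_alpha, bottom):
    """Standard 'over' operator: blend top onto bottom."""
    return top * top_alpha + bottom * (1.0 - top_alpha)

desktop = 0.0          # black background
win_a = 200.0          # bottom window's buffer (fully opaque)
win_b = 100.0          # translucent window's buffer, stacked on top

composited = over(win_a, 1.0, desktop)     # opaque window covers desktop
composited = over(win_b, 0.5, composited)  # 50% translucent window on top
print(composited)
```

Because each window's contents live in their own buffer, adding translucency is just a change to the blend step at composite time; no application has to redraw, which is why the post calls this "comparative ease" next to clip-list-based schemes.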
I don’t know the technical reasons why, nor do I have benchmarks, but X11 always feels slower than just about every other OS I’ve run. It just feels slower to the end user.
@Paul Manias:
While I appreciate your point of view with respect to the experience you have implementing your system on top of X, I would argue that you’re trying to apply a particular style of programming the graphics hardware to an architecture that demands a different style. In this regard, the biggest problem with X is not the internal design, but how its functionality is exposed to the programmer. This is why effort is being expended on coming up with more efficient and programmer-friendly replacements for xlib. Besides that, people who have similar experience implementing graphics systems on top of X (Havoc Pennington among others) do not seem to think that there is an inherent architectural limitation in X that prevents writing efficient GUIs on top of it.
@CaptainN:
First, the “feel” is very subjective. A lot of people judge “feel” by resizing windows. Resizing windows (or, generally, just playing around with widgets) is a worst-case situation for most X apps, and they display performance that does not give the same impression that you would get using the system on a regular basis.
Second, GUIs on top of X (specifically, KDE and GNOME) usually feel slower than Windows. For me (KDE 3.2 CVS on a P4 2GHz), the difference is very marginal, because KDE seems to get much better with moderate increases in CPU power, but I do remember it being a great deal slower than Windows when I had a PII-300. However, the progress of GUI apps in the last few years has convinced me that the problems could not possibly be in X itself. X hasn’t changed that much since 4.x came out. Yet my GUI is at least twice as fast as, say, when KDE 3.0 came out. This huge difference indicates to me that the problem is in the higher layers of software, not X. Testing done by people writing GUIs on top of X bears this out.
Okay, a little feel test. My setup:
2GHz P4.
640MB RAM.
64MB GeForce4 MX, latest NVIDIA drivers in both OSs.
Gentoo 1.4 and Windows XP SP1.
KDE 3.2 CVS (Circa August 20-ish)
Default XP Theme, ThinKeramik KDE theme
15″ Dell LCD, 1600×1200
Konqueror vs IE:
Resizing OSNews: No rubber-banding in either; frame movement in IE noticeably jerkier.
Resizing Slashdot: Mild rubber-banding in Konqueror. Mild frame jerking in IE. When making the window smaller slowly, the WinXP scrollbars almost disappear, while the Konqueror window frames show lots of tearing, but keep up better.
Conclusion: Slight edge to IE. Windows XP apparently does some sort of contents/window-frame synchronization, so instead of the window contents rubber-banding, the window frame resizes jerkily.
KWord vs MS Word 2000:
Resizing page with heavy text: Toolbars and menus resize equally smoothly. Internal text view resizes similarly smoothly. Much more tearing in KWord scrollbars, but Word doesn’t seem to use the same scrollbars as IE. It uses grey scrollbars instead, which are much less complex than ThinKeramik’s scrollbars.
Conclusion: Smoothness award goes to MS Word, only because of the scrollbars. Qt needs to double-buffer its scrollbars.
Konqueror File Mode vs IE File Mode:
In a complex directory, Konqui has the definite edge. It shows very slight flashing (might just be my LCD, though) as icons are moved around, but the overall resize is perfectly smooth. The frame movement in IE is noticeably jerky.
KWord on top of Konqueror vs MS Word on top of IE:
Both are opened to the Slashdot main page. Konqueror shows no redraw when KWord is resized on top of it. IE shows a large black square for 1/3 of a second until IE catches up.
Conclusion: Konqueror runs away with it. IE needs to do better buffering.
AOL AIM vs Kopete: Zero redraw for both. Conclusion: tie.
Visual Studio.NET vs KDevelop:
When resizing a window in IDEAL mode (with several sub-windows, like documentation, code view, etc.), there is a little bit of flicker at the edges of the resize operation. In Visual Studio .NET, there is extensive internal flicker (between sub-views) and rather bad window frame behavior. Resizing is jerky, and the red close button disappears because it can’t keep up.
Conclusion: KDevelop. And unlike VS.NET, it looks like the rest of the desktop
General impressions: The “snappiness” of both systems is about the same. Menus pop up just as fast, etc. Opening the configuration tabs of KWord, MS Word, IE, and Konqueror all takes about the same amount of time. Windows seems more polished, with less redraw in places and less flicker. The frame-sync feature eliminates most rubber-banding but makes frame movement jerky, which is an okay tradeoff. However, KDE (thanks to the 2.6 kernel) behaves a whole lot nicer under load and, in general, suffers from fewer pauses. For example, ePSXe (my PSX emulator) freezes up Windows for tens of seconds when you try to exit full-screen mode. Linux hardly freezes up long enough to notice.
It’s not X. Period. KDE and GNOME are both slow, but X doesn’t have to be. ROX-Filer is extremely fast, even on my lowly 266 MHz PII. Fluxbox is also very, very fast and extremely low in overhead. Flux + ROX = speed. It always remains responsive, yet KDE and GNOME… GNOME seems to be getting faster (I don’t use KDE much), but nonetheless it’s slower than it should be. Why? I dunno. It’s nowhere near as slow as Mac OS X was, or still is, on lower-end hardware, but still. GNOME runs considerably faster on my bondi blue iMac (233 MHz G3) than Mac OS X ever did, for example.
Thanks for your explanation of the way each windowing system works. You clearly have a much better understanding of how each one works than I do. I was under the impression that all display data in the X system had to pass through the network driver – is this not the case?
While my technical knowledge of X is severely limited, as I have only done programming for Windows, I run both a highly tweaked (read: as fast as possible) copy of X on my Gentoo install and Windows on a tri-booting machine (Win XP, Gentoo, and QNX Neutrino). In terms of GUI responsiveness and speed, Windows blows X away. This may be because of better driver support, but I don’t think so – surely, the ability to direct an application to a remote display by means of layering the architecture with the network in the middle is not free. In my opinion, this should be sacrificed in a desktop OS to improve the local user experience.
On a more interesting note, even without good driver support, QNX’s Photon GUI blows both Windows and X away. Whether this is because it is simply not feature-rich, I don’t know – I had assumed it was because Photon did not do the whole remote display thing, but I could be mistaken.
Additionally, though I have been an avid Mac-hater in the past, I have to admit that the responsiveness of Mac OS X’s GUI blows EVERYTHING else away. I have never used a better operating system. Period. Since Macs are ridiculously overpriced, programmers on the x86 platform should make it our goal to devise a desktop operating system whose elegance rivals that of Mac OS X. If Microsoft is not the first to do this (and methinks they would have to start over with a brand-new codebase, or grab the BSD codebase like Apple 🙂 ), then whoever is the first to do this will have a chance at being serious competition to Microsoft.
I have been excited about this for some time. Before it was the 3D that bothered me.
Now if it had Xvideo support, I would buy it TODAY.
Also enabling TV out on cards with an encoder chip would be very nice.
The PnP support alone is worth $20. The speed difference is nothing. If you can notice the speed difference (in the few tests where SNAP is slower) in everyday use, you must be a cyborg.
I definitely want the 3D support, even at a higher price. But not having Xv support really kills the plans I had for a semi-embedded non-X device for commercial distribution.
How bad is video playback without Xv on these? Usable?
This comparison is surely fundamentally flawed?
Windows redraw “feels” awful to me and always has — I came from the Amiga to the PC (no, I know it’s dead, I’m over it) and Windows has always felt “sluggish” by comparison.
So comparing X-based stuff to Windows and declaring it “as good” isn’t really much of a comparison.
“It’s as fast as a not-very-fast thing!”.
I understand where you’re coming from, etc. but…
Well, compared to, say, BeOS (from what little I saw – I liked it but I don’t need another marginalized platform) the redraw in Windows is pretty awful…
And yeah, I agree with other posters that the admittedly-subjective feel of X11 is that it’s a dog. And that’s coming from someone who’s job is a unix admin…
Well, the speed penalty for copying a 600×480 private XImage to the display instead of using shared imaging is roughly 46% (on my machine), regardless of the fact that the server and client are local. That’s an extremely serious design flaw, given that this speed loss could have been completely prevented if X had been designed correctly in the first place. Despite the fact that hacks now exist to get around this, if a graphics system needs a hack to work properly, then the design structure is clearly broken. Flaws of this magnitude should be inexcusable no matter what your opinion.
Most developers aren’t even aware of the issue, or the fact that it’s down to them to employ a hack to circumvent it. There is practically no documentation on this subject at all and it takes a lot of hard work to develop a full awareness of the various problems when developing for X11. The fact that a developer even has to do such research, whereas other graphics systems ‘just work’, illustrates how bad things are.
Speaking in terms of theory, one can always argue that X is fast enough, but in *practice* this is certainly not the case. This is primarily why the benchmarks are saying one thing while the users know better from their experience in using real-world apps.
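The complaint above can be reduced to a back-of-the-envelope model: with a private XImage, every blit ships the whole client buffer through the protocol stream to the server, while a shared-memory image (MIT-SHM style) lets the server read the pixels in place after a one-time mapping. The functions below are an illustrative sketch, counting bytes transferred rather than measuring real timings, using the poster's 600×480 figure and an assumed 4 bytes per pixel.

```python
# Rough model: private-XImage blits copy the frame every time; a shared
# image pays only a one-time mapping. Counts bytes, not real timings.

WIDTH, HEIGHT, BYTES_PER_PIXEL = 600, 480, 4
FRAME_BYTES = WIDTH * HEIGHT * BYTES_PER_PIXEL   # 1,152,000 bytes

def private_ximage_bytes(n_frames):
    # Each XPutImage-style blit ships the full frame to the server.
    return n_frames * FRAME_BYTES

def shared_image_bytes(n_frames):
    # One-time shared mapping; per-blit transfer is effectively zero.
    return FRAME_BYTES

frames = 60   # e.g., one second of animation at 60 fps
print(private_ximage_bytes(frames) // shared_image_bytes(frames))
```

Even this crude count shows why the hack matters for animation-heavy apps: the private-image path moves the full frame once per blit, while the shared path amortizes a single mapping across all of them.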
Rayiner, your explanation of different windowing systems is decent, but incorrect on a few points:
“The OS X Way: Each window has a dedicated buffer, and all rendering is done via the CPU directly into this buffer. This has the advantages of low latency between drawing calls (no context switchs), but has the disadvantage that it is hard to effectively use hardware acceleration for drawing.”
What exactly about the Apple architecture makes it difficult to utilise hardware acceleration? Graphics cards have had the ability to draw into offscreen video memory, at any location, into a buffer of almost arbitrary size for many, many years now. Hence the fact that OS X stores window buffers in offscreen memory doesn’t mean that everything has to be drawn in software! Quite the contrary. All drawing for the window can be done entirely in hardware where the hardware supports the necessary primitives (including 3D and hardware YUV conversion/scaling) before the compositing engine takes the buffers and composites them for the final output. The important point here is that all the rendering can proceed with practically no clipping (except basic rectangular clipping to the window boundaries), which vastly simplifies the drawing code paths, making the drawing code smaller and faster.
IMHO, Mac OS X’s GUI subsystem is incredibly well designed and very snappy (I know, I use it every day at home on my PowerMac ;-).
“The Windows Way: The graphics server is in the kernel, and drawing is done via sending draw buffers to the kernel.”
No, that is not correct at all. Windows 2K/XP is actually very nearly identical to X in many ways, because the user-land GDI component batches up drawing commands into a local buffer and then passes the batch of commands to the kernel-level component. The kernel-level component decodes the batched instructions and then farms out the requests to the graphics driver. Many people don’t realise this, but Windows 2K/XP is actually a client/server-style architecture; the difference is that it is not over a network (or local Unix sockets), just between user space and kernel space. You still have all the same performance overheads, such as batching of drawing commands and the context switching between user space and kernel space.
Windows 9x, on the other hand, was nothing like this, and all drawing was done in user space, with mutexes used to control access to the hardware by a single process (mostly handled by the Win16Mutex). Strangely enough, the Windows 9x graphics subsystem is what many people would like to see for XFree86: direct rendering for 2D graphics.
“This has the disadvantage that it is insecure, and that kernel calls are so expensive (hundreds of clock cycles) that you gain little practical advantage from minimizing the copies.”
Kernel calls cause a context switch and are not really all that different from a context switch into another process, as used by the X11 client/server model. Context switches are expensive, period. The speed difference between a kernel call and an IPC call between two processes is negligible.
“The X11/BeOS/QNX/etc way: The graphics server is a seperate process. Drawing is done via batching commands to a local buffer.”
Yep, this one is dead on.
“X performs very poorly compared to other windowing systems
>>>>>>>>>
Really? Got any benchmarks to prove it? The current ones on the net (and also personal ones I’ve conducted) prove otherwise. According to people familiar with the X architecture, the problems lie not in raw drawing speed (which X is really quite good at, thanks to the heavily buffered nature) but in synchronization and redraw handling.”
I agree that X11 is not the problem, especially once you realise that Windows XP/2K is very similar to X11 in that they both use a client/server-style architecture. As Paul Manias has pointed out (as many XFree86 developers have done in the past on mailing lists), the sluggishness of X11 is not due to the X server or network layers, but due to poorly coded applications. Unfortunately for X11, it is *way* too easy to code applications poorly, due to the lack of documentation on how to use the necessary hacks to make your programs faster.
So in some respects I would agree that a cleaned-up and more optimised xlib API for X application developers would help solve this problem, but of course you then need to convert developers over to using this new API to write the apps! Many will just want to stick to what they know and continue with the status quo…
It is true that my experience with X11 is based pretty exclusively on my experiences with KDE and GNOME. Both of these projects declare that they are using best practices in X11 to develop their DEs. If that is the case and they are both slow, that points to a problem. I take your point that the programming interfaces to X11 are poorly implemented, and that may well be the problem with KDE and GNOME. Now the question is, what is being done to fix it?
About the subjective nature of the feel that it is slow, well, lots of people are saying the same thing. Are we all mad, or is it really slow (benchmarks mean nothing in this question)?
Back to the original topic (SNAP), will SNAP support other graphics libraries, such as DirectFB (is DirectFB a separate graphics library to replace X11)?
Back to the original topic (SNAP), will SNAP support other graphics libraries…”
The short answer to this question is yep;)
This is a question to the developers of SciTech SNAP.
If it only takes 1000 lines of code to support a new platform, is there “shell code” for Windows XP? Will you create one? That way, hardware vendors could create their drivers using your SNAP technology and support all platforms supported by SNAP at once!
Sure, we have shell code for Windows. Hardware vendors have no interest in doing anything different on that platform, though. That is their primary business, and they have far too much already invested.
Looks like they’ve FINALLY added the SiS630/730 chipset to their list, (the 730 seems to be used almost interchangeably on both SiS’s pages and most linux drivers) I’ll be sure to give this a whirl as soon as I can download again! [mutters] damn viruses infecting the Library’s computer IT..
Thanks for the feedback – look for news regarding the official release of SciTech SNAP Graphics for Linux on our website in the coming days. Other exciting announcements related to SciTech SNAP will also be made here on OSNews in the very near future as well as on the SciTech Site.
Cheers,
The guys from SciTech
What exactly about the Apple architecture makes it difficult to utilise hardware acceleration?
>>>>>>>>>>
Synchronizing access to the graphics hardware. You can do it, but most developers don’t design their 2D architecture to handle multiple simultaneous clients. As a result, switching between graphics contexts becomes very expensive. I don’t know if anybody remembers BWindowScreen, but it did something like this. As a result, applications had to follow a rather complex synchronization protocol to manage the graphics context. As I remember it, the Be developers never did like BWindowScreen very much.
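A toy Python model of the single-owner problem described above makes the cost concrete (all names here are invented and the “hardware” is just a counter): drawing is cheap while one client holds the context, but every change of owner requires an expensive state save/restore, so interleaved clients pay a switch on nearly every call:

```python
import threading

class GraphicsContext:
    """Toy model of single-owner graphics hardware: any client may
    draw, but whenever a *different* client takes over, hardware state
    must be saved and restored (the expensive part). The lock models
    the mutual exclusion real multi-client drivers need."""

    def __init__(self):
        self.lock = threading.Lock()
        self.owner = None
        self.hw_switches = 0    # expensive save/restore cycles
        self.draws = 0

    def draw(self, client_id):
        with self.lock:
            if self.owner != client_id:
                self.hw_switches += 1   # save old state, load new state
                self.owner = client_id
            self.draws += 1

ctx = GraphicsContext()
for _ in range(10):          # client A draws a burst: one switch total
    ctx.draw("A")
for _ in range(10):          # A and B interleave: a switch almost
    ctx.draw("A")            # every single draw
    ctx.draw("B")
print(ctx.draws, ctx.hw_switches)   # → 30 20
```

The burst costs one switch for ten draws; the interleaved phase costs nearly one per draw, which is why handing each application its own hardware context gets expensive fast.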
Hence the fact that OSX stores window buffers in offscreen memory doesn’t mean that everything has to be drawn in software!
>>>>>>>
It doesn’t mean that this has to happen, but for practical purposes, this is what happens. Unless you move to OpenGL, giving each app its own hardware context has large performance implications. Even with a good OpenGL implementation that properly implements support for multiple contexts, there is still a performance hit.
No, that is not correct at all. Windows 2K/XP is actually very nearly identical to X in many ways, because the user land GDI component batches up drawing commands into a local buffer and then passes the batch of commands to the kernel level component.
>>>>>>>
The user-level component is in GDI32.dll, in the same address space as the application. It’s directly equivalent to xlib.
Many people don’t realise this, but Windows 2K/XP is actually a client/server style architecture; the difference is that it is not over a network (or local Unix sockets), just between user space and kernel space.
>>>>>>>>>
It’s not a proper client/server architecture, because the “server” (the kernel) does not run in an independent thread from the client, and has full access to the client address space.
You still have all the same performance overheads such as batching of drawing commands and the context switching between user space and kernel space.
>>>>>>>>>
A switch between user and kernel space is not a context switch. Indeed, it’s several times faster than a context switch. Also, the semantics are nothing at all alike. A kernel call is more like a very slow function call than a context switch.
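That “very slow function call” characterisation can be ballparked from Python too. A minimal sketch, timing an ordinary function call against a call that enters the kernel (`os.fstat` on `/dev/null` is just an arbitrary cheap system call chosen for illustration; the numbers are machine-dependent):

```python
import os
import time

N = 100_000

def plain():              # ordinary user-space function call
    return 0

t0 = time.perf_counter()
for _ in range(N):
    plain()
fn_ns = (time.perf_counter() - t0) / N * 1e9

# os.fstat(fd) enters the kernel on every call: a user/kernel mode
# switch, but no scheduling of another process.
fd = os.open(os.devnull, os.O_RDONLY)
t0 = time.perf_counter()
for _ in range(N):
    os.fstat(fd)
sys_ns = (time.perf_counter() - t0) / N * 1e9
os.close(fd)

print(f"function call ~{fn_ns:.0f} ns, kernel call ~{sys_ns:.0f} ns")
```

The kernel call is noticeably slower than the plain call, but both are typically well under the cost of a full cross-process round-trip, which is the distinction this post is drawing.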
I agree that X11 is not the problem, especially once you realise that Windows XP/2K are very similar to X11 in that they both use a client/server style architecture.
>>>>>>>>
Except they aren’t, and WinXP/2k doesn’t.
Unfortunately for X11 it is *way* too easy to code applications poorly, due to the lack of documentation on how to use the necessary hacks to make your programs faster.
>>>>>>>>>>>
Agreed. Work is being done to combat this, however. The X developers and the DE developers are talking (read the mailing lists) and someone is trying to write a more direct xlib replacement.