“[…] I think you can see where I’m going with this – by moving software to .NET, Microsoft frees themselves from the x86 pit. They need to compile the .NET framework for each platform they want to support, and they need to write the final compiler stage that converts the IL to machine code, and bingo your code runs on the new platform!” Read the article at Kuro5hin.
1. The compilation process sounds like Java's, but many say Java programs are sluggish; will .NET programs be too?
2. Linux and BSD programs come as source code, so I suppose they do not have platform-dependence problems.
You’ve hit the nail on the head with point 2. The only bit of binary-only code on my machine is the NVIDIA driver. Thus, I have complete freedom to move to whatever computing platform I want, without taking the performance hit (especially in startup speed!) of complex VM-based languages like .NET and Java. I know not everyone is as fortunate as me; some people are forced to use certain closed-source programs. Perhaps it would be useful, then, to compile IL or Java bytecode to native code at install time. With parsing and most of the high-level optimizations already done by the front-end compiler, the remaining work would be reduced to instruction scheduling and code generation. That could easily be an order of magnitude faster, and thus even complex apps could be installed in a few minutes.
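As it happens, the .NET Framework already ships a tool along these lines: ngen.exe, which precompiles an assembly's IL to native code up front so the JIT never has to run for it. If I have the syntax right, the invocation is roughly (MyApp.exe is just a placeholder name):

    ngen MyApp.exe

After that, launching MyApp.exe loads the cached native image instead of JIT-compiling the IL at startup.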
Windows, unless I am mistaken (which is often the case), sits on top of, among other things, the HAL: the hardware abstraction layer. My understanding was that if they “ported” the HAL over, Windows would be able to run much more easily (as opposed to porting everything to a new platform). But I could be far off base with that thinking.
Mostly obvious, but MS & Intel will remain wedded for quite some time to come in spite of the turf tensions.
If I were a computer architect at MS, I might be looking at the P4 or Athlon RISC cores and thinking, “We can do better.” Why not force Intel’s hand to build a decent 32/64-bit ISA that also runs directly on the Pentium RISC core, with full access to dozens of orthogonal registers, while for the moment keeping the x86 instruction decoder around long enough to establish a new, clean ISA that would let .NET run like a dream? x86 opcodes could and should go the way of the ISA bus, but Intel could still remain king of CPUs. Perhaps some of the Alpha talent & expertise could be better applied to a future x86 replacement rather than the server-only Itanium line.
It’s funny: with Hyper-Threading, Intel is gradually stepping on MS’s toes over who should be in control of the process scheduler.
And MS is gaining the leverage to dictate what the CPU should look like; I wonder why they haven’t done so already.
Point 1 is partially true (at least I think)
If I am not mistaken, a LOT of open source apps are dependent on GNOME or KDE libraries. So without those you cannot compile them on Windows, unless someone has made a program that lets you do this (I have no idea). What we need is pretty much what MS and Sun have been doing… only we need it to run MUCH faster and better. This will happen… sooner rather than later, IMO. But who am I? hehe
Point 1 is partially true (at least I think)
Java has a JIT compiler, which means slower app startup but faster execution at runtime. .NET is better in the sense that apps are only JIT-compiled once; the compiled binary is stored in a repository and recalled every time the user runs the app. It only recompiles if the app is updated.
With regard to the article, as much as I don’t like Microsoft and am resistant to .NET, it would have been useful to me a year ago. Prior to Microsoft’s decision to make Intel’s StrongARM processor the standard for PocketPC devices, there were two additional processors found in these PDAs: SH3 and MIPS. I develop software for PocketPCs, and it was a pain in the ass to have to compile and redistribute three different binaries; furthermore, during debugging I noticed different behaviors on different processors. My program would crash on the MIPS PDA, but not on the StrongARM.
Nowadays almost all PocketPCs, even Linux-based ones, use the StrongARM, but Toshiba’s new PocketPC uses Intel’s XScale CPU. Microsoft refuses to optimize Windows CE for the XScale, so software on that platform will not perform any better than on the 206 MHz StrongARM, even though the XScale CPU in the Toshiba runs at 400 MHz. When the .NET embedded framework comes out, I do hope Microsoft optimizes the .NET runtime for XScale; that way .NET apps will have a performance advantage.
Actually, in a strange way, .NET could be exactly what Apple needs as well. We already know that OS X, like XP, can be easily ported to any new architecture. All that’s stopping Apple is the fact that they’d lose application compatibility (which they sort of did with OS X anyway).
What if everyone wrote Mac applications and Windows applications in .NET? Not only would the Mac gain a lot of apps (any .NET Windows app would run on a Mac), but Apple would gain just as much portability as MS would.
Don’t forget Linux apps :-).
First off:
Maybe I’m being a bit harsh, but is this new to most people? I know that MS is feeding most people hype rather than facts (like any business), so a lot of those who seem to have opinions about .NET have no idea what it is. I applaud the author of the article for writing it. I’ve tried to explain it to folks on here, on Slashdot and on MacSlash, but it all amounts to a handful of goons saying that they’re afraid of HailStorm, and that they don’t want to have to pay monthly to use Office XP 2004.45. Those things are not .NET.
The MS HAL: (heh)
The HAL in NT/2k/XP does not achieve the same thing as the .NET system. The HAL abstracts the hardware to a certain level to make drivers easier to write and code easier to port. It means a common API wherever the HAL runs. It does not provide any means of binary compatibility between different processors, either via a virtual machine or through code translation. The HAL is sort of a micro-micro-kernel which sits underneath everything else in the NT line, including the kernel proper.
Java and JITs:
TLy said: Java has a JIT compiler, which means slower app startup but faster execution at runtime.
Uhhh, not quite. That might be a line fed to users and developers by Sun, but it is not necessarily the case. Look at the Smalltalk language, where VM technology really matured. With the Squeak implementation, the default VM is one that doesn’t do JIT compiling; it does regular bytecode interpretation. Another VM for Squeak (less portable, since it requires asm as opposed to just ANSI C for the other VM) works on the same binary image as the more common VM, takes no more time to start up, and adds no significant memory overhead. It simply deals with bytecode faster and runs Smalltalk apps faster.

I never understood why Java requires such an obscene amount of RAM and start-up time. Starting up a big Squeak environment and a non-trivial application (from scratch within the Squeak environment) on my iBook 500 takes maybe 2 seconds, with no disk-chugging. Starting a GUI hello-world Swing app takes at least 15 seconds and seems to grind the disk for some unknown reason. I guess it’s not unknown: that hello-world Swing app takes up a good 20-40 MB of RAM. Squeak, on the other hand, can achieve the same effect (with many libraries loaded, a full GUI system, and pre-compiled applications ready to run) in as little as 2-3 MB of machine RAM.
It’s no wonder Sun pours so much money into the hype behind Java. Otherwise, it’d just be laughed at.
If nothing else, we can look forward to things like .NET usurping Java: if not MS’s implementation of .NET, then the combination of MS.NET, Mono and dotGNU being available on many platforms. Java is the language of the dot-com dot-bomb; let’s all hope it goes down the same tube as so many of the other over-funded, poorly-executed get-rich-quick schemes we saw in the late 90s.
It’s kinda interesting people bash x86 so much. The “virtual” x86 instruction set has led to a whole lot of technologies that modern processors, even RISC ones, employ. For example, take register renaming. With a fixed register file, it is often hard to increase the number of software-visible registers (the SPARC’s register windows are one extension). Because of the limitation of only 8 registers on x86, register renaming was implemented as early as the Cyrix processors. Now a modern P4 has a huge register file: 128 integer and 128 floating-point registers. Because of the virtual instruction set, the internal logic of the P4 and Athlon has been able to improve at an incredible rate, much faster than if a direct instruction set had been used. So in the end, let’s not bash x86 so much. It’s true it’s probably time for something new, but x86 isn’t nearly as bad as the clueless masses make it out to be.
First, the Java JIT does not compile everything at startup; HotSpot compiles frequently-run code as the application is running. The rest of the code is interpreted.
The next batch of processors will probably come out running the .NET common runtime thingie natively, like Sun’s newer SPARCs run Java bytecode natively.
The idea of releasing a program in some form of platform-independent bytecode to increase portability across processors has its heart in the right place, but what I will never understand is why this process has to occur at application run time.
What I think would be much better is to provide an installer with two sets of code: native opcodes for your primary target architecture(s) (x86 in most cases) and platform-independent bytecode. If you’re installing to a platform for which you don’t have a native build, you translate the platform-independent bytecode at INSTALL TIME into native code. This achieves both goals, portability and performance, and all the end-user deals with is a longer-than-normal install if they’re using a platform which isn’t mainstream.
From what I understand of .NET, you could do this if you initialised all the objects used in your application from the install program; they would be compiled and ready to go (.NET only compiles once, then caches the result).
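To make the install-time idea concrete, here is a minimal C# sketch of what such an installer could look like. The file names (MyApp.x86.exe, MyApp.il.exe) are made up for illustration, and the IL-to-native step leans on the stock ngen.exe tool rather than touching every object by hand:

    using System.Diagnostics;
    using System.IO;

    class Installer
    {
        static void Main()
        {
            // Hypothetical layout: a prebuilt native binary for the
            // mainstream platform, plus a portable IL fallback.
            string target = "MyApp.exe";

            if (File.Exists("MyApp.x86.exe"))
            {
                // Native build bundled for this architecture: just copy it.
                File.Copy("MyApp.x86.exe", target, true);
            }
            else
            {
                // No native build: install the portable IL version and
                // precompile it now, so the user never pays the JIT cost
                // at application startup.
                File.Copy("MyApp.il.exe", target, true);
                Process ngen = Process.Start("ngen", target);
                ngen.WaitForExit();
            }
        }
    }

The user pays the compile cost once, at install, exactly as the parent comment proposes.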
You just have to compile the KDE/Qt and GTK libs on Windows, then link your apps against them.
People run KDE on many platforms.
It is hard to bolt on platform independence, like endianness handling, if you never considered it in the first place.
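For instance, code that casts raw bytes straight to integers silently assumes the host CPU’s byte order. A small C# illustration of doing it portably (the little-endian wire format here is an assumption for the example):

    using System;

    class EndianDemo
    {
        // Read a 32-bit integer stored little-endian in a byte stream.
        // Assembling it byte by byte gives the same answer on any CPU,
        // big- or little-endian.
        static int ReadInt32LE(byte[] buf, int offset)
        {
            return buf[offset]
                 | (buf[offset + 1] << 8)
                 | (buf[offset + 2] << 16)
                 | (buf[offset + 3] << 24);
        }

        static void Main()
        {
            byte[] wire = { 0x78, 0x56, 0x34, 0x12 };
            // Prints 12345678 on every platform, unlike a raw memory cast.
            Console.WriteLine("{0:x}", ReadInt32LE(wire, 0));
            Console.WriteLine("Host is little-endian: " + BitConverter.IsLittleEndian);
        }
    }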
The article may have pointed out the right things, but I believe there may be other reasons to move to IL than freeing Microsoft from the x86 platform.
An intermediate language, or bytecode, is very useful for programmers, because the code runs on a virtual machine which can easily control and check certain things.
Java is easy to program because of the virtual machine; it is also slow because of it. But in general there are more pluses on that side, because CPU speed increases a lot faster than anything else, which solves the speed problem. For example, when Java first came out, I remember it was extremely slow. Now, on my 850 MHz Pentium machine, applets feel like any other application.
It may be a natural move to IL for Microsoft. I think Java showed us that the future lies in this type of language, where it is very easy to write, debug and read code.
Why are PC vendors in bed with Microsoft (this is at least one of the reasons)? Why, when XP was announced, did many PC vendors stop investigating Linux on the desktop PC?
Because XP requires more resources than previous OSes. So, to improve the “experience” with XP, customers will plan to upgrade their PCs. Microsoft makes PC vendors happy because they are sure that PC sales will continue, in order to run new M$ products.
The same goes for .NET: I won’t be surprised if, to “increase your experience”, you’ll have to upgrade the PC (1 GB RAM, P4 3 GHz).
Technology is one side of .NET; the other side is making sure that customers keep doing the same things using [i.e. buying] faster hardware (put differently: that the software keeps getting more and more resource-hungry). This is the way to bring on hardware obsolescence quickly.
This way of doing business has been the core of the PC economy for more than 15 years. Like it or hate it, that’s how it is.
Enough facts! Let the FUD begin! 😛
The same goes for .NET: I won’t be surprised if, to “increase your experience”, you’ll have to upgrade the PC (1 GB RAM, P4 3 GHz).
The .NET framework currently runs fairly well on most machines that can run XP well. The system requirements for the .NET framework on XP are fairly close to the requirements for XP alone (a slightly higher RAM requirement, though to get XP to run really well you should have much more than 128 MB of RAM in the first place).
Technology is one side of .NET; the other side is making sure that customers keep doing the same things using [i.e. buying] faster hardware (put differently: that the software keeps getting more and more resource-hungry). This is the way to bring on hardware obsolescence quickly.
This way of doing business has been the core of the PC economy for more than 15 years. Like it or hate it, that’s how it is.
I’ve been running XP, the .NET framework, and VS.NET on some fairly old hardware for testing my apps (P3-500 w/ ~192 MB RAM). There are a few cases where upgrading to 256 or 512 MB of RAM would help, but most users aren’t going to be running VS.NET alongside a bunch of end-user .NET apps, and most people running VS.NET in the first place would want a heftier system just for the sake of compile times. If a system containing mostly components I bought 2-4 years ago can run this stuff, I fail to see how they’re pushing people to upgrade to the kind of hardware I normally use for development. (Frankly, I only replaced my 1 GHz system because the motherboard died when I moved; meanwhile, CPUs moving to 3 GHz just means prices of lower-speed CPUs go down for the average user.)
The only people buying new processors for XP are the same ones that just buy a whole new computer when they ‘upgrade’ to the latest version of Windows. Most of them were probably running 98 or 95, too. The most common upgrades required for XP are RAM and hard drive upgrades, either of which is fairly cheap.
Like someone else already said, the biggest advantage of .NET is the sandboxed environment… once you use a good virtual machine it’s really painful to go back to C/C++.
For those worried about speed, keep in mind that Java is not the only representative of a system like this. Java is indeed a poor example, taking up more RAM than it should and offering a very poor user experience. I’ve heard that Java servlets are fast enough; that’s swell. But most people agree (excluding those with 2 GHz machines, of course) that non-trivial Java *applications* are slow, often too slow. Just because Java is slow doesn’t mean that .NET will be slow. Of course .NET will be a little slower than completely native code. There’s a trade-off, and as a developer, I’ve found it to be a very worthwhile one.
“The same goes for .NET: I won’t be surprised if, to ‘increase your experience’, you’ll have to upgrade the PC (1 GB RAM, P4 3 GHz).”
Oh please!!!!
I run a C# IDE written in C# on my K6-2+ 500 w/448 MB RAM (W2K), and it runs as fast as my Delphi and C++ Builder IDEs.
In this case MS has surpassed even my expectations.
As a professional Java developer, I would never think of building a Java-based GUI app, even with JDK 1.4.x.
I would consider building a GUI app with C# without question.
When they first announced the CLI and IL, I thought it was a brilliant move (I still do), specifically for this reason. The more services and software that can be pushed into this independent form, the more quickly the software can move from platform to platform.
That’s a good thing for MS, as it gives them the practical capability to advance with disparate kinds of hardware. When NT came out for, what, MIPS, Alpha and Intel, the issue for users wasn’t that NT could run on multiple platforms. The issue was that the software they wanted to run could not. All of the old Windows software moved semi-painlessly to the x86 version of NT, but stopped cold on the MIPS and Alpha versions.
Now, imagine if, say, Office.NET were written to the CLI. That software could then run on XP for x86, Itanium, “Hammer”, whatever. Will it be the fastest it can be? No. But many will tell you that slow software is better than no software, and I don’t think it will be that slow. I think the fact that you can compile C++ into this form is a very important detail.
As for Java vs .NET (actually, CLI), the key difference here is that Java is designed to run on a virtual machine. Java is running in everything from cell phones to 100+ CPU mainframes. And it’s not the language that’s been ported, it’s the VM.
The VM bytecode is designed to be interpreted. Interpreter writers are able to take the interpreter and VM semantics and implement them with a JIT, but they DO NOT HAVE TO. It’s an option taken for performance. On a space-limited system, it’s not required: the VM can run purely interpreted and still be compliant.
CLI/IL is NOT interpreted, and was not designed to be. It must be compiled first, then executed. This decision may prevent CLI from getting into smaller devices, but it allows the designers to make assumptions about the system. For example (and I don’t know this for sure), IL may compile to better code than JITed Java bytecode for the same functionality. I’d like to think that’s the case, but I don’t know.
But if that is the case, then compiled .NET code will be more efficient than JITed Java code, because .NET code was designed to be compiled all along, whereas Java code was not. JITing was, while probably not an afterthought, less of a design consideration than the interpreter itself.
Also, the Oberon system compiles its source code into an intermediate form, distributes that as a “binary”, and then does final code generation on the target platform, providing the code generator with lots of nice semantic information to use for optimization that is usually lost in something like Java bytecode. I don’t know whether .NET maintains that kind of information or not.
Basically, why distribute machine language when you can distribute the compiler’s final internal form and let the target system do the last phase?
The other interesting point is that, even with the huge dominance of the Intel platform, MS is still covering its bases for the Next Thing.
Everyone who argues it will painfully slow is wrong.
Everyone who argues it will be as fast as native apps is also wrong.
One big point here is being able to target a specific CPU. Running .NET apps on a 386 will be slower. Running them on a Pentium 4 will be about the same speed. Running them on an Itanium will result in huge speed benefits.
Running them on a PocketPC will be slower than anything else, but will be possible.
> 1. The compilation process sounds like Java's, but many say
> Java programs are sluggish; will .NET programs be too?
There’s a slight initial overhead as the JIT compiler bootstraps, but then they run pretty smoothly. The best part is the (default) graphical interface (WinForms): clean (none of that nonstandard or quasi-standard “skinned” widget crap you have to put up with in Java) and FAST! (It’s hard to tell a .NET program from a normal Windows program.)
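To give a feel for why, here’s a minimal WinForms sketch. It wraps the native Windows widgets directly, which is exactly why it’s indistinguishable from any other Windows app:

    using System;
    using System.Windows.Forms;

    class HelloForm : Form
    {
        HelloForm()
        {
            Text = "Hello from .NET";
            Button b = new Button();
            b.Text = "Click me";
            // A standard Windows button: native look, no skinned widgets.
            b.Click += new EventHandler(OnClick);
            Controls.Add(b);
        }

        void OnClick(object sender, EventArgs e)
        {
            MessageBox.Show("JIT-compiled, but it feels native.");
        }

        [STAThread]
        static void Main()
        {
            Application.Run(new HelloForm());
        }
    }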
> 2. Linux and BSD programs come as source code, so I
> suppose they do not have platform-dependence problems.
ROTFL! Sorry, but platform independence isn’t such a simplistic concept.