Spartan is still going to use Microsoft’s Chakra JavaScript engine and Microsoft’s Trident rendering engine (not WebKit), sources say. As Neowin’s Brad Sams reported back in September, the coming browser will look and feel more like Chrome and Firefox and will support extensions. Sams also reported on December 29 that Microsoft has two different versions of Trident in the works, which also seemingly supports the claim that the company has two different Trident-based browsers.
However, if my sources are right, Spartan is not IE 12. Instead, Spartan is a new, light-weight browser Microsoft is building.
Windows 10 (at least the desktop version) will ship with both Spartan and IE 11, my sources say. IE 11 will be there for backward-compatibility’s sake. Spartan will be available for both desktop and mobile (phone/tablet) versions of Windows 10, sources say.
I’m guessing not having to worry about supporting websites built for older versions of IE will make development a lot easier, and the change in name is a huge PR bonus. Shipping two browsers on Windows 10 seems a bit… well, I don’t know, convoluted. Hopefully we’ll be able to kick IE right off our computers.
The number of processor cores is increasing rapidly, while no big advances in single-thread performance have been achieved lately.
This trend is driven by two factors:
1. the engineering effort needed to achieve even more IPC keeps growing, and the performance-per-watt ratio keeps getting worse
2. battery lifetime has become a selling point
In other words, the ratio of single-thread performance to power consumption worsens as single-thread performance increases.
ARM’s big.LITTLE approach (http://community.arm.com/groups/processors/blog/2013/06/18/ten-thin…) takes this fact into account. The second figure illustrates power consumption over performance for a high-IPC design vs. a low-IPC design. The low-IPC design achieves a significantly better performance-per-watt ratio than the high-IPC design.
I hope Microsoft’s Spartan is a reaction to this trend, just as Mozilla’s Servo rendering engine is.
Update – Sorry, but I forgot to mention the Google Blink engine.
pica
Edited 2014-12-30 20:48 UTC
Interestingly, I just read a thread on reddit/r/programming where Linus argues that this kind of parallelism won’t happen:
http://www.reddit.com/r/programming/comments/2qsqus/linus_the_whole…
Not sure who’s right, but I have to admit I haven’t seen a massive explosion in the number of cores. Except for a few CPUs, core counts seem to stay below 10, even though we have had multi-core chips for more than 8 years.
Give it time. We are mostly limited by process shrink hiccups (yes, even almighty Intel is struggling to release new, smaller processes on time) and in the near future by the material (silicon) itself. Once these problems are solved though, we’ll see a dramatic increase in cores.
Obviously, there’s a point of diminishing returns, where adding more cores won’t automatically mean a faster computing experience. When THAT happens, perhaps the Von Neumann architecture itself might no longer be practical.
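That point of diminishing returns is exactly what Amdahl’s law describes: if only a fraction p of a program parallelizes, n cores can never speed it up by more than 1/((1-p) + p/n). A quick sketch (the 90% figure is just an illustrative assumption, not a measured workload):

```python
def amdahl_speedup(p, n):
    """Upper bound on speedup when a fraction p of the work
    parallelizes perfectly across n cores (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of a program parallel, the curve flattens fast
# and can never exceed 10x, no matter how many cores you add:
for cores in (1, 2, 4, 8, 64, 1024):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
```

With these numbers, 8 cores already buy less than a 5x speedup, and 1024 cores still sit just under the 10x ceiling set by the serial 10%.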
1c3d0g,
I think we’re already reaching diminishing returns today at around 8 cores using shared-memory applications/architectures. If we get rid of shared memory, the hardware becomes arbitrarily scalable; the catch is that multi-threaded programming would then no longer scale with the number of cores, since threads fundamentally depend on shared memory.
That’s why massively parallel architectures of the future will necessarily have to depart from the SMP we are used to. Massively parallel programming in the future will be more akin to programming many nodes in a cluster.
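A minimal sketch of that cluster-style model (my own toy illustration, using threads and queues for brevity): the “node” owns its state outright and is reached only through message queues, never through shared variables. In a real cluster each node would be a separate process or machine, but the programming discipline is the same.

```python
import threading
from queue import Queue

def worker(inbox, outbox):
    """A 'node' that owns its own running total and communicates
    only via messages -- nothing else touches its state."""
    total = 0
    for item in iter(inbox.get, None):   # None is the shutdown signal
        total += item * item
    outbox.put(total)

def sum_of_squares(nums):
    """Ship work to the node as messages and collect one reply."""
    inbox, outbox = Queue(), Queue()
    t = threading.Thread(target=worker, args=(inbox, outbox))
    t.start()
    for n in nums:
        inbox.put(n)
    inbox.put(None)
    result = outbox.get()
    t.join()
    return result

print(sum_of_squares(range(10)))   # -> 285
```

Because the only coupling is the two queues, the same code shape works whether the worker is a thread, a process, or a machine on the other side of a network.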
Wondercool,
Unfortunately the SMP route has its own barriers with regards to parallelism. Cache coherency is particularly problematic: every CPU needs to stay in sync, and as the number of CPUs grows, so does the traffic and contention on the interconnect buses. This contention can already be demonstrated on a 4-core system and will only get more pronounced as cores are added.
We eventually reach a point where the efficiency losses of adding more cores outweigh their benefit for parallel computing. I think this is the main reason we don’t see many massively parallel CPU architectures outside of special problem domains (like GPUs).
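As a back-of-envelope illustration of why that interconnect traffic blows up (a naive snooping-bus model of my own, not a real simulator): if every write to a shared cache line is broadcast to all the other caches, total coherence traffic grows close to quadratically with the core count.

```python
def coherence_messages(cores, writes_per_core):
    """Naive snooping-bus model: each write to a shared cache line
    is broadcast to every other core's cache, so total traffic is
    cores * writes * (cores - 1) -- roughly quadratic in cores."""
    return cores * writes_per_core * (cores - 1)

# Same per-core workload, rapidly growing bus traffic:
for n in (2, 4, 8, 16):
    print(n, coherence_messages(n, 1000))
```

Real directory-based protocols are much smarter than this, but the trend is the point: the per-core cost of staying coherent rises with every core you add.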
Thanks for the link, but where is Linus’ original comment?
Anyway, I feel that he missed something. It’s hard to put into words, but clearly, just like the monolithic vs. microkernel debate he had with Andrew Tanenbaum in the early ’90s, which in my view proved him 80% wrong 20 years later, the parallel-computation argument is flawed and biased.
And the answer will not be found in 20 years, more like 2 years.
It’s the top line, starting with ‘Linus: ..’
Ah! A whole new debate, micro vs. monolithic. Even today there are not many successful implementations of microkernels; QNX is probably the only commercial one?
Successful technically, or commercially? What’s the goal anyway?
Same for parallelization: should it serve the marketing guys (remember the ‘Cloud’ stuff) or solve actual real-life problems?
Currently, parallelization with the right algorithms performs quite well.
Yeah, cluster nodes, distribution, parallelization, interprocess communication, fault tolerance… why the heck do I have ‘Erlang’ in mind every time?
That is such a weak argument. Because you believe he was wrong in a 20 year old debate, it disqualifies him from having an opinion 20 years later?
He actually argues quite well in that thread for his point of view, and so far you haven’t uttered one thing showing any fault in his reasoning. And no, saying “Erlang” is not a proper answer at the level he is discussing.
dpJudas,
You are right of course; nevertheless, we sometimes tend to over-emphasize the views of famous people just because they are famous, and Linus is no exception. I think this is the point Kochise was getting at: the merits of one argument over another shouldn’t change just because Linus says so. I’m not really trying to pick on Linus here; the same can be said of other tech celebrities. Just because they are influential doesn’t strictly mean they are right.
I happen to think more parallelism will happen, but it will take the shape of a cluster due to the bottlenecks inherent to SMP scaling. There will be an arms race, and cluster computers will become commodity hardware. Computer vision will advance. Consumer devices will be much more intelligent than current technology allows. With sufficient parallelism, consumer devices could have the “intelligence” of, say, IBM’s Watson. To get there we will need to solve many engineering problems for sure, but massive parallelism is absolutely going to play a key role. I think Linus downplayed its importance because he was thinking in terms of solving today’s problems rather than tomorrow’s. Eventually we won’t know how we ever managed to live without clusters.
Edited 2014-12-31 18:08 UTC
Yeah, that’s it. I do not want to diminish Linus’ impressive work and (unlocked) achievements, but he is an expert in his own field. Having had his nose buried in an incredible number of lines of code, almost all of it low-level stuff (some SMP, but hardly massive parallelization), I guess he’s not the best person to answer this question.
I think Google Labs would be. Or Total, given their expertise in massively parallel processing of underground imaging to search for new petroleum reservoirs. I bet Linus is aware of some work in these fields, from reading articles and/or having contacts. But I’m unsure he really understands the conceptually different way of apprehending mathematical problems.
That’s why I mentioned Erlang. Or Lisp. Or Scala. Try comparing them to C or asm: they are just not engineered to solve the same problems. Functional programming languages do NOT address booting a motherboard and setting up device drivers. I have a hand in both worlds, and I can tell you that IBM’s Watson was not coded in C. Neither is the new AI about to be revealed. And parallelization is the key, just like neurons performing single simple tasks. That’s where everything is leading, slowly but surely.
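That “neurons performing single simple tasks” picture can be sketched in a few lines (a toy of my own devising, with hand-picked weights; nothing to do with how Watson actually works): each unit does one trivial threshold test, and the interesting behaviour, XOR here, emerges only from the wiring between them.

```python
def neuron(weights, bias):
    """A unit with one trivial job: fire (1) when its weighted
    input crosses zero. Weights and biases below are hand-picked
    for illustration."""
    def fire(inputs):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
    return fire

# Wire three trivial units into an XOR network; no single unit
# "knows" XOR -- the behaviour comes from the wiring:
h1 = neuron([1, 1], -0.5)    # fires if at least one input is on
h2 = neuron([1, 1], -1.5)    # fires only if both inputs are on
out = neuron([1, -2], -0.5)  # fires for "h1 and not h2"

def xor(a, b):
    return out([h1([a, b]), h2([a, b])])

print([xor(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))])  # -> [0, 1, 1, 0]
```

Each unit is embarrassingly simple and independent of the others, which is exactly the property that makes this style of computation parallelize so well.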
If it’s the same rendering engine and the same Javascript engine, it sounds like they will simply end up breaking compatibility with sites that check for the IE user agent string, and won’t do anything to advance web technology. Looks like another frantic attempt to copy Chrome, just like everyone else is doing these days.
Edited 2014-12-30 21:13 UTC
Who is copying Chrome? I know Opera is almost a skinned Chrome, but otherwise? I hope you are not referring to tabs, multi-process design, consolidated menus, tabs on top, the popup status bar, rapid development or modern JavaScript engines.
Because that is just the way browsers were going, and Chrome was simply first with some of these thanks to timing and huge resources.
Fun trivia: Opera was the first tabbed web browser if you count MDI. (And if you do not, then Phoenix/Firebird was first with true TDI, long before it changed its name to Firefox.)
FWIW, the IE from the latest available Windows 10 TP build (9879) has the user-agent string "Mozilla/5.0 (Windows NT 6.4; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.143 Safari/537.36 Edge/12.0", and it is detected as Chrome by most sites.
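That string also shows why check ordering matters when sites sniff user agents: “Chrome/” and “Safari/” both appear before the telltale “Edge/” token, so any detector that tests for Chrome first will misidentify the browser. A hypothetical sketch (my own function names, not any real site’s code):

```python
def detect_browser(ua):
    """Check the most specific token first: the Edge UA string
    deliberately contains 'Chrome/' and 'Safari/' as well."""
    for token, name in (("Edge/", "Edge"),
                        ("Chrome/", "Chrome"),
                        ("Safari/", "Safari")):
        if token in ua:
            return name
    return "Other"

def naive_detect(ua):
    """What many sites effectively do -- which is why this TP
    build gets detected as Chrome."""
    return "Chrome" if "Chrome/" in ua else "Other"

spartan_ua = ("Mozilla/5.0 (Windows NT 6.4; WOW64) AppleWebKit/537.36 "
              "(KHTML, like Gecko) Chrome/36.0.1985.143 Safari/537.36 "
              "Edge/12.0")
print(detect_browser(spartan_ua))  # -> Edge
print(naive_detect(spartan_ua))    # -> Chrome
```

This is also presumably deliberate on Microsoft’s part: being sniffed as Chrome gets the browser served modern content instead of legacy IE workarounds.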
Oh, good. Another browser to support. A browser that renders pages almost the same as IE in “standards” mode.
I’m wondering what could be the strategy here with deploying two browsers based on the same engines – a heavy one (continuing the IE tradition) and a light one?
Could it be that the light one would be the basis of an internet appliance akin to a ChromeBook?
I think it’s about decoupling the browser from the enterprise baggage that IE drags around, and making it silently updatable like Chrome is.
I would also be surprised if it’s actually two browsers. More likely one browser which switches to IE’s rendering path when it encounters a site run in compatibility view.
Makes sense to me.
There are over 50 combinations of hardware/Windows/IE version currently supported. (Just have a look on the Update Catalog site for the update number for the most recent IE Cumulative Update.) About two thirds of those will disappear over the next twelve and a half months, but decoupling the browser from the OS will allow the specific combinations to completely wither away over the following decade or so.
That just *has* to be worthwhile.
You don’t need to wait for a new browser from MS for that.
I welcome that Microsoft is still trying to improve its web browser, but one thing I ask of them: PLEASE, PLEASE test your browsers with your own products.
I am getting sick of the hacks and fixes needed with each new IE that doesn’t work correctly with their own products like SharePoint, UAG, CRM and Dynamics.
I do find it funny that Firefox gives me a better experience in Sharepoint than IE does.
Tell me about it. You have to laugh, else it would simply drive you insane. I don’t see how most of this leaves the labs/production line, as a simple test would bring these problems up.
MS will have to really AMAZE us if they’re ever going to get back the percentage of users they used to have. Chrome and Firefox seem to have won that war 4 years ago. I haven’t seen any superiority come out of Microsoft in a long time.
Do they really need to? Even the browser-selection carousel hasn’t really helped the challengers gain a higher percentage of market share. IT policies don’t help much either. The average Joe will still use the bundled browser, provided it is good enough to open Facebook and YouPorn.