APCMag has tested Intel’s latest Core i7 processor architecture, which does away with the Front Side Bus, replaces it with the company’s QuickPath Interconnect, and brings Hyper-Threading back from the dead. “This month, Intel moves on from the Core microarchitecture to the next generation of processors for mobile, desktop and servers, codenamed Nehalem and officially named the Core i7 family. We’ve spent a few weeks with Intel’s test kit for the new desktop part, codenamed Bloomfield, as well as the new compatible motherboard chipset, the X58 Express, codenamed Tylersburg.”
i7 looks like a mixture between Apple and Microsoft
LOL
Took me a while to get that one, but yes, it’s a slight coincidence that the next version of Windows is ‘Windows 7’ and Intel’s next chip is the i7.
AMD had the Athlon XP while Windows XP was around; now it’s Intel’s turn, eh?
I wonder if there was some bidding war between Intel and AMD to get the rights to ‘7’?
Unless you’re into gaming, any post-2005 CPU is more than fast enough (unless you use Vista).
Will this mean that your current Intel motherboard won’t work? At least AMD usually keeps socket compatibility for years. Intel constantly breaks hardware compatibility for no apparent technical reason.
Well, the new processor also has a memory controller onboard; they need extra pins for that because you DON’T want all memory accesses to have to go through the CPU core. Apple tried that on one of their computers years ago and it severely hampered performance.
You’re ignoring the content creation sector. With consumer-grade DSLRs and HD camcorders becoming cheap and commonplace, you’re going to find a lot of people who are starting to require a lot more CPU power.
As long as CPU manufacturers keep churning out faster hardware, we’ll keep finding new ways to use it.
This time there is a technical reason:
They moved the memory controller into the CPU for greater bandwidth, thus requiring more pins. Since the article states they’re planning to integrate the GPU with the CPU, you should expect the next socket change around 2010-2011.
The performance boost in their benchmarks is actually quite substantial. It remains to be seen how (or if) this translates into a noticeable performance boost for the average user, but I wouldn’t really say it’s only for gamers. Faster CPUs bring lots of advantages: faster ripping, rendering and compiling, to name just a few. Hell, even LaTeX takes way too long once my documents get complex enough (>100 pages with lots of pictures, which isn’t all that complex). Granted, this is on a single core at 2GHz, but I don’t think it’s I/O bound even on the latest quad systems.
Basically, if something takes any noticeable amount of time, it’s taking too long.
Furthermore, faster processors might help increase the popularity of programming languages like Python and Ruby.
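To put a (completely made-up) number on that last point, here’s a toy Python timing snippet. It’s not a real benchmark; the checksum loop is just a stand-in for any CPU-bound work, and the point is that pure interpreted code like this scales almost directly with raw single-core speed:

```python
# Toy illustration: a pure-Python loop is CPU-bound, so its wall time
# tracks single-core speed almost directly. The checksum() function is
# a made-up placeholder, not a meaningful workload.
import timeit

def checksum(n=5000000):
    total = 0
    for i in range(n):
        total = (total + i * i) % 1000003
    return total

print("%.2f s for one run" % timeit.timeit(checksum, number=1))
```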
Also, if anybody hasn’t read the article yet, do yourself a favor and use the “print” link right next to the headline.
Yeah, anyway serious people don’t need more than a typewriter…
Have you heard of image processing, video processing, music processing, software compilation, 3D rendering, scientific visualisation/research, voice recognition, network servers… the list goes on. Basically, there are tons of people who actually need CPUs to process things.
Really? I’m looking at getting an i7 system within a couple of months, and gaming is one of the few areas where my current CPU isn’t giving me any problems (if anything, a new graphics card would help far more there). Processing RAW photos, video editing and some of the stuff I do in MATLAB, those are the areas where my CPU could really use some help. You do realize that computers can be used for things other than games and the internet?
And pr0n. Don’t forget pr0n.
Games, internet, and pr0n? A bit redundant, no?
You can’t only think in the context of everyday home desktop use. My algorithms will love this one; they can’t get enough of CPUs and bandwidth.
Still, as of 2008, you can stare at splash screens for seconds, and you have to wait a long time while data is converted from one format to another, etc.
I think there’s still lots of room for improvement in performance. Multicore CPUs were already a big leap (you might not have noticed it, but just go back to an older single-core machine, run some converting task in the background, and try to do something else at the same time. You will really notice the difference).
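If you want to see the multicore effect for yourself, here’s a rough Python sketch of the idea. convert_one() is a made-up placeholder that just burns CPU (swap in whatever real transcode or resize you actually run), and the file names are hypothetical; the point is simply fanning a batch out over all the cores:

```python
# Rough sketch of the multicore win on a batch-conversion job.
# convert_one() is a stand-in that burns CPU; the .avi names are fake.
from multiprocessing import Pool, cpu_count

def convert_one(path):
    total = 0
    for i in range(2000000):        # placeholder CPU-heavy work
        total = (total + i * i) % 1000003
    return path

if __name__ == "__main__":
    files = ["clip_%02d.avi" % i for i in range(16)]   # hypothetical inputs
    pool = Pool(cpu_count())                            # one worker per core
    for done in pool.imap_unordered(convert_one, files):
        print("converted: " + done)
    pool.close()
    pool.join()
```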
Unfortunately, storage tech isn’t improving fast enough. Loading apps from disk is limited more by hard disk speed than processor speed, so you will be seeing splash screens for quite a while; the same goes for any other task that involves saving large amounts of data.
Processor-intensive tasks such as compiling or converting file formats will see real boosts, and by most accounts the i7 with the new motherboard is a beast. According to Ars, a single top-end quad-core Nehalem is equivalent to a system with two quad-core Penryn Xeons.
WordPerfect 5.1 running on a 486 loaded so quickly that you couldn’t read the splash screen. That was 15 years ago!
Hi,
Yeah – modern programmers are smart enough to add a “do nothing” delay loop to keep the marketing department happy…
With IBM, Sun, AMD, NVIDIA and many others working together on HyperTransport, I hope there’s a good reason why Intel chose to go it alone with QuickPath, rather than it just being a case of NIH.
Because they are Intel, and they are big enough that AMD and others will be forced to follow.
(That is assuming their tech is faster; if it turns out to be half the speed, the market will force Intel to U-turn.)
Intel has done its fair share of following. They came up with their own 64-bit x86 extensions, but MS told them they were going to go with AMD64. So Intel had to adopt AMD64, which they called EM64T.
AMD first integrated the memory controller on their CPUs about 2004. Intel is just getting around to it now. Intel is not always the leader.
AMD didn’t create or invent on-chip memory controllers; AMD64/EM64T is nice, but that’s about it.
I didn’t say AMD did an integrated memory controller first, I said they did it before Intel. We are talking about x86 processors here: not RISC, not Itanium, x86.
AMD64 gave us cheap, affordable, non-Itanic 64-bit processors. That’s more than just nice. By forcing Intel to use AMD’s extensions, not its own, MS and AMD created a standard that Intel had to follow. That’s two cases (and pretty big ones). Throw in HyperTransport and dual-core chips, and you’ve got a situation where Intel has been led by its nose.
We wouldn’t even have nice fast Core Duos if it wasn’t for AMD punishing Intel with the Athlon XP and the Athlon 64. AMD forced Intel off its ass and back into the game.
I still don’t consider an on-chip memory controller important or innovative; just because you adopt a technology early doesn’t make it good.
Yes, it is good, but being stuck on DDR rather than DDR2 for a while was also an issue.
It’s clear that Intel’s strategy has been better than AMD’s since Core 2 came out, regardless of any technology AMD came up with.
“Yes, it is good, but being stuck on DDR rather than DDR2 for a while was also an issue.”
I’m not sure why. With no front-side bus to worry about, AMD’s logic was: why introduce the extra latency? It also allowed people to reuse the RAM they already had lying around, making systems more affordable while Intel systems had to use DDR2. RAM may not be expensive now, but at the time DDR was much cheaper.
Seemed like a good plan to me.
I never said an onboard memory controller was innovative; I said that AMD did it before Intel, and it made a big difference when they did.
Intel did NOT come up with their own 64-bit x86 extensions.
I beg to differ.
Intel’s 64-bit CPU was the Itanium, which was supposed to *replace* x86. It used a completely new CPU architecture and instruction set. This was known as IA-64, while 32-bit x86 was called IA-32.
http://www.intel.com/design/flash/nand/mainstream/index.htm
These Intel X25-Ms are a real waste to benchmark against if you have any intention of providing real-world benchmarks for a general audience.
Stop skewing your results, folks. 99.9% of the consumers buying Nehalem will not be deploying these SSD solutions.
Considering how expensive the early i7 systems are, it seems like a waste not to use an SSD. You might as well go all out.
What’s the value in this observation? That logic makes zero sense when your job is to show real-world results. Nehalem will be flooding the market shortly, and you won’t see the market rushing out to replace HDDs with SSDs.
The i7 isn’t really that expensive. They seem to be starting around the same price as the current Core 2 Quad Q6700, and the motherboards are only slightly more expensive than equivalent X48 motherboards.
Indeed.
And when Shanghai launches, the i7 will come down in price as usual, so the 920 should become affordable real soon… and then I’ll swap my C2D E6600 for one. Yay, more gaming goodness!
Does anyone know if/when an FSB-less Atom processor will arrive? From what I understand, this could mean significant power savings on netbook chips, maybe a good reason to wait for the next generation of Eee PCs?
Moorestown is coming next year.