ExtremeTech was able to get an exclusive peek at Intel’s newest quad-core processor, the Core 2 Extreme Quad E6700. After being run through a gauntlet of 3DMark and content-creation tests, among others, the quad core CPU boasts impressive performance results, particularly in video and 3D tests. According to the article, video creators and 3D animation artists should probably start saving up.
If a 3D renderer is properly multi-threaded, it will in theory run twice as fast every time you double the number of CPUs/cores rendering a given image. The reason is that each new core starts rendering its own section of the image while still sharing most of the data in memory (textures, geometry, etc.).
Each thread renders a single square block of the image, say 64×64 pixels. With two threads it renders two blocks at a time, with four it renders four, and so on. And since most scene assets are shared in memory, memory use doesn’t scale linearly with the thread count either.
For freelance artists and shader artists who need to run many local renders, this is *really* cool. A dual-CPU quad-core Clovertown or a Kentsfield is just plain awesome!
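Here’s a minimal sketch of the tile scheme described above, just to make it concrete. It’s C++ with std::thread; Scene and shade() are made-up placeholders, not any real renderer’s API:

```cpp
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

// Shared, read-only scene data: geometry, textures, lights... (placeholder)
struct Scene {};

constexpr int kTile = 64;  // side length of one square block

// Hypothetical per-pixel shading function, standing in for the real renderer.
uint32_t shade(const Scene&, int x, int y) { return 0xff000000u | uint32_t(x ^ y); }

void renderParallel(const Scene& scene, std::vector<uint32_t>& image,
                    int width, int height, unsigned numThreads) {
    const int tilesX = (width + kTile - 1) / kTile;
    const int tilesY = (height + kTile - 1) / kTile;
    std::atomic<int> nextTile{0};  // the "work queue" is just a counter

    auto worker = [&] {
        for (int t; (t = nextTile.fetch_add(1)) < tilesX * tilesY; ) {
            const int x0 = (t % tilesX) * kTile;
            const int y0 = (t / tilesX) * kTile;
            for (int y = y0; y < std::min(y0 + kTile, height); ++y)
                for (int x = x0; x < std::min(x0 + kTile, width); ++x)
                    image[std::size_t(y) * width + x] = shade(scene, x, y);
        }
    };

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < numThreads; ++i) pool.emplace_back(worker);
    for (auto& th : pool) th.join();  // wait until every tile is done
}
```

The atomic counter is the whole scheduler: whichever thread finishes a tile grabs the next one, so all cores stay busy even when some tiles are much more expensive than others.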
“If a 3D renderer is properly multi-threaded, it will in theory run twice as fast every time you double the number of CPUs/cores rendering a given image.”
By Amdahl’s Law that’s not quite true, unless you discard the overhead of dispatching threads, gathering the results, and any serial parts of the job.
For many applications you don’t get as big a gain from running the same workload as before as you do from scaling the workload up along with the added resources. Get more done in the same amount of time, instead of the same amount of work in less time.
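For what it’s worth, the “scale the workload with the resources” view is Gustafson’s Law. A quick back-of-the-envelope comparison of the two, assuming a purely illustrative 5% serial fraction:

```cpp
// Amdahl fixes the workload; Gustafson grows it with the core count.
// The 5% serial fraction is an assumption for illustration, not a measurement.
#include <cstdio>

int main() {
    const double serial = 0.05;  // assumed serial fraction of the job
    for (int n : {1, 2, 4, 8}) {
        double amdahl    = 1.0 / (serial + (1.0 - serial) / n);  // same work, less time
        double gustafson = serial + (1.0 - serial) * n;          // more work, same time
        std::printf("%d cores: Amdahl %.2fx, Gustafson %.2fx\n", n, amdahl, gustafson);
    }
}
```

At 4 cores that gives 3.48× versus 3.85×: doubling the cores never quite doubles the speed, and the gap widens as cores are added.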
Two Conroes glued together, sharing a 1066 MHz bus and carrying a 125 W TDP, don’t quite feel like a true quad-core system.
Yes, the 3D rendering benchmarks show some really nice figures, but somehow I still need to see more benchmarks to believe its performance.
All in all, I’d pick a 3.2 GHz Conroe anytime…
I am looking forward to what AMD is going to offer. Something grander, I hope! This seems kind of like slapping on some glue and sticking two processors together.
“Yes, the 3D rendering benchmarks show some really nice figures, but somehow I still need to see more benchmarks to believe its performance.”
http://www.tomshardware.com/2006/09/10/four_cores_on_the_rampage/pa…
Thanks. A brief look at Tom’s Hardware’s benchmarks just shows that for everyday apps, games, etc., the quad core’s gains over the dual-core Conroe are minimal, nonexistent, or in a couple of cases even negative.
Video rendering/encoding is a niche where it really does work, but I wouldn’t bet on quad-core performance being “dual-core × 180%” in general, even for multi-threaded apps.
“…the quad core CPU boasts impressive performance results, particularly in video and 3D tests.”
If it does so nicely in 3D tests, why isn’t there a dual-core video card yet? They already draw plenty of power, so I don’t think that’s the issue. Do they just want to sell CrossFire and SLI? And why aren’t GPUs socketed on the motherboard yet? That would speed up 3D work immensely, I’d think: have your CPU socket and your GPU socket, each with its own RAM and everything. Anyway, just my ideas.
1) Dual-core GPUs don’t make much sense. GPUs are already tremendously parallel, with dozens of fairly independent pixel-shading and vertex-shading units operating simultaneously. It makes more sense to just increase the number of those shader units (which vendors do, within the limits of available die size) than to replicate all the shared components as well.
2) GPUs aren’t on the motherboard because it wouldn’t really be any faster (why would it be?), because vendors wouldn’t agree on a single socket, and because it’s hard to run high-data-rate memory traces through removable sockets on a motherboard.
I’m sure we’ll see ATI GPUs made for AMD’s latest socket, as well as inexpensive multi-socket motherboards.
AMD CPU sockets have fantastic bandwidth to memory, and they are meant to be usable by chips other than CPUs (you can buy exotic accelerators built from programmable-logic chips that plug in there). It might even be inexpensive, with some sort of TurboCache-like scheme and just a little fast on-chip memory, and without the heavy performance hit of sharing memory over PCI Express.
Possibly nVidia wouldn’t bring something so specialized to market, but now that ATI is on AMD’s bandwagon, I’m sure their hardware will be coming.
And won’t it be neat? We could soon see a 4-socket board that takes any combination of Athlon X2s, X4s, and CrossFired Radeons. It won’t be cheap, though. Or cool, as in temperature.
3) Current high-end GPUs have high thermal output, requiring custom heatpipe or water-cooling solutions that would be hard to implement on a motherboard PCB due to limited space and complex mounting issues.
GPUs are large and complex. If NVIDIA or ATI could double their GPUs’ budgets and still obtain useful yields, they would just add more resources to the same GPU, because that would remove much of the complexity the present SLI/CrossFire approaches add.
Putting a GPU on the motherboard doesn’t necessarily change anything; there are plenty of systems with a GPU integrated on the motherboard already, either as part of the chipset or simply soldered on. In the long run there will probably come a point where the CPU and the GPU are much less distinct. I could write at great length about that, but I’ll spare you.
I think the results are a bit disappointing.
Unlike AMD, Intel has always fallen short on scalability, and this future multi-core CPU will not deviate from that religion.
As we can see from those benchmark results, 98% of computer users will see no benefit from the quad-core CPU, because they are basically not 3ds Max users or users of other genuinely multi-threaded applications.
And for that performance margin you will see a huge price difference between dual and quad core, so this future CPU will not be good from a price/performance point of view, which is, by the way, what most customers ask for (remember that almost all PC users are neither 3ds Max users nor gamers).
Add to that the heat problem that will rise with this update, and Intel is back where it was before in terms of heat crisis; and what about the heat of 8 cores?! Please, don’t remind me! Idle temperatures will probably be around 50 °C.
Intel of course did a good job when it moved to the Core architecture, but that was just a remedy for the performance stagnation that plagued the Pentium 4’s history. I expected Intel not just to reach parity but to achieve more than that, considering its resources and power.
If Intel keeps ignoring heat issues by not reducing clock frequency when the CPU is idle or running a light load (a.k.a. SpeedStep), then AMD will be cooler overall, even with a little performance loss.
Currently Intel’s SpeedStep reduces frequency by only about 15% (e.g. the E6400 drops from 2.13 GHz to 1.8 GHz), while AMD’s reduction reaches 45% (e.g. the X2 4200+ drops from 2.2 GHz to 1.0 GHz). On top of that, Intel’s reduction is a single step, not gradual like AMD’s, which scales all the way through the range (from 2.2 to 1.0, i.e. you will see 2.2, 2.1, 2.0, 1.9, 1.8, … 1.0).
Because of this I’ve noticed that AMD CPUs run cooler overall than Intel’s current Core 2 Duo (tested with the real thing, a laser-guided IR thermometer).
Intel must really do more to switch me from AMD; but for some (gamers, users of graphics-intensive applications, and compilers) Intel is still the best choice.
“Add to that the heat problem that will rise with this update, and Intel is back where it was before in terms of heat crisis; and what about the heat of 8 cores?! Please, don’t remind me! Idle temperatures will probably be around 50 °C.”
How about getting some facts?
“The peak core temperature recorded during the test was 66 degrees Celsius in conjunction with an Intel retail cooler. In contrast, the maximum temperature of the Core 2 Extreme was 43 degrees Celsius.”
http://www.tomshardware.com/2006/09/10/four_cores_on_the_rampage/pa…
“Because of this I’ve noticed that AMD CPUs run cooler overall than Intel’s current Core 2 Duo (tested with the real thing, a laser-guided IR thermometer).”
Was your laser thermometer aimed at the CPU core?
All test results show the opposite. The only AMD CPUs that run cooler than a C2D are the “Energy Efficient” Athlon series with a 35 W TDP, yet they have an even lower performance-per-watt ratio. For instance, look here (it’s in Russian, but the diagrams are enough):
http://www.overclockers.ru/lab/22949.shtml
“Was your laser thermometer aimed at the CPU core? All test results show the opposite. The only AMD CPUs that run cooler than a C2D are the ‘Energy Efficient’ Athlon series with a 35 W TDP.”
But also remember, max TDP ratings don’t always map to the real world; check out this article. The “non-efficient” 3800+ is actually really efficient.
http://www.tomshardware.com/2006/09/25/green_machine/
From the article: “Intel’s release of Core 2 may have stolen AMD’s performance-per-clock thunder, but AMD retains its advantage in performance-per-watt when including the platform.”
“The peak core temperature recorded during the test was 66 degrees Celsius”
That’s more than my Pentium 4 @ 3.0 GHz (PGA478) with HT on. That CPU’s core temp was 55 °C max, even after gaming.
The 55 °C was the thermal-diode temperature recorded by the motherboard’s hardware monitor in the BIOS. Which proves my point again.
“Currently Intel’s SpeedStep reduces frequency by only about 15% (e.g. the E6400 drops from 2.13 GHz to 1.8 GHz), while AMD’s reduction reaches 45% (e.g. the X2 4200+ drops from 2.2 GHz to 1.0 GHz). On top of that, Intel’s reduction is a single step, not gradual like AMD’s, which scales all the way through the range (from 2.2 to 1.0, i.e. you will see 2.2, 2.1, 2.0, 1.9, 1.8, … 1.0).”
For the Intel Pentium 4 and Xeon, software (e.g. the OS) can set the clock modulation as low as 12.5% (the available settings being 12.5%, 25%, 37.5%, …, 87.5%). That is roughly 2.13 GHz down to 266 MHz.
Obviously, which setting the OS decides to use, and when, depends on the OS, not Intel (if the OS supports it at all).
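For the curious, here’s a rough sketch of flipping that knob yourself on Linux via the msr driver (needs root and `modprobe msr`). The register is IA32_CLOCK_MODULATION (0x19A): bit 4 enables on-demand modulation and bits 3:1 pick the duty cycle in 12.5% steps, but do verify the bit layout against Intel’s manuals for your particular CPU; this is an illustration, not a tool:

```cpp
#include <cstdint>
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

int main() {
    // Assumed target: 4/8 = 50% duty cycle on CPU 0.
    const uint64_t dutyEighths = 4;
    const uint64_t value = (1u << 4) | (dutyEighths << 1);  // enable bit + duty field

    int fd = open("/dev/cpu/0/msr", O_WRONLY);
    if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }
    // The msr driver maps the file offset to the MSR address.
    if (pwrite(fd, &value, sizeof value, 0x19A) != (ssize_t)sizeof value) {
        perror("write IA32_CLOCK_MODULATION");
        return 1;
    }
    close(fd);
    std::printf("Requested ~50%% duty cycle on CPU 0\n");
    return 0;
}
```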
“Core 2 Extreme Quad E6700”
Wow, that’s quite a mouthful. What’s next, the:
Core 5 (cause it’s so much better than 2 they decided to skip some numbers) Ultra Rocking Extreme Quadralicious Fantasmic XXT999000! (exclamation point trademarked)?
“Peek” …not peak.
Not impressed. I think I’ll keep my real multi-socket board for a little while longer; 4× Opteron 275 still makes do. =)
We got a dual-socket quad-core Dell workstation in a research lab at my university… and as far as we could tell from some simple tests, Maya 7 could only utilize about 25% of the total CPU capacity when rendering.
Interestingly, it wasn’t constant on two of the 8 cores… it was more like when one core started to pick up, the other one that was working would immediately drop off.
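If that box happens to run Linux, a quick sampler over /proc/stat would show the see-saw directly; this sketch prints each core’s utilization over a two-second window (pure illustration; the workstation may of course run something else entirely):

```cpp
#include <cctype>
#include <chrono>
#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

struct CpuSample { long long busy = 0, total = 0; };

// Parse the per-core "cpuN ..." lines of /proc/stat (the 4th field is idle time).
std::vector<CpuSample> sample() {
    std::vector<CpuSample> out;
    std::ifstream stat("/proc/stat");
    for (std::string line; std::getline(stat, line); ) {
        if (line.size() < 4 || line.compare(0, 3, "cpu") != 0 ||
            !std::isdigit((unsigned char)line[3]))
            continue;  // skip the aggregate "cpu" line and non-cpu lines
        std::istringstream in(line);
        std::string name;
        long long v, idle = 0, total = 0;
        int field = 0;
        in >> name;
        while (in >> v) { total += v; if (field == 3) idle = v; ++field; }
        out.push_back({total - idle, total});
    }
    return out;
}

int main() {
    auto before = sample();
    std::this_thread::sleep_for(std::chrono::seconds(2));
    auto after = sample();
    for (size_t i = 0; i < before.size() && i < after.size(); ++i) {
        double util = 100.0 * double(after[i].busy - before[i].busy)
                            / double(after[i].total - before[i].total);
        std::printf("core %zu: %5.1f%%\n", i, util);
    }
    return 0;
}
```

Run it while Maya renders; a renderer that only keeps two cores trading work back and forth would show exactly the pattern described above.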