AMD finally fleshed out the “Asset Smart” strategy it has been talking about since at least last December. The result: AMD is now fabless.
Jerry Sanders, former AMD CEO and co-founder, once quipped, “Only real men have fabs.” When asked about real men having fabs, Dirk Meyer, the current AMD CEO, responded, “We feel like we’re still pretty manly at AMD. Frankly, the math has changed.” Has the math changed? Stock traders seem to think so. AMD shares registered an 8.5% increase to close at $4.59 per share.
The new pure-play foundry company is being referred to as The Foundry Company, for the time being. The company will be owned by AMD and the Abu Dhabi-based Advanced Technology Investment Company (ATIC). ATIC is dropping a cool $2.1 billion for a 55.6% majority interest in the new company; AMD will have a 44.4% minority interest and equal voting rights, as reported by InformationWeek.
In other links, Barron’s breaks the deal down to bullet points, Digitimes presents a no-frills, fact-filled write-up, and EE Times is cheery and optimistic.
This is a good move by Ruiz but it is not good enough, in my opinion. AMD is buying precious time so as to delay the inevitable disaster that is waiting around the corner. Instead of taking advantage of the unprecedented opportunity afforded by the parallel programming crisis to leapfrog over its main competitor with breakthrough technology, AMD chose to play the me-too fiddle. That is sad. But it is not too late for Hector Ruiz and Dirk Meyer to turn the company around and become the leader of the processor industry for decades to come. And then they can buy their old fab back, if they feel manly.
http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-prog…
Savain. Seriously. Stop spamming every thread/CPU related article on the net with your BS.
What are you, Mussolini? I am a free man in a free country. You can’t stop me, man. If you don’t like what I write, don’t read it. After all, nobody is twisting your arm, right?
I can’t stop you but I can mod you down so others aren’t subjected to your posts.
Wow! I’m trembling like a leaf. You can censor my stuff until shit comes out your ears on OSnews or Slashdot or what have you, but these are not the only venues on the internet. Not every web site allows censorship, you know. Censorship is a chicken shit tactic used by bird brains and, in the end, it always loses. LOL.
Dude, clean up your vocabulary. Censorship is state-driven (as in nation-state) suppression of some matter or other. What you experience on sites like Slashdot or this one is called “peer review”.
Your post can still be read, nobody is going to jail for reading your stuff, and you are not going to jail (or worse) for posting it. People are just tired of your broken-record routine of spamming forums, or they simply disagree with you. Marketplace of ideas, you know, freedom of speech et al.
In this day and age, when many governments invade our privacy and try to regulate and suppress our legitimate demands for information transparency, people like YOU, who cry censorship as soon as someone disagrees with them and exercises personal judgement, are one of the many obstacles we face in fending off censorship. You devalue the whole concept by stretching it to cover petty personal disagreements. Shame on you.
Janssen, eat shit. Your demand that I clean up my vocabulary is an attempt at censorship. Any attempt by a group or an individual to suppress the free expression of another’s opinion is censorship. And yes, peer review is censorship. It’s primarily a way to suppress dissenting opinion and it is used to do just that. Shame on you and OSNews and Slashdot and Digg and Wikipedia and all the other social sites for encouraging a practice that is nothing but censorship.
Nobody suppresses you.
You’re not that important.
Modding down just means we don’t like you.
What’s with the “we” shit? You assholes have your own little chicken shit gang on OSNews now? LOL.
“We” means me and my personal army from 4chan
Nah. “we” means the chicken-shit ass-kissing gang that has taken over the discussion boards on OSNews. ahahaha… I got the feeling that most of you are unemployed and living at your parents’ homes. You feel powerful when you mod someone’s comments down, don’t you? ahahaha… AHAHAHA…
Notparker? Is that you?
If you were trying to convince everyone here that you’re just trolling for reactions (and not doing a particularly good job, at that), it would appear that you’ve succeeded.
Nope. I just wanted to make sure that the spineless censorship gang who has taken residence on OSNews understand that they can kiss my ass. Everybody knows who I am. My name is Louis Savain. If you retards got any gonads left, stop throwing rocks from behind the curtain like scared little girls and come out and identify yourselves. Anything else means that you are a bunch of gutless swines. LOL.
So juvenile trolling it is, then.
OSNews already has a ranting, semi-coherent lunatic-in-residence. If you’re after the job, I’m afraid you’ll have to fight Moulineuf for it.
So spineless ass kissing it is then. It figures.
Oh look, a “monkey see, monkey do” troll. You could at least try coming up with your own material.
But please, don’t let me interrupt your whining and crying about how the moderation system is oppressing you.
Hope this helps.
It’s not a moderation system. It’s a censorship system that has been taken over by a spineless minority of cretins with too much time on their hands. OSNews should eliminate it. Individual readers should have the final say on what they want to read. OSNews should instead add a kill file mechanism that anybody can use at their discretion without the annoying influence of a chicken shit gang of gutless morons with a power complex.
I am still waiting for you and your wormy buddies to identify yourselves. Anonymity is the refuge of cowards. You have no gonads and you know it. ahahaha…
Translation: WAAAAAH! Baby wants his bottle!
ahahaha… Nah. More like StephenBeDoper and his jelly-back buddies have got no gonads. ahahaha… AHAHAHA… ahahaha…
See you around, dope man. ahahaha…
Uncontrollable laughter is a common sign of mental retardation, you know.
ahahaha… Hiding behind a pseudonym is a common sign of missing or atrophied gonads, you know. Unfortunately for you, testicle transplant is not a procedure that is currently offered by the medical community. You’re shit out of luck. ahahaha… AHAHAHA…
You really are easily-amused, aren’t you?
What will your next post be: more BAWWWWing about the moderation system, or more obsessing about gonads? Oooh, the suspense!
Gonads? Did anybody say gonads? Oh, I see. You can’t find yours. It figures. ahahaha… Anonymous cowards who throw stones from behind curtains like scared little girls don’t have any gonads to speak of. ahahaha… AHAHAHA… ahahaha…
Identify yourself, you spineless jackass!
ahahaha… Never mind. I’m just having a little fun at your expense. Slow day and all that. You have to admit that your avatar does look like a spineless jackass though. Or is it a neutered goat? I can’t tell. ahahaha…
Hey, thanks – you just helped me win a $5 bet (namely: that you would reply within 2 hours, and that your reply would contain the words “gonads” and “ahahahahahahahahaha”).
Now, my predictable little friend, let’s see if you can get your reply posted in *less* than an hour this time. Chop-chop, I have $10 riding on it this time.
Hope this helps!
I’m glad to help you win some money. When you have no gonads, it pays to have a little money so you can maybe buy some. ahahaha… Hey, you may get rich doing this. ahahaha…
PS. I’m still waiting for you to show some backbone and identify yourself. Anything else means you have no gonads. ahahaha… I thought you’d like that. Only gutless cowards attack others from the safety of anonymity.
Gonads, gonads, gonads… ahahaha… AHAHAHA…
It’s impossible to read that without imagining you having an adorable little moment of triumph – pumping your fist in the air, and yelling “Ah-ha, I’ve got him! Now he must tell me his name – or I will tell everyone else on the playground that he doesn’t have gonads! Muwahahaha!”
Hey, you know what? You should go tell your mom and dad how awesome you are at the Internet! Maybe they’ll buy you some legos and let you stay up late tonight – or, or, or maybe you’ll even get Lunchables for school tomorrow!
ahahaha… Luke, your feeble attempts to master the art of insults leave a lot to be desired. Show the galaxy that you got your own gonads. Identify yourself. Be a man. Be a woman. Show some backbone. Stop being a slug. ahahaha… AHAHAHA…
Wow, you don’t even have the wits to make a proper Star Wars reference? You’re really not very good at this, are you?
Well, my name is Sergey Popov.
I’m a perl/java programmer, I am married and live in my own flat.
I tell you that you don’t know anything about censorship. If you do, you lie.
Here’s my photo IRL – http://i22.photobucket.com/albums/b322/faijeya/married/married3.jpg
Also, you’re a dick.
Have a nice day.
Same to you Popov. You at least got balls. Tiny little balls, I would say, but an admirable trait nonetheless. LOL. Now I want to hear from rajj who blasted my first comment and proceeded to anonymously spread lies about me and from all those gutless worms who modded me down.
It’s your privilege to disagree with me and I don’t even feel suppressed or “censored” by you. All your cursing will not change that, you are just damaging your cause. But that’s your privilege, too.
Janssen, if the success of my “cause” depended on the approval of a bunch of gutless jackasses on OSNews, I would rather see it fail. LOL.
Then, why do you post your links here?
Well, maybe I enjoy pissing off the resident censorship gang. LOL. Maybe the members of the censorship gang are not the only people who read OSNews. Maybe it’s because I am a free man and I can post wherever I please. Unless I am banned by OSNews from posting, I will continue to post here just to piss you off. How about that?
You crack me up, little buddy. And that’s cool.
Translation: “Don’t tase me, bro!”
Do I look like a legislator? How and in what way am I encroaching upon your liberty? Screaming, “free-speech” at the top of your lungs doesn’t save you from public ridicule.
Further, I didn’t mod you down either.
You wish you were a legislator, asshole. I am not losing any sleep over my stuff being ridiculed on OSNews by a bunch of cretins. IOW, you can kiss my ass, rajj. And same to your buddies. How about that? LOL.
I’ve said this before, and I’ll say this again. Even though you fancy yourself some sort of 21st century Galileo Galilei, in reality you’re a lot closer to Frank Chu.
So stop it, please.
Evangs, f–k you, whoever you are. I am not a pompous ass like some of the crackpot assholes in the scientific community whose asses you kiss. LOL. I just want to see some progress done in certain fields before I die. That’s all. If you don’t like what I write, don’t read it. As simple as that. And f–k you one more time. And the mule you sleep with. How about that for free speech, eh? LOL.
Does it ever bother you that you have actually managed to develop an international reputation for being a crackpot?
Not at all. The only thing that bothers me is that my detractors are too chicken shit and gutless to identify themselves. I won’t get the satisfaction of seeing them eat a mountain of crow when I’m vindicated. You’re a prime example of an anonymous ass kisser. ahahaha… I would rather be a crackpot any day than a spineless ass kisser. ahahaha… AHAHAHA…
Okay, I was curious to see what it is Savain is babbling on about that makes you and others call it BS.
Well, as a veteran hardware/software engineer, I would say most of the comments against him are pathetic. No wonder OSNews has gotten so utterly boring recently; nobody knows anything about the past or about what will have to be reinvented again. On his blog he appears to have reinvented what we in the hardware (VLSI) industry have been doing for 4+ decades: expressing concurrency in a more hardware-like fashion using event-driven cycle simulation. I still have to fathom whether there is anything else there beyond what I am familiar with. If everyone started modding me off to <0, I’d probably get pissed too and walk away.
Two decades ago there was a parallel processor that was easy to write concurrent programs for and to map onto any number of available communicating processors. Typically 1-1000 cores were used in various apps with little change to the code, but those were 4KB/chip days and the apps were very much DSP-like. The hardware is long gone, but the parallel model still works in CSP-based languages that can run on x86; I'm not sure how well it exploits multiple x86 cores though. The model is very similar to hardware design languages: model processes as communicating hardware objects. I could say APL, Occam, Ada, various CSP languages, Verilog, VHDL, Matlab, even Haskell, etc., all fall into a hardware-software continuum. All of those have been used to express hardware designs, which are inherently parallel.
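To make the “communicating processes” idea concrete for the software folks, here is a minimal sketch of the model described above. It is not any particular CSP language; the channel class and the two stages are invented for illustration, and it just uses modern C++ threads:

    // Two "processes" that share nothing and talk only through a blocking
    // channel, CSP style. The Channel class and both stages are made up.
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>

    template <typename T>
    class Channel {                      // unbounded blocking channel
        std::queue<T> q;
        std::mutex m;
        std::condition_variable cv;
    public:
        void send(T v) {
            { std::lock_guard<std::mutex> lk(m); q.push(v); }
            cv.notify_one();
        }
        T recv() {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [this] { return !q.empty(); });
            T v = q.front(); q.pop();
            return v;
        }
    };

    int main() {
        Channel<int> ch;
        std::thread producer([&] { for (int i = 0; i < 5; ++i) ch.send(i * i); });
        std::thread consumer([&] { for (int i = 0; i < 5; ++i) std::printf("%d\n", ch.recv()); });
        producer.join();
        consumer.join();
        return 0;
    }

The point is that the two stages interact only through the channel, which is exactly how hardware blocks talk to each other over wires.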
He is right about one thing though: today's x86 really does suck in so many ways. It is very fast at some things, like video codecs (extreme cache locality), but in practice it is many orders of magnitude slower when dealing with highly unordered memory requests. It all boils down to that pesky Memory Wall: truly random accesses are now thousands of times slower than the aggregate datapaths. Going to a 64-bit address space and having many cores only worsens this. What is the point of having 4 or more cores when even the first one is underutilized because of this wall?
One can remedy this somewhat by starting with a memory system using a low-latency RLDRAM from Micron that is much closer in performance to the processor cycle speed. The penalty is that at least 40 threads must be used per processor to hide the remaining memory latency, in effect trading a giant Memory Wall for a modest Thread Wall, and the memory costs more too. On this kind of processor, I would have no trouble partitioning the graphics compute-intensive parts of my apps onto large numbers of threads. I really have no idea how to exploit today's hardware anymore. Even moving data around in memory is highly unpredictable; longer blocks take far more cycles per word. In the MVC type of app, only the View part is usually compute-intensive, but it is also easy to tile.
So what thesis do you propose to make parallel computing run well on the next AMD/Intel masterpiece? Enlighten me please.
As for AMD going fabless, that is sad, the end of an era; as Jerry Sanders used to say so often, real men have fabs. I did work on one of AMD's parts a long time ago.
If everyone started modding me off to <0, I’d probably get pissed too and walk away.
Censorship is censorship, especially when it is based on popularity. I see the same crap happen on Digg, Slashdot and all those other so-called “social” sites. If OSNews cannot find a way to keep a small cretinous gang of regulars from censoring people they disagree with, then f–k OSNews and the mule it rode in on. I don’t need this shit.
Savain is a well known internet troll who routinely antagonizes the Erlang people and spams links to his blog everywhere. He also attacks well-established physics theories based purely upon evidence from the Bible, personal inability to believe, and other such nonsense. To top it all off, he basically calls Babbage and Turing idiots routinely (not to mention everyone else). The real kicker is that he will register multiple accounts to reply to himself in praise.
His COSA idea is just a finite state machine. I don’t see anything revolutionary about it. The problem with FSMs is that they don’t scale well. The number of states and transitions between states quickly grows out of control to the point that nobody can understand it. The reason why software is modeled the way it is now is because PEOPLE can understand it.
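To be fair, the scaling problem is easy to see with a back-of-the-envelope sketch: compose k independent two-state components into one flat machine and the product has 2^k states. A throwaway illustration (the component count is arbitrary):

    // Flattening k independent two-state components into one FSM yields
    // 2^k product states; purely illustrative numbers.
    #include <cstdint>
    #include <cstdio>

    int main() {
        for (int k = 1; k <= 20; ++k) {
            uint64_t states = 1ull << k;   // each component doubles the product machine
            std::printf("%2d components -> %llu flat states\n",
                        k, (unsigned long long)states);
        }
        return 0;
    }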
Yo, monkey. It’s one thing to try to censor others but it’s another to cowardly hide behind your anonymity to spread blatant lies. You’re a gutless piece of shit, and you know it. Everybody knows who I am. Identify yourself, you spineless moron.
I don’t condone trolls either (especially if religion or strange physics or rudeness is in play). It’s a shame that anyone would call Babbage, Turing or anyone else an idiot; people make the best of what is available to them, and very few can predict what can be done with technology that doesn’t yet exist. It is also a shame when the technology is right there and goes unused because the current market is almost set in concrete.
I looked briefly at his blog and some of the comments there. As I said before, the cycle simulation is exactly the way we have designed simpler synchronous digital chips for decades. We often start with Fortran, C or Matlab code, usually for DSP problems, that is expressed sequentially; we parallelize it and end up with hardware that is essentially an enormous group of FSMs. Can software folks do this? I don’t see why not for some problems, but it is only suitable for problems that look like they could be built in hardware. We have more tools today that do a lot of the grunt work, so it’s more about specifying what we want the chip to do, and the tooling figures out the high- and low-level architecture, plus the layout. Many of these tools are graphical-entry too. I don’t see why some of these tools couldn’t be re-engineered to be useful to those working with parallel processes, placement, scheduling and so on.
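For software people who have not seen it, here is a minimal sketch of the synchronous cycle-simulation style described above; the two blocks (a free-running counter feeding a comparator) are invented purely for illustration:

    // Cycle-based synchronous simulation: every block computes its next
    // state from the current snapshot, then all state commits at once,
    // like flip-flops on a clock edge.
    #include <cstdio>

    struct State { int count; bool match; };

    State next_cycle(const State& cur) {
        State nxt;
        nxt.count = (cur.count + 1) % 8;   // free-running 3-bit counter
        nxt.match = (cur.count == 5);      // comparator sees the current value
        return nxt;
    }

    int main() {
        State s = {0, false};
        for (int cycle = 0; cycle < 12; ++cycle) {
            std::printf("cycle %2d: count=%d match=%d\n", cycle, s.count, s.match);
            s = next_cycle(s);             // the "clock edge"
        }
        return 0;
    }

Because every block reads the same snapshot and commits together, evaluation within a cycle is deterministic and embarrassingly parallel, which is the property the hardware flow exploits.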
Now if you consider FPGAs, you have the possibility of moving parallel software code more fluidly into suitable HDLs and time-sharing synthesized blocks in the FPGA fabric. Combine that with the Opteron HT bus and you have maybe some acceleration possibilities for deeper pockets; it’s been done a few times. Some of the Occam people ended up in this space trying to convince software people that they could design hardware with Handel-C and the like, but it is a struggle.
On your point about software modeling: when we understand how software works, we usually do so with an idealized, simplified processor model, ignoring how the processor really does all its work. The same is true of hardware designs; we understand them at various abstraction levels, we have many transformational verification tools, and they mostly work when we constrain the design style. We usually have a software model to compare inputs, outputs and internals against those of the hardware simulation, often one to one, bit by bit. Clearly then, some software could be written as concurrent processes much like larger hardware blocks; it’s been done before and will be done again, with or without hardware support. The FSM level of abstraction is really only useful for the hardware guys though.
Sort of a Transputerfarm on a Chip:
http://www.intellasys.net
Though it has targets other than the usual computing.
Btw, I have a SATA->ATAPI->USB controller in an external disk which is somehow related to one of their former chips. It has never let me down so far, FORTH chugging along at its best 🙂
Yes, there are at least a dozen of these sorts of chips out there that typically include at least 8 and up to several hundred simple cores, often using a barrel-wheel MTA design. I’ve lost track of most of them, but they all superficially look like transputer farms. They aren’t, of course, because they don’t include support for communicating processes or the process scheduler in hardware; they generally use some other scheme for mapping processes onto sites. Many ex-Transputer people have gone into these projects, as well as Intel/AMD people, so you see familiar ideas in new clothes. Many of these chips get used in networking gear. Atiq Raza of RMI, formerly an Athlon architect, has a MIPS-based multicore which does use RLDRAM.
If you take the comparison to an extreme, some of the jumbo FPGA chips can look like a multi-core chip too, each core centered around some BlockRAM and a local DSP datapath cut into silicon. If you hook it up to DDR DRAM you end up right back at the memory wall again: not enough I/O pins to feed all the cores.
YES! AMD should quickly come out with BREAKTHROUGH technology from their ASS to LEAP FROG over their enemies!
They should figure out how to efficiently spread out programs over multiple cores that aren’t designed to efficiently scale to multiple cores! ITS SO EASY!
Well… he does bring up a good point. Both Intel and AMD have hit a barrier to giving us *faster* chips, and have turned to multicore and heavy PR and marketing to make people feel that they should keep upgrading to newer, faster computers, while dumping the problem on programmers in a move one might go so far as to call a cop-out. I don’t think the reality of this state of affairs gets enough press. And I am very skeptical that desktops will ever be parallelized to the extent that most users will benefit from the massive number of cores which are soon to come our way. In fact, I suspect we will suffer, as multithreaded code is going to be buggier and come with additional, hard-to-track-down, race-condition-related issues. I’ll stop short of saying that Intel and AMD are leading us straight to hell. “Down the garden path” may be a more appropriate phrase. It is critical to both companies that demand for new processors continues, whether the consumer actually benefits from it… or not.
Update: BOINC projects will likely be big winners, though. 😉
They are not standing completely passive. I personally have high hopes for Intel Threading Building Blocks, which seems to be a sober move in the right direction.
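For those who have not tried it, a minimal TBB sketch looks something like this; parallel_for, blocked_range and task_scheduler_init are real TBB names, while the Scale body and the array are made up for illustration:

    // Scale an array in parallel; TBB chops the index range into chunks
    // and schedules them across whatever cores are available.
    #include "tbb/task_scheduler_init.h"
    #include "tbb/parallel_for.h"
    #include "tbb/blocked_range.h"
    #include <cstddef>
    #include <vector>

    struct Scale {
        float* data;
        float factor;
        void operator()(const tbb::blocked_range<size_t>& r) const {
            for (size_t i = r.begin(); i != r.end(); ++i)
                data[i] *= factor;          // one chunk of the range
        }
    };

    int main() {
        tbb::task_scheduler_init init;      // start TBB's worker threads
        std::vector<float> v(1 << 20, 1.0f);
        Scale body = { &v[0], 2.0f };
        tbb::parallel_for(tbb::blocked_range<size_t>(0, v.size()), body);
        return 0;
    }

The appeal is that you state the loop once and let the library worry about chunking and load balancing across however many cores the machine happens to have.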
This situation is a bit like when everybody was complaining about going from 8-bit encodings to Unicode some 10 years ago. The bulk of the mess went into the VMs of languages like C# and Java, and now it’s not really a problem (although having wrapper macros/functions and conversions to wchar_t internally still makes it painful in C/C++; I think this is one of the main reasons C++ has lost to Java/C#).
I think that as languages mature around threading, and third-party tools like TBB get stable and accepted, threading will come somewhat naturally. The big complication with threading, I think, will arrive with NUMA, which I guess is the next logical step in 5-10 years. I hope TBB and the like will be able to evolve naturally to cope with NUMA instead of requiring a rewrite from scratch.
“C++ have lost to C#/Java.”
Sorry, but this is *totally* FUD.
My hat’s off to your splendid reasoning and motivation behind your argument.
Apart from legacy stuff and low-level things that need to be written in C/C++, how many use C/C++ nowadays? I’m not a big fan of massive VMs making my dual-core 2GB RAM computer really slow, but my dislike for them does not make them go away.
“Linux, Windows, Mac OS X and all operating systems are legacy and nobody uses them because they are made in C/C++.”
“All software done today in C/C++ is legacy”
Yeah, right. Nice Try.
A lot of software today is done using C/C++. I do C/C++ and it’s far from legacy.
I’ve seen many attempts throughout computing history by languages that have failed.
The main reason that Java/C# are still on the market is because they are endorsed by big companies.
Take Sun and Microsoft.NET team out of the game… Who will survive? C/C++.
Let’s see, on my machine the software that I’m currently running or frequently run include :
1) Browser
2) IM client
3) Mail client
4) Office Suite
5) Text editor
6) Photo editor
7) RAW editor
8) Media player
9) Photo manager
10) Bittorrent client
11) Loads and loads of games
12) R/Matlab
13) IDE
14) Misc OS tools
None of them are written in Java or .NET. This is on Windows and on Mac OS X which I use predominantly. While a lot of code is undoubtedly written in .NET and Java, they rarely leave the company door (i.e. they’re in-house apps). Nobody who writes client side apps where the client *pays* for the software writes it in anything other than C++. Thus if C++ is dead, when I die I wanna die like C++!
Good point. At the moment, though, I’m perfectly happy with letting my main programs fight over one or two cores and letting some distributed computing projects use up the idle cores.
That’s probably a waste of power. It might be more efficient if those researchers purchased computing time on a proper supercomputer and did their calculations in a place where thought has been put into calculations per joule or at least efficient ways of getting the energy needed to do the computations.
Purchasing such compute time is far easier said than done. (Most BOINC projects could not even remotely afford it.) Plus, very arguably, some projects hold the promise of benefits which outweigh any mflops/sec/watt differences. Climateprediction.net and hydrogen.net come to mind. It requires a certain amount of power simply to light up the machine, so really only the difference between the power consumption of an idle core and a fully loaded one “counts” during those times of day that the machine would be on anyway.
Quad cores like the Q6600, and to an even greater degree the 45nm Q9xxx series of quad cores, running on 80 Plus certified power supplies and not loaded down with power-hungry graphics cards, etc., actually do quite well on a mflops/sec/watt basis compared to “Top 500” supercomputers.
http://www.top500.org/lists/2008/06/highlights/power
In my experience, something on the order of 120 mips/watt or so is possible on these machines.
I use a Q6600 and the very affordable Earthwatts 380 80 Plus rated PSU and get about 80 mflops/sec/watt absolute, running Climateprediction.net, measured at the wall socket. I’m in front of the machine using it almost all day. The difference between idle and fully loaded is only 60 watts, so I am effectively getting 185 mflops/sec/watt during that time.
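For what it’s worth, those two figures are self-consistent: 185 mflops/sec/watt against the 60-watt idle-to-load delta implies roughly 11,000 mflops/sec of useful work, and 11,000 mflops/sec at 80 mflops/sec/watt puts the whole-system draw somewhere near 140 watts at the wall, which is plausible for a loaded Q6600 box without a discrete graphics card.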
During the winter, the power consumption can help offset heating costs, though my heat pump can do the job more efficiently. During the summer, the situation is reversed, of course.
The hope is that after a point the extra cores will allow us to run algorithms that simply aren’t feasible on few cores. Games will probably be an early beneficiary, since they might use the multiple cores to have better AIs or more complex scenes while still keeping up with realtime performance.
I highly doubt that. From what I know of AI apps, those are the very last thing that would run well on lots of cores with limited memory access.
AI needs access to very large data or knowledge sets that are far beyond the cache’s window. On the other hand, DSP-like apps such as ripping, encoding, decoding, crypto, neural nets, speech, image processing, and any major math problem should run like a charm.
transputer_guy, you don’t think that Nehalem will do much to defeat the memory wall?
I will take a look at Nehalem, I also need to see where C++ is going in parallel support.
As long as the system uses conventional multiplexed-address DDRn DRAM, you can’t solve the Memory Wall. The fundamental problem is the DRAM latency of >60ns plus memory-management overhead, as well as its poor bank management. Latency only gets relatively longer unless you take a hard axe to it and then hide what’s left over. The trusty old DRAM is almost 30 years old in its basic architecture; it goes way back to the 4027 4K chip. Its address bus was multiplexed to save pins when pins were expensive. From 1984 to 2004, the worst-case RAS cycle only halved, though the bit I/O rate increased greatly. It also went to synchronous design and to CMOS, but it is still recognizably the same old beast, only 250K times denser.
RLDRAM can reduce the 60ns+ down to 15ns or so, and it allows all 8 banks to fly concurrently, giving a sustained fully random in-bank issue rate of 2ns in an SRAM-like package. That’s for 512Mb chips with L3-type performance. With that you could relegate many GBs of DRAM to disk caching or swap space and use the RLDRAM for main memory. Either 4- or 8-way instruction threading will hide the 8-clock latency, and multiple cores can use up the 8-way bank issue rate. One thousand threaded MIPS is more valuable to me than several thousand bogus MIPS, and very predictable too. Of course, most of the time most of the threads will be idle, as in the classic single-thread design, but memory accesses for load and store can effectively take just two opcode slots.
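Spelling out the arithmetic behind that, using the figures above: a 15ns access against a 2ns issue slot means roughly 15/2 ≈ 8 requests have to be in flight to keep the banks busy, which is exactly what 8-way threading per core (or several cores each issuing a request) provides, and why the 8-clock latency effectively disappears.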
RLDRAM is currently used in networking gear for name translation tables.
IMHO it’s the beginning of the end. Now AMD doesn’t have complete control over their chip prices… They depend on third parties. It sounds pretty dangerous to me.
Owning a fab is very expensive, and if the fabs are underutilised it is even more inefficient. The question AMD have to ask themselves is whether the perceived ‘benefits’ of owning the fabs really result in a competitive edge. I’m sure what the AMD people have done is ask themselves whether the same thing could be achieved by outsourcing production and, at the same time, not only save money but remain competitive.
Let’s also remember that owning a fab is more than just ‘owning’ one; there is a lot of capital and investment tied up in it. If you outsource it, it should be cheaper for both parties, given that AMD would be freed from the huge capital outlays required for re-investment and the fab company would benefit by focusing on getting as much business outside AMD as possible (and thus economies of scale kick in).
I guess it’s good news for AMD.
AMD currently has good momentum against Nvidia. Their good chipsets (mainly the 780G) are also responsible for winning back some lost market share in the chipset/CPU business, but the release of really new architectures will be the main force behind AMD’s future.
Currently it looks like they’re targeting a smooth transition to the new AM3 socket with DDR3 support, but people are probably more anxious about their Fusion technology… even more so after the good RV770 release.
It looks like the first Fusions will be based on the current Phenom architecture instead of a new one, but let’s see how it’ll do against the new Intel ones…
Personally, I hope they do well and the market gets nice and healthy competition. Everyone, including the customer, wins then. =]
I have got a great idea! Split the company in two pieces – one piece a CPU maker – use the name AMD – and the other piece a graphics card manufacturer and call it ATI. Wouldn’t that be great?
We’re living in strange times – every company wants to rule the world and, to achieve this, they buy up other companies as they can. And when they struggle, they split into pieces again. How stupid can mankind be…?
More difficult than it sounds. AMD need the graphics and chipset parts to stay competitive with Intel. On the other hand, I do think that they lack sound and network/wireless. If they had the money, IMHO they should buy out a small wireless/network chip company and make a complete end-to-end competitor to Centrino – and better yet, expand that standard kit out to desktops as well. A stable platform is what OEMs look for, and if AMD can provide it, it’ll give them a competitive edge.
Not only do they now lose all their control over the fab production process, they’re at the whim of third parties to deliver the volume and products when AMD needs them. Watch this space; things are going to go downhill fast.
This is a HUGE win for Intel – and a massive problem for AMD.
They shouldn’t have gone near ATI; then they wouldn’t have been in such a bad mess.
I’m sure you are speaking from your vast experience of running a chip maker and operating a fab…
I started using Slackware on my Pentium II computer five years ago. At first it was too hard to understand and to administer the system, but day by day I came to like it, because Slackware is a simple, light, “configure it yourself / do it yourself” distribution.
I learned so many things with Slackware, from the installation (BSD style) to editing the configuration files, and so many interesting command-line tools and text-based applications.
To this day, Slackware is my choice.