AMD plans to start shipping the USB 3-equipped chipset in the fourth quarter of 2010, beating Intel to the post. Intel hasn’t announced its official plans for integrated USB 3 support yet, but various sources say it’s not expected until we’re well into 2011.
…the new iMacs might have it. I hadn’t realised Intel didn’t have it yet, so that explains that one. I was hoping they might have e-SATA too…
Can haz powered Light Peak nao?
Seriously, USB is a pretty lame interconnect, even in its latest iteration. How it won out over FireWire is beyond me, and spare me the “Apple’s greedy $1 licensing” spiel. If you think USB is better, then you have never used FireWire and don’t know why it is THE standard for audio and video work: USB’s latency is just too high, and its sustainable transfer rate too low.
With the current version of FW able to handle 3.2 Gbps, and the next version (IEEE P1394d) doubling that to 6.4 Gbps, it should be no contest, considering that FW actually gets close to its rated speed while I’ve never seen USB come within miles of its.
Maybe FW is more complicated to implement and thus more expensive, or there are very few manufacturers of the chips, which keeps prices high.
This is what Wikipedia says:
“However, the royalty which Apple and other patent holders initially demanded from users of FireWire (US$0.25 per end-user system) and the more expensive hardware needed to implement it (US$1–$2), both of which have since been dropped[citation needed], have prevented FireWire from displacing USB in low-end mass-market computer peripherals, where product cost is a major constraint.”
I do know that in IT/electronics everything wants to gravitate towards a mono-culture/monopoly. And if USB has a very large part of the market (because of a slightly lower price), it means bulk prices go down for those chips.
Maybe we’ll never know.
Yeah, you get what you pay for, which is disconcerting, as most companies these days only look at next-quarter profits without any kind of long-term goal. As we have seen time and time again, that causes nothing but bad things, and not just in IT.
Not only is the latency low, but the sustained throughput compared to USB2 is awesome. People look at 480 Mbps for USB2 and fail to understand it is the burst rate rather than the actual throughput. I’ve yet to see my hard disk get above about 120 Mbps (15 MB/s) when transferring files to my external hard disk, and yet I’d have no problem sustaining throughput on my old FireWire hard disk.
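The gap described here is partly unit confusion (megabits vs. megabytes) plus protocol overhead. A quick back-of-the-envelope sketch, using the figures from the comment rather than new measurements:

```python
# Rough arithmetic behind the Mbps-vs-MB/s confusion (illustrative figures
# from the comment above, not measurements): interface "speed" is quoted in
# megabits per second, while file copies are reported in megabytes per second.

def mbps_to_mb_per_s(mbps):
    """Convert a megabit-per-second line rate to megabytes per second."""
    return mbps / 8.0

usb2_signaling = mbps_to_mb_per_s(480)   # 60 MB/s on the wire, best case
observed_usb2 = 15.0                     # the ~15 MB/s sustained rate above

print(f"USB2 signaling rate: {usb2_signaling:.0f} MB/s")
print(f"Observed sustained:  {observed_usb2:.0f} MB/s "
      f"({observed_usb2 / usb2_signaling:.0%} of the headline number)")
```

So the oft-quoted 480 Mbps is only 60 MB/s even before any protocol overhead, and a 15 MB/s sustained copy is a quarter of that ceiling.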
On the PC platform, USB2 performs well enough for simple file storage/transfer, which is what most home users make use of.
My Gigabyte mainboard and a Dell laptop both have FireWire 400 built in, so I bought an external enclosure and cables. The end result is that the performance difference isn’t worth the cost premium.
For what I paid for the enclosure and cables plus another $20, I recently bought a USB3 enclosure, an add-in card for my desktop and 2 USB3 cables – the performance improvement over anything apart from eSATA is astonishing.
Unless FireWire bumps the speed way above the currently available 800 and becomes cost-competitive with USB3, there’s no way it’ll make any inroads against USB.
But like I said, it is entirely useless if it’s only a burst rate with realistic rates around 15 MB/s, and USB3 isn’t going to be much better either. Making something simple may be great for published data rates and production cost, but it’s annoying when I’ve yet to see devices get anywhere close to those rates.
For FireWire to compete it would need to lower its cost, and it can’t, because lowering the cost would remove all the advantages that come with FireWire in the first place. It is fast, reliable and low-latency because it does everything in hardware and has strict standards and guidelines; in other words, you can’t have cheap, poor-quality implementations because the standard is that demanding. I would sooner the FireWire ‘alliance’ market its virtues and get more hardware out there, even if it means some sort of arrangement to lower the barrier to entry until critical mass is reached.
What USB2 devices have you been using? I’ve been routinely transferring gigabytes of data from internal disks to USB2 enclosures (mostly Vantec-branded) and flash drives (Patriot and OCZ), and I get upwards of 20 MB/s for writes and 26-30 MB/s for reads.
Totally wrong – my Vantec USB3 enclosure, combined with an Asus U3S6 add-in card, allows all my drives, including my SSDs (the fastest of which is a G.Skill (Indilinx) Falcon 64GB), to perform at native speeds. It matches eSATA – it may even surpass it, but I don’t have anything fast enough to max out either interface.
The USB interface rate is definitely misleading and I recall an engineer saying that, based on the spec, 40 MB/s would be an absolute maximum, not 60.
However, your experiences with 15 MB/s must have been a long time ago, with sub-standard equipment, or with very small file sizes. Many cheap flash drives have horrendous performance, or have adequate read performance but execrable write speeds.
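For context on why even 60 MB/s is unreachable and the engineer’s ~40 MB/s figure is plausible: the USB 2.0 spec itself caps bulk traffic well below the signaling rate. A sketch of the arithmetic, using the bulk-transfer limits from the USB 2.0 specification:

```python
# Why 60 MB/s is unreachable for USB 2.0 bulk transfers: the spec divides
# the high-speed bus into 125 us microframes, and at most 13 512-byte bulk
# packets fit in one microframe once packet/protocol overhead is counted.

MICROFRAME_S = 125e-6      # high-speed microframe length (USB 2.0 spec)
MAX_BULK_PACKETS = 13      # per-microframe bulk packet limit (USB 2.0 spec)
BULK_PAYLOAD = 512         # bytes of payload per bulk packet

bytes_per_microframe = MAX_BULK_PACKETS * BULK_PAYLOAD
max_bulk_rate = bytes_per_microframe / MICROFRAME_S  # bytes per second

print(f"Theoretical bulk ceiling: {max_bulk_rate / 1e6:.1f} MB/s")
# Real devices lose more to NAKs, bus turnaround, and host-controller
# scheduling, which is how real-world figures in the 30-40 MB/s range arise.
```

That puts the hard ceiling around 53 MB/s before any real-world losses, so sustained rates in the 30s are unsurprising.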
Then I’m sorry to say that FireWire is doomed, or will remain an expensive niche. It’s the IDE vs. SCSI battle all over again: IDE was “good enough”, the price was right, and it started co-opting SCSI features and became SATA.
Now we have SATA and SAS, which are one-way compatible, and SATA is good enough to be included in a number of servers and storage arrays.
You’re probably moving large files. Small files seem to make for very low performance over USB.
500 MB+ files: 30-35 MB/s
<5 MB files: 5-15 MB/s
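That pattern is consistent with a roughly fixed per-file cost (directory updates, command turnaround) dominating small transfers. A toy model, where the bandwidth and overhead numbers are assumptions chosen to roughly reproduce the figures above, not measurements:

```python
# Toy model (illustrative, not a benchmark) of why small files crawl over
# USB: each file pays a roughly fixed per-file cost on top of the raw
# transfer time, so the effective rate collapses for small files.

def effective_rate(file_mb, sustained_mb_s=35.0, per_file_overhead_s=0.25):
    """Effective MB/s for one file, given sustained bandwidth and a fixed
    per-file overhead. Both parameter defaults are assumed example values."""
    transfer_time = file_mb / sustained_mb_s
    return file_mb / (transfer_time + per_file_overhead_s)

for size in (500, 5):
    print(f"{size:>4} MB file: ~{effective_rate(size):.1f} MB/s effective")
```

With these assumed numbers, a 500 MB file barely notices the overhead while a 5 MB file loses most of its bandwidth to it, matching the reported 30-35 vs. 5-15 MB/s split.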
I consistently get around 29 MB/s with external hard drives (both read and write).
Flash drives seem to get around 29 MB/s read and 15 MB/s write.
One shortcoming of USB seems to be that when all CPUs are fully loaded, throughput decreases.
I wonder whether the hard disk is limited by something in how Mac OS X handles USB and/or the FAT file system. Funnily enough, when I convert it to HFS+ the throughput improves, but it still isn’t close to that of the Windows world. So are there issues with how things are handled on Mac OS X? Is there a trade-off between conserving power and throughput, given that power management is baked right into the I/O Kit, where it’s automatic rather than something the driver developer regulates manually?
And if you think FireWire’s impressive, you’ve never used eSATA or any modern SCSI implementation. Both make FireWire look like a pathetic slowpoke.
Only two things have kept FireWire alive this long. The first was Apple’s stubborn “Not Invented Here” refusal to support USB 2 until years after everyone else supported it. The second was DV cams with FireWire support.
LOL! There are barely any devices that support FW 800, let alone the 1600 and 3200 variants (even though the standard was ratified back in 2007). With that track record, I’d expect shipping 6.4 Gbps FireWire devices sometime around 2015.
Of course, the difference is that the people behind USB don’t have their heads firmly inserted in their asses. Case in point: FW 800’s lack of backwards-compatibility with FW 400. I would love to have been in that planning meeting.
“Hey guys, I’ve got a brilliant idea! Let’s introduce a newer, faster version of the standard… and get this, let’s make the cables incompatible! That way, we’ll fragment an already-tiny niche, and even our existing installed base will have to jump through hoops to use the new version!”
Sure, USB 3 is a nice upgrade, but eSATA is plenty fast.
Here’s one beef I have with eSATA – cable connector design. In the 3 years I’ve been buying eSATA, I’ve had 3 cables fray at the connectors and leave pieces behind in the enclosure slot.
This has NEVER happened with either USB or FireWire connectors.
Here’s another eSATA complaint – what the heck were they thinking, not providing a powered option from the beginning, and then waiting until USB3 devices were nearly available to add one?
Who says it’s even on Intel’s gameplan? Who says it’s even a priority? Who says it’s even NECESSARY?!?
I think people are missing that you can have USB 3 without integrating it into the mainboard chipset, and that integrating it may not be the best idea in terms of cost: a larger die means lower yields, more stuff in the chipset means more to worry about thermally, and a chipset design has a shorter life when you have to retool because the next flavor-of-the-week protocol comes along.
It’s not like Intel or AMD go nuts integrating other transfer devices on their chipsets… I mean pull up your average X58 motherboard…
Let’s say an MSI X58M
Chipset: ICH10
Audio: Realtek ALC889
Ethernet: Realtek 8111
USB 2.0: NEC 1820
Flip it around, how about an AMD chipset board, like the AMD 890FX-equipped Asus Crosshair.
Chipset: AMD 890FX
Audio: Asus “SupremeFX”
Ethernet: Marvell 8059
USB 2.0: Promise (can’t make out the number)
… and frankly, 90%+ of the boards currently on the market use a USB chip that’s separate from the mainboard chipset (typically made by FTDI or a design licensed from them) in the first place and work just FINE.
If it’s good enough for USB 2, and good enough for Ethernet — why is it suddenly a “race” to put it into the chipset? Only real reason I can think is sensationalist reporting.
AMD takes every opportunity to try to gain market share, but in the end it’s going to lose, together with ATI.
Intel+NVIDIA is the way to go; all Linux users know that by now.
Eh… no. If AMD keeps innovating, it will gain market share.
And as for ATI: Linux represents 1% or less of the desktop market share. As that grows, I’m sure ATI’s drivers will get better.
Not integrating xHCI would be a waste unless you drop USB1/USB2 controllers from the chipset. xHCI covers all of USB1/2/3, dropping the “companion controller” model employed for USB2/EHCI.
So if you rely on xHCI being implemented in a separate chip, you actually carry _extra_ functionality (UHCI/OHCI and EHCI) that you wouldn’t need if you integrate xHCI in the first place.
I’m definitely doing something wrong when submitting news.
I tried to submit this link to article:
http://www.tomshardware.com/reviews/usb-3.0-flash-drive-superspeed,…
last week. The news sat in the pending list for a while but was never published. So here I am, putting the link in a comment on a related story.
Enjoy !
Nice read, but frustrating in the end, because they fail to explain or even hypothesize about some of the benchmark charts, especially the “Access Times” one.