The availability of 10 Gigabit Ethernet will likely spell a major change for networking as it becomes more widespread and cheaper. As one networking company CEO quoted in an eWeek article notes, “the new 10G fabric should allow for a seamless multiuse network incorporating storage, administrative and HPC communication needs, in comparison to having multiple networks as we do today.”
Isn't that exactly what they said about gig-e?
The day I will be impressed with networking is the day Virtua Fighter is playable over the net while keeping its current frame-by-frame fighting system.
Speed is one thing; real-time access time is another. There is still a BIG sector that can't be tapped because of the latency factor.
Damn, I was just about to get a gigabit switch now that 8-porters have come down to 50-100 UKP, including some with jumbo frames.
Most new motherboards come with gigabit NICs these days, and a PCI card is only about 10 UKP.
Although I've never seen a PC (not even a dual 3.2GHz Xeon with HT) get more than 700Mbps, I have seen an UltraSPARC 10 get 900Mbps over fibre. I really doubt hard drives could keep up with 10Gbps, though.
What kind of bus would you hook this 10Gbps card to? As I understand it, it's already problematic to run 1Gbps over PCI.
The speed of light has a major effect on minimum latency across the internet. It is roughly 1ms per 300km. So the minimum latency from SF – NYC would still be 15-16ms even without counting any delays due to routing, etc.
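A quick back-of-the-envelope check in Python (the ~4,700km figure is an assumed SF-NYC fibre path length, not a measured route):

# One-way propagation delay at the speed of light in vacuum.
C_KM_PER_MS = 300.0            # light covers roughly 300 km per millisecond in vacuum
distance_km = 4700             # assumed SF-NYC path length
delay_ms = distance_km / C_KM_PER_MS
print(f"minimum one-way latency: {delay_ms:.1f} ms")   # ~15.7 ms, before any routing delays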
So I can use gigabit with the existing infrastructure when I get the money. It was a huge pain to run all those wires; I do not wish to do it again, especially now that I would have to take down all those ceiling tiles in the basement.
I used to think that 100 Mbit was fast enough; then I tried to back up 5 GB of data from my desktop to my server… it took nearly 4 hours. Gigabit Ethernet would have made it about 5 minutes.
100Mbps can transfer data at roughly 11MB/s, which should transfer 5GB in 7min. If it took you 4 hours to transfer 5GB the 100Mbps network should not have been the limiting factor, unless it was badly misconfigured.
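A rough sketch of that arithmetic (the 11MB/s figure is the usual real-world rate for a healthy 100Mbps link, not a measurement of this particular network):

# How long 5 GB should take at a realistic 100Mbps rate (~11 MB/s).
payload_mb = 5 * 1024          # 5 GB expressed in MB
rate_mb_per_s = 11             # typical sustained rate on a healthy 100Mbps link
seconds = payload_mb / rate_mb_per_s
print(f"{seconds / 60:.1f} minutes")   # ~7.8 minutes, nowhere near 4 hours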
Though one PC can only use a certain amount of network speed, the network itself benefits from the higher throughput.
Say you can only get 1Gb/s (800Mb/s after overhead); that means that 10 computers can use one port on the 10Gb hub/switch without slowdown. It's like having a 10-lane highway and only being able to take up one lane: that just leaves a whole lot of room for other cars (data) to travel with less risk of collision.
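A toy version of that headroom calculation (the 800Mb/s-per-host figure is the assumption from the comment above):

# How many hosts pushing ~800 Mb/s each fit on a single 10 Gb/s uplink.
uplink_mbps = 10_000
per_host_mbps = 800            # assumed usable rate per host after overhead
hosts = uplink_mbps // per_host_mbps
print(f"{hosts} hosts before the uplink saturates")    # 12, so 10 hosts still leave headroom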
> Although I've never seen a PC (not even a dual 3.2GHz Xeon with HT) get more than 700Mbps
In general you will NEVER see more. There is the theoretical max and the real-world max. With the overhead of Ethernet and TCP/IP you will only see 70-80% of the listed speed as a sustainable rate: so 70-80Mb/s on 100Mb and 700-800Mb/s on 1Gb.
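In rough numbers (the 70-80% efficiency factor is the rule of thumb from this comment, not a measured value):

# Usable throughput after Ethernet + TCP/IP overhead, per the 70-80% rule of thumb.
for link_mbps in (100, 1_000, 10_000):
    low, high = 0.70 * link_mbps, 0.80 * link_mbps
    print(f"{link_mbps:>6} Mb/s link -> roughly {low:.0f}-{high:.0f} Mb/s sustained")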
“What kind of bus would you hook this 10Gbps card to? As I understand it, it's already problematic to run 1Gbps over PCI.”
Infiniband.
Considering my FireWire drive transfers that data in 15 minutes, your calculations are WAY off.
On top of that, Ethernet is noisy (more or less so depending upon the protocol used to communicate the data), so that limits the transfer speed as well.
“Infiniband.”
4X Infiniband is only capable of 8 Gbit/s, not fast enough for 10GbE. (Of course, the 64-bit 133MHz PCI-X 10GbE adapters Intel sells are only capable of about the same.)
On the host side of things you'd want to plug into an 8x PCI-E slot, which is capable of up to 32 Gbit/s.
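A sketch of why the bus matters, using the usual theoretical peaks (the 2.5Gb/s-per-lane and 8b/10b figures are the standard PCIe 1.x and InfiniBand SDR numbers, taken here as assumptions):

# Can a given bus carry a full-rate 10GbE card?  Theoretical peaks only.
buses_gbps = {
    "PCI-X 64-bit/133MHz":     64 * 133e6 / 1e9,    # ~8.5 Gb/s shared bus
    "InfiniBand 4X (data)":    4 * 2.5 * 0.8,       # ~8 Gb/s after 8b/10b coding
    "PCIe x8 (per direction)": 8 * 2.5 * 0.8,       # ~16 Gb/s each way, ~32 Gb/s both ways
}
for bus, gbps in buses_gbps.items():
    verdict = "enough" if gbps >= 10 else "too slow"
    print(f"{bus}: ~{gbps:.1f} Gb/s -> {verdict} for line-rate 10GbE")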
Who’s That?:
If it takes your FireWire drive 15min to transfer 5GB, it is either extremely slow or you have a very large number of very small files that is causing the slowdown. Taking 15min to transfer 5GB means your drive is only transferring at roughly 5.5MB/s, which even my cheap USB flash drive can do.
Assuming you are transferring data from a drive that can sustain at least 11MB/s, which any hard drive made since ~2000 should be able to do, you should be able to transfer 5GB in about 7min. I regularly reach 10-11MB/s on 100Mbps on my home network and 9MB/s+ on my shared office network.
Example at office:
apt-get source kde-i18n
Fetched 236MB in 21s (11.1MB/s)
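Working backwards from the times quoted in this thread gives the implied rates (sizes and durations are the ones people reported above):

# Implied throughput from the transfer times quoted in this thread.
def rate_mb_per_s(megabytes, seconds):
    return megabytes / seconds

print(f"5 GB in 15 min  -> {rate_mb_per_s(5 * 1024, 15 * 60):.1f} MB/s")    # ~5.7 MB/s
print(f"5 GB in 4 hours -> {rate_mb_per_s(5 * 1024, 4 * 3600):.2f} MB/s")   # ~0.36 MB/s
print(f"236 MB in 21 s  -> {rate_mb_per_s(236, 21):.1f} MB/s")              # ~11.2 MB/s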
I don't remember this very well since it's been a few years…
But doesn't light in fibre travel at about the same speed as an electrical signal? (Both at about 2/3 of the speed of light in vacuum.)
I don't see why latency would be a 'problem' with optical signals if it isn't with electrical ones? It should be about the same?
You are probably correct on the 2/3 figure; I do recall it being significantly less than c. So in the SF <-> NYC case it would be closer to 23-24ms minimum.
I didn’t mean to imply that there would be a difference between optical vs copper (electrical) networking. I was just stating to AlienSoldier that no matter how fast networks (bandwidth) become the minimum latency will still be governed by speed of light, which for long distances is a significant factor.
PCI Express has been out for almost 18 months now and there are hardly any high-end boards that have been released yet.
PCI-X doesn’t hold a candle to PCI-Express x8 for throughput.
I want to be using my nForce4 Ultra motherboard with an Athlon 64 X2 processor, loaded up with RAM, with a 10GbE PCI-Express card and a 12-port SATA PCI-Express RAID card.
Get on the ball guys!
Please show me a PCI-E board that has an 8x slot for desktop consumer usage. All I've got is 2x 16x, 1x 4x and 2x 1x. I'd like to purchase one soon, lol. Still waiting on that SATA 2 to come out.
As for 10 gigabit Ethernet, we're getting there, lol. I thought my Intel Pro 10/100/1000 MT PCI-X card was fast in my file server, but I see we need more than Cat 6 cabling.
10GbE is great for backbones but not for much else. Even the fastest hard drives/disk arrays have a hard time keeping up with GigE. Give me Internet2 and a 100Mb line into my house and I will be as happy as a pig in shit.
Yes, I am transferring 5 GB of my home folder on OS X. It is all JPEGs, AAC music files, and documents; nothing is larger than probably 3 MB.
And the transfer was from my desktop to my laptop in target mode (which makes it a FireWire drive), but even my USB 2.0 drive is about as fast, so that mitigates any theoretical overhead one might try to deduce from the computer being in target mode.
My point was that a 100 Mb/s connection is nowhere near as fast as that other person was trying to make it seem.
So many people have two GigE connections in a bunch of places yet never bother to use link aggregation.
It is a small step to 2Gbit and that is basically SATA 1.0 right there. That is faster than most drives (but not drive arrays) can deliver data.
As dual cable 2GigE is free or cheap, it is a nice stepping stone to 10GigE.
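A toy sketch of how a typical aggregation group behaves: flows are hashed onto one member link, so the pair gives 2Gb/s in aggregate while any single flow still tops out at 1Gb/s (the hash inputs, addresses and interface names here are simplified assumptions; real switches and bonding drivers vary):

import hashlib

# Toy flow hashing as done by typical link aggregation: each flow sticks to one member link.
links = ["gige0", "gige1"]

def pick_link(src_ip, dst_ip, dst_port):
    key = f"{src_ip}-{dst_ip}-{dst_port}".encode()
    return links[int(hashlib.md5(key).hexdigest(), 16) % len(links)]

for port in (22, 80, 2049, 3306):
    print(f"flow to port {port} -> {pick_link('10.0.0.5', '10.0.0.9', port)}")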
“Please show me a PCI-E board that has an 8x slot for desktop consumer usage. All I've got is 2x 16x, 1x 4x and 2x 1x.”
Huh? There's nothing stopping you from sticking a PCIe 8x card into a 16x slot. In any case, the Dell SC420 comes with a PCIe 8x and a 1x slot.
“I'd like to purchase one soon, lol. Still waiting on that SATA 2 to come out.”
There's nothing to wait for. The nForce4 Ultra/2200/2050 chipsets support four SATA II drives…
As mentioned earlier (although never really elaborated on), the bigger issue in communications now is not throughput, it's latency. If we wanted high throughput, we wouldn't be using any form of Ethernet; we'd use capacity-1 MAC protocols instead. For a MAC protocol, it doesn't get much better than Ethernet (the only things left to tweak are the backoff algorithms).
Network media are fast, as most signals travel near the speed of light; however, long distances, lots of intermediate hardware (i.e. routers/switches), and the networking stack (i.e. software) all add considerable amounts of latency. Until we can address these issues, 10Gb Ethernet is going to be under-utilized (especially if you send small packets, as they will have to be padded to the minimum packet size… which I'm guessing is now 100x bigger than fast Ethernet?).
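A small sketch of why raw bandwidth does little for latency: the time to clock a full-size frame onto the wire shrinks with link speed, but propagation and stack delays stay put (the 20ms propagation figure is an assumed cross-country one-way delay; 1500 bytes is the standard non-jumbo Ethernet MTU):

# Serialization delay of a 1500-byte frame vs a fixed propagation delay.
FRAME_BITS = 1500 * 8
propagation_ms = 20.0          # assumed one-way cross-country delay; does not shrink with link speed
for link_mbps in (100, 1_000, 10_000):
    serialization_ms = FRAME_BITS / (link_mbps * 1_000)   # link speed in bits per millisecond
    print(f"{link_mbps:>6} Mb/s: serialization {serialization_ms:.4f} ms, "
          f"total ~{serialization_ms + propagation_ms:.2f} ms")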
“I was just stating to AlienSoldier that no matter how fast networks (bandwidth) become the minimum latency will still be governed by speed of light, which for long distances is a significant factor.”
Here’s to hoping Quantum pairing will fix that up at some point in time…
Actually, Intel sells a 4-port 1-gigabit NIC.
4x 1Gbit = 4x 125MByte/sec in each direction, which makes ~1GByte/sec total; that is near the maximum total bandwidth of one 133MHz PCI-X slot.
The next generations are 266MHz and 533MHz PCI-X, where each new standard doubles the bandwidth. 266MHz and 533MHz PCI-X should appear at the same time, just as 133MHz PCI-X can be split into two 100MHz or four 66MHz PCI-X slots.
PCI-X 266MHz is enough for 16 gigabit gross, and PCI-X 533MHz has bandwidth up to 32 gigabit.
PCI-X 1066 is intended for 100 gigabit or 40 gigabit network cards, even though it has only 64 gigabit of bandwidth.
In my view PCI-X is the better future for expansion cards, because all the standards are upward compatible.
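To put those generations side by side, the theoretical peaks of a 64-bit PCI-X bus work out roughly like this (peak bus bandwidth only; real-world usable rates are lower):

# Theoretical peak bandwidth of a 64-bit PCI-X bus at each clock generation.
BUS_WIDTH_BITS = 64
for clock_mhz in (133, 266, 533, 1066):
    gbit_per_s = BUS_WIDTH_BITS * clock_mhz / 1_000
    print(f"PCI-X {clock_mhz} MHz: ~{gbit_per_s:.1f} Gbit/s peak (~{gbit_per_s / 8:.2f} GByte/s)")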
Most cards in servers today are only 64-bit/66MHz PCI cards; the newer ones are 133MHz, or some are 100MHz PCI-X.
PCI Express isn't intended to be the standard for the next decade. It's not yet certain that the PCI Express version 2 standard will be upward compatible with the current one. And PCI Express isn't upward compatible with the current PCI bus and drivers.
What I miss from the hardware vendors are RAID 6 or 15/51 SATA II / SAS (Serial Attached SCSI) controllers and optical disc drives with SATA II, so I could connect all drives to one controller. And then I could almost forget about the crappy onboard controllers.
The future of the current PC buses isn't “secure”. AMD has its HyperTransport interface, Intel wants to see PCI Express as the internal bus, and all current protocols/connections are supposed to become a new Infiniband standard…
Sorry for my poor English; it's not my native language.
“Here’s to hoping Quantum pairing will fix that up at some point in time…”
Sorry, but to date these experiments allow for different interpretations, and transmitting a signal faster than light has not been achieved.
It could well be that the universe allows for transmitting “noise” but no signal at faster-than-light speed. You know, everyone trying to build a physical model that would allow for faster-than-light speeds has been proven wrong at some point by experiments to date, and none of the experiments has contradicted Einstein's Theory of Relativity. Some were close to the edge, and quantum mechanics sometimes seems to get beyond the speed of light, but everyone who has tried to send a signal faster than light has failed so far.
Whilst link aggregation (aka port trunking) can help with the problem, the issue is often that the switches you plug into will only support a limited number of port trunks. For example, the HP ProCurve 5300xl series of switches can only support up to 36 trunk groups – that is, you'd only be able to have 36 individual servers with their dual GigE NICs aggregated into a 2Gbps link.
FWIW I personally think that for now 10Gbps links are going to be for backbone and switch to switch traffic rather than for individual machines.
I should also probably note that a major issue with network performance is the amount of CPU required to service the number of interrupts generated by a traditional NIC – there is a rule of thumb from Sun that you should allow 1MHz of CPU per 1Mbit of network traffic. Thus for a GigE link you should allow for 1GHz of CPU just to service the network traffic if you want to utilise the link as fully as possible. Whilst the use of jumbo frames does ease this problem, it is definitely an issue which I see on both Windows and Unix servers; it's one of the reasons why Sun is looking to put TCP/IP processing work on a sub-CPU on their newer systems (they call it FireHose AFAIR).
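Applying that 1MHz-per-1Mbit rule of thumb (a rough sizing guide, not a measurement):

# Sun's sizing rule of thumb: budget ~1 MHz of CPU per 1 Mbit/s of network traffic.
for link_mbps in (100, 1_000, 10_000):
    cpu_ghz = link_mbps / 1_000    # 1 MHz per Mbit/s, expressed in GHz
    print(f"{link_mbps:>6} Mb/s link -> ~{cpu_ghz:.1f} GHz of CPU just to drive the network stack")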
All too often I’ll find that people are complaining about network performance and when I look into it the problems are more often down to their systems not actually having enough CPU to cope with their workload and processing network traffic. As with anything, getting good performance is a matter of balancing numerous factors and improving one area – say going to 10GigE – means that other areas like disk, CPU, memory, bus bandwidth et al have to be improved as well.
Well, I am waiting on 100GigE.
None of this 10Gig crap for me… just not fast enough…
(That was a joke, btw.)
Gig backbone and 100meg links to the desktop seem plenty fast for most anything… of course, 10gig backbones might be sweet!
Of course, Ethernet is a messy system. Token Ring, anyone?