PC General - Tuesday, July 8, 2008 12:48
USB 2.0 vs. FireWire
USB 2.0 vs. FireWire Comparison
The current hot contenders for portable device connectivity are FireWire and USB 2.0. While USB 2.0's theoretical performance offers greater throughput, those numbers are a bit deceiving, as we'll see. IEEE 1394, or FireWire, uses peer-to-peer connectivity, while USB 2.0 uses the other dominant method, master-slave networking. Let's take a brief look at how the two operate and try to determine which is best suited to your individual needs.
Master-slave technology distributes information to individual nodes on a network through centralized servers, routers, hubs, and switches. Devices cannot communicate directly, but must go through these central points, even when they sit physically adjacent to one another. Many devices may be connected to a single router, and that router, in turn, may be only a small part of the larger network. This is typical network architecture in corporate environments, where data flow is monitored and controlled through a server. The largest drawback to this setup is that a failed hub or router can potentially crash large portions of the network, isolating vital components. In USB 2.0, even though throughput has a higher potential, it becomes diluted as data "bounces" from one location to another.
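To make the contrast concrete, here is a minimal Python sketch of the hub-mediated (master-slave) pattern. The Hub and Device classes are purely illustrative assumptions for this article, not a real USB API; the point is simply that every transfer passes through the central controller.

```python
# Toy model of master-slave (hub-based) communication, as in USB 2.0:
# devices never talk to each other directly; the hub/host mediates
# every transfer. Names and classes here are illustrative only.

class Device:
    def __init__(self, name):
        self.name = name

    def receive(self, sender, data):
        print(f"{self.name} got {data!r} from {sender} (via hub)")

class Hub:
    def __init__(self):
        self.devices = {}

    def attach(self, device):
        self.devices[device.name] = device

    def transfer(self, src_name, dst_name, data):
        # All traffic "bounces" through this central point; if the hub
        # fails, every attached device loses connectivity.
        self.devices[dst_name].receive(src_name, data)

hub = Hub()
hub.attach(Device("camera"))
hub.attach(Device("disk"))
hub.transfer("camera", "disk", "photo.jpg")
```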
Peer-to-peer architecture, however, allows every device on the network to communicate directly with every other device. A user who needs to print a file on a network printer can print directly, without tying up other resources. This also makes for greater dependability, since node A can exchange information with node D directly, without relying on nodes B and C to be active and available to relay that information across the network. The same property accounts for the apparently higher data speeds: no stress is placed on a central location that could slow down the entire network. On extremely large networks, P2P, or peer-to-peer, operation is essential to smooth running, as a truly large network would not be functional with all data traffic directed through a single server, or a group of servers operating in tandem.
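For comparison, here is a similarly minimal sketch of the peer-to-peer pattern, where nodes address each other directly with no central controller in the data path. Again, the PeerNode class is a made-up illustration, not the actual IEEE 1394 protocol.

```python
# Toy model of peer-to-peer communication, as in FireWire (IEEE 1394):
# any node can send to any other node it is connected to, with no hub
# or host sitting in the middle of the transfer.

class PeerNode:
    def __init__(self, name):
        self.name = name
        self.peers = {}

    def connect(self, other):
        # Nodes know about each other directly; no central point mediates.
        self.peers[other.name] = other
        other.peers[self.name] = self

    def send(self, dst_name, data):
        self.peers[dst_name].receive(self.name, data)

    def receive(self, sender, data):
        print(f"{self.name} got {data!r} directly from {sender}")

camcorder = PeerNode("camcorder")
printer = PeerNode("printer")
camcorder.connect(printer)
camcorder.send("printer", "frame-0001")
```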
Article written by MyComputerAid.com