Steve Jobs recently told a Mac user enquiring about the likelihood of USB3 on Macs in the near future that the technology is not ready because Intel has yet to adopt the platform. A recent rumour slated Intel to integrate USB3 into its chipsets no earlier than 2012.
LaCie, however, is not prepared to wait around until 2012, and has just released a USB 3.0 driver for Mac OS X. Just one catch: it only works with LaCie’s hardware.
Intel not implementing USB3? Perhaps because it’s basically worthless. The increase in speed is minor, and comes at the expense of CPU. USB is so badly designed and implemented that it has never come close to the real-world speeds seen with Firewire. Firewire 800 still trounces USB3. This joke has to end at some point, and my guess is Intel (and Apple) are betting that LightPeak is it. Everything over one cable, at 10 Gbps.
Now imagine a laptop with only one port, but that can drive any number of connected devices, from HD screens to disk drives.
And yet how widely used is Firewire these days? Or eSata, for that matter – 90% of external disks are USB-only, and you’ll pay quite a bit extra for eSATA or Firewire support, if you can even find them.
Superior technology doesn’t always win…
The reason it didn’t win is that Firewire holds vendors to a higher standard – in other words, they can’t do the dodgy corner-cutting they could otherwise get away with under USB2 to cut their own costs at the consumer’s expense.
Early in USB’s development there was a split between those who wanted the hardware to do more of the work and a second group who argued it would be better done in software, letting the CPU carry the load – since CPU performance was improving rapidly, the performance hit should be unnoticeable.
Unfortunately the industry is filled with corner cutting, from ‘winmodems’ to ‘winprinters’ to simply awful compromises when it comes to USB technology, all so some dick can save 5 cents per unit.
And because USB is both cheap and “good enough”. That’s a *very* hard combination to beat – there might be better alternatives, but USB is cheap, and doesn’t get any complaints from the majority of users.
Manufacturers aren’t going to go with a more expensive option just to please a tiny proportion of users. Not when they can call that a “high end” option, and charge you extra for it.
You talk as if Firewire never cut corners.
Firewire devices can read and write system memory directly, which allows anyone with physical access to a machine to perform all manner of hacks on a Firewire-enabled system. Now, while I appreciate that direct memory access was intended as a speed boost, did the developers not think that this was perhaps a flawed shortcut in terms of system security?
If someone has physical access to your machine, you’re fucked already; all the gymnastics in the world won’t change the reality that the would-be hardware hijacker has you by the balls even if you didn’t have a firewire device. To bring up the DMA issue sounds more like the death throes of desperation than a genuine concern about Ms Sixpack having her machine hacked whilst surfing the internet, all because of a firewire port.
There’s more to security than keeping Joe Bloggs safe from the web. What about company workstations? Public access / cyber cafes? Media labs? There are plenty of occasions when a system needs to be secured from its physical users.
While nobody is disputing that physical machine access opens the system up to a great deal more attack vectors, that also doesn’t mean that firewire should hand them an additional one on a silver platter.
DMA in firewire was a cut corner. Plain and simple.
Now I’m not saying that makes USB > Firewire. I’m just saying that corners were cut with firewire as well. I just wanted to give your posts some perspective as they read as if firewire could do no wrong while USB failed at every hurdle.
Actually, despite the somewhat derogatory terms, I always thought making the devices dumber and putting as much of the logic as possible in software was a pretty good idea…
I’ve owned both ‘win’modems and -printers that worked just fine under linux.
Sure, but only as long as a minimal amount of logic is kept in the hardware so that it is accessed by software via a standard interface.
Modern graphics, sound, and WiFi hardware, with their “one driver per vendor/hardware” philosophy, are just too much of a pain.
Things being mainstream and in common use does not make them good. All too often said things are actually pretty shitty.
USB disks are a good example of where USB2 fails miserably compared to Firewire.
It’s easy enough to find Firewire disk enclosures, and no, I am not paying more.
Ya, you can “win” like Blu-ray, and it’s very quickly on the way to being dead.
I have to disagree there. I prefer firewire too, to be honest, but USB3 is a huge improvement over USB2. The benchmarks I’ve seen show up to 200 MB/sec throughput at 1/4 the CPU utilization of USB2 on identical hardware.
One example (I have seen a few others that showed more or less the same data):
http://www.myce.com/article/usb3-superspeed-a-first-look-26421/
Now this may not be representative of Intel’s onboard controller (this is an NEC controller), but it does show it is certainly possible to keep CPU utilization under control with USB3.
I’m not saying it’s perfect, but to say it is worthless is waaayyyy over the top.
I’ve been using USB 3.0 for several months. For what I paid, it’s totally worth it; on a PC, USB 3.0 vs eSATA may be a headscratcher but over FireWire, it’s a complete no-brainer.
While typing this, I moved nearly 7 GBytes of mixed files at approx 95 MB/s, that’s plenty fast considering there’s a hard disk on each end. Unfortunately, I don’t have a pair of SSDs to try the same transfer.
Sorry, but that’s garbage.
The speed increase is a factor of 10, without any additional CPU overhead involved. In fact, with xHCI, more of the mundane work is offloaded from the CPU to the controller chip, simply because it’s nearly impossible for the CPU or busses to keep up with a 5 Gbps bus.
I implemented drivers for all USB controller standards, as well as a couple of USB devices.
Even with USB1 or USB2, I could schedule a 2TB transfer in one go, and then have the controller work on it until it’s done, with _no_ CPU intervention at all. The main problem is that my systems rarely have the memory to back it 😉
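To make that concrete, here is a minimal host-side sketch of the same idea using libusb-1.0 (my own illustration, not the poster’s driver code): one large bulk transfer is queued, the host controller works through it, and the CPU only wakes up for the completion event. The vendor/product IDs and endpoint address are placeholders.

/* Minimal sketch, assuming libusb-1.0: queue one large bulk read and let
 * the host controller work through it; the CPU only services the
 * completion callback. VID/PID and the endpoint address are placeholders. */
#include <stdio.h>
#include <stdlib.h>
#include <libusb-1.0/libusb.h>

#define VID   0x1234   /* placeholder vendor ID  */
#define PID   0x5678   /* placeholder product ID */
#define EP_IN 0x81     /* bulk IN endpoint, device-specific */

static volatile int done = 0;

static void LIBUSB_CALL on_complete(struct libusb_transfer *xfer)
{
    printf("done: status=%d, %d bytes\n", xfer->status, xfer->actual_length);
    done = 1;
}

int main(void)
{
    libusb_context *ctx;
    if (libusb_init(&ctx) != 0) return 1;

    libusb_device_handle *dev = libusb_open_device_with_vid_pid(ctx, VID, PID);
    if (!dev) { libusb_exit(ctx); return 1; }
    libusb_claim_interface(dev, 0);

    /* One big buffer; the controller's schedule handles the per-packet work. */
    const int len = 16 * 1024 * 1024;
    unsigned char *buf = malloc(len);

    struct libusb_transfer *xfer = libusb_alloc_transfer(0);
    libusb_fill_bulk_transfer(xfer, dev, EP_IN, buf, len,
                              on_complete, NULL, 0 /* no timeout */);
    libusb_submit_transfer(xfer);

    /* Sleep until the controller reports completion. */
    while (!done)
        libusb_handle_events(ctx);

    libusb_free_transfer(xfer);
    free(buf);
    libusb_release_interface(dev, 0);
    libusb_close(dev);
    libusb_exit(ctx);
    return 0;
}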
The main reason Intel doesn’t push USB3 is that it’s still copper-based. The USB3 drama (nothing in IT is done without sufficient drama) had that episode where Intel wanted to go optical, and all the other stakeholders didn’t.
So Intel was the editor of the standard (and thus responsible for its late release) and entirely uninterested in it – not a healthy situation for the standard.
Meanwhile, they continued to work on LightPeak. And given that ancestry, I fully expect it to be USB3-with-photons, so the same “issues” about CPU usage and so on will apply there as they do (or don’t) with USB.
Pig, lipstick, and all that (though it might be a good thing for Apple, given the negative campaigning they did against USB to promote Firewire).
Still trounces? ON WHAT PLANET?!?
480 Mbps USB2 vs. 800 Mbps Firewire 800 vs. 5 Gbps USB3 on spec. That’s not a ‘minor’ increase in speed.
As to optical like LightPeak, much like Firewire it’s too damned expensive to implement on the device side for its own good.
Performance is NOT everything – see the parallel-drive days of IDE vs. SCSI. IDE won that fight by sheer virtue of price to performance, with the only people going SCSI/SCSI2 being those with either more money than brains, or those who REALLY needed that last ounce of performance. Though in the early Mac days it was a laugh that 90% of hard drives for them were MFM drives in SCSI enclosures – including the ones sold by Apple. Even back then they did everything they could to make them more expensive than necessary, with a side of vendor lock-in.
I compare it to the nimrods who would go out and buy an i7 975 for $1090 when the i7 950 is only $294. I’m so sure that 8% speed boost is worth nearly four times the price – not.
Per root hub, USB 2 is as fast or FASTER than Firewire 400 – FACT. 480 Mbps vs. 400 Mbps. If I want to make a simple slave device using a cheap microcontroller like a PIC or ATmega, then for USB 2.0 I go buy something simple like a 48-pin FT245BM for 44 cents, or use a chip that already has USB slave capability built in (I’m a fan of the AT90USB1286, like the one on the Teensy++), and for under $30 prototyping or $5 low-count production I’m in business.
Firewire? You need two separate chips, the ‘link layer’ and the ‘physical layer’ – typically 100 pins each, only available in LQFP (so kiss off making a prototype without a real fab facility), and with such a complex and convoluted set of outputs that you end up needing a dedicated processor sitting between your target microcontroller and the two ‘layer’ chips just to turn their outputs into something useful. With ALL of those chips STARTING at around $20 a POP for orders in the THOUSANDS, it’s no wonder you will be hard pressed to find firewire in ANYTHING anymore.
Hell, digikey and mouser even stopped selling anything more complex than plugs for firewire.
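And the flip side of the cheap slave approach is that the host usually needs no custom driver work at all: with the stock ftdi_sio driver on Linux an FT245-style part just shows up as a serial port. A rough sketch of that, assuming the device appears as /dev/ttyUSB0 (the path is an assumption, and the exchanged bytes are purely illustrative):

/* Sketch: talking to an FTDI-based USB slave as a plain serial port.
 * Assumes Linux with the stock ftdi_sio driver and /dev/ttyUSB0. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <termios.h>

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);              /* raw bytes, no line discipline */
    tcsetattr(fd, TCSANOW, &tio);

    const char msg[] = "hello, device\n";
    write(fd, msg, strlen(msg));  /* bytes go straight out the FIFO */

    char buf[64];
    ssize_t n = read(fd, buf, sizeof buf);
    if (n > 0)
        printf("got %zd bytes back\n", n);

    close(fd);
    return 0;
}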
Now, I’m not sure about Firewire vs USB3, but I can speak to how terrible USB is. I’ve done several embedded projects that required either USB connectivity or a USB host, run by a PIC microcontroller. While both are possible, and pretty easy to get working initially, working with USB in an embedded environment made the faults of USB 1.1/2 clear. I got more reliable data out of UDP going to a Russian server and back than I did across the USB connection. And reliability varied by device/computer combination. I found that most USB mice send more packets that fail CRC than good ones. It’s terrible.
This is where I shake my head. It will never happen. It’s a nice theory, but in practice it would be absolutely useless. A laptop with one LightPeak port is about as useful as a laptop with a single USB port. Think of it this way: how much do you like cables? Enough to daisy-chain your laptop to your keyboard, mouse, your second monitor, your external hard drive, and your ‘LightPeak Mini’ enabled phone? *shakes head* Won’t happen.
What I predict, honestly, is that LightPeak will replace FireWire, not USB. It will also replace eSata and DisplayPort more than likely. And all computers with LightPeak will come with no fewer than two LightPeak ports. (Three will be standard, with one of them in the traditional HDMI location on today’s laptops. It might even be labeled as the display port) Laptops will continue to use USB, phones will continue to be USB, and the Status Quo will remain about the same.
*shrugs* ‘Technically Superior’ is, and always has been, a technicality. ‘Better’ isn’t based on merit, but on the lowest common denominator between price, proliferation, and practicality. I hate it, but it’s true.
It’s not that bad.
I remember a few years ago how people trounced USB2 on some professional audio interfaces. Most of those audio companies made driver improvements, CPUs caught up, and now some of them can record a full band at full resolution/bit depth, no problem.
One shouldn’t criticise a new technology just because the other supporting technologies are slow to catch up.
You just said it. Vendors had to hack around it and you had to throw more CPU power at the problem to make up for crappy USB.
There’s hardly a way the CPU can make up for “crappy USB” by calculating more.
Full resolution at full depth (stereo, 96 kHz, 24-bit) would be a mere 576,000 bytes/sec. Even USB1 could handle that in theory (for one channel).
How much CPU is used on that depends on how often you fetch new data (poll early, poll often = reduced latency). No matter which bus you use, that will mean a certain CPU overhead to get at the data. The main difference between buses might be how simple it is to move data through the bus driver stack of your OS.
If you can push audio data directly from the controller into user space memory of the target process, you can actually get along with _very_ low CPU overhead (near 0) and low latency (~250µs) on USB2 at the same time.
But I doubt any OS vendor would be very happy about a driver developer doing that without explicit support in the stack, and so it might be a rather fragile way of doing things (i.e. nothing you’d do for _professional_ audio).
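To put rough numbers on that trade-off, here’s a back-of-the-envelope sketch (my own illustration; the values are arithmetic, not measurements): the bandwidth figure from above, and how the chosen polling interval trades latency against CPU wake-ups per second.

/* Illustrative arithmetic: stereo 96 kHz / 24-bit audio bandwidth, and
 * how often the CPU is woken up for different USB polling intervals. */
#include <stdio.h>

int main(void)
{
    const int channels = 2, rate = 96000, bytes_per_sample = 3; /* 24-bit */
    const double bytes_per_sec = (double)channels * rate * bytes_per_sample;
    printf("audio bandwidth: %.0f bytes/sec\n", bytes_per_sec); /* 576000 */

    /* USB2 microframes are 125 us; poll every 1, 2, 8 or 32 microframes. */
    const double poll_us[] = { 125, 250, 1000, 4000 };
    for (int i = 0; i < 4; i++) {
        const double wakeups = 1e6 / poll_us[i];
        printf("poll every %4.0f us: %6.0f wakeups/sec, %5.1f bytes per poll\n",
               poll_us[i], wakeups, bytes_per_sec / wakeups);
    }
    return 0;
}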
Maybe because Intel is shipping LightPeak in 2011.
Firewire can have its share of problems depending on the chipset brand.
No kidding. I wonder how much experience people here have with Firewire. I work extensively with high-quality Firewire machine vision cameras, and they are far from trouble-free. They are very sensitive to the type of Firewire card: cheap ones generally don’t work, anything through a Firewire hub generally doesn’t work, and different versions of Windows have different bugs in the Firewire drivers which necessitate different patches.
The USB version of the same camera from the same company is far easier to just plug and go.
Yes, Firewire 800 can’t be beat in terms of speed, but still isn’t adequate for high resolution, high framerate images, so in the end we still need lightpeak or something similar to put both of them to rest.
Yeah, I had a similar experience with a high-framerate Firewire camera and a cheap Firewire card. Devices are not necessarily recognized, and when I got it to work, it was only to discover that the computer would then regularly go down because of an issue with the connection wire.
So maybe Firewire puts a higher demand on the manufacturer, but a minimal level of electrical protection is not part of it.
Sure, USB relies a lot on interrupts and thus the CPU, but Firewire (alias IEEE 1394) has its issues too (http://en.wikipedia.org/wiki/IEEE_1394_interface#Security_issues), and has faced fragmentation (in names and plug types).
Firewire is supposed to give power to peripherals, but depending on the plug and manufacturer it isn’t always there (because of the popularity of the small connector, which carries no power), giving an advantage to USB, where (weak) power and data are carried on the same cable.
And comparing USB to Firewire is not all that relevant, as they were designed with different purposes in mind (Firewire makes more sense for real-time communication links, like audio/video applications or small networks).
There’s nothing wrong with USB. What it lacks in capability and performance, it makes up for in simplicity and low cost. Computer mice, keyboards, and other HID devices have no use for the complexity of Firewire. Those are strengths.
This means that for only $8 I was able to get a nifty little USB adapter that lets me plug in either an N64 or a PSX DualShock controller. It is nothing more than a Cypress CY7C63001 USB microcontroller and less than two dozen resistors. USB’s simplicity on the device side allows it to serve as a replacement for old-school serial and parallel ports.
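On the host side such an adapter is just a generic HID device, so reading it takes almost no code either. A rough sketch with the hidapi library – the vendor/product IDs are placeholders, not the adapter’s real ones, and the report layout is device-specific:

/* Sketch: dumping raw HID input reports from a USB gamepad adapter.
 * VID/PID are placeholders; link against hidapi. */
#include <stdio.h>
#include <hidapi/hidapi.h>

#define VID 0x0810   /* placeholder */
#define PID 0x0001   /* placeholder */

int main(void)
{
    if (hid_init() != 0) return 1;

    hid_device *pad = hid_open(VID, PID, NULL);
    if (!pad) { fprintf(stderr, "adapter not found\n"); hid_exit(); return 1; }

    unsigned char report[64];
    for (int i = 0; i < 100; i++) {          /* dump the first 100 reports */
        int n = hid_read(pad, report, sizeof report);
        if (n <= 0) break;
        printf("report (%d bytes):", n);
        for (int j = 0; j < n; j++) printf(" %02x", report[j]);
        printf("\n");
    }

    hid_close(pad);
    hid_exit();
    return 0;
}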
Maybe Firewire can fit the bill, but if you look for hobby kits, they are overwhelmingly USB, when compared to Firewire.
$8? That’s a great price!
Which model is it if you don’t mind me asking?
It’s called “Boom”. Don’t remember where I got it. It’s not the greatest; since a dualshock controller has two more buttons than an N64, two N64 buttons register as two buttons being pressed. There are bits of strange behavior also, and it sometimes trips out software. Look on ebay. I think that’s where I found it.
Alternatively, you could spend $50 on a used Adaptoid. Those are awesome if all you need is N64 controller support. Games that provide force feedback will use the rumble pack, and N64 emulators can access the memory cards through it.
Oh yeah, this reminds me of the Windows 98 days, when USB memory sticks were just getting started. You needed a different driver for each stick to work on Win98. Later a universal driver was made.
I can perfectly understand why it only works with their gear.
The potential support nightmare they’d be taking on if they tried to make it work with everyone else’s gear would be overwhelming, particularly here at the start of USB3, when bugs (their own and others’) are likely.
Another of those “god forbid they don’t include it on the CPU die” moments. Let’s be specific: Intel isn’t supporting it built into the CPU… BIG **** DEAL!!! It’s not like you can’t add a NEC or TI controller chip on the bus to rectify this. IDE, SATA and regular USB didn’t use to be built in either, and that didn’t stop anyone from deploying those technologies, now did it?!?
Like it kills any motherboard vendor to spend the buck-fifty on adding a NEC µPD720200 to any board on the device bus… like, say, most every premium X58-chipset mainboard on the market right now. See the ASRock Extreme Pro I just dropped $150 on for the machine I’m using right now, or the $130 ASUS P7P55D sitting on my bench getting ready to go into another build.
Besides, it’s not like having it on the die offers REAL-WORLD differences in performance or power consumption compared to being a bus device… if anything that’s just more crap on the CPU you have to keep cool, when you could go sinkless with an external chip. (We’re going WAY too overboard on combining this stuff down!)
Then of course, as always, you have the firewire nutjobs pouring out of the woodwork making claims about Firewire that haven’t been true for a decade… Newsflash, dead-end-technology nuts: USB3 is capable of OVER SIX TIMES the speed of Firewire 800, while costing a fraction as much to actually implement!
You know, 800 Mbit/sec vs. 5000 Mbit/sec?
… then there’s the whole BS “it uses more CPU” argument – the ONLY thing about it that would make it use more CPU is that it is moving more data (six times more), so of COURSE the CPU footprint is higher. Bit for bit, there should be no difference, ESPECIALLY if the gruntwork is being handled by an off-board controller chip like the ones from NEC or TI.
As to Steve’s little statement, that’s just more proof he doesn’t know **** about hardware and should stick to what he does know, applying his reality distortion field and marketing magic to cover up the shoddy sleazy piss poor quality engineering of their overpriced garbage.
Tbh, bit for bit Firewire does use less CPU than USB. Firewire handles the transfer of data independently of the CPU, whereas USB requires the CPU in the middle. But true enough, it seems USB3 doesn’t use any more CPU than USB2 while still pushing out a lot more data, so it’s perfectly reasonable in my opinion.
The whole Firewire versus USB debate is useless, though: Firewire is a somewhat cleaner design, but it never caught on, and it never will. USB simply got such a good start, and now that almost everything imaginable has a USB port, all of that is not going to be thrown away. As such it’s pointless to continue moaning; Firewire will not happen.
Never actually seen real proof of that – I’ve heard it CLAIMED… But really, if an 8 MHz ATmega microcontroller can handle full USB2 throughput, I’m not really worried about it on a multi-GHz, multi-core, multi-threaded PC. That alone makes it a BS claim against it – kind of like web developer ‘experts’ calling tables slow; when a 386/40 running IE3 on Win 3.1 could handle them, is that really a valid point against tables today? (Mind you, there are lots of valid points against tables for layout!)
It’s cleaner in terms of the connectors and what goes on across the wires between them; I can agree with that. The problem is behind the connectors, interfacing with the CPU on one side and the device on the other. It takes two massive dedicated chips just to make a slave device, let alone a host, and it’s an absurdly convoluted mess compared to the one-chip solutions, ranging from simple slave controllers like the FTDI used on the Arduino to the USB controller integrated into the more complex AT90USB ATmega chips.
Cleaner? NOT SO MUCH!