Today at NewMobileComputing there’s a feature article about a prototype wireless-LAN-based open alternative to traditional wireless networks. There’s also some info on Palm’s OS market share, a PalmOS 6 preview, and a $299 Linux PDA.
…but where the hell is Eugenia? What makes her think she can take a vacation?
No offense David, you do a nice job, but it’s just not as much fun!
See for yourself:
http://theregister.co.uk/content/55/31638.html
I got a bit clued in when he started being concerned that IPv6 wouldn’t have a large enough address space, even though it allows for 3.4 × 10^38 addresses. He pretty casually tossed it out, just because mobile devices would have hard-coded addresses. Uh, right.
And then he just casually glossed over how you’d route to (apparently) more than 10^38 different devices without any kind of network segmentation. Currently, routing is done in a big-endian way: companies own blocks of address space, and everyone else basically has to maintain a list of how to get to each independent block. When you need a route to an IP, you find the portion of the IP that matches a block you know about and route toward that block. That block then makes a more specific match, and it all continues. (Yes, this is a gloss-over too, but at least it’s correct.)
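To make that concrete, here’s a minimal sketch of longest-prefix routing against a hypothetical forwarding table (IPv4 prefixes for brevity, Python’s stdlib ipaddress module; real routers keep this in a trie, not a dict):

```python
import ipaddress

# Hypothetical forwarding table: prefix -> next hop.
routes = {
    ipaddress.ip_network("10.0.0.0/8"): "hop-A",
    ipaddress.ip_network("10.1.0.0/16"): "hop-B",
    ipaddress.ip_network("0.0.0.0/0"): "default-hop",
}

def next_hop(addr: str) -> str:
    """Forward toward the most specific (longest) matching prefix."""
    ip = ipaddress.ip_address(addr)
    matches = [net for net in routes if ip in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(next_hop("10.1.2.3"))  # hop-B: the /16 beats the /8
print(next_hop("10.9.9.9"))  # hop-A
print(next_hop("8.8.8.8"))   # default-hop
```

The whole point is that the table only needs one entry per block, not one per device.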
There are currently tens of thousands of routes that every backbone router has to store, and even that many routes often cause problems. What this guy is talking about is throwing away the whole big-endian nature of routing (routing based on the least specific information, such as the first octet of your IP address) and instead maintaining a route for every single device on the ‘net. Now remember, that’s more than 10^38 devices he’s talking about (btw, the difference in size between the smallest particle in the universe and the size of the universe itself is only about 10^80, so that gives you an idea of how many IP addresses IPv6 offers).
Let’s say that he wants to do that. And let’s say that a route only needs to hold the IP address in question (which is definitely not true; you’d need lots more info). A 128-bit address is 16 bytes, so we’ve got 16 × 3.4 × 10^38 bytes of routes, or about 5 × 10^39 bytes. A terabyte is 2^40 bytes, which is roughly equivalent to 10^12 bytes. Again, working in rough numbers here, that means that every backbone router would have to maintain, um, about 10^27 terabytes of routes to do this.
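Spelled out in code (16 bytes per bare 128-bit address, as assumed above):

```python
addresses = 3.4e38        # total IPv6 addresses (~2**128)
bytes_per_route = 16      # 128 bits = 16 bytes: address only, no metadata
terabyte = 2**40          # ~1.1e12 bytes

total_bytes = addresses * bytes_per_route
print(f"{total_bytes:.1e} bytes")          # ~5.4e39 bytes
print(f"{total_bytes / terabyte:.1e} TB")  # ~4.9e27 terabytes per router
```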
Yeah, can you say whacko?
Get some knowledge about how the ‘net actually works before you start making crap up.
Strangely, most of the other stuff he’s talking about might actually work, in terms of using WiFi to replace wireless phones. But anyone who thinks telecom is going to look anything like it does today in 10 years is on crack anyway. Land lines will be gone, it’ll mostly be VoIP, and mobiles will be little computers doing VoIP. Sorry, he’s not doing anything special.
NewMobileComputing looks like the exact same website as OSNews… are they run by the same people?
NewMobileComputing is OSNews’ sister site.
Luke,
You have the wrong end of the stick and you have completely confused yourself.
I quote:
“I got a bit clued in when he started being concerned that IPv6 wouldn’t have a large enough address space, even though it allows for 3.4 × 10^38 addresses. He pretty casually tossed it out, just because mobile devices would have hard-coded addresses. Uh, right.”
Uh, wrong. Even with an addressing space of 10^38, I fully expect the number of devices requiring an address to outstrip it within 10 years. The problem is that because the IP address is hardwired, it essentially cannot be reallocated, even if the device is destroyed. We just do not have that kind of tracking mechanism. Nokia alone is due to produce 200 million units in the next quarter; in our terms, that would be 2*10^7 of the addressing space gone in just 3 months. I hope you see the problem.
I am also not talking about throwing away any form of routing. Can you explain where you got this idea from, and perhaps I can clear it up for you? You have just misunderstood something.
Also, you may wish to read this article. I wouldn’t rely too much on an author who can’t even use a PC to review my work.
http://www.theregister.co.uk/content/35/31683.html
Regards,
Mark McCarron.
Sorry, my mistake: that’s 2*10^8. Also, that is just one company. Then factor in the new routers, servers, and all the PCs that will also have a wireless card. Soon enough we will have web-enabled digital watches, radios, TVs, and possibly even web-enabled digital paper for ebooks and news. When people no longer face call charges and have essentially free access, there will be an online explosion. This will open the flood-gates.
I did not ‘pretty casually’ toss out IPv6. I was looking at IBM’s mistake with the 1024KB limit on memory, and this could be very costly in global terms. The IPv6 designers did not envision the rapid growth that will result from this system, and certainly did not expect that upper limit to be reached within their lifetime, let alone in 10 years.
Regards,
Mark.
Limitations of this Article
Well, firstly, you are not all as talented as me.
Bit full of ourselves, aren’t we?
PGP for mobile lossy data encryption? Chuckle.
hehe. Yes, completely!!!!
We tested PGP and it works brilliantly; actually, it’s based on OpenPGP. Performance was perfect for a mobile. After the initial hash is completed, it streams beautifully.
Well, PGP in the broadcast/wireless world is not widely used (I’d say 99% don’t use it), because it doesn’t scale well for high bitrates, for low bitrates with high user loads, or for block-based software FEC.
What would make the most sense would be to use PK to exchange initial symmetric keys, which are then used for session-based block encryption (ECB plus a seed derived from hardware IDs, or whatever), since FEC recovery is then quick and trivial. UDP is another option to speed data transfer up versus TCP, but then you have to write a packet handler on each side to do reconstruction. Given that you’re doing video transfer, I see no reason why you would use TCP. UDP plus tweened frames/codec correction is the way to go.
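For illustration, a rough sketch of that hybrid handshake in Python with the pyca/cryptography package. The RSA exchange and AES session cipher are illustrative choices, not anything the actual system specifies, and CTR is swapped in for the ECB mentioned above since it keeps per-block independence without ECB’s pattern leakage:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Receiver publishes a public key (generated here just for the demo).
receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Sender: pick a random session key and wrap it with the public key.
# The expensive asymmetric step happens once, at session setup.
session_key = os.urandom(32)
wrapped = receiver_key.public_key().encrypt(session_key, oaep)

# Receiver unwraps the session key...
unwrapped = receiver_key.decrypt(wrapped, oaep)

# ...and every subsequent block is cheap symmetric work, so FEC
# recovery of a lost block doesn't stall the whole stream.
nonce = os.urandom(16)
enc = Cipher(algorithms.AES(unwrapped), modes.CTR(nonce)).encryptor()
ciphertext = enc.update(b"one video block") + enc.finalize()
```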
Just my 2c, but you knew all this already, right?
This is just a quick response, Kon; thanks for your input. I will be back later to fill in more details on this.
UDP does not provide end-to-end connectivity and does not acknowledge packets either. UDP is therefore not of much use in streaming applications unless the network has extremely low latency and low packet loss. Wireless communication suffers from packet loss, a lot of it. UDP is the ‘shotgun’ of internet protocols. So end-to-end connectivity is required to keep the stream alive. TCP does not meet the specification required; however, FAST TCP does, and it boosts network performance by 6000%.
Do a search on FAST TCP, it is the future. TCP’s days are numbered.
Well, I don’t agree at all that TCP is preferred over UDP for delivering streaming content. In fact, most applications that stream video over any medium besides the standard ‘internet’ TCP path use some form of UDP encapsulation, because it is more efficient both from a server-load point of view (deliver one stream to many) and from a bandwidth point of view (no NACK implosions after an infrastructure failure). Acknowledgement is definitely something you don’t want when delivering video over IP.
It’s the difference between delivering to a few receivers and delivering to hundreds of thousands of receivers. Cable systems use UDP for IP video VOD distribution, DTV/HDTV/*FDM data encapsulation uses UDP packets, most modern players support UDP A/V data feeds, and even satellite data delivery (DVB, etc.) uses UDP packet encapsulation. DSL VOD and IPTV delivery use UDP too. These cover a mixture of extreme high and low packet-loss and latency environments.
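For what it’s worth, the one-to-many delivery I’m describing is a few lines of stdlib Python. The multicast group and port below are made up, and both ends are shown in one script just to sketch the shape of it:

```python
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004  # made-up multicast group and port

# Receiver side: join the group and wait. No connection, no ACKs back.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Sender side: one socket, one send, any number of joined receivers.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
tx.sendto(b"video packet payload", (GROUP, PORT))

# Server effort is identical whether 1 or 100,000 receivers joined.
data, _ = rx.recvfrom(2048)
```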
One deals with loss by using software- and hardware-based FEC (turbo codes, Reed-Solomon, even Hamming codes and carousels) and client-side buffering (be that round-robin memory queuing, or padded files stuffed as packets roll in).
Just like a standard OTA/satellite broadcast, missed data is not an issue. Any decoder worth its salt will be able to adapt within a minimum packet-loss threshold and sustain the video playback. Not requiring a connection back to the originating server to receive data will always give you better scalability than any protocol based on handshaking and acknowledgements.
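A toy version of the software FEC idea: send one XOR parity packet per group of packets, and any single lost packet in the group can be rebuilt at the receiver without ever contacting the server. The packet framing here is invented for the demo; real systems use Reed-Solomon or turbo codes, which survive multiple losses per group:

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Sender: a group of equal-sized data packets plus one parity packet.
group = [b"pkt0....", b"pkt1....", b"pkt2....", b"pkt3...."]
parity = reduce(xor, group)

# Receiver: packet 2 never arrived; XOR of everything else rebuilds it.
received = {0: group[0], 1: group[1], 3: group[3]}
rebuilt = reduce(xor, list(received.values()) + [parity])
assert rebuilt == group[2]
```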
As for FAST TCP network performance, how useful is adaptive data-rate prediction when you know your video stream is playing at CBR, and at that, a CBR well under the bitrate limitations of TCP? Not that useful at all…
In terms of CBR, that is alright when you are playing a single stream to a single handset. The network we are talking about will have over 100,000 access points (in a small city alone), and distributing that load over the intelligent self-organising router structure requires adaptive data-rate prediction. This is not a small-office solution but an enterprise, mission-critical deployment.
We are also talking about an urban deployment of a 5GHz line-of-sight system where high packet loss is expected. UDP is good in cable and satellite systems due to the signal strength and quality. It cannot even be a consideration for this type of system; the signal is just too weak.
Also, because of the bandwidth and FAST TCP, acknowledgement of packets is not even an issue; it is part of the standard packet-management system, and it works fine with current TCP implementations of streaming video.
UDP is good in cable and satellite due to signal strength and quality? That goes against every proven study I’ve ever seen or any deployment I’ve worked on in the last 10 years. These systems are completely at the whim of packet loss. To think that a TCP based system where reconnections are required after frequent connection loss is superior – I can’t even fathom the logic behind that.
There are data broadcasting companies which make a living *merely* off the fact that they provide software FEC codes for wireless error recovery. And why you are not making use of UDP multicast to deploy video makes even less sense.
Packet acknowledgement is still an issue. And server-client connectivity, connection scalability, and handshaking are also issues. These are non-issues when deploying via UDP.
If you have real packet loss issues you would put drones in the field to do automated packet accounting and link quality measurement.
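The accounting side of that is simple enough: number the packets and count the gaps. A sketch, with a hypothetical probe log standing in for a real field measurement:

```python
def loss_fraction(seqnums: list[int]) -> float:
    """Fraction of packets lost, inferred from sequence-number gaps."""
    expected = max(seqnums) - min(seqnums) + 1
    return 1.0 - len(set(seqnums)) / expected

# Hypothetical probe log: sequence numbers seen at one field receiver.
seen = [0, 1, 2, 4, 5, 8, 9, 10]
print(f"loss: {loss_fraction(seen):.0%}")  # loss: 27%
```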
Anyway, good luck – the rest of the wireless data broadcasting community will continue to move in the opposite direction.
Now, Kon you made this comment earlier:
“In fact most applications that stream video over any medium besides standard ‘internet’ TCP, use some form of UDP encapsulation…”
So, what you’re saying is that normal TCP is used for streaming media. Even AOL-Time Warner and Microsoft are in current negotiations with the developers of FAST TCP to deliver their streaming-media services.
Now, you also said:
“UDP is good in cable and satellite due to signal strength and quality? That goes against every proven study I’ve ever seen or any deployment I’ve worked on in the last 10 years.”
The answer is yes, when you compare that signal strength and quality against a line-of-sight Wi-Fi implementation. For your information, the ‘rest’ of the wireless data-broadcasting community is going FAST TCP as well. If you are moving in the opposite direction, then you will be slowing networks down.
Good luck to you.