Linked by Kroc Camen on Thu 12th Nov 2009 19:30 UTC
Google have created a new HTTP-based protocol, "SPDY" (pronounced "Speedy"), to address the problem of client-server latency in HTTP. "We want to continue building on the web's tradition of experimentation and optimization, to further support the evolution of websites and browsers. So over the last few months, a few of us here at Google have been experimenting with new ways for web browsers and servers to speak to each other, resulting in a prototype web server and Google Chrome client with SPDY support."
The world is mine
by dmrio on Thu 12th Nov 2009 20:15 UTC
dmrio
Member since:
2005-08-26

Google is REALLY interested in taking over the entire internet!

Reply Score: 3

RE: The world is mine
by Kroc on Thu 12th Nov 2009 21:08 UTC in reply to "The world is mine"
Kroc Member since:
2005-11-10

With open standards and open source software that anybody can implement and use!

Reply Score: 12

RE[2]: The world is mine
by dmrio on Fri 13th Nov 2009 15:07 UTC in reply to "RE: The world is mine"
dmrio Member since:
2005-08-26

Would they 'implement and use' even a small amount of their annual income with the same openness? They are doing business, not charity.

Reply Score: 1

RE: The world is mine
by gustl on Sat 14th Nov 2009 10:40 UTC in reply to "The world is mine"
gustl Member since:
2006-01-19

Doing it the way they are doing it now, I can see no downside at all.

Doing it the Microsoft way is what I despise.

Reply Score: 3

Looking for free labour force...
by dindin on Thu 12th Nov 2009 20:23 UTC
dindin
Member since:
2006-03-29

On that page, they are again asking for community work on this. They have reached a point where ........

Reply Score: 2

Delgarde Member since:
2008-08-19

On that page, they are again asking for community work on this. They have reached a point where ........


Well, that's more or less how open-source works, isn't it? A core team builds the basics of something, then looks for collaborators to help it grow?

For something as fundamental as the HTTP protocol, Google certainly can't do everything themselves - they need people who make web servers, and web browsers, and HTTP and web services libraries to pick up what they've done, and incorporate it into their own projects...

Reply Score: 5

BallmerKnowsBest Member since:
2008-06-02

On that page, they are again asking for community work on this. They have reached a point where ........


So you'd rather that Google closed the code and released it as part of a proprietary product instead? Interesting approach to open source advocacy.

Reply Score: 4

FunkyELF Member since:
2006-07-26

So you'd rather that Google closed the code and released it as part of a proprietary product instead? Interesting approach to open source advocacy.


Or even worse, completing the product in-house without any feedback and then releasing it as open source.

Reply Score: 2

Interesting
by Delgarde on Thu 12th Nov 2009 20:27 UTC
Delgarde
Member since:
2008-08-19

Header compression is something that I can certainly see being useful. Web apps using AJAXy techniques, web services - they're characterized by having relatively little content, meaning the uncompressed headers could often be half the traffic being transmitted.
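
To put rough numbers on that, here is a minimal sketch (plain Python and gzip, not SPDY's actual header compression; the header values are invented for illustration) showing how the headers of a small AJAX-style request dwarf the payload:

# Rough illustration only: how much of a tiny AJAX exchange is header overhead,
# and what even plain gzip would save. Header values are made up.
import gzip

request_headers = (
    "GET /api/unread-count HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/532.0 Chrome/4.0\r\n"
    "Accept: application/json\r\n"
    "Accept-Encoding: gzip,deflate\r\n"
    "Accept-Language: en-US,en;q=0.8\r\n"
    "Cookie: session=0123456789abcdef0123456789abcdef; prefs=compact\r\n"
    "Referer: http://example.com/inbox\r\n\r\n"
).encode()

payload = b'{"unread": 3}'  # the actual content being fetched

print("headers:        ", len(request_headers), "bytes")
print("payload:        ", len(payload), "bytes")
print("headers gzipped:", len(gzip.compress(request_headers)), "bytes")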

Reply Score: 6

Downside
by geleto on Thu 12th Nov 2009 20:32 UTC
geleto
Member since:
2005-07-06

I see one downside to this. With only one TCP connection, losing a packet pauses the transmission of ALL resources until the lost packet is retransmitted. Because of the way TCP congestion avoidance works (increase speed until you start losing packets), this will not be a rare occurrence. There are two ways around this: use multiple TCP streams or, better, use UDP.

Edited 2009-11-12 20:33 UTC

Reply Score: 2

RE: Downside
by cerbie on Thu 12th Nov 2009 23:42 UTC in reply to "Downside"
cerbie Member since:
2006-01-02

...but the resources to be retransmitted are also now smaller and more efficient, helping to negate it. So, if it becomes a problem, do a little reconfiguration, and change default recommendations on new pieces of network infrastructure. The networks will adapt, if it's a problem.

If it ends up working out, it can be worked into browsers and web servers all over, and many of us can benefit. Those who don't benefit can safely ignore it, if it's implemented well. We all win. Yay.

The Real Problem we have is that old protocols have proven themselves extensible and robust. But those protocols weren't designed to do what we're doing with them. So, if you can extend them again, wrap them in something, etc., you can gain 90% of the benefits of a superior protocol, but with an easy fallback for "legacy" systems, and easy routing through "legacy" systems. This is generally a win when starting from proven-good tech, even if it adds layers of complexity.

Reply Score: 4

RE: Downside
by modmans2ndcoming on Fri 13th Nov 2009 04:10 UTC in reply to "Downside"
modmans2ndcoming Member since:
2005-11-09

Oh... yes... let's use UDP so we can get half a web page, a corrupted SSL session, and non-working or wrongly working sites.

Yes... UDP is the solution to everyone's problems.

Oh wait, no... it's not, because it is a mindless protocol that does not care if something important is lost, or if it is wasting its time sending the data to the other end.

Reply Score: 6

RE[2]: Downside
by geleto on Fri 13th Nov 2009 17:29 UTC in reply to "RE: Downside"
geleto Member since:
2005-07-06

You can implement detection and retransmission of lost packets on top of UDP. The problem is TCP's in-order delivery: when you lose a packet, all the packets received after it wait until the lost one is retransmitted. With UDP you can use the data in new packets right away, no matter if an older packet is missing and has to be retransmitted.
Imagine the situation where you are loading many images on a page simultaneously: a packet is lost, and because only one TCP connection is used, all the images stall until the lost packet is retransmitted.
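
A very rough sketch of that idea (not a real protocol; the names are invented and the retransmission request is left to the caller): give each UDP datagram its own sequence number, so the receiver can use whatever arrives immediately and only ask again for the gaps.

# Sketch only: number each UDP datagram so arrival order doesn't matter.
import socket
import struct

HEADER = struct.Struct("!I")  # 4-byte sequence number, network byte order

def send_chunks(sock, addr, chunks):
    # Tag every chunk with its sequence number and fire it off.
    for seq, chunk in enumerate(chunks):
        sock.sendto(HEADER.pack(seq) + chunk, addr)

def receive_some(sock, expected):
    # Use chunks as they arrive, in any order; report which ones are missing.
    got = {}
    sock.settimeout(0.5)
    try:
        while len(got) < expected:
            data, _ = sock.recvfrom(65535)
            (seq,) = HEADER.unpack(data[:HEADER.size])
            got[seq] = data[HEADER.size:]   # usable right away, unlike TCP's in-order queue
    except socket.timeout:
        pass
    missing = [s for s in range(expected) if s not in got]
    return got, missing                     # the caller re-requests only `missing`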

Edited 2009-11-13 17:31 UTC

Reply Score: 1

RE[3]: Downside
by modmans2ndcoming on Sun 15th Nov 2009 04:41 UTC in reply to "RE[2]: Downside"
modmans2ndcoming Member since:
2005-11-09

Thanks for playing but you have no idea what you are talking about.

Reply Score: 2

I want that DSL connection!
by jharrell on Thu 12th Nov 2009 20:39 UTC
jharrell
Member since:
2007-07-30

"On the lower-bandwidth DSL link, in which the upload link is only 375 Mbps"

WOW. Who cares about header compression when you've got 375 Mbps!

Edited 2009-11-12 20:39 UTC

Reply Score: 7

RE: I want that DSL connection!
by elanthis on Thu 12th Nov 2009 22:32 UTC in reply to "I want that DSL connection!"
elanthis Member since:
2007-02-17

HTTP headers are huge. Payloads are huge. 375 Mbps is not enough when you have dozens of largish HTTP requests flying over the wire. Remember, that's mega BITS, not mega BYTES, and it's just a measure of bandwidth, not latency. Also keep in mind that as soon as a request gets larger than a single packet/frame, performance can quickly tank. If the compression keeps the entire request under the MTU, you can get huge latency reductions.
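
A back-of-the-envelope way to see the single-packet point (the sizes below are assumptions for illustration, not measurements):

# How many packets does a request need? Fewer packets, fewer chances to stall.
MTU = 1500             # typical Ethernet MTU, in bytes
IP_TCP_OVERHEAD = 40   # IPv4 + TCP headers without options

def packets_needed(request_bytes):
    usable = MTU - IP_TCP_OVERHEAD
    return -(-request_bytes // usable)  # ceiling division

for size in (700, 1400, 2100):  # e.g. compressed vs. uncompressed request sizes
    print(size, "byte request ->", packets_needed(size), "packet(s)")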

Reply Score: 2

RE[2]: I want that DSL connection!
by Verunks on Thu 12th Nov 2009 22:39 UTC in reply to "RE: I want that DSL connection!"
Verunks Member since:
2007-04-02

Actually, that's a typo; even in bits it would be huge.
The real value is 375 Kbps, as you can see on this page:
http://dev.chromium.org/spdy/spdy-whitepaper

Reply Score: 7

SSL and Header compression?
by joshv on Thu 12th Nov 2009 20:47 UTC
joshv
Member since:
2006-03-18

It was my understanding that SSL compressed the stream as a side effect of encryption, and that headers are within the encrypted stream - so if they are using SSL exclusively, why would you need to compress headers?

Reply Score: 2

Calling all FOSS Developers...
by tomcat on Thu 12th Nov 2009 20:58 UTC
tomcat
Member since:
2006-01-06

Please drop what you're doing, sacrifice your personal time, and give free resources so that Google can make their next several billion dollars. (Suckers)

Reply Score: 0

RE: Calling all FOSS Developers...
by evert on Thu 12th Nov 2009 22:06 UTC in reply to "Calling all FOSS Developers..."
evert Member since:
2005-07-06

Google's Summer of Code works the other way around.

Reply Score: 7

dnebdal Member since:
2008-08-27

Please enlighten us - what are those purposes?
For bonus points, explain the SoC students working on Haiku, FreeBSD, and other OSes and projects google has never used.

Reply Score: 6

tomcat Member since:
2006-01-06

Please enlighten us - what are those purposes? For bonus points, explain the SoC students working on Haiku, FreeBSD, and other OSes and projects google has never used.


Mindshare: They gain a bunch of Google seminary students who will likely support and use Google's platform in the future, and simultaneously keep their competitors (Microsoft, Yahoo, etc.) from gaining mindshare with the next generation of devs.

Edited 2009-11-12 23:44 UTC

Reply Score: 3

ichi Member since:
2007-03-06

It's quite interesting then to see that one of the funded projects is Mono.

Does Google gain mindshare? Sure, from those that get into the program, but that's about it. Funding a project and gaining mindshare always go together.

People get paid to work on projects not related to Google, so Google doesn't get to take advantage of any of that (not any more than any other company).

As far as I know no one is stopping Microsoft or Yahoo from applying as a mentoring organization and getting Google sponsored coders for free.

Reply Score: 5

DOSguy Member since:
2009-07-27

The picture you try to paint suggests this bunch of students has no CHOICE but to "support and use Google's platform in the future". They are students; I assume they can think for themselves and understand what marketing means.
I don't see any problem with Google's SoC and I am happy that a big company like Google tries to be open in at least some way and shares a lot of its research and code. Even if its only motive is to make a lot of money or to market itself, that is the case for every company. With Google we at least get something in return....

Reply Score: 5

tomcat Member since:
2006-01-06

Anybody who spends any time helping Google become more successful -- without being compensated -- is a moron. If Google cares enough about this project, they will FUND it.

Edited 2009-11-13 01:17 UTC

Reply Score: 1

Panajev Member since:
2008-01-09

I guess I do not get why people clamor for closed source software to be opened... like they did with Java in the past...

Open it only when it is ready?

Reply Score: 1

DOSguy Member since:
2009-07-27

Google is not going to be more successful just because of a faster implementation of HTTP. Every internet user would benefit from a faster WWW though, and anybody contributing to such a goal, paid or unpaid, successful or unsuccessful, deserves credit and respect.

Reply Score: 2

ichi Member since:
2007-03-06

How does paying students to work on projects like bzflag or scummvm fit with Google's "own purposes"? What would those purposes be?

Reply Score: 4

tomcat Member since:
2006-01-06

How does paying students to work on projects like bzflag or scummvm fit with Google's "own purposes"? What would those purposes be?


http://www.osnews.com/thread?394382

Reply Score: 2

madcrow Member since:
2006-03-13

How does paying students to work on projects like bzflag or scummvm fit with Google's "own purposes"? What would those purposes be?

Clearly Google employees are bored and in need of better games on their so-called "workstations". What could be better than a nice game of "Monkey Island" or BZFlag?

Reply Score: 5

Soulbender Member since:
2005-08-18

Alternatively you can spend your time rooting for a commercial company that cares nothing about you, do their QA work for them and not get paid for that either while they're making billions.

Reply Score: 7

Another copy-paste move by Google
by harcalion on Thu 12th Nov 2009 21:07 UTC
mckill Member since:
2007-06-12

So, after ripping all the features for Chrome off from their competitors and offering none new, now they want to copy paste Opera Unite into their new client-server HTTPish protocol. Will Google ever create anything innovative?


What do you mean 'rip off' and 'offering none'? Their source is there for everyone to also 'rip off', and every WebKit change they've made has been committed back to WebKit; they didn't fork.

Reply Score: 7

galvanash Member since:
2006-01-25

So, after ripping all the features for Chrome off from their competitors and offering none new, now they want to copy paste Opera Unite into their new client-server HTTPish protocol. Will Google ever create anything innovative?


As stupid as it sounds, probably the most noteworthy feature of Google Chrome (and the one that differentiates it the most) is that it puts its tabs above the address bar. Innovative? I wouldn't say that - it just plain makes more sense that way. But it certainly wasn't ripped off from anyone else.

As for Opera Unite??? What in the hell are you talking about? This SPDY stuff isn't even remotely related to that, and I mean not even r e m o t e l y.

Reply Score: 3

FealDorf Member since:
2008-01-07

[I'll probably be voted negative for this but who cares!]

Chrome = Speed Dial, top tabs, bookmark syncing, etc.
Opera had all of these before Chrome. The first two were introduced in Opera before anywhere else, and I believe Opera was the first browser to integrate bookmark sync.
SPDY, on the other hand, is more like Opera Turbo, which compresses HTTP streams but also reduces the quality of images.
Hell, even GMail wasn't the first 1GB mail -- I remember a Mac fansite (Spymac? it's a strange site now) that offered 1GB of free email before Google did.

Opera is often innovative but doesn't put much energy into refining things. Google, on the other hand, waxes and polishes them and makes them shiny for the user.

[rant]And seriously; it being open source doesn't automatically mean that any business can and will adopt it.. It's better, sure.. but that doesn't stop their world domination ^_^[/rant]

Reply Score: 2

Kroc Member since:
2005-11-10

Opera Turbo is a proxy. If you don't mind your data being routed through Europe and heavily compressed beyond recognition.

SPDY is _not_ a feature in some web browser--it is a communications standard that anybody could implement in any browser. They have created a test version in Chrome, but Mozilla could just as well implement it too.

Reply Score: 2

FealDorf Member since:
2008-01-07

Both technologies do the same thing -- compress web pages. One does it via a proxy, the other through a protocol implementation. And a proxy is much easier to integrate than a wholly new standard. Unless you have something racially against europe, if it sends me my pages faster I have no issues. Images? Yes! It's for viewing web pages faster on slow dialups. That's the exact intent. So other than your personal bias against Opera, there's not much else different.

To sum it up, both of them do *exactly* the same thing - compress web pages. One does it via a proxy, the other is a wholly new standard. Now read the part where I said Opera innovates and Google polishes.

Reply Score: 1

Johann Chua Member since:
2005-07-22

Unless you have something racially against europe, if it sends me my pages faster I have no issues.


Way to jump to conclusions. Depending on where you are, re-routing through Europe could make things slower.

Reply Score: 3

FealDorf Member since:
2008-01-07

"Unless you have something racially against europe, if it sends me my pages faster I have no issues.


Way to jump to conclusions. Depending on where you are, re-routing through Europe could make things slower.
"
if it sends me my pages faster
See that part there? Opera checks whether the pages you get really are faster with Turbo on. If not, it warns you and disables itself.

Edited 2009-11-13 13:30 UTC

Reply Score: 2

Bill Shooter of Bul Member since:
2006-07-14

I do have concerns with increasing displays of racism in Europe (and other places as well), if that's what you meant. But I'd just prefer not to MITM myself out of paranoia.

Reply Score: 2

gustl Member since:
2006-01-19


[rant]And seriously; it being open source doesn't automatically mean that any business can and will adopt it.. It's better, sure.. but that doesn't stop their world domination ^_^[/rant]


World domination by open source software is no problem, because bad behavior by such an open source project immediately leads to forks. Just look at what happened to XFree86: they got forked by Xorg the second they started behaving funny (closed license).

I do not get why people don't seem to grasp the difference between world domination by a closed source entity vs. world domination by an open source entity.
It is as different as night and day.

Reply Score: 2

SMTP should be ditched as well..
by JacobMunoz on Thu 12th Nov 2009 22:20 UTC
JacobMunoz
Member since:
2006-03-17

I think email as it exists also carries some painful legacy decisions - although I don't know which is harder to ditch: HTTP or SMTP?

HTTP: it would be nice to have a new protocol like SPDY, but stop and think about how many services and applications were designed with only HTTP in mind... it hurts. Browsers change every few months, not enterprise-level applications. If anything, SPDY could at least be used as an auxiliary or complementary browser data pipeline. But calls to replace HTTP mostly come from performance issues, not catastrophic design flaws (enter SMTP)...

SMTP: the fact that you're expected to have an inbox of gargantuan capacity so every idiot in the world can send you pill offers to make your d!@k bigger is as stupid as taking pills to make your d!@k bigger. As it exists today, any trained beagle can spam millions of people and disappear with no recourse. Terabytes of "Viva Viagra!" are due to the simple fact that the sender is not liable for the storage of the message - you are, you sucker. If the message is of any actual importance, the sending server should keep it available for the recipient to retrieve when they decide to. This would provide many improvements over SMTP, such as:

1) confirmation of delivery
--- you know if it was accessed and when - The occasional 'send message receipt' confirmation some current email clients provide you with is flaky and can easily be circumvented - this could not be.

2) authenticity
--- you have to be able to find them to get the message, they can't just disappear. geographic info could also be used to identify real people (do you personally know anyone in Nigeria? probably not...)

3) actual security
--- you send them a key, they retrieve and decode the message from your server.

4) no attachment limits
--- meaning, no more maximum attachment size because you're retrieving the files directly from the sender's 'outbox'. "please email me that 2.2GB file" OK! now you can! Once they've retrieved it, the sender can clear it from their outbox - OR, send many people the same file from ONE copy instead of creating a duplicate for each recipient. This saves time, resources, and energy (aka $$$)!

5) the protocols and standards already exist
--- SFTP and PGP would be just fine; a simple notification protocol (perhaps SMTP itself) would send you a validated header (sender, recipient, key, download location, etc.) which you could choose to view or not (see the sketch below).
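
As a purely hypothetical sketch of such a notification (the field names are invented; no such standard exists), the notice itself could stay tiny while the real message sits on the sender's server:

# Hypothetical "pull-style" notice as described above; the X- headers are invented.
from email.message import EmailMessage

def build_notification(sender, recipient, fetch_url, key_id):
    # The notice is small; the actual message stays in the sender's outbox.
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Message waiting"
    msg["X-Fetch-Location"] = fetch_url      # e.g. an SFTP URL on the sender's server
    msg["X-Decryption-Key-Id"] = key_id      # PGP key the recipient should use
    msg.set_content("A message is waiting for you; fetch it when you like.")
    return msg

notice = build_notification("alice@example.org", "bob@example.net",
                            "sftp://mail.example.org/outbox/msg-42.pgp",
                            "0xDEADBEEF")
print(notice)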

You'll still get emails, but spammers will be easily identified because their location (and perhaps an authenticity stamp) will point to the server - if not, you can't get the message even if you wanted to. And again - if it's so damned important I know senders will be happy to hold the message till recipients pick it up...? right?


But we're talking about HTTP here, which I can say isn't quite as broken. Although they should keep working on SPDY, because give it a few years and the world will find a way to break it...

Edited 2009-11-12 22:21 UTC

Reply Score: 6

modmans2ndcoming Member since:
2005-11-09

Google Wave takes care of that!!!

Seriously though... Google Wave, when compared to Gmail, could replace Gmail if the Wave protocol were available for everyone to join in on. It actually COULD replace e-mail and do a sweet job at it.

Reply Score: 3

sbenitezb Member since:
2005-07-22

Graylisting.

Reply Score: 2

Soulbender Member since:
2005-08-18
JacobMunoz Member since:
2006-03-17

Good link, but it kind of makes me think this architecture will never get adopted. Nine years after its namesake and I sure as heck have never heard of it (although it does exactly what I was looking for). But there lies the problem: how do you enable its adoption on a widespread basis, without breaking compatibility, and without locking into a vendor's service? Google Wave, innocent as it is, is still a provided service delivered by a company. I'm looking for an architectural change (like I.M.2000) that could be adopted transparently; perhaps we'll have to wait till email's completely unusable for it to really change...?

Reply Score: 2

RE: SMTP should be ditched as well..
by ghen on Fri 13th Nov 2009 08:00 UTC in reply to "SMTP should be ditched as well.."
ghen Member since:
2005-08-31

1) confirmation of delivery
--- you know if it was accessed and when - The occasional 'send message receipt' confirmation some current email clients provide you with is flaky and can easily be circumvented - this could not be.


And that's a wanted feature?
You must be one of those marketing guys!

Reply Score: 3

All I want to know is...
by bornagainenguin on Fri 13th Nov 2009 03:01 UTC
bornagainenguin
Member since:
2005-08-07

Who is 'Goolge' and why haven't we heard of them before now?

--bornagainpenguin

Reply Score: 5

RE: All I want to know is...
by Kroc on Fri 13th Nov 2009 07:37 UTC in reply to "All I want to know is..."
Kroc Member since:
2005-11-10

Google’s evil twin. Their motto is "Do No Good". They’re a closed and proprietary company constantly seeking to usurp the web with their own proprietary technologies and patents—a bit like Microsoft, you could say!

Reply Score: 5

None of this crap is new
by tyrione on Fri 13th Nov 2009 03:21 UTC
tyrione
Member since:
2005-11-21

This has been developed by all major OS vendors, Apache, the W3C and other projects.

Google spits on it, calls it a new project with a lame name, and suddenly it's gold?

Get real.

Wake me when Apache 3.0 becomes reality and the overhead they will rip out for that project becomes consumable.

That alone will remove a major amount of delay from client/server interactions.

Reply Score: 1

RE: None of this crap is new
by gomerComPiler on Fri 13th Nov 2009 07:42 UTC in reply to "None of this crap is new"
gomerComPiler Member since:
2009-08-29

Hey - did you Photoshop your teeth to be so gleaming white?

Reply Score: 12

Applaud and boo, all in one
by deathshadow on Fri 13th Nov 2009 09:25 UTC
deathshadow
Member since:
2005-07-12

In a way I applaud the idea of addressing latency. Handshaking, the process of requesting a file, is one of the biggest bottlenecks remaining on the internet and can make even the fastest connections seem slow.

To slightly restate and correct what Kroc said, every time you request a file it takes the equivalent of two (or more!) pings to/from the server before you even start receiving data. In the real world that's 200-400ms if you have what's considered a low-latency connection, and if you are making a lot of hops between point A and B or, worse, are on dialup or satellite, or are just connecting to a server overtaxed with requests, it could be up to one SECOND per file, regardless of how fast the throughput of your connection is.

Most browsers try to alleviate this by making multiple concurrent connections to each server - the usual default is eight. Since the file sizes are different there is also some overlap across those eight connections, but if the server is overtaxed many of those connections could be rejected and the browser has to wait. As a rule of thumb, the best way to estimate the overhead is to subtract eight, reduce to 75%, and multiply by 200ms as the low and one second as the high.

Take the home page of OSNews for example - 5 documents, 26 images, 2 objects, 17 scripts (what the?!? Lemme guess, jQuery ****otry?) and one stylesheet... That's 51 files, so (51-8)*0.75 == 32.25, which we'll round down to 32. 32*200ms = 6.4 seconds of overhead on first load on a good day, or 32 seconds on a bad day. (Subsequent pages will be faster due to caching.)
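
The same arithmetic as a tiny script (the constants are the estimates above, not measurements):

# Estimate of per-file request latency lost on first load, per the rule of thumb.
def handshake_overhead(files, concurrent=8, overlap=0.75,
                       best_rtt=0.2, worst_rtt=1.0):
    effective = int(max(files - concurrent, 0) * overlap)  # 51 files -> 32
    return effective * best_rtt, effective * worst_rtt

low, high = handshake_overhead(51)  # the OSNews home page: 51 requests
print(f"{low:.1f}s on a good day, {high:.0f}s on a bad day")  # 6.4s / 32s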

So these types of optimizations are a good idea... BUT

More of the blame goes in the lap of web developers many of whom frankly are blissfully unaware of this situation, don't give a **** about it, or are just sleazing out websites any old way. Even more blame goes on the recent spate of 'jquery can solve anything' asshattery and the embracing of other scripting and CSS frameworks that do NOT make pages simpler, leaner, or easier to maintain even when they claim to. Jquery, Mootools, YUI, Grid960 - Complete rubbish that bloat out pages, make them HARDER to maintain than if you just took the time to learn to do them PROPERLY, and often defeat the point of even using scripting or CSS in the first place. CSS frameworks are the worst offenders on that, encouraging the use of presentational classes and non-semantic tags - at which point you are using CSS why?

I'm going to use OSNews as an example - no offense, but fair is fair and the majority of websites have these types of issues.

First we have the 26 images - for WHAT? Well, a lot of them are just little .gif icons. Since they are not actually content images and degrade badly with CSS off, I'd move them into the CSS and use what's called a sliding-background or sprite system, reducing about fifteen of those images to a single file. (In fact it would reduce some 40 or so images to a single file.) This file would probably be smaller than the current files' combined size, since things like the palette would be shared and you may see better encoding runs. After looking at some of the other images, about 22 of those 26 should probably be only one or two images total. Let's say two, so that's 20 handshakes removed, aka three to fifteen seconds shaved off first load.

On the 12 scripts, about half of them are the advertising (wow, there's advertising here? Sorry, Opera user, I don't see them!) so there's not much optimization to be done there, EXCEPT that it's five or six separate adverts. If people aren't clicking on one, they aren't gonna click on SIX.

But the rest of the scripts? First, take my advice and swing a giant axe at that jQuery nonsense. If you are blowing 19k compressed (54k uncompressed) on a scripting library before you even do anything USEFUL with it, you are probably ****ing up. Google analytics? What, you don't have webalizer installed? 90% of the same information can be gleaned from your server logs, and the rest isn't so important that you should be slowing the page load to a crawl with an extra off-server request and 23k of scripting! There's a ****load of 'scripting for nothing' in there. Hell, apart from the adverts, the only thing I see on the entire site that warrants the use of javascript is the characters-left counter on the post page! (Lemme guess, bought into that 'AJAX for reducing bandwidth' asshattery?) - Be wary of 'gee ain't it neat' bullshit.

... and on top of all that you come to the file sizes. 209k compressed / 347k uncompressed is probably TWICE as large as the home page needs to be, especially when you've got 23k of CSS. 61k of markup (served as 15k compressed) for only 13k of content, with no content images (they're all presentational) and most of that content being flat text, is a sure sign that the markup is probably fat, bloated, poorly written rubbish - likely with more of 1997 to it than 2009. No offense, I still love the site, even with its poorly thought out fixed-metric fonts and fixed-width layout - which I override with an Opera user.js.

You peek under the hood and it becomes fairly obvious where the markup bloat is. An ID on body (since a document can only have one body, what the **** are you using an ID for?), unnecessary spans inside the legend, an unnecessary name on the h1 (you need to point to top, and you've got #header RIGHT before it!), nesting an OL inside a UL for no good reason (for a dropdown menu I've never seen - lemme guess, scripted and doesn't work in Opera?), an unnecessary wrapping div around the menu and the side section (which honestly I don't think should be a separate UL), those stupid bloated AJAX tabs with no scripting-off degradation, and the sidebar lists doped to the gills with unnecessary spans and classes. Just as George Carlin said "Not every ejaculation deserves a name", not every element needs a class.

Using MODERN coding techniques and axing a bunch of code that isn't actually doing anything, it should be possible to reduce the total file sizes to about half what they are now, and eliminate almost 75% of the file requests in the process... quadrupling the load speed of the site (and similarly easing the burden on the server!)

So really, do we need a new technology, or do we need better education on how to write a website and less "gee ain't it neat" bullshit? (Like scripting for nothing or using AJAX to "speed things up by doing the exact opposite")

Edited 2009-11-13 09:27 UTC

Reply Score: 4

RE: Applaud and boo, all in one
by kaiwai on Fri 13th Nov 2009 10:09 UTC in reply to "Applaud and boo, all in one"
kaiwai Member since:
2005-07-06

I think what pisses me off the most is that I've made websites where I wanted, for example, geometric shapes, but I couldn't do it without using a weird combination of CSS and GIF files. Why can't the W3C add even the most basic features that would let one get rid of large amounts of this crap? Heck, if they had a geometric tag which allowed me to create a box with curved corners I wouldn't need to use the Frankenstein code I use today.

What would be so hard about creating:

<shape type="quad" fill-color="#000000" corners="curved" />

Or something like that. There are many things that people add to CSS that shouldn't need to be there if the W3C got their act together - yet the W3C members have done nothing to improve the current situation in the last 5 years except drag their feet on every single advancement put forward, because some jerk-off at a mobile phone company can't be bothered to up the specifications in their products to handle the new features. Believe me, I've seen the conversations, and it is amazing how features are being held up because of a few nosy wankers holding sway in the meetings.

Reply Score: 2

RE[2]: Applaud and boo, all in one
by Kroc on Fri 13th Nov 2009 11:33 UTC in reply to "RE: Applaud and boo, all in one"
Kroc Member since:
2005-11-10

"SVG 1.0 became a W3C Recommendation on September 4, 2001" -- Wikipedia.

Reply Score: 1

RE[2]: Applaud and boo, all in one
by ba1l on Fri 13th Nov 2009 15:16 UTC in reply to "RE: Applaud and boo, all in one"
ba1l Member since:
2007-09-08

While it's hardly simple, SVG was actually intended for exactly this kind of thing. The problem is that only Webkit allows you to use SVG anywhere you'd use an image.

Gecko and Opera allow you to use SVG for the contents of an element only. Internet Explorer doesn't support SVG at all, but allows VML (an ancestor of SVG) to be used in the same way you can use SVG in Gecko and Opera.

So the functionality is there (in the standards) and has been there since 2001. We just aren't able to use it unless we only want to support one browser. Cool if you're writing an iPhone application, but frustrating otherwise.

As for your specific example, you can do that with CSS, using border-radius. Something like this:

-moz-border-radius: 10px;
-webkit-border-radius: 10px;
border-radius: 10px;

Of course, as with everything added to CSS or HTML since 1999, it doesn't work in Internet Explorer.

Blaming the W3C for everything hardly seems fair, considering that these specs were published almost a decade ago, and remain unimplemented. Besides, there are plenty of other things to blame the W3C for. Not having actually produced any new specs in almost a decade, for example.

Reply Score: 3

RE[2]: Applaud and boo, all in one
by cerbie on Fri 13th Nov 2009 21:15 UTC in reply to "RE: Applaud and boo, all in one"
cerbie Member since:
2006-01-02

.

Edited 2009-11-13 21:15 UTC

Reply Score: 2

RE: Applaud and boo, all in one
by Kroc on Fri 13th Nov 2009 13:00 UTC in reply to "Applaud and boo, all in one"
Kroc Member since:
2005-11-10

I agree absolutely.

Since Adam already spilled the beans in one of the Conversations, I may as well come out and state what is probably already obvious: There is a new site in the works, I'm coding the front end.

_All_ of your concerns will be addressed.

The OSnews front end code is abysmally bad. Slow, bloated and the CSS is a deathtrap to maintain (the back end (all the database stuff) is very good and easily up to the task).

Whilst we may not see eye to eye on HTML5/CSS3, I too am opposed to wasted resources, unnecessary JavaScript and plain crap coding. My own site adheres to those ideals. Let me state clearly that OSn5 will be _better_ than camendesign.com. I may even be able to impress you (though I doubt that ;) )

Reply Score: 1

RE: Applaud and boo, all in one
by edmnc on Fri 13th Nov 2009 13:33 UTC in reply to "Applaud and boo, all in one"
edmnc Member since:
2006-02-21

Google analytics? What, you don't have webalizer installed? 90% of the same information can be gleaned from your server logs


That there just means you don't use Google Analytics (or don't know how to use it). It is a very powerful piece of software that can't be replaced by Analog, Webalizer or digging through log files.

Reply Score: 1

deathshadow Member since:
2005-07-12

That there just means you don't use Google Analytics (or don't know how to use it). It is a very powerful piece of software that can't be replaced by Analog, Webalizer or digging through log files.

No, it's just that the extra handful of minor bits of information it presents is only of use to people obsessing over tracking instead of concentrating on building content of value - usually making such information only of REAL use to the asshats building websites whose sole purpose is click-through advertising bullshit, or who are participating in glorified marketing scams like affiliate programs... such things having all the business legitimacy of Vector Knives or Amway.

Edited 2009-11-14 15:18 UTC

Reply Score: 3

W3C: Do your job!
by 3rdalbum on Fri 13th Nov 2009 10:02 UTC
3rdalbum
Member since:
2008-05-26

I agree, this isn't anything new. I remember reading about how this could be done way back in 1999 (the original article author is probably working for Google now).

This should be the W3C's job, to update web standards and promote the new updated versions. Instead, the W3C works on useless crap like "XML Events", "Timed Text", XHTML 2.0 and "Semantic Web" (which is due to reach alpha state some time after the release of Duke Nukem Forever).

Let's face it, HTTP 1.1 is abandonware, and I think we have to applaud Google for taking the initiative, actually implementing something, and trying to put some weight behind the push. By the same token, let's see Google push more for IPv6 and the ideas suggested by two of the people in the comments on this article :-)

Reply Score: 2

Google vs Internet Explorer
by Brunis on Fri 13th Nov 2009 11:27 UTC
Brunis
Member since:
2005-11-01

I thought the comment about "Internet Explorer" not waving any flags was uncalled for... I think you should direct that hatred towards the slacking company behind the shitty product! Oh, and it's a bit off-topic as well... it's not really Microsoft's fault HTTP is crap?

Reply Score: 1

RE: Google vs Internet Explorer
by Kroc on Fri 13th Nov 2009 11:37 UTC in reply to "Google vs Internet Explorer"
Kroc Member since:
2005-11-10

It's nobody's specific fault that HTTP is crap, but then what matters is who is going to do anything about it.

Microsoft have had total dominance of the web for almost a decade. At no point during that time did they attempt to improve the status quo. At no point did they say that "You know, HTTP is slow and could do with improving". They just coasted along with careless disdain.

Reply Score: 1

RE[2]: Google vs Internet Explorer
by FealDorf on Fri 13th Nov 2009 12:55 UTC in reply to "RE: Google vs Internet Explorer"
FealDorf Member since:
2008-01-07

Agreed, IE had a lot of time at hand to improve the quality of the user experience, but they didn't do much about it.

On the other hand, IE *did* indirectly invent AJAX ;)

Reply Score: 1

Big deal!
by harsha.reddy on Fri 13th Nov 2009 13:53 UTC
harsha.reddy
Member since:
2007-02-20

Not a big deal! I have realized the limitations of the web and I stay within those limits.

SPDY... is what I call beating around the bush!!

Reply Score: 1