Linked by Thom Holwerda on Thu 6th Sep 2018 23:34 UTC
Google

"People have a really hard time understanding URLs," says Adrienne Porter Felt, Chrome's engineering manager. "They're hard to read, it's hard to know which part of them is supposed to be trusted, and in general I don't think URLs are working as a good way to convey site identity. So we want to move toward a place where web identity is understandable by everyone - they know who they're talking to when they're using a website and they can reason about whether they can trust them. But this will mean big changes in how and when Chrome displays URLs. We want to challenge how URLs should be displayed and question it as we're figuring out the right way to convey identity."

Judging by the reactions across the web to this news, I'm going to hold the minority opinion here by saying that I'm actually a proponent of looking at what's wrong with the status quo so we can try to improve it. Computing is an incredibly conservative industry, and far too often the reaction to "can we do this better?" is "no, because it's always been that way".

That being said, I'm not a fan of such an undertaking in this specific case being done by a for-profit, closed entity such as Google. I know the Chromium project is open source, but it's effectively a Google project and what they decide goes - an important effort such as modernizing the URL scheme should be an industry-wide effort.

Order by: Score:
I'm for it
by kwan_e on Fri 7th Sep 2018 00:40 UTC
kwan_e
Member since:
2007-02-18

As Emily Stark, a technical lead at Chrome, puts it, the project is the URLephant in the room.


I was against it, until this pun.

Reply Score: 5

Comment by jigzat
by jigzat on Fri 7th Sep 2018 02:30 UTC
jigzat
Member since:
2008-10-30

Just like email, it is the most basic form of digital communication at the user level. Killing the URL will give browser manufacturers full control over what users can or cannot read. Welcome to the walled wide web.

Reply Score: 12

Page identity
by sukru on Fri 7th Sep 2018 02:44 UTC
sukru
Member since:
2006-11-19

History seems to repeat itself. Before the open web was this popular, there were closed systems like AOL and CompuServe. Not only were their systems proprietary, they also had the "keyword".

At that time the Internet was marketed like an appliance, and since most of the population had no prior experience with computers, this seemed like a natural choice.

Then we had a long run of computer-literate people who used their desktops for most things. At this point almost everyone knew what a URL was.

Now we are back to the point where the main interaction with the Internet is through specialized applications. Mobile ones have no URLs; they just talk directly to cloud services. Web-based ones try to hide the URL, and only a few have proper back/forward navigation or bookmarkability.

We still have OSNews and others whose URLs serve as URIs (resource identifiers). News sites, Amazon product pages, etc. usually fall into this category. Nevertheless, there is also a large body of web content where the URL no longer works properly.

I'm not sure what the solution is. However at least in the short to middle term I would prefer keeping the URLs as is.

Edited 2018-09-07 02:44 UTC

Reply Score: 7

RE: Page identity
by shogun56 on Fri 7th Sep 2018 15:40 UTC in reply to "Page identity"
shogun56 Member since:
2018-09-07

> Nevertheless, there is also a large body of web content where the URL no longer works properly.

Indeed. Then again, this is not a URL problem; it's a CMS or authorship problem. CMSes got cute and decided to abandon the strict hierarchy model. OK, then the very least they could have done was generate a permanent and SHORT URL that would refer to that document in perpetuity, or at least until the doc was deleted.

The problem we have is that programmers are lazy, ignorant, and have no concept of permanence. I've talked to way too many who think 'Google Search site:blah' is the answer to everything. What percentage of the great unwashed even knows that mechanism? 10^-18, probably.

Reply Score: 3

Display URLs better?
by sklofur on Fri 7th Sep 2018 03:03 UTC
sklofur
Member since:
2016-03-28

I'd say that while people understand the whole .com part of URLs, there's difficulty in dealing with the whole slash-this-slash-that part.

Essentially, URLs are hierarchical directory listings. For example, I like the way that Microsoft made the file paths in Windows an interactive breadcrumb list.

Imagine a similar functionality where you could find a section of a website without even touching the actual page. The website could provide some sort of modified site map for the browser to read:

"You are here: [OSNews] > [Stories] > [2018]"

To get to the 2017 list, you just click on [2018] and pick [2017].

Or what about clicking "[OSNews] > [Search:]" and entering your query at the end?

Just a thought.
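The breadcrumb idea can be prototyped from the URL path alone. A minimal sketch in Python (the `breadcrumbs` helper is hypothetical; a real browser would want a site-provided map for friendly names rather than raw path segments):

```python
from urllib.parse import urlsplit

def breadcrumbs(url):
    """Render a URL's host and path as 'You are here' breadcrumbs.

    Hypothetical sketch: real friendly names would come from a
    site-provided map, not from the raw path segments.
    """
    parts = urlsplit(url)
    segments = [s for s in parts.path.split("/") if s]
    crumbs = [parts.hostname] + segments
    return " > ".join(f"[{c}]" for c in crumbs)

print(breadcrumbs("https://www.osnews.com/stories/2018/"))
# [www.osnews.com] > [stories] > [2018]
```

Each bracketed segment would then become a clickable menu, as described above.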

Reply Score: 4

RE: Display URLs better?
by ssokolow on Fri 7th Sep 2018 04:01 UTC in reply to "Display URLs better?"
ssokolow Member since:
2010-01-21

I'd say that while people understand the whole .com part of URLs, there's difficulty in dealing with the whole slash-this-slash-that part.

Essentially, URLs are hierarchical directory listings. For example, I like the way that Microsoft made the file paths in Windows an interactive breadcrumb list.

Imagine a similar functionality where you could find a section of a website without even touching the actual page. The website could provide some sort of modified site map for the browser to read:

"You are here: [OSNews] > [Stories] > [2018]"

To get to the 2017 list, you just click on [2018] and pick [2017].

Or what about clicking "[OSNews] > [Search:]" and entering your query at the end?

Just a thought.


The sad thing is, that was envisioned early on, and some browsers had a bank of first/prev/up/next/last buttons, much like structured document viewers.

The "first", "last", "index", and "up" values for the "rel" attribute are gone in HTML 5, but "prev" and "next" still exist and a well-structured URL gives a natural way to intuit "up".

https://developer.mozilla.org/en-US/docs/Web/HTML/Link_types

As for search, that's already there in some form. Visit a site with OpenSearch metadata (eg. a WordPress blog) and then, at a later date, type the domain name into Chrome's address bar. A "Press [Tab] to search [domain name]" hint will appear on the right end of the address bar.

Reply Score: 4

RE: Display URLs better?
by nicubunu on Fri 7th Sep 2018 06:14 UTC in reply to "Display URLs better?"
nicubunu Member since:
2014-01-08

What you describe is mere navigation inside a website, but if you want to share a certain story from that website with other people, link to it from another website, or bookmark it to review later, you need a way to point to that individual story: something like a Uniform Resource Locator (URL for short). It may not have to look exactly like it does today, but it has to be universal, understood by each and every application.

Reply Score: 5

RE: Display URLs better?
by haakin on Fri 7th Sep 2018 09:08 UTC in reply to "Display URLs better?"
haakin Member since:
2008-12-18
Another layer of abstraction
by james_gnz on Fri 7th Sep 2018 03:58 UTC
james_gnz
Member since:
2006-02-16

... over time, URLs have gotten more and more difficult to read and understand. ... URLs have increasingly become unintelligible strings of gibberish ...

Rather than adding another layer of abstraction, couldn't we fix the one we've already got by phasing out the gibberish URLs?

Reply Score: 5

RE: Another layer of abstraction
by shotsman on Sat 8th Sep 2018 16:42 UTC in reply to "Another layer of abstraction"
shotsman Member since:
2005-07-22

URL Shortening is a disaster waiting to happen.

When you click on the URL you really have no idea where it is taking you.
You could suddenly find yourself on a child porn site, facing jail time just for visiting it.

I know that this is a small risk but I refuse to take it.
I do not and will not ever click on a shortened URL.

Reply Score: 3

darknexus Member since:
2008-07-15

URL Shortening is a disaster waiting to happen.

No, it isn't. It already happened, and disasters have been striking from shortened URLs ever since they first appeared. All because a stupid social networking service (Twitter, I'm looking at you) wanted to keep things under 140 characters so it would be SMS-compatible, which didn't matter because nobody wanted to use the service that way anyway. Now we're stuck with the fallout from that idiotic decision.

Reply Score: 2

v speedy -> http/2
by jimmystewpot on Fri 7th Sep 2018 05:23 UTC
RE: speedy -> http/2
by ahferroin7 on Fri 7th Sep 2018 13:03 UTC in reply to "speedy -> http/2"
ahferroin7 Member since:
2015-10-30

So you see exactly zero problem with monstrosities like this:

https://www.amazon.com/Rosewill-1000Mbps-Ethernet-supported-RC-411v3...

That's an Amazon product page URL for a link in the search results. It's functionally identical for most purposes to this direct link to the product page for the same thing:

https://www.amazon.com/Rosewill-1000Mbps-Ethernet-supported-RC-411v3...

Which, while not perfect, is at least readable and something that could be typed by hand without significant fear of mistyping it.

The only practical difference between the two is that the first one pre-loads the search box with the search query you used to find the item in question in the first place, yet the first one is indisputably less readable than the second one, just to provide a feature most people don't actually need.

The big problem here is not a lack of education. Even though I can tell you what most of those parameters in the query string mean, that doesn't make the first URL any more readable to me than it is to someone with near-zero background in computers. The issue here is that sites are overloading the URL to do a lot more than it really needs to. It should not reference information from the previous page (that's what the 'Referer' header and/or cookies are for), it should not embed huge amounts of information that actual users are never going to use, etc.

For an example of things done sanely, take a look at how MediaWiki uses the URL. All the pages on a MediaWiki wiki can be accessed directly with a URL of the form:

http://example.com/wiki/Title

Where 'Title' is the title of the page (possibly with a few modifications to make sure it's valid for a URL). From there, you can get direct links to sections by just adding a fragment with the name of the section. No need for query strings, no special magic numbers, just the exact titles of the page and section you want to go to. It's simple enough that even people who have essentially zero background with computers can understand it. If the rest of the internet handled things like that, there would be no need for what Google is trying to do here.
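The contrast can be checked with Python's urllib.parse. The URLs below are made-up stand-ins (the real Amazon links above are truncated), but they have the same shape:

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical search-result-style URL (made up for illustration),
# overloaded with search/tracking parameters in the query string:
messy = ("https://www.example.com/Rosewill-1000Mbps-Ethernet/dp/B00FAKE123"
         "/ref=sr_1_1?keywords=pci+express+network+card&qid=1536300000&sr=8-1")

# The MediaWiki-style equivalent: the path alone identifies the page.
clean = "https://example.com/wiki/Rosewill_RC-411v3"

for url in (messy, clean):
    parts = urlsplit(url)
    print(parts.path, "->", parse_qs(parts.query))
```

Everything in the first URL's query string is session state from the search, none of which is needed to identify the page; the second carries no query string at all.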

Reply Score: 5

RE[2]: speedy -> http/2
by kwan_e on Fri 7th Sep 2018 13:14 UTC in reply to "RE: speedy -> http/2"
kwan_e Member since:
2007-02-18

It should not be referencing information from the previous page (that's what the 'Referer' header and/or cookies are for), It should not be embedding huge amounts of information that are not going to be used by actual users, etc.


It's part cargo-cult programming/designing, and part this:

https://en.wikipedia.org/wiki/HTTP_referer

"Many websites log referrers as part of their attempt to track their users. Most web log analysis software can process this information. Because referrer information can violate privacy, some web browsers allow the user to disable the sending of referrer information.[7] Some proxy and firewall software will also filter out referrer information, to avoid leaking the location of non-public websites. This can, in turn, cause problems: some web servers block parts of their website to web browsers that do not send the right referrer information, in an attempt to prevent deep linking or unauthorised use of images (bandwidth theft). Some proxy software has the ability to give the top-level address of the target website as the referrer, which usually prevents these problems while still not divulging the user's last-visited website.

Many blogs publish referrer information in order to link back to people who are linking to them, and hence broaden the conversation. This has led, in turn, to the rise of referrer spam: the sending of fake referrer information in order to popularize the spammer's website. "


As far as I know, browsers, firewalls and proxies aren't free to chop off or change bits of a URL they don't like. So if a website really wants to track you, this is what it is forced to do.

Reply Score: 4

RE[2]: speedy -> http/2
by bn-7bc on Fri 7th Sep 2018 15:57 UTC in reply to "RE: speedy -> http/2"
bn-7bc Member since:
2005-09-04

Wow, that has got to be the most verbose product name in history. What is wrong with «Rosewill 10/100/1000Mbps PCIe Ethernet NIC», with any other info in the description/specs? But I get your point: URLs tend to become long very quickly, though that might just be a result of Amazon wanting their URLs to carry human-readable information. The URL could just have been www.amazon.com/products/product#

Reply Score: 2

RE[3]: speedy -> http/2
by ahferroin7 on Fri 7th Sep 2018 17:02 UTC in reply to "RE[2]: speedy -> http/2"
ahferroin7 Member since:
2015-10-30

Even when including human-readable information, the first example doesn't need to be that long. As I said, all the stuff in the query string, as well as the 'ref=' part right before it, is unnecessary for just displaying the product page.

The really ridiculous thing, though, is that what's in the URL for the product name is shortened. The full product name on Amazon is 'Rosewill 10/100/1000Mbps Gigabit PCI Express, PCIE Network Adapter/Network Card/Ethernet Card, Win10 supported (RC-411v3)'. And that's a result of a completely separate issue with how Amazon's search functionality works, namely that matches in the product name get prioritized over matches in the description, so pumping the listing's product name full of keywords makes it more likely you'll get a top spot in the search results (and it worked in this case; the exact search query was 'pci express network card').

The irony of all this is that the auto-conversion of URLs to links done by OSNews made it much harder to clearly see what I'm talking about here, since it cuts off both URLs before the point where they differ.

Reply Score: 3

RE[2]: speedy -> http/2
by dionicio on Fri 7th Sep 2018 16:06 UTC in reply to "RE: speedy -> http/2"
dionicio Member since:
2006-07-12

Your 2nd example is a URL, ahferroin7.

Your 1st example is a non-uniform negotiating "commit".

If I remember correctly, Microsoft (and lots of others) handle DB sessions through persistence. What we see in the navigation bar is just for our "tranquility".

It's not even right to use the navigation history in such places: you could crash, or return to a room whose doors have already "expired". The room itself could "expire" a minute after you left, created in the interest of that particular "your punctual reality".

When there, we are not in the WWW anymore. Isn't that so, Tim?

Edited 2018-09-07 16:12 UTC

Reply Score: 1

RE[2]: speedy -> http/2
by dbox2005 on Fri 7th Sep 2018 18:09 UTC in reply to "RE: speedy -> http/2"
dbox2005 Member since:
2017-11-22

Hey... a URL can also be dynamically generated. You cannot change that. I would actually prefer to still be able to dig into sites' hidden pages or content by manually changing the URL myself. Why do you want to take that from me???

Reply Score: 1

Conservative
by nicubunu on Fri 7th Sep 2018 06:06 UTC
nicubunu
Member since:
2014-01-08

The resistance to change in computing is natural: we are bombarded with changes made for change's sake, so people have become wary.

Reply Score: 2

RE: Conservative
by Serafean on Fri 7th Sep 2018 08:48 UTC in reply to "Conservative"
Serafean Member since:
2013-01-08

Pretty much this.
Where many people who want to change something fail is in not understanding the reason it is that way in the first place.

Take email. Some people hate the way email is designed. But expose all the problems it has to solve, and suddenly you find out that you'd end up with something similarly complex.

The "because it has always been done that way" is a first line of defense when you don't understand something but can see that there is more to it than meets the eye.

Reply Score: 3

RE[2]: Conservative
by Alfman on Fri 7th Sep 2018 14:41 UTC in reply to "RE: Conservative"
Alfman Member since:
2011-01-28

Serafean,

Take email. Some people hate the way email is designed. But expose all the problems it has to solve, and suddenly you find out that you'd end up with something similarly complex.

The "because it has always been done that way" is a first line of defense when you don't understand something, and see that there is more to it than meets the eye.


Disagree very much. Email really is too complex, and it wouldn't have to be if we could re-engineer it. The hodgepodge of protocols and extensions (IMAP/POP/SMTP/DKIM/SPF/TXT/DMARC/MX/PGP/etc.) is a real mess. If we had the opportunity to redesign email from scratch, we really could get rid of tons of complexity while simultaneously making email more consistent.


Just take unicode as one example to illustrate my point. Here's an email header I got from a recent purchase:

Subject: =?UTF-8?Q?=E2=9C=85_ORDER_CONFIRMED:_Sopoby_100pcs_Assort...?=


The obvious & trivial way to use unicode is just to encode the entire protocol (which is text-based) using UTF-8, so that unicode works everywhere with no tricks:
Subject: ✅ ORDER CONFIRMED: Sopoby 100pcs Assort...


However, because the SMTP standard predates UTF-8 and expects 7-bit ASCII, they had to find a way to "hack it in" by adding a new lexical preprocessing step that simultaneously adds complexity and removes clarity. This is just one of many quirks that caught me off guard when I was writing scripts to parse emails. So can you honestly say email's complexity is intrinsic? No, I don't think so; it's the consequence of hammering new features into an old protocol while maintaining backwards compatibility. We can't ignore backwards compatibility, but the truth of the matter is that email is far more complex, and its security less effective, as a result.
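Python's standard library can undo that encoded-word hack for you; a minimal sketch using the exact header quoted above:

```python
from email.header import decode_header, make_header

# The RFC 2047 "encoded-word" Subject value quoted above:
raw = "=?UTF-8?Q?=E2=9C=85_ORDER_CONFIRMED:_Sopoby_100pcs_Assort...?="

# decode_header() splits the value into (bytes, charset) chunks;
# make_header() reassembles those chunks into readable text.
decoded = str(make_header(decode_header(raw)))
print(decoded)  # ✅ ORDER CONFIRMED: Sopoby 100pcs Assort...
```

The "Q" encoding hides quoted-printable hex escapes and underscore-for-space substitutions inside the header, which is exactly the kind of lexical layer a UTF-8-native protocol would not need.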

Edited 2018-09-07 14:47 UTC

Reply Score: 4

RE[3]: Conservative
by darknexus on Fri 7th Sep 2018 15:19 UTC in reply to "RE[2]: Conservative"
darknexus Member since:
2008-07-15

If we had the opportunity to redesign email from scratch, we really could get rid of tons of complexity while simultaneously making email more consistent.

Agreed, but the OP also makes a valid point. Yes, if we redesigned email right now, we could eliminate much of the complexity we see while meeting today's requirements. But that word, today, is the key. Decades down the line as we are with email right now, you would find a similar hodgepodge of complexity as new requirements that were not foreseen at the start come to be necessary. After all, email started out simple, too.

Reply Score: 3

RE[4]: Conservative
by Alfman on Sat 8th Sep 2018 00:43 UTC in reply to "RE[3]: Conservative"
Alfman Member since:
2011-01-28

darknexus,

Agreed, but the OP also makes a valid point. Yes, if we redesigned email right now, we could eliminate much of the complexity we see while meeting today's requirements. But that word, today, is the key. Decades down the line as we are with email right now, you would find a similar hodgepodge of complexity as new requirements that were not foreseen at the start come to be necessary. After all, email started out simple, too.


You make a valid point, but it doesn't seem like the same point that Serafean was making due to one critical word that he used:
But expose all the problems it has to solve, and suddenly you find out that you'd end up with something similarly complex.


He was implying that the goals of email are intrinsically complex and that anyone who develops it will "suddenly end up with something similarly complex", and that's the point I disagree with for the reasons stated earlier. If we replace "suddenly" with "eventually", then it does change the meaning to match your point:

But expose all the problems it has to solve, and eventually you find out that you'd end up with something similarly complex.



And it's true that complexity does arise over time, yet a big difference between now and then is that when SMTP was published by Jon Postel in 1982, there was very little experience with digital mail systems, certainly not on a wide scale. Not only do we have tons more knowledge and experience today, but email is also more mature.

Taking the unicode example again:
ASCII, or the "American Standard Code for Information Interchange", had shortcomings with internationalization. This was bound to cause problems later on. Are there similar assumptions that unicode's authors will turn out to have missed over the next several decades? Perhaps, but due to maturity and hindsight I do think it's fair to say that unicode is more future-proof than ASCII was. In a similar vein, I think there are things we could do with email that would help with long-term stability. But of course the big problem is actually moving the world to that point without breaking compatibility with existing legacy software.

Edited 2018-09-08 01:01 UTC

Reply Score: 4

v RE[3]: Conservative
by shogun56 on Fri 7th Sep 2018 15:31 UTC in reply to "RE[2]: Conservative"
RE[4]: Conservative
by galvanash on Fri 7th Sep 2018 16:28 UTC in reply to "RE[3]: Conservative"
galvanash Member since:
2006-01-25

It's 2018... UTF-8 is plain text.

Reply Score: 6

v RE[5]: Conservative
by shogun56 on Fri 7th Sep 2018 18:32 UTC in reply to "RE[4]: Conservative"
RE[6]: Conservative
by ssokolow on Fri 7th Sep 2018 19:23 UTC in reply to "RE[5]: Conservative"
ssokolow Member since:
2010-01-21

UTF-8 is NOT remotely plain text. Are you telling me you can understand this as written?
\x6d\x79\x20\x6d\x6f\x74\x68\x65\x72\x20\x28\x77\x65\x6e\x74\x29\x20\x74\x6f\x20\x40\x6d\x61\x72\x6b\x65\x74\x21

Plain text means 7-bit ascii at most and is frequently reduced further to be the set of "printable characters".

If someone wants to be stupid and send non-plain text in email then they are required to multi-part/mime it. That they put crap in the Subject header means they are dumber still.


How anglocentric of you.

Reply Score: 6

v RE[7]: Conservative
by shogun56 on Fri 7th Sep 2018 19:28 UTC in reply to "RE[6]: Conservative"
RE[6]: Conservative
by galvanash on Sat 8th Sep 2018 00:08 UTC in reply to "RE[5]: Conservative"
galvanash Member since:
2006-01-25

UTF-8 is NOT remotely plain text. Are you telling me you can understand this as written?
\x6d\x79\x20\x6d\x6f\x74\x68\x65\x72\x20\x28\x77\x65\x6e\x74\x29\x20\x74\x6f\x20\x40\x6d\x61\x72\x6b\x65\x74\x21


No, because that isn't UTF-8, that is a bunch of unnecessary escape sequences that don't serve any purpose. I can understand "my mother (went) to @market!" though just fine, why didn't you just type that? That is UTF-8 plain text, there is no need to escape it, because this website (and just about any piece of modern software) understands it just fine.

Best part is you could have typed "Τη γλώσσα μου έδωσαν ελληνική" or "Стоял он, дум великих полн" or "ಬಾ ಇಲ್ಲಿ ಸಂಭವಿಸು ಇಂದೆನ್ನ ಹೃದಯದಲಿ" and if I spoke those languages I would have understood them too - and you don't need to escape them, because they are plain text.

What you call "plain text" is what Unicode was created to fix. 128 characters is a bug, not a feature. Every character on this website is UTF-8... Are you telling me you can't understand it?
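Both halves of this point can be checked mechanically: the hex escapes quoted above are plain ASCII bytes, and ASCII is a strict subset of UTF-8, so the same bytes decode identically either way. A quick sketch:

```python
# The hex escapes quoted above are just ASCII bytes:
data = (b"\x6d\x79\x20\x6d\x6f\x74\x68\x65\x72\x20\x28\x77\x65\x6e"
        b"\x74\x29\x20\x74\x6f\x20\x40\x6d\x61\x72\x6b\x65\x74\x21")

# ASCII is a strict subset of UTF-8, so both decodings agree:
assert data.decode("ascii") == data.decode("utf-8")
print(data.decode("utf-8"))  # my mother (went) to @market!

# Non-ASCII text is still plain text in UTF-8; it simply uses more bytes:
greek = "Τη γλώσσα μου έδωσαν ελληνική"
assert greek.encode("utf-8").decode("utf-8") == greek
```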

Edited 2018-09-08 00:14 UTC

Reply Score: 5

RE[4]: Conservative
by Sodki on Fri 7th Sep 2018 19:53 UTC in reply to "RE[3]: Conservative"
Sodki Member since:
2005-11-10

The problem is the RETARD who thought it was a good idea to put non-plain-text in the Subject of an email or anywhere in the body!

YOU DON'T DO THAT! Period.


So I can't write the names of my family members in an e-mail? You really want to go down that route?

Reply Score: 3

RE[5]: Conservative
by shogun56 on Fri 7th Sep 2018 21:08 UTC in reply to "RE[4]: Conservative"
shogun56 Member since:
2018-09-07

> So I can't write the names of my family members in an e-mail?

Nope! Declare the body of your email as multipart MIME with a charset per the RFC and you'll be fine.

But if you think the SMTP transport layer has any obligation to accommodate you doing it WRONG, or that your email client should auto-magically deduce there's some Unicode in the body and "help you out", you're naive.

Follow the specification or reap the consequences.
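As a minimal sketch of what "per the RFC" looks like in practice, Python's stdlib email API applies the MIME charset and RFC 2047 header machinery automatically (the addresses below are made up for the example):

```python
from email.message import EmailMessage

# EmailMessage applies the MIME/RFC 2047 machinery for you:
# non-ASCII header values become encoded-words, and the body part
# declares its charset explicitly rather than relying on luck.
msg = EmailMessage()
msg["From"] = "alice@example.com"   # made-up addresses for the sketch
msg["To"] = "bob@example.com"
msg["Subject"] = "Grüße aus Zürich"
msg.set_content("Servus, André!")

wire = msg.as_bytes()
# The Subject is serialized as an RFC 2047 encoded-word...
assert b"=?utf-8?" in wire
# ...and the body part carries a declared Content-Type.
assert b"Content-Type: text/plain" in wire
```

In other words, the declaration the specification demands is something a sane mail library does on the author's behalf.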

Reply Score: 1

RE[6]: Conservative
by Sodki on Fri 7th Sep 2018 21:55 UTC in reply to "RE[5]: Conservative"
Sodki Member since:
2005-11-10

> So I can't write the names of my family members in an e-mail?

Nope! Declare the body of your email as a multi-part with mime and charset per the RFC and you'll be fine.


I seem to have misinterpreted your previous quote, then. I mostly agree with you, but I do believe the subject and other fields should support all characters in some form or fashion.

Reply Score: 2

RE[6]: Conservative
by Alfman on Sat 8th Sep 2018 01:05 UTC in reply to "RE[5]: Conservative"
Alfman Member since:
2011-01-28

shogun56,


No, the problem is not unicode or utf-8 support or lack thereof. The problem is the RETARD who thought it was a good idea to put non-plain-text in the Subject of an email or anywhere in the body!

YOU DON'T DO THAT! Period. People who are too stupid to understand plain text is the ONLY correct way to do things do not belong in the chain of decision-making or programming.


Congrats, I completely didn't anticipate this sort of response. Instead of responding to your point though, I'm going to sit back and let others do that while I watch the show. Also, welcome to osnews ;)

Edited 2018-09-08 01:16 UTC

Reply Score: 3

RE[7]: Conservative
by galvanash on Sat 8th Sep 2018 05:59 UTC in reply to "RE[6]: Conservative"
galvanash Member since:
2006-01-25

LOL

Reply Score: 2

URLs should be human readable
by r_a_trip on Fri 7th Sep 2018 07:22 UTC
r_a_trip
Member since:
2005-07-06

I'm not against changes that make things more clear and easily understandable. Maybe Google will come up with something brilliant. It's just that I'm jaded. Often "improvements" are of the kind that hides the ugliness behind a pretty curtain and the main problem still exists, but now neither average users nor experts have a clear view on what is happening.

The biggest problem with "modern" URLs is the 600+ characters of garbage after the first slash that delimits the domain. That garbage is machine-readable, and URLs weren't made for machines; URLs were made for humans. Everything between the slashes should be recognisable.

Reply Score: 3

RE: URLs should be human readable
by zima on Sat 8th Sep 2018 01:37 UTC in reply to "URLs should be human readable"
zima Member since:
2005-07-06

The biggest problem with "modern" URLs is the 600+ character garbage after the first slash delimiting the domain. That garbage is machine readable and URLs weren't made for machines. URLs were made for humans. It should be recognisable between every slash.

What's kind of funny here is that Google is guilty of such URLs itself, in its search results and in the URLs you get sent to after clicking an ad...

Edited 2018-09-08 01:38 UTC

Reply Score: 5

Here we go again
by lasuit on Fri 7th Sep 2018 07:22 UTC
lasuit
Member since:
2005-11-02

Yet another opportunity for a large for-profit organization to "filter" the raw information so we can see what they want us to see.

Reply Score: 2

Let me resume this for you
by franzrogar on Fri 7th Sep 2018 07:32 UTC
franzrogar
Member since:
2012-05-17

URLs (as originally designed) are human readable.

Google (and other culprits, e.g. PHP's creators) have been f--king up those guidelines for their own benefit.

And now Google wants to, what, "kill the URL"? Sorry, but no. You f--ked up your search engine with crappy referrers; fix that, and leave the original URLs alive.

(With the exception, of course, of Berners-Lee's suggestion of killing off the "http://" and maybe transforming it into "web:")

Edited 2018-09-07 07:33 UTC

Reply Score: 2

RE: Let me resume this for you
by jh27 on Fri 7th Sep 2018 13:04 UTC in reply to "Let me resume this for you"
jh27 Member since:
2018-07-06

URLs (as originally designed) are human readable.

Google (and other culprits, ie PHP creators) have been f--king up the guidelines in their own benefit.

And now Google wants to, what, to "kill the URL"? Sorry, but no. You f--ked up your Search Engine with crappy refers, fix it, and leave the original URLs alive.

(Of course, with the exception by Berners Lee of killing off the "http://" and maybe transform it into "web:")


I'd agree with that. I find a lot of my URLs need fixing to remove https://...ampproject.com from the start of them. I love the fact that they provide zero indication of what they perceive the current issues to be, or of what the replacement might look like. Reminds me of how certain email clients "improve" their interface by showing the sender name rather than the email address.

Reply Score: 1

RE: Let me resume this for you
by dionicio on Fri 7th Sep 2018 17:07 UTC in reply to "Let me resume this for you"
dionicio Member since:
2006-07-12

Two of the original, fundamental aims: readability, memorability.

Two histories: the actual one, and a brand-new one truer to the original philosophy.

My guess:

Your site/My access-level request/My user-name/..

Followed by my own naming, mapping, indexing, etc.

Zero-level, anonymous access presents a Standard Persistent Name Location Mapping, and we need only:

Your site/

Every personalized tour should also present this Standard Mapping in machine- and human-readable form, for the navigator to use as a resource and for the user to share.

Every resource, object and query should have a link to a true, site-wide URL.

Landslides happen and rivers change course, but coordinates don't DISAPPEAR. That's the purpose of persistence. Nil is not right in a situational universe like the WWW.

If 3 years ago I bought that sofa and need access to that same info, it should be at exactly the same URL, even if with a big red bar at the top saying "archive".

That's human. We are so.

Reply Score: 2

RE[2]: Let me resume this for you
by dbox2005 on Fri 7th Sep 2018 18:11 UTC in reply to "RE: Let me resume this for you"
dbox2005 Member since:
2017-11-22

That is the definition of laziness...not human.

Reply Score: 1

dionicio Member since:
2006-07-12

"laziness...not human."

Excuse Me?

That's the reason for URLs to begin with. Maybe you're being ironic. < /:) >

Edited 2018-09-07 19:17 UTC

Reply Score: 3

Clickbait title
by Sodki on Fri 7th Sep 2018 07:54 UTC
Sodki
Member since:
2005-11-10

"We want to challenge how URLs should be displayed" is very different from "Google wants to kill the URL".

Want to display the URL in a different way? Fine. Want to write a title that has nothing to do with the article? Not fine.

Reply Score: 5

RE: Clickbait title
by dionicio on Fri 7th Sep 2018 19:33 UTC in reply to "Clickbait title"
dionicio Member since:
2006-07-12

It's an effort at abstraction. Per se, nothing wrong with that.

On it being hegemonic: we are in the same hole here that we were in a few decades ago with Microsoft's "active" pages:
innocent in principle, profoundly damaging to the protocolary nature of the WWW.

Reply Score: 1

URLs are fine
by Dr.Cyber on Fri 7th Sep 2018 08:35 UTC
Dr.Cyber
Member since:
2017-06-17

I would rather not have huge corporations use their power to impose standards on us.

Especially laughable is that Google talks about protecting our privacy while they themselves are doing everything in their power to compromise our privacy.

I guess they want to protect our privacy against OTHER entities who would compromise it, so that they are the only ones with our private information.

Reply Score: 1

"should be an industry-wide effort"
by Adurbe on Fri 7th Sep 2018 10:20 UTC
Adurbe
Member since:
2005-07-06

Nothing in computing is ever industry-wide; someone always wins and others are forced to follow.

The modern browser was basically pioneered by Netscape and consolidated by Microsoft. How a user interacts with a browser today is basically the same as back in the '90s.

If Google makes the switch, they are betting the bank on it. If consumers go along, they will lead the way people connect with the internet for the next 25 years, unopposed. However, that inherent inertia of users could also mean they move to more familiar systems like Edge, leaving Chrome as the modern Bob.

Reply Score: 2

kwan_e Member since:
2007-02-18

Nothing in computing is ever industry-wide; someone always wins and others are forced to follow.


That's a self-contradiction. If someone is capable of winning, and the others are forced to follow, then what wins is, by definition, industry-wide.

On the Web, no one wins. There are two de facto standards bodies, the W3C and the IETF, and they don't even stand behind their own standards, because at the end of the day those are merely Recommendations or Requests for Comments.

Reply Score: 3

A blockchain must somehow be the solution
by xylifyx on Fri 7th Sep 2018 10:32 UTC
xylifyx
Member since:
2008-05-03

We need to find something to use these things for, other than cryptocurrencies.

Reply Score: 2

Complexity grows....
by HereIsAThought on Fri 7th Sep 2018 11:18 UTC
HereIsAThought
Member since:
2017-09-14

Technologies seem to follow a path: created to replace something that's too complex, slowly becoming more complex over time, and finally being replaced by something simpler. Rinse and repeat.

HTTP/2 with encryption, binary framing, etc. all makes sense in terms of better performance and security - and yet - try just writing a simple page with a text editor and hitting reload in a browser reading from the file system - you can't - something precious has been lost.

Same goes for URLs - just yesterday I told somebody: hey, you can just add #t=1m30s to a YouTube URL to make it start at a specific point. Without URLs, the "hackability" of the web goes significantly down.
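That kind of URL hacking is also trivially scriptable. As a minimal sketch (the helper name `add_start_time` and the example video ID are illustrative, and YouTube accepts time offsets in other forms too, such as a `t=` query parameter), Python's standard `urllib.parse` can attach a fragment like `t=1m30s` without disturbing the rest of the URL:

```python
from urllib.parse import urlsplit, urlunsplit

def add_start_time(url: str, timestamp: str) -> str:
    """Attach a start-time fragment (e.g. 't=1m30s') to a video URL."""
    parts = urlsplit(url)
    # SplitResult is a named tuple, so _replace swaps in the new fragment.
    return urlunsplit(parts._replace(fragment=timestamp))

print(add_start_time("https://www.youtube.com/watch?v=dQw4w9WgXcQ", "t=1m30s"))
# → https://www.youtube.com/watch?v=dQw4w9WgXcQ#t=1m30s
```

The point stands regardless of the helper: because the URL is visible, plain text, anyone can manipulate it with a one-liner. Replace it with an opaque identity indicator and this whole class of tricks disappears.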

A low barrier to entry for engaging with a technology is key to its long-term viability.

What current gurus forget is that while it's just another small step for them, they are gradually pulling up the ladder for new developers, who are the future.

These new developers then decide to reinvent instead of struggling to reach the first rung, and the whole process repeats.

Hackability - low barrier to entry - is a key feature that Google seems to be forgetting - too many gurus perhaps.

Reply Score: 3

Comment by hardcode57
by hardcode57 on Fri 7th Sep 2018 13:00 UTC
hardcode57
Member since:
2014-06-02

I'm wary of Google deciding this for everyone, but the alternative of having some industry committee decide is worse: what you end up with is a decision that is 10 years in the making, formed as a compromise between the financial interests of the companies that have paid to be on the committee, and which will be adopted with all the alacrity of IPv6.

Reply Score: 2

No Need to kill...
by dionicio on Fri 7th Sep 2018 15:30 UTC
dionicio
Member since:
2006-07-12

And you Know IT.

Deprecation is almost complete. Dedicated to all the hippies that started your little company, now a monstrous Leviathan.

</Irony>

Finish the work on persistence, redundancy, indexing, and mapping, instead of toying with hegemonic moves.

Edited 2018-09-07 15:40 UTC

Reply Score: 1

RE: No Need to kill...
by dionicio on Fri 7th Sep 2018 15:45 UTC in reply to "No Need to kill..."
dionicio Member since:
2006-07-12

Three of them easily solved with old, trusty P2P, but alas, no "competitive" advantage.

Reply Score: 2

v fck Google
by dbox2005 on Fri 7th Sep 2018 18:14 UTC
It is about convention, stupid!
by _QJ_ on Sat 8th Sep 2018 20:04 UTC
_QJ_
Member since:
2009-03-12

Like 1+1 = 2, the URL is a convention.

If tomorrow I log on to a site like OSNews, and I just get a green icon to tell me it is the genuine, trusted OSNews site... Oh, and that I am also trusted by OSNews...
I am okay with it. On just one condition:

It must be an international convention !

All actors on the Internet must accept and follow the new norm.

And I don't care if, technically, the URL has been replaced by a SAML-style protocol with a third-party trust service. As long as... every actor respects the norms (sales, conditions, privacy, environmental norms, etc.).

Ironic question: has the Chrome team contacted the W3C to talk about it?

Reply Score: 2

Remembering a little...
by dionicio on Sun 9th Sep 2018 13:15 UTC
dionicio
Member since:
2006-07-12

File systems were created at hierarchical organizations, for hierarchically organized users. If you were not allowed to know where a resource resided, or what its name or alias was, that was it, end of matter.

The spirit at CERN was exactly the opposite.

Departments wanted, needed, to share: to open their research and allow others to build on it. The deeper research goes, the more respected and trusted it becomes.
That's how science works.

File systems of the time were mainly hierarchical too.

Tim broke that. CERN at the time had the power to do that.

Of late, the data brokers - the Leviathans - want a comeback of the old status quo. The money pressure is beyond imagination.

The battle is already lost if - as usual - users don't give a damn.

https://www.theguardian.com/technology/2018/sep/08/decentralisation-...

You cannot build on foundations you can't touch. For this to work, digital rights have to be made the other front.

Reply Score: 2

RE: Remembering a little...
by dionicio on Sun 9th Sep 2018 14:28 UTC in reply to "Remembering a little..."
dionicio Member since:
2006-07-12

The issue is highly political:

Think of a water hole in the dry season: crocodiles dig deeper; lions and hyenas circle it by day and mainly at night. That's scarcity management. That's capitalism in brief.

Unlike water, there is no reason info couldn't be multiplied ad infinitum.

The war is to make you use Their wells, and no others.

That's why chains of trust are vital to any new efforts on this path.

The shared data may be digitized, but no, the chains of trust are not going to be digital.

Edited 2018-09-09 14:33 UTC

Reply Score: 2