Google says it has begun requiring users to turn on JavaScript, the widely used programming language to make web pages interactive, in order to use Google Search.
In an email to TechCrunch, a company spokesperson claimed that the change is intended to “better protect” Google Search against malicious activity, such as bots and spam, and to improve the overall Google Search experience for users. The spokesperson noted that, without JavaScript, many Google Search features won’t work properly and that the quality of search results tends to be degraded.
↫ Kyle Wiggers at TechCrunch
One of the stranger compliments you could give Google Search is that it would load even on the weirdest or oldest browsers, simply because it didn’t require JavaScript. Whether I loaded Google Search in the JS-less Dillo, Blazer on PalmOS, or the latest Firefox, I’d end up with a search box I could type something into and search. Sure, beyond that the web would be, shall we say, problematic, but at least Google Search worked. With this move, Google ends that compatibility, which was most likely a side effect rather than deliberate policy anyway.
I know a lot of people lament the widespread reliance on and requirement to have JavaScript, and it surely can be and is abused, but it’s also the reality of people asking more and more of their tools on the web. I would love it if websites degraded gracefully on browsers without JavaScript, but that’s simply not a realistic thing to expect, sadly. JavaScript is part of the web now – and has been for a long time – and a website using or requiring JavaScript makes the web no more or less “open” than the web requiring any of the myriad other technologies it already depends on, like more recent versions of TLS. Nobody is stopping anyone from implementing support for JS.
I’m not a proponent of JavaScript or anything like that – in fact, I’m annoyed I can’t load our WordPress backend in browsers that don’t have it – but I’m just as annoyed that I can’t load websites on older machines just because they don’t have later versions of TLS. Technology “progresses”, and as long as the technologies regarded as “progress” are not closed or encumbered by patents, I can be annoyed by it, but I can’t exactly be against it.
The idea that it’s JavaScript making the web bad and not shit web developers and shit managers and shit corporations sure is one hell of a take.
Duck.com works just fine without js; it even works great with terminal-based browsers with no graphics.
I agree. It used to be awesome when even the most basic browsers, like lynx, were sufficient to get information from the web. However, JavaScript is now the de facto “payment token” for accessing these websites. As in “proof of work”, not “proof of stake”, in crypto jargon.
Why?
CAPTCHAs no longer work. They only annoy humans, while modern AIs, like ChatGPT and even simple vision models, can easily pass them. (Next test: if you *fail*, you are human, as passing is too difficult and machine-only territory. Joking, of course.)
Now, JavaScript doing a bit of local work acts as a DRM-style security measure for these websites. Some are explicit and just make you sit through a splash page; others track you in the background.
If they think you are “human enough”, or at least running a modern browser without automation, they let you in.
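To make the “local work” idea concrete, here is a toy hashcash-style sketch in Python – purely illustrative, not what Google or anyone else actually ships, and the difficulty and names are made up:

    import hashlib
    import os

    DIFFICULTY_BITS = 20  # illustrative: the client must burn roughly 2^20 hash attempts

    def issue_challenge() -> str:
        """Hand the browser a random nonce; the page's JS must find a counter
        such that sha256(nonce + counter) starts with DIFFICULTY_BITS zero bits."""
        return os.urandom(16).hex()

    def verify(nonce: str, counter: int) -> bool:
        """Cheap server-side check that the visitor really did the local work."""
        digest = hashlib.sha256(f"{nonce}{counter}".encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0

A single visitor barely notices the split second of hashing, but anyone hammering the endpoint at scale pays for it in CPU.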
The only other choice is actually paying with money. Web3, micro-transactions, whatever you call it. Every time you visit a website, a small amount is deducted from your wallet. Each Google search costs 5 cents.
Or… actually, one more: monthly subscriptions, which Google does with YouTube. You know who is behind every request, and you don’t need to care much about automation.
In any case, none of these choices are ideal anymore. And I can’t blame any individual actor.
@Thom Holwerda: OSNews seems to support the [code] HTML tag, but it uses the same style as regular text. Can we fix that?
(I also tried [TT] and [PRE], both of which were auto-filtered out.)
sukru,
Yeah, captchas don’t really work against today’s threat model, and users obviously find them intrusive.
Javascript dependencies can make it harder to reproduce low-level HTTP requests programmatically. But then again, it’s not much of a security barrier considering that programmers can automate browser requests undetectably using the Selenium web driver API with Chrome or FF.
https://developer.chrome.com/docs/chromedriver/get-started
These requests are genuinely authentic: fingerprinting, javascript engine, etc. Cloudflare’s automatic bot detection is completely oblivious to this technique, and there’s nothing they can do about it. Google’s canceled web DRM initiative might have made it possible for websites to verify that the browser is running in “lock down mode”, although DRM is notorious for being defeated.
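For what it’s worth, a minimal sketch of what I mean, using the Python Selenium bindings with the chromedriver from the link above (the URL is just a placeholder):

    from selenium import webdriver

    # Drive a real Chrome instance: the TLS stack, javascript engine and
    # fingerprint are all genuine, only the input is scripted.
    options = webdriver.ChromeOptions()
    # options.add_argument("--headless=new")  # optional, but headless is easier to flag

    driver = webdriver.Chrome(options=options)
    try:
        driver.get("https://example.com/search?q=test")
        print(len(driver.page_source))  # rendered HTML, after any JS challenges have run
    finally:
        driver.quit()

Nothing exotic is needed; the site sees an ordinary browser making ordinary requests.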
I agree this would put off bot operators, but realistically, if these microtransactions were mandatory to use a site, including google.com, I think there would be substantial user losses too.
Thanks to improving AI, bots are going to impersonate people, and there’s not much that can be done to technically stop it. I think the long-term solution has to be focusing more on good/bad behaviors and less on who/what is behind those behaviors.
The WebDriver spec says the browser is supposed to set navigator.webdriver in the site’s JavaScript environment to report that it’s being driven, so that sites can prevent this… it just happens to be another feature that’s been sitting unimplemented in Firefox’s bug tracker for years.
If you want to puppet the browser, you need to use something like the LDTP/PyATOM/Cobra testing framework, which puppets applications through their accessibility APIs the way screen readers do. Then websites attempting to tell your bot apart from humans risk running afoul of laws relating to discriminating against people with disabilities, or becoming subject to compliance requirements for medical information privacy laws.
*nod* Clay Shirky wrote a post back in 2000 named The Case Against Micropayments where he focuses on how the fixed per-payment decision cost is much more “expensive” than the money. (And that was before we became overwhelmed with decision fatigue as badly as now. See also Johnny Harris’s Why you’re so tired.)
*nod* Reminds me of the spam pre-filter I wrote for my sites (technically a spam filter, since I still have to get around to the later stages) which is sort of a “third thing” after spell-check and grammar-check, focused on catching “I wouldn’t want this from humans either” stuff and sending the submitter back around to fix them.
(eg. I want at least two words of non-URL per URL, I want less than 50% of the characters to be from outside the basic latin1 character set, I don’t want HTML or BBcode tags in a plaintext field, I don’t want to see the same URL more than once, I don’t want e-mail addresses outside the reply-to field, I don’t want URLs, domain names, or e-mail addresses in the subject line, etc.)
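In practice, a couple of those checks boil down to something like this minimal sketch (Python, with the names and regexes made up for illustration):

    import re

    URL_RE = re.compile(r"https?://\S+", re.IGNORECASE)
    TAG_RE = re.compile(r"</?\w+[^>]*>|\[/?\w+\]")  # crude HTML/BBcode tag detector

    def prefilter(body: str) -> list[str]:
        """Return 'please fix this and resubmit' complaints (illustrative subset)."""
        problems = []
        urls = URL_RE.findall(body)
        non_url_words = [w for w in body.split() if not URL_RE.match(w)]

        if urls and len(non_url_words) < 2 * len(urls):
            problems.append("Please include at least two non-URL words per URL.")
        if len(urls) != len(set(urls)):
            problems.append("Please don't repeat the same URL.")
        if TAG_RE.search(body):
            problems.append("Plain text only: no HTML or BBcode tags.")
        if body and sum(ord(c) > 0xFF for c in body) / len(body) >= 0.5:
            problems.append("Too many characters from outside basic Latin-1.")
        return problems

Each rule is cheap to evaluate, and each one targets something no legitimate human submission should need.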
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
Haha, that’s interesting. It was never going to be foolproof anyway. That’s the way it is with most of these tactics…might stop casual attempts, but not someone determined.
The archive.org link is offline for me. I kind of remember that, around that time, some were saying micropayments would fix email spam too. That didn’t really happen.
Yeah, there are several technical checks one can do. Spammers have gotten very good at creating comments that agree with the author/article, which probably helps them not get deleted, but then they’re made obvious because they always have something to sell and include irrelevant links to gambling or whatever they’re trying to promote. I think we could combat this a bit better by keeping links as a privilege for users with a bit more reputation. Hopefully new users would be understanding.
Huh. It was up for me when I posted it and it’s up for me now. I wonder whether there was a blip of downtime or whether it’s some kind of location-related failure.
My plan, if that becomes necessary, is to only allow links matching a whitelist of URL patterns for sites where I trust moderation to be active enough to make them unsuitable for use by spammers. (eg. Wikipedia articles, IMDB entries, etc.)
Currently, as far as choice of URL goes, I just blacklist known URL shorteners and link-monetizing services with a message to please use the actual/full URL, and it’s very effective.
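The shortener check itself is about as simple as it sounds – roughly this, with an abbreviated, purely illustrative blacklist:

    from urllib.parse import urlparse

    # Illustrative entries only, not an exhaustive list.
    BLACKLISTED_HOSTS = {"bit.ly", "t.co", "goo.gl", "tinyurl.com", "adf.ly"}

    def reject_reason(url: str) -> str | None:
        host = (urlparse(url).hostname or "").lower()
        if host in BLACKLISTED_HOSTS:
            return "Please use the actual/full URL rather than a shortener or monetizer."
        return None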
As a developer well aware of what can be achieved without JavaScript and how much more fragile JavaScript-based designs are (eg. in the face of flaky network connections), I will continue to give neither paying business nor recommendations to vendors who fail the “are they competent enough to use the features already provided by the browser rather than reimplementing them?” test.
Sure, slap something like a Cloudflare challenge on if you must… but if you need JS to template your site or display a drop-down menu, you’re clearly not competent enough.
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
I fully understand the apprehension about javascript web UIs; sometimes web devs use javascript to bad effect. But ironically, some of the web projects I am most fond of are javascript-based.
One client had a system with a couple hundred users that was painfully slow to use in large part because every step of the process had to hit the server and database over and over again. I replaced the entire system with a javascript application that could be navigated and updated entirely in the client until saved in a single postback event. They were impressed at how much better the new system was. I honestly don’t think it would have been as good if I had stuck with HTML postback forms.
Another project I enjoyed working on was a software defined radio.
https://ibb.co/9r468Ry
Frankly, a lot more fun than what I normally work on. I ended up being responsible for implementing both the back end and the front end on this project. I learned so much building the RF code from scratch. It could demodulate around 40 channels concurrently from a 20MHz raw RF stream in real time. It’s also my one and only project to use PostgreSQL.
Anyway, the javascript client was pretty neat and allowed users to program the radio and record and listen in on audio frequencies from hundreds of miles away by opening a web page in a browser. Although not a project goal, this even worked from my phone! Anyone who’s played with SDR software will be familiar with these concepts, but as far as I know, I’m the first to build an SDR client for the browser 🙂
Alas, mozilla ended up breaking browser audio playback at one point.
BTW, look at that old scroll bar in the screenshot – what a thing of beauty. I hate what modern browsers and applications have done to minimize control surfaces. We all took this for granted, but then it got taken away, and modern applications are a usability nightmare. Several times a week I find myself pixel-hunting the hit box for scrolling and/or resizing the window.
Just this week I visited a client and he was having a hell of a time moving browser and office windows between screens (local versus projector) because some idiotic UI designers decided to completely remove the title bar.
God forbid borders be more than a pixel wide – it’s sacrilege! Never mind the enormous screens and tons of whitespace. They either didn’t test their UI with real users, or they did and disregarded usability problems in favor of more empty whitespace because that’s trendy.
This.
Amen. At least on KDE, you don’t need to focus a window to scroll-wheel a widget on it, and you can Alt+LeftClick-drag to move a window or Alt+RightClick-drag to resize it from anywhere inside it.
(I’ll confess that I leave my window borders on KDE configured at 1px for aesthetic reasons, since it’s so much easier to just Alt+Click to move or resize them – but, granted, that’s mostly because, with a 4480×1080 desktop, I tend to leave my pointer speed so high that even a 5px border would be a bit slow to acquire with the cursor. I still leave borders traditional on my single-monitor Win98SE, WinXP, Win7, and Mac OS 9 machines.)
I tend to find that sort of design enraging because, invariably, I’ll fill out the entire thing and then only discover on the final card/screen that I need to do something like whitelisting reCAPTCHA in uMatrix and reloading the page, and then everything I entered gets discarded.
If there are no required intermediate round-trips, I make the thing unroll into one very long page with JS disabled.
If there ARE required intermediate round-trips, then I use classical AJAX to progressively enhance things, so that the whole page doesn’t need to be reloaded if JavaScript is enabled but it’s still functional without it.
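As a rough sketch of the round-trip case (hypothetical Flask handler, with made-up route and template names; X-Requested-With is a header many classic AJAX libraries set):

    from flask import Flask, render_template, request

    app = Flask(__name__)

    @app.route("/signup/step/<int:n>")
    def signup_step(n: int):
        if request.headers.get("X-Requested-With") == "XMLHttpRequest":
            # Classical AJAX: swap just this step into the existing page.
            return render_template("signup_step_fragment.html", step=n)
        # No JavaScript: a full page that still works with plain links and forms.
        return render_template("signup_step_full.html", step=n)

The same endpoint serves both audiences, so the no-JS path never rots.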
I make an exception for stuff that’s actually a web app. The general rule of thumb is: “How appropriate would it feel to implement this as a native GUI using PyQt?”
…though, usually, I’m so determined to keep to my “design my desktop to perform well on the 2011 Athlon II X2 270 that I only upgraded off a year ago” principles that I don’t write “true” apps with browser interfaces unless they’re going to involve hypertext either way. (If it involves hypertext, I might as well piggy-back on my browser’s existing ad-blocking extension setup in case oEmbed support gets added and also make it amenable to being turned into groupware.)
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
I didn’t bother implementing a no-js version, as it would have been twice the work and I honestly don’t think anyone would have used it. Their old software was built with HTML form postback and they hated it; that’s why I got the job. Obviously I’ll concede JS can be done badly, but it can also provide some really nice features that improve productivity and aren’t possible otherwise.
I’m very picky about this too, haha. Javascript gets a bad name because of extremely bloated frameworks, but javascript itself performs well. My SDR application was fully streaming. The spectrum was rendered in real time using javascript, as were the audio scopes and the audio playback itself. All of those purple fields in the screenshot were editable, either by typing in values or by dragging the respective markers on the frequency bar, and all of these changes propagated to the server in real time. Any other connected client would see the changes in real time too.
Slow interfaces are not technically because of javascript itself, but because of poor development and bloated frameworks. I will concede these are quite common. Server-side code can be slow as well. I probably don’t need to tell you, but inefficiency is not something modern developers are encouraged to solve, unfortunately.
I’d say my aversion to JavaScript apps is a death by a thousand cuts.
My Firefox having 3000+ suspended/unloaded tabs as I struggle to get all the Reddit/Lobste.rs articles and YouTube videos filed away in my folders for things I don’t want to lose but don’t have time to read/watch? Jankier.
My Firefox having a stack of half a dozen different ad-blocking and anti-tracking extensions? Jankier.
Tracing garbage collection generally requiring double the amount of memory compared to stuff like C/C++/Rust, to allow floating garbage (an actual item of jargon) to amortize collection costs? My Athlon II X2 270 was already maxed out at 32GiB of RAM.
Firefox, Ungoogled Chromium, Tor Browser, and Thunderbird are the heaviest non-game applications on my desktop by an order of magnitude even before you bring individual app efficiency into it.
…and then there’s the nativeness argument. I’m just so fed up with all the papercuts from how JavaScript UIs invariably don’t replicate the native UI’s behaviours perfectly, re-creating bugs that were solved problems in Mac OS in the 80s and on Windows as of Windows 95. (eg. Drag-and-drop reordering on a “list widget” where, because of how the browser implements its primitives, you’d better jolly well wait a second before releasing the button or it’ll cancel your drag-and-drop despite you having had to wait all that time for the auto-scroll to get you from source to destination.)
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
32GiB shouldn’t be a bottleneck; it wasn’t so long ago that new apple PCs were still shipping with 8GiB, haha. Even an RPI4 with 4GiB should be ok for most browsing (don’t open many tabs), although the CPU is a performance bottleneck.
I just looked, and besides my VMs, plasmashell and steam are the main culprits for me on an idle system. I’m very annoyed that steam is even running right now. The steam client is behaving like a virus…ugh.
Firefox is using less than 1GB right now, and 2GB with all the extensions and a dozen tabs seems normal. However, I know that there are cases where things don’t go normally. Plenty of such cases are reported online…
https://www.reddit.com/r/firefox/comments/12wkh6a/firefox_gradually_increasing_memory_usage_to_100/
It’s hard to pinpoint their local problem without specifics. This hasn’t happened to me recently, but I have seen FF go into the 10-20GB range and crash; I suspect there were memory leaks with javascript/video/something. The website definitely made a difference.
I’ve noticed some garbage-collected software (ie java) uses more memory when you have a lot of it (ie overprovisioned) because it’s optimized to perform GC cycles less frequently. So if you’ve got 32GiB, it may just hold off on garbage collecting for a lot longer than if you have 8GiB. It’s a performance tradeoff. Maybe you are observing this effect.
I typically use the same browser controls (like drop-downs, input boxes, buttons, etc) in javascript. Javascript is more or less replacing server-generated HTML. My goal isn’t usually to recreate browser controls, but to create more client-side interactivity than the postback model allows. I suppose you may be one of those who will never like websites that do this, but I do maintain that, when it’s done well, it shouldn’t impose a performance burden and can even improve performance over server-driven interfaces.
Sorry this got buried under other tabs and I forgot to keep checking for replies.
I currently have over 3800 unloaded/suspended tabs that I’m trying to get triaged… many of them un-triaged YouTube videos and there’s definitely some kind of memory leak in Firefox that gets fixed by periodically restarting it. (I know there’s at least one open bug relating to that in the context of YouTube.)
Plus, I also run other stuff simultaneously, like VirtualBox. I remember when I originally bought the machine with 16GiB of RAM “so there will be plenty of room to run multiple VMs simultaneously” and now I’m running a machine with 64GiB of RAM.
The “Minimize memory usage” button in Firefox’s about:memory (basically “GC, CC, and dump caches”) doesn’t help much.
I’m more talking about stuff that the browser doesn’t provide access to the native implementations of. (eg. Interactive/selectable list/tree/table views with properly native multi-select and drag-and-drop behaviour AND support for rich rendering of entries akin to what you get by subclassing QItemDelegate so that web devs can actually see them as suitable for purpose, unlike the multi-select form widget.)
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
I can imagine how 3800 tabs could cause problems, haha. Wow. I don’t open anything close to this many.
Maybe browsers could innovate and gain the ability to embed native controls into web pages. 🙂
(a callback to IE, if it wasn’t obvious).
Well, I’ve triaged my way down to 3612 now, partly thanks to some of them being things like “batch of two dozen tabs of reference material for an on-hold programming project that just need to be moved into a bookmarks folder” and others being “I know I like this YouTuber… toss them in the associated subfolder in my ‘pretend I’ll get around to watching this some day’ bookmarks folder”.
Amen to that! À propos of which: remember the big argument over the (proposed) Rustdoc conversion years ago? Perfect example of where JS just shouldn’t be involved except to enhance the experience. I’m really tired of JS-dependent sites.
At least there are no more Flash-dependent sites.
Or are there?
One major difference between requiring newer versions of TLS and requiring JavaScript is that supporting JS is far from sufficient for being able to use a site that depends on JS. There’s so much more that can go wrong along the way. Sure, TLS can have its own issues, but orders of magnitude fewer, and orders of magnitude less likely. In contrast, JS is brittle—it should be reserved for things that can’t be done well without it.
Javascript aside, Google search is such garbage these days that it doesn’t even work anymore. It can’t even search by phrase properly (Russian websites have no problem finding the needed results). YouTube search is even worse.