Google says it has begun requiring users to turn on JavaScript, the widely used programming language to make web pages interactive, in order to use Google Search.
In an email to TechCrunch, a company spokesperson claimed that the change is intended to “better protect” Google Search against malicious activity, such as bots and spam, and to improve the overall Google Search experience for users. The spokesperson noted that, without JavaScript, many Google Search features won’t work properly and that the quality of search results tends to be degraded.
↫ Kyle Wiggers at TechCrunch
One of the oddest compliments you could give Google Search is that it would load even on the weirdest or oldest browsers, simply because it didn’t require JavaScript. Whether I loaded Google Search in the JS-less Dillo, Blazer on PalmOS, or the latest Firefox, I’d end up with a search box I could type something into and search. Sure, beyond that the web would be, shall we say, problematic, but at least Google Search worked. With this move, Google ends that compatibility, which was most likely more a side effect than deliberate policy anyway.
I know a lot of people lament the widespread reliance on, and requirement to have, JavaScript, and it surely can be and is abused, but it’s also the reality of people asking more and more of their tools on the web. I would love it if websites degraded gracefully on browsers without JavaScript, but that’s simply not a realistic thing to expect, sadly. JavaScript is part of the web now – and has been for a long time – and a website using or requiring JavaScript makes the web no more or less “open” than requiring any of the myriad other web technologies, like more recent versions of TLS. Nobody is stopping anyone from implementing support for JS.
I’m not a proponent of JavaScript or anything like that – in fact, I’m annoyed I can’t load our WordPress backend in browsers that don’t have it, but I’m just as annoyed that I can’t load websites on older machines just because they don’t have later versions of TLS. Technology “progresses”, and as long as the technologies being regarded as “progress” are not closed or encumbered by patents, I can be annoyed by it, but I can’t exactly be against it.
The idea that it’s JavaScript making the web bad and not shit web developers and shit managers and shit corporations sure is one hell of a take.
Duck.com works just fine without JS; it even works great with terminal-based browsers with no graphics.
I agree. It used to be awesome when even the most basic browsers like lynx were sufficient to get information from the web. However, JavaScript is now the de facto “payment token” to access these websites. As in “proof of work”, not “proof of stake”, in crypto jargon.
Why?
CAPTCHAs no longer work. They only annoy humans, while modern AIs like ChatGPT, and even simple vision models, can easily pass them. (Next test: if you *fail* you are human, as passing is too difficult and machine-only territory. Joking, of course.)
Now, JavaScript doing a bit of local work acts as a DRM-style security measure for these websites. Some are explicit and just make you sit through a splash page; others track you in the background.
If they think you are “human enough”, or at least running a modern browser without automation, they let you in.
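To make the “proof of work” comparison concrete: below is a minimal Python sketch of a hash-based challenge of the kind some anti-bot splash pages use. The function names, difficulty, and protocol details are all made up for illustration; real sites run the solver in browser JavaScript, and Google has not published how its own checks work. The server hands out a random nonce, the client burns a little CPU finding a counter whose hash clears a difficulty target, and the server verifies the answer cheaply.

```python
import hashlib
import secrets

# Hypothetical hash-based proof-of-work challenge (illustration only).
DIFFICULTY_BITS = 20  # made-up difficulty: digest must fall below 2**(256 - 20)
TARGET = 1 << (256 - DIFFICULTY_BITS)

def issue_challenge() -> str:
    """Server side: hand the visitor a random nonce."""
    return secrets.token_hex(16)

def solve_challenge(nonce: str) -> int:
    """Client side: burn CPU until the hash clears the difficulty target."""
    counter = 0
    while True:
        digest = hashlib.sha256(f"{nonce}:{counter}".encode()).digest()
        if int.from_bytes(digest, "big") < TARGET:
            return counter
        counter += 1

def verify(nonce: str, counter: int) -> bool:
    """Server side: checking an answer is cheap; finding one is not."""
    digest = hashlib.sha256(f"{nonce}:{counter}".encode()).digest()
    return int.from_bytes(digest, "big") < TARGET

if __name__ == "__main__":
    nonce = issue_challenge()
    answer = solve_challenge(nonce)  # roughly a million hashes at 20 bits
    print("accepted:", verify(nonce, answer))
```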
The only other choice is actually paying with money. Web3, micro-transactions, whatever you call it. Every time you visit a website a small amount will be deducted from your wallet. Each Google Search is 5 cents.
Or… actually one more: monthly subscriptions, which Google does with YouTube. Then you know who is behind every request, and don’t need to care much about automation.
In any case, none of these choices are ideal anymore. And I can’t blame any individual actor.
@Thom Holwerda: OSNews seems to support the [code] HTML tag, but uses the same style. Can we fix that?
(I also tried [TT] and [PRE], both of which were auto-filtered out.)
sukru,
Yeah, captchas don’t really work against today’s threat model, and users obviously find them intrusive.
Javascript dependencies can make it harder to programmatically reproduce low-level HTTP requests. But then again, it’s not much of a security barrier, considering that programmers can automate browser requests undetectably using the Selenium WebDriver API with Chrome or Firefox.
https://developer.chrome.com/docs/chromedriver/get-started
These requests are genuinely authentic: fingerprinting, JavaScript engine, etc. Cloudflare’s automatic bot detection is completely oblivious to this technique, and there’s nothing they can do about it. Google’s canceled web DRM initiative might have made it possible for websites to verify that the browser is running in “lock-down mode”, although DRM is notorious for being defeated.
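For what it’s worth, a minimal sketch of that approach in Python might look like the following, assuming Selenium 4 with Chrome/chromedriver; the search URL is just an example, and the navigator.webdriver read at the end ties into the WebDriver-spec point raised further down.

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Drive a real Chrome instance through chromedriver, so the site sees a
# genuine browser: real JavaScript engine, real rendering, real fingerprint.
opts = Options()
opts.add_argument("--headless=new")  # optional; remove to watch the browser

driver = webdriver.Chrome(options=opts)
try:
    driver.get("https://www.google.com/search?q=osnews")  # illustrative URL
    html = driver.page_source  # the fully rendered page, JavaScript and all

    # Per the WebDriver spec, the browser is supposed to expose the fact that
    # it is being automated via this flag; whether sites check it is another matter.
    print("navigator.webdriver =", driver.execute_script("return navigator.webdriver"))
    print(len(html), "bytes of rendered HTML")
finally:
    driver.quit()
```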
I agree this would put off bot operators, but realistically, if these microtransactions were mandatory to use a site, including google.com, I think there would be substantial user losses too.
Thanks to improving AI, bots are going to impersonate people, and there’s not much that can be done to technically stop it. I think the long-term solution has to be focusing more on good/bad behavior and less on who/what is behind it.
The WebDriver spec says the browser is supposed to set navigator.webdriver in the site’s JavaScript environment to report that it’s being driven to allow sites to prevent this… it just happens to be another feature that’s been sitting unimplemented in Firefox’s bug tracker for years.
If you want to puppet the browser without tripping that, you need to use something like the LDTP/PyATOM/Cobra testing frameworks, which puppet applications through their accessibility APIs the way screen readers do. Then websites attempting to tell your bot apart from humans risk running afoul of laws against discriminating against people with disabilities, or becoming subject to compliance requirements for medical-information privacy laws.
*nod* Clay Shirky wrote a post back in 2000 titled The Case Against Micropayments, in which he focuses on how the fixed per-payment decision cost is much more “expensive” than the money itself. (And that was before we became as overwhelmed with decision fatigue as we are now. See also Johnny Harris’s Why you’re so tired.)
*nod* Reminds me of the spam pre-filter I wrote for my sites (technically a spam filter, since I still have to get around to the later stages) which is sort of a “third thing” after spell-check and grammar-check, focused on catching “I wouldn’t want this from humans either” stuff and sending the submitter back around to fix them.
(e.g. I want at least two non-URL words per URL, I want less than 50% of the characters to come from outside the basic latin1 character set, I don’t want HTML or BBcode tags in a plaintext field, I don’t want to see the same URL more than once, I don’t want e-mail addresses outside the reply-to field, I don’t want URLs, domain names, or e-mail addresses in the subject line, etc.)
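A rough Python sketch of what a few of those checks could look like – my own restatement of the rules above, not the actual filter, and the regexes are deliberately crude:

```python
import re

URL_RE = re.compile(r"https?://\S+", re.IGNORECASE)
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
TAG_RE = re.compile(r"</?\w+[^>]*>|\[/?\w+\]")  # crude HTML/BBcode tag match

def prefilter_problems(subject: str, body: str) -> list[str]:
    """Return complaints to bounce back to the submitter; an empty list means it passes."""
    problems = []

    urls = URL_RE.findall(body)
    non_url_words = [w for w in body.split() if not URL_RE.match(w)]
    if urls and len(non_url_words) < 2 * len(urls):
        problems.append("need at least two non-URL words per URL")
    if len(urls) != len(set(urls)):
        problems.append("the same URL appears more than once")

    if body:
        outside_latin1 = sum(1 for ch in body if ord(ch) > 0xFF)
        if outside_latin1 / len(body) >= 0.5:
            problems.append("at least half the characters fall outside latin1")

    if TAG_RE.search(body):
        problems.append("HTML/BBcode tags in a plaintext field")
    if EMAIL_RE.search(body):
        problems.append("e-mail address outside the reply-to field")
    if URL_RE.search(subject) or EMAIL_RE.search(subject):
        problems.append("URL or e-mail address in the subject line")

    return problems
```

The real filter presumably covers the remaining rules (bare domain names in the subject line, and so on) and plugs into the site’s submission flow.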
As a developer well aware of what can be achieved without JavaScript, and of how much more fragile JavaScript-based designs are (e.g. in the face of flaky network connections), I will continue to give neither paying business nor recommendations to vendors who fail the “are they competent enough to use the features already provided by the browser rather than reimplementing them?” test.
Sure, slap something like a Cloudflare challenge on if you must… but if you need JS to template your site or display a drop-down menu, you’re clearly not competent enough.