Google says it has begun requiring users to turn on JavaScript, the widely used programming language to make web pages interactive, in order to use Google Search.
In an email to TechCrunch, a company spokesperson claimed that the change is intended to “better protect” Google Search against malicious activity, such as bots and spam, and to improve the overall Google Search experience for users. The spokesperson noted that, without JavaScript, many Google Search features won’t work properly and that the quality of search results tends to be degraded.
↫ Kyle Wiggers at TechCrunch
One of the odd compliments you could give Google Search is that it would load even on the weirdest or oldest browsers, simply because it didn’t require JavaScript. Whether I loaded Google Search in the JS-less Dillo, Blazer on PalmOS, or the latest Firefox, I’d end up with a search box I could type something into and search. Sure, beyond that the web would be, shall we say, problematic, but at least Google Search worked. With this move, Google will end such compatibility, which was most likely a side effect rather than deliberate policy anyway.
I know a lot of people lament the widespread reliance on and requirement to have JavaScript, and it surely can be and is abused, but it’s also the reality of people asking more and more of their tools on the web. I would love it if websites gracefully degraded on browsers without JavaScript, but that’s simply not a realistic thing to expect, sadly. JavaScript is part of the web now – and has been for a long time – and every website using or requiring JavaScript makes the web no more or less “open” than the web requiring any of the myriad other technologies, like more recent versions of TLS. Nobody is stopping anyone from implementing support for JS.
I’m not a proponent of JavaScript or anything like that – in fact, I’m annoyed I can’t load our WordPress backend in browsers that don’t have it, but I’m just as annoyed that I can’t load websites on older machines just because they don’t have later versions of TLS. Technology “progresses”, and as long as the technologies being regarded as “progress” are not closed or encumbered by patents, I can be annoyed by it, but I can’t exactly be against it.
The idea that it’s JavaScript making the web bad and not shit web developers and shit managers and shit corporations sure is one hell of a take.
Duck.com works just fine without JS; it even works great with terminal-based browsers with no graphics.
I agree. It used to be awesome when even the most basic browsers like lynx were sufficient to get information from the web. However, JavaScript is now the de facto “payment token” to access these websites. As in “proof of work”, not “proof of stake”, in crypto jargon.
Why?
CAPTCHAs no longer work. They only annoy humans, while modern AIs like ChatGPT, and even simple vision models, can easily pass them. (Next test: if you *fail*, you are human, as passing is too difficult and machine-only territory. Joking, of course.)
Now, JavaScript doing a bit of local work acts as DRM-style security for websites. Some are explicit and just make you sit through a splash page. Others track you in the background.
If they think you are “human enough”, or at least running a modern browser without automation, they let you in.
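To make that concrete, here is a toy sketch of what such a browser-side “proof of work” check could look like, using the Web Crypto API. The challenge string, difficulty, and function names are all made up for illustration; this is not what Google or any CDN actually runs.
[code]
// Toy proof-of-work sketch: burn a little CPU before the server lets you in.
// Everything here (findNonce, the difficulty, the challenge) is illustrative.

// Hash challenge + nonce with SHA-256 via the Web Crypto API.
async function sha256Hex(input: string): Promise<string> {
  const data = new TextEncoder().encode(input);
  const digest = await crypto.subtle.digest("SHA-256", data);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Find a nonce whose hash starts with `difficulty` hex zeros: cheap for one
// visitor, expensive for a bot farm hammering the endpoint thousands of times.
async function findNonce(challenge: string, difficulty = 4): Promise<number> {
  const prefix = "0".repeat(difficulty);
  for (let nonce = 0; ; nonce++) {
    if ((await sha256Hex(challenge + nonce)).startsWith(prefix)) {
      return nonce;
    }
  }
}

// The splash page would then POST the nonce back before letting you through.
findNonce("example-challenge").then((nonce) => console.log("solved:", nonce));
[/code]
The point isn’t that any single solution is hard – it’s that doing it at scale costs real compute, which is exactly the “payment” part.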
The only other choice is actually paying with money. Web3, micro-transactions, whatever you call it. Every time you visit a website a small amount will be deducted from your wallet. Each Google Search is 5 cents.
Or… actually one more: monthly subscriptions, which Google does with YouTube. You know who is behind every request, and you don’t need to care much about automation.
In any case, none of these choices are ideal anymore. And I can’t blame any individual actor.
@Thom Holwerda: OSNews seems to support the [code] HTML tag, but it renders in the same style as regular text. Can we fix that?
(I also tried [TT] and [PRE], both of which were auto-filtered out.)
sukru,
Yeah, CAPTCHAs don’t really work against today’s threat model, and users obviously find them intrusive.
JavaScript dependencies can make it harder to programmatically reproduce low-level HTTP requests. But then again, it’s not much of a security barrier considering that programmers can automate browser requests undetectably using the Selenium WebDriver API with Chrome or Firefox.
https://developer.chrome.com/docs/chromedriver/get-started
These requests are genuinely authentic: fingerprinting, JavaScript engine, etc. Cloudflare’s automatic bot detection is completely oblivious to this technique and there’s nothing they can do about it. Google’s canceled web DRM initiative might have made it possible for websites to verify the browser is running in “lockdown mode”, although DRM is notorious for being defeated.
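For what it’s worth, a minimal sketch of that technique with the Node selenium-webdriver package looks something like this – assuming chromedriver is installed as described in the linked docs, and with a placeholder URL:
[code]
// Minimal sketch: drive a real Chrome instance with Selenium WebDriver
// (the Node selenium-webdriver package). The URL is just a placeholder.
import { Builder, Browser } from "selenium-webdriver";

async function fetchRendered(url: string): Promise<string> {
  // A real, ordinary Chrome is launched, so the JS engine and most
  // fingerprinting signals are the genuine article.
  const driver = await new Builder().forBrowser(Browser.CHROME).build();
  try {
    await driver.get(url);               // navigate like a normal user would
    return await driver.getPageSource(); // fully rendered, post-JS HTML
  } finally {
    await driver.quit();
  }
}

fetchRendered("https://example.com").then((html) => console.log(html.length));
[/code]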
I agree this would put off bot operators, but realistically, if these microtransactions were mandatory to use a site, including google.com, I think there would be substantial user losses too.
Thanks to improving AI, bots are going to impersonate people and there’s not much that can be done to technically stop it. I think the long-term solution has to be focusing more on good/bad behaviors and less on who/what is behind those behaviors.
The WebDriver spec says the browser is supposed to set navigator.webdriver in the site’s JavaScript environment to report that it’s being driven, so that sites can prevent this… it just happens to be another feature that’s been sitting unimplemented in Firefox’s bug tracker for years.
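Site-side, checking that flag is about as simple as it gets – a purely illustrative snippet, not any real site’s detection logic:
[code]
// Illustrative only: what a site-side check of the WebDriver spec's
// navigator.webdriver flag could look like. Whether the flag is actually set
// depends on the browser, which is the whole point of the complaint above.
if (navigator.webdriver) {
  // e.g. route the request to a challenge page instead of serving content
  console.warn("Automation flag is set; treating this session as a bot.");
} else {
  console.log("No automation flag reported by this browser.");
}
[/code]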
If you want to puppet the browser, you need to use something like the LDTP/PyATOM/Cobra testing framework which puppets applications through their accessibility APIs like screen readers do. Then websites attempting to tell your bot apart from humans risk running afoul of laws relating to discriminating against people with disabilities or becoming subject to compliance requirements for medical information privacy laws.
*nod* Clay Shirky wrote a post back in 2000 named The Case Against Micropayments where he focuses on how the fixed per-payment decision cost is much more “expensive” than the money. (And that was before we became overwhelmed with decision fatigue as badly as now. See also Johnny Harris’s Why you’re so tired.)
*nod* Reminds me of the spam pre-filter I wrote for my sites (technically the entire spam filter at the moment, since I still have to get around to the later stages), which is sort of a “third thing” after spell-check and grammar-check, focused on catching “I wouldn’t want this from humans either” stuff and sending the submitter back around to fix it.
(e.g. I want at least two words of non-URL text per URL, I want fewer than 50% of the characters to be from outside the basic Latin-1 character set, I don’t want HTML or BBcode tags in a plaintext field, I don’t want to see the same URL more than once, I don’t want e-mail addresses outside the reply-to field, I don’t want URLs, domain names, or e-mail addresses in the subject line, etc.)
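Roughly, a couple of those checks could be sketched like this (the names and thresholds are illustrative, not the actual filter):
[code]
// Illustrative sketch of a few of the checks listed above; names and
// thresholds are guesses, not the actual pre-filter.
const URL_RE = /https?:\/\/\S+/g;

function prefilterProblems(text: string): string[] {
  const problems: string[] = [];
  const urls = text.match(URL_RE) ?? [];
  const nonUrlWords = text.replace(URL_RE, " ").split(/\s+/).filter(Boolean);

  // At least two words of non-URL text per URL.
  if (urls.length > 0 && nonUrlWords.length < urls.length * 2) {
    problems.push("Too many links for too little text.");
  }

  // Don't repeat the same URL.
  if (new Set(urls).size < urls.length) {
    problems.push("The same URL appears more than once.");
  }

  // Fewer than 50% of characters outside basic Latin-1.
  const nonLatin1 = [...text].filter((c) => c.codePointAt(0)! > 0xff).length;
  if (text.length > 0 && nonLatin1 / text.length >= 0.5) {
    problems.push("Mostly characters from outside basic Latin-1.");
  }

  // No HTML or BBcode tags in a plaintext field.
  if (/<[a-z][^>]*>|\[\/?\w+\]/i.test(text)) {
    problems.push("Markup tags in a plaintext field.");
  }

  return problems; // empty means "pass it on to the later stages"
}
[/code]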
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
Haha, that’s interesting. It was never going to be foolproof anyway. That’s the way it is with most of these tactics…might stop casual attempts, but not someone determined.
The archive.org link is offline for me. I kind of remember around that time some were saying micropayments would fix email spam too. That didn’t really happen.
Yeah, there are several technical checks one can do. Spammers have gotten very good at creating comments that agree with the author/article, which probably helps them not get deleted, but then they’re made obvious because they always have something to sell and include irrelevant links to gambling or whatever they’re trying to promote. I think we could combat this a bit better by keeping links as a privilege for users with a bit more reputation. Hopefully new users would be understanding.
As a developer well aware of what can be achieved without JavaScript and how much more fragile JavaScript-based designs are (e.g. in the face of flaky network connections), I will continue to neither give paying business nor recommendations to vendors who fail the “are they competent enough to use the features already provided by the browser rather than reimplementing them?” test.
Sure, slap something like a Cloudflare challenge on if you must… but if you need JS to template your site or display a drop-down menu, you’re clearly not competent enough.
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
I fully understand the apprehension about JavaScript web UIs; sometimes web devs use JavaScript to bad effect. But ironically, some of the web projects I am most fond of are JavaScript-based.
One client had a system with a couple hundred users that was painfully slow to use, in large part because every step of the process had to hit the server and database over and over again. I replaced the entire system with a JavaScript application that could be navigated and updated entirely in the client until saved in a single postback event. They were impressed at how much better the new system was. I honestly don’t think it would have been as good if I had stuck with HTML postback forms.
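The general pattern was something like this bare-bones sketch – the endpoint and record shape are made up for illustration, not the client’s actual system:
[code]
// Bare-bones sketch of the pattern described above: keep edits in client
// memory and send one request on save, instead of a server round trip per
// step. The /api/save endpoint and the edit shape are made up.
interface RecordEdit {
  id: number;
  field: string;
  value: string;
}

class DraftState {
  private edits: RecordEdit[] = [];

  // Each UI interaction only mutates local state; no network traffic yet.
  apply(edit: RecordEdit): void {
    this.edits.push(edit);
  }

  // One postback with the whole batch when the user hits "save".
  async save(): Promise<void> {
    await fetch("/api/save", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(this.edits),
    });
    this.edits = [];
  }
}
[/code]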
Another project I enjoyed working on was a software defined radio.
https://ibb.co/9r468Ry
Frankly, it was a lot more fun than what I normally work on. I ended up being responsible for implementing both the back end and the front end on this project. I learned so much building the RF code from scratch. It could demodulate around 40 channels concurrently from a 20 MHz raw RF stream in real time. It’s also my one and only project to use PostgreSQL.
Anyway, the JavaScript client was pretty neat and allowed users to program the radio and record and listen in on audio frequencies from hundreds of miles away just by opening a web page in a browser. Although not a project goal, this even worked from my phone! Anyone who’s played with SDR software will be familiar with these concepts, but as far as I know I’m the first to build an SDR client for the browser 🙂
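The playback side of that is basically the Web Audio API. A minimal sketch of the general mechanism (with a placeholder sample rate, not the actual client code) looks like this:
[code]
// Minimal sketch of streaming demodulated audio into the browser with the
// Web Audio API. The 48 kHz sample rate is a placeholder, and the chunks
// would arrive over something like a WebSocket in a real client.
const ctx = new AudioContext();
let playAt = 0; // running schedule time so chunks play back to back

function playChunk(samples: Float32Array, sampleRate = 48000): void {
  const buffer = ctx.createBuffer(1, samples.length, sampleRate);
  buffer.copyToChannel(samples, 0);

  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);

  playAt = Math.max(playAt, ctx.currentTime);
  source.start(playAt);
  playAt += buffer.duration;
}
[/code]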
Alas, Mozilla ended up breaking the browser audio playback at one point.
BTW, look at that old scroll bar in the screenshot – what a thing of beauty. I hate what modern browsers and applications have done to minimize control surfaces. We all took this for granted, but then it got taken away and modern applications are a usability nightmare. Several times a week I find myself pixel-hunting the hit box for scrolling and/or resizing the window.
Just this week I visited a client and he was having a hell of a time moving browser and office windows between screens (local versus projector) because some idiotic UI designers decided to completely remove the title bar.
God forbid borders have more than a pixel – it’s sacrilege! Never mind the enormous screens and tons of whitespace. They either didn’t test their UI with real users, or they did and disregarded usability problems in favor of more empty whitespace because that’s trendy.