Google has tried on and off for years to hide full URLs in Chrome’s address bar, because apparently long web addresses are scary and evil. Despite the public backlash that came after every previous attempt, Google is pressing on with new plans to hide all parts of web addresses except the domain name.
A few new feature flags have appeared in Chrome’s Dev and Canary channels (V85), which modify the appearance and behavior of web addresses in the address bar. The main flag is called “Omnibox UI Hide Steady-State URL Path, Query, and Ref” which hides everything in the current web address except the domain name. For example, “https://www.androidpolice.com/2020/06/07/lenovo-ideapad-flex-5-chromebook-review/” is simply displayed as “androidpolice.com.”
As I’ve said numerous times before, I like the idea of seeing if we can improve the way browsers display addresses – if we don’t try to improve because “that’s just how things are,” we end up with garbage like the UNIX/Linux directory naming conventions. However, I don’t think Google doing this single-handedly and on its own is a good idea; this should be a standards-based process, open to comments from everyone.
People who bash *nix directory naming generally don’t understand why it is there, and why it is great.
Whenever I use a Mac, I get irritated as it doesn’t really use them.
Also, this is one of the reasons I don’t use Chrome. You should not ever obfuscate the URL. Now if anything, sites should do better at not giving out links that are a billion characters long…
One of the many web dos and don’ts consists of NOT clicking on shortened URLs, as you have no idea where they are really going…
leech,
It isn’t clear to me what issue Thom has with URLs or directories. Does he mean the FHS?
https://en.wikipedia.org
(I intentionally removed the full URL to the wikipedia article about FHS for effect!)
A directory is just a hierarchy, and URLs can be hierarchies as well, but it is technically just text. I’m not really understanding what the problem is with it. Clearly a lot of websites do not have the best organization, but that’s on the website’s owner and/or the framework’s developer more than something specifically wrong with URLs.
Personally, I feel URLs are useful to indicate where you are. I often have to discuss specific URLs for troubleshooting, and this is just going to make things worse for me. Google probably wants users to be more dependent on Google searches.
If there’s any part of the URL I’d mind hiding the least, it’s the query string portion, since it’s often not meant to be human readable/writable.
URL shorteners are bad because it’s impossible to know where a link is going to go.
They may have been used at one point to overcome twitter limitations, but it’s better just to fix twitter and use direct linking.
I guess you’ll like this extension or one of the others with the same function then: https://chrome.google.com/webstore/detail/neat-url/jchobbjgibcahbheicfocecmhocglkco
“Senseless attack”? Why not debate the concept on its own merits?
Of all the mistakes made during the creation of the Internet, and there were many (IPv4 anyone?), the URL is probably the most egregious one, as it is the one that is end-user facing. The whole concept is ridiculous. Why is it unique? Why do I have to register it via a third party? Why are its conventions regulated? Why can’t I name my site “Erez’s site” and then go and register it on Google/DDG/BING? You’ll laugh, but AOL had the right idea with their keywords, only they also made them for sale and unique.
The URL should never have been user facing. The moment you need to type “https://…” you know something isn’t right. It’s like entering your $1,000-a-night hotel suite and finding that to get water running in the shower you have to remove the pipe from the sink. Wrench included!
I’m not 100% behind Google’s move, as it’s basically trying to hide the whole disaster rather than using its monopoly to alter it. But I guess hiding the plumbing behind a nice wall is a start, and it will make users more susceptible to using google.com as their address bar (as if that’s not already the case…)
URLs, by definition, need to be unique. You can’t have 2 sites called “Google”, because you’d never know which site you actually wanted.
Think of it like leaving the house/flat number off an address. There are loads of people who live on your road; you can’t just put the road name on your parcels and expect them to arrive at the right person.
And yes, the URL SHOULD be user facing, as it tells the user where exactly on the internet they are. Much like a house address.
[sarcasm] Heck, going back to physical building addresses, they’re pretty confusing. There’s no real purpose for a post code/ZIP code for the end user; they’re there for the post office. Maybe we should start taking those off addresses [/sarcasm]
OK, you missed the whole point.
The site(s) have an address; it’s called an IP address. By your metaphor, URLs are like giving each house its own unique name. So you have “Joe’s house” at 1st Elm St., and no one else in the WORLD can now call their house “Joe’s house”. They can maybe call theirs “Joe’s house in Los Angeles”, but the other Joe in LA now has to find another name, and so on and so forth. Technically, I shouldn’t need the URL. I could register “Erez’s site” in Google at address 123.456.789.000, and anyone who searches for me there will be directed to my site. Everything else should be the same; most sites already have relative navigation anyway. Linking to other sites will be done either directly, or via some API. You search, again, Google, for “Erez’s site” and you get a bunch of results, each including “link to here” which you then use as the href. It’s all possible, and none of it should be end-user facing.
“And yes, the URL SHOULD be user facing, as it tells the user where exactly on the internet they are. Much like a house address.” Right, because when you go to 24 Pine St. and go to the 6th floor and knock on “Sammy”, the door must say “24 Pine St., Queens, New York City, New York, USA, 6th floor, 4th apartment”, otherwise you won’t know where in the world you are right now! And you’ll have to have “24 Pine St., Queens, New York City, New York, USA, 6th floor, 4th apartment, bathroom” in the bathroom, and a “24 Pine St., Queens, New York City, New York, USA, 6th floor, 4th apartment, kitchen” sign in the kitchen. No, you don’t need to know “where on the Internet” you are. It’s ridiculous. You want to go to Amazon to buy a book. You type “Amazon” into the search bar, you get a list from whatever – Google, your browser, a combination of both. You click on Amazon, you go there. You know where you are. The site knows where you are. What information does the end user need to divine from “https://amazon.com/……”? We’ve become so used to this hack that we think of it as the only possible solution to the navigation problem, and it’s really a hack. A very beneficial one for domain squatters and registrars, but a hack nonetheless.
I still don’t get it – what’s the difference in “Erez’s site” and “erez-site.com” in that case?
Erez’s site is not unique, Erez-site.com is unique. You can also call your site “Erez’s site” and if the user searches for “Erez’s site” it will find both our results. Once a domain is registered, it’s game over, no one else can use Erez-site.com until it expires.
@Erez
Regardless, websites still need to be unique in order for the computer to know what site you requested. Regardless of whether you’re using a URL or an IP address, every website needs a unique identifier, much like every house needs a unique address.
URLs predate search engines, and were the primary way of navigating the early internet before search engines dominated. Early internet “directory” pages were popular, with many links in a phone-book style of listings. GeoCities used this sort of directory format, but obviously every webpage still needed a unique URL.
Sure, today URLs are largely not necessary in the same way they were 20 years ago. However, I think you’re misunderstanding the importance of unique addresses/domains and their associated directory structure. The internet cannot function in any capacity without unique identifiers for web pages, much like the post office can’t function without unique addresses to send mail to.
Computers, whilst pretty magical, can’t read minds and make assumptions that the “Erez’s Site” you want is the one it’s sending you to.
I think you need to do some more research on the fundamentals of how computers do file management before you come here moaning about how things don’t work the way you think they work. I think your misunderstanding of the importance of unique identifiers is the primary issue here.
“websites still need to be unique in order for the computer to know what site you requested”
That’s what IP addresses are for.
“URLs predate search engines”
Obviously. However, their existence shows that even with unique domains the end user still needs to use a search mechanism, so the need for the URL to be unique is now irrelevant.
“Computers, whilst pretty magical, can’t read minds and make assumptions that the “Erez’s Site” you want is the one it’s sending you to.”
Yes, but the actual concept of the unique domain name isn’t for the computer, it’s for the end user. The computer can navigate happily to https://tinyurl.com/whatever which redirects to https://whatever.com/whatever. And besides, it doesn’t really go to https://whatever.com, but to a name server that points to the IP address where whatever.com is.
Erez,
Either you severely misunderstand the problem and the need for unique URLs, or you’re not explaining your idea well. Can you explain how you want it to work from the ground up?
Portals like AOL/prodigy/etc back in the day or facebook/google/etc today are popular, but not fundamental. And a lot of us feel it would be a mistake to eliminate the means to bypass them (google may like the idea, but that doesn’t make it good for the web). You can go to “zoom.com” or “ebay.com” explicitly & directly without having to remember a computer address.
I’ve explained in another post why this isn’t sufficient.
Erez,
The IP address does not identify the site, much less the specific resource on the site! It only identifies the host. The URL (or something like it) is absolutely needed to identify specific resources. Sometimes a host only runs one site, but there are a lot of reasons it isn’t always the case. Even if we assume a 1:1 correlation, the IP address has a significant shortcoming because the resources become permanently tied to that host. You cannot move the site to a different host, unless you are willing to break all the existing resources when you move. Furthermore, website owners don’t own IPs to use indefinitely; if a hosting company shuts down, you lose your website. The more you look at the problem, the more you see why a level of indirection is necessary.
It’s absolutely not a hack. It’s essential not just to how the web works today, but the way any resource location system has to work. A resource location system would be useless if it’s not unique.
“I see you requested ‘make my day’, but there were 72,312 results. Here’s a funny cat picture, hopefully it’s what you were looking for.”
Something that works more the way you describe is freenet, where every resource is described by its hash value. Even a project like freenet still requires URLs, as does every resource location system, even though they’re not very human readable. Users cannot be expected to type these in; they’re completely dependent on pre-installed directories to find websites (the URLs are hashes). The design is more of a byproduct of the project’s anonymity goals than anything else, but you could build a freenet browser that hides them entirely. It’s got its own usability issues. Still, maybe the project would interest you.
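To give a feel for what those hash-based addresses look like, here is a throwaway illustration – nothing to do with freenet’s actual key format, just the general idea of content addressing:

import hashlib

# Toy content addressing: the "address" is derived from the resource itself,
# so it's globally unique but completely opaque to a human.
page = b"<html><body>Erez's site</body></html>"
address = hashlib.sha256(page).hexdigest()
print(address)  # 64 hex characters that nobody is going to type by hand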
No it isn’t! I can host many websites on one IP address. Just by browsing to an IP address you don’t have sufficient information to know where you are going.
In your world there would soon be 1 million sites called “Bank site”, of which 999,999 would be criminals trying to hack your data and 1 real site that nobody could find. And that would just be on “registrar Google”. On “registrar 2” you would only see 20 “Bank site” results, and all 20 would be different yet again, so you couldn’t even tell someone “it’s the first one”.
Your ideas are completely ridiculous and lack any understanding. Please explain how I could send someone a link to this exact article or your exact post with your system?
I’ve worked in and around the hosting world since the early 90s, and it’s been at least 2 decades since I’ve worked with a server that hosted more than one site where the individual sites had their own IPs. In the vast majority of cases these days, there will be one IP assigned to the server, which will be shared by all sites/hosts on that server – in that setup, without using domain names (or something equivalent), the server has no way of knowing which site a request is for.
So, at a minimum, the solution you propose would not be feasible until the entirety (or at least a majority) of the internet has moved to IPv6, because otherwise there’s no realistic way for every individual site to have its own IP address.
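To make that concrete: with name-based hosting, the only thing telling the sites apart is the hostname the browser sends in the Host header. A toy sketch – the hostnames are made up, and no real server is this simplistic:

from http.server import BaseHTTPRequestHandler, HTTPServer

# One IP, one port, several sites; the IP alone can't tell the server which
# site was meant, only the Host header can.
SITES = {
    "erez-site.com":  b"Welcome to Erez's site",
    "other-site.com": b"A completely different site on the same IP address",
}

class VHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = (self.headers.get("Host") or "").split(":")[0].lower()
        body = SITES.get(host, b"Unknown site: the IP alone doesn't say which one you meant")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), VHostHandler).serve_forever()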
Earlier you mentioned AOL keywords:
Pages in AOL did have unique IDs that they could be accessed by, without using keywords – the main differences were that they were even less human-readable than web URLs, and there was no way to find the “real URL” of content in AOL without being a developer.
And in practice, it was worse. If you want “erez-site.com,” but someone else already registered it, then with domain names you could at least get an equivalent under one of the many, many other TLDs. Not so with a system like AOL keywords.
The objection is not against hiding the “http://” (is there anyone regularly using ftp:// or gopher:// anymore?) but against the part following “.com/” or “.org/” or whatever.
Let’s be honest, it’s 2020. There aren’t many people using HTTP:// anymore; it’s all HTTPS://
I object to the whole concept of the URL. Besides, what difference does it make to hide either part or all of the URL? I’m looking now on Safari and it says “osnews.com”. What will the knowledge of “/story/131911/google-resumes-its-senseless-attack-on-the-url-bar-hides-full-addresses-on-chrome-85/#comment-10408323” give me? Why not replace the URL with the title altogether? Wouldn’t putting “Google resumes its senseless attack on the URL bar, hides full addresses on Chrome 85 – OSnews” in the address bar be more informative than anything else? That’s why there is a “title” in the HTML. Because when HTML was created, someone had the sense to realise that URLs are not human-readable.
Assume that you’re looking for a second hand car, and looking at the ads.
Which one would you prefer to see as the title of most classified ads?
1st alternative: car://mercedes/cla_200
2nd alternative: mercedes/cla_200
3rd alternative: mercedes
I say the 3rd is utterly useless in quickly providing information.
The 1st provides the information, but it can be made more effective.
The 2nd one looks like a sensible option from this perspective, doesn’t it?
OK, I understand that you operate under the premise that the URL is the only option for the site to convey information to the user. In that view, I would agree that it should be shown in its entirety. However, this is not the case, because, in your example, I wouldn’t use either of these car salesmen’s information choices, but would actually go to their competitor, who presents me with a nice sign that says the model, the year, some specs, the price, you know. Actual usable information. The URL is a Uniform Resource Locator. It’s a way to route your request to an IP address. It’s not meant for delivering information. And whatever use you may want to have for it, it’s still there, sadly; I can click on the address bar and view the full URL in all its UX-disastrousness.
There are multiple comments from previous articles on OSnews that show exactly what you need to know. I shall detail them for you.
a) The main thing you’re missing is covered in this comment: https://www.osnews.com
b) However it’s an extremely good idea to heed the advice mentioned in this comment: https://www.osnews.com
c) Also, for a general background/overview, this comment is excellent: https://www.osnews.com
You can easily link to another comment on the site without having to force the user to copy/paste URLs. Just because this specific site decided not to use any such solution does not make it gospel. And it would probably be much more durable for when osnews changes platform and, as a result, breaks all the hardcoded URLs on the page.
@Erez:
No, you really cannot. Of course you could generate a button that copies that URL for the user, or that generates a unique URL that brings the user to the content, but the only way to pass location data from one source to another is through URLs (technically URIs, but I am not going to go there for this topic).
Maybe you should research URLs, URIs and URNs and think about all the implications that your solution would have.
Erez,
I don’t want to focus on URLs not needing to be unique, because it’s clear that any argument down that path is going to be futile.
So the next point to discuss is whether URLs need to be fully visible and human readable. To be fair, that’s more of an opinion. In theory they don’t need to be. As Brendan pointed out, it’s very annoying when you don’t have a better idea of what the URL is pointing to.
Maybe you want URLs to look like text and less like a hierarchy? There are technical reasons that URL hierarchies are useful for web development, but in theory we could get rid of hierarchies altogether and just have the server look up every resource by title or something like that. It might look something like this…
“osnews.com/my favorite operating systems”
This could be made to work. Every request would need to be recorded and looked up in a database; it would require some work, but it’s doable. However, it’s not really clear if this is an article, a comment, a picture, a staff page, etc. They’re all going to be mixed into the same namespace. At some point hierarchies are useful not merely to the computer, but also for humans, and arguably this could be more descriptive:
“osnews.com/profile/alfman/my favorite operating systems”
Blogs are relatively simple, but huge websites (e.g. Microsoft) would be creating disorder on a massive scale if they eliminated hierarchies by clumping everything together into one huge namespace. So IMHO you could technically get rid of hierarchies, but you wouldn’t want to, because they are useful!
Let me ask you a rhetorical question: do you put all of your files regardless of what they are (work, school, personal, resume, music, photos, taxes, etc) into a single directory? Obviously the computer could do it. I think you’ll find that it is us humans that prefer order.
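Just to make the flat-lookup idea above concrete, here’s a throwaway sketch – the table, titles and paths are all invented:

import sqlite3

# Toy version of "just look everything up by title": no hierarchy, one table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE resources (title TEXT, kind TEXT, location TEXT)")
db.executemany("INSERT INTO resources VALUES (?, ?, ?)", [
    ("my favorite operating systems", "article", "/articles/4211"),
    ("my favorite operating systems", "comment", "/comments/10408323"),
])

# With everything in one namespace, a single title can match very different things:
for kind, location in db.execute(
        "SELECT kind, location FROM resources WHERE title = ?",
        ("my favorite operating systems",)):
    print(kind, location)  # both the article and the comment come back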
There’s another good reason for not hiding the URL. Some sites have such bad navigation that I find myself modifying the URL directly.
So what do you do if someone registers Erez-site.net (while you have Erez-site.com) and copies your site, except it does some phishing? You want the full info in the URL to prevent you from being hacked or ending up on the wrong site.
It’s the same with the dreaded Windows File Explorer defaults: by default it hides the extension, making you guess whether you’re seeing Erez.exe, Erez.bat, Erez.doc, Erez.xlsx, Erez.pl or Erez. It’s the first thing I switch off.
Also, if there is ambiguity in the text in the address bar, the only way to find the right page is to enter a search query in a search bar, thereby giving information to an advertising 3rd party for no reason at all.
Only the full URL will give you all the information; the title of the page is something that should be displayed at the top of the webpage.
Don’t fix what is working fine.
Last thing first: it’s not working fine, it’s just pretty much irreplaceable at this point. Same with IPv4 and HTTP/1 and whatever else sticks around not because it works fine, but because it can’t realistically be replaced.
Second, most phishing schemes don’t even bother to register anything that looks like the actual domain, and yet people do give their PayPal credentials or whatever to them…
Third, there’s very little reason, in a modern OS, to use file extensions. Most files on a Linux system don’t have an extension, and yet the system has no problem functioning as usual. You can have visual cues for whether a file is an image or a word file, a zip archive or a web page, and the OS will usually know what to do with it even if you don’t see the extension.
Fourth: all web pages are HTML/CSS/JS, so there’s no need to adopt the file-manager metaphor other than that it was a 1:1 representation of the way the original sites were stored. These days, a page being at foo.com/form/api/register doesn’t mean it’s in /var/www/html/foo.com/form/api/register, but the concept stuck because, as I mentioned above, it’s pretty much irreplaceable today.
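(About the visual cues I mentioned in the third point – this is the kind of content sniffing I mean. A toy sketch only; real desktops use libmagic/shared-mime-info rather than anything like this:)

# Toy content sniffing by magic bytes, to show the extension isn't strictly needed.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"PK\x03\x04":        "ZIP-based archive (also .docx/.xlsx/.odt)",
    b"%PDF-":             "PDF document",
    b"\x7fELF":           "ELF executable",
}

def sniff(path: str) -> str:
    with open(path, "rb") as f:
        head = f.read(8)
    for magic, label in MAGIC.items():
        if head.startswith(magic):
            return label
    return "unknown (fall back to extension or text heuristics)"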
Yes, the OS knows what to do, including Windows, but the HUMAN doesn’t!!
You see 5 files called Erez in File Explorer with a small icon beside each that hopefully tells you what type the file is. And the user doesn’t know which file to click to open it!!!!
Only the unique URL will tell you where you are in the internet world; some ‘user-friendly’ name won’t.
Stated as a fact, but completely untrue in reality. Although Linux (the OS) doesn’t need extensions, they are extremely common and are used by both users and shells/DEs to determine how to open files. Change a file extension from readme.html to readme.txt and it will most likely open in another program by default. Visual cues are very expensive in terms of performance because they require (part of) every file to be read. Visual cues also only work in a GUI, so doing a dir/ls tells you nothing.
There is a reason we use paths. For both organization and security, they have proven to be a great way for both humans and machines to work with our files. Every other solution (mostly search-based) has required very complicated systems while limiting its generic purpose and offering few benefits.
I’m starting to think you’re a troll.
Or maybe – I’m not sure I use the term correctly – a technology evangelist?
“However, I don’t think Google doing this single-handedly and on its own is a good idea; this should be a standards-based process, open to comments from everyone.”
This is not a functional element of the web – it’s a presentation feature of a browser.
There may be problems that come from doing this, hence the devs should be careful about pushing a change on to unsuspecting users. But it’s very much an area that browsers should be able to innovate independently, and as users we get to choose which browser matches our requirements.
OK, so throwing in the UNIX/Linux bit is garbage BS; we need to move forward, but Google shouldn’t do it. Thom, please take more than 5 minutes to write an article, and please be more articulate about it. The current one just comes across as ignorant.
Geck,
I’d also be curious what Thom has in mind.
I could see someone there arguing this as a “safety” feature for their users – the courts have routinely bought the stupid notion that manually changing a URL in the browser is actually “hacking” a website. Seriously! If people have no access to the URL, they can’t accidentally get themselves in trouble for hacking… and I can scarcely bring myself to say that with a straight face. What is the world coming to?!
Personally, I want easy access to exactly where I am on the net at any given time. Do I love URLs? No, but I don’t hate them either, unless they’re a bazillion characters long. But still, it’s not the end of the world. As far as I can tell, the vast majority of users seem to find their way around the net no problem. They typically navigate to a homepage of some sort and use links & buttons from there. What “problem” is hiding URLs, or parts of URLs, supposed to solve?
Some of you are very clearly passionate, with strong opinions about URLs and what users should/shouldn’t see. But it’s hard to see URLs as a problem rather than a preference here. I seriously doubt anyone here is held back in any way because they’re exposed to full URLs.
friedchicken,
I agree, I don’t think anyone is hurt by it being there, but it is about preference.
Since I do tech support for a lot of users, often on websites, I see this as potentially making my job harder.
The only reason to hide the rest of the URL by default (you will likely see/get it when you click the URL bar for copy/pasting purposes) is to clean up the UI. Most mobile browsers already do this by hiding the URL bar when you scroll down. For a desktop browser they most likely want to combine the URL bar with other functionality.
Personally, just like with file extensions and paths, I want to see exactly where I am by default, from most important to least important – so show the domain, but maybe not every query-string parameter.
I have to ask then, are desktop browser UIs really that in need of a clean-up? I can already neatly organize my tabs, extensions, favorites, navigation & other buttons. I’m not sure how it can be further optimized or minimized without going too far in that direction. If you’re hiding something the user is frequently unhiding, you went overboard hiding it in the first place. The URL bar already doubles as a search bar; what more does it need to be?
One thing I definitely don’t want is mobile UI needs or practices being shoehorned into the desktop, or unnecessarily jacking around with the UI. I swear, rather than addressing a legitimate need, some of this stuff feels more like scratching an OCD itch or satisfying a never-ending desire to `tweak` something. It seems a little, “Does this need to be fixed? – Doesn’t matter, fix it anyway!”-ish.
Oh my, from where to start?
People with no knowledge about why things are the way they are should first try to learn or ask someone to explain, I’m sure Alfman, avgalen, Brendan and many other guys here will be more than willing to help.
It really doesn’t help to advance our understanding when we have to fight against so many misconceptions and defenses of things built on a flawed basis.
URLs and paths are terrible, yes, they are, but complex things, more frequently than not, can’t be reduced to simplistic solutions.
I’m all ears, but I’ve never read or heard of a better solution for huge volumes of information than hierarchical approaches; even our libraries and museums are organized this way.
I may be in the minority here, but if you hide the actual URL, how can the ordinary Joe determine if a link is part of a phishing attack? I personally don’t like that idea for that reason alone… That’s unless there’s a way to reveal the true source going forward.
spiderdroid,
The article says they’ll show the domain, but hide everything after that.
So instead of the full URL you’d just see the domain.
www.microsoft.com/en-us/windows/ -> www.microsoft.com
www.microsoft.com/en-us/sql-server/ -> www.microsoft.com
www.microsoft.com/en-us/store/b/business?icid=CNavBusinessStore -> www.microsoft.com
Note that Chrome already hides the “https://” part…
So whichever page you are at, you’ll only see the domain, at least by default.
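Roughly the transformation being described, sketched purely for illustration – this is obviously not Chrome’s actual code:

from urllib.parse import urlsplit

# "Steady-state" display: keep only the host, hide scheme, path, query, fragment.
def steady_state_display(url: str) -> str:
    return urlsplit(url).hostname or url

print(steady_state_display(
    "https://www.microsoft.com/en-us/store/b/business?icid=CNavBusinessStore"))
# -> www.microsoft.com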
I really would like to see the W3C standardizing a domain index that could be downloaded/shared, with timestamp, title, subject, relative link, and a few more minor properties. It would also have, for specific subjects, tags for special communities (physics, fields of engineering, medical specialties and other scientific matters), subject to verification by trusted entities like ASME, IEEE, ISO, APS, or a specially created one. It could even be based around the SQLite format.
It does not need to be something mandated, but browsers could support it directly for sites that comply. It would save me a lot of time when I need to search for articles on the few sites I trust.
Things are growing exponentially, and it has become a pain to search using a general tool like Google.
As a side effect, it would let us all drop a good deal of our Google dependency.
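Just to make the idea concrete, a rough sketch of what such an index file might contain – every table and field name here is mine, nothing standardized:

import sqlite3

# Hypothetical layout for a per-site index file; all names are invented.
conn = sqlite3.connect("site-index.sqlite")
conn.executescript("""
CREATE TABLE IF NOT EXISTS entries (
    relative_link TEXT PRIMARY KEY,   -- path relative to the site root
    title         TEXT NOT NULL,
    subject       TEXT,               -- e.g. physics, engineering, medicine
    updated       INTEGER             -- unix timestamp, lets clients fetch only changes
);
CREATE TABLE IF NOT EXISTS community_tags (
    relative_link TEXT REFERENCES entries(relative_link),
    tag           TEXT,               -- e.g. 'IEEE', 'ASME', for vetted content
    PRIMARY KEY (relative_link, tag)
);
""")
conn.close()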
acobar,
I think Hyper-G from back in the 90s does exactly that. Mind you, I’d never actually heard about it until I read about it on osnews some years back…
http://www.osnews.com/story/29344/obsolesced-rise-and-fall-of-the-gopher-protocol/
I don’t know how it was implemented but apparently it had bidirectional indexing and could run distributed queries…
https://ftp.isds.tugraz.at/pub/papers/inet93.pdf
Really fascinating. I do not know how well it worked, but it sounds more organized & sophisticated than what we have now with modern websites. It has a parallel universe feel to it. Quoting Thom Holwerda from the link above:
The lack of (public) indexes leaves us highly dependent on search engines, and especially Google, which obviously isn’t a great situation to be in. Previously an osnews article discussed opening Google’s index, which doesn’t sound very realistic; however, it would allow for more innovation to occur in the search space.
http://www.osnews.com/story/130329/to-break-googles-monopoly-on-search-make-its-index-public/
Gopher is/was a more complete solution. What I wish for is a more “down-to-Earth” approach that could be easily integrated with what we already have. Many of the sites I go to already have a kind of search engine, but they are clunky and based on Google tech that is too generic. I would much prefer a browser-integrated search engine with some SQL capabilities. It would work this way:
* List of sites where I want to search for a subject (arstechnica, physicsworld, nature, etc.; can be taken from the bookmarks or added by hand);
* Subjects (physics, engineering, math, biology, etc.; again, can be selected from a curated list or added);
* Words in the title field (optional);
* Words in the abstract/summary/synopsis (optional);
* Authors (optional);
* Sanctioned by (optional: IEEE, ASME, ISO, etc.);
An option to save my search, or a curated result of it, locally. Done: no more Google for me and no more crap results.
The sites could run a simple application to generate the SQLite database and give it a timestamp, so that we would know if anything changed; it could even send just the changes since the last time it was checked, restricted to the subject submitted.
It is a simple solution for specific needs, instead of what we have. Even Elsevier would not be against it.
Very close to what we already have to do in libraries.
acobar,
Ah, so you’ve come around then? 🙂
Jan 2019…
https://www.osnews.com/story/129315/google-takes-its-first-steps-toward-killing-the-url/
I agree, too many sites do this. Even here, osnews used to have a comment search feature, but with wordpress it got dropped. Google and others focus too much on free-text searching, which has its place, but I’ve longed for an SQL-like web search capability. It’s infeasible for me to crawl all of the web myself, but if I could somehow obtain the indexes, I would build a structured search engine for them!
You’re really attached to building on an SQLite database, it seems 🙂 It’s a nice self-contained database and works great in some projects, but to be honest I would probably criticize it as a data interchange format for being overly complex & overkill. I work a lot with data feeds, and trivial file formats are often the easiest to work with.
Alfman,
Like what? JSON, XML?
It would still need some kind of field conformity to be truly useful. I hinted at SQLite because many apps already have it integrated, and it is way easier to mix data from it than it is with structured text. Firefox and Chrome (and derivatives) already use it (well, the same is also true of libraries that process JSON or XML).
Alfman,
Why not use a bit of everything?
Just hear me out… the values could be encoded with ASN.1, wrapped in a JSON structure, converted to JPEG (lossy to save on space, but not so lossy as to make it illegible), embedded into a DOCX XML document, and of course zipped, uuencoded, wrapped on an 80-character boundary, and stored in new DNS resource records to take advantage of DNS’s existing infrastructure and scalability. Or, in the event that one’s DNS provider doesn’t provide enough space or custom resource records, they can just store the DNS zone file in an SQLite database and use rsync to transfer it on a new IANA-assigned port. Or if that doesn’t pan out, maybe github or amazon s3 could be used instead. Maybe it would be unwise to be reliant on Microsoft and Amazon… a possible alternative could be to create an extension for bitcoin’s blockchain and store the SQLite database in there. This would make it relatively secure.
Anyways, we’ll need a browser extension to handle this back-end stuff, but only at first; eventually the hope is that the W3C could approve it as part of the next HTML standard.
Hopefully I haven’t missed anything important…feedback welcome!
I believe this is being done to fight phishing attempts. Some phishing attempts essentially put in fake domain names to fool users. This ensures that the domain name cannot be hidden from the user.
This is not a web standard, so there is no need for Google to consult on this. This doesn’t change the way the web works. It doesn’t even change the way the URL bar works, just what is displayed in there.
I think Safari on the iPad already does this by the way.
@Alfman
Thanks. Still, that doesn’t address the issue, as they already removed the protocol portion. I believe it just stupefies the next generation, and I’m getting older and I’m averse to change unless it’s totally required. How did you insert those quotes?
spiderdroid,
Yeah, I don’t feel removing the full URL really helps anyone at all.
Ironically, something that actually is a problem is NOT fixed by Google’s changes. Consider:
microsoft.com.services.xyz.net/login.html -> microsoft.com.services.xyz.net
The problem is that DNS goes right to left (subdomains are added to the left), whereas mentally people will parse it left to right. So I would say probably a lot of users could be tricked this way.
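A toy sketch of the mismatch – note that a real check has to use the Public Suffix List (two labels aren’t enough for things like .co.uk), so treat this purely as an illustration:

# Naively take the rightmost two labels as "the site" – which is roughly what
# the network sees, and not what a human skimming left to right assumes.
def registrable_domain(hostname: str) -> str:
    labels = hostname.lower().rstrip(".").split(".")
    return ".".join(labels[-2:])

print(registrable_domain("microsoft.com.services.xyz.net"))  # -> xyz.net, not microsoft.com
print(registrable_domain("www.microsoft.com"))                # -> microsoft.com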
You need to use blockquote as follows:
<blockquote>My quotes rule!</blockquote>
I’ve suggested osnews add it to the FAQ; hasn’t happened though… Thom, can we get this up there?
BTW, anyone else noticing that osnews is extremely laggy today? I’m intermittently seeing requests on the order of 20 seconds and even some timeouts. I’m able to ping the mykinsta server (osnews is hosted on a google cloud hosting provider) during these delays, so it’s not network related; something on the shared server itself is causing these delays.
I know why: Thom just implemented part of the proposals and directions you gave me. It is going to be the best standard of them all! It is a winning strategy, the absolute best! Very stable, let’s make the Internet great again!
😉