We strive to ensure choice and transparency for all Chrome users as they browse the web. Part of this choice is the ability to use the hundreds of thousands of extensions available in the Chrome Web Store to customize the browsing experience in useful and productivity-boosting ways. However, we continue to receive large volumes of complaints from users about unwanted extensions causing their Chrome experience to change unexpectedly – and the majority of these complaints are attributed to confusing or deceptive uses of inline installation on websites. As we’ve attempted to address this problem over the past few years, we’ve learned that the information displayed alongside extensions in the Chrome Web Store plays a critical role in ensuring that users can make informed decisions about whether to install an extension. When installed through the Chrome Web Store, extensions are significantly less likely to be uninstalled or cause user complaints, compared to extensions installed through inline installation.
Later this summer, inline installation will be retired on all platforms. Going forward, users will only be able to install extensions from within the Chrome Web Store, where they can view all information about an extension’s functionality prior to installing.
Am I the only one who’s assuming this will eventually allow Google to remove all adblockers from Chrome?
https://www.youtube.com/watch?v=ztVMib1T4T4
Always nice to see Google giving Mozilla a helping hand. (Firefox still allows inline installation, but mitigates the risk by having more human oversight over what extensions they sign.)
That said, this seems to only affect the APIs used for triggering an extension install from within a website, so I’m sure this will just push such extensions to encourage people to add --enable-easy-off-store-extension-install to their Chrome launcher and then install the extension via the offline workflow for unsigned extensions.
…and if that goes away, we’ll probably see ad-blockers that run inside userscript hosts, daring Google to restrict all userscripts.
Edited 2018-06-13 02:03 UTC
Firefox, and now Chrome. We expect this from closed/proprietary software, but it’s disappointing to see open source browsers going this route now too. This is so short-sighted and goes against the principles of the FOSS philosophy. No users were asking for forcefully imposed restrictions in either case. Sideloading should be a choice.
https://developer.mozilla.org/en-US/Add-ons/Distribution
This disgusts me; third-party innovation should be a right, not a privilege. At least with open source, third parties can remove these anti-features and publish unofficial owner-friendly versions like Waterfox and SRWare Iron.
https://www.waterfoxproject.org/en-US/
https://www.srware.net/en/software_srware_iron.php
Ironically, we have to sideload the full browser now just to allow sideloaded extensions. However, sideloading browsers is another right that’s under attack by some vendors.
What happened here? Computers used to be about promoting innovation and empowering users, but we’re making a decidedly dark turn by stripping away our choices and deploying technology in ways that restrict us and hold back innovation in order to control us.
Strictly speaking, Google Chrome is not open source, it’s proprietary. Chromium is open source, but Chrome is not.
ahferroin7,
It’s possible that I’m mistaken, but with this announcement coming from chromium.org, I assume it means that the open source chromium version will also become locked down going forward.
https://blog.chromium.org/2018/06/improving-extension-transparency-f…
It makes me wonder whether Linux distros will push out browsers with user sideloading prohibited.
Ubuntu’s repos only offer Chromium; you actually have to add a third-party repo to install Google’s proprietary Chrome browser.
https://askubuntu.com/questions/510056/how-to-install-google-chrome
If anyone has answers, please weigh in.
The thing is though, most people really don’t need to be able to sideload extensions from an inline link on a website, which is what this is disabling. In fact, that method of sideloading is a common secondary component of social engineering attacks, which is most likely why Google is doing this. You can still side-load via direct injection into the profile, or by manually loading things on the local system; you just won’t be able to click a download link and directly install stuff.
Realistically, I doubt that most distros will care; it’s technically improving security, and not likely to impact the vast majority of their users.
ahferroin7,
So by that reasoning, would you be in support of Google disabling Android sideloading and killing off alternatives like F-Droid?
There’s far less incentive for them to do so on Android though. It’s pretty difficult to trick a person into granting a special app permission or into going through sideloading via ADB, whereas Chrome just requires the person to click ‘Yes’ on one dialog that is by no means scary.
Realistically though, most people wouldn’t care on Android either; just like with Chrome, it’s a reasonably small population who use such things.
Note that I’m not advocating that they should do this. Personally, I’d much rather have something you have to turn on to allow sideloading at all (they should have had this from the start), plus a much scarier dialog when you try to side-load something. I’m just stating that realistically, it probably won’t affect the vast majority of their user base negatively.
ahferroin7,
So you, WorknMan, and I are all in agreement on this point.
Edited 2018-06-13 14:53 UTC
Translation:
1. Starting today, a random site out on the web which calls chrome.webstore.install() will trigger a redirect to the corresponding Chrome Web Store page rather than directly popping up the “Do you want to install this Google-signed extension?” dialog.
2. Starting September 12, 2018, users will be sent to the Chrome Web Store to confirm they actually want any extensions previously installed via chrome.webstore.install().
3. In early December 2018, the chrome.webstore.install() JavaScript API will be removed (see the sketch below).
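For anyone who never used it, here is a rough sketch (plain page JavaScript) of the flow the timeline above retires. The item ID and button ID are placeholders rather than real values, and chrome.webstore.install() only ever accepted the page’s declared Chrome Web Store item, so this was never a path to arbitrary or unsigned code.

```js
// Sketch of the now-deprecated inline install flow. The page first had to
// declare which Web Store item it was allowed to offer, e.g. in its <head>:
//   <link rel="chrome-webstore-item"
//         href="https://chrome.google.com/webstore/detail/<ITEM_ID>">
// The item ID and element ID below are placeholders.
const ITEM_URL =
  "https://chrome.google.com/webstore/detail/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";

document.getElementById("install-button").addEventListener("click", () => {
  if (window.chrome && chrome.webstore && chrome.webstore.install) {
    // Step 1 above: instead of showing the in-page install dialog, this call
    // now just sends the user to the extension's Web Store listing.
    chrome.webstore.install(
      ITEM_URL,
      () => console.log("installed"),
      (error) => console.log("not installed:", error)
    );
  } else {
    // Step 3 above: once the API is removed, a plain link to the listing is
    // all a site can offer.
    window.location.href = ITEM_URL;
  }
});
```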
Edited 2018-06-13 19:31 UTC
Chromium is little better. Remember when Chromium was downloading blackbox binaries that could listen in on people’s microphones on Debian?
Chromium is /not/ some utopian version of Chrome.
Really, open source software has to protect its reputation as well if it wants end users. Having malware sideloaded is not a good thing for that reputation.
Basically, instead of being annoyed, see if you can design a more flexible system that meets both requirements: protecting users who want malware protection via third-party-vetted extensions, and serving those who want to sideload.
Really, I think those two are in fact mutually exclusive. To prevent malware from loading, all executable extensions have to be vetted by someone.
This might be a case where Firefox and Chrome need two brand names: one for those who want to sideload and one for those who want the security of third-party vetting. Of course, nothing says Mozilla/Google could not make both.
It’s really not that hard. Just make side-loading a hoop you have to jump through to turn it on, such that nobody’s ever going to do it without having to look up directions. And when they turn it on, make them ‘OK’ about three dialogs that say, ‘WARNING: THIS IS NOT A GOOD IDEA!!!!’
At that point, I think app developers have done their part: sufficiently protecting users from themselves while also giving users the freedom to do what they want, and sufficiently warning them of the potential dangers.
IMO, unless they’re on an enterprise network or something, it’s ultimately up to users to decide what to do.
Edited 2018-06-13 12:46 UTC
WorknMan,
Group policy handles this well, although I’m not sure what the Chrome browser supports in terms of group policy.
For a commercial application I work on, the local user settings can be overridden by registry keys that can be locked down if desired in an enterprise setting. But we certainly don’t cripple our product for everyone else!
Think closer: what you describe has already been tried and was voted down for very good reasons.
https://support.mozilla.org/en-US/questions/1101877
Malware did start turning validation off automatically.
https://www.theregister.co.uk/2016/04/04/top_firefox_extensions_can_…
Prefab attack tools started appearing three years back that automatically disable extension validation checks.
So we are at the point where, for security, having an on/off switch for validation is not an option. General users in most cases don’t need the ability to run unsigned extensions, so there is no point putting the largest percentage of users at risk.
This is why I said it needs to be split. Expert/power users, who should have the skill to validate what their browser is loading, have a use for the feature. So there should be an expert/power-user version of Firefox/Chrome and a general-user version of Firefox/Chrome, possibly under two different brand names.
Putting a control anywhere just creates a switch the hostiles are going to exploit.
Alfman, the reality here is that the bugs asking for this are not on the public Bugzilla but on the security list.
So far you have not suggested a workable solution. Start thinking: you have hostiles who want to embed themselves in your browser to collect your bank details and other things, and who will work out ways of changing application settings to let them do that. You have to make that as hard as possible.
This now means you need the browser split in two. Users not needing to sideload unsigned extensions need to be split from those who do need to sideload unsigned extensions, because loading unsigned extensions is a heck of a security risk.
oiaohm,
On the contrary, having a switch satisfies the needs of both crowds. As we’ve said, hide the option so that only people looking for it will enable it.
Edited 2018-06-13 15:42 UTC
It’s not just that. Forcing signing was also a response to malware with an external component which was using shaped windows, programmatically overlaid and positioned, to fool the users into thinking that Firefox itself was guiding them through the process of enabling the malware extension.
Attackers are looking for the option and will code it into their sideloading solution if it is possible.
ssokolow found what I could not remember the location of. Firefox did split in two when they removed the ability to install unsigned extensions in branded forms of their browser.
https://wiki.mozilla.org/Add-ons/Extension_Signing#Unbranded_Builds
The unbranded builds can be installed next to the branded Firefox builds. In the unbranded builds, turning extension signing off is an option. If you are doing banking and the like, having your extensions validated is in most cases a good thing, and that is safer in the branded form of Firefox, where extension signing checks cannot be turned off.
Now, if you are testing out some new prototype extension as a developer, you will be using the unbranded forms. If you need to sideload something that Mozilla will not approve (OK, this could be really dangerous), again, use the unbranded form, and if harm comes to you it’s your fault.
This is a fairly good compromise, particularly when you remember that in 2015 attackers started sideloading hostile extensions.
Yes, Mozilla has provided another option beyond the prior on/off configuration switch. Of course, the way Mozilla has done it, you have to intentionally install particular versions, which are intentionally not heavily advertised, to use unsigned extensions, so you should know what you are getting yourself into. It’s not like Mozilla has made it absolutely impossible to use unsigned extensions with Firefox.
oiaohm,
I used to run the dev versions, but I’ve stopped running the dev builds due to broken updates. A better way to handle it would have been to have a switch and simply not support users who have sideloaded extensions enabled; that approach would cover everyone’s needs better without requiring users to get developer builds. We should aim for secure by default, but respect owners’ wishes to customize.
Then again my opinion stems from a firm belief in openness where owners are in control. I strongly oppose a future where owners can’t do things on their own devices because someone else holds the keys. If you don’t have this belief in openness like I do, then of course you’ll be more amenable to restrictions like these.
Edited 2018-06-14 13:28 UTC
Chrome will be forced at some point to remove its switch as well. Attackers have gotten very good at changing switches without users noticing.
Sometimes the right choice is for the user to decide at installation time whether particular features are on offer or not.
Really, it’s foolish to attempt to cover everyone’s needs with one program when the needs are in fact mutually exclusive.
Really, have you been able to describe a switch attackers could not modify? Please do not say the Chrome command-line one, as there are already attacks that get past that. Firefox’s default of not allowing it has so far not been defeated.
Now, requesting an unbranded build released at the same versions as the stable release, for development and the like, would be a valid request in some ways. Drop the switch idea unless you can design something attackers are not documented as defeating.
Alfman, really, you want to blame the developers without considering the problem they have in front of them.
If a web browser becomes known as a likely way to get your bank account raided, how long will it keep market share? So security is a serious problem.
Of course, the issue with development versions breaking and the like is a different problem from the question of allowing unsigned extensions. For developers, could there not be an unbranded build at the same version as the branded release, one that follows the branded versions in auto-updates? Thinking about it, unbranded builds at the same versions as the release are on the Mozilla servers, just without aligned updates.
Basically, having the unbranded builds fixed up to do this would provide a route for everyone to legally have what they want while making attackers’ lives as hard as possible.
oiaohm,
I’m sorry, but it is absolutely wrong and disingenuous to claim that these are mutually exclusive. That’s been the whole point since the first post: what I’m asking for gives everyone what *they* want on *their own* devices. What you are asking for gives you what *you want* on *my device*. But why should you or anyone else have a say in what goes on on my device?
Edited 2018-06-14 14:04 UTC
Well, if they’ve compromised your system to the point where they have access to the switch, why the hell would they bother to turn it off? At that point, they can do whatever they want.
Chrome does it via command-line switches like --enable-easy-off-store-extension-install, which are harder for malware to reliably modify, since users can launch the browser through various avenues and the switches are ignored when externally opening a new tab in a browser instance that was launched without them.
Mozilla went the “different editions” route. Setting the xpinstall.signatures.required about:config key to false is ignored in “branded” builds of the stable and beta channels, but it’s obeyed in the following release channels:
ESR (Extended Support Release for organizations, ie. oldstable):
https://www.mozilla.org/en-US/firefox/organizations/
Unbranded builds for the stable and beta channels:
https://wiki.mozilla.org/Add-ons/Extension_Signing#Unbranded_Builds
Developer Edition (ie. alpha channel):
https://www.mozilla.org/en-US/firefox/developer/
Nightlies:
https://www.mozilla.org/en-US/firefox/channel/desktop/#nightly
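For concreteness, a sketch of what the about:config key mentioned above looks like as a pref line; the same key can also be set from a user.js file in the profile directory, and it only has an effect in the builds just listed:

```js
// user.js in the Firefox profile directory (equivalent to flipping the key
// in about:config). Branded release/beta builds ignore it; the builds listed
// above honor it.
user_pref("xpinstall.signatures.required", false);
```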
Honestly, if you have to remove the option because of how persistent and clever attackers have gotten, that’s a pretty good balance to strike: signing is enforced in the two editions average users will gravitate to, but not in the unbranded versions of them, nor in the versions targeted at organizations or developers.
Edited 2018-06-13 19:36 UTC
Except that we’ve spent years teaching users to ignore SSL errors, ignore UAC warnings, and pretty much all other “Are you sure you want to do this?” dialogs.
In fact, the standard method for dealing with those pesky UAC prompts? Turn off UAC! People on this site, who theoretically ought to know better, turn it off all the time, and brag about it.
Security prompts have been made meaningless.
Yeah, that’s why I say… make several of them in succession, with the last one being a giant, ‘HEY DUMBASS….’
Except that dialogs never worked, ever, to prevent unwitting users from doing harmful things on their computer.
Dialogs are these things in Windows that annoy you until you make them go away.
It’s the same thing with Android’s old permission system. Another dialog that you grudgingly accept when installing an app.
Yes there are people who are nazis about it or people who apply their common sense and actually try to comprehend what is going on, but these are not the users that need protection in the first place.
The struggle between freedom and user protection is real, and while I am all for freedom, there is no easy answer to it (certainly not “let’s just slap another dialog on it”).
Edited 2018-06-13 21:57 UTC
Ford Prefect,
No need to get fancy: just disable it by default, and let users who care go into about:config to change it. Done.
Not for nothing, but I was a kid once and computers then had virtually no protections from user error. The computers did what you asked whether it was dangerous or not. You know what, things worked out fine. I learned how to fix things and how to be responsible. I also learned how to modify them to make them better. If it weren’t for the unhindered low level accessibility that I benefited from, my skills would be largely stunted today.
I think protecting people against their will is actually doing them and society a larger disservice than we are admitting. These policies that make it increasingly difficult to go under the hood ultimately increase the knowledge gap between laymen and specialists. We complain about people being dumb, yet we’re guilty of creating the technology that keeps them dumb.
Edited 2018-06-13 22:58 UTC
The people who need this protection are the ones who aren’t interested in decreasing their knowledge gap; they’re interested in completing a task or visiting a site or paying their bills online or whatever. These people should be protected precisely because they A) have no interest in taking additional steps on their own, and B) because of A are a likely avenue of attack. Not doing anything, or telling people to “learn” isn’t going to solve anything.
echo.ranger,
…which is exactly why we are in favor of disabling it by default but allowing those who explicitly search for it to turn it on. It’s the best solution, satisfying the needs of both types of people. It’s like sideloading on Android: normal users benefit from the protection of the app store, but power users who need & want more control can change the policy on their own devices.
Side-loading is different because it’s an OS-level setting that the OS can reliably deny applications the ability to programmatically toggle.
Anything Firefox can do, another Win32/POSIX/etc. app can do because they both have the same privilege level when it comes to available persistent storage APIs.
(TL;DR: Android runs apps in a sandbox and stores the sideloading setting’s state outside that sandbox. To do the same for Firefox, it would have to run and store its data in an execution context that legacy “can mess with anything in the user account” Win32/POSIX applications can’t touch.)
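To make that same-privilege point concrete, here is a minimal Node.js sketch, assuming a typical Linux profile location (Windows and macOS paths differ): the signing preference is just a line in a plain text file that any process running as the same user can read, and could rewrite just as easily, which is exactly why branded builds ignore it.

```js
// Minimal sketch: Firefox's prefs live in prefs.js inside the profile, an
// ordinary user-writable text file made of user_pref("name", value); lines.
const fs = require("fs");
const path = require("path");
const os = require("os");

// Typical location on Linux; an assumption, not a guaranteed path.
const profilesDir = path.join(os.homedir(), ".mozilla", "firefox");

for (const entry of fs.readdirSync(profilesDir)) {
  const prefsFile = path.join(profilesDir, entry, "prefs.js");
  if (!fs.existsSync(prefsFile)) continue;

  const line = fs
    .readFileSync(prefsFile, "utf8")
    .split("\n")
    .find((l) => l.includes("xpinstall.signatures.required"));

  console.log(entry, "->", line || "pref not set (defaults to true)");
  // Writing the file back with the pref flipped would be just as trivial for
  // any other process running under the same user account.
}
```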
Edited 2018-06-14 22:38 UTC
ssokolow,
Those attacking browsers are no different from pickpockets. They want to take your information without you noticing.
https://www.theregister.co.uk/2016/04/04/top_firefox_extensions_can_…
My link from before already counters the idea that an attacker on a compromised system can do whatever they like straight away.
A compromised system does not mean all protection systems are off.
1) Compromise the system.
2) Establish persistence without being noticed.
These are basically the first two steps of an infection. Yes, you can have temporarily infected systems where the attacker is only able to achieve the first step and their breach is undone every reboot.
Signed extensions help prevent compromise, and signed extensions without an in-application off switch make establishing persistence harder.
Having to install a program to change the setting means having to get past anti-virus that sits deeper down in the system.
PUA / PUP (potentially unwanted application / potentially unwanted program)
Yes, anti-virus products have these features these days. It is simple for a security system in the operating system to detect an application being installed that is rated less trustworthy than some internal value, so it can more effectively warn users and make attackers’ lives harder.
Power users can install a different version. This is the reality. Normal users needing the protection don’t want to need the skills to check whether unsigned sideloading is on or off. Think about a Chromebook in developer mode: it displays a warning message to the user on every boot. Power users of browsers don’t want to put up with that either.
If you explicitly search for it on the Mozilla site you will find the unbranded version that does what some power users want. Please note that not all power users want the ability to sideload unsigned extensions either, because some power users only use Firefox and the like for banking and similar actions where they want everything as secure as possible.
The group wanting unsigned sideloading and the group needing protection are mutually exclusive.
Ask both groups what they want, and when you attempt to make one product that satisfies both, you are in hell.
The group wanting protection:
1) Don’t want to need any knowledge to have the protection.
2) Don’t want to see warning messages they have to fix if the protection has been disabled; they would prefer it simply could not be disabled.
3) A large percentage of the group wanting protection are not power users, so they don’t have much knowledge to fix anything.
The group not wanting the protection:
1) Don’t want a warning message about the weakened security every single time they run their browser.
2) Don’t want to have to do per-device unlocking, which would most likely involve adding more DRM code to Firefox.
Android devices where you can unlock the boot loader by contacting the vendor and getting a per-device ID code mean an attacker cannot simply do this en masse. This cannot be done with Firefox. Per-device unlocking is the other way to make things as hard as possible for the attackers; two versions of the application is way nicer than that.
The idea of a hidden control option does not cut it, as the option cannot really be hidden; the choice has to be in your face, and the difference has to be in your face.
Requiring a different installer is an in-your-face choice that the group not wanting the protection only has to agree to once. No message to bug users who don’t have the skill to fix it, and no message to bug users who don’t want the protection.
The reality is that the best solution here is to split into two differently named products built from the same source. The name tells you the protection level. Splitting in two also covers the case where a power user wants protection for some uses and not for others.
Remember, the message informing the user that protection is off, if you did it as an in-program option, cannot be something they can automatically click through. So it is highly annoying; way more annoying than having to install a different version of the program.
Alfman, trying to do this as one application, you are going to either fail to properly protect the users who need protection, or annoy the living hell out of power users every time they run the program without protection. This is truly a mutually exclusive problem, and the happiest solution for both groups is two applications. Of course you cannot make everyone 100 percent happy, so we have people like you who don’t really understand the problem, don’t attempt to fix it, take an overly simplistic view of it, and so complain.
Edited 2018-06-14 23:28 UTC
oiaohm,
No problem; a simple solution would be to modify the title or something to make it clear it’s enabled, and to make it easier to turn off sideloading than to turn it on. Done. No drama or confusion for anyone. This would just be for extra security, since normal users will not enable it in the first place.
Not done. Extensions in Firefox can alter the title. Again, you are suggesting something that requires educating normal users. It would have to be a majorly in-your-face thing.
It’s not normal users who enable it. It’s attackers targeting normal users who enable it, and they will attempt to work out ways to hide from the normal user the fact that the setting has been changed.
Sorry, the argument that the normal user does not enable it does not fly. How are you going to make sure attackers targeting normal users don’t enable it without the normal user becoming aware?
oiaohm,
I’m not sure if you are merely playing devil’s advocate, or you genuinely didn’t understand the point made by WorknMan and myself. Either way, it’s worth taking a closer look to see why logically he’s right.
We’re all in agreement that Mozilla should prevent extensions from being able to modify the sideloading setting. Now assume your scenario is true and that an extension managed to change the mode anyway. We can logically deduce that these other facts are true:
1. Mozilla’s security audits failed and hackers succeeded in getting their malicious extensions signed by Mozilla, which unsuspecting users then downloaded and executed.
2. This malicious extension somehow succeeded in a privilege escalation attack that breached Firefox’s security enforcement.
3. By this point the battle is lost and Mozilla/Firefox/the user are already compromised, regardless of whether the malware enables sideloading or not.
In other words, sideloading didn’t cause the vulnerability; faults in the Mozilla/Firefox security model did.
Is it a problem? Yes, of course, but it clearly wasn’t caused by sideloading, and there are plenty of mischievous things that attackers can do even if sideloading isn’t an option. You used the example of hackers snooping on users’ bank details to illustrate a failure. OK, but that can happen regardless of the presence of a sideload option. A hacker can add untrusted root/host certificates and then start conducting man-in-the-middle attacks against HTTPS connections; if you think like a hacker, the possibilities are endless.
I was hanging around the relevant Bugzilla threads and other discussion channels when the decision was made.
One of the several threats it was intended as a solution to was an arms race with bundleware that wasn’t nasty enough to get classified as malware by removal tools but was persistently circumventing Mozilla’s attempts to force informed consent for the browser extension.
(That’s where the aforementioned “shaped-window overlay that fools the user into thinking the browser itself is guiding them through OKing the side-loaded extension in the Mozilla-provided prompt” example came from. Mozilla actually implemented a consent prompt similar in concept to a side-loading toggle before going all the way to enforced signing.)
ssokolow,
Wouldn’t all of Mozilla’s security settings be vulnerable to the same vulnerability? Whatever steps Mozilla took to fix those could be applied to this setting as well.
I obviously wasn’t there, but I would have suggested a modal dialog box separate from the main window that extensions aren’t allowed to modify.
Edited 2018-06-15 03:40 UTC
The bundleware in question was using Win32 APIs to inspect Firefox’s windows and then programmatically position completely separate top-level, always-on-top windows (shaped like arrows and such) over Firefox’s own windows in such a way that they looked like they were part of them but Firefox didn’t know the difference.
Mozilla decided they didn’t want to join Punkbuster in that kind of cat-and-mouse game so, instead, they just came up with the current system, which lets them ban bad actors from access to layperson installs with no toggle to circumvent it.
After all, it’s much harder to convince a layperson to install a whole new browser… especially when that browser is intentionally distinguished from the usual one. That sort of thing tends to prompt calls to whatever computer geek they know.
Edited 2018-06-15 03:41 UTC
ssokolow,
If that’s true, it doesn’t sound like they actually fixed the root vulnerability. Merely removing the sideloading option doesn’t fix the vulnerability that hackers would have used.
It’s unfortunate that they didn’t just use a modal dialog box for confirmation.
Unless you’re using “modal dialog box” to mean something far more specific than what I understand “modal” and “dialog box” to mean from my experience developing GUIs, it wouldn’t help.
ssokolow,
A browser modal dialog would mean that no other browser window can cover it up and no extensions would run while the dialog is active. The dialog could be made full screen if you want.
I’m willing to entertain any scenarios you want but ultimately I refuse to believe that smart people cannot come up with a mechanism that confirms A) the user’s deliberate intent B) that only sufficiently smart people will pass.
The problem here is that under Windows and OS X, at least, there is no way to absolutely confirm that an action has come from the user’s keyboard/mouse/approved input device. This is why AutoHotkey can be used so successfully under Windows without any special approval.
So there is no way to confirm a user’s deliberate intent. This does raise an interesting problem for all those so-called online agreed-to contracts: how can they confirm the user was in fact there, and that it was not something automated on the person’s system doing it? Before you say captcha:
https://nakedsecurity.sophos.com/2017/11/01/now-anyone-can-fool-reca…
Yep, captchas are being regularly defeated.
Alfman, the reality, like it or not, is that there is a major flaw in the operating systems people like to use, with no true workable fix.
This is the problem: the features power users love for automating stuff, precisely because they don’t need any special permissions, are ideal tools for hostiles to change settings and install extensions into browsers and the like in ways users may not notice.
oiaohm,
Captchas are inherently insecure, but you’re changing the parameters of the problem. We were talking about an application protecting security settings from untrusted code running in a sandbox that it controls. This is quite a bit different from a webserver sending a “captcha” prompt across the internet to a computer which is neither in a sandbox, nor under the developer’s control.
Edited 2018-06-15 14:36 UTC
In that case, I guess we’re at an impasse. All I can suggest is that reading enough of The Old New Thing might help you to see things from my perspective.
Edited 2018-06-15 19:49 UTC
ssokolow,
There are thousands of things a hacker can do. Sure, an attacker could additionally enable sideloading, but that’s a consequence of the vulnerability rather than a cause of it. Like I said before, the attacker could install fraudulent SSL certificate authorities and intercept all HTTPS traffic. A hacker could enable WebRTC screen/webcam/microphone sharing against the user’s will. A hacker could install new proxies, etc. A hacker could download/install software against the owner’s will, even an unofficial version of the browser. Even more things can be done outside the browser.
Pretending that browsers can be safe under a compromised-system scenario is wishful thinking!
I realize it sucks, particularly on operating systems where all applications inherit the same user permissions. This is why I’ve always been a promoter of sandboxing to give users control over what applications can see/do. Obviously Windows has lagged behind a lot in this area, but I think that’s a discussion best left for another time.
Edited 2018-06-15 21:14 UTC
You’re ignoring the “malware detection considers it acceptable” aspect of this situation.
This particular case refers to bundleware providers who are trying to exploit the gap between what the malware scanner considers acceptable and what Mozilla does.
If they go too far, then they get put on the malware scanner’s list and the Google Safe Browsing list and lose.
(In other words, the kinds of companies that are now poring over the GDPR looking for ways to lawyer loopholes so they can continue their existing dirty tricks.)
Edited 2018-06-15 22:08 UTC
ssokolow,
I’m not really ignoring it, only pointing out that Mozilla hasn’t solved the problem, even though I don’t think it’s their fault. You are right that we need better support for software isolation at the OS level. For its part, Microsoft has adopted app-level isolation, but only for Windows Store apps. They seem to have concluded that Win32 apps are a lost cause in terms of isolation, which is truly unfortunate because it makes it very difficult to protect software from userspace malware when they share the same permissions.
The reality: you only have to look at Android. For malware installed without defeating the OS security on Android, the ability to mess with another application’s settings is not there. The ability for a non-approved application to remote-control another application is not there on Android either. So Firefox on Android could in theory offer a user-controlled flag for signed and unsigned extensions; Android is one of the few platforms Firefox runs on where allowing unsigned extensions by user approval could be done.
The reality is that the lightly-compromised-system scenario is so bad because the default security in the operating systems people like to use is so crap.
Mozilla removing unsigned extension support is about controlling electronic pickpockets as much as they are able. The truly correct fix would be Microsoft with Windows and Apple with OS X improving their general application security. Under Linux it will take Wayland, Flatpak and portals becoming more normal and better implemented before applications with sandbox-breaking permissions stand out. So this fault being addressed properly is a few years off, at least on Linux; on Windows and Mac OS, who knows when it will be addressed.
Edited 2018-06-15 22:41 UTC
oiaohm,
Yes! I think this is something we can all agree upon; let’s build on that.
This is someone under-skilled and without the facts stating that something is impossible when it’s not. https://winepak.org/
Exactly why does every application under Windows have to share the same permissions? Win32/Win64 does not have to be a lost cause for isolation, but implementing it is not going to be cheap. I don’t see Microsoft reaching into their pocket and doing it without being forced.
Of course winepak is not as isolated as it needs to be yet, as this still needs Wayland support and a few other things. But yes, if you break into one of many applications in winepak, you cannot directly modify the settings of other applications. Make it work with Wayland and you have Win32 support without most of the evils. If Microsoft would also add something like Flatpak’s portals for protected settings, problem solved.
Those of us who know what can be done say a lot of pressure should be placed on Microsoft to improve Win32/Win64, because it can be improved. It will cost millions of dollars in developer time to implement. Those of us who know understand that it can be done, and what it costs.
Edited 2018-06-16 10:08 UTC
oiaohm,
I think we need Microsoft to step it up, but I doubt they’re very interested in doing so. They’d rather push us all into their walled garden, where they finally gave us app isolation but took away owner control.
I suppose you may be ok with owner restrictions though, given your stance in this discussion?
Edited 2018-06-16 15:07 UTC
My personal system runs Linux; I am following Flatpak and other technologies like it. I would prefer it fixed correctly.
Yes, UWP gives you app isolation. But look around for an interface in UWP to confirm that an action has come from the user, and you’ll find it’s not there. Even Firefox as a UWP app would not provide the feature you really need: the ability to have settings changed only by approved programs or a physical human. If that were fixed, then settings like allowing unsigned extensions could be left to the user.
The reality here: you say the walled garden took away your control. The hard part is that you never had proper control in the first place. We have been using operating systems that are like a car whose steering wheel nut is missing: yes, it works most of the time, but from time to time the wheel comes off in your hands and you hope to get it back on before you crash into something. With UWP and the Microsoft app store you have upgraded from a car to a tank, but the nut on the steering wheel is still missing and it’s far worse to drive.
Really, I want an OS with the user properly in control: application settings critical to security not being able to be changed without the user’s/hardware owner’s approval and understanding.
I understand the compromise Google and Mozilla are making with their browsers, but it’s not like they have much choice in the matter, as the underlying problem is far bigger and they can only alter what they control.
oiaohm,
I agree that the Windows app store was built around Microsoft’s goal of controlling users rather than empowering owners & developers to enforce better security. I don’t like it one bit… they obviously could do better, but it is what it is.
Edited 2018-06-16 19:13 UTC
The reality is that Microsoft is being an idiot here, because if they were offering users proper security for being in their walled garden they would be able to bait more users into it. So it does not make logical sense, other than cost, not to do it properly, at least for the walled garden. Android 9/P has added the option of hardware to confirm user presence, so even if the core of Android is broken it still cannot pretend to be acting at the user’s direction. Yes, there will be a special button on a lot of Android 9/P devices connected to a special isolated system.
Android 9/P shows how far we can take this. There are reasons for hardware-based security. Of course, for protecting the user, nothing says this hardware has to be a black box, only that it is designed insanely well so attackers cannot open it with the data intact.
In a lot of ways, Google’s Android work is the leader in confirming user actions vs software-initiated actions. Everyone should be attempting to catch up to this.
Edited 2018-06-17 01:45 UTC
oiaohm,
In theory, we could rate real repair shops by playing dumb customers and seeing what they find. I think the extensions would be more likely to be checked than server certificates, but who knows. I am curious, but I’m not about to spend my money sending my computer to multiple repair shops to see how effective they are at finding malware, haha.
I have a friend that does it; I could ask him what he does.
http://www.bsa.org
You don’t have to be part of the BSA to know this: looking for unlicensed software use, they do in fact at times send machines with different defects to different repair shops to see if unlicensed software gets used. So there are aggregate stats on what repair shops successfully find and also what they repeatedly fail to find. They are not openly published, because they would be highly useful to those designing attacks.
So repair shops are rated at random. It would be better if it was not at random.
oiaohm,
They’re a copyright advocacy group; I don’t see where they rate repair shops for malware detection. Could you send a more specific link?
Edited 2018-06-17 14:11 UTC
This does not fix the problem; you need something like Flatpak with portals and Wayland to in fact absolutely protect settings. General modal dialogs under Windows will not work.
UAC prompts, maybe, if UAC worked; but applications also cannot use UAC to protect their settings in a clean, safe way.
The problem here is that you still have to support accessibility interfaces.
Alfman, list the features in Firefox that can give an attacker persistence which anti-virus software and an ISP could miss. Remember, a man-in-the-middle generates odd network traffic, so it is not that hidden. It is a very short list that gives persistence.
Please remember, any exploit might get into the system and then attempt to add an extension to Firefox. The example I gave was a direct attack against Firefox, but there are other examples where the attacker gets in via a Word document flaw and then creates persistence by adding extensions to browsers, as this can be a harder area for anti-virus software to scan.
As I said, Alfman, you are not putting forward anything that can fix this. The requirement to fix it properly is to rework how the graphical interfaces of operating systems work from the ground up, so that programs cannot fool other programs into believing the user approved an action unless they are specially approved applications.
Mozilla developers have to work with the defectiveness of current operating systems.
Yes, since 2014 Mozilla has been attempting to improve that.
oiaohm,
Really? These are security settings that you argue normal users should not get into; suddenly accessibility is a concern? Why?
They were also much less useful back then; and for example you probably wouldn’t want to do online banking with them…
oiaohm,
Well then I challenge you to find a single instance of a bug report/feature request where an end user has asked the devs to disable their ability to enable sideloading.
The reason your premise is wrong is that users who don’t want sideloading don’t have to enable it in the first place.
There’s a huge moral difference between changing the defaults to help keep users safe and forcefully imposing restrictions on everyone; we need more Richard Stallmans to fight this mentality!
Edited 2018-06-13 13:36 UTC
Technically correct, but totally wrong. What the users want protection from is clicking on a link, and being tricked into installing an extension that turns their browser into an ad-infested crypto-currency mining pig.
See my rant about security warnings having no meaning any longer, and the only reasonable way to fix this is to disable side-loading.
Personally, I’d rather they took the Android approach, and require you to go to another menu and manually enable side-loading first– at least that way, there’s some level of user-initiated action.
I wonder if I’m the only one who disables side-loading after I side-load an app on my android devices?
You contradict your own point though. The very fact that you can side load an app on your Android devices and then disable it is the very freedom that Google is trying to kill in Chrome. If the only solution is to disable side loading for everyone, are you going to give up those apps you side-loaded when Android’s turn comes around?
On android, I don’t get a pop-up that offers to enable side-loading– and even if I did, it would warn me that something nefarious is trying to happen.
The difference is that Chrome can still trick you into loading an application from a non-approved source. I’m OK with that being disabled.
I would like the option to manually enable side-loaded apps, however, even if it means going into chrome settings.
grat,
Yep, anyone who believes in sideloading being a choice (rather than being forced into a walled garden) should agree with that.
What happened here is that we should use browsers other than Chrome and Firefox. No companies should ever be allowed to act as gatekeepers to an open web.
Chrome and Chromium only support x86 and ARM. Firefox is going down the Rust hole. So I will try to make the transition to browsers that are free software and not biased towards any architecture in particular.
I hope GNOME Web (formerly known as Epiphany) and Falkon (formerly known as QupZilla) will be good enough and work across architectures, so Google and Mozilla can keep their browsers to themselves.
The Rust compiler supports every target LLVM does and has been a driving factor in writing or improving LLVM backends like the in-progress AVR backend and the more-complete MSP430 backend.
https://forge.rust-lang.org/platform-support.html
Even in C++, you don’t magically support every compiler without effort, so I’m curious which platform pre-Rust Firefox supported that LLVM does not.
If users accept this, Android will probably be next.
Inline installation is not sideloading…
You can’t, and never have been able to, use it to sideload unsigned extensions; that is not what it is for. It is used to install extensions from the Chrome Web Store. It is simply a way for a web site to trigger said installation without the user having to actually visit the store themselves.
This change simply means that instead of the extension automatically installing after the user confirms the prompt, the user instead is redirected to the store to complete the install. That’s it. It’s better this way. Really…
Users press OK. They always do. It’s not a good idea to let websites have the power to trigger an extension install like this; too many users just don’t read the prompts and hit OK. This way they will actually get sent to the app’s page in the store and will have to hit the install button themselves.
Sideloading is still possible (although it’s complicated, as it has always been). Inline installation has nothing to do with sideloading, nothing at all…
https://developer.chrome.com/apps/external_extensions
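For reference, that “external extensions” mechanism (the “direct injection into the profile” mentioned further up the thread) boils down to dropping a per-extension JSON file where Chrome looks for it. The sketch below uses one of the documented Linux locations and a placeholder extension ID, so treat the path as an assumption and check the linked page for the per-OS details; note it still only pulls the item from the Web Store’s update service, so it is not a route to unsigned code either.

```js
// Node.js sketch of Chrome's "external extensions" preferences file: a JSON
// file named <extension-id>.json that tells Chrome to fetch that item from
// the Web Store update service. Directory and ID below are placeholders.
const fs = require("fs");
const path = require("path");

const EXTERNAL_DIR = "/usr/share/google-chrome/extensions"; // one Linux location
const EXTENSION_ID = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";    // placeholder ID

fs.writeFileSync(
  path.join(EXTERNAL_DIR, EXTENSION_ID + ".json"),
  JSON.stringify(
    { external_update_url: "https://clients2.google.com/service/update2/crx" },
    null,
    2
  )
);
```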
In short, you’re all getting angry over nothing… You can all put your pitchforks down.
Edited 2018-06-13 17:15 UTC
galvanash,
If you find more information that confirms or refutes this, please let me know. I’m just going by the info Google has put out, which isn’t very clear.
Use a canary build (they have command-line switches to bypass some of this stuff) and/or turn on developer mode. Shitty answer, but it is the only way. It’s not painless or easy (like I said), but it is still possible.
For day-to-day use though, if you want to run an extension with no fuss and muss, it pretty much has to come from the store now (and that has been true for quite a while). Leaving developer mode on all the time (with an unsigned extension installed) will result in a lot of security nags… If you can live with the nags it does work fine though.
I was just pointing out that you can sideload; it’s just been pushed into a developer-only feature (similar to how Windows 10 does sideloading).
My point really was that this new restriction (i.e. the one in the linked article being discussed) is not related to side-loading at all. Inline installations have never allowed installing from anywhere but the Chrome Web Store – it was just a mechanism to let developers host a page from which they could trigger installation, and it was horribly misused and amounted to no good…
ps. As of right now, this method is the quickest and easiest, and works on all platforms afaik…
https://a9t9.com/howto/install-chrome-extension-from-file
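And as a concrete example of the developer-mode route: the “Load unpacked” option in chrome://extensions just needs a directory with a manifest. A minimal sketch (all field values are placeholders):

```js
// Node.js sketch: write the smallest manifest.json that developer mode's
// "Load unpacked" will accept. Name and version are placeholders.
const fs = require("fs");

fs.mkdirSync("hello-sideload", { recursive: true });
fs.writeFileSync(
  "hello-sideload/manifest.json",
  JSON.stringify(
    {
      manifest_version: 2,    // the manifest version current at the time of this thread
      name: "Hello sideload", // placeholder
      version: "0.1",
    },
    null,
    2
  )
);
```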
Edited 2018-06-13 19:19 UTC