Let us say you are an independent developer and it is time to publish your app to the world. To make it easier, you build an installer and start distributing it. A courageous early adopter downloads and runs it, only to be greeted by this strongly worded warning:
[…] Indeed, in today’s Windows environment, Microsoft actively blocks binaries from running, thanks to “SmartScreen”.
This article details some of the problems with SmartScreen, which in theory could be an important and useful technology.
This isn’t just SmartScreen. This also happened with Symantec AntiVirus (and Panda, and probably others). Every year we updated some software with new insights and (tax) laws and then distributed that to our customers. Even when we used digital signatures we would get blocked by the scanners for the first few days as “possible malware”. Eventually we just started to submit our software to whitelists like https://submit.symantec.com/whitelist/, wait a few days and then release the software. This slowed us down a little bit but was the most practical approach. It absolutely beat having to explain to our customers why we were “sending them viruses”.
The only reason we got flagged as “possible malware” was because we had < 5 people with this version of the software installed running Symantec. Once Symantec had confirmation of 5 machines, it automatically whitelisted that version of the software.
This is not me, but it might as well have been: https://community.broadcom.com/symantecenterprise/communities/community-home/digestviewer/viewthread?MessageKey=3a21862d-e740-4092-850e-6f20680474ad&CommunityKey=1ecf5f55-9545-44d6-b0f4-4e4a7f5f5e68&tab=digestviewer#bm3a21862d-e740-4092-850e-6f20680474ad
We were never bothered by SmartScreen actually. Maybe because we run it on enough machines during development that we already reach the “safety” threshold 😉
I clicked that link for Symantec’s whitelist, which briefly flashed up that it’s no longer available and redirected me to a page that allows me to submit a false positive or false negative. But the whole point of this article is what happens when a binary is not determined to be good or bad – the “default” way AV products report unknown binaries is becoming problematic.
I’ve been having problems with this for around 15 years, and it’s been getting steadily worse. Recently I had a situation where AV products started flagging my binaries (because they were “unknown”), and that gets reported to VirusTotal, and that gets consumed by second tier companies who then start reporting it as truly bad, and that gets consumed by URL scanning services which complain that my URL is hosting malicious content, and that gets consumed by firewall and AV companies. Fixing the original reports was easy enough, but I was never really able to fix the echo chamber.
malxau,
That made me laugh even though I know it’s sad.
It is the equivalent of giving up and throwing in the towel, but anti-malware systems can no longer depend on heuristics for new code. They are at a great disadvantage, because bad actors can easily use those very detectors to make sure their new code passes clean.
I.e.: it is very easy to upload your new .exe to online services and check it against every known antivirus. This way, if you were to write a new bitcoin-stealing worm, you could ensure it is not detected by any known system.
The shortcut security software writers found was marking *everything* suspicious, until they actually get a sample of the binary and run it in a sandbox or similar.
Question: Does buying a certificate, or having a third party service certify your binaries avoid this problem?
> Does buying a certificate, or having a third party service certify your binaries avoid this problem?
Did you read the article? This is what the vast majority of it is about, and any summary I can give here won’t be as good as the original. The very terse version is that regular certs are insufficient to cause SmartScreen to trust by default, although they do allow for trust to be built over time. EV certs are trusted by default, but are not available to individuals and are prohibitively expensive. I don’t know about you, but the majority of software I’m running today comes from non-corporate, volunteer-driven individuals, so unavailability and cost of certs are serious issues.
I must admit, I just skimmed the summary, but the main point stands.
Unfortunate as it is, having no certificate and having a cheap certificate accessible to everyone amount to nearly the same thing. The current state of the world not only has bad actors, but also lots of mistakes. How can we know a new developer took good precautions building their binaries? Their system might be infected unbeknownst to them.
Okay, I am pessimistic. I have been developing code for many years, and I too would want to be able to easily distribute my products.
An EV certificate is only $400. Starting a company in Delaware costs $89, or a bit more if done thru an agent. Hosting, web certificates, etc, everything should come under $1k.
It is a bit steep for entry, but then there are other alternatives:
– Using “Stores” that validate your software (they take a 30% cut, but at least do offer some services in return)
– Distributing software in source form, or on alternate platforms
– Web based software that does not need a download, or installed as a PWA
and others.
I know the situation is less than ideal (to say the least). But it is still manageable.
It really depends; a lot of projects that depend on volunteers don’t make money. Now you expect them to shell out money that they’re not going to make back just to distribute free software? Back when I was a kid I didn’t have to do any of this crap to distribute programs, which is good because I wouldn’t have had the money to do it. Not only is it non-ideal, but it may even be a non-starter for the next generation of indie development.
Alfman,
Times have changed. I remember being able to trust freeware / shareware downloads from the web. Even the random ones were generally okay.
Then came the massive bad actors. First you could not even install Windows without unplugging the Ethernet cable, and installing a firewall. Just being connected during setup meant you would be infected. But that was an external threat.
Then the shareware authors themselves became greedy. If they were not, the download site operators would gladly bundle all sorts of crapware with the free software.
Today, for an average person, it is very difficult to trust a download, even if it has a reputable brand and comes from a known download site.
So the OS gatekeepers became a bit too extreme. Yes, I don’t like the situation, but I understand why it happened.
That being said, so-called “trusted” developers can easily install a kernel rootkit to make online games fair, without telling users (Sony, LoL, etc.). Or install a backdoor that is wide open to the internet (CCleaner).
sukru,
Code certificates have never been a great solution. The fact that a piece of code is signed doesn’t tell me anything about the intentions of the software developer or how safe it is. Code certificates were Microsoft’s initial answer to ActiveX insecurities, but they didn’t help because code signatures didn’t fix the underlying flaws in the security model, and they don’t fix those flaws today either.
My ideas for solving this have been consistent over the years: we should be able to easily run untrusted code in an isolated sandbox and not be at significant risk when we do. The problem with mainstream operating systems is that they normally assume everything run by a user should have the full permissions of that user, and that’s fundamentally insecure. Sandboxing is a solution. However, a very serious problem with sandboxing on some platforms is when the manufacturer abuses the sandbox to impose restrictions against owner rights, removing our control over our own machines. Sandboxing is a useful security technology, but the ethical use of sandboxing needs to empower owners to secure themselves from malware rather than empower manufacturers to jail the owners.
sukru, I’d argue that times did change…and then changed again.
It’s true that during the 2000s we saw shareware and freeware authors looking for alternate revenue streams, since people didn’t pay for the products. And it’s true that we saw dubious hosting, including downloads.com and even sourceforge.net, messing with binaries. But after that, closed source freeware gave way to open source, hosting moved to places like github, and those problems are much rarer today. This is why cost is now the problem: the software we get today has _no_ revenue stream. Note in particular that git hosting and compilation tools are effectively free for open source software, and if a project is lucky enough to get meager donations, there should be better ways to spend those than giving them to CAs.
I’d argue that the malware problem in open source Windows executables can be categorized as:
1. Malicious source. This seems to be rare (or very effective in that it’s not being detected.)
2. An author intentionally compiling malicious binaries (that don’t correspond to the source.) This also seems to be rare.
3. A compromised build environment such that the author unintentionally distributes malicious content. This is the most concerning one.
4. A compromised hosting service so the legitimate content is replaced with malicious content. This also happens too frequently.
I’d argue that code signing in its current form doesn’t solve 1, 2 or 3, and is of limited value against 4 because unlike SSL, code signing certificates do not specify content origin. All a malicious actor has to do is get _a_ certificate, from a CA or stolen from any developer, and if it has enough trust, users would end up trusting malicious binaries.
Our current signing is really asking the user, “do you trust Joe Developer’s build environment is secure so no malicious payload could be included or certs misappropriated?” A user has no way to answer that.
What I think we need is to sign that binaries match sources, regardless of who generated them. This is just like CI services today – if a trusted CI build environment compiled code from a specified repo at a specified hash, it can sign the binaries to indicate that the binaries correspond to specified source. From there, installing the binary allows the user to confirm this is what they want to install; inspect or download the source if they choose; or even request that the source be re-built through the same CI process to validate that the binaries match.
This gets certificates out of the hands of a million or so developers and into the hands of a dozen or so CI pipelines, which are far more likely to be secure. It doesn’t ensure that source isn’t malicious, but what we have today can’t do that either.
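To make that concrete, here is a rough sketch (Python, purely illustrative – the manifest format, key distribution, and names are all made up) of what a client-side check against such a CI attestation could look like:

    # Purely illustrative: verify a (hypothetical) CI build attestation saying
    # "this binary was built from repo X at commit Y and hashes to Z".
    import hashlib, json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_attestation(binary_path, manifest_bytes, signature, ci_pubkey_bytes):
        # 1. Check the CI service really signed this manifest (raises on failure).
        Ed25519PublicKey.from_public_bytes(ci_pubkey_bytes).verify(signature, manifest_bytes)
        manifest = json.loads(manifest_bytes)   # e.g. {"repo": ..., "commit": ..., "sha256": ...}

        # 2. Check the binary on disk is the one the manifest describes.
        with open(binary_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest != manifest["sha256"]:
            raise ValueError("binary does not match the attested build")

        # What the user can now inspect, download, or ask to be rebuilt.
        return manifest["repo"], manifest["commit"]

An installer or package manager could run something like this before offering to install, and a suspicious user could ask the same CI pipeline to rebuild that commit and compare hashes.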
Alfman,
I agree that sandboxes can be useful, but look at how Mac OS had implemented them:
https://www.osnews.com/story/131830/macos-10-15-alow-by-design/
And even if the software works for a while, it could have a delayed trigger to mutate into malware later on: https://www.reddit.com/r/blackhat/comments/6gu7ej/malware_with_timedelayed_payloads/
So you install the software, let’s say a famous code editor, in a sandbox, and allow it to access your Documents/ folder. You might not know that two months from now it will encrypt all your data and hold it for ransom.
And backups are not the solution. Even with Time Machine and Restore Points, users still lag behind real backup regimens.
malxau, Alfman,
For certificates, I agree they are not infallible. That is why I mentioned the so-called piracy protections, which themselves should be considered malware.
Basically I am at “giving up” stage.
Alfman,
My point was that there is a solution, but I also warned that OS vendors may abuse it. Alas, if the entity responsible for things breaking, violating our rights, forcing unwanted behaviors, etc is the OS vendor themselves, then we’re in trouble because that kind of malware can’t be stopped.
All of the solutions are intrinsically dependent on the integrity of the OS vendor, for better or for worse.
The vast majority of the time there’s no need to give a random application access to all files in the first place though. The need to do that on android is a dumb limitation stemming from the complete lack of standard dialogs. If we had standard dialogs to provide access to specific files, the need for system-wide file access would be significantly reduced. I find it extremely disappointing android didn’t adopt the concept of standard dialogs from windows, this is a great feature on windows and I find android sorely lacking a similar feature.
Alfman,
> The vast majority of the time there’s no need to give a random application access to all files in the first place though.
As far as I can tell, we’ve evolved to a place where tablets are getting physically larger and have keyboard support, and the real distinction between a tablet and a PC is whether the workload requires a sandbox or not. In other words, if PCs did what you’re suggesting, they’d look like Android/iOS. I know you’re suggesting that these have been vendor abused, but we currently don’t have an alternative sandbox that really expresses PC scenarios.
It may be possible to build a sandbox that can express my workload, but I haven’t seen it yet. Think about what a sandbox for git or make looks like – in a strict sense, it’s true that these should only have access to defined locations, but the definition of those locations is much more complex than “my pictures.”
Personally I’m maintaining a command prompt shell, and in a strict sense it’s true that it shouldn’t be accessing things unless the user specified them. But to sandbox it, command line arguments, which may include wildcards, need to be incorporated into the sandbox scope. Users would also need to believe that the security benefit of a highly dynamic sandbox is worth the obvious performance cost of its continual adjustment and enforcement on each syscall.
Right now when people suggest sandboxing, what I hear is “we have a solution for the trivial cases but don’t have a plan beyond that.” And my entire life is “beyond that.” How am I supposed to develop the system that implements the sandbox from within the sandbox?
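To illustrate what I mean by incorporating arguments into the sandbox scope, here is a toy sketch (Python, purely illustrative) of deriving a per-command file allowlist from argv – and even this ignores redirections, files created at runtime, and the question of which arguments are really paths:

    # Toy example only: derive a file allowlist for one command from its arguments.
    # It already has to guess which arguments are paths, and it knows nothing about
    # redirections, config files, or files the command will create later.
    import glob, os

    def derive_scope(argv):
        allowed = set()
        for arg in argv[1:]:
            if arg.startswith("-"):        # assume anything dashed is an option, not a path
                continue
            matches = glob.glob(arg)       # expand wildcards like src/*.c
            for path in (matches or [arg]):
                allowed.add(os.path.realpath(path))
        return allowed

    # derive_scope(["grep", "-r", "TODO", "src/*.c"]) -> absolute paths of matching files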
There are two issues to resolve: privacy and security. The sandboxes might help with those, but they need to go a long way.
For privacy, we want to make sure our personal data is not shared with third parties. The trivial way to do this would be preventing all network access to new programs by default. However that is rarely the case.
First, even local programs open ports. Granted with recent advances we no longer use X11 TCP connections to 127.0.0.1, but even simple things like validating a software certificate would require the network. This means we would need to have some sort of firewall and/or escalation mechanism for each individual sandbox.
The second part is security. As malxau mentioned, some things are easy (Plex only accesses /media and specific ports), some can be done (Notepad can only open files selected via the standard system dialog), but many are almost impossible.
When taken to its logical extreme, sandboxing becomes unmanageable. For daemons like Plex, things are actually easy: we have had SELinux for years, and after some initial headache the system works perfectly. However, for interactive desktop applications all bets are off.
sukru,
I haven’t taken an ideological extreme though; I was pointing out that sandboxing on desktop operating systems is deficient because it is. Whether and how you use sandboxing is up to you; I’d like to see the tools become more available and accessible to owners.
In terms of your point I’d argue that a much better way to protect your data is by preventing applications from having access in the first place than to try and contain leaks on the network side.
“In terms of your point I’d argue that a much better way to protect your data is by preventing applications from having access in the first place than to try and contain leaks on the network side.”
Let’s tell that to the users of viral apps:
https://www.usatoday.com/story/tech/news/2019/07/17/faceapp-privacy-concerns-raised-aged-photos-go-viral/1753898001/
“Some privacy and security experts have expressed concerns that users granting the St. Petersburg, Russia-based FaceApp access to photos on their smartphones is a grand giveaway of privacy and personal information.”
It runs in a sandbox, requires explicit permission to your photos, but people willingly give their consent.
malxau,
(I intentionally responded here because of how poorly wordpress handles threads)
Well, why does that matter though? The fact that android or IOS have application sandboxes doesn’t really negate the benefits on PCs. Even if tablets and smart phones didn’t exist, I’d still be speaking up for better sandboxing tools on PCs to enable security isolation. I’d want it to be superior to the sandboxing on android and IOS though!
Sandboxie is an example of such a tool for sandboxing windows applications:
https://www.sandboxie.com/HowItWorks
Incidentally I just noticed they’ve terminated development and the website is due to be taken down later this year, at least they open sourced it first.
I understand what you are saying, but that’s largely a byproduct of how limited sandboxing is with android & IOS. Unfortunately these weren’t developed with owner interests in mind, which makes them rather disappointing for owner control. Keep in mind, apple’s agenda for sandboxes was to restrict the owner rather than to empower us. However it doesn’t have to be this way and I’d really like to see some more powerful sandboxing tools evolve on PCs.
Owners should have full control over every sandbox if they choose to exercise it, since every sandbox would inherit a default profile at first, but from there you could add/remove programs & permissions as needed. With a good GUI, you could provide owners with far more capable sandboxing than what’s provided with android & IOS.
For example, you could load games up in their own sandboxes. You could create one sandbox for all development projects, or break projects down into multiple sandboxes if such a need arose. If you get some untrusted code from friends or a stranger, you could create a new sandbox to evaluate it in. You can backup & snapshot sandboxes. If you really wanted to throw everything in a single sandbox, of course you can do that too!
This gives owners a lot of flexibility in terms of how to secure their data & applications. We’re starting to see an increased use of containers on linux, which is a kind of sandbox, but we need to make them trivial & universal to use with wizards and integrated desktop environment support. When software is downloaded, it could prompt which sandbox to install to.
At the OS level, containers get us part of the way there, but doing it by hand is extremely tedious and the process is too darn rough around the edges. I would like to see this become trivial, such that anyone can use it without much effort once they learn the basics of sandboxing, with a GUI that is self-documenting. I think it’s a great feature to have, but I’m less optimistic about the OS communities coming together to make it happen.
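For a sense of what “by hand” looks like today, here is roughly how you might confine one untrusted tool with bubblewrap (a sketch only; the bwrap flags are real, but the paths and the tool name are just examples and will vary by distro):

    # Sketch: run one untrusted tool under bubblewrap from Python.
    # Assumes bwrap is installed; the mounts below are examples and distro-dependent.
    import subprocess

    subprocess.run([
        "bwrap",
        "--ro-bind", "/usr", "/usr",                    # read-only system files
        "--symlink", "usr/lib64", "/lib64",
        "--symlink", "usr/bin", "/bin",
        "--proc", "/proc",
        "--dev", "/dev",
        "--tmpfs", "/tmp",
        "--bind", "/home/me/projects/demo", "/home/me/projects/demo",  # only writable data
        "--unshare-net",                                # no network inside this sandbox
        "--die-with-parent",
        "/usr/bin/untrusted-tool",                      # hypothetical program under test
    ], check=True)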
Well, things like shells and file explorers are hard to sandbox if the intention is for them to have administrative access to your files. Administrative tools in general are bad candidates for sandboxing by their very nature. You could limit them to read-only access, or maybe you could use a temporary sandbox to help you evaluate the shell before allowing it to run in a trusted sandbox with your files. This wouldn’t protect against a “time bomb”, but neither does any certificate-based solution anyway. Maybe a sandbox could monitor for suspiciously high levels of activity, but that’s getting into A/V territory. Hypothetically, if you only ever need to use the shell with certain projects, then maybe it would be adequate to add the shell to those sandboxes without giving it access to everything else.
Operating system kernels already need to enforce permissions anyways, I don’t think the overhead would be perceivable.
I hope you don’t clump me in with those people 🙂
I think you ought to be able to implement hierarchical sandboxes, however I don’t necessarily think the average user would need that.
Alfman,
I wasn’t familiar with Sandboxie, thanks for pointing me to it. It seems to have a very different set of issues to iOS/Android type sandboxing.
The sandbox model that I thought you were proposing is one that infers from user activity which resources the program should have access to, and denies access to anything else. This is the classic “file picker” type sandbox where a program can only access a file after the user has selected it through an OS control. Porting that to a shell implies understanding arguments and grammar. This is why it seems both difficult and very slow: each command has to be parsed by the sandboxing engine, rights reconfigured, and then a program launched, which in turn performs access checks.
Sandboxie doesn’t seem to be anything like that sophisticated. It’s just checking based on the sandbox whether a resource should be protected, and if so, modifications can be directed to an alternate location and merged back for read operations within the sandbox. I doubt this model can support my command shell.
I’d still stick with my general skepticism that this model can be applied to a moderately sophisticated environment. Sandboxie looks like it should be able to support git in that it can allow modifications to a source tree but not other files. But even something like vim gets weird, because the user is clearly indicating the object they’d like to alter, but the sandbox isn’t aware of that, and may end up redirecting user requested changes.
If the model for supporting more sophisticated things is to use them outside of the sandbox, it starts to get back where we started, which is there’s no good way to trust software from the Internet, and all software is from the Internet. I think we’re agreeing that code signing doesn’t really provide that trust either. But without some model for trust, we end up with things like Smartscreen whose basic logic is that if the user is discouraged from doing anything, they’ll be discouraged from doing unsafe things incidentally.
malxau,
Most games & applications can function elegantly in a sandbox. However, tools that are intended to perform systemwide/administrative functions are considerably harder to sandbox precisely because of that administrative functionality. With your shell, you probably explicitly want it to be able to perform any function on any resource without question, which means you’re implicitly asking for something akin to full root access. To state the obvious: this isn’t a problem for yourself, but rather for other people who don’t trust you.
Conceivably though, a sandbox might implement controls that work a bit like SELinux: different permissions on resources, but also different levels of enforcement of those permissions. For example: disabled, logging mode, prompting mode, enforcement mode. It might add session duration limits as well. This way you could install the shell with default restrictions and you would get notifications about any activity that exceeds the permissions. For instance, you might grant it only read access, and only writes would trigger a prompt. Depending on your usage, this could get you what you want. It could still get annoying, of course, if you choose not to open the security aperture permanently, but at the end of the day you ultimately must decide whether to allow activities or deny them.
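To make that idea concrete, here is a toy sketch (purely hypothetical – nothing like this exists today, and the names are made up) of what per-sandbox enforcement modes could look like:

    # Hypothetical sketch of per-sandbox enforcement modes, loosely in the spirit
    # of SELinux's permissive/enforcing split plus an interactive "prompt" level.
    from enum import Enum

    class Mode(Enum):
        DISABLED = 0   # sandbox effectively off
        LOG = 1        # allow accesses that exceed the policy, but log them
        PROMPT = 2     # ask the owner whenever an access exceeds the policy
        ENFORCE = 3    # silently deny anything the policy does not grant

    def check_access(mode, granted, resource, operation, ask_owner, log):
        if (resource, operation) in granted:       # explicitly permitted by the profile
            return True
        if mode is Mode.DISABLED:
            return True
        if mode is Mode.LOG:
            log(f"unpermitted {operation} on {resource}")
            return True
        if mode is Mode.PROMPT:
            return ask_owner(f"Allow {operation} on {resource}?")
        return False                               # ENFORCE: deny outright

    # e.g. install a shell with read-only grants and mode=Mode.PROMPT so that
    # writes trigger a prompt, as described above.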
I disagree; sandboxing / containers can support sophisticated environments quite well. Sandboxing is very compelling, especially when running untrusted software, and I don’t feel this is an exaggeration. Think of it like a virtual machine: you can install untrusted software inside a virtual machine, and it will work fine alongside the other programs inside that virtual machine. Regardless of what these programs do inside the VM, the rest of your VMs and files are off limits. This works great so long as those programs don’t need to access other virtual machines.
Your problem only arises because you want to use an untrusted tool in an administrative capacity, or you explicitly want the untrusted tool to transcend the containers, yet these goals seem to be implicitly at odds with being contained. Let me ask you: what do you expect to happen when you run an untrusted administrative tool? I think the best you can hope for by default is notifying/prompting the user to confirm the actions being taken.
I’m not against trust models per se, but my point is that sandboxing fills a huge glaring gap in what is otherwise an all-or-nothing security model. At the very least, a sandbox allows you to test applications safely in isolation.
I’m not trying to deny your point that you may not want to run certain applications in an isolated sandbox, but I would suggest that you would have needed to make the same choice without the sandbox anyway. The sandbox is just a tool; you can use it or not. Not for nothing, but if you download a binary from an HTTPS website and verify the hashes, cryptographically speaking that gives you about the same security as a signed binary – either you take the author at their word and trust them or you don’t; signing the code again is cryptographically redundant.
I’d be in favor of having some standards to make validating downloaded software much easier to automate. That way at least an OS could validate the integrity and source of the software you download without you having to manually confirm the hash or requiring devs to buy a separate certificate. If it’s open source, you could verify that the source code corresponds to the binary. The standard might even allow 3rd parties to attest to the quality of the source code.
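As a crude illustration of what such automated validation could look like (the manifest URL, format, and filenames here are invented – this is not an existing standard):

    # Sketch: an OS or updater fetches a published SHA-256 manifest over HTTPS and
    # checks a downloaded installer against it. The manifest URL and format are
    # hypothetical; today this is usually a manual copy-and-compare step.
    import hashlib
    import urllib.request

    def verify_download(installer_path, manifest_url, filename):
        with urllib.request.urlopen(manifest_url) as resp:    # HTTPS ties this to the site
            manifest = resp.read().decode()

        expected = {}
        for line in manifest.splitlines():                     # "sha256  filename" lines,
            parts = line.split()                               # like sha256sum output
            if len(parts) == 2:
                digest, name = parts
                expected[name] = digest

        with open(installer_path, "rb") as f:
            actual = hashlib.sha256(f.read()).hexdigest()

        if expected.get(filename) != actual:
            raise ValueError(f"{filename} does not match the published hash")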
(broken links to avoid wordpress auto-moderation)
Rather than being a proprietary black box like smartscreen, this could give more useful information from multiple sources to help users make a more informed decision. Of course it would be tough to get the industry to embrace anything like this, but that’s a different problem, haha.
Alfman,
(We’re the only ones left and the threading is…bleh.)
What I was trying to say on a fork of this thread is that users today are being asked whether they trust the software author, the build environment, and the distribution channel. The model I was proposing solves at least two of these.
Well, I’m still hoping for a way of either establishing trust or at least reducing distrust. There will always be administrative tools, so the question becomes whether users need to disable their only line of defense (the sandbox) or whether there are other lines of defense that can be drawn.
I’m not sure that it’s necessarily true that a command shell is administrative, in the classic sense of that term. Most of the time it runs as an unelevated user, constrained to whatever access that user has. So long as the actions it performs are directed by the user, this makes a lot of sense. The question is how to verify that assumption.
My real fear with sandboxes is that just like Smartscreen, the temptation for a platform vendor will be to strongly discourage applications from executing outside of them. That is the exact same problem I’m having (and the author of this article is having) today. I don’t deny that they add value as a purely optional facility.
malxau,
You didn’t say anything about the idea I proposed to verify the owner using standard HTTPS protocols. We’re both well aware that knowing who developed the software doesn’t imply that you can trust them, but maybe you could “crowd source” the problem in a similar way to how DNSBLs are a crowd-sourced solution to identifying spammers. Just a thought…
You say there will always be administrative tools, but honestly I find sandboxing lacking today in terms of both sophistication and ease of use.
This level of access with no sandbox puts all the user’s data at risk. There are some applications, like the shells you bring up, where the user expects to have access to their files through the shell. However, for many untrusted applications this hands them the keys to the castle when they don’t need it.
I agree totally, you have no idea how many reservations I have because of this. The merit in any application sandboxing lies in whether the owner is fully in control. There’s a lot of ways this could go very badly for us if owners are taken out of the chain of trust and vendors take control.