Privacy, Security Archive

Pitch deck gives new details on company’s plan to listen to your devices for ad targeting

For years now, people have believed that their smartphones are listening to their conversations through their microphones, all the time, even when the microphone is clearly not activated. Targeted advertising lies at the root of this conviction: when you’ve just had a conversation with a friend about buying a pink didgeridoo and a flannel ukulele, and you then get ads for pink didgeridoos and flannel ukuleles, it makes intuitive sense to assume your phone was listening to you. How else would Google, Amazon, Facebook, or whoever, know your deepest didgeridoo desires and untapped ukulele urges? The truth is that targeted advertising using cross-site cookies and profile building is far more effective than people think, and on top of that, people often forget what they did on their phone or laptop ten minutes ago, let alone yesterday or last week. Smartphones are not secretly listening to you, and it’s not through covert microphone activation that they know about your musical interests. But then.

Media conglomerate Cox Media Group has been pitching tech companies on a new targeted advertising tool that uses audio recordings culled from smart home devices. The existence of this program was revealed late last year. Now, however, 404 Media has also gotten its hands on additional details about the program through a leaked pitch deck. The contents of the deck are creepy, to say the least. Cox’s tool is creepily called “Active Listening” and the deck claims that it works by using smart devices, which can “capture real-time intent data by listening to our conversations.” After the data is captured, advertisers can “pair this voice-data with behavioral data to target in-market consumers,” the deck says. The vague use of artificial intelligence to collect data about consumers’ online behavior is also mentioned, with the deck noting that consumers “leave a data trail based on their conversations and online behavior” and that the AI-fueled tool can collect and analyze said “behavioral and voice data from 470+ sources.”
↫ Lucas Ropek at Gizmodo

Looking at the pitch deck in question, you can argue that it’s not even referring to smartphones, and that it is incredibly vague – probably on purpose – about what “active listening” and “conversations” really refer to. It might just as well be referring to the various conversations held on unencrypted messaging platforms, directly with companies, or things like that. “Smart devices” is also intentionally vague, and could be anything from one of those smart fridges to your smartphone. But you could also argue that yes, this seems to be pretty much referring to “listening to our conversations” in the most literal sense, by somehow – we have no idea how – turning on our smartphone microphones, in secret, without iOS or Android, or Apple or Google, knowing about it. It seems far-fetched, but at the same time, a lot of corporate and government programs and efforts seemed far-fetched until some whistleblower spilled the beans. The feeling that your phone is listening to you without your consent, in secret, will never go away. Even if irrefutable evidence came up that it isn’t possible, the idea is just too plausible to be cast aside.

Heliography in darkness

Telegram doesn’t hold up to the promise of being private, nor secure. The end-to-end encryption is opt-in, only applies to one-on-one conversations, and uses a controversial ‘homebrewn’ encryption algorithm. The rest of this article outlines some of the fundamentally broken aspects of Telegram.
↫ h3artbl33d

Telegram is not a secure messenger, nor is it a platform you should want to be on. Chats are not encrypted by default, and are stored in plain text on Telegram’s servers. Only chats between two (not more!) people who also happen to both be online at that time can be “encrypted”. In addition, the quotation marks highlight another massive issue with Telegram: its “encryption” is non-standard and home-grown, and countless security researchers have warned against relying on it. Telegram’s issues go even further than this, though. The application also copies your contacts to its servers and keeps them there, it has a “People nearby” feature that shares location data, and so much more. The linked article does a great job of listing the litany of problems Telegram has, backed up by sources and studies, and these alone should convince anyone not to use Telegram for anything serious. And that’s before we even talk about Telegram’s utter disinterest in stopping the highly illegal activities that openly take place on its platform, from selling drugs down to far more shocking and dangerous activities like sharing revenge porn, CSAM, and more. Telegram has a long history of not caring one iota about shuttering groups that share and promote such material, leaving the victims of these heinous crimes out in the cold. Don’t use Telegram. A much better alternative is Signal, and hell, even WhatsApp, of all things, is a better choice.

Driving forward in Android drivers

Google’s own Project Zero security research effort, which often finds and publishes vulnerabilities in both other companies’ products and its own, has set its sights on Android once more, this time focusing on third-party kernel drivers.

Android’s open-source ecosystem has led to an incredible diversity of manufacturers and vendors developing software that runs on a broad variety of hardware. This hardware requires supporting drivers, meaning that many different codebases carry the potential to compromise a significant segment of Android phones. There are recent public examples of third-party drivers containing serious vulnerabilities that are exploited on Android. While there exists a well-established body of public (and In-the-Wild) security research on Android GPU drivers, other chipset components may not be as frequently audited so this research sought to explore those drivers in greater detail.
↫ Seth Jenkins

They found a whole host of security issues in these third-party kernel drivers, in phones both from Google itself and from other companies. An interesting point the author makes is that because it’s getting ever harder to find 0-days in core Android, people with nefarious intent are now looking at other parts of an Android system, and these kernel drivers are an inviting avenue for them. They seem to focus mostly on GPU drivers for now, but it stands to reason they’ll be targeting other drivers, too. As usual with Android, the discovered vulnerabilities were often fixed, but the patches took way, way too long to find their way to end users, because OEMs lag behind in shipping them. The author proposes wider adoption of Android APEX to make it easier for OEMs to deliver kernel patches to users faster. I always like the Project Zero studies and articles, because they really take no prisoners; whether they’re investigating someone else like Microsoft or Apple, or their own company, Google, they go in hard, do not sugarcoat their findings, and apply the same standards to everyone.

Corporate greed from Apple and Google has destroyed the passkey future

William Brown, developer of webauthn-rs, has written a scathing blog post detailing how corporate interests – namely, Apple and Google – have completely and utterly destroyed the concept of passkeys. The basic gist is that Apple and Google were more interested in control and locking in users than in providing a user-friendly passwordless future, and in doing so have made passkeys effectively a worse user experience than just using passwords in a password manager.

Since then Passkeys are now seen as a way to capture users and audiences into a platform. What better way to encourage long term entrapment of users than by locking all their credentials into your platform, and even better, credentials that can’t be extracted or exported in any capacity. Both Chrome and Safari will try to force you into using hybrid (caBLE), where you scan a QR code with your phone to authenticate – you have to click through menus to use a security key. caBLE is not even a good experience, taking more than 60 seconds to work in most cases. The UI is beyond obnoxious at this point. Sometimes I think the password game has a better UX. The more egregious offender is Android, which won’t even activate your security key if the website sends the set of options that are needed for Passkeys. This means the IDP gets to choose what device you enroll without your input. And of course, all the developer examples only show you the options to activate “Google Passkeys stored in Google Password Manager”. After all, why would you want to use anything else?
↫ William Brown

The whole post is a sobering read of how a dream of passwordless, and even usernameless, authentication was right within our grasp, usable by everyone, until Apple and Google got involved and enshittified the standards and tools to promote lock-in and their own interests above the user experience. If even someone as knowledgeable about this subject as Brown, who writes actual software to make these things work, is advising against using passkeys, you know something’s gone horribly wrong. I also looked into possibly using passkeys, including with a YubiKey, but the process seems so complex and unpleasant that I, too, concluded that just sticking to Bitwarden and my favourite open source two-factor authentication application was a far superior user experience.
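For readers unfamiliar with the mechanics Brown is describing: the relying party steers this behavior through the authenticatorSelection member of the WebAuthn creation options. Below is a rough sketch of the shape of those options (written as a Python dict purely for illustration; the field names follow the WebAuthn spec, all values are placeholders, and this is my gloss on the complaint, not code from Brown’s post).

```python
# Sketch of WebAuthn PublicKeyCredentialCreationOptions as a Python dict.
# Field names follow the WebAuthn spec; every value here is a placeholder.
creation_options = {
    "rp": {"id": "example.com", "name": "Example"},
    "user": {"id": b"opaque-user-handle", "name": "alice", "displayName": "Alice"},
    "challenge": b"random-bytes-from-the-server",
    "pubKeyCredParams": [{"type": "public-key", "alg": -7}],  # -7 = ES256
    "authenticatorSelection": {
        # "required" asks for a discoverable credential, i.e. the passkey
        # flow. Per the quote above, sending this can stop Android from even
        # offering a plugged-in security key as an enrollment option.
        "residentKey": "required",
        "userVerification": "preferred",
        # Leaving authenticatorAttachment unset is supposed to let the user
        # pick between a platform authenticator and a roaming one (such as a
        # security key); the complaint is that clients override that choice.
    },
}
print(creation_options["authenticatorSelection"])
```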

Backdoor in upstream xz/liblzma leading to SSH server compromise

After observing a few odd symptoms around liblzma (part of the xz package) on Debian sid installations over the last weeks (logins with ssh taking a lot of CPU, valgrind errors) I figured out the answer: The upstream xz repository and the xz tarballs have been backdoored. At first I thought this was a compromise of debian’s package, but it turns out to be upstream.
↫ Andres Freund

I don’t normally report on security issues, but this is a big one, not just because of the severity of the issue itself, but also because of its origins: it was created by and added to upstream xz/liblzma by a regular contributor to said project, and it makes it possible to bypass SSH authentication. It was discovered more or less by accident by Andres Freund.

I have not yet analyzed precisely what is being checked for in the injected code, to allow unauthorized access. Since this is running in a pre-authentication context, it seems likely to allow some form of access or other form of remote code execution.
↫ Andres Freund

The exploit was only added to the release tarballs, and is not present when taking the code off GitHub manually. Luckily for all of us, the exploit has only made its way into the most bleeding-edge of distributions, such as Fedora Rawhide (what will become Fedora 41) and Debian testing, unstable, and experimental, and as such it has not spread widely just yet. Nobody seems to know yet what the ultimate intent of the exploit is. Of note: the person who added the compromising code was recently added as a Linux kernel maintainer.
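The tarball-versus-repository angle suggests a cheap sanity check packagers can automate: list files that exist in a release tarball but not in the corresponding git checkout. Here is a minimal sketch of that idea in Python; the tarball name and checkout path are hypothetical. Note that autotools-based releases legitimately ship generated files this way (which is precisely the cover the backdoor used), so the output is a diff to review, not a verdict.

```python
import os
import tarfile

def tarball_only_files(tarball_path: str, repo_path: str) -> set[str]:
    """Return paths present in the release tarball but absent from the repo."""
    with tarfile.open(tarball_path) as tar:
        # Strip the leading "project-x.y.z/" component from member names.
        tar_files = {
            m.name.split("/", 1)[1]
            for m in tar.getmembers()
            if m.isfile() and "/" in m.name
        }
    repo_files = set()
    for root, dirs, files in os.walk(repo_path):
        dirs[:] = [d for d in dirs if d != ".git"]  # skip git metadata
        for f in files:
            rel = os.path.relpath(os.path.join(root, f), repo_path)
            repo_files.add(rel.replace(os.sep, "/"))
    return tar_files - repo_files

# Hypothetical paths; point these at a real tarball and checkout to use it.
for path in sorted(tarball_only_files("xz-x.y.z.tar.gz", "xz-checkout")):
    print(path)
```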

The Apple curl security incident 12604

When this command line option [--cacert] is used with curl on macOS, the version shipped by Apple, it seems to fall back and check the system CA store in case the provided set of CA certs fails the verification. A secondary check that was not asked for, is not documented, and quite frankly comes completely by surprise. Therefore, when a user runs the check with a trimmed and dedicated CA cert file, it will not fail if the system CA store contains a cert that can verify the server! This is a security problem because now suddenly certificate checks pass that should not pass.
↫ Daniel Stenberg

Absolutely wild that Apple does not consider this a security issue.
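For contrast, here is what fail-closed verification looks like in a minimal Python sketch (the CA file name and host are placeholders, and this illustrates the expected semantics, not Apple’s code): the chain is validated against only the CA bundle the caller supplied, and if that fails, the connection fails; there is no silent second check against the system trust store.

```python
import http.client
import ssl

# PROTOCOL_TLS_CLIENT enables CERT_REQUIRED and hostname checking by default.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# Trust ONLY this trimmed, dedicated CA file (placeholder name).
ctx.load_verify_locations(cafile="pinned-ca.pem")

conn = http.client.HTTPSConnection("example.com", context=ctx)
try:
    conn.request("GET", "/")
    print("chain validated against the pinned CA file")
except ssl.SSLCertVerificationError as exc:
    # This is the behavior a --cacert-style option should have: fail closed.
    print("verification failed, as it should:", exc)
finally:
    conn.close()
```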

Meet ‘Link History,’ Facebook’s new way to track the websites you visit

Facebook recently rolled out a new “Link History” setting that creates a special repository of all the links you click on in the Facebook mobile app. You can opt out if you’re proactive, but the company is pushing Link History on users, and the data is used for targeted ads. As lawmakers introduce tech regulations and Apple and Google beef up privacy restrictions, Meta is doubling down and searching for new ways to preserve its data harvesting empire. The company pitches Link History as a useful tool for consumers “with your browsing activity saved in one place,” rather than another way to keep tabs on your behavior. With the new setting you’ll “never lose a link again,” Facebook says in a pop-up encouraging users to consent to the new tracking method. The company goes on to mention that “When you allow link history, we may use your information to improve your ads across Meta technologies.” The app keeps the toggle switched on in the pop-up, steering users towards accepting Link History unless they take the time to look carefully.
↫ Thomas Germain at Gizmodo

As more and more people in the technology press who used to be critical of Facebook have changed their tune since the launch of Facebook’s Threads – the tech press needs eyeballs in one place for ad revenue, and with Twitter effectively dead, Threads is its replacement – it’s easy to forget just what a sleazy, slimy, and disgusting company Facebook really is.

Federal government is using data from push notifications to track contacts

Government investigators in the United States have used push notification data to pursue people of interest, Sen. Ron Wyden (D-Ore.) said in a letter Wednesday to the Justice Department, revealing for the first time a way in which Americans can be tracked through a basic service provided by their smartphones. Wyden’s letter said the Justice Department had prohibited Apple and Google from discussing the technique and asked it to change the rule, noting that his office had received a tip that foreign governments had also begun requesting the push-notification data.
↫ Drew Harwell for The Washington Post

Not surprising, of course. The one nugget of good news here is that while Apple’s policy is to hand over this data after a mere subpoena (“privacy is a fundamental human right”, everybody), Google requires an actual court order, meaning federal officials must convince a judge of the validity of the request. Assuming this is not a nebulous secret backroom deal but a proper judicial process, I’m actually okay with that – law enforcement does need the ability to investigate potential criminals, and as long as this happens within the boundaries of the law, and is properly overseen and approved by the judiciary every step along the way, I can support it.

Hacking the Canon imageCLASS MF742Cdw/MF743Cdw (again)

There has been quite a bit of documentation about exploiting the CANON Printer firmware in the past. For some more background information I suggest reading these posts by SYNACKTIV, doar-e and DEVCORE. I highly recommend reading all of it if you want to learn more about hacking (CANON) Printers. The TL;DR is: We’re dealing with a Custom RTOS called DRYOS engineered by CANON that doesn’t ship with any modern mitigations like W^X or ASLR. That means that after getting a bit acquainted with this alien RTOS it is relatively easy to write (reliable) exploits for it.

Having a custom operating system doesn’t mean it’s more secure than popular solutions.

ANSI Terminal security in 2023 and finding 10 CVEs

This paper reflects work done in late 2022 and 2023 to audit for vulnerabilities in terminal emulators, with a focus on open source software. The results of this work were 10 CVEs against terminal emulators that could result in Remote Code Execution (RCE); in addition, various other bugs and hardening opportunities were found. The exact context and severity of these vulnerabilities varied, but some form of code execution was found to be possible on several common terminal emulators across the main client platforms of today. Additionally, several new ways to exploit these kinds of vulnerabilities were found. This is the full technical write-up that assumes some familiarity with the subject matter; for a more gentle introduction, see my post on the G-Research site.

Some light reading for the weekend.
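The practical habit this kind of research reinforces: never write untrusted bytes straight to a terminal, whether you are building a log viewer or just printing someone else’s output. A minimal Python sketch of that kind of sanitization (my illustration, not code from the paper):

```python
import re

# Strip C0 and C1 control characters (keeping newline and tab), so escape
# sequences embedded in untrusted text cannot reprogram the terminal.
CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b-\x1f\x7f-\x9f]")

def sanitize(untrusted: str) -> str:
    """Replace control characters with U+FFFD before printing."""
    return CONTROL_CHARS.sub("\ufffd", untrusted)

# Example: an OSC sequence that tries to set the window title. The classic
# set-title/query-title trick is one way such bugs become code execution.
hostile = "normal text\x1b]0;pwned\x07more text"
print(sanitize(hostile))
```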

Clever malvertising attack uses Punycode to look like KeePass’s official website

Threat actors are known for impersonating popular brands in order to trick users. In a recent malvertising campaign, we observed a malicious Google ad for KeePass, the open-source password manager, which was extremely deceiving. We previously reported on how brand impersonations are a common occurrence these days due to a feature known as tracking templates, but this attack used an additional layer of deception. The malicious actors registered a copycat internationalized domain name that uses Punycode, a special character encoding, to masquerade as the real KeePass site. The difference between the two sites is visually so subtle it will undoubtedly fool many people. We have reported this incident to Google, but would like to warn users that the ad is still currently running.

Ad blockers are security tools. This proves it once again.
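Punycode is easy to demystify with a few lines of code. A small Python illustration (the lookalike below is a stand-in in the spirit of the campaign, not a claim about the attacker’s exact domain): one accented character is enough to give a visually near-identical name a completely different ASCII form, and the ASCII form is what DNS actually resolves.

```python
# 'ķ' is U+0137, LATIN SMALL LETTER K WITH CEDILLA -- visually close to 'k'.
# Illustrative lookalike; not asserting this is the campaign's domain.
lookalike = "ķeepass.info"

# Python's built-in "idna" codec performs the Punycode/IDNA conversion.
ascii_form = lookalike.encode("idna").decode("ascii")

print("rendered form:", lookalike)   # what an address bar may display
print("DNS form:", ascii_form)       # the xn-- form that actually resolves

# Round trip back to the Unicode form:
print(ascii_form.encode("ascii").decode("idna"))
```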

I tested an HDMI adapter that demands your location, browsing data, photos, and spams you with ads

I recently got my hands on an ordinary-looking iPhone-to-HDMI adapter that mimics Apple’s branding and, when plugged in, runs a program that implores you to “Scan QR code for use.” That QR code takes you to an ad-riddled website that asks you to download an app that asks for your location data, access to your photos and videos, runs a bizarre web browser, installs tracking cookies, takes “sensor data,” and uses that data to target you with ads. The adapter’s app also kindly informed me that it’s sending all of my data to China.

Just imagine what kind of stuff is happening that isn’t perpetrated by crude idiots, but by competent state-sponsored actors. I don’t believe for a second that at least a number of products from Apple, Dell, HP, and so on, manufactured in Chinese state-owned factories, are not compromised. The temptation is too high, and even if, say, Apple found something inside one of their devices rolling off the factory line – what are they going to do? Publicly blame the Chinese government, whom they depend on for virtually all their manufacturing? You may trust HP, but do you trust the entire chain of people and entities controlling their supply chain?

Cars are the worst product category we have ever reviewed for privacy

Car makers have been bragging about their cars being “computers on wheels” for years to promote their advanced features. However, the conversation about what driving a computer means for its occupants’ privacy hasn’t really caught up. While we worried that our doorbells and watches that connect to the internet might be spying on us, car brands quietly entered the data business by turning their vehicles into powerful data-gobbling machines. Machines that, because of all those brag-worthy bells and whistles, have an unmatched power to watch, listen, and collect information about what you do and where you go in your car. All 25 car brands we researched earned our *Privacy Not Included warning label — making cars the official worst category of products for privacy that we have ever reviewed.

Much to the surprise of nobody.

Why is .US being used to phish so many of us?

Domain names ending in “.US” — the top-level domain for the United States — are among the most prevalent in phishing scams, new research shows. This is noteworthy because .US is overseen by the U.S. government, which is frequently the target of phishing domains ending in .US. Also, .US domains are only supposed to be available to U.S. citizens and to those who can demonstrate that they have a physical presence in the United States.

The answer is GoDaddy.

Not everything is secret in encrypted apps like iMessage and WhatsApp

The mess I’m describing — end-to-end encryption but with certain exceptions — may be a healthy balance of your privacy and our safety. The problem is it’s confusing to know what is encrypted and secret in communications apps, what is not, and why it might matter to you. To illuminate the nuances, I broke down five questions about end-to-end encryption for five communications apps.

This is a straightforward and good overview of what, exactly, is end-to-end encrypted in the various chat and IM applications we use today. There are a lot of ifs and buts here.

Bypassing Bitlocker using a cheap logic analyzer on a Lenovo laptop

The BitLocker partition is encrypted using the Full Volume Encryption Key (FVEK). The FVEK itself is encrypted using the Volume Master Key (VMK) and stored on the disk, next to the encrypted data. This permits key rotations without re-encrypting the whole disk. The VMK is stored in the TPM. Thus the disk can only be decrypted when booted from this computer (there is a recovery mechanism in Active Directory though). In order to decrypt the disk, the CPU will ask the TPM to send the VMK over the SPI bus. The vulnerability should be obvious: at some point in the boot process, the VMK transits unencrypted between the TPM and the CPU. This means that it can be captured and used to decrypt the disk.

This seems like such an obvious design flaw, and yet, that’s exactly how it works – and yes, as this article notes, you can indeed capture the VMK in transit and decrypt the disk.
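To make the key hierarchy concrete, here is a toy Python sketch (it needs the third-party cryptography package; AES-GCM stands in for BitLocker’s real AES-CCM key protectors and AES-XTS volume encryption, and every key and nonce is invented). The point it illustrates is exactly the article’s: whoever captures the VMK can unwrap the FVEK from disk and decrypt everything.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# Toy model of the hierarchy described above -- NOT BitLocker's real formats.
fvek = AESGCM.generate_key(bit_length=256)  # Full Volume Encryption Key
vmk = AESGCM.generate_key(bit_length=256)   # Volume Master Key (lives in TPM)

# Stored on disk: the FVEK wrapped by the VMK, next to the encrypted data.
wrap_nonce = os.urandom(12)
wrapped_fvek = AESGCM(vmk).encrypt(wrap_nonce, fvek, None)

sector_nonce = os.urandom(12)
encrypted_sector = AESGCM(fvek).encrypt(sector_nonce, b"secret sector data", None)

# An attacker who sniffs the VMK off the SPI bus needs nothing else:
sniffed_vmk = vmk  # what the logic analyzer captures in transit
recovered_fvek = AESGCM(sniffed_vmk).decrypt(wrap_nonce, wrapped_fvek, None)
print(AESGCM(recovered_fvek).decrypt(sector_nonce, encrypted_sector, None))
```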

Cyber-attack on UK’s electoral registers revealed

The UK’s elections watchdog has revealed it has been the victim of a “complex cyber-attack” potentially affecting millions of voters. The Electoral Commission said unspecified “hostile actors” had managed to gain access to copies of the electoral registers, from August 2021. Hackers also broke into its emails and “control systems” but the attack was not discovered until October last year. The watchdog has warned people to watch out for unauthorised use of their data.

That seems like a state-level attack, and such data could easily be used for online influence campaigns during elections, something that is happening all over the Western world right now. I wonder just how bad the hack actually was? “Millions of voters” sounds bad, but…

The commission says it is difficult to predict exactly how many people could be affected, but it estimates the register for each year contains the details of around 40 million people.

Holy cow.

Compromising Garmin’s sport watches: a deep dive into GarminOS and its MonkeyC virtual machine

I reversed the firmware of my Garmin Forerunner 245 Music back in 2022 and found a dozen or so vulnerabilities in their support for Connect IQ applications. They can be exploited to bypass permissions and compromise the watch. I have published various scripts and proof-of-concept apps to a GitHub repository. Coordinating disclosure with Garmin, some of the vulnerabilities have been around since 2015 and affect over a hundred models, including fitness watches, outdoor handhelds, and GPS for bikes.

Raise your hands if you’re surprised. Any time someone takes even a cursory glance at internet of things devices or connected anything that isn’t a well-studied platform from the likes of Apple, Google, or Microsoft, they find boatloads of security issues, dangerous bugs, stupid design decisions, and so much more.

Meta wants EU users to apply for permission to opt out of data collection

Ars Technica reports:

Meta announced that starting next Wednesday, some Facebook and Instagram users in the European Union will for the first time be able to opt out of sharing first-party data used to serve highly personalized ads, The Wall Street Journal reported. The move marks a big change from Meta’s current business model, where every video and piece of content clicked on its platforms provides a data point for its online advertisers. People “familiar with the matter” told the Journal that Facebook and Instagram users will soon be able to access a form that can be submitted to Meta to object to sweeping data collection. If those requests are approved, those users will only allow Meta to target ads based on broader categories of data collection, like age range or general location.

This immediately feels like something that shouldn’t be legal. Why on earth do I have to convince Facebook to respect my privacy? I should not have to provide any justification to them whatsoever – if I want them to respect my privacy, they should just damn well do so, no questions asked. It seems I’m not alone:

Other privacy activists have criticized Meta’s plan to provide an objection form to end sweeping data collection. Fight for the Future Director Evan Greer told Ars that Meta’s plan provides “privacy in name only” because users who might opt out if given a “yes/no” option may be less likely to fill out the objection form that requires them to justify their decision. “No one should have to provide a justification for why they don’t want to be surveilled and manipulated,” Greer told Ars.

Exactly.