Privacy, Security Archive
Encrypting the data stored locally on your hard drives is generally a good idea, especially if you use a laptop and take it with you a lot, where thieves might get hold of it. This issue becomes even more pressing if you carry sensitive data as a dissident or whistleblower and have to deal with law enforcement. Or, you know, if you’re an American citizen fascist paramilitary groups like ICE don’t like because your skin colour is too brown or whatever. Windows offers local disk encryption too, in the form of its BitLocker feature, and Microsoft suggests users store their encryption keys on Microsoft’s servers. However, when you do so, these keys will be stored unencrypted, and it turns out Microsoft will happily hand them over to law enforcement. “This is private data on a private computer and they made the architectural choice to hold access to that data. They absolutely should be treating it like something that belongs to the user,” said Matt Green, cryptography expert and associate professor at the Johns Hopkins University Information Security Institute. “If Apple can do it, if Google can do it, then Microsoft can do it. Microsoft is the only company that’s not doing this,” he added. “It’s a little weird… The lesson here is that if you have access to keys, eventually law enforcement is going to come.” ↫ Thomas Brewster Microsoft is choosing to store these keys in unencrypted fashion, and that of course means law enforcement is going to come knocking. With everything that’s happening in the United States at the moment, the platitude of “I have nothing to hide” has lost even more of its meaning, as people – even toddlers – are being snatched from the streets and out of their homes on a daily basis by fascist paramilitaries. Even if times were better, though, Microsoft should still refrain from storing these keys unencrypted. It is entirely possible, nay, trivial to address this shortcoming, but the odds of the company fixing this while trying to suck up to the current US regime seem small. Everybody, but especially those living under totalitarian(-esque) regimes, should be taking extra care to make sure their data isn’t just encrypted, but that the keys are safe as well.
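If you want to see what is actually protecting a BitLocker volume before deciding where its recovery key should live, Windows ships a built-in tool for that. A minimal sketch, assuming a Windows machine and an elevated prompt; manage-bde is the real utility, and the Python wrapper is purely illustrative. Keys already escrowed to a Microsoft account are viewed and deleted at https://account.microsoft.com/devices/recoverykey, not from the command line.

```python
# Minimal sketch: list the key protectors on a BitLocker volume using the
# built-in manage-bde tool. Windows-only, and it must run from an elevated
# prompt. The output lists TPM protectors, recovery passwords, and so on,
# so you can see exactly which keys exist for the volume.
import subprocess

result = subprocess.run(
    ["manage-bde", "-protectors", "-get", "C:"],
    capture_output=True, text=True,
)
print(result.stdout)
```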
Alpine Linux maintainer Ariadne Conill has published a very interesting blog post about the shortcomings of both sudo and doas, and offers a potential different way of achieving the same goals as those tools. Systems built around identity-based access control tend to rely on ambient authority: policy is centralized and errors in the policy configuration or bugs in the policy engine can allow attackers to make full use of that ambient authority. In the case of a SUID binary like doas or sudo, that means an attacker can obtain root access in the event of a bug or misconfiguration. What if there was a better way? Instead of thinking about privilege escalation as becoming root for a moment, what if it meant being handed a narrowly scoped capability, one with just enough authority to perform a specific action and nothing more? Enter the object-capability model. ↫ Ariadne Conill To bring this approach to life, they created a tool called capsudo. Instead of temporarily changing your identity, capsudo can grant far more fine-grained capabilities that match the exact task you’re trying to accomplish. As an example, Conill details mounting and unmounting – with capsudo, you can not only grant a user the ability to mount and unmount any device, but also restrict the user to mounting or unmounting just one specific device. Another example given is how capsudo can be used to give a service account access to only those resources the account needs to perform its tasks. Of course, Conill explains all of this way better than I ever could, with actual example commands and more details. Conill happens to be the same person who created Wayback, illustrating that they have a tendency to look at problems in a unique and interesting way. I’m not smart enough to determine if this approach makes sense compared to sudo or doas, but the way it’s described, it does feel like a superior, more secure solution.
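To get a feel for the object-capability idea itself – this is a toy illustration in Python, not capsudo’s actual interface or syntax – compare holding a narrowly scoped object to holding ambient root:

```python
# Toy illustration of the object-capability model (NOT capsudo's real
# interface): instead of asking "is this user root?", the caller is handed
# an unforgeable object whose methods are the only authority it has.

class MountCapability:
    """Grants mount/unmount authority for exactly one device."""

    def __init__(self, device: str, mountpoint: str):
        self._device = device
        self._mountpoint = mountpoint

    def mount(self) -> None:
        # A real privileged broker would perform the mount(2) call here.
        print(f"mounting {self._device} on {self._mountpoint}")

    def unmount(self) -> None:
        print(f"unmounting {self._mountpoint}")


def grant_backup_capability() -> MountCapability:
    # A privileged broker hands out a capability scoped to one device; the
    # recipient can touch /dev/sdb1 and nothing else -- no ambient root.
    return MountCapability("/dev/sdb1", "/mnt/backup")


cap = grant_backup_capability()
cap.mount()    # allowed: exactly the authority that was granted
cap.unmount()  # allowed
# No capability for /dev/sda1 was handed out, so no code path can touch it.
```

The contrast with sudo is that a bug or misconfiguration here leaks only the one narrowly scoped authority, never a window of full root.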
I suspect that many people who take an interest in Internet privacy don’t appreciate how hard it is to resist browser fingerprinting. Taking steps to reduce it leads to inconvenience and, with the present state of technology, even the most intrusive approaches are only partially effective. The data collected by fingerprinting is invisible to the user, and stored somewhere beyond the user’s reach. On the other hand, browser fingerprinting produces only statistical results, and usually can’t be used to track or identify a user with certainty. The data it collects has a relatively short lifespan – days to weeks, not months or years. While it probably can be used for sinister purposes, my main concern is that it supports the intrusive, out-of-control online advertising industry, which has made a wasteland of the Internet. ↫ Kevin Boone My view on this matter is probably a bit more extreme than some: I believe it should be illegal to track users for advertising purposes, because the data collected and the targeting it enables not only violate basic privacy rights enshrined in most constitutions, they also pose a massive danger in other ways. This very same targeting data is already being abused by totalitarian states to influence our politics, which has had disastrous results. Of course, our own democratic governments’ hands aren’t exactly clean either in this regard, as they increasingly want to use this data to stop “terrorists” and otherwise infringe on basic rights. Finally, any time such data ends up on the black market after data breaches, criminals, organised or otherwise, also get their hands on it. I have no idea what such a ban should look like, or if it’s possible to do this even remotely effectively. In the current political climate in many western countries, which are dominated by the wealthy few and corporate interests, even if such a ban were passed as lip service to concerned constituents, any fines or other deterrents would likely be far too low to make a difference anyway. As such, my desire to have targeted online advertising banned is mostly theory, not practice – further illustrated by the European Union caving like cowards on privacy to even the slightest bit of pressure. Best I can do for now is not partake in this advertising hellhole. I disabled and removed all advertising from OSNews recently, and have always strongly advised everyone to use as many adblocking options as possible. We not only have a Pi-Hole to keep all of our devices at home safe, but also use a second layer of on-device adblockers, and I advise everyone to do the same.
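To make concrete what fingerprinting collects: a tracker script reads a pile of innocuous-looking browser attributes and hashes them into a single identifier. A hypothetical sketch – the values below are hardcoded stand-ins for what JavaScript would actually read via the navigator, screen, and canvas APIs:

```python
# Hypothetical sketch of how a browser fingerprint is derived: hash together
# attributes any page script can read. No single attribute identifies you,
# but the combination is often nearly unique -- and no cookie is required.
import hashlib

attributes = {
    # Stand-ins for values a real tracker reads in the browser.
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/128.0",
    "screen": "2560x1440x24",
    "timezone": "Europe/Amsterdam",
    "language": "en-US",
    "fonts": "Arial,DejaVu Sans,Noto Sans",
    "canvas_hash": "9f2c1a",  # hash of how this machine renders a test image
}

fingerprint = hashlib.sha256(
    "|".join(f"{k}={v}" for k, v in sorted(attributes.items())).encode()
).hexdigest()

print(fingerprint[:16])  # stable across visits as long as the setup is
```

This is also why resisting it is so inconvenient: every attribute you spoof or block changes the combination, which can itself make you more distinctive.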
Choosing a remote networking or VPN-style tool sounds simple on paper. In reality, it’s one of those decisions that often gets overthought. There are protocols, dashboards, pricing tiers, trust models, and endless opinions flying around. Before long, people end up comparing tools feature by feature without ever stepping back to ask what actually matters for their setup.

The Most Common Comparison Mistake

A lot of people begin by searching for “best” tools. Best VPN. Best mesh network. Best remote access solution. That framing alone creates problems because there is no universal best. There’s only what fits a specific use case. A setup that works perfectly for a solo developer managing two servers may fall apart in a team environment. Likewise, something built for enterprise-scale access might feel heavy and unnecessary for a home lab. The better starting point is not the tools themselves, but the questions behind them.

Why Feature Lists Can Be Misleading

Feature lists look objective, but they often hide important trade-offs. For example, a tool might advertise easy onboarding, but rely heavily on third-party services behind the scenes. Another might offer full control but expect more manual setup and maintenance. Neither approach is wrong, but they solve different problems. This is where reading neutral breakdowns can help. Not to copy someone else’s choice, but to understand how different tools approach the same challenge. Some people explore resources that discuss topics like Wireguard vs Tailscale simply to see how trade-offs are framed, rather than to pick a winner. The value is in the reasoning, not the conclusion.

Control, Simplicity, and Trust

Most remote networking decisions eventually circle back to three themes:

Control: How much visibility and configuration freedom is needed? Some setups benefit from abstraction. Others benefit from transparency.

Simplicity: Fewer moving parts usually mean fewer things to break. But simplicity for the user sometimes means complexity behind the scenes.

Trust: This includes trust in the software itself, but also in how authentication, coordination, and updates are handled over time.

Balancing these three factors often matters more than raw performance numbers.

Avoiding the “Set It and Forget It” Trap

Another common mistake is assuming the first setup will last forever. Networks evolve. Teams grow. Requirements change. A tool that felt perfect at the beginning may start to feel limiting six months later. That doesn’t mean it was a bad choice. It means the environment changed. Keeping this in mind makes comparisons less stressful. The goal isn’t permanence. It’s suitability for the current stage.

A More Practical Way to Compare

Instead of asking which tool is better, it’s often more useful to ask:

What problem does this tool solve particularly well?
What complexity does it introduce in return?
What assumptions does it make about how the network should work?

Those answers tend to reveal whether a tool fits, without needing to rank it against everything else.

Stepping Back From the Noise

The ecosystem around remote networking tools is active and opinionated. Comparisons, debates, and hot takes are everywhere. That can be helpful, but it can also cloud judgment. The real success comes from choosing something that matches how the network is actually used, not how it looks on a chart or list.
Why are so many business leaders suddenly questioning their old approach to managing risk? The answer lies in the world’s growing unpredictability. From supply chain shocks and cyber threats to environmental concerns and social accountability, risk is no longer confined to balance sheets. It’s emotional, digital, and public. Leaders are now expected to anticipate problems, not just react to them. The stakes are higher, and every mistake can go viral within minutes. This fast-changing environment is forcing companies to modernize their strategies. The rise of digital tools like GRC software has made risk management more proactive, connected, and transparent. Leaders now realize that effective risk control isn’t about fear, it’s about foresight. They’re rethinking how to protect their organizations while staying innovative and trustworthy in a world that demands both.

Global Uncertainty Keeps Raising the Stakes

The world economy has become deeply connected, and that’s both a strength and a weakness. A disruption in one region can affect companies across the globe. Leaders can no longer rely on fixed plans when external risks like political tension or sudden market shifts can derail operations overnight. Uncertainty has become the new constant. That’s why leaders are rethinking how they plan, assess, and respond to risk. They know that flexibility, not rigidity, is now the true measure of resilience.

Data Exposure and Digital Risks Create New Pressure

Cybersecurity threats have turned into boardroom issues. One data breach can destroy years of reputation and trust. With digital transformation spreading across industries, leaders must face new risks that didn’t exist before. This digital pressure is making companies more cautious and analytical. Many have realized that manual systems can’t keep up with the pace of modern threats. They’re moving toward automated, integrated frameworks that give them real-time visibility and faster responses.

Regulatory Complexity Forces Smarter Oversight

Laws and compliance standards keep changing. What was acceptable last year may not be legal today. This constant regulatory motion puts enormous pressure on leadership teams to stay current. The result is a growing demand for precision and accountability. Modern tools like GRC software help simplify compliance tracking, giving leaders the confidence to operate safely within changing frameworks. They can focus on strategy rather than paperwork while keeping their organizations protected.

Stakeholder Expectations Push for Transparency

Customers, investors, and employees all expect honesty and responsibility. People want to know how companies behave, not just what they sell. This shift in public expectation has become a major trigger point for leaders. Organizations are now judged by their actions and values. Leaders are rethinking how to make transparency part of their brand identity, not just a compliance checkbox. They understand that openness builds long-term trust and that trust is now currency.

Sustainability and Ethics Redefine Leadership Priorities

Modern leaders no longer view profit as their only measure of success. They are now judged on how well they protect the planet, their workers, and their communities. Environmental, social, and governance (ESG) standards have entered the boardroom as key performance indicators. This broader sense of responsibility is changing how leaders define risk. The focus has shifted from short-term financial stability to long-term ethical resilience.
Companies that ignore sustainability today face reputation loss tomorrow.

Leaders are rethinking corporate risk because the world has changed faster than their systems have. Old models built on predictability can’t survive in an unpredictable age. Every trigger from technology to transparency reminds executives that risk management must evolve. This shift isn’t about fear; it’s about foresight. Modern leaders want control, clarity, and confidence in every decision. With smarter frameworks and digital support systems guiding the way, they’re turning risk into a path for stronger governance, deeper trust, and long-term stability.
The lone volunteer maintainer of libxml2, one of the open source ecosystem’s most widely used XML parsing libraries, has announced a policy shift that drops support for embargoed security vulnerability reports. This change highlights growing frustration among unpaid maintainers bearing the brunt of big tech’s security demands without compensation or support. Wellnhofer’s blunt assessment is that coordinated disclosure mostly benefits large tech companies while leaving maintainers doing unpaid work. He criticized the OpenSSF and Linux Foundation membership costs as a financial barrier to single person maintainers gaining additional support. ↫ Sarah Gooding The problem is that, according to Wellnhofer, libxml2 was never supposed to be widely used, but now every major technology company with billions in quarterly revenue is basically expecting an unpaid maintainer to fix the security issues – many of which are questionable – they throw his way. The point is that libxml2 never had the quality to be used in mainstream browsers or operating systems to begin with. It all started when Apple made libxml2 a core component of all their OSes. Then Google followed suit and now even Microsoft is using libxml2 in their OS outside of Edge. This should have never happened. Originally it was kind of a growth hack, but now these companies make billions of profits and refuse to pay back their technical debt, either by switching to better solutions, developing their own or by trying to improve libxml2. The behavior of these companies is irresponsible. Even if they claim otherwise, they don’t care about the security and privacy of their users. They only try to fix symptoms. ↫ Nick Wellnhofer It’s wild that a library never intended to be widely used in any critical infrastructure is now used all over the place, even though it just does not have the level of quality and security needed to perform such a role. These are the words of Wellnhofer himself – an addition to the project’s readme now makes this point very clear, and I absolutely love the wording: This is open-source software written by hobbyists, maintained by a single volunteer, badly tested, written in a memory-unsafe language and full of security bugs. It is foolish to use this software to process untrusted data. As such, we treat security issues like any other bug. Each security report we receive will be made public immediately and won’t be prioritized. ↫ libxml2’s readme If you want libxml2 to fulfill a role it was never intended to fulfill, make it happen. With contributions. With money. Don’t just throw a whole slew of security demands a sole maintainer’s way and hope he will do the work for you.
Brands can now understand their audience with near-perfect precision, adjusting messages in real-time to match shifting interests and behaviors. AI marketing platforms are transforming digital strategies by using machine learning and predictive modeling to go beyond basic demographics, relying on real-time insights and behavioral data for more relevant results.

Smarter Data, Sharper Decisions

AI-powered marketing tools from a top AI marketing company thrive on data. Every click, scroll, and pause becomes a signal. These platforms process large volumes quickly, revealing patterns like preferences, intent, and emotional triggers before any action is taken. This shift enables more accurate content delivery. Predictive algorithms match content to user interest automatically, removing guesswork and boosting engagement and conversions. Audience targeting now predicts what a person is likely to do next, not just what they did before. This makes the platforms more effective than older automation tools.

Dynamic Personalization Without Delay

Real-time decision-making is a key feature of many AI-driven platforms. Instead of following a static set of rules, the system updates targeting strategies on the fly. It adapts based on how users interact with a campaign as it runs. For example, a consumer might engage with a product but hesitate to buy. In seconds, the platform can deliver tailored content like price offers or product comparisons based on behavior. This dynamic personalization boosts both brand performance and user experience by showing content that feels timely and relevant.

Precision in Channel and Timing

Knowing when and where to deliver a message is as important as the message itself. AI marketing platforms analyze data across multiple channels such as display, search, social, and even connected TV. They find the most effective path to reach a specific audience segment. Timing plays a major role in campaign performance. The same message can have different results depending on when it’s seen. By analyzing behavior patterns and past responses, platforms identify the best delivery windows. This precision improves return on ad spend and reduces wasted impressions, making campaigns more efficient.

Forecast-Based Audience Grouping

Many marketing teams once relied on static audience segments like age, income, or location. However, these categories offer only surface-level insight. AI marketing platforms now go deeper by creating dynamic segments that evolve with each new data input, offering a more flexible and responsive targeting method. These predictive segments are based on real-time indicators. Platforms cluster users by shared behaviors, browsing habits, and content engagement. Two users from the same demographic may have very different purchase patterns. Predictive segmentation separates them, allowing for more accurate targeting and more relevant messaging.

Leveraging Deep Contextual Intelligence

A standout feature in certain platforms involves advanced contextual modeling. Unlike simple keyword-based systems, this approach examines the full context of a digital environment. Sentiment, intent, and placement quality are analyzed in real time. By understanding both the content and the consumer’s behavior within that content, platforms can deliver better-aligned ads. These placements not only perform better but also feel more natural to users. This use of deep contextual signals sets leading platforms apart from basic automation tools.
It allows marketers to scale quality targeting across a wide range of content environments without sacrificing relevance or brand safety.

AI marketing platforms are redefining how brands connect with their audiences through precision, speed, and adaptability. By combining real-time insights, predictive segmentation, and contextual intelligence, these tools streamline targeting and drive higher engagement. As digital marketing grows more competitive, partnering with a top AI marketing company offers the strategic advantage needed to stay ahead, ensuring campaigns are not just seen but acted upon more effectively. Staying innovative with the right platform can turn data into decisions that truly move the needle.
Microsoft’s Recall feature, which takes screenshots of the contents of your screen every few seconds, saves them, and then runs text and image recognition to extract information from them, has had a rocky start. Even now that it’s out there and Microsoft deems it ready for everyone to use, it has huge security and privacy gaps, and one of them is that applications that contain sensitive information, such as the Windows Signal application, cannot ‘opt out’ of having their contents scraped. Signal was rather unhappy with this massive privacy risk, and decided to do something about it. It’s called screen security, and is Windows-only because it’s specifically designed to counter Windows Recall. If you attempt to take a screenshot of Signal Desktop when screen security is enabled, nothing will appear. This limitation can be frustrating, but it might look familiar to you if you’ve ever had the audacity to try and take a screenshot of a movie or TV show on Windows. According to Microsoft’s official developer documentation, setting the correct Digital Rights Management (DRM) flag on the application window will ensure that “content won’t show up in Recall or any other screenshot application.” So that’s exactly what Signal Desktop is now doing on Windows 11 by default. ↫ Joshua Lund on the Signal blog Microsoft cares more about enforcing the rights of massive corporations than it does about respecting the privacy of its users. As such, everything is in place in Windows to ensure neither you nor Recall can take screenshots of, I don’t know, the Bee Movie, but nothing has been put in place to protect your private and sensitive messages in a service like Signal. This really tells you all you need to know about who Microsoft truly cares about, and it sure as hell isn’t you, the user. What Signal is doing is absolutely brilliant. By turning Windows’ digital rights management features against Recall to protect the privacy of Signal users, Signal has made it impossible – or at least very hard – for Microsoft to address this. Of course, this also means that taking screenshots of the Signal application on Windows for legitimate purposes is more cumbersome now, but since you can temporarily turn screen security off to take a screenshot, it’s not impossible. I almost want other Windows developers to employ this same trick, just to make Recall less valuable, but that’s probably not a great idea considering how much it would annoy users just trying to take legitimate screenshots. My uneducated guess is that this is exactly why Microsoft isn’t providing developers with the kind of fine-grained controls to let Recall know what it can and cannot take screenshots of: Microsoft must know Recall is a feature for shareholders, not for users, and that users will ask developers to opt out of any Recall snooping if such APIs were officially available. Microsoft wants to make it as hard as possible for applications to opt out of being sucked into the privacy black hole that is Recall, but in doing so, might be pushing developers to use DRM to achieve the same goal. Just delicious. Signal also signed off with a scathing indictment of “AI” as a whole. “Take a screenshot every few seconds” legitimately sounds like a suggestion from a low-parameter LLM that was given a prompt like “How do I add an arbitrary AI feature to my operating system as quickly as possible in order to make investors happy?” — but more sophisticated threats are on the horizon.
The integration of AI agents with pervasive permissions, questionable security hygiene, and an insatiable hunger for data has the potential to break the blood-brain barrier between applications and operating systems. This poses a significant threat to Signal, and to every privacy-preserving application in general. ↫ Joshua Lund on the Signal blog Heed this warning.
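As a technical footnote to the above: the “DRM flag” Microsoft’s documentation refers to is the window display affinity. Signal Desktop is an Electron app, and Electron exposes this as setContentProtection(), which on recent Windows reportedly maps to the Win32 call sketched below. A minimal, hedged illustration via ctypes, assuming you already have a handle (hwnd) to a window your own process owns:

```python
# Minimal sketch of the Win32 mechanism behind "screen security".
# Windows-only: ctypes.windll exists only on Windows. The documented API is
# SetWindowDisplayAffinity; WDA_EXCLUDEFROMCAPTURE is the "DRM flag" that
# blanks the window in screenshots, capture APIs, and Recall.
import ctypes

user32 = ctypes.windll.user32

WDA_NONE = 0x00                # normal: anyone may capture the window
WDA_EXCLUDEFROMCAPTURE = 0x11  # excluded from screenshots and Recall

def set_screen_security(hwnd: int, enabled: bool) -> bool:
    """Toggle capture protection on a window this process owns."""
    affinity = WDA_EXCLUDEFROMCAPTURE if enabled else WDA_NONE
    return bool(user32.SetWindowDisplayAffinity(hwnd, affinity))
```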
A decade ago, I published a book on privacy, “Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance.” In the book, and since then, in articles and speeches, I have been dispensing advice to people on how to protect their privacy. But my advice did not envision the moment we are in – where the government would collaborate with a tech CEO to strip-mine all of our data from government databases and use it to pursue political enemies. In the parlance of cybersecurity, I had the wrong “threat model,” which is a fancy way of describing the risks I was seeking to mitigate. I had not considered that the United States might be swept into the rising tide of what scholars call “competitive authoritarianism” – authoritarian regimes that retain some of the trappings of democracy, such as elections, but use the power of the state to crush any meaningful dissent. ↫ Julia Angwin Democracy is not nearly as much of a given as many people think, and in this day and age, where massive amounts of Americans’ data and personal information are collected and stored by the very corporations supporting the Trump regime, Americans have to think very differently about where digital threats actually come from. Nothing protects any American – or anyone visiting America – from ending up in an El Salvadorian concentration camp. Plan accordingly.
Some more light reading: While it was already established that the open source supply chain was often the target of malicious actors, what is stunning is the amount of energy invested by Jia Tan to gain the trust of the maintainer of the xz project, acquire push access to the repository and then among other perfectly legitimate contributions insert – piece by piece – the code for a very sophisticated and obfuscated backdoor. This should be a wake up call for the OSS community. We should consider the open source supply chain a high value target for powerful threat actors, and to collectively find countermeasures against such attacks. In this article, I’ll discuss the inner workings of the xz backdoor and how I think we could have mechanically detected it thanks to build reproducibility. ↫ Julien Malka It’s a very detailed look at the situation and what Nix could do to prevent it in the future.
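The mechanical detection Malka argues for boils down to producing the “same” sources twice from independent channels and comparing the results bit for bit. A hedged, simplified sketch of the idea – the paths are illustrative, and a real check has to account for the generated autotools files that release tarballs legitimately add on top of the git tree, which is exactly where the malicious m4 script hid:

```python
# Sketch of reproducibility-based detection: the xz backdoor lived only in
# the release tarballs, not in the git tree, so diffing a tree unpacked from
# the tarball against one regenerated from the git tag flags the mismatch.
import hashlib
from pathlib import Path

def tree_digest(root: Path) -> str:
    """Hash every file in a source tree into one stable digest."""
    h = hashlib.sha256()
    for path in sorted(root.rglob("*")):
        if path.is_file():
            h.update(path.relative_to(root).as_posix().encode())
            h.update(path.read_bytes())
    return h.hexdigest()

# Hypothetical locations: one tree from the signed release tarball, one
# produced straight from the corresponding git tag.
from_tarball = tree_digest(Path("xz-5.6.1-from-tarball"))
from_git = tree_digest(Path("xz-5.6.1-from-git-tag"))

if from_tarball != from_git:
    print("release tarball does not match tagged sources -- investigate")
```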
We now have the bizarre situation that anyone with any sense can see that America is no longer a reliable partner, and that the entire US business world bows to Trump’s dictatorial will, but we STILL are doing everything we can to transfer entire governments and most of our own businesses to their clouds. Not only is it scary to have all your data available to US spying, it is also a huge risk for your business/government continuity. From now on, all our business processes can be brought to a halt with the push of a button in the US. And not only will everything then stop, will we ever get our data back? Or are we being held hostage? This is not a theoretical scenario, something like this has already happened. ↫ Bert Hubert The cold and harsh reality is that the alliance between the United States and Europe, the single most powerful alliance in human history, is over. Voters in the United States prefer that their country ally itself with the brutal and genocidal dictator of Russia, instead of being allied with the democratic and free nations of Europe. That’s their choice to make, their consequences to face, and inevitably, their cross to bear. Governments in Europe have not yet fully accepted that they can no longer rely on the United States for, well, anything. Whether it be existential, like needing to shore up defense spending and possibly unifying European militaries, or something more mundane, like which computer systems European governments use, the United States should be treated in much the same way as Russia or China. Europe has to fend for itself, spend on itself, and build for itself, instead of assuming that the Americans will come through on any “promise” they make. An unreliable partner like the US is a massive liability. Bert Hubert is exactly right. European data needs to be stored within European borders. Just as we wouldn’t store our data on servers owned or controlled by the Chinese government, we shouldn’t be storing our data on servers owned or controlled by the US government. The general European public is already changing its buying habits – it’s time our governments do so too.
Since its inception, Let’s Encrypt has been sending expiration notification emails to subscribers that have provided an email address to us. We will be ending this service on June 4, 2025. ↫ Josh Aas on the Let’s Encrypt website They’re ending the expiration notification service because it’s costly, adds a ton of complexity to their systems, and constitutes a privacy risk because of all the email addresses they have to keep on file just for this feature. Considering there are other services that offer this functionality, and the fact that many people automate this already anyway, it makes sense to stop sending out emails. Anyway, just a heads-up.
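If you relied on those emails, the check is easy to self-host. A minimal sketch using only Python’s standard library, assuming an ACME client such as certbot already handles the actual renewal and you just want an independent early warning; the hostname is a placeholder:

```python
# Minimal self-serve expiry check: fetch a server's TLS certificate and
# warn when it is close to expiring. Run it from cron against your own
# domains instead of relying on expiry notification emails.
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # notAfter looks like: 'Jun  4 12:00:00 2025 GMT'
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    not_after = not_after.replace(tzinfo=timezone.utc)
    return (not_after - datetime.now(timezone.utc)).days

remaining = days_until_expiry("example.com")  # placeholder hostname
if remaining < 14:
    print(f"certificate expires in {remaining} days -- renew now")
```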
The content of a private message should only be known by the sender and recipient. Unfortunately, this rule can change when the message is sent online. A malicious person might intercept and read the message or change its content. End-to-end encryption (E2EE) is an emerging technology that protects content in transit. This technology ensures only the sender and recipient view the content. End-to-end security services are important in messaging, and they are changing how we communicate in modern life and beyond.

What end-to-end encrypted messaging involves

E2EE involves sending messages that only the targeted recipient can read. In the past, messages sent through messaging apps were not secured. Any person could intercept these messages, and multiple people could read them before targeted recipients received them. This technology encrypts the message before it leaves the sender’s device. It lets the information travel securely online and be decrypted once it reaches the recipient’s device. E2EE ensures no other person can read the message, including service providers.

Popular messaging apps such as Messenger and WhatsApp use this technology. Sometimes Messenger accounts are hacked, leading to data loss and exposure to the wrong people. When this happens, hackers might change your profile, deny you access, or begin sending scam messages to people. In these situations you need information, and for that https://moonlock.com/messenger-hacked is the best option. Knowing what to do if Messenger is hacked, and acting on it in time, can save you from unexpected and uncontrollable situations. If you notice suspicious messages, take action and log out of Messenger quickly. You may deny hackers access to your data once you sign out of Messenger. Next, change your password and make it harder for malicious people to guess.

How does end-to-end encryption work?

Popular messaging apps contain prebuilt E2EE features that ensure messages travel safely online. They use application-layer encryption, ensuring messages sent between devices retain their integrity and reach securely identified receiving devices. Application encryption, such as FaceTime end-to-end encryption, works in stages: when a user composes a message, the app generates keys, and when the user taps the send button, the message is encrypted with the encryption key. The information remains encrypted until it reaches the recipient’s device, where the app automatically uses the decryption key to open the message. Sometimes senders may use a third-party encryption app instead; in that case, recipients must use the key to decrypt the message.

How does E2EE benefit individuals and organizations?

Messaging apps encrypt and send all types of data – from text to pictures, files, music, and videos. Data senders prepare the information they want to send, and the app activates an encryption algorithm ready for safe transmission. This data only decrypts once it arrives on the recipient’s device. Users can apply other security measures to protect this data after decrypting. Application encryption, or E2EE, offers many advantages to individuals and organizations.

Can encrypted data be hacked?

Several challenges face E2EE, although the benefits outweigh these setbacks. As much as there is a good side of technology, the bad side exists too.
Hackers use advanced resources to cheat encryption systems and penetrate them. Several complexities face E2EE systems, and encryption key management tops the list. E2EE can be defeated, especially if the sender’s or recipient’s device is vulnerable due to compromised security. This exposes the secured data to theft, and hackers can use advanced systems to crack the keys. Although the data is secured, it can be intercepted if the transmission system is weak. Users may also fail to comply with regulations, leading to vulnerable devices. Advancing technology leads to safer online data transmission but also opens breach possibilities. This calls for message senders and recipients to take extra security measures and protect their devices. They should understand how to encrypt an iPhone and other internet-connected devices.

The future of messaging

Organizations and individuals are adopting end-to-end encryption in messaging due to its security benefits. They overlook the setbacks and embrace the benefits that overshadow them. 2024 data shows that more than 90% of organizations have started or completed digital transformation processes. This demands more elaborate and proactive security measures for messaging data. This need is pushing organizations into innovation and the creation of advanced cybersecurity solutions. In the future, there will be wider E2EE adoption as part of safety measures. There is a rush to develop encryption systems that can withstand quantum computing, which poses a big threat to current E2EE goals and advancements. One solution that might counter this challenge is homomorphic encryption, which is already in advanced development phases. AI threat detection models are already in use and advancing, making them a key part of future prevention of message interception, key decoding, and hacking.

Security experts predict there will be a new generation of messaging apps soon. Several advanced messaging services have already emerged, offering top security and privacy to users. Mac users, for instance, are already benefiting from FaceTime end-to-end encryption. Services such as Telegram, WhatsApp Business, and Messenger are offering fresh hope in safe messaging. Privacy and security experts foresee a future where E2EE works the same on any device, operating system, or messaging service. Cryptographic compliance could become stricter to ensure the growing user base is protected. This will require providers to adopt a balanced model between security and compliance. There is a growing need for organizations and providers to spread user awareness, and this need will only increase as the future approaches. More communication channels will be encrypted and user authentication will advance further. This approach will provide flawless integration with collaboration tools across the board. More organizations will adopt zero-trust models, and E2EE could become the security default soon.

Strengthening messaging channels at the user level

Messaging apps like WhatsApp encrypt files, texts, and calls, ensuring secure transmission.
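To make the flow described above concrete, here is a minimal sketch of public-key E2EE using the PyNaCl library (Python bindings for libsodium). Real messengers layer key agreement, ratcheting, and authentication on top of primitives like these (the Signal protocol, for instance); this only demonstrates the core property that servers in the middle see nothing but ciphertext:

```python
# Minimal E2EE sketch with PyNaCl (pip install pynacl). Each party holds a
# keypair; only public keys ever leave the device. The service provider
# relaying the message can store or inspect it, but sees only ciphertext.
from nacl.public import PrivateKey, Box

# Key generation happens on each user's own device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# Anything in transit or at rest on a server is just this ciphertext.
# Only Bob's private key (paired with Alice's public key) can open it.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```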
For years now, people have believed that their smartphones are listening to their conversations through their microphones, all the time, even when the microphone is clearly not activated. Targeted advertising lies at the root of this conviction; when you just had a conversation with a friend about buying a pink didgeridoo and a flannel ukulele, and you then get ads for pink didgeridoos and flannel ukuleles, it makes intuitive sense to assume your phone was listening to you. How else would Google, Amazon, Facebook, or whatever, know your deepest didgeridoo desires and untapped ukulele urges? The truth is that targeted advertising using cross-site cookies and profile building is far more effective than people think, and on top of that, people often forget what they did on their phone or laptop ten minutes ago, let alone yesterday or last week. Smartphones are not secretly listening to you, and it’s not through covert microphone activation that they know about your musical interests. But then. Media conglomerate Cox Media Group has been pitching tech companies on a new targeted advertising tool that uses audio recordings culled from smart home devices. The existence of this program was revealed late last year. Now, however, 404 Media has also gotten its hands on additional details about the program through a leaked pitch deck. The contents of the deck are creepy, to say the least. Cox’s tool is creepily called “Active Listening” and the deck claims that it works by using smart devices, which can “capture real-time intent data by listening to our conversations.” After the data is captured, advertisers can “pair this voice-data with behavioral data to target in-market consumers,” the deck says. The vague use of artificial intelligence to collect data about consumers’ online behavior is also mentioned, with the deck noting that consumers “leave a data trail based on their conversations and online behavior” and that the AI-fueled tool can collect and analyze said “behavioral and voice data from 470+ sources.” ↫ Lucas Ropek at Gizmodo Looking at the pitch deck in question, you can argue that it’s not even referring to smartphones, and that it is incredibly vague – probably on purpose – about what “active listening” and “conversations” are really referring to. It might as well be simply referring to the various conversations on unencrypted messaging platforms, directly with companies, or stuff like that. “Smart devices” is also intentionally vague, and could be anything from one of those smart fridges to your smartphone. But you could also argue that yes, this seems to be pretty much referring to “listening to our conversations” in the most literal sense, by somehow – we have no idea how – turning on our smartphone microphones, in secret, without iOS or Android, or Apple or Google, knowing about it? It seems far-fetched, but at the same time, a lot of corporate and government programs and efforts seemed far-fetched until some whistleblower spilled the beans. The feeling that your phones are listening to you without your consent, in secret, will never go away. Even if some irrefutable evidence came up that it isn’t possible, it’s just too plausible to be cast aside.
Telegram doesn’t hold up to the promise of being private, nor secure. The end-to-end encryption is opt-in, only applies to one-on-one conversations and uses a controversial ‘homebrewn’ encryption algorithm. The rest of this article outlines some of the fundamentally broken aspects of Telegram. ↫ h3artbl33d Telegram is not a secure messenger, nor is it a platform you should want to be on. Chats are not encrypted by default, and are stored in plain text on Telegram’s servers. Only chats between two (not more!) people who also happen to both be online at that time can be “encrypted”. In addition, the quotation marks highlight another massive issue with Telegram: its “encryption” is non-standard, home-grown, and countless security researchers have warned against relying on it. Telegram’s issues go even further than this, though. The application also copies your contacts to its servers and keeps them there, they’ve got a “People nearby” feature that shares location data, and so much more. The linked article does a great job of listing the litany of problems Telegram has, backed up by sources and studies, and these alone should convince anyone to not use Telegram for anything serious. And that’s even before we talk about Telegram’s utter disinterest in stopping the highly illegal activities that openly take place on its platform, from selling drugs, down to far more shocking and dangerous activities like sharing revenge porn, CSAM, and more. Telegram has a long history of not giving a single iota about shuttering groups that share and promote such material, leaving victims of such heinous crimes out in the cold. Don’t use Telegram. A much better alternative is Signal, and hell, even WhatsApp, of all things, is a better choice.
Google’s own Project Zero security research effort, which often finds and publishes vulnerabilities in both other companies’ and its own products, set its sights on Android once more, this time focusing on third-party kernel drivers. Android’s open-source ecosystem has led to an incredible diversity of manufacturers and vendors developing software that runs on a broad variety of hardware. This hardware requires supporting drivers, meaning that many different codebases carry the potential to compromise a significant segment of Android phones. There are recent public examples of third-party drivers containing serious vulnerabilities that are exploited on Android. While there exists a well-established body of public (and In-the-Wild) security research on Android GPU drivers, other chipset components may not be as frequently audited so this research sought to explore those drivers in greater detail. ↫ Seth Jenkins They found a whole host of security issues in these third-party kernel drivers in phones both from Google itself as well as from other companies. An interesting point the authors make is that because it’s getting ever harder to find 0-days in core Android, people with nefarious intent are looking at other parts of an Android system now, and these kernel drivers are an inviting avenue for them. They seem to focus mostly on GPU drivers, for now, but it stands to reason they’ll be targeting other drivers, too. As usual with Android, the discovered exploits were often fixed, but the patches took way, way too long to find their way to end users due to the OEMs lagging behind when it comes to sending those patches to users. The authors propose wider adoption of Android APEX to make it easier for OEMs to deliver kernel patches to users faster. I always like the Project Zero studies and articles, because they really take no prisoners, and whether they’re investigating someone else like Microsoft or Apple, or their own company Google, they go in hard, do not sugarcoat their findings, and apply the same standards to everyone.
William Brown, developer of webauthn-rs, has written a scathing blog post detailing how corporate interests – namely, Apple and Google – have completely and utterly destroyed the concept of passkeys. The basic gist is that Apple and Google were more interested in control and locking in users than in providing a user-friendly passwordless future, and in doing so have made passkeys effectively a worse user experience than just using passwords in a password manager. Since then Passkeys are now seen as a way to capture users and audiences into a platform. What better way to encourage long term entrapment of users then by locking all their credentials into your platform, and even better, credentials that can’t be extracted or exported in any capacity. Both Chrome and Safari will try to force you into using either hybrid (caBLE) where you scan a QR code with your phone to authenticate – you have to click through menus to use a security key. caBLE is not even a good experience, taking more than 60 seconds work in most cases. The UI is beyond obnoxious at this point. Sometimes I think the password game has a better ux. The more egregious offender is Android, which won’t even activate your security key if the website sends the set of options that are needed for Passkeys. This means the IDP gets to choose what device you enroll without your input. And of course, all the developer examples only show you the options to activate “Google Passkeys stored in Google Password Manager”. After all, why would you want to use anything else? ↫ William Brown The whole post is a sobering read of how a dream of passwordless, and even usernameless, authentication was right within our grasp, usable by everyone, until Apple and Google got involved and enshittified the standards and tools to promote lock-in and their own interests above the user experience. If even someone as knowledgeable about this subject as Brown, who writes actual software to make these things work, is advising against using passkeys, you know something’s gone horribly wrong. I also looked into possibly using passkeys, including using things like a Yubikey, but the process seems so complex and unpleasant that I, too, concluded just sticking to Bitwarden and my favourite open source TFA application was a far superior user experience.
After observing a few odd symptoms around liblzma (part of the xz package) on Debian sid installations over the last weeks (logins with ssh taking a lot of CPU, valgrind errors) I figured out the answer: The upstream xz repository and the xz tarballs have been backdoored. At first I thought this was a compromise of debian’s package, but it turns out to be upstream. ↫ Andres Freund I don’t normally report on security issues, but this is a big one not just because of the severity of the issue itself, but also because of its origins: it was created by and added to upstream xz/liblzma by a regular contributor of said project, and makes it possible to bypass SSH authentication. It was discovered more or less by accident by Andres Freund. I have not yet analyzed precisely what is being checked for in the injected code, to allow unauthorized access. Since this is running in a pre-authentication context, it seems likely to allow some form of access or other form of remote code execution. ↫ Andres Freund The exploit was only added to the release tarballs, and not present when taking the code off GitHub manually. Luckily for all of us, the exploit has only made its way to the bloodiest of bleeding-edge distributions, such as Fedora Rawhide (the upcoming Fedora 41) and Debian testing, unstable and experimental, and as such has not been widely spread just yet. Nobody seems to know quite yet what the ultimate intent of the exploit is. Of note: the person who added the compromising code was recently added as a Linux kernel maintainer.
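For those wanting to triage their own systems, the backdoor shipped in xz/liblzma releases 5.6.0 and 5.6.1 via the release tarballs. A quick, hedged sketch of the first check everyone was doing at the time – note that distributions later shipped patched or reverted builds, so the version string is a heuristic, not proof either way:

```python
# Quick triage sketch: flag the xz/liblzma versions known to carry the
# backdoor (5.6.0 and 5.6.1). Version alone is a heuristic; consult your
# distribution's advisory for the authoritative fix status.
import subprocess

COMPROMISED = {"5.6.0", "5.6.1"}

out = subprocess.run(["xz", "--version"], capture_output=True, text=True).stdout
# Output looks like: "xz (XZ Utils) 5.6.1" / "liblzma 5.6.1"
versions = {line.rsplit(" ", 1)[-1] for line in out.strip().splitlines()}

if versions & COMPROMISED:
    print("potentially backdoored xz/liblzma installed:", versions)
```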
The command line option in question is curl’s --cacert, which tells curl to verify the server against only the CA certificates in the given file. When this command line option is used with curl on macOS, the version shipped by Apple, it seems to fall back and checks the system CA store in case the provided set of CA certs fail the verification. A secondary check that was not asked for, is not documented and plain frankly comes completely by surprise. Therefore, when a user runs the check with a trimmed and dedicated CA cert file, it will not fail if the system CA store contains a cert that can verify the server! This is a security problem because now suddenly certificate checks pass that should not pass. ↫ Daniel Stenberg Absolutely wild that Apple does not consider this a security issue.
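A hedged reproduction sketch of what Stenberg describes: verification against a deliberately unrelated CA bundle should fail on any stock curl, but on Apple’s build it reportedly succeeds anyway thanks to the silent fallback to the system CA store. The PEM file name here is illustrative:

```python
# Repro sketch: point --cacert at a CA file that does NOT sign the target
# site. Stock curl should fail verification (exit code 60, "SSL certificate
# problem"); Apple's macOS build reportedly returns 0 due to the fallback.
import subprocess

result = subprocess.run(
    ["curl", "--silent", "--show-error", "--output", "/dev/null",
     "--cacert", "unrelated-ca.pem",  # illustrative: a CA unrelated to the site
     "https://example.com"],
    capture_output=True, text=True,
)

print(result.returncode, result.stderr.strip())
```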
The supermassive leak contains data from numerous previous breaches, comprising an astounding 12 terabytes of information, spanning over a mind-boggling 26 billion records. The leak, which contains LinkedIn, Twitter, Weibo, Tencent, and other platforms’ user data, is almost certainly the largest ever discovered. ↫ Vilius Petkauskas at cybernews Holy cow.