Posts Tagged ‘security’
“Privacy is rarely lost in one fell swoop. It is usually eroded over time, little bits dissolving almost imperceptibly until we finally begin to notice how much is gone.”*…
… And now, indeed, we’re beginning to notice. Hana Lee Goldin surveys the state of play– who’s buying our personal information, what they’re using it for, and how the system works behind the screen– and considers our options…
Sometime in the mid-2000s, most of us started handing over pieces of ourselves to the internet without giving the exchange a second thought. We created email accounts, signed up for social media, bought things online, downloaded apps, swiped loyalty cards, connected fitness trackers, stored photos in the cloud, and agreed to terms of service that almost none of us have ever read in full. We did this thousands of times over two decades and counting, and each interaction felt small enough to be inconsequential.
But the accumulation is enormous. More than 6 billion people now use the internet, and each one makes an estimated 5,000 digital interactions per day. Most of those interactions happen without our conscious awareness: a GPS ping, a page load, an app opening, a browser cookie refreshing, a device checking in with a cell tower. The average person in 2010 made an estimated 298 digital interactions per day. In fifteen years, that number multiplied more than sixteenfold. Those digital interactions produce records that can persist indefinitely, stored, copied, indexed, bought, sold, and combined with other records to build profiles of extraordinary detail.
If we’ve been online since the late 1990s or early 2000s, our data footprint can include social media accounts we’ve created, online purchases we’ve made, forums we’ve posted in, loyalty cards we’ve used, and apps we’ve installed going back decades. Some of that information lives on platforms we’ve long forgotten. Some of it was collected by companies that have since been acquired or dissolved, with our data potentially passing to successor entities we’ve never heard of. The digital life most of us have been living for 15 to 25 years has produced a layered, evolving archive that only grows more valuable to the people who buy and sell it as time goes on.
Most of us sense that something is off about all of this. In a 2023 survey, Pew Research found that roughly eight in ten Americans feel they have little to no control over the data companies collect about them, 71% are concerned about government data use, and 67% say they understand little to nothing about what companies are doing with their personal information. The concern is real and widespread. And so is the feeling of helplessness: 60% of Americans believe it’s impossible to go through daily life without having their data tracked. The unease is there. What’s missing is a clear picture of what’s happening on the other side of the transaction…
[Goldin explains what data is being collected and shared, and by whom; how the data is managed and trafficked; how it’s being used (by insurance and financial companies, employers and landlords, retailers, AI companies, governments, and criminals); and how “inferred” data is used to augment the “hard” data. It’s chilling. She then puts the issue into context, and discusses what we can– and cannot– do about it…]
… The philosopher Helen Nissenbaum has a framework for what’s happening here: contextual integrity. The idea is that privacy isn’t about secrecy. We share information willingly all the time, when the context fits. We tell our doctor about a health condition because we expect that information to stay within the medical relationship. We search for symptoms on a health website because we assume that search won’t follow us into an insurance application. In the current data economy, that’s exactly the kind of boundary that dissolves, because the company collecting the data and the company buying it are operating in completely different contexts.
This is an information literacy problem as much as a privacy problem. Information literacy is usually framed around consumption: evaluating sources, questioning claims, recognizing bias in what we read and watch. But every time we interact with a digital service, we’re also producing information: generating a record that will be read, interpreted, scored, and acted on by organizations we may never interact with directly. Many of us have gotten better at questioning the information that comes at us: checking sources, noticing bias, and recognizing when something is trying to sell us a conclusion. But we haven’t developed equivalent habits around the information that flows from us: where it goes after we hand it over, who reads the record, what incentives they have, and what conclusions they draw. The gap between what we think we’re consenting to and what we’ve agreed to in practice is where the real exposure lives, and the system is designed to keep that gap invisible.
One of the reasons the “so what” question is hard to answer with action is that opting out of data collection often means opting out of participation. Declining a social media platform’s terms of service means not using the platform. Refusing location permissions can mean losing access to navigation, ride-sharing, weather, and delivery apps. Choosing not to create an account can mean paying more, seeing less, or being locked out of services that have become essential infrastructure for work, communication, healthcare, banking, and education.
The architecture of digital consent treats data sharing as a binary: agree to the terms or don’t use the product. There’s rarely a middle option that allows us to use a service while limiting what data gets collected and where it goes. The result is that the “choice” to share data often functions as a condition of entry into daily life rather than an informed negotiation. We’re not handing over data because we’ve weighed the tradeoff and decided it’s fair. We’re handing it over because the alternative is exclusion from services we rely on.
This is the structural context behind the Pew Research Center finding that more than half of Americans believe it’s impossible to go through daily life without being tracked. For many of us, it isn’t possible, at least not without significant inconvenience or sacrifice. The question isn’t whether we can avoid data collection entirely, because for the vast majority of people who participate in modern life, the answer is no. The question is whether we can make more informed decisions within the constraints we’re operating in, and whether the system can be pushed – through regulation, through market pressure, through better tools – toward something more transparent.
California’s Delete Act, which took effect in January 2026, is the strongest example of what’s emerging. It created a platform called DROP (Delete Request and Opt-Out Platform) that lets California residents submit a single deletion request to every registered data broker in the state. Brokers are required to process those requests, maintain suppression lists to prevent re-collection, and check the platform regularly for new requests. The European Union’s GDPR provides similar individual rights, and a handful of other U.S. states have enacted their own privacy laws with varying levels of protection. But the coverage is uneven: what’s available to a California or EU resident may not extend to someone in a state without comparable legislation.
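[To make “suppression lists to prevent re-collection” concrete, here is a purely hypothetical sketch in Python– the names, structure, and hashing choice are invented for illustration, not anything the Delete Act or the DROP platform actually specifies– of how a broker might keep a deleted identifier from slipping back into its files:]

import hashlib

def identifier_hash(email: str) -> str:
    # Normalize, then hash, so the request can be honored without storing the raw identifier
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

class SuppressionList:
    def __init__(self):
        self._hashes = set()

    def add_deletion_request(self, email: str) -> None:
        # Record a consumer's deletion request so the identifier stays suppressed going forward
        self._hashes.add(identifier_hash(email))

    def allows(self, email: str) -> bool:
        # Return False if this identifier must be dropped on (re-)collection
        return identifier_hash(email) not in self._hashes

suppressed = SuppressionList()
suppressed.add_deletion_request("Jane.Doe@example.com")

incoming = ["jane.doe@example.com", "someone.else@example.com"]
kept = [record for record in incoming if suppressed.allows(record)]
print(kept)  # only the un-suppressed record survives ingestion

[Real brokers, of course, match on far messier identifiers than a single email address– which is part of why re-collection is so hard to police.]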
Some services now automate parts of the opt-out process, submitting removal requests to dozens of brokers on our behalf. These can’t erase the data trail entirely, but they can narrow what’s actively available for sale.
Beyond deletion, there are smaller choices that reduce how much new data we generate. We can audit which apps have permission to track our location or access our contacts, since a surprising amount of behavioral data comes from apps that don’t need those permissions to function. We can treat “sign in with Google” and “sign in with Facebook” buttons as what they are: data-sharing agreements that can link a new service to an existing profile. And we can glance at the first few lines of a privacy policy before agreeing, looking for some version of “we may share your information with our partners,” where “partners” just means anyone willing to pay.
Most of us don’t read privacy policies, and the policies aren’t built to be read. They average thousands of words of dense legal language filled with terms like “legitimate interest,” “data processor,” and “de-identified data.” Studies consistently put them at a late high school to early college reading level (grade 12 to 14), but the difficulty goes beyond reading level: the concepts are abstract, the volume of agreements we encounter is enormous, and the design of the consent process itself pushes us through as fast as possible. Pre-checked boxes, auto-scrolling agreement windows, “accept all” buttons positioned prominently while “customize settings” options sit behind additional clicks. These are dark patterns, design choices that make the path of least resistance the path of maximum data sharing.
The result is a gap between the moment we share a piece of information and the moment that information shapes a decision about our lives. We don’t connect the app to the insurance premium or the loyalty card to the rental application because the chain of custody between them is long, complex, and designed to stay out of view.
The same critical thinking we’ve learned to apply to the information flowing toward us (checking sources, questioning claims, looking for bias) applies to the information flowing from us: who’s collecting this, what will they do with it, who else will see it, and what did we agree to? The difference is that in the data economy, we’re the product being evaluated, and the questions are being asked about us rather than by us.
So can we get it back? Not entirely. Data that’s already been collected, copied, sold, and processed across multiple systems can’t be fully recalled. What we can do is reduce what’s actively available for sale, slow the flow of new data going forward, and take advantage of legal tools that didn’t exist a few years ago. The archive of our past digital lives is too distributed to undo, but the file is still being written, and we have more say over the next page than we did over the last twenty years of them.
So what if they have our data? The tradeoff extends well beyond better ads. It reaches into the prices we’re charged, the credit we’re offered, the jobs we’re considered for, the insurance premiums we pay, the AI systems trained on our behavior, the accuracy of the profiles used to make decisions about our lives, and the degree to which government agencies can monitor our movements without a warrant. Every new service we sign up for, every permission we grant, and every terms-of-service agreement we accept adds another layer to that file. We can’t close the file entirely, but we can make more informed decisions about what goes into it next…
Eminently worth reading in full: “So What if They Have My Data?”
See also: “Why Do We Care So Much About Privacy?” (source of the image above) in which Louis Menand suggests that our concern should be with the “weaponization” of data…
* Daniel J. Solove, Nothing to Hide: The False Tradeoff Between Privacy and Security
###
As we reinforce our rights, we might recall that it was on this date in 1996 that the internet-as-we’ve-come-to-know-it broke big into the mainstream: Yahoo! launched the national campaign that asked “Do You Yahoo?”, advertising its web-based search service on national television. The campaign was created by ad agency Black Rocket and Yahoo marketing head Karen Edwards (whose many awards for the work include a seat in the Advertising Hall of Achievement).
An early spot from the campaign…
“Quantum computation is … nothing less than a distinctly new way of harnessing nature”*…
As the tools in the world around us change, the world– and we– change with them. The onslaught of AI is the change that seems to be grabbing most of our mindshare these days… and with reason. But there are, of course, other changes (in biotech, in materials science, et al.) that are also going to be hugely impactful.
Today, a look at the computing technology stalking up behind AI: quantum computing. As enthusiasts like David Deutsch (author of the quote above) argue, it can have tremendous benefits, perhaps especially in our ability to model (and thus better understand) our reality.
But quantum computing will, if/when it arrives, also present huge challenges to us as individuals and as societies– perhaps most prominently in its threat to the ways in which we protect our systems and our information: We’ve felt pretty safe for decades, secure in the knowledge that while we might lose passwords to phishing or hacks, it would take the “classical” computers we have 1 billion years to break today’s RSA-2048 encryption. A quantum computer could crack it in as little as a hundred seconds.
The technology has been “somewhere on the horizon” for 30 years… so not something that has seemed urgent to confront. But progress has accelerated: a recent Google paper reports a programming and architectural breakthrough that greatly reduces the computing resources needed to break classical cryptography, bringing “Q-Day”– the point at which quantum computers become powerful enough to break standard encryption methods like RSA and ECC, endangering global digital security– much closer, and putting everything from crypto-wallets to our e-banking accounts at risk.
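For a feel for what Shor actually showed– and why only the order-finding step needs a quantum computer– here’s a minimal, purely illustrative sketch in Python (mine, not drawn from the piece below, and no threat whatsoever to real key sizes): it factors a toy “RSA modulus” using the classical reduction from factoring to order-finding, with brute force standing in for the quantum subroutine.

import math
import random

def find_order(a, n):
    # Smallest r > 0 with a^r = 1 (mod n) -- the step a quantum computer does exponentially faster
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order_finding(n):
    # The classical "wrapper" of Shor's algorithm, run here on a toy modulus n
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:                      # lucky guess: a already shares a factor with n
            return g, n // g
        r = find_order(a, n)
        if r % 2 == 0:
            y = pow(a, r // 2, n)
            if y != n - 1:             # avoid the trivial square root of 1
                p = math.gcd(y - 1, n)
                if 1 < p < n:
                    return p, n // p

print(factor_via_order_finding(3233))  # 3233 = 61 x 53, a toy stand-in for an RSA modulus

Swap the brute-force order finder for a quantum period-finder and the same recipe scales to 2048-bit moduli– which is the whole problem.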
Charlie Wood brings us up to speed…
Some 30 years ago, the mathematician Peter Shor took a niche physics project — the dream of building a computer based on the counterintuitive rules of quantum mechanics — and shook the world.
Shor worked out a way for quantum computers to swiftly solve a couple of math problems that classical computers could complete only after many billions of years. Those two math problems happened to be the ones that secured the then-emerging digital world. The trustworthiness of nearly every website, inbox, and bank account rests on the assumption that these two problems are impossible to solve. Shor’s algorithm proved that assumption wrong.
For 30 years, Shor’s algorithm has been a security threat in theory only. Physicists initially estimated that they would need a colossal quantum machine with billions of qubits — the elements used in quantum calculations — to run it. That estimate has come down drastically over the years, falling recently to a million qubits. But it has still always sat comfortably beyond the modest capabilities of existing quantum computers, which typically have just hundreds of qubits.
However, two different groups of researchers have just announced advances that notably reduce the gap between theoretical estimates and real machines. A star-studded team of quantum physicists at the California Institute of Technology went public with a design for a quantum computer that could break encryption with only tens of thousands of qubits and said that it had formed a company to build the machine. And researchers at Google announced that they had developed an implementation of Shor’s algorithm that is ten times as efficient as the best previous method.
Neither company has the hardware to break encryption today. But the results underscore what some quantum physicists had already come to suspect: that powerful quantum computers may be years away, rather than decades. “If you care about privacy or you have secrets, then you better start looking for alternatives,” said Nikolas Breuckmann, a mathematical physicist at the University of Bristol, who did not work on either of the papers.
While the new results may provide a jolt for the policymakers and corporations that guard our digital infrastructure, they also signal the rapid progress that physicists have made toward building machines that will let them more thoroughly explore the quantum world.
“We’re going to actually do this,” said Dolev Bluvstein, a Caltech physicist and CEO of the new company, Oratomic…
[Wood unpacks the history of the development of the technology and explores the challenges that remain; he concludes…]
… If any group succeeds at building a quantum computer that can realize Shor’s algorithm, it will mark the end of an era — specifically, the “Noisy Intermediate Scale Quantum” era, as [John] Preskill dubbed the pre-error-correction period in a 2018 paper. Each researcher has a vision for what to pursue first with a machine in the new “fault-tolerant” era.
[Robert] Huang said he would start by running Shor’s algorithm, just to prove that the device works. After that, he said he would try to use it to speed up machine learning — an application to be detailed in coming work.
Most of the architects building quantum computers, whether at Oratomic or other startups, are physicists at heart. They’re interested in physics, not cryptography. Specifically, they’re interested in all the things a computer fluent in the language of quantum mechanics could teach them about the quantum realm, such as what sort of materials might become superconductors even at warm temperatures. Preskill, for his part, would like to simulate the quantum nature of space-time.
The Caltech group knows it has years of work ahead before any of its dreams have a chance of coming true. But the researchers can’t wait to get started. “Pick a cooler life quest than building the world’s first quantum computer with your friends!” said a jubilant Bluvstein, reached by phone shortly before their paper went live, before rushing off to celebrate…
Eminently worth reading in full: “New Advances Bring the Era of Quantum Computers Closer Than Ever,” from @walkingthedot.bsky.social in @quantamagazine.bsky.social.
* David Deutsch, The Fabric of Reality
###
As we prepare, we might take a moment to appreciate just how wide and deep the legacy systems challenged by quantum computing run, recalling that on this date in 1959 Mary Hawes, a computer scientist at the Burroughs Corporation, held a meeting of computer users, manufacturers, and academics at the University of Pennsylvania aimed at creating a common business-oriented programming language. At the meeting, Grace Hopper suggested that they ask the Department of Defense to fund the effort to create such a language. Also attending was Charles Phillips, director of the Data System Research Staff at the DoD, who was excited by the possibility of a common language streamlining the department’s operations and agreed to sponsor its creation. This was the genesis of what would eventually become the COBOL language.
To this day, COBOL remains the most common programming language used in business, finance, and administrative systems for companies and governments, primarily on mainframe systems, with around 200 billion lines of code still in production use… all of which are in question and/or at risk in a world of quantum computing.
“There are two types of encryption: one that will prevent your sister from reading your diary and one that will prevent your government”*…
… But sometimes the encryption you think will work against governments won’t even deter your sister. Joseph Cox on the recently uncovered vulnerabilities in TETRA, the encryption standard used in radios worldwide…
A group of cybersecurity researchers has uncovered what they believe is an intentional backdoor in encrypted radios used by police, military, and critical infrastructure entities around the world. The backdoor may have existed for decades, potentially exposing a wealth of sensitive information transmitted across them, according to the researchers… The end result, however, is radios whose traffic can be decrypted using consumer hardware like an ordinary laptop in under a minute…
The research is the first public and in-depth analysis of the TErrestrial Trunked RAdio (TETRA) standard in the more than 20 years the standard has existed. Not all users of TETRA-powered radios use the specific encryption algorithm, called TEA1, that is impacted by the backdoor; TEA1 is part of the TETRA standard approved for export to other countries. But the researchers also found multiple other vulnerabilities across TETRA that could allow historical decryption of communications and deanonymization. TETRA-radio users in general include national police forces and emergency services in Europe; military organizations in Africa; and train operators in North America and critical infrastructure providers elsewhere.
Midnight Blue [presented] their findings at the Black Hat cybersecurity conference in August. The details of the talk had been kept closely under wraps, with the Black Hat website simply describing the briefing as a “Redacted Telecom Talk.” The secrecy was in large part due to the unusually long disclosure process. [Jos] Wetzels told Motherboard the team has been disclosing these vulnerabilities to impacted parties so they can be fixed for more than a year and a half. That included an initial meeting with Dutch police in January 2022, a meeting with the intelligence community later that month, and then the main bulk of the process: distributing information and mitigations to stakeholders. The NLnet Foundation, an organization that funds “those with ideas to fix the internet,” financed the research.
The European Telecommunications Standards Institute (ETSI), an organization that standardizes technologies across the industry, first created TETRA in 1995. Since then, TETRA has been used in products, including radios, sold by Motorola, Airbus, and more. Crucially, TETRA is not open-source. Instead, it relies on what the researchers describe in their presentation slides as “secret, proprietary cryptography,” meaning it is typically difficult for outside experts to verify how secure the standard really is.
…
Bart Jacobs, a professor of security, privacy and identity, who did not work on the research itself but says he was briefed on it, said he hopes “this really is the end of closed, proprietary crypto, not based on open, publicly scrutinised standards.”…
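How does a cipher guarding police and military traffic fall to an ordinary laptop? The researchers have described the TEA1 weakness as collapsing its 80-bit key to roughly 32 bits of effective strength– a figure not quoted in the excerpt above, so treat it, and the guessed trial rate below, as assumptions for illustration– and a bit of back-of-the-envelope Python shows why that collapse is fatal:

TRIALS_PER_SECOND = 150e6  # assumed rate for testing keys of a lightweight stream cipher on one laptop

def worst_case_seconds(key_bits: int) -> float:
    # Time to exhaust the entire key space at the assumed trial rate
    return (2 ** key_bits) / TRIALS_PER_SECOND

for bits in (80, 32):
    print(f"{bits}-bit key space: {worst_case_seconds(bits):,.0f} seconds")

# 80-bit key space: roughly 8,000,000,000,000,000 seconds (on the order of 250 million years)
# 32-bit key space: under 30 seconds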
The veil, pierced: “Researchers Find ‘Backdoor’ in Encrypted Police and Military Radios,” from @josephfcox in @motherboard. (Not long after this article ran– and after the downfall of Vice, Motherboard’s parent– Cox and a number of his talented Motherboard colleagues launched 404 Media. Check it out.)
Remarkably, some of the radio systems enabling critical infrastructure are even easier to hack– they aren’t even encrypted.
* Bruce Schneier (@schneierblog)
###
As we take precautions, we might recall that it was on this date in 1980 that the last IBM 7030 “Stretch” mainframe in active use was decommissioned at Brigham Young University. The first Stretch was delivered to Los Alamos National Laboratory in 1961, giving the model almost 20 years of operational service.
The Stretch was famous for many things, but perhaps most notably it was IBM’s first transistorized supercomputer; it was the first computer to be designed with the help of an earlier computer; and it was the world’s fastest computer from 1961 to 1964.
“One of the most singular characteristics of the art of deciphering is the strong conviction possessed by every person, even moderately acquainted with it, that he is able to construct a cipher which nobody else can decipher.”*…
And yet, for centuries no one has succeeded. Now, as Erica Klarreich reports, cryptographers want to know which of five possible worlds we inhabit, which will reveal whether truly secure cryptography is even possible…
Many computer scientists focus on overcoming hard computational problems. But there’s one area of computer science in which hardness is an asset: cryptography, where you want hard obstacles between your adversaries and your secrets.
Unfortunately, we don’t know whether secure cryptography truly exists. Over millennia, people have created ciphers that seemed unbreakable right until they were broken. Today, our internet transactions and state secrets are guarded by encryption methods that seem secure but could conceivably fail at any moment.
To create a truly secure (and permanent) encryption method, we need a computational problem that’s hard enough to create a provably insurmountable barrier for adversaries. We know of many computational problems that seem hard, but maybe we just haven’t been clever enough to solve them. Or maybe some of them are hard, but their hardness isn’t of a kind that lends itself to secure encryption. Fundamentally, cryptographers wonder: Is there enough hardness in the universe to make cryptography possible?
In 1995, Russell Impagliazzo of the University of California, San Diego broke down the question of hardness into a set of sub-questions that computer scientists could tackle one piece at a time. To summarize the state of knowledge in this area, he described five possible worlds — fancifully named Algorithmica, Heuristica, Pessiland, Minicrypt and Cryptomania — with ascending levels of hardness and cryptographic possibility. Any of these could be the world we live in…
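To make “hardness as an asset” tangible, here’s a small toy in Python (my own illustration, not something from Klarreich’s piece) of a candidate one-way function: modular exponentiation is easy to compute, while inverting it– the discrete logarithm– seems to require brute force. Whether functions like this are genuinely one-way at scale is precisely what separates Minicrypt and Cryptomania from the bleaker worlds.

def forward(g: int, x: int, p: int) -> int:
    # Easy direction: g^x mod p is fast to compute even for enormous exponents
    return pow(g, x, p)

def invert_by_brute_force(g: int, y: int, p: int) -> int:
    # Hard direction: recover x from g^x mod p by trying every exponent
    for x in range(p):
        if pow(g, x, p) == y:
            return x
    raise ValueError("no discrete log found")

p, g = 2_147_483_647, 7         # a small prime modulus and a toy base
secret = 1_234_567
public = forward(g, secret, p)  # instantaneous
print(public)
# invert_by_brute_force(g, public, p) would grind through up to ~2 billion candidates;
# at real-world sizes the same search outlives the universe, as far as anyone can prove.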
Explore each of them– and their implications for secure encryption– at “Which Computational Universe Do We Live In?” from @EricaKlarreich in @QuantaMagazine.
###
As we contemplate codes, we might send communicative birthday greetings to a frequently-featured hero of your correspondent, Claude Elwood Shannon; he was born on this date in 1916. A mathematician, electrical engineer– and cryptographer– he is known as “the father of information theory.” But he is also remembered for his contributions to digital circuit design theory and for his cryptanalysis work during World War II, both as a codebreaker and as a designer of secure communications systems.