(Roughly) Daily


“One of the most singular characteristics of the art of deciphering is the strong conviction possessed by every person, even moderately acquainted with it, that he is able to construct a cipher which nobody else can decipher.”*…

And yet, for centuries, no one has succeeded in constructing an unbreakable cipher. Now, as Erica Klarreich reports, cryptographers want to know which of five possible worlds we inhabit– an answer that will reveal whether truly secure cryptography is even possible…

Many computer scientists focus on overcoming hard computational problems. But there’s one area of computer science in which hardness is an asset: cryptography, where you want hard obstacles between your adversaries and your secrets.

Unfortunately, we don’t know whether secure cryptography truly exists. Over millennia, people have created ciphers that seemed unbreakable right until they were broken. Today, our internet transactions and state secrets are guarded by encryption methods that seem secure but could conceivably fail at any moment.

To create a truly secure (and permanent) encryption method, we need a computational problem that’s hard enough to create a provably insurmountable barrier for adversaries. We know of many computational problems that seem hard, but maybe we just haven’t been clever enough to solve them. Or maybe some of them are hard, but their hardness isn’t of a kind that lends itself to secure encryption. Fundamentally, cryptographers wonder: Is there enough hardness in the universe to make cryptography possible?

In 1995, Russell Impagliazzo of the University of California, San Diego broke down the question of hardness into a set of sub-questions that computer scientists could tackle one piece at a time. To summarize the state of knowledge in this area, he described five possible worlds — fancifully named Algorithmica, Heuristica, Pessiland, Minicrypt and Cryptomania — with ascending levels of hardness and cryptographic possibility. Any of these could be the world we live in…
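
For the technically inclined, here is a toy sketch (my illustration, not from Klarreich’s article) of the asymmetry that cryptographic hardness is about: a hash such as SHA-256 is trivial to compute in the forward direction, but recovering an unknown input appears to require brute-force search– and whether problems with that genuinely one-way character exist at all is precisely what Impagliazzo’s five worlds are asking.

```python
# Illustrative sketch only: forward hashing is instant, while inverting it
# (here, by exhaustive search over short lowercase strings) grows exponentially
# with input length. Cryptography bets that no fundamentally faster inverse exists.
import hashlib
import itertools
import string

def digest(s):
    return hashlib.sha256(s.encode()).hexdigest()

target = digest("code")  # forward direction: effectively instantaneous

def brute_force(target_hash, length=4):
    # Inverse direction: scan all 26**4 = 456,976 four-letter lowercase strings;
    # each additional character multiplies the work by 26.
    for chars in itertools.product(string.ascii_lowercase, repeat=length):
        word = "".join(chars)
        if digest(word) == target_hash:
            return word
    return None

print(brute_force(target))  # recovers "code", but only after searching the keyspace
```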

Explore each of them– and their implications for secure encryption– at “Which Computational Universe Do We Live In?” from @EricaKlarreich in @QuantaMagazine.

* Charles Babbage

###

As we contemplate codes, we might send communicative birthday greetings to a frequently-featured hero of your correspondent, Claude Elwood Shannon; he was born on this date in 1916.  A mathematician, electrical engineer– and cryptographer– he is known as “the father of information theory.”  But he is also remembered for his contributions to digital circuit design theory and for his cryptanalysis work during World War II, both as a codebreaker and as a designer of secure communications systems.


 source

“With my tongue in one cheek only, I’d suggest that had our palaeolithic ancestors discovered the peer-review dredger, we would be still sitting in caves”*…

As a format, “scholarly” scientific communications are slow, encourage hype, and are difficult to correct. Stuart Ritchie argues that a radical overhaul of publishing could make science better…

… Having been printed on paper since the very first scientific journal was inaugurated in 1665, the overwhelming majority of research is now submitted, reviewed and read online. During the pandemic, it was often devoured on social media, an essential part of the unfolding story of Covid-19. Hard copies of journals are increasingly viewed as curiosities – or not viewed at all.

But although the internet has transformed the way we read it, the overall system for how we publish science remains largely unchanged. We still have scientific papers; we still send them off to peer reviewers; we still have editors who give the ultimate thumbs up or down as to whether a paper is published in their journal.

This system comes with big problems. Chief among them is the issue of publication bias: reviewers and editors are more likely to give a scientific paper a good write-up and publish it in their journal if it reports positive or exciting results. So scientists go to great lengths to hype up their studies, lean on their analyses so they produce “better” results, and sometimes even commit fraud in order to impress those all-important gatekeepers. This drastically distorts our view of what really went on.

There are some possible fixes that change the way journals work. Maybe the decision to publish could be made based only on the methodology of a study, rather than on its results (this is already happening to a modest extent in a few journals). Maybe scientists could just publish all their research by default, and journals would curate, rather than decide, which results get out into the world. But maybe we could go a step further, and get rid of scientific papers altogether…

A bold proposal: “The big idea: should we get rid of the scientific paper?,” from @StuartJRitchie in @guardian.

Apposite (if only in its critical posture): “The Two Paper Rule.” See also “In what sense is the science of science a science?” for context.

* Zygmunt Bauman

###

As we noodle on knowledge, we might recall that it was on this date in 1964 that AT&T connected the first Picturephone call (between Disneyland in California and the World’s Fair in New York). The device consisted of a telephone handset and a small, matching TV, which allowed telephone users to see each other in fuzzy video images as they carried on a conversation. It was commercially released shortly thereafter (prices ranged from $16 to $27 for a three-minute call between special booths AT&T set up in New York, Washington, and Chicago), but didn’t catch on.

source

“Your job as a scientist is to figure out how you’re fooling yourself”*…

Larger version here

And like scientists, so all of us…

Science has shown that we tend to make all sorts of mental mistakes, called “cognitive biases”, that can affect both our thinking and actions. These biases can lead to us extrapolating information from the wrong sources, seeking to confirm existing beliefs, or failing to remember events the way they actually happened!

To be sure, this is all part of being human—but such cognitive biases can also have a profound effect on our endeavors, investments, and life in general.

Humans have a tendency to think in particular ways that can lead to systematic deviations from making rational judgments.

These tendencies usually arise from:

• Information processing shortcuts

• The limited processing ability of the brain

• Emotional and moral motivations

• Distortions in storing and retrieving memories

• Social influence

Cognitive biases have been studied for decades by academics in the fields of cognitive science, social psychology, and behavioral economics, but they are especially relevant in today’s information-packed world. They influence the way we think and act, and such irrational mental shortcuts can lead to all kinds of problems in entrepreneurship, investing, or management.

Here are five examples of how these types of biases can affect people in the business world:

1. Familiarity Bias: An investor puts her money in “what she knows”, rather than seeking the obvious benefits from portfolio diversification. Just because a certain type of industry or security is familiar doesn’t make it the logical selection.

2. Self-Attribution Bias: An entrepreneur overly attributes his company’s success to himself, rather than other factors (team, luck, industry trends). When things go bad, he blames these external factors for derailing his progress.

3. Anchoring Bias: An employee in a salary negotiation is too dependent on the first number mentioned in the negotiations, rather than rationally examining a range of options.

4. Survivorship Bias: Entrepreneurship looks easy, because there are so many successful entrepreneurs out there. However, this is a cognitive bias: the successful entrepreneurs are the ones still around, while the millions who failed went and did other things.

5. Gambler’s Fallacy: A venture capitalist sees a portfolio company rise and rise in value after its IPO, far beyond what he initially thought possible. Instead of holding on to a winner and rationally evaluating the possibility that appreciation could still continue, he dumps the stock to lock in the existing gains.

An aid to thinking about thinking: “Every Single Cognitive Bias in One Infographic.” From DesignHacks.co via Visual Capitalist.

And for a fascinating look at cognitive bias’s equally dangerous cousin, innumeracy, see here.

* Saul Perlmutter, astrophysicist, Nobel laureate

###

As we cogitate, we might recall that it was on this date in 1859 that “The Carrington Event” began. Lasting two days, it was the largest solar storm on record: an intense solar flare, accompanied by a coronal mass ejection (CME), that disrupted many of the (relatively few) telegraph lines then operating on Earth.

A solar storm of this magnitude occurring today would cause widespread electrical disruptions, blackouts, and damage due to extended outages of the electrical grid. The solar storm of 2012 was of similar magnitude, but it passed Earth’s orbit without striking the planet, missing by nine days. See here for more detail on what such a storm might entail.

Sunspots of 1 September 1859, as sketched by R.C. Carrington. A and B mark the initial positions of an intensely bright event, which moved over the course of five minutes to C and D before disappearing.

“Foresight begins when we accept that we are now creating a civilization of risk”*…

There have been a handful of folks– Vernor Vinge, Don Michael, Sherry Turkle, to name a few– who were, decades ago, exceptionally foresightful about the technologically-mediated present in which we live. Philip Agre belongs in their number…

In 1994 — before most Americans had an email address or Internet access or even a personal computer — Philip Agre foresaw that computers would one day facilitate the mass collection of data on everything in society.

That process would change and simplify human behavior, wrote the then-UCLA humanities professor. And because that data would be collected not by a single, powerful “big brother” government but by lots of entities for lots of different purposes, he predicted that people would willingly part with massive amounts of information about their most personal fears and desires.

“Genuinely worrisome developments can seem ‘not so bad’ simply for lacking the overt horrors of Orwell’s dystopia,” wrote Agre, who has a doctorate in computer science from the Massachusetts Institute of Technology, in an academic paper.

Nearly 30 years later, Agre’s paper seems eerily prescient, a startling vision of a future that has come to pass in the form of a data industrial complex that knows no borders and few laws. Data collected by disparate ad networks and mobile apps for myriad purposes is being used to sway elections or, in at least one case, to out a gay priest. But Agre didn’t stop there. He foresaw the authoritarian misuse of facial recognition technology, he predicted our inability to resist well-crafted disinformation and he foretold that artificial intelligence would be put to dark uses if not subjected to moral and philosophical inquiry.

Then, no one listened. Now, many of Agre’s former colleagues and friends say they’ve been thinking about him more in recent years, and rereading his work, as pitfalls of the Internet’s explosive and unchecked growth have come into relief, eroding democracy and helping to facilitate a violent uprising on the steps of the U.S. Capitol in January.

“We’re living in the aftermath of ignoring people like Phil,” said Marc Rotenberg, who edited a book with Agre in 1998 on technology and privacy, and is now founder and executive director for the Center for AI and Digital Policy…

As Reed Albergotti (@ReedAlbergotti) explains, better late than never: “He predicted the dark side of the Internet 30 years ago. Why did no one listen?”

Agre’s papers are here.

* Jacques Ellul

###

As we consider consequences, we might recall that it was on this date in 1858 that Queen Victoria sent the first official telegraph message across the Atlantic Ocean from London to U.S. President James Buchanan, in Washington D.C.– initiating a new era in global communications.

Transmission of the message began at 10:50am and wasn’t completed until 4:30am the next day, taking nearly eighteen hours to reach Newfoundland. Ninety-nine words, containing five hundred nine letters, were transmitted at a rate of about two minutes per letter.
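
Those figures hang together; here is a quick back-of-the-envelope check (an illustrative calculation, not part of the original account):

```python
# Rough consistency check of the reported 1858 cable figures (illustrative only).
letters = 509                  # letters in Queen Victoria's ninety-nine-word message
duration_min = 17 * 60 + 40    # 10:50am to 4:30am the next day is about 17 hours 40 minutes
print(duration_min / letters)  # ~2.08, i.e. "about two minutes per letter"
```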

After White House staff had satisfied themselves that it wasn’t a hoax, the President sent a reply of 143 words in a relatively rapid ten hours. Without the cable, a dispatch in one direction alone would have taken roughly twelve days by the speediest combination of inland telegraph and fast steamer.

source

“O brave new world”*…

law and AI

With the arrival of autonomous weapons systems (AWS)[1] on the 21st century battlefield, the nature of warfare is poised for dramatic change.[2] Overseen by artificial intelligence (AI), fueled by terabytes of data and operating at lightning-fast speed, AWS will be the decisive feature of future military conflicts.[3] Nonetheless, under the American way of war, AWS will operate within existing legal and policy guidelines that establish conditions and criteria for the application of force.[4] Even as the Department of Defense (DoD) places limitations on when and how AWS may take action,[5] the pace of new conflicts and adoption of AWS by peer competitors will ultimately push military leaders to empower AI-enabled weapons to make decisions with less and less human input.[6] As such, timely, accurate, and context-specific legal advice during the planning and operation of AWS missions will be essential. In the face of digital-decision-making, mere human legal advisors will be challenged to keep up!

Fortunately, at the same time that AI is changing warfare, the practice of law is undergoing a similar AI-driven transformation.[7]

From The Judge Advocate General’s Corps’ publication The Reporter: “Autonomous Weapons Need Autonomous Lawyers.”

As I finish drafting this post [on October 5], I’ve discovered that none of the links are available any longer; the piece (and the referenced articles within it, also from The Reporter) was apparently removed from public view while I was drafting this– from a Reporter web page that, obviously, opened for me earlier.  You will find other references to (and excerpts from/comments on) the article here, here, and here.  I’m leaving the original links in, in case they become active again…

* Shakespeare, The Tempest

###

As we wonder if this can end well, we might recall that it was on this date in 1983 that Ameritech executive Bob Barnett made a phone call from a car parked near Soldier Field in Chicago, officially launching the first commercial cellular network in the United States.


Barnett (foreground, in the car) and his audience

 

Written by (Roughly) Daily

October 13, 2019 at 1:01 am
