(Roughly) Daily

Posts Tagged ‘communications’

“Whoever wishes to keep a secret must hide the fact that he possesses one”*…

… or, as Sheon Han explains, maybe not…

Imagine you had some useful knowledge — maybe a secret recipe, or the key to a cipher. Could you prove to a friend that you had that knowledge, without revealing anything about it? Computer scientists proved over 30 years ago that you could, if you used what’s called a zero-knowledge proof.

For a simple way to understand this idea, let’s suppose you want to show your friend that you know how to get through a maze, without divulging any details about the path. You could simply traverse the maze within a time limit, while your friend was forbidden from watching. (The time limit is necessary because given enough time, anyone can eventually find their way out through trial and error.) Your friend would know you could do it, but they wouldn’t know how.

Zero-knowledge proofs are helpful to cryptographers, who work with secret information, but also to researchers of computational complexity, which deals with classifying the difficulty of different problems. “A lot of modern cryptography relies on complexity assumptions — on the assumption that certain problems are hard to solve, so there has always been some connection between the two worlds,” said Claude Crépeau, a computer scientist at McGill University. “But [these] proofs have created a whole world of connection.”…
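To make the shape of that maze demonstration concrete, here is a toy simulation in Python (my own sketch, not something from Han’s article) of the repeated challenge-and-response structure such proofs rely on. The point is statistical: an honest prover can answer every random challenge, while a bluffer survives each round only by luck, so after enough rounds the verifier is convinced without ever learning the secret.

```python
import secrets

def interactive_proof(knows_the_path: bool, rounds: int = 20) -> bool:
    """Toy stand-in for the maze demonstration.

    Each round the verifier issues a random challenge (think: "come out of
    exit 0" or "come out of exit 1"). A prover who really knows the path can
    satisfy whichever challenge arrives; a prover who doesn't must guess in
    advance and is caught with probability 1/2 per round. The verifier only
    ever observes pass/fail, never the path itself.
    """
    for _ in range(rounds):
        challenge = secrets.randbelow(2)       # verifier's coin flip
        if knows_the_path:
            response = challenge               # honest prover can always comply
        else:
            response = secrets.randbelow(2)    # bluffer can only guess
        if response != challenge:
            return False                       # caught
    return True                                # verifier is convinced

print(interactive_proof(knows_the_path=True))    # always True
print(interactive_proof(knows_the_path=False))   # True only with probability 2**-20
```

Twenty rounds leave a cheater roughly a one-in-a-million chance of slipping through, which is why real zero-knowledge protocols repeat their challenge-response step many times.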

More about how zero-knowledge proofs allow researchers conclusively to demonstrate their knowledge without divulging the knowledge itself: “How Do You Prove a Secret?,” from @sheonhan in @QuantaMagazine.

* Johann Wolfgang von Goethe

###

As we stay sub rosa, we might recall that today (All Saints’ Day) is the (fictional) birthday of Hello Kitty (full name: Kitty White); she was born in a suburb of London. A cartoon character originally designed by Yuko Shimizu (and currently designed by Yuko Yamaguchi), she is the property of the Japanese company Sanrio. An avatar of kawaii (cute) culture, Hello Kitty anchors one of the highest-grossing media franchises of all time; Hello Kitty product sales and media licensing fees have run as high as $8 billion a year.

source

Written by (Roughly) Daily

November 1, 2022 at 1:00 am

“The people are pieces of software called avatars. They are the audiovisual bodies that people use to communicate with each other in the Metaverse.”*…

Tim O’Reilly with a (customarily) wise assessment of an emerging technology…

The metaphors we use to describe new technology constrain how we think about it, and, like an out-of-date map, often lead us astray. So it is with the metaverse. Some people seem to think of it as a kind of real estate, complete with land grabs and the attempt to bring traffic to whatever bit of virtual property they’ve created.

Seen through the lens of the real estate metaphor, the metaverse becomes a natural successor not just to Second Life but to the World Wide Web and to social media feeds, which can be thought of as a set of places (sites) to visit. Virtual Reality headsets will make these places more immersive, we imagine.

But what if, instead of thinking of the metaverse as a set of interconnected virtual places, we think of it as a communications medium? Using this metaphor, we see the metaverse as a continuation of a line that passes through messaging and email to “rendezvous”-type social apps like Zoom, Google Meet, Microsoft Teams, and, for wide broadcast, Twitch + Discord. This is a progression from text to images to video, and from store-and-forward networks to real time (and, for broadcast, “stored time,” which is a useful way of thinking about recorded video), but in each case, the interactions are not place based but happening in the ether between two or more connected people. The occasion is more the point than the place…

Tim explains what he means– and what that could mean: “The Metaverse is not a place- it’s a communications medium,” @timoreilly in @radar.

* Neal Stephenson, Snow Crash (the novel that coined the term “metaverse”)

###

As we jack in, we might send well-connected birthday greetings to Paul Otlet; he was born on this date in 1868. An author, entrepreneur, lawyer, and peace activist, he is considered the father of information science. He created the Universal Decimal Classification (which would later become a faceted classification) and was responsible for the development of an early information retrieval tool, the “Repertoire Bibliographique Universel” (RBU), which utilized 3×5 inch index cards of the kind long used in library catalogs around the world (though now largely displaced by the online public access catalog, or OPAC). Indeed, Otlet predicted the advent of the internet (though he over-optimistically imagined that it would appear in the 1930s).

For more of his remarkable story, see “Knowledge, like air, is vital to life. Like air, no one should be denied it.”

source

Written by (Roughly) Daily

August 23, 2022 at 1:00 am

“One of the most singular characteristics of the art of deciphering is the strong conviction possessed by every person, even moderately acquainted with it, that he is able to construct a cipher which nobody else can decipher.”*…

And yet, for centuries no one has succeeded. Now, as Erica Klarreich reports, cryptographers want to know which of five possible worlds we inhabit, which will reveal whether truly secure cryptography is even possible…

Many computer scientists focus on overcoming hard computational problems. But there’s one area of computer science in which hardness is an asset: cryptography, where you want hard obstacles between your adversaries and your secrets.

Unfortunately, we don’t know whether secure cryptography truly exists. Over millennia, people have created ciphers that seemed unbreakable right until they were broken. Today, our internet transactions and state secrets are guarded by encryption methods that seem secure but could conceivably fail at any moment.

To create a truly secure (and permanent) encryption method, we need a computational problem that’s hard enough to create a provably insurmountable barrier for adversaries. We know of many computational problems that seem hard, but maybe we just haven’t been clever enough to solve them. Or maybe some of them are hard, but their hardness isn’t of a kind that lends itself to secure encryption. Fundamentally, cryptographers wonder: Is there enough hardness in the universe to make cryptography possible?

In 1995, Russell Impagliazzo of the University of California, San Diego broke down the question of hardness into a set of sub-questions that computer scientists could tackle one piece at a time. To summarize the state of knowledge in this area, he described five possible worlds — fancifully named Algorithmica, Heuristica, Pessiland, Minicrypt and Cryptomania — with ascending levels of hardness and cryptographic possibility. Any of these could be the world we live in…

Explore each of them– and their implications for secure encryption– at “Which Computational Universe Do We Live In?” from @EricaKlarreich in @QuantaMagazine.
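As a toy illustration of hardness being used as an asset (my own sketch in Python, not drawn from Klarreich’s article), consider a hash-based commitment: computing it is easy, but recovering the secret from it is believed, not proven, to be infeasible. That unproven belief is exactly the sort of hardness assumption whose status separates Impagliazzo’s worlds.

```python
import hashlib
import secrets

# Commit to a secret without revealing it. Hashing is easy in the forward
# direction; inverting SHA-256 is believed (not proven) to be infeasible,
# which is the kind of hardness assumption modern cryptography rests on.
secret = secrets.token_bytes(32)
commitment = hashlib.sha256(secret).hexdigest()

print("commitment:", commitment)   # safe to publish

# Later, the committer reveals the secret; anyone can re-hash and verify.
assert hashlib.sha256(secret).hexdigest() == commitment
```

In Impagliazzo’s terms, commitments like this require one-way functions, which exist in Minicrypt and Cryptomania but not in Algorithmica, Heuristica, or Pessiland.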

* Charles Babbage

###

As we contemplate codes, we might send communicative birthday greetings to a frequently-featured hero of your correspondent, Claude Elwood Shannon; he was born on this date in 1916. A mathematician, electrical engineer– and cryptographer– he is known as “the father of information theory.” But he is also remembered for his contributions to digital circuit design theory and for his cryptanalysis work during World War II, both as a codebreaker and as a designer of secure communications systems.

source

“With my tongue in one cheek only, I’d suggest that had our palaeolithic ancestors discovered the peer-review dredger, we would be still sitting in caves”*…

As a format, “scholarly” scientific communications are slow, encourage hype, and are difficult to correct. Stuart Ritchie argues that a radical overhaul of publishing could make science better…

… Having been printed on paper since the very first scientific journal was inaugurated in 1665, the overwhelming majority of research is now submitted, reviewed and read online. During the pandemic, it was often devoured on social media, an essential part of the unfolding story of Covid-19. Hard copies of journals are increasingly viewed as curiosities – or not viewed at all.

But although the internet has transformed the way we read it, the overall system for how we publish science remains largely unchanged. We still have scientific papers; we still send them off to peer reviewers; we still have editors who give the ultimate thumbs up or down as to whether a paper is published in their journal.

This system comes with big problems. Chief among them is the issue of publication bias: reviewers and editors are more likely to give a scientific paper a good write-up and publish it in their journal if it reports positive or exciting results. So scientists go to great lengths to hype up their studies, lean on their analyses so they produce “better” results, and sometimes even commit fraud in order to impress those all-important gatekeepers. This drastically distorts our view of what really went on.

There are some possible fixes that change the way journals work. Maybe the decision to publish could be made based only on the methodology of a study, rather than on its results (this is already happening to a modest extent in a few journals). Maybe scientists could just publish all their research by default, and journals would curate, rather than decide, which results get out into the world. But maybe we could go a step further, and get rid of scientific papers altogether…

A bold proposal: “The big idea: should we get rid of the scientific paper?,” from @StuartJRitchie in @guardian.

Apposite (if only in its critical posture): “The Two Paper Rule.” See also “In what sense is the science of science a science?” for context.

* Zygmunt Bauman

###

As we noodle on knowledge, we might recall that it was on this date in 1964 that AT&T connected the first Picturephone call (between Disneyland in California and the World’s Fair in New York). The device consisted of a telephone handset and a small, matching TV, which allowed telephone users to see each other in fuzzy video images as they carried on a conversation. It was commercially released shortly thereafter (prices ranged from $16 to $27 for a three-minute call between special booths AT&T set up in New York, Washington, and Chicago), but didn’t catch on.

source

“Your job as a scientist is to figure out how you’re fooling yourself”*…


And like scientists, so all of us…

Science has shown that we tend to make all sorts of mental mistakes, called “cognitive biases”, that can affect both our thinking and actions. These biases can lead us to extrapolate information from the wrong sources, seek to confirm existing beliefs, or fail to remember events the way they actually happened!

To be sure, this is all part of being human—but such cognitive biases can also have a profound effect on our endeavors, investments, and life in general.

Humans have a tendency to think in particular ways that can lead to systematic deviations from making rational judgments.

These tendencies usually arise from:

• Information processing shortcuts

• The limited processing ability of the brain

• Emotional and moral motivations

• Distortions in storing and retrieving memories

• Social influence

Cognitive biases have been studied for decades by academics in the fields of cognitive science, social psychology, and behavioral economics, but they are especially relevant in today’s information-packed world. They influence the way we think and act, and such irrational mental shortcuts can lead to all kinds of problems in entrepreneurship, investing, or management.

Here are five examples of how these types of biases can affect people in the business world:

1. Familiarity Bias: An investor puts her money in “what she knows”, rather than seeking the obvious benefits from portfolio diversification. Just because a certain type of industry or security is familiar doesn’t make it the logical selection.

2. Self-Attribution Bias: An entrepreneur overly attributes his company’s success to himself, rather than other factors (team, luck, industry trends). When things go bad, he blames these external factors for derailing his progress.

3. Anchoring Bias: An employee in a salary negotiation is too dependent on the first number mentioned in the negotiations, rather than rationally examining a range of options.

4. Survivorship Bias: Entrepreneurship looks easy, because there are so many successful entrepreneurs out there. However, this is a cognitive bias: the successful entrepreneurs are the ones still around, while the millions who failed went and did other things.

5. Gambler’s Fallacy: A venture capitalist sees a portfolio company rise and rise in value after its IPO, far beyond what he initially thought possible. Instead of holding on to a winner and rationally evaluating the possibility that appreciation could still continue, he dumps the stock to lock in the existing gains.

An aid to thinking about thinking: “Every Single Cognitive Bias in One Infographic.” From DesignHacks.co via Visual Capitalist.

And for a fascinating look at cognitive bias’s equally dangerous cousin, innumeracy, see here.

* Saul Perlmutter, astrophysicist, Nobel laureate

###

As we cogitate, we might recall that it was on this date in 1859 that “The Carrington Event” began. Lasting two days, it was the largest solar storm on record: a powerful solar flare accompanied by a coronal mass ejection (CME) that disrupted many of the (relatively few) telegraph lines then in operation on Earth.

A solar storm of this magnitude occurring today would cause widespread electrical disruptions, blackouts, and damage due to extended outages of the electrical grid. The solar storm of 2012 was of similar magnitude, but it passed Earth’s orbit without striking the planet, missing by nine days. See here for more detail on what such a storm might entail.

Sunspots of 1 September 1859, as sketched by R.C. Carrington. A and B mark the initial positions of an intensely bright event, which moved over the course of five minutes to C and D before disappearing.