(Roughly) Daily


“One cannot conceive anything so strange and so implausible that it has not already been said by one philosopher or another”*…

Wisdom from the exquisite Existential Comics (“A philosophy comic about the inevitable anguish of living a brief life in an absurd world. Also jokes.”)…

Frege was an early philosopher of language, who formulated a theory of semantics that largely had to do with how we form truth propositions about the world. His theories were enormously influential for people like Russell, Carnap, and even Wittgenstein early in his career. They all recognized that the languages we use are ambiguous, so making exact determinations was always difficult. Most of them were logicians and mathematicians, and wanted to render ordinary language as exact and precise as mathematical language, so we could go about doing empirical science with perfect clarity. Russell, Carnap, and others even vowed to create an exact scientific language (narrator: “they didn’t create an exact scientific language”).

Later on, Wittgenstein and other philosophers such as J.L. Austin came to believe that a fundamental mistake was made about the nature of language itself. Language, they thought, doesn’t pick out truth propositions about the world at all. Speech acts were fundamentally no different than other actions, and were merely used in social situations to bring about certain effects. For example, in asking for a sandwich to be passed across the table, we do not pick out a certain set of facts about the world, we only utter the words with the expectations that it will cause certain behavior in others. Learning what is and isn’t a sandwich is more like learning the rules of a game than making declarations about what exists in the world, so for Wittgenstein, what is or isn’t a sandwich depends only on the success or failure of the word “sandwich” in a social context, regardless of what actual physical properties a sandwich has in common with, say, a hotdog.

“Is a Hotdog a Sandwich? A Definitive Study,” from @existentialcomics.com.

* René Descartes

###

As we add mayonnaise, we might send thoughtful birthday greetings to Norbert Wiener; he was born on this date in 1894. A computer scientist, mathematician, and philosopher, Wiener is considered the originator of cybernetics, the science of communication as it relates to living things and machines– a field that has had implications for a wide variety of other fields, including engineering, systems control, computer science, biology, neuroscience, and philosophy. (Wiener credited Leibniz as the “patron saint of cybernetics.”)

His work heavily influenced computer pioneer John von Neumann, information theorist Claude Shannon, anthropologists Margaret Mead and Gregory Bateson, and many others. Wiener was one of the first to theorize that all intelligent behavior was the result of feedback mechanisms and could possibly be simulated by machines– an important early step towards the development of modern artificial intelligence.

source

“We ceased to be the lunatic fringe. We’re now the lunatic core.”*…

Further, in a fashion, to yesterday’s post on analog computing, an essay from Benjamin Labatut (the author of two remarkable works of “scientific-historical fiction,” When We Cease to Understand the World and The MANIAC), continuing the animating theme of those books…

We will never know how many died during the Butlerian Jihad. Was it millions? Billions? Trillions, perhaps? It was a fantastic rage, a great revolt that spread like wildfire, consuming everything in its path, a chaos that engulfed generations in an orgy of destruction lasting almost a hundred years. A war with a death toll so high that it left a permanent scar on humanity’s soul. But we will never know the names of those who fought and died in it, or the immense suffering and destruction it caused, because the Butlerian Jihad, abominable and devastating as it was, never happened.

The Jihad was an imagined event, conjured up by Frank Herbert as part of the lore that animates his science-fiction saga Dune. It was humanity’s last stand against sentient technology, a crusade to overthrow the god of machine-logic and eradicate the conscious computers and robots that in the future had almost entirely enslaved us. Herbert described it as “a thalamic pause for all humankind,” an era of such violence run amok that it completely transformed the way society developed from then onward. But we know very little of what actually happened during the struggle itself, because in the original Dune series, Herbert gives us only the faintest outlines—hints, murmurs, and whispers, which carry the ghostly weight of prophecy. The Jihad reshaped civilization by outlawing artificial intelligence or any machine that simulated our minds, placing a damper on the worst excesses of technology. However, it was fought so many eons before the events portrayed in the novels that by the time they occur it has faded into legend and crystallized in apocrypha. The hard-won lessons of the catastrophe are preserved in popular wisdom and sayings: “Man may not be replaced.” “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” “We do not trust the unknown which can arise from imaginative technology.” “We must negate the machines-that-think.” The most enduring legacy of the Jihad was a profound change in humankind’s relationship to technology. Because the target of that great hunt, where we stalked and preyed upon the very artifacts we had created to lift ourselves above the seat that nature had intended for us, was not just mechanical intelligence but the machinelike attitude that had taken hold of our species: “Humans had set those machines to usurp our sense of beauty, our necessary selfdom out of which we make living judgments,” Herbert wrote.

Humans must set their own guidelines. This is not something machines can do. Reasoning depends upon programming, not on hardware, and we are the ultimate program!

The Butlerian Jihad removed a crutch—the part of ourselves that we had given over to technology—and forced human minds to develop above and beyond the limits of mechanistic reasoning, so that we would no longer depend on computers to do our thinking for us.

Herbert’s fantasy, his far-flung vision of a devastating war between humanity and the god of machine-logic, seemed quaint when he began writing it in the Sixties. Back then, computers were primitive by modern standards, massive mainframe contraptions that could process only hundreds of thousands of cycles per second (instead of billions, like today), had very little memory, operated via punch cards, and were not connected to one another. And we have easily ignored Herbert’s warnings ever since, but now the Butlerian Jihad has suddenly returned to plague us. The artificial-intelligence apocalypse is a new fear that keeps many up at night, a terror born of great advances that seem to suggest that, if we are not very careful, we may—with our own hands—bring forth a future where humanity has no place. This strange nightmare is a credible danger only because so many of our dreams are threatening to come true. It is the culmination of a long process that hearkens back to the origins of civilization itself, to the time when the world was filled with magic and dread, and the only way to guarantee our survival was to call down the power of the gods.

Apotheosis has always haunted the soul of humankind. Since ancient times we have suffered the longing to become gods and exceed the limits nature has placed on us. To achieve this, we built altars and performed rituals to ask for wisdom, blessings, and the means to reach beyond our capabilities. While we tend to believe that it is only now, in the modern world, that power and knowledge carry great risks, primitive knowledge was also dangerous, because in antiquity a part of our understanding of the world and ourselves did not come from us, but from the Other. From the gods, from spirits, from raging voices that spoke in silence.

[Labatut invokes the mysteries of the Vedas and their Altar of Fire, which was meant to develop “a mind, (that) when properly developed, could fly like a bird with outstretched wings and conquer the skies.”…]

Seen from afar by people who were not aware of what was being made, these men and women must surely have looked like bricklayers gone mad. And that same frantic folly seems to possess those who, in recent decades, have dedicated their hearts and minds to the building of a new mathematical construct, a soulless copy of certain aspects of our thinking that we have chosen to name “artificial intelligence,” a tool so formidable that, if we are to believe the most zealous among its devotees, will help us reach the heavens and become immortal…

[Labatut recounts the stories– and works– of some of the creators of AI’s DNA: George Boole (and his logic), Claude Shannon (who put that logic to work), and Geoffrey Hinton (Boole’s great-great-grandson, and “the Godfather of AI,” who created some of the first neural networks, but has more recently undergone a change of opinion)…]

… Hinton has been transformed. He has mutated from an evangelist of a new form of reason into a prophet of doom. He says that what changed his mind was the realization that we had, in fact, not replicated our intelligence, but created a superior one.

Or was it something else, perhaps? Did some unconscious part of him whisper that it was he, rather than his great-great-grandfather, who was intended by God to find the mechanisms of thought? Hinton does not believe in God, and he would surely deny his ancestor’s claim that pain is an instrument of the Lord’s will, since he was forced to have every one of his meals on his knees, resting on a pillow like a monk praying at the altar, because of a back injury that caused him excruciating pain. For more than seventeen years, he could not sit down, and only since 2022 has he managed to do so long enough to eat.

Hinton is adamant that the dangers of thinking machines are real. And not just short-term effects like job replacement, disinformation, or autonomous lethal weapons, but an existential risk that some discount as fantasy: that our place in the world might be supplanted by AI. Part of his fear is that he believes AI could actually achieve a sort of immortality, as the Vedic gods did. “The good news,” he has said, “is we figured out how to build things that are immortal. When a piece of hardware dies, they don’t die. If you’ve got the weights stored in some medium and you can find another piece of hardware that can run the same instructions, then you can bring it to life again. So, we’ve got immortality. But it’s not for us.”

Hinton seems to be afraid of what we might see when the embers of the Altar of Fire die down at the end of the sacrifice and the sharp coldness of the beings we have conjured up starts to seep into our bones. Are we really headed for obsolescence? Will humanity perish, not because of the way we treat all that surrounds us, nor due to some massive unthinking rock hurled at us by gravity, but as a consequence of our own irrational need to know all that can be known? The supposed AI apocalypse is different from the mushroom-cloud horror of nuclear war, and unlike the ravages of the wildfires, droughts, and inundations that are becoming commonplace, because it arises from things that we have, since the beginning of civilization, always considered positive and central to what makes us human: reason, intelligence, logic, and the capacity to solve the problems, puzzles, and evils that taint even the most fortunate person’s existence with everyday suffering. But in clawing our way to apotheosis, in daring to follow the footsteps of the Vedic gods who managed to escape from Death, we may shine a light on things that should remain in darkness. Because even if artificial intelligence never lives up to the grand and terrifying nightmare visions that presage a nonhuman world where algorithms hum along without us, we will still have to contend with the myriad effects this technology will have on human society, culture, and economics.

In the meantime, the larger specter of superintelligent AI looms over us. And while it is less likely and perhaps even impossible (nothing but a fairy tale, some say, a horror story intended to attract more money and investment by presenting a series of powerful systems not as the next step in our technological development but as a death-god that ends the world), it cannot be easily dispelled, for it reaches down and touches the fibers of our mythmaking apparatus, that part of our being that is atavistic and fearful, because it reminds us of a time when we shivered in caves and huddled together, while outside in the dark, with eyes that could see in the night, the many savage beasts and monsters of the past sniffed around for traces of our scent.

As every new AI model becomes stronger, as the voices of warning form a chorus, and even the most optimistic among us begin to fear this new technology, it is harder and harder to think without panic or to reason with logic. Thankfully, we have many other talents that don’t answer to reason. And we can always rise and take a step back from the void toward which we have so hurriedly thrown ourselves, by lending an ear to the strange voices that arise from our imagination, that feral territory that will always remain a necessary refuge and counterpoint to rationality.

Faced, as we are, with wild speculation, confronted with dangers that no one, however smart or well informed, is truly capable of managing or understanding, and taunted by the promises of unlimited potential, we may have to sound out the future not merely with science, politics, and reason, but with that devil-eye we use to see in the dark: fiction. Because we can find keys to doors we have yet to encounter in the worlds that authors have imagined in the past. As we grope forward in a daze, battered and bewildered by the capabilities of AI, we could do worse than to think about the desert planet where the protagonists of Herbert’s Dune novels sought to peer into the streaming sands of future time, under the heady spell of a drug called spice, to find the Golden Path, a way for human beings to break from tyranny and avoid extinction or stagnation by being more diverse, resilient, and free, evolving past purely logical reasoning and developing our minds and faculties to the point where our thoughts and actions are unpredictable and not bound by statistics. Herbert’s books, with their strange mixture of past and present, remind us that there are many ways in which we can continue forward while preserving our humanity. AI is here already, but what we choose to do with it and what limits we agree to place on its development remain decisions to be made. No matter how many billions of dollars are invested in the AI companies that promise to eliminate work, solve climate change, cure cancer, and rain down miracles unlike anything we have seen before, we can never fully give ourselves over to these mathematical creatures, these beings with no soul or sympathy, because they are neither alive nor conscious—at least not yet, and certainly not like us—so they do not share the contradictory nature of our minds.

In the coming years, as people armed with AI continue making the world faster, stranger, and more chaotic, we should do all we can to prevent these systems from giving more and more power to the few who can build them. But we should also consider a warning from Herbert, the central commandment he chose to enshrine at the heart of future humanity’s key religious text, a rule meant to keep us from becoming subservient to the products of our reason, and from bowing down before the God of Logic and his many fearsome offspring:

Thou shalt not make a machine in the likeness of a human mind

Before and after artificial intelligence: “The Gods of Logic” in @Harpers. Eminently worth reading in full.

For a less pessimistic view, see: “A Journey Through the Uncanny Valley: Our Relational Futures with AI,” from @dylanhendricks at @iftf.

* Geoffrey Hinton

###

As we deliberate on Daedalus’ caution, we might send fantastically far-sighted birthday greetings to a techno-optimist who might likely have brushed aside Labatut’s concerns: Hugo Gernsback, a Luxembourgish-American inventor, broadcast pioneer, writer, and publisher; he was born on this date in 1884.

Gernsback held 80 patents at the time of his death; he founded radio station WRNY, was involved in the first television broadcasts, and is considered a pioneer in amateur radio.  But it was as a writer and publisher that he probably left his most lasting mark:  In 1911, as owner/publisher of the magazine Modern Electrics, he filled a blank spot in his publication by dashing off the first chapter of a series called “Ralph 124C 41+.” The twelve installments of “Ralph” were filled with inventions unknown in 1911, including “television” (Gernsback is credited with introducing the word), fluorescent lighting, juke boxes, solar energy, microfilm, vending machines, and the device we now call radar.

The “Ralph” series was an astounding success with readers; in 1926 Gernsback founded the first magazine devoted to science fiction, Amazing Stories.  Believing that the perfect sci-fi story is “75 percent literature interwoven with 25 percent science,” he coined the term “science fiction.”

Gernsback was a “careful” businessman, who was tight with the fees that he paid his writers– so tight that H. P. Lovecraft and Clark Ashton Smith referred to him as “Hugo the Rat.”

Still, his contributions to the genre as publisher were so significant that, along with H.G. Wells and Jules Verne, he is sometimes called “The Father of Science Fiction”; in his honor, the annual Science Fiction Achievement awards are called the “Hugos.”

(Coincidentally, today is also the birthday– in 1906– of Philo T. Farnsworth, the man who actually did invent television.)

Gernsback, wearing one of his inventions, TV Glasses

source

“Man’s most serious activity is play”*…

The end of a game of Hex on a standard 11×11 board. Here, White wins the game. (source)

Until the mid-20th century, the “playing fields” of board games tended to be composed of squares; then hexagons emerged. Jon-Paul Dyson explains why…

A board game begins with the board. But how is that board divided up? Often the simplest unit of division is a square. Consider the 64 squares of a chess board, or the 92 squares on a Stratego board. In each case, players take control of a square which exists in relation to other spaces around it, especially if they share adjoining borders. The design of these game boards affords or encourages certain types of movement, usually horizontally or vertically (in four directions) or in some cases diagonally in eight directions (as with the bishop in chess).

And yet there exists a problem with this sort of layout in any game that allows freedom of movement, because the connection between these squares is uneven. Although squares share a long border horizontally and vertically, they do not share such a border on the diagonal connections. In a game like chess, where you physically pick up a piece to move it, this is not much of an issue. But as simulation board games began to develop after World War II, this proved more problematic. Many of these games involved sliding pieces (or cardboard tiles that were frustrating to pick up) from square to square, like army units occupying territory. For these situations, hexagonal spaces, which provided equal movement in six directions, produced a better solution.
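
To make the difference concrete, here’s a minimal sketch (ours, not Dyson’s) of the two adjacencies in Python– a square cell shares a full edge with only four neighbors (diagonal “neighbors” touch only at a corner), while a hex cell, written here in the common axial-coordinate convention, shares a full edge with all six:

```python
# A minimal sketch contrasting adjacency on square and hexagonal boards.
# The axial-coordinate convention for hexes is our choice of representation;
# the article doesn't prescribe one.

def square_neighbors(col, row, diagonals=False):
    """Cells adjacent to a square: 4 orthogonal, optionally 4 corner-only diagonals."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    if diagonals:
        steps += [(1, 1), (1, -1), (-1, 1), (-1, -1)]
    return [(col + dc, row + dr) for dc, dr in steps]

def hex_neighbors(q, r):
    """Cells adjacent to a hex (axial coordinates): always exactly six,
    each sharing a full edge with the center cell."""
    steps = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]
    return [(q + dq, r + dr) for dq, dr in steps]

print(len(square_neighbors(0, 0)))                  # 4
print(len(square_neighbors(0, 0, diagonals=True)))  # 8, but the extra 4 meet only at corners
print(len(hex_neighbors(0, 0)))                     # 6, all across shared edges
```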

As is true throughout the history of innovation, whenever there is a problem, it usually turns out that multiple people arrive at similar inventive solutions. That was the case with the development of the hex as a basic unit of division in board games.

Piet Hein [see here], a Danish polymath, who was a quantum physicist as well as a designer, poet, and puzzle and game inventor, came up with the idea in 1942 for a game in which players competed to create connected lines across a game board made up of hexagonal spaces. Thus he might be credited as the father of hex. Yet in the late 1940s, American mathematician John Nash (the subject of the movie A Beautiful Mind) independently invented a similar game at Princeton that also used hex tiles [though we should note that it was a variation on the Shannon switching game, created by Claude Shannon sometime before 1951]. In 1952, Parker Brothers released a version of the game which they called Hex.
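
Hex’s object– building an unbroken chain of your own stones between your two edges of the board– amounts, in computing terms, to a connectivity search over that six-way adjacency. A toy sketch (ours, not Dyson’s; assigning White the left-to-right connection is only a convention of the example):

```python
from collections import deque

# Toy win-check for Hex: does White's set of stones connect the left edge
# (column 0) to the right edge (column n-1)? A simple breadth-first search
# over the six hex neighbors of each cell.

HEX_STEPS = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, -1), (-1, 1)]

def white_wins(stones, n=11):
    """stones: set of (row, col) cells occupied by White on an n x n board."""
    frontier = deque(cell for cell in stones if cell[1] == 0)  # stones on the left edge
    seen = set(frontier)
    while frontier:
        row, col = frontier.popleft()
        if col == n - 1:                 # reached the right edge: a winning chain exists
            return True
        for dr, dc in HEX_STEPS:
            nxt = (row + dr, col + dc)
            if nxt in stones and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# A 3x3 example: a chain straight across the middle row wins.
print(white_wins({(1, 0), (1, 1), (1, 2)}, n=3))  # True
```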

This was a time of post-war prosperity when people increasingly had the discretionary income to buy board games, but it was also a period when the United States and the Soviet Union, allies during the war, had become bitter rivals locked in a Cold War. Rather than downsizing after the victory over Germany and Japan, the American military complex shifted from fighting the Axis powers to planning for a major conflict with the Soviet Union and engaging in a series of smaller wars such as that fought in Korea. To help plan American strategy, the Army Air Force and the Douglas Aircraft Company created the Rand Corporation, a think tank that made significant contributions to American policy and computing.

One of the projects the Rand Corporation focused on was modeling conflict through the use of war games. To that end Alexander Mood, a staff member at the Rand Corporation, introduced a honeycombed, hex-shaped board that allowed pieces to move in six directions rather than just four. John Nash was at the Rand Corporation and, in a 1952 paper he coauthored entitled “Some War Games,” he and coauthor R. M. Thrall described using this hex-based system for ground and air games.

It was another game creator, however, who took this development and made the most significant contribution to the development of hex-based games: Charles S. Roberts. Roberts was an army veteran who in 1954 published Tactics, a military simulation board game that is often credited as the first modern wargame. Roberts then founded the game company Avalon Hill, and his games and their innovative simulation of battlefield odds drew the attention of the Rand Corporation because his Combat Results Table for determining the outcome of battles mirrored systems they had developed. The Rand Corporation invited Roberts to visit, and supposedly while he was there he noticed their use of hex-based boards.

Recognizing the superiority of a hex-based system for simulating movement, Roberts began using it in game design in 1961. That year was the centennial of the American Civil War, and so there was a demand for historical simulations. Roberts redesigned his recently released game Gettysburg with the new hex pattern. The Strong owns copies of Gettysburg belonging to Roberts, both in the older square format and in the revised hex version. He also used it for the Avalon Hill game Chancellorsville, another Civil War simulation. Soon the hex system became commonplace in a high proportion of wargames, as well as in more mainstream games such as the 1969 release Psyche-Paths.

Since then, hex board layouts have been used in a wide variety of games. Settlers of Catan is perhaps the most famous example, but plenty of others exist, including the spaces in the game Heroscape. Even video games will often use the hex layout, not only in wargames but in titles such as Sid Meier’s Civilization V…

“Hex Marks the Spot,” from @jpdysonplay and @museumofplay.

* George Santayana

###

As we make our moves, we might send playful birthday greetings to Seymour Papert; he was born on this date in 1928.  Trained as a mathematician, Papert was a pioneer of computer science, and in particular, artificial intelligence. He created the Epistemology and Learning Research Group at the MIT Architecture Machine Group (which later became the MIT Media Lab); he directed MIT’s Artificial Intelligence Laboratory; he co-created the hugely influential LOGO computer language; and he was a principal of the One Laptop Per Child Program.  Called by Marvin Minsky “the greatest living mathematics educator,” Papert won a Guggenheim fellowship (1980), a Marconi International fellowship (1981), the Software Publishers Association Lifetime Achievement Award (1994), and the Smithsonian Award (1997).

A champion of fun and games in learning, Papert was the brain behind Lego Mindstorms.

 source

“It was orderly, like the universe. It had logic. It was dependable. Using it allowed a kind of moral uplift, as one’s own chaos was also brought under control.”*…

(Roughly) Daily has looked before at the history of the filing cabinet, rooted in the work of Craig Robertson (@craig2robertson). He has deepened his research and published a new book, The Filing Cabinet: A Vertical History of Information. An Xiao Mina offers an appreciation– and a consideration of one of the central questions it raises: can emergent knowledge coexist with an internet that privileges the kind of “certainty” that’s implicit in the filing paradigm that was born with the filing cabinet and that informs our “knowledge systems” today…

… The 20th century saw an emergent information paradigm shaped by corporate capitalism, which emphasized maximizing profit and minimizing the time workers spent on tasks. Offices once kept their information in books—think Ebenezer Scrooge with his quill pen, updating his thick ledger on Christmas. The filing cabinet changed all that, encouraging what Robertson calls “granular certainty,” or “the drive to break more and more of life and its everyday routines into discrete, observable, and manageable parts.” This represented an important conceptualization: Information became a practical unit of knowledge that could be standardized, classified, and effortlessly stored and retrieved.

Take medical records, which require multiple layers of organization to support routine hospital business. “At the Bryn Mawr Hospital,” Robertson writes, “six different card files provided access to patient information: an alphabetical file of admission cards for discharged patients, an alphabetical file for the accident ward, a file to record all operations, a disease file, a diagnostic file, and a doctors’ file that recorded the number of patients each physician referred to the hospital.” The underlying logic of this system was that the storage of medical records didn’t just keep them safe; it made sure that those records could be accessed easily.
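
Those six card files are, in effect, six independent indexes into one set of records– the same move a programmer makes when building several lookup tables keyed on different fields. A minimal sketch of the idea (ours, not Robertson’s; the field names are hypothetical):

```python
from collections import defaultdict

# Several "card files" (indexes) over the same records, so a chart can be
# retrieved by patient, by disease, or by doctor without rifling the others.

records = [
    {"id": 1, "patient": "A. Jones", "disease": "influenza", "doctor": "Dr. Smith"},
    {"id": 2, "patient": "B. Lee",   "disease": "fracture",  "doctor": "Dr. Smith"},
]

def build_index(records, field):
    """One card file: maps each value of `field` to the ids of matching records."""
    index = defaultdict(list)
    for record in records:
        index[record[field]].append(record["id"])
    return index

by_patient = build_index(records, "patient")   # the alphabetical admissions file
by_disease = build_index(records, "disease")   # the disease file
by_doctor  = build_index(records, "doctor")    # the doctors' file

print(by_doctor["Dr. Smith"])  # [1, 2] -- stored once, findable many ways
```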

Robertson’s deep focus on the filing cabinet grounds the book in history and not historical analogy. He touches very little on Big Data and indexing and instead dives into the materiality of the filing cabinet and the principles of information management that guided its evolution. But students of technology and information studies will immediately see this history shaping our world today…

[And] if the filing cabinet, as a tool of business and capital, guides how we access digital information today, its legacy of certainty overshadows the messiness intrinsic to acquiring knowledge—the sort that requires reflection, contextualization, and good-faith debate. Ask the internet difficult questions with complex answers—questions of philosophy, political science, aesthetics, perception—and you’ll get responses using the same neat little index cards with summaries of findings. What makes for an ethical way of life? What is the best English-language translation of the poetry of Borges? What are the long-term effects of social inequalities, and how do we resolve them? Is it Yanny or Laurel?

Information collection and distribution today tends to follow the rigidity of cabinet logic to its natural extreme, but that bias leaves unattended more complex puzzles. The human condition inherently demands a degree of comfort with uncertainty and ambiguity, as we carefully balance incomplete and conflicting data points, competing value systems, and intricate frameworks to arrive at some form of knowing. In that sense, the filing cabinet, despite its deep roots in our contemporary information architecture, is just one step in our epistemological journey, not its end…

A captivating new history helps us see a humble appliance’s sweeping influence on modern life: “The Logic of the Filing Cabinet Is Everywhere.”

* Jeanette Winterson, Why Be Happy When You Could Be Normal?

###

As we store and retrieve, we might recall that it was on this date in 1955 that the term “artificial intelligence” was coined in a proposal for a “2 month, 10 man study of artificial intelligence” submitted by John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories). The workshop, which took place at Dartmouth a year later, in July and August 1956, is generally recognized as the official birth date of the new field.

Dartmouth Conference attendees: Marvin Minsky, Claude Shannon, Ray Solomonoff and other scientists at the Dartmouth Summer Research Project on Artificial Intelligence (Photo: Margaret Minsky)

source

“One of the most singular characteristics of the art of deciphering is the strong conviction possessed by every person, even moderately acquainted with it, that he is able to construct a cipher which nobody else can decipher.”*…

And yet, for centuries no one has succeeded. Now, as Erica Klarreich reports, cryptographers want to know which of five possible worlds we inhabit, which will reveal whether truly secure cryptography is even possible…

Many computer scientists focus on overcoming hard computational problems. But there’s one area of computer science in which hardness is an asset: cryptography, where you want hard obstacles between your adversaries and your secrets.

Unfortunately, we don’t know whether secure cryptography truly exists. Over millennia, people have created ciphers that seemed unbreakable right until they were broken. Today, our internet transactions and state secrets are guarded by encryption methods that seem secure but could conceivably fail at any moment.

To create a truly secure (and permanent) encryption method, we need a computational problem that’s hard enough to create a provably insurmountable barrier for adversaries. We know of many computational problems that seem hard, but maybe we just haven’t been clever enough to solve them. Or maybe some of them are hard, but their hardness isn’t of a kind that lends itself to secure encryption. Fundamentally, cryptographers wonder: Is there enough hardness in the universe to make cryptography possible?
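
The asymmetry cryptographers are betting on can be felt even in a toy example (ours, not Klarreich’s): multiplying two primes is a single step, while recovering them from their product by brute force takes enormously longer– and, as the piece stresses, no one has proved that a clever shortcut doesn’t exist:

```python
# Toy illustration of an (unproven) hardness asymmetry: multiplication is
# instant, naive factoring is slow. Real cryptosystems use numbers hundreds
# of digits long, far beyond brute force.

p, q = 104_729, 1_299_709       # the 10,000th and 100,000th primes
n = p * q                       # the "easy" direction: one multiplication

def factor_by_trial_division(n):
    """The "hard" direction, done naively: try every candidate divisor."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

print(factor_by_trial_division(n))  # (104729, 1299709), after ~100,000 trial divisions
```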

In 1995, Russell Impagliazzo of the University of California, San Diego broke down the question of hardness into a set of sub-questions that computer scientists could tackle one piece at a time. To summarize the state of knowledge in this area, he described five possible worlds — fancifully named Algorithmica, Heuristica, Pessiland, Minicrypt and Cryptomania — with ascending levels of hardness and cryptographic possibility. Any of these could be the world we live in…

Explore each of them– and their implications for secure encryption– at “Which Computational Universe Do We Live In?” from @EricaKlarreich in @QuantaMagazine.

* Charles Babbage

###

As we contemplate codes, we might send communicative birthday greetings to a frequently-featured hero of your correspondent, Claude Elwood Shannon; he was born on this date in 1916.  A mathematician, electrical engineer– and cryptographer– he is known as “the father of information theory.”  But he is also remembered for his contributions to digital circuit design theory and for his cryptanalysis work during World War II, both as a codebreaker and as a designer of secure communications systems.


 source