(Roughly) Daily


“Mathematics is the music of reason”*…

An illustration of a mathematician engaged in work, drawing geometric shapes and formulas on paper, with a three-dimensional geometric object and interconnected lines of mathematical concepts in the background.

New technologies, most centrally AI, are arming scientists with tools that might not just accelerate or enhance their work, but altogether transform it. As Jordana Cepelewicz reports, mathematicians have started to prepare for a profound shift in what it means to do math…

Since the start of the 20th century, the heart of mathematics has been the proof — a rigorous, logical argument for whether a given statement is true or false. Mathematicians’ careers are measured by what kinds of theorems they can prove, and how many. They spend the bulk of their time coming up with fresh insights to make a proof work, then translating those intuitions into step-by-step deductions, fitting different lines of reasoning together like puzzle pieces.

The best proofs are works of art. They’re not just rigorous; they’re elegant, creative and beautiful. This makes them feel like a distinctly human activity — our way of making sense of the world, of sharpening our minds, of testing the limits of thought itself.

But proofs are also inherently rational. And so it was only natural that when researchers started developing artificial intelligence in the mid-1950s, they hoped to automate theorem proving: to design computer programs capable of generating proofs of their own. They had some success. One of the earliest AI programs could output proofs of dozens of statements in mathematical logic. Other programs followed, coming up with ways to prove statements in geometry, calculus and other areas.

Still, these automated theorem provers were limited. The kinds of theorems that mathematicians really cared about required too much complexity and creativity. Mathematical research continued as it always had, unaffected and undeterred.

Now that’s starting to change. Over the past few years, mathematicians have used machine learning models to uncover new patterns, invent new conjectures, and find counterexamples to old ones. They’ve created powerful proof assistants both to verify whether a given proof is correct and to organize their mathematical knowledge.

They have not, as yet, built systems that can generate the proofs from start to finish, but that may be changing. In 2024, Google DeepMind announced that they had developed an AI system that scored a silver medal in the International Mathematical Olympiad, a prestigious proof-based exam for high school students. OpenAI’s more generalized “large language model,” ChatGPT, has made significant headway on reproducing proofs and solving challenging problems, as have smaller-scale bespoke systems. “It’s stunning how much they’re improving,” said Andrew Granville, a mathematician at the University of Montreal who until recently doubted claims that this technology might soon have a real impact on theorem proving. “They absolutely blow apart where I thought the limitations were. The cat’s out of the bag.”

Researchers predict they’ll be able to start outsourcing more tedious sections of proofs to AI within the next few years. They’re mixed on whether AI will ever be able to prove their most important conjectures entirely: Some are willing to entertain the notion, while others think there are insurmountable technological barriers. But it’s no longer entirely out of the question that the more creative aspects of the mathematical enterprise might one day be automated.

Even so, most mathematicians at the moment “have their heads buried firmly in the sand,” Granville said. They’re ignoring the latest developments, preferring to spend their time and energy on their usual jobs.

Continuing to do so, some researchers warn, would be a mistake. Even the ability to outsource boring or rote parts of proofs to AI “would drastically alter what we do and how we think about math over time,” said Akshay Venkatesh, a preeminent mathematician and Fields medalist at the Institute for Advanced Study in Princeton, New Jersey.

He and a relatively small group of other mathematicians are now starting to examine what an AI-powered mathematical future might look like, and how it will change what they value. In such a future, instead of spending most of their time proving theorems, mathematicians will play the role of critic, translator, conductor, experimentalist. Mathematics might draw closer to laboratory sciences, or even to the arts and humanities.

Imagining how AI will transform mathematics isn’t just an exercise in preparation. It has forced mathematicians to reckon with what mathematics really is at its core, and what it’s for…

Absolutely fascinating: “Mathematical Beauty, Truth, and Proof in the Age of AI,” from @jordanacep.bsky.social in @quantamagazine.bsky.social. Eminently worth reading in full.

* James Joseph Sylvester

###

As we wonder about ways of knowing, we might spare a thought for a man whose work helped trigger an earlier iteration of this enhance/transform discussion and laid the groundwork for the one unpacked in the article linked above: J. Presper Eckert; he died on this day in 1995. An electrical engineer, he co-designed (with John Mauchly) the first general-purpose computer, the ENIAC (see here and here), for the U.S. Army’s Ballistic Research Laboratory. He and Mauchly went on to found the Eckert–Mauchly Computer Corporation, at which they designed and built the first commercial computer in the U.S., the UNIVAC.

Three men interacting with a large vintage computer console, with tape reels in the background.
Eckert (standing and gesturing) and Mauchly (at the console), demonstrating the UNIVAC to Walter Cronkite (source)

“I wonder, he wondered, if any human has ever felt this way before about an android.”*…

Well, yes… Centuries before audio deepfakes and text-to-speech software, inventors in the eighteenth century constructed androids with swelling lungs, flexible lips, and moving tongues to simulate human speech. Jessica Riskin explores the history of such talking heads, from their origins in musical automata to inventors’ quixotic attempts to make machines pronounce words, converse, and declare their love…

The word “android”, derived from Greek roots meaning “manlike”, was the coinage of Gabriel Naudé, French physician and librarian, personal doctor to Louis XIII, and later architect of the forty-thousand-volume library of Cardinal Jules Mazarin. Naudé was a rationalist and an enemy of superstition. In 1625 he published a defense of Scholastic philosophers to whom tradition had ascribed works of magic. He included the thirteenth-century Dominican friar, theologian, and philosopher Albertus Magnus (Albert the Great), who, according to legend, had built an artificial man made of bronze.

This story seems to have originated long after Albert’s death with Alfonso de Madrigal (also known as El Tostado), a voluminous commentator of the fifteenth century, who adapted and embellished the tales of moving statues and talking brazen heads in medieval lore. El Tostado said that Albert had worked for thirty years to compose a whole man out of metal. The automaton supplied Albert with the answers to all of his most vexing questions and problems and even, in some versions of the tale, obligingly dictated a large part of Albert’s voluminous writings. The machine had met its fate, according to El Tostado, when Albert’s student, Thomas Aquinas, smashed it to bits in frustration, having grown tired of “its great babbling and chattering”.

Naudé did not believe in Albert’s talkative statue. He rejected it and other tales of talking automaton heads as “false, absurd and erroneous”. The reason Naudé cited was the statues’ lack of equipment: being altogether without “muscles, lungs, epiglottis, and all that is necessary for a perfect articulation of the voice”, they simply did not have the necessary “parts and instruments” to speak reasonably. Naudé concluded, in light of all the reports, that Albert the Great probably had built an automaton, but never one that could give him intelligible and articulate responses to questions. Instead, Albert’s machine must have been similar to the Egyptian statue of Memnon, much discussed by ancient authors, which murmured agreeably when the sun shone upon it: the heat caused the air inside the statue to “rarefy” so that it was forced out through little pipes, making a murmuring sound.

Despite disbelieving in Albert the Great’s talking head, Naudé gave it a powerful new name, referring to it as the “android”. Thus deftly, he smuggled a new term into the language, for according to the 1695 dictionary by the French philosopher and writer Pierre Bayle, “android” had been “an absolutely unknown word, & purely an invention of Naudé, who used it boldly as though it were established.” It was a propitious moment for neologisms: Naudé’s term quickly infiltrated the emerging genre of dictionaries and encyclopedias. Bayle repeated it in the article on “Albert le Grand” in his dictionary. Thence, “android” secured its immortality as the headword of an article — citing Naudé and Bayle — in the first volume of the supplement to the English encyclopedist Ephraim Chambers’ Cyclopaedia. In denying the existence of Albert’s android, Naudé had given life to the android as a category of machine.

But the first actual android of the new, experimental-philosophical variety for which the historical record contains rich information — “android” in Naudé’s root sense, a working human-shaped assemblage of “necessary parts” and instruments — went on display on February 3, 1738…

[There follows a fascinating account of examples from the 18th and 19th centuries…]

Plates depicting the components of artificial and natural speech from Wolfgang von Kempelen’s The Mechanism of Speech (1791) — Source

… In the early part of the twentieth century, designers of artificial speech moved on from mechanical to electrical speech synthesis. The simulation of the organs and process of speaking — of the trembling glottis, the malleable vocal tract, the supple tongue and mouth — was specific to the last decades of the eighteenth century, when philosophers and mechanicians and paying audiences were briefly preoccupied with the idea that articulate language was a bodily function: that Descartes’ divide between mind and body might be bridged in the organs of speech…

The origin of the word “android” and (very) early examples: “You Are My Friend” from @PublicDomainRev.

* Philip K. Dick, “Do Androids Dream of Electric Sheep?”

###

As we muse on the mechanical, we might spare a thought for a man whose work helped pave the way for androids as we currently conceive them: J. Presper Eckert; he died on this day in 1995. An electrical engineer, he co-designed (with John Mauchly) the first general-purpose computer, the ENIAC (see here and here), for the U.S. Army’s Ballistic Research Laboratory. He and Mauchly went on to found the Eckert–Mauchly Computer Corporation, at which they designed and built the first commercial computer in the U.S., the UNIVAC.

Eckert (standing and gesturing) and Mauchly (at the console), demonstrating the UNIVAC to Walter Cronkite (source)

“The golden ratio is the key”*…

… in any case, to good design. So, how did it come into currency? Western tradition tends to credit the Greeks and Euclid (via Fibonacci), while acknowledging that they may have been inspired by the Egyptians. But recent research has surfaced a more tantalizing prospect:

Design remains a largely white profession, with Black people still vastly underrepresented – making up just 3% of the design industry, according to a 2019 survey.

Part of the lack of representation might have had to do with the fact that prevailing tenets of design seemed to hew closely to Western traditions, with purported origins in Ancient Greece and the schools out of Germany, Russia and the Netherlands deemed paragons of the field. A “Black aesthetic” has seemed to be altogether absent.

But what if a uniquely African aesthetic has been deeply embedded in Western design all along? 

Through my research collaboration with design scholar Ron Eglash, author of “African Fractals,” I discovered that the design style that undergirds much of the graphic design profession today – the Swiss design tradition that uses the golden ratio – may have roots in African culture.

The golden ratio refers to the mathematical expression “1 : phi,” where phi is the irrational number (1 + √5)/2, roughly 1.618.

Visually, this ratio can be represented as the “golden rectangle,” with the ratio of side “a” to side “b” the same as the ratio of the sides “a”-plus-“b” to “a.” 

The golden rectangle. If you divide ‘a’ by ‘b’ and ‘a’-plus-‘b’ by ‘a,’ you get phi, which is roughly 1.618

Create a square on one side of the golden rectangle, and the remaining space will form another golden rectangle. Repeat that process in each new golden rectangle, subdividing in the same direction, and you’ll get a golden spiral [the image at the top of this post], arguably the more popular and recognizable representation of the golden ratio.
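(A quick numerical aside – mine, not from Bennett’s article: the short Python sketch below starts with a rectangle whose sides are in the ratio phi : 1 and repeatedly slices off a square; the ratio of the surviving sides stays pinned at phi, which is exactly the self-similarity that generates the spiral.)

```python
# A minimal sketch: cutting a square off a golden rectangle leaves a
# smaller golden rectangle -- the self-similarity behind the golden spiral.
import math

phi = (1 + math.sqrt(5)) / 2  # the golden ratio, ~1.6180339887

a, b = phi, 1.0  # long and short sides of a golden rectangle
for step in range(6):
    print(f"step {step}: {a:.6f} x {b:.6f}, ratio = {a / b:.10f}")
    # slicing off a b-by-b square leaves a rectangle of sides b and (a - b)
    a, b = b, a - b
```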

This ratio is called “golden” or “divine” because it’s visually pleasing, and some scholars argue that the human eye can more readily interpret images that incorporate it.

For these reasons, you’ll see the golden ratio, rectangle and spiral incorporated into the design of public spaces and emulated in the artwork in museum halls and hanging on gallery walls. It’s also reflected in nature, architecture, and design – and it forms a key component of modern Swiss design.

The Swiss design style emerged in the 20th century from an amalgamation of Russian, Dutch and German aesthetics. It’s been called one of the most important movements in the history of graphic design and provided the foundation for the rise of modernist graphic design in North America.

The Helvetica font, which originated in Switzerland, and Swiss graphic compositions – from ads to book covers, web pages and posters – are often organized according to the golden rectangle. Swiss architect Le Corbusier famously centered his design philosophy on the golden ratio, which he described as “[resounding] in man by an organic inevitability.”

An ad for Swiss Air by graphic designer Josef Müller-Brockmann incorporates the golden ratio. Grafic Notes

Graphic design scholars – represented particularly by Greek architecture scholar Marcus Vitruvius Pollio – have tended to credit early Greek culture for incorporating the golden rectangle into design. They’ll point to the Parthenon as a notable example of a building that implemented the ratio in its construction.

But empirical measurements don’t support the Parthenon’s purported golden proportions, since its actual ratio is 4:9 – two whole numbers. As I’ve pointed out, the Greeks, notably the mathematician Euclid, were aware of the golden ratio, but it was mentioned only in the context of the relationship between two lines or figures. No Greek sources use the phrase “golden rectangle” or suggest its use in design.

In fact, ancient Greek writings on architecture almost always stress the importance of whole number ratios, not the golden ratio. To the Greeks, whole number ratios represented Platonic concepts of perfection, so it’s far more likely that the Parthenon would have been built in accordance with these ideals.

If not from the ancient Greeks, where, then, did the golden rectangle originate? 

In Africa, design practices tend to focus on bottom-up growth and organic, fractal forms. They are created in a sort of feedback loop, what computer scientists call “recursion.” You start with a basic shape and then divide it into smaller versions of itself, so that the subdivisions are embedded in the original shape. What emerges is called a “self-similar” pattern, because the whole can be found in the parts… 
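(To make “recursion” concrete – a toy gloss of mine, not Bennett’s example – here is a short Python sketch in which each segment is subdivided into smaller copies of itself, so the whole pattern reappears in every part:)

```python
# A toy illustration of recursion producing a self-similar pattern:
# each strip keeps its outer thirds and blanks the middle, then the
# same rule is applied to each surviving piece.

def subdivide(length: int, depth: int) -> str:
    """Return a Cantor-like strip of '#' marks, subdivided `depth` times."""
    if depth == 0 or length < 3:
        return "#" * length
    third = length // 3
    side = subdivide(third, depth - 1)  # each part repeats the whole rule
    return side + " " * (length - 2 * third) + side

for d in range(4):
    print(subdivide(27, d))
```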

Robert Bringhurst, author of the canonical work “The Elements of Typographic Style,” subtly hints at the golden ratio’s African origins:

“If we look for a numerical approximation to this ratio, 1: phi, we will find it in something called the Fibonacci series, named for the thirteenth-century mathematician Leonardo Fibonacci. Though he died two centuries before Gutenberg, Fibonacci is important in the history of European typography as well as mathematics. He was born in Pisa but studied in North Africa.”
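(Bringhurst’s connection can be checked in a few lines of Python – a sketch of mine, not from his book: the ratios of consecutive Fibonacci numbers close in on phi almost immediately.)

```python
# Ratios of consecutive Fibonacci numbers converge to phi ~ 1.6180339887.

def fib_ratios(n: int):
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
        yield b / a  # ratio of each term to its predecessor

for i, r in enumerate(fib_ratios(12), start=1):
    print(f"after {i:2d} steps: {r:.10f}")
```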

These scaling patterns can be seen in ancient Egyptian design, and archaeological evidence shows that African cultural influences traveled down the Nile River. For instance, Egyptologist Alexander Badawy found the Fibonacci series at work in the layout of the Temple of Karnak. It is arranged in the same way African villages grow: starting with a sacred altar or “seed shape” before accumulating larger spaces that spiral outward.

Given that Fibonacci traveled to North Africa specifically to study mathematics, it is not unreasonable to speculate that he brought the sequence back with him. Its first known appearance in Europe is not in ancient Greek texts but in “Liber Abaci,” Fibonacci’s book of math published in Italy in 1202.

Why does all of this matter?

Well, in many ways, it doesn’t. We care about “who was first” only because we live in a system obsessed with proclaiming some people winners – the intellectual property owners that history should remember. That same system declares some people losers, removed from history and, subsequently, their lands, undeserving of any due reparations. 

Yet as many strive to live in a just, equitable and peaceful world, it is important to restore a more multicultural sense of intellectual history, particularly within graphic design’s canon. And once Black graphic design students see the influences of their predecessors, perhaps they will be inspired and motivated anew to recover that history – and continue to build upon its legacy.

The longer-than-we’ve-acknowledged history of the Golden Ratio in design; Audrey Bennett (@audreygbennett) unpacks “The African roots of Swiss design.”

For more on Fibonacci’s acquisitive habits, see this earlier post.

* Sir Edward Victor Appleton, Nobel Laureate in physics (1947)

###

As we ruminate on relationships, we might send carefully calculated birthday greetings to Mary Jackson; she was born on this date in 1921. A mathematician and aerospace engineer, she worked at Langley Research Center in Hampton, Virginia (part of the National Advisory Committee for Aeronautics [NACA], which in 1958 was succeeded by the National Aeronautics and Space Administration [NASA]) for most of her career. She began as a “computer” at the segregated West Area Computing division in 1951; in 1958, she became NASA’s first Black female engineer.

Jackson’s story features in the 2016 non-fiction book Hidden Figures: The American Dream and the Untold Story of the Black Women Who Helped Win the Space Race. She is one of the three protagonists in Hidden Figures, the film adaptation released the same year. In 2019, she was posthumously awarded the Congressional Gold Medal; in 2020 the Washington, D.C. headquarters of NASA was renamed the Mary W. Jackson NASA Headquarters.

source

“We know the past but cannot control it. We control the future but cannot know it.”*…

Readers will know of your correspondent’s fascination with– and admiration for– Claude Shannon

Within engineering and mathematics circles, Shannon is a revered figure. At 21 [in 1937], he published what’s been called the most important master’s thesis of all time, explaining how binary switches could do logic and laying the foundation for all future digital computers. At the age of 32, he published A Mathematical Theory of Communication, which Scientific American called “the Magna Carta of the information age.” Shannon’s masterwork invented the bit, or the objective measurement of information, and explained how digital codes could allow us to compress and send any message with perfect accuracy.
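(A small illustration of what that thesis showed – my sketch, not Soni’s: Boolean logic maps directly onto switching circuits, with switches in series acting as AND and switches in parallel acting as OR.)

```python
# Relay circuits as Boolean algebra, in the spirit of Shannon's 1937 thesis:
# current flows through a series chain only if every switch is closed (AND),
# and through a parallel bank if any branch is closed (OR).

def series(*switches: bool) -> bool:
    """Current flows only if every switch in the chain is closed (AND)."""
    return all(switches)

def parallel(*switches: bool) -> bool:
    """Current flows if any branch is closed (OR)."""
    return any(switches)

# Truth table for a small relay network: (A AND B) OR (NOT A AND C)
for A in (False, True):
    for B in (False, True):
        for C in (False, True):
            out = parallel(series(A, B), series(not A, C))
            print(f"A={A!s:5} B={B!s:5} C={C!s:5} -> {out}")
```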

But Shannon wasn’t just a brilliant theoretical mind — he was a remarkably fun, practical, and inventive one as well. There are plenty of mathematicians and engineers who write great papers. There are fewer who, like Shannon, are also jugglers, unicyclists, gadgeteers, first-rate chess players, codebreakers, expert stock pickers, and amateur poets.

Shannon worked on the top-secret transatlantic phone line connecting FDR and Winston Churchill during World War II and co-built what was arguably the world’s first wearable computer. He learned to fly airplanes and played the jazz clarinet. He rigged up a false wall in his house that could rotate with the press of a button, and he once built a gadget whose only purpose when it was turned on was to open up, release a mechanical hand, and turn itself off. Oh, and he once had a photo spread in Vogue.

Think of him as a cross between Albert Einstein and the Dos Equis guy…

From Jimmy Soni (@jimmyasoni), co-author of A Mind At Play: How Claude Shannon Invented the Information Age: “11 Life Lessons From History’s Most Underrated Genius.”

* Claude Shannon

###

As we learn from the best, we might recall that it was on this date in 1946 that an early beneficiary of Shannon’s thinking, the ENIAC (Electronic Numerical Integrator And Computer), was first demonstrated in operation.  (It was announced to the public the following day.) The first general-purpose computer (Turing-complete, digital, and capable of being programmed and re-programmed to solve different problems), ENIAC was begun in 1943, as part of the U.S.’s war effort (as a classified military project known as “Project PX”); it was conceived and designed by John Mauchly and Presper Eckert of the University of Pennsylvania, where it was built.  The finished machine, composed of 17,468 electronic vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors and around 5 million hand-soldered joints, weighed more than 27 tons and occupied a 30 x 50 foot room – in its time the largest single electronic apparatus in the world.  ENIAC’s basic clock speed was 100,000 cycles per second; today’s home computers run at billions of cycles per second.

 source

“Moore’s Law is really a thing about human activity, it’s about vision, it’s about what you’re allowed to believe”*…


In moments of technological frustration, it helps to remember that a computer is basically a rock. That is its fundamental witchcraft, or ours: for all its processing power, the device that runs your life is just a complex arrangement of minerals animated by electricity and language. Smart rocks. The components are mined from the Earth at great cost, and they eventually return to the Earth, however poisoned. This rock-and-metal paradigm has mostly served us well. The miniaturization of metallic components onto wafers of silicon — an empirical trend we call Moore’s Law — has defined the last half-century of life on Earth, giving us wristwatch computers, pocket-sized satellites and enough raw computational power to model the climate, discover unknown molecules, and emulate human learning.

But there are limits to what a rock can do. Computer scientists have been predicting the end of Moore’s Law for decades. The cost of fabricating next-generation chips is growing more prohibitive the closer we draw to the physical limits of miniaturization. And there are only so many rocks left. Demand for the high-purity silica sand used to manufacture silicon chips is so high that we’re facing a global, and irreversible, sand shortage; and the supply chain for commonly-used minerals, like tin, tungsten, tantalum, and gold, fuels bloody conflicts all over the world. If we expect 21st century computers to process the ever-growing amounts of data our culture produces — and we expect them to do so sustainably — we will need to reimagine how computers are built. We may even need to reimagine what a computer is to begin with.

It’s tempting to believe that computing paradigms are set in stone, so to speak. But there are already alternatives on the horizon. Quantum computing, for one, would shift us from a realm of binary ones and zeroes to one of qubits, making computers drastically faster than we can currently imagine, and the impossible — like unbreakable cryptography — newly possible. Still further off are computer architectures rebuilt around a novel electronic component called a memristor. Speculatively proposed by the physicist Leon Chua in 1971, first proven to exist in 2008, a memristor is a resistor with memory, which makes it capable of retaining data without power. A computer built around memristors could turn off and on like a light switch. It wouldn’t require the conductive layer of silicon necessary for traditional resistors. This would open computing to new substrates — the possibility, even, of integrating computers into atomically thin nano-materials. But these are architectural changes, not material ones.
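(To make “a resistor with memory” slightly more concrete, here is a rough Python sketch of the linear-drift model often used to describe memristor behavior; the parameter values are illustrative assumptions of mine, not figures from the essay.)

```python
# A rough numerical sketch of the linear-drift memristor model (in the
# spirit of the 2008 device; parameter values are illustrative, not physical).
import math

R_ON, R_OFF = 100.0, 16e3   # ohms: resistance when fully doped / undoped
D = 10e-9                   # m: device thickness
MU = 1e-12                  # m^2/(V*s): dopant mobility, exaggerated for effect
DT = 1e-6                   # s: integration time step

x = 0.1                     # normalized doped fraction, clamped to [0, 1]
for step in range(20_000):  # one full cycle of a 50 Hz drive
    t = step * DT
    v = math.sin(2 * math.pi * 50 * t)      # driving voltage
    m = R_ON * x + R_OFF * (1 - x)          # memristance depends on state x
    i = v / m
    x = min(max(x + (MU * R_ON / D**2) * i * DT, 0.0), 1.0)  # state follows charge
    if step % 2_500 == 0:
        print(f"t = {t * 1e3:5.1f} ms   v = {v:+.2f} V   M = {m / 1e3:6.2f} kOhm")

# The key property: when v returns to zero, x -- and hence the resistance --
# stays put. The device "remembers" the charge that has passed through it,
# which is what would let a memristor-based computer power off and on
# without losing state.
```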

For material changes, we must look farther afield, to an organism that occurs naturally only in the most fleeting of places. We need to glimpse into the loamy rot of a felled tree in the woods of the Pacific Northwest, or examine the glistening walls of a damp cave. That’s where we may just find the answer to computing’s intractable rock problem: down there, among the slime molds…

It’s time to reimagine what a computer could be: “Beyond Smart Rocks.”

(TotH to Patrick Tanguay.)

* “Moore’s Law is really a thing about human activity, it’s about vision, it’s about what you’re allowed to believe. Because people are really limited by their beliefs, they limit themselves by what they allow themselves to believe about what is possible.”  – Carver Mead

###

As we celebrate slime, we might send fantastically far-sighted birthday greetings to Hugo Gernsback, a Luxembourgian-American inventor, broadcast pioneer, writer, and publisher; he was born on this date in 1884.

Gernsback held 80 patents at the time of his death; he founded radio station WRNY, was involved in the first television broadcasts, and is considered a pioneer in amateur radio.  But it was as a writer and publisher that he probably left his most lasting mark: in 1911, as owner/publisher of the magazine Modern Electrics, he filled a blank spot in his publication by dashing off the first chapter of a series called “Ralph 124C 41+.” The twelve installments of “Ralph” were filled with inventions unknown in 1911, including “television” (Gernsback is credited with introducing the word), fluorescent lighting, juke boxes, solar energy, microfilm, vending machines, and the device we now call radar.

The “Ralph” series was an astounding success with readers; and in 1926 Gernsback founded the first magazine devoted to science fiction, Amazing Stories.  Believing that the perfect sci-fi story is “75 percent literature interwoven with 25 percent science,” he coined the term “science fiction.”

Gernsback was a “careful” businessman, who was tight with the fees that he paid his writers– so tight that H. P. Lovecraft and Clark Ashton Smith referred to him as “Hugo the Rat.”

Still, his contributions to the genre as publisher were so significant that, along with H.G. Wells and Jules Verne, he is sometimes called “The Father of Science Fiction”; in his honor, the annual Science Fiction Achievement awards are called the “Hugos.”

(Coincidentally, today is also the birthday– in 1906– of Philo T. Farnsworth, the man who actually did invent television-as-we-know-it…)

Gernsback’s 1963 “Television glasses” (source)