Posts Tagged ‘Presper Eckert’
“The original idea of the web was that it should be a collaborative space where you can communicate through sharing information”*…
From yesterday’s post on the possible (and promising, but also potentially painful) future of computing to a pressing predicament we face today. The estimable Anil Dash on the threats to the open web…
You must imagine Sam Altman holding a knife to Tim Berners-Lee’s throat.
It’s not a pleasant image. Sir Tim is, rightly, revered as the genial father of the World Wide Web. But, all the signs are pointing to the fact that we might be in endgame for “open” as we’ve known it on the Internet over the last few decades.
The open web is something extraordinary: anybody can use whatever tools they have, to create content following publicly documented specifications, published using completely free and open platforms, and then share that work with anyone, anywhere in the world, without asking for permission from anyone. Think about how radical that is.
Now, from content to code, communities to culture, we can see example after example of that open web under attack. Every single aspect of the radical architecture I just described is threatened, by those who have profited most from that exact system.
Today, the good people who act as thoughtful stewards of the web infrastructure are still showing the same generosity of spirit that has created opportunity for billions of people and connected society in ways too vast to count while —not incidentally— also creating trillions of dollars of value and countless jobs around the world. But the increasingly-extremist tycoons of Big Tech have decided that that’s not good enough.
Now, the hectobillionaires have begun their final assault on the last, best parts of what’s still open, and likely won’t rest until they’ve either brought all of the independent and noncommercial parts of the Internet under their control, or destroyed them. Whether or not they succeed is going to be decided by decisions that we all make as a community in the coming months. Even though there have always been threats to openness on the web, the stakes have never been higher than they are this time.
Right now, too many of the players in the open ecosystem are still carrying on with business as usual, even though those tactics have been failing to stop big tech for years. I don’t say this lightly: it looks to me like 2026 is the year that decides whether the open web as we know it will survive at all, and we have to fight like the threat is existential. Because it is…
[Dash details the threats– largely, but not entirely, driven by AI and its purveyors. He concludes…]
… The threat to the open web is far more profound than just some platforms that are under siege. The most egregious harm is the way that the generosity and grace of the people who keep the web open is being abused and exploited. Those people who maintain open source software? They’re hardly getting rich — that’s thankless, costly work, which they often choose instead of cashing in at some startup. Similarly, volunteering for Wikipedia is hardly profitable. Defining super-technical open standards takes time and patience, sometimes over a period of years, and there’s no fortune or fame in it.
Creators who fight hard to stay independent are often choosing to make less money, to go without winning awards or the other trappings of big media, just in order to maintain control and authority over their content, and because they think it’s the right way to connect with an audience. Publishers who’ve survived through year after year of attacks from tech platforms get rewarded by… getting to do it again the next year. Tim Berners-Lee is no billionaire, but none of those guys with the hundreds of billions of dollars would have all of their riches without him. And the thanks he gets from them is that they’re trying to kill the beautiful gift that he gave to the world, and replace it with a tedious, extortive slop mall.
So, we’re in endgame now. They see their chance to run the playbook again, and do to Wikipedians what Uber did to cab drivers, to get users addicted to closed apps like they are to social media, to force podcasters to chase an algorithm like kids on TikTok. If everyone across the open internet can gather together, and see that we’re all in one fight together, and push back with the same ferocity with which we’re being attacked, then we do have a shot at stopping them.
At one time, it was considered impossibly unlikely that anybody would ever create open technologies that would ever succeed in being useful for people, let alone that they would become a daily part of enabling billions of people to connect and communicate and make their lives better. So I don’t think it’s any more unlikely that the same communities can summon that kind of spirit again, and beat back the wealthiest people in the world, to ensure that the next generation gets to have these same amazing resources to rely on for decades to come.
Alright, if it’s not hopeless, what are the concrete things we can do? The first thing is to directly support organizations in the fight: either those that are at risk, or those that are protecting those at risk. You can give directly to support the Internet Archive, or volunteer to help them out. Wikipedia welcomes your donation or your community participation. The Electronic Frontier Foundation is fighting for better policy and to defend your rights on virtually all of these issues, and could use your support; it also provides a list of ways to volunteer or take action. The Mozilla Foundation can also use your donations and is driving change. (And full disclosure — I’m involved in pretty much all of these organizations in some capacity, ranging from volunteer to advisor to board member.) That’s because I’m trying to make sure my deeds match my words! These are the people whom I’ve seen, with my own eyes, stay the hand of those who would hold the knife to the necks of the open web’s defenders. [Further full disclosure: so is your correspondent, and so have I.]
Beyond just what these organizations do, though, we can remember how much the open web matters. I know from my time on the board of Stack Overflow that we got to see the rise of an incredibly generous community built around sharing information openly, under open licenses. Few platforms in history have done more for economic mobility: enormous numbers of people got good-paying jobs as coders as a result of the information on that site. And then we got to see the toll that extractive LLMs took when they trained models on the generosity of that site’s members without any consideration for the impact on that community, and without reciprocating in kind.
The good of the web only exists because of the openness of the web. They can’t just keep on taking and taking without expecting people to finally draw a line and say “enough”. And interestingly, opportunities might exist where the tycoons least expect it. I saw Mike Masnick’s recent piece where he argued that one of the things that might enable a resurgence of the open web might be… AI. It would seem counterintuitive to anyone who’s read everything I’ve shared here to imagine that anything good could come of these same technologies that have caused so much harm.
But ultimately what matters is power. It is precisely because technologies like LLMs have power that the authoritarians have rushed to try to take them over and wield them as effectively as they can. I don’t think that platforms owned and operated by those bad actors can be the tools that disrupt their agenda. I do think it might be possible that the creative communities that built the web in the first place could use their same innovative spirit to build what could be, for lack of a better term, called “good AI”. It’s going to take better policy, which may be impossible in the short term at the federal level in the U.S., but can certainly happen at more local levels and in the rest of the world. Though I’m skeptical about putting too much of the burden on individual users, we can certainly change culture and educate people so that more people feel empowered and motivated to choose alternatives to the big tech and big AI platforms that got us into this situation. And we can encourage harm reduction approaches for the people and institutions that are already locked into using these tools, because as we’ve seen, even small individual actions can get institutions to change course.
Ultimately I think, if given the choice, people will pick home-cooked, locally-grown, heart-felt digital meals over factory-farmed fast food technology every time…
Unless we act, it’s “Endgame for the Open Web,” from @anildash.com. Eminently worth reading in full.
* Tim Berners-Lee… who should know.
###
As we protect what’s precious, we might send carefully-calculated birthday greetings to a man whose work helped lay the foundation for both the promise and the peril unpacked in the article linked above: J. Presper Eckert; he was born on this day in 1919. An electrical engineer, he co-designed (with John Mauchly) the first general-purpose computer, the ENIAC (see here and here), for the U.S. Army’s Ballistic Research Laboratory. He and Mauchly went on to found the Eckert–Mauchly Computer Corporation, at which they designed and built the first commercial computer in the U.S., the UNIVAC.

“Technology challenges us to assert our human values, which means that first of all, we have to figure out what they are”*…
As we head into the weekend, some food for thought…
A decade ago, the world was, at once, both the seed of today and a very different place: In what was considered one of the biggest upsets in American political history (and the fifth and most recent presidential election in which the winning candidate lost the popular vote), Donald Trump was elected to his first term. The U.K. chose Brexit. The stock market finished strong, with the Dow Jones, S&P 500, and Nasdaq reaching new highs. (In the 10 years that have followed, the Dow has risen about 150%; the S&P 500, roughly 400%; and the Nasdaq has roughly sextupled.)
It was a big year for pop culture, marked by Beyoncé’s Lemonade, the massive Pokémon Go craze, the rise of Netflix with Stranger Things, the Rio Olympics, and the loss of icons like David Bowie and Prince.
It was also a big year in tech: Russian hacking and disinfo (especially on Facebook) was a huge story– as was Apple’s elimination of the headphone jack in the iPhone 7. Theranos collapsed; and Wells Fargo opened millions of accounts for customers without those customers’ permission (for which it was subsequently fined $3 billion). And Virtual Reality was everywhere (in the promises/offers from tech companies), but nowhere in the market. TikTok was launched in 2016, but hadn’t yet become the phenomenon (and avatar of algorithmically-driven feeds) that it has since become. And in the course of 2016, artificial intelligence made the leap from “science fiction concept” to “almost meaningless buzzword” (though in fairness, 2016 was the year that Google DeepMind’s AlphaGo program triumphed over South Korean Go grandmaster Lee Sedol).
Back in 2016, the estimable Alan Jacobs was pondering the road ahead. In a piece for The New Atlantis, he coined and discussed a series of aphorisms relevant to the future as he then saw it. He begins…
Aphorisms are essentially an aristocratic genre of writing. The aphorist does not argue or explain, he asserts; and implicit in his assertion is a conviction that he is wiser or more intelligent than his readers.

– W. H. Auden and Louis Kronenberger, The Viking Book of Aphorisms

Author’s Note: I hope that the statement above is wrong, believing that certain adjustments can be made to the aphoristic procedure that will rescue the following collection from arrogance. The trick is to do this in a way that does not sacrifice the provocative character that makes the aphorism, at its best, such a powerful form of utterance.

Here I employ two strategies to enable me to walk this tightrope. The first is to characterize the aphorisms as “theses for disputation,” à la Martin Luther — that is, I invite response, especially response in the form of disagreement or correction. The second is to create a kind of textual conversation, both on the page and beyond it, by adding commentary (often in the form of quotation) that elucidates each thesis, perhaps even increases its provocativeness, but never descends into coarsely explanatory pedantry…
[There follows a series of provocations and discussions that feel as relevant– and important– today as they were a decade ago. He concludes…]
… Precisely because of this mystery, we need to evaluate our technologies according to the criteria established by our need for “conviviality.”
I use the term with the particular meaning that Ivan Illich gives it in Tools for Conviviality [here]:
I intend it to mean autonomous and creative intercourse among persons, and the intercourse of persons with their environment; and this in contrast with the conditioned response of persons to the demands made upon them by others, and by a man-made environment. I consider conviviality to be individual freedom realized in personal interdependence and, as such, an intrinsic ethical value. I believe that, in any society, as conviviality is reduced below a certain level, no amount of industrial productivity can effectively satisfy the needs it creates among society’s members.

In my judgment, nothing is more needful in our present technological moment than the rehabilitation and exploration of Illich’s notion of conviviality, and the use of it, first, to apprehend the tools we habitually employ and, second, to alter or replace them. For the point of any truly valuable critique of technology is not merely to understand our tools but to change them — and us…
Eminently worth reading in full, as it’s still all-too-relevant: “Attending to Technology: Theses for Disputation,” from @ayjay.bsky.social.
Pair with a provocative piece from another fan of Illich, L. M. Sacasas (@lmsacasas.bsky.social): “Surviving the Show: Illich And The Case For An Askesis of Perception.”
[Image above: source]
###
As we think about tech, we might recall that it was on this date in 1946 that an ancestor of today’s social networks, streaming services, and AIs, the ENIAC (Electronic Numerical Integrator And Computer), was first demonstrated in operation. (It was announced to the public the following day.) The first general-purpose computer (Turing-complete, digital, and capable of being programmed and re-programmed to solve different problems), ENIAC was begun in 1943, as part of the U.S.’s war effort (as a classified military project known as “Project PX“); it was conceived and designed by John Mauchly and Presper Eckert of the University of Pennsylvania, where it was built. The finished machine, composed of 17,468 electronic vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors and around 5 million hand-soldered joints, weighed more than 27 tons and occupied a 30-by-50-foot room– in its time the largest single electronic apparatus in the world. ENIAC’s basic clock speed was 100,000 cycles per second (or hertz). Today’s home computers have clock speeds of 3,500,000,000 cycles per second or more.
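For scale, a rough back-of-the-envelope comparison using just the two figures above (100,000 Hz for ENIAC, and 3.5 GHz as the modern clock speed cited):

$$\frac{3.5 \times 10^{9}\ \text{Hz}}{1.0 \times 10^{5}\ \text{Hz}} = 3.5 \times 10^{4}$$

That is, on clock speed alone, a modern home computer cycles roughly 35,000 times faster than ENIAC did, and that is before accounting for how much more work each cycle performs.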

“Mathematics is the music of reason”*…
New technologies, most centrally AI, are arming scientists with tools that might not just accelerate or enhance their work, but altogether transform it. As Jordana Cepelewicz reports, mathematicians have started to prepare for a profound shift in what it means to do math…
Since the start of the 20th century, the heart of mathematics has been the proof — a rigorous, logical argument for whether a given statement is true or false. Mathematicians’ careers are measured by what kinds of theorems they can prove, and how many. They spend the bulk of their time coming up with fresh insights to make a proof work, then translating those intuitions into step-by-step deductions, fitting different lines of reasoning together like puzzle pieces.
The best proofs are works of art. They’re not just rigorous; they’re elegant, creative and beautiful. This makes them feel like a distinctly human activity — our way of making sense of the world, of sharpening our minds, of testing the limits of thought itself.
But proofs are also inherently rational. And so it was only natural that when researchers started developing artificial intelligence in the mid-1950s, they hoped to automate theorem proving: to design computer programs capable of generating proofs of their own. They had some success. One of the earliest AI programs could output proofs of dozens of statements in mathematical logic. Other programs followed, coming up with ways to prove statements in geometry, calculus and other areas.
Still, these automated theorem provers were limited. The kinds of theorems that mathematicians really cared about required too much complexity and creativity. Mathematical research continued as it always had, unaffected and undeterred.
Now that’s starting to change. Over the past few years, mathematicians have used machine learning models to uncover new patterns, invent new conjectures, and find counterexamples to old ones. They’ve created powerful proof assistants both to verify whether a given proof is correct and to organize their mathematical knowledge.
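To make “proof assistant” concrete: in a system like Lean (one widely used assistant), a proof is written as code that the machine checks step by step, and a flawed proof simply fails to compile. The sketch below is illustrative only, not drawn from the article; the names IsEven and isEven_add are ours, and it assumes Lean 4 with Mathlib’s tactics available:

```lean
-- A minimal sketch: a machine-checked proof that even + even = even.
-- (Assumes Lean 4 with Mathlib's tactic library.)

def IsEven (n : Nat) : Prop := ∃ k, n = 2 * k

theorem isEven_add {m n : Nat} (hm : IsEven m) (hn : IsEven n) :
    IsEven (m + n) := by
  obtain ⟨a, ha⟩ := hm      -- unpack the hypothesis: m = 2 * a
  obtain ⟨b, hb⟩ := hn      -- unpack the hypothesis: n = 2 * b
  refine ⟨a + b, ?_⟩        -- offer the witness: m + n = 2 * (a + b)
  rw [ha, hb, Nat.mul_add]  -- both sides reduce to 2*a + 2*b
```

The theorem itself is trivial; the point is the division of labor the article describes: the human supplies the idea and the witness, and the machine certifies every deduction.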
They have not, as yet, built systems that can generate the proofs from start to finish, but that may be changing. In 2024, Google DeepMind announced that they had developed an AI system that scored a silver medal in the International Mathematical Olympiad, a prestigious proof-based exam for high school students. OpenAI’s more generalized “large language model,” ChatGPT, has made significant headway on reproducing proofs and solving challenging problems, as have smaller-scale bespoke systems. “It’s stunning how much they’re improving,” said Andrew Granville, a mathematician at the University of Montreal who until recently doubted claims that this technology might soon have a real impact on theorem proving. “They absolutely blow apart where I thought the limitations were. The cat’s out of the bag.”
Researchers predict they’ll be able to start outsourcing more tedious sections of proofs to AI within the next few years. They’re mixed on whether AI will ever be able to prove their most important conjectures entirely: Some are willing to entertain the notion, while others think there are insurmountable technological barriers. But it’s no longer entirely out of the question that the more creative aspects of the mathematical enterprise might one day be automated.
Even so, most mathematicians at the moment “have their heads buried firmly in the sand,” Granville said. They’re ignoring the latest developments, preferring to spend their time and energy on their usual jobs.
Continuing to do so, some researchers warn, would be a mistake. Even the ability to outsource boring or rote parts of proofs to AI “would drastically alter what we do and how we think about math over time,” said Akshay Venkatesh, a preeminent mathematician and Fields medalist at the Institute for Advanced Study in Princeton, New Jersey.
He and a relatively small group of other mathematicians are now starting to examine what an AI-powered mathematical future might look like, and how it will change what they value. In such a future, instead of spending most of their time proving theorems, mathematicians will play the role of critic, translator, conductor, experimentalist. Mathematics might draw closer to laboratory sciences, or even to the arts and humanities.
Imagining how AI will transform mathematics isn’t just an exercise in preparation. It has forced mathematicians to reckon with what mathematics really is at its core, and what it’s for…
Absolutely fascinating: “Mathematical Beauty, Truth, and Proof in the Age of AI,” from @jordanacep.bsky.social in @quantamagazine.bsky.social. Eminently worth reading in full.
###
As we wonder about ways of knowing, we might spare a thought for a man whose work helped trigger an earlier iteration of this enhance/transform discussion and laid the groundwork for the one unpacked in the article linked above: J. Presper Eckert; he died on this day in 1995. An electrical engineer, he co-designed (with John Mauchly) the first general-purpose computer, the ENIAC (see here and here), for the U.S. Army’s Ballistic Research Laboratory. He and Mauchly went on to found the Eckert–Mauchly Computer Corporation, at which they designed and built the first commercial computer in the U.S., the UNIVAC.

“I wonder, he wondered, if any human has ever felt this way before about an android.”*…
Well, yes… Centuries before audio deepfakes and text-to-speech software, inventors in the eighteenth century constructed androids with swelling lungs, flexible lips, and moving tongues to simulate human speech. Jessica Riskin explores the history of such talking heads, from their origins in musical automata to inventors’ quixotic attempts to make machines pronounce words, converse, and declare their love…
The word “android”, derived from Greek roots meaning “manlike”, was the coinage of Gabriel Naudé, French physician and librarian, personal doctor to Louis XIII, and later architect of the forty-thousand-volume library of Cardinal Jules Mazarin. Naudé was a rationalist and an enemy of superstition. In 1625 he published a defense of Scholastic philosophers to whom tradition had ascribed works of magic. He included the thirteenth-century Dominican friar, theologian, and philosopher Albertus Magnus (Albert the Great), who, according to legend, had built an artificial man made of bronze.
This story seems to have originated long after Albert’s death with Alfonso de Madrigal (also known as El Tostado), a voluminous commentator of the fifteenth century, who adapted and embellished the tales of moving statues and talking brazen heads in medieval lore. El Tostado said that Albert had worked for thirty years to compose a whole man out of metal. The automaton supplied Albert with the answers to all of his most vexing questions and problems and even, in some versions of the tale, obligingly dictated a large part of Albert’s voluminous writings. The machine had met its fate, according to El Tostado, when Albert’s student, Thomas Aquinas, smashed it to bits in frustration, having grown tired of “its great babbling and chattering”.
Naudé did not believe in Albert’s talkative statue. He rejected it and other tales of talking automaton heads as “false, absurd and erroneous”. The reason Naudé cited was the statues’ lack of equipment: being altogether without “muscles, lungs, epiglottis, and all that is necessary for a perfect articulation of the voice”, they simply did not have the necessary “parts and instruments” to speak reasonably. Naudé concluded, in light of all the reports, that Albert the Great probably had built an automaton, but never one that could give him intelligible and articulate responses to questions. Instead, Albert’s machine must have been similar to the Egyptian statue of Memnon, much discussed by ancient authors, which murmured agreeably when the sun shone upon it: the heat caused the air inside the statue to “rarefy” so that it was forced out through little pipes, making a murmuring sound.
Despite disbelieving in Albert the Great’s talking head, Naudé gave it a powerful new name, referring to it as the “android”. Thus deftly, he smuggled a new term into the language, for according to the 1695 dictionary by the French philosopher and writer Pierre Bayle, “android” had been “an absolutely unknown word, & purely an invention of Naudé, who used it boldly as though it were established.” It was a propitious moment for neologisms: Naudé’s term quickly infiltrated the emerging genre of dictionaries and encyclopedias. Bayle repeated it in the article on “Albert le Grand” in his dictionary. Thence, “android” secured its immortality as the headword of an article — citing Naudé and Bayle — in the first volume of the supplement to the English encyclopedist Ephraim Chambers’ Cyclopaedia. In denying the existence of Albert’s android, Naudé had given life to the android as a category of machine.
But the first actual android of the new, experimental-philosophical variety for which the historical record contains rich information — “android” in Naudé’s root sense, a working human-shaped assemblage of “necessary parts” and instruments — went on display on February 3, 1738…
[There follows a fascinating account of examples from the 18th and 19th centuries…]
Plates depicting the components of artificial and natural speech from Wolfgang von Kempelen’s The Mechanism of Speech (1791) — Source
… In the early part of the twentieth century, designers of artificial speech moved on from mechanical to electrical speech synthesis. The simulation of the organs and process of speaking — of the trembling glottis, the malleable vocal tract, the supple tongue and mouth — was specific to the last decades of the eighteenth century, when philosophers and mechanicians and paying audiences were briefly preoccupied with the idea that articulate language was a bodily function: that Descartes’ divide between mind and body might be bridged in the organs of speech…
The origin of the word “android” and (very) early examples: “You Are My Friend” from @PublicDomainRev.
* Philip K. Dick, “Do Androids Dream of Electric Sheep?”
###
As we muse on the mechanical, we might spare a thought for a man whose work helped pave the way for androids as we currently conceive them: J. Presper Eckert; he died on this day in 1995. An electrical engineer, he co-designed (with John Mauchly) the first general-purpose computer, the ENIAC (see here and here), for the U.S. Army’s Ballistic Research Laboratory. He and Mauchly went on to found the Eckert–Mauchly Computer Corporation, at which they designed and built the first commercial computer in the U.S., the UNIVAC.

“Man is not disturbed by events, but by the view he takes of them”*…
From Stripe Partners, a framework for rethinking the way we talk about the AI future…
AI is both a new technology and a new type of technology. It is the first technology that learns and that has the potential to outstrip its makers’ capabilities and develop independently.
As Large Language Models bring to life the realities of AI’s potential to operate at unprecedented, ‘human’ levels of sophistication, projections about its future have gained urgency. The dominant framework being applied to identify AI’s potential futures is 165 years old: Charles Darwin’s theory of evolution.
Darwin’s evolutionary framework is rendered most clearly in Dan Hendrycks’ work for the Center for AI Safety, which posits a future where natural selection could cause the most influential future AI agents to have selfish tendencies that might see AIs favour their own agendas over the safety of humankind.
The choice of Natural Selection as a framework makes sense given AI’s emerging status as a quasi-sentient, highly adaptive technology that can learn and grow. The choice is a response to the limitations inherent in existing models for technological adoption which treat technologies as inert tools that only come to life when used by people.
The risk in applying this lens to AI is that it goes too far in assigning independent agency to AI. Estimates on the timing of the emergence of ‘Artificial General Intelligence’ vary, but spending some time with the current crop of Generative AI platforms confirms the view that AIs with intelligence approaching humans’ are some way off. In the interim, using natural selection as a lens to understand AI positions humans as further out of the developmental loop than is actually the case. Competitive forces, whether market or military, will shape AI’s development, but these will not be the only forces at play, and direct interaction with humans will be the principal driver of AI’s progress in the near term.
A year ago we wrote about the opportunity to reframe the impact of AI on organisations through the lens of Actor Network Theory (ANT). More than a singular theory, ANT describes an approach to studying social and technological systems developed by Bruno Latour, Michel Callon, Madeleine Akrich and John Law in the early 1980s.
ANT posits that the social and natural world is best understood as dynamic networks of humans and nonhuman actors… In our 2023 piece we suggested that ANT, with its focus on framing society and human-technology interactions in terms of dynamic networks where every actor, whether human or machine, impacts the network, was a useful way of exploring the ways in which AI will impact people, and people will impact AI.
A year on, the value of ANT as a framework for exploring AI’s future has become clearer. The critical point when comparing an ANT frame to an evolutionary one is the way in which the ANT framing highlights how AI will progress with and through people’s interactions with it. When viewed as an actor in a network, not a technology in isolation, AI will never be separate from human interventions…
A provocative argument, well worth reading in full: “Why the debate about the future of AI needs less Darwin and more Latour,” from @stripepartners.
Apposite: “Whose risks? Whose benefits?” from Mandy Brown.
* Epictetus
###
As we reframe, we might recall that it was on this date in 1946 that an ancestor of today’s AIs, the ENIAC (Electronic Numerical Integrator And Computer), was first demonstrated in operation. (It was announced to the public the following day.) The first general-purpose computer (Turing-complete, digital, and capable of being programmed and re-programmed to solve different problems), ENIAC was begun in 1943, as part of the U.S.’s war effort (as a classified military project known as “Project PX“); it was conceived and designed by John Mauchly and Presper Eckert of the University of Pennsylvania, where it was built. The finished machine, composed of 17,468 electronic vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors and around 5 million hand-soldered joints, weighed more than 27 tons and occupied a 30-by-50-foot room– in its time the largest single electronic apparatus in the world. ENIAC’s basic clock speed was 100,000 cycles per second (or hertz). Today’s home computers have clock speeds of 3,500,000,000 cycles per second or more.
