“There is no such thing as a dysfunctional organization, because every organization is perfectly aligned to achieve the results it currently gets”*…
… and if we’re not careful, we might not be too pleased with what we get. Sam Altman says the one-person billion-dollar company is coming. Evan Ratliff tells the tale of his attempt to build a completely AI-automated venture…
… If you’ve spent any time consuming any AI news this year—and even if you’ve tried desperately not to—you may have heard that in the industry, 2025 is the “year of the agent.” This year, in other words, is the year when AI systems are evolving from passive chatbots, waiting to field our questions, to active players, out there working on our behalf.
There’s no widely agreed-upon definition of AI agents, but generally you can think of them as versions of large language model chatbots that are given autonomy in the world. They are able to take in information, navigate digital space, and take action. There are elementary agents, like customer service assistants that can independently field, triage, and handle inbound calls, or sales bots that can cycle through email lists and spam the good leads. There are programming agents, the foot soldiers of vibe coding. OpenAI and other companies have launched “agentic browsers” that can buy plane tickets and proactively order groceries for you.
In the year of our agent, 2025, the AI hype flywheel has been spinning up ever more grandiose notions of what agents can be and will do. Not just as AI assistants, but as full-fledged AI employees that will work alongside us, or instead of us. “What jobs are going to be made redundant in a world where I am sat here as a CEO with a thousand AI agents?” asked host Steven Bartlett on a recent episode of The Diary of a CEO podcast. (The answer, according to his esteemed panel: nearly all of them). Dario Amodei of Anthropic famously warned in May that AI (and implicitly, AI agents) could wipe out half of all entry-level white-collar jobs in the next one to five years. Heeding that siren call, corporate giants are embracing the AI agent future right now—like Ford’s partnership with an AI sales and service agent named “Jerry,” or Goldman Sachs “hiring” its AI software engineer, “Devin.” OpenAI’s Sam Altman, meanwhile, talks regularly about a possible billion-dollar company with just one human being involved. San Francisco is awash in startup founders with virtual employees, as nearly half of the companies in the spring class of Y Combinator are building their product around AI agents.
Hearing all this, I started to wonder: Was the AI employee age upon us already? And even, could I be the proprietor of Altman’s one-man unicorn? As it happens, I had some experience with agents, having created a bunch of AI agent voice clones of myself for the first season of my podcast, Shell Game.
I also have an entrepreneurial history, having once been the cofounder and CEO of the media and tech startup Atavist, backed by the likes of Andreessen Horowitz, Peter Thiel’s Founders Fund, and Eric Schmidt’s Innovation Endeavors. The eponymous magazine we created is still thriving today. I wasn’t born to be a startup manager, however, and the tech side kind of fizzled out. But I’m told failure is the greatest teacher. So I figured, why not try again? Except this time, I’d take the AI boosters at their word, forgo pesky human hires, and embrace the all-AI employee future…
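For readers wondering what that “autonomy” amounts to mechanically, most agent systems boil down to a simple loop: show the model the task and the history so far, let it pick an action, execute the action, feed the observation back in, repeat. Here’s a minimal, deliberately toy sketch in Python– the function names and the stubbed two-step behavior are illustrative assumptions, not any vendor’s API:

```python
# Minimal sketch of an agent loop: the model picks the next action, a tool
# executes it, and the observation is appended to the history -- repeat
# until the model signals it is done. Both functions below are stubs.

def call_model(history: list[str]) -> str:
    """Stub standing in for an LLM call. 'Searches' once, then finishes."""
    return "search: flight prices" if len(history) == 1 else "done: booked"

def run_tool(action: str) -> str:
    """Stub tool executor: a real agent would browse, email, run code, etc."""
    return f"observation for [{action}]"

def agent(task: str, max_steps: int = 5) -> str:
    history = [f"task: {task}"]
    for _ in range(max_steps):
        action = call_model(history)      # model decides what to do next
        if action.startswith("done:"):    # model declares the task complete
            return action
        history.append(run_tool(action))  # execute the action, record result
    return "gave up"                      # safety valve against endless loops

print(agent("book me a flight"))  # -> "done: booked"
```

Everything interesting in a production agent– tool selection, memory, guardrails– lives inside those two stubbed functions; the loop itself really is about this simple.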
Eminently worth reading in full: “All of My Employees Are AI Agents, and So Are My Executives,” from @evrat.bsky.social in @wired.com.
Via Caitlin Dewey (@caitlindewey.bsky.social), whose tease/summary puts it plainly:
Ratliff, the undefeated king of tech journalism stunts, is back with another banger: For this piece and the accompanying podcast series, he created a start-up staffed entirely by so-called AI agents. The agents can communicate by email, Slack, text and phone, both with Ratliff and among themselves, and they have free range to complete tasks like writing code and searching the open internet. Despite their capabilities, however, the whole project’s a constant farce. A funny, stupid, telling farce that says quite a lot about the future of work that many technologists envision now…
###
As we analyze autonomy, we might spare a jaundiced thought for Trofim Denisovich Lysenko; he died on this date in 1976. A Soviet biologist and agronomist, he believed the Mendelian theory of heredity to be wrong, and developed his own, allowing for “soft inheritance”– the heritability of learned behavior. (He believed that in one generation of a hybridized crop, the desired individual could be selected and mated again and continue to produce the same desired product, without worrying about separation/segregation in future breeds– he assumed that after a lifetime of developing (acquiring) the best set of traits to survive, those must be passed down to the next generation.)
In many ways Lysenko’s theories recall Lamarck’s “organic evolution” and its concept of “soft evolution” (the passage of learned traits), though Lysenko denied any connection. He followed I. V. Michurin’s fanciful idea that plants could be forced to adapt to any environmental conditions, for example converting summer wheat to winter wheat by storing the seeds in ice. With Stalin’s support for two decades, he actively obstructed the course of Soviet biology, caused the imprisonment and death of many of the country’s eminent biologists who disagreed with him, and imposed conditions that contributed to the disastrous decline of Soviet agriculture and the famines that resulted.
Interestingly, some current research suggests that heritable learning– or a semblance of it– may in fact be happening by virtue of epigenetics… though nothing vaguely resembling Lysenko’s theory.
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim”*…
Anil Dash, with a grounded view of artificial intelligence…
Even though AI has been the most-talked-about topic in tech for a few years now, we’re in an unusual situation where the most common opinion about AI within the tech industry is barely ever mentioned.
Most people who actually have technical roles within the tech industry, like engineers, product managers, and others who actually make the technologies we all use, are fluent in the latest technologies like LLMs. They aren’t the big, loud billionaires that usually get treated as the spokespeople for all of tech.
And what they all share is an extraordinary degree of consistency in their feelings about AI, which can be pretty succinctly summed up:
Technologies like LLMs have utility, but the absurd way they’ve been over-hyped, the fact they’re being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value.
What’s amazing is the reality that virtually 100% of tech experts I talk to in the industry feel this way, yet nobody outside of that cohort will mention this reality. What we all want is for people to just treat AI as a “normal technology,” as Arvind Narayanan and Sayash Kapoor so perfectly put it. I might be a little more angry and a little less eloquent: stop being so goddamn creepy and weird about the technology! It’s just tech, everything doesn’t have to become some weird religion that you beat people over the head with, or gamble the entire stock market on…
Eminently worth reading in full: “The Majority AI View,” from @anildash.com.
Pair with: “Artificial Intelligences, So Far,” from @kevinkelly.bsky.social.
For an explanation of (some of) the dangers of over-hyping, see: “America’s future could hinge on whether AI slightly disappoints,” from @noahpinion.blog.web.brid.gy.
And for a peek at what lies behind each GenAI query: “Cartography of generative AI,” from @tallerestampa.bsky.social via @flowingdata.com.
While the arguments above are practical, note that a plethora of tech experts have weighed in with a note of existential caution: “Statement on Superintelligence.”
Further to which (and finally), a piece from the Federal Reserve Bank of Dallas, projecting the economic impact of AI. It suggests that AI could provide a modest but meaningful boost to GDP over the next 25 years… if the Fed’s “Goldilocks Scenario” (in which, per Dash’s and Kelly’s comments, AI makes consistent incremental contributions to “keep living standards improving at their historical rate”) plays out. You’ll note that they also considered two other scenarios: a “benign singularity” scenario in which “AI eventually surpasses human intelligence, leading to rapid and unpredictable changes to the economy and society” and an “extinction singularity” in which “machine intelligence overtakes human intelligence at some finite point in the near future, the machines become malevolent, and this eventually leads to human extinction.”
Interesting times in which we live…
###
As we parse pumped prognostication, we might recall that it was on this date in 4004 BCE that the Universe was created… as per calculations by Archbishop James Ussher in the mid-17th century. Ussher, the head of the Anglican Church of Ireland at the time, attempted to calculate the dates of many important events described in the Old Testament. His calculations, which he published in 1650, were not that far off from many other estimates made at the time. Isaac Newton, for example, believed that the world was created in 4000 BCE.
When Clarence Darrow prepared his famous examination of William Jennings Bryan in the Scopes trial [see here], he chose to focus primarily on a chronology of Biblical events prepared by a seventeenth-century Irish bishop, James Ussher. American fundamentalists in 1925 found—and generally accepted as accurate—Ussher’s careful calculation of dates, going all the way back to Creation, in the margins of their family Bibles. (In fact, until the 1970s, the Bibles placed in nearly every hotel room by the Gideon Society carried his chronology.) The King James Version of the Bible introduced into evidence by the prosecution in Dayton contained Ussher’s famous chronology, and Bryan more than once would be forced to resort to the bishop’s dates as he tried to respond to Darrow’s questions.
“In science, it happens every few years that something that was previously considered a mistake suddenly reverses all views, or that an inconspicuous and despised idea becomes the ruler of a new realm of thought.”*…
Today’s post is, in essence, the recommendation of the current issue of a publication referenced here often, Noema. Its editor, Nathan Gardels, previews its contents…
When a concept that organizes our reality is replaced by an entirely different and incommensurate worldview, it is called a “paradigm shift.”
The theme of this edition of Noema was conceived in early 2024. At that time, we had in mind the epochal shift from the paradigm of globalization, in which markets, trade and technology cross borders, to “the Planetary,” where we recognize that the whole Earth system embeds and entangles human civilization in its habitat.
This deeper awareness has been enabled by the emergence of a technological exoskeleton of satellites, sensors and cloud computation that expands the heretofore limited scope of human understanding of the world, repositioning our place in the natural order. Neither above nor apart from nature, we have now come to realize we are part and parcel of one interdependent organism comprised of multiple intelligences striving for sustainable equilibrium.
The disclosure of climate change as a destabilizing consequence of human endeavor was enabled in the first place by planetary-scale computation. This capacity holds out the evolutionary prospect that human, machine and Earth intelligence might one day merge into a kind of planetary sapience that restores and maintains the ecological balance.
As we have written often in Noema, this conceptual reorientation would entail a redefinition of what realism means in geopolitics. This new condition calls not for the old “realpolitik” that seeks to secure the interests of nation-states against each other but for a “Gaiapolitik” aimed at securing a livable biosphere for all.
As logically compelling as this case for planetary realism may be, the paradigm shift underway is going in the opposite direction. Instead of the global interconnectivity forged in recent decades maturing into a planetary perspective, it is breaking up into a renewed nationalism more emphatically sovereigntist than before the advent of globalization.
In short, the prevailing political temperament around the world today is out of sync with the planetary imperative. This does not diminish its reality but, for the moment, eclipses and derails its emergence as the conscious organizing principle of human civilization.
The paradigm shift we are witnessing today not only marks a move away from a planetary awareness but also signals the last sigh of liberal universalism as the dominant governing philosophy of the postwar order since 1945.
The rules-based liberal international order, underwritten and guaranteed for decades by American might, has been consigned to the ash can of history by the summary defection of its founding architect from its terms and premises.
Under President Donald Trump and his allies, America has effectively joined the revisionist powers of China and Russia by baldly asserting sovereigntist self-interest unencumbered by rules that also encompass the interests of others.
Tariff walls, outright trade wars and unraveling alliances are supplanting the expansive web of global commerce, Western unity and cultural cross-fertilization that characterized times only recently. In a further break from the established order, Team Trump openly contemplates its own Anschluss of other people’s territory in Greenland, the Panama Canal and even Canada, instead of expressing outrage at China’s desire to take Taiwan, Russia’s bloody attempt to seize Ukraine or Israel’s increasing occupation of the Palestinian territories.
As Francis Fukuyama and Niall Ferguson discuss in a collage of commentary in this Noema edition, these developments portend the return to a world not unlike that of the 19th century, when the great powers carved out exclusive domains of influence.
The obvious great powers that would constitute a world apportioned in this way are China and Russia, both grasping at Eurasia, plus the United States and India. Whether Europe falls within the American sphere of influence depends on its capacity to cohere as a continental entity and find its identity as an alternative within a West that is fracturing under the strain of America’s revisionist turn.
Since the future appears to be taking us back to the 19th century, one cannot say we are in “uncharted territory.” On the contrary, we’ve been down this path before and know how it led to world wars that the global rules-based order, for all its well-known faults, was meant to avoid repeating.
On the American home front, and increasingly elsewhere in the West, it appears the “strong gods” of family, faith and nation are prevailing over the culturally liberal sentiments of an open society.
When there is no common agreement on what constitutes the good life, culture is politicized. As Alexandre Lefebvre argues in Noema, who gets to define “the good life” has become the central political question of our time. As in China, Russia, Iran or Turkey, governing authorities in the West are increasingly seeking to assign the moral substance of their vision to the state in place of the neutral proceduralism of liberal regimes that, at least in theory, embrace the diversity of all values without favor.
As the ascendant traditionalists see it, this rights-based liberalism grants a kind of converse moral substance to the state by virtue of the permissive openness it invites, nourishes and protects.
In many ways, liberalism was bound to fail just as Marxism did, and for the same reason. Marxism lacked a theory of politics that accommodated diverse constituencies because it assumed the universality of the interests of one class. Similarly, liberalism has falsely assumed its own universality, believing that there can be a consensus on only one conception of “the good life.” In reality, where some see declaring gender identity as the positive freedom to pursue self-realization, others see it as the corrosion of traditional Christian morality.
Like the British philosopher John Gray, Lefebvre suggests that the liberalism of the future may well entail a constitutionally grounded “modus vivendi” of autonomous jurisdictions as one way to keep the civil peace in diverse societies.
What is stunning in this context is how rapidly the America that elected Trump has tilted toward illiberal democracy under his tumultuous reign. Team Trump has robustly pursued retribution against political enemies, scorned universities as “the enemy,” moved to dismantle the administrative state and climate policies, demeaned the judicial system and cultivated crony corruption. Moreover, in the Orwellian name of free speech, Trump insists on ideological conformity across the board, from college students to corporate law firms.
To base the idea of democracy solely on elections invites this kind of illiberalism because it implies that majoritarian rule is all that is necessary for legitimacy. But, as the American founding fathers well understood, the will of the majority does not embrace all interests in a society, which must be protected equally. That is the reason for constitutional rule as the founding principle of a liberal polity.
In constitutional theory, the imposition of limitations and restraints — the “negative” — is what prevents the majority from absolute domination. It is the negative that makes the Constitution and the “positive” that makes government. One is the power of acting, the other the power of amending or arresting action. The two combined make a constitutional government.
It is this governing arrangement that made America great. The biggest danger of Making America Great Again is that a movement that believes it is the embodiment of the will of the majority will cast aside any constraints on its power as a contrivance by the elites of the ancien régime to keep the masses down.
In Niall Ferguson’s contribution to Noema, the historian raises the specter that “history was always against any republic lasting 250 years. This republic is in its late republican phase, with the intimations of empire much more visible.”…
… As politicized cultural battles and the churning geopolitical economy further unfold, a paradigm shift of a significance similar to planetary awareness is taking place that will redefine what it means to be human.
Across the sciences, we are coming to understand the self-organizing principle of “computation” as the building block of all forms of budding intelligence, from primitive cells to generative AI. This process involves learning from the environment, assembling information and arranging it by sharing functional instructions through “copying and pasting” code, so that an organism can develop, reproduce and sustain itself.
As Google’s Blaise Agüera y Arcas and James Manyika write in this issue, “computing existed in nature long before we built the first ‘artificial computers.’ … Understanding computing as a natural phenomenon will enable fundamental advances not only in computer science and AI, but also in physics and biology.”
More than half a century ago, they note, pioneering computer scientists had the intuition that organic and inorganic intelligence follow the same set of rules for development. “John von Neumann,” write the authors, “realized that for a complex organism to reproduce, it would need to contain instructions for building itself, along with a machine for reading and executing those instructions.” The technical requirements for that “universal constructor” in nature — the tape-like instructions of DNA — “correspond precisely to the technical requirements for the earliest computers.”
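Von Neumann’s requirement– instructions for building the organism, plus a machine to read and execute them– has a compact echo in software: the quine, a program whose output is its own source. A classic two-line Python example (offered purely as an illustration; comments aside, running it prints exactly its own two lines):

```python
s = 's = %r\nprint(s %% s)'  # the "tape": a description of the program
print(s % s)                 # the "constructor": renders the description
```

The string is the data describing the program, and the print statement is the machinery that turns that description back into the program– the same division of labor the authors find between DNA and its cellular readers.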
“Life,” they continue, “is computational because its existence over time depends on growth, healing or reproduction, and computation itself must evolve to support these essential functions.”
Grasping the correspondence with natural computation and learning from it, they believe, will render AI “brainlike” as it further evolves along the path from mimicking neural computation to predictive intelligence, general intelligence and, ultimately, collective intelligence. “Brains, AI agents and societies can all become more capable through increased scale. However, size alone is not enough. Intelligence is fundamentally social, powered by cooperation and the division of labor among many agents.”
In short, as philosopher of technology Tobias Rees also argues in this issue, the evolution of computation as a symbiosis of human and machine will cause us to rethink what it means to be human as, for the first time in history, a “more than human” intelligence emerges on our planet.
These contradictions and crosscurrents of the profound paradigm shifts we are living through all at once mark what future historians will surely describe as the Age of Upheaval…
FWIW, I worry that the diagnosis of our current political/cultural morass is maybe not dark enough. And as to AI, I’m no wide-eyed believer in the current cycle of hype. Indeed, I worry that AI could contribute to our social ills in the short term, both by increasing and amplifying the atomization and misinformation that we suffer and by challenging the economy if, as seems all too plausible, current over-enthusiasm/over-investment occasions a crash. That said, I honor the wisdom of Roy Amara: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
In any case, every link above is eminently worth clicking/reading; better yet, buy the issue.
All change: “Paradigm Shifts,” from @noemamag.com.
[Image above: source]
* Robert Musil, The Man Without Qualities
###
As we buckle up, we might recall that it was on this date in 1944 that IBM dedicated the first program-controlled calculator, the Automatic Sequence Controlled Calculator (known best as the Harvard Mark I)– one of the earliest, if not the earliest, general-purpose electromechanical computers, and the one that laid the groundwork for subsequent development… and thus a catalyst for the string of developments– technical, social, and political– with which we’re wrestling now.
Designer Howard Aiken had enlisted IBM as a partner in 1937; company chairman Thomas Watson Sr. personally approved the project and its funding. On completion it was put to work on a set of war-related tasks, including calculations– overseen by John von Neumann– for the Manhattan Project.
The Mark I was the industry’s largest electromechanical calculator… and it was large: 51 feet long, 8 feet high, and 2 feet deep; it weighed about 9,445 pounds. The basic calculating units had to be synchronized and powered mechanically, so they were operated by a 50-foot (15 m) drive shaft coupled to a 5 horsepower electric motor, which served as the main power source and system clock. It could do 3 additions or subtractions in a second; a multiplication took 6 seconds; a division took 15.3 seconds; and a logarithm or a trigonometric function took over a minute… ridiculously slow by today’s standards, but a huge advance in its time.
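A bit of back-of-envelope arithmetic makes those figures vivid; here’s a quick sketch using the per-operation times cited above (the 1,000-operation workload is an arbitrary example):

```python
# Seconds per operation on the Harvard Mark I, per the figures above.
ADD_S = 1 / 3   # 3 additions or subtractions per second
MUL_S = 6.0     # one multiplication
DIV_S = 15.3    # one division

for name, t in [("additions", ADD_S), ("multiplications", MUL_S),
                ("divisions", DIV_S)]:
    print(f"1,000 {name}: {1000 * t / 3600:.2f} hours")
# prints 0.09, 1.67, and 4.25 hours respectively
```

A thousand multiplications– well under a millisecond of work for a modern laptop– kept the Mark I busy for an hour and forty minutes.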
“Mathematics is the music of reason”*…
New technologies, most centrally AI, are arming scientists with tools that might not just accelerate or enhance their work, but altogether transform it. As Jordana Cepelewicz reports, mathematicians have started to prepare for a profound shift in what it means to do math…
Since the start of the 20th century, the heart of mathematics has been the proof — a rigorous, logical argument for whether a given statement is true or false. Mathematicians’ careers are measured by what kinds of theorems they can prove, and how many. They spend the bulk of their time coming up with fresh insights to make a proof work, then translating those intuitions into step-by-step deductions, fitting different lines of reasoning together like puzzle pieces.
The best proofs are works of art. They’re not just rigorous; they’re elegant, creative and beautiful. This makes them feel like a distinctly human activity — our way of making sense of the world, of sharpening our minds, of testing the limits of thought itself.
But proofs are also inherently rational. And so it was only natural that when researchers started developing artificial intelligence in the mid-1950s, they hoped to automate theorem proving: to design computer programs capable of generating proofs of their own. They had some success. One of the earliest AI programs could output proofs of dozens of statements in mathematical logic. Other programs followed, coming up with ways to prove statements in geometry, calculus and other areas.
Still, these automated theorem provers were limited. The kinds of theorems that mathematicians really cared about required too much complexity and creativity. Mathematical research continued as it always had, unaffected and undeterred.
Now that’s starting to change. Over the past few years, mathematicians have used machine learning models to uncover new patterns, invent new conjectures, and find counterexamples to old ones. They’ve created powerful proof assistants both to verify whether a given proof is correct and to organize their mathematical knowledge.
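(For a flavor of what those proof assistants check: below are two statements in Lean 4, each followed by a proof the machine verifies step by step. A minimal taste under simple assumptions, not a picture of research-scale formalization.)

```lean
-- Commutativity of addition on the naturals, discharged by a library lemma.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- A numeric fact the kernel verifies by simply computing both sides.
example : 2 + 2 = 4 := rfl
```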
They have not, as yet, built systems that can generate the proofs from start to finish, but that may be changing. In 2024, Google DeepMind announced that they had developed an AI system that scored a silver medal in the International Mathematical Olympiad, a prestigious proof-based exam for high school students. OpenAI’s more generalized “large language model,” ChatGPT, has made significant headway on reproducing proofs and solving challenging problems, as have smaller-scale bespoke systems. “It’s stunning how much they’re improving,” said Andrew Granville, a mathematician at the University of Montreal who until recently doubted claims that this technology might soon have a real impact on theorem proving. “They absolutely blow apart where I thought the limitations were. The cat’s out of the bag.”
Researchers predict they’ll be able to start outsourcing more tedious sections of proofs to AI within the next few years. They’re mixed on whether AI will ever be able to prove their most important conjectures entirely: Some are willing to entertain the notion, while others think there are insurmountable technological barriers. But it’s no longer entirely out of the question that the more creative aspects of the mathematical enterprise might one day be automated.
Even so, most mathematicians at the moment “have their heads buried firmly in the sand,” Granville said. They’re ignoring the latest developments, preferring to spend their time and energy on their usual jobs.
Continuing to do so, some researchers warn, would be a mistake. Even the ability to outsource boring or rote parts of proofs to AI “would drastically alter what we do and how we think about math over time,” said Akshay Venkatesh, a preeminent mathematician and Fields medalist at the Institute for Advanced Study in Princeton, New Jersey.
He and a relatively small group of other mathematicians are now starting to examine what an AI-powered mathematical future might look like, and how it will change what they value. In such a future, instead of spending most of their time proving theorems, mathematicians will play the role of critic, translator, conductor, experimentalist. Mathematics might draw closer to laboratory sciences, or even to the arts and humanities.
Imagining how AI will transform mathematics isn’t just an exercise in preparation. It has forced mathematicians to reckon with what mathematics really is at its core, and what it’s for…
Absolutely fascinating: “Mathematical Beauty, Truth, and Proof in the Age of AI,” from @jordanacep.bsky.social in @quantamagazine.bsky.social. Eminently worth reading in full.
###
As we wonder about ways of knowing, we might spare a thought for a man whose work helped trigger an earlier iteration of this enhance/transform discussion and laid the groundwork for the one unpacked in the article linked above: J. Presper Eckert; he died on this day in 1995. An electrical engineer, he co-designed (with John Mauchly) the first general-purpose electronic computer, the ENIAC (see here and here) for the U.S. Army’s Ballistic Research Laboratory. He and Mauchly went on to found the Eckert–Mauchly Computer Corporation, at which they designed and built the first commercial computer in the U.S., the UNIVAC.