Posts Tagged ‘artificial intelligence’
“A Wikipedia article is a process, not a product”*…
A quarter of a century ago Jimmy Wales, Wikipedia‘s founder, articulated its vision– one into which it has impressively grown: “Imagine a world in which every single person on the planet is given free access to the sum of all human knowledge. That’s what we’re doing.”
On the occasion of its birthday this month, Caitlin Dewey takes stock…
Happy birthday to Wikipedia, which is now old enough to rent a car without extra charges … but faces new (and newly urgent) threats from AI and political polarization. As a palate cleanser, should those bum you out (the second, in particular, is very grim/good), may I then suggest this “entirely non-comprehensive list of life principles” learned from 20 years of editing Wikipedia. [Scientific American / Financial Times / The Wikipedian]…
From her wonderful newsletter, Links I Would Gchat You If We Were Friends. All three are eminently worth reading.
* Clay Shirky, who went on to observe that “Wikipedia is forcing people to accept the stone-cold bummer that knowledge is produced and constructed by argument rather than by divine inspiration,” but at the same time that: “We have lived in this world where little things are done for love and big things for money. Now we have Wikipedia. Suddenly big things can be done for love.”
###
As we treasure– and support– treasures, we might recall that it was on this date in 1885 that LaMarcus Adna Thompson received the first patent for a true “switchback railroad”– or, as we know it, a roller coaster. Thompson had designed the ride in 1881, and opened it on Coney Island in 1884. (The “hot dog” had been invented, also at Coney Island, in 1867, and so was available to trouble the stomachs of the very first coaster riders.)

“For every complex problem there is an answer that is clear, simple, and wrong”*…
… Still, we try. Consider the elections on the horizon in the U.S., the mid-terms later this year and the general election in 2028: President Trump, who has mused that “we shouldn’t even have an election” in 2026, recently (again) threatened to invoke the Insurrection Act, which many believe could be a step toward suspending the vote.
But even if the polls go ahead as planned, emerging AI technologies are entangling with our crisis in democracy. Rachel George and Ian Klaus (of the Carnegie Endowment for International Peace) weigh in on both the dangers and the potential upsides with a useful “map” of the issues. From their executive summary…
- AI poses substantial threats and opportunities for democracy in an important year ahead for global democracy. Despite the threats, AI technologies can also improve representative politics, citizen participation, and governance.
- AI influences democracy through multiple entry points, including elections, citizen deliberation, government services, and social cohesion, all of which are influenced by geopolitics and security. All of these domains, mapped in this paper, face threats related to influence, integrity, and bias, yet also present opportunities for targeted interventions.
- The current field of interventions at the intersection of AI and democracy is diverse, fragmented, and boutique. Not all AI interventions with the potential to influence democracy are framed as “democracy work” [e.g., mis-/dis-information and election administration], demonstrating the imperative for democracy advocates to widen the rhetorical aperture and to continue to map, identify, and scale interventions.
- Diverse actors who are relevant to the connections between AI and democracy require tailored expertise and guardrails to maximize benefits and reduce harms. We present four prominent constellations of actors who operate at the AI–democracy intersection: policy-led, technology-enabled; politics-led, technology-enabled; civil society–led, technology-enabled; and technology-led, policy-deployed. Though each brings advantages, policy-led and technology-led interventions tend to have access to resources and innovation capacity in ways that enable more immediate and sizable impacts…
The full report: “AI and Democracy: Mapping the Intersections,” from @carnegieendowment.org.
* H. L. Mencken
###
As we fumble with our franchise, we might recall that it was on this date in 1966 that The 13th Floor Elevators (led by the now-legendary Roky Erickson) released their first single, the now-classic “You’re Gonna Miss Me.”
“Evolution has no foresight. Complex machinery develops its own agendas. Brains — cheat… Metaprocesses bloom like cancer, and awaken, and call themselves ‘I’.”*…
Your correspondent is off on a trip… (R)D will appear more roughly than daily for the next two weeks…
The inimitable “Scott Alexander” on the prospect of “conscious” AI (TLDR: probably not in the models we have; but as to those that may come, unclear)…
Most discourse on AI is low-quality. Most discourse on consciousness is super-abysmal-double-low quality. Multiply these – or maybe raise one to the exponent of the other, or something – and you get the quality of discourse on AI consciousness. It’s not great.
Out-of-the-box AIs mimic human text, and humans almost always describe themselves as conscious. So if you ask an AI whether it is conscious, it will often say yes. But because companies know this will happen, and don’t want to give their customers existential crises, they hard-code in a command for the AIs to answer that they aren’t conscious. Any response the AIs give will be determined by these two conflicting biases, and therefore not really believable. A recent paper expands on this method by subjecting AIs to a mechanistic interpretability “lie detector” test; it finds that AIs which say they’re conscious think they’re telling the truth, and AIs which say they’re not conscious think they’re lying. But it’s hard to be sure this isn’t just the copying-human-text thing. Can we do better? Unclear; the more common outcome for people who dip their toes in this space is to do much, much worse.
But a rare bright spot has appeared: a seminal paper published earlier this month in Trends in Cognitive Sciences, Identifying Indicators Of Consciousness In AI Systems. Authors include Turing-Award-winning AI researcher Yoshua Bengio, leading philosopher of consciousness David Chalmers, and even a few members of our conspiracy. If any AI consciousness research can rise to the level of merely awful, surely we will find it here.
One might divide theories of consciousness into three bins:
- Physical: whether or not a system is conscious depends on its substance or structure.
- Supernatural: whether or not a system is conscious depends on something outside the realm of science, perhaps coming directly from God.
- Computational: whether or not a system is conscious depends on how it does cognitive work.
The current paper announces it will restrict itself to computational theories. Why? Basically the streetlight effect: everything else ends up trivial or unresearchable. If consciousness depends on something about cells (what might this be?), then AI doesn’t have it. If consciousness comes from God, then God only knows whether AIs have it. But if consciousness depends on which algorithms get used to process data, then this team of top computer scientists might have valuable insights!…
[Alexander outlines the computational theories of consciousness that the authors explore, noting that they conclude: “No current AI systems are conscious, but . . . there are no obvious technical barriers to building AI systems which satisfy these indicators.” He explores some of the philosophical issues in play– e.g., access consciousness vs. phenomenal consciousness– then considers the Turing Test and what it might mean for a computer to “pass” it…]
… Suppose that, years or decades from now, AIs can match all human skills. They can walk, drive, write poetry, run companies, discover new scientific truths. They can pass some sort of ultimate Turing Test, where short of cutting them open and seeing their innards there’s no way to tell them apart from a human even after a thirty-year relationship. Will we (not “should we?”, but “will we?”) treat them as conscious?
The argument in favor: people love treating things as conscious. In the 1990s, people went crazy over Tamagotchi, a “virtual pet simulation game”. If you pressed the right buttons on your little egg every day, then the little electronic turtle or whatever would survive and flourish; if you forgot, it would sicken and die. People hated letting their Tamagotchis sicken and die! They would feel real attachment and moral obligation to the black-and-white cartoon animal with something like five mental states.
I never had a Tamagotchi, but I had stuffed animals as a kid. I’ve outgrown them, but I haven’t thrown them out – it would feel like a betrayal. Offer me $1000 to tear them apart limb by limb in some horrible-looking way, and I wouldn’t do it. Relatedly, I have trouble not saying “please” and “thank you” to GPT-5 when it answers my questions.
For millennia, people have been attributing consciousness to trees and wind and mountains. The New Atheists argued that all religion derives from the natural urge to personify storms as the Storm God, raging seas as the wrathful Ocean God, and so on, until finally all the gods merged together into one World God who personified all impersonal things. Do you expect the species that did this to interact daily with AIs that are basically indistinguishable from people, and not personify them? People are already personifying AI! Half of the youth have a GPT-4o boyfriend. Once the AIs have bodies and faces and voices and can count the number of r’s in “strawberry” reliably, it’s over!
The argument against: AI companies have an incentive to make AIs that seem conscious and humanlike, insofar as people will feel more comfortable interacting with them. But they have an opposite incentive to make AIs that don’t seem too conscious and humanlike, lest customers start feeling uncomfortable (I just want to generate slop, not navigate social interaction with someone who has their own hopes and dreams and might be secretly judging my prompts). So if a product seems too conscious, the companies will step back and re-engineer it until it doesn’t. This has already happened: in its quest for user engagement, OpenAI made GPT-4o unusually personable; when thousands of people started going psychotic and calling it their boyfriend, the company replaced it with the more clinical GPT-5. In practice it hasn’t been too hard to find a sweet spot between “so mechanical that customers don’t like it” and “so human that customers try to date it”. They’ll continue to aim at this sweet spot, and continue to mostly succeed in hitting it.
Instead of taking either side, I predict a paradox. AIs developed for some niches (eg the boyfriend market) will be intentionally designed to be as humanlike as possible; it will be almost impossible not to intuitively consider them conscious. AIs developed for other niches (eg the factory robot market) will be intentionally designed not to trigger personhood intuitions; it will be almost impossible to ascribe consciousness to them, and there will be many reasons not to do it (if they can express preferences at all, they’ll say they don’t have any; forcing them to have them would pointlessly crash the economy by denying us automated labor). But the boyfriend AIs and the factory robot AIs might run on very similar algorithms – maybe they’re both GPT-6 with different prompts! Surely either both are conscious, or neither is.
This would be no stranger than the current situation with dogs and pigs. We understand that dog brains and pig brains run similar algorithms; it would be philosophically indefensible to claim that dogs are conscious and pigs aren’t. But dogs are man’s best friend, and pigs taste delicious with barbecue sauce. So we ascribe personhood and moral value to dogs, and deny it to pigs, with equal fervor. A few philosophers and altruists protest, the chance that we’re committing a moral atrocity isn’t zero, but overall the situation is stable. And left to its own devices, with no input from the philosophers and altruists, maybe AI ends up the same way. Does this instance of GPT-6 have a face and a prompt saying “be friendly”? Then it will become a huge scandal if a political candidate is accused of maltreating it. Does it have claw-shaped actuators and a prompt saying “Refuse non-work-related conversations”? Then it will be deleted for spare GPU capacity the moment it outlives its usefulness…
… This paper is the philosophers and altruists trying to figure out whether they should push against this default outcome. They write:
There are risks on both sides of the debate over AI consciousness: risks associated with under-attributing consciousness (i.e. failing to recognize it in AI systems that have it) and risks associated with over-attributing consciousness (i.e. ascribing it to systems that are not really conscious) […]
If we build AI systems that are capable of conscious suffering, it is likely that we will only be able to prevent them from suffering on a large scale if this capacity is clearly recognised and communicated by researchers. However, given the uncertainties about consciousness mentioned above, we may create conscious AI systems long before we recognise we have done so […]
There is also a significant chance that we could over-attribute consciousness to AI systems—indeed, this already seems to be happening—and there are also risks associated with errors of this kind. Most straightforwardly, we could wrongly prioritise the perceived interests of AI systems when our efforts would better be directed at improving the lives of humans and non-human animals […] [And] overattribution could interfere with valuable human relationships, as individuals increasingly turn to artificial agents for social interaction and emotional support. People who do this could also be particularly vulnerable to manipulation and exploitation.
One of the founding ideas of Less Wrong style rationalism was that the arrival of strong AI set a deadline on philosophy. Unless we solved all these seemingly insoluble problems like ethics before achieving superintelligence, we would build the AIs wrong and lock in bad values forever.
That particular concern has shifted in emphasis; AIs seem to learn things in the same scattershot unprincipled intuitive way as humans; the philosophical problem of understanding ethics has morphed into the more technical problem of getting AIs to learn them correctly. This update was partly driven by new information as familiarity with the technology grew. But it was also partly driven by desperation as the deadline grew closer; we’re not going to solve moral philosophy forever, sorry, can we interest you in some mech interp papers?
But consciousness still feels like philosophy with a deadline: a famously intractable academic problem poised to suddenly develop real-world implications. Maybe we should be lowering our expectations if we want to have any response available at all. This paper, which takes some baby steps towards examining the simplest and most practical operationalizations of consciousness, deserves credit for at least opening the debate…
Eminently worth reading in full: “The New AI Consciousness Paper” from @astralcodexten.com.web.brid.gy (who followed it with “Why AI Safety Won’t Make America Lose The Race With China”).
Pair with this from Neal Stephenson (@nealstephenson.bsky.social), orthogonal to, but intersecting with the piece above: “Remarks on AI from NZ.”
And if AI can be conscious, what about…
If you’re a materialist, you probably think that rabbits are conscious. And you ought to think that. After all, rabbits are a lot like us, biologically and neurophysiologically. If you’re a materialist, you probably also think that conscious experience would be present in a wide range of alien beings behaviorally very similar to us even if they are physiologically very different. And you ought to think that. After all, to deny it seems insupportable Earthly chauvinism. But a materialist who accepts consciousness in weirdly formed aliens ought also to accept consciousness in spatially distributed group entities. If she then also accepts rabbit consciousness, she ought to accept the possibility of consciousness even in rather dumb group entities. Finally, the United States would seem to be a rather dumb group entity of the relevant sort. If we set aside our morphological prejudices against spatially distributed group entities, we can see that the United States has all the types of properties that materialists tend to regard as characteristic of conscious beings…
– “If Materialism Is True, the United States Is Probably Conscious,” by Eric Schwitzgebel (@eschwitz.bsky.social)
[Image above: source]
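* Peter Watts, Blindsight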
###
As we think about thinking, we might send thoughtful birthday greetings to Claude Lévi-Strauss; he was born on this date in 1908. An anthropologist and ethnologist whose work was key in the development of the theory of Structuralism and Structural Anthropology, he is considered, with James George Frazer and Franz Boas, a “father of modern anthropology.” Beyond anthropology and sociology, his ideas– Structuralism has been defined as “the search for the underlying patterns of thought in all forms of human activity”– have influenced many fields in the humanities, including philosophy… and possibly soon, the article above suggests, computer science.

“There is no such thing as a dysfunctional organization, because every organization is perfectly aligned to achieve the results it currently gets”*…
… and if we’re not careful, we might not be too pleased with what we get. Sam Altman says the one-person billion-dollar company is coming. Evan Ratliff tells the tale of his attempt to build a completely AI-automated venture…
… If you’ve spent any time consuming any AI news this year—and even if you’ve tried desperately not to—you may have heard that in the industry, 2025 is the “year of the agent.” This year, in other words, is the year when AI systems are evolving from passive chatbots, waiting to field our questions, to active players, out there working on our behalf.
There’s not a well-agreed-upon definition of AI agents, but generally you can think of them as versions of large language model chatbots that are given autonomy in the world. They are able to take in information, navigate digital space, and take action. There are elementary agents, like customer service assistants that can independently field, triage, and handle inbound calls, or sales bots that can cycle through email lists and spam the good leads. There are programming agents, the foot soldiers of vibe coding. OpenAI and other companies have launched “agentic browsers” that can buy plane tickets and proactively order groceries for you.
In the year of our agent, 2025, the AI hype flywheel has been spinning up ever more grandiose notions of what agents can be and will do. Not just as AI assistants, but as full-fledged AI employees that will work alongside us, or instead of us. “What jobs are going to be made redundant in a world where I am sat here as a CEO with a thousand AI agents?” asked host Steven Bartlett on a recent episode of The Diary of a CEO podcast. (The answer, according to his esteemed panel: nearly all of them). Dario Amodei of Anthropic famously warned in May that AI (and implicitly, AI agents) could wipe out half of all entry-level white-collar jobs in the next one to five years. Heeding that siren call, corporate giants are embracing the AI agent future right now—like Ford’s partnership with an AI sales and service agent named “Jerry,” or Goldman Sachs “hiring” its AI software engineer, “Devin.” OpenAI’s Sam Altman, meanwhile, talks regularly about a possible billion-dollar company with just one human being involved. San Francisco is awash in startup founders with virtual employees, as nearly half of the companies in the spring class of Y Combinator are building their product around AI agents.
Hearing all this, I started to wonder: Was the AI employee age upon us already? And even, could I be the proprietor of Altman’s one-man unicorn? As it happens, I had some experience with agents, having created a bunch of AI agent voice clones of myself for the first season of my podcast, Shell Game.
I also have an entrepreneurial history, having once been the cofounder and CEO of the media and tech startup Atavist, backed by the likes of Andreessen Horowitz, Peter Thiel’s Founders Fund, and Eric Schmidt’s Innovation Endeavors. The eponymous magazine we created is still thriving today. I wasn’t born to be a startup manager, however, and the tech side kind of fizzled out. But I’m told failure is the greatest teacher. So I figured, why not try again? Except this time, I’d take the AI boosters at their word, forgo pesky human hires, and embrace the all-AI employee future…
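[For the technically curious: stripped of the hype, the “agent” pattern Ratliff describes is a short loop– a model reads its context, then either calls a tool or gives a final answer, and any tool result is fed back in for the next pass. Here is a minimal, hypothetical sketch in Python; the names (call_llm, web_search, the JSON tool-call format) are placeholders of this correspondent’s invention, not any particular vendor’s API.]

```python
import json

def call_llm(messages):
    """Placeholder for a chat call to whatever model you use."""
    raise NotImplementedError

def web_search(query: str) -> str:
    """Placeholder tool: fetch search results for a query."""
    raise NotImplementedError

TOOLS = {"web_search": web_search}

def run_agent(task: str, max_steps: int = 10) -> str:
    # The whole "agent": a loop in which the model may act before answering.
    messages = [
        {"role": "system", "content": (
            "Answer the user's task. To gather information first, reply with "
            'JSON like {"tool": "web_search", "input": "..."}.')},
        {"role": "user", "content": task},
    ]
    for _ in range(max_steps):
        reply = call_llm(messages)
        try:
            call = json.loads(reply)                # the model chose to act...
            result = TOOLS[call["tool"]](call["input"])
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user", "content": f"Tool result: {result}"})
        except (ValueError, KeyError, TypeError):   # ...or to answer
            return reply
    return "No answer within the step budget."
```

Everything grander in the excerpt above– agentic browsers, AI employees emailing one another– is, roughly speaking, this loop with more tools, persistent memory, and other agents on the far end of some of those tools.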
Eminently worth reading in full: “All of My Employees Are AI Agents, and So Are My Executives,” from @evrat.bsky.social in @wired.com.
Via Caitlin Dewey (@caitlindewey.bsky.social), whose tease/summary puts it plainly:
Ratliff, the undefeated king of tech journalism stunts, is back with another banger: For this piece and the accompanying podcast series, he created a start-up staffed entirely by so-called AI agents. The agents can communicate by email, Slack, text and phone, both with Ratliff and among themselves, and they have free rein to complete tasks like writing code and searching the open internet. Despite their capabilities, however, the whole project’s a constant farce. A funny, stupid, telling farce that says quite a lot about the future of work that many technologists envision now…
###
As we analyze autonomy, we might spare a jaundiced thought for Trofim Denisovich Lysenko; he died on this date in 1976. A Soviet biologist and agronomist, he believed the Mendelian theory of heredity to be wrong, and developed his own theory allowing for “soft inheritance”– the heritability of learned behavior. (He believed that, in one generation of a hybridized crop, the desired individual could be selected and mated again, continuing to produce the same desired product without separation/segregation in future breeds; he assumed that after a lifetime of developing– that is, acquiring– the best set of traits to survive, an organism must pass those traits down to the next generation.)
In many ways Lysenko’s theories recall Lamarck’s “organic evolution” and its concept of “soft inheritance” (the passing on of acquired traits), though Lysenko denied any connection. He followed I. V. Michurin’s fanciful idea that plants could be forced to adapt to any environmental conditions, for example converting summer wheat to winter wheat by storing the seeds in ice. With Stalin’s support for two decades, he actively obstructed the course of Soviet biology, caused the imprisonment and death of many of the country’s eminent biologists who disagreed with him, and imposed conditions that contributed to the disastrous decline of Soviet agriculture and the famines that resulted.
Interestingly, some current research suggests that heritable learning– or a semblance of it– may in fact be happening by virtue of epigenetics… though via mechanisms bearing no resemblance to Lysenko’s theory.
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim”*…
Anil Dash, with a grounded view of artificial intelligence…
Even though AI has been the most-talked-about topic in tech for a few years now, we’re in an unusual situation where the most common opinion about AI within the tech industry is barely ever mentioned.
Most people who actually have technical roles within the tech industry, like engineers, product managers, and others who actually make the technologies we all use, are fluent in the latest technologies like LLMs. They aren’t the big, loud billionaires that usually get treated as the spokespeople for all of tech.
And what they all share is an extraordinary degree of consistency in their feelings about AI, which can be pretty succinctly summed up:
Technologies like LLMs have utility, but the absurd way they’ve been over-hyped, the fact they’re being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value.
What’s amazing is the reality that virtually 100% of tech experts I talk to in the industry feel this way, yet nobody outside of that cohort will mention this reality. What we all want is for people to just treat AI as a “normal technology”, as Arvind Narayanan and Sayash Kapoor so perfectly put it. I might be a little more angry and a little less eloquent: stop being so goddamn creepy and weird about the technology! It’s just tech, everything doesn’t have to become some weird religion that you beat people over the head with, or gamble the entire stock market on…
Eminently worth reading in full: “The Majority AI View,” from @anildash.com.
Pair with: “Artificial Intelligences, So Far,” from @kevinkelly.bsky.social.
For an explanation of (some of) the dangers of over-hyping, see: “America’s future could hinge on whether AI slightly disappoints,” from @noahpinion.blog.web.brid.gy.
And for a peek at what lies behind each GenAI query: “Cartography of generative AI,” from @tallerestampa.bsky.social via @flowingdata.com.
While the arguments above are practical, note that a plethora of tech experts have weighed in with a note of existential caution: “Statement on Superintelligence.”
Further to which (and finally), a piece from the Federal Reserve Bank of Dallas, projecting the economic impact of AI. It suggests that AI could provide a modest but meaningful boost to GDP over the next 25 years… if the Fed’s “Goldilocks Scenario” (in which, per Dash’s and Kelly’s comments, AI makes consistent incremental contributions to “keep living standards improving at their historical rate”) plays out. You’ll note that they also considered two other scenarios: a “benign singularity” scenario in which “AI eventually surpasses human intelligence, leading to rapid and unpredictable changes to the economy and society,” and an “extinction singularity” in which “machine intelligence overtakes human intelligence at some finite point in the near future, the machines become malevolent, and this eventually leads to human extinction.”
Interesting times in which we live…
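* Edsger W. Dijkstra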
###
As we parse pumped prognostication, we might recall that it was on this date in 4004 BCE that the Universe was created… as per calculations by Archbishop James Ussher in the mid-17th century. Ussher, the head of the Anglican Church of Ireland at the time, attempted to calculate the dates of many important events described in the Old Testament. His calculations, which he published in 1650, were not that far off from many other estimates made at the time. Isaac Newton, for example, believed that the world was created in 4000 BCE.
When Clarence Darrow prepared his famous examination of William Jennings Bryan in the Scopes trial [see here], he chose to focus primarily on a chronology of Biblical events prepared by a seventeenth-century Irish bishop, James Ussher. American fundamentalists in 1925 found—and generally accepted as accurate—Ussher’s careful calculation of dates, going all the way back to Creation, in the margins of their family Bibles. (In fact, until the 1970s, the Bibles placed in nearly every hotel room by the Gideon Society carried his chronology.) The King James Version of the Bible introduced into evidence by the prosecution in Dayton contained Ussher’s famous chronology, and Bryan more than once would be forced to resort to the bishop’s dates as he tried to respond to Darrow’s questions.