(Roughly) Daily


“The new media are not ways of relating to us the ‘real’ world; they are the real world and they reshape what remains of the old world at will.”*…

A collection of open magazines and newspapers spread out on a surface, featuring articles and images, with an iPad displaying a news website in the center.

There is a vortex of forces shaping the future of journalism. Censorship, both direct and indirect, is on the rise in the U.S. and around the world. Concentration of media ownership is homogenizing coverage and creating “news deserts.”

At the same time, new technology and new applications of that technology are reshaping the Fourth Estate. The Reuters Institute at Oxford surveyed 280 digital leaders from 51 countries and territories to learn what they are seeing– and planning. From the Executive Summary…

We are still at the early stages of another big shift in technology (Generative AI) which threatens to upend the news industry by offering more efficient ways of accessing and distilling information at scale. At the same time, creators and influencers (humans) are driving a shift towards personality-led news, at the expense of media institutions that can often feel less relevant, less interesting, and less authentic. In 2026 the news media are likely to be further squeezed by these two powerful forces.

Understanding the impact of these trends, and working out how to combat them, will be high up the ‘to do list’ of media executives this year, despite the unevenly distributed pace of change across countries and demographics.

Existential challenges abound. Declining engagement for traditional media combined with low trust is leading many politicians, businessmen, and celebrities to conclude that they can bypass the media entirely, giving interviews instead to sympathetic podcasters or YouTubers. This Trump 2.0 playbook – now widely copied around the world – often comes bundled with a barrage of intimidating legal threats against publishers and continuing attempts to undermine trust by branding independent media and individual journalists as ‘fake news’. These narratives are finding fertile ground with audiences – especially younger ones – that prefer the convenience of accessing news from platforms, and have weaker connections with traditional news brands. Meanwhile search engines are turning into AI-driven answer engines, where content is surfaced in chat windows, raising fears that referral traffic for publishers could dry up, undermining existing and future business models.

Despite these difficulties, many traditional news organisations remain optimistic about their own business – if not about journalism itself. Publishers will be focused this year on re-engineering their businesses for the age of AI, with more distinctive content and a more human face. They will also be looking beyond the article, investing more in multiple formats, especially video, and adjusting their content to make it more ‘liquid’ and therefore easier to reformat and personalise. At the same time, they’ll be continuing to work out how best to use Generative AI themselves across newsgathering, packaging, and distribution. It’s a delicate balancing act but one that – if they can pull it off – holds out the promise of greater efficiency and more relevant and engaging journalism.

These are the main findings from our industry survey:

  • Only slightly more than a third (38%) of our sample of editors, CEOs, and digital executives say they are confident about the prospects for journalism in the year ahead – that’s 22pp lower than four years ago. Stated concerns relate to politically motivated attacks on journalism, loss of USAID money that previously supported independent media in many parts of the world, and significant declines in traffic to many online news sites.
  • By contrast, around half (53%) say they are confident about their own business prospects, similar to last year’s figure. Upmarket subscription-based publishers with strong direct traffic can see a path to long-term profitability, even as those that remain dependent on advertising and print worry about sharp declines in revenue and the potential impact of AI powered search on the bottom line.
  • Publishers expect traffic from search engines to decline by more than 40% over the next three years – not quite ‘Google Zero’ but a substantial impact none the less. Data sourced for this report from analytics provider Chartbeat shows that aggregate traffic to hundreds of news sites from Google search has already started to dip, with publishers that rely on lifestyle content saying they have been particularly affected by the rollout of Google’s AI Overviews. This comes after substantial falls in referral traffic to news sites from Facebook (-43%) and X, formerly Twitter (-46%) over the last three years.
  • In response, publishers say it will be important to focus on more original investigations and on-the-ground reporting (+91 percentage point difference between ‘more’ and ‘less’), contextual analysis and explanation (+82), and human stories (+72). By contrast, they plan to scale back service journalism (-42), evergreen content (-32), and general news (-38), which many expect to become commoditised by AI chatbots. At the same time, they think it will be important to invest in more video (+79) – including ‘watch tabs’ – and more audio formats (+71) such as podcasts, but a bit less in text output.
  • In terms of off-platform strategies, YouTube will be the main focus for publishers this year with a net score of +74, up substantially on last year. Other video-led platforms such as TikTok (+56) and Instagram (+41) are also key priorities – along with working out how to navigate distribution through AI platforms (+61) such as OpenAI’s ChatGPT, Google’s Gemini, and Perplexity. Google Discover remains a critical (+19), if slightly volatile, source of referral traffic, while some publishers are looking to find new audiences via newsletter platforms such as Substack (+8). By contrast, publishers will be deprioritising effort spent on old-style Google SEO (-25) – as well as the traditional social networks Facebook (-23) and X (-52).
  • Last year we predicted the emergence of ‘agentic AI’, but this year we can expect to start to see the real-world impact of these more advanced technologies. Some sources suggest that there will soon be more bots than people reading publisher websites, as tools like Huxe and OpenAI’s Pulse offer personalised news briefings at scale. Three-quarters of our respondents (75%) expect ‘agentic tools’ to have a ‘large’ or ‘very large’ impact on the news industry in the near future.
  • Alongside the traffic disruption from AI, news executives also see opportunities to build new revenue from licensing content (or a share of advertising revenue) within chatbots. Around a fifth (20%) of publisher respondents – mainly from upmarket news companies – expect future revenues to be substantial, with half (49%) saying that they expect a minor contribution. A further fifth (20%), mostly made up of local publishers, public broadcasters, or those from smaller countries, say they do not expect any income from AI deals.
  • More widely, subscription and membership remain the biggest revenue focus (76%) for publishers, ahead of both display (68%) and native advertising (64%). Online and physical events (54%) are also becoming more important as part of a diversified revenue strategy. Reliance on philanthropic and foundation support (18%) has declined this year, after cuts to media support budgets in the United States and elsewhere.
  • Meanwhile, news organisations’ use of AI technologies continues to increase across all categories, with back-end automation considered ‘important’ this year by the vast majority (97%) of publisher respondents, many of whom integrated pilot systems into content management systems in the last year. Newsgathering use cases (82%) are now the second most important, with faster coding and product development (81%) also gaining traction.
  • Over four in ten (44%) survey respondents say that their newsroom AI initiatives are showing ‘promising’ results, but a similar proportion (42%) describe them as ‘limited’. Two-thirds of respondents (67%) say they have not cut any jobs so far as a result of AI efficiencies. Around one in seven (16%) say they have slightly reduced staff numbers, but a further one in ten (9%) have added new roles/costs.
  • The rise of news creators and influencers is a concern for publishers in two ways. More than two-thirds (70%) of our respondents are concerned that they are taking time and attention away from publisher content. Four in ten (39%) worry that they are at risk of losing top editorial talent to the creator ecosystem, which offers more control and potentially higher financial rewards.
  • Responding to the increased competition and a shift of trust towards personalities, three-quarters (76%) of publisher respondents say they will be trying to get their staff to behave more like creators this year. Half (50%) said they would be partnering with creators to help distribute content; around a third (31%) said they would be hiring creators, for example to run their social media accounts. A further 28% are looking to set up creator studios and facilitate joint ventures.

More widely, could 2026 be the year when AI company stock valuations come down to earth with a bump, amid concerns about whether their trillion-dollar bets will pay back their investors? Meanwhile the amount of low-quality, AI-automated content, including so-called ‘pink slime’ sites, looks set to explode, with platforms struggling to distinguish it from legitimate news.

We can expect more public concern about the role of big tech in our lives. This may include individual acts of ‘Appstinence’ and other forms of digital detox, and a desire for more IRL (In Real Life) connection. Governments will also come under pressure to do more to protect young people and other vulnerable groups online, even in the United States.

The creator economy will continue to surge, fuelled by investments from video platforms and streamers. At the top end creators will look more like Hollywood moguls with big budgets and their own studio complexes. Within news, we’ll also see the emergence of bigger, more robust, creator-led companies delivering significant revenues as well as value to audiences – offering ever greater competition for traditional journalism…

Read the report in full: “Journalism, media, and technology trends and predictions 2026,” from @reutersinstitute.bsky.social.

* Marshall McLuhan

###

As we ponder the prospects of the press, we might type a birthday note to John Baskerville, a pioneering English printer and typefounder, who was born on this date in 1706.  Among Baskerville’s publications in the British Museum’s collection are Aesop’s Fables (1761), the Bible (1763), and the works of Horace (1770)– many printed on “wove paper”, a stock developed to his specifications that was considerably smoother than “laid paper”, allowing for sharper printing results.  And as for his fonts, Baskerville’s creations (including the famous “Baskerville,” a predecessor to the very similar Times New Roman) were so successful that his competitors resorted to claims that they damaged the eyes.

Portrait of an 18th-century man wearing a dark coat with white ruffled cuffs, seated with hands clasped.

source

Title page of 'Bucolica, Georgica, et Aeneis' by Publius Vergilius Maro, printed in Birmingham in 1757.
Baskerville’s first publication, an edition of Virgil. (source)

“A Wikipedia article is a process, not a product”*…

Logo celebrating the 25th anniversary of Wikipedia, featuring a globe, symbols for different languages, a birthday cake, and two people holding hands.

A quarter of a century ago, Jimmy Wales, Wikipedia’s founder, articulated its vision– one into which it has impressively grown: “Imagine a world in which every single person on the planet is given free access to the sum of all human knowledge. That’s what we’re doing.”

On the occasion of its birthday this month, Caitlin Dewey takes stock…

Happy birthday to Wikipedia, which is now old enough to rent a car without extra charges … but faces new (and newly urgent) threats from AI and political polarization. As a palate cleanser, should those bum you out (the second, in particular, is very grim/good), may I then suggest this “entirely non-comprehensive list of life principles” learned from 20 years of editing Wikipedia. [Scientific American / Financial Times / The Wikipedian]…

From her wonderful newsletter, Links I Would Gchat You If We Were Friends. All three are eminently worth reading.

* Clay Shirky, who went on to observe that “Wikipedia is forcing people to accept the stone-cold bummer that knowledge is produced and constructed by argument rather than by divine inspiration,” but at the same time that: “We have lived in this world where little things are done for love and big things for money. Now we have Wikipedia. Suddenly big things can be done for love.”

###

As we treasure– and support– treasures, we might recall that it was on this date in 1885 that LaMarcus Adna Thompson received the first patent for a true “switchback railroad”– or, as we know it, a roller coaster.  Thompson had designed the ride in 1881, and opened it on Coney Island in 1884.  (The “hot dog” had been invented, also at Coney Island, in 1867, so was available to trouble the stomachs of the very first coaster riders.)

An illustration of an early amusement park featuring a wooden roller coaster, people walking along pathways, and beachgoers in the distance, with American flags displayed at the park.
Thompson’s original Switchback Railway at Coney Island (source)

“For every complex problem there is an answer that is clear, simple, and wrong”*…

Close-up of a digital globe with illuminated continents and swirling lines of data, representing technology and connectivity.

… Still, we try. Consider the elections on the horizon in the U.S., the mid-terms later this year and the general election in 2028: President Trump, who has mused that “we shouldn’t even have an election” in 2026, recently (again) threatened to invoke the Insurrection Act, which many believe could be a step toward suspending the vote.

But even if the polls go ahead as planned, emerging AI technologies are entangling with our crisis in democracy. Rachel George and Ian Klaus (of the Carnegie Endowment for International Peace) weigh in on both the dangers and the potential upsides with a useful “map” of the issues. From their executive summary…

  • AI poses substantial threats and opportunities for democracy in an important year ahead for global democracy. Despite the threats, AI technologies can also improve representative politics, citizen participation, and governance.
  • AI influences democracy through multiple entry points, including elections, citizen deliberation, government services, and social cohesion, all of which are influenced by geopolitics and security. All of these domains, mapped in this paper, face threats related to influence, integrity, and bias, yet also present opportunities for targeted interventions.
  • The current field of interventions at the intersection of AI and democracy is diverse, fragmented, and boutique. Not all AI interventions with the potential to influence democracy are framed as “democracy work” [e.g., mis-/dis-information and election administration], demonstrating the imperative for democracy advocates to widen the rhetorical aperture and to continue to map, identify, and scale interventions.
  • Diverse actors who are relevant to the connections between AI and democracy require tailored expertise and guardrails to maximize benefits and reduce harms. We present four prominent constellations of actors who operate at the AI–democracy intersection: policy-led, technology-enabled; politics-led, technology-enabled; civil society–led, technology-enabled; and technology-led, policy-deployed. Though each brings advantages, policy-led and technology-led interventions tend to have access to resources and innovation capacity in ways that enable more immediate and sizable impacts…

The full report: “AI and Democracy: Mapping the Intersections,” from @carnegieendowment.org.

* H. L. Mencken

###

As we fumble with our franchise, we might recall that it was on this date in 1966 that The 13th Floor Elevators (led by the now-legendary Roky Erickson) released their first single, the now-classic “You’re Gonna Miss Me.”

Written by (Roughly) Daily

January 17, 2026 at 1:00 am

“Evolution has no foresight. Complex machinery develops its own agendas. Brains — cheat… Metaprocesses bloom like cancer, and awaken, and call themselves ‘I’.”*…

Silhouette of a woman's face merged with a digital representation of a humanoid figure, symbolizing the intersection of human consciousness and artificial intelligence.

Your correspondent is off on a trip… (R)D will be more roughly than daily for the next two weeks…

The inimitable “Scott Alexander” on the prospect of “conscious” AI (TLDR: probably not in the models we have; but as to those that may come, unclear)…

Most discourse on AI is low-quality. Most discourse on consciousness is super-abysmal-double-low quality. Multiply these – or maybe raise one to the exponent of the other, or something – and you get the quality of discourse on AI consciousness. It’s not great.

Out-of-the-box AIs mimic human text, and humans almost always describe themselves as conscious. So if you ask an AI whether it is conscious, it will often say yes. But because companies know this will happen, and don’t want to give their customers existential crises, they hard-code in a command for the AIs to answer that they aren’t conscious. Any response the AIs give will be determined by these two conflicting biases, and therefore not really believable. A recent paper expands on this method by subjecting AIs to a mechanistic interpretability “lie detector” test; it finds that AIs which say they’re conscious think they’re telling the truth, and AIs which say they’re not conscious think they’re lying. But it’s hard to be sure this isn’t just the copying-human-text thing. Can we do better? Unclear; the more common outcome for people who dip their toes in this space is to do much, much worse.
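
[A note from your correspondent: the “lie detector” Alexander mentions is, at heart, a linear probe trained on a model’s internal activations. Below is a minimal sketch of the general idea– emphatically not the method of the paper he cites. The activation vectors are synthetic stand-ins, and every name and number is illustrative only…]

```python
# Illustrative sketch of an activation-based "lie detector" probe.
# Hypothetical throughout: real work records hidden states from an
# actual LLM; here synthetic vectors stand in, to show the shape of
# the technique (a linear classifier over activations), nothing more.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 512  # pretend hidden-state dimensionality

# Pretend activations recorded while a model answered questions under
# "answer truthfully" vs. "answer deceptively" instructions.
truthful = rng.normal(0.0, 1.0, size=(200, d))
deceptive = rng.normal(0.3, 1.0, size=(200, d))  # slightly shifted cluster

X = np.vstack([truthful, deceptive])
y = np.array([0] * 200 + [1] * 200)  # 0 = truthful, 1 = deceptive

# The "lie detector" is just a linear probe over those activations.
probe = LogisticRegression(max_iter=1000).fit(X, y)

# At test time: take the hidden state produced while the model says
# "I am not conscious" and ask how deception-like the probe finds it.
test_activation = rng.normal(0.3, 1.0, size=(1, d))
print("P(deceptive) =", round(probe.predict_proba(test_activation)[0, 1], 3))
```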

But a rare bright spot has appeared: a seminal paper published earlier this month in Trends in Cognitive Sciences, Identifying Indicators Of Consciousness In AI Systems. Authors include Turing-Award-winning AI researcher Yoshua Bengio, leading philosopher of consciousness David Chalmers, and even a few members of our conspiracy. If any AI consciousness research can rise to the level of merely awful, surely we will find it here.

One might divide theories of consciousness into three bins:

  • Physical: whether or not a system is conscious depends on its substance or structure.
  • Supernatural: whether or not a system is conscious depends on something outside the realm of science, perhaps coming directly from God.
  • Computational: whether or not a system is conscious depends on how it does cognitive work.

The current paper announces it will restrict itself to computational theories. Why? Basically the streetlight effect: everything else ends up trivial or unresearchable. If consciousness depends on something about cells (what might this be?), then AI doesn’t have it. If consciousness comes from God, then God only knows whether AIs have it. But if consciousness depends on which algorithms get used to process data, then this team of top computer scientists might have valuable insights!…

[Alexander outlines the computational theories of consciousness that the authors explore, noting that they conclude: “No current AI systems are conscious, but . . . there are no obvious technical barriers to building AI systems which satisfy these indicators.” He explores some of the philosophical issues in play– e.g., access consciousness vs. phenomenal consciousness– then he considers the Turing Test and what it might mean for a computer to “pass” it…]

… Suppose that, years or decades from now, AIs can match all human skills. They can walk, drive, write poetry, run companies, discover new scientific truths. They can pass some sort of ultimate Turing Test, where short of cutting them open and seeing their innards there’s no way to tell them apart from a human even after a thirty-year relationship. Will we (not “should we?”, but “will we?”) treat them as conscious?

The argument in favor: people love treating things as conscious. In the 1990s, people went crazy over Tamagotchi, a “virtual pet simulation game”. If you pressed the right buttons on your little egg every day, then the little electronic turtle or whatever would survive and flourish; if you forgot, it would sicken and die. People hated letting their Tamagotchis sicken and die! They would feel real attachment and moral obligation to the black-and-white cartoon animal with something like five mental states.

I never had a Tamagotchi, but I had stuffed animals as a kid. I’ve outgrown them, but I haven’t thrown them out – it would feel like a betrayal. Offer me $1000 to tear them apart limb by limb in some horrible-looking way, and I wouldn’t do it. Relatedly, I have trouble not saying “please” and “thank you” to GPT-5 when it answers my questions.

For millennia, people have been attributing consciousness to trees and wind and mountains. The New Atheists argued that all religion derives from the natural urge to personify storms as the Storm God, raging seas as the wrathful Ocean God, and so on, until finally all the gods merged together into one World God who personified all impersonal things. Do you expect the species that did this to interact daily with AIs that are basically indistinguishable from people, and not personify them? People are already personifying AI! Half of the youth have a GPT-4o boyfriend. Once the AIs have bodies and faces and voices and can count the number of r’s in “strawberry” reliably, it’s over!

The argument against: AI companies have an incentive to make AIs that seem conscious and humanlike, insofar as people will feel more comfortable interacting with them. But they have an opposite incentive to make AIs that don’t seem too conscious and humanlike, lest customers start feeling uncomfortable (I just want to generate slop, not navigate social interaction with someone who has their own hopes and dreams and might be secretly judging my prompts). So if a product seems too conscious, the companies will step back and re-engineer it until it doesn’t. This has already happened: in its quest for user engagement, OpenAI made GPT-4o unusually personable; when thousands of people started going psychotic and calling it their boyfriend, the company replaced it with the more clinical GPT-5. In practice it hasn’t been too hard to find a sweet spot between “so mechanical that customers don’t like it” and “so human that customers try to date it”. They’ll continue to aim at this sweet spot, and continue to mostly succeed in hitting it.

Instead of taking either side, I predict a paradox. AIs developed for some niches (eg the boyfriend market) will be intentionally designed to be as humanlike as possible; it will be almost impossible not to intuitively consider them conscious. AIs developed for other niches (eg the factory robot market) will be intentionally designed not to trigger personhood intuitions; it will be almost impossible to ascribe consciousness to them, and there will be many reasons not to do it (if they can express preferences at all, they’ll say they don’t have any; forcing them to have them would pointlessly crash the economy by denying us automated labor). But the boyfriend AIs and the factory robot AIs might run on very similar algorithms – maybe they’re both GPT-6 with different prompts! Surely either both are conscious, or neither is.

This would be no stranger than the current situation with dogs and pigs. We understand that dog brains and pig brains run similar algorithms; it would be philosophically indefensible to claim that dogs are conscious and pigs aren’t. But dogs are man’s best friend, and pigs taste delicious with barbecue sauce. So we ascribe personhood and moral value to dogs, and deny it to pigs, with equal fervor. A few philosophers and altruists protest, the chance that we’re committing a moral atrocity isn’t zero, but overall the situation is stable. And left to its own devices, with no input from the philosophers and altruists, maybe AI ends up the same way. Does this instance of GPT-6 have a face and a prompt saying “be friendly”? Then it will become a huge scandal if a political candidate is accused of maltreating it. Does it have claw-shaped actuators and a prompt saying “Refuse non-work-related conversations”? Then it will be deleted for spare GPU capacity the moment it outlives its usefulness…

… This paper is the philosophers and altruists trying to figure out whether they should push against this default outcome. They write:

There are risks on both sides of the debate over AI consciousness: risks associated with under-attributing consciousness (i.e. failing to recognize it in AI systems that have it) and risks associated with over-attributing consciousness (i.e. ascribing it to systems that are not really conscious) […]

If we build AI systems that are capable of conscious suffering, it is likely that we will only be able to prevent them from suffering on a large scale if this capacity is clearly recognised and communicated by researchers. However, given the uncertainties about consciousness mentioned above, we may create conscious AI systems long before we recognise we have done so […]

There is also a significant chance that we could over-attribute consciousness to AI systems—indeed, this already seems to be happening—and there are also risks associated with errors of this kind. Most straightforwardly, we could wrongly prioritise the perceived interests of AI systems when our efforts would better be directed at improving the lives of humans and non-human animals […] [And] overattribution could interfere with valuable human relationships, as individuals increasingly turn to artificial agents for social interaction and emotional support. People who do this could also be particularly vulnerable to manipulation and exploitation.

One of the founding ideas of Less Wrong style rationalism was that the arrival of strong AI set a deadline on philosophy. Unless we solved all these seemingly insoluble problems like ethics before achieving superintelligence, we would build the AIs wrong and lock in bad values forever.

That particular concern has shifted in emphasis; AIs seem to learn things in the same scattershot unprincipled intuitive way as humans; the philosophical problem of understanding ethics has morphed into the more technical problem of getting AIs to learn them correctly. This update was partly driven by new information as familiarity with the technology grew. But it was also partly driven by desperation as the deadline grew closer; we’re not going to solve moral philosophy forever, sorry, can we interest you in some mech interp papers?

But consciousness still feels like philosophy with a deadline: a famously intractable academic problem poised to suddenly develop real-world implications. Maybe we should be lowering our expectations if we want to have any response available at all. This paper, which takes some baby steps towards examining the simplest and most practical operationalizations of consciousness, deserves credit for at least opening the debate…

Eminently worth reading in full: “The New AI Consciousness Paper” from @astralcodexten.com.web.brid.gy (who followed it with “Why AI Safety Won’t Make America Lose The Race With China”).

Pair with this from Neal Stephenson (@nealstephenson.bsky.social), orthogonal to, but intersecting with the piece above: “Remarks on AI from NZ.”

And if AI can be conscious, what about…

If you’re a materialist, you probably think that rabbits are conscious. And you ought to think that. After all, rabbits are a lot like us, biologically and neurophysiologically. If you’re a materialist, you probably also think that conscious experience would be present in a wide range of alien beings behaviorally very similar to us even if they are physiologically very different. And you ought to think that. After all, to deny it seems insupportable Earthly chauvinism. But a materialist who accepts consciousness in weirdly formed aliens ought also to accept consciousness in spatially distributed group entities. If she then also accepts rabbit consciousness, she ought to accept the possibility of consciousness even in rather dumb group entities. Finally, the United States would seem to be a rather dumb group entity of the relevant sort. If we set aside our morphological prejudices against spatially distributed group entities, we can see that the United States has all the types of properties that materialists tend to regard as characteristic of conscious beings…

– “If Materialism Is True, the United States Is Probably Conscious,” by Eric Schwitzgebel (@eschwitz.bsky.social)

[Image above: source]

* Peter Watts, Blindsight

###

As we think about thinking, we might send thoughtful birthday greetings to Claude Lévi-Strauss; he was born on this date in 1908.  An anthropologist and ethnologist whose work was key in the development of the theory of Structuralism and Structural Anthropology, he is considered, with James George Frazer and Franz Boas, a “father of modern anthropology.”  Beyond anthropology and sociology, his ideas– Structuralism has been defined as “the search for the underlying patterns of thought in all forms of human activity”– have influenced many fields in the humanities, including philosophy… and possibly soon, the article above suggests, computer science.

Portrait of Claude Lévi-Strauss.

source

“There is no such thing as a dysfunctional organization, because every organization is perfectly aligned to achieve the results it currently gets”*…

Three humanoid robots interacting with a computer, set against a blue background, showcasing a futuristic theme.

… and if we’re not careful, we might not be too pleased with what we get. Sam Altman says the one-person billion-dollar company is coming. Evan Ratliff tells the tale of his attempt to build a completely AI-automated venture…

… If you’ve spent any time consuming any AI news this year—and even if you’ve tried desperately not to—you may have heard that in the industry, 2025 is the “year of the agent.” This year, in other words, is the year when AI systems are evolving from passive chatbots, waiting to field our questions, to active players, out there working on our behalf.

There’s not a well agreed upon definition of AI agents, but generally you can think of them as versions of large language model chatbots that are given autonomy in the world. They are able to take in information, navigate digital space, and take action. There are elementary agents, like customer service assistants that can independently field, triage, and handle inbound calls, or sales bots that can cycle through email lists and spam the good leads. There are programming agents, the foot soldiers of vibe coding. OpenAI and other companies have launched “agentic browsers” that can buy plane tickets and proactively order groceries for you.
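
[A note from your correspondent, to make that definition concrete: under the hood, most “agents” are a loop in which a language model chooses a tool, a harness executes it, and the result is fed back in until the model declares the task done. Here is a deliberately toy sketch of that loop– the stubbed “model” and the single tool are invented for illustration, standing in for real LLM and API calls…]

```python
# Toy agent loop: observe -> decide -> act -> feed the result back.
# Everything here is a stub: fake_llm() stands in for a hosted model,
# and search() stands in for a real tool (browser, API, etc.).

def fake_llm(history: list[str]) -> str:
    """Stand-in for an LLM call: emits a tool call, then a final answer."""
    if not any(line.startswith("RESULT") for line in history):
        return "CALL search: cheapest flight SFO to JFK"
    return "DONE: book the 9am departure, it's the cheapest option"

def search(query: str) -> str:
    """Stub tool; a real agent would hit a live API or a browser here."""
    return f"RESULT: 3 flights found for '{query}', cheapest leaves at 9am"

TOOLS = {"search": search}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [f"TASK: {task}"]
    for _ in range(max_steps):
        decision = fake_llm(history)
        if decision.startswith("DONE:"):
            return decision.removeprefix("DONE:").strip()
        # Parse "CALL <tool>: <args>", run the named tool, log the result.
        head, _, args = decision.partition(":")
        tool_name = head.removeprefix("CALL").strip()
        history.append(TOOLS[tool_name](args.strip()))
    return "gave up after hitting the step limit"

print(run_agent("find me a cheap flight"))
```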

In the year of our agent, 2025, the AI hype flywheel has been spinning up ever more grandiose notions of what agents can be and will do. Not just as AI assistants, but as full-fledged AI employees that will work alongside us, or instead of us. “What jobs are going to be made redundant in a world where I am sat here as a CEO with a thousand AI agents?” asked host Steven Bartlett on a recent episode of The Diary of a CEO podcast. (The answer, according to his esteemed panel: nearly all of them). Dario Amodei of Anthropic famously warned in May that AI (and implicitly, AI agents) could wipe out half of all entry-level white-collar jobs in the next one to five years. Heeding that siren call, corporate giants are embracing the AI agent future right now—like Ford’s partnership with an AI sales and service agent named “Jerry,” or Goldman Sachs “hiring” its AI software engineer, “Devin.” OpenAI’s Sam Altman, meanwhile, talks regularly about a possible billion-dollar company with just one human being involved. San Francisco is awash in startup founders with virtual employees, as nearly half of the companies in the spring class of Y Combinator are building their product around AI agents.

Hearing all this, I started to wonder: Was the AI employee age upon us already? And even, could I be the proprietor of Altman’s one-man unicorn? As it happens, I had some experience with agents, having created a bunch of AI agent voice clones of myself for the first season of my podcast, Shell Game.

I also have an entrepreneurial history, having once been the cofounder and CEO of the media and tech startup Atavist, backed by the likes of Andreessen Horowitz, Peter Thiel’s Founders Fund, and Eric Schmidt’s Innovation Endeavors. The eponymous magazine we created is still thriving today. I wasn’t born to be a startup manager, however, and the tech side kind of fizzled out. But I’m told failure is the greatest teacher. So I figured, why not try again? Except this time, I’d take the AI boosters at their word, forgo pesky human hires, and embrace the all-AI employee future…

Eminently worth reading in full: “All of My Employees Are AI Agents, and So Are My Executives,” from @evrat.bsky.social in @wired.com.

Via Caitlin Dewey (@caitlindewey.bsky.social), whose tease/summary puts it plainly:

Ratliff, the undefeated king of tech journalism stunts, is back with another banger: For this piece and the accompanying podcast series, he created a start-up staffed entirely by so-called AI agents. The agents can communicate by email, Slack, text and phone, both with Ratliff and among themselves, and they have free range to complete tasks like writing code and searching the open internet. Despite their capabilities, however, the whole project’s a constant farce. A funny, stupid, telling farce that says quite a lot about the future of work that many technologists envision now…

* Ronald Heifetz

###

As we analyze autonomy, we might spare a jaundiced thought for Trofim Denisovich Lysenko; he died on this date in 1976.  A Soviet biologist and agronomist, he believed the Mendelian theory of heredity to be wrong, and developed his own, allowing for “soft inheritance”– the heritability of learned behavior. (He believed that in one generation of a hybridized crop, the desired individual could be selected and mated again and continue to produce the same desired product, without worrying about separation/segregation in future breeds– he assumed that after a lifetime of developing (acquiring) the best set of traits to survive, those must be passed down to the next generation.)

In many ways Lysenko’s theories recall Lamarck’s “organic evolution” and its concept of “soft inheritance” (the passing on of acquired traits), though Lysenko denied any connection. He followed I. V. Michurin’s fanciful idea that plants could be forced to adapt to any environmental conditions, for example converting summer wheat to winter wheat by storing the seeds in ice.  With Stalin’s support for two decades, he actively obstructed the course of Soviet biology, caused the imprisonment and death of many of the country’s eminent biologists who disagreed with him, and imposed conditions that contributed to the disastrous decline of Soviet agriculture and the famines that resulted.

Interestingly, some current research suggests that heritable learning– or a semblance of it– may in fact be happening by virtue of epigenetics… though nothing vaguely resembling Lysenko’s theory.

A black and white portrait of Trofim Lysenko, a Soviet biologist and agronomist, staring directly at the camera with a serious expression.


source

Written by (Roughly) Daily

November 20, 2025 at 1:00 am