Posts Tagged ‘artificial intelligence’
“Always look on the bright side of life”*…
The estimable economic historian Louis Hyman has been engaged in an on-going “friendly debate” with his equally-estimable friend and Johns Hopkins colleague Rama Chellappa on “what AI means”…
… As I see this debate, this question of our age, there are two main questions that history can shed some light on.
- Is AI a complement or a substitute for labor? That is, will it increase demand for and the productivity of workers, or decrease it?
- Will AI be controlled by the few or be accessible to the many?
A Complement or a Substitute?
Consider some of the most important technologies of the past 200 years.
When I am asked about what automation might look like, I inevitably discuss agriculture. Roughly all of our ancestors were farmers and approximately none of us today are. Yet we still eat bread made from wheat. That shift is possible because of automation.
The mechanical thresher, used to process wheat, was a substitute for the most backbreaking work of the harvest. But it also enabled more land to be cultivated, and that land was cultivated more efficiently, allowing for greater harvests. Mechanization of the farm, like the thresher, turned the American Midwest into the breadbasket of the world.
Those displaced farmers found work on railroads, moving all that wheat. And those jobs, according to people at the time, were a kind of liberation from the raw animal labor of threshing. On net, it created demand for more workers at better wages in work more fit for people than beasts. Those who remained farmers found other, higher-value work to be done. On a farm, there is always more work to do.
The failure, then and now, is to think farmers were only threshers. That was one part of their jobs. Today, our work, for most people, is also a bundle of tasks. Workers then and now could and can focus on parts of their job that are of higher value. And in a new economy, new tasks in new industries will be created. Many of the jobs that we do today (web designer, UI expert) were simply unimaginable in 1850. That is a good thing.
Consider now the assembly line. I’m sure you all know about the staggering increases in productivity that come from the division of labor. If you took my class in industrial history, you would learn deeply about the story of the automobile. With the assembly line, and no other change in technology, car assembly went from 12 and a half hours to about 30 minutes (once they worked out the kinks). Did this reduce the demand for workers? No. It reduced the price of cars. And that increased the demand for workers, who eventually could demand even higher wages through unionization.
It is important here to realize that better tools don’t make us get paid worse. They generally make us get paid more. Why? Because the tool, without the person, is useless. Even for today’s most cutting-edge AIs, that is true. It can code, but it can only code what I imagine it to code. It can draw, but only what I imagine it to draw. That is true for AIs as it was true for the thresher.
So, I would offer that AI will create more growth, more abundance. In the long run, all growth comes from higher productivity.
I would add one more piece to this story. Economic inequality has worsened since roughly 1970. It has worsened, therefore, not in the industrial era, but in the digital era. I have argued elsewhere that this happened because for decades we did not use computers as tools of automation but as glorified typewriters (and then as televisions). Our productivity did not increase, certainly not enough to justify the expense of computers. Economists have debated for decades now over the lack of productivity growth that came with the “digital age” of computing, but the explanation is simple: we didn’t use them as computers. Now we can.
For the first time now, normal people with their normal problems can use their computers to solve and automate their problems. AI can write code. AI can automate their tedium. The digital age did not bring any gains because it had not yet arrived. We were living through the last gasp of the industrial economy.
It is now here.
This technology will unleash unimaginable productivity gains. It will level the playing field between coders and the rest of us. Coders will lose their jobs, to be sure, but for the rest of us, the bundle of workplace tasks will become much better.
And truthfully, the demand for real computer scientists will probably increase in the era of vibe-coding. Computer science itself is a bundle of skills, of which coding is just one. The more important skill – software and data architecture – will only increase in demand as the usefulness of software expands…
[Hyman goes on to explore the dangers of monopolization (which, for reasons he explains, he believes are overstated); the future of software (which, he believes, will skew to open-source); and of hardware (which, he believes, will not be a bottleneck). He concludes…]
… Put together we come to a very different picture of what the digital age will be. The industrial age required massive investments to build the factories to make the products that were in demand. In the digital age, in contrast, the factories to build digital products will be made by the AI on your laptop. That is not inequality. That is equality.
The physical products of the Fordist industrial age were made for the mass market. In contrast, the digital products of the post-Fordist digital age will be long-tail products. I don’t need to make mass market products; I can make them for a small niche, or just for myself.
Rather than fostering inequality, AI, then, is a great equalizer. To make products for a global market you don’t need a billion-dollar factory. You just need a laptop. That is astonishing.
That said, it will not be all sunshine and rainbows. Will AI solve the inequities of capitalism or its reliance on externalities as a source of primitive accumulation? Probably not.
But at the same time, AI is not a normal technology in that it has the potential to radically undermine many of the tendencies to concentrate capital that we have seen in the industrial age. We have been automated out of work before, that is nothing new, but it has always concentrated capital in the hands of the few. For the first time, there is potentially an alternative path forward.
AI will bring the digital age out of the hands of the coders. AI will not widen the gap—it will bridge it. Its ubiquity will mean that AI will be a tool that nearly all of us will be able to use in our daily work, which will make ordinary people more productive and prosperous…
Eminently worth reading in full: “Hooray! Post-Fordism Is Finally Here!”
Even as Hyman’s message is reassuring in the context of the flood of jeremiads in which we’re awash, it’s worth remembering that eerily-similar points were made a couple of decades ago about the threat/promise of digital publishing/commerce. Given the then-current conditions and then-plausible futures, those predictions might have come true… but in the event, they didn’t pan out as projected. That said, things are changing, so maybe this time things are different?
(Image above: source)
* song (by Eric Idle) from Monty Python’s Life Of Brian
###
As we resolve to remain rosy, we might send productive birthday greetings to Andrew Meikle; he was born on this date in 1719. A Scottish millwright, he invented the threshing machine (for removing the husks from grain, as mentioned above). One of the key developments of the British Agricultural Revolution in the late 18th century, it was also one of the main causes of the Swing Riots— an 1830 uprising by English and Scottish agricultural workers protesting agricultural mechanization and harsh working conditions.

“The present is pregnant with the future”*…
The estimable Tim O’Reilly uses scenario planning to create an insightful look at AI, our futures, and the choices that will define them…
We all read it in the daily news. The New York Times reports that economists who once dismissed the AI job threat are now taking it seriously. In February, Jack Dorsey cut 40% of Block’s workforce, telling shareholders that “intelligence tools have changed what it means to build and run a company.” Block’s stock rose 20%. Salesforce has shed thousands of customer support workers, saying AI was already doing half the work. And a Stanford study found that software developers aged 22 to 25 saw employment drop nearly 20% from its peak, while developers over 26 were doing fine.
But how are we to square this news with a Vanguard study that found that the 100 occupations most exposed to AI were actually outperforming the rest of the labor market in both job growth and wages, and a rigorous NBER study of 25,000 Danish workers that found zero measurable effect of AI on earnings or hours?
Other studies could contribute to either side of the argument. For example, PwC’s 2025 Global AI Jobs Barometer, analyzing close to a billion job ads across six continents, found that workers with AI skills earn a 56% wage premium, and that productivity growth has nearly quadrupled in the industries most exposed to AI.
This is exactly the kind of contradictory, uncertain landscape that scenario planning was designed for. Scenario planning doesn’t ask you to predict what the future will be. It asks you to imagine divergent possible futures and to develop a strategy that improves your odds of success across all of them. I’ve used it many times at O’Reilly and have written about it before with COVID and climate change as illustrative examples. The argument between those who say AI will cause mass unemployment and those who insist technology always creates more jobs than it destroys is a debate that will only be resolved by time. Both sides have evidence. Both are probably right at some level. And both framings are not terribly helpful for anyone trying to figure out what to do next…
[O’Reilly explains the scenario approach, then applies it to our future with AI (see the image above), astutely assessing the conflicting signals that we’ve been experiencing; he explores the “robust strategy” for our uncertain future (strategic choices that make sense regardless of which future unfolds); then he concludes…]
… I’ll return to the theme that I sounded in my book WTF? What’s the Future and Why It’s Up To Us.
Every time a company uses AI to do what it was already doing with fewer people, it is making a choice for the lower half of the scenario grid. Every time a company uses AI to do something that wasn’t previously possible, to serve a customer who wasn’t previously served, to solve a problem that wasn’t previously solvable, it is making a choice for the upper half. These choices compound, for good or ill. An economy that uses AI primarily for efficiency will slowly hollow itself out.
Looking at the news from the future, both sets of signals are present. The question is which will dominate. AI will give us both the Augmentation Economy and the Displacement Crisis, in different measures in different places, depending on the choices we make.
Scenario planning teaches us that we don’t have to predict which future we’ll get. We do have to prepare for a very uncertain future. But the robust strategy, the one that works across every quadrant, is to focus on doing more, not just doing the same with less, and to find ways that human taste still matters in what is created. As long as there is unmet demand, as long as there are problems we haven’t solved and people we haven’t served, AI will augment human work rather than replacing it. It’s only when we stop looking for new things to do that the machines come for the jobs…
Eminently worth reading in full. Indeed, speaking as a long-time scenario planner, your correspondent can only wish that everyone who wields “scenarios” applies the approach as appropriately, adroitly, and acutely as Tim has: “Scenario Planning for AI and the ‘Jobless Future’,” from @timoreilly.bsky.social.
* Voltaire
###
As we take the long view, we might send formative birthday greetings to Mark Pinsker; he was born on this date in 1923. A mathematician, he made important contributions to the fields of information theory, probability theory, coding theory, ergodic theory, mathematical statistics, and communication networks. This work, which helped lay the foundation for AI-as-we-know-it, earned him the IEEE Claude E. Shannon Award in 1978, and the IEEE Richard W. Hamming Medal in 1996, among other honors.
“Don’t eat your seed corn”*…
AI doesn’t really “think.” Rather, it remembers how we thought together. Are we about to stop giving it anything worth remembering? Bright Simons with a provocative analysis…
We are on the verge of the age of human redundancy. In 2023, IBM’s chief executive told Bloomberg that soon some 7,800 roles might be replaced by AI. The following year, Duolingo cut a tenth of its contractor workforce; it needed to free up desks for AI. Atlassian followed. Klarna announced that its AI assistant was performing work equivalent to 700 customer-service employees and that reducing the size of its workforce to under 2000 is now its North Star. And Jack Dorsey has been forthright about wanting to hold Block’s headcount flat while AI shoulders the growth.
The trajectory has a compelling internal logic. Routine cognitive work gets automated; junior roles thin out; productivity gains compound year on year. For boards reviewing cost structures, it is the cleanest investment proposition since the internal combustion engine retired the horse, topped up with a kind of moral momentum. Hesitate, the thinking goes, and fall behind.
But the research results of a team in the UK should give us pause. In the spring of 2024, they asked around 300 writers to produce short fiction. Some were aided by GPT-4 and others worked alone. Which stories, the researchers wanted to know, would be more creative? On average, the writers with AI help produced stories that independent judges rated as more creative than those written without it.
So far, so on message: a familiar story about the inevitable takeover by intelligent machines. But when the researchers examined the full body of stories rather than individual ones, the picture became murky. The AI-assisted stories were more similar to each other. Each writer had been individually elevated; collectively, they had converged. Anil R Doshi and Oliver Hauser, who published the study in Science Advances, reached for a phrase from ecology to explain this: a tragedy of the commons.
Hold that result in mind: individual gain, collective loss. It describes something far more consequential than a writing experiment—it describes the hidden logic of our entire relationship with artificial intelligence. And it suggests that the most successful organizations of the coming decade will be the ones that do something profoundly counterintuitive: instead of using AI to eliminate human interaction by firing droves of workers, they will use it to create more human interaction. IBM has reversed course on its earlier human redundancy fantasies. I bet more will in due course…
[Simons sketches the history of humans’ intertwined development of both social/organizational and utile technologies, concluding…]
… What the chain reveals is a dependency the AI industry has largely declined to examine. The underlying intelligence of a large language model isn’t a function of its architecture, its parameter count, or the volume of compute thrown at its training. It is not even about the training data. It is a function of the social complexity of the civilization whose language it digested.
Each epoch advanced the cognitive frontier through something far richer and more complex than the isolated genius of an individual guru or machine. It did so through new forms of collective problem-solving. Think new institutions (the Greek agora, the Roman lex, the medieval university, the scientific society, the modern corporation, and the social internet) that demanded and rewarded ever more sophisticated uses of language.
The cognitive anthropologist Edwin Hutchins studied how Navy navigation teams actually think. In his 1995 book Cognition in the Wild, he wrote something that reads today like an accidental prophecy. The physical symbol system, he observed, is “a model of the operation of the sociocultural system from which the human actor has been removed.”
That is, with eerie precision, a description of what a large language model (LLM) really is, stripped of all the unapproachable jargon and mathematical wizardry. An LLM like ChatGPT is a model of human social reasoning with the human wrangled out. And the question nobody in Silicon Valley is asking with sufficient urgency is: What happens to the model when the social reasoning that produced its training data begins to thin?…
[Simons explores evidence that this may already be materially underway, then explores what that “atrophy” might mean …]
… If AI capability depends on the social complexity of human language production—and if AI deployment systematically reduces that complexity through cognitive offloading, homogenization of creative output, and the elimination of interaction-dense work—then the technology is gradually undermining the conditions for its own advancement. Its successes, rather than failures, create a spiral: a slow attenuation of the very substrate it feeds on, spelling doom.
This is the Social Edge Paradox, and the intellectual tradition it draws from is older and more interdisciplinary than most AI commentary acknowledges…
[Simons unpacks that heritage, and puts it into dialogue with recent thoughts from Dario Amodei, Leopold Aschenbrenner, and Sam Altman, concluding…]
… The Social Edge Framework outlined here is a direct counterpoint to Amodei, Aschenbrenner, and Altman. It is a program of action to counter the human redundancy fantasy. It challenges the self-fulfilling doom-spirals created by the premature reallocation of material resources to a vision of AI. I speak of the philosophy that underestimates the sheer amount of human priming needed to support the Great Recode of legacy infrastructure before our current civilization can even benefit substantially from AI advances.
By “Great Recode,” I am paying homage to the simple but widely ignored fact that the overwhelming number of tools and services that advanced AI models still need to produce useful outputs for users are not themselves AI-like and most were built before the high-intensity computing era began with AI. In the unsexy but critical field of PDF parsing—one of the ways in which AI consumes large amounts of historical data to get smart—studies show that only a very small proportion of tools were created using techniques like deep learning that characterize the AI age. And in some important cases, the older tools remain indispensable. Vast investments are thus required to upgrade all or most of these tools—from PDF parsers to database schemas—to align with the pace of high-intensity computing driven by the power-thirst of AI. Yet, we are not at the point where AI can simply create its own dependencies.
Indeed, the so-called “legacy tech debt” supposedly hampering the faster adoption of AI has in many instances been revealed as a problem of mediation and translation. AI companies are learning that they need to hire people who deeply understand legacy systems to guide this Recoding effort. A whole new “digital archaeology” field is emerging where cutting-edge tools like ArgonSense are deployed to try to excavate the latent intelligence in legacy systems and code often after rushed modernization efforts have failed. In many cases, swashbuckling new-age AI adventurers have found that mainframe specialists of a bygone age remain critical, and multidisciplinary dialogues and contentions are essential to progress on the frontier. Hence the strange phenomenon of the COBOL hiring boom. New knowledge must keep feeding on old.
The Social Edge Framework says: yes, scaling matters, architecture matters, and compute matters. But none of these will continue to deliver if the social substrate—the complex, argumentative, institutionally diverse, perspectivally rich fabric of human interaction—is allowed to thin. And thinning is very possible…
… The Social Edge prescription is that organizations that hire more people to work in AI-enriched, high-interaction, and transmediary roles—where AI scaffolds learning rather than substituting it—will derive greater long-term advantage than those that treat the technology as a headcount-reduction device. In a world where raw cognitive throughput has been commodified, the value arc shifts to something considerably harder to replicate: the capacity to coordinate human intent with precision, speed, and genuine depth. That edge lies in trans-mediation and high human interactionism.
The AI industry is telling a story about the future of work that goes roughly like this: automate what can be automated, augment what remains, and trust that the productivity gains will compound into a wealthier, more efficient world.
The Social Edge Framework tells a different story. It says: the intelligence we are automating was never ours alone. It was forged in conversation, argument, institutional friction, and collaborative struggle. It lives in the spaces between people, and it shows up in AI capabilities only because those spaces were rich enough to leave linguistic traces worth learning from.
Every time a company automates an entry-level role, it saves a salary and loses a learning curve, unless it compensates. Every time a knowledge worker delegates a draft to an AI without engaging critically, the statistical thinning of the organizational record advances by an imperceptible increment. Every time an organization mistakes polished output for strategic progress, it consumes cognitive surplus without generating new knowledge.
None of these individual acts is catastrophic. However, their compound effect may be.
The organizations that will thrive in the next decade are not those with the highest AI utilization rates. They are those that understand something the epoch-chaining thought experiment makes vivid: that AI’s capabilities are an inheritance from the complexity of human social life. And inheritances, if consumed without reinvestment, eventually run out. This is particularly critical as AI becomes heavily customized for our organizational culture.
Making the right strategic choices about AI is going to become a defining trait in leadership. Bloom et al.’s cross-country research has long established that management quality explains a substantial share of productivity variance between teams and organizations, and even countries.
In the AI age, small differences in leadership quality can generate large differences in outcomes—a non-linear payoff I call convex leadership. The term is borrowed from options mathematics, where a convex payoff is one whose upside accelerates faster than the downside decelerates. Convex leaders convert cognitive abundance into structural ambition and thus avoid turning their creative and discovery pipelines into stagnant pools of polished busywork. Conversely, in organizations led by what we might call concave leaders—cautious, procedurally anchored, optimizing for error-avoidance—AI would tend to produce more noise than signal. Because leadership is such a major shaper of all our lives, it is in our interest to pay serious attention to its evolution in this new age.
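[A parenthetical from your correspondent, spelling out the options-math idea Simons borrows; the worked example below is ours, not his. A payoff function $f$ is called convex if, for any two outcomes $x$ and $y$ and any weight $\lambda \in [0,1]$,

$$ f\big(\lambda x + (1-\lambda)\,y\big) \;\le\; \lambda f(x) + (1-\lambda)\, f(y). $$

The textbook illustration is a call option’s payoff, $\max(S - K, 0)$: flat below the strike price $K$, so the downside is capped, and rising one-for-one above it, so the upside is unbounded. That asymmetry, limited loss against open-ended gain, is what “convex leadership” transplants to decisions about AI.]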
The Social Edge is more than a metaphor. It is the literal boundary between what AI can do well and what it will keep struggling with due to fundamental internal contradictions. Furthermore, the framework asks us all to pay attention to how the very investment thesis behind AI also contains the seeds of its own failure. And it reminds leaders that AI’s frontier today is set by the richness of the social world that produced the data it learned from…
Eminently worth reading in full: “The Social Edge of Intelligence.”
Consider also the complementary perspectives in “What will be scarce?,” from Alex Imas (via Tim O’Reilly/ @timoreilly.bsky.social)… and in the second piece featured last Monday: “Curiosity Is No Solo Act.”
Apposite: “Some Unintended Consequences Of AI,” from Quentin Hardy.
And finally, from the estimable Nathan Gardels, a suggestion that OpenAI’s recent paper on industrial policy for the Age of AI fills a vacuum left by an unimaginative political class and should be taken seriously, at least as a conversation starter: “OpenAI Proposes A ‘Social Contract’ For The Intelligence Age.”
* Old agricultural proverb
###
As we take the long view, we might recall that today is the anniversary of a technological advance that both fed the social edge and encouraged the build-out of the technostructure from which today’s AI hatched: on this date in 1993, Version 1.0 of the web browser Mosaic was released by the National Center for Supercomputing Applications. It was the first software to provide a graphical user interface for the emerging World Wide Web, including the ability to display inline graphics.
The lead Mosaic developer was Marc Andreessen, one of the future founders of Netscape, and now a principal at the venture capital firm Andreessen Horowitz (AKA “a16z”)… where he has become a major investor in, promoter of, and political champion of the current crop of AI firms.
“The original idea of the web was that it should be a collaborative space where you can communicate through sharing information”*…
From yesterday’s post on the possible (and promising, but also potentially painful) future of computing to a pressing predicament we face today. The estimable Anil Dash on the threats to the open web…
You must imagine Sam Altman holding a knife to Tim Berners-Lee’s throat.
It’s not a pleasant image. Sir Tim is, rightly, revered as the genial father of the World Wide Web. But, all the signs are pointing to the fact that we might be in endgame for “open” as we’ve known it on the Internet over the last few decades.
The open web is something extraordinary: anybody can use whatever tools they have, to create content following publicly documented specifications, published using completely free and open platforms, and then share that work with anyone, anywhere in the world, without asking for permission from anyone. Think about how radical that is.
Now, from content to code, communities to culture, we can see example after example of that open web under attack. Every single aspect of the radical architecture I just described is threatened, by those who have profited most from that exact system.
Today, the good people who act as thoughtful stewards of the web infrastructure are still showing the same generosity of spirit that has created opportunity for billions of people and connected society in ways too vast to count while —not incidentally— also creating trillions of dollars of value and countless jobs around the world. But the increasingly-extremist tycoons of Big Tech have decided that that’s not good enough.
Now, the hectobillionaires have begun their final assault on the last, best parts of what’s still open, and likely won’t rest until they’ve either brought all of the independent and noncommercial parts of the Internet under their control, or destroyed them. Whether or not they succeed is going to be decided by decisions that we all make as a community in the coming months. Even though there have always been threats to openness on the web, the stakes have never been higher than they are this time.
Right now, too many of the players in the open ecosystem are still carrying on with business as usual, even though those tactics have been failing to stop big tech for years. I don’t say this lightly: it looks to me like 2026 is the year that decides whether the open web as we know it will survive at all, and we have to fight like the threat is existential. Because it is…
[Dash details the threats– largely, but not entirely, driven by AI and its purveyors. He concludes…]
… The threat to the open web is far more profound than just some platforms that are under siege. The most egregious harm is the way that the generosity and grace of the people who keep the web open is being abused and exploited. Those people who maintain open source software? They’re hardly getting rich — that’s thankless, costly work, which they often choose instead of cashing in at some startup. Similarly, volunteering for Wikipedia is hardly profitable. Defining super-technical open standards takes time and patience, sometimes over a period of years, and there’s no fortune or fame in it.
Creators who fight hard to stay independent are often choosing to make less money, to go without winning awards or the other trappings of big media, just in order to maintain control and authority over their content, and because they think it’s the right way to connect with an audience. Publishers who’ve survived through year after year of attacks from tech platforms get rewarded by… getting to do it again the next year. Tim Berners-Lee is no billionaire, but none of those guys with the hundreds of billions of dollars would have all of their riches without him. And the thanks he gets from them is that they’re trying to kill the beautiful gift that he gave to the world, and replace it with a tedious, extortive slop mall.
So, we’re in endgame now. They see their chance to run the playbook again, and do to Wikipedians what Uber did to cab drivers, to get users addicted to closed apps like they are to social media, to force podcasters to chase an algorithm like kids on TikTok. If everyone across the open internet can gather together, and see that we’re all in one fight together, and push back with the same ferocity with which we’re being attacked, then we do have a shot at stopping them.
At one time, it was considered impossibly unlikely that anybody would ever create open technologies that would ever succeed in being useful for people, let alone that they would become a daily part of enabling billions of people to connect and communicate and make their lives better. So I don’t think it’s any more unlikely that the same communities can summon that kind of spirit again, and beat back the wealthiest people in the world, to ensure that the next generation gets to have these same amazing resources to rely on for decades to come.
Alright, if it’s not hopeless, what are the concrete things we can do? The first thing is to directly support organizations in the fight. Either those that are at risk, or those that are protecting those at risk. You can give directly to support the Internet Archive, or volunteer to help them out. Wikipedia welcomes your donation or your community participation. The Electronic Frontier Foundation is fighting for better policy and to defend your rights on virtually all of these issues, and could use your support; it also provides a list of ways to volunteer or take action. The Mozilla Foundation can also use your donations and is driving change. (And full disclosure — I’m involved in pretty much all of these organizations in some capacity, ranging from volunteer to advisor to board member.) That’s because I’m trying to make sure my deeds match my words! These are the people whom I’ve seen, with my own eyes, stay the hand of those who would hold the knife to the necks of the open web’s defenders. [Further full disclosure: so is your correspondent, and so have I.]
Beyond just what these organizations do, though, we can remember how much the open web matters. I know from my time on the board of Stack Overflow that we got to see the rise of an incredibly generous community built around sharing information openly, under open licenses. There are very few platforms in history that helped more people have more economic mobility than the number of people who got good-paying jobs as coders as a result of the information on that site. And then we got to see the toll that extractive LLMs had when they took advantage of that community without any consideration for the impact it would have when they trained models on the generosity of that site’s members without reciprocating in kind.
The good of the web only exists because of the openness of the web. They can’t just keep on taking and taking without expecting people to finally draw a line and say “enough”. And interestingly, opportunities might exist where the tycoons least expect it. I saw Mike Masnick’s recent piece where he argued that one of the things that might enable a resurgence of the open web might be… AI. It would seem counterintuitive to anyone who’s read everything I’ve shared here to imagine that anything good could come of these same technologies that have caused so much harm.
But ultimately what matters is power. It is precisely because technologies like LLMs have powers that the authoritarians have rushed to try to take them over and wield them as effectively as they can. I don’t think that platforms owned and operated by those bad actors can be the tools that disrupt their agenda. I do think it might be possible that the creative communities that built the web in the first place could use their same innovative spirit to build what could be, for lack of a better term, called “good AI“. It’s going to take better policy, which may be impossible in the short term at the federal level in the U.S., but can certainly happen at more local levels and in the rest of the world. Though I’m skeptical about putting too much of the burden on individual users, we can certainly change culture and educate people so that more people feel empowered and motivated to choose alternatives to the big tech and big AI platforms that got us into this situation. And we can encourage harm reduction approaches for the people and institutions that are already locked into using these tools, because as we’ve seen, even small individual actions can get institutions to change course.
Ultimately I think, if given the choice, people will pick home-cooked, locally-grown, heart-felt digital meals over factory-farmed fast food technology every time…
Unless we act, it’s “Endgame for the Open Web,” from @anildash.com. Eminently worth reading in full.
* Tim Berners-Lee… who should know.
###
As we protect what’s precious, we might send carefully-calculated birthday greetings to a man whose work helped lay the foundation for both the promise and the peril unpacked in the article linked above: J. Presper Eckert; he was born on this day in 1919. An electrical engineer, he co-designed (with John Mauchly) the first general purpose computer, the ENIAC (see here and here) for the U.S. Army’s Ballistic Research Laboratory. He and Mauchly went on to found the Eckert–Mauchly Computer Corporation, at which they designed and built the first commercial computer in the U.S., the UNIVAC.








