Posts Tagged ‘mouse’
“The best way to predict the future is to invent it”*…
Dario Amodei, the CEO of AI purveyor Anthropic, has recently published a long (nearly 20,000-word) essay on the risks he fears from artificial intelligence: Will AI become autonomous (and if so, to what ends)? Will AI be used for destructive purposes (e.g., war or terrorism)? Will AI allow one or a small number of “actors” (corporations or states) to seize power? Will AI cause economic disruption (mass unemployment, radically concentrated wealth, disruption in capital flows)? Will AI’s indirect effects (on our societies and individual lives) be destabilizing? (Perhaps tellingly, he doesn’t explore the prospect of an economic crash on the back of an AI bubble, should one burst– but that might be considered an “indirect effect,” as AI development would likely continue, but in fewer hands [consolidation] and on the heels of destabilizing financial turbulence.)
The essay is worth reading. At the same time, as Matt Levine suggests, we might wonder why pieces like this come not from AI nay-sayers, but from those rushing to build it…
… in fact there seems to be a surprisingly strong positive correlation between noisily worrying about AI and being good at building AI. Probably the three most famous AI worriers in the world are Sam Altman, Dario Amodei, and Elon Musk, who are also the chief executive officers of three of the biggest AI labs; they take time out from their busy schedules of warning about the risks of AI to raise money to build AI faster. And they seem to hire a lot of their best researchers from, you know, worrying-about-AI forums on the internet. You could have different models here too. “Worrying about AI demonstrates the curiosity and epistemic humility and care that make a good AI researcher,” maybe. Or “performatively worrying about AI is actually a perverse form of optimism about the power and imminence of AI, and we want those sorts of optimists.” I don’t know. It’s just a strange little empirical fact about modern workplace culture that I find delightful, though I suppose I’ll regret saying this when the robots enslave us.
Anyway if you run an AI lab and are trying to recruit the best researchers, you might promise them obvious perks like “the smartest colleagues” and “the most access to chips” and “$50 million,” but if you are creative you might promise the less obvious perks like “the most opportunities to raise red flags.” They love that…
– source
In any case, precaution and prudence in the pursuit of AI advances seem wise. But perhaps even more, Tim O’Reilly and Mike Loukides suggest, we’d profit from some disciplined foresight:
The market is betting that AI is an unprecedented technology breakthrough, valuing Sam Altman and Jensen Huang like demigods already astride the world. The slow progress of enterprise AI adoption from pilot to production, however, still suggests at least the possibility of a less earthshaking future. Which is right?
At O’Reilly, we don’t believe in predicting the future. But we do believe you can see signs of the future in the present. Every day, news items land, and if you read them with a kind of soft focus, they slowly add up. Trends are vectors with both a magnitude and a direction, and by watching a series of data points light up those vectors, you can see possible futures taking shape…
For AI in 2026 and beyond, we see two fundamentally different scenarios that have been competing for attention. Nearly every debate about AI, whether about jobs, about investment, about regulation, or about the shape of the economy to come, is really an argument about which of these scenarios is correct…
[Tim and Mike explore an “AGI is an economic singularity” scenario (see also here, here, and Amodei’s essay, linked above), then an “AI is a normal technology” future (see also here); they enumerate signs and indicators to track; then consider 10 “what if” questions in order to explore the implications of the scenarios, homing in on “robust” implications for each– answers that are smart whichever way the future breaks. They conclude…]
The future isn’t something that happens to us; it’s something we create. The most robust strategy of all is to stop asking “What will happen?” and start asking “What future do we want to build?”
As Alan Kay once said, “The best way to predict the future is to invent it.” Don’t wait for the AI future to happen to you. Do what you can to shape it. Build the future you want to live in…
Read in full– the essay is filled with deep insight. Taking the long view: “What If? AI in 2026 and Beyond,” from @timoreilly.bsky.social and @mikeloukides.hachyderm.io.ap.brid.gy.
[Image above: source]
* Alan Kay
###
As we pave our own paths, we might send world-changing birthday greetings to a man who personified Alan’s injunction, Doug Engelbart; he was born on this date in 1925. An engineer and inventor who was a computing and internet pioneer, Doug is best remembered for his seminal work on human-computer interface issues, and for “the Mother of All Demos” in 1968, at which he demonstrated for the first time the computer mouse, hypertext, networked computers, and the earliest versions of graphical user interfaces… that’s to say, computing as we know it, and all that computing enables.
Written by (Roughly) Daily
January 30, 2026 at 1:00 am
Posted in Uncategorized
Tagged with AI, AI risk, artificial intelligence, computer mouse, culture, Dario Amodei, Doug Engelbart, graphical user interfaces, history, hypertext, Mike Loukides, mouse, networked computers, scenario planning, scenarios, Singularity, Technology, Tim O'Reilly
“Why has our age surrendered so easily to the controllers, the manipulators, the conditioners of an authoritarian technics?”*…

Half a century ago, Lewis Mumford developed a concept that explains why we trade autonomy for convenience…
… Surveying the state of the high-tech life, it is tempting to ponder how it got so bad, while simultaneously forgetting what it was that initially convinced one to hastily click “I agree” on the terms of service. Before certain social media platforms became foul-smelling swamps of conspiratorial misinformation, many of us joined them for what seemed like good reasons; before sighing at the speed with which their batteries die, smartphone owners were once awed by these devices; before grumbling that there was nothing worth watching, viewers were astounded by how much streaming content was available at one’s fingertips. Overwhelmed by the way today’s tech seems to be burying us in the bad, it’s easy to forget the extent to which tech won us over by offering us a share in the good — or to be more precise, in “the goods.”
Nearly 50 years ago, long before smartphones and social media, the social critic Lewis Mumford put a name to the way that complex technological systems offer a share in their benefits in exchange for compliance. He called it a “bribe.” With this label, Mumford sought to acknowledge the genuine plentitude that technological systems make available to many people, while emphasizing that this is not an offer of a gift but of a deal. Surrender to the power of complex technological systems — allow them to oversee, track, quantify, guide, manipulate, grade, nudge, and surveil you — and the system will offer you back an appealing share in its spoils. What is good for the growth of the technological system is presented as also being good for the individual, and as proof of this, here is something new and shiny. Sure, that shiny new thing is keeping tabs on you (and feeding all of that information back to the larger technological system), but it also lets you do things you genuinely could not do before. For a bribe to be accepted it needs to promise something truly enticing, and Mumford, in his essay “Authoritarian and Democratic Technics,” acknowledged that “the bargain we are being asked to ratify takes the form of a magnificent bribe.” The danger, however, was that “once one opts for the system no further choice remains.”
For Mumford, the bribe was not primarily about getting people into the habit of buying new gadgets and machines. Rather it was about incorporating people into a world that complex technological systems were remaking in their own image. Anticipating resistance, the bribe meets people not with the boot heel, but with the gift subscription.
The bribe is a discomforting concept. It asks us to consider the ways the things we purchase wind up buying us off, it asks us to see how taking that first bribe makes it easier to take the next one, and, even as it pushes us to reflect on our own complicity, it reminds us of the ways technological systems eliminate their alternatives. Writing about the bribe decades ago, Mumford was trying to sound the alarm, as he put it: “This is not a prediction of what will happen, but a warning against what may happen.” As with all of his glum predictions, it was one that Mumford hoped to be proven wrong about. Yet as one scrolls between reviews of the latest smartphone, revelations about the latest misdeeds of some massive tech company, and commentary about the way we have become so reliant on these systems that we cannot seriously speak about simply turning them off — it seems clear that what Mumford warned “may happen” has indeed happened…
Eminently worth reading in full: “The Magnificent Bribe,” by Zachary Loeb in @_reallifemag.
As to (some of) the modern implications of that bargain, see also Shoshana Zuboff’s “You Are the Object of a Secret Extraction Operation.”
As we move into the third decade of the 21st century, surveillance capitalism is the dominant economic institution of our time. In the absence of countervailing law, this system successfully mediates nearly every aspect of human engagement with digital information. The promise of the surveillance dividend now draws surveillance economics into the “normal” economy, from insurance, retail, banking and finance to agriculture, automobiles, education, health care and more. Today all apps and software, no matter how benign they appear, are designed to maximize data collection.
Historically, great concentrations of corporate power were associated with economic harms. But when human data are the raw material and predictions of human behavior are the product, then the harms are social rather than economic. The difficulty is that these novel harms are typically understood as separate, even unrelated, problems, which makes them impossible to solve. Instead, each new stage of harm creates the conditions for the next stage…
And resonantly: “AI-tocracy,” a working paper from NBER that links the development of artificial intelligence with the interests of autocracies. From the abstract:
Can frontier innovation be sustained under autocracy? We argue that innovation and autocracy can be mutually reinforcing when: (i) the new technology bolsters the autocrat’s power; and (ii) the autocrat’s demand for the technology stimulates further innovation in applications beyond those benefiting it directly. We test for such a mutually reinforcing relationship in the context of facial recognition AI in China. To do so, we gather comprehensive data on AI firms and government procurement contracts, as well as on social unrest across China during the last decade. We first show that autocrats benefit from AI: local unrest leads to greater government procurement of facial recognition AI, and increased AI procurement suppresses subsequent unrest. We then show that AI innovation benefits from autocrats’ suppression of unrest: the contracted AI firms innovate more both for the government and commercial markets. Taken together, these results suggest the possibility of sustained AI innovation under the Chinese regime: AI innovation entrenches the regime, and the regime’s investment in AI for political control stimulates further frontier innovation.
(And, Anne Applebaum warns, “The Bad Guys Are Winning.”)
* “Why has our age surrendered so easily to the controllers, the manipulators, the conditioners of an authoritarian technics? The answer to this question is both paradoxical and ironic. Present day technics differs from that of the overtly brutal, half-baked authoritarian systems of the past in one highly favorable particular: it has accepted the basic principle of democracy, that every member of society should have a share in its goods. By progressively fulfilling this part of the democratic promise, our system has achieved a hold over the whole community that threatens to wipe out every other vestige of democracy.
The bargain we are being asked to ratify takes the form of a magnificent bribe. Under the democratic-authoritarian social contract, each member of the community may claim every material advantage, every intellectual and emotional stimulus he may desire, in quantities hardly available hitherto even for a restricted minority: food, housing, swift transportation, instantaneous communication, medical care, entertainment, education. But on one condition: that one must not merely ask for nothing that the system does not provide, but likewise agree to take everything offered, duly processed and fabricated, homogenized and equalized, in the precise quantities that the system, rather than the person, requires. Once one opts for the system no further choice remains. In a word, if one surrenders one’s life at source, authoritarian technics will give back as much of it as can be mechanically graded, quantitatively multiplied, collectively manipulated and magnified.”
– Lewis Mumford in “Authoritarian and Democratic Technics,” via @LMSacasas
###
As we untangle user agreements, we might recall that it was on this date in 1970 that Douglas Engelbart (see here, here, and here) was granted a patent (US No. 3,541,541) on the “X-Y Position Indicator for a Display System,” the world’s first prototype computer mouse– a wooden block containing the tracking apparatus, with a single button attached.
Written by (Roughly) Daily
November 17, 2021 at 1:00 am
Posted in Uncategorized
Tagged with bargain, bribe, coercion, computers, computing, culture, Douglas Engelbart, history, history of computing, history of technology, invention, L.M. Sacasas, Lewis Mumford, Life, magnificent bribe, mouse, Patent, social media, technics, Technology, Zachary Loeb
“It is not enough for code to work”*…

It’s been said that software is “eating the world.” More and more, critical systems that were once controlled mechanically, or by people, are coming to depend on code. This was perhaps never clearer than in the summer of 2015, when on a single day, United Airlines grounded its fleet because of a problem with its departure-management system; trading was suspended on the New York Stock Exchange after an upgrade; the front page of The Wall Street Journal’s website crashed; and Seattle’s 911 system went down again, this time because a different router failed. The simultaneous failure of so many software systems smelled at first of a coordinated cyberattack. Almost more frightening was the realization, late in the day, that it was just a coincidence…
Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break… Software failures are failures of understanding, and of imagination…
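(A quick illustration of that post-war logic, with made-up numbers: if each of two engines fails independently with a probability of, say, 1 in 100,000 per flight, the chance of both failing on the same flight is that probability squared– roughly 1 in 10 billion. Redundancy multiplies small, independent hardware-failure probabilities together. A flaw in software, by contrast, lives identically in every copy of the code, so duplication buys no such protection– which is why a single misunderstood requirement can take down a whole fleet, an exchange, or a 911 system at once.)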
Invisible– but all too real and painful– problems, and the attempts to make them visible: “The Coming Software Apocalypse.”
* Robert C. Martin, Clean Code: A Handbook of Agile Software Craftsmanship
###
As we Code for America, we might recall that it was on this date in 1983 that Microsoft released Microsoft Word 1.0, its first word processor. For use with MS-DOS compatible systems, Word was the first word processing software to make extensive use of a computer mouse. (Not coincidentally, Microsoft had released a computer mouse for IBM-compatible PCs earlier in the year.) A free demo version of Word was included with the current edition of PC World– the first time a floppy disk had been included with a magazine.
Written by (Roughly) Daily
September 29, 2017 at 1:01 am
Posted in Uncategorized
Tagged with catastrophe, computing, engineering, failure, history, Microsoft, mouse, software, Technology, Word
“Your assumptions are your windows on the world. Scrub them off every once in a while, or the light won’t come in”*…

In European societies, knowledge is often pictured as a tree: a single trunk – the core – with branches splaying outwards towards distant peripheries. The imagery of this tree is so deeply embedded in European thought-patterns that every form of institution has been marshalled into a ‘centre-periphery’ pattern. In philosophy, for example, there are certain ‘core’ subjects and other more marginal, peripheral, and implicitly expendable, ones. Likewise, a persistent, and demonstrably false, picture of science has it as consisting of a ‘stem’ of pure science (namely fundamental physics) with secondary domains of special sciences at varying degrees of remove: branches growing from, and dependent upon, the foundational trunk.
Knowledge should indeed be thought of as a tree – just not this kind of tree. Rather than the European fruiter with its single trunk, knowledge should be pictured as a banyan tree, in which a multiplicity of aerial roots sustains a centreless organic system. The tree of knowledge has a plurality of roots, and structures of knowledge are multiply grounded in the earth: the body of knowledge is a single organic whole, no part of which is more or less dispensable than any other…
As Krishna observed in the Bhagavad-Gītā, there “stands an undying banyan tree.” Explore it at “The tree of knowledge is not an apple or an oak but a banyan.”
* Isaac Asimov
###
As we celebrate diversity, we might spare a thought for Douglas Carl Engelbart; he died on this date in 2013. An engineer and inventor who was a computing and internet pioneer, Doug is best remembered for his seminal work on human-computer interface issues, and for “the Mother of All Demos” in 1968, at which he demonstrated for the first time the computer mouse, hypertext, networked computers, and the earliest versions of graphical user interfaces… that’s to say, computing as we know it.
Written by (Roughly) Daily
July 2, 2017 at 1:01 am
Posted in Uncategorized
Tagged with Engelbart, history, hypertext, innovators, knowledge, mouse, philosophy, taxonomy, Technology, tree
Hello?… Hello?…

Quoth the always-amusing Tyler Hellard: “ConferenceCall.biz is a spectacular display of existential despair and the modern condition.”
And indeed it is.
###
As we remember that we can press “8” to mute at any time, we might email elegantly and creatively designed birthday greetings to Douglas Carl Engelbart; he was born on this date in 1925. An engineer and inventor who was a computing and internet pioneer, Doug (who passed away last year) is best remembered for his seminal work on human-computer interface issues, and for “the Mother of All Demos” in 1968, at which he demonstrated for the first time the computer mouse, hypertext, networked computers, and the earliest versions of graphical user interfaces… that’s to say, computing as we know it.
Written by (Roughly) Daily
January 30, 2014 at 1:01 am
Posted in Uncategorized
Tagged with computing, conference call, Engelbart, history of computing, history of technology, humor, Mother of All Demos, mouse