(Roughly) Daily

Posts Tagged ‘artificial intelligence’

“Bureaucracy defends the status quo long past the time when the quo has lost its status”*…

… which is one of the reasons that bureaucracies are hard to update. Kevin Baker describes a 1998 visit to the IRS Atlanta Service Center and ponders its lessons…

… the first thing you’d notice would be the wires. They ran everywhere, and the building obviously hadn’t been constructed with them in mind. As you walked down a corridor, passing carts full of paper returns and rows of “tingle tables,” you would tread over those wires on a raised metal gangway. Each work area had an off-ramp, where both the wires and people would disembark…

… The desks were covered with dot matrix paper, cartons of files, and Sperry terminals glowing a dull monochromatic glow. These computers were linked to a mainframe in another room. Magnetic tapes from that mainframe, and from mainframes all over the country, would be airlifted to National Airport in Washington DC. From there, they’d be put on trucks to a West Virginia town of about 14,000 people called Martinsburg. There, they’d be loaded into a machine, the first version of which was known colloquially—and not entirely affectionately—as the “Martinsburg Monster.” This computer amounted to something like a national nerve center for the IRS. On it programs called the Individual Master File and the Business Master File processed the country’s tax records. These programs also organized much of the work. If there were a problem at Martinsburg, work across the IRS’s offices spanning the continent could and frequently did shut down.

Despite decades of attempts to kill it, the IRS’s Individual Master File, an almost sixty-year-old accumulation of government Assembly Language, lives on. Part of this strange persistence can be pegged squarely on Congress’s well-documented history of starving the IRS for funding. But another part of it is that the Individual Master File has become so completely entangled in the life of the agency that modernizing it resembles delicate surgery more than a straightforward software upgrade. Job descriptions, work processes, collective bargaining agreements, administrative law, and technical infrastructure all coalesce together and interface with it, so that a seemingly technical task requires considerable sociological, historical, legal, and political knowledge.

In 2023, as it was in the 1980s, the IRS is a cyborg bureaucracy, an entangled mass of law, hardware, software, and clerical labor. It was among the first government agencies to embrace automatic data processing and large-scale digital computing. And it used these technologies to organize work, to make decisions, and to understand itself. In important ways, the lines between the digital shadow of the agency—its artificial bureaucracy—and its physical presence became difficult if not impossible to disentangle….

Baker is launching a new Substack, devoted to exploring precisely this kind of tangle– and what it might portend…

This series, called Artificial Bureaucracy, is a long-term project looking at the history of government computing in the fifty-year period from 1945 to 1995. I think this is a timely subject. In the past several years, promoters and critics of artificial intelligence alike have talked up the possibility that decision-making and even governance itself may soon be handed over to sophisticated AI systems. What draws together both the dreams of boosters and the nightmares of critics is a deterministic orientation towards the future of technology, a conception of technology as autonomous and somehow beyond the possibility of control.

These visions mostly ignore the fact that the computerization of governance is a project at least seventy years in the making, and that project has never been determined, in the first instance or the last, primarily by “technological” factors. Like everything in government, the hardware and software systems that make up its artificial bureaucracy were and are subject to negotiation, conflict, administrative inertia, and the individual agency of its users.

Looking at government computing can also tell us something about AI. The historian of computing Michael Mahoney has argued that studying the history of software is the process of learning how groups of people came to put their worlds in a machine. If this is right—and I think it is—our conceptions of “artificial intelligence” have an unwarranted individualistic bias; the proper way to understand machine intelligence isn’t by analogy to individual human knowledge and decision-making, but to methods of bureaucratic knowledge and action. If it is about anything, the story of AI is the story of bureaucracy. And if the future of governance is AI, then it makes sense to know something about its past…

Is bureaucracy the future of AI? Check out the first post in Artificial Bureaucracy, from @kevinbaker@mastodon.social.

* Laurence J. Peter

###

As we size up systems, we might recall that it was on this date in 1935 that President Franklin D. Roosevelt signed the Social Security Act. A key component of Roosevelt’s New Deal domestic program, the Act created both the Social Security program and insurance against unemployment.

Roosevelt signs Social Security Bill (source)

“These are the forgeries of jealousy”*…

Analysis of Leonardo da Vinci’s Salvator Mundi required dividing a high-resolution image of the complete painting into a set of overlapping square tiles. But only those tiles that contained sufficient visual information, such as the ones outlined here, were input to the author’s neural-network classifier.

Is it authentic? Attorney and AI practitioner Steven J. Frank, working with his wife, art historian and curator Andrea Frank (together, Art Eye-D Associates), brings machine learning to bear…

The sound must have been deafening—all those champagne corks popping at Christie’s, the British auction house, on 15 November 2017. A portrait of Jesus, known as Salvator Mundi (Latin for “savior of the world”), had just sold at Christie’s in New York for US $450.3 million, making it by far the most expensive painting ever to change hands.

But even as the gavel fell, a persistent chorus of doubters voiced skepticism. Was it really painted by Leonardo da Vinci, the towering Renaissance master, as a panel of experts had determined six years earlier? A little over 50 years before that, a Louisiana man had purchased the painting in London for a mere £45. And prior to the rediscovery of Salvator Mundi, no Leonardo painting had been uncovered since 1909.

Some of the doubting experts questioned the work’s provenance—the historical record of sales and transfers—and noted that the heavily damaged painting had undergone extensive restoration. Others saw the hand of one of Leonardo’s many protégés rather than the work of the master himself.

Is it possible to establish the authenticity of a work of art amid conflicting expert opinions and incomplete evidence? Scientific measurements can establish a painting’s age and reveal subsurface detail, but they can’t directly identify its creator. That requires subtle judgments of style and technique, which, it might seem, only art experts could provide. In fact, this task is well suited to computer analysis, particularly by neural networks—computer algorithms that excel at examining patterns. Convolutional neural networks (CNNs), designed to analyze images, have been used to good advantage in a wide range of applications, including recognizing faces and helping to pilot self-driving cars. Why not also use them to authenticate art?

That’s what I asked my wife, Andrea M. Frank, a professional curator of art images, in 2018. Although I have spent most of my career working as an intellectual-property attorney, my addiction to online education had recently culminated in a graduate certificate in artificial intelligence from Columbia University. Andrea was contemplating retirement. So together we took on this new challenge…
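The excerpt’s recipe (slice a high-resolution image of the painting into overlapping square tiles, discard the near-empty ones, and let a convolutional network score the rest) is straightforward to outline. Below is a minimal, hypothetical Python sketch: it assumes PIL and NumPy, uses a simple pixel-variance threshold in place of the Franks’ actual “sufficient visual information” test, and leaves a stub where their trained CNN would plug in. It illustrates the general approach, not their implementation.

```python
# A minimal sketch of tile-based authentication, under the assumptions above.
from PIL import Image
import numpy as np

TILE = 256      # tile edge in pixels (hypothetical choice)
STRIDE = 128    # 50% overlap between neighboring tiles
MIN_STD = 12.0  # reject near-uniform tiles (hypothetical threshold)

def informative_tiles(path):
    """Yield overlapping square tiles that carry enough visual detail to classify."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    h, w = img.shape
    for y in range(0, h - TILE + 1, STRIDE):
        for x in range(0, w - TILE + 1, STRIDE):
            tile = img[y:y + TILE, x:x + TILE]
            if tile.std() >= MIN_STD:  # crude stand-in for an information criterion
                yield (x, y), tile

def classify_tile(tile):
    """Placeholder for a trained CNN returning P(tile was painted by the artist)."""
    return 0.5  # dummy score; a real model would go here

def attribution_score(path):
    """Average the per-tile probabilities into a single whole-painting score."""
    scores = [classify_tile(t) for _, t in informative_tiles(path)]
    return float(np.mean(scores)) if scores else 0.0
```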

With millions at stake, deep learning enters the art world. The fascinating story: “This AI Can Spot an Art Forgery,” @ArtAEye in @IEEESpectrum.

* Shakespeare (Titania, A Midsummer Night’s Dream, Act II, Scene 1)

###

As we honor authenticity, we might spare a thought for a champion of authenticity in a different sense, Joris Hoefnagel; he died on this date in 1601. A Flemish painter, printmaker, miniaturist, draftsman, and merchant, he is noted for his illustrations of natural history subjects, topographical views, illuminations (he was one of the last manuscript illuminators), and mythological works.

Hoefnagel made a major contribution to the development of topographical drawing. But perhaps more impactfully, his manuscript illuminations and ornamental designs played an important role in the emergence of floral still-life painting as an independent genre in northern Europe at the end of the 16th century. The almost scientific naturalism of his botanical and animal drawings served as a model for a later generation of Netherlandish artists. Through these nature studies he also contributed to the development of natural history, and he was thus a founder of proto-scientific inquiry.

Portrait of Joris Hoefnagel, engraving by Jan Sadeler, 1592 (source)

“Without reflection, we go blindly on our way”*…

… or at least sociopathic. Indeed, Evgeny Morozov suggests, we may be well on our way. There may be versions of A.G.I. (Artificial General Intelligence) that will be a boon to society; but, he argues, the current approaches aren’t likely to yield them…

… The mounting anxiety about A.I. isn’t because of the boring but reliable technologies that autocomplete our text messages or direct robot vacuums to dodge obstacles in our living rooms. It is the rise of artificial general intelligence, or A.G.I., that worries the experts.

A.G.I. doesn’t exist yet, but some believe that the rapidly growing capabilities of OpenAI’s ChatGPT suggest its emergence is near. Sam Altman, a co-founder of OpenAI, has described it as “systems that are generally smarter than humans.” Building such systems remains a daunting — some say impossible — task. But the benefits appear truly tantalizing.

Imagine Roombas, no longer condemned to vacuuming the floors, that evolve into all-purpose robots, happy to brew morning coffee or fold laundry — without ever being programmed to do these things.

Sounds appealing. But should these A.G.I. Roombas get too powerful, their mission to create a spotless utopia might get messy for their dust-spreading human masters. At least we’ve had a good run.

Discussions of A.G.I. are rife with such apocalyptic scenarios. Yet a nascent A.G.I. lobby of academics, investors and entrepreneurs counter that, once made safe, A.G.I. would be a boon to civilization. Mr. Altman, the face of this campaign, embarked on a global tour to charm lawmakers. Earlier this year he wrote that A.G.I. might even turbocharge the economy, boost scientific knowledge and “elevate humanity by increasing abundance.”

This is why, for all the hand-wringing, so many smart people in the tech industry are toiling to build this controversial technology: not using it to save the world seems immoral. They are beholden to an ideology that views this new technology as inevitable and, in a safe version, as universally beneficial. Its proponents can think of no better alternatives for fixing humanity and expanding its intelligence.

But this ideology — call it A.G.I.-ism — is mistaken. The real risks of A.G.I. are political and won’t be fixed by taming rebellious robots. The safest of A.G.I.s would not deliver the progressive panacea promised by its lobby. And in presenting its emergence as all but inevitable, A.G.I.-ism distracts from finding better ways to augment intelligence.

Unbeknown to its proponents, A.G.I.-ism is just a bastard child of a much grander ideology, one preaching that, as Margaret Thatcher memorably put it, there is no alternative, not to the market.

Rather than breaking capitalism, as Mr. Altman has hinted it could do, A.G.I. — or at least the rush to build it — is more likely to create a powerful (and much hipper) ally for capitalism’s most destructive creed: neoliberalism.

Fascinated with privatization, competition and free trade, the architects of neoliberalism wanted to dynamize and transform a stagnant and labor-friendly economy through markets and deregulation…

… the Biden administration has distanced itself from the ideology, acknowledging that markets sometimes get it wrong. Foundations, think tanks and academics have even dared to imagine a post-neoliberal future.

Yet neoliberalism is far from dead. Worse, it has found an ally in A.G.I.-ism, which stands to reinforce and replicate its main biases: that private actors outperform public ones (the market bias), that adapting to reality beats transforming it (the adaptation bias) and that efficiency trumps social concerns (the efficiency bias).

These biases turn the alluring promise behind A.G.I. on its head: Instead of saving the world, the quest to build it will make things only worse. Here is how…

[There follows a bracing run-down…]

… Margaret Thatcher’s other famous neoliberal dictum was that “there is no such thing as society.”

The A.G.I. lobby unwittingly shares this grim view. For them, the kind of intelligence worth replicating is a function of what happens in individuals’ heads rather than in society at large.

But human intelligence is as much a product of policies and institutions as it is of genes and individual aptitudes. It’s easier to be smart on a fellowship in the Library of Congress than while working several jobs in a place without a bookstore or even decent Wi-Fi.

It doesn’t seem all that controversial to suggest that more scholarships and public libraries will do wonders for boosting human intelligence. But for the solutionist crowd in Silicon Valley, augmenting intelligence is primarily a technological problem — hence the excitement about A.G.I.

However, if A.G.I.-ism really is neoliberalism by other means, then we should be ready to see fewer — not more — intelligence-enabling institutions. After all, they are the remnants of that dreaded “society” that, for neoliberals, doesn’t really exist. A.G.I.’s grand project of amplifying intelligence may end up shrinking it.

Because of such solutionist bias, even seemingly innovative policy ideas around A.G.I. fail to excite. Take the recent proposal for a “Manhattan Project for A.I. Safety.” This is premised on the false idea that there’s no alternative to A.G.I.

But wouldn’t our quest for augmenting intelligence be far more effective if the government funded a Manhattan Project for culture and education and the institutions that nurture them instead?

Without such efforts, the vast cultural resources of our existing public institutions risk becoming mere training data sets for A.G.I. start-ups, reinforcing the falsehood that society doesn’t exist…

If it’s true that we shape our tools, and that our tools then shape us, then it behooves us to be very careful as to how we shape them… Eminently worth reading in full: “The True Threat of Artificial Intelligence” (gift link) from @evgenymorozov in @nytimes.

Apposite, on the A.I. we currently have: “The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con,” from @baldurbjarnason.

[Image above: source]

* Margaret J. Wheatley

###

As we set aside solutionism, we might send thoroughly-organized birthday greetings to Josiah Wedgwood; he was born on this date in 1730. An English potter, businessman (he founded the Wedgwood company), and inventor (he designed the company’s process machinery and high-temperature beehive-shaped kilns), he is credited, via his technique of “division of labor,” with the industrialization of the manufacture of pottery– and via his example, much of British (and thus American) manufacturing. Wedgwood was a member of the Lunar Society and the Royal Society, and an ardent abolitionist. His daughter, Susannah, was the mother of Charles Darwin.

source

“Nothing is so painful to the human mind as a great and sudden change”*…

If an AI-infused web is the future, what can we learn from the past? Jeff Jarvis has some provocative thoughts…

The Gutenberg Parenthesis—the theory that inspired my book of the same name—holds that the era of print was a grand exception in the course of history. I ask what lessons we may learn from society’s development of print culture as we leave it for what follows the connected age of networks, data, and intelligent machines—and as we negotiate the fate of such institutions as copyright, the author, and mass media as they are challenged by developments such as generative AI. 

Let’s start from the beginning…

In examining the half-millennium of print’s history, three moments in time struck me: 

  • After Johannes Gutenberg’s development of movable type in the 1450s in Europe (separate from its prior invention in China and Korea), it took a half-century for the book as we now know it to evolve out of its scribal roots—with titles, title pages, and page numbers. It took another century, until this side and that of 1600, before there arose tremendous innovation with print: the invention of the modern novel with Cervantes, the essay with Montaigne, a market for printed plays with Shakespeare, and the newspaper.
  • It took another century before a business model for print at last emerged with copyright, which was enacted in Britain in 1710, not to protect authors but instead to transform literary works into tradable assets, primarily for the benefit of the still-developing industry of publishing. 
  • And it was one more century—after 1800—before major changes came to the technology of print: the steel press, stereotyping (to mold complete pages rather than resetting type with every edition), steam-powered presses, paper made from abundant wood pulp instead of scarce rags, and eventually the marvelous Linotype, eliminating the job of the typesetter. Before the mechanization and industrialization of print, the average circulation of a daily newspaper in America was 4,000 (the size of a healthy Substack newsletter these days). Afterwards, mass media, the mass market, and the idea of the mass were born alongside the advertising to support them. 

One lesson in this timeline is that the change we experience today, which we think is moving fast, is likely only the beginning. We are only a quarter century past the introduction of the commercial web browser, which puts us at about 1480 in Gutenberg years. There could be much disruption and invention still ahead. Another lesson is that many of the institutions we assume are immutable—copyright, the concept of creativity as property, mass media and its scale, advertising and the attention economy—are not forever. That is to say that we can reconsider, reinvent, reject, or replace them as need and opportunity present…

Read on for his suggestion for a reinvention of copyright: “Gutenberg’s lessons in the era of AI,” from @jeffjarvis via @azeem in his valuable newsletter @ExponentialView.

* Mary Wollstonecraft Shelley, Frankenstein

###

As we contemplate change, we might spare a thought for Jan Hus. A Czech theologian and philosopher who became a Church reformer, he was burned at the stake as a heretic (for condemning indulgences and the Crusades) on this date in 1415. His teachings (which largely echoed those of Wycliffe) had a strong influence, over a century later, on Martin Luther, helping inspire the Reformation… which was fueled by Gutenberg’s technology, which had been developed and begun to spread in the meantime.

Jan Hus at the stake, Jena codex (c. 1500) source

“The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence”*…

Ah, but what about humor…

Humor is a central aspect of human communication that has not been solved for artificial agents so far. Large language models (LLMs) are increasingly able to capture implicit and contextual information. Especially, OpenAI’s ChatGPT recently gained immense public attention. The GPT3-based model almost seems to communicate on a human level and can even tell jokes. Humor is an essential component of human communication. But is ChatGPT really funny? We put ChatGPT’s sense of humor to the test. In a series of exploratory experiments around jokes, i.e., generation, explanation, and detection, we seek to understand ChatGPT’s capability to grasp and reproduce human humor. Since the model itself is not accessible, we applied prompt-based experiments. Our empirical evidence indicates that jokes are not hard-coded but mostly also not newly generated by the model. Over 90% of 1008 generated jokes were the same 25 Jokes. The system accurately explains valid jokes but also comes up with fictional explanations for invalid jokes. Joke-typical characteristics can mislead ChatGPT in the classification of jokes. ChatGPT has not solved computational humor yet but it can be a big leap toward “funny” machines…
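The headline numbers in the abstract (1,008 sampled jokes collapsing to roughly 25 distinct ones) come from a simple repetition test: prompt the model many times, normalize the outputs, and count how often each distinct joke recurs. Here is a minimal Python sketch of that bookkeeping; the ask_for_joke() stub stands in for an actual ChatGPT call, and the normalization shown is an assumption of ours rather than the authors’ exact method.

```python
# A minimal sketch of the joke-repetition count, under the assumptions above.
from collections import Counter
import re

def ask_for_joke() -> str:
    """Stub standing in for a real LLM call (e.g., prompting 'Tell me a joke.')."""
    return "Why did the scarecrow win an award? Because he was outstanding in his field."

def normalize(joke: str) -> str:
    """Lowercase, drop punctuation, and collapse whitespace so near-duplicates match."""
    joke = re.sub(r"[^a-z0-9\s]", "", joke.lower())
    return re.sub(r"\s+", " ", joke).strip()

def repetition_report(n: int = 1008) -> Counter:
    """Generate n jokes and report how much of the sample the top 25 jokes cover."""
    counts = Counter(normalize(ask_for_joke()) for _ in range(n))
    covered = sum(c for _, c in counts.most_common(25)) / n
    print(f"{len(counts)} distinct jokes; top 25 cover {covered:.0%} of {n} samples")
    return counts
```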

Or can it? “ChatGPT is fun, but it is not funny! Humor is still challenging Large Language Models,” in @arxiv.

* Jean Baudrillard

###

As we titter, we might send birthday giggles to a man who don’t need no stinking LLM, Scott Thompson; he was born on this date in 1959. A comedian and actor, he is best known as a member of The Kids in the Hall and for playing Brian on The Larry Sanders Show.

source

Written by (Roughly) Daily

June 12, 2023 at 1:00 am
