(Roughly) Daily


“Even a fool who keeps silent is considered wise; when he closes his lips, he is deemed intelligent.”*…

A substantial– and important– look at a troubling current in the world of technology today: Emily Gorcenski on the millenarianism and manifest destiny of AI and techno-futurism…

… Early Christian missionaries traveled the pagan lands looking for heathens to convert. Evangelical movements almost definitionally involve spreading the word of Jesus Christ as a core element of their faith. The missionary holds the key that unlocks eternal life and the only cost is conversion: the more souls saved, the holier the work. The idea of going out into the world to spread the good word and convert them to our product/language/platform is a deep tradition in the technology industry. We even hire people specifically to do that. We call them technology evangelists.

Successful evangelism has two key requirements. First, it must offer the promised land, the hope of a better life, of eternal salvation. Second, it must have a willing mark, someone desperate enough (perhaps through coercion) to be included in that vision of eternity, better still if they can believe strongly enough to become acolytes themselves. This formed the basis of the crypto community: Ponzi schemes sustain themselves only as long as there are new willing participants, and when those participants realize that their own continued success is contingent on still more conversions, the incentive to act in their own best interest is strong. It worked for a while to keep the crypto bubble alive. Where this failed was in every other aspect of web3.

There’s a joke in the data science world that goes something like this: What’s the difference between statistics, machine learning, and AI? The size of your marketing budget. It’s strange, actually, that we still call it “artificial intelligence” to this day. Artificial intelligence is a dream from the ’40s mired in the failures of the ’60s and ’70s. By the late 1980s, despite the previous spectacular failures to materialize any useful artificial intelligence, futurists had moved on to artificial life.

Nobody much is talking about artificial life these days. That idea failed, too, and those failures have likewise failed to deter us. We are now talking about creating “cybernetic superintelligence.” We’re talking about creating an AI that will usher in a period of boundless prosperity for humankind. We’re talking about the imminence of our salvation.

The last generation of futurists envisioned themselves as gods working to create life. We’re no longer talking about just life. We’re talking about making artificial gods.

I’m certainly not the first person to shine a light on the eschatological character of today’s AI conversation. Sigal Samuel did it a few months back in far fewer words than I’ve used here, though perhaps glossing over some of the political aspects I’ve brought in. She cites Noble and Kurzweil in many of the same ways. I’m not even the first person to use the term “techno-eschatology.” The parallels between the Singularity Hypothesis and the second coming of Christ are plentiful and not hard to see.

… The issue is not that Altman or Bankman-Fried or Andreessen or Kurzweil or any of the other technophiles discussed so far are “literally Hitler.” The issue is that high technology shares all the hallmarks of a millenarian cult, and the breathless evangelism about the power and opportunity of AI is indistinguishable from cult recruitment. And moreover, that its cultism meshes perfectly with the American evangelical far-right. Technologists believe they are creating a revolution when in reality they are playing right into the hands of a manipulative, mainstream political force. We saw it in 2016 and we learned nothing from that lesson.

Doomsday cults can never admit when they are wrong. Instead, they double down. We failed to make artificial intelligence, so we pivoted to artificial life. We failed to make artificial life, so now we’re trying to program the messiah. Two months before the Metaverse went belly-up, McKinsey valued it at up to $5 trillion by 2030. And it was without a hint of irony or self-reflection that they pivoted and valued GenAI at up to $4.4 trillion annually. There’s not even a hint of common sense in this analysis.

This post won’t convince anyone on the inside of the harms they are experiencing nor the harms they are causing. That’s not been my intent. You can’t remove someone from a cult if they’re not ready to leave. And the eye-popping data science salaries don’t really incentivize someone to get out. No. My intent was to give some clarity and explanatory insight to those who haven’t fallen under the Singularity’s spell. It’s a hope that if—when—the GenAI bubble bursts, we can maybe immunize ourselves against whatever follows it. And it’s a plea to get people to understand that America has never stopped believing in its manifest destiny.

David Nye described 19th- and 20th-century American perceptions of technology using the same concept of the sublime that philosophers used to describe Niagara Falls. Americans once beheld with divine wonder the locomotive and the skyscraper, the atom bomb and the Saturn V rocket. I wonder if we’ll behold AI with that same reverence. I pray that we will not. Our real earthly resources are wearing thin. Computing has surpassed aviation in terms of its carbon threat. The earth contains only so many rare earth elements. We may face Armageddon. There will be no Singularity to save us. We have the power to reject our manifest destinies…

Eminently worth reading in full: “Making God,” from @EmilyGorcenski (a relay to Mastodon and Bluesky).

See also: “Effective Obfuscation,” from Molly White (@molly0xFFF) and this thread from Emily Bender (@emilymbender).

* Proverbs 17:28

###

As we resist recruitment, we might spare a thought for Ada Lovelace (or, more properly, Augusta Ada King, Countess of Lovelace, née Byron); she died on this date in 1852. A mathematician and writer, she is chiefly remembered for her work on Charles Babbage’s proposed mechanical general-purpose computer, the Analytical Engine— for which she authored what can reasonably be considered the first “computer program.” She was the first to recognize that the machine had applications beyond pure calculation, and so is one of the “parents” of the modern computer.

Daguerreotype by Antoine Claudet, c. 1843 (source)
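That program– Note G, appended to her translation of Menabrea’s paper on the Engine– laid out the steps for computing Bernoulli numbers. For the curious, here is a minimal modern sketch of the same computation (the recurrence is the standard one; the Python rendering is mine, and looks nothing like her tabular notation):

```python
# A modern sketch (my rendering, not Lovelace's notation) of the computation
# her Note G programmed for the Analytical Engine: Bernoulli numbers, via the
# standard recurrence  sum_{j=0}^{m} C(m+1, j) * B_j = 0.

from fractions import Fraction
from math import comb

def bernoulli(n: int) -> list[Fraction]:
    """Return exact Bernoulli numbers B_0 .. B_n (B_1 = -1/2 convention)."""
    B = [Fraction(1)]                     # B_0 = 1
    for m in range(1, n + 1):
        acc = sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m))
        B.append(-acc / (m + 1))          # solve the recurrence for B_m
    return B

for i, b in enumerate(bernoulli(8)):
    print(f"B_{i} = {b}")
# B_0 = 1, B_1 = -1/2, B_2 = 1/6, B_3 = 0, B_4 = -1/30, ...
```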

“Everything is designed. Few things are designed well.”*…

Those of us in the U.S. are used to molded plastic seating on public transport. Not so in the U.K., where moquette, a velvet-like material, is favored by upholsterers for its durability. Artists like Paul Nash and Enid Marx were commissioned to create intricate designs that gave trains and buses a modish visual identity. And the tradition continues: new moquette can still be found on the seats that zoom beneath the city….

Moquette is the durable, woolen seating material that is used in upholstery on public transport all over the world.

Coming from the French word for carpet, moquette has been seen and sat upon by millions of commuters on buses, trains, trams and trolleybuses for over 100 years.

It is produced on looms using the Jacquard weaving technique, with a pile usually made up of 85% wool mixed with 15% nylon.

Moquette was chosen for public transport for two reasons. First, because it is hard-wearing and durable. Second, because its colour and patterns disguise signs of dirt, wear and tear. On top of this, moquette had the advantage of being easy and cheap to mass-produce.

Moquette was first applied to public transport seating in London in the 1920s when the patterns were designed by the manufacturers…

A history of moquette

Riding in style on the upholstery that gives London Transport its unique look and feel: “A history of Moquette,” from @ltmuseum and @TheBrowser.

* Brian Reed

###

As we settle in, we might spare a thought for William “Willy” A. Higinbotham; he died on this date in 1994.  A physicist who was a member of the team that developed the first atomic bomb, he later became a leader in the nuclear non-proliferation movement.

But Higinbotham may be better remembered as the creator of Tennis for Two— the first interactive analog computer game, one of the first electronic games to use a graphical display, and the first created as entertainment (as opposed to a demonstration of a computer’s capabilities). He built it for the 1958 visitor day at Brookhaven National Laboratory.

It used a small analogue computer with ten direct-connected operational amplifiers and output a side view of the curved flight of the tennis ball on an oscilloscope only five inches in diameter. Each player had a control knob and a button.

source

The 1958 Tennis for Two exhibit

source
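For the curious, here is a speculative modern re-creation– in software rather than op-amps– of the ball physics the original drew on its oscilloscope. All constants are illustrative guesses, not values from Higinbotham’s circuit:

```python
# A speculative software sketch of Tennis for Two's side-view ball physics:
# free fall under gravity, with a damped bounce at the court line.
# Constants are illustrative assumptions, not taken from the 1958 schematic.

GRAVITY = -9.8      # m/s^2, downward
RESTITUTION = 0.7   # fraction of vertical speed kept at each bounce
DT = 0.02           # simulation time step, seconds

def fly(x: float, y: float, vx: float, vy: float, steps: int = 200):
    """Advance the ball and return its (x, y) trace, bouncing at y = 0."""
    trace = []
    for _ in range(steps):
        vy += GRAVITY * DT          # gravity accumulates into vertical speed
        x += vx * DT
        y += vy * DT
        if y < 0:                   # hit the court: reflect and damp
            y, vy = 0.0, -vy * RESTITUTION
        trace.append((x, y))
    return trace

# A sample "serve": print every 40th point of the arc
for x, y in fly(0.0, 1.0, vx=2.0, vy=3.0)[::40]:
    print(f"x = {x:5.2f} m   y = {y:5.2f} m")
```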

“There are two types of encryption: one that will prevent your sister from reading your diary and one that will prevent your government”*…

… But sometimes the encryption you think will work against governments won’t even deter your sister. Joseph Cox on the recently uncovered vulnerabilities in TETRA, the encryption standard used in radios worldwide…

A group of cybersecurity researchers has uncovered what they believe is an intentional backdoor in encrypted radios used by police, military, and critical infrastructure entities around the world. The backdoor may have existed for decades, potentially exposing a wealth of sensitive information transmitted across them, according to the researchers… The end result, however, is radios whose traffic can be decrypted using consumer hardware like an ordinary laptop in under a minute…

The research is the first public and in-depth analysis of the TErrestrial Trunked RAdio (TETRA) standard in the more than 20 years the standard has existed. Not all users of TETRA-powered radios use the specific encryption algorithm, called TEA1, that is impacted by the backdoor; TEA1 is part of the TETRA standard approved for export to other countries. But the researchers also found multiple other vulnerabilities across TETRA that could allow historical decryption of communications and deanonymization. TETRA-radio users in general include national police forces and emergency services in Europe; military organizations in Africa; train operators in North America; and critical infrastructure providers elsewhere.

Midnight Blue [presented] their findings at the Black Hat cybersecurity conference in August. The details of the talk had been kept closely under wraps, with the Black Hat website simply describing the briefing as a “Redacted Telecom Talk.” The secrecy was due in large part to the unusually long disclosure process. Jos Wetzels of Midnight Blue told Motherboard the team had spent more than a year and a half disclosing these vulnerabilities to impacted parties so they could be fixed. That included an initial meeting with Dutch police in January 2022, a meeting with the intelligence community later that month, and then the main bulk of information and mitigations being distributed to stakeholders. NLnet Foundation, an organization that funds “those with ideas to fix the internet,” financed the research.

The European Telecommunications Standards Institute (ETSI), an organization that standardizes technologies across the industry, first created TETRA in 1995. Since then, TETRA has been used in products, including radios, sold by Motorola, Airbus, and more. Crucially, TETRA is not open-source. Instead, it relies on what the researchers describe in their presentation slides as “secret, proprietary cryptography,” meaning it is typically difficult for outside experts to verify how secure the standard really is.

Bart Jacobs, a professor of security, privacy and identity, who did not work on the research itself but says he was briefed on it, said he hopes “this really is the end of closed, proprietary crypto, not based on open, publicly scrutinised standards.”…
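The arithmetic behind “an ordinary laptop in under a minute” is stark. The researchers reported that TEA1 reduces its 80-bit key to an effective strength of roughly 32 bits. A back-of-the-envelope sketch (the key-testing rate below is an assumption, not a measurement, and this is of course not the proprietary cipher itself):

```python
# Back-of-the-envelope arithmetic, not the TEA1 cipher (which is proprietary):
# the researchers reported TEA1's 80-bit key collapses to ~32 bits of
# effective strength. The trial rate is an assumed figure for commodity
# hardware, chosen only to illustrate the orders of magnitude involved.

def worst_case_seconds(effective_key_bits: int, keys_per_second: float) -> float:
    """Time to exhaust a keyspace of 2**effective_key_bits candidate keys."""
    return (2 ** effective_key_bits) / keys_per_second

KEYS_PER_SECOND = 1e8   # assumption: ~100 million trial decryptions/second

for bits in (80, 32):
    t = worst_case_seconds(bits, KEYS_PER_SECOND)
    print(f"{bits}-bit keyspace: ~{t:,.0f} seconds (~{t / 3.15e7:.1e} years)")

# 80-bit keyspace: ~1.2e16 seconds (~3.8e+08 years) -- out of practical reach
# 32-bit keyspace: ~43 seconds -- "an ordinary laptop in under a minute"
```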

The veil, pierced: “Researchers Find ‘Backdoor’ in Encrypted Police and Military Radios,” from @josephfcox in @motherboard. (Not long after this article ran– and after the downfall of Vice, Motherboard’s parent– Cox and a number of his talented Motherboard colleagues launched 404 Media. Check it out.)

Remarkably, some of the radio systems enabling critical infrastructure are even easier to hack– they aren’t even encrypted.

* Bruce Schneier (@schneierblog)

###

As we take precautions, we might recall that it was on this date in 1980 that the last IBM 7030 “Stretch” mainframe in active use was decommissioned at Brigham Young University. The first Stretch was delivered to Los Alamos National Laboratory in 1961, giving the model almost 20 years of operational service.

The Stretch was famous for many things, but perhaps most notably it was the first IBM computer to use transistors instead of vacuum tubes; it was the first computer to be designed with the help of an earlier computer; and it was the world’s fastest computer from 1961 to 1964.

source

“Bureaucracy defends the status quo long past the time when the quo has lost its status”*…

… which is one of the reasons that bureaucracies are hard to update. Kevin Baker describes a 1998 visit to the IRS Atlanta Service Center and ponders its lessons…

… the first thing you’d notice would be the wires. They ran everywhere, and the building obviously hadn’t been constructed with them in mind. As you walked down a corridor, passing carts full of paper returns and rows of “tingle tables,” you would tread over those wires on a raised metal gangway. Each work area had an off-ramp, where both the wires and people would disembark…

… The desks were covered with dot matrix paper, cartons of files, and Sperry terminals glowing a dull monochromatic glow. These computers were linked to a mainframe in another room. Magnetic tapes from that mainframe, and from mainframes all over the country, would be airlifted to National Airport in Washington DC. From there, they’d be put on trucks to a West Virginia town of about 14,000 people called Martinsburg. There, they’d be loaded into a machine, the first version of which was known colloquially—and not entirely affectionately—as the “Martinsburg Monster.” This computer amounted to something like a national nerve center for the IRS. On it programs called the Individual Master File and the Business Master File processed the country’s tax records. These programs also organized much of the work. If there were a problem at Martinsburg, work across the IRS’s offices spanning the continent could and frequently did shut down.

Despite decades of attempts to kill it, the IRS’s Individual Master File, an almost sixty-year-old accumulation of government Assembly Language, lives on. Part of this strange persistence can be pegged squarely on Congress’s well-documented history of starving the IRS of funding. But another part of it is that the Individual Master File has become so completely entangled in the life of the agency that modernizing it resembles delicate surgery more than a straightforward software upgrade. Job descriptions, work processes, collective bargaining agreements, administrative law, and technical infrastructure all coalesce together and interface with it, so that a seemingly technical task requires considerable sociological, historical, legal, and political knowledge.

In 2023, as it was in the 1980s, the IRS is a cyborg bureaucracy, an entangled mass of law, hardware, software, and clerical labor. It was among the first government agencies to embrace automatic data processing and large-scale digital computing. And it used these technologies to organize work, to make decisions, and to understand itself. In important ways, the lines between the digital shadow of the agency—its artificial bureaucracy—and its physical presence became difficult if not impossible to disentangle….

Baker is launching a new Substack, devoted to exploring precisely this kind of tangle– and what it might portend…

This series, called Artificial Bureaucracy, is a long-term project looking at the history of government computing in the fifty-year period between 1945 and 1995. I think this is a timely subject. In the past several years, promoters and critics of artificial intelligence alike have talked up the possibility that decision-making and even governance itself may soon be handed over to sophisticated AI systems. What draws together both the dreams of boosters and the nightmares of critics is a deterministic orientation towards the future of technology, a conception of technology as autonomous and somehow beyond the possibility of control.

These visions mostly ignore the fact that the computerization of governance is a project at least seventy years in the making, and that project has never been determined, in the first instance or the last, primarily by “technological” factors. Like everything in government, the hardware and software systems that make up its artificial bureaucracy were and are subject to negotiation, conflict, administrative inertia, and the individual agency of its users.

Looking at government computing can also tell us something about AI. The historian of computing Michael Mahoney argued that studying the history of software is the process of learning how groups of people came to put their worlds in a machine. If this is right—and I think it is—our conceptions of “artificial intelligence” have an unwarranted individualistic bias; the proper way to understand machine intelligence isn’t by analogy to individual human knowledge and decision-making, but by analogy to methods of bureaucratic knowledge and action. If it is about anything, the story of AI is the story of bureaucracy. And if the future of governance is AI, then it makes sense to know something about its past…

Is bureaucracy the future of AI? Check out the first post in Artificial Bureaucracy, from @kevinbaker@mastodon.social.

* Laurence J. Peter

###

As we size up systems, we might recall that it was on this date in 1935 that President Franklin D. Roosevelt signed the Social Security Act. A key component of Roosevelt’s New Deal domestic program, the Act created both the Social Security program and insurance against unemployment.

Roosevelt signs Social Security Bill (source)

“No problem can be solved from the same level of consciousness that created it”*…

Christof Koch settles his bet with David Chalmers (with a case of wine)

… perhaps especially not the problem of consciousness itself. At least for now…

A 25-year science wager has come to an end. In 1998, neuroscientist Christof Koch bet philosopher David Chalmers that the mechanism by which the brain’s neurons produce consciousness would be discovered by 2023. Both scientists agreed publicly on 23 June, at the annual meeting of the Association for the Scientific Study of Consciousness (ASSC) in New York City, that it is still an ongoing quest — and declared Chalmers the winner.

What ultimately helped to settle the bet was a key study testing two leading hypotheses about the neural basis of consciousness, whose findings were unveiled at the conference.

“It was always a relatively good bet for me and a bold bet for Christof,” says Chalmers, who is now co-director of the Center for Mind, Brain and Consciousness at New York University. But he also says this isn’t the end of the story, and that an answer will come eventually: “There’s been a lot of progress in the field.”

Consciousness is everything a person experiences — what they taste, hear, feel and more. It is what gives meaning and value to our lives, Chalmers says.

Despite a vast effort — and a 25-year bet — researchers still don’t understand how our brains produce it. “It started off as a very big philosophical mystery,” Chalmers adds. “But over the years, it’s gradually been transmuting into, if not a ‘scientific’ mystery, at least one that we can get a partial grip on scientifically.”…

Neuroscientist Christof Koch wagered philosopher David Chalmers 25 years ago that researchers would learn how the brain achieves consciousness by now. But the quest continues: “Decades-long bet on consciousness ends — and it’s philosopher 1, neuroscientist 0,” from @Nature. Eminently worth reading in full for background and state-of-play.

* Albert Einstein

###

As we ponder pondering, we might spare a thought for Vannevar Bush; he died on this date in 1974. An engineer, inventor, and science administrator, he headed the World War II U.S. Office of Scientific Research and Development (OSRD), through which almost all wartime military R&D was carried out, including important developments in radar and the initiation and early administration of the Manhattan Project. He emphasized the importance of scientific research to national security and economic well-being, and was chiefly responsible for the movement that led to the creation of the National Science Foundation.

Bush also did his own work. Before the war, in 1925, at age 35, he developed the differential analyzer, the world’s first analog computer, capable of solving differential equations. It put into productive form the mechanical concept left incomplete by Charles Babbage 50 years earlier and the theoretical work of Lord Kelvin. The machine filled a 20×30-foot room. And his 1945 essay “As We May Think” seeded ideas later adopted as internet hypertext links.

source
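To get a feel for what the differential analyzer did, recall that it solved equations by chaining mechanical wheel-and-disc integrators, one per accumulated quantity. A minimal numerical sketch of the same idea– my illustration, not Bush’s design– wires two software “integrators” in a loop to solve the simple harmonic oscillator y'' = -y:

```python
# A minimal numerical sketch (an illustration, not Bush's mechanism): the
# differential analyzer chained integrators together; here two software
# "integrators" in a feedback loop solve y'' = -y, whose solution is cos(t).

import math

dt = 0.001           # step size, playing the role of the machine's shaft rotation
y, dy = 1.0, 0.0     # initial conditions: y(0) = 1, y'(0) = 0

t = 0.0
while t < math.pi:   # run for half a period
    ddy = -y         # the "adder" stage feeding the first integrator
    dy += ddy * dt   # integrator 1: accumulate y'' into y'
    y += dy * dt     # integrator 2: accumulate y' into y
    t += dt

print(f"y(pi) ~ {y:.4f}   (exact value: cos(pi) = -1)")
```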
