(Roughly) Daily


“They will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”*…

Socrates was worried about the impact of a new technology– writing– on the effective intelligence of its users. Similar concerns have surfaced with the rise of other new communications technologies: movable-type printing, photography, radio, television, and the internet. As Erik Hoel reminds us, AI is next on that list…

Unfortunately, there’s a growing subfield of psychology research pointing to cognitive atrophy from too much AI usage.

Evidence includes a new paper published by a cohort of researchers at Microsoft (not exactly a group predisposed to finding evidence for brain drain). Yet they do indeed see the effect in the critical thinking of knowledge workers who make heavy use of AI in their workflows.

To measure this, the researchers at Microsoft needed a definition of critical thinking. They used one of the oldest and most storied in the academic literature: that of mid-20th century education researcher Benjamin Bloom (the very same Benjamin Bloom who popularized tutoring as the most effective method of education).

Bloom’s taxonomy of critical thinking makes a great deal of sense. What we’d call “the creative act” occupies the top two entries of the pyramid of critical thinking, wherein creativity is a combination of the synthesis of new ideas and then evaluative refinement of them.

To see where AI usage shows up in Bloom’s hierarchy, researchers surveyed a group of 319 knowledge workers who had incorporated AI into their workflow. What makes this survey noteworthy is how in-depth it is. They didn’t just ask for opinions; instead they compiled ~1,000 real-world examples of tasks the workers complete with AI assistance, and then surveyed them specifically about those in all sorts of ways, including qualitative and quantitative judgments.

In general, they found that AI decreased the amount of effort spent on critical thinking when performing a task…

… While the researchers themselves don’t make the connection, their data fits the intuitive idea that positive use of AI tools is when they shift cognitive tasks upward in terms of their level of abstraction.

We can view this through the lens of one of the most cited papers in all psychology, “The Magical Number Seven, Plus or Minus Two,” which introduced the eponymous Miller’s law: that working memory in humans caps out at 7 (plus or minus 2) different things. But the critical insight from the author, psychologist George Miller, is that experts don’t really have greater working memory. They’re actually still stuck at ~7 things. Instead, their advantage is how they mentally “chunk” the problem up at a higher level of abstraction than non-experts, so their 7 things are worth a lot more when in mental motion. The classic example is that poor Chess players think in terms of individual pieces and individual moves, but great Chess players think in terms of patterns of pieces, which are the “chunks” shifted around when playing.
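Miller’s chunking insight can be sketched in a few lines of code– a purely illustrative toy, not anything from the paper, with every position and pattern name invented for the example: the same chess position overflows a ~7-slot working memory when held piece-by-piece, but fits comfortably once recoded as a handful of familiar patterns.

```python
# Illustrative toy of Miller's "chunking": same information, different encodings.
# All positions and pattern names below are invented for this sketch.

WORKING_MEMORY_CAP = 7  # Miller's ~7 (plus or minus 2) slots

# A novice encodes a position piece-by-piece: one memory slot per piece.
novice_view = [
    "white pawn e4", "white pawn d4", "white knight f3", "white bishop c4",
    "white king g1", "white rook f1", "black pawn e5", "black knight c6",
    "black bishop c5", "black king g8",
]

# An expert recodes the very same pieces into a few familiar patterns.
expert_view = [
    "Italian Game structure",  # the e4/d4 pawns plus Bc4/Nf3 vs. e5/Nc6/Bc5
    "white castled kingside",  # Kg1 + Rf1
    "black castled kingside",  # Kg8
]

def fits_in_working_memory(items):
    """True if the encoding fits within Miller's ~7 slots."""
    return len(items) <= WORKING_MEMORY_CAP

print(len(novice_view), fits_in_working_memory(novice_view))  # 10 False
print(len(expert_view), fits_in_working_memory(expert_view))  # 3 True
```

The point of the toy: nothing about the position changed– only the level of abstraction at which it is held in mind.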

I think the positive aspect for AI augmentation of human workflows can be framed in light of Miller’s law: AI usage is cognitively healthy when it allows humans to mentally “chunk” tasks at a higher level of abstraction.

But if that’s the clear upside, the downside is just as clear. As the Microsoft researchers themselves say…

While GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term over-reliance on the tool and diminished skill for independent problem-solving.

This negative effect scaled with the worker’s trust in AI: the more they blindly trusted AI results, the more outsourcing of critical thinking they suffered. That’s bad news, especially if these systems ever do permanently solve their hallucination problem, since many users will be shifted into the “high trust” category by dint of sheer competence.

The study isn’t alone. There’s increasing evidence for the detrimental effects of cognitive offloading, like that creativity gets hindered when there’s reliance on AI usage, and that over-reliance on AI is greatest when outputs are difficult to evaluate. Humans are even willing to offload to AI the decision to kill, at least in mock studies on simulated drone warfare decisions. And again, it was participants less confident in their own judgments, and more trusting of the AI when it disagreed with them, who got brain drained the most…

… Admittedly, there’s not yet high-quality causal evidence for lasting brain drain from AI use. But so it goes with subjects of this nature. What makes these debates difficult is that we want mono-causal universality in order to make ironclad claims about technology’s effect on society. It would be a lot easier to point to the downsides of internet and social media use if it simply made everyone’s attention spans equally shorter and everyone’s mental health equally worse, but that obviously isn’t the case. E.g., long-form content, like blogs, has blossomed on the internet.

But it’s also foolish to therefore dismiss the concern about shorter attention spans, because people will literally describe their own attention spans as shortening! They’ll write personal essays about it, or ask for help with dealing with it, or casually describe it as a generational issue, and the effect continues to be found in academic research.

With that caveat in mind, there’s now enough suggestive evidence from self-reports and workflow analysis to take “brAIn drAIn” seriously as a societal downside to the technology (adding to the list of other issues like AI slop and existential risk).

Similarly to how people use the internet in healthy and unhealthy ways, I think we should expect differential effects. For skilled knowledge workers with strong confidence in their own abilities, AI will be a tool to chunk up cognitively-demanding tasks at a higher level of abstraction in accordance with Miller’s law. For others… it’ll be a crutch.

So then what’s the take-away?

For one, I think we should be cautious about AI exposure in children. E.g., another paper in the brain-drain research subfield found that younger AI users showed the most dependency, and that the younger cohort didn’t match the critical thinking skills of older, more skeptical AI users. As a young user put it:

It’s great to have all this information at my fingertips, but I sometimes worry that I’m not really learning or retaining anything. I rely so much on AI that I don’t think I’d know how to solve certain problems without it.

What a lovely new concern for parents we’ve invented!

Parents already have to weather internal debates and worries about exposure to short-form video platforms like TikTok. Of course, certain parents hand their kids an iPad essentially the day they’re born. But culturally this raises eyebrows, the same way handing out junk food at every meal does. Parents are a judgy bunch, which is often for the good, as it makes them cautious instead of waiting for some finalized scientific answer. While there’s still ongoing academic debate about the psychological effects of early smartphone usage, in general the results are visceral and obvious enough in real life for parents to make conservative decisions: prohibiting outright, agonizing over when to introduce phones and what kind, trying not to overexpose their child to social media or addictive video games, etc.

Similarly, parents (and schools) will need to be careful about whether kids (and students) rely too much on AI early on. I personally am not worried about a graduate student using ChatGPT to code up eye-catching figures to show off their gathered data. There, the graduate student is using the technology appropriately to create a scientific paper via manipulating more abstract mental chunks (trust me, you don’t get into science to plod through the annoying intricacies of Matplotlib). I am, however, very worried about a 7th grader using AI to do their homework, and then, furthermore, coming to it with questions they should be thinking through themselves, because inevitably those questions are going to be about more and more minor things. People already worry enough about a generation of “iPad kids.” I don’t think we want to worry about a generation of brain-drained “meat puppets” next.

For individuals themselves, the main actionable thing to do about brain drain is to internalize a rule-of-thumb the academic literature already shows: Skepticism of AI capabilities—independent of whether that skepticism is warranted!—makes for healthier AI usage.

In other words, pro-human bias and AI distrust are cognitively beneficial.

It’s said that first we shape our tools, then they shape us. Well, meet the new boss, same as the old boss… Just as, both as individuals and societies, we’ve had to learn our way into effective use of new technologies before, so we will with AI.

The enhancement and atrophy of human cognition go hand in hand: “brAIn drAIn,” from @erikphoel.

Pair with a broad and thoughtful view from Robin Sloan: “Is It OK?”

* “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.” – Socrates, in Plato’s dialogue Phaedrus 14, 274c-275b

###

As we think about thinking, we might send carefully-considered birthday greetings to Alfred North Whitehead; he was born on this date in 1861. Whitehead began his career as a mathematician and logician, perhaps most famously co-authoring (with his former student Bertrand Russell) the three-volume Principia Mathematica (1910–13), one of the twentieth century’s most important works in mathematical logic.

But in the late 1910s and early 1920s, Whitehead shifted his focus to philosophy; the central result was a new school of thought, process philosophy, which has found application in a wide variety of disciplines (e.g., ecology, theology, education, physics, biology, economics, and psychology).

“There is urgency in coming to see the world as a web of interrelated processes of which we are integral parts, so that all of our choices and actions have consequences for the world around us.”

 source

“Tennyson said that if we could understand a single flower we would know who we are and what the world is”*…

Reality feels “stable” enough to talk about it– though all logic seems to point away from that possibility. Marco Giancotti unpacks what he suggests is the only line of reasoning that resolves that paradox…

What is the source of what we call order? Why do many things look too complex, too perfectly organized to arise unintentionally from chaos? How can something as special as a star or a flower even happen? And, for that matter, why do some natural phenomena seem designed for a purpose?

We live in a universe of forces eternally straining to crush things together or tear them apart. There is no physical law for “forming shapes”, no law for being separated from other things, no law for staying still.

Boundaries are in the eye of the beholder, not in the world out there. Out there is only tumult, clashing, and shuffling of everything with everything else.

And yet, our familiar world is filled with things stable and consistent enough for us to give them names—and to live our whole lives with.

In this essay we’ll tackle these questions at the very root. We need good questions to get good answers, so we’ll begin by clarifying the problem. It has to do with probabilities—we’ll see why those natural objects seem so utterly unlikely to happen by chance, and we’ll find the fundamental process that solves the dilemma.

This will take us most of the way, but we’ll have one final obstacle to overcome, a cognitive Last Boss: living things still feel a little magical in some way, imbued with a mysterious substance called “purpose” that feels qualitatively different from how inanimate things work. This kind of confusion runs very deep in our culture. To remove it, I’ll give a name to something that, as far as I know, hasn’t been named before: phenomena that I’ll be calling—enigmatically, for now—“Water Lilies.”…

Applying systems dynamics, complexity, and emergence to understanding reality itself: “Recursion, Tidy Stars, and Water Lilies,” from @marco_giancotti (the second in a trilogy of essays: part one here; subscribe to his newsletter for Part Three when it drops).

* Jorge Luis Borges, “The Zahir”

###

As we explore existence, we might spare a thought for Francis Simpson; he died on this date in 2003. An English naturalist, conservationist, and chronicler of the countryside and wild flowers of his native Suffolk, he became a botanist at Ipswich Museum, where he worked until his retirement in 1977.

He published one of the most highly regarded county floras, simply entitled Simpson’s Flora of Suffolk, and in 1938 saved a small meadow, famous for its snakeshead fritillaries, from being drained and ploughed into farmland. Using donations amounting to £75, he was able to purchase the field, Mickfield Meadow, for the Society for the Promotion of Nature Reserves. Today, it is one of the oldest nature reserves in the country, protecting the meadow flowers now surrounded by farmland.

source

“Fast gets all our attention, slow has all the power”*…

Coleman McCormick on a framework that can help us understand change in systems– and build resilience…

A forest is a complex ecosystem made up of thousands of organisms living, evolving, interacting with each other, and changing over time.

At the top of the hierarchy are the leaves, changing annually, growing, dying, and shedding in a year-long seasonal cycle. Next there are branches, fewer in number and slower in growth. Then the whole tree itself, changing over decades. The tree sits in a stand of dozens, and the stand in a forest of thousands of individual trees. The forest within a biome, the biome in a region with a particular climate.

You get the idea.

All natural ecosystems evolve in layers like this that connect to each other, but move at different speeds. You can imagine other systems with similar structures: your body is made up of proteins, DNA strands, organelles, cells, membranes, organs, a skeleton, and eventually, your whole body. Cells are being generated but also dying off at almost the same rate. Slower layers like the nervous system take a long time to heal (if ever) when subjected to injury.

Seeing complex systems this way — as layered collections of variable-speed elements — is a useful framework for understanding why we have a hard time changing them.

Stewart Brand [and here and here] noticed this recurring pattern in the anatomy of systems, which he called pace layering.

The concept builds on an observation made by architect Frank Duffy, who noticed a hierarchy in the components of buildings. In his book How Buildings Learn, Brand expanded this observation into a model he termed “shearing layers,” which describes how different parts of a structure change at varying speeds. Site → Structure → Skin → Services → Space plan → Stuff. Each must survive or adapt on different timelines. When architecture fails to account for the different rates at which users need to modify these layers, it results in rigid, non-functional design. Buildings where Services or the Space Plan are overly inflexible are difficult to adapt to users’ changing needs.

In his later book The Clock of the Long Now, Brand expanded the concept of shearing layers to a civilizational scale:

At the bottom, nature moves along on its own eons-level time scale. In the middle, governance and culture shift with generations. Infrastructure and commerce in the range of years. And on the surface, fashionable trends flare up and die out with sometimes daily regularity, like the turbulent wave tops in a stormy ocean. Each layer serves a function:

Fast learns, slow remembers.
Fast proposes, slow disposes.
Fast is discontinuous, slow is continuous.
Fast and small instructs slow and big by accrued innovation and by occasional revolution.
Slow and big controls small and fast by constraint and constancy.
Fast gets all our attention, slow has all the power...

… Seeing the world through this lens — not only of scale, but also of time — has distant reach to so many other domains. It’s a fundamental characteristic of how systems work and adapt to change.

The fast flurry of activity at the top of a pace layered system creates a testbed for new ideas. In the forest, each individual tree can try out different evolutionary adaptations. New survival strategies are tested in numbers not possible if entire ecosystems had to move together. If one tree tests a new trait that turns out not to work, only a single organism is at risk, not the whole forest.

Because upper layers move faster, they can also rebound faster. A forest fire or a passing herd of elk causes some damage, but only at the upper crust of our strata. The bark and branches and leaves may get eaten or burn off, but in a few weeks they bounce back.

Pace layering builds resiliency into complex systems. The fast layers shield the slower ones from shocks, while selectively transmitting changes down through the layers, allowing slower ones to incorporate those adaptations. But some changes propagate too fast.
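The shock-absorbing behavior of pace layers can be made concrete with a toy simulation– my own sketch, not a model from Brand or McCormick, with the layer names taken from Brand’s strata and the recovery and damping rates invented for illustration: a shock hits the fastest layer, each layer heals at its own rate, and only a small damped fraction of the disturbance trickles down to the layer beneath.

```python
# Illustrative toy of pace layering (not from Brand or McCormick): a shock hits
# the fastest layer; each layer heals at its own rate and passes only a small
# damped fraction of its disturbance down to the slower layer beneath it.
# Recovery and damping rates are invented for the example.

# (layer name, recovery rate per step, fraction of disturbance passed down)
LAYERS = [
    ("fashion",        0.50, 0.03),
    ("commerce",       0.20, 0.03),
    ("infrastructure", 0.10, 0.03),
    ("governance",     0.05, 0.03),
    ("culture",        0.02, 0.03),
    ("nature",         0.01, 0.03),
]

def simulate(shock=1.0, steps=30):
    """Track each layer's disturbance over time after a shock to the top layer."""
    state = [0.0] * len(LAYERS)
    state[0] = shock                 # the shock lands on the fastest layer
    history = [list(state)]
    for _ in range(steps):
        nxt = list(state)
        for i, (_, recovery, damping) in enumerate(LAYERS):
            nxt[i] -= state[i] * recovery        # each layer heals at its own pace
            if i + 1 < len(LAYERS):
                nxt[i + 1] += state[i] * damping  # a trickle reaches the layer below
        state = nxt
        history.append(list(state))
    return history

history = simulate()
final = history[-1]
for (name, _, _), disturbance in zip(LAYERS, final):
    print(f"{name:14s} {disturbance:.6f}")
```

Run it and the pattern in the essay falls out: fashion, which took the full hit, has essentially recovered after thirty steps, while only a vanishing fraction of the original shock ever reached nature at the bottom.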

Some of the worst cases of system shock happen when change shakes the lower levels too rapidly. Look at the collapse of the Soviet Union. A rapid change in the governance layer wreaked havoc in the layers above: massive instability on a national scale, rippling through the whole system for decades. In this case, a totalitarian government imposed rigidity on commerce, infrastructure, and even fashion, and didn’t allow for the necessary shifting and experimentation required for the system to maintain resilience.

Drawing sharp lines between layers actually draws an inaccurate picture of how a thriving system works. A more accurate diagram would show smoother gradients across the transitions between layers.

Resilience comes from allowing this gradient — this slippage — at the junctions between layers. Each layer, above and below, must allow for give and take from its neighbors. Slow layers must permit some influence at the edges, and fast layers must slow down to maintain a workable interface with the slower. The layers need to be able to negotiate with one another. If the fast ignores the constraints of the slow, you get discontinuous instability. If the slow never bends to the fast, you get stifling stagnation…

[McCormick explores the applicability of this framework to governance and to corporate activity…]

… With age, my mind seems to sink to lower levels in the hierarchy. “Current things” are more likely to hit me and bounce off. We come around to new ideas more slowly. Above us are the teenagers, trying new technologies, listening to new music, pushing new memes, on a weekly or daily basis. We parents underneath can’t keep up.

But “keeping up” isn’t our role! Fast learns, slow remembers. Fast tries things, slow preserves what works. Resilient, sustainable systems balance this learning and remembering.

Not every meme or new song or fashion trend has staying power, but some do. The ones with notable resonance absorb and influence the culture below. Youth play the role of experimenters, continuously throwing new ideas at the wall — some good, many terrible. The elders carry the torch of tradition, and provide the stable platform of time-tested solutions on top of which the innovators can explore.

Pace layering is one of those ideas with such broad reach that once you learn about it, you see it everywhere…

The hidden architecture of resilient systems: “Pace Layers,” from @colemanm.

For Stewart’s own essay on Pace Layers, see here; and for more, here.

* Stewart Brand

###

As we take the long view, we might send connective birthday greetings to Alexander MacMillan; he was born on this date in 1818. MacMillan was cofounder (in 1843), with his brother Daniel, of Macmillan Publishers, one of the “Big Five” English-language publishers.

Though not himself a professional scientist, MacMillan did much to promote science in Victorian times– especially when he established the journal Nature (in 1869), enabling communication among men of science. The journal had the support of many influential contributors, including Thomas Huxley. Yet it remained a financial challenge for Macmillan. Other scientific quarterlies had short lives, but Macmillan tolerated losses for three decades, committed to the journal’s mission “to place before the general public the grand results of scientific work and scientific discovery; and to urge the claims of science to a more general recognition in education and in daily life.” That mission continues to the present day.

source

“The purpose of a system is what it does”*…

Via Patrick Tanguay and his wonderful newsletter, Sentiers. Of this essay by Barath Raghavan and Bruce Schneier, Tanguay observes, “diagnosing what’s going on in society right now, how our multiple systems function and the issues that emerge from that, is not an easy task. It’s probably unfair then to also expect solutions from one article, but that’s what I was hoping for by the end of this one”…

Technology was once simply a tool—and a small one at that—used to amplify human intent and capacity. That was the story of the industrial revolution: we could control nature and build large, complex human societies, and the more we employed and mastered technology, the better things got. We don’t live in that world anymore. Not only has technology become entangled with the structure of society, but we also can no longer see the world around us without it. The separation is gone, and the control we thought we once had has revealed itself as a mirage. We’re in a transitional period of history right now.

We tell ourselves stories about technology and society every day. Those stories shape how we use and develop new technologies as well as the new stories and uses that will come with it. They determine who’s in charge, who benefits, who’s to blame, and what it all means.

Some people are excited about the emerging technologies poised to remake society. Others are hoping for us to see this as folly and adopt simpler, less tech-centric ways of living. And many feel that they have little understanding of what is happening and even less say in the matter.

But we never had total control of technology in the first place, nor is there a pretechnological golden age to which we can return. The truth is that our data-centric way of seeing the world isn’t serving us well. We need to tease out a third option. To do so, we first need to understand how we got here.

When we describe something as being abstract, we mean it is removed from reality: conceptual and not material, distant and not close-up. What happens when we live in a world built entirely of the abstract? A world in which we no longer care for the messy, contingent, nebulous, raw, and ambiguous reality that has defined humanity for most of our species’ existence? We are about to find out, as we begin to see the world through the lens of data structures.

Two decades ago, in his book Seeing Like a State, anthropologist James C. Scott explored what happens when governments, or those with authority, attempt and fail to “improve the human condition.” Scott found that to understand societies and ecosystems, government functionaries and their private sector equivalents reduced messy reality to idealized, abstracted, and quantified simplifications that made the mess more “legible” to them. With this legibility came the ability to assess and then impose new social, economic, and ecological arrangements from the top down: communities of people became taxable citizens, a tangled and primeval forest became a monoculture timber operation, and a convoluted premodern town became a regimented industrial city.

This kind of abstraction was seemingly necessary to create the world around us today. It is difficult to manage a large organization, let alone an interconnected global society of eight billion people, without some sort of structure and means to abstract away details. Facility with abstraction, and abstract reasoning, has enabled all sorts of advancements in science, technology, engineering, and math—the very fields we are constantly being told are in highest demand.

The map is not the territory [quoth Alfred Korzybski], and no amount of intellectualization will make it so. Creating abstract representations by necessity leaves out important detail and context. Inevitably, as Scott cataloged, the use of large-scale abstractions fails, leaving leadership bewildered at the failure and ordinary people worse off. 

But our desire to abstract never went away, and technology, as always, serves to amplify intent and capacity. Now, we manifest this abstraction with software. Computing supercharges the creative and practical use of abstraction. This is what life is like when we see the world the way a data structure sees the world. These are the same tricks Scott documented. What has changed is their speed and their ubiquity…
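What “seeing like a data structure” means in practice can be sketched in a few lines– an illustrative toy of Scott’s legibility point as Raghavan and Schneier carry it into software, with every place name, field, and value invented for the example: a schema keeps only what it has fields for, and everything else about the messy reality simply vanishes from the record.

```python
# Illustrative toy of "legibility" as a data structure: a schema keeps only
# what it has fields for. All names, fields, and values here are invented.
from dataclasses import dataclass

# The messy, contingent reality on the ground (hypothetical).
messy_reality = {
    "name": "the old Hartley orchard",
    "uses": ["grazing", "gleaning", "shortcut to school", "autumn festival"],
    "boundaries": "roughly, to the creek when it isn't flooded",
    "who_decides": "informal agreement among neighbors",
}

@dataclass
class LandParcel:
    """A state's-eye (or database's-eye) view: only what fits the schema."""
    parcel_id: int
    owner: str
    area_hectares: float
    zoning_code: str

# The abstraction: reality squeezed into the record's four fields.
record = LandParcel(parcel_id=10421, owner="Hartley, J.",
                    area_hectares=3.2, zoning_code="AG-1")

# Everything that doesn't map onto a schema field is simply lost.
schema_fields = set(LandParcel.__dataclass_fields__)
dropped = set(messy_reality) - schema_fields
print(sorted(dropped))  # ['boundaries', 'name', 'uses', 'who_decides']
```

The record is easier to tax, zone, and query than the orchard– and that is exactly the trade Scott described: legibility purchased with dropped context.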

… Data structures dominate our world and are a byproduct of the rational, modern era, but they are ushering in an age of chaos. We need to embrace and tame, but not extinguish, this chaos for a better world…

As [Lewis] Mumford wrote in his classic history of technology, “The essential distinction between a machine and a tool lies in the degree of independence in the operation from the skill and motive power of the operator.” A tool is controlled by a human user, whereas a machine does what its designer wanted. As technologists, we can build tools, rather than machines, that flexibly allow people to make partial, contextual sense of the online and physical world around them. As citizens, we can create meaningful organizations that span our communities but without the permanence (and thus overhead) of old-school organizations.

Seeing like a data structure has been both a blessing and a curse. Increasingly, it feels like an avalanche, an out-of-control force that will reshape everything in its path. But it’s also a choice, and there is a different path we can take. The job of enabling a new society, one that accepts the complexity and messiness of our current world without being overwhelmed by it, is one all of us can take part in. There is a different future we can build, together…

A fascinating and important piece: “Seeing Like a Data Structure,” @schneierblog @BelferCenter. Eminently worth reading in full.

See also: “Empty Innovation”

* Stafford Beer

###

As we reframe, we might send free birthday greetings to Matthias Ettrich; he was born on this date in 1972. A computer scientist interested in Linux and open source software, Ettrich created the LyX document processor and then founded KDE in 1996 to create (as he put it on Usenet) a “consistent, nice looking free desktop environment” for Unix-like systems, using Qt as its widget toolkit. A German based in Berlin, Ettrich was awarded the Federal Cross of Merit for his contributions to free software.

source

“Never call an accountant a credit to his profession; a good accountant is a debit to his profession.”*…

The estimable Henry Farrell on accountancy as a lens on the hidden systems of the world…

When reading Cory Doctorow’s latest novel, The Bezzle [which your correspondent highly recommends], I kept on thinking about another recent book, Bruce Schneier’s A Hacker’s Mind: How the Powerful Bend Society’s Rules and How to Bend Them Back [ditto]. Cory’s book is fiction, and Bruce’s non-fiction, but they are clearly examples of the same broad genre (the ‘pre-apocalyptic systems thriller’?). Both are about hackers, but tell us to pay attention to other things than computers and traditional information systems. We need to go beneath the glossy surfaces of cyberpunk and look closely at the messy, complex systems of power beneath them. And these systems – like those described in the very early cyberpunk of William Gibson and others – are all about money and power.

What Bruce says:

In my story, hacking isn’t just something bored teenagers or rival governments do to computer systems … It isn’t countercultural misbehavior by the less powerful. A hacker is more likely to be working for a hedge fund, finding a loophole in financial regulations that lets her siphon extra profits out of the system. He’s more likely in a corporate office. Or an elected official. Hacking is integral to the job of every government lobbyist. It’s how social media systems keep us on our platform.

Bruce’s prime example of hacking is Peter Thiel using a Roth IRA to stash his Paypal shares and turn them into $5 billion, tax free.

This underscores his four key points. First, hacking isn’t just about computers. It’s about finding the loopholes; figuring out how to make complex systems of rules do things that they aren’t supposed to. Second, it isn’t countercultural. Most of the hacking you might care about is done by boring-seeming people in boring-seeming clothes (I’m reminded of Sam Anthony’s anecdote about how the costume designer of the film Hackers visited with people at a 2600 conference for background research, but decided that they “were a bunch of boring nerds and went and took pictures of club kids on St. Marks instead”). Third, hacking tends to reinforce power asymmetries rather than undermine them. The rich have far more resources to figure out how to gimmick the rules. Fourth, we should mostly identify ourselves not with the hackers but the hacked. Because that is who, in fact, we mostly are….

… Still, there are things you can do to fight back. One of the major themes of The Bezzle is that prison is now a profit model. Tyler Cowen, the economist, used to talk a lot about “markets in everything.” I occasionally responded by pointing to “captive markets in everything.” And there isn’t any market that is more literally captive than prisoners. As for-profit corporations (and venal authorities) came to realize this, they started to systematically remake the rules and hack the gaps in the regulatory system to squeeze prisoners and their relatives for as much money as possible, charging extortionate amounts for mail, for phone calls, for books that could only be accessed through proprietary electronic tablets.

That’s changing, in part thanks to ingenious counter hacking. The Appeal published a piece last week on how Securus, “the nation’s largest prison and jail telecom corporation,” had to effectively default on nearly a billion dollars of debt. Part of the reason for the company’s travails is that activists have figured out how to use the system against it…

… In other sectors, where companies doing sketchy things have publicly traded shares, activists have started getting motions passed at shareholder meetings, to challenge their policies. However, the companies have begun in turn to sue, using the legal system in unconventional ways to try to prevent these unconventional tactics. Again, as both Bruce and Cory suggest, the preponderance of hacking muscle is owned by the powerful, not those challenging them.

Even so, the more that ordinary people understand the complexities of the system, the more that they will be able to push back. Perhaps the most magnificent example of this is Max Schrems, an Austrian law student who successfully bollocksed-up the entire system of EU-US data transfers by spotting loopholes and incoherencies and weaponizing them in EU courts. Cory’s Martin Hench books seem to me to be purpose-designed to inspire a thousand Max Schrems – people who are probably past their teenage years, have some grounding in the relevant professions, and really want to see things change.

And in this, the books return to some of the original ambitions of ‘cyberpunk,’ a somewhat ungainly and contested term that has come to emphasize the literary movement’s countercultural cool over its actual intentions…

One word that never appears in Neuromancer, and for good reason: “Internet.” When it was written, the Internet was just one among many information networks, and there was no reason to suspect that it would defeat and devour its rivals, subordinating them to its own logic. Before cyberspace and the Internet became entangled, Gibson’s term was a synecdoche for a much broader set of phenomena. What cyberspace actually referred to back then was more ‘capitalism’ than ‘computerized information.’

So, in a very important sense, The Bezzle returns to the original mission statement – understanding how the hacker mythos is entwined with capitalism. To actually understand hacking, we need to understand the complex systems of finance and how they work. If you really want to penetrate the system, you need to really grasp what money is and what it does. That, I think, is what Cory is trying to tell us. And so too Bruce. The nexus between accountancy and hacking is not a literary trick or artifice. It is an important fact about the world, which both fiction and non-fiction writers need to pay attention to…

Eminently worth reading in full: “Today’s hackers wear green eyeshades, not mirrorshades,” from @henryfarrell in his invaluable newsletter Programmable Mutter.

* Charles Lyell

###

As we ponder power, we might recall that on this date in 1927, a “counter-hacker” in a different domain, Mae West, was sentenced to jail for obscenity.

Her first starring role on Broadway was in a 1926 play entitled Sex, which she wrote, produced, and directed. Although conservative critics panned the show, ticket sales were strong. The production did not go over well with city officials, who had received complaints from some religious groups, and the theater was raided and West arrested along with the cast. She was taken to the Jefferson Market Court House (now Jefferson Market Library), where she was prosecuted on morals charges, and on April 19, 1927, was sentenced to 10 days for “corrupting the morals of youth.” Though West could have paid a fine and been let off, she chose the jail sentence for the publicity it would garner. While incarcerated on Welfare Island (now known as Roosevelt Island), she dined with the warden and his wife; she told reporters that she had worn her silk panties while serving time, in lieu of the “burlap” the other girls had to wear. West got great mileage from this jail stint. She served eight days with two days off for “good behavior”.

Wikipedia

source