Posts Tagged ‘social science’
“The older one gets the more convinced one becomes that his Majesty King Chance does three-quarters of the business of this miserable universe”*…
In an essay adapted from his book Fluke: Chance, Chaos, and Why Everything We Do Matters, Brian Klaas argues that social scientists are clinging to simple models of reality – with disastrous results. Instead, he suggests, they must embrace chaos theory…
The social world doesn’t work how we pretend it does. Too often, we are led to believe it is a structured, ordered system defined by clear rules and patterns. The economy, apparently, runs on supply-and-demand curves. Politics is a science. Even human beliefs can be charted, plotted, graphed. And using the right regression we can tame even the most baffling elements of the human condition. Within this dominant, hubristic paradigm of social science, our world is treated as one that can be understood, controlled and bent to our whims. It can’t.
Our history has been an endless but futile struggle to impose order, certainty and rationality onto a Universe defined by disorder, chance and chaos. And, in the 21st century, this tendency seems to be only increasing as calamities in the social world become more unpredictable. From 9/11 to the financial crisis, the Arab Spring to the rise of populism, and from a global pandemic to devastating wars, our modern world feels more prone to disastrous ‘shocks’ than ever before. Though we’ve got mountains of data and sophisticated models, we haven’t gotten much better at figuring out what looms around the corner. Social science has utterly failed to anticipate these bolts from the blue. In fact, most rigorous attempts to understand the social world simply ignore its chaotic quality – writing it off as ‘noise’ – so we can cram our complex reality into neater, tidier models. But when you peer closer at the underlying nature of causality, it becomes impossible to ignore the role of flukes and chance events. Shouldn’t our social models take chaos more seriously?
The problem is that social scientists don’t seem to know how to incorporate the nonlinearity of chaos. For how can disciplines such as psychology, sociology, economics and political science anticipate the world-changing effects of something as small as one consequential day of sightseeing or as ephemeral as passing clouds?
On 30 October 1926, Henry and Mabel Stimson stepped off a steam train in Kyoto, Japan and set in motion an unbroken chain of events that, two decades later, led to the deaths of 140,000 people in a city more than 300 km away.
The American couple began their short holiday in Japan’s former imperial capital by walking from the railway yard to their room at the nearby Miyako Hotel. It was autumn. The maples had turned crimson, and the ginkgo trees had burst into a golden shade of yellow. Henry chronicled a ‘beautiful day devoted to sightseeing’ in his diary.
Nineteen years later, he had become the United States Secretary of War, the chief civilian overseeing military operations in the Second World War, and would soon join a clandestine committee of soldiers and scientists tasked with deciding how to use the first atomic bomb. One Japanese city ticked several boxes: Kyoto, the former imperial capital. The Target Committee agreed that Kyoto must be destroyed. They drew up a tactical bombing map and decided to aim for the city’s railway yard, just around the corner from the Miyako Hotel where the Stimsons had stayed in 1926.
Stimson pleaded with President Harry Truman not to bomb Kyoto. He sent cables in protest. The generals began referring to Kyoto as Stimson’s ‘pet city’. Eventually, Truman acquiesced, removing Kyoto from the list of targets. On 6 August 1945, Hiroshima was bombed instead.
The next atomic bomb was intended for Kokura, a city at the tip of Japan’s southern island of Kyushu. On the morning of 9 August, three days after Hiroshima was destroyed, six US B-29 bombers were launched, including the strike plane Bockscar. Around 10:45am, Bockscar prepared to release its payload. But, according to the flight log, the target ‘was obscured by heavy ground haze and smoke’. The crew decided not to risk accidentally dropping the atomic bomb in the wrong place.
Bockscar then headed for the secondary target, Nagasaki. But it, too, was obscured. Running low on fuel, the plane prepared to return to base, but a momentary break in the clouds gave the bombardier a clear view of the city. Unbeknown to anyone below, Nagasaki was bombed due to passing clouds over Kokura. To this day, the Japanese refer to ‘Kokura’s luck’ when one unknowingly escapes disaster.
Roughly 200,000 people died in the attacks on Hiroshima and Nagasaki – and not Kyoto and Kokura – largely due to one couple’s vacation two decades earlier and some passing clouds. But if such random events could lead to so many deaths and change the direction of a globally destructive war, how are we to understand or predict the fates of human society? Where, in the models of social change, are we supposed to chart the variables for travel itineraries and clouds?
In the 1970s, the British statistician George Box quipped that ‘all models are wrong, but some are useful’. But today, many of the models we use to describe our social world are neither right nor useful. There is a better way. And it doesn’t entail a futile search for regular patterns in the maddening complexity of life. Instead, it involves learning to navigate the chaos of our social worlds…
[Klass reviews the history of our attempts to conquer uncertainty, concluding with Edward Norton “Butterfly Effect” Lorenz and what he discovered when he tried to predict the weather…]
… Any error, even a trillionth of a percentage point off on any part of the system, would eventually make any predictions about the future futile. Lorenz had discovered chaos theory.
The core principle of the theory is this: chaotic systems are highly sensitive to initial conditions. That means these systems are fully deterministic but also utterly unpredictable. As Poincaré had anticipated in 1908, small changes in conditions can produce enormous errors. By demonstrating this sensitivity, Lorenz proved Poincaré right.
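[Lorenz’s insight is easy to see in a few lines of code. Below is a minimal illustration in Python – ours, not Klaas’s – using the logistic map, a one-line deterministic equation. Two trajectories that start a trillionth apart soon disagree completely.]

```python
# The logistic map x -> r*x*(1 - x) in its chaotic regime (r = 4).
# Fully deterministic: there is no randomness anywhere below.

def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.300000000000)
b = trajectory(0.300000000001)   # perturbed by one part in a trillion

for t in (0, 10, 20, 30, 40, 50):
    print(f"step {t:2d}: |difference| = {abs(a[t] - b[t]):.6f}")
# The tiny initial error grows roughly exponentially until the two
# 'forecasts' are no more alike than two random numbers.
```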
Chaos theory, to this day, explains why our weather forecasts remain useless beyond a week or two. To predict meteorological changes accurately, we, like Laplace’s demon, would have to be perfect in our understanding of weather systems, and – no matter how advanced our supercomputers may seem – we never will be. Confidence in a predictable future, therefore, is the province of charlatans and fools; or, as the American Buddhist teacher Pema Chödrön put it: ‘If you’re invested in security and certainty, you are on the wrong planet.’
The second wrinkle in our conception of an ordered, certain world came from the discoveries of quantum mechanics that began in the early 20th century. Seemingly irreducible randomness was discovered in bewildering quantum equations, shifting the dominant scientific conception of our world from determinism to indeterminism (though some interpretations of quantum physics arguably remain compatible with a deterministic universe, such as the ‘many-worlds’ interpretation, Bohmian mechanics, also known as the ‘pilot-wave’ model, and the less prominent theory of superdeterminism). Scientific breakthroughs in quantum physics showed that the unruly nature of the Universe could not be fully explained by either gods or Newtonian physics. The world may be defined, at least in part, by equations that yield inexplicable randomness. And it is not just a partly random world, either. It is startlingly arbitrary…
…
… How can we make sense of social change when consequential shifts often arise from chaos? This is the untameable bane of social science, a field that tries to detect patterns and assert control over the most unruly, chaotic system that exists in the known Universe: 8 billion interacting human brains embedded in a constantly changing world. While we search for order and patterns, we spend less time focused on an obvious but consequential truth. Flukes matter.
Though some scholars in the 19th century, such as the English philosopher John Stuart Mill and his intellectual descendants, believed there were laws governing human behaviour, social science was swiftly disabused of the notion that a straightforward social physics was possible. Instead, most social scientists have aimed toward what the US sociologist Robert K Merton called ‘middle-range theory’, in which researchers hope to identify regularities and patterns in certain smaller realms that can perhaps later be stitched together to derive the broader theoretical underpinnings of human society. Though some social scientists are sceptical that such broader theoretical underpinnings exist, the most common approach to social science is to use empirical data from the past to tease out ordered patterns that point to stable relationships between causes and effects. Which variables best correlate with the onset of civil wars? Which economic indicators offer the most accurate early warning signs of recessions? What causes democracy?
In the mid-20th century, researchers no longer sought the social equivalent of a physical law (like gravity), but they still looked for ways of deriving clear-cut patterns within the social world. What limited this ability was technology. Just as Lorenz was constrained by the available technology when forecasting weather in the Pacific theatre of the Second World War, so too were social scientists constrained by a lack of computing power. This changed in the 1980s and ’90s, when cheap and sophisticated computers became new tools for understanding social worlds. Suddenly, social scientists – sociologists, economists, psychologists or political scientists – could take a large number of variables and plug them into statistical software packages such as SPSS and Stata, or programming languages such as R. Complex equations would then process these data points, finding the ‘line of best fit’ using a ‘linear regression’, to help explain how groups of humans change over time. A quantitative revolution was born.
By the 2000s, area studies specialists who had previously done their research by trekking across the globe and embedding themselves in specific cultures were largely supplanted by office-bound data junkies who could manipulate numbers and offer evidence of hidden relationships that were obscured prior to the rise of sophisticated numerical analysis. In the process, social science became dominated by one computational tool above all others: linear regressions. To help explain social change, this tool uses past data to try to understand the relationships between variables. A regression produces a simplified equation that tries to fit the cluster of real-world datapoints, while ‘controlling’ for potential confounders, in the hopes of identifying which variables drive change. Using this tool, researchers can feed a model with a seemingly endless string of data as they attempt to answer difficult questions. Does oil hinder democracy? How much does poverty affect political violence? What are the social determinants of crime? With the right data and a linear regression, researchers can plausibly identify patterns with defensible, data-driven equations. This is how much of our knowledge about social systems is currently produced. There is just one glaring problem: our social world isn’t linear. It’s chaotic…
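[For readers who have never met the workhorse tool in question, here is a bare-bones sketch of a linear regression in Python. The variable names and data are invented for illustration; no real dataset or published finding is implied.]

```python
# Fit a 'line of best fit' to synthetic data with numpy.
import numpy as np

rng = np.random.default_rng(0)
gdp_growth = rng.normal(2.0, 1.5, size=500)                # hypothetical predictor
unrest = 5.0 - 0.8 * gdp_growth + rng.normal(0, 2.0, 500)  # hypothetical outcome

# np.polyfit returns the slope and intercept of the best-fitting line
slope, intercept = np.polyfit(gdp_growth, unrest, deg=1)
print(f"unrest is approximately {slope:.2f} * gdp_growth + {intercept:.2f}")

# Note what the model assumes: effects scale in straight lines, and
# everything it cannot explain is shrugged off as 'noise'. Tipping
# points, feedback loops, and flukes have nowhere to live in it.
```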
… The deeply flawed assumptions of social modelling do not persist because economists and political scientists are idiots, but rather because the dominant tool for answering social questions has not been meaningfully updated for decades. It is true that some significant improvements have been made since the 1990s. We now have more careful data analysis, better accounting for systematic bias, and more sophisticated methods for inferring causality, as well as new approaches, such as experiments that use randomised controlled trials. However, these approaches can’t solve many of the lingering problems of tackling complexity and chaos. For example, how would you ethically run an experiment to determine which factors definitively provoke civil wars? And how do you know that an experiment in one place and time would produce a similar result a year later in a different part of the world?
These drawbacks have meant that, despite tremendous innovations in technology, linear regressions remain the outdated king of social research. As the US economist J Doyne Farmer puts it in his book Making Sense of Chaos (2024): ‘The core assumptions of mainstream economics don’t match reality, and the methods based on them don’t scale well from small problems to big problems.’ For Farmer, these methods are primarily limited by technology. They have been, he writes, ‘unable to take full advantage of the huge advances in data and technology.’
The drawbacks also mean that social research often has poor predictive power. And, as a result, social science doesn’t even really try to make predictions. In 2022, Mark Verhagen, a research fellow at the University of Oxford, examined a decade of articles in the top academic journals in a variety of disciplines. Only 12 articles out of 2,414 tried to make predictions in the American Economic Review. For the top political science journal, American Political Science Review, the figure was 4 out of 743. And in the American Journal of Sociology, not a single article made a concrete prediction. This has yielded the bizarre dynamic that many social science models can never be definitively falsified, so some deeply flawed theories linger on indefinitely as zombie ideas that refuse to die.
A core purpose of social science research is to prevent avoidable problems and improve human prosperity. Surely that requires more researchers to make predictions about the world at some point – even if chaos theory shows that those claims are likely to be inaccurate.
We produce too many models that are often wrong and rarely useful. But there is a better way. And it will come from synthesising lessons from fields that social scientists have mostly ignored.
Chaos theory emerged in the 1960s and, in the following decades, mathematical physicists such as David Ruelle and Philip Anderson recognised the significance of Lorenz’s insights for our understanding of real-world dynamical systems. As these ideas spread, misfit thinkers from an array of disciplines began to coalesce around a new way of thinking that was at odds with the mainstream conventions in their own fields. They called it ‘complexity’ or ‘complex systems’ research. For these early thinkers, Mecca was the Santa Fe Institute in New Mexico, not far from the sagebrush-dotted hills where the atomic bomb was born. But unlike Mecca, the Santa Fe Institute did not become the hub of a global movement.
Public interest in chaos and complexity surged in the 1980s and ’90s with the publication of James Gleick’s popular science book Chaos (1987), and a prominent reference from Jeff Goldblum’s character in the film Jurassic Park (1993). ‘The shorthand is the butterfly effect,’ he says, when asked to explain chaos theory. ‘A butterfly can flap its wings in Peking and in Central Park you get rain instead of sunshine.’ But aside from a few fringe thinkers who broke free of disciplinary silos, social science responded to the complexity craze mostly with a shrug. This was a profound error, which has contributed to our flawed understanding of some of the most basic questions about society. Taking chaos and complexity seriously requires a fresh approach.
One alternative to linear regressions is agent-based modelling, a kind of virtual experiment in which computers simulate the behaviour of individual people within a society. This tool allows researchers to see how individual actions, with their own motivations, come together to create larger social patterns. Agent-based modelling has been effective at solving problems that involve relatively straightforward decision-making, such as flows of car traffic or the spread of disease during a pandemic. As these models improve, with advances in computational power, they will inevitably continue to yield actionable insights for more complex social domains. Crucially, agent-based models can capture nonlinear dynamics and emergent phenomena, and reveal unexpected bottlenecks or tipping points that would otherwise go unnoticed. They might allow us to better imagine possible worlds, not just measure patterns from the past. They offer a powerful but underused tool in future-oriented social research involving complex systems.
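[A toy example gives the flavor. The sketch below is an assumption-laden illustration, not a research-grade simulator: 1,000 agents mix at random each day, and an epidemic curve emerges that no single agent ‘decides’. All parameters are made up.]

```python
# A minimal agent-based model of contagion:
# S = susceptible, I = infected, R = recovered.
import random

random.seed(42)
N, DAYS, CONTACTS, P_INFECT, P_RECOVER = 1000, 60, 5, 0.05, 0.1
state = ["S"] * N
state[0] = "I"                      # one initial case

for day in range(DAYS):
    newly_infected = []
    for i in range(N):
        if state[i] != "I":
            continue
        for _ in range(CONTACTS):   # each infected agent mixes at random
            j = random.randrange(N)
            if state[j] == "S" and random.random() < P_INFECT:
                newly_infected.append(j)
        if random.random() < P_RECOVER:
            state[i] = "R"
    for j in newly_infected:
        if state[j] == "S":
            state[j] = "I"
    if day % 10 == 0:
        print(day, state.count("S"), state.count("I"), state.count("R"))
```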
Additionally, social scientists could incorporate chaotic dynamics by acknowledging the limits of seeking regularities and patterns. Instead, they might try to anticipate and identify systems on the brink, near a consequential tipping point – systems that could be set off by a disgruntled vegetable vendor or triggered by a murdered archduke. The study of ‘self-organised criticality’ in physics and complexity science could help social scientists make sense of this kind of fragility. Proposed by the physicists Per Bak, Chao Tang and Kurt Wiesenfeld, the concept offers a useful analogy for social systems that may disastrously collapse. When a system organises itself toward a critical state, a single fluke could cause the system to change abruptly. By analogy, modern trade networks race toward an optimised but fragile state: a single gust of wind can twist one boat sideways and cause billions of dollars in economic damage, as happened in 2021 when a ship blocked the Suez Canal.
The theory of self-organised criticality was based on the sandpile model, which can be used to evaluate how and why cascades or avalanches occur within systems. If you add grains of sand to a pile one at a time, eventually a single grain will cause an avalanche – and that collapse becomes more likely as the sandpile builds toward its limit. A social sandpile model could provide a useful intellectual framework for analysing the resilience of complex social systems. Someone lighting themselves on fire in Norway, God forbid, is unlikely to spark a civil war or regime collapse. That is because the Norwegian sandpile is lower, less stretched to its limit, and therefore less prone to unexpected cascades and tipping points than the towering sandpile that led to the Arab Spring.
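[The original sandpile model fits in a few lines. Below is a bare-bones Python sketch of the Bak-Tang-Wiesenfeld version, offered for intuition rather than as a calibrated model of any real society: most grains do nothing, but as the pile self-organises toward its critical state, a single grain occasionally triggers a system-wide avalanche.]

```python
# Bak-Tang-Wiesenfeld sandpile: any cell holding 4+ grains topples,
# shedding one grain to each neighbour (grains fall off the edges).
import random

random.seed(1)
SIZE = 20
grid = [[0] * SIZE for _ in range(SIZE)]

def drop_grain():
    """Drop one grain at a random site; return the avalanche size."""
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    grid[r][c] += 1
    unstable, topples = [(r, c)], 0
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4
        topples += 1
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < SIZE and 0 <= nj < SIZE:
                grid[ni][nj] += 1
                unstable.append((ni, nj))
    return topples

sizes = [drop_grain() for _ in range(20000)]
print("drops causing no avalanche:", sizes.count(0))
print("largest avalanche:", max(sizes), "topples")
```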
There are other lessons for social research to be learned from nonlinear evaluations of ecological breakdown. In biology, for instance, the theory of ‘critical slowing down’ predicts that systems near a tipping point – like a struggling coral reef that is being overrun with algae – will take longer to recover from small disturbances. This response seems to act as an early warning system for ecosystems on the brink of collapse.
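[The warning signal is easy to demonstrate in the simplest possible dynamical system. The hedged sketch below, with invented parameters, kicks a linear system away from equilibrium and times its recovery: as the feedback parameter a approaches the tipping point at 1, recovery takes longer and longer.]

```python
# 'Critical slowing down' in a linear relaxation x -> a * x.

def recovery_time(a, kick=5.0, tol=0.5):
    """Steps for a kicked system to decay back within tol of equilibrium."""
    x, t = kick, 0
    while abs(x) > tol:
        x = a * x          # each step, the disturbance shrinks by factor a
        t += 1
    return t

for a in (0.5, 0.9, 0.99, 0.999):
    print(f"a = {a}: recovers in {recovery_time(a):5d} steps")
# Watching recovery times stretch out is the early-warning signal:
# near a = 1, the system barely pulls back toward equilibrium at all.
```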
Social scientists should be drawing on these innovations from complex systems and related fields of research rather than ignoring them. Better efforts to study resilience and fragility in nonlinear systems would drastically improve our ability to avert avoidable catastrophes. And yet, so much social research still chases the outdated dream of distilling the chaotic complexity of our world into a straightforward equation, a simple, ordered representation of a fundamentally disordered world.
When we try to explain our social world, we foolishly ignore the flukes. We imagine that the levers of social change and the gears of history are constrained, not chaotic. We cling to a stripped-down, storybook version of reality, hoping to discover stable patterns. When given the choice between complex uncertainty and comforting – but wrong – certainty, we too often choose comfort.
In truth, we live in an unruly world often governed by chaos. And in that world, the trajectory of our lives, our societies and our histories can forever be diverted by something as small as stepping off a steam train for a beautiful day of sightseeing, or as ephemeral as passing clouds…
Eminently worth reading in full: “The forces of chance,” from @brianklaas in @aeonmag.
* Frederick the Great, in a letter to Voltaire
###
As we contemplate contingency, we might recall that it was on this date in 1906, at the first International Radiotelegraph Convention in Berlin, that the Morse Code signal “SOS”– “. . . _ _ _ . . .”– became the global standard radio distress signal. While it was officially replaced in 1999 by the Global Maritime Distress and Safety System, SOS is still recognized as a visual distress signal.
SOS has traditionally been “translated” (expanded) to mean “save our ship,” “save our souls,” “send out succor,” or other such pleas. But while these may be helpful mnemonics, SOS is not an abbreviation or acronym. Rather, according to the Oxford English Dictionary, the letters were chosen simply because they are easily transmitted in Morse code.

Written by (Roughly) Daily
November 3, 2024 at 1:00 am
Posted in Uncategorized
Tagged with Chaos, chaos theory, foresight, history, models, Morse Code, prediction, Science, social science, social sciences, SOS
“Why does a public discussion of economic policy so often show the abysmal ignorance of the participants?”*…
… It could, Walt Frick suggests, have to do with the way in which economics has been taught for decades, centering zombie ideas that date from before economics began to become an empirical discipline. Happily, he suggests, that may be changing…
What happens to the job market when the government raises the minimum wage? For decades, higher education in the United States has taught economics students to answer this question by reasoning from first principles. When the price of something rises, people tend to buy less of it. Therefore, if the price of labour rises, businesses will choose to ‘buy’ less of it – meaning they’ll hire fewer people. Students learn that a higher minimum wage means fewer jobs.
But there’s another way to answer the question, and in the early 1990s the economists David Card and Alan Krueger tried it: they went out and looked. Card and Krueger collected data on fast-food jobs along the border between New Jersey and Pennsylvania, before and after New Jersey’s minimum wage increase. The fast-food restaurants on the New Jersey side of the border were similar to the ones on the Pennsylvania side in nearly every respect, except that they now had to pay higher wages. Would they hire fewer workers in response?
‘The prediction from conventional economic theory is unambiguous,’ Card and Krueger wrote. It was also wrong. Fast-food restaurants in New Jersey didn’t hire fewer workers – instead, Card and Krueger found that employment slightly increased. Their paper set off a hunt for other ‘natural experiments’ that could rigorously test economic theory and – alongside other research agendas like behavioural economics – transformed the field.
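[The logic of such natural experiments is worth spelling out. Here is a toy ‘difference-in-differences’ calculation in Python; the employment numbers below are invented for illustration and are not Card and Krueger’s data.]

```python
# Difference-in-differences: Pennsylvania's change estimates what New
# Jersey would have done anyway; any extra change in New Jersey is
# attributed to the minimum-wage increase.
employment = {
    # state: (avg. employees per store before, after), hypothetical numbers
    "NJ": (20.4, 21.0),   # minimum wage rose
    "PA": (23.3, 21.2),   # minimum wage unchanged (the control)
}

nj_change = employment["NJ"][1] - employment["NJ"][0]
pa_change = employment["PA"][1] - employment["PA"][0]
did = nj_change - pa_change

print(f"NJ change: {nj_change:+.1f}, PA change: {pa_change:+.1f}")
print(f"difference-in-differences estimate: {did:+.1f} jobs per store")
# A positive estimate cuts against the 'higher wage, fewer jobs' prediction.
```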
Over the past 30 years, PhD-level education in economics has become more empirical, more psychological, and more attuned to the many ways that markets can fail. Introductory economics courses, however, are not so easy to transform. Big, synoptic textbooks are hard to put together and, once they are adopted as the foundation of introductory courses, professors and institutions are slow to abandon them. So introductory economics textbooks have continued to teach that a higher minimum wage leads to fewer people working – usually as an example of how useful and relevant the simple model of competitive markets could be. As a result of this lag between what economists know and how introductory economics is taught, a gulf developed between the way students first encounter economics and how most leading economists practice it. Students learned about the virtues of markets, deduced from a few seemingly simple assumptions. Economists and their graduate students, meanwhile, catalogued more and more ways those assumptions could go wrong.
Today, 30 years after Card and Krueger’s paper, economics curriculums around the world continue to challenge the facile view that students used to learn, in which unfettered markets work wonders. These changes – like spending more time studying market failures or emphasising individuals’ capacity for altruism, not just selfishness – have a political valence since conservatives often hide behind the laissez-faire logic of introductory economics. But the evolution of Econ 101 is not as subversive as it may sound. Instead, it reflects the direction the wider discipline has taken toward empiricism and more varied models of economic behaviour. Econ 101 is not changing to reflect a particular ideology; it is finally catching up to the field it purports to represent….
[Frick describes the recent evolution– or revolution– in curricula…]
… It’s tempting to judge [open-source text project] CORE and even Harvard’s [recently-overhauled introductory economics course] Ec10 in ideological terms – as an overdue response or countermeasure to a laissez-faire approach. But the evolution of Econ 101 is about more than politics. (Despite its focus on traditionally more progressive topics, CORE has been criticised for being insufficiently ‘heterodox’, according to [the Oxford economist Margaret] Stevens.) By elevating empiricism and by teaching multiple models of the economy, students in these new curriculums are learning how social sciences actually work.
“A model is just an allegory,” says the economist David Autor in his intermediate microeconomics course at MIT. For decades, Econ 101 taught one major allegory, in which markets worked well of their own accord, and buyers and sellers all emerged better off. Government, when it was mentioned at all, was frequently portrayed as an overzealous maintenance man – able to solve some problems but also meddling in markets that were fine on their own.
That is not how most contemporary economists think. Instead, they see the competitive market as one model among many. ‘The multiplicity of models is economics’ strength,’ writes the Harvard economist Dani Rodrik in Economics Rules (2015). ‘[W]e have a menu to choose from and need an empirical method for making that choice.’ As the Econ 101 curriculum catches up, economics students are finally getting a taste of the variety that the field has to offer.
As much of an improvement as the new curriculums are, they raise a puzzle. The traditional Econ 101 course was, for all its flaws, coherent and memorable. Students came away with a clear framework for thinking about the world. What does the new Econ 101 leave students with, other than an appreciation that the world is complicated, and that data is important?
[UCL economist and CORE co-creator Wendy] Carlin’s answer is that “the workhorse [of Econ 101] is that actors make decisions.” Modelling those decisions remains a central part of economics. What’s changed is the way decision-makers are represented: they can be selfish, but they can also be altruistic. They can be rational, but they can also be biased or blinkered. They are social and strategic, and they interact with one another not just with the faceless market. Models help approximate the most salient features of these interactions, and students learn several different ones to guide their understanding. They also learn that models must fit the facts, and that a crucial part of economics is leaving the armchair and observing what is going on in the world…
On the importance of recognizing the mutability of models and re-emphasizing learning in an essential discipline: “Economics 101,” from @wfrick in @aeonmag.
* economist (and Nobel Laureate) Robert Solow
###
As we revise, we might recall that it was on this date in 1963 that President John F. Kennedy signed the Equal Pay Act into law. Aimed at abolishing wage disparity based on sex, the legislation was part of Kennedy’s New Frontier Program. On the one hand, since its enactment, the wage gap has narrowed; on the other, it is still large: in 1963, women were on average paid about 60% of a man’s income for the same job; today, that figure is roughly 80%.
Opponents of the Act (including, of course, many economists) suggested that higher wages for women would discourage employers from hiring them; in fact, female participation in the workforce has grown– the gap between their participation and that of prime-age men has shrunk to less than one-third of its previous size. Some of those critics also argued that higher wages for women would be a drag on the economy; to observe the obvious, the economy has, by myriad measures, grown materially over the period– indeed, beyond the “no EPA” projections of those opponents.

Written by (Roughly) Daily
June 10, 2024 at 1:00 am
Posted in Uncategorized
Tagged with college, courses, culture, discrimination, economics, education, Equal Pay Act, gender, history, John F Kennedy, New Frontier, policy, Psychology, sex, social science, teaching
“If it looks like a duck, walks like a duck, and quacks like a duck, everyone will need to consider that it may not have actually hatched from an egg”*…

Emerging technology is being used (as ever it has been) to exploit our reflexive assumptions. Victor R. Lee suggests that it’s time to recalibrate how authenticity is judged…
It turns out that the pop stars Drake and The Weeknd didn’t suddenly drop a new track that went viral on TikTok and YouTube in April 2023. The photograph that won an international photography competition that same month wasn’t a real photograph. And the image of Pope Francis sporting a Balenciaga jacket that appeared in March 2023? That was also a fake.
All were made with the help of generative artificial intelligence, the new technology that can generate humanlike text, audio, and images on demand through programs such as ChatGPT, Midjourney, and Bard, among others.
There’s certainly something unsettling about the ease with which people can be duped by these fakes, and I see it as a harbinger of an authenticity crisis that raises some difficult questions.
How will voters know whether a video of a political candidate saying something offensive was real or generated by AI? Will people be willing to pay artists for their work when AI can create something visually stunning? Why follow certain authors when stories in their writing style will be freely circulating on the internet?
I’ve been seeing the anxiety play out all around me at Stanford University, where I’m a professor and also lead a large generative AI and education initiative.
With text, image, audio, and video all becoming easier for anyone to produce through new generative AI tools, I believe people are going to need to reexamine and recalibrate how authenticity is judged in the first place.
Fortunately, social science offers some guidance.
Long before generative AI and ChatGPT rose to the fore, people had been probing what makes something feel authentic…
“Rethinking Authenticity in the Era of Generative AI,” from @VicariousLee in @undarkmag. Eminently worth reading in full.
And to put these issues into a socio-economic context, see Ted Chiang’s “Will A.I. Become the New McKinsey?” (and closer to the theme of the piece above, his earlier “ChatGPT Is a Blurry JPEG of the Web”).
* Victor R. Lee (in the article linked above)
###
As we ruminate on the real, we might send sentient birthday greetings to Oliver Selfridge; he was born on this date in 1926. A mathematician, he became an early– and seminal– computer scientist: a pioneer in artificial intelligence, and “the father of machine perception.”
Marvin Minsky considered Selfridge to be one of his mentors, and with Selfridge organized the 1956 Dartmouth workshop that is considered the founding event of artificial intelligence as a field. Selfridge wrote important early papers on neural networks, pattern recognition, and machine learning; and his “Pandemonium” paper (1959) is generally recognized as a classic in artificial intelligence. In it, Selfridge introduced the notion of “demons” that record events as they occur, recognize patterns in those events, and may trigger subsequent events according to patterns they recognize– which, over time, gave rise to aspect-oriented programming.
Written by (Roughly) Daily
May 10, 2023 at 1:00 am
Posted in Uncategorized
Tagged with AI, artificial intelligence, authenticity, computing, culture, Dartmouth workshop, fakes, history, Marvin Minsky, Oliver Selfridge, perception, social science
“The functionalist organization, by privileging progress (i.e. time), causes the condition of its own possibility”*…
Meet the new boss, painfully similar to the old boss…
While people in and around the tech industry debate whether algorithms are political at all, social scientists take the politics as a given, asking instead how this politics unfolds: how algorithms concretely govern. What we call “high-tech modernism”—the application of machine learning algorithms to organize our social, economic, and political life—has a dual logic. On the one hand, like traditional bureaucracy, it is an engine of classification, even if it categorizes people and things very differently. On the other, like the market, it provides a means of self-adjusting allocation, though its feedback loops work differently from the price system. Perhaps the most important consequence of high-tech modernism for the contemporary moral political economy is how it weaves hierarchy and data-gathering into the warp and woof of everyday life, replacing visible feedback loops with invisible ones, and suggesting that highly mediated outcomes are in fact the unmediated expression of people’s own true wishes…
From Henry Farrell and Marion Fourcade, a reminder that what’s old is new again: “The Moral Economy of High-Tech Modernism,” in an issue of Daedalus, edited by Farrell and Margaret Levi (@margaretlevi).
See also: “The Algorithm Society and Its Discontents” (or here) by Brad DeLong (@delong).
Apposite: “What Greek myths can teach us about the dangers of AI.”
(Image above: source)
* “The functionalist organization, by privileging progress (i.e. time), causes the condition of its own possibility–space itself–to be forgotten: space thus becomes the blind spot in a scientific and political technology. This is the way in which the Concept-city functions: a place of transformations and appropriations, the object of various kinds of interference but also a subject that is constantly enriched by new attributes, it is simultaneously the machinery and the hero of modernity.” – Michel de Certeau
###
As we ponder platforms, we might recall that it was on this date in 1955 that the first computer operating system was demonstrated…
Computer pioneer Doug Ross demonstrates the Director tape for MIT’s Whirlwind machine. It’s a new idea: a permanent set of instructions on how the computer should operate.
Six years in the making, MIT’s Whirlwind computer was the first digital computer that could display real-time text and graphics on a video terminal, which was then just a large oscilloscope screen. Whirlwind used 4,500 vacuum tubes to process data…
Another one of its contributions was Director, a set of programming instructions…
The first permanent set of instructions for a computer, it was in essence the first operating system. Loaded by paper tape, Director allowed operators to load multiple problems in Whirlwind by taking advantage of newer, faster photoelectric tape reader technology, eliminating the need for manual human intervention in changing tapes on older mechanical tape readers.

“March 8, 1955: The Mother of All Operating Systems”

Written by (Roughly) Daily
March 8, 2023 at 1:00 am
Posted in Uncategorized
Tagged with algorithms, bureaucracy, computers, computing, Doug Ross, governance, history, markets, modernism, operating system, organization, platforms, politics, social science, Technology, Whirlwind
“Rumors and reports of man’s relation with animals are the world’s oldest news stories, headlined in the stars of the zodiac, posted on the walls of prehistoric caves”*…
Aerial view of a kite in the Khaybar area of north-west Saudi Arabia. These ancient hunting structures were named ‘kites’ by aviators in the 1920s because, observed from above, their form is reminiscent of old-fashioned child’s kites with streamers.
… and on the surface of the desert. Vittoria Benzine explains…
In the 1920s, British Royal Air Force pilots over the Middle East recorded the first sightings of what they dubbed desert kites—massive patterns carved into rocky land, often resembling the famous flying toy.
Archaeologists have since debated the purpose of these enigmas, which appear across geographies and eras, dating back to the Neolithic Period (10,000–2,200 B.C.E.) in Jordan, the early Bronze Age (3,300–2,100 B.C.E.) in Israel’s Negev Desert, and the Middle Bronze Age (2,100–1,550 B.C.E.) in Armenia. Some thought they were cultural cornerstones. Others posited they were pens for domesticating animals.
Three recent peer-reviewed papers confirm popular hypotheses that the desert kites actually served as mass hunting traps, allowing early desert dwellers to kill entire herds of game at once. While they were active, the kites funneled gazelle and ibex down tapered, wall-lined paths which ended in massive pits or sudden cliffs where creatures were trapped and killed. The kites’ particular placement, length, and shape generally demonstrate a sophisticated knowledge of landscapes and animal behaviors…
The full story at “Scientists Have Cracked the Origins of ‘Desert Kites,’ Massive Prehistoric Patterns That Were Carved into the Middle Eastern Desert,” from @vittoriabenzine in @artnet.
* Lewis Lapham
###
As we admire ingenuity, we might spare a thought for Siegfried Frederick (“S.F.” or “Fred”) Nadel; he died on this date in 1956. An anthropologist who did important work in Africa, he is best remembered as a theorist whose work built on the thinking of Bronislaw Malinowski, sociologist Max Weber, philosopher Alfred North Whitehead, and psychologist Kurt Koffka. In The Foundations of Social Anthropology (1951) he asserted that the main task of the science is to explain as well as to describe aim-controlled, purposive behaviour. Suggesting that sociological facts emerge from psychological facts, he argued that full explanations are to be derived from psychological exploration of motivation and consciousness. And in his posthumous Theory of Social Structure (1958), regarded as one of the 20th century’s foremost theoretical works in the social sciences, Nadel examined social roles, which he considered to be crucial in the analysis of social structure.
Written by (Roughly) Daily
January 14, 2023 at 1:00 am
Posted in Uncategorized
Tagged with ancient history, ancient technology, anthropology, Archaeology, desert kites, history, Nadel, Science, social science, social structure, Technology