“Why has our age surrendered so easily to the controllers, the manipulators, the conditioners of an authoritarian technics?”*…

Half a century ago, Lewis Mumford developed a concept that explains why we trade autonomy for convenience…

… Surveying the state of the high-tech life, it is tempting to ponder how it got so bad, while simultaneously forgetting what it was that initially convinced one to hastily click “I agree” on the terms of service. Before certain social media platforms became foul-smelling swamps of conspiratorial misinformation, many of us joined them for what seemed like good reasons; before sighing at the speed with which their batteries die, smartphone owners were once awed by these devices; before grumbling that there was nothing worth watching, viewers were astounded by how much streaming content was available at one’s fingertips. Overwhelmed by the way today’s tech seems to be burying us in the bad, it’s easy to forget the extent to which tech won us over by offering us a share in the good — or to be more precise, in “the goods.”

Nearly 50 years ago, long before smartphones and social media, the social critic Lewis Mumford put a name to the way that complex technological systems offer a share in their benefits in exchange for compliance. He called it a “bribe.” With this label, Mumford sought to acknowledge the genuine plenitude that technological systems make available to many people, while emphasizing that this is not an offer of a gift but of a deal. Surrender to the power of complex technological systems — allow them to oversee, track, quantify, guide, manipulate, grade, nudge, and surveil you — and the system will offer you back an appealing share in its spoils. What is good for the growth of the technological system is presented as also being good for the individual, and as proof of this, here is something new and shiny. Sure, that shiny new thing is keeping tabs on you (and feeding all of that information back to the larger technological system), but it also lets you do things you genuinely could not do before. For a bribe to be accepted, it needs to promise something truly enticing, and Mumford, in his essay “Authoritarian and Democratic Technics,” acknowledged that “the bargain we are being asked to ratify takes the form of a magnificent bribe.” The danger, however, was that “once one opts for the system no further choice remains.”

For Mumford, the bribe was not primarily about getting people into the habit of buying new gadgets and machines. Rather it was about incorporating people into a world that complex technological systems were remaking in their own image. Anticipating resistance, the bribe meets people not with the boot heel, but with the gift subscription.

The bribe is a discomforting concept. It asks us to consider the ways the things we purchase wind up buying us off, it asks us to see how taking that first bribe makes it easier to take the next one, and, even as it pushes us to reflect on our own complicity, it reminds us of the ways technological systems eliminate their alternatives. Writing about the bribe decades ago, Mumford was trying to sound the alarm, as he put it: “This is not a prediction of what will happen, but a warning against what may happen.” As with all of his glum predictions, it was one that Mumford hoped to be proven wrong about. Yet as one scrolls between reviews of the latest smartphone, revelations about the latest misdeeds of some massive tech company, and commentary about the way we have become so reliant on these systems that we cannot seriously speak about simply turning them off — it seems clear that what Mumford warned “may happen” has indeed happened…

Eminently worth reading in full: “The Magnificent Bribe,” by Zachary Loeb in @_reallifemag.

As to (some of) the modern implications of that bargain, see also Shoshana Zuboff’s “You Are the Object of a Secret Extraction Operation.”

As we move into the third decade of the 21st century, surveillance capitalism is the dominant economic institution of our time. In the absence of countervailing law, this system successfully mediates nearly every aspect of human engagement with digital information. The promise of the surveillance dividend now draws surveillance economics into the “normal” economy, from insurance, retail, banking and finance to agriculture, automobiles, education, health care and more. Today all apps and software, no matter how benign they appear, are designed to maximize data collection.

Historically, great concentrations of corporate power were associated with economic harms. But when human data are the raw material and predictions of human behavior are the product, then the harms are social rather than economic. The difficulty is that these novel harms are typically understood as separate, even unrelated, problems, which makes them impossible to solve. Instead, each new stage of harm creates the conditions for the next stage…

And resonantly: “AI-tocracy,” a working paper from NBER that links the development of artificial intelligence with the interests of autocracies. From the abstract:

Can frontier innovation be sustained under autocracy? We argue that innovation and autocracy can be mutually reinforcing when: (i) the new technology bolsters the autocrat’s power; and (ii) the autocrat’s demand for the technology stimulates further innovation in applications beyond those benefiting it directly. We test for such a mutually reinforcing relationship in the context of facial recognition AI in China. To do so, we gather comprehensive data on AI firms and government procurement contracts, as well as on social unrest across China during the last decade. We first show that autocrats benefit from AI: local unrest leads to greater government procurement of facial recognition AI, and increased AI procurement suppresses subsequent unrest. We then show that AI innovation benefits from autocrats’ suppression of unrest: the contracted AI firms innovate more both for the government and commercial markets. Taken together, these results suggest the possibility of sustained AI innovation under the Chinese regime: AI innovation entrenches the regime, and the regime’s investment in AI for political control stimulates further frontier innovation.
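
The abstract’s first claim, that local unrest predicts later government procurement of facial-recognition AI, is the kind of relationship one would probe with a panel regression. The paper’s actual data and identification strategy aren’t reproduced here; the sketch below uses invented prefecture-year data and hypothetical column names, purely to illustrate the shape of such a test:

```python
# A hedged sketch of a panel regression testing whether local unrest
# predicts facial-recognition AI procurement the following year.
# NOT the paper's specification: column names and data are fabricated.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "prefecture":        ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "year":              [2014, 2015, 2016] * 3,
    "unrest_events":     [2, 5, 1, 0, 1, 0, 3, 7, 4],   # fabricated counts
    "ai_contracts_next": [1, 4, 2, 0, 0, 1, 2, 6, 3],   # fabricated counts
})

# Unrest this year vs. AI procurement the next year, with prefecture and
# year fixed effects absorbing local and temporal shocks:
model = smf.ols("ai_contracts_next ~ unrest_events + C(prefecture) + C(year)",
                data=df).fit()
print(model.params["unrest_events"])  # a positive coefficient is consistent
                                      # with unrest driving procurement
```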

(And, Anne Applebaum warns, “The Bad Guys Are Winning.”)

* “Why has our age surrendered so easily to the controllers, the manipulators, the conditioners of an authoritarian technics? The answer to this question is both paradoxical and ironic. Present day technics differs from that of the overtly brutal, half-baked authoritarian systems of the past in one highly favorable particular: it has accepted the basic principle of democracy, that every member of society should have a share in its goods. By progressively fulfilling this part of the democratic promise, our system has achieved a hold over the whole community that threatens to wipe out every other vestige of democracy.

The bargain we are being asked to ratify takes the form of a magnificent bribe. Under the democratic-authoritarian social contract, each member of the community may claim every material advantage, every intellectual and emotional stimulus he may desire, in quantities hardly available hitherto even for a restricted minority: food, housing, swift transportation, instantaneous communication, medical care, entertainment, education. But on one condition: that one must not merely ask for nothing that the system does not provide, but likewise agree to take everything offered, duly processed and fabricated, homogenized and equalized, in the precise quantities that the system, rather than the person, requires. Once one opts for the system no further choice remains. In a word, if one surrenders one’s life at source, authoritarian technics will give back as much of it as can be mechanically graded, quantitatively multiplied, collectively manipulated and magnified.”

– Lewis Mumford in “Authoritarian and Democratic Technics,” via @LMSacasas

###

As we untangle user agreements, we might recall that it was on this date in 1970 that Douglas Engelbart (see here, here, and here) was granted a patent (US No. 3,541,541) on the “X-Y Position Indicator for a Display System,” the world’s first prototype computer mouse– a wooden block containing the tracking apparatus, with a single button attached.


“What might once have been called advertising must now be understood as continuous behavior modification on a titanic scale, but without informed consent”*…

Illustration by Anders Nilsen

“Which category have they put you in?”

This sinister question—at least, it was meant to sound sinister—headlined the advertising copy for The 480, a 1964 novel by Eugene Burdick. His previous best sellers, The Ugly American and Fail-Safe, had caused sensations in political circles, and the new one promised to do the same. Its jacket featured the image of a punched card. The title referred to 480 categories of voter, defined by region, religion, age, and other demographic characteristics, such as “Midwestern, rural, Protestant, lower income, female.” Many readers recoiled from the notion of being sorted into one of these boxes. The New York Times’s reviewer called The 480 a “shock novel” and found it implausible.

What was so shocking? What was implausible? The idea that a company might use computer technology and behavioral science to gather and crunch data on American citizens, with the nefarious goal of influencing a presidential election.

In the 1950s and 1960s this seemed like science fiction. Actually, The 480 was a thinly disguised roman à clef, based on a real-life company called Simulmatics, which had secretly worked for the 1960 campaign of John F. Kennedy. Burdick had been a political operative himself and knew the Simulmatics founders well. The company’s confidential reports and memoranda went straight into his prose. And the 480 categories—listed in an appendix to the novel—were the real Simulmatics voter types, the creation of what one of its founders called “a kind of Manhattan Project gamble in politics.”
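
The 480 types were hand-curated rather than generated mechanically, but the combinatorics behind such a typology is easy to see: crossing even a few demographic dimensions yields hundreds of buckets. A sketch with invented attribute lists (not Simulmatics’ actual dimensions):

```python
# How a few demographic dimensions multiply into hundreds of voter "types".
# These attribute lists are invented; the real 480 Simulmatics categories
# were hand-curated, not a mechanical cross-product.
from itertools import product

region   = ["Northeast", "Midwest", "South", "West"]
locale   = ["urban", "rural"]
religion = ["Protestant", "Catholic", "Jewish"]
income   = ["lower", "middle", "upper"]
sex      = ["male", "female"]

voter_types = list(product(region, locale, religion, income, sex))
print(len(voter_types))  # 4 * 2 * 3 * 3 * 2 = 144 buckets from five dimensions
print(voter_types[0])    # ('Northeast', 'urban', 'Protestant', 'lower', 'male')
```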

Simulmatics was founded in 1959 and lasted eleven years. Jill Lepore mentioned its involvement in the Kennedy campaign in These Truths (2018), her monumental history of the United States; she was already on the trail of the story she tells in her new book, If Then. Lepore is a brilliant and prolific historian with an eye for unusual and revealing stories, and this one is a remarkable saga, sometimes comical, sometimes ominous: a “shadow history of the 1960s,” as she writes, because Simulmatics stumbled through the decade as a bit player, onstage for the Vietnam War, the civil rights movement, the Great Society, the riots and protests. It began with grand ambitions to invent a new kind of predictive behavioral science, in a research environment increasingly tied to a rising defense establishment amid the anxiety of the cold war. It ended ignominiously, in embarrassment and bankruptcy.

Irving Kristol, the future architect of neoconservatism, dismissed Simulmatics in 1964 as “a struggling little company which, despite the fact that it worked on a few problems for the Kennedy organization in 1960, has since had a difficult time making ends meet,” and he wasn’t wrong. Today it is almost completely forgotten. Yet Lepore finds in it a plausible untold origin story for our current panopticon: a world of constant surveillance, if not by the state then by megacorporations that make vast fortunes by predicting and manipulating our behavior—including, most insidiously, our behavior as voters…

The ever-illuminating James Gleick (@JamesGleick) unpacks the remarkable Jill Lepore’s new history, If Then: How the Simulmatics Corporation Invented the Future: “Simulating Democracy.”

See also: this week’s Bloomberg Businessweek, and for historical perspective, “Age of Invention: The Tools of Absolutism.”

* Jaron Lanier (see, e.g., here and here)

###

As we think about the targets painted on our chests, we might recall that it was on this date in 2011 that Facebook introduced the Timeline as the design of a user’s main Facebook page.


Written by (Roughly) Daily

September 22, 2020 at 1:01 am

“Plans are worthless, but planning is everything”*…

 


 

We’re living through a real-time natural experiment on a global scale. The differential performance of countries, cities and regions in the face of the COVID-19 pandemic is a live test of the effectiveness, capacity and legitimacy of governments, leaders and social contracts.

The progression of the initial outbreak in different countries followed three main patterns. Countries like Singapore and Taiwan represented Pattern A, where (despite many connections to the original source of the outbreak in China) vigilant government action effectively cut off community transmission, keeping total cases and deaths low. China and South Korea represented Pattern B: an initial uncontrolled outbreak followed by draconian government interventions that succeeded in getting at least the first wave of the outbreak under control.

Pattern C is represented by countries like Italy and Iran, where waiting too long to lock down populations led to a short-term exponential growth of new cases that overwhelmed the healthcare system and resulted in a large number of deaths. In the United States, the lack of effective and universally applied social isolation mechanisms, as well as a fragmented healthcare system and a significant delay in rolling out mass virus testing, led to a replication of Pattern C, at least in densely populated places like New York City and Chicago.

Despite the Chinese and Americans blaming each other and crediting their own political system for successful responses, the course of the virus didn’t score easy political points on either side of the new Cold War. Regime type isn’t correlated with outcomes. Authoritarian and democratic countries are included in each of the three patterns of responses: authoritarian China and democratic South Korea had effective responses to a dramatic breakout; authoritarian Singapore and democratic Taiwan both managed to quarantine and contain the virus; authoritarian Iran and democratic Italy both experienced catastrophe.

It’s generally a mistake to make long-term forecasts in the midst of a hurricane, but some outlines of lasting shifts are emerging. First, a government or society’s capacity for technical competence in executing plans matters more than ideology or structure. The most effective arrangements for dealing with the pandemic have been found in countries that combine a participatory public culture of information sharing with operational experts competently executing decisions. Second, hyper-individualist views of privacy and other forms of risk are likely to be submerged as countries move to restrict personal freedoms and use personal data to manage public and aggregated social risks. Third, countries that are able to successfully take a longer view of planning and risk management will be at a significant advantage…

From Steve Weber and @nils_gilman, an argument for the importance of operational expertise, plans for the long-term, and the socialization of some risks: “The Long Shadow Of The Future.”

* Dwight D. Eisenhower

###

As we make ourselves ready, we might recall that it was on this date in 1822 that Charles Babbage [see almanac entry here] proposed a Difference Engine in a paper to the Royal Astronomical Society (which he’d helped found two years earlier).

In Babbage’s time, printed mathematical tables were calculated by human computers… in other words, by hand.  They were central to navigation, science, and engineering, as well as mathematics– but mistakes occurred, both in transcription and in calculation.  Babbage determined to mechanize the process and to reduce– indeed, to eliminate– errors.  His Difference Engine was intended as precisely that sort of mechanical calculator (in this instance, to compute values of polynomial functions).
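
The trick Babbage mechanized is the method of finite differences: for a polynomial of degree n, the n-th differences between successive values are constant, so once a difference table is seeded, every further value falls out by addition alone, the only operation the Engine performed. A minimal sketch (the example polynomial is arbitrary):

```python
# Tabulating a polynomial using only addition, the way the Difference
# Engine did mechanically. The example polynomial is arbitrary.

def difference_table(f, degree, start=0):
    # Seed values: f at the first (degree + 1) points, then successive
    # forward differences; the last difference is constant for a polynomial.
    values = [f(start + i) for i in range(degree + 1)]
    table = [values[0]]
    while len(values) > 1:
        values = [b - a for a, b in zip(values, values[1:])]
        table.append(values[0])
    return table  # [f(0), Δf(0), Δ²f(0), ...]

def tabulate(table, count):
    # Produce `count` successive polynomial values by addition alone.
    table = list(table)
    out = []
    for _ in range(count):
        out.append(table[0])
        for i in range(len(table) - 1):
            table[i] += table[i + 1]  # each column absorbs the one above it
    return out

f = lambda x: 2 * x**2 + 3 * x + 1          # arbitrary degree-2 example
print(tabulate(difference_table(f, 2), 6))  # [1, 6, 15, 28, 45, 66]
```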

In 1833 he began his programmable Analytical Engine (AKA the Analytical Machine), the forerunner of modern computers, with coding help from Ada Lovelace, who created an algorithm for the Analytical Engine to calculate a sequence of Bernoulli numbers— for which she is remembered as the first computer programmer.

A portion of the Difference Engine

 

 

“A better world won’t come about simply because we use data; data has its dark underside.”*…

 


 

Data isn’t the new oil, it’s the new CO2. It’s a common trope in the data/tech field to say that “data is the new oil”. The basic idea being – it’s a new resource that is being extracted, it is valuable, and is a raw product that fuels other industries. But it also implies that data is inherently valuable in and of itself and that “my data” is valuable, a resource that I really should tap into.

In reality, we are more impacted by other people’s data (with whom we are grouped) than we are by data about us. As I have written in the MIT Technology Review – “even if you deny consent to ‘your’ data being used, an organisation can use data about other people to make statistical extrapolations that affect you.” We are bound by other people’s consent. Our own consent (or lack thereof) is becoming increasingly irrelevant. We won’t solve the societal problems pervasive data surveillance is causing by rushing through online consent forms. If you see data as CO2, it becomes clearer that its impacts are societal, not solely individual. My neighbour’s car emissions, the emissions from a factory on a different continent, impact me more than my own emissions or lack thereof. This isn’t to abdicate individual responsibility or harm. It’s adding a new lens that we too often miss entirely.

We should not endlessly be defending arguments along the lines that “people choose to willingly give up their freedom in exchange for free stuff online”. The argument is flawed for two reasons. First, the reason that is usually given – people have no choice but to consent in order to access the service, so consent is manufactured. We are not exercising choice in providing data but rather are resigned to the fact that we have no choice in the matter.

The second, less well known but just as powerful, argument is that we are not only bound by other people’s data; we are bound by other people’s consent.  In an era of machine learning-driven group profiling, this effectively renders my denial of consent meaningless. Even if I withhold consent, say I refuse to use Facebook or Twitter or Amazon, the fact that everyone around me has joined means there are just as many data points about me to target and surveil. The issue is systemic; it is not one where a lone individual can make a choice and opt out of the system. We perpetuate this myth by talking about data as our own individual “oil”, ready to sell to the highest bidder. In reality I have little control over this supposed resource which acts more like an atmospheric pollutant, impacting me and others in myriad indirect ways. There are more relations – direct and indirect – between data related to me, data about me, data inferred about me via others than I can possibly imagine, let alone control with the tools we have at our disposal today.

Because of this, we need a social, systemic approach to deal with our data emissions. An environmental approach to data rights as I’ve argued previously. But first let’s all admit that the line of inquiry defending pervasive surveillance in the name of “individual freedom” and individual consent gets us nowhere closer to understanding the threats we are facing.
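
A toy sketch of the group-profiling mechanism described above, with entirely fabricated numbers: a model trained only on attributes that consenting users disclosed can still extrapolate that attribute for someone who never shared anything.

```python
# Toy illustration of "bound by other people's consent": a model trained
# only on consenting users' disclosures profiles a non-consenting person.
# All numbers are fabricated.
from sklearn.linear_model import LogisticRegression

# Features disclosed by consenting users: [age, hours online per day]
X_consented = [[25, 6], [31, 5], [52, 1], [60, 2], [22, 7], [48, 1]]
y_consented = [1, 1, 0, 0, 1, 0]  # a sensitive attribute they chose to share

model = LogisticRegression().fit(X_consented, y_consented)

# Someone who never consented, but whose age and usage are observable or
# inferable, is profiled from other people's data all the same:
print(model.predict([[27, 6]]))        # extrapolated attribute, e.g. [1]
print(model.predict_proba([[27, 6]]))  # with an attached probability
```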

Martin Tisné argues for an “environmental” approach to data rights: “Data isn’t the new oil, it’s the new CO2.”

Lest one think that we couldn’t/shouldn’t have seen this (and related issues like overdependence on algorithms, the digital divide, et al.) coming, see also Paul Baran’s prescient 1968 essay, “On the Future Computer Era,” one of the last pieces he did at RAND, before co-leading the spin-off of The Institute for the Future.

* Mike Loukides, Ethics and Data Science

###

As we ponder privacy, we might recall that it was on this date in 1981 that IBM released IBM model number 5150– AKA the IBM PC– the original version and progenitor of the IBM PC compatible hardware platform. Since the machine was based on an open architecture, within a short time of its introduction, third-party suppliers of peripheral devices, expansion cards, and software proliferated; the influence of the IBM PC on the personal computer market was substantial in standardizing a platform for personal computers (and creating a market for Microsoft’s operating system– first PC DOS, then Windows– on which the PC platform ran).  “IBM compatible” became an important criterion for sales growth; after the 1980s, only the Apple Macintosh family kept a significant share of the microcomputer market without compatibility with the IBM personal computer.


 

Written by (Roughly) Daily

August 12, 2019 at 1:01 am

“Big Data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it”*…

 


 

You’ve probably heard of kilobytes, megabytes, gigabytes, or even terabytes.

These data units are common everyday amounts that the average person may run into. Units this size may be big enough to quantify the amount of data sent in an email attachment, or the data stored on a hard drive, for example.

In the coming years, however, these common units will begin to seem more quaint – that’s because the entire digital universe is expected to reach 44 zettabytes by 2020.

If this number is correct, it will mean there are 40 times more bytes than there are stars in the observable universe…
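
The comparison is easy to check. Taking the commonly cited estimate of roughly 10²¹ stars in the observable universe (an estimate, not a settled count):

```python
# Sanity-checking "40 times more bytes than stars in the observable universe".
# Assumes ~1e21 stars, a commonly cited astronomical estimate.

ZETTABYTE = 10**21                 # SI bytes: kB = 1e3, ..., TB = 1e12, ..., ZB = 1e21
digital_universe = 44 * ZETTABYTE  # the 44 ZB projection cited above
stars = 1e21

print(digital_universe / stars)    # 44.0, i.e. roughly 40x
```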

The stuff of dreams, the stuff of nightmares: “How Much Data is Generated Each Day?”

* Dan Ariely

###

As we revel in really, really big numbers, we might spare a thought for Edgar Frank “Ted” Codd; he died on this date in 2003.  A distinguished computer scientist who did important work on cellular automata, he is best remembered as the father of computer databases– as the person who laid the foundation for relational databases, for storing and retrieving information in computer records.
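
Codd’s 1970 paper, “A Relational Model of Data for Large Shared Data Banks,” laid out the idea. A minimal taste of what it enables, using Python’s built-in SQLite (the tables and rows here are invented for illustration): data lives in tables of rows, and retrieval is declarative, expressed as relationships between tables rather than as navigation of physical records.

```python
# A minimal taste of the relational model via SQLite: store rows in
# related tables, then retrieve by joining on shared keys.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE papers  (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'E. F. Codd');
    INSERT INTO papers  VALUES (1, 1, 'A Relational Model of Data for Large Shared Data Banks');
""")

# Retrieval by relationship (the join), not by physical record location:
for name, title in con.execute("""
        SELECT authors.name, papers.title
        FROM papers JOIN authors ON papers.author_id = authors.id"""):
    print(name, "-", title)
```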


 

Written by (Roughly) Daily

April 18, 2019 at 1:01 am
