(Roughly) Daily


“Symbols can be so beautiful, sometimes”*…

 


 

Arguably one of Northern Europe’s most distinctive exports is “slow TV”: real-time recordings of train journeys, ferry crossings, or the migration of reindeer, which regularly draw record audiences.

Perhaps the most successful — and least exciting — example of the genre is the live stream of a McDonald’s cheeseburger with fries. At its peak, it drew 2 million viewers a month. The only element on the screen that moves, however, is the time display.

The burger looks the same way, hour after hour.

As of this week, it has looked like that for 10 years.

Purchased hours before the corporation pulled out of the country in 2009, in the wake of Iceland’s devastating financial crisis, the last surviving McDonald’s burger has become much more than a burger. To some, it stands for the greed and excessive capitalism that “created an economic collapse that was so bad that even McDonald’s had to close down,” said Hjörtur Smárason, 43, who purchased the fateful burger in 2009. To others, the eerily fresh look of the 10-year-old meal has served as a warning against the excessive consumption of fast food…

A symbol for our times: “The cautionary political tale of Iceland’s last McDonald’s burger that simply won’t rot, even after 10 years.”

* Kurt Vonnegut, Breakfast of Champions

###

As we muse on the messages in our meals, we might send gloriously written birthday greetings to today’s epigrammatist, Kurt Vonnegut Jr.; he was born on this date in 1922.  In a career spanning over 50 years, Vonnegut published fourteen novels, three short story collections, five plays, and five works of non-fiction, with further collections published after his death. He is probably best known for his darkly satirical, best-selling 1969 novel Slaughterhouse-Five.

Vonnegut called George Orwell his favorite writer, and admitted that he tried to emulate Orwell – “I like his concern for the poor, I like his socialism, I like his simplicity” – though early in his career Vonnegut decided to model his style after Henry David Thoreau, who wrote as if from the perspective of a child.  And of course, Vonnegut’s life and work are resonant with Mark Twain and The Adventures of Huckleberry Finn.

Author Josip Novakovich marveled that “The ease with which he writes is sheerly masterly, Mozartian.”  The Los Angeles Times suggested that Vonnegut will “rightly be remembered as a darkly humorous social critic and the premier novelist of the counterculture”; The New York Times agreed, calling Vonnegut the “counterculture’s novelist.”


 

 

 

 

Written by LW

November 11, 2019 at 1:01 am

“Nothing is at last sacred but the integrity of your own mind”*…

 


 

Imagine that a person’s brain could be scanned in great detail and recreated in a computer simulation. The person’s mind and memories, emotions and personality would be duplicated. In effect, a new and equally valid version of that person would now exist, in a potentially immortal, digital form. This futuristic possibility is called mind uploading. The science of the brain and of consciousness increasingly suggests that mind uploading is possible – there are no laws of physics to prevent it. The technology is likely to be far in our future; it may be centuries before the details are fully worked out – and yet given how much interest and effort is already directed towards that goal, mind uploading seems inevitable. Of course we can’t be certain how it might affect our culture, but as the technology of simulation and artificial neural networks shapes up, we can guess what that mind-uploading future might be like.

Suppose one day you go into an uploading clinic to have your brain scanned. Let’s be generous and pretend the technology works perfectly. It’s been tested and debugged. It captures all your synapses in sufficient detail to recreate your unique mind. It gives that mind a standard-issue, virtual body that’s reasonably comfortable, with your face and voice attached, in a virtual environment like a high-quality video game. Let’s pretend all of this has come true.

Who is that second you?

Princeton neuroscientist, psychologist, and philosopher Michael Graziano explores: “What happens if your mind lives forever on the internet?”

* Ralph Waldo Emerson, Self-Reliance

###

As we ponder presence, we might spare a thought for William “Willy” A. Higinbotham; he died on this date in 1994.  A physicist who was a member of the team that developed the first atomic bomb, he later became a leader in the nuclear non-proliferation movement.

But Higinbotham may be better remembered as the creator of Tennis for Two — the first interactive analog computer game, one of the first electronic games to use a graphical display, and the first to be created as entertainment (rather than as a demonstration of a computer’s capabilities).  He built it for the 1958 visitor day at Brookhaven National Laboratory.

It used a small analogue computer with ten direct-connected operational amplifiers and output a side view of the curved flight of the tennis ball on an oscilloscope only five inches in diameter. Each player had a control knob and a button.
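For the curious, here is a rough Python sketch of the physics those op-amps were wired to solve: gravity pulling the ball down, a bounce at the court line, and a player’s button press sending it back the other way. Every constant and function name below is invented for illustration; this is a feel for the idea, not a reconstruction of Higinbotham’s circuit.

```python
# A toy sketch of the ball flight Tennis for Two drew on its oscilloscope:
# gravity, a bounce off the court, and a "hit" that reverses the ball.
# All constants (gravity scale, time step, restitution) are invented for
# illustration; the original was an analog op-amp circuit, not code.

GRAVITY = -9.8      # downward acceleration
DT = 0.02           # simulation time step, in seconds
RESTITUTION = 0.7   # fraction of vertical speed kept after a bounce

def step(x, y, vx, vy):
    """Advance the ball one time step using simple Euler integration."""
    vy += GRAVITY * DT
    x += vx * DT
    y += vy * DT
    if y <= 0:                    # ball meets the court: bounce
        y, vy = 0.0, -vy * RESTITUTION
    return x, y, vx, vy

def hit(vx, vy, kick=6.0):
    """A player's button press sends the ball back with an upward kick."""
    return -vx, abs(vy) + kick

# Trace a short rally: the sequence of (x, y) points the scope would draw.
x, y, vx, vy = 0.0, 1.0, 3.0, 2.0
trajectory = []
for t in range(150):
    if t == 75:                   # pretend the other player swings mid-rally
        vx, vy = hit(vx, vy)
    x, y, vx, vy = step(x, y, vx, vy)
    trajectory.append((round(x, 2), round(y, 2)))

print(trajectory[:3], "...", trajectory[-3:])
```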


The 1958 Tennis for Two exhibit


 

Written by LW

November 10, 2019 at 1:01 am

“The future isn’t what it used to be”*…

 


 

When Ridley Scott’s Blade Runner was released in 1982, its dystopian future seemed light years away. But fans of the critically-acclaimed science fiction film might [be] feeling a little funny. As its opening sequence informs us, the movie takes place in Los Angeles, November 2019…

That’s to say, from now on, Blade Runner is no longer set in the future.


For a list of other works whose futures are already past, visit Screen Crush (the source of the image at the top); and for a more complete list, click here.

* variously attributed to Paul Valéry, Laura Riding, Robert Graves, and (with the substitution of “ain’t” for “isn’t”) Yogi Berra

###

As we adjust our expectations, we might send imaginative birthday greetings to Hedwig Eva Maria Kiesler; she was born on this date in 1914.  Better known by her stage name, Hedy Lamarr, she became a huge movie star at MGM.

By the time American audiences were introduced to Austrian actress Hedy Lamarr in the 1938 film Algiers, she had already lived an eventful life. She got her scandalous start in film in Czechoslovakia (her first role was in the erotic Ecstasy). She was married at 19 in pre-World War II Europe to Fritz Mandl, a paranoid, overly protective arms dealer linked with fascists in Italy and Nazis in Germany. After her father’s sudden death and as the war approached, she fled Mandl’s country estate in the middle of the night and escaped to London. Unable to return home to Vienna, where her mother lived, and determined to get into the movies, she booked passage to the States on the same ship as mogul Louis B. Mayer. Flaunting herself, she drew his attention and signed with his MGM Studios before they docked.

Arriving in Hollywood brought her a new name (Lamarr was originally Kiesler), fame, multiple marriages and divorces, and a foray into groundbreaking work as a producer, before she eventually became a recluse. But perhaps the most fascinating aspect of Lamarr’s life isn’t as well known: during WWII, when she was 27, the movie star invented and patented an ingenious forerunner of current high-tech communications…

The story of the movie star who, with composer George Antheil, invented spread-spectrum frequency hopping – a signaling technique patented in 1942 to keep Allied radio-guided torpedoes from being jammed by Axis forces, and one that lies at the heart of the cellular phone system that we all use today: “Why Hedy Lamarr Was Hollywood’s Secret Weapon.”
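For readers who want to see the idea in miniature, here is a minimal Python sketch of frequency hopping, with some liberties: the 88 channels nod to the patent’s piano-key frequencies, but the seed, the message, and every function name are invented for illustration. The point is simply that a sender and receiver sharing a hop schedule can reassemble a message that anyone listening on a single frequency catches only in fragments.

```python
# A minimal illustration of frequency-hopping spread spectrum (FHSS),
# the idea behind the 1942 Lamarr-Antheil patent: both ends share a
# pseudorandom hop schedule, so a jammer or eavesdropper camped on any
# one frequency sees only fragments. The seed, message, and channel
# count are arbitrary choices for this sketch.

import random

CHANNELS = 88        # the patent famously used 88 frequencies (piano keys)

def hop_schedule(seed, length):
    """The pseudorandom channel sequence known to both ends."""
    rng = random.Random(seed)
    return [rng.randrange(CHANNELS) for _ in range(length)]

def transmit(message, seed):
    """Send each symbol on the next channel in the shared schedule."""
    return list(zip(hop_schedule(seed, len(message)), message))

def receive(bursts, seed):
    """A receiver that knows the seed follows the hops and recovers it all."""
    expected = hop_schedule(seed, len(bursts))
    return "".join(sym for (ch, sym), want in zip(bursts, expected) if ch == want)

def eavesdrop(bursts, fixed_channel):
    """A listener parked on one channel hears only scattered symbols."""
    return "".join(sym for ch, sym in bursts if ch == fixed_channel)

bursts = transmit("STEER TORPEDO TWO DEGREES LEFT", seed=1942)
print(receive(bursts, seed=1942))      # the full message
print(repr(eavesdrop(bursts, 7)))      # at best a character or two
```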

“Any girl can be glamorous. All you have to do is stand still and look stupid” – Hedy Lamarr


 

Written by LW

November 9, 2019 at 1:01 am

“The clearest way into the Universe is through a forest wilderness”*…

 


 

Consider a forest: One notices the trunks, of course, and the canopy. If a few roots project artfully above the soil and fallen leaves, one notices those too, but with little thought for a matrix that may spread as deep and wide as the branches above. Fungi don’t register at all except for a sprinkling of mushrooms; those are regarded in isolation, rather than as the fruiting tips of a vast underground lattice intertwined with those roots. The world beneath the earth is as rich as the one above.

For the past two decades, Suzanne Simard, a professor in the Department of Forest and Conservation Sciences at the University of British Columbia, has studied that unappreciated underworld. Her specialty is mycorrhizae: the symbiotic unions of fungi and roots long known to help plants absorb nutrients from soil. Beginning with landmark experiments describing how carbon flowed between paper birch and Douglas fir trees, Simard found that mycorrhizae didn’t just connect trees to the earth, but to each other as well.

Simard went on to show how mycorrhizae-linked trees form networks, with individuals she dubbed Mother Trees at the center of communities that are in turn linked to one another, exchanging nutrients and water in a literally pulsing web that includes not only trees but all of a forest’s life. These insights had profound implications for our understanding of forest ecology—but that was just the start.

It’s not just nutrient flows that Simard describes. It’s communication. She—and other scientists studying roots, chemical signals, and even the sounds plants make—have pushed the study of plants into the realm of intelligence. Rather than biological automata, they might be understood as creatures with capacities that in animals are readily regarded as learning, memory, decision-making, and even agency.

Plants communicate, nurture their seedlings – and feel stress.  An interview with Suzanne Simard: “Never Underestimate the Intelligence of Trees.”

Pair with: “Should this tree have the same rights as you?”

* John Muir

###

As we contemplate cultivation, we might recall that it was on this date in 1602 that The Bodleian Library at Oxford formally opened.  (Sir Thomas Bodley had donated over 2,000 books from his personal library to replace the earlier Duke of Gloucester’s (Duke Humphrey’s) Library, which had been dispersed.  Bodley’s offer was made in 1598, but the full collection wasn’t catalogued and made available until this date in 1602, when the Library reopened with its new name, in honor of its benefactor.  Eight years later, Bodley made a deal with the Stationers’ Company – which licensed [i.e., provided copyright for] all publications in England – that a copy of everything licensed should be sent to the Bodleian… making it a copyright deposit library, the first and now one of six in the UK.)


The Bodleian’s entrance, with the coats of arms of several Oxford colleges


 

Written by LW

November 8, 2019 at 1:01 am

“Reality is not a function of the event as event, but of the relationship of that event to past, and future, events”*…

 


Dr. Leonard Kleinrock poses beside the processor in the UCLA lab where the first ARPANET message was sent

 

The first message transmitted over ARPANET, the pioneering Pentagon-funded data-sharing network, late in the evening on October 29, 1969, was incomplete due to a technical error. UCLA graduate student Charley Kline was testing a “host to host” connection across the nascent network to a machine at SRI in Menlo Park, California, and things seemed to be going well – until SRI’s machine, operated by Bill Duvall, crashed partway through the transmission, meaning the only letters received from the attempted “login” were “lo.”

Kline thought little of the event at the time, but it’s since become the stuff of legend and poetic reinterpretation. “As in, lo and behold!” ARPANET developer and early internet icon Leonard Kleinrock says, grinning as he recounts the story in the 2016 Werner Herzog documentary Lo and Behold: Reveries of the Connected World. Others have interpreted the truncated transmission as “a stuttered hello”; one camp argues it was a prescient “LOL.”

It’s a staple of tech hagiography to inject history’s banal realities with monumental foresight and noble intentions; Mark Zuckerberg demonstrated as much recently, when he claimed Facebook was founded in response to the Iraq War, rather than to rate the attractiveness of Harvard women. It’s understandable to wish that ARPANET’s inaugural message, too, had offered a bit more gravity, given all that the network and its eventual successor, the internet, hath wrought upon the world. But perhaps the most enduring truth of the internet is that so many of its foundational moments and decisive turning points—from Kline’s “lo” to Zuckerberg’s late-night coding sessions producing a service for “dumb fucks” at Harvard—emerged from ad hoc actions and experiments undertaken with little sense of foresight or posterity. In this respect, the inaugural “lo” was entirely apt…

Fifty years after the first successful (or, successful enough) transmission across the ARPANET, we’ve effectively terraformed the planet into a giant computer founded on the ARPANET’s architecture. The messages transmitted across it have certainly become more complex, but the illusion that its ad-hoc infrastructure developed in a political vacuum has become harder and harder to maintain. That illusion has been pierced since 2016, but the myth that seems poised to replace it—that technology can in fact automate away bias and politics itself—is no less insidious.

The vapidity of the first ARPANET message is a reminder of the fallacy of this kind of apolitical, monumental storytelling about technology’s harms and benefits. Few isolated events in the development of the internet were as heroic as we may imagine, or as nefarious as we may fear. But even the most ad hoc of these events occurred in a particular ideological context. What is the result of ignoring or blithely denying that context? Lo and behold: It looks a lot like 2019.

Half a century after the first ARPANET message, pop culture still views connectivity as disconnected from the political worldview that produced it.  The always-illuminating Ingrid Burrington argues that that’s a problem: “How We Misremember the Internet’s Origins.”

“Is everyone who lives in Ignorance like you?” asked Milo.
“Much worse,” he said longingly. “But I don’t live here. I’m from a place very far away called Context.”
Norton Juster, The Phantom Tollbooth

* Robert Penn Warren, All the King’s Men

###

As we ruminate on roots, we might send carefully coded birthday greetings to Gordon Eubanks; he was born on this date in 1946.  A microcomputer pioneer, he earned his PhD studying under Gary Kildall, who founded Digital Research; his dissertation was BASIC-E, a compiler designed for Kildall’s CP/M operating system.  In 1981, after DR lost the IBM operating-system contract to Microsoft (per yesterday’s almanac entry), Eubanks joined DR to create new programming languages.  He soon came to doubt DR’s viability, and left to join Symantec, where he helped develop Q&A, an integrated database and word processor with natural-language query. He rose through Symantec’s ranks to become its president and CEO.  Later he became president and CEO of Oblix, a Silicon Valley startup that created software for web security (acquired by Oracle in 2005).


 

Written by LW

November 7, 2019 at 1:01 am

“There are two ways to make money in business: bundling and unbundling”*…


Many ventures seek profit by repackaging existing goods and services as revenue streams they can control, with technology frequently serving as the mechanism. The tech industry’s mythology about itself as a “disruptor” of the status quo revolves around this concept: Inefficient bundles (newspapers, cable TV, shopping malls) are disaggregated by companies that serve consumers better by letting them choose the features they want as stand-alone products, unencumbered of their former baggage. Why pay for a package of thousands of unwatched cable television channels, when you can pay for only the ones you watch? Who wants to subsidize journalism when all you care about is sports scores?

Media has been the most obvious target of digital unbundling because of the internet’s ability to subsume other forms and modularize their content. But almost anything can be understood as a bundle of some kind — a messy entanglement of variously useful functions embedded in a set of objects, places, institutions, and jobs that is rarely optimized for serving a single purpose. And accordingly, we hear promises to unbundle more and more entities. Transportation systems are being unbundled by various ridesharing and other mobility-as-a-service startups, causing driving, parking, navigation, and vehicle maintenance to decouple from their traditional locus in the privately owned automobile. Higher education, which has historically embedded classroom learning in an expensive bundle that often includes residence on campus and extracurricular activities, is undergoing a similar change via tools for remote learning…

Things that have been unbundled rarely remain unbundled for very long. Whether digital or physical, people actually like bundles, because they supply a legible social structure and simplify the complexity presented by a paralyzing array of consumer choices. The Silicon Valley disruption narrative implies that bundles are suboptimal and thus bad, but as it turns out, it is only someone else’s bundles that are bad: The tech industry’s unbundling has actually paved the way for invidious forms of rebundling. The apps and services that replaced the newspaper are now bundled on iPhone home screens or within social media platforms, where they are combined with new things that no consumer asked for: advertising, data mining, and manipulative interfaces. Facebook, for instance, unbundled a variety of long-established social practices from their existing analog context — photo sharing, wishing a friend happy birthday, or inviting someone to a party — and recombined them into its new bundle, accompanied by ad targeting and algorithmic filtering. In such cases, a bundle becomes less a bargain than a form of coercion, locking users into arrangements that are harder to escape than what they replaced. Ironically, digital bundles like Facebook also introduce novel ambiguities and adjacencies in place of those they sought to eliminate, such as anger about the political leanings of distant acquaintances or awareness of social gatherings that happened without you (side effects that are likely to motivate future unbundling efforts in turn)…

In a consideration of one of the most fundamental dynamics afoot in our economy today, and of its consequences, Drew Austin observes that no goods or services are stand-alone: “Bundling and Unbundling.”

* Jim Barksdale (in 1995, when he was the CEO of Netscape)

###

As we contemplate connection, we might recall that it was on this date in 1980 that IBM and Microsoft signed the agreement that made Microsoft the supplier of the operating system for the soon-to-be-released IBM PC.  IBM had hoped to do a deal with Digital Research (the creators of CP/M), but DR would not sign an NDA.

On Nov. 6, 1980, the contract that would change the future of computing was signed: IBM would pay Microsoft $430,000 for what would be called MS-DOS. But the key provision in that agreement was the one that allowed Microsoft to license the operating system to other computer manufacturers besides IBM — a nonexclusive arrangement that IBM agreed to in part because it was caught up in decades of antitrust investigations and litigation. IBM’s legal caution, however, would prove to be Microsoft’s business windfall, opening the door for the company to become the dominant tech company of the era.

Hundreds of thousands of IBM computers were sold with MS-DOS, but more than that, Microsoft became the maker of the crucial connection that was needed between the software and hardware used to operate computers. Company revenue skyrocketed from $16 million in 1981 to $140 million in 1985 as other computer-makers like Tandy and Commodore also chose to partner with them.

And as Microsoft’s fortunes rose, IBM’s declined. The company known as Big Blue, which had once been the largest in America, and 3,000 times the size of Microsoft, lost control of the PC platform it had helped build as software became more important than hardware.  [source]


Paul Allen and Bill Gates in those early years


 

Written by LW

November 6, 2019 at 1:01 am

“Not with a bang, but a whimper”*…

 


 

What actually happens to workers when a company deploys automation? The common assumption seems to be that the employee simply disappears wholesale, replaced one-for-one with an AI interface or an array of mechanized arms.

Yet given the extensive punditeering, handwringing, and stump-speeching around the “robots are coming for our jobs” phenomenon—which I will never miss an opportunity to point out is falsely represented—research into what happens to the individual worker remains relatively thin. Studies have attempted to monitor the impact of automation on wages in aggregate, or to correlate employment with levels of robotization.

But few in-depth investigations have been made into what happens to each worker after their companies roll out automation initiatives. Earlier this year, though, a paper authored by economists James Bessen, Maarten Goos, Anna Salomons, and Wiljan Van den Berge set out to do exactly that…

What emerges is a portrait of workplace automation that is ominous in a less dramatic manner than we’re typically made to understand. For one thing, there is no ‘robot apocalypse’, even after a major corporate automation event. Unlike mass layoffs, automation does not appear to immediately and directly send workers packing en masse.

Instead, automation increases the likelihood that workers will be driven away from their previous jobs at the companies—whether they’re fired, or moved to less rewarding tasks, or quit—and causes a long-term loss of wages for the employee.

The report finds that “firm-level automation increases the probability of workers separating from their employers and decreases days worked, leading to a 5-year cumulative wage income loss of 11 percent of one year’s earnings.” That’s a pretty significant loss.
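To make that figure concrete, a quick back-of-the-envelope reading (the salary below is an arbitrary example, not a number from the paper): the loss is 11 percent of one year’s earnings, accumulated across five years, not 11 percent of every year’s pay.

```python
# The study's headline figure: a cumulative loss, accrued over five years,
# equal to 11% of one year's earnings. The salary here is an assumed example.
annual_earnings = 40_000
cumulative_loss = 0.11 * annual_earnings          # 4,400
print(f"about {cumulative_loss:,.0f} lost over five years, "
      f"roughly {cumulative_loss / 5:,.0f} per year on average")
```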

Worse still, the study found that even in the Netherlands, which has a generous social safety net compared to, say, the United States, workers were able to offset only a fraction of those losses with benefits provided by the state. Older workers, meanwhile, were more likely to retire early—deprived of years of income they may have been counting on.

Interestingly, the effects of automation were felt similarly through all manner of company—small, large, industrial, services-oriented, and so on. The study covered all non-finance sector firms, and found that worker separation and income loss were “quite pervasive across worker types, firm sizes and sectors.”

Automation, in other words, is a more pervasive, slower-acting, and much less visible phenomenon than the robots-are-eating-our-jobs talk is preparing us for…

The result, Bessen says, is an added strain on the social safety net that it is currently woefully unprepared to handle. As more and more firms join the automation goldrush—a 2018 McKinsey survey of 1,300 companies worldwide found that three-quarters of them had either begun to automate business processes or planned to do so next year—the number of workers forced out of firms seems likely to tick up, or at least hold steady. What is unlikely to happen, per this research, is an automation-driven mass exodus of jobs.

This is a double-edged sword: While it’s obviously good that thousands of workers are unlikely to be fired in one fell swoop when a process is automated at a corporation, it also means the pain of automation is distributed in smaller, more personalized doses, and thus less likely to prompt any sort of urgent public response. If an entire Amazon warehouse were suddenly automated, it might spur policymakers to try to address the issue; if automation has been slowly hurting us for years, it’s harder to rally support for stemming the pain…

Brian Merchant on the ironic challenge of addressing the slow-motion, trickle-down social, economic, and cultural threats of automation – that they will accrue gradually, like erosion, not catastrophically… making it harder to generate a sense of urgency around creating a response: “There’s an Automation Crisis Underway Right Now, It’s Just Mostly Invisible.”

* T. S. Eliot, “The Hollow Men”

###

As we think systemically, we might recall that it was on this date in 1994 that Ken McCarthy, Marc Andreessen, and Mark Graham held the first conference to focus on the commercial potential of the World Wide Web.

 

 

Written by LW

November 5, 2019 at 1:01 am
