(Roughly) Daily

Posts Tagged ‘Technology’

“Simplicity is the ultimate sophistication”*…

 


Selectric I Typewriter, 1961; aluminum, steel, molded plastic.

 

The Cooper Hewitt, Smithsonian Design Museum’s diverse collection, spanning thirty centuries of historic and contemporary design, includes the world’s coolest office, a large snail shell, snakes, a dragon and four bearded men, a cone propped up on a bench, a pair of colorful hands, a mysterious tv and a perpetual calendar.

The selection above is from the Digital Collection, which one can browse here… or just dive into the full collection.

* Frequently attributed to Leonardo da Vinci, but likely first used by Clare Boothe Luce in her 1931 novel Stuffed Shirts

###

As we let form follow function, we might recall that it was on this date in 1875 that the first “weather map” ran in a newspaper (The Times, London).  It was the creation of polymath Sir Francis Galton, an explorer and anthropologist who was also a statistician and meteorologist.

The map was not a forecast, but a representation of the conditions of the previous day. This is known as a synoptic chart, meaning that it shows a summary of the weather situation. Readers could make their own predictions based on the information it provided.

Galton’s chart differs from the modern version only in minor details. It shows the temperature for each region, with dotted lines marking the boundaries of areas of different barometric pressures. It also describes the state of the sky in each land region, with terms such as “dull” or “cloud,” or the sea condition – “smooth” or “slight swell”… [source]

source

 

 

 

Written by LW

April 1, 2020 at 1:01 am

“We must be free not because we claim freedom, but because we practice it”*…

 


 

There is a growing sense of unease around algorithmic modes of governance (‘algocracies’) and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening the focus in this way enables us to see how algorithmic governance can be both emancipatory and enslaving, and provides a framework for future development and activism around the creation of this technology…

From a pre-print of John Danaher’s (@JohnDanaher) chapter in the forthcoming Oxford Handbook of Philosophy of Technology, edited by Shannon Vallor: “Freedom in an Age of Algocracy”… a little dense, but very useful.

[image above: source]

* William Faulkner

###

As we meet the new boss, same as the old boss, we might recall that it was on this date in 1962 that telephone and television signals were first relayed in space via the communications satellite Echo 1 – basically a big metallic balloon that simply bounced radio signals off its surface. Simple, but effective.

Forty thousand pounds (18,144 kg) of air would have been required to inflate the sphere at sea level, so it was instead inflated in space, where only several pounds of gas were needed to keep it taut in orbit.
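The 40,000-pound figure is easy to sanity-check. Here is a back-of-the-envelope sketch in Python, assuming Echo 1’s roughly 100-foot (30.48 m) diameter and standard sea-level air density – both figures are assumptions brought in for the estimate, not taken from the post:

```python
import math

# Assumed figures (not from the post): Echo 1 was a sphere roughly
# 100 ft (30.48 m) in diameter; sea-level air density is ~1.225 kg/m^3.
DIAMETER_M = 30.48
AIR_DENSITY_KG_M3 = 1.225
KG_PER_LB = 0.4536

radius = DIAMETER_M / 2
volume = (4 / 3) * math.pi * radius ** 3          # sphere volume, m^3

# Mass of air needed to fill the sphere to 1 atm at ground level
ground_mass_kg = volume * AIR_DENSITY_KG_M3
print(f"volume: {volume:,.0f} m^3")
print(f"ground-level fill: {ground_mass_kg:,.0f} kg "
      f"(~{ground_mass_kg / KG_PER_LB:,.0f} lb)")

# In the vacuum of orbit, the internal pressure needed to keep the skin
# taut is a tiny fraction of 1 atm; at fixed volume and temperature,
# gas mass scales linearly with pressure (ideal gas law).
several_pounds_kg = 10 * KG_PER_LB                # "several pounds" of gas
pressure_fraction = several_pounds_kg / ground_mass_kg
print(f"equivalent internal pressure: ~{pressure_fraction:.1e} atm")
```

The sphere works out to roughly 14,800 m³, and filling it with sea-level air to about 18,100 kg – agreeing nicely with the post’s 18,144 kg (40,000 lb). In vacuum, “several pounds” of gas corresponds to an internal pressure on the order of a ten-thousandth of an atmosphere, which is all the balloon needed to hold its shape.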

Fun fact: the Echo 1 was built for NASA by Gilmore Schjeldahl, a Minnesota inventor probably better remembered as the creator of the plastic-lined airsickness bag.

source

 

Written by LW

February 24, 2020 at 1:01 am

“All human beings have three lives: public, private, and secret”*…

 


A monitor displays the Omron Corp. Okao face- and emotion-detection technology during CES 2020

 

Twenty years ago at a Silicon Valley product launch, Sun Microsystems CEO Scott McNealy dismissed concern about digital privacy as a red herring: “You have zero privacy anyway. Get over it.”

“Zero privacy” was meant to placate us, suggesting that we have a fixed amount of stuff about ourselves that we’d like to keep private. Once we realized that stuff had already been exposed and, yet, the world still turned, we would see that it was no big deal. But what poses as unsentimental truth telling isn’t cynical enough about the parlous state of our privacy.

That’s because the barrel of privacy invasion has no bottom. The rallying cry for privacy should begin with the strangely heartening fact that it can always get worse. Even now there’s something yet to lose, something often worth fiercely defending.

For a recent example, consider Clearview AI: a tiny, secretive startup that became the subject of a recent investigation by Kashmir Hill in The New York Times. According to the article, the company scraped billions of photos from social-networking and other sites on the web—without permission from the sites in question, or the users who submitted them—and built a comprehensive database of labeled faces primed for search by facial recognition. Their early customers included multiple police departments (and individual officers), which used the tool without warrants. Clearview has argued they have a right to the data because they’re “public.”

In general, searching by a face to gain a name and then other information is on the verge of wide availability: The Russian internet giant Yandex appears to have deployed facial-recognition technology in its image search tool. If you upload an unlabeled picture of my face into Google image search, it identifies me and then further searches my name, and I’m barely a public figure, if at all.

Given ever more refined surveillance, what might the world look like if we were to try to “get over” the loss of this privacy? Two very different extrapolations might allow us to glimpse some of the consequences of our privacy choices (or lack thereof) that are taking shape even today…

From Jonathan Zittrain (@zittrain), two scenarios for a post-privacy future: “A World Without Privacy Will Revive the Masquerade.”

* Gabriel García Márquez

###

As we get personal, we might send provocatively nonsensical birthday greetings to Hugo Ball; he was born on this date in 1886.  Ball worked as an actor with Max Reinhardt and Hermann Bahr in Berlin until the outbreak of World War I.  A staunch pacifist, Ball made his way to Switzerland, where he turned his hand to poetry in an attempt to express his horror at the conflagration enveloping Europe. (“The war is founded on a glaring mistake, men have been confused with machines.”)

Settling in Zürich, Ball was a co-founder of the Dada movement (and, lore suggests, its namer, having allegedly picked the word at random from a dictionary).  With Tristan Tzara and Jean Arp, among others, he co-founded and presided over the Cabaret Voltaire, the epicenter of Dada.  And in 1916, he created the first Dada Manifesto (Tzara’s came two years later).

source

 

Written by LW

February 22, 2020 at 1:01 am

“It is forbidden to kill; therefore all murderers are punished unless they kill in large numbers and to the sound of trumpets”*…

 


Francis Bacon, Study after Velázquez’s Portrait of Pope Innocent X, 1953

 

Nobody but AI mavens would ever tiptoe up to the notion of creating godlike cyber-entities that are much smarter than people. I hasten to assure you — I take that weird threat seriously. If we could wipe out the planet with nuclear physics back in the late 1940s, there must be plenty of other, novel ways to get that done…

In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed…

Technological proliferation is not a list of principles. It is a deep, multivalent historical process with many radically different stakeholders over many different time-scales. People who invent technology never get to set the rules for what is done with it. A “non-evil” Google, built by two Stanford dropouts, is just not the same entity as modern Alphabet’s global multinational network, with its extensive planetary holdings in clouds, transmission cables, operating systems, and device manufacturing.

It’s not that Google and Alphabet become evil just because they’re big and rich. Frankly, they’re not even all that “evil.” They’re just inherently involved in huge, tangled, complex, consequential schemes, with much more variegated populations than had originally been imagined. It’s like the ethical difference between being two parish priests and becoming Pope.

Of course the actual Pope will confront Artificial Intelligence. His response will not be “is it socially beneficial to the user-base?” but rather, “does it serve God?” So unless you’re willing to morally out-rank the Pope, you need to understand that religious leaders will use Artificial Intelligence in precisely the way that televangelists have used television.

So I don’t mind the moralizing about AI. I even enjoy it as metaphysical game, but I do have one caveat about this activity, something that genuinely bothers me. The practitioners of AI are not up-front about the genuine allure of their enterprise, which is all about the old-school Steve-Jobsian charisma of denting the universe while becoming insanely great. Nobody does AI for our moral betterment; everybody does it to feel transcendent.

AI activists are not everyday brogrammers churning out grocery-code. These are visionary zealots driven by powerful urges they seem unwilling to confront. If you want to impress me with your moral authority, gaze first within your own soul.

Excerpted from the marvelous Bruce Sterling‘s essay “Artificial Morality,” a contribution to the Provocations series, a project of the Los Angeles Review of Books in conjunction with UCI’s “The Future of the Future: The Ethics and Implications of AI” conference.

* Voltaire

###

As we agonize over algorithms, we might recall that it was on this date in 1872 that Luther Crowell patented a machine for the manufacture of accordion-sided, flat-bottomed paper bags (#123,811).  That said, Margaret E. Knight might more accurately be considered the “mother of the modern shopping bag”; she had perfected square bottoms two years earlier.

source

 

“If we were capable of thinking of everything, we would still be living in Eden, rent-free with all-you-can-eat buffets and infinitely better daytime TV programming”*…

 


 

Few things epitomize America more than the all-you-can-eat buffet.

For a small fee, you’re granted unencumbered access to a wonderland of gluttony. It is a place where saucy meatballs and egg rolls share the same plate without prejudice, where a tub of chocolate pudding finds a home on the salad bar, where variety and quantity reign supreme.

“The buffet is a celebration of excess,” says Chef Matthew Britt, an assistant professor at the Johnson & Wales College of Culinary Arts. “It exists for those who want it all.”

But one has to wonder: How does an industry that encourages its customers to maximize consumption stay in business?

To find out, we spoke with industry experts, chefs, and buffet owners. As it turns out, it’s harder to “beat” the buffet than you might think…

Is it possible to out-eat the price you pay for a buffet?  How do these places make money?  The dollars and cents behind the meat and potatoes: “The economics of all-you-can-eat buffets.”

* Dean Koontz

###

As we pile it high, we might recall that it was on this date in 1883 that A. Ashwell, of Herne Hill in South London, received a patent for the “vacant/engaged” door bolt for lavatory doors… presumably a relief to the folks who had been using the public restrooms that had been introduced in London in 1852.

source

 

Written by LW

February 17, 2020 at 1:01 am

“The idea of a ‘virtual reality’ such as the Metaverse is by now widespread in the computer-graphics community and is being implemented in a number of different ways”*…

 


 

Technology frequently produces surprises that nobody predicts. However, the biggest developments are often anticipated decades in advance. In 1945 Vannevar Bush described what he called the “Memex”, a single device that would store all books, records and communications, and mechanically link them together by association. This concept was then used to formulate the idea of “hypertext” (a term coined two decades later), which in turn guided the development of the World Wide Web (developed another two decades later). The “Streaming Wars” have only just begun, yet the first streaming video took place more than 25 years ago. What’s more, many of the attributes of this so-called war have been hypothesized for decades, such as virtually infinite supplies of content, on-demand playback, interactivity, dynamic and personalized ads, and the value of converging content with distribution.

In this sense, the rough outlines of future solutions are often understood and, in a sense, agreed upon well in advance of the technical capacity to produce them. Still, it’s often impossible to predict how they’ll fall into place, which features matter more or less, what sort of governance models or competitive dynamics will drive them, or what new experiences will be produced…

Since the late 1970s and early 1980s, many of those in the technology community have imagined a future state of, if not quasi-successor to, the Internet – called the “Metaverse”. And it would revolutionize not just the infrastructure layer of the digital world, but also much of the physical one, as well as all the services and platforms atop them, how they work, and what they sell. Although the full vision for the Metaverse remains hard to define, seemingly fantastical, and decades away, the pieces have started to feel very real. And as always with this sort of change, its arc is as long and unpredictable as its end state is lucrative.

To this end, the Metaverse has become the newest macro-goal for many of the world’s tech giants…

Matthew Ball (@ballmatthew) peers ahead: “The Metaverse: What It Is, Where to Find it, Who Will Build It, and Fortnite.”

[image above: source]

* “The idea of a ‘virtual reality’ such as the Metaverse is by now widespread in the computer-graphics community and is being implemented in a number of different ways. The particular vision of the Metaverse as expressed in this novel originated from idle discussion between me and Jaime (Captain Bandwidth) Taaffe — which does not imply that blame for any of the unrealistic or tawdry aspects of the Metaverse should be placed on anyone but me. The words ‘avatar’ (in the sense used here) and ‘Metaverse’ are my inventions, which I came up with when I decided that existing words (such as ‘virtual reality’) were simply too awkward to use. […] after the first publication of Snow Crash, I learned that the term ‘avatar’ has actually been in use for a number of years as part of a virtual reality system called ‘Habitat’ […] in addition to avatars, Habitat includes many of the basic features of the Metaverse as described in this book”…   – Neal Stephenson, Author’s acknowledgments, Snow Crash, Bantam, 2003 (reissue)

###

As we visualize the virtual, we might recall that it was on this date in 1978 that the first computer bulletin board system went on-line.  Created in Chicago by Ward Christensen and Randy Suess, the Computerized Bulletin Board System (CBBS) had been built in 30 days.

source

 

“The Net is the new underlying infrastructure for civilization itself”*…

 


 

Most governments have traditionally argued that there are certain critical societal assets that should be built, managed, and controlled by public entities — think streets, airports, firefighting, parks, policing, tunnels, an army. (And in just about every rich country except this one, access to and/or the provision of health care.) The choice to have, say, a city-owned park reflects two key facts: first, a civic judgment that having green outdoor spaces is important to the city; and second, that free parks open to all are unlikely to be produced by private companies driven by a motive for profit.

When it comes to the Internet we all live on, huge swaths of it are owned, controlled, and operated by private companies — companies like Facebook, Google, Amazon, Apple, Microsoft, and Twitter. In many cases, those companies’ public impacts aren’t in any significant conflict with their private motivations for profit. But in some cases… they are. Is there room for a public infrastructure that can offer an alternative to (or reduce the harm done by) those tech giants?

A diagnosis of the issue with a set of proposed remedies: “Public infrastructure isn’t just bridges and water mains: Here’s an argument for extending the concept to digital spaces.”

This article is based on a piece by Ethan Zuckerman, written for the Knight First Amendment Institute at Columbia, in which he lays out what he calls the case for digital public infrastructure. (He also published a summary of it here.)

Pair with this consideration of another piece of our political/social/economic “infrastructure,” corporate law – along with contract, property, collateral, trust, and bankruptcy law, an “empire of law” – and its effects: “How ‘Big Law’ Makes Big Money.”

* Doc Searls

###

As we contemplate the commons, we might recall that it was on this date in 1865 that the U.S. government dismantled a monstrous piece of “infrastructure” when Congress passed the Thirteenth Amendment to the United States Constitution and submitted it to the states for ratification.

The amendment abolished slavery with the declaration: “Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction.”

Thomas Nast’s engraving, “Emancipation,” 1865

source

 

Written by LW

January 31, 2020 at 1:01 am
