(Roughly) Daily

Posts Tagged ‘AI’

“The future is already here — it’s just not very evenly distributed”*…

Brewarrina Aboriginal Fish Traps, 1883 (source)

The future is not a destination. We build it every day in the present. This is, perhaps, a wild paraphrasing of the acclaimed author and futurist William Gibson who, when asked what a distant future might hold, replied that the future was already here, it was just unevenly distributed. I often ponder this Gibson provocation, wondering where around me the future might be lurking. Catching glimpses of the future in the present would be helpful. But then, I think, rather than hoping to see a glimpse of the future, we could instead actively build one. Or at the very least tell stories about what it might be. Stories that unfold a world or worlds in which we might want to live – neither dystopian nor utopian, but ours. I know we can still shape those worlds and make them into somewhere that reflects our humanity, our different cultures and our cares.

Of course, it is not enough to tell stories about some distant or unevenly distributed future; we need to find ways of disrupting the present too. It might be less important to have a compelling and coherent vision of the future than an active and considered approach to building possible futures. It is as much about critical doing as critical thinking. One approach to the future might be to focus less on the instruments of technologies per se and more on the broader systems that will be necessary to bring those futures into existence…


AI is always, and already, a lot more than just a constellation of technologies. It exists as a set of conversations in which we are all implicated: we discuss AI, worry out loud about its ethical frameworks, watch movies in which it figures centrally, and read news stories about its impact…

[S]tories of the future – about AI, or any kind – are never just about technology; they are about people and they are about the places those people find themselves, the places they might call home and the systems that bind them all together…

When I returned to Australia in 2017, I wanted to build other futures and to acknowledge the country where my work had started and where I was now working again. I knew I needed to find a different world and a different intersection, and to find new ways to tell stories of technology and of the future – I wanted some different pasts and some different touchstones.

I first saw a photograph of the Brewarrina Aboriginal Fish Traps in a Guardian news article, and the image stayed with me. That black-and-white photograph from the late 1800s showed long, sweeping lines of grey stones arcing across a fast-moving river. The water flowing around the lines of stones was tipped white at the breakpoints. And although there was no one in the image, the arrangement of the stones was deliberate, human-made and enduring. It was a photograph of one of the oldest known human-built technical systems on the planet. And while there are ongoing debates about its exact age – 4,000 years, 10,000 years, 40,000 years – there are no arguments about its complexity or sophistication…

I came to think that the importance of this place was not about the traps per se. It was about the system those traps create, and the systems in which they are, themselves, embedded. This is a system thousands of years in the making and keeping. This is a system that required concerted and continuous effort. This was something that required generations, both of accumulated knowledge about how the environment worked and accumulated knowledge about hydrology and about fish, and an accumulated commitment to continuing to build, sustain and upgrade that system over time.

The technical, cultural and ecological elements cement the significance of this place, not only as a heritage site but as a knowledge base on which contemporary systems could be built. Ideas about sustainability; ideas about systems that are decades or centuries in the making; ideas about systems that endure and systems that are built explicitly to endure. Systems that are built to ensure the continuities of culture feel like the kind of systems that we might want to be investing in now. This feels like the outline of a story of the future we would want to tell…

Now, we need to make a different kind of story about the future. One that focuses not just on the technologies, but on the systems in which these technologies will reside. The opportunity to focus on a future that holds those systems – and also on a way of approaching them in the present – feels both immense and acute. And the ways we might need to disrupt the present feel especially important in this moment of liminality, disorientation and profound unease, socially and ecologically. In a present where the links towards the future seem to have been derailed from the tracks we’ve laid in past decades, there is an opportunity to reform. Ultimately, we would need to think a little differently, ask different kinds of questions, bring as many diverse and divergent kinds of people along on the journey and look holistically and critically at the many propositions that computing in particular – and advanced technologies in general – present.

For me, the Brewarrina Fish Traps are a powerful way of framing how current technological systems should and could unfold. These present a very different future, one we can glimpse in the present and in the past; one that always is and always will be. In this moment, we need to be reminded that stories of the future – about AI, or any kind – are never just about technology; they are about people and they are about the places those people find themselves, the places they might call home and the systems that bind them all together.

Genevieve Bell (@feraldata) on the importance of stories of systems, serendipity, and grace: “Touching the future.” (via Sentiers)

For more, see her Long Now talk, “The 4th Industrial Revolution: Responsible & Secure AI.”

And for an extended riff on the context and implications of the Richard Brautigan poem that she quotes in her piece, see Adam Curtis’ “All Watched Over By Machines Of Loving Grace” (streaming on Amazon Prime).

And for an apposite look at the Renaissance, when mechanical inventions served as a medium for experimental thinking about all aspects of the cosmos, see “When Engineers Were Humanists.”

* William Gibson (in an interview on Fresh Air in August, 1993; repeated by him– and others– many, many times since)

###

As we think like good ancestors, we might spare a thought for Henry, Duke of Cornwall. The first child of King Henry VIII of England and his first wife, Catherine of Aragon, celebrated as the heir apparent, he died within weeks of his birth, on this date in 1511. His death and Henry VIII’s failure to produce another surviving male heir with Catherine led to succession and marriage crises that affected the relationship between the English church and Roman Catholicism, giving rise to the English Reformation.

Michael Sittow’s Virgin and Child. The woman appears to have been modelled on Catherine of Aragon.

source

“Facts alone, no matter how numerous or verifiable, do not automatically arrange themselves into an intelligible, or truthful, picture of the world. It is the task of the human mind to invent a theoretical framework to account for them.”*…

PPPL physicist Hong Qin in front of images of planetary orbits and computer code

… or maybe not. A couple of decades ago, your correspondent came across a short book that aimed to explain how we know what we think we know, Truth– A History and a Guide for the Perplexed, by Felipe Fernández-Armesto (then, a professor of history at Oxford; now, at Notre Dame)…

According to Fernández-Armesto, people throughout history have sought to get at the truth in one or more of four basic ways. The first is through feeling. Truth is a tangible entity. The third-century B.C. Chinese sage Chuang Tzu stated, ”The universe is one.” Others described the universe as a unity of opposites. To the fifth-century B.C. Greek philosopher Heraclitus, the cosmos is a tension like that of the bow or the lyre. The notion of chaos comes along only later, together with uncomfortable concepts like infinity.

Then there is authoritarianism, ”the truth you are told.” Divinities can tell us what is wanted, if only we can discover how to hear them. The ancient Greeks believed that Apollo would speak through the mouth of an old peasant woman in a room filled with the smoke of bay leaves; traditionalist Azande in the Nilotic Sudan depend on the response of poisoned chickens. People consult sacred books, or watch for apparitions. Others look inside themselves, for truths that were imprinted in their minds before they were born or buried in their subconscious minds.

Reasoning is the third way Fernández-Armesto cites. Since knowledge attained by divination or introspection is subject to misinterpretation, eventually people return to the use of reason, which helped thinkers like Chuang Tzu and Heraclitus describe the universe. Logical analysis was used in China and Egypt long before it was discovered in Greece and in India. If the Greeks are mistakenly credited with the invention of rational thinking, it is because of the effective ways they wrote about it. Plato illustrated his dialogues with memorable myths and brilliant metaphors. Truth, as he saw it, could be discovered only by abstract reasoning, without reliance on sense perception or observation of outside phenomena. Rather, he sought to excavate it from the recesses of the mind. The word for truth in Greek, aletheia, means ”what is not forgotten.”

Plato’s pupil Aristotle developed the techniques of logical analysis that still enable us to get at the knowledge hidden within us. He examined propositions by stating possible contradictions and developed the syllogism, a method of proof based on stated premises. His methods of reasoning have influenced independent thinkers ever since. Logicians developed a system of notation, free from the associations of language, that comes close to being a kind of mathematics. The uses of pure reason have had a particular appeal to lovers of force, and have flourished in times of absolutism like the 17th and 18th centuries.

Finally, there is sense perception. Unlike his teacher, Plato, and many of Plato’s followers, Aristotle realized that pure logic had its limits. He began with study of the natural world and used evidence gained from experience or experimentation to support his arguments. Ever since, as Fernández-Armesto puts it, science and sense have kept time together, like voices in a duet that sing different tunes. The combination of theoretical and practical gave Western thinkers an edge over purer reasoning schemes in India and China.

The scientific revolution began when European thinkers broke free from religious authoritarianism and stopped regarding this earth as the center of the universe. They used mathematics along with experimentation and reasoning and developed mechanical tools like the telescope. Fernández-Armesto’s favorite example of their empirical spirit is the grueling Arctic expedition in 1736 in which the French scientist Pierre Moreau de Maupertuis determined (rightly) that the earth was not round like a ball but rather an oblate spheroid…

source

One of Fernández-Armesto’s most basic points is that our capacity to apprehend “the truth”– to “know”– has developed throughout history. And history’s not over. So, your correspondent wondered, mightn’t there emerge a fifth source of truth, one rooted in the assessment of vast, ever-more-complete data maps of reality– a fifth way of knowing?

Well, those days may be upon us…

A novel computer algorithm, or set of rules, that accurately predicts the orbits of planets in the solar system could be adapted to better predict and control the behavior of the plasma that fuels fusion facilities designed to harvest on Earth the fusion energy that powers the sun and stars.

The algorithm, devised by a scientist at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL), applies machine learning, the form of artificial intelligence (AI) that learns from experience, to develop the predictions. “Usually in physics, you make observations, create a theory based on those observations, and then use that theory to predict new observations,” said PPPL physicist Hong Qin, author of a paper detailing the concept in Scientific Reports. “What I’m doing is replacing this process with a type of black box that can produce accurate predictions without using a traditional theory or law.”

Qin (pronounced Chin) created a computer program into which he fed data from past observations of the orbits of Mercury, Venus, Earth, Mars, Jupiter, and the dwarf planet Ceres. This program, along with an additional program known as a ‘serving algorithm,’ then made accurate predictions of the orbits of other planets in the solar system without using Newton’s laws of motion and gravitation. “Essentially, I bypassed all the fundamental ingredients of physics. I go directly from data to data,” Qin said. “There is no law of physics in the middle.”
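To make the “data to data” idea concrete, here is a minimal sketch – not Qin’s actual algorithm (his uses a structure-preserving neural network), and using only a toy circular orbit – in which a step map is learned from observed positions alone, with no law of gravitation anywhere in the code:

```python
import numpy as np

# "Observations": positions of a body on a circular orbit, sampled in time.
dt = 0.1
t = np.arange(0, 20, dt)
orbit = np.column_stack([np.cos(t), np.sin(t)])  # (x, y) samples

# Training pairs: state at step k -> state at step k+1.
X, Y = orbit[:-1], orbit[1:]

# Least-squares fit of a linear step map M; for uniform circular motion
# the true step map is a rotation, so the fit recovers it exactly.
M, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Predict forward from the last observation using only the learned map --
# "no law of physics in the middle."
state = orbit[-1]
predicted = []
for _ in range(50):
    state = state @ M
    predicted.append(state)
predicted = np.asarray(predicted)

# The learned dynamics keep the predicted orbit on the unit circle.
print(np.allclose(np.linalg.norm(predicted, axis=1), 1.0, atol=1e-6))
```

The point of the toy is the workflow, not the model: observations in, a fitted map out, and predictions made by iterating that map, without ever writing down Newton’s laws.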

The process also appears in philosophical thought experiments like John Searle’s Chinese Room. In that scenario, a person who does not understand Chinese can nevertheless produce appropriate Chinese replies by following a set of instructions, or rules, that substitute for understanding. The thought experiment raises questions about what, at root, it means to understand anything at all, and whether understanding implies that something else is happening in the mind besides following rules.

Qin was inspired in part by Oxford philosopher Nick Bostrom’s philosophical thought experiment that the universe is a computer simulation. If that were true, then fundamental physical laws should reveal that the universe consists of individual chunks of space-time, like pixels in a video game. “If we live in a simulation, our world has to be discrete,” Qin said. The black box technique Qin devised does not require that physicists believe the simulation conjecture literally, though it builds on this idea to create a program that makes accurate physical predictions.

This process opens up questions about the nature of science itself. Don’t scientists want to develop physics theories that explain the world, instead of simply amassing data? Aren’t theories fundamental to physics and necessary to explain and understand phenomena?

“I would argue that the ultimate goal of any scientist is prediction,” Qin said. “You might not necessarily need a law. For example, if I can perfectly predict a planetary orbit, I don’t need to know Newton’s laws of gravitation and motion. You could argue that by doing so you would understand less than if you knew Newton’s laws. In a sense, that is correct. But from a practical point of view, making accurate predictions is not doing anything less.”

Machine learning could also open up possibilities for more research. “It significantly broadens the scope of problems that you can tackle because all you need to get going is data,” [Qin’s collaborator Eric] Palmerduca said…

But then, as Edwin Hubble observed, “observations always involve theory,” theory that’s implicit in the particulars and the structure of the data being collected and fed to the AI. So, perhaps this is less a new way of knowing, than a new way of enhancing Fernández-Armesto’s third way– reason– as it became the scientific method…

The technique could also lead to the development of a traditional physical theory. “While in some sense this method precludes the need of such a theory, it can also be viewed as a path toward one,” Palmerduca said. “When you’re trying to deduce a theory, you’d like to have as much data at your disposal as possible. If you’re given some data, you can use machine learning to fill in gaps in that data or otherwise expand the data set.”

In either case: “New machine learning theory raises questions about nature of science.”

* Francis Bello

###

As we experiment with epistemology, we might send carefully-observed and calculated birthday greetings to Georg Joachim de Porris (better known by his professional name, Rheticus), who was born on this date in 1514. A mathematician, astronomer, cartographer, navigational-instrument maker, medical practitioner, and teacher, he was well-known in his day for his stature in all of those fields. But he is surely best-remembered as the sole pupil of Copernicus, whose work he championed– most impactfully, facilitating the publication of his master’s De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres)… and informing the most famous work by yesterday’s birthday boy, Galileo.

source

“I am so clever that sometimes I don’t understand a single word of what I am saying”*…

Humans claim to be intelligent, but what exactly is intelligence? Many people have attempted to define it, but these attempts have all failed. So I propose a new definition: intelligence is whatever humans do.

I will attempt to prove this new definition is superior to all previous attempts to define intelligence. First, consider humans’ history. It is a story of repeated failures. First humans thought the Earth was flat. Then they thought the Sun went around the Earth. Then they thought the Earth was the center of the universe. Then they thought the universe was static and unchanging. Then they thought the universe was infinite and expanding. Humans were wrong about alchemy, phrenology, bloodletting, creationism, astrology, numerology, and homeopathy. They were also wrong about the best way to harvest crops, the best way to govern, the best way to punish criminals, and the best way to cure the sick.

I will not go into the many ways humans have been wrong about morality. The list is long and depressing. If humans are so smart, how come they keep being wrong about everything?

So, what does it mean to be intelligent?…

Arram Sabeti (@arram) gave a prompt to GPT-3, a machine-learning language model; it wrote: “Are Humans Intelligent?- a Salty AI Op-Ed.”

(image above: source)

* Oscar Wilde

###

As we hail our new robot overlords, we might recall that it was on this date in 1814 that London suffered “The Great Beer Flood Disaster” when the metal bands on an immense vat at Meux’s Horse Shoe Brewery snapped, releasing a tidal wave of 3,555 barrels of Porter (571 tons– more than 1 million pints), which swept away the brewery walls, flooded nearby basements, and collapsed several adjacent tenements. While there were reports of over twenty fatalities resulting from poisoning by the porter fumes or alcohol coma, it appears that the death toll was 8, all of them from the destruction caused by the huge wave of beer in the structures surrounding the brewery.

(The U.S. had its own vat mishap in 1919, when a Boston molasses plant suffered similarly-burst bands, creating a heavy wave of molasses moving at a speed of an estimated 35 mph; it killed 21 and injured 150.)

Meux’s Horse Shoe Brewery

source

“We must be free not because we claim freedom, but because we practice it”*…

 

algorithm

 

There is a growing sense of unease around algorithmic modes of governance (‘algocracies’) and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening the focus in this way enables us to see how algorithmic governance can be both emancipatory and enslaving, and provides a framework for future development and activism around the creation of this technology…

From a pre-print of John Danaher‘s (@JohnDanaher) chapter in the forthcoming Oxford Handbook on the Philosophy of Technology, edited by Shannon Vallor: “Freedom in an Age of Algocracy”… a little dense, but very useful.

[image above: source]

* William Faulkner

###

As we meet the new boss, same as the old boss, we might recall that it was on this date in 1962 that telephone and television signals were first relayed in space via the communications satellite Echo 1– basically a big metallic balloon that simply bounced radio signals off its surface.  Simple, but effective.

Inflating the sphere at ground level would have required forty thousand pounds (18,144 kg) of air, so it was inflated in space instead, where only several pounds of gas were needed to keep it inflated while in orbit.

Fun fact: the Echo 1 was built for NASA by Gilmore Schjeldahl, a Minnesota inventor probably better remembered as the creator of the plastic-lined airsickness bag.

source

 

Written by (Roughly) Daily

February 24, 2020 at 1:01 am

“It is forbidden to kill; therefore all murderers are punished unless they kill in large numbers and to the sound of trumpets”*…

 

Pope AI

Francis Bacon, Study after Velazquez’s Portrait of Pope Innocent X, 1953

 

Nobody but AI mavens would ever tiptoe up to the notion of creating godlike cyber-entities that are much smarter than people. I hasten to assure you — I take that weird threat seriously. If we could wipe out the planet with nuclear physics back in the late 1940s, there must be plenty of other, novel ways to get that done…

In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed…

Technological proliferation is not a list of principles. It is a deep, multivalent historical process with many radically different stakeholders over many different time-scales. People who invent technology never get to set the rules for what is done with it. A “non-evil” Google, built by two Stanford dropouts, is just not the same entity as modern Alphabet’s global multinational network, with its extensive planetary holdings in clouds, transmission cables, operating systems, and device manufacturing.

It’s not that Google and Alphabet become evil just because they’re big and rich. Frankly, they’re not even all that “evil.” They’re just inherently involved in huge, tangled, complex, consequential schemes, with much more variegated populations than had originally been imagined. It’s like the ethical difference between being two parish priests and becoming Pope.

Of course the actual Pope will confront Artificial Intelligence. His response will not be “is it socially beneficial to the user-base?” but rather, “does it serve God?” So unless you’re willing to morally out-rank the Pope, you need to understand that religious leaders will use Artificial Intelligence in precisely the way that televangelists have used television.

So I don’t mind the moralizing about AI. I even enjoy it as metaphysical game, but I do have one caveat about this activity, something that genuinely bothers me. The practitioners of AI are not up-front about the genuine allure of their enterprise, which is all about the old-school Steve-Jobsian charisma of denting the universe while becoming insanely great. Nobody does AI for our moral betterment; everybody does it to feel transcendent.

AI activists are not everyday brogrammers churning out grocery-code. These are visionary zealots driven by powerful urges they seem unwilling to confront. If you want to impress me with your moral authority, gaze first within your own soul.

Excerpted from the marvelous Bruce Sterling‘s essay “Artificial Morality,” a contribution to the Provocations series, a project of the Los Angeles Review of Books in conjunction with UCI’s “The Future of the Future: The Ethics and Implications of AI” conference.

* Voltaire

###

As we agonize over algorithms, we might recall that it was on this date in 1872 that Luther Crowell patented a machine for the manufacture of accordion-sided, flat-bottomed paper bags (#123,811).  That said, Margaret E. Knight might more accurately be considered the “mother of the modern shopping bag”; she had perfected square bottoms two years earlier.

source

 

Written by (Roughly) Daily

February 20, 2020 at 1:01 am
