(Roughly) Daily

Posts Tagged ‘AI’

“O brave new world”*…

 

law and AI

 

With the arrival of autonomous weapons systems (AWS)[1] on the 21st-century battlefield, the nature of warfare is poised for dramatic change.[2] Overseen by artificial intelligence (AI), fueled by terabytes of data and operating at lightning-fast speed, AWS will be the decisive feature of future military conflicts.[3] Nonetheless, under the American way of war, AWS will operate within existing legal and policy guidelines that establish conditions and criteria for the application of force.[4] Even as the Department of Defense (DoD) places limitations on when and how AWS may take action,[5] the pace of new conflicts and adoption of AWS by peer competitors will ultimately push military leaders to empower AI-enabled weapons to make decisions with less and less human input.[6] As such, timely, accurate, and context-specific legal advice during the planning and operation of AWS missions will be essential. In the face of digital decision-making, mere human legal advisors will be challenged to keep up!

Fortunately, at the same time that AI is changing warfare, the practice of law is undergoing a similar AI-driven transformation.[7]

From The Reporter, the journal of The Judge Advocate General’s Corps: “Autonomous Weapons Need Autonomous Lawyers.”

As I finish drafting this post [on October 5], I’ve discovered that none of the links is available any longer; the piece (and the referenced articles within it, also from The Reporter) was apparently removed from public view while I was drafting, from a Reporter web page that had, obviously, opened for me earlier.  You will find other references to (and excerpts from and comments on) the article here, here, and here.  I’m leaving the original links in, in case they become active again…

* Shakespeare, The Tempest

###

As we wonder if this can end well, we might recall that it was on this date in 1983 that Ameritech executive Bob Barnett made a phone call from a car parked near Soldier Field in Chicago, officially launching the first cellular network in the United States.


Barnett (foreground, in the car) and his audience

 

Written by LW

October 13, 2019 at 1:01 am

“How about a little magic?”*…

 

sorcerer’s apprentice

 

Once upon a time (bear with me if you’ve heard this one), there was a company which made a significant advance in artificial intelligence. Given their incredibly sophisticated new system, they started to put it to ever-wider uses, asking it to optimize their business for everything from the lofty to the mundane.

And one day, the CEO wanted to grab a paperclip to hold some papers together, and found there weren’t any in the tray by the printer. “Alice!” he cried (for Alice was the name of his machine learning lead) “Can you tell the damned AI to make sure we don’t run out of paperclips again?”…

What could possibly go wrong?

[As you’ll read in the full and fascinating article, a great deal…]

Computer scientists tell the story of the Paperclip Maximizer as a sort of cross between the Sorcerer’s Apprentice and the Matrix; a reminder of why it’s crucially important to tell your system not just what its goals are, but how it should balance those goals against costs. It frequently comes with a warning that it’s easy to forget a cost somewhere, and so you should always check your models carefully to make sure they aren’t accidentally turning into Paperclip Maximizers…

But this parable is not just about computer science. Replace the paper clips in the story above with money, and you will see the rise of finance…

Yonatan Zunger tells a powerful story that’s not (only) about AI: “The Parable of the Paperclip Maximizer.”
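To make the point about costs concrete, here’s a minimal sketch (mine, not Zunger’s; all numbers invented) of the difference between an objective that only counts paperclips and one that also charges for the resources production consumes:

def naive_objective(paperclips):
    # Rewards production and nothing else; an optimizer maximizing
    # this will convert every available resource into paperclips.
    return paperclips

def balanced_objective(paperclips, resources_used, cost_per_unit=0.5):
    # Charges for consumption, so "more paperclips" stops paying off
    # once the marginal clip costs more than it is worth.
    return paperclips - cost_per_unit * resources_used

# Invented production plans in which waste grows faster than output:
for clips, used in [(10, 5), (1_000, 2_500), (1_000_000, 5_000_000)]:
    print(f"clips={clips:>9,}  naive={naive_objective(clips):>9,}  "
          f"balanced={balanced_objective(clips, used):>12,.1f}")

The naive score climbs without limit; the balanced score eventually goes negative and tells the optimizer to stop.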

* Mickey Mouse, The Sorcerer’s Apprentice

###

As we’re careful what we wish for (and how we wish for it), we might recall that it was on this date in 1631 that the Puritans in the recently-chartered Massachusetts Bay Colony issued a General Court ordinance that banned gambling: “[all persons] whatsoever that have cards, dice or tables in their houses, shall make away with them before the next court under pain of punishment.”

source

 

Written by LW

March 22, 2019 at 1:01 am

“Outward show is a wonderful perverter of the reason”*…

 

facial analysis

Humans have long hungered for a shorthand to help in understanding and managing other humans.  From phrenology to the Myers-Briggs Test, we’ve tried dozens of shortcuts… and tended to find that at best they weren’t very helpful; at worst, they reinforced inaccurate stereotypes, and so led to results that were unfair and ineffective.  Still, the quest continues– these days powered by artificial intelligence.  What could go wrong?…

Could a program detect potential terrorists by reading their facial expressions and behavior? This was the hypothesis put to the test by the US Transportation Security Administration (TSA) in 2003, as it began testing a new surveillance program called the Screening of Passengers by Observation Techniques program, or Spot for short.

While developing the program, they consulted Paul Ekman, emeritus professor of psychology at the University of California, San Francisco. Decades earlier, Ekman had developed a method to identify minute facial expressions and map them on to corresponding emotions. This method was used to train “behavior detection officers” to scan faces for signs of deception.

But when the program was rolled out in 2007, it was beset with problems. Officers were referring passengers for interrogation more or less at random, and the small number of arrests that came about were on charges unrelated to terrorism. Even more concerning was the fact that the program was allegedly used to justify racial profiling.

Ekman tried to distance himself from Spot, claiming his method was being misapplied. But others suggested that the program’s failure was due to an outdated scientific theory that underpinned Ekman’s method; namely, that emotions can be deduced objectively through analysis of the face.

In recent years, technology companies have started using Ekman’s method to train algorithms to detect emotion from facial expressions. Some developers claim that automatic emotion detection systems will not only be better than humans at discovering true emotions by analyzing the face, but that these algorithms will become attuned to our innermost feelings, vastly improving interaction with our devices.

But many experts studying the science of emotion are concerned that these algorithms will fail once again, making high-stakes decisions about our lives based on faulty science…

“Emotion detection” has grown from a research project to a $20bn industry; learn more about why that’s a cause for concern: “Don’t look now: why you should be worried about machines reading your emotions.”
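For the technically curious: the core of Ekman-style coding is, in essence, a lookup from combinations of facial “action units” to emotion labels. A toy sketch in Python (the rules below are illustrative placeholders, not real FACS codings):

# Map combinations of facial "action units" (AUs) to emotion labels.
# The AU->emotion pairs here are illustrative, not genuine FACS rules.
AU_RULES = {
    frozenset({"AU6", "AU12"}): "happiness",
    frozenset({"AU1", "AU4", "AU15"}): "sadness",
    frozenset({"AU4", "AU5", "AU7"}): "anger",
}

def classify(active_units):
    # Return the first rule whose action units are all present.
    for pattern, emotion in AU_RULES.items():
        if pattern <= active_units:
            return emotion
    return "unknown"

print(classify({"AU6", "AU12", "AU25"}))  # -> happiness

The critics’ point, of course, is that the mapping itself, however faithfully automated, may not be sound.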

* Marcus Aurelius, Meditations

###

As we insist on the individual, we might recall that it was on this date in 1989 that Tim Berners-Lee submitted a proposal to CERN for developing a new way of linking and sharing information over the Internet.

It was the first time Berners-Lee proposed a system that would ultimately become the World Wide Web; but his proposal was a relatively vague request to research the details and feasibility of such a system.  He submitted a second proposal on November 12, 1990 that detailed the actual implementation of the World Wide Web much more directly.

source

 

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it”*…

 

robot writer

 

Recently, OpenAI announced its latest breakthrough, GPT-2, a language model that can write essays to a prompt, answer questions, and summarize longer works… so successfully that OpenAI has declared the full model too dangerous to release (lest it enable “deepfake news” or other misleading mischief).
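(For readers who want to poke at a model of this style themselves: assuming the Hugging Face transformers library and a publicly available GPT-2-class model, a minimal prompt-and-generate sketch looks like this; the “unicorns” prompt echoes OpenAI’s own demo.)

# Prompt a GPT-2-style language model for a continuation.
# Assumes the Hugging Face `transformers` package and the
# publicly available "gpt2" weights.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuation reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "In a shocking finding, scientists discovered a herd of unicorns"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])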

Scott Alexander contemplates the results.  His conclusion:

a brain running at 5% capacity is about as good as the best AI that the brightest geniuses working in the best-equipped laboratories in the greatest country in the world are able to produce in 2019. But:

We believe this project is the first step in the direction of developing large NLP systems without task-specific training data. That is, we are developing a machine language system in the generative style with no explicit rules for producing text. We hope for future collaborations between computer scientists, linguists, and machine learning researchers.

A boring sentiment from an interesting source: the AI wrote that when asked to describe itself. We live in interesting times.

His complete post, eminently worthy of reading in full: “Do Neural Nets Dream of Electric Hobbits?”

[image above, and another account of OpenAI’s creation: “OpenAI says its new robo-writer is too dangerous for public release“]

* Eliezer Yudkowsky

###

As we take the Turing Test, we might send elegantly-designed birthday greetings to Steve Jobs; he was born on this date in 1955.  While he is surely well-known to every reader here, let us note for the record that he was instrumental in developing the Macintosh, the computer that took Apple to unprecedented levels of success.  After leaving the company he started with Steve Wozniak, Jobs continued his personal computer development at NeXT Inc.  In 1997, Jobs returned to Apple to lead the company into a new era based on NeXT technologies and consumer electronics.  Some of Jobs’ achievements in this new era include the iMac, the iPhone, the iTunes music store, the iPod, and the iPad.  Under Jobs’ leadership Apple was at one time the world’s most valuable company. (And, of course, he bought Pixar from George Lucas, and oversaw both its rise to animation dominance and its sale to Disney– as a product of which Jobs became Disney’s largest single shareholder.)

source

 

Written by LW

February 24, 2019 at 1:01 am

“The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge”*…

 


After the fall of the Berlin Wall, East German citizens were offered the chance to read the files kept on them by the Stasi, the much-feared Communist-era secret police service. To date, it is estimated that only 10 percent have taken the opportunity.

In 2007, James Watson, the co-discoverer of the structure of DNA, asked that he not be given any information about his APOE gene, one allele of which is a known risk factor for Alzheimer’s disease.

Most people tell pollsters that, given the choice, they would prefer not to know the date of their own death—or even the future dates of happy events.

Each of these is an example of willful ignorance. Socrates may have made the case that the unexamined life is not worth living, and Hobbes may have argued that curiosity is mankind’s primary passion, but many of our oldest stories actually describe the dangers of knowing too much. From Adam and Eve and the tree of knowledge to Prometheus stealing the secret of fire, they teach us that real-life decisions need to strike a delicate balance between choosing to know, and choosing not to.

But what if a technology came along that shifted this balance unpredictably, complicating how we make decisions about when to remain ignorant? That technology is here: It’s called artificial intelligence.

AI can find patterns and make inferences using relatively little data. Only a handful of Facebook likes are necessary to predict your personality, race, and gender, for example. Another computer algorithm claims it can distinguish between homosexual and heterosexual men with 81 percent accuracy, and homosexual and heterosexual women with 71 percent accuracy, based on their picture alone. An algorithm named COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) can predict criminal recidivism from data like juvenile arrests, criminal records in the family, education, social isolation, and leisure activities with 65 percent accuracy…

Knowledge can sometimes corrupt judgment, and we often choose to remain deliberately ignorant in response.  But in an age of all-knowing algorithms, how do we choose not to know?  Two scientists at the Max Planck Institute for Human Development argue that “We Need to Save Ignorance From AI.”
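To see how little machinery such inference requires, here’s a minimal sketch (fabricated data; numpy and scikit-learn assumed) of predicting a binary trait from a handful of “likes” with logistic regression:

# Toy trait prediction from "likes". Data is fabricated; this
# illustrates the mechanics the excerpt describes, not any real
# study's accuracy figures.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users, n_pages = 200, 50
likes = rng.integers(0, 2, size=(n_users, n_pages))  # 1 = liked that page
hidden = rng.normal(size=n_pages)                    # invented association
trait = (likes @ hidden + rng.normal(scale=0.5, size=n_users)) > 0

model = LogisticRegression(max_iter=1000).fit(likes[:150], trait[:150])
print("held-out accuracy:", model.score(likes[150:], trait[150:]))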

* Daniel J. Boorstin

###

As we consider closing our eyes, we might send discoverable birthday greetings to Tim Bray; he was born on this date in 1955.  A seminal software developer and entrepreneur, he is probably best known as the co-author of the original specifications for XML and XML namespaces, open standards that fueled the growth of the internet (by setting down simple rules for encoding documents in a format that is both human-readable and machine-readable), and as the co-founder of the Open Text Corporation, which released the Open Text Index, one of the first popular commercial web search engines.

source

 

Written by LW

June 21, 2018 at 1:01 am

“Reality is broken”*…

 

Paperclips, a new game from designer Frank Lantz, starts simply. The top left of the screen gets a bit of text, probably in Times New Roman, and a couple of clickable buttons: Make a paperclip. You click, and a counter turns over. One.

The game ends—big, significant spoiler here—with the destruction of the universe.

In between, Lantz, the director of the New York University Games Center, manages to incept the player with a new appreciation for the narrative potential of addictive clicker games, exponential growth curves, and artificial intelligence run amok…

More at “The way the world ends: not with a bang but a paperclip“; play Lantz’s game here.
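(The compounding at the heart of clicker games fits in a few lines; a toy loop, with my numbers rather than Lantz’s:)

# Each auto-clipper makes one clip per tick; reinvested production
# grows the fleet 10% per tick, so output is exponential.
clips, clippers = 0.0, 1.0
for tick in range(1, 61):
    clips += clippers
    clippers *= 1.10
    if tick % 20 == 0:
        print(f"tick {tick:2d}: clips ~ {clips:10,.0f}  clippers ~ {clippers:8,.1f}")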

(Then, as you consider reports like this, remind yourself that “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”)

* Jane McGonigal, Reality is Broken: Why Games Make Us Better and How They Can Change the World

###

As we play (we hope not prophetically), we might recall that it was on this date in 4004 BCE that the Universe was created… as per calculations by Archbishop James Ussher in the mid-17th century.

When Clarence Darrow prepared his famous examination of William Jennings Bryan in the Scopes trial [see here], he chose to focus primarily on a chronology of Biblical events prepared by a seventeenth-century Irish bishop, James Ussher. American fundamentalists in 1925 found—and generally accepted as accurate—Ussher’s careful calculation of dates, going all the way back to Creation, in the margins of their family Bibles.  (In fact, until the 1970s, the Bibles placed in nearly every hotel room by the Gideon Society carried his chronology.)  The King James Version of the Bible introduced into evidence by the prosecution in Dayton contained Ussher’s famous chronology, and Bryan more than once would be forced to resort to the bishop’s dates as he tried to respond to Darrow’s questions.

source

Ussher

source

 

 

Written by LW

October 23, 2017 at 1:01 am

“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run”*…

 

We are surrounded by hysteria about the future of artificial intelligence and robotics—hysteria about how powerful they will become, how quickly, and what they will do to jobs.

I recently saw a story in MarketWatch that said robots will take half of today’s jobs in 10 to 20 years. It even had a graphic to prove the numbers.

The claims are ludicrous. (I try to maintain professional language, but sometimes …) For instance, the story appears to say that we will go from one million grounds and maintenance workers in the U.S. to only 50,000 in 10 to 20 years, because robots will take over those jobs. How many robots are currently operational in those jobs? Zero. How many realistic demonstrations have there been of robots working in this arena? Zero. Similar stories apply to all the other categories where it is suggested that we will see the end of more than 90 percent of jobs that currently require physical presence at some particular site.

Mistaken predictions lead to fears of things that are not going to happen, whether it’s the wide-scale destruction of jobs, the Singularity, or the advent of AI that has values different from ours and might try to destroy us. We need to push back on these mistakes. But why are people making them? I see seven common reasons…

Mistaken extrapolations, limited imagination, and other common mistakes that distract us from thinking more productively about the future: Rodney Brooks on “The Seven Deadly Sins of AI Predictions.”

* Roy Amara, co-founder of The Institute for the Future

###

As we sharpen our analyses, we might recall that it was on this date in 1995 that The Media Lab at the Massachusetts Institute of Technology chronicled the World Wide Web in its A Day in the Life of Cyberspace project.

To celebrate its 10th anniversary, the Media Lab had invited submissions for the days leading up to October 10, 1995, on a variety of issues related to technology and the Internet, including privacy, expression, age, wealth, faith, body, place, languages, and the environment.  Then on October 10, a team at MIT collected, edited, and published the contributions to “create a mosaic of life at the dawn of the digital revolution that is transforming our planet.”

source

 

 

Written by LW

October 10, 2017 at 1:01 am
