(Roughly) Daily

Posts Tagged ‘computing’

“I’m a little tea pot / Short and stout”*…

The original Utah teapot, currently on display at the Computer History Museum in Mountain View, California.

The fascinating story of the “Utah teapot,” the ur-object in the development of computer graphics…

This unassuming object—the “Utah teapot,” as it’s affectionately known—has had an enormous influence on the history of computing, dating back to 1974, when computer scientist Martin Newell was a Ph.D. student at the University of Utah.

The U of U was a powerhouse of computer graphics research then, and Newell had some novel ideas for algorithms that could realistically display 3D shapes—rendering complex effects like shadows, reflective textures, or rotations that reveal obscured surfaces. But, to his chagrin, he struggled to find a digitized object worthy of his methods. Objects that were typically used for simulating reflections, like a chess pawn, a donut, and an urn, were too simple.

One day over tea, Newell told his wife Sandra that he needed more interesting models. Sandra suggested that he digitize the shapes of the tea service they were using, a simple Melitta set from a local department store. It was an auspicious choice: The curves, handle, lid, and spout of the teapot all conspired to make it an ideal object for graphical experiment. Unlike other objects, the teapot could, for instance, cast a shadow on itself in several places. Newell grabbed some graph paper and a pencil, and sketched it.

Back in his lab, he entered the sketched coordinates—called Bézier control points, first used in the design of automobile bodies—on a Tektronix storage tube, an early text and graphics computer terminal. The result was a lovely virtual teapot, more versatile (and probably cuter) than any 3D model to date.
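
[An aside for the technically curious: the teapot data that circulates today is a collection of bicubic Bézier patches, each determined by a 4×4 grid of control points like the ones Newell read off his graph-paper sketch. What follows is a minimal, illustrative sketch in Python of how one such patch is evaluated – not Newell’s original code, and the variable names are ours:]

import numpy as np
from math import comb

def bernstein(i, n, t):
    # Bernstein basis polynomial B_{i,n}(t): the weight each control point gets.
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_patch_point(ctrl, u, v):
    # Evaluate a bicubic Bezier patch at (u, v) in [0,1] x [0,1].
    # ctrl is a 4x4 grid of 3D control points -- the kind of grid Newell
    # read off his sketch for each section of the teapot.
    p = np.zeros(3)
    for i in range(4):
        for j in range(4):
            p += bernstein(i, 3, u) * bernstein(j, 3, v) * ctrl[i, j]
    return p

# Toy example: a gently curved 4x4 patch, sampled on an 8x8 grid of (u, v).
ctrl = np.array([[[i, j, 0.1 * i * j] for j in range(4)] for i in range(4)], dtype=float)
surface = [[bezier_patch_point(ctrl, u / 7, v / 7) for v in range(8)] for u in range(8)]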

The new model was particularly appealing to Newell’s colleague, Jim Blinn [of whom Ivan Sutherland, the head of the program at Utah and a computer graphics pioneer, said, “There are about a dozen great computer graphics people and Jim Blinn is six of them”]. One day, demonstrating how his software could adjust an object’s height, Blinn flattened the teapot a bit, and decided he liked the look of that version better. The distinctive Utah teapot was born.

The computer model proved useful for Newell’s own research, featuring prominently in his next few publications. But he and Blinn also took the important step of sharing their model publicly. As it turned out, other researchers were also starved for interesting 3D models, and the digital teapot was exactly the experimental test bed they needed. At the same time, the shape was simple enough for Newell to input and for computers to process. (Rumor has it some researchers even had the data points memorized!) And unlike many household items, like furniture or fruit-in-a-bowl, the teapot’s simulated surface looked realistic without superimposing an artificial, textured pattern.

The teapot quickly became a beloved staple of the graphics community. Teapot after teapot graced the pages and covers of computer graphics journals.  “Anyone with a new idea about rendering and lighting would announce it by first trying it out on a teapot,” writes animator Tom Sito in Moving Innovation...

These days, the Utah teapot has achieved legendary status. It’s a built-in shape in many 3D graphics software packages, used for testing, benchmarking, and demonstration. Graphics geeks like to sneak it into scenes and games as an in-joke, an homage to their countless hours of rendering teapots; hence its appearances in Windows, Toy Story, and The Simpsons.
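
[A minimal sketch of that ubiquity, assuming the PyOpenGL bindings and a GLUT/freeglut installation – the teapot has shipped for decades as a built-in GLUT primitive:]

import sys
from OpenGL.GL import glClear, GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT
from OpenGL.GLUT import (glutInit, glutInitDisplayMode, glutCreateWindow,
                         glutDisplayFunc, glutMainLoop, glutSwapBuffers,
                         glutWireTeapot, GLUT_DOUBLE, GLUT_RGB, GLUT_DEPTH)

def display():
    # Clear the frame and draw the one true benchmark object.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glutWireTeapot(0.5)
    glutSwapBuffers()

glutInit(sys.argv)
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH)
glutCreateWindow(b"Utah teapot")
glutDisplayFunc(display)
glutMainLoop()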

Over the past few years, the teapot has been 3D printed back into the physical world, both as a trinket and as actual china. Pixar even made its own music video in honor of the teapot, titled “This Teapot’s Made for Walking,” and a teapot wind-up toy as a promotion for its Renderman software.

Newell has jokingly lamented that, despite all his algorithmic innovations, he’ll be remembered primarily for “that damned teapot.” But as much as computer scientists try to prove their chops by inventing clever algorithms, test beds for experimentation often leave a bigger mark. Newell essentially designed the model organism of computer graphics: the teapot is to graphics researchers what lab mice are to biologists.

For the rest of us the humble teapot serves as a reminder that, in the right hands, something simple can become an icon of creativity and hidden potential…

How a humble serving piece shaped a technological domain: “The Most Important Object In Computer Graphics History Is This Teapot,” from Jesse Dunietz (@jdunietz)

* from “I’m a Little Tea Pot,” a 1939 novelty song by George Harold Sanders and Clarence Z. Kelley

###

As we muse on models, we might send foundational birthday greetings to Michael Faraday; he was born on this date in 1791. One of the great experimental scientists of all time, Faraday made huge contributions to the study of electromagnetism and electrochemistry.

Although Faraday received little formal education, he was one of the most influential scientists in history. It was by his research on the magnetic field around a conductor carrying a direct current that Faraday established the basis for the concept of the electromagnetic field in physics. Faraday also established that magnetism could affect rays of light and that there was an underlying relationship between the two phenomena. He similarly discovered the principles of electromagnetic induction and diamagnetism, and the laws of electrolysis. His inventions of electromagnetic rotary devices formed the foundation of electric motor technology, and it was largely due to his efforts that electricity became practical for use in technology [including, of course, computing and computer graphics].

As a chemist, Faraday discovered benzene, investigated the clathrate hydrate of chlorine, invented an early form of the Bunsen burner and the system of oxidation numbers, and popularised terminology such as “anode”, “cathode”, “electrode”, and “ion”. Faraday ultimately became the first and foremost Fullerian Professor of Chemistry at the Royal Institution, a lifetime position.

Faraday was an excellent experimentalist who conveyed his ideas in clear and simple language; his mathematical abilities, however, did not extend as far as trigonometry and were limited to the simplest algebra. James Clerk Maxwell took the work of Faraday and others and summarized it in a set of equations that is accepted as the basis of all modern theories of electromagnetic phenomena. On Faraday’s use of lines of force, Maxwell wrote that they show Faraday “to have been in reality a mathematician of a very high order – one from whom the mathematicians of the future may derive valuable and fertile methods.”…

Albert Einstein kept a picture of Faraday on his study wall, alongside pictures of Arthur Schopenhauer and James Clerk Maxwell. Physicist Ernest Rutherford stated, “When we consider the magnitude and extent of his discoveries and their influence on the progress of science and of industry, there is no honour too great to pay to the memory of Faraday, one of the greatest scientific discoverers of all time.”

Wikipedia

source

“We often plough so much energy into the big picture, we forget the pixels”*…

Alvy Ray Smith (see also here) was born before computers, made his first computer graphic in 1964, cofounded Pixar, was the first director of computer graphics at Lucasfilm, and the first graphics fellow at Microsoft. He is the author of the terrific new book A Biography of the Pixel (2021), from which this excerpt is drawn…

I have billions of pixels in my cellphone, and you probably do too. But what is a pixel? Why do so many people think that pixels are little abutting squares? Now that we’re aswim in an ocean of zettapixels (21 zeros), it’s time to understand what they are. The underlying idea – a repackaging of infinity – is subtle and beautiful. Far from being squares or dots that ‘sort of’ approximate a smooth visual scene, pixels are the profound and exact concept at the heart of all the images that surround us – the elementary particles of modern pictures.

This brief history of the pixel begins with Joseph Fourier in the French Revolution and ends in the year 2000 – the recent millennium. I strip away the usual mathematical baggage that hides the pixel from ordinary view, and then present a way of looking at what it has wrought.

The millennium is a suitable endpoint because it marked what’s called the great digital convergence, an immense but uncelebrated event, when all the old analogue media types coalesced into the one digital medium. The era of digital light – all pictures, for whatever purposes, made of pixels – thus quietly began. It’s a vast field: books, movies, television, electronic games, cellphone displays, app interfaces, virtual reality, weather satellite images, Mars rover pictures – to mention a few categories – even parking meters and dashboards. Nearly all pictures in the world today are digital light, including nearly all the printed words. In fact, because of the digital explosion, this includes nearly all the pictures ever made. Art museums and kindergartens are among the few remaining analogue bastions, where pictures fashioned from old media can reliably be found…
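
[Your correspondent’s aside, not Smith’s: the “repackaging of infinity” he describes is the sampling theorem, under which a band-limited signal can be rebuilt exactly from its point samples – the pixel as sample, not square. A minimal one-dimensional sketch, assuming NumPy:]

import numpy as np

def reconstruct(samples, rate, t):
    # Whittaker-Shannon interpolation: rebuild the continuous signal at time t
    # from its samples. Exact for band-limited signals and infinitely many
    # samples; approximate here because the sum is truncated.
    n = np.arange(len(samples))
    return np.sum(samples * np.sinc(rate * t - n))

rate = 10.0                                   # samples per second
n = np.arange(50)
samples = np.sin(2 * np.pi * 2.0 * n / rate)  # a 2 Hz sine, sampled at 10 Hz

t = 1.23                                      # a moment between the samples
print(reconstruct(samples, rate, t))          # close to the true value below
print(np.sin(2 * np.pi * 2.0 * t))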

An exact mathematical concept, pixels are the elementary particles of pictures, based on a subtle unpacking of infinity: “Pixel: a biography,” from @alvyray.

* Dame Silvia Cartwright

###

As we ruminate on resolution, we might recall that it was on this date in 1947 that fabled computer scientist Grace Hopper (see here and here), then a programmer on Harvard’s Mark II Aiken Relay Calculator, found and documented the first computer “bug”– an insect that had lodged in the works. The incident is recorded in Hopper’s logbook alongside the offending moth, taped to the logbook page: “15:45 Relay #70 Panel F (moth) in relay. First actual case of bug being found.”

This anecdote has led to Hopper being pretty widely credited with coining the term “bug” (and ultimately “de-bug”) in its technological usage… but the term actually dates back at least to Thomas Edison…

Grace Hopper’s log entry (source)

Written by (Roughly) Daily

September 9, 2021 at 1:00 am

“Foresight begins when we accept that we are now creating a civilization of risk”*…

There have been a handful of folks– Vernor Vinge, Don Michael, Sherry Turkle, to name a few– who were, decades ago, exceptionally foresightful about the technologically-mediated present in which we live. Philip Agre belongs in their number…

In 1994 — before most Americans had an email address or Internet access or even a personal computer — Philip Agre foresaw that computers would one day facilitate the mass collection of data on everything in society.

That process would change and simplify human behavior, wrote the then-UCLA humanities professor. And because that data would be collected not by a single, powerful “big brother” government but by lots of entities for lots of different purposes, he predicted that people would willingly part with massive amounts of information about their most personal fears and desires.

“Genuinely worrisome developments can seem ‘not so bad’ simply for lacking the overt horrors of Orwell’s dystopia,” wrote Agre, who has a doctorate in computer science from the Massachusetts Institute of Technology, in an academic paper.

Nearly 30 years later, Agre’s paper seems eerily prescient, a startling vision of a future that has come to pass in the form of a data industrial complex that knows no borders and few laws. Data collected by disparate ad networks and mobile apps for myriad purposes is being used to sway elections or, in at least one case, to out a gay priest. But Agre didn’t stop there. He foresaw the authoritarian misuse of facial recognition technology, he predicted our inability to resist well-crafted disinformation and he foretold that artificial intelligence would be put to dark uses if not subjected to moral and philosophical inquiry.

Then, no one listened. Now, many of Agre’s former colleagues and friends say they’ve been thinking about him more in recent years, and rereading his work, as pitfalls of the Internet’s explosive and unchecked growth have come into relief, eroding democracy and helping to facilitate a violent uprising on the steps of the U.S. Capitol in January.

“We’re living in the aftermath of ignoring people like Phil,” said Marc Rotenberg, who edited a book with Agre in 1998 on technology and privacy, and is now founder and executive director for the Center for AI and Digital Policy…

As Reed Albergotti (@ReedAlbergotti) explains, better late than never: “He predicted the dark side of the Internet 30 years ago. Why did no one listen?”

Agre’s papers are here.

* Jacques Ellul

###

As we consider consequences, we might recall that it was on this date in 1858 that Queen Victoria sent the first official telegraph message across the Atlantic Ocean from London to U.S. President James Buchanan, in Washington D.C.– initiating a new era in global communications.

Transmission of the message began at 10:50am and wasn’t completed until 4:30am the next day, taking nearly eighteen hours to reach Newfoundland, Canada. Ninety-nine words, containing five hundred nine letters, were transmitted at a rate of about two minutes per letter.

After White House staff had satisfied themselves that it wasn’t a hoax, the President sent a reply of 143 words in a relatively rapid ten hours. Without the cable, a dispatch in one direction alone would have taken roughly twelve days by the speediest combination of inland telegraph and fast steamer.
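
[A back-of-the-envelope check of those rates – illustrative only, since the historical timings are themselves approximate:]

# Victoria's message: roughly 10:50 am to 4:30 am the next day.
outbound_minutes = 17 * 60 + 40
print(outbound_minutes / 509)   # ~2.1 minutes per letter, as quoted

# Buchanan's 143-word reply, in about ten hours.
reply_minutes = 10 * 60
print(reply_minutes / 143)      # ~4.2 minutes per word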

source

“Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say”*…

There’s a depressing sort of symmetry in the fact that our modern paradigms of privacy were developed in response to the proliferation of photography and its exploitation by tabloids. The seminal 1890 Harvard Law Review article The Right to Privacy—which every essay about data privacy is contractually obligated to cite—argued that the right of an individual to object to the publication of photographs ought to be considered part of a general ‘right to be let alone’.

130 years on, privacy is still largely conceived of as an individual thing, wherein we get to make solo decisions about when we want to be left alone and when we’re comfortable being trespassed upon. This principle undergirds the notice-and-consent model of data management, which you might also know as the Pavlovian response of clicking “I agree” on any popup and login screen with little regard for the forty pages of legalese you might be agreeing to.

The thing is, the right to be left alone makes perfect sense when you’re managing information relationships between individuals, where there are generally pretty clear social norms around what constitutes a boundary violation. Reasonable people can and do disagree as to the level of privacy they expect, but if I invite you into my home and you snoop through my bedside table and read my diary, there isn’t much ambiguity about that being an invasion.

But in the age of ✨ networked computing ✨, this individual model of privacy just doesn’t scale anymore. There are too many exponentially intersecting relationships for any of us to keep in our head. It’s no longer just about what we tell a friend or the tax collector or even a journalist. It’s the digital footprint that we often unknowingly leave in our wake every time we interact with something online, and how all of those websites and apps and their shadowy partners talk to each other behind our backs. It’s the cameras in malls tracking our location and sometimes emotions, and it’s the license plate readers compiling a log of our movements.

At a time when governments and companies are increasingly investing in surveillance mechanisms under the guise of security and transparency, that scale is only going to keep growing. Our individual comfort about whether we are left alone is no longer the only, or even the most salient part of the story, and we need to think about privacy as a public good and a collective value.

I like thinking about privacy as being collective, because it feels like a more true reflection of the fact that our lives are made up of relationships, and information about our lives is social and contextual by nature. The fact that I have a sister also indicates that my sister has at least one sibling: me. If I took a DNA test through 23andme I’m not just disclosing information about me but also about everyone that I’m related to, none of whom are able to give consent. The privacy implications for familial DNA are pretty broad: this information might be used to sell or withhold products and services, expose family secrets, or implicate a future as-yet-unborn relative in a crime. I could email 23andme and ask them to delete my records, and they might eventually comply in a month or three. But my present and future relatives wouldn’t be able to do that, or even know that their privacy had been compromised at all.

Even with data that’s less fraught than our genome, our decisions about what we expose to the world have externalities for the people around us. I might think nothing of posting a photo of going out with my friends and mentioning the name of the bar, but I’ve just exposed our physical location to the internet. If one of my friends has had to deal with a stalker in their past, I could’ve put their physical safety at risk. Even if I’m careful to make the post friends-only, the people I trust are not the same as the people my friends trust. In an individual model of privacy, we are only as private as our least private friend.

Amidst the global pandemic, this might sound not dissimilar to public health. When I decide whether to wear a mask in public, that’s partially about how much the mask will protect me from airborne droplets. But it’s also—perhaps more significantly—about protecting everyone else from me.

Data collection isn’t always bad, but it is always risky. Sometimes that’s due to shoddy design and programming or lazy security practices. But even the best engineers often fail to build risk-free systems, by the very nature of systems.

Systems are easier to attack than they are to defend. If you want to defend a system, you have to make sure every part of it is perfectly implemented to guard against any possible vulnerabilities. Oftentimes, trying to defend a system means adding additional components, which just ends up creating more potential weak points. Whereas if you want to attack, all you have to do is find the one weakness that the systems designer missed. (Or, to paraphrase the IRA, you only have to be lucky once.)

This is true of all systems, digital or analog, but the thing that makes computer systems particularly vulnerable is that the same weaknesses can be deployed across millions of devices, in our phones and laptops and watches and toasters and refrigerators and doorbells. When a vulnerability is discovered in one system, an entire class of devices around the world is instantly a potential target, but we still have to go fix them one by one.

This is how the Equifax data leak happened. Equifax used a piece of open source software that had a security flaw in it, the people who work on that software found it and fixed it, and instead of diligently updating their systems Equifax hit the snooze button for four months and let hackers steal hundreds of millions of customer records. And while Equifax is definitely guilty of aforementioned lazy security practices, this incident also illustrates how fragile computer systems are. From the moment this bug was discovered, every server in the world that ran that software was at risk.

What’s worse, in many cases people weren’t even aware that their data was stored with Equifax. If you’re an adult who has had a job or a phone bill or interacted with a bank in the last seven years, your identifying information is collected by Equifax whether you like it or not. The only way to opt out would have been to be among the small percentage of overwhelmingly young, poor, and racialized people who have no credit histories, which significantly limits the scope of their ability to participate in the economy. How do you notice-and-consent your way out of that?

There unfortunately isn’t one weird trick to save democracy, but that doesn’t mean there aren’t lessons we can learn from history to figure out how to protect privacy as a public good. The scale and ubiquity of computers may be unprecedented, but so is the scale of our collective knowledge…

Read the full piece (and you should) for Jenny Zhang‘s (@phirephoenix) compelling case that we should treat– and protect– privacy as a public good, and explanation of how we might do that: “Left alone, together.” TotH to Sentiers.

[image above: source]

* Edward Snowden

###

As we think about each other, we might recall that it was on this date in 1939 that the first government appropriation was made to support the construction of the Harvard Mark I computer.

Designer Howard Aiken had enlisted IBM as a partner in 1937; company chairman Thomas Watson Sr. personally approved the project and its funding. It was completed in 1944 (and put to work on a set of war-related tasks, including calculations– overseen by John von Neumann– for the Manhattan Project).

The Mark I was the industry’s largest electromechanical calculator… and it was large: 51 feet long, 8 feet high, and 2 feet deep; it weighed about 9,445 pounds. The basic calculating units had to be synchronized and powered mechanically, so they were operated by a 50-foot (15 m) drive shaft coupled to a 5 horsepower electric motor, which served as the main power source and system clock. It could do 3 additions or subtractions in a second; a multiplication took 6 seconds; a division took 15.3 seconds; and a logarithm or a trigonometric function took over a minute… ridiculously slow by today’s standards, but a huge advance in its time.

source

“The ancient Oracle said that I was the wisest of all the Greeks. It is because I alone, of all the Greeks, know that I know nothing.”*…

The site of the oracle at Dodona

Your correspondent will be off-line for the next 10 days or so; regular service will resume on or around April 26th. In the meantime, a meeting of the (very) old and the (very) new…

The Virtual Reality Oracle (VRO) is a first-person virtual reality experience of oracular divination at the ancient Greek site of Dodona circa 450 BCE. Immerse yourself in the lives of ordinary people and community leaders alike as they travel to Dodona to consult the gods. Inspired by the questions they posed on themes as wide-ranging as wellbeing, work, and theft, perhaps you in turn will ask your question of Zeus?…

Homer mentioned Dodona; now you can be there, then. An immersive experience of the ancient Greek gods: “Virtual Reality Oracle.”

* Socrates

###

As we look for answers, we might recall that it was on this date in 1977 that both the Apple II and Commodore PET 2001 personal computers were introduced at the first annual West Coast Computer Faire.

Ironically, Commodore had previously rejected purchasing the Apple II from Steve Jobs and Steve Wozniak, deciding to build their own computers. Both computers used the same processor, the MOS 6502, but the two companies had different design strategies, and it showed on this day. Apple wanted to build computers with more features at a higher price point; Commodore wanted to sell less feature-filled computers at a lower price point. The Apple II had color graphics and sound, selling for $1,298. The Commodore PET had only a monochrome display and was priced at $795.

Note: it was very difficult to find a picture with both an original Apple II (not the IIe) and a Commodore PET 2001; the only one I could find also includes the TRS-80, another PC introduced later in 1977.

source
The photo mentioned above: The Apple II is back left; the PET, back right

source

Written by (Roughly) Daily

April 16, 2021 at 1:01 am
