(Roughly) Daily

Posts Tagged ‘Moore’s Law’

“Moore’s Law is really a thing about human activity, it’s about vision, it’s about what you’re allowed to believe”*…

 


 

In moments of technological frustration, it helps to remember that a computer is basically a rock. That is its fundamental witchcraft, or ours: for all its processing power, the device that runs your life is just a complex arrangement of minerals animated by electricity and language. Smart rocks. The components are mined from the Earth at great cost, and they eventually return to the Earth, however poisoned. This rock-and-metal paradigm has mostly served us well. The miniaturization of metallic components onto wafers of silicon — an empirical trend we call Moore’s Law — has defined the last half-century of life on Earth, giving us wristwatch computers, pocket-sized satellites and enough raw computational power to model the climate, discover unknown molecules, and emulate human learning.

But there are limits to what a rock can do. Computer scientists have been predicting the end of Moore’s Law for decades. The cost of fabricating next-generation chips is growing more prohibitive the closer we draw to the physical limits of miniaturization. And there are only so many rocks left. Demand for the high-purity silica sand used to manufacture silicon chips is so high that we’re facing a global, and irreversible, sand shortage; and the supply chain for commonly-used minerals, like tin, tungsten, tantalum, and gold, fuels bloody conflicts all over the world. If we expect 21st century computers to process the ever-growing amounts of data our culture produces — and we expect them to do so sustainably — we will need to reimagine how computers are built. We may even need to reimagine what a computer is to begin with.

It’s tempting to believe that computing paradigms are set in stone, so to speak. But there are already alternatives on the horizon. Quantum computing, for one, would shift us from a realm of binary ones and zeroes to one of qubits, making computers drastically faster than we can currently imagine, and the impossible — like unbreakable cryptography — newly possible. Still further off are computer architectures rebuilt around a novel electronic component called a memristor. Speculatively proposed by the physicist Leon Chua in 1971, first proven to exist in 2008, a memristor is a resistor with memory, which makes it capable of retaining data without power. A computer built around memristors could turn off and on like a light switch. It wouldn’t require the conductive layer of silicon necessary for traditional resistors. This would open computing to new substrates — the possibility, even, of integrating computers into atomically thin nano-materials. But these are architectural changes, not material ones.

For material changes, we must look farther afield, to an organism that occurs naturally only in the most fleeting of places. We need to glimpse into the loamy rot of a felled tree in the woods of the Pacific Northwest, or examine the glistening walls of a damp cave. That’s where we may just find the answer to computing’s intractable rock problem: down there, among the slime molds…

It’s time to reimagine what a computer could be: “Beyond Smart Rocks.”
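A quick aside on the memristor mentioned above: “a resistor with memory” can be made concrete in a few lines of code. Below is a minimal numerical sketch of the linear ion-drift model that HP used to describe its 2008 device; the component values are illustrative placeholders, not figures from the article. The point is simply that the device’s resistance depends on how much charge has flowed through it, and that this state survives with the power off.

```python
# Minimal sketch of the linear ion-drift memristor model (illustrative values,
# not from the article): resistance depends on the charge that has flowed
# through the device, and the state persists at zero power.

R_ON, R_OFF = 100.0, 16_000.0  # ohms: fully doped vs. fully undoped resistance
D = 10e-9                      # m: film thickness
MU_V = 1e-14                   # m^2/(s*V): dopant mobility
DT = 1e-5                      # s: integration time step

def memristance(w: float) -> float:
    """Resistance as a weighted mix of the doped (w) and undoped (D - w) regions."""
    x = w / D
    return R_ON * x + R_OFF * (1.0 - x)

def step(w: float, v: float) -> float:
    """Advance the doped-region width w by one Euler step under applied voltage v."""
    i = v / memristance(w)            # current through the device
    w += MU_V * (R_ON / D) * i * DT   # linear ion-drift state equation
    return min(max(w, 0.0), D)        # keep w inside the physical device

w = 0.1 * D                           # start mostly undoped: high resistance
print(f"before write: {memristance(w):8.0f} ohms")

for _ in range(80_000):               # apply a +1 V "write" pulse for 0.8 s
    w = step(w, 1.0)
print(f"after write:  {memristance(w):8.0f} ohms")

for _ in range(80_000):               # power off: with v = 0, the state cannot drift
    w = step(w, 0.0)
print(f"power off:    {memristance(w):8.0f} ohms  (unchanged: the 'memory')")
```

That zero-power persistence is the property the excerpt points to: a computer built around such components could, at least in principle, turn off and on like a light switch.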

(TotH to Patrick Tanguay.)

* “Moore’s Law is really a thing about human activity, it’s about vision, it’s about what you’re allowed to believe. Because people are really limited by their beliefs, they limit themselves by what they allow themselves to believe about what is possible.”  – Carver Mead

###

As we celebrate slime, we might send fantastically far-sighted birthday greetings to Hugo Gernsback, a Luxembourgish-American inventor, broadcast pioneer, writer, and publisher; he was born on this date in 1884.

Gernsback held 80 patents at the time of his death; he founded radio station WRNY, was involved in the first television broadcasts, and is considered a pioneer in amateur radio.  But it was as a writer and publisher that he probably left his most lasting mark:  In 1911, as owner/publisher of the magazine Modern Electrics, he filled a blank spot in his publication by dashing off the first chapter of a series called “Ralph 124C 41+.” The twelve installments of “Ralph” were filled with inventions unknown at the time, including “television” (Gernsback is credited with introducing the word), fluorescent lighting, juke boxes, solar energy, microfilm, vending machines, and the device we now call radar.

The “Ralph” series was an astounding success with readers; and in 1926 Gernsback founded the first magazine devoted to science fiction, Amazing Stories.  Believing that the perfect sci-fi story is “75 percent literature interwoven with 25 percent science,” he coined the term “science fiction.”

Gernsback was a “careful” businessman, who was tight with the fees that he paid his writers– so tight that H. P. Lovecraft and Clark Ashton Smith referred to him as “Hugo the Rat.”

Still, his contributions to the genre as publisher were so significant that, along with H.G. Wells and Jules Verne, he is sometimes called “The Father of Science Fiction”; in his honor, the annual Science Fiction Achievement awards are called the “Hugos.”

(Coincidentally, today is also the birthday– in 1906– of Philo T. Farnsworth, the man who actually did invent television… and was thus the inspiration for the name “Philco.”)

[UPDATE- With thanks to friend MK for the catch:  your correspondent was relying on an apocryphal tale in attributing the Philco brand name to Philo Farnsworth.  Farnsworth did work with the company, and helped them enter the television business.  But the Philco trademark dates back to 1919– pre-television days– as a label for what was then the Philadelphia Storage Battery Company.]

Gernsback, wearing one of his inventions, TV Glasses

source

 

 

“The number of transistors on integrated circuits doubles approximately every two years”*…

 

Moore’s Law has held up almost astoundingly well…

 source (and larger version)
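For a rough sense of what that curve implies, here is a back-of-the-envelope sketch; the 1971 starting point (the Intel 4004’s roughly 2,300 transistors) is a commonly cited reference, and the strict two-year doubling is an idealization rather than a fit to the chart above.

```python
# Back-of-the-envelope Moore's Law: transistor counts doubling every two years.
# The 1971 Intel 4004 (~2,300 transistors) is used as an illustrative baseline.

START_YEAR, START_COUNT = 1971, 2_300
DOUBLING_PERIOD_YEARS = 2.0

def projected_transistors(year: int) -> float:
    """Transistor count implied by a strict two-year doubling from the baseline."""
    return START_COUNT * 2 ** ((year - START_YEAR) / DOUBLING_PERIOD_YEARS)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

Each decade under that rule multiplies the count by about 32x; fifty years compounds to a factor of more than thirty million, putting the idealized 2021 figure in the tens of billions, roughly where the largest commercial chips actually sit.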

This seemingly inexorable march has enabled an extraordinary range of new products and services– from intercontinental ballistic missiles to global environmental monitoring systems and from smart phones to medical implants…  But researchers at Carnegie Mellon University are sounding an alarm…

The speed of our technology doubles every year, right? Not anymore. We’ve come to take for granted that as the years go on, computing technology gets faster, cheaper and more energy-efficient.

In their recent paper, “Science and research policy at the end of Moore’s law,” published in Nature Electronics, however, Carnegie Mellon University researchers Hassan Khan, David Hounshell, and Erica Fuchs argue that future advancement in microprocessors faces new and unprecedented challenges…

In the seven decades following the invention of the transistor at Bell Labs, warnings about impending limits to miniaturization and the corresponding slowdown of Moore’s Law have come regularly from industry observers and academic researchers. Despite these warnings, semiconductor technology continually progressed along the Moore’s Law trajectory. Khan, Hounshell, and Fuchs’ archival work and oral histories, however, make clear that times are changing.

“The current technological and structural challenges facing the industry are unprecedented and undermine the incentives for continued collective action in research and development,” the authors state in the paper, “which has underpinned the last 50 years of transformational worldwide economic growth and social advance.”

As the authors explain in their paper, progress in semiconductor technology is undergoing a seismic shift driven by changes in the underlying technology and product-end markets…

“To continue advancing general purpose computing capabilities at reduced cost with economy-wide benefits will likely require entirely new semiconductor process and device technology,” explains Engineering and Public Policy graduate Hassan Khan. “The underlying science for this technology is as yet unknown, and will require significant research funds – an order of magnitude more than is being invested today.”

The authors conclude by arguing that the lack of private incentives creates a case for greatly increased public funding and the need for leadership beyond traditional stakeholders. They suggest that funding of $600 million per year is needed, with 90% of those funds coming from public research dollars and the rest most likely from defense agencies…

Read the complete summary at “Moore’s law has ended. What comes next?“; read the complete Nature article here.

* a paraphrase of Gordon Moore’s assertion– known as “Moore’s law”– in the thirty-fifth anniversary issue of Electronics magazine, published on April 19, 1965

###

As we pack ’em ever tighter, we might send carefully-computed birthday greetings to Thomas John Watson Sr.; he was born on this date in 1874.  A mentee of John Henry Patterson at NCR, where Watson began his career, Watson became the chairman and CEO of the Computing-Tabulating-Recording Company (CTR), which, in 1924, he renamed International Business Machines– IBM.  He began using his famous motto– THINK– while still at NCR, but carried it with him to IBM…  where it became that corporation’s first trademark (in 1935).  That motto was the inspiration for the naming of the ThinkPad– and Watson himself (along with Sherlock Holmes’s trusty companion), for the naming of IBM’s Artificial Intelligence product.

 source

 

Let’s get small…

Gerd Binnig and Heinrich Rohrer, inventors of the Scanning Tunneling Microscope (source: IBM)

Twenty years ago, technicians at IBM’s Almaden Research Lab pulled a nifty stunt with their scanning tunneling microscope (STM).  IBM scientists had invented the STM nine years earlier in IBM’s Zurich Lab (and received a Nobel prize for it in 1986); while the STM was originally intended simply to create visualizations of things very, very tiny, the folks at Almaden realized that the technique used– it “felt” the atoms in question with similarly-charged particles, then mapped the object– could be reversed:  the STM could change its charge, “pin” an atom, and move it…  The first illustration– and, some argue, the first example of “practical” nanotechnology– was this IBM logo, “written” in xenon atoms:

source: IBM

Over the last two decades, the STM has become a critical tool for chip makers, enabling them to perfect current DRAM and flash memories.  Now the folks at Almaden, still pushing the limits of their gear, have turned their STMs into slo-mo movie cameras, and captured the atomic process of setting and erasing a bit on a single atom– that is to say, the operation of a single-atom DRAM.

Practical applications– atomic memories, better solar cells, and ultimately, atomic-scale quantum computers– are, of course, some way off… but Moore’s Law seems safe for a while.

Read all about it in EE Times.

As we drop the needle on that Steve Martin album, we might recall that it was on this date in 1908 that the Model T went on sale; it cost $825 (roughly equivalent to $20,000 today).  Ford’s advances in the technologies used both in the car and in its manufacture, along with economies of scale, resulted in steady price reductions over the next decade: by the 1920s, the price had fallen to $290 (equivalent to roughly $3,250 today).

1908 advertisement
