(Roughly) Daily

“The clustering of technological innovation in time and space helps explain both the uneven growth among nations and the rise and decline of hegemonic powers”*…

As scholars like Robert Gordon and Tyler Cowen have begun to call out a slowing of progress and growth in the U.S., others are beginning to wonder if “innovation clusters” like Silicon Valley are still advantageous. For example, Brian J. Asquith…

In 2011, the economist Tyler Cowen published The Great Stagnation, a short treatise with a provocative hypothesis. Cowen challenged his audience to look beyond the gleam of the internet and personal computing, arguing that these innovations masked a more troubling reality. Cowen contended that, since the 1970s, there has been a marked stagnation in critical economic indicators: median family income, total factor productivity growth, and average annual GDP growth have all plateaued…

In the years since the publication of the Great Stagnation hypothesis, others have stepped forward to offer support for this theory. Robert Gordon’s 2017 The Rise and Fall of American Growth chronicles in engrossing detail the beginnings of the Second Industrial Revolution in the United States, starting around 1870, the acceleration of growth spanning the 1920–70 period, and then a general slowdown and stagnation since about 1970. Gordon’s key finding is that, while the growth rate of average total factor productivity from 1920 to 1970 was 1.9 percent, it was just 0.6 percent from 1970 to 2014, where 1970 represents a secular trend break for reasons still not entirely understood. Cowen’s and Gordon’s insights have since been further corroborated by numerous research papers. Research productivity across a variety of measures (researchers per paper, R&D spending needed to maintain existing growth rates, etc.) has been on the decline across the developed world. Languishing productivity growth extends beyond research-intensive industries. In sectors such as construction, the value added per worker was 40 percent lower in 2020 than it was in 1970. The trend is mirrored in firm productivity growth, where a small number of superstar firms see exceptionally strong growth and the rest of the distribution increasingly lags behind.

A 2020 article by Nicholas Bloom and three coauthors in the American Economic Review cut right to the chase by asking, “Are Ideas Getting Harder to Find?,” and answered its own question in the affirmative. Depending on the data source, the authors find that while the number of researchers has grown sharply, output per researcher has declined sharply, leading aggregate research productivity to decline by 5 percent per year.

This stagnation should elicit greater surprise and concern because it persists despite advanced economies adhering to the established economics prescription intended to boost growth and innovation rates: (1) promote mass higher education, (2) identify particularly bright young people via standardized testing and direct them to research-intensive universities, and (3) pipe basic research grants through the university system to foster locally-driven research and development networks that supercharge productivity…

… the tech cluster phenomenon stands out because there is a fundamental discrepancy between how the clusters function in practice versus their theoretical contributions to greater growth rates. The emergence of tech clusters has been celebrated by many leading economists because of a range of findings that innovative people become more productive (by various metrics) when they work in the same location as other talented people in the same field. In this telling, the essence of innovation can be boiled down to three things: co-location, co-location, co-location. No other urban form seems to facilitate innovation like a cluster of interconnected researchers and firms.

This line of reasoning yields a straightforward syllogism: technology clusters enhance individual innovation and productivity. The local nature of innovation notwithstanding, technologies developed within these clusters can be adopted and enjoyed globally. Thus, while not everyone can live in a tech cluster, individuals worldwide benefit from new advances and innovations generated there, and some of the outsized economic gains the clusters produce can then be redistributed to people outside of the clusters to smooth over any lingering inequalities. Therefore, any policy that weakens these tech clusters leads to a diminished rate of innovation and leaves humanity as a whole poorer.

Yet the fact that the emergence of the tech clusters has also coincided with Cowen’s Great Stagnation raises certain questions. Are there shortcomings in the empirical evidence on the effects of the tech clusters? Does technology really diffuse across the rest of the economy as many economists assume? Do the tech clusters inherently prioritize welfare-enhancing technologies? Is there some role for federal or state action to improve the situation? Clusters are not unique to the postwar period: Detroit famously achieved a large agglomeration economy based on automobiles in the early twentieth century, and several authors have drawn parallels between the ascents of Detroit and Silicon Valley. What makes today’s tech clusters distinct from past ones? The fact that the tech clusters have not yielded the same society-enhancing benefits that they once promised should invite further scrutiny…
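A quick way to feel the force of Bloom et al.’s 5-percent-per-year figure is to run the arithmetic. The sketch below (Python, with made-up index numbers; only the 5 percent decline comes from the excerpt above) uses the simple accounting identity behind their paper: idea output = research productivity × number of researchers.

```python
# Illustrative arithmetic only: if research productivity falls ~5% per year,
# how many researchers are needed just to hold idea output constant?
target_output = 100.0          # idea output to maintain (arbitrary index)

for year in range(0, 30, 5):
    productivity = 0.95 ** year                      # productivity index after `year` years
    researchers_needed = target_output / productivity
    print(f"year {year:2d}: productivity {productivity:.2f}, "
          f"researchers needed {researchers_needed:.0f}")
```

On those assumptions, holding output steady requires roughly 30 percent more researchers every five years, and more than a tripling over 25 — which is the sense in which ideas are “getting harder to find.”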

How could this be? What can we do about it? Eminently worth reading in full: “Superstars or Black Holes: Are Tech Clusters Causing Stagnation?” (possible soft paywall), from @basquith827.

See also: Brad DeLong, on comments from Eric Schmidt: “That an externality market failure is partly counterbalanced and offset by a behavioral-irrationality-herd-mania cognitive failure is a fact about the world. But it does not mean that we should not be thinking and working very hard to build a better system—or that those who profit mightily from herd mania on the part of others should feel good about themselves.”

* Robert Gilpin

###

As we contemplate co-location, we might recall that it was on this date in 1956 that a denizen of one of America’s leading tech/innovation hubs, Jay Forrester at MIT [see here and here], was awarded a patent for his coincident current magnetic core memory (Patent No. 2,736,880). Forrester’s invention, a “multicoordinate digital information storage device,” became the standard memory device for digital computers until supplanted by solid state (semiconductor) RAM in the mid-1970s.
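For readers curious what “coincident current” means in practice: a core at the crossing of an X wire and a Y wire switches only when both wires carry half of the switching current at once, so an N-by-M array of bits needs only N + M drive lines. Below is a toy model in Python (my own illustration, not Forrester’s circuit).

```python
# Toy model of coincident-current selection in a core-memory plane.
# A core flips only if it sees the full switching current, i.e. only at the
# intersection of the driven X (row) line and Y (column) line.
import numpy as np

N, M = 4, 4
plane = np.zeros((N, M), dtype=int)   # one bit per core, all initially 0

def write_one(x, y):
    drive = np.zeros((N, M))
    drive[x, :] += 0.5                # half-select current on the X line
    drive[:, y] += 0.5                # half-select current on the Y line
    plane[drive >= 1.0] = 1           # only the coincident core switches

write_one(2, 1)
print(plane)   # a single 1 at row 2, column 1; half-selected cores are untouched
```

(Reading a real core was destructive: sensing meant driving the core toward 0 and watching for a pulse on a sense wire, then rewriting. The sketch ignores that detail.)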

source

“One thing I’ve learned over time is, if you hit a golf ball into water, it won’t float”*…

Happy New Year!

In the spirit of Tom Whitwell’s lists, Jason Kottke‘s collection of learnings from 2023-gone-by…

Purple Heart medals that were made for the planned (and then cancelled) invasion of Japan in 1945 are still being given out to wounded US military personnel.

The San Francisco subway system still runs on 5 1/4-inch floppies.

Bottled water has an expiration date — it’s the bottle not the water that expires.

Multicellular life developed on Earth more than 25 separate times.

Horseshoe crabs are older than Saturn’s rings.

Ernest Hemingway only used 59 exclamation points across his entire collection of works.

MLB broadcaster Vin Scully’s career lasted 67 seasons, during which he called a game managed by Connie Mack (born in 1862) and one that Julio Urías (born in 1996) played in.

Almost 800,000 Maryland license plates include a URL that now points to an online casino in the Philippines because someone let the domain registration lapse.

Dozens more at: “52 Interesting Things I Learned in 2023.”

* Arnold Palmer

###

As we live and learn, we might spare a thought for Grace Brewster Murray Hopper; she died on this date in 1992.  A seminal computer scientist and Rear Admiral in the U.S. Navy, “Amazing Grace” (as she was known to many in her field) was one of the first programmers of the Harvard Mark I computer (in 1944), invented the first compiler for a computer programming language, and was one of the leaders in popularizing the concept of machine-independent programming languages– which led to the development of COBOL, one of the first high-level programming languages.

Hopper also (inadvertently) contributed one of the most ubiquitous metaphors in computer science: she found and documented the first computer “bug” (in 1947).

She has both a ship (the guided-missile destroyer USS Hopper) and a super-computer (the Cray XE6 “Hopper” at NERSC) named in her honor.

Source

“Even a fool who keeps silent is considered wise; when he closes his lips, he is deemed intelligent.”*…

A substantial– and important– look at a troubling current aflow in the world of technology today: Emily Gorcenski on the millenarianism and manifest destiny of AI and techno-futurism…

… Early Christian missionaries traveled the pagan lands looking for heathens to convert. Evangelical movements almost definitionally involve spreading the word of Jesus Christ as a core element of their faith. The missionary holds the key that unlocks eternal life and the only cost is conversion: the more souls saved, the holier the work. The idea of going out into the world to spread the good word and convert them to our product/language/platform is a deep tradition in the technology industry. We even hire people specifically to do that. We call them technology evangelists.

Successful evangelism has two key requirements. First, it must offer the promised land, the hope of a better life, of eternal salvation. Second, it must have a willing mark, someone desperate enough (perhaps through coercion) to be included in that vision of eternity, better if they can believe strongly enough to become acolytes themselves. This formed the basis of the crypto community: Ponzi schemes sustain only as long as there are new willing participants and when those participants realize that their own continued success is contingent on still more conversions, the incentive to act in their own best interest is strong. It worked for a while to keep the crypto bubble alive. Where this failed was in every other aspect of web3.

There’s a joke in the data science world that goes something like this: What’s the difference between statistics, machine learning, and AI? The size of your marketing budget. It’s strange, actually, that we still call it “artificial intelligence” to this day. Artificial intelligence is a dream from the 40s mired in the failures of the ’60s and ’70s. By the late 1980s, despite the previous spectacular failures to materialize any useful artificial intelligence, futurists had moved on to artificial life.

Nobody much is talking about artificial life these days. That idea failed, too, and those failures have likewise failed to deter us. We are now talking about creating “cybernetic superintelligence.” We’re talking about creating an AI that will usher a period of boundless prosperity for humankind. We’re talking about the imminence of our salvation.

The last generation of futurists envisioned themselves as gods working to create life. We’re no longer talking about just life. We’re talking about making artificial gods.

I’m certainly not the first person to shine a light on the eschatological character of today’s AI conversation. Sigal Samuel did it a few months back in far fewer words than I’ve used here, though perhaps glossing over some of the political aspects I’ve brought in. She cites Noble and Kurzweil in many of the same ways. I’m not even the first person to coin the term “techno-eschatology.” The parallels between the Singularity Hypothesis and the second coming of Christ are plentiful and not hard to see.

… The issue is not that Altman or Bankman-Fried or Andreessen or Kurzweil or any of the other technophiles discussed so far are “literally Hitler.” The issue is that high technology shares all the hallmarks of a millenarian cult and the breathless evangelism about the power and opportunity of AI is indistinguishable from cult recruitment. And moreover, that its cultism meshes perfectly with the American evangelical far-right. Technologists believe they are creating a revolution when in reality they are playing right into the hands of a manipulative, mainstream political force. We saw it in 2016 and we learned nothing from that lesson.

Doomsday cults can never admit when they are wrong. Instead, they double down. We failed to make artificial intelligence, so we pivoted to artificial life. We failed to make artificial life, so now we’re trying to program the messiah. Two months before the Metaverse went belly-up, McKinsey valued it at up to $5 trillion by 2030. And it was without a hint of irony or self-reflection that they pivoted and valued GenAI at up to $4.4 trillion annually. There’s not even a hint of common sense in this analysis.

This post won’t convince anyone on the inside of the harms they are experiencing nor the harms they are causing. That’s not been my intent. You can’t remove someone from a cult if they’re not ready to leave. And the eye-popping data science salaries don’t really incentivize someone to get out. No. My intent was to give some clarity and explanatory insight to those who haven’t fallen under the Singularity’s spell. It’s a hope that if—when—the GenAI bubble bursts, we can maybe immunize ourselves against whatever follows it. And it’s a plea to get people to understand that America has never stopped believing in its manifest destiny.

David Nye described 19th- and 20th-century American perceptions of technology using the same concept of the sublime that philosophers used to describe Niagara Falls. Americans once beheld with divine wonder the locomotive and the skyscraper, the atom bomb and the Saturn V rocket. I wonder if we’ll behold AI with that same reverence. I pray that we will not. Our real earthly resources are wearing thin. Computing has surpassed aviation in terms of its carbon threat. The earth contains only so many rare earth elements. We may face Armageddon. There will be no Singularity to save us. We have the power to reject our manifest destinies…

Eminently worth reading in full: “Making God,” from @EmilyGorcenski (a relay to mastodon and BlueSky).

See also: “Effective Obfuscation,” from Molly White (@molly0xFFF) and this thread from Emily Bender (@emilymbender).

* Proverbs 17:28

###

As we resist recruitment, we might spare a thought for Ada Lovelace (or, more properly, Augusta Ada King, Countess of Lovelace, née Byron); she died on this date in 1852. A mathematician and writer, she is chiefly remembered for her work on Charles Babbage‘s proposed mechanical general-purpose computer, the Analytical Engine— for which she authored what can reasonably be considered the first “computer program.” She was the first to recognize that the machine had applications beyond pure calculation, and so is one of the “parents” of the modern computer.

Daguerreotype by Antoine Claudet, c. 1843 (source)

“There are two types of encryption: one that will prevent your sister from reading your diary and one that will prevent your government”*…

… But sometimes the encryption you think will work against governments won’t even deter your sister. Joseph Cox on the recently uncovered vulnerabilities in TETRA, the encrypted radio standard used worldwide…

A group of cybersecurity researchers has uncovered what they believe is an intentional backdoor in encrypted radios used by police, military, and critical infrastructure entities around the world. The backdoor may have existed for decades, potentially exposing a wealth of sensitive information transmitted across them, according to the researchers… The end result, however, is radios with traffic that can be decrypted using consumer hardware like an ordinary laptop in under a minute…

The research is the first public and in-depth analysis of the TErrestrial Trunked RAdio (TETRA) standard in the more than 20 years the standard has existed. Not all users of TETRA-powered radios use the specific encryption algorithm called TEA1, which is impacted by the backdoor. TEA1 is part of the TETRA standard approved for export to other countries. But the researchers also found other vulnerabilities across TETRA that could allow historical decryption of communications and deanonymization. TETRA-radio users in general include national police forces and emergency services in Europe; military organizations in Africa; and train operators in North America and critical infrastructure providers elsewhere.

Midnight Blue [presented] their findings at the Black Hat cybersecurity conference in August. The details of the talk had been kept closely under wraps, with the Black Hat website simply describing the briefing as a “Redacted Telecom Talk.” The reason for the secrecy was in large part the unusually long disclosure process. Wetzels told Motherboard the team has spent more than a year and a half disclosing these vulnerabilities to impacted parties so they can be fixed. That included an initial meeting with Dutch police in January 2022, a meeting with the intelligence community later that month, and then the bulk of the information and mitigations being distributed to stakeholders. NLnet Foundation, an organization which funds “those with ideas to fix the internet,” financed the research.

The European Telecommunications Standards Institute (ETSI), an organization that standardizes technologies across the industry, first created TETRA in 1995. Since then, TETRA has been used in products, including radios, sold by Motorola, Airbus, and more. Crucially, TETRA is not open-source. Instead, it relies on what the researchers describe in their presentation slides as “secret, proprietary cryptography,” meaning it is typically difficult for outside experts to verify how secure the standard really is.

Bart Jacobs, a professor of security, privacy and identity, who did not work on the research itself but says he was briefed on it, said he hopes “this really is the end of closed, proprietary crypto, not based on open, publicly scrutinised standards.”…
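For a sense of scale behind the “ordinary laptop in under a minute” claim: the researchers’ disclosure (a detail not in the excerpt above) reported that the TEA1 backdoor reduces the cipher’s effective key strength to roughly 32 bits. The back-of-the-envelope Python below, with an assumed key-trial rate, shows why a keyspace that small offers essentially no protection.

```python
# Rough arithmetic, not an attack: how long does an exhaustive search of a
# ~32-bit effective keyspace take? The trial rate is an assumption for illustration.
EFFECTIVE_KEY_BITS = 32        # reported effective strength of backdoored TEA1
TRIALS_PER_SECOND = 1e8        # assumed throughput for one commodity machine

keyspace = 2 ** EFFECTIVE_KEY_BITS
worst_case = keyspace / TRIALS_PER_SECOND
print(f"{keyspace:,} candidate keys")                          # 4,294,967,296
print(f"worst case: {worst_case:.0f} s (~{worst_case/60:.1f} min)")
print(f"expected:   {worst_case/2:.0f} s")
```

At that assumed rate an exhaustive search finishes in well under a minute on average; the nominal 80-bit keyspace would take on the order of hundreds of millions of years at the same rate.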

The veil, pierced: “Researchers Find ‘Backdoor’ in Encrypted Police and Military Radios,” from @josephfcox in @motherboard. (Not long after this article ran– and after the downfall of Vice, Motherboard’s parent– Cox and a number of his talented Motherboard colleagues launched 404 Media. Check it out.)

Remarkably, some of the radio systems enabling critical infrastructure are even easier to hack– they aren’t even encrypted.

* Bruce Schneier (@schneierblog)

###

As we take precautions, we might recall that it was on this date in 1980 that the last IBM 7030 “Stretch” mainframe in active use was decommissioned at Brigham Young University. The first Stretch was delivered to Los Alamos National Laboratory in 1961, giving the model almost 20 years of operational service.

The Stretch was famous for many things, but perhaps most notably it was the first IBM computer to use transistors instead of vacuum tubes; it was the first computer to be designed with the help of an earlier computer; and it was the world’s fastest computer from 1961 to 1964.

source

“No problem can be solved from the same level of consciousness that created it”*…

Christof Koch settles his bet with David Chalmers (with a case of wine)

… perhaps especially not the problem of consciousness itself. At least for now…

A 25-year science wager has come to an end. In 1998, neuroscientist Christof Koch bet philosopher David Chalmers that the mechanism by which the brain’s neurons produce consciousness would be discovered by 2023. Both scientists agreed publicly on 23 June, at the annual meeting of the Association for the Scientific Study of Consciousness (ASSC) in New York City, that it is still an ongoing quest — and declared Chalmers the winner.

What ultimately helped to settle the bet was a key study testing two leading hypotheses about the neural basis of consciousness, whose findings were unveiled at the conference.

“It was always a relatively good bet for me and a bold bet for Christof,” says Chalmers, who is now co-director of the Center for Mind, Brain and Consciousness at New York University. But he also says this isn’t the end of the story, and that an answer will come eventually: “There’s been a lot of progress in the field.”

Consciousness is everything a person experiences — what they taste, hear, feel and more. It is what gives meaning and value to our lives, Chalmers says.

Despite a vast effort — and a 25-year bet — researchers still don’t understand how our brains produce it, however. “It started off as a very big philosophical mystery,” Chalmers adds. “But over the years, it’s gradually been transmuting into, if not a ‘scientific’ mystery, at least one that we can get a partial grip on scientifically.”…

Neuroscientist Christof Koch wagered philosopher David Chalmers 25 years ago that researchers would learn how the brain achieves consciousness by now. But the quest continues: “Decades-long bet on consciousness ends — and it’s philosopher 1, neuroscientist 0,” from @Nature. Eminently worth reading in full for background and state-of-play.

* Albert Einstein

###

As we ponder pondering, we might spare a thought for Vannevar Bush; he died on this date in 1974. An engineer, inventor, and science administrator, he headed the World War II U.S. Office of Scientific Research and Development (OSRD), through which almost all wartime military R&D was carried out, including important developments in radar and the initiation and early administration of the Manhattan Project. He emphasized the importance of scientific research to national security and economic well-being, and was chiefly responsible for the movement that led to the creation of the National Science Foundation.

Bush also did his own work. Before the war, in 1925, at age 35, he developed the differential analyzer, a pioneering analog computer capable of solving differential equations. It put into productive form the mechanical concept left incomplete by Charles Babbage 50 years earlier, and theoretical work by Lord Kelvin. The machine filled a 20×30 ft room. He also seeded ideas, notably in his 1945 essay “As We May Think,” that were later adopted as internet hypertext links.
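For the curious, here is a rough digital analogue of what the differential analyzer did with its wheel-and-disc integrators: it solved an equation by wiring integrators together in a feedback loop. The minimal Python sketch below (my illustration, not Bush’s design) does the same numerically for the simple oscillator y″ = −y.

```python
# Two chained "integrators" with feedback, solving d2y/dt2 = -y numerically.
# The mechanical analyzer did this continuously with rotating shafts and discs.
dt = 0.001
y, dy = 1.0, 0.0                            # initial conditions: y(0) = 1, y'(0) = 0
steps = int(2 * 3.141592653589793 / dt)     # integrate over one full period

for _ in range(steps):
    d2y = -y          # the interconnection that encodes the differential equation
    dy += d2y * dt    # first integrator accumulates y'
    y += dy * dt      # second integrator accumulates y

print(round(y, 3))    # close to 1.0 again after one period, as expected for cos(t)
```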

source