(Roughly) Daily

Posts Tagged ‘Tim O’Reilly’

“The best way to predict the future is to invent it”*…

A vintage futuristic car driving down a tree-lined road with a man and a woman smiling inside.

Dario Amodei, the CEO of AI purveyor Anthropic, has recently published a long (nearly 20,000 word) essay on the risks of artificial intelligence that he fears: Will AI become autonomous (and if so, to what ends)? Will AI be used for destructive purposes (e.g., war or terrorism)? Will AI allow one or a small number of “actors” (corporations or states) to seize power? Will AI cause economic disruption (mass unemployment, radically-concentrated wealth, disruption in capital flows)? Will AI’s indirect effects (on our societies and individual lives) be destabilizing? (Perhaps tellingly, he doesn’t explore the prospect of an economic crash on the back of an AI bubble, should one burst– but that might be considered an “indirect effect,” as AI development would likely continue, but in fewer hands [consolidation] and on the heels of destabilizing financial turbulence.)

The essay is worth reading. At the same time, as Matt Levine suggests, we might wonder why pieces like this come not from AI nay-sayers, but from those rushing to build it…

… in fact there seems to be a surprisingly strong positive correlation between noisily worrying about AI and being good at building AI. Probably the three most famous AI worriers in the world are Sam Altman, Dario Amodei, and Elon Musk, who are also the chief executive officers of three of the biggest AI labs; they take time out from their busy schedules of warning about the risks of AI to raise money to build AI faster. And they seem to hire a lot of their best researchers from, you know, worrying-about-AI forums on the internet. You could have different models here too. “Worrying about AI demonstrates the curiosity and epistemic humility and care that make a good AI researcher,” maybe. Or “performatively worrying about AI is actually a perverse form of optimism about the power and imminence of AI, and we want those sorts of optimists.” I don’t know. It’s just a strange little empirical fact about modern workplace culture that I find delightful, though I suppose I’ll regret saying this when the robots enslave us.

Anyway if you run an AI lab and are trying to recruit the best researchers, you might promise them obvious perks like “the smartest colleagues” and “the most access to chips” and “$50 million,” but if you are creative you might promise the less obvious perks like “the most opportunities to raise red flags.” They love that…

– source

In any case, precaution and prudence in the pursuit of AI advances seems wise. But perhaps even more, Tim O’Reilly and Mike Loukides suggest, we’d profit from some disciplined foresight:

The market is betting that AI is an unprecedented technology breakthrough, valuing Sam Altman and Jensen Huang like demigods already astride the world. The slow progress of enterprise AI adoption from pilot to production, however, still suggests at least the possibility of a less earthshaking future. Which is right?

At O’Reilly, we don’t believe in predicting the future. But we do believe you can see signs of the future in the present. Every day, news items land, and if you read them with a kind of soft focus, they slowly add up. Trends are vectors with both a magnitude and a direction, and by watching a series of data points light up those vectors, you can see possible futures taking shape…

For AI in 2026 and beyond, we see two fundamentally different scenarios that have been competing for attention. Nearly every debate about AI, whether about jobs, about investment, about regulation, or about the shape of the economy to come, is really an argument about which of these scenarios is correct…

[Tim and Mike explore an “AGI is an economic singularity” scenario (see also here, here, and Amodei’s essay, linked above), then an “AI is a normal technology” future (see also here); they enumerate signs and indicators to track; then consider 10 “what if” questions in order to explore the implications of the scenarios, homing in on “robust” implications for each– answers that are smart whichever way the future breaks. They conclude…]

The future isn’t something that happens to us; it’s something we create. The most robust strategy of all is to stop asking “What will happen?” and start asking “What future do we want to build?”

As Alan Kay once said, “The best way to predict the future is to invent it.” Don’t wait for the AI future to happen to you. Do what you can to shape it. Build the future you want to live in…

Read in full– the essay is filled with deep insight. Taking the long view: “What If? AI in 2026 and Beyond,” from @timoreilly.bsky.social and @mikeloukides.hachyderm.io.ap.brid.gy.

[Image above: source]

Alan Kay

###

As we pave our own paths, we might send world-changing birthday greetings to a man who personified Alan’s injunction, Doug Engelbart; he was born on this date in 1925.  An engineer and inventor who was a computing and internet pioneer, Doug is best remembered for his seminal work on human-computer interface issues, and for “the Mother of All Demos” in 1968, at which he demonstrated for the first time the computer mouse, hypertext, networked computers, and the earliest versions of graphical user interfaces… that’s to say, computing as we know it, and all that computing enables.

“The people are pieces of software called avatars. They are the audiovisual bodies that people use to communicate with each other in the Metaverse.”*…

Tim O’Reilly with a (customarily) wise assessment of an emerging new technology…

The metaphors we use to describe new technology constrain how we think about it, and, like an out-of-date map, often lead us astray. So it is with the metaverse. Some people seem to think of it as a kind of real estate, complete with land grabs and the attempt to bring traffic to whatever bit of virtual property they’ve created.

Seen through the lens of the real estate metaphor, the metaverse becomes a natural successor not just to Second Life but to the World Wide Web and to social media feeds, which can be thought of as a set of places (sites) to visit. Virtual Reality headsets will make these places more immersive, we imagine.

But what if, instead of thinking of the metaverse as a set of interconnected virtual places, we think of it as a communications medium? Using this metaphor, we see the metaverse as a continuation of a line that passes through messaging and email to “rendezvous”-type social apps like Zoom, Google Meet, Microsoft Teams, and, for wide broadcast, Twitch + Discord. This is a progression from text to images to video, and from store-and-forward networks to real time (and, for broadcast, “stored time,” which is a useful way of thinking about recorded video), but in each case, the interactions are not place based but happening in the ether between two or more connected people. The occasion is more the point than the place…

Tim explains what he means– and what that could mean: “The Metaverse is not a place- it’s a communications medium,” @timoreilly in @radar.

* Neal Stephenson, Snow Crash (the origination of the term “metaverse”)

###

As we jack in, we might send well-connected birthday greetings to Paul Otlet; he was born on this date in 1868. An author, entrepreneur, lawyer, and peace activist, he is considered the father of information science. He created Universal Decimal Classification (which would later become a faceted classification) and was responsible for the development of an early information retrieval tool, the “Repertoire Bibliographique Universel” (RBU), which utilized 3×5 inch index cards, used commonly in library catalogs around the world (though now largely displaced by the advent of the online public access catalog or OPAC). Indeed, Otlet predicted the advent of the internet (though over-optimistically imagined that it would appear in the 1930s).

For more of his remarkable story, see “Knowledge, like air, is vital to life. Like air, no one should be denied it.”

source

Written by (Roughly) Daily

August 23, 2022 at 1:00 am

“Create more value than you capture”*…

As Donald Trump’s presidency careened to its ignominious end, with a mob of his supporters storming the US Capitol, Facebook and Twitter banned the US president for inciting the violence. With that act, the scope of the political power wielded by Big Tech became impossible to ignore.

Whether these platforms have too much political power is a debate that is just beginning. Their outsize economic power, though, is unquestionable. The combined market capitalization of the five largest US tech platforms – Alphabet (Google), Amazon, Apple, Facebook, and Microsoft – rose by $2.7 trillion in 2020. Following the addition of Tesla to the S&P 500, the Big Six tech firms now represent nearly one-quarter of the index’s valuation. And with the spread of COVID-19, the leading digital platforms have become de facto essential service providers, enabling a mass transition to remote and isolated living.

And yet the political pressure on Big Tech has continued to rise. There is a growing consensus that platforms have been abusing their power, driving profits by exploiting consumer privacy, crushing the competition, and buying up potential rivals.

The economics of platforms is different from the economics of traditional offline and one-sided markets. Policymakers therefore need to reconsider some of their most basic assumptions, asking themselves whether they are even focusing on the right things.

A key challenge is to determine how the value of data diverges from the value created by providing a data-generating service. Platforms have the power to shape how decisions are made, which in turn can alter the value of the data being amassed. The implication, as Google co-founders Larry Page and Sergey Brin foresaw in a 1998 paper, is that advertisers or any other third-party interest can embed mixed motives into the design of a digital service. In the case of internet search, the advertising imperative can distract from efforts to improve the core service, because the focus is on the value generated for advertisers rather than for users.

As this example shows, it is necessary to ask who benefits the most from the design of a given service. If a platform’s core mission is to maximize profits from advertising, that fact will shape how it pursues innovation, engages with the public, and designs its products and services.

Moreover, it is important to understand that even if antitrust authorities were empowered to break up companies like Google and Facebook, that would not eliminate the data extraction and monetization that lie at the heart of their business models. Creating competition among a bunch of mini-Facebooks would not weed out such practices, and may even entrench them further as companies race to the bottom to extract the most value for their paying customers…

Digital markets do not have to be extractive and exploitative. They could be quite different, but only if we ourselves start to think differently. We need to recognize, as Adam Smith did, that there is a difference between profits and rents – between the wealth generated by creating value and wealth that is amassed through extraction. The first is a reward for taking risks that improve the productive capacity of an economy; the second comes from seizing an undue share of the reward without providing comparable improvements to the economy’s productive capacity.

For the past half-century, corporate governance has rested on the notion of shareholder value. The result is an economy in which it is increasingly important to differentiate firms that are actually driving innovation from those that are not. There is no shortage of firms that are engaged merely in financial engineering, share buy-backs, and rent-seeking, extracting gains from actual risk takers while under-investing in the goods and services that generate value.

The digital economy has accelerated this conflation of wealth creation and rent extraction, making it all the more difficult to differentiate between the two. The issue is not just that financial intermediaries are shaping how value is created and distributed across firms, but that these extractive mechanisms are embedded within user interfaces; they are baked into digital markets by design…

The proliferation of such practices shows why we need to focus more on the “how” of wealth creation, and less on the “bottom line.” An economy that produces wealth from privacy-respecting innovations would not function anything like one that encourages the systematic exploitation of private data.

But building a new economic foundation will require a shift from the shareholder model to a stakeholder model that embodies a deeper appreciation of public value creation. Wealth and other desirable market outcomes are collectively co-created among public, private, and civic domains, and should be understood as such. Policy analysis and corporate decision-making can no longer be guided solely by concerns about maximizing efficiency. We now also must consider whether wealth generation is actually improving society and strengthening the ability to respond to social challenges.

After all, the fact that platforms are creating wealth does not mean they are creating public value. A firm with access to massive amounts of data and network effects could, in theory, use its position to improve social well-being. But it is unlikely to do so if it is operating under a framework that prizes the generation of advertising revenue over everything else, including the performance of products and services…

Today’s digital economy has grown up around a business model of data and wealth extraction, confounding traditional antitrust paradigms and undermining the public and social value that otherwise could be derived from technological innovation. An acute diagnosis of a fundamental structural challenge, and thoughts on steps to address it– Mariana Mazzucato (@MazzucatoM), Tim O’Reilly (@timoreilly), and colleagues: “Reimagining the Platform Economy.” Do click through to read the entire piece.

* Tim O’Reilly

###

As we dig deep, we might recall that it was on this date in 2005 that YouTube was founded and registered (though it didn’t launch until November of that year). The creation of three PayPal vets (Chad Hurley, Steve Chen, and Jawed Karim), it was bought by Google one year after launch (in November 2006) for $1.65 billion. Operating as one of Google’s subsidiaries, it is now (per Alexa Internet Rankings) the second most trafficked web site, after its parent’s search page.

YouTube logos over time

source

Written by (Roughly) Daily

February 14, 2021 at 1:01 am

The essence of entrepreneuring…

From the Kauffman Foundation’s “Sketchbook” series, “Make it Happen,” a wonderful animation of a recent interview with Tim O’Reilly on the “Maker Movement” (see here and here)– and on what it can teach us about innovation and entrepreneurial energy:

Click image above, or here, for video

For more, see CNN’s interview with Make‘s founder (and Tim’s long-time publishing partner), Dale Dougherty.

As we return with enthusiasm to our workbenches, we might recall that it was on this date in 1872 that U.S. Patent No. 123,790 was awarded to Silas Noble and James P. Cooley for a device that allowed “a block of wood, with little waste and in one operation, [to] be cut up in to toothpicks ready for use.”  The inventors had been working together since 1854, as drum makers; at the time of the toothpick breakthrough, their company, Noble and Cooley, which remains in the percussion business to this day, was manufacturing 100,000 drums per year.

So, in much the same way that NASA’s space program popularized Tang, the powdered drink that gave American households a convenient source of vitamin C, Noble and Cooley’s quest for better drum shells and sticks helped bring down the cost of cleaner teeth and healthier gums…

 source

Written by (Roughly) Daily

February 20, 2012 at 1:01 am

Your warranty is expiring…

Readers in and around Silicon Valley are only too familiar with “exponential growth”– the engine that powers (or supposedly will power) the hockey-stick growth described in business plan after business plan.  While that growth all too rarely actually materializes in new ventures, the phenomenon of exponential growth is very real…

Consider, for example, human death.  As Gravity and Levity notes:

What do you think are the odds that you will die during the next year?  Try to put a number to it — 1 in 100?  1 in 10,000?  Whatever it is, it will be twice as large 8 years from now.

This startling fact was first noticed by the British actuary Benjamin Gompertz in 1825 and is now called the “Gompertz Law of human mortality.”  Your probability of dying during a given year doubles every 8 years.  For me, a 25-year-old American, the probability of dying during the next year is a fairly miniscule 0.03% — about 1 in 3,000.  When I’m 33 it will be about 1 in 1,500, when I’m 42 it will be about 1 in 750, and so on.  By the time I reach age 100 (and I do plan on it) the probability of living to 101 will only be about 50%.  This is seriously fast growth — my mortality rate is increasing exponentially with age.

And if my mortality rate (the probability of dying during the next year, or during the next second, however you want to phrase it) is rising exponentially, that means that the probability of me surviving to a particular age is falling super-exponentially.
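The doubling described in the excerpt is easy to play with numerically. A minimal sketch (the 0.03% annual rate at age 25 and the 8-year doubling time are taken from the quote above; treating each year’s mortality as an independent, within-year-constant probability is a simplifying assumption, not the actuarial formulation):

```python
# A toy illustration of the Gompertz Law described above.
BASE_AGE = 25
BASE_RATE = 0.0003       # ~1 in 3,000 annual chance of dying at age 25 (from the quote)
DOUBLING_YEARS = 8       # the annual mortality rate doubles every 8 years

def mortality_rate(age):
    """Annual probability of dying during the year at a given age."""
    return BASE_RATE * 2 ** ((age - BASE_AGE) / DOUBLING_YEARS)

def survival_probability(from_age, to_age):
    """Probability of surviving from one birthday to another,
    multiplying each year's independent survival probability."""
    p = 1.0
    for age in range(from_age, to_age):
        p *= 1 - mortality_rate(age)
    return p

print(f"rate at 33: {mortality_rate(33):.4%}")        # double the rate at 25
print(f"rate at 42: ~1 in {round(1 / mortality_rate(42))}")
print(f"survive 25 to 100: {survival_probability(25, 100):.1%}")
```

The exponentially growing per-year rate is what makes the cumulative survival curve fall super-exponentially: each factor in the product shrinks faster than the last.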

Read the rest of “Your Body Wasn’t Built to Last- A Lesson from Human Mortality Rates“…  (Thanks, Tim O’Reilly, for the tip.)

As we add another supplement to the morning’s pile, we might recall that it was on this date in 1962 that Nelson Mandela was arrested, then imprisoned at Johannesburg Fort.  Though he was later moved to Robben Island, he was not released until 1990.

Nelson Mandela

Image via Wikipedia
