(Roughly) Daily

Posts Tagged ‘apple’

“The bigger, the better”*…

Thea Applebaum Licht with a reminder that, when it comes to size, Texas has got nothing on California…

Between about 1905 and 1915, the United States entered a golden age of postcards. Cheaper and faster mail service, the advent of “divided back” cards (freeing the entire front for images), and improved commercial printing all drove a new mass market for collectible communication. It was at this same moment that a craze for “tall-tale” or “exaggeration” postcards reached its peak. By cutting, collaging, and re-photographing images, artists created out-of-proportion illusions. One of the most popular genres was agricultural goods of fantastic dimensions.

Nowhere were such postcards more popular than in the western states. There, in the heart of the tough business of agriculture, illustrations of folkloric American abundance were understandable favorites. Pride and place were tied up with the prodigious crops. Supersized fruits and vegetables were often accompanied by brief captions: “How We Do Things at Attica, Wis.”, “The Kind We Raise in Our State”, or “The Kind We Grow in Texas”. Photographers like William H. “Dad” Martin and Alfred Stanley Johnson Jr. captured farmers harvesting furniture-sized onions and stacking corn cobs like timber, fishermen reeling in leviathans, and children sharing canoe-like slices of watermelon.

In the series of exaggeration postcards [produced in the run-up to the postcard boom, then published during it] collected [here], it is California that takes center stage. Produced by the prolific San Francisco–based publisher Edward H. Mitchell, each card features a single rail car rolling through lush farmland. Aboard are gargantuan, luminous fruits and vegetables: dimpled navel oranges, a dusky bunch of grapes, and mottled walnuts. Placed end-to-end, the cards would make a colorful train crossing California’s fertile valleys. Unlike other, more action-packed “tall-tale” cards — filled with farmers, fishermen, and children for scale — Mitchell’s series is restrained. Sharply illuminated, the colossal cargo leans toward artwork rather than gag. “A Carload of Mammoth Apples” [here], green-yellow and gleaming, could have been plucked from René Magritte’s The Son of Man [here].

Fabulous fruit and vegetables: “Calicornication: Postcards of Giant Produce (1909),” from @publicdomainrev.bsky.social.

In other art-related news: (very) long-term readers might recall that, back in 2008, (R)D reported that London’s Daily Mail believed that it had tracked down Banksy, and that he is Robin Gunningham. Now, as Boing Boing reports:

Anyone reading Banksy’s Wikipedia article at any point since a famous Mail on Sunday exposé in 2008 would likely get the impression the secretive stenciler is probably Robin Gunningham or Robert Del Naja, artists who came from the Bristol Underground. Reuters, having conducted extensive research into their movements, finds both men present at critical moments, but only one at all of them: an arrest report from New York City puts Gunningham firmly in the frame, and recent public records from Ukraine put it beyond doubt.

We later unearthed previously undisclosed U.S. court records and police reports. These included a hand-written confession by the artist to a long-ago misdemeanor charge of disorderly conduct – a document that revealed, beyond dispute, Banksy’s true identity. … Reuters presented that man with its findings about his identity and detailed questions about his work and career. He didn’t reply. Banksy’s company, Pest Control, said the artist “has decided to say nothing.”

His long-time lawyer, Mark Stephens, wrote to Reuters that Banksy “does not accept that many of the details contained within your enquiry are correct.” He didn’t elaborate. Without confirming or denying Banksy’s identity, Stephens urged us not to publish this report, saying doing so would violate the artist’s privacy, interfere with his art and put him in danger.

Del Naja (better known for other work) evidently participates in painting the murals and is perhaps the stencil draftsman (Banksy: “he can actually draw”). Banksy’s former manager, Steve Lazarides, organized a legal name change for Gunningham after the Mail on Sunday item, which ended the trail of records for Banksy’s movements under his birth name and stymied researchers—until Reuters figured out the new one by poring through Ukrainian public records on days Del Naja was there. Gunningham used the name David Jones, among the most common in the U.K. If it rings a bell, you might be thinking of another famous British artist who was obliged by his record company to find something more unique.

* common idiom

###

As we live large, we might spare a thought for Isaac Newton; he died on this date (O.S.) in 1727. A polymath who was a key figure in the Scientific Revolution and the Enlightenment that followed, Newton was a mathematician, physicist, astronomer, alchemist, theologian, author, and inventor. He contributed to and refined the scientific method, and his work is considered the most influential in bringing forth modern science. His book Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), first published in 1687, achieved the first great unification in physics and established classical mechanics. He also made seminal contributions to optics, and shares credit with the German mathematician Gottfried Wilhelm Leibniz for formulating infinitesimal calculus. (Newton developed calculus years before Leibniz, but published years after.) Newton spent the last three decades of his life in London, serving as Warden (1696–1699) and Master (1699–1727) of the Royal Mint, a role in which he improved the accuracy and security of British coinage in a way crucial to the rise of Great Britain as a commercial and colonial power.
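A short aside on what that unification amounted to, in modern notation (Newton’s own arguments in the Principia were geometric): a single pair of laws, the second law of motion and the law of universal gravitation, governs the falling apple and the orbiting Moon alike:

\[
\vec{F} = m\,\vec{a}, \qquad F = G\,\frac{m_1 m_2}{r^2}
\]

where \(m_1\) and \(m_2\) are the two masses, \(r\) the distance between them, and \(G\) the gravitational constant.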

Newton, of course, had a famous relationship with fruit:

Newton often told the story that he was inspired to formulate his theory of gravitation by watching the fall of an apple from a tree. The story is believed to have passed into popular knowledge after being related by Catherine Barton, Newton’s niece, to Voltaire. Voltaire then wrote in his Essay on Epic Poetry (1727), “Sir Isaac Newton walking in his gardens, had the first thought of his system of gravitation, upon seeing an apple falling from a tree.” – source

Newton’s apple is thought to have been the green-skinned ‘Flower of Kent’ variety.

Newton’s Tree with Woolsthorpe Manor (where, during the Plague, Newton was staying when he had his insight) behind (source)

“See, all our people are businessmen. Their loyalty’s based on that.”*…

The glass Apple award presented by Tim Cook to President Donald Trump, inscribed 'PRESIDENT DONALD J. TRUMP APPLE AMERICAN MANUFACTURING PROGRAM' and 'MADE IN USA 2025,' with a portrait of Ronald Reagan visible in the background

In his nifty newsletter, Benedict Evans observes…

In ‘Godfather II,’ the Cuban representative of ITT gave President Batista a solid gold telephone. In 2025, Apple’s Tim Cook gave President Trump a piece of Corning Glass on a gold plinth…

See this University of Florida piece for more background on the 1959 “gift” to Batista and the corruption it came to symbolize. (But note that the U of F note incorrectly attributes the gesture to AT&T; it was in fact from ITT.) And see Tim Cook pay Apple’s “tribute” here.

Apposite: “A UFC fight at the White House.”

Oh, and (from The Onion): “Frito-Lay CEO Gifts Trump Gold Funyun.”

* “Michael Corleone” (Al Pacino), The Godfather Part II

###

As we ponder payola, we might remind ourselves that things have been– and can again be– different: on this date in 1935 President Franklin D. Roosevelt signed the Social Security Act, part of his New Deal program that created a government pension system for the retired.

By 1930, the United States and Switzerland were the only modern industrial countries without any national social security system. Amid the Great Depression, the physician Francis Townsend galvanized support behind a proposal to issue direct payments to older people. Responding to that movement, Roosevelt organized a committee led by Secretary of Labor Frances Perkins to develop a major social welfare program proposal. Roosevelt presented the plan in early 1935 and signed the Social Security Act into law on August 14, 1935. The Supreme Court upheld the act in two major cases decided in 1937.

The law established the Social Security program. The old-age program is funded by payroll taxes, and over the ensuing decades, it contributed to a dramatic decline in poverty among older people, and spending on Social Security became a significant part of the federal budget. The Social Security Act also established an unemployment insurance program [only a few states had poorly funded programs at the time] administered by the states and the Aid to Dependent Children program, which provided aid to families headed by single mothers. The law was later amended by acts such as the Social Security Amendments of 1965, which established two major healthcare programs: Medicare and Medicaid.

source

Roosevelt signs Social Security Bill (source)

“We shape our tools and thereafter our tools shape us”*…

A late-19th-century illustration of 18th-century people, gobsmacked by the many tech changes that have made their world irrelevant

AI is on the march, with implications, TBD, for… well, for everything. Nayef Al-Rodhan ponders its potential impact on philosophy…

Around the world, Artificial Intelligence (AI) is seeping into every aspect of our daily life, transforming our computational power, and with it the manufacturing speed, military capabilities, and the fabric of our societies. Generative AI applications such as OpenAI’s ChatGPT, the fastest growing consumer application in history, have created both positive anticipation and alarm about the future potential of AI technology. Predictions range from doomsday scenarios describing the extinction of the human species to optimistic takes on how it could revolutionise the way we work, live and communicate. If used correctly, AI could catapult scientific, economic and technological advances into a new phase in human history. In doing so it has the potential to solve some of humanity’s biggest problems by preventing serious food and water scarcity, mitigating inequality and poverty, diagnosing life-threatening diseases, tackling climate change, preventing pandemics, designing new game-changing proteins, and much more.

AI technology is rapidly moving in the direction of Artificial General Intelligence (AGI), the ability to achieve human-level machine intelligence, with Google’s AI Chief recently predicting that there is a 50% chance that we’ll reach AGI within five years. This raises important questions about our human nature, our sentience, and our dignity needs. Can AI ever become truly sentient? If so, how will we know if that happens? Should sentient machines have rights and responsibilities similar to those of humans? The boardroom drama at OpenAI in late November 2023 also deepened the debate about the dangers of techno-capitalism: is it possible for corporate giants in the AI space to balance safety with the pursuit of revenues and profit?

As AI advances at breakneck speed, ethical considerations are becoming increasingly critical. Sentient AI implies that the technology has the capacity to evolve and be self-aware, in doing so feeling and experiencing the world just like a human would. According to the British mathematician Alan Turing, if a human judge cannot tell whether they are conversing with an AI or with another human, then the AI in question has passed the test. However, given AI’s sophisticated conversational skills and ability to give the impression of consciousness, the Turing Test is becoming too narrow and does not grasp all the nuances of what makes us sentient and, more broadly, human. To stay on the front foot of technological progress, we need to supplement the Turing Test with transdisciplinary frameworks for evaluating increasingly human-like AI. These frameworks should be based on approaches rooted in psychology, neuroscience, philosophy, the social sciences, political science and other relevant disciplines.
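Turing’s test is easy to state but often glossed; for the curious, here is a minimal sketch in Python of the “imitation game” as a blinded protocol. The respondent and judge functions are hypothetical placeholders, not any real system:

import random

def human_reply(question: str) -> str:
    # Hypothetical stand-in for a person at a terminal.
    return "human answer to: " + question

def machine_reply(question: str) -> str:
    # Hypothetical stand-in for a conversational machine.
    return "machine answer to: " + question

def imitation_game(questions, judge_guess) -> bool:
    # One blinded round: the judge sees only transcripts labeled
    # "A" and "B" and must say which respondent is the machine.
    labels = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:  # hide the assignment behind random labels
        labels = {"A": machine_reply, "B": human_reply}
    transcripts = {label: [ask(q) for q in questions]
                   for label, ask in labels.items()}
    guess = judge_guess(transcripts)        # the judge returns "A" or "B"
    return labels[guess] is machine_reply   # True: the machine was caught

# Over many rounds, the machine "passes" if judges do no better than
# chance. A judge guessing at random sits at the 50% baseline, which is
# exactly where a convincing machine would drive a careful judge:
trials = [imitation_game(["What is a sonnet?"], lambda t: random.choice("AB"))
          for _ in range(1000)]
print(sum(trials) / len(trials))  # approximately 0.5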

We do not yet have a full understanding of what makes a thing sentient but transdisciplinary efforts by neuroscientists, computer scientists and philosophers are helping develop a deeper understanding of consciousness and sentience. So far, we have found that emotions are one of the important characteristics needed for sentience, as is agency or intrinsic motivation. A sentient AI would need to have the ability to create autonomous goals and an ability to pursue these goals. In human beings, this quality has evolved from our intrinsic survival instinct, while in AI it is still, for now, lacking. According to recent studies, a sense of time, narrative, and memory is also critical for determining sentience. A level of sentience comparable to humans would require autobiographical memory and a concept of the linear progression of time. In current AI systems, these capabilities are limited – but recent developments raise uncomfortable philosophical questions about whether sentient AI should share similar rights and responsibilities in the event that it becomes a reality. And if so, how does one hold the technology accountable for its actions? And how will we define – legally and ethically – sentient AI’s role in society? We currently treat AI technology and machines as property, so how will this change if they are granted their own rights? There is no clear-cut answer, but as I argued in ‘Transdisciplinarity, neuro-techno-philosophy, and the future of philosophy’, we should attribute agency to machines whenever they appear to possess the same qualities that characterise humans. I also believe that machines ought to be treated as agents if they prove themselves to be emotional, amoral, and egoist.

These debates, however they unfold, will clearly have deep implications for the future of philosophy itself. In ‘Transdisciplinarity, neuro-techno-philosophy, and the future of philosophy’ I make the case that it is a short step from AI’s present capabilities to its potential future use developing novel philosophical hypotheses and thought experiments. It is therefore not unthinkable that future AI systems could break new ground in the field of normative ethics, helping pinpoint moral principles that human philosophers have failed to grasp. However, we should be mindful that their conception of morality or beauty, for example, may have nothing in common with ours, or it may supersede our own capacities and reflections. This could limit the ability of sophisticated artificial agents to answer long-standing philosophical questions, however superior they may be to the most advanced human intellectual output. We should consider how these developments are likely to impact how we understand the world around us, both in terms of the subject matter and of the theorising entity involved. Artificial agents will no doubt be put under the microscope and will be studied alongside the human mind and human nature: not just to compare and contrast, but also to understand how these artificial entities relate to – and treat – one another, and humanity itself. There is also the question of how human philosophers will react if and when AI-steered machines become superior philosophical theorisers. Will flesh-and-blood philosophers be forced to compete cognitively with entities whose intellectual abilities vastly supersede our own? Will AI systems overtake our limited human reasoning and reflective capacities? If this happens, what does this mean for our own human agency, the control we have over our lives and the future of our societies?…

… Powerful AI technologies will progressively increase our capabilities, for good or ill. We therefore need to be clear-sighted about the AI governance frameworks urgently needed to futureproof the safe use of AI. The recent high drama at OpenAI, whose founding mission is “to ensure that artificial general intelligence benefits all of humanity”, gave us a glimpse of the main rift in the AI industry, pitting those focused on commercial growth against those uneasy with the potential ramifications of the unbridled development of AI. However well-motivated AI governance schemes might be, they are less robust than one would hope. At the same time, self-regulation by global tech companies is becoming increasingly difficult given the large sums at stake and the economic and political influence of these companies.

With this in mind, we must keep an open mind not just about the immediate man-made dangers of AI technologies but also their potential to redefine what it means to be human. They will shape how we understand and engage with the world, in doing so making us reevaluate our place in it. Our chances of survival as a species and the likelihood of our existence in a free, independent, peaceful, prosperous, creative and dignified world will depend on the future trajectory of AI. Our historical yearning for longing and belonging hangs in the balance. To protect citizens from potential harm and limit the risks, AI should be regulated just like any other technology. We must also apply transdisciplinary approaches to make sure that the use and governance of AI is always steered by human dignity needs for all, at all times and under all circumstances. AI’s trajectory is not predetermined, but the clock is ticking and humanity may have less time than it thinks to control its collective destiny… 

Eminently worth reading in full. Whether or not one agrees with the author’s specific conclusions, his larger point– that we need to be mindful and purposive about the deployment of AI– is surely well-taken: “Sentience, Safe AI and The Future of Philosophy: A Transdisciplinary Analysis,” from @SustainHistory in @oxpubphil.

See also: “Thinking About AI, Before AI Disappears” from Quentin Hardy’s new newsletter, Technohumanism. (source of image above).

* Father John Culkin, SJ, a Professor of Communication at Fordham University (and friend of Marshall McLuhan, to whom the quote is often incorrectly attributed)

###

As we think about thinking, we might recall that it was on this date in 1979 that Apple began work on the Lisa, which would become one of the first commercial computers with a graphical user interface.

Originally intended to sell for $2,000 and ship in 1981, the Lisa was delayed until 1983 and sold for $10,000. Though it utilized technology ahead of its time, the Lisa’s high cost, relative lack of software, and some hardware reliability issues ultimately sank its success. Still, much of the technology introduced by the Lisa (itself rooted in the earlier work of Doug Engelbart [and here] and Xerox PARC) influenced the development of the Macintosh as well as other future computer and operating system designs: e.g., a bitmapped display, a window-based graphical user interface, icons, folders, a (single-button) mouse, (Ethernet) networking, file servers, print servers, and email.

The Lisa, with its development team (source)

“If I could explain it to the average person, I wouldn’t have been worth the Nobel Prize”*…

Alex Murrell on the surge of sameness all around us…

The interiors of our homes, coffee shops and restaurants all look the same. The buildings where we live and work all look the same. The cars we drive, their colours and their logos all look the same. The way we look and the way we dress all looks the same. Our movies, books and video games all look the same. And the brands we buy, their adverts, identities and taglines all look the same.

But it doesn’t end there. In the age of average, homogeneity can be found in an almost indefinite number of domains.

The Instagram pictures we post, the tweets we read, the TV we watch, the app icons we click, the skylines we see, the websites we visit and the illustrations which adorn them all look the same. The list goes on, and on, and on…

Perhaps when times are turbulent, people seek the safety of the familiar. Perhaps it’s our obsession with quantification and optimisation. Or maybe it’s the inevitable result of inspiration becoming globalised…

But it’s not all bad news.

I believe that the age of average is the age of opportunity…

Lots more mesmerizing examples: “The age of average,” from @alexjmurrell.

* Richard Feynman

###

As we think different, we might recall that it was on this date in 1976 that Steve Jobs, Steve Wozniak, and Ronald Wayne signed a partnership agreement establishing the company that– on January 3, 1977– would be incorporated as Apple Computer, Inc., a company that was all about trumping sameness.

Wayne left the partnership eleven days later, relinquishing his ten percent share for $2,300.

Apple began in Steve Jobs’ parents’ home on Crist Drive in Los Altos, California. Although it is widely believed that the company was founded in the house’s garage, Apple co-founder Steve Wozniak called it “a bit of a myth”. Jobs and Wozniak did, however, move some operations to the garage when the bedroom became too crowded.

source

Written by (Roughly) Daily

April 1, 2023 at 1:00 am

“There must be some way out of here, said the joker to the thief / There’s too much confusion, I can’t get no relief”*…

The dangers of rapid scaling: Praveen Seshadri on what ails Google and how it can turn things around…

I joined Google just before the pandemic when the company I had co-founded, AppSheet, was acquired by Google Cloud. The acquiring team and executives welcomed us and treated us well. We joined with great enthusiasm and commitment to integrate AppSheet into Google and make it a success. Yet now, at the expiry of my three-year mandatory retention period, I have left Google understanding how a once-great company has slowly ceased to function.

Google has 175,000+ capable and well-compensated employees who get very little done quarter over quarter, year over year. Like mice, they are trapped in a maze of approvals, launch processes, legal reviews, performance reviews, exec reviews, documents, meetings, bug reports, triage, OKRs, H1 plans followed by H2 plans, all-hands summits, and inevitable reorgs. The mice are regularly fed their “cheese” (promotions, bonuses, fancy food, fancier perks) and despite many wanting to experience personal satisfaction and impact from their work, the system trains them to quell these inappropriate desires and learn what it actually means to be “Googley” — just don’t rock the boat. As Deepak Malhotra put it in his excellent business fable, at some point the problem is no longer that the mouse is in a maze. The problem is that “the maze is in the mouse.”

It is a fragile moment for Google with the pressure from OpenAI + Microsoft. Most people view this challenge along the technology axis, although there is now the gnawing suspicion that it might be a symptom of some deeper malaise. The recent layoffs have caused angst within the company as many employees view this as a failure of management or a surrender to activist investors. In a way, this reflects a general lack of self-awareness across both management and employees. Google’s fundamental problems are along the culture axis and everything else is a reflection of it. Of course, I’m not the only person to observe these issues (see the post by Noam Bardin, Waze founder and ex-Googler).

The way I see it, Google has four core cultural problems. They are all the natural consequences of having a money-printing machine called “Ads” that has kept growing relentlessly every year, hiding all other sins.

(1) no mission, (2) no urgency, (3) delusions of exceptionalism, (4) mismanagement…

A provocative diagnosis: “The maze is in the mouse.” Eminently worth reading in full.

* Bob Dylan, “All Along the Watchtower”

###

As we go back to basics, we might recall that it was on this date in 1955 that a boy was born to University of Wisconsin graduate students Joanne Simpson and Abdulfattah Jandali. He was given up for adoption and taken in by a machinist and his wife in Mountain View, California. They named him Steve Jobs.

source

Written by (Roughly) Daily

February 24, 2023 at 1:00 am