(Roughly) Daily

Posts Tagged ‘computer science’

“One thing I’ve learned over time is, if you hit a golf ball into water, it won’t float”*…

Happy New Year!

In the spirit of Tom Whitwell’s lists, Jason Kottke’s collection of learnings from 2023-gone-by…

Purple Heart medals that were made for the planned (and then cancelled) invasion of Japan in 1945 are still being given out to wounded US military personnel.

The San Francisco Muni Metro’s train-control system still runs on 5 1/4-inch floppies.

Bottled water has an expiration date — it’s the bottle, not the water, that expires.

Multicellular life developed on Earth more than 25 separate times.

Horseshoe crabs are older than Saturn’s rings.

Ernest Hemingway only used 59 exclamation points across his entire collection of works.

MLB broadcaster Vin Scully’s career lasted 67 seasons, during which he called a game managed by Connie Mack (born in 1862) and one that Julio Urías (born in 1996) played in.

Almost 800,000 Maryland license plates include a URL that now points to an online casino in the Philippines because someone let the domain registration lapse.

Dozens more at: “52 Interesting Things I Learned in 2023.”

* Arnold Palmer

###

As we live and learn, we might spare a thought for Grace Brewster Murray Hopper; she died on this date in 1992. A seminal computer scientist and Rear Admiral in the U.S. Navy, “Amazing Grace” (as she was known to many in her field) was one of the first programmers of the Harvard Mark I computer (in 1944), invented the first compiler for a computer programming language, and was one of the leaders in popularizing the concept of machine-independent programming languages– which led to the development of COBOL, one of the first high-level programming languages.

Hopper also (inadvertently) contributed one of the most ubiquitous metaphors in computer science: she found and documented the first computer “bug” (in 1947).

She has both a ship (the guided-missile destroyer USS Hopper) and a super-computer (the Cray XE6 “Hopper” at NERSC) named in her honor.


Written by (Roughly) Daily

January 1, 2024 at 1:00 am

“We are as gods and might as well get good at it”*…

In 1968, Stewart Brand and a small group of colleagues published the first Whole Earth Catalog, then followed it over the years with a series of updates, spin-offs, and sequels. An at-the-time unprecedented marriage of counterculture magazine and product catalog, it and its successors have been enormously influential. Now, as Long Now’s Jacob Kupperman reports, the entire run of Whole Earth publications is freely available online…

When the Whole Earth Catalog arrived in the Fall of 01968, it came bearing a simple, epochal label: “Access to Tools.” As its editor and Long Now Co-founder Stewart Brand wrote in the introduction to that first edition, the goal was for the Catalog to serve as an “evaluation and access device” for tools that empowered its readers “to conduct his own education, find his own inspiration, shape his own environment, and share his adventure with whoever is interested.”

The key word in all of that idealistic declaration of purpose was “access.” The Whole Earth Catalog did not intend to directly grant its readers this knowledge, wisdom, and mastery, but to provide a kaleidoscopic array of gateways from which they could attempt to find it themselves.

Yet for years, access to the Whole Earth Catalog itself has been difficult. 55 years on from the first publication of the Catalog, it mostly lives on in the interstices — as a symbol of a vibrant countercultural history and an inspiration for writers, designers, and technologists, but less so as an actual set of catalogs that you can read. The Catalog is not lost media per se — copies can be found in libraries, archives, and personal collections across the world — but accessing its trove of information is no longer as easy as it was in its heyday.

That is, until now.

On the 55th anniversary of the publication of the original Whole Earth Catalog, Gray Area and the Internet Archive have made the Catalog freely available online via the Whole Earth Index, a website bringing together more than 130 Whole Earth Catalog-related publications, ranging from some of the earliest Catalogs published in the late 01960s and early 01970s to 02002 issues of Whole Earth Magazine.

Within the site’s grid of publications rests a cornucopia of writing and curation, from in-depth looks at space colonies to ecological analyses of the insurance industry to reporting on the state of the global teenager at the turn of the 01990s. The Whole Earth Index is a work of love, a noncommercial enterprise designed, as project lead and Gray Area Executive Director Barry Threw told Long Now Ideas, to “allow us to reflect on how we got to where we are and regain some of that connection to the countercultural world” of the Bay Area of the 01960s and 01970s.

For the people who helped make the Whole Earth Catalog and its descendants, the Whole Earth Index is in many ways a dream come true. Long Now Board Member Kevin Kelly, who wrote for, edited, and led the CoEvolution Quarterly, the Whole Earth Review, and later editions of the Whole Earth Catalog, told us that he found “the interface to this historic collection to be as good, maybe even better, as reading the original paper artifacts,” adding that he’d “been giddy with delight in how satisfying this archive is.”  The project’s model of “instant access from your home, for free!”, Kelly noted, was something that the team behind the Whole Earth Catalog could only dream of when they began their work.

The open-ended design of the Whole Earth Index is intended as a sort of provocation towards future works — a message and invitation in the spirit of the original catalog’s epochal claim that “we are as gods and might as well get good at it.” The tens of thousands of scanned pages will live on the servers of the Internet Archive — as good a place as any to try and stave off a Digital Dark Age — but the ideas of the Whole Earth Catalog and its heirs will always live among those of us who read it and access its tools. What will you do with them?

The Whole Earth Catalog and its descendants are newly available online through the Whole Earth Index: “The Lasting Whole Earth Catalog,” from @Jacobkupp and @longnow.

* Stewart Brand, in the “Statement of Purpose” in the first Whole Earth Catalog

###

As we treasure tools, we might spare a thought for a man whose work kicked in about the same time as the Whole Earth Catalog– and intersected with it in myriad ways (e.g., The WELL), Jon Postel; he died on this date in 1998. A computer scientist, he played a pivotal role in creating and administering the Internet. As a graduate student in the late 1960s, he was instrumental in developing ARPANET, the forerunner of the Internet. He is known principally for being the Editor of the Request for Comments (RFC) document series from which internet standards emerged, for his work on the Simple Mail Transfer Protocol (SMTP), and for founding and administering the Internet Assigned Numbers Authority (IANA) until his death.

During his lifetime he was referred to as the “god of the Internet” for his comprehensive influence; Postel himself noted that this “compliment” came with a barb, the suggestion that he should be replaced by a “professional,” and responded with typical self-effacing matter-of-factness: “Of course, there isn’t any ‘God of the Internet.’ The Internet works because a lot of people cooperate to do things together.”


“Many of the things you can count, don’t count. Many of the things you can’t count, really count”*…

Still, we count… and have, as Keith Houston explains, for much, if not most, of human history…

Figuring out when humans began to count systematically, with purpose, is not easy. Our first real clues are a handful of curious, carved bones dating from the final few millennia of the three-million-year expanse of the Old Stone Age, or Paleolithic era. Those bones are humanity’s first pocket calculators: For the prehistoric humans who carved them, they were mathematical notebooks and counting aids rolled into one. For the anthropologists who unearthed them thousands of years later, they were proof that our ability to count had manifested itself no later than 40,000 years ago.

Counting, fundamentally, is the act of assigning distinct labels to each member of a group of similar things to convey either the size of that group or the position of individual items within it. The first type of counting yields cardinal numbers such as “one,” “two,” and “three”; the second gives ordinals such as “first,” “second,” and “third.”
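To make the distinction concrete, here is a minimal sketch in Python (ours, not Houston’s): len() answers the cardinal question “how many?”, while enumerate() hands each member of the group its ordinal position…

```python
# Cardinal vs. ordinal counting: an illustrative sketch.
flock = ["ewe", "ram", "lamb"]

# Cardinal: a single label for the size of the whole group ("three sheep").
print(len(flock))  # 3

# Ordinal: a label for each member's position within the group.
for position, sheep in enumerate(flock, start=1):
    print(f"{position}: {sheep}")  # 1: ewe / 2: ram / 3: lamb
```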

At first, our hominid ancestors probably did not count very high. Many body parts present themselves in pairs—arms, hands, eyes, ears, and so on—thereby leading to an innate familiarity with the concept of a pair and, by extension, the numbers 1 and 2. But when those hominids regarded the wider world, they did not yet find a need to count much higher. One wolf is manageable; two wolves are a challenge; any more than that and time spent counting wolves is better spent making oneself scarce. The result is that the very smallest whole numbers have a special place in human culture, and especially in language. English, for instance, has a host of specialized terms centered around twoness: a brace of pheasants; a team of horses; a yoke of oxen; a pair of, well, anything. An ancient Greek could employ specific plurals to distinguish between groups of one, two, and many friends (ho philos, tō philō, and hoi philoi). In Latin, the numbers 1 to 4 get special treatment, much as “one” and “two” correspond to “first” and “second,” while “three” and “four” correspond directly with “third” and “fourth.” The Romans extended that special treatment into their day-to-day lives: after their first four sons, a Roman family would typically name the rest by number (Quintus, Sextus, Septimus, and so forth), and only the first four months of the early Roman calendar had proper names. Even tally marks, the age-old “five-barred gate” used to score card games or track rounds of drinks, speak of a deep-seated need to keep things simple.

Counting in the prehistoric world would have been intimately bound to the actual, not the abstract. Some languages still bear traces of this: a speaker of Fijian may say doko to mean “one hundred mulberry bushes,” but also koro to mean “one hundred coconuts.” Germans will talk about a Faden, meaning a length of thread about the same width as an adult’s outstretched arms. The Japanese count different kinds of things in different ways: there are separate sequences of cardinal numbers for books; for other bundles of paper such as magazines and newspapers; for cars, appliances, bicycles, and similar machines; for animals and demons; for long, thin objects such as pencils or rivers; for small, round objects; for people; and more.

Gradually, as our day-to-day lives took on more structure and sophistication, so, too, did our ability to count. When farming a herd of livestock, for example, keeping track of the number of one’s sheep or goats was of paramount importance, and as humans divided themselves more rigidly into groups of friends and foes, those who could count allies and enemies had an advantage over those who could not. Number words graduated from being labels for physical objects into abstract concepts that floated around in the mental ether until they were assigned to actual things.

Even so, we still have no real idea how early humans started to count in the first place. Did they gesture? Speak? Gather pebbles in the correct amount? To form an educated guess, anthropologists have turned to those tribes and peoples isolated from the greater body of humanity, whether by accident of geography or deliberate seclusion. The conclusion they reached is simple. We learned to count with our fingers…

From an excerpt of Houston’s new book, Empire of the Sum: The Rise and Reign of the Pocket Calculator: “The Early History of Counting,” @OrkneyDullard in @laphamsquart.

* Albert Einstein

###

As we tally, we might send carefully calculated birthday greetings to Stephen Wolfram; he was born on this date in 1959. A computer scientist, mathematician, physicist, and businessman, he has made contributions to all of these fields. But he is probably best known for his creation of the software system Mathematica (a kind of “idea processor” that allows scientists and technologists to work fluidly in equations, code, and text), which is linked to WolframAlpha (an online answer engine that provides additional data, some of which is kept updated in real time).


Written by (Roughly) Daily

August 29, 2023 at 1:00 am

“Life is a Zen koan, that is, an unsolvable riddle. But the contemplation of that riddle – even though it cannot be solved – is, in itself, transformative.”*…

How hard is it to prove that problems are hard to solve? Meta-complexity theorists have been asking questions like this for decades. And as Ben Brubaker explains, a string of recent results has started to deliver answers…

… Even seasoned researchers find understanding in short supply when they confront the central open question in theoretical computer science, known as the P versus NP problem. In essence, that question asks whether many computational problems long considered extremely difficult can actually be solved easily (via a secret shortcut we haven’t discovered yet), or whether, as most researchers suspect, they truly are hard. At stake is nothing less than the nature of what’s knowable.
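That asymmetry is easy to see in miniature. Here is an illustrative sketch (ours, not Brubaker’s) using subset-sum, a classic NP-complete problem: verifying a proposed answer takes a single quick pass, while no polynomial-time method for finding one is known in general…

```python
# P vs. NP in miniature, via SUBSET-SUM: given a list of numbers and a
# target, is there a subset of the numbers that sums to the target?
# (An illustrative sketch, not from the article.)
from itertools import combinations

def verify(numbers, target, indices):
    # Checking a proposed solution (a "certificate") is fast:
    # one pass over the chosen indices, i.e., polynomial time.
    return sum(numbers[i] for i in indices) == target

def solve(numbers, target):
    # Finding a solution is another matter: this brute force tries up to
    # 2^n subsets (exponential time), and no polynomial-time algorithm is
    # known in general. P vs. NP asks whether every problem whose answers
    # are this easy to verify is secretly this easy to solve, too.
    for r in range(len(numbers) + 1):
        for indices in combinations(range(len(numbers)), r):
            if verify(numbers, target, indices):
                return indices
    return None

nums = [3, 34, 4, 12, 5, 2]
print(solve(nums, 9))           # (2, 4): nums[2] + nums[4] == 4 + 5 == 9
print(verify(nums, 9, (2, 4)))  # True
```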

Despite decades of effort by researchers in the field of computational complexity theory — the study of such questions about the intrinsic difficulty of different problems — a resolution to the P versus NP question has remained elusive. And it’s not even clear where a would-be proof should start.

“There’s no road map,” said Michael Sipser, a veteran complexity theorist at the Massachusetts Institute of Technology who spent years grappling with the problem in the 1980s. “It’s like you’re going into the wilderness.”

It seems that proving that computational problems are hard to solve is itself a hard task. But why is it so hard? And just how hard is it? Marco Carmosino and other researchers in the subfield of meta-complexity reformulate questions like this as computational problems, propelling the field forward by turning the lens of complexity theory back on itself.

“You might think, ‘OK, that’s kind of cool. Maybe the complexity theorists have gone crazy,’” said Rahul Ilango, a graduate student at MIT who has produced some of the most exciting recent results in the field.

By studying these inward-looking questions, researchers have learned that the hardness of proving computational hardness is intimately tied to fundamental questions that may at first seem unrelated. How hard is it to spot hidden patterns in apparently random data? And if truly hard problems do exist, how often are they hard?

“It’s become clear that meta-complexity is close to the heart of things,” said Scott Aaronson, a complexity theorist at the University of Texas, Austin.

This is the story of the long and winding trail that led researchers from the P versus NP problem to meta-complexity. It hasn’t been an easy journey — the path is littered with false turns and roadblocks, and it loops back on itself again and again. Yet for meta-complexity researchers, that journey into an uncharted landscape is its own reward. Start asking seemingly simple questions, said Valentine Kabanets, a complexity theorist at Simon Fraser University in Canada, and “you have no idea where you’re going to go.”…

Complexity theorists are confronting their most puzzling problem yet– complexity theory itself: “Complexity Theory’s 50-Year Journey to the Limits of Knowledge,” from @benbenbrubaker in @QuantaMagazine.

* Tom Robbins

###

As we limn limits, we might send thoroughly cooked birthday greetings to Denis Papin; he was born on this date in 1647. A mathematician and physicist who worked with Christiaan Huygens and Gottfried Leibniz, Papin is better remembered as the inventor of the steam digester, the forerunner of the pressure cooker and of the steam engine.


“Those who can imagine anything, can create the impossible”*…

As Charlie Wood explains, physicists are building neural networks out of vibrations, voltages and lasers, arguing that the future of computing lies in exploiting the universe’s complex physical behaviors…

… When it comes to conventional machine learning, computer scientists have discovered that bigger is better. Stuffing a neural network with more artificial neurons — nodes that store numerical values — improves its ability to tell a dachshund from a Dalmatian, or to succeed at myriad other pattern recognition tasks. Truly tremendous neural networks can pull off unnervingly human undertakings like composing essays and creating illustrations. With more computational muscle, even grander feats may become possible. This potential has motivated a multitude of efforts to develop more powerful and efficient methods of computation.
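The “bigger is better” pattern is easy to reproduce on a toy problem. Here is a minimal sketch (ours, not Wood’s) in Python: the same two-layer network, trained the same way on XOR at two hidden-layer widths. The wider network typically fits the data more reliably, though exact numbers vary with the random seed…

```python
# "Bigger is better" on a toy task: a two-layer network trained on XOR
# at two hidden widths. (An illustrative sketch, not from the article;
# the hyperparameters are arbitrary, and results vary with the seed.)
import numpy as np

def train(hidden, steps=5000, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])
    W1 = rng.normal(0, 1, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)              # hidden "neurons"
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))  # sigmoid output
        # Plain gradient descent on mean squared error:
        dz2 = 2 * (p - y) * p * (1 - p) / len(X)
        dW2, db2 = h.T @ dz2, dz2.sum(0)
        dz1 = (dz2 @ W2.T) * (1 - h ** 2)
        dW1, db1 = X.T @ dz1, dz1.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return float(np.mean((p - y) ** 2))

for width in (2, 16):
    print(f"hidden={width:2d}  final loss={train(width):.4f}")
```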

[Cornell’s Peter McMahon] and a band of like-minded physicists champion an unorthodox approach: Get the universe to crunch the numbers for us. “Many physical systems can naturally do some computation way more efficiently or faster than a computer can,” McMahon said. He cites wind tunnels: When engineers design a plane, they might digitize the blueprints and spend hours on a supercomputer simulating how air flows around the wings. Or they can stick the vehicle in a wind tunnel and see if it flies. From a computational perspective, the wind tunnel instantly “calculates” how wings interact with air.

A wind tunnel is a single-minded machine; it simulates aerodynamics. Researchers like McMahon are after an apparatus that can learn to do anything — a system that can adapt its behavior through trial and error to acquire any new ability, such as classifying handwritten digits or distinguishing one spoken vowel from another. Recent work has shown that physical systems like waves of light, networks of superconductors and branching streams of electrons can all learn.

“We are reinventing not just the hardware,” said Benjamin Scellier, a mathematician at the Swiss Federal Institute of Technology Zurich in Switzerland who helped design a new physical learning algorithm, but “also the whole computing paradigm.”…

Computing at the largest scale? “How to Make the Universe Think for Us,” from @walkingthedot in @QuantaMagazine.

* Alan Turing

###

As we think big, we might send well-connected birthday greetings to Leonard Kleinrock; he was born on this date in 1934. A computer scientist, he made several foundational contributions to the field, in particular to the theoretical foundations of data communication in computer networking. Perhaps most notably, he was central to the development of ARPANET (which essentially grew up to be the internet); his graduate students at UCLA were instrumental in developing the communication protocols for internetworking that made that possible.

Kleinrock at a meeting of the members of the Internet Hall of Fame
