(Roughly) Daily

“They swore by concrete. They built for eternity”*…

Understanding how the materials we use work– and don’t work– together…

For most of a red swamp crayfish’s life, Cambarincola barbarae are a welcome sight. Barbarae – whitish, leech-like worms, each a couple of millimeters long – eat the swamp scum off the crayfish’s shells and gills, and in most cases improve the crayfish’s health and life expectancy. Together, barbarae and crayfish form a mutualistic symbiotic relationship. Both species benefit from their cohabitation, and barbarae have evolved to the point where their entire life cycle, from egg to adult, occurs while attached to a crayfish.

But their symbiosis is contextual – a tentative truce. Young crayfish (who molt their shells more frequently and therefore accumulate less scum) don’t need much cleaning, and will take pains to remove barbarae from their shells. And even when molting has slowed and a crayfish has allowed the symbiosis to flourish, there are limits to barbarae’s loyalty: If there isn’t enough food for them to survive, they’ll turn parasitic, devouring their host’s gills and eventually killing them.

Like symbioses, composite materials can be incredibly productive: two things coming together to create something stronger. But like crayfish and barbarae, their outcomes can also be tragic. Rarely are two materials a perfect match for each other, and as the environment changes their relationship can turn destructive. And when composites turn destructive – as was evident in the reinforced concrete when Champlain Towers South was inspected back in 2018 – the fallout can be catastrophic.

The history of what we now call composite materials goes back many thousands of years. For modern consumers, the most common composites are fiber-reinforced plastics (the colloquial “carbon fiber” and “fiberglass”), but perhaps the first composites in history were reinforced mud bricks. The Mesopotamians learned to temper their bricks by mixing straw into them at least as early as 2254 BC, increasing their tensile strength and preventing them from cracking as they dried. This method continues around the world today.

But by far the most commonly used composite material in history is steel-reinforced concrete. Roman concrete usage started as early as 200 BCE, and almost three centuries later Pliny the Elder included a note about what appears to be high-quality hydraulic concrete in his Naturalis Historia. These recipes were subsequently forgotten, and the material largely disappeared between the Pantheon and the mid-nineteenth century. Modern concrete involves some legitimate process control: limestone and other materials are heated to around 900° C to create portland cement, which is then pulverized and mixed with water (and aggregate) to create an exothermic reaction resulting in a hard and durable object. The entire process consumes vast amounts of power and produces vast amounts of carbon dioxide, and the industry supporting it today is estimated to be worth about half a trillion dollars.

But in spite of the fortunes that have been invested in the portland cement process (as well as in a wide range of concrete admixtures, which are used to tune both the wet mixture and the finished product), the true magic of contemporary concrete is the fact that it is so often reinforced with steel – dramatically increasing its tensile strength and making it suitable for a wide range of structural applications. This innovation arose in the mid-nineteenth century, developed between 1848 and 1867 by three successive Frenchmen. In the late 1870s, around the time that the first reinforced concrete building was built in New York City, the American inventor Thaddeus Hyatt noted a critical quality of the material: through some fantastic luck, the coefficients of thermal expansion of steel and concrete are strikingly similar, allowing a composite steel-concrete structure to withstand warm/cool cycles without fracturing. This quality opened the floodgates, and in the 1880s the pioneering architect-engineer Ernest Ransome built a string of reinforced concrete structures around the San Francisco Bay Area. From there it was history.
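(For reference – these are typical handbook figures, not values given in the original – the linear expansion coefficients of the two materials overlap almost exactly:

\[
\alpha_{\mathrm{steel}} \approx 11\text{–}13 \times 10^{-6} \, / \, ^{\circ}\mathrm{C}, \qquad \alpha_{\mathrm{concrete}} \approx 8\text{–}12 \times 10^{-6} \, / \, ^{\circ}\mathrm{C}
\]

so a composite member that warms or cools by a few tens of degrees strains nearly uniformly instead of shearing along the steel-concrete bond.)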

More than any other physical technology, it is reinforced concrete that defines the 20th century. Versatile, strong, and (relatively) durable, the material is critical to life and industry as we know it. Reinforced concrete was the material of choice of Albert Kahn, who with Henry Ford defined 20th century industrial architecture; reinforced concrete is a key part of nearly every type of logistical infrastructure, from roads to bridges to container terminals; reinforced concrete makes up the literal launch pads for human space travel. It’s a critical component of power plants, dams, wind turbines, and the vast majority of mid- to late-twentieth century homes and apartment buildings. Its high compressive strength makes it ideally suited for footings and foundations; its high tensile strength lets it cantilever and span great distances easily.

But reinforced concrete is really only 140 years old – the blink of an eye, as far as the infrastructure of old is concerned. The Pantheon was built around 125 CE, by which time the Romans had been experimenting with concrete construction for well over 300 years. When we see the Pantheon, we’re seeing a mature method – a technology at full readiness, used in an architectural style tuned to its physical properties.

By contrast, even our most iconic steel-reinforced concrete buildings are prototypes…

Early on in the history of steel-reinforced concrete, it was known that the high alkalinity of concrete helped to inhibit the rebar from rusting. The steel was said to be sealed within a monolithic block, safe from the elements and passivated by its high pH surroundings; it would ostensibly last a thousand years. But atmospheric carbon dioxide inevitably penetrates concrete, reacting with lime to produce calcium carbonate – and lowering its pH. At that point, the inevitable cracks and fissures allow the rebar inside to rust, whereupon it expands dramatically, cracking the concrete further and eventually breaking the entire structure apart.
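(The underlying chemistry – a textbook reaction, not something spelled out in Wright’s piece – is straightforward: dissolved carbon dioxide neutralizes the calcium hydroxide that keeps the pore water alkaline,

\[
\mathrm{CO_2 + Ca(OH)_2 \longrightarrow CaCO_3 + H_2O}
\]

dropping the pH from roughly 13 toward 8 or 9. Below that, the passivating oxide film on the rebar breaks down, and the rust that forms occupies several times the volume of the steel it consumes – hence the wedging and cracking.)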

This process – carbonatation, followed by corrosion and failure – was often visible but largely ignored into the late twentieth century. Failures in reinforced concrete structures were often blamed on shoddy construction, but the reality is that like the crayfish and the barbarae, the truce between concrete and steel is tentative. What protection concrete offers steel is slowly eaten away by carbonatation, and once it’s gone the steel splits the concrete apart from the inside…

There are of course many potential innovations to come in reinforced concrete. Concrete mixtures made with fly ash and slag produce high-strength, durable structures. Rebar rust can be mitigated by using sacrificial anodes or impressed current. Rebar can be made of more weather-resistant materials like aluminum bronze and fiberglass. Or the entire project could be scrapped – after all, the CO2 emitted by the cement industry is nothing to thumb your nose at. Whatever we do, we should remember that the materials we work with are under no obligation to get along with one another – and that a symbiotic truce today doesn’t necessarily mean structural integrity tomorrow.

On composites, crayfish, and reinforced concrete’s tentative alkalinity: “A Symbiotic Truce,” from Spencer Wright (@pencerw), whose newsletter, “The Prepared” (@the_prepared), is always an education.

* Günter Grass

###

As we delve into durability, we might recall that it was on this date in 315 that the Arch of Constantine officially opened. A triumphal arch in Rome dedicated to the emperor Constantine the Great, it was constructed of Roman concrete, faced with brick, and revetted in marble.

Roman concrete, like any concrete, consists of an aggregate and hydraulic mortar – a binder mixed with water (often sea water) that hardens over time. The aggregate varied, and included pieces of rock, ceramic tile, and brick rubble from the remains of previously demolished buildings. Gypsum and quicklime were used as binders, but volcanic dusts, called pozzolana or “pit sand”, were favored where they could be obtained. Pozzolana makes the concrete more resistant to salt water than modern-day concrete.

The strength and longevity of Roman marine concrete is understood to benefit from a reaction of seawater with a mixture of volcanic ash and quicklime to create a rare crystal called tobermorite, which may resist fracturing. As seawater percolated within the tiny cracks in the Roman concrete, it reacted with phillipsite naturally found in the volcanic rock and created aluminous tobermorite crystals. The result is a candidate for “the most durable building material in human history.” In contrast, as Wright notes above, modern concrete exposed to saltwater deteriorates within decades.

source

“Anonymity is the fame of the future”*…

So… late last spring, a strange, beguiling novel began arriving, in installments, in the mail, addressed to writer Adam Dalva at his parents’ apartment. Who had written it?…

It arrived at the height of the pandemic, in a brown envelope with no return address and too many stamps, none of which had been marked by the post office. It was addressed to me at my parents’ New York City apartment, where I haven’t lived in more than a decade. My mother used the envelope as a notepad for a few weeks, then handed it off to me in July; it was the first time I’d seen her after months of quarantine. Inside the envelope was a small, stapled book—a pamphlet, really—titled “Foodie or The Capitalist Monsoon that is Mississippi,” by a writer named Stokes Prickett. On the cover, there was a photograph of a burrito truck and a notice that read “Advance Promotional Copy: Do Not Read.” The book began with a Cide Hamete Benengeli-style introduction attributed to a Professor Sherbert Taylor. Then a fifty-five-page bildungsroman written in short sections with boldface titles. The prose reminded me a bit of Richard Brautigan.

Because I write book reviews, dozens of unsolicited books are sent to my house every month. Many of them, I confess, barely catch my attention before they’re added to a stack on the floor. But I sat down and read this one all the way through. The narrator of “Foodie” is Rusty, who thinks back on his days in high school, when he worked as a thumbtack-maker’s apprentice, then in a floor-mat factory. Rusty meets another kid from school, an idealist called Foodie whose real name is Gourmand, and whom Rusty describes as “a tetherball champ, a king of the taco stands,” in a town “at the edge of the 8-track suburbs.” Foodie, Rusty says, “was the kindest werewolf on the warfront, and I was his hairdresser.” They start spending time with a hulking, ruthless classmate named Dale, who is “right-handed and immoral as parchment,” and fated to die young because he has a white-collar job that causes him to move through time more quickly than his friends do. After Dale’s death, Foodie and Rusty part ways.

The book was good. But who was Stokes Prickett, and how did this person get my parents’ address?…

A most marvelous mystery, solved by @adalva: “On the Trail of a Mysterious, Pseudonymous Author.”

* John Boyle O’Reilly

###

As we get to the bottom of it, we might recall that it was on this date in 1901 that William Sydney Porter was released (on good behavior) after serving three years in the Ohio Penitentiary for bank fraud and embezzlement; a licensed pharmacist, he had worked in the prison’s infirmary. But on his release, he turned to what had been a pastime, writing. Over the next several years he wrote 381 short stories under the pen name by which we know him, “O. Henry,” including a story a week for over a year for the New York World Sunday Magazine.

His wit, characterization, and plot twists– as evidenced in stories like “The Gift of the Magi” and “The Ransom of Red Chief”– were adored by his readers but often panned by critics… though academic opinion has since come around: O. Henry is now considered by many to be America’s answer to Guy de Maupassant.


source

Written by (Roughly) Daily

July 24, 2021 at 1:00 am

“The circus comes as close to being the world in microcosm as anything I know; in a way, it puts all the rest of show business in the shade.”*…

Come one, come all!…

While circus acts go back to the mists of time, the circus as commercial entertainment dates to the opening decades of the nineteenth century. In Victorian England, the circus appealed across an otherwise class-divided society, its audiences ranging from poor peddlers to prestigious public figures. The acts that attracted such audiences included reenacted battle scenes, which reinforced patriotic identity; exotic animal displays that demonstrated the reach of Britain’s growing empire; female acrobatics, which disclosed anxieties about women’s changing role in the public sphere; and clowning, which spoke to popular understandings of these poor players’ melancholy lives on the margins of society.

The proprietor and showman George Sanger (from whose collection the following photographs come) was a prime example of how the circus was to evolve from a small fairground-type enterprise to a large-scale exhibition. Sanger’s circuses began in the 1840s and ’50s, but by the 1880s, they had grown to such a scale that they were able to hold their own against the behemoth of P.T. Barnum’s three-ring circus, which arrived in London for the first time in that decade.

Like many circuses in the nineteenth century, Sanger’s was indebted to the technology of modern visual culture to promote his business. Local newspapers displayed photographs alongside advertisements to announce the imminent arrival of a circus troupe. Garish posters, plastered around towns, also featured photographs of their star attractions. And individual artists used photographic portraits, too (in the form of the carte-de-visite or calling card), to draw attention to their attributes and to seek employment. One striking image in this collection [the image above] poses six performing acrobats amid the other acts—a lion tamer, an elephant trainer, a wire walker, and a clown—in one of Sanger’s circuses, all in front of the quintessential big-top tent. Maybe the projection of the collective solidarity of the circus in this image belies personal rivalries and animosities that might have characterized life on the road. Moreover, at the extreme edge of the image, on the right-hand side behind the dog trainer, there appears to be the almost ghostly presence of a Black male figure. By dint of their peripatetic existence, all those employed in the circus were often viewed as marginal and exotic. However, this image is a reminder of how racial and ethnic minorities were a presence within circus culture, even if, as here, they appear to have been banished to the margins of the photograph.

That most democratic of Victorian popular entertainments: photos from the Sanger Circus Collection.

* E. B. White

###

As we head for the big top, we might recall that today is International Yada Yada Yada Day. Lenny Bruce is often credited with the first use of “yadda yadda” on the closing track of his 1961 album “Lenny Bruce – American,” though earlier uses are documented in vaudeville. Employed by comedians and TV shows to convey that something unimportant or irrelevant was being elided, it gained vernacular currency when Jerry Seinfeld’s show featured a variation on this phrase as an inside joke between characters Elaine Benes (played by Julia Louis-Dreyfus) and George Costanza (Jason Alexander).

“The Yada Yada,” the series’ 153rd episode, focused on just how badly using the phrase can backfire when the details being omitted are actually extremely important– the fact that George’s new girlfriend is a kleptomaniac who steals to kill time, or that Jerry’s new girlfriend is both racist and antisemitic. (That episode also introduced the term “anti-dentite.”) Hilarity ensues when both these unwitting men find out what kind of people they have been dating, and must break off the relationships.

In 2009, the Paley Center for Media named “Yada, Yada, Yada” the No. 1 funniest phrase on “TV’s 50 Funniest Phrases.”

“How can you govern a country which has 246 varieties of cheese?”*…

Well, one strategy, embraced by dictators worldwide, is to declare one of them the official national cheese…

It always surprises me that more people don’t know that pad Thai was invented by a dictator. I don’t mean that the authoritarian prime minister of Thailand, Plaek Phibunsongkhram, got creative in the kitchen one day. But he made pad Thai—then an unknown noodle dish without a name—the country’s national dish by fiat.

Phibunsongkhram was a military officer who took power in a coup and liked to compare himself to Napoleon. Establishing pad Thai as Thailand’s official food was one of many reforms he pursued to unify the country under his leadership. And it was remarkably successful.

The Thai leader is not the only authoritarian who took an active interest in his country’s cuisine. When successful, dictators’ food obsessions can change how a country eats and drinks for generations. Here, we explore the fascinating but unnerving world of dictator food projects…

Authoritarian food obsessions can have a lasting legacy: “The Dictators Who Ruled Their Countries’ Cuisines,” from Alex Mayyasi (@amayyasi), with a Q&A with chef-turned-journalist Witold Szablowski, who published How to Feed a Dictator, a book that tells the story of five chefs who worked for five terrible rulers.

* Charles de Gaulle

###

As we contemplate comestible coercion, we might send comforting birthday greetings to Dorcas Lillian Bates Reilly; she was born on this date in 1926. A chef and inventor, she worked for many years in the test kitchen at the Campbell’s Soup Company– where she developed hundreds of recipes, including a tuna-noodle casserole and Sloppy Joe “souperburgers.” But she is best remembered for “the green bean bake”– or as it is better known, the green bean casserole— a holiday staple in tens of millions of households every year. While her recipe made good use of her employer’s Cream of Mushroom Soup, she believed that the French’s crispy fried onions were the “touch of genius” in the dish.

source

Written by (Roughly) Daily

July 22, 2021 at 1:00 am

“Be a good ancestor”*…

Even though– especially because– it’s hard…

… Mental time travel is essential. In one of Aesop’s fables, ants chastise a grasshopper for not collecting food for the winter; the grasshopper, who lives in the moment, admits, “I was so busy singing that I hadn’t the time.” It’s important to find a proper balance between being in the moment and stepping out of it. We all know people who live too much in the past or worry too much about the future. At the end of their lives, people often regret most their failures to act, stemming from unrealistic worries about consequences. Others, indifferent to the future or disdainful of the past, become unwise risk-takers or jerks. Any functioning person has to live, to some extent, out of the moment. We might also think that it’s right for our consciousnesses to shift to other times—such inner mobility is part of a rich and meaningful life.

On a group level, too, we struggle to strike a balance. It’s a common complaint that, as societies, we are too fixated on the present and the immediate future. In 2019, in a speech to the United Nations about climate change, the young activist Greta Thunberg inveighed against the inaction of policymakers: “Young people are starting to understand your betrayal,” she said. “The eyes of all future generations are upon you.” But, if their inaction is a betrayal, it’s most likely not a malicious one; it’s just that our current pleasures and predicaments are much more salient in our minds than the fates of our descendants. And there are also those who worry that we are too future-biased. A typical reaction to long-range programs, such as John F. Kennedy’s Apollo program or Elon Musk’s SpaceX, is that the money would be better spent on those who need it right now. Others complain that we are too focussed on the past, or on the sentimental reconstruction of it. Past, present, future; history, this year, the decades to come. How should we balance them in our minds?

Meghan Sullivan, a philosopher at the University of Notre Dame, contemplates these questions in her book “Time Biases: A Theory of Rational Planning and Personal Persistence.” Sullivan is mainly concerned with how we relate to time as individuals, and she thinks that many of us do it poorly, because we are “time-biased”—we have unwarranted preferences about when events should happen. Maybe you have a “near bias”: you eat the popcorn as the movie is about to start, even though you would probably enjoy it more if you waited. Maybe you have a “future bias”: you are upset about an unpleasant task that you have to do tomorrow, even though you’re hardly bothered by the memory of performing an equally unpleasant task yesterday. Or maybe you have a “structural bias,” preferring your experiences to have a certain temporal shape: you plan your vacation such that the best part comes at the end.

For Sullivan, all of these time biases are mistakes. She advocates for temporal neutrality—a habit of mind that gives the past, the present, and the future equal weight. She arrives at her arguments for temporal neutrality by outlining several principles of rational decision-making. According to the principle of success, Sullivan writes, a rational person prefers that “her life going forward go as well as possible”; according to the principle of non-arbitrariness, a rational person’s preferences “are insensitive to arbitrary differences.” A commitment to being rational, Sullivan argues, will make us more time-neutral, and temporal neutrality will help us think better about everyday problems, such as how best to care for elderly parents and save for retirement.

Perhaps our biggest time error is near bias—caring too much about what’s about to happen, and too little about the future. There are occasions when this kind of near bias can be rational: if someone offers you the choice between a gift of a thousand dollars today and a year from now, you’d be justified in taking the money now, for any number of reasons. (You can put it in the bank and get interest; there’s a chance you could die in the next year; the gift giver could change her mind.) Still, it’s more often the case that, as economists say, we too steeply “discount” the value of what’s to come. This near bias pulls at us in our everyday decisions. We tend to be cool and rational when planning for the far-off future, but we lose control when temptations grow nearer in time.

If near bias is irrational, Sullivan argues, so is future bias… Sullivan shares an example invented by the philosopher Derek Parfit. Suppose that you require surgery. It’s an unpleasant procedure, for which you need to be awake, in order to coöperate with the surgeon. Afterward, you will be given a drug that wipes out your memory of the experience. On the appointed day, you wake up in the hospital bed, confused, and ask the nurse about the surgery. She says that there are two patients in the ward—one who’s already had the operation, and another who’s soon to have it; she adds that, unusually, the operation that already happened took much longer than expected. She isn’t sure which patient you are, and has to go check. You would be greatly relieved, Parfit says, if the nurse comes back and tells you that you already had the operation. That is, you would willingly consign to your past self a long and agonizing procedure to avoid a much shorter procedure to come.

There is an evolutionary logic behind this kind of bias. As Caspar Hare, a philosopher at M.I.T., puts it, “It is not an accident that we are future-biased with respect to pain. That feature of ourselves has been selected-for by evolution.” In general, Hare writes, it seems likely that animals that focussed their attention on the future survived longer and reproduced more…

In 1992, Parfit teamed up with the economist Tyler Cowen to argue, in a book chapter, that our governments are too eager to discount the fortunes of future people. Parfit and Cowen proposed that even a small bias in favor of the present over the future could have huge consequences over time. Suppose that a politician reasons that one life now is equal to 1.01 lives a year from now, and so embraces policies that favor a hundred people now over a hundred people next year. This hardly seems to matter—but this “discount rate” of one per cent per year implies that we would rather save a single life now, at the cost of a million lives in about fourteen hundred years. At a ten-per-cent discount rate, one life now would be worth a million in a mere century and a half. Although no one in power thinks in exactly these terms, many of our decisions favor the present over the future.
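(A quick check of the arithmetic – mine, not Bloom’s: at an annual discount rate r, one present life outweighs a million future lives once (1 + r)^t reaches a million,

\[
1.01^{t} = 10^{6} \;\Rightarrow\; t = \frac{6 \ln 10}{\ln 1.01} \approx 1{,}390 \text{ years}, \qquad 1.10^{t} = 10^{6} \;\Rightarrow\; t \approx 145 \text{ years},
\]

which is where “about fourteen hundred years” and “a century and a half” come from.)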

In a 2018 book, “Stubborn Attachments,” Cowen expands on the idea, asking how we can fight near bias at a societal level and better further the interests of future people. There are “a variety of relevant values” that we might want to consider in our temporal rebalancing, he writes, “including human well-being, justice, fairness, beauty, the artistic peaks of human achievement, the quality of mercy,” and so on. Cowen concludes that the best way to maximize all of these things for the future is to increase economic growth. (He doesn’t go just by G.D.P.—he adds in various measures of “leisure time, household production, and environmental amenities.”)

The thing about economic growth, Cowen tells us, is that it has the potential to advance just about everything that people value. “Wealthier societies have better living standards, better medicines, and offer greater personal autonomy, greater fulfillment, and more sources of fun,” he writes. He concedes that, in recent decades, inequality has risen within wealthier nations, but also notes that, as a consequence of global economic growth, “recent world history has been an extraordinarily egalitarian time”: over all, countries are becoming more equal. In terms of happiness, Cowen shows that there is considerable evidence supporting the commonsense view that citizens of rich countries are happier than citizens of poor countries, and that, within rich countries, wealthier individuals are happier than poorer ones. The data actually understate the strength of the effect, Cowen writes, because many studies miss the happiness boost that comes from more years on the earth: “Researchers do not poll the dead.”

Cowen is sympathetic to the school of thought known as effective altruism, which holds that we should use data and research to figure out how to do the greatest good for the greatest number of people. But he worries that these sorts of altruists are too prone to think about the greatest good for people right now. An effective altruist might hold that, instead of spending money on some luxury for yourself, you should use it to help the poor. But, for Cowen, this sort of advice is too present-oriented. Even a small boost in the growth rate has enormous ramifications for years to come. “Our strongest obligations are to contribute to sustainable economic growth,” he writes, “and to support the general spread of civilization, rather than to engage in massive charitable redistribution in the narrower sense.” In general, Cowen thinks that policymakers should be more future-oriented. He suggests that we should put fewer resources into improving the lives of the elderly and devote correspondingly more resources to the young and the not-yet-born. Most politicians would balk at this suggestion, but, when they do the opposite—well, that’s a choice, too.

Cowen, to my mind, glosses over the problem of diminishing returns. Suppose that our prosperity increases a hundredfold. Life would be better, but would our happiness also increase by a multiple of a hundred? After a certain point, it might make sense to worry less about growth. Perhaps the most privileged of us are close to that point now. But these things can be hard to judge. The Babylonian kings might have thought that they were living the best possible lives, not realizing that, in the future, even everyday schmoes would be wiser and more pain-free, living longer, eating better, and traveling more.

Whether or not one agrees with Cowen’s thesis, there are clearly good reasons for adopting temporal neutrality on a societal level. It’s less clear that we have an obligation to be rigorously time-neutral as individuals. If we can indulge our own time biases without making horrible errors in judgment, why shouldn’t we? Why not distribute our pleasures and pains unevenly throughout our lives, if we believe that, for us, doing so will contribute to “life going forward as well as possible”? For many people, as Seneca wrote, “Things that were hard to bear are sweet to remember.” We undertake activities that we know to be difficult or unpleasant because we see them as part of a good life and wish to think back upon them in the future. We curate our presents to furnish our futures with the right kinds of pasts. If this benign bias encourages us to take on difficult things, isn’t it wise to indulge the bias?

Many people suspect that a good life might be one that’s ordered in a certain way. Psychologists find that people tend to prefer the idea of a wonderful life that ends abruptly to the idea of an equally wonderful one that includes some additional, mildly pleasant years—the “James Dean effect.” There’s also an appeal to starting with the worst and then seeing things improve. Andy Dufresne, the protagonist of the film “The Shawshank Redemption,” based on a novella by Stephen King, is convicted of double murder but maintains his innocence; he spends twenty-eight years in prison before stealing millions of dollars from his corrupt warden and escaping, then living out the rest of his life on a Mexican beach. It’s an exhilarating and powerful tale, but, if one flipped the order—coastal paradise, then brutal prison—it would be impossible to enjoy. Rags to riches beats riches to rags, even if the good and the bad are in precise balance. Maybe this is what Sullivan calls a structural bias—but, without structure, there’s no story, and stories are good things to have.

It’s true that time-biased thinking can mislead us. Imagine that you are listening to a symphony for a pleasurable ninety minutes—and then, at the end, someone’s cell phone goes off, to loud shushing and stifled laughter. You might say that these awful thirty seconds ruined the experience, even though the first ninety-nine per cent of it was wonderful, and think that, if the phone had rung at the start, it would have been less of a problem. But is a disruption in the finale really worse than an interruption in the overture? Sullivan’s arguments show that we should try reconsidering those kinds of intuitions—and that we should be wary, in general, of the strange places to which they can lead us. In a classic series of studies, Daniel Kahneman and his colleagues exposed volunteers to two different experiences—sixty seconds of moderate pain, and sixty seconds of moderate pain followed by thirty seconds of mild pain. When they asked people which experience they would rather repeat, most chose the second experience, just because it ended better. There is little good to be said about choosing more over-all pain just because the experience ends on the right note.

And yet giving up all our time biases is a lot to ask. We are, it seems, constituted to favor the here and now, to radically discount the distant future, and to give special weight to how experiences end. We can move in the direction of temporal neutrality, fighting against certain time biases just as we resist our other unreasonable biases and preferences. This may make us more rational, more kind to others, and, at times, more happy.

How much should we value the past, the present, and the future? “Being in Time,” from Paul Bloom (@paulbloomatyale)

* “Be a good ancestor. Stand for something bigger than yourself. Add value to the Earth during your sojourn.” – Marian Wright Edelman

###

As we take the long view, we might recall that it was on this date in 356 BC that the second version of the Temple of Artemis at Ephesus (which had replaced a Bronze Age structure) was destroyed by arson (a man named Herostratus set fire to the wooden roof-beams, seeking fame at any cost; thus the term “herostratic fame”).

Its third iteration was finished several decades later, and survived for six centuries. It was described in Antipater of Sidon’s list of the world’s Seven Wonders:

I have set eyes on the wall of lofty Babylon on which is a road for chariots, and the statue of Zeus by the Alpheus, and the hanging gardens, and the colossus of the Sun, and the huge labour of the high pyramids, and the vast tomb of Mausolus; but when I saw the house of Artemis that mounted to the clouds, those other marvels lost their brilliancy, and I said, “Lo, apart from Olympus, the Sun never looked on aught so grand”.

This model of the Temple of Artemis, at Miniatürk Park, Istanbul, Turkey, attempts to recreate the probable appearance of the third temple.

source

Written by (Roughly) Daily

July 21, 2021 at 1:00 am
