(Roughly) Daily

“Tell me to what you pay attention and I will tell you who you are”*…

A man wearing a gas mask operates a device at a wooden table, with letters L, A, M, F, and E visible on the table. Equipment and hoses are connected to the device.
A test subject has his oxygen consumption measured while using Walter R. Miles’ Pursuitmeter, as pictured in the inventor’s 1921 article for the Journal of Experimental Psychology (source).

Before the attention economy consumed our lives, “pursuit tests” devised by the US military coupled man to machine with the aim of assessing focus under pressure. D. Graham Burnett explores these devices for evaluating aviators, finding a pre-history of the laboratory research that has relentlessly worked to slice and dice the attentional powers of human beings…

We worry about our attention these days — nearly all of us. There is something. . . wrong. We cannot manage to do what we want to do with our eyes and minds — not for long, anyway. We keep coming back to the machines, to the screens, to the notifications, to the blinking cursor and the frictionless swipe that renews the feed.

An ethnographer from Mars, moving among us (would we even notice?), might have trouble understanding our complaint: “Trouble with their attention? They stare at small slabs of versicolor glass all day! Their attentive powers are. . . sublime!”

And that misunderstanding rather sharpens the point: we don’t have any problem at all with the forms of attention that involve remaining engaged with, and responsive to, machines. We are amazing at the click and tap of durational vigilance to this or that stimulus, presented at the business end of a complex device. Our uncanny and immersive cybernetic attention is a defining characteristic of the age. Our human attention — our ability to be with ourselves and with others, our ability to receive the world with our minds and senses, our ability to daydream, read a book uninterrupted, or watch a sunset — well, many of us are finding it increasingly difficult to remember what that might even mean.

This isn’t really an accident. Over the last century or so, a series of elaborate programs of laboratory research have worked to slice and dice the attentional powers of human beings. Their aim? To understand the operational capacities of those who would be asked to shoot down airplanes, monitor radar screens, and otherwise sit at the controls of large and expensive machines. Seated in front of countless instruments, experimental subjects were asked to listen and look, to track and trigger. Psychologists stood by with stopwatches, quantifying our cybernetic capacities, and seeking ways to extend them. For those of us who have come of age in the fluorescence of the “attention economy”, it is interesting to look back and try to catch glimpses of the way that the movement of human eyeballs came under precise scrutiny, the way that machine vigilance became a field of study. We know now that the mechanomorphic attention dissected in those laboratories is the machine attention that is relentlessly priced in our online lives — to deleterious effects.

You could say that this process began with the fascinating and now mostly forgotten tool known as the “pursuit test”. Part steampunk videogame, part laboratory snuff-flick, the pursuit test staged and restaged the integration of man and machine across the first decades of the twentieth century…
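
For a sense of what such a device actually measured, here is a minimal, modern sketch in Python– a toy with invented numbers, emphatically not a model of Miles’ actual Pursuitmeter: a target sweeps back and forth, a “subject” tracks it with a fixed reaction lag, and the score is the fraction of time spent on target, the sort of vigilance metric the psychologists’ stopwatches were after.

```python
import math

def pursuit_score(duration=60.0, dt=0.05, lag=0.3, tolerance=0.1):
    """Toy pursuit test: a target oscillates, a tracker follows the same
    path with a fixed reaction lag, and the score is the fraction of time
    the tracker stays within `tolerance` of the target. All numbers are
    illustrative assumptions, not measurements from any real apparatus."""
    steps = int(duration / dt)
    on_target = 0
    for i in range(steps):
        t = i * dt
        target = math.sin(2 * math.pi * 0.2 * t)            # target position
        tracker = math.sin(2 * math.pi * 0.2 * (t - lag))   # lagged follower
        if abs(target - tracker) < tolerance:
            on_target += 1
    return on_target / steps

print(f"time on target: {pursuit_score():.0%}")
```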

Fascinating– and timely: “Cybernetic Attention– All Watched over by Machines We Learned to Watch,” from @publicdomainrev.bsky.social. Eminently worth reading in full.

* Jose Ortega y Gasset

###

As we untangle engagement, we might send thoughtful birthday greetings to a man whose work influenced the endeavors described in the piece featured above, Hermann Ebbinghaus; he was born on this date in 1850. A psychologist, he pioneered the experimental study of memory and discovered the learning curve, the forgetting curve, and the spacing effect.
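
His forgetting curve is commonly formalized today as exponential decay, with retention R = e^(−t/S) for some memory “stability” S; the spacing effect is the observation that reviews spaced out over time flatten that decay. Here is a toy Python sketch of both– a deliberately simplified modern model with made-up numbers, not Ebbinghaus’s own formulation:

```python
import math

def retention(hours_since_review, stability):
    """Exponential forgetting curve R = exp(-t/S): the fraction of material
    still recallable t hours after study, given a memory 'stability' S
    (larger S means slower forgetting)."""
    return math.exp(-hours_since_review / stability)

# Toy spacing effect: assume (purely for illustration) that each spaced
# review doubles stability, so the curve flattens with every pass.
stability = 24.0                      # assumed initial stability, in hours
for day, reviewed in enumerate([False, True, False, True, False], start=1):
    print(f"day {day}: retention after 24h ≈ {retention(24, stability):.2f}")
    if reviewed:
        stability *= 2.0              # assumed boost from a spaced review
```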

Black and white portrait of a man with a large beard, wearing round glasses and a formal suit, looking directly at the camera.

source

“Smell is a potent wizard that transports you across thousands of miles and all the years you have lived”*…

An artistic representation of a human nose surrounded by various flowers, molecular structures, and an orange, highlighting the connection between smell and emotions.

The most under-rated of our senses is also the least understood. But as Yasemin Saplakoglu reports, a better understanding of human smell is emerging as scientists interrogate its fundamental elements: the odor molecules that enter your nose and the individual neurons that translate them into perception in your brain…

… Smell is deeply tied with the emotion and memory centers of our brain. Lavender perfume might evoke memories of a close friend. A waft of cheap vodka, a relic of college days, might make you grimace. The smell of a certain laundry detergent, the same one your grandparents used, might bring tears to your eyes.

Smell is also our most ancient sense, tracing back billions of years to the first chemical-sensing cells. But scientists know little about it compared to other senses — vision and hearing in particular. That’s in part because smell has not been deemed critical to our survival; humans have been wrongly considered “bad smellers” for more than a century. It’s also not easy to study.

“It’s a highly dimensional sense,” said Valentina Parma, an olfactory researcher at the Monell Chemical Senses Center in Philadelphia. “We don’t know exactly how chemicals translate to perception.” But scientists are making progress toward systematically characterizing and quantifying what it means to smell by breaking the process down to its most fundamental elements — from the odor molecules that enter your nose to the individual neurons that process them in the brain.

Several new databases, including one recently published in the journal Scientific Data, are attempting to establish a shared scientific language for the perception of molecular scents — what individual molecules “smell like” to us. And on the other end of the pathway, researchers recently published a study in Nature describing how those scent molecules are translated into a neural language that triggers emotions and memories.

Together, these efforts are painting a richer picture of our strongest memory-teleportation device. This higher-resolution look is challenging the long-held assumption that smell is our least important sense…

[Saplakoglu recounts the history of our understanding of smell; explains the current science on how millions of molecules, often in complex bouquets, enter the nose and are processed by neurons to generate a sense of smell that’s deeply emotional and personal; and explores the ways in which it’s instrumental in attraction, survival, and memory…]

… Because our sense of smell can be largely subliminal, in surveys many people, given the choice of losing one sense, choose olfaction. But “every day, I experience people sitting in my office and talking about how they are disconnected to the world,” [Thomas] Hummel said. They can’t smell their children or spouses anymore. They cannot detect bad-smelling food or dangerous smoke. They no longer have access to certain memories.

“I know the memory is there, but I don’t have the key to open [it] anymore,” Hummel said. “Life becomes a much more insecure place without a sense of smell in many ways, but you only realize it when it’s gone.”…

Fascinating: “How Smell Guides Our Inner World,” from @yaseminsaplakoglu.bsky.social‬ in @quantamagazine.bsky.social‬.

* Helen Keller

###

As we get to know the nose, we might celebrate the avatar of affecting aromas: today is National Cheese Pizza Day.

Close-up of a slice of cheese pizza on a metal tray, showcasing its melted cheese and tomato sauce.

source

Written by (Roughly) Daily

September 5, 2025 at 1:00 am

“Memory resides not just in brains but in every cell”*…

An artistic representation of a cell illustrated with two faces merging in its center, surrounded by cellular structures like mitochondria and various organelles, set against a gradient background of soft colors.

As the redoubtable Claire L. Evans [and here] reports, a small but enthusiastic group of neuroscientists is exhuming overlooked experiments and performing new ones to explore whether cells record past experiences — fundamentally challenging our understanding of what memory is…

In 1983, the octogenarian geneticist Barbara McClintock stood at the lectern of the Karolinska Institute in Stockholm. She was famously publicity averse — nearly a hermit — but it’s customary for people to speak when they’re awarded a Nobel Prize, so she delivered a halting account of the experiments that had led to her discovery, in the early 1950s, of how DNA sequences can relocate across the genome. Near the end of the speech, blinking through wire-framed glasses, she changed the subject, asking: “What does a cell know of itself?”

McClintock had a reputation for eccentricity. Still, her question seemed more likely to come from a philosopher than a plant geneticist. She went on to describe lab experiments in which she had seen plant cells respond in a “thoughtful manner.” Faced with unexpected stress, they seemed to adjust in ways that were “beyond our present ability to fathom.” What does a cell know of itself? It would be the work of future biologists, she said, to find out.

Forty years later, McClintock’s question hasn’t lost its potency. Some of those future biologists are now hard at work unpacking what “knowing” might mean for a single cell, as they hunt for signs of basic cognitive phenomena — like the ability to remember and learn — in unicellular creatures and nonneural human cells alike. Science has long taken the view that a multicellular nervous system is a prerequisite for such abilities, but new research is revealing that single cells, too, keep a record of their experiences for what appear to be adaptive purposes.

In a provocative study published in Nature Communications late last year, the neuroscientist Nikolay Kukushkin and his mentor Thomas J. Carew at New York University showed that human kidney cells growing in a dish can “remember” patterns of chemical signals when they’re presented at regularly spaced intervals — a memory phenomenon common to all animals, but unseen outside the nervous system until now. Kukushkin is part of a small but enthusiastic cohort of researchers studying “aneural,” or brainless, forms of memory. What does a cell know of itself? So far, their research suggests that the answer to McClintock’s question might be: much more than you think…

[Evans explains the prevailing wisdom, outlines the experiments that have challenged it, and unpacks (at least some reasons for) resistance to the notion of cellular-scale memory, both sociological and semantic…]

… In neuroscience, [biochemist and neuroscientist Nikolay] Kukushkin writes, the most common definition of memory is that it’s what remains after experience to change future behavior. This is a behavioral definition; the only way to measure it is to observe that future behavior. Think of S. roeselii snapping back into its holdfast, or a lab rat freezing up at the sight of an electrified maze it’s tangled with before. In these cases, how an organism reacts is a clue that prior experience left a lingering trace.

But is a memory only a memory when it’s associated with an external behavior? “It seems like an arbitrary thing to decide,” Kukushkin said. “I understand why it was historically decided to be that, because [behavior] is the thing you can measure easily when you’re working with an animal. I think what happened is that behavior started as something that you could measure, and then it ended up being the definition of memory.”

Behavior tells us that a memory has formed but says nothing about why, how or where. Further, it’s constrained by scale. Take Aplysia californica, a muscular sea slug with enormous neurons (the largest is about the size of a letter on a U.S. penny). Neuroscientists love to conduct memory experiments on Aplysia because its physical responses are easy to measure — poke it and it flinches — and they map cleanly to the handful of sensory and motor neurons involved.

The sea slug, Kukushkin said, can complicate neuroscience’s behavioral bias. Say you shock its tail, triggering a defensive reflex. If you shock it again the next day and find that the defensive reflex is stronger than it was before, that’s behavioral evidence that the slug remembers its initial shock. Any neuroscientist would associate it with a memory.

But what if (apologies to the squeamish) you take that sea slug apart and leave only its immobile neurons? Unlike the intact creature, the neurons can’t retract, so there will be no visible response. Is the memory gone? Certainly not, but without external validation, a behavioral definition of memory breaks down. “We no longer call that a memory,” Kukushkin said. “We call that a mechanism for a memory, we call that synaptic change underlying memory, we call that an analogue of memory. But we don’t call that a memory, and I think that it’s arbitrary.”

Perhaps a definition of memory should extend beyond behavior to encompass more records of the past. A vaccination is a kind of memory. So is a scar, a child, a book. “If you make a footprint, it’s a memory,” [cognitive scientist Sam] Gershman said. An interpretation of memory as a physical event — as a mark made on the world, or on the self — would encompass the biochemical changes that occur within a cell. “Biological systems have evolved to harness those physical processes that retain information and use them for their own purposes,” Gershman said.

So, what does a cell know of itself? Perhaps a better version of Barbara McClintock’s question is: What can a cell remember? When it comes to survival, what a cell knows of itself isn’t as important as what it knows of the world: how it incorporates information about its experiences to determine when to bend, when to battle and when to make a break for it.

A cell preserves the information that preserves its existence. And in a sense, so do we. As today’s cellular memory researchers revisit abandoned experimental threads from the past, they too are discovering what memory owes to its context, how science’s sociological environment can determine which ideas are preserved and which are forgotten. It’s almost as though a field is waking up from a 50-year spell of amnesia. Fortunately, the memories are flooding back…

“What Can a Cell Remember?” from @theuniverse.bsky.social in @quantamagazine.bsky.social.

For more on the work that got Barbara McClintock onto the Nobel podium see here.

And, also apposite, a pair of cautionary historical examples– Lysenko and Kammerer– scientists who followed Jean-Baptiste Lamarck (who argued in the early 19th century that an organism can pass on to its offspring physical characteristics acquired through use or disuse during its lifetime– that is to say, that learning, a kind of memory, is heritable) and went astray.

* James Gleick, The Information

###

As we muse on memory (and note that one cannot remember– and learn from– what one cannot know), we might recall that it was on this date in 1735 that New York Weekly Journal publisher and writer John Peter Zenger was acquitted of seditious libel against the royal governor of New York, William Cosby, on the basis that what he had published was true.

In 1733, Zenger had begun printing The New York Weekly Journal, voicing opinions critical of the colonial governor. On November 17, 1734, on Cosby’s orders, the sheriff arrested Zenger. After a grand jury refused to indict him, Attorney General Richard Bradley charged him with libel. Zenger’s lawyers, Andrew Hamilton and William Smith, Sr., successfully argued that truth is a defense against charges of libel… and Zenger became a symbol for freedom of the press.

An illustration depicting a courtroom scene with a judge, lawyers, and an audience, capturing the atmosphere of a historical trial.
Andrew Hamilton defending John Peter Zenger in court, 1734–1735 (source)

“I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness.”*…

A watercolor illustration featuring a silhouette of a person standing on a horizon, surrounded by vibrant and swirling shades of pink, purple, and green.

Adam Frank argues that to understand life, we must stop treating organisms like machines and minds like code…

Much of our current discussion about consciousness has a singular fatal flaw. It’s a mistake built into the very foundations of how we view science — and how science itself is perceived and conducted across disciplines, including today’s hype around artificial intelligence.

What most popular attempts to explain consciousness miss is that no scientific explanations of any kind can be possible without accounting for something that is even more fundamental than the most powerful theories about the physical world: our experience.

Since the birth of modern science more than 400 years ago, philosophers have debated the fundamental nature of reality and the fundamental nature of consciousness. This debate became defined by two opposing poles: physicalism and idealism.

For physicalists, only the material that makes up physical reality is of consequence. To them, consciousness must be reducible to the matter and electromagnetic fields in the brain. For idealists, however, only the mind is real. Reality is built from the realm of ideas or, to put it another way, a pure universal essence of mind (the philosopher Hegel called it “Absolute Spirit”).

Physicists like me are trained to think of the world in terms of its physical representations: matter, energy, space and time. So it’s no surprise that we physicists tend to start off as physicalists, who approach the question of consciousness by inquiring about the physical mechanics that give rise to it, beginning with subatomic particles and then ascending the chain of sciences — chemistry, biology, neuroscience — to eventually focus in on the physical mechanics occurring in the neurons that must generate consciousness (or so the story goes).

This kind of “bottom-up” scientific approach has contributed to modern science’s success, and it is also why physicalism has become so compelling for most scientists and philosophers.  This approach, however, has not worked for consciousness. Trying to account for how our lived experience emerges from matter has proven so difficult that philosopher David Chalmers famously referred to it as “the hard problem of consciousness.”

We use the term consciousness to describe our vividly intimate lives — “what it is like” to exist. But experience, which encapsulates our consciousness, thereby cuts more effectively to the core of our reality. An achingly beautiful red sunset, a crisp bite of an autumn Honeycrisp apple; according to the dominant scientific way of thinking, these are phantoms.

Philosophically speaking, from this physics-first view, all experiences are epiphenomena that are unimportant and surface-level. Neurobiologists might fret over how experience appears or works, but ultimately reality is about quarks, electrons, magnetic fields, gravity and so on — matter and energy moving through space and time. Today’s dominant scientific view is blind to the true nature of experience, and this is costing us dearly.

The optic nerve lies at the back of the human eye, connected to the retina, which is made up of receptors sensitive to incoming light. The nerve’s job is to transmit visual input gathered by those receptors to the brain. But the optic nerve’s location atop a tiny portion of the retina also means there is a blind spot in our vision, a region in the visual field that is literally unseen.

In science, that blind spot is experience.

Experience is intimate — a continuous, ongoing background for all that happens. It is the fundamental starting point below all thoughts, concepts, ideas and feelings. The philosopher William James used the term “direct experience.” Others have used words like “presence” or “being.” Philosopher Edmund Husserl spoke of the “Lebenswelt” or life-world to highlight the irreducible totality of our “already being in a living world” before we ask any questions about it.

From this perspective, experience is a holism; it can’t be pulled apart into smaller units. It is also a precondition for science: To even begin to develop a theory of consciousness requires being already embedded in the richness of experience. But dealing with this has been difficult for the philosophies that guide science as it’s currently configured…

[Frank introduces the perspectives of William James, Alfred North Whitehead, Edmund Husserl, Thomas Nagel, and Immanuel Kant, urging that we move beyond the machine metaphor, and work with concepts like autopoiesis and embodiment…]

… The problem is, once again, surreptitious substitution. Intelligence is mistaken as mere computation. But this assumption undermines the centrality of experience, as philosopher Shannon Vallor has argued. Once we fall into this kind of blind spot, we open ourselves to building a world where our deepest connections and feelings of aliveness are flattened and devalued; pain and love are reduced to mere computational mechanisms viewable from an illusory and dead third-person perspective.

The difference between the enactive approach to cognition and consciousness and the reductive view of physicalism could not be more stark. The latter focuses on a physical object, in this case the brain, asking how the movements of atoms and molecules within it create a property called consciousness. This view assumes that a third-person objective view of the world is possible and that the brain’s job is to provide the best representation of this world.

The enactive approach and similar phenomenologically grounded perspectives, however, don’t separate the brain from the body. That is because brains are not separate things. Like the unity of cell membranes and the cell, brains are part of the organizational unity of organisms with brains. Organisms with brains, therefore, aren’t just representing the world around them; they are co-creating it.

To be clear, there is, of course, a world without us. To claim otherwise would be solipsistic nonsense. But that world without us is not our world. It’s not the one we experience and from which we begin our scientific investigations. Therefore, this third-person perspective of a world without us and our experience is nothing more than a sophisticated kind of fantasy…

[Frank outlines a line of inquiry that builds on these insights…]

… Moving beyond consciousness as a mechanism in the dead physical world toward a view of lived experience as embedded and embodied in a living world is essential for at least two reasons. It may be the fundamental reframing required to make scientific progress on a range of issues, from the interpretation of quantum mechanics to the understanding of cognition and consciousness.

Recognizing the primacy of experience also forces us to understand that all our scientific stories — and the technologies we build from them — must always include us and our place within the tapestries of life. Recognizing there is no such thing as an external view has consequences for how we think about urgent questions like climate change and AI. In this way, the new vision of nature that comes from an experience-centric perspective can help us take the next steps necessary for human flourishing. That goal, after all, was also one of the primary reasons we invented science in the first place…

“Why Science Hasn’t Solved Consciousness (Yet)” from @adamfrank4.bsky.social in @noemamag.com.

Apposite (both to the post above and to the post from July 15): “Human Stigmergy” from @marco-giancotti.bsky.social‬.

* Max Planck

###

As we embrace experience, we might send critical birthday greetings to Herbert Marcuse; he was born on this date in 1898. A philosopher, social critic, and political theorist associated with the Frankfurt School of critical theory, he critiqued capitalism, modern technology, Soviet Communism, and popular culture, arguing that they represent new forms of social control. Best-known for Eros and Civilization (1955) and One-Dimensional Man (1964), he is considered “the Father of the New Left.”

To the degree to which they correspond to the given reality, thought and behavior express a false consciousness, responding to and contributing to the preservation of a false order of facts. And this false consciousness has become embodied in the prevailing technical apparatus which in turn reproduces it.

– Marcuse

A black and white photograph of a middle-aged man sitting comfortably in a chair outdoors, holding a cigar and smiling.

source

“But somewhere, beyond space and time / Is wetter water, slimier slime”*…

Close-up of a vibrant yellow slime mold, _Physarum polycephalum_, spreading across a textured brown log.
Physarum Polycephalum

Scientists have long marvelled at the “intelligent” accomplishments of the humble slime mold (and here). Noting that certain slime molds can make decisions, solve mazes, and remember things, Matthew Sims ponders what we can learn from the blob…

During the COVID-19 pandemic, some people took up baking, others decided to get a dog; I chose to grow and observe slime mould. The study in my partner’s flat in Edinburgh became home to two cultures of Physarum polycephalum, an acellular slime mould sometimes more casually referred to as ‘the blob’.

I began a series of experiments investigating how long it would take for two separated cell masses from the same bisected Physarum cell to stop fusing with one another upon reintroduction. Hours turned into days, and days into weeks, and, due to time constraints, the experiment eventually fizzled out around six weeks. This, however, was only the beginning. Over that following year (unbeknown to our unsuspecting neighbours), I conducted several more experiments. Although none of them were published, each inspired new philosophical questions – which to this day continue to shape my thinking. One of the core questions was: what can the behaviour of slime mould teach us about biological memory?

The differences between P polycephalum and humans may seem vast, but slime mould can reveal a remarkable amount about various aspects of how we remember. While many people might assume that our memories are primarily stored within our brains, some philosophers like myself argue that – along with some other aspects of cognition – memory can extend beyond the confines of the body to involve coupled interaction with structures in the environment. At least some of our cognitive processes, in short, loop out into our surroundings. Slime mould is an intriguing candidate to explore this idea because it doesn’t have a brain at all, yet in some cases can apparently ‘remember’ things without needing to store those associated memories within itself. In other cases, memories acquired via learning by one individual can even be acquired by a separate individual through physical contact. The behaviour of this strange form of life suggests that some of our ideas about how memories are acquired may need a rethink…

[Sims explains how slime mold “remembers”– via slime trails– and explores the questions that this raises…]

… So, what can slime mould teach us about biological memory? One lesson is that spatial memory needn’t be confined entirely within an organism (à la HEC [the hypothesis of extended cognition]). Moreover, what becomes memory traces when used (eg, extracellular slime) needn’t be the result of learning by the external trace-producer. Another takeaway is that, in some cases, an individual can acquire such memory without having engaged in learning itself. This raises an intriguing parallel in the human case. We do, after all, routinely read and act upon instructions, maps and manuals written by others, drawing on information acquired through their experiences, not our own. Although such externalised sources of information are typically declarative in structure — designed to represent facts explicitly — we often act upon them automatically, without needing to consciously recall or reflect on the information they convey. In this way, they guide behaviour in ways that functionally resemble non-declarative memory. While the analogy shouldn’t be pushed too far, both the human and slime mould cases illustrate how memory can become decoupled from individual learning, instead becoming accessible to others through environmental structures.

These conclusions, of course, remain contentious within traditional cognitive science and psychology, where memory is often defined as the result of learning on the part of the same individual whose memory it is. Despite important concerns raised by the likes of Francis Crick in 1984, memory storage is still often attributed to synaptic plasticity – changes in strength of connection between neurons – quashing the very possibility of external memory traces. That said, some like the psychologist C Randy Gallistel – who has long argued that memory may also be stored in molecules like RNA within the brain – have remained vigilant in thinking outside the box. However, given the accumulating empirical evidence that memory-guided behaviour is exhibited in non-neuronal organisms like Physarum, even this outside-the-box thinking remains firmly planted in traditional views about the requirements of brains for memory and the kind of strict internalism HEC suggests needn’t always be the case. Both HEC and memory without learning are not easy pills to swallow, but then again, neither is the very idea that a non-neuronal organism can learn in the first place – an idea that Physarum’s behaviour unequivocally seems to support.

Whether it’s the subject of experiments carried out in a lab (or in a cramped study of an Edinburgh tenement flat) or it’s the subject of empirically informed, armchair philosophising, Physarum provides a valuable model organism to inspect, challenge and refine some of our most fundamental biological concepts – concepts like memory…

Fascinating: “Memories without brains,” from @philosobio.bsky.social‬ in @aeon.co‬.
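
To make the idea of externalized memory concrete, here is a toy sketch in Python– entirely hypothetical, and not Sims’ experimental setup: a forager leaves a mark in every cell it visits and, when choosing its next move, simply avoids marked cells. The record of where it has been lives in the environment, not in the agent.

```python
import random

def forage(steps=20, grid=10, seed=0):
    """Toy 'extracellular memory': the agent deposits a slime mark in every
    cell it visits and prefers unmarked cells when it moves, so its record
    of past exploration is stored in the world rather than in the agent.
    All parameters are illustrative assumptions."""
    random.seed(seed)
    slime = set()                      # marks left behind in the environment
    x, y = grid // 2, grid // 2
    for _ in range(steps):
        slime.add((x, y))
        moves = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < grid and 0 <= y + dy < grid]
        fresh = [m for m in moves if m not in slime]
        x, y = random.choice(fresh or moves)   # prefer unexplored ground
    return slime

print(sorted(forage()))
```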

* Rupert Brooke

###

As we reckon with recall, we might send microscopic birthday greetings to Carl Woese; he was born on this date in 1928. A microbiologist and biophysicist, Woese is famous for defining, in 1977, the Archaea (a new domain of life, distinct from the two previously recognized groups: bacteria and all other, eukaryotic, life). To accomplish this feat, he pioneered the use of 16S ribosomal RNA in phylogenetic taxonomy, a technique that has revolutionized microbiology. Microbiologist Justin Sonnenburg of Stanford said, “The 1977 paper is one of the most influential in microbiology and arguably, all of biology. It ranks with the works of Watson and Crick and Darwin, providing an evolutionary framework for the incredible diversity of the microbial world.”

Woese originated the RNA world hypothesis in 1967, although not by that name. And he also speculated about an era of rapid evolution in which considerable horizontal gene transfer occurred between organisms. With regard to Woese’s work on horizontal gene transfer as a primary evolutionary process, Professor Norman R. Pace of the University of Colorado at Boulder said, “I think Woese has done more for biology writ large than any biologist in history, including Darwin… There’s a lot more to learn, and he’s been interpreting the emerging story brilliantly.”
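
For the curious: the gist of Woese’s approach– inferring evolutionary relationships by comparing ribosomal RNA sequences– survives in everyday bioinformatics tooling. A minimal sketch using Biopython, assuming a pre-aligned FASTA file of 16S sequences (the file name here is hypothetical):

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Load a multiple alignment of 16S rRNA sequences (hypothetical file name).
alignment = AlignIO.read("16s_aligned.fasta", "fasta")

# Pairwise distances from sequence identity, then a neighbor-joining tree --
# a crude stand-in for the oligonucleotide cataloging Woese did by hand.
distances = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distances)

Phylo.draw_ascii(tree)
```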

A portrait of Carl Woese, a prominent microbiologist and biophysicist, sitting and looking directly at the camera with a thoughtful expression. He has gray hair and is wearing a dark shirt with a multi-colored sweater. A wall filled with scientific charts is blurred in the background.

source

Written by (Roughly) Daily

July 15, 2025 at 1:00 am