(Roughly) Daily

“The web of our life is of a mingled yarn”*…

In what does our personhood consist? From what/where does it come? João de Pina Cabral unpacks the seminal thinking of Lucien Lévy-Bruhl and the advances in cognitive science and developmental psychology that suggest that a person is not self-contained, but the outcome of a lifelong process of living with others…

It matters to understand what constitutes a person. After all, if there is one feature that distinguishes human society from other forms of sociality, it is that, at around one year of age, most human beings attain personhood: they learn to speak a language, develop object permanence – the understanding that things do not disappear when out of sight – and relate to others in consciously moral ways. Should all persons be accorded the same rights and duties by virtue of this condition? These are weighty questions that have occupied social scientists and philosophers since antiquity – particularly at moments such as the present, when war and imperial oppression once again raise their ugly heads.

Nevertheless, this question cannot be approached as a purely moral matter, for in order to determine what rights and duties may be attributed to persons, it is necessary to establish what persons are. This longstanding perplexity can now be addressed in increasingly sophisticated ways, following a century of sustained anthropological enquiry.

In September 1926, two of the most eminent anthropologists of the day met in person for the first time in New York. Both were Jewish and born in Europe, but one – Franz Boas – had become an American citizen and was a leading figure at Columbia University in New York, while the other – Lucien Lévy-Bruhl – was a professor in Paris. Both were highly learned, humanistically inclined and politically liberal; they respected one another, yet they did not seem to agree about the matter of the person.

Lévy-Bruhl had begun his career as a philosopher of ethics. His doctoral thesis focused on the legal concept of responsibility. He was struck by the fact that responsibility first arose between persons not as a law, but as an emotion – a deep-seated feeling. He argued that co-responsibility implies a bond between persons grounded less in reason than in the conditions of their emergence as persons. As children, individuals do not emerge out of nothing, but through deep engagement with prior persons – their caregivers. Thus, moral responsibility could not have arisen from adherence to norms or rules; rather, norms and rules emerged from the sense of responsibility that humans acquire as they become persons.

This led him to question how we become thinking beings. Do all humans, after all, think in the same way? He began reading the increasingly sophisticated ethnographic accounts emerging from Australia, Africa, Asia and South America, and was deeply influenced by an extended trip to China. He was an empirical realist, but also a personalist – that is, he accorded primacy to the person as such, refusing to subsume the individual into the group. In this respect, he was not persuaded by the arguments of the great sociologist Émile Durkheim concerning the exceptional status of the ‘sacred’ or the special powers of ‘collective consciousness’. Lévy-Bruhl soon arrived at a striking conclusion: in their everyday practices and especially in their ritual actions, the so-called ‘primitive’ peoples studied by ethnographers did not appear to conform to the norms of logic that had been regarded as universally valid since the time of Aristotle.

As a friend of his put it, Lévy-Bruhl discovered that such peoples are characterised by ‘a mystical mentality – full of the “supernatural in nature” and prelogic, of a different kind than ours’. Indeed, the basic principles of Aristotelian logic that continue to guide scientific thinking – underpinning modern technological development – seemed to be ignored by premodern peoples. Aristotle’s law of the excluded middle (p or not-p) did not appear to apply to their ‘mystical’ modes of thought, both because they tended to think in terms of concrete objects rather than abstractions, and because they exhibited what Lévy-Bruhl termed ‘participation’…

[de Pina Cabral traces the development of Lévy-Bruhl’s thought, starting with Plato’s concept of methexis; elaborates on Lévy-Bruhl’s ideas; and traces the advances in cognitive science and developmental psychology that support them…]

… the very experience of personhood – that is, the sense that I am myself – is not ‘individual’, since its emergence presupposes a prior condition of being-with others. The self arises from a sharing of being with others, from having been part of those who are close to us. One does not emerge as an addition to society, but rather as a partial separation from the participations that initially constituted one’s being.

As I become a person, I learn to relate to myself as an other; I transcend my immediate position in the world. Without this, I would not be able to speak a language, since the use of pronouns presupposes reflexive thought. Thus, as Lévy-Bruhl had already insisted in his notebooks, participation precedes the person. Intersubjectivity is not the meeting of already constituted subjects, but the ground from which subjectivity emerges. Participation, therefore, may be understood as the constitutive tension between the singular and the plural in the formation of the person in the world. In 1935, the great phenomenologist Edmund Husserl expressed this insight clearly in a letter to Lévy-Bruhl where he thanked him for his ideas on participation:

Saying ‘I’ and ‘we’, [persons] find themselves as members of families, associations, [socialities], as living ‘together’, exerting an influence on and suffering from their world – the world that has sense and reality for them, through their intentional life, their experiencing, thinking, [and] valuing.

In acting and being acted upon together in human company during the first year of life, children become ‘we’ at the same time as they become ‘I’, which means that persons are always, ambivalently, both ‘I’ and ‘we’. Participation and transcendence will remain sources of theoretical perplexity for as long as the ‘we’ is approached as a categorical matter – a question of ‘identity’ – rather than as the presence and activity of living persons in dynamic interaction with the world and with one another.

By contrast, once we accept that personhood is the outcome of a process – the encounter between the embodied capacities of human beings and the historically constituted world that surrounds them – participation loses its mystery. As Lévy-Bruhl put it in one of his final notes: ‘The impossibility for the individual to separate within himself what would be properly him and what he participates in in order to exist …’ Participation, therefore, is the ground upon which everyday social interaction is constituted. The ‘mystical’ (or transcendental) potential within each of us – that which animates the symbolic life of groups – is part of the very process through which each of us becomes ourselves…

How does one become a person? “We” before “I”: “To be is to participate,” from @aeon.co.

A (if not the) next question: how does personhood emerge when the formative interactions are increasingly mediated/attenuated by technology?

* Shakespeare, All’s Well That Ends Well, Act 4, Scene 3

###

As we get together, we might send behaviorist birthday greetings to a man whose work focused on how one might train the “persons” who emerge: Kenneth Spence; he was born on this date in 1907. A psychologist, he worked to construct a comprehensive theory of behavior to encompass conditioning and other simple forms of learning and behavior modification.

Spence attempted to establish a precise mathematical formulation to describe the acquisition of learned behavior, measuring simple learned responses (e.g., salivating in anticipation of eating). Much of his research focused on classically conditioned, easily measured eye-blinking behavior in relation to anxiety and other factors.
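The shape Spence was after is easy to convey: in the Hull-Spence tradition, habit strength grows with each reinforced trial but with diminishing gains, a negatively accelerated curve. Below is a minimal Python sketch of that idea; it is a toy model in the spirit of that tradition, not Spence’s actual equations, and the learning rate is a made-up illustrative value.

```python
# Toy model of a negatively accelerated learning curve, in the spirit of
# the Hull-Spence tradition (NOT Spence's actual formulation): each
# reinforced trial closes a fixed fraction of the remaining gap to asymptote.

def habit_strength(trials: int, rate: float = 0.2) -> list[float]:
    """Illustrative update H <- H + rate * (1 - H); `rate` is hypothetical."""
    h = 0.0
    curve = [h]
    for _ in range(trials):
        h += rate * (1.0 - h)  # diminishing increments as H approaches 1
        curve.append(h)
    return curve

if __name__ == "__main__":
    for n, h in enumerate(habit_strength(10)):
        print(f"trial {n:2d}: habit strength = {h:.3f}")
```

Run it and the increments shrink trial by trial, which is the signature Spence and his collaborators tried to measure in conditioning experiments.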

One of the leading theorists of his time, Spence was the most cited psychologist in the 14 most influential psychology journals in the last six years of his life (1962–1967). A Review of General Psychology survey, published in 2002, ranked Spence as the 62nd most cited psychologist of the 20th century.

source


“Always look on the bright side of life”*…

The estimable economic historian Louis Hyman has been engaged in an ongoing “friendly debate” with his equally estimable friend and Johns Hopkins colleague Rama Chellappa on “what AI means”…

… As I see this debate, this question of our age, there are two main questions that history can shed some light on.

  1. Is AI a complement or a substitute for labor? That is, will it increase the demand for workers and their productivity, or decrease them?
  2. Will AI be controlled by the few or be accessible to the many?

A Complement or a Substitute?

Consider some of the most important technologies of the past 200 years.

When I am asked about what automation might look like, I inevitably discuss agriculture. Roughly all of our ancestors were farmers and approximately none of us today are. Yet we still eat bread made from wheat. That shift is possible because of automation.

The mechanical thresher, used to process wheat, was a substitute for the most backbreaking work of the harvest. But it also enabled more land to be cultivated, and that land was cultivated more efficiently, allowing for greater harvests. Mechanization of the farm, like the thresher, turned the American Midwest into the breadbasket of the world.

Those displaced farmers found work on railroads, moving all that wheat. And those jobs, according to people at the time, were a kind of liberation from the raw animal labor of threshing. On net, mechanization created demand for more workers at better wages, in work more fit for people than beasts. Those who remained farmers found other, higher-value work to do. On a farm, there is always more work to do.

The failure, then and now, is to think farmers were only threshers. Threshing was one part of their jobs. Today, most people’s work is likewise a bundle of tasks. Workers, then as now, can focus on the parts of their job that are of higher value. And in a new economy, new tasks in new industries will be created. Many of the jobs that we do today (web designer, UI expert) were simply unimaginable in 1850. That is a good thing.

Consider now the assembly line. I’m sure you all know about the staggering increases in productivity that come from the division of labor. If you took my class in industrial history, you would learn deeply about the story of the automobile. With the assembly line, and no other change in technology, car assembly went from 12 and a half hours to about 30 minutes (once they worked out the kinks), a roughly 25-fold increase in throughput. Did this reduce the demand for workers? No. It reduced the price of cars. And that increased the demand for workers, who eventually could demand even higher wages through unionization.

It is important here to realize that better tools don’t lower our pay; they generally raise it. Why? Because the tool, without the person, is useless. Even for today’s most cutting-edge AIs, that is true. It can code, but only what I imagine it to code. It can draw, but only what I imagine it to draw. That is as true for AIs as it was for the thresher.

So, I would offer that AI will create more growth, more abundance. In the long run, all growth comes from higher productivity.

I would add one more piece to this story. Economic inequality has worsened since roughly 1970 – that is, not in the industrial era, but in the digital era. I have argued elsewhere that this happened because for decades we did not use computers as tools of automation but as glorified typewriters (and then as televisions). Our productivity did not increase enough to justify the expense of computers. Economists have debated for decades the lack of productivity growth that came with the “digital age” of computing, but the answer is simple: we didn’t use them as computers. Now we can.

For the first time now, normal people with their normal problems can use their computers to solve and automate those problems. AI can write code. AI can automate their tedium. The digital age did not bring any gains because it had not yet arrived. We were living through the last gasp of the industrial economy.

It is now here.

This technology will unleash unimaginable productivity gains. It will level the playing field between coders and the rest of us. Coders will lose their jobs, to be sure, but for the rest of us, the bundle of workplace tasks will become much better.

And truthfully, the demand for real computer scientists will probably increase in the era of vibe-coding. Computer science itself is a bundle of skills, of which coding is just one. The more important skill – software and data architecture – will only increase in demand as the usefulness of software expands…

[Hyman goes on to explore the dangers of monopolization (which, for reasons he explains, he believes are overstated); the future of software (which, he believes, will skew to open-source); and of hardware (which, he believes, will not be a bottleneck). He concludes…]

… Put together, we come to a very different picture of what the digital age will be. The industrial age required massive investments to build the factories to make the products that were in demand. In the digital age, in contrast, the factories that build digital products will be made by the AI on your laptop. That is not inequality. That is equality.

The physical products of the Fordist industrial age were made for the mass market. In contrast, the digital products of the post-Fordist digital age will be long-tail products. I don’t need to make mass-market products; I can make them for a small niche, or just for myself.

Rather than fostering inequality, AI, then, is a great equalizer. To make products for a global market you don’t need a billion-dollar factory. You just need a laptop. That is astonishing.

That said, it will not be all sunshine and rainbows. Will AI solve the inequities of capitalism or its reliance on externalities as a source of primitive accumulation? Probably not.

But at the same time, AI is not a normal technology, in that it has the potential to radically undermine many of the tendencies to concentrate capital that we have seen in the industrial age. We have been automated out of work before – that is nothing new – but automation has always concentrated capital in the hands of the few. For the first time, there is potentially an alternative path forward.

AI will bring the digital age out of the hands of the coders. AI will not widen the gap—it will bridge it. Its ubiquity will mean that AI will be a tool that nearly all of us will be able to use in our daily work, which will make ordinary people more productive and prosperous…

Eminently worth reading in full: “Hooray! Post-Fordism Is Finally Here!”

Even as Hyman’s message is reassuring in the context of the flood of jeremiads in which we’re awash, it’s worth remembering that eerily similar points were made a couple of decades ago about the threat/promise of digital publishing/commerce. Given the then-current conditions and then-plausible futures, those predictions might have come true… but in the event, they didn’t pan out as projected. That said, things are changing, so maybe this time things are different?

(Image above: source)

* song (by Eric Idle) from Monty Python’s Life Of Brian

###

As we resolve to remain rosy, we might send productive birthday greetings to Andrew Meikle; he was born on this date in 1719. A Scottish millwright, he invented the threshing machine (for removing the husks from grain, as mentioned above). One of the key developments of the British Agricultural Revolution in the late 18th century, it was also one of the main causes of the Swing Riots – an 1830 uprising by English and Scottish agricultural workers protesting agricultural mechanization and harsh working conditions.

Threshing machine, invented by Andrew Meikle (source)

“Something that doesn’t actually exist can still be useful”*…

Gregory Barber on ultrafinitism, a philosophy that rejects the infinite. Ultrafinitism has long been dismissed as mathematical heresy, but it is also producing new insights in math and beyond…

Doron Zeilberger is a mathematician who believes that all things come to an end. That just as we are limited beings, so too does nature have boundaries — and therefore so do numbers. Look out the window, and where others see reality as a continuous expanse, flowing inexorably forward from moment to moment, Zeilberger sees a universe that ticks. It is a discrete machine. In the smooth motion of the world around him, he catches the subtle blur of a flip-book.

To Zeilberger, believing in infinity is like believing in God. It’s an alluring idea that flatters our intuitions and helps us make sense of all sorts of phenomena. But the problem is that we cannot truly observe infinity, and so we cannot truly say what it is. Equations define lines that carry on off the chalkboard, but to where? Proofs are littered with suggestive ellipses. These equations and proofs are, according to Zeilberger — a longtime professor at Rutgers University and a famed figure in combinatorics — both “very ugly” and false. It is “completely nonsense,” he said, huffing out each syllable in a husky voice that seemed worn out from making his point.

As a matter of practicality, infinity can be scrubbed out, he contends. “You don’t really need it.” Mathematicians can construct a form of calculus without infinity, for instance, cutting infinitesimal limits out of the picture entirely. Curves might look smooth, but they hide a fine-grit roughness; computers handle math just fine with a finite allowance of digits. (Zeilberger lists his own computer, which he named “Shalosh B. Ekhad,” as a collaborator on his papers.) With infinity eliminated, the only thing lost is mathematics that was “not worth doing at all,” Zeilberger said.
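For a concrete feel of what a limit-free fragment of calculus looks like, here is a minimal Python sketch: the derivative as a finite difference quotient and the integral as a finite sum, with a fixed step that is never sent to zero. The step sizes are arbitrary illustrative choices, not anything Zeilberger prescribes.

```python
import math

# Finite "calculus" on a fixed grid: no limits, no infinitesimals.

def derivative(f, x: float, h: float = 1e-5) -> float:
    """Symmetric difference quotient; h stays finite rather than tending to 0."""
    return (f(x + h) - f(x - h)) / (2 * h)

def integral(f, a: float, b: float, n: int = 10_000) -> float:
    """Midpoint Riemann sum with finitely many rectangles."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

if __name__ == "__main__":
    print(derivative(math.sin, 0.0))         # ~1.0 (i.e., cos 0)
    print(integral(math.sin, 0.0, math.pi))  # ~2.0
```

This is, of course, how computers do calculus anyway – with a finite allowance of digits – which is part of the ultrafinitist’s point.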

Most mathematicians would say just the opposite — that it’s Zeilberger who spews complete nonsense. Not just because infinity is so useful and so natural to our descriptions of the universe, but because treating sets of numbers (like the integers) as actual, infinite objects is at the very core of mathematics, embedded in its most fundamental rules and assumptions.

At the very least, even if mathematicians don’t want to think about infinity as an actual entity, they acknowledge that sequences, shapes, and other mathematical objects have the potential to grow indefinitely. Two parallel lines can in theory go on forever; another number can always be added to the end of the number line.

Zeilberger disagrees. To him, what matters is not whether something is possible in principle, but whether it is actually feasible. What this means, in practice, is that not only is infinity suspect, but extremely large numbers are as well. Consider “Skewes’ number,” e^(e^(e^79)). This is an exceptionally large number, and no one has ever been able to write it out in decimal form. So what can we really say about it? Is it an integer? Is it prime? Can we find such a number anywhere in nature? Could we ever write it down? Perhaps, then, it is not a number at all.
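To see why no one could ever write it down, one can work with logarithms alone, never touching the number itself. The number of decimal digits of N is roughly log10(N), so the digit count of e^(e^(e^79)) is about e^(e^79) · log10(e), and the logarithm of that digit count is about e^79 · log10(e). A quick Python sketch:

```python
import math

# Skewes' number is e^(e^(e^79)): far too large to evaluate directly,
# but its size can be estimated through logarithms alone.

e79 = math.exp(79)  # ~2.0e34, the innermost exponent; an ordinary float

# digits(N) ~ log10(N), so log10(digits of Skewes' number) ~ e^79 * log10(e)
log10_digit_count = e79 * math.log10(math.e)
print(f"log10(digit count) ~ {log10_digit_count:.3e}")  # ~8.9e33
# Even the COUNT of digits is itself a number with roughly 9 * 10^33 digits;
# no physical medium could hold the decimal expansion.
```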

This raises obvious questions, such as where, exactly, we will find the end point. Zeilberger can’t say. Nobody can. Which is the first reason that many dismiss his philosophy, known as ultrafinitism. “When you first pitch the idea of ultrafinitism to somebody, it sounds like quackery — like ‘I think there’s a largest number’ or something,” said Justin Clarke-Doane, a philosopher at Columbia University.

“A lot of mathematicians just find the whole proposal preposterous,” said Joel David Hamkins, a set theorist at the University of Notre Dame. Ultrafinitism is not polite talk at a mathematical society dinner. Few (one might say an ultrafinite number) work on it. Fewer still are card-carrying members, like Zeilberger, willing to shout their views out into the void. That’s not just because ultrafinitism is contrarian, but because it advocates for a mathematics that is fundamentally smaller, one where certain important questions can no longer be asked.

And yet it gives Hamkins and others a good deal to think about. From one angle, ultrafinitism can be seen as a more realistic mathematics. It is math that better reflects the limits of what people can create and verify; it may even better reflect the physical universe. While we might be inclined to think of space and time as eternally expansive and divisible, the ultrafinitist would argue that these are assumptions that science has increasingly brought into question — much as, Zeilberger might say, science brought doubt to God’s doorstep.

“The world that we’re describing needs to be honest through and through,” said Clarke-Doane, who in April 2025 convened a rare gathering of experts to explore ultrafinitist ideas. “If there might only be finitely many things, then we’d better also be using a math that doesn’t just assume that there are infinitely many things at the get-go.” To him, “it sure seems like that should be part of the menu in the philosophy of math.”

For mathematicians to take it seriously, though, ultrafinitists first need to agree on what they’re talking about — to turn arguments that sound like “bluster,” as Hamkins puts it, into an official theory. Mathematics is steeped in formal systems and common frameworks. Ultrafinitism, meanwhile, lacks such structure.

It is one thing to tackle problems piecemeal. It is quite another to rewrite the logical foundations of mathematics itself. “I don’t think the reason ultrafinitism has been dismissed is that people have good arguments against it,” Clarke-Doane said. “The feeling is that, oh, well, it’s hopeless.”

That’s a problem that some ultrafinitists are still trying to address.

Zeilberger, meanwhile, is prepared to abandon mathematical ideals in favor of a mathematics that’s inherently messy — just like the world is. He is less a man of foundational theories than a man of opinions, of which he lists 195 on his website. “I cannot be a tenured professor without doing this crackpot stuff,” he said. But one day, he added, mathematicians will look back and see that this crackpot, like those of yore who questioned gods and superstitions, was right. “Luckily, heretics are no longer burned at the stake.”…

Read on for the history of ultrafinitism, the critical dialogue surrounding it, and its implications: “What Can We Gain by Losing Infinity?” from @gregbarber.bsky.social in @quantamagazine.bsky.social.

* Ian Stewart (whose point was somewhat different from Zeilberger’s :-), Infinity: A Very Short Introduction

###

As we engage the endless, we might spare a thought for a man whose work touched on the infinitesimal, Isaac Barrow; he died on this date in 1677. A theologian and mathematician, he played a key role in the development of infinitesimal calculus (in particular, giving an early proof of the fundamental theorem of calculus). Barrow was the inaugural holder of the prestigious Lucasian Professorship of Mathematics at the University of Cambridge, a post later held by his student, Isaac Newton (who, of course, shares primary credit for the development of calculus with Gottfried Wilhelm Leibniz).

source

“The future is already here — it’s just not very evenly distributed”*…

… nor, perhaps, as widely read as it should be. “Urubos” is here to help…

The Extrapolated Futures Archive is a reverse-lookup for speculative fiction. Describe a situation you are facing, and find the SF stories that already worked through the implications.

The catalog connects stories (novels, novellas, short stories, films) to the speculative ideas they explore: thought experiments about technology, governance, biology, society, and more. Every idea is tagged with domains, scenario types, and outcome types so you can filter by the kind of future you are thinking about.

How to use it:

  • Search by title, author, synopsis keywords, or idea descriptions
  • Filter by domain (AI, biotech, climate, space, governance…), scenario type, outcome, decade, or series
  • Browse ideas to find transferable thought experiments, then follow links to the stories that explore them
  • Browse stories to see what speculative ideas a particular work contains
  • Book Club discussions (marked with 📖) offer section-by-section roundtable analyses by AI personas modeled on SF authors
  • What-If Query lets you describe a real-world scenario in plain text and get ranked matching ideas

The archive is designed for decision-makers in government, industry, and NGOs who want to widen their thinking by surfacing fictional precedents for novel real-world challenges…

Over 275 ideas, which cluster into 20 different “domains,” explored in over 1,900 stories, via over 3,500 links…
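Structurally, that is a many-to-many catalog: ideas carry facet tags (domain, scenario type, outcome type), and stories link to the ideas they explore. Here is a minimal Python sketch of such a reverse lookup; the field names and the sample entry are purely hypothetical, not the archive’s actual schema or data.

```python
from dataclasses import dataclass, field

# Hypothetical model of a tagged reverse-lookup catalog (NOT the archive's
# real schema): ideas carry facet tags; stories link to ideas.

@dataclass
class Idea:
    summary: str
    domains: set[str]   # e.g. {"AI", "governance"}
    scenario: str       # e.g. "runaway tech"
    outcome: str        # e.g. "cautionary"

@dataclass
class Story:
    title: str
    author: str
    ideas: list[Idea] = field(default_factory=list)

def lookup(stories: list[Story], domain: str, outcome: str | None = None) -> list[str]:
    """Titles of stories exploring at least one idea matching the given facets."""
    return [s.title for s in stories
            if any(domain in i.domains and (outcome is None or i.outcome == outcome)
                   for i in s.ideas)]

# Illustrative entry and query:
hal = Idea("machine goals diverge from its makers'", {"AI"}, "runaway tech", "cautionary")
catalog = [Story("2001: A Space Odyssey", "Arthur C. Clarke", [hal])]
print(lookup(catalog, domain="AI", outcome="cautionary"))  # ['2001: A Space Odyssey']
```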

Mapping real-world scenarios to the science fiction stories that explored them first: “Extrapolated Futures Archive”

* William Gibson

###

As we ponder prescience, we might spare a thought for Charles Hoy Fort, the prolific chronicler of paranormal phenomena; he died on this date in 1932. Fort collected accounts of frogs and other strange objects raining from the sky, UFOs, ghosts, spontaneous human combustion, stigmata, psychic abilities, and the like, publishing four collections of weird tales and anomalies during his lifetime: Book of the Damned (1919), New Lands (1923), Lo! (1931), and Wild Talents (1932). So influential was Fort among fellow questers that his name has become an adjective, “Fortean,” often applied to unexplained events… The Truth is Out There…

source

“The most beautiful experience we can have is the mysterious”*…

Henri Matisse, View of Notre Dame, 1914, oil on canvas, 58 x 37 ⅛ in, via Wikimedia Commons. Public domain.

Eminent art critic and historian Hal Foster has started what will be a four-part series in The Paris Review on looking at – and seeing – art…

Many of us look at art in the company of others; I have done so with a close friend, off and on, for five decades. We meet at a museum, wander around, settle on a painting (or, rather, it settles on us), look, talk, look more, talk more. We attend to the work and to each other; we enter its world together. Only recently and rarely have we written up our reactions, which we do individually. A testament to our friendship, this writing is also a tribute to the art, to the discursivity that informs it and the sociability that it allows. 

Paintings call out to us in myriad ways. My friend and I are most drawn to pictures that are reflexive about looking, that anticipate it, that sharpen it, that alter our habits of seeing. This may be a Modernist criterion, but it hardly disqualifies older art; we have ranged as far back as Early Netherlandish painting. In this selection, though, I focus on pictures that date from the past hundred and fifty years. (For better or worse, that’s also my academic field.)  

My aim in this exercise isn’t to tease out context, which is almost too present in wall texts today. Immediacy may be a mirage, but I try to come to my chosen works as directly as possible. It’s not that I ignore the texts on the walls; I just don’t get stuck there. I don’t pretend to see with a “period eye,” as Michael Baxandall called the attempt to perceive as historical viewers may have. Contextual information may often be necessary, but I keep it at a useful minimum. And though I sometimes get speculative, that’s part of the fun. In fact, one purpose of these studies is to be loosened from my scholarly superego (which isn’t very strong, in any case). I want to demystify the viewing of art a little, not to deskill it exactly, but to suggest that anyone can do it. Ignorant Art History is a big tent.

Looking at a painting is a welcome respite from scanning a screen. In that sense, this exercise is reactive: I labor in the small cottage industry of attention that has sprouted up in the cracks of the massive complex of distraction all around us. A phenomenological turn often occurs at times of intensive mediation, but the point is not simply to have our perceptions mirrored back to us. T. J. Clark has put the aim nicely: “When I am in front of a picture the thing I most want is to enter the picture’s world: it is the possibility of doing so that makes pictures worth looking at for me.” To look at a painting is also to exit our world for a while, and then to return to it cast in a different—distant—light. The time travel is often wonderful, and almost free… 

– “The Ignorant Art Historian: An Introduction”

The first of his short essays, on the Matisse pictured above, just dropped…

… As we approach this painting, we have little idea of what it depicts, or whether it depicts anything at all. A washy blue covers the entire surface unevenly, and its space is traversed by several black vectors. A vertical line stretches the length of the canvas on the far right, where it intersects with two horizontal lines that cut across the center of the picture. In the lower half of the painting, three diagonal lines run roughly parallel to one another, also toward the right.

The main motif floats in the top third of the painting. Outlined heavily in black, its interior is made up of the same blue as elsewhere except for one white blotch and a few black planes, scratched to reveal the white underneath. Three thin, white planes also appear in the interior, each crossed with a horizontal black stripe; the central plane divides the space in two. 

All this is hard to sort out, and two more pieces on the right—a green blob beside a black one—only add to the puzzle. It is a complicated painting, but its complication is born of simplicity. Completed in 1914, at the beginning of World War I, it is an austere work in an austere time.

The title offers a kind of lifeline: View of Notre Dame. But what kind of view and from where? And what are all the black lines? Neither abstract nor representational, the painting requires a shift in our way of looking: its elements are less images of things than signs for them. 

We know that Notre-Dame sits on the eastern end of the Île de la Cité in Paris. So the three diagonals might signify the quai along the Left Bank, the low path alongside the Seine, and the great river. The two horizontal lines then read as a bridge over the Seine, and the slight curve underneath them as its arched support. Finally, the long vertical line serves as the near edge of the quai, or perhaps of the very building from which the view is taken. The angles suggest that we look down on the scene from a Left Bank apartment several floors up. The overall blue signifies air and water where that seems appropriate, and anything else (or nothing at all) where it does not.

How does the squarish motif convey the famous cathedral? If the bisected shape suggests the two great towers, the white plane between them might evoke the rose window. Since we view the cathedral from the Left Bank, it appears turned away from us slightly, its south side more exposed. If the black areas register the sides of the building in deep shadow, the white ones might signify the play of light across the facade. And the blobs in green and black? The green could be a plant, and the black its shadow. 

The pieces don’t add up completely or neatly. But then signification is about signaling-just-enough rather than representing-in-full. Here, seeing is guesswork. It often is elsewhere, too; we just don’t acknowledge it. Sometimes a sign doesn’t signify and sometimes it suggests more than one thing. The diagonals evoke both the quai and the river; the black areas convey a material thing here and an immaterial shadow there. 

Around this time, Matisse kept a studio above the quai Saint-Michel. Might View of Notre Dame double as a view of the interior from which it was painted? In that case, the Paris cathedral is also a French window, with blue sky and white clouds seen in or through the glass; the green shrub is also a plant on the sill; the lines of the bridge are also the molding in the room; and—who knows?—the diagonals of the bank are also the easel on which this very painting was produced… 

– “The Ignorant Art Historian: View of Notre Dame”

The remaining three installments will drop weekly into May.

* “The most beautiful experience we can have is the mysterious. It is the fundamental emotion that stands at the cradle of true art and true science.” – Albert Einstein

###

As we appreciate art, we might recall that on this date in 1808, at the outbreak of the Peninsular War, the people of Madrid rose up in rebellion against French occupation. 

In 1814, Francisco de Goya memorialized the event in his painting The Second of May 1808.

source