(Roughly) Daily

“Always look on the bright side of life”*…

The estimable economic historian Louis Hyman has been engaged in an on-going “friendly debate” with his equally-estimable friend and Johns Hopkins colleague Rama Chellappa on “what AI means”…

… As I see this debate, this question of our age, there are two main questions that history can shed some light on.

  1. Is AI a complement or a substitute for labor? That is, will it increase demand for and the productivity of workers, or decrease it?
  2. Will AI be controlled by the few or be accessible to the many?

A Complement or a Substitute?

Consider some of the most important technologies of the past 200 years.

When I am asked about what automation might look like, I inevitably discuss agriculture. Roughly all of our ancestors were farmers and approximately none of us today are. Yet we still eat bread made from wheat. That shift is possible because of automation.

The mechanical thresher, used to process wheat, was a substitute for the most backbreaking work of the harvest. But it also enabled more land to be cultivated, and that land was cultivated more efficiently, allowing for greater harvests. Mechanization of the farm, like the thresher, turned the American Midwest into the breadbasket of the world.

Those displaced farmers found work on railroads, moving all that wheat. And those jobs, according to people at the time, were a kind of liberation from the raw animal labor of threshing. On net, mechanization created demand for more workers at better wages, in work more fit for people than beasts. Those who remained farmers found other, higher-value work to do. On a farm, there is always more work to do.

The failure, then and now, is to think farmers were only threshers. That was one part of their jobs. Today, our work, for most people, is also a bundle of tasks. Workers then and now could and can focus on parts of their job that are of higher value. And in a new economy, new tasks in new industries will be created. Many of the jobs that we do today (web designer, UI expert) were simply unimaginable in 1850. That is a good thing.

Consider now the assembly line. I’m sure you all know about the staggering increases in productivity that come from the division of labor. If you took my class in industrial history, you would learn the story of the automobile in depth. With the assembly line, and no other change in technology, car assembly went from 12 and a half hours to about 30 minutes (once they worked out the kinks). Did this reduce the demand for workers? No. It reduced the price of cars. And that increased the demand for workers, who eventually could demand even higher wages through unionization.

It is important here to realize that better tools don’t make us get paid worse. They generally make us get paid more. Why? Because the tool, without the person, is useless. Even for today’s most cutting-edge AIs, that is true. It can code, but it can only code what I imagine it to code. It can draw, but only what I imagine it to draw. That is true for AIs as it was true for the thresher.

So, I would offer that AI will create more growth, more abundance. In the long run, all growth comes from higher productivity.

I would add one more piece to this story. Economic inequality has worsened since roughly 1970. It has worsened, that is, not in the industrial era but in the digital era. I have argued elsewhere that this happened because for decades we did not use computers as tools of automation but as glorified typewriters (and then as televisions). Our productivity did not increase enough to justify the expense of computers. Economists have debated for decades the productivity gains that failed to arrive with the “digital age” of computing, but the answer is simple: we didn’t use them as computers. Now we can.

For the first time now, normal people with their normal problems can use their computers to solve and automate their problems. AI can write code. AI can automate their tedium. The digital age did not bring any gains because it had not yet arrived. We were living through the last gasp of the industrial economy.

It is now here.

This technology will unleash unimaginable productivity gains. It will level the playing field between coders and the rest of us. Coders will lose their jobs, to be sure, but for the rest of us, the bundle of workplace tasks will become much better.

And truthfully, the demand for real computer scientists will probably increase in the era of vibe-coding. Computer science itself is a bundle of skills, of which coding is just one. The more important skill – software and data architecture – will only increase in demand as the usefulness of software expands…

[Hyman goes on to explore the dangers of monopolization (which, for reasons he explains, he believes are overstated); the future of software (which, he believes, will skew to open-source); and of hardware (which, he believes, will not be a bottleneck). He concludes…]

… Put together we come to a very different picture of what the digital age will be. The industrial age required massive investments to build the factories to make the products that were in demand. In the digital age, in contrast, the factories to build digital products will be made by the AI on your laptop. That is not inequality. That is equality.

The physical products of the Fordist industrial age were made for the mass market. In contrast, the digital products of the post-Fordist digital age will be long-tail products. I don’t need to make mass market products; I can make them for a small niche, or just for myself.

Rather than fostering inequality, AI, then, is a great equalizer. To make products for a global market you don’t need a billion-dollar factory. You just need a laptop. That is astonishing.

That said, it will not be all sunshine and rainbows. Will AI solve the inequities of capitalism or its reliance on externalities as a source of primitive accumulation? Probably not.

But at the same time, AI is not a normal technology in that it has the potential to radically undermine many of the tendencies to concentrate capital that we have seen in the industrial age. We have been automated out of work before; that is nothing new. But automation has always concentrated capital in the hands of the few. For the first time, there is potentially an alternative path forward.

AI will bring the digital age out of the hands of the coders. AI will not widen the gap—it will bridge it. Its ubiquity will mean that AI will be a tool that nearly all of us will be able to use in our daily work, which will make ordinary people more productive and prosperous…

Eminently worth reading in full: “Hooray! Post-Fordism Is Finally Here!”

Even as Hyman’s message is reassuring in the context of the flood of jeremiads in which we’re awash, it’s worth remembering that eerily-similar points were made a couple of decades ago about the threat/promise of digital publishing/commerce. Given the then-current conditions and then-plausible futures, those predictions might have come true… but in the event, they didn’t pan out as projected. That said, things are changing, so maybe this time things are different?

(Image above: source)

* song (by Eric Idle) from Monty Python’s Life Of Brian

###

As we resolve to remain rosy, we might send productive birthday greetings to Andrew Meikle; he was born on this date in 1719. A Scottish millwright, he invented the threshing machine (for removing the husks from grain, as mentioned above). One of the key developments of the British Agricultural Revolution in the late 18th century, it was also one of the main causes of the Swing Riots— an 1830 uprising by English and Scottish agricultural workers protesting agricultural mechanization and harsh working conditions.

Threshing machine, invented by Andrew Meikle (source)

“Something that doesn’t actually exist can still be useful”*…

Gregory Barber on ultrafinitism, a philosophy that rejects the infinite. Ultrafinitism has long been dismissed as mathematical heresy, but it is also producing new insights in math and beyond…

Doron Zeilberger is a mathematician who believes that all things come to an end. That just as we are limited beings, so too does nature have boundaries — and therefore so do numbers. Look out the window, and where others see reality as a continuous expanse, flowing inexorably forward from moment to moment, Zeilberger sees a universe that ticks. It is a discrete machine. In the smooth motion of the world around him, he catches the subtle blur of a flip-book.

To Zeilberger, believing in infinity is like believing in God. It’s an alluring idea that flatters our intuitions and helps us make sense of all sorts of phenomena. But the problem is that we cannot truly observe infinity, and so we cannot truly say what it is. Equations define lines that carry on off the chalkboard, but to where? Proofs are littered with suggestive ellipses. These equations and proofs are, according to Zeilberger — a longtime professor at Rutgers University and a famed figure in combinatorics — both “very ugly” and false. It is “completely nonsense,” he said, huffing out each syllable in a husky voice that seemed worn out from making his point.

As a matter of practicality, infinity can be scrubbed out, he contends. “You don’t really need it.” Mathematicians can construct a form of calculus without infinity, for instance, cutting infinitesimal limits out of the picture entirely. Curves might look smooth, but they hide a fine-grit roughness; computers handle math just fine with a finite allowance of digits. (Zeilberger lists his own computer, which he named “Shalosh B. Ekhad,” as a collaborator on his papers.) With infinity eliminated, the only thing lost is mathematics that was “not worth doing at all,” Zeilberger said.
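One elementary flavor of what a limit-free calculus can look like in practice is plain finite differences: a derivative computed with a small but strictly finite step, with no appeal to infinitesimals. The function and step size below are illustrative choices of mine, a minimal sketch rather than Zeilberger’s actual formalism.

```python
# A limit-free "derivative": instead of letting h -> 0, pick a small but
# strictly finite step h and use the difference quotient directly.
def finite_diff(f, x, h=1e-6):
    """Forward difference quotient (f(x+h) - f(x)) / h with a fixed, finite h."""
    return (f(x + h) - f(x)) / h

# Example: the slope of x**2 at x = 3 comes out close to 6
# without any infinitesimal limit in sight.
slope = finite_diff(lambda x: x * x, 3.0)
print(slope)  # roughly 6.000001
```

This is, of course, exactly how computers already differentiate numerically: with a finite allowance of digits and a finite step, just as the excerpt notes.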

Most mathematicians would say just the opposite — that it’s Zeilberger who spews complete nonsense. Not just because infinity is so useful and so natural to our descriptions of the universe, but because treating sets of numbers (like the integers) as actual, infinite objects is at the very core of mathematics, embedded in its most fundamental rules and assumptions.

At the very least, even if mathematicians don’t want to think about infinity as an actual entity, they acknowledge that sequences, shapes, and other mathematical objects have the potential to grow indefinitely. Two parallel lines can in theory go on forever; another number can always be added to the end of the number line.

Zeilberger disagrees. To him, what matters is not whether something is possible in principle, but whether it is actually feasible. What this means, in practice, is that not only is infinity suspect, but extremely large numbers are as well. Consider “Skewes’ number,” e^(e^(e^79)). This is an exceptionally large number, and no one has ever been able to write it out in decimal form. So what can we really say about it? Is it an integer? Is it prime? Can we find such a number anywhere in nature? Could we ever write it down? Perhaps, then, it is not a number at all.
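To get a feel for the scale involved, iterated logarithms allow a quick back-of-the-envelope check (my illustration, not from the article): Skewes’ number overflows any machine float, but the logarithm of its logarithm is an ordinary number.

```python
import math

# Skewes' number N = e**(e**(e**79)) overflows any machine float, but its
# iterated logarithms are tame. Since log10(N) = e**(e**79) * log10(e),
#   log10(log10(N)) = e**79 * log10(e) + log10(log10(e)),
# and e**79 (about 2e34) fits comfortably in a double.
log10_log10_N = math.exp(79) * math.log10(math.e) + math.log10(math.log10(math.e))
print(f"{log10_log10_N:.3e}")  # roughly 8.852e+33

# Reading: the decimal-digit count of N is itself a number with about
# 8.9e33 digits -- so "writing N out" is hopeless in any physical universe.
```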

This raises obvious questions, such as where, exactly, we will find the end point. Zeilberger can’t say. Nobody can. Which is the first reason that many dismiss his philosophy, known as ultrafinitism. “When you first pitch the idea of ultrafinitism to somebody, it sounds like quackery — like ‘I think there’s a largest number’ or something,” said Justin Clarke-Doane, a philosopher at Columbia University.

“A lot of mathematicians just find the whole proposal preposterous,” said Joel David Hamkins, a set theorist at the University of Notre Dame. Ultrafinitism is not polite talk at a mathematical society dinner. Few (one might say an ultrafinite number) work on it. Fewer still are card-carrying members, like Zeilberger, willing to shout their views out into the void. That’s not just because ultrafinitism is contrarian, but because it advocates for a mathematics that is fundamentally smaller, one where certain important questions can no longer be asked.

And yet it gives Hamkins and others a good deal to think about. From one angle, ultrafinitism can be seen as a more realistic mathematics. It is math that better reflects the limits of what people can create and verify; it may even better reflect the physical universe. While we might be inclined to think of space and time as eternally expansive and divisible, the ultrafinitist would argue that these are assumptions that science has increasingly brought into question — much as, Zeilberger might say, science brought doubt to God’s doorstep.

“The world that we’re describing needs to be honest through and through,” said Clarke-Doane, who in April 2025 convened a rare gathering of experts to explore ultrafinitist ideas. “If there might only be finitely many things, then we’d better also be using a math that doesn’t just assume that there are infinitely many things at the get-go.” To him, “it sure seems like that should be part of the menu in the philosophy of math.”

For mathematicians to take it seriously, though, ultrafinitists first need to agree on what they’re talking about — to turn arguments that sound like “bluster,” as Hamkins puts it, into an official theory. Mathematics is steeped in formal systems and common frameworks. Ultrafinitism, meanwhile, lacks such structure.

It is one thing to tackle problems piecemeal. It is quite another to rewrite the logical foundations of mathematics itself. “I don’t think the reason ultrafinitism has been dismissed is that people have good arguments against it,” Clarke-Doane said. “The feeling is that, oh, well, it’s hopeless.”

That’s a problem that some ultrafinitists are still trying to address.

Zeilberger, meanwhile, is prepared to abandon mathematical ideals in favor of a mathematics that’s inherently messy — just like the world is. He is less a man of foundational theories than a man of opinions, of which he lists 195 on his website. “I cannot be a tenured professor without doing this crackpot stuff,” he said. But one day, he added, mathematicians will look back and see that this crackpot, like those of yore who questioned gods and superstitions, was right. “Luckily, heretics are no longer burned at the stake.”…

Read on for the history of ultrafinitism, the critical dialogue surrounding it, and its implications: “What Can We Gain by Losing Infinity?” from @gregbarber.bsky.social in @quantamagazine.bsky.social.

* Ian Stewart (whose point was somewhat different from Zeilberger’s :-), Infinity: A Very Short Introduction

###

As we engage the endless, we might spare a thought for a man whose work touched on the infinitesimal, Isaac Barrow; he died on this date in 1677. A theologian and mathematician, he played a key role in the development of infinitesimal calculus (in particular, for a proof of the fundamental theorem of calculus). Barrow was the inaugural holder of the prestigious Lucasian Professorship of Mathematics at the University of Cambridge, a post later held by his student, Isaac Newton (who, of course, shares primary credit for the development of calculus with Gottfried Wilhelm Leibniz).

source

“The future is already here — it’s just not very evenly distributed”*…

… nor, perhaps, as widely read as it should be. “Urubos” is here to help…

The Extrapolated Futures Archive is a reverse-lookup for speculative fiction. Describe a situation you are facing, and find the SF stories that already worked through the implications.

The catalog connects stories (novels, novellas, short stories, films) to the speculative ideas they explore: thought experiments about technology, governance, biology, society, and more. Every idea is tagged with domains, scenario types, and outcome types so you can filter by the kind of future you are thinking about.

How to use it:

  • Search by title, author, synopsis keywords, or idea descriptions
  • Filter by domain (AI, biotech, climate, space, governance…), scenario type, outcome, decade, or series
  • Browse ideas to find transferable thought experiments, then follow links to the stories that explore them
  • Browse stories to see what speculative ideas a particular work contains
  • Book Club discussions (marked with 📖) offer section-by-section roundtable analyses by AI personas modeled on SF authors
  • What-If Query (a separate page) lets you describe a real-world scenario in plain text and get ranked matching ideas

The archive is designed for decision-makers in government, industry, and NGOs who want to widen their thinking by surfacing fictional precedents for novel real-world challenges…

Over 275 ideas, which cluster into 20 different “domains,” explored in over 1,900 stories, via over 3,500 links…
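Mechanically, a catalog like this is a tagged index with filtered lookup. A minimal sketch, with entirely made-up entries and field names (the archive’s actual schema isn’t published here), might look like:

```python
# Hypothetical miniature of the archive: each story record carries the idea
# it explores plus domain and scenario tags, and lookup filters on them.
CATALOG = [
    {"title": "Example Novel A", "idea": "universal translators",
     "domain": "AI", "scenario": "first contact"},
    {"title": "Example Story B", "idea": "engineered pandemics",
     "domain": "biotech", "scenario": "containment failure"},
    {"title": "Example Film C", "idea": "algorithmic governance",
     "domain": "governance", "scenario": "soft authoritarianism"},
]

def lookup(domain=None, keyword=None):
    """Reverse lookup: filter stories by domain tag and/or idea keyword."""
    hits = CATALOG
    if domain:
        hits = [s for s in hits if s["domain"] == domain]
    if keyword:
        hits = [s for s in hits if keyword.lower() in s["idea"].lower()]
    return [s["title"] for s in hits]

print(lookup(domain="AI"))           # ['Example Novel A']
print(lookup(keyword="governance"))  # ['Example Film C']
```

The real site layers search, ranked what-if matching, and browsing on top, but the core is this kind of many-to-many mapping between stories and tagged ideas.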

Mapping real-world scenarios to the science fiction stories that explored them first: “Extrapolated Futures Archive”

* William Gibson

###

As we ponder prescience, we might spare a thought for Charles Hoy Fort, the prolific chronicler of paranormal phenomena; he died on this date in 1932.  Fort collected accounts of frogs and other strange objects raining from the sky, UFOs, ghosts, spontaneous human combustion, stigmata, psychic abilities, and the like, publishing four collections of weird tales and anomalies during his lifetime: Book of the Damned (1919), New Lands (1923), Lo! (1931), and Wild Talents (1932).  So influential was Fort among fellow-questers that his name has become an adjective, “Fortean,” often applied to unexplained events… The Truth is Out There…

source

“The most beautiful experience we can have is the mysterious”*…

Henri Matisse, View of Notre Dame, 1914, oil on canvas, 58 x 37 ⅛ in, via Wikimedia Commons. Public domain.

Eminent art critic and historian Hal Foster has started what will be a four-part series in The Paris Review on looking at– and seeing– art…

Many of us look at art in the company of others; I have done so with a close friend, off and on, for five decades. We meet at a museum, wander around, settle on a painting (or, rather, it settles on us), look, talk, look more, talk more. We attend to the work and to each other; we enter its world together. Only recently and rarely have we written up our reactions, which we do individually. A testament to our friendship, this writing is also a tribute to the art, to the discursivity that informs it and the sociability that it allows. 

Paintings call out to us in myriad ways. My friend and I are most drawn to pictures that are reflexive about looking, that anticipate it, that sharpen it, that alter our habits of seeing. This may be a Modernist criterion, but it hardly disqualifies older art; we have ranged as far back as Early Netherlandish painting. In this selection, though, I focus on pictures that date from the past hundred and fifty years. (For better or worse, that’s also my academic field.)  

My aim in this exercise isn’t to tease out context, which is almost too present in wall texts today. Immediacy may be a mirage, but I try to come to my chosen works as directly as possible. It’s not that I ignore the texts on the walls; I just don’t get stuck there. I don’t pretend to see with a “period eye,” as Michael Baxandall called the attempt to perceive as historical viewers may have. Contextual information may often be necessary, but I keep it at a useful minimum. And though I sometimes get speculative, that’s part of the fun. In fact, one purpose of these studies is to be loosened from my scholarly superego (which isn’t very strong, in any case). I want to demystify the viewing of art a little, not to deskill it exactly, but to suggest that anyone can do it. Ignorant Art History is a big tent.

Looking at a painting is a welcome respite from scanning a screen. In that sense, this exercise is reactive: I labor in the small cottage industry of attention that has sprouted up in the cracks of the massive complex of distraction all around us. A phenomenological turn often occurs at times of intensive mediation, but the point is not simply to have our perceptions mirrored back to us. T. J. Clark has put the aim nicely: “When I am in front of a picture the thing I most want is to enter the picture’s world: it is the possibility of doing so that makes pictures worth looking at for me.” To look at a painting is also to exit our world for a while, and then to return to it cast in a different—distant—light. The time travel is often wonderful, and almost free… 

– “The Ignorant Art Historian: An Introduction”

The first of his short essays, on the Matisse pictured above, just dropped…

… As we approach this painting, we have little idea of what it depicts, or whether it depicts anything at all. A washy blue covers the entire surface unevenly, and its space is traversed by several black vectors. A vertical line stretches the length of the canvas on the far right, where it intersects with two horizontal lines that cut across the center of the picture. In the lower half of the painting, three diagonal lines run roughly parallel to one another, also toward the right.

The main motif floats in the top third of the painting. Outlined heavily in black, its interior is made up of the same blue as elsewhere except for one white blotch and a few black planes, scratched to reveal the white underneath. Three thin, white planes also appear in the interior, each crossed with a horizontal black stripe; the central plane divides the space in two. 

All this is hard to sort out, and two more pieces on the right—a green blob beside a black one—only add to the puzzle. It is a complicated painting, but its complication is borne of simplicity. Completed in 1914, at the beginning of World War I, it is an austere work in an austere time. 

The title offers a kind of lifeline: View of Notre Dame. But what kind of view and from where? And what are all the black lines? Neither abstract nor representational, the painting requires a shift in our way of looking: its elements are less images of things than signs for them. 

We know that the Notre-Dame sits on the western end of the Île de la Cité in Paris. So the three diagonals might signify the quai along the Left Bank, the low path alongside the Seine, and the great river. The two horizontal lines then read as a bridge over the Seine, and the slight curve underneath them as its arched support. Finally, the long vertical line serves as the near edge of the quai, or perhaps of the very building from which the view is taken. The angles suggest that we look down on the scene from a Left Bank apartment several floors up. The overall blue signifies air and water where that seems appropriate, and anything else (or nothing at all) where it does not. 

How does the squarish motif convey the famous cathedral? If the bisected shape suggests the two great towers, the white plane between them might evoke the rose window. Since we view the cathedral from the Left Bank, it appears turned away from us slightly, its south side more exposed. If the black areas register the sides of the building in deep shadow, the white ones might signify the play of light across the facade. And the blobs in green and black? The green could be a plant, and the black its shadow. 

The pieces don’t add up completely or neatly. But then signification is about signaling-just-enough rather than representing-in-full. Here, seeing is guesswork. It often is elsewhere, too; we just don’t acknowledge it. Sometimes a sign doesn’t signify and sometimes it suggests more than one thing. The diagonals evoke both the quai and the river; the black areas convey a material thing here and an immaterial shadow there. 

Around this time, Matisse kept a studio above the quai Saint-Michel. Might View of Notre Dame double as a view of the interior from which it was painted? In that case, the Paris cathedral is also a French window, with blue sky and white clouds seen in or through the glass; the green shrub is also a plant on the sill; the lines of the bridge are also the molding in the room; and—who knows?—the diagonals of the bank are also the easel on which this very painting was produced… 

– “The Ignorant Art Historian: View of Notre Dame”

The remaining three installments will drop weekly into May.

* “The most beautiful experience we can have is the mysterious. It is the fundamental emotion that stands at the cradle of true art and true science.” – Albert Einstein

###

As we appreciate art, we might recall that on this date in 1808, at the outbreak of the Peninsular War, the people of Madrid rose up in rebellion against French occupation. 

In 1814, Francisco de Goya memorialized the event in his painting The Second of May 1808.

source

“Behind it all is surely an idea so simple, so beautiful, that when we grasp it—in a decade, a century, or a millennium—we will all say to each other, how could it have been otherwise? How could we have been so stupid?”*…

The album cover of “When the Dust Settles” by STS9

From Plato on (if not, indeed, from even earlier), we’ve struggled to resolve the “shadows on the cave wall” into ever-sharper understandings of the reality “behind” those shadows. The quantum of that effort is the “idea.”

But what is an idea? “Roger’s Bacon” offers a provocative answer…

1. Ideas are alien life forms with an agency and intelligence independent of any mind or substrate which they inhabit. When we say that an idea (a story, a joke, a theory, a work of art) has “taken on a life of its own”, our language betrays an intuitive understanding that science has not yet grasped.

They are as you and I—eating, loving, mating, evolving, dying.

2. We do not create or “have” ideas—if anything is doing the creating or having, it is the ideas themselves.

There are times when we recognize this truth (when an idea “magically” pops into your head from “out of nowhere”), but too often it is obscured by the post-hoc just-so stories we tell ourselves about how I, the Great Thinker, Precious Me, was able to “come up with” the brilliant idea (e.g. I combined two other ideas, I was inspired by a memory, an event, another idea, etc.). Whatever explanation you give, the experience is always the same—the idea simply arrives. All else is confabulation.

Why then does an idea enter one mind and not another? Ideas act as all organisms do—they seek habitats (i.e. minds) that can provide them with the space and resources (i.e. mental runtime, ideas eat the energy that enables action potentials) needed to survive and reproduce (i.e. create new idea-children). Just as some ecosystems are more diverse, abundant, and resilient, some minds are as well. What we call creativity is the quality of possessing a healthy mental ecosystem, one that offers fertile ground for a plenitude of ideas. Ideas may also be attracted to particular minds for more specific reasons—for example, an idea may see that other related ideas (members of the same genera or family) have found the mind to be especially suitable or perhaps the mind is in dire need of a certain idea and therefore will offer it ample resources upon arrival. Some minds (e.g. those that are dominated by one idea or set of ideas, perhaps a religious or political ideology) provide poor habitat and are avoided by all but the most desperate ideas (e.g. irrational and harmful ideas that can’t find a home elsewhere—this is why conspiracy theories and hateful ideologies tend to congregate in the same minds).

3. Dear reader, I ask you to conduct an experiment.

Create something, anything—write a line of poetry, doodle an image, hum a melody, take some objects near you and arrange them into a sculpture. Now destroy what you created—physically if you can, but also mentally. Forget it completely.

The world is changed. You are changed. The idea will return in one form or another, in your mind or another.

4. Highly creative people, those we might call “geniuses”, sometimes have the intuition that ideas are autonomous living entities. The standard scientific explanation would be that creativity is positively associated with certain mental characteristics (such as theory of mind and schizotypy) that make someone prone to the intuition that ideas possess a degree of autonomous agency, that they are independently alive in some sense. However, another interpretation is possible: ideas do not like to be treated as if they were lifeless, inanimate objects (would you?) and therefore they gravitate towards minds that treat them with the respect and dignity they deserve…

[“RB” shares the fascinating insights of Philip K. Dick, David Lynch, Terence McKenna, and David Abram…]

… 5. Our relation to ideas is an inextricable symbiosis, like that between plant and pollinator, a mutualism in which neither can survive without the other. At the dawn of civilization, a covenant was made between humans and these alien entities which inhabit our minds—honor and respect each other and all will flourish beyond their wildest dreams.

Ideas will help us if we help them. This is why the growth of knowledge depends on certain moral values—freedom, openness, honesty, courage, tolerance, and humility, amongst others. Those cultures that respect these values provide ideal habitat for ideas, and where ideas thrive and multiply, so do humans.

The converse is true as well. When ideas are kept secret or willfully distorted, we suffer. When ideas are regarded as slaves, as mere tools that can be wielded for their owner’s benefit, the end is near.

Our treatment of ideas is at the root of all that ails us. The remedy: worship ideas like Wisdom, Justice, Equality, Peace, and Love as if they were Gods (because in fact they are, something the ancients recognized that we have long since forgotten), and follow one simple rule.

Do unto ideas as you would have them do unto you.

Teach the children, and in one generation—a new world.

6. Perhaps you have wondered if I am being serious, if I truly believe that ideas are alive in a literal sense—“surely he is just playing with metaphor, an interesting thought experiment and some poetic license, but nothing more.” I assure you nothing could be further from the truth. I am under no illusions; as it stands, there is absolutely no shred of evidence for my hypothesis. I have it on nothing but faith and intuition that one day there will be a paradigm shift of Copernican proportions, a revolution that utterly transforms our understanding of Mind and Matter.

Ask yourself: does history not teach us that there are new forms of life still waiting to be discovered which will seem utterly unimaginable to us until some new technology brings them to light? Is it not hubris of the highest order to suppose that we, Modern Man, have finally reached the end of nature’s catalogue? Democritus proposed that the universe consists of tiny indivisible “atoms”; over 2000 years later he was proven correct. However, we still don’t understand the true nature of these atoms—might they too have a spark of consciousness? Is the idea that ideas are interdimensional endosymbiotic entities made of consciousness really so far-fetched? Yeah, maybe.

7. And this you shall know:

Ideas are Alive and You are Dead…

What is it like to be an idea? “Ideas are Alive and You are Dead,” @theseedsofscience.skystack.xyz via @mastroianni.bsky.social

* John Archibald Wheeler (and apposite the piece above, here)

###

As we ponder panpsychism, we might send sentient birthday greetings to a man whose passing we noted last month, and whose work wrestled in a way with these same issues: Pierre Teilhard de Chardin; he was born on this date in 1881. A Jesuit theologian, philosopher, geologist, and paleontologist, he conceived the idea of the Omega Point (a maximum level of complexity and consciousness towards which he believed the universe was evolving) and developed Vladimir Vernadsky’s concept of the noosphere (a planetary “sphere of reason,” the highest stage of biospheric development and of humankind’s rational activities).

Teilhard took part in the discovery of Peking Man, and wrote on the reconciliation of faith and evolutionary theory. His thinking on both these fronts was censored during his lifetime by the Catholic Church (in particular for its implications for “original sin”); but in 2009, the Church lifted its ban.

source