Posts Tagged ‘ontology’
“There are only two ways to live your life. One is as though nothing is a miracle. The other is as though everything is a miracle.”*…
… Indeed, the same might be said of life itself. David Krakauer and Chris Kempes of the Santa Fe Institute suggest that life is starting to look a lot less like an outcome of chemistry and physics, and more like a computational process…
… Today, doubts about conventional explanations of life are growing and a wave of new general theories has emerged to better define our origins. These suggest that life doesn’t only depend on amino acids, DNA, proteins and other forms of matter. Today, it can be digitally simulated, biologically synthesised or made from entirely different materials to those that allowed our evolutionary ancestors to flourish. These and other possibilities are inviting researchers to ask more fundamental questions: if the materials for life can radically change – like the materials for computation – what stays the same? Are there deeper laws or principles that make life possible?
Our planet appears to be exceptionally rare. Of the thousands that have been identified by astronomers, only one has shown any evidence of life. Earth is, in the words of Carl Sagan, a ‘lonely speck in the great enveloping cosmic dark.’ This apparent loneliness is an ongoing puzzle faced by scientists studying the origin and evolution of life: how is it possible that only one planet has shown incontrovertible evidence of life, even though the laws of physics are shared by all known planets, and the elements in the periodic table can be found across the Universe?
The answer, for many, is to accept that Earth really is as unique as it appears: the absence of life elsewhere in the Universe can be explained by accepting that our planet is physically and chemically unlike the many other planets we have formally identified. Only Earth, so the argument goes, produced the special material conditions conducive to our rare chemistry, and it did so around 4 billion years ago, when life first emerged.
In 1952, Stanley Miller and his supervisor Harold Urey provided the first experimental evidence for this idea through a series of experiments at the University of Chicago. The Miller-Urey experiment, as it became known, sought to recreate the atmospheric conditions of early Earth through laboratory equipment, and to test whether organic compounds (amino acids) could be created in a reconstructed inorganic environment. When their experiment succeeded, the emergence of life became bound to the specific material conditions and chemistry on our planet, billions of years ago.
However, more recent research suggests there are likely countless other possibilities for how life might emerge through potential chemical combinations. As the British chemist Lee Cronin, the American theoretical physicist Sara Walker and others have recently argued, seeking near-miraculous coincidences of chemistry can narrow our ability to find other processes meaningful to life. In fact, most chemical reactions, whether they take place on Earth or elsewhere in the Universe, are not connected to life. Chemistry alone is not enough to identify whether something is alive, which is why researchers seeking the origin of life must use other methods to make accurate judgments.
Today, ‘adaptive function’ is the primary criterion for identifying the right kinds of biotic chemistry that give rise to life, as the theoretical biologist Michael Lachmann (our colleague at the Santa Fe Institute) likes to point out. In the sciences, adaptive function refers to an organism’s capacity to biologically change, evolve or, put another way, solve problems. ‘Problem-solving’ may seem more closely related to the domains of society, culture and technology than to the domain of biology. We might think of the problem of migrating to new islands, which was solved when humans learned to navigate ocean currents, or the problem of plotting trajectories, which our species solved by learning to calculate angles, or even the problem of shelter, which we solved by building homes. But genetic evolution also involves problem-solving. Insect wings solve the ‘problem’ of flight. Optical lenses that focus light solve the ‘problem’ of vision. And the kidneys solve the ‘problem’ of filtering blood. This kind of biological problem-solving – an outcome of natural selection and genetic drift – is conventionally called ‘adaptation’. Though it is crucial to the evolution of life, new research suggests it may also be crucial to the origins of life.
This problem-solving perspective is radically altering our knowledge of the Universe…
The idea of life as a kind of computational process has roots that go back to the 4th century BCE, when Aristotle introduced his philosophy of hylomorphism in which functions take precedence over forms. For Aristotle, abilities such as vision were less about the biological shape and matter of eyes and more about the function of sight. It took around 2,000 years for his idea of hylomorphic functions to evolve into the idea of adaptive traits through the work of Charles Darwin and others. In the 19th century, these naturalists stopped defining organisms by their material components and chemistry, and instead began defining traits by focusing on how organisms adapted and evolved – in other words, how they processed and solved problems. It would then take a further century for the idea of hylomorphic functions to shift into the abstract concept of computation through the work of Alan Turing [and here] and the earlier ideas of Charles Babbage [here].
In the 1930s, Turing became the first to connect the classical Greek idea of function to the modern idea of computation, but his ideas were impossible without the work of Babbage, a century before. Important for Turing was the way Babbage had marked the difference between calculating devices that follow fixed laws of operation, which Babbage called ‘Difference Engines’, and computing devices that follow programmable laws of operation, which he called ‘Analytical Engines.’
Using Babbage’s distinction, Turing developed the most general model of computation: the universal Turing Machine…
Turing did not describe any of the materials out of which such a machine would be built. He had little interest in chemistry beyond the physical requirement that a computer store, read and write bits reliably. That is why, amazingly, this simple (albeit infinite) programmable machine is an abstract model of how our powerful modern computers work. But the theory of computation Turing developed can also be understood as a theory of life. Both computation and life involve a minimal set of algorithms that support adaptive function. These ‘algorithms’ help materials process information, from the rare chemicals that build cells to the silicon semiconductors of modern computers. And so, as some research suggests, a search for life and a search for computation may not be so different. In both cases, we can be side-tracked if we focus on materials, on chemistry, physical environments and conditions.
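Turing's abstract machine is simple enough to sketch in a few lines of code. The sketch below is purely illustrative – the simulator, the program, and all names are invented for this example, not drawn from Turing's papers – but it shows the essential idea the passage describes: a table of (state, symbol) rules that read, write, and move along a tape, with no commitment to what material the "tape" is made of.

```python
# A minimal Turing machine simulator (an illustrative sketch, not a
# historical implementation). The "program" is a table mapping
# (state, symbol) -> (new_state, symbol_to_write, head_move).

def run_turing_machine(program, tape, state="start", blank="_", max_steps=1000):
    """Run `program` on `tape` (a string) and return the final tape contents."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A hypothetical example program: invert every bit, moving right until
# the blank symbol is reached, then halt.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine(flip_bits, "10110"))  # -> 01001
```

Nothing in the simulator cares whether the tape is paper, DNA, or silicon – which is precisely the material-independence the passage draws out.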
In response to these concerns, a set of diverse ideas has emerged to explain life anew, through principles and processes shared with computation, rather than the rare chemistry and early Earth environments simulated in the Miller-Urey experiment. What drives these ideas, developed over the past 60 years by researchers working in disparate disciplines – including physics, computer science, astrobiology, synthetic biology, evolutionary science, neuroscience and philosophy – is a search for the fundamental principles that drive problem-solving matter. Though researchers have been working in disconnected fields and their ideas seem incommensurable, we believe there are broad patterns to their research on the origins of life. However, it can be difficult for outsiders to understand how these seemingly incommensurable ideas are connected to each other or why they are significant. This is why we have set out to review and organise these new ways of thinking.
Their proposals can be grouped into three distinct categories, three hypotheses, which we have named Tron, Golem and Maupertuis…
[The authors unpack all three proposals…]
… Is life problem-solving matter? When thinking about our biotic origins, it is important to remember that most chemical reactions are not connected to life, whether they take place here or elsewhere in the Universe. Chemistry alone is not enough to identify life. Instead, researchers use adaptive function – a capacity for solving problems – as the primary evidence and filter for identifying the right kinds of biotic chemistry. If life is problem-solving matter, our origins were not a miraculous or rare event governed by chemical constraints but, instead, the outcome of far more universal principles of information and computation. And if life is understood through these principles, then perhaps it has come into existence more often than we previously thought, driven by problems as big as the bang that started our abiotic universe moving 13.8 billion years ago.
The physical account of the origin and evolution of the Universe is a purely mechanical affair, explained through events such as the Big Bang, the formation of light elements, the condensation of stars and galaxies, and the formation of heavy elements. This account doesn’t involve objectives, purposes, or problems. But the physics and chemistry that gave rise to life appear to have been doing more than simply obeying the fundamental laws. At some point in the Universe’s history, matter became purposeful. It became organised in a way that allowed it to adapt to its immediate environment. It evolved from a Babbage-like Difference Engine into a Turing-like Analytical Engine. This is the threshold for the origin of life.
In the abiotic universe, physical laws, such as the law of gravitation, are like ‘calculations’ that can be performed everywhere in space and time through the same basic input-output operations. For living organisms, however, the rules of life can be modified or ‘programmed’ to solve unique biological problems – these organisms can adapt themselves and their environments. That’s why, if the abiotic universe is a Difference Engine, life is an Analytical Engine. This shift from one to the other marks the moment when matter became defined by computation and problem-solving. Certainly, specialised chemistry was required for this transition, but the fundamental revolution was not in matter but in logic.
In that moment, there emerged for the first time in the history of the Universe a big problem to give the Big Bang a run for its money. To discover this big problem – to understand how matter has been able to adapt to a seemingly endless range of environments – many new theories and abstractions for measuring, discovering, defining and synthesising life have emerged in the past century. Some researchers have synthesised life in silico. Others have experimented with new forms of matter. And others have discovered new laws that may make life as inescapable as physics…
Eminently worth reading in full: “Problem-solving matter,” from @sfiscience and @aeonmag.
Pair with “At the limits of thought” (also by Krakauer).
* Albert Einstein
###
As we obsess over ontology, we might spare a thought for someone concerned with life as it is lived: Sigismund Schlomo “Sigmund” Freud; he died on this date in 1939. A neurologist, he was the founder of psychoanalysis– a clinical method for evaluating and treating pathologies seen as originating from conflicts in the psyche, through dialogue between patient and psychoanalyst, and the distinctive theory of mind and human agency derived from it.
“Those who are not shocked when they first come across quantum theory cannot possibly have understood it”*…
A scheduling note: your correspondent is headed onto the road for a couple of weeks, so (Roughly) Daily will be a lot more roughly than daily until September 20th or so.
100 years ago, a circle of physicists shook the foundation of science. As Philip Ball explains, it’s still trembling…
In 1926, tensions were running high at the Institute for Theoretical Physics in Copenhagen. The institute was established 10 years earlier by the Danish physicist Niels Bohr, who had shaped it into a hothouse for young collaborators to thrash out a new theory of atoms. In 1925, one of Bohr’s protégés, the brilliant and ambitious German physicist Werner Heisenberg, had produced such a theory. But now everyone was arguing with each other about what it implied for the nature of physical reality itself.
To the Copenhagen group, it appeared reality had come undone…
[Ball tells the story of Niels Bohr’s building on Max Planck, of Werner Heisenberg’s wrangling of Bohr’s thought into theory, of Einstein’s objections and Erwin Schrödinger’s competing theory; then he homes in on the ontological issue at stake…]
Quantum mechanics, they said, demanded we throw away the old reality and replace it with something fuzzier, indistinct, and disturbingly subjective. No longer could scientists suppose that they were objectively probing a pre-existing world. Instead, it seemed that the experimenter’s choices determined what was seen—what, in fact, could be considered real at all.
In other words, the world is not simply sitting there, waiting for us to discover all the facts about it. Heisenberg’s uncertainty principle implied that those facts are determined only once we measure them. If we choose to measure an electron’s speed (more strictly, its momentum) precisely, then this becomes a fact about the world—but at the expense of accepting that there are simply no facts about its position. Or vice versa…
…A century later, scientists are still arguing about this issue of what quantum mechanics means for the nature of reality…
[Ball recounts subsequent attempts to reconcile quantum theory to “reality,” including Schrödinger’s wave mechanics…]
… Schrödinger’s wave mechanics didn’t restore the kind of reality he and Einstein wanted. His theory represented all that could be said about a quantum object in the form of a mathematical expression called the wave function, from which one can predict the outcomes of making measurements on the object. The wave function looks much like a regular wave, like sound waves in air or water waves on the sea. But a wave of what?
At first, Schrödinger supposed that the amplitude of the wave—think of it like the height of a water wave—at a given point in space was a measure of the density of the smeared-out quantum particle there. But Max Born argued that in fact this amplitude (more precisely, the square of the amplitude) is a measure of the probability that we will find the particle there, if we make a measurement of its position.
This so-called Born rule goes to the heart of what makes quantum mechanics so odd. Classical Newtonian mechanics allows us to calculate the trajectory of an object like a baseball or the moon, so that we can say where it will be at some given time. But Schrödinger’s quantum mechanics doesn’t give us anything equivalent to a trajectory for a quantum particle. Rather, it tells us the chance of getting a particular measurement outcome. It seems to point in the opposite direction of other scientific theories: not toward the entity it describes, but toward our observation of it. What if we don’t make a measurement of the particle at all? Does the wave function still tell us the probability of its being at a given point at a given time? No, it says nothing about that—or more properly, it permits us to say nothing about it. It speaks only to the probabilities of measurement outcomes.
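The Born rule itself is easy to state computationally. The toy sketch below is an invented illustration, not a model of any real experiment: it discretizes a Gaussian wave function on a one-dimensional grid, squares its amplitude, normalizes, and reads off the probability of a position measurement landing in a chosen region.

```python
# A sketch of the Born rule on a discretized 1-D wave function.
# The wave packet and the region are arbitrary choices for illustration.
import numpy as np

x = np.linspace(-5, 5, 1001)        # positions on a grid
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)             # an (unnormalized) Gaussian wave packet

prob_density = np.abs(psi)**2       # Born rule: probability density ~ |psi|^2
prob_density /= prob_density.sum() * dx   # normalize: total probability = 1

# Probability that a position measurement finds the particle at x > 1
p_right = (prob_density[x > 1]).sum() * dx
print(p_right)
```

For this packet the analytic answer is 0.5·erfc(1) ≈ 0.079, and the discrete sum lands close to that. Note what the code does not give you: a trajectory. All it yields, exactly as the passage says, is the chance of a particular measurement outcome.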
Crucially, this means that what we see depends on what and how we measure. There are situations for which quantum mechanics predicts that we will see one outcome if we measure one way, and a different outcome if we measure the same system in a different way. And this is not, as is sometimes implied (this was the cause of Heisenberg’s row with Bohr), because making a measurement disturbs the object in some physical manner, much as we might very slightly disturb the temperature of a solution in a test-tube by sticking a thermometer into it. Rather, it seems to be a fundamental property of nature that the very fact of acquiring information about it induces a change.
If, then, by reality we mean what we can observe of the world (for how can we meaningfully call something real if it can’t be seen, detected, or even inferred in any way?), it is hard to avoid the conclusion that we play an active role in determining what is real—a situation the American physicist John Archibald Wheeler called the “participatory universe”…
… Heisenberg’s “uncertainty” captured that sense of the ground shifting. It was not the ideal word—Heisenberg himself originally used the German Ungenauigkeit, meaning something closer to “inexactness,” as well as Unbestimmtheit, which might be translated as “undeterminedness.” It was not that one was uncertain about the situation of a quantum object, but that there was nothing to be certain about.
There was an even more disconcerting implication behind the uncertainty principle. The vagueness of quantum phenomena, when an electron in an atom might seem to jump from one energy state to another at a time of its own choosing, seemed to indicate the demise of causality itself. Things happened in the quantum world, but one could not necessarily adduce a reason why. In his 1927 paper on the uncertainty principle, Heisenberg challenged the idea that causes in nature lead to predictable effects. That seemed to undermine the very foundation of science, and it made the world seem like a lawless, somewhat arbitrary place….
… One of Bohr’s most provocative views was that there is a fundamental distinction between the fuzzy, probabilistic quantum world and the classical world of real objects in real places, where measurements of, say, an electron with a macroscopic instrument tell us that it is here and not there.
What Bohr meant is shocking. Reality, he implied, doesn’t consist of objects located in time and space. It consists of “quantum events,” which are obliged to be self-consistent (in the sense that quantum mechanics can describe them accurately) but not classically consistent with one another. One implication of this, as far as we can currently tell, is that two observers can see different and conflicting outcomes from an event—yet both can be right.
But this rigid distinction between the quantum and classical worlds can’t be sustained today. Scientists can now conduct experiments that probe size scales in between those where quantum and classical rules are thought to apply—neither microscopic (the atomic scale) nor macroscopic (the human scale), but mesoscopic (an intermediate size). We can look, for example, at the behavior of nanoparticles that can be seen and manipulated yet are small enough to be governed by quantum rules. Such experiments confirm the view that there is no abrupt boundary between the quantum and the classical. Quantum effects can still be observed at these intermediate scales if our devices are sensitive enough, but those effects can be harder to discern as the number of particles in the system increases.
To understand such experiments, it’s not necessary to adopt any particular interpretation of quantum mechanics, but merely to apply the standard theory—encompassed within Schrödinger’s wave mechanics, say—more expansively than Bohr and colleagues did, using it to explore what happens to a quantum object as it interacts with its surrounding environment. In this way, physicists are starting to understand how information gets out of a quantum system and into its environment, and how, as it does so, the fuzziness of quantum probabilities morphs into the sharpness of classical measurement. Thanks to such work, it is beginning to seem that our familiar world is just what quantum mechanics looks like when you are 6 feet tall.
But even if we manage to complete that project of uniting the quantum with the classical, we might end up none the wiser about what manner of stuff—what kind of reality—it all arises from. Perhaps one day another deeper theory will tell us. Or maybe the Copenhagen group was right a hundred years ago that we just have to accept a contingent, provisional reality: a world only half-formed until we decide how it will be…
Eminently worth reading in full: “When Reality Came Undone,” from @philipcball in @NautilusMag.
See also: When We Cease to Understand the World, by Benjamin Labatut.
* Niels Bohr
###
As we wrestle with reality, we might spare a thought for Ludwig Boltzmann; he died on this date in 1906. A physicist and philosopher, he is best remembered for the development of statistical mechanics, and the statistical explanation of the second law of thermodynamics (which connected entropy and probability).
Boltzmann helped pave the way for quantum theory both with his development of statistical mechanics (which is a pillar of modern physics) and with his 1877 suggestion that the energy levels of a physical system could be discrete.
“A hole can itself have as much shape-meaning as a solid mass”*…
Holes. Caity Weaver wonders about them:
What is a hole?
A hole is a portion of something where something is not. Beyond that, holes are slippery. (As a concept — only some in reality.) Is a hole necessarily empty on both sides, like the gaps in a slice of Swiss cheese? Or need it only be empty on one side, like a pit dug into the earth? Is a hole with a bottom less of a hole than one without one? Can a slit be a hole, or must a hole be vaguely round? Does a straw have two holes, as one Reddit user pondered, or just one — a single thick hole, if you will?…
[She then proceeds to explore the concept etymologically…]
Wait — What Is a Hole?
The Stanford Encyclopedia of Philosophy goes right for the, well… philosophical:
Holes are an interesting case study for ontologists and epistemologists. Naive, untutored descriptions of the world treat holes as objects of reference, on a par with ordinary material objects. (‘There are as many holes in the cheese as there are cookies in the tin.’) And we often appeal to holes to account for causal interactions, or to explain the occurrence of certain events. (‘The water ran out because the bucket has a hole.’) Hence there is prima facie evidence for the existence of such entities. Yet it might be argued that reference to holes is just a façon de parler, that holes are mere entia representationis, as-if entities, fictions.
[There follows a fascinating account of the theories of holes…]
Holes
A whole lot about nothing…
*Henry Moore
###
As we hit ’em where they ain’t, we might spare a thought for mathematician Henri Cartan; he died on this date in 2008. A founding member (in 1934) of and active participant in the Bourbaki group, Cartan made contributions to math across algebra, geometry, and analysis, with a special focus on topology (that branch of math that plays with holes in toruses, Klein bottles, and other other-worldly shapes).
“Everything / is not itself”*…
Toward an ecology of mind: Nathan Gardels talks with Benjamin Bratton about his recent article, “Post-Anthropocene Humanism: Cultivating the ‘third space’ where nature, technology, and human autonomy meet“…
The reality we sense is not fixed or static, but, as Carlo Rovelli puts it, a “momentary get together on the sand.” For the quantum physicist, all reality is an ever-shifting interaction of manifold influences, each determining the other, which converge or dissolve under the conditions at a particular time and space that is always in flux…
The human, too, can be seen this way as a node of ever-changing interactions with the natural cosmos and the environment humans themselves have formed through technology and culture. What it means to be human, then, is not a constant, but continually constituted, altered and re-constituted through the recursive interface with an open and evolving world.
This is the view, at least, of Benjamin Bratton, a philosopher of technology who directs the Berggruen Institute’s Antikythera project to investigate the impact and potential of planetary-scale computation. To further explore the notion of “post-Anthropocene humanism” raised in a recent Noema essay, I asked him to weigh in on the nature of human being and becoming when anthropogenesis and technogenesis are one and the same process.
…
“I can’t accept the essentially reactionary claim that modern science erases ‘the Human.’ Demystification is not erasure. It may destabilize some ideas that humans have about what humans are, yes. But I see it more as a disclosure of what ‘humans’ always have been but could not perceive as such. It’s not that some essence of the Human goes away, but that humans are now a bit less wrong about what humans are,” he argues.
Bratton goes on: “Instead of science and technology leading to some ‘post-human’ condition, perhaps it will lead to a slightly more human condition? The figure we associate with modern European Humanism may be a fragile, if also a productive, philosophical concept. But dismantling the concept does not make the reality go away. Rather, it redefines it in the broader context of new understanding. In fact, that reality is more perceivable because the concept is made to dissolve.”
How so? “The origins of human societies are revealed by archaeological pursuits. What is found is usually not the primal scene of some local cultural tradition but something much more alien and unsettling: human society as a physical process.
…
All this would suggest, in Bratton’s view, “that cooperative social intelligence was not only the path to Anthropocene-scale agency for humans, but a reminder that the evolution of social intelligence literally shaped our bodies and biology, from the microbial ecologies inside of us to our tool-compatible phenotype. The Renaissance idea of Vitruvian Man, that we possess bodies and then engage the world through tools and intention, is somewhat backward. Instead, we possess bodies because of biotic and abiotic ‘technologization’ of us by the world, which we in turn accelerate through social cooperation.”
In short, one might say, it is not “I think therefore I am,” but, because the world is embedded in me, “thereby I am.”
…
Bratton’s view has significant implications for how we see and approach the accelerating advances in science and technology.
A negative biopolitics, so to speak, would seek to limit the transformations underway in the name of a valued concept of the human born in a specific time and place on the continuum of human evolution. A positive biopolitics would embrace the artificiality of those transformations as part of the responsibility of human agency.
Bratton states: “Abstract intelligence is not some outside imposition from above. It emerged and evolved along with humans and other things that think. Therefore, I am equally suspicious of the sort of posthumanism that collapses sentience and sapience into an anti-rationalist, flat epistemology that seeks not to calibrate the relation between reason and world, but is instead a will to vegetablization: a dissolving of agency into flux and flow. Governance then, in the sense of steerage, is sacrificed.”
To mediate this creative tension, what is called for is a theory of governance that recognizes the promise while affirming the autonomy of humans, albeit reconfigured through a new awareness, by striving to shape what we now understand as anthropo-technogenesis.
In the political theory of checks and balances, government is the positive and constitutional rule is the negative. The one is the capacity to act, the other to amend or arrest action that could lead to harmful consequences — the “katechon” concept from Greek antiquity of “withholding from becoming,” which I have written about before.
An ecology of mind, in the term of anthropologist Gregory Bateson, would encompass both by re-casting human agency not as the master, but as a responsible co-creator with other intelligences in the reality we are making together…
“The Evolution of What It Means To Be Human,” from Nathan Gardels and @bratton in @NoemaMag. Both the conversation and the article on which it is based are eminently worth reading in full.
Pair with: “Artificial Intelligence and the Noosphere” (from Robert Wright; for which, a ToTH to friend MK): a very optimistic take on a possible future that could emerge from the dynamic that Bratton outlines. Worth reading and considering; his visions of the socioeconomic and spiritual bounties-to-come are certainly enticing.
That said, I’ll just suggest that, even if AI is ultimately as capable as many assume it can/will be– by no means a sure thing– unless we address the kinds of issues raised in last week’s (R)D on this same general subject (“Without reflection, we go blindly on our way”) we’ll never get to Bratton’s (and Wright’s) happy place… The same kinds of things that Bratton implicitly and Wright explicitly are mooting for AI (as a knitter of minds in a noosphere) could have been said— were said— for computer networking, then for the web, then for social media… in the event, they knit— but not so much in the interest of blissful, enabling sharing and growth; rather as the tools of rapacious commercial interests (cf. Cory Doctorow’s “enshittification”) and/or authoritarians (cf. China or Russia or…). Seems to me that in the long run, if we can rein in capitalism and authoritarians: maybe. In the foreseeable future: if only…
* Rainer Maria Rilke
###
As we contemplate collaboration, we might send mysterious birthday greetings to Alexius Meinong; he was born on this date in 1853. A philosopher, he is known for his unique ontology and for contributions to the philosophy of mind and axiology– the theory of value.
Meinong’s ontology is notable for its belief in nonexistent objects. He distinguished several levels of reality among objects and facts about them: existent objects participate in actual (true) facts about the world; subsistent (real but non-existent) objects appear in possible (but false) facts; and objects that neither exist nor subsist can only belong to impossible facts. See his Gegenstandstheorie, or the Theory of Abstract Objects.
“Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else.”*…
Representation of consciousness from the seventeenth century by Robert Fludd, an English Paracelsian physician (source)
… but that doesn’t mean that we won’t attempt to answer “the hard problem of consciousness.” Indeed, as Elizabeth Fernandez notes, some scientists are using Schrödinger’s own work to try…
Supercomputers can beat us at chess and perform more calculations per second than the human brain. But there are other tasks our brains perform routinely that computers simply cannot match — interpreting events and situations and using imagination, creativity, and problem-solving skills. Our brains are amazingly powerful computers, using not just neurons but the connections between the neurons to process and interpret information.
And then there is consciousness, neuroscience’s giant question mark. What causes it? How does it arise from a jumbled mass of neurons and synapses? After all, these may be enormously complex, but we are still talking about a wet bag of molecules and electrical impulses.
Some scientists suspect that quantum processes, including entanglement, might help us explain the brain’s enormous power, and its ability to generate consciousness. Recently, scientists at Trinity College Dublin, using a technique to test for quantum gravity, suggested that entanglement may be at work within our brains. If their results are confirmed, they could be a big step toward understanding how our brain, including consciousness, works…
More on why maybe the brain isn’t “classical” after all: “Brain experiment suggests that consciousness relies on quantum entanglement,” from @SparkDialog in @bigthink.
For an orthogonal view: “Why we need to figure out a theory of consciousness.”
* Erwin Schrödinger
###
As we think about thinking, we might spare a thought for Alexius Meinong; he died on this date in 1920. A philosopher, he is known for his unique ontology and for contributions to the philosophy of mind and axiology– the theory of value.
Meinong’s ontology is notable for its belief in nonexistent objects. He distinguished several levels of reality among objects and facts about them: existent objects participate in actual (true) facts about the world; subsistent (real but non-existent) objects appear in possible (but false) facts; and objects that neither exist nor subsist can only belong to impossible facts. See his Gegenstandstheorie, or the Theory of Abstract Objects.