Most people are familiar with the dogmas promoted by powerful religious institutions such as the Roman Catholic Church, evangelical Protestant churches and some branches of Islam. These institutions claim sole possession of the truth, direct from God, and hold that anyone who does not agree is a heretic: someone to be avoided, and who may be punished.
Dogmatism is annoying and anti-social, and it causes a great deal of misery, both for people growing up under the power of the institution proclaiming the dogma and for those who interact with them.
It’s also pretty well recognised. One need only mention religious dogma and heads start to nod. People know what you’re talking about.
Despite the negative connotations the word has for most people, the leadership of the RC church does not object to the term and still uses it as a core part of its teachings. The church gave the term its theological sense – the word itself is ancient Greek for an opinion or decree – and uses it without shame to describe propositions that the church says RCs are obliged to believe. When I was an RC I never thought to ask what happens if one does not believe a dogma. It seemed too impertinent. But now that I have researched it, the answer that appears fairly consistently across different RC sources is that it is not a sin to disbelieve the dogma, as long as you don’t say so aloud, because that might encourage somebody else to disbelieve it. That would be heresy, which is a grave sin, punishable by an eternity in hellfire. A few centuries ago, the punishment was lighter – a mere burning at the stake.
Although the RC church made the word ‘dogma’ its own, it is not the only institution to proclaim dogmas. There are plenty of dogmas in evangelical protestantism, and some variants of Islam are heavily dogmatic. Perhaps non-RCs would reject the application of the word ‘dogma’ to their essential beliefs, given the pejorative sense in which the word is mostly used these days. But it would be hard to argue that concepts such as ‘biblical inerrancy’ or ‘justification by faith alone’ are not dogmas for some protestant sects.
It would be a mistake to equate dogma with religion, because most religions are not dogmatic. It is just our misfortune that three of the most dominant religions of our world – Roman Catholicism, Evangelical Protestantism and Islam – have many adherents who assert an obligation to believe the relevant dogmas.
I am not aware of any pre-Christian religion that had obligatory beliefs. Judaism had many rules, but they were about practices, not beliefs. Even for worship, the injunction was not to worship other gods, or idols in particular. As long as you didn’t bow down or offer sacrifices to golden calves or statues of Ba’al, it didn’t matter whether, in the privacy of your own thoughts, you really believed Yahweh was the greatest god. In fact the Torah says nothing at all about obligatory beliefs, so far as I recall. Other pre-Christian religions, like Buddhism, the many variants of Hinduism, Mithraism, Zoroastrianism and the ancient Greek, Roman and Egyptian religions also appear to set no expectations about their members’ beliefs.
Dogmas appear in places other than religions. Just as some protestants, while abjuring RC dogmas like the Immaculate Conception or Transubstantiation, insist on their own dogmas, people who are opposed to all religions – the so-called New Atheists – can be as dogmatic as those they criticise. Classic New Atheist dogmas are things like ‘it is wrong to believe anything that cannot be proven to be true’, or ‘for all questions and human challenges, science is the best means to an answer’. For some militant atheists it even seems to be an item of faith that adherence to any religious belief at all must be a sign of stupidity. I know these dogmas because for a while I was a born-again atheist and subscribed to them. I used to listen to podcasts of debates between Christians and atheists about whether God exists, cheering on my side and hoping for the unconditional surrender of the other. Looking back, it seems such an odd thing to do. Neither the debaters nor their supporters in the audience ever changed their views one iota. Each side had their dogmas and stuck steadfastly to them. They might as well both have been shouting into the wind. But really I suppose they were just playing to their supporters. I believe such debates can never get anywhere because it is impossible to prove or disprove the existence of a god, and any attempt to do either relies on presuppositions – usually unstated – that one side will accept and the other will not.
I have not completely forsaken atheism. I am still atheist on Mondays and alternate Wednesdays. But I have forsaken the dogmatism that accompanies the more aggressive variants of atheism.
Dogmas manifest in wider circles than the theological and anti-theological. Other areas where they crop up are philosophy, politics, economics, psychology and sociology. People debate whether there is such a thing as objective morality, whether equality is more important than liberty, whether wealth really does ‘trickle down’ in a capitalist society, and whether most psychological disorders can be traced back to early childhood experience. Debates between evangelical Christians and militant atheists seem mild and friendly compared to the vicious passions unleashed in a debate between a Berkeleyan Idealist and a Materialist acolyte of GE Moore about whether a tree that falls in a forest makes a noise if there is nobody there to hear it.
I’m not suggesting that none of those things matter. It matters very much what political and economic theories are adopted by governments. They affect many people’s lives. Even some sorts of philosophy have huge effects. One can trace the roots of many important social movements to the ideas raised by philosophers, such as the influence of Enlightenment philosophers on the American and French revolutions. It’s hard to see how the ‘actual existence’ or otherwise of impossibly distant galaxies could affect our lives, but other similarly meaningless topics, such as whether the Holy Ghost proceeds from the Father and the Son, or just from the Father, have led to wars, the rise and fall of empires and many burnings of people that had the misfortune of siding with the wrong opinion.
The common element of dogmatic claims is not their capacity or otherwise to affect our lives, it is their total immunity to proof, disproof, or experimental testing of any kind.
There is no dogma about the law of gravity, no dogma of quantum mechanics, no doctrine of the periodic table. A good biology teacher will not demand that her class believe that cells of mammals have a nucleus containing bundles of DNA and little packets of RNA. A good mathematics teacher will not demand that the class believe that the method being taught for long division works. The teacher is saying: “Here is a method, or an approach to understanding something. Most people find it useful in getting important things done”. The teacher could add – but generally doesn’t bother – “If you don’t like what I’m teaching and want to go and invent your own method of long division (or theory of the elements), be my guest! I’ll still be here to help you learn this method if you change your mind.”
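The teacher’s point can be made concrete: the long-division method is just a mechanical procedure, and anyone is free to test it, distrust it, or replace it with their own. Here is a minimal sketch in Python (the function name and structure are mine, not any textbook’s):

```python
def long_divide(dividend, divisor):
    """Digit-by-digit long division of a non-negative integer,
    mirroring the steps done on paper: bring down a digit,
    divide, write the quotient digit, carry the remainder."""
    if divisor <= 0 or dividend < 0:
        raise ValueError("expects a non-negative dividend and a positive divisor")
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)   # bring down the next digit
        quotient_digits.append(str(remainder // divisor))
        remainder = remainder % divisor           # carry what is left over
    return int("".join(quotient_digits)), remainder

print(long_divide(1234, 7))  # (176, 2), since 176 * 7 + 2 == 1234
```

Nobody is obliged to believe this works; it can be checked against any example one likes, which is precisely the teacher’s attitude.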
It is both ironic and predictable that the claims about which we humans get most dogmatic are those about which it is least possible to be certain. When there is a high level of certainty – as with Newton’s Laws of Motion – there is no need for dogmatism. You can take it or leave it. More fool you if you leave it. But when there is little to no certainty available, as with doctrines of neo-liberal economics (or, to be fair, Marxist economics), doctrines of the nature of the Holy Ghost, or proofs and disproofs of the existence of god(s), people generally ramp up the dogmatism and turn the volume to eleven. They use dogma and noise to make up for their lack of confidence and inability to provide any concrete evidence for the proposition.
This has led to my strongest philosophical position being anti-dogmatism. No matter what proposition somebody makes, be it about religion, ontology, economics or politics, and regardless of whether I sympathise with the belief being promoted or not, I now instinctively react against it and look to debunk it, if it is made dogmatically. That doesn’t mean I don’t hold any opinions on those topics. I have loads. Some of them – mostly the political ones – I hold very strongly and am prepared to march in the streets, donate to a cause and argue publicly to try to win people over. But I hope I never get to the stage of believing that I am unquestionably right about something and that those who disagree are unquestionably wrong. That seems a poor way to live. I have sometimes been like that in the past, but I think I am not now and hope I won’t be again. For me, unquestioningly accepting a dogma is the coward’s excuse for not thinking for oneself.
That is my opinion, which I acknowledge may be mistaken.
Bondi Junction, April 2019
What a strange concept is Satan, the devil! He looms so large in Western culture and literature that few people ever stop to reflect on the weirdness of the idea of a single person being responsible for all the bad things in the world. Certainly I never considered it until the other day.
Satan seems to me to be a particularly Christian concept, in emphasis if not in origin. Other religions have supernatural beings with varying degrees of benevolence or malevolence, but I don’t think the concept of a single Prince of Darkness is widespread. China, India and Japan have mythologies replete with good and evil spirits, but no single spirit has control of all the bad stuff. For instance the Ramayana’s Ravana, king of demons, comes across as more naughty than evil. So does Loki from Norse mythology. Actually Ravana is a good deal less frightening to me than the Indian goddesses of destruction Kali and Durga, both of whom are worshipped by perfectly nice, law-abiding, kind people. I get the two mixed up, but one or both of them wears a necklace of skulls and is portrayed dancing on the bodies of those she has slain.
Chinese folk religion seems to encompass a multitude of evil spirits. We have to orient our houses the right way and do specific things with water, air, numbers and chants to keep them at bay.
Having multiple bad spirits seems to me like having a proliferation of petty criminals, whereas Satan is more like how Stalin appeared to the West at the height of the Cold War – the supreme leader of a tremendously powerful organisation capable of doing unfathomable harm. Some people, perhaps out of nostalgia for the good old days of the cold war, tried to resurrect that image with people like Osama Bin Laden, but it never really caught on. Stalin could have killed hundreds of millions just by pressing a button. Osama Bin Laden – to be blunt – couldn’t.
Satan does get the occasional mention in the two other Abrahamic religions: Judaism and Islam. After all, he makes his debut appearance – at least in the traditional identification of him with the serpent – in the Garden of Eden story in Genesis chapter three, which all three Abrahamic religions share.
But while I don’t know a whole lot about either Judaism or Islam, a bit of googling about devils in Judaism and Islam didn’t turn up anything with a prominence like the following from the RC baptismal rite:
Celebrant: Do you reject Satan?
Parents & Godparents: I do.
Celebrant: And all his works?
Parents & Godparents: I do.
Celebrant: And all his empty promises?
Parents & Godparents: I do.
You know you’ve made it to the big-time when an organisation with a billion members makes three successive references to you in the induction ceremony for every one of its new members.
The biggest speaking part that Satan is given in the Jewish scriptures – to my knowledge – is in the Book of Job, where he wagers with God (Yahweh) about whether Job can be induced to curse Yahweh if enough suffering is inflicted on him. Job is a fascinating book, partly because Satan comes across in it as a less malevolent entity than Yahweh. But I wouldn’t want my fate to be in the hands of either of them, as portrayed therein. There’s not a lot of ‘Duty of Care’ going on.
Satan’s literary influence is so pervasive that his avatars pop up in secular literature as well. Classic instances of that are Sauron from Lord of the Rings and Voldemort from Harry Potter. Both are known as ‘the Dark Lord’ and have the honorific ‘Lord’ affixed before their name. One might ascribe Tolkien’s symbolism in Lord of the Rings to his Roman Catholicism. On the other hand JK Rowling’s books are not religious allegories, yet her use of such a clear Satan substitute shows how deeply embedded the role of Satan has become in Western culture, both religious and secular.
From a literary standpoint, having the notion of Satan is a wonderful cultural advantage. Pitting the hero(es) against the overwhelming odds of a leader of a massively powerful army of evil is so much more gripping than pitting them against a mere mortal villain.
Both Tolkien and Rowling hedged their bets a bit though. Both allude in their mythologies to earlier evils, which in some sense detracts from the uniqueness of their Dark Lords. With Tolkien it was Morgoth, while Rowling had Grindelwald. I think the latter name is unfortunate because Grindelwald is the name of a lovely village in the Swiss alps. I went there about thirty-five years ago and did not encounter any dark forces. But maybe it has changed since then.
Western culture has a rich tradition of tales about Satan and his followers. That has given us such chilling works of fiction as Rosemary’s Baby, The Omen and The Exorcist. The English author Dennis Wheatley wrote a series of best-selling novels about Satanist cults conjuring up the devil in gothic country mansions, sacrificing virgins and doing other dastardly deeds. Going back further in time we have Goethe’s Faust, Milton’s Paradise Lost and Dante’s Inferno. I have not read any of these three, although I know the story of Faust. I am keen to find time to read Paradise Lost (if only it weren’t so LONG!) because it is said that it presents Satan as a complex, multi-faceted character that is in some senses almost a tragic hero, rather than just the pure evil image to which we are generally subjected.
I don’t think I ever really believed in the devil entirely, although I said I did, both to others and to myself, because that was a requirement of the religion in which I was raised. I found Rosemary’s Baby and The Omen scary, but not so much The Exorcist.
On reflection, I think what scared me most about Rosemary’s Baby and The Omen was not the devil but rather his creepy followers. In Rosemary’s Baby it was the solicitous, secretly Satan-worshipping neighbours who befriended Rosemary and took advantage of her trusting nature to gradually poison her with various herbs that they told her were medicine for a mild ailment she had. In The Omen it was the creepily motherly yet homicidal nursemaid, plus the kid Damien (son of Satan) who killed people by doing things like crashing his tricycle into them at the top of the stairs so that they fell and broke their neck. Malevolent children are always scary, regardless of whether any evil spirits are in sight. Horror movies love to make use of them, and ‘Lord of the Flies’ is like the apotheosis of the evil children genre. How apt that William Golding chose the title ‘Lord of the Flies’ for that novel – a title which is a translation of ‘Beelzebub’, one of Satan’s many names.
The book that scared me the most was Dracula. I read it at much too young an age and spent the next several years sleeping in terror with my head under the blankets to try to keep the vampires away. As far as I recall Dracula doesn’t actually mention the devil at all. Count Dracula is evil, but there is no suggestion in Bram Stoker’s book that he is unique.
I wonder what it is that made Satan such a prominent figure in Christianity, and the cultures that were heavily influenced by Christianity.
One theory I’ve come across is that, when Christianity became the official religion of the Roman Empire in the late 4th century CE, the Romans sought to spread the religion by discrediting the existing folk religions of Europe, many of which involved some sort of worship of nature and fertility. The sort of ‘dancing in a circle, naked in the woods’ image that is associated in modern times with witches’ covens and satanic cults (in the popular imagination at least – whether also in reality I have no idea) may have been a feature of those pre-Christian folk religions. So associating them with a powerful evil figure would have been a way to discredit those religions, and maybe to justify suppressing them.
That theory has an intuitive appeal, but strikes the problem that for many of those folk religions the anthropomorphic image of nature that was worshipped was female – a mother goddess. Yet Satan is male.
An equally plausible, and rather simpler, explanation is that the notion of an immensely powerful Dark Lord just makes for a great story, and great stories make for successful social movements.
There’s an interesting theological conflict between the notion of Satan as the Embodiment of Evil on the one hand, and on the other, the Roman Catholic doctrine that evil is an absence of good, rather than a presence of something bad. That doctrine is usually attributed to St Augustine (late 4th century), and was later reinforced by St Thomas Aquinas (13th century). I find it hard to square this with Satan being an entity that is supposed to be actively evil. I have no idea what RC theologians make of this, although I am confident that their explanation would be extremely LONG. Make an explanation long enough and the chances are that the explainee will not raise any objections, out of sheer weariness and the fear that the explainer may launch into a further diatribe.
When I was in high school there was a boy who attended our school for a short while. He gained notoriety by telling people that he had seen the Devil. He had just woken up in the middle of the night and the Devil had been standing there at the foot of his bed. I remember that he had red eyes (the devil, not the boy), but can’t recall any other details being given. Still, the red eyes would be enough to narrow down the suspects fairly effectively if the police were to conduct a manhunt. A short conversation was had, twixt the boy and the devil. I don’t remember the topic but I do remember that it was surprisingly banal.
Did he really think he saw the devil, or did he just make up the story in order to gain attention and acceptance at a new school, as boys are wont to do? We’ll never know, because he left after being there only a couple of months. I hope it was made up, because such visions are often associated with mental illness and I wouldn’t wish for him to have suffered that.
Having meandered about all over the place in this essay (as usual) I feel I should lay my cards on the table and say that, although I think the devil is a marvellous literary figure that we couldn’t do without, I don’t believe in him any more. I hope that most other people don’t either, regardless of their religious or cultural associations, as belief in the devil seems to lead to black and white thinking that before you know it has medicine women being burned as witches and teenagers with schizophrenia being subjected to horrific exorcism rituals.
What I do believe in, at least in the middle of the night as I struggle out of bed to go and empty my bladder, is a frightful monster hiding under the bed with scaly claws that will grab my shins, pull me under the bed and then – I don’t know what then, but no doubt it will be horrific. But that’s more Doctor Who than Paradise Lost, and is the subject of another (not yet written) essay.
Now, having whinged shamelessly about the verbosity of both John Milton and of theologians, I had better stop here, lest I commit the very misdemeanour I have been moaning about.
Bondi Junction, May 2017
This essay considers the notion of events that we do not know either to have occurred or to be almost certain to occur in the future. Imagining such events is everywhere in everyday speech, but we rarely stop to consider what we mean by it, or what effect imagining such things has on us.
It is dotted with numbered questions, so it can be used as a basis for a discussion.
A counterfactual is a scenario in which we imagine something happening that we know did not happen.
This is fertile ground for fiction. Philip K Dick’s acclaimed novel ‘The Man in the High Castle’, written in 1962, depicts events in a world in which the Axis powers won World War II, and the USA has been divided into parts occupied by Japan and Germany. The movie ‘Sliding Doors’ is another well-known example, which imagines what ‘might have happened’ if Gwyneth Paltrow’s character hadn’t missed a train by a second as the sliding doors closed in front of her.
When something terrible happens, many people torment themselves by considering what would have happened if they, or somebody else, had done something differently:
- What if I had been breathing out rather than in when the airborne polio germ floated by? (from Alan Marshall’s ‘I can jump puddles’)
- If she hadn’t missed her flight and had to catch the next one (doomed to crash), she’d still be alive now.
- What would life have been like if I hadn’t broken up with Sylvie / Serge?
We can also consider counterfactuals where the outcome would have been worse than what really happened, such as ‘What would my life have been like if I hadn’t met that inspirational person who helped me kick my heroin habit’. But for some reason – so it appears to me – most counterfactuals that we entertain are ones where the real events are worse than the imagined ones. We could call these ‘regretful counterfactuals’ and the other ones ‘thankful counterfactuals’.
Then there are the really illogical-seeming ones, like the not-uncommon musing: ‘Who would I be [or what would I be like] if my parents were somebody else?’ which makes about as much sense as ‘what would black look like if it were a lightish colour?’
Here are some questions:
- why do we entertain counterfactuals? What, if any, benefits are there from considering regretful counterfactuals? What about thankful ones?
- given that for many counterfactuals, consideration of them just makes us feel bad, could we avoid entertaining them, or is it too instinctive an urge to be avoidable?
- Do counterfactuals have any meaning? Given that Alan Marshall did breathe in, and did contract polio, what does it mean to ask ‘If he had been breathing out instead, would he have become a top-level athlete rather than an author?’ Are we in that case talking about a person – real or imaginary – other than Alan Marshall, since part of what made him who he is, was his polio?
That last question can lead in some very odd directions. My pragmatic approach is that counterfactuals are made-up stories about an imaginary universe that is very similar to this one, but in which slightly different things happen. Just as we make up stories about non-existent lands, princesses and far away galaxies, we can make up stories about imaginary worlds that are very similar to this one except in a handful of crucial respects.
Some philosophers insist that counterfactuals are not about imaginary people and worlds but about the real people we know. My objection to that is that, for example, the Marshall counterfactual cannot be about the real Alan Marshall, because he had polio. It can only be about an imaginary boy whose life was almost identical to Marshall’s up to the point when the real one contracted polio. My opponents (who would include Saul Kripke, whom we mention later) would counter that polio is not what defines Alan Marshall, that it is an ‘inessential’ aka ‘accidental’ property of that person, and changing it would not change his being that person. Which raises the question of what, if any, properties are essential, such that changing them would make the subject a different person. Old Aristotle believed that objects, including people, have essential and inessential properties, and wrote reams about that. In the Middle Ages Thomas Aquinas picked up on that and wrote many more reams about it. The ‘essential properties’ of an object are called its ‘essence’, and believing in such things is called ‘Essentialism’. That is how certain RC theologians are able to claim that an object that looks, feels, smells, sounds, tastes and behaves like a small, round, white wafer is actually the body of Jesus of Nazareth – apparently because, although every property we can discern is that of a wafer, the ‘essential’ properties (which we cannot perceive) are those of Jesus, thus its essence is that of Jesus. I tried for years to make sense of that and believe it, but all it succeeded in doing was giving me a headache and making me sad. For me, essentialism is bunk.
- Can you make any sense of Essentialism? If so can you help those of us who can’t, to understand it?
I can’t help but muse that maybe thankful counterfactuals have some practical value, as they can enable us to put our current sorrows into perspective. They are a very real way of Operationalizing (I know, right?) what Garrison Keillor suggests is the Minnesotan state motto – ‘It could be worse‘.
Maybe regretful counterfactuals sometimes have a role too, when they encourage us to learn from our mistakes and be more careful in the future. But they are of no use in the three examples given above. What are we going to learn from them: Never breathe in? Never fly on an aeroplane? Never break up with a romantic partner (no matter how unsuitable the match turns out to be)?
If we do something that leads to somebody else suffering harm, considering the regretful counterfactual can be useful. If I hadn’t done that, they wouldn’t be so sad. How can I make it up to them? I know, I’ll do such-and-such. That won’t fix it completely, but it’s all I can think of and at least it’ll make them feel somewhat better.
But once we’ve done all we can along those lines, the counterfactual has outlived its usefulness and is best dismissed. Otherwise we end up punishing ourselves with pointless guilt, which benefits nobody. Yet we so often do this anyway, perhaps because we can’t help it, as speculated in question 2.
I am completely useless at banishing guilt. But the techniques I have, feeble as they are, revolve around reminding myself that the universe is as it is, and cannot be otherwise. The past cannot be changed. If I had not done that hurtful thing I would not have been who I am, and the universe would be a different one, not this one. I am sorry I did it, and will do my best to make restitution, and to avoid causing harm in that way again. But the counterfactual of my not doing it is just an imaginary story about a different universe, that is (once I’ve covered the restitution and self-improvement aspects) of no use to anybody, and not even a good story. Better to read about Harry Potter’s imaginary universe instead.
This universe-could-not-have-been-otherwise approach is currently working moderately well in helping me cope with the recent Fascist ascendancy in the US. There are so many ‘if only…’ situations we could torture ourselves with: ‘If only the Democrats had picked Bernie Sanders’, ‘If only Ms Clinton hadn’t made the offhand comment about the basket of deplorables’, ‘If only the Republicans had picked John Kasich’. Those ‘If only’s are about a different universe, not this one. They could not happen in this universe, because in this universe they didn’t happen.
Counterfactuals also come into Quantum Mechanics. Arguably the most profound and shocking finding of quantum mechanics is Bell’s Theorem which, together with the results of a series of experiments that physicists did after the theorem was published, implies that either influences can travel faster than light – which appears to destroy the theory of relativity that is the basis of much modern physics – or Counterfactual Definiteness is not true. Counterfactual Definiteness states that we can validly and meaningfully reason about what would have been the result if, in a given experiment, a scientist had made a different type of measurement from the one she actually made – eg if she had pointed a particular measuring device in a different direction. Many find it ridiculous that we cannot validly consider what would have happened in such an alternative experiment, but that (or the seemingly equally ridiculous alternative of faster-than-light influences) is what Bell’s Theorem tells us, and the maths has been checked exhaustively.
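The size of the conflict can be shown with a few lines of arithmetic. For a pair of spin-half particles in the singlet state, quantum mechanics predicts a correlation of -cos(a - b) between measurements made at detector angles a and b, and Bell’s so-called CHSH combination of four such correlations must stay within ±2 for any local theory that assumes counterfactual definiteness. A sketch (the function names are mine; the angles are the standard choices that maximise the quantum prediction):

```python
import math

def E(a, b):
    """Quantum-mechanical correlation for a spin singlet measured
    at detector angles a and b (in radians)."""
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    """Bell's CHSH combination of four correlations. Any local theory
    with counterfactual definiteness must satisfy |S| <= 2."""
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Standard angle choices that maximise the quantum violation.
S = chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(abs(S))  # 2.828..., i.e. 2*sqrt(2), beyond the classical bound of 2
```

The experiments referred to above measure this combination directly and find values close to 2√2, which is why one of the two assumptions – locality or counterfactual definiteness – has to give.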
A counterfactual deals with the case where we imagine something happening that we know did not happen. What about when we don’t know? I use the words hypothetical or possibility to refer to cases where we consider events that, for all we know, may or may not occur in the history of the universe. These events may be past or future:
- a past hypothetical is that Lee Harvey Oswald shot JFK from the book depository window. Some people believe he did. Others think the shot came from the ‘grassy knoll’.
- a future hypothetical is that the USA will have a trade war with China
What do we mean when we say those events are ‘possible’ or, putting it differently, that they ‘could have happened’ (for past hypotheticals) or that they ‘could happen’ (for future hypotheticals)? I suggest that we are simply indicating our lack of knowledge. That is, we are saying that we cannot be certain whether, in a theoretical Complete History of the Earth, written by omniscient aliens after the Earth has been engulfed by the Sun and vaporised, those events would be included.
Some people would insist that the future type is different from the past type – that while a past hypothetical is indeed just about a lack of knowledge about what actually happened, a future hypothetical is about something more fundamental and concrete than just knowledge. This leads me to ask:
- Does saying that a certain event is ‘possible’ in the future indicate anything more than a lack of certainty on the part of the speaker as to whether it will occur? If so, what?
I incline to the view that it indicates nothing other than the speaker’s current state of knowledge. What some people find uncomfortable about that is that it makes the notion of possibility depend on who is speaking. For a medieval peasant it is impossible that an enormous metal device could fly. For a 21st century person it is not only possible but commonplace. As Arthur C Clarke said ‘Any sufficiently advanced technology is indistinguishable from magic.’ To us, mind-reading is impossible, but maybe in five hundred years we will be able to buy a device at the chemist for five dollars that reads people’s minds by measuring the electrical fields emitted by their brain.
Under this view, the notion of possibility is mind-dependent. What would a mind-independent notion of possibility be?
There is a whole branch of philosophy called ‘Modal Logic’, and an associated theory of language – from the brilliant logician Saul Kripke – that is based on the notion that possibility means something deep and fundamental that is not just about knowledge, or minds. To me the whole thing seems as meaningful as debates over how many angels can dance on the head of a pin, but maybe one day I will meet somebody that can demonstrate a meaning to such word games.
Sometimes counterfactuals sound like past possibilities. That happens when we say that something which didn’t happen, could have happened. Marlon Brando’s character Terry in ‘On the Waterfront’ complains ‘I coulda been a contender … instead of a bum, which is what I am’. As I said above, I don’t think it makes literal sense to say it could have happened, since it didn’t. But if we didn’t know whether it had happened or not, we wouldn’t have been surprised to find out that it did happen. So in a sense we are saying that a person in the past, prior to when the event did or didn’t occur, evaluating it from that perspective, would regard it as possible. Brando’s Terry was saying that, back in the early days of his boxing career, he would not have been at all surprised if he had become a star. But he didn’t, and now it was too late.
What would happen / have happened next?
With both counterfactuals and hypotheticals, we often ask whether some other thing would have happened if the first thing had happened differently from how it did. For instance:
- [counterfactual] If the FBI director had not announced an inquiry into Hillary Clinton’s emails days before the 2016 US presidential election, would she have won?
- [past hypothetical] If Henry V really did give a stirring speech like the ‘band of brothers’ one in Shakespeare’s play, exhorting his men to fight just for the glory of having represented England, God and Henry, were any of the men cynical about his asking them to risk death just in order to increase Henry’s personal power?
- [future hypothetical] If Australia’s Turnbull government continues with its current anti-environment policies, will it be trounced at the next election?
Which leads to another question:
- What exactly do these questions mean?
The first relates to something that we know did not happen and the other two relate to what is currently unknowable.
My opinion is that, like with counterfactuals, they are about making up stories. In the US election case we are imagining a story in which certain events in the election were different, and we are free, within the bounds of the constraints imposed by what we know of the laws of nature, to imagine what happened next. Perhaps in the story Ms Clinton wins. Perhaps she then goes on to become the most beloved and successful president the country has ever had, overseeing a resurgence of employment, creativity, and brotherly and sisterly love never before encountered. Or perhaps she declares martial law, suspends the constitution and becomes dictator for life, building coliseums around the country where Christians and men are regularly fed to lions. Within the bounds of the laws of nature we are free to make up whatever story we like.
The same goes for the past hypothetical of Henry’s speech. We can imagine the men swooning in awe and devotion, murmuring Amen after every sentence, or we can imagine them rolling their eyes and making bawdy, cynical quips to one another – but nevertheless eventually going in to battle because otherwise they won’t be paid and their families will starve.
However, the future hypothetical seems to be about more than a made-up story. If the first thing happens – continued anti-environmentalism – then we will definitely know after the next election whether the second thing has also happened. At that point it becomes a matter of fact rather than imagination.
To which I say, so what? Until it happens, or else it becomes clear that it will not happen, it is a matter of future possibilities and can be covered by any of the scientifically-valid imaginative scenarios we can dream up. It is only if the scientific constraint massively narrows down those scenarios that it has significance. If, for instance, we could be sure that any government that fails to make a credible attempt to protect the environment will be booted out of office, our future possibility would become a certainty: if the government doesn’t change its track then it will be ejected. But in politics nothing is ever that certain. Other issues come up and change the agenda; scandals happen; natural and man-made disasters strike; key politicians retire or die. At best we can talk about whether maintaining the anti-environment stance makes it more probable that the government will lose office. Which leads on to the next thorny issue.
Probability – aka chance, aka risk, aka likelihood, and many other synonyms and partial synonyms – is a word whose meaning most people feel they know, but which nobody can explain.
What do we mean when we say that the probability of a tossed coin giving heads is 0.5? Introductory probability courses often explain this by saying that if we did a very large number of tosses we would expect about half of them to be heads. But if we ask what ‘expect’ means we find ourselves stuck in a circular definition. Why? Because what we ‘expect’ is what we consider most ‘likely’, which is the outcome that has the highest ‘probability’. We cannot define ‘probability’ without first defining ‘expect’, and we cannot define ‘expect’ without first defining ‘probability’ or one of its synonyms.
We could try to escape by saying that what we ‘expect’ is what we think will happen, only that would be wrong. The word ‘will’ is too definite here, implying certainty. When we say we expect a die will roll a number less than five, we are not saying that we are certain that will be the case. If it were, and we rolled the die one hundred times in succession, we would have that expectation before each roll, so we would be certain that no fives or sixes occurred in the hundred rolls. Yet the probability of getting no fives or sixes in a hundred rolls is about two in a billion billion, which is not very likely at all. We could dispense with the ‘certainty’ and instead say that we think a one, two, three or four is the ‘most probable’ outcome for the next roll. But then we’re back in the vicious circle, as we need to know what ‘probable’ means.
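The arithmetic behind that ‘two in a billion billion’ figure is easy to check. A minimal sketch in plain Python – it is nothing more than (4/6) raised to the hundredth power:

```python
# Probability that a single fair-die roll shows a number less than five,
# i.e. avoids both five and six.
p_single = 4 / 6

# Probability that all 100 independent rolls avoid fives and sixes.
p_hundred = p_single ** 100

print(p_hundred)  # roughly 2.5e-18 – about two in a billion billion
```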
- What does ‘expected’ mean?
There is a formal mathematical definition of probability that removes all vagueness from a mathematical point of view, and enables us to get on with any calculation, however complex. Essentially it says that ‘probability’ is any scheme for assigning a number between 0 and 1 to every imaginable outcome (note how I carefully avoid using the word ‘possible’ here), in such a way that the numbers for all the different imaginable outcomes sum to 1.
But that definition tells us nothing about how we assign numbers to outcomes. It would be just as valid to assign 0.9 to heads and 0.1 to tails as it would to assign 0.5 to both of them. Indeed, advanced probability of the kind used in pricing financial instruments involves using more than one scheme at the same time, with the different schemes assigning different numbers (probabilities) to the same outcome.
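The formal definition really does ask no more than this. A minimal sketch – the function name and the example schemes are mine, invented purely for illustration:

```python
def is_valid_assignment(probs):
    """Check only the bare mathematical requirements: every number lies
    in [0, 1] and the numbers sum to 1 (within floating-point tolerance)."""
    return (all(0.0 <= p <= 1.0 for p in probs.values())
            and abs(sum(probs.values()) - 1.0) < 1e-9)

fair = {"heads": 0.5, "tails": 0.5}
biased = {"heads": 0.9, "tails": 0.1}
broken = {"heads": 0.7, "tails": 0.7}   # sums to 1.4 – not a probability

print(is_valid_assignment(fair))    # True
print(is_valid_assignment(biased))  # True – mathematically just as valid
print(is_valid_assignment(broken))  # False
```

The point of the sketch is what it does not contain: nothing in the mathematics prefers the fair scheme to the biased one.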
This brings us no closer to understanding why we assign 0.5 to heads.
Another approach is to say that we divide up the set of all potential outcomes as finely as we can, so that every outcome is equally likely. Then if the number of ‘equally likely’ outcomes is N, we assign the probability 1/N to each one.
That seems great until we ask what ‘equally likely’ means, and then realise (with a sickening thud) that ‘equally likely’ means ‘has the same probability as’, which means we’re stuck in a circular definition again.
- What does ‘equally likely’ mean?
After much running around in metaphorical circles, I have come to the tentative conclusion that ‘likely’ is a concept that is fundamental to how we interpret the world, so fundamental that it transcends language. It cannot be defined. There are other words like this, but not many. Most words are defined in terms of other words, but in order to avoid the whole system becoming circular, there must be some words that are taken as understood without definition – language has to start somewhere. Other examples might be ‘feel’, ‘think’ and ‘happy’. We assume that others know what is meant by each of these words, or a synonym thereof, and if they don’t then communication is simply impossible on any subject that touches on the concept.
Or perhaps ‘likely’ and ‘expect’ may be best related to a (perhaps) more fundamental concept, which is that of ‘confidence’, and its almost-antonym ‘surprise’. Something is ‘likely’ if we are confident – but not necessarily certain – that it will happen, which is to say that we would be somewhat surprised – but not necessarily dumbfounded – if it did not happen. I think the twin notions of confidence and surprise may be fundamental because even weeks-old babies seem to understand surprise. The game of peek-a-boo relies on it entirely.
Once we have these concepts, I think we may be able to bootstrap the entire probability project. The six imaginable dice roll numbers will be equally likely if we would be very surprised if out of six million rolls, any of the numbers occurred more than two million times, or not at all.
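A simulation can illustrate that bootstrap, scaled down to keep it quick – 60,000 rolls instead of six million, so the ‘astonishing’ thresholds scale down correspondingly. The seed is arbitrary, chosen only to make the sketch repeatable:

```python
import random
from collections import Counter

rng = random.Random(42)          # fixed seed so the sketch is repeatable
rolls = 60_000
counts = Counter(rng.randint(1, 6) for _ in range(rolls))

for face in range(1, 7):
    print(face, counts[face])
# For a fair die each count comes out near 10,000. A face appearing more
# than 20,000 times, or never appearing at all, would be astonishing –
# and that surprise is what 'equally likely' is cashing out.
```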
Philosophers thinking about probability discuss various frameworks for assigning probabilities to events. The most popular are
- the Frequentist framework, which bases the probability of an event on the proportion of past observations in which it occurred;
- the Bayesian approach, which starts with an intuitively sensed prior probability, and then adjusts it to take account of subsequent observations using Bayes’ Law; and
- the Symmetry approach, which argues that events that are similar to one another via some symmetry should have the same probability.
It would make this essay much too long to go into any of these in greater detail. But none of them lay out a complete method. I suspect they all have a role to play in how we intuitively sense probabilities of certain simple events. But I feel that there is still some fundamental, unanalysable concept of confidence vs surprise that is needed to cover the gaps left by the large vague areas in each framework.
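To give at least the Bayesian framework a concrete shape, here is a toy sketch in which we start agnostic between a fair coin and a biased one, and update our belief after each toss. The 0.75 bias and the 0.5 prior are invented for illustration; this is a sketch of the update rule, not a full account of the framework:

```python
def bayes_update(prior_biased, p_heads_if_biased, p_heads_if_fair, saw_heads):
    """Posterior probability that the coin is biased, after one toss."""
    p_obs_biased = p_heads_if_biased if saw_heads else 1 - p_heads_if_biased
    p_obs_fair = p_heads_if_fair if saw_heads else 1 - p_heads_if_fair
    numerator = prior_biased * p_obs_biased
    denominator = numerator + (1 - prior_biased) * p_obs_fair
    return numerator / denominator

# Start agnostic (prior 0.5) between a fair coin and one biased 0.75
# towards heads, then observe three heads in a row.
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, 0.75, 0.5, saw_heads=True)
print(round(belief, 3))  # → 0.771
```

Notice that the machinery says nothing about where the prior of 0.5 comes from – which is exactly the gap that, I suspect, the unanalysable notion of confidence has to fill.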
Here is one last question to consider:
- A surgeon tells a parent that their three-year old daughter, who is in a coma with internal abdominal bleeding following a car accident, has a 98% chance of a successful outcome of the operation, with complete recovery of health. In the light of the above discussion, it seems that nobody can explain what that 98% means. Yet despite the lack of any explicable meaning, the parent is so relieved that they dissolve in tears. Why?
Bondi Junction, January 2017
I don’t believe in reincarnation in the sense that I could be (unwittingly) the reincarnated soul of Marie Antoinette, but I think that there may be a germ of insight, perhaps even wisdom, in reincarnation myths.
There, I’ve said it. I’ve probably lost half my small readership right there. Let me try to explain, before I lose the other half. It’s not as bad as you think.
‘Here’s the thing’, as I am told young people say these days:
I am very taken by David Hume’s views on the self (as I am by many of Hume’s ideas). He was unable to find that he had any persistent self, no matter how hard he introspected (is that a word?). All he could find was ‘bundles of perceptions’. There is no perceptible separate watcher – a homunculus sitting in an armchair, as it were – watching those perceptions on a High Definition screen with SurroundSound. The perceptions just happen. And they are tied together – identifiable as the perceptions of David Hume – by occurring in the presence of the memories of the physical human body that bears that name.
There is a continuity to the stream of perceptions. They succeed one another, blend together and overlap. But that lasts only for as long as consciousness does. It is interrupted, usually at least once a day, by sleep, anaesthesia, concussion.
We say that we ‘return to consciousness’ but really it is not a return but rather a completely new stream of consciousness. The only connection to the previous one is that it occurs in association with the same human body, and hence that it has essentially the same set of memories.
We do not remember returning to consciousness. Or at least I don’t. Daniel Dennett explains this nicely in relation to peripheral vision. He says that we can’t perceive the boundary of our visual field (try it!) because to perceive a boundary we need to be able to see both sides of the boundary and, by definition, we can’t see the far side of the boundary of our visual field. Similarly, we cannot perceive the instant of regaining consciousness because to do so would require our being conscious of not being conscious immediately before waking up, and that is a contradiction. This only applies to dreamless sleep because when we wake from a dream we were conscious on both sides of the boundary, and we quickly realise that what went before was a dream.
So in a sense, the world is just full of streams of consciousness, each made up of a series of overlapping sensations and thoughts, with most streams lasting no longer than about sixteen hours. We can, if we wish, group those streams of consciousness based on the human body with which the stream is associated, but that grouping is fairly arbitrary. We could just as well have grouped them by the day on which they commenced, by length, or by mood.
Well, perhaps it’s not entirely arbitrary. Apart from memory and a shared body, there is one other thing tying a body’s streams of consciousness together, and that is that each stream cares very much about future streams that will be associated with that body. So Tom, as he goes to bed, cares more that tomorrow he has to wake up 15 minutes earlier to get to an 8:30 meeting at work than he does that Rajesh in Mumbai is going into hospital for a triple bypass operation, even though the stream of consciousness that is Tom-today is as distinct from Tom-tomorrow as it is from Rajesh-tomorrow. This chauvinistic, body-centric caring is easily explicable by evolution. Animals that cared about their future states of consciousness – particularly about whether the animal would be healthy and happy in future – survived better than animals that did not. We can’t fight it. That’s just the way our nervous systems are configured. But neither can we draw any metaphysical conclusions about the existence of some spooky continuous self or ‘soul’ from it.
If one is a Cartesian Dualist, one believes that there is a ‘soul’ attached to a body, that is non-physical – whatever that means. Although Dualism was the predominant metaphysical view for the last few millennia, it appears to be a minority view now. One can be an Immaterialist – denying the existence of matter and asserting that everything is mental – or one can be a Materialist – asserting that minds are just physical phenomena that we don’t properly understand yet. But either way, most people are Monists – meaning that they believe the world is basically only made of one fundamental kind of ‘stuff’. I feel quite fond of Dualism, if only because it is quaint, old-fashioned and a minority view – which is always attractive to me (which is why I’m typing this with a non-Microsoft word processor on a non-Microsoft, non-Apple operating system). But try as I might I just can’t believe it, so I’m afraid I’ll have to leave it aside and plough on with my Monist biases.
What about before we were conceived then? Nobody seems to feel any big deal about the fact that there are no streams of consciousness associated with their body before they were conceived. I wasn’t conscious then, so I wasn’t around to notice the fact that I wasn’t conscious. Nor can I identify my first conscious moment, probably because of the Dennettian boundary problem already mentioned. I suspect that ‘my’ body gradually attained consciousness, and gradually attained memory, over the first months or years of ‘its’ life.
I feel similarly about what will happen when this body dies. Since I don’t believe in a Christian, Islamic, Valhallian or Olympian after-life, I think that there will simply be no subsequent streams of consciousness associated with this body, and no streams of consciousness that share memories with streams of consciousness of this body. It’s Just As Well really, because after a few years, the body will have been gobbled up by worms and/or fish and/or bacteria and there will be no body left with which streams of consciousness could associate themselves.
And yet… there is something in being human that makes it almost impossible to comprehend that the consciousness of this body will cease forever. Perhaps it’s an evolutionary advantage to feel that, or maybe it’s just random. But it’s there, and I think that that feeling accounts for why nearly all cultures have developed some sort of after-life mythology.
Some deny the cessation by believing in an after-life – a continuation of the ‘same’ consciousness. It’s by no means obvious what ‘the same’ means here. My guess is that it means there will be future streams of consciousness that share memories with the body’s pre-death streams of consciousness. Some deny the cessation of consciousness, or at least mortality, by considering their children or grandchildren to be continuations of themselves. Others deny it by looking at their achievements – their legacy to the human race.
Here’s my answer:
After the death of this body, ‘I’ will still be conscious because every consciousness is an ‘I’. In other words, ‘my’ consciousness won’t cease because at any point in time, all those that are conscious will be conscious, and all those consciousnesses are ‘mine’ because every stream of consciousness is of a ‘me’.
‘My’ streams of consciousness don’t stop happening. All that stops is that there are no more streams of consciousness associated with this particular body, and this set of memories. So – and here’s the wibbly-woo, new-agey bit – ‘I’ become those other streams of consciousness, because they are all ‘I’. We were never really separate, it’s just that each individual stream of consciousness is locked in its own perspective for as long as it lasts – sixteen hours or so.
There’s all sorts of metaphors one could use for this, and they’re all wacky, but they have to be, since we are dealing with the indescribable. One I like is the idea of consciousness as some sort of fluid that is subject to conservation laws in the same way as energy, momentum, angular momentum, electric charge and matter. So whenever a stream of consciousness ends, because of sleep, death or whatever, the amount of consciousness it contains is released and flows into other streams. It’s a metaphor, alright (!?!), so don’t go reaching for those scientific instruments or ectoplasm-detectors or whatever they had in Ghostbusters to try to catch and measure this fluid.
Another metaphor is that in a sense ‘I’ am imprisoned in my own consciousness, unable to perceive what another perceives, no matter how close I am to them. When my stream of consciousness ends – usually around 11:15pm – ‘I’ am set free and can become someone else – another ‘I’. For some reason I visualise a bird – probably a dove (how twee) flying out from a cage whose door has been opened.
It is key to this perspective that consciousness is fungible, not hypothecated (after all what’s the earthly use of studying finance if you can’t insert technical financial terms at strategic points in a philosophical discourse, just to show off). In other words it’s like money. We can no more say that the consciousness from my stream of 29 May 2015 became that of Elton John on 30 May 2015 than we can say that my deposit in the bank paid for part of a particular customer’s home loan. That dismisses the possibility of my being Marie Antoinette right off the bat.
But just as all of a bank’s liabilities fund all of its assets, the consciousness that is liberated when I go to sleep tonight will replenish the consciousness of all streams that are going at that time. So I am connected to Marie Antoinette not because her consciousness – as a discrete entity – became specifically mine (with many other users in the 200 years between), but because we all share in the same cosmic pool of consciousness, that is particular to no body, and is drawn upon and supplemented billions of times per day as streams commence and end, be it by sleep, waking, death, birth, fainting, or other cause.
In that sense, ‘I’ am Mahatma Gandhi, John Lennon, Paul McCartney, Elvis Presley, Adolf Hitler, Charles Manson, Florence Nightingale, Elie Wiesel, Hypatia of Alexandria, Lucretia Borgia, George Best, Babe Ruth, Don Bradman, Peter Paul Rubens, Ludwig van Beethoven, Albert Einstein, John Cleese, Graham Chapman and, more importantly – many billions of other less famous people – clever, challenged, creative, dull, kind, cruel, indifferent, confident, shy. ‘I’ may be dogs and bandicoots and other animals too. But that’s the subject of another essay.
Arguably, a problem with this perspective is that consciousness will not persist indefinitely – at least not in this universe. We can be pretty confident that when the universe finally approaches heat death, no life will remain. So where does the consciousness go then? Well, that’s where the fact that the whole idea is a metaphor comes in handy. One great thing about metaphors is that you can drop them when something doesn’t fit, and pick them up again a little later. No metaphor fits every situation, because if it did, it wouldn’t be a metaphor (it would be the thing itself). So we drop it and think of something else, just as Shakespeare did when he realised that seas don’t generally fire arrows at you. Oh, no wait….
But why bother with a metaphor at all?
One might object that it’s silly to use a metaphor to orient oneself towards experience, especially when one knows that the metaphor will fail in some instances. My response to that is that every single one of our beliefs is a metaphor, and fails in some instances.
I tell myself I am sitting on a stool in front of a table to type this. The stool is solid and brown and the table is solid and purple. Yet that’s all metaphor too. The atomic theory tells us that what I’m sitting on is mostly empty space, and has no intrinsic colour. It has no integrity either, as it is constantly exchanging particles with its surroundings. But that too is only a metaphor, as quantum mechanics casts doubt on the whole notion of persistent particles, and who knows what even weirder theory will replace quantum mechanics and reveal it to be the crude metaphor that it undoubtedly is. It’s turtles all the way down, and there’s no reason to suppose that there’s a bottom.
Metaphors are neither true nor false, but they can be useful. We are story-telling animals, and stories – aka Metaphors – are the only way we can make any sense of life. They give it a shape that we can handle. Quantum mechanics is a useful metaphor if we want to make a laser (but not if we want to explain a black hole), and my metaphorical idea of this stool is useful if I want to have the experience that I call ‘sitting down’. So my metaphor of consciousness as a shared, universal, substance is useful to me if I want to think about inconceivable issues such as the non-existence of a persistent self, the lack of any conscious processes of this body before it was conceived and after it dies, and the relationship of all we people, and other animals, to one another.
Metaphors are also sometimes called myths, and they are just as good when they have that name.
Is this all just avoidance?
I can’t help pre-empting criticisms. It’s a vicious habit I picked up, I don’t know when but a long time ago. The wisdom of the ages says don’t bother, because it makes one’s writing longer, more complex, disjointed, ugly and harder to read. And critics rarely pay attention to one’s pre-emptions anyway. I can write “most dogs have fur that causes allergies in some people, but poodles don’t”, and some eager person will still sometimes respond “aha, but what about poodles? Got you there!”.
But since, like many people, I am my own worst critic I can’t help the odd pre-emption (of my own self-criticism), so I’ll allow myself one (or is it two? Did I already do one? We addicts are hopeless). Here it is.
Isn’t this all just some pathetic attempt to rationalise one’s way out of a fear of death by postulating some ridiculous Universal Consciousness? Why not just admit that when a body dies, it has no more conscious experiences, and that’s that?
Well Andrew (I reply), I’m glad you asked that question. Firstly I’d just like to observe that I did already say that (I believe) a dead body has no more conscious experiences, and there will be no more conscious experiences that have any memories of experiences that the body had. So this myth/metaphor doesn’t seek to deny or avoid that.
Nor is the myth relevant to fear of death, at least not for me. I used to fear death when I believed in a personal after-life, because I feared the punishments that had been threatened in that after-life if I didn’t conform to the strict expectations laid out in a rather large book of unrealistic rules. In fact I even feared the alternative of being ‘rewarded’ with eternal happiness, because I was convinced that no matter what treats and delights that reward comprised, I would be excruciatingly and agonisingly bored within a few billion years. But once I ceased to believe in an after-life, I ceased to believe in the possibility of such punishments, and hence I ceased to fear death. That is different of course from the fear of how one gets there (‘dying’), as I imagine that being squashed under the wheels of a Land Rover or being eaten by enraged Koalas is rather uncomfortable, albeit only for a short while.
No, the purpose of the myth, as far as I understand it, is twofold: first to escape the niggardly narrowness of the first-person perspective that is imposed on us by our bodily structure; second to open up possibilities for contemplating the mystery of consciousness, a phenomenon that no amount of scientific investigation seems ever likely to be able to explain. Given how mysterious and indefinable consciousness is (as opposed to mere brain activity that interprets sensory data, processes information and generates physical actions including speech), how unnecessary to the evolutionary account of the human brain it is, and how we (ie David Hume and I) are unable to detect any subject (‘self’) of this consciousness, it appears less ridiculous to me to regard consciousness as something primal, something universal that transcends individual bodies, than as an inexplicable phenomenon that arises in association with lumps of meat that are configured in just the right way.
Does that sound like a Humph! ? It wasn’t meant to. Ah well, if it is so, let it be so.
Marie Antoinette, 16 October 1793.
In my writings I have not infrequently been dismissive of metaphysics, arguing that most metaphysical claims are meaningless, unfalsifiable, and of no consequence to people’s lives (leaving aside the unfortunate historical fact that many people have been burned at the stake for believing metaphysical claims that others disliked).
Perhaps it is time to relent a little – to give the metaphysicians a little praise. At least I will try. The basis for this attempt is a re-framing of what metaphysics is about. Instead of thinking of it as a quasi-scientific activity of trying to work out ‘what the world is like’, perhaps we could instead think of it as a creative, artistic activity, of inventing new ways of thinking and feeling about the world. Metaphysics as a craft, as delightful and uncontentious as quilting.
Why would anybody want to do that? Well I can think of a couple of reasons, and here they are (except that, like the chief weapons of Python’s Spanish Inquisitor, the number of reasons may turn out to be either more or less than two).
We know that there is a very wide range of human temperaments, longings, fears and attachments. A perspective that is inspiring to one person may be terrifying to another, and morbidly depressing to a third. For instance some people long to believe in a personal God that oversees the universe, and would feel their life to be empty and meaningless without it. Others regard the idea with horror. Some people are very attached to the idea that matter – atoms, quarks and the like – really, truly exists rather than just being a conceptual model we use to make sense of our experiences. Philosophical Idealists (more accurately referred to as Immaterialists) have no emotional need for such beliefs, and accordingly deny the existence of matter, saying that only minds and ideas are real. Indeed some, such as George Berkeley, regard belief in matter as tantamount to heresy, which is why the subtitle of his tract ‘Three dialogues between Hylas and Philonous’, which promoted his Immaterialist hypothesis, was ‘In opposition to sceptics and atheists’.
So the wider the range of available metaphysical hypotheses, the more chance that any given person will be able to find one that satisfies her, and hence be able to live a life of satisfaction, free of existential terror. Unless of course what they really long for is existential terror, in which case Kierkegaard may have a metaphysical hypothesis that they would love.
One might wonder – ‘why do we need metaphysical hypotheses, when we have science?’ The plain answer to this is ‘we don’t’. But although we do not need them, it is human nature to seek out and adopt them. That’s because, correctly considered, science tells us not ‘the way the world is’, but rather, what we may expect from the world. A scientific theory is a model that enables us to make predictions about what we will experience in the future – for instance whether we will feel the warmth of the sun tomorrow, and whether if we drop an apple we will see it fall. Scientific theories may seem to say that the world is made of quarks, or spacetime, or wave functions, but they actually say no such thing. What they say is, if you imagine a system that behaves according to the following rules – which might be rules about subatomic particles like quarks – and you observe certain phenomena (such as my letting go of the apple), then the behaviour of that imaginary system can guide you as to what you will see next (such as the apple falling to the ground).
It’s just as well that scientific theories say nothing about ‘the way the world is’, because they get discarded every few decades and replaced by new ones. The system described by the new theory may be completely different from that described by the previous one. For instance the new one may be all about waves while the previous one was all about tiny particles like billiard balls (electrons, protons and neutrons in the Rutherford model of the atom). But most of the predictions of the two theories will be identical. Indeed, if the old theory was a good one, it will only be in very unusual conditions that it makes different predictions from those of the new theory (eg if the things being considered are very small, very heavy or very fast). So by recognising that scientific theories are descriptions of imaginary systems that allow us to make predictions, rather than statements about the way the world is, we get much greater continuity in our understanding of the world, because not much changes when a theory is replaced.
I think of metaphysics as the activity of constructing models of the world (‘worldviews’) that contain more detail and structure than there is in the models of science. We do not need the more detailed models of metaphysics for our everyday life. Science gives us everything we need to survive. But, being naturally curious creatures, we tend to want to know what lies behind the observations we make, including the observations of scientific ‘laws’. So we speculate – that the world is made of atoms like billiard balls, or strings, or (mem’)branes, or a wave function, or a squishy-wishy four-dimensional block of ‘spacetime’, or quantum foam, or ideas, or noumena, or angels, demons, djinn and deities. This speculation leads to different mental models of the world.
So metaphysics adds additional detail to our picture of the world. Some suggest that it also adds an answer to the ‘why?’ question that science ignores (focusing only on ‘how?’). I reject that suggestion. As anybody who has ever, as a child, tried to rile a parent with the ‘but why?’ game knows – and as anybody who has been thus riled by a child knows – any explanation at all can be questioned with a ‘but why?’ question. No matter how many layers of complexity we add to our model, each layer explaining the layer above it, we can always ask about the lowest layer – ‘but why?’ Whether that last layer is God, or quarks, or strings, or the Great Green Arkleseizure, or even Max Tegmark’s Mathematical Universe, one can still demand an explanation of that layer. By the way, my favourite answers to the ‘But why?’ question are (1) Just because, (2) Nobody knows and (3) Why not? They’re all equally valid but I like (3) the best.
Some of these mental models have strong emotional significance, despite having no physical significance. For instance ‘strong solipsism’ – the belief that I am the only conscious being – tends to frighten people and make them feel lonely. So most people, including me, reject it, even though it is perfectly consistent with science. Some people get great comfort from metaphysical models containing a god. Others find metaphysical models without gods much more pleasant.
So I would say that metaphysics, while physically unnecessary, is something that most people cannot help doing to some extent, and that people often develop emotional attachments to particular metaphysical models.
Good metaphysics is a creative activity. It is the craft of inventing new models. The more models there are, the more people have to choose from. Since there are such great psychological and emotional differences between people, one needs a great variety of models if everybody that wants a model is to be able to find a model with which they can be comfortable.
Bad metaphysics (of which there is a great deal in the world of philosophy) is trying to prove that one’s model is ‘the correct one’. I call this bad because there is no reason to believe that there is such a thing as ‘the correct model’ and even if there were one, we’d have no way of finding out what it is. There can be ‘wrong’ models, in the sense that most people would consider a model wrong if it is logically inconsistent (ie generates contradictions). But there are a myriad of non-contradictory models, so there is no evidence that there is such a thing as ‘the correct model’. Unfortunately, it appears that most published metaphysics is of this sort, rather than the good stuff.
It’s worth noting that speculative science is also metaphysics. By ‘speculative science’ I mean activities like string theory or interpretations of quantum mechanics. I favour Karl Popper’s test for whether a model is (non-speculative) science, which is whether it can make predictions that will falsify the model if they do not come true. A model that is metaphysical can move into the domain of science if somebody invents a way of using it to make falsifiable predictions. Metaphysical models have done this in the past. A famous example is the ‘luminiferous aether’ theory, which was finally tested and falsified in the Michelson-Morley experiment of 1887. Maybe one day string theorists will be able to develop some falsifiable predictions from the over-arching string theory model[i] that will move it from the realm of metaphysics to either accepted (if the prediction succeeds) or discarded (if the prediction fails) science. However some metaphysical models seem unlikely to ever become science, as one cannot imagine how they could ever be tested. The debate of Idealism vs Materialism (George Berkeley vs GE Moore) is an example of this.
So I hereby give my applause to (some) metaphysicians. Some people look at philosophy and say it has failed because it has not whittled down worldviews to a single accepted possibility. They say that after three millennia it still has not ‘reached a conclusion’ about which is the correct worldview. I ask ‘why do you desire a conclusion?’ My contrary position is to regard the proliferation of possibilities, the generation of countless new worldviews, as the true value of metaphysics. The more worldviews the better. Philosophy academics working in metaphysics should have their performance assessed based not on papers published but on how many new worldviews they have invented, and how evocatively they have described them to a thirsty and variety-seeking public. Theologians could get in on the act too, and some of the good ones (a minority) do. Rather than trotting out dreary, flawed proofs of the existence of God, the historicity of the resurrection, or why God really does get very cross if consenting grown-ups play with one another’s private parts, they could be generating creative, inspiring narratives about what God might be like and what our relationship to that God might be. They could manufacture a panoply of God mythologies, one to appeal to every single, unique one of us seven billion citizens of this planet. Some of us prefer a metaphysical worldview without a God, but that’s OK, because if the philosopher metaphysicians do their job properly, there will be millions of those to choose from as well. Nihilists can abstain from all worldviews, and flibbertigibbets like me can hop promiscuously from one worldview to another as the mood takes them.
We need more creative, nutty, imaginative, inspiring metaphysicians like Nietzsche, Sartre, Simone Weil and Søren Kierkegaard, not more dry, dogmatic dons that seek to evangelise their own pet worldview to the point of its becoming as ubiquitous as soccer.
Bondi Junction, January 2015
[i] Not just a prediction of one of the thousands of sub-models. Falsifying a sub-model of string theory is useless, as there will always be thousands more candidates.