When it comes my turn to be king of the world I will ban the word ‘obviously’, together with its fellow travellers ‘clearly’ and ‘evidently’. My challenge to you, the other inhabitants of the kingdom of Earth, is this: find me a single example of a sentence that is improved by the use of the word ‘obviously’!
I assert that, not only is ‘obviously’ never an improvement to a sentence, but it usually degrades a sentence into which it is inserted and renders it foolish, pompous, or just plain false.
The first memory I have of encountering this rebarbative word is in mathematics lectures at university. It was the early 1980s. In those days lecturers performed their proofs live on the blackboard with chalk – a difficult endeavour indeed. As soon as you saw that word on a board, you felt that if you couldn’t instantly see why that line followed logically from the line before, you must be very dim. If you hadn’t seen the connection by the time they finished writing the next line, you started to panic. The only solution was to accept the claim without challenge and try to keep up with what came next. There would be time that evening to go over your notes and try to work out why the claim was ‘obviously’ true.
Sometimes in the evening you could figure it out without difficulty. Sometimes you figured it out but it needed a page or so of closely written reasoning to justify it. Sometimes you couldn’t make it out at all. That’s when you had to summon your courage and challenge the lecturer about it before the next lecture. You’d sidle up to him and say ‘Sorry to bother you but I can’t see how you get line five. Can you please explain it?’
In short, it was rarely obvious. Even when it was moderately obvious, there were other lines that were more obvious, for which the tag was not used.
I started to detect a pattern. The word was being used to cover for the fact that the lecturer couldn’t remember, off the top of their head, the justification for the line. By writing ‘obviously’ they made potential hecklers too worried about seeming dumb to challenge the claim on the spot. It was the Emperor’s New Clothes all over again. What was needed was the little boy to blurt out ‘But it’s not obvious at all. In fact I can’t even see it.’
I forgive those lecturers, because what they were doing was very difficult. I would feel under a lot of pressure having to perform mathematical derivations on a blackboard in front of a specialist audience.
It is less forgivable when it occurs in textbooks. In many a mathematics or physics textbook I have come across the prefix ‘obviously’ before a line that was the exact opposite. The authors of textbooks do not have the excuse of having to come up with explanations on the spot, but they are nevertheless under time pressure: unless a text is chosen as a key text for courses at many major schools or universities, it will not bring in much revenue, so every extra hour spent writing it eats into what little profit there is. Why spend hours deriving a proof of something you are fairly sure is true, but don’t remember why, when you can just write ‘obviously’ in half a second, and move on to the next line?
I don’t begrudge them saving that time, but there are more honest and helpful ways to do it. Other phrases that can be used are “It turns out that…” and “It can be shown that…”. These make it clear that what the author has written is not a full proof, and that the step over which they are glossing is not trivial. When I encounter those I don’t mind very much because they don’t contain the implicit challenge “If you can’t see why this line follows from the last one you must be stupid!”. The most generous excuse of all is “It is beyond the scope of this paper / text / chapter to prove X, so we will take it as read”. That way the reader knows that the proof is long and difficult.
It is annoying when academics use the word ‘obviously’ in that way, but at least they use it in relation to a claim that is true. In political argument, that is not the case. People use ‘obviously’ to justify any claim, no matter how dubious, or sometimes just plain wrong. Examples abound, from politicians, shock jocks and reactionary newspaper columnists.
“Obviously, decriminalising marijuana use would make the problem worse”
“Obviously, it makes no difference whether Australia reduces its greenhouse gas emissions, since ours only make up a small part of the world’s total”
“Obviously, what’s needed to solve our city’s traffic problems is to build bigger roads”
“Obviously, we have to be cruel to refugees, otherwise many more would come to our country”.
It’s used as an excuse to not even consider any evidence that may be available, to not even entertain rational discussion on a topic. It implies that anybody that does not accept the claim must be stupid or have dishonest intentions. It’s an attempt to shut down inquiry and discussion, lest that lead to an outcome against which the speaker has an entrenched prejudice.
Is anything ever obvious?
Perhaps, but we need to be very careful in suggesting that. What is obvious to one may not be at all obvious to another. A high-visibility yellow vest is obvious to normal-sighted people but not to the colour-blind. A person walking across a basketball court in a gorilla suit is not obvious to observers that have been tasked with counting the number of times each player passes the ball.
Further, beliefs in what is obvious are often founded on stereotypes that may be damaging. Is it obvious that boys are better at maths than girls, or that men cannot be trusted to care for other people’s children?
This leads me to wondering whether there is any sentence in which the word ‘obviously’ can play a useful role. I don’t apply the same challenge to ‘obvious’ because it can have observer-dependent roles, as in “It eventually became obvious to Shona that the doorman was not going to let her into the club”. Or we can use it to express relative obviousness, as in “Not wanting to mislay them, he left his keys in the most obvious position he could think of – in the middle of the empty kitchen bench”.
But “obviously”? That adverbial suffix ‘ly’ seems to strip from the adjective any ability to convey subtleties of degree. There seems to be no way of using it that does not imply that anybody who does not agree with the following proposition, and understand why it must be correct, is simply stupid.
No wonder it is used either as a tool of bullying or as a lazy attempt to escape the need to justify one’s claims.
Sometimes it occurs without intent, as a verbal tic. Like most verbal tics, it is rooted in the insecurity of the speaker. Although it sounds like it has an opposite meaning to other tics like ‘if that makes sense’ or ‘if you like’, it serves the same purpose in deflecting attention from the speaker’s insecurity – but in an offensive rather than a defensive way. In both cases the speaker hopes not to be challenged. With ‘if that makes sense’ the hope is that the humility it projects will discourage a listener from saying ‘that doesn’t sound right’, if only out of charity to the speaker. The ‘obviously’ is like the puffed-out frill of a lizard – a pretence at invulnerability intended to discourage attack: ‘Challenge me on this and you’ll end up looking foolish!’. Except that the intent is usually subconscious and, once one has used the phrase many times, it becomes reflexive, devoid of any meaning, or even of subconscious intent.
I vowed quite some time ago never to use the word, or any of its synonyms. I think I have managed to keep the vow. I hope I have. But I cannot be sure. One uses so many words in the course of a week, that it’s hard to keep track of them all.
If something is truly obvious to almost everybody, there should be no need to state that. It will be obvious that it is obvious. If, as is more often the case, it is far from obvious, it is foolish at best, and dishonest at worst, to imply that it is.
Bondi Junction, April 2019
I’ve had a few thoughts and discussions since I wrote this article about drawing stars. I thought they were worth sharing.
Stars within stars
The first observation is that stars, drawn in the way I described, contain other stars, nested within one another like a set of Russian dolls. Recall that we use the term ‘n-k star’ to indicate a star with n points such that, if we draw a circle through all the points and consider the boundary of that circle as split up into n curve segments bounded by the points, then the straight line from one point to another traverses k of those curve segments. Like this, for a 7-3 star:
Yuriy made an interesting observation about the stars in my last article: that the sum of the angles of all the points of an n-k star is 180° × (n-2k). In the course of thinking of ways to prove that formula true, I came upon the realisation that an n-k star contains an n-(k-1) star, which contains an n-(k-2) star and so on down to the n-1 star, which is a regular, n-sided polygon.
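Before trying to prove Yuriy’s formula, it is easy to check it numerically. Here is a rough Python sketch (the function name is my own invention; the drawing program attached below is a separate thing, in R) that computes the angle at a single point straight from coordinates:

```python
from math import acos, cos, sin, pi, hypot

def point_angle_deg(n, k):
    """Angle in degrees at one point of an n-k star: the angle between
    the two chords running from the point at angle 0 to the points
    k steps away on either side of it."""
    px, py = 1.0, 0.0                      # the point itself, on the unit circle
    ax = cos(2 * pi * k / n) - px          # vector to the point k steps forward
    ay = sin(2 * pi * k / n) - py
    bx = cos(-2 * pi * k / n) - px         # vector to the point k steps backward
    by = sin(-2 * pi * k / n) - py
    cosine = (ax * bx + ay * by) / (hypot(ax, ay) * hypot(bx, by))
    return acos(cosine) * 180 / pi
```

Multiplying the result by n recovers 180° × (n-2k) to machine precision for every n and k I have tried.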
Here’s a picture that shows this for an 11-5 star.
The red, outer 11-5 star contains a green 11-4 star, which contains a red 11-3 star, which contains a green 11-2 star, which contains a red 11-1 star (polygon). The points of each inner star are the innermost vertices of the star that immediately contains it. Since we will be referring to those vertices again later, let’s make up a name for them. We’ll call such a vertex a ‘tniop’, since it is in a sense the opposite of a point. The above diagram shows a point and a tniop. We’ll call the stars inside a star ‘sub-stars’.
We saw in my last essay that, when a star cannot be drawn without taking the pencil off the paper, it is made of a number of ‘component stars’ that are rotated copies of one another. Here is a picture of a 16-6 star, which uses different colours to highlight the two component stars. We have two 8-3 stars, one light blue and one red.
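The way the components arise can be seen in a few lines of code. Starting at any point and repeatedly jumping k points around the circle traces one pencil path, and the path closes early – leaving points over for the other components – exactly when n and k share a factor. A rough Python sketch (the function name is my own; this is not the attached R program):

```python
from math import gcd

def star_paths(n, k):
    """Pencil paths for an n-k star: from each starting point, keep
    jumping k points around the circle until the path closes.  When
    gcd(n, k) > 1 the walk closes early, so the star splits into
    gcd(n, k) separate component stars."""
    d = gcd(n, k)
    paths = []
    for start in range(d):
        path = [start]
        p = (start + k) % n
        while p != start:
            path.append(p)
            p = (p + k) % n
        path.append(start)  # close the loop back to the starting point
        paths.append(path)
    return paths
```

star_paths(7, 3) gives the single path 0, 3, 6, 2, 5, 1, 4 and back to 0 – one unbroken pencil line – while star_paths(16, 6) gives two paths of eight points each, the two 8-3 components.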
And here is a picture that uses colour variation to show the sub-stars of each of the two components.
The 8-3 light blue star contains an 8-2 star which is made up of two components that are 4-1 stars (also known as ‘squares’) coloured blue and pink. Similarly, the 8-3 red star contains an 8-2 star which is made up of two components that are 4-1 stars (also known as ‘squares’) coloured yellow and green.
The 8-2 stars each contain an 8-1 star (octagon) as the intersection of two squares – pink and dark blue for one octagon and yellow and green for the other.
Finally, those two octagons between them bound a hexadecagon (16-sided polygon or 16-1 star). So altogether, in the one picture, we have:
- one 16-6 star (red and light blue)
- one 16-5 star (also red and light blue) inside that
- one 16-4 star (pink, green, dark blue and yellow) inside that
- one 16-3 star (also pink, green, dark blue and yellow) inside that
- one 16-2 star (also pink, green, dark blue and yellow) inside that
- one 16-1 star (also pink, green, dark blue and yellow) inside that
- two 8-3 stars (one red, one blue) making up the 16-6 star
- two 8-2 stars (one pink and dark blue, one yellow and green), one inside each of the 8-3 stars
- two 8-1 stars (octagon: one pink and dark blue, one yellow and green), one inside each of the 8-2 stars
- four 4-1 stars (squares: coloured pink, dark blue, yellow and green) which, in pairs, make up the 8-2 stars.
That’s sixteen stars altogether. What a lot of stars in one drawing! Can you see them all?
Here’s a different colouring that makes it easy to see all five 16-point stars:
Although the stars get smaller as k reduces, they do not shrink away to nearly nothing. In fact they get closer together as they go inwards, as if they are asymptotically approaching a circle of some fixed, minimum size.
To investigate this, I drew a 101-50 star:
You’ve probably noticed by now that I’m no longer drawing these by hand. My drawing is much too wobbly to capture the intricacies of stars-within-stars. So I wrote a computer program to draw them for me. I’ll try to remember to attach it at the end of the article, so that those of you who like mucking about with computers can muck about with it.
Anyway, that 101-50 star pretty well killed my hypothesis that the inner stars can’t get very small. In this one they almost disappear out of sight. I like the swirly patterns. I haven’t yet worked out whether they are really features of this very spiky, very complex, star, or whether they are just artefacts of the crudeness introduced by the computer’s need to pixelate.
Here’s a zoomed-in image of the interior of that star. Cool, eh?
This is a low-resolution image. I have saved a moderately high-resolution image of this star here. Zooming in and out is fun. It seems almost fractal as more patterns emerge from the inside when we zoom in. Also, the stars give the illusion that they are rotating as we zoom. To get the best effect you need to download the file (a .png image file) and then open it up, so that zooming is not limited by your internet connection’s speed.
Ratio of outer to inner radius
Let me briefly pick up on that idea above about whether there is some minimum inner radius for these stars. For each n, the spikiest n-k star is the one where k is the largest integer less than n/2, and this contains another k-1 stars, nested one within the other, down to the innermost, which is an n-sided regular polygon. We can work out the ratio of each star to the one immediately inside it, and use that to work out the ratio of the outermost (n-k) star to the innermost (n-1) star. The attached computer program contains trigonometric formulas to do that. Here are the ratios of the radii of the innermost to the outermost star for each n from 1 to 109:
We observe that the ratios generally go down as n increases, but the decline is not steady. It bumps up and down. I have highlighted the prime numbers with asterisks. Notice how the ratio for those is always lower than for the numbers around them. The two drivers of the ratio seem to be:
- the size of n: the ratio generally declines as n increases; and
- the number of different factors n has. Note how 16 (divisible by 2, 4, 8) has a higher ratio than 15 (divisible by 3, 5) and 56 (divisible by 2, 4, 7, 8, 14, 28) has a higher ratio than 55 (divisible by 5, 11).
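For anyone who wants to reproduce those ratios, here is a rough Python sketch (function names my own; the attached R program uses closed-form trigonometric formulas instead) that finds the points of each sub-star – the innermost crossings – by intersecting chords directly:

```python
from math import cos, sin, pi, hypot

def substar_ratio(n, k):
    """Radius of the points of the sub-star inside an n-k star,
    relative to the point radius.  The sub-star's points are the
    innermost crossings, found here by intersecting the chord from
    point 0 to point k with the chord from point k-1 to point n-1."""
    def pt(i):
        a = 2 * pi * i / n
        return cos(a), sin(a)
    (x1, y1), (x2, y2) = pt(0), pt(k)
    (x3, y3), (x4, y4) = pt(k - 1), pt(n - 1)
    # Standard determinant formula for the intersection of two lines.
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    ix, iy = x1 + t * (x2 - x1), y1 + t * (y2 - y1)
    return hypot(ix, iy)

def innermost_to_outermost(n):
    """Radius of the innermost star (the n-sided polygon) relative to
    the spikiest n-k star, k being the largest integer below n/2."""
    k = (n - 1) // 2
    ratio = 1.0
    for j in range(k, 1, -1):  # step inwards: n-k, n-(k-1), ..., n-2
        ratio *= substar_ratio(n, j)
    return ratio
```

It reproduces the bumps: the ratio for 16 comes out higher than for 15, and 56 higher than 55.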
This would be interesting to look into further, and to see if there is some neat, sweet, compact formula for the ratio that highlights the relationship to size and number of factors (if there really is one). But I have to stop thinking about that now or I’ll never post this essay.
The most general form of symmetrical stars
We can make an awful lot of stars using the above approach. For an integer n the number of different n-pointed stars is the biggest integer less than n/2.
But in fact there is an infinite number of different n-pointed stars, without having to loosen our standards by allowing asymmetry. After a bit of thought, I realised that the most general form of n-pointed star can be specified by a single number, which is the ratio of its inner radius to its outer radius. The outer radius is the radius of the circle on which the points sit. The inner radius is the radius of the circle on which all the tniops sit. Given n and that ratio – call it θ – we can draw a star as follows:
- Draw two concentric circles with ratio of the inner to the outer radius being θ.
- Mark n equally-spaced dots around the outer circle and draw faint lines connecting each of these to the centre. These will be the points of our star.
- Mark a dot on the inner circle halfway between each of the faint, radial lines drawn in the previous step. These dots will be the tniops of our star.
- Working consistently in one direction around the circle, draw a zig-zag line from point to tniop to point to tniop and so on, always connecting to the nearest dot.
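The four steps translate almost directly into code. Here is a rough Python sketch (again, not the attached R program) that lists the vertices of the zig-zag outline:

```python
from math import cos, sin, pi

def general_star(n, theta):
    """Vertices of an n/theta star: n points on the unit circle,
    interleaved with n tniops on a circle of radius theta placed
    halfway between neighbouring points, visited alternately."""
    verts = []
    for i in range(n):
        a = 2 * pi * i / n                              # direction of point i
        verts.append((cos(a), sin(a)))                  # the point
        b = a + pi / n                                  # halfway to the next point
        verts.append((theta * cos(b), theta * sin(b)))  # the tniop
    verts.append(verts[0])                              # close the outline
    return verts
```

As θ approaches 1 the outline approaches a regular 2n-sided polygon; small values of θ give needle-thin spikes.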
We will describe such a star as an n/θ star. We use a slash rather than a dash in order not to mix it up with the former type of star. Here is a sequence of six-point stars, with the ratio of the inner to outer radius going from 0.2 up to 0.8:
It is nice that this gives us more options for stars – infinitely many different kinds of star for each n in fact. But they are not as much fun to draw as the n-k stars, and it is harder to make them come out right without geometric instruments – which rules them out as an effective doodling pastime.
Note how, unlike with the n-k stars, the line leading away from a point does not intersect any other point. Nor do we get any inner stars for free. The price of gaining more variety is a loss of structure. The inner structure of an n-k star provides a great richness by enforcing all sorts of relationships between the vertices.
Only very specific values of the radius ratio θ give us n-k stars. I worked out the formula for the ratio, by the way, with a bit of trigonometry. The ratio of the tniop radius to the point radius for an n-k star is:
cos(πk/n) / cos(π(k-1)/n)
For an n/θ star, if there are no integers n and k for which that formula equals θ, the star will not have the array of inner stars that an n-k star has.
Computer program to draw pretty stars
This whole diversion started as an exercise in drawing stars by hand. But there’s a limit to how intricate those stars can get without getting too messy with smeared ink or graphite. For those that like to look at pretty, intricate geometrical pictures, here’s my computer program in R that can draw stars of any of the types discussed.
Sums of angles
For those that like mathematical proofs, there are outlines of proofs here of Yuriy’s observation about the sum of the internal angles at the points of an n-k star.
Bondi Junction, April 2017
‘Partial differentiation’ is an important mathematical technique which, although I have used it for decades, always confused me until a few years ago. When I finally had the blinding insight that de-confused me, I vowed to share that insight so that others could be spared the same trouble (or was it just me that was confused?). It took a while to get around to it, but here it is:
My daughter Eleanor made a drawing for it, of a maths monster (or partial differentiation monster, to be specific) terrorising a hapless student. The picture only displays in a small frame at the linked site, so I’m reproducing it in all its glory here.
Bondi Junction, October 2016
I am re-committing to memory my old piano repertoire which, 25 years ago, comprised somewhat over an hour of music.
Playing complex piano pieces from memory amazes me. Any form of memorised material is impressive, but piano seems weirdest because one doesn’t have to just memorise a melody, but all the chords and contrapuntal parts as well. There are usually three to eight notes being played at once, so superficially it sounds as though one has to memorise three to eight separate parts and play them simultaneously. It’s not really that hard (unless it’s a five-part fugue by JS Bach, and I haven’t memorised any fugues yet) because while there may be three notes in an A major triad in second inversion, they are not just any old three notes. They are three notes that nearly always go together. So one can just remember that there’s a triad there, rather than remembering three separate notes.
But still, there’s an awful lot of stuff to remember. I think concert pianists and Shakespearean actors are the most impressive to me in terms of memory feats.
I find it astounding to what extent the fingers – after a fair bit of effort at memorising – just know which notes to play. If you asked me what the next chord was, I could play it, but I couldn’t tell you beforehand what it was. I might even need a run-up to play it, because it seems that playing the chords leading up to it sets the context that enables my fingers to ‘know’ what comes next. I know this because if I get lost and have to get re-started, there are only certain points in the piece from which I can start cold.
I have read that it’s really the cerebellum that ‘remembers’ what to play, not the fingers. But it feels like it’s the fingers.
One can commit a piece to memory either consciously, so that one can say out loud what comes next – what chord it is, which notes and perhaps which of passages A, B, C or D it is in a Rondo structure – or unconsciously, by just playing it over and over from sheet music until it gets programmed into one’s subconscious.
I think it is safest to learn both ways.
Committing the piece to conscious memory is a safeguard against a crisis of faith or a sudden disorientation. Playing unconsciously relies on context to know what comes next and it needs faith so that one trusts one’s fingers to do the right thing. As soon as one loses context or faith – easy to do when under pressure in a performance – one can lose the ability to let one’s fingers do the work.
It is like the art of flying in the fourth Hitchhiker book (‘So Long, and Thanks for All the Fish’) or a Roadrunner cartoon where Wile E. Coyote accidentally runs over a cliff but only falls when he looks down and realises where he is. You only lose the ability to fly when you remember that it is impossible. Then you suddenly plummet.
There is a very long trill in a Chopin Nocturne that mixes me up because it covers three notes and is more complex than an ordinary trill. I can play it fine, and very fast, as long as I don’t think about it. But because it’s long, I usually end up inadvertently thinking about what my fingers are doing about halfway through and then getting muddled. The last few times I have succeeded in playing it right through without mistake by looking around the room as I play it, focusing on things I see – keeping my mind occupied by anything except what my fingers are doing.
My fingers playing music are like me doing maths. They are very good at it as long as nobody is watching. But as soon as somebody is watching it turns to mud. Young children enjoy tormenting me by sidling up to me and asking me something embarrassingly easy like ‘differentiate x squared!’ and then staring at me intently so that my brain won’t work (like a watched pot).
But if one also knows consciously what comes next, one can silently tell oneself to play an E flat diminished chord in the second inversion, or to reprise theme B, one octave higher. One knows how to do that, so one does it – no faith required. The conscious brain acts as scab labour to supplant the striking union of the unconscious fingers.
Although both conscious and unconscious memory always have a role to play, I feel that this time I am learning a lot more unconsciously than I did 25 years ago. I can see how much conscious involvement there was in 1991 because some of the scores still have the pencilled notes I wrote on them to help me categorise and memorise the thematic and harmonic structure of each piece. It’s more enjoyable learning subconsciously. But it’s higher risk to do only that, if one has to perform.
I have been finding that, once one has committed a piece thoroughly to memory, it is quite peaceful and meditative to play without thinking about the notes one is playing. One thinks about the music, because one puts the feeling into the piece by variations in loudness and pace, but not about the microstructure of the notes. That is beyond one’s gaze, being taken care of by the fingers/cerebellum.
It is important to keep one’s mind on the music though, otherwise the relentless, angst-ridden chatter of the modern monkey mind comes in to disturb the peace. I can remember occasions of playing pieces in the past, whether from sheet music or from memory, with my mind completely oblivious to the music and instead working philistinically through every grievance, anxiety and obsession it could find, re-running past conversations and projecting future ones at a rate that would make a Bodhisattva wince and that could generate material for at least three psychology PhD theses.
I wonder what concert performers do – whether they do both, or just one and if so which one? Or does it vary between performers?
In case anyone is interested, here are the pieces from the 1991 repertoire, showing which ones have so far been re-learned:
- Beethoven Pathetique Sonata, all three movements (2nd and 3rd re-learned so far)
- Mozart C major sonata, all three movements
- Beethoven Moonlight Sonata First movement (the famous one)
- Beethoven Fur Elise (re-learned)
- Debussy First Arabesque (re-learned)
- Debussy Clair de Lune
- Chopin Nocturne in E flat major (re-learned)
Mr Beebe would say ‘Too much Beethoven’.
But I will never be able to competently play the fiendishly difficult Opus 111 sonata whose crashing rendition by the troubled Miss Honeychurch prompted those immortal words.
I have vague aspirations to extend the list if I manage to re-learn all of it. I have in mind to do one of Fauré’s three lovely impromptus. Given my comment above, I am tempted to also take up the challenge of attempting to memorise a Bach fugue. I probably shall. Sadly, nobody in my circle of friends and family seems to really like Bach fugues. Perhaps he really wrote them for the enjoyment of the performer rather than for the listener.
Bondi Junction, July 2016
Although this third movement is less “pathetic” than the preceding ones, the player alone will be to blame should the Pathetic Sonata end apathetically.
Thus writes the author of Schirmer’s Library of Musical Classics, at the foot of the first page of the score of the last movement of Beethoven’s Sonata Pathetique (That’s ‘Pathetic’ as in pathos, ie emotionally moving, not as in contemptible, in case you were wondering). The bold font emphasis was added by me, by the way – it’s not in the original.
Many were the admonitions of this type that one used to encounter in learning, performing and reading about music. I don’t know if such admonitions still abound, but I used to take them very much to heart.
Here’s another gem from the last page of the same movement:
In proportion to the greater or lesser degree of passion put forth by the player before the calando, this latter is to be conceived as a diminuendo and ritardando. Excess in either direction is, of course, reprehensible.
A piano teacher once told me that, while Mozart piano sonatas seemed easy to play, especially the slow movements, they were actually especially difficult because their smooth lightness and sustained notes would ‘expose’ the inadequacy of any player whose touch upon the keys was not delicate and even. I interpreted this as meaning that only an impostor would attempt to play a Mozart Sonata without first obtaining the very highest degree of musical performance qualification possible. Their crime of trying to hide their lack of skill behind the apparent simplicity of the score would be exposed by the very first unplanned variation in pressure (a ‘plonk’ in lay terms) leading to their justly deserved shame and humiliation and, it is to be hoped, excommunication from any future association with decent, honest, genuine music lovers.
Although I did take this sort of sermon to heart, I nevertheless dared to play sonatas by Mozart. I enjoyed it. But I just had to secretly hope that no true music connoisseurs would ever hear me, perhaps as they walked past the open window of the room in which I was playing, and be goaded into a rage by my lack of finesse – not to mention the unmitigated temerity of presuming to play Mozart. I imagined they would feel as though I had reanimated the corpse of Wolfgang Amadeus himself, just so I could slap him in the face and jeer at him.
I don’t think that any more. In fact, I may have swung so far to the opposite extreme that I have to remind myself not to be too intolerant of those poor souls that do happen to be internationally renowned piano virtuosi.
In short, I love amateur music. There is a point at which it may become difficult to listen to, as with a tone deaf singer or the tuneless screeching of a child unwillingly doing their ten minutes a day practice on the clarinet or violin. But short of that (and even that isn’t too bad, but that’s another essay) I find that musical flaws enhance rather than detract from the performance, as long as the player’s heart is in it. Sincerity and enthusiasm are all that’s needed to make a performance truly marvellous.
My youngest daughter will graduate this year from high school, after which I will no longer have a socially acceptable reason to attend performances of school musical ensembles, whose enthusiasm is often in inverse proportion to their skill. What a pity! Like a Persian carpet, where (it is said) the maker always includes a deliberate flaw because only Allah is allowed to be perfect, the flaws in an amateur musical performance are an essential ingredient, without which the performance would lack – I don’t know, maybe ‘soul’?
For ensembles of young children, the many mistakes, constantly varying level of pitch accuracy and plodding pace are, of course, adorable. But my liking for amateur music is not limited to a sentimental fondness for kitsch cuteness. I feel just as warmly about performances by tall, spotty adolescents in rock bands – as long as they have not been stage managed by Simon Cowell and do not have PR agents in tow. What is important is sincerity and enthusiasm. The occasional (or frequent) mistake emphasises the humanity of the performer.
And in any case, a liking of cuteness could not explain my recently acquired toleration of my own mistakes since, if I recall correctly, some biologist or other (was it Richard Dawkins? Or perhaps Francis Collins? I get them mixed up. Goodness knows they have so much in common) has proven conclusively that it is impossible for any individual member of any mammalian species to find itself cute. I’m not talking about pretending to be cute. All humans seem to do that at a certain age. But pretending to be cute is not the same as finding oneself cute. Indeed, it requires a healthy dose of cynicism to pretend to be naively clumsy and inarticulate just to manipulate the emotions of those around you. It must be done with a cold, clear, calculating mind and a total awareness of what one is doing, and leaves no possibility open for being taken in by one’s own deception. Or so I imagine. It is many years since I discarded any hope of garnering positive attention by feigning sweet ingenuousness.
I digress. Refocussing: I think perhaps it is the humanity revealed through their imperfections that makes amateur performances so valuable. We have had flawless performances available ever since the piano roll was invented. No doubt it is now possible for a computer to produce a virtuoso performance of a piece of music direct from the written score. I’m not knocking that. Even when that is done, we still have the human element provided by the composer. I doubt the day will ever come when a computer can write something like Beethoven’s fifth symphony. And if it does, I may find myself believing that the computer has attained consciousness.
But music is an activity for participation, not passive observation. Even apparently passive listening often involves participation of some sort. If one taps one’s foot, sways a little to the rhythm, or hums along, maybe out loud or maybe silently inside one’s head, one is participating. If that mild level of participation is enjoyable and life-affirming, how much more so when one is fully involved in producing the music? Churches seem to have understood this for a long time, with hymns that all the congregation participates in singing. I also think of the wonderful chants that some African villagers do, and of Australian Aboriginal corroborees. As I understand it, these are social activities, in which all tribe members participate, rather than demonstrations of skill.
When one is learning to play an instrument or, having learned the instrument, trying to master a difficult new piece on the instrument, it can be disheartening to think that, however much effort one might put in, one will never be able to perform the piece as well as a computer program programmed by a mildly competent computer nerd, regardless of whether they have any musical ability. It is a little sad to think that the role of musical performance could be supplanted by instruments played by computers. My response to such negative thoughts is to remind myself that a critical part of any performance is the personal experience of the performer. It will be many centuries before they can program a computer to not only perform Scott Joplin’s ‘The Entertainer’, but to enjoy playing it as well.
That experience of playing multiplies when one is part of an ensemble. It is especially so in a choir, when one can feel the harmonies with the other singers resonating throughout one’s body.
I think the knowledge that there is an important experience of the performer is part of the experience of the listener too. If we reflect on it, we can feel that the performer is feeling the music and, in a way, communicating to us through the music. It would be different, a less complete experience, if the music were being performed by a (non-sentient) computer and we knew that to be the case.
And that’s why I think international piano competitions are bad! Does that opinion follow smoothly enough from the previous paragraph? No? Well, never mind, that’s how I feel. Like many of my opinions, that particular one (which is only a minor aspect of my overall preference for amateur music) was planted in my head by another. It was a talk given by an Australian who was an internationally renowned concert pianist – I forget their name – about how damaging the world of international piano competitions is to musical appreciation, as well as to the lives of those who compete in them. Most of the contestants are virtuosos, whose difference in skill can only be discerned by the most experienced of connoisseurs. Yet one person will win and be declared ‘better’ than the others. What nonsense. Perhaps the problem is that there are too many virtuoso pianists and not enough paid jobs for them.
The Berlin Philharmonic is an amazing orchestra and tremendous to listen to. But I wonder whether my daughter’s high school orchestra sounds more like the orchestras that premiered works by Beethoven, Mozart, Haydn or Schubert, than the Berlin Phil. From what I have read, the musicians of late eighteenth century Vienna were poorly paid, possibly ill and malnourished and distracted by the worldly cares that beset the financially insecure. They frequently had insufficient opportunity to learn and practise a new piece – sometimes with score changes occurring mid-rehearsal – and the halls in which they performed were irregularly heated, which would have driven constant variations in tuning. My brother and sister-in-law married in a small, freezing stone church in the midst of a dark Oxford winter. I remember the sounds of the string quartet drifting in and out of tune as they played ‘Jesu, Joy of Man’s Desiring’. The pitch went up when an eddy blew warm air from the bar heater towards them, and down when the warmth moved away. When I think about it, I realise that that is probably the way the piece was meant to be played. From what we know of JS Bach, the Leipzig churches in which his pieces were performed were probably even colder and draftier than the one in Oxford. There were no electric bar heaters in 1725.
And yet, even though the premier performances of those works may have been riddled with faults, the audiences still responded with adulation and rapturous applause. They could see past the occasional wrong note, loss of synchronisation and variation in pitch, to the underlying genius and emotional power of the composition, and the sincerity of the performers.
I don’t want to sound critical of virtuoso performers and ensembles. They are valuable too and have a key role to play in the world of music. I have no reason to doubt their dedication and sincerity or their enjoyment of the music they play. It is marvellous to hear every now and then a highly skilled performance of some challenging orchestral work. There are some, like Mahler’s second symphony (‘Resurrection’), that are so gargantuan – in both length and number of musicians and different instruments required – that it’s just not feasible to perform them with anything other than a top-level, fully professional orchestra. But they are not what music is about, just as teams like the All Blacks or Manchester United are not what sport is about. Attending an FA Cup Final or a Super Bowl would be a great experience but, given a choice between never being able to watch professional sport again and never being able to watch my children play sport, or play sport myself, I would give up watching professional sport in an instant. And it’s the same with music.
Books are too long. People talk for too long. Academic papers are too long. Almost everything is too long.
Why? Partly, because to be concise is very difficult. Urban legend has it that Blaise Pascal once wrote at the end of a letter to a friend: ‘I’m sorry this letter is so long. I didn’t have time to write a short one’.
I struggle with conciseness. Part of the problem is that, when I am trying to explain something, I worry about whether what I have said is clear enough, so I keep on saying it over, in a slightly different way each time, in the vague hope that one of the attempts will make the connection.
I think a better strategy might be to make one brief attempt at an explanation and then wait for a response. If more is needed, I imagine my interlocutor will tell me. If they do, the particular nature of their response will better enable me to tailor my next statement to fill in the information that was missing in my first.
But that requires discipline, and nerves of steel. It is like being silent in an interview after giving a short reply to a question – forcing the interviewer (or interrogator) to make the next move. Few people can carry that off, and I suspect I am not one of them.
Academic papers can be particularly irritating, droning on about all the references and who has written what, so that by the time one gets to the bit about what the authors have done that’s actually new, one is exhausted and wants to retire for a tea break. It’s not clear to me whether this is a stylistic practice, imposed by the producers and reviewers of journals, or whether it reflects insecurity on the part of the authors, who may feel that they need to mention some minimum number of other papers in order to be taken seriously.
Arthur Schopenhauer railed against this sort of writing in a series of essays collected under the title ‘The Art of Literature’. He opens with an unrestrained broadside: ‘There are, first of all, two kinds of authors: those who write for the subject’s sake, and those who write for writing’s sake.’ Schopenhauer loved the first (and of course considered himself to be one of them) and loathed the second.
If someone really has something important to say, it usually doesn’t take very long. When Neville Chamberlain announced the grim news to the British people in 1939 that Britain had declared war on Germany, the message had been delivered by the end of the 67th word. I did a test reading just now and it took about 26 seconds, including pauses for effect.
Einstein’s legendary 1905 paper that presented his special theory of relativity to the world, ending decades of confusion amongst physicists, is only 24 pages, and the key part that resolves the paradoxes by which physics was previously beset is complete by the end of page 12! John Bell’s paper that turned the world of quantum mechanics upside down in 1964 is only six pages. Bell cited only five references. Einstein cited none.
In general communication, most people use too many words. I do too, but I am trying to correct that. I feel that, where possible, I would like to conduct a post-mortem on every sentence I utter and work out whether that sentence has added any new information. If it hasn’t, then it was probably a waste of everybody’s time.
Politicians exploit this deliberately. When asked a difficult question by a journalist, they are trained to give a long-winded, emphatic speech about something only tangentially related, thereby avoiding the issue and (they hope) making the journalist despair of persisting with the question because of the pressure of time. Even better, if the politician sounds confident in their ‘answer’, the less analytic viewers will form the impression that the politician is competent and frank. The more analytic types just shrug their shoulders in disgust and turn the telly off.
A sentence can be very long and yet not reveal what information it contains until late in the sentence. Sometimes there is a key word that makes everything fall into place. The words before it stack up like the numbers in a long calculation on a Reverse Polish calculator, impotent while they wait for release. Then the key word comes, and the sentence attains a meaning. The wait for that word can sometimes be prolonged, as in this:
Though they all came from different social strata, sub-cultures and occupations, crammed together against their will in the prison cell from which they wondered if there would ever be any release, though none of them had known each other – or even known of each other – in their previous lives, though they squabbled and quarrelled over the tiniest of things, the one thing that bound them together despite the rivalries and petty jealousies, the perceived slights and reconciliations, the development, disintegration and reformation of cliques, was a single shared emotion, an emotion so powerful that they could feel it oozing out of one another’s pores, smell it on their breath and discern it in their tones of voice – the emotion of fear.
In some cases, the key word never comes. Perhaps the writer or speaker confuses themselves by their excessive verbiage and ends the sentence with an admission of defeat.
Books are too long as well! Novels are generally OK, as it takes time to get to know and care about the characters. But I have a strong sense that non-fiction books are often padded to reach whatever is considered a minimum page count for a book – usually at least 200. There isn’t really a strong market for writings that are halfway between essay and book length. In many cases a book really only has one idea, which could make a decent essay, but doesn’t justify a book. But essays don’t get to be put on a prominent shelf that catches your eye as you enter the bookshop, nor do they get listed on the New York Times best sellers’ list.
Nassim Taleb’s famous book ‘The Black Swan’ is like that. It really only contains one idea, which is that investors, bankers and other financiers have for decades been making crucial financial decisions based on theories in which they assume that the future will be like the past, and that all occurrences of randomness must follow the Normal Distribution (the nice friendly old ‘Bell Curve’). Decisions based on that erroneous, oversimplified assumption have repeatedly led to disasters, because events tend to be more extreme than is predicted by the Bell Curve. Taleb’s is a good insight, and definitely worth saying, but probably not worth stringing out to book length.
And then, if the book sells well, they write it again, ever so slightly differently, and pretend it’s a new book, with new ideas. Taleb did that. Self-help authors do it all the time – which raises the question ‘If your first book about how to live a better life was so incomplete that it needs to be supplemented by a second, why did I waste my time reading it?’ I suspect Richard Dawkins may do it too. As far as I can tell he has written at least four popular explanations of evolution. I read The Blind Watchmaker and thought it was great (but too long, of course!). But I didn’t read The Selfish Gene, The Ancestor’s Tale or The Greatest Show on Earth because I couldn’t see any indicators that they would contain much substance that hadn’t already been covered in the one I had read. I imagine there is some new material in each of them, but I would guess it’s more likely to be a dozen pages’ worth rather than 200+.
Fiction authors and other creative artists do this too. Stravinsky acidly observed that Vivaldi wrote the same marvellous concerto five hundred times. Bach shamelessly reused his work (goodness knows he was paid little enough for it!) and Enid Blyton invented maybe a dozen adventure and fantasy stories, which she recycled into what seems like hundreds of similar tales (surely I’m not the only one that’s noticed the remarkable similarity between Dame Slap’s School for Bad Pixies and Mr Grim’s School for Mischievous Brownies?). And let’s not even mention Mills and Boon. But somehow I don’t mind that so much. We humans are story-telling animals, and telling the same story repeatedly, changing it just a little every time, is what we have always done. I find myself able to smile indulgently at the prolixity of Enid and Antonio and Mills (?), but alas not at that of Nassim or Richard, or Deepak Chopra.
I think I’ve ranted for long enough now about how We All (including me) need to work on being more concise with our communication. It’s time to relent a little.
Not all language is just about conveying information, so the efficiency with which the information is conveyed is not always the best test. In comforting a frightened child, information communication is not the purpose of our speech. I will restrain myself from objecting that the second half of the soothing phrase ‘There, there’ is informationally redundant. In fact, I think I could even stretch to approving of its repetition, if its first invocation was insufficient to assuage the poor mite’s distress.
Declarations of love, expressions of support, telling jokes, goodbyes, hellos and well-wishes are all ‘speech acts’ that have important non-informational components. It seems appropriate to apply different expectations to those speech acts from those we apply to informational speech. Even there, there are limits though. Many’s the operatic love aria I’ve sat through where after a while I just feel like screaming ‘OK, you love him, we get it, can we move on with the plot now please?’ And waiting for Mimi to die in La Bohème (of consumption, what else?) in between faint protestations of her love for Rodolfo, can become a little trying on one’s patience after the first ten minutes of the death scene.
But communication of information is the purpose of much of the language we use, especially in our work lives. It is a pity that so much of it is ill-considered.
Hmmm. 1,742 words. I wonder if I could turn this into a book.
Bondi Junction, November 2015