More fascinating facts about stars

I’ve had a few thoughts and discussions since I wrote this article about drawing stars, and I thought they were worth sharing.

Stars within stars

The first observation is that stars, drawn in the way I described, contain other stars, nested within one another like a set of Russian dolls. Recall that we use the term ‘n-k star’ to indicate a star with n points such that, if we draw a circle through all the points and consider the boundary of that circle as split up into n curve segments bounded by the points, then each straight line of the star, from one point to another, traverses k of those curve segments. Like this, for a 7-3 star:

star 7-3 with arrows

Yuriy made an interesting observation about the stars in my last article: that the sum of the angles of all the points of an n-k star is 180 × (n-2k) degrees. In the course of thinking of ways to prove that formula, I came upon the realisation that an n-k star contains an n-(k-1) star, which contains an n-(k-2) star, and so on down to the n-1 star, which is a regular, n-sided polygon.

Here’s a picture that shows this for an 11-5 star.

11-5 with labels

The red, outer 11-5 star contains a green 11-4 star, which contains a red 11-3 star, which contains a green 11-2 star, which contains a red 11-1 star (polygon). The points of each inner star are the innermost vertices of the star that immediately contains it. Since we will be referring to those vertices again later, let’s make up a name for them. We’ll call such a vertex a ‘tniop’, since it is in a sense the opposite of a point. The above diagram shows a point and a tniop. We’ll call the stars inside a star ‘sub-stars’.
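Yuriy’s angle-sum formula is easy to check numerically. Here is a minimal Python sketch (the function name is my own invention): it places the n points on a unit circle and measures the angle between the two star edges meeting at one point.

```python
import math

def point_angle_deg(n, k):
    """Angle at one point of an n-k star: the angle at P0 between the
    edges P0->Pk and P0->P(n-k), with all points on a unit circle."""
    P = lambda j: (math.cos(2 * math.pi * j / n), math.sin(2 * math.pi * j / n))
    (ax, ay), (bx, by), (cx, cy) = P(0), P(k), P(n - k)
    ux, uy, vx, vy = bx - ax, by - ay, cx - ax, cy - ay
    cos_angle = (ux * vx + uy * vy) / (math.hypot(ux, uy) * math.hypot(vx, vy))
    return math.degrees(math.acos(cos_angle))

# The sum over all n points should be 180*(n - 2k):
for n, k in [(7, 2), (7, 3), (11, 3)]:
    print(n, k, round(n * point_angle_deg(n, k), 6))  # 540.0, 180.0, 900.0
```

The three printed sums match 180 × (n-2k) for each pair.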

We saw in my last essay that, when a star cannot be drawn without taking the pencil off the paper, it is made of a number of ‘component stars’, which are rotated copies of one another. Here is a picture of a 16-6 star, which uses different colours to highlight the two component stars. We have two 8-3 stars, one light blue and one red.

16-6 [1,0,0] star

And here is a picture that uses colour variation to show the sub-stars of each of the two components.

16-6 [1,1,0] star

The 8-3 light blue star contains an 8-2 star which is made up of two components that are 4-1 stars (also known as ‘squares’) coloured blue and pink. Similarly, the 8-3 red star contains an 8-2 star which is made up of two components that are 4-1 stars (also known as ‘squares’) coloured yellow and green.

The 8-2 stars each contain an 8-1 star (octagon) as the intersection of two squares: pink and dark blue for one octagon, and yellow and green for the other.

Finally, those two octagons between them bound a hexadecagon (a 16-sided polygon, or 16-1 star). So altogether, in the one picture, we have:

  • one 16-6 star (red and light blue)
  • one 16-5 star (also red and light blue) inside that
  • one 16-4 star (pink, green, dark blue and yellow) inside that
  • one 16-3 star (also pink, green, dark blue and yellow) inside that
  • one 16-2 star (also pink, green, dark blue and yellow) inside that
  • one 16-1 star (also pink, green, dark blue and yellow) inside that
  • two 8-3 stars (one red, one blue) making up the 16-6 star
  • two 8-2 stars (one pink and dark blue, one yellow and green), one inside each of the 8-3 stars
  • two 8-1 stars (octagon: one pink and dark blue, one yellow and green), one inside each of the 8-2 stars
  • four 4-1 stars (squares: coloured pink, dark blue, yellow and green) which, in pairs, make up the 8-2 stars.

That’s sixteen stars altogether. What a lot of stars in one drawing! Can you see them all?

Here’s a different colouring that makes it easy to see all six 16-point stars:

16-6 [0,1,1] star

Although the stars get smaller as k reduces, they do not shrink away to nearly nothing. In fact they get closer together as they go inwards, as if they are asymptotically approaching a circle of some fixed, minimum size.

To investigate this, I drew a 101-50 star:

101-50 star

You’ve probably noticed by now that I’m no longer drawing these by hand. My drawing is much too wobbly to capture the intricacies of stars-within-stars. So I wrote a computer program to draw them for me. I’ll try to remember to attach it at the end of the article, so that those of you who like mucking about with computers can muck about with it.

Anyway, that 101-50 star pretty well killed my hypothesis that the inner stars can’t get very small. In this one they almost disappear out of sight. I like the swirly patterns. I haven’t yet worked out whether they are really features of this very spiky, very complex star, or whether they are just artefacts of the crudeness introduced by the computer’s need to pixellate.

Here’s a zoomed-in image of the interior of that star. Cool, eh?

zoom of big star

This is a low-resolution image. I have saved a moderately high-resolution image of this star here. Zooming in and out is fun. It seems almost fractal as more patterns emerge from the inside when we zoom in. Also, the stars give the illusion that they are rotating as we zoom. To get the best effect you need to download the file (a .png image file) and then open it up, so that zooming is not limited by your internet connection’s speed.

Ratio of Outer to Inner radius

Let me briefly pick up on that idea above about whether there is some minimum inner radius for these stars. For each n, the spikiest n-k star is the one where k is the largest integer less than n/2, and this contains another k-1 stars, nested one within the other, down to the innermost, which is an n-sided regular polygon. We can work out the ratio of each star’s radius to that of the one immediately inside it, and use that to work out the ratio of the outermost, n-k, star to the innermost, n-1, star. The attached computer program contains trigonometric formulas to do that. Here are the ratios of the radii of the innermost to the outermost star for each n from 1 to 109:

radius ratios table

We observe that the ratios generally go down as n increases, but the decline is not steady. It bumps up and down. I have highlighted the prime numbers with asterisks. Notice how the ratio for those is always lower than for the numbers around them. The two drivers of the ratio seem to be:

  1. the size of n: the ratio generally declines as n increases; and
  2. the number of different factors n has. Note how 16 (divisible by 2, 4, 8) has a higher ratio than 15 (divisible by 3, 5) and 56 (divisible by 2, 4, 7, 8, 14) has a higher ratio than 55 (divisible by 5, 11).

This would be interesting to look into further, and to see if there is some neat, sweet, compact formula for the ratio that highlights the relationship to size and number of factors (if there really is one). But I have to stop thinking about that now or I’ll never post this essay.
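For those who want to experiment, here is one way to recompute such ratios in Python (a sketch with my own function name, not the attached R program). It assumes that each nesting level shrinks the radius by the tniop-to-point ratio cos(πk/n)/cos(π(k-1)/n), which a bit of trigonometry gives; the product over the levels then telescopes neatly.

```python
import math

def innermost_to_outermost(n):
    """Radius ratio of the innermost polygon to the outermost star, for the
    spikiest n-pointed star (k = largest integer below n/2). Each nesting
    level shrinks the radius by cos(pi*j/n)/cos(pi*(j-1)/n) for j = k
    down to 2, and the product telescopes to cos(pi*k/n)/cos(pi/n)."""
    k = (n - 1) // 2
    return math.cos(math.pi * k / n) / math.cos(math.pi / n)

# 16 beats 15, and 56 beats 55, as observed above:
for n in (15, 16, 55, 56):
    print(n, round(innermost_to_outermost(n), 4))
```

On this formula, the comparisons noted above come out as described: the ratio for 16 exceeds that for 15, and the ratio for 56 exceeds that for 55.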

The most general form of symmetrical stars

We can make an awful lot of stars using the above approach. For an integer n the number of different n-pointed stars is the biggest integer less than n/2.

But in fact there is an infinite number of different n-pointed stars, without having to loosen our standards by allowing asymmetry. After a bit of thought, I realised that the most general form of n-pointed star can be specified by a single number, which is the ratio of its inner radius to its outer radius. The outer radius is the radius of the circle on which the points sit. The inner radius is the radius of the circle on which all the tniops sit. Given n and that ratio – call it θ – we can draw a star as follows:

  1. Draw two concentric circles with ratio of the inner to the outer radius being θ.
  2. Mark n equally-spaced dots around the outer circle and draw faint lines connecting each of these to the centre. These will be the points of our star.
  3. Mark a dot on the inner circle halfway between each of the faint, radial lines drawn in the previous step. These dots will be the tniops of our star.
  4. Working consistently in one direction around the circle, draw a zig-zag line from point to tniop to point to tniop and so on, always connecting to the nearest dot.
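Those four steps translate directly into coordinates. Here is a minimal Python sketch (the function name is my own) that generates the zig-zag outline of such a star:

```python
import math

def star_vertices(n, theta):
    """Vertices of an n/theta star: n points on the outer circle (radius 1)
    alternating with n tniops on the inner circle (radius theta), each
    tniop placed halfway between two neighbouring points."""
    verts = []
    for j in range(n):
        point_angle = 2 * math.pi * j / n
        tniop_angle = point_angle + math.pi / n   # halfway to the next point
        verts.append((math.cos(point_angle), math.sin(point_angle)))
        verts.append((theta * math.cos(tniop_angle),
                      theta * math.sin(tniop_angle)))
    return verts

outline = star_vertices(6, 0.5)
print(len(outline))  # 12 vertices for a six-pointed star
```

Joining consecutive vertices, and the last back to the first, draws the star.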

We will describe such a star as an n/θ star. We use a slash rather than a dash in order not to mix it up with the former type of star. Here is a sequence of six-pointed stars, with the ratio of the inner to the outer radius going from 0.2 up to 0.8:

row of 6-0.2 stars

It is nice that this gives us more options for stars – infinitely many different kinds of star for each n in fact. But they are not as much fun to draw as the n-k stars, and it is harder to make them come out right without geometric instruments – which rules them out as an effective doodling pastime.

Note how, unlike with the n-k stars, the line leading away from a point does not pass through any other point. Nor do we get any inner stars for free. The price of gaining more variety is a loss of structure. The inner structure of an n-k star provides a great richness by enforcing all sorts of relationships between the vertices.

Only very specific values of the radius ratio θ give us n-k stars. I worked out the formula for the ratio, by the way, with a bit of trigonometry. The ratio of the tniop radius to the point radius for an n-k star is:

cos(πk/n) / cos(π(k-1)/n)

For an n/θ star, if no integer k makes that formula equal to θ for the given n, the star will not have the array of inner stars that an n-k star has.
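One compact closed form for that ratio is cos(πk/n)/cos(π(k-1)/n), and it is easy to sanity-check by brute force: intersect two neighbouring edges of the star and measure how far the crossing point is from the centre. A Python sketch (the function name is my own):

```python
import math

def tniop_radius(n, k):
    """Distance from the centre to a tniop of an n-k star, found by
    intersecting the edge P0->Pk with the neighbouring edge P(1-k)->P1
    (all points on a unit circle)."""
    P = lambda j: (math.cos(2 * math.pi * j / n), math.sin(2 * math.pi * j / n))
    (x1, y1), (x2, y2) = P(0), P(k)      # first edge
    (x3, y3), (x4, y4) = P(1 - k), P(1)  # edge arriving at the next point
    # Standard line-line intersection via the parametric form.
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    ix, iy = x1 + t * (x2 - x1), y1 + t * (y2 - y1)
    return math.hypot(ix, iy)

# Pentagram (5-2 star): the ratio is 1/phi^2, about 0.381966.
print(round(tniop_radius(5, 2), 6))
print(round(math.cos(2 * math.pi / 5) / math.cos(math.pi / 5), 6))
```

The numerical intersection and the closed form agree to full precision for every n and k I have tried.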

Computer Program to draw pretty stars

This whole diversion started as an exercise in drawing stars by hand. But there’s a limit to how intricate those stars can get without getting too messy with smeared ink or graphite. For those that like to look at pretty, intricate geometrical pictures, here’s my computer program in R that can draw stars of any of the types discussed.

Sums of angles

For those that like mathematical proofs, there are outlines of proofs here of Yuriy’s observation about the sum of the internal angles at the points of an n-k star.

Andrew Kirk

Bondi Junction, April 2017


Pointing the Camera at the Monitor – An Answer

Here are my answers to the puzzle about what happens when we point a camera at its monitor. The problem has a flavour of infinite regress about it, and sounds a little like a Buddhist koan – a question designed to make us realise (amongst other things) the limitations of logic. But whereas a ‘correct’ answer to a koan is usually something bizarre like barking like a dog, or hitting the questioner with a stick, the four questions I posed do actually have perfectly logical answers. And I find them quite interesting.

Let’s kick things off by showing a picture of what one might see when one looks into the webcam that is perched on top of the monitor, looking outwards. This is a screenshot of what my monitor showed when I did that (Figure 1).

figure-01-q0a-screenshot-cheese-bear-non-max

The image is framed within the window labelled ‘Cheese’ – which is the name of the webcam program I was using. That’s me wearing the red cravat.

Question 1

When we turn the camera around and point it at the monitor, we will see an infinite regress of windows within windows, as the whole picture will be reduced and fitted into the image area where I am above. Then that reduce-and-insert step will be repeated as many times as it takes until the reduced image gets down to a single pixel and can contain no more nested images. Here’s an image I made of what it should look like (Figure 2):

figure-02-q1_image

In every window, the green desktop background, the desktop icons and the file explorer window to the left are reproduced, and the series shrinks off into the distance. I can tell you it was pretty fiddly putting the tiniest innermost parts into that picture. Infinitely small objects are notoriously difficult to manipulate.

The picture looks a little like a classical picture that uses perspective to show a long, straight road disappearing into the distance, with the point of disappearance at which all the lines converge being in the top-right quadrant of the screen.

But there’s an important and fascinating difference: the dimension along which the images disappear here is of time, not distance. That’s because each nested window is an image captured by the camera’s sensor a short interval earlier than the window that contains it. That interval will vary slightly as we move through the nested sequence, based on the relationship between the rate at which the monitor screen is redrawn (the ‘refresh rate’) and the number of snapshot images captured per second by the camera sensor (the ‘frame rate’). But it will always be more than some minimum value that depends on how long the information takes to travel along the wire from the sensor, through any processor chips in the camera, along another wire to the computer, through any processing algorithms in the computer, and then through the video cable to the monitor. Even without knowing anything about the computer, we know that that time – called the ‘lag’ – will be greater than the lengths of the wires involved, divided by the speed of light (because electrical signals cannot travel faster than light).
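To get a sense of scale for that light-speed lower bound, here is a two-line Python calculation (the 3 m of cabling is an illustrative figure, not a measurement):

```python
# Light-speed lower bound on the lag, for (say) 3 m of total cabling.
c = 299_792_458          # speed of light, m/s
cable_length_m = 3.0     # illustrative total wire length
min_lag_s = cable_length_m / c
print(f"{min_lag_s * 1e9:.2f} ns")
```

About ten nanoseconds: a floor, but a very low one compared with the processing and refresh delays.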

So each nested window is earlier than the one outside it, and as we look through the sequence of windows towards the point of convergence at infinity, we are looking back through time!

Question 2

Now we maximise the Cheese window. First let’s see what it looks like with the camera the correct way around, pointing at me (Figure 3):

figure-03-q0b-screenshot-bear-maximised

You can tell from my expression that I’m quite enjoying this little exercise, can’t you?

Here there is nothing outside the Cheese frame, but the Cheese frame still has a broad, non-image bar at the bottom, a narrower bar at the top, and black vertical bars at either side, which are needed to preserve the image’s ‘aspect ratio’ – the ratio of its width to its height.

With that setting, we turn the camera on the monitor, and this is what we would see (Figure 4):

figure-04-q2_image

The lower, upper, left and right bars are reproduced as a series of receding frames, and there is nothing in view other than the receding frames. There is no room left for an actual image of anything other than frames.

Figure 4 gives an even better sense of the ‘time tunnel’ that we mentioned in the previous section. Those white borders really do regress away in a spooky way. It looks like something out of Doctor Who.

The ratio of the height of one window to the height of the window immediately inside it is 1/(1-p), where p is the sum of the heights of the upper and lower margins of the outermost window divided by the total height of the outermost window. The ratio of the widths is the same. In this case it looks like p is around 1/5, so each window will be about 4/5 of the height of the window that contains it.

I used my webcam and shaky hands to try an empirical verification of this. I maximised the Cheese window, pointed the hand-held webcam at the monitor, and centred it as closely as I could. Then I asked my partner to press the Screenshot button on the keyboard to record what the monitor was showing. Below is what we got (Figure 5).

figure-05-q2-empirical-non-full-screen

It’s a bit rough, but you can see it does the same sort of thing as Figure 4.

When one is holding the camera like this, the little involuntary movements one makes cause the trail of receding frames to wobble left and right in waves that remind me of the effects used in 1970s television to produce a psychedelic impression – particularly prevalent in rock-music film clips.

Here’s a link to a video I captured of this effect.

Question 3

Question 3 asks what we will see on the monitor after we click the full-screen icon.

When we click the full-screen icon, we have no borders, so there can be no infinite, reducing regress of nested borders. How can we work out what is shown, assuming that we started in non-full-screen mode with a maximised window, so that the monitor was showing Figure 4?

The answer turns out to be remarkably simple. We just note that, when the full-screen icon is clicked, the computer will do some computing and then redraw the screen using the whole monitor area for the image from the camera. When it has finished the computing it will draw the screen using the image received most recently from the sensor and, since that image was captured before the screen redraw, it will be the same as whatever the screen was showing previously. This assumes that the previous image is left in place on the monitor until the computer is ready to draw the new one. We consider later on what happens if that is not the case.

That re-drawn screen will then be captured again by the camera sensor, sent to the computer and then drawn again on the monitor, and so on. So the image will remain exactly as it was before the full-screen icon was clicked! In this case, since the Cheese window was previously maximised, it will continue to show something like Figure 4.

The image remains static until either the camera is pointed away or the computer is switched out of full-screen mode, using the keyboard or mouse. Barring earthquakes, electricity blackouts and such-like, we would expect the monitor to still be displaying Figure 4 if we locked the room it was in, went away and returned to inspect it ten years later.

We can understand this a different way by considering the time tunnel we talked about in the responses to questions 1 and 2. In those cases, as we travel inwards through the tunnel to successively smaller windows, each window’s image was captured a short while earlier than the image of the window around it. The interval between the capture times of those windows will be much less than a second, typically 1/24 of a second. In Figure 4 each window’s height is about 4/5 of the height of the window that contains it. So to see what the monitor was showing t seconds ago we have to go to the nth window in the sequence, counting from the outside, where n = 24t. The height of that window, assuming the height of the monitor’s display area is 300mm, is 300 × 0.8^(24t) mm. The window size reduces rapidly as we go back through time. A little calculation shows that the 26th window in the sequence is the first to have height less than 1mm, and that window shows what the monitor was showing just over one second earlier.

Without constraint, that time tunnel would continue to go back, getting smaller at an increasing rate, window by nested window (like Russian dolls, or the cats in the Cat in the Hat’s hat), until we got to the time before the camera program window was opened on the computer, and that image would show whatever was on the monitor before the window was opened. But it would be indescribably tiny. If we opened the window – in maximised form – five minutes ago, the height of the window that now showed that image from back then would be 300mm × 0.8^7200, which is approximately 10^-695 mm. This is indescribably smaller than the smallest atom (hydrogen, with diameter 10^-7 mm) or even just a proton (diameter 10^-11 mm).

I expect our eyes probably could discern no more than the first twenty windows in the sequence. Further, since my screen has about three pixels per mm, the windows would reach the size of a single pixel by the 31st window in the sequence, and the regress would stop there. Hence the sequence would look back in time no more than 1.3 seconds.
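The window-height arithmetic above is easy to reproduce. A quick Python sketch (the 300mm height, the 4/5 ratio and the three pixels per mm are the figures assumed in the text):

```python
# Height of the nth nested window, for a 300mm display and a 4/5 ratio.
RATIO, H0, FPS, PX_PER_MM = 0.8, 300.0, 24, 3

def height_mm(n):
    return H0 * RATIO ** n

# First window smaller than 1mm, and how far back in time it looks:
n = next(n for n in range(1000) if height_mm(n) < 1.0)
print(n, round(n / FPS, 2))    # 26 windows, ~1.08 seconds back

# First window smaller than one pixel (1/3 mm): the regress stops here.
m = next(n for n in range(1000) if height_mm(n) < 1.0 / PX_PER_MM)
print(m, round(m / FPS, 2))    # 31 windows, ~1.29 seconds back
```

This confirms both figures quoted above: the 26th window is the first below 1mm, and the regress reaches pixel size at the 31st window, about 1.3 seconds back.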

When we move to full screen mode, we still have a time tunnel of nested windows, but each one is exactly the same size as the one before it – the height ratio is 1, rather than 0.8. That means that how far we look back in time is no longer limited by shrinking to the size of a pixel, and the sequence will go all the way back to the last image the monitor showed before drawing its first screen in full-screen mode – which will be Figure 4.

In practice, as my friend Moonbi points out, the slight distortions in the image arising from imperfections in the camera lenses, although they may be imperceptible at first, will compound on each other with each layer of nesting so that what is actually shown will be a distorted mess. Like a secret whispered from one person to another around a large circle, or a notice copied from copies of itself dozens of times recursively, the distortions – however tiny they may be at first – will grow exponentially to eventually dominate and destroy the image. One minute after full-screen mode has been commenced, the monitor will be showing a 600 times recopied image of Figure 4, which will be more than enough to obliterate the image.

But this is a thought experiment, so we allow ourselves the luxury of assuming that our lenses are somehow perfect, that there is zero distortion, and that each copy is indistinguishable from the original.

What if the screen blacks out?

Above we assumed that the display does not change until the computer is ready to redraw the full-screen image. If you go to YouTube, start playing a video and then click the full-screen icon (at bottom right of the image area) you will see that is not what it does. It actually makes the whole screen go black for a considerable portion of a second, and only then redraws the screen. If the camera program we are using does that then the screen will go black and remain black indefinitely.

If the black-out is shorter than YouTube’s, different behaviour may arise. It depends on four things:

  • the time from image capture to display, which we call the lag and denote by L,
  • the interval between image captures, which we call the frame period and denote by T, and
  • the time the blackout period commences and the time it ends, both measured in milliseconds from the last image capture before the blackout. We’ll denote these by t1 and t2.

If no image capture occurs during the blackout, which will happen if t2<T, the blackout will have no effect on the final image and we can ignore it. The eventual image will still be Figure 4.

If images are captured during the blackout, and the first image shown on the monitor after the blackout was captured during the blackout, the screen will thereafter remain black indefinitely. This will be the case if both L and T are less than t2.

The other possibility is that T < t2 and L > t2. In this case the first image shown after the blackout will be a picture of the pre-blackout monitor, ie Figure 4, but it will be followed sooner or later by one or more black images captured during the blackout. What will follow then will be an alternation between images showing Figure 4 and black images. It will look like a stroboscopic Figure 4. The strobe cycle will have period approximately equal to L, and the dark period will have approximately the same length as the blackout, ie t2 - t1. In essence, the monitor will indefinitely replay what it showed in the period of length L ending at the end of the blackout.
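The three cases can be summarised in a small function. This is just the case analysis above encoded in Python (names are mine); all four quantities are in the same time units, measured from the last capture before the blackout:

```python
def eventual_display(L, T, t1, t2):
    """Eventual monitor behaviour after a blackout over [t1, t2], with lag L
    and frame period T. t1 is not needed for the classification itself,
    only for the strobe's dark-period length (t2 - t1)."""
    if t2 < T:          # no capture happens during the blackout
        return "static Figure 4"
    if L < t2:          # first post-blackout image was captured in the dark
        return "black forever"
    return "stroboscopic Figure 4"   # T < t2 < L

print(eventual_display(L=50, T=100, t1=10, t2=60))
```

With the blackout shorter than one frame period, as in the example call, the blackout has no effect and Figure 4 persists.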

This black-screen issue will also arise if the monitor is an old-style, boxy CRT (Cathode Ray Tube) rather than an LCD, plasma or LED device. CRT screens typically draw around 75 images per second, made up of bright dots on a phosphor-coated screen, by shooting electrons at it. In between those drawings, the screen is black. That’s why those screens sometimes appear to flicker, especially when viewed through a video camera.

For a CRT screen, the image captured immediately before the redraw may be Figure 4, or black, or something in between – a partial Figure 4 – with a complex dependency on four parameters: the length of the exposure used by the sensor (the shutter speed), the time taken for a single redraw of the CRT screen, the refresh rate and the frame rate. Unless there were a particularly unusual and fortuitous relationship between those four numbers, the image on the monitor would not be Figure 4. I think instead it would be either just black or an unpredictable mess. But one would need more knowledge of CRT technology than I have to predict that.

Anyway, we don’t want to get bogged down in practical technology. This is principally a thought experiment. And for that ideal situation, we assume an LCD monitor with a computer that, upon receiving a full-screen-mode command, leaves the prior image in place until it is ready to redraw the image on the full screen. And the answer in that situation is Figure 4.

Doing a careful experimental verification of this is beyond me because, amongst other things, I don’t have a camera program that has a full-screen mode. But just for fun, I made a video like the one above under Question 2, where I focused the camera on the area of the monitor that displayed the image, trying to exclude the borders. It wobbles about, partly because of my shaky hands. It is mostly black, but there’s a blue smudge that appears in the lower half and wobbles around. I think that is a degraded version of the regressing images of the lower border. But under such uncontrolled conditions, who knows?

Here’s the video.

Question 4

We are now in a position to work out the answer to question 4, which is: ‘what will we see after we point the camera to the right of the monitor and then pan left until it exactly points at the monitor, and stop there?’

To start with, we know that, once the camera has panned to the final position of exactly capturing the image of the entire monitor, it will hold that image indefinitely, and that image will be whatever the monitor was showing immediately before the camera finished panning.

We’ll be a little more precise. A video camera captures a number f of images (‘frames’) per second, typically 24. The final image shown by the camera will be whatever the camera captured in the last frame it shot before completing the pan. The nature of that image depends on relationships between the pan speed, the frame rate and the width of the monitor screen, which we will explore shortly. But, to avoid suspense, let’s assume a frame rate of 24, that 32 frames are shot while performing the pan, and that the view to the right of the monitor display area, including the black right-hand frame of the monitor itself, is this (Figure 6):

pan_image_1

Then, on completion of the pan, the camera will show, and continue to show indefinitely thereafter, the following image (Figure 7 – note that the image was made by editing, not shot through a camera. My equipment is nowhere near precise enough to do this accurately):

pan_image_32

C’est bizarre, non?

Those of you that enjoy exploring intricate patterns may wish to read on, to see the explanation of this phenomenon. I will not be offended if most don’t.

The answer depends on the ratio of the speed at which the camera pans left to the width of the screen. If we want to be precise, these speeds and widths must be measured in degrees (angles) rather than millimetres. But millimetres are easier to understand, so we’ll use them and ignore the slight inaccuracy this introduces (otherwise I’d need to start using words like ‘subtend’, and we wouldn’t want that, would we?).

Say the camera rotates at a speed of s mm per second, and that it shoots f frames per second. Hence it shifts view leftwards by s/f mm per frame. So, if the width of the monitor’s display area is w mm, it shoots N=wf/s frames between when the camera first captures part of the monitor’s display area (on the monitor’s right side) and when the camera is in the final position where it exactly captures the view of the whole monitor display area, not counting the last frame. We label the positions of the camera at each of those frames as 1 to N, going from earliest to latest. We omit the last frame because we know that, once the camera is pointing exactly at the monitor, the image will remain fixed on whatever the monitor is showing at that time, which will be what the camera saw in position N.

By the way, we assume that N is an integer even though in practice it won’t be, because it will have a fractional part. It doesn’t make the calculation significantly more difficult if it’s a non-integer, but it is messier, longer, and the differences are not terribly interesting, so we’ll assume it’s an integer.

Divide the view to the right of the monitor’s display area, as shown in Figure 6 above, into N vertical strips of equal width. Number those strips 1 to N from left to right. We will call these ‘strip-images’, as each is a tall, thin picture. Next, number the positions the camera has when each frame is shot as follows:

  • Frame 0 is when the left-hand edge of the image captured by the camera coincides with the right-hand edge of the monitor display area, so that the camera captures exactly the image of Figure 6, ie strip-images 1 to N. At that time the monitor will be showing what the camera captured one frame earlier, which will be an image made up of strip-images 2 to N+1 (strip-image N+1 is what we can see in an area the same size as the other strip-images, immediately to the right of Figure 6)
  • The frames shot after that are labelled 1, 2, etc.

Label the times when the camera is in position 0, 1, 2 etc as ‘time 0’, ‘time 1’, ‘time 2’ etc.

With this scheme, Frame N will be the one that is shot when the camera is in the final position, when it exactly captures what is on the monitor, so that that image remains on the monitor indefinitely thereafter.

The following table depicts what is shown by the monitor and what is captured by the camera at each position, in the situation we used to produce Figure 7, which has N=32.

The rows are labelled by the camera positions/times. The first 32 columns (the ‘left panel’) correspond to the 32 vertical, rectangular strips of the monitor display area, numbered from left to right. The next 32 columns (the ‘right panel’) correspond to the strips of what can be seen to the right of the monitor. The number in each cell shows which strip-image can be seen by looking in that direction. The yellow shading in each row shows what images are captured by the camera at that time, to be shown on the monitor in the next row. (Figure 8):

figure-08-pan-table-full

A few key points of interest are:

  • The numbers in the right panel do not change from one row to the next, because rotating the camera does not change what can be seen to the right of the monitor.
  • The numbers in the left panel change with each row, to reflect that what was captured by the camera at the previous camera position was different from what was captured at the one before that, because of the camera’s movement.
  • The yellow area denoting what the camera captures moves to the left as we go down the table, reflecting the camera’s panning to the left.

There are lots of lovely number patterns in the left panel, which I will leave the reader to explore.

Here is a zoomed-in image of just the left panel for those whose eyes, like mine, have trouble making out small numbers (Figure 9):

figure-09-pan-table-left-panel

Referring back to Figure 7 we see that, as we move from the right to the left side, it has a series of eight vertical images of increasing width. The first one is just the monitor’s right-hand frame – a black plastic strip. The next is twice as wide and has the monitor frame plus the strip-image to its right. The one after that is three times the width, and so on. This corresponds to the last row of Figure 9:

5 6 7 8, 1 2 3 4 5 6 7, 1 2 3 4 5 6, 1 2 3 4 5, 1 2 3 4, 1 2 3, 1 2, 1

I have put commas between each contiguous set of strip-images. I call each such contiguous set a ‘sub-image’. The first sub-image is incomplete – being 5 6 7 8 instead of 1 2 3 4 5 6 7 8 – because N is not a triangular number.

The time tunnel applies here too, with a slightly different flavour. The newest sub-image is the one on the far right, composed solely of strip-image 1. This was captured from the world outside the monitor one frame period ago. The next, the ‘1 2’, was shot from the monitor last time, and came from the real world outside the monitor two frame periods ago. The oldest sub-image is the one on the left, which has been through the camera-monitor loop seven times, having first been captured from the real world eight frame periods ago.
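All of this bookkeeping can be reproduced with a short simulation. The Python sketch below (variable names are mine) assumes, as in the table, that the camera’s view slides left by one strip per frame and that the monitor shows what the camera captured one frame earlier:

```python
def final_pan_image(N):
    """Simulate the pan over a monitor modelled as N strips. At time t the
    camera captures the monitor's rightmost t strips followed by real-world
    strip-images 1..N-t; the monitor shows that capture one frame later."""
    monitor = list(range(2, N + 2))   # time 0: showing strip-images 2..N+1
    for t in range(N + 1):
        capture = monitor[N - t:] + list(range(1, N - t + 1))
        monitor = capture             # displayed at time t+1
    return monitor                    # static from here on

print(final_pan_image(32))
```

For N = 32 this produces exactly the last row quoted above: 5 6 7 8, then 1–7, 1–6, 1–5, 1–4, 1–3, 1–2, 1.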

It’s quite fun to trace the path of these images as they repeatedly traverse the camera-monitor loop, using the numbers in the above table. Here’s the lower part of the table showing how the first, third and sixth sub-images from the left (shaded blue, grey and pink respectively) make their way from the real world (to the right of the vertical dividing line), into the camera-monitor loop (to the left of that line), around that loop as many times as needed, and finally to the ultimate static image (Figure 10):

figure-10-pan-table-with-locii

My example with N=32 involves a high panning speed. Shooting 24 frames per second, the pan would need to be completed in 32/24 = 1.33 seconds. One would need extremely good equipment to accomplish that without getting a bounce or wobble when the camera stops at the end of the pan – and avoiding wobble is critical to getting the indefinite static picture we have discussed.

It may be that in order to avoid camera bounce one would need a slower pan, giving us a (perhaps much) higher N. What would be the outcome of that? Well, the right-most sub-image contains only one strip-image and, as we move left, each sub-image contains one more strip-image than the one to its right. So, if the number of sub-images is r, then N will be greater than the sum of the numbers from 1 to r-1 (the (r-1)th triangular number) and not exceeding the sum from 1 to r (the rth triangular number). A little maths tells us that this range runs from (r-1)r/2 to r(r+1)/2. For large N this means that there will be approximately √(2N) sub-images and the largest will be composed of about √(2N) strip-images. Since for a display width of w the width of a strip-image is w/N, the widest sub-image will have width about w√(2N)/N = w√(2/N), which gets smaller as N increases. If N=2048, corresponding to a very slow pan time of about 85 seconds, the widest sub-image would be narrower than the frame of my monitor, so all we would see in the final static image would be black plastic monitor frame, something like this (Figure 11):
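Those estimates are easy to check numerically. In this sketch, `pan_geometry` is a name I made up; it counts the sub-images exactly and compares the widest one against the √(2/N) approximation:

```python
import math

def pan_geometry(n):
    """For a pan captured in n strip-images, return the number of sub-images r
    and the widest sub-image's width as a fraction of the display width w."""
    r = 1
    while r * (r + 1) // 2 < n:   # smallest r whose triangular number is >= n
        r += 1
    return r, r / n               # widest sub-image spans r strips of width w/n

for n in (32, 2048):
    r, frac = pan_geometry(n)
    print(f"N={n}: {r} sub-images, widest = {frac:.5f}*w, "
          f"sqrt(2/N) = {math.sqrt(2 / n):.5f}")
```

For both N=32 and N=2048 the approximation happens to be exact (8 sub-images and a quarter of the width; 64 sub-images and 1/32 of the width), because those N values sit at convenient spots between triangular numbers.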

figure-11-final2048_smaller

The bars of light are from the screen reflecting on the shiny monitor frame. Because I couldn’t hold the camera straight, they are bigger at the bottom than at the top.

I will leave you with my synthesis of the sequence of 32 images that would be seen, given near-perfect equipment, in the 32-step pan that ends with Figure 7 above. They are simply a realisation in pictures of the patterns shown in Figure 9, when applied to the image in Figure 6. The sequence is in a pdf at this address. If you go to page 1 and then repeatedly hit Page Down rapidly, you will see a slow-motion video representation of what it would look like as the camera panned. You may need to download it first in order to be able to view it in single-page view, which is necessary in order to achieve a video-like effect. Alternatively, it is represented below, albeit somewhat more crudely, as a pretend film strip.

Andrew Kirk

Bondi Junction, February 2017

strip01-08
strip09-16
strip17-24
strip25-32


Pointing the Camera at the Monitor – A Puzzle

I was listening to a talk by Alan Watts about some aspect of Eastern mysticism. I can’t remember the exact context. I think he was describing the impossibility of truly understanding the nature of one’s own mind. He said that trying to use one’s mind to understand one’s own mind was ‘like pointing the camera at the monitor’.

I was immediately struck by this. Partly I was surprised at his using such a simile, which involves common enough concepts in 2017, in a talk that he gave in the sixties, when computers only existed in large research establishments and occupied enormous rooms. There was certainly no such thing as a webcam back then. I realised later that he probably had in mind a closed-circuit television arrangement, which they did have in the sixties.

But beyond that, I was struck by the fact that it’s actually a very interesting question – what does happen when one points the camera at the monitor? It’s a classically self-referential problem. But unlike some self-referential problems, like the question of the truth of the statement ‘This sentence is false’, it must have a precise answer, because we can point a camera at a monitor, and when we do that the monitor must show something. But what will it show?

There are a number of practical considerations that can lead us towards different types of answers. While each of those considerations leads to an interesting problem in its own right, I tried to remove as many of them as possible to make the problem as close to ‘ideal’ as I could. So here it is.

Imagine we have a computer connected to a monitor and a digital video camera. A webcam is a digital video camera but, since the camera we are imagining here needs to be extremely accurate, a high-quality professional video camera would be more suitable. The monitor uses a rectangular array of display pixels to display an image and the camera uses a sensor that is a rectangular array of light-sensitive pixels, and the dimensions of the display and the sensor, in pixels (not in millimetres) are identical.i

On the computer we run a program that shows the image recorded by the camera. The telecommunication program Skype is a well-known such program that can do that, amongst other things. There are also dedicated camera-only programs, which webcam manufacturers typically include on a CD bundled with the webcams they sell. Let’s call our program CamView (not a real program name). We start up CamView on the computer in a non-maximised window, which we’ll call the ‘CamView window’. Then we turn the camera on and point it at the monitor. We aim and focus the camera so precisely that an image of the display area of the monitor fills the image-display area of the CamView window. Ideally this would mean that each pixel on the camera’s sensor is recording an image of the corresponding pixel on the monitor screen. In practice there will be some distortion, but we’ll ignore that for now.

Question 1: what does the monitor show?

Question 2: Next we maximise the CamView window. What does the monitor show now?

Those questions are easy enough to answer, when we remember that the window for any computer program, in default mode, typically has an upper border with tool icons on it, a lower border with status info on it, and sometimes left or right borders as well.

These questions are fairly similar to the question of what one sees when one stands between two parallel, opposing mirrors, as is the case in some lifts (elevators).

Now comes the hard one. In most video-viewing computer programs there is an icon that, upon clicking, maximises the window and removes all borders so that the image-display area occupies the entire display area of the monitor. Call it the ‘full screen icon’ and say that we are in ‘full screen mode’ after it is clicked – until a command is given that terminates that mode and returns to the default mode – ie restores the borders etc. In full screen mode the display area of the monitor corresponds exactly to the images recorded by the camera’s sensor.

Question 3: We now click the full screen icon. Describe what appears on the monitor, and how it changes, from the instant before the icon is clicked, until ten minutes after clicking it – assuming the program remains in full screen mode for that entire time.

That is the difficult one. It took me a while to figure it out, and I was surprised by the answer. It is possible that what I worked out was wrong. If so, I hope that someone will point that out to me.

I have one more question, and it has an even more peculiar answer – one that I found quite charming.

Question 4: Assume the camera is mounted on a very stable tripod. Still in full-screen mode, we pan the camera to the right until it no longer shows any of the monitor. Then we pan the camera back at a constant speed until it again sees only the display area of the monitor, and we stop the panning at that point. What is visible on the monitor after the camera has panned back to the original position? Does that change subsequently? What does it look like ten minutes later? Does the monitor image depend on the panning speed, or on the number of frames per second the camera shoots? If so, how?

In order to avoid spoiling anybody’s fun in trying to work out the answers to these puzzles for themselves, I will not post answers now. I will post them a little later on. It will also take me a little while to make some nice pictures to help explain what I am talking about.

Andrew Kirk

Bondi Junction, February 2017

Footnotes

i Although most camera sensors have a 3:2 aspect ratio, which is different from the 16:9 aspect ratio of most modern computer monitors, it is possible on a sophisticated camera to alter the aspect ratio to 16:9, which is achieved by deactivating the sensor pixels in an upper and lower band of the sensor, so that the area used to record an image has the required aspect ratio. We’ll assume that is done and that the number of pixels in the active sensor area equals that on the monitor.


Hypotheticals, counterfactuals and probability

This essay considers the imagining of events that we do not know either to have occurred, or to be almost certain to occur in the future. Such imagining is everywhere in everyday speech, but we rarely stop to consider what we mean by it, or what effect it has on us.

It is dotted with numbered questions, so it can be used as a basis for a discussion.

Counterfactuals

A counterfactual is where we imagine something happening that we know did not happen.

This is fertile ground for fiction. Philip K Dick’s acclaimed novel ‘The Man in the High Castle’, written in 1962, depicts events in a world in which the Axis powers won World War II, and the USA has been divided into parts occupied by Japan and Germany. The movie ‘Sliding Doors’ is another well-known example, which imagines what ‘might have happened’ if Gwyneth Paltrow’s character hadn’t missed a train by a second as the sliding doors closed in front of her.

When something terrible happens, many people torment themselves by considering what would have happened if they, or somebody else, had done something differently:

  • What if I had been breathing out rather than in when the airborne polio germ floated by? (from Alan Marshall’s ‘I can jump puddles’)
  • If she hadn’t missed her flight and had to catch the next one (doomed to crash), she’d still be alive now.
  • What would life have been like if I hadn’t broken up with Sylvie / Serge?

We can also consider counterfactuals where the outcome would have been worse than what really happened, such as ‘What would my life have been like if I hadn’t met that inspirational person that helped me kick my heroin habit‘. But for some reason – so it appears to me – most counterfactuals that we entertain are where the real events are worse than the imagined ones. We could call these ‘regretful counterfactuals‘ and the other ones ‘thankful counterfactuals‘.

Then there are the really illogical-seeming ones, like the not-uncommon musing: ‘Who would I be [or what would I be like] if my parents were somebody else?‘ which makes about as much sense as ‘what would black look like if it were a lightish colour?’

Here are some questions:

  1. Why do we entertain counterfactuals? What, if any, benefits are there from considering regretful counterfactuals? What about thankful ones?
  2. Given that for many counterfactuals, consideration of them just makes us feel bad, could we avoid entertaining them, or is it too instinctive an urge to be avoidable?
  3. Do counterfactuals have any meaning? Given that Alan Marshall did breathe in, and did contract polio, what does it mean to ask ‘If he had been breathing out instead, would he have become a top-level athlete rather than an author?‘ Are we in that case talking about a person – real or imaginary – other than Alan Marshall, since part of what made him who he is, was his polio?

That last question can lead in some very odd directions. My pragmatic approach is that counterfactuals are made-up stories about an imaginary universe that is very similar to this one, but in which slightly different things happen. Just as we make up stories about non-existent lands, princesses and far away galaxies, we can make up stories about imaginary worlds that are very similar to this one except in a handful of crucial respects.

Some philosophers insist that counterfactuals are not about imaginary people and worlds but about the real people we know. My objection to that is that, for example, the Marshall counterfactual cannot be about the real Alan Marshall, because he had polio. It can only be about an imaginary boy whose life was almost identical to Marshall’s up to the point when the real one contracted polio. My opponents (who would include Saul Kripke, whom we mention later) would counter that polio is not what defines Alan Marshall, that it is an ‘inessential’ aka ‘accidental’ property of that person, and changing it would not change his being that person. Which raises the question of what, if any, properties are essential, such that changing them would make the subject a different person.

Old Aristotle believed that objects, including people, have essential and inessential properties, and wrote reams about that. In the Middle Ages Thomas Aquinas picked up on that and wrote many more reams about it. The ‘essential properties’ of an object are called its ‘essence’, and believing in such things is called ‘Essentialism’. That is how certain RC theologians are able to claim that an object that looks, feels, smells, sounds, tastes and behaves like a small, round, white wafer is actually the body of Jesus of Nazareth – apparently because, although every property we can discern is that of a wafer, the ‘essential’ properties (which we cannot perceive) are those of Jesus, thus its essence is that of Jesus. I tried for years to make sense of that and believe it, but all it succeeded in doing was giving me a headache and making me sad. For me, essentialism is bunk.

  4. Can you make any sense of Essentialism? If so can you help those of us who can’t, to understand it?

I can’t help but muse that maybe thankful counterfactuals have some practical value, as they can enable us to put our current sorrows into perspective. They are a very real way of Operationalizing (I know, right?) what Garrison Keillor suggests is the Minnesotan state motto – ‘It could be worse‘.

Maybe regretful counterfactuals sometimes have a role too, when they encourage us to learn from our mistakes and be more careful in the future. But they are of no use in the three examples given above. What are we going to learn from them: Never breathe in? Never fly on an aeroplane? Never break up with a romantic partner (no matter how unsuitable the match turns out to be)?

If we do something that leads to somebody else suffering harm, considering the regretful counterfactual can be useful. If I hadn’t done that, they wouldn’t be so sad. How can I make it up to them? I know, I’ll do such-and-such. That won’t fix it completely, but it’s all I can think of and at least it’ll make them feel somewhat better.

But once we’ve done all we can along those lines, the counterfactual has outlived its usefulness and is best dismissed. Otherwise we end up punishing ourselves with pointless guilt, which benefits nobody. Yet we so often do this anyway, perhaps because we can’t help it, as speculated in question 2.

I am completely useless at banishing guilt. But the techniques I have, feeble as they are, revolve around reminding myself that the universe is as it is, and cannot be otherwise. The past cannot be changed. If I had not done that hurtful thing I would not have been who I am, and the universe would be a different one, not this one. I am sorry I did it, and will do my best to make restitution, and to avoid causing harm in that way again. But the counterfactual of my not doing it is just an imaginary story about a different universe, that is (once I’ve covered the restitution and self-improvement aspects) of no use to anybody, and not even a good story. Better to read about Harry Potter’s imaginary universe instead.

This universe-could-not-have-been-otherwise approach is currently working moderately well in helping me cope with the recent Fascist ascendancy in the US. There are so many ‘if only…’ situations we could torture ourselves with: ‘If only the Democrats had picked Bernie Sanders’, ‘If only Ms Clinton hadn’t made the offhand comment about the basket of deplorables’, ‘If only the Republicans had picked John Kasich’. Those ‘If only’s are about a different universe, not this one. They could not happen in this universe, because in this universe they didn’t happen.

Counterfactuals also come into Quantum Mechanics. Arguably the most profound and shocking finding of quantum mechanics is Bell’s Theorem which, together with the results of a series of experiments that physicists did after the theorem was published, implies that either influences can travel faster than light – which appears to destroy the theory of relativity that is the basis of much modern physics – or Counterfactual Definiteness is not true. Counterfactual Definiteness states that we can validly and meaningfully reason about what would have been the result if, in a given experiment, a scientist had made a different type of measurement from the one she actually made – eg if she had pointed a particular measuring device in a different direction. Many find it ridiculous that we cannot validly consider what would have happened in such an alternative experiment, but that (or the seemingly equally ridiculous alternative of faster-than-light influences) is what Bell’s Theorem tells us, and the maths has been checked exhaustively.

Hypotheticals

A counterfactual deals with the case where we imagine something happening that we know did not happen. What about when we don’t know? I use the word hypothetical, or possibility, to refer to cases where we consider events that, for all we know, may or may not occur in the history of the universe. These events may be past or future:

  • a past hypothetical is that Lee Harvey Oswald shot JFK from the book depository window. Some people believe he did. Others think the shot came from the ‘grassy knoll’.
  • a future hypothetical is that the USA will have a trade war against China.

What do we mean when we say those events are ‘possible’ or, putting it differently, that they ‘could have happened‘ (for past hypotheticals) or that they ‘could happen‘ (for future hypotheticals)? I suggest that we are simply indicating our lack of knowledge. That is, we are saying that we cannot be certain whether, in a theoretical Complete History of the Earth, written by omniscient aliens after the Earth has been engulfed by the Sun and vaporised, those events would be included.

Some people would insist that the future type is different from the past type – that while a past hypothetical is indeed just about a lack of knowledge about what actually happened, a future hypothetical is about something more fundamental and concrete than just knowledge. This leads me to ask:

  1. Does saying that a certain event is ‘possible’ in the future indicate anything more than a lack of certainty on the part of the speaker as to whether it will occur? If so, what?

I incline to the view that it indicates nothing other than the speaker’s current state of knowledge. What some people find uncomfortable about that is that it makes the notion of possibility depend on who is speaking. For a medieval peasant it is impossible that an enormous metal device could fly. For a 21st century person it is not only possible but commonplace. As Arthur C Clarke said ‘Any sufficiently advanced technology is indistinguishable from magic.’ To us, mind-reading is impossible, but maybe in five hundred years we will be able to buy a device at the chemist for five dollars that reads people’s minds by measuring the electrical fields emitted by their brain.

Under this view, the notion of possibility is mind-dependent. What would a mind-independent notion of possibility be?

There is a whole branch of philosophy called ‘Modal Logic’, and an associated theory of language – from the brilliant logician Saul Kripke – that is based on the notion that possibility means something deep and fundamental that is not just about knowledge, or minds. To me the whole thing seems as meaningful as debates over how many angels can dance on the head of a pin, but maybe one day I will meet somebody that can demonstrate a meaning to such word games.

Sometimes counterfactuals sound like past possibilities. That happens when we say that something which didn’t happen, could have happened. Marlon Brando’s character Terry in ‘On the Waterfront‘ complains ‘I coulda been a contender … instead of a bum, which is what I am‘. As I said above, I don’t think it makes literal sense to say it could have happened, since it didn’t. But if we didn’t know whether it had happened or not, we wouldn’t have been surprised to find out that it did happen. So in a sense we are saying that a person in the past, prior to when the event did or didn’t occur, evaluating it from that perspective, would regard it as possible. Brando’s Terry was saying that, back in the early days of his boxing career, he would not have been at all surprised if he had become a star. But he didn’t, and now it was too late.

What would happen / have happened next?

With both counterfactuals and hypotheticals, we often ask whether some other thing would have happened if the first thing had happened differently from how it did. For instance:

  • [counterfactual] If the FBI director had not announced an inquiry into Hillary Clinton’s emails days before the 2016 US presidential election, would she have won?
  • [past hypothetical] If Henry V really did give a stirring speech like the ‘band of brothers’ one in Shakespeare’s play, exhorting his men to fight just for the glory of having represented England, God and Henry, were any of the men cynical about his asking them to risk death just in order to increase Henry’s personal power?
  • [future hypothetical] If Australia’s Turnbull government continues with its current anti-environment policies, will it be trounced at the next election?

Which leads to another question:

  1. What exactly do these questions mean?

The first relates to something that we know did not happen and the other two relate to what is currently unknowable.

My opinion is that, like with counterfactuals, they are about making up stories. In the US election case we are imagining a story in which certain events in the election were different, and we are free, within the bounds of the constraints imposed by what we know of the laws of nature, to imagine what happened next. Perhaps in the story Ms Clinton wins. Perhaps she then goes on to become the most beloved and successful president the country has ever had, overseeing a resurgence of employment, creativity, and brotherly and sisterly love never before encountered. Or perhaps she declares martial law, suspends the constitution and becomes dictator for life, building coliseums around the country where Christians and men are regularly fed to lions. Within the bounds of the laws of nature we are free to make up whatever story we like.

The same goes for the past hypothetical of Henry’s speech. We can imagine the men swooning in awe and devotion, murmuring Amen after every sentence, or we can imagine them rolling their eyes and making bawdy, cynical quips to one another – but nevertheless eventually going in to battle because otherwise they won’t be paid and their families will starve.

However, the future hypothetical seems to be about more than a made-up story. If the first thing happens – continued anti-environmentalism – then we will definitely know after the next election whether the second thing has also happened. At that point it becomes a matter of fact rather than imagination.

To which I say, so what? Until it happens, or else it becomes clear that it will not happen, it is a matter of future possibilities and can be covered by any of the scientifically-valid imaginative scenarios we can dream up. It is only if the scientific constraint massively narrows down those scenarios that it has significance. If, for instance, we could be sure that any government that fails to make a credible attempt to protect the environment will be booted out of office, our future possibility would become a certainty: if the government doesn’t change its track then it will be ejected. But in politics nothing is ever that certain. Other issues come up and change the agenda, scandals happen, natural and man-made disasters, personal retirements and deaths of key politicians. At best we can talk about whether maintaining the anti-environment stance makes it more probable that the government will lose office. Which leads on to the next thorny issue.

Probability

Probability, aka chance, aka risk, aka likelihood and many other synonyms and partial synonyms, is a word that most people feel they understand, yet nobody can explain what it means.

What do we mean when we say that the probability of a tossed coin giving heads is 0.5? Introductory probability courses often explain this by saying that if we did a very large number of tosses we would expect about half of them to be heads. But if we ask what ‘expect’ means we find ourselves stuck in a circular definition. Why? Because what we ‘expect’ is what we consider most ‘likely’, which is the outcome that has the highest ‘probability’. We cannot define ‘probability’ without first defining ‘expect’, and we cannot define ‘expect’ without first defining ‘probability’ or one of its synonyms.

We could try to escape by saying that what we ‘expect’ is what we think will happen, only that would be wrong. The word ‘will’ is too definite here, implying certainty. When we say we expect a die will roll a number less than five, we are not saying that we are certain that will be the case. If it were, and we rolled the die one hundred times in succession, we would have that expectation before each roll, so we would be certain that no fives or sixes occurred in the hundred rolls. Yet the probability of getting no fives or sixes in a hundred rolls is about two in a billion billion, which is not very likely at all. We could dispense with the ‘certainty’ and instead say that we think a one, two, three or four is the ‘most probable’ outcome for the next roll. But then we’re back in the vicious circle, as we need to know what ‘probable’ means.
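The ‘two in a billion billion’ figure is easy to check: each roll avoids a five or six with probability 4/6, and the hundred rolls are independent, so the probabilities multiply. A one-line sketch:

```python
# Probability that a hundred independent die rolls contain no five or six.
p = (4 / 6) ** 100
print(p)  # about 2.5e-18, i.e. roughly two-and-a-half in a billion billion
```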

  1. What does ‘expected’ mean?

There is a formal mathematical definition of probability, that removes all vagueness from a mathematical point of view, and enables us to get on with any calculation, however complex. Essentially it says that ‘probability’ is any scheme for assigning a number between 0 and 1 to every imaginable outcome (note how I carefully avoid using the word ‘possible’ here), in such a way that the sum of the numbers for all the different imaginable outcomes is 1.

But that definition tells us nothing about how we assign numbers to outcomes. It would be just as valid to assign 0.9 to heads and 0.1 to tails as it would to assign 0.5 to both of them. Indeed, advanced probability of the kind used in pricing financial instruments involves using more than one scheme at the same time, each assigning different numbers (probabilities) to the same outcome.
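That bare formal requirement can be captured in a couple of lines. Here `is_probability_assignment` is my own illustrative name, not anything from a standard library, and I check only what the definition demands:

```python
def is_probability_assignment(probs, tol=1e-12):
    """True if every outcome is assigned a number in [0, 1] and the
    numbers sum to 1 - the only things the formal definition requires."""
    values = list(probs.values())
    return all(0 <= p <= 1 for p in values) and abs(sum(values) - 1) < tol

# Both of these satisfy the definition equally well:
print(is_probability_assignment({'heads': 0.5, 'tails': 0.5}))  # True
print(is_probability_assignment({'heads': 0.9, 'tails': 0.1}))  # True
```

Notice that nothing in the check prefers the fair assignment over the biased one; that preference has to come from somewhere outside the mathematics.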

This brings us no closer to understanding why we assign 0.5 to heads.

Another approach is to say that we divide up the set of all potential outcomes as finely as we can, so that every outcome is equally likely. Then if the number of ‘equally likely’ outcomes is N, we assign the probability 1/N to each one.

That seems great until we ask what ‘equally likely’ means, and then realise (with a sickening thud) that ‘equally likely’ means ‘has the same probability as’, which means we’re stuck in a circular definition again.

  1. What does ‘equally likely’ mean?

After much running around in metaphorical circles, I have come to the tentative conclusion that ‘likely’ is a concept that is fundamental to how we interpret the world, so fundamental that it transcends language. It cannot be defined. There are other words like this, but not many. Most words are defined in terms of other words, but in order to avoid the whole system becoming circular, there must be some words that are taken as understood without definition – language has to start somewhere. Other examples might be ‘feel’, ‘think’ and ‘happy’. We assume that others know what is meant by each of these words, or a synonym thereof, and if they don’t then communication is simply impossible on any subject that touches on the concept.

Or perhaps ‘likely’ and ‘expect’ may be best related to a (perhaps) more fundamental concept, which is that of ‘confidence’, and its almost-antonym ‘surprise’. Something is ‘likely’ if we are confident – but not necessarily certain – that it will happen, which is to say that we would be somewhat surprised – but not necessarily dumbfounded – if it did not happen. I think the twin notions of confidence and surprise may be fundamental because even weeks-old babies seem to understand surprise. The game of peek-a-boo relies on it entirely.

Once we have these concepts, I think we may be able to bootstrap the entire probability project. The six imaginable dice roll numbers will be equally likely if we would be very surprised if out of six million rolls, any of the numbers occurred more than two million times, or not at all.
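The ‘surprise’ test can be run as a simulation, at a smaller scale (six hundred thousand rolls rather than six million, to keep it quick); the seed value is arbitrary, chosen only so the run is reproducible:

```python
import random

random.seed(0)  # arbitrary seed, fixed so the run is reproducible
counts = {face: 0 for face in range(1, 7)}
for _ in range(600_000):
    counts[random.randint(1, 6)] += 1
print(counts)  # each count lands close to 100,000; a face at 0 or
               # at 200,000 would astonish us
```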

There are various frameworks for assigning probabilities to events that are discussed by philosophers thinking about probability. The most popular are

  • the Frequentist framework, which bases the probability of an event on the number of times it has been observed to occur in the past;
  • the Bayesian approach, which starts with an intuitively sensed prior probability, and then adjusts it to take account of subsequent observations using Bayes’ Law; and
  • the Symmetry approach, which argues that events that are similar to one another via some symmetry should have the same probability.

It would make this essay much too long to go into any of these in greater detail. But none of them lay out a complete method. I suspect they all have a role to play in how we intuitively sense probabilities of certain simple events. But I feel that there is still some fundamental, unanalysable concept of confidence vs surprise that is needed to cover the gaps left by the large vague areas in each framework.

Here is one last question to consider:

  1. A surgeon tells a parent that their three-year old daughter, who is in a coma with internal abdominal bleeding following a car accident, has a 98% chance of a successful outcome of the operation, with complete recovery of health. In the light of the above discussion, it seems that nobody can explain what that 98% means. Yet despite the lack of any explicable meaning, the parent is so relieved that they dissolve in tears. Why?

Andrew Kirk

Bondi Junction, January 2017


Drawing Stars. Number Two in a Series of Adult Amusements

It has become apparent to me that the world needs another instalment in my series of suggestions for Adult Amusements. There have been complaints. Some are from pedants, who insist that a single monograph about standing on one leg does not constitute a series. Others, more gravely, have expressed concern about the occupational health and safety implications of people trying to balance on one leg while their mind is distracted by other things, like budgets, work-shopping and brain-storming, not to mention trying to be Pro-Active, Customer-focused, Agile, Continuously Improving and Outside the Box all at the same time.

So, belatedly, here it is. I hope that this will be considered less dangerous, being a mostly sedentary activity.

When in business meetings that do not hold us riveted with fascination, we should draw stars!

But not just any old stars. Special stars. Mathematical ones. Stars with prime numbers in them.

It is the dearest wish of every little child, after that of being a firefighter or an astronaut, to draw excellent stars in their pictures. But a wish is one thing, and its fulfilment is another. When as a child I tried to draw stars, the only technique I could think of was to draw a spiky circle: start anywhere, and draw a perimeter around an imaginary centre that consists of a series of spikes. Maybe this works OK for others, but for me it typically produced a result like this (Figure 1):
fig-1-naive-starc

It invariably goes wonky, because it’s hard to keep track of where the centre is supposed to be, and to make the points point away from that centre. Mine looks like a confused kookaburra.

When one gets a little bit older and more sophisticated, one learns – by instruction or by observation of others – the two standard techniques for drawing stars. These are the six-pointed star, which is made by drawing an upside-down triangle slightly above a right-way-up one (Figure 2):

fig-2-6-pointed-starc
and the five-pointed Pentacle, which requires a little more coordination, but can be done without taking the pencil off the paper (which I call a ‘single pencil stroke’), by following the arrows as shown (Figure 3):
fig-3-5-pointed-pentaclec

Learning to draw either of these stars is on a par with learning to ride a bike, in terms of the sense of achievement, wonder and progress. All of a sudden, one can construct an image of symmetry and elegance with the stroke of a pencil – or two strokes, in the case of the six-pointer.

I was very happy with this advance in technology for a long time, but then came the day when I hankered after drawing more bristly stars, with seven, ten or twelve points. I tried, but found I was just reverting back to the method of figure 1, and my bent stars just did not satisfy me.

One could of course take out a protractor and compass and, with a bit of preliminary calculation, measure out the exact angles needed for each point, and draw the star using that. But firstly that’s cheating, and against the Spirit of Doodling, and secondly it might cause others to notice that one is not paying attention to whatever the meeting is discussing.

I thought I was destined to be forever that object of public ridicule – the man with the two-star repertoire. But just as I was starting to come to terms with this being my fate, a discovery came to me in a blinding flash: instead of trying to draw spikes in a circle, I needed to generalise the methods used for the five and six-pointer. Well, to cut a long story short, I tried that, it worked, and now I can draw stars with any number of points up to about fifty.

Here is the method that generalises the way we draw five-pointed stars:

Drawing a star with a single pencil stroke

  • Step 1: pick the number of points N, and draw that number of points, as evenly spaced as you can, around the perimeter of an imaginary circle. If there is a large number of points it’s best to first draw points at the 12, 3, 6 and 9 o’clock locations and then put one quarter of the remaining points into each of the four quadrants. To be precise, divide N by 4 to get a quotient Q and a remainder R. Then draw Q points in each of R quadrants of the circle, and Q-1 points in the other quadrants. Ideally, if R=2, adjacent quadrants should not contain the same number of points, but it doesn’t matter very much if that is forgotten.
  • Step 2: pick a number K, greater than 1, that has no common factors with N. To make the spikiest possible star (ie with the thinnest spikes), choose K as the largest whole number less than N/2 that has no common factors with N. For instance if N=12 that number is 5. If N=13 it is 6. If N=6, 4 or 3 there is no possible K, and this method cannot be used. I’m pretty sure that, for any N greater than 6, there is at least one K for which this method will work, but I have not proved that yet.
  • Step 3: choose your favourite direction in which you want to draw. Unless you are a pan-dimensional creature drawing on paper with three or more dimensions, your only possible choices are clockwise or anti-clockwise.
  • Step 4: starting at any point, draw a straight line from that point to the point that is K steps away from it, hopping from point to point around the circumference in the chosen direction. We can call K the ‘side length’, since it is the length of the line that connects one point to another.
  • Step 5: repeat step 4 until you get back to the starting point.

If this process is executed carefully, you will have drawn a star that has a point at every one of the points you drew in step 1. And, if you want, you can do all the actual line drawing in steps 4 and 5 without ever taking your pencil off the paper.
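For those that like mathsy stuff, the steps above can be sketched in a few lines of Python. The function names are my own inventions, just for illustration: one lists the order in which the points are visited by the single pencil stroke, and the other computes the ‘spikiest’ K from Step 2.

```python
from math import gcd

def single_stroke_order(n, k):
    """Order in which the n points are visited when drawing an
    n-k star in a single pencil stroke (hopping k points each time)."""
    if gcd(n, k) != 1:
        raise ValueError("n and k share a factor: one stroke cannot reach every point")
    order, p = [0], k % n
    while p != 0:          # keep hopping until we return to the start
        order.append(p)
        p = (p + k) % n
    return order

def spikiest_k(n):
    """Largest whole number below n/2 with no common factor with n
    (the choice in Step 2 that gives the thinnest spikes)."""
    candidates = [k for k in range(2, (n - 1) // 2 + 1) if gcd(n, k) == 1]
    if not candidates:
        raise ValueError("no valid k: a single-stroke star is impossible for this n")
    return max(candidates)

print(single_stroke_order(5, 2))  # the pentacle: [0, 2, 4, 1, 3]
print(spikiest_k(12))             # 5, as in the example in Step 2
```

Tracing `single_stroke_order(5, 2)` by hand reproduces exactly the arrows in Figure 3.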

Here is a depiction of that process for an eleven-pointed star with side length 5:

montage_11_5b

And here is a depiction of this process for a sixteen-pointed star with side length 5:
montage_16_5b
Why do we not allow the side length K to be 1? That’s because if we do, we just get an N-sided shape which, ignoring any irregularities in our drawing, is a regular polygon, like this, for N=12 (a ‘dodecagon’):
fig-4-regular-polygon

Now the thing about stars is that they are not convex, while regular polygons are. Using the word ‘vertex’ for a place where two edges of a shape meet, an N-pointed star has 2N vertices, of which N are points – the outermost part of a peninsula (if we imagine the shape as an island in an ocean) – and the other N are the innermost part of a bay. As we go around the vertices of a star they alternate between peninsula and bay. So a regular polygon is not a star because it has no bays, and that’s why K must be more than 1.

Stars with more than one pencil stroke

We observed that the above method does not work for N=6. But we know we can draw a six-pointed star, using two pencil strokes to draw two overlapping triangles. We can use the approach taken there to invent many more stars. In fact, for an N-pointed star there are M different types we can draw, where M is (N+1)/2-2, rounded down to a whole number. Each of these shapes corresponds to using a different value of K, from 2 up to the biggest whole number below N/2.
Here is how we do it:

  • Step 1, for picking N and drawing the points around an imaginary circle, is the same as above.
  • Step 2: pick K as any whole number greater than 1 and less than N/2.
  • Do steps 4 and 5 from above. This will draw a shape that is either a star or a polygon. Now comes the tricky bit.

If the shape you drew has not touched all the N points around the circle, repeat the process starting on a point that has not been touched yet. I like doing this with a different colour pencil, as it helps me see the pattern and avoid getting confused.

Repeat that process, using a different colour pencil each time, until all points have been touched.

You will now have an N-pointed star, made up of a number of identical overlapping shapes, which are either all polygons or all stars.

For those that like mathsy stuff, the number of overlapping shapes – the number of pencil strokes required – will be the greatest common factor of N and K. It’s fun to try to work out why that is.
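As a sketch of why the greatest common factor appears, here is a short Python function (the name is my own) that mimics the multi-stroke process: keep starting a new pencil stroke at an untouched point until every point has been visited.

```python
from math import gcd

def strokes(n, k):
    """Split the n points into the separate pencil strokes that the
    multi-stroke method produces for an n-k star."""
    untouched = set(range(n))
    result = []
    while untouched:
        start = min(untouched)        # begin a new stroke at an untouched point
        stroke, p = [], start
        while True:
            stroke.append(p)
            untouched.discard(p)
            p = (p + k) % n           # hop k points around the circle
            if p == start:            # the stroke has closed up
                break
        result.append(stroke)
    return result

# The traditional 6-2 star: two triangles.
print(strokes(6, 2))                  # [[0, 2, 4], [1, 3, 5]]
# A 16-6 star: gcd(16, 6) = 2 strokes, each an 8-3 star.
print(len(strokes(16, 6)))            # 2
```

Each stroke closes up after N/gcd(N, K) hops, which is why the number of strokes comes out as gcd(N, K).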

The traditional six-pointed star in figure 2 above is what you get under this method when you use N=6 and K=2. Here are a couple of others:
fig-6-10-2-star

fig-5-10-4-star-c

If we are going to draw a lot of different stars, we need names for them. We could call the star drawn with N points and side length K an ‘N-K star’, so that the pentacle is a 5-2 star and the traditional six-pointer is a 6-2 star.

Thin stars

If we wanted to, where N is even, we could let K be N/2. What we get then is this sort of thing:
fig-7-10-5-star
The shape we have drawn with each pencil stroke is a single line between a point and the point directly opposite it. Strictly speaking, this too is a star, but I mostly leave it out because it’s not as interesting as the others: (1) everybody knows how to draw a star like that; (2) as any five-year-old would tell us, that’s not what stars look like in pictures of things in the night; and (3) it has no inside, so we can’t colour it in all yellow (well, actually the one I drew has a tiny little inside in the middle, because it’s not perfectly symmetrical. But a more accurate drawing would have all the lines going exactly through the middle of the circle, so that there’s no inside at all).

Other things

So now you know how to do lots of great stars. You need never be bored in a meeting again. Imagine if you started drawing all the possible stars, starting at the smallest number of points and going up in side-lengths and points until the meeting finished. Leaving out the too-easy ‘thin stars’, you would draw the following stars:

5-2, 6-2, 7-2, 7-3, 8-2, 8-3, 9-2, 9-3, 9-4, 10-2, 10-3, 10-4, 11-2, 11-3, 11-4, 11-5, 12-2, and so on.

Just drawing those, given a due amount of tongue-stuck-into-side-of-mouth-concentration, should be enough to get you through at least a half hour of Death By Powerpoint.
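For the record, the list of stars above can be generated in a couple of lines of Python (a sketch; the function name is my own). It takes every side length K from 2 up to, but not including, N/2 – which is exactly the rule that leaves out the thin stars.

```python
def star_names(max_n):
    """All n-k star names with 2 <= k < n/2 (thin stars k = n/2 excluded),
    in the order you'd doodle them through a meeting."""
    names = []
    for n in range(5, max_n + 1):
        for k in range(2, (n - 1) // 2 + 1):   # largest k strictly below n/2
            names.append(f"{n}-{k}")
    return names

print(", ".join(star_names(11)))
# 5-2, 6-2, 7-2, 7-3, 8-2, 8-3, 9-2, 9-3, 9-4, 10-2, 10-3, 10-4, 11-2, 11-3, 11-4, 11-5
```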

But let’s not forget our roots. With a very few exceptions, we all started off drawing stars like Figure 1. There is a touching ingenuousness about such stars, and I think it’s good to draw them as well. Often really interesting shapes arise when we do, looking like monsters or funny animals. And one good thing about that way is that you don’t have to decide how many points the star will have before you begin. You just draw spikes around a circle until you get back to the start. I’ll sign off by doing that for a star with LOTS of points (it ended up being 21), and following it up with a series of the nine different stars with the same number of points drawn by the above method.

I think that each has a certain appeal, in a different way.

fig-8-freehand-v-pointy

21-2

21-3

21-4

21-5

21-6

21-7

21-8

21-9

21-10


Mastering maths monsters

‘Partial differentiation’ is an important mathematical technique which, although I have used it for decades, always confused me until a few years ago. When I finally had the blinding insight that de-confused me, I vowed to share that insight so that others could be spared the same trouble (or was it just me that was confused?). It took a while to get around to it, but here it is:

https://www.physicsforums.com/insights/partial-differentiation-without-tears/

My daughter Eleanor made a drawing for it, of a maths monster (or partial differentiation monster, to be specific) terrorising a hapless student. The picture only displays in a small frame at the linked site, so I’m reproducing it in all its glory here.

Andrew Kirk

Bondi Junction, October 2016


Acoustic ‘beats’ from mismatched frequencies

Here’s a piece I wrote explaining the mathematics behind the peculiar phenomenon of acoustic ‘beats’.

https://www.physicsforums.com/insights/acoustic-beats-mismatched-musical-frequencies/

It’s a bit maths-y. But for those that don’t love maths quite as much as I do, it also has some interesting graphics and a few rather strange sound clips.

Andrew Kirk

Bondi Junction, August 2016