Pointing the Camera at the Monitor – An Answer

Here are my answers to the puzzle about what happens when we point a camera at its monitor. The problem has a flavour of infinite regress about it, and sounds a little like a Buddhist koan – a question designed to make us realise (amongst other things) the limitations of logic. But whereas a ‘correct’ answer to a koan is usually something bizarre like barking like a dog, or hitting the questioner with a stick, the four questions I posed do actually have perfectly logical answers. And I find them quite interesting.

Let’s kick things off by showing a picture of what one might see when one looks into the webcam that is perched on top of the monitor, looking outwards. This is a screenshot of what my monitor showed when I did that (Figure 1).

[Figure 1: screenshot of the non-maximised Cheese window, camera pointed at the author]

The image is framed within the window labelled ‘Cheese’ – which is the name of the webcam program I was using. That’s me wearing the red cravat.

Question 1

When we turn the camera around and point it at the monitor, we will see an infinite regress of windows within windows, as the whole picture is reduced and fitted into the image area where I appear above. That reduce-and-insert step is then repeated as many times as it takes until the reduced image gets down to a single pixel and can contain no further nested windows. Here’s an image I made of what it should look like (Figure 2):

[Figure 2: mock-up of the infinite regress of nested Cheese windows]

In every window, the green desktop background, the desktop icons and the file explorer window to the left are reproduced, and the series shrinks off into the distance. I can tell you it was pretty fiddly putting the tiniest innermost parts into that picture. Infinitely small objects are notoriously difficult to manipulate.

The picture looks a little like a classical picture that uses perspective to show a long, straight road disappearing into the distance, with the point of disappearance at which all the lines converge being in the top-right quadrant of the screen.

But there’s an important and fascinating difference: the dimension along which the images disappear here is of time, not distance. That’s because each nested window is an image captured by the camera’s sensor a short interval earlier than the window that contains it. That interval will vary slightly as we move through the nested sequence, based on the relationship between the rate at which the monitor screen is redrawn (the ‘refresh rate’) and the number of snapshot images captured per second by the camera sensor (the ‘frame rate’). But it will always be more than some minimum value that depends on how long the information takes to travel along the wire from the sensor, through any processor chips in the camera, along another wire to the computer, through any processing algorithms in the computer, and then through the video cable to the monitor. Even without knowing anything about the computer, we know that that time – called the ‘lag’ – will be greater than the lengths of the wires involved, divided by the speed of light (because electrical signals cannot travel faster than light).
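To get a feel for how loose that light-speed bound is, here is a minimal Python sketch (the three-metre total cable run is my own illustrative assumption, not a measured value):

```python
# Lower bound on lag imposed by signal travel time alone.
# Assumes a total cable run of 3 m (an illustrative guess); electrical
# signals cannot travel faster than light, so this is a hard floor.
SPEED_OF_LIGHT = 3.0e8  # metres per second

def minimum_lag_seconds(total_wire_length_m: float) -> float:
    """Time for a signal to traverse the wires at the speed of light."""
    return total_wire_length_m / SPEED_OF_LIGHT

print(f"Light-speed floor: {minimum_lag_seconds(3.0) * 1e9:.0f} ns")
# -> about 10 ns. Observed webcam lag is tens of milliseconds, so
# processing in the camera and computer, not wire length, dominates.
```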

So each nested window is earlier than the one outside it, and as we look through the sequence of windows towards the point of convergence at infinity, we are looking back through time!

Question 2

Now we maximise the Cheese window. First let’s see what it looks like with the camera the correct way around, pointing at me (Figure 3):

[Figure 3: screenshot of the maximised Cheese window, camera pointed at the author]

You can tell from my expression that I’m quite enjoying this little exercise, can’t you?

Here there is nothing outside the Cheese frame, but the Cheese frame still has a broad, non-image bar at the bottom, a narrower bar at the top, and black vertical bars at either side, which are needed to preserve the image’s ‘aspect ratio’ – the ratio of its width to its height.

With that setting, we turn the camera around to face the monitor, and this is what we would see (Figure 4):

[Figure 4: mock-up of the receding frames in the maximised window]

The lower, upper, left and right bars are reproduced as a series of receding frames, and there is nothing in view other than the receding frames. There is no room left for an actual image of anything other than frames.

Figure 4 gives an even better sense of the ‘time tunnel’ that we mentioned in the previous section. Those white borders really do regress away in a spooky way. It looks like something out of Doctor Who.

The ratio of the height of one window to the height of the window immediately inside it is 1/(1-p), where p is the sum of the heights of the upper and lower margins of the outermost window divided by the total height of the outermost window. The ratio of the widths is the same. In this case it looks like p is around 1/5, so each window will be about 4/5 of the height of the window that contains it.
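As a quick check of that geometry, here is a minimal Python sketch iterating the 1/(1-p) ratio; the p = 1/5 margin fraction is the eyeballed estimate above, and the 300 mm display height is the figure assumed later in this post:

```python
# Each nested window is (1 - p) times the height of its parent, where
# p is the fraction of the outermost window occupied by the margins.
p = 1 / 5            # margin fraction eyeballed from Figure 4
height_mm = 300.0    # assumed height of the monitor's display area

for n in range(6):
    print(f"window {n}: {height_mm:7.2f} mm")
    height_mm *= 1 - p   # each level of nesting shrinks by 4/5
```

The printed heights (300, 240, 192, 153.6, ...) shrink geometrically, which is what gives the receding frames their accelerating rush towards the vanishing point.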

I used my webcam and shaky hands to attempt an empirical verification of this. I maximised the Cheese window, pointed the hand-held webcam at the monitor, and centred it as closely as I could. Then I asked my partner to press the Screenshot button on the keyboard to record what the monitor was showing. Below is what we got (Figure 5).

[Figure 5: empirical screenshot of the regress, maximised but not full-screen]

It’s a bit rough, but you can see it does the same sort of thing as Figure 4.

When one holds the camera like this, the little involuntary movements one makes cause the trail of receding frames to wobble left and right in waves that remind me of the effects used in 1970s television to produce a psychedelic impression – particularly prevalent in rock music film clips.

Here’s a link to a video I captured of this effect.

Question 3

Question 3 asks what we will see on the monitor after we click the full-screen icon.

When we click the full-screen icon, we have no borders, so there can be no infinite, reducing regress of nested borders. How can we work out what is shown, assuming that we started in non-full-screen mode with a maximised window, so that the monitor was showing Figure 4?

The answer turns out to be remarkably simple. We just note that, when the full-screen icon is clicked, the computer will do some computing and then redraw the screen using the whole monitor area for the image from the camera. When it has finished the computing it will draw the screen using the image received most recently from the sensor and, since that image was captured before the screen redraw, it will be the same as whatever the screen was showing previously. This assumes that the previous image is left in place on the monitor until the computer is ready to draw the new one. We consider later on what happens if that is not the case.

That re-drawn screen will then be captured again by the camera sensor, sent to the computer and then drawn again on the monitor, and so on. So the image will remain exactly as it was before the full-screen icon was clicked! In this case, since the Cheese window was previously maximised, it will continue to show something like Figure 4.

The image remains static until either the camera is pointed away or the computer is switched out of full-screen mode, using the keyboard or mouse. Barring earthquakes, electricity blackouts and such-like, we would expect the monitor to still be displaying Figure 4 if we locked the room it was in, went away and returned to inspect it ten years later.

We can understand this a different way by considering the time tunnel we talked about in the answers to questions 1 and 2. In those cases, as we travel inwards through the tunnel to successively smaller windows, each window’s image was captured a short while earlier than the image of the window around it. The interval between the capture times of those windows will be much less than a second, typically 1/24 of a second. In Figure 4 each window’s height is about 4/5 of the height of the window that contains it. So to see what the monitor was showing t seconds ago we have to go to the nth window in the sequence, counting from the outside, where n = 24t. The height of that window, assuming the height of the monitor’s display area is 300 mm, is 300 × 0.8^(24t) mm. The window size reduces rapidly as we go back through time. A little calculation shows that the 26th window in the sequence is the first to have height less than 1 mm, and that window shows what the monitor was showing just over one second earlier.

Without constraint, that time tunnel would continue to go back, getting smaller at an increasing rate, window by nested window (like Russian dolls, or the cats in the Cat in the Hat’s hat), until we got to the time before the camera program window was opened on the computer, and that image would show whatever was on the monitor before the window was opened. But it would be indescribably tiny. If we opened the window – in maximised form – five minutes ago, the height of the window that now showed that image from back then would be 300 mm × 0.8^7200, which is approximately 10^-695 mm. This is incomparably smaller than the smallest atom (hydrogen, with diameter about 10^-7 mm) or even just a proton (diameter about 10^-12 mm).

I expect our eyes could discern no more than the first twenty windows in the sequence. Further, since my screen has about three pixels per mm, the windows would reach the size of a single pixel by the 31st window in the sequence, and the regress would stop there. Hence the sequence would look back in time no more than about 1.3 seconds.
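Those claims (the 26th window, the 31st window, and the five-minute case from the previous paragraph) can be verified with a few lines of Python, using the same assumed 300 mm display height and 3 pixels per mm:

```python
import math

H = 300.0     # display height in mm (assumed above)
RATIO = 0.8   # each nested window is 4/5 the height of its parent
FPS = 24      # captures per second, so window n looks back n/24 s

def first_window_below(threshold_mm: float) -> int:
    """Smallest n with H * RATIO**n < threshold_mm."""
    return math.ceil(math.log(threshold_mm / H, RATIO))

n_mm = first_window_below(1.0)           # -> 26
n_px = first_window_below(1.0 / 3.0)     # one pixel at 3 px/mm -> 31
print(f"first window under 1 mm: n={n_mm}, {n_mm / FPS:.2f} s back")
print(f"first window under 1 px: n={n_px}, {n_px / FPS:.2f} s back")

# Height of the window showing the image from five minutes ago:
n = FPS * 300   # 7200 frames
print(f"five-minute window: about 10^{math.log10(H) + n * math.log10(RATIO):.0f} mm")
```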

When we move to full screen mode, we still have a time tunnel of nested windows, but each one is exactly the same size as the one before it – the height ratio is 1, rather than 0.8. That means that how far we look back in time is no longer limited by shrinking to the size of a pixel, and the sequence will go all the way back to the last image the monitor showed before drawing its first screen in full-screen mode – which will be Figure 4.

In practice, as my friend Moonbi points out, the slight distortions in the image arising from imperfections in the camera lenses, although they may be imperceptible at first, will compound on each other with each layer of nesting so that what is actually shown will be a distorted mess. Like a secret whispered from one person to another around a large circle, or a notice copied from copies of itself dozens of times recursively, the distortions – however tiny they may be at first – will grow exponentially to eventually dominate and destroy the image. One minute after full-screen mode has been commenced, the monitor will be showing a 600 times recopied image of Figure 4, which will be more than enough to obliterate the image.
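A crude way to see how quickly that compounding bites: if each pass through the camera-monitor loop costs even 1% of the fine detail, here is what survives after the 600 recopies mentioned above (the 1% figure is purely illustrative):

```python
# Exponential decay of fine detail under repeated recopying.
# The 1% loss per pass is an illustrative assumption, not a measurement.
per_pass_retention = 0.99
copies = 600   # roughly one minute of loop traversals, as in the text
print(f"detail retained: {per_pass_retention ** copies:.4f}")
# -> about 0.0024, i.e. only 0.24% of the original detail remains
```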

But this is a thought experiment, so we allow ourselves the luxury of assuming that our lenses are somehow perfect, that there is zero distortion, and that each copy is indistinguishable from the original.

What if the screen blacks out?

Above we assumed that the display does not change until the computer is ready to redraw the full-screen image. If you go to YouTube, start playing a video and then click the full-screen icon (at bottom right of the image area) you will see that is not what it does. It actually makes the whole screen go black for a considerable portion of a second, and only then redraws the screen. If the camera program we are using does that then the screen will go black and remain black indefinitely.

If the black-out is shorter than YouTube’s, different behaviour may arise. It depends on four things:

  • the time from image capture to display, which we call the lag and denote by L,
  • the interval between image captures, which we call the frame period and denote by T,
  • the time the blackout commences, measured in milliseconds from the last image capture before the blackout, which we denote by t1, and
  • the time the blackout ends, measured the same way, which we denote by t2.

If no image capture occurs during the blackout, which will happen if t2<T, the blackout will have no effect on the final image and we can ignore it. The eventual image will still be Figure 4.

If images are captured during the blackout, and the first image shown on the monitor after the blackout was captured during the blackout, the screen will thereafter remain black indefinitely. This will be the case if both L and T are less than t2.

The other possibility is that T<t2 and L>t2. In this case the first image shown after the blackout will be a picture of the pre-blackout monitor, ie Figure 4, but it will be followed sooner or later by one or more black images captured during the blackout. What will follow then will be an alternation between images showing Figure 4 and black images. It will look like a stroboscopic Figure 4. The strobe cycle will have period approximately equal to L, and the dark period will have approximately the same length as the blackout, ie t2 − t1. In essence, the monitor will indefinitely replay what it showed in the period of length L ending at the end of the blackout.
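Those three cases can be checked with a toy simulation of the loop. This is a minimal sketch under the same idealisations as above: a frame captured at time c is shown from time c + L onwards until superseded, captures occur every T ms, and the screen is forced black between t1 and t2 (the parameter values are illustrative, chosen just to land in each case):

```python
# Toy discrete-time simulation of the camera-monitor loop around a
# blackout. Times in ms; t1 and t2 are measured from the last image
# capture before the blackout, as in the text.
def simulate(L, T, t1, t2, horizon=2000):
    frames = {}  # capture time -> image content ('FIG4' or 'BLACK')

    def displayed(x):
        if t1 <= x < t2:
            return 'BLACK'                    # forced blackout
        shown = [c for c in frames if c + L <= x]
        return frames[max(shown)] if shown else 'FIG4'

    c = 0
    while c <= horizon:
        frames[c] = displayed(c)              # camera photographs screen
        c += T
    return [displayed(x) for x in range(t2, horizon, 10)]

for name, params in [('t2 < T    ', (100, 200, 10, 150)),
                     ('L, T < t2 ', (100, 50, 10, 150)),
                     ('T < t2 < L', (400, 50, 10, 150))]:
    tail = simulate(*params)
    print(name, ''.join('B' if s == 'BLACK' else '.' for s in tail[:80]))
```

The first case prints unbroken dots (Figure 4 forever), the second unbroken Bs (black forever), and the third alternating runs of dots and Bs: the stroboscopic Figure 4, with period L and dark spells roughly the length of the blackout.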

This black-screen issue will also arise if the monitor is an old-style, boxy CRT (Cathode Ray Tube) rather than an LCD, Plasma or LED device. CRT screens typically draw around 75 images per second, made up of bright dots on a phosphor-coated screen, by shooting electrons at it. In between those drawings, the screen is black. That’s why those screens sometimes appear to flicker, especially when viewed through a video camera.

For a CRT screen, the image captured immediately before the redraw may be Figure 4 or black, or something in between – a partial Figure 4 – with a complex dependency on four parameters: the length of exposure used by the sensor (shutter speed), the time taken for a single redraw of the CRT screen, the refresh rate, and the frame rate. Unless there were a particularly unusual and fortuitous relationship between those four numbers, the image on the monitor would not be Figure 4. I think instead it would be either just black or an unpredictable mess. But one would need more knowledge of CRT technology than I have to predict that.

Anyway, we don’t want to get bogged down in practical technology. This is principally a thought experiment. And for that ideal situation, we assume an LCD monitor with a computer that, upon receiving a full-screen-mode command, leaves the prior image in place until it is ready to redraw the image on the full screen. And the answer in that situation is Figure 4.

Doing a careful experimental verification of this is beyond me because, amongst other things, I don’t have a camera program that has a full-screen mode. But just for fun, I made a video like the one above under Question 2, where I focused the camera on the area of the monitor that displayed the image, trying to exclude the borders. It wobbles about, partly because of my shaky hands. It is mostly black, but there’s a blue smudge that appears in the lower half and wobbles around. I think that is a degraded version of the regressing images of the lower border. But under such uncontrolled conditions, who knows?

Here’s the video.

Question 4

We are now in a position to work out the answer to question 4, which is ‘what will we see after we point the camera to the right of the monitor and then pan left until it exactly points at the monitor, and stop there?’

To start with, we know that, once the camera has panned to the final position of exactly capturing the image of the entire monitor, it will hold that image indefinitely, and that image will be whatever the monitor was showing immediately before the camera finished panning.

We’ll be a little more precise. A video camera captures a number f of images (‘frames’) per second, typically 24. The final image shown by the camera will be whatever the camera captured in the last frame it shot before completing the pan. The nature of that image depends on relationships between the pan speed, the frame rate and the width of the monitor screen, which we will explore shortly. But, to avoid suspense, let’s assume a frame rate of 24, that 32 frames are shot while performing the pan, and that the view to the right of the monitor display area, including the black right-hand frame of the monitor itself, is this (Figure 6):

[Figure 6: the view to the right of the monitor’s display area]

Then, on completion of the pan, the camera will show, and continue to show indefinitely thereafter, the following image (Figure 7 – note that the image was made by editing, not shot through a camera. My equipment is nowhere near precise enough to do this accurately):

[Figure 7: the final static image after the 32-frame pan]

C’est bizarre, non?

Those of you that enjoy exploring intricate patterns may wish to read on, to see the explanation of this phenomenon. I will not be offended if most don’t.

The answer depends on the ratio of the speed at which the camera pans left to the width of the screen. If we want to be precise, these speeds and widths must be measured in degrees (angles) rather than millimetres. But millimetres are easier to understand, so we’ll use them and ignore the slight inaccuracy this introduces (otherwise I’d need to start using words like ‘subtend’, and we wouldn’t want that, would we?).

Say the camera rotates at a speed of s mm per second, and that it shoots f frames per second. Hence it shifts view leftwards by s/f mm per frame. So, if the width of the monitor’s display area is w mm, it shoots N=wf/s frames between when the camera first captures part of the monitor’s display area (on the monitor’s right side) and when the camera is in the final position where it exactly captures the view of the whole monitor display area, not counting the last frame. We label the positions of the camera at each of those frames as 1 to N, going from earliest to latest. We omit the last frame because we know that, once the camera is pointing exactly at the monitor, the image will remain fixed on whatever the monitor is showing at that time, which will be what the camera saw in position N.

By the way, we assume that N is an integer even though in practice it will generally have a fractional part. The calculation is not significantly more difficult for non-integer N, but it is messier and longer, and the differences are not terribly interesting, so we’ll assume it’s an integer.
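For concreteness, here is that bookkeeping as a minimal Python sketch; the 480 mm display width and the pan speed are illustrative assumptions, chosen so that N comes out at the 32 used below:

```python
# N = w*f/s frames are shot while the monitor crosses the camera's view.
w = 480.0   # assumed width of the monitor's display area, in mm
f = 24      # frames shot per second
s = 360.0   # pan speed: view shifts this many mm per second (assumed)

print(f"view shifts {s / f:.1f} mm per frame; N = {w * f / s:.0f}")
# -> view shifts 15.0 mm per frame; N = 32
```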

Divide the view to the right of the monitor’s display area, as shown in Figure 6 above, into N vertical strips of equal width. Number those strips 1 to N from left to right. We will call these ‘strip-images’, as each is a tall, thin picture. Next, number the positions the camera has when each frame is shot as follows:

  • Frame 0 is when the left-hand edge of the image captured by the camera coincides with the right-hand edge of the monitor display area, so that the camera captures exactly the image of Figure 6, ie strip-images 1 to N. At that time the monitor will be showing what the camera captured one frame earlier, which will be an image made up of strip-images 2 to N+1 (strip-image N+1 is what we can see in an area the same size as the other strip-images, immediately to the right of Figure 6).
  • The frames shot after that are labelled 1, 2, etc.

Label the times when the camera is in position 0, 1, 2 etc as ‘time 0’, ‘time 1’, ‘time 2’ etc.

With this scheme, Frame N will be the one that is shot when the camera is in the final position, when it exactly captures what is on the monitor, so that that image remains on the monitor indefinitely thereafter.

The following table depicts what is shown by the monitor and what is captured by the camera at each position, in the situation we used to produce Figure 7, which has N=32.

The rows are labelled by the camera positions/times. The first 32 columns (the ‘left panel’) correspond to the 32 vertical, rectangular strips of the monitor display area, numbered from left to right. The next 32 columns (the ‘right panel’) correspond to the strips of what can be seen to the right of the monitor. The number in each cell shows which strip-image can be seen by looking in that direction. The yellow shading in each row shows what images are captured by the camera at that time, to be shown on the monitor in the next row. (Figure 8):

[Figure 8: the full pan table, with the camera’s captures shaded yellow]

A few key points of interest are:

  • The numbers in the right panel do not change from one row to the next, because rotating the camera does not change what can be seen to the right of the monitor.
  • The numbers in the left panel change with each row, to reflect that what was captured by the camera at the previous camera position was different from what was captured at the one before that, because of the camera’s movement.
  • The yellow area denoting what the camera captures moves to the left as we go down the table, reflecting the camera’s panning to the left.

There are lots of lovely number patterns in the left panel, which I will leave the reader to explore.
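For readers who would rather generate those patterns than squint at them, here is a minimal Python simulation of the pan under the same bookkeeping: the monitor is N strips wide, the camera’s view slides one strip left per frame, and each captured frame appears on the monitor one frame later. (The world(col) helper is my own naming for this sketch, not anything from the post.)

```python
# Simulate the pan table of Figures 8-10. Columns 0..N-1 are the
# monitor; columns N, N+1, ... show the fixed real-world strip-images
# 1, 2, ... to the monitor's right.
N = 32

def world(col):
    """Strip-image visible at a real-world column right of the monitor."""
    return col - N + 1   # column N shows strip-image 1, and so on

# At time 0 the monitor shows strip-images 2..N+1, captured one frame
# earlier when the view sat one strip further right (see Frame 0 above).
monitor = list(range(2, N + 2))

for t in range(N):
    # The view at frame t spans columns N-t .. 2N-1-t; monitor columns
    # contribute what the monitor currently shows, the rest is world.
    monitor = [monitor[col] if col < N else world(col)
               for col in range(N - t, 2 * N - t)]

print(' '.join(map(str, monitor)))   # the final, static image
```

The printed final row is 5 6 7 8 1 2 3 4 5 6 7 1 2 3 4 5 6 … 1 2 1, which is exactly the last row of Figure 9 discussed below.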

Here is a zoomed-in image of just the left panel for those whose eyes, like mine, have trouble making out small numbers (Figure 9):

[Figure 9: the left panel of the pan table, enlarged]

Referring back to Figure 7 we see that, as we move from the right to the left side, it has a series of eight vertical images of increasing width. The first one is just the monitor’s right-hand frame – a black plastic strip. The next is twice as wide and has the monitor frame plus the strip-image to its right. The one after that is three times the width, and so on. This corresponds to the last row of Figure 9:

5 6 7 8, 1 2 3 4 5 6 7, 1 2 3 4 5 6, 1 2 3 4 5, 1 2 3 4, 1 2 3, 1 2, 1

I have put commas between each contiguous set of strip-images. I call each such contiguous set a ‘sub-image’. The first sub-image is incomplete – being 5 6 7 8 instead of 1 2 3 4 5 6 7 8 – because N is not a triangular number.

The time tunnel applies here, with a slightly different flavour. The newest sub-image is the one on the far right, composed solely of strip-image 1. This was captured from the world outside the monitor one frame period ago. The next, the ‘1 2’, was shot from the monitor last time, and came from the real world outside the monitor two frame periods ago. The oldest sub-image is the one on the left, which has been through the camera-monitor loop seven times, having first been captured from the real world eight frame periods ago.

It’s quite fun to trace the path of these images as they repeatedly traverse the camera-monitor loop, by following the numbers in the above table. Here’s the lower part of the table showing how the first, third and sixth sub-images from the left (using blue, grey and pink shading respectively) make their way from the real world (to the right of the vertical dividing line), into the camera-monitor loop (to the left of that line), around that loop as many times as needed, and finally to the ultimate static image (Figure 10):

[Figure 10: the pan table with the paths of three sub-images shaded]

My example with N=32 involves a high panning speed. Shooting 24 frames per second, the pan would need to be completed in 32/24 ≈ 1.33 seconds. One would need extremely good equipment to accomplish that without getting a bounce or wobble when the camera stops at the end of the pan – and avoiding wobble is critical to getting the indefinite static picture we have discussed.

It may be that in order to avoid camera bounce one would need a slower pan, giving us a (perhaps much) higher N. What would be the outcome of that? Well, the right-most sub-image contains only one strip-image and, as we move left, each sub-image contains one more strip-image than the one to its right. So, if the number of sub-images is r, then N will be greater than the sum of the numbers from 1 to r−1 (the (r−1)th triangular number) and will not exceed the sum from 1 to r (the rth triangular number). A little maths tells us that this range runs from (r−1)r/2 to r(r+1)/2. For large N this means that there will be approximately √(2N) sub-images, and the largest will be composed of about √(2N) strip-images. Since for a display width of w the width of a strip-image is w/N, the widest sub-image will have width about w√(2N)/N = w√(2/N), which gets smaller as N increases. If N=2048, corresponding to a very slow pan time of about 85 seconds, the widest sub-image would be narrower than the frame of my monitor, so all we would see in the final static image would be black plastic monitor frame, something like this (Figure 11):

[Figure 11: the final static image for N=2048, showing almost nothing but monitor frame]

The bars of light are from the screen reflecting on the shiny monitor frame. Because I couldn’t hold the camera straight, they are bigger at the bottom than at the top.
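The arithmetic in the paragraph above Figure 11 is easy to reproduce; the 480 mm display width is again my illustrative assumption:

```python
import math

def subimage_stats(N, w_mm=480.0):
    # r = number of sub-images: the smallest r with r*(r+1)/2 >= N
    r = math.ceil((math.sqrt(8 * N + 1) - 1) / 2)
    # The widest sub-image holds about r strip-images of width w/N each
    # (the leftmost sub-image may be truncated, as in Figure 7).
    return r, w_mm * r / N

for N in (32, 2048):
    r, widest = subimage_stats(N)
    print(f"N={N}: {r} sub-images, widest about {widest:.0f} mm")
# N=32   -> 8 sub-images, widest about 120 mm
# N=2048 -> 64 sub-images, widest about 15 mm: narrower than a bezel
```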

I will leave you with my synthesis of the sequence of 32 images that would be seen, given near-perfect equipment, in the 32-step pan that ends with Figure 7 above. They are simply a realisation in pictures of the patterns shown in Figure 9, when applied to the image in Figure 6. The sequence is in a pdf at this address. If you go to page 1 and then repeatedly hit Page Down rapidly, you will see a slow-motion video representation of what it would look like as the camera panned. You may need to download it first in order to view it in single-page view, which is necessary to achieve a video-like effect. Alternatively, it is represented below, albeit somewhat more crudely, as a pretend film strip.

Andrew Kirk

Bondi Junction, February 2017

[Film strip: the 32 frames of the pan, in four strips of eight (frames 1–8, 9–16, 17–24, 25–32)]


Pointing the Camera at the Monitor – A Puzzle

I was listening to a talk by Alan Watts about some aspect of Eastern mysticism. I can’t remember the exact context. I think he was describing the impossibility of truly understanding the nature of one’s own mind. He said that trying to use one’s mind to understand one’s own mind was ‘like pointing the camera at the monitor’.

I was immediately struck by this. Partly I was surprised at his using such a simile, which involves common enough concepts in 2017, in a talk that he gave in the sixties, when computers only existed in large research establishments and occupied enormous rooms. There was certainly no such thing as a webcam back then. I realised later that he probably had in mind a closed-circuit television arrangement, which they did have in the sixties.

But beyond that, I was struck by the fact that it’s actually a very interesting question – what does happen when one points the camera at the monitor? It’s a classically self-referential problem. But unlike some self-referential problems, like the question of the truth of the statement ‘This sentence is false’, it must have a precise answer, because we can point a camera at a monitor, and when we do that the monitor must show something. But what will it show?

There are a number of practical considerations that can lead us towards different types of answers. While each of those considerations leads to an interesting problem in its own right, I tried to remove as many of them as possible to make the problem as close to ‘ideal’ as I could. So here it is.

Imagine we have a computer connected to a monitor and a digital video camera. A webcam is a digital video camera but, since the camera we are imagining here needs to be extremely accurate, a high-quality professional video camera would be more suitable. The monitor uses a rectangular array of display pixels to display an image and the camera uses a sensor that is a rectangular array of light-sensitive pixels, and the dimensions of the display and the sensor, in pixels (not in millimetres), are identical.[i]

On the computer we run a program that shows the image recorded by the camera. The telecommunication program Skype is a well-known such program that can do that, amongst other things. There are also dedicated camera-only programs, which webcam manufacturers typically include on a CD bundled with the webcams they sell. Let’s call our program CamView (not a real program name). We start up CamView on the computer in a non-maximised window, which we’ll call the ‘CamView window’. Then we turn the camera on and point it at the monitor. We aim and focus the camera so precisely that an image of the display area of the monitor fills the image-display area of the CamView window. Ideally this would mean that each pixel on the camera’s sensor is recording an image of the corresponding pixel on the monitor screen. In practice there will be some distortion, but we’ll ignore that for now.

Question 1: what does the monitor show?

Question 2: Next we maximise the CamView window. What does the monitor show now?

Those questions are easy enough to answer, when we remember that the window for any computer program, in default mode, typically has an upper border with tool icons on it, a lower border with status info on it, and sometimes left or right borders as well.

These questions are fairly similar to the question of what one sees when one stands between two parallel, opposing mirrors, as is the case in some lifts (elevators).

Now comes the hard one. In most video-viewing computer programs there is an icon that, upon clicking, maximises the window and removes all borders so that the image-display area occupies the entire display area of the monitor. Call it the ‘full screen icon’ and say that we are in ‘full screen mode’ after it is clicked – until a command is given that terminates that mode and returns to the default mode – ie restores the borders etc. In full screen mode the display area of the monitor corresponds exactly to the images recorded by the camera’s sensor.

Question 3: We now click the full screen icon. Describe what appears on the monitor, and how it changes, from the instant before the icon is clicked, until ten minutes after clicking it – assuming the program remains in full screen mode for that entire time.

That is the difficult one. It took me a while to figure it out, and I was surprised by the answer. It is possible that what I worked out was wrong. If so, I hope that someone will point that out to me.

I have one more question, and it has an even more peculiar answer – one that I found quite charming.

Question 4: Assume the camera is mounted on a very stable tripod. Still in full-screen mode, we pan the camera to the right until it no longer shows any of the monitor. Then we pan the camera back at a constant speed until it again sees only the display area of the monitor, and we stop the panning at that point. What is visible on the monitor after the camera has panned back to the original position? Does that change subsequently? What does it look like ten minutes later? Does the monitor image depend on the panning speed, or on the number of frames per second the camera shoots? If so, how?

In order to avoid spoiling anybody’s fun in trying to work out the answers to these puzzles for themself, I will not post answers now. I will post them a little later on. It will also take me a little while to make some nice pictures to help explain what I am talking about.

Andrew Kirk

Bondi Junction, February 2017

Footnotes

[i] Although most camera sensors have a 3:2 aspect ratio, which is different from the 16:9 aspect ratio of most modern computer monitors, it is possible on a sophisticated camera to alter the aspect ratio to 16:9. This is achieved by deactivating the sensor pixels in an upper and a lower band of the sensor, so that the area used to record an image has the required aspect ratio. We’ll assume that is done and that the number of pixels in the active sensor area equals that on the monitor.