# 600 Rule?

You may have heard it elsewhere as the “600 rule”.  I first heard about the rule while visiting the Looney Bean in Bishop, California in 2008.  Five photographers sitting in a coffee shop poring over their laptops reviewing what they recently bagged are bound to start talking.  It was my good fortune that one of those present was the very talented Brenda Tharp who first quoted the 600 rule to me.

I, however, have repeated the rule as the “500 Rule” because I think 600 is overly optimistic.  What is the rule?  It states that the maximum length of a star exposure that doesn’t produce noticeable streaks is found by dividing 600 by the effective focal length of the lens.  A 50mm lens on a 35mm camera, therefore, would allow 600 / 50 = 12 seconds of exposure before streaks are noticeable.  That same 50mm lens on a 1.6 crop factor camera would only allow 7.5 seconds of exposure.
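The rule’s arithmetic is simple enough to sketch in a few lines of Python (the function name and defaults here are my own illustration, not anything canonical):

```python
# A minimal sketch of the 500/600 rule of thumb.
# "rule" is the numerator: 600 for the optimistic version, 500 for mine.
def max_exposure_seconds(focal_length_mm, crop_factor=1.0, rule=500):
    """Longest exposure, in seconds, before star streaks per the rule."""
    effective_focal_length = focal_length_mm * crop_factor
    return rule / effective_focal_length

print(max_exposure_seconds(50, rule=600))                   # 12.0 on full frame
print(max_exposure_seconds(50, crop_factor=1.6, rule=600))  # 7.5 on a 1.6x crop body
```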

## But Wait. The Rule Isn’t All That Great!

The real number is quite subjective.  A little math reveals that on the Canon 5D Mark II (a full frame camera) with a 16mm lens, a pinpoint star on the celestial equator moves from one pixel* to the next in 5.3 seconds.  But the 600 rule would allow 37 seconds of exposure, and the 500 rule 31 seconds.  Both rules will produce streaks on the sensor! The visibility of those streaks will depend on the finished print size and viewing distance.  Print it large and stand close, and the streaks will be obvious.

So what does a 30 second exposure look like at the pixel level?

Clearly those stars are streaking across about 5 pixels* just as the math would bear out.

What is going on here?  The Canon 5D Mark II images are 5634 x 3753 pixels* from a sensor that measures 36 x 24 millimeters. Dividing 36 by 5,634 reveals that the distance from the center of one pixel* to the next is a scant 0.00639 millimeters (or 6.4 microns).

The formula for calculating the distance in millimeters (d) that a star travels across a sensor due to the earth’s rotation looks like this:

d = t * f / 13750

Where t is the time in seconds, f is the effective focal length in millimeters, and 13750 is a constant (approximately the 86,400 seconds in a day divided by 2π).  I’ve simplified the above from the full equation. Is the math scaring you a bit… don’t worry… we’re almost done. Earlier we calculated the pixel*-to-pixel distance as 0.00639 mm; what we want to find is how long (t) it takes for a star to move that far on the sensor.

0.00639 = t * f / 13750

Substituting f = 16mm and solving for t, we get 5.3 seconds, as I asserted earlier.
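As a numeric sanity check, here is the same substitution in Python. Note that the simplified formula gives roughly 5.5 seconds, close to the 5.3 second figure quoted from the full equation:

```python
# Rearranging d = t * f / 13750 to solve for t:  t = d * 13750 / f.
def pixel_crossing_time(pixel_pitch_mm, effective_focal_length_mm):
    """Seconds for a star on the celestial equator to cross one pixel*."""
    return pixel_pitch_mm * 13750 / effective_focal_length_mm

pitch_5d2 = 36 / 5634  # ~0.00639 mm on the Canon 5D Mark II
print(round(pixel_crossing_time(pitch_5d2, 16), 1))  # ~5.5 seconds
```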

But how does that calculate out on a different sensor, the Canon 50D, for example?

The Canon 50D has 4770 pixels across 25.1 mm, or an inter-pixel* distance of 0.0053 millimeters.  Substituting into the earlier equation, we find that a star marches across a pixel on the 50D with the same 16mm lens (25.6mm effective, thanks to the 1.6 crop factor) in 2.83 seconds.  With a 50mm lens (80mm effective) on the same camera… the bad news is the star speeds from one pixel* to the next in less than a second!

What does an image look like with a 30 second exposure at 16mm on a full frame camera? Remember that the streaks will be 40% longer on the cropped Canon 50D.

30 Second Exposure – a close look shows elongated stars.

Scaled down to only 16% of the original image size, or seen from a distance, no streaking is obvious! We will try not to twitch knowing – because we pixel peeped – that the stars are really dashes, not nice round pinpricks of light. And indeed only the eagle-eyed are likely to notice the dash-like nature of the stars until the photo is printed large, say at 20 x 30 inches.

## What Can We Conclude?

1. Streaking starts a LOT sooner than any rule you may have learned.
2. The time it takes to streak depends on the inter-pixel* distance (set by the sensor’s pixel density) and the effective focal length.
3. How much streaking to allow depends on your aesthetic tolerances.
4. You cannot get more or brighter stars by exposing longer; starlight has already given up on one pixel* and moved on to the next in just a few seconds.
5. The longer the focal length, the harder it becomes to prevent streaking.
6. Gaps in your star trails may be unavoidable if the inter-shot delay (normally 1 second) is long enough to skip pixels*.
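Point 6 can be estimated with the same simplified motion formula: multiply the inter-shot delay by the effective focal length, divide by 13750 to get millimeters of travel, then divide by the pixel pitch. A sketch using the 5D Mark II numbers from above:

```python
# Pixels a star skips during the inter-shot delay, via d = t * f / 13750.
def gap_pixels(delay_s, effective_focal_length_mm, pixel_pitch_mm):
    travel_mm = delay_s * effective_focal_length_mm / 13750
    return travel_mm / pixel_pitch_mm

pitch = 36 / 5634                           # Canon 5D Mark II pixel pitch, mm
print(round(gap_pixels(1, 16, pitch), 2))   # ~0.18 px: no visible gap at 16mm
print(round(gap_pixels(1, 200, pitch), 1))  # ~2.3 px: a visible gap at 200mm
```

At wide focal lengths a one second delay costs only a fraction of a pixel, but at telephoto lengths the gap spans multiple pixels, consistent with point 6 above.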

## Final Note

I carefully added asterisks* to every location where I wrote the word “pixel” in a way that might imply your camera collects light in pixels. You might be wondering why I did that. The answer is: your sensor is composed of sensels, not pixels. It takes 4 to 9 sensels to create a single pixel, depending on the demosaicing algorithm your camera uses. Maybe you aren’t that picky, but I didn’t want to hear complaints from the purists.

I particularly relish this epiphany because I reported long ago that “longer exposures do not result in more stars”.  I just never got around to doing the extra bit of math – or the experiments – to prove out my assertions.

## Real Final Note

A commenter has rightfully taken me to task by pointing out that the perception of a streak depends on many things other than just the actual sensor values recorded. In particular, if the image is not enlarged much, some streaking will be scarcely noticeable or not noticeable at all because the feature will be too small for the eye to perceive.  The problem with this assertion is that it assumes a lot of preconditions: e.g. how large the print is, how far from the print a viewer stands, and the subjective experience of the viewer.  My real world experience has led me to conclude that it is a reasonable goal to keep the streaking to 2 to 3 pixels or less, because that provides the greatest possible usable magnification (finished viewing size).  There would be no point to collecting a high megapixel image if you cannot produce a print proportionately larger or more detailed than a lower megapixel image!

Here is an example that makes my point. I love this image captured on a Canon 5D Mark II. When printed at 20×30″ and viewed at 4 feet there is some streaking. Perhaps only a critical eye would notice, but even an untrained eye will notice when viewed from two feet away.

# Bending Reality

Many of those who follow my work and my webinars already know that I’m passionate about all things astronomical and night photography.  I’m the kind of guy that will go 10 times to the same location over a 4 year span to capture a shot that requires the elements to all be in place… the moon, a soon-to-rise sun, a coastal lighthouse, and clear weather.

Why do I do it? Because it involves embracing challenges from several disciplines – mathematics, astronomy, and some hairy technical aspects of photography. I don’t know what to make, however, of the super cheesy way to get the moon where you want it using photo editing techniques.  Take a nice clear picture of the moon with a 200mm lens and put it in a landscape taken at 20mm.  You get a moon 10 times larger than it should be, but will people notice?  Not very often, it turns out! I submitted an “e-ticket” for a charity. The e-ticket allowed the bidder to select any of the items in my image library for an 18 x 12 print. To my chagrin, the image chosen was this one – a complete fabrication.

The patron knew it was a fabrication, but loves Yosemite and likes having the moon, a moon bow, and a waterfall commingled together.  Making compelling composite images certainly falls in the realm of art, which I highly value. But, sadly, I created this image to educate people about forgeries.

One distressing trend that I see is photographers who run workshops and draw in participants by exhibiting photographs that are composites – not reality.  I know how I would feel if I booked a particular hotel because a lovely photo of the property showed a pristine beach just yards away, but upon arrival discovered that the pristine beach was actually four blocks and a freeway away. Angry.

And what about those aspiring photographers who wonder how they too can get a photograph of a fantastic huge moon behind Yosemite Falls when you can’t – it’s impossible!

I have spent substantial effort finding, researching, learning about, and writing tools to aid in achieving alignments of the moon (and sun) with various landmarks – but it’s quite unfair to compare the days, weeks, and months of calculation and waiting for the right date and weather against the photographers who recycle their stock moon, lightning, or cloud photos into whichever photos they think they will look best in.

And it’s not a moon-only phenomenon.  I’m amused when I see folks exhibiting impossible star trails, improbable eclipses, and many other manufactured phenomena.  So I am seeking your help. I want to learn from you what you think the boundary between fantasy creation and photography should be. And I’ll also give you some tools to help spot forgeries.

Here is what I think must be true for a photo to be an Authentic Photograph. Some editors are even more stringent about what they allow.

1. Photo(s) used in the final image must be taken at the same focal length and using the same equipment.
2. And within the same 24 hour period.
3. With no introduction or removal of elements except those that are “small distractions”: e.g. noise, an overly bright or overly dark element such as shiny trash or a tripod shadow, or cloning out an object that moved between exposures, as in HDR.

I find these acceptable:

• Cropping – any amount.
• Sharpening or blurring (smoothing)
• Color correction, saturation or desaturation (but not color change. Green eyes should not become blue ones – though “red eye” correction is certainly ok).
• Selective coloration, including black and white, duo toning, etc.
• Perspective or lens aberration corrections.
• Vignetting
• Framing
• HDR, or bracketed exposure combinations together with tonal compensation.
• Contrast enhancement

For me the following cross the line from Authentic Photographs into Composites:

• Use of any elements taken at different focal lengths or with different equipment, unless those elements are resized proportionately and placed in their correct and actual location.
• Using elements taken on different dates or from different directions (e.g. combining a photo of lightning with anything that the lightning did not actually strike).
• Moving elements in a single image to other locations (except incidentally to clone out or cover over issues).

When people violate my personal ethical boundaries, they usually create physical conundrums that are often easily spotted by a trained eye. For example, in this photo the moon is impossibly large and intuitively feels wrong.  The moon is too large in this photo, too. The impossibility can be determined either from experience or through some mathematics (which I’ll show later).  Sometimes the moon placement just doesn’t match physics. For example, in this photo the moon is illuminated on the wrong side. The moon is always illuminated by the sun, so if the sun is setting at the left and the moon is illuminated on the right – well, that’s impossible. Another common mistake is when people put a full moon anywhere near a sunset or sunrise. The full (or nearly full) moon is always located on the opposite side of the sky from the sun. Any photo showing otherwise is doctored.  Sometimes the doctoring is laughably obvious, as in this photo.

## Math Reveals Forgeries

As I teach in the “Catching the Moon (and sun) Webinar”, the moon is a well known and almost invariant size, and its presence can be used to measure distances in the photo. Specifically, the moon is 1/2 of a degree in angular size (the superest of super moons is 0.57 degrees). An image with an angle of view of 50 degrees – as might be achieved with a 35mm lens on a 35mm camera – will result in the moon being exactly 1/100th the width of the image. Since the field of view of an image isn’t always obvious – especially in unfamiliar locations – finding something in the scene of identifiable size helps. For example, in this image:

Some googling will tell you the height of Upper Yosemite Fall (675 feet). By eyeball Upper Yosemite Fall appears to be about 2.5 times the moon’s diameter. That means the falls are about 1.25 degrees in total (0.5 degrees times 2.5). Throwing some trigonometry in here, we can conclude that to get the moon sized as in the photo, the photographer had to be 30,934 feet away from the fall (5.8 miles). Yosemite Valley is less than a mile wide in this direction. Did we have to do the trigonometry? Not really! The rainbow gave us another huge clue. The arc of a rainbow is about 2 degrees wide from the top color (Red) to the bottom color (Violet). So the rainbow height here SHOULD be 4 times larger than the moon. The moon, however is much too large – just as we might have suspected.
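The distance calculation above is a one-liner in Python (the 675 foot fall height and the eyeballed 1.25 degrees are taken straight from the text; rounding lands within a foot of the figure quoted):

```python
import math

# Distance at which an object of a given height spans a given angle.
def distance_for_angular_size(height_ft, angle_deg):
    return height_ft / math.tan(math.radians(angle_deg))

d = distance_for_angular_size(675, 1.25)
print(round(d))  # ~30,935 ft, i.e. about 5.86 miles
```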

The trigonometry: For an object to be the same angular size as the moon you must be 114.6 x the object height away from it. So, for example a one foot tall object requires a 114.6 foot distance to have the one foot object and the moon be the same size. For a super moon, the multiplier is 100.5 times.

The formula:   distance = height / tan(0.5°)

What if the only identifiably sized object in a photo is a sand dollar?  No problem.  If the sand dollar and the moon are nearly the same size in the image, it’s easy to calculate how far away the photographer was from the sand dollar by multiplying the size of the sand dollar by 114.6.  Assuming that a 4″ sand dollar and an equally sized moon are in the same scene, a simple calculation reveals that the camera was 38 feet away from the sand dollar. Do many photographers get 38 feet away from the sand dollar they put in their foreground? No, they don’t!  Suppose we were wrong and the sand dollar is really only 2 inches in diameter. It would be just as unlikely for a photographer to be 19 feet away!  If you find more than one thing of an identifiable size relative to the moon you have a second point of reference.
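The 114.6 multiplier and the sand dollar distances work out the same way; here is a quick check (the 4-inch and 2-inch sizes are the hypothetical ones from the text):

```python
import math

MOON_DIAMETER_DEG = 0.5
MULTIPLIER = 1 / math.tan(math.radians(MOON_DIAMETER_DEG))  # ~114.6

# Distance (feet) at which an object appears the same size as the moon.
def camera_distance_ft(object_size_inches):
    return object_size_inches / 12 * MULTIPLIER

print(round(MULTIPLIER, 1))           # 114.6
print(round(camera_distance_ft(4)))   # 38 feet for a 4-inch sand dollar
print(round(camera_distance_ft(2)))   # 19 feet for a 2-inch one
```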

For comparison here is an undoctored photo featuring the moon and a guy who is about 6 feet tall.  See how tiny the moon looks in this 20mm focal length photo? Did you even SPOT the moon? It’s at chin level on the left of the post.

The moon, even the crescent moon, is very bright. Any exposure showing a detailed full or half moon and stars is immediately suspect, because the 1/100 of a second exposure needed to keep detail in the moon will rule out the capture of any stars. The problem is a limitation in the dynamic range of the camera. Our eyes can see stars near a fully featured moon… but no contemporary camera can do so except when the moon is well veiled by clouds or hanging very near the thick part of the atmosphere at the horizon.  The presence of stars in the moon bow photograph, oops, I mean COMPOSITE, screams inconsistency – or at the very least some super-duper HDR processing.  Click the image and check out the observations that other people have made about what is wrong.  E.g. how can the moon create a moon bow, illuminate the face of the fall, AND be behind all those things?

## A Parting Puzzle

This image accurately depicts an Annular Solar Eclipse as captured by a series of images taken with the same camera pointed in the same direction and all at the same focal length. I wrote about it in my previous column.

But the following image I fabricated to look much like the many forged eclipse photos on Flickr, and it does not jibe with reality. Can you spot why?  By the way, I used a REAL image of an annular eclipse to create this photo, but I combined it with another photo which had nothing to do with an eclipse.

If you can come up with a reasonable refutation (or two) you may win a free Catching the Moon webinar.

## I’d Like to Hear From You!

Do you take exception to my exceptions? Do you resonate with my concerns? Did you spot something you’re pretty sure is faked, but can’t quite tell why? Please leave a comment!

Interested in more about faked photos?

See here or the Scientific American article on spotting fakes in general photos.