• 0 Posts
  • 18 Comments
Joined 11 months ago
Cake day: October 22nd, 2023

  • Here is the basic equation explaining the relationship:

    Focal length / Sensor width = Distance to subject / Width of field at subject

    A normal lens for a camera is defined as one whose focal length equals (more or less) the width of the sensor. Often the “width” used here is actually the sensor’s diagonal, measured from opposite corners. A focal length equal to the sensor width will give you a width of field at the subject equal to the distance to the subject. Doubling the focal length halves the width of view, doubling the sensor width doubles the width of view, and so on.
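
    To make the proportion concrete, here is a quick sketch in Python; the function name and the example numbers are mine, just for illustration:

    ```python
    def field_width_at_subject(sensor_width_mm, focal_length_mm, distance_to_subject):
        """Rearranged proportion: focal length / sensor width = distance / field width."""
        return distance_to_subject * sensor_width_mm / focal_length_mm

    # "Normal" lens: focal length roughly equals sensor width, so field width ~ distance.
    print(field_width_at_subject(36, 35, 10.0))  # ~10.3 m of scene width at 10 m
    # Doubling the focal length halves the width of view...
    print(field_width_at_subject(36, 70, 10.0))  # ~5.1 m
    # ...and doubling the sensor width doubles it again.
    print(field_width_at_subject(72, 70, 10.0))  # ~10.3 m
    ```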


  • I did that last Friday. It was a night parade, no sky light, with inadequate street lighting, on a narrow two-lane old-time urban street. 50 mm was definitely too long (on full frame) for many of the shots, but I managed to place myself at a turn in the road so it wasn’t awful. I exposed for the highlights, which consisted of a multitude of Christmas lights on each float, and a lot of the shadows were indeed rather dark. The lens was an f/1.8, and most of my shots were at f/2.2 and a rather elevated ISO. My keeper rate was well below what I’m accustomed to. If I were to do this again I’d definitely use a wider lens, maybe my 28 mm f/2.8, and I’d shoot it wide open.



  • They say that the camera adds ten pounds and ten years to a human subject, which is why top models and actors tend to be extraordinarily good-looking. A person of average attractiveness in real life, unfortunately, looks ugly in a photo. It’s the nature of the medium.

    Likewise, the camera turns typical landscapes into something dull and flat in a photo. It takes a truly extraordinary landscape to look better than ordinary in a photo.

    At one time, artists and thinkers developed a theory about which kinds of landscapes are worthy of being painted, photographed, or simply visited; they called such scenes the “picturesque”, and they are not at all ordinary.

    https://en.wikipedia.org/wiki/Picturesque



  • A lot of people do group portraits in dappled lighting, and it looks terrible since some people are in shadow and others are in full light, and usually the exposure is set too high, so the highlights are blown.

    Here is a funny example of someone who tried to recover blown highlights:

    https://www.reddit.com/r/badphotoshop/s/aalrZmUHY1

    It’s really common for beginners to point their cameras into the sun when doing portraits, so either the subject becomes a silhouette or the background is blown. It’s usually better to have the sun behind you, over your shoulder, without having the subject look directly into it.


  • I was a pretty serious architectural photographer and had quite a bit of success with it. Often I just needed a slightly higher point of view, so I would use a seven-foot tripod, a ladder, or sometimes a tall pole. For sure, a drone with a decent camera would help a lot and would be well worth it, and my late father, who was even more technically oriented than me, encouraged it. So one Christmas, my dad and a lady friend both got me inexpensive drones, one with a camera and one without. I started with the cameraless one for practice, and quickly realized that even though my house had a one-acre lot, the trees were too large to make this practical. The drone went off course, hit a tree branch, crashed, and a couple of rotors broke. I replaced the rotors, went to a park (though no park was really large enough) and quickly smashed it. I took the other drone to a park by the river: up it went, the wind took hold of it, and it flew way upriver, where it plummeted into a parking lot and smashed into many small pieces; I was unable to recover the memory card. So I had about a minute’s worth of fun in total before destruction. Much later, my wife’s uncle, who is a drone enthusiast, demonstrated his self-flying drone, which was impressive, but by that time I really couldn’t justify it anymore.



  • They say that the camera adds ten pounds and ten years to the subject. Also consider the phenomenon of “Hollywood ugly”: a typically good-looking person often appears average or worse before the camera (that’s one of the reasons why I prefer being behind the camera). Fashion photographers put a lot of effort into making their subjects look perfect, and there are good, solid reasons why that should be so. A photograph is flat, lifeless, and typically small, and almost everything about a person that makes them lively, charming, personable, and exciting is missing from a photo, or at least difficult to capture effectively; but we do see flaws.

    It’s likewise for a landscape photo: a scene that seems interesting in person will look flat and dull in a photo. It takes a truly epic landscape in real life to make an interesting landscape photo. Flaws totally overlooked in real life become apparent in a landscape photo: power lines, trash, and parked cars may end up being noticed for the first time in the photo.


  • Over a decade ago I went on a night hike on New Year’s Eve, in the wilderness, under the light of the moon. It was cold, foggy, and lightly snowing.

    I wanted to bring a camera, but the only camera that would comfortably fit under my jacket was my old Nikon D40, with a 35 mm f/1.8 lens. Even back in those days the D40 was not considered all that great of a camera in low light. It was a cheap camera when I purchased it, and had very little value at the time of the shoot.

    My solution was not to worry about noise at all, but instead to try to get any photo that would capture my impression of the scene. I did use a monopod, which allowed a longer shutter duration of ¼ second, and I set the ISO as high as I could while still getting a usable image.

    Sometimes I converted the image to monochrome:

    https://flic.kr/p/dHhDVH

    https://flic.kr/p/dHo5qm

    https://flic.kr/p/dHo5j1

    Other times I just underexposed:

    https://flic.kr/p/dHo5EL

    I thought they turned out OK, in a very impressionistic, rough but memorable manner. I didn’t attempt portraits, but in this situation you could hardly see anyone’s face anyway.

    It’s possible to get very good monochrome photos from even extremely underexposed raw files via a special technique: extract the raw, mostly unprocessed color channels from the file and sum them together. This bypasses the color processing, which adds a considerable amount of noise.
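
    For anyone curious, here is a minimal sketch of that idea using the third-party rawpy library; the filename and the 2x2 Bayer-block summing are my own illustration of the technique, not necessarily the exact workflow used for the shots above:

    ```python
    import numpy as np
    import rawpy  # third-party: pip install rawpy

    # "night_hike.NEF" is a placeholder name for an underexposed raw file.
    with rawpy.imread("night_hike.NEF") as raw:
        bayer = raw.raw_image_visible.astype(np.float64)      # unprocessed sensor counts
        black = float(np.mean(raw.black_level_per_channel))   # average black level
        bayer = np.clip(bayer - black, 0.0, None)

    # Each 2x2 Bayer block holds one red, two green, and one blue sample. Summing
    # the four samples gives one monochrome pixel and skips demosaicing and white
    # balance entirely, which is where much of the visible chroma noise comes from.
    h = (bayer.shape[0] // 2) * 2
    w = (bayer.shape[1] // 2) * 2
    mono = (bayer[0:h:2, 0:w:2] + bayer[0:h:2, 1:w:2] +
            bayer[1:h:2, 0:w:2] + bayer[1:h:2, 1:w:2])

    mono /= mono.max()  # normalize to 0..1 for display or further tone mapping
    ```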



  • A depth of field calculator is a good start.

    But I’ve found more success with Merklinger’s method, which takes into account the blur of objects in real life instead of on the sensor. So it is an “object space” method instead of an “image space” method for determining depth of field.

    Basically, for wide, deep landscapes where infinity needs to be sharp, you just focus on infinity. Easy, right?

    Then you identify the smallest object in your scene, in the foreground, that you want just barely resolved. Suppose this is a blade of grass 5 mm wide. As it happens, the diameter of the entrance pupil of a lens puts a lower limit on the size of things that can be resolved anywhere in the scene when focused at infinity, and the entrance pupil width is, by definition, the focal length divided by the f-number. You divide the focal length by the size of the object that you want barely resolved, and that gives you the f-number needed. So if you are using a 50 mm lens and want to barely resolve a blade 5 mm wide, you’ll calculate 50 mm / 5 mm = 10, or f/10. This is so easy you can do it in your head. It holds no matter how near or far the object is from the camera; you simply don’t have to calculate distances, and it doesn’t matter what camera or sensor size you have.

    This also works if you focus closer, except the 5 mm blade of grass will be sharper the closer it is to the focus distance, and you can calculate that as well: 2.5 mm will be resolved at half the camera-to-subject distance, and 1 mm will be resolved when you are four-fifths of the way to the subject. If you want to estimate blur behind the point of focus, the same thing happens in reverse: one-fifth of the focus distance beyond the point of focus will resolve 1 mm as well, twice the focus distance will again resolve 5 mm, and the resolvable size keeps increasing proportionally out to infinity. (There’s a short worked sketch after the link below.)

    Unfortunately, diffraction effects aren’t easily incorporated into depth of field equations, but diffraction blur is the same width on the sensor for all cameras and all lenses at the same f-number, which ultimately means that the f-number giving equal relative diffraction blur is proportional to the sensor width. Simply determine what f-number on one lens and one camera is too blurry for you, and from that one observation you can extrapolate to all of your gear. If f/16 looks a bit too ugly on your full-frame camera, then you’ll know that f/8 will be too ugly on your Micro Four Thirds camera, since the latter has roughly half the sensor width. However, diffraction blur responds well to sharpening, so you can usually go beyond the so-called diffraction limit if you are willing to pump up the sharpening.

    http://www.trenholm.org/hmmerk/TIAOOFe.pdf
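
    To make the arithmetic above concrete, here is a small sketch of the object-space rules and the diffraction scaling; the function names and example numbers are mine, not Merklinger’s notation:

    ```python
    def f_number_for_object(focal_length_mm, object_size_mm):
        """Focused at infinity, the entrance pupil must be no wider than the
        smallest object to barely resolve: f-number = focal length / object size."""
        return focal_length_mm / object_size_mm

    def resolvable_size(pupil_mm, focus_distance, object_distance):
        """Focused closer than infinity, the smallest resolvable object size
        scales with the fractional distance from the plane of focus."""
        return pupil_mm * abs(focus_distance - object_distance) / focus_distance

    def equivalent_f_number(reference_f, reference_sensor_width_mm, sensor_width_mm):
        """The f-number giving the same relative diffraction blur scales with sensor width."""
        return reference_f * sensor_width_mm / reference_sensor_width_mm

    # 50 mm lens, 5 mm blade of grass barely resolved anywhere -> f/10.
    print(f_number_for_object(50, 5))          # 10.0

    # Same 5 mm pupil focused at 10 m: 2.5 mm resolved at 5 m, 1 mm at 8 m,
    # 1 mm again at 12 m, and 5 mm again at 20 m.
    for d in (5, 8, 12, 20):
        print(d, resolvable_size(5, 10, d))

    # If f/16 is too soft on full frame (36 mm wide), about f/8 is too soft on
    # a Micro Four Thirds sensor (17.3 mm wide).
    print(equivalent_f_number(16, 36, 17.3))   # ~7.7
    ```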


  • My moment was pretty much instantaneous. For a long time, I simply assumed that digital cameras were computerized machines that were programmed to take good photos, and if the photos were bad it was because the camera was bad. After taking a series of important photos that turned out badly, I had the sudden realization that my photos were bad because I was a bad photographer, and that I had to think like an artist to do better.

    Years later, I was at a photography exhibit at a major museum: it was an exhibit of photography admired by the Impressionist painters. I was with a young lady friend who was struggling to be good at photography but lacked confidence. As we toured the exhibit, we commented on the various photos, figuring out what was good, what made them good, and what could be improved, totally being the experts in the room. At one point my friend turned to me with a big smile as she suddenly realized that she had “arrived” and had her moment.