<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Shutter Angle &#187; depth of field</title>
	<atom:link href="http://www.shutterangle.com/tag/depth-of-field/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.shutterangle.com</link>
	<description>The science and magic of shooting moving pictures</description>
	<lastBuildDate>Thu, 23 Apr 2015 09:19:43 +0000</lastBuildDate>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.3.1</generator>
		<item>
		<title>Creating Depth, Part 1: Introduction, DOF, Deep Staging, Resolution</title>
		<link>https://www.shutterangle.com/2012/creating-depth-dof-deep-staging-resolution/</link>
		<comments>https://www.shutterangle.com/2012/creating-depth-dof-deep-staging-resolution/#comments</comments>
		<pubDate>Sat, 24 Nov 2012 18:38:46 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[composition]]></category>
		<category><![CDATA[depth]]></category>
		<category><![CDATA[depth of field]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=1315</guid>
		<description><![CDATA[<p>Depth perception is a basic ability of human vision. It is through depth that we judge distances and spatial relations. But depth is inherently a three-dimensional concept. So capturing the three-dimensional world as a two-dimensional image presents challenges when striving to preserve depth. These  [...]</p><p><a href="https://www.shutterangle.com/2012/creating-depth-dof-deep-staging-resolution/">Creating Depth, Part 1: Introduction, DOF, Deep Staging, Resolution</a></p>]]></description>
			<content:encoded><![CDATA[<p>Depth perception is a basic ability of human vision. It is through depth that we judge distances and spatial relations. But depth is inherently a three-dimensional concept. So capturing the three-dimensional world as a two-dimensional image presents challenges when striving to preserve depth. These challenges are mostly related to the fact that, unlike the real world, two-dimensional images lack stereo cues, and stereo vision is a major component of the mechanics of depth perception. This is one limitation that 3D cinema tries to overcome. This article is about 2D images though, and the ways to exploit stereo unrelated (monocular) cues to suggest depth. <span id="more-1315"></span><br />
<br/></p>
<h6><strong>Why is depth important in a 2D image?</strong></h6>
<div id="attachment_1319" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/11/ThomasCole-ThePicnic.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/11/ThomasCole-ThePicnic.jpg" alt="Thomas Cole - The Picnic" title="Thomas Cole - The Picnic" width="262" class="size-full wp-image-1319" /></a><p class="wp-caption-text">Exteriors naturally lend themselves to deep images. There are at least 4 significant distance planes here.</p></div>
<p>Depth is not a universal quality to look for in an image, but most non-abstract images benefit from an enhanced depth illusion. After all, an image renders a reality (existent or imagined), and reality is 3D. A cinematic 2D representation should <em>appear</em> sufficiently three-dimensional. Depth defines space. Injecting depth into an image furthers the spatial awareness of the viewer and helps orient them in the depicted world. Ideally, the objects in the frame should <em>appear</em> to recede behind the screen.</p>
<p>This is depth&#8217;s main function in a cinematic image, but not the only one. There are often purely pictorial benefits. Multiple depth planes create visual interest and stimulate the eye to wander and explore the frame. This may or may not be desirable, depending on content and intent. For example, an extreme close-up will usually gain nothing from distractions and complexity. And sometimes an image needs to be unclear, claustrophobic or to imply a confined space. Depth, on the other hand, both opens space and gives scope.</p>
<div id="attachment_1321" class="wp-caption alignleft" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/11/hellinthepacific.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/11/hellinthepacific.jpg" alt="Hell In the Pacific (1968) Lee Marvin" title="Hell In the Pacific (1968)" width="262" class="size-full wp-image-1321" /></a><p class="wp-caption-text">Obscurity can be a virtue. Here the face blends with the environment for a stronger impression.</p></div>
<p>Without the ability to triangulate distances through <a href="http://en.wikipedia.org/wiki/Stereopsis" title="Stereopsis at Wikipedia" target="_blank">stereo (binocular) vision</a>, spatial perception draws on experience with spatial relations between objects. The brain needs other cues to perceive depth. These cues are usually about space differentiation and, in one way or another, lead the brain to separate objects, figure a space between them, and identify multiple depth planes in the frame. So helping the separation of scene elements and enhancing the sense of distance between planes promotes the illusion of depth. The cinematographer manages depth through viewpoint choice and movement, composition and blocking, and lighting. But there are other tricks that can help, and we will talk about some of them too.</p>
<p>In painting, one often distinguishes foreground, middle ground and background, though there can be more discernible distance planes. In general, we can assume that at least two discernible distance planes are necessary for reasonable space definition in a frame.</p>
<div id="attachment_1333" class="wp-caption aligncenter" style="width: 490px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/11/Vermeer-TheArt.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/11/Vermeer-TheArt.jpg" alt="Vermeer - The Art of Painting" title="Vermeer - The Art of Painting" width="480" class="size-full wp-image-1333" /></a><p class="wp-caption-text">The foreground frames the planes in the back. A popular compositional device to add depth.</p></div>
<p><br/></p>
<h6><strong>Depth of field</strong></h6>
<p>One misguided &#8220;truth&#8221; you can often find on the internet is that full frame cameras render images with more depth due to their ability to create shallower apparent depth of field (compared to Super35/APS-C). This is wrong on many levels, starting with pure semantics: shallow focus and depth don&#8217;t really belong in the same sentence. Shallow focus is not the same as separation. The advantage of larger sensors is better pop and delineation (see the last section below). </p>
<div id="attachment_1373" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/11/nocountryofdof.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/11/nocountryofdof.jpg" alt="No country for Old Men (2007) depth of field" title="No country for Old Men (2007)" width="262" class="size-full wp-image-1373" /></a><p class="wp-caption-text">Slight defocus guides the eye and adds depth without obscuring scene elements. (click to enlarge)</p></div>
<p>Showing the relations between objects in the scene requires sufficient depth of field to render these objects recognizable. But slight defocusing, with objects getting smoothly and slowly out of focus with increasing distance from the focus point, can enhance the illusion of depth. There are two reasons for this. First, <em>slight</em> defocus mimics the workings of the eye (especially in dim environments), creating a natural image. Second, it simulates one of the cues of depth perception: <em>texture gradient</em>. The eye sees nearby objects in fine detail, and objects in the distance appear less detailed. While this is usually related to linear perspective, a similar effect happens with objects slowly falling out of focus. Artists sometimes emulate the slight defocus of the eye by painting only the main subject in fine detail.</p>
<div id="attachment_1328" class="wp-caption alignleft" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/11/HarryPotterDOF.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/11/HarryPotterDOF.jpg" alt="Harry Potter and the Deathly Hallows, Part 1 (2010)" title="Harry Potter and the Deathly Hallows, Part 1 (2010)" width="262" class="size-full wp-image-1328" /></a><p class="wp-caption-text">Bokeh brightness variation creates a nice frame, but calling the background a &quot;depth plane&quot; would be taking &quot;plane&quot; a bit literally.</p></div>
<p>Very shallow focus is a viable cinematic device for some purposes, but it will never render a space with sufficient depth. Anything in front of and behind the focus point becomes an impressionistic blur, isolating the focused object. Deep focus is often thought to yield flatness because every element in the scene is equally sharp. But where depth is concerned, equal sharpness is far less objectionable (if at all) than very shallow focus. It can even lend pictorial qualities to an image. And while there is no way to inject depth into a very shallow focus picture, there are approaches to separating depth planes in a deep focus image.<br />
<br/></p>
<h6><strong>Deep staging</strong></h6>
<p><em>Deep staging</em> (or <em>deep space</em>) refers to a specific approach to blocking action and camera. Important elements of the scene are placed on different depth planes. This creates natural distance points for the eye to wander to. Deep staging is often used with long takes, sometimes including dolly, steadicam or handheld camera moves. It is emphasized by actors entering the frame, thus creating an additional plane of interest; actors leaving the frame, shifting interest to another plane; or actors moving from one plane to another, creating depth vectors in the frame. A variant of the last is the so-called &#8220;walking into a close-up&#8221; shot, used by Hitchcock, John Ford, Kalatozov and others. This technique has the actor(s) moving from a deeper plane to the foreground, ending in a medium (or a tighter) close-up.</p>
<div class="wp-caption alignnone" style="width: 610px"><iframe src="http://player.vimeo.com/video/54162542" width="600" height="338" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe><p class="wp-caption-text"><strong><em>Managing depth planes:</em></strong> First Bernstein (right) lowering the paper introduces Thatcher (left), and creates a new plane; then Kane moving through depth planes establishes the full scope of the shot. Note how the true size of the room and the windows is only revealed after Kane stands in the deep background.</p></div>
<p>Deep focus and deep staging complement each other beautifully. But modern trends usually combine deep staging with selective focus and focus racking between planes to guide the attention of the viewer according to the filmmaker&#8217;s intent. There is also a tendency to rely on coverage and to defer sequence construction to editing. Coverage also relies more on close-ups, over-the-shoulder shots and other standard frames. None of this plays well with imaginative deep staging and deep focus. Neither does the shift towards less lighting. Deep focus and long takes require both pre-shooting commitment and relatively small apertures (and more light). This makes staging deep interiors a challenge.</p>
<div id="attachment_1362" class="wp-caption aligncenter" style="width: 620px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/11/cranesmedley.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/11/cranesmedley.jpg" alt="The Cranes Are Flying (1957)" title="The Cranes Are Flying (1957)" width="610" class="size-full wp-image-1362" /></a><p class="wp-caption-text">Films by the duo Kalatozov/Urusevsky often feature deeply staged shots with pronounced foregrounds and expressionistic camera angles.</p></div>
<p>Deep staging and deep focus were used extensively by some of the greats: Orson Welles, Roman Polanski, Kurosawa, Mizoguchi, among others. <em>Citizen Kane</em> is the textbook example. Orson Welles and cinematographer Gregg Toland used torrents of light, a wide lens (24 mm), small apertures (f8 to f16) and split focus to render the startlingly deep interiors. 24 mm may sound tame by today&#8217;s standards: since the film was shot in the Academy format, 24 mm is a humble 39 mm full frame equivalent (in horizontal FOV). But at the time it was considered unforgiving to actors and used sparingly. And certainly not with actors in the foreground. 50 mm was used universally for interiors, and f4 or larger apertures were the norm. Welles and Toland coupled their choice of optics with genuinely deep staging for maximum effect, which infused their images with drama.</p>
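<p>As a quick check of the equivalence quoted above, the conversion follows from the crop factor between the two formats. A minimal Python sketch, using the Academy aperture width (22.05 mm) and full-frame width (36 mm) figures given elsewhere in this article:</p>

```python
# Horizontal-FOV-equivalent focal length between two formats.
# Widths: Academy camera aperture ~22.05 mm, full-frame stills 36 mm
# (both figures as given elsewhere in this article).

def equivalent_focal(focal_mm, source_width_mm, target_width_mm):
    """Focal length on the target format with the same horizontal FOV."""
    return focal_mm * target_width_mm / source_width_mm

academy_width = 22.05    # mm
full_frame_width = 36.0  # mm

print(round(equivalent_focal(24, academy_width, full_frame_width)))  # 39
```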
<div id="attachment_1346" class="wp-caption aligncenter" style="width: 620px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/11/CitizenKaneMedley.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/11/CitizenKaneMedley.jpg" alt="Citizen Kane (Orson Welles)" title="Citizen Kane (1941)" width="610" class="size-full wp-image-1346" /></a><p class="wp-caption-text"><em><strong>Left:</strong></em> a classic three person / three planes arrangement with a strong foreground. <em><strong>Middle:</strong></em> Another beautifully crafted three planes shot, with the figure in the deep background giving scope to the set. <em><strong>Right:</strong></em> An example of split focus.</p></div>
<p><br/></p>
<h6><strong>Sharpness and 3D pop</strong></h6>
<p>Before moving on to more interesting stuff, let&#8217;s touch on a minor point. The quality of the lens and the resolution of the medium have an impact on textures and texture gradient. An image lacking clarity at its focus point will appear flatter than a crisper image. And when coupled with slight focus fall-off, the crisp image will demonstrate a more readily noticeable texture gradient.</p>
<div id="attachment_1380" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/11/nocountrysoften.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/11/nocountrysoften.jpg" alt="No Country for Old Men (2007)" title="No Country for Old Men (2007)" width="262" class="size-full wp-image-1380" /></a><p class="wp-caption-text">Bottom image is a softened version of the original (top), approximating a softer lens. Note the weaker delineation of the subject and the softer textures, resulting in a subtly flatter image. (click to enlarge)</p></div>
<p>A more obscure quality related to resolution and optics is the so-called 3D pop. In some pictures the subject in focus appears to pop out of the surroundings, and out of the image. Photographers sometimes call this <em>Zeiss pop</em> or <em>Leica pop</em>, depending on their affinities, because it tends to show in images shot with some Zeiss and Leica lenses. Pop is often mystified, and its origins can seem difficult to pinpoint. The most important element is delineation of the subject in focus. This requires great microcontrast (a strong <a href="http://en.wikipedia.org/wiki/Modulation_transfer_function" title="MTF at Wikipedia" target="_blank">MTF</a> result) at the spatial frequencies that contribute to the required resolution, preferably across the whole image field. Good MTF results at higher spatial frequencies may look nice on an MTF chart but are at best irrelevant and at worst create aliasing. For example, a FullHD full frame image will only benefit from spatial frequencies up to around 15 lp/mm. For the same pixel resolution, larger sensors have an advantage: they need this great MTF at lower frequencies than smaller sensors do, which is easier to achieve. Minimized lens aberrations help produce clean edges. And the edges obviously need to be in sharp focus. Again, a bit of focus fall-off towards the background helps, but a heavily blurred background will make the subject look like a cut-out. Some decent subject-background contrast (through light and/or color) also contributes.</p>
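<p>The useful spatial-frequency ceiling can be sketched from pixel counts: the Nyquist limit is pixels / (2 &#215; dimension in mm). The ~15 lp/mm figure above is lower than the raw Nyquist value for FullHD on full frame, consistent with a practical derating; the 0.7 Kell-type factor in the sketch below is our assumption, not a figure from this article:</p>

```python
# Nyquist limit in line pairs per mm for a pixel count spread over a
# sensor dimension. The 0.7 derating is an assumed Kell-type practical
# correction, not a figure from the article.

def nyquist_lp_mm(pixels, dimension_mm):
    return pixels / (2 * dimension_mm)

vertical = nyquist_lp_mm(1080, 24)  # 22.5 lp/mm: FullHD on a 24 mm tall frame
practical = vertical * 0.7          # ~15.7 lp/mm after derating
print(vertical, round(practical, 1))
```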
<p>Pop&#8217;s connection to depth lies in interposition. <em>Interposition</em> is one of the basic depth cues. When an object occludes another object, the first object is apparently in front of the other, and closer to the observer. Popping (clearly delineating) the front object is one way to separate it from what lies behind. If the objects blend into each other, they may be perceived as a single entity.</p>
<p>Sharpness and pop are influenced (in a bad way) by lossy image compression. They will often get lost in heavily compressed video. <a href="http://www.shutterangle.com/2012/cinematic-look-frame-rate-shutter-speed/" title="Cinematic Look, Part 2: Frame Rate and Shutter Speed">Motion blur</a> also obliterates them, which renders them pretty much irrelevant in scenes with motion and more applicable to still images.</p>
<p>Part 2 of the <em>Creating Depth</em> series is on <a href="http://www.shutterangle.com/2013/creating-depth-perspective/" title="Creating Depth, Part 2: Perspective">depth and perspective</a>.</p>
<p><a href="https://www.shutterangle.com/2012/creating-depth-dof-deep-staging-resolution/">Creating Depth, Part 1: Introduction, DOF, Deep Staging, Resolution</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2012/creating-depth-dof-deep-staging-resolution/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Cinematic Look, Part 1: Aspect Ratio, Sensor Size and Depth of Field</title>
		<link>https://www.shutterangle.com/2012/cinematic-look-aspect-ratio-sensor-size-depth-of-field/</link>
		<comments>https://www.shutterangle.com/2012/cinematic-look-aspect-ratio-sensor-size-depth-of-field/#comments</comments>
		<pubDate>Thu, 22 Mar 2012 23:55:39 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[aspect ratio]]></category>
		<category><![CDATA[cinematic look]]></category>
		<category><![CDATA[depth of field]]></category>
		<category><![CDATA[sensor size]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=14</guid>
		<description><![CDATA[<p>With the advent of the digital SLR as a video capturing device in recent years there is a lot of raving on the internet about the "cinematic look" one can achieve with DSLRs. <em>Cinematic look</em> is often opposed to <em>video look</em> or <em>TV look</em>. On forums and blogs one can read both delusions and truth regarding this distinction. As is often the case with any hype - hype has the tendency to self-amplify - a lot of noise gets picked up and reiterated in such a discussion. This series of articles will attempt to examine in some detail the various characteristics of the cinematic look and then explore how they relate to the image of video capturing devices, including HDDSLRs. Hopefully, some myths will be cleared in the process. This first part in the series is focused on aspect ratios and sensor sizes and the closely related topic of depth of field.</p><p><a href="https://www.shutterangle.com/2012/cinematic-look-aspect-ratio-sensor-size-depth-of-field/">Cinematic Look, Part 1: Aspect Ratio, Sensor Size and Depth of Field</a></p>]]></description>
			<content:encoded><![CDATA[<p>With the advent of the digital SLR as a video capturing device in recent years there is a lot of raving on the internet about the &#8220;cinematic look&#8221; one can achieve with DSLRs. <em>Cinematic look</em> is often opposed to <em>video look</em> or <em>TV look</em>. On forums and blogs one can read both delusions and truth regarding this distinction. As is often the case with any hype &#8211; hype has the tendency to self-amplify &#8211; a lot of noise gets picked up and reiterated in such a discussion. This series of articles will attempt to examine in some detail the various characteristics of the cinematic look and then explore how they relate to the image of video capturing devices, including HDDSLRs. Hopefully, some myths will be cleared in the process. This first part in the series is focused on aspect ratios and sensor sizes and the closely related topic of depth of field.</p>
<p><span id="more-14"></span></p>
<p>Before we start, let&#8217;s make clear what &#8220;cinematic look&#8221; actually means. For us, cinematic look is what audiences have come to expect from a motion picture in terms of appearance, or in other words in terms of visual perception. This is the image we have been culturally conditioned to consider cinematic through decades of exposure to movies. And this is what we are trying to replicate with digital cameras when aspiring to the &#8220;cinematic look&#8221;. Note that the cinematic look is historically the &#8220;film look&#8221;, as movies were shot almost exclusively on film for around a hundred years.<br />
<br/></p>
<h6><strong>The aspect ratio</strong></h6>
<div id="attachment_89" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/03/ourhosp.jpg"><img class=" wp-image-89  " title="Our Hospitality (1923)" src="http://www.shutterangle.com/wp-content/uploads/2012/03/ourhosp.jpg" alt="Our Hospitality (1923) screenshot" width="262" height="202" /></a><p class="wp-caption-text">Silent films were generally shot in 4:3 (1.33:1) aspect ratio</p></div>
<p>For years wide format moving images were associated with cinema and the more squarish 1.33:1 format was linked to TVs. At least, that&#8217;s what we were conditioned to imagine. With the introduction of wide TVs things changed a bit. HD and full HD television screens now have an aspect ratio of 1.78:1 (1920&#215;1080 pixels for full HD; 1280&#215;720 pixels for HD). In comparison, the most popular cinema aspect ratios are currently 1.85:1 and 2.39:1 (often labeled 2.35:1 for historical reasons), for flat and anamorphic projection respectively. This means TVs are now much closer in geometric appearance to cinema screens. Cinematic ratios weren&#8217;t always like this, though. Silent films were 1.33:1. Talking pictures were 1.375:1 for some twenty years, until 1952. For various reasons, around this time the widescreen revolution in cinema happened, with the aforementioned wider aspect ratios becoming prevalent.</p>
<p>Artistic reasons for choosing a specific aspect ratio notwithstanding, it is correct to assume that a wide image is nowadays associated with cinematic appearance. Up to some limit, at least. Incidentally, using anamorphic adapters on HD cameras may yield images with an extracted aspect ratio of up to double the HD ratio (or 3.56:1), which can be way too wide, so some cropping of the sides is recommended in such cases. But the use of anamorphics on consumer and prosumer digital cameras is a topic for another article. The bottom line is, 1.78:1 video is already quite cinematic in geometric appearance. But if one so desires, it is safe to consider a slight cropping of the top and the bottom to bring it to 1.85:1 or even 2.39:1. Have a look at <a href="http://www.shutterangle.com/2012/film-video-aspect-ratio-artistic-choice/" title="Aspect Ratio Choice for a Film or Video: Artistic Considerations">this article</a> for more detailed thoughts on aspect ratio choice for a specific project.</p>
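<p>The cropping mentioned above is simple arithmetic: keep the 1920-pixel width and trim the height to match the target ratio. A small sketch (rounding to an even number of rows, which most codecs prefer, is our convention, not the article's):</p>

```python
# Height (in pixels) left after matting a fixed-width frame to a wider
# aspect ratio. Rounding down to an even row count is our convention.

def matted_height(width_px, target_ratio):
    h = round(width_px / target_ratio)
    return h - (h % 2)

print(matted_height(1920, 1.85))  # 1038 rows remain of the original 1080
print(matted_height(1920, 2.39))  # 802 rows remain of the original 1080
print(round(2 * 1920 / 1080, 2))  # 3.56 -- 2x anamorphic stretch of 16:9
```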
<div id="attachment_98" class="wp-caption aligncenter" style="width: 509px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/03/aspects.jpg"><img class="size-full wp-image-98" title="Movie aspect ratios" src="http://www.shutterangle.com/wp-content/uploads/2012/03/aspects.jpg" alt="Movie aspect ratios" width="499" height="348" /></a><p class="wp-caption-text">Cinema aspect ratios</p></div>
<p>Note that in the above two paragraphs we are talking about the aspect ratio of what you see on screen. This is not necessarily the same ratio as the recorded image, especially when the recording medium is film.<br />
<br/></p>
<h6><strong>Sensor size and depth of field</strong></h6>
<p>In terms of aesthetics the most apparent property linked to sensor size or film frame size is <a href="http://en.wikipedia.org/wiki/Depth_of_field" title="depth of field" target="_blank">depth of field</a>. Popular understanding is that smaller sensors yield greater depth of field and, conversely, large sensors have less depth of field. Technically, if we shoot the same composition with two differently sized sensors, and:</p>
<ol>
<li>with lenses covering the same angle of view, i.e. wider lens for the smaller sensor and longer lens for the larger sensor;</li>
<li>with the same camera-subject distance;</li>
<li>with the same aperture diameter;</li>
<li>enlarge the result to the same print or screen size (or resize to the same video pixel size, let&#8217;s say 1920&#215;1080);</li>
<li>and use the same criterion for sharpness (i.e., the circle of confusion is proportional to the sensor size),</li>
</ol>
<p>then both pictures will have exactly the same depth of field. Does your experience tell you otherwise? The tricky part is condition 3: the same aperture diameter does NOT mean the same f-number, because the f-number equals the focal length divided by the aperture diameter. This means that, under the conditions above, the longer lens (used on the larger sensor to achieve the same angle of view as the smaller sensor) will shoot at a bigger f-number in order to maintain the same physical aperture.</p>
<p>On the other hand, if we retain all conditions but change 3) to &#8220;at the same f-number&#8221; (which, by the way, should also, more or less, preserve the same exposure, considering we shoot at the same ISO), then the smaller sensor will indeed yield greater depth of field. This is because the wider lens (used for the smaller sensor) will have a smaller aperture opening at this equal f-number. So we can conclude that lenses with equal angles of view, shot at the same f-number on differently sized sensors (or film frames, for that matter) manifest different depth of field, with smaller sensors giving pictures with greater DOF.</p>
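<p>The two scenarios above can be made concrete with the defining relation N = focal length / aperture diameter. A hypothetical Python sketch comparing Super 35 and full frame (the sensor widths follow figures used in this article; the 35 mm at f2.8 starting point is an invented example):</p>

```python
# f-number N = focal_length / aperture_diameter. Holding the physical
# aperture diameter constant across formats (condition 3) means the
# f-number scales with the crop factor, and depth of field stays equal.

def f_number(focal_mm, aperture_diameter_mm):
    return focal_mm / aperture_diameter_mm

crop = 36 / 24.89                # full frame vs Super 35 width, ~1.45x
s35_focal, s35_n = 35.0, 2.8     # hypothetical Super 35 setup
diameter = s35_focal / s35_n     # 12.5 mm physical aperture

ff_focal = s35_focal * crop          # ~50.6 mm for the same horizontal FOV
ff_n = f_number(ff_focal, diameter)  # ~f4 -- same diameter, same DOF
print(round(ff_focal, 1), round(ff_n, 2))
```

<p>Shooting the full frame camera at f2.8 instead (the second scenario) opens the physical aperture to ~18 mm, which is why it yields shallower depth of field.</p>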
<p>For this article we will ignore other properties related to format size. In the digital case these include dynamic range, sensitivity and noise (all three are, more precisely, connected to sensor pixel size). For film, different film frame sizes (but from the same emulsion) will show different grain sizes when projected on the same screen (or printed on the same release stock).<br />
<br/></p>
<h6><strong>Camera aperture and projection aperture</strong></h6>
<p>Motion picture film cameras have a rectangular film gate in front of the negative, which defines the portion of the frame getting exposed. This portion of the negative (and often the gate itself) is called camera aperture. When projecting a release print in a movie theater another gate is used in front of the film called projection aperture. The projection aperture is slightly smaller than the camera aperture to allow some safety margin for imperfect alignment of the film roll. It is essentially a window in the camera aperture area. So the audience in the theater never sees the full image as it was recorded but a slightly cropped version. Camera and projection aperture may also vastly differ when different aspect ratios are involved in shooting and projection: for example, when a movie is shot in 1.33:1 but projected at 1.85:1. All this makes film frame area measurements somewhat ambiguous. In this text for consistency when talking about various film formats we will mean the standardized camera aperture unless &#8220;projection aperture&#8221; is explicitly stated.<br />
<br/></p>
<h6><strong>Frame sizes and aspect ratios of popular film formats</strong></h6>
<div id="attachment_163" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/03/stillsframe.jpg"><img class=" wp-image-163  " title="135 film" src="http://www.shutterangle.com/wp-content/uploads/2012/03/stillsframe.jpg" alt="35mm stills negative film" width="262" height="204" /></a><p class="wp-caption-text">A frame from Kodak Gold 35mm stills negative film</p></div>
<p>Stills photographers coming to videography sometimes wrongly assume that 35mm motion picture frames are the same as stills 35mm frames. A full frame for stills photography is sized 36mm x 24mm, or, rather, this is the exposed area of the frame (the camera aperture). Note that film rolls are oriented horizontally in a stills camera, with the film perforations at the top and the bottom of the frame. On the other hand, film used for motion pictures is (usually) oriented vertically (perforations at the sides). For 35mm film the longer side is around 24mm and the shorter side around 18mm. The exact frame height depends on how narrow the frame line is. The standard negative pulldown (or film pulldown) for movies is 4 perforations per frame (4-perf): the camera sprocket wheels pull four perforations from the film roll for each frame.</p>
<div style="float: left; margin-right: 10px;">
<div id="attachment_147" class="wp-caption alignnone" style="width: 239px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/03/silentframe.jpg"><img class=" wp-image-147  " title="35mm silent film frame" src="http://www.shutterangle.com/wp-content/uploads/2012/03/silentframe.jpg" alt="35mm silent film frame" width="229" height="124" /></a><p class="wp-caption-text">35mm silent frame</p></div>
<div id="attachment_148" class="wp-caption alignnone" style="width: 239px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/03/academyframe.jpg"><img class=" wp-image-148 " title="35mm Academy format" src="http://www.shutterangle.com/wp-content/uploads/2012/03/academyframe.jpg" alt="35mm Academy format" width="229" height="124" /></a><p class="wp-caption-text">35mm Academy format frame, shown with the space reserved for soundtrack</p></div>
<div id="attachment_153" class="wp-caption alignnone" style="width: 239px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/03/Super35frame.jpg"><img class=" wp-image-153 " title="Super 35 frame" src="http://www.shutterangle.com/wp-content/uploads/2012/03/Super35frame.jpg" alt="Super 35 frame" width="229" height="124" /></a><p class="wp-caption-text">3-perf Super 35 frame</p></div>
<div id="attachment_154" class="wp-caption alignnone" style="width: 239px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/03/techniscopeframe.jpg"><img class=" wp-image-154 " title="Techniscope frame" src="http://www.shutterangle.com/wp-content/uploads/2012/03/techniscopeframe.jpg" alt="Techniscope frame" width="229" height="124" /></a><p class="wp-caption-text">Techniscope is a 2-perf format</p></div>
</div>
<p>Silent film utilized the full area of the frame for recording images because there was no need to leave space on the negative for sound. The camera aperture of silent film was 24.89mm x 18.67mm (.980&#8243; x .735&#8243;) with 1.33:1 aspect ratio.</p>
<p>For talkies the image area shrank in order to accommodate the soundtrack on the release print. In 1932 the camera aperture for sound pictures was set to 22.05mm x 16.03mm (.868&#8243; x .631&#8243;), with the projection aperture set to 20.1mm x 15.24mm (.825&#8243; x .600&#8243;) and a 1.375:1 aspect ratio.</p>
<p>Wide screen formats varied a lot through the years. Anamorphic formats utilized a frame size similar to the Academy format, but in order to achieve widescreen ratios anamorphic lenses were used to squeeze the image while shooting and then unsqueeze it on projection. We will leave anamorphic formats out of this article and focus on flat formats, as they relate more easily to digital sensors. The current wide standard for shooting flat is Super 35. Super 35 is a production standard, meaning it gets resized when printed. This also means there is no need to leave space on the negative for sound, as the printing is not 1:1. Super 35 was originally a 4-perf format sized 24.89mm x 18.67mm (.980&#8243; x .735&#8243;). This is a 1.33:1 format, so frames were matted down (to 1.85:1 or 2.39:1) for release. Much of the frame was therefore wasted, so currently a 3-perf version sized 24.89mm x 13.87mm (.980&#8243; x .546&#8243;) is used in order to maximize frame utilization. This saves around a quarter of the stock length compared to 4-perf. 3-perf still gets cropped a bit when printed but wastes much less negative than 4-perf.</p>
<p>Various wide-screen apertures have been used through the years. VistaVision was an 8-perf horizontal format developed by Paramount in the 50&#8217;s, similar to 35mm for stills. With a camera aperture sized 37.7mm x 25.17mm (1.485&#8243; x .991&#8243;) it offered great image quality. Most productions were matted and printed down to standard-size 1.85:1 vertical prints for theatrical release. Despite the exceptional quality, VistaVision didn&#8217;t catch on because of its higher stock costs compared to anamorphic formats. Since the 60&#8217;s it&#8217;s been used mostly for special effects work requiring greater resolution.</p>
<p>At the other end of the spectrum was Techniscope, introduced by Technicolor Italy in the early 60&#8217;s. This was a 2-perf production format meant to save film stock by sacrificing a bit of image quality. It used a camera aperture sized 22.05mm x 9.47mm (.868&#8243; x .373&#8243;). Techniscope pictures were shot flat and then printed with a 2x vertical enlargement factor so they could be projected anamorphically. Being 2-perf, it used half the stock of 4-perf during production, at the cost of larger grain and less clarity.</p>
<div id="attachment_172" class="wp-caption aligncenter" style="width: 522px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/03/North.jpg"><img class="wp-image-172 " title="North by Northwest (1959)" src="http://www.shutterangle.com/wp-content/uploads/2012/03/North.jpg" alt="North by Northwest (1959) screenshot" width="512" height="288" /></a><p class="wp-caption-text">North by Northwest, like other Hitchcock movies from the second half of the 50&#39;s, was shot in VistaVision</p></div>
<p>Then there is also 16mm film, which is widely used for documentary and TV work and occasionally for cinema, especially indie films. Super 16 (the analog of Super 35 in the 16mm world) has a camera aperture sized 12.52mm x 7.41mm (0.493&#8243; x 0.292&#8243;). Recent award-winning films shot in Super 16 include <em>The Hurt Locker</em>, <em>Black Swan</em> and <em>The Wrestler</em>.</p>
<h6><strong>Digital sensors</strong></h6>
<p>For a long time TV cameras used analog pickup tubes to convert optical images into electric signals. The diameter of the imaging area of a tube is usually about 2/3 of the tube&#8217;s overall diameter. Standard tube diameters included 1 inch, 2/3 inch and 1/1.8 inch. You probably notice the similarity with digital sensor size categories. Indeed, sensor sizes like 2/3&#8243;, 1/1.8&#8243;, 1/2.3&#8243;, Four-Thirds, etc. are named this way for historical reasons related to analog TV and video cameras. Each of these has an imaging diagonal roughly equal to the imaging diameter of a tube of that size (remember, the imaging diameter of the tube is about 2/3 of the overall tube diameter). For example, a typical 2/3&#8243; sensor will have a diagonal of around 11 mm &#8211; roughly 2/3 of 2/3&#8243;.</p>
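<p>The naming rule is easy to check with a few lines of code. This is only the rule of thumb described above (imaging diagonal &#8776; 2/3 of the nominal tube diameter); actual sensor dimensions vary slightly between manufacturers:</p>

```python
INCH_MM = 25.4  # millimeters per inch

def imaging_diagonal_mm(nominal_tube_inches):
    """Approximate imaging diagonal of a tube-named sensor size:
    about 2/3 of the nominal tube diameter (historical convention)."""
    return nominal_tube_inches * INCH_MM * 2 / 3

# A "2/3-inch" sensor:
print(round(imaging_diagonal_mm(2 / 3), 1))  # about 11.3 mm
```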
<p>Modern HD digital video cameras normally use a 16:9 sensor. The typical aspect ratio of the digital sensor in a photo camera is either 3:2 (mimicking film stills) or 4:3. But when shooting video with a photo camera, only a 16:9 portion of the sensor is used, with the pixels at the top and bottom discarded. One consequence is that the crop factors often used to compare photo camera sensors are not always accurate for video. Modern video is predominantly widescreen, and cropping practically always happens at the top and/or bottom, leaving the original width of the image unchanged. That&#8217;s why cinematographers and videographers often compare sensors in terms of DOF by sensor width, rather than by the diagonal measurement used in stills crop factors.</p>
<p>Considering this, the following table lists some film frame and digital sensor sizes, taking into account only the approximately 16:9 area (or wider, where 16:9 is not applicable), sorted by width.</p>
<div style="margin-left: 10%; margin-right: 10%;">
<table style="font-family: Verdana; text-align: left;" border="1" cellspacing="0" cellpadding="4">
<caption style="caption-side: bottom; text-align: center; font-size: 90%;"><em>Various film format and sensor sizes sorted by width. All sizes in millimeters.</em></caption>
<tbody>
<tr>
<th style="width: 40%;"><strong>Sensor or film format</strong></th>
<th style="width: 20%;"><strong>Frame size (16:9)</strong></th>
</tr>
<tr>
<td>Canon 5D Mark 2/3 (Full Frame)</td>
<td>36 x 20.3</td>
</tr>
<tr>
<td>Canon 1D Mark 4 (APS-H)</td>
<td>27.9 x 15.7</td>
</tr>
<tr>
<td>Super 35 (film)</td>
<td>24.89 x 13.87</td>
</tr>
<tr>
<td>Canon C300</td>
<td>24.6 x 13.8</td>
</tr>
<tr>
<td>Arri Alexa</td>
<td>23.76 x 13.365</td>
</tr>
<tr>
<td>Nikon D7000 (APS-C)</td>
<td>23.6 x 13.3</td>
</tr>
<tr>
<td>Sony Nex 5n</td>
<td>23.4 x 13.16</td>
</tr>
<tr>
<td>Canon 7D/60D/600D (APS-C)</td>
<td>22.3 x 12.5</td>
</tr>
<tr>
<td>Red Epic/Scarlet in 4K mode</td>
<td>22.12 x 12.44</td>
</tr>
<tr>
<td>Techniscope (film)</td>
<td>22.05 x 9.47</td>
</tr>
<tr>
<td>Panasonic GH2 in 16:9 mode</td>
<td>18.8 x 10.6</td>
</tr>
<tr>
<td>Super 16 (film)</td>
<td>12.52 x 7.03</td>
</tr>
<tr>
<td>Typical 2/3&#8243; TV camera tube</td>
<td>8.8 x 4.95</td>
</tr>
</tbody>
</table>
</div>
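<p>Using widths from the table above, a width-based &#8220;crop factor&#8221; relative to Super 35 takes only a couple of lines. This is just a sketch with a handful of the listed formats; the widths are the 16:9 values from the table:</p>

```python
# Width-based crop factor relative to Super 35: how much narrower
# (or wider, for values below 1) each format is.
SUPER35_WIDTH = 24.89  # mm, 3-perf Super 35

widths_mm = {
    "Full Frame": 36.0,
    "Canon APS-C": 22.3,
    "Super 16": 12.52,
    "2/3-inch tube": 8.8,
}

for name, width in widths_mm.items():
    print(f"{name}: {SUPER35_WIDTH / width:.2f}x")
```

<p>Note that Full Frame comes out at about 0.69x &#8211; i.e. it is substantially <em>wider</em> than the Super 35 film frame, not equal to it.</p>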
<p>So how do we interpret these numbers in terms of depth of field?</p>
<p>It is commonly understood that TV and video have relatively large apparent depth of field, and it is easy to see why. First, TV cameras tend to use relatively slow zoom lenses with relatively small apertures. Second, and more importantly, as the table shows, they have a much smaller imaging area than both film and large digital sensors.</p>
<p>One can often read on forums statements like &#8220;I like the Canon 5D Mark 2 because of its cinematic DOF&#8221;. Statements like this can be attributed to years of visual opposition between TV and cinema and the resulting automatic generalization: TV has lots of DOF, cinema has shallow DOF. This is not always true. The correct statement is that cinema <em>can</em> have shallower depth of field than TV.</p>
<p>One look at the table shows that Full Frame DSLRs actually have a much larger sensor than the typical widescreen motion picture frame (i.e. Super 35 and classic widescreen). This means they may demonstrate excessively shallow DOF compared to motion pictures shot on film at the same f-number/exposure (see above). APS-C sized sensors are much closer to the typical film frame size. No wonder digital cinema cameras that claim &#8220;Super 35&#8221; sized sensors actually use APS-C sized sensors. This doesn&#8217;t mean that APS-C sensors are better than Full Frame. There are other reasons to use sensors larger than APS-C: low-light sensitivity, dynamic and color range, overall image crispness (the last one is often &#8220;lost in compression&#8221; in DSLR video). And the shallow depth of field fetish, of course.</p>
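<p>As a rough first-order sketch (ignoring focus distance and circle-of-confusion conventions): at the same field of view and display size, depth of field is approximately matched when the f-number scales with the ratio of sensor widths. So matching Super 35 DOF on a wider sensor requires stopping down:</p>

```python
SUPER35_WIDTH = 24.89  # mm

def matching_f_number(f_super35, sensor_width_mm):
    """First-order approximation: to match the DOF of Super 35 at
    f_super35 on a sensor of the given width (same field of view,
    same display size), scale the f-number by the width ratio."""
    return f_super35 * sensor_width_mm / SUPER35_WIDTH

# Full Frame (36 mm wide) needs a smaller aperture for the same DOF:
print(round(matching_f_number(2.8, 36.0), 1))  # roughly f/4
```

<p>In other words, a Full Frame camera wide open behaves like a Super 35 camera shot at roughly a stop faster &#8211; which is exactly the &#8220;excessively shallow&#8221; look described above.</p>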
<div id="attachment_183" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/03/outiw.jpg"><img class=" wp-image-183 " title="Once Upon a Time in the West (1968)" src="http://www.shutterangle.com/wp-content/uploads/2012/03/outiw.jpg" alt="Once Upon a Time in the West (1968) screenshot" width="262" height="112" /></a><p class="wp-caption-text">Sergio Leone shot Once Upon a Time in the West in Techniscope, with a frame size smaller than APS-C</p></div>
<p>A small non-technical digression. There is also the aesthetic side of DOF. Some of the greatest films in history sought deep focus by using wide lenses exclusively or by pouring tons of light on set and shooting at small apertures. Others ended up with relatively deep DOF for technical reasons: smaller film frame sizes (compared to Super 35 and APS-C). It is telling (and a bit ironic) that Paramount promoted their VistaVision process as a deep focus vehicle, mostly based on the availability of a 28 mm lens &#8211; one of the widest at the time. It was not shallow focus that tempted filmmakers, but rather image clarity, depth and wide-angle possibilities. Generally, movies are supposed to represent objects in relation to their surroundings, which implies sufficient DOF to visualize these relations. Selective focus is certainly a great tool for isolating subject matter and commanding the viewer&#8217;s eye. But shallow DOF is just that: a means, not a goal. Very shallow DOF can surely be a good aesthetic for certain scenarios (recently <em>Tinker Tailor Soldier Spy</em> relied on shallow DOF, mostly by utilizing longer lenses), but these tend to be the exception, not the norm. So we can nevertheless argue that shallow depth of field is not an inherent characteristic of cinema, because there are lots of influential movies heavily utilizing deep focus shots. There are other properties more intimately associated with the cinematic look. Some of them are the focus of <a href="http://www.shutterangle.com/2012/cinematic-look-frame-rate-shutter-speed/" title="Cinematic Look, Part 2: Frame Rate and Shutter Speed">the next part in this series</a>.</p>
<p>We can safely conclude that in the DSLR realm (and among digital sensors in general) APS-C comes closest to the cinematic look in terms of DOF.</p>
<p><a href="https://www.shutterangle.com/2012/cinematic-look-aspect-ratio-sensor-size-depth-of-field/">Cinematic Look, Part 1: Aspect Ratio, Sensor Size and Depth of Field</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2012/cinematic-look-aspect-ratio-sensor-size-depth-of-field/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
	</channel>
</rss>
