<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Shutter Angle</title>
	<atom:link href="http://www.shutterangle.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.shutterangle.com</link>
	<description>The science and magic of shooting moving pictures</description>
	<lastBuildDate>Thu, 23 Apr 2015 09:19:43 +0000</lastBuildDate>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.3.1</generator>
		<item>
		<title>Introducing slimRAW: A Fast CinemaDNG Raw Video Compressor</title>
		<link>https://www.shutterangle.com/2015/slimraw-cinemadng-raw-video-lossless-compressor/</link>
		<comments>https://www.shutterangle.com/2015/slimraw-cinemadng-raw-video-lossless-compressor/#comments</comments>
		<pubDate>Wed, 22 Apr 2015 11:18:10 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[CinemaDNG]]></category>
		<category><![CDATA[raw video]]></category>
		<category><![CDATA[video compression]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=2047</guid>
		<description><![CDATA[<p>CinemaDNG lossless compression has been an interest of mine for some time. I&#8217;ve described a possible workflow before. But it had some issues and the (lack of) speed was annoying, to say the least. So I&#8217;ve been thinking about a better solution.
This led to the creation of slimRAW, a fast CinemaDNG [...]</p><p><a href="https://www.shutterangle.com/2015/slimraw-cinemadng-raw-video-lossless-compressor/">Introducing slimRAW: A Fast CinemaDNG Raw Video Compressor</a></p>]]></description>
			<content:encoded><![CDATA[<p>CinemaDNG lossless compression has been an interest of mine for some time. I&#8217;ve described a possible workflow <a href="http://www.shutterangle.com/2014/lossless-compression-for-dng-raw-video-dngstrip/">before</a>. But it had some issues and the (lack of) speed was annoying, to say the least. So I&#8217;ve been thinking about a better solution.</p>
<p>This led to the creation of <strong>slimRAW</strong>, a fast CinemaDNG compressor developed by yours truly.<span id="more-2047"></span><br />
<br />
<a href="http://www.slimraw.com"><img src="http://www.shutterangle.com/wp-content/uploads/2015/04/slimRAW-logo.png" alt="" width="548" height="113" class="aligncenter size-full wp-image-2060" /></a></p>
<p>slimRAW supports all types of uncompressed CinemaDNG and DNG raw video that I could get my hands on. It has been tested to work with footage from Blackmagic Design Cinema Camera (pre firmware 2.1); Digital Bolex D16; Sony FS700/FS7 CinemaDNG raw recorded through Convergent Design Odyssey 7Q/7Q+; Canon DSLR Magic Lantern raw converted to CinemaDNG/DNG through any of the available converters; Ikonoskop A-Cam dii; uncompressed CinemaDNG from Kinefinity cameras (including 6K from the KineMAX); Fastec Imaging TS and HiSpec high speed cameras; Indiecam indiePOV and indieGS2K (12-bit footage). Other sources are likely to work if the footage is 8, 12, 14 or 16-bit (but no guarantees).</p>
<p>slimRAW&#8217;s losslessly compressed CinemaDNG output has been tested to import fine in Blackmagic Design DaVinci Resolve, Assimilate Scratch, The Foundry NUKE, Adobe Premiere Pro CC, Adobe SpeedGrade CC, and in Adobe software using Adobe Camera Raw for CinemaDNG/DNG import &#8211; After Effects, Photoshop, Lightroom. It should also work in any CinemaDNG-conformant software.</p>
<p>An obvious application of slimRAW is to compress existing uncompressed CinemaDNG footage and free up storage space without any loss of quality. But there is more to it. An important goal in development was to achieve faster than real time compression speeds. This would allow for practical offload-with-compression on location in one step: offloading video from camera or recorder storage to main storage while compressing it at the same time.</p>
<p>slimRAW compression was coded from scratch for performance. Processing is parallel and scales with available CPU cores. Example processing speeds (on a 3.4 GHz Intel i7-4770 under Windows 7, running footage off an SSD): around 110 fps for Digital Bolex D16 2K CinemaDNG uncompressed footage and around 70 fps for Blackmagic Design Cinema Camera 2.5K CinemaDNG uncompressed footage. slimRAW is fast enough that in a lot of cases performance is limited by storage bandwidth and not by the CPU. SSDs and RAID certainly help with that.</p>
<p>slimRAW also achieves the best compression ratios I&#8217;ve seen for lossless CinemaDNG compression (including software and hardware, i.e. cameras outputting losslessly compressed CinemaDNG). Lossless compression ratios aren&#8217;t fixed. They depend on the nature of the source material: noisy and detailed footage will compress less than clean and defocused footage. With test footage from various cameras slimRAW compression ratios range from around 1.5:1 to 2.8:1 (i.e. files shrink to roughly 67&#8211;36% of their original size). Some clips actually break the 3:1 barrier.</p>
<div id="attachment_2051" class="wp-caption aligncenter" style="width: 610px"><a href="http://www.shutterangle.com/wp-content/uploads/2015/04/sr-done.png"><img src="http://www.shutterangle.com/wp-content/uploads/2015/04/sr-done.png" alt="slimRAW CinemaDNG lossless compression" width="600" class="size-full wp-image-2051" /></a><p class="wp-caption-text">slimRAW has just finished working on some Canon 5D Mark3 Magic Lantern raw DNG footage.</p></div>
<p>slimRAW preserves all the original metadata. This includes CinemaDNG specific metadata like time code, frame rate, T-stop, etc. It doesn&#8217;t touch the color matrices or any other color related metadata. This means that in any video production application supporting losslessly compressed CinemaDNG there is no difference between the original files and the losslessly compressed files. Moreover, the compressed files should be easy to swap in place of the originals in existing projects. Applications using Adobe Camera Raw are exceptions since ACR modifies metadata in the input files on import and records settings in there.</p>
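<p>A quick way to verify this yourself is to diff the descriptive tags of an original frame against its compressed counterpart. Here is a rough Python sketch using the third-party <code>tifffile</code> module (the file names are hypothetical; encoding-related tags are excluded since they legitimately change with compression):</p>
<pre><code># Diff DNG metadata between an original frame and its losslessly
# compressed counterpart. Requires: pip install tifffile
import tifffile

# Encoding-related tags are expected to differ after compression.
ENCODING_TAGS = {"Compression", "StripOffsets", "StripByteCounts",
                 "TileOffsets", "TileByteCounts"}

def dng_tags(path):
    with tifffile.TiffFile(path) as tif:
        return {t.name: t.value for t in tif.pages[0].tags.values()
                if t.name not in ENCODING_TAGS}

orig = dng_tags("frame_000001.dng")      # hypothetical original frame
comp = dng_tags("frame_000001_c.dng")    # hypothetical compressed frame

for name in sorted(set(orig) | set(comp)):
    if orig.get(name) != comp.get(name):
        print("differs:", name, orig.get(name), comp.get(name))
</code></pre>
<p>If nothing except the encoding tags shows up as different, the metadata round-tripped intact.</p>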
<p>Since support for CinemaDNG in Premiere Pro CC and SpeedGrade CC is incomplete, slimRAW provides a user selectable option to output losslessly compressed CinemaDNG files compatible with these two applications. The trade-off is a small sacrifice of compression. As a curious bonus feature, slimRAW can actually convert some uncompressed DNG video which is unsupported in Premiere CC to losslessly compressed DNG video which works with Premiere CC. The most notable example is DNG video converted from Canon Magic Lantern raw with ML&#8217;s raw2dng: Premiere CC will reject the uncompressed DNG video, but the slimRAW losslessly compressed sequences will import fine. </p>
<p>To recap, a few nice points about slimRAW:</p>
<ul>
<li>Speed.</li>
<li>Optimal compression ratios.</li>
<li>CinemaDNG metadata preservation.</li>
<li>Output compatible with a bunch of video production software.</li>
<li>Can offload-with-compression.</li>
<li>Checksum generation to help track data integrity through post (see the sketch after this list).</li>
<li>Versions for both Windows and OS X.</li>
</ul>
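<p>On the checksum point: manifest format details aside, the underlying idea is plain per-file hashing. Here is a generic illustration in Python (MD5 shown because it ships with the standard library; the folder name is hypothetical). To be clear, this is an illustration of the concept, not slimRAW&#8217;s actual format:</p>
<pre><code># Build an MD5 manifest for a folder of DNG frames. Verify later with
# "md5sum -c A001_Reel1.md5" or by re-running and diffing the output.
import hashlib, pathlib

def md5_of(path, chunk=1024 * 1024):
    h = hashlib.md5()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

folder = pathlib.Path("A001_Reel1")            # hypothetical clip folder
with open("A001_Reel1.md5", "w") as manifest:
    for dng in sorted(folder.glob("*.dng")):
        manifest.write(md5_of(dng) + "  " + dng.name + "\n")
</code></pre>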
<p>Read more on <a href="http://www.slimraw.com" title="slimRAW: lossless compression for CinemaDNG raw video">slimRAW&#8217;s website</a>.</p>
<p><em>Thanks to <a href="http://schoolpost.ca/" target="_blank">Csaba Nagy</a> and <a href="https://frankglencairn.wordpress.com/" target="_blank">Frank Glencairn</a> for testing and discussion.</em></p>
<p><a href="https://www.shutterangle.com/2015/slimraw-cinemadng-raw-video-lossless-compressor/">Introducing slimRAW: A Fast CinemaDNG Raw Video Compressor</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2015/slimraw-cinemadng-raw-video-lossless-compressor/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Shooting 4K Video for 2K Delivery: the Bit Depth Advantage</title>
		<link>https://www.shutterangle.com/2014/shooting-4k-video-for-2k-delivery-bitdepth-advantage/</link>
		<comments>https://www.shutterangle.com/2014/shooting-4k-video-for-2k-delivery-bitdepth-advantage/#comments</comments>
		<pubDate>Tue, 03 Jun 2014 22:04:39 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[4K for 2K]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=1931</guid>
		<description><![CDATA[<p>The first cheapish 4K cameras are here. Panasonic GH4 is out and Sony A7S is on the horizon. But nobody needs 4K, right? There is still a lot to be had from shooting in 4K though. Shooting 4K for 2K/1080p delivery has a couple of advantages. First, you generally get sharper images after the  [...]</p><p><a href="https://www.shutterangle.com/2014/shooting-4k-video-for-2k-delivery-bitdepth-advantage/">Shooting 4K Video for 2K Delivery: the Bit Depth Advantage</a></p>]]></description>
<content:encoded><![CDATA[<p>The first cheapish 4K cameras are here. Panasonic GH4 is out and Sony A7S is on the horizon. But nobody needs 4K, right? There is still a lot to be had from shooting in 4K though. Shooting 4K for 2K/1080p delivery has a couple of advantages. First, you generally get <strong>sharper images</strong> after the downscale from 4K to 2K in post compared to shooting 2K in-camera. Second, there is <strong>increased color precision</strong> in the downscaled image. You get 10-bit 2K video from 8-bit 4K video, or 12-bit 2K video from 10-bit 4K video. This advantage is sometimes misunderstood; some people go as far as saying there isn&#8217;t any (spoiler: they are wrong). So here is an explanation with examples.<span id="more-1931"></span><br />
</p>
<h6><strong>4K video for 2K delivery: the theory</strong></h6>
<p>Image and video processing software works in high bit depth in order to minimize precision errors introduced in the processing stages. A working precision of 16 bits or 32 bits is common. When a lower precision image is imported, the values are scaled up to fit in the higher working precision. For example, if we scale an 8-bit image to a 10-bit working precision all values will be multiplied by 4. That is, 1 becomes 4; 2 becomes 8; 3 becomes 12; 100 becomes 400; etc. Note the gaps between successive values. We really only have 8 meaningful bits, even though the working precision is 10 bits.</p>
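<p>The gap effect is trivial to see in code (plain Python, nothing more than the multiplication described above):</p>
<pre><code># Promote 8-bit values to a 10-bit working space: multiply by 4.
eight_bit = [1, 2, 3, 100, 255]
ten_bit = [v * 4 for v in eight_bit]
print(ten_bit)   # [4, 8, 12, 400, 1020]
# Only every 4th code value is used: there are still just 256 distinct
# levels in a 1024-level space, so the extra 2 bits carry no information yet.
</code></pre>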
<p><em>This higher working bit depth is fundamental in getting increased precision when downscaling video. The process won&#8217;t work if the working bit depth is the same as the source video bit depth.</em></p>
<p>To get the point across let&#8217;s start with a simple case: a grayscale image. Suppose we have a camera that records both 4K and 2K grayscale images in 8 bits. Also, in order to keep numbers small enough, suppose we are importing into processing software working with 10-bit precision.</p>
<p>The following diagram shows what happens with both images after import. Image processors may use a whole bunch of sophisticated downscaling methods, but for simplicity let&#8217;s assume an algorithm that takes four neighboring pixels and outputs a single averaged pixel in their place.</p>
<div id="attachment_id=1939" class="wp-caption aligncenter" style="width: 510px"><a href="http://www.shutterangle.com/wp-content/uploads/2014/06/4K-to-2K.png"><img class="size-full wp-image-1939" src="http://www.shutterangle.com/wp-content/uploads/2014/06/4K-to-2K.png" alt="4K to 2K gaining bit depth precision" width="500" /></a><p class="wp-caption-text">Importing a 4K 8-bit image into 10-bit working precision and downscaling to 2K effectively utilizes the whole 10-bit coding space. A 4x4 pixel block from the 4K image (above) compared to the corresponding 2x2 pixel block from the same scene shot in-camera in 2K (below).</p></div>
<p>It should now be apparent where the increased tonal precision comes from. After import and conversion to 10-bit, the 2K source image will only have values that are multiples of 4 (that is, still only 8 meaningful bits). On the other hand, the 4K-to-2K image will utilize the whole 10-bit coding space.</p>
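<p>A quick numeric check of the diagram, assuming the simple 4-pixel averaging described above:</p>
<pre><code># Four neighboring 8-bit pixels, promoted to 10-bit (x4), then averaged.
block_8bit = [77, 78, 78, 78]            # similar neighbors, as in real images
promoted = [v * 4 for v in block_8bit]   # [308, 312, 312, 312]
downscaled = sum(promoted) // 4
print(downscaled)                        # 311: not a multiple of 4
# An in-camera 2K pixel of the same patch could only be 77 or 78
# (308 or 312 after promotion); the downscale recovers an in-between level.
</code></pre>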
<p>In color images there are 3 channels but the principle remains the same. The discussion above applies per channel. Things get more interesting when chroma subsampling comes into play. I won&#8217;t be going into details on what chroma subsampling is. Here is the <a href="http://en.wikipedia.org/wiki/Chroma_subsampling" title="Chroma Subsampling at Wikipedia" target="_blank">wikipedia article</a>, just in case.</p>
<div id="attachment_1947" class="wp-caption aligncenter" style="width: 510px"><a href="http://www.shutterangle.com/2014/shooting-4k-video-for-2k-delivery-bitdepth-advantage/chroma_subsampling/" rel="attachment wp-att-1947"><img src="http://www.shutterangle.com/wp-content/uploads/2014/06/chroma_subsampling.png" alt="Chroma subsampling 4:2:0 4:4:4 4:2:2" width="500" class="size-full wp-image-1947" /></a><p class="wp-caption-text">Chroma subsampling ratios 4:2:0 and 4:2:2 are often used in video. 4:4:4 means no subsampling. (image from <a href='http://upload.wikimedia.org/wikipedia/commons/f/f2/Common_chroma_subsampling_ratios.svg' title='Chroma subsampling' target='_blank'>Wikimedia</a>)</p></div>
<p>The image above makes it clear that a block of 2&#215;2 neighbor pixels contains 1 true chroma sample per chroma channel in 4:2:0 subsampled video and 2 true chroma samples per chroma channel in 4:2:2 subsampled video. This means that <em>downscaling 4:2:0 8-bit video from 4K to 2K will result in 4:4:4 video (no chroma subsampling) with 10-bit luma and 8-bit chroma channels</em>, because downscaled luma averages 4 samples while downscaled chroma only uses 1 sample. And <em>downscaling 4:2:2 8-bit video from 4K to 2K will result in 4:4:4 video (no chroma subsampling) with 10-bit luma and 9-bit chroma channels</em>: downscaled chroma averages two samples in this case.</p>
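<p>All of these bit counts come from the same piece of arithmetic: averaging N equally weighted samples adds log<sub>2</sub>(N) bits of precision in the idealized case. A small tabulation:</p>
<pre><code>import math

def effective_bits(source_bits, samples_averaged):
    # Averaging N independent samples adds log2(N) bits (idealized case).
    return source_bits + math.log2(samples_averaged)

# 4K to 2K downscale: luma always averages a full 2x2 block = 4 samples.
print(effective_bits(8, 4))    # 10.0  luma from an 8-bit source
print(effective_bits(8, 1))    # 8.0   chroma from 4:2:0 (1 true sample per 2x2)
print(effective_bits(8, 2))    # 9.0   chroma from 4:2:2 (2 true samples per 2x2)
print(effective_bits(10, 4))   # 12.0  luma from a 10-bit source
</code></pre>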
<p>There are a couple of important points to consider here:</p>
<ul>
<li>
Pretty much all sensors used in video are Bayer sensors. Each raw pixel is only sensitive to one of the colors red, green and blue. So a 2&#215;2 block of pixels has two green, one blue and one red pixel. This means that for each pixel of <em>4K debayered video coming from 4K sensors</em> two of the three color channels are interpolated from the pixels around it. Technically this puts a limit on the chroma precision gain from downscaling (this is less of a concern for chroma subsampled video though).</li>
<li>
<em>Compression</em> should also be taken into account. Everything above describes an ideal case with no lossy compression whatsoever. Lossy compression tends to discard high frequency detail and essentially smooths the values of neighboring pixels. This lessens the precision gains on downscale, but does not cancel them, as can be seen in the examples below. In any case, the heavier the compression, the smaller the actual precision gain from downscaling.
</li>
</ul>
<p></p>
<h6><strong>4K video for 2K delivery: a synthetic example</strong></h6>
<p>All this is theory. To check it in real world conditions I did a synthetic test. I used some FullHD video and downscaled it to 960&#215;540. The simulation below was created following these steps in DaVinci Resolve 10 (Resolve works in 32 bits internally):<br />
1) Full color (4:4:4) high bit depth (14-bit) uncompressed 1080p video imported in DaVinci Resolve. Source footage coming from well exposed Canon Magic Lantern raw images.<br />
2) Fullsize video exported with the 175 Mbps 8-bit 4:2:2 DNxHD intraframe codec (23.976 fps). This is our stand-in for 4K 8-bit 4:2:2 video.<br />
3) Halfsize video exported with the 175 Mbps 8-bit 4:2:2 DNxHD intraframe codec (as a 960&#215;540 centered image with big black borders in 1080p video). This is our stand-in for 2K 8-bit 4:2:2 video.<br />
4) Both fullsize and halfsize clips imported back in Resolve on a 960&#215;540 timeline. The fullsize video was downscaled to timeline resolution with a Smoother scaling filter, thus creating our &#8220;4K-to-2K&#8221; 10-bit footage. The halfsize video was imported without scaling (1:1 center crop from the 1080p frame).<br />
5) Equal gamma lift was applied to both clips to raise the brightness a bit and stress the image, then a bit of contrast was added for better legibility of the damage.<br />
6) The same frame exported (uncompressed) from both clips.</p>
<p>Here are a couple of crops for comparison. Blacks are the most sensitive part of the range due to their lack of tonal precision, so lifting stresses them most. That&#8217;s also the range which gains the most precision in practice from the downscaling shenanigans.</p>
<div id="attachment_1990" class="wp-caption aligncenter" style="width: 520px"><a href="http://www.shutterangle.com/wp-content/uploads/2014/06/4k-to-2k-vs-2k.png"><img src="http://www.shutterangle.com/wp-content/uploads/2014/06/4k-to-2k-vs-2k.png" alt="2K video vs 4K for 2K video" width="510" height="520" class="size-full wp-image-1990" /></a><p class="wp-caption-text">The downscaled for increased precision images are on the right. Images blown to 200% for better legibility. 8-bit PNG file, so no further image compression at play here.</p></div>
<p>These demonstrate the superiority of the &#8220;4K-to-2K&#8221; video. It handles the processing significantly better. Also note the difference in sharpness. This example uses DNxHD at 175 Mbps, 8-bit 4:2:2. Other codecs usually used for compressing HDMI feeds may or may not handle the source signal more gracefully.</p>
<p>So how does all this apply to real world cameras?<br />
On the Panasonic GH4 you can record 8-bit 4:2:0 4K video internally (heavily compressed at 100 Mbps) and 10-bit 4:2:2 4K video externally. The first will give you 4:4:4 2K video with 10-bit luma and 8-bit chroma. The second will give you 4:4:4 2K video with 12-bit luma and 11-bit chroma. On the Sony A7S you can record 8-bit 4:2:2 4K video externally. This will give you 4:4:4 2K video with 10-bit luma and 9-bit chroma. In both cases you need a 4K external recorder like the Atomos Shogun or Convergent Design Odyssey 7Q.</p>
<p><a href="https://www.shutterangle.com/2014/shooting-4k-video-for-2k-delivery-bitdepth-advantage/">Shooting 4K Video for 2K Delivery: the Bit Depth Advantage</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2014/shooting-4k-video-for-2k-delivery-bitdepth-advantage/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>Lossless Compression for DNG Raw Video</title>
		<link>https://www.shutterangle.com/2014/lossless-compression-for-dng-raw-video-dngstrip/</link>
		<comments>https://www.shutterangle.com/2014/lossless-compression-for-dng-raw-video-dngstrip/#comments</comments>
		<pubDate>Thu, 27 Mar 2014 21:26:35 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[DNGStrip]]></category>
		<category><![CDATA[raw video]]></category>
		<category><![CDATA[video compression]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=1767</guid>
		<description><![CDATA[<p>There are a lot of good things about raw video but data size is not amongst them. Uncompressed raw video can quickly add up and fill storage. One way of handling the size problem in post-production is to do some initial adjustments on the raw video, then bake to a lossily compressed format like  [...]</p><p><a href="https://www.shutterangle.com/2014/lossless-compression-for-dng-raw-video-dngstrip/">Lossless Compression for DNG Raw Video</a></p>]]></description>
			<content:encoded><![CDATA[<p>There are a lot of good things about raw video but data size is not amongst them. Uncompressed raw video can quickly add up and fill storage. One way of handling the size problem in post-production is to do some initial adjustments on the raw video, then bake to a lossily compressed format like ProRes, DNxHD or Cineform and only keep the compressed video for further post and/or archiving of the source material. The downside of this approach is that a bit of image integrity and re-processing options are ultimately sacrificed in lossy compression, which is not exactly the goal of optimal archiving and image quality.</p>
<p>This article describes an alternative simple workflow for <em>lossless</em> compression of DNG/CinemaDNG video footage, and also introduces DNGStrip &#8211; a utility that&#8217;s used to optimize file size. Lossless DNG compression allows a reduction in video file size while preserving all the benefits of a raw post workflow. <span id="more-1767"></span></p>
<p><strong>Update April 2015: I now have a much better <a href="http://www.shutterangle.com/2015/slimraw-cinemadng-raw-video-lossless-compressor/" title="slimRAW: a fast CinemaDNG compressor">solution for CinemaDNG/DNG lossless compression</a>, which solves all the issues mentioned here.</strong></p>
<p>Currently there are quite a lot of cameras that shoot uncompressed raw DNG/CinemaDNG video, or processes which end up with DNG video sequences: Blackmagic Design Cinema Camera, Digital Bolex, Ikonoskop A-Cam DII, Kinefinity first generation cameras, Canon Magic Lantern raw video, Sony FS700 + Convergent Design Odyssey 7Q, etc. All of these would benefit from some kind of (lossless) compression.</p>
<p>Note that by <em>lossless</em> I don&#8217;t mean visually lossless. Lossless means <em>truly lossless</em>: pixels are exactly the same as in the uncompressed file. The key here is that the DNG standard provides for losslessly compressed DNG frames. In fact, that is exactly the compression format used in compressed DNG video produced by the Blackmagic Design Pocket Cinema Camera. Both Adobe Camera Raw and Blackmagic Design DaVinci Resolve 10+ support losslessly compressed DNG images. </p>
<p>So you start with uncompressed raw DNG/CinemaDNG sequences and end up with losslessly compressed raw DNG sequences. There are a couple of applications involved in the process. Both are free to use and available for Windows and Mac OS X.<br />
</p>
<h6><strong>Step 1: Adobe DNG Converter</strong></h6>
<p>Since Adobe spec&#8217;d the DNG standard, it isn&#8217;t surprising they provide an application that can compress DNG images. Adobe DNG Converter is straightforward to use for processing DNG video sequences. You specify a source folder containing the DNG footage and a destination folder for output; make sure that on the Preferences screen &#8220;JPEG Preview&#8221; is set to None and &#8220;Use Lossy Compression&#8221; is unchecked; then press Convert. That&#8217;s it. DNG Converter will recursively process any subfolders and compress sequences.</p>
<div id="attachment_1780" class="wp-caption aligncenter" style="width: 510px"><a href="http://www.shutterangle.com/wp-content/uploads/2014/03/DNGConverter.png"><img src="http://www.shutterangle.com/wp-content/uploads/2014/03/DNGConverter.png" alt="Adobe DNG Converter Preferences" width="500" class="size-full wp-image-1780" /></a><p class="wp-caption-text">Adobe DNG Converter Preferences for losslessly compressing DNG video. Everything is either left unchecked or set to None.</p></div>
<p>You can stop at this stage and just use the resulting DNG sequences. That&#8217;s what many have been doing already. The compressed video works beautifully in DaVinci Resolve 10 and Adobe Camera Raw. And there is no speed penalty in Resolve for using compressed DNG video, or at least I can&#8217;t see any. If anything, compressed DNG needs less filesystem bandwidth and loads faster. The compressed footage is usually between 50% and 75% of the original size, depending on the nature of the source material. For example, the source footage for a short shot on Canon Magic Lantern raw was compressed from 710GB to 399GB, or about 56.2% of the original size. A sample Blackmagic Design Cinema Camera sequence was compressed to 75% of the original size. A sample Digital Bolex sequence was compressed to 51.5% of the original size. Note that these numbers are not representative of how footage from specific camera models will compress. Compression depends on content. More detailed images compress less; for example, shallow DOF images compress more and fine detail landscapes compress less.</p>
<p>There are both Windows and Mac OS X versions of DNG Converter. Current version is 8.3. Here are the links:<br />
<a href="http://www.adobe.com/support/downloads/detail.jsp?ftpID=5694" title="Adobe DNG Converter 8.3 for Windows" target="_blank">Adobe DNG Converter 8.3 for Windows</a><br />
<a href="http://www.adobe.com/support/downloads/detail.jsp?ftpID=5695" title="Adobe DNG Converter 8.3 for Mac" target="_blank">Adobe DNG Converter 8.3 for Mac</a></p>
<p>Now that the good stuff is out of the way, here is the bad news. <strong>There are three problems with using Adobe DNG Converter for video compression</strong>:</p>
<ol>
<li>It is rather slow. The aforementioned Canon raw footage &#8211; about 2:42h of 23.976 fps material in 1920&#215;960 resolution &#8211; took 4 to 5 times its real time duration (or more than 11 hours) to compress on a decent modern Intel i7 powered PC. Make sure DNG Converter is running in the foreground; running it in the background (without its UI window showing) will slow it further.</li>
<li>Be aware that DNG Converter will strip any CinemaDNG specific tags if they are present in the source DNG. These may include time code, frame rate, reel name, T-stop and similar metadata. That&#8217;s because DNG Converter is really meant to process raw stills and we are using it for video. This is not just a theoretical possibility; some cameras do use these tags. For example, the Blackmagic Design Cinema Camera uses the <em>TimeCodes</em> and <em>FrameRate</em> tags. If these are important to you, <strong>don&#8217;t use Adobe DNG Converter</strong>.</li>
<li>As DNG Converter is stills oriented, it is not entirely concerned with size and will not produce the smallest file possible. It includes a small-resolution (but uncompressed) thumbnail image in each and every DNG frame (around 100KB per frame, depending on image aspect ratio) and it also adds some unnecessary tags to every file (for example, a 5.5KB XMP tag of entirely useless metadata). This may add up to some substantial disk space when a lot of frames are involved.</li>
</ol>
<p></p>
<h6><strong>Step 2: DNGStrip</strong></h6>
<p>I have written DNGStrip to address issue number 3 and squeeze the maximum out of DNG Converter&#8217;s compression. DNGStrip is a free command line tool which processes DNG sequences produced by DNG Converter and strips them of thumbnails and unnecessary tags added by Adobe. It will also remove any JPEG previews included in the compressed DNG files in case you accidentally enable them when converting with DNG Converter. DNGStrip doesn&#8217;t change the actual image data in any way. DNGStrip is also pretty fast. Its speed will likely be limited by the speed of the storage drives used. It also keeps the files standard and therefore compatible with both Adobe Camera Raw and Resolve.</p>
<div id="attachment_1790" class="wp-caption aligncenter" style="width: 510px"><a href="http://www.shutterangle.com/wp-content/uploads/2014/03/DNGStrip.png"><img src="http://www.shutterangle.com/wp-content/uploads/2014/03/DNGStrip.png" alt="DNGStrip Usage" width="500" class="size-full wp-image-1790" /></a><p class="wp-caption-text">A screenshot of DNGStrip usage info as displayed in the console.</p></div>
<p>You can easily predict the gains of running DNGStrip over your DNG Converter compressed footage. DNGStrip will generally reduce frames by the same number of bytes, provided they have <em>the same aspect ratio</em>. For example, Blackmagic Cinema Camera raw frames, Digital Bolex raw frames and Canon Magic Lantern 16:9 raw frames will all be around 114KB smaller (116 638 bytes, to be precise), and Canon Magic Lantern 2:1 frames will be around 102KB smaller after being processed by DNGStrip. So if your footage is homogeneous the gains after running DNGStrip are proportional to the number of frames processed. The aforementioned Canon Magic Lantern 2:1 footage (a total of 233 516 frames) was reduced by DNGStrip from 399GB to 377GB, which is 53.1% of the original 710GB size. Quite a serious reduction, considering there is no loss of quality whatsoever compared to the original footage.</p>
<p>To put this in perspective: 377GB for 233 516 frames of 23.976 fps video means an average bitrate of approx. 317 Mbps (in 1920&#215;960). For Full HD (1920&#215;1080) that would be equivalent to about 357 Mbps. In comparison, 10-bit DNxHD 4:4:4 23.976 fps in 1920&#215;1080 has a bitrate of 350 Mbps. In other words, I am getting the full possible quality (14-bit raw, in the case of Canon ML raw) at a bitrate comparable to a 10-bit intermediate lossily compressed codec. This is a real world example, but remember that gains will vary depending on the nature of the source material. The stripped raw footage plays in real-time in Resolve as long as your GPU can handle it (an oldish GTX 570 is good enough). This isn&#8217;t related to decompression, which is done on the CPU, but rather to debayering and image processing. No need for fancy storage setups either, at least with 2K video. A clean or sufficiently defragmented HDD will do fine. I use a cheap WD Blue without issues. For 4K and above you will still likely need RAID or SSD though.</p>
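<p>If you want to check these numbers yourself, the arithmetic is simple enough to script. (Whether the average comes out closer to 310 or 330 Mbps depends on whether you read GB as 10<sup>9</sup> or 2<sup>30</sup> bytes; the approx. 317 Mbps figure above sits between the two.)</p>
<pre><code># Back-of-the-envelope numbers for the Canon ML 2:1 footage discussed above.
frames = 233_516
fps = 23.976
duration_s = frames / fps                  # ~9,740 s, i.e. about 2:42 h

# Expected DNGStrip savings at ~102 KB stripped per frame:
per_frame_bytes = 102 * 1024
print(frames * per_frame_bytes / 1e9)      # ~24.4 GB decimal (~22.7 GiB)

# Average bitrate of the final 377 "GB" of footage:
for label, gb in (("decimal GB", 1e9), ("GiB", 2**30)):
    bits = 377 * gb * 8
    print(label, round(bits / duration_s / 1e6), "Mbps")
# decimal GB: ~310 Mbps; GiB: ~332 Mbps. Either way comparable to
# 10-bit DNxHD 4:4:4 at 350 Mbps once scaled to 1920x1080.
</code></pre>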
<p>You can download DNGStrip here. Both the Windows and Mac versions are included in the .zip file.<br />
<a href="http://www.shutterangle.com/downloads/DNGStrip.zip" title="Download DNGStrip">DNGStrip for Windows and Mac</a></p>
<p>On Windows you may need to install the Microsoft Visual C++ 2013 redistributable packages if they aren&#8217;t already installed on your system. Do this if you get an error about MSVCP120.dll or MSVCR120.dll &#8220;missing on your computer&#8221;. You can get the redistributables <a href="http://www.microsoft.com/en-us/download/confirmation.aspx?id=40784" title="Microsoft Visual C++ 2013 redistributables" target="_blank">here</a>.</p>
<p>Some quick notes about DNGStrip:</p>
<ul>
<li>
DNGStrip is really only meant to be used with DNG Converter&#8217;s output. While it won&#8217;t do stupid things to differently sourced DNGs, running it over them is generally pointless.</li>
<li>
DNGStrip is straightforward to use. In most cases all you&#8217;ll want to type is (make sure the current directory is the same as the one you&#8217;ve extracted DNGStrip to):<br />
In Windows:<br />
<code>dngstrip -r SourceFolderPath DestinationFolderPath</code><br />
And in Mac OS X:<br />
<code>./dngstrip -r SourceFolderPath DestinationFolderPath</code></li>
<li>
Be careful when overwriting original files with the -ovr option. While I don&#8217;t expect any issues and the application itself will never overwrite files it doesn&#8217;t understand, it is better to be safe than sorry. Test with some bits of footage first. Also, note that overwriting is really meant to be used with SSD drives or when you plan to subsequently move the footage to another drive. Using -ovr with hard drives with the idea of leaving the files there long term increases the fragmentation of the free disk space.</li>
<li>
For maximum speed it is best to use different physical drives for input and output, unless you are using SSDs. This also applies when running DNG Converter.</li>
</ul>
<p>More details about DNGStrip can be found in the ReadMe file included in the zip. Make sure to read that before use.</p>
<p><em>Credit goes to Zolac at <a title="BMCUser" href="http://www.bmcuser.com/" target="_blank">bmcuser.com</a> for pointing out the thumbnails in DNG Converter&#8217;s compressed output, and for discussion and testing DNGStrip.</em></p>
<p><a href="https://www.shutterangle.com/2014/lossless-compression-for-dng-raw-video-dngstrip/">Lossless Compression for DNG Raw Video</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2014/lossless-compression-for-dng-raw-video-dngstrip/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Creating Depth, Part 3: Light, Color and More on Deep Staging</title>
		<link>https://www.shutterangle.com/2013/creating-depth-light-color-deep-staging/</link>
		<comments>https://www.shutterangle.com/2013/creating-depth-light-color-deep-staging/#comments</comments>
		<pubDate>Wed, 20 Mar 2013 11:33:35 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[color]]></category>
		<category><![CDATA[composition]]></category>
		<category><![CDATA[depth]]></category>
		<category><![CDATA[lighting]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=1567</guid>
<description><![CDATA[<p>The third part of the Creating Depth series is mostly about light and color, and how they impact depth perception. There is also a section on camera and subject placement and how to maximize available space as a means to increase image depth.

Deep staging for cinematographers
Directing action so  [...]</p><p><a href="https://www.shutterangle.com/2013/creating-depth-light-color-deep-staging/">Creating Depth, Part 3: Light, Color and More on Deep Staging</a></p>]]></description>
<content:encoded><![CDATA[<p>The third part of the Creating Depth series is mostly about light and color, and how they impact depth perception. There is also a section on camera and subject placement and how to maximize available space as a means to increase image depth.<span id="more-1567"></span><br />
<br/></p>
<h6><strong>Deep staging for cinematographers</strong></h6>
<div id="attachment_1669" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/03/CoolHandLuke.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/03/CoolHandLuke.jpg" alt="Conrad Hall" title="Cool Hand Luke (1967)" width="262" class="size-full wp-image-1669" /></a><p class="wp-caption-text">Large room = deep interior shot.</p></div>
<p>Directing action so that it creates movement on the depth axis mostly falls in the domain of the director. But the cinematographer also has some staging tricks to promote depth. It is probably obvious, but let&#8217;s state it explicitly: one main reason small movies <em>look small</em> is that they are shot in small spaces. It is really that simple. Shoot in a small room, and you&#8217;ve got a cramped and confined scene. Shoot in a hangar, and suddenly there is space and air to the shot. A larger space will naturally create a deeper frame. It is no coincidence that a typical big production stage is spacious and the sets are often larger than life. Shooting in a large space is really the simplest way to have depth in the image.</p>
<p>Unfortunately, the large set is often a luxury in the low budget world. But there are tricks that help photograph a deeper space. All of the following are based on a single underlying principle: select a viewpoint so that a (seemingly) great range of depth is put on display.</p>
<ul>
<li>
<em>Pull the subject away from walls.</em> This is a simple and effective technique to increase the depth of the scene, even more so when shooting squarely against the wall. By pulling the subject (and the camera, to preserve subject size) you are effectively pushing the background farther back. This also opens the opportunity to place well-differentiated props between the subject and the background walls: this heightens the sense of depth by creating more depth planes. And this trick has bonus benefits: less distracting shadows on the walls; free space for back lights, if needed; and space for running cables, positioning stands, etc. </p>
<div id="attachment_1642" class="wp-caption aligncenter" style="width: 510px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/03/BadDayatBlackRock.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/03/BadDayatBlackRock.jpg" alt="Bad Day at Black Rock John Sturges" title="Bad Day at Black Rock (1955)" width="500" class="size-full wp-image-1642" /></a><p class="wp-caption-text">Movie furniture likes to be arranged away from walls. (Throughout the illustrations note how they often employ more than one of the described techniques.)</p></div>
</li>
<li>
<em>Shoot against a corner.</em> For any subject positioned in a space with a rectangular floor plan the longest distance between the subject and a point on the walls is towards a corner. So take advantage of this and include a corner of the room in the frame. This gives the deepest frame possible. Having the corner in the frame will also create natural diagonals, especially with close-to-subject viewpoints and wider lenses, resulting in a more dynamic composition.</p>
<div id="attachment_1652" class="wp-caption aligncenter" style="width: 510px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/03/Se7enCorner.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/03/Se7enCorner.jpg" alt="seven fincher khondji" title="Se7en (1995)" width="500" class="size-full wp-image-1652" /></a><p class="wp-caption-text">Here having the corner and the side wall in the frame allows wall texture to emphasize linear perspective.</p></div>
</li>
<li>
<div id="attachment_1735" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/03/ASeriousManOTS.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/03/ASeriousManOTS.jpg" alt="Deakins" title="A Serious Man (2009)" width="262" class="size-full wp-image-1735" /></a><p class="wp-caption-text">Over-the-shoulder framing creates a natural foreground plane.</p></div>
<p><em>Have the subject deeper in the frame, behind a well defined foreground.</em> The foreground may frame the subject, or simply serve to enhance the sense of depth in the image. One special case of this principle is the over-the-shoulder shot which is a classic coverage device, and can be seen in pretty much all movies: some defocusing of the foreground character (with their back to the audience) creates an unobtrusive foreground plane. Another special case: have foreground elements or props create leading lines into the frame. Leading lines are a compositional device for guiding the eye and increasing subject importance, but they can also assist depth perception when oriented along the depth axis. </p>
<div id="attachment_1733" class="wp-caption aligncenter" style="width: 510px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/03/aseriousmandeep.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/03/aseriousmandeep.jpg" alt="Deakins Depth" title="A Serious Man (2009)" width="500" class="size-full wp-image-1733" /></a><p class="wp-caption-text"><strong><em>Top:</em></strong> The subject is framed by scene elements in the foreground. <strong><em>Bottom:</em></strong> Table edges lead inside the image. Sidewall bookshelves strengthen perspective even more (compare to the shot from <em>Se7en</em>).</p></div>
</li>
<li>
<em>Show the space outside the room in the frame.</em> In its easiest incarnation that means including open interior doors in the frame. This adds the space of the adjacent room to the scene and effectively deepens the shot. It also creates a natural exit point for the eye in the frame. A bit harder (but possibly more rewarding) is including the exterior in an interior image. Either through windows, or through open exterior doors. The difficulty lies in matching light levels: this may require raising interior light levels or gelling windows with ND sheets to prevent blowing the exterior to white. There is also the danger of strongly unbalancing the composition when letting overly busy exteriors in.</p>
<div id="attachment_1599" class="wp-caption aligncenter" style="width: 510px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/03/NoCountryForOldMenDoors.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/03/NoCountryForOldMenDoors.jpg" alt="No Country For Old Men" title="No Country for Old Men (2007)" width="500" class="size-full wp-image-1599" /></a><p class="wp-caption-text">Doors and windows can effectively extend image depth.</p></div>
</li>
</ul>
<p><br/></p>
<h6><strong>Light and depth</strong></h6>
<p>Light is the main weapon of the cinematographer. It has a multitude of functions: creating illumination for proper exposure, manipulating mood, modeling character and texture, conveying time of day and time of the year, focusing attention, tweaking composition, etc. But here we are interested in its power to differentiate planes. Lighting separate areas of the frame in distinct tones is a major tool for space differentiation. And space differentiation is a prerequisite for perceiving distinct depth planes. It is a basic principle: if two areas look different, they are easily accepted as separate. This is somewhat related to light as a modeling tool: lighting different planes of an object to different tones separates the object&#8217;s features and reveals form. </p>
<p>The most popular way to achieve this is alternating light and dark planes in the frame along the depth axis. The juxtaposition of shadow and light is a very effective approach for space structuring because areas are cleanly separated, and this helps the mind to map the spatial relationships within the scene.</p>
<div id="attachment_1588" class="wp-caption aligncenter" style="width: 510px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/03/TheManThatWasntTherePlanes.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/03/TheManThatWasntTherePlanes.jpg" alt="The Man Who Wasn&#039;t There" title="The Man Who Wasn&#039;t There (2001)" width="500" class="size-full wp-image-1588" /></a><p class="wp-caption-text">Space definition through selective lighting and light and shadow alternation along the depth axis. There are five light and dark layers in the second shot. Black and white images illustrate luminance distribution pretty well due to the lack of chromatic influence.</p></div>
<p>Seemingly chaotic splashes of light in a dark environment can be just as effective and compositionally interesting. Light and shadow can be loosely interspersed. A particular depth plane can have both light and shadow. What matters is alternating them along the depth axis, not necessarily keeping a similar light level across each plane.</p>
<div id="attachment_1758" class="wp-caption aligncenter" style="width: 620px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/03/truegritsplashes.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/03/truegritsplashes.jpg" alt="Deakins light" title="True Grit (2010)" width="610" class="size-full wp-image-1758" /></a><p class="wp-caption-text">Streaks of light break up the uniform darkness and introduce receding depth planes.</p></div>
<div id="attachment_1625" class="wp-caption aligncenter" style="width: 620px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/03/HeWalkedByNightMedley.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/03/HeWalkedByNightMedley.jpg" alt="He Walked By Night John Alton" title="He Walked by Night (1948)" width="610" class="size-full wp-image-1625" /></a><p class="wp-caption-text">Hard light is easier to use for structuring space. John Alton's noir work is exemplary in this regard. Note how scene geometry and action are defined without sacrificing mystery.</p></div>
<p>This also applies to subject separation. As described previously, delineating subjects is beneficial for depth perception because it helps the brain to judge interposition and occlusion. There are a couple of ways to aid delineation through lighting. First, through contrasting subject tones with background and foreground (where applicable) tones. This is essentially the above method of contrasting light: subject and background are positioned and lit so that bright elements of the figure fall against a darker background, and dark parts fall against a brighter background. This effectively models subject outlines.</p>
<div id="attachment_1609" class="wp-caption aligncenter" style="width: 510px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/03/butchseparation.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/03/butchseparation.jpg" alt="Butch Cassidy and the Sundance Kid lighting" title="Butch Cassidy and the Sundance Kid (1969)" width="500" class="size-full wp-image-1609" /></a><p class="wp-caption-text">Side lit characters will generally exhibit light variation in a plane. The checkered lighting principle is demonstrated here on Robert Redford. The checkered arrangement can be achieved with a single crosslight and careful camera positioning.</p></div>
<div id="attachment_1612" class="wp-caption aligncenter" style="width: 510px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/03/SweetSmellOfSuccessSeparation.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/03/SweetSmellOfSuccessSeparation.jpg" alt="Sweet Smell of Success lighting" title="Sweet Smell of Success (1957)" width="500" class="size-full wp-image-1612" /></a><p class="wp-caption-text">Side light can delineate well with flat backgrounds as long as the bright side of the figure is brighter than the background tone, and the dark side is darker than the background tone.</p></div>
<p>The second method is used with dark subjects on dark backgrounds. It involves backlights or rimlights to rim the subject and separate it from the background. In general, any light aimed at the subject and positioned at an angle greater than 90 degrees with respect to the camera-subject axis will produce some sort of an outlining effect.</p>
<div id="attachment_1636" class="wp-caption aligncenter" style="width: 510px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/03/InglouriousBasterdsBacklight.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/03/InglouriousBasterdsBacklight.jpg" alt="Inglourious Basterds Backlight Richardson" title="Inglourious Basterds (2009)" width="500" class="size-full wp-image-1636" /></a><p class="wp-caption-text">Cinematographer Robert Richardson often uses backlight (especially top backlight).</p></div>
<p>Both pools of light and lines formed by practical lights themselves are prime candidates for exploiting diminishing perspective and the principle of relative size. This is a popular way to create interest and accentuate depth in hallways or any possibly bland but deep space.</p>
<div id="attachment_1658" class="wp-caption aligncenter" style="width: 510px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/03/BartonFinkHallway.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/03/BartonFinkHallway.jpg" alt="Barton Fink Coen Deakins" title="Barton Fink (1991)" width="500" class="size-full wp-image-1658" /></a><p class="wp-caption-text">Lines of practicals reinforce perspective in this hotel hallway.</p></div>
<p>Hard light is also a natural source of leading lines. It creates both shafts of light (through a bit of smoke) and striking shadows. Both light shafts and long shadows (especially from a relatively low backlight) can be used to lead the eye deeper into the frame.<br />
<br/></p>
<h6><strong>A few words on color and depth</strong></h6>
<div id="attachment_1722" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/03/TheMasterBlue.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/03/TheMasterBlue.jpg" alt="PTA" title="The Master (2012)" width="262" class="size-full wp-image-1722" /></a><p class="wp-caption-text">Cold walls nicely contrast skin tones and also cool off-the-wall spill a bit.</p></div>
<p>Color can be used in a similar way for the purpose of depth enhancement. In fact, at the dawn of color film some cinematographers thought that color alone was enough for separation and differentiation, and didn&#8217;t hesitate to light the whole scene flat. This is a bit optimistic: after all, the eye is more sensitive to brightness variations than to changes in color. Nevertheless, color plays an important role. It is no coincidence that some color grading schemes are popular (looking at you, blockbuster teal). Warm skin tones pop more against a cold background (also see color perspective in the previous article of the series). That&#8217;s one reason why blue is often a production designer&#8217;s favorite color for props and walls.</p>
<div id="attachment_1707" class="wp-caption alignleft" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/03/ShawshankCold.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/03/ShawshankCold.jpg" alt="The Shawshank Redemption Deakins" title="The Shawshank Redemption (1994)" width="262" class="size-full wp-image-1707" /></a><p class="wp-caption-text"><em>The Shawshank Redemption</em> was shot uncorrected on a tungsten balanced film stock. In this particular shot some blue has probably been pulled back in color timing. Note the overall blue tint (especially in highlights) and how skin goes a bit desaturated, but still retains some warmth.</p></div>
<p>One way to infuse a cold tint is shooting with cold light while keeping the white balance at a warmer color temperature. This is sometimes achieved by exposing tungsten balanced film stocks to daylight or near-daylight balanced light without on-lens correction or with partial lens filter correction. This will throw greys towards blue, and when done in good measure skin will be desaturated but not overcooled, especially when paired with some warmer fill. Opposing light colors create color separation, which is a good base for additional DI color work, with or without bringing the white balance back in post. The same approach is, of course, valid for digital cameras through creative white balancing.</p>
<p>You can find the previous parts of the Creating Depth series here: <a href="http://www.shutterangle.com/2012/creating-depth-dof-deep-staging-resolution/" title="Creating Depth, Part 1: Introduction, DOF, Deep Staging, Resolution">part 1</a> and <a href="http://www.shutterangle.com/2013/creating-depth-perspective/" title="Creating Depth, Part 2: Perspective">part 2</a>.</p>
<p><a href="https://www.shutterangle.com/2013/creating-depth-light-color-deep-staging/">Creating Depth, Part 3: Light, Color and More on Deep Staging</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2013/creating-depth-light-color-deep-staging/feed/</wfw:commentRss>
		<slash:comments>6</slash:comments>
		</item>
		<item>
		<title>Creating Depth, Part 2: Perspective</title>
		<link>https://www.shutterangle.com/2013/creating-depth-perspective/</link>
		<comments>https://www.shutterangle.com/2013/creating-depth-perspective/#comments</comments>
		<pubDate>Sun, 17 Feb 2013 19:50:30 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[composition]]></category>
		<category><![CDATA[depth]]></category>
		<category><![CDATA[lenses]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=1466</guid>
<description><![CDATA[<p>The second part of the Creating Depth series is about perspective. While people usually think of linear perspective when they read perspective, I will also put tonal and color perspective here. All these are concerned with perceptual properties changing with distance from the viewer, and they happen  [...]</p><p><a href="https://www.shutterangle.com/2013/creating-depth-perspective/">Creating Depth, Part 2: Perspective</a></p>]]></description>
<content:encoded><![CDATA[<p>The second part of the Creating Depth series is about perspective. While people usually think of linear perspective when they read perspective, I will also put tonal and color perspective here. All these are concerned with perceptual properties changing with distance from the viewer, and they happen to provide major depth cues in an image. This article also explores the relationship between lenses and space representation. <span id="more-1466"></span><br />
<br/></p>
<h6><strong>Linear perspective</strong></h6>
<div id="attachment_1543" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/02/pers.png"><img src="http://www.shutterangle.com/wp-content/uploads/2013/02/pers.png" alt="diminishing perspective" title="Linear perspective" width="262" class="size-full wp-image-1543" /></a><p class="wp-caption-text"><strong><em>One-point linear perspective:</em></strong> Parallel lines converging towards a single vanishing point on the horizon.</p></div>
<p><em>Linear perspective</em> (also referred to as <em>diminishing perspective</em>) is both a mathematical theory of projecting 3D spaces on a 2D plane and a related technique of depicting space in a drawing. Incidentally, this is exactly what our photo image plane does with the photographed space. </p>
<p>But here we are interested in perspective in the context of perception. That is, the way objects appear to the eye depending on their distance. All perspective cues exploited by the brain follow from the simple fact that objects appear smaller with increased distance from the eye. When the absolute size of an object is familiar, its distance can be judged depending on the size of its projection on the eye&#8217;s retina. When the absolute size of an object is unknown, but there are at least two objects of the same kind in view, the relative size of the objects suggests a notion of their relative distance. The convergence of parallel lines towards the horizon is also a good depth cue.<br />
<br/></p>
<h6><strong>Lenses and perspective</strong></h6>
<p>Any rectilinear lens creates a (linearly) perspective image of the scene (a fisheye lens renders curvilinear perspective). A common misconception has it that wide lenses strengthen perspective and long lenses weaken perspective. This is not true. <em>Perspective is entirely dependent on viewpoint</em>. Relative positions and relative sizes of objects only change if the eye moves. It is not the angle of view of the lens that manipulates perspective, rather the shift of camera viewpoint forward with a wide lens, and backward with a long lens, in order to frame a subject in a similar way. This change of camera viewpoint creates the perspective differences often wrongly associated with the angle of view of the lens.</p>
<p>With the same size of a reference object in the frame, the wide lens exaggerates foreground-background relations and space appears expanded. Conversely, the long lens seemingly compresses and flattens space. Consequently, one good general rule for consistent space representation is to avoid mixing focal lengths when preserving the size of an important object in the frame. (The rule does not apply when changing from a wide shot to a close-up, etc., because objects change their sizes in the frame.) The <em>Vertigo effect</em> (also known as a <em>dolly zoom shot</em>) demonstrates what happens when the change of focal length while preserving subject size is realized in a single continuous shot &#8211; space seemingly expands (pushing the camera in while zooming the lens out) or contracts (pulling the camera out while zooming the lens in). </p>
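<p>The geometry behind the dolly zoom is worth spelling out: under a pinhole approximation, a subject&#8217;s size on the sensor is proportional to focal length divided by subject distance, so keeping that ratio constant keeps the subject framed identically while everything at other distances shifts. A quick sketch (the reference values are made up for illustration):</p>
<pre><code># Dolly zoom: keep subject framing constant by keeping f/d fixed
# (pinhole approximation: image size ~ focal_length * subject_size / distance).
def focal_for(distance_m, ref_focal_mm=50.0, ref_distance_m=4.0):
    return ref_focal_mm * distance_m / ref_distance_m

for d in (4.0, 3.0, 2.0, 1.5):                       # camera pushes in...
    print(d, "m ->", round(focal_for(d), 1), "mm")   # ...lens zooms out
# 4.0 m -> 50.0 mm, 3.0 m -> 37.5 mm, 2.0 m -> 25.0 mm, 1.5 m -> 18.8 mm
# The subject stays the same size in frame while distant background
# magnification (roughly proportional to f alone) drops: space appears to expand.
</code></pre>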
<div class="wp-caption alignnone" style="width: 610px"><iframe src="http://player.vimeo.com/video/59839326" width="600" height="338" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe><p class="wp-caption-text">The Vertigo shot is used here as a transition effect to a flashback. The viewpoint pulls back while the focal length increases: space contracts and the room appears to shrink.</p></div>
<p>In this context, lens choice is an instrument for perspective control. A normal lens presents a certain naturalism in space rendering. This is the focal length of choice when the camera must disappear. What exactly is a &#8220;normal lens&#8221; is worth an article of its own, what with all the misconceptions surrounding the term, especially in the moving pictures realm. Quick definitions for the purposes of this article: a &#8220;normal&#8221; image has its center of perspective at the viewer&#8217;s eye; wide-angle images have their center of perspective in front of the viewer; telephoto images have their center of perspective behind the viewer. Any non-coincidence of the viewing position and the center of perspective in the image creates a perceived perspective distortion, which can be used for artistic purposes. </p>
<div id="attachment_1520" class="wp-caption aligncenter" style="width: 510px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/02/CoMwide.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/02/CoMwide.jpg" alt="Children of Men, wide angle lenses" title="Children of Men (2006)" width="500" class="size-full wp-image-1520" /></a><p class="wp-caption-text"><em>Children of Men</em> was mostly shot with an 18 mm lens (on Super 35).</p></div>
<p>Wide-angle lenses can render a stylized dramatic space and exaggerate movement on the depth axis; the intimate viewpoint adds a feeling of immediacy. Films by Stanley Kubrick, Terry Gilliam, Jean-Pierre Jeunet and Barry Sonnenfeld often exploit these characteristics. Telephoto lenses can emphasize the graphic qualities of the image and promote the abstraction of a plane from the space; the viewpoint is detached and formal (or voyeuristic, depending on context). Kurosawa (who was very particular about these things) used mostly long lenses together with multi-camera setups during the second half of his career.</p>
<div id="attachment_1522" class="wp-caption aligncenter" style="width: 510px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/02/TTSSlong.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/02/TTSSlong.jpg" alt="Tinker Tailor Soldier Spy, telephoto lens" title="Tinker Tailor Soldier Spy (2011)" width="500" class="size-full wp-image-1522" /></a><p class="wp-caption-text"><em>Tinker Tailor Soldier Spy</em> used long lenses and distant viewpoints extensively on exteriors to build a feeling of spying on the action.</p></div>
<p><br/></p>
<h6><strong>Forced perspective</strong></h6>
<p><em>Forced perspective</em> is a technique that exploits the principles of <em>familiar size</em> and <em>relative size</em>, and plays with the expectations of the brain to create an illusion. Depending on context, objects are made to appear either closer or further away than they are, or smaller or larger than their actual size. In the first case, cheating the size of the object manipulates its perceived position. In the second, cheating the position of the object manipulates its perceived size. Size manipulation was perfected in the <em>Lord of the Rings</em> trilogy, where tweaking actors&#8217; positions and moving set elements in sync with the camera allowed the illusion to hold even in moving shots.</p>
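<p>The arithmetic of the cheat is the same size/distance ratio from the perception section above. A hypothetical helper, with made-up numbers, just to show the principle:</p>
<pre><code># Forced perspective: an object reads the same to the camera whenever
# its size / distance ratio is unchanged.
def stand_in_distance_m(real_size_m, real_distance_m, prop_size_m):
    """Where to place a scaled prop so it appears to be the real thing."""
    return real_distance_m * prop_size_m / real_size_m

# A 1:8 scale prop (2.5 m standing in for 20 m) placed at 1/8 of the
# distance projects to exactly the same size in frame:
print(stand_in_distance_m(20.0, 80.0, 2.5))   # -> 10.0 m
</code></pre>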
<div id="attachment_1489" class="wp-caption aligncenter" style="width: 510px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/02/SaboteurForced.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/02/SaboteurForced.jpg" alt="Hitchcock Saboteur Forced Perspective" title="Saboteur (1942)" width="500" class="size-full wp-image-1489" /></a><p class="wp-caption-text">Small props and small people in the background fake a longer train and a deeper scene. Hitchcock shot this on a stage.</p></div>
<p>But this article is more concerned with faking distances. In its most popular form the technique uses objects smaller than their real-life size to increase the perceived distance. This is most often done to effectively increase the perceived depth of the entire set. Filmmakers adopted this practical approach to faking depth very early. It was already an art in the days of the German expressionists. Using mattes to fake distant backgrounds is nothing more than a special (but simple) case of forced perspective. These used to be painted panes strategically placed on the set. Nowadays this is done mostly in post with greenscreens and CGI.<br />
<br/></p>
<h6><strong>Aerial perspective and color perspective</strong></h6>
<div id="attachment_1529" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/02/aerialpersp.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/02/aerialpersp.jpg" alt="atmospheric perspective" title="Aerial perspective" width="262" class="size-full wp-image-1529" /></a><p class="wp-caption-text">Aerial perspective is usually demonstrated through mountain vistas or cityscapes. I am staying true to tradition.</p></div>
<p><em>Aerial perspective</em> describes the effect of the atmosphere on scene appearance. It is often called <em>atmospheric perspective</em> or <em>tonal perspective</em>, the latter term widely used in the context of visual arts. The scattering of skylight from various air particles creates a veil of sorts over the scene. Effectively, some amount of scattered skylight is added to the reflected scene light. The longer the distance from the viewer to the object, the stronger the effect. As a result, distant depth planes have lower contrast, lower saturation and higher brightness than closer planes, and their tones appear to converge towards the luminance (and color) of the distant sky. Aerial perspective is another reason for the presence of texture gradient in large-scale scenes. The eye needs contrast to differentiate detail, so there is a gradual loss of fine detail perception with increasing distance. And since reflected scene light is itself scattered on its way to the viewer, object definition suffers as well: the atmosphere acts as diffusion.</p>
<p>The accumulation of scattered skylight also creates a color shift. Since short wavelength light scatters more, there is more green and especially blue in the scattered daylight. That&#8217;s why distant objects seem to acquire a blue cast. Consequently, the conditioned brain readily agrees that warm tones in an image tend to advance, and cold tones tend to recede. This is sometimes called <em>color perspective</em>.</p>
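<p>Computer graphics people model exactly this with a simple exponential fog mix: attenuate the reflected scene light with distance and replace what is lost with scattered skylight. A rough sketch, with an arbitrary scattering coefficient and made-up colors:</p>
<pre><code>import math

# Aerial perspective as exponential fog: with distance, scene light is
# attenuated and scattered skylight is added in its place.
def aerial(scene_rgb, sky_rgb, distance_m, beta=0.0005):
    t = math.exp(-beta * distance_m)     # fraction of scene light left
    return tuple(t * s + (1.0 - t) * k for s, k in zip(scene_rgb, sky_rgb))

dark_rock = (0.10, 0.09, 0.08)
sky = (0.65, 0.75, 0.95)                 # bluish daylight
for d in (100, 1000, 5000):
    print(d, [round(c, 2) for c in aerial(dark_rock, sky, d)])
# Distant planes come out brighter, lower in contrast and bluer,
# converging towards the sky values. Smoke on a set is, in effect,
# a much larger beta acting over a much shorter distance.
</code></pre>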
<div id="attachment_1493" class="wp-caption aligncenter" style="width: 510px"><a href="http://www.shutterangle.com/wp-content/uploads/2013/02/BladeRunnerSmoke.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2013/02/BladeRunnerSmoke.jpg" alt="Blade Runner atmospheric perspective" title="Blade Runner (1982)" width="500" class="size-full wp-image-1493" /></a><p class="wp-caption-text">Smoke helps define depth planes through contrast differentiation. <strong><em>Top:</em></strong> Selective lighting and a bit of smoke help lift the midground from the foreground and the background. <strong><em>Bottom:</em></strong> Smoke is also a way to pop silhouettes and visualize light shafts.</p></div>
<p>Atmospheric conditions are certainly hard to control, but the tonal perspective effect can be mimicked on a smaller scale, and that includes interiors. This is achieved through the use of artificial smoke and fog. Their higher particle density leads to much more pronounced atmospheric effects and faster color modulation with increasing distance. So infusing atmosphere into the set also creates atmospheric perspective. Even a little bit of smoke adds some fill to the shadows in the far end of the set, establishing an axial gradient of contrast. This lowers the contrast of the background and softens it, helping the subject to stand out. Rain can work in a similar fashion as a device for enhancing depth. The water droplets are much bigger particles, but they also obscure objects, scatter light and veil the distance.</p>
<p>You can read the <a href="http://www.shutterangle.com/2012/creating-depth-dof-deep-staging-resolution/" title="Creating Depth, Part 1">first part of the Creating Depth series here</a>. And the next part is on <a href="http://www.shutterangle.com/2013/creating-depth-light-color-deep-staging/" title="Creating Depth, Part 3: Light, Color and More on Deep Staging">light and depth</a>.</p>
<p><a href="https://www.shutterangle.com/2013/creating-depth-perspective/">Creating Depth, Part 2: Perspective</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2013/creating-depth-perspective/feed/</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>Creating Depth, Part 1: Introduction, DOF, Deep Staging, Resolution</title>
		<link>https://www.shutterangle.com/2012/creating-depth-dof-deep-staging-resolution/</link>
		<comments>https://www.shutterangle.com/2012/creating-depth-dof-deep-staging-resolution/#comments</comments>
		<pubDate>Sat, 24 Nov 2012 18:38:46 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[composition]]></category>
		<category><![CDATA[depth]]></category>
		<category><![CDATA[depth of field]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=1315</guid>
		<description><![CDATA[<p>Depth perception is a basic ability of human vision. It is through depth that we judge distances and spatial relations. But depth is inherently a three-dimensional concept. So capturing the three-dimensional world as a two-dimensional image presents challenges when striving to preserve depth. These  [...]</p><p><a href="https://www.shutterangle.com/2012/creating-depth-dof-deep-staging-resolution/">Creating Depth, Part 1: Introduction, DOF, Deep Staging, Resolution</a></p>]]></description>
			<content:encoded><![CDATA[<p>Depth perception is a basic ability of human vision. It is through depth that we judge distances and spatial relations. But depth is inherently a three-dimensional concept. So capturing the three-dimensional world as a two-dimensional image presents challenges when striving to preserve depth. These challenges are mostly related to the fact that, unlike the real world, two-dimensional images lack stereo cues, and stereo vision is a major component of the mechanics of depth perception. This is one limitation that 3D cinema tries to overcome. This article is about 2D images though, and the ways to exploit stereo unrelated (monocular) cues to suggest depth. <span id="more-1315"></span><br />
<br/></p>
<h6><strong>Why is depth important for a 2D image?</strong></h6>
<div id="attachment_1319" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/11/ThomasCole-ThePicnic.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/11/ThomasCole-ThePicnic.jpg" alt="Thomas Cole - The Picnic" title="Thomas Cole - The Picnic" width="262" class="size-full wp-image-1319" /></a><p class="wp-caption-text">Exteriors naturally lend themselves to deep images. There are at least 4 significant distance planes here.</p></div>
<p>Depth is not a universal quality to look for in an image, but most non-abstract images benefit from an enhanced depth illusion. After all, an image renders a reality (existent or imagined), and reality is 3D. A cinematic 2D representation should <em>appear</em> sufficiently three-dimensional. Depth defines space. Injecting depth into an image furthers the spatial awareness of the viewer and helps orient them in the depicted world. Ideally, the objects in the frame should <em>appear</em> to recede behind the screen.</p>
<p>This is depth&#8217;s main function in a cinematic image, but not the only one. There are often purely pictorial benefits. Multiple depth planes create visual interest and stimulate the eye to wander and explore the frame. This may or may not be desirable, depending on content and intent. For example, an extreme close-up will usually gain nothing from distractions and complexity. And sometimes an image needs to be unclear, claustrophobic or to imply a confined space. Depth, on the other hand, both opens space and gives scope.</p>
<div id="attachment_1321" class="wp-caption alignleft" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/11/hellinthepacific.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/11/hellinthepacific.jpg" alt="Hell In the Pacific (1968) Lee Marvin" title="Hell In the Pacific (1968)" width="262" class="size-full wp-image-1321" /></a><p class="wp-caption-text">Obscurity can be a virtue. Here the face blends with the environment for a stronger impression.</p></div>
<p>Without the ability to triangulate distances through <a href="http://en.wikipedia.org/wiki/Stereopsis" title="Stereopsis at Wikipedia" target="_blank">stereo (binocular) vision</a>, spatial perception draws on experience of spatial relations between objects. The brain needs other cues to perceive depth. These are usually about space differentiation and, in one way or another, lead to the brain separating objects, figuring a space between them, and identifying multiple depth planes in the frame. So, helping the separation of scene elements and enhancing the sense of distance between planes in the scene promotes the illusion of depth. The cinematographer manages depth through viewpoint choice and movement, composition and blocking, and lighting. But there are other tricks that can help, and we will also talk about some of them.</p>
<p>Painters often differentiate between foreground, middle ground and background. There can be more discernible distance planes, of course. But, in general, we can assume that at least two discernible planes are necessary for reasonable space definition in a frame.</p>
<div id="attachment_1333" class="wp-caption aligncenter" style="width: 490px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/11/Vermeer-TheArt.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/11/Vermeer-TheArt.jpg" alt="Vermeer - The Art of Painting" title="Vermeer - The Art of Painting" width="480" class="size-full wp-image-1333" /></a><p class="wp-caption-text">The foreground frames the planes in the back. A popular compositional device to add depth.</p></div>
<p><br/></p>
<h6><strong>Depth of field</strong></h6>
<p>One misguided &#8220;truth&#8221; you can often find on the internet is that full frame cameras render images with more depth due to their ability to create shallower apparent depth of field (compared to Super35/APS-C). This is wrong on many levels, starting with pure semantics: shallow focus and depth don&#8217;t really belong in the same sentence. Shallow focus is not the same as separation. The advantage of larger sensors is better pop and delineation (see the last section below). </p>
<div id="attachment_1373" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/11/nocountryofdof.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/11/nocountryofdof.jpg" alt="No country for Old Men (2007) depth of field" title="No country for Old Men (2007)" width="262" class="size-full wp-image-1373" /></a><p class="wp-caption-text">Slight defocus guides the eye and adds depth without obscuring scene elements. (click to enlarge)</p></div>
<p>Showing the relations between objects in the scene requires sufficient depth of field to render these objects recognizable. But slight defocusing, with objects getting smoothly and slowly out of focus with increasing distance from the focus point, can enhance the illusion of depth. There are two reasons for this. First, <em>slight</em> defocus mimics the workings of the eye (especially in dim environments), creating a natural image. Second, it simulates one of the cues of depth perception: <em>texture gradient</em>. The eye sees nearby objects in fine detail, and objects in the distance appear less detailed. While this is usually related to linear perspective, a similar effect happens with objects slowly falling out of focus. Artists sometimes emulate the slight defocus of the eye by painting only the main subject in fine detail.</p>
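<p>The falloff is easy to see with the thin lens formula for the blur circle. A back-of-the-envelope Python sketch, assuming a 50 mm lens at f2.8 focused at 3 m (arbitrary values, not a recommendation):</p>
<pre><code># Thin-lens defocus: the blur circle grows smoothly as objects fall
# away from the focus plane -- an artificial "texture gradient".
def blur_diameter_mm(f_mm, f_number, focus_m, subject_m):
    f = f_mm / 1000.0                    # focal length in metres
    aperture = f / f_number              # entrance pupil diameter
    b = aperture * f / (focus_m - f) * abs(subject_m - focus_m) / subject_m
    return b * 1000.0                    # back to millimetres

for d in (3, 4, 6, 10, 30):              # focus is at 3 m
    print(f"{d} m: {blur_diameter_mm(50, 2.8, 3.0, d):.3f} mm blur circle")
# 0.000, 0.076, 0.151, 0.212, 0.272 -- a smooth, slow loss of detail.
</code></pre>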
<div id="attachment_1328" class="wp-caption alignleft" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/11/HarryPotterDOF.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/11/HarryPotterDOF.jpg" alt="Harry Potter and the Deathly Hallows, Part 1 (2010)" title="Harry Potter and the Deathly Hallows, Part 1 (2010)" width="262" class="size-full wp-image-1328" /></a><p class="wp-caption-text">Bokeh brightness variation creates a nice frame, but calling the background a &quot;depth plane&quot; would be taking &quot;plane&quot; a bit literal.</p></div>
<p>Very shallow focus is a viable cinematic device for some purposes, but it will never render a space with sufficient depth. Anything in front of or behind the focus point becomes an impressionistic blur, isolating the focused object. Deep focus is often thought to yield flatness because every element in the scene is equally sharp. But equal sharpness is far less objectionable (if at all) than very shallow focus where depth is concerned. It can even lend pictorial qualities to an image. And while there is no way to inject depth into a very shallow focus picture, there are approaches to separating depth planes in a deep focus image.<br />
<br/></p>
<h6><strong>Deep staging</strong></h6>
<p><em>Deep staging</em> (or <em>deep space</em>) refers to a specific approach to blocking action and camera. Important elements of the scene are placed on different depth planes. This creates natural distance points for the eye to wander to. Deep staging is often used with long takes, sometimes including dolly, steadicam or handheld camera moves. It is emphasized by actors entering the frame, thus creating an additional plane of interest; actors leaving the frame, shifting interest to another plane; or actors moving from one plane to another, creating depth vectors in the frame. A variant of the last is the so-called &#8220;walking into a close-up&#8221; shot, used by Hitchcock, John Ford, Kalatozov and others. This technique has the actor(s) moving from a deeper plane to the foreground, ending in a medium (or a tighter) close-up.</p>
<div class="wp-caption alignnone" style="width: 610px"><iframe src="http://player.vimeo.com/video/54162542" width="600" height="338" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe><p class="wp-caption-text"><strong><em>Managing depth planes:</em></strong> First Bernstein (right) lowering the paper introduces Thatcher (left), and creates a new plane; then Kane moving through depth planes establishes the full scope of the shot. Note how the true size of the room and the windows is only revealed after Kane stands in the deep background.</p></div>
<p>Deep focus and deep staging complement each other beautifully. But modern trends usually combine deep staging with selective focus and focus racking between planes to guide the attention of the viewer according to the filmmaker&#8217;s intent. There is also a tendency to rely on coverage and to defer sequence construction to editing. Coverage also leans more on close-ups, over-the-shoulder shots and other standard frames. All this doesn&#8217;t play well with imaginative deep staging and deep focus. Neither does the shift towards less lighting. Deep focus and long takes require both pre-shooting commitment and relatively small apertures (and more light). This makes staging deep interiors a challenge.</p>
<div id="attachment_1362" class="wp-caption aligncenter" style="width: 620px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/11/cranesmedley.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/11/cranesmedley.jpg" alt="The Cranes Are Flying (1957)" title="The Cranes Are Flying (1957)" width="610" class="size-full wp-image-1362" /></a><p class="wp-caption-text">Films by the duo Kalatozov/Urusevsky often feature deeply staged shots with pronounced foregrounds and expressionistic camera angles.</p></div>
<p>Deep staging and deep focus were used extensively by some of the greats: Orson Welles, Roman Polanski, Kurosawa, Mizoguchi, among others. <em>Citizen Kane</em> is the textbook example. Orson Welles and cinematographer Gregg Toland used torrents of light, a wide lens (24 mm), small apertures (f8 to f16) and split focus to render the startlingly deep interiors. 24 mm may sound tame by today&#8217;s standards: the film being shot in the Academy format, 24 mm is a humble 39 mm full frame equivalent (in horizontal FOV). But at the time it was considered unforgiving to actors and was used sparingly. And certainly not with actors in the foreground. 50 mm was used universally for interiors, and f4 or larger apertures were the norm. Welles and Toland coupled their choice of optics with genuinely deep staging for maximum effect, which infused their images with drama.</p>
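<p>The full frame equivalence quoted above is just a ratio of format widths. A quick check in Python, assuming the usual 21.95 mm width for the Academy camera aperture:</p>
<pre><code>import math

# Horizontal-FOV equivalence between formats: scale the focal length
# by the ratio of the format widths.
def ff_equivalent_mm(f_mm, format_width_mm, reference_width_mm=36.0):
    return f_mm * reference_width_mm / format_width_mm

ACADEMY_WIDTH_MM = 21.95
print(round(ff_equivalent_mm(24, ACADEMY_WIDTH_MM), 1))  # ~39.4 mm

# Sanity check via the actual horizontal angles of view:
def hfov_deg(f_mm, width_mm):
    return math.degrees(2 * math.atan(width_mm / (2 * f_mm)))

print(round(hfov_deg(24, ACADEMY_WIDTH_MM), 1))   # ~49.1 degrees
print(round(hfov_deg(39.4, 36.0), 1))             # ~49.1 degrees
</code></pre>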
<div id="attachment_1346" class="wp-caption aligncenter" style="width: 620px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/11/CitizenKaneMedley.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/11/CitizenKaneMedley.jpg" alt="Citizen Kane (Orson Welles)" title="Citizen Kane (1941)" width="610" class="size-full wp-image-1346" /></a><p class="wp-caption-text"><em><strong>Left:</strong></em> a classic three person / three planes arrangement with a strong foreground. <em><strong>Middle:</strong></em> Another beautifully crafted three planes shot, with the figure in the deep background giving scope to the set. <em><strong>Right:</strong></em> An example of split focus.</p></div>
<p><br/></p>
<h6><strong>Sharpness and 3D pop</strong></h6>
<p>Before moving on to more interesting stuff, let&#8217;s touch on a minor point. The quality of the lens and the resolution of the medium have an impact on textures and texture gradient. An image lacking clarity at its focus point will appear flatter than a crisper image. And when coupled with slight focus fall-off, the crisp image will demonstrate a more readily noticeable texture gradient.</p>
<p style="text-align=left;">
<div id="attachment_1380" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/11/nocountrysoften.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/11/nocountrysoften.jpg" alt="No Country for Old Men (2007)" title="No Country for Old Men (2007)" width="262" class="size-full wp-image-1380" /></a><p class="wp-caption-text">Bottom image is a softened version of the original (top), approximating a softer lens. Note the weaker delineation of the subject and the softer textures, resulting in a subtly flatter image. (click to enlarge)</p></div>
</p>
<p>A more obscure quality related to resolution and optics is the so-called 3D pop. The subject in focus in some pictures appears to pop out of the surroundings, and out of the image. Photographers sometimes call this <em>Zeiss pop</em> or <em>Leica pop</em>, depending on their affinities, because it tends to show in images shot with some Zeiss and Leica lenses. Pop is often mystified and its origins seemingly difficult to pinpoint. The most important element is delineation of the subject in focus. This needs great microcontrast (<a href="http://en.wikipedia.org/wiki/Modulation_transfer_function" title="MTF at Wikipedia" target="_blank">MTF</a> result) in the spatial frequencies that contribute to the required resolution, preferably across the whole image field. Good MTF results at higher spatial frequencies may look nice on an MTF chart but are at best irrelevant and at worst create aliasing. For example, a FullHD full frame image will only benefit from spatial frequencies up to around 15 lp/mm. For the same pixel resolution, larger sensors have an advantage: they need this great MTF at lower frequencies, compared to smaller sensors (which is easier to achieve). Minimized lens aberrations help for clean edges. And the edges obviously need to be in sharp focus. Again, a bit of focus fall-off towards the background helps, but heavily blurred backgrounds will make the subject look like a cut-out. Some decent subject-background contrast (through light and/or color) also contributes.</p>
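<p>The frequency figures come straight from the Nyquist limit: one line pair needs at least two pixels. A back-of-the-envelope sketch (the practically useful cutoff sits below this, thanks to the optical low-pass filter and Bayer interpolation, hence the ~15 lp/mm ballpark above):</p>
<pre><code># Highest theoretically useful spatial frequency for a given pixel
# count across a given sensor width (Nyquist: 2 pixels per line pair).
def nyquist_lp_per_mm(horizontal_pixels, sensor_width_mm):
    return horizontal_pixels / 2.0 / sensor_width_mm

print(nyquist_lp_per_mm(1920, 36.0))   # full frame at 1080p: ~26.7 lp/mm
print(nyquist_lp_per_mm(1920, 24.9))   # Super 35-ish width:  ~38.6 lp/mm
print(nyquist_lp_per_mm(4096, 36.0))   # full frame at 4K:    ~56.9 lp/mm
# Larger sensors need their good MTF at lower frequencies for the same
# pixel count -- which is easier for the lens to deliver.
</code></pre>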
<p>Pop&#8217;s connection to depth lies in interposition. <em>Interposition</em> is one of the basic depth cues. When an object occludes another object, the first object is apparently in front of the other, and closer to the observer. Popping (clearly delineating) the front object is one way to separate it from what lies behind. If the objects blend into each other, they may be perceived as a single entity.</p>
<p>Sharpness and pop are influenced (in a bad way) by lossy image compression. They will often get lost in heavily compressed video. <a href="http://www.shutterangle.com/2012/cinematic-look-frame-rate-shutter-speed/" title="Cinematic Look, Part 2: Frame Rate and Shutter Speed">Motion blur</a> also obliterates them, which renders them pretty much irrelevant in scenes with motion and more applicable to still images.</p>
<p>Part 2 of the <em>Creating Depth</em> series is on <a href="http://www.shutterangle.com/2013/creating-depth-perspective/" title="Creating Depth, Part 2: Perspective">depth and perspective</a>.</p>
<p><a href="https://www.shutterangle.com/2012/creating-depth-dof-deep-staging-resolution/">Creating Depth, Part 1: Introduction, DOF, Deep Staging, Resolution</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2012/creating-depth-dof-deep-staging-resolution/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Book Review: The Art and Technique of Digital Color Correction</title>
		<link>https://www.shutterangle.com/2012/book-review-the-art-and-technique-of-digital-color-correction/</link>
		<comments>https://www.shutterangle.com/2012/book-review-the-art-and-technique-of-digital-color-correction/#comments</comments>
		<pubDate>Sat, 10 Nov 2012 22:38:25 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Book Reviews]]></category>
		<category><![CDATA[book review]]></category>
		<category><![CDATA[color]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=1284</guid>
		<description><![CDATA[<p>
Color processing was used with film even before the adoption of color film. Hand painted animation or monochromatic dyed prints were common more than a century ago. The art of color timing came to prominence with color film though. Timing, itself, related to the duration of the various chemical  [...]</p><p><a href="https://www.shutterangle.com/2012/book-review-the-art-and-technique-of-digital-color-correction/">Book Review: The Art and Technique of Digital Color Correction</a></p>]]></description>
			<content:encoded><![CDATA[<div style="float: left; margin-right: 12px; margin-top: 5px;"><img alt="The Art and Technique of Digital Color Correction" title="The Art and Technique of Digital Color Correction" src="http://www.shutterangle.com/wp-content/uploads/2012/11/atcolor.jpg"/></div>
<p>Color processing was used with film even before the adoption of color film. Hand painted animation or monochromatic dyed prints were common more than a century ago. The art of <em>color timing</em> came to prominence with color film though. <em>Timing</em>, itself, related to the duration of the various chemical baths. Chemistry was later mostly replaced by colored printing lights and color manipulation usually happened during intermediate printing. Then digital intermediate came and changed cinematography significantly.<span id="more-1284"></span></p>
<p>It is hard to overestimate the importance of the post-process adjustments. While a lot of old-school cinematographers still largely create the image in-camera, the new wave thinks of the colorist as one of the most important links in the production chain. The paradigm <em>shooting for post</em> is popular for a reason. With digital image manipulation the creative possibilities are endless. And so is the lack of restraint, but that&#8217;s another story.</p>
<p><em>Color correction</em> and <em>color grading</em> are often used interchangeably. But there are nuances. Or at least we can assign the terms distinct meanings for the sake of precision. Color correction is more appropriate for describing the process of fixing, adjusting and matching footage. Color grading is then furthering an artistic vision or creating a <em>look</em>. Looks are all the rage now. At any level of experience, and for any kind of visual media, you can get fast looks (kinda rhymes with fast food) with Instagram, Hipstamatic, Magic Bullet Looks, etc. Being derivative is easier than ever.</p>
<p>So, after this short introduction with a touch of rant, let&#8217;s talk a bit about the book from the title. <em><a target="_blank" rel="nofollow" href="http://www.amazon.com/gp/product/024081715X/ref=as_li_ss_tl?ie=UTF8&#038;camp=1789&#038;creative=390957&#038;creativeASIN=024081715X&#038;linkCode=as2&#038;tag=revmaz-20" title="The Art and Technique of Digital Color Correction by Steve Hullfish">The Art and Technique of Digital Color Correction</a></em> by Steve Hullfish is a clever book. Its cleverness comes from the fact that it is largely software agnostic in its description of the tools of the trade. Knowing the principles of a specific color tool means you can go beyond the implementation differences between various software packages. And there certainly is an abundance of tools for video color correction, each appropriate for specific tasks or specific approaches. Actually, someone coming from the still photo world may raise an eyebrow. After all, Curves should be enough for almost everything, right?</p>
<p>The book introduces the terminology and the main control tools of the color correction process &#8211; the vectorscope and the waveform monitor &#8211; and goes into detail about how to use them. Both primary and secondary color correction are covered: primary color correction mostly fits the definition of color correction above, while secondary color correction is essentially localized fixes, relighting and color grading. These include basic tonal adjustments, global color adjustments, and the various ways of qualifying image elements for secondaries.</p>
<p>One of the cool things about this book is that topics are presented by showing various pros tackling various tasks. Getting a glimpse of the working process of experienced colorists is really helpful. You can work out and synthesize a method by analyzing their approaches, similarities and differences. There is never a single way to do things. And you can pick up small tips and tricks along the way. Of particular interest are the more advanced topics, some of them going beyond the technical side of things: matching shots, color as a storytelling element, creating looks, and communicating with a cinematographer/videographer. The latter, of course, applies when the videographer and the colorist are not the same person, an uncommon case in the low budget world.</p>
<p>It is important to note that <em>The Art and Technique of Digital Color Correction</em> is strictly about creative and technical color manipulation in post. It doesn&#8217;t teach about color science and color spaces. It doesn&#8217;t talk much about setting up and calibrating a color correction suite. It doesn&#8217;t concern itself with camera color: color matrices, color profiles, picture profiles, etc. It doesn&#8217;t teach how to get accurate color during shooting, through color charts on set or otherwise. Some of these topics are covered in another book co-authored by Steve (with Jaime Fowler): <em><a target="_blank" rel="nofollow" href="http://www.amazon.com/gp/product/0240810783/ref=as_li_ss_tl?ie=UTF8&#038;camp=1789&#038;creative=390957&#038;creativeASIN=0240810783&#038;linkCode=as2&#038;tag=revmaz-20" title="Color Correction for Video by Steve Hullfish and Jaime Fowler">Color Correction for Video</a></em>.</p>
<p><a href="https://www.shutterangle.com/2012/book-review-the-art-and-technique-of-digital-color-correction/">Book Review: The Art and Technique of Digital Color Correction</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2012/book-review-the-art-and-technique-of-digital-color-correction/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Choosing Lenses for Video</title>
		<link>https://www.shutterangle.com/2012/choosing-lenses-for-video/</link>
		<comments>https://www.shutterangle.com/2012/choosing-lenses-for-video/#comments</comments>
		<pubDate>Sat, 20 Oct 2012 18:35:25 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[lenses]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=1190</guid>
		<description><![CDATA[<p>Choosing lenses for video can be a difficult task. If you haven&#8217;t shot much before (either photos, or video) you will likely have problems identifying the appropriate focal lengths and maximum apertures, let alone actual lenses. The best way to get over this phase is to shoot some &#8211; preferably with  [...]</p><p><a href="https://www.shutterangle.com/2012/choosing-lenses-for-video/">Choosing Lenses for Video</a></p>]]></description>
			<content:encoded><![CDATA[<p>Choosing lenses for video can be a difficult task. If you haven&#8217;t shot much before (either photos, or video) you will likely have problems identifying the appropriate focal lengths and maximum apertures, let alone actual lenses. The best way to get over this phase is to shoot some &#8211; preferably with a zoom lens to get acquainted with various focal lengths and what they have to offer. This article doesn&#8217;t touch the subject of focal length selection, which is largely a creative decision. Instead, it offers some considerations related to the use of still photo lenses for video work. It is centered on HDDSLRs and large sensor video cameras with photo mounts. <span id="more-1190"></span><br />
<br/></p>
<h6><strong>Usability requirements for lenses for video</strong></h6>
<p>Any lens can be used to shoot video, as long as you can mount it on the camera. That said, there are a couple of features that make the job easier.</p>
<div style="float: right; margin-left: 12px; margin-top: 5px; width: 40%; background-color: LightGrey; padding: 10px 10px 0px; border-width: thin; border-color: black;">
<strong>T-stops</strong></p>
<p>F-stops are defined as the ratio of the lens focal length to the diameter of the entrance pupil (the aperture as seen through the front of the lens). F-stops are not appropriate for precise exposure control because the f-stop is a geometric property and doesn&#8217;t describe actual light transmission. All lenses lose light to reflections off their glass elements and to internal light scatter. The trouble is, different lenses lose different amounts of light. That&#8217;s why cine lenses use <em>t-stops</em> instead of f-stops. A t-stop marks the aperture position whose light transmission equals that of an ideal lens with 100% transmission at the respective f-stop. So equal t-stops on different lenses mean equal transmission, and thus the same exposure (assuming both lenses have calibrated t-stop markings). Because real lenses are not ideal, the f-stop at a specific t-stop is always a smaller number. In general, lenses with more glass elements (like zoom lenses) exhibit larger deviations between t-stops and f-stops.
</div>
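<p>The conversion between the two is a one-liner, since exposure goes with the square of the aperture ratio. The transmission figures below are made-up examples, not measurements of real lenses:</p>
<pre><code>import math

# A t-stop is the f-stop of an ideal lens (100% transmission) passing
# the same amount of light: t_stop = f_stop / sqrt(transmittance).
def t_stop(f_stop, transmittance):
    return f_stop / math.sqrt(transmittance)

print(round(t_stop(2.0, 0.85), 2))   # simple prime, 85% -> T2.17
print(round(t_stop(2.8, 0.60), 2))   # many-element zoom, 60% -> T3.61
# The t-stop number is always larger than the f-stop number, and the
# gap widens with the element count.
</code></pre>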
<p>Smooth aperture is the most underestimated lens feature among new videographers. Image consistency generally needs fixed sensitivity of the capturing medium (fixed ISO) and fixed shutter speed. This leaves aperture as the only means of precise exposure control. And precise control you <em>will</em> need, once you have to (near) perfectly match the exposure of a couple of lenses between shots in a sequence. Well calibrated t-stop markings allow for perfect exposure matching. In fact, t-stops are critical when shooting film. With digital one can get a perfect exposure match even without t-stop markings, as long as smooth aperture control is available. This is achieved with through-the-lens exposure tools like a waveform monitor or a spotmeter and a solid tonal reference (usually a grey card). Exposure matching during the shoot is not mandatory, but saves matching time in post.</p>
<p>Next comes a tight and smooth focus ring. A good focus ring has decent travel for focusing precision; has no play; ideally has a hard stop at infinity; and either allows a focus gear to be attached, or is itself geared with standard pitch teeth for use with a follow focus.<br />
<br/></p>
<h6><strong>Lens image rendition in relation to video</strong></h6>
<p>Lens <em>resolution</em> is largely overrated for low-end video. Sensor line skipping (used for video in most photo cameras) is brutal in decimating image definition and introducing fake detail from aliasing. Pixel binning is much more forgiving, essentially being localized downsampling. Image quality is then further degraded by video compression. Any decent lens will do fine in terms of sharpness for DSLR video. On the other hand, lens resolution matters a lot when high quality 4K video is concerned, or when recording RAW video from a small sensor (as with Blackmagic&#8217;s camera).</p>
<p>Lens <em>contrast</em> needs more attention. First, higher contrast images will <em>appear</em> to have more detail. Second, higher contrast images will endure heavy video compression somewhat better due to their better value differentiation. Low contrast lenses, on the other hand, can slightly extend the range of <a href="http://www.shutterangle.com/2012/cinematic-look-dynamic-range/" title="Cinematic Look, Part 3: Dynamic Range">scene contrast</a> the camera can capture. Lens contrast is related to coatings: low contrast lenses are usually uncoated or single-coated, so they will also flare more.</p>
<p>Color rendition varies a lot. Older lenses often render warmer (a bias that was harmless, or even beneficial, with black and white film). It is best to have neutral color rendition, but lens color can be modified through the camera&#8217;s white balance options, or in post.</p>
<p>Out-of-focus rendering (<a href="http://en.wikipedia.org/wiki/Bokeh" title="Bokeh at Wikipedia" target="_blank">bokeh</a>) is largely unaffected by video specifics as its appearance is defined over (relatively) large areas of the image. So it is valid to take it into consideration when choosing lenses for video. Same with lens distortion and vignetting. Using full frame lenses on crop bodies pretty much eliminates vignetting problems though. Also, some cameras will correct these defects for native lenses. Abundant chromatic aberrations will be visible in video.<br />
<br/></p>
<h6><strong>Video lenses vs. still lenses</strong></h6>
<div id="attachment_1239" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/10/samyang.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/10/samyang.jpg" alt="Samyang\Rokinon cine lenses" title="Samyang cine lenses" width="262" class="size-full wp-image-1239" /></a><p class="wp-caption-text">Samyang cine lenses have smooth aperture, t-stops, and geared focus and aperture rings</p></div>
<p>Unsurprisingly, video/cine lenses are better suited for shooting video than photo lenses. As a minimum, video lenses have smooth aperture control and good focus rings with standard pitch teeth for use with a follow focus. They usually feature t-stop markings instead of f-stop markings on the aperture ring. The distance scale is detailed, precisely calibrated, and often placed on the side of the lens, where a focus puller can see it. Same with depth of field scales, where available. Photo lenses mostly have the distance scale on top, so that the photographer can have a quick glance at it at will. High quality video lens sets like the Carl Zeiss compact primes are also color matched and feature matching front diameters (for use with matte boxes). Overall, image characteristics are matched across the set, including generally matching out-of-focus rendering due to the consistent number of aperture blades.</p>
<p>Video and cine lenses have a flaw though: they are usually quite expensive. Only recently have Korean and Chinese lens manufacturers entered the market, with Samyang (also rebranded as Rokinon or Bower in the USA) and SLRMagic rehousing their photo lenses for video use and keeping prices down. But in general, still photo lenses, especially some legacy 35mm lenses, are significantly cheaper.<br />
<br/></p>
<h6><strong>Native mount lenses vs. adapted lenses</strong></h6>
<p>If you are using a photo camera for video (HDDSLR or a video capable mirrorless), you are likely invested in native glass already. Modern photo lenses, except for some third party lenses, are universally autofocus and usually feature electronic aperture control. Autofocus is still largely irrelevant for video, so the lenses are mostly used in manual focus mode. Autofocus lenses have a couple of side effects though. They usually have a relatively short focus throw (for faster AF), which may limit manual focus precision. And their focus rings don&#8217;t stop at infinity when in manual focus mode, but rather go beyond infinity. That&#8217;s because the autofocus system usually needs to go past the exact focus point and then back a little when hunting for focus in order to get the sharpest result.</p>
<p>Electronic aperture control can be either a positive or a negative. On some cameras (Nikon D4 or Canon C300, for example) aperture can be adjusted in 1/8 stop steps, allowing quite precise exposure control. Typically though, on photo cameras the aperture is controlled in 1/3 stop steps. This is still better than most manual focus and manual aperture legacy lenses, which allow aperture control in either half stops or full stops. But manual apertures can usually be declicked, resulting in smooth aperture rings. A declicked aperture means both precise exposure control (with the help of through-the-lens exposure tools) and the possibility of basic aperture pulls during shots. The latter can&#8217;t be done with a stepped electronic aperture.</p>
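<p>For a feel of how fine 1/8 stop is compared to 1/3 stop, remember that exposure scales as a power of two per stop:</p>
<pre><code># Relative exposure change for a fractional aperture step.
def exposure_factor(stops):
    return 2.0 ** stops

print(round(exposure_factor(1 / 8), 3))   # 1.091 -- barely visible,
                                          # good for matching shots
print(round(exposure_factor(1 / 3), 3))   # 1.26  -- a noticeable jump
                                          # if it lands mid-shot
</code></pre>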
<p>Using lenses recognized by the camera body (this generally means lenses from the camera manufacturer) has additional benefits. In-camera processing can fix some optical defects. For example, Canon DSLR cameras have an option for peripheral illumination correction (vignetting correction) available for Canon lenses. Some cameras (Panasonic, Samsung, etc) will also optionally fix distortion on lenses they recognize.</p>
<p>Then there is image stabilization. This feature of some native mount lenses is very beneficial for handheld video, more so with long focal lengths. If handheld is your thing, it is probably wise to stick with native lenses with IS.</p>
<p>So what do legacy manual focus lenses have to offer?</p>
<div id="attachment_1260" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/10/flekfocus.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/10/flekfocus.jpg" alt="Carl Zeiss Jena Flektogon 35mm" title="Flektogon focus throw" width="262" class="size-full wp-image-1260" /></a><p class="wp-caption-text">The focus path of the Carl Zeiss Jena Flektogon 35/2.4 is more than 250 degrees.</p></div>
<p>Quality old manual lenses usually have tight and smooth focusing rings, vastly superior to those of AF lenses. The focus throw is often noticeably longer than that of AF lenses. As mentioned above, manual aperture rings (where available) can usually be declicked for smooth aperture control. Legacy lenses sometimes have &#8220;character&#8221; (which is glorified optical defects). Lenses from the 1960s and earlier will generally be uncoated or single-coated and will flare easily. This can be used for lowering contrast or for artistic effects. Legacy lenses often have sturdy metal bodies, unlike the plastic bodies common with modern lenses.</p>
<p>But the main attraction is price. With patient shopping, you can often collect a whole set for the price of a quality modern lens. That is, if you don&#8217;t jump on the Leica train. Relatively cheap manual lens systems include Super Takumar (M42), Olympus OM, Canon FD, Carl Zeiss Jena (M42), Pentacon (M42), Pentax K, Yashica ML (Contax/Yashica mount), Minolta MD, to name a few. Going up in price, there are manual Nikon F, Contax Carl Zeiss and Leica R.<br />
<br/></p>
<h6><strong>Adapting lenses</strong></h6>
<div id="attachment_1256" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/10/omtocanon.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/10/omtocanon.jpg" alt="Olympus OM to Canon EF adapter" title="Olympus OM to Canon EF adaptor" width="262" class="size-full wp-image-1256" /></a><p class="wp-caption-text">Olympus OM to Canon EF adaptor. FFD of the OM system is 46mm and FFD of Canon EF is 44mm, so OM lenses can be adapted to Canon EOS cameras.</p></div>
<p>The main factor that decides whether a lens can be adapted to a camera mount is flange focal distance (FFD). This is the distance from the camera mount to the sensor plane. In general, lenses from a system with a longer flange focal distance can be adapted to a system with a shorter flange focal distance. A lens with a shorter FFD will need an adaptor with optical elements in order to focus to infinity (if that is possible at all) on a system with a longer FFD. It is best not to bother with these. The common way to adapt is through a separate adaptor between the lens and the camera mount. But when the difference in FFD between the two systems is small, lenses can often be adapted through replacement lens mounts that take the place of the original lens mount. The replacement mount is manufactured to more or less equalize the FFD. Popular replacement mount routes are Canon FD to Canon EF, M42 to Nikon F, Contax/Yashica to Sony Alpha, Leica R to Nikon F, to name a few. The following table lists the flange focal distances of some popular camera mounts for reference; the short sketch after the table turns the adaptation rule into code.</p>
<div style="margin-left: 22%; margin-right: 22%;">
<table style="font-family: Verdana; text-align: center;" border="1" cellspacing="0" cellpadding="0">
<caption style="caption-side: bottom; text-align: center; font-size: 90%;"><em>Camera mounts sorted by flange focal distance</em></caption>
<tbody style="text-align: left">
<tr>
<th style="width: 40%; text-align: left;"><strong>Camera mount</strong></th>
<th style="width: 20%; text-align: left;"><strong>FFD in mm</strong></th>
</tr>
<tr>
<td>Sony E</td>
<td>18</td>
</tr>
<tr>
<td>Canon EF-M</td>
<td>18</td>
</tr>
<tr>
<td>Micro 4/3</td>
<td>19.25</td>
</tr>
<tr>
<td>Leica M</td>
<td>27.8</td>
</tr>
<tr>
<td>Leica M39</td>
<td>28.8</td>
</tr>
<tr>
<td>Canon FD</td>
<td>42</td>
</tr>
<tr>
<td>Minolta MD</td>
<td>43.5</td>
</tr>
<tr>
<td>Canon EF</td>
<td>44</td>
</tr>
<tr>
<td>Sony Alpha</td>
<td>44.6</td>
</tr>
<tr>
<td>M42</td>
<td>45.46</td>
</tr>
<tr>
<td>Pentax K</td>
<td>45.46</td>
</tr>
<tr>
<td>Contax/Yashica</td>
<td>45.5</td>
</tr>
<tr>
<td>Olympus OM</td>
<td>46</td>
</tr>
<tr>
<td>Nikon F</td>
<td>46.5</td>
</tr>
<tr>
<td>Leica R</td>
<td>47</td>
</tr>
<tr>
<td>Arri PL</td>
<td>52</td>
</tr>
<tr>
<td>Mamiya 645</td>
<td>63.3</td>
</tr>
</tbody>
</table>
</div>
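<p>Here is the adaptation rule in code form, using a few FFD values from the table. It is a simplification: it ignores physical fit, mirror clearance and electronic control, all discussed below:</p>
<pre><code># A lens adapts with a plain mechanical tube only when its native FFD
# exceeds the camera's; the tube's thickness makes up the difference.
FFD_MM = {"Sony E": 18.0, "Micro 4/3": 19.25, "Canon FD": 42.0,
          "Canon EF": 44.0, "M42": 45.46, "Olympus OM": 46.0,
          "Nikon F": 46.5, "Leica R": 47.0, "Arri PL": 52.0}

def adaptor_thickness_mm(lens_mount, camera_mount):
    diff = FFD_MM[lens_mount] - FFD_MM[camera_mount]
    return diff if diff > 0 else None    # None: needs optics or a new mount

print(adaptor_thickness_mm("Olympus OM", "Canon EF"))  # 2.0 mm
print(adaptor_thickness_mm("Nikon F", "Sony E"))       # 28.5 mm
print(adaptor_thickness_mm("Canon FD", "Canon EF"))    # None -- hence the
                                                       # FD replacement mounts
</code></pre>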
<p>Mirrorless mounts are the easiest to adapt lenses to. You can put pretty much any lens ever made on a Sony E-mount camera. Among (video) DSLRs Canon EF cameras are the easiest to adapt to, and Nikon the hardest. Due to mount specifics Sony Alpha cameras often require replacement mounts to be installed on the lens even though the FFD is only 0.6mm longer than Canon&#8217;s. </p>
<p>FFD is only one part of the equation though. Some lenses simply can&#8217;t fit physically on a camera even if they have a longer flange focal distance: they could be too wide, or they could hit the mirror (in the case of an SLR camera). For example, Arri PL lenses have wide mounts and protrude deeply. Adapting can be a tricky business. Some Leica R lenses, for instance, work on APS-C Canon DSLR cameras, but hit the mirror on full frame Canon DSLRs. So before investing in a lens make sure it can be used on your camera. Another issue is adaptor thickness. To ensure infinity focus, adaptor manufacturers routinely make adaptors a hair too thin. This results in unreliable distance scales and the annoying loss of the hard stop of the focus ring at infinity (where such a hard stop was previously available). Depending on adaptor construction, adaptors can sometimes be shimmed to the precise thickness through trial and error.<br />
<br/></p>
<h6><strong>Prime lenses vs. zoom lenses</strong></h6>
<p>Primes&#8217; most important advantage over zooms is faster maximum apertures. This means better low light capabilities and the option of shallower depth of field. Their generally better sharpness only matters when good quality video compression and/or high resolution are available. Think dedicated video sensors, RAW video or 4K. Primes are usually smaller and contain fewer glass elements. This translates to lighter rigs.</p>
<div id="attachment_1246" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/10/zoomlens.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/10/zoomlens.jpg" alt="Zoom lens" title="Zoom lens" width="262"  class="size-full wp-image-1246" /></a><p class="wp-caption-text">Non-cine zoom lenses often extend forward a lot making matte box operation hard</p></div>
<p>Zooms, on the other hand, are all about flexibility and convenience. A single lens covers a whole range of focal lengths. This means fewer lens changes and fewer lenses to lug around. There is also the added bonus of color matched focal lengths over the zoom range. With <a href="http://en.wikipedia.org/wiki/Parfocal" title="Parfocal lenses at Wikipedia" target="_blank">parfocal</a> constant aperture zoom lenses you can also zoom during a shot. So make sure your zoom lens is parfocal and not varifocal, if you plan to do this. Note that constant aperture is not always <em>constant</em>. There are often variations in true aperture over the zoom range. This may lead to slight exposure changes during zooms. Push-pull zoom lenses will need readjustment of the follow focus (if you use one) as the focus ring moves forward or backward with zooming. Modern zooms are mostly of the two-touch type, with separate zoom and focus rings. Photo zoom lenses (unlike cine zoom lenses) often extend a lot when zooming, which may either prevent their use with a matte box, or at least require matte box readjustment after changing the focal length. Then again, one can do fine without a matte box, especially when lens changes are rare. Photo zooms rarely have apertures wider than f2.8. If you need low light, you are better served by primes.<br />
<br/></p>
<h6><strong>Some thoughts on lens set building</strong></h6>
<p>If you&#8217;ve never shot seriously before, it is probably best to start with a standard zoom. Get familiar with the focal lengths, find out what you like and proceed from there. You can shoot a feature with a zoom lens. You can shoot a feature with a single prime. Buying a lens only because someone raves about it on the Internet is a bad idea. Get only what you need, when you need it. Build up gradually. It is usually best to build a set of lenses from the same system &#8211; only Nikon lenses, or only Canon lenses, or only Leica R lenses, etc. Some lens systems are better matched than others, but it is best to aim for serial numbers close to one another. Mixing lenses is the easiest way to get in trouble with matching color, contrast, or just &#8220;feel&#8221; between shots. But don&#8217;t treat this as an absolute. Content is more important than the occasional mismatch, and sometimes the mismatch can even be used to advantage. And sampling various lens systems will also give you a taste of what is out there. There is more on lens color in the article on <a href="http://www.shutterangle.com/2012/color-matching-lenses-for-video/" title="Color Matching Lenses for Video">color matching lenses for video</a>. Matching the filter thread diameter of the lenses will let you work without step-up rings, both with and without a matte box. Luckily, lens manufacturers often utilize only one or two filter sizes within a single lens system.<br />
<br/></p>
<h6><strong>Random tidbits</strong></h6>
<ul>
<li>Nikkors&#8217; focus rings rotate in the opposite direction compared to pretty much any other lens.</li>
<li>You may want to consider the minimum focus distance when choosing a lens. A lens with short MFD can give you near-macro shots without the need for a dedicated macro lens.</li>
<li>Some very small lenses have the focus ring so close to the camera body that using a follow focus is hard or impossible.</li>
<li>Some lens repair shops do cine mods to still lenses. These usually include installing focus gears, common fronts, and declicking the aperture.</li>
<li><a href="http://www.flickr.com/" title="flickr" target="_blank">flickr</a> is your best friend when you need to research how a lens renders.</li>
</ul>
<p><a href="https://www.shutterangle.com/2012/choosing-lenses-for-video/">Choosing Lenses for Video</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2012/choosing-lenses-for-video/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Cinematic Look, Part 4: Film Grain</title>
		<link>https://www.shutterangle.com/2012/cinematic-look-film-grain/</link>
		<comments>https://www.shutterangle.com/2012/cinematic-look-film-grain/#comments</comments>
		<pubDate>Mon, 17 Sep 2012 20:24:04 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[cinematic look]]></category>
		<category><![CDATA[film grain]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=1132</guid>
		<description><![CDATA[<p>Film grain is possibly the single most differentiating factor of film images when compared to digital images (in both stills and video). It is also the first characteristic of the film look the average viewer would pick if they had to point their finger. This part of the cinematic look series  [...]</p><p><a href="https://www.shutterangle.com/2012/cinematic-look-film-grain/">Cinematic Look, Part 4: Film Grain</a></p>]]></description>
			<content:encoded><![CDATA[<p>Film grain is possibly the single most differentiating factor of film images when compared to digital images (in both stills and video). It is also the first characteristic of the film look the average viewer would pick if they had to point their finger. This part of the cinematic look series explores some of the properties of film grain and how film grain relates to image perception. We also talk a bit about digital sensor noise, which is the closest perceptual relative of film grain in the digital video world. <span id="more-1132"></span><br />
<br/></p>
<h6><strong>What is film grain?</strong></h6>
<div id="attachment_1135" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/09/WarGrain.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/09/WarGrain.jpg" alt="War of the Worlds screenshot, film grain" title="War of the Worlds (2005)" width="262" class="size-full wp-image-1135" /></a><p class="wp-caption-text">Film grain is further intensified here by <a href='http://en.wikipedia.org/wiki/Bleach_bypass' target='_blank'>bleach bypass</a>. Click the image to enlarge it.</p></div>
<p><em>Film grain</em> is often used to describe a few different concepts. For the film-savvy viewer, film grain is the random grain-like texture seemingly overlaid on a scene captured on film. It is observed in a paper print, on a display or through projection. In this respect film grain is somewhat related to film scratches and dirt specks. On a higher level, the grain texture is one element that distinguishes the film image from reality. This is true for stills, but even more so in cinema, where the random nature of the grain manifests itself in consecutive frames, and the greater enlargement makes it more pronounced. A more technically inclined person with less sentiment for film would simply call this apparent image graininess <em>noise</em>.</p>
<p>In black and white negatives the light sensitive elements are usually silver halide crystals suspended in gelatin. When photons hit a crystal, it is converted to a latent developable state. Subsequently, lab processing dispenses with the unexposed particles. Film grain is commonly thought of as the remaining silver particles. This isn&#8217;t entirely correct. While observable film grain is a result of these image-forming particles, it is distinctly different from the particles themselves. The individual silver particles are so small they can&#8217;t be seen. What is perceived as grain is clumps of these particles and, more precisely, micro-variations in areas of relatively uniform negative density. In color film the silver particles are coupled with dyes; the silver is removed in processing after development and only dye clouds remain. These dye clouds are the cause of graininess in color film.</p>
<div id="attachment_1142" class="wp-caption aligncenter" style="width: 522px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/09/filmgrain.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/09/filmgrain.jpg" alt="Film grain magnified" title="Film grain" width="512" class="size-full wp-image-1142" /></a><p class="wp-caption-text">Silver halide particles embedded in gelatin (left). Particles removed from gelatin for a better view (right). Note the 2 micrometers reference at the bottom. (images shamelessly lifted from <a href='http://www.optics.rochester.edu/workgroups/cml/opt307/spr04/jidong/' target='_blank'>this page</a>)</p></div>
<div style="float: right; margin-left: 12px; margin-top: 5px; width: 40%; background-color: LightGrey; padding: 10px 10px 0px; border-width: thin; border-color: black;">
<strong>Print Grain Index</strong></p>
<p>Granularity is a bit complex and, ultimately, not very telling. For consumer film stocks Kodak have moved to a more meaningful concept: the <em>Print Grain Index</em>. Print Grain Index takes into consideration image enlargement and is entirely based on perception from a fixed distance (14 inches, or around 36 cm). It is outside the scope of this article, but to illustrate the fine grain of modern stocks: for example, a Kodak Ektar 100 negative can be enlarged to 6&#215;4&#8243; (15&#215;10 cm) from a 35mm source, or to 10&#215;8&#8243; from a medium format source without any perceptible grain when observed from the aforementioned control distance.
</div>
<p>Graininess is a subjective visual sensation, and it is highly dependent on scene tones, colors and details. All this makes it hard to quantify. Traditionally, film stock density unevenness is quantified through measurements of density fluctuations. This objective quantity is called <em>granularity</em>. Granularity is measured with a microdensitometer in a small area of uniform density at 1.0 density above base. The microdensitometer usually has an aperture of 48 microns (0.048mm). The standard deviation from the average density gives us root-mean-square granularity. The standard deviation is very small, so it is usually multiplied by 1000 to bring it into whole numbers territory. Where the silver particles are very small, many of them are averaged and fluctuation is small. With large particles, fewer of them get averaged, and fluctuations are larger. Modern film stocks (like Kodak Vision 3 stocks) have RMS granularity below 5, which is considered finer than extremely fine.<br />
<br/></p>
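<p>To make the numbers concrete, here is a small Python sketch of the RMS granularity calculation described above. The density samples are simulated with an assumed fluctuation, not taken from a real microdensitometer:</p>
<pre><code>import numpy as np

# Simulated microdensitometer readings over a uniformly exposed patch
# at 1.0 density above base (48 micron aperture assumed). The 0.005
# fluctuation is an illustrative figure, not a measured one.
np.random.seed(0)
densities = np.random.normal(loc=1.0, scale=0.005, size=10000)

# RMS granularity: standard deviation of density, multiplied by 1000.
rms_granularity = densities.std() * 1000
print(rms_granularity)  # around 5: "finer than extremely fine"
</code></pre>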
<h6><strong>Film grain and film properties</strong></h6>
<p>In order to facilitate the capture of different shades, film uses silver halide particles of various sizes. But each particle needs the same number of photons for exposure, no matter what its size. So larger particles are exposed faster, and smaller particles need more light (or more time) to capture enough photons. This variance in particle size is responsible for the great <a href="http://www.shutterangle.com/2012/cinematic-look-dynamic-range/" title="Cinematic Look, Part 3: Dynamic Range">dynamic range of film</a>. In the dark areas of the image only the large particles are exposed, and in brighter areas particles of all sizes get exposed. That&#8217;s the reason grain appears coarser in shadows and low mids. Some cinematographers overexpose a bit in order to get the cleanest results, but this is stock specific.</p>
<p>There is a similar connection with film speed (sensitivity). Slower film is cleaner and finer grained due to its very fine individual grains. Fast films need larger particles to capture light faster, and thus exhibit coarser grain. Different developers can affect graininess, especially with black &#038; white film. More notably, developers containing silver solvents lead to a softer grain look.</p>
<p>Film grain is also connected to image sharpness. While the relation is complex, especially in color film, fine grain stocks generally resolve more than coarser grained stocks. But there is more than resolving power to perceived sharpness. Film grain is noise and can mask out image detail. But it can also enhance tonality and fine detail by modulating tonal changes that are too minuscule for the brain to register. For those not easily scared by terminology: in this case film grain acts as <em>stochastic noise</em> and causes <em>stochastic resonance</em>.</p>
<div id="attachment_1150" class="wp-caption aligncenter" style="width: 522px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/09/LennaAndGrain.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/09/LennaAndGrain.jpg" alt="Lenna film grain" title="Lenna and film grain" width="512" class="size-full wp-image-1150" /></a><p class="wp-caption-text">Lenna clean (left), and with film grain overlaid (right). Grain brings out some of the more subtle detail and occasionally creates fake detail. (click to enlarge)</p></div>
<p><br/></p>
<h6><strong>Film grain and the film look</strong></h6>
<p>Film stock manufacturers have always considered graininess a defect and have strived to decrease granularity. Filmmakers, on the other hand, often consider low to moderate grain an important aesthetic, both for the pleasing qualities of its texture and for its subtle veil over reality.</p>
<p>But there are a couple of other properties not so obviously related to the film look. They aren&#8217;t as much a result of film grain as they are a consequence of film <em>grains</em>. </p>
<p>A digital image is made of rows and columns of dots (pixels). It is a matrix. So a digital sensor always samples the image delivered by the lens uniformly. This leads to <a href="http://en.wikipedia.org/wiki/Aliasing" title="Aliasing at Wikipedia" target="_blank">aliasing</a> problems with high frequency detail in the scene. Hence the need for anti-aliasing filters in the typical digital camera. These optical low-pass filters can kill very fine detail and also complicate the use of small symmetrical lenses on digital cameras with short flange focal distances (like Sony E-mount cameras). In contrast, the individual grains of film and, subsequently, the clumps that form visible film grain are placed randomly. This prevents any noticeable aliasing. There is an attempt to mimic this in the recent Fuji X-mount digital cameras: they use a pseudo-random color filter array on their CMOS sensors and omit the optical anti-aliasing filter.</p>
<div id="attachment_aliasing" class="wp-caption aligncenter" style="width: 522px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/09/alias.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/09/alias.jpg" alt="Aliasing and anti-aliasing" title="Aliasing" width="512" class="size-full wp-image-1178" /></a><p class="wp-caption-text">An example of aliasing (above) and the same image anti-aliased (below). Note the faux diagonal lines in the bricks in the top image. (click to enlarge)</p></div>
<p>The other effect concerns movies specifically and is also a result of the random distribution of grains in film. Because of this randomness, each consecutive frame in a film roll captures a slightly different image of the scene (for static or slowly moving scenes). While any single frame may lack some details, all frames as a whole can capture lots of fine detail. When film is projected at 24 fps, the brain integrates the individual frames&#8217; contributions and sees the cumulative result. This lends an organic feel to projected film images.<br />
<br/></p>
<h6><strong>Sensor noise</strong></h6>
<p>The closest relation to film grain in digital video is sensor noise. Unlike grain (usually looked at positively or ambivalently), sensor noise has traditionally been considered a detriment to image quality, because digital sensor noise lacks grain&#8217;s inherent randomness of appearance and variation in size. Sensor photosites (pixels) are placed on a matrix: ordered and fixed in size. These properties translate to sensor noise. There are various causes of noise in sensors &#8211; shot noise (photon noise), thermal noise, readout and reset noise, quantization noise, voltage variance noise, etc. &#8211; all merging into a single combined manifestation. In earlier sensors noise would often manifest itself in patterns, and would appear quite objectionable. Newer sensors largely dispense with the fixed noise patterns and demonstrate a much more random noise structure. In CMOS sensors, debayering acts as partial anti-aliasing on noise and softens it. Video compression can further smear noise: this can be blotchy and ugly with heavy compression, but can be a positive when only slightly affecting noise. As a result, some recent digital cameras produce an organic noise structure that shares characteristics with film grain.</p>
<p>In general, images acquired digitally are cleaner than film, especially at base ISO speeds. Noise is mostly apparent in dark areas or when sensor gain is applied to increase sensitivity. In both cases visible noise is the result of lower signal-to-noise ratio.<br />
<br/></p>
<h6><strong>Film grain in post</strong></h6>
<p>Film grain is often added to images in post in an attempt to get some of the characteristics mentioned above. Usually grain is applied to simply mimic the film look, but there are sometimes technical reasons behind the decision. Digitally acquired images can look clinical, being both clean and sharp &#8211; even more so CG images. Overlaying a bit of film grain dirties them and adds some texture. With low tonal resolution images (such as 8-bit compressed video) film grain can act as <a href="http://en.wikipedia.org/wiki/Dither" title="Dither at Wikipedia" target="_blank">dither</a> and help cover banding issues. But adding film grain is not reserved for digital cinematography. Cinematographers routinely add grain in DI to movies shot on film, because the latest film stocks with their extremely fine granularity can look scarily clean. Added film grain can be either software simulated, or scanned from actual exposed film stock. While the latter is the preferred method for most, there are some good synthetic noise examples around.</p>
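<p>The dithering effect is simple to sketch in Python. Quantizing a smooth gradient to a few levels produces hard bands; adding a little grain-like noise before quantization trades those bands for fine texture that averages out to the original ramp:</p>
<pre><code>import numpy as np

gradient = np.linspace(0.0, 1.0, 1920)      # one smooth scanline
levels = 16                                 # deliberately coarse tonality

banded = np.round(gradient * (levels - 1)) / (levels - 1)
noise = np.random.normal(0.0, 0.5 / levels, gradient.size)
dithered = np.round((gradient + noise) * (levels - 1)) / (levels - 1)

print(np.unique(banded).size)               # 16 hard steps
# Averaging neighboring dithered pixels tracks the original gradient
# more closely than the banded version does.
smooth = np.convolve(dithered, np.ones(9) / 9, 'same')
print(np.abs(banded - gradient).mean(), np.abs(smooth - gradient).mean())
</code></pre>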
<p>You can read the previous parts of the <em>Cinematic Look</em> series here:<br />
Part 1: <a href="http://www.shutterangle.com/2012/cinematic-look-aspect-ratio-sensor-size-depth-of-field/" title="Cinematic Look, Part 1: Aspect Ratio, Sensor Size and Depth of Field">Aspect Ratio, Sensor Size and Depth of Field</a><br />
Part 2: <a href="http://www.shutterangle.com/2012/cinematic-look-frame-rate-shutter-speed/" title="Cinematic Look, Part 2: Frame Rate and Shutter Speed">Frame Rate and Shutter Speed</a><br />
Part 3: <a href="http://www.shutterangle.com/2012/cinematic-look-dynamic-range/" title="Cinematic Look, Part 3: Dynamic Range">Dynamic Range</a></p>
<p><a href="https://www.shutterangle.com/2012/cinematic-look-film-grain/">Cinematic Look, Part 4: Film Grain</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2012/cinematic-look-film-grain/feed/</wfw:commentRss>
		<slash:comments>9</slash:comments>
		</item>
		<item>
		<title>Exposure Tools for Digital Video, Part 2</title>
		<link>https://www.shutterangle.com/2012/exposure-tools-for-digital-video-part-2/</link>
		<comments>https://www.shutterangle.com/2012/exposure-tools-for-digital-video-part-2/#comments</comments>
		<pubDate>Fri, 07 Sep 2012 13:30:51 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[exposure]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=1075</guid>
		<description><![CDATA[<p>The first part of this exposure tools overview article introduced the topic and covered histogram, zebras and waveform monitor. We are now continuing the summary with a few more tools: false color, spot displays, external light meters. Familiarity with all these instruments is a prerequisite for  [...]</p><p><a href="https://www.shutterangle.com/2012/exposure-tools-for-digital-video-part-2/">Exposure Tools for Digital Video, Part 2</a></p>]]></description>
			<content:encoded><![CDATA[<p>The <a href="http://www.shutterangle.com/2012/exposure-tools-for-digital-video-part-1/" title="Exposure Tools for Digital Video, Part 1">first part</a> of this exposure tools overview article introduced the topic and covered histogram, zebras and waveform monitor. We are now continuing the summary with a few more tools: false color, spot displays, external light meters. Familiarity with all these instruments is a prerequisite for good technical understanding and creative handling of various exposure tasks. <span id="more-1075"></span><br />
<br/></p>
<h6><strong>False color</strong></h6>
<div id="attachment_1089" class="wp-caption alignleft" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/09/falsecolors.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/09/falsecolors.jpg" alt="False Colors SmallHD" title="False Colors" width="262" class="size-full wp-image-1089" /></a><p class="wp-caption-text">False Color as implemented on the SmallHD field monitor; reference color scale at the bottom.</p></div>
<p>False color is an exposure tool that colorizes specific levels in the image. Tonal regions are color coded with easily recognizable colors. Sometimes these regions include only the near-clipping extremes and the most common tonal range of skin, but the whole range can be split into intervals and colorized. A legend for the different color codes is often included in the frame for reference.</p>
<p>False color can be thought of as an extension of zebras. Zebras can be left active for monitoring during recording. But false color is generally used to tune exposure in advance, as composition is difficult to judge with colorization on. Overreliance on false color with its neatly predefined tonal zones can force shooters into exposure patterns. This tool should be used for reference only.<br />
<br/></p>
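<p>At its core, false color is just a lookup from luma level to overlay color. A minimal Python sketch follows; the zone boundaries and colors are illustrative, not any particular manufacturer&#8217;s spec:</p>
<pre><code>ZONES = [                    # (upper bound in percent, overlay color or None)
    (2,   (128, 0, 255)),    # crushed blacks: purple
    (10,  (0, 0, 255)),      # deep shadows: blue
    (45,  None),             # lower mids: leave the image as is
    (55,  (0, 255, 0)),      # around middle grey: green
    (70,  None),
    (80,  (255, 150, 150)),  # bright skin zone: pink
    (98,  None),
    (100, (255, 0, 0)),      # clipped whites: red
]

def false_color(luma):
    """Overlay color for a luma level in percent, or None to keep the pixel."""
    for bound, color in ZONES:
        if luma &lt;= bound:
            return color
    return (255, 0, 0)
</code></pre>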
<h6><strong>Through-the-lens digital spotmeters</strong></h6>
<p>Isolating a portion of the image for evaluation can be a big help. Problematic or important areas can be quickly judged on their own or in respect to the whole image. This is especially useful when paired with tools that display the exact levels. Constant aperture zooms can be used to isolate an area of interest while preserving the exposure. This is not always applicable though.</p>
<p>Some cameras (Canon C-series, for example) include a spot waveform, which imposes the waveform of a smaller rectangular image region on the waveform of the whole image. <a href="http://www.magiclantern.fm/" title="Magic Lantern" target="_blank">Magic Lantern</a> includes a simpler and more straightforward spotmeter. It indicates the averaged brightness level of a small rectangular area in the center of the picture. This is a powerful exposure tool when coupled with some good knowledge of the underlying transfer curve. Finding out reflected light (luminance) ratios is then just a matter of pointing and measuring shadows and highlights.<br />
<br/></p>
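<p>A spotmeter of this kind reduces to averaging a small central window, and the reading becomes really useful once pushed back through the transfer curve. A rough Python sketch, assuming a simple power-law curve rather than any specific camera gamma:</p>
<pre><code>import numpy as np

def spot_reading(luma, size=0.05):
    """Average level of a small central rectangle of a 2D luma image."""
    h, w = luma.shape
    dy, dx = max(1, int(h * size)), max(1, int(w * size))
    spot = luma[(h - dy) // 2 : (h + dy) // 2, (w - dx) // 2 : (w + dx) // 2]
    return spot.mean()

def stops_between(v_lo, v_hi, gamma=2.2):
    """Luminance ratio in stops between two readings, assuming the
    levels were encoded as v = L ** (1 / gamma)."""
    return np.log2((v_hi / v_lo) ** gamma)
</code></pre>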
<h6><strong>Good old light meters</strong></h6>
<p>Any overview of exposure tools is not complete without a few words on light meters.<br />
For years light meters have been indispensable to the cinematographer due to film’s lack of immediate feedback. They are of two basic varieties: reflected light meters and incident light meters. Both functions are often found in a single body. Incident light meters measure falling (incident) light, or illuminance. Reflected light meters measure light reflected from scene surfaces, or luminance. This is an important difference: incident light readings are thus scene independent, unlike reflected light readings. Cinematographers, in general, have tended to rely on incident readings. The spotmeter is a variant of the reflected light meter. Unlike standard reflected light meters, which usually read reflected light within an angle of 20 to 30 degrees, spotmeters measure reflected light in a small angle (sometimes as small as 1 degree). This makes them very useful for isolating surfaces and for contrast ratio calculations. Light measurements are converted to photographic exposure terms based on the assumption of average scene reflectance (18% grey). So creative use of light meters usually requires exposure compensation based on the creative intent.</p>
<div id="attachment_1079" class="wp-caption aligncenter" style="width: 590px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/09/lightmeters.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/09/lightmeters.jpg" alt="Light Meters" title="Light Meters" width="580" class="size-full wp-image-1079" /></a><p class="wp-caption-text">Light Meters: Gossen Luna-Lux SBC, Kenko KFM-1100, Sekonic L-758DR</p></div>
<p>All digital cameras feature internal light meters. These are through-the-lens reflected light meters. On photo cameras they usually default to sophisticated evaluative (matrix/multizone) metering modes. Matrix metering is great for still images, but it is hard to use consistently for matching exposure between shots. An experienced video user can get more mileage from the spot metering mode of the camera.</p>
<p>With the rise of digital video and the associated exposure tools, external light meters are falling into obscurity. Digital video shooters tend to dismiss them as unnecessary and old-fashioned. Nevertheless, they can still be useful to a knowledgeable video shooter. Unlike any of the exposure tools mentioned above, external light meters measure either the scene reflected light or the incident light directly, and not by processing the captured image. So predicting the transfer of scene luminance into image levels requires intimate knowledge of the tonal characteristics of the recording medium, and &#8211; in the case of incident readings &#8211; some experience of the reflectivity of various materials. That&#8217;s how shooting film works. A somewhat underestimated property of incident readings: being independent of the reflective properties of scene elements, they can allow for consistent light measurement and, consequently, for consistent exposure without the need for a grey card or any other scene reference. And as they don’t need a camera to function, external light meters are also handy for location scouting and setting lights up. This is their most common use nowadays.</p>
<div id="attachment_1086" class="wp-caption aligncenter" style="width: 590px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/09/lightreading.png"><img src="http://www.shutterangle.com/wp-content/uploads/2012/09/lightreading.png" alt="Incident light and reflected light readings" title="Light Measurements" width="580" class="size-full wp-image-1086" /></a><p class="wp-caption-text">Incident readings are taken with the meter at the subject, pointed towards the camera. Reflected readings are taken with the meter at the camera, pointed towards the subject.</p></div>
<p>It is worth noting that light meters are not created equal. With different brands, <a href="http://en.wikipedia.org/wiki/Light_meter#Exposure_meter_calibration" title="Exposure meter calibration" target="_blank">varying calibration constants</a> may lead to variations of up to 1/5 stop in measurements. This is a bit annoying, but ultimately doesn’t really matter &#8211; meters should always be tested (and possibly corrected through meter correction factors) with the specific transfer curve used.</p>
<p>A further article will explore the application of these exposure tools.</p>
<p><a href="https://www.shutterangle.com/2012/exposure-tools-for-digital-video-part-2/">Exposure Tools for Digital Video, Part 2</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2012/exposure-tools-for-digital-video-part-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Exposure Tools for Digital Video, Part 1</title>
		<link>https://www.shutterangle.com/2012/exposure-tools-for-digital-video-part-1/</link>
		<comments>https://www.shutterangle.com/2012/exposure-tools-for-digital-video-part-1/#comments</comments>
		<pubDate>Fri, 17 Aug 2012 12:02:46 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[exposure]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=1000</guid>
		<description><![CDATA[<p>On the formal level, exposure is the amount of light that reaches the image-capturing medium. It is determined by the sensitivity of the medium, the illuminance at the image plane and the exposure time. Setting exposure is one of the most important artistic decisions and probably the most important  [...]</p><p><a href="https://www.shutterangle.com/2012/exposure-tools-for-digital-video-part-1/">Exposure Tools for Digital Video, Part 1</a></p>]]></description>
			<content:encoded><![CDATA[<p>On the formal level, exposure is the amount of light that reaches the image-capturing medium. It is determined by the sensitivity of the medium, the illuminance at the image plane and the exposure time. Setting exposure is one of the most important artistic decisions and probably the most important technical decision for a shot because it governs the distribution of tones in the image. With film, exposure choice requires good knowledge of the available exposure latitude and the dynamic range distribution of the emulsion. Light levels are measured with a light meter and the exposure can be further adjusted for optimal tonality based on properties like scene mood, vision or simply personal preferences. <span id="more-1000"></span></p>
<p>The instant feedback of digital video is its greatest advantage over film. On a shoot, this feedback relates to exposure more than anything else. Exposure errors can be fixed immediately (no need for rushes or dailies) and exposure choice can be informed and guided by that feedback. A calibrated field monitor can be of great help, but it needs both good viewing conditions and excellent rendering capabilities in order to offer a decent representation of the recorded video. Exposure is often hard to judge based on monitor feedback alone. But a number of tools can process the captured signal and display it in various ways to aid the exposure decision. This article is an overview of the most popular of these exposure tools, and their strengths and weaknesses.</p>
<p>Most of these exposure tools take as their input the image data after the transfer curve is applied, so they directly represent the recorded video. In some cases &#8211; mostly with higher-end equipment &#8211; the display data may differ from the actual recorded data. This is usually done when <a href="http://www.shutterangle.com/2012/canon-picture-styles-shooting-flat-or-not/" title="Canon Picture Styles: Shooting Flat or Not?">log space video</a> is recorded. The flat image is not very useful for monitoring, so it may have some LUT applied for preview and exposure purposes. This LUT will generally simulate to some degree the final look of the video.</p>
<p>These exposure tools are available through the camera firmware, on field or studio monitors, or through specialized software fed with the camera signal. In any case, they need a screen to display info on, be it a camera LCD, an external display, or just a computer screen. They are at their most useful when combined with good knowledge of the camera transfer curves (gammas).<br />
<br/></p>
<h6><strong>Histogram</strong></h6>
<div style="float: left; margin-right: 8px;">
<div id="attachment_1048" class="wp-caption alignright" style="width: 260px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/08/hist-clip.png"><img src="http://www.shutterangle.com/wp-content/uploads/2012/08/hist-clip.png" alt="Histogram" title="Overexposure" width="250" class="size-full wp-image-1048" /></a><p class="wp-caption-text">This histogram indicates overexposure with large clipped areas</p></div></p>
<p><div id="attachment_1053" class="wp-caption alignright" style="width: 260px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/08/hist-dark.png"><img src="http://www.shutterangle.com/wp-content/uploads/2012/08/hist-dark.png" alt="Histogram" title="Underexposure" width="250" height="100" class="size-full wp-image-1053" /></a><p class="wp-caption-text">An overly dark image but without truly crushed blacks</p></div>
</div>
<p>The histogram is familiar to most photographers and Photoshop users. It is a simple bar chart, displaying the distribution of image pixels over the possible range of the video signal domain. The more pixels falling into a particular value bin, the higher its column in the histogram. The histogram doesn’t really contain any information about what part of the image falls where. But it is useful for quickly identifying the presence of clipped highlights or crushed blacks, as well as determining the overall key of the acquired image. A histogram skewed to the right suggests a high key image, a skew to the left suggests low key. Histograms usually come in two modes &#8211; luminance (brightness) and RGB &#8211; which are often switchable according to the preferences of the camera op. RGB histograms are actually compound histograms featuring a histogram for each of the three color channels, either overlaid or in parade. They help identify color channel over-saturation or color balance issues.</p>
<p>The main strength of the histogram is that its graphic representation can be scaled down without significant information loss, because it is the overall shape that delivers the meaningful exposure information. This allows it to be overlaid on the actual image, tucked somewhere to the side.<br />
<br/></p>
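<p>Computationally the histogram is trivial, which is one reason every camera offers it. A Python sketch, with the high/low key heuristic being an illustrative simplification:</p>
<pre><code>import numpy as np

def histogram(luma, bins=256):
    """Bin the pixels of an 8-bit luma image over the signal range."""
    counts, _ = np.histogram(luma, bins=bins, range=(0, 256))
    return counts

def exposure_hints(counts):
    total = counts.sum()
    clipped = counts[-1] / total    # share of pixels in the top bin
    crushed = counts[0] / total     # share of pixels in the bottom bin
    mean_bin = (counts * np.arange(counts.size)).sum() / total
    return clipped, crushed, mean_bin   # a high mean suggests high key
</code></pre>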
<h6><strong>Zebras</strong></h6>
<div id="attachment_1043" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/08/zebras.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/08/zebras.jpg" alt="Zebras Magic Lantern" title="Zebras" width="262" class="size-full wp-image-1043" /></a><p class="wp-caption-text">Zebras marking blacks near the crush point</p></div>
<p>Zebras are a video camera invention and one of the first exposure tools to appear in video cameras. The concept is simple. Colored stripes (hence the name) are overlaid on the actual image, marking the position in the image of any clipped whites or crushed blacks, or both. Unlike the histogram, zebras localize and visualize the problematic areas, so it is easier to judge if the out-of-range pixels are important or can be painlessly sacrificed. Depending on the implementation, the clip/crush points that trigger the warning may be customizable, and the stripes may blink or feature other refinements.<br />
<br/></p>
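<p>In code, zebras are little more than thresholding. A minimal Python sketch, with levels in percent and the trigger points assumed rather than taken from any camera:</p>
<pre><code>import numpy as np

def zebra_masks(luma, hi=98.0, lo=2.0):
    """Flag near-clipped and near-crushed pixels of a luma image given
    in percent. A real camera would draw animated stripes over these
    pixels; boolean masks are enough to show the idea."""
    hi_mask = np.greater_equal(luma, hi)   # candidate clipped whites
    lo_mask = np.less_equal(luma, lo)      # candidate crushed blacks
    return hi_mask, lo_mask
</code></pre>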
<h6><strong>Waveform monitor</strong></h6>
<div style="float: right; margin-left: 10px;">
<div id="attachment_1008" class="wp-caption alignnone" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/08/wfm-ref.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/08/wfm-ref.jpg" alt="Grenoble with the synchrotron" title="Reference image" width="262" class="size-full wp-image-1008" /></a><p class="wp-caption-text">Sample input image</p></div>
<div id="attachment_1009" class="wp-caption alignnone" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/08/wfm-y.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/08/wfm-y.jpg" alt="luminance waveform monitor" title="Luminance waveform" width="262" class="size-full wp-image-1009" /></a><p class="wp-caption-text"> Luminance waveform of the above image (in IRE units)</p></div>
<div id="attachment_1010" class="wp-caption alignnone" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/08/wfm-rgb.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/08/wfm-rgb.jpg" alt="RGB parade waveform monitor" title="RGB parade waveform" width="262" class="size-full wp-image-1010" /></a><p class="wp-caption-text">The RGB parade shows the blue channel is clipped</p></div>
</div>
<p>The waveform monitor (WFM) is one of the most useful tools in video production, both during shooting and in post. Traditionally, the WFM is a monitoring device that displays the level of a video signal over time. In analog video this means voltage. In the digital world the WFM concept has been expanded to handle discrete values. In a sense, the digital waveform monitor is an extension of the histogram. But instead of binning pixels from the entire image, the waveform monitor bins pixels from each image column separately. These bins are then represented by pixels in a column of the output of the WFM. On the vertical axis of the waveform monitor are the values defining the bins. These can be the actual video space values, values scaled from 0 to 100%, or <a href="http://en.wikipedia.org/wiki/IRE_(unit)" title="IRE at Wikipedia" target="_blank">IRE</a> units. The horizontal axis runs over the columns of the input camera image. The intensity of the output pixel corresponds to the occupancy of its respective bin. The more pixels from the corresponding input column falling into a value bin, the brighter the bin’s pixel on the WFM.</p>
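<p>Put differently, the digital WFM is a per-column histogram. A straightforward Python sketch of the construction just described:</p>
<pre><code>import numpy as np

def waveform(luma, bins=256):
    """Digital waveform monitor for an 8-bit luma image: one histogram
    per image column. Output pixel (bin, column) gets brighter with the
    number of pixels of that column falling into that value bin."""
    h, w = luma.shape
    wfm = np.zeros((bins, w))
    for col in range(w):
        counts, _ = np.histogram(luma[:, col], bins=bins, range=(0, 256))
        wfm[:, col] = counts
    return np.flipud(wfm)   # high values at the top, as on a scope
</code></pre>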
<p>The waveform monitor can do pretty much any job the histogram can handle, plus more. Because each image column is processed separately and there is one-to-one correspondence between image columns and WFM columns (assuming a non-scaled WFM image), any offending areas are easily horizontally localized on the waveform monitor, and then in the actual camera image. This is especially useful for faces, grey cards or just large suspicious areas. Moreover, important values on the vertical axis can be signified with lines parallel to the horizontal axis for quick judgment of the image tonal distribution. This is a powerful tool when coupled with good knowledge of the transfer curve used.</p>
<p>Waveform monitors can be used with a variety of input color spaces. Luminance is generally enough for exposure and is one of the channels of the typical YCbCr digital video stream. RGB, YRGB and others can be used for a variety of tasks, most notably for color correction. These are either displayed in parade, or overlaid in different colors.</p>
<p>The exposure tools overview continues with <a href="http://www.shutterangle.com/2012/exposure-tools-for-digital-video-part-2/" title="Exposure Tools for Digital Video, Part 2">false color, spot displays and light meters</a>.</p>
<p><a href="https://www.shutterangle.com/2012/exposure-tools-for-digital-video-part-1/">Exposure Tools for Digital Video, Part 1</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2012/exposure-tools-for-digital-video-part-1/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Book Review: Light &#8211; Science &amp; Magic</title>
		<link>https://www.shutterangle.com/2012/book-review-light-science-magic/</link>
		<comments>https://www.shutterangle.com/2012/book-review-light-science-magic/#comments</comments>
		<pubDate>Mon, 16 Jul 2012 13:31:08 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Book Reviews]]></category>
		<category><![CDATA[book review]]></category>
		<category><![CDATA[lighting]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=969</guid>
		<description><![CDATA[<p>
Light &#8211; Science &#038; Magic by Fil Hunter, Steven Biver and Paul Fuqua is probably the most important book on lighting that you will ever read. Moreover, if you only ever read one book on lighting, make it this one. This is, indeed, a rather bold statement. In fact, some readers who are new to  [...]</p><p><a href="https://www.shutterangle.com/2012/book-review-light-science-magic/">Book Review: Light &#8211; Science &#038; Magic</a></p>]]></description>
			<content:encoded><![CDATA[<div style="float: left; margin-right: 12px; margin-top: 5px;"><img alt="Light - Science and Magic. An Introduction to Photographic Lighting" title="Light - Science &#038; Magic" src="http://www.shutterangle.com/wp-content/uploads/2012/07/light-sm.jpg"/></div>
<p><em><a target="_blank" rel="nofollow" href="http://www.amazon.com/gp/product/0240812255/ref=as_li_tf_tl?ie=UTF8&#038;camp=1789&#038;creative=9325&#038;creativeASIN=0240812255&#038;linkCode=as2&#038;tag=revmaz-20" title="Light Science and Magic: An Introduction to Photographic Lighting by Fil Hunter, Steven Biver and Paul Fuqua">Light &#8211; Science &#038; Magic</a></em> by Fil Hunter, Steven Biver and Paul Fuqua is probably the most important book on lighting that you will ever read. Moreover, if you only ever read one book on lighting, make it this one. This is, indeed, a rather bold statement. In fact, some readers who are new to shooting images may actually be puzzled by this praise once they read the actual book. The information there can be fully appreciated after you&#8217;ve fought a bit with real-world lighting problems.</p>
<p><span id="more-969"></span></p>
<p>Lighting is about the relationship between lights, subjects and camera (viewpoint). The third part of this triad is not immediately apparent to everyone. It is surprising (then again, maybe not) how many people &#8220;with experience&#8221; actually struggle when they face basic problems like removing an unwanted specular highlight in a multi-light setup. They then start switching lights off and on, or moving them around to localize the offending light. And that&#8217;s not even in the domain of hard things to do: lighting a shot with lots of glass or glossy surfaces in it can be an intimidating task if you don&#8217;t fully grasp how light works.</p>
<p>How does light work?<br />
How many times have you seen this question asked? Then how many times have you seen &#8220;I have $1000 for a lighting kit. What lights should I buy?&#8221; or &#8220;How do you light this shot [insert-reference-link]?&#8221;? Yeah. Lots of people asking about lights, no one asking about light.</p>
<p><em>Light &#8211; Science &#038; Magic (An Introduction to Photographic Lighting)</em> actually attempts to answer this question. It is by no means a fully detailed work, and the approach may not be to everyone&#8217;s liking (it is quite heavily biased towards product photography). But it is the only book I&#8217;ve seen that attempts this. And that&#8217;s why I believe it is the first book on the subject of lighting that people should read. You can then move on to other books like <em><a target="_blank" rel="nofollow" href="http://www.amazon.com/gp/product/0240810759/ref=as_li_ss_tl?ie=UTF8&#038;camp=1789&#038;creative=390957&#038;creativeASIN=0240810759&#038;linkCode=as2&#038;tag=revmaz-20" title="Set Lighting Technician's Handbook by Harry Box">Set Lighting Technician&#8217;s Handbook</a></em> or <em><a target="_blank" rel="nofollow" href="http://www.amazon.com/gp/product/1439169063/ref=as_li_ss_tl?ie=UTF8&#038;camp=1789&#038;creative=390957&#038;creativeASIN=1439169063&#038;linkCode=as2&#038;tag=revmaz-20" title="Film Lighting: Talks with Hollywood's Cinematographers and Gaffers by Kris Malkiewicz">Film Lighting</a></em>, for example. These are also good books on their own, and will likely get reviews here.</p>
<p><em>Light &#8211; Science &#038; Magic</em> covers the basic photographic properties of light: brightness, contrast, color. It also covers typical material like hard and soft light, small and large sources, and applying the inverse square law. But where the book really shines is in the exploration of the interaction of light and subject, and light and camera. Transmission, absorption and, of course, reflection. I haven&#8217;t seen another book that can teach as much about reflection management.</p>
<p>Specular (direct) reflection, diffuse reflection, polarized reflection are all covered with an emphasis on the family of angles causing direct reflection. The book then goes on to show how this is relevant in revealing surface texture and subject shape, and for the purpose of separation and delineation. This is further detailed in two great chapters on lighting metal and glass. The first material is highly reflective, the second &#8211; both reflective and transparent. This is all essential knowledge about suppressing or exploiting specular reflection, and applicable to a myriad of subjects and situations.</p>
<p>Then there is one of the better overviews of portrait lighting, based on the functional properties of the lights involved, followed by a very useful chapter on the connection of characteristic curves (transfer curves) and exposure, and how this connection relates to overexposure and underexposure. This is an often misunderstood (and sometimes underestimated) concept. Its significance is fundamental when purposefully exposing for a specific part of the transfer curve.</p>
<p>Most of the examples in the book are based on product photography lighting. But once you grasp the concepts, the rest is really a matter of scale. A popular saying has it that if you can light a human face, you can light everything. Well, after reading this book the obvious conclusion is: &#8220;If you can light a small glossy box, you can light everything&#8221;.</p>
<p><em>Light &#8211; Science &#038; Magic</em> won&#8217;t teach you about specific fixtures or light types (although there is some info on the latter in the last chapter). Nor will it teach you how to envision beautiful lighting. What it does is enable you to realize your vision by knowing, controlling and finessing light.</p>
<p><a href="https://www.shutterangle.com/2012/book-review-light-science-magic/">Book Review: Light &#8211; Science &#038; Magic</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2012/book-review-light-science-magic/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Color Matching Lenses for Video</title>
		<link>https://www.shutterangle.com/2012/color-matching-lenses-for-video/</link>
		<comments>https://www.shutterangle.com/2012/color-matching-lenses-for-video/#comments</comments>
		<pubDate>Mon, 25 Jun 2012 14:39:01 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Canon DSLR]]></category>
		<category><![CDATA[lenses]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=916</guid>
		<description><![CDATA[<p>Traditionally, cine lenses have been color matched. Careful selection of glass and coatings results in consistent color and images that intercut flawlessly after the film is processed and edited together. Consistent color is one of the many features of cine lens sets. Among the others are T-stop  [...]</p><p><a href="https://www.shutterangle.com/2012/color-matching-lenses-for-video/">Color Matching Lenses for Video</a></p>]]></description>
			<content:encoded><![CDATA[<p>Traditionally, cine lenses have been color matched. Careful selection of glass and coatings results in consistent color and images that intercut flawlessly after the film is processed and edited together. Consistent color is one of the many features of cine lens sets. Among the others are T-stop markings, matching barrel size, fixed front diameter, smooth aperture, consistent focus and aperture ring sizes, consistent out of focus rendering, consistent contrast, etc. Lots of consistency there. No similar consistency is expected from photographic lenses. <span id="more-916"></span></p>
<div id="attachment_923" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/06/zeisscp2.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/06/zeisscp2.jpg" alt="Carl Zeiss compact primes cp.2" title="Carl Zeiss compact primes" width="262" class="size-full wp-image-923" /></a><p class="wp-caption-text">Carl Zeiss compact primes cp.2 are currently the cheapest truly matched cinema lenses</p></div>
<p>Unfortunately for most video enthusiasts, cine lenses are both quite expensive and usually manufactured to fit cinema mounts like Arri PL or Panavision PV. Only recently have lens sets like the <a href="http://www.adorama.com/searchsite/default.aspx?searchinfo=zeiss+cp.2&#038;KBID=67467&#038;sub=sa_cmlfv" rel="nofollow" target="_blank" title="Zeiss compact primes at Adorama">Zeiss compact primes</a> started to appear for hybrid and photo mounts. Based on the latest generation of Zeiss SLR photo lenses, the Zeiss compact primes have been reworked to offer a significant degree of consistency, plus interchangeable mounts. But while not as expensive as bigger cinema lenses, Zeiss compact primes are not exactly cheap.</p>
<p>This leaves most large sensor video shooters in the photographic lenses camp. Photographic lenses present a lot of challenges for video, but they have a significant advantage: price. Selecting photo lenses for video is an art in itself (and possibly a topic for another article), but here we will focus on lens color and color matching. First, a quick word on a related subject.<br />
<br/></p>
<h6><strong>White balance and lens color</strong></h6>
<p>One advantage of digital cameras over film is the ability to easily tweak white balance. Color film stocks are balanced for some specific color temperature, usually 5500K for daylight and 3200K for tungsten light. Color balance can be adjusted further during post-production, either chemically, through printer lights manipulation, or in DI. </p>
<div style="float: right; margin-left: 12px; margin-top: 5px; width: 40%; background-color: LightGrey; padding: 10px 10px 0px; border-width: thin; border-color: black;">
<strong>Mireds</strong></p>
<p>Color temperature in Kelvin is not very useful for judging the color difference between illuminants. For example, the perceived change from 2400K to 2500K is about the same as the change from 10000K to 12000K. The difference of the reciprocals of the color temperatures is better related to perceived color changes. That&#8217;s why the <em>mired</em> (micro reciprocal degree) concept comes handy.<br/><br/><em>Mireds = 1000000 / (CT in Kelvin)</em><br/><br/>Incidentally, light color conversion gels often describe the resultant color shift in mireds.
</div>
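<p>The sidebar&#8217;s example checks out numerically; a two-line Python confirmation:</p>
<pre><code>def mireds(kelvin):
    return 1000000 / kelvin

print(mireds(2400) - mireds(2500))      # about 16.7 mireds
print(mireds(10000) - mireds(12000))    # about 16.7 mireds as well
</code></pre>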
<p>Digital sensors are also optimized for specific spectral response (usually biased towards daylight). But by applying gain on individual color channels the signal can be white balanced in-camera to pretty much any desired color temperature. Furthermore, while film is usually only balanced on the orange-blue axis and expects green-magenta neutrality, digital video can also be white balanced on the green-magenta axis. For RAW video the white balancing decisions can be deferred to post. For video transformed to some working color space for recording, the white balance is baked in during RAW conversion in-camera.</p>
<p>In-camera white balance means that lens color is less critical for digital video compared to film. The camera can be manually white balanced with a grey card after every lens change and this through-the-lens balancing will lead to neutral rendering. And this is how a lot of videographers work. But this practice deprives the videographer of a very nice tool: white balance can&#8217;t be used for creative purposes. Typical creative white balance uses include colder than neutral balance for winter or overcast feel and warmer than neutral balance for night indoor scenes. Of course, this can be done in post. But too much post-processing is bad for low precision video. Color matched lens sets, on the other hand, allow such creative choices to be dialed in when the sequence starts, since subsequent lens changes do not introduce color deviations.<br />
<br/></p>
<h6><strong>Photo lenses and color</strong></h6>
<p>Ideally, a lens should be completely neutral in color rendering. In reality it is not quite so. Lens color rendering is dependent on coatings and glass: both can cause tints. But if two lenses use the same glass and coatings, they will almost certainly render color in a very similar way. &#8220;Almost&#8221;, because tinted glass or coatings will generally lead to heavier tints (tint stacking) in lenses constructed from more glass elements.</p>
<p>Early coated lenses often exhibit yellowish tints. One reason is that warm coatings render skin lighter and skies darker in black and white, resulting in a favorable tonal separation. Earlier Leica lenses are like this. Some lenses from the 1950s to the 1970s used thorium dioxide to increase the refractive index of the glass. Thorium radioactivity leads to brownish tints over time. Well-known examples include some Pentax Super Takumar and various Kodak lenses.</p>
<p>The wide adoption of color film led to manufacturers developing coatings with a more neutral color rendition. Nevertheless, color varies not only among lenses from different manufacturers, but also in a series of lenses from the same manufacturer. It is common sense, though, that lenses produced by the same manufacturer close in time have higher chances of being made from the same glass and with the same coatings, seeing as both of these usually don&#8217;t change very often. The Zeiss Contax/Yashica mount lenses are good examples of consistent color in lenses stretching production over a significant period of time.</p>
<p>The following image shows a grey card rendered by various lenses. The white balance is set for the first lens and left as is for the others. Consequently, any tints manifested are relative to the first lens. Note how the Leicas are very close to each other (they were manufactured in the same year). Same with the two Contax Zeiss lenses (an early AEJ Planar and a somewhat later MMJ Distagon). The Leicas are noticeably colder than the Contax Zeiss lenses (a difference of around 20 mireds). They also have a green tint.</p>
<div id="attachment_931" class="wp-caption aligncenter" style="width: 522px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/06/lenscolor.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/06/lenscolor.jpg" alt="Lens color matching" title="Grey card shot with various lenses" width="512" class="size-full wp-image-931" /></a><p class="wp-caption-text">From left to right: Leica R Summilux 50/1.4, Leica R Summilux 80/1.4, Carl Zeiss Contax Planar 50/1.7, Carl Zeiss Contax Distagon 28/2.8, Carl Zeiss Jena Flektogon 35/2.4</p></div>
<p><br/></p>
<h6><strong>Color matching photo lenses in camera</strong></h6>
<p>Differences in color rendition between lenses can be easily quantified. One method appropriate for DSLR cameras takes advantage of their ability to shoot RAW still images. This involves shooting a RAW image of a grey card under a consistent light. Manually white balancing the image shot with each lens in Lightroom, Camera Raw or a similar RAW development software and comparing the resulting color temperature and magenta/green tint values will then reveal the <em>relative</em> differences in color rendition. Note that color temperature needs to be converted to mireds to yield a translatable result that can be used for color matching.</p>
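<p>The mired conversion step looks like this in Python; the color temperatures are made-up illustration values, not measurements of any particular lenses:</p>
<pre><code>def wb_shift_mireds(base_kelvin, lens_kelvin):
    """Relative shift between two lenses, from the white balance each
    one needed over the same grey card shot. A negative result means
    the second lens renders colder (bluer) than the base lens."""
    return 1000000 / lens_kelvin - 1000000 / base_kelvin

print(wb_shift_mireds(5000, 5600))   # about -21.4 mireds
</code></pre>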
<div id="attachment_927" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/06/wbshift.png"><img src="http://www.shutterangle.com/wp-content/uploads/2012/06/wbshift.png" alt="Canon DSLR white balance shift " title="Canon DSLR white balance shift option" width="262" class="size-full wp-image-927" /></a><p class="wp-caption-text">Canon DSLR cameras can be WB corrected independently of white balance choice</p></div>
<p>Some digital cameras have options for white balance shifts (or correction). For example, on a Canon DSLR camera white balance can be corrected from -45 to +45 mireds on an amber-blue axis in 5-mired steps. There is also a similar -9 to +9 step correction on the magenta-green axis. While originally intended for correction under non-full spectrum light sources exhibiting color spikes, these options can also be used for in-camera lens color matching. For example, the Zeiss Contax/Yashica lenses from the above test can be decently color matched to the Leica R lenses by setting the WB shift 4 steps towards blue and 4 steps towards green.</p>
<p>The <a href="http://www.magiclantern.fm/" target="_blank">Magic Lantern</a> firmware for Canon DSLR cameras offers another way to measure color differences. Its point-and-white-balance feature allows the white balance to be set by pointing the camera to a neutral surface (ideally, a grey card). The calculated color temperature and WB shifts are immediately displayed for reference. This method has the advantage of yielding information directly in camera specific white balance terms. For better precision you should do this test under daylight because Canon DSLRs only set WB color temperature in 100K increments.</p>
<p>Once the relative differences in color rendition are measured against a known base setting, creative manual white balance can be used even with photo lenses. White balance can be preset for specific effects and left untouched for the whole sequence. Any lens change will only require dialing the respective WB shift values. Note that this will only help match color. Differences in contrast or out-of-focus rendering will still be present if they did exist in the first place.</p>
<p><a href="https://www.shutterangle.com/2012/color-matching-lenses-for-video/">Color Matching Lenses for Video</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2012/color-matching-lenses-for-video/feed/</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>Cinematic Look, Part 3: Dynamic Range</title>
		<link>https://www.shutterangle.com/2012/cinematic-look-dynamic-range/</link>
		<comments>https://www.shutterangle.com/2012/cinematic-look-dynamic-range/#comments</comments>
		<pubDate>Wed, 06 Jun 2012 11:44:02 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[cinematic look]]></category>
		<category><![CDATA[dynamic range]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=822</guid>
		<description><![CDATA[<p>Images are all about light. Light is captured, transferred through the various storage and processing stages of the workflow and finally reproduced for viewing. The adventures of scene light on its way to the viewer of the final images have some implications for the cinematic look. More precisely,  [...]</p><p><a href="https://www.shutterangle.com/2012/cinematic-look-dynamic-range/">Cinematic Look, Part 3: Dynamic Range</a></p>]]></description>
			<content:encoded><![CDATA[<p>Images are all about light. Light is captured, transferred through the various storage and processing stages of the workflow and finally reproduced for viewing. The adventures of scene light on its way to the viewer of the final images have some implications for the cinematic look. More precisely, this article is about the dynamic range of the image capturing medium. The differences in the dynamic range of film and digital camera sensors are explained. We also get to talk a bit about transfer curves and gamma. <span id="more-822"></span><br />
<br/></p>
<h6><strong>Scene dynamic range</strong></h6>
<p>Dynamic range and dynamic range transfer are among the often misunderstood concepts in video and film, maybe because they are a bit technical. Dynamic range is the ratio between the smallest and the biggest <em>possible</em> values in some signal. Here we are interested in the case when this signal is light. Scene dynamic range or scene contrast is the ratio between the luminance of the darkest blacks and the brightest whites in a scene. This ratio can get quite large in scenes with both bright sunlight and dark shadows.</p>
<p>Human vision has a curious characteristic. In order to accommodate large scene contrasts we don&#8217;t see light physically &#8220;correctly&#8221;. We see exponential luminance increments as linear increments. We perceive the change from 10 <a href="http://en.wikipedia.org/wiki/Candela_per_square_metre" title="Candela per square metre" target="_blank">cd/m<sup>2</sup></a> to 20 cd/m<sup>2</sup> as similar to the change from 200 to 400 cd/m<sup>2</sup>. This means that the series of gray steps with luminances of 10, 20, 40, 80, 160,&#8230;cd/m<sup>2</sup> is perceived as uniformly changing. And in the series of gray steps with luminances 10, 20, 30, 40, 50, 60,&#8230;cd/m<sup>2</sup> the perceived differences between steps get smaller. One important consequence of this logarithmic correlation of human vision to light is that the eye discerns small luminance differences in the darks better than in the highlights.</p>
<p>The logarithmic concept of light <em>stops</em> fits well with the workings of our vision and is widely adopted in photography. A surface is said to be one stop higher than another surface when the luminance of the first surface is twice the luminance of the second surface. So if a scene has a contrast ratio of 1000:1 it is said to have dynamic range of around 10 stops (2<sup>10</sup> = 1024).<br />
<br/></p>
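<p>The ratio-to-stops conversion is a single logarithm; in Python:</p>
<pre><code>import math

def stops(contrast_ratio):
    """Dynamic range in stops for a given luminance contrast ratio."""
    return math.log2(contrast_ratio)

print(stops(1000))   # about 9.97, i.e. roughly 10 stops (2**10 = 1024)
</code></pre>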
<h6><strong>Film dynamic range</strong></h6>
<div style="float: right; margin-left: 12px; margin-top: 5px; width: 40%; background-color: LightGrey; padding: 10px 10px 0px; border-width: thin; border-color: black;">
<strong>Film density</strong></p>
<p>Some familiarity with the concept of <em>density</em> is necessary in order to understand film dynamic range. Film is semi-transparent and some of the passing light is absorbed. <em>Transmittance</em> is the part of the incident light that passes through. Denser materials have less transmittance. <em>Opacity</em> is the reciprocal of transmittance. Denser materials have larger opacities. Density is the common logarithm of opacity. The benefit of using density instead of opacity is again connected to human perception: we tend to see materials with double the density twice as dark. An increase of around 0.3 density halves the transmitted light.<br />
<em><br />
transmittance = transmitted / incident light<br />
opacity = 1 / transmittance<br />
density = log(opacity)</em>
</div>
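<p>The sidebar definitions translate directly into code, and confirm the 0.3-density-per-stop rule of thumb:</p>
<pre><code>import math

def density(transmittance):
    """Film density from the fraction of incident light transmitted."""
    return math.log10(1.0 / transmittance)

print(density(0.5))                    # about 0.301
print(density(0.25) - density(0.5))    # halving again adds another 0.301
</code></pre>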
<p>The dynamic range of film and digital sensors is usually smaller than that of high dynamic range scenes. And color reversal film has much smaller dynamic range than color negative film. For example, Kodak Ektachrome 5285, which is a reversal stock, has less than 9 stops of dynamic range. The captured dynamic range distribution varies a bit depending on the specific film negative stock, but the latest negative stocks like Kodak Vision3 5219 have a dynamic range of over 14 stops. Color reversal film is usually much more saturated than negative film. Both high saturation and limited dynamic range make reversal film more of a specialty stock, appropriate for specific uses like ads or music videos. Movies are almost universally shot on negative film.</p>
<p>From the characteristic curve of film (Kodak 5219 in this example) we can note the following. There is a large linear part in the middle of the curve where equal exposure change results in equal density change. That&#8217;s where detail is captured uniformly and with the greatest tonal resolution.  The slope of the curve in its straight part is called <em>gamma</em>. For most film negatives gamma is around 0.6. This means that one stop of light, or 0.3 log exposure, is represented by 0.3*0.6 = 0.18 density. So, in a way, film does dynamic range compression: as you can see from the chart, a spread of more than 14 stops (4.2 log exposure) is captured in a density range of less than 2.0 log D. Most of this is due to highlights and shadows compression as explained below. Note that film has different sensitivity to red, green and blue. This is taken care of during the printing process.</p>
<div id="attachment_842" class="wp-caption alignleft" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/06/kodak5219dr.png"><img src="http://www.shutterangle.com/wp-content/uploads/2012/06/kodak5219dr.png" alt="Kodak Vision3 5219 motion picture stock dynamic range" title="Kodak Vision3 5219 (click to enlarge)" width="262" class="size-full wp-image-842" /></a><p class="wp-caption-text">Dynamic range distribution of the Kodak Vision3 5219 film stock</p></div>
<p>I have also marked where 18% gray, 2% black and 90% white fall on the curve (for the green sensitivity curve). 18% gray or middle gray is what light meters use for light measurements. This is the shade of gray that falls perceptually in the middle of a black-to-white grayscale. 90% white is used as reference white in video and shows where diffuse white falls. Whites above this are generally specular highlights or in-frame lights. 2% black shows where the darkest detailed shadows fall. Below this, deep black with some tonal change is expected, but without real detail.</p>
<p>As we can see, there are around 3 stops below 2% where blacks are recorded, albeit compressed at the bottom and with less tonal resolution. And there are around 5 stops above 90% white for highlights. This is also the overexposure latitude. It allows the cinematographer to overexpose in order to capture significant dark detail, or to play with the look of the image during processing and printing, which permits some contrast and grain modulation. Slight overexposure paired with pull processing (underdevelopment) and/or print down is common. The highest part of the curve is also compressed a bit, which means less tonal precision there. The point where the shadows start to compress is called the <em>toe</em>, and the point where the highlights begin to roll off is called the <em>shoulder</em>.</p>
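<p>The distances between these marks are easy to compute, since stops are just base-2 logarithms of reflectance ratios (a quick Python sketch; the function is mine, for illustration):</p>
<pre><code>import math

def stops_from_mid_gray(reflectance):
    """Offset in stops from 18% middle gray."""
    return math.log2(reflectance / 0.18)

print(round(stops_from_mid_gray(0.90), 2))  # 90% white: ~+2.3 stops
print(round(stops_from_mid_gray(0.02), 2))  # 2% black:  ~-3.2 stops

# That is ~5.5 stops between 2% black and 90% white; on a 14+ stop
# negative this leaves roughly 3 stops below black for deep shadows
# and around 5 stops above white for highlights.
</code></pre>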
<p>It should be clear that the negative image is <em>source</em> material. If printed so that the curve is preserved, the image would appear very low contrast: washed and unappealing. That&#8217;s why release printing is done on high contrast positive stocks with gamma in the range of 2.5 to 3.0. This results in a print-through gamma of around 1.5 to 1.8. The print stock also does some further highlight compression through its toe. Blacks, on the other hand, are mostly unaffected due to the high maximum density over base of positive stocks.</p>
<p>The existence of a toe and a shoulder is the cause of one of the defining characteristics of film, and consequently, of the cinematic look. The relatively large dynamic range paired with the compression of the extremes is the reason for the pleasant look of material shot on film in terms of range distribution: highlights seemingly roll off forever without clipping, and there is a notion of tonality in the deep shadows.</p>
<div id="attachment_866" class="wp-caption aligncenter" style="width: 522px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/06/ASeriousMan.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/06/ASeriousMan.jpg" alt="A Serious Man screenshot" title="A Serious Man (2009)" width="512" class="size-full wp-image-866" /></a><p class="wp-caption-text">Film preserves detail and tonality in highlights and shadows, with a pleasant roll-off in specular highlights</p></div>
<p><br/></p>
<h6><strong>An interlude: gamma encoding and end-to-end gamma</strong></h6>
<p>Gamma is another often misunderstood area. The fact that the word is used for at least three different concepts in the image-making realm doesn&#8217;t help either. In the case of film <em>gamma</em> is the slope (or the tangent) of the linear part of the characteristic curve. In digital, gamma is used both as a synonym of <em>transfer function</em> or <em>transfer curve</em>, and as the value used for the exponent in the special case of power-law gamma encoding/decoding.</p>
<p>The dynamic range of the human eye is around 10 to 15 stops at a given moment, depending on lighting conditions. Displays and projection have smaller reproduction capabilities. Projection usually has an intraframe contrast ratio of 150:1 or smaller. Good monitors may have an intraframe contrast of around 1000:1. So the compression of highlights above the shoulder and of blacks below the toe allows a higher dynamic range to be squeezed into the smaller dynamic range of the reproduction system. It is, in essence, a case of tone mapping.</p>
<p>System gamma, end-to-end gamma or print-through gamma (in the film case) all describe the gamma of the whole process: from scene to final deliverable. Replicating scene light would suggest a system gamma of 1. But this would only hold if the viewing conditions were equivalent to the scene conditions in terms of light, which is rarely the case. Projection flare, low absolute projection luminance (less than 50 cd/m<sup>2</sup>) and the relatively dark viewing conditions lower display contrast significantly and make blacks appear brighter to the eye. The higher system gamma adds some contrast and combats these limitations. For example, a film negative gamma of 0.6, an intermediate film gamma of 1 and a print film gamma of 3.0 lead to a composite gamma of 0.6 * 1.0 * 3.0 = 1.8. For the brighter viewing conditions in offices and homes an end-to-end gamma of around 1.2 is considered sufficient.</p>
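<p>The composite gamma is simply the product of the per-stage gammas. A trivial sketch (function name mine):</p>
<pre><code>def system_gamma(stage_gammas):
    """End-to-end gamma of a chain of power-law stages."""
    result = 1.0
    for g in stage_gammas:
        result *= g
    return result

# negative (0.6), intermediate (1.0), print (3.0)
print(round(system_gamma([0.6, 1.0, 3.0]), 2))  # 1.8, suited to dark theaters
</code></pre>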
<div id="attachment_831" class="wp-caption aligncenter" style="width: 534px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/06/gamma.png"><img src="http://www.shutterangle.com/wp-content/uploads/2012/06/gamma.png" alt="gamma encoding" title="Gamma" width="524" height="70" class="size-full wp-image-831" /></a><p class="wp-caption-text">Linear light transfer (above) and power law gamma encoded transfer (below). Mid-gray background for reference. Note the limited tonal resolution in the blacks with linear encoding.</p></div>
<p>Gamma encoding in digital images serves a different purpose. Consumer grade images are universally 8-bit. If light is encoded linearly, the dark stops have very limited precision: 2 is a stop higher than 1, 4 is a stop higher than 2, 8 is a stop higher than 4, etc. There are almost no values left to encode intermediate shades. On the other hand, there is an excessive amount of values at the upper end: in the top stop between 128 and 255, for example. So linear encoding is both inefficient and loses important information in the shadows. Power-law gamma encoding addresses this by applying a transform (usually a simple power function) to the input signal. The eye still needs linear light in order to see the correct image, so the display applies the reverse curve and linearizes the output. Decoding (reverse) gamma values between 2.2 (sRGB) and 2.6 (digital cinema) are used, depending on the expected viewing conditions.<br />
<br/></p>
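<p>Here&#8217;s a small sketch of that allocation problem: counting how many 8-bit code values land in each stop under linear encoding versus a simple 1/2.2 power-law encoding (a rough illustration, not any standard&#8217;s exact curve):</p>
<pre><code>def codes_in_stop(encode, stops_below_white):
    """Distinct 8-bit codes inside one stop of normalized linear light."""
    hi = 0.5 ** stops_below_white   # top of the stop
    lo = hi / 2.0                   # bottom of the stop
    samples = [lo + (hi - lo) * i / 9999.0 for i in range(10000)]
    return len({round(255 * encode(x)) for x in samples})

linear = lambda x: x
gamma22 = lambda x: x ** (1.0 / 2.2)  # sRGB-like power-law encode

for stop in range(8):
    print(stop, codes_in_stop(linear, stop), codes_in_stop(gamma22, stop))
# Linear: ~128 codes in the top stop, only a couple six stops down.
# Gamma-encoded: the codes are spread far more evenly across the stops.
</code></pre>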
<h6><strong>Dynamic range of digital video</strong></h6>
<p>Digital sensors are more straightforward than film in terms of captured light representation. The quantized signal from the sensor&#8217;s analog to digital converter is linear: if a photosite (pixel) captures twice the light of another photosite, its quantized value will be twice as large. Most DSLR cameras capture RAW images quantized to 14 bits. For a typical DSLR camera with slightly above 11 stops of dynamic range, 14 bits allow for decent tonal resolution even with linear encoding. But things start to get complicated when the raw data have to be stuffed into fewer bits for recording.</p>
<div id="attachment_849" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/06/standarddr.png"><img src="http://www.shutterangle.com/wp-content/uploads/2012/06/standarddr.png" alt="Canon DSLR Standard picture style dynamic range" title="Canon DSLR Standard picture style dynamic range" width="262" class="size-full wp-image-849" /></a><p class="wp-caption-text">Canon DSLR Standard picture style: dynamic range over light stops (log exposure)</p></div>
<p>All DSLR cameras and consumer video cameras output 8-bit video. Stuffing 11+ stops of dynamic range into 8 bits can&#8217;t be done linearly, simply because the coding space lacks resolution. The typical compromise is a gamma-encoded transfer curve that is S-shaped over stops (log exposure). The top 8 to 9 stops of the RAW dynamic range are selected for transfer because they are the cleanest. Some sort of <em>knee</em> is usually implemented, with the highest 1 to 1.5 stops getting compressed. The knee is very similar to the shoulder of film. It simulates a roll-off in the highlights, slightly increases the overall dynamic range and also allows for a bit more tonal precision in the mids where the most important tones are. The resulting image is sufficiently contrasty and ready for the consumer display. But it is not really supposed to be post-processed.</p>
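<p>For illustration only, here is a toy knee in Python &#8211; a plain linear segment with the top of the range compressed &#8211; not any camera&#8217;s actual transfer curve:</p>
<pre><code>def knee_transfer(x, knee=0.7):
    """Toy knee: full slope below the knee point, with everything
    above it squeezed into the last 10% of the output range."""
    below = min(x, knee) / knee * 0.9                # main segment
    above = max(x - knee, 0.0) / (1.0 - knee) * 0.1  # compressed highlights
    return below + above

for x in (0.1, 0.5, 0.7, 0.85, 1.0):
    print(x, round(knee_transfer(x), 3))
# Input from 0.7 to 1.0 (about half a stop of scene light at the top)
# only moves the output from 0.9 to 1.0: a simulated highlight roll-off.
</code></pre>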
<p>Again, I have marked 2% black, 18% gray and 90% white on the dynamic range chart for the Canon DSLR Standard picture style. Note that there is around one stop over 90% white available for highlights. Compare this to the excessive overexposure latitude of film. Shadows are better represented although the low stops are lacking in tonal resolution. An attempt to contain the highlights on exposure will often result in crushed blacks in high contrast scenes.</p>
<p>This type of consumer-ready transfer function, combined with the limited dynamic range of early digital cameras, has led to the notion that digital video is too contrasty, highlights are hard clipped and blacks are crushed and lacking detail. This is exactly what many people mean when they say that an image looks &#8220;video-ish&#8221;.</p>
<div id="attachment_869" class="wp-caption aligncenter" style="width: 522px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/06/likecrazy.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/06/likecrazy.jpg" alt="Like Crazy screenshot" title="Like Crazy (2011)" width="512" class="size-full wp-image-869" /></a><p class="wp-caption-text">Blown highlights due to limited dynamic range. The front girl actually wears a chequered shirt. Also note the blown pink shirt in the middle. <em>Like Crazy</em> was shot on the Canon 7D. Compare to the shot from <em>A Serious Man</em> above.</p></div>
<div id="attachment_873" class="wp-caption aligncenter" style="width: 522px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/06/sin_city.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/06/sin_city.jpg" alt="Sin City screenshot" title="Sin City (2005)" width="512" class="size-full wp-image-873" /></a><p class="wp-caption-text">A rare case of hard clip actually complementing the graphic presentation of a film. <em>Sin City</em> was shot on the Sony CineAlta HDC-F950.</p></div>
<div id="attachment_853" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/06/ArriCLog.png"><img src="http://www.shutterangle.com/wp-content/uploads/2012/06/ArriCLog.png" alt="Arri Log C" title="Arri C Log" width="262" class="size-full wp-image-853" /></a><p class="wp-caption-text">Dynamic range distribution of the Arri Log C transfer curve</p></div>
<p>Recent high-end digital cameras have much better dynamic range capabilities and rival the best film stocks. Access to the full dynamic range is enabled through either linear RAW video (12 bit or more) or some (near) logarithmic transfer function. Both linear RAW video and log video are production formats and require post-processing for presentation. The idea of log space video is to provide a near flat distribution of coding values over exposure. Such a distribution provides both the full camera dynamic range and better tonal precision in blacks and highlights. Thus log curves are close to film characteristic curves, allowing for easier intercutting of digital video and scanned film footage. For example, the Arri Log C transfer function encodes around 14 stops of dynamic range from the Arri Alexa camera. Similar transfer curves have been constructed for many cameras, including DSLRs. It is worth noting that accommodating a large dynamic range in a limited coding space (such as 8 bits) results in limited tonal precision, which makes the practicality of true 8-bit log curves somewhat dubious. A 10-bit film scan allocates around 90 coding values per stop in the flat part of the characteristic curve, and 10-bit Arri Log C allocates around 75 values, whereas an 8-bit transfer curve like Technicolor&#8217;s CineStyle for Canon DSLR cameras allocates around 27 values per stop. That&#8217;s why low precision flat curves should be used with care and with understanding of the tonal precision trade-off. You can read more on 8-bit <a href="http://www.shutterangle.com/2012/canon-picture-styles-shooting-flat-or-not/" title="Canon Picture Styles: Shooting Flat or Not?">flat transfer curves</a> here.</p>
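<p>These per-stop figures are easy to sanity-check: for an idealized, perfectly flat log curve the average is just the coding space divided by the encoded stops. A back-of-the-envelope sketch (real curves like Log C are not perfectly flat, and the 9.5-stop figure below is my assumption, chosen to match the CineStyle number quoted above):</p>
<pre><code>def values_per_stop(bit_depth, stops_encoded):
    """Average code values per stop for an idealized flat log curve."""
    return 2 ** bit_depth / stops_encoded

print(round(values_per_stop(10, 14)))  # ~73, close to the ~75 of 10-bit Log C
print(round(values_per_stop(8, 9.5))) # ~27, an 8-bit flat curve a la CineStyle
</code></pre>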
<p>The previous parts of the <em>Cinematic Look</em> series can be found here: Part 1 on <a href="http://www.shutterangle.com/2012/cinematic-look-aspect-ratio-sensor-size-depth-of-field/" title="Cinematic Look, Part 1: Aspect Ratio, Sensor Size and Depth of Field">Aspect Ratio, Depth of Field and Sensor Size</a>, and Part 2 on <a href="http://www.shutterangle.com/2012/cinematic-look-frame-rate-shutter-speed/" title="Cinematic Look, Part 2: Frame Rate and Shutter Speed">Frame Rate and Shutter Speed</a>. And the next part is on <a href="http://www.shutterangle.com/2012/cinematic-look-film-grain/" title="Cinematic Look, Part 4: Film Grain">Film Grain</a>.</p>
<p><a href="https://www.shutterangle.com/2012/cinematic-look-dynamic-range/">Cinematic Look, Part 3: Dynamic Range</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2012/cinematic-look-dynamic-range/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Book Review: The Photographer&#8217;s Eye by Michael Freeman</title>
		<link>https://www.shutterangle.com/2012/book-review-photographers-eye-michael-freeman/</link>
		<comments>https://www.shutterangle.com/2012/book-review-photographers-eye-michael-freeman/#comments</comments>
		<pubDate>Sat, 19 May 2012 14:33:05 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Book Reviews]]></category>
		<category><![CDATA[book review]]></category>
		<category><![CDATA[composition]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=788</guid>
		<description><![CDATA[<p>
Michael Freeman is a popular author amongst photographers. He has written a myriad of books on photography related topics: lighting, exposure, composition, etc. As the title implies, The Photographer&#8217;s Eye: Composition and Design for Better Digital Photos is concerned with the subject of  [...]</p><p><a href="https://www.shutterangle.com/2012/book-review-photographers-eye-michael-freeman/">Book Review: The Photographer&#8217;s Eye by Michael Freeman</a></p>]]></description>
			<content:encoded><![CDATA[<div style="float: left; margin-right: 12px; margin-top: 5px;"><img alt="The Photographer's Eye: Composition and Design for Better Digital Photo" title="The Photographer's Eye" src="http://www.shutterangle.com/wp-content/uploads/2012/05/peye.jpg"/></div>
<p><a href="http://www.michaelfreemanphoto.com/" target="_blank">Michael Freeman</a> is a popular author amongst photographers. He has written a myriad of books on photography related topics: lighting, exposure, composition, etc. As the title implies, <em><a target="_blank" rel="nofollow" href="http://www.amazon.com/gp/product/0240809343/ref=as_li_tf_tl?ie=UTF8&#038;camp=1789&#038;creative=9325&#038;creativeASIN=0240809343&#038;linkCode=as2&#038;tag=revmaz-20" title="The Photographer's Eye: Composition and Design for Better Digital Photos by Michael Freeman">The Photographer&#8217;s Eye: Composition and Design for Better Digital Photos</a></em> is concerned with the subject of composition. This is a vast subject. Composition encompasses everything involved in the graphic (or visual) representation of the scene in the image. And everything means <em>everything</em>.</p>
<p><span id="more-788"></span></p>
<p>The usefulness for video of being familiar with (or, even better, being adept in) pictorial and still photo composition was discussed in some detail in the review of <a href="http://www.shutterangle.com/2012/book-review-pictorial-composition-in-art-henry-rankin-poore/" title="Book Review: Pictorial Composition (Composition in Art) by Henry Rankin Poore"><em>Pictorial Composition (Composition in Art)</em></a> by Henry Rankin Poore. Michael Freeman&#8217;s book is a good complement to <em>Pictorial Composition</em>. The latter is a classic text; somewhat formal and theoretic; focused on the image itself and exploring the result. <em>The Photographer&#8217;s Eye</em> is more based into practice. It covers a lot of topics and also discusses the process of shooting with a mindset grounded in composition.</p>
<p>One of the strengths of this book is that it makes the reader aware of many compositional elements, some of which are not readily apparent. Even though it doesn&#8217;t usually go in depth, simply planting the notion of these elements in the reader&#8217;s mind will inevitably lead to some useful thoughts. It is also good inspiration material. <em>The Photographer&#8217;s Eye</em> is one of those books that set ideas sweeping through your head, both through concepts and through specific examples.</p>
<div style="float: right; margin-left: 12px; margin-top: 5px;"><img alt="The Photographer's Eye: Composition and Design for Better Digital Photo" title="The Photographer's Eye (UK edition cover)" src="http://www.shutterangle.com/wp-content/uploads/2012/05/peye2.jpg"/></div>
<p>The book covers pretty much anything connected to composition. It starts with the frame as a compositional device and touches upon formal balance and tension (there is more on this topic in <em>Pictorial Composition</em>). Then it goes into details on various compositional elements: content, lines and shapes, motion, rhythm, light, color, depth. Also discussed are the pure photographic elements in their connection to composition: optics and perspective, focus, exposure. The last third of the book delves into intent and process. More specifically: exploring locations, hunting the perfect image, reaction, anticipation, organizing subject matter, repertoire. All of these are useful skills for video; mostly for run &#038; gun and documentaries, but also for improvisation. The topic of intent, style and process is expanded and further developed in a follow-up book called <em>The Photographer&#8217;s Mind</em>.</p>
<p><em>The Photographer&#8217;s Eye</em> is richly illustrated. Many of the photos are editorial/documentary material and thus fall into the &#8220;telling a story with pictures&#8221; department. This makes them highly relevant to video shooting. If you can tell a story with a sequence of pictures, or &#8211; better yet &#8211; with a single picture, then you can surely do it with moving pictures. Another interesting angle comes from the more graphically oriented product photography illustrations. Cinematography books don&#8217;t usually explore the purely graphic side of composition, as they are focused mainly on the practical aspects of framing. But graphic knowledge expands one&#8217;s visual arsenal and deepens the understanding of shapes and lines, helping one see beauty in unexpected ways.</p>
<p><a href="https://www.shutterangle.com/2012/book-review-photographers-eye-michael-freeman/">Book Review: The Photographer&#8217;s Eye by Michael Freeman</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2012/book-review-photographers-eye-michael-freeman/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Cinematic Look, Part 2: Frame Rate and Shutter Speed</title>
		<link>https://www.shutterangle.com/2012/cinematic-look-frame-rate-shutter-speed/</link>
		<comments>https://www.shutterangle.com/2012/cinematic-look-frame-rate-shutter-speed/#comments</comments>
		<pubDate>Tue, 08 May 2012 21:13:51 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[cinematic look]]></category>
		<category><![CDATA[frame rate]]></category>
		<category><![CDATA[shutter speed]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=677</guid>
		<description><![CDATA[<p>In the first part of this series we addressed some of the cinematic properties, which follow from the size and proportions of the capturing frame, be it film or digital. This second article is concerned with the temporal aspects of the cinematic look. More precisely, the characteristics of the  [...]</p><p><a href="https://www.shutterangle.com/2012/cinematic-look-frame-rate-shutter-speed/">Cinematic Look, Part 2: Frame Rate and Shutter Speed</a></p>]]></description>
			<content:encoded><![CDATA[<p>In the <a href="http://www.shutterangle.com/2012/cinematic-look-aspect-ratio-sensor-size-depth-of-field/" title="Cinematic Look, Part 1: Aspect Ratio, Sensor Size and Depth of Field">first part of this series</a> we addressed some of the cinematic properties, which follow from the size and proportions of the capturing frame, be it film or digital. This second article is concerned with the temporal aspects of the cinematic look. More precisely, the characteristics of the image following from specific frame rate and shutter speed choices. For decades these characteristics have been almost unchanging, with deviations only used for special effects. This constancy has made them perhaps the most defining features of the cinematic look. <span id="more-677"></span><br />
<br/></p>
<h6><strong>Film frame rate</strong></h6>
<p>The frame rate of the motion picture specifies the frequency at which sequential frames are captured. In a film camera, this is the number of frames per second that pass through the camera gate and get exposed. In the days of silent film both motion picture cameras and projectors were hand-cranked. Cameramen took pride in their ability to crank at a steady pace. Common knowledge is that 16 frames per second (fps) was the prevalent rate of cranking, but this is not exactly so: cranking speed varied widely between 10 and 26 fps, even across the reels of a single movie. Projection was an art in itself. The projectionist had to watch the action closely and correct the cranking speed when needed. On top of that, theater owners sometimes required projection speed to be increased in order to squeeze more showings into a given time frame &#8211; a practice that could lend a slapsticky feel even to the most serious drama.</p>
<p>Standardization came with the arrival of sound and the need to keep sound in sync with picture. Ever since, the accepted standard for cinema frame rate has been 24 fps. There wasn&#8217;t anything special about this number, other than being roughly the average projection speed across a sample of theaters in 1926. Nevertheless, it has come to define some of the prominent characteristics of the cinematic look.<br />
<br/></p>
<h6><strong>Shutter angle and shutter speed</strong></h6>
<p>The second temporally important property is the period of exposure for each frame. In a motion picture film camera the negative is not exposed continuously. A window of time, with light shut out, is needed for the sprocket wheel to pull the next frame into the film gate and prepare it for exposure. In a film camera this is what the rotary shutter does.</p>
<div id="attachment_690" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/05/gate.png"><img src="http://www.shutterangle.com/wp-content/uploads/2012/05/gate.png" alt="shutter angle" title="Rotary disc shutter" width="262" /></a><p class="wp-caption-text">Schematic of a half-moon rotary disc shutter positioned beside the film gate</p></div>
<p>The rotary shutter is an arc-shaped mirror (called a &#8220;half-moon&#8221;) that rotates in front of the gate. Its pivot is placed either beside or underneath the gate. When the gate is covered, the mirror serves to reflect the image from the lens into the ground glass, so that the cameraman can see it in the viewfinder. At the same time the next frame is being loaded into position behind the shutter. When the shutter is rotated away from the gate, light exposes the negative. The exact shape of the arc defines the <em>shutter angle</em>. The shutter angle is specified in degrees and describes the size of the cut-out part of the shutter disc.</p>
<p>In simple cameras the shutter is shaped as a semicircle (and the shutter angle is 180 degrees). In more advanced cameras the shutter angle (and thus the shape of the shutter) can be changed. The shutter rotates at a constant speed and makes one revolution per frame. So for the standard cinema frame rate of 24 fps that means 24 revolutions per second. A bigger angle means a longer exposure time; a smaller angle means a shorter exposure time. In photography and cinematography the exposure time is often called <em>shutter speed</em>, because the exposure time is the time the shutter stays open per frame. The shutter angle can easily be made small, but large angles are harder to achieve because time is needed to advance the next frame.</p>
<div id="attachment_695" class="wp-caption aligncenter" style="width: 590px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/05/shutters.png"><img src="http://www.shutterangle.com/wp-content/uploads/2012/05/shutters.png" alt="Shutter angle" title="Various shutter angles" width="580" class="size-full wp-image-695" /></a><p class="wp-caption-text">Various shutter angles and their respective exposure time as a fraction of total frame time</p></div>
<p>There is another popular rotary shutter type, called &#8220;butterfly&#8221;. It consists of two segments (90 degrees each, for a 180 degree shutter angle) positioned opposite each other. Butterfly shutters rotate at half the speed of a half-moon shutter; that is, a full revolution exposes two frames. Panavision Panaflex cameras usually have this design.</p>
<p>Early cameras usually had a fixed shutter angle. The angle itself varied between cameras. For example, the Bell &#038; Howell Eyemo 71k used a 160 degree shutter; the 16mm B&#038;H Filmo 70 DR used a 204 degree shutter. Eventually a shutter angle of 180 degrees ended up as the standard or &#8220;normal&#8221; angle. For a frame rate of 24 fps this equals a shutter speed of 1/48 sec. The shutter angle can be converted to shutter speed with the following formula:</p>
<div style="text-align: center; margin-top: 0px"><img alt="shutter angle to shutter speed formula" title="Shutter angle to shutter speed conversion" src="http://www.shutterangle.com/wp-content/uploads/2012/05/satoss.png"/></div>
<p>The following table conveniently lists some useful shutter angles converted to shutter speeds; a short code sketch of the conversion follows the table.</p>
<div style="margin-left: 20%; margin-right: 20%;">
<table style="font-family: Verdana; text-align: center;" border="1" cellspacing="0" cellpadding="4">
<caption style="caption-side: bottom; text-align: center; font-size: 90%;"><em>Shutter angle (in degrees) to shutter speed (in seconds). Conversion at 24 fps.</em></caption>
<tbody>
<tr>
<th style="width: 40%; text-align: center;"><strong>Shutter angle</strong></th>
<th style="width: 40%; text-align: center;"><strong>Shutter speed</strong></th>
</tr>
<tr>
<td>45</td>
<td>1/192</td>
</tr>
<tr>
<td>60</td>
<td>1/144</td>
</tr>
<tr>
<td>90</td>
<td>1/96</td>
</tr>
<tr>
<td>135</td>
<td>1/64</td>
</tr>
<tr>
<td>144</td>
<td>1/60</td>
</tr>
<tr>
<td>160</td>
<td>1/54</td>
</tr>
<tr>
<td>172.8</td>
<td>1/50</td>
</tr>
<tr>
<td>180</td>
<td>1/48</td>
</tr>
<tr>
<td>270</td>
<td>1/32</td>
</tr>
<tr>
<td>360</td>
<td>1/24</td>
</tr>
</tbody>
</table>
</div>
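<p>The formula and the table values can be reproduced directly (a minimal Python sketch):</p>
<pre><code>def shutter_speed(angle_degrees, fps=24.0):
    """Exposure time in seconds for a given shutter angle and frame rate."""
    return angle_degrees / 360.0 / fps

def shutter_angle(speed_seconds, fps=24.0):
    """Inverse conversion: shutter angle from exposure time."""
    return speed_seconds * 360.0 * fps

for angle in (45, 90, 172.8, 180, 360):
    print(angle, "degrees:", f"1/{1 / shutter_speed(angle):g} sec")
# 45 degrees: 1/192 sec ... 180 degrees: 1/48 sec ... 360 degrees: 1/24 sec
</code></pre>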
<p><br/></p>
<h6><strong>Motion blur and strobing</strong></h6>
<p>There are a couple of artifacts that arise from the rotary shutter and the 24 fps frame rate.<br />
As the camera only sees the scene for half of the time (with a typical 180 degree shutter), it doesn&#8217;t capture motion continuously. This means that fast moving objects, and especially objects moving across the frame, will exhibit jerky movement. This is called <em>strobing</em>. The defect is also very noticeable during pans.</p>
<div id="attachment_714" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/05/BonnieandClyde.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/05/BonnieandClyde.jpg" alt="Bonnie and Clyde motion blur" title="Bonnie and Clyde (1967)" width="262" class="size-full wp-image-714" /></a><p class="wp-caption-text">The relatively slow shutter speed and fast motion in the frame result in motion blur</p></div>
<p>The other artifact is also related to motion. Because of the relatively slow shutter speed (1/48 sec for a 180 degree shutter angle at 24 fps), fast moving objects blur in the frame: the longer the exposure, the more movement is captured in it. This is what we call <em>motion blur</em>. Note that while strobing is in essence an artifact of the shutter being closed part of the time, motion blur results from the (relatively) long exposure of each frame.</p>
<p>Smaller shutter angles (shorter exposure) exhibit more pronounced strobing. Bigger shutter angles (longer exposure) increase motion blur. Faster frame rates can smooth out the perception of strobing, even with shutter angles smaller than 180 degrees. A faster frame rate also captures less motion per frame, which decreases per-frame motion blur. Note that small variations in shutter angle (and shutter speed) are imperceptible to most of the audience. Most people won&#8217;t notice any difference between a 180 degree shutter angle (1/48 sec shutter speed) and a 144 degree one (1/60 sec shutter speed).</p>
<div id="attachment_700" class="wp-caption aligncenter" style="width: 590px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/05/blur.png"><img src="http://www.shutterangle.com/wp-content/uploads/2012/05/blur.png" alt="Motion blur and strobing" title="Motion blur and strobing (over three consecutive frames)" width="580" class="size-full wp-image-700" /></a><p class="wp-caption-text">Top: 180 degree shutter will produce typical motion blur and strobing. Middle: decreasing shutter speed increases blur and decreases strobing. Bottom: increasing shutter speed decreases blur and increases strobing.</p></div>
<p>Motion blur and strobing are the main ingredients of what people call the <em>dream-like</em> effect of cinema. They set apart the cinematic image from the crisp and fluid reality and reinforce the unreal feel of cinema. This makes them an important characteristic of the traditional cinematic look.<br />
<br/></p>
<h6><strong>Digital cameras</strong></h6>
<div style="float: right; margin-left: 12px; margin-top: 5px; width: 40%; background-color: LightGrey; padding: 10px 10px 0px; border-width: thin; border-color: black;">
<strong>Shutter speed and light flicker</strong></p>
<p>Artificial light varies in intensity depending on voltage. For mains AC powered lights the voltage changes polarity twice per cycle (100 times per second for 50 Hz, 120 times per second for 60 Hz). This causes the light to flicker. Human eyes usually don&#8217;t see it, but <a href="http://www.davidsatz.com/aboutflicker_en.html" title="About light flicker problems" target="_blank">light flicker</a> may interfere with film and video recording. This is more noticeable with discharge type lights without electronic ballasts. Incandescent lamps are less prone to flicker because they don&#8217;t cool much between pulses &#8211; even more so for powerful tungsten lamps. To minimize flicker under artificial light, the shutter speed can be matched to the local mains frequency: 60 Hz in the USA, 50 Hz in Europe. So the flicker free shutter speeds closest to the 180 degree shutter are 1/50 sec (for Europe) and 1/60 sec (for the USA). For 24 fps these correspond to 172.8 and 144 degree shutter angles respectively.
</div>
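<p>A hedged sketch of the arithmetic in the box: exposure times spanning a whole number of mains half-cycles integrate the same amount of light regardless of phase, and each converts to a shutter angle as shown earlier (function name mine):</p>
<pre><code>def flicker_free_angles(mains_hz, fps=24.0, max_multiples=4):
    """Shutter angles whose exposure spans whole mains half-cycles."""
    half_cycle = 1.0 / (2.0 * mains_hz)  # the light pulses twice per cycle
    return [round(n * half_cycle * 360.0 * fps, 1)
            for n in range(1, max_multiples + 1)]

print(flicker_free_angles(50))  # [86.4, 172.8, 259.2, 345.6] for Europe
print(flicker_free_angles(60))  # [72.0, 144.0, 216.0, 288.0] for the USA
</code></pre>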
<p>In digital video cameras (and DSLRs) there is no real need for a rotary shutter as there is no film negative that needs moving around. Nevertheless, some high-end digital cinema cameras like the Arri Alexa Studio and the Sony CineAlta F65 use rotary shutters to closely simulate the exposure process of film cameras while letting light on the sensor. But the vast majority of digital cameras only have electronic shutters. The camera simply reads out the sensor (or parts of it, for CMOS sensors) simultaneously ending its exposure and resetting it for the next frame.</p>
<p>One advantage of the electronic shutter is the ability for full time exposure (the equivalent of a 360 degree shutter angle). Rotary shutters need to be closed for some time after each exposure so that the sprocket wheel can move the next negative frame into position. No such movement is necessary in digital cameras. This allows for exposure times as long as 1/fps seconds. A disadvantage is the jello effect (image skew) that may happen with fast pans or fast moving subjects. This is a characteristic of the rolling shutter &#8211; typical for CMOS sensors &#8211; and is the result of partial sensor readout: parts of the sensor continue exposing while other parts are being read. High-end sensors with their fast readout times tend to minimize this defect.</p>
<p>Setting the digital camera to 24 fps and a 1/48 sec shutter speed emulates the way film cameras work pretty closely. Some digital cameras (especially DSLRs) don&#8217;t have an option for 1/48 sec, but setting the camera to a shutter speed of 1/50 sec will give results virtually indistinguishable from 1/48 sec.<br />
<br/></p>
<h6><strong>The higher frame rate debate</strong></h6>
<p>Recently there has been a push towards higher frame rates, especially in connection with 3D movies. Peter Jackson is shooting <em>The Hobbit</em> at 48 fps and James Cameron will allegedly shoot <em>Avatar 2</em> and <em>3</em> at 60 fps. The reasoning is that higher frame rates result in a more fluid and crisper image compared to 24 fps with the typical 180 degree shutter angle. This proposed change has been met with polarized reactions. Many find the image bland, TV-like and lacking the dramatic feel of 24 fps (at a 180 degree shutter). Nevertheless, a higher frame rate will probably be accepted as a standard alongside 24 fps. If high frame rates take off, this may lead to a shift in perception about what is considered cinematic in terms of temporal characteristics. But for now, 24 fps and a 180 degree shutter angle define the traditional cinematic look. This is the look games (in cutscenes) and video emulate when trying to be cinematic. You can read more in the articles on <a href="http://www.shutterangle.com/2012/why-48-fps-is-good-for-3d-movies/" title="Cinema and Reality, or Why 48 fps is Good for 3D Movies">high frame rates and 3D movies</a> and <a href="http://www.shutterangle.com/2012/frame-rate-artistic-choice-silent-movies/" title="Frame Rate as Artistic Choice or What Can We Learn from Silent Films">frame rate as artistic choice</a>.</p>
<p>The next part of the Cinematic Look series is on <a href="http://www.shutterangle.com/2012/cinematic-look-dynamic-range/" title="Cinematic Look, Part 3: Dynamic Range">Dynamic Range</a>. </p>
<p><a href="https://www.shutterangle.com/2012/cinematic-look-frame-rate-shutter-speed/">Cinematic Look, Part 2: Frame Rate and Shutter Speed</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2012/cinematic-look-frame-rate-shutter-speed/feed/</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>Frame Rate as Artistic Choice or What Can We Learn from Silent Films</title>
		<link>https://www.shutterangle.com/2012/frame-rate-artistic-choice-silent-movies/</link>
		<comments>https://www.shutterangle.com/2012/frame-rate-artistic-choice-silent-movies/#comments</comments>
		<pubDate>Fri, 27 Apr 2012 22:33:41 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[cinematic look]]></category>
		<category><![CDATA[frame rate]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=636</guid>
		<description><![CDATA[<p>In my previous article I argued that high frame rates are good for 3D. This was based on both philosophical grounds and on reasons connected to ease of perception when watching 3D. But there is another side to the debate, and I have unintentionally alluded to it with arguing that 3D is as a step  [...]</p><p><a href="https://www.shutterangle.com/2012/frame-rate-artistic-choice-silent-movies/">Frame Rate as Artistic Choice or What Can We Learn from Silent Films</a></p>]]></description>
			<content:encoded><![CDATA[<p>In my previous article I argued that <a href="http://www.shutterangle.com/2012/why-48-fps-is-good-for-3d-movies/" title="Cinema and Reality, or Why 48 fps is Good for 3D Movies">high frame rates are good for 3D</a>. This was based on both philosophical grounds and on reasons connected to ease of perception when watching 3D. But there is another side to the debate, and I have unintentionally alluded to it with arguing that 3D is as a step towards realism. So lets have a go at the idea of shooting at a specific frame rate as an artistic choice.<span id="more-636"></span></p>
<p>To better illustrate this we need to go back in time. Since the arrival of sound, the cinema frame rate has been fixed at 24 fps. As such, the choice of frame rate has not been an option for filmmakers. There were attempts to introduce higher frame rates, but these never really took off, mostly because higher rates require more film stock, which obviously drives up the cost of filmmaking. With digital movie making this is less of a concern &#8211; storage is cheap.</p>
<p>But we should actually go further back in time. Before talkies. It is a common misconception that silent films were shot at 16 fps. Cameramen claimed that they had hand-cranked at this speed; some cameras even had indicators for 16 fps to help hand-cranking. This myth was busted by Kevin Brownlow in an article from 1980. During his work on restoring silent films and transferring them to tape he found that 16 fps was not the norm. Cranking speed varied widely between 12 and 26 fps, and speeds higher than 16 fps were common. There was also a tendency towards high frame rates, mostly rooted in the habit of theater managers of having their projectionists crank at higher speeds in order to squeeze more shows into the busy evening schedule. This would inject a slapstick feel even into the most serious drama. To counter that, directors and cameramen would increase shooting speeds to catch up, hoping that projection would look about right.</p>
<p>It is popular knowledge that the frame rate of talkies (24 fps) was selected as the minimum rate that could yield a decent optical soundtrack. This is not exactly so. When Western Electric had to select the frame rate for their process, they conducted a survey in a number of movie theaters. 24 fps turned out to be about the average projection speed in these theaters, so they selected it. In fact, rival sound processes used frame rates as low as 21 fps. The point is, there was nothing really scientific or special about the choice of 24 fps. It was largely arbitrary. So it is to a large degree coincidental that the established look of cinema has the dream-like qualities it happens to have. This was not intentional. But all this by no means diminishes these qualities. One way or another, they are here, and we&#8217;ve come to love them.</p>
<p><div id="attachment_640" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/04/HomeSweetHome.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/04/HomeSweetHome.jpg" alt="Home Sweet Home screenshot" title="Home Sweet Home (1914)" width="262" /></a><p class="wp-caption-text">Home Sweet Home has the frame rate rising from reel to reel. And then the last reel is very slow.</p></div> But let’s go back to silents. There is something peculiar about silent films. Because there was no standard and no strict requirement to shoot at a fixed frame rate, it appears that some directors did shoot at specific frame rates to further their artistic vision. D. W. Griffith in particular would quite consistently shoot at slow frame rates, even as low as 12 fps for parts of <em>The Birth of a Nation</em>. But not only this. Griffith actually shot at least one movie (<em>Home Sweet Home</em>) at different speeds for each reel. The movie contains four separate stories, which makes this choice even more intriguing. Reels were shipped to theaters with notes for projectionists specifying the right speed for each reel. Incidentally, this way he could also achieve slow motion simply through means of slower projection: for that same film, projectionists were instructed to crank the last reel at a very low speed.</p>
<p>Now, how conscious were the directors of the silent era of the artistic side of frame rate is debatable. Maybe they had external reasons for specific choices. For example, low fps means less film stock used. Which leads to a lower price and extended scenes in a reel. Maybe the variations were unintentional. Nevertheless, this provokes additional thought.</p>
<p>So back to the topic at hand. Democratizing frame rate choice can be a good thing. While too many available frame rates may lead to chaos, having 24 fps available plus a higher rate (48 or 60 fps) is something that deserves consideration. Frame rate can be used as a means to further an artistic vision. The obvious example is films striving for perceived realism, or films going for a documentary feel. No doubt some of them can benefit from more fluid and crisp action. This could help make the viewer a part of the scene and let them experience it in a more visceral way. Other (most?) films are better suited to the traditional cinematic illusion. And I believe that existing movies should be left alone and not upconverted to high fps. But having the option to go for a higher frame rate increases the creative possibilities. The same way a filmmaker can choose film or digital, a specific film stock, lighting style, framing, etc., they would be able to choose a frame rate that promotes their vision in the best way possible. It is also worth mentioning that speeds lower than 24 fps are sometimes used in 24 fps movies for artistic effect. This is achieved by either undercranking the camera, or shooting at 24 fps and then dropping frames; in editing, frames are duplicated as many times as needed to achieve the correct speed when projected at 24 fps.</p>
<p>But then again, having too many variables can confuse people and may also lead to wrong or arbitrary choices. Still, the latter is not really a good argument. The lack of restraint or of understanding of a specific variable of filmmaking is not an excuse. Confusion amongst the audience on the other hand is something that should be considered. The audience of the silent film wasn’t conditioned to a specific frame rate in the way that we are with 24 fps. Getting used to high frame rates may lead to 24 fps movies being rendered unwatchable for some viewers. But perhaps this is a risk worth taking.</p>
<p>Then there is the option of variable acquisition frame rate. Even the critics of Jackson and his 48 fps endeavor admit that his aerial, scenery and establishing shots look spectacular. Think of Discovery or National Geographic. Variable frame rates can give us the best of both worlds: motion blur and strobing for dramatic impact, and crisp, fluid images for landscapes and panoramas. This is easy to achieve technically once theaters start supporting high frame rates. For example, a 48 fps master can simply double frames for the 24 fps sequences. This will fully preserve low frame rate aesthetics where necessary.</p>
<p><a href="https://www.shutterangle.com/2012/frame-rate-artistic-choice-silent-movies/">Frame Rate as Artistic Choice or What Can We Learn from Silent Films</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2012/frame-rate-artistic-choice-silent-movies/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Cinema and Reality, or Why 48 fps is Good for 3D Movies</title>
		<link>https://www.shutterangle.com/2012/why-48-fps-is-good-for-3d-movies/</link>
		<comments>https://www.shutterangle.com/2012/why-48-fps-is-good-for-3d-movies/#comments</comments>
		<pubDate>Wed, 25 Apr 2012 16:24:11 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[cinematic look]]></category>
		<category><![CDATA[frame rate]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=599</guid>
		<description><![CDATA[<p>The screening of 48 fps footage from The Hobbit at CinemaCon has certainly divided the opinions in the movie industry and amongst film fans. We have been conditioned for decades to expect and appreciate the jerky and motion blurred look of 24 fps cinema. This new 48 fps fluid and crisp look is  [...]</p><p><a href="https://www.shutterangle.com/2012/why-48-fps-is-good-for-3d-movies/">Cinema and Reality, or Why 48 fps is Good for 3D Movies</a></p>]]></description>
			<content:encoded><![CDATA[<p>The screening of 48 fps footage from The Hobbit at CinemaCon has certainly divided the opinions in the movie industry and amongst film fans. We have been conditioned for decades to expect and appreciate the jerky and motion blurred look of 24 fps cinema. This new 48 fps fluid and crisp look is uncomfortable and unappealing. It is not cinematic. It reminds of cheap vintage television shows. </p>
<p>But 48 fps actually comes with benefits. Well, for 3D at least.<span id="more-599"></span><br />
<br/></p>
<h6><strong>Cinema and reality</strong></h6>
<p><a href="http://www.shutterangle.com/2012/cinematic-look-frame-rate-shutter-speed/" title="Cinematic Look, Part 2: Frame Rate and Shutter Speed">Motion blur and strobing</a> are the main ingredients of what some label <em>the dream effect</em> of cinema. These artifacts of relatively low frame rate and less-than-360 degrees shutter angle separate the cinematic image from reality and their abnormality reinforces the unreal nature of cinema. This is something cinema viewers have come to appreciate. This characteristic of cinema has stayed more or less the same after the arrival of sound, after the introduction of the wide screen and after the advance of CGI.<br />
<div id="attachment_613" class="wp-caption aligncenter" style="width: 610px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/04/TheHobbit.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/04/TheHobbit.jpg" alt="The Hobbit" title="The Hobbit (2012)" width="600" height="338" class="size-full wp-image-613" /></a><p class="wp-caption-text">The Hobbit breaks new ground with its 48 fps presentation</p></div><br />
This dream-like film look is often opposed to the <em>TV look</em> and the <em>video games look</em>. TV tube cameras with their full time open shutter capture fluid motion without strobing (although the image is interlaced). That&#8217;s why some have compared the 48 fps image of The Hobbit to TV shows of the 70&#8217;s. Some have even stated that it feels like behind the scenes video and that <a href="http://badassdigest.com/2012/04/24/cinemacon-2012-the-hobbit-underwhelms-at-48-frames-per-secon/">sets actually look like sets</a>. It is worth noting that the fluidity of analog TV comes from what is essentially a 360 degree shutter angle, while the fluidity of The Hobbit comes from the higher frame rate. Different origins, same feeling. This fluid image appears more life-like. Similar fluidity is expected in video games: 30 fps is the minimum considered acceptable for a video game, with 60 fps or more being preferable. All this is based on the relationship between the interactive responsiveness of game controls and high frame rates.</p>
<div id="attachment_615" class="wp-caption aligncenter" style="width: 490px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/04/Quake3.jpeg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/04/Quake3.jpeg" alt="Quake 3 screenshot" title="Quake 3 Arena" width="480" class="size-full wp-image-615" /></a><p class="wp-caption-text">Quake players would often run the game with a frame rate into the hundreds for maximum fluidity and responsiveness</p></div>
<p><br/></p>
<h6><strong>3D movies</strong></h6>
<p>3D changes things for cinema. Most importantly, it blurs the boundary between the fantasy of cinema and reality by tricking the brain into thinking that what it sees on the screen is not a screen at all, but deep and three dimensional. This is in direct contradiction with the otherworldliness of cinema, as defined by its most famous image artifacts. So, in a sense, the moment we put 3D into the equation, we clash with the dream-like nature of cinema. One might even say that with 3D we give up the dream nature of cinema. But there is more here than just philosophical reasoning. </p>
<p>Problems with non-coinciding eye convergence and eye focus aside, 3D simply does not coexist well with strobing and motion blur. The already strained eyes get an additional load while trying to discern depth planes because of the lack of crisp object edges and the jerky movement. Selective focus does not help either, because the eye can&#8217;t wander freely through the scene (but that&#8217;s unrelated to frame rate). So 3D tells the brain &#8220;This is real!&#8221;, while the motion blur and strobing artifacts obstruct this perception.</p>
<p>So we can argue that 3D simply is not cinematic. With its life-like aspirations 3D is more suitable for interactive representations of reality like games and, ultimately, virtual reality. But &#8211; like it or not &#8211; 3D is here anyway.</p>
<p>So there is really only one logical direction to go from here in the 3D cinema case. Motion blur and strobing need to go. 48 fps will no doubt help tremendously in terms of both fluidity and crispness. And 3D cinema becomes reality &#8211; for good or bad &#8211; thus resolving the contradiction. And if we are lucky, 2D movies will stay 24 fps so that we have our opposition to reality.</p>
<p>The familiar fluidity of TV and video games is the main reason younger generations will most likely embrace 48 fps without question. That, and probably being more open to change. They have absorbed so many fluid images through TV and gaming that it won&#8217;t be surprising if the strobing film look comes to be considered weird and severely outdated in a few years. And the hyper-realism of the &#8220;be in the scene&#8221; factor will no doubt sell the concept to many others. </p>
<p>(There are some further thoughts on the topic and an interesting reference to silent films in my follow-up article on <a href="http://www.shutterangle.com/2012/frame-rate-artistic-choice-silent-movies/" title="Frame Rate as Artistic Choice or What Can We Learn from Silent Films">frame rate as artistic choice</a>.)</p>
<p><a href="https://www.shutterangle.com/2012/why-48-fps-is-good-for-3d-movies/">Cinema and Reality, or Why 48 fps is Good for 3D Movies</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2012/why-48-fps-is-good-for-3d-movies/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Canon Picture Styles: Shooting Flat or Not?</title>
		<link>https://www.shutterangle.com/2012/canon-picture-styles-shooting-flat-or-not/</link>
		<comments>https://www.shutterangle.com/2012/canon-picture-styles-shooting-flat-or-not/#comments</comments>
		<pubDate>Tue, 17 Apr 2012 11:13:20 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Canon DSLR]]></category>
		<category><![CDATA[dynamic range]]></category>
		<category><![CDATA[picture style]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=448</guid>
		<description><![CDATA[<p>When Technicolor released the CineStyle picture profile last year it immediately became a hit amongst Canon DSLR videographers. After all, this is Technicolor. These folks have extensive experience in color science, image processing and digital intermediate. So after this introduction the following  [...]</p><p><a href="https://www.shutterangle.com/2012/canon-picture-styles-shooting-flat-or-not/">Canon Picture Styles: Shooting Flat or Not?</a></p>]]></description>
			<content:encoded><![CDATA[<p>When Technicolor released the CineStyle picture profile last year it immediately became a hit amongst Canon DSLR videographers. After all, this is Technicolor. These folks have extensive experience in color science, image processing and digital intermediate. So after this introduction the following may come as a surprise to you. The thing is,  unless you know exactly why you are using CineStyle, then chances are you will get better results by <strong>not</strong> using it. This article talks about dynamic range and picture styles, and attempts to explain the Why&#8217;s behind the previous statement. Also, we are focused here on Canon picture styles but the principles apply to any DSLR or video camera with 8-bit video.<br />
<span id="more-448"></span><br/></p>
<h6><strong>Dynamic range</strong></h6>
<p><em>Scene dynamic range</em> specifies the ratio between the luminance of the brightest whites and the darkest blacks in a scene. Similarly, <em>camera dynamic range</em> specifies the ratio between the luminance of the brightest whites and the darkest blacks that the camera can capture. In photography this range is usually measured in f-stops (or exposure values, EV) which is a logarithmic measure (with base 2). Each successive stop is double the light. So a dynamic range of 10 stops means a contrast ratio of 2<sup>10</sup>:1 or 1024:1.</p>
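<p>Since stops are base-2 logarithms, converting between stops and contrast ratios is a one-liner (a quick sketch):</p>
<pre><code>import math

def stops_to_ratio(stops):
    return 2.0 ** stops         # e.g. 10 stops is a 1024:1 ratio

def ratio_to_stops(ratio):
    return math.log2(ratio)     # e.g. 1024:1 is 10 stops

print(stops_to_ratio(10))             # 1024.0
print(round(ratio_to_stops(150), 1))  # a 150:1 projection is only ~7.2 stops
</code></pre>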
<p>Current digital sensors have some pretty good dynamic range capabilities. The ARRI Alexa and Red MX sensors capture dynamic range well in excess of 13 stops. The dynamic range of the APS-C sized Canon 550D/t2i sensor is about 11.5 stops and the dynamic range of the full frame Canon 5D mark II is around 12 stops. </p>
<p>The RAW data of a Canon DSLR camera is 14-bit. This means that the analog-to-digital conversion on the sensor yields 14-bit values (between 0 and 16383). This is enough to accurately represent a dynamic range spanning 12 stops with linear encoding. This is all well and good, but unfortunately for videographers that&#8217;s not the signal we get out of the camera.</p>
<div style="float: right; margin-left: 12px; margin-top: 5px; width: 40%; background-color: LightGrey; padding: 10px 10px 0px; border-width: thin; border-color: black;">
<strong>What is a Picture Style?</strong></p>
<p>Most digital cameras feature settings for picture styles. These are sometimes called picture profiles or creative styles (on some Sony cameras). The picture style defines some parameters for the image, most notably color handling. But in this article we are more interested in another property of <em>custom</em> picture styles.<br />
The raw image of the camera is in linear space, which means image pixels record luminosity in a linear fashion. The picture style specifies a curve which is applied to this input image in order to make it presentable. This is usually some sort of S-shaped curve that makes the image more contrasty: shadows and highlights are compressed. Custom Canon picture styles extend this behaviour by allowing additional curves to be applied on top of the default curve. And these additional curves are what we are interested in here.
</div>
<p>See, Canon DSLRs (and all DSLRs, for that matter) capture consumer grade video. By design, video from DSLRs is not meant to be tinkered with but displayed directly on consumer displays. Consumer displays are almost universally 8-bit displays, with some of them actually having 6-bit panels and dithering colors up to 8-bit. Unsurprisingly, consumer level video is 8-bit too. Blu-ray discs, for example, have 8-bit video. Canon DSLR video is also 8-bit. And that&#8217;s where picture styles come into play.</p>
<p>The picture style tells the camera how to put all that RAW dynamic range into 8 bits (with <a href="http://www.cambridgeincolour.com/tutorials/gamma-correction.htm" title="Understanding gamma correction" target="_blank">gamma encoding</a> on top of it). Back in the &#8217;90s Kodak created the Cineon system for scanning and digital intermediate of film. The digitized data was 10-bit, and the folks at Kodak came to the conclusion that 8-bit was good for representing around 6 2/3 stops, or less than 7 stops of dynamic range. Encode more range, and you start getting banding issues in the grading process. Digital cameras now routinely output between 8 and 9 stops of dynamic range in 8-bit video and JPEGs. This is mostly achieved by rolling off the highlights and pressing down the shadows; most of the coding space is reserved for the midtones. This is the well-known S-curve. So within this encoded dynamic range of 8-9 stops there is good detail in, say, 5-6 stops in the mids.</p>
<p>The camera sensor has a lower signal-to-noise ratio in the blacks and a higher signal-to-noise ratio in the whites. This is because noise is more or less evenly distributed across the sensor, but fewer photons hit the sensor in the darker pixels. For this reason, the 8-9 stops of dynamic range baked into the encoded video (or still picture) are taken from the upper end of the RAW dynamic range. This ensures a cleaner image. Incidentally, that&#8217;s also the reason RAW images usually have more exposure latitude for pushing shadows up than for pulling highlights down.<br />
<br/></p>
<h6><strong>The case of flat picture styles</strong></h6>
<div id="attachment_463" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/04/kodak5219.png"><img src="http://www.shutterangle.com/wp-content/uploads/2012/04/kodak5219.png" alt="Kodak 5219 Characteristic Curves" title="Kodak Vision3 5219" width="262" class="size-full wp-image-463" /></a><p class="wp-caption-text">The Kodak Vision3 5219 motion picture stock has a flat characteristic curve over a huge exposure range</p></div>
<p>Flat picture styles try to emulate how film negative captures the scene. Modern negative stocks can capture full detail in some 11-12 stops of dynamic range. Compare that to the 6 detailed stops in 8-bit video meant for display. On top of that, there is even more exposure latitude in the rolled off highlights above the shoulder and in the dark tones below the toe. Because each stop is allocated an equal increment of negative density, the film is said to work in Log space (from logarithmic). This may be somewhat confusing at first, but bear in mind that the f-stop axis is already logarithmic: each successive stop is double the light of the previous stop. Having that much dynamic range captured in full detail allows the cinematographer a lot of latitude in the way the scene is shot. Exposure errors can be fixed and decisions about the overall tone of the shoot can be deferred to post-production. The film is then printed on a contrasty release stock which brings contrast back for presentation. (You can read a more detailed introduction on <a href="http://www.shutterangle.com/2012/cinematic-look-dynamic-range/" title="Cinematic Look, Part 3: Dynamic Range">film dynamic range</a> here.)</p>
<div id="attachment_468" class="wp-caption alignleft" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/04/flat.png"><img src="http://www.shutterangle.com/wp-content/uploads/2012/04/flat.png" alt="Log space" title="Ideal logarithmic distribution (click to enlarge)" width="262" class="size-full wp-image-468" /></a><p class="wp-caption-text">The ideal log space encoding is represented by a straight line over exposure in stops</p></div>
<p>Flat picture styles aim to allow for similar exposure latitude and post-production flexibility: to increase the recorded dynamic range in general, and to keep good detail throughout this recorded dynamic range. Flat picture styles do this by distributing the coding space equally over the exposure range: each stop of light is encoded with the same number of values. So, an ideal Log distribution is a straight line. For example, if we want to encode 10 stops in 8 bits, we will have around 25 values to encode shades in each stop. In practice, the curve is rarely totally flat because there is too much noise in the low end of the range. Lifting the blacks also lifts the noise. So allotting a full bucket of values for them would be a waste of coding space. That&#8217;s why the extreme darks get compressed a bit even in flat picture styles. </p>
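<p>For the technically inclined, here is a minimal sketch of such an &#8220;ideal&#8221; flat encoding &#8211; the same number of code values for every stop. The function name and the 10-stop figure are illustrative, not taken from any real picture style.</p>
<pre>
import numpy as np

STOPS = 10       # dynamic range to encode (illustrative)
MAX_CODE = 255   # 8-bit output

def ideal_log_encode(linear, stops=STOPS, max_code=MAX_CODE):
    """Map a linear scene value in (0, 1] to an 8-bit code,
    spending the same number of codes on every stop."""
    linear = np.clip(linear, 2.0 ** -stops, 1.0)  # floor at the darkest stop
    stops_below_white = np.log2(linear)           # 0 at white, -stops at black
    return int(round((1 + stops_below_white / stops) * max_code))

# Each stop down from white costs the same ~25.5 code values:
for s in range(11):
    print(f"{-s:3d} stops: code {ideal_log_encode(2.0 ** -s)}")
</pre>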
<p>One clarification, before we go further. Flat picture styles should not be seen as a way to increase dynamic range. In fact, they may not increase DR at all and still be useful. Usually there is some dynamic range increase over the factory picture styles. These additional stops (or a fraction of a stop) are almost always in the shadows, and they are noisy, compressed and lacking color fidelity. There is no real detail in there; just some notion of tonal change, at best. The real gain is in the stops which are already there at the bottom of the curve in the factory picture styles. The flat picture style just expands them and makes the detail in these stops available in post. In that sense, flat picture styles increase the <em>usable dynamic range</em>.</p>
<p>As you may have guessed, it is not all song and dance with flat picture styles. They have a serious weakness. The 8-bit coding space simply sucks for representing extended dynamic range. The more dynamic range you encode, the less precision there is in tonality distribution in the usable dynamic range. That means that video shot with flat picture styles breaks faster when pushed in post; posterization (banding) happens sooner when stretching the range around. In a sense, what we gain in exposure latitude, we lose in tonal precision and gradation. So, flat picture styles offer a trade-off, and not the solution to all problems.</p>
<p>The flat picture style is in essence a production format, the same way Super 35 negative film or RAW video are production formats. Flat picture styles are meant for post and not for presentation. Usually, the image needs to get some contrast treatment to make it presentable.<br />
<br/></p>
<h6><strong>Technicolor CineStyle</strong></h6>
<p>The <a href="http://www.technicolor.com/en/hi/theatrical/visual-post-production/digital-printer-lights/cinestyle" title="Technicolor CineStyle" target="_blank">Technicolor CineStyle picture style</a> was developed by Technicolor in order to assist their digital intermediate process. Cinematographers often utilize different cameras on a shoot, including Canon DSLR cameras. And Canon DSLR video doesn&#8217;t mesh well during DI with scanned film negative footage and footage from high-end digital cameras that shoot Log space video. That&#8217;s why they developed CineStyle to have the output of the Canon DSLR in Log space. It came as a surprise to Technicolor that so many people actually donwloaded CineStyle. The recommended settings for Technicolor CineStyle are -4 Contrast and -2 Saturation. And it also comes with an optional LUT which can be used in most NLEs or compositors and applies a S-curve to the image to make it presentable.</p>
<p>I&#8217;ve measured the dynamic range and values distribution of CineStyle through multiple exposures of a Danes-Picta BST13 reflective grayscale chart with steps of 1/3 stop (a copy of Kodak Q-13). This test makes no claim to scientific accuracy but nevertheless gives a pretty good idea of what&#8217;s happening with the image. The test was done with a Canon 550D/t2i but the results should apply to any Canon DSLR camera.</p>
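<p>If you want to run a similar test yourself, the gist is simple: sample the gray patches in the recorded frame and plot code value against exposure in stops. A minimal sketch (the patch values below are placeholders, not my actual measurements):</p>
<pre>
import matplotlib.pyplot as plt

# Plot 8-bit code values sampled from the chart patches against exposure.
# The numbers are placeholders for illustration, NOT the measured data.
stops = [-4, -3, -2, -1, 0, 1, 2]                # relative to mid-gray
code_values = [25, 42, 68, 103, 142, 183, 221]   # sampled patch values

plt.plot(stops, code_values, marker="o")
plt.xlabel("Exposure (stops from mid-gray)")
plt.ylabel("8-bit code value")
plt.title("Picture style response curve (placeholder data)")
plt.show()
</pre>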
<div id="attachment_472" class="wp-caption aligncenter" style="width: 510px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/04/chartDR.png"><img src="http://www.shutterangle.com/wp-content/uploads/2012/04/chartDR.png" alt="Technicolor CineStyle dynamic range distribution" title="Technicolor CineStyle dynamic range (click to enlarge)" width="500" height="420" class="size-full wp-image-472" /></a><p class="wp-caption-text">Technicolor CineStyle dynamic range distribution plotted against the Standard and Faithful Canon picture styles</p></div>
<p>There are a couple of peculiarities about Technicolor CineStyle. Both can be seen in the chart.<br />
First, somewhat controversially, CineStyle sets the black point at 16. The values [0..15] are practically unused. Second, and not as obvious, the whites are brought down significantly. There are still values recorded all the way up to 255, but a whole lot of the value range is dedicated to the highest 1/2 stop. This is quite the opposite of the typical knee and knee slope approach to handling extreme highlights. It is worth noting that there is no official info from Technicolor setting things straight about this. There is a notice on their site, saying &#8220;a detailed white paper and case study will be developed in the near future&#8221;. But it&#8217;s been there for ages with no white paper on the horizon (at the time of writing).</p>
<p>Various explanations exist for this range distribution. Mike Seymour <a href="http://www.fxphd.com/blog/squeezing-the-most-out-of-a-5d-mk-ii-or-alexa-signal/" target="_blank">advocates the idea</a> that lifting the black point reduces noise by allowing negative values (with respect to black at 16) and having the smoothed average exactly at black (read his post for details). This makes sense but there are problems with the idea. For one, even at high ISO settings with lots of sensor noise there are only rarely any pixels with values below 16 (and none below 14 in my tests). Another popular explanation says that the h.264 codec doesn&#8217;t like small values, so lifting the blacks to 16 protects them from being butchered by the codec. Finally, there is the idea that lifting the blacks and limiting the whites is simply done in order to put all the important info in the broadcast legal range of 16-235. Even though there are values all the way up to 255, important whites are pulled down below the 235 limit. </p>
<p>My take on this matter is different. I am convinced that the lifted black point is entirely connected to film and digital cinema Log workflows. For example, in 10-bit film scan formats like Cineon/DPX the black point (D-min) is placed at 95 for reasons that I won&#8217;t delve into here (it has to do with the uneven grain structure of unexposed/underexposed film). Converted to 8 bits this would be around 24. Incidentally, D-min is around 4 2/3 stops down from mid-gray (18% gray). In my measurements of CineStyle&#8217;s distribution 4 2/3 stops below mid-gray came at around value 25. This theory also explains the limited precision in the straight portion of the curve (due to its relatively small slope), which results in the seemingly wasteful allocation of values in the extreme whites. Both the lifted black and the low gamma of the curve are needed for seamless intercutting of footage from Canon DSLRs with film scans and digital cinema Log footage without any significant import pain. Which is the reason for the creation of CineStyle in the first place.</p>
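<p>The conversion is easy to sanity-check (my own arithmetic; Technicolor has published no spec):</p>
<pre>
# Scale the Cineon/DPX 10-bit black point into 8-bit terms.
dmin_10bit = 95                      # Cineon/DPX D-min code value
dmin_8bit = dmin_10bit * 255 / 1023  # ~23.7, rounds to 24
print(round(dmin_8bit))              # close to the measured ~25 at
                                     # 4 2/3 stops below mid-gray
</pre>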
<p>And there is, of course, the possibility that Technicolor lifted the blacks for very specific reasons connected to their DI workflow. Hopefully, the white paper will materialize some day to clear this point. But whatever the reason, limiting the useful value range doesn&#8217;t look like a good idea when recording video in 8 bits. It decreases even further the already precious space allotted to each stop and invites banding artifacts in post. Still, many videographers use Technicolor CineStyle because, well, it&#8217;s Technicolor. But there are better alternatives in the Canon DSLR world. At least in the likely case that you are not doing your DI at Technicolor.</p>
<p>One side effect of the lifted black point is an increase in perceived noise. Pushing black up to 16 also lifts the noise always present in the shadows and makes it brighter and thus more visible. This is not noise created by CineStyle; the noise exists in other flat picture styles too, but there it is darker and harder to notice.<br />
<br/></p>
<h6><strong>Other flat Canon picture styles</strong></h6>
<p>Even before Technicolor CineStyle there were flat picture styles attempting to increase the useful dynamic range. The easiest option is to use one of the lower contrast factory Canon picture styles &#8211; Neutral or Faithful &#8211; and decrease the Contrast setting down from the default 0 to -4. You can see Faithful at -4 Contrast in the chart above. The curve of Faithful flattens later than CineStyle&#8217;s but stays flat all the way to the limit of 255. This means more range for each stop in the mids and the highlights, compared to CineStyle. CineStyle has more detail in the shadows and the extreme highlights though. Neutral looks identical to Faithful in terms of dynamic range. The difference is in the color treatment. Neutral is closer to Standard, and Faithful is closer to Portrait. Canon claims that Faithful offers colors closest to the original scene colors. Note that the difference in contrast between Standard, Portrait and Landscape on one side, and Neutral and Faithful on the other, is small. In fact, Canon says it is mostly due to color tone balancing.</p>
<p>When the basic styles at -4 Contrast are not enough for the job, other custom picture styles may come in handy. <a href="http://marvelsfilm.wordpress.com/marvels-cine-canon/" title="Marvels Cine picture style" target="_blank">Marvels Cine</a> is one of the oldest attempts at a flat picture style. The current version, 3.4, is based on the Neutral Canon picture style and is designed to allow correct exposure evaluation through the Canon DSLR LCD screen. This is a common disadvantage of flat picture styles: by lifting the range they brighten the LCD screen of the camera, which makes exposure judgement through the screen difficult. Some people actually compose and set up exposure in another style (Standard, for example) and switch to the flat style for recording.</p>
<div id="attachment_546" class="wp-caption alignright" style="width: 269px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/04/histo-styles.png"><img src="http://www.shutterangle.com/wp-content/uploads/2012/04/histo-styles.png" alt="Portrait and Flaat_12p Canon picture styles" title="Portrait and Flaat_12p Canon picture styles" width="259" height="310" class="size-full wp-image-546" /></a><p class="wp-caption-text">Histogram for a low contrast scene. Portrait (top), Flaat_12 (middle), Flaat_12 after auto levels (bottom). Adding contrast stretches the values and breaks the histogram.</p></div>
<p>The <a href="http://similaar.com/foto/flaat-picture-styles/index.html" title="Flaat picture styles" target="_blank">Flaat family of picture styles</a> is more recent. It includes 4 picture styles &#8211; Flaat_09, Flaat_10, Flaat_11, Flaat_12 &#8211; claiming 9, 10, 11 and 12 stops of dynamic range respectively. But remember what we said above. Dynamic range and usable dynamic range are two different things. Digging too deep in the darks is not recommended due to both noise extraction and loss of tonal precision in the mids. These styles have the flattest curve from the shadows to the highlights compared to both Technicolor CineStyle and Marvels Cine. Flaat_10 and Flaat_9 are the styles most will be interested in, as they offer good flat tonal distribution without excessive noise in the shadows. The Flaat picture style family comes in two variations: one based on Neutral and one based on Portrait. This is handy, considering how color handling is the main (but sometimes forgotten) reason for selecting a specific picture style in the first place.</p>
<p>Skin colors are one area that can be negatively affected by flat picture styles. Having the right skin hues won&#8217;t help much if tonality is lost due to insufficient tonal precision in this range. Lacking gradation in the range of the skin tones is a highway to plasticky skin. This is the reason Marvels Cine v3.4 supposedly keeps the linearity of the values (does not flatten them) in the most common range for skin.<br />
<br/></p>
<h6><strong>Not shooting flat: getting the desired look in camera</strong></h6>
<p>The position of this section in the article may seem illogical. Why haven&#8217;t we started with not shooting flat in the first place? I believe that only by getting acquainted with the shortcomings of flat shooting can one truly appreciate the alternative.</p>
<p>Getting the desired look in the camera has been a popular mantra amongst cinematographers for decades and also a point of pride, since it is a significant demonstration of skill and knowledge. Utilizing specific film stocks to achieve a specific look was a main part of the process. In the digital realm picture styles can be thought of as the analog of film stocks. By selecting a specific picture style and playing with the style&#8217;s parameters one can create various looks. While the factory Canon picture styles offer some decent variety and will work for lots of people, there are tons of custom picture styles online, each targeting a specific look, including simulations of popular film stocks like Kodachrome, Velvia and Ektachrome.</p>
<div id="attachment_486" class="wp-caption aligncenter" style="width: 586px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/04/neutralfaithful.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/04/neutralfaithful.jpg" alt="Canon Picture Style: Neutral and Faithful" title="Neutral and Faithful Picture Style (click to enlarge)" width="576" height="768" class="size-full wp-image-486" /></a><p class="wp-caption-text">Neutral (above) has more magenta in the blues and brighter greens, Faithful has more pinkish skin tones (the building to the right posing as skin double)</p></div>
<p>The point is, getting as close as possible to the final look during the shoot will limit the need for adjustments in post. And post hurts the image. The heavily compressed 8-bit video simply is not meant to be abused in post.<br />
<br/></p>
<h6><strong>DO&#8217;s and DON&#8217;Ts</strong></h6>
<ul>
<li>Getting the look you want in-camera is the best solution. This means less tinkering in post, with fewer opportunities to break the image. It just so happens that this is not always possible. Nevertheless, when shooting in controlled conditions and for simple projects that should be the default option.</li>
<li>Shooting flat is a trade-off between usable dynamic range and tonal precision throughout that range. Try to use the most contrasty picture style that does not clip or limit important highlights and does not excessively compress blacks. This ensures the best possible tonal precision.</li>
<li>Don&#8217;t bother with CineStyle unless you intercut with film or digital cinema footage. For better tonal precision stick to Neutral/Faithful (and contrast set depending on the scene) or Flaat_10.</li>
<li>Flat picture styles are mostly useful in scenes with lots of contrast and lots of details. Having large gradients in the frame may pose problems with banding in post when you try to add some contrast back to the image. Don&#8217;t be afraid to shoot flat, just be ready for the consequences. Shooting flat when the scene is low contrast makes no sense as you will compress an already limited range even further. Avoid that.</li>
<li>Flat picture styles can be thought of as a production format. But it may still be acceptable to shoot flat if your goal is a low contrast look. Even then, it is easier to take contrast out in post than the opposite.</li>
<li>Don&#8217;t be afraid to play with the Color Tone parameter of your picture style. This tweaks skin tones from Magenta/Pink to Yellow and can help when skin does not look right. </li>
<li>Mixing picture styles for a project can work, but may also introduce matching problems in post. Stylistic differences between sequences are a good reason to use a different picture style. Difference in scene contrast is a good reason for that only if the two picture styles match in color.</li>
<li>Picture style color is at least as important as dynamic range, if not more. There are so many videos with crappy grading out there that it is not even funny (hello, Magic Bullet Looks). Test picture styles before committing to them. Canon&#8217;s Digital Photo Professional can apply any picture style to a RAW image, so it is easy to see for yourself. Don&#8217;t just use a style because it is popular, or because you&#8217;ve seen a nice video shot with it. Test. Know your image before you&#8217;ve even taken it out of the memory card.</li>
</ul>
<p><a href="https://www.shutterangle.com/2012/canon-picture-styles-shooting-flat-or-not/">Canon Picture Styles: Shooting Flat or Not?</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2012/canon-picture-styles-shooting-flat-or-not/feed/</wfw:commentRss>
		<slash:comments>24</slash:comments>
		</item>
		<item>
		<title>Aspect Ratio Choice for a Film or Video: Artistic Considerations</title>
		<link>https://www.shutterangle.com/2012/film-video-aspect-ratio-artistic-choice/</link>
		<comments>https://www.shutterangle.com/2012/film-video-aspect-ratio-artistic-choice/#comments</comments>
		<pubDate>Thu, 05 Apr 2012 13:59:23 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[aspect ratio]]></category>
		<category><![CDATA[composition]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=347</guid>
		<description><![CDATA[<p>If you ask budding cinematographers about the ways to contribute visually to the story, many will mention lighting and framing. And maybe even lens choice for perspective and dynamics control. It may not be immediately obvious, but the choice of a suitable aspect ratio contributes just as much. The aspect ratio  [...]</p><p><a href="https://www.shutterangle.com/2012/film-video-aspect-ratio-artistic-choice/">Aspect Ratio Choice for a Film or Video: Artistic Considerations</a></p>]]></description>
			<content:encoded><![CDATA[<p>If you ask budding cinematographers about the ways to contribute visually to the story, many will mention lighting and framing. And maybe even lens choice for perspective and dynamics control. It may not be immediately obvious, but the choice of a suitable aspect ratio contributes just as much. The aspect ratio commands the geometrical shape of the picture and thus defines the base for in-frame composition. This makes it one of the important choices for any video production, be it a feature, a short, an ad, or a music video. In the age of digital image acquisition and digital intermediate it is much easier to be independent from the capture medium in aspect ratio choice. This is even more true in the context of digital content distribution online. And the artistic reasons for a specific aspect ratio choice matter even more than the technical ones.</p>
<p><span id="more-347"></span></p>
<p>I wrote <a href="http://www.shutterangle.com/2012/cinematic-look-aspect-ratio-sensor-size-depth-of-field/" title="Cinematic Look, Part 1: Aspect Ratio, Sensor Size and Depth of Field">here</a> that cinema is usually associated with widescreen aspect ratios. And this is indeed the case for many. In fact, the general expectation and audience conditioning that cinema is widescreen is one reason people automatically shoot videos wide when they want to get the film look. Of course, when theatrical projection is intended, it does put some limits on the aspect ratio due to the availability of standard projection equipment and projection gates at theaters &#8211; or rather, the unavailability of projection gates for more exotic aspect ratios. But there are workarounds for this in the digital age. One can simply pad the frame of choice with black bars to fit it in the closest standard ratio. This is much easier to do now when no optical resizing is involved.</p>
<div id="attachment_98" class="wp-caption aligncenter" style="width: 509px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/03/aspects.jpg"><img class="size-full wp-image-98" title="Movie aspect ratios" src="http://www.shutterangle.com/wp-content/uploads/2012/03/aspects.jpg" alt="Movie aspect ratios" width="499" height="348"/></a><p class="wp-caption-text">Cinema aspect ratios</p></div>
<p>So how do you approach film aspect ratio choice from an artistic point of view? It is all connected to composition. The aspect ratio should set the frame which is most appropriate in the context of the film concept; a frame offering compositional opportunities that best serve the story or the idea behind the story. It is really as simple as that, and as difficult as that. Different aspect ratios have different characteristics best suited for different scenarios.</p>
<div id="attachment_359" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/04/bladerunner.jpg"><img class="size-full wp-image-359" title="Blade Runner (1982)" src="http://www.shutterangle.com/wp-content/uploads/2012/04/bladerunner.jpg" alt="Blade Runner (1982) screenshot" width="262" /></a><p class="wp-caption-text">The &quot;haircut&quot; maximizes the onscreen area of the face</p></div>
<p>Many people associate the grandeur of theatrical viewing with 2.39:1 images (2.35:1), historically called <a href="http://en.wikipedia.org/wiki/CinemaScope" title="CinemaScope in Wikipedia">CinemaScope</a> after the anamorphic film process of the same name, and lovingly known simply as <em>Scope</em>. This is also the ratio that most movie theaters are built for nowadays. This aspect ratio is obviously good for putting vistas on display. That&#8217;s why it is the typical ratio of choice for epics. Fritz Lang joked about CinemaScope in a Jean-Luc Godard film: &#8220;It wasn&#8217;t meant for human beings; just for snakes and funerals.&#8221; &#8211; a possible allusion to a saying by George Stevens that CinemaScope is &#8220;a system of photography that pictures a boa constrictor to better advantage than a man&#8221;. It really works well with large scenes and groups of people. But giving actors presence can be difficult in such a wide frame. It is apparent when looking at early CinemaScope pictures that cinematographers were struggling with close-ups. Applying the traditional clean framing often looks awkward. So, to maximize the area they occupy on screen, actors in CinemaScope pictures routinely get &#8220;haircuts&#8221; &#8211; with the tops of their heads cut by the frame &#8211; sometimes even in medium shots. On the other hand, getting lost in the frame can be used to an advantage for the purpose of isolation. And then there is the option to play with figures separation and opposition. John Boorman actually likes Scope for character drama because the wide ratio allows him to put distance between the characters, to visualize relations by slotting them in the opposite ends of the frame.</p>
<p>The wide frame also increases subject placement possibilities and presents the opportunity to add tension through highly imbalanced compositions. And there is an added sense of freedom and space to widescreen aspect ratio frames. This last bit is partially connected to the fact that the wide frame naturally lends itself to wider lenses, which add air and space to the view. Sergio Leone had the dynamics of the Scope frame under full control. He shot almost exclusively in 2.35:1. He practically invented the super tight shot in wide frames and his movies are a masterclass in spreading action and subjects across the entire frame and balancing it. The wide frame, with its long base, can also play well with formal symmetry. Wes Anderson movies stand as perfect examples.</p>
<div id="attachment_364" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/04/TheArtist.jpg"><img src="http://www.shutterangle.com/wp-content/uploads/2012/04/TheArtist.jpg" alt="4:3 aspect ratio screenshot from The Artist" title="The Artist (2011)" width="262" /></a><p class="wp-caption-text">Actors easily dominate the 4:3 frame</p></div>
<p>At the other end of the spectrum are the more squarish aspect ratios 1.33:1 and 1.37:1 (1.375:1), sometimes called <em>open matte</em> because they utilize pretty much the whole negative frame. These classic ratios are rarely used for movie productions today, being associated with TV. They are also the simplest to use from a storytelling point of view. Michel Hazanavicius, who recently used the Academy ratio on <em>The Artist</em>, notes that the 4:3 frame forces you to have a single important piece of information in the frame, in contrast with the need to manage the super wide frame when shooting Scope. The taller frame can appear cramped or even claustrophobic, but is also more intimate. It gives presence to actors, mostly in close-ups, but also in medium shots. The actor is always more or less centered in the frame and fills the screen. After all, these are the native ratios of the glamour Hollywood close-up and the glorious romantic two-shot.</p>
<p>4:3 is close to the stills photography ratio of 1.5:1 and also a very popular canvas ratio for paintings. So it is much easier to transfer compositional skills and experience from stills shooting and painting. Stanley Kubrick, who had extensive experience as a photographer, preferred this format. He was very specific about the aspect ratio choice of his films and even wrote letters to theater projectionists with instructions about it. He only ever shot two truly wide movies &#8211; <em>Spartacus</em> and <em>2001: A Space Odyssey</em> &#8211; the first in Super Technirama 70, the second in Super Panavision 70 (presented in Cinerama); both intended for 2.20:1. Most of his films were shot with the full frame in mind even though his last three movies were actually projected at 1.66:1 and 1.85:1.</p>
<p>In the middle we have the flat widescreen aspect ratios 1.85:1 and 1.66:1 (the long time European flat wide standard), and 1.78:1 (coming from HDTV). These are the go-to formats for character drama. They share many of the advantages of Scope, allowing for some compositional variation. But they are easier to shoot in and more balanced, which makes them a good starting point when considering an aspect ratio for a project.</p>
<p>If a ratio doesn&#8217;t present itself immediately, it is probably a good idea to see what kind of scenes dominate the story and choose accordingly. It may be difficult to select an aspect ratio for a diverse project where some scenes will work better in one ratio and other scenes in another. This is perhaps best resolved following one&#8217;s intuition. A more analytical approach would be going for the wider ratio and then using various tricks to help accommodate character centric scenes. These may include &#8220;naturally&#8221; occurring frames within frames, various vignettes or even shooting entire sequences in a different ratio.</p>
<p>There are also some lens considerations connected to video aspect ratio choice. Let&#8217;s say during a flat shoot we need to frame a close-up with consideration for 1.85:1 and 2.39:1. In widescreen aspect ratios close-ups are almost always restricted by the top and bottom framelines. So in order to achieve the same frame restriction we need to either move the camera back for Scope &#8211; and thus alter perspective &#8211; or use a wider lens. Loosely speaking, shooting flat for wider frame targets invites the use of wider lenses. Some cinematographers love Scope because they are fond of the various anamorphic artefacts arising in the anamorphic CinemaScope process. More recently, videographers have started to create the look in digital video through anamorphic adaptors on digital cameras, including DSLR cameras. The expanded result is very wide when sourced from HD cameras, usually up to 3.56:1 for 2x anamorphic lenses. Consequently, anamorphic video is virtually always presented in very wide ratios (2.39:1 and wider even after being cropped at the sides). In this particular case anamorphic lens aesthetics govern over aspect ratio choice.</p>
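<p>To put a number on the lens point above (my own arithmetic, for illustration only): with the same camera position and the same sensor width, the frame height scales as width divided by aspect ratio, and so does the focal length needed to keep the same top/bottom restriction on a close-up.</p>
<pre>
# Focal length needed to keep the same vertical framing when moving
# from a 1.85:1 extraction to a 2.39:1 extraction of the same width.
def scope_equivalent_focal(focal_mm, from_ratio=1.85, to_ratio=2.39):
    return focal_mm * from_ratio / to_ratio

print(round(scope_equivalent_focal(50)))  # a 50mm framing for 1.85:1
                                          # needs ~39mm for 2.39:1
</pre>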
<p><a href="https://www.shutterangle.com/2012/film-video-aspect-ratio-artistic-choice/">Aspect Ratio Choice for a Film or Video: Artistic Considerations</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2012/film-video-aspect-ratio-artistic-choice/feed/</wfw:commentRss>
		<slash:comments>11</slash:comments>
		</item>
		<item>
		<title>Book Review: Pictorial Composition (Composition in Art) by Henry Rankin Poore</title>
		<link>https://www.shutterangle.com/2012/book-review-pictorial-composition-in-art-henry-rankin-poore/</link>
		<comments>https://www.shutterangle.com/2012/book-review-pictorial-composition-in-art-henry-rankin-poore/#comments</comments>
		<pubDate>Mon, 26 Mar 2012 12:42:26 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Book Reviews]]></category>
		<category><![CDATA[book review]]></category>
		<category><![CDATA[composition]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=299</guid>
		<description><![CDATA[<p>
A painting or a photo captures a moment. The moment may convey a story, or it may just freeze a beautiful scene. But no matter how good the picture is from a technical point of view, it is the composition that binds the components together. So the study of composition is concerned with the  [...]</p><p><a href="https://www.shutterangle.com/2012/book-review-pictorial-composition-in-art-henry-rankin-poore/">Book Review: Pictorial Composition (Composition in Art) by Henry Rankin Poore</a></p>]]></description>
			<content:encoded><![CDATA[<div style="float: left; margin-right: 12px; margin-top: 5px;"><img alt="Pictorial Composition (Composition in Art)" title="Pictorial Composition (An Introduction)" src="http://www.shutterangle.com/wp-content/uploads/2012/03/pictorialcomposition.jpg"/></div>
<p>A painting or a photo captures a moment. The moment may convey a story, or it may just freeze a beautiful scene. But no matter how good the picture is from a technical point of view, it is the composition that binds the components together. So the study of composition is concerned with the arrangement of the picture elements within the frame.</p>
<p>But how important is composition in video, and what does a book about painting composition have to do with video? A video shot is really just a superset of still pictures. Video adds another dimension to still frames: time; but in essence video is just a sequence of frames. So it is important to be familiar with pictorial composition. Camera and/or subject movement or inherent subject interest can sometimes mask crappy composition. But this masking is really just that: it won&#8217;t really hide bad composition, just delay its discovery. And in order to successfully tackle moving images a firm grasp of still composition is a requirement. Interestingly, paintings are often much more complex in terms of composition compared to cinema shots. This is because, generally, the eye has more time to explore a painting and appreciate the details.</p>
<p><span id="more-299"></span></p>
<p>Great composition can also be a hindrance. A gritty movie can be hindered by beautifully composed frames. Good formal composition also implies deliberation so it may counter the feeling of immediacy. Selecting an appropriate approach to composition is often an important artistic decision: adding pictorial interest to a shot or refraining from doing so can lead to dramatically different results. In any case, as is often the matter with art, it is good to know the rules before breaking them.</p>
<p>It is always best to learn from the masters and <em><a target="_blank" rel="nofollow" href="http://www.amazon.com/gp/product/0486233588/ref=as_li_tf_tl?ie=UTF8&#038;camp=1789&#038;creative=9325&#038;creativeASIN=0486233588&#038;linkCode=as2&#038;tag=revmaz-20" title="Pictorial Composition (Composition in Art) by Henry Rankin Poore">Pictorial Composition (Composition in Art)</a></em> does exactly this. The classic book teaches the basics of composition through the works of the masters. This is a short book; being around a hundred pages it doesn&#8217;t linger over the material. The book does not underestimate the reader, so one may occasionally need to reread passages. The text is concise and there are lots of reproductions and sketches to illustrate the points discussed.</p>
<p>The first and most important subject covered is balance: picture elements weight, formal balance, balancing on various axes, balancing by opposition of lines and shapes, etc. Other topics include: transitions inside an image; circular and angular composition; lines; composing with one, two or three figures and with groups of figures; the compositional characteristics of light and dark tones.</p>
<p>The patient reader will emerge out of this book with a decent understanding of composition fundamentals. This will imbue a greater appreciation of the various visual forms of art and will also enable an analytical approach to image judgement. Being critical is a good thing; honing one&#8217;s analytical skills never did hurt anyone. Neither does the ability to articulate what you intuitively feel about an image. Ultimately, this can lead to better shots and better understanding why something works out or not.</p>
<p>What is <em>Pictorial Composition (Composition in Art)</em> not? It is not concerned with motion, so it is not about composing fancy moving shots (although it is up to you to apply what you&#8217;ve learnt in any way imaginable). It is not a guide or a &#8220;how to&#8221; book: it is less concerned with process and more with the perception and analysis of the result.</p>
<p><a href="https://www.shutterangle.com/2012/book-review-pictorial-composition-in-art-henry-rankin-poore/">Book Review: Pictorial Composition (Composition in Art) by Henry Rankin Poore</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2012/book-review-pictorial-composition-in-art-henry-rankin-poore/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Cinematic Look, Part 1: Aspect Ratio, Sensor Size and Depth of Field</title>
		<link>https://www.shutterangle.com/2012/cinematic-look-aspect-ratio-sensor-size-depth-of-field/</link>
		<comments>https://www.shutterangle.com/2012/cinematic-look-aspect-ratio-sensor-size-depth-of-field/#comments</comments>
		<pubDate>Thu, 22 Mar 2012 23:55:39 +0000</pubDate>
		<dc:creator>cpc</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[aspect ratio]]></category>
		<category><![CDATA[cinematic look]]></category>
		<category><![CDATA[depth of field]]></category>
		<category><![CDATA[sensor size]]></category>

		<guid isPermaLink="false">http://www.shutterangle.com/?p=14</guid>
		<description><![CDATA[<p>With the advent of the digital SLR as a video capturing device in recent years there is a lot of raving on the internet about the "cinematic look" one can achieve with DSLRs. <em>Cinematic look</em> is often opposed to <em>video look</em> or <em>TV look</em>. On forums and blogs one can read both delusions and truth regarding this distinction. As is often the case with any hype - hype has the tendency to self-amplify - a lot of noise gets picked up and reiterated in such a discussion. This series of articles will attempt to examine in some detail the various characteristics of the cinematic look and then explore how they relate to the image of video capturing devices, including HDDSLRs. Hopefully, some myths will be cleared in the process. This first part in the series is focused on aspect ratios and sensor sizes and the closely related topic of depth of field.</p><p><a href="https://www.shutterangle.com/2012/cinematic-look-aspect-ratio-sensor-size-depth-of-field/">Cinematic Look, Part 1: Aspect Ratio, Sensor Size and Depth of Field</a></p>]]></description>
			<content:encoded><![CDATA[<p>With the advent of the digital SLR as a video capturing device in recent years there is a lot of raving on the internet about the &#8220;cinematic look&#8221; one can achieve with DSLRs. <em>Cinematic look</em> is often opposed to <em>video look</em> or <em>TV look</em>. On forums and blogs one can read both delusions and truth regarding this distinction. As is often the case with any hype &#8211; hype has the tendency to self-amplify &#8211; a lot of noise gets picked up and reiterated in such a discussion. This series of articles will attempt to examine in some detail the various characteristics of the cinematic look and then explore how they relate to the image of video capturing devices, including HDDSLRs. Hopefully, some myths will be cleared in the process. This first part in the series is focused on aspect ratios and sensor sizes and the closely related topic of depth of field.</p>
<p><span id="more-14"></span></p>
<p>Before we start, let&#8217;s make it clear what &#8220;cinematic look&#8221; actually means. For us, cinematic look is what audiences have come to expect from a motion picture in terms of appearance, or in other words in terms of visual perception. This is the image we have been culturally conditioned to consider as cinematic through decades of exposure to movies. And this is what we are trying to replicate with digital cameras when aspiring to achieve the &#8220;cinematic look&#8221;. Note that the cinematic look is historically &#8220;film look&#8221;, as movies were almost exclusively shot on film for some hundred years.<br />
<br/></p>
<h6><strong>The aspect ratio</strong></h6>
<div id="attachment_89" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/03/ourhosp.jpg"><img class=" wp-image-89  " title="Our Hospitality (1923)" src="http://www.shutterangle.com/wp-content/uploads/2012/03/ourhosp.jpg" alt="Our Hospitality (1923) screenshot" width="262" height="202" /></a><p class="wp-caption-text">Silent films were generally shot in 4:3 (1.33:1) aspect ratio</p></div>
<p>For years wide format moving images were associated with cinema and the more squarish 1.33:1 format was linked to TVs. At least, that&#8217;s what we were conditioned to imagine. With the introduction of wide TVs things changed a bit. HD and full HD television screens now have an aspect ratio of 1.78:1 (1920&#215;1080 pixels for full HD; 1280&#215;720 pixels for HD). In comparison, the most popular cinema aspect ratios are currently 1.85:1 and 2.39:1 (often labeled 2.35:1 for historical reasons), for flat and anamorphic projection respectively. Which means TVs are now much closer in geometric appearance to cinema screens. Cinematic ratios weren&#8217;t always like this, though. Silent films were 1.33:1. Talking pictures were 1.375:1 for some twenty years till 1952. For various reasons, around this time the widescreen revolution in cinema happened, with the aforementioned wider aspect ratios becoming prevalent.</p>
<p>Artistic reasons for choosing a specific aspect ratio notwithstanding, it is correct to assume that a wide image is nowadays associated with cinematic appearance. Up to some limit, at least. Incidentally, using anamorphic adapters on HD cameras may yield images with an extracted aspect ratio of up to double the HD ratio (or 3.56:1), which can be way too wide, so some cropping of the sides is recommended in such cases. But the use of anamorphics on consumer and prosumer digital cameras is a topic for another article. The bottom line is, 1.78:1 video is already quite cinematic in geometric appearance. But if one so desires, it is safe to consider a slight cropping of the top and the bottom to bring it to 1.85:1 or even 2.39:1. Have a look at <a href="http://www.shutterangle.com/2012/film-video-aspect-ratio-artistic-choice/" title="Aspect Ratio Choice for a Film or Video: Artistic Considerations">this article</a> for more in-detail thoughts on aspect ratio choice for a specific project.</p>
<div id="attachment_98" class="wp-caption aligncenter" style="width: 509px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/03/aspects.jpg"><img class="size-full wp-image-98" title="Movie aspect ratios" src="http://www.shutterangle.com/wp-content/uploads/2012/03/aspects.jpg" alt="Movie aspect ratios" width="499" height="348" /></a><p class="wp-caption-text">Cinema aspect ratios</p></div>
<p>Note that in the above two paragraphs we are talking about the aspect ratio of what you see on screen. This is not necessarily the same ratio as the recorded image, especially when the recording medium is film.<br />
<br/></p>
<h6><strong>Sensor size and depth of field</strong></h6>
<p>In terms of aesthetics the most apparent property linked to sensor size or film frame size is <a href="http://en.wikipedia.org/wiki/Depth_of_field" title="depth of field" target="_blank">depth of field</a>. Popular understanding is that smaller sensors yield greater depth of field and, conversely, large sensors have less depth of field. Technically, if we shoot the same composition with two differently sized sensors, and:</p>
<ol>
<li>with lenses covering the same angle of view, i.e. wider lens for the smaller sensor and longer lens for the larger sensor;</li>
<li>with the same camera-subject distance;</li>
<li>with the same aperture diameter;</li>
<li>enlarge the result to the same print or screen size (or resize to the same video pixel size, let&#8217;s say 1920&#215;1080);</li>
<li>and use the same criterion for sharpness (i.e., the circle of confusion is proportional to the sensor size),</li>
</ol>
<p>then both pictures will have exactly the same depth of field. Your experience tells you otherwise? The tricky part is number 3). Same size aperture does NOT mean same f-number because the f-number equals the focal length divided by the aperture diameter. Which means that for the conditions above, the longer lens (used on the larger sensor to achieve equal angle of view with the small sensor) will shoot at a bigger f-number in order to maintain the same physical aperture.</p>
<p>On the other hand, if we retain all conditions but change 3) to &#8220;at the same f-number&#8221; (which, by the way, should also, more or less, preserve the same exposure, considering we shoot at the same ISO), then the smaller sensor will indeed yield greater depth of field. This is because the wider lens (used for the smaller sensor) will have a smaller aperture opening at this equal f-number. So we can conclude that lenses with equal angles of view, shot at the same f-number on differently sized sensors (or film frames, for that matter) manifest different depth of field, with smaller sensors giving pictures with greater DOF.</p>
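<p>Here is a minimal sketch of this &#8220;DOF equivalence&#8221; in code, comparing formats by width as discussed above. The function name and the example stops are mine, not a standard.</p>
<pre>
# Scale the f-number in proportion to format width so that the same
# framing and camera-subject distance give the same depth of field.
def equivalent_f_number(f_number, from_width_mm, to_width_mm):
    return f_number * to_width_mm / from_width_mm

# Super 35 (24.89mm wide) at f/2.8:
print(equivalent_f_number(2.8, 24.89, 36.0))  # ~f/4 on a full frame sensor
print(equivalent_f_number(2.8, 24.89, 8.8))   # ~f/1 on a 2/3-inch TV sensor
</pre>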
<p>For this article we will ignore other properties related to format size. In the digital case these include dynamic range, sensitivity and noise (all three are, more precisely, connected to sensor pixel size). For film, different film frame sizes (but from the same emulsion) will show different grain sizes when projected on the same screen (or printed on the same release stock).<br />
<br/></p>
<h6><strong>Camera aperture and projection aperture</strong></h6>
<p>Motion picture film cameras have a rectangular film gate in front of the negative, which defines the portion of the frame getting exposed. This portion of the negative (and often the gate itself) is called camera aperture. When projecting a release print in a movie theater another gate is used in front of the film called projection aperture. The projection aperture is slightly smaller than the camera aperture to allow some safety margin for imperfect alignment of the film roll. It is essentially a window in the camera aperture area. So the audience in the theater never sees the full image as it was recorded but a slightly cropped version. Camera and projection aperture may also vastly differ when different aspect ratios are involved in shooting and projection: for example, when a movie is shot in 1.33:1 but projected at 1.85:1. All this makes film frame area measurements somewhat ambiguous. In this text for consistency when talking about various film formats we will mean the standardized camera aperture unless &#8220;projection aperture&#8221; is explicitly stated.<br />
<br/></p>
<h6><strong>Frame sizes and aspect ratios of popular film formats</strong></h6>
<div id="attachment_163" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/03/stillsframe.jpg"><img class=" wp-image-163  " title="135 film" src="http://www.shutterangle.com/wp-content/uploads/2012/03/stillsframe.jpg" alt="35mm stills negative film" width="262" height="204" /></a><p class="wp-caption-text">A frame from Kodak Gold 35mm stills negative film</p></div>
<p>Stills photographers coming to videography sometimes wrongfully assume that 35mm motion picture frames are the same as stills 35mm frames. A full frame for stills photography is sized 36mm x 24mm, or, rather, this is the exposed area of the frame (or the camera aperture). Note that film rolls are oriented horizontally in a stills camera with film perforations at the top and the bottom of the frame. On the other hand, film used for motion pictures is (usually) oriented vertically (perforations at the sides). For 35mm film the longer side is around 24mm and the shorter side around 18mm. The exact frame height depends on how narrow the frame line is. The standard negative pulldown (or film pulldown) for movies is 4 perforations per frame (4-perf): the camera sprocket wheels pull four perforations from the film roll for each frame.</p>
<div style="float: left; margin-right: 10px;">
<div id="attachment_147" class="wp-caption alignnone" style="width: 239px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/03/silentframe.jpg"><img class=" wp-image-147  " title="35mm silent film frame" src="http://www.shutterangle.com/wp-content/uploads/2012/03/silentframe.jpg" alt="35mm silent film frame" width="229" height="124" /></a><p class="wp-caption-text">35mm silent frame</p></div>
<div id="attachment_148" class="wp-caption alignnone" style="width: 239px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/03/academyframe.jpg"><img class=" wp-image-148 " title="35mm Academy format" src="http://www.shutterangle.com/wp-content/uploads/2012/03/academyframe.jpg" alt="35mm Academy format" width="229" height="124" /></a><p class="wp-caption-text">35mm Academy format frame, shown with the space reserved for soundtrack</p></div>
<div id="attachment_153" class="wp-caption alignnone" style="width: 239px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/03/Super35frame.jpg"><img class=" wp-image-153 " title="Super 35 frame" src="http://www.shutterangle.com/wp-content/uploads/2012/03/Super35frame.jpg" alt="Super 35 frame" width="229" height="124" /></a><p class="wp-caption-text">3-perf Super 35 frame</p></div>
<div id="attachment_154" class="wp-caption alignnone" style="width: 239px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/03/techniscopeframe.jpg"><img class=" wp-image-154 " title="Techniscope frame" src="http://www.shutterangle.com/wp-content/uploads/2012/03/techniscopeframe.jpg" alt="Techniscope frame" width="229" height="124" /></a><p class="wp-caption-text">Techniscope is a 2-perf format</p></div>
</div>
<p>Silent film utilized the full area of the frame for recording images because there was no need to leave space on the negative for sound. The camera aperture of silent film was 24.89mm x 18.67mm (.980&#8243; x .735&#8243;) with 1.33:1 aspect ratio.</p>
<p>For talkies the image area shrunk in order to accommodate the soundtrack on the release print. In 1932 sound pictures camera aperture was set to 22.05mm x 16.03mm (.868&#8243; x .631&#8243;), with the projection aperture set to 20.1mm x 15.24mm (.825&#8243; x .600&#8243;) and 1.375:1 aspect ratio.</p>
<p>Wide screen formats varied a lot through the years. Anamorphic formats utilized a frame size similar to the Academy format but in order to achieve widescreen ratios anamorphic lenses were used to squeeze the image while shooting and then unsqueeze it on projection. But we will leave anamorphic formats out for this article and focus on flat formats as they relate easier to digital sensors. The current wide standard for shooting flat is Super 35. Super 35 is a production standard, meaning it gets resized when printed. This also means there is no need to leave space on the negative for sound as the printing is not 1:1. Super 35 was originally a 4-perf format sized 24.89mm x 18.67mm (.980&#8243; x .735&#8243;). This is a 1.33:1 format so frames were matted down (to 1.85:1 or 2.39:1) for release. This also means a lot of the frame was wasted, so currently a 3-perf version sized 24.89mm x 13.87mm (.980&#8243; x .546&#8243;) is used in order to maximize frame utilization. This saves around 1/4 stock length compared to 4-perf. 3-perf still gets cropped a bit when printed but wastes much less negative than 4-perf.</p>
<p>Various wide-screen apertures have been used through the years. VistaVision was an 8-perf horizontal format developed by Paramount in the 50&#8242;s and similar to 35mm for stills. With a camera aperture sized 37.7mm x 25.17mm (1.485&#8243; x .991&#8243;) it offered great image quality. Most productions were matted and printed down to standard size 1.85:1 format vertical prints for theatrical release. Despite the exceptional quality VistaVision didn&#8217;t catch on because of the higher stock costs in comparison to anamorphic formats. Since the 60&#8242;s it&#8217;s been used mostly for special effects work requiring greater resolution.</p>
<p>On the other side of the spectrum was Techniscope introduced by Technicolor Italy in the early 60&#8242;s. This was a 2-perf production format meant to save film stock by sacrificing a bit of image quality. It used a camera aperture sized 22.05mm x 9.47mm (.868&#8243; x .373&#8243;). Techniscope pictures were shot flat then printed with 2x vertical enlargement factor to be projected anamorphically. Being 2-perf, during production it used half the stock compared to 4-perf but resulted in larger grain and less clarity.</p>
<div id="attachment_172" class="wp-caption aligncenter" style="width: 522px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/03/North.jpg"><img class="wp-image-172 " title="North by Northwest (1959)" src="http://www.shutterangle.com/wp-content/uploads/2012/03/North.jpg" alt="North by Northwest (1959) screenshot" width="512" height="288" /></a><p class="wp-caption-text">North by Northwest, like other Hitchcock movies from the second half of the 50&#39;s, was shot in VistaVision</p></div>
<p>Then there is also 16mm film, which is widely used for documentary, TV and occasionally for cinema work, especially for indie films. Super 16 (which is the analog of Super 35 in the 16mm world) has camera aperture sized 12.52mm x 7.41mm (0.493&#8243; x 0.292&#8243;). Recent award winning films shot in Super 16 include <em>The Hurt Locker</em>, <em>Black Swan</em> and <em>The Wrestler</em>.<br />
<br/></p>
<h6><strong>Digital sensors</strong></h6>
<p>For a long time TV cameras used analog pickup tubes to convert optical images into electric signals. The imaging area of the tube is usually 2/3 of the diameter of the tube. Standard tube diameters included 1 inch, 2/3 inch, 1/1.8 inch. You probably notice the similarity with digital sensor size categories. Indeed, sensor sizes like 2/3&#8243;, 1/1.8&#8243;, 1/2.3&#8243;, Four-Thirds, etc. are named like this for historical reasons related to analog TV and video cameras. Each of these has an imaging diagonal roughly equal to the imaging diameter of a tube of that size (remember, the imaging diameter of the tube is about 2/3 of the overall tube diameter). For example, a typical 2/3&#8243; sensor will have a diagonal of around 11 mm. This is roughly equal to 2/3 of 2/3&#8243;.</p>
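<p>The arithmetic is simple enough to check (illustrative only):</p>
<pre>
# Why a "2/3 inch" sensor has a diagonal of roughly 11 mm.
INCH_MM = 25.4
tube_diameter = (2 / 3) * INCH_MM           # ~16.9 mm glass envelope
imaging_diagonal = (2 / 3) * tube_diameter  # imaging area is ~2/3 of that
print(round(imaging_diagonal, 1))           # ~11.3 mm
</pre>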
<p>Modern HD digital video cameras normally use a 16:9 sensor. The typical aspect ratio of the digital sensor in a photo camera is either 3:2 (mimicking film stills) or 4:3. But when shooting video with a photo camera only a 16:9 portion of the sensor is used, with the pixels at the top and the bottom discarded. One consequence of this is that crop factors often used for comparison of photo camera sensors are not always accurate in terms of video. Modern video is predominantly widescreen and cropping practically always happens at the top and/or the bottom, thus keeping the original width of the image unchanged. That&#8217;s why cinematographers and videographers often use the sensor width when comparing sensors in terms of DOF, instead of the usual diagonal measurement as used in crop factors for stills.</p>
<p>Considering this, the following table lists some film format frame and digital sensor sizes with only the approximately 16:9 (or wider, where 16:9 is not applicable) area taken into account, sorted by width.</p>
<div style="margin-left: 10%; margin-right: 10%;">
<table style="font-family: Verdana; text-align: left;" border="1" cellspacing="0" cellpadding="4">
<caption style="caption-side: bottom; text-align: center; font-size: 90%;"><em>Various film format and sensor sizes sorted by width. All sizes in millimeters.</em></caption>
<tbody>
<tr>
<th style="width: 40%;"><strong>Sensor or film format<strong></strong></strong></th>
<th style="width: 20%;"><strong>Frame size (16:9)</strong></th>
</tr>
<tr>
<td>Canon 5D Mark 2/3 (Full Frame)</td>
<td>36 x 20.3</td>
</tr>
<tr>
<td>Canon 1D Mark 4 (APS-H)</td>
<td>27.9 x 15.7</td>
</tr>
<tr>
<td>Super 35 (film)</td>
<td>24.89 x 13.87</td>
</tr>
<tr>
<td>Canon C300</td>
<td>24.6 x 13.8</td>
</tr>
<tr>
<td>Arri Alexa</td>
<td>23.76 x 13.365</td>
</tr>
<tr>
<td>Nikon D7000 (APS-C)</td>
<td>23.6 x 13.3</td>
</tr>
<tr>
<td>Sony Nex 5n</td>
<td>23.4 x 13.16</td>
</tr>
<tr>
<td>Canon 7D/60D/600D (APS-C)</td>
<td>22.3 x 12.5</td>
</tr>
<tr>
<td>Red Epic/Scarlet in 4K mode</td>
<td>22.12 x 12.44</td>
</tr>
<tr>
<td>Techniscope (film)</td>
<td>22.05 x 9.47</td>
</tr>
<tr>
<td>Panasonic GH2 in 16:9 mode</td>
<td>18.8 x 10.6</td>
</tr>
<tr>
<td>Super 16 (film)</td>
<td>12.52 x 7.03</td>
</tr>
<tr>
<td>Typical 2/3&#8243; TV camera tube</td>
<td>8.8 x 4.95</td>
</tr>
</tbody>
</table>
</div>
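<p>To make the width-based comparison from above concrete, here is a small Python sketch computing width-based &#8220;crop factors&#8221; relative to Super 35, using widths straight from the table. The labels and the choice of Super 35 as the reference are mine, for illustration only.</p>
<pre>
# Width-based crop factors relative to Super 35 (widths in mm, from
# the table above). Values below 1 mean a frame wider than Super 35.
SUPER_35_WIDTH = 24.89

widths_mm = {
    "Full Frame (5D Mark 2/3)": 36.0,
    "APS-C (7D/60D/600D)": 22.3,
    "Super 16": 12.52,
    "Typical 2/3-inch tube": 8.8,
}

for name, width in widths_mm.items():
    print(f"{name}: {SUPER_35_WIDTH / width:.2f}x")
# Full Frame: 0.69x, APS-C: 1.12x, Super 16: 1.99x, 2/3-inch tube: 2.83x
</pre>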
<p>So how do we interpret these in terms of depth of field?<br />
It is commonly understood that TV and video have relatively large apparent depth of field, and we can easily see why. First, TV cameras tend to use relatively slow zoom lenses with relatively small apertures. Second, and more importantly, as the table shows, they have a much smaller imaging area than both film and large digital sensors.</p>
<p>One can often read forum statements like &#8220;I like the Canon 5D Mark 2 because of its cinematic DOF&#8221;. Such statements can be attributed to years of visual opposition between TV and cinema, and the resulting automatic generalization: TV has lots of DOF, cinema has shallow DOF. This is not always true. The correct statement is &#8220;cinema <em>can</em> have shallower depth of field than TV&#8221;.</p>
<p>One look at the table shows that Full-Frame DSLRs actually have a much larger sensor than the typical widescreen motion picture frame (i.e. Super 35 and classic widescreen). This means they may demonstrate excessively shallow DOF compared to motion pictures shot on film when pictures are shot at the same f-number/exposure (see above). APS-C sized sensors are actually much closer to the typical film frame size. No wonder digital cinema cameras that claim &#8220;Super 35&#8221; sized sensors actually use APS-C-sized sensors. This doesn&#8217;t mean that APS-C sensors are better than Full Frame. There are other reasons to use sensors larger than APS-C: low light sensitivity, dynamic and color range, overall image crispness (this last one is often &#8220;lost in compression&#8221; in DSLR video). And the shallow depth of field fetish, of course.</p>
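<p>To put a rough number on this, here is a hedged sketch of the usual DOF equivalence approximation (same field of view, same viewing size, width-based comparison as discussed earlier). The function name and the Super 35 reference width are illustrative choices, not an established convention.</p>
<pre>
# Approximation: f-number N on a sensor of a given width gives roughly
# the same DOF as N * (ref_width / sensor_width) on the reference format.
SUPER_35_WIDTH = 24.89

def equivalent_f_number(f_number, sensor_width_mm, ref_width_mm=SUPER_35_WIDTH):
    """Roughly DOF-matching f-number on the reference format."""
    return f_number * ref_width_mm / sensor_width_mm

print(round(equivalent_f_number(2.8, 36.0), 1))  # Full Frame f/2.8 ~ Super 35 f/1.9
print(round(equivalent_f_number(2.8, 22.3), 1))  # APS-C f/2.8 ~ Super 35 f/3.1
</pre>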
<div id="attachment_183" class="wp-caption alignright" style="width: 272px"><a href="http://www.shutterangle.com/wp-content/uploads/2012/03/outiw.jpg"><img class=" wp-image-183 " title="Once Upon a Time in the West (1968)" src="http://www.shutterangle.com/wp-content/uploads/2012/03/outiw.jpg" alt="Once Upon a Time in the West (1968) screenshot" width="262" height="112" /></a><p class="wp-caption-text">Sergio Leone shot Once Upon a Time in the West in Techniscope, with a frame size smaller than APS-C</p></div>
<p>A small non-technical digression. There is also the aesthetic side of DOF. Some of the greatest films in history sought deep focus by either using wide lenses exclusively or pouring tons of light on set and shooting at small apertures. Others ended up with relatively deeper DOF for technical reasons: smaller film frame sizes (compared to Super 35 and APS-C). It is telling (and a bit ironic) that Paramount promoted their VistaVision process as a deep focus vehicle, mostly based on the availability of a 28 mm lens &#8211; one of the widest at the time. It was not shallow focus that tempted filmmakers, but rather image clarity, depth and wide angle possibilities. Generally, movies are supposed to represent objects in relation to their surroundings, which implies sufficient DOF to visualize these relations. Selective focus is certainly a great tool for isolating subject matter and commanding the viewer&#8217;s eye. But shallow DOF is just that: a means, not a goal. Very shallow DOF can surely be a good aesthetic for certain scenarios (recently <em>Tinker Tailor Soldier Spy</em> relied on shallow DOF, mostly by utilizing longer lenses), but these tend to be the exception, not the norm. We can nevertheless argue that shallow depth of field is not an intrinsic characteristic of cinema, because many influential movies make heavy use of deep focus shots. There are other properties more intimately associated with the cinematic look. Some of them are the focus of <a href="http://www.shutterangle.com/2012/cinematic-look-frame-rate-shutter-speed/" title="Cinematic Look, Part 2: Frame Rate and Shutter Speed">the next part in this series</a>.</p>
<p>We can safely conclude that in the DSLR realm (and in the digital sensor world in general) APS-C comes closest to the cinematic look in terms of DOF.</p>
<p><a href="https://www.shutterangle.com/2012/cinematic-look-aspect-ratio-sensor-size-depth-of-field/">Cinematic Look, Part 1: Aspect Ratio, Sensor Size and Depth of Field</a></p>]]></content:encoded>
			<wfw:commentRss>https://www.shutterangle.com/2012/cinematic-look-aspect-ratio-sensor-size-depth-of-field/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
	</channel>
</rss>
