Cameras: How to shoot

The right definitions: Resolution & aspect ratios

It might be simple enough to switch the television screen on and off, but the mechanics behind that are much more complex.

Video technology now combines formats, resolutions, frame rates, sensor sizes and pixel sizes. Digital video standard frame sizes depend on the corresponding TV system. Standard Definition (SD) in the United Kingdom and most of Europe is PAL at 720 pixels wide by 576 pixels high, at 25 fps or 50 interlaced fields per second.

Across the pond in the United States, Japan and other parts of Asia, the standard is NTSC at 720 by 480 pixels, at 30 fps (strictly 29.97) or 60 interlaced fields per second.

SD video also comes in different shapes, or aspect ratios, ranging from normal (4:3) to wide-screen (16:9).

Wide-screen SD is also called anamorphic because the same 720 by 576 frame size applies in PAL mode: images are squashed horizontally into the frame and stretched back out for playback on a wide-screen TV through non-square pixels.
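The stretch factor can be worked out from the numbers above. A quick sketch (the function name here is just for illustration): the pixel aspect ratio is the display shape divided by the stored frame's own width-to-height ratio.

```python
# Sketch: why wide-screen PAL needs non-square pixels.
# A PAL frame is always 720 x 576 pixels, but it can be displayed
# at either 4:3 or 16:9. The pixel aspect ratio (PAR) is the display
# aspect ratio divided by the stored frame's pixel ratio.

def pixel_aspect_ratio(display_w, display_h, frame_w=720, frame_h=576):
    """Width-to-height ratio of each pixel as shown on the display."""
    return (display_w / display_h) / (frame_w / frame_h)

# Normal 4:3 PAL pixels are already slightly wider than tall...
print(round(pixel_aspect_ratio(4, 3), 4))   # 1.0667
# ...while 16:9 (anamorphic) pixels are stretched much further.
print(round(pixel_aspect_ratio(16, 9), 4))  # 1.4222
```

The second figure is the stretch applied on playback: every stored pixel is displayed roughly 1.42 times wider than it is tall.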

High standards

To make things easier on the eyes, high definition (HD) video is now always in a 16:9 aspect ratio to match wide-screen HD TVs. HD video comes in two standard frame sizes, 1280 by 720 and 1920 by 1080, and its square pixels work equally well in PAL and NTSC regions.

Frame rates for PAL HD and NTSC HD still differ, but there are standards for both interlaced (1080i) and progressive (720p, 1080p) video. Many cameras can switch between these two formats, giving operators a range of frame rates to work with. Rates are often written in shorthand such as 720p25, meaning 25 progressive fps at a 1280 by 720 frame size, or 1080i50, meaning 50 interlaced fields per second at 1920 by 1080.
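The shorthand above follows a simple pattern (lines, scan mode, rate), so it can be decoded mechanically. A small illustrative sketch, assuming the two HD frame widths given in the text:

```python
import re

# Hypothetical helper: decode shorthand such as "720p25" or "1080i50"
# into frame size, scan mode and rate. The widths assumed here follow
# the text: 720 lines -> 1280 wide, 1080 lines -> 1920 wide.
FRAME_WIDTHS = {720: 1280, 1080: 1920}

def decode_format(label):
    match = re.fullmatch(r"(\d+)([pi])(\d+)", label)
    if not match:
        raise ValueError(f"unrecognised format: {label}")
    height, scan, rate = int(match[1]), match[2], int(match[3])
    return {
        "width": FRAME_WIDTHS[height],
        "height": height,
        "scan": "progressive" if scan == "p" else "interlaced",
        # frames per second if progressive, fields per second if interlaced
        "rate": rate,
    }

print(decode_format("720p25"))
print(decode_format("1080i50"))
```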

Information overload

Outside standard frame sizes, digital cinema cameras can shoot from 2K (2048 x 1536 pixels) and 4K (4096 x 3072 pixels) through 5K all the way to 8K ultra high-definition video (UHDV).

Large frame sizes need space for the data they generate, especially at high frame rates. The uncompressed data rate is the frame size in pixels, multiplied by the number of bits per pixel that carry color information (8 bits each for red, green and blue), multiplied by frames per second. One second of uncompressed video therefore holds an enormous amount of information. Thankfully, most cameras can greatly reduce the data rate through compression algorithms.
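The multiplication described above can be sketched directly (the function name is ours, not a standard one):

```python
# Data-rate arithmetic from the text: frame size in pixels, times bits
# per pixel (8 bits each for red, green and blue = 24), times frames
# per second, gives the uncompressed rate.

def uncompressed_mbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed video data rate in megabits per second."""
    return width * height * bits_per_pixel * fps / 1_000_000

# One second of 1080p25 video, before compression:
print(round(uncompressed_mbps(1920, 1080, 25)))  # 1244 Mbit/s
```

At well over a gigabit per second for plain 1080p25, it is easy to see why compression is essential.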