If you look at the video-recording specifications of ILCs, you will find manufacturers prominently stating that recording is done with 4:2:2 sampling. Though these numbers look a bit complicated, they describe an important point called ‘Chroma Subsampling’. In brief, this is what it means: a video stream has two components. The first is the ‘Luma’ (symbol Y’), which defines the brightness. The second is the ‘Chroma’, which conveys the colour information. Separating the stream into these two components allows a more efficient use of resources.
Since the eye is more sensitive to variations in brightness than in colour, more resources are allocated to the former (Y’). This process, or more precisely ‘encoding’, in which the chroma information is given less resolution than the brightness, is called ‘Chroma Subsampling’. Pictures 1, 2 and 3 show what this means. Picture 1 shows the Luma part at full resolution, and Picture 2 shows the Chroma component sampled at a lower resolution. Picture 3 shows how we perceive the image when the two are combined.
In practice, the Chroma is in turn split into two components, Cb and Cr. If Y’, Cb and Cr are all sampled equally, the scheme is called 4:4:4. This gives the best quality but is very demanding on resources. On the other hand, if the two chroma components are sampled at only half the frequency of the luma, there is a substantial reduction in the resources needed but no apparent loss of quality. This encoding is called 4:2:2 and is used by most high-end video cameras. If the chroma sampling is halved again, we get 4:1:1 subsampling, but here quality suffers and the result is no longer considered suitable for serious work; it is only good for low-end consumer applications. Thus the 4:2:2 subsampling now offered on some ILCs delivers extremely high-quality video, on par with professional equipment as far as this factor is concerned.
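To see how much each scheme saves, a small sketch can count the samples in a reference block of 4×2 pixels (the block size the J:a:b notation describes). This is a simplified illustration, not any camera's actual encoder:

```python
# Samples needed per 4x2 block of pixels for each chroma-subsampling
# scheme (a simplified illustration, assuming one byte per sample).
# In J:a:b notation: a = chroma samples in the first row of the block,
# b = additional chroma samples in the second row.
def samples_per_block(a, b):
    """Total samples (luma plus two chroma planes) in a 4x2 pixel block."""
    luma = 4 * 2                 # one Y' sample per pixel
    chroma = 2 * (a + b)         # Cb and Cr each follow the a:b pattern
    return luma + chroma

for name, (a, b) in {"4:4:4": (4, 4), "4:2:2": (2, 2), "4:1:1": (1, 1)}.items():
    total = samples_per_block(a, b)
    print(f"{name}: {total} samples per 8 pixels "
          f"({total / samples_per_block(4, 4):.0%} of 4:4:4)")
```

The arithmetic matches the text: 4:2:2 needs about two-thirds of the data of 4:4:4, and 4:1:1 only half.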
Most ILCs (there are a few exceptions) limit the duration of a recording to 29 minutes and 59 seconds (one second less than half an hour). There are a few reasons for this. First, there are thermal issues: heat builds up over time, and this increases noise. The problem is quite severe in mirrorless cameras because their small overall size leaves less volume to dissipate heat. You need to check this carefully by experimenting a bit, for example by recording two or more full-length clips back to back and observing how the heat builds up. The LCD monitor itself also generates a lot of heat, so if your camera has a flip-out screen, keep it away from the body so that the screen’s heat does not warm the camera.
Second, many countries impose higher duties (and taxes) on video cameras, defined as devices that can record for more than 30 minutes, which is why the recording time is artificially limited. Many manufacturers have found a way around this: once the time limit is reached, they simply close the file being recorded and start another. Each clip is thus less than 30 minutes long, but you can record many clips, subject to the capacity of your memory card and to overheating.
Apart from this, some ILCs restrict the size of the file created. Usually, recording stops at whichever limit is reached first, the clip length or the file size. Check your camera manual for exact details.
This is an advanced feature that few ILCs currently offer. If you have looked at the EXIF data of any image, you will have found in it the exact date and time at which the image was captured (assuming your camera’s internal clock has been set properly). Each image can thus be considered ‘time-stamped’. Likewise, if your ILC has the timecode feature and it is enabled, each frame of the video stream carries a time stamp.
The timecode is invaluable when editing video. Since each frame has a time assigned to it, you can access any specific frame and thus perform manipulations frame by frame. The timecode is also used for adding audio to video: when audio is recorded separately, the timecode helps it to be synchronised with the video precisely.
If you have watched sports events on TV, you may have noticed the same scene shown from different angles using multiple cameras, with the frames from every camera showing exactly the same instant. In other words, they are all synchronised, and timecode is what makes this possible. In general, timecode is very useful whenever you want to synchronise data from multiple sources.
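The reason timecode lets an editor jump to an exact frame is that a timecode of the usual HH:MM:SS:FF form maps directly to a frame number. A minimal sketch, assuming a non-drop-frame stream (the frame rate here is just an example):

```python
# Convert an HH:MM:SS:FF timecode to an absolute frame number,
# assuming a non-drop-frame stream at a whole-number frame rate.
def timecode_to_frame(tc: str, fps: int = 25) -> int:
    """Map 'HH:MM:SS:FF' to the zero-based frame index it identifies."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

# Two cameras sharing the same timecode point at the same instant,
# so their frames can be matched exactly in the edit.
print(timecode_to_frame("00:01:00:00"))  # frame 1500 at 25 fps
```

Broadcast standards also define drop-frame timecode for 29.97 fps material, which this simple sketch does not handle.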
Today most ILCs have a pixel count of around 24 MP. From Part 1 (September 2019 issue), you know that a 1080p video needs around 2 MP per frame and 4K video just about 8 MP per frame. So what do we do with the extra pixels? One way is to use only the centre part of the sensor, but this brings the crop factor into play, with its consequent problems, the main one being a change in the angle of view. So engineers have come up with what is called ‘sub-sampling’, in which not all the pixels of the sensor are used; for example, only alternate pixels may be read. This is not a very good method, as it can lead to ‘moiré’. A better method is down-sampling, where all the pixels are read and the pixel count is then reduced, just as you downsize a still image in post-processing. Check what your camera does to get a better idea of the quality of video it produces.
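The difference between the two readout methods can be shown with a toy example (this is an illustration with NumPy arrays, not any camera's actual readout circuitry):

```python
import numpy as np

# A toy comparison of the two ways to get a smaller frame from a large
# sensor: skipping pixels versus reading everything and averaging.
sensor = np.arange(64, dtype=float).reshape(8, 8)  # stand-in for a frame

# Sub-sampling: read only every other row and column. Fast, but the
# discarded pixels mean fine detail can alias into moire patterns.
skipped = sensor[::2, ::2]

# Down-sampling: read all pixels, then average each 2x2 block, much
# like downsizing a still image in post-processing.
averaged = sensor.reshape(4, 2, 4, 2).mean(axis=(1, 3))

print(skipped.shape, averaged.shape)  # both end up 4x4
```

Both results have the same pixel count, but the averaged version has used the information from every photosite, which is why down-sampling generally produces cleaner video.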