Digital video places heavy demands on hard disk speed and capacity, and the critical characteristic is sustained data throughput in a real-world environment. Video files can be huge and therefore require a hard disk drive to sustain high rates of data transfer over an extended period of time. If the rate dips, the video stream stutters as the playback program skips frames to maintain the playback speed. In the past this meant that audio and video capture applications required drives with a so-called AV specification, designed not to perform thermal recalibration during data transfer. Generally, SCSI drives were preferable to EIDE drives, since the latter could be waylaid by processor activity. Nowadays hard disk performance is much less of an issue: not only have the bandwidths of both the EIDE and SCSI interfaces increased progressively over the years, but the advent of embedded servo technology means that thermal recalibration is not the issue it once was.

By late 2001 the fastest Ultra ATA/100 and Ultra160 SCSI drives were capable of data transfer rates in the region of 50MBps and 60MBps respectively: more than enough to support the sustained rates necessary to handle all of the compressed video formats, and arguably sufficient to achieve the rates needed to handle uncompressed video. However, the professionals likely to need this level of performance are more likely to achieve it by striping two or more hard drives together in a RAID 0, 3 or 5 configuration.
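As a rough illustration of why striping helps (using the drive figures above and an assumed 80MBps uncompressed-video workload, which is hypothetical), RAID 0 interleaves data across its member drives, so the ideal aggregate rate is simply the sum of their individual sustained rates:

```python
# Illustrative sketch: RAID 0 stripes data across all member drives,
# so the ideal sustained rate is the sum of the drives' individual
# rates (real-world overheads will reduce this somewhat).

def raid0_rate(drive_rates_mbps):
    """Ideal aggregate sustained rate of a RAID 0 stripe, in MBps."""
    return sum(drive_rates_mbps)

def meets_requirement(drive_rates_mbps, required_mbps):
    return raid0_rate(drive_rates_mbps) >= required_mbps

# Two 50MBps Ultra ATA/100 drives against a hypothetical 80MBps
# uncompressed-video workload:
print(meets_requirement([50, 50], 80))  # True - the striped pair suffices
print(meets_requirement([50], 80))      # False - a single drive falls short
```

RAID 3 and 5 configurations trade some of this raw rate for parity protection, so their sustained throughput sits below the simple sum shown here.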

Another troublesome side effect of such variations in transfer rate is audio drift, which has dogged DV editing systems since they first appeared. Because of minute variations in data rate and the logistics of synchronising a video card and a sound card over an extended period of time, the audio track in AVI files often drifts out of sync. High-end video capture cards circumvent this problem by incorporating their own sound recording hardware and by using their own playback software rather than relying on a standard component such as Video for Windows. Moreover, Microsoft's new ActiveMovie API is itself claimed to eliminate these audio drift problems.

The rate at which video needs to be sampled or digitised varies for different applications. Digitising frames at 768×576 (for PAL) yields broadcast-quality (also loosely known as full-PAL) video. It’s what’s needed for professional editing where the intention is to record video, edit it, and then play it back to re-record onto tape. It requires real-time video playback from a hard disk, making the hard disk drive’s sustained data-transfer rate the critical performance characteristic in the processing chain.
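A quick sketch of the arithmetic behind that sustained-rate requirement, assuming 24-bit (3-byte) colour and PAL's 25 frames per second:

```python
# Sustained rate needed for broadcast-quality (full-PAL) capture,
# assuming 24-bit (3-byte per pixel) colour at PAL's 25 frames/second.
width, height = 768, 576
bytes_per_pixel = 3
fps = 25

frame_bytes = width * height * bytes_per_pixel   # one uncompressed frame
rate_mbps = frame_bytes * fps / 1_000_000        # megabytes per second

print(f"{frame_bytes} bytes/frame -> {rate_mbps:.1f} MBps sustained")
# -> 1327104 bytes/frame -> 33.2 MBps sustained
```

Around 33MBps sustained is what a drive must deliver continuously, which is why the late-2001 drive figures quoted above only arguably sufficed for uncompressed video.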

However, for capturing video for multimedia movies – for playback from a CD-ROM, with or without hardware decompression – it is not necessary to digitise at the full PAL resolution. Usually only half the lines are digitised (either the odd or the even 288 lines), and to preserve the 4:3 aspect ratio each line is sampled at 384 points. This gives a frame size of 384×288 pixels (320×240 for NTSC), requiring about 8.3MBps. A similar resolution (352×288) is required for capturing video which will be distributed in MPEG-1 format for VideoCDs.
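The 8.3MBps figure falls straight out of the frame geometry; a minimal sketch, again assuming 24-bit colour at 25fps:

```python
# Quarter-resolution PAL capture: one field (288 of PAL's 576 lines),
# each line sampled at 384 points to keep the 4:3 aspect ratio,
# 24-bit colour, 25 frames per second.
width, height, bytes_per_pixel, fps = 384, 288, 3, 25

pixels = width * height                          # pixels per frame
rate_mbps = pixels * bytes_per_pixel * fps / 1_000_000

print(f"{width}x{height} -> {rate_mbps:.1f} MBps")  # -> 384x288 -> 8.3 MBps
```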

Of course, a large digital-video market is that of video conferencing, including displaying video over the Internet. Here, the limitation is in the connection – whether it's an ordinary phone line and a modem, ISDN, cable, or whatever. For example, a 56Kbit modem is more than 20 times slower than even a single-speed CD-ROM, so in this case high compression ratios are required. And for real-time video-conferencing applications, hardware compression at very high rates is necessary.
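A back-of-the-envelope comparison, using the nominal figures of 150KB/s for a single-speed CD-ROM and an ideal, overhead-free 56Kbit modem:

```python
# Nominal bandwidth comparison: single-speed CD-ROM vs 56K modem.
# Real modem connections carry protocol overhead and rarely reach
# the full 56Kbit/s, so the true gap is wider than shown here.
cdrom_bps = 150 * 1024      # single-speed CD-ROM: 150KB/s in bytes/second
modem_bps = 56_000 // 8     # 56Kbit modem: ideal bytes/second

ratio = cdrom_bps / modem_bps
print(f"CD-ROM is ~{ratio:.0f}x faster than a 56K modem")
# -> CD-ROM is ~22x faster than a 56K modem
```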

There are a number of factors that affect the quality of digital video encoding:

  • Source format: VHS tape is acceptable for home use, but S-VHS and Hi-8 tape formats give noticeably better results. It used to be that only professional projects could justify the cost of the highest quality source footage that BetaCam and digital tape formats could provide. However, the advent of the DV format means that quality is no longer the preserve of the professional.
  • Source content: MPEG-1 and software only codecs tend to stumble on high-speed action sequences, creating digital artefacts and colour smearing. Such sequences have a high degree of complexity and change dramatically from one scene to the next, thereby generating a huge amount of video information that must be compressed. MPEG-2 and DV are robust standards designed to handle such demanding video content.
  • Quality of the encoding system: While video formats adhere to standards, encoding systems range greatly in quality, sophistication and flexibility. A low-end system processes digital video generically, with little control over parameters, while a high-end system will provide the capability for artfully executed encoding.
