Research into viewing habits has produced some striking statistics that may change how content creators look at their offerings. One frequently cited study claims that a majority of people watch videos without sound, with some estimates as high as 85%. Although that figure might seem extreme, a more relevant data point is that 41% of videos are deemed incomprehensible without sound or captions. This raises the question of how video content can overcome barriers to cognitive accessibility through captions and transcription.
What are closed captions?
Simply put, a closed caption is a translation or transcription of the spoken word in video content. In some countries the term is considered synonymous with subtitles, but the distinction lies in the type of information included. Closed captions carry additional information deemed relevant to comprehension, such as sound effects, musical cues, cultural subtext, intonation and even the speaker’s identity.
Hearing-impaired individuals depend on this text to comprehend the content. Creating a transcript of your video content can build digital inclusion for an estimated 1.5 billion people across the world who live with hearing loss.
Why are closed captions important?
There are various reasons to include closed captions in your workflow:
- Digital inclusion – People with disabilities are effectively shut out of learning from and consuming content that is not designed with them in mind. Closed captioning is a step towards changing this and providing opportunities for those with impairments.
- Accessibility standards – Countries are constantly looking for ways to improve accessibility, as a large part of their population is left out of the workforce. Laws and regulations are being instituted to this end, especially with the European Accessibility Act. This is particularly relevant for educational and scientific institutions.
- Content engagement – Well-designed captions increase the quality of your content and provide an alternative form of engagement. They are useful even for viewers whose first language differs from that of the content.
- Search Engine Optimisation (SEO) – This might not apply to everyone, but search engines are increasingly surfacing content that is deemed accessible as they pursue their goals on diversity and inclusion.
How does one create closed captions?
The simplest form of closed captioning is to do it yourself: create a transcription of the content and add it to the media file. However, the result may be closer to subtitles than to true closed captions, since non-speech information is easy to omit.
Most established companies use the following steps to achieve closed captioning:
- Automated tools and software are used to transcribe the video’s spoken content.
- Experts then enter the process, as speech recognition software might not catch subtext. This human intervention refines the transcript, adding relevant information to match digital accessibility laws.
- Quality control is conducted, to ensure that placement and line breaks follow the natural rhythm of speech. Timestamps are vital in this part of the cycle, so that the text matches the corresponding spoken word.
- The output formats are decided, such as .srt, .vtt, .xml, or .doc, and then embedded into the media asset. This is important for offline use and sharing.
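To make the last two steps concrete, the widely used SubRip (.srt) format stores each caption as a numbered cue with a start and end timestamp, which is how the text stays matched to the corresponding spoken word. Below is a minimal sketch in Python; the `Cue` structure and `to_srt` helper are hypothetical names chosen for illustration, not part of any standard tool.

```python
from dataclasses import dataclass

@dataclass
class Cue:
    """One caption cue: an index, a time range in milliseconds, and the text."""
    index: int
    start_ms: int
    end_ms: int
    text: str

def format_timestamp(ms: int) -> str:
    # SRT timestamps use the form HH:MM:SS,mmm (comma before milliseconds)
    hours, rem = divmod(ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    seconds, millis = divmod(rem, 1_000)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d},{millis:03d}"

def to_srt(cues: list[Cue]) -> str:
    # Each cue becomes: index line, timestamp line, caption text, blank separator
    blocks = []
    for cue in cues:
        blocks.append(
            f"{cue.index}\n"
            f"{format_timestamp(cue.start_ms)} --> {format_timestamp(cue.end_ms)}\n"
            f"{cue.text}\n"
        )
    return "\n".join(blocks)

# Note how closed captions include non-speech cues and speaker identity:
cues = [
    Cue(1, 0, 2500, "[door creaks open]"),
    Cue(2, 2500, 5000, "NARRATOR: Welcome back."),
]
print(to_srt(cues))
```

The .vtt (WebVTT) format used on the web is structured similarly, with a `WEBVTT` header line and a full stop instead of a comma in timestamps, which is why conversion between the two is usually a mechanical step in captioning pipelines.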
Image recognition software for video has also been considered, but it has not yet developed enough to be used for automated transcription of contextual imagery and sounds. That said, there could come a time when advances in Artificial Intelligence and quantum computing make this a reality.
The requirement for accessibility features in digital technology is increasing, with discoverability and education of said features also gaining prominence. And be it on your TV, your smartphone or even at a kiosk, these facilities are moving us in the direction of complete digital inclusion. Closed captioning is just one part of this process and Lumina Datamatics can help. For more information, please visit Accessibility – Lumina Datamatics.