Closed Captions vs Subtitles: Key Differences
These terms are often used interchangeably, but they are not the same thing. The distinction matters for accessibility compliance, international distribution, and how you create each type.
The core distinction
Closed captions are a text representation of all audio content in a video — spoken dialogue, yes, but also non-speech audio such as [music playing], [door slams], [applause], and speaker identification such as [NARRATOR:] or [JOHN:]. They are designed for viewers who cannot hear the audio at all. The word "closed" means they can be toggled on or off by the viewer (as opposed to "open" captions, which are burned into the video and always visible).
Subtitles are a text translation of the spoken dialogue only. They assume the viewer can hear the audio — they just don't understand the language. Subtitles do not include sound effects, music descriptions, or speaker identification. A French film with English subtitles is a classic example: the viewer hears the French audio; the subtitles provide the English meaning of what is being said.
When does this distinction matter?
Practically speaking, on most video platforms (YouTube, Vimeo, social media) the terms are used interchangeably by the general public. But the distinction matters in three contexts:
- Legal compliance. Accessibility laws require captions, not subtitles. The ADA (Americans with Disabilities Act) and Section 508 of the Rehabilitation Act in the US mandate that video content be made accessible to people with hearing disabilities. "Accessible" means captions that include all audio information — not just speech.
- International distribution. When localising video for foreign markets, you are producing subtitles — translating speech into another language. This is a different production workflow from captioning.
- File format and workflow. Both captions and subtitles typically use .SRT or .VTT files with timestamps. But caption files for professional use sometimes include speaker labels and sound effect notations that subtitle files omit.
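To make the format difference concrete, here is the same caption cue in both formats. The timestamps and text are made-up examples; the structural differences are real: .SRT numbers each cue and uses a comma as the millisecond separator, while .VTT starts with a WEBVTT header and uses a period.

```
1
00:00:05,000 --> 00:00:08,200
[upbeat music] Welcome back to the channel.
```

The equivalent cue in a .VTT file:

```
WEBVTT

00:00:05.000 --> 00:00:08.200
[upbeat music] Welcome back to the channel.
```

Most platforms accept either, but broadcast and legacy workflows tend to expect .SRT, while HTML5 video players require .VTT.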
Legal requirements for captions
ADA (Americans with Disabilities Act). Applies to places of public accommodation. Courts have increasingly interpreted websites and streaming video as places of public accommodation, requiring captions for video content. Several high-profile lawsuits against streaming services and educational institutions have resulted in mandatory captioning requirements.
Section 508. Applies to federal agencies and organisations receiving federal funding. All video content — whether on websites, internal systems, or distributed digitally — must be captioned.
CVAA (Twenty-First Century Communications and Video Accessibility Act). Requires that video programming shown on television with captions must also have captions when distributed online.
WCAG 2.1 guidelines. Web Content Accessibility Guidelines specify that pre-recorded audio content in video must have captions (Success Criterion 1.2.2, Level A). This is the baseline standard adopted by most international accessibility frameworks.
The practical implication: if you are a business publishing video on your website, an educational institution posting lecture recordings, or any organisation covered by these laws, you need captions — not just subtitles.
Open captions vs closed captions
A further distinction worth knowing:
- Closed captions are delivered as a separate text track that viewers toggle on or off. They appear as an overlay that can be styled by the viewer's device settings. This is the standard for broadcast TV and most video platforms.
- Open captions are baked into the video image itself — they are always visible and cannot be turned off. Instagram and TikTok content often uses open captions because those platforms do not always render separate caption tracks reliably.
How to create captions using a transcript
The most reliable workflow for creating accurate captions:
- Get the transcript. Use TranscribeVideo.ai to transcribe your video from a URL. The tool generates a time-coded transcript automatically.
- Export as .SRT. The .SRT format includes timestamps that sync text segments to the video timeline. Most video platforms (YouTube, Vimeo, Wistia) and editing tools (Premiere Pro, Final Cut) accept .SRT caption files directly.
- Add non-speech audio notations. For true accessibility compliance, review the transcript and add descriptions of significant non-speech sounds: [upbeat music], [phone ringing], [crowd cheering]. This step is important if you need to meet ADA or Section 508 requirements.
- Upload to your platform. On YouTube, go to YouTube Studio → Subtitles → Add → Upload file. On Vimeo and Wistia, the process is similar.
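Putting steps 2 and 3 together, a compliant caption file might look like this hypothetical .SRT snippet, with non-speech sounds and speaker labels added alongside ordinary dialogue cues:

```
1
00:00:00,500 --> 00:00:03,000
[upbeat music]

2
00:00:03,200 --> 00:00:06,800
JOHN: Thanks for joining us today.

3
00:00:07,000 --> 00:00:08,500
[applause]
```

Note that non-speech cues get their own timestamped entries, just like dialogue, so viewers know when the sound occurs.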
Creating subtitles for foreign language distribution
If you need subtitles in another language rather than captions in the same language:
- Get the English transcript (or the source language transcript) using TranscribeVideo.ai
- Translate the transcript using DeepL (best quality for most language pairs) or ChatGPT for less common languages
- Format the translated text as an .SRT file with the same timestamps as the original
- Upload to your platform as a separate subtitle track for that language
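Step 3 above — keeping the original timestamps while swapping in translated text — can be scripted. The following is a minimal sketch, not a production tool: the function name `replace_srt_text` is our own, and it assumes exactly one translated line per cue, supplied in cue order.

```python
import re

def replace_srt_text(srt_source, translated_lines):
    """Rebuild an SRT file, keeping the original cue numbers and
    timestamps but replacing each cue's text with a translated line.
    Assumes translated_lines is in the same order as the cues."""
    # Cues in an SRT file are separated by blank lines
    cues = re.split(r"\n\s*\n", srt_source.strip())
    rebuilt = []
    for cue, new_text in zip(cues, translated_lines):
        lines = cue.splitlines()
        # lines[0] is the cue number, lines[1] the timestamp range;
        # everything after that is the original text, which we drop
        rebuilt.append("\n".join([lines[0], lines[1], new_text]))
    return "\n\n".join(rebuilt) + "\n"

english = """1
00:00:01,000 --> 00:00:03,500
Welcome to the show.

2
00:00:04,000 --> 00:00:06,000
Let's get started."""

spanish = ["Bienvenidos al programa.", "Comencemos."]
print(replace_srt_text(english, spanish))
```

The same approach works for any target language, as long as the translation preserves the one-line-per-cue correspondence.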
YouTube supports multiple subtitle/caption tracks on a single video, so you can have English captions, Spanish subtitles, and French subtitles all available on the same upload.