
What Is Closed Captioning?

Closed captioning (often abbreviated CC) is timed text overlaid on video that describes everything a deaf or hard-of-hearing viewer would otherwise miss — spoken dialogue, identifying who is speaking, music cues, sound effects, and audience reactions. The word "closed" means the captions are stored separately from the picture and only display when the viewer turns them on. "Open" captions are burned into the picture and always visible.


Definition and origin

Closed captioning was developed in the United States in the early 1970s and rolled out commercially in 1980 through the National Captioning Institute. The original technology encoded text in a hidden line of the broadcast video signal (line 21 of the vertical blanking interval); a decoder built into the TV read this data and overlaid the text on the screen. The Television Decoder Circuitry Act of 1990 required all televisions sold in the U.S. with screens 13 inches or larger to include built-in caption decoders. Modern closed captioning is delivered as a separate text track stored alongside the video file or stream, which viewers toggle with a CC button.

Closed captioning differs from subtitles in two important ways: (1) closed captions describe non-speech audio (a doorbell rings, ominous music plays) while subtitles only translate dialogue, and (2) closed captions are designed for viewers who cannot hear the audio at all, while subtitles assume the viewer hears the audio but doesn't understand the language. In 2026 both are typically produced from the same workflow — automated speech recognition with human review — but the editorial requirements differ.

Closed captions vs open captions vs subtitles

Three terms get used loosely. Here are the actual definitions:

Closed captions (CC)

Stored as a separate track from the video. Viewers turn them on or off via a CC button. Designed for deaf and hard-of-hearing audiences — include speaker identification (e.g., "JOHN:"), sound effect descriptions ("[door slams]"), and music notations ("[upbeat music]"). Required by U.S. law for most broadcast television and many streaming platforms.

Open captions

Burned directly into the video picture. Always visible — viewers cannot turn them off. Used in social media (TikTok, Instagram Reels with on-screen text), public displays, theatrical screenings for accessibility, and any context where the player may not support a captions track.

Subtitles

Translation of dialogue into another language for hearing viewers who don't speak the original. Subtitles assume the viewer hears the audio. They typically don't include speaker IDs (the viewer can hear who's speaking) or sound effects (the viewer can hear them). Stored as a separate track like closed captions but editorially different.

Why the distinction matters

A subtitle file translated from English to Spanish is not closed captioning. A closed caption file in the original language is not subtitles. If a video is accessible to deaf viewers, it has closed captioning. If a video is accessible to non-English speakers, it has subtitles. A truly accessible video may have both — closed captions in the source language and subtitles in target languages.

                          Closed captions   Open captions    Subtitles
Toggleable                Yes               No (burned in)   Yes
Speaker IDs               Yes               Sometimes        No
Sound effects described   Yes               Sometimes        No
Same language as audio    Usually           Usually          Different language
Primary audience          Deaf / HoH        Mixed            Hearing, non-native
Required by ADA / FCC     Often yes         No               No

U.S. legal requirements for closed captioning

Closed captioning is mandated by several U.S. laws and rules. Broadcast and online video are regulated separately.

Broadcast TV — FCC rules

The Federal Communications Commission (FCC) requires closed captioning for almost all programming on broadcast and cable television. The Telecommunications Act of 1996 established the foundation; subsequent rulemaking expanded coverage. As of 2026, the rules require: (a) 100% of new English-language programming to be captioned, with limited exemptions for specific genres; (b) caption quality standards covering accuracy, synchronicity, completeness, and placement; and (c) captioning of Spanish-language programming.

Online video — 21st Century Communications and Video Accessibility Act (CVAA)

The CVAA, signed in 2010, extended captioning requirements to internet video. Specifically, video that aired on TV with captions must include captions when redistributed online. Pure web-original content (e.g., a YouTube channel that never aired on TV) is not directly covered by the CVAA, but is covered by the ADA in many cases.

Public-facing websites — Americans with Disabilities Act (ADA)

The ADA's Title III applies to "places of public accommodation," which courts have increasingly interpreted to include public-facing websites. Numerous lawsuits have been filed against companies whose video content lacks closed captioning. The de facto standard is WCAG 2.1 Level AA, which requires captions for all pre-recorded video content with audio.

Section 508 — Federal agencies

U.S. federal agencies and contractors are required by Section 508 of the Rehabilitation Act to make all video content accessible — which means closed captions on every public video.

Practical implications for content creators

If you're running a business, university, or government website with video content, closed captioning is effectively required. The cost-benefit math has flipped in 2026: AI-generated captions cost almost nothing and now achieve roughly 95% accuracy on clear speech, so there's no realistic reason to skip captioning even for unregulated content.

How closed captions are technically encoded

The format depends on whether the captions are for broadcast TV, streaming, or web video.

Broadcast TV — CEA-608 and CEA-708

Two standards govern broadcast captioning in North America:

  • CEA-608 (formerly EIA-608, "Line 21 captioning"): The original analog captioning standard, encoded in line 21 of the NTSC video signal. Styling is minimal: a single monospaced font, a small fixed palette of text colors on a black background, and basic positioning. Still in use for legacy broadcasts.
  • CEA-708: The standard for digital television, introduced with HDTV. Supports multiple fonts, 64 colors, transparency, multiple character sizes, and richer positioning. Required for ATSC digital broadcasts in the U.S.

Streaming and online video — file-based formats

Online video uses standalone caption files in formats like:

  • SRT (SubRip): The most common format. Plain text with timestamps. Universally supported. See full SRT file reference →
  • WebVTT (.vtt): The W3C standard for HTML5 video. Required for the HTML <track> element.
  • iTT (iTunes Timed Text): Apple's caption format for iTunes/Apple TV submissions.
  • TTML / DFXP: XML-based formats used by Netflix and other streaming platforms for richer styling.
  • SCC (Scenarist Closed Caption): Used in professional video workflows; mirrors CEA-608 byte data.

For most use cases (YouTube, Vimeo, your own website), SRT or VTT is the right choice. SRT for upload to YouTube/Vimeo; VTT for embedding in your own HTML5 video player.
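As a concrete illustration (the cue text is invented), an SRT cue looks like this:

```text
1
00:00:01,000 --> 00:00:04,000
[upbeat music]
```

The WebVTT version of the same cue starts the file with a WEBVTT header line and uses a period instead of a comma in the timestamps (00:00:01.000 --> 00:00:04.000). To embed a VTT track in your own page, the HTML5 <track> element looks like the sketch below; the filenames are placeholders:

```html
<video controls src="video.mp4" width="640">
  <!-- kind="captions" marks this track as including non-speech audio cues -->
  <track kind="captions" src="captions.vtt" srclang="en" label="English" default>
</video>
```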

How to turn on closed captions on common platforms

Every modern video platform supports closed captioning. The toggle location varies:

YouTube (web, mobile, TV apps)

Click the CC button in the player controls (bottom-right). For language and font customization, click the gear icon → Subtitles/CC → Options. YouTube's auto-captions are AI-generated and toggleable separately from creator-uploaded captions.

Netflix

Click the speech-bubble icon while playing a title. Pick from available languages and styles. Captions can be customized globally in Account → Subtitle Appearance.

Amazon Prime Video

Click the speech-bubble or CC icon in the player. Customization in Settings → Captions.

Apple TV / iTunes

Press the menu/info button on the remote, choose Audio & Subtitles. On iOS, settings live in Accessibility → Subtitles & Captioning.

Disney+, Hulu, Max, Paramount+

All have a captions/subtitles button in the player overlay. Customization in account settings.

Broadcast TV

Press the CC button on your TV remote. If your remote lacks one, navigate to TV menu → Accessibility or Closed Captions.

Zoom, Google Meet, Microsoft Teams

All three support live AI captioning. Click the CC button (or "Show captions") in the meeting toolbar. Captions are generated in real time and disappear after the meeting unless explicitly recorded.

How to add closed captions to your own video

Three workflows, in order of effort:

1. Auto-generated captions (free, fast, ~95% accurate)

Upload to YouTube — captions are auto-generated within hours. They're imperfect but a workable starting point. Or use a dedicated tool like Whisper (open-source), TranscribeVideo.ai for URL-based videos, or paid services like Otter and Rev. Export the result as an SRT file. Upload the SRT to your video host (YouTube, Vimeo, etc.) — most platforms accept SRT directly.
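The export step is mechanical once you have timestamped segments. The sketch below assumes the segment shape returned by the open-source Whisper library (a list of dicts with "start", "end", and "text" keys) and formats it as SRT; the filenames in the comment are placeholders:

```python
def fmt_ts(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """Render Whisper-style segments as SRT: index, timing line, text, blank line."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{fmt_ts(seg['start'])} --> {fmt_ts(seg['end'])}\n{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)

# With openai-whisper installed, the segments come from transcription:
#   result = whisper.load_model("base").transcribe("video.mp4")
#   open("captions.srt", "w").write(segments_to_srt(result["segments"]))
```

The resulting .srt file uploads directly to YouTube, Vimeo, and most other hosts.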

2. Hybrid: AI draft, human review (recommended for accessibility)

Generate AI captions, then have a person review and correct. Common AI caption errors: proper nouns (people's names, places, brands), technical jargon, homophones (their/there), and overlapping speech. A human pass takes 15-30 minutes per hour of video, costs around $30-50 per hour of video if outsourced, and produces broadcast-quality captions. This is the standard workflow in 2026 for organizations with accessibility obligations.

3. Full human transcription (expensive, gold standard)

Services like 3Play Media, Rev, and CaptionMax produce captions from audio with no AI step. Costs $1-5 per minute of video. Required for the highest-stakes content (legal proceedings, broadcast, court evidence). Overkill for most content in 2026.

What to include in good closed captions

  • Speaker identification when not visually obvious (e.g., "JOHN:", "INTERVIEWER:")
  • Non-speech sounds in brackets ("[laughter]", "[doorbell rings]", "[upbeat music]")
  • Music descriptions when significant ("[melancholy piano music]")
  • Audience reactions ("[applause]", "[crowd cheering]")
  • Off-screen sounds ("[gunshot in distance]")

Don't include: filler words ("um", "uh") unless they're meaningful, repeated stutters, or overlapping crosstalk that the speaker self-corrects.
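Putting those conventions together, a single caption cue in SRT form might look like this (the timing and dialogue are invented for illustration):

```text
12
00:03:41,200 --> 00:03:44,900
[doorbell rings]
JOHN: I'll get it.
```

One cue can combine a bracketed sound description with speaker-identified dialogue, as shown here.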

Why captions matter beyond accessibility

Closed captioning was created for deaf and hard-of-hearing viewers and remains essential for that purpose. But research and platform data show that most closed-caption users in 2026 aren't deaf:

  • Sound-off viewing: Over 80% of Facebook video and 75% of TikTok video is watched without sound. Captions are the only way the message lands.
  • SEO benefit: Search engines index caption text. A captioned video is searchable on its full content; an uncaptioned video is searchable only on title and description.
  • Comprehension: Studies on educational video show that captions improve comprehension and recall, even for viewers who can hear perfectly.
  • Watch time: Captioned videos retain viewers 12-25% longer in industry benchmarks. Particularly true for long-form content.
  • Non-native speakers: 1.5 billion people speak English as a second language. Captions make English-language content accessible to them in a way audio alone cannot.
  • Public-space viewing: People watching video on a phone in a coffee shop, on a train, or in an open office mute the audio and rely on captions.

If you produce video professionally and don't caption it, you're losing a meaningful share of your audience for the cost of a few minutes of automated captioning per video.

Feature Comparison

Feature                        Closed captions      Open captions         Subtitles
Stored separately from video   Yes                  No (burned in)        Yes
Viewer can toggle on/off       Yes                  No                    Yes
Includes speaker IDs           Yes                  Sometimes             No
Includes sound effects         Yes                  Sometimes             No
Same language as audio         Usually              Usually               Different language
Primary audience               Deaf / HoH viewers   All viewers           Hearing, non-native speakers
Required by FCC / ADA          Often yes            No                    No
Common file formats            SRT, VTT, SCC        Burned-in (no file)   SRT, VTT

How It Works

  1. Closed captioning starts as a transcript — every word spoken in the video, plus descriptions of non-speech audio, with timestamps.
  2. The transcript is encoded in a caption file (SRT, VTT, SCC, or TTML) or in the broadcast signal (CEA-608/708).
  3. The video player reads the caption track separately from the video. When the viewer toggles CC on, the player overlays the timed text on the picture.
  4. Modern AI workflows produce a draft transcript automatically (Whisper, TranscribeVideo.ai, Otter). A human reviewer corrects errors, adds speaker IDs, and notes sound effects.
  5. The corrected caption file is uploaded alongside the video on YouTube, Vimeo, or any platform that accepts caption tracks. Viewers can then turn captions on by clicking the CC button.

Why Use This Tool?

  • Required by U.S. law (ADA, FCC, Section 508) for most public-facing video content
  • Reaches deaf and hard-of-hearing viewers — about 15% of U.S. adults have some hearing loss
  • Captures the 80%+ of social video watched with sound off
  • Improves SEO — caption text is indexed by search engines
  • Increases watch time and comprehension across all viewer types
  • Makes content accessible to 1.5 billion non-native English speakers

Use Cases

  • Educational courses and online learning platforms (legally required and pedagogically beneficial)
  • Corporate training videos for employees with hearing impairments and non-native speakers
  • Marketing videos on social media — captions make them watchable in sound-off feeds
  • Government and federal agency video (Section 508 compliance)
  • Broadcast and cable television (FCC mandated)
  • Video evidence in legal proceedings (verbatim caption file becomes part of the record)

Frequently Asked Questions

What's the difference between closed captioning and subtitles?

Closed captions describe all audio (speech, speaker IDs, sound effects, music) and are designed for deaf and hard-of-hearing viewers. Subtitles translate dialogue into another language for hearing viewers who don't speak the original. Closed captions are usually in the same language as the audio; subtitles are not.

Why is it called 'closed' captioning?

'Closed' means the captions are hidden by default — stored separately from the video and only displayed when the viewer turns them on. 'Open' captions are burned into the video picture and always visible. The closed/open terminology dates from the original 1970s broadcast captioning system.

Is closed captioning required by law?

In the U.S.: yes, for most contexts. The FCC requires CC for almost all broadcast and cable TV. The CVAA extends this to online video that originally aired on TV. The ADA increasingly requires CC for public-facing websites. Section 508 requires it for federal agencies. For private business websites, CC is usually required under ADA Title III.

How accurate are auto-generated closed captions?

Modern AI captioning (YouTube auto-captions, Whisper, Rev's auto-CC, TranscribeVideo.ai) achieves 90-95% word accuracy on clear speech in 2026. Errors concentrate around proper nouns, technical terms, and overlapping speakers. For accessibility-compliant captions, AI output should be reviewed by a human — typically 15-30 minutes per hour of video.

Can I download closed captions from a YouTube video?

Yes — YouTube allows transcript download via the three-dot menu under the video player. For automated extraction of any YouTube transcript as an SRT file, paste the URL into TranscribeVideo.ai's YouTube transcript generator.

What file format are closed captions stored in?

For online video: SRT (most common, universal), WebVTT (required for HTML5 <track>), TTML or DFXP (Netflix and styled formats), iTT (Apple). For broadcast TV: CEA-608 (analog/legacy) and CEA-708 (digital HDTV). YouTube and Vimeo accept SRT directly.

Do I need closed captions for a small business website?

Probably yes. Title III of the ADA has been interpreted by U.S. courts to apply to public-facing business websites, and lawsuits over inaccessible video content have proliferated since 2017. The accepted standard is WCAG 2.1 AA, which requires CC for all pre-recorded video. Given AI captioning costs almost nothing now, the prudent choice is to caption all video.

What does CC stand for?

CC stands for Closed Captions (or Closed Captioning). The CC symbol — typically two stylized C letters — is the universal toggle button on TV remotes, video players, and streaming apps for turning closed captions on or off.
