Speed Control (Rate Control)
Vertical Zoom (Amplitude Multiplier)
Transcript (Plaintext) Import
End Event On Punctuation (.?!)
Auto Select Next Event On Start Time Update
Auto Chain Events on Start Time Update
Automatically Insert Line Breaks
Automatic Backups & Restore
Copy Timecodes From Event Group
Formatting (Bold, Italics, and Underline)
Automatic Scene Change Detection
In/Out Shortcuts (MacCaption Compatible)
Closed Caption Embed & Extraction
How To Create a Team Project
How To Share a Team Project
How To Open a Team Project
Creating Audio Descriptions
Audio Description Templates
Update The Voice of All Events
Updating the Font of the Primary Language
Black Video + Audio OR Video Does Not Play
How To Cancel/Upgrade/Pause My Account
Document Version | Software Version | Notes | Author(s) |
1.0.0 | <1.8.13 | Initial Draft | Nathaniel Deshpande |
1.1.0 | 1.8.13 | Major updates based on UI and other functional changes | Nathaniel Deshpande |
Closed Caption Creator can be used to create subtitles, closed captioning, transcripts, and audio description tracks. Closed Caption Creator supports a wide range of video, audio, and subtitle file formats used for broadcast and online distribution (e.g. Netflix, Amazon Prime, YouTube, etc.). Version 8 of Closed Caption Creator was released in December of 2021, with new updates being made available to users every month.
This document is intended to be used as a companion guide alongside our online video series available for free online (https://www.youtube.com/c/closedcaptioncreator). If you have any questions, or would like to request more information, please visit the support section located below.
Before getting started with Closed Caption Creator, we recommend familiarising yourself with a few of our core concepts and the reasoning behind key design decisions.
Closed Caption Creator uses a Project based approach to organising deliverables. Deliverables are the final output goal of the user. For example, a deliverable can be a caption file, video, audio track, or transcript. In order to help organise deliverables, Projects contain Event Groups with one or more Events.
Event Groups are created by the user based on the deliverable. For example, when authoring subtitles or captions, an Event Group with the type Subtitle will need to be created. When only a simple transcript is required, users will create an Event Group with the type Transcription. Event Group types include Subtitle, Transcription, Audio Description, and Translation.
Event Group Creation Window
Event Group Navigation
Event Groups help to organise your work and keep track of requirements. When translating subtitles to another language, we recommend creating a Subtitle Event Group for each language inside of a single project, instead of creating multiple projects for each new language.
Event Groups can also help when multiple users (team members) are working on the same project together at the same time. Each user may choose to work in a separate Event Group for their segment of media. Once each segment is completed, Event Groups can be merged together to create a final Event Group which can be exported and delivered to the client. You can find more information about Team Workflows in the Team Section below.
Event Groups will contain one or more Events. Events contain properties such as Text, Start Time, End Time, and Style.
Events are displayed in the Event List based on the Event Group Type. For example, the following Event is being rendered as part of a Subtitle Event Group.
This Event is being rendered as part of a Translation Event Group. You will notice that an Event that belongs to a Translation Event Group will show an additional text field representing the Original text language. Position and Caption Style options are hidden when working in a Translation Event Group.
Event Groups and Events are the building blocks used in every project. It is important to understand the relationship between Events and Event Groups before continuing in this guide.
The final diagram included in this section illustrates the relationships between the other entities discussed in this guide.
As new functionality is added to Closed Caption Creator, this relationship diagram will be updated to include new entities and links.
Video training tutorials are available for free to all users by visiting https://www.youtube.com/c/ClosedCaptionCreator. Video training can help you stay up-to-date with the latest features and workflows supported by Closed Caption Creator.
Welcome to Closed Caption Creator. In this section we will walk you through installing, and logging into Closed Caption Creator.
Closed Caption Creator is available as a web and desktop application. This means you can run Closed Caption Creator in a web browser (Chrome) or install it as a desktop application on your Mac or PC.
Do you need help deciding between using the desktop or web version?
If you answered yes to any of the questions above we recommend installing the desktop version of Closed Caption Creator. If you answered No to all of the above, either version will suit your needs.
If you are using the web version of Closed Caption Creator there is nothing to install. Simply visit https://app.closedcaptioncreator.com in your Chrome web browser to run Closed Caption Creator.
To install the desktop version of Closed Caption Creator please visit https://www.closedcaptioncreator.com/downloads.html. Here you will need to download the installer for your operating system (i.e. Windows, Mac, or Linux).
If you need help installing Closed Caption Creator please contact our support team by emailing support@closedcaptioncreator.com.
Once you have installed Closed Caption Creator on your PC or Mac, you can open it from the Start menu on Windows or the Applications folder on Mac.
When you first launch Closed Caption Creator it will ask you to log in using the same email address you used when registering your subscription. The first time you log in you will also need to create a password. It is important that you follow best practices when creating a password and that you do not share your password with anyone else.
Once you have logged in, you will be greeted by the Welcome Screen.
If your subscription has not been registered you will receive an error message. Please contact our support team to correct this issue.
In this section we will walk you through getting started with Closed Caption Creator. We will cover the most basic features you will need to know in order to start captioning. Closed Caption Creator can be used to create multiple file types (audio, video, transcripts, and captions), however, we will focus on manually transcribing our audio to text, and exporting a sidecar caption file.
💡 If this is your first time creating subtitles or captions, we recommend watching our Closed Captioning For Beginners series online.
You will need a video or audio file to start captioning. If you do not have a video or audio file we recommend downloading this video file: https://media.w3.org/2010/05/sintel/trailer.mp4
We will start by creating our first project. You can open the New Project dialogue from the Welcome screen or by going to File -> New -> Project.
Welcome Screen showing the New Project option highlighted.
The New Project dialogue will open showing a number of available options.
Start by selecting Default from the Project Type dropdown. Team Projects are discussed in more detail in the Team section below.
Next, you will want to provide a project name. We recommend making project names unique so that they are easy to identify when managing projects at a later date.
Select your Project Frame Rate. This can be updated at a later time. If you do not know the frame rate of your media we recommend setting it to 24.
Finally, select the Media Source and Media Location.
Media Sources and Locations are discussed in more detail in the Media Import section below.
Click the Create Project button once you are ready.
💡 Larger video files may take a few seconds to import due to additional processing required to generate the audio waveform display.
Transcription is the process of converting audio to text. Closed Caption Creator supports both Automatic Transcription and Manual Transcription. In this section we will explore Manual Transcription.
Start by playing back the video by clicking the Play button located beneath the video display. When you hear dialogue, pause the video and create a new Event by clicking the plus icon at the bottom of the Event List.
Type what you hear into the text field of the Event.
Continue to playback the video and pause when needed to create new Events.
💡 If you find this process slow, we recommend configuring keyboard shortcuts to control playback and insert new Events.
Once you have finished transcribing your video, you will want to assign a Start and End time to each subtitle Event so that it displays at the correct time. This process is called Caption Spotting (or cueing). Closed Caption Creator supports a number of different methods for Caption Spotting which are discussed in more detail below.
If you view your Event List you will notice that each timecode input is set to 00:00:00:00.
Start by opening the Timing & Sync panel in the QuickTools Drawer.
Enable the panel by clicking the toggle in the top left of the panel.
The Timing & Sync panel has two modes: Long Press and Dual Key. For this example, we will be using the Long Press mode.
💡 You may need to practice this a few times. Once you’re comfortable with the mouse you may wish to use the shortcut key (Up Arrow).
As you sync your captions and video you will notice the Events begin to appear on the Timeline.
Once you are finished Caption Spotting, you may wish to go back and fine-tune the timings using the handles of the Events in the Timeline.
The last step in the captioning process is learning how to export a sidecar caption file. There are a number of different caption file formats available. The most popular formats for web delivery are SRT and WebVTT. If you are delivering for broadcast you may be asked to export an SCC or EBU-STL file. Closed Caption Creator supports all of these formats.
Go to File -> Export and select Subtitle File from the list of options.
Select a file extension, and profile for the file type you wish to export.
Click Export when you are ready to save the file to your computer.
We hope you have learned the basics of using Closed Caption Creator. Most workflows are much more complicated than what was covered in this section, therefore, we recommend reading through more of this user guide. In other sections we explore using AI to help in the captioning process, speaker identification, formatting, and much more.
If you have questions that are not answered in this guide, please feel free to reach out to our support team for help.
#UI #controls #editor
The Closed Caption Creator user interface is made up of five parent components. Within each component there are additional panels and screens that can be enabled.
The Toolbar (or Main Menu) provides access to advanced functionality found in Closed Caption Creator. Each menu item is described in more detail throughout this guide. For more information on a specific function, we recommend searching through this document.
The Media Player provides playback support for multiple audio and video formats. Closed Caption Creator currently supports playback of local files, YouTube, Vimeo, cloud urls, and HLS (.m3u8) manifest files.
Media Controls are located just below the video playback window.
The Volume Control allows users to control the program audio level. When creating Audio Descriptions, an additional volume range slider will be available to control the volume level of the Audio Description preview.
The Speed Control allows users to control the playback speed of the Media Player. Playback speed can range from 0.25x to 16x real time.
Go to the start (0 sec) of the currently loaded media.
Go back one (1) frame in time.
Play or pause the currently loaded media.
Go forward one (1) frame in time.
Go to the end of the currently loaded media.
The Timecode Display will show the current time in SMPTE format based on the current Project Frame Rate settings. The Timecode Display also allows users to input their own timecode value in order to skip to a specific part of media. Up/Down arrow keys can also be used to navigate forwards and backwards frame-by-frame.
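As a minimal illustration (not the application's own code) of how the same moment in the media maps to different SMPTE values depending on the Project Frame Rate, here is a simple non-drop-frame conversion; drop-frame timecode follows different rules and is not covered by this sketch:

```python
def seconds_to_tc(seconds: float, fps: int) -> str:
    """Convert a media time in seconds to non-drop-frame SMPTE timecode (HH:MM:SS:FF)."""
    total_frames = round(seconds * fps)
    ff = total_frames % fps
    ss = (total_frames // fps) % 60
    mm = (total_frames // (fps * 60)) % 60
    hh = total_frames // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# The same media time (90.75 seconds) at two different Project Frame Rates
print(seconds_to_tc(90.75, 24))  # 00:01:30:18
print(seconds_to_tc(90.75, 25))  # 00:01:30:19
```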
The Event Selector shows the currently selected Event. The user may click the Event Selector in order to navigate to the currently selected Event in the Event Editor.
Select the previous Event in the Event List.
Select the next Event in the Event List.
Pause playback for 0.5 seconds while typing. This toggle can help when manually transcribing audio to text.
Enable Caption Lock in order to link the Event Editor and Media Player. When playing back media, the Event Editor will automatically select the currently displayed Event based on timecode.
💡 We recommend enabling this setting when reviewing captions so that it is easy to make corrections if required.
Enable Video lock in order to link the Media Player and Event Editor. When selecting Events in the Event Editor, the Media Player time will automatically update based on the Start time of the selected Event. If an Event does not have a Start time, then the Media Player time will not update. If a user selects the End time input within an Event, then the Media Player will update based on the End time instead of the Start time.
The Preview CC toggle will enable real time preview of captioning/subtitles. When disabled, the Media Player will display the currently selected Event regardless of the current time. When enabled, the Event Text will display as it would normally during playback for the end-user.
💡 It may be helpful to disable this option when manually transcribing your media to text.
The Event Editor is made up of three child components: Event Group Navigation, Event List, and Event Editor Controls.
What are Events and Event Groups? Before continuing, you may wish to revisit the section on Core Concepts that discusses Events, and Event Groups in more detail.
The Event Group Navigation is located at the top of the Event Editor and will display a tab for each Event Group within a Project. Each Event Group Tab will show the Event Group Display Name, an icon representing the Event Group Type, and the Event Group Settings menu. Simply click on any of the Tabs to open the Event Group inside of the Event List.
The Event List will show all Events for the selected Event Group. Each Event is represented by a row in the Event List.
At the very bottom of the Event List are additional controls used to create Events, delete Events, and tag Events as Forced Subtitles. On the bottom right of the Event List is the Project information which will show the Project Name, Project Frame Rate, number of Events in the selected Event Group, and the media duration.
Watch: Creator 8 - Editor Control Toolbar
The Event Editor Controls are located in a ribbon to the left of the Event List.
Set the alignment for all selected Events to the left.
Set the alignment for all selected Events to the centre.
Set the alignment for all selected Events to the right.
Update the position of all selected Events using a nine box grid.
Copy the selected Events or highlighted text to clipboard.
Cut the selected Events or highlighted text to clipboard.
Paste Events, or text.
Shift single word up to the previous Event.
Shift single word down to the next Event.
Shift single line up to the previous Event.
Shift single line down to the next Event.
Split Event based on the cursor location.
Merge all selected Events.
Expand the text of all selected Events.
Compress the text of all selected Events.
Update the Start time of all selected Events to match the End time of the previous Event.
Update the End time of all selected Events to match the Start time of the next Event.
The QuickTools Drawer is located beneath the Media Player. The ‘Drawer’ can be opened and closed using the arrow toggle in the bottom right. Each tool located in the QuickTools can be shown or hidden by editing the QuickTools Settings located in the Options menu (Edit -> Options).
The design concept behind the QuickTools Drawer is to provide a UI location where tools that are used ‘once in a while’ can be accessed.
The Styles panel is used to customise the look of the Subtitle Preview and Video Export (with open captions). Additional information on styling captions can be found in the Caption Style Section.
The Search and Replace (Find and Replace) panel can be used to search the Events of the selected Event Group for a specific word (or regular expression), and replace it with a new word or character set. Additional information can be found in the Search & Replace section.
The Timing and Sync panel is used for caption spotting (cueing, timing). Closed Caption Creator supports a number of different methods for spotting Events, including Automatic Sync. Additional information can be found in the Caption Spotting section.
The QC and Review panel allows users to select a custom Style Guide and automatically test each Event in the selected Event Group for errors. Event errors will display in the Failed Event list. When a user selects a Failed Event, the list of errors will display in the Errors list. The Event List will also update to show the selected Event. More information can be found in the QC & Review section.
The Event Templates panel will display all Event Templates that have been saved by the user. Events can be created using an Event Template by simply clicking the Event Template you wish to use. To create a new Event Template, select the Event from the Event List, right-click, and select Add To Templates. Event Templates can be useful when storing Event Text that is often reused (e.g. Audience Laughter, Telephone Ringing). Event Templates include Event Text, and Event Positioning. Other Event properties are not stored when creating a template.
The Voices panel displays all pinned voices used when authoring Audio Description tracks. For more information on Audio Description and managing pinned voices please visit the Audio Description section.
The Speakers panel shows all Project Speakers. To assign a Speaker to an Event, simply select the Event, and click the Speaker card you wish to assign. For more information on speaker identification please visit the Speaker Identification section.
The Tags panel shows all Project Tags. Project Tags can be managed from the Tag Manager window (available by clicking the Menu button to the right of the Tags panel). In order to link a Tag to an Event, simply select the Event and click the required Tag. For more information on Tagging and other metadata workflows please see the Tags section.
The Interactive Timeline is an alternative view of all Events from the Selected Event Group. Events are rendered as Timeline Events based on their Start and End time.
💡 Events without Start or End times will not be displayed on the Timeline.
Timeline Events will show the Event Text, along with handles used to update the Start and End time. Users can simply click-and-drag either handle to update the time of an Event. Users can also click-and-drag an Event anywhere on the timeline in order to update both the Start and End times at once.
Multiple Events can be moved at once by selecting more than one Event and dragging them on the timeline.
Selected Events will show a green plus (+) icon in the top right corner, used to create a new Event on the timeline. Events created using the timeline will automatically generate based on the timecodes of the selected Event. This means that the new Event will inherit timing based on the previous Event.
Scene Change (Shot Change) Markers will display as Red markers in the Ruler of the Timeline. Scene Change Detection is explained in more detail in the section below.
Dialogue silence (no dialogue) will show as a highlighted part of the Ruler. This is automatically generated when creating Audio Description Templates. For more information, please refer to the section on Audio Description.
Horizontal Zoom controls are located to the right of the Timeline. Horizontal Zoom is limited to the default values 10, 15, 30, 45, and 60 seconds. The lower the number, the greater the Zoom multiplier. For example, 10 seconds will show 10 seconds of Events within the width of the display.
Vertical Zoom (Amplitude Multiplier) will appear to the right of the Horizontal Zoom controls. The Vertical Zoom accepts an integer that is then used to multiply the amplitude of the waveform. This can be used to make the waveform appear larger without impacting the actual playback volume.
💡 Users can scroll along the timeline by holding the shift button while scrolling. This method works for all horizontal scrollable areas (e.g. Event Group Navigation).
#open #load
Closed Caption Creator supports a number of different file types and formats. In this section we will review the import process for each file type.
Watch: Creator 8 - How to import and export subtitles
Closed Caption Creator supports over 25 different subtitle file formats. A complete list of supported formats can be found here.
To start, go to File -> Import and select Subtitle File from the list of options.
Click Next and the Subtitle Import window will appear showing the Subtitle Import Form and a Source Preview.
Select a file from your local hard drive using the file picker input.
Beneath the file picker will be the Event Group options. Users have the option to import subtitle Events into an existing Event Group, or they can create a new Event Group by choosing the *NEW* option. When Importing into an existing Event Group, the user will need to select the Merge or Replace option.
Merging subtitles into an existing group will place all new subtitles at the bottom of the Event List. Replacing subtitles in an existing group will remove all Events inside of the group before importing.
Depending on the source file type extension, users may need to select a Source Profile from the Source Profile dropdown list. If a user is not sure which profile to use, they may need to try a few profiles in order to get the correct results.
Users can access the Advanced Options by clicking the button near the bottom of the Import Form.
Advanced Options are separated into two sections: General Options, and Decoding Options.
📝 Not all file imports will show the Decoding Options. Decoding Options are custom settings dependent on the selected Source Profile.
The General Options will show the Source Frame Rate, File Encoding, and Drop Frame Convert setting.
The File Encoding may need to be updated if the source file encoding is different from UTF-8.
Decoding Options provide additional settings used when decoding (reading) the source file.
By default, most Decoding Options will be disabled.
Watch: Creator 8 - Transcription Workflow
Closed Caption Creator supports importing transcripts as plaintext (.txt) files. Transcript files should only contain text data, and exclude timecode information.
Start by going to File -> Import and select Plaintext (Transcript) from the list of options.
The Plaintext Importer is organised very much the same as the Subtitle Importer. Select a file using the Transcript File Picker near the top of the window.
Beneath the file picker will be the Event Group options. Users have the option to import Events into an existing Event Group, or they can create a new Event Group by choosing the *NEW* option. When Importing into an existing Event Group, the user will need to select the Merge or Replace option.
Merging subtitles into an existing group will place all new subtitles at the bottom of the Event List. Replacing subtitles in an existing group will remove all Events inside of the group before importing.
📝 When importing into a new Event Group, the Event Group type will default to Transcription. Transcription Event Groups can be converted to Subtitle Event Groups using the Event Group options menu (accessible via the Event Group Navigation).
Users can access the advanced import options by clicking the More Options button. Advanced options include settings for the source File Encoding and additional import settings that help the importer determine how Event text should be structured.
The file encoding may need to be changed if your source preview contains unknown symbols. For example:
Changing the File Encoding option from UTF-8 to Windows-1252 (LATIN1) allows the Importer to properly decode the character set:
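As a hypothetical illustration (not the application's internals), the sketch below shows why this happens: the same bytes decode differently depending on the declared encoding, and the accented character only appears correctly once the right encoding is chosen.

```python
# "café" as saved by a text editor using the Windows-1252 (LATIN1) encoding
raw_bytes = "café".encode("windows-1252")

print(raw_bytes.decode("utf-8", errors="replace"))   # caf� - wrong encoding shows an unknown symbol
print(raw_bytes.decode("windows-1252"))              # café - correct encoding
```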
The Max Lines setting can be enabled to limit the number of lines the Importer will include in a single Event.
The Max Characters setting can be enabled to limit the number of characters in a line before the Importer will automatically insert a line break. This setting can be used in conjunction with the Max Lines setting to provide greater control of Event Text shaping.
If a transcript contains a special line ending character, it may help to specify that character here.
If a transcript contains a special character to start a line, it may help to specify that character here.
The Transcript Importer will automatically insert a new line break following any punctuation (.?!).
The Transcript Importer will automatically insert a new line break following a comma (,).
The Block Import option will bypass a number of the other import options. By enabling Block Import, the Transcript Importer will create Events based on blocks of text from the transcript; that is, any block of text separated from the next by two line breaks (an empty line).
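For example, a plaintext transcript laid out as follows (a made-up sample) would be imported as three separate Events when Block Import is enabled:

```
Welcome back to the show.
Today we are talking about accessibility.

Our first guest joins us from Toronto.

Thanks for having me.
```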
Auto Format is performed post-import. This is the same process used by the Auto Format tool (Edit -> Auto Format). We do not recommend using this option; instead, use the newer Auto Format Custom option (Edit -> Auto Format Custom) for better results. More information can be found in the Auto Format section below.
Closed Caption Creator supports media playback from multiple sources. To import a new media file, go to File -> Import and select Media File from the list of options.
When importing a new media file, users will need to select a Media Source and a Media Location. The Media Source selection indicates HOW to play the media. The Media Location selection indicates WHERE to play the media from.
The following Media Sources are supported:
Local Storage can support playback from any hard drive or network location mounted to the client system. Local Storage playback is limited, however, to the audio and video formats listed at the following link: https://www.chromium.org/audio-video/.
Proxy RT is supported on all desktop versions of Closed Caption Creator. Similar to Local Storage playback, Proxy RT is able to play back files from any hard drive or network location mounted to the client system. The difference is that Proxy RT is able to play back almost any file format including ProRes, DNxHD, and other broadcast formats. There is a slight delay when importing media using the Proxy RT Media Source.
Closed Caption Creator supports playback of both YouTube and Vimeo content using the video URL.
Example Vimeo URL: https://vimeo.com/747666103
Example YouTube URL: https://www.youtube.com/watch?v=zoNMlpMaUYk
Closed Caption Creator supports secure HLS streaming. This is a great option for customers who are interested in a more secure way of sharing media.
Example HLS Manifest URL: https://bitdash-a.akamaihd.net/content/sintel/hls/playlist.m3u8
Closed Caption Creator supports direct links to files stored in the cloud. For example, files stored in Amazon S3, GCP, or Azure.
📝 Playback is still limited to the supported file formats listed here: https://www.chromium.org/audio-video/.
Users can import Team Projects from other users by going to File -> Import and selecting Team Project from the list of options.
To import a team project, users will need a Project Code created by the Project Owner.
Input the Project Code and click the Import Project button to load the project.
📝 More information on Team Projects can be found in the Team Projects section below.
Closed Caption Creator supports project files (.ccprj) from version 7 and version 8. To import a previously exported project, go to File -> Import and select Project File from the list of options.
The Project File Importer will appear with a file picker near the top. Select your project file.
📝 Project Files will contain a ccprj extension.
Click the Import Project button to continue.
The Project Importer supports the ability to import parts of a project instead of the entire project itself.
Under the More Options menu, select the Merge Import option and choose which modules to import from the project file.
The Event Groups option will import all Event Groups and Events from the project.
#save #deliver #publish #transfer
Closed Caption Creator supports a number of different file types and formats. In this section we will review the export process for each file type.
Video Export allows users to export video with burnt-in (open) captions using the style settings set in the Styles Panel of the QuickTools Drawer.
📝 Video Export is only available when using the desktop version of Closed Caption Creator. The desktop version can be installed by visiting www.closedcaptioncreator.com/downloads.html.
Watch: Creator 8 - Adding subtitles to your video
To export video go to File -> Export and select Video File from the list of options.
Select a Video Export Location on your local hard drive.
Note: Please ensure the user account has write access to the Video Export Location. If the user account does not have write permission, the video export process will fail.
Select the Event Group containing the Events to be exported.
Select a preset from the list of available presets.
Note: Proxy Presets have been configured to bypass the caption export process. This means that when using the Proxy Preset, no caption data will be burnt into the output. Proxy presets are intended to be used to create smaller files that play back natively in Closed Caption Creator.
Note: Instagram Presets will apply a 1:1 crop to the video export regardless of the original video aspect ratio.
Click Export to begin the export process.
Closed Caption Creator supports the export of audio description tracks using synthetic voice.
Note: Audio Description Export requires an active subscription to the Audio Description plugin.
To export audio description go to File -> Export and select Audio Description from the list of options.
Click Next to open the Audio Description Exporter.
Select an Export Folder where the final outputs will be stored. The Audio Description Exporter will export VO tracks for each Event, a VO mixdown, and a program mixdown containing the program audio and AD.
Select the Event Group that will be exported. The Event Group list will be filtered to only display Audio Description Event Group types.
Select the Export Profile (preset) that will be used to generate the final mixdown files. Closed Caption Creator supports FLAC, MP3, and WAV files.
AutoMix Settings can also be specified for the final program mixdown. AutoMix will work to automatically apply fades and ducking where required based on the settings specified. Mix Presets have been added to help provide a baseline.
Loudness Processing can also be applied to the final program mixdown. Loudness Processing presets are based on industry standards set by the EBU and ATSC.
Click the Export Audio button to begin the export process.
Hint: If you require a transcript or timed text file you may also wish to use the Subtitle or Transcript Exporter as discussed below.
Closed Caption Creator supports the export of transcript files in multiple formats including csv, txt, and docx.
To export a transcript file go to File -> Export and select Transcript File from the list of options.
Click Next to open the Transcript Exporter.
Select the Event Group from the Event Group dropdown list. Any Event Group type can be exported as a transcript file.
Select a Template Preset.
The Transcript Exporter supports two presets: Paragraph and Subtitle. The subtitle preset will export all Events as they are shown in the Event List. The Paragraph template will attempt to merge Events that are chained together. This creates more of a paragraph-like structure in the final output. Users may wish to experiment with both templates to see which option best suits their requirements.
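As a purely hypothetical illustration of the structural difference (not the exporter's exact output), three chained Events might be rendered as follows by each preset:

Subtitle preset (one block per Event):

```
Welcome back to the show.

Today we are talking about accessibility.

Our first guest joins us from Toronto.
```

Paragraph preset (chained Events merged into one paragraph):

```
Welcome back to the show. Today we are talking about accessibility. Our first guest joins us from Toronto.
```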
Select the desired file format. The Transcript Exporter supports txt, docx, and csv.
Under the More Options menu there are settings for including additional metadata such as timecodes and project metadata.
Notes and speaker information are automatically included in transcript exports. There is currently no setting to disable this behaviour.
Closed Caption Creator supports over 25 different subtitle and closed caption formats. A complete list of supported formats can be found here.
To export a subtitle file go to File -> Export and select Subtitle File from the list of options.
Click the Next button to open the Subtitle Exporter.
Select the Event Group containing the Events you wish to export using the Event Group dropdown menu.
Next, select a target profile. The File Extension dropdown is used to filter the available Profiles. For example, setting the File Extension to txt will update the available profiles to only those supported by that extension.
Users can access the Advanced Options by clicking the More Options button.
Set the frame rate and drop frame value of the output file if different than the current Project Frame Rate.
When your output file must match the timecode of the corresponding video file, the TC Offset setting can help to align the output with the video. For example, if the timecode of the first frame of video is 00:58:30:00, then the TC Offset value will also be set to 00:58:30:00.
Hint: Metadata tools such as MediaInfo can be helpful in determining the timecode of the first frame.
The TC Multiplier setting can be used to adjust the output timecodes. Common values are 0.999 (when going from 23.98 fps to 24 fps) and 1.001 (when going from 24 fps to 23.98 fps).
Users may choose to do this in the Subtitle Exporter or using the Timecode Stretch and Shrink tool.
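The following minimal sketch (illustrative only; the function names are hypothetical and the order in which the application applies the offset and multiplier may differ) shows how an offset and a multiplier each adjust an Event timecode:

```python
FPS = 24  # frame rate used for this illustration

def tc_to_frames(tc: str, fps: int = FPS) -> int:
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 3600 + mm * 60 + ss) * fps) + ff

def frames_to_tc(frames: int, fps: int = FPS) -> str:
    ff = frames % fps
    ss = (frames // fps) % 60
    mm = (frames // (fps * 60)) % 60
    hh = frames // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

event_start = tc_to_frames("00:10:00:00")   # Event timed against media that starts at zero
offset = tc_to_frames("00:58:30:00")        # timecode of the first frame of the delivered video
multiplier = 1000 / 1001                    # or 1001 / 1000, depending on the conversion direction

adjusted = round(event_start * multiplier) + offset
print(frames_to_tc(adjusted))               # 01:08:29:10 in this example
```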
The File Encoding option allows users to set the encoding of the output file. Binary file types (such as EBU-STL) will have a default file encoding that cannot be overwritten. Updating the file encoding may help for certain non-European languages (such as Arabic).
Drop Frame Convert has been deprecated. Please avoid use of this option.
The Forced Subtitles option allows users to include, or exclude Events that have been marked as Forced Subtitles. Additional information can be found below.
The Batch Subtitle Export tool can be used to export multiple Event Groups and multiple subtitle file types at one time.
Users can select one or more Event Groups by holding down the Shift or Ctrl key while clicking in the Event Group list.
Users can insert additional formats using the + and - buttons near the top of the Formats List.
Note: One drawback of the Batch Exporter is the lack of Advanced Options (found in the default Subtitle Exporter). If you require any of the advanced settings (e.g. TC Offset) please use the default Subtitle Exporter instead.
Users may choose to export their Project Files to local storage in order to archive or share them with other users.
Hint: Users can import Project Files using the Project Importer.
To export a copy of your project go to File -> Export and select Project File from the list of options.
Click the Export button to export your project to disk.
Note: No media is saved when exporting a project. It is the user's responsibility to manage the media that accompanies a project.
#errors #reading #rate #checks #troubleshooting
Closed Caption Creator supports a number of different Review workflows for verifying the accuracy of subtitles and transcriptions. In this section we will discuss Real Time Error Detection, and the use of Style Guides to check subtitles.
Real Time Error Detection is designed for users authoring subtitles. Errors are displayed as badges beneath the timecode inputs of each subtitle Event.
The benefit of Real Time Error Detection is that it allows captioners to see mistakes as they are being created instead of having to wait for review. In the screenshot above, the Event is showing errors for exceeding the max number of lines, and the max number of characters allowed per line.
Watch: To see how to configure and use Real Time Error Detection, visit the following link: How To Catch Errors In Your Subtitles and Closed Captioning
Real Time Error Detection can be configured per Event Group. In order to configure the error settings for an Event Group go to the Event Group settings by clicking the menu icon to the right of the Event Group tab.
Next, select Settings from the list of options.
Real Time Error Detection settings will be shown in the Error Detection section of the Event Group Edit form.
Note: Real Time Error Detection is only available for Subtitle Event Groups.
Real Time Errors can be configured for exceeding the maximum characters per second (CPS), maximum words per minute (WPM), maximum characters per line, maximum lines per Event, timecode overlaps, or illegal (non-608-compliant) characters.
Note: For more complex error detection, we recommend using Style Guides to identify errors.
Once Error Detection rules have been applied, these settings will become the new default whenever creating new Event Groups.
If a user wishes to disable Real Time Error detection, it is recommended to set all numerical values to 9999, and disable overlap, and illegal character detection.
Style Guides are a more advanced method for automatically reviewing subtitle Event Groups. Style Guides were designed for supervisors and reviewers who may not have been involved in the authoring process. Style Guides will identify specific Events and any errors that are detected.
Watch: Creator 8 - Review Subtitles and Closed Captioning
To configure a new Style Guide, go to Edit -> Options and select the Style Guides option from the navigation menu on the left.
Click the button near the top left of the Style Guide form to register a new Style Guide.
Note: All new Style Guides are created with the same “Untitled Style Guide” name.
Update the Style Guide name. Users can register an unlimited number of Style Guides, so it is recommended that Style Guide names be as descriptive as possible.
Style Guides can be configured to test for Event Timing, Reading Rate, and other technical requirements.
Style Guide Test | Description |
Maximum Event Lines | Events containing more than the maximum allowable number of lines will fail. |
Maximum Characters Per Line | Event lines exceeding the maximum allowable number of characters will fail. Only one line must fail in order to return an error for the entire Event. |
Overlap Detection | Event times are tested to ensure they do not overlap with another Event in the same group. |
Illegal Character Detection | Illegal character detection will return an error if characters that do not comply with the 608 caption spec are found. A full list of supported 608 characters can be found here: http://www.theneitherworld.com/mcpoodle/SCC_TOOLS/DOCS/CC_CHARS.HTML |
Maximum Event Duration | If an Event exceeds the allowable maximum duration it will return an error. Duration values are specified in seconds, so you may need to convert frames to seconds. For example, if the maximum duration is 250 frames at a frame rate of 25 frames/sec, the maximum allowable Event duration will need to be set to 10 seconds. |
Minimum Event Duration | If an Event falls below the allowable minimum duration it will return an error. Duration values are specified in seconds, so you may need to convert frames to seconds. For example, if the minimum duration is 12 frames at a frame rate of 25 frames/sec, the minimum allowable Event duration will need to be set to 0.48 seconds. |
Minimum Characters Per Second (CPS) | If the reading rate in CPS falls below the configured value it will return an error. CPS is calculated to include spaces and punctuation. |
Maximum Characters Per Second (CPS) | If the reading rate in CPS exceeds the configured value it will return an error. CPS is calculated to include spaces and punctuation. |
Minimum Words Per Minute (WPM) | If the reading rate in WPM falls below the configured value it will return an error. |
Maximum Words Per Minute (WPM) | If the reading rate in WPM exceeds the configured value it will return an error. |
Minimum Gap Between Events | If an Event Start or End time is too close to another Event in the same Event Group, it will return an error. Note: Inserting Blank Frames may help in automatically correcting for these issues. |
Maximum Gap Between Events | If the duration gap between two Events exceeds the maximum allowable duration, an error will be returned for both Events. |
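To make the duration and reading-rate thresholds in the table above concrete, here is a minimal sketch (illustrative only; the application's exact counting rules may differ slightly) showing the frames-to-seconds conversion and the standard CPS and WPM calculations:

```python
def frames_to_seconds(frames: int, frame_rate: float) -> float:
    return frames / frame_rate

def chars_per_second(text: str, duration_s: float) -> float:
    # CPS counts every character, including spaces and punctuation
    return len(text) / duration_s

def words_per_minute(text: str, duration_s: float) -> float:
    return len(text.split()) / duration_s * 60

# Duration thresholds from the table above, converted to seconds at 25 fps
print(frames_to_seconds(250, 25))   # 10.0  -> maximum Event duration
print(frames_to_seconds(12, 25))    # 0.48  -> minimum Event duration

# Reading rate of a 2-second Event
print(chars_per_second("Thanks for having me.", 2.0))   # 10.5 CPS
print(words_per_minute("Thanks for having me.", 2.0))   # 120.0 WPM
```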
Once a Style Guide has been created and configured, it will be available in the QC & Review panel of the QuickTools Drawer.
Select the pre-configured Style Guide from the list of Style Guides. Click the Run Review button to begin the Automatic QC process.
A list of failed Events will display in the list to the left. A total number of failed Events will display as a badge next to the list label. Selecting an Event from the list of failed Events will automatically navigate to that Event in the Event List. The Errors list will update to show all Errors for that Event.
Note: Errors may extend past the allowable textarea. Hover over errors to view the complete text.
It is recommended that each error be corrected and then the Review process repeated to ensure a new error was not introduced. For example, extending an Event to fix the minimum duration may cause an overlap error to occur. Reviewing subtitles is not always easy, or fast.
When exporting subtitles, there is a final automated checks process that runs prior to export. The automated checks process is a very basic suite of tests to verify the validity of Events. Events are tested for timecode overlaps, missing timecode, and missing text.
Errors can be ignored, or the export can be aborted. When aborting the export, the Subtitle Exporter will close and the failed Event will become visible in the Event List.
Note: Automated Checks cannot be customised or disabled.
#shortcut #keys #spacebar #enter #jkl
Closed Caption Creator supports custom keyboard shortcuts to help optimise subtitle workflows.
Watch: Creator 8 - Keyboard Shortcuts For Subtitles and Closed Captioning
To configure keyboard shortcuts go to Edit -> Shortcut Keys.
Select a shortcut function from the list of shortcuts on the left. The Key Command input will automatically focus.
Enter the key command you wish to use.
Note: If shortcuts share the same key command, the first shortcut function will be overwritten.
Certain shortcuts have been created with the intention of being used with foot pedal controllers:
Closed Caption Creator supports any foot pedal that can be configured to use hot keys. A list of pedals that have been tested can be found below.
In the above screenshot we show the Philips Speech Control utility configured to work with the web version of Closed Caption Creator.
Please contact support if you require assistance configuring Closed Caption Creator to work with your foot pedal.
Note: Ensure the Pressed and Released settings are enabled.
The default behaviour of certain shortcut commands can be customised using the Editor Settings panel found in the Options menu.
Go to Edit -> Options and select Editor from the navigation menu on the left.
When inserting Events using the Insert Event shortcut, the new Event Start time will be set to the current time of the player. This can be helpful for manual transcription when the Events require only rough timing.
When inserting Events using the Insert Event shortcut, the previously selected Event End time will be set to the current time of the player. This is similar to auto-chaining, in that the previous Event will end exactly where the new Event begins.
When updating the Start time of an Event using the Update Start Time shortcut, the Editor will automatically select the next Event in the Event List. This option is helpful for caption spotting using shortcut keys.
Updating the Start time of an Event using the Insert Start Time shortcut will also update the End time of the previous Event (chaining the Events together in time).
The Editor will automatically insert line breaks based on the Maximum Character value in the Real Time Error Detection settings of the Event Group. As a user inserts new text, line breaks will be inserted automatically.
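As a rough illustration of this behaviour (a sketch only, using Python's standard textwrap module rather than the Editor's own logic), wrapping Event text at a maximum character count looks like this:

```python
import textwrap

# Wrap Event text so that no line exceeds 32 characters, similar in spirit
# to the automatic line-break behaviour described above.
text = "Our first guest joins us all the way from Toronto tonight."
print("\n".join(textwrap.wrap(text, width=32)))
```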
When performing caption spotting, the minimum frame gap value will be applied. Inserting Blank Frames can also be used to the same effect.
#save #load #save-as #share #project
In this section we will review the Project Management workflows supported by Closed Caption Creator. Project Management is the process of saving and archiving projects in Closed Caption Creator.
Watch: Creator 8 - Save and Load Project Files
Closed Caption Creator supports both online and offline project management. This means you are able to store and load project files from the cloud as well as from local storage. By default, Closed Caption Creator is configured to store projects in the cloud. This configuration can be updated by going to Edit -> Options and enabling Use Local Storage in the General options.
When Use Local Storage is enabled, saving files will default to the client’s local storage.
Closed Caption Creator includes unlimited cloud storage for project files and transcripts.
Note: No media is ever saved or uploaded to the cloud. It is the responsibility of the user to manage the storage of media (audio and video files).
To save a project to the cloud, go to File -> Save Project As…
The Storage Explorer will open showing all files and virtual folders currently registered.
To save a project click the Save Project button in the bottom right. This will prompt the user to confirm the project name.
Click Save to confirm you wish to save the project. Wait for the notification in the top right showing the project was saved successfully.
Note: If an error occurs, check your internet connection and try again.
Now that the project has been saved once, saving the project again will overwrite the previous project record.
To save the project again go to File -> Save. Again, wait for the Project Saved notification before continuing.
Hint: Configure keyboard shortcuts for saving projects by going to Edit -> Shortcut Keys.
Once a Project has been saved it will appear in the Storage Explorer whenever a user selects Open Project from the File menu.
To open a project the user can select the project record from the items list, and click the Open Project button in the bottom right.
Hint: Users can also double click the project record from the items list to open a project.
Recently updated projects will also appear in the Recent Projects window by going to File -> Recent Projects.
The Recent Projects window can only be used to open projects. Projects cannot be deleted, moved, or Starred from this window.
Once you are familiar with how to save and load projects using the Storage Explorer and Recent Projects window, it may be helpful to commit to a management structure for your projects.
Projects should be managed in virtual folders. Users can create unlimited folders to help in this process.
Here are a few examples based on the feedback of real customers.
Hint: Team structure works great when all users have the same login access and need to be able to assign work to each other.
#save #load #backup #crash #reload
Closed Caption Creator includes an automated backup solution that is configured to run every five (5) minutes. Instead of backing up entire projects, only the Event Group information is ever saved. This means that if you restore from a backup, a new project will need to be created to store the Event Group information.
In order to import a backup go to Help -> Restore Event Groups.
Click the Reload Records button near the top to load all snapshot records.
Each snapshot record will show the Project Name, the datetime when the snapshot was created, the number of Event Groups in the snapshot, and the total number of Events.
To restore a snapshot, select the snapshot you wish to restore and click the Restore button.
New Event Groups will appear at the end of the Event Group Navigation (located above the Event List).
If you require additional support when restoring a project please feel free to contact the support team.
Closed Caption Creator supports the ability to automatically format Event Text to meet new caption guidelines. For example, when conforming 16x9 captions (that allow for 42 characters per line) down to 4x3 captions (which are limited to 32 characters), it may be helpful to first use the Automatic Format tools.
Open the Auto Format tool by going to Edit -> Auto Format Custom…
Note: The Edit menu contains two Auto Format tools. Please ignore the first Auto Format option. This has been deprecated and will be removed in a future version.
Set the maximum number of lines allowed per Event in the range of 1 to 5 lines.
Set the maximum number of characters allowed per line in the range of 1 to 50 characters.
To apply the Auto Formatting to all Events in the selected Event Group, check the Apply To All box. If this box is left unchecked, only the currently selected Events will be formatted.
Click the Auto Format button when ready.
In certain circumstances, the Auto Format process will require an Event to be split into two or more Events. Event timing will be impacted by this change. The Auto Format process will attempt to retime Events based on the average number of words per Event. It is recommended that users review all Events following the Auto Format process to ensure proper timing.
#gaps #overlapping #cover #timing #sync
When editing Events or merging Event Groups containing work from multiple users, Event Overlap may occur. Event Overlap is when the Start or End time of one Event overlaps with another Event.
This is easy to identify in the timeline or when using Real Time Error Detection in the Event List.
Timeline Events overlapping
Real Time Error Detection showing Overlap errors for both Events.
To correct for any overlap errors, go to Format -> Fix Event Overlap.
All Event timings will correct themselves automatically. Event End times will update to match the Start time of the following Event if an overlap is detected.
#misorder #list
In some scenarios, Events may be misordered in the Event List. To order all Events in the Event List by Start time, go to Format -> Order Events By Time. This will reorder all Events in the selected Event Group.
#adjust #timing #move #shift #align #insert #black #slate
Closed Caption Creator is able to offset the timecode of one or more Events in an Event Group using two available options: the TC Offset tool and the Timecode Offset shortcut.
Note: This section provides information on how to offset Events so that they align with the project media. If you are attempting to offset the start time of captions so that they align with the first frame of video (e.g. 01:00:00:00) outside of Closed Caption Creator, you will want to review the Subtitle Export Advanced Options.
Before opening the TC Offset tool, we recommend selecting the first Event from which you wish to offset, and cue the video to where you wish the Event to start.
To open the TC Offset Tool go to Timecode -> TC Offset.
You will notice that the offset type, offset amount, and the Event range will auto populate based on the Event, and current video timecode. By default the Event range will start at the selected Event and end at the last Event unless two or more Events are selected at once.
Click the Apply Offset button in order to apply the offset based on the chosen settings.
In some scenarios when only a few Events must be offset, the Timecode Offset shortcut may be a better option for users. The Timecode Offset shortcut must first be configured via the Shortcut Keys window (Edit -> Shortcut Keys).
Once configured, users can select one or more Events, cue to the correct spot in the video, and use the shortcut to offset all selected Events.
The Timeline interface is the easiest way to visualise the change.
#drift #sync #dropframe #pulldown #align
If subtitles appear in sync at the start of a video, and drift out of sync throughout the duration of the video, this is considered caption drift (or subtitle drift). This can be caused by mismatched frame rates or drop frame settings.
Caption drift can be corrected using the Stretch and Shrink tool available by going to Timecode -> Stretch/Shrink.
The simplest method is to set the new End time of the last Event. Closed Caption Creator will automatically calculate the correct timecode multiplier to use to adjust timing.
Increasing timecode by 1.001/1.000 (a 1.001 multiplier) can be used to convert from 23.976 fps to 24 fps, 29.97 fps to 30 fps, or 59.94 fps to 60 fps.
Decreasing timecode by 1.000/1.001 (a 0.999 multiplier) can be used to convert from 24 fps to 23.976 fps, 30 fps to 29.97 fps, or 60 fps to 59.94 fps.
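As a minimal worked example (illustrative values only, not the tool's internal implementation), the multiplier implied by setting a new End time for the last Event is simply the ratio of the new time to the old time, and every Event is then scaled by that factor:

```python
# Where the last Event currently ends vs. where it should end to stay in sync
# with the video (both expressed in seconds; values are illustrative).
old_end_seconds = 3600.0                     # 01:00:00:00
new_end_seconds = 3600.0 * 1000 / 1001       # roughly 00:59:56:10 at 24 fps

multiplier = new_end_seconds / old_end_seconds
print(round(multiplier, 6))                  # 0.999001 -> the "decrease timecode" case

# Every Event time is scaled by the same factor, so an Event at 00:30:00:00
# (1800 seconds) moves to roughly 1798.2 seconds (about 00:29:58:05 at 24 fps).
print(round(1800 * multiplier, 3))           # 1798.202
```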
Note: If captions appear in sync with the video when playing back in Closed Caption Creator, but are out of sync in a 3rd party application or player, you may need to apply the same multiply option on export using the TC Multiplier setting.
#remove #delete #reset
In order to clear timecode information from one or more Events, simply select the Events you wish to clear, and go to Timecode -> Clear Timecodes. All Start and End times will be set to 00:00:00:00.
When translating subtitles (audiovisual translation) from one language to another, it may be helpful to copy the timecode data from an external file or source. The external file must be imported first into its own Event Group using the Subtitle Import tool.
The Copy Timecodes From Group tool requires at least two Event Groups. Open the tool by going to Timecode -> Copy TC From Group.
Select the source group (timecodes you wish to copy) and the target group (where the timecodes should be applied). Click the Copy button when ready.
Note: Both Event Groups should have the same number of Events.
Note: We also recommend translating Event Groups using the Translation Event Group type which will automatically sync timecode data between groups.
If the Project Frame Rate was set incorrectly when first creating the project, it can be updated by going to Timecode -> Project Frame Rate. This can also help when the timecodes of Events do not match the burnt-in timecode of the proxy media.
#font #bold #italics #underline #color #mono #placement #position
Closed Caption Creator supports a number of different style and formatting options. It is important to note, however, that not all subtitle file formats support all styling and format information. For example, SubRip (.srt) files offer no support for position or styling information (fonts, size, colour, etc.).
It is also important to note that even if the desired subtitle format supports a feature, the 3rd party player (or streaming device) must also support the ability to represent that feature. For example, WebVTT files support a number of different formatting and style cues (including position), however, not every player that supports WebVTT playback will support all style cues.
It is worth testing each file export in the desired playback environment to ensure consistent and accurate playback.
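As a hypothetical illustration of the difference, the same Event could be written as follows in SRT (timing and text only) and in WebVTT (where optional cue settings such as line and align can carry position, provided the player honours them):

SRT:

```
1
00:00:01,000 --> 00:00:03,500
Thanks for having me.
```

WebVTT:

```
WEBVTT

00:00:01.000 --> 00:00:03.500 line:85% align:center
Thanks for having me.
```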
Closed Caption Creator supports the ability to apply basic formatting at the character level.
Users can apply formatting options by first highlighting the character range, and clicking the corresponding format tag (bold, underline, and italics).
Note: Shortcut keys can also be used to apply formatting information.
Subtitle Event Position can be customised using the quadrant selection and x/y offset controls.
Note: Event Position is calculated using the fixed origin along with the x/y offset value specified (in pixels).
To update the position of an Event, select the Event from the Event List, and use the Position control from the Event Control ribbon to update the origin of the Event.
The Event origin can be set to one of nine quadrants. Once an origin has been selected, the x/y offset value can be set for further customization. X/Y offsets can be specified using the Position controls (in the far right column of the Event).
To set the x/y offset of multiple Events at once, select the desired Events from the Event List, and go to Format -> Offset Position. The Position Offset tool will allow you to change the x and/or y offset of all selected Events. The preview to the right of the window is a rough estimate of the new Event position.
Note: If you find you’re offsetting the X/Y position of all Events, you may wish to set a default display padding by going to Edit -> Options. The display padding is applied to the X/Y offsets on export and import of subtitles.
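As a rough illustration of how a final pixel position could be derived from a quadrant origin, an x/y offset, and an optional display padding, see the sketch below. The canvas size, quadrant names, and sign conventions are assumptions made for the example; the application's own renderer may differ.

    // Sketch: resolve a pixel position from a quadrant origin + x/y offset.
    // Assumes a 1920x1080 canvas and a simple 3x3 quadrant grid.
    type Quadrant =
      | 'top-left' | 'top-center' | 'top-right'
      | 'middle-left' | 'middle-center' | 'middle-right'
      | 'bottom-left' | 'bottom-center' | 'bottom-right';

    function resolvePosition(
      quadrant: Quadrant,
      offsetX: number,   // pixels, from the Event's Position controls
      offsetY: number,
      padding = 0        // optional display padding applied on import/export
    ): { x: number; y: number } {
      const width = 1920, height = 1080;
      const [v, h] = quadrant.split('-');
      const originX = h === 'left' ? 0 : h === 'center' ? width / 2 : width;
      const originY = v === 'top' ? 0 : v === 'middle' ? height / 2 : height;
      return { x: originX + offsetX + padding, y: originY + offsetY + padding };
    }

    // Example: bottom-center origin nudged up by 40 pixels.
    const position = resolvePosition('bottom-center', 0, -40);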
Users can customise the look of subtitles using the Styles panel located as part of the QuickTools Drawer.
Note: Changes made using the Styles panel affect all Events.
When authoring subtitle files, the Styles panel can be used to customise the subtitle preview. Note that whether these styles carry through to the exported file depends on the chosen export format.
When the purpose of your project is to export a video with burnt-in (open) captions, the Styles panel provides a realistic preview of what the user can expect from the output.
Users can choose between two display profiles: Solid Background, and Text Outline.
Subtitle Preview with Solid Background profile.
Subtitle Preview with Text Outline profile.
Users can adjust the font size, colour, text opacity, background colour, background opacity, letter spacing, text shadow, and padding.
Once styles have been configured, clicking the Set Default button will save the settings so that they load by default for each new project.
Users can also choose to import their own custom fonts. Fonts must be installed on the local client machine. To import a custom font, open the Fonts Manager by going to Edit -> Options and select Fonts from the menu on the left.
Insert the font name in the top text input and click the Insert Font button.
The Font Name can be found by opening the Fonts settings for your operating system.
Font Settings for Windows 10
New fonts will appear near the top of the Font selection menu found in the Styles panel.
When translating subtitles or transcripts, a different font may be required for the source (original) language. The original language font can be updated by opening the Event Group settings and selecting a new Font Family from the Original Font Family option.
Closed Caption Creator includes access to the best 3rd party AI platforms and machine learning tools to help in the captioning, and translation of media.
With Closed Caption Creator, users can transcribe, translate, and sync subtitles with video automatically.
We recognize that automation is not perfect, and results still require a person to review and fix any issues. AI tools are not a replacement for manual transcription, but in some cases they can save time which can be reinvested in other parts of the accessibility workflow (e.g. QC and review).
Watch: How to Automatically Add Subtitles to Your Video
Automatic transcription is the process of converting dialogue audio to text. Closed Caption Creator provides access to six different Service Providers for Automatic Transcription: Deepgram, Assembly AI, Google Speech-to-Text, Rev AI, Speechmatics, and Voicegain.
Each Service Provider supports a unique set of languages and transcript features. For example, Deepgram supports over 14 languages and offers punctuation and speaker diarization (identification). Voicegain, in contrast, supports four languages and returns results that include SDH elements (e.g. background music identification).
In order to run Automatic Transcription, the source media must be available locally or via a cloud URL. This means media imported from YouTube, Vimeo, or an HLS manifest is not supported.
To submit an Automatic Transcription job go to Ai Tools -> Automatic Transcription.
Select a Service Provider from the top menu, and choose the media source language. The media source language is the primary language of the imported media. If the media contains multiple languages, select the most prominent spoken language.
Click Submit Job to begin the transcription process. Once the transcription job is submitted, the Automatic Transcription window will close, and the AI Transcript Import Dashboard will open.
The AI Transcript Import Dashboard provides the current status of all Automatic Transcript jobs submitted to the system. Each job will show the current status, Service Provider, language, and whether a transcript file has been returned (File Present).
Once a transcription job completes, it will show 100% in the progress column, with a status of Passed.
Select a completed job by clicking on the job row.
The job options will appear at the bottom of the import window.
Users have the option to import the transcription results as a Transcription Event Group, or Subtitle Event Group. The choice really depends on the final deliverable that a user must provide to a client. When creating subtitles, it is recommended to import into a Subtitle Event Group. If your client requires a transcript file, then importing into a Transcription Event Group would be preferred.
Subtitle Event Group
Transcription Event Group
Import settings are inherited from the Event Group's Real Time Error Detection Settings. Updating the Error Detection Settings of an Event Group prior to importing an automatic transcription will change how new Events are generated.
For example, if a user only wanted single line Events, they would open the Event Group Settings of any Event Group and update the Max Lines per Event to 1.
This would cause the new Event Group to only contain single line Events with a maximum character count of 32.
Note: Error Detection Settings are also inherited by the new Event Group which means that Error settings should only need to be configured once.
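To illustrate how a Max Lines per Event of 1 combined with a maximum line length shapes the generated Events, here is a simplified sketch of packing transcript words into single-line Events. The function is hypothetical and ignores timing, punctuation rules, and the importer's other Error Detection settings.

    // Sketch: pack transcript words into single-line Events of at most maxChars characters.
    function packWordsIntoEvents(words: string[], maxChars = 32): string[] {
      const eventTexts: string[] = [];
      let line = '';
      for (const word of words) {
        const candidate = line ? line + ' ' + word : word;
        if (candidate.length <= maxChars) {
          line = candidate;
        } else {
          if (line) eventTexts.push(line);
          line = word;
        }
      }
      if (line) eventTexts.push(line);
      return eventTexts;
    }

    // Example: each returned string would become the text of one single-line Event.
    const lines = packWordsIntoEvents('the quick brown fox jumps over the lazy dog'.split(' '));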
Because transcripts are stored in the cloud, they are always available if required by the user. Transcripts can be deleted by opening the Ai Transcript Import Dashboard (Ai Tools -> Ai Transcription Import).
Select a completed job and click the button located in the bottom right corner.
The File Present column will display an x instead of a checkmark and the Import options will be disabled.
Note: Transcripts cannot be restored once deleted. A new transcription job will need to be submitted.
Watch: Automatically Translate Subtitles using DeepL
Note: Requires a CC Pro or CC Enterprise subscription
Closed Caption Creator supports automatic translation using DeepL, ModernMT, and Google Translate. Each Automatic Translation Service Provider supports a unique set of languages and level of sophistication. For example, ModernMT provides context-aware translation for higher accuracy.
To Translate an Event Group go to Ai Tools -> Automatic Translation.
Select the Service Provider, Source Event Group, and Language you wish to use. The new translation will appear in its own Translation Event Group. Users can specify a display name for the new Event Group, or the default name can be updated at a later time.
The new Event Group will be selected automatically showing the original language, along with the translation.
Watch: Automatically Sync Subtitles and Video (Forced Alignment)
Note: Requires a CC Pro or CC Enterprise subscription
Automatic Sync (or Forced Alignment) automatically applies timecode data to all Events within the selected Event Group. The feature works by using the timing information generated by the Automatic Transcription to sync subtitles with the video.
Open the Ai Transcript Import Dashboard by going to Ai Tools -> Ai Transcription Import.
Select the corresponding automatic transcription job from the list of records. Click the More Options menu and select Apply Automatic Sync.
The Automatic Sync process may take a few seconds to complete, depending on the number of Events within the selected Event Group.
Hint: Automatic Scene Change Detection (Shot Change Detection) is only available via the Desktop version of Closed Caption Creator.
Closed Caption Creator is able to automatically detect scene changes or shot changes from video stored locally on a client’s system.
Once media has been imported using either Local Storage, or the Proxy RT Media Source, go to Ai Tools -> Detect Scene Changes.
Scene changes will appear as red markers () on the timeline ruler.
Events can be synced automatically to Scene Changes by going to Format -> Sync to Scene Changes. Events within 0.5 seconds will automatically snap to Scene Change Markers.
Before Snap To Scene Changes
After Snap To Scene Changes
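The snapping rule can be pictured roughly as below: any Event boundary that falls within 0.5 seconds of a detected scene change is moved onto that scene change. Whether both the Start and End boundaries are adjusted, and how ties are broken, are assumptions of this sketch rather than a description of the actual implementation.

    // Sketch: snap Event boundaries within `threshold` seconds of a scene change.
    interface TimedEvent { start: number; end: number; } // seconds

    function snapToSceneChanges(
      events: TimedEvent[],
      sceneChanges: number[],   // scene change times in seconds
      threshold = 0.5
    ): TimedEvent[] {
      const nearest = (t: number) =>
        sceneChanges.reduce(
          (best, sc) => (Math.abs(sc - t) < Math.abs(best - t) ? sc : best),
          Number.POSITIVE_INFINITY
        );

      return events.map(ev => {
        const startSnap = nearest(ev.start);
        const endSnap = nearest(ev.end);
        return {
          start: Math.abs(startSnap - ev.start) <= threshold ? startSnap : ev.start,
          end: Math.abs(endSnap - ev.end) <= threshold ? endSnap : ev.end,
        };
      });
    }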
There is a limited number of free resources provided to users each month. To check your current usage of Automatic Transcription minutes, or Automatic Translation Characters go to Help -> Reporting & Usage.
The Usage Dashboard provides a rough estimate of the monthly cost.
Note: There is a known bug where the dashboard does not calculate the estimated cost using the number of free minutes provided within a month. For an exact total, please contact support.
#queuing #time #sync #align #stamping
Watch: Closed Captioning For Beginners - How To Sync Captions & Video (Spotting)
Caption Spotting (or queuing) is the process of syncing subtitles with your media. There are a number of different options available in Closed Caption Creator for performing caption spotting.
Automatic Sync is discussed in the AI Tools section above; this approach assumes that an AI Transcript has already been generated.
The first method we will explore is the Long Press option found in the Timing & Sync panel in the QuickTools Drawer.
Long Press timing can be triggered using the default shortcut key (Up Arrow) or by pressing the Show Event button with the mouse cursor. Activating the button will update the Start time of the selected Event. Deactivating the button will update the End time of the selected Event.
Start by selecting the first Event from the Event Group List.
Go to the start of the video by clicking the Go To Start control ().
Activate the Timing & Sync panel using the Enable toggle in the top left.
Start playback ().
When the Event should appear on screen, click and hold the Show Event button. When the Event should be cleared from the screen, release the button.
Hint: The Up Arrow key can also be used to Show and Clear Events.
The next Event in the Event List will be automatically selected.
Note: The Timing & Sync panel will deactivate if the Media Player is paused or stopped.
The second method we will explore is the Dual Key option found in the Timing & Sync panel in the QuickTools Drawer.
Dual Key timing can be triggered using the default shortcut keys (Up and Down Arrow) or by pressing the Show Event and Hide Event buttons with the mouse cursor. Showing an Event will update the Start time of the Event, along with the End time of the previous Event. The Hide Event button is only used when a gap between Events is required.
Start by selecting the first Event from the Event Group List.
Go to the start of the video by clicking the Go To Start control ().
Activate the Timing & Sync panel using the Enable toggle in the top left.
Start playback ().
When the Event should appear on screen, click the Show Event button. When the next Event should appear on screen, click the Show Event button once again. This will cause the first Event to be cleared and the new Event to be shown.
Hint: The Up/Down Arrow keys can also be used to Show and Clear Events.
Note: The Timing & Sync panel will deactivate if the Media Player is paused or stopped.
Hint: If you require a certain number of frames to be inserted between Events (Frame Gap), please use the Insert Blank Frames options following the Caption Spotting process.
The last method we will explore for Caption Spotting requires custom configuration to be applied first. This method (In/Out Spotting) uses the Mark In and Mark Out shortcut keys to sync captions and video. Users who are familiar with MacCaption or Caption Maker may find this method the most comfortable.
Start by going to Edit -> Options and selecting the Editor option from the left navigation.
Enable the following settings (shown in the screenshot above):
Hint: Specify a minimum Frame Gap if required. This can also be added at a later time using the Insert Blank Frames options.
Click the Save Changes button and close the Options window.
Open the Keyboard Shortcuts menu by going to Edit -> Shortcut Keys. Set shortcuts for Mark Start (e.g. F5), and Insert Event Above (e.g. F6).
The Mark Start shortcut will be responsible for Showing an Event, while the Insert Event Above will be used to Clear Events.
Close the Shortcut Keys menu.
Select the First Event from the Event List, and start playback.
Insert the Start time of an Event using the Mark Start shortcut. The next Event in the Event List will automatically be selected. Marking the Start of the next Event will update the End time of the previous Event.
Hint: It may be easier to watch the Event List when Caption Spotting using this method.
Closed Caption Creator supports the option to create custom metadata and tags which can be exported along with subtitle data to help in translation and localization workflows. Closed Caption Creator can store metadata at the Project level (global) or the Event level. Project Metadata can help track Project status or project information such as the author or content owner. Event Metadata can include custom tags, speaker identifiers, and Forced Subtitle markers.
To create or edit Project Metadata, go to Edit -> Options and select Project Metadata from the navigation menu on the left.
Project Metadata can be exported as part of the Transcript Export workflow or Subtitle Export when using WebVTT as the target format.
Speaker identification has become a critical part of many localization workflows. It is important to be able to identify Speakers in Events, and even to insert the Speaker names as part of the subtitle text.
Closed Caption Creator provides two separate tools for accomplishing both tasks. The first is the Speaker panel which is part of the QuickTools Drawer, and the second is the Speaker Name Insert option available under the Insert menu.
Speaker tags are created automatically when importing Automatic Transcripts, or they can be created manually via the Speaker panel located in the QuickTools Drawer.
To create or edit an existing Speaker tag open the Speaker Management window by clicking the menu (...) button to the right of the Speaker panel.
In order to insert a new speaker, enter a speaker name, select a custom identifying colour, and click Add Speaker.
New speakers will display at the bottom of the Speaker table.
To edit or update a speaker, click the blue edit icon () to the right of the speaker row. Update the speaker name, or identifying colour, and click Update.
Any changes made to Speaker names will automatically show in the Event List.
In order to update the Speaker Id for an Event simply select the Event, and click the Speaker Tag in the QuickTools Drawer.
The Speaker Id in the metadata column of the Event should update to reflect the change.
If you require speaker names to be inserted as part of the Event Text, you can easily do this for one or more Events, once the Events have been properly tagged.
Select the Events you wish to update and go to Insert -> Speaker Name…
Select *Assigned Speaker* from the Speaker Name dropdown list. This will force Closed Caption Creator to use the Speaker tag value as the name to be inserted.
There are Prefix and Suffix character options to allow for customising the Speaker Id text that is inserted. Prefix and Suffix characters are optional.
The Uppercase All option will uppercase the speaker name as well as the prefix, and suffix characters.
Click Insert Speaker when ready.
Note: Click Manage Speakers to open the Speaker Management window.
Speaker names will be inserted on the top line of each selected Event.
Tags are a form of custom metadata that can be assigned to one or more Events. Tags are managed similarly to Speakers, using the Tags panel located in the QuickTools Drawer.
To create or edit an existing Tag, click the menu icon (...) to the right of the Tags panel.
Tags contain a Term, Type, and Definition.
The Tag Term can be anything such as a key name or phrase. Tags can be used to mark Events that require review, or to identify an object in the video matching a specific timecode or frame.
The Tag Type can be one of five options: Character, Phrase, Organisation, Location, or Other. Each Tag Type is colour coded. Location Tags are grey, Character Tags are blue, Organisation Tags are orange, Phrase Tags are green, and Other Tags are black.
The Tag Description is only visible in the Tags Manager, but can help other team members when deciding whether a Tag is appropriate for a specific use case.
To Tag one or more Events, simply select the Events, and click the Tag badge located in the Tags panel of the QuickTools Drawer.
Events can have multiple Tags assigned at one time.
To remove a tag, simply click the x icon to the left of the Tag badge located on the Event.
Tags can be imported and exported via the Tags Manager using the button controls near the bottom of the window.
This allows for Tags to be shared with other team members, or used across projects for managing metadata and review workflows.
#narrative #netflix
Forced Subtitles are subtitles that are displayed regardless of whether captions are enabled. They are used by streaming platforms such as Netflix, Disney+, and Apple TV+. In most workflows, they are delivered as a separate caption file.
An example use case for Forced Subtitles is when a character is speaking a secondary language. The Forced Subtitles display the translation into the primary language. For example, in Star Trek, Forced Subtitles are displayed whenever the Klingons are speaking so that the audience is able to understand what is being said. These subtitles are shown whether or not captions are enabled by the user.
Note: For more information on Forced Subtitles please see this article from Netflix.
Events can be marked as Forced Subtitles (FS) using the FS icon found below the Event List.
To mark an Event, select the Event, and click the FS icon. The FS badge will appear next to the Event Id.
Once Events have been marked as Forced Subtitles (or not) they can be filtered upon export using the Subtitle Export Advanced Options.
Users have the option to include FS Events, Exclude FS Events, or exclude non-FS Events (FS Only).
If delivering to Netflix, users are required to deliver two subtitle files: one full subtitle file that includes FS Events, and a second file that contains only FS Events.
From Netflix:
On our service, Forced Narrative subtitles are only displayed if full Subtitles and CC are set to "off" in the user's playback settings. When the user activates a full Subtitle or CC file, the FN subtitle does not display and for this reason, we require that all Forced Narrative Events are also included in each full Subtitle and SDH/CC file.
The Forced Subtitle option found in the Subtitle Export window easily allows for supporting this workflow.
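Conceptually, the export option simply filters Events by their Forced Subtitle flag, along the lines of the sketch below (the type and mode names are assumptions for illustration):

    // Sketch: filter Events by their Forced Subtitle (FS) flag on export.
    interface SubtitleEvent { text: string; forced: boolean; }
    type FsMode = 'include-fs' | 'exclude-fs' | 'fs-only';

    function filterForExport(events: SubtitleEvent[], mode: FsMode): SubtitleEvent[] {
      switch (mode) {
        case 'include-fs': return events;                           // full file, FS included
        case 'exclude-fs': return events.filter(ev => !ev.forced);  // full file, FS removed
        case 'fs-only':    return events.filter(ev => ev.forced);   // forced-only file
      }
    }

    // A Netflix-style delivery would export once with 'include-fs'
    // and once with 'fs-only'.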
The Search & Replace panel is available in the QuickTools Drawer.
Enter a Search term and click the search button to begin searching the currently selected Event Group.
The number of results will show next to the bottom controls along with arrows to navigate between them.
Use the arrow buttons to quickly jump between results. The selected Event will automatically scroll into view and the search term will be highlighted.
There are additional Search Options available to help improve search results.
Case Sensitive - Makes the search case sensitive and removes results that do not match the term exactly. The default search is case insensitive.
Match Whole Word - Matches the entire word based on word boundaries. For example, searching for the word "to" with Match Whole Word enabled will return results for the word "to" but not "too" or "tomorrow".
Replace All - When Replace All is enabled, the Replace button will affect all search results.
Regular Expressions - Enabling Regular Expressions allows for the use of regular expressions when searching. Tools such as Regex101 can be helpful in testing Regex searches.
Note: Only ECMASCRIPT (JAVASCRIPT) regex expressions are supported.
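A few example patterns (ECMAScript syntax) that could be entered with the Regular Expressions option enabled are shown below; these are illustrations only and are not built into the application.

    // Example ECMAScript (JavaScript) regular expressions for subtitle clean-up:
    const doubleWord    = /\b(\w+)\s+\1\b/;              // a word typed twice, e.g. "the the"
    const timecodeLike  = /\d{2}:\d{2}:\d{2}[:;]\d{2}/;  // text that looks like a timecode
    const trailingSpace = / +$/;                         // trailing spaces at the end of a line

    // In the Search field you would enter just the pattern body,
    // e.g. \b(\w+)\s+\1\b  with the Regular Expressions option enabled.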
Event Templates are copies of common Events that are created to be reused. Event Templates can be accessed via the Event Template panel found in the QuickTools Drawer.
To create a new Event Template, simply select an Event in the Event List, and right click. Select Add To Templates to create a new template. The Event text, style, and position will all be saved.
In order to insert a new Event using a template, simply click the desired template from the Event Template panel in the QuickTools Drawer. The new Event will be created beneath the selected Event.
Hint: We recommend using Event Templates for common SDH elements in order to maintain consistency when captioning multiple episodes from the same season or series.
Music notes can be inserted using the Insert menu or keyboard shortcut.
To insert music notes using the Insert menu, select one or more Events, and go to Insert -> Music Notes.
Select from the list of music note options (single, double, surround single, or surround double).
Hint: If none of the music note options fit your use case, please consider using the shortcut key options, or Event Templates.
All pre-selected Events will be updated based on the selected music note option.
Keyboard shortcuts have recently been added to allow for music notes to be inserted without having to use the Insert menu.
For more information on configuring keyboard shortcuts, please view the Shortcut Keys section above.
SDH (subtitles for the deaf and hard of hearing) elements are text elements that describe environmental sounds (e.g. [Dog Barking]) and narrative descriptions (e.g. [Scary Background Music]).
Conforming SDH subtitles to standard subtitles may involve removing all SDH elements. This can be done easily using the Remove SDH Elements tool available by going to Edit -> Remove SDH.
Select the required options and click the Remove SDH button to apply changes to the selected Event Group.
Note: If you only wish to remove SDH elements from a pre-selected group of Events, simply de-select the Apply to all option.
To improve caption readability for the end-user, it often helps to insert gaps between Subtitle Events. This can be easily achieved using the Insert Blank Frames option available in the Insert menu.
In the above screenshot we have an Event Group that has been synced with the video.
If we zoom in we can see that there is no gap between Events; the Events are chained one after another. With no gap, the end-user may have trouble noticing when one Event ends and the next begins, especially when consecutive Events contain a similar number of characters and lines, because the Events all start to look the same.
We can fix this problem by inserting a few frames between each Event.
Note: Netflix and iTunes both require a certain number of frames to be included between Events.
To insert frames go to Insert -> Blank Frames…
Select a number of frames you wish to insert and click Apply.
By looking at the timeline we can see that the new gap between Events is at least 6 frames.
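The operation can be pictured roughly as follows: wherever two Events butt up against one another, the earlier Event's End time is pulled back so that at least the requested number of blank frames sits between them. Whether the tool adjusts the earlier or the later Event is an assumption of this sketch.

    // Sketch: enforce at least `gapFrames` blank frames between consecutive Events.
    interface FrameEvent { startFrame: number; endFrame: number; }

    function insertBlankFrames(events: FrameEvent[], gapFrames = 2): FrameEvent[] {
      const sorted = [...events].sort((a, b) => a.startFrame - b.startFrame);
      for (let i = 0; i < sorted.length - 1; i++) {
        const current = sorted[i];
        const next = sorted[i + 1];
        if (next.startFrame - current.endFrame < gapFrames) {
          current.endFrame = Math.max(current.startFrame + 1, next.startFrame - gapFrames);
        }
      }
      return sorted;
    }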
Watch: Creator 8 - Embed and Extract 608/708 Closed Captioning from Video
Closed Caption Creator supports the ability to embed 608 and 708 caption tracks into files for broadcast using the optional Embed and Extraction plugin.
Note: The Embed and Extraction plugin is an additional one-time charge that requires a subscription to Closed Caption Creator Pro or Enterprise.
It is recommended that the support team manage the initial configuration and setup of the Embed & Extraction plugin.
Open the CLI Tools options by going to Edit -> Options and selecting CLI Tools from the navigation menu on the left.
Select Drastic Technologies from the Developer menu, and set the Enable option to True.
Select the install paths for CC Embed and CC Extract using the file inputs.
Click Save Settings.
Note: A new CLI Tools menu item should be visible next to Ai Tools.
To embed captions into an external video file you must import or create the captions in Closed Caption Creator.
To import a caption file you can follow the caption import workflow found in the section above.
Subtitles should exist in their own Event Group, and align with the proxy media.
Open the Drastic panel by going to CLI Tools -> Drastic Technologies.
Select Embed Captions from the Mode select dropdown.
Select a target folder where the final video export will be written.
Note: The embed process is non-destructive and creates a new copy of the video file instead of overwriting the existing file.
Select the Event Group containing the subtitle Events you want embedded in the final output video.
Select a Profile from the profile dropdown. In most scenarios the profile will be set to Wrap. Rewrapping is much faster than transcoding since it doesn’t require decoding or encoding the video or audio streams.
Select one or more source media files. Closed Caption Creator will embed the same caption track into all selected media files.
Specify a TC Offset if required. For example, if the source media file has a start of message (SOM) of 01:00:00:00, then you would set the TC Offset to 01:00:00:00 (see the short sketch after these steps).
Click Start Process to begin the Embed process.
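As mentioned above, the TC Offset shifts every caption time by the media's start of message so the embedded captions line up with the file's internal timecode. The sketch below treats times as plain seconds purely for illustration; the function name and values are hypothetical.

    // Sketch: shift caption times by the media's start of message (SOM).
    function applyTcOffset(eventTimesSec: number[], offsetSec: number): number[] {
      return eventTimesSec.map(t => t + offsetSec);
    }

    // Example: captions authored from 00:00:00:00, media SOM of 01:00:00:00.
    const somOffset = 1 * 60 * 60; // 01:00:00:00 expressed in seconds
    const shifted = applyTcOffset([12.5, 15.0, 18.2], somOffset);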
To extract captions from a video file go to CLI Tools -> Drastic Technologies.
Change the Mode dropdown to Extract Captions. Select a Target Folder where temporary files will be stored.
Select one or more source files containing embedded captions.
Click Start Process to begin the extraction process.
Extracted captions will be imported into their own Subtitle Event Group.
#team #share #sharing #collaborate #external #review
Watch: How To Share Subtitle Projects With Your Team
Closed Caption Creator allows multiple users to work on the same project at one time using Team Projects.
Team Projects can be shared with other users, and updates can be synchronised between users using a workflow similar to other production tools (e.g. Avid). Changes are manually synced to generate a new Head File which is then imported into the user’s current project.
Note: In this section we will explain how the Team Project workflow is designed to work. However, it is not important to understand this section in order to use the feature. It is included for those who may wish to learn more.
A team project is created by what we call the Project Owner. Once a Team Project is created, a Project Commit is automatically made by the Project Owner. The Project Commit contains basic Project information such as the Project Name, Id, and frame rate.
Note: If the Project contains media referenced by Cloud URL, Vimeo link, or YouTube Link, then that information is also shared in the Commit.
The Commit is sent to a Server located in the cloud. The Server will save the commit, and return a new Head File back to the Owner.
The Head File contains a list of Commits, and updated Project information. Because this is the first Commit, the Head File returned to the owner will be identical to the original Commit that was sent. No changes will be applied to the Owner’s Project.
As the Project Owner continues to make changes to their project (e.g. adding new Events or Event Groups) they would need to continue to sync their changes with the Server version by generating a new Commit and sending it to the server.
Note: This is all handled in the Sync process, and is seamless to the user.
Up until this point, the workflow has been fairly simple because it only includes the Project Owner. If the Project Owner decides to share their project with other team members, those team members would become collaborators on the project.
Project Collaborators will follow the same workflow as the Project Owner, however this time the server will need to manage versions based on the initial version ID from the user.
For example, a user may be a few Commits behind if other users have been syncing their changes. That user will need to perform a sync, and the server will need to apply all changes from previous syncs in order to generate a new Head File which can be returned to that user.
Note: If the user is importing the Team project, then they will be sent the latest Head File, and no additional Commit Files are created.
User syncs their changes by submitting a new Commit File
The Server compares the New Commit against the existing Commits and a new Head File is generated by the Server.
The Server returns the new Head File to the user. Closed Caption Creator will then import/load the new Head File automatically.
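For readers who want a concrete picture of the data involved, the Commit and Head File can be imagined roughly as below. The field names are assumptions made for illustration; the real file format is internal to Closed Caption Creator.

    // Rough sketch of Team Project sync data (field names are illustrative only).
    interface Commit {
      commitId: string;
      parentCommitId: string | null;  // the Head the user was on when committing
      author: string;
      projectInfo: { name: string; frameRate: number };
      changes: unknown[];             // Event / Event Group changes since the parent
    }

    interface HeadFile {
      projectId: string;
      latestCommitId: string;
      commits: Commit[];              // the history the Server has accepted so far
      projectInfo: { name: string; frameRate: number };
    }

    // Conceptually: the client sends a new Commit built against its current Head;
    // the Server applies any Commits the client was missing, appends the new one,
    // and returns an updated HeadFile, which Closed Caption Creator loads automatically.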
The final workflow will look something like this:
Once a Project is completed, the Project Owner or Collaborator may export their work for delivery.
When a Project is created using the New Project window, the user has the option to create a Default or Team Project.
Select the Team option from the Project Type dropdown. Once the project is created, the system will automatically create the first Commit file and deliver it to the Server via the Project Sync window.
If the project has been created successfully a new Teams menu item will appear in the top toolbar.
To share a Team Project simply go to Teams -> Share. The Share Project window will open.
Click the button in order to copy the Project Code which can then be shared with other team members.
To open a team project go to File -> Import and select Team Project from the list of options.
Paste in the Team Project code that was received from the Project Owner.
Click Import Project to open the Team Project.
Note: Users may have to re-import the media file if it is stored elsewhere on the system.
Once a user or team member has made changes to a Team Project, they will need to sync their changes with the Server. Go to Teams -> Sync.
The Project Sync window will automatically sync the changes, and apply any updates from other users.
Note: Project Sync currently supports updates to the Project Name, Frame Rate, Event Groups, and Events. If any other changes are made (e.g. Font or Font Size Changes) these will need to be made manually by each user.
In this section we will walk through an example workflow recommended by one of our users who uses Team Projects for fast turnaround of a show. Fast turnaround in this example means a 30 minute show being delivered in under one hour. Normally, this would be impossible for a single captioner, but it is quite feasible for multiple captioners working together.
#DV #AD #described #video #synthetic #voice #virtual #synthesis
Watch: Creator 8 | Audio Description Plugin (v2)
Closed Caption Creator recently launched its new Audio Description Plugin. The AD plugin is already being used in production, with real results being delivered for broadcast. The AD plugin is still under active development, with new features being added in each monthly release.
The AD Plugin is an optional add-on that installs alongside the desktop version of Closed Caption Creator. To have the AD Plugin added to your subscription, please contact support.
We also highly recommend you understand the core concepts of Closed Caption Creator before continuing in this section. You may wish to walk through the Getting Started section as well to better understand where certain settings are located.
Once a new Project has been created, a new Audio Description Event Group must be added. To add a new Event Group go to File -> New -> Event Group.
Set the Group Type to Audio Description, insert a Display Name, and click the Create Group button.
A new Event Group will be created containing a single Event.
The next step is to select the voices you wish to use in your project. This can be done via the Virtual Voice Manager available in the QuickTools Drawer.
Click the menu (...) icon located to the right of the Voice panel.
The Virtual Voice Manager allows users to pin their favourite voices so that they are easily available via the Voice Panel in the QuickTools Drawer. Users can also preview voices by entering their own custom preview text and clicking the Play button () next to any of the voices.
Once you have selected your favourite voices, you are ready to begin scripting.
Note: We recommend setting up keyboard shortcuts to make navigating the UI much easier. This will save you time by making it easy to insert new Events and skip to specific Events in the Event List.
Note: We recommend enabling Start TC On Insert under the Editor settings so that Events are added in the timeline as they are created.
Note: We recommend enabling Video Lock to aid in navigating your project using the Event List.
The standard workflow when authoring AD scripts is to play back the video and watch for parts that require description. Whenever a section of video requires a description, you would insert a new Event () and start entering a description into the Event Text. Once the description is complete, you would click the Render Audio control (). Once the audio has finished rendering, there may be a red Duration error beneath the Event Controls (). This error indicates that the duration of the virtual audio exceeds the duration of the Event set by the Event Timecode inputs. To fix the duration error, simply click the Auto-Trim button (). This will automatically extend the Event End time to match the duration of the Virtual Audio.
Note: If your Event duration is now too long, and overlaps with dialogue from the program audio, you may wish to edit your Event Text to be shorter, or increase the Event Rate/Speed using the Rate Slider ().
To preview an Event, you can now click the Preview button () in the Event Controls. This will automatically cue the video to the correct timecode and begin playback. The Media Player will automatically stop once the Event End is reached.
Repeat this process until the entire video has been described.
AD Templates can be used to identify spots in your media where AD may be required. This works by detecting missing dialogue in the program audio track.
The first step is to generate an Automatic Transcription using any of the supported Service Providers (Deepgram may be preferred).
Automatic Transcription jobs will show in the Ai Transcription Import Dashboard (Ai Tools -> Ai Transcription Import…).
Select a completed transcript job, click the More Options button, and select Create AD Template from the list of options.
The AD Template will be created as a new Event Group with a number of empty Events.
The Timeline Ruler will also update to show highlighted parts where program dialogue is missing, and descriptions could be added.
Note: The highlighted Ruler will be shown regardless of the selected Event Group. This means that you can remove all Events and still see the highlighted parts with missing dialogue.
Update the synthetic voice of all Events by first selecting all Events in the Event List (Right Click -> Select All) and then selecting the desired voice from the Voices Panel in the QuickTools Drawer.
When opening a Shared project, or a project that was previously archived, you may need to re-render all Events. Go to AI Tools -> Force Render Audio.
This can also be useful when updating the voices of multiple Events without having to click the Render Audio button () for each Event.
Instead of clicking the Auto Trim button () for each Event, you may wish to select all Events (Right Click -> Select All), and go to Timecode -> Trim To Duration.
Note: Select only the Events you want to apply the Auto Trim option.
Audio Description Export is covered in the File Export section above.
This section contains helpful tips regarding specific workflows that have been shared by other users. If you have an additional tip that you think would be helpful to other users, please feel free to submit it here.
When manually transcribing audiovisual content, it may help to try some of the following options:
If you are a professional audiovisual translator, you may wish to translate subtitles and other timed text projects from scratch (without the aid of the automatic translation tools).
You can start by importing an existing subtitle, or transcript file using the import workflows explained above. Once a file has been imported you will need to create a new Translation Event Group by going to File -> New -> Event Group.
The New Event Group window will appear.
Select Translation from the Group Type dropdown. Next, select the Linked Group found under the Translation Options. The Linked Group will be the Event Group containing the Original subtitles or transcript.
Click Create Group when you are ready.
A new Event Group will appear in the Event Group Navigation pre-populated with Events based on the Linked Event Group.
You may wish to specify separate fonts for the primary (original) and secondary language. In order to accomplish this, you can go to the Event Group Settings by clicking the menu icon () to the right of the Event Group tab.
Here you can update the Original Font Family using the Font menu dropdown.
Click the Update Group button when you are finished.
Closed Caption Creator can be used to deliver closed captioning and subtitles to almost any platform. However, each platform has its own requirements. Before delivering a file to a platform, it's important to review that platform's spec sheet.
A spec sheet provides an overview of what file formats and technical requirements a platform supports. An example spec sheet for iTunes can be found here.
iTunes provides a simple spec sheet that users can follow to determine what caption format they should deliver:
iTunes supports SCC formatted files with a frame rate of 29.97.
If we look at the table above we can see that the timecode format of the closed caption file is dependent on the source video frame rate. For example, if the source video frame rate is 24, then the closed caption file will be 29.97 NDF.
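To see why the export frame rate matters, the sketch below re-expresses a timecode at a different frame rate by converting through seconds. Drop-frame counting and any pull-down adjustments are deliberately ignored; this is an illustration only, not the exact conversion Closed Caption Creator or iTunes performs.

    // Sketch: convert a timecode between frame rates by going through seconds.
    function tcToSeconds(tc: string, fps: number): number {
      const [hh, mm, ss, ff] = tc.split(':').map(Number);
      return hh * 3600 + mm * 60 + ss + ff / fps;
    }

    function secondsToTc(totalSeconds: number, fps: number): string {
      const nominal = Math.round(fps);                   // e.g. 30 for 29.97 NDF counting
      const totalFrames = Math.round(totalSeconds * fps);
      const ff = totalFrames % nominal;
      const s = Math.floor(totalFrames / nominal);
      const pad = (n: number) => String(n).padStart(2, '0');
      return `${pad(Math.floor(s / 3600))}:${pad(Math.floor(s / 60) % 60)}:${pad(s % 60)}:${pad(ff)}`;
    }

    // Example: a cue at 00:01:00:12 in 24 fps material lands on a different
    // frame number when written out against 29.97.
    const scc = secondsToTc(tcToSeconds('00:01:00:12', 24), 29.97);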
Here are the steps to export a proper SCC file for iTunes:
The most common playback issues are caused by incompatible media. There are a few scenarios and solutions available.
If you can hear the program audio, but the video is black, then the video codec is not compatible with Closed Caption Creator.
Video playback may be choppy when first importing the file due to the Waveform generation happening in the background. If choppy playback continues, it may be caused by the video file size being too large.
If captions are in sync at the beginning of your video, but begin to drift as the video plays, then you may have a caption drift issue. This issue can be easily corrected using the Stretch and Shrink tool.
MediaInfo is the best option for viewing embedded metadata. MediaInfo is able to show information about a file's video codec, frame rate, audio channels, and caption tracks. On Windows, MediaInfo is a free download, and on Mac it is $1.00 on the Mac App Store.
Telestream's Switch is one of the best QC tools we've found for reviewing captions and playing back broadcast media files. It supports embedded 608 and 708 captions, and it can also play sidecar caption files (in almost any professional format).
If Switch is a little pricey for your workflow or you only work with SRT or VTT files, we highly recommend VLC media player to all of our clients. VLC is able to playback almost any media format, and can even load sidecar SRT and WebVTT files.
Premiere Pro 2022 has made huge strides in its closed captioning and subtitle tools. Users can now add closed captioning as data tracks (instead of video layers), and it even supports embedded 608 and 708 captions. Premiere Pro is a great tool for reviewing caption sync and playback issues.
Notepad++ is a great tool for opening text based subtitle files. If you work with SRT or VTT files, Notepad++ can sometimes be easier to use to fix simple errors (like spelling, or punctuation).
Closed Caption Converter is another product developed by Closed Caption Creator that supports the same closed caption and subtitle file formats. The tool allows for simple conversions, and complex conform jobs to be completed without having to actually open a caption file and make the changes yourself (reducing human error).
Closed Caption Converter is available via Web GUI, CLI, and Web Services API.
If you require any help or support please feel free to contact customer support using the details below. Enterprise and Pro subscribers are also encouraged to book free 1:1 team training with our experts.
support@closedcaptioncreator.com
training@closedcaptioncreator.com
Users can manage their account by visiting the self-serve portal located here: https://closedcaptioncreator.chargebeeportal.com/portal/v2/login?forward=portal_main