- April 19, 2021 at 8:04 pm
I am looking for the most seamless, efficient way to get translations into an SRT file (or similar) for import into FCP X. I see there is an FCPX SRT converter that will let me create titles rather than generic captions on import, which is very helpful, but I would like some advice on how to get the translation into an SRT file in the first place. Can anyone suggest a good workflow that lets the translator (from an indigenous language to English) timestamp a Word doc (or similar), which I can then import into FCP X? I can’t use Simon Says etc., as these African languages aren’t catered for in existing translation software.
- April 20, 2021 at 2:00 pm
Try Rev.com. You can send the video and script to them, they’ll create the .srt file.
- April 20, 2021 at 6:41 pm
Unfortunately, I can’t do that as the video won’t be in a language they can translate.
- April 20, 2021 at 9:45 pm
Several possible ways to tackle this.
The first is to have the translator create a time-stamped Word doc and then manually copy and paste from the doc into FCP’s built-in captions. So you would be creating original captions – either open or closed.
The second would be to upload the African-language track to Simon Says and just let it transcribe the file into gibberish. Have the translator log into Simon Says and use the online editing tool to fix all of the captions.
A third option might be to have the translator speak along to the track, translating on-the-fly. Have the African language on one channel and the English recorded on the other. Edit both against your video, so that you’d use the African audio for sync and the English would automatically be in the correct place. (Both channels are for reference only.) Upload the English channel to Simon Says. Fix as needed and download the captions and/or subtitles.
- April 22, 2021 at 10:09 am
In past projects I have done #1 and #3. In general, text-based translation is the easiest and most common, but synchronized voice translation can make for a more polished presentation. It can in theory enable first-pass dialog editing by an English-only editor, since each language is on a separate audio track. It is also more laborious to achieve. The final result must be checked by a language specialist, and (depending on the type of presentation) the edited version may be re-recorded by an age- and gender-appropriate voice actor.
We had a team of translators equipped with Blue Snowball USB mics; they used Audacity to capture the files. We trained them to pause and re-start so they didn’t need a perfect 20-minute take. This was Spanish-to-English voice translation of each full interview, which became a separate audio channel on the multicam, not simply translation of the final edited clips.
Because it is difficult for the translator to stay within +/- 1 or 2 seconds of sync, they still had to make their own transcript first. Then, when recording the translation, they could focus on timing and delivery. Despite the decent mics, we had to re-do several recordings because of background noise or echo problems. It’s more difficult than it first appears to obtain a good-quality synchronized voice translation.
The translator would listen to the foreign-language dialog in one earbud so that it didn’t leak into the English audio. We tried headphones, but they preferred one ear, to better hear themselves speak.
When I got the translated audio back, it was fairly simple to drop into FCP and check the sync.
If a decision is made later to voice-translate the final edited timeline into a new language while keeping the same edit, that can be difficult when going from English to Spanish. The “semantic density” is lower in Spanish, so it takes more time to say the words, and you can easily run out of space, forcing multiple re-edits. The answer is to plan ahead for that and keep the project in a form that permits adjustments.
In general I think method #1 (time-stamped transcript) is the best approach and involves the least post-production labor. It is tedious for a single-language editor, but it avoids the complexity of full-duration synchronized voice translation. The final edited timeline can still use voice translation of the selected clips.
- April 22, 2021 at 6:46 pm
Why not have the translators use Jubler? You can set it up for them at the proper frame rate, and even put in dummy text for them to replace; they can do the translations at the proper timecodes and then send the whole thing back to you. You can have them save as an .srt file or something else, and you can open it in Jubler and save it as an .srt file.
- April 22, 2021 at 7:26 pm
Let me reply here (instead of reply to your mail).
First, supply the translators with videos that have burned-in TC starting at 00:00:00:00.
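If you need to make those reference copies yourself, one way is ffmpeg’s drawtext filter, which can render a running timecode. A sketch, assuming ffmpeg was built with fontconfig (for `font=Sans`) and 25 fps footage; the file names are placeholders, and you should change `rate=` to match your material:

```shell
# Burn a running timecode (starting at 00:00:00:00) into a reference
# copy for the translators; audio is passed through untouched.
ffmpeg -i interview.mov \
  -vf "drawtext=font=Sans:timecode='00\:00\:00\:00':rate=25:fontsize=36:fontcolor=white:box=1:boxcolor=black@0.5:x=20:y=20" \
  -c:a copy translator_copy.mp4
```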
The files the translators return should look like:
TC, tab, Text (chapter file format)
This is the easiest way for timed transcripts.
Then you can use one of my X-Title apps to convert either to SRT or FCP SRT. You do have to set the out point for each caption, but in my experience this is the fastest, most effective way of working with translators.
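If anyone wants to roll their own converter for that TC-tab-text format, here is a minimal sketch. It assumes 25 fps timecode and simply ends each caption where the next one starts (with a fixed 4-second duration for the last one); the function names and those defaults are my own illustration, not how the X-Title apps work:

```python
# Sketch: convert "HH:MM:SS:FF<tab>text" chapter-style lines to SRT.
FPS = 25                    # frame rate of the burned-in timecode
LAST_CAPTION_SECONDS = 4.0  # fallback duration for the final caption

def tc_to_seconds(tc: str) -> float:
    """HH:MM:SS:FF -> seconds as a float."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return hh * 3600 + mm * 60 + ss + ff / FPS

def seconds_to_srt(t: float) -> str:
    """Seconds -> SRT timestamp HH:MM:SS,mmm."""
    ms = round(t * 1000)
    hh, rem = divmod(ms, 3_600_000)
    mm, rem = divmod(rem, 60_000)
    ss, ms = divmod(rem, 1000)
    return f"{hh:02d}:{mm:02d}:{ss:02d},{ms:03d}"

def chapters_to_srt(lines):
    """Iterable of 'TC<tab>text' strings -> SRT document string."""
    entries = []
    for line in lines:
        line = line.strip()
        if not line:
            continue                      # skip blank lines
        tc, text = line.split("\t", 1)
        entries.append((tc_to_seconds(tc), text))
    blocks = []
    for i, (start, text) in enumerate(entries):
        # Each caption runs until the next one begins.
        if i + 1 < len(entries):
            end = entries[i + 1][0]
        else:
            end = start + LAST_CAPTION_SECONDS
        blocks.append(
            f"{i + 1}\n{seconds_to_srt(start)} --> {seconds_to_srt(end)}\n{text}"
        )
    return "\n\n".join(blocks) + "\n"
```

For example, `chapters_to_srt(["00:00:01:00\tHello", "00:00:04:12\tGoodbye"])` produces two numbered SRT blocks, the first running from 00:00:01,000 to 00:00:04,480.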
- April 22, 2021 at 7:37 pm
I normally do recommend Jubler as well, but often enough translators don’t want to “mess” around with subtitling software.
Otherwise Isa probably wouldn’t have asked here, since the translators would have suggested it on their own 😉
- April 22, 2021 at 8:02 pm
Thanks so much! I obviously knew of #1 but not the others – very interesting ideas. I’d just love a converter from the time-stamped doc to SRT. They seem to exist, but I may need to play around.
- April 22, 2021 at 8:04 pm
I was almost convinced that voicing over the African language was the way to go, but you’ve convinced me that it’s harder than it sounds. Thanks for saving me the trouble.