How I dubbed a 16-second video with lip sync for $0.50 using open-source models
AI video dubbing or translation has surged in popularity, breaking down language barriers and enabling communication across diverse cultures. While many paid services offer video dubbing, they often rely on proprietary black-box models outside your control. What if you could deploy a fully customizable, open-source video dubbing pipeline tailored to your needs?
With the right open-source models stitched together, you can translate videos on your terms. This blog walks through how users of Union can tweak parameters, scale resources based on input, reproduce results if needed, and leverage caching to optimize costs. This flexible, transparent approach gives you full command over quality, performance, and spend.
I dubbed a 16-second video for an estimated $0.50 using an AWS T4 instance. Enabling caching can further reduce costs. For example, if you need to update lip sync parameters, only the lip sync task needs to be re-executed, while other task outputs, such as voice cloning and text translation, are read from the cache. If you want to dub to a different language, the speech-to-text transcription won’t be re-executed as the task output for the previously dubbed language will remain the same.
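As a quick illustration of how that caching works, here is a minimal sketch of enabling task-level caching in Flytekit; the task name, signature, and cache version below are hypothetical, not taken from the actual pipeline.

```python
from flytekit import task


@task(cache=True, cache_version="1.0")  # bump cache_version to invalidate old results
def transcribe(audio_uri: str, language: str) -> str:
    # With caching enabled, re-running the workflow with identical inputs
    # reuses this task's stored output instead of executing the task again.
    ...
```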
Building your own video dubbing pipeline is easier than you think. I’ll show you how to assemble best-in-class open-source models into a single pipeline, all ready to run on the Union platform.
Before diving into the details, let's take a sneak peek at the workflow.
from flytekit import workflow
from flytekit.types.file import FlyteFile


@workflow
def video_translation_wf(...) -> FlyteFile:
    # Extract audio and a representative frame, transcribe, translate,
    # clone the voice, and finally lip-sync the dubbed audio to the video.
    values = fetch_audio_and_image(...)
    text = speech2text(...)
    translated_text = translate_text(...)
    cloned_voice = clone_voice(...)
    return lip_sync(...)
When you run this workflow on Union, it triggers a sequence of steps to translate your video:
- The pipeline starts by fetching the audio and image components from your input video file for further processing.
- Using Whisper, the audio is transcribed into text, enabling translation.
- The transcribed text is fed into the M2M100 model, translating it into your desired target language.
- Coqui XTTS clones the original speaker's voice in the target language.
- Finally, Sad Talker lip-syncs the translated audio to the original video, producing a complete translated clip with accurate lip movements.
Audio & image extraction
To enable transcription and translation, we need to extract the audio from the video file, and for the lip sync model, we need a frame from the video. While the model typically selects a frame at random, choosing the most representative keyframe with the Katna library can yield better results.
from flytekit import ImageSpec, Resources, task
from flytekit.types.file import FlyteFile

preprocessing_image = ImageSpec(
    name="fetch_audio_and_image",
    apt_packages=["ffmpeg"],
    packages=[
        "moviepy==1.0.3",
        "katna==0.9.2",
        "unionai==0.1.5",
    ],
)


@task(
    container_image=preprocessing_image,
    requests=Resources(mem="5Gi", cpu="1"),
)
def fetch_audio_and_image(
    video_file: FlyteFile, output_ext: str
) -> audio_and_image_values:  # output type defined elsewhere in the file
    from Katna.video import Video
    from Katna.writer import KeyFrameDiskWriter
    from moviepy.editor import VideoFileClip

    ...
The `ImageSpec` utility captures all the dependencies, while the `ucimage` builder automatically builds the image remotely when you launch a remote execution.
The `requests` parameter in the task decorator allows you to specify the resources needed to run the task. You can also adjust the resources based on the task's actual consumption, as observed in the Union UI.
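To make the elided body concrete, here is a rough sketch of what the audio and keyframe extraction could look like; the helper name, output paths, and single-keyframe choice are assumptions rather than the pipeline's actual code.

```python
def extract_audio_and_keyframe(video_path: str, output_ext: str) -> tuple[str, str]:
    from Katna.video import Video
    from Katna.writer import KeyFrameDiskWriter
    from moviepy.editor import VideoFileClip

    # Pull the audio track out of the video with moviepy
    audio_path = f"/tmp/audio.{output_ext}"
    VideoFileClip(video_path).audio.write_audiofile(audio_path)

    # Ask Katna for the single most representative keyframe, written to disk
    keyframe_dir = "/tmp/keyframes"
    Video().extract_video_keyframes(
        no_of_frames=1,
        file_path=video_path,
        writer=KeyFrameDiskWriter(location=keyframe_dir),
    )
    return audio_path, keyframe_dir
```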
Speech-to-text transcription
The audio must then be transcribed so it can be translated in the subsequent task; automatic speech recognition (ASR) converts the speech recording into text.
@task(
    container_image=speech2text_image,
    requests=Resources(gpu="1", mem="10Gi", cpu="1"),
    accelerator=T4,
)
def speech2text(
    checkpoint: str,
    audio: FlyteFile,
    chunk_length: float,
    return_timestamps: bool,
    translate_from: str,
) -> str:
    ...
    pipe = pipeline(
        "automatic-speech-recognition",
        model=checkpoint,
        chunk_length_s=chunk_length,
        device="cuda:0" if torch.cuda.is_available() else "cpu",
    )
    ...
The `checkpoint` can refer to a Whisper model (for example, Whisper Large v2), but it can be any speech-to-text model. You can run the task on a GPU to speed up execution, and the `accelerator` parameter lets you pick which GPU type the transcription runs on.
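The elided portion of the task might invoke the pipeline along the following lines; the local audio path handling and the `generate_kwargs` language hint are my assumptions, not necessarily the post's exact code.

```python
import torch
from transformers import pipeline


def transcribe_audio(
    checkpoint: str,
    audio_path: str,
    chunk_length: float,
    return_timestamps: bool,
    translate_from: str,
) -> str:
    pipe = pipeline(
        "automatic-speech-recognition",
        model=checkpoint,
        chunk_length_s=chunk_length,
        device="cuda:0" if torch.cuda.is_available() else "cpu",
    )
    # Passing the source language avoids Whisper's automatic language detection
    result = pipe(
        audio_path,
        return_timestamps=return_timestamps,
        generate_kwargs={"language": translate_from, "task": "transcribe"},
    )
    return result["text"]
```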
Text translation
The M2M100 1.2B model by Meta enables text translation. When executing the workflow, both the source and target languages need to be provided as inputs.
@task(
    container_image=language_translation_image,
    requests=Resources(mem="10Gi", cpu="3"),
)
def translate_text(translate_from: str, translate_to: str, input: str) -> str:
    ...
    model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_1.2B")
    tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B")
    ...
This task doesn’t require a GPU.
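The elided generation step likely follows the documented M2M100 recipe: set the tokenizer's source language, then force the target language's BOS token during generation. The helper function below is a sketch under that assumption.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer


def translate(text: str, translate_from: str, translate_to: str) -> str:
    model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_1.2B")
    tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B")

    # Tell the tokenizer which language the input text is in
    tokenizer.src_lang = translate_from
    encoded = tokenizer(text, return_tensors="pt")

    # Force generation to start with the target language's token
    generated_tokens = model.generate(
        **encoded,
        forced_bos_token_id=tokenizer.get_lang_id(translate_to),
    )
    return tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]
```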
Voice cloning
The translated text is used to clone the speaker’s voice. Coqui XTTS clones the voice based on the provided text, target language, and the speaker’s audio.
@task(
    container_image=clone_voice_image,
    requests=Resources(gpu="1", mem="15Gi"),
    accelerator=T4,
    environment={"COQUI_TOS_AGREED": "1"},
)
def clone_voice(text: str, target_lang: str, speaker_wav: FlyteFile) -> FlyteFile:
    ...
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)
    ...
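The elided synthesis step presumably calls Coqui's `tts_to_file` API, roughly as sketched below; the output location is illustrative.

```python
from TTS.api import TTS


def synthesize(text: str, target_lang: str, speaker_wav_path: str) -> str:
    # Clone the reference speaker's voice and speak the translated text
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to("cuda")
    output_path = "/tmp/cloned_voice.wav"  # illustrative location
    tts.tts_to_file(
        text=text,
        speaker_wav=speaker_wav_path,
        language=target_lang,
        file_path=output_path,
    )
    return output_path
```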
The XTTS model supports the following languages for voice cloning, making them potential target languages for video dubbing as well:
language_codes = {
    "English": "en",
    "Spanish": "es",
    "French": "fr",
    "German": "de",
    "Italian": "it",
    "Portuguese": "pt",
    "Polish": "pl",
    "Turkish": "tr",
    "Russian": "ru",
    "Dutch": "nl",
    "Czech": "cs",
    "Arabic": "ar",
    "Chinese": "zh-cn",
    "Japanese": "ja",
    "Hungarian": "hu",
    "Korean": "ko",
    "Hindi": "hi",
}
If another voice cloning model supports a larger set of languages, you can also use that.
Lip syncing
The Sad Talker model generates talking-head videos from an image and an audio input. It lets you adjust various parameters such as pose style, face enhancement, background enhancement, expression scale, and more. The following code snippet outlines the inputs the lip sync task accepts:
@task(
    requests=Resources(gpu="1", mem="30Gi"),
    container_image=lip_sync_image,
    accelerator=T4,
)
def lip_sync(
    audio_path: FlyteFile,
    pic_path: FlyteFile,
    ref_pose: FlyteFile,
    ref_eyeblink: FlyteFile,
    pose_style: int,
    batch_size: int,
    expression_scale: float,
    input_yaw_list: Optional[list[int]],
    input_pitch_list: Optional[list[int]],
    input_roll_list: Optional[list[int]],
    enhancer: str,
    background_enhancer: str,
    device: str,
    still: bool,
    preprocess: str,
    checkpoint_dir: str,
    size: int,
) -> FlyteFile:
    ...
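Sad Talker ships as a repository with an `inference.py` entry point rather than a pip package, so one plausible way to wire it up inside the task is to shell out to that script. The snippet below is a sketch under that assumption: it shows only a subset of the task's parameters, and the checkout path is hypothetical.

```python
import subprocess


def run_sad_talker(
    audio_path: str,
    pic_path: str,
    still: bool,
    preprocess: str,
    enhancer: str,
    result_dir: str,
) -> None:
    # Invoke SadTalker's inference script; it writes the lip-synced video
    # into result_dir.
    cmd = [
        "python", "inference.py",
        "--driven_audio", audio_path,
        "--source_image", pic_path,
        "--preprocess", preprocess,
        "--enhancer", enhancer,
        "--result_dir", result_dir,
    ]
    if still:
        cmd.append("--still")
    subprocess.run(cmd, check=True, cwd="/root/SadTalker")  # hypothetical checkout path
```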
You can find the end-to-end video dubbing workflow on GitHub.
Running the pipeline
Note: Getting started with Union is simple. Explore the unionai SDK to run workflows on the platform!
To run the workflow on Union, use the following command:
union run --remote --copy-all video_translation.py video_translation_wf
Once registered, you can trigger the workflow in the Union UI to translate your videos!
Outputs of our video dubbing application. The `still` parameter in the workflow is set to `True` because the Sad Talker model supports head motions. You can set it to `False` if `preprocess` is set to `crop`.
Key takeaways
What exactly does this video dubbing pipeline unlock?
- Each task has an associated `ImageSpec`, eliminating the need for a single bloated image containing all dependencies. You can also use different Python versions or install CUDA libraries to run on GPUs, providing a new level of dependency isolation and reducing the chances of dealing with “dependency hell” (see the sketch after this list).
- Within a single workflow, tasks can run on both CPUs and GPUs, and you can adjust resources based on the requirements of each task.
- You can easily swap out existing libraries with other open-source alternatives. The transparent pipeline lets you fine-tune parameters for optimal performance.
- Union's versioning lets you roll back to a previous execution with ease, while caching avoids re-running tasks that have already completed.
- Reproducibility is an easy win with Union, accelerating iteration velocity while you develop workflows.
- If you have Flyte up and running, you can also “self-host” this pipeline without relying on third-party video dubbing services.
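For example, a CPU-only task and a GPU task can build from separate image specs with different Python versions and CUDA settings. The package lists below are illustrative, and this assumes your flytekit version exposes the `python_version` and `cuda` fields on `ImageSpec`.

```python
from flytekit import ImageSpec

# Slim image for CPU-only tasks such as text translation (packages illustrative)
cpu_image = ImageSpec(
    name="translate-text",
    python_version="3.11",
    packages=["transformers", "sentencepiece"],
)

# Separate image for GPU tasks, pinned to a different Python and a CUDA toolkit
gpu_image = ImageSpec(
    name="speech2text",
    python_version="3.10",
    cuda="11.8",
    packages=["transformers", "torch"],
)
```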
Contact the Union team if you’re interested in implementing end-to-end AI solutions!