How I dubbed a 16-second video with lip sync for $0.50 using open-source models

https://www.union.ai/blog-post/open-source-video-dubbing-using-whisper-m2m-coqui-xtts-and-sad-talker

AI video dubbing or translation has surged in popularity, breaking down language barriers and enabling communication across diverse cultures. While many paid services offer video dubbing, they often rely on proprietary black-box models outside your control. What if you could deploy a fully customizable, open-source video dubbing pipeline tailored to your needs?

With the right open-source models stitched together, you can translate videos on your terms. This blog walks through how users of Union can tweak parameters, scale resources based on input, reproduce results if needed, and leverage caching to optimize costs. This flexible, transparent approach gives you full command over quality, performance, and spend.

I dubbed a 16-second video for an estimated $0.50 using an AWS T4 instance. Enabling caching can further reduce costs. For example, if you need to update lip sync parameters, only the lip sync task is re-executed, while the outputs of the other tasks, such as voice cloning and text translation, are read from the cache. If you dub to a different target language, the speech-to-text transcription isn't re-executed either, because its output doesn't depend on the target language.
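
Caching is configured per task. Here's a minimal, hypothetical sketch of what enabling it looks like with flytekit's `cache` and `cache_version` task arguments (the exact settings used in the pipeline may differ):

from flytekit import task

@task(
    cache=True,           # skip re-execution when the inputs haven't changed
    cache_version="1.0",  # bump this string to invalidate previously cached outputs
)
def translate_text(translate_from: str, translate_to: str, input: str) -> str:
    ...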

Building your own video dubbing pipeline is easier than you think. I’ll show you how to assemble best-in-class open-source models into a single pipeline, all ready to run on the Union platform.

Before diving into the details, let's take a sneak peek at the workflow.

from flytekit import workflow
from flytekit.types.file import FlyteFile

@workflow
def video_translation_wf(...) -> FlyteFile:
    values = fetch_audio_and_image(...)    # audio track + representative frame
    text = speech2text(...)                # Whisper transcription
    translated_text = translate_text(...)  # M2M100 translation
    cloned_voice = clone_voice(...)        # Coqui XTTS voice cloning
    return lip_sync(...)                   # Sad Talker lip sync

When you run this workflow on Union, it triggers a sequence of steps to translate your video:

  1. The pipeline starts by fetching the audio and image components from your input video file for further processing.
  2. Using Whisper, the audio is transcribed into text, enabling translation.
  3. The transcribed text is fed into the M2M100 model, translating it into your desired target language.
  4. Coqui XTTS clones the original speaker's voice in the target language.
  5. Finally, Sad Talker lip-syncs the translated audio to the original video, producing a complete translated clip with accurate lip movements.

Audio & image extraction

To enable transcription and translation, we need to extract the audio from the video file, and for the lip sync model, we need a frame from the video. While the lip sync model typically picks a frame at random, choosing the most representative keyframe with the Katna library can yield better results.

from flytekit import ImageSpec, Resources, task
from flytekit.types.file import FlyteFile

preprocessing_image = ImageSpec(
    name="fetch_audio_and_image",
    apt_packages=["ffmpeg"],
    packages=[
        "moviepy==1.0.3",
        "katna==0.9.2",
        "unionai==0.1.5",
    ],
)

@task(
    container_image=preprocessing_image,
    requests=Resources(mem="5Gi", cpu="1"),
)
def fetch_audio_and_image(
    video_file: FlyteFile, output_ext: str
) -> audio_and_image_values:  # NamedTuple output (extracted audio + keyframe image), defined in the full source
    from Katna.video import Video
    from Katna.writer import KeyFrameDiskWriter
    from moviepy.editor import VideoFileClip
    ...
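
The task body is elided above; here is a sketch of how the two libraries could be used (the helper name and output paths are illustrative assumptions, not the pipeline's actual code):

from Katna.video import Video
from Katna.writer import KeyFrameDiskWriter
from moviepy.editor import VideoFileClip

def extract_audio_and_keyframe(video_path: str, out_dir: str) -> None:
    # Write the audio track to a WAV file for transcription and voice cloning
    clip = VideoFileClip(video_path)
    clip.audio.write_audiofile(f"{out_dir}/audio.wav")

    # Ask Katna for the single most representative keyframe and save it to disk
    vd = Video()
    writer = KeyFrameDiskWriter(location=out_dir)
    vd.extract_video_keyframes(no_of_frames=1, file_path=video_path, writer=writer)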

The `ImageSpec` utility captures all the dependencies, while the `ucimage` builder automatically builds the image remotely when you launch a remote execution.

The `resources` parameter in the task decorator allows you to specify the necessary resources to run the task. You can also adjust the resources based on the task's consumption, as observed in the Union UI.
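
If you also want to cap consumption, you can pair `requests` with `limits`; a small sketch (this pipeline's tasks only set requests, and the numbers here are illustrative):

@task(
    container_image=preprocessing_image,
    requests=Resources(mem="5Gi", cpu="1"),  # guaranteed baseline
    limits=Resources(mem="8Gi", cpu="2"),    # hard ceiling for the task pod
)
def fetch_audio_and_image(
    video_file: FlyteFile, output_ext: str
) -> audio_and_image_values:
    ...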

Resource utilization for the lip sync task, as observed in the Union UI

Speech-to-text transcription 

Next, the audio must be transcribed so it can be translated in the subsequent task. Automatic speech recognition (ASR) converts the speech recording into text.

import torch
from flytekit import Resources, task
from flytekit.extras.accelerators import T4
from flytekit.types.file import FlyteFile
from transformers import pipeline

@task(
    container_image=speech2text_image,  # ImageSpec for this task, defined like preprocessing_image
    requests=Resources(gpu="1", mem="10Gi", cpu="1"),
    accelerator=T4,
)
def speech2text(
    checkpoint: str,
    audio: FlyteFile,
    chunk_length: float,
    return_timestamps: bool,
    translate_from: str,
) -> str:
    ...
    pipe = pipeline(
        "automatic-speech-recognition",
        model=checkpoint,
        chunk_length_s=chunk_length,
        device="cuda:0" if torch.cuda.is_available() else "cpu",
    )
    ...

The `checkpoint` can refer to a Whisper model (for example, Whisper Large v2), but in practice it can be any compatible speech-to-text model. You can configure the task to run on a GPU to speed up execution, and use an accelerator to select the GPU type the transcription should run on.
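
Calling the pipeline on the extracted audio is hidden behind the ellipsis above; what follows is a sketch of that call, assuming a Whisper checkpoint (the exact `generate_kwargs` in the pipeline may differ):

audio_path = audio.download()  # materialize the FlyteFile locally
result = pipe(
    audio_path,
    return_timestamps=return_timestamps,
    generate_kwargs={"language": translate_from, "task": "transcribe"},
)
transcript = result["text"]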

Text translation

The M2M100 1.2B model by Meta enables text translation. When executing the workflow, both the source and target languages need to be provided as inputs. 

from flytekit import Resources, task
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

@task(
    container_image=language_translation_image,
    requests=Resources(mem="10Gi", cpu="3"),
)
def translate_text(translate_from: str, translate_to: str, input: str) -> str:
    ...
    model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_1.2B")
    tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B")
    ...

This task doesn’t require a GPU. 
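
The generation step elided in the snippet above follows the standard M2M100 usage pattern; a sketch (variable names are illustrative):

tokenizer.src_lang = translate_from  # e.g. "en"
encoded = tokenizer(input, return_tensors="pt")
generated = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.get_lang_id(translate_to),  # e.g. "ru"
)
return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]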

Voice cloning

The translated text is used to clone the speaker’s voice. Coqui XTTS clones the voice based on the provided text, target language, and the speaker’s audio.

import torch
from flytekit import Resources, task
from flytekit.extras.accelerators import T4
from flytekit.types.file import FlyteFile
from TTS.api import TTS

@task(
    container_image=clone_voice_image,
    requests=Resources(gpu="1", mem="15Gi"),
    accelerator=T4,
    environment={"COQUI_TOS_AGREED": "1"},  # accept the Coqui model license non-interactively
)
def clone_voice(text: str, target_lang: str, speaker_wav: FlyteFile) -> FlyteFile:
    ...
    device = "cuda" if torch.cuda.is_available() else "cpu"
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)
    ...
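
Synthesis then comes down to a single call; a sketch using XTTS's `tts_to_file` API (the output filename is an assumption):

wav_path = "cloned_voice.wav"  # illustrative output filename
tts.tts_to_file(
    text=text,
    speaker_wav=speaker_wav.download(),  # local copy of the original speaker's audio
    language=target_lang,
    file_path=wav_path,
)
return FlyteFile(path=wav_path)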

The XTTS model supports the following languages for voice cloning, making them potential target languages for video dubbing as well:

language_codes = {
    "English": "en",
    "Spanish": "es",
    "French": "fr",
    "German": "de",
    "Italian": "it",
    "Portuguese": "pt",
    "Polish": "pl",
    "Turkish": "tr",
    "Russian": "ru",
    "Dutch": "nl",
    "Czech": "cs",
    "Arabic": "ar",
    "Chinese": "zh-cn",
    "Japanese": "ja",
    "Hungarian": "hu",
    "Korean": "ko",
    "Hindi": "hi",
}

If another voice cloning model supports a larger set of languages, you can also use that.

Lip syncing

The Sad Talker model generates talking-head videos from an image and an audio input. It lets you adjust various parameters such as pose style, face enhancement, background enhancement, expression scale, and more. The following code snippet outlines the inputs that the lip sync task accepts:

from typing import Optional

from flytekit import Resources, task
from flytekit.extras.accelerators import T4
from flytekit.types.file import FlyteFile

@task(
    requests=Resources(gpu="1", mem="30Gi"),
    container_image=lip_sync_image,
    accelerator=T4,
)
def lip_sync(
    audio_path: FlyteFile,
    pic_path: FlyteFile,
    ref_pose: FlyteFile,
    ref_eyeblink: FlyteFile,
    pose_style: int,
    batch_size: int,
    expression_scale: float,
    input_yaw_list: Optional[list[int]],
    input_pitch_list: Optional[list[int]],
    input_roll_list: Optional[list[int]],
    enhancer: str,
    background_enhancer: str,
    device: str,
    still: bool,
    preprocess: str,
    checkpoint_dir: str,
    size: int,
) -> FlyteFile:
    ...
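
One way the elided body could drive Sad Talker is by shelling out to the repository's `inference.py` entry point; a sketch (the flag names follow the SadTalker CLI, while paths and the result directory are illustrative):

import subprocess

cmd = [
    "python", "inference.py",
    "--driven_audio", audio_path.download(),   # cloned, translated audio
    "--source_image", pic_path.download(),     # keyframe extracted earlier
    "--checkpoint_dir", checkpoint_dir,
    "--pose_style", str(pose_style),
    "--expression_scale", str(expression_scale),
    "--preprocess", preprocess,
    "--size", str(size),
    "--result_dir", "results",
]
if enhancer:
    cmd += ["--enhancer", enhancer]            # e.g. "gfpgan" for face enhancement
if still:
    cmd += ["--still"]                         # use fewer head motions (SadTalker's --still flag)
subprocess.run(cmd, check=True)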

You can find the end-to-end video dubbing workflow on GitHub.

Running the pipeline

Note: Starting with Union is simple. Explore the unionai SDK to run workflows on the platform!

To run the workflow on Union, use the following command:

union run --remote --copy-all video_translation.py video_translation_wf
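
Workflow inputs can be passed as flags on the same command; the parameter names below are hypothetical and should match your workflow's actual signature:

union run --remote --copy-all video_translation.py video_translation_wf \
    --video_file my_clip.mp4 \
    --translate_from English \
    --translate_to Russian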

Once registered, you can trigger the workflow in the Union UI to translate your videos!

Outputs of our video dubbing application. The `still` parameter in the workflow is set to `True` because the Sad Talker model supports head motions. You can set it to `False` if `preprocess` is set to `crop`.

Key takeaways

What exactly does this video dubbing pipeline unlock?

  • Each task has an associated `ImageSpec`, eliminating the need for a single bloated image containing all dependencies. You can also use different Python versions or install CUDA libraries to run on GPUs, providing a new level of dependency isolation and reducing the chances of landing in “dependency hell”! (See the sketch after this list.)
  • Within a single workflow, tasks can run on both CPUs and GPUs, and you can adjust resources based on the requirements of each task.
  • You can easily swap out existing libraries with other open-source alternatives. The transparent pipeline lets you fine-tune parameters for optimal performance. 
  • The versioning and caching features of Union enable you to roll back to a previous execution with ease and avoid re-running executions that have already been completed, respectively. 
  • Reproducibility comes essentially for free on Union, which accelerates iteration velocity while developing workflows.
  • If you have Flyte up and running, you can also “self-host” this pipeline without relying on third-party video dubbing libraries.
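
As an example of that per-task isolation, a CPU-only translation image and a heavier lip sync image can be declared side by side; a minimal sketch (the package pins are illustrative, not the repository's actual ones):

from flytekit import ImageSpec

# Lightweight, CPU-only image for the text translation task
language_translation_image = ImageSpec(
    name="translate_text",
    packages=["transformers==4.39.3", "torch==2.2.2", "sentencepiece==0.2.0"],
)

# Heavier image for the lip sync task, with system packages for video handling
lip_sync_image = ImageSpec(
    name="lip_sync",
    apt_packages=["ffmpeg"],
    packages=["torch==2.0.1", "gfpgan==1.3.8", "unionai==0.1.5"],
)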

Contact the Union team if you’re interested in implementing end-to-end AI solutions!
