    Real-Time Voice Cloning

    This repository is an implementation of Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis (SV2TTS) with a vocoder that works in real-time. This was my master's thesis.

    SV2TTS is a deep learning framework in three stages. In the first stage, one creates a digital representation of a voice from a few seconds of audio. In the second and third stages, this representation is used as a reference to generate speech from arbitrary text.
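    To make the three stages concrete, here is a minimal inference sketch condensed from this repo's demo_cli.py: the encoder embeds a reference voice, the synthesizer turns text plus that embedding into a mel spectrogram, and the vocoder turns the spectrogram into a waveform. The saved_models/default/ paths assume the automatic model download described in the setup below, and reference.wav stands in for any short recording of the target voice.

    from pathlib import Path
    import numpy as np
    import soundfile as sf
    from encoder import inference as encoder
    from synthesizer.inference import Synthesizer
    from vocoder import inference as vocoder

    # Load the three pretrained models (paths assume the automatic download in the setup section)
    encoder.load_model(Path("saved_models/default/encoder.pt"))
    synthesizer = Synthesizer(Path("saved_models/default/synthesizer.pt"))
    vocoder.load_model(Path("saved_models/default/vocoder.pt"))

    # Stage 1: derive a speaker embedding from a few seconds of reference audio
    wav = encoder.preprocess_wav(Path("reference.wav"))
    embed = encoder.embed_utterance(wav)

    # Stage 2: synthesize a mel spectrogram from arbitrary text, conditioned on the embedding
    specs = synthesizer.synthesize_spectrograms(["Hello, this is a cloned voice."], [embed])

    # Stage 3: vocode the spectrogram into a waveform
    generated_wav = vocoder.infer_waveform(specs[0])
    sf.write("cloned.wav", generated_wav.astype(np.float32), synthesizer.sample_rate)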

    Video demonstration: [Toolbox demo]

    Papers implemented

    URL        | Designation            | Title                                                                                 | Implementation source
    -----------|------------------------|---------------------------------------------------------------------------------------|----------------------
    1806.04558 | SV2TTS                 | Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis | This repo
    1802.08435 | WaveRNN (vocoder)      | Efficient Neural Audio Synthesis                                                      | fatchord/WaveRNN
    1703.10135 | Tacotron (synthesizer) | Tacotron: Towards End-to-End Speech Synthesis                                         | fatchord/WaveRNN
    1710.10467 | GE2E (encoder)         | Generalized End-To-End Loss for Speaker Verification                                  | This repo

    Heads up

    Like everything else in deep learning, this repo has aged quickly. Many SaaS apps (often paid) will give you better audio quality than this repository will. If you want an open-source solution with high voice quality:

    • Check out paperswithcode for other repositories and recent research in the field of speech synthesis.
    • Check out Chatterbox for a similar project that is up to date with the 2025 state of the art in voice cloning.

    Setup

    1. Install Requirements

    1. Both Windows and Linux are supported. A GPU is recommended for training and for inference speed, but is not mandatory.
    2. Python 3.7 is recommended. Python 3.5 or greater should work, but you'll probably have to tweak the dependencies' versions. I recommend setting up a virtual environment using venv, but this is optional.
    3. Install ffmpeg. This is necessary for reading audio files.
    4. Install PyTorch. Pick the latest stable version, your operating system, your package manager (pip by default) and finally pick any of the proposed CUDA versions if you have a GPU, otherwise pick CPU. Run the given command.
    5. Install the remaining requirements with pip install -r requirements.txt (a full example install sequence is sketched after this list).
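    For reference, a typical install sequence on Linux might look like the following; the PyTorch line is only a placeholder and should be replaced by the exact command generated on pytorch.org for your platform and CUDA version.

    python3.7 -m venv env
    source env/bin/activate
    sudo apt install ffmpeg              # Windows: download a build from ffmpeg.org and add it to PATH
    pip install torch                    # placeholder: use the command from pytorch.org
    pip install -r requirements.txt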

    2. (Optional) Download Pretrained Models

    Pretrained models are now downloaded automatically. If this doesn't work for you, you can manually download them here.
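    If you do download them manually, the demo scripts currently look for the weights under saved_models/default/; this layout is an assumption based on the current code and may differ in older versions of the repo.

    saved_models/
    └── default/
        ├── encoder.pt
        ├── synthesizer.pt
        └── vocoder.pt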

    3. (Optional) Test Configuration

    Before you download any dataset, you can begin by testing your configuration with:

    python demo_cli.py

    If all tests pass, you're good to go.

    4. (Optional) Download Datasets

    For playing with the toolbox alone, I only recommend downloading LibriSpeech/train-clean-100. Extract the contents as <datasets_root>/LibriSpeech/train-clean-100 where <datasets_root> is a directory of your choosing. Other datasets are supported in the toolbox, see here. You're free not to download any dataset, but then you will need your own data as audio files or you will have to record it with the toolbox.
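    For reference, the extracted layout should look roughly like this (speaker 19 and chapter 198 are just example IDs from LibriSpeech):

    <datasets_root>/
    └── LibriSpeech/
        └── train-clean-100/
            ├── 19/
            │   └── 198/
            │       ├── 19-198-0000.flac
            │       ├── 19-198-0001.flac
            │       ├── ...
            │       └── 19-198.trans.txt
            └── ...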

    5. Launch the Toolbox

    You can then try the toolbox:

    python demo_toolbox.py -d <datasets_root>
    or
    python demo_toolbox.py

    depending on whether you downloaded any datasets. If you are running an X-server or if you have the error Aborted (core dumped), see this issue.
