    FunAudioLLM/CosyVoice

    Multi-lingual large voice generation model, providing full-stack inference, training, and deployment capabilities.


    👉🏻 CosyVoice 👈🏻

    CosyVoice 3.0: Demos; Paper; CV3-Eval

    CosyVoice 2.0: Demos; Paper; Modelscope; HuggingFace

    CosyVoice 1.0: Demos; Paper; Modelscope

    Highlight🔥

    CosyVoice 2.0 has been released! Compared to version 1.0, the new version offers more accurate, more stable, and faster speech generation.

    Multilingual

    • Supported Languages: Chinese, English, Japanese, Korean, and Chinese dialects (Cantonese, Sichuanese, Shanghainese, Tianjinese, Wuhanese, etc.)
    • Crosslingual & Mixlingual: Supports zero-shot voice cloning for cross-lingual and code-switching scenarios.

    Ultra-Low Latency

    • Bidirectional Streaming Support: CosyVoice 2.0 integrates offline and streaming modeling technologies.
    • Rapid First Packet Synthesis: Achieves latency as low as 150 ms while maintaining high-quality audio output (see the sketch below).
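
    As a rough way to check first-packet latency on your own hardware, you can time the first chunk returned in streaming mode. The following is a minimal sketch built from the inference API shown under Basic Usage below; the model path and prompt audio are the same ones used in those examples.

    import sys, time
    sys.path.append('third_party/Matcha-TTS')
    from cosyvoice.cli.cosyvoice import CosyVoice2
    from cosyvoice.utils.file_utils import load_wav

    cosyvoice = CosyVoice2('pretrained_models/CosyVoice2-0.5B')
    prompt_speech_16k = load_wav('./asset/zero_shot_prompt.wav', 16000)

    start = time.time()
    # stream=True yields audio chunk by chunk instead of one full waveform
    for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=True)):
        if i == 0:
            # elapsed time until the first audio packet arrives
            print('first packet after {:.0f} ms'.format((time.time() - start) * 1000))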

    High Accuracy

    • Improved Pronunciation: Reduces pronunciation errors by 30% to 50% compared to CosyVoice 1.0.
    • Benchmark Achievements: Attains the lowest character error rate on the hard test set of the Seed-TTS evaluation set.

    Strong Stability

    • Consistency in Timbre: Ensures reliable voice consistency for zero-shot and cross-language speech synthesis.
    • Cross-language Synthesis: Marked improvements compared to version 1.0.

    Natural Experience

    • Enhanced Prosody and Sound Quality: Improved alignment of synthesized audio, raising MOS evaluation scores from 5.4 to 5.53.
    • Emotional and Dialectal Flexibility: Now supports more granular emotional controls and accent adjustments.

    Roadmap

    • 2025/08

      • Added Triton + TensorRT-LLM runtime support, contributed by Yuekai Zhang (NVIDIA)
    • 2025/07

      • Released the CosyVoice 3.0 eval set
    • 2025/05

      • Added CosyVoice 2.0 vllm support
    • 2024/12

      • Released 25 Hz CosyVoice 2.0
    • 2024/09

      • 25 Hz CosyVoice base model
      • 25 Hz CosyVoice voice conversion model
    • 2024/08

      • Repetition-Aware Sampling (RAS) inference for LLM stability
      • Streaming inference mode support, including KV cache and SDPA for RTF optimization
    • 2024/07

      • Flow matching training support
      • WeTextProcessing support when ttsfrd is not available
      • FastAPI server and client

    Install

    Clone and install

    • Clone the repo

      git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
      # If the submodule clone fails due to network issues, rerun the following command until it succeeds
      cd CosyVoice
      git submodule update --init --recursive
      
    • Install Conda: please see https://docs.conda.io/en/latest/miniconda.html

    • Create Conda env:

      conda create -n cosyvoice -y python=3.10
      conda activate cosyvoice
      pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
      
      # If you encounter sox compatibility issues
      # Ubuntu
      sudo apt-get install sox libsox-dev
      # CentOS
      sudo yum install sox sox-devel
      

    Model download

    We strongly recommend that you download our pretrained CosyVoice2-0.5B, CosyVoice-300M, CosyVoice-300M-SFT, and CosyVoice-300M-Instruct models, along with the CosyVoice-ttsfrd resource.

    # Model download via the ModelScope SDK
    from modelscope import snapshot_download
    snapshot_download('iic/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B')
    snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
    snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
    snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
    snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
    
    # Model download via git; make sure git lfs is installed first
    mkdir -p pretrained_models
    git clone https://www.modelscope.cn/iic/CosyVoice2-0.5B.git pretrained_models/CosyVoice2-0.5B
    git clone https://www.modelscope.cn/iic/CosyVoice-300M.git pretrained_models/CosyVoice-300M
    git clone https://www.modelscope.cn/iic/CosyVoice-300M-SFT.git pretrained_models/CosyVoice-300M-SFT
    git clone https://www.modelscope.cn/iic/CosyVoice-300M-Instruct.git pretrained_models/CosyVoice-300M-Instruct
    git clone https://www.modelscope.cn/iic/CosyVoice-ttsfrd.git pretrained_models/CosyVoice-ttsfrd
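
    If you prefer HuggingFace (linked above for CosyVoice 2.0), a snapshot download along the same lines should work; the repo id below is an assumption, so verify it on the HuggingFace model page first.

    # HuggingFace alternative (repo id assumed; check the HuggingFace model page)
    from huggingface_hub import snapshot_download
    snapshot_download('FunAudioLLM/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B')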
    

    Optionally, you can unzip the ttsfrd resource and install the ttsfrd package for better text normalization performance.

    Note that this step is not necessary; if you do not install the ttsfrd package, wetext is used by default.

    cd pretrained_models/CosyVoice-ttsfrd/
    unzip resource.zip -d .
    pip install ttsfrd_dependency-0.1-py3-none-any.whl
    pip install ttsfrd-0.4.2-cp310-cp310-linux_x86_64.whl
    

    Basic Usage

    We strongly recommend using CosyVoice2-0.5B for better performance. Follow the code below for detailed usage of each model.

    import sys
    sys.path.append('third_party/Matcha-TTS')
    from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
    from cosyvoice.utils.file_utils import load_wav
    import torchaudio
    

    CosyVoice2 Usage

    cosyvoice = CosyVoice2('pretrained_models/CosyVoice2-0.5B', load_jit=False, load_trt=False, load_vllm=False, fp16=False)
    
    # NOTE if you want to reproduce the results on https://funaudiollm.github.io/cosyvoice2, please add text_frontend=False during inference
    # zero_shot usage
    prompt_speech_16k = load_wav('./asset/zero_shot_prompt.wav', 16000)
    for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
        torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
    
    # save zero_shot spk for future usage
    assert cosyvoice.add_zero_shot_spk('希望你以后能够做的比我还好呦。', prompt_speech_16k, 'my_zero_shot_spk') is True
    for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '', '', zero_shot_spk_id='my_zero_shot_spk', stream=False)):
        torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
    cosyvoice.save_spkinfo()
    
    # fine-grained control; for supported control tokens, check cosyvoice/tokenizer/tokenizer.py#L248
    for i, j in enumerate(cosyvoice.inference_cross_lingual('在他讲述那个荒诞故事的过程中,他突然[laughter]停下来,因为他自己也被逗笑了[laughter]。', prompt_speech_16k, stream=False)):
        torchaudio.save('fine_grained_control_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
    
    # instruct usage
    for i, j in enumerate(cosyvoice.inference_instruct2('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '用四川话说这句话', prompt_speech_16k, stream=False)):
        torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
    
    # bistream usage: you can pass a generator as input, which is useful when the text comes from a streaming text LLM
    # NOTE you should still apply some basic sentence-splitting logic, because the LM cannot handle arbitrary sentence lengths (see the sketch after this block)
    def text_generator():
        yield '收到好友从远方寄来的生日礼物,'
        yield '那份意外的惊喜与深深的祝福'
        yield '让我心中充满了甜蜜的快乐,'
        yield '笑容如花儿般绽放。'
    for i, j in enumerate(cosyvoice.inference_zero_shot(text_generator(), '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
        torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
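
    For example, when the upstream text comes from a streaming LLM, a small buffer that flushes on sentence-ending punctuation is usually enough. The sketch below is illustrative only; sentence_stream and its punctuation set are assumptions, not part of the CosyVoice API.

    # Minimal sketch: buffer an upstream token stream into sentence-sized chunks.
    # sentence_stream() is illustrative and not part of the CosyVoice API.
    def sentence_stream(token_stream, puncts='。!?;,!?;'):
        buf = ''
        for token in token_stream:  # e.g. text chunks from an LLM streaming API
            buf += token
            if buf and buf[-1] in puncts:
                yield buf
                buf = ''
        if buf:  # flush trailing text without closing punctuation
            yield buf

    # Drop-in replacement for text_generator() above, e.g.:
    # cosyvoice.inference_zero_shot(sentence_stream(llm_chunks), '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)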
    

    CosyVoice2 vllm Usage

    If you want to use vllm for inference, please install vllm==v0.9.0; older vllm versions do not support CosyVoice2 inference.

    Note that vllm==v0.9.0 pins several specific dependencies, for example torch==2.7.0. We recommend creating a new environment, so that your existing one is not corrupted in case your hardware does not support vllm.

    conda create -n cosyvoice_vllm --clone cosyvoice
    conda activate cosyvoice_vllm
    pip install vllm==v0.9.0 -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
    python vllm_example.py
    

    CosyVoice Usage

    cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-SFT', load_jit=False, load_trt=False, fp16=False)
    # sft usage
    print(cosyvoice.list_available_spks())
    # change stream=True for chunk stream inference
    for i, j in enumerate(cosyvoice.inference_sft('你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?', '中文女', stream=False)):
        torchaudio.save('sft_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
    
    cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M')
    # zero_shot usage, <|zh|><|en|><|jp|><|yue|><|ko|> for Chinese/English/Japanese/Cantonese/Korean
    prompt_speech_16k = load_wav('./asset/zero_shot_prompt.wav', 16000)
    for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
        torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
    # cross_lingual usage
    prompt_speech_16k = load_wav('./asset/cross_lingual_prompt.wav', 16000)
    for i, j in enumerate(cosyvoice.inference_cross_lingual('<|en|>And then later on, fully acquiring that company. So keeping management in line, interest in line with the asset that\'s coming into the family is a reason why sometimes we don\'t buy the whole thing.', prompt_speech_16k, stream=False)):
        torchaudio.save('cross_lingual_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
    # vc usage
    prompt_speech_16k = load_wav('./asset/zero_shot_prompt.wav', 16000)
    source_speech_16k = load_wav('./asset/cross_lingual_prompt.wav', 16000)
    for i, j in enumerate(cosyvoice.inference_vc(source_speech_16k, prompt_speech_16k, stream=False)):
        torchaudio.save('vc_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
    
    cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-Instruct')
    # instruct usage, supports <laughter></laughter><strong></strong>[laughter][breath]
    for i, j in enumerate(cosyvoice.inference_instruct('在面对挑战时,他展现了非凡的<strong>勇气</strong>与<strong>智慧</strong>。', '中文男', 'Theo \'Crimson\', is a fiery, passionate rebel leader. Fights with fervor for justice, but struggles with impulsiveness.', stream=False)):
        torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
    

    Start web demo

    You can use our web demo page to get familiar with CosyVoice quickly.

    Please see the demo website for details.

    # change to iic/CosyVoice-300M-SFT for sft inference, or iic/CosyVoice-300M-Instruct for instruct inference
    python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M
    

    Advanced Usage

    For advanced users, we have provided training and inference scripts in examples/libritts/cosyvoice/run.sh.

    Build for deployment

    Optionally, if you want to deploy CosyVoice as a service, you can run the following steps.

    cd runtime/python
    docker build -t cosyvoice:v1.0 .
    # change iic/CosyVoice-300M to iic/CosyVoice-300M-Instruct if you want to use instruct inference
    # for grpc usage
    docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/grpc && python3 server.py --port 50000 --max_conc 4 --model_dir iic/CosyVoice-300M && sleep infinity"
    cd grpc && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
    # for fastapi usage
    docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/fastapi && python3 server.py --port 50000 --model_dir iic/CosyVoice-300M && sleep infinity"
    cd fastapi && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
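
    If you want to call the FastAPI service without the bundled client, a raw HTTP request along the following lines should work. The endpoint name and form fields below are assumptions that mirror client.py in spirit; check runtime/python/fastapi/client.py for the authoritative request format.

    # Hedged sketch of a raw HTTP call to the FastAPI service; the endpoint name
    # and fields are assumptions -- see runtime/python/fastapi/client.py for the real format.
    import requests

    payload = {'tts_text': '你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?', 'spk_id': '中文女'}
    with requests.get('http://127.0.0.1:50000/inference_sft', data=payload, stream=True) as resp:
        resp.raise_for_status()
        with open('sft_demo.pcm', 'wb') as f:
            for chunk in resp.iter_content(chunk_size=16000):
                f.write(chunk)  # raw audio bytes streamed back by the server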
    

    Discussion & Communication

    You can discuss directly on GitHub Issues.

    You can also scan the QR code to join our official DingTalk chat group.

    Acknowledgements

    1. We borrowed a lot of code from FunASR.
    2. We borrowed a lot of code from FunCodec.
    3. We borrowed a lot of code from Matcha-TTS.
    4. We borrowed a lot of code from AcademiCodec.
    5. We borrowed a lot of code from WeNet.

    Citations

    @article{du2024cosyvoice,
      title={Cosyvoice: A scalable multilingual zero-shot text-to-speech synthesizer based on supervised semantic tokens},
      author={Du, Zhihao and Chen, Qian and Zhang, Shiliang and Hu, Kai and Lu, Heng and Yang, Yexin and Hu, Hangrui and Zheng, Siqi and Gu, Yue and Ma, Ziyang and others},
      journal={arXiv preprint arXiv:2407.05407},
      year={2024}
    }
    
    @article{du2024cosyvoice2,
      title={Cosyvoice 2: Scalable streaming speech synthesis with large language models},
      author={Du, Zhihao and Wang, Yuxuan and Chen, Qian and Shi, Xian and Lv, Xiang and Zhao, Tianyu and Gao, Zhifu and Yang, Yexin and Gao, Changfeng and Wang, Hui and others},
      journal={arXiv preprint arXiv:2412.10117},
      year={2024}
    }
    
    @article{du2025cosyvoice,
      title={CosyVoice 3: Towards In-the-wild Speech Generation via Scaling-up and Post-training},
      author={Du, Zhihao and Gao, Changfeng and Wang, Yuxuan and Yu, Fan and Zhao, Tianyu and Wang, Hao and Lv, Xiang and Wang, Hui and Shi, Xian and An, Keyu and others},
      journal={arXiv preprint arXiv:2505.17589},
      year={2025}
    }
    
    @inproceedings{lyu2025build,
      title={Build LLM-Based Zero-Shot Streaming TTS System with Cosyvoice},
      author={Lyu, Xiang and Wang, Yuxuan and Zhao, Tianyu and Wang, Hao and Liu, Huadai and Du, Zhihao},
      booktitle={ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
      pages={1--2},
      year={2025},
      organization={IEEE}
    }
    

    Disclaimer

    The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes on your rights, please contact us to request its removal.
