Demo | Paper | Hugging Face | Space Demo
This repository is the official repository for “LeVo: High-Quality Song Generation with Multi-Preference Alignment” (NeurIPS 2025). It provides the SongGeneration model, inference scripts, pretrained checkpoints, and several music generation tools.
- 2025.10.16 🔥: Our Demo webpage now supports full-length song generation (up to 4m30s)! 🎶 Experience end-to-end music generation with vocals and accompaniment — try it out now!
- 2025.10.15 🔥: We have updated the codebase to improve inference speed and generation quality, and adapted it to the latest model version. Please update to the newest code to ensure the best performance and user experience.
- 2025.10.14 🔥: We have released the large model (SongGeneration-large).
- 2025.10.13 🔥: We have released the full-length model (SongGeneration-base-full) together with its evaluation results.
- 2025.10.12 🔥: We have released the English-enhanced model (SongGeneration-base-new).
- 2025.09.23 🔥: We have released the Data Processing Pipeline, which analyzes the structure and lyrics of entire songs and provides precise timestamps without requiring additional source separation. On the human-annotated test set SSLD-200, it outperforms mainstream models including Gemini-2.5, Seed-ASR, and Qwen3-ASR.
- 2025.07.25 🔥: SongGeneration can now run with as little as 10GB of GPU memory.
- 2025.07.18 🔥: SongGeneration now supports generation of pure music, pure vocals, and dual-track (vocals + accompaniment separately) outputs.
- 2025.06.16 🔥: We have released the SongGeneration series.
- Release SongGeneration-v1.5 (trained on a larger multilingual dataset, supporting more languages, and integrating a Reward Model with Reinforcement Learning to enhance musicality and lyric alignment)
- Release finetuning scripts.
- Release Music Codec and VAE.
- Release large model.
- Release full-length model.
- Release English enhanced model.
- Release data processing pipeline.
- Update low-memory usage model.
- Support single vocal/bgm track generation.
| Model | Max Length | Language | GPU Memory | RFT (A100) | Download Link |
|---|---|---|---|---|---|
| SongGeneration-base | 2m30s | zh | 10G / 16G | 1.26 | Huggingface |
| SongGeneration-base-new | 2m30s | zh, en | 10G / 16G | 1.26 | Huggingface |
| SongGeneration-base-full | 4m30s | zh, en | 12G / 18G | 1.30 | Huggingface |
| SongGeneration-large | 4m30s | zh, en | 22G / 28G | 1.51 | Huggingface |
| SongGeneration-v1.5-small | 2m | zh, en, es, ja, etc. | - | - | Coming soon |
| SongGeneration-v1.5-base | 4m30s | zh, en, es, ja, etc. | - | - | Coming soon |
| SongGeneration-v1.5-large | 4m30s | zh, en, es, ja, etc. | - | - | Coming soon |
💡 Notes:
- GPU Memory: an entry "X / Y" means X = memory required without prompt audio, Y = memory required with prompt audio.
- RFT: Real Forward Time (pure inference only, excluding model loading).
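For example, if RFT is read as inference time divided by generated audio duration (an assumption; the table does not define the ratio's direction), SongGeneration-base at RFT 1.26 would need roughly 1.26 × 150 s ≈ 189 s of pure inference to synthesize a full 2m30s song on an A100, using about 10 GB of GPU memory without a prompt audio and 16 GB with one.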
We develop the SongGeneration model. It is an LM-based framework consisting of LeLM and a music codec. LeLM models two types of tokens in parallel: mixed tokens, which represent the combined audio of vocals and accompaniment to achieve vocal-instrument harmony, and dual-track tokens, which encode vocals and accompaniment separately for high-quality song generation. The music codec reconstructs the dual-track tokens into high-fidelity music audio. SongGeneration significantly outperforms open-source music generation models and performs competitively with current state-of-the-art industry systems. For more details, please refer to our paper.
You can install the necessary dependencies from the requirements files, with Python >= 3.8.12 and CUDA >= 11.8:

```
pip install -r requirements.txt
pip install -r requirements_nodeps.txt --no-deps
```
(Optional) Then install Flash Attention from its GitHub releases, choosing the wheel whose filename matches your environment (CUDA, PyTorch, and Python versions). For example, with Python 3.10, CUDA 12.x, and PyTorch 2.6:

```
pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.6cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
```
Alternatively, you can run everything inside the prebuilt Docker image:

```
docker pull juhayna/song-generation-levo:hf0613
docker run -it --gpus all --network=host juhayna/song-generation-levo:hf0613 /bin/bash
```
- Windows platform with ComfyUI: https://github.com/smthemex/ComfyUI_SongGeneration
- Windows installer: http://bilibili.com/video/BV1ATK8zQE8L/?vd_source=22cfc54298226c4161b1aff457d17585
- Quick start with ComfyUI on CNB: https://cnb.cool/tencent/tencent-ailab/examples/SongGeneration-comfyui
To ensure the model runs correctly, please download all the required folders from the original source at Hugging Face.
- Download the `ckpt` and `third_party` folders from Hugging Face and move them into the root directory of the project. You can also download them with `huggingface-cli`:

  ```
  huggingface-cli download lglg666/SongGeneration-Runtime --local-dir ./runtime
  mv runtime/ckpt ckpt
  mv runtime/third_party third_party
  ```
- Download the specific model checkpoint and save it to your chosen checkpoint directory `ckpt_path`. We provide multiple versions of model checkpoints; select the most suitable version for your needs and download the corresponding files, making sure the folder name matches the model version name. You can also download the models with `huggingface-cli`:

  ```
  # download SongGeneration-base
  huggingface-cli download lglg666/SongGeneration-base --local-dir ./songgeneration_base
  # download SongGeneration-base-new
  huggingface-cli download lglg666/SongGeneration-base-new --local-dir ./songgeneration_base_new
  # download SongGeneration-base-full
  huggingface-cli download lglg666/SongGeneration-base-full --local-dir ./songgeneration_base_full
  # download SongGeneration-large
  huggingface-cli download lglg666/SongGeneration-large --local-dir ./songgeneration_large
  ```
Once everything is set up, you can run the inference script with the following command:

```
sh generate.sh ckpt_path lyrics.jsonl output_path
```
- Provide inputs in JSON Lines (`.jsonl`) format, where each line represents an individual song generation request. The model expects each input to contain the following fields:
  - `idx`: a unique identifier for the output song; it is used as the name of the generated audio file.
  - `gt_lyric`: the lyrics to be used in generation. They must follow the format `[Structure] Text`, where `Structure` defines the musical section (e.g., `[verse]`, `[chorus]`). See the Input Guide.
  - `descriptions`: (optional) a text prompt to guide the model's generation; it can include attributes such as gender, timbre, genre, emotion, instrument, and BPM. See the Input Guide.
  - `prompt_audio_path`: (optional) path to a 10-second reference audio file. If provided, the model generates a new song in a style similar to the reference.
  - `auto_prompt_audio_type`: (optional) used only if `prompt_audio_path` is not provided; lets the model automatically select a reference audio from a predefined library for a given style. Supported values: `'Pop'`, `'R&B'`, `'Dance'`, `'Jazz'`, `'Folk'`, `'Rock'`, `'Chinese Style'`, `'Chinese Tradition'`, `'Metal'`, `'Reggae'`, `'Chinese Opera'`, `'Auto'`.

  Note: optional fields that are not needed can simply be omitted. An illustrative request line is shown below.
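For illustration, a single request line might look like the following; the `idx` value and field contents are placeholders, and real examples ship in `sample/lyrics.jsonl`:

```
{"idx": "demo_song", "gt_lyric": "[intro-short] ; [verse] These faded memories of us. I can't erase the tears you cried before ; [chorus] Like a fool begs for supper. I find myself waiting for her ; [outro-short]", "descriptions": "female, bright, pop, sad, piano and drums, the bpm is 90"}
```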
- Outputs are written to `output_path`:
  - `audio`: generated audio files
  - `jsonl`: output `.jsonl` files
- An example command may look like:

  ```
  sh generate.sh songgeneration_base sample/lyrics.jsonl sample/output
  ```
If you encounter out-of-memory (OOM) issues, you can manually enable low-memory inference mode with the `--low_mem` flag. For example:

```
sh generate.sh ckpt_path lyrics.jsonl output_path --low_mem
```
If your GPU does not support Flash Attention, or Flash Attention is not installed in your environment, you can disable it by adding the `--not_use_flash_attn` flag. For example:

```
sh generate.sh ckpt_path lyrics.jsonl output_path --not_use_flash_attn
```
By default, the model generates songs with both vocals and accompaniment. To generate pure music, pure vocals, or separated vocal and accompaniment tracks, use the following flags:

- `--bgm`: generate pure music
- `--vocal`: generate vocal-only audio (a cappella)
- `--separate`: generate separated vocal and accompaniment tracks

For example:

```
sh generate.sh ckpt_path lyrics.jsonl output_path --separate
```
An example input file can be found in `sample/lyrics.jsonl`.
The `gt_lyric` field defines the lyrics and structure of the song. It consists of multiple musical sections, each starting with a structure label. The model uses these labels to guide the musical and lyrical progression of the generated song.
- The following segments must not contain lyrics (they are purely instrumental): `[intro-short]`, `[intro-medium]`, `[inst-short]`, `[inst-medium]`, `[outro-short]`, `[outro-medium]`
  - `short` indicates a segment of approximately 0–10 seconds
  - `medium` indicates a segment of approximately 10–20 seconds
  - We find the `[inst]` labels to be less stable, so we recommend not using them.
- The following segments require lyrics: `[verse]`, `[chorus]`, `[bridge]`
- Sections are separated by `;`
- Within lyrical segments (`[verse]`, `[chorus]`, `[bridge]`), lyrics must be written in complete sentences and separated by a period (`.`)
A complete lyric string may look like:
[intro-short] ; [verse] These faded memories of us. I can't erase the tears you cried before. Unchained this heart to find its way. My peace won't beg you to stay ; [bridge] If ever your truth still remains. Turn around and see. Life rearranged its games. All these lessons in mistakes. Even years may never erase ; [inst-short] ; [chorus] Like a fool begs for supper. I find myself waiting for her. Only to find the broken pieces of my heart. That was needed for my soul to love again ; [outro-short]
- More examples can be found in `sample/test_en_input.jsonl` and `sample/test_zh_input.jsonl`; a small format-checking sketch follows this list.
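As a quick sanity check before generation, a minimal sketch like the one below (hypothetical; not part of this repository) can validate a `gt_lyric` string against the rules above: instrumental labels must carry no text, lyrical labels must carry text, and sections are split on `;`.

```python
# Hypothetical helper (not from this repo): checks a gt_lyric string
# against the formatting rules described in this guide.
INSTRUMENTAL = {"[intro-short]", "[intro-medium]", "[inst-short]",
                "[inst-medium]", "[outro-short]", "[outro-medium]"}
LYRICAL = {"[verse]", "[chorus]", "[bridge]"}

def check_gt_lyric(gt_lyric):
    """Return a list of formatting problems (empty if none were found)."""
    problems = []
    for section in gt_lyric.split(";"):
        # Each section starts with a structure label, optionally followed by text.
        label, _, text = section.strip().partition(" ")
        if label in INSTRUMENTAL and text.strip():
            problems.append(f"{label} is instrumental and must not contain lyrics")
        elif label in LYRICAL and not text.strip():
            problems.append(f"{label} requires lyrics")
        elif label not in INSTRUMENTAL | LYRICAL:
            problems.append(f"unknown structure label: {label!r}")
    return problems

# The short string below follows the rules, so no problems are reported.
print(check_gt_lyric("[intro-short] ; [verse] My peace won't beg you to stay ; [outro-short]"))
# -> []
```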
The `descriptions` field allows you to control various musical attributes of the generated song. It can describe up to six musical dimensions: gender (e.g., male, female), timbre (e.g., dark, bright, soft), genre (e.g., pop, jazz, rock), emotion (e.g., sad, energetic, romantic), instrument (e.g., piano, drums, guitar), and BPM (e.g., "the bpm is 120").
- All six dimensions are optional; you can specify any subset of them.
- The order of dimensions is flexible.
- Use commas (`,`) to separate different attributes.
- Although the model supports an open vocabulary, we recommend using the predefined tags for more stable and reliable performance. A list of commonly supported tags for each dimension is available in the `sample/description/` folder.
- Here are a few valid `descriptions` inputs:
  - female, dark, pop, sad, piano and drums.
  - male, piano, jazz.
  - male, dark, the bpm is 110.
- The input audio file can be longer than 10 seconds, but only the first 10 seconds will be used.
- For the best musicality and structure, we recommend using the chorus section of a song as the prompt audio.
- You can use this field to influence genre, instrumentation, rhythm, and voice.
- Avoid providing both `prompt_audio_path` and `descriptions` at the same time. If both are present and convey conflicting information, the model may struggle to follow the instructions accurately, resulting in degraded generation quality.
- If `prompt_audio_path` is not provided, you can instead use `auto_prompt_audio_type` for automatic reference selection. Illustrative requests are shown after this list.
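For illustration, here are two hypothetical request lines, one using a reference audio and one relying on automatic selection; the `idx` values, lyrics, and audio path are placeholders:

```
{"idx": "demo_ref", "gt_lyric": "[intro-short] ; [verse] These faded memories of us. I can't erase the tears you cried before ; [outro-short]", "prompt_audio_path": "sample/my_chorus_clip.wav"}
{"idx": "demo_auto", "gt_lyric": "[intro-short] ; [verse] These faded memories of us. I can't erase the tears you cried before ; [outro-short]", "auto_prompt_audio_type": "Jazz"}
```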
You can start the UI with the following command:

```
sh tools/gradio/run.sh ckpt_path
```
Evaluation on the Chinese test set (CE, CU, PC, PQ are Audiobox Aesthetics scores; COH, MUS, MEM, CLA, NAT are SongEval scores):

| Model | Open-Source | PER↓ | CE↑ | CU↑ | PC↑ | PQ↑ | COH↑ | MUS↑ | MEM↑ | CLA↑ | NAT↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Suno | ❌ | 21.6% | 7.65 | 7.86 | 5.94 | 8.35 | 4.41 | 4.34 | 4.44 | 4.38 | 4.26 |
| Mureka | ❌ | 7.2% | 7.71 | 7.83 | 6.39 | 8.44 | 4.01 | 3.85 | 3.73 | 3.87 | 3.75 |
| Haimian | ❌ | 11.8% | 7.56 | 7.85 | 5.89 | 8.27 | 3.69 | 3.43 | 3.51 | 3.52 | 3.34 |
| ACE-Step | ✅ | 37.1% | 7.37 | 7.52 | 6.26 | 7.85 | 3.68 | 3.45 | 3.54 | 3.48 | 3.38 |
| Diffrhythm-v1.2 | ✅ | 8.78% | 6.91 | 7.45 | 5.45 | 7.99 | 2.93 | 2.60 | 2.70 | 2.71 | 2.60 |
| YUE | ✅ | 14.9% | 7.29 | 7.53 | 6.19 | 7.96 | 3.68 | 3.43 | 3.49 | 3.49 | 3.42 |
| SongGeneration-base | ✅ | 7.2% | 7.78 | 7.90 | 6.03 | 8.42 | 3.96 | 3.80 | 3.85 | 3.74 | 3.71 |
| SongGeneration-base-new | ✅ | 5.7% | 7.82 | 7.94 | 6.07 | 8.43 | 4.07 | 3.92 | 3.98 | 3.93 | 3.86 |
| SongGeneration-base-full | ✅ | 8.4% | 7.81 | 7.94 | 6.07 | 8.41 | 4.02 | 3.88 | 3.94 | 3.87 | 3.80 |
| SongGeneration-large | ✅ | 5.1% | 7.82 | 7.95 | 6.09 | 8.46 | 4.08 | 3.94 | 4.00 | 3.94 | 3.87 |
Evaluation on the English test set:

| Model | Open-Source | PER↓ | CE↑ | CU↑ | PC↑ | PQ↑ | COH↑ | MUS↑ | MEM↑ | CLA↑ | NAT↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Suno | ❌ | 15.6% | 7.64 | 7.85 | 5.84 | 8.19 | 4.49 | 4.35 | 4.47 | 4.35 | 4.23 |
| Mureka | ❌ | 12.6% | 7.71 | 7.93 | 6.46 | 8.39 | 4.06 | 3.88 | 3.90 | 3.90 | 3.73 |
| Haimian | ❌ | 26.6% | 7.85 | 8.01 | 5.28 | 8.44 | 3.83 | 3.68 | 3.71 | 3.61 | 3.45 |
| ACE-Step | ✅ | 32.1% | 7.19 | 7.37 | 6.16 | 7.57 | 3.59 | 3.34 | 3.43 | 3.36 | 3.27 |
| Diffrhythm-v1.2 | ✅ | 17.8% | 7.02 | 7.58 | 5.96 | 7.81 | 3.51 | 3.12 | 3.32 | 3.21 | 3.08 |
| YUE | ✅ | 27.3% | 7.04 | 7.22 | 5.89 | 7.67 | 3.58 | 3.24 | 3.42 | 3.37 | 3.30 |
| SongGeneration-base | ✅ | - | - | - | - | - | - | - | - | - | - |
| SongGeneration-base-new | ✅ | 16.2% | 7.78 | 7.97 | 6.03 | 8.37 | 4.05 | 3.90 | 3.99 | 3.91 | 3.79 |
| SongGeneration-base-full | ✅ | 20.1% | 7.76 | 7.98 | 5.96 | 8.39 | 4.02 | 3.87 | 3.97 | 3.86 | 3.74 |
| SongGeneration-large | ✅ | 14.9% | 7.85 | 8.05 | 6.17 | 8.46 | 4.08 | 3.94 | 4.03 | 3.93 | 3.82 |
- The evaluation results of SongGeneration are based on 200 generated songs: 100 using descriptions and 100 using `auto_prompt_audio_type=Auto`. We also provide 40 English and 40 Chinese example inputs in `sample/test_en_input.jsonl` and `sample/test_zh_input.jsonl` for reference.
- Since the model attempts to clone the timbre and musical style of the given prompt audio, the choice of prompt audio can significantly affect generation performance and may cause fluctuations in the evaluation metrics.
- The format of the input lyrics has a strong impact on generation quality. If the output quality appears suboptimal, please check whether your lyrics format is correct. You can find more examples of properly formatted inputs in `sample/test_en_input.jsonl` and `sample/test_zh_input.jsonl`.
```
@article{lei2025levo,
  title={LeVo: High-Quality Song Generation with Multi-Preference Alignment},
  author={Lei, Shun and Xu, Yaoxun and Lin, Zhiwei and Zhang, Huaicheng and Tan, Wei and Chen, Hangting and Yu, Jianwei and Zhang, Yixuan and Yang, Chenyu and Zhu, Haina and Wang, Shuai and Wu, Zhiyong and Yu, Dong},
  journal={arXiv preprint arXiv:2506.07520},
  year={2025}
}
```
The code and weights in this repository are released under the terms specified in the LICENSE file.
Use WeChat or QQ to scan the QR code below.