Improving Audio Codec-based Zero-Shot Text-to-Speech Synthesis with Multi-Modal Context and Large Language Model
Abstract
Recent advances in large language models (LLMs) and the development of audio codecs have greatly propelled zero-shot TTS: such systems can synthesize personalized speech from only a 3-second recording of an unseen speaker used as an acoustic prompt. However, they support only short speech prompts and cannot leverage longer context information, as required in audiobook and conversational TTS scenarios. In this paper, we introduce a novel audio codec-based TTS model that adapts context features with multiple enhancements. Inspired by the success of Qformer, we propose a multi-modal context-enhanced Qformer (MMCE-Qformer) to utilize additional multi-modal context information. In addition, we adapt a pretrained LLM, leveraging its understanding ability to predict semantic tokens, and use a SoundStorm model to generate acoustic tokens, thereby enhancing audio quality and speaker similarity. Extensive objective and subjective evaluations show that our proposed method outperforms baselines across various context TTS scenarios.
Model Architecture
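As a rough illustration of the pipeline described in the abstract, the sketch below shows how a Qformer-style context encoder can compress long multi-modal context into a fixed number of vectors that are prepended to a semantic-token language model; a SoundStorm-style non-autoregressive model (omitted here) would then turn the predicted semantic tokens into acoustic tokens. This is a minimal PyTorch sketch under our own assumptions, not the authors' implementation: all class names, layer counts, dimensions, and vocabulary sizes are illustrative, and the actual system adapts a pretrained LLM rather than the small Transformer used here.

```python
# Minimal sketch (illustrative only, not the paper's code) of a Qformer-style
# multi-modal context encoder feeding a semantic-token language model.
# Dimensions, vocabulary sizes, and class names are assumptions for the demo.
import torch
import torch.nn as nn


class QFormerContextEncoder(nn.Module):
    """Learnable queries cross-attend over concatenated multi-modal context."""

    def __init__(self, d_model=512, n_queries=32, n_layers=4, n_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, d_model) * 0.02)
        layer = nn.TransformerDecoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.blocks = nn.TransformerDecoder(layer, num_layers=n_layers)

    def forward(self, context_feats):
        # context_feats: (B, T_ctx, d_model), e.g. embeddings of neighbouring
        # sentences (text context) and utterances (audio context) concatenated
        # along the time axis.
        q = self.queries.unsqueeze(0).expand(context_feats.size(0), -1, -1)
        # Cross-attention returns a fixed number of context vectors no matter
        # how long the surrounding context is.
        return self.blocks(tgt=q, memory=context_feats)  # (B, n_queries, d_model)


class SemanticTokenLM(nn.Module):
    """Decoder-only LM over semantic tokens, prefixed by the context vectors."""

    def __init__(self, vocab_size=1024, d_model=512, n_layers=6, n_heads=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, context_vecs, semantic_tokens):
        x = torch.cat([context_vecs, self.embed(semantic_tokens)], dim=1)
        t = x.size(1)
        causal = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        h = self.blocks(x, mask=causal)
        # Logits only over the semantic-token positions; a SoundStorm-style
        # model would later map sampled semantic tokens to acoustic tokens.
        return self.head(h[:, context_vecs.size(1):])


if __name__ == "__main__":
    torch.manual_seed(0)
    ctx_encoder = QFormerContextEncoder()
    lm = SemanticTokenLM()
    context_feats = torch.randn(2, 100, 512)           # fused text+audio context
    semantic_tokens = torch.randint(0, 1024, (2, 50))  # target-side semantic ids
    ctx_vecs = ctx_encoder(context_feats)
    logits = lm(ctx_vecs, semantic_tokens)
    print(logits.shape)  # torch.Size([2, 50, 1024])
```

The point of the fixed query set is that arbitrarily long text or audio context adds nothing to the language model's sequence length, which is what allows paragraph- or dialogue-level context to be used instead of a short 3-second prompt alone.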
LibriTTS Evaluation
Groundtruth: the ground-truth speech.
Reconstruct: the ground-truth speech reconstructed by SpeechTokenizer.
VALL-E: the open-source implementation of VALL-E, with SpeechTokenizer used as the audio codec instead of EnCodec.
SoundStorm: uses only the AR stage of the VALL-E model above, replacing its NAR stage with the same SoundStorm model as ours.
XTTS: an open-source zero-shot TTS system similar to TortoiseTTS; it uses a discrete VAE as the audio codec and an AR model, but conditions on the speaker through fixed-length vectors produced by a Perceiver model.
Proposed-w/o-context: our proposed method without MMCE-Qformer.
Proposed-w/-text: our proposed method using only text context.
Proposed-w/-audio: our proposed method using audio context.
Proposed: our proposed method using MMCE-Qformer with multi-modal context.
Sample 1
Text: then, if she observes any expression of discontent or insubmission in mary's countenance, the mother would add,
Groundtruth | Reconstruct | Prompt | VALL-E | SoundStorm | XTTS | Proposed-w/o-context | Proposed-w/-text | Proposed-w/-audio | Proposed |
---|---|---|---|---|---|---|---|---|---|
Sample 2
Text: he twisted himself like an eel between the outstretched arms of the courtiers, and over the soldiers' muskets he jumped like a little rabbit.
Groundtruth | Reconstruct | Prompt | VALL-E | SoundStorm | XTTS | Proposed-w/o-context | Proposed-w/-text | Proposed-w/-audio | Proposed |
---|---|---|---|---|---|---|---|---|---|
Sample 3
Text: the immense blade was so heavy that it took the strength of seven blueskins to raise it.
Groundtruth | Reconstruct | Prompt | VALL-E | SoundStorm | XTTS | Proposed-w/o-context | Proposed-w/-text | Proposed-w/-audio | Proposed |
---|---|---|---|---|---|---|---|---|---|
Sample 4
Text: however, verne gives his hero's brilliance and benevolence a dark underside the man's obsessive hate for his old enemy.
Groundtruth | Reconstruct | Prompt | VALL-E | SoundStorm | XTTS | Proposed-w/o-context | Proposed-w/-text | Proposed-w/-audio | Proposed |
---|---|---|---|---|---|---|---|---|---|
Sample 5
Text: therefore her majesty paid no attention to anyone and no one paid any attention to her.
Groundtruth | Reconstruct | Prompt | VALL-E | SoundStorm | XTTS | Proposed-w/o-context | Proposed-w/-text | Proposed-w/-audio | Proposed |
---|---|---|---|---|---|---|---|---|---|
IEMOCAP Evaluation
The compared systems are the same as those described in the LibriTTS evaluation above.
Sample 1
Text: On the contrary, a child of two could get violently drunk on only one glass of brandy.
Groundtruth | Reconstruct | Prompt | VALL-E | SoundStorm | XTTS | Proposed-w/o-context | Proposed-w/-text | Proposed-w/-audio | Proposed |
---|---|---|---|---|---|---|---|---|---|
Sample 2
Text: Are you crazy?
Groundtruth | Reconstruct | Prompt | VALL-E | SoundStorm | XTTS | Proposed-w/o-context | Proposed-w/-text | Proposed-w/-audio | Proposed |
---|---|---|---|---|---|---|---|---|---|
Sample 3
Text: I'd rather not remember some things. I'd rather not hope for some things.
Groundtruth | Reconstruct | Prompt | VALL-E | SoundStorm | XTTS | Proposed-w/o-context | Proposed-w/-text | Proposed-w/-audio | Proposed |
---|---|---|---|---|---|---|---|---|---|
Sample 4
Text: 'cause you know what you get Carla, you know what you get? This.
Groundtruth | Reconstruct | Prompt | VALL-E | SoundStorm | XTTS | Proposed-w/o-context | Proposed-w/-text | Proposed-w/-audio | Proposed |
---|---|---|---|---|---|---|---|---|---|
Sample 5
Text: You don't understand anything I'm saying.
Groundtruth | Reconstruct | Prompt | VALL-E | SoundStorm | XTTS | Proposed-w/o-context | Proposed-w/-text | Proposed-w/-audio | Proposed |
---|---|---|---|---|---|---|---|---|---|