Spotlight-TTS: Spotlighting the Style via Voiced-Aware Style Extraction and Style Direction Adjustment for Expressive Text-to-Speech

Abstract

  Recent advances in expressive text-to-speech (TTS) have introduced diverse methods based on style embeddings extracted from reference speech. However, synthesizing high-quality expressive speech remains challenging. We propose Spotlight-TTS, which exclusively emphasizes style via voiced-aware style extraction and style direction adjustment. Voiced-aware style extraction focuses on voiced regions, which are highly related to style, while maintaining continuity across different speech regions to improve expressiveness. In addition, we adjust the direction of the extracted style so that it integrates optimally into the TTS model, which improves speech quality. Experimental results demonstrate that Spotlight-TTS achieves superior performance compared to baseline models in terms of expressiveness, overall speech quality, and style transfer capability.
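
As a rough illustration of the voiced-aware style extraction idea described above (not the authors' implementation; the module names, dimensions, and the choice of a GRU encoder are assumptions), a style extractor can read per-frame style features only from voiced frames while a learnable "unvoiced filler" embedding stands in for unvoiced frames, so the sequence stays continuous:

```python
# Minimal sketch of voiced-aware style extraction (assumed details).
import torch
import torch.nn as nn


class VoicedAwareStyleExtractor(nn.Module):
    def __init__(self, mel_dim: int = 80, style_dim: int = 128):
        super().__init__()
        self.frame_proj = nn.Linear(mel_dim, style_dim)               # per-frame style features
        self.unvoiced_filler = nn.Parameter(torch.zeros(style_dim))   # learnable filler embedding
        self.encoder = nn.GRU(style_dim, style_dim, batch_first=True)

    def forward(self, mel: torch.Tensor, voiced_mask: torch.Tensor) -> torch.Tensor:
        """
        mel:         (batch, frames, mel_dim) reference spectrogram
        voiced_mask: (batch, frames) boolean, True where the frame is voiced
        returns:     (batch, style_dim) utterance-level style embedding
        """
        frames = self.frame_proj(mel)                    # (B, T, style_dim)
        mask = voiced_mask.unsqueeze(-1).float()         # (B, T, 1)
        # Keep voiced frames; substitute the filler for unvoiced frames so the
        # encoder still sees a continuous sequence.
        frames = mask * frames + (1.0 - mask) * self.unvoiced_filler
        _, hidden = self.encoder(frames)                 # hidden: (1, B, style_dim)
        return hidden.squeeze(0)


if __name__ == "__main__":
    extractor = VoicedAwareStyleExtractor()
    mel = torch.randn(2, 200, 80)
    voiced = torch.rand(2, 200) > 0.4                    # placeholder V/UV decisions
    print(extractor(mel, voiced).shape)                  # torch.Size([2, 128])
```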





Overall framework of Spotlight-TTS

Parallel Style Transfer


Non-Parallel Style Transfer


Ablation Study

Voiced-aware Style Extraction

*RT: rotation trick

*UF: unvoiced filler

*VE: voiced extraction

Biased Self-attention

*BM: binary mask
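
The sketch below is an assumed, simplified rendering of the contrast this ablation tests: a soft bias that raises attention logits on voiced key frames versus a hard binary mask (BM) that removes unvoiced frames from the softmax entirely. The function names and exact bias form are illustrative, not the paper's implementation.

```python
# Sketch of attention biased toward voiced frames vs. a hard binary mask.
import torch


def voiced_biased_attention(q, k, v, voiced_mask, bias_scale: float = 1.0):
    """
    q, k, v:     (batch, frames, dim)
    voiced_mask: (batch, frames) boolean, True for voiced key frames
    Adds a positive bias to the logits of voiced keys so style information is
    drawn mainly from voiced regions while unvoiced frames remain reachable.
    """
    dim = q.size(-1)
    logits = q @ k.transpose(-1, -2) / dim ** 0.5         # (B, T_q, T_k)
    bias = bias_scale * voiced_mask.unsqueeze(1).float()  # (B, 1, T_k)
    weights = torch.softmax(logits + bias, dim=-1)
    return weights @ v


def binary_masked_attention(q, k, v, voiced_mask):
    """Hard variant: unvoiced keys are excluded from the softmax entirely.
    Assumes every utterance contains at least one voiced frame."""
    dim = q.size(-1)
    logits = q @ k.transpose(-1, -2) / dim ** 0.5
    logits = logits.masked_fill(~voiced_mask.unsqueeze(1), float("-inf"))
    weights = torch.softmax(logits, dim=-1)
    return weights @ v
```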

Style Direction Adjustment

*SP: style preserving loss

*SD: style disentanglement loss
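
The exact formulations of the style preserving (SP) and style disentanglement (SD) losses are not given on this page; the following is only a hypothetical sketch of what such objectives could look like, using cosine similarity between embeddings.

```python
# Hypothetical sketch of the two ablated objectives (assumed formulations).
import torch
import torch.nn.functional as F


def style_preserving_loss(style: torch.Tensor, adjusted_style: torch.Tensor) -> torch.Tensor:
    """Keep the direction-adjusted style vector close to the original style."""
    return 1.0 - F.cosine_similarity(style, adjusted_style, dim=-1).mean()


def style_disentanglement_loss(style: torch.Tensor, content: torch.Tensor) -> torch.Tensor:
    """Discourage the style vector from encoding content, here by penalising
    the squared cosine similarity between style and content embeddings."""
    return F.cosine_similarity(style, content, dim=-1).pow(2).mean()
```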