Speech Watermarking with Discrete Intermediate Representations

Paper under double-blind review

Abstract

Speech watermarking techniques can proactively mitigate the potential harms of instant voice cloning. They insert signals into speech that are imperceptible to humans but detectable by algorithms. Previous approaches typically embed watermark messages in a continuous latent space. Intuitively, however, embedding watermark information in a discrete latent space can significantly improve the robustness of watermarking systems. In this paper, we propose DiscreteWM, a novel speech watermarking framework that injects watermarks into the discrete intermediate representations of speech. Specifically, we map speech into a discrete latent space with a vector-quantized autoencoder and inject watermarks by changing the modular arithmetic relations of the discrete token IDs. To ensure the imperceptibility of the watermarks, we also propose a manipulator model that selects the candidate tokens for watermark embedding. Experimental results demonstrate that our framework achieves state-of-the-art robustness and imperceptibility simultaneously. Moreover, our flexible frame-wise approach serves as an efficient solution for both voice cloning detection and information hiding. DiscreteWM can encode 1 to 150 bits of watermark information within a 1-second speech clip, demonstrating its flexible encoding capacity.

Overall Framework

[Overview figure of the DiscreteWM framework.]
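As a rough, self-contained illustration of the mechanism described in the abstract, the Python sketch below encodes one watermark bit by forcing selected discrete token IDs to a chosen residue modulo a small base, and recovers it with a majority vote over those positions. This is a toy under stated assumptions, not the authors' implementation: the codebook size (1024), the modulus (2), and the hand-picked positions are all hypothetical, and in DiscreteWM the positions are chosen by the learned manipulator model while the tokens come from the vector-quantized autoencoder.

    import numpy as np

    def embed_bit(token_ids, positions, bit, modulus=2):
        """Snap each selected VQ token ID to the nearest ID whose residue
        (ID % modulus) equals the watermark bit. Toy stand-in for the
        modular-arithmetic embedding described in the abstract."""
        ids = np.array(token_ids, dtype=int)
        for p in positions:
            ids[p] = (ids[p] // modulus) * modulus + bit
        return ids

    def detect_bit(token_ids, positions, modulus=2):
        """Majority vote over the residues at the (assumed known) positions."""
        residues = np.array([token_ids[p] % modulus for p in positions])
        return int(np.round(residues.mean()))

    # Toy usage: 50 frames of discrete speech tokens from a 1024-entry codebook.
    rng = np.random.default_rng(0)
    tokens = rng.integers(0, 1024, size=50)
    positions = [3, 9, 17, 25, 33, 41]  # hypothetical; DiscreteWM's manipulator selects these
    marked = embed_bit(tokens, positions, bit=1)
    assert detect_bit(marked, positions) == 1

A multi-bit payload can be handled in the same spirit by assigning each bit its own group of positions and decoding each group independently.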

Information Hiding

We list demo examples from the information hiding experiment here. The baseline examples are generated with the corresponding pre-trained models, and we embed approximately 30 bits of watermark information into each speech clip. Feel free to listen to the speech samples or zoom in on the images.

Ground Truth | Chang Liu's (30 bits) | WavMark (32 bits) | Ours (32 bits)
[Audio samples and corresponding images for two example utterances, one column per system.]

AI-Generated Speech Detection

We list demo examples of AI-generated speech detection here. The baseline examples are generated with the corresponding pre-trained models. Feel free to listen to the speech samples or zoom in on the images.

Ground Truth | WavMark | SeamlessWM | Ours
[Audio samples and corresponding images for two example utterances, one column per system.]

Flexible Encoding Capacity

We visualize the watermarked speech of DiscreteWM at different encoding capacities. In the following table, 700 bits denotes that we embed 700 bits of information into the corresponding speech clip, which is enough to carry the URL of our demo page (41 bytes = 328 bits) plus an error-correction mechanism for it; a back-of-the-envelope budget for this is sketched after the table. Feel free to listen to the speech samples or zoom in on the images.

Ground Truth | 10 bits | 100 bits | 700 bits
[Audio samples and corresponding images for two example utterances, one column per encoding capacity.]
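For the 700-bit example above, the following minimal sketch checks the payload budget. The 2x repetition code is only an assumption for illustration; the actual error-correction scheme is not specified here.

    # 700-bit budget from the example above: a 41-byte URL is 328 bits,
    # and a simple 2x repetition code (illustrative only) still fits.
    url_bytes = 41
    payload_bits = url_bytes * 8          # 328 bits
    budget_bits = 700

    k = budget_bits // payload_bits       # k = 2 repetitions per bit
    protected_bits = payload_bits * k     # 656 bits

    print(payload_bits, k, protected_bits, budget_bits - protected_bits)
    # -> 328 2 656 44 (44 bits of slack remain)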