Version: 5.0.0

Audio Manipulation Detection

Phonexia audio-manipulation-detection is a tool for detecting possible cut-and-merge manipulation of audio files using a pre-trained neural network model. To learn more, visit the technology's home page.

Installation

Getting the image

You can obtain the audio manipulation detection image from Docker Hub. There are two variants of the image: one for CPU and one for GPU.

You can get the CPU image by specifying either a specific version in the tag (e.g. 2.0.0) or latest for the most recent image:

docker pull phonexia/audio-manipulation-detection:latest
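
For example, to pull the specific version mentioned above instead of latest:

docker pull phonexia/audio-manipulation-detection:2.0.0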

Running the image

You can start the microservice and list all the supported options by running:

docker run --rm -it phonexia/audio-manipulation-detection:latest --help

The output should look like this:

note

The model and license_key options are required. To obtain the model and license, contact Phonexia.

You can specify the options either via command-line arguments or via environment variables.

Run the container with the mandatory parameters:

docker run --rm -it -p 8080:8080 -v /opt/phx/models:/models phonexia/audio-manipulation-detection:latest --model /models/audio_manipulation_detection-beta-1.1.1.model --license_key ${license-key}

Replace /opt/phx/models, audio_manipulation_detection-beta-1.1.1.model, and license-key with the values corresponding to your setup.

With this command, the container will start, and the microservice will be listening on port 8080 on localhost.
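
As noted above, the same options can also be passed via environment variables instead of command-line arguments. The sketch below assumes the variable names mirror the option names (MODEL and LICENSE_KEY); verify the exact names in the --help output:

docker run --rm -it -p 8080:8080 -v /opt/phx/models:/models -e MODEL=/models/audio_manipulation_detection-beta-1.1.1.model -e LICENSE_KEY=${license-key} phonexia/audio-manipulation-detection:latest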

Performance optimization

The audio-manipulation-detection microservice supports GPU acceleration.

In the Docker images with GPU support, GPU acceleration is enabled by default. Even with GPU acceleration in use, certain processing tasks still rely on CPU resources.

To improve GPU utilization, multiple microservice instances can share a single GPU. The number of instances per GPU depends on the hardware used.

Microservice communication

gRPC API

For communication, our microservices use gRPC, which is a high-performance, open-source Remote Procedure Call (RPC) framework that enables efficient communication between distributed systems using a variety of programming languages. We use an interface definition language to specify a common interface and contracts between components. This is primarily achieved by specifying methods with parameters and return types.

Take a look at our gRPC API documentation. The audio-manipulation-detection microservice defines an AudioManipulationDetection service with a remote procedure called Detect. This procedure accepts an argument (also referred to as a "message") called DetectRequest, which contains the audio as an array of bytes, together with an optional config argument.

The DetectRequest argument is streamed, meaning that the audio may be sent across multiple requests, each containing a part of the audio. If specified, the optional config argument must be sent only with the first request. Once all the requests have been received and processed, the Detect procedure returns a message called DetectResponse, which consists of the processed audio length and an array of detected segments. Each segment contains the detection score and the start and end time of the segment.
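
To illustrate the streaming call, here is a minimal Python sketch using grpcio. The stub and message modules, as well as the field names, are assumptions derived from the description above; generate the real stubs from the published proto files and consult the gRPC API documentation for the authoritative definitions.

import grpc

# Hypothetical modules generated from the Phonexia proto files with protoc;
# the actual package path and message layout may differ.
from audio_manipulation_detection_pb2 import DetectRequest
from audio_manipulation_detection_pb2_grpc import AudioManipulationDetectionStub

CHUNK_SIZE = 64 * 1024  # stream the audio in 64 KiB chunks


def detect_requests(path):
    """Yield DetectRequest messages, each carrying one chunk of the audio file."""
    with open(path, "rb") as audio_file:
        while chunk := audio_file.read(CHUNK_SIZE):
            # The field name "audio" is an assumption based on the prose above;
            # an optional config would be attached to the first request only.
            yield DetectRequest(audio=chunk)


def main():
    # The microservice from the example above listens on localhost:8080.
    with grpc.insecure_channel("localhost:8080") as channel:
        stub = AudioManipulationDetectionStub(channel)
        response = stub.Detect(detect_requests("recording.wav"))
        for segment in response.segments:
            print(segment.start_time, segment.end_time, segment.score)


if __name__ == "__main__":
    main()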

Connecting to microservice

There are multiple ways to communicate with our microservices.

Phonexia Python client

The easiest way to get started with testing is to use our simple Python client. To get it, run:

pip install phonexia-audio-manipulation-detection-client

After the successful installation, run the following command to see the client options:

audio_manipulation_detection_client --help
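
As an illustration only, a typical invocation would point the client at the running microservice and pass it an audio file; the flag names below are hypothetical, so rely on the --help output for the actual options:

audio_manipulation_detection_client --host localhost --port 8080 --file recording.wav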

Versioning

We use Semantic Versioning.