# Configuration Script

This article describes the `configure-speech-platform.sh` script included in the
Virtual Appliance. The script automates the configuration of the Phonexia Speech
Platform so you don't have to edit the `speech-platform-values.yaml` file
manually.
## Quick start

The fastest way to configure the platform is to run automatic configuration after uploading licensed models:

```shell
/home/phonexia/scripts/configure-speech-platform.sh --auto-configure
```
This detects all uploaded models and licenses in the `/data/` folder, enables the
corresponding technologies, and configures GPU processing if a GPU is present.
When you upload a `licensed-models*.zip` file via Filebrowser, the configuration
runs automatically; no manual script execution is needed.
## Usage

```shell
/home/phonexia/scripts/configure-speech-platform.sh [OPTIONS]
```
## Parameters

### Basic configuration
| Flag | Shortcut | Description |
|---|---|---|
| `--auto-configure` | `-a` | Automatically configures the system based on the models and licenses in `/data/`. Enables GPU if detected. |
| `--technologies` | `-t` | List of technologies to configure. Use with `-a` to auto-configure only specific technologies. |
| `--microservices` | `-m` | List of microservices to configure. Use with `-a` to auto-configure only specific microservices. |
| `--enable` | `-e` | Enables the specified technology or microservice. |
| `--disable` | `-d` | Disables the specified technology or microservice. |
| `--help` | `-h` | Prints the help page with all available parameters and lists the available technologies and microservices. |
### GPU settings

| Flag | Description |
|---|---|
| `--gpu` | Configures the specified technologies or microservices to run on GPU. |
| `--no-gpu` | Configures the specified technologies or microservices to run on CPU only. |
| `--set-gpu-slices` | Sets the number of GPU time-slices for the NVIDIA device plugin configuration. |
### Scaling settings

| Flag | Description |
|---|---|
| `--on-demand` | Configures the specified technologies or microservices to start only when a task is queued (on-demand mode). |
| `--no-on-demand` | Configures the specified technologies or microservices as permanently running replicas. |
| `--replica-count` | Sets the number of replicas. Requires specifying the exact model for which the replica count applies. |
| `--set-instances-per-device` | Sets the number of concurrent tasks processed per device (controls the `instancesPerDevice` configuration). |
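As an illustration of the scaling flags, the sketch below switches one technology to on-demand mode and raises its per-device concurrency. The technology name is taken from the examples later in this article; the exact value accepted by `--set-instances-per-device` depends on your hardware, so treat these invocations as a template and check `--help` for the authoritative syntax:

```shell
# Start Whisper-based transcription workers only when tasks are queued
/home/phonexia/scripts/configure-speech-platform.sh \
  -t enhanced-speech-to-text-built-on-whisper --on-demand

# Let each device process two tasks concurrently (value is illustrative)
/home/phonexia/scripts/configure-speech-platform.sh \
  -t enhanced-speech-to-text-built-on-whisper --set-instances-per-device 2
```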
### Model settings

| Flag | Description |
|---|---|
| `--model-names` | Model names to configure. Can only be used with a single technology or microservice. Count must match `--model-versions`. |
| `--model-versions` | Model versions to configure. Can only be used with a single technology or microservice. Count must match `--model-names`. |
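A minimal sketch of pinning a specific model build for a single technology. The `<model-name>` and `<model-version>` values are placeholders, not real identifiers; list the models actually present in `/data/` (or via `--help`) before substituting them:

```shell
# Pin one model build for one technology (placeholders in angle brackets)
/home/phonexia/scripts/configure-speech-platform.sh \
  -t enhanced-speech-to-text-built-on-whisper \
  --model-names <model-name> --model-versions <model-version>
```

Because both flags accept lists whose counts must match, each entry in `--model-names` is paired positionally with the entry at the same index in `--model-versions`.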
### Internal flags

| Flag | Description |
|---|---|
| `--filebrowser` | Used only during automatic runs inside the Filebrowser pod. Changes how GPU presence is checked. Do not use manually. |
## Examples

### Auto-configure everything

Detects all available models, licenses, and hardware, then configures all technologies accordingly:

```shell
/home/phonexia/scripts/configure-speech-platform.sh --auto-configure
```
### Auto-configure specific technologies only

Run auto-configuration for only the specified technologies:

```shell
/home/phonexia/scripts/configure-speech-platform.sh -a -t speaker-identification enhanced-speech-to-text-built-on-whisper
```
### Auto-configure specific microservices only

Run auto-configuration for only the specified microservices:

```shell
/home/phonexia/scripts/configure-speech-platform.sh -a -m voiceprint-extraction enhanced-speech-to-text-built-on-whisper
```
### Enable technologies with GPU

Enable specific technologies and configure them to use GPU:

```shell
/home/phonexia/scripts/configure-speech-platform.sh -t speaker-identification enhanced-speech-to-text-built-on-whisper --enable --gpu
```
### Enable microservices with GPU

Enable specific microservices and configure them to use GPU:

```shell
/home/phonexia/scripts/configure-speech-platform.sh -m voiceprint-extraction enhanced-speech-to-text-built-on-whisper --enable --gpu
```
### Disable a technology

```shell
/home/phonexia/scripts/configure-speech-platform.sh -t emotion-recognition --disable
```
### Set GPU time-slices

Configure how many pods can share a single GPU:

```shell
/home/phonexia/scripts/configure-speech-platform.sh --set-gpu-slices 6
```