Time Analysis Extraction
Technology Description
Phonexia Time Analysis Extraction (TAE) extracts key information from dialogues in recordings, providing essential insights into the conversation flow.
This makes it easy to identify:
- Long reaction times
- Crosstalk
- Speaker responses on both channels
- Speech speed, measured in phonemes per second
Typical Usage
TAE is commonly used in contact centers to identify weak moments in dialogues. It helps improve the quality of interactions between operators and callers, and it can highlight potential stress points, such as changes in speech speed during a conversation.
Input
TAE can process both audio files and streams. TAE is primarily designed for two-channel phone call recordings, where the operator speaks on one channel and the caller on the other. It can also process mono-channel recordings, but it provides a limited set of dialogue statistics in that case.
When applied to a stream, results are generated in real time and returned with every request, even while the stream is still in progress.
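As a rough illustration of file-based processing, the sketch below sends a two-channel recording to an SPE server over HTTP and requests TAE results. The server URL, endpoint paths, credentials, and parameter names are assumptions used only to show the flow; the actual routes and authentication are described in the API documentation.

```python
import requests

# Minimal sketch only: URL, endpoints, credentials, and parameter names are
# illustrative assumptions, not the documented SPE API.
SPE_URL = "http://localhost:8600"
session = requests.Session()
session.auth = ("admin", "phonexia")  # hypothetical credentials

# Upload a two-channel phone call recording to the server's file storage.
with open("call.wav", "rb") as audio:
    session.post(f"{SPE_URL}/audiofile", params={"path": "call.wav"}, data=audio)

# Request Time Analysis Extraction results for the uploaded file.
response = session.get(f"{SPE_URL}/technologies/timeanalysis",
                       params={"path": "call.wav"})
response.raise_for_status()
print(response.json())
```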
Output
As with the rest of the SPE (Speech Processing Engine), results are provided as a JSON or XML file.
TAE offers information about both monologues and conversations.
Monologue
The Monologue section describes statistics for each channel in the recording.
It answers the following questions:
- How long did this speaker talk alone?
- How much of that time was net speech?
- What was the average speed of speech?
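To give an idea of how these per-channel statistics can be read from the parsed JSON, here is a brief sketch. The field names (`speech_length`, `net_speech_length`, `speech_rate`) and the values are hypothetical placeholders, not the exact schema; the real keys are listed in the API documentation.

```python
# Hypothetical per-channel Monologue statistics (field names are illustrative).
monologue_example = [
    {"channel": 0, "speech_length": 84.2, "net_speech_length": 71.5, "speech_rate": 11.8},
    {"channel": 1, "speech_length": 47.9, "net_speech_length": 40.3, "speech_rate": 12.6},
]

for stats in monologue_example:
    net_share = stats["net_speech_length"] / stats["speech_length"]
    print(
        f"channel {stats['channel']}: "
        f"{stats['speech_length']:.1f} s speaking alone, "
        f"{net_share:.0%} of it net speech, "
        f"{stats['speech_rate']:.1f} phonemes/s on average"
    )
```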
Conversation
The Conversation section describes interactions between the two channels, including:
- The longest and shortest reaction times, i.e., the intervals between one speaker stopping and the other starting
- The average reaction times
- The number of speaker turns in each direction
- Crosstalk details, such as when one speaker talks "over" the other
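The sketch below shows how such conversation-level numbers could be used to flag a weak moment, for example an unusually long reaction time. The field names and the 3-second threshold are assumptions chosen for illustration, not part of the documented output.

```python
# Hypothetical Conversation statistics (field names are illustrative only).
conversation_example = {
    "reaction_time_max": 4.7,   # longest pause before the other speaker replied (s)
    "reaction_time_min": 0.2,   # shortest such pause (s)
    "reaction_time_avg": 1.3,   # average reaction time (s)
    "turns_0_to_1": 18,         # operator finished, caller replied
    "turns_1_to_0": 17,         # caller finished, operator replied
    "crosstalk_length": 6.4,    # total time both channels spoke at once (s)
}

# Flag dialogues with long reaction times for review (threshold is arbitrary).
if conversation_example["reaction_time_max"] > 3.0:
    print("Long reaction time detected -- worth reviewing this dialogue.")

total_turns = conversation_example["turns_0_to_1"] + conversation_example["turns_1_to_0"]
print(f"Speaker turns: {total_turns}")
print(f"Crosstalk: {conversation_example['crosstalk_length']:.1f} s")
```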
Segmentation
This section is optional and must be explicitly enabled. It describes segments of detected voice and silence (similar to the Voice Activity Detection technology).
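When segmentation is enabled, each segment carries a channel, a time span, and a label. A minimal sketch, with hypothetical field names, of summing the detected voice on one channel:

```python
# Hypothetical segmentation output: a list of labelled time segments.
segmentation_example = [
    {"channel": 0, "start": 0.0, "end": 2.4, "label": "voice"},
    {"channel": 0, "start": 2.4, "end": 3.1, "label": "silence"},
    {"channel": 0, "start": 3.1, "end": 9.8, "label": "voice"},
]

voice_total = sum(seg["end"] - seg["start"]
                  for seg in segmentation_example if seg["label"] == "voice")
print(f"Detected voice on channel 0: {voice_total:.1f} s")
```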
More Information
For more details, refer to the corresponding chapter of the API documentation.