Class: MicLevelsAnalysisStub

mmir.env.media.WebspeechAudioInput.MicLevelsAnalysisStub

Methods

active(active) → {Boolean}

Getter/setter for the ASR-/recording-active state. Call this function with true when ASR starts and with false when ASR stops. NOTE: Setting the active state allows the analyzer to start processing when a listener for miclevelchanged is added while ASR/recording is already active (otherwise the processing would not start immediately, but only when ASR/recording is started the next time).

Parameters:
Name	Type	Description
active	Boolean	OPTIONAL If active is provided, the mic-level-analysis' (recording) active-state is set to this value.
Returns:
Type Description
Boolean	the mic-level-analysis' (recording) active-state. If the argument active was supplied, the return value equals that input value.
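The getter/setter pattern can be sketched as follows. This is an illustrative stand-in, not the actual mmir implementation (which additionally wires into the Web Audio API); the factory name createMicLevelsAnalysisStub is used here only for the example.

```javascript
// Illustrative sketch of the combined getter/setter described above --
// NOT the actual mmir implementation.
function createMicLevelsAnalysisStub() {
  var _active = false;
  return {
    // active([active]): set the state if an argument is given,
    // then return the current state
    active: function(active) {
      if (typeof active !== 'undefined') {
        _active = !!active;
      }
      return _active;
    }
  };
}

// usage: mirror the ASR lifecycle
var analysis = createMicLevelsAnalysisStub();
analysis.active(true);  // ASR/recording started
analysis.active(false); // ASR/recording stopped
```

Calling active() with no argument leaves the state unchanged and simply reports it.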
enabled(enable) → {Boolean}

Get/set the mic-level-analysis' enabled-state: if the analysis is disabled, then start will not activate the analysis (and a currently running analysis will be stopped). This function is both getter and setter: if the argument enable is provided, the enabled-state is set to that value before the current enabled-state is returned; if omitted, the current enabled-state is simply returned.

Parameters:
Name	Type	Description
enable	Boolean	OPTIONAL If enable is provided, the mic-level-analysis' enabled-state is set to this value.
Returns:
Type Description
Boolean the mic-level-analysis' enabled-state
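The way the enabled-state gates start() can be sketched like this. This is an illustrative model, not the mmir code; the isRunning helper is hypothetical and exists only to make the gating observable in the example.

```javascript
// Illustrative sketch (not the mmir implementation) of how the
// enabled-state gates start(): when disabled, start() is a no-op
// and a currently running analysis is stopped.
function createAnalysis() {
  var _enabled = true;
  var _running = false;
  return {
    enabled: function(enable) {
      if (typeof enable !== 'undefined') {
        _enabled = !!enable;
        if (!_enabled) {
          _running = false; // disabling stops a running analysis
        }
      }
      return _enabled;
    },
    start: function() {
      if (_enabled) {
        _running = true; // start only takes effect while enabled
      }
    },
    // hypothetical helper for this sketch, not part of the documented API
    isRunning: function() { return _running; }
  };
}
```

Disabling an analysis that is already running stops it immediately; re-enabling does not restart it, start() must be called again.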
start(audioInputData)

Starts the audio analysis for generating "microphone levels changed" events. This function should be called when ASR is starting / receiving the audio stream. Once the analysis has started, listeners registered on the MediaManager for the event miclevelchanged are notified when the mic-levels analysis detects changes in the microphone's audio input levels.

Parameters:
Name	Type	Description
audioInputData	AudioInputData	OPTIONAL If provided, the analysis uses these audio input objects instead of creating its own audio input via getUserMedia. The AudioInputData object must have two properties:
{
  inputSource: MediaStreamAudioSourceNode (HTML5 Web Audio API),
  audioContext: AudioContext (HTML5 Web Audio API)
}
If this argument is omitted, the analysis creates its own audio input stream via getUserMedia.
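The fallback logic for the audioInputData argument can be sketched as below. This is an illustrative model, not the mmir code: the names createMicAnalysis, getUserMediaFn, and currentInput are assumptions for the example, and getUserMediaFn stands in for the browser's getUserMedia call, which is not available outside the browser.

```javascript
// Illustrative sketch (not the mmir code) of the audioInputData
// fallback: reuse a caller-supplied audio input if it has the
// required shape, otherwise open a new one via the injected
// getUserMediaFn (a stand-in for the browser's getUserMedia).
function createMicAnalysis(getUserMediaFn) {
  var _input = null;
  return {
    start: function(audioInputData) {
      if (audioInputData && audioInputData.inputSource && audioInputData.audioContext) {
        // reuse the audio input that the caller (e.g. the ASR engine) opened
        _input = audioInputData;
      } else {
        // no (complete) audioInputData: create our own audio input
        _input = getUserMediaFn();
      }
    },
    // hypothetical helper for this sketch, not part of the documented API
    currentInput: function() { return _input; }
  };
}

// usage: share the audio input already opened for ASR
var shared = { inputSource: 'sharedSourceNode', audioContext: 'sharedContext' };
var micAnalysis = createMicAnalysis(function() {
  return { inputSource: 'ownSourceNode', audioContext: 'ownContext' };
});
micAnalysis.start(shared);
```

Reusing the ASR engine's audio input avoids opening a second getUserMedia stream (and a second microphone-permission prompt) for the same microphone.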
stop()

Stops the audio analysis for "microphone levels changed" events. This function should be called when ASR has stopped / closed the audio input stream.