mmir.env.media.WebspeechAudioInput.MicLevelsAnalysisStub
Extends
Methods
-
active(active)

Getter/Setter for the ASR-/recording-active state: this function should be called with true when ASR starts, and with false when ASR stops.

NOTE: Setting the active state allows the analyzer to start processing when a listener for miclevelchanged is added while ASR/recording is already active (otherwise the processing would not start immediately, but only the next time ASR/recording is started).

Parameters:
Name    Type    Description
active  Boolean OPTIONAL: if active is provided, then the mic-level-analysis' (recording) active-state is set to this value.

Returns:
Type    Description
Boolean the mic-level-analysis' (recording) active-state. If the argument active was supplied, the return value will equal that input value.
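For illustration, a minimal usage sketch; it assumes a variable micLevelsAnalysis already references this analysis instance (how that reference is obtained depends on the audio-input plugin's internals):

```js
// Sketch (assumes "micLevelsAnalysis" already references this analysis
// instance; obtaining that reference depends on the audio-input plugin):
function onAsrStarted(micLevelsAnalysis) {
  // mark recording as active, so that a 'miclevelchanged' listener added
  // during the recording triggers processing immediately:
  micLevelsAnalysis.active(true);
}

function onAsrStopped(micLevelsAnalysis) {
  micLevelsAnalysis.active(false);
  // calling active() without an argument only reads the current state:
  console.log('recording active?', micLevelsAnalysis.active());
}
```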
-
enabled(enable)

Get/set the mic-level-analysis' enabled-state: if the analysis is disabled, then start will not activate the analysis (and a currently running analysis will be stopped).

This function is both getter and setter: if an argument enable is provided, the mic-level-analysis' enabled-state is set to that value before the current enabled-state is returned (if omitted, the enabled-state is simply returned).

Parameters:
Name    Type    Description
enable  Boolean OPTIONAL: if enable is provided, then the mic-level-analysis' enabled-state is set to this value.

Returns:
Type    Description
Boolean the mic-level-analysis' enabled-state
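A similar sketch for the enabled-state accessor (again assuming micLevelsAnalysis references the analysis instance):

```js
// Sketch (again assuming "micLevelsAnalysis" references the analysis instance):
function setMicLevelsEnabled(micLevelsAnalysis, doEnable) {
  // setter: disabling stops a running analysis and makes start() a no-op
  // until the analysis is re-enabled; the resulting enabled-state is returned:
  return micLevelsAnalysis.enabled(doEnable);
}

function isMicLevelsEnabled(micLevelsAnalysis) {
  // getter: omit the argument to only read the current enabled-state
  return micLevelsAnalysis.enabled();
}
```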
-
inherited start(audioInputData)

Start the audio analysis for generating "microphone levels changed" events. This function should be called when ASR is starting / receiving the audio stream.

When the analysis has started, listeners on the MediaManager for the event miclevelchanged will get notified when the mic-levels analysis detects changes in the microphone audio input levels.

Parameters:
Name            Type            Description
audioInputData  AudioInputData  OPTIONAL: if provided, the analysis will use these audio input objects instead of creating its own audio input via getUserMedia. The AudioInputData object must have 2 properties:
                                  inputSource: MediaStreamAudioSourceNode (HTML5 Web Audio API)
                                  audioContext: AudioContext (HTML5 Web Audio API)
                                If this argument is omitted, the analysis will create its own audio input stream via getUserMedia.
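A sketch of handing an existing audio input to start(), assuming micLevelsAnalysis references the analysis instance and the code runs in a browser with the Web Audio API:

```js
// Sketch: supply an already opened audio input to start() instead of letting
// the analysis call getUserMedia itself. "micLevelsAnalysis" is assumed to
// reference the analysis instance.
function startAnalysisWithSharedInput(micLevelsAnalysis) {
  return navigator.mediaDevices.getUserMedia({audio: true}).then(function(stream) {
    var audioContext = new AudioContext();
    // wrap the microphone stream in a Web Audio source node:
    var inputSource = audioContext.createMediaStreamSource(stream);
    // AudioInputData argument: re-use the already opened input source & context
    micLevelsAnalysis.start({
      inputSource: inputSource,
      audioContext: audioContext
    });
  });
}

// ...or omit the argument to let the analysis open its own input stream:
// micLevelsAnalysis.start();
```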
-
stop()

Stops the audio analysis for "microphone levels changed" events. This function should be called when ASR has stopped / closed the audio input stream.
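A sketch of the overall lifecycle; note that the MediaManager accessor (mmir.media) and its addListener() function are assumptions here and should be verified against the mmir version in use:

```js
// Sketch of the overall lifecycle. The MediaManager accessor (mmir.media) and
// its addListener() function are assumptions; verify against your mmir version.
mmir.media.addListener('miclevelchanged', function(micLevel) {
  // only while a listener like this is registered (and the analysis is
  // enabled) will start()/active(true) produce mic-level events:
  console.log('current microphone input level:', micLevel);
});

function onAudioStreamOpened(micLevelsAnalysis) {
  micLevelsAnalysis.start(); // begin analysis when ASR opens the audio stream
}

function onAudioStreamClosed(micLevelsAnalysis) {
  micLevelsAnalysis.stop();  // stop analysis when ASR closes the audio stream
}
```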