mmir.env.media.WebspeechAudioInput.MicLevelsAnalysisStub
Extends
Methods
- active(active)
  Getter/Setter for the ASR-/recording-active state. This function should be called with true when ASR starts and with false when ASR stops.
  NOTE: setting the active state allows the analyzer to start processing when a listener for miclevelchanged is added while ASR/recording is already active (otherwise the processing would not start immediately, but only when ASR/recording is started the next time).
  Parameters:
    active (Boolean, optional): if active is provided, the mic-level analysis' (recording) active-state is set to this value.
  Returns:
    Boolean: the mic-level analysis' (recording) active-state. If the argument active was supplied, the return value will equal this input value.
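  A minimal usage sketch of this getter/setter pattern; micLevelsAnalysis is a hypothetical reference to the analysis instance (within the WebspeechAudioInput plugin the instance is managed internally):

      // micLevelsAnalysis: hypothetical reference to the analysis instance

      // mark the analysis as active when ASR/recording starts ...
      micLevelsAnalysis.active(true);

      // ... and as inactive when ASR/recording stops:
      micLevelsAnalysis.active(false);

      // called without an argument, the current active-state is returned:
      var isActive = micLevelsAnalysis.active();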
- enabled(enable)
  Get/set the mic-level analysis' enabled-state: if the analysis is disabled, then start will not activate the analysis (and a currently running analysis will be stopped).
  This function is both getter and setter: if an argument enable is provided, the mic-level analysis' enabled-state is set before the current value of the enabled-state is returned (if omitted, the enabled-state is simply returned).
  Parameters:
    enable (Boolean, optional): if enable is provided, the mic-level analysis' enabled-state is set to this value.
  Returns:
    Boolean: the mic-level analysis' enabled-state.
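  The same getter/setter pattern applies to the enabled-state; again, micLevelsAnalysis is a hypothetical reference to the analysis instance:

      // disable the analysis: a subsequent start() will not begin processing,
      // and a currently running analysis is stopped
      micLevelsAnalysis.enabled(false);

      // re-enable the analysis later:
      micLevelsAnalysis.enabled(true);

      // query the current enabled-state without changing it:
      var isEnabled = micLevelsAnalysis.enabled();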
- inherited start(audioInputData)
  Start the audio analysis for generating "microphone levels changed" events. This function should be called when ASR is starting / receiving the audio stream. Once the analysis has started, listeners of the MediaManager for the event miclevelchanged will get notified when the mic-levels analysis detects changes in the microphone audio input levels.
  Parameters:
    audioInputData (AudioInputData, optional): if provided, the analysis will use these audio input objects instead of creating its own audio input via getUserMedia. The AudioInputData object must have 2 properties:
      {
        inputSource: MediaStreamAudioSourceNode (HTML5 Web Audio API),
        audioContext: AudioContext (HTML5 Web Audio API)
      }
    If this argument is omitted, the analysis will create its own audio input stream via getUserMedia.
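  A sketch for supplying an existing audio input instead of letting the analysis open its own via getUserMedia. The AudioInputData shape follows the parameter description above; the surrounding setup uses the standard getUserMedia / Web Audio APIs, and micLevelsAnalysis is again a hypothetical reference to the analysis instance:

      navigator.mediaDevices.getUserMedia({audio: true}).then(function(stream){

        // create a Web Audio context and wrap the microphone stream
        var audioContext = new (window.AudioContext || window.webkitAudioContext)();
        var inputSource = audioContext.createMediaStreamSource(stream);

        // reuse the existing audio input for the mic-levels analysis
        // (micLevelsAnalysis: hypothetical reference to the analysis instance)
        micLevelsAnalysis.start({
          inputSource: inputSource,
          audioContext: audioContext
        });
      });

      // or omit the argument and let the analysis create its own
      // audio input stream via getUserMedia:
      // micLevelsAnalysis.start();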
- stop()
  Stops the audio analysis for "microphone levels changed" events. This function should be called when ASR has stopped / closed the audio input stream.
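  For completeness, a hedged sketch of the listener side: it assumes the MediaManager is reachable as mmir.media and that the listener receives the current level value (check the MediaManager documentation for the exact event signature):

      // register a listener for microphone-level changes on the MediaManager
      mmir.media.addListener('miclevelchanged', function(micLevel){
        console.log('current microphone level: ' + micLevel);
      });

      // when ASR has stopped / the audio stream is closed, stop the analysis:
      // micLevelsAnalysis.stop();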