
WebRTC voice stream in Java: speech recognition while transmitting voice from the microphone through WebView WebRTC

Published: 2025/3/15, by 豆豆

I am writing an application with a WebView that handles a voice call via WebRTC. The microphone works fine, because I have granted the WebView the permission:

webView.setWebChromeClient(new WebChromeClient() {
    @Override
    public void onPermissionRequest(final PermissionRequest request) {
        request.grant(request.getResources());
    }
});
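Note that `request.grant()` can only hand the WebView a permission the app itself already holds, so `RECORD_AUDIO` has to be granted at the app level first. A minimal sketch of that check inside the Activity hosting the WebView (the helper name and request code here are my own, not from the original post):

```java
import android.Manifest;
import android.content.pm.PackageManager;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

// Hypothetical helper for the hosting Activity: ask the user for
// RECORD_AUDIO before the page tries to use the microphone, since
// onPermissionRequest can only grant what the app already holds.
private static final int REQ_RECORD_AUDIO = 1;

private void ensureMicPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.RECORD_AUDIO}, REQ_RECORD_AUDIO);
    }
}
```

Calling this in `onCreate()` before loading the WebRTC page ensures the grant above can actually succeed.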

Later I decided to add a SpeechRecognizer so that I could recognize what I am saying during the WebRTC call. I tried running the speech recognition in the same Activity, and later moved it into a separate Service, but unfortunately the two will not work at the same time. Either the microphone is occupied by the WebView and the SpeechRecognizer receives no sound (the RMS stays at -2.12 the whole time), or, if I start the service before placing the call through the WebView, the person I call cannot hear me at all (the SpeechRecognizer occupies the microphone and the WebView gets nothing). I would like to find a solution if one exists. I am not an iOS developer, but I have heard this is possible on the iPhone, so it is a surprise that it cannot be done on an Android device. My speech recognition service code:

public class RecognitionService extends Service implements RecognitionListener {

    private String LOG_TAG = "RecognitionService";
    private SpeechRecognizer speech = null;
    private Intent recognizerIntent;

    public RecognitionService() {
    }

    @Override
    public IBinder onBind(Intent intent) {
        // TODO: Return the communication channel to the service.
        startRecognition();
        return null;
    }

    @Override
    public void onCreate() {
        Log.i("Test", "RecognitionService: onCreate");
        startRecognition();
    }

    private void startRecognition() {
        speech = SpeechRecognizer.createSpeechRecognizer(this);
        speech.setRecognitionListener(this);
        recognizerIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        recognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_PREFERENCE, "ru-RU");
        recognizerIntent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, getPackageName());
        recognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_WEB_SEARCH);
        recognizerIntent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 3);
        speech.startListening(recognizerIntent);
    }

    @Override
    public void onBeginningOfSpeech() {
        Log.i(LOG_TAG, "onBeginningOfSpeech");
    }

    @Override
    public void onBufferReceived(byte[] buffer) {
        Log.i(LOG_TAG, "onBufferReceived: " + buffer);
    }

    @Override
    public void onEndOfSpeech() {
        Log.i(LOG_TAG, "onEndOfSpeech");
    }

    @Override
    public void onError(int errorCode) {
        String errorMessage = getErrorText(errorCode);
        Log.d(LOG_TAG, "FAILED " + errorMessage);
        // Restart recognition after any error.
        speech.destroy();
        startRecognition();
    }

    @Override
    public void onEvent(int arg0, Bundle arg1) {
        Log.i(LOG_TAG, "onEvent");
    }

    @Override
    public void onPartialResults(Bundle arg0) {
        Log.i(LOG_TAG, "onPartialResults");
    }

    @Override
    public void onReadyForSpeech(Bundle arg0) {
        Log.i(LOG_TAG, "onReadyForSpeech");
    }

    @Override
    public void onResults(Bundle results) {
        Log.i(LOG_TAG, "onResults");
        ArrayList<String> matches = results
                .getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        String text = "";
        for (String result : matches)
            text += result + "\n";
        Toast.makeText(getApplicationContext(), text, Toast.LENGTH_SHORT).show();
        speech.destroy();
        startRecognition();
    }

    public static String getErrorText(int errorCode) {
        String message;
        switch (errorCode) {
            case SpeechRecognizer.ERROR_AUDIO:
                message = "Audio recording error";
                break;
            case SpeechRecognizer.ERROR_CLIENT:
                message = "Client side error";
                break;
            case SpeechRecognizer.ERROR_INSUFFICIENT_PERMISSIONS:
                message = "Insufficient permissions";
                break;
            case SpeechRecognizer.ERROR_NETWORK:
                message = "Network error";
                break;
            case SpeechRecognizer.ERROR_NETWORK_TIMEOUT:
                message = "Network timeout";
                break;
            case SpeechRecognizer.ERROR_NO_MATCH:
                message = "No match";
                break;
            case SpeechRecognizer.ERROR_RECOGNIZER_BUSY:
                message = "RecognitionService busy";
                break;
            case SpeechRecognizer.ERROR_SERVER:
                message = "Error from server";
                break;
            case SpeechRecognizer.ERROR_SPEECH_TIMEOUT:
                message = "No speech input";
                break;
            default:
                message = "Didn't understand, please try again.";
                break;
        }
        return message;
    }

    @Override
    public void onRmsChanged(float rmsdB) {
        Log.i(LOG_TAG, "onRmsChanged: " + rmsdB);
    }
}
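The post contains no confirmed fix, but the usual direction suggested for this kind of contention is to open the microphone exactly once yourself and fan the PCM buffers out to every consumer, since Android normally gives the microphone to only one client at a time. The sketch below rests on assumptions: the `AudioSink` interface and the `webRtcSink`/`recognizerSink` consumers are hypothetical, and the stock `SpeechRecognizer` cannot consume raw audio, so this route would require a streaming recognizer (for example a cloud speech API) and feeding the WebRTC stack yourself instead of letting the WebView capture.

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class MicFanOut {
    // Hypothetical consumer interface: anything that accepts raw PCM.
    public interface AudioSink {
        void onAudio(byte[] pcm, int length);
    }

    private volatile boolean recording = true;

    // Reads 16 kHz mono 16-bit PCM from the microphone and hands each
    // buffer to both consumers. Requires the RECORD_AUDIO permission
    // and should run on a background thread.
    public void capture(AudioSink webRtcSink, AudioSink recognizerSink) {
        int sampleRate = 16000;
        int bufSize = AudioRecord.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                sampleRate, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufSize);
        recorder.startRecording();
        byte[] buffer = new byte[bufSize];
        while (recording) {
            int n = recorder.read(buffer, 0, buffer.length);
            if (n > 0) {
                webRtcSink.onAudio(buffer, n);      // feed the call
                recognizerSink.onAudio(buffer, n);  // feed recognition
            }
        }
        recorder.stop();
        recorder.release();
    }

    public void stop() { recording = false; }
}
```

This only removes the contention at the capture layer; wiring the duplicated buffers into WebRTC and a recognizer is a separate, non-trivial task.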
