diff --git a/.gitignore b/.gitignore index 5a9e6d8adb..19eaf5e55d 100644 --- a/.gitignore +++ b/.gitignore @@ -34,3 +34,6 @@ /doc/xml-java/ Dockerfile.build Dockerfile.train +doc/xml-c +doc/xml-java +doc/xml-dotnet diff --git a/BIBLIOGRAPHY.md b/BIBLIOGRAPHY.md index 19b14d27b8..0640e27d41 100644 --- a/BIBLIOGRAPHY.md +++ b/BIBLIOGRAPHY.md @@ -1,5 +1,5 @@ This file contains a list of papers in chronological order that have been published -using Mozilla's DeepSpeech. +using Mozilla Voice STT. To appear ========== diff --git a/Dockerfile.build.tmpl b/Dockerfile.build.tmpl index 58bea15027..a3982312e3 100644 --- a/Dockerfile.build.tmpl +++ b/Dockerfile.build.tmpl @@ -149,12 +149,12 @@ RUN bazel build \ --copt=-msse4.2 \ --copt=-mavx \ --copt=-fvisibility=hidden \ - //native_client:libdeepspeech.so \ + //native_client:libmozilla_voice_stt.so \ --verbose_failures \ --action_env=LD_LIBRARY_PATH=${LD_LIBRARY_PATH} # Copy built libs to /DeepSpeech/native_client -RUN cp bazel-bin/native_client/libdeepspeech.so /DeepSpeech/native_client/ +RUN cp bazel-bin/native_client/libmozilla_voice_stt.so /DeepSpeech/native_client/ # Build client.cc and install Python client and decoder bindings ENV TFDIR /DeepSpeech/tensorflow @@ -162,7 +162,7 @@ ENV TFDIR /DeepSpeech/tensorflow RUN nproc WORKDIR /DeepSpeech/native_client -RUN make NUM_PROCESSES=$(nproc) deepspeech +RUN make NUM_PROCESSES=$(nproc) mozilla_voice_stt WORKDIR /DeepSpeech RUN cd native_client/python && make NUM_PROCESSES=$(nproc) bindings diff --git a/README.rst b/README.rst index 9c1b987e93..2b4729c9c1 100644 --- a/README.rst +++ b/README.rst @@ -1,5 +1,5 @@ -Project DeepSpeech -================== +Mozilla Voice STT +================= .. image:: https://readthedocs.org/projects/deepspeech/badge/?version=latest @@ -12,7 +12,7 @@ Project DeepSpeech :alt: Task Status -DeepSpeech is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on `Baidu's Deep Speech research paper `_. Project DeepSpeech uses Google's `TensorFlow `_ to make the implementation easier. +Mozilla Voice STT is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on `Baidu's Deep Speech research paper `_. Mozilla Voice STT uses Google's `TensorFlow `_ to make the implementation easier. Documentation for installation, usage, and training models is available on `deepspeech.readthedocs.io `_. diff --git a/doc/DeepSpeech.rst b/doc/AcousticModel.rst similarity index 88% rename from doc/DeepSpeech.rst rename to doc/AcousticModel.rst index 3d74d22ec0..cf70af2ebc 100644 --- a/doc/DeepSpeech.rst +++ b/doc/AcousticModel.rst @@ -1,11 +1,5 @@ -DeepSpeech Model -================ - -The aim of this project is to create a simple, open, and ubiquitous speech -recognition engine. Simple, in that the engine should not require server-class -hardware to execute. Open, in that the code and models are released under the -Mozilla Public License. Ubiquitous, in that the engine should run on many -platforms and have bindings to many different languages. +Mozilla Voice STT Acoustic Model +================================ The architecture of the engine was originally motivated by that presented in `Deep Speech: Scaling up end-to-end speech recognition `_. @@ -77,7 +71,7 @@ with respect to all of the model parameters may be done via back-propagation through the rest of the network. We use the Adam method for training `[3] `_. -The complete RNN model is illustrated in the figure below.
+The complete LSTM model is illustrated in the figure below. .. image:: ../images/rnn_fig-624x598.png - :alt: DeepSpeech BRNN + :alt: Mozilla Voice STT LSTM diff --git a/doc/BUILDING.rst b/doc/BUILDING.rst index 4d25359ad2..9b8b6066ff 100644 --- a/doc/BUILDING.rst +++ b/doc/BUILDING.rst @@ -1,12 +1,12 @@ .. _build-native-client: -Building DeepSpeech Binaries -============================ +Building Mozilla Voice STT Binaries +=================================== This section describes how to rebuild binaries. We already provide prebuilt binaries for all the supported platforms; it is highly advised to use them unless you know what you are doing. -If you'd like to build the DeepSpeech binaries yourself, you'll need the following pre-requisites downloaded and installed: +If you'd like to build the Mozilla Voice STT binaries yourself, you'll need the following pre-requisites downloaded and installed: * `Bazel 2.0.0 `_ * `General TensorFlow r2.2 requirements `_ @@ -26,14 +26,14 @@ If you'd like to build the language bindings or the decoder package, you'll also Dependencies ------------ -If you follow these instructions, you should compile your own binaries of DeepSpeech (built on TensorFlow using Bazel). +If you follow these instructions, you should compile your own binaries of Mozilla Voice STT (built on TensorFlow using Bazel). For more information on configuring TensorFlow, read the docs up to the end of `"Configure the Build" `_. Checkout source code ^^^^^^^^^^^^^^^^^^^^ -Clone DeepSpeech source code (TensorFlow will come as a submdule): +Clone Mozilla Voice STT source code (TensorFlow will come as a submodule): .. code-block:: @@ -56,24 +56,24 @@ After you have installed the correct version of Bazel, configure TensorFlow: cd tensorflow ./configure -Compile DeepSpeech ------------------- +Compile Mozilla Voice STT +------------------------- -Compile ``libdeepspeech.so`` -^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Compile ``libmozilla_voice_stt.so`` +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Within your TensorFlow directory, there should be a symbolic link to the DeepSpeech ``native_client`` directory. If it is not present, create it with the follow command: +Within your TensorFlow directory, there should be a symbolic link to the Mozilla Voice STT ``native_client`` directory. If it is not present, create it with the following command: .. code-block:: cd tensorflow ln -s ../native_client -You can now use Bazel to build the main DeepSpeech library, ``libdeepspeech.so``. Add ``--config=cuda`` if you want a CUDA build. +You can now use Bazel to build the main Mozilla Voice STT library, ``libmozilla_voice_stt.so``. Add ``--config=cuda`` if you want a CUDA build. .. code-block:: - bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libdeepspeech.so + bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libmozilla_voice_stt.so The generated binaries will be saved to ``bazel-bin/native_client/``. @@ -82,12 +82,12 @@ The generated binaries will be saved to ``bazel-bin/native_client/``.
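If you want a quick smoke test of the freshly built library without compiling any of the clients, the C API can be called directly from Python via ``ctypes``. This is only an illustrative sketch, not an official binding; it assumes the ``STT_Version`` and ``STT_FreeString`` symbols documented in ``doc/C-API.rst`` are exported by the shared object, and the path is relative to the TensorFlow checkout:

.. code-block:: python

    import ctypes

    # Load the freshly built library.
    lib = ctypes.CDLL("bazel-bin/native_client/libmozilla_voice_stt.so")

    # STT_Version returns a char* that must be released with STT_FreeString,
    # so keep the raw pointer instead of letting ctypes auto-convert it.
    lib.STT_Version.restype = ctypes.c_void_p
    lib.STT_FreeString.argtypes = [ctypes.c_void_p]

    version_ptr = lib.STT_Version()
    print(ctypes.string_at(version_ptr).decode("utf-8"))
    lib.STT_FreeString(version_ptr)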
Compile ``generate_scorer_package`` ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Following the same setup as for ``libdeepspeech.so`` above, you can rebuild the ``generate_scorer_package`` binary by adding its target to the command line: ``//native_client:generate_scorer_package``. +Following the same setup as for ``libmozilla_voice_stt.so`` above, you can rebuild the ``generate_scorer_package`` binary by adding its target to the command line: ``//native_client:generate_scorer_package``. Using the example from above, you can build the library and that binary at the same time: .. code-block:: - bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libdeepspeech.so //native_client:generate_scorer_package + bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libmozilla_voice_stt.so //native_client:generate_scorer_package The generated binaries will be saved to ``bazel-bin/native_client/``. @@ -99,7 +99,7 @@ Now, ``cd`` into the ``DeepSpeech/native_client`` directory and use the ``Makefi .. code-block:: cd ../DeepSpeech/native_client - make deepspeech + make mozilla_voice_stt Installing your own Binaries ---------------------------- @@ -121,9 +121,9 @@ Included are a set of generated Python bindings. After following the above build cd native_client/python make bindings - pip install dist/deepspeech* + pip install dist/mozilla_voice_stt* -The API mirrors the C++ API and is demonstrated in `client.py `_. Refer to `deepspeech.h `_ for documentation. +The API mirrors the C++ API and is demonstrated in `client.py `_. Refer to the C API for documentation. Install NodeJS / ElectronJS bindings ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -136,7 +136,7 @@ After following the above build and installation instructions, the Node.JS bindi make build make npm-pack -This will create the package ``deepspeech-VERSION.tgz`` in ``native_client/javascript``. +This will create the package ``mozilla_voice_stt-VERSION.tgz`` in ``native_client/javascript``. Install the CTC decoder package ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -165,23 +165,23 @@ So your command line for ``RPi3`` and ``ARMv7`` should look like: .. code-block:: - bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3 --config=rpi3_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libdeepspeech.so + bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3 --config=rpi3_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libmozilla_voice_stt.so And your command line for ``LePotato`` and ``ARM64`` should look like: ..
code-block:: - bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3-armv8 --config=rpi3-armv8_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libdeepspeech.so + bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3-armv8 --config=rpi3-armv8_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libmozilla_voice_stt.so While we test only on RPi3 Raspbian Buster and LePotato ARMBian Buster, anything compatible with ``armv7-a cortex-a53`` or ``armv8-a cortex-a53`` should be fine. -The ``deepspeech`` binary can also be cross-built, with ``TARGET=rpi3`` or ``TARGET=rpi3-armv8``. This might require you to setup a system tree using the tool ``multistrap`` and the multitrap configuration files: ``native_client/multistrap_armbian64_buster.conf`` and ``native_client/multistrap_raspbian_buster.conf``. +The ``mozilla_voice_stt`` binary can also be cross-built, with ``TARGET=rpi3`` or ``TARGET=rpi3-armv8``. This might require you to set up a system tree using the tool ``multistrap`` and the multistrap configuration files: ``native_client/multistrap_armbian64_buster.conf`` and ``native_client/multistrap_raspbian_buster.conf``. The path of the system tree can be overridden from the default values defined in ``definitions.mk`` through the ``RASPBIAN`` ``make`` variable. .. code-block:: cd ../DeepSpeech/native_client - make TARGET= deepspeech + make TARGET= mozilla_voice_stt Android devices support ----------------------- @@ -193,9 +193,9 @@ Please refer to TensorFlow documentation on how to setup the environment to buil Using the library from Android project ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -We provide uptodate and tested ``libdeepspeech`` usable as an ``AAR`` package, +We provide up-to-date and tested STT usable as an ``AAR`` package, for Android versions 7.0 to 11.0. The package is published on -`JCenter `_, +`JCenter `_, and the ``JCenter`` repository should be available by default in any Android project. Please make sure your project is set up to pull from this repository. You can then include the library by just adding this line to your @@ -203,43 +203,43 @@ .. code-block:: - implementation 'deepspeech.mozilla.org:libdeepspeech:VERSION@aar' + implementation 'voice.mozilla.org:stt:VERSION@aar' -Building ``libdeepspeech.so`` -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Building ``libmozilla_voice_stt.so`` +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -You can build the ``libdeepspeech.so`` using (ARMv7): +You can build ``libmozilla_voice_stt.so`` using (ARMv7): .. code-block:: - bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=android --config=android_arm --define=runtime=tflite --action_env ANDROID_NDK_API_LEVEL=21 --cxxopt=-std=c++14 --copt=-D_GLIBCXX_USE_C99 //native_client:libdeepspeech.so + bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=android --config=android_arm --define=runtime=tflite --action_env ANDROID_NDK_API_LEVEL=21 --cxxopt=-std=c++14 --copt=-D_GLIBCXX_USE_C99 //native_client:libmozilla_voice_stt.so Or (ARM64): ..
code-block:: - bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=android --config=android_arm64 --define=runtime=tflite --action_env ANDROID_NDK_API_LEVEL=21 --cxxopt=-std=c++14 --copt=-D_GLIBCXX_USE_C99 //native_client:libdeepspeech.so + bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=android --config=android_arm64 --define=runtime=tflite --action_env ANDROID_NDK_API_LEVEL=21 --cxxopt=-std=c++14 --copt=-D_GLIBCXX_USE_C99 //native_client:libmozilla_voice_stt.so -Building ``libdeepspeech.aar`` -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Building ``libmozillavoicestt.aar`` +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ In the unlikely event you have to rebuild the JNI bindings, source code is -available under the ``libdeepspeech`` subdirectory. Building depends on shared -object: please ensure to place ``libdeepspeech.so`` into the -``libdeepspeech/libs/{arm64-v8a,armeabi-v7a,x86_64}/`` matching subdirectories. +available under the ``libmozillavoicestt`` subdirectory. Building depends on shared +object: please ensure to place ``libmozilla_voice_stt.so`` into the +``libmozillavoicestt/libs/{arm64-v8a,armeabi-v7a,x86_64}/`` matching subdirectories. Building the bindings is managed by ``gradle`` and should be limited to issuing -``./gradlew libdeepspeech:build``, producing an ``AAR`` package in -``./libdeepspeech/build/outputs/aar/``. +``./gradlew libmozillavoicestt:build``, producing an ``AAR`` package in +``./libmozillavoicestt/build/outputs/aar/``. Please note that you might have to copy the file to a local Maven repository and adapt file naming (when missing, the error message should state what filename it expects and where). -Building C++ ``deepspeech`` binary -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Building C++ ``mozilla_voice_stt`` binary +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Building the ``deepspeech`` binary will happen through ``ndk-build`` (ARMv7): +Building the ``mozilla_voice_stt`` binary will happen through ``ndk-build`` (ARMv7): .. code-block:: @@ -272,13 +272,13 @@ demo of one usage of the application. For example, it's only able to read PCM mono 16kHz 16-bit files and it might fail on some WAVE files that do not follow the specification exactly. -Running ``deepspeech`` via adb -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Running ``mozilla_voice_stt`` via adb +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ You should use ``adb push`` to send data to the device; please refer to Android documentation on how to use that. -Please push DeepSpeech data to ``/sdcard/deepspeech/``\ , including: +Please push Mozilla Voice STT data to ``/sdcard/mozilla_voice_stt/``\ , including: * ``output_graph.tflite`` which is the TF Lite model @@ -286,18 +286,18 @@ the scorer; please be aware that a scorer that is too big will make the device run out of memory -Then, push binaries from ``native_client.tar.xz`` to ``/data/local/tmp/ds``\ : +Then, push binaries from ``native_client.tar.xz`` to ``/data/local/tmp/stt``\ : -* ``deepspeech`` -* ``libdeepspeech.so`` +* ``mozilla_voice_stt`` +* ``libmozilla_voice_stt.so`` * ``libc++_shared.so`` You should then be able to run as usual, using a shell from ``adb shell``\ : .. code-block:: - user@device$ cd /data/local/tmp/ds/ - user@device$ LD_LIBRARY_PATH=$(pwd)/ ./deepspeech [...] + user@device$ cd /data/local/tmp/stt/ + user@device$ LD_LIBRARY_PATH=$(pwd)/ ./mozilla_voice_stt [...]
Please note that the Android linker does not support ``rpath``, so you have to set ``LD_LIBRARY_PATH``. Properly wrapped / packaged bindings do embed the library diff --git a/doc/C-API.rst b/doc/C-API.rst index e96f3e12a6..bddc7d491c 100644 --- a/doc/C-API.rst +++ b/doc/C-API.rst @@ -10,56 +10,59 @@ C API See also the list of error codes including descriptions for each error in :ref:`error-codes`. -.. doxygenfunction:: DS_CreateModel +.. doxygenfunction:: STT_CreateModel :project: deepspeech-c -.. doxygenfunction:: DS_FreeModel +.. doxygenfunction:: STT_FreeModel :project: deepspeech-c -.. doxygenfunction:: DS_EnableExternalScorer +.. doxygenfunction:: STT_EnableExternalScorer :project: deepspeech-c -.. doxygenfunction:: DS_DisableExternalScorer +.. doxygenfunction:: STT_DisableExternalScorer :project: deepspeech-c -.. doxygenfunction:: DS_SetScorerAlphaBeta +.. doxygenfunction:: STT_SetScorerAlphaBeta :project: deepspeech-c -.. doxygenfunction:: DS_GetModelSampleRate +.. doxygenfunction:: STT_GetModelSampleRate :project: deepspeech-c -.. doxygenfunction:: DS_SpeechToText +.. doxygenfunction:: STT_SpeechToText :project: deepspeech-c -.. doxygenfunction:: DS_SpeechToTextWithMetadata +.. doxygenfunction:: STT_SpeechToTextWithMetadata :project: deepspeech-c -.. doxygenfunction:: DS_CreateStream +.. doxygenfunction:: STT_CreateStream :project: deepspeech-c -.. doxygenfunction:: DS_FeedAudioContent +.. doxygenfunction:: STT_FeedAudioContent :project: deepspeech-c -.. doxygenfunction:: DS_IntermediateDecode +.. doxygenfunction:: STT_IntermediateDecode :project: deepspeech-c -.. doxygenfunction:: DS_IntermediateDecodeWithMetadata +.. doxygenfunction:: STT_IntermediateDecodeWithMetadata :project: deepspeech-c -.. doxygenfunction:: DS_FinishStream +.. doxygenfunction:: STT_FinishStream :project: deepspeech-c -.. doxygenfunction:: DS_FinishStreamWithMetadata +.. doxygenfunction:: STT_FinishStreamWithMetadata :project: deepspeech-c -.. doxygenfunction:: DS_FreeStream +.. doxygenfunction:: STT_FreeStream :project: deepspeech-c -.. doxygenfunction:: DS_FreeMetadata +.. doxygenfunction:: STT_FreeMetadata :project: deepspeech-c -.. doxygenfunction:: DS_FreeString +.. doxygenfunction:: STT_FreeString :project: deepspeech-c -.. doxygenfunction:: DS_Version +.. doxygenfunction:: STT_Version + :project: deepspeech-c + +.. doxygenfunction:: STT_ErrorCodeToErrorMessage :project: deepspeech-c diff --git a/doc/Decoder.rst b/doc/Decoder.rst index c335c3173e..9f2381976c 100644 --- a/doc/Decoder.rst +++ b/doc/Decoder.rst @@ -6,7 +6,7 @@ CTC beam search decoder Introduction ^^^^^^^^^^^^ -DeepSpeech uses the `Connectionist Temporal Classification `_ loss function. For an excellent explanation of CTC and its usage, see this Distill article: `Sequence Modeling with CTC `_. This document assumes the reader is familiar with the concepts described in that article, and describes DeepSpeech specific behaviors that developers building systems with DeepSpeech should know to avoid problems. +Mozilla Voice STT uses the `Connectionist Temporal Classification `_ loss function. For an excellent explanation of CTC and its usage, see this Distill article: `Sequence Modeling with CTC `_. This document assumes the reader is familiar with the concepts described in that article, and describes Mozilla Voice STT-specific behaviors that developers building systems with Mozilla Voice STT should know to avoid problems. Note: Documentation for the tooling for creating custom scorer packages is available in :ref:`scorer-scripts`.
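To see how the renamed streaming functions listed above (``STT_CreateStream``, ``STT_FeedAudioContent``, ``STT_IntermediateDecode``, ``STT_FinishStream``) fit together, here is a hedged sketch using the Python bindings, which mirror the C API. The import name is hypothetical, assuming the renamed package keeps the ``Model``/stream API of the former ``deepspeech`` package, and the file names are placeholders:

.. code-block:: python

    import wave

    import numpy as np
    from deepspeech import Model  # hypothetical import; the renamed wheel may differ

    model = Model("output_graph.pbmm")
    stream = model.createStream()  # STT_CreateStream

    # Feed 16 kHz mono 16-bit PCM in roughly one-second chunks, decoding as we go.
    with wave.open("audio_input.wav", "rb") as wav:
        while True:
            data = wav.readframes(16000)
            if not data:
                break
            stream.feedAudioContent(np.frombuffer(data, dtype=np.int16))  # STT_FeedAudioContent
            print("partial:", stream.intermediateDecode())                # STT_IntermediateDecode

    print("final:", stream.finishStream())  # STT_FinishStream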
@@ -16,19 +16,19 @@ The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "S External scorer ^^^^^^^^^^^^^^^ -DeepSpeech clients support OPTIONAL use of an external language model to improve the accuracy of the predicted transcripts. In the code, command line parameters, and documentation, this is referred to as a "scorer". The scorer is used to compute the likelihood (also called a score, hence the name "scorer") of sequences of words or characters in the output, to guide the decoder towards more likely results. This improves accuracy significantly. +Mozilla Voice STT clients support OPTIONAL use of an external language model to improve the accuracy of the predicted transcripts. In the code, command line parameters, and documentation, this is referred to as a "scorer". The scorer is used to compute the likelihood (also called a score, hence the name "scorer") of sequences of words or characters in the output, to guide the decoder towards more likely results. This improves accuracy significantly. -The use of an external scorer is fully optional. When an external scorer is not specified, DeepSpeech still uses a beam search decoding algorithm, but without any outside scoring. +The use of an external scorer is fully optional. When an external scorer is not specified, Mozilla Voice STT still uses a beam search decoding algorithm, but without any outside scoring. -Currently, the DeepSpeech external scorer is implemented with `KenLM `_, plus some tooling to package the necessary files and metadata into a single ``.scorer`` package. The tooling lives in ``data/lm/``. The scripts included in ``data/lm/`` can be used and modified to build your own language model based on your particular use case or language. See :ref:`scorer-scripts` for more details on how to reproduce our scorer file as well as create your own. +Currently, the Mozilla Voice STT external scorer is implemented with `KenLM `_, plus some tooling to package the necessary files and metadata into a single ``.scorer`` package. The tooling lives in ``data/lm/``. The scripts included in ``data/lm/`` can be used and modified to build your own language model based on your particular use case or language. See :ref:`scorer-scripts` for more details on how to reproduce our scorer file as well as create your own. -The scripts are geared towards replicating the language model files we release as part of `DeepSpeech model releases `_, but modifying them to use different datasets or language model construction parameters should be simple. +The scripts are geared towards replicating the language model files we release as part of `Mozilla Voice STT model releases `_, but modifying them to use different datasets or language model construction parameters should be simple. Decoding modes ^^^^^^^^^^^^^^ -DeepSpeech currently supports two modes of operation with significant differences at both training and decoding time. Note that Bytes output mode is experimental and has not been tested for languages other than Chinese Mandarin. +Mozilla Voice STT currently supports two modes of operation with significant differences at both training and decoding time. Note that Bytes output mode is experimental and has not been tested for languages other than Chinese Mandarin. 
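Before looking at the individual modes, it may help to see how a client toggles the external scorer described above. The sketch below uses the Python bindings and is purely illustrative; it assumes the renamed package keeps the ``Model`` API of the former ``deepspeech`` package, and the alpha/beta values are placeholders rather than release defaults:

.. code-block:: python

    from deepspeech import Model  # hypothetical import; the renamed wheel may differ

    model = Model("output_graph.pbmm")

    # Attach a KenLM-based .scorer package; decoding then runs beam search
    # with outside scoring instead of plain beam search.
    model.enableExternalScorer("kenlm.scorer")

    # Tune the language model weight (alpha) and word insertion bonus (beta).
    model.setScorerAlphaBeta(0.93, 1.18)

    # Reverting to the unscored beam search decoder:
    model.disableExternalScorer()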
Default mode (alphabet based) diff --git a/doc/DotNet-API.rst b/doc/DotNet-API.rst index 92342deda4..7ec4e18de8 100644 --- a/doc/DotNet-API.rst +++ b/doc/DotNet-API.rst @@ -2,17 +2,17 @@ ============== -DeepSpeech Class ----------------- +MozillaVoiceSttModel Class +-------------------------- -.. doxygenclass:: DeepSpeechClient::DeepSpeech +.. doxygenclass:: MozillaVoiceSttClient::MozillaVoiceSttModel :project: deepspeech-dotnet :members: -DeepSpeechStream Class ----------------------- +MozillaVoiceSttStream Class +--------------------------- -.. doxygenclass:: DeepSpeechClient::Models::DeepSpeechStream +.. doxygenclass:: MozillaVoiceSttClient::Models::MozillaVoiceSttStream :project: deepspeech-dotnet :members: @@ -21,33 +21,33 @@ ErrorCodes See also the main definition including descriptions for each error in :ref:`error-codes`. -.. doxygenenum:: DeepSpeechClient::Enums::ErrorCodes +.. doxygenenum:: MozillaVoiceSttClient::Enums::ErrorCodes :project: deepspeech-dotnet Metadata -------- -.. doxygenclass:: DeepSpeechClient::Models::Metadata +.. doxygenclass:: MozillaVoiceSttClient::Models::Metadata :project: deepspeech-dotnet :members: Transcripts CandidateTranscript ------------------- -.. doxygenclass:: DeepSpeechClient::Models::CandidateTranscript +.. doxygenclass:: MozillaVoiceSttClient::Models::CandidateTranscript :project: deepspeech-dotnet :members: Tokens, Confidence TokenMetadata ------------- -.. doxygenclass:: DeepSpeechClient::Models::TokenMetadata +.. doxygenclass:: MozillaVoiceSttClient::Models::TokenMetadata :project: deepspeech-dotnet :members: Text, Timestep, StartTime -DeepSpeech Interface --------------------- +IMozillaVoiceSttModel Interface +------------------------------- -.. doxygeninterface:: DeepSpeechClient::Interfaces::IDeepSpeech +.. doxygeninterface:: MozillaVoiceSttClient::Interfaces::IMozillaVoiceSttModel :project: deepspeech-dotnet :members: diff --git a/doc/DotNet-Examples.rst b/doc/DotNet-Examples.rst index a00ee83350..749250ba24 100644 --- a/doc/DotNet-Examples.rst +++ b/doc/DotNet-Examples.rst @@ -1,12 +1,12 @@ .NET API Usage example ====================== -Examples are from `native_client/dotnet/DeepSpeechConsole/Program.cs`. +Examples are from `native_client/dotnet/MozillaVoiceSttConsole/Program.cs`. Creating a model instance and loading model ------------------------------------------- -.. literalinclude:: ../native_client/dotnet/DeepSpeechConsole/Program.cs +.. literalinclude:: ../native_client/dotnet/MozillaVoiceSttConsole/Program.cs :language: csharp :linenos: :lineno-match: @@ -16,7 +16,7 @@ Creating a model instance and loading model Performing inference -------------------- -.. literalinclude:: ../native_client/dotnet/DeepSpeechConsole/Program.cs +.. literalinclude:: ../native_client/dotnet/MozillaVoiceSttConsole/Program.cs :language: csharp :linenos: :lineno-match: @@ -26,4 +26,4 @@ Performing inference Full source code ---------------- -See :download:`Full source code<../native_client/dotnet/DeepSpeechConsole/Program.cs>`. +See :download:`Full source code<../native_client/dotnet/MozillaVoiceSttConsole/Program.cs>`. diff --git a/doc/Error-Codes.rst b/doc/Error-Codes.rst index 361ca025b9..60090c9da9 100644 --- a/doc/Error-Codes.rst +++ b/doc/Error-Codes.rst @@ -5,7 +5,7 @@ Error codes Below is the definition for all error codes used in the API, their numerical values, and a human readable description. -.. literalinclude:: ../native_client/deepspeech.h +.. 
literalinclude:: ../native_client/mozilla_voice_stt.h :language: c :start-after: sphinx-doc: error_code_listing_start :end-before: sphinx-doc: error_code_listing_end diff --git a/doc/Java-API.rst b/doc/Java-API.rst index e0c6a7dd90..f75297f1cb 100644 --- a/doc/Java-API.rst +++ b/doc/Java-API.rst @@ -1,29 +1,29 @@ Java ==== -DeepSpeechModel ---------------- +MozillaVoiceSttModel +-------------------- -.. doxygenclass:: org::mozilla::deepspeech::libdeepspeech::DeepSpeechModel +.. doxygenclass:: org::mozilla::voice::stt::MozillaVoiceSttModel :project: deepspeech-java :members: Metadata -------- -.. doxygenclass:: org::mozilla::deepspeech::libdeepspeech::Metadata +.. doxygenclass:: org::mozilla::voice::stt::Metadata :project: deepspeech-java :members: getNumTranscripts, getTranscript CandidateTranscript ------------------- -.. doxygenclass:: org::mozilla::deepspeech::libdeepspeech::CandidateTranscript +.. doxygenclass:: org::mozilla::voice::stt::CandidateTranscript :project: deepspeech-java :members: getNumTokens, getConfidence, getToken TokenMetadata ------------- -.. doxygenclass:: org::mozilla::deepspeech::libdeepspeech::TokenMetadata +.. doxygenclass:: org::mozilla::voice::stt::TokenMetadata :project: deepspeech-java :members: getText, getTimestep, getStartTime diff --git a/doc/Java-Examples.rst b/doc/Java-Examples.rst index 46ffa17517..a1e1a7dc8e 100644 --- a/doc/Java-Examples.rst +++ b/doc/Java-Examples.rst @@ -1,12 +1,12 @@ Java API Usage example ====================== -Examples are from `native_client/java/app/src/main/java/org/mozilla/deepspeech/DeepSpeechActivity.java`. +Examples are from `native_client/java/app/src/main/java/org/mozilla/voice/sttapp/MozillaVoiceSttActivity.java`. Creating a model instance and loading model ------------------------------------------- -.. literalinclude:: ../native_client/java/app/src/main/java/org/mozilla/deepspeech/DeepSpeechActivity.java +.. literalinclude:: ../native_client/java/app/src/main/java/org/mozilla/voice/sttapp/MozillaVoiceSttActivity.java :language: java :linenos: :lineno-match: @@ -16,7 +16,7 @@ Creating a model instance and loading model Performing inference -------------------- -.. literalinclude:: ../native_client/java/app/src/main/java/org/mozilla/deepspeech/DeepSpeechActivity.java +.. literalinclude:: ../native_client/java/app/src/main/java/org/mozilla/voice/sttapp/MozillaVoiceSttActivity.java :language: java :linenos: :lineno-match: @@ -26,4 +26,4 @@ Performing inference Full source code ---------------- -See :download:`Full source code<../native_client/java/app/src/main/java/org/mozilla/deepspeech/DeepSpeechActivity.java>`. +See :download:`Full source code<../native_client/java/app/src/main/java/org/mozilla/voice/sttapp/MozillaVoiceSttActivity.java>`. diff --git a/doc/Makefile b/doc/Makefile index 0980ab242c..1b8aa39c69 100644 --- a/doc/Makefile +++ b/doc/Makefile @@ -4,7 +4,7 @@ # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build -SPHINXPROJ = DeepSpeech +SPHINXPROJ = Mozilla Voice STT SOURCEDIR = . BUILDDIR = .build diff --git a/doc/ParallelOptimization.rst b/doc/ParallelOptimization.rst index e0d3734c37..0da5954ee3 100644 --- a/doc/ParallelOptimization.rst +++ b/doc/ParallelOptimization.rst @@ -1,8 +1,8 @@ Parallel Optimization ===================== -This is how we implement optimization of the DeepSpeech model across GPUs on a -single host. Parallel optimization can take on various forms. 
For example +This is how we implement optimization of the Mozilla Voice STT model across GPUs +on a single host. Parallel optimization can take on various forms. For example one can use asynchronous updates of the model, synchronous updates of the model, or some combination of the two. diff --git a/doc/SUPPORTED_PLATFORMS.rst b/doc/SUPPORTED_PLATFORMS.rst index 1ccfb7e3aa..eeea28da6a 100644 --- a/doc/SUPPORTED_PLATFORMS.rst +++ b/doc/SUPPORTED_PLATFORMS.rst @@ -9,61 +9,61 @@ Linux / AMD64 without GPU ^^^^^^^^^^^^^^^^^^^^^^^^^ * x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference) * Ubuntu 14.04+ (glibc >= 2.19, libstdc++6 >= 4.8) -* Full TensorFlow runtime (``deepspeech`` packages) -* TensorFlow Lite runtime (``deepspeech-tflite`` packages) +* Full TensorFlow runtime (``mozilla_voice_stt`` packages) +* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages) Linux / AMD64 with GPU ^^^^^^^^^^^^^^^^^^^^^^ * x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference) * Ubuntu 14.04+ (glibc >= 2.19, libstdc++6 >= 4.8) * CUDA 10.0 (and capable GPU) -* Full TensorFlow runtime (``deepspeech`` packages) -* TensorFlow Lite runtime (``deepspeech-tflite`` packages) +* Full TensorFlow runtime (``mozilla_voice_stt`` packages) +* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages) Linux / ARMv7 ^^^^^^^^^^^^^ * Cortex-A53 compatible ARMv7 SoC with Neon support * Raspbian Buster-compatible distribution -* TensorFlow Lite runtime (``deepspeech-tflite`` packages) +* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages) Linux / Aarch64 ^^^^^^^^^^^^^^^ * Cortex-A72 compatible Aarch64 SoC * ARMbian Buster-compatible distribution -* TensorFlow Lite runtime (``deepspeech-tflite`` packages) +* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages) Android / ARMv7 ^^^^^^^^^^^^^^^ * ARMv7 SoC with Neon support * Android 7.0-10.0 * NDK API level >= 21 -* TensorFlow Lite runtime (``deepspeech-tflite`` packages) +* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages) Android / Aarch64 ^^^^^^^^^^^^^^^^^ * Aarch64 SoC * Android 7.0-10.0 * NDK API level >= 21 -* TensorFlow Lite runtime (``deepspeech-tflite`` packages) +* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages) macOS / AMD64 ^^^^^^^^^^^^^ * x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference) * macOS >= 10.10 -* Full TensorFlow runtime (``deepspeech`` packages) -* TensorFlow Lite runtime (``deepspeech-tflite`` packages) +* Full TensorFlow runtime (``mozilla_voice_stt`` packages) +* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages) Windows / AMD64 without GPU ^^^^^^^^^^^^^^^^^^^^^^^^^^^ * x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference) * Windows Server >= 2012 R2 ; Windows >= 8.1 -* Full TensorFlow runtime (``deepspeech`` packages) -* TensorFlow Lite runtime (``deepspeech-tflite`` packages) +* Full TensorFlow runtime (``mozilla_voice_stt`` packages) +* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages) Windows / AMD64 with GPU ^^^^^^^^^^^^^^^^^^^^^^^^ * x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference) * Windows Server >= 2012 R2 ; Windows >= 8.1 * CUDA 10.0 (and capable GPU) -* Full TensorFlow runtime (``deepspeech`` packages) -* TensorFlow Lite runtime (``deepspeech-tflite`` packages) +* Full TensorFlow runtime (``mozilla_voice_stt`` packages) +* TensorFlow Lite runtime 
(``mozilla_voice_stt_tflite`` packages) diff --git a/doc/Scorer.rst b/doc/Scorer.rst index 1f37460448..841c857761 100644 --- a/doc/Scorer.rst +++ b/doc/Scorer.rst @@ -3,7 +3,7 @@ External scorer scripts ======================= -DeepSpeech pre-trained models include an external scorer. This document explains how to reproduce our external scorer, as well as adapt the scripts to create your own. +Mozilla Voice STT pre-trained models include an external scorer. This document explains how to reproduce our external scorer, as well as adapt the scripts to create your own. The scorer is composed of two sub-components: a KenLM language model and a trie data structure containing all words in the vocabulary. In order to create the scorer package, first we must create a KenLM language model (using ``data/lm/generate_lm.py``), and then use ``generate_scorer_package`` to create the final package file including the trie data structure. @@ -59,6 +59,6 @@ Building your own scorer can be useful if you're using models in a narrow usage The LibriSpeech LM training text used by our scorer is around 4GB uncompressed, which should give an idea of the size of a corpus needed for a reasonable language model for general speech recognition. For more constrained use cases with smaller vocabularies, you don't need as much data, but you should still try to gather as much as you can. -With a text corpus in hand, you can then re-use ``generate_lm.py`` and ``generate_scorer_package`` to create your own scorer that is compatible with DeepSpeech clients and language bindings. Before building the language model, you must first familiarize yourself with the `KenLM toolkit `_. Most of the options exposed by the ``generate_lm.py`` script are simply forwarded to KenLM options of the same name, so you must read the KenLM documentation in order to fully understand their behavior. +With a text corpus in hand, you can then re-use ``generate_lm.py`` and ``generate_scorer_package`` to create your own scorer that is compatible with Mozilla Voice STT clients and language bindings. Before building the language model, you must first familiarize yourself with the `KenLM toolkit `_. Most of the options exposed by the ``generate_lm.py`` script are simply forwarded to KenLM options of the same name, so you must read the KenLM documentation in order to fully understand their behavior. After using ``generate_lm.py`` to create a KenLM language model binary file, you can use ``generate_scorer_package`` to create a scorer package as described in the previous section. Note that we have a :github:`lm_optimizer.py script ` which can be used to find good default values for alpha and beta. To use it, you must first generate a package with any value set for default alpha and beta flags. For this step, it doesn't matter what values you use, as they'll be overridden by ``lm_optimizer.py`` later. Then, use ``lm_optimizer.py`` with this scorer file to find good alpha and beta values. Finally, use ``generate_scorer_package`` again, this time with the new values. diff --git a/doc/TRAINING.rst b/doc/TRAINING.rst index 7de40e6a64..e43c6829ec 100644 --- a/doc/TRAINING.rst +++ b/doc/TRAINING.rst @@ -12,7 +12,7 @@ Prerequisites for training a model Getting the training code ^^^^^^^^^^^^^^^^^^^^^^^^^ -Clone the DeepSpeech repository: +Clone the Mozilla Voice STT repository: ..
code-block:: bash @@ -21,25 +21,25 @@ Clone the DeepSpeech repository: Creating a virtual environment ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -In creating a virtual environment you will create a directory containing a ``python3`` binary and everything needed to run deepspeech. You can use whatever directory you want. For the purpose of the documentation, we will rely on ``$HOME/tmp/deepspeech-train-venv``. You can create it using this command: +In creating a virtual environment you will create a directory containing a ``python3`` binary and everything needed to run Mozilla Voice STT. You can use whatever directory you want. For the purpose of the documentation, we will rely on ``$HOME/tmp/stt-train-venv``. You can create it using this command: .. code-block:: - $ python3 -m venv $HOME/tmp/deepspeech-train-venv/ + $ python3 -m venv $HOME/tmp/stt-train-venv/ Once this command completes successfully, the environment will be ready to be activated. Activating the environment ^^^^^^^^^^^^^^^^^^^^^^^^^^ -Each time you need to work with DeepSpeech, you have to *activate* this virtual environment. This is done with this simple command: +Each time you need to work with Mozilla Voice STT, you have to *activate* this virtual environment. This is done with this simple command: .. code-block:: - $ source $HOME/tmp/deepspeech-train-venv/bin/activate + $ source $HOME/tmp/stt-train-venv/bin/activate -Installing DeepSpeech Training Code and its dependencies -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Installing Mozilla Voice STT Training Code and its dependencies +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Install the required dependencies using ``pip3``\ : @@ -88,7 +88,7 @@ This should ensure that you'll re-use the upstream Python 3 TensorFlow GPU-enabl make Dockerfile.train -If you want to specify a different DeepSpeech repository / branch, you can pass ``DEEPSPEECH_REPO`` or ``DEEPSPEECH_SHA`` parameters: +If you want to specify a different Mozilla Voice STT repository / branch, you can pass ``DEEPSPEECH_REPO`` or ``DEEPSPEECH_SHA`` parameters: .. code-block:: bash @@ -105,7 +105,7 @@ After extraction of such a data set, you'll find the following contents: * the ``*.tsv`` files output by CorporaCreator for the downloaded language * the mp3 audio files they reference in a ``clips`` sub-directory. -For bringing this data into a form that DeepSpeech understands, you have to run the CommonVoice v2.0 importer (\ ``bin/import_cv2.py``\ ): +To bring this data into a form that Mozilla Voice STT understands, you have to run the CommonVoice v2.0 importer (\ ``bin/import_cv2.py``\ ): .. code-block:: bash @@ -147,7 +147,7 @@ For executing pre-configured training scenarios, there is a collection of conven **If you experience GPU OOM errors while training, try reducing the batch size with the ``--train_batch_size``\ , ``--dev_batch_size`` and ``--test_batch_size`` parameters.** -As a simple first example you can open a terminal, change to the directory of the DeepSpeech checkout, activate the virtualenv created above, and run: +As a simple first example, you can open a terminal, change to the directory of the Mozilla Voice STT checkout, activate the virtualenv created above, and run: .. code-block:: bash @@ -157,7 +157,7 @@ This script will train on a small sample dataset composed of just a single audio Feel free to pass additional (or overriding) ``DeepSpeech.py`` parameters to these scripts. Then, just run the script to train the modified network.
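The importer mentioned above (``bin/import_cv2.py``) writes CSV files that ``DeepSpeech.py`` consumes through its ``--train_files``, ``--dev_files`` and ``--test_files`` flags. A quick way to eyeball what an importer produced is a few lines of Python; the path below is illustrative, and the column names are the ones conventionally used by the training CSVs, which you should verify against your importer's actual output:

.. code-block:: python

    import csv

    # Inspect the first few samples of an importer-generated CSV.
    with open("clips/train.csv", newline="", encoding="utf-8") as f:
        for i, row in enumerate(csv.DictReader(f)):
            # Each row points at one audio file plus its transcript.
            print(row["wav_filename"], row["wav_filesize"], row["transcript"])
            if i == 4:
                break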
-Each dataset has a corresponding importer script in ``bin/`` that can be used to download (if it's freely available) and preprocess the dataset. See ``bin/import_librivox.py`` for an example of how to import and preprocess a large dataset for training with DeepSpeech. +Each dataset has a corresponding importer script in ``bin/`` that can be used to download (if it's freely available) and preprocess the dataset. See ``bin/import_librivox.py`` for an example of how to import and preprocess a large dataset for training with Mozilla Voice STT. Some importers might require additional code to properly handle your locale-specific requirements. Such handling is dealt with via the ``--validate_label_locale`` flag, which allows you to source an out-of-tree Python script that defines a ``validate_label`` function. Please refer to ``util/importers.py`` for an implementation example of that function. If you don't provide this argument, the default ``validate_label`` function will be used. This one is only intended for the English language, so you might have consistency issues in your data for other languages. @@ -184,7 +184,7 @@ Mixed precision training makes use of both FP32 and FP16 precisions where approp python3 DeepSpeech.py --train_files ./train.csv --dev_files ./dev.csv --test_files ./test.csv --automatic_mixed_precision ``` -On a Volta generation V100 GPU, automatic mixed precision speeds up DeepSpeech training and evaluation by ~30%-40%. +On a Volta generation V100 GPU, automatic mixed precision speeds up Mozilla Voice STT training and evaluation by ~30%-40%. Checkpointing ^^^^^^^^^^^^^ @@ -226,9 +226,9 @@ Upon successful run, it should report about conversion of a non-zero number of n Continuing training from a release model ---------------------------------------- -There are currently two supported approaches to make use of a pre-trained DeepSpeech model: fine-tuning or transfer-learning. Choosing which one to use is a simple decision, and it depends on your target dataset. Does your data use the same alphabet as the release model? If "Yes": fine-tune. If "No" use transfer-learning. +There are currently two supported approaches to make use of a pre-trained Mozilla Voice STT model: fine-tuning or transfer-learning. Choosing which one to use is a simple decision, and it depends on your target dataset. Does your data use the same alphabet as the release model? If "Yes": fine-tune. If "No": use transfer-learning. -If your own data uses the *extact* same alphabet as the English release model (i.e. `a-z` plus `'`) then the release model's output layer will match your data, and you can just fine-tune the existing parameters. However, if you want to use a new alphabet (e.g. Cyrillic `а`, `б`, `д`), the output layer of a release DeepSpeech model will *not* match your data. In this case, you should use transfer-learning (i.e. remove the trained model's output layer, and reinitialize a new output layer that matches your target character set. +If your own data uses the *exact* same alphabet as the English release model (i.e. `a-z` plus `'`) then the release model's output layer will match your data, and you can just fine-tune the existing parameters. However, if you want to use a new alphabet (e.g. Cyrillic `а`, `б`, `д`), the output layer of a release Mozilla Voice STT model will *not* match your data. In this case, you should use transfer-learning (i.e. remove the trained model's output layer, and reinitialize a new output layer that matches your target character set). N.B.
- If you have access to a pre-trained model which uses UTF-8 bytes at the output layer you can always fine-tune, because any alphabet should be encodable as UTF-8. @@ -260,11 +260,11 @@ If you try to load a release model without following these steps, you'll get an Transfer-Learning (new alphabet) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -If you want to continue training an alphabet-based DeepSpeech model (i.e. not a UTF-8 model) on a new language, or if you just want to add new characters to your custom alphabet, you will probably want to use transfer-learning instead of fine-tuning. If you're starting with a pre-trained UTF-8 model -- even if your data comes from a different language or uses a different alphabet -- the model will be able to predict your new transcripts, and you should use fine-tuning instead. +If you want to continue training an alphabet-based Mozilla Voice STT model (i.e. not a UTF-8 model) on a new language, or if you just want to add new characters to your custom alphabet, you will probably want to use transfer-learning instead of fine-tuning. If you're starting with a pre-trained UTF-8 model -- even if your data comes from a different language or uses a different alphabet -- the model will be able to predict your new transcripts, and you should use fine-tuning instead. -In a nutshell, DeepSpeech's transfer-learning allows you to remove certain layers from a pre-trained model, initialize new layers for your target data, stitch together the old and new layers, and update all layers via gradient descent. You will remove the pre-trained output layer (and optionally more layers) and reinitialize parameters to fit your target alphabet. The simplest case of transfer-learning is when you remove just the output layer. +In a nutshell, Mozilla Voice STT's transfer-learning allows you to remove certain layers from a pre-trained model, initialize new layers for your target data, stitch together the old and new layers, and update all layers via gradient descent. You will remove the pre-trained output layer (and optionally more layers) and reinitialize parameters to fit your target alphabet. The simplest case of transfer-learning is when you remove just the output layer. -In DeepSpeech's implementation of transfer-learning, all removed layers will be contiguous, starting from the output layer. The key flag you will want to experiment with is ``--drop_source_layers``. This flag accepts an integer from ``1`` to ``5`` and allows you to specify how many layers you want to remove from the pre-trained model. For example, if you supplied ``--drop_source_layers 3``, you will drop the last three layers of the pre-trained model: the output layer, penultimate layer, and LSTM layer. All dropped layers will be reinintialized, and (crucially) the output layer will be defined to match your supplied target alphabet. +In Mozilla Voice STT's implementation of transfer-learning, all removed layers will be contiguous, starting from the output layer. The key flag you will want to experiment with is ``--drop_source_layers``. This flag accepts an integer from ``1`` to ``5`` and allows you to specify how many layers you want to remove from the pre-trained model. For example, if you supplied ``--drop_source_layers 3``, you will drop the last three layers of the pre-trained model: the output layer, penultimate layer, and LSTM layer. All dropped layers will be reinitialized, and (crucially) the output layer will be defined to match your supplied target alphabet.
You need to specify the location of the pre-trained model with ``--load_checkpoint_dir`` and define where your new model checkpoints will be saved with ``--save_checkpoint_dir``. You need to specify how many layers to remove (aka "drop") from the pre-trained model: ``--drop_source_layers``. You also need to supply your new alphabet file using the standard ``--alphabet_config_path`` (remember, using a new alphabet is the whole reason you want to use transfer-learning). @@ -282,8 +282,7 @@ You need to specify the location of the pre-trained model with ``--load_checkpoi UTF-8 mode ^^^^^^^^^^ -DeepSpeech includes a UTF-8 operating mode which can be useful to model languages with very large alphabets, such as Chinese Mandarin. For details on how it works and how to use it, see :ref:`decoder-docs`. - +Mozilla Voice STT includes a UTF-8 operating mode which can be useful to model languages with very large alphabets, such as Chinese Mandarin. For details on how it works and how to use it, see :ref:`decoder-docs`. .. _training-data-augmentation: diff --git a/doc/USING.rst b/doc/USING.rst index 12519980a9..c2813b9dc1 100644 --- a/doc/USING.rst +++ b/doc/USING.rst @@ -3,7 +3,7 @@ Using a Pre-trained Model ========================= -Inference using a DeepSpeech pre-trained model can be done with a client/language binding package. We have four clients/language bindings in this repository, listed below, and also a few community-maintained clients/language bindings in other repositories, listed `further down in this README <#third-party-bindings>`_. +Inference using a Mozilla Voice STT pre-trained model can be done with a client/language binding package. We have four clients/language bindings in this repository, listed below, and also a few community-maintained clients/language bindings in other repositories, listed `further down in this README <#third-party-bindings>`_. * :ref:`The C API `. * :ref:`The Python package/language binding ` .. _runtime-deps: -Running ``deepspeech`` might, see below, require some runtime dependencies to be already installed on your system: +Running ``mozilla_voice_stt`` might require some runtime dependencies to be already installed on your system (see below): * ``sox`` - The Python and Node.JS clients use SoX to resample files to 16kHz. * ``libgomp1`` - libsox (statically linked into the clients) depends on OpenMP. Some people have had to install this manually. @@ -28,29 +28,29 @@ Please refer to your system's documentation on how to install these dependencies CUDA dependency ^^^^^^^^^^^^^^^ -The GPU capable builds (Python, NodeJS, C++, etc) depend on CUDA 10.1 and CuDNN v7.6. +The CUDA capable builds (Python, NodeJS, C++, etc) depend on CUDA 10.1 and CuDNN v7.6. Getting the pre-trained model ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -If you want to use the pre-trained English model for performing speech-to-text, you can download it (along with other important inference material) from the DeepSpeech `releases page `_. Alternatively, you can run the following command to download the model files in your current directory: +If you want to use the pre-trained English model for performing speech-to-text, you can download it (along with other important inference material) from the Mozilla Voice STT `releases page `_. Alternatively, you can run the following command to download the model files in your current directory: ..
code-block:: bash wget https://github.com/mozilla/DeepSpeech/releases/download/v0.7.4/deepspeech-0.7.4-models.pbmm wget https://github.com/mozilla/DeepSpeech/releases/download/v0.7.4/deepspeech-0.7.4-models.scorer -There are several pre-trained model files available in official releases. Files ending in ``.pbmm`` are compatible with clients and language bindings built against the standard TensorFlow runtime. Usually these packages are simply called ``deepspeech``. These files are also compatible with CUDA enabled clients and language bindings. These packages are usually called ``deepspeech-gpu``. Files ending in ``.tflite`` are compatible with clients and language bindings built against the `TensorFlow Lite runtime `_. These models are optimized for size and performance in low power devices. On desktop platforms, the compatible packages are called ``deepspeech-tflite``. On Android and Raspberry Pi, we only publish TensorFlow Lite enabled packages, and they are simply called ``deepspeech``. You can see a full list of supported platforms and which TensorFlow runtime is supported at :ref:`supported-platforms-inference`. +There are several pre-trained model files available in official releases. Files ending in ``.pbmm`` are compatible with clients and language bindings built against the standard TensorFlow runtime. Usually these packages are simply called ``mozilla_voice_stt``. These files are also compatible with CUDA enabled clients and language bindings. These packages are usually called ``mozilla_voice_stt_cuda``. Files ending in ``.tflite`` are compatible with clients and language bindings built against the `TensorFlow Lite runtime `_. These models are optimized for size and performance in low power devices. On desktop platforms, the compatible packages are called ``mozilla_voice_stt_tflite``. On Android and Raspberry Pi, we only publish TensorFlow Lite enabled packages, and they are simply called ``mozilla_voice_stt``. You can see a full list of supported platforms and which TensorFlow runtime is supported at :ref:`supported-platforms-inference`. -+--------------------+---------------------+---------------------+ -| Package/Model type | .pbmm | .tflite | -+====================+=====================+=====================+ -| deepspeech | Depends on platform | Depends on platform | -+--------------------+---------------------+---------------------+ -| deepspeech-gpu | ✅ | ❌ | -+--------------------+---------------------+---------------------+ -| deepspeech-tflite | ❌ | ✅ | -+--------------------+---------------------+---------------------+ ++--------------------------+---------------------+---------------------+ +| Package/Model type | .pbmm | .tflite | ++==========================+=====================+=====================+ +| mozilla_voice_stt | Depends on platform | Depends on platform | ++--------------------------+---------------------+---------------------+ +| mozilla_voice_stt_cuda | ✅ | ❌ | ++--------------------------+---------------------+---------------------+ +| mozilla_voice_stt_tflite | ❌ | ✅ | ++--------------------------+---------------------+---------------------+ Finally, the pre-trained model files also include files ending in ``.scorer``. These are external scorers (language models) that are used at inference time in conjunction with an acoustic model (``.pbmm`` or ``.tflite`` file) to produce transcriptions. We also provide further documentation on :ref:`the decoding process ` and :ref:`how scorers are generated `. 
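To tie the pieces together, here is a hedged end-to-end sketch of loading an acoustic model plus scorer and transcribing a WAV file with the Python bindings. The import name is hypothetical, assuming the renamed wheel keeps the ``Model`` API of the former ``deepspeech`` package; the file names are the ones downloaded above:

.. code-block:: python

    import wave

    import numpy as np
    from deepspeech import Model  # hypothetical import; the renamed wheel may differ

    model = Model("deepspeech-0.7.4-models.pbmm")
    model.enableExternalScorer("deepspeech-0.7.4-models.scorer")

    with wave.open("my_audio_file.wav", "rb") as wav:
        # Release models expect 16 kHz mono 16-bit PCM.
        assert wav.getframerate() == model.sampleRate()
        audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

    print(model.stt(audio))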
@@ -61,82 +61,82 @@ The release notes include detailed information on how the released models were t The process for training an acoustic model is described in :ref:`training-docs`. In particular, fine tuning a release model using your own data can be a good way to leverage relatively smaller amounts of data that would not be sufficient for training a new model from scratch. See the :ref:`fine tuning and transfer learning sections ` for more information. :ref:`Data augmentation ` can also be a good way to increase the value of smaller training sets. -Creating your own external scorer from text data is another way that you can adapt the model to your specific needs. The process and tools used to generate an external scorer package are described in :ref:`scorer-scripts` and an overview of how the external scorer is used by DeepSpeech to perform inference is available in :ref:`decoder-docs`. Generating a smaller scorer from a single purpose text dataset is a quick process and can bring significant accuracy improvements, specially for more constrained, limited vocabulary applications. +Creating your own external scorer from text data is another way that you can adapt the model to your specific needs. The process and tools used to generate an external scorer package are described in :ref:`scorer-scripts` and an overview of how the external scorer is used by Mozilla Voice STT to perform inference is available in :ref:`decoder-docs`. Generating a smaller scorer from a single purpose text dataset is a quick process and can bring significant accuracy improvements, especially for more constrained, limited vocabulary applications. Model compatibility ^^^^^^^^^^^^^^^^^^^ -DeepSpeech models are versioned to keep you from trying to use an incompatible graph with a newer client after a breaking change was made to the code. If you get an error saying your model file version is too old for the client, you should either upgrade to a newer model release, re-export your model from the checkpoint using a newer version of the code, or downgrade your client if you need to use the old model and can't re-export it. +Mozilla Voice STT models are versioned to keep you from trying to use an incompatible graph with a newer client after a breaking change was made to the code. If you get an error saying your model file version is too old for the client, you should either upgrade to a newer model release, re-export your model from the checkpoint using a newer version of the code, or downgrade your client if you need to use the old model and can't re-export it. .. _py-usage: Using the Python package ^^^^^^^^^^^^^^^^^^^^^^^^ -Pre-built binaries which can be used for performing inference with a trained model can be installed with ``pip3``. You can then use the ``deepspeech`` binary to do speech-to-text on an audio file: +Pre-built binaries which can be used for performing inference with a trained model can be installed with ``pip3``. You can then use the ``mozilla_voice_stt`` binary to do speech-to-text on an audio file: For the Python bindings, it is highly recommended that you perform the installation within a Python 3.5 or later virtual environment. You can find more information about those in `this documentation `_. We will continue under the assumption that you already have your system properly set up to create new virtual environments.
-Create a DeepSpeech virtual environment -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Create a Mozilla Voice STT virtual environment +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -In creating a virtual environment you will create a directory containing a ``python3`` binary and everything needed to run deepspeech. You can use whatever directory you want. For the purpose of the documentation, we will rely on ``$HOME/tmp/deepspeech-venv``. You can create it using this command: +In creating a virtual environment you will create a directory containing a ``python3`` binary and everything needed to run Mozilla Voice STT. You can use whatever directory you want. For the purpose of the documentation, we will rely on ``$HOME/tmp/stt-venv``. You can create it using this command: .. code-block:: - $ virtualenv -p python3 $HOME/tmp/deepspeech-venv/ + $ virtualenv -p python3 $HOME/tmp/stt-venv/ Once this command completes successfully, the environment will be ready to be activated. Activating the environment ~~~~~~~~~~~~~~~~~~~~~~~~~~ -Each time you need to work with DeepSpeech, you have to *activate* this virtual environment. This is done with this simple command: +Each time you need to work with Mozilla Voice STT, you have to *activate* this virtual environment. This is done with this simple command: .. code-block:: - $ source $HOME/tmp/deepspeech-venv/bin/activate + $ source $HOME/tmp/stt-venv/bin/activate -Installing DeepSpeech Python bindings -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Installing Mozilla Voice STT Python bindings +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Once your environment has been set-up and loaded, you can use ``pip3`` to manage packages locally. On a fresh setup of the ``virtualenv``\ , you will have to install the DeepSpeech wheel. You can check if ``deepspeech`` is already installed with ``pip3 list``. +Once your environment has been set up and loaded, you can use ``pip3`` to manage packages locally. On a fresh setup of the ``virtualenv``\ , you will have to install the Mozilla Voice STT wheel. You can check if ``mozilla_voice_stt`` is already installed with ``pip3 list``. To perform the installation, just use ``pip3`` as such: .. code-block:: - $ pip3 install deepspeech + $ pip3 install mozilla_voice_stt -If ``deepspeech`` is already installed, you can update it as such: +If ``mozilla_voice_stt`` is already installed, you can update it as such: .. code-block:: - $ pip3 install --upgrade deepspeech + $ pip3 install --upgrade mozilla_voice_stt -Alternatively, if you have a supported NVIDIA GPU on Linux, you can install the GPU specific package as follows: +Alternatively, if you have a supported NVIDIA GPU on Linux, you can install the CUDA specific package as follows: .. code-block:: - $ pip3 install deepspeech-gpu + $ pip3 install mozilla_voice_stt_cuda See the `release notes `_ to find which GPUs are supported. Please ensure you have the required `CUDA dependency <#cuda-dependency>`_. -You can update ``deepspeech-gpu`` as follows: +You can update ``mozilla_voice_stt_cuda`` as follows: .. code-block:: - $ pip3 install --upgrade deepspeech-gpu + $ pip3 install --upgrade mozilla_voice_stt_cuda -In both cases, ``pip3`` should take care of installing all the required dependencies. After installation has finished, you should be able to call ``deepspeech`` from the command-line. +In both cases, ``pip3`` should take care of installing all the required dependencies. After installation has finished, you should be able to call ``mozilla_voice_stt`` from the command-line.
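As a quick sanity check after installation, you can confirm that the package imports and reports its version from Python. This assumes the module keeps a top-level ``version()`` helper like the former ``deepspeech`` package; ``pip3 list`` gives the same information either way:

.. code-block:: python

    import mozilla_voice_stt

    # Should print a semantic version string matching the installed wheel.
    print(mozilla_voice_stt.version())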
Note: the following command assumes you `downloaded the pre-trained model <#getting-the-pre-trained-model>`_. .. code-block:: bash - deepspeech --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio my_audio_file.wav + mozilla_voice_stt --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio my_audio_file.wav The ``--scorer`` argument is optional, and represents an external language model to be used when transcribing the audio. @@ -151,7 +151,7 @@ You can download the JS bindings using ``npm``\ : .. code-block:: bash - npm install deepspeech + npm install mozilla_voice_stt Please note that as of now, we support: - Node.JS versions 4 to 13. @@ -159,11 +159,11 @@ Please note that as of now, we support: TypeScript support is also provided. -Alternatively, if you're using Linux and have a supported NVIDIA GPU, you can install the GPU specific package as follows: +Alternatively, if you're using Linux and have a supported NVIDIA GPU, you can install the CUDA specific package as follows: .. code-block:: bash - npm install deepspeech-gpu + npm install mozilla_voice_stt_cuda See the `release notes `_ to find which GPUs are supported. Please ensure you have the required `CUDA dependency <#cuda-dependency>`_. @@ -174,7 +174,7 @@ See the :ref:`TypeScript client ` for an example of how to use t Using the command-line client ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -To download the pre-built binaries for the ``deepspeech`` command-line (compiled C++) client, use ``util/taskcluster.py``\ : +To download the pre-built binaries for the ``mozilla_voice_stt`` command-line (compiled C++) client, use ``util/taskcluster.py``\ : .. code-block:: bash @@ -192,7 +192,7 @@ also, if you need some binaries different than current master, like ``v0.2.0-alp python3 util/taskcluster.py --branch "v0.2.0-alpha.6" --target "." -The script ``taskcluster.py`` will download ``native_client.tar.xz`` (which includes the ``deepspeech`` binary and associated libraries) and extract it into the current folder. Also, ``taskcluster.py`` will download binaries for Linux/x86_64 by default, but you can override that behavior with the ``--arch`` parameter. See the help info with ``python util/taskcluster.py -h`` for more details. Specific branches of DeepSpeech or TensorFlow can be specified as well. +The script ``taskcluster.py`` will download ``native_client.tar.xz`` (which includes the ``mozilla_voice_stt`` binary and associated libraries) and extract it into the current folder. Also, ``taskcluster.py`` will download binaries for Linux/x86_64 by default, but you can override that behavior with the ``--arch`` parameter. See the help info with ``python util/taskcluster.py -h`` for more details. Specific branches of Mozilla Voice STT or TensorFlow can be specified as well. Alternatively you may manually download the ``native_client.tar.xz`` from the `releases <https://github.com/mozilla/DeepSpeech/releases>`_. @@ -200,9 +200,9 @@ Note: the following command assumes you `downloaded the pre-trained model <#gett .. code-block:: bash - ./deepspeech --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio audio_input.wav + ./mozilla_voice_stt --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio audio_input.wav -See the help output with ``./deepspeech -h`` for more details. +See the help output with ``./mozilla_voice_stt -h`` for more details.
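Besides one-shot transcription, the native client changes further down show a streaming API (``STT_CreateStream``, ``STT_FeedAudioContent``, ``STT_IntermediateDecode``, ``STT_FinishStream``). A minimal Python sketch of the same flow, assuming the renamed bindings keep the former ``deepspeech`` streaming interface in which ``createStream`` returns a stream object:

.. code-block:: python

    import wave

    import numpy as np
    from mozilla_voice_stt import Model

    model = Model('deepspeech-0.7.4-models.pbmm')

    with wave.open('my_audio_file.wav', 'rb') as wav:
        audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

    stream = model.createStream()
    # Feed the audio in chunks, as a microphone capture loop would,
    # printing the intermediate hypothesis after each chunk.
    for chunk in np.array_split(audio, 10):
        stream.feedAudioContent(chunk)
        print(stream.intermediateDecode())

    print(stream.finishStream())  # final transcript; the stream is closed afterwards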
Installing bindings from source ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -212,14 +212,14 @@ If pre-built binaries aren't available for your system, you'll need to install t Dockerfile for building from source ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -We provide ``Dockerfile.build`` to automatically build ``libdeepspeech.so``, the C++ native client, Python bindings, and KenLM. +We provide ``Dockerfile.build`` to automatically build ``libmozilla_voice_stt.so``, the C++ native client, Python bindings, and KenLM. You need to generate the Dockerfile from the template using: .. code-block:: bash make Dockerfile.build -If you want to specify a different DeepSpeech repository / branch, you can pass ``DEEPSPEECH_REPO`` or ``DEEPSPEECH_SHA`` parameters: +If you want to specify a different Mozilla Voice STT repository / branch, you can pass ``DEEPSPEECH_REPO`` or ``DEEPSPEECH_SHA`` parameters: .. code-block:: bash diff --git a/doc/conf.py b/doc/conf.py index bb64d77e28..228575144d 100644 --- a/doc/conf.py +++ b/doc/conf.py @@ -1,6 +1,6 @@ # -*- coding: utf-8 -*- # -# DeepSpeech documentation build configuration file, created by +# Mozilla Voice STT documentation build configuration file, created by # sphinx-quickstart on Thu Feb 2 21:20:39 2017. # # This file is execfile()d with the current directory set to its @@ -24,7 +24,7 @@ sys.path.insert(0, os.path.abspath('../')) -autodoc_mock_imports = ['deepspeech'] +autodoc_mock_imports = ['mozilla_voice_stt'] # This is in fact only relevant on ReadTheDocs, but we want to run the same way # on our CI as in RTD to avoid regressions on RTD that we would not catch on @@ -41,7 +41,7 @@ # -- Project information ----------------------------------------------------- -project = u'DeepSpeech' +project = u'Mozilla Voice STT' copyright = '2019-2020, Mozilla Corporation' author = 'Mozilla Corporation' @@ -143,7 +143,7 @@ # -- Options for HTMLHelp output ------------------------------------------ # Output file base name for HTML help builder. -htmlhelp_basename = 'DeepSpeechdoc' +htmlhelp_basename = 'sttdoc' # -- Options for LaTeX output --------------------------------------------- @@ -170,7 +170,7 @@ # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ - (master_doc, 'DeepSpeech.tex', u'DeepSpeech Documentation', + (master_doc, 'Mozilla_Voice_STT.tex', u'Mozilla Voice STT Documentation', u'Mozilla Research', 'manual'), ] @@ -180,7 +180,7 @@ # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ - (master_doc, 'deepspeech', u'DeepSpeech Documentation', + (master_doc, 'mozilla_voice_stt', u'Mozilla Voice STT Documentation', [author], 1) ] @@ -191,8 +191,8 @@ # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ - (master_doc, 'DeepSpeech', u'DeepSpeech Documentation', - author, 'DeepSpeech', 'One line description of project.', + (master_doc, 'Mozilla Voice STT', u'Mozilla Voice STT Documentation', + author, 'Mozilla Voice STT', 'One line description of project.', 'Miscellaneous'), ] diff --git a/doc/doxygen-c.conf b/doc/doxygen-c.conf index f36f57b205..daecb5f4cd 100644 --- a/doc/doxygen-c.conf +++ b/doc/doxygen-c.conf @@ -790,7 +790,7 @@ WARN_LOGFILE = # spaces. See also FILE_PATTERNS and EXTENSION_MAPPING # Note: If this tag is empty the current directory is searched. 
-INPUT = native_client/deepspeech.h +INPUT = native_client/mozilla_voice_stt.h # This tag can be used to specify the character encoding of the source files # that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses diff --git a/doc/doxygen-dotnet.conf b/doc/doxygen-dotnet.conf index 74c2c5bb5c..6481a9c144 100644 --- a/doc/doxygen-dotnet.conf +++ b/doc/doxygen-dotnet.conf @@ -790,7 +790,7 @@ WARN_LOGFILE = # spaces. See also FILE_PATTERNS and EXTENSION_MAPPING # Note: If this tag is empty the current directory is searched. -INPUT = native_client/dotnet/DeepSpeechClient/ native_client/dotnet/DeepSpeechClient/Interfaces/ native_client/dotnet/DeepSpeechClient/Enums/ native_client/dotnet/DeepSpeechClient/Models/ +INPUT = native_client/dotnet/MozillaVoiceSttClient/ native_client/dotnet/MozillaVoiceSttClient/Interfaces/ native_client/dotnet/MozillaVoiceSttClient/Enums/ native_client/dotnet/MozillaVoiceSttClient/Models/ # This tag can be used to specify the character encoding of the source files # that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses diff --git a/doc/doxygen-java.conf b/doc/doxygen-java.conf index a8d65c6936..cf193fed41 100644 --- a/doc/doxygen-java.conf +++ b/doc/doxygen-java.conf @@ -790,7 +790,7 @@ WARN_LOGFILE = # spaces. See also FILE_PATTERNS and EXTENSION_MAPPING # Note: If this tag is empty the current directory is searched. -INPUT = native_client/java/libdeepspeech/src/main/java/org/mozilla/deepspeech/libdeepspeech/ native_client/java/libdeepspeech/src/main/java/org/mozilla/deepspeech/libdeepspeech_doc/ +INPUT = native_client/java/libmozillavoicestt/src/main/java/org/mozilla/voice/stt/ native_client/java/libmozillavoicestt/src/main/java/org/mozilla/voice/stt_doc/ # This tag can be used to specify the character encoding of the source files # that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses diff --git a/doc/index.rst b/doc/index.rst index e8991d3f58..b9657df199 100644 --- a/doc/index.rst +++ b/doc/index.rst @@ -1,23 +1,23 @@ -.. DeepSpeech documentation master file, created by +.. Mozilla Voice STT documentation master file, created by sphinx-quickstart on Thu Feb 2 21:20:39 2017. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. -Welcome to DeepSpeech's documentation! -====================================== +Welcome to Mozilla Voice STT's documentation! +============================================= -DeepSpeech is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on `Baidu's Deep Speech research paper `_. Project DeepSpeech uses Google's `TensorFlow `_ to make the implementation easier. +Mozilla Voice STT is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on `Baidu's Deep Speech research paper `_. Mozilla Voice STT uses Google's `TensorFlow `_ to make the implementation easier. -To install and use DeepSpeech all you have to do is: +To install and use Mozilla Voice STT all you have to do is: ..
code-block:: bash # Create and activate a virtualenv - virtualenv -p python3 $HOME/tmp/deepspeech-venv/ - source $HOME/tmp/deepspeech-venv/bin/activate + virtualenv -p python3 $HOME/tmp/stt-venv/ + source $HOME/tmp/stt-venv/bin/activate - # Install DeepSpeech - pip3 install deepspeech + # Install Mozilla Voice STT + pip3 install mozilla_voice_stt # Download pre-trained English model files curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.4/deepspeech-0.7.4-models.pbmm @@ -28,27 +28,27 @@ To install and use DeepSpeech all you have to do is: tar xvf audio-0.7.4.tar.gz # Transcribe an audio file - deepspeech --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio audio/2830-3980-0043.wav + mozilla_voice_stt --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio audio/2830-3980-0043.wav A pre-trained English model is available for use and can be downloaded following the instructions in :ref:`the usage docs `. For the latest release, including pre-trained models and checkpoints, `see the GitHub releases page `_. -Quicker inference can be performed using a supported NVIDIA GPU on Linux. See the `release notes `_ to find which GPUs are supported. To run ``deepspeech`` on a GPU, install the GPU specific package: +Quicker inference can be performed using a supported NVIDIA GPU on Linux. See the `release notes `_ to find which GPUs are supported. To run ``mozilla_voice_stt`` on a GPU, install the GPU specific package: .. code-block:: bash # Create and activate a virtualenv - virtualenv -p python3 $HOME/tmp/deepspeech-gpu-venv/ - source $HOME/tmp/deepspeech-gpu-venv/bin/activate + virtualenv -p python3 $HOME/tmp/stt-gpu-venv/ + source $HOME/tmp/stt-gpu-venv/bin/activate - # Install DeepSpeech CUDA enabled package - pip3 install deepspeech-gpu + # Install Mozilla Voice STT CUDA enabled package + pip3 install mozilla_voice_stt_cuda # Transcribe an audio file. - deepspeech --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio audio/2830-3980-0043.wav + mozilla_voice_stt --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio audio/2830-3980-0043.wav Please ensure you have the required :ref:`CUDA dependencies `. -See the output of ``deepspeech -h`` for more information on the use of ``deepspeech``. (If you experience problems running ``deepspeech``, please check :ref:`required runtime dependencies `). +See the output of ``mozilla_voice_stt -h`` for more information on the use of ``mozilla_voice_stt``. (If you experience problems running ``mozilla_voice_stt``, please check :ref:`required runtime dependencies `). .. toctree:: :maxdepth: 2 @@ -76,7 +76,7 @@ See the output of ``deepspeech -h`` for more information on the use of ``deepspe :maxdepth: 2 :caption: Architecture and training - DeepSpeech + AcousticModel Geometry diff --git a/evaluate_tflite.py b/evaluate_tflite.py index 0d46261551..829a7d1857 100644 --- a/evaluate_tflite.py +++ b/evaluate_tflite.py @@ -10,7 +10,7 @@ import os import sys -from deepspeech import Model +from mozilla_voice_stt import Model from deepspeech_training.util.evaluate_tools import calculate_and_print_report from deepspeech_training.util.flags import create_flags from functools import partial @@ -19,11 +19,8 @@ r''' This module should be self-contained: - - build libdeepspeech.so with TFLite: - - bazel build [...] --define=runtime=tflite [...] //native_client:libdeepspeech.so - - make -C native_client/python/ TFDIR=... 
bindings - setup a virtualenv - - pip install native_client/python/dist/deepspeech*.whl + - pip install mozilla_voice_stt_tflite - pip install -r requirements_eval_tflite.txt Then run with a TF Lite model, a scorer and a CSV test file diff --git a/examples/README.rst b/examples/README.rst index f5ebb1bd26..4b5b3dc0ad 100644 --- a/examples/README.rst +++ b/examples/README.rst @@ -1,6 +1,6 @@ Examples ======== -DeepSpeech examples were moved to a separate repository. +Mozilla Voice STT examples were moved to a separate repository. New location: https://github.com/mozilla/DeepSpeech-examples diff --git a/native_client/Android.mk b/native_client/Android.mk index d21551fd1c..9c40d58542 100644 --- a/native_client/Android.mk +++ b/native_client/Android.mk @@ -1,14 +1,14 @@ LOCAL_PATH := $(call my-dir) include $(CLEAR_VARS) -LOCAL_MODULE := deepspeech-prebuilt -LOCAL_SRC_FILES := $(TFDIR)/bazel-bin/native_client/libdeepspeech.so +LOCAL_MODULE := mozilla_voice_stt-prebuilt +LOCAL_SRC_FILES := $(TFDIR)/bazel-bin/native_client/libmozilla_voice_stt.so include $(PREBUILT_SHARED_LIBRARY) include $(CLEAR_VARS) LOCAL_CPP_EXTENSION := .cc .cxx .cpp -LOCAL_MODULE := deepspeech +LOCAL_MODULE := mozilla_voice_stt LOCAL_SRC_FILES := client.cc -LOCAL_SHARED_LIBRARIES := deepspeech-prebuilt +LOCAL_SHARED_LIBRARIES := mozilla_voice_stt-prebuilt LOCAL_LDFLAGS := -Wl,--no-as-needed include $(BUILD_EXECUTABLE) diff --git a/native_client/BUILD b/native_client/BUILD index 92eb788cae..0b8ffed341 100644 --- a/native_client/BUILD +++ b/native_client/BUILD @@ -96,10 +96,10 @@ cc_library( ) tf_cc_shared_object( - name = "libdeepspeech.so", + name = "libmozilla_voice_stt.so", srcs = [ "deepspeech.cc", - "deepspeech.h", + "mozilla_voice_stt.h", "deepspeech_errors.cc", "modelstate.cc", "modelstate.h", @@ -149,7 +149,7 @@ tf_cc_shared_object( #"//tensorflow/core:all_kernels", ### => Trying to be more fine-grained ### Use bin/ops_in_graph.py to list all the ops used by a frozen graph. - ### CPU only build, libdeepspeech.so file size reduced by ~50% + ### CPU only build, libmozilla_voice_stt.so file size reduced by ~50% "//tensorflow/core/kernels:spectrogram_op", # AudioSpectrogram "//tensorflow/core/kernels:bias_op", # BiasAdd "//tensorflow/core/kernels:cast_op", # Cast @@ -189,11 +189,11 @@ tf_cc_shared_object( ) genrule( - name = "libdeepspeech_so_dsym", - srcs = [":libdeepspeech.so"], - outs = ["libdeepspeech.so.dSYM"], + name = "libmozilla_voice_stt_so_dsym", + srcs = [":libmozilla_voice_stt.so"], + outs = ["libmozilla_voice_stt.so.dSYM"], output_to_bindir = True, - cmd = "dsymutil $(location :libdeepspeech.so) -o $@" + cmd = "dsymutil $(location :libmozilla_voice_stt.so) -o $@" ) cc_binary( diff --git a/native_client/CODINGSTYLE.md b/native_client/CODINGSTYLE.md index ddb8fc822e..0175947388 100644 --- a/native_client/CODINGSTYLE.md +++ b/native_client/CODINGSTYLE.md @@ -1,5 +1,5 @@ This file contains some notes on coding style within the C++ portion of the -DeepSpeech project. It is very much a work in progress and incomplete. +Mozilla Voice STT project. It is very much a work in progress and incomplete. 
General ======= diff --git a/native_client/Makefile b/native_client/Makefile index b645499c28..597adc1265 100644 --- a/native_client/Makefile +++ b/native_client/Makefile @@ -16,32 +16,32 @@ include definitions.mk default: $(DEEPSPEECH_BIN) clean: - rm -f deepspeech + rm -f $(DEEPSPEECH_BIN) $(DEEPSPEECH_BIN): client.cc Makefile $(CXX) $(CFLAGS) $(CFLAGS_DEEPSPEECH) $(SOX_CFLAGS) client.cc $(LDFLAGS) $(SOX_LDFLAGS) ifeq ($(OS),Darwin) - install_name_tool -change bazel-out/local-opt/bin/native_client/libdeepspeech.so @rpath/libdeepspeech.so deepspeech + install_name_tool -change bazel-out/local-opt/bin/native_client/libmozilla_voice_stt.so @rpath/libmozilla_voice_stt.so $(DEEPSPEECH_BIN) endif run: $(DEEPSPEECH_BIN) - ${META_LD_LIBRARY_PATH}=${TFDIR}/bazel-bin/native_client:${${META_LD_LIBRARY_PATH}} ./deepspeech ${ARGS} + ${META_LD_LIBRARY_PATH}=${TFDIR}/bazel-bin/native_client:${${META_LD_LIBRARY_PATH}} ./$(DEEPSPEECH_BIN) ${ARGS} debug: $(DEEPSPEECH_BIN) - ${META_LD_LIBRARY_PATH}=${TFDIR}/bazel-bin/native_client:${${META_LD_LIBRARY_PATH}} gdb --args ./deepspeech ${ARGS} + ${META_LD_LIBRARY_PATH}=${TFDIR}/bazel-bin/native_client:${${META_LD_LIBRARY_PATH}} gdb --args ./$(DEEPSPEECH_BIN) ${ARGS} install: $(DEEPSPEECH_BIN) install -d ${PREFIX}/lib - install -m 0644 ${TFDIR}/bazel-bin/native_client/libdeepspeech.so ${PREFIX}/lib/ + install -m 0644 ${TFDIR}/bazel-bin/native_client/libmozilla_voice_stt.so ${PREFIX}/lib/ install -d ${PREFIX}/include - install -m 0644 deepspeech.h ${PREFIX}/include + install -m 0644 mozilla_voice_stt.h ${PREFIX}/include install -d ${PREFIX}/bin - install -m 0755 deepspeech ${PREFIX}/bin/ + install -m 0755 $(DEEPSPEECH_BIN) ${PREFIX}/bin/ uninstall: - rm -f ${PREFIX}/bin/deepspeech + rm -f ${PREFIX}/bin/$(DEEPSPEECH_BIN) rmdir --ignore-fail-on-non-empty ${PREFIX}/bin - rm -f ${PREFIX}/lib/libdeepspeech.so + rm -f ${PREFIX}/lib/libmozilla_voice_stt.so rmdir --ignore-fail-on-non-empty ${PREFIX}/lib print-toolchain: diff --git a/native_client/args.h b/native_client/args.h index baa9b7ffa3..0f26743c3c 100644 --- a/native_client/args.h +++ b/native_client/args.h @@ -8,7 +8,7 @@ #endif #include -#include "deepspeech.h" +#include "mozilla_voice_stt.h" char* model = NULL; @@ -43,7 +43,7 @@ void PrintHelp(const char* bin) std::cout << "Usage: " << bin << " --model MODEL [--scorer SCORER] --audio AUDIO [-t] [-e]\n" "\n" - "Running DeepSpeech inference.\n" + "Running Mozilla Voice STT inference.\n" "\n" "\t--model MODEL\t\t\tPath to the model (protocol buffer binary file)\n" "\t--scorer SCORER\t\t\tPath to the external scorer file\n" @@ -58,9 +58,9 @@ void PrintHelp(const char* bin) "\t--stream size\t\t\tRun in stream mode, output intermediate results\n" "\t--help\t\t\t\tShow help\n" "\t--version\t\t\tPrint version and exits\n"; - char* version = DS_Version(); - std::cerr << "DeepSpeech " << version << "\n"; - DS_FreeString(version); + char* version = STT_Version(); + std::cerr << "Mozilla Voice STT " << version << "\n"; + STT_FreeString(version); exit(1); } @@ -153,9 +153,9 @@ bool ProcessArgs(int argc, char** argv) } if (has_versions) { - char* version = DS_Version(); - std::cout << "DeepSpeech " << version << "\n"; - DS_FreeString(version); + char* version = STT_Version(); + std::cout << "Mozilla Voice STT " << version << "\n"; + STT_FreeString(version); return false; } diff --git a/native_client/client.cc b/native_client/client.cc index 46a16115c5..4fa167d2d2 100644 --- a/native_client/client.cc +++ b/native_client/client.cc @@ -34,7 +34,7 @@ #endif // NO_DIR #include 
-#include "deepspeech.h" +#include "mozilla_voice_stt.h" #include "args.h" typedef struct { @@ -168,17 +168,17 @@ LocalDsSTT(ModelState* aCtx, const short* aBuffer, size_t aBufferSize, // sphinx-doc: c_ref_inference_start if (extended_output) { - Metadata *result = DS_SpeechToTextWithMetadata(aCtx, aBuffer, aBufferSize, 1); + Metadata *result = STT_SpeechToTextWithMetadata(aCtx, aBuffer, aBufferSize, 1); res.string = CandidateTranscriptToString(&result->transcripts[0]); - DS_FreeMetadata(result); + STT_FreeMetadata(result); } else if (json_output) { - Metadata *result = DS_SpeechToTextWithMetadata(aCtx, aBuffer, aBufferSize, json_candidate_transcripts); + Metadata *result = STT_SpeechToTextWithMetadata(aCtx, aBuffer, aBufferSize, json_candidate_transcripts); res.string = MetadataToJSON(result); - DS_FreeMetadata(result); + STT_FreeMetadata(result); } else if (stream_size > 0) { StreamingState* ctx; - int status = DS_CreateStream(aCtx, &ctx); - if (status != DS_ERR_OK) { + int status = STT_CreateStream(aCtx, &ctx); + if (status != STT_ERR_OK) { res.string = strdup(""); return res; } @@ -186,22 +186,22 @@ LocalDsSTT(ModelState* aCtx, const short* aBuffer, size_t aBufferSize, const char *last = nullptr; while (off < aBufferSize) { size_t cur = aBufferSize - off > stream_size ? stream_size : aBufferSize - off; - DS_FeedAudioContent(ctx, aBuffer + off, cur); + STT_FeedAudioContent(ctx, aBuffer + off, cur); off += cur; - const char* partial = DS_IntermediateDecode(ctx); + const char* partial = STT_IntermediateDecode(ctx); if (last == nullptr || strcmp(last, partial)) { printf("%s\n", partial); last = partial; } else { - DS_FreeString((char *) partial); + STT_FreeString((char *) partial); } } if (last != nullptr) { - DS_FreeString((char *) last); + STT_FreeString((char *) last); } - res.string = DS_FinishStream(ctx); + res.string = STT_FinishStream(ctx); } else { - res.string = DS_SpeechToText(aCtx, aBuffer, aBufferSize); + res.string = STT_SpeechToText(aCtx, aBuffer, aBufferSize); } // sphinx-doc: c_ref_inference_stop @@ -367,7 +367,7 @@ GetAudioBuffer(const char* path, int desired_sample_rate) void ProcessFile(ModelState* context, const char* path, bool show_times) { - ds_audio_buffer audio = GetAudioBuffer(path, DS_GetModelSampleRate(context)); + ds_audio_buffer audio = GetAudioBuffer(path, STT_GetModelSampleRate(context)); // Pass audio to DeepSpeech // We take half of buffer_size because buffer is a char* while @@ -381,7 +381,7 @@ ProcessFile(ModelState* context, const char* path, bool show_times) if (result.string) { printf("%s\n", result.string); - DS_FreeString((char*)result.string); + STT_FreeString((char*)result.string); } if (show_times) { @@ -400,16 +400,16 @@ main(int argc, char **argv) // Initialise DeepSpeech ModelState* ctx; // sphinx-doc: c_ref_model_start - int status = DS_CreateModel(model, &ctx); + int status = STT_CreateModel(model, &ctx); if (status != 0) { - char* error = DS_ErrorCodeToErrorMessage(status); + char* error = STT_ErrorCodeToErrorMessage(status); fprintf(stderr, "Could not create model: %s\n", error); free(error); return 1; } if (set_beamwidth) { - status = DS_SetModelBeamWidth(ctx, beam_width); + status = STT_SetModelBeamWidth(ctx, beam_width); if (status != 0) { fprintf(stderr, "Could not set model beam width.\n"); return 1; @@ -417,13 +417,13 @@ main(int argc, char **argv) } if (scorer) { - status = DS_EnableExternalScorer(ctx, scorer); + status = STT_EnableExternalScorer(ctx, scorer); if (status != 0) { fprintf(stderr, "Could not enable external 
scorer.\n"); return 1; } if (set_alphabeta) { - status = DS_SetScorerAlphaBeta(ctx, lm_alpha, lm_beta); + status = STT_SetScorerAlphaBeta(ctx, lm_alpha, lm_beta); if (status != 0) { fprintf(stderr, "Error setting scorer alpha and beta.\n"); return 1; @@ -485,7 +485,7 @@ main(int argc, char **argv) sox_quit(); #endif // NO_SOX - DS_FreeModel(ctx); + STT_FreeModel(ctx); return 0; } diff --git a/native_client/ctcdecode/__init__.py b/native_client/ctcdecode/__init__.py index 2dc2be560d..c01d671238 100644 --- a/native_client/ctcdecode/__init__.py +++ b/native_client/ctcdecode/__init__.py @@ -10,7 +10,7 @@ # Hack: import error codes by matching on their names, as SWIG unfortunately # does not support binding enums to Python in a scoped manner yet. for symbol in dir(swigwrapper): - if symbol.startswith('DS_ERR_'): + if symbol.startswith('STT_ERR_'): globals()[symbol] = getattr(swigwrapper, symbol) class Scorer(swigwrapper.Scorer): diff --git a/native_client/ctcdecode/scorer.cpp b/native_client/ctcdecode/scorer.cpp index 23982ef33a..ad41dd8e2e 100644 --- a/native_client/ctcdecode/scorer.cpp +++ b/native_client/ctcdecode/scorer.cpp @@ -74,13 +74,13 @@ int Scorer::load_lm(const std::string& lm_path) // Check if file is readable to avoid KenLM throwing an exception const char* filename = lm_path.c_str(); if (access(filename, R_OK) != 0) { - return DS_ERR_SCORER_UNREADABLE; + return STT_ERR_SCORER_UNREADABLE; } // Check if the file format is valid to avoid KenLM throwing an exception lm::ngram::ModelType model_type; if (!lm::ngram::RecognizeBinary(filename, model_type)) { - return DS_ERR_SCORER_INVALID_LM; + return STT_ERR_SCORER_INVALID_LM; } // Load the LM @@ -97,7 +97,7 @@ int Scorer::load_lm(const std::string& lm_path) uint64_t trie_offset = language_model_->GetEndOfSearchOffset(); if (package_size <= trie_offset) { // File ends without a trie structure - return DS_ERR_SCORER_NO_TRIE; + return STT_ERR_SCORER_NO_TRIE; } // Read metadata and trie from file @@ -113,7 +113,7 @@ int Scorer::load_trie(std::ifstream& fin, const std::string& file_path) if (magic != MAGIC) { std::cerr << "Error: Can't parse scorer file, invalid header. Try updating " "your scorer file." 
<< std::endl; - return DS_ERR_SCORER_INVALID_TRIE; + return STT_ERR_SCORER_INVALID_TRIE; } int version; @@ -125,10 +125,10 @@ int Scorer::load_trie(std::ifstream& fin, const std::string& file_path) if (version < FILE_VERSION) { std::cerr << "Update your scorer file."; } else { - std::cerr << "Downgrade your scorer file or update your version of DeepSpeech."; + std::cerr << "Downgrade your scorer file or update your version of Mozilla Voice STT."; } std::cerr << std::endl; - return DS_ERR_SCORER_VERSION_MISMATCH; + return STT_ERR_SCORER_VERSION_MISMATCH; } fin.read(reinterpret_cast(&is_utf8_mode_), sizeof(is_utf8_mode_)); @@ -143,7 +143,7 @@ int Scorer::load_trie(std::ifstream& fin, const std::string& file_path) opt.mode = fst::FstReadOptions::MAP; opt.source = file_path; dictionary.reset(FstType::Read(fin, opt)); - return DS_ERR_OK; + return STT_ERR_OK; } bool Scorer::save_dictionary(const std::string& path, bool append_instead_of_overwrite) diff --git a/native_client/ctcdecode/scorer.h b/native_client/ctcdecode/scorer.h index 5aee1046ff..ee361d7a60 100644 --- a/native_client/ctcdecode/scorer.h +++ b/native_client/ctcdecode/scorer.h @@ -13,7 +13,7 @@ #include "path_trie.h" #include "alphabet.h" -#include "deepspeech.h" +#include "mozilla_voice_stt.h" const double OOV_SCORE = -1000.0; const std::string START_TOKEN = ""; diff --git a/native_client/ctcdecode/swigwrapper.i b/native_client/ctcdecode/swigwrapper.i index dbe67c689c..9daf7d89d8 100644 --- a/native_client/ctcdecode/swigwrapper.i +++ b/native_client/ctcdecode/swigwrapper.i @@ -42,14 +42,14 @@ namespace std { %constant const char* __version__ = ds_version(); %constant const char* __git_version__ = ds_git_version(); -// Import only the error code enum definitions from deepspeech.h +// Import only the error code enum definitions from mozilla_voice_stt.h // We can't just do |%ignore "";| here because it affects this file globally (even // files %include'd above). That causes SWIG to lose destructor information and // leads to leaks of the wrapper objects. // Instead we ignore functions and classes (structs), which are the only other -// things in deepspeech.h. If we add some new construct to deepspeech.h we need +// things in mozilla_voice_stt.h. If we add some new construct to mozilla_voice_stt.h we need // to update the ignore rules here to avoid exposing unwanted APIs in the decoder // package. %rename("$ignore", %$isfunction) ""; %rename("$ignore", %$isclass) ""; -%include "../deepspeech.h" +%include "../mozilla_voice_stt.h" diff --git a/native_client/deepspeech.cc b/native_client/deepspeech.cc index 38868d4b5f..01a9292b64 100644 --- a/native_client/deepspeech.cc +++ b/native_client/deepspeech.cc @@ -9,7 +9,7 @@ #include #include -#include "deepspeech.h" +#include "mozilla_voice_stt.h" #include "alphabet.h" #include "modelstate.h" @@ -25,7 +25,7 @@ #ifdef __ANDROID__ #include -#define LOG_TAG "libdeepspeech" +#define LOG_TAG "libmozilla_voice_stt" #define LOGD(...) __android_log_print(ANDROID_LOG_DEBUG, LOG_TAG, __VA_ARGS__) #define LOGE(...) 
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG, __VA_ARGS__) #else @@ -263,23 +263,23 @@ StreamingState::processBatch(const vector& buf, unsigned int n_steps) } int -DS_CreateModel(const char* aModelPath, +STT_CreateModel(const char* aModelPath, ModelState** retval) { *retval = nullptr; std::cerr << "TensorFlow: " << tf_local_git_version() << std::endl; - std::cerr << "DeepSpeech: " << ds_git_version() << std::endl; + std::cerr << "Mozilla Voice STT: " << ds_git_version() << std::endl; #ifdef __ANDROID__ LOGE("TensorFlow: %s", tf_local_git_version()); LOGD("TensorFlow: %s", tf_local_git_version()); - LOGE("DeepSpeech: %s", ds_git_version()); - LOGD("DeepSpeech: %s", ds_git_version()); + LOGE("Mozilla Voice STT: %s", ds_git_version()); + LOGD("Mozilla Voice STT: %s", ds_git_version()); #endif if (!aModelPath || strlen(aModelPath) < 1) { std::cerr << "No model specified, cannot continue." << std::endl; - return DS_ERR_NO_MODEL; + return STT_ERR_NO_MODEL; } std::unique_ptr model( @@ -292,79 +292,79 @@ DS_CreateModel(const char* aModelPath, if (!model) { std::cerr << "Could not allocate model state." << std::endl; - return DS_ERR_FAIL_CREATE_MODEL; + return STT_ERR_FAIL_CREATE_MODEL; } int err = model->init(aModelPath); - if (err != DS_ERR_OK) { + if (err != STT_ERR_OK) { return err; } *retval = model.release(); - return DS_ERR_OK; + return STT_ERR_OK; } unsigned int -DS_GetModelBeamWidth(const ModelState* aCtx) +STT_GetModelBeamWidth(const ModelState* aCtx) { return aCtx->beam_width_; } int -DS_SetModelBeamWidth(ModelState* aCtx, unsigned int aBeamWidth) +STT_SetModelBeamWidth(ModelState* aCtx, unsigned int aBeamWidth) { aCtx->beam_width_ = aBeamWidth; return 0; } int -DS_GetModelSampleRate(const ModelState* aCtx) +STT_GetModelSampleRate(const ModelState* aCtx) { return aCtx->sample_rate_; } void -DS_FreeModel(ModelState* ctx) +STT_FreeModel(ModelState* ctx) { delete ctx; } int -DS_EnableExternalScorer(ModelState* aCtx, +STT_EnableExternalScorer(ModelState* aCtx, const char* aScorerPath) { std::unique_ptr scorer(new Scorer()); int err = scorer->init(aScorerPath, aCtx->alphabet_); if (err != 0) { - return DS_ERR_INVALID_SCORER; + return STT_ERR_INVALID_SCORER; } aCtx->scorer_ = std::move(scorer); - return DS_ERR_OK; + return STT_ERR_OK; } int -DS_DisableExternalScorer(ModelState* aCtx) +STT_DisableExternalScorer(ModelState* aCtx) { if (aCtx->scorer_) { aCtx->scorer_.reset(); - return DS_ERR_OK; + return STT_ERR_OK; } - return DS_ERR_SCORER_NOT_ENABLED; + return STT_ERR_SCORER_NOT_ENABLED; } -int DS_SetScorerAlphaBeta(ModelState* aCtx, +int STT_SetScorerAlphaBeta(ModelState* aCtx, float aAlpha, float aBeta) { if (aCtx->scorer_) { aCtx->scorer_->reset_params(aAlpha, aBeta); - return DS_ERR_OK; + return STT_ERR_OK; } - return DS_ERR_SCORER_NOT_ENABLED; + return STT_ERR_SCORER_NOT_ENABLED; } int -DS_CreateStream(ModelState* aCtx, +STT_CreateStream(ModelState* aCtx, StreamingState** retval) { *retval = nullptr; @@ -372,7 +372,7 @@ DS_CreateStream(ModelState* aCtx, std::unique_ptr ctx(new StreamingState()); if (!ctx) { std::cerr << "Could not allocate streaming state." 
<< std::endl; - return DS_ERR_FAIL_CREATE_STREAM; + return STT_ERR_FAIL_CREATE_STREAM; } ctx->audio_buffer_.reserve(aCtx->audio_win_len_); @@ -393,11 +393,11 @@ DS_CreateStream(ModelState* aCtx, aCtx->scorer_); *retval = ctx.release(); - return DS_ERR_OK; + return STT_ERR_OK; } void -DS_FeedAudioContent(StreamingState* aSctx, +STT_FeedAudioContent(StreamingState* aSctx, const short* aBuffer, unsigned int aBufferSize) { @@ -405,32 +405,32 @@ DS_FeedAudioContent(StreamingState* aSctx, } char* -DS_IntermediateDecode(const StreamingState* aSctx) +STT_IntermediateDecode(const StreamingState* aSctx) { return aSctx->intermediateDecode(); } Metadata* -DS_IntermediateDecodeWithMetadata(const StreamingState* aSctx, +STT_IntermediateDecodeWithMetadata(const StreamingState* aSctx, unsigned int aNumResults) { return aSctx->intermediateDecodeWithMetadata(aNumResults); } char* -DS_FinishStream(StreamingState* aSctx) +STT_FinishStream(StreamingState* aSctx) { char* str = aSctx->finishStream(); - DS_FreeStream(aSctx); + STT_FreeStream(aSctx); return str; } Metadata* -DS_FinishStreamWithMetadata(StreamingState* aSctx, +STT_FinishStreamWithMetadata(StreamingState* aSctx, unsigned int aNumResults) { Metadata* result = aSctx->finishStreamWithMetadata(aNumResults); - DS_FreeStream(aSctx); + STT_FreeStream(aSctx); return result; } @@ -440,41 +440,41 @@ CreateStreamAndFeedAudioContent(ModelState* aCtx, unsigned int aBufferSize) { StreamingState* ctx; - int status = DS_CreateStream(aCtx, &ctx); - if (status != DS_ERR_OK) { + int status = STT_CreateStream(aCtx, &ctx); + if (status != STT_ERR_OK) { return nullptr; } - DS_FeedAudioContent(ctx, aBuffer, aBufferSize); + STT_FeedAudioContent(ctx, aBuffer, aBufferSize); return ctx; } char* -DS_SpeechToText(ModelState* aCtx, +STT_SpeechToText(ModelState* aCtx, const short* aBuffer, unsigned int aBufferSize) { StreamingState* ctx = CreateStreamAndFeedAudioContent(aCtx, aBuffer, aBufferSize); - return DS_FinishStream(ctx); + return STT_FinishStream(ctx); } Metadata* -DS_SpeechToTextWithMetadata(ModelState* aCtx, +STT_SpeechToTextWithMetadata(ModelState* aCtx, const short* aBuffer, unsigned int aBufferSize, unsigned int aNumResults) { StreamingState* ctx = CreateStreamAndFeedAudioContent(aCtx, aBuffer, aBufferSize); - return DS_FinishStreamWithMetadata(ctx, aNumResults); + return STT_FinishStreamWithMetadata(ctx, aNumResults); } void -DS_FreeStream(StreamingState* aSctx) +STT_FreeStream(StreamingState* aSctx) { delete aSctx; } void -DS_FreeMetadata(Metadata* m) +STT_FreeMetadata(Metadata* m) { if (m) { for (int i = 0; i < m->num_transcripts; ++i) { @@ -491,13 +491,13 @@ DS_FreeMetadata(Metadata* m) } void -DS_FreeString(char* str) +STT_FreeString(char* str) { free(str); } char* -DS_Version() +STT_Version() { return strdup(ds_version()); } diff --git a/native_client/deepspeech_errors.cc b/native_client/deepspeech_errors.cc index 1f1e4d8d15..69b580f62f 100644 --- a/native_client/deepspeech_errors.cc +++ b/native_client/deepspeech_errors.cc @@ -1,8 +1,8 @@ -#include "deepspeech.h" +#include "mozilla_voice_stt.h" #include char* -DS_ErrorCodeToErrorMessage(int aErrorCode) +STT_ErrorCodeToErrorMessage(int aErrorCode) { #define RETURN_MESSAGE(NAME, VALUE, DESC) \ case NAME: \ @@ -10,7 +10,7 @@ DS_ErrorCodeToErrorMessage(int aErrorCode) switch(aErrorCode) { - DS_FOR_EACH_ERROR(RETURN_MESSAGE) + STT_FOR_EACH_ERROR(RETURN_MESSAGE) default: return strdup("Unknown error, please make sure you are using the correct native binary."); } diff --git a/native_client/definitions.mk 
b/native_client/definitions.mk index 0c8ab656ba..bad584f83a 100644 --- a/native_client/definitions.mk +++ b/native_client/definitions.mk @@ -18,9 +18,9 @@ ifeq ($(findstring _NT,$(OS)),_NT) PLATFORM_EXE_SUFFIX := .exe endif -DEEPSPEECH_BIN := deepspeech$(PLATFORM_EXE_SUFFIX) +DEEPSPEECH_BIN := mozilla_voice_stt$(PLATFORM_EXE_SUFFIX) CFLAGS_DEEPSPEECH := -std=c++11 -o $(DEEPSPEECH_BIN) -LINK_DEEPSPEECH := -ldeepspeech +LINK_DEEPSPEECH := -lmozilla_voice_stt LINK_PATH_DEEPSPEECH := -L${TFDIR}/bazel-bin/native_client ifeq ($(TARGET),host) @@ -53,7 +53,7 @@ TOOL_CC := cl.exe TOOL_CXX := cl.exe TOOL_LD := link.exe TOOL_LIBEXE := lib.exe -LINK_DEEPSPEECH := $(TFDIR)\bazel-bin\native_client\libdeepspeech.so.if.lib +LINK_DEEPSPEECH := $(TFDIR)\bazel-bin\native_client\libmozilla_voice_stt.so.if.lib LINK_PATH_DEEPSPEECH := CFLAGS_DEEPSPEECH := -nologo -Fe$(DEEPSPEECH_BIN) SOX_CFLAGS := @@ -174,7 +174,7 @@ define copy_missing_libs new_missing="$$( (for f in $$(otool -L $$lib 2>/dev/null | tail -n +2 | awk '{ print $$1 }' | grep -v '$$lib'); do ls -hal $$f; done;) 2>&1 | grep 'No such' | cut -d':' -f2 | xargs basename -a)"; \ missing_libs="$$missing_libs $$new_missing"; \ elif [ "$(OS)" = "${TC_MSYS_VERSION}" ]; then \ - missing_libs="libdeepspeech.so"; \ + missing_libs="libmozilla_voice_stt.so"; \ else \ missing_libs="$$missing_libs $$($(LDD) $$lib | grep 'not found' | awk '{ print $$1 }')"; \ fi; \ diff --git a/native_client/dotnet/DeepSpeechClient/Enums/ErrorCodes.cs b/native_client/dotnet/DeepSpeechClient/Enums/ErrorCodes.cs deleted file mode 100644 index 30660add2a..0000000000 --- a/native_client/dotnet/DeepSpeechClient/Enums/ErrorCodes.cs +++ /dev/null @@ -1,30 +0,0 @@ -namespace DeepSpeechClient.Enums -{ - /// - /// Error codes from the native DeepSpeech binary. 
- /// - internal enum ErrorCodes - { - // OK - DS_ERR_OK = 0x0000, - - // Missing invormations - DS_ERR_NO_MODEL = 0x1000, - - // Invalid parameters - DS_ERR_INVALID_ALPHABET = 0x2000, - DS_ERR_INVALID_SHAPE = 0x2001, - DS_ERR_INVALID_SCORER = 0x2002, - DS_ERR_MODEL_INCOMPATIBLE = 0x2003, - DS_ERR_SCORER_NOT_ENABLED = 0x2004, - - // Runtime failures - DS_ERR_FAIL_INIT_MMAP = 0x3000, - DS_ERR_FAIL_INIT_SESS = 0x3001, - DS_ERR_FAIL_INTERPRETER = 0x3002, - DS_ERR_FAIL_RUN_SESS = 0x3003, - DS_ERR_FAIL_CREATE_STREAM = 0x3004, - DS_ERR_FAIL_READ_PROTOBUF = 0x3005, - DS_ERR_FAIL_CREATE_SESS = 0x3006, - } -} diff --git a/native_client/dotnet/DeepSpeechClient/NativeImp.cs b/native_client/dotnet/DeepSpeechClient/NativeImp.cs deleted file mode 100644 index bc77cf1b18..0000000000 --- a/native_client/dotnet/DeepSpeechClient/NativeImp.cs +++ /dev/null @@ -1,102 +0,0 @@ -using DeepSpeechClient.Enums; - -using System; -using System.Runtime.InteropServices; - -namespace DeepSpeechClient -{ - /// - /// Wrapper for the native implementation of "libdeepspeech.so" - /// - internal static class NativeImp - { - #region Native Implementation - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl, - CharSet = CharSet.Ansi, SetLastError = true)] - internal static extern IntPtr DS_Version(); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)] - internal unsafe static extern ErrorCodes DS_CreateModel(string aModelPath, - ref IntPtr** pint); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)] - internal unsafe static extern IntPtr DS_ErrorCodeToErrorMessage(int aErrorCode); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)] - internal unsafe static extern uint DS_GetModelBeamWidth(IntPtr** aCtx); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)] - internal unsafe static extern ErrorCodes DS_SetModelBeamWidth(IntPtr** aCtx, - uint aBeamWidth); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)] - internal unsafe static extern ErrorCodes DS_CreateModel(string aModelPath, - uint aBeamWidth, - ref IntPtr** pint); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)] - internal unsafe static extern int DS_GetModelSampleRate(IntPtr** aCtx); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)] - internal static unsafe extern ErrorCodes DS_EnableExternalScorer(IntPtr** aCtx, - string aScorerPath); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)] - internal static unsafe extern ErrorCodes DS_DisableExternalScorer(IntPtr** aCtx); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)] - internal static unsafe extern ErrorCodes DS_SetScorerAlphaBeta(IntPtr** aCtx, - float aAlpha, - float aBeta); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl, - CharSet = CharSet.Ansi, SetLastError = true)] - internal static unsafe extern IntPtr DS_SpeechToText(IntPtr** aCtx, - short[] aBuffer, - uint aBufferSize); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl, SetLastError = true)] - internal static unsafe extern IntPtr DS_SpeechToTextWithMetadata(IntPtr** aCtx, - short[] aBuffer, - uint aBufferSize, - uint aNumResults); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)] - internal static unsafe extern void DS_FreeModel(IntPtr** aCtx); - - [DllImport("libdeepspeech.so", 
CallingConvention = CallingConvention.Cdecl)] - internal static unsafe extern ErrorCodes DS_CreateStream(IntPtr** aCtx, - ref IntPtr** retval); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)] - internal static unsafe extern void DS_FreeStream(IntPtr** aSctx); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)] - internal static unsafe extern void DS_FreeMetadata(IntPtr metadata); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)] - internal static unsafe extern void DS_FreeString(IntPtr str); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl, - CharSet = CharSet.Ansi, SetLastError = true)] - internal static unsafe extern void DS_FeedAudioContent(IntPtr** aSctx, - short[] aBuffer, - uint aBufferSize); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)] - internal static unsafe extern IntPtr DS_IntermediateDecode(IntPtr** aSctx); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)] - internal static unsafe extern IntPtr DS_IntermediateDecodeWithMetadata(IntPtr** aSctx, - uint aNumResults); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl, - CharSet = CharSet.Ansi, SetLastError = true)] - internal static unsafe extern IntPtr DS_FinishStream(IntPtr** aSctx); - - [DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)] - internal static unsafe extern IntPtr DS_FinishStreamWithMetadata(IntPtr** aSctx, - uint aNumResults); - #endregion - } -} diff --git a/native_client/dotnet/DeepSpeech.sln b/native_client/dotnet/MozillaVoiceStt.sln similarity index 77% rename from native_client/dotnet/DeepSpeech.sln rename to native_client/dotnet/MozillaVoiceStt.sln index 78afe7db06..0bf2b52e93 100644 --- a/native_client/dotnet/DeepSpeech.sln +++ b/native_client/dotnet/MozillaVoiceStt.sln @@ -2,9 +2,9 @@ Microsoft Visual Studio Solution File, Format Version 12.00 # Visual Studio Version 16 VisualStudioVersion = 16.0.30204.135 MinimumVisualStudioVersion = 10.0.40219.1 -Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "DeepSpeechClient", "DeepSpeechClient\DeepSpeechClient.csproj", "{56DE4091-BBBE-47E4-852D-7268B33B971F}" +Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "MozillaVoiceSttClient", "MozillaVoiceSttClient\MozillaVoiceSttClient.csproj", "{56DE4091-BBBE-47E4-852D-7268B33B971F}" EndProject -Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "DeepSpeechConsole", "DeepSpeechConsole\DeepSpeechConsole.csproj", "{312965E5-C4F6-4D95-BA64-79906B8BC7AC}" +Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MozillaVoiceSttConsole", "MozillaVoiceSttConsole\MozillaVoiceSttConsole.csproj", "{312965E5-C4F6-4D95-BA64-79906B8BC7AC}" EndProject Global GlobalSection(SolutionConfigurationPlatforms) = preSolution diff --git a/native_client/dotnet/MozillaVoiceSttClient/Enums/ErrorCodes.cs b/native_client/dotnet/MozillaVoiceSttClient/Enums/ErrorCodes.cs new file mode 100644 index 0000000000..aa816f8d7e --- /dev/null +++ b/native_client/dotnet/MozillaVoiceSttClient/Enums/ErrorCodes.cs @@ -0,0 +1,29 @@ +namespace MozillaVoiceSttClient.Enums +{ + /// + /// Error codes from the native Mozilla Voice STT binary. 
+ /// + internal enum ErrorCodes + { + STT_ERR_OK = 0x0000, + STT_ERR_NO_MODEL = 0x1000, + STT_ERR_INVALID_ALPHABET = 0x2000, + STT_ERR_INVALID_SHAPE = 0x2001, + STT_ERR_INVALID_SCORER = 0x2002, + STT_ERR_MODEL_INCOMPATIBLE = 0x2003, + STT_ERR_SCORER_NOT_ENABLED = 0x2004, + STT_ERR_SCORER_UNREADABLE = 0x2005, + STT_ERR_SCORER_INVALID_LM = 0x2006, + STT_ERR_SCORER_NO_TRIE = 0x2007, + STT_ERR_SCORER_INVALID_TRIE = 0x2008, + STT_ERR_SCORER_VERSION_MISMATCH = 0x2009, + STT_ERR_FAIL_INIT_MMAP = 0x3000, + STT_ERR_FAIL_INIT_SESS = 0x3001, + STT_ERR_FAIL_INTERPRETER = 0x3002, + STT_ERR_FAIL_RUN_SESS = 0x3003, + STT_ERR_FAIL_CREATE_STREAM = 0x3004, + STT_ERR_FAIL_READ_PROTOBUF = 0x3005, + STT_ERR_FAIL_CREATE_SESS = 0x3006, + STT_ERR_FAIL_CREATE_MODEL = 0x3007, + } +} diff --git a/native_client/dotnet/DeepSpeechClient/Extensions/NativeExtensions.cs b/native_client/dotnet/MozillaVoiceSttClient/Extensions/NativeExtensions.cs similarity index 95% rename from native_client/dotnet/DeepSpeechClient/Extensions/NativeExtensions.cs rename to native_client/dotnet/MozillaVoiceSttClient/Extensions/NativeExtensions.cs index 9325f4b82a..0d2229f934 100644 --- a/native_client/dotnet/DeepSpeechClient/Extensions/NativeExtensions.cs +++ b/native_client/dotnet/MozillaVoiceSttClient/Extensions/NativeExtensions.cs @@ -1,9 +1,9 @@ -using DeepSpeechClient.Structs; +using MozillaVoiceSttClient.Structs; using System; using System.Runtime.InteropServices; using System.Text; -namespace DeepSpeechClient.Extensions +namespace MozillaVoiceSttClient.Extensions { internal static class NativeExtensions { @@ -20,7 +20,7 @@ internal static string PtrToString(this IntPtr intPtr, bool releasePtr = true) byte[] buffer = new byte[len]; Marshal.Copy(intPtr, buffer, 0, buffer.Length); if (releasePtr) - NativeImp.DS_FreeString(intPtr); + NativeImp.STT_FreeString(intPtr); string result = Encoding.UTF8.GetString(buffer); return result; } @@ -86,7 +86,7 @@ internal static Models.Metadata PtrToMetadata(this IntPtr intPtr) metadata.transcripts += sizeOfCandidateTranscript; } - NativeImp.DS_FreeMetadata(intPtr); + NativeImp.STT_FreeMetadata(intPtr); return managedMetadata; } } diff --git a/native_client/dotnet/DeepSpeechClient/Interfaces/IDeepSpeech.cs b/native_client/dotnet/MozillaVoiceSttClient/Interfaces/IMozillaVoiceSttModel.cs similarity index 85% rename from native_client/dotnet/DeepSpeechClient/Interfaces/IDeepSpeech.cs rename to native_client/dotnet/MozillaVoiceSttClient/Interfaces/IMozillaVoiceSttModel.cs index e1ed9cad7e..ede8b5f4bb 100644 --- a/native_client/dotnet/DeepSpeechClient/Interfaces/IDeepSpeech.cs +++ b/native_client/dotnet/MozillaVoiceSttClient/Interfaces/IMozillaVoiceSttModel.cs @@ -1,13 +1,13 @@ -using DeepSpeechClient.Models; +using MozillaVoiceSttClient.Models; using System; using System.IO; -namespace DeepSpeechClient.Interfaces +namespace MozillaVoiceSttClient.Interfaces { /// - /// Client interface of Mozilla's DeepSpeech implementation. + /// Client interface of Mozilla Voice STT. /// - public interface IDeepSpeech : IDisposable + public interface IMozillaVoiceSttModel : IDisposable { /// /// Return version of this library. The returned version is a semantic version @@ -59,7 +59,7 @@ public interface IDeepSpeech : IDisposable unsafe void SetScorerAlphaBeta(float aAlpha, float aBeta); /// - /// Use the DeepSpeech model to perform Speech-To-Text. + /// Use the Mozilla Voice STT model to perform Speech-To-Text. 
/// /// A 16-bit, mono raw audio signal at the appropriate sample rate (matching what the model was trained on). /// The number of samples in the audio signal. @@ -68,7 +68,7 @@ unsafe string SpeechToText(short[] aBuffer, uint aBufferSize); /// - /// Use the DeepSpeech model to perform Speech-To-Text, return results including metadata. + /// Use the Mozilla Voice STT model to perform Speech-To-Text, return results including metadata. /// /// A 16-bit, mono raw audio signal at the appropriate sample rate (matching what the model was trained on). /// The number of samples in the audio signal. @@ -83,26 +83,26 @@ unsafe Metadata SpeechToTextWithMetadata(short[] aBuffer, /// This can be used if you no longer need the result of an ongoing streaming /// inference and don't want to perform a costly decode operation. /// - unsafe void FreeStream(DeepSpeechStream stream); + unsafe void FreeStream(MozillaVoiceSttStream stream); /// /// Creates a new streaming inference state. /// - unsafe DeepSpeechStream CreateStream(); + unsafe MozillaVoiceSttStream CreateStream(); /// /// Feeds audio samples to an ongoing streaming inference. /// /// Instance of the stream to feed the data. /// An array of 16-bit, mono raw audio samples at the appropriate sample rate (matching what the model was trained on). - unsafe void FeedAudioContent(DeepSpeechStream stream, short[] aBuffer, uint aBufferSize); + unsafe void FeedAudioContent(MozillaVoiceSttStream stream, short[] aBuffer, uint aBufferSize); /// /// Computes the intermediate decoding of an ongoing streaming inference. /// /// Instance of the stream to decode. /// The STT intermediate result. - unsafe string IntermediateDecode(DeepSpeechStream stream); + unsafe string IntermediateDecode(MozillaVoiceSttStream stream); /// /// Computes the intermediate decoding of an ongoing streaming inference, including metadata. @@ -110,14 +110,14 @@ unsafe Metadata SpeechToTextWithMetadata(short[] aBuffer, /// Instance of the stream to decode. /// Maximum number of candidate transcripts to return. Returned list might be smaller than this. /// The extended metadata result. - unsafe Metadata IntermediateDecodeWithMetadata(DeepSpeechStream stream, uint aNumResults); + unsafe Metadata IntermediateDecodeWithMetadata(MozillaVoiceSttStream stream, uint aNumResults); /// /// Closes the ongoing streaming inference, returns the STT result over the whole audio signal. /// /// Instance of the stream to finish. /// The STT result. - unsafe string FinishStream(DeepSpeechStream stream); + unsafe string FinishStream(MozillaVoiceSttStream stream); /// /// Closes the ongoing streaming inference, returns the STT result over the whole audio signal, including metadata. @@ -125,6 +125,6 @@ unsafe Metadata SpeechToTextWithMetadata(short[] aBuffer, /// Instance of the stream to finish. /// Maximum number of candidate transcripts to return. Returned list might be smaller than this. /// The extended metadata result. 
- unsafe Metadata FinishStreamWithMetadata(DeepSpeechStream stream, uint aNumResults); + unsafe Metadata FinishStreamWithMetadata(MozillaVoiceSttStream stream, uint aNumResults); } } diff --git a/native_client/dotnet/DeepSpeechClient/Models/CandidateTranscript.cs b/native_client/dotnet/MozillaVoiceSttClient/Models/CandidateTranscript.cs similarity index 92% rename from native_client/dotnet/DeepSpeechClient/Models/CandidateTranscript.cs rename to native_client/dotnet/MozillaVoiceSttClient/Models/CandidateTranscript.cs index cc6b5d2855..abe1aa3025 100644 --- a/native_client/dotnet/DeepSpeechClient/Models/CandidateTranscript.cs +++ b/native_client/dotnet/MozillaVoiceSttClient/Models/CandidateTranscript.cs @@ -1,4 +1,4 @@ -namespace DeepSpeechClient.Models +namespace MozillaVoiceSttClient.Models { /// /// Stores the entire CTC output as an array of character metadata objects. diff --git a/native_client/dotnet/DeepSpeechClient/Models/DeepSpeechStream.cs b/native_client/dotnet/MozillaVoiceSttClient/Models/DeepSpeechStream.cs similarity index 80% rename from native_client/dotnet/DeepSpeechClient/Models/DeepSpeechStream.cs rename to native_client/dotnet/MozillaVoiceSttClient/Models/DeepSpeechStream.cs index e4605f5ed8..0223a6bd2d 100644 --- a/native_client/dotnet/DeepSpeechClient/Models/DeepSpeechStream.cs +++ b/native_client/dotnet/MozillaVoiceSttClient/Models/DeepSpeechStream.cs @@ -1,19 +1,19 @@ using System; -namespace DeepSpeechClient.Models +namespace MozillaVoiceSttClient.Models { /// /// Wrapper of the pointer used for the decoding stream. /// - public class DeepSpeechStream : IDisposable + public class MozillaVoiceSttStream : IDisposable { private unsafe IntPtr** _streamingStatePp; /// - /// Initializes a new instance of . + /// Initializes a new instance of . /// /// Native pointer of the native stream. - public unsafe DeepSpeechStream(IntPtr** streamingStatePP) + public unsafe MozillaVoiceSttStream(IntPtr** streamingStatePP) { _streamingStatePp = streamingStatePP; } diff --git a/native_client/dotnet/DeepSpeechClient/Models/Metadata.cs b/native_client/dotnet/MozillaVoiceSttClient/Models/Metadata.cs similarity index 88% rename from native_client/dotnet/DeepSpeechClient/Models/Metadata.cs rename to native_client/dotnet/MozillaVoiceSttClient/Models/Metadata.cs index fb6c613dfd..ea0666bf17 100644 --- a/native_client/dotnet/DeepSpeechClient/Models/Metadata.cs +++ b/native_client/dotnet/MozillaVoiceSttClient/Models/Metadata.cs @@ -1,4 +1,4 @@ -namespace DeepSpeechClient.Models +namespace MozillaVoiceSttClient.Models { /// /// Stores the entire CTC output as an array of character metadata objects. diff --git a/native_client/dotnet/DeepSpeechClient/Models/TokenMetadata.cs b/native_client/dotnet/MozillaVoiceSttClient/Models/TokenMetadata.cs similarity index 92% rename from native_client/dotnet/DeepSpeechClient/Models/TokenMetadata.cs rename to native_client/dotnet/MozillaVoiceSttClient/Models/TokenMetadata.cs index 5f2dea562f..86e8bdda1d 100644 --- a/native_client/dotnet/DeepSpeechClient/Models/TokenMetadata.cs +++ b/native_client/dotnet/MozillaVoiceSttClient/Models/TokenMetadata.cs @@ -1,4 +1,4 @@ -namespace DeepSpeechClient.Models +namespace MozillaVoiceSttClient.Models { /// /// Stores each individual character, along with its timing information. 
diff --git a/native_client/dotnet/DeepSpeechClient/DeepSpeech.cs b/native_client/dotnet/MozillaVoiceSttClient/MozillaVoiceStt.cs similarity index 72% rename from native_client/dotnet/DeepSpeechClient/DeepSpeech.cs rename to native_client/dotnet/MozillaVoiceSttClient/MozillaVoiceStt.cs index 08a3808b39..a331e393a5 100644 --- a/native_client/dotnet/DeepSpeechClient/DeepSpeech.cs +++ b/native_client/dotnet/MozillaVoiceSttClient/MozillaVoiceStt.cs @@ -1,34 +1,34 @@ -using DeepSpeechClient.Interfaces; -using DeepSpeechClient.Extensions; +using MozillaVoiceSttClient.Interfaces; +using MozillaVoiceSttClient.Extensions; using System; using System.IO; -using DeepSpeechClient.Enums; -using DeepSpeechClient.Models; +using MozillaVoiceSttClient.Enums; +using MozillaVoiceSttClient.Models; -namespace DeepSpeechClient +namespace MozillaVoiceSttClient { /// - /// Concrete implementation of . + /// Concrete implementation of . /// - public class DeepSpeech : IDeepSpeech + public class MozillaVoiceSttModel : IMozillaVoiceSttModel { private unsafe IntPtr** _modelStatePP; /// - /// Initializes a new instance of class and creates a new acoustic model. + /// Initializes a new instance of class and creates a new acoustic model. /// /// The path to the frozen model graph. /// Thrown when the native binary failed to create the model. - public DeepSpeech(string aModelPath) + public MozillaVoiceSttModel(string aModelPath) { CreateModel(aModelPath); } - #region IDeepSpeech + #region IMozillaVoiceSttModel /// - /// Create an object providing an interface to a trained DeepSpeech model. + /// Create an object providing an interface to a trained Mozilla Voice STT model. /// /// The path to the frozen model graph. /// Thrown when the native binary failed to create the model. @@ -48,7 +48,7 @@ private unsafe void CreateModel(string aModelPath) { throw new FileNotFoundException(exceptionMessage); } - var resultCode = NativeImp.DS_CreateModel(aModelPath, + var resultCode = NativeImp.STT_CreateModel(aModelPath, ref _modelStatePP); EvaluateResultCode(resultCode); } @@ -60,7 +60,7 @@ private unsafe void CreateModel(string aModelPath) /// Beam width value used by the model. public unsafe uint GetModelBeamWidth() { - return NativeImp.DS_GetModelBeamWidth(_modelStatePP); + return NativeImp.STT_GetModelBeamWidth(_modelStatePP); } /// @@ -70,7 +70,7 @@ public unsafe uint GetModelBeamWidth() /// Thrown on failure. public unsafe void SetModelBeamWidth(uint aBeamWidth) { - var resultCode = NativeImp.DS_SetModelBeamWidth(_modelStatePP, aBeamWidth); + var resultCode = NativeImp.STT_SetModelBeamWidth(_modelStatePP, aBeamWidth); EvaluateResultCode(resultCode); } @@ -80,7 +80,7 @@ public unsafe void SetModelBeamWidth(uint aBeamWidth) /// Sample rate. public unsafe int GetModelSampleRate() { - return NativeImp.DS_GetModelSampleRate(_modelStatePP); + return NativeImp.STT_GetModelSampleRate(_modelStatePP); } /// @@ -89,9 +89,9 @@ public unsafe int GetModelSampleRate() /// Native result code. 
private void EvaluateResultCode(ErrorCodes resultCode) { - if (resultCode != ErrorCodes.DS_ERR_OK) + if (resultCode != ErrorCodes.STT_ERR_OK) { - throw new ArgumentException(NativeImp.DS_ErrorCodeToErrorMessage((int)resultCode).PtrToString()); + throw new ArgumentException(NativeImp.STT_ErrorCodeToErrorMessage((int)resultCode).PtrToString()); } } @@ -100,7 +100,7 @@ private void EvaluateResultCode(ErrorCodes resultCode) /// public unsafe void Dispose() { - NativeImp.DS_FreeModel(_modelStatePP); + NativeImp.STT_FreeModel(_modelStatePP); } /// @@ -120,7 +120,7 @@ public unsafe void EnableExternalScorer(string aScorerPath) throw new FileNotFoundException($"Cannot find the scorer file: {aScorerPath}"); } - var resultCode = NativeImp.DS_EnableExternalScorer(_modelStatePP, aScorerPath); + var resultCode = NativeImp.STT_EnableExternalScorer(_modelStatePP, aScorerPath); EvaluateResultCode(resultCode); } @@ -130,7 +130,7 @@ public unsafe void EnableExternalScorer(string aScorerPath) /// Thrown when an external scorer is not enabled. public unsafe void DisableExternalScorer() { - var resultCode = NativeImp.DS_DisableExternalScorer(_modelStatePP); + var resultCode = NativeImp.STT_DisableExternalScorer(_modelStatePP); EvaluateResultCode(resultCode); } @@ -142,7 +142,7 @@ public unsafe void DisableExternalScorer() /// Thrown when an external scorer is not enabled. public unsafe void SetScorerAlphaBeta(float aAlpha, float aBeta) { - var resultCode = NativeImp.DS_SetScorerAlphaBeta(_modelStatePP, + var resultCode = NativeImp.STT_SetScorerAlphaBeta(_modelStatePP, aAlpha, aBeta); EvaluateResultCode(resultCode); @@ -153,9 +153,9 @@ public unsafe void SetScorerAlphaBeta(float aAlpha, float aBeta) /// /// Instance of the stream to feed the data. /// An array of 16-bit, mono raw audio samples at the appropriate sample rate (matching what the model was trained on). - public unsafe void FeedAudioContent(DeepSpeechStream stream, short[] aBuffer, uint aBufferSize) + public unsafe void FeedAudioContent(MozillaVoiceSttStream stream, short[] aBuffer, uint aBufferSize) { - NativeImp.DS_FeedAudioContent(stream.GetNativePointer(), aBuffer, aBufferSize); + NativeImp.STT_FeedAudioContent(stream.GetNativePointer(), aBuffer, aBufferSize); } /// @@ -163,9 +163,9 @@ public unsafe void FeedAudioContent(DeepSpeechStream stream, short[] aBuffer, ui /// /// Instance of the stream to finish. /// The STT result. - public unsafe string FinishStream(DeepSpeechStream stream) + public unsafe string FinishStream(MozillaVoiceSttStream stream) { - return NativeImp.DS_FinishStream(stream.GetNativePointer()).PtrToString(); + return NativeImp.STT_FinishStream(stream.GetNativePointer()).PtrToString(); } /// @@ -174,9 +174,9 @@ public unsafe string FinishStream(DeepSpeechStream stream) /// Instance of the stream to finish. /// Maximum number of candidate transcripts to return. Returned list might be smaller than this. /// The extended metadata result. - public unsafe Metadata FinishStreamWithMetadata(DeepSpeechStream stream, uint aNumResults) + public unsafe Metadata FinishStreamWithMetadata(MozillaVoiceSttStream stream, uint aNumResults) { - return NativeImp.DS_FinishStreamWithMetadata(stream.GetNativePointer(), aNumResults).PtrToMetadata(); + return NativeImp.STT_FinishStreamWithMetadata(stream.GetNativePointer(), aNumResults).PtrToMetadata(); } /// @@ -184,9 +184,9 @@ public unsafe Metadata FinishStreamWithMetadata(DeepSpeechStream stream, uint aN /// /// Instance of the stream to decode. /// The STT intermediate result. 
- public unsafe string IntermediateDecode(DeepSpeechStream stream) + public unsafe string IntermediateDecode(MozillaVoiceSttStream stream) { - return NativeImp.DS_IntermediateDecode(stream.GetNativePointer()).PtrToString(); + return NativeImp.STT_IntermediateDecode(stream.GetNativePointer()).PtrToString(); } /// @@ -195,9 +195,9 @@ public unsafe string IntermediateDecode(DeepSpeechStream stream) /// Instance of the stream to decode. /// Maximum number of candidate transcripts to return. Returned list might be smaller than this. /// The STT intermediate result. - public unsafe Metadata IntermediateDecodeWithMetadata(DeepSpeechStream stream, uint aNumResults) + public unsafe Metadata IntermediateDecodeWithMetadata(MozillaVoiceSttStream stream, uint aNumResults) { - return NativeImp.DS_IntermediateDecodeWithMetadata(stream.GetNativePointer(), aNumResults).PtrToMetadata(); + return NativeImp.STT_IntermediateDecodeWithMetadata(stream.GetNativePointer(), aNumResults).PtrToMetadata(); } /// @@ -206,18 +206,18 @@ public unsafe Metadata IntermediateDecodeWithMetadata(DeepSpeechStream stream, u /// public unsafe string Version() { - return NativeImp.DS_Version().PtrToString(); + return NativeImp.STT_Version().PtrToString(); } /// /// Creates a new streaming inference state. /// - public unsafe DeepSpeechStream CreateStream() + public unsafe MozillaVoiceSttStream CreateStream() { IntPtr** streamingStatePointer = null; - var resultCode = NativeImp.DS_CreateStream(_modelStatePP, ref streamingStatePointer); + var resultCode = NativeImp.STT_CreateStream(_modelStatePP, ref streamingStatePointer); EvaluateResultCode(resultCode); - return new DeepSpeechStream(streamingStatePointer); + return new MozillaVoiceSttStream(streamingStatePointer); } /// @@ -225,25 +225,25 @@ public unsafe DeepSpeechStream CreateStream() /// This can be used if you no longer need the result of an ongoing streaming /// inference and don't want to perform a costly decode operation. /// - public unsafe void FreeStream(DeepSpeechStream stream) + public unsafe void FreeStream(MozillaVoiceSttStream stream) { - NativeImp.DS_FreeStream(stream.GetNativePointer()); + NativeImp.STT_FreeStream(stream.GetNativePointer()); stream.Dispose(); } /// - /// Use the DeepSpeech model to perform Speech-To-Text. + /// Use the Mozilla Voice STT model to perform Speech-To-Text. /// /// A 16-bit, mono raw audio signal at the appropriate sample rate (matching what the model was trained on). /// The number of samples in the audio signal. /// The STT result. Returns NULL on error. public unsafe string SpeechToText(short[] aBuffer, uint aBufferSize) { - return NativeImp.DS_SpeechToText(_modelStatePP, aBuffer, aBufferSize).PtrToString(); + return NativeImp.STT_SpeechToText(_modelStatePP, aBuffer, aBufferSize).PtrToString(); } /// - /// Use the DeepSpeech model to perform Speech-To-Text, return results including metadata. + /// Use the Mozilla Voice STT model to perform Speech-To-Text, return results including metadata. /// /// A 16-bit, mono raw audio signal at the appropriate sample rate (matching what the model was trained on). /// The number of samples in the audio signal. @@ -251,7 +251,7 @@ public unsafe string SpeechToText(short[] aBuffer, uint aBufferSize) /// The extended metadata. Returns NULL on error. 
public unsafe Metadata SpeechToTextWithMetadata(short[] aBuffer, uint aBufferSize, uint aNumResults) { - return NativeImp.DS_SpeechToTextWithMetadata(_modelStatePP, aBuffer, aBufferSize, aNumResults).PtrToMetadata(); + return NativeImp.STT_SpeechToTextWithMetadata(_modelStatePP, aBuffer, aBufferSize, aNumResults).PtrToMetadata(); } #endregion diff --git a/native_client/dotnet/DeepSpeechClient/DeepSpeechClient.csproj b/native_client/dotnet/MozillaVoiceSttClient/MozillaVoiceSttClient.csproj similarity index 100% rename from native_client/dotnet/DeepSpeechClient/DeepSpeechClient.csproj rename to native_client/dotnet/MozillaVoiceSttClient/MozillaVoiceSttClient.csproj diff --git a/native_client/dotnet/MozillaVoiceSttClient/NativeImp.cs b/native_client/dotnet/MozillaVoiceSttClient/NativeImp.cs new file mode 100644 index 0000000000..daad79acb6 --- /dev/null +++ b/native_client/dotnet/MozillaVoiceSttClient/NativeImp.cs @@ -0,0 +1,102 @@ +using MozillaVoiceSttClient.Enums; + +using System; +using System.Runtime.InteropServices; + +namespace MozillaVoiceSttClient +{ + /// + /// Wrapper for the native implementation of "libmozilla_voice_stt.so" + /// + internal static class NativeImp + { + #region Native Implementation + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl, + CharSet = CharSet.Ansi, SetLastError = true)] + internal static extern IntPtr STT_Version(); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)] + internal unsafe static extern ErrorCodes STT_CreateModel(string aModelPath, + ref IntPtr** pint); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)] + internal unsafe static extern IntPtr STT_ErrorCodeToErrorMessage(int aErrorCode); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)] + internal unsafe static extern uint STT_GetModelBeamWidth(IntPtr** aCtx); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)] + internal unsafe static extern ErrorCodes STT_SetModelBeamWidth(IntPtr** aCtx, + uint aBeamWidth); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)] + internal unsafe static extern ErrorCodes STT_CreateModel(string aModelPath, + uint aBeamWidth, + ref IntPtr** pint); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)] + internal unsafe static extern int STT_GetModelSampleRate(IntPtr** aCtx); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)] + internal static unsafe extern ErrorCodes STT_EnableExternalScorer(IntPtr** aCtx, + string aScorerPath); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)] + internal static unsafe extern ErrorCodes STT_DisableExternalScorer(IntPtr** aCtx); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)] + internal static unsafe extern ErrorCodes STT_SetScorerAlphaBeta(IntPtr** aCtx, + float aAlpha, + float aBeta); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl, + CharSet = CharSet.Ansi, SetLastError = true)] + internal static unsafe extern IntPtr STT_SpeechToText(IntPtr** aCtx, + short[] aBuffer, + uint aBufferSize); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl, SetLastError = true)] + internal static unsafe extern IntPtr STT_SpeechToTextWithMetadata(IntPtr** aCtx, + short[] aBuffer, + uint aBufferSize, + uint aNumResults); 
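+
+        // Note: entry points that return strings (STT_SpeechToText,
+        // STT_IntermediateDecode, STT_FinishStream, ...) hand back native
+        // buffers; the managed wrappers convert them with PtrToString(), and
+        // STT_FreeString (declared below) is the native call that releases them.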
+ + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)] + internal static unsafe extern void STT_FreeModel(IntPtr** aCtx); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)] + internal static unsafe extern ErrorCodes STT_CreateStream(IntPtr** aCtx, + ref IntPtr** retval); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)] + internal static unsafe extern void STT_FreeStream(IntPtr** aSctx); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)] + internal static unsafe extern void STT_FreeMetadata(IntPtr metadata); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)] + internal static unsafe extern void STT_FreeString(IntPtr str); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl, + CharSet = CharSet.Ansi, SetLastError = true)] + internal static unsafe extern void STT_FeedAudioContent(IntPtr** aSctx, + short[] aBuffer, + uint aBufferSize); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)] + internal static unsafe extern IntPtr STT_IntermediateDecode(IntPtr** aSctx); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)] + internal static unsafe extern IntPtr STT_IntermediateDecodeWithMetadata(IntPtr** aSctx, + uint aNumResults); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl, + CharSet = CharSet.Ansi, SetLastError = true)] + internal static unsafe extern IntPtr STT_FinishStream(IntPtr** aSctx); + + [DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)] + internal static unsafe extern IntPtr STT_FinishStreamWithMetadata(IntPtr** aSctx, + uint aNumResults); + #endregion + } +} diff --git a/native_client/dotnet/DeepSpeechClient/Structs/CandidateTranscript.cs b/native_client/dotnet/MozillaVoiceSttClient/Structs/CandidateTranscript.cs similarity index 93% rename from native_client/dotnet/DeepSpeechClient/Structs/CandidateTranscript.cs rename to native_client/dotnet/MozillaVoiceSttClient/Structs/CandidateTranscript.cs index 54581f6f84..9029d0f5cc 100644 --- a/native_client/dotnet/DeepSpeechClient/Structs/CandidateTranscript.cs +++ b/native_client/dotnet/MozillaVoiceSttClient/Structs/CandidateTranscript.cs @@ -1,7 +1,7 @@ using System; using System.Runtime.InteropServices; -namespace DeepSpeechClient.Structs +namespace MozillaVoiceSttClient.Structs { [StructLayout(LayoutKind.Sequential)] internal unsafe struct CandidateTranscript diff --git a/native_client/dotnet/DeepSpeechClient/Structs/Metadata.cs b/native_client/dotnet/MozillaVoiceSttClient/Structs/Metadata.cs similarity index 91% rename from native_client/dotnet/DeepSpeechClient/Structs/Metadata.cs rename to native_client/dotnet/MozillaVoiceSttClient/Structs/Metadata.cs index 0a9beddce5..a354759abc 100644 --- a/native_client/dotnet/DeepSpeechClient/Structs/Metadata.cs +++ b/native_client/dotnet/MozillaVoiceSttClient/Structs/Metadata.cs @@ -1,7 +1,7 @@ using System; using System.Runtime.InteropServices; -namespace DeepSpeechClient.Structs +namespace MozillaVoiceSttClient.Structs { [StructLayout(LayoutKind.Sequential)] internal unsafe struct Metadata diff --git a/native_client/dotnet/DeepSpeechClient/Structs/TokenMetadata.cs b/native_client/dotnet/MozillaVoiceSttClient/Structs/TokenMetadata.cs similarity index 93% rename from native_client/dotnet/DeepSpeechClient/Structs/TokenMetadata.cs rename to 
native_client/dotnet/MozillaVoiceSttClient/Structs/TokenMetadata.cs index 1c660c71cc..1f54e5d48e 100644 --- a/native_client/dotnet/DeepSpeechClient/Structs/TokenMetadata.cs +++ b/native_client/dotnet/MozillaVoiceSttClient/Structs/TokenMetadata.cs @@ -1,7 +1,7 @@ using System; using System.Runtime.InteropServices; -namespace DeepSpeechClient.Structs +namespace MozillaVoiceSttClient.Structs { [StructLayout(LayoutKind.Sequential)] internal unsafe struct TokenMetadata diff --git a/native_client/dotnet/DeepSpeechConsole/App.config b/native_client/dotnet/MozillaVoiceSttConsole/App.config similarity index 100% rename from native_client/dotnet/DeepSpeechConsole/App.config rename to native_client/dotnet/MozillaVoiceSttConsole/App.config diff --git a/native_client/dotnet/DeepSpeechConsole/DeepSpeechConsole.csproj b/native_client/dotnet/MozillaVoiceSttConsole/MozillaVoiceSttConsole.csproj similarity index 92% rename from native_client/dotnet/DeepSpeechConsole/DeepSpeechConsole.csproj rename to native_client/dotnet/MozillaVoiceSttConsole/MozillaVoiceSttConsole.csproj index a05fca6141..13a8b3551e 100644 --- a/native_client/dotnet/DeepSpeechConsole/DeepSpeechConsole.csproj +++ b/native_client/dotnet/MozillaVoiceSttConsole/MozillaVoiceSttConsole.csproj @@ -6,8 +6,8 @@ AnyCPU {312965E5-C4F6-4D95-BA64-79906B8BC7AC} Exe - DeepSpeechConsole - DeepSpeechConsole + MozillaVoiceSttConsole + MozillaVoiceSttConsole v4.6.2 512 true @@ -56,9 +56,9 @@ - + {56DE4091-BBBE-47E4-852D-7268B33B971F} - DeepSpeechClient + MozillaVoiceSttClient diff --git a/native_client/dotnet/DeepSpeechConsole/Program.cs b/native_client/dotnet/MozillaVoiceSttConsole/Program.cs similarity index 94% rename from native_client/dotnet/DeepSpeechConsole/Program.cs rename to native_client/dotnet/MozillaVoiceSttConsole/Program.cs index 68f3fc54b9..f94f5de16e 100644 --- a/native_client/dotnet/DeepSpeechConsole/Program.cs +++ b/native_client/dotnet/MozillaVoiceSttConsole/Program.cs @@ -1,6 +1,6 @@ -using DeepSpeechClient; -using DeepSpeechClient.Interfaces; -using DeepSpeechClient.Models; +using MozillaVoiceSttClient; +using MozillaVoiceSttClient.Interfaces; +using MozillaVoiceSttClient.Models; using NAudio.Wave; using System; using System.Collections.Generic; @@ -52,7 +52,7 @@ static void Main(string[] args) Console.WriteLine("Loading model..."); stopwatch.Start(); // sphinx-doc: csharp_ref_model_start - using (IDeepSpeech sttClient = new DeepSpeech(model ?? "output_graph.pbmm")) + using (IMozillaVoiceSttModel sttClient = new MozillaVoiceSttModel(model ?? "output_graph.pbmm")) { // sphinx-doc: csharp_ref_model_stop stopwatch.Stop(); diff --git a/native_client/dotnet/DeepSpeechConsole/Properties/AssemblyInfo.cs b/native_client/dotnet/MozillaVoiceSttConsole/Properties/AssemblyInfo.cs similarity index 96% rename from native_client/dotnet/DeepSpeechConsole/Properties/AssemblyInfo.cs rename to native_client/dotnet/MozillaVoiceSttConsole/Properties/AssemblyInfo.cs index 845851a185..f3257c6409 100644 --- a/native_client/dotnet/DeepSpeechConsole/Properties/AssemblyInfo.cs +++ b/native_client/dotnet/MozillaVoiceSttConsole/Properties/AssemblyInfo.cs @@ -5,7 +5,7 @@ // General Information about an assembly is controlled through the following // set of attributes. Change these attribute values to modify the information // associated with an assembly. 
-[assembly: AssemblyTitle("DeepSpeechConsole")] +[assembly: AssemblyTitle("MozillaVoiceSttConsole")] [assembly: AssemblyDescription("")] [assembly: AssemblyConfiguration("")] [assembly: AssemblyCompany("")] diff --git a/native_client/dotnet/DeepSpeechConsole/arctic_a0024.wav b/native_client/dotnet/MozillaVoiceSttConsole/arctic_a0024.wav similarity index 100% rename from native_client/dotnet/DeepSpeechConsole/arctic_a0024.wav rename to native_client/dotnet/MozillaVoiceSttConsole/arctic_a0024.wav diff --git a/native_client/dotnet/DeepSpeechConsole/packages.config b/native_client/dotnet/MozillaVoiceSttConsole/packages.config similarity index 100% rename from native_client/dotnet/DeepSpeechConsole/packages.config rename to native_client/dotnet/MozillaVoiceSttConsole/packages.config diff --git a/native_client/dotnet/DeepSpeechWPF/.gitignore b/native_client/dotnet/MozillaVoiceSttWPF/.gitignore similarity index 100% rename from native_client/dotnet/DeepSpeechWPF/.gitignore rename to native_client/dotnet/MozillaVoiceSttWPF/.gitignore diff --git a/native_client/dotnet/DeepSpeechWPF/App.config b/native_client/dotnet/MozillaVoiceSttWPF/App.config similarity index 100% rename from native_client/dotnet/DeepSpeechWPF/App.config rename to native_client/dotnet/MozillaVoiceSttWPF/App.config diff --git a/native_client/dotnet/DeepSpeechWPF/App.xaml b/native_client/dotnet/MozillaVoiceSttWPF/App.xaml similarity index 71% rename from native_client/dotnet/DeepSpeechWPF/App.xaml rename to native_client/dotnet/MozillaVoiceSttWPF/App.xaml index 16ebb0d435..ca6a0f1369 100644 --- a/native_client/dotnet/DeepSpeechWPF/App.xaml +++ b/native_client/dotnet/MozillaVoiceSttWPF/App.xaml @@ -1,8 +1,8 @@  diff --git a/native_client/dotnet/DeepSpeechWPF/App.xaml.cs b/native_client/dotnet/MozillaVoiceSttWPF/App.xaml.cs similarity index 58% rename from native_client/dotnet/DeepSpeechWPF/App.xaml.cs rename to native_client/dotnet/MozillaVoiceSttWPF/App.xaml.cs index d4b87d6e60..6404f50b99 100644 --- a/native_client/dotnet/DeepSpeechWPF/App.xaml.cs +++ b/native_client/dotnet/MozillaVoiceSttWPF/App.xaml.cs @@ -1,10 +1,10 @@ using CommonServiceLocator; -using DeepSpeech.WPF.ViewModels; -using DeepSpeechClient.Interfaces; +using MozillaVoiceStt.WPF.ViewModels; +using MozillaVoiceSttClient.Interfaces; using GalaSoft.MvvmLight.Ioc; using System.Windows; -namespace DeepSpeechWPF +namespace MozillaVoiceSttWPF { /// /// Interaction logic for App.xaml @@ -18,11 +18,11 @@ protected override void OnStartup(StartupEventArgs e) try { - //Register instance of DeepSpeech - DeepSpeechClient.DeepSpeech deepSpeechClient = - new DeepSpeechClient.DeepSpeech("deepspeech-0.8.0-models.pbmm"); + //Register instance of Mozilla Voice STT + MozillaVoiceSttClient.MozillaVoiceSttModel client = + new MozillaVoiceSttClient.MozillaVoiceSttModel("deepspeech-0.8.0-models.pbmm"); - SimpleIoc.Default.Register(() => deepSpeechClient); + SimpleIoc.Default.Register(() => client); SimpleIoc.Default.Register(); } catch (System.Exception ex) @@ -35,8 +35,8 @@ protected override void OnStartup(StartupEventArgs e) protected override void OnExit(ExitEventArgs e) { base.OnExit(e); - //Dispose instance of DeepSpeech - ServiceLocator.Current.GetInstance()?.Dispose(); + //Dispose instance of Mozilla Voice STT + ServiceLocator.Current.GetInstance()?.Dispose(); } } } diff --git a/native_client/dotnet/DeepSpeechWPF/MainWindow.xaml b/native_client/dotnet/MozillaVoiceSttWPF/MainWindow.xaml similarity index 97% rename from native_client/dotnet/DeepSpeechWPF/MainWindow.xaml rename to 
native_client/dotnet/MozillaVoiceSttWPF/MainWindow.xaml index 4fbe5e72e1..5894fae3bc 100644 --- a/native_client/dotnet/DeepSpeechWPF/MainWindow.xaml +++ b/native_client/dotnet/MozillaVoiceSttWPF/MainWindow.xaml @@ -1,10 +1,10 @@  /// Interaction logic for MainWindow.xaml diff --git a/native_client/dotnet/DeepSpeechWPF/DeepSpeech.WPF.csproj b/native_client/dotnet/MozillaVoiceSttWPF/MozillaVoiceStt.WPF.csproj similarity index 94% rename from native_client/dotnet/DeepSpeechWPF/DeepSpeech.WPF.csproj rename to native_client/dotnet/MozillaVoiceSttWPF/MozillaVoiceStt.WPF.csproj index 7f46a31e1f..d14a02b707 100644 --- a/native_client/dotnet/DeepSpeechWPF/DeepSpeech.WPF.csproj +++ b/native_client/dotnet/MozillaVoiceSttWPF/MozillaVoiceStt.WPF.csproj @@ -6,8 +6,8 @@ AnyCPU {54BFD766-4305-4F4C-BA59-AF45505DF3C1} WinExe - DeepSpeech.WPF - DeepSpeech.WPF + MozillaVoiceStt.WPF + MozillaVoiceStt.WPF v4.6.2 512 {60dc8134-eba5-43b8-bcc9-bb4bc16c2548};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC} @@ -131,9 +131,9 @@ - + {56de4091-bbbe-47e4-852d-7268b33b971f} - DeepSpeechClient + MozillaVoiceSttClient diff --git a/native_client/dotnet/DeepSpeechWPF/DeepSpeech.WPF.sln b/native_client/dotnet/MozillaVoiceSttWPF/MozillaVoiceStt.WPF.sln similarity index 79% rename from native_client/dotnet/DeepSpeechWPF/DeepSpeech.WPF.sln rename to native_client/dotnet/MozillaVoiceSttWPF/MozillaVoiceStt.WPF.sln index cd29025ea3..003c6d8e6b 100644 --- a/native_client/dotnet/DeepSpeechWPF/DeepSpeech.WPF.sln +++ b/native_client/dotnet/MozillaVoiceSttWPF/MozillaVoiceStt.WPF.sln @@ -3,9 +3,9 @@ Microsoft Visual Studio Solution File, Format Version 12.00 # Visual Studio 15 VisualStudioVersion = 15.0.28307.421 MinimumVisualStudioVersion = 10.0.40219.1 -Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "DeepSpeech.WPF", "DeepSpeech.WPF.csproj", "{54BFD766-4305-4F4C-BA59-AF45505DF3C1}" +Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MozillaVoiceStt.WPF", "MozillaVoiceStt.WPF.csproj", "{54BFD766-4305-4F4C-BA59-AF45505DF3C1}" EndProject -Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "DeepSpeechClient", "..\DeepSpeechClient\DeepSpeechClient.csproj", "{56DE4091-BBBE-47E4-852D-7268B33B971F}" +Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MozillaVoiceSttClient", "..\MozillaVoiceSttClient\MozillaVoiceSttClient.csproj", "{56DE4091-BBBE-47E4-852D-7268B33B971F}" EndProject Global GlobalSection(SolutionConfigurationPlatforms) = preSolution diff --git a/native_client/dotnet/DeepSpeechWPF/Properties/AssemblyInfo.cs b/native_client/dotnet/MozillaVoiceSttWPF/Properties/AssemblyInfo.cs similarity index 95% rename from native_client/dotnet/DeepSpeechWPF/Properties/AssemblyInfo.cs rename to native_client/dotnet/MozillaVoiceSttWPF/Properties/AssemblyInfo.cs index f9ae7d76fe..034ac3d6b9 100644 --- a/native_client/dotnet/DeepSpeechWPF/Properties/AssemblyInfo.cs +++ b/native_client/dotnet/MozillaVoiceSttWPF/Properties/AssemblyInfo.cs @@ -7,11 +7,11 @@ // General Information about an assembly is controlled through the following // set of attributes. Change these attribute values to modify the information // associated with an assembly. 
-[assembly: AssemblyTitle("DeepSpeech.WPF")] +[assembly: AssemblyTitle("MozillaVoiceStt.WPF")] [assembly: AssemblyDescription("")] [assembly: AssemblyConfiguration("")] [assembly: AssemblyCompany("")] -[assembly: AssemblyProduct("DeepSpeech.WPF.SingleFiles")] +[assembly: AssemblyProduct("MozillaVoiceStt.WPF.SingleFiles")] [assembly: AssemblyCopyright("Copyright © 2018")] [assembly: AssemblyTrademark("")] [assembly: AssemblyCulture("")] diff --git a/native_client/dotnet/DeepSpeechWPF/Properties/Resources.Designer.cs b/native_client/dotnet/MozillaVoiceSttWPF/Properties/Resources.Designer.cs similarity index 94% rename from native_client/dotnet/DeepSpeechWPF/Properties/Resources.Designer.cs rename to native_client/dotnet/MozillaVoiceSttWPF/Properties/Resources.Designer.cs index 2da2b4b275..b470f9ae3f 100644 --- a/native_client/dotnet/DeepSpeechWPF/Properties/Resources.Designer.cs +++ b/native_client/dotnet/MozillaVoiceSttWPF/Properties/Resources.Designer.cs @@ -8,7 +8,7 @@ // //------------------------------------------------------------------------------ -namespace DeepSpeech.WPF.Properties { +namespace MozillaVoiceStt.WPF.Properties { using System; @@ -39,7 +39,7 @@ internal Resources() { internal static global::System.Resources.ResourceManager ResourceManager { get { if (object.ReferenceEquals(resourceMan, null)) { - global::System.Resources.ResourceManager temp = new global::System.Resources.ResourceManager("DeepSpeech.WPF.Properties.Resources", typeof(Resources).Assembly); + global::System.Resources.ResourceManager temp = new global::System.Resources.ResourceManager("MozillaVoiceStt.WPF.Properties.Resources", typeof(Resources).Assembly); resourceMan = temp; } return resourceMan; diff --git a/native_client/dotnet/DeepSpeechWPF/Properties/Resources.resx b/native_client/dotnet/MozillaVoiceSttWPF/Properties/Resources.resx similarity index 100% rename from native_client/dotnet/DeepSpeechWPF/Properties/Resources.resx rename to native_client/dotnet/MozillaVoiceSttWPF/Properties/Resources.resx diff --git a/native_client/dotnet/DeepSpeechWPF/Properties/Settings.Designer.cs b/native_client/dotnet/MozillaVoiceSttWPF/Properties/Settings.Designer.cs similarity index 96% rename from native_client/dotnet/DeepSpeechWPF/Properties/Settings.Designer.cs rename to native_client/dotnet/MozillaVoiceSttWPF/Properties/Settings.Designer.cs index 0f464bc46a..a72186946a 100644 --- a/native_client/dotnet/DeepSpeechWPF/Properties/Settings.Designer.cs +++ b/native_client/dotnet/MozillaVoiceSttWPF/Properties/Settings.Designer.cs @@ -8,7 +8,7 @@ // //------------------------------------------------------------------------------ -namespace DeepSpeech.WPF.Properties { +namespace MozillaVoiceStt.WPF.Properties { [global::System.Runtime.CompilerServices.CompilerGeneratedAttribute()] diff --git a/native_client/dotnet/DeepSpeechWPF/Properties/Settings.settings b/native_client/dotnet/MozillaVoiceSttWPF/Properties/Settings.settings similarity index 100% rename from native_client/dotnet/DeepSpeechWPF/Properties/Settings.settings rename to native_client/dotnet/MozillaVoiceSttWPF/Properties/Settings.settings diff --git a/native_client/dotnet/DeepSpeechWPF/ViewModels/BindableBase.cs b/native_client/dotnet/MozillaVoiceSttWPF/ViewModels/BindableBase.cs similarity index 98% rename from native_client/dotnet/DeepSpeechWPF/ViewModels/BindableBase.cs rename to native_client/dotnet/MozillaVoiceSttWPF/ViewModels/BindableBase.cs index 909327ee02..92fd2f57ac 100644 --- a/native_client/dotnet/DeepSpeechWPF/ViewModels/BindableBase.cs +++ 
b/native_client/dotnet/MozillaVoiceSttWPF/ViewModels/BindableBase.cs @@ -3,7 +3,7 @@ using System.ComponentModel; using System.Runtime.CompilerServices; -namespace DeepSpeech.WPF.ViewModels +namespace MozillaVoiceStt.WPF.ViewModels { /// /// Implementation of to simplify models. diff --git a/native_client/dotnet/DeepSpeechWPF/ViewModels/MainWindowViewModel.cs b/native_client/dotnet/MozillaVoiceSttWPF/ViewModels/MainWindowViewModel.cs similarity index 96% rename from native_client/dotnet/DeepSpeechWPF/ViewModels/MainWindowViewModel.cs rename to native_client/dotnet/MozillaVoiceSttWPF/ViewModels/MainWindowViewModel.cs index 230fd42a3e..0d81c2f05e 100644 --- a/native_client/dotnet/DeepSpeechWPF/ViewModels/MainWindowViewModel.cs +++ b/native_client/dotnet/MozillaVoiceSttWPF/ViewModels/MainWindowViewModel.cs @@ -3,8 +3,8 @@ using CSCore.CoreAudioAPI; using CSCore.SoundIn; using CSCore.Streams; -using DeepSpeechClient.Interfaces; -using DeepSpeechClient.Models; +using MozillaVoiceSttClient.Interfaces; +using MozillaVoiceSttClient.Models; using GalaSoft.MvvmLight.CommandWpf; using Microsoft.Win32; using System; @@ -15,7 +15,7 @@ using System.Threading; using System.Threading.Tasks; -namespace DeepSpeech.WPF.ViewModels +namespace MozillaVoiceStt.WPF.ViewModels { /// /// View model of the MainWindow View. @@ -27,7 +27,7 @@ public class MainWindowViewModel : BindableBase private const string ScorerPath = "kenlm.scorer"; #endregion - private readonly IDeepSpeech _sttClient; + private readonly IMozillaVoiceSttModel _sttClient; #region Commands /// @@ -62,7 +62,7 @@ public class MainWindowViewModel : BindableBase /// /// Stream used to feed data into the acoustic model. /// - private DeepSpeechStream _sttStream; + private MozillaVoiceSttStream _sttStream; /// /// Records the audio of the selected device. 
@@ -75,7 +75,7 @@ public class MainWindowViewModel : BindableBase
         private SoundInSource _soundInSource;
 
         /// 
-        /// Target wave source.(16KHz Mono 16bit for DeepSpeech)
+        /// Target wave source (16 kHz mono 16-bit for Mozilla Voice STT)
         /// 
         private IWaveSource _convertedSource;
 
@@ -200,7 +200,7 @@ public ObservableCollection AvailableRecordDevices
         #endregion
 
         #region Ctors
-        public MainWindowViewModel(IDeepSpeech sttClient)
+        public MainWindowViewModel(IMozillaVoiceSttModel sttClient)
         {
             _sttClient = sttClient;
 
@@ -290,7 +290,7 @@ private void Capture_DataAvailable(object sender, DataAvailableEventArgs e)
             //read data from the converedSource
             //important: don't use the e.Data here
             //the e.Data contains the raw data provided by the
-            //soundInSource which won't have the deepspeech required audio format
+            //soundInSource which won't have the audio format required by Mozilla Voice STT
             byte[] buffer = new byte[_convertedSource.WaveFormat.BytesPerSecond / 2];
             int read;
diff --git a/native_client/dotnet/DeepSpeechWPF/packages.config b/native_client/dotnet/MozillaVoiceSttWPF/packages.config
similarity index 100%
rename from native_client/dotnet/DeepSpeechWPF/packages.config
rename to native_client/dotnet/MozillaVoiceSttWPF/packages.config
diff --git a/native_client/dotnet/README.rst b/native_client/dotnet/README.rst
index b102557368..26db5b96ce 100644
--- a/native_client/dotnet/README.rst
+++ b/native_client/dotnet/README.rst
@@ -1,8 +1,8 @@
-Building DeepSpeech native client for Windows
-=============================================
+Building Mozilla Voice STT native client for Windows
+====================================================
 
-Now we can build the native client of DeepSpeech and run inference on Windows using the C# client, to do that we need to compile the ``native_client``.
+Now we can build the native client of Mozilla Voice STT and run inference on Windows using the C# client. To do that, we need to compile the ``native_client``.
 
 **Table of Contents**
 
@@ -59,8 +59,8 @@ There should already be a symbolic link, for this example let's suppose that we
     .
     ├── D:\
-    │   ├── cloned                 # Contains DeepSpeech and tensorflow side by side
-    │   │   └── DeepSpeech         # Root of the cloned DeepSpeech
+    │   ├── cloned                 # Contains Mozilla Voice STT and tensorflow side by side
+    │   │   └── DeepSpeech         # Root of the cloned Mozilla Voice STT
     │   │       ├── tensorflow     # Root of the cloned Mozilla's tensorflow
     └── ...
 
@@ -126,7 +126,7 @@ We will add AVX/AVX2 support in the command, please make sure that your CPU supp
 .. code-block:: bash
 
-   bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" -c opt --copt=/arch:AVX --copt=/arch:AVX2 //native_client:libdeepspeech.so
+   bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" -c opt --copt=/arch:AVX --copt=/arch:AVX2 //native_client:libmozilla_voice_stt.so
 
 GPU with CUDA
 ~~~~~~~~~~~~~
@@ -135,11 +135,11 @@ If you enabled CUDA in `configure.py `_
-As for now we can only use the generated ``libdeepspeech.so`` with the C# clients, go to `native_client/dotnet/ `_ in your DeepSpeech directory and open the Visual Studio solution, then we need to build in debug or release mode, finally we just need to copy ``libdeepspeech.so`` to the generated ``x64/Debug`` or ``x64/Release`` directory.
+For now we can only use the generated ``libmozilla_voice_stt.so`` with the C# clients. Go to `native_client/dotnet/ `_ in your Mozilla Voice STT directory, open the Visual Studio solution, build in Debug or Release mode, and copy ``libmozilla_voice_stt.so`` to the generated ``x64/Debug`` or ``x64/Release`` directory.
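
With ``libmozilla_voice_stt.so`` copied next to the client binaries as described above, the renamed batch API can be exercised end to end. The sketch below is illustrative only: the model, scorer, and WAV file names are placeholder assumptions, and it treats the WAV file as raw 16-bit mono PCM instead of parsing the header, which the bundled ``Program.cs`` does properly with NAudio.

.. code-block:: csharp

   using System;
   using System.IO;
   using MozillaVoiceSttClient;
   using MozillaVoiceSttClient.Interfaces;

   class BatchSketch
   {
       static void Main()
       {
           // Placeholder file names for the model, scorer, and audio clip.
           using (IMozillaVoiceSttModel stt = new MozillaVoiceSttModel("output_graph.pbmm"))
           {
               stt.EnableExternalScorer("kenlm.scorer"); // optional language model

               // Simplification: treat the file as raw 16-bit mono PCM; a real
               // client parses the WAV header first (see Program.cs with NAudio).
               byte[] bytes = File.ReadAllBytes("arctic_a0024.wav");
               short[] samples = new short[bytes.Length / 2];
               Buffer.BlockCopy(bytes, 0, samples, 0, samples.Length * 2);

               Console.WriteLine(stt.SpeechToText(samples, (uint)samples.Length));
           }
       }
   }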
diff --git a/native_client/dotnet/nupkg/deepspeech.nuspec.in b/native_client/dotnet/nupkg/deepspeech.nuspec.in index a4797177ce..93a6f6ea16 100644 --- a/native_client/dotnet/nupkg/deepspeech.nuspec.in +++ b/native_client/dotnet/nupkg/deepspeech.nuspec.in @@ -3,13 +3,13 @@ $NUPKG_ID $NUPKG_VERSION - DeepSpeech + Mozilla.Voice.STT Mozilla Mozilla MPL-2.0 http://github.com/mozilla/DeepSpeech false - A library for running inference with a DeepSpeech model + A library for running inference with a Mozilla Voice STT model Copyright (c) 2019 Mozilla Corporation native speech speech_recognition diff --git a/native_client/generate_scorer_package.cpp b/native_client/generate_scorer_package.cpp index 4486b42cb9..c33c4891cd 100644 --- a/native_client/generate_scorer_package.cpp +++ b/native_client/generate_scorer_package.cpp @@ -11,7 +11,7 @@ using namespace std; #include "ctcdecode/decoder_utils.h" #include "ctcdecode/scorer.h" #include "alphabet.h" -#include "deepspeech.h" +#include "mozilla_voice_stt.h" namespace po = boost::program_options; @@ -66,9 +66,9 @@ create_package(absl::optional alphabet_path, scorer.set_utf8_mode(force_utf8.value()); scorer.reset_params(default_alpha, default_beta); int err = scorer.load_lm(lm_path); - if (err != DS_ERR_SCORER_NO_TRIE) { + if (err != STT_ERR_SCORER_NO_TRIE) { cerr << "Error loading language model file: " - << DS_ErrorCodeToErrorMessage(err) << "\n"; + << STT_ErrorCodeToErrorMessage(err) << "\n"; return 1; } scorer.fill_dictionary(words); diff --git a/native_client/java/Makefile b/native_client/java/Makefile index 191b1013e1..22694841c0 100644 --- a/native_client/java/Makefile +++ b/native_client/java/Makefile @@ -2,7 +2,7 @@ include ../definitions.mk -ARCHS := $(shell grep 'ABI_FILTERS' libdeepspeech/gradle.properties | cut -d'=' -f2 | sed -e 's/;/ /g') +ARCHS := $(shell grep 'ABI_FILTERS' libmozillavoicestt/gradle.properties | cut -d'=' -f2 | sed -e 's/;/ /g') GRADLE ?= ./gradlew all: apk @@ -14,13 +14,13 @@ apk-clean: $(GRADLE) clean libs-clean: - rm -fr libdeepspeech/libs/*/libdeepspeech.so + rm -fr libmozillavoicestt/libs/*/libmozilla_voice_stt.so -libdeepspeech/libs/%/libdeepspeech.so: - -mkdir libdeepspeech/libs/$*/ - cp ${TFDIR}/bazel-out/$*-*/bin/native_client/libdeepspeech.so libdeepspeech/libs/$*/ +libmozillavoicestt/libs/%/libmozilla_voice_stt.so: + -mkdir libmozillavoicestt/libs/$*/ + cp ${TFDIR}/bazel-out/$*-*/bin/native_client/libmozilla_voice_stt.so libmozillavoicestt/libs/$*/ -apk: apk-clean bindings $(patsubst %,libdeepspeech/libs/%/libdeepspeech.so,$(ARCHS)) +apk: apk-clean bindings $(patsubst %,libmozillavoicestt/libs/%/libmozilla_voice_stt.so,$(ARCHS)) $(GRADLE) build maven-bundle: apk @@ -28,4 +28,4 @@ maven-bundle: apk $(GRADLE) zipMavenArtifacts bindings: clean ds-swig - $(DS_SWIG_ENV) swig -c++ -java -package org.mozilla.deepspeech.libdeepspeech -outdir libdeepspeech/src/main/java/org/mozilla/deepspeech/libdeepspeech/ -o jni/deepspeech_wrap.cpp jni/deepspeech.i + $(DS_SWIG_ENV) swig -c++ -java -package org.mozilla.voice.stt -outdir libmozillavoicestt/src/main/java/org/mozilla/voice/stt/ -o jni/deepspeech_wrap.cpp jni/deepspeech.i diff --git a/native_client/java/app/build.gradle b/native_client/java/app/build.gradle index c1aed496ad..abf1fd62b6 100644 --- a/native_client/java/app/build.gradle +++ b/native_client/java/app/build.gradle @@ -4,7 +4,7 @@ android { compileSdkVersion 27 defaultConfig { - applicationId "org.mozilla.deepspeech" + applicationId "org.mozilla.voice.sttapp" minSdkVersion 21 targetSdkVersion 27 versionName 
androidGitVersion.name() @@ -28,7 +28,7 @@ android { dependencies { implementation fileTree(dir: 'libs', include: ['*.jar']) - implementation project(':libdeepspeech') + implementation project(':libmozillavoicestt') implementation 'com.android.support:appcompat-v7:27.1.1' implementation 'com.android.support.constraint:constraint-layout:1.1.3' testImplementation 'junit:junit:4.12' diff --git a/native_client/java/app/src/androidTest/java/org/mozilla/deepspeech/ExampleInstrumentedTest.java b/native_client/java/app/src/androidTest/java/org/mozilla/voice/sttapp/ExampleInstrumentedTest.java similarity index 84% rename from native_client/java/app/src/androidTest/java/org/mozilla/deepspeech/ExampleInstrumentedTest.java rename to native_client/java/app/src/androidTest/java/org/mozilla/voice/sttapp/ExampleInstrumentedTest.java index 6c3e7f91f8..01ddafb9cc 100644 --- a/native_client/java/app/src/androidTest/java/org/mozilla/deepspeech/ExampleInstrumentedTest.java +++ b/native_client/java/app/src/androidTest/java/org/mozilla/voice/sttapp/ExampleInstrumentedTest.java @@ -1,4 +1,4 @@ -package org.mozilla.deepspeech; +package org.mozilla.voice.sttapp; import android.content.Context; import android.support.test.InstrumentationRegistry; @@ -21,6 +21,6 @@ public void useAppContext() { // Context of the app under test. Context appContext = InstrumentationRegistry.getTargetContext(); - assertEquals("org.mozilla.deepspeech", appContext.getPackageName()); + assertEquals("org.mozilla.voice.sttapp", appContext.getPackageName()); } } diff --git a/native_client/java/app/src/main/AndroidManifest.xml b/native_client/java/app/src/main/AndroidManifest.xml index 0702cc1074..1ef6e3a221 100644 --- a/native_client/java/app/src/main/AndroidManifest.xml +++ b/native_client/java/app/src/main/AndroidManifest.xml @@ -1,6 +1,6 @@ + package="org.mozilla.voice.sttapp"> - + diff --git a/native_client/java/app/src/main/java/org/mozilla/deepspeech/DeepSpeechActivity.java b/native_client/java/app/src/main/java/org/mozilla/voice/sttapp/MozillaVoiceSttActivity.java similarity index 95% rename from native_client/java/app/src/main/java/org/mozilla/deepspeech/DeepSpeechActivity.java rename to native_client/java/app/src/main/java/org/mozilla/voice/sttapp/MozillaVoiceSttActivity.java index d82de3a121..7f24e9f6db 100644 --- a/native_client/java/app/src/main/java/org/mozilla/deepspeech/DeepSpeechActivity.java +++ b/native_client/java/app/src/main/java/org/mozilla/voice/sttapp/MozillaVoiceSttActivity.java @@ -1,4 +1,4 @@ -package org.mozilla.deepspeech; +package org.mozilla.voice.sttapp; import android.support.v7.app.AppCompatActivity; import android.os.Bundle; @@ -16,11 +16,11 @@ import java.nio.ByteOrder; import java.nio.ByteBuffer; -import org.mozilla.deepspeech.libdeepspeech.DeepSpeechModel; +import org.mozilla.voice.stt.MozillaVoiceSttModel; -public class DeepSpeechActivity extends AppCompatActivity { +public class MozillaVoiceSttActivity extends AppCompatActivity { - DeepSpeechModel _m = null; + MozillaVoiceSttModel _m = null; EditText _tfliteModel; EditText _audioFile; @@ -50,7 +50,7 @@ private void newModel(String tfliteModel) { this._tfliteStatus.setText("Creating model"); if (this._m == null) { // sphinx-doc: java_ref_model_start - this._m = new DeepSpeechModel(tfliteModel); + this._m = new MozillaVoiceSttModel(tfliteModel); this._m.setBeamWidth(BEAM_WIDTH); // sphinx-doc: java_ref_model_stop } diff --git a/native_client/java/app/src/main/res/layout/activity_deep_speech.xml 
b/native_client/java/app/src/main/res/layout/activity_deep_speech.xml index 02c383d431..ffbee61977 100644 --- a/native_client/java/app/src/main/res/layout/activity_deep_speech.xml +++ b/native_client/java/app/src/main/res/layout/activity_deep_speech.xml @@ -4,7 +4,7 @@ xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" - tools:context=".DeepSpeechActivity"> + tools:context=".MozillaVoiceSttActivity">