SpeechCore is a cross-platform C++ library that abstracts the process of communicating with various screen readers, managing low-level details and providing a clean, simple-to-use interface.
- Simple, intuitive API that is straightforward to use
- Cross-platform support for various screen readers, with the option to support additional ones
- Download the matching library build for your platform from the releases page.
- Link against the library (static or shared, depending on your choice).
- Copy the contents of the include folder to your project directory.
- Include `SpeechCore.h` in your project.
The GitHub repo includes a Visual Studio solution and a SConstruct file. If you're using a different build system, keep the following notes in mind:
- Define either `__SPEECH_C_EXPORT` (shared) or `SPEECH_C_STATIC` (static) when compiling; see the sketch after this list.
- JNI files are included for Java support. Ensure you have the required Java files, or exclude them if not needed.
- Include only platform-specific files for your target platform.
- Linux builds require the Speech Dispatcher library.
- Windows builds need to link against SAPI.LIB.
- macOS builds need to link against the object library, as well as the AVFoundation and Foundation frameworks.
- Documentation generation requires Doxygen and Sphinx.
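A minimal sketch of the macro note above, assuming `SPEECH_C_STATIC` simply switches the header away from shared-library import/export declarations (the README does not spell this out, so treat it as an assumption):

```cpp
// Sketch only: assumes SPEECH_C_STATIC controls the header's import/export
// declarations. Define it before the include (or pass -DSPEECH_C_STATIC /
// /DSPEECH_C_STATIC on the compiler command line) when linking the static build.
#define SPEECH_C_STATIC
#include <SpeechCore.h>

int main() {
    Speech_Init();
    // ... use the API as usual ...
    Speech_Free();
    return 0;
}
```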
Simple usage example:
```cpp
#include <iostream>
#include <SpeechCore.h>

int main() {
    Speech_Init();
    if (Speech_Is_Loaded()) {
        std::cout << "Current screen reader: " << Speech_Get_Current_Driver() << std::endl;
        std::cout << "Speaking some text" << std::endl;
        Speech_Output(L"This is a test for the SpeechCore library. If you're hearing this, it indicates the library is functioning properly.");
    }
    Speech_Free(); // Free resources when you're done.
    return 0;
}
```

See the documentation for more detailed usage examples.
Documentation can be found here.
- Windows screen readers (NVDA, JAWS, Zhengdu, PCTalker, and System Access) require specific binaries, included with the source code.
- Enhanced control over SAPI 5, including voice configuration and speech parameters.
- Braille output functionality is included and implemented for the screen readers that support it, mainly NVDA and JAWS.
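The braille entry point is not named in this README, so the following is only a hypothetical sketch; `Speech_Braille` and its signature are assumptions used for illustration:

```cpp
#include <SpeechCore.h>

// Hypothetical sketch: Speech_Braille is an assumed function name, not
// confirmed by this README. Braille output only works on screen readers
// that support it (mainly NVDA and JAWS).
void show_on_braille_display(const wchar_t* text) {
    if (Speech_Is_Loaded()) {
        Speech_Braille(text); // assumed braille output call
    }
}
```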
- Windows: NVDA, JAWS, System Access, Zhengdu Screen Reader, PCTalker, SAPI 5
- macOS: AVSpeech
- Linux: Speech Dispatcher
Bindings are available for Python, .NET/C#, and Java.
For Python, install the bindings via pip:

```
pip install SpeechCore
```

For .NET, install via the NuGet Package Manager:

```
dotnet add package SpeechCore.CrossPlatform
```

Or via the Package Manager Console in Visual Studio:

```
Install-Package SpeechCore.CrossPlatform
```

This library was inspired by Tolk, with adaptations for more flexibility. It was initially developed for personal projects and later expanded to be cross-platform.
- The `Speech_Detect_Driver` function now rescans for drivers on Windows even if one is currently running.
- Fixed NVDA speech interrupt functionality.
- New functions available in the API (see the sketch after this changelog):
  - `Speech_Set_Pitch` and `Speech_Get_Pitch`, for controlling the pitch parameter on drivers that support it.
  - `Speech_Output_Text`, an enhanced speech function that includes a `with_ssml` parameter for drivers that support SSML. `SC_SSML_SUPPORT` can be used to check whether the current driver does.
- Added PCTalker screen reader support.
- Implemented braille functionality for screen readers that support it (NVDA, JAWS).
- Modified encoding handling on Unix platforms; it should now function properly.
- Python bindings have been rewritten with pybind11 and are now part of the main repo.
- .NET bindings are now available as a NuGet package.
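A short sketch of how these additions might be used. The changelog does not give signatures, so the float pitch value, the boolean `with_ssml` parameter, and the `Speech_Get_Flags()` capability query paired with `SC_SSML_SUPPORT` are all assumptions for illustration only:

```cpp
#include <iostream>
#include <SpeechCore.h>

int main() {
    Speech_Init();
    if (Speech_Is_Loaded()) {
        // Pitch control, assumed here to use float values, on drivers that support it.
        Speech_Set_Pitch(1.2f);
        std::cout << "Pitch: " << Speech_Get_Pitch() << std::endl;

        // SC_SSML_SUPPORT is described as the way to check for SSML support;
        // Speech_Get_Flags() is a hypothetical capability query used only for
        // illustration.
        bool with_ssml = (Speech_Get_Flags() & SC_SSML_SUPPORT) != 0;
        if (with_ssml) {
            Speech_Output_Text(L"<speak>Hello from <emphasis>SpeechCore</emphasis>.</speak>", true);
        } else {
            Speech_Output_Text(L"Hello from SpeechCore.", false);
        }
    }
    Speech_Free();
    return 0;
}
```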
Contributions to the library or support for additional screen readers are welcome.
- Implement Android/iOS support
- Add Meson build files
- Add CMake build files