Web Speech API: A Beginner’s Guide


Aug 28, 2024 · 6 min read
by Shubham Ambastha

Nowadays, AI is used for almost everything, often saving us time and delivering better results. But using AI also comes at a cost. Imagine paying thousands of dollars for something as basic as converting audio to text, just because you used AI.

Did you know you can get the same results without needing AI or spending a dime? Even better, you won't need an extra server. Most modern browsers already have a built-in audio-to-text feature, so they handle everything right there in your browser. This handy tool is available through the Web Speech API.

The World Wide Web Consortium (W3C) put forward the Web Speech API as a draft in 2012. It has since become widely supported in modern browsers, making voice capabilities more accessible to web developers.

Importance of Integrating Voice Capabilities in Web Applications

  • Accessibility: Enhances the accessibility of web applications for users with disabilities, resulting in a more inclusive user experience.
  • User Engagement: Voice interaction can improve the usability and engagement of applications, particularly for searching, issuing commands, and completing forms.
  • Efficiency: Voice commands can be faster and more convenient than traditional input methods, enhancing the overall user experience.
  • Innovation: By integrating voice capabilities, developers can build innovative applications, stay current with trends, and meet user expectations for modern, interactive web experiences.

Understanding the Web Speech API

The Web Speech API is a powerful tool built right into modern browsers, allowing web apps to use voice interactions. It works by tapping into the hardware and software of the user’s device to process and understand spoken words.

This API has two main parts: Speech Recognition and Speech Synthesis. Speech Recognition lets websites listen to what you say and turn your words into text, making hands-free, voice-controlled interactions possible. This opens up new ways to create user-friendly interfaces and makes websites more accessible.

The Speech Synthesis component, in contrast, provides the capability to generate synthetic speech from written text. This text-to-speech functionality allows web applications to audibly convey information to users, further enhancing the multimodal experience and accessibility of web-based experiences.

Together, the Speech Recognition and Speech Synthesis capabilities of the Web Speech API offer developers a robust set of tools to incorporate voice-driven features and functionality into their web applications, catering to a wide range of user needs and preferences.

Here's a closer look at how it works:

Speech Recognition (Converts spoken words into text)

  • The SpeechRecognition interface captures audio input from the user's microphone.
  • The audio data is processed by the browser's speech recognition engine, which can be either a built-in local engine or a cloud-based service.
  • Speech is converted into text and then returned to the app via events.
  • Developers can then handle these events to display the recognized text or trigger other actions.

Speech Synthesis (Converts text into spoken words)


  • The SpeechSynthesis interface takes text input and converts it into spoken words using the browser's speech synthesis engine.
  • Developers create instances of SpeechSynthesisUtterance to specify the text to be spoken, along with properties like pitch, rate, and volume.
  • The browser then uses available voices to read the text aloud.
  • Developers can manage speech synthesis events to handle start, end, pause, resume, and error states.

Exploring Speech Recognition

Let’s walk through setting up and using the SpeechRecognition interface.

Key Interfaces and Methods:

  • SpeechRecognition: Main interface for speech recognition.
  • SpeechGrammarList and SpeechGrammar: Define the grammar (words and phrases) the recognition service should recognize.
  • Methods:
    • start(): Begins the speech recognition service.
    • stop(): Stops the speech recognition service.
  • Events:
    • onresult: Triggered when the speech recognition service returns a result.
    • onspeechend: Triggered when the user stops speaking.
    • onerror: Triggered when an error occurs during recognition.
    • onnomatch: Triggered when no speech matches the defined grammar.
// Creating a new SpeechRecognition instance
// (Chrome exposes the interface under a webkit prefix)
const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognition();

// Set properties (optional)
recognition.lang = 'en-US';
recognition.interimResults = true;

// Event listeners
recognition.addEventListener('result', (event) => {
  const transcript = Array.from(event.results)
    .map((result) => result[0].transcript)
    .join('');

  console.log('Transcript:', transcript);
  // Do something
});

recognition.addEventListener('error', (event) => {
  console.error('Speech recognition error:', event.error);
});

// Start the speech recognition 
recognition.start();
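The listener above joins every result into a single transcript. When interimResults is enabled, each entry in event.results also carries an isFinal flag, so you can keep confirmed text separate from text that may still change. Here is a minimal sketch; the splitTranscript helper name is our own, not part of the API:

```javascript
// Separate finalized text from still-changing interim text.
// Each result in a SpeechRecognition result list has an isFinal flag
// and its best alternative's transcript at index 0.
function splitTranscript(results) {
  let finalText = '';
  let interimText = '';
  for (const result of results) {
    if (result.isFinal) {
      finalText += result[0].transcript;
    } else {
      interimText += result[0].transcript;
    }
  }
  return { finalText, interimText };
}

// In the browser:
// recognition.addEventListener('result', (event) => {
//   const { finalText, interimText } = splitTranscript(Array.from(event.results));
//   console.log('Final:', finalText, '| Interim:', interimText);
// });
```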

Advanced Speech Recognition Techniques

You can work with grammar and configure recognition properties to enhance speech recognition functionality.

Working with Grammars:

  • Grammar defines the vocabulary that the recognition service should recognize.
  • Use SpeechGrammarList and SpeechGrammar interfaces to create and use grammars.


// Creating a new SpeechGrammarList (also prefixed in Chrome)
const SpeechGrammarList = window.SpeechGrammarList || window.webkitSpeechGrammarList;
const grammarList = new SpeechGrammarList();

// addFromString takes the JSGF grammar as a string, plus a weight
const phrase = '#JSGF V1.0; grammar phrase; public <phrase> = hello | goodbye;';
grammarList.addFromString(phrase, 1);

// Configure the SpeechRecognition to use the grammar
recognition.grammars = grammarList;
recognition.start();

Exploring Speech Synthesis

Speech synthesis, also known as text-to-speech (TTS), is a powerful feature of the Web Speech API that allows web applications to convert text into spoken words. This capability opens up a range of possibilities for enhancing user experience and accessibility.

Basic Usage: To use speech synthesis, you'll work with the SpeechSynthesis interface and SpeechSynthesisUtterance object. Here's a basic example:

const synth = window.speechSynthesis;
const utterance = new SpeechSynthesisUtterance("Hello, world!");
synth.speak(utterance);

Customizing Voice Properties: You can customize various aspects of the synthesized speech:

utterance.volume = 0.8; // 0 to 1
utterance.rate = 1.2; // 0.1 to 10
utterance.pitch = 1.1; // 0 to 2
utterance.lang = 'en-US';

Choosing Voices: Modern browsers often provide multiple voices to choose from:

const voices = synth.getVoices();
utterance.voice = voices[0]; // Choose the first available voice
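Note that getVoices() can return an empty array if the browser has not finished loading its voice list; a voiceschanged event fires once it is ready. A hedged sketch that handles both cases (pickVoice and speakWithVoice are helper names of our own):

```javascript
// Select the first voice matching a BCP 47 language tag,
// falling back to the first available voice, or null if none exist.
function pickVoice(voices, lang) {
  return voices.find((v) => v.lang === lang) || voices[0] || null;
}

// Speak text with a voice for the given language, waiting for the
// voice list to load if necessary.
function speakWithVoice(text, lang = 'en-US') {
  const synth = window.speechSynthesis;
  const utterance = new SpeechSynthesisUtterance(text);
  const voices = synth.getVoices();
  if (voices.length > 0) {
    utterance.voice = pickVoice(voices, lang);
    synth.speak(utterance);
  } else {
    // Voice list not loaded yet: speak once it becomes available
    synth.addEventListener('voiceschanged', () => {
      utterance.voice = pickVoice(synth.getVoices(), lang);
      synth.speak(utterance);
    }, { once: true });
  }
}
```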

Handling Events: The SpeechSynthesisUtterance object emits various events you can listen for:

utterance.onstart = () => console.log('Speech started');
utterance.onend = () => console.log('Speech ended');
utterance.onerror = (event) => console.error('Speech error:', event.error);
utterance.onpause = () => console.log('Speech paused');
utterance.onresume = () => console.log('Speech resumed');

Managing Speech Queue: The speechSynthesis interface maintains a queue: calling speak() while another utterance is playing appends the new one rather than interrupting it. You can manage this queue:

synth.cancel(); // Stop current speech and clear queue
synth.pause(); // Pause speaking
synth.resume(); // Resume speaking

Handling Long Text: For longer text, you might want to break it into smaller chunks:

function speakLongText(text) {
  const maxLength = 200;
  // Split into chunks of up to maxLength characters, breaking on whitespace
  const chunksArr = text.match(new RegExp(`.{1,${maxLength}}(\\s|$)`, 'g'));
  chunksArr.forEach((chunk, index) => {
    const utterance = new SpeechSynthesisUtterance(chunk);
    utterance.onend = () => {
      if (index === chunksArr.length - 1) {
        console.log('Finished speaking all text');
      }
    };
    synth.speak(utterance);
  });
}

Accessibility Considerations: When using speech synthesis, consider the following:

  • Provide controls for users to stop or pause the speech
  • Allow users to adjust volume, rate, and pitch
  • Ensure visual feedback is provided alongside audio feedback

Browser Support and Fallbacks: While speech synthesis is widely supported, it's good practice to check for support and provide fallbacks:

if ('speechSynthesis' in window) {
  // Speech synthesis is supported
} else {
  console.log('Speech synthesis not supported');
  // Provide alternative feedback method
}
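The same check can cover both halves of the API. Below is a small helper of our own (detectSpeechSupport is not a built-in); taking the global object as a parameter keeps it easy to test:

```javascript
// Feature-detect both parts of the Web Speech API. Recognition is
// checked under both the standard and the webkit-prefixed names.
function detectSpeechSupport(globalObj) {
  return {
    recognition:
      'SpeechRecognition' in globalObj || 'webkitSpeechRecognition' in globalObj,
    synthesis: 'speechSynthesis' in globalObj,
  };
}

// In the browser:
// const support = detectSpeechSupport(window);
// if (!support.recognition) { /* fall back to a text input */ }
// if (!support.synthesis) { /* fall back to visual-only feedback */ }
```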

Combining with Speech Recognition: You can create a conversational interface by combining speech synthesis with speech recognition:

recognition.onresult = (event) => {
  const text = event.results[0][0].transcript;
  console.log('You said:', text);
  
  const response = generateResponse(text); // Main logic here
  const utterance = new SpeechSynthesisUtterance(response);
  synth.speak(utterance);
};
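The generateResponse call above is a placeholder for your application logic. As a toy stand-in (entirely hypothetical, not part of any API), a simple keyword matcher is enough to close the loop:

```javascript
// A toy stand-in for the generateResponse placeholder: maps a few
// keywords to canned replies. A real app would call its own logic
// (or a backend service) here instead.
function generateResponse(text) {
  const input = text.toLowerCase();
  if (input.includes('hello')) return 'Hi there! How can I help?';
  if (input.includes('thank')) return 'You are welcome!';
  return "Sorry, I didn't catch that.";
}
```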

Putting It All Together: Building a Complete Voice-Enabled Application

To demonstrate the integration of the Web Speech API in a complete web application, let's walk through a sample React-based implementation.

First, let's create a React component VoiceEnabledApp:

import React, { useState, useEffect } from 'react';

function VoiceEnabledApp() {
  const [recognition, setRecognition] = useState(null);
  const [transcript, setTranscript] = useState('');

  useEffect(() => {
    // Initialize the SpeechRecognition instance
    const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
    const newRecognition = new SpeechRecognition();
    newRecognition.lang = 'en-US';
    newRecognition.interimResults = true;
    setRecognition(newRecognition);

    // Event listeners (named so they can actually be removed on cleanup)
    const handleResult = (event) => {
      const spoken = Array.from(event.results)
        .map((result) => result[0].transcript)
        .join('');
      setTranscript(spoken);
    };
    const handleEnd = () => {
      newRecognition.start();
    };

    newRecognition.addEventListener('result', handleResult);
    newRecognition.addEventListener('end', handleEnd);

    // Starting the speech recognition service
    newRecognition.start();

    return () => {
      // Remove the listeners before stopping, so the 'end' handler
      // does not restart recognition after the component unmounts
      newRecognition.removeEventListener('result', handleResult);
      newRecognition.removeEventListener('end', handleEnd);
      newRecognition.stop();
    };
  }, []);

  return (
    <div>
      <h1>Voice-Enabled Application</h1>
      <p>Transcript: {transcript}</p>
      {/* Add other UI components and functionality */}
    </div>
  );
}

export default VoiceEnabledApp;

To use this VoiceEnabledApp component in the application, we can import and render it:

import React from 'react';
import VoiceEnabledApp from './VoiceEnabledApp';

function App() {
  return (
    <div>
      <VoiceEnabledApp />
    </div>
  );
}

export default App;

You can find a more detailed React app on GitHub.

Tips and Best Practices

  • Use parameters to tailor the speech recognition and synthesis experience. This includes options such as language, interim results, and voice selection.
  • Handle errors and edge cases gracefully, and give users clear feedback when problems happen.
  • Consider putting in place backup plans for web browsers that don't support the Web Speech API.
  • Optimize the performance of your application by managing the lifecycle of the speech recognition and synthesis services.
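On the last point, one concrete lifecycle concern is leaving the microphone open in a background tab. A sketch that stops recognition while the tab is hidden (bindVisibilityLifecycle is a helper name of our own):

```javascript
// Stop recognition while the tab is hidden and restart it when the
// tab becomes visible again. Passing the document and recognition
// objects in keeps the helper testable. Returns an unbind function.
function bindVisibilityLifecycle(doc, recognition) {
  const onVisibilityChange = () => {
    if (doc.hidden) {
      recognition.stop();
    } else {
      recognition.start();
    }
  };
  doc.addEventListener('visibilitychange', onVisibilityChange);
  return () => doc.removeEventListener('visibilitychange', onVisibilityChange);
}

// In the browser: bindVisibilityLifecycle(document, recognition);
```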

Conclusion

The Web Speech API gives developers a powerful, built-in way to add voice features to websites. Using it makes sites more accessible, keeps users engaged, and boosts productivity, while also opening the door to new ideas in web projects. As the technology matures, the Web Speech API creates exciting opportunities to build websites that are easier to use and more interactive. With the example code and tips shared in this post, you can start exploring what the Web Speech API can do and add voice features to your own websites.

Frequently Asked Questions

1. Is the Web Speech API supported in all browsers? 

The Web Speech API is widely supported in modern browsers, but it's best to check compatibility and provide fallbacks for unsupported browsers.

2. How can I use the Web Speech API in my web application? 

You can use JavaScript to access the API's features, like SpeechRecognition for speech-to-text and SpeechSynthesis for text-to-speech, in your web application's code.

Shubham Ambastha

I excel in optimizing performance, reducing costs, and creating innovative web applications. Passionate about coding and continuous learning, I share insights on cutting-edge tech solutions.

