Victor Zue, Ron Cole, & Wayne Ward
MIT Laboratory for Computer Science, Cambridge, Massachusetts, USA
Oregon Graduate Institute of Science & Technology, Portland, Oregon, USA
Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
Speech recognition is the process of converting an acoustic signal, captured by a microphone or a telephone, to a set of words. The recognized words can be the final results, as for applications such as command and control, data entry, and document preparation. They can also serve as the input to further linguistic processing in order to achieve speech understanding, a subject covered in a later section.
Speech recognition systems can be characterized by many parameters, some of the more important of which are shown in the table below. An isolated-word speech recognition system requires that the speaker pause briefly between words, whereas a continuous speech recognition system does not. Spontaneous, or extemporaneously generated, speech contains disfluencies, and is much more difficult to recognize than speech read from a script. Some systems require speaker enrollment---a user must provide samples of his or her speech before using the system, whereas other systems are said to be speaker-independent, in that no enrollment is necessary.
Some of the other parameters depend on the specific task. Recognition is
generally more difficult when vocabularies are large or have many
similar-sounding words. When speech is produced in a sequence of words,
language models or artificial grammars are used to restrict the
combination of words.
The simplest language model can be specified as a finite-state network, where the permissible words following each word are given explicitly. More general language models approximating natural language are specified in terms of a context-sensitive grammar.
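To make the finite-state formulation concrete, the sketch below encodes such a network as an explicit map from each word to its permissible successors. The vocabulary and transitions are invented purely for illustration and are not drawn from any particular system.

# Minimal sketch of a finite-state language model: for each word, the words
# that may follow it are listed explicitly. Vocabulary is hypothetical.
successors = {
    "<s>":     ["show", "list"],        # words allowed at the start of an utterance
    "show":    ["me", "flights"],
    "list":    ["flights"],
    "me":      ["flights"],
    "flights": ["to", "</s>"],
    "to":      ["boston", "denver"],
    "boston":  ["</s>"],
    "denver":  ["</s>"],
}

def is_permissible(words):
    """Return True if the word sequence is accepted by the finite-state network."""
    state = "<s>"
    for w in words + ["</s>"]:
        if w not in successors.get(state, []):
            return False
        state = w
    return True

print(is_permissible(["show", "me", "flights", "to", "boston"]))  # True
print(is_permissible(["show", "flights", "boston"]))              # False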
One popular measure of the difficulty of the task, combining the vocabulary size and the language model, is perplexity, loosely defined as the geometric mean of the number of words that can follow a word after the language model has been applied (language modeling in general, and perplexity in particular, are discussed in a later section). Finally, there are some external parameters that can affect speech recognition system performance, including the characteristics of the environmental noise and the type and placement of the microphone.
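As an illustration of how perplexity is computed in practice, the following sketch evaluates a toy bigram language model on a test sentence. The probabilities are invented for the example; a real system would estimate them from a training corpus.

import math

# Hypothetical bigram probabilities P(word | previous word), for illustration only.
bigram_prob = {
    ("<s>", "show"): 0.6, ("<s>", "list"): 0.4,
    ("show", "me"): 0.5, ("show", "flights"): 0.5,
    ("me", "flights"): 1.0,
    ("flights", "to"): 0.7, ("flights", "</s>"): 0.3,
    ("to", "boston"): 0.5, ("to", "denver"): 0.5,
    ("boston", "</s>"): 1.0, ("denver", "</s>"): 1.0,
}

def perplexity(sentences):
    """Perplexity = inverse geometric mean of the per-word probabilities
    assigned by the language model over the test sentences."""
    log_prob, n_words = 0.0, 0
    for sent in sentences:
        words = ["<s>"] + sent + ["</s>"]
        for prev, word in zip(words, words[1:]):
            log_prob += math.log2(bigram_prob[(prev, word)])
            n_words += 1
    return 2 ** (-log_prob / n_words)

print(perplexity([["show", "me", "flights", "to", "boston"]]))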
Table: Typical parameters used to characterize the capability of speech
recognition systems
Speech recognition is a difficult problem, largely because of the
many sources of variability associated with the signal. First, the
acoustic realizations of phonemes, the smallest sound units of which
words are composed, are highly dependent on the context in which they
appear. These phonetic variabilities are exemplified by the acoustic
differences of the phoneme /t/ in two, true, and butter in American
English. At word boundaries, contextual variations
can be quite dramatic---making gas shortage sound
like gash shortage in American
English, and devo andare sound like devandare in Italian.
Second, acoustic variabilities can result from changes in the environment as well as in the position and characteristics of the transducer. Third, within-speaker variabilities can result from changes in the speaker's physical and emotional state, speaking rate, or voice quality. Finally, differences in sociolinguistic background, dialect, and vocal tract size and shape can contribute to across-speaker variabilities.
The figure below shows the major components of a typical speech recognition system. The digitized speech signal is first transformed into a set of useful measurements or features at a fixed rate, typically once every 10--20 msec (see sections 1.3 and 11.3 for signal representation and digital signal processing, respectively). These measurements are then used to search for the most likely word candidate, making use of constraints imposed by the acoustic, lexical, and language models. Throughout this process, training data are used to determine the values of the model parameters.
Figure: Components of a typical speech recognition system.
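As a rough sketch of the fixed-rate front end, the code below slices a digitized signal into overlapping windows and computes one feature vector per window. The window, hop, and FFT sizes are assumed values, and the log magnitude spectrum merely stands in for whatever representation a given system actually uses.

import numpy as np

def frame_features(signal, sample_rate=16000, frame_ms=25, hop_ms=10, n_fft=512):
    """Return one feature vector every hop_ms milliseconds (the fixed analysis rate)."""
    frame_len = int(sample_rate * frame_ms / 1000)   # samples per analysis window
    hop_len = int(sample_rate * hop_ms / 1000)       # samples between successive frames
    window = np.hamming(frame_len)
    features = []
    for start in range(0, len(signal) - frame_len + 1, hop_len):
        frame = signal[start:start + frame_len] * window
        spectrum = np.abs(np.fft.rfft(frame, n_fft))
        features.append(np.log(spectrum + 1e-10))    # log magnitude spectrum
    return np.array(features)

# One second of audio yields roughly 100 feature vectors at a 10 msec frame rate.
print(frame_features(np.zeros(16000)).shape)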
Speech recognition systems attempt to model the sources of variability
described above in several ways. At the level of signal representation,
researchers have developed representations that emphasize perceptually
important speaker-independent features of the signal, and de-emphasize
speaker-dependent characteristics [Her90]. At the
acoustic phonetic level, speaker variability is typically modeled using
statistical techniques applied to large amounts of data.
Speaker adaptation algorithms have also been developed that adapt speaker-independent acoustic models to those of the current speaker during system use (discussed in a later section). Effects of linguistic context at the acoustic phonetic level are typically handled by training separate models for phonemes in different contexts; this is called context-dependent acoustic modeling.
Word level variability can be handled by allowing alternate pronunciations of words in representations known as pronunciation networks. Common alternate pronunciations of words, as well as effects of dialect and accent are handled by allowing search algorithms to find alternate paths of phonemes through these networks. Statistical language models, based on estimates of the frequency of occurrence of word sequences, are often used to guide the search through the most probable sequence of words.
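The sketch below illustrates the idea of a pronunciation network as a simple data structure: each word maps to one or more alternate phoneme sequences, and a search algorithm is free to match any path through them. The words and pronunciations are examples only, not a system's actual lexicon.

# Hypothetical pronunciation network: each word has one or more phoneme paths.
pronunciations = {
    "tomato": [
        ["t", "ah", "m", "ey", "t", "ow"],   # "tomayto"
        ["t", "ah", "m", "aa", "t", "ow"],   # "tomahto" (dialect variant)
    ],
    "and": [
        ["ae", "n", "d"],
        ["ax", "n"],                         # reduced form in casual speech
    ],
}

def expand(word_sequence):
    """Yield every phoneme path a search algorithm could match for the word sequence."""
    if not word_sequence:
        yield []
        return
    first, rest = word_sequence[0], word_sequence[1:]
    for pron in pronunciations[first]:
        for tail in expand(rest):
            yield pron + tail

for path in expand(["tomato", "and"]):
    print(path)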
The dominant recognition paradigm of the past fifteen years has been the hidden Markov model (HMM). An HMM is a doubly stochastic model, in which the generation of the underlying phoneme string and the frame-by-frame, surface acoustic realizations are both represented probabilistically as Markov processes, as discussed in later sections, including section 11.2.
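A toy sketch of the HMM machinery follows: hidden states emit observed symbols, and the Viterbi dynamic-programming search recovers the most probable hidden state sequence. The states, transition probabilities, and emission probabilities are invented for illustration and bear no relation to any real acoustic model.

import numpy as np

states = ["s1", "s2"]
start_p = np.array([0.8, 0.2])                 # P(initial state)
trans_p = np.array([[0.7, 0.3],                # P(next state | current state)
                    [0.4, 0.6]])
emit_p = np.array([[0.6, 0.3, 0.1],            # P(observed symbol | state)
                   [0.1, 0.3, 0.6]])

def viterbi(observations):
    """Return the most probable hidden state sequence for the observed symbols."""
    n_states, T = len(states), len(observations)
    delta = np.zeros((T, n_states))            # best path score ending in each state
    backptr = np.zeros((T, n_states), dtype=int)
    delta[0] = start_p * emit_p[:, observations[0]]
    for t in range(1, T):
        for j in range(n_states):
            scores = delta[t - 1] * trans_p[:, j]
            backptr[t, j] = np.argmax(scores)
            delta[t, j] = scores[backptr[t, j]] * emit_p[j, observations[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.insert(0, backptr[t, path[0]])
    return [states[i] for i in path]

print(viterbi([0, 1, 2, 2]))   # e.g., ['s1', 's1', 's2', 's2']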
Neural networks have also been used to estimate the frame-based scores;
these scores are then integrated into HMM-based system architectures, in
what has come to be known as hybrid systems,
as described in section 11.5.
An interesting feature of frame-based HMM systems is that speech segments are identified implicitly during the search process, rather than being delimited explicitly beforehand.
An alternate approach is to first identify speech segments, then
classify the segments and use the segment scores to recognize words.
This approach has produced competitive recognition performance
in several tasks [ZGPS90,FBC95].
Comments about the state-of-the-art need to be made in the context of specific applications which reflect the constraints on the task. Moreover, different technologies are sometimes appropriate for different tasks. For example, when the vocabulary is small, the entire word can be modeled as a single unit. Such an approach is not practical for large vocabularies, where word models must be built up from subword units.
Performance of speech recognition systems is typically described in terms of word error rate, E, defined as:

E = (S + I + D) / N × 100%

where N is the total number of words in the test set, and S, I, and D are the total number of substitutions, insertions, and deletions, respectively.
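A minimal sketch of how this error rate is computed in practice, using the standard dynamic-programming alignment of a hypothesis against a reference transcription (the example word strings are invented):

def word_error_rate(reference, hypothesis):
    """Return the word error rate, in percent, between two word lists."""
    n, m = len(reference), len(hypothesis)
    # cost[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words
    cost = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        cost[i][0] = i
    for j in range(m + 1):
        cost[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = cost[i - 1][j - 1] + (reference[i - 1] != hypothesis[j - 1])
            cost[i][j] = min(sub, cost[i - 1][j] + 1, cost[i][j - 1] + 1)
    return 100.0 * cost[n][m] / n

ref = "show me flights to boston".split()
hyp = "show flights to the boston".split()
print(word_error_rate(ref, hyp))   # 2 errors over 5 reference words -> 40.0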
The past decade has witnessed significant progress in speech recognition technology. Word error rates continue to drop by a factor of two every two years. Substantial progress has been made in the basic technology, leading to the lowering of barriers to speaker independence, continuous speech, and large vocabularies. Several factors have contributed to this rapid progress. First, there is the coming of age of the HMM. HMMs are powerful in that, with the availability of training data, the parameters of the model can be trained automatically to give optimal performance.
Second, much effort has gone into the development of large speech corpora for system development, training, and testing. Some of these corpora are designed for acoustic phonetic research, while others are highly task specific. Nowadays, it is not uncommon to have tens of thousands of sentences available for system training and testing. These corpora permit researchers to quantify the acoustic cues important for phonetic contrasts and to determine parameters of the recognizers in a statistically meaningful way. While many of these corpora (e.g., TIMIT, RM, ATIS, and WSJ; see section 12.3) were originally collected under the sponsorship of the U.S. Defense Advanced Research Projects Agency (ARPA) to spur human language technology development among its contractors, they have nevertheless gained world-wide acceptance (e.g., in Canada, France, Germany, Japan, and the U.K.) as standards on which to evaluate speech recognition.
Third, progress has been brought about by the establishment of standards for performance evaluation. Only a decade ago, researchers trained and tested their systems using locally collected data, and had not been very careful in delineating training and testing sets. As a result, it was very difficult to compare performance across systems, and a system's performance typically degraded when it was presented with previously unseen data. The recent availability of a large body of data in the public domain, coupled with the specification of evaluation standards, has resulted in uniform documentation of test results, thus contributing to greater reliability in monitoring progress (corpus development activities and evaluation methodologies are summarized in chapters 12 and 13 respectively).
Finally, advances in computer technology have also indirectly influenced our progress. The availability of fast computers with inexpensive mass storage capabilities has enabled researchers to run many large scale experiments in a short amount of time. This means that the elapsed time between an idea and its implementation and evaluation is greatly reduced. In fact, speech recognition systems with reasonable performance can now run in real time using high-end workstations without additional hardware---a feat unimaginable only a few years ago.
One of the most popular, and potentially most useful tasks with low perplexity (PP=11) is the recognition of digits. For American English, speaker-independent recognition of digit strings spoken continuously and restricted to telephone bandwidth can achieve an error rate of 0.3% when the string length is known.
One of the best known moderate-perplexity tasks is the 1,000-word so-called Resource Management (RM) task, in which inquiries can be made concerning various naval vessels in the Pacific Ocean. The best speaker-independent performance on the RM task is a word error rate of less than 4%, using a word-pair language model that constrains the possible words following a given word (PP=60). More recently, researchers have begun to address the issue of recognizing spontaneously generated speech. For example, in the Air Travel Information Service (ATIS) domain, word error rates of less than 3% have been reported for a vocabulary of nearly 2,000 words and a bigram language model with a perplexity of around 15.
High-perplexity tasks with a vocabulary of thousands of words are intended primarily for the dictation application. After working on isolated-word, speaker-dependent systems for many years, the community has since 1992 moved towards very-large-vocabulary (20,000 words and more), high-perplexity, speaker-independent, continuous speech recognition. The best system in 1994 achieved an error rate of 7.2% on read sentences drawn from North American business news [PFF+94].
With the steady improvements in speech recognition performance, systems are now being deployed within telephone and cellular networks in many countries. Within the next few years, speech recognition will be pervasive in telephone networks around the world. There are tremendous forces driving the development of the technology; in many countries, touch tone penetration is low, and voice is the only option for controlling automated services. In voice dialing, for example, users can dial 10--20 telephone numbers by voice (e.g., call home) after having enrolled their voices by saying the words associated with telephone numbers. AT&T, on the other hand, has installed a call routing system using speaker-independent word-spotting technology that can detect a few key phrases (e.g., person to person, calling card) in sentences such as: I want to charge it to my calling card.
At present, several very large vocabulary dictation systems are available for document generation. These systems generally require speakers to pause between words. Their performance can be further enhanced if one can apply constraints of the specific domain such as dictating medical reports.
Even though much progress is being made, machines are a long way from recognizing conversational speech. Word recognition rates on telephone conversations in the Switchboard corpus are around 50% [CGF94]. It will be many years before unlimited vocabulary, speaker-independent continuous dictation capability is realized.
In 1992, the U.S. National Science Foundation sponsored a workshop to
identify the key research challenges in the area of human language
technology, and the infrastructure needed to support the work. The key
research challenges are summarized in [CH92]. The following areas were identified for speech recognition research:
Prosody refers to acoustic structure that extends over several segments or words. Stress, intonation, and rhythm convey important information for word recognition and for interpreting the user's intentions (e.g., sarcasm, anger). Current systems do not capture prosodic structure. How to integrate prosodic information into the recognition architecture is a critical question that has not yet been answered.