Integrating HMM-Based Speech Recognition With Direct Manipulation In A Multimodal Korean Natural Language Interface

Computer Science – Computation and Language

Scientific paper

Details

6 pages, PostScript file; presented at ICMI '96 (Beijing)

This paper presents an HMM-based speech recognition engine and its integration into direct manipulation interfaces for a Korean document editor. Speech recognition can reduce the tedious and repetitive actions that are unavoidable in standard GUIs (graphical user interfaces). Our system consists of a general speech recognition engine called ABrain (Auditory Brain) and a speech-commandable document editor called SHE (Simple Hearing Editor). ABrain is a phoneme-based speech recognition engine that achieves a discrete command recognition rate of up to 97%. SHE is a EuroBridge widget-based document editor that supports speech commands as well as direct manipulation interfaces.
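
The paper does not include source code; the sketch below is a minimal, hypothetical Python illustration of the general idea behind discrete command recognition with HMMs: each candidate command's model is scored against the utterance with the Viterbi algorithm, and the highest-scoring command wins. The function names, model structure, and all numbers are assumptions for illustration only, not ABrain's actual implementation.

    # Illustrative sketch only: scoring a feature sequence against per-command
    # HMMs with the Viterbi algorithm. All names and probabilities are hypothetical.
    import math

    def viterbi_log_score(log_init, log_trans, log_emit):
        """Best-path log-probability through an HMM.

        log_init[s]     : log P(start in state s)
        log_trans[p][s] : log P(p -> s)
        log_emit[i][s]  : log P(observation i | state s), one row per frame
        """
        n_states = len(log_init)
        # Scores after emitting the first frame.
        best = [log_init[s] + log_emit[0][s] for s in range(n_states)]
        for frame in log_emit[1:]:
            best = [
                max(best[p] + log_trans[p][s] for p in range(n_states)) + frame[s]
                for s in range(n_states)
            ]
        return max(best)

    def recognize(command_hmms, emissions_per_model):
        """Pick the command whose HMM best explains the utterance."""
        scores = {
            name: viterbi_log_score(hmm["init"], hmm["trans"], emissions_per_model[name])
            for name, hmm in command_hmms.items()
        }
        return max(scores, key=scores.get), scores

    if __name__ == "__main__":
        NEG = -1e9  # stand-in for log(0): disallowed transition
        # Two toy 2-state left-to-right HMMs for hypothetical commands.
        hmms = {
            "open":  {"init": [0.0, NEG], "trans": [[math.log(0.6), math.log(0.4)], [NEG, 0.0]]},
            "close": {"init": [0.0, NEG], "trans": [[math.log(0.3), math.log(0.7)], [NEG, 0.0]]},
        }
        # Fake per-frame emission log-likelihoods for a 3-frame utterance.
        emissions = {
            "open":  [[-1.0, -3.0], [-1.2, -0.8], [-2.5, -0.4]],
            "close": [[-1.5, -2.0], [-1.9, -1.1], [-2.8, -0.9]],
        }
        best, scores = recognize(hmms, emissions)
        print(best, scores)

In a multimodal editor such as the one described, the recognized command would then be routed to the same action handlers that the direct manipulation interface (menus, buttons) already invokes.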
