Author Purkayastha, Bhaskar
Title Integrating gesture recognition and speech recognition in a touch-less human computer interaction system
ISBN 9781109344028
Description 73 p.
Note Source: Masters Abstracts International, Volume: 48-01, page: 0458
Adviser: Venu Govindaraju
Thesis (M.S.)--State University of New York at Buffalo, 2009
We envision a command-and-control scenario in which the speaker makes hand gestures while referring to objects on a computer display (monitor or projection screen). The objective is automated recognition of the gestures to manipulate the virtual objects on the display. Our approach advances the state of the art in human-computer interaction technology in the following unique ways: (i) the user is expected to be at a distance from the display, so the sensing is "touchless" and is based entirely on one or more cameras placed unobtrusively in the environment; (ii) the gestures considered are predominantly two-handed, which is natural and intuitive, as if the user were speaking with the screen serving as a prop; and (iii) speech and gestures are integrated in a coherent multimodal fashion.
We have trained HMM and HCRF models using features based on PCA and optical flow. A pose estimation algorithm has been designed to identify the object of interest; it involves locating the hands in real time using per-user skin-region modeling. The speech and gesture recognition modules provide independent outputs, which are integrated to execute the user's commands. Experimental results on video sequences obtained from 11 different users performing five gesture classes are discussed.
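As a rough illustration of the pipeline described in the abstract, the following Python sketch shows skin-region hand localization and a per-frame optical-flow motion feature using OpenCV, with hmmlearn standing in for the gesture HMMs. This is not the thesis code; the HSV skin bounds, function names, and the use of hmmlearn are all illustrative assumptions.

import cv2
import numpy as np
from hmmlearn import hmm  # assumption: hmmlearn stands in for the thesis HMMs

SKIN_LO = np.array([0, 48, 80], np.uint8)     # illustrative HSV skin bounds,
SKIN_HI = np.array([20, 255, 255], np.uint8)  # not the thesis skin model

def locate_hands(frame_bgr):
    """Skin-region modeling: return bounding boxes of the two largest skin blobs."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LO, SKIN_HI)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hands = sorted(contours, key=cv2.contourArea, reverse=True)[:2]  # two-handed
    return [cv2.boundingRect(c) for c in hands]  # list of (x, y, w, h)

def flow_feature(prev_gray, gray):
    """Dense optical flow, averaged into a 2-D per-frame motion feature."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return flow.reshape(-1, 2).mean(axis=0)

def classify(feature_seq, models):
    """One fitted hmm.GaussianHMM per gesture class (dict name -> model);
    pick the class whose model gives the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(feature_seq))

Stacking flow_feature outputs over a video clip yields a (T, 2) sequence that can train one GaussianHMM per gesture class and be classified by maximum likelihood, analogous to the HMM approach above; the speech recognizer's output would then be fused with the predicted gesture class to select the final command.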
School code: 0656
Host Item Masters Abstracts International 48-01
Subject Computer Science (0984)
Alt Author State University of New York at Buffalo. Computer Science and Engineering