The development of sonification model using parameter mapping technique


Bibliographic Details
Main Author: Alter Jimat Embug
Format: Thesis
Language:English
Published: 2017
Subjects:
Online Access:https://eprints.ums.edu.my/id/eprint/42772/1/24%20PAGES.pdf
https://eprints.ums.edu.my/id/eprint/42772/2/FULLTEXT.pdf
https://eprints.ums.edu.my/id/eprint/42772/
Description
Summary:Most instructions given by trainers or therapists for body movements, such as walking, turning, or raising the arms or legs, are delivered through voice instructions or touch. This poses little problem for sighted people, as they can watch the action at the same time. Following these instructions without vision, however, may cause confusion about actions and directions, and there has been no other option for visually impaired persons. Thus, this research studies the transformation of these voice and touch instructions into non-speech sound instructions. The method involves transforming 3-dimensional body-movement (kinematic) data into sounds. This research aims to provide an audio aid with which the user can follow the exact body movements of another person or instructor without any speech commands or instructions. The novel contribution of this research is a sonification model (converting the data into sound) that represents the actions and directions of body movements in 3-dimensional space. Parameter Mapping is used as the conversion approach, where movement properties are mapped to sound properties. The parameter mapping involves three transformation stages: data, acoustic parameters, and sound representations. This research used the Kinect (the Microsoft Xbox 3D movement sensor) as the live 3D movement input data stream; the Kinect was chosen for its ready availability and low cost. Experiments were conducted to examine the effect of knowledge of, and training in, the audio 3D movement design by comparing the number of successful performances between groups. The results demonstrate that knowledge and training increase the number of successful performances compared to the control group. In conclusion, the results show that 3D hand movements can be represented using non-speech sound. The findings indicate that prior training is required to increase the ability to interpret the proposed 3D sound design.
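To illustrate the Parameter Mapping approach described in the summary, the sketch below maps a single 3D hand position (as a Kinect-style sensor might report it) to three acoustic parameters. The specific ranges, axis assignments, and parameter choices here are illustrative assumptions, not the mapping design used in the thesis.

```python
# Minimal sketch of Parameter Mapping sonification: convert one 3D hand
# position into acoustic parameters. Ranges and mappings are assumptions
# for illustration only, not the thesis's actual design.

def linmap(value, lo, hi, out_lo, out_hi):
    """Linearly map value from [lo, hi] to [out_lo, out_hi], clamped."""
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return out_lo + t * (out_hi - out_lo)

def map_position_to_sound(x, y, z):
    """Map a 3D position (metres, sensor frame) to sound parameters."""
    return {
        # left-right position -> stereo pan (-1 = left, +1 = right)
        "pan": linmap(x, -1.0, 1.0, -1.0, 1.0),
        # hand height -> pitch in Hz (higher hand, higher pitch)
        "frequency_hz": linmap(y, 0.0, 2.0, 220.0, 880.0),
        # distance from the sensor -> loudness (closer is louder)
        "amplitude": linmap(z, 0.5, 3.5, 1.0, 0.1),
    }

params = map_position_to_sound(0.0, 1.0, 2.0)
```

In a live system, each incoming Kinect skeleton frame would be passed through such a mapping and the resulting parameters fed to a synthesizer, so that the sound changes continuously as the instructor's hand moves.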