Digital Library: Search Results
A Comparative Analysis of the Motion Recognition Rate by Direction of Push-up Activity Using ELM Algorithm
Sangwoong Kim, Jaeyeong Ryu, Jiwoo Jeong, Dongyeong Kim, Youngho Chai
http://doi.org/10.5626/JOK.2023.50.12.1031
In this paper, we propose a motion recognition system for each direction of push-up activity using the ELM algorithm. The proposed recognition process consists of three parts. The first part reads the motion data: the data acquired from the motion capture system is loaded into the system's memory. The system then extracts a feature vector from the motion data. The 3D position data converted from the quaternion values of the motion data is projected onto the X-Y, Y-Z, and Z-X planes of the system, and the projected values are used as the final feature vector. The feature vectors projected on each plane train a separate ELM, so a total of three ELMs are learned. Finally, the test data is input to each trained ELM to derive the final recognition result. For the motion data, four types of push-up were recorded: first, general push-ups performed in the correct posture, which were selected as the training dataset; second, push-ups in which the upper chest does not go all the way down; third, push-ups in which only the buttocks rise when bending and lifting; and fourth, push-ups in which the elbows move away from the upper chest when bending. These motions were mixed to build the test dataset.
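As a rough illustration of the pipeline the abstract describes (tri-plane projection of 3D joint positions, one ELM per plane, and a combined decision), the following is a minimal sketch, not the authors' code. The array shapes, the tanh activation, and the majority-vote combination of the three ELMs are assumptions for the example.

```python
# Minimal sketch: project (frames, joints, 3) position data onto the X-Y, Y-Z,
# and Z-X planes and train one Extreme Learning Machine (ELM) per plane.
import numpy as np

def project(positions, plane):
    """Keep two of the three coordinates and flatten each sample."""
    axes = {"xy": [0, 1], "yz": [1, 2], "zx": [2, 0]}[plane]
    return positions[..., axes].reshape(positions.shape[0], -1)

class ELM:
    """Basic single-hidden-layer ELM: random hidden weights, analytic output weights."""
    def __init__(self, n_hidden=256, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        T = np.eye(int(y.max()) + 1)[y]               # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)              # hidden-layer activations
        self.beta = np.linalg.pinv(H) @ T             # least-squares output weights
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

def train_triplane_elms(positions, labels):
    return {p: ELM(seed=i).fit(project(positions, p), labels)
            for i, p in enumerate(("xy", "yz", "zx"))}

def predict_by_vote(models, positions):
    votes = np.stack([models[p].predict(project(positions, p)) for p in models])
    # Majority vote across the three per-plane ELMs (an assumed combination rule).
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```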
A Dynamic Gesture Recognition System based on Trajectory Data of the Motion-sphere
Jaeyeong Ryu, Adithya B, Ashok Kumar Patil, Youngho Chai
http://doi.org/10.5626/JOK.2021.48.7.781
Recently, dynamic gesture recognition technology, which belongs to human-computer interaction (HCI), has received much attention because its interface configuration is simple and it enables fast communication. In this paper, we use a new input data format for a dynamic gesture recognition system and study how it improves recognition accuracy. Existing dynamic gesture recognition systems mainly use the position and rotation data of the joints. The proposed system instead uses motion-sphere trajectory data. The motion-sphere is a technique for visualizing movement that expresses motion intuitively; its representation consists of a trajectory and a twist angle. In this paper, the trajectory of the motion-sphere is used as the input data of the dynamic gesture recognition system, and the validity of the trajectory data is verified through a comparison of dynamic gesture recognition accuracy. We experimented on two cases: the first used measured quaternion data, and the second used open motion data. Both experiments included recognition accuracy tests, and each yielded high recognition accuracy.
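One plausible way to obtain a trajectory of the kind the abstract mentions is to rotate a fixed reference vector by each frame's quaternion and trace the result on a unit sphere. The sketch below follows that reading; the choice of reference axis and the fixed-length resampling are assumptions, not the paper's exact construction.

```python
# Hedged sketch: map per-frame quaternions to a point trajectory on a unit sphere,
# then resample it into a fixed-length feature vector for a recognizer.
import numpy as np

def rotate_vector(q, v):
    """Rotate 3-vector v by unit quaternion q = (w, x, y, z)."""
    w, u = q[0], np.asarray(q[1:], dtype=float)
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def sphere_trajectory(quaternions, reference=(0.0, 0.0, 1.0)):
    """Map a (frames, 4) quaternion sequence to points on the unit sphere."""
    ref = np.asarray(reference, dtype=float)
    points = np.array([rotate_vector(q, ref) for q in quaternions])
    return points / np.linalg.norm(points, axis=1, keepdims=True)

def flatten_trajectory(quaternions, n_samples=32):
    """Resample the trajectory to a fixed length so it can feed a classifier."""
    traj = sphere_trajectory(np.asarray(quaternions, dtype=float))
    idx = np.linspace(0, len(traj) - 1, n_samples).round().astype(int)
    return traj[idx].ravel()    # fixed-length feature vector
```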
Joint Sphere based 3D Animation Motion Authoring for Joint Units
Jieun Lee, Taehwan Kwon, Youngho Chai
http://doi.org/10.5626/JOK.2021.48.4.453
Research on creating natural human movement for the motion work of 3D animation is progressing in various fields with the development of technology. For more natural and realistic animation, many animators have either worked directly on key-frames or used motion capture to author motion. However, making several keys per second or measuring motion with sensors is inefficient even for simple motion authoring and modification, so these problems need to be solved to make simple motion modification possible. In this study, we analyze the existing motion authoring methods, key-frame animation and motion capture, and propose a new motion authoring method that complements their disadvantages. Human movement is recorded through a joint-sphere attached to each joint of the 3D character, and the recorded pattern and motion are then revised. Because the motion is modified through trajectory modification, the rotation angle and joints of each part of the model do not need to be adjusted one by one. As a result, convenience increases and working time decreases compared to the existing motion authoring systems.
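To make the "edit the trajectory, not every key" idea concrete, here is an illustrative sketch only: each joint's direction is recorded as a trajectory on a unit joint-sphere, and one rotation edits the whole trajectory at once instead of re-keying every frame. The axis-angle edit and the example data are assumptions.

```python
# Record a joint's direction over time on a unit sphere and edit the whole
# trajectory with a single Rodrigues rotation.
import numpy as np

def rodrigues(points, axis, angle):
    """Rotate an (n, 3) set of points about a unit axis by `angle` radians."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    cos, sin = np.cos(angle), np.sin(angle)
    cross = np.cross(axis, points)                        # k x v per point
    dot = points @ axis                                    # k . v per point
    return points * cos + cross * sin + np.outer(dot, axis) * (1.0 - cos)

class JointSphereTrack:
    """Per-joint trajectory of unit direction vectors recorded over time."""
    def __init__(self):
        self.points = []

    def record(self, direction):
        d = np.asarray(direction, dtype=float)
        self.points.append(d / np.linalg.norm(d))

    def edited(self, axis, angle):
        """Return the whole trajectory rotated at once (one edit, all frames)."""
        return rodrigues(np.asarray(self.points), axis, angle)

# Example: lift a recorded elbow trajectory by 10 degrees about the x-axis.
track = JointSphereTrack()
for d in ([0, 1, 0], [0, 0.9, 0.4], [0, 0.7, 0.7]):
    track.record(d)
modified = track.edited(axis=[1, 0, 0], angle=np.radians(10))
```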
Partial Movement Authoring for a Reconfigurable Motion Capture
Seonghun Kim, Yangkyu Lim, Youngho Kim, Youngho Chai
http://doi.org/10.5626/JOK.2019.46.10.989
Research on human motion perception has progressed along with the development of technology, and the demand for gesture recognition is increasing daily. In motion capture, data on each part of the body are acquired using a sensor or camera in order to reproduce a natural movement as an actual motion. However, it is inefficient to use the sensor every time motion data must be obtained, or to repeat measurements because of slight differences in motion. In addition, various problems must be solved in order to transfer the stored data to another person or to modify it for use in another action.
In this paper, we review trends in motion recognition research and analyze the characteristics of the motion reproduction method using keyframe animation and of the Labanotation motion recording method. We then propose a motion authoring method that addresses the disadvantages of the existing methods. By visualizing and recording the patterns of partial motions as image units, it becomes possible to reconstruct other motions while preserving the meaning of the motion.
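The recombination idea can be sketched as a small library of recorded partial-motion units that are reassembled into a new full-body motion. This is a toy illustration under stated assumptions; the data layout, unit names, and the hold-last-frame rule are not from the paper.

```python
# Toy sketch: store partial motions per body part and compose a new motion by
# naming which recorded unit each part should reuse.
from typing import Dict, List

Pose = List[float]                      # pose values for one body part at one frame

class PartialMotionLibrary:
    def __init__(self):
        # body part -> unit name -> sequence of per-frame poses
        self.units: Dict[str, Dict[str, List[Pose]]] = {}

    def record(self, part: str, name: str, frames: List[Pose]) -> None:
        self.units.setdefault(part, {})[name] = frames

    def compose(self, plan: Dict[str, str]) -> List[Dict[str, Pose]]:
        """Build a full-body motion by picking one recorded unit per body part."""
        chosen = {part: self.units[part][name] for part, name in plan.items()}
        length = max(len(frames) for frames in chosen.values())
        motion = []
        for i in range(length):
            # Hold the last frame of shorter units so every part stays defined.
            motion.append({part: frames[min(i, len(frames) - 1)]
                           for part, frames in chosen.items()})
        return motion

# Usage example with made-up one-value poses.
lib = PartialMotionLibrary()
lib.record("arms", "raise", [[0.0], [0.5], [1.0]])
lib.record("legs", "step", [[0.0], [1.0]])
combined = lib.compose({"arms": "raise", "legs": "step"})
```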
Implementation of a Stable Point-of-View for Dual Gazing based on the Principle of Eye Movement
Ire Eom, Hanna Lee, Adithya B, Youngho Chai
http://doi.org/10.5626/JOK.2019.46.5.419
Using a dual gaze based on eye movement in a first-person game provides a stable view while minimizing image blur. A First-Person Shooter (FPS) game is based on a first-person viewpoint in which the camera responsible for the player's gaze is combined with the in-game character. Thus, when the character moves, the viewpoint moves with it, which corresponds to fixed vision with a still head. In human vision, however, the eyes and the head move separately: the eyes gaze at the object first and the head then follows, which allows a steady gaze at an object even during body movement.
In this paper, we propose a stable viewpoint that applies the principle of the vestibular reflex to the camera in an FPS game. The game environment is created using the Unity game engine, and the stability of the dual-gaze viewpoint is demonstrated by comparing the viewpoints obtained when the eye and head are fixed and when the vestibular reflex is applied under specific scenarios.
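The "eye leads, head follows" behavior can be sketched numerically outside the game engine. The snippet below is a hedged 1D yaw-only simulation in Python, not the paper's Unity implementation; the exponential smoothing constant and the update function are assumptions chosen only to show how the eye angle stays on the target while the head angle lags behind.

```python
# Hedged sketch of the decoupled dual gaze: the eye angle jumps to the target
# immediately while the head angle eases toward it, so the gaze stays stable
# during head and body motion.
import math

def update_dual_gaze(head_yaw, target_yaw, dt, head_speed=4.0):
    """Return (new_head_yaw, eye_yaw): eye fixes the target, head lags behind."""
    eye_yaw = target_yaw                                  # eye acquires the target first
    head_yaw += (target_yaw - head_yaw) * (1.0 - math.exp(-head_speed * dt))
    return head_yaw, eye_yaw

# The rendered view follows the eye yaw, so the viewed target barely moves
# even while the head is still turning toward it.
head = 0.0
for step in range(5):
    head, eye = update_dual_gaze(head, target_yaw=30.0, dt=1 / 60)
    print(f"step {step}: head={head:5.2f} deg, eye={eye:.1f} deg")
```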