Digital Library: Search Result
Optimization of EOG-Based Horizontal Gaze Tracking Lightweight Deep Learning Algorithm in a Virtual Environment
http://doi.org/10.5626/JOK.2024.51.2.184
This study presents an algorithm for real-time prediction of eye blinks with high accuracy and minimal parameters, using a deep learning model. Previous eye-tracking algorithms relied on the assumption that the EOG (electrooculography) signal from the pupil is linear with the gaze angle [1,2]. In contrast, the algorithm presented in this paper learns an inductive bias from the available data. As a result, a lightweight deep learning network built from layers such as 1D CNNs (convolutional neural networks) and residual blocks can make real-time predictions. In this study, we conducted an experiment with a device that could predict eye movements, even while the user wears an HMD (head-mounted display) designed for virtual environments, via the deep learning model's predictions of eye blinks. Reconstruction of the eye from EOG data, as studied here, has the potential to yield realistic reconstructions. With further research on vertical and extreme eye movements, the avatar's eyes could be driven in real time.
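The abstract names the building blocks of the lightweight network (1D convolutions with residual, i.e. identity-shortcut, connections) but not their implementation. As an illustrative sketch only, not the paper's actual model, a residual 1D convolution block over an EOG signal might look like the following (kernel values and activation choices are assumptions):

```python
import numpy as np

def conv1d(x, kernel):
    """1D convolution with zero 'same' padding (odd kernel length)."""
    pad = len(kernel) // 2
    xp = np.pad(x, pad)
    return np.array([np.dot(xp[i:i + len(kernel)], kernel)
                     for i in range(len(x))])

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, k1, k2):
    """Identity-shortcut residual block: y = ReLU(x + Conv(ReLU(Conv(x))))."""
    h = relu(conv1d(x, k1))
    h = conv1d(h, k2)
    return relu(x + h)
```

Because the shortcut is an identity addition, the block preserves the signal length, which keeps the parameter count low enough for real-time inference.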
3D Object-grabbing Hand Tracking based on Depth Reconstruction and Prior Knowledge of Grasp
Woojin Cho, Gabyong Park, Woontack Woo
http://doi.org/10.5626/JOK.2019.46.7.673
We propose a real-time 3D object-grabbing hand tracking system based on prior knowledge of grasping. Tracking a hand interacting with an object is more difficult than tracking an isolated hand, since occlusion by the object must be taken into account. Most previous studies rely on insufficient data that lacks depth observations of the occluded hand, and overlook the fact that the presence of an object can itself constrain the pose of the hand. In the present work, we focus on sequences of a hand grabbing an object, utilizing prior knowledge about the grasp situation. Accordingly, missing depth data of the hand occluded by the object were reconstructed with plausible depth values, and a reinitialization process was conducted based on plausible human grasp poses. The effectiveness of the proposed processes was verified on a model-based tracker with particle swarm optimization. Quantitative and qualitative experiments demonstrate that the proposed processes effectively improve the performance of the model-based tracker for the object-grabbing hand.
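The abstract mentions that the underlying tracker optimizes the hand pose with particle swarm optimization (PSO). A generic PSO minimizer, shown here on a toy objective rather than the paper's hand-model energy (the hyperparameters and search bounds are assumptions), works as follows:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, seed=0):
    """Minimize `objective` over R^dim with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                                  # per-particle best
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()            # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                           # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```

In a model-based hand tracker, `objective` would score how well a rendered hand pose matches the observed (and, here, reconstructed) depth data.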
Quantified Lockscreen: Integration of Personalized Facial Expression Detection and Mobile Lockscreen application for Emotion Mining and Quantified Self
Sung Sil Kim, Junsoo Park, Woontack Woo
The lockscreen is one of the interfaces smartphone users encounter most frequently. Although users perform unlocking actions every day, lockscreens offer no benefit beyond security and authentication. In this paper, we replace the traditional lockscreen with an application that analyzes facial expressions in order to collect facial expression data and provide real-time feedback to users. To evaluate this concept, we implemented the Quantified Lockscreen application, supporting the following contributions of this paper: 1) an unobtrusive interface for collecting facial expression data and evaluating emotional patterns, 2) improved accuracy of facial expression detection through a personalized machine learning process, and 3) enhanced validity of emotion data through a bidirectional, multi-channel, multi-input methodology.
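Contribution 2 describes personalizing a generic expression detector with a user's own samples. One minimal way to sketch that idea (this is a hypothetical nearest-centroid scheme, not the paper's method; the class and feature names are invented for illustration) is to shift class centroids toward each labelled sample the user provides:

```python
import numpy as np

class PersonalizedExpressionClassifier:
    """Nearest-centroid expression classifier whose generic centroids
    are adapted toward one user's labelled feature vectors."""

    def __init__(self, generic_centroids):
        # generic_centroids: {label: feature vector} from a population model
        self.centroids = {k: np.asarray(v, float)
                          for k, v in generic_centroids.items()}

    def personalize(self, features, label, lr=0.3):
        """Move the label's centroid toward this user's sample."""
        c = self.centroids[label]
        self.centroids[label] = (1 - lr) * c + lr * np.asarray(features, float)

    def predict(self, features):
        f = np.asarray(features, float)
        return min(self.centroids,
                   key=lambda k: np.linalg.norm(self.centroids[k] - f))
```

Each unlock event could supply one more labelled sample, so the model gradually fits the individual user's expressions without retraining from scratch.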
Journal of KIISE
- ISSN : 2383-630X(Print)
- ISSN : 2383-6296(Electronic)
- KCI Accredited Journal
Editorial Office
- Tel. +82-2-588-9240
- Fax. +82-2-521-1352
- E-mail. chwoo@kiise.or.kr