Digital Library: Search Results
Understanding Video Semantic Structure with Spatiotemporal Graph Random Walk
Hoyeoung Yun, Minseo Kim, Eun-Sol Kim
http://doi.org/10.5626/JOK.2024.51.9.801
Understanding a long video requires finding the various semantic units present in the video and interpreting the complex relationships among them. Conventional approaches use CNN- or transformer-based models to encode contextual information for short clips and then model temporal relationships among the clips. However, such approaches struggle to capture the complex relationships among smaller semantic units within each clip. In this paper, we represent video inputs as a spatiotemporal graph, with objects as vertices and relative space-time information between objects as edges, to express the relationships among these semantic units explicitly. Additionally, we propose a novel method that represents major semantic units as compositions of smaller units, using higher-order relational information obtained by spatiotemporal random walks on the graph. Through experiments on the CATER dataset, which involves complex actions performed by multiple objects, we demonstrate that our approach effectively captures semantic units.
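As a rough, hypothetical sketch of the idea in this abstract, the following Python snippet builds a toy spatiotemporal graph with objects as vertices and relative space-time offsets as edge features, then samples random walks whose visited chains stand in for higher-order relations; the locality rule, the distance-based edge weighting, and all sizes are illustrative assumptions, not the paper's implementation.

import numpy as np

# Toy setup (assumed, not from the paper): T frames, N tracked objects
# per frame, 2-D object positions.
rng = np.random.default_rng(0)
T, N = 4, 3
positions = rng.uniform(0.0, 1.0, size=(T, N, 2))  # (frame, object, xy)

num_nodes = T * N

def node_id(t, i):
    return t * N + i

# Edge features: relative spatial offset and temporal gap between objects
# in the same or the next frame (an assumed locality rule).
edges = {}  # (u, v) -> [dx, dy, dt]
for t in range(T):
    for i in range(N):
        for t2 in (t, t + 1):
            if t2 >= T:
                continue
            for j in range(N):
                if t2 == t and j == i:
                    continue
                dxy = positions[t2, j] - positions[t, i]
                edges[(node_id(t, i), node_id(t2, j))] = \
                    np.array([dxy[0], dxy[1], float(t2 - t)])

# Walk transition probabilities: nearby objects are likelier targets
# (an assumed weighting; any edge-feature-dependent kernel would do).
A = np.zeros((num_nodes, num_nodes))
for (u, v), feat in edges.items():
    A[u, v] = np.exp(-np.linalg.norm(feat[:2]))
P = A / A.sum(axis=1, keepdims=True)

def random_walk(start, length):
    # The sequence of visited vertices (and the edge features along it)
    # serves as one sampled higher-order relation among semantic units.
    path = [start]
    for _ in range(length):
        path.append(int(rng.choice(num_nodes, p=P[path[-1]])))
    return path

print(random_walk(node_id(0, 0), length=5))  # a chain of (frame, object) vertex ids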
Efficient Compositional Translation Embedding for Visual Relationship Detection
Yu-Jung Heo, Eun-Sol Kim, Woo Suk Choi, Kyoung-Woon On, Byoung-Tak Zhang
http://doi.org/10.5626/JOK.2022.49.7.544
Scene graphs are widely used to express high-order visual relationships between objects present in an image. To generate scene graphs automatically, we propose an algorithm that detects visual relationships between objects and predicts each relationship as a predicate. Inspired by the well-known knowledge graph embedding method TransR, we present the CompTransR algorithm, which i) defines latent relational subspaces that reflect the compositional nature of visual relationships and ii) encodes predicate representations by applying transitive constraints between object representations in each subspace. Our model not only reduces computational complexity but also surpasses previous state-of-the-art performance on the predicate detection task across three benchmark datasets: VRD, VG200, and VrR-VG. We also show that scene graphs can be applied to image-caption retrieval, a high-level visual reasoning task, and that the scene graphs generated by our model improve retrieval performance.
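To make the TransR-inspired scoring concrete, here is a minimal numpy sketch of a TransR-style predicate score: object features are projected into a predicate-specific subspace, where a translation vector relates subject to object. The dimensions, the one-subspace-per-predicate layout, and all names are assumptions for illustration; CompTransR's compositional subspace sharing and transitive constraints are not reproduced here.

import numpy as np

rng = np.random.default_rng(1)
d_obj, d_rel, n_pred = 8, 4, 5  # entity dim, subspace dim, #predicates (assumed)

# TransR-style parameters: a projection matrix into each relational
# subspace and a translation vector per predicate.
M = rng.normal(size=(n_pred, d_rel, d_obj))  # subspace projections
r = rng.normal(size=(n_pred, d_rel))         # predicate translations

def score(subj, pred, obj):
    # Project both objects into the predicate's subspace and test the
    # translation subj_r + r[pred] ~= obj_r; lower means more plausible.
    s_r = M[pred] @ subj
    o_r = M[pred] @ obj
    return float(np.linalg.norm(s_r + r[pred] - o_r))

subj_feat = rng.normal(size=d_obj)  # e.g. pooled detector features (assumed)
obj_feat = rng.normal(size=d_obj)
print(score(subj_feat, pred=2, obj=obj_feat))

In training, such a score would typically be minimized for observed (subject, predicate, object) triples and maximized for corrupted ones via a margin loss, as in the TransR literature.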
Locally Linear Embedding for Face Recognition with Simultaneous Diagonalization
Eun-Sol Kim, Yung-Kyun Noh, Byoung-Tak Zhang
Locally linear embedding (LLE) [1] is a manifold learning algorithm that preserves inner-product values between high-dimensional data points when embedding them into a low-dimensional space. LLE embeds data points lying on the same subspace close together in the low-dimensional space, because such points have large inner-product values. Conversely, data points that are orthogonal to each other are embedded far apart, even when they lie close together in the high-dimensional space. Meanwhile, it is well known that facial images of the same person under varying illumination lie in a low-dimensional linear subspace [2]. In this study, we propose an improved LLE method for the face recognition problem. The method exploits the property of LLE that data points located orthogonally to one another are embedded entirely separately. To this end, the subspaces spanned by the individual classes are forced to be mutually orthogonal, using the simultaneous diagonalization (SD) technique. Experimental results show that the proposed method dramatically improves both the embedding quality and the classification performance.
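As a concrete sketch of the simultaneous diagonalization step, the snippet below diagonalizes two class scatter matrices at once through a generalized eigendecomposition, which is one standard way to realize SD; the two-class setup, the scatter definition, and the random data are illustrative assumptions rather than the paper's procedure.

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
X1 = rng.normal(size=(50, 10))  # hypothetical class-1 samples (n x d)
X2 = rng.normal(size=(50, 10))  # hypothetical class-2 samples

S1 = X1.T @ X1 / len(X1)  # per-class scatter matrices
S2 = X2.T @ X2 / len(X2)

# Generalized eigenproblem S2 w = lambda * S1 w. The eigenvector basis W
# satisfies W^T S1 W = I and W^T S2 W = diag(lambdas) simultaneously,
# i.e. both scatters are diagonal in the same coordinates.
lambdas, W = eigh(S2, S1)

print(np.allclose(W.T @ S1 @ W, np.eye(10), atol=1e-6))        # True
print(np.allclose(W.T @ S2 @ W, np.diag(lambdas), atol=1e-6))  # True

Transforming the data with such a basis before running LLE pushes the class subspaces toward mutual orthogonality, which is the separation property the method exploits.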
Journal of KIISE
- ISSN : 2383-630X(Print)
- ISSN : 2383-6296(Electronic)
- KCI Accredited Journal
Editorial Office
- Tel. +82-2-588-9240
- Fax. +82-2-521-1352
- E-mail. chwoo@kiise.or.kr