Explicit Memory through Online 3D Gaussian Splatting Improves Class-Agnostic Video Segmentation
Author(s) -
Anthony Opipari,
Aravindhan K Krishnan,
Shreekant Gayaka,
Min Sun,
Cheng-Hao Kuo,
Arnie Sen,
Odest Chadwicke Jenkins
Publication year - 2025
Publication title -
IEEE Robotics and Automation Letters
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 1.123
H-Index - 56
eISSN - 2377-3766
DOI - 10.1109/LRA.2025.3619783
Subject(s) - robotics and control systems , computing and processing , components, circuits, devices and systems
Remembering where object segments were predicted in the past is useful for improving the accuracy and consistency of class-agnostic video segmentation algorithms. Existing video segmentation algorithms typically use either no object-level memory (e.g., FastSAM) or an implicit memory in the form of recurrent neural network features (e.g., SAM2). In this paper, we augment both types of segmentation models with an explicit 3D memory and show that the resulting models make more accurate and consistent predictions. To this end, we develop an online 3D Gaussian Splatting (3DGS) technique that stores the object-level segments predicted throughout the duration of a video. Based on this 3DGS representation, we develop two fusion techniques, named FastSAM-Splat and SAM2-Splat, that use the explicit 3DGS memory to improve their respective foundation models' predictions. Ablation experiments validate the proposed techniques' design and hyperparameter settings. Results from both real-world and simulated benchmarking experiments show that models using explicit 3D memories produce more accurate and consistent predictions than those using no memory or only implicit neural network memories.
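The abstract's core idea is that an explicit memory of past segment predictions lets a model keep object identities consistent across frames. The paper's actual memory is a 3D Gaussian Splatting representation; as a loose, simplified 2D analogy of the fusion step only (not the authors' method), the sketch below keeps a per-object mask memory and matches each new prediction to it by greedy IoU, reusing an existing ID on a sufficient overlap and minting a new ID otherwise. All names and the threshold value are illustrative assumptions.

```python
# Hypothetical sketch, NOT the authors' code: a 2D stand-in for fusing
# current-frame segment predictions with an explicit memory of past
# segments, so object IDs remain consistent over time.
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def fuse_with_memory(pred_masks, memory, next_id, thresh=0.5):
    """Greedily assign each predicted mask the ID of the best-overlapping
    memory segment (IoU >= thresh); otherwise mint a new ID.
    memory is a dict {object_id: boolean mask}. Returns the updated
    memory, the per-mask ID list, and the next unused ID."""
    ids = []
    for m in pred_masks:
        best_id, best_iou = None, thresh
        for mem_id, mem_mask in memory.items():
            v = iou(m, mem_mask)
            if v >= best_iou:
                best_id, best_iou = mem_id, v
        if best_id is None:
            best_id, next_id = next_id, next_id + 1
        memory[best_id] = m  # refresh memory with the latest observation
        ids.append(best_id)
    return memory, ids, next_id
```

In this toy form, a mask that drifts slightly between frames keeps its ID, while a disjoint mask receives a fresh one; the paper's contribution is to perform this association against a persistent 3D representation rather than a 2D one.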