
Enhancing Glass Segmentation Accuracy via Boundary-Context Guided Physics-Aware Modeling.
Author(s) -
Guojun Chen,
Jianqiang Yuan,
Haozhen Chen,
Huihui Li,
Jiale Chen
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3598465
Subject(s) - aerospace, bioengineering, communication, networking and broadcast technologies, components, circuits, devices and systems, computing and processing, engineered materials, dielectrics and plasmas, engineering profession, fields, waves and electromagnetics, general topics for engineers, geoscience, nuclear engineering, photonics and electrooptics, power, energy and industry applications, robotics and control systems, signal processing and analysis, transportation
Transparent objects such as glass pose significant challenges, and potential safety risks, to vision systems: their transparency and complex light interactions often lead to blurred boundaries and contextual ambiguity. Existing methods typically suffer from weak edge responses and insufficient cross-scale context integration. To address these issues, we propose the Boundary-Context Guided Glass Detection Network (BCGNet), a novel encoder-decoder architecture that combines global context modeling with boundary-aware refinement for accurate glass segmentation. Specifically, we design a dual-path Boundary Feature Refinement (BFR) module, in which the spatial branch captures position-sensitive boundary features, while the channel branch, inspired by physical light propagation, leverages optical flow-guided feature warping to model geometry-aware deformations along transparent edges, thereby mitigating semantic confusion. Additionally, we develop a Multi-scale Context Aggregator that integrates a Contextual Cross-Attention Module (CCAM) and a Depthwise Separable Feed-Forward Network (D-FFN) through hierarchical feature interaction. CCAM captures long-range dependencies via dual-stream attention, while D-FFN reconstructs boundary-sensitive local representations using depthwise separable convolutions. Extensive experiments on four public datasets demonstrate that BCGNet consistently outperforms existing methods, validating its robustness and effectiveness in complex real-world scenarios.
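The abstract states that D-FFN relies on depthwise separable convolutions to rebuild boundary-sensitive local features cheaply. The paper's exact D-FFN design is not given here, so the following is only a minimal NumPy sketch of the generic depthwise separable operation (a per-channel k×k depthwise filter followed by a 1×1 pointwise channel mixer) and the parameter savings it offers over a standard convolution; all function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Depthwise conv (one kxk filter per channel) + 1x1 pointwise conv.

    x          : (C_in, H, W) input feature map
    dw_kernels : (C_in, k, k) one spatial filter per input channel
    pw_weights : (C_out, C_in) 1x1 conv mixing channels
    returns    : (C_out, H, W)  ('same' zero padding)
    """
    C, H, W = x.shape
    k = dw_kernels.shape[1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    dw_out = np.zeros_like(x)
    for c in range(C):            # each channel is filtered independently
        for i in range(H):
            for j in range(W):
                dw_out[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * dw_kernels[c])
    # pointwise 1x1 conv: linear mix across channels at every pixel
    return np.tensordot(pw_weights, dw_out, axes=([1], [0]))

# Parameter count vs. a standard convolution with the same channel widths:
C_in, C_out, k = 64, 64, 3
standard  = C_in * C_out * k * k           # full conv: 36864 weights
separable = C_in * k * k + C_in * C_out    # depthwise + pointwise: 4672 weights
```

With 64-channel features and a 3×3 kernel, the separable form uses roughly 8× fewer weights, which is why such blocks are a common choice for lightweight feed-forward stages in segmentation decoders.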