
Contrastive Disentangled Variational Autoencoder for Collaborative Filtering
Author(s) -
Woo-Seong Yun,
Seong-Min Kang,
Yoon-Sik Cho
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3576445
Subject(s) - aerospace, bioengineering, communication, networking and broadcast technologies, components, circuits, devices and systems, computing and processing, engineered materials, dielectrics and plasmas, engineering profession, fields, waves and electromagnetics, general topics for engineers, geoscience, nuclear engineering, photonics and electrooptics, power, energy and industry applications, robotics and control systems, signal processing and analysis, transportation
Recommender systems aim to accurately predict user preferences in order to suggest potential items of interest. However, the highly skewed long-tail item distribution leads models to focus more on popular items, which can harm predictive performance by repeatedly recommending the same items. We propose a novel Variational Autoencoder (VAE) framework for collaborative filtering based on contrastive disentanglement. We contrast salient latent features in the VAE against a non-salient background. Here, we intentionally generate a background dataset based on item popularity, a concept previously unexplored in existing recommender systems. The representations learned with our proposed scheme better reflect salient latent factors instead of being washed out by the latent factors of popular items. Consequently, our model improves predictive performance by effectively steering the representations toward salient latent features while excluding the effects of popularity. The proposed contrastive disentanglement framework is generic and thus adaptable to VAE-based recommender algorithms. We comprehensively evaluate our proposed method on popular datasets such as MovieLens20M and Netflix and show that it consistently outperforms the corresponding VAE models, achieving superior performance across all metrics. Notably, in terms of diversity, our approach outperforms the best-performing methods by 5.5% and 4.8% on the MovieLens20M and Netflix datasets, respectively. We further present a way of extending the framework to an ensemble of two contrasts with two different backgrounds, which achieves state-of-the-art performance in standard settings with significant improvements. Our code is available at https://github.com/yunwooseong/CD-VAE.
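To make the contrastive-disentanglement idea in the abstract concrete, the following is a minimal sketch of how such a model could be wired up, assuming a Mult-VAE-style recommender in PyTorch. Everything here (the make_background generator, the ContrastiveVAE class, the top_frac parameter, and the specific loss weighting) is a hypothetical illustration based on the abstract's description, not the authors' implementation; for the actual method, see https://github.com/yunwooseong/CD-VAE.

import torch
import torch.nn as nn
import torch.nn.functional as F

def make_background(x, item_popularity, top_frac=0.2):
    """Assumed background generator: keep only interactions with the most
    popular items, so the background captures popularity-driven signal."""
    n_items = x.size(1)
    k = max(1, int(top_frac * n_items))
    top_items = torch.topk(item_popularity, k).indices
    mask = torch.zeros(n_items, device=x.device)
    mask[top_items] = 1.0
    return x * mask  # zero out interactions with non-popular items

class ContrastiveVAE(nn.Module):
    """Hypothetical contrastive VAE: a salient encoder for user-specific
    preference and a background encoder for popularity-driven signal."""
    def __init__(self, n_items, salient_dim=64, background_dim=64, hidden=600):
        super().__init__()
        self.enc_s = nn.Sequential(nn.Linear(n_items, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 2 * salient_dim))
        self.enc_b = nn.Sequential(nn.Linear(n_items, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 2 * background_dim))
        self.dec = nn.Sequential(nn.Linear(salient_dim + background_dim, hidden),
                                 nn.Tanh(), nn.Linear(hidden, n_items))

    @staticmethod
    def reparam(stats):
        # Split encoder output into (mu, logvar), sample z, return Gaussian KL.
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return z, kl

    def forward(self, x_target, x_background, beta=0.2):
        # Target users: salient + background latents reconstruct x_target.
        z_s, kl_s = self.reparam(self.enc_s(x_target))
        z_b, kl_b = self.reparam(self.enc_b(x_target))
        logits_t = self.dec(torch.cat([z_s, z_b], dim=-1))
        # Background batch (generated from the same users, so shapes match):
        # the salient latent is clamped to zero, so popularity-driven structure
        # must be explained by the background latent alone.
        z_bg, kl_bg = self.reparam(self.enc_b(x_background))
        logits_b = self.dec(torch.cat([torch.zeros_like(z_s), z_bg], dim=-1))
        # Multinomial log-likelihood, as in Mult-VAE-style recommenders.
        nll_t = -(F.log_softmax(logits_t, -1) * x_target).sum(-1).mean()
        nll_b = -(F.log_softmax(logits_b, -1) * x_background).sum(-1).mean()
        return nll_t + nll_b + beta * (kl_s + kl_b + kl_bg)

# Example usage: x is a (batch, n_items) binary interaction matrix and
# item_popularity a length-n_items tensor of interaction counts.
# model = ContrastiveVAE(n_items=x.size(1))
# loss = model(x, make_background(x, item_popularity))
# loss.backward()

The key design choice in this sketch is that the background batch is decoded with the salient latent zeroed out, so popularity-driven structure is absorbed by the background latent, leaving the salient latent free to capture user-specific preferences; at recommendation time one would score items from the salient latent to exclude popularity effects.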