Beyond-Accuracy (Sparsed-) coVariance Neural Network Recommender Systems

Bachelor Thesis (2025)
Author(s)

I. Bozhanin (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

E. Isufi – Mentor (TU Delft - Multimedia Computing)

K.A. Hildebrandt – Graduation committee member (TU Delft - Computer Graphics and Visualisation)

A. Cavallo – Mentor (TU Delft - Multimedia Computing)

C. Liu – Mentor (TU Delft - Multimedia Computing)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2025
Language
English
Graduation Date
27-06-2025
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Accuracy-driven recommender systems risk confining users to “filter bubbles” of familiar content. Recent work on coVariance Neural Networks (VNNs) provides a scalable alternative to Principal Component Analysis (PCA) for modelling high-order correlations, but their impact on beyond-accuracy metrics (BAMs), such as Novelty and Diversity, remains unexplored.
We use the user–user covariance (or its inverse, the precision matrix) as a graph shift operator (GSO) and train SelectionGNN-based VNNs on the MovieLens-100K dataset.
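
As an illustration, such a GSO might be built as in the following sketch; the per-user mean imputation of missing ratings, the ridge term in the inversion, and all names are assumptions for exposition, not the thesis implementation.

    import numpy as np

    def covariance_gso(ratings: np.ndarray, precision: bool = False) -> np.ndarray:
        """Build a user-user GSO from a (num_users x num_items) rating matrix.

        Zeros are assumed to mark missing ratings and are mean-imputed per
        user (an illustrative preprocessing choice).
        """
        filled = ratings.astype(float).copy()
        for u in range(filled.shape[0]):
            rated = filled[u] > 0
            if rated.any():
                filled[u, ~rated] = filled[u, rated].mean()
        S = np.cov(filled)  # rows are users, so S is the user-user covariance
        if precision:
            # A small ridge keeps the inverse well defined if S is rank deficient.
            S = np.linalg.inv(S + 1e-3 * np.eye(S.shape[0]))
        return S
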
Two training regimes are evaluated: (i) RMSE-only (No-BAM-SVNN) and (ii) a compound loss that also includes novelty and diversity terms (BAM-SVNN).
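
The compound objective could take a shape like the sketch below; the exact novelty/diversity terms, their signs, and the weights alpha and beta are not specified in this abstract, so all of them are illustrative assumptions.

    import torch

    def compound_loss(pred: torch.Tensor, target: torch.Tensor,
                      novelty: torch.Tensor, diversity: torch.Tensor,
                      alpha: float = 0.1, beta: float = 0.1) -> torch.Tensor:
        """RMSE plus weighted beyond-accuracy terms.

        `novelty` and `diversity` are assumed to be differentiable scalars
        derived from the model's recommendation scores; alpha and beta are
        illustrative weights, not the values used in the thesis.
        """
        rmse = torch.sqrt(torch.mean((pred - target) ** 2))
        # Subtracting the BAM terms rewards higher novelty and diversity.
        return rmse - alpha * novelty - beta * diversity
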
For each regime we sweep six graph configurations: covariance/precision crossed with {dense, hard-threshold, soft-threshold} sparsification, under five random seeds, yielding 30 runs per regime.
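
A minimal sketch of the three sparsification modes, assuming a given threshold tau and a diagonal left intact (both assumptions for illustration):

    import numpy as np

    def sparsify(S: np.ndarray, tau: float, mode: str = "dense") -> np.ndarray:
        """Apply {dense, hard, soft} sparsification to the off-diagonal of a GSO."""
        if mode == "dense":
            return S
        out = S.copy()
        off = ~np.eye(S.shape[0], dtype=bool)
        if mode == "hard":
            out[off & (np.abs(S) < tau)] = 0.0  # zero out weak edges
        elif mode == "soft":
            # Shrink every off-diagonal entry toward zero by tau.
            out[off] = np.sign(S[off]) * np.maximum(np.abs(S[off]) - tau, 0.0)
        else:
            raise ValueError(f"unknown mode: {mode}")
        return out
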
Baseline comparisons include PCA, a naive mean–std model, and a random predictor.

The best SVNN configuration increases recommendation Novelty by 2.8 percentage points and matches PCA’s Diversity while incurring only a 0.03 RMSE penalty.
Hard-thresholded precision graphs provide the lowest SVNN RMSE (0.952), whereas dense covariance graphs maximise diversity (0.868).
Integrating novelty/diversity directly into the loss offers no additional benefit yet increases runtime by a factor of 33.
One-way ANOVA indicates that model family explains 97.6% of RMSE variance (\(\eta^2=0.976\)) and 77.8% of novelty variance.
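
For reference, \(\eta^2\) is the between-group sum of squares divided by the total sum of squares; a small sketch with hypothetical per-family result arrays:

    import numpy as np
    from scipy import stats

    def eta_squared(*groups: np.ndarray) -> float:
        """Effect size eta^2 = SS_between / SS_total for a one-way ANOVA."""
        grand = np.concatenate(groups).mean()
        ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
        ss_total = sum(((g - grand) ** 2).sum() for g in groups)
        return ss_between / ss_total

    # Hypothetical usage: per-seed RMSE values grouped by model family.
    # f_stat, p_value = stats.f_oneway(svnn_rmse, pca_rmse, naive_rmse)
    # effect = eta_squared(svnn_rmse, pca_rmse, naive_rmse)
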

This work is the first to benchmark (sparsified) VNNs on beyond-accuracy metrics, demonstrating a favourable accuracy–novelty trade-off and clarifying when sparsification and BAM-weighted training pay off.
All code, data splits and statistical notebooks are released for full reproducibility.

Files

Paper.pdf
(pdf | 10.6 MB)
License info not available