Always-Valid Risk Bounds for Low-Rank Online Matrix Completion

Published in arXiv, 2021

Recommended citation: Chi-Hua Wang*, Wenjie Li*, Guang Cheng. "Always-Valid Risk Bounds for Low-Rank Online Matrix Completion." arXiv, 2021.


This paper presents the first online matrix completion procedure that allows continuous monitoring of the prediction risk: decision makers may terminate the task whenever they wish, *and* the result still retains statistical validity. By leveraging non-asymptotic martingale concentration, we design an online regularization sequence that yields always-valid concentration inequalities for monitoring the performance of the online-learned model. Such inequalities advance online learning algorithm design by permitting random, adaptively chosen sample sizes in place of the fixed, pre-specified sample size required in offline statistical learning. Our results contribute a more sample-efficient online algorithm design to the existing matrix completion methodology and serve as a foundation for evaluating online experiment policies in online matrix completion.
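To illustrate the always-valid monitoring idea (not the paper's specific estimator or bound), the sketch below runs a generic SGD-based online matrix completion on a simulated low-rank matrix and maintains a time-uniform Hoeffding-style upper bound on the running prediction risk. The union-bound construction `delta_t = delta / (t(t+1))`, the clipping constant `B`, and all problem dimensions are illustrative assumptions; the paper's own regularization sequence and martingale concentration are sharper than this simple stitched bound.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: rank-2 ground-truth matrix, entries revealed one at a time.
d1, d2, r = 30, 30, 2
U_true = rng.normal(size=(d1, r)) / np.sqrt(r)
V_true = rng.normal(size=(d2, r)) / np.sqrt(r)
M = U_true @ V_true.T

# Online factor model updated by SGD (a generic stand-in,
# NOT the paper's online-regularized estimator).
U = rng.normal(scale=0.1, size=(d1, r))
V = rng.normal(scale=0.1, size=(d2, r))
lr = 0.05

delta = 0.05   # overall error probability for the whole trajectory
B = 4.0        # assumed bound on the clipped per-step squared error

def anytime_radius(t, delta):
    """Time-uniform Hoeffding radius: spending delta_t = delta/(t(t+1))
    at step t gives sum_t delta_t = delta, so by a union bound the
    confidence band holds for ALL t simultaneously with prob. >= 1 - delta."""
    delta_t = delta / (t * (t + 1))
    return B * np.sqrt(np.log(2.0 / delta_t) / (2.0 * t))

losses = []
for t in range(1, 2001):
    i, j = rng.integers(d1), rng.integers(d2)
    err = U[i] @ V[j] - M[i, j]
    losses.append(min(err**2, B))   # clip so the bounded-loss bound applies
    # SGD step on the observed entry
    U[i], V[j] = U[i] - lr * err * V[j], V[j] - lr * err * U[i]

t = len(losses)
mean_loss = float(np.mean(losses))
upper = mean_loss + anytime_radius(t, delta)
print(f"after {t} steps: running risk {mean_loss:.4f}, "
      f"always-valid upper bound {upper:.4f}")
```

Because the band is valid uniformly over time, the monitor can be checked after every observation and the procedure stopped at any data-dependent moment without invalidating the guarantee, which is the key difference from a fixed-sample-size offline analysis.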