Optimal Off-Policy Evaluation from Multiple Logging Policies

Abstract

We study off-policy evaluation (OPE) from multiple logging policies, each generating a dataset of fixed size, i.e., stratified sampling. Previous work noted that in this setting the ordering of the variances of different importance sampling estimators is instance-dependent, which raises a dilemma as to which importance sampling weights to use. In this paper, we resolve this dilemma by finding the OPE estimator for multiple loggers with minimum variance for any instance, i.e., the efficient one. In particular, we establish the efficiency bound under stratified sampling and propose an estimator achieving this bound when given consistent 𝑞-estimates. To guard against misspecification of 𝑞-functions, we also provide a way to choose the control variate in a hypothesis class to minimize variance. Extensive experiments demonstrate the benefits of our methods in efficiently leveraging the stratified sampling of off-policy data from multiple loggers.
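To make the dilemma concrete, here is a minimal toy sketch (not the paper's efficient estimator) contrasting two natural importance sampling estimators under stratified sampling from multiple loggers: per-stratum weighting, which divides by each sample's own logging policy, and pooled weighting, which divides by the sample-size-weighted mixture of the loggers. The policies, rewards, and sample sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions = 3

# Toy context-free policies over actions (illustrative values only).
pi_e = np.array([0.7, 0.2, 0.1])          # target (evaluation) policy
loggers = [np.array([0.1, 0.3, 0.6]),     # logging policy pi_1
           np.array([0.4, 0.4, 0.2])]     # logging policy pi_2
sizes = [500, 1500]                        # fixed per-stratum sample sizes n_k

mean_reward = np.array([1.0, 0.5, 0.2])    # E[r | a], toy ground truth
true_value = float(pi_e @ mean_reward)

# Stratified log: each logger k contributes exactly n_k samples.
data = []
for k, (pi_k, n_k) in enumerate(zip(loggers, sizes)):
    a = rng.choice(n_actions, size=n_k, p=pi_k)
    r = rng.binomial(1, mean_reward[a])
    data.append((k, a, r))

n = sum(sizes)
# Mixture logging policy weighted by stratum proportions n_k / n.
pi_avg = sum((n_k / n) * pi_k for pi_k, n_k in zip(loggers, sizes))

# Estimator 1: per-stratum IS, weighting each sample by pi_e / pi_k of its own logger.
v_strat = sum(
    (len(a) / n) * np.mean(pi_e[a] / loggers[k][a] * r) for k, a, r in data
)

# Estimator 2: pooled IS, weighting every sample by pi_e / pi_avg.
v_pool = sum(np.sum(pi_e[a] / pi_avg[a] * r) for _, a, r in data) / n

print(f"true value:          {true_value:.3f}")
print(f"per-stratum IS:      {v_strat:.3f}")
print(f"pooled (mixture) IS: {v_pool:.3f}")
```

Neither of these two weightings has uniformly smaller variance across instances; the paper's estimator is the one that attains the efficiency bound regardless of the instance.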

Publication
In Proceedings of the 38th International Conference on Machine Learning (ICML) (acceptance rate = 21.5%)
Yuta Saito
Third-year CS Ph.D. Student
