Yuta Saito
Off-Policy Evaluation
Off-Policy Evaluation of Ranking Policies under Diverse User Behavior
Ranking interfaces are everywhere in online platforms. There is thus an ever-growing interest in their Off-Policy Evaluation (OPE), …
Haruka Kiyohara, Tatsuya Matsuhiro, Yusuke Narita, Nobuyuki Shimizu, Yasuo Yamamoto, Yuta Saito
Code · arXiv
Off-Policy Evaluation for Large Action Spaces via Conjunct Effect Modeling
We study off-policy evaluation (OPE) of contextual bandit policies for large discrete action spaces where conventional …
Yuta Saito, Qingyang Ren, Thorsten Joachims
Code · arXiv
Policy-Adaptive Estimator Selection for Off-Policy Evaluation
Off-policy evaluation (OPE) aims to accurately evaluate the performance of counterfactual policies using only offline logged data. …
Takuma Udagawa, Haruka Kiyohara, Yusuke Narita, Yuta Saito, Kei Tateno
Code · Slides · arXiv
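Several of these abstracts define OPE as estimating a counterfactual policy's value from logged data alone. As a purely illustrative aside (not code from any of the papers listed here), the NumPy sketch below shows the standard Inverse Propensity Scoring (IPS) estimator that much of this work builds on; all array and function names are hypothetical.

```python
import numpy as np

def ips_estimate(rewards, logging_propensities, evaluation_propensities):
    """Standard IPS estimate of an evaluation policy's value.

    rewards: observed rewards r_i for the logged actions.
    logging_propensities: pi_0(a_i | x_i), probability the logging policy chose a_i.
    evaluation_propensities: pi_e(a_i | x_i), probability the evaluation policy
        would choose the same logged action a_i.
    """
    weights = evaluation_propensities / logging_propensities  # importance weights
    return np.mean(weights * rewards)

# toy usage with synthetic logged data (hypothetical numbers)
rng = np.random.default_rng(0)
n = 1000
pi_0 = rng.uniform(0.1, 0.9, size=n)   # logging propensities
pi_e = rng.uniform(0.1, 0.9, size=n)   # evaluation propensities for the logged actions
r = rng.binomial(1, 0.3, size=n).astype(float)
print(ips_estimate(r, pi_0, pi_e))
```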
Counterfactual Evaluation and Learning for Interactive Systems
Counterfactual estimators enable the use of existing log data to estimate how some new target recommendation policy would have …
Yuta Saito, Thorsten Joachims
Code · Proceedings · Website
Off-Policy Evaluation for Large Action Spaces via Embeddings
Off-policy evaluation (OPE) in contextual bandits has seen rapid adoption in real-world systems, since it enables offline evaluation of …
Yuta Saito, Thorsten Joachims
Code · Video · Slides · arXiv · Proceedings
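The abstract above concerns OPE with very many actions, where the key idea is to reweight over action embeddings rather than raw actions. Below is a simplified, hypothetical sketch of that marginalized importance weighting idea, assuming a small discrete embedding space and a known, context-independent embedding distribution p(e | a); the estimator studied in the paper is more general.

```python
import numpy as np

def marginalized_ips_estimate(logged_embeds, rewards, pi_0, pi_e, p_e_given_a):
    """Marginalized IPS over action embeddings (simplified sketch).

    logged_embeds: (n,) int array of observed embedding categories e_i.
    rewards:       (n,) observed rewards.
    pi_0, pi_e:    (n, n_actions) action-choice probabilities of the logging
                   and evaluation policies for each logged context.
    p_e_given_a:   (n_actions, n_embeds) conditional embedding distribution p(e | a).
    """
    # marginal embedding distributions p(e | x, pi) = sum_a pi(a | x) * p(e | a)
    p_e_pi0 = pi_0 @ p_e_given_a   # (n, n_embeds)
    p_e_pie = pi_e @ p_e_given_a   # (n, n_embeds)
    idx = np.arange(len(rewards))
    w = p_e_pie[idx, logged_embeds] / p_e_pi0[idx, logged_embeds]  # marginal weights
    return np.mean(w * rewards)
```

Because the weights live in the embedding space rather than the raw action space, they remain well behaved even when the number of actions is very large, which is the motivation the abstract points to.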
Doubly Robust Off-Policy Evaluation for Ranking Policies under the Cascade Behavior Model
In real-world recommender systems and search engines, optimizing ranking decisions to present a ranked list of relevant items is …
Haruka Kiyohara, Yuta Saito, Tatsuya Matsuhiro, Yusuke Narita, Nobuyuki Shimizu, Yasuo Yamamoto
Code · arXiv · Proceedings
Open Bandit Dataset and Pipeline: Towards Realistic and Reproducible Off-Policy Evaluation
Off-policy evaluation (OPE) aims to estimate the performance of hypothetical policies using data generated by a different policy. …
Yuta Saito, Shunsuke Aihara, Megumi Matsutani, Yusuke Narita
Code · Dataset · Proceedings · arXiv
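The dataset is distributed together with the Open Bandit Pipeline software. The snippet below is a rough usage sketch modeled on the library's quickstart; the class and argument names are written from memory, so treat them as assumptions to check against the current obp documentation rather than a verified example.

```python
import numpy as np
from obp.dataset import OpenBanditDataset
from obp.ope import OffPolicyEvaluation, InverseProbabilityWeighting as IPW

# logged bandit feedback collected by the "random" policy on the "all" campaign
dataset = OpenBanditDataset(behavior_policy="random", campaign="all")
bandit_feedback = dataset.obtain_batch_bandit_feedback()

# placeholder evaluation policy: a uniform-random action distribution
# with shape (n_rounds, n_actions, len_list)
n_rounds = bandit_feedback["n_rounds"]
action_dist = np.full(
    (n_rounds, dataset.n_actions, dataset.len_list),
    1.0 / dataset.n_actions,
)

# estimate the evaluation policy's value with inverse probability weighting
ope = OffPolicyEvaluation(
    bandit_feedback=bandit_feedback,
    ope_estimators=[IPW()],
)
print(ope.estimate_policy_values(action_dist=action_dist))
```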
Counterfactual Learning and Evaluation for Recommender Systems
Counterfactual estimators enable the use of existing log data to estimate how some new target recommendation policy would have …
Yuta Saito, Thorsten Joachims
Code · Video · Proceedings · Website
Evaluating the Robustness of Off-Policy Evaluation
Off-policy Evaluation (OPE), or offline evaluation in general, evaluates the performance of hypothetical policies leveraging only …
Yuta Saito, Takuma Udagawa, Haruka Kiyohara, Kazuki Mogi, Yusuke Narita, Kei Tateno
Code · Slides · Proceedings · arXiv
Optimal Off-Policy Evaluation from Multiple Logging Policies
We study off-policy evaluation (OPE) from multiple logging policies, each generating a dataset of fixed size, i.e., stratified …
Nathan Kallus, Yuta Saito, Masatoshi Uehara
Code · Proceedings · arXiv