Yuta Saito
Publications
Policy-Adaptive Estimator Selection for Off-Policy Evaluation
Off-policy evaluation (OPE) aims to accurately evaluate the performance of counterfactual policies using only offline logged data. …
Takuma Udagawa, Haruka Kiyohara, Yusuke Narita, Yuta Saito, Kei Tateno
Code · Slides · arXiv
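Since OPE recurs throughout this list, a minimal sketch of the standard inverse propensity scoring (IPS) estimator that these papers build on may help readers skimming the entries. All data, propensities, and policy probabilities below are made up for illustration; this is the textbook baseline, not the estimator-selection method of the paper above.

```python
import numpy as np

# Hypothetical logged data: contexts are omitted for brevity; each record holds
# the logged action, its logging-policy propensity pi_0(a_i | x_i), and the reward.
actions = np.array([0, 2, 1, 2, 0])
propensities = np.array([0.5, 0.2, 0.3, 0.2, 0.5])  # pi_0(a_i | x_i)
rewards = np.array([1.0, 0.0, 1.0, 1.0, 0.0])

# Evaluation-policy probabilities for the same logged (context, action) pairs.
pi_e = np.array([0.1, 0.7, 0.4, 0.7, 0.1])          # pi_e(a_i | x_i)

# IPS reweights each logged reward by pi_e/pi_0, yielding an unbiased
# estimate of the evaluation policy's value from logging-policy data.
v_ips = np.mean((pi_e / propensities) * rewards)
print(v_ips)
```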
Fair Ranking as Fair Division: Impact-Based Individual Fairness in Ranking
Rankings have become the primary interface of many two-sided markets. Many have noted that the rankings not only affect the …
Yuta Saito, Thorsten Joachims
Code · Video · Slides · arXiv · Proceedings
Off-Policy Evaluation for Large Action Spaces via Embeddings
Off-policy evaluation (OPE) in contextual bandits has seen rapid adoption in real-world systems, since it enables offline evaluation of …
Yuta Saito, Thorsten Joachims
Code · Video · Slides · arXiv · Proceedings
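The paper above marginalizes importance weights over action embeddings (the MIPS estimator) so the weights scale with the number of embedding values rather than the number of actions. Below is a simplified sketch of that idea, assuming a context-free logging and evaluation policy and a known embedding distribution p(e|a); all quantities are synthetic and hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_actions, n_emb = 10_000, 100, 5  # many actions, few embedding values

# Hypothetical policies and embedding distribution p(e|a).
pi_0 = rng.dirichlet(np.ones(n_actions))                      # logging policy
pi_e = rng.dirichlet(np.ones(n_actions))                      # evaluation policy
p_e_given_a = rng.dirichlet(np.ones(n_emb), size=n_actions)   # (n_actions, n_emb)

# Log data under pi_0: sample an action, then its embedding, then a reward
# that depends on the action only through the embedding.
a = rng.choice(n_actions, size=n, p=pi_0)
e = np.array([rng.choice(n_emb, p=p_e_given_a[ai]) for ai in a])
r = rng.binomial(1, 0.1 + 0.15 * (e / (n_emb - 1)))

# Marginal embedding densities under each policy: p(e | pi) = sum_a p(e|a) pi(a).
p_e_pi0 = p_e_given_a.T @ pi_0   # (n_emb,)
p_e_pie = p_e_given_a.T @ pi_e

# MIPS-style estimate: weight rewards by the marginalized ratio p(e|pi_e)/p(e|pi_0)
# instead of the vanilla action-level ratio pi_e(a)/pi_0(a).
v_hat = np.mean((p_e_pie[e] / p_e_pi0[e]) * r)
print(round(v_hat, 4))
```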
Towards Resolving Propensity Contradiction in Offline Recommender Learning
We study offline recommender learning from explicit rating feedback in the presence of selection bias. A current promising solution for …
Yuta Saito, Masahiro Nomura
Proceedings
Doubly Robust Off-Policy Evaluation for Ranking Policies under the Cascade Behavior Model
In real-world recommender systems and search engines, optimizing ranking decisions to present a ranked list of relevant items is …
Haruka Kiyohara, Yuta Saito, Tatsuya Matsuhiro, Yusuke Narita, Nobuyuki Shimizu, Yasuo Yamamoto
Code · arXiv · Proceedings
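For intuition about the cascade behavior model, here is a toy sketch of the cascade-style IPS baseline that the paper's doubly robust estimator builds on: under the cascade assumption, the outcome at position k depends only on the top-k items, so the importance weight at position k is the cumulative product of per-position ratios up to k rather than the full-ranking product. The ratios and clicks below are synthetic, and this is the baseline, not the paper's Cascade-DR estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, len_list = 5_000, 3

# Hypothetical per-position ratios pi_e(a_k | x, a_{1:k-1}) / pi_0(a_k | x, a_{1:k-1})
# and observed click indicators at each slot of the ranked list.
ratios = rng.uniform(0.5, 2.0, size=(n, len_list))
clicks = rng.binomial(1, 0.2, size=(n, len_list))

# Cascade weights: cumulative product over the prefix up to each position,
# which has far lower variance than weighting by the full-ranking ratio.
w = np.cumprod(ratios, axis=1)
v_hat = np.mean(np.sum(w * clicks, axis=1))
print(round(v_hat, 4))
```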
A Real-World Implementation of Unbiased Lift-based Bidding System
Daisuke Moriwaki, Yuta Hayakawa, Isshu Munemasa, Yuta Saito, Akira Matsui, Masashi Shibata
Open Bandit Dataset and Pipeline: Towards Realistic and Reproducible Off-Policy Evaluation
Off-policy evaluation (OPE) aims to estimate the performance of hypothetical policies using data generated by a different policy. …
Yuta Saito, Shunsuke Aihara, Megumi Matsutani, Yusuke Narita
Code · Dataset · Proceedings · arXiv
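A sketch of how the released pipeline is typically used, loosely following the obp quickstart: load the dataset, form an action distribution for the policy to be evaluated, and run an OPE estimator. Exact signatures may differ across obp versions, and the uniform evaluation policy here is only a placeholder.

```python
import numpy as np
from obp.dataset import OpenBanditDataset
from obp.ope import OffPolicyEvaluation, InverseProbabilityWeighting as IPW

# Load logged feedback from the "random" logging policy (sample data by default).
dataset = OpenBanditDataset(behavior_policy="random", campaign="all")
bandit_feedback = dataset.obtain_batch_bandit_feedback()

# Placeholder evaluation policy: uniform over actions at every slot.
n_rounds = bandit_feedback["n_rounds"]
action_dist = np.full(
    (n_rounds, dataset.n_actions, dataset.len_list),
    1.0 / dataset.n_actions,
)

# Estimate the placeholder policy's value with IPS on the logged data.
ope = OffPolicyEvaluation(bandit_feedback=bandit_feedback, ope_estimators=[IPW()])
print(ope.estimate_policy_values(action_dist=action_dist))
```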
Efficient Hyperparameter Optimization under Multi-Source Covariate Shift
A typical assumption in supervised machine learning is that the train (source) and test (target) datasets follow completely the same …
Masahiro Nomura, Yuta Saito
Code · Proceedings · arXiv
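To make the covariate-shift setting concrete, here is the naive importance-weighted validation error that serves as the starting point for work like this (the paper improves on such estimators); the losses and density ratios below are made up.

```python
import numpy as np

# Hypothetical per-example validation losses and density ratios
# w(x) = p_test(x) / p_train(x) for four validation points.
losses = np.array([0.8, 0.2, 0.5, 0.9])
ratios = np.array([1.5, 0.4, 1.0, 2.1])

# Reweighting validation losses by the density ratio gives an unbiased
# estimate of the test loss when train and test covariates differ.
iw_val_error = np.mean(ratios * losses)
print(iw_val_error)
```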
Evaluating the Robustness of Off-Policy Evaluation
Off-policy Evaluation (OPE), or offline evaluation in general, evaluates the performance of hypothetical policies leveraging only …
Yuta Saito, Takuma Udagawa, Haruka Kiyohara, Kazuki Mogi, Yusuke Narita, Kei Tateno
Code · Slides · Proceedings · arXiv
Optimal Off-Policy Evaluation from Multiple Logging Policies
We study off-policy evaluation (OPE) from multiple logging policies, each generating a dataset of fixed size, i.e., stratified …
Nathan Kallus, Yuta Saito, Masatoshi Uehara
Code · Proceedings · arXiv
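As a reference point for the stratified setting, here is a toy sketch of the naive baseline: per-stratum IPS estimates combined by sample size (the paper instead derives the variance-optimal combination across logging policies). All numbers are hypothetical.

```python
import numpy as np

# Hypothetical stratified logs: two logging policies, each a tuple of
# (pi_e(a_i|x_i), pi_k(a_i|x_i), r_i) arrays for logging policy k.
strata = [
    (np.array([0.6, 0.3]), np.array([0.5, 0.5]), np.array([1.0, 0.0])),
    (np.array([0.2, 0.9]), np.array([0.4, 0.3]), np.array([0.0, 1.0])),
]

# Per-stratum IPS, then a sample-size-weighted average across strata.
sizes = np.array([len(r) for _, _, r in strata])
ips_per_stratum = np.array([np.mean(pe / pk * r) for pe, pk, r in strata])
v_hat = np.sum((sizes / sizes.sum()) * ips_per_stratum)
print(v_hat)
```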