Erel Shtossel, Alicia Vidler, Uri Shaham, and Gal A. Kaminka. A Harmonic-Mean Formulation of Average-Reward Reinforcement Learning in SMDPs. In Proceedings of the AAMAS Workshop on Adaptive and Learning Agents (ALA), 2026.
Recent research has revived and amplified interest in algorithms for undiscounted average-reward reinforcement learning in infinite-horizon, non-episodic (continuing) tasks. Semi-Markov decision processes (SMDPs) are of particular interest. In SMDPs, discrete actions stochastically generate both rewards and durations, and the objective is to optimize the average reward rate. Existing algorithms approach this by optimizing the ratio of rewards to durations. However, when rewards and durations are non-stationary (in the infinite horizon), this can be incorrect. This paper presents a novel modified harmonic mean operator that correctly computes reward rates even under such conditions. This yields model-free learning algorithms that can work with SMDPs, while maintaining robustness to non-stationary reward and duration distributions over time. We prove theoretical properties of the modified harmonic mean operator, and empirically demonstrate its efficacy in comparison to existing algorithms.
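The distinction the abstract draws — optimizing a ratio of rewards to durations versus computing rates with a harmonic-mean operator — can be illustrated with a small numeric sketch. This is a generic illustration of the ratio-of-means vs. mean-of-ratios gap, not the paper's modified operator; the data values are hypothetical:

```python
# Hypothetical per-transition rewards and durations from an SMDP trajectory.
rewards   = [2.0, 1.0, 4.0]
durations = [1.0, 4.0, 2.0]

# The trajectory's true average reward rate: total reward / total time.
true_rate = sum(rewards) / sum(durations)

# Per-transition rates r_i / tau_i.
per_step_rates = [r / t for r, t in zip(rewards, durations)]

# Naive arithmetic mean of per-transition rates: biased in general,
# because fast transitions are over-weighted relative to their duration.
naive_mean = sum(per_step_rates) / len(per_step_rates)

# Reward-weighted harmonic mean of per-transition rates:
#   sum(w_i) / sum(w_i / x_i)  with  w_i = r_i  and  x_i = r_i / tau_i,
# which algebraically reduces to sum(r_i) / sum(tau_i), the true rate.
weighted_harmonic = sum(rewards) / sum(
    r / x for r, x in zip(rewards, per_step_rates)
)

print(true_rate, naive_mean, weighted_harmonic)
# The harmonic-mean form matches the true rate; the arithmetic mean does not.
```

Here `true_rate` and `weighted_harmonic` both equal 1.0 (7 units of reward over 7 units of time), while the arithmetic mean of per-step rates is about 1.42 — the kind of discrepancy that motivates harmonic-mean formulations of the average-reward objective.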
@inproceedings{ala26ws-erel,
title = {A Harmonic-Mean Formulation of Average-Reward Reinforcement Learning in {SMDP}s},
author = {Erel Shtossel and Alicia Vidler and Uri Shaham and Gal A. Kaminka},
booktitle = {Proceedings of the {AAMAS} Workshop on Adaptive and Learning Agents ({ALA})},
year = {2026},
abstract = {Recent research has revived and amplified interest in algorithms for undiscounted average-reward reinforcement learning in infinite-horizon, non-episodic (continuing) tasks. Semi-Markov decision processes (SMDPs) are of particular interest. In SMDPs, discrete actions stochastically generate both rewards and durations, and the objective is to optimize the average reward rate. Existing algorithms approach this by optimizing the ratio of rewards to durations. However, when rewards and durations are non-stationary (in the infinite horizon), this can be incorrect. This paper presents a novel modified harmonic mean operator that correctly computes reward rates even under such conditions. This yields model-free learning algorithms that can work with SMDPs, while maintaining robustness to non-stationary reward and duration distributions over time. We prove theoretical properties of the modified harmonic mean operator, and empirically demonstrate its efficacy in comparison to existing algorithms.},
}
Generated by bib2html.pl (written by Patrick Riley) on Mon Apr 06, 2026 23:33:57