:py:mod:`rt_stoppers_contrib`
=============================

.. py:module:: rt_stoppers_contrib


Package Contents
----------------

Classes
~~~~~~~

.. autoapisummary::

   rt_stoppers_contrib.NoImprovementTrialStopper
   rt_stoppers_contrib.ThresholdTrialStopper
   rt_stoppers_contrib.AndStopper
   rt_stoppers_contrib.LoggedStopper


Attributes
~~~~~~~~~~

.. autoapisummary::

   rt_stoppers_contrib.__version__
   rt_stoppers_contrib.default_logger


.. py:data:: __version__

.. py:data:: default_logger

.. py:class:: NoImprovementTrialStopper(metric: str, *, rel_change_thld: float | dict[int, float] = 0.01, mode: str = 'max', patience: int | dict[int, int] = 6, grace_period: int = 4, logger: logging.Logger | None = None)

   Bases: :py:obj:`ray.tune.Stopper`

   Stopper that stops a trial if no result within the last ``patience``
   iterations improves on the current best one. This can be useful if your
   metric shows instabilities/oscillations and therefore does not converge
   in a way that would make ``tune.stopper.TrialPlateauStopper`` stop.

   :param metric: The metric to check
   :param rel_change_thld: Relative change threshold to be considered an
       improvement. Any change smaller than this is considered no
       improvement. If set to 0, any change is considered an improvement.
       Can also be set to a dictionary mapping epoch to threshold (the
       first epoch has index 0).
   :param mode: "max" or "min"
   :param patience: Number of iterations without improvement after which to
       stop. If 1, stop after the first iteration without improvement. Can
       also be set to a dictionary mapping epoch to patience (the first
       epoch has index 0).
   :param grace_period: Number of iterations to wait before considering
       stopping
   :param logger: Logger to use. If None, a default logger is used.

   .. py:method:: __call__(trial_id: Any, result: dict[str, Any]) -> bool


   .. py:method:: stop_all() -> bool


.. py:class:: ThresholdTrialStopper(metric: str, thresholds: None | dict[int, float], *, mode: str = 'max', logger: logging.Logger | None = None)

   Bases: :py:obj:`ray.tune.Stopper`

   Stopper that stops a trial if its results at a certain epoch fall below
   (``mode="max"``) or above (``mode="min"``) a given threshold.

   :param metric: The metric to check
   :param thresholds: Thresholds as a mapping of epoch to threshold. The
       first epoch (the first time the stopper is checked) is numbered 1.
   :param mode: "max" or "min"
   :param logger: Logger to use. If None, a default logger is used.

   .. py:method:: __call__(trial_id: Any, result: dict[str, Any]) -> bool


   .. py:method:: stop_all() -> bool


.. py:class:: AndStopper(stoppers: list[ray.tune.Stopper], *, logger: logging.Logger | None = None)

   Bases: :py:obj:`ray.tune.Stopper`

   Stopper that triggers stopping only if all of its child stoppers agree.

   :param stoppers: List of stoppers to use.
   :param logger: Logger to use. If None, a default logger is used.

   .. py:method:: __call__(trial_id: Any, result: dict[str, Any]) -> bool


   .. py:method:: stop_all() -> bool


.. py:class:: LoggedStopper(stopper: ray.tune.Stopper, logger: logging.Logger | None = None)

   Bases: :py:obj:`ray.tune.Stopper`

   Wrapper class that makes an existing ``tune.Stopper`` issue log messages
   when it stops a trial/experiment. This can be useful if there are
   multiple stoppers involved.

   :param stopper: Existing ``tune.Stopper``
   :param logger: Logger to use. If None, a new logger is set up with
       ``INFO`` log level.

   .. py:method:: __call__(trial_id: Any, result: dict[str, Any]) -> bool


   .. py:method:: stop_all() -> bool
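

Example usage
-------------

The following is a minimal sketch of how the stoppers in this package can be
combined; it is not part of the generated API reference. The metric name
``val_acc``, the trial id ``"trial_0"``, the logger name, and the concrete
threshold/patience values are placeholders.

.. code-block:: python

   import logging

   from rt_stoppers_contrib import (
       AndStopper,
       LoggedStopper,
       NoImprovementTrialStopper,
       ThresholdTrialStopper,
   )

   logger = logging.getLogger("my_tuning_run")  # placeholder logger

   # Stop a trial if val_acc has not improved by at least 1% (relative)
   # for 6 consecutive iterations, after a grace period of 4 iterations.
   no_improvement = NoImprovementTrialStopper(
       "val_acc",
       rel_change_thld=0.01,
       mode="max",
       patience=6,
       grace_period=4,
   )

   # Also require val_acc to reach 0.5 by epoch 5 and 0.7 by epoch 10
   # (epochs counted from 1, i.e. from the first time the stopper is checked).
   milestones = ThresholdTrialStopper("val_acc", {5: 0.5, 10: 0.7}, mode="max")

   # Stop only if *both* stoppers agree, and log the stopping decision.
   stopper = LoggedStopper(
       AndStopper([no_improvement, milestones]),
       logger=logger,
   )

   # Ray Tune calls the stopper with the trial id and the latest result dict
   # after each reported iteration; True means "stop this trial".
   # (Only the metric key is shown here; a real result dict has more keys.)
   should_stop = stopper("trial_0", {"val_acc": 0.42})

In a real tuning run the combined stopper is not called by hand but passed to
Ray Tune via its ``stop`` argument (e.g. ``tune.run(..., stop=stopper)`` or a
run config with ``stop=stopper``, depending on the Ray version).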