rt_stoppers_contrib#

Package Contents#

Classes#

NoImprovementTrialStopper

Stopper that stops a trial if no better result than the current best is observed

ThresholdTrialStopper

Stopper that stops a trial if results at a certain epoch fall above/below a certain threshold.

AndStopper

Trigger stopping option only if all stoppers agree.

LoggedStopper

Wrapper class to make an existing tune.Stopper issue log messages when stopping a trial/experiment.

Attributes#

__version__

default_logger

rt_stoppers_contrib.__version__#
rt_stoppers_contrib.default_logger#
class rt_stoppers_contrib.NoImprovementTrialStopper(metric: str, *, rel_change_thld: float | dict[int, float] = 0.01, mode: str = 'max', patience: int | dict[int, int] = 6, grace_period: int = 4, logger: logging.Logger | None = None)#

Bases: ray.tune.Stopper

Stopper that stops a trial if no better result than the current best is observed within the last patience iterations.

This can be useful if your metric shows instabilities/oscillations and thus does not converge in a way that would make the tune.stopper.TrialPlateauStopper stop.

Parameters
  • metric – The metric to check for improvement

  • rel_change_thld – Relative change threshold to be considered for improvement. Any change that is less than that is considered no improvement. If set to 0, any change is considered an improvement. Can also be set to a dictionary epoch -> threshold (the first epoch has the index 0).

  • mode – “max” or “min”

  • patience – Number of iterations without improvement after which to stop. If 1, stop after the first iteration without improvement. Can also be set to a dictionary epoch -> patience (the first epoch has the index 0).

  • grace_period – Number of iterations to wait before considering stopping

  • logger – Logger to use. If None, a default logger is used.

__call__(trial_id: Any, result: dict[str, Any]) bool#
stop_all() bool#
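To illustrate the documented semantics (not the package's actual implementation), here is a minimal self-contained sketch of the patience / grace_period / rel_change_thld logic, using the scalar (non-dictionary) forms of the parameters:

```python
class NoImprovementSketch:
    """Illustrative sketch of the NoImprovementTrialStopper semantics."""

    def __init__(self, metric, *, rel_change_thld=0.01, mode="max",
                 patience=6, grace_period=4):
        self.metric = metric
        self.rel_change_thld = rel_change_thld
        self.sign = 1 if mode == "max" else -1  # normalize to "higher is better"
        self.patience = patience
        self.grace_period = grace_period
        self._best = None
        self._stale = 0  # consecutive iterations without improvement
        self._iter = 0

    def __call__(self, trial_id, result):
        self._iter += 1
        value = self.sign * result[self.metric]
        if self._best is None:
            self._best = value
            return False
        # Relative change w.r.t. the current best; a change at or below the
        # threshold counts as "no improvement".
        rel_change = (value - self._best) / abs(self._best) if self._best else float("inf")
        if rel_change > self.rel_change_thld:
            self._best = value
            self._stale = 0
        else:
            self._stale += 1
        # Never stop during the grace period.
        if self._iter <= self.grace_period:
            return False
        return self._stale >= self.patience
```

For example, with mode="min" and patience=2, a trial whose loss stalls at the same value is stopped after two stale iterations.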
class rt_stoppers_contrib.ThresholdTrialStopper(metric: str, thresholds: None | dict[int, float], *, mode: str = 'max', logger: logging.Logger | None = None)#

Bases: ray.tune.Stopper

Stopper that stops a trial if results at a certain epoch fall above/below a certain threshold.

Parameters
  • metric – The metric to check

  • thresholds – Thresholds as a mapping of epoch to threshold. The first epoch (the first time the stopper is checked) is numbered 1.

  • mode – “max” or “min”

  • logger – Logger to use. If None, a default logger is used.

__call__(trial_id: Any, result: dict[str, Any]) bool#
stop_all() bool#
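A minimal sketch of the threshold check described above (again illustrative, not the package's code): at epochs that appear in the thresholds mapping, the trial stops when the metric falls below the threshold for mode="max", or above it for mode="min". Epoch counting starts at 1, as documented:

```python
class ThresholdSketch:
    """Illustrative sketch of the ThresholdTrialStopper semantics."""

    def __init__(self, metric, thresholds, *, mode="max"):
        self.metric = metric
        self.thresholds = thresholds or {}
        self.mode = mode
        self._epoch = 0  # first check is epoch 1

    def __call__(self, trial_id, result):
        self._epoch += 1
        if self._epoch not in self.thresholds:
            return False  # no threshold configured for this epoch
        thld = self.thresholds[self._epoch]
        value = result[self.metric]
        return value < thld if self.mode == "max" else value > thld
```

For instance, ThresholdSketch("acc", {2: 0.5}) stops a trial whose accuracy is still below 0.5 at the second check.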
class rt_stoppers_contrib.AndStopper(stoppers: list[ray.tune.Stopper], *, logger: logging.Logger | None = None)#

Bases: ray.tune.Stopper

Trigger stopping option only if all stoppers agree.

Parameters
  • stoppers – List of stoppers to use.

  • logger – Logger to use. If None, a default logger is used.

__call__(trial_id: Any, result: dict[str, Any]) bool#
stop_all() bool#
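The "all stoppers agree" rule can be sketched as follows (illustrative only). Note that every wrapped stopper is called on each result, so each one can update its internal state even when another stopper has already voted against stopping:

```python
class AndSketch:
    """Illustrative sketch of the AndStopper semantics."""

    def __init__(self, stoppers):
        self.stoppers = stoppers

    def __call__(self, trial_id, result):
        # Call every stopper (not short-circuiting) so stateful stoppers
        # keep seeing each result; stop only on a unanimous vote.
        votes = [stopper(trial_id, result) for stopper in self.stoppers]
        return all(votes)
```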
class rt_stoppers_contrib.LoggedStopper(stopper: ray.tune.Stopper, logger: logging.Logger | None = None)#

Bases: ray.tune.Stopper

Wrapper class to make an existing tune.Stopper issue log messages when stopping a trial/experiment.

This can be useful if there are multiple stoppers involved.

Parameters
  • stopper – Existing tune.Stopper to wrap

  • logger – Logger to use. If None, a new logger is set up with INFO log level.

__call__(trial_id: Any, result: dict[str, Any]) bool#
stop_all() bool#
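Conceptually, the wrapper delegates the decision to the wrapped stopper and logs when that decision is to stop. A hedged sketch of the idea:

```python
import logging


class LoggedSketch:
    """Illustrative sketch of the LoggedStopper idea."""

    def __init__(self, stopper, logger=None):
        self.stopper = stopper
        # Fall back to a module-level logger when none is supplied.
        self.logger = logger or logging.getLogger("logged_stopper_sketch")

    def __call__(self, trial_id, result):
        decision = self.stopper(trial_id, result)
        if decision:
            # Log which stopper triggered; helpful when several are combined.
            self.logger.info("Stopping trial %s (stopper: %r)", trial_id, self.stopper)
        return decision
```

This is particularly useful inside a combination such as AndStopper, where it would otherwise be unclear which stopper ended the trial.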