Package: MDPtoolbox 4.0.3

MDPtoolbox: Markov Decision Processes Toolbox

The Markov Decision Processes (MDP) toolbox provides functions for solving discrete-time Markov Decision Processes: finite-horizon, value iteration, policy iteration and linear programming algorithms, with several variants. It also includes functions related to Reinforcement Learning.

Authors: Iadine Chadès, Guillaume Chapron, Marie-Josée Cros, Frédérick Garcia, Régis Sabbadin

MDPtoolbox_4.0.3.tar.gz
MDPtoolbox_4.0.3.zip (r-4.5) | MDPtoolbox_4.0.3.zip (r-4.4) | MDPtoolbox_4.0.3.zip (r-4.3)
MDPtoolbox_4.0.3.tgz (r-4.4-any) | MDPtoolbox_4.0.3.tgz (r-4.3-any)
MDPtoolbox_4.0.3.tar.gz (r-4.5-noble) | MDPtoolbox_4.0.3.tar.gz (r-4.4-noble)
MDPtoolbox_4.0.3.tgz (r-4.4-emscripten) | MDPtoolbox_4.0.3.tgz (r-4.3-emscripten)
MDPtoolbox.pdf | MDPtoolbox.html
MDPtoolbox/json (API)

# Install 'MDPtoolbox' in R:
install.packages('MDPtoolbox', repos = c('https://gchapron.r-universe.dev', 'https://cloud.r-project.org'))
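Once installed, a typical first run builds one of the bundled example problems and solves it. The sketch below assumes the toolbox's documented interface, in which `mdp_example_forest()` returns a list holding a transition array `P` (S x S x A) and a reward matrix `R` (S x A), and the solvers return a list with the value function `V` and a `policy` vector:

```r
# Quick-start sketch (assumes the standard MDPtoolbox signatures):
library(MDPtoolbox)

# Build the bundled 3-state forest-management example MDP.
mdp <- mdp_example_forest()

# Solve the discounted problem (discount factor 0.9) by value iteration.
sol <- mdp_value_iteration(mdp$P, mdp$R, discount = 0.9)

sol$policy   # one action index per state
sol$V        # value of each state under that policy
```

The same `P`/`R` pair can be passed unchanged to the other solvers listed under Exports, e.g. `mdp_policy_iteration` or `mdp_LP`.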

On CRAN.

This package does not link to any GitHub/GitLab/R-Forge repository; no issue tracker or development information is available.

2.40 score | 3 stars | 83 scripts | 541 downloads | 21 exports | 4 dependencies

Last updated 8 years ago from: 20c3fb4eae. Checks: OK: 3, NOTE: 4. Indexed: yes.

Target           Result  Date
Doc / Vignettes  OK      Nov 01 2024
R-4.5-win        NOTE    Nov 01 2024
R-4.5-linux      NOTE    Nov 01 2024
R-4.4-win        NOTE    Nov 01 2024
R-4.4-mac        NOTE    Nov 01 2024
R-4.3-win        OK      Nov 01 2024
R-4.3-mac        OK      Nov 01 2024

Exports: mdp_bellman_operator, mdp_check, mdp_check_square_stochastic, mdp_computePpolicyPRpolicy, mdp_computePR, mdp_eval_policy_iterative, mdp_eval_policy_matrix, mdp_eval_policy_optimality, mdp_eval_policy_TD_0, mdp_example_forest, mdp_example_rand, mdp_finite_horizon, mdp_LP, mdp_policy_iteration, mdp_policy_iteration_modified, mdp_Q_learning, mdp_relative_value_iteration, mdp_span, mdp_value_iteration, mdp_value_iteration_bound_iter, mdp_value_iterationGS

Dependencies: lattice, linprog, lpSolve, Matrix

Readme and manuals

Help Manual

Help page | Topics
Markov Decision Processes Toolbox | MDPtoolbox-package, MDPtoolbox
Applies the Bellman operator | mdp_bellman_operator
Checks the validity of an MDP | mdp_check
Checks if a matrix is square and stochastic | mdp_check_square_stochastic
Computes the transition matrix and the reward matrix for a fixed policy | mdp_computePpolicyPRpolicy
Computes a reward matrix for any form of transition and reward functions | mdp_computePR
Evaluates a policy using an iterative method | mdp_eval_policy_iterative
Evaluates a policy using matrix inversion and product | mdp_eval_policy_matrix
Computes sets of 'near optimal' actions for each state | mdp_eval_policy_optimality
Evaluates a policy using the TD(0) algorithm | mdp_eval_policy_TD_0
Generates an MDP for a simple forest management problem | mdp_example_forest
Generates a random MDP problem | mdp_example_rand
Solves a finite-horizon MDP using the backward induction algorithm | mdp_finite_horizon
Solves a discounted MDP using the linear programming algorithm | mdp_LP
Solves a discounted MDP using the policy iteration algorithm | mdp_policy_iteration
Solves a discounted MDP using the modified policy iteration algorithm | mdp_policy_iteration_modified
Solves a discounted MDP using the Q-learning algorithm (Reinforcement Learning) | mdp_Q_learning
Solves an MDP with average reward using the relative value iteration algorithm | mdp_relative_value_iteration
Evaluates the span of a vector | mdp_span
Solves a discounted MDP using the value iteration algorithm | mdp_value_iteration
Computes a bound on the number of iterations for the value iteration algorithm | mdp_value_iteration_bound_iter
Solves a discounted MDP using Gauss-Seidel's value iteration algorithm | mdp_value_iterationGS
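Since several of the solvers above target the same discounted problem, a useful sanity check is to run two of them on a randomly generated MDP and compare the resulting policies. The sketch below assumes the standard signatures of `mdp_example_rand` (which returns a list with `P` and `R`), `mdp_check` (which returns an empty string when `P` and `R` are well formed), and the two solvers:

```r
# Sketch: cross-check two solvers on a random MDP
# (assumes the standard MDPtoolbox signatures).
library(MDPtoolbox)

set.seed(1)
mdp <- mdp_example_rand(S = 10, A = 3)   # random 10-state, 3-action MDP

# mdp_check() returns "" when the transition and reward
# matrices are valid (square, stochastic, matching dimensions).
stopifnot(mdp_check(mdp$P, mdp$R) == "")

vi_sol <- mdp_value_iteration(mdp$P, mdp$R, discount = 0.95)
pi_sol <- mdp_policy_iteration(mdp$P, mdp$R, discount = 0.95)

# Both algorithms converge to an optimal policy for a discounted
# MDP, so the policies should agree (up to ties between actions).
identical(vi_sol$policy, pi_sol$policy)
```

Policy iteration typically needs far fewer iterations than value iteration but solves a linear system at each step; the `iter` field of each result lets you compare the two on a given problem.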