Abstract
This work proposes a Markov Decision Process (MDP) model for identifying windows of opportunity to perform preventive maintenance in multi-unit parallel systems subject to varying demand. The main contribution lies in proposing: (i) a reward function that does not depend on maintenance costs, which are typically difficult to assess and classify; and (ii) a new metric for prevention.
By optimizing the capacity utilization rate and the decision flexibility, expressed in terms of standby units, for a set of typical operational scenarios, the optimal opportunities for preventive interventions are identified within their respective prevention ranges, for an offshore power plant case study. The sequential decision problem is solved with the Value Iteration algorithm to obtain optimal long-term policies. The result is a backlog-management decision-support solution built on a low-cost computational model, which provides scenario-dependent preventive policies, promotes the integration of operations with maintenance, and is easy to implement, maintain, and communicate to stakeholders.
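The Value Iteration approach mentioned above can be illustrated with a minimal sketch. The model, states, and rewards below are hypothetical placeholders (a two-state "healthy/degraded" unit with "produce" and "maintain" actions), not the paper's actual reward function or scenario set; the sketch only shows the generic backup that underlies the solution method.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Standard Value Iteration for a finite MDP.

    P: transitions, shape (A, S, S), P[a, s, s'] = Pr(s' | s, a)
    R: rewards, shape (S, A), immediate reward for action a in state s
    Returns the optimal value function and a greedy policy.
    """
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Bellman backup: Q[s, a] = R[s, a] + gamma * E[V(s') | s, a]
        Q = R + gamma * (P @ V).T
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Hypothetical toy instance: state 0 = healthy, state 1 = degraded;
# action 0 = keep producing, action 1 = preventive maintenance.
P = np.array([
    [[0.9, 0.1],   # produce while healthy: may degrade
     [0.0, 1.0]],  # produce while degraded: stays degraded
    [[1.0, 0.0],   # maintain while healthy: stays healthy
     [1.0, 0.0]],  # maintain while degraded: restored to healthy
])
R = np.array([
    [1.0, 0.2],    # healthy: producing pays more than maintaining
    [0.3, 0.2],    # degraded: output is reduced
])
V, policy = value_iteration(P, R)
# policy -> produce when healthy, maintain when degraded
```

Under these illustrative numbers the optimal long-term policy produces in the healthy state and intervenes preventively once the unit degrades, mirroring the kind of scenario-dependent policy the abstract describes.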