1 

Stochastic Scheduling of Train Maintenance Projects
We study the application of stochastic scheduling methods for dealing with the negative impact of uncertainty on the timely delivery of NedTrain maintenance projects. A stochastic scheduling problem quantifies uncertainty by representing the duration of a maintenance activity with a probability distribution. The solution is no longer a schedule but a scheduling policy: a dynamic process for determining on-the-fly when to execute activities. Our objective is to find a policy that minimizes stochastic tardiness both in expectation and in variance, so as to minimize risk. We concentrate on the class of Early-Start (ES) policies. An ES policy is a PERT network representation of the projects in which the width of the partial order defined by the precedence edges is restricted such that resource requirements cannot exceed resource availability. ES policies enable the application of standard PERT management practices (developed for handling uncertainty in projects with unlimited resources) to projects with limited resources, such as NedTrain projects.
Simulation is considered the only practical means to evaluate the objective function in stochastic scheduling. This approach is computationally expensive, especially for large (e.g., NedTrain-specific) instances with activity durations that exhibit high variability. We observe that circuit timing graphs in modern VLSI design use a PERT network-like representational mechanism. Statistical Static Timing Analysis (SSTA) is a young research area with a wealth of methods for solving the PERT problem on massive networks, offering high accuracy at low computational cost. We propose the use of SSTA techniques, treating a given ES policy as a circuit timing graph. This enables us to evaluate the objective function and, more importantly, to implement informed heuristics efficiently. As a proof of concept we studied the use of Linear-Gaussian SSTA, applicable when activity durations exhibit Gaussian variability and may have linear interdependencies. Our other contribution is to propose efficient enumeration schemes for ES policies, overcoming the infeasibility of the existing scheme on problems of practical size. Experiments suggest that the proposed enumeration schemes in combination with SSTA establish a framework for the design of competitive heuristic solvers, suitable for large-scale problem instances with high durational variability.
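The Linear-Gaussian SSTA machinery mentioned above rests on two network reductions: summing Gaussian durations along a series path, and replacing the maximum at a join node by a moment-matched Gaussian (Clark's formulas). A minimal sketch, assuming independent activities and with illustrative numbers; the thesis's actual instances and correlation handling are not reproduced here:

```python
from math import erf, exp, pi, sqrt

def norm_pdf(x):
    return exp(-x * x / 2) / sqrt(2 * pi)

def norm_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def clark_max(m1, v1, m2, v2, rho=0.0):
    """Moment-matched Gaussian approximation of max(X, Y) for
    X ~ N(m1, v1), Y ~ N(m2, v2) with correlation rho (Clark's formulas)."""
    theta = sqrt(max(v1 + v2 - 2 * rho * sqrt(v1 * v2), 1e-12))
    a = (m1 - m2) / theta
    mean = m1 * norm_cdf(a) + m2 * norm_cdf(-a) + theta * norm_pdf(a)
    second = ((m1 * m1 + v1) * norm_cdf(a)
              + (m2 * m2 + v2) * norm_cdf(-a)
              + (m1 + m2) * theta * norm_pdf(a))
    return mean, second - mean * mean

def series(m1, v1, m2, v2):
    """Sum of two independent Gaussian activity durations."""
    return m1 + m2, v1 + v2

# Completion time of a tiny fork-join PERT fragment: two independent
# parallel activities followed by a third one (numbers are invented).
m_join, v_join = clark_max(10.0, 4.0, 9.0, 9.0)
m_end, v_end = series(m_join, v_join, 5.0, 1.0)
```

Propagating these two operations node by node through the PERT DAG yields a Gaussian estimate of the makespan distribution in a single pass, which is what makes SSTA-style evaluation so much cheaper than simulation.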

[PDF]
[Abstract]

2 

Desynchronization Methods for Scheduled Circuits
Synchronous systems waste considerable power in the clock tree and must be designed for the worst-case scenario in terms of speed. Asynchronous circuits offer relief from these problems by replacing clock signals with handshakes, which only consume power when data is being transferred, and with delay signals, which adapt more easily to variance in speed than a fixed clock period. Desynchronization is the process of turning a synchronous circuit into an asynchronous one. Scheduled circuits are a common way to strike a good compromise between conserving circuit area and increasing speed. Desynchronizing such a system is difficult because every functional unit in the circuit must respond to controls from the central state machine, which cannot easily handshake with all of them. This report demonstrates two related methods designed specifically for the conversion of a synchronous, scheduled circuit into an asynchronous, delay-insensitive circuit. Decomposition of the central state machine into smaller, local ones is used to combat skew in the control signals, as well as to improve the performance of the asynchronous circuit. The slack in the clock period can also be exploited for possible speed-up. Conditions that threaten deadlock of the circuit are identified and rescheduling solutions are proposed. A trade-off between the two methods, area conservation and hardware reusability versus speed, is also explained.

[PDF]
[Abstract]

3 

Maximal Flexibility and Optimal Decoupling in Task Scheduling Problems
This thesis focuses on the properties of (multi-agent) task scheduling instances represented as Simple Temporal Problems (STPs). By defining a subclass STP$_{\prec}$ of STPs that contains these task scheduling instances, existing algorithms for arbitrary STPs can be improved when applied to task scheduling STPs, allowing arbitrary schedules and temporal decouplings to be created more efficiently.
With the introduction of a new flexibility metric in this thesis, a Linear Programming (LP) formulation as well as an alternative Maximum Flexibility Algorithm is given to create maximally flexible open schedules from which an optimal temporal decoupling can be derived. This thesis also contains a proof that in task scheduling instances, contrary to intuition, an optimal temporal decoupling does not reduce the flexibility of the system.
In order to ensure fair decouplings or open schedules for either the tasks or the agents, three types of egalitarian flexibility problem formulations are presented including LP formulations to solve these problems.

[PDF]
[Abstract]

4 

Robust Solutions for the Resource-Constrained Project Scheduling Problem: Understanding and Improving Robustness in Partial Order Schedules produced by the Chaining algorithm
Robustness is essential for schedules that are executed under uncertain conditions. In this thesis we investigate robustness in Partial Order Schedules, which represent sets of solutions for instances of the Resource-Constrained Project Scheduling Problem. They can be generated using a greedy procedure called Chaining, which can easily be adapted to use various heuristics. We use an empirical method to gain understanding of robustness and of how the chaining procedure contributes to it.
Based on the findings of an exploratory study we develop three models, each capturing aspects of robustness on a different level. The first model describes how a single activity is affected by various disturbances. The second model predicts how structural properties of Partial Order Schedules can reduce the effect of these disturbances. The third model describes how heuristics for the chaining procedure can influence these properties.
Using experimental evaluation, we found that the models are not complete. Experimental results conformed to the expectations set by the third model, but not to those of the second model. We therefore suspect that the second model is too simplistic for accurate predictions, but since it does match earlier observations we believe it is a good starting point for further understanding.

[PDF]
[Abstract]

5 

Design and evaluation of a simulation environment for evaluating departure scheduling algorithms
To meet traffic demand predictions, the global air traffic management (ATM) system needs to change. Several visions of future ATM operations exist. A commonality between the different visions is 4D trajectory management. This function enables plan-based operation, as opposed to the state-based approach of the present system. Plan-based operation enables the optimization of traffic flows by generating 4D trajectories. Part of the traffic flow generation process is scheduling.
The research presented in this thesis focuses on these scheduling opportunities, in particular the scheduling of departure traffic at a runway. A study of the existing literature showed that the most common scheduling algorithms currently available can be divided into four categories: first-come first-served, branch-and-bound, greedy search and genetic algorithms. A simulation environment is designed for evaluating the departure scheduling algorithms using various input parameters such as traffic situation, airport map and algorithm. The four algorithm categories are evaluated on output aspects such as delay and robustness of the schedule, and are compared with the current method of traffic scheduling.
The evaluation shows that the current method of scheduling departure traffic performs well in comparison with the tested algorithms. In the case of no disturbances the genetic algorithm performs slightly better than the current method, but the other algorithms do not. When disturbances are taken into account, a larger performance increase can be obtained by using scheduling algorithms.

[PDF]
[Abstract]

6 

Flight-crew fatigue: recommendations for airlines on reducing and preventing crew fatigue
Fatigue is an insidious threat to safety. Its effects are usually subtle but nonetheless very real. In the face of 'aviation culture' and job dependence, fatigue is an under-reported and difficult matter that often does not receive the attention it should. The first stage of the report aims at getting a good view of the phenomenon of fatigue; its definition, different forms and related factors are presented. The next stage covers different methods of measuring or determining human fatigue. The methods are categorised into subjective, objective and predictive methods, and a multi-criteria analysis was performed to rank the scales best suited for measuring crew fatigue. The third stage consists of an actor analysis, whose goal was to get a clear picture of all possible stakeholders and their relations surrounding the issue of crew fatigue. Finally, recommendations are presented in three categories: taking the 'sting' out of crew fatigue by clarifying it in your organisation; promoting and facilitating sleep; and taking action by measuring, changing and fine-tuning flight schedules.

[PDF]
[Abstract]

7 

Performance Analysis and Cost-Performance Trade-offs of a High Performance Partially Buffered Crossbar Switch
Why is it hard to build high-speed routers? Because high-speed routers are like marriages; they are unpredictable, provide no guarantees, and become vulnerable in adversity. High-speed networks, including the Internet backbone, suffer from a well-known problem: packets arrive at high-speed routers much faster than commodity memory can support. On a 10 Gb/s link, packets can arrive every 32 ns, while memory can only be accessed once every 50 ns. If we are unable to bridge this performance gap, then (1) we cannot create Internet routers that reliably support links >10 Gb/s, and (2) routers cannot support the needs of real-time applications such as voice, video conferencing, multimedia, gaming, etc., that require guaranteed performance. Network operators expect certain performance characteristics; for example, if the arrival rate is less than the router's advertised capacity, they can reasonably assume the router can handle the traffic. Somewhat surprisingly, no commercial router can do this today!
The emphasis is put on the switching architecture of a router. This thesis lays down a theoretical foundation for Partially Buffered Crossbar switches and is about managing and resolving the preferences of, and contention for memory between, packets from participating inputs and outputs in a switch. By combining the theory of fluid models, Lyapunov functions and the pigeonhole principle, requirements are drawn up for devising practical algorithms that can provide guarantees, emulate the performance of the ideal Output Queued switch, and approximate the optimal Maximum Weight Matching scheduler. The solutions described in this thesis relax the memory access and bandwidth constraints; in fact, no better switching architecture has been described to date in terms of memory requirements and practicality relative to its achieved performance. Moreover, this thesis presents the first study of scheduling unicast and multicast traffic simultaneously in a Partially Buffered Crossbar switch.

[PDF]
[Abstract]

8 

Multi-Skilled Staff Scheduling with Simulated Annealing
In this thesis, a model for a staff scheduling problem in a multi-skilled environment is constructed. It is shown that a cost-optimal schedule can be obtained by Simulated Annealing. The resulting algorithm is analyzed and applied to a toy example and to a scheduling problem of a Dutch railway construction company.

[PDF]
[Abstract]

9 

KOALA-C: A Scheduler for Integrated Multi-Cluster and Multi-Cloud Environments
In this work, we propose the design of KOALA-C, a task scheduling system that can operate in integrated multi-cluster and multi-cloud environments, building on two previous systems: KOALA and SkyMark. We design and evaluate a number of task allocation and resource provisioning policies through both simulations and real-world experiments using KOALA-C.

[PDF]
[Abstract]

10 

Multi-Stakeholder Aircraft Scheduling Problem: Performance Evaluation and Fairness Analysis at Schiphol Airport
The growth of the aviation industry means that, under existing constraints, operational efficiency has to be improved in order to be sustainable. The bottlenecks at airports are usually the runways; consequently, the routing and scheduling decisions from the ATC concerning the route and order of incoming and outgoing flights are of paramount importance. The objective of this research was to evaluate an advanced optimisation algorithm at Schiphol using publicly sourced data, on two aspects: performance compared to the incumbent practices, and fairness, i.e., the fair distribution of the ATC's decisions over different airlines depending on the cost incurred by each airline. The advanced algorithm was devised by drawing an analogy to the job shop scheduling problem and solving it using graph theory and associated (meta)heuristics. The financial and fairness analysis was carried out using game theory.
The experimental design was set up by running the data through an optimisation model followed by a financial analysis. The data consisted of schematics of Schiphol, used to determine the time to traverse resources such as the approach air segment, glide path and runways, together with details of the aircraft, the time of entry into the terminal control area of Schiphol, and the expected time at the gates. In total 49 data sets were evaluated through the model in different configurations. The configurations were as follows:
1. First Come First Serve (Incumbent)
2. Solver Scheduling
3. Solver Routing and Scheduling – the proposed algorithm
4. Equity 1 (Priority KLM) – proposed algorithm being partisan to KLM
5. Equity 2 (Priority Non-KLM) – proposed algorithm being partisan to non-KLM airlines
The output was the delay for each individual aircraft, which was then consolidated into delays per airline. The delays were the result of the decision made under the configuration used; this aspect was used to compare the performance of the various algorithms. Furthermore, the delays for the different airlines were used to analyse whether the decisions that caused them are commensurate with the payments made by the airlines.
The findings were quite consistent with the expected outcome of the experimental setup. The proposed algorithm, in its normal and original state, performed best among all configurations. In all data sets there was an improvement in performance at the global level, i.e., for the system as a whole; the factor of improvement over the incumbent practice depended on the initial status of the system. Having established the superior performance of the algorithm, the distribution of decisions among airlines was analysed to establish fairness. The delays for the airlines were monetised using the value of time specific to aviation operations, and the situation was analysed using a cooperative game theory approach, in which the airlines could either agree to implement the proposed algorithm by forming a grand coalition, or not agree, thereby reverting to the incumbent system for all. Taking only the operational cost incurred by the airlines and the performance analysis conducted previously, the Shapley Value gave the fair distribution of the costs based on the marginal improvement each airline brought to the system. For all data sets, the Shapley Value was consistent and comparable to the actual costs, albeit with minor inconsistencies: in some cases a few airlines paid more than they should, and in other cases less. To tackle these inconsistencies a financial redistribution framework was proposed: the airlines paying less than they should contribute the shortfall to a common fund, and the money from the fund is then redistributed among the airlines paying too much, in proportion to their Shapley Values, to minimise their loss. This framework requires no outside interference, and by transferring money internally a sense of fairness can be introduced into the system.
This framework also took care of the local optima after a global optimum had been established, and in fact improved upon the global optimum. In all data sets, the number of times an airline paid too much or too little was evenly distributed. Moreover, the grand coalition, in which all airlines agree to implement the new algorithm, was inherently stable, due to the game being convex and the Shapley Value lying in the core. However, owing to its scale of operation, KLM could impact the performance of the whole system, and it actually benefited the most from the proposed algorithm.
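The Shapley allocation discussed above can be computed directly as each airline's marginal contribution averaged over all join orders of the grand coalition. A small sketch with a hypothetical three-airline characteristic function; the savings figures are illustrative, not the thesis's data:

```python
from itertools import permutations
from math import factorial

# Hypothetical characteristic function: v(S) is the monetised delay
# saving that coalition S realises by adopting the proposed algorithm.
v = {
    frozenset(): 0.0,
    frozenset({"KLM"}): 60.0,
    frozenset({"A"}): 10.0,
    frozenset({"B"}): 10.0,
    frozenset({"KLM", "A"}): 90.0,
    frozenset({"KLM", "B"}): 90.0,
    frozenset({"A", "B"}): 25.0,
    frozenset({"KLM", "A", "B"}): 130.0,
}

def shapley(players, v):
    """Shapley value: average marginal contribution over all orderings."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition |= {p}
    n_orders = factorial(len(players))
    return {p: x / n_orders for p, x in phi.items()}

alloc = shapley(["KLM", "A", "B"], v)
```

By the efficiency property, the allocations sum to the value of the grand coalition, and the dominant player receives the largest share, mirroring the KLM observation above.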
To summarise, the proposed algorithm can be implemented to give superior performance in terms of minimising the delay experienced by the airport as a whole. However, a further detailed study of the financial agreements between the airlines and Schiphol is required to align the actual financial transactions with the ideal, fair ones. Also, any financial framework or agreement between Schiphol and the airlines should take the interests of KLM into account, since KLM is a dominant player whose individual (local) performance affects the global performance. Hence, it can be concluded that the proposed algorithm is a definite improvement over the existing system, and that a sense of fairness can be introduced into the decision support system to ensure participation of all airlines.

[PDF]
[Abstract]

11 

Travel time errors in shortest route predictions and all-or-nothing assignments: theoretical analysis and simulation findings

[PDF]

12 

Operational Scheduling in a Multi-Actor Environment using Multi-agent Systems
Liquid bulk is one of the largest industries of the modern world, and efficient scheduling is needed for liquid bulk terminals to remain competitive. A common solution to the planning efficiency problem is to apply planning algorithms, rather than human planners, for creating schedules. However, replacing human planners by algorithms is a difficult problem, and its implementation is often obstructed by both management and the planning departments. In order to still achieve efficient schedules, attention has shifted from planning algorithms intended to replace the human planner to scheduling support systems that aim to assist the planner in making efficient schedules. A core aspect of both automated scheduling systems and scheduling support systems is an optimization engine, which allows the human planner to quickly generate alternative schedule options.
In recent years, literature has emerged introducing the concept of Multi-agent systems for planning and scheduling. Multi-agent scheduling seems to be a suitable approach for liquid bulk terminals, since agents provide a natural metaphor for the scheduling problem in such a terminal: a terminal serves vessels owned by a set of customers that are largely uninterested in the vessels of other customers. Multi-agent systems can, in such a scenario, offer flexibility with regard to optimality criteria which traditional scheduling systems cannot.
The main goal of this study was to determine whether Multi-agent Systems (MAS) could be effectively applied for proposing schedule allocation options to assist schedulers in liquid bulk terminals.
The research started with an extensive background study, which found that a number of stakeholders with partially conflicting goals are involved in the scheduling process. The main relevant stakeholders are the terminal and its customers, both consisting of several independent departments or entities. The main goals of the terminal are achieving high terminal utilization, having a low impact on the surroundings, keeping pressure on operations low, and not paying demurrage costs. The main goals on the customer side are achieving short turnaround times and maintaining product quality. By combining the stakeholder requirements with information about the processes, infrastructure, and products at liquid bulk terminals, it was found that the scheduling problem resembles a specific version of the job-shop problem: $JMPT \mid pmtn, prec, r_i, s_{ij} \mid multi$. Since this problem is strongly NP-hard, it is generally infeasible to find an optimal solution as the size of the problem instance grows, and efficient algorithms are needed to find solutions.
After formally defining the scheduling problem present in liquid bulk terminals, this thesis moved on to determine the applicability of several common Multi-agent scheduling solutions along four dimensions: whether the solution provides a good analogy to the scheduling problem, whether it can deal with the arrival and departure of vessels over time, whether it can incorporate infrastructure shared between routes, and whether terminal preferences can be modelled. It was found that Combinatorial Auctions provided the best fit for the scheduling problem. Due to existing contract structures between terminals and their customers, it proved impossible to incorporate an additional payment structure in the scheduling problem, making it infeasible to design the auction to be truthful and strategy-proof. Nonetheless, this is not considered a problem, as strategic behaviour can easily be verified.
For incorporating multi-attribute optimization, several design directions were explored, including Pareto optima, hard constraints and discrete choice models. It was found that only discrete choice models offered the flexibility needed in this case. Two types of discrete choice models were proposed and implemented: a Utility Maximization model and a Regret Minimization model. Empirical studies have shown that in certain cases a regret-minimizing model provides a significantly better fit on preference data, meaning that it better captures the trade-offs made by humans when comparing alternatives. As the goal of this scheduling system is to assist the planner in making allocation choices, it can benefit greatly from these features. A major drawback of the regret model is that finding the regret for all alternatives is an $O(n^2)$ operation rather than an $O(n)$ operation. For solution methods where this posed a bottleneck, a hybrid regret/utility model was proposed and implemented.
The auction structure was implemented as a minor modification of the Contract Net Protocol. The final Multi-agent System consists of three agent types: a Route Finder agent, a Vessel agent, and a Planner agent. The Route Finder agent is responsible for finding routes for specific vessels and products on the terminal, as well as for making accurate estimates of the processing times on those routes. This is implemented using a combination of Yen's algorithm and a modification of Dijkstra's shortest path algorithm. The Vessel agent uses the routes from the Route Finder agent to determine a valuation of the routes and submits these valuations as bids to the Planner agent. The Planner agent collects all bids and uses them to create final schedule proposals. This is done by solving the Winner Determination Problem (WDP) of the auction.
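The route search at the heart of the Route Finder agent can be sketched with plain Dijkstra on a small, hypothetical terminal graph; the node names and transfer times are invented, and the thesis additionally layers Yen's algorithm and its own Dijkstra modification on top of this basic step:

```python
import heapq

def dijkstra(graph, source, target):
    """Fastest route in a directed graph given as
    {node: [(neighbour, transfer_time), ...]}; returns (cost, path)."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    seen = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == target:
            break
        for w, t in graph.get(u, []):
            nd = d + t
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                prev[w] = u
                heapq.heappush(heap, (nd, w))
    # Reconstruct the route by walking predecessors back from the target.
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return dist[target], path[::-1]

# Hypothetical terminal layout: a berth connected through a manifold
# and two alternative pipelines to a storage tank.
terminal = {
    "berth1": [("manifold", 5.0)],
    "manifold": [("line_a", 20.0), ("line_b", 12.0)],
    "line_a": [("tank3", 3.0)],
    "line_b": [("tank3", 15.0)],
}
```

On this layout the route via `line_a` wins (28 time units against 32), illustrating how a Route Finder can trade off alternative pipelines for the same vessel-to-tank transfer.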
Several algorithms were implemented and tested for solving the WDP, including a Branch and Bound algorithm, informed search methods and two Simulated Annealing approaches. It was found that Simulated Annealing, implemented as a search among alternative complete solutions, was the most feasible solution strategy. Typically, it provided solutions which, on average, achieve a 50% higher utility model value than a First-Come First-Served heuristic within three minutes. Furthermore, the Simulated Annealing process can be stopped at any moment and still provide a better alternative solution.
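The "search among alternative complete solutions" can be sketched as follows: a candidate solution assigns each vessel at most one of its bids, overlapping resource use makes a candidate infeasible, and a random reassignment is accepted under the Metropolis rule with a cooling temperature. The bid data, slot encoding and cooling schedule are all invented for illustration:

```python
import math
import random

# Hypothetical bids: per vessel, a list of (utility, resource-time slots
# used). A feasible solution picks at most one bid per vessel with
# pairwise-disjoint slot sets.
bids = {
    "v1": [(8.0, frozenset({"jetty1@t0", "line_a@t0"})),
           (5.0, frozenset({"jetty2@t1"}))],
    "v2": [(7.0, frozenset({"jetty1@t0"})),
           (6.0, frozenset({"jetty1@t1"}))],
    "v3": [(4.0, frozenset({"line_a@t0"})),
           (3.0, frozenset({"line_a@t1"}))],
}

def utility(sol):
    used, total = set(), 0.0
    for vessel, choice in sol.items():
        if choice is None:
            continue
        u, slots = bids[vessel][choice]
        if used & slots:               # infeasible: overlapping resource use
            return float("-inf")
        used |= slots
        total += u
    return total

def anneal(steps=5000, t0=5.0, seed=1):
    rng = random.Random(seed)
    sol = {v: None for v in bids}      # start from the empty schedule
    cur_u = utility(sol)
    best, best_u = dict(sol), cur_u
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-9          # linear cooling
        vessel = rng.choice(list(bids))
        cand = dict(sol)
        cand[vessel] = rng.choice([None] + list(range(len(bids[vessel]))))
        cand_u = utility(cand)
        # Metropolis rule: always accept improvements, sometimes accept
        # worse solutions while the temperature is high.
        if cand_u >= cur_u or rng.random() < math.exp((cand_u - cur_u) / temp):
            sol, cur_u = cand, cand_u
            if cur_u > best_u:
                best, best_u = dict(sol), cur_u
    return best, best_u
```

Because every visited candidate is a complete solution, the incumbent `best` can be returned at any moment, which matches the anytime behaviour noted above.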
All algorithms were implemented using a utility-maximising as well as a regret-minimising strategy. It was found that the top ten results from these different models typically differed from each other with an edit distance of five. Further empirical studies would be needed to determine whether regret-based models provide better-accepted solutions in practice than their utility-based counterparts.

[PDF]
[Abstract]

13 

Algorithms for Scheduling of Train Maintenance
In this thesis we develop algorithms for the scheduling of train maintenance at NedTrain facilities. We present a detailed analysis of the maintenance process at NedTrain. The problem of scheduling train maintenance is formalized and expressed using linear inequalities. We show that the Simple Temporal Problem (STP) can be used to find schedules that satisfy the temporal constraints of the scheduling problem. However, when resource constraints are added, the STP is no longer sufficient. The problem can still be expressed as a Disjunctive Temporal Problem, but this cannot be solved efficiently. Two algorithms are presented: a zero/one integer linear programming (ILP) approach and an STP-based approach. In the STP-based approach we start with an STP that describes the temporal constraints. The STP is then iteratively updated until its earliest start time assignment corresponds to a valid solution to the scheduling problem. Experiments show that the ILP method is fast enough to produce optimal base schedules, whereas the heuristic method is well suited to updating schedules with little disruption when changes have occurred.
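The earliest start time assignment that drives the STP-based approach can be sketched as a longest-path computation over precedence (minimum-delay) constraints, processed in topological order. The jobs, durations and the posted precedence resolving a pit conflict below are illustrative, not NedTrain data:

```python
from collections import defaultdict

def earliest_starts(edges):
    """edges: (i, j, delay) meaning start[j] >= start[i] + delay.
    Returns the earliest-start assignment (longest path from sources)."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for i, j, d in edges:
        succ[i].append((j, d))
        indeg[j] += 1
        nodes |= {i, j}
    est = {n: 0.0 for n in nodes}
    queue = [n for n in nodes if indeg[n] == 0]
    while queue:                       # topological propagation
        u = queue.pop()
        for w, d in succ[u]:
            est[w] = max(est[w], est[u] + d)
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return est

# Train A occupies the inspection pit for 4 hours; train B arrives at
# t=2 and may enter the pit only after A is done (a posted precedence
# resolving the resource conflict).
edges = [("z", "A_pit", 0), ("A_pit", "A_done", 4),
         ("z", "B_arrive", 2), ("A_done", "B_pit", 0),
         ("B_arrive", "B_pit", 0), ("B_pit", "B_done", 3)]
est = earliest_starts(edges)
```

Posting an extra precedence edge (here `A_done` before `B_pit`) and re-propagating is exactly the kind of iterative update the STP-based approach performs until the earliest-start assignment is resource-feasible.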

[PDF]
[Abstract]

14 

Density Adaptive Sleep Scheduling in Wireless Sensor Networks
Broadcasting by flooding is one of the most fundamental services in both wired and wireless networks. It is used by several sensor network applications to spread information from one sensor node to all other sensor nodes in the network. Wireless sensor networks have particular characteristics, such as limited power and battery-driven operation, yet the simple flooding mechanism causes many duplicated packets and consumes a lot of resources. In view of these constraints, the broadcasting service should reduce redundant transmissions so that energy is conserved on the battery-limited devices.
Since it is not necessary for each sensor node to be active all the time, a more sophisticated method can be introduced that still floods the entire network but is more efficient. In this work, an asynchronous sleep-scheduling scheme is proposed with an adapted duty cycle for each sensor node, where the duty cycle is based on an RSS-based density estimate at that node. The reachability obtained with the proposed duty cycle is compared with that of a fixed duty cycle and with a duty cycle adapted using a neighbourhood-discovery density estimation model. The results show that the reachability of the network with an adapted duty cycle in combination with RSS-based density estimation is twice that of neighbourhood-based density estimation in a 100 m x 100 m WSN of 200 sensor nodes. Furthermore, the reachability is as good as with a 90% fixed duty cycle, while being more energy efficient.

[PDF]
[Abstract]

15 

Scheduling with release times and deadlines
We study the single machine version of the task scheduling problem with release times and deadlines.
This problem is too simple to be of practical importance in itself, but it is used as a relaxation in algorithms for the Job Shop scheduling problem, which is a more practical task scheduling problem.
We study exact algorithms for solving the single machine problem.
We propose a new lower bound for the single machine problem and analyze its practical performance when used in a branch and bound algorithm.
We also study the theoretical hardness of the single machine problem with a fixed set of task lengths.
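One standard lower bound for this setting, not necessarily the new bound the thesis proposes, is Jackson's preemptive earliest-deadline-first (EDF) rule: it solves the preemptive relaxation exactly, so a positive maximum lateness certifies that no schedule, preemptive or not, meets every deadline, which is exactly the pruning test a branch and bound node needs. A sketch:

```python
import heapq

def preemptive_edf_max_lateness(jobs):
    """jobs: list of (release, processing, deadline) for one machine.
    Simulates Jackson's preemptive EDF rule and returns the maximum
    lateness; a positive value proves the instance infeasible and can
    therefore prune a branch and bound node."""
    jobs = sorted(jobs)                        # by release time
    t, i, ready = 0.0, 0, []                   # ready entries: [deadline, remaining]
    max_late = float("-inf")
    while i < len(jobs) or ready:
        if not ready:                          # idle until the next release
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            _, p, d = jobs[i]
            heapq.heappush(ready, [d, p])
            i += 1
        d, rem = ready[0]
        next_release = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(rem, next_release - t)       # run most urgent job until done or preempted
        t += run
        if rem - run <= 1e-12:                 # job finished: record its lateness
            heapq.heappop(ready)
            max_late = max(max_late, t - d)
        else:                                  # preempted by a new release
            ready[0][1] = rem - run
    return max_late
```

For example, two jobs of length 3 released at time 0 with deadline 3 yield a maximum lateness of 3, proving infeasibility, while a non-negative-slack instance yields a value at or below zero.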

[PDF]
[Abstract]

16 

Scheduling the Refuelling Activities of Multiple Heterogeneous Autonomous Mobile Robots
When a team of autonomous mobile robots (MRs) has to perform a mission whose duration exceeds their energy capacities, the mission can only be fulfilled if the MRs are resupplied. Nowadays most MRs are powered by batteries, which can either be recharged or replaced. Since the MRs need not be battery powered, we use the general term refuelling instead of recharging. In general there are two options for MRs to refuel: travelling to a fuelling station (FS) situated at a fixed location, or rendezvousing with a mobile fuelling station (MFS). When multiple MRs operate in the same environment, it is desirable that the MRs share the available FSs instead of each MR having its own dedicated FS. Sharing the FSs reduces purchase and maintenance costs, and less space is needed for their placement. The FSs considered in this thesis can only refuel a single robot at a time. This raises the need to properly schedule the refuelling activities, such that the FSs are shared efficiently and depletion of the MRs is prevented. This thesis presents several methods to schedule the refuelling activities of multiple heterogeneous autonomous MRs. The scheduling focuses on the selection of refuel events and the allocation of the FSs as a shared resource. The following problem is considered: for an environment containing an arbitrary number of FSs and MRs, the refuelling activities should be scheduled in such a way that the overall mission time is minimized. Each mission assigns to each MR a unique set of waypoints that have to be visited in a predetermined order in order to complete the mission. The total mission is accomplished when the last MR is completely refuelled after visiting its last waypoint. Scheduling the refuelling activities using a time-based metric is complicated compared to a distance-based metric.
Since a FS can refuel only a single MR at a time, the duration that each MR spends refuelling and the order in which the MRs refuel have to be taken into account. Furthermore, since the MRs share the FSs, each refuel event of one MR can affect all future refuel events of all other MRs. Globally optimal solutions for this problem can be found using a centralized approach, as shown in this thesis. However, since all refuel events of all MRs can influence each other, the complexity increases very quickly with the problem size: to obtain a global optimum, all possible refuel events have to be taken into account. Due to this computational complexity, the problem size that can be solved by the centralized approach is limited. In order to solve larger problems, a trade-off has been made between computation time and solution quality. A distributed and a hierarchical architecture are proposed to distribute the computations and decision making over the robotic team. The core of the distributed approach is that MRs make individual decisions based on local knowledge; in the hierarchical approach, decisions are made for individual MRs or clusters of MRs. Simulation case studies indicate that these decentralized approaches can be used to solve large-scale problems in real time, at the cost of a suboptimal solution.

[PDF]
[Abstract]

17 

Improving the production of high quality anodes at Aluchemie
The production of anodes at Aluchemie is improved using a genetic algorithm (GA). This is a parallel batch scheduling problem, in which the batching and handling time constraints are predetermined. The GA has customized crossover and mutation functions; using it, an improvement of 4.4% was achieved.

file embargo until: 2020-09-06
[Abstract]

18 

Improving PCP algorithms using flexibility metrics: Creating flexible schedules for Technical Maintenance
In this work we improve on existing scheduling techniques suitable for scheduling problems at the train maintenance provider NedTrain. Scheduling problems that require flexible solutions can be modeled using variations of the Resource-Constrained Project Scheduling Problem (RCPSP), which can be solved using Precedence Constraint Posting (PCP). Earlier work already showed that PCP is able to create fixed-time schedules for NedTrain. In this work we investigate two variations of PCP for creating flexible schedules: the envelope approach, where the problem instance is directly reduced to a flexible solution, and the `solve and robustify' approach, where a fixed, inflexible schedule is created first and then relaxed into a flexible schedule. Using state-of-the-art flexibility metrics and Interval Schedules, we show that the envelope approach, which has been considered the weaker approach in recent literature, can be greatly improved. By using Interval Schedules to remove temporal interaction between tasks, we were able to estimate resource violations much more accurately, improving the flexibility of the resulting schedules.

[PDF]
[Abstract]

19 

Coordinated Multi-Agent Planning and Scheduling

[PDF]

20 

Coordinating autonomous planning and scheduling

[PDF]
