1. INTRODUCTION
The main objective of many companies and organizations today is to improve profitability and competitiveness. These improvements can be obtained through a good optimization of resource allocation. However, in the last few years many companies have been facing not only complex and diverse economic trends (shorter product life cycles, quickly changing science and technology, increasing diversity of customer demand, and the globalization of production activities) but also enormous environmental challenges: global climate change (e.g., the greenhouse effect), the rapid exhaustion of nonrenewable resources (e.g., gas, oil, and coal), and decreasing biodiversity.
Scheduling problems are widely discussed in the literature, and two main approaches can be distinguished (Billaut et al., 2008):
1. Classical deterministic methods: These methods assume that the data are deterministic and that the machine environment is relatively simple. Some traditional constraints are taken into account (precedence constraints, release dates, due dates, preemption, etc.). The criterion to optimize is often standard (e.g., makespan). A number of methods have been proposed (exact methods, greedy algorithms, approximate methods, etc.), depending on the difficulty of the particular problem. These kinds of studies are the most common in the literature devoted to scheduling problems.
2. Online methods: When the algorithm does not have access to all the data from the outset, the data become available step by step, or "online." Different models may be considered here. In some studies, the tasks to be scheduled appear one by one from a list; the aim is to assign each to a resource and to specify its start time. In other studies, the duration of the tasks is not known in advance.
Flexibility occurs at the boundary between these two approaches: some information is available concerning the nature of the problem to be solved and concerning the data. Although this information is imperfect and not wholly reliable, it cannot be totally ignored. It is well known that there will be discrepancies, for a number of reasons, between the initial plan and what is actually realized. Given a set of disruptions that can occur in unforeseen circumstances, the aim is to propose one or more solutions that adapt well to disruptions, and then to produce reactive decisions that ensure a smooth implementation (Billaut et al., 2008).
The job-shop scheduling problem (JSP) is a problem in which a set of specific resources or machines must be used to carry out a set of tasks. Many real-life problems can be modeled as a JSP in a variety of areas, such as production scheduling in industry, departure and arrival times in logistics, and the delivery times of orders in a company. Most solution techniques search for an optimal schedule that minimizes the makespan, the tardiness, the flow time, and so forth.
Some recent works have focused on minimizing the energy consumption in scheduling problems (Mouzon & Yildirim, 2008; Dai et al., 2013), mainly from the operations research community (Mouzon et al., 2007; Bruzzone et al., 2012). Furthermore, some works have been carried out to obtain robust schedules that absorb incidences in dynamic scheduling (Caplinskas et al., 2012). However, no work has related energy efficiency to robustness, although there exists a clear relationship between them in job-shop scheduling.
We focus our attention on a JSP with machines that can work at different speeds (JSSM). It is an extension of the classical JSP in which each operation must be executed on a machine at a speed determined by a classical deterministic method. If the speed of a machine is high, the energy consumption increases but the processing time of the task decreases; if the speed is low, the energy consumption decreases and the processing time increases. Thus, in online scheduling, if disruptions occur, reactive decisions are needed: machines can increase their speed to absorb these disruptions and recover the original schedule obtained by the classical deterministic method.
To this end, we analyze the relationships among several important parameters in order to obtain a multiple-objective solution. We show that there is a clear trade-off between makespan and energy consumption, and between makespan and robustness; consequently, there is a close relationship between energy consumption and robustness. However, this relationship has not been analyzed in the literature, and new techniques can be developed to pursue these objectives jointly. Thus, our main goal is to find a solution that minimizes both the energy consumption and the makespan. Furthermore, we extend this goal by treating the time saved through energy efficiency as a robustness measure that can be used if incidences appear: if a task is delayed, the lost time can be recovered by increasing the speed of the machine, thereby restoring the original solution.
2. PROBLEM DESCRIPTION
The JSSM can be formally defined as follows. We are given a set of $n$ jobs $\{J_1, \ldots, J_n\}$ and a set of $m$ resources or machines $\{R_1, \ldots, R_m\}$. Each job $J_i$ consists of a sequence of $v_i$ tasks $(\theta_{i1}, \ldots, \theta_{iv_i})$. Each task $\theta_{il}$ has a single machine requirement $R_{\theta_{il}}$ and a start time $st_{\theta_{il}}$ to be determined. Each machine can work at different speeds, so each task is linked to an integer duration $p_{\theta_{il}}$ and an integer energy consumption $e_{\theta_{il}}$ of the corresponding machine.
A feasible schedule is a complete assignment of start times to tasks that satisfies the following constraints: the tasks of each job are scheduled sequentially, each machine processes at most one task at any time, and no preemption is allowed. The objective is to find a feasible schedule that minimizes both the completion time of all the tasks and the energy used. The underlying problem is a standard job-shop problem, denoted $J \parallel C_{max}$ according to the classification scheme proposed in Blazewicz et al. (1986). However, the association between duration and energy has been added, so the JSSM can be denoted $J(Speed) \parallel C_{max}, Energy$. For each task, three different speeds are defined; each speed has a duration and an energy consumption. When the working speed increases, the energy consumption increases but the duration decreases.
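In compact form, these requirements admit the following restatement; this is a sketch derived from the definitions above, not the authors' original formulation. A feasible schedule assigns start times $st_{\theta_{il}}$ such that

\[ st_{\theta_{il}} + p_{\theta_{il}} \le st_{\theta_{i,l+1}}, \qquad \forall i,\; 1 \le l < v_i \quad \text{(precedence within a job)} \]

\[ st_{\theta_{il}} + p_{\theta_{il}} \le st_{\theta_{jk}} \;\lor\; st_{\theta_{jk}} + p_{\theta_{jk}} \le st_{\theta_{il}}, \qquad \forall \theta_{il} \ne \theta_{jk} \text{ with } R_{\theta_{il}} = R_{\theta_{jk}} \quad \text{(machine capacity)} \]

and the two objectives are $C_{max} = \max_i (st_{\theta_{iv_i}} + p_{\theta_{iv_i}})$ and $Energy = \sum_{i,l} e_{\theta_{il}}$, both to be minimized.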
3. ENERGY EFFICIENCY
As noted in the Introduction, manufacturing enterprises today face not only complex and diverse economic trends but also enormous environmental challenges. Statistical data from 2009 show that the German industrial sector was responsible for approximately 47% of the total national electricity consumption, and the corresponding CO2 emissions generated by this electricity summed to 18%–20% (BMWi, 2009). Thus, manufacturing companies are responsible for the environmental outcome and are forced to operate manufacturing systems that demonstrate major potential to reduce environmental impacts (Duflou et al., 2012).
There has recently been growing interest in energy savings, driven by a sequence of serious environmental impacts and rising energy costs. Research on minimizing the energy consumption of manufacturing systems has focused on three perspectives: the machine level, the product level, and the manufacturing system level. From the machine-level perspective, developing and designing more energy-efficient machines and equipment to reduce the power and energy demands of machine components is an important strategic target for manufacturing companies (Li et al., 2011; Neugebauer et al., 2011). Unfortunately, previous studies show that the share of energy demanded for the removal of metal material is quite small compared to the share of energy needed to support the various functions of manufacturing systems (less than 30% of total energy consumption; Dahmus & Gutowski, 2004; Gutowski et al., 2005).
From the product-level perspective, modeling an embodied product energy framework from a product design viewpoint is beneficial for supporting improvements in product design and operational decisions (Seow & Rahimifard, 2011; Weinert et al., 2011). However, it requires powerful commercial simulation software to facilitate the analysis and evaluation of the embodied product energy, so the results cannot be applied easily in most manufacturing companies, especially in small- and medium-size enterprises, due to the enormous financial investments required. From the manufacturing system-level perspective, decision models that support energy savings make it feasible to achieve a significant reduction in energy consumption in manufacturing applications. In the specialized literature on production scheduling, the key production objectives for decision models, such as cost, time, and quality, have been widely discussed; by contrast, decreasing energy consumption in manufacturing systems through production scheduling has received rather limited attention. One of the best known works is that of Mouzon et al. (2007), who developed several algorithms and a multiple-objective mathematical programming model to investigate the problem of scheduling jobs on a single CNC machine so as to reduce energy consumption and total completion time. They pointed out that there were significant energy savings when nonbottleneck machines were turned off until needed; the share of savings in total energy consumption could add up to 80%. They also reported that interarrival times could be forecasted, allowing more energy-efficient dispatching rules to be adopted for scheduling.
In further research, Mouzon and Yildirim (2008) proposed a greedy randomized adaptive search algorithm to solve a multiple-objective optimization schedule that minimized both the total energy consumption and the total tardiness on a machine. Fang et al. (2011) provided a new mixed-integer linear programming model for scheduling a classical flow shop that combined the peak total power consumption and the associated carbon footprint with the makespan. Bruzzone et al. (2012) presented an energy-aware scheduling algorithm, based on a mixed-integer programming formulation, to realize energy savings for a given flexible flow shop that was required to keep the original job assignment and sequencing fixed.
Although the majority of research on production scheduling has not considered energy-saving strategies, the efforts mentioned above provide a starting point for exploring energy-aware schedule optimization from the viewpoint of energy consumption. However, no work has considered a multiple-objective optimization schedule that minimizes the total energy consumption and the makespan while maximizing the robustness of the schedule.
4. ROBUSTNESS
Robustness is a common feature in real-life problems. Biological life, functional systems, physical objects, and so on persist if they keep running and maintain their main features despite continuous perturbations, changes, incidences, or aggressions (Szathmary, 2006). Thus, robustness is a concept related to the persistence of the system, its structure, its functionality, and so forth, against external interference: a system is robust if it persists.
It is difficult to give a unique definition of robustness, because the concept is defined differently in different domains. Furthermore, the definition often remains implicit in the literature or is determined by the specific target application. Finally, most authors prefer to use the concept of a robust solution (here, a robust schedule).
The data associated with a scheduling problem are the processing times, the occurrence dates of some events, some structural features, and the costs. None of these data are free from uncertainty. The duration of tasks depends on the conditions of their execution, in particular on the necessary human and material resources; durations are thus inherently uncertain, regardless of contingent factors that may impair their execution. For instance, transportation times for components between separate operations in a manufacturing system depend on the characteristics of the available transportation resources. Finally, in production scheduling, some resources such as versatile machines require a reconfiguration time between operations. This time depends on the type of tools needed and the location of these tools in the shop, not to mention the operator carrying out the reconfiguration (Billaut et al., 2008).
Let us first propose a consensus definition: a schedule is robust if its performance is rather insensitive to data uncertainty. Performance must be understood here in the broad sense of solution quality for the person in charge; this naturally encompasses the solution's value relative to a given criterion, as well as the structure of the proposed solution itself. The robustness of a schedule is thus a way to characterize its performance.
In the literature, it is sometimes difficult to separate sensitivity analysis and robustness. Sensitivity analysis tries to answer "what if …" questions. It deals with disturbances more than with general uncertainty: data are fixed but might be disturbed (Billaut et al., 2008).
In scheduling problems, robustness can be defined as follows.
Definition 1.
Robustness is the ability of a solution to maintain its feasibility when incidences appear during the execution of the schedule.■
In this paper, the robustness of a schedule will be used to answer what-if questions, mainly related to the small disruptions that occur daily in real-life scheduling problems. In this way, robustness can be used to obtain energy-aware schedules that do not modify the start times of tasks. To this end, the slack between tasks that makes a schedule robust (able to absorb incidences) can be exploited by machines to work at a lower speed, thereby reducing energy consumption. However, if this slack is needed because of a disruption, the involved machine can increase its speed in order to recover the lost time and finish the task on schedule. Thus, there exists a relationship between robustness and energy saving that can be applied in many scheduling contexts.
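As a small illustration of this mechanism, the sketch below checks whether a delayed task can still finish on time by switching to its fastest mode; the function name and task fields are ours, introduced only for illustration.

```python
def can_absorb(delay, chosen_duration, fastest_duration, buffer_after=0):
    """Illustrative check (names and structure are ours, not the paper's):
    a task scheduled with a slower, energy-saving duration can recover a
    delay by switching to its fastest mode; any remaining lateness must
    fit in the natural buffer before the next task starts."""
    speed_slack = chosen_duration - fastest_duration  # time recoverable by accelerating
    return delay <= speed_slack + buffer_after

# Example: a task runs at low speed in 8 time units but would take 4 at
# full speed; a 3-unit delay is absorbed (3 <= 8 - 4) without moving tasks.
assert can_absorb(3, chosen_duration=8, fastest_duration=4)
assert not can_absorb(5, chosen_duration=8, fastest_duration=4)
```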
5. MODELING AND SOLVING A JSSM
The most natural way to solve the JSP involves all variables and constraints related to jobs, tasks, and machines (Garrido et al., 2000; Nowicki & Smutnicki, 2005; Huang & Liao, 2008). However, the solution obtained is an optimal solution that minimizes the makespan; it does not guarantee any level of robustness. Generally, such a solution is not able to absorb incidences, and a delay in one task is propagated along the rest of the schedule.
Several reactive/proactive techniques have been developed in the literature to manage incidences in scheduling problems (Billaut et al., 2008). Computing a new solution from scratch after each problem change is possible (a reactive technique), but it has two important drawbacks: inefficiency and instability of the successive solutions (Verfaillie & Schiex, 1994). Whereas reactive methods merely deal with the consequences of an unexpected change, a more proactive approach may guarantee a certain level of robustness. We are interested in this proactive approach, so our goal is to search for an equitable trade-off between the robustness and the optimality of a solution.
Robustness (as defined in Section 4) in job-shop scheduling can be obtained by allocating buffer times between tasks in order to absorb the small disruptions (task delays, etc.) that occur stochastically along the schedule. In an optimized solution of a JSSM, some natural buffers appear in order to satisfy the involved (nonoverlapping) constraints. These buffers give the schedule some degree of robustness. However, if more buffers must be included to make the final solution more robust, the involved tasks must be moved and the effect propagated to the rest of the schedule.
To add robustness to JSSM solutions, we exploit the extra speed at which machines can work whenever they are not already working at top speed. Several solutions are obtained with different weights for minimizing the makespan or the energy used. When the main objective is to minimize the energy used, the solutions contain many tasks that are processed by machines at a low speed. If incidences appear, this speed can be increased and the solutions remain valid. Following this idea, the energy slack can be considered a form of robustness.
5.1. IBM ILOG CPLEX CP Optimizer tool
The problem is modeled and solved with the IBM ILOG CPLEX CP Optimizer tool (CP Optimizer; see http://www-01.ibm.com/software/integration/optimization/cplex-optimizer/). CP Optimizer uses constraint programming technology to solve detailed scheduling problems and other hard combinatorial optimization problems.
CP Optimizer is a commercial solver embedding powerful constraint propagation techniques and a self-adapting large neighborhood search method dedicated to scheduling (Laborie, 2009). This solver is expected to be very efficient for a variety of scheduling problems (IBM, 2010), in particular when the cumulative demand for resources exceeds their availability, as happens, for example, in the satellite control network scheduling problem confronted in Kramer et al. (2007).
The problem has been modeled as a typical JSP. The extension with different machine speeds has been implemented by considering that each task is executed by a machine that has different optional modes, where each mode represents a duration of the task and an associated energy consumption.
The objective is to find a solution that minimizes the multiple-objective combination of makespan and energy consumption. The weight of each objective is controlled by the λ parameter.
Because the values of energy consumption and makespan are not commensurate, it is necessary to normalize both measures (NormEnergy and NormMakespan). NormEnergy is calculated by summing the energy used in the execution of all tasks and dividing by the maximum energy (maxEnergy), where maxEnergy is the sum of the energy needed to execute all tasks at top speed. NormMakespan is the makespan divided by the sum of the task durations when the machines work at the lowest speed (maxMakespan). The objective function is Expression (2).
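Expression (2) is not reproduced in this extract; reconstructed from the definitions above and the objective line of Algorithm 1, it reads

\[ NormEnergy = \frac{\sum_{i,l} e_{\theta_{il}}}{maxEnergy}, \qquad NormMakespan = \frac{C_{max}}{maxMakespan}, \]

\[ \min \; \lambda \cdot NormMakespan + (1 - \lambda) \cdot NormEnergy. \tag{2} \]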
Algorithm 1 shows a pseudocode of the model to solve the problem.
Algorithm 1: Model in CP Optimizer

Data: tasks: set of tasks; modes: set of 3 modes for each task
Result: A solution minimizing the objective function, depending on λ

Interval itvs := define one interval variable per task;
Interval modes := define three optional mode intervals per task;
Sequence mchs := link all itvs that require the same machine;

Minimize ((1 − λ) * NormEnergy + λ * NormMakespan)
Subject to
• noOverlap(mchs)
• endBeforeStart(itvs[j][o], itvs[j][o + 1])
• alternative(itvs[j][o], all(md in modes : md.id == itvs[j][o].id))
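For readers who prefer executable code, the following is a minimal sketch of the same model in Python using docplex.cp, IBM's Python interface to CP Optimizer. The instance data, variable names, and mode ordering are illustrative assumptions, not the authors' original implementation.

```python
# Minimal sketch of Algorithm 1 with docplex.cp (requires CP Optimizer).
from docplex.cp.model import *

# Hypothetical instance: jobs[j] is an ordered list of tasks; each task runs
# on one machine and offers three (duration, energy) modes, ordered from
# slowest/cheapest to fastest/most expensive.
jobs = [
    [{"machine": 0, "modes": [(8, 2), (6, 4), (4, 7)]},
     {"machine": 1, "modes": [(5, 1), (4, 2), (3, 4)]}],
    [{"machine": 1, "modes": [(7, 2), (5, 4), (4, 6)]},
     {"machine": 0, "modes": [(6, 1), (5, 3), (3, 5)]}],
]
lam = 0.5  # lambda: weight of makespan vs. energy in the objective

mdl = CpoModel()
task_itvs, energy_terms, by_machine = [], [], {}
for j, job in enumerate(jobs):
    prev = None
    for o, task in enumerate(job):
        itv = interval_var(name="T{}_{}".format(j, o))
        mode_itvs = []
        for s, (dur, en) in enumerate(task["modes"]):
            mi = interval_var(optional=True, size=dur,
                              name="M{}_{}_{}".format(j, o, s))
            mode_itvs.append(mi)
            energy_terms.append(en * presence_of(mi))  # energy of chosen mode
        mdl.add(alternative(itv, mode_itvs))       # exactly one mode selected
        if prev is not None:
            mdl.add(end_before_start(prev, itv))   # tasks of a job in sequence
        by_machine.setdefault(task["machine"], []).append(itv)
        task_itvs.append(itv)
        prev = itv
for itvs in by_machine.values():
    mdl.add(no_overlap(itvs))                      # one task per machine at a time

# Normalization bounds from the text: all tasks at top speed (maxEnergy),
# sum of the slowest durations (maxMakespan).
max_energy = sum(t["modes"][-1][1] for job in jobs for t in job)
max_makespan = sum(t["modes"][0][0] for job in jobs for t in job)
makespan = max([end_of(t) for t in task_itvs])
mdl.add(minimize((lam / max_makespan) * makespan
                 + ((1.0 - lam) / max_energy) * sum(energy_terms)))

sol = mdl.solve(TimeLimit=10)  # returns None if no solution is found
```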
Figure 1 shows two different schedules obtained by CP Optimizer for a given instance of the JSSM proposed in Agnetis et al. (2010). This instance represents a scheduling problem with three machines and three jobs, each with five tasks; each task has a processing time between 1 and 10 time units when the machine works at full speed. Each task is represented by a gray rectangle that can be divided into two regions: a solid dark gray region represents the processing time when the machine works at full speed (mandatory), and a light gray region with horizontal lines represents the extra processing time if the machine does not work at full speed (optional). This region represents the time used to save energy; it can also be used to absorb incidences if a disruption occurs (EEBuffer). Each task is labeled with the number of the task, the machine used, and the speed used by the corresponding machine (green is low speed, yellow is medium speed, and red is full speed). Finally, the black rectangles represent natural buffer times generated by the solution; they can also be used to absorb incidences.
Two solutions (schedules) have been obtained for the same instance with different values of λ between 0.1 and 0.9. For λ = 0.1, the solution $X_1$ gives more importance (0.9) to energy efficiency and less importance (0.1) to minimizing the makespan. It can be observed that the makespan was 54; no tasks were carried out by machines at full speed (red), 2 tasks ran at medium speed (yellow), and 13 tasks at low speed (green). The schedule consumed 79 units of energy, and all tasks maintain slack to absorb incidences, so the robustness of the schedule is considered high.
For λ = 0.9, the solution $X_2$ gives more importance (0.9) to minimizing the makespan and less importance (0.1) to energy efficiency. It can be observed that the makespan was 34; 7 tasks were carried out by machines at full speed (red), 4 tasks at medium speed (yellow), and 4 tasks at low speed (green). The schedule consumed 112 units of energy, and only 8 tasks maintain slack to absorb incidences, so the robustness of the schedule is considered low.
By modifying the value of λ, an approximate Pareto front for the bicriteria optimization schedule is generated. It must be taken into account that no single solution between $X_1$ and $X_2$ can be said, a priori, to be the best one. They are noncomparable, so choosing a solution from an approximate Pareto front can only be done by the user, depending on the requirements. This is why we advocate producing, for a given problem instance, the Pareto front rather than a single solution.
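A λ sweep of the kind described here can be sketched as follows; the wrapper name solve_jssm is ours, standing for a call to the model above that returns the (makespan, energy) of the best solution found for a given λ.

```python
# Sketch of building an approximate Pareto front by sweeping lambda.
# Assumes a hypothetical solve_jssm(instance, lam) -> (makespan, energy).
def approximate_pareto_front(instance, lambdas=(0.1, 0.3, 0.5, 0.7, 0.9)):
    points = [solve_jssm(instance, lam) for lam in lambdas]
    # Keep only nondominated (makespan, energy) pairs.
    front = [p for p in points
             if not any((q[0] <= p[0] and q[1] < p[1]) or
                        (q[0] < p[0] and q[1] <= p[1]) for q in points)]
    return sorted(set(front))
```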
6. DEFINITION OF THE BENCHMARK SET
To analyze the relationships among makespan, energy consumption, and robustness, we have evaluated the behavior of our approach on the benchmarks proposed in Agnetis et al. (2010). From the benchmark results (small and large instances), several analytical formulas have been developed to estimate these parameters. All analyzed instances are characterized by the number of machines ($m$), the maximum number of tasks per job ($v_{max}$), and the range of processing times ($p$). The number of jobs ($j$) is set to 3. A set of instances was generated by combining values of each parameter: $m$ = 3, 5, 7; $v_{max}$ = 5, 7, 10, 20, 25, 30; and $p$ = [1, 10], [1, 50], [1, 100], [1, 200]. In these benchmarks, the number of operators was not considered, so we fixed it to the number of machines. We have modeled the instances to be solved by the optimizer.
We have also extended the original instances of Agnetis et al. (2010) by adding different energy consumptions (e1, e2, and e3) to each task according to three processing times (pt1, pt2, and pt3), where pt1 is equal to the original processing time in the Agnetis instances. pt2 and pt3 were calculated following Expressions (3) and (4), respectively. These instances can be found on our webpage (see http://gps.webs.upv.es/jobshop/).
The value maxdur represents the maximum duration of a task in the corresponding instance, and rand represents a random value between its two argument expressions. Similar expressions were developed to calculate the energy consumptions [Expressions (5)–(7)].
Following these expressions, the processing times pt1, pt2, and pt3 increase as the energy consumptions e1, e2, and e3 decrease. For example, given an instance with 5 tasks per job, three triplets are represented for each task: the id of the task, the energy used, and the processing time (<id, e, pt>):
7. MAKESPAN VERSUS ENERGY CONSUMPTION
In this section, we analyze the trade-off between makespan and energy consumption in JSPs with different machine speeds. Figure 2 shows an approximate Pareto front for a set of 10 instances with 5 machines, 10 tasks per job, and a maximum processing time of 50 time units. For λ = 1, the average makespan was minimized (317) and the average energy consumption was 1311. For λ = 0, the average energy consumption was minimized (745) and the average makespan was maximized (564.4). As we pointed out above, depending on the user requirements, a value of λ must be selected to obtain the desired level of makespan/energy consumption. Table 1 shows the makespan and energy consumption for each value of λ in different instances. It must be taken into account that the ratio between makespan and energy consumption is similar in all instances, so this trade-off does not depend on the number of machines, the number of tasks per job, or the range of processing times.
According to the analyzed instances, the ratio between energy consumption and makespan can be estimated by using Formula (8):
Thus, given a schedule instance with a given makespan and a λ value, we can estimate the energy consumption required to execute this schedule. In the same way, given a schedule instance with a given energy consumption threshold and a λ value, we can estimate the makespan needed to execute this schedule. This formula can be refined by the operator according to the distribution of energy consumption of the machines at different speeds. This formula and the further formulas below were obtained empirically by approximation over all analyzed benchmarks: they were first approximated by polynomial interpolation and then empirically adjusted into a more complex formula matching the behavior at all desired points. Thus, they show that there is a clear relationship among the involved parameters.
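Formula (8) itself is not reproduced in this extract. The following sketch shows only the kind of fitting procedure described above; the intermediate data points are hypothetical, and only the endpoints (317, 1311) and (564.4, 745) come from the averages reported in this section.

```python
import numpy as np

# Illustration of the described fitting step (not Formula (8) itself):
# fit a low-degree polynomial to (makespan, energy) averages and use it
# to estimate the energy for an unseen makespan value.
makespans = np.array([317.0, 380.0, 450.0, 520.0, 564.4])   # intermediate points hypothetical
energies = np.array([1311.0, 1150.0, 980.0, 840.0, 745.0])  # intermediate points hypothetical
coeffs = np.polyfit(makespans, energies, deg=2)  # polynomial interpolation
print(np.polyval(coeffs, 400.0))                 # estimated energy at makespan 400
```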
8. ROBUSTNESS VERSUS ENERGY CONSUMPTION
The main goal of this paper is to show the trade-off between robustness and energy consumption. The advantage is twofold: developing new techniques for finding energy-efficient schedules also means finding robust schedules, and these techniques generate energy-aware, robust solutions in production scheduling in which small disruptions can be repaired by accelerating the needed machines to recover the original schedule. In this way, no rescheduling is needed, and the user can adjust the parameters to obtain the solution best suited to the problem's preferences.
To carry out this study, we simulated 100 incidences for each instance in order to analyze how many incidences can be absorbed by the resulting schedule. An incidence is a delay applied to a random task of the schedule. The duration of the incidence (%incid) was bounded by 20% of the total duration of the involved task. Figure 3 shows an approximate Pareto front for a set of 10 instances with 7 machines, 10 tasks per job, and a maximum processing time of 100 time units. It can be observed that as the robustness increased, the energy consumption decreased. This is because more robust solutions allow machines to work at minimum speed: if all machines work at minimum speed, every task has a slack (the time to execute the task at minimum speed minus the time to execute it at maximum speed). Thus, if a disruption occurs on a machine $m_i$ working at speed $s_{i1}$ during task $t_i$, this machine can accelerate to speed $s_{i2}$ for this task in order to finish on time (before the next task $t_{i+1}$ starts). In this case, we consider the schedule robust. If the delay of task $t_i$ affects the following task $t_{i+1}$, the machine $m_j$ that processes this task accelerates in order to finish on time, and the disruption is absorbed after some steps. In this case, we consider the schedule stable, because the disruption was propagated to some other tasks before the original solution was recovered.
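The absorption test used in this simulation can be sketched as follows; the schedule representation and names are ours, introduced only to make the procedure concrete.

```python
import random

def simulate_robustness(schedule, n_incidences=100, pct=0.20, seed=0):
    """Estimate robustness as the percentage of random delays a schedule
    absorbs. Each schedule entry carries the chosen duration, the duration
    at full speed, and the natural buffer before the next task; this
    structure is illustrative, not the authors' implementation."""
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(n_incidences):
        t = rng.choice(schedule)
        delay = rng.uniform(0.0, pct) * t["duration"]
        # Accelerating recovers (duration - full_speed_duration) time units;
        # the natural buffer after the task absorbs any remainder.
        recoverable = (t["duration"] - t["full_speed_duration"]) + t["buffer_after"]
        if delay <= recoverable:
            absorbed += 1
    return 100.0 * absorbed / n_incidences
```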
Table 2 shows the energy consumption and robustness in different instances. It must be taken into account that robustness showed the same behavior in all instances, so it does not depend directly on the number of machines, the number of tasks per job, or the range of processing times. However, in most instances, for λ = 0.1 and λ = 0 the energy needed is similar, but the robustness is different (see the last two rows of instances 3–5–10, 3–7–10, and 5–10–50). Thus, given an energy consumption threshold, we can obtain different solutions with different robustness and makespan levels.
The relationship between energy consumption and robustness can be estimated by using Formula (9):
This formula is more accurate for λ values close to 0 (from 0.6 down to 0), because energy consumption carries more weight in the objective function for these values. Thus, given a percentage of robustness for a given incidence duration (%incid) and a λ value of a schedule, we can estimate the energy needed to carry out this schedule. In the same way, for a schedule with a given energy consumption, a λ value, and a threshold on the duration of the incidences (%incid), we can estimate the robustness of this schedule. This formula can be refined by the operator according to the distribution of energy consumption of the machines at different speeds.
9. MAKESPAN VERSUS ROBUSTNESS
There is a direct relationship between makespan and robustness: as the makespan increases, the robustness grows, because the tasks are more spread out in time and are able to absorb more incidences. However, it is not realistic to generate overly sparse schedules, so generally a makespan bound is set and we seek the most robust schedule within that bound.
To carry out this study, the simulation from the previous section gave us the number of incidences that can be absorbed when the energy consumption threshold is modified. Figure 4 shows an approximate Pareto front for a set of 10 instances with 7 machines, 10 tasks per job, and a maximum processing time of 100 time units. It can be observed that as the makespan increased, the robustness also increased, following a trigonometric-like curve. Table 3 shows the makespan and robustness in different instances. It must be taken into account that the robustness is quite similar in all instances, so it does not depend directly on the number of machines, the number of tasks per job, or the range of processing times. When the makespan threshold was set to the minimum possible (the optimal solution), the solutions were able to absorb an average of 29% of the incidences (first row of Table 3). This is because the natural buffers (black rectangles in Fig. 1) were able to absorb this percentage of incidences. Finally, when the makespan threshold was set to an upper bound (obtained by minimizing energy consumption), the percentage of absorbed incidences was close to 100%. This means that the buffers were well distributed among all tasks, and almost all disruptions could be absorbed.
The relationship between makespan and robustness can be derived by combining Formulas (8) and (9), yielding Formula (10):
Thus, given the makespan of a schedule, a λ value, and the duration of the incidence (%incid), we can estimate the robustness of this schedule. In the same way, given a robustness threshold, the duration of the incidence (%incid), and a λ value, we can estimate the makespan of this schedule. This formula can be refined by the operator according to the distribution of energy consumption of the machines at different speeds.
10. GENERAL ANALYSIS
In this section, a general analysis over all instance types was carried out. The main objective is to analyze the relationships among all the relevant parameters concerning robustness and energy efficiency, for all analyzed instances and different λ values (horizontal axis). Figure 5 shows the results for disruptions of 40% of the maximum processing time (%incid = 40). The main vertical axis represents the robustness. The blue curve (% of Absorbed (40%)) represents the percentage of absorbed incidences for each λ value. The yellow curve (%Natural Buff) represents the percentage of incidences absorbed by a natural buffer. The green curve (%EffEn Buff) represents the percentage of incidences absorbed by accelerating a machine. Thus, the robustness is % of Absorbed (40%) = %Natural Buff + %EffEn Buff.
On the secondary vertical axis, the garnet columns (NbuffEff) represent the average number of buffers generated by increasing the speed of machines, and the light blue columns (NbuffNat) represent the average number of natural buffers.
It can be observed that NbuffNat is mostly constant, because natural buffers are independent of the objective (minimizing makespan or energy consumption). However, the total amount of time contained in these natural buffers decreased as λ increased: as λ grows, the objective function gives more importance to minimizing the makespan, so the free slack is also minimized. Thus, the percentage of incidences absorbed by a natural buffer (%Natural Buff) also decreased. NbuffEff follows the same tendency, with the number of generated buffers decreasing as λ increases: as λ grows, the objective is to minimize the makespan and more machines are assigned their maximum speed, so few buffer times can be generated by speeding up the machines. The percentage of incidences absorbed by speeding up machines (%EffEn Buff) also decreased as λ increased, although the difference between %EffEn Buff and %Natural Buff remains visible. The main result is the blue curve (% of Absorbed (40%)), the percentage of absorbed incidences for each λ value: for λ = 0 (minimizing energy consumption), almost all incidences can be absorbed. Thus, energy-aware schedules can also be considered robust solutions able to absorb medium-size incidences.
In Figure 6 we simulated disruptions of different lengths, from 10% to 40% of the maximum processing time (from %incid = 10 to %incid = 40). The red, green, garnet, and gray curves represent the percentage of absorbed incidences for each λ value for disruptions of 10%, 20%, 30%, and 40%, respectively. It can be observed that all curves maintained the same behavior across all λ values, and the values are proportional to the length of the disruptions. This is because small incidences are easier to absorb than large ones, although the difference is not large. Thus, incidences longer than 40% should maintain the same tendency (proportional to those presented in Fig. 6).
On the secondary vertical axis, the extra energy needed to absorb incidences is represented for the different disruption lengths (from 10% to 40%). It can be observed that although % of Absorbed (20%) absorbed fewer disruptions than % of Absorbed (10%), it needed more extra energy in many cases. It must be taken into account that as the number of absorbed disruptions increases, the extra energy needed to absorb them also increases, and the magnitude of the needed energy is proportional to the size of the disruption. For instance, for λ = 0 (minimizing energy), the percentage of absorbed disruptions of size 40% was around 77%, whereas the percentage of absorbed disruptions of size 30% was around 87%. However, the extra energy needed was almost the same in both cases, because larger disruptions require more extra energy per absorbed disruption.
11. CONCLUSIONS
Many real-life problems can be modeled as a JSP in which machines can work at different speeds. This problem is an extension of the classical JSP in which each operation has to be executed by one machine and this machine can work at different speeds. In this paper, we analyzed the relationships among three important objectives that must be taken into consideration: energy efficiency, robustness, and makespan. Analytical formulas are presented to estimate the relationships between these objectives in the analyzed instances. The results show the trade-off between makespan and robustness, and the direct relationship between robustness and energy efficiency.
To reduce the makespan, the energy consumption has to be increased so the tasks are processed faster. When the energy consumption is low, the machines are not working at their highest speed, so if an incidence occurs, the speed of these machines can be increased in order to recover the time lost. Thus, robustness is directly related to energy consumption. Robustness is also directly related to makespan: when the makespan increases, there are more gaps in the solution, so incidences can sometimes be absorbed by these natural buffers.
Thus, new techniques can be developed to find robust solutions that are, at the same time, guaranteed to be energy aware. In online scheduling, the robust solution obtained is executed, and only in case of disruptions are the involved machines accelerated to absorb them; the rest of the tasks are executed in an energy-aware manner.
In future work, we will develop new metaheuristic techniques for finding robust and energy-aware solutions. These problems have multiple objectives, so efficient techniques must be developed to obtain optimized solutions efficiently.
ACKNOWLEDGMENTS
This research has been supported by the Spanish Government under research project MICINN TIN2013-46511-C2-1-P, the European CASES project (No. 294931) supported by a Marie Curie International Research Staff Exchange Scheme Fellowship within the FP7, and the European TETRACOM project (No. 609491) supported by FP7-ICT-2013-10. This research was also supported by the National Science Foundation of China (No. 51175262) and the Jiangsu Province Science Foundation for Excellent Youths under Grant BK2012032.
Miguel A. Salido is an Associate Professor in the Department of Computer Science at Universitat Politècnica de València. His expertise is focused on constraint programming and its application to planning and scheduling problems. He is the author of several papers published in international journals and conferences. He is a PC member of international conferences such as IJCAI, AAAI, ECAI, and ICAPS and has participated in or led several national and European research projects.
Joan Escamilla Fuster is a PhD student in the Department of Computer Science at Universitat Politècnica de València, where he received his BS in computer science. His current work within the Artificial Intelligence, Planning & Scheduling Research Group focuses on scheduling, planning, energy efficiency, metaheuristics, and multiple-objective optimization.
Federico Barber is a Professor in the Department of Computer Science at Universitat Politècnica de València, where he leads a research team in artificial intelligence. He has worked on the development of temporal reasoning systems, constraint satisfaction problems, planning, scheduling, and related optimization problems. Dr. Barber is the author of several articles published in international journals and conferences and has developed several tools for solving real-world optimization problems. He has participated in and led national and European research projects in these areas, as well as leading several technology transference projects to relevant companies.
Adriana Giret is an Associate Professor at Universitat Politècnica de València. She attained her PhD in computer science. Her areas of interest are in intelligent manufacturing systems, holonic systems, and multiagent systems. She is the author or coauthor of more than 120 research papers in journals, books, book chapters, and conference proceedings. Dr. Giret is a member of the IEEE-IES Technical Committee on Industrial Agents.
Dunbing Tang is a Professor at Nanjing University of Aeronautics and Astronautics. He earned his PhD from Nanjing University of Science and Technology and then spent 2 years on postdoctoral research at Tsinghua University and City University of Hong Kong. He conducted research at Aachen University and worked at Cranfield University as a research fellow. Dr. Tang's research interests include intelligent manufacturing and automation, product design theory, and methodology. He has published over 100 academic papers in international academic journals.
Min Dai is a PhD student in the College of Mechanical and Electrical Engineering at Nanjing University of Aeronautics and Astronautics. He earned his MS from Yangzhou University. His research interests include applications of heuristic optimization algorithms in production scheduling and manufacturing system modeling.