1.
Introduction
Planning and scheduling are important problems in industrial engineering, logistics, and supply chain management (see Wu et al. [1], Zhang et al. [2], Polverini et al. [3], Yang et al. [4]). In traditional scheduling models the processing time of a job is a fixed constant, but in many real production settings the processing time of a job often decreases as certain operations are repeated; this phenomenon is called the learning effect (see the survey by Azzouz et al. [5]). Recently, Pei et al. [6] examined single-machine and parallel-machine serial-batching scheduling with a learning effect and time-dependent setup times. The objective functions are to minimize the maximum earliness and the total number of tardy jobs, respectively. For the single-machine setting, they proved that both the maximum earliness and the number of tardy jobs minimizations can be solved in polynomial time; for the parallel-machine setting, they proposed several algorithms to solve the problems. Qian et al. [7] addressed single-machine scheduling with release times and a learning effect. For the weighted sum of makespan, total completion time, and maximum tardiness, they proposed several heuristic algorithms and a branch-and-bound algorithm. Wang et al. [8] investigated single-machine scheduling with release dates and a position-weighted learning effect. For the total completion time minimization, they proposed heuristic and branch-and-bound algorithms. Wang et al. [9] considered flow shop scheduling with truncated learning effects. For the makespan and total weighted completion time minimizations, they proposed some heuristics and a branch-and-bound algorithm. Liang et al. [10] studied flow shop scheduling with sum-of-logarithm-processing-times-based learning effects.
For the total (weighted) completion time, the makespan, and the sum of the quadratic job completion times minimizations, they proposed several heuristic algorithms and analyzed the worst-case bounds of these heuristics. Sun et al. [11] addressed single-machine scheduling with general position-weighted learning effects. For the total weighted completion time minimization, they proposed two heuristics and analyzed their worst-case bounds; they also proposed some more elaborate heuristics and a branch-and-bound algorithm.
In addition, increasing attention has been paid to scheduling problems with controllable processing times (resource allocation) (see Shabtay and Steiner [12], Wei et al. [13], Wang and Wang [14], Mashayekhy et al. [15], Liang et al. [16], and Liu and Xiong [17]) and learning effects (see Lu et al. [18]). Recently, Zhu et al. [19] studied single-machine scheduling with learning effects and resource allocation. Under past-sequence-dependent setup times and a general resource allocation function, they showed that a regular objective function can be minimized in polynomial time. Pei et al. [20] investigated scheduling with learning effects and resource allocation on a single machine. For the makespan minimization under serial-batching production, they proposed a hybrid GSA-TS algorithm. Liu and Jiang [21] addressed single-machine scheduling problems with learning effects and resource allocation, and presented results for the common and slack due-date assignments with position-dependent weights. Geng et al. [22] and Liu and Jiang [23] considered flow shop scheduling with learning effects and resource allocation. Under due date assignments, they proved that some two-machine no-wait flow shop problems can be solved in polynomial time. Wang et al. [24] considered single-machine scheduling with learning effects and resource allocation. Under a convex resource allocation function, they provided a bicriteria analysis of the total weighted flow time cost and the total resource consumption cost. Lv and Wang [25] considered a flow shop scheduling problem with a learning effect and resource allocation. Under different due-window assignments in a two-machine no-wait setting, they provided a bicriteria analysis of the scheduling cost and the resource consumption cost, and demonstrated that four versions of the problem combining these two costs remain polynomially solvable.
In this article, we study scheduling problems in a single-machine environment with truncated learning effects and resource allocation. Assuming that the job processing time is given by a general resource consumption function, we provide a unified approach covering a large class of scheduling objective functions. We show that all these problems can be solved in polynomial time.
The remainder of this paper is organized as follows: Section 2 describes the problem. Section 3 derives the positional weights for a number of objective functions. In Section 4, optimality properties and polynomial-time solution algorithms are proposed. A special case is given in Section 5. Section 6 concludes the paper.
2.
Problem description
A set J = \{J_1, J_2, \ldots, J_{\breve{n}}\} of \breve{n} independent jobs is to be processed on a single machine; all jobs are available at time 0 and preemption is not allowed. Suppose that job J_h is scheduled in the rth position; its actual processing time is

{P}_h^A(u_h) = P_h(u_h)\max\{r^{\beta_h}, \delta\}, \qquad (1)
where \beta_h\leq0 is the learning rate of job J_h , 0<\delta\leq1 is a truncation parameter, and P_h(u_h) is a resource consumption function satisfying P_h'(u_h)\leq0 and P_h''(u_h)\geq0 , where P_h'(u_h) = \frac{d P_h(u_h)}{d u_h} and P_h''(u_h) = \frac{d^2 P_h(u_h)}{d (u_h)^2} are the first and second derivatives of P_h(u_h) with respect to u_{h} . Here u_{h} is the amount of resource allocated to job J_{h} , with {u}_{h}^{min} \leq u_{h} \leq {u}_{h}^{max} , where {u}_{h}^{min} and {u}_{h}^{max} are the lower and upper bounds on the resource allocation u_{h} . Note that the linear function P_h(u_h) = a_{h}-b_hu_h and the convex function P_{h}(u_h) = a_h+\left(\frac{w_h}{u_h}\right)^\theta are special cases of Eq (1), where a_{h} is the basic processing time of job J_{h} , b_h is the compression rate of job J_{h} , \theta > 0 is a constant, and w_h is the workload of job J_h .
Let [h] denote the job that is placed in the hth position and P_{[h]}^A the actual processing time of job J_{[h]} . The scheduling cost considered in this article can be expressed as \sum_{h = 1}^{\breve{n}}\eta_hP_{[h]}^A , and the resource consumption cost is \sum_{h = 1}^{\breve{n}}g_{[h]}u_{[h]} , where \eta_h is the positional weight of the hth position and g_h is the per-unit cost of the resource allocated to job J_h . The goal is to find an optimal sequence \pi of all jobs and an optimal resource allocation to minimize

{F}(\pi, u) = \widehat{\alpha}\sum_{h = 1}^{\breve{n}}\eta_hP_{[h]}^A+\widehat{\beta}\sum_{h = 1}^{\breve{n}}g_{[h]}u_{[h]}, \qquad (2)
where \widehat{\alpha}\geq0 and \widehat{\beta}\geq0 are given constants. Using the standard three-field notation, the problem can be denoted:

1\left|{P}_h^A(u_h) = P_h(u_h)\max\{r^{\beta_h}, \delta\}\right|\widehat{\alpha}\sum_{h = 1}^{\breve{n}}\eta_hP_{[h]}^A+\widehat{\beta}\sum_{h = 1}^{\breve{n}}g_{[h]}u_{[h]}. \qquad (3)
3.
Positional weights results
3.1. Case 1
Note that the scheduling cost \sum_{h = 1}^{\breve{n}}\eta_hP_{[h]}^A covers many scheduling objectives. For example, for the makespan C_{\max} = \sum_{h = 1}^{\breve{n}}P_{[h]}^A , i.e., \eta_h = 1 ; for the total completion time \sum_{h = 1}^{\breve{n}}C_h = \sum_{h = 1}^{\breve{n}}\sum_{j = 1}^{h}P_{[j]}^A = \sum_{h = 1}^{\breve{n}} (\breve{n}-h+1)P_{[h]}^A , i.e., \eta_h = \breve{n}-h+1 ; for the total absolute differences in completion times (see Kanet [26]) \sum_{h = 1}^{\breve{n}}\sum_{j = 1}^h|C_h-C_j| = \sum_{h = 1}^{\breve{n}} (h-1)(\breve{n}-h+1)P_{[h]}^A , i.e., \eta_h = (h-1)(\breve{n}-h+1) ; for the total absolute differences in waiting times (see Bagchi [27]) \sum_{h = 1}^{\breve{n}}\sum_{j = 1}^h|W_h-W_j| = \sum_{h = 1}^{\breve{n}} h(\breve{n}-h)P_{[h]}^A , i.e., \eta_h = h(\breve{n}-h) , where W_h is the waiting time of job J_h .
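These positional-weight identities are easy to verify numerically. The following sketch (plain Python, with made-up processing times; the 0-based indices shift the 1-based formulas above by one) checks the forms of \eta_h for the total completion time and the total absolute differences in completion and waiting times:

```python
# Verify the positional-weight identities for a fixed sequence of actual
# processing times; P[h] is the time of the job in position h+1 (0-based).
P = [5.0, 3.0, 8.0, 2.0, 6.0]
n = len(P)
C = [sum(P[:h + 1]) for h in range(n)]            # completion times

# Total completion time: sum C_h = sum (n-h+1) * P_[h]   (1-based h)
tct_direct = sum(C)
tct_weights = sum((n - h) * P[h] for h in range(n))        # 0-based: n-h
assert abs(tct_direct - tct_weights) < 1e-9

# Total absolute differences in completion times:
# sum_h sum_{j<=h} |C_h - C_j| = sum (h-1)(n-h+1) * P_[h]
tadc_direct = sum(abs(C[h] - C[j]) for h in range(n) for j in range(h + 1))
tadc_weights = sum(h * (n - h) * P[h] for h in range(n))   # 0-based h*(n-h)
assert abs(tadc_direct - tadc_weights) < 1e-9

# Total absolute differences in waiting times, with W_h = C_h - P_[h]:
W = [C[h] - P[h] for h in range(n)]
tadw_direct = sum(abs(W[h] - W[j]) for h in range(n) for j in range(h + 1))
tadw_weights = sum((h + 1) * (n - h - 1) * P[h] for h in range(n))
assert abs(tadw_direct - tadw_weights) < 1e-9
print("positional-weight identities hold")
```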
Under due date assignment, let d_h be the due date of job J_h , E_h = \max\{0, d_h-C_h\} and T_h = \max\{0, C_h-d_h\} be the earliness and tardiness of job J_h . For the common (CON) due date assignment (see Panwalkar et al. [28]), \sum_{h = 1}^{\breve{n}}(\phi E_h+\varphi T_h+\chi d) = \sum_{h = 1}^{\breve{n}}\eta_hP_{[h]}^A , where \phi , \varphi , and \chi are given constants, d_h = d is the common due date ( d is a decision variable) and \eta_h = \min\{\breve{n}\chi+(h-1)\phi, (\breve{n}+1-h)\varphi\} . For the slack due date assignment (see Adamopoulos and Pappis [29]), \sum_{h = 1}^{\breve{n}}(\phi E_h+\varphi T_h+\chi q) = \sum_{h = 1}^{\breve{n}}\eta_hP_{[h]}^A , where d_h = P_{h}^A+q , q is the common flow allowance ( q is a decision variable) and \eta_h = \min\{\breve{n}\chi+h\phi, (\breve{n}-h)\varphi\} . For the different due date assignment (see Seidmann et al. [30]), \sum_{h = 1}^{\breve{n}}(\phi E_h+\varphi T_h+\chi d_h) = \sum_{h = 1}^{\breve{n}}\eta_hP_{[h]}^A , where d_h is a decision variable and \eta_h = \min\{\chi(\breve{n}+1-h), \varphi(\breve{n}+1-h)\} .
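The CON positional weights can likewise be checked by brute force: an optimal common due date coincides with 0 or with some job completion time, so scanning those candidates must reproduce \sum_{h}\eta_hP_{[h]}^A . A sketch with made-up cost parameters and processing times:

```python
# Brute-force check of the CON due-date positional weights
# eta_h = min{n*chi + (h-1)*phi, (n+1-h)*varphi}  (1-based h).
phi, varphi, chi = 2.0, 5.0, 1.0      # earliness, tardiness, due-date costs
P = [4.0, 1.0, 6.0, 3.0]              # actual processing times (made up)
n = len(P)
C = [sum(P[:h + 1]) for h in range(n)]

# Optimal d is 0 or a completion time; scan those candidates directly.
best = min(
    sum(phi * max(0.0, d - c) + varphi * max(0.0, c - d) + chi * d for c in C)
    for d in [0.0] + C
)
# Positional-weight form (0-based h stands for 1-based h+1).
weighted = sum(
    min(n * chi + h * phi, (n - h) * varphi) * P[h]
    for h in range(n)
)
assert abs(best - weighted) < 1e-9
print("CON weights verified:", best)
```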
Under due window assignment, let [d_h', d_h''] be the due window of job J_h , where d_h' ( d_h'' ) is the starting (finishing) time of the due window of job J_h , D_h = d_h''-d_h' is the size of the due window [d_h', d_h''] , and E_h = \max\{0, d_h'-C_h\} and T_h = \max\{0, C_h-d_h''\} are the earliness and tardiness of job J_h . For the common due window assignment (see Liman et al. [31]), d_h' = d' and d_h'' = d'' (such that D = d''-d' ), \sum_{h = 1}^{\breve{n}}(\phi E_h+\varphi T_h+\chi d'+\psi D) = \sum_{h = 1}^{\breve{n}}\eta_hP_{[h]}^A , where \psi is a given constant and \eta_h = \min\{\breve{n}\chi+(h-1)\phi, \breve{n}\psi, (\breve{n}+1-h)\varphi\} . For the slack due window assignment (see Wu et al. [32]), d_h' = P_h^A+q' , d_h'' = P_h^A+q'' , D = q''-q' , \sum_{h = 1}^{\breve{n}}(\phi E_h+\varphi T_h+\chi q'+\psi D) = \sum_{h = 1}^{\breve{n}}\eta_hP_{[h]}^A , where \eta_h = \min\{\breve{n}\chi+h\phi, \breve{n}\psi, (\breve{n}-h)\varphi\} . For the different due window assignment (see Wang et al. [33]), d_h' and d_h'' are decision variables, \sum_{h = 1}^{\breve{n}}(\phi E_h+\varphi T_h+\chi d'_h+\psi D_h) = \sum_{h = 1}^{\breve{n}}\eta_hP_{[h]}^A , where \eta_h = \min\{\chi(\breve{n}+1-h), (\breve{n}+1-h)\varphi\} .
3.2. Case 2
For the scheduling problems with past-sequence-dependent setup times (see Koulamas and Kyparisis [34], Liu et al. [35] and Wang et al. [36]), i.e., the setup time of the job in position h is s_{[h]} = \epsilon \sum_{j = 1}^{h-1}P_{[j]}^A , where \epsilon \geq0 is a constant, we have the following results:
For the makespan C_{\max} = \sum_{h = 1}^{\breve{n}}(s_{[h]}+P_{[h]}^A) = \sum_{h = 1}^{\breve{n}} [1+(\breve{n}-h)\epsilon]P_{[h]}^A , i.e., \eta_h = 1+(\breve{n}-h)\epsilon ; for the total completion time \sum_{h = 1}^{\breve{n}}C_h = \sum_{h = 1}^{\breve{n}}\sum_{j = 1}^{h}(s_{[j]}+P_{[j]}^A) = \sum_{h = 1}^{\breve{n}} (\breve{n}-h+1)\left[1+\frac{\epsilon(\breve{n}-h)}{2}\right]P_{[h]}^A , i.e., \eta_h = (\breve{n}-h+1)\left[1+\frac{\epsilon(\breve{n}-h)}{2}\right] ; for the total absolute differences in completion times \sum_{h = 1}^{\breve{n}}\sum_{j = 1}^h|C_h-C_j| = \sum_{h = 1}^{\breve{n}} (h-1)(\breve{n}-h+1)(s_{[h]}+P_{[h]}^A) = \sum_{h = 1}^{\breve{n}} \left[(h-1)(\breve{n}-h+1)+\epsilon\sum_{j = h+1}^{\breve{n}}(j-1)(\breve{n}-j+1)\right]P_{[h]}^A , i.e., \eta_h = (h-1)(\breve{n}-h+1)+\epsilon\sum_{j = h+1}^{\breve{n}}(j-1)(\breve{n}-j+1) ; for the total absolute differences in waiting times \sum_{h = 1}^{\breve{n}}\sum_{j = 1}^h|W_h-W_j| = \sum_{h = 1}^{\breve{n}} h(\breve{n}-h)(s_{[h]}+P_{[h]}^A) = \sum_{h = 1}^{\breve{n}} \left[h(\breve{n}-h)+\epsilon\sum_{j = h+1}^{\breve{n}}j(\breve{n}-j)\right]P_{[h]}^A , i.e., \eta_h = h(\breve{n}-h)+\epsilon\sum_{j = h+1}^{\breve{n}}j(\breve{n}-j) .
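A quick numerical check of the first two closed forms under s_{[h]} = \epsilon\sum_{j = 1}^{h-1}P_{[j]}^A (made-up data; 0-based indices shift the 1-based formulas by one):

```python
# Check the positional weights under past-sequence-dependent setup times
# s_[h] = eps * (P_[1] + ... + P_[h-1]).
eps = 0.4
P = [5.0, 3.0, 8.0, 2.0]                                  # made-up times
n = len(P)
s = [eps * sum(P[:h]) for h in range(n)]                  # s_[1] = 0
C = [sum(s[:h + 1]) + sum(P[:h + 1]) for h in range(n)]   # completion times

# Makespan: C_max = sum [1 + (n-h)*eps] * P_[h]   (1-based h)
cmax_weights = sum((1 + (n - h - 1) * eps) * P[h] for h in range(n))
assert abs(C[-1] - cmax_weights) < 1e-9

# Total completion time: sum (n-h+1) * [1 + eps*(n-h)/2] * P_[h]
tct_weights = sum((n - h) * (1 + eps * (n - h - 1) / 2) * P[h]
                  for h in range(n))
assert abs(sum(C) - tct_weights) < 1e-9
print("p-s-d setup-time weights verified")
```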
For the common due date assignment, \sum_{h = 1}^{\breve{n}}(\phi E_h+\varphi T_h+\chi d) = \sum_{h = 1}^{\breve{n}}\varpi_h(s_{[h]}+P_{[h]}^A) = \sum_{h = 1}^{\breve{n}} \left[\varpi_h+\epsilon\sum_{j = h+1}^{\breve{n}}\varpi_j\right]P_{[h]}^A , where \varpi_h = \min\{\breve{n}\chi+(h-1)\phi, (\breve{n}+1-h)\varphi\} and \eta_h = \varpi_h+\epsilon\sum_{j = h+1}^{\breve{n}}\varpi_j . For the slack due date assignment, \sum_{h = 1}^{\breve{n}}(\phi E_h+\varphi T_h+\chi q) = \sum_{h = 1}^{\breve{n}}\varpi_h(s_{[h]}+P_{[h]}^A) = \sum_{h = 1}^{\breve{n}} \left[\varpi_h+\epsilon\sum_{j = h+1}^{\breve{n}}\varpi_j\right]P_{[h]}^A , where \varpi_h = \min\{\breve{n}\chi+h\phi, (\breve{n}-h)\varphi\} and \eta_h = \varpi_h+\epsilon\sum_{j = h+1}^{\breve{n}}\varpi_j . For the different due date assignment, \sum_{h = 1}^{\breve{n}}(\phi E_h+\varphi T_h+\chi d_h) = \sum_{h = 1}^{\breve{n}}\varpi_h(s_{[h]}+P_{[h]}^A) = \sum_{h = 1}^{\breve{n}} \left[\varpi_h+\epsilon\sum_{j = h+1}^{\breve{n}}\varpi_j\right]P_{[h]}^A , where \varpi_h = \min\{\chi(\breve{n}+1-h), \varphi(\breve{n}+1-h)\} and \eta_h = \varpi_h+\epsilon\sum_{j = h+1}^{\breve{n}}\varpi_j .
Under the common due window assignment, \sum_{h = 1}^{\breve{n}}(\phi E_h+\varphi T_h+\chi d'+\psi D) = \sum_{h = 1}^{\breve{n}}\varpi_h(s_{[h]}+P_{[h]}^A) = \sum_{h = 1}^{\breve{n}} \left[\varpi_h+\epsilon\sum_{j = h+1}^{\breve{n}}\varpi_j\right]P_{[h]}^A , where \varpi_h = \min\{\breve{n}\chi+(h-1)\phi, \breve{n}\psi, (\breve{n}+1-h)\varphi\} and \eta_h = \varpi_h+\epsilon\sum_{j = h+1}^{\breve{n}}\varpi_j . For the slack due window assignment, \sum_{h = 1}^{\breve{n}}(\phi E_h+\varphi T_h+\chi q'+\psi D) = \sum_{h = 1}^{\breve{n}}\varpi_h(s_{[h]}+P_{[h]}^A) = \sum_{h = 1}^{\breve{n}} \left[\varpi_h+\epsilon\sum_{j = h+1}^{\breve{n}}\varpi_j\right]P_{[h]}^A , where \varpi_h = \min\{\breve{n}\chi+h\phi, \breve{n}\psi, (\breve{n}-h)\varphi\} and \eta_h = \varpi_h+\epsilon\sum_{j = h+1}^{\breve{n}}\varpi_j . For the different due window assignment, d_h' and d_h'' are decision variables, \sum_{h = 1}^{\breve{n}}(\phi E_h+\varphi T_h+\chi d'_h+\psi D_h) = \sum_{h = 1}^{\breve{n}}\varpi_h(s_{[h]}+P_{[h]}^A) = \sum_{h = 1}^{\breve{n}} \left[\varpi_h+\epsilon\sum_{j = h+1}^{\breve{n}}\varpi_j\right]P_{[h]}^A , where \varpi_h = \min\{\chi(\breve{n}+1-h), (\breve{n}+1-h)\varphi\} and \eta_h = \varpi_h+\epsilon\sum_{j = h+1}^{\breve{n}}\varpi_j .
4.
Main results
Lemma 1. For a given sequence \pi = [J_{[1]}, J_{[2]}, \ldots, J_{[{\breve{n}}]}] , the optimal resource allocation of the problem 1\left|{P}_h^A(u_h) = P_h(u_h)\max\{r^{\beta_h}, \delta\}\right|\widehat{\alpha} \sum_{h = 1}^{\breve{n}}\eta_hP_{[h]}^A+\widehat{\beta}\sum_{h = 1}^{\breve{n}}g_{[h]}u_{[h]} is:

u_{[h]}^* = \begin{cases} u_{[h]}^{min}, & \text{if } P_{[h]}'(u_{[h]}^{min})\geq \frac{-\widehat{\beta} g_{[h]}}{\widehat{\alpha}\eta_h\max\{h^{\beta_{[h]}}, \delta\}}, \\ \ddot{u}_{[h]}, & \text{if } P_{[h]}'(u_{[h]}^{min}) < \frac{-\widehat{\beta} g_{[h]}}{\widehat{\alpha}\eta_h\max\{h^{\beta_{[h]}}, \delta\}} < P_{[h]}'(u_{[h]}^{max}), \\ u_{[h]}^{max}, & \text{if } P_{[h]}'(u_{[h]}^{max})\leq \frac{-\widehat{\beta} g_{[h]}}{\widehat{\alpha}\eta_h\max\{h^{\beta_{[h]}}, \delta\}}, \end{cases} \qquad (4)

where \ddot{u}_{[h]} is the solution of Eq (6).
Proof. For a given sequence \pi , Eq (2) can be written as {F}(\pi, u_{[h]}) = \widehat{\alpha}\sum_{h = 1}^{\breve{n}}\eta_h\max\{h^{\beta_{[h]}}, \delta\}P_{[h]}(u_{[h]})+\widehat{\beta}\sum_{h = 1}^{\breve{n}}g_{[h]}u_{[h]} . Taking the derivative of Eq (2) with respect to u_{[h]} and setting the derivative value as 0, it follows that

\frac{\partial {F}(\pi, u_{[h]})}{\partial u_{[h]}} = \widehat{\alpha}\eta_h\max\{h^{\beta_{[h]}}, \delta\}P_{[h]}'(u_{[h]})+\widehat{\beta} g_{[h]} = 0, \qquad (5)
then we obtain:

P_{[h]}'(u_{[h]}) = \frac{-\widehat{\beta} g_{[h]}}{\widehat{\alpha}\eta_h\max\{h^{\beta_{[h]}}, \delta\}}. \qquad (6)
Let \ddot{u}_{[h]} denote the solution of Eq (6). If P_{[h]}'(u_{[h]}^{min})\geq \frac{-\widehat{\beta} g_{[h]}}{\widehat{\alpha}\eta_h\max\{h^{\beta_{[h]}}, \delta\}} , then, since P_{[h]}''(u_{[h]})\geq0 implies that P_{[h]}'(u_{[h]}) is non-decreasing, \frac{{\partial {F}(\pi, u_{[h]})}}{{\partial {u_{[h]}}}} \geq 0 for all u_{[h]}\in[u_{[h]}^{min}, u_{[h]}^{max}] ; hence {F}(\pi, u_{[h]}) is a non-decreasing function of u_{[h]} and the optimal solution of minimizing {F}(\pi, u_{[h]}) is u_{[h]}^* = u_{[h]}^{min} .
If P_{[h]}'(u_{[h]}^{max})\leq \frac{-\widehat{\beta} g_{[h]}}{\widehat{\alpha}\eta_h\max\{h^{\beta_{[h]}}, \delta\}} , it follows that \frac{{\partial {F}(\pi, u_{[h]})}}{{\partial {u_{[h]}}}} \leq0 , which indicates {F}(\pi, u_{[h]}) is a non-increasing function of u_{[h]} ; the optimal solution of minimizing {F}(\pi, u_{[h]}) is u_{[h]}^* = u_{[h]}^{max} .
If P_{[h]}'(u_{[h]}^{min}) < \frac{-\widehat{\beta} g_{[h]}}{\widehat{\alpha}\eta_h\max\{h^{\beta_{[h]}}, \delta\}} < P_{[h]}'(u_{[h]}^{max}) , then, by the monotonicity of P_{[h]}'(u_{[h]}) , the optimal solution of minimizing {F}(\pi, u_{[h]}) is u_{[h]}^* = \ddot{u}_{[h]} .
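When P_h(u_h) is the convex special case a_h+(w_h/u_h)^\theta , the three cases of Lemma 1 amount to clipping the stationary point of Eq (6) to [u_{[h]}^{min}, u_{[h]}^{max}] . A sketch with illustrative numbers, cross-checked by a grid search:

```python
# Optimal resource amount for one job/position under Lemma 1, using the
# convex case P(u) = a + (w/u)**theta, so P'(u) = -theta*w**theta*u**-(theta+1).
# All numbers below are illustrative.
alpha_hat, beta_hat = 1.0, 1.0       # \hat{alpha}, \hat{beta}
eta, g = 9.0, 4.0                    # positional weight, resource cost
a, w, theta = 2.0, 6.0, 2.0
M = max(3 ** -0.4, 0.65)             # max{h^{beta_[h]}, delta} for h=3
umin, umax = 1.0, 5.0

def F(u):                            # this job's contribution to Eq (2)
    return alpha_hat * eta * M * (a + (w / u) ** theta) + beta_hat * g * u

# Stationary point of F (the solution of Eq (6) for this P):
u_dot = (theta * alpha_hat * eta * M * w ** theta
         / (beta_hat * g)) ** (1 / (1 + theta))
u_star = min(max(u_dot, umin), umax)       # the three cases of Lemma 1

# Cross-check against a fine grid search over [umin, umax].
grid = [umin + i * (umax - umin) / 100000 for i in range(100001)]
u_grid = min(grid, key=F)
assert abs(F(u_star) - F(u_grid)) < 1e-4
print("u* =", round(u_star, 4))
```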
Let z_{hr} = 1 if job J_h is placed in position r , and z_{hr} = 0 otherwise. Then, the optimal job sequence of the problem 1\left|{P}_h^A(u_h) = P_h(u_h)\max\{r^{\beta_h}, \delta\}\right|\widehat{\alpha} \sum_{h = 1}^{\breve{n}}\eta_hP_{[h]}^A+\widehat{\beta}\sum_{h = 1}^{\breve{n}}g_{[h]}u_{[h]} can be formulated as the following assignment problem (denoted by {\overbrace{AP}} ):

\min \sum_{h = 1}^{\breve{n}}\sum_{r = 1}^{\breve{n}}\widehat{\Omega}_{hr}z_{hr} \qquad (7)

subject to

\sum_{h = 1}^{\breve{n}}z_{hr} = 1, \ r = 1, 2, \ldots, \breve{n}; \quad \sum_{r = 1}^{\breve{n}}z_{hr} = 1, \ h = 1, 2, \ldots, \breve{n}; \quad z_{hr}\in\{0, 1\}, \qquad (8)
where

\widehat{\Omega}_{hr} = \widehat{\alpha}\eta_r\max\{r^{\beta_h}, \delta\}P_h(u_{hr}^*)+\widehat{\beta} g_hu_{hr}^*, \qquad (9)

and u_{hr}^* is the optimal resource allocation of job J_h when placed in position r , obtained from Lemma 1.
From Lemma 1 and the above analysis, for solving 1\left|{P}_h^A(u_h) = P_h(u_h)\max\{r^{\beta_h}, \delta\}\right|\widehat{\alpha}\sum_{h = 1}^{\breve{n}}\eta_hP_{[h]}^A +\widehat{\beta}\sum_{h = 1}^{\breve{n}}g_{[h]}u_{[h]} , the following algorithm can be summarized.
Algorithm 1
Step 1. Calculate \widehat{\Omega}_{hr} (see Eq (9), h, r = 1, 2, \ldots, \breve{n} ) and solve {\overbrace{AP}} (7)–(8) to determine an optimal sequence.
Step 2. Calculate optimal resource allocation u_{[h]}^* by using Lemma 1 (see Eq (4)).
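A compact sketch of Algorithm 1 for the convex special case P_h(u_h) = a_h+(w_h/u_h)^\theta (all data below are illustrative, not from the paper). For brevity the assignment problem is solved here by enumerating permutations; a real implementation would use an O(\breve{n}^3) assignment solver such as the Hungarian method:

```python
import itertools

# Algorithm 1 sketch: omega[h][r] is the minimal cost of placing job h in
# position r with the resource already optimized as in Lemma 1; the
# assignment problem (7)-(8) is then solved by enumeration.
alpha_hat, beta_hat, theta, delta = 1.0, 1.0, 2.0, 0.65
a   = [1.0, 2.0, 0.5, 1.5]          # basic processing times (made up)
w   = [6.0, 4.0, 8.0, 5.0]          # workloads
g   = [4.0, 3.0, 5.0, 2.0]          # resource costs
lr  = [-0.2, -0.4, -0.3, -0.5]      # learning rates beta_h
eta = [7.0, 5.0, 3.0, 2.0]          # positional weights
umin, umax = [1.0] * 4, [6.0] * 4
n = len(a)

def cell(h, r):
    """Cost of job h in position r+1 (0-based r), per Lemma 1."""
    M = max((r + 1) ** lr[h], delta)
    u_dot = (theta * alpha_hat * eta[r] * M * w[h] ** theta
             / (beta_hat * g[h])) ** (1 / (1 + theta))
    u = min(max(u_dot, umin[h]), umax[h])       # clip to [umin, umax]
    return (alpha_hat * eta[r] * M * (a[h] + (w[h] / u) ** theta)
            + beta_hat * g[h] * u)

omega = [[cell(h, r) for r in range(n)] for h in range(n)]
best_perm = min(itertools.permutations(range(n)),
                key=lambda p: sum(omega[p[r]][r] for r in range(n)))
best_cost = sum(omega[best_perm[r]][r] for r in range(n))
print("job per position:", best_perm, "cost:", round(best_cost, 4))
```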
Theorem 1. The problem 1\left|{P}_h^A(u_h) = P_h(u_h)\max\{r^{\beta_h}, \delta\}\right|\widehat{\alpha}\sum_{h = 1}^{\breve{n}}\eta_hP_{[h]}^A +\widehat{\beta}\sum_{h = 1}^{\breve{n}}g_{[h]}u_{[h]} can be solved by Algorithm 1 in O({\breve{n}}^3) time.
Proof. In Step 1, solving {\overbrace{AP}} needs O({\breve{n}}^3) time; Step 2 requires O({\breve{n}}) time, hence, the total time of Algorithm 1 is O({\breve{n}}^3) .
Example 1. Consider an 8-job instance, where P_h(u_h) = a_h+\left(\frac{w_h}{u_h}\right)^\theta , \widehat{\alpha} = \widehat{\beta} = 1, \delta = 0.65, \theta = 2 , the positional weights are \eta_1 = 26, \eta_2 = 5, \eta_3 = 27, \eta_4 = 9, \eta_5 = 4, \eta_6 = 25, \eta_7 = 8, \eta_8 = 3, and the other parameters are given in Table 1.
Solution: By Algorithm 1, P_h'(u_h) = \frac{d P_h(u_h)}{d u_h} = -\theta \left(w_h\right)^\theta\left({u_h}\right)^{-(\theta+1)} , the values \widehat{\Omega}_{hr} are given in Table 2, the optimal sequence is \pi^* = J_6\rightarrow J_3\rightarrow J_4\rightarrow J_2\rightarrow J_7\rightarrow J_1 \rightarrow J_8\rightarrow J_5 , the optimal resource allocations (see Table 3) are u_{6}^* = 3.2105, u_{3}^* = 5, u_{4}^* = 4.5789, u_{2}^* = 3.8620, u_{7}^* = 2.2894, u_{1}^* = 4, u_{8}^* = 2.2525, u_{5}^* = 3 and \widehat{\alpha}\sum_{h = 1}^{\breve{n}}\eta_hP_{[h]}^A +\widehat{\beta}\sum_{h = 1}^{\breve{n}}g_{[h]}u_{[h]} = 963.5719 .
5.
A special case
Lemma 2. (Hardy et al. [37]). "The sum of products \sum_{h = 1}^{\breve{n}} X_h Y_{h} is minimized if sequence X_1, X_2, \ldots, X_n is ordered non-decreasingly and sequence Y_1, Y_2, \ldots, Y_n is ordered non-increasingly or vice versa."
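Lemma 2 can be illustrated by comparing the sorted pairing against full enumeration (made-up data):

```python
import itertools

# Hardy-Littlewood-Polya rule (Lemma 2): pairing a non-decreasing sequence
# X with a non-increasing sequence Y minimizes the sum of products.
X = [3.0, 1.0, 4.0, 2.0]
Y = [5.0, 9.0, 2.0, 7.0]

hlp = sum(x * y for x, y in zip(sorted(X), sorted(Y, reverse=True)))
brute = min(sum(x * y for x, y in zip(X, p))
            for p in itertools.permutations(Y))
assert abs(hlp - brute) < 1e-9
print("minimal sum of products:", hlp)
```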
For the convex resource consumption function {P}_{h}^A(u_h) = \left(\frac{w_h}{u_h}\right)^\theta\max\{r^{\beta}, \delta\} ( u_h > 0 ), i.e., a_h = 0 and \beta_h = \beta , it follows that

{F}(\pi, u_{[h]}) = \widehat{\alpha} \sum_{h = 1}^{\breve{n}}\eta_h\max\{h^{\beta}, \delta\}\left(\frac{w_{[h]}}{u_{[h]}}\right)^\theta+\widehat{\beta}\sum_{h = 1}^{\breve{n}}g_{[h]}u_{[h]} .
Lemma 3. For a given sequence \pi = [J_{[1]}, J_{[2]}, \ldots, J_{[{\breve{n}}]}] , the optimal resource allocation of the problem 1\left|{P}_{h}^A(u_h) = \left(\frac{w_h}{u_h}\right)^\theta\max\{r^{\beta}, \delta\}\right|\widehat{\alpha} \sum_{h = 1}^{\breve{n}}\eta_hP_{[h]}^A+\widehat{\beta}\sum_{h = 1}^{\breve{n}}g_{[h]}u_{[h]} is:

u_{[h]}^* = \left(\frac{ \theta\widehat{\alpha}\eta_h\max\{h^{\beta}, \delta\}\left({w_{[h]}}\right)^\theta}{\widehat{\beta} g_{[h]} }\right)^{\frac{1}{1+\theta}}. \qquad (10)
Proof. Taking the derivative of {F}(\pi, u_{[h]}) = \widehat{\alpha} \sum_{h = 1}^{\breve{n}}\eta_h\max\{h^{\beta}, \delta\}\left(\frac{w_{[h]}}{u_{[h]}}\right)^\theta+\widehat{\beta}\sum_{h = 1}^{\breve{n}}g_{[h]}u_{[h]} with respect to u_{[h]} and setting the derivative value as 0, it follows that

\frac{\partial {F}(\pi, u_{[h]})}{\partial u_{[h]}} = -\theta\widehat{\alpha}\eta_h\max\{h^{\beta}, \delta\}\frac{\left(w_{[h]}\right)^\theta}{\left(u_{[h]}\right)^{\theta+1}}+\widehat{\beta} g_{[h]} = 0, \qquad (11)
then we obtain: u_{[h]} = \left(\frac{ \theta\widehat{\alpha}\eta_h\max\{h^{\beta}, \delta\}\left({w_{[h]}}\right)^\theta}{\widehat{\beta} g_{[h]} }\right)^{\frac{1}{1+\theta}}.
By substituting Eq (10) into {F}(\pi, u_{[h]}) = \widehat{\alpha} \sum_{h = 1}^{\breve{n}}\eta_h\max\{h^{\beta}, \delta\}\left(\frac{w_{[h]}}{u_{[h]}}\right)^\theta +\widehat{\beta}\sum_{h = 1}^{\breve{n}}g_{[h]}u_{[h]} , we have:

{F}(\pi) = \left(\theta^{\frac{1}{1+\theta}}+\theta^{\frac{-\theta}{1+\theta}}\right)\left(\widehat{\alpha}\right)^{\frac{1}{1+\theta}}\left(\widehat{\beta}\right)^{\frac{\theta}{1+\theta}}\sum_{h = 1}^{\breve{n}}\left(\eta_h\max\{h^{\beta}, \delta\}\right)^{\frac{1}{1+\theta}}\left(g_{[h]}w_{[h]}\right)^{\frac{\theta}{1+\theta}}. \qquad (12)
Equation (12) can be minimized by Lemma 2 with X_h = (\eta_h\max\{h^{\beta}, \delta\})^{\frac{1}{1+\theta}} and Y_h = \left(g_{[h]}w_{[h]}\right)^{\frac{\theta}{1+\theta}} ; hence, we introduce the following algorithm to solve the problem 1\left|{P}_{h}^A(u_h) = \left(\frac{w_h}{u_h}\right)^\theta\max\{r^{\beta}, \delta\}\right| \widehat{\alpha} \sum_{h = 1}^{\breve{n}}\eta_hP_{[h]}^A+\widehat{\beta}\sum_{h = 1}^{\breve{n}}g_{[h]}u_{[h]} .
Algorithm 2
Step 1. Determine an optimal job sequence by using Lemma 2 with X_h = (\eta_h\max\{h^{\beta}, \delta\})^{\frac{1}{1+\theta}} and Y_h = \left(g_{h}w_{h}\right)^{\frac{\theta}{1+\theta}} .
Step 2. Calculate optimal resource allocation by Lemma 3 (see Eq (10)).
Theorem 2. The problem 1\left|{P}_{h}^A(u_h) = \left(\frac{w_h}{u_h}\right)^\theta\max\{r^{\beta}, \delta\}\right|\widehat{\alpha} \sum_{h = 1}^{\breve{n}}\eta_hP_{[h]}^A+\widehat{\beta}\sum_{h = 1}^{\breve{n}}g_{[h]}u_{[h]} can be solved by Algorithm 2 in O(\breve{n}\log \breve{n}) time.
Example 2. Consider an 8-job instance, where {P}_{h}^A(u_h) = \left(\frac{w_h}{u_h}\right)^\theta\max\{r^{\beta}, \delta\} , \widehat{\alpha} = \widehat{\beta} = 1, \beta = -0.3, \delta = 0.65, \theta = 2 , the positional weights are \eta_1 = 26, \eta_2 = 5, \eta_3 = 27, \eta_4 = 9, \eta_5 = 4, \eta_6 = 25, \eta_7 = 8, \eta_8 = 3, and the other parameters are given in Table 4.
Solution: By Algorithm 2, X_1 = (26\max\{1^{-0.3}, 0.65\})^{\frac{1}{1+2}} = 2.9625 , X_2 = (5\max\{2^{-0.3}, 0.65\})^{\frac{1}{1+2}} = 1.5955 , X_3 = (27\max\{3^{-0.3}, 0.65\})^{\frac{1}{1+2}} = 2.6879 , X_4 = (9\max\{4^{-0.3}, 0.65\})^{\frac{1}{1+2}} = 1.8108 , X_5 = (4\max\{5^{-0.3}, 0.65\})^{\frac{1}{1+2}} = 1.3751 , X_6 = (25\max\{6^{-0.3}, 0.65\})^{\frac{1}{1+2}} = 2.5329 , X_7 = (8\max\{7^{-0.3}, 0.65\})^{\frac{1}{1+2}} = 1.7325 , X_8 = (3\max\{8^{-0.3}, 0.65\})^{\frac{1}{1+2}} = 1.2493 , Y_1 = \left(2\times3\right)^{\frac{2}{1+2}} = 3.3019 , Y_2 = \left(4\times5\right)^{\frac{2}{1+2}} = 7.3681 , Y_3 = \left(12\times10\right)^{\frac{2}{1+2}} = 24.3288 , Y_4 = \left(8\times9\right)^{\frac{2}{1+2}} = 17.3070 , Y_5 = \left(14\times9\right)^{\frac{2}{1+2}} = 25.1332 , Y_6 = \left(7\times11\right)^{\frac{2}{1+2}} = 18.0992 , Y_7 = \left(9\times6\right)^{\frac{2}{1+2}} = 14.2886 , Y_8 = \left(5\times7\right)^{\frac{2}{1+2}} = 10.6999 , the optimal sequence is \pi^* = J_1\rightarrow J_6\rightarrow J_2\rightarrow J_7\rightarrow J_3\rightarrow J_8 \rightarrow J_4\rightarrow J_5 , the optimal resource allocations are u_{1}^* = \left(\frac{ 2\times26\times\max\{1^{-0.3}, 0.65\}\times\left({2}\right)^2}{3 }\right)^{\frac{1}{1+2}} = 4.1082 , u_{6}^* = \left(\frac{ 2\times5\times\max\{2^{-0.3}, 0.65\}\times\left({7}\right)^2}{11 }\right)^{\frac{1}{1+2}} = 3.3075 , u_{2}^* = \left(\frac{ 2\times27\times\max\{3^{-0.3}, 0.65\}\times\left({4}\right)^2}{5 }\right)^{\frac{1}{1+2}} = 4.9904 , u_{7}^* = \left(\frac{ 2\times9\times\max\{4^{-0.3}, 0.65\}\times\left({9}\right)^2}{6 }\right)^{\frac{1}{1+2}} = 5.4325 , u_{3}^* = \left(\frac{ 2\times4\times\max\{5^{-0.3}, 0.65\}\times\left({12}\right)^2}{10 }\right)^{\frac{1}{1+2}} = 4.2149 , u_{8}^* = \left(\frac{ 2\times25\times\max\{6^{-0.3}, 0.65\}\times\left({5}\right)^2}{7 }\right)^{\frac{1}{1+2}} = 4.8780 , u_{4}^* = \left(\frac{ 2\times8\times\max\{7^{-0.3}, 0.65\}\times\left({8}\right)^2}{9 }\right)^{\frac{1}{1+2}} = 4.1975, u_{5}^* = 
\left(\frac{ 2\times3\times\max\{8^{-0.3}, 0.65\}\times\left({14}\right)^2}{9 }\right)^{\frac{1}{1+2}} = 4.3957 and \widehat{\alpha}\sum_{h = 1}^{\breve{n}}\eta_hP_{[h]}^A +\widehat{\beta}\sum_{h = 1}^{\breve{n}}g_{[h]}u_{[h]} = (2^{\frac{1}{1+2}}+2^{\frac{-2}{1+2}}) \sum_{h = 1}^{\breve{n}}X_hY_{[h]} = (2^{\frac{1}{1+2}}+2^{\frac{-2}{1+2}})\times(2.9625\times3.3019+1.5955\times18.0992+2.6879\times7.3681 +1.8108\times14.2886+1.3751\times24.3288+ 2.5329\times10.6999+ 1.7325\times17.3070+1.2493\times25.1332) = 389.8396 .
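The computation of Example 2 can be reproduced with a short script implementing Algorithm 2 (the workloads w_h and costs g_h below are read off the expressions above for Table 4):

```python
# Algorithm 2 applied to the data of Example 2 (theta = 2, beta = -0.3,
# delta = 0.65, alpha-hat = beta-hat = 1); jobs are 0-indexed as J1..J8.
theta, beta, delta = 2.0, -0.3, 0.65
eta = [26, 5, 27, 9, 4, 25, 8, 3]        # positional weights
w   = [2, 4, 12, 8, 14, 7, 9, 5]         # workloads (from Table 4)
g   = [3, 5, 10, 9, 9, 11, 6, 7]         # resource costs (from Table 4)
n = len(eta)

X = [(eta[h] * max((h + 1) ** beta, delta)) ** (1 / (1 + theta))
     for h in range(n)]
Y = [(g[j] * w[j]) ** (theta / (1 + theta)) for j in range(n)]

# Lemma 2: positions sorted by X descending receive jobs sorted by Y ascending.
pos = sorted(range(n), key=lambda h: X[h], reverse=True)
job = sorted(range(n), key=lambda j: Y[j])
seq = [None] * n                         # job placed in each position
for p, j in zip(pos, job):
    seq[p] = j

obj = (theta ** (1 / (1 + theta)) + theta ** (-theta / (1 + theta))) \
      * sum(X[h] * Y[seq[h]] for h in range(n))
print([f"J{j + 1}" for j in seq])        # J1, J6, J2, J7, J3, J8, J4, J5
print(round(obj, 4))                     # objective value, approx. 389.84
```

This reproduces the sequence \pi^* = J_1\rightarrow J_6\rightarrow J_2\rightarrow J_7\rightarrow J_3\rightarrow J_8\rightarrow J_4\rightarrow J_5 and an objective value of about 389.84 (the rounded intermediate values quoted above give 389.8396).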
6.
Conclusions
This article discussed single-machine scheduling problems with a truncated learning effect and resource allocation. Under a general resource consumption function, some optimality properties were given and it was shown that the problem can be solved in O(\breve{n}^3) time. Future research might investigate multi-machine (e.g., flow shop) scheduling with learning effects and resource allocation, or serial-batching scheduling problems with learning effects and resource allocation (most multi-machine and serial-batching problems are NP-hard, and quick exact solutions are difficult to obtain), as well as medical (hospital) scheduling problems (see Wu et al. [38]) or open job scheduling problems (see Zhuang et al. [39]).
Acknowledgments
This research was supported by the National Natural Science Regional Foundation of China (71861031 and 72061029).
Conflict of interest
The authors declare that they have no conflicts of interest.