Editorial Special Issues

The imperative for ecosystem regeneration by supply chains: sixth industrial revolution

  • Received: 27 January 2025; Revised: 06 March 2025; Accepted: 06 March 2025; Published: 11 March 2025
  • This editorial letter contributes to the ongoing discussion charting potential paths for the proposed sixth industrial revolution. Contributing to that dialogue, it suggests laying the groundwork for eco-regenerative industries and supply chains, moving beyond the existing sustainability movement to ecosystem regeneration as the industry's next frontier.

    Citation: Shahryar Sorooshian. The imperative for ecosystem regeneration by supply chains: sixth industrial revolution[J]. AIMS Environmental Science, 2025, 12(2): 253-255. doi: 10.3934/environsci.2025012

    Single-machine scheduling problems with position-based learning effects have long attracted scholarly attention. Early pioneering work can be found in Biskup [1] and Mosheiov [2], who modeled the actual processing time of a job as, respectively, a constant and a decreasing function of its position. Mosheiov [3] described the parallel-machine case. Mosheiov and Sidney [4] introduced job-dependent learning effects and showed that the problem can be transformed into an assignment problem. Since then, many scholars have studied similar or improved models, producing a rich body of results from increasingly open research perspectives. Mosheiov and Sidney [5] studied the case in which all jobs share a common due date and proved that the model is polynomially solvable. Bachman and Janiak [6] provided a diagram-based method of proof. Lee, Wu and Sung [7] discussed a two-criteria problem and proposed techniques for searching for the optimal solution, showing that sequencing jobs in non-increasing order of $\omega_j$ and by the shortest-processing-time rule achieves optimality. Lee and Wu [8] solved a two-machine flow shop scheduling problem with a heuristic algorithm. Janiak and Rudek [9] discussed complexity results for a single-machine scheduling problem minimizing the number of tardy jobs. Zhao, Zhang and Tang [10] investigated polynomially solvable cases of single-machine, parallel-machine, and flow shop problems in environments with learning effects. Cheng, Sun and Yu [11] considered permutation flow shop scheduling problems with a learning effect on no-idle dominant machines. Eren and Ertan [12] studied the problem of minimizing total tardiness under learning effects and solved it with a 0-1 integer programming model. Zhang and Yan [13], Zhang et al. [14], Liu and Pan [15] and Liu, Bao and Zheng [16] have all studied scheduling problems with learning effects from the perspective of model innovation or improvement.

    As research deepened, many scholars found that the processing time $p$ is not always a constant: processing times differ across environments, machines, and workers. This motivated the stochastic scheduling problem. Pinedo and Rammouz [17], Frenk [18] and Zhang, Wu and Zhou [19] did much of the pioneering work. Building on these results, Zhang, Wu and Zhou [20] studied single-machine stochastic scheduling with position-based learning effects and derived optimal scheduling strategies for stochastic problems with and without machine failures. Ji et al. [21] considered parallel-machine scheduling with job deterioration and DeJong's learning effect; they proved the proposed problem polynomially solvable and provided a fully polynomial-time approximation scheme. Qin, Liu and Kuang [22] proposed a labor scheduling model with learning effects; by piecewise linearizing the learning curve, the mixed 0-1 nonlinear programming model (MNLP) was transformed into a mixed 0-1 linear programming model (MLP) for solution. Zhang, Wang and Bai [23] proposed a group scheduling model with both deterioration and learning effects. Xu et al. [24] proposed a multi-machine order scheduling problem with learning effects and used simulated annealing and particle swarm optimization to obtain near-optimal solutions. Wu and Wang [25] considered a single-machine scheduling problem with truncated learning effects based on processing times and past-sequence-dependent job delivery times. Vile et al. [26], Souiss, Benmansour and Artiba [27], Liu et al. [28] and Liu et al. [29] applied scheduling models to emergency medical services, supply chains, manufacturing management and graph theory, respectively. Li [30] studied jobs with random processing times under a job-based learning rate and provided a method of handling such problems via the difference between EVPI and EVwPI. Toksari and Atalay [31] studied four problems in which learning effects and job rejection coexist. To reduce production costs, Chen et al. [32] focused on multi-project scheduling and multi-skilled labor allocation. Shi et al. [33] applied a machine learning model to medical treatment, estimating the service level and its probability distribution and using various optimization models to solve the scheduling programs. Wang et al. [34] improved upon and extended several existing problems.

    Ham, Hitomi and Yoshida [35] first proposed group technology (GT): jobs are divided into groups according to type or characteristics, each group is produced by the same means and, once jobs are put into production, they cannot be stopped. Lee and Wu [36] proposed a group scheduling learning model in which the learning effect depends not only on the job position but also on the group position; they demonstrated that the problem is polynomially solvable under the proposed model. Yang and Chand [37] studied a single-machine group scheduling problem with learning and forgetting effects to minimize the total completion time. Zhang and Yan [38] proposed a group scheduling model with deterioration and learning effects under which the makespan and total completion time problems are polynomially solvable. Similarly, Ma et al. [39], Sun et al. [40] and Liu et al. [41] provided appropriate improvements to the model. Li and Zhao [42] considered the single-machine group scheduling problem with multiple due-window assignments, likewise dividing the jobs into groups to improve production efficiency and save resources; this work, however, only improves on existing problems and solutions rather than achieving groundbreaking results. Liang [43] inquired into a model with deteriorating jobs under GT to minimize the weighted sum of makespan and resource allocation costs. Wang et al. [44] considered due-date assignment and group technology at the same time, determining the optimal sequence and due-date assignment of groups and intra-group jobs by minimizing the weighted sum of the absolute lateness values and the due-date assignment costs. Wang and Ye [45] established a stochastic group scheduling model. Based on sequence-dependent setup times (SDST) and preventive maintenance, Jain [46] proposed a genetic-algorithm-based method to minimize the makespan performance measure.

    The literature above assists in establishing problem models and solution methods for classical and stochastic sequencing with learning effects. In a learning effect, the learning factor is a very important quantity that directly affects the actual processing time of a job. Many influences, internal or external, act on the learning factor, and they are often random. In that case the learning factor is no longer a constant but a variable, and sometimes its probability density can be calculated. These aspects were not considered in previous works, so the discussion of random learning factors in this study has practical significance. Based on these ideas, this study establishes a new stochastic scheduling model in which workers participate in production, job processing has a position-based learning effect, and the learning index is random. The problems are solved by using exchange arguments and heuristic algorithms to find the optimal sequencing.

    Consider $n$ independent jobs processed on one machine. A job can be processed at any time and cannot be interrupted during processing.

    The first model is

    $$1\,\big|\,p_j^r=p_jr^{-a_j},\ a_j\sim U(0,\lambda_j)\,\big|\,E[f(C_j)],$$

    where $f(C_j)$ is a function of the completion time $C_j$ of job $J_j$.

    First, we give some lemmas.

    Lemma 2.1. [16] Let $X$ be a random variable, continuous on its domain, with density function $f_X(x)$, and let $Y=g(X)$. Then $E(Y)=E[g(X)]=\int_{-\infty}^{+\infty}g(x)f_X(x)\,dx$.

    Lemma 2.2. [16] Let $X$ be as in Lemma 2.1, and let the function $g(x)$ be everywhere differentiable and strictly monotone, with $Y=g(X)$. Then

    $$f_Y(y)=\begin{cases}f_X[h(y)]\,|h'(y)|, & \alpha<y<\beta,\\ 0, & \text{otherwise},\end{cases}\tag{2.1}$$

    in which $\alpha,\beta$ are the minimum and maximum values of $g(-\infty),g(+\infty)$ respectively, and $h(y)=g^{-1}(y)$.

    Lemma 2.3. For $x>1$, $y(x)=\frac{1}{\ln x}\left(1-\frac{1}{x}\right)$ is monotone non-increasing, while $z(x)=1-\frac{1}{\ln x}\left(1-\frac{1}{x}\right)$ is a monotone increasing function with $z(x)>0$.

    The conclusion is easily proved by differentiation and taking limits.

    Lemma 2.4. If the random variable $a\sim U(0,\lambda)$ and $X=pr^{-a}$, where $p$ is a constant and $r>1$, then $f_X(x)=\frac{1}{\lambda}\cdot\frac{1}{x\ln r}$.

    Proof. From (2.1), when $a\sim U(0,\lambda)$, we get $X\in(pr^{-\lambda},\,p)$ and

    $$f_X(x)=f_a[h(x)]\,|h'(x)|=\frac{1}{\lambda}\left|\left(\frac{\ln p-\ln x}{\ln r}\right)'\right|=\frac{1}{\lambda}\cdot\frac{1}{x\ln r}.\tag{2.2}$$
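As a quick numerical sanity check of Lemma 2.4 (our own sketch, not part of the paper; all helper names are hypothetical), the density $\frac{1}{\lambda x\ln r}$ can be integrated to recover both the total probability $1$ and the expected actual processing time $E[pr^{-a}]=\frac{p(1-r^{-\lambda})}{\lambda\ln r}$ that is used repeatedly in the proofs below:

```python
import math

def density(x, p, lam, r):
    """Density of X = p * r**(-a) with a ~ U(0, lam), from Lemma 2.4, on (p*r**(-lam), p)."""
    return 1.0 / (lam * x * math.log(r))

def expected_closed_form(p, lam, r):
    """E[p * r**(-a)] = p * (1 - r**(-lam)) / (lam * ln r)."""
    return p * (1.0 - r ** (-lam)) / (lam * math.log(r))

def integrate(f, lo, hi, steps=100_000):
    """Midpoint-rule quadrature."""
    h = (hi - lo) / steps
    return sum(f(lo + (k + 0.5) * h) for k in range(steps)) * h

p, lam, r = 3.0, 4.0, 2
lo, hi = p * r ** (-lam), p
mass = integrate(lambda x: density(x, p, lam, r), lo, hi)      # total probability
mean = integrate(lambda x: x * density(x, p, lam, r), lo, hi)  # E[X]
print(mass)   # ~1.0: the density integrates to one over its support
print(mean, expected_closed_form(p, lam, r))
```

The integrand $x\,f_X(x)$ is the constant $\frac{1}{\lambda\ln r}$, which is why the expectation has the simple closed form used in Theorems 2.1 and 2.2.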

    Second, we give the symbols and their meanings in the theorems (see Table 1).

    Table 1.  The symbols and their meanings.
    Symbol  Description
    $J$  A collection of independent jobs
    $J_j$  A job in $J$
    $r$  Job position in the sequence
    $a$  Learning rate, $a>0$
    $\omega_i$  Weight of job $J_i$
    $p_j$  Normal processing time of $J_j$; when random, $p_j\sim U(0,\lambda_j)$
    $S,S'$  Job sequences
    $d_i$  The due date of $J_i$
    $L_{\max}$  Maximum lateness
    $T_j$  Tardiness of job $J_j$
    $U_j$  Tardiness indicator of job $J_j$
    $p_i^r(S)$  Random actual processing time of $J_i$ when it occupies position $r$ in $S$
    $\pi_1,\pi_2$  Partial sequences not containing $J_i,J_j$
    $E(\cdot)$  Mathematical expectation of a random variable
    $t_0$  Completion time of the $(r-1)$-th job, i.e., of the jobs in the sequence other than $J_i,J_j$
    $C_j(S)$  Completion time of $J_j$ in $S$
    $\sum\omega_jC_j$  Total weighted completion time of all jobs


    Theorem 2.1. For $1|p_j^r=p_jr^{-a_j},\ a_j\sim U(0,\lambda_j)|E(C_{\max})$, suppose $p$ is agreeable with the parameters $\lambda$, that is, for all $J_i,J_j$, $p_i\le p_j$ if and only if $\lambda_i\le\lambda_j$. Then arranging the job with the larger $\lambda$ first, we obtain the optimal sequence for the problem.

    Proof. (1) The exchange involves the job in the first position: the jobs in the first and second positions are exchanged.

    According to the hypothesis and Lemma 2.1, we have

    $$E[C_2(S)]=E(t_0)+E[p_1^1(S)]+E[p_2^2(S)]=E(t_0)+p_1+\frac{p_2}{\ln 2^{\lambda_2}}\left(1-\frac{1}{2^{\lambda_2}}\right),\tag{2.3}$$
    $$E[C_1(S')]=E(t_0)+E[p_2^1(S')]+E[p_1^2(S')]=E(t_0)+p_2+\frac{p_1}{\ln 2^{\lambda_1}}\left(1-\frac{1}{2^{\lambda_1}}\right).\tag{2.4}$$

    Note that $p_1\le p_2$, $\lambda_1\le\lambda_2$ and $2^{\lambda_2}\ge 2^{\lambda_1}>1$; from Lemma 2.3, we can get

    $$\frac{1}{\ln 2^{\lambda_2}}\left(1-\frac{1}{2^{\lambda_2}}\right)\le\frac{1}{\ln 2^{\lambda_1}}\left(1-\frac{1}{2^{\lambda_1}}\right),\qquad 1-\frac{1}{\ln 2^{\lambda_1}}\left(1-\frac{1}{2^{\lambda_1}}\right)>0.\tag{2.5}$$

    From (2.3)–(2.5), it can be obtained that

    $$\begin{aligned}E[C_2(S)]-E[C_1(S')]&=p_1-p_2+\frac{p_2}{\ln 2^{\lambda_2}}\left(1-\frac{1}{2^{\lambda_2}}\right)-\frac{p_1}{\ln 2^{\lambda_1}}\left(1-\frac{1}{2^{\lambda_1}}\right)\\&\le p_1-p_2+\frac{p_2}{\ln 2^{\lambda_1}}\left(1-\frac{1}{2^{\lambda_1}}\right)-\frac{p_1}{\ln 2^{\lambda_1}}\left(1-\frac{1}{2^{\lambda_1}}\right)\\&=\left[1-\frac{1}{\ln 2^{\lambda_1}}\left(1-\frac{1}{2^{\lambda_1}}\right)\right](p_1-p_2)\le 0.\end{aligned}$$

    (2) Neither exchanged job occupies the first position, that is, $r\ge 2$. We compare $E[C_j(S)]$ of $J_j$ in $S$ with $E[C_i(S')]$ of $J_i$ in $S'$.

    From the hypothesis and Lemma 2.1, we get

    $$E[C_j(S)]=E(t_0)+E[p_i^r(S)]+E[p_j^{r+1}(S)]=E(t_0)+E[p_ir^{-a_i}]+E[p_j(r+1)^{-a_j}]=E(t_0)+\frac{p_i}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)+\frac{p_j}{\ln(r+1)^{\lambda_j}}\left[1-\frac{1}{(r+1)^{\lambda_j}}\right],\tag{2.6}$$
    $$E[C_i(S')]=E(t_0)+E[p_j^r(S')]+E[p_i^{r+1}(S')]=E(t_0)+E[p_jr^{-a_i}]+E[p_i(r+1)^{-a_j}]=E(t_0)+\frac{p_j}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)+\frac{p_i}{\ln(r+1)^{\lambda_j}}\left[1-\frac{1}{(r+1)^{\lambda_j}}\right].\tag{2.7}$$

    Notice the hypothesis of Theorem 2.1: for all $J_i,J_j$, $p_i\le p_j$ if and only if $\lambda_i\le\lambda_j$, and when $r\ge 2$, $r^{\lambda_i}>1$. From Lemmas 2.2 and 2.3 and (2.6), (2.7), we get

    $$E[C_j(S)]-E[C_i(S')]=\left\{\frac{1}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)-\frac{1}{\ln(r+1)^{\lambda_j}}\left[1-\frac{1}{(r+1)^{\lambda_j}}\right]\right\}(p_i-p_j)\le 0.$$

    Proof complete.
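The two-job exchange computation in part (1) of the proof can be spot-checked numerically. The sketch below (our own illustration, not from the paper; function names are hypothetical) evaluates the closed forms behind (2.3) and (2.4) for an agreeable instance with $p_1\le p_2$ and $\lambda_1\le\lambda_2$:

```python
import math

def g(p, lam, r):
    """Expected actual processing time E[p * r**(-a)], a ~ U(0, lam); equals p at r = 1."""
    return p if r == 1 else p * (1.0 - r ** (-lam)) / (lam * math.log(r))

def expected_makespan(jobs):
    """jobs: list of (p, lam) in processing order; sum of expected actual times."""
    return sum(g(p, lam, r) for r, (p, lam) in enumerate(jobs, start=1))

# Agreeable instance: p1 <= p2 and lam1 <= lam2.
j1, j2 = (1.0, 2.0), (3.0, 4.0)
d = expected_makespan([j1, j2]) - expected_makespan([j2, j1])
print(d)  # negative, matching the sign of E[C2(S)] - E[C1(S')] derived above
```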

    Theorem 2.2. For the model $1|p_j^r=p_jr^{-a_j},\ a_j\sim U(0,\lambda_j)|E(\sum\omega_jC_j)$, if the processing times and weights satisfy $\frac{p_i}{p_j}\le\min\left\{1,\frac{\omega_i}{\omega_j}\right\}$, the optimal order is obtained by arranging the job with the larger $\lambda_j\omega_j$ first.

    Proof. Suppose $p_i\le p_j$; according to the previous assumptions, we have

    $$E[C_i(S)]=E(t_0)+E[p_i^r]=E(t_0)+\frac{p_i}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right),\tag{2.8}$$
    $$E[C_j(S)]=E(t_0)+E[p_i^r]+E[p_j^{r+1}]=E(t_0)+\frac{p_i}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)+\frac{p_j}{\ln(r+1)^{\lambda_j}}\left[1-\frac{1}{(r+1)^{\lambda_j}}\right],\tag{2.9}$$
    $$E[C_j(S')]=E(t_0)+E[p_j^r]=E(t_0)+\frac{p_j}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right),\tag{2.10}$$
    $$E[C_i(S')]=E(t_0)+E[p_j^r]+E[p_i^{r+1}]=E(t_0)+\frac{p_j}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)+\frac{p_i}{\ln(r+1)^{\lambda_j}}\left[1-\frac{1}{(r+1)^{\lambda_j}}\right].\tag{2.11}$$

    Noting $\frac{p_i}{p_j}\le\min\left\{1,\frac{\omega_i}{\omega_j}\right\}$, $\omega_j\lambda_j\ge\omega_i\lambda_i$ and (2.8)–(2.11), we can get

    $$\begin{aligned}&\omega_iE[C_i(S)]+\omega_jE[C_j(S)]-\omega_iE[C_i(S')]-\omega_jE[C_j(S')]\\&=(\omega_i+\omega_j)(p_i-p_j)\frac{1}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)+(\omega_jp_j-\omega_ip_i)\frac{1}{\ln(r+1)^{\lambda_j}}\left[1-\frac{1}{(r+1)^{\lambda_j}}\right]\\&\le(\omega_i+\omega_j)(p_i-p_j)\frac{1}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)+(\omega_jp_j-\omega_ip_i)\frac{1}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)\\&=(\omega_jp_i-\omega_ip_j)\frac{1}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)\le 0.\end{aligned}$$

    Theorem 2.2 is proved.

    Theorem 2.3. When $\lambda_i\le\lambda_j$ if and only if $d_i\le d_j$ for all $J_i,J_j$, the EDD rule, that is, sequencing in non-decreasing order of $d_j$, is the optimal algorithm for the problem $1|p_j^r=p_jr^{-a},\ p_j\sim U(0,\lambda_j)|E(L_{\max})$.

    Proof. First, consider $\lambda_i\le\lambda_j$. The parameters of $J_i,J_j$ satisfy $\lambda_i\le\lambda_j\Leftrightarrow d_i\le d_j$, but the sequence $S$ violates the EDD rule: $J_j$ is processed immediately before $J_i$ in $S$, whereas $J_i$ is processed before $J_j$ in $S'$. We prove that $S$ can be transformed into the non-decreasing-$d_j$ sequence $S'$ and that this transformation does not increase the maximum lateness. Furthermore, suppose the last job of $\pi_1$ occupies the $(r-1)$-th position and its expected completion time is $E(t_0)$. Since $p_j\sim U(0,\lambda_j)$, so that $E[p_jr^{-a}]=\frac{1}{2}\lambda_jr^{-a}$, the expected lateness of $J_i,J_j$ in sequence $S$ is

    $$E[L_i(S)]=E[C_i(S)]-d_i=E(t_0)+\frac{1}{2}\lambda_jr^{-a}+\frac{1}{2}\lambda_i(r+1)^{-a}-d_i,\tag{2.12}$$
    $$E[L_j(S)]=E[C_j(S)]-d_j=E(t_0)+\frac{1}{2}\lambda_jr^{-a}-d_j.\tag{2.13}$$

    Similarly, the expected lateness of $J_i,J_j$ in sequence $S'$ is

    $$E[L_i(S')]=E[C_i(S')]-d_i=E(t_0)+\frac{1}{2}\lambda_ir^{-a}-d_i,\tag{2.14}$$
    $$E[L_j(S')]=E[C_j(S')]-d_j=E(t_0)+\frac{1}{2}\lambda_ir^{-a}+\frac{1}{2}\lambda_j(r+1)^{-a}-d_j.\tag{2.15}$$

    Because $\lambda_i\le\lambda_j$, $d_i\le d_j$ and (2.12)–(2.15),

    $$E[L_i(S)]-E[L_j(S')]=\frac{1}{2}(\lambda_j-\lambda_i)\left[r^{-a}-(r+1)^{-a}\right]+(d_j-d_i)\ge 0,\tag{2.16}$$
    $$E[L_i(S)]-E[L_i(S')]=\frac{1}{2}(\lambda_j-\lambda_i)r^{-a}+\frac{1}{2}\lambda_i(r+1)^{-a}>0.\tag{2.17}$$

    Then from (2.16) and (2.17), we have

    $$\max\{E[L_i(S')],E[L_j(S')]\}\le\max\{E[L_i(S)],E[L_j(S)]\}.\tag{2.18}$$

    Thus, exchanging the positions of $J_i$ and $J_j$ does not increase the maximum expected lateness. After finitely many such exchanges, the optimal sequence is transformed into non-decreasing order of the due dates $d_j$, and the expected maximum lateness does not increase.
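The exchange step behind (2.12)–(2.18) can be sketched numerically (our own illustration, not from the paper; names are hypothetical), using $E[p_jr^{-a}]=\frac{1}{2}\lambda_jr^{-a}$ for $p_j\sim U(0,\lambda_j)$:

```python
def expected_lateness(order, a, t0=0.0, start_pos=1):
    """order: list of (lam, d); expected lateness of each job, using E[p r**(-a)] = lam * r**(-a) / 2."""
    t, out = t0, []
    for r, (lam, d) in enumerate(order, start=start_pos):
        t += 0.5 * lam * r ** (-a)   # expected actual processing time at position r
        out.append(t - d)            # expected lateness E[L] = E[C] - d
    return out

# Agreeable pair (lam_i <= lam_j <=> d_i <= d_j): EDD order vs. the violating order.
ji, jj = (2.0, 3.0), (5.0, 6.0)       # (lam, d)
a = 0.5
bad = max(expected_lateness([jj, ji], a))   # J_j before J_i, violating EDD
good = max(expected_lateness([ji, jj], a))  # EDD order
print(good <= bad)  # True, as inequality (2.18) asserts
```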

    Theorem 2.4. For $1|p_j^r=p_jr^{-a},\ p_j\sim U(0,\lambda_j)|E(\sum T_j)$, when $\lambda_i\le\lambda_j$ if and only if $d_i\le d_j$, sequencing in non-decreasing order of $d_j$ is the optimal algorithm.

    Proof. First, assume $d_i\le d_j$. The setting is the same as in Theorem 2.3: in $S$, $J_j$ is processed immediately before $J_i$, and $S'$ is obtained by exchanging them. We now show that exchanging the positions of $J_i$ and $J_j$ does not increase the objective value; repeating the exchange of adjacent jobs then yields the optimal sequence.

    We discuss two cases.

    Case 1. $E[C_j(S)]\le d_j$, in which $E[T_j(S)]=0$ and $E[T_i(S)]\ge 0$. If the remaining terms are zero, the conclusion is obviously true; otherwise, when the maxima below are attained by their arguments (dropping a maximum to zero only increases the difference), from Theorem 2.3 and $d_i\le d_j$ we have

    $$\begin{aligned}&\left(E[T_i(S)]+E[T_j(S)]\right)-\left(E[T_i(S')]+E[T_j(S')]\right)\\&=\max\{E[C_i(S)]-d_i,0\}+\max\{E[C_j(S)]-d_j,0\}-\max\{E[C_i(S')]-d_i,0\}-\max\{E[C_j(S')]-d_j,0\}\\&=E[C_i(S)]-E[C_i(S')]-\left(E[C_j(S')]-d_j\right)\ge 0.\end{aligned}\tag{2.19}$$

    Case 2. $E[C_j(S)]>d_j$. Noticing $d_i\le d_j$ and $\lambda_i\le\lambda_j$, we have

    $$\left(E[T_i(S)]+E[T_j(S)]\right)-\left(E[T_i(S')]+E[T_j(S')]\right)=E[C_i(S)]+E[C_j(S)]-E[C_i(S')]-E[C_j(S')]\ge 0.\tag{2.20}$$

    From (2.19) and (2.20), we get $E[T_i(S')]+E[T_j(S')]\le E[T_i(S)]+E[T_j(S)]$.

    In the following discussion, we assume that $d_j=d$, $j=1,2,\dots,n$.

    Theorem 2.5. For $1|p_j^r=p_jr^{-a},\ p_j\sim U(0,\lambda_j),\ d_j=d|E(\sum\omega_jU_j)$, the following results hold:

    (1) When $d-t>\min(\lambda_1,\lambda_2)$, sequencing in non-decreasing order of $\lambda_j$ gives the smaller expected weighted number of tardy jobs;

    (2) When $d-t<\lambda_1$, $d-t<\lambda_2$, $\omega_1\ge\omega_2$ and

    $$d-t\le\frac{2(\lambda_2-\lambda_1)(\omega_1+\omega_2)+2(\omega_1\lambda_1-\omega_2\lambda_2)}{(r+1)^{a}(\omega_1-\omega_2)},$$

    the minimum expected value of the objective function is obtained by sequencing the jobs in non-increasing order of the job weights.

    Proof. (1) Suppose the machine is idle at time $t$ and two jobs $J_1,J_2$ remain unprocessed, with processing times $p_1$ and $p_2$ and weights $\omega_1$ and $\omega_2$, to be processed at the $r$-th and $(r+1)$-th positions respectively. When $d-t>\min(\lambda_1,\lambda_2)$, we should process the job with the smaller $\lambda_j$ first, which guarantees the smaller target value.

    (2) In the following, assume $\omega_1\ge\omega_2$, $d-t<\lambda_1$ and $d-t<\lambda_2$.

    Let $E[\omega U(1,2)]$ denote the expected weighted number of tardy jobs when $J_1$ is processed first and then $J_2$; then

    $$\begin{aligned}E[\omega U(1,2)]&=(\omega_1+\omega_2)P\left(p_1^r>d-t\right)+\omega_2P\left(p_1^r<d-t,\ p_1^r+p_2^{r+1}>d-t\right)\\&=(\omega_1+\omega_2)\int_{d-t}^{\lambda_1r^{-a}}\frac{r^{a}}{\lambda_1}\,dx+\omega_2\int_0^{d-t}\frac{r^{a}}{\lambda_1}\int_{d-t-x}^{\lambda_2(r+1)^{-a}}\frac{(r+1)^{a}}{\lambda_2}\,dy\,dx\\&=(\omega_1+\omega_2)\left[1-\frac{r^{a}(d-t)}{\lambda_1}\right]+\omega_2\frac{r^{a}(r+1)^{a}}{\lambda_1\lambda_2}\left[\frac{\lambda_2(d-t)}{(r+1)^{a}}-\frac{1}{2}(d-t)^2\right].\end{aligned}\tag{2.21}$$

    The meaning of $E[\omega U(2,1)]$ is analogous to that of $E[\omega U(1,2)]$; then

    $$\begin{aligned}E[\omega U(2,1)]&=(\omega_1+\omega_2)P\left(p_2^r>d-t\right)+\omega_1P\left(p_2^r<d-t,\ p_2^r+p_1^{r+1}>d-t\right)\\&=(\omega_1+\omega_2)\int_{d-t}^{\lambda_2r^{-a}}\frac{r^{a}}{\lambda_2}\,dx+\omega_1\int_0^{d-t}\frac{r^{a}}{\lambda_2}\int_{d-t-x}^{\lambda_1(r+1)^{-a}}\frac{(r+1)^{a}}{\lambda_1}\,dy\,dx\\&=(\omega_1+\omega_2)\left[1-\frac{r^{a}(d-t)}{\lambda_2}\right]+\omega_1\frac{r^{a}(r+1)^{a}}{\lambda_1\lambda_2}\left[\frac{\lambda_1(d-t)}{(r+1)^{a}}-\frac{1}{2}(d-t)^2\right].\end{aligned}\tag{2.22}$$

    According to (2.21) and (2.22), we obtain

    $$E[\omega U(1,2)]-E[\omega U(2,1)]=\frac{r^{a}(r+1)^{a}}{2\lambda_1\lambda_2}(\omega_1-\omega_2)(d-t)\left[(d-t)-\frac{2(\lambda_2-\lambda_1)(\omega_1+\omega_2)+2(\omega_1\lambda_1-\omega_2\lambda_2)}{(r+1)^{a}(\omega_1-\omega_2)}\right].$$

    When condition (2) of the theorem is satisfied, $E[\omega U(1,2)]\le E[\omega U(2,1)]$, so the job with the larger weight should be processed first. The proof of Theorem 2.5 is completed.
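The closed form (2.21) can be checked numerically. The sketch below (our own, not from the paper; function names are hypothetical) compares it with a direct double integration over the uniform distributions of $p_1^r\sim U(0,\lambda_1r^{-a})$ and $p_2^{r+1}\sim U(0,\lambda_2(r+1)^{-a})$, for parameters satisfying $d-t<\lambda_1r^{-a}$ and $d-t<\lambda_2(r+1)^{-a}$:

```python
def tardy_closed(w1, w2, lam1, lam2, a, r, D):
    """Closed form (2.21) for E[wU(1,2)]: J1 at position r, J2 at r+1, D = d - t."""
    R, Q = r ** a, (r + 1) ** a
    both = 1.0 - R * D / lam1                                   # P(p1^r > D): both jobs tardy
    only2 = (R * Q / (lam1 * lam2)) * (lam2 * D / Q - 0.5 * D * D)  # only J2 tardy
    return (w1 + w2) * both + w2 * only2

def tardy_numeric(w1, w2, lam1, lam2, a, r, D, steps=400):
    """Midpoint double integration over p1^r ~ U(0, lam1*r**-a), p2^(r+1) ~ U(0, lam2*(r+1)**-a)."""
    A, B = lam1 * r ** (-a), lam2 * (r + 1) ** (-a)
    hx, hy = A / steps, B / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * hx
        for j in range(steps):
            y = (j + 0.5) * hy
            if x > D:
                total += w1 + w2      # both jobs finish after d
            elif x + y > D:
                total += w2           # only the second job is tardy
    return total / (steps * steps)

params = (2.0, 1.0, 4.0, 5.0, 0.5, 2, 0.5)   # w1, w2, lam1, lam2, a, r, d - t
print(tardy_closed(*params), tardy_numeric(*params))
```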

    Now, we discuss the case of group scheduling. All jobs in the model are available at time zero. Within a group, jobs are processed continuously. The machine requires a setup time before entering the next group, and the setup is subject to the classical learning effect hypothesis. The processing time is random, with a learning effect based on the position within the group, and the learning rate follows a uniform distribution.

    Our model is expressed as

    $$1\,\big|\,p_{ij}^{k}=p_{ij}k^{-a_{ij}},\ a_{ij}\sim U(0,\lambda_{ij}),\ G,\ s_i^{r}=s_ir^{a}\,\big|\,E[f(C_j)].$$

    The assumptions and symbols of the model (see Table 2) are as follows:

    Table 2.  Symbols of the model.
    Symbol  Description
    $G_i$  The $i$-th group
    $n_i$  Number of jobs in group $G_i$
    $n$  Total number of jobs, $n_1+n_2+\cdots+n_m=n$
    $J_{ij}$  The $j$-th job in group $G_i$, $i=1,2,\dots,m$; $j=1,2,\dots,n_i$
    $a$  Learning rate for group setup, $a<0$
    $s_i$  Normal setup time of group $G_i$, $i=1,2,\dots,m$
    $s_i^r$  Actual setup time of group $G_i$ in position $r$, $s_i^r=s_ir^{a}$
    $a_{ij}$  Learning rate of job $j$ in group $G_i$, $a_{ij}>0$
    $p_{ij}$  Random normal processing time of job $J_{ij}$, uniformly distributed on the interval $(0,\lambda_{ij})$
    $p_{ij}^k$  Random processing time of job $J_{ij}$ in position $k$ of group $G_i$, $p_{ij}^k=p_{ij}k^{-a_{ij}}$
    $Q,Q'$  Group sequences
    $\sigma_1,\sigma_2$  Partial job sequences
    $E[C_{jn_j}(Q)]$  Expected completion time of the last job of group $G_j$ in sequence $Q$
    $E[C_{in_i}(Q)]$  Expected completion time of the last job of group $G_i$ in sequence $Q$
    $E(T_0)$  Expected completion time of the last job of $\sigma_1$


    Theorem 3.1. For $1|p_{ij}^{k}=p_{ij}k^{-a_{ij}},\ a_{ij}\sim U(0,\lambda_{ij}),\ G,\ s_i^{r}=s_ir^{a}|E(C_{\max})$, when the jobs in each group satisfy $p_{ik}\le p_{il}$ if and only if $\lambda_{ik}\le\lambda_{il}$ for all $J_{ik},J_{il}$, sequencing the jobs within each group by the larger-$\lambda_{ik}$-first rule and the groups in non-decreasing order of $s_i$ gives the optimal solution.

    Proof. Suppose $s_i\le s_j$, and recall $a<0$. We can get

    $$\begin{aligned}E[C_{jn_j}(Q)]-E[C_{in_i}(Q')]&=E(T_0)+s_ir^{a}+E\left(\sum_{k=1}^{n_i}p_{ik}k^{-a_{ik}}\right)+s_j(r+1)^{a}+E\left(\sum_{k=1}^{n_j}p_{jk}k^{-a_{jk}}\right)\\&\quad-E(T_0)-s_jr^{a}-E\left(\sum_{k=1}^{n_j}p_{jk}k^{-a_{jk}}\right)-s_i(r+1)^{a}-E\left(\sum_{k=1}^{n_i}p_{ik}k^{-a_{ik}}\right)\\&=(s_i-s_j)\left[r^{a}-(r+1)^{a}\right]\le 0.\end{aligned}$$

    Repeating the exchange operation then yields the sorting rule for the groups.

    The scheduling of the jobs within a group reduces to

    $$1\,\big|\,p_{ij}^{k}=p_{ij}k^{-a_{ij}},\ a_{ij}\sim U(0,\lambda_{ij})\,\big|\,E(C_{\max}),$$

    which was proved in Theorem 2.1. This completes the proof of Theorem 3.1.

    Theorem 3.2. For $1|p_{ij}^{k}=p_{ij}k^{-a_{ij}},\ a_{ij}\sim U(0,\lambda_{ij}),\ G,\ s_i^{r}=s_ir^{a}|E(\sum C_j)$, if $\frac{s_i}{n_i}\le\frac{s_j}{n_j}$, the jobs within each group are sequenced by the larger-$\lambda_{ij}$-first rule, and the groups are sequenced in non-decreasing order of $\frac{1}{n_i}\sum_{k=1}^{n_i}E\left[p_{ik}k^{-a_{ik}}\right]$, then we obtain the optimal sequence.

    Proof. The assumptions are the same as in Theorem 3.1.

    Suppose also that $\frac{s_i}{n_i}\le\frac{s_j}{n_j}$ and $\frac{1}{n_i}\sum_{k=1}^{n_i}E\left[p_{ik}k^{-a_{ik}}\right]\le\frac{1}{n_j}\sum_{k=1}^{n_j}E\left[p_{jk}k^{-a_{jk}}\right]$; then

    $$\begin{aligned}&\left\{E\left[\sum_{l=1}^{n_i}C_{il}(Q)\right]+E\left[\sum_{l=1}^{n_j}C_{jl}(Q)\right]\right\}-\left\{E\left[\sum_{l=1}^{n_i}C_{il}(Q')\right]+E\left[\sum_{l=1}^{n_j}C_{jl}(Q')\right]\right\}\\&=n_iE(T_0)+n_is_ir^{a}+E\left[\sum_{l=1}^{n_i}(n_i-l+1)p_{il}l^{-a_{il}}\right]+n_jE[C_{in_i}(Q)]+n_js_j(r+1)^{a}+E\left[\sum_{l=1}^{n_j}(n_j-l+1)p_{jl}l^{-a_{jl}}\right]\\&\quad-n_iE[C_{jn_j}(Q')]-n_is_i(r+1)^{a}-E\left[\sum_{l=1}^{n_i}(n_i-l+1)p_{il}l^{-a_{il}}\right]-n_jE(T_0)-n_js_jr^{a}-E\left[\sum_{l=1}^{n_j}(n_j-l+1)p_{jl}l^{-a_{jl}}\right]\\&=(n_i+n_j)(s_i-s_j)\left[r^{a}-(r+1)^{a}\right]+(n_js_i-n_is_j)(r+1)^{a}+\left\{n_j\sum_{k=1}^{n_i}E\left[p_{ik}k^{-a_{ik}}\right]-n_i\sum_{k=1}^{n_j}E\left[p_{jk}k^{-a_{jk}}\right]\right\}\le 0.\end{aligned}$$

    Then, repeating the previous exchange argument completes the proof of the group ordering rule.

    Second, the scheduling of jobs within a group reduces to the problem $1|p_j^r=p_jr^{-a_{ij}},\ a_{ij}\sim U(0,\lambda_{ij})|E(\sum\omega_jC_j)$ with $\omega_j=1$, which was proved in Theorem 2.2.

    Therefore, the proof is complete.

    The following are heuristic algorithms and examples for Theorems 3.1 and 3.2.

    Algorithm 1

    Step 1: When $p_{ik}\le p_{il}\Leftrightarrow\lambda_{ik}\le\lambda_{il}$, sequence the jobs within each group by the larger-$\lambda_{ij}$-first priority rule;

    Step 2: Arrange the groups in non-decreasing order of $s_i$ (smaller $s_i$ first).

    Sorting group $G_i$ takes $O(n_i\log n_i)$ time, so the total time complexity of Step 1 is $\sum_{i=1}^{m}O(n_i\log n_i)$; the time complexity of Step 2 is $O(m\log m)$. The overall time complexity of Algorithm 1 is $O(n\log n)$.

    An example of Algorithm 1:

    Example 1. $m=2$, $G_1=\{J_{11},J_{12},J_{13}\}$, $G_2=\{J_{21},J_{22}\}$, $s_1=2$, $s_2=3$, $a=-2$, $a_{11}\sim U(0,2)$, $a_{12}\sim U(0,4)$, $a_{13}\sim U(0,3)$, $a_{21}\sim U(0,2)$, $a_{22}\sim U(0,4)$, $p_{11}=1$, $p_{12}=3$, $p_{13}=2$, $p_{21}=4$, $p_{22}=5$.

    Solution. Step 1: In $G_1$, since $\lambda_{11}=2<\lambda_{13}=3<\lambda_{12}=4$ and $p_{11}=1<p_{13}=2<p_{12}=3$, and in $G_2$, $\lambda_{21}=2<\lambda_{22}=4$ and $p_{21}=4<p_{22}=5$, the consistency assumption is satisfied, and the orders are $J_{12}\prec J_{13}\prec J_{11}$ and $J_{22}\prec J_{21}$;

    Step 2: Because $s_1=2<s_2=3$, we have $G_1\prec G_2$.

    The solution of this example is $E(C_{\max})=13.07$.
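The two steps of Algorithm 1 can be sketched on the data of Example 1 (our own illustration, not from the paper; variable names are hypothetical). The sketch reproduces only the sequencing decisions, not the objective value:

```python
# Data of Example 1: within each group, jobs as (name, p, lam); s holds the normal setup times.
groups = {
    "G1": [("J11", 1, 2), ("J12", 3, 4), ("J13", 2, 3)],
    "G2": [("J21", 4, 2), ("J22", 5, 4)],
}
s = {"G1": 2, "G2": 3}

# Step 1: within each group, the job with the larger lambda goes first.
inner = {g: [name for name, p, lam in sorted(jobs, key=lambda j: -j[2])]
         for g, jobs in groups.items()}

# Step 2: groups in non-decreasing order of s_i.
group_order = sorted(groups, key=lambda g: s[g])

print(group_order)                    # ['G1', 'G2']
print(inner["G1"], inner["G2"])       # ['J12', 'J13', 'J11'] ['J22', 'J21']
```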

    Algorithm 2

    Step 1: Sequence the jobs within each group by the larger-$\lambda_{ij}$-first priority rule;

    Step 2: When $\frac{s_i}{n_i}\le\frac{s_j}{n_j}$ holds for the groups, arrange the groups in non-decreasing order of $\frac{1}{n_i}\sum_{k=1}^{n_i}E\left[p_{ik}k^{-a_{ik}}\right]$.

    The complexity analysis of Algorithm 2 is the same as that of Algorithm 1.

    We give an example of Algorithm 2.

    Example 2. The data are the same as in Example 1.

    Solution. Step 1: Comparing the values of $\lambda_{ij}$, we get $J_{12}\prec J_{13}\prec J_{11}$ and $J_{22}\prec J_{21}$;

    Step 2: Because $\frac{s_1}{n_1}=\frac{2}{3}<\frac{s_2}{n_2}=\frac{3}{2}$ and $\frac{1}{n_1}\sum_{k=1}^{n_1}E\left[p_{1k}k^{-a_{1k}}\right]=0.866<\frac{1}{n_2}\sum_{k=1}^{n_2}E\left[p_{2k}k^{-a_{2k}}\right]=2.844$, the group order is $G_1\prec G_2$.

    After calculation, we have $E(\sum C_j)=50.15$.

    For the single-machine stochastic scheduling and group stochastic scheduling problems established in this paper, the learning rate is treated as a random variable, a setting not available in previous literature. Such problems are quite general, and examples can be found in real life: for instance, when workers are trained on a new project, the time they need to master the new technology may be random and can be expressed by a random variable. After transforming the problems, we give the corresponding theoretical assumptions, derive the optimal orders, and, for the group technology case, verify the theoretical results with a numerical example.

    In future work, these models can be combined with intelligent algorithms to handle large numbers of jobs, machine maintenance, waiting times for job processing, and multi-stage processing of jobs.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research was funded by the Shanghai Philosophy and Social Sciences General Project (2022BGL010); the Post-funded Project of the National Social Science Fund of China (22FGLB109); the Laboratory of Philosophy and Social Sciences for Data Intelligence and Rural Revitalization of Dabie Mountains, Lu'an, Anhui 237012, China; and the Key Projects of Humanities and Social Sciences in Colleges and Universities of Anhui Province (SK2018A0397).

    The authors declare no conflicts of interest.



  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
