
Hybrid multi-objective metaheuristic algorithms for solving airline crew rostering problem with qualification and language


  • Received: 29 August 2022 Revised: 19 October 2022 Accepted: 21 October 2022 Published: 31 October 2022
  • In order to cope with the rapid growth of flights and the limited number of crew members, rational allocation of crew members is a strategy that can greatly alleviate this scarcity. Without an appropriate allocation plan, however, some flights may be canceled because no pilot is available during the scheduling period. In this paper, we solve an airline crew rostering problem (CRP). We model the CRP as an integer programming model with multiple constraints and objectives. In this model, the schedule of pilots takes qualification restrictions and language restrictions into account while maximizing the fairness and satisfaction of pilots. We design two hybrid metaheuristic algorithms, based on a genetic algorithm, a variable neighborhood search algorithm and the Aquila optimizer, to address the trade-off between fairness and crew satisfaction. The simulation results show that our approach preserves the fairness of the system, maximizing fairness at the cost of some crew satisfaction.

    Citation: Bin Deng, Ran Ding, Jingfeng Li, Junfeng Huang, Kaiyi Tang, Weidong Li. Hybrid multi-objective metaheuristic algorithms for solving airline crew rostering problem with qualification and language[J]. Mathematical Biosciences and Engineering, 2023, 20(1): 1460-1487. doi: 10.3934/mbe.2023066




    Text classification models the relationship between text features and text categories in order to determine the category of a text [1]. Unlike English, with its natural word boundaries, Chinese text classification has character-based [2] and word-based [3] methods. The character-based method reduces the impact of unfamiliar words, but individual characters contain insufficient semantic information. The word-based method first faces the problem of accurate word segmentation, which directly affects the effectiveness of the model. Nevertheless, text classification based on text feature words is still the most widely used approach at present.

    The main algorithmic models for text classification can be divided into rule- and template-based methods, statistical and machine learning-based methods, and deep learning-based methods.

    Rule-based methods rely on professionals to hand-craft many decision rules for predefined categories, with the degree of match to particular rules serving as the feature representation of the text. Limited by subjectivity, by the comprehensiveness and scalability of rule templates, and most notably by their complete lack of portability, text classification models based on hand-written rules have not progressed effectively.

    Machine learning-based text classification algorithms [4,5,6] mainly include Decision Tree, Naive Bayesian Model (NBM), Support Vector Machine, and K-Nearest Neighbors (KNN). Kanish [7] used TF-IDF to convert a news corpus into numerical vectors and compared KNN, random forest (RF) and logistic regression (LR) on a specific dataset, with LR performing best and KNN worst. Chen [8] constructed an overall correlation factor for different categories and obtained a calculation method for the optimal correlation factor by balancing bias and variance, which improved the classification accuracy of NBM. Liu [9] proposed an improved KNN text classification algorithm based on Simhash, which addresses the computational complexity and data imbalance of traditional KNN by calculating the average Hamming distance of neighboring texts. Although these improved machine learning models raise text classification performance to a certain extent, they still require manual feature selection and feature extraction. Limited by the size of the text dataset and the accuracy of feature extraction, and because they ignore the correlations between text features, they have poor generality and scalability.

    Deep learning-based text classification algorithms mainly include convolutional neural networks, recurrent neural networks, long short-term memory networks, and fusions of these neural network models. With the introduction of the word2vec [10,11] model, word sequences can be converted into low-dimensional dense word vectors carrying rich semantic information, making neural network models widely used in text classification tasks. Kim [12] proposed using convolutional neural networks for text classification, setting different weights through convolution kernels to obtain richer local features and extracting key information through max-pooling operations. The network structure is simple, efficient and robust thanks to its weight-sharing strategy, which gives the trained model fewer parameters. Rehman [13] constructed a CNN-LSTM model to evaluate a movie review dataset and obtained good results. Gao [14] constructed a hybrid CNN-BiGRU model, ignoring the effect that the CNN's loss of word order information has on sequence modelling with the BiGRU. Although model fusion improves classification to a certain extent, it cannot represent how important each text feature is to the classification result. The introduction of the attention mechanism effectively solves this problem [15]. Wang [16] used the attention mechanism to assign weights to the deep-level text information extracted by a BiGRU, filtering effective text features and reducing the interference of noisy features, which effectively improved the model. Deng [17] proposed an attention-based BiLSTM fused with a CNN for Chinese long-text classification, introducing a gating mechanism to assign weights to the BiLSTM and CNN outputs and obtain fused text features. Related neural network fusion models also include MTL-LC [18], CNN-BiLSTM-Attention [19], AC-BiLSTM [20], and Attention-BiLSTM [21]. Although these fusion models effectively improve prediction, they mainly adopt a recursive network structure, so the extracted information is prone to vanishing and exploding gradients as it propagates backward. Meanwhile, a recursive structure exploits the advantages of only a single network when extracting text features; it cannot combine the strengths of CNNs and RNNs, so the classification results still need improvement.

    Pre-training trains a language model on a large amount of raw text to obtain an initialized model with parameters; fine-tuning is then performed on the pre-trained language model according to the specific task [22]. Pre-training methods have shown strong results in classification and labeling tasks in NLP [23,24]. Currently, popular pre-training methods include ELMo, OpenAI GPT, BERT [25], and XLNet [26]. However, such models have particularly complex structures and require tremendous computational resources.

    In order to solve the problems of sparse text features, loss of key feature information, low model performance and poor classification results when CNNs and RNNs process text classification tasks, this study constructs a dual-channel neural network model, combining CNN and LSTM with self-attention, for text classification. The main contributions are as follows:

    (1) N-Gram information of different word windows is extracted using multilayer CNN to enrich the local feature representation of the text.

    (2) BiLSTM is used for feature representation of sentence sequences, and an attention mechanism is added to weight the hidden-layer states and complete effective feature screening.

    (3) A dual-channel neural network text classification model is constructed, which effectively integrates local and global features by fusing the extracted text feature information, alleviating the CNN's loss of word order information and the BiLSTM's gradient problems when processing text sequences.

    DCCL, a text classification model based on self-attention combined with CNN and LSTM, is shown in Figure 1.

    Figure 1.  Structure of DCCL text classification model.

    Pre-processing operations are performed on the text dataset, including word segmentation and removal of stop words, to form the original corpus. Word vectors are trained with word2vec, using skip-gram by default. The Tokenizer converts text sequences into word-index sequences based on the vocabulary and automatically pads them to a fixed length. The word vectors trained by word2vec are used as the weight matrix of the word embedding layer, and the vectorized text sequence serves as the neural network input. The pre-processing and vectorization pipeline for the dataset is shown in Figure 2.

    Figure 2.  Pre-processing process for text dataset.
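    As a concrete illustration of this pipeline, the following Python sketch performs segmentation, word2vec training and sequence vectorization. It is a minimal reconstruction, assuming jieba, gensim 4.x and the Keras preprocessing utilities; the toy corpus and empty stop-word list are placeholders, and the sizes follow the hyperparameters reported later in the experiments.

        import jieba
        import numpy as np
        from gensim.models import Word2Vec
        from tensorflow.keras.preprocessing.text import Tokenizer
        from tensorflow.keras.preprocessing.sequence import pad_sequences

        VOCAB_SIZE, EMBED_DIM, MAX_LEN = 8000, 200, 256  # settings reported in the experiments

        texts = ["今天天气很好", "股市大幅上涨"]   # toy corpus for illustration only
        stopwords = set()                           # stop-word list omitted here

        # Word segmentation and stop-word removal form the original corpus
        corpus = [[w for w in jieba.lcut(t) if w not in stopwords] for t in texts]

        # Train word2vec on the segmented corpus (sg=1 selects skip-gram)
        w2v = Word2Vec(sentences=corpus, vector_size=EMBED_DIM, window=5, sg=1, min_count=1)

        # Tokenizer: text sequences -> word-index sequences, padded to a fixed length
        joined = [" ".join(doc) for doc in corpus]
        tokenizer = Tokenizer(num_words=VOCAB_SIZE)
        tokenizer.fit_on_texts(joined)
        X = pad_sequences(tokenizer.texts_to_sequences(joined),
                          maxlen=MAX_LEN, padding='post')  # padding side is a choice here

        # Build the embedding-layer weight matrix from the trained word vectors
        embedding_matrix = np.zeros((VOCAB_SIZE, EMBED_DIM))
        for word, idx in tokenizer.word_index.items():
            if idx < VOCAB_SIZE and word in w2v.wv:
                embedding_matrix[idx] = w2v.wv[word]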

    For a text input sequence $S = (w_1, w_2, w_3, \dots, w_n)$ with $w_i \in \mathbb{R}^d$, $d$ is the word vector dimension. The width of the convolution kernel is the same as the word embedding dimension, and the number of words taken in the window for each convolution operation is $h$, so the convolution kernel $\omega \in \mathbb{R}^{h \times d}$. For each window slide, the convolution result $c_i$ is:

    $c_i = \mathrm{ReLU}(\omega \cdot w_{i:i+h-1} + b)$ (1)

    where ReLU is the nonlinear activation function, $w_{i:i+h-1}$ is the window of $h$ words covered by each convolution operation, and $b \in \mathbb{R}$ is the bias term.

    The length of the sequence $S$ is $n$, the padding parameter is set to "same" mode, and the stride size is $s$, so the convolution summary result is $c = [c_1, c_2, c_3, \dots, c_{n/s}]$. The pooling layers then perform a max-pooling operation on the convolutional-layer results, which increases the receptive field of the upper convolution kernel, preserves the main features of the word vector sequence, reduces the parameters and computation of the next layer, and prevents overfitting.

    For the input sequence of word vectors $S$, the outputs of the layers in the parallel structure are $O_1$, $O_2$ and $O_3$, respectively, and the overall output $O$ of the TextCNN is expressed as:

    $O = \mathrm{concatenate}([O_1, O_2, O_3], \mathrm{axis}=1)$ (2)

    where concatenate denotes the concatenate() function and axis denotes the dimension along which the outputs are spliced.
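    A minimal Keras sketch of this parallel multi-window structure, assuming the filter count (128) and kernel sizes (2, 3, 4) reported later in the experimental settings; GlobalMaxPooling1D is used here as one common realization of the max-pooling step:

        from tensorflow.keras import layers

        def textcnn_channel(x, num_filters=128, kernel_sizes=(2, 3, 4)):
            # One Conv1D branch per word-window size h, as in Eq (1)
            branches = []
            for h in kernel_sizes:
                c = layers.Conv1D(num_filters, h, padding='same', activation='relu')(x)
                c = layers.GlobalMaxPooling1D()(c)   # keep the strongest feature per filter
                branches.append(c)
            # Splice the parallel branch outputs along the feature axis, as in Eq (2)
            return layers.Concatenate(axis=1)(branches)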

    Sepp Hochreiter [27] proposed the LSTM to solve the vanishing and exploding gradient problems that RNNs with long-term dependencies suffer when processing too much information. The structure of the LSTM unit is shown in Figure 3. By linking the memory cell with the input gate, the forget gate and the output gate, the relevant gate parameters are controlled and updated during training to adjust the degree to which information is updated or forgotten. The memory cell can therefore effectively preserve the semantic information of longer sequences.

    Figure 3.  LSTM unit structure.

    At moment $t$, the input of the LSTM unit includes the current input vector $x_t$, the previous memory cell state $c_{t-1}$, and the previous hidden-layer output $h_{t-1}$. The specific implementation of the LSTM unit is as follows.

    $i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$ (3)
    $o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$ (4)
    $f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$ (5)
    $\bar{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)$ (6)
    $c_t = f_t \odot c_{t-1} + i_t \odot \bar{c}_t$ (7)
    $h_t = o_t \odot \tanh(c_t)$ (8)

    where $\sigma$ is the sigmoid function; $W_i$, $W_o$, $W_f$, $W_c$ are the weight matrices on the input vector $x_t$; $U_i$, $U_o$, $U_f$, $U_c$ are the weight matrices on the hidden state $h_{t-1}$; and $b_i$, $b_o$, $b_f$, $b_c$ are the bias vectors. $i_t$, $o_t$ and $f_t$ represent the input gate, output gate and forget gate, respectively.
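    To make Eqs (3)–(8) concrete, here is a small NumPy sketch of a single LSTM step; the dictionary-keyed parameters are an illustrative convention, not the authors' code:

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def lstm_step(x_t, h_prev, c_prev, W, U, b):
            # W, U, b are dicts of weight matrices / bias vectors keyed by gate: 'i', 'o', 'f', 'c'
            i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])    # Eq (3): input gate
            o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])    # Eq (4): output gate
            f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])    # Eq (5): forget gate
            c_bar = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])  # Eq (6): candidate cell state
            c_t = f_t * c_prev + i_t * c_bar                          # Eq (7): cell state update
            h_t = o_t * np.tanh(c_t)                                  # Eq (8): hidden state output
            return h_t, c_t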

    Finally, the output vectors of the forward and backward LSTM units are spliced together, and the resulting feature vectors with bidirectional semantics form the output of the BiLSTM neural network layer.

    $H_t = [\overrightarrow{h}_t; \overleftarrow{h}_t] \in \mathbb{R}^{n}$ (9)

    BiLSTM cannot express the importance of key contextual information during computation, and it causes information redundancy when dealing with long-sequence tasks. Introducing self-attention to weight the hidden-layer states of the BiLSTM effectively highlights essential text features. The input of self-attention consists of Q (Query), K (Key) and V (Value). First, Q, K and V are linearly transformed:

    $Q = W_Q H_t, \quad K = W_K H_t, \quad V = W_V H_t$ (10)

    where $Q = K = V = H_t$ before the projections, and $W_Q$, $W_K$, $W_V$ are the weight matrices of $Q$, $K$, $V$, respectively.

    The dot products of Q and K are computed and scaled, then normalized into a probability distribution by softmax to obtain the self-attention weight vector, which is multiplied by V to obtain the final weighted output A.

    $A = \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$ (11)

    where $d_k$ denotes the dimension of $Q$, $K$ and $V$, and $\sqrt{d_k}$ is the scaling factor.
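    A NumPy sketch of Eqs (10) and (11); hidden states are stacked as rows here, so the projections appear as right multiplications, which is a layout convention rather than the paper's notation:

        import numpy as np

        def self_attention(H, W_q, W_k, W_v):
            # Eq (10): linear projections, where Q = K = V = H before projection
            Q, K, V = H @ W_q, H @ W_k, H @ W_v
            # Eq (11): scaled dot-product scores, softmax-normalized row by row
            scores = Q @ K.T / np.sqrt(Q.shape[-1])
            scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
            weights = np.exp(scores)
            weights /= weights.sum(axis=-1, keepdims=True)
            return weights @ V  # weighted output A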

    In order to take into account both local and global features of the text sequence, the overall output of the dual-channel neural network is obtained by concatenating the individual channel outputs. Fully connected layers are then applied for dimensionality reduction, and their output is fed to the softmax classifier, which directly outputs the probabilities of the text categories.

    $\mathrm{Output} = \mathrm{Concatenate}([O, A])$ (12)
    $\hat{y} = \mathrm{softmax}(W_f \cdot \mathrm{Output} + b_f)$ (13)

    where $\hat{y}$ is the vector of text-category probabilities predicted by the model, and $W_f$ and $b_f$ are the weight matrix and bias of the fully connected layer, respectively.

    Softmax cross-entropy is set as the loss function for the overall training of the model.

    $\mathrm{Loss}(\hat{y}, y) = -\sum_{i=1}^{k} y_i \log \hat{y}_i$ (14)

    where $y$ is the $k$-dimensional one-hot encoded vector of true labels.
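    Putting the pieces together, a rough Keras sketch of the overall dual-channel model might look as follows. It reuses the textcnn_channel sketch above; Keras's built-in dot-product Attention layer (available in reasonably recent tf.keras) stands in for the scaled self-attention of Eq (11), and the pooling applied to the attention output is an assumption, not a detail stated in the paper:

        from tensorflow.keras import layers, models

        def build_dccl(num_classes, embedding_matrix,
                       vocab_size=8000, embed_dim=200, max_len=256, lstm_units=256):
            inp = layers.Input(shape=(max_len,))
            emb = layers.Embedding(vocab_size, embed_dim,
                                   weights=[embedding_matrix])(inp)  # word2vec weights
            # Channel 1: multi-window TextCNN, Eqs (1)-(2)
            cnn_out = textcnn_channel(emb)
            # Channel 2: BiLSTM followed by self-attention, Eqs (3)-(11)
            h = layers.Bidirectional(layers.LSTM(lstm_units, return_sequences=True))(emb)
            att = layers.Attention(use_scale=True)([h, h])  # query = key = value = h
            att = layers.GlobalAveragePooling1D()(att)      # pooling choice is an assumption
            # Fuse the two channels and classify, Eqs (12)-(13)
            fused = layers.Concatenate()([cnn_out, att])
            fused = layers.Dropout(0.3)(fused)
            out = layers.Dense(num_classes, activation='softmax')(fused)
            model = models.Model(inp, out)
            # Softmax cross-entropy loss, Eq (14), optimized with Adam as in the experiments
            model.compile(optimizer='adam', loss='categorical_crossentropy',
                          metrics=['accuracy'])
            return model

    Training would then follow the reported settings, e.g., model.fit(X_train, y_train, batch_size=64, epochs=50).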

    The experimental datasets are drawn from the open news corpora Sougou and THUCNews; the sample sizes of the two datasets are shown in Table 1 below.

    Table 1.  Sample size distribution of the dataset.
    Dataset    Categories    Training set    Test set    Total
    Sougou     5             4000            500         4500
    THUNews    10            50000           10000       60000


    Macro average precision (MAP), macro average recall (MAR) and macro average F1-score (MAF1) are used as the evaluation indicators for the text classification models. The macro average is the arithmetic mean of the precision, recall and F1-score over all categories. Each evaluation indicator is calculated as follows.

    $\mathrm{MAP} = \frac{1}{k}\sum_{i=1}^{k} P_i$ (15)
    $\mathrm{MAR} = \frac{1}{k}\sum_{i=1}^{k} R_i$ (16)
    $\mathrm{MAF1} = \frac{1}{k}\sum_{i=1}^{k} F1_i$ (17)

    where $k$ is the number of label categories, and $P_i$, $R_i$ and $F1_i$ represent the precision, recall and F1-score of the $i$-th category, respectively.
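    These macro-averaged indicators correspond to scikit-learn's average='macro' option; a toy check (the labels below are invented purely for illustration):

        from sklearn.metrics import precision_recall_fscore_support

        y_true = [0, 1, 2, 2, 1]  # hypothetical ground-truth labels
        y_pred = [0, 1, 2, 1, 1]  # hypothetical predictions
        # average='macro' takes the unweighted mean over classes, matching Eqs (15)-(17)
        map_score, mar_score, maf1_score, _ = precision_recall_fscore_support(
            y_true, y_pred, average='macro')
        print(map_score, mar_score, maf1_score)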

    To verify the superiority of the proposed model for text classification tasks in public domains, we ran five sets of comparison experiments: TextCNN, BiLSTM, BiLSTM-Attention, TextCNN-BiLSTM-Attention (SCA-CL), and DCCL.

    In constructing the model experiments, especially for hybrid dual-channel neural network models, channel-based ablation experiments effectively determine the model parameters. For a TextCNN model handling a text classification task, it is essential to determine the sizes of the convolution kernels used to extract N-Gram information. Experiments were conducted using a single-layer CNN structure, as shown below.

    Figure 4.  Classification results for different convolutional kernel sizes.

    The experimental software environment is Windows 10, Python 3.6, TensorFlow 1.14.0, Keras 2.2.5 and jieba 0.42. The model parameters were determined after several rounds of comparative experiments: the vocabulary size is 8000, the number of LSTM units is 256, the number of filters is 128, the kernel sizes are 2, 3 and 4, the word2vec window size is 5, the word embedding dimension is 200, the maximum sentence-sequence length is 256, the batch size is 64, the dropout rate is 0.3 to prevent overfitting, the learning rate is 0.001, the number of epochs is 50, and Adam is used to optimize the model parameters. The results of the various text classification algorithms are shown in Table 2 below.

    Table 2.  Text classification results for each model.
    Algorithm           Sougou (MAP / MAR / MAF1)      THUNews (MAP / MAR / MAF1)
    TextCNN             0.8839 / 0.8830 / 0.8834       0.9473 / 0.9472 / 0.9470
    BiLSTM              0.8744 / 0.8661 / 0.8683       0.9507 / 0.9501 / 0.9503
    BiLSTM-Attention    0.8849 / 0.8729 / 0.8768       0.9578 / 0.9577 / 0.9577
    SCA-CL              0.8919 / 0.8882 / 0.8885       0.9526 / 0.9527 / 0.9524
    DCCL                0.9103 / 0.8983 / 0.9007       0.9627 / 0.9626 / 0.9626


    Also, to further validate the superiority of the DCCL model for the text classification task, Figures 5 and 6 show the evolution of the accuracy and loss values for each type of comparison model on the Sougou and THUNews training sets, respectively.

    Figure 5.  Evolution of accuracy and loss for the Sougou.
    Figure 6.  Evolution of accuracy and loss for the THUNews.

    For the multi-category text classification experiments, Figures 7 and 8 show the results of the comparison models for each category in the Sougou and THUNews test sets, respectively.

    Figure 7.  Classification results for each category in the Sougou.
    Figure 8.  Classification results for each category in the THUNews.

    The experimental results in Table 2 show that the constructed DCCL model achieved the best results in the classification experiments on both datasets, with MAP, MAR and MAF1 of 91.03%, 89.83% and 90.07% respectively on the Sougou dataset, and 96.27%, 96.26% and 96.26% respectively on the THUNews dataset.

    The classification results of the DCCL model on the two datasets differ considerably. We attribute this mainly to two points: (1) The sample sizes of the two datasets differ greatly, and word vectors trained by word2vec on a large-scale corpus match the actual distribution more closely; hence THUNews has better classification results. (2) The THUNews dataset was filtered by the Natural Language Processing and Social Humanities Computing Laboratory of Tsinghua University, and the text data of the categories differ significantly from one another.

    For the classification results on Sougou, self-attention is introduced to weight the output of the BiLSTM hidden layer and reduce the influence of redundant features; the MAF1 of BiLSTM-Attention is therefore 0.85% higher than that of BiLSTM. However, BiLSTM-Attention still performs slightly worse than TextCNN, which uses multiple word windows to extract the N-Gram information of the text followed by a max-pooling operation, similar to the feature highlighting of an attention mechanism. For the SCA-CL model, the CNN causes a loss of word order information, and this error is passed to the BiLSTM during text feature reconstruction. Nevertheless, the experimental results show that the tandem structure of SCA-CL plays a positive role [28], making its MAF1 1.17% higher than that of BiLSTM-Attention. In the ablation experiments of the DCCL model, the classification effect improved by 1.73% and 2.39% respectively compared with the single channels.

    For THUNews, the differences between the classification results of the various comparison models are not very significant. Because of the obvious differences between categories in the THUNews dataset, the sentence-level high-level features constructed by the BiLSTM and then weighted by the attention mechanism make BiLSTM-Attention significantly more effective than TextCNN and SCA-CL. In the ablation experiments of the DCCL model, the classification effect improved by 1.56% and 0.49% respectively compared with the single channels.

    From the training process of the various models, it can be seen that BiLSTM and BiLSTM-Attention iterate slowly, TextCNN is relatively fast, and SCA-CL and DCCL both perform excellently. Due to the small sample size and noisy data of Sougou, the text features constructed by BiLSTM and BiLSTM-Attention carry large information redundancy and errors, resulting in poor classification results. TextCNN performs well and reaches high accuracy during training because it captures multi-level local features. Due to the large sample size and manual pre-processing of THUNews, BiLSTM and BiLSTM-Attention can better obtain high-level text features, ultimately achieving higher accuracy and lower loss. TextCNN's processing of text sequences suffers from word order and information loss, which causes cumulative error propagation in the SCA-CL model. DCCL compensates for the shortcomings of CNN and LSTM in feature extraction, effectively integrating local and global text features and highlighting essential information to obtain a more comprehensive, multi-level text representation. As a result, DCCL performs best during training on both datasets.

    For each category of the datasets, Figures 7 and 8 show that among the five categories of Sougou, DCCL is the best in Sports, Education and Automobile, and the second best in Health and Military. Among the ten categories of THUNews, DCCL has the best classification results in the six categories of Sports, Home, Education, Affairs, Technology and Finance, and the second best in the three categories of Property, Fashion and Games. Meanwhile, in the Home, Education, Affairs and Technology categories, where the other comparison models performed poorly, DCCL substantially improved the classification results.

    DCCL, a dual-channel hybrid neural network combined with self-attention, is proposed to solve the problems of high-dimensional sparse features, low model performance and poor classification results in text classification tasks. DCCL compensates for the shortcomings of CNN and LSTM in text feature extraction; it can integrate local and global features of text and highlight key features. Multiple rounds of model comparison experiments on the two datasets show that DCCL achieves excellent classification results and is suitable for Chinese text classification tasks. For the DCCL model, the application of the attention mechanism, the overall model structure and the parameters need to be adjusted according to the experimental dataset. At the same time, effectively combining pre-trained language models with classification models can reduce the time consumed by training and significantly improve classification performance.

    This research is supported by the Key R & D Program of the Xuzhou Science and Technology Plan Project under grant number KC21308. It is also supported by the Innovation and Entrepreneurship Project for University Students in Jiangsu Province under grant Nos. 201810313047Y and 201910313004Z.

    The data used to support the findings of this study are available from the corresponding author upon request.

    The authors declare that there is no conflict of interest regarding the publication of this paper.



    [1] Civil Aviation Administration of China, 2018 Civil Aviation Industry Development Statistics Bulletin, 2019. Available from: http://www.caac.gov.cn/XXGK/XXGK/TJSJ/201905/P0201905085195 29727887.pdf.
    [2] M. Ehrgott, D. M. Ryan, Constructing robust crew schedules with bicriteria optimization, J. Multi-Criter. Decis. Anal., 11 (2002), 139–150. https://doi.org/10.1002/mcda.321 doi: 10.1002/mcda.321
    [3] S. Zhou, Z. Zhan, Z. Chen, S. Kwong, J. Zhang, A multi-objective ant colony system algorithm for airline crew rostering problem with fairness and satisfaction, IEEE Trans. Intell. Transp. Syst., 22 (2020), 6784–6798. https://doi.org/10.1109/TITS.2020.2994779 doi: 10.1109/TITS.2020.2994779
    [4] R. T. Marler, J. S. Arora, Survey of multi-objective optimization methods for engineering, Struct. Multidiscip. Optim., 26 (2004), 369–395. https://doi.org/10.1007/s00158-003-0368-6 doi: 10.1007/s00158-003-0368-6
    [5] Q. Yang, L. Dan, M. Lv, J. Wu, W. Li, W. Dong, Quantitative assessment of the parameterization sensitivity of the Noah-MP land surface model with dynamic vegetation using ChinaFLUX data, Agric. For. Meteorol., 307 (2021), 108542. https://doi.org/10.1016/j.agrformet.2021.108542 doi: 10.1016/j.agrformet.2021.108542
    [6] Q. Yang, L. Dan, J. Wu, R. Jiang, J. Dan, W. Li, et al., The improved freeze–thaw process of a climate-vegetation model: Calibration and validation tests in the source region of the Yellow River, Agric. For. Meteorol., 123 (2018), 346–367. https://doi.org/10.1029/2017JD028050 doi: 10.1029/2017JD028050
    [7] J. E. Beasley, B. Cao, A tree search algorithm for the crew scheduling problem, Eur. J. Oper. Res., 94 (1996), 517–526. https://doi.org/10.1016/0377-2217(95)00093-3 doi: 10.1016/0377-2217(95)00093-3
    [8] J. E. Beasley, B. Cao, A dynamic programming based algorithm for the crew scheduling problem, Comput. Oper. Res., 25 (1998), 567–582. https://doi.org/10.1016/S0305-0548(98)00019-7 doi: 10.1016/S0305-0548(98)00019-7
    [9] P. Lučić, D. Teodorović, Metaheuristics approach to the aircrew rostering problem, Ann. Oper. Res., 155 (2007), 311–338. https://doi.org/10.1007/s10479-007-0216-y doi: 10.1007/s10479-007-0216-y
    [10] B. Maenhout, M. Vanhoucke, A hybrid scatter search heuristic for personalized crew rostering in the airline industry, Eur. J. Oper. Res., 206 (2010), 155–167. https://doi.org/10.1016/j.ejor.2010.01.040 doi: 10.1016/j.ejor.2010.01.040
    [11] R. Hadianti, K. Novianingsih, S. Uttunggadewa, K. Sidarto, N. Sumarti, E. Soewono, Optimization model for an airline crew rostering problem: Case of Garuda Indonesia, J. Math. Fundam. Sci., 45 (2013), 218–234. https://doi.org/10.5614/j.math.fund.sci.2013.45.3.2 doi: 10.5614/j.math.fund.sci.2013.45.3.2
    [12] F. Quesnel, G. Desaulniers, F. Soumis, Improving air crew rostering by considering crew preferences in the crew pairing problem, Transp. Sci., 54 (2020), 97–114. https://doi.org/10.1287/trsc.2019.0913 doi: 10.1287/trsc.2019.0913
    [13] F. Quesnel, A. Wu, G. Desaulniers, F. Soumis, Deep-learning-based partial pricing in a branch-and-price algorithm for personalized crew rostering, Comput. Oper. Res., 138 (2022), 105554. https://doi.org/10.1016/j.cor.2021.105554 doi: 10.1016/j.cor.2021.105554
    [14] B. Deng, An improved honey badger algorithm by genetic algorithm and levy flight distribution for solving airline crew rostering problem, IEEE Access, 10 (2022), 108075–108088. https://doi.org/10.1109/ACCESS.2022.3213066 doi: 10.1109/ACCESS.2022.3213066
    [15] N. Souai, J. Teghem, Genetic algorithm based approach for the integrated airline crew-pairing and rostering problem, Eur. J. Oper. Res., 199 (2009), 674–683. https://doi.org/10.1016/j.ejor.2007.10.065 doi: 10.1016/j.ejor.2007.10.065
    [16] M. Saddoune, G. Desaulniers, I. Elhallaoui, F. Soumis, A. Fathollahi-Fard, Integrated airline crew pairing and crew assignment by dynamic constraint aggregation, Transp. Sci., 46 (2012), 39–55. https://doi.org/10.1287/trsc.1110.0379 doi: 10.1287/trsc.1110.0379
    [17] V. Zeighami, M. Saddoune, F. Soumis, Alternating Lagrangian decomposition for integrated airline crew scheduling problem, Eur. J. Oper. Res., 287 (2020), 211–224. https://doi.org/10.1016/j.ejor.2020.05.005 doi: 10.1016/j.ejor.2020.05.005
    [18] A. M. Fathollahi-Fard, A. Ahmadi, F. Goodarzian, N. Cheikhrouhou, A bi-objective home healthcare routing and scheduling problem considering patients' satisfaction in a fuzzy environment, Appl. Soft Comput., 93 (2020), 106385. https://doi.org/10.1016/j.asoc.2020.106385 doi: 10.1016/j.asoc.2020.106385
    [19] Z. Sazvar, S. Mirzapour Al-E-Hashem, A. Baboli, M. A. Jokar, A bi-objective stochastic programming model for a centralized green supply chain with deteriorating products, Int. J. Prod. Econ., 150 (2014), 140–154. https://doi.org/10.1016/j.ijpe.2013.12.023 doi: 10.1016/j.ijpe.2013.12.023
    [20] J. Pasha, A. L. Nwodu, A. M. Fathollahi-Fard, G. Tian, Z. Li, H. Wang, et al., Exact and metaheuristic algorithms for the vehicle routing problem with a factory-in-a-box in multi-objective settings, Adv. Eng. Inf., 52 (2022), 101623. https://doi.org/10.1016/j.aei.2022.101623 doi: 10.1016/j.aei.2022.101623
    [21] A. M. Fathollahi-Fard, L. Woodward, O. Akhrif, Sustainable distributed permutation flow-shop scheduling model based on a triple bottom line concept, J. Ind. Inf. Integr., 24 (2021), 100233. https://doi.org/10.1016/j.jii.2021.100233 doi: 10.1016/j.jii.2021.100233
    [22] E. K. Burke, P. De Causmaecker, G. De Maere, J. Mulder, M. Paelinck, G. V. Berghe, A multi-objective approach for robust airline scheduling, Comput. Oper. Res., 37 (2010), 822–832. https://doi.org/10.1016/j.cor.2009.03.026 doi: 10.1016/j.cor.2009.03.026
    [23] P. Chutima, K. Arayikanon, Many-objective low-cost airline cockpit crew rostering optimisation, Comput. Ind. Eng., 150 (2020), 106844. https://doi.org/10.1016/j.cie.2020.106844 doi: 10.1016/j.cie.2020.106844
    [24] V. Baradaran, A. H. Hosseinian, A multi-objective mathematical formulation for the airline crew scheduling problem: MODE and NSGA-II solution approaches, J. Ind. Manage. Perspect., 11 (2021), 247–269. https://doi.org/10.52547/jimp.11.1.247 doi: 10.52547/jimp.11.1.247
    [25] Q. Yang, H. Zuo, W. Li, Land surface model and particle swarm optimization algorithm based on the model-optimization method for improving soil moisture simulation in a semi-arid region, Plos One, 11 (2016), e0151576. https://doi.org/10.1371/journal.pone.0151576 doi: 10.1371/journal.pone.0151576
    [26] Q. Yang, J. Wu, Y. Li, W. Li, L. Wang, Y. Yang, Using the particle swarm optimization algorithm to calibrate the parameters relating to the turbulent flux in the surface layer in the source region of the Yellow River, Agric. For. Meteorol., 232 (2017), 606–622. https://doi.org/10.1016/j.agrformet.2016.10.019 doi: 10.1016/j.agrformet.2016.10.019
    [27] L. Abualigah, D. Yousri, M. A. Al-Qaness, A. H. Gandomi, Aquila optimizer: A novel meta-heuristic optimization algorithm, Comput. Ind. Eng., 157 (2021), 107250. https://doi.org/10.1016/j.cie.2021.107250 doi: 10.1016/j.cie.2021.107250
    [28] B. Naderi, R. Tavakkoli-Moghaddam, M. Khalili, Electromagnetism-like mechanism and simulated annealing algorithms for flowshop scheduling problems minimizing the total weighted tardiness and makespan, Knowl. Based Syst., 23 (2010), 77–85. https://doi.org/10.1016/j.knosys.2009.06.002 doi: 10.1016/j.knosys.2009.06.002
    [29] B. Vahdani, M. Zandieh, Scheduling trucks in cross-docking systems: Robust meta-heuristics, Comput. Ind. Eng., 58 (2010), 12–24. https://doi.org/10.1016/j.cie.2009.06.006 doi: 10.1016/j.cie.2009.06.006
    [30] M. Abd Elaziz, A. Dahou, N. A. Alsaleh, A. H. Elsheikh, A. I. Saba, M. Ahmadein, Boosting COVID-19 image classification using MobileNetV3 and aquila optimizer algorithm, Entropy, 23 (2021), 1383. https://doi.org/10.3390/e23111383 doi: 10.3390/e23111383
    [31] A. M. AlRassas, M. A. Al-qaness, A. A. Ewees, S. Ren, M. Abd Elaziz, R. Damaševičius, et al., Optimized ANFIS model using Aquila Optimizer for oil production forecasting, Processes, 9 (2021), 1194. https://doi.org/10.3390/pr9071194 doi: 10.3390/pr9071194
    [32] Q. Xing, J. Wang, H. Lu, S. Wang, Research of a novel short-term wind forecasting system based on multi-objective Aquila optimizer for point and interval forecast, Energy Convers. Manage., 263 (2022), 115583. https://doi.org/10.1016/j.enconman.2022.115583 doi: 10.1016/j.enconman.2022.115583
    [33] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Trans. Evol. Comput., 6 (2002), 182–197. https://doi.org/10.1109/4235.996017 doi: 10.1109/4235.996017
    [34] G. Taguchi, Introduction to Quality Engineering: Designing Quality into Products and Processes, 1986.
    [35] M. Rezaei, M. Afsahi, M. Shafiee, M. Patriksson, A bi-objective optimization framework for designing an efficient fuel supply chain network in post-earthquakes, Comput. Ind. Eng., 147 (2020), 106654. https://doi.org/10.1016/j.cie.2020.106654 doi: 10.1016/j.cie.2020.106654
    [36] N. Janatyan, M. Zandieh, A. Alem-Tabriz, M. Rabieh, A robust optimization model for sustainable pharmaceutical distribution network design: A case study, Ann. Oper. Res., 46 (2021), 1–20. https://doi.org/10.1007/s10479-020-03900-5 doi: 10.1007/s10479-020-03900-5
    [37] K. Govindan, A. Jafarian, M. E. Azbari, T. Choi, Optimal bi-objective redundancy allocation for systems reliability and risk management, IEEE Trans. Cybern., 46 (2015), 1735–1748. https://doi.org/10.1109/TCYB.2014.2382666 doi: 10.1109/TCYB.2014.2382666
    [38] F. Goodarzian, A. A. Taleizadeh, P. Ghasemi, A. Abraham, An integrated sustainable medical supply chain network during COVID-19, Eng. Appl. Artif. Intell., 100 (2021), 104188. https://doi.org/10.1016/j.engappai.2021.104188 doi: 10.1016/j.engappai.2021.104188
    [39] G. R. Amin, M. Toloo, Finding the most efficient DMUs in DEA: An improved integrated model, Comput. Ind. Eng., 52 (2007), 71–77. https://doi.org/10.1016/j.cie.2006.10.003 doi: 10.1016/j.cie.2006.10.003
    [40] P. Seydanlou, F. Jolai, R. Tavakkoli-Moghaddam, A. Fathollahi-Fard, A multi-objective optimization framework for a sustainable closed-loop supply chain network in the olive industry: Hybrid meta-heuristic algorithms, Expert Syst. Appl., 203 (2022), 117566. https://doi.org/10.1016/j.eswa.2022.117566 doi: 10.1016/j.eswa.2022.117566
    [41] Y. Haimes, On a bicriterion formulation of the problems of integrated system identification and system optimization, IEEE Trans. Syst. Man Cybern., 1 (1971), 296–297.
    [42] X. Liu, X. Zhang, W. Li, X. Zhang, Swarm optimization algorithms applied to multi-resource fair allocation in heterogeneous cloud computing systems, Computing, 99 (2017), 1231–1255. https://doi.org/10.1007/s00607-017-0561-x doi: 10.1007/s00607-017-0561-x
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
