
Citation: Massoumeh Nazari, Mahmoud Dehghan Nayeri, Kiamars Fathi Hafshjani. Correction: Developing mathematical models and intelligent sustainable supply chains by uncertain parameters and algorithms[J]. AIMS Mathematics, 2024, 9(9): 25223-25231. doi: 10.3934/math.20241230
Developing mathematical models and intelligent sustainable supply chains by uncertain parameters and algorithms
by Massoumeh Nazari, Mahmoud Dehghan Nayeri and Kiamars Fathi Hafshjani. AIMS Mathematics, 2024, 9(3): 5204–5233. DOI: 10.3934/math.2024252
The authors would like to correct a small mistake in Figure 1 of the published paper [1] by reversing the direction of the fifth elbow arrow (from h to j), and to add the artificial intelligence code to the Appendix. The updated Figure 1 and Appendix are given below.
% NAR network 1: feedback delays 1:3, 25 hidden neurons, 70/10/20 train/validation/test split
T = tonndata(y, true, false); % convert the series to the cell-array format used by the toolbox
trainFcn = 'trainlm';
feedbackDelays = 1:3;
hiddenLayerSize = 25;
net = narnet(feedbackDelays, hiddenLayerSize, 'open', trainFcn); % open-loop nonlinear autoregressive network
net.input.processFcns = {'removeconstantrows', 'mapminmax'};
[x, xi, ai, t] = preparets(net, {}, {}, T); % shift the series into inputs, initial states, and targets
net.divideFcn = 'dividerand';
net.divideMode = 'time';
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 10/100;
net.divideParam.testRatio = 20/100;
net.performFcn = 'mse';
net.plotFcns = {'plotperform', 'plottrainstate', 'ploterrhist', ...
    'plotregression', 'plotresponse', 'ploterrcorr', 'plotinerrcorr'};
[net, tr] = train(net, x, t, xi, ai); % train with Levenberg-Marquardt backpropagation
y = net(x, xi, ai);
e = gsubtract(t, y);
performance = perform(net, t, y)
trainTargets = gmultiply(t, tr.trainMask);
valTargets = gmultiply(t, tr.valMask);
testTargets = gmultiply(t, tr.testMask);
trainPerformance = perform(net, trainTargets, y)
valPerformance = perform(net, valTargets, y)
testPerformance = perform(net, testTargets, y)
view(net)
netc = closeloop(net); % closed-loop form for multi-step-ahead prediction
netc.name = [net.name ' - Closed Loop'];
view(netc)
[xc, xic, aic, tc] = preparets(netc, {}, {}, T);
yc = netc(xc, xic, aic);
closedLoopPerformance = perform(net, tc, yc)
[x1, xio, aio, t] = preparets(net, {}, {}, T);
[y1, xfo, afo] = net(x1, xio, aio);
[netc, xic, aic] = closeloop(net, xfo, afo);
[y2, xfc, afc] = netc(cell(0, 5), xic, aic);
nets = removedelay(net); % shift the network to predict one step ahead of real time
nets.name = [net.name ' - Predict One Step Ahead'];
view(nets)
[xs, xis, ais, ts] = preparets(nets, {}, {}, T);
ys = nets(xs, xis, ais);
stepAheadPerformance = perform(nets, ts, ys)
% Optional deployment steps (disabled by default):
if (false)
genFunction(net, 'myNeuralNetworkFunction');
y = myNeuralNetworkFunction(x, xi, ai);
end
if (false)
genFunction(net, 'myNeuralNetworkFunction', 'MatrixOnly', 'yes');
x1 = cell2mat(x(1, :));
xi1 = cell2mat(xi(1, :));
y = myNeuralNetworkFunction(x1, xi1);
end
if (false)
gensim(net);
end
% NAR network 2: feedback delays 1:3, 20 hidden neurons, 85/5/10 train/validation/test split
T = tonndata(y, true, false);
trainFcn = 'trainlm';
feedbackDelays = 1:3;
hiddenLayerSize = 20;
net = narnet(feedbackDelays, hiddenLayerSize, 'open', trainFcn);
net.input.processFcns = {'removeconstantrows', 'mapminmax'};
[x, xi, ai, t] = preparets(net, {}, {}, T);
net.divideFcn = 'dividerand';
net.divideMode = 'time';
net.divideParam.trainRatio = 85/100;
net.divideParam.valRatio = 5/100;
net.divideParam.testRatio = 10/100;
net.performFcn = 'mse';
net.plotFcns = {'plotperform', 'plottrainstate', 'ploterrhist', ...
    'plotregression', 'plotresponse', 'ploterrcorr', 'plotinerrcorr'};
[net, tr] = train(net, x, t, xi, ai);
y = net(x, xi, ai);
e = gsubtract(t, y);
performance = perform(net, t, y)
trainTargets = gmultiply(t, tr.trainMask);
valTargets = gmultiply(t, tr.valMask);
testTargets = gmultiply(t, tr.testMask);
trainPerformance = perform(net, trainTargets, y)
valPerformance = perform(net, valTargets, y)
testPerformance = perform(net, testTargets, y)
view(net)
netc = closeloop(net);
netc.name = [net.name ' - Closed Loop'];
view(netc)
[xc, xic, aic, tc] = preparets(netc, {}, {}, T);
yc = netc(xc, xic, aic);
closedLoopPerformance = perform(net, tc, yc)
[x1, xio, aio, t] = preparets(net, {}, {}, T);
[y1, xfo, afo] = net(x1, xio, aio);
[netc, xic, aic] = closeloop(net, xfo, afo);
[y2, xfc, afc] = netc(cell(0, 5), xic, aic);
nets = removedelay(net);
nets.name = [net.name ' - Predict One Step Ahead'];
view(nets)
[xs, xis, ais, ts] = preparets(nets, {}, {}, T);
ys = nets(xs, xis, ais);
stepAheadPerformance = perform(nets, ts, ys)
if (false)
genFunction(net, 'myNeuralNetworkFunction');
y = myNeuralNetworkFunction(x, xi, ai);
end
if (false)
genFunction(net, 'myNeuralNetworkFunction', 'MatrixOnly', 'yes');
x1 = cell2mat(x(1, :));
xi1 = cell2mat(xi(1, :));
y = myNeuralNetworkFunction(x1, xi1);
end
if (false)
gensim(net);
end
% NAR network 3: feedback delays 1:3, 27 hidden neurons, 85/10/5 train/validation/test split
T = tonndata(y, true, false);
trainFcn = 'trainlm';
feedbackDelays = 1:3;
hiddenLayerSize = 27;
net = narnet(feedbackDelays, hiddenLayerSize, 'open', trainFcn);
net.input.processFcns = {'removeconstantrows', 'mapminmax'};
[x, xi, ai, t] = preparets(net, {}, {}, T);
net.divideFcn = 'dividerand';
net.divideMode = 'time';
net.divideParam.trainRatio = 85/100;
net.divideParam.valRatio = 10/100;
net.divideParam.testRatio = 5/100;
net.performFcn = 'mse';
net.plotFcns = {'plotperform', 'plottrainstate', 'ploterrhist', ...
    'plotregression', 'plotresponse', 'ploterrcorr', 'plotinerrcorr'};
[net, tr] = train(net, x, t, xi, ai);
y = net(x, xi, ai);
e = gsubtract(t, y);
performance = perform(net, t, y)
trainTargets = gmultiply(t, tr.trainMask);
valTargets = gmultiply(t, tr.valMask);
testTargets = gmultiply(t, tr.testMask);
trainPerformance = perform(net, trainTargets, y)
valPerformance = perform(net, valTargets, y)
testPerformance = perform(net, testTargets, y)
view(net)
netc = closeloop(net);
netc.name = [net.name ' - Closed Loop'];
view(netc)
[xc, xic, aic, tc] = preparets(netc, {}, {}, T);
yc = netc(xc, xic, aic);
closedLoopPerformance = perform(net, tc, yc)
[x1, xio, aio, t] = preparets(net, {}, {}, T);
[y1, xfo, afo] = net(x1, xio, aio);
[netc, xic, aic] = closeloop(net, xfo, afo);
[y2, xfc, afc] = netc(cell(0, 5), xic, aic);
nets = removedelay(net);
nets.name = [net.name ' - Predict One Step Ahead'];
view(nets)
[xs, xis, ais, ts] = preparets(nets, {}, {}, T);
ys = nets(xs, xis, ais);
stepAheadPerformance = perform(nets, ts, ys)
if (false)
genFunction(net, 'myNeuralNetworkFunction');
y = myNeuralNetworkFunction(x, xi, ai);
end
if (false)
genFunction(net, 'myNeuralNetworkFunction', 'MatrixOnly', 'yes');
x1 = cell2mat(x(1, :));
xi1 = cell2mat(xi(1, :));
y = myNeuralNetworkFunction(x1, xi1);
end
if (false)
gensim(net);
end
% NAR network 4: feedback delays 1:3, 42 hidden neurons, 75/15/10 train/validation/test split
T = tonndata(x4, false, false);
trainFcn = 'trainlm';
feedbackDelays = 1:3;
hiddenLayerSize = 42;
net = narnet(feedbackDelays, hiddenLayerSize, 'open', trainFcn);
net.input.processFcns = {'removeconstantrows', 'mapminmax'};
[x, xi, ai, t] = preparets(net, {}, {}, T);
net.divideFcn = 'dividerand';
net.divideMode = 'time';
net.divideParam.trainRatio = 75/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 10/100;
net.performFcn = 'mse';
net.plotFcns = {'plotperform', 'plottrainstate', 'ploterrhist', ...
    'plotregression', 'plotresponse', 'ploterrcorr', 'plotinerrcorr'};
[net, tr] = train(net, x, t, xi, ai);
y = net(x, xi, ai);
e = gsubtract(t, y);
performance = perform(net, t, y)
trainTargets = gmultiply(t, tr.trainMask);
valTargets = gmultiply(t, tr.valMask);
testTargets = gmultiply(t, tr.testMask);
trainPerformance = perform(net, trainTargets, y)
valPerformance = perform(net, valTargets, y)
testPerformance = perform(net, testTargets, y)
view(net)
netc = closeloop(net);
netc.name = [net.name ' - Closed Loop'];
view(netc)
[xc, xic, aic, tc] = preparets(netc, {}, {}, T);
yc = netc(xc, xic, aic);
closedLoopPerformance = perform(net, tc, yc)
[x1, xio, aio, t] = preparets(net, {}, {}, T);
[y1, xfo, afo] = net(x1, xio, aio);
[netc, xic, aic] = closeloop(net, xfo, afo);
[y2, xfc, afc] = netc(cell(0, 5), xic, aic);
nets = removedelay(net);
nets.name = [net.name ' - Predict One Step Ahead'];
view(nets)
[xs, xis, ais, ts] = preparets(nets, {}, {}, T);
ys = nets(xs, xis, ais);
stepAheadPerformance = perform(nets, ts, ys)
if (false)
genFunction(net, 'myNeuralNetworkFunction');
y = myNeuralNetworkFunction(x, xi, ai);
end
if (false)
genFunction(net, 'myNeuralNetworkFunction', 'MatrixOnly', 'yes');
x1 = cell2mat(x(1, :));
xi1 = cell2mat(xi(1, :));
y = myNeuralNetworkFunction(x1, xi1);
end
if (false)
gensim(net);
end
% NAR network 5: feedback delays 1:3, 38 hidden neurons, 45/35/20 train/validation/test split
T = tonndata(x5, false, false);
trainFcn = 'trainlm'; % Levenberg-Marquardt backpropagation
feedbackDelays = 1:3;
hiddenLayerSize = 38;
net = narnet(feedbackDelays, hiddenLayerSize, 'open', trainFcn);
net.input.processFcns = {'removeconstantrows', 'mapminmax'};
[x, xi, ai, t] = preparets(net, {}, {}, T);
net.divideFcn = 'dividerand';
net.divideMode = 'time';
net.divideParam.trainRatio = 45/100;
net.divideParam.valRatio = 35/100;
net.divideParam.testRatio = 20/100;
net.performFcn = 'mse';
net.plotFcns = {'plotperform', 'plottrainstate', 'ploterrhist', ...
    'plotregression', 'plotresponse', 'ploterrcorr', 'plotinerrcorr'};
[net, tr] = train(net, x, t, xi, ai);
y = net(x, xi, ai);
e = gsubtract(t, y);
performance = perform(net, t, y)
trainTargets = gmultiply(t, tr.trainMask);
valTargets = gmultiply(t, tr.valMask);
testTargets = gmultiply(t, tr.testMask);
trainPerformance = perform(net, trainTargets, y)
valPerformance = perform(net, valTargets, y)
testPerformance = perform(net, testTargets, y)
view(net)
netc = closeloop(net);
netc.name = [net.name ' - Closed Loop'];
view(netc)
[xc, xic, aic, tc] = preparets(netc, {}, {}, T);
yc = netc(xc, xic, aic);
closedLoopPerformance = perform(net, tc, yc)
[x1, xio, aio, t] = preparets(net, {}, {}, T);
[y1, xfo, afo] = net(x1, xio, aio);
[netc, xic, aic] = closeloop(net, xfo, afo);
[y2, xfc, afc] = netc(cell(0, 5), xic, aic);
nets = removedelay(net);
nets.name = [net.name ' - Predict One Step Ahead'];
view(nets)
[xs, xis, ais, ts] = preparets(nets, {}, {}, T);
ys = nets(xs, xis, ais);
stepAheadPerformance = perform(nets, ts, ys)
if (false)
genFunction(net, 'myNeuralNetworkFunction');
y = myNeuralNetworkFunction(x, xi, ai);
end
if (false)
genFunction(net, 'myNeuralNetworkFunction', 'MatrixOnly', 'yes');
x1 = cell2mat(x(1, :));
xi1 = cell2mat(xi(1, :));
y = myNeuralNetworkFunction(x1, xi1);
end
if (false)
gensim(net);
end
These changes have no material impact on the conclusions of the article. The original manuscript will be updated [1].
The authors declare no conflicts of interest.
[1] M. Nazari, M. D. Nayeri, K. F. Hafshjani, Developing mathematical models and intelligent sustainable supply chains by uncertain parameters and algorithms, AIMS Math., 9 (2024), 5204–5233. https://doi.org/10.3934/math.2024252