
Predicting construction costs often suffers from low prediction accuracy, limited generalizability and poor efficiency, owing to the complex composition of construction projects, the large number of personnel involved, long working periods and high levels of uncertainty. To address these concerns, a prediction index system and a prediction model were developed. First, the factors influencing construction cost were identified, a prediction index system comprising 14 secondary indexes was constructed and the methods of obtaining the data were presented in detail. A prediction model based on the Random Forest (RF) algorithm was then constructed, and the Bird Swarm Algorithm (BSA) was used to optimize the RF parameters and thereby avoid the effect of their random selection on prediction accuracy. Finally, the engineering data of a construction company in Xinyu, China were selected as a case study. The case study showed that the maximum relative error of the proposed model was only 1.24%, which met the requirements of engineering practice. For the selected cases, the minimum prediction index system that met the required prediction accuracy included 11 secondary indexes. Compared with classical metaheuristic optimization algorithms (Particle Swarm Optimization, Genetic Algorithms, Tabu Search, Simulated Annealing, Ant Colony Optimization, Differential Evolution and Artificial Fish School), BSA determined the optimal combination of calculation parameters more quickly on average. Compared with classical and recent forecasting methods (Back Propagation Neural Network, Support Vector Machines, Stacked Auto-Encoders and Extreme Learning Machine), the proposed model exhibited higher forecasting accuracy and efficiency. The proposed prediction model can thus better support construction cost prediction, and its results provide a basis for optimizing the cost management of construction projects.
Citation: Zhishan Zheng, Lin Zhou, Han Wu, Lihong Zhou. Construction cost prediction system based on Random Forest optimized by the Bird Swarm Algorithm[J]. Mathematical Biosciences and Engineering, 2023, 20(8): 15044-15074. doi: 10.3934/mbe.2023674
Copyright protection of multimedia content has become a major concern over recent decades and has driven the development of a series of techniques, the most notable of which are cryptographic schemes and digital watermarking methods. Cryptographic schemes, also called digital signature methods, are usually based on a hashing function [1] or image feature extraction [2,3,4,5] followed by a public-key encryption system. Unfortunately, digital signature-based methods are mainly used for integrity authentication rather than ownership protection, because no explicit copyright information can be extracted from the digital products. Digital watermarking methods, on the other hand, embed a watermark into the image without causing significant damage to its usage; the watermark can later be extracted from the watermarked image for ownership verification. Generally speaking, an effective watermarking scheme should be perceptually invisible and robust to possible attacks, including signal processing, geometric distortion and intentional manipulation.
In recent years, robust watermarking systems have become an intriguing research topic, including spatial domain methods [6,7,8,9,10,11,12,13,14,15] and transform domain methods [16,17,18,19,20,21,22]. Either way, in order to gain robustness, as Cox et al. prominently argued in [23], the watermark should be placed in the perceptually most significant components of the carrier signal, despite the risk of fidelity distortion. Obviously, this criterion leads to a trade-off between robustness and imperceptibility, since improving robustness has a notable impact on the visual quality of the image. To alleviate this problem, many adaptive watermark embedding algorithms have emerged, including adaptation of the embedding strength [24,25,26] and of the embedding position [27]. However, none of these methods has resolved the issue conclusively. Additionally, most existing robust watermarking methods essentially design or utilize some stable features of the image in which to embed the watermark. For example, transform domain methods typically embed the watermark in the DCT coefficients or the wavelet subbands of the image because these are relatively stable; the transform process can be regarded as robust feature extraction from the original image. Nevertheless, artificially designing and extracting robust features is no easy task, especially when the method is supposed to resist one or several particular attacks.
Based on these considerations, we propose a new scheme for multiple-image copyright protection that replaces the traditional image watermarking architecture by resorting to deep neural networks, which have achieved superior performance in feature extraction and representation [28,29,30]. To circumvent the trade-off between robustness and imperceptibility, we no longer modify the image; instead, we represent the copyright message by exploiting the patterns of the image data itself through a neural network. Specifically, a large number of selected image blocks, along with the message bits as their corresponding labels, are fed into the network, enabling the network to extract the copyright message from images by classifying the image blocks into the target labels (bits). Moreover, to enhance robustness, we adopt a preventive strategy to further train the network, namely feeding it image blocks from attacked versions of the images. In this way, the neural network automatically extracts classification features that are robust to the corresponding attacks. The trained neural network can then be stored in a database (or added to the image header) for future use in ownership verification. Finally, we found that the neural network-based scheme is particularly suitable for multiple-image copyright protection: in our experiments, a well-designed neural network is capable of representing the copyright messages of a considerable number of images under various intentional attacks. The proposed scheme achieves strong robustness and perfect imperceptibility.
This paper is organized as follows. Section 2 outlines the proposed system for image copyright protection. Some discussion and analysis of the scheme are given in Section 3. The experiments in Section 4 show that our scheme offers considerable capacity and strong robustness to a variety of attacks. Finally, Section 5 concludes the paper and discusses possible enhancements to the proposed scheme.
This section elaborates on the proposed scheme in detail. The image copyright protection scheme consists of two parts: copyright image registration, where we train a neural network for a number of given images, and copyright verification, where the stored neural network extracts the copyright message from the images to be verified.
The neural network in our scheme is partly analogous to the hashing procedure in general digital signature systems, in the sense that the network serves as a mapping from images to discrete numeric values (digests). However, we want our scheme not only to reduce the data but also to produce an explicit message bitstream for copyright verification. We therefore train a neural network to transform the image blocks into the copyright message. The training procedure is shown in Figure 1:
In the most common application scenario, our scheme starts with a series of RGB images $I=(I_1,I_2,\cdots,I_n)$ to be registered to our verification system and a binary copyright message for each image, $W_j=(w_1,w_2,\cdots,w_m)$, $w_i\in\{0,1\}$, $j=1,2,\cdots,n$. In practice, the choice of $n$ and $m$ depends on the capacity of the network; in general, as the number of images and the copyright message length increase, the network structure should be adjusted accordingly. In this paper, we adopt the 5-layer fully connected neural network structure shown in Figure 2. The choice of network is further elaborated in Section 3.4.
Before training the network, an expanded training set is generated from an attack set $A=\{A_1,A_2,\cdots,A_r\}$ comprising $r$ types of attack; the proposed $A$ is shown in Table 1, and a sketch of how such an attacked set can be generated follows the table. For each image $I_k$, the corresponding attacked image set is denoted by $A(I_k)=\{A_1(I_k),A_2(I_k),\cdots,A_r(I_k)\}$.
Notation | Type of attack |
A1 | Superposition Gaussian noise with μ=0,σ=0.2 |
A2 | Superposition Gaussian noise with μ=0,σ=0.2 |
A3 | Superposition Gaussian noise with μ=0,σ=0.2 |
A4 | JPEG Compression with quality factor 50 |
A5 | JPEG Compression with quality factor 30 |
A6 | JPEG Compression with quality factor 10 |
A7 | Mean value filtering with kernel size 7×7 |
A8 | Median filtering with kernel size 5×5 |
A9 | Gaussian filtering with kernel size 7×7 |
A10 | Resize image to 64×64 (bilinear interpolation) |
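For concreteness, the attacked image set $A(I_k)$ of Table 1 can be generated roughly as follows. This is a minimal sketch using Pillow and NumPy; the noise parameterization (standard deviation as a fraction of 255) and the blur radii standing in for the listed kernel sizes are our assumptions, not settings taken from the paper.

```python
import io
import numpy as np
from PIL import Image, ImageFilter

def gaussian_noise(img, sigma_frac):
    """Superpose zero-mean Gaussian noise; sigma_frac is a fraction of the max pixel value 255."""
    arr = np.asarray(img, dtype=np.float32)
    noisy = arr + np.random.normal(0.0, sigma_frac * 255.0, arr.shape)
    return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))

def jpeg_compress(img, quality):
    """Round-trip the image through an in-memory JPEG at the given quality factor."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def attack_set(img):
    """Return attacked versions A_1(I_k), ..., A_r(I_k) of one registered image (cf. Table 1)."""
    return [
        gaussian_noise(img, 0.2),                 # superposed Gaussian noise
        jpeg_compress(img, 50),
        jpeg_compress(img, 30),
        jpeg_compress(img, 10),
        img.filter(ImageFilter.BoxBlur(3)),       # approx. 7x7 mean value filtering
        img.filter(ImageFilter.MedianFilter(5)),  # 5x5 median filtering
        img.filter(ImageFilter.GaussianBlur(2)),  # Gaussian filtering
        img.resize((64, 64), Image.BILINEAR),     # downscaling to 64x64
    ]
```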
To determine the locations of the candidate training data, $I$ is processed as follows. Normalize the size of all images in $I$ to $M\times M$, and then divide each image into non-overlapping blocks of size $b\times b$. For each normalized image, compute keypoints with diameter $b$ and locate the $m$ strongest keypoints at the block scale. In other words, for each image, we choose the $m$ image blocks containing the strongest keypoints and save the location information of these chosen blocks. These $m$ image blocks serve as the input of the network, and the corresponding outputs are the $m$ message bits.
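The keypoint-based block selection can be sketched as follows. The paper does not name a specific keypoint detector, so OpenCV's ORB is used here purely as an illustration; `M`, `b` and `m` are the normalization size, block size and message length defined above.

```python
import cv2

def select_blocks(img, M, b, m):
    """Return the (row, col) indices of the m blocks containing the strongest keypoints."""
    norm = cv2.resize(img, (M, M), interpolation=cv2.INTER_LINEAR)
    gray = cv2.cvtColor(norm, cv2.COLOR_BGR2GRAY)
    detector = cv2.ORB_create(nfeatures=10 * m)          # detector choice is an assumption
    keypoints = sorted(detector.detect(gray, None),
                       key=lambda kp: kp.response, reverse=True)

    locations, seen = [], set()
    for kp in keypoints:                                  # strongest keypoints first
        block = (int(kp.pt[1]) // b, int(kp.pt[0]) // b)  # keypoint -> containing block
        if block not in seen:                             # keep at most one block per location
            seen.add(block)
            locations.append(block)
        if len(locations) == m:
            break
    return locations                                      # saved for the verification stage
```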
We then normalize the images in the attacked image sets $A(I_k)$, $k=1,\cdots,n$, and choose the corresponding image blocks using the saved location information. Taking all $n$ images in $I$ into account, we train the neural network to map $nm(r+1)$ image blocks to the $n$ copyright message bitstreams of the $n$ images. The trained neural network can then be stored in a database (or added to the image header) for ownership verification.
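A sketch of assembling the $nm(r+1)$ block-bit training pairs, reusing the hypothetical `select_blocks` and `attack_set` helpers sketched above (color-space conversions between OpenCV and Pillow are omitted for brevity):

```python
import cv2
import numpy as np
from PIL import Image

def build_training_pairs(images, messages, M, b):
    """Pair every selected block, from the original and each attacked copy, with its message bit."""
    blocks, bits = [], []
    for img, msg in zip(images, messages):                  # img: uint8 array, msg: m bits
        locs = select_blocks(img, M, b, len(msg))           # saved block locations
        variants = [Image.fromarray(img)] + attack_set(Image.fromarray(img))
        for variant in variants:                            # original + r attacked copies
            norm = cv2.resize(np.asarray(variant), (M, M))  # renormalize every copy to M x M
            for (row, col), bit in zip(locs, msg):
                patch = norm[row * b:(row + 1) * b, col * b:(col + 1) * b]
                blocks.append(patch.astype(np.float32) / 255.0)
                bits.append(bit)
    return np.stack(blocks), np.array(bits, dtype=np.int64)  # n*m*(r+1) pairs in total
```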
During the copyright verification stage, the copyright owner uses the trained network to extract the copyright messages from the registered images to declare ownership.
Continuing with the application scenario above, suppose the image distributor has registered an image $I_c$ on a neural network, and that $I_c$ has possibly been through some malicious or non-malicious manipulations. As Figure 3 shows, in order to feed the trained network $N_{IW}$, we first normalize $I_c$ to size $M\times M$ and then divide it into $b\times b$ non-overlapping blocks, exactly as in the registration stage. Note that at the verification stage the distributor already knows the location information of the registered image blocks, which makes it possible to locate those blocks and feed them to the network sequentially. The trained neural network $N_{IW}$, serving as a copyright message extractor, then yields a bit sequence $W_c$ from the possibly distorted registered image $I_c$, and $W_c$ serves as the copyright message of $I_c$ for verifying ownership.
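The verification stage can be sketched correspondingly; it assumes the block classifier exposes a Keras-style `predict` returning softmax probabilities (a matching model sketch appears in Section 4).

```python
import cv2
import numpy as np

def extract_message(model, image, locations, M, b):
    """Recover the bit sequence W_c from a (possibly distorted) registered image I_c."""
    norm = cv2.resize(image, (M, M))                        # same normalization as registration
    patches = np.stack([
        norm[r * b:(r + 1) * b, c * b:(c + 1) * b].astype(np.float32) / 255.0
        for r, c in locations                               # block locations saved at registration
    ])
    probs = model.predict(patches)                          # two-way softmax output per block
    return probs.argmax(axis=1)                             # one recovered bit per block
```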
In this section, we explain and discuss the overall design of the scheme and its details.
The universality theorem [31,32] ensures that a neural network can fit any mapping from image blocks to classification labels, as long as no identical image blocks are mapped to different labels. In fact, this undesirable case is possible in our training set; in other words, two identical (or highly similar) image blocks may be chosen to participate in training while being paired with two different message bits. However, we take the following measures to avoid such block collisions as far as possible.
● Choosing keypoint-containing blocks reduces the probability of such a coincidence. The more complex an image block is, the less likely it is to coincide with another block. Moreover, natural images contain far more similar smooth areas than complex ones, so areas containing complex content are preferable in our scheme (a small complexity-based duplicate check is sketched after this list).
● Controlling the number of candidate image blocks also lessens the chance that two similar image blocks co-occur. Because blocks are selected by complexity, increasing the number of blocks raises the chance that simple, similar blocks are selected together.
● Likewise, an appropriate block size also suppresses the possibility of such a coincidence. An undersized block may lead to more similarity in content, whereas an oversized block decreases the copyright message capacity of each image. For this reason, we determined an appropriate block size to balance these considerations.
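As a small illustration of the collision concern discussed above, one could screen the selected blocks for near-duplicates that carry conflicting bits. The mean-absolute-difference measure and the threshold below are our illustrative choices, not part of the paper.

```python
import numpy as np

def screen_collisions(blocks, bits, threshold=4.0):
    """Flag pairs of near-identical blocks that are labeled with different message bits."""
    flagged = []
    for i in range(len(blocks)):
        for j in range(i + 1, len(blocks)):
            if bits[i] != bits[j]:
                diff = np.mean(np.abs(blocks[i].astype(np.float32) -
                                      blocks[j].astype(np.float32)))
                if diff < threshold:              # highly similar content, conflicting labels
                    flagged.append((i, j))
    return flagged
```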
With the universality theorem of neural networks and the collision avoidance rules listed above, we can guarantee the feasibility of the proposed scheme (see Section 4 for detailed settings). Beyond feasibility, we want the trained network to represent the copyright message robustly, so that it can tolerate lossy channels and resist malicious manipulation.
The robustness of the scheme comes from three sources: the error tolerance of the neural network, the preventive training set and robust feature extraction. First, error tolerance is an inherent property of the neural network. The network essentially works as a soft classifier that gives a probabilistic rather than a hard yes/no classification result, allowing disturbed signals to still be classified into the expected class.
Second, since the expanded training set already contains a variety of attacked images, a well-trained neural network guarantees perfect extraction for the image blocks in the training set, so our scheme can resist the attacks contained in the preventive training set. Furthermore, the robustness of the proposed scheme can be enhanced by adding newly attacked images to the preventive training set, enabling a dynamic evolution of robustness according to practical demands.
Finally, our scheme also derives robustness from the generalization ability of the neural network. In essence, the process by which the network maps an original image block and its attacked counterpart to the same label is exactly the process of extracting robust features and classifying them. In this way, even for attacked images that are not included in the training set, the trained network can still perform a correct classification by recognizing these robust features.
The capacity of our scheme is defined simply as the number of image block-message bit pairs that the neural network can effectively enroll. For transmission and storage reasons, we want a single neural network to register as many images as possible. Interestingly, we found experimentally that choosing the same number of blocks from many disparate images, rather than from a small number of images, significantly enlarges the network's capacity. This is partly because choosing blocks from various images provides more diverse pixel distributions, so the image blocks are more likely to have identifiable features for the neural network. In conclusion, to train the network economically, we prefer a shorter copyright message per image but a richer set of images registered by the network.
Of particular note is that the robustness and capacity of our scheme also depend closely on the network structure and training algorithm; owing to space limitations, however, we focus only on the network structure proposed above. All experiments in Section 4 that further explore the robustness and capacity of our scheme are based on this structure.
Our choice of a fully connected neural network, rather than the CNNs that are usually more effective for image feature extraction, is motivated by resource utilization and classification considerations. First, as the proposed method is based on image blocks, increasing the length of the represented copyright message requires that the block size not be too large; otherwise there would not be enough blocks to cover the copyright message length (expanding the normalization size can also increase the number of image blocks, but we prefer more 'content' in a block rather than simply more pixels). At such a small scale, however, a CNN can no longer play to its strengths. Specifically, when the block size is 8×8 as in Section 4, the convolutional kernel of a CNN is either too small to capture the spatial correlation among adjacent regions or too large to effectively reduce the number of parameters.
Moreover, our classification task can be demanding at times. For example, the network sometimes needs to distinguish one image block from another highly similar one (see the co-occurrence case described in Section 3.1), where missing any detail can cause a classification failure. We therefore dropped weight sharing and local connections to avoid any information loss, and chose a fully connected neural network to extract robust features and represent the copyright message.
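As a rough sanity check on this trade-off (the paper does not state the normalization size $M$, so $M=128$ below is purely illustrative; $b=8$ and the 64-bit message length follow Section 4), even a modest normalization size leaves far more candidate blocks than message bits per image:

```python
M, b, m = 128, 8, 64                 # M is an assumed normalization size; b and m follow Section 4
blocks_per_image = (M // b) ** 2     # number of non-overlapping b x b blocks
print(blocks_per_image, m)           # 256 candidate blocks vs. 64 message bits per image
```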
In this section, experiments are performed to evaluate the proposed scheme, which should be able to extract the copyright message from each registered image even when the image has suffered disturbance or damage. We take a number of RGB images from the "Standard" test images and BossBase-1.0 as copyright images (Figure 4 shows some of them) and pseudo-random binary sequences as copyright messages; each image is assigned a 64-bit copyright message. All experiments were performed on a Windows PC with an Intel® Core™ i7-4720 CPU at 2.6 GHz and 8 GB RAM, using TensorFlow 1.6.0.
The network architecture used in this paper is shown in Figure 2. The input image block is first flattened and then passed sequentially through four gradually narrowing fully connected layers. Each layer is activated by the ReLU function, and the final two-node output passes through a Softmax layer to obtain the probabilities of the corresponding classification labels. The mini-batch size used for training is 32, and Adam is adopted to optimize the network for 600 epochs on the training set.
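The architecture and training settings described here can be sketched with the Keras API as follows; the hidden-layer widths are not specified in the paper, so the values below are illustrative assumptions.

```python
from tensorflow import keras

def build_block_classifier(b=8, hidden=(128, 64, 32)):
    """Flatten a b x b x 3 block, pass it through narrowing ReLU layers, end in a 2-way softmax."""
    model = keras.Sequential([keras.layers.Flatten(input_shape=(b, b, 3))])
    for units in hidden:                                   # gradually narrowed fully connected layers
        model.add(keras.layers.Dense(units, activation="relu"))
    model.add(keras.layers.Dense(2, activation="softmax"))  # probability of message bit 0 / 1
    model.compile(optimizer="adam",                        # Adam, as stated in the paper
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training as described: mini-batch size 32, 600 epochs over the (block, bit) pairs,
# e.g., using the build_training_pairs sketch from Section 2:
# blocks, bits = build_training_pairs(images, messages, M, b=8)
# model = build_block_classifier()
# model.fit(blocks, bits, batch_size=32, epochs=600)
```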
Based on the analysis and discussion in Section 3, this part mainly shows the general resistance of the proposed scheme to various attacks, including superposed Gaussian noise with zero mean and standard deviation ranging from 5% to 30% of the maximum pixel value 255, JPEG compression with quality factors 70, 50, 30 and 10, resizing, and various filtering attacks such as Gaussian filtering and median filtering (some of the attacked sample images are illustrated in Figure 5). To further explore the capacity of our network, 10, 20, 30, 40 and 90 images are registered to five neural networks, respectively, in the following experiments, and we then evaluate each network's extraction accuracy under different attacks. Note that each image corresponds to an individual 64-bit copyright message, so we are actually testing the networks' capability to represent 640, 1280, 1920, 2560 and 5760 bits of copyright messages. Throughout, we take the ratio of the number of correctly extracted message bits to the total number of bits as the extraction accuracy.
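The extraction accuracy reported in the tables can be computed as in the following sketch, reusing the hypothetical `extract_message` helper from Section 2:

```python
import numpy as np

def extraction_accuracy(model, attacked_images, messages, locations_per_image, M, b=8):
    """Ratio of correctly extracted message bits to the total number of registered bits."""
    correct, total = 0, 0
    for image, message, locations in zip(attacked_images, messages, locations_per_image):
        recovered = extract_message(model, image, locations, M, b)
        correct += int(np.sum(recovered == np.asarray(message)))
        total += len(message)
    return correct / total
```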
Table 2 shows the average extraction accuracy after the images were attacked with superposed additive Gaussian noise; a 64-bit copyright message is registered for each image. The noise is normally distributed with zero mean and standard deviation Std, which ranges from 0 to 76.5 in the experiment. At larger Std values, as the number of images participating in training increases, robustness shows a regular declining trend. Table 3 shows the robustness against JPEG compression: at lower compression quality factors, the extraction accuracy decreases only slightly, and even for quality factors not involved in training, the trained network still achieves a reasonably high extraction accuracy. Table 4 shows the extraction accuracy for three kinds of filtering attacks, namely mean value filtering, Gaussian filtering and median filtering. With the same filtering kernel size, the extraction accuracy under Gaussian filtering is relatively higher than under the other two, since the Gaussian filter distorts the image less than the mean and median filters. Table 5 shows the robustness against rescaling attacks. Extraction is almost unaffected when the rescaled size is larger than the normalization size, which is unsurprising. When the rescaled size is smaller than the normalization size, the extraction accuracy drops considerably at the 20% and 12.5% rescaling levels; still, expanding the training set can effectively mitigate this reduction.
Number of bits\Std. | 0(no attack) | 11.25(5%) | 22.5(10%) | 51(20%)_ | 76.5(30%) |
640 for 10 images | 1 | 1 | 0.984 | 0.972 | 0.911 |
1280 for 20 images | 0.989 | 0.989 | 0.984 | 0.950 | 0.884 |
1920 for 30 images | 0.997 | 0.997 | 0.992 | 0.950 | 0.887 |
2560 for 40 images | 0.998 | 0.997 | 0.991 | 0.953 | 0.889 |
5760 for 90 images | 0.988 | 0.986 | 0.976 | 0.916 | 0.831 |
*The underlined items are included in the attack set during the training process.
Number of bits\JPEG QF | 100(no attack) | 70 | 50 | 30_ | 10_ |
640 for 10 images | 1 | 1 | 1 | 1 | 0.984 |
1280 for 20 images | 1 | 1 | 0.990 | 0.990 | 0.978 |
1920 for 30 images | 0.997 | 0.997 | 0.997 | 0.997 | 0.990 |
2560 for 40 images | 0.997 | 0.997 | 0.997 | 0.997 | 0.987 |
5760 for 90 images | 0.988 | 0.976 | 0.976 | 0.976 | 0.951 |
Filter | Mean value filtering | Gaussian filtering | Median filtering | ||||
Kernel size | 7×7_ | 9×9 | 7×7_ | 9×9 | 11×11 | 3×3_ | 5×5 |
640/10 | 0.997 | 0.969 | 1 | 0.997 | 0.964 | 1 | 0.986 |
1280/20 | 0.977 | 0.930 | 1 | 0.989 | 0.962 | 1 | 0.982 |
1920/30 | 0.983 | 0.945 | 0.997 | 0.993 | 0.969 | 0.997 | 0.985 |
2560/40 | 0.962 | 0.933 | 0.995 | 0.977 | 0.932 | 0.996 | 0.972 |
5760/90 | 0.956 | 0.893 | 0.986 | 0.972 | 0.923 | 0.990 | 0.968 |
Rescaling rate | 50% | 25% | 20% | 12.5% |
Rescaled size | 256×256 | 128×128 | 100×100 | 64×64 |
640/10 | 1 | 1 | 0.989 | 0.850 |
1280/20 | 0.989 | 0.989 | 0.978 | 0.741 |
1920/30 | 0.997 | 0.997 | 0.984 | 0.739 |
2560/40 | 0.997 | 0.997 | 0.967 | 0.672 |
5760/90 | 0.988 | 0.988 | 0.949 | 0.672 |
This paper proposes a deep neural network-based scheme for large-scale image copyright protection. Instead of modifying the original image to embed the copyright message, as traditional watermarking systems do, the proposed scheme trains a neural network to register multiple images; when copyright verification is required, the network reproduces the copyright message of each image by classifying its image blocks into message bits. With the error tolerance of the neural network and a preventive training strategy, the proposed scheme is remarkably robust to many attacks, including additive noise, JPEG compression, filtering and resizing. Moreover, as the experimental results show, the scheme is especially appropriate for verifying the copyright of multiple images. Lastly, the preventive method can be tailored to specific attacks in practice, so stronger robustness can be obtained by expanding the preventive training set. At present, our scheme is based on a single network structure, which clearly limits its robustness and capacity; further improving its performance with alternative network structures will be a promising direction for future work.
This work was supported in part by the National Natural Science Foundation of China (No. 61772549, 61872448, U1736214, 61602508 and 61601517) and the National Key R & D Program of China (No. 2016YFB0801303 and 2016QY01W0105).
The authors declare no conflict of interest.
[1] L. F. Cabeza, L. Rincon, V. Vilarino, G. Perez, A. Castell, Life cycle assessment (LCA) and life cycle energy analysis (LCEA) of buildings and the building sector: a review, Renewable Sustainable Energy Rev., 29 (2014), 394–416. https://doi.org/10.1016/j.rser.2013.08.037
[2] M. Y. Cheng, H. C. Tsai, E. Sudjono, Conceptual cost estimates using evolutionary fuzzy hybrid neural network for projects in construction industry, Expert Syst. Appl., 37 (2010), 4224–4231. https://doi.org/10.1016/j.eswa.2009.11.080
[3] A. Mahdavian, A. Shojaei, M. Salem, J. S. Yuan, A. A. Oloufa, Data-driven predictive modeling of highway construction cost items, J. Constr. Eng. Manage., 147 (2021), 04020180. https://doi.org/10.1061/(ASCE)CO.1943-7862.0001991
[4] A. Mahmoodzadeh, H. R. Nejati, M. Mohammadi, Optimized machine learning modelling for predicting the construction cost and duration of tunnelling projects, Autom. Constr., 139 (2022), 104305. https://doi.org/10.1016/j.autcon.2022.104305
[5] M. Juszczyk, On the search of models for early cost estimates of bridges: an SVM-based approach, Buildings, 10 (2020), 2. https://doi.org/10.3390/buildings10010002
[6] S. Kim, C. Y. Choi, M. Shahandashti, K. R. Ryu, Improving accuracy in predicting city-level construction cost indices by combining linear ARIMA and nonlinear ANNs, J. Manage. Eng., 38 (2022), 04021093. https://doi.org/10.1061/(ASCE)ME.1943-5479.0001008
[7] L. Breiman, Random forests, Mach. Learn., 45 (2001), 5–32. https://doi.org/10.1023/A:1010933404324
[8] C. Pierdzioch, M. Risse, Forecasting precious metal returns with multivariate random forests, Empirical Econ., 58 (2020), 1167–1184. https://doi.org/10.1007/s00181-018-1558-9
[9] J. Yoon, Forecasting of real GDP growth using machine learning models: gradient boosting and random forest approach, Comput. Econ., 57 (2021), 247–265. https://doi.org/10.1007/s10614-020-10054-w
[10] S. Dang, L. Peng, J. M. Zhao, J. J. Li, Z. M. Kong, A quantile regression random forest-based short-term load probabilistic forecasting method, Energies, 15 (2022), 663. https://doi.org/10.3390/en15020663
[11] G. Tang, B. Pang, T. Tian, C. Zhou, Fault diagnosis of rolling bearings based on improved fast spectral correlation and optimized random forest, Appl. Sci., 8 (2018), 1859. https://doi.org/10.3390/app8101859
[12] H. Latifi, B. Koch, Evaluation of most similar neighbour and random forest methods for imputing forest inventory variables using data from target and auxiliary stands, Int. J. Remote Sens., 33 (2012), 6668–6694. https://doi.org/10.1080/01431161.2012.693969
[13] X. B. Meng, X. Z. Gao, L. Lu, Y. Liu, H. Z. Zhang, A new bio-inspired optimisation algorithm: Bird Swarm Algorithm, J. Exp. Theor. Artif. Intell., 28 (2016), 673–687. https://doi.org/10.1080/0952813X.2015.1042530
[14] C. Zhang, S. Yu, G. Li, Y. Xu, The recognition method of MQAM signals based on BP neural network and Bird Swarm Algorithm, IEEE Access, 9 (2021), 36078–36086. https://doi.org/10.1109/ACCESS.2021.3061585
[15] Y. Yu, S. Liang, B. Samali, T. N. Nguyen, C. X. Zhai, J. C. Li, et al., Torsional capacity evaluation of RC beams using an improved bird swarm algorithm optimised 2D convolutional neural network, Eng. Struct., 273 (2022), 115066. https://doi.org/10.1016/j.engstruct.2022.115066
[16] J. H. Huan, D. H. Ma, W. Wang, X. D. Guo, Z. Y. Wang, L. C. Wu, Safety-state evaluation model based on structural entropy weight-matter element extension method for ancient timber architecture, Adv. Struct. Eng., 23 (2020), 1087–1097. https://doi.org/10.1177/1369433219886085
[17] Y. Elfahham, Estimation and prediction of construction cost index using neural networks, time series, and regression, Alexandria Eng. J., 58 (2019), 499–506. https://doi.org/10.1016/j.aej.2019.05.002
[18] Y. Cao, B. Ashuri, Predicting the volatility of highway construction cost index using long short-term memory, J. Manage. Eng., 36 (2020), 1–9. https://doi.org/10.1061/(ASCE)ME.1943-5479.0000784
[19] S. Mao, F. Xiao, A novel method for forecasting construction cost index based on complex network, Physica A, 527 (2019), 121306. https://doi.org/10.1016/j.physa.2019.121306
[20] E. Kaya, A comprehensive comparison of the performance of metaheuristic algorithms in neural network training for nonlinear system identification, Mathematics, 10 (2022), 1611. https://doi.org/10.3390/math10091611
[21] S. Roh, S. Tae, R. Kim, S. Park, Probabilistic analysis of major construction materials in the life cycle embodied environmental cost of Korean apartment buildings, Sustainability, 11 (2019), 846. https://doi.org/10.3390/su11030846
[22] Y. Liu, X. Y. Wang, H. Li, A multi-object grey target approach for group decision, J. Grey Syst., 31 (2019), 60–72.
[23] T. Moon, D. H. Shin, Forecasting construction cost index using interrupted time-series, KSCE J. Civ. Eng., 22 (2018), 1626–1633. https://doi.org/10.1007/s12205-017-0452-x
[24] R. Slade, A. Bauen, Micro-algae cultivation for biofuels: cost, energy balance, environmental impacts and future prospects, Biomass Bioenergy, 53 (2013), 29–38. https://doi.org/10.1016/j.biombioe.2012.12.019
[25] J. Hong, G. Q. Shen, Z. Li, B. Y. Zhang, W. Q. Zhang, Barriers to promoting prefabricated construction in China: a cost-benefit analysis, J. Cleaner Prod., 172 (2018), 649–660. https://doi.org/10.1016/j.jclepro.2017.10.171
[26] L. Liu, D. Liu, H. Wu, J. W. Wang, Study on foundation pit construction cost prediction based on the stacked denoising autoencoder, Math. Probl. Eng., 2020 (2020), 8824388. https://doi.org/10.1155/2020/8824388
[27] S. Hwang, Time series models for forecasting construction costs using time series indexes, J. Constr. Eng. Manage., 137 (2011), 656–662. https://doi.org/10.1061/(ASCE)CO.1943-7862.0000350
[28] S. Punia, K. Nikolopoulos, S. P. Singh, J. K. Madaan, K. Litsiou, Deep learning with long short-term memory networks and random forests for demand forecasting in multi-channel retail, Int. J. Prod. Res., 58 (2020), 4964–4979. https://doi.org/10.1080/00207543.2020.1735666
[29] Z. Zou, Y. Yang, Z. Fan, H. M. Tang, M. Zou, X. L. Hu, et al., Suitability of data preprocessing methods for landslide displacement forecasting, Stochastic Environ. Res. Risk Assess., 34 (2020), 1105–1119. https://doi.org/10.1007/s00477-020-01824-x
[30] L. Endlova, V. Vrbovsky, Z. Navratilova, L. Tenkl, The use of near-infrared spectroscopy in rapeseed breeding programs, Chem. Listy, 111 (2017), 524–530. Available from: https://hero.epa.gov/hero/index.cfm/reference/details/reference_id/5214159.
[31] M. A. Bujang, E. D. Omar, N. A. Baharum, A review on sample size determination for Cronbach's alpha test: a simple guide for researchers, Malays. J. Med. Sci., 25 (2018), 85–99. https://doi.org/10.21315/mjms2018.25.6.9
[32] Y. Yu, B. Samali, M. Rashidi, M. Mohammadi, T. N. Nguyen, G. Zhang, Vision-based concrete crack detection using a hybrid framework considering noise effect, J. Build. Eng., 61 (2022), 105246. https://doi.org/10.1016/j.jobe.2022.105246
[33] T. Mitsui, S. Okuyama, Measurement data selection using multiple regression analysis for precise quantitative analysis, Bunseki Kagaku, 60 (2011), 163–170. https://doi.org/10.2116/bunsekikagaku.60.163
[34] M. Skitmore, D. H. Picken, The accuracy of pre-tender building price forecasts: an analysis of USA data, in Information and Communication in Construction Procurement CIB W92 Procurement System Symposium, (2000), 595–606. Available from: https://eprints.qut.edu.au/9460/.
[35] T. Jin, Y. Jiang, B. Mao, X. Wang, B. Lu, J. Qian, et al., Multi-center verification of the influence of data ratio of training sets on test results of an AI system for detecting early gastric cancer based on the YOLO-v4 algorithm, Front. Oncol., 12 (2022), 953090. https://doi.org/10.3389/fonc.2022.953090
[36] P. An, X. Li, P. Qin, Y. J. Ye, J. Y. Zhang, H. Y. Guo, et al., Predicting model of mild and severe types of COVID-19 patients using Thymus CT radiomics model: a preliminary study, Math. Biosci. Eng., 20 (2023), 6612–6629. https://doi.org/10.3934/mbe.2023284
[37] C. Benard, S. Da Veiga, E. Scornet, Mean decrease accuracy for random forests: inconsistency, and a practical solution via the Sobol-MDA, Biometrika, 109 (2022), 881–900. https://doi.org/10.1093/biomet/asac017
[38] D. Karamichailidou, V. Kaloutsa, A. Alexandridis, Wind turbine power curve modeling using radial basis function neural networks and tabu search, Renewable Energy, 163 (2021), 2137–2152. https://doi.org/10.1016/j.renene.2020.10.020
[39] K. M. El-Naggar, M. R. AlRashidi, M. F. AlHajri, A. K. Al-Othman, Simulated annealing algorithm for photovoltaic parameters identification, Sol. Energy, 86 (2012), 266–274. https://doi.org/10.1016/j.solener.2011.09.032
[40] S. Gao, Y. Wang, J. Cheng, Y. Inazumi, Z. Tang, Ant colony optimization with clustering for solving the dynamic location routing problem, Appl. Math. Comput., 285 (2016), 149–173. https://doi.org/10.1016/j.amc.2016.03.035
[41] L. Tang, Y. Dong, J. Liu, Differential evolution with an individual-dependent mechanism, IEEE Trans. Evol. Comput., 19 (2015), 560–574. https://doi.org/10.1109/TEVC.2014.2360890
[42] Y. Yu, M. Rashidi, B. Samali, M. Mohammadi, T. N. Nguyen, X. X. Zhou, Crack detection of concrete structures using deep convolutional neural networks optimized by enhanced chicken swarm algorithm, Struct. Health Monit., 21 (2022), 2244–2263. https://doi.org/10.1177/14759217211053546
[43] C. Zhang, X. Wang, S. Chen, H. Li, X. X. Wu, X. Zhang, A modified random forest based on kappa measure and binary artificial bee colony algorithm, IEEE Access, 9 (2021), 117679–117690. https://doi.org/10.1109/ACCESS.2021.3105796
[44] M. Reif, F. Shafait, A. Dengel, Meta-learning for evolutionary parameter optimization of classifiers, Mach. Learn., 87 (2012), 357–380. https://doi.org/10.1007/s10994-012-5286-7
[45] Y. Dong, J. Du, B. Li, Research on discrete wolf pack algorithm of multiple choice knapsack problem, Transducer Microsyst. Technol., 34 (2015), 21–23.
[46] H. Naseri, H. Jahanbakhsh, A. Foomajd, N. Galustanian, M. M. Karimi, E. O. D. Waygood, A newly developed hybrid method on pavement maintenance and rehabilitation optimization applying Whale Optimization Algorithm and random forest regression, Int. J. Pavement Eng., 2022 (2022). https://doi.org/10.1080/10298436.2022.2147672
[47] D. Karaboga, B. Gorkemli, C. Ozturk, N. Karaboga, A comprehensive survey: artificial bee colony (ABC) algorithm and applications, Artif. Intell. Rev., 42 (2014), 21–57. https://doi.org/10.1007/s10462-012-9328-0
[48] Y. Yu, J. Li, J. Li, Y. Xia, Z. H. Ding, B. Samali, Automated damage diagnosis of concrete jack arch beam using optimized deep stacked autoencoders and multi-sensor fusion, Dev. Built Environ., 14 (2023), 100128. https://doi.org/10.1016/j.dibe.2023.100128
[49] G. Huang, G. B. Huang, S. Song, K. Y. You, Trends in extreme learning machines: a review, Neural Networks, 61 (2015), 32–48. https://doi.org/10.1016/j.neunet.2014.10.001
[50] M. Kayri, I. Kayri, M. T. Gencoglu, The performance comparison of multiple linear regression, random forest and artificial neural network by using photovoltaic and atmospheric data, in 2017 14th International Conference on Engineering of Modern Electric Systems (EMES), (2017), 1–4. https://doi.org/10.1109/EMES.2017.7980368
[51] Y. Wang, A. W. Kandeal, A. Swidan, S. W. Sharshir, G. B. Abdelaziz, M. A. Halim, et al., Prediction of tubular solar still performance by machine learning integrated with Bayesian optimization algorithm, Appl. Therm. Eng., 184 (2021), 116233. https://doi.org/10.1016/j.applthermaleng.2020.116233
[52] A. B. Owen, Better estimation of small Sobol' sensitivity indices, ACM Trans. Model. Comput. Simul., 23 (2013), 1–17. https://doi.org/10.1145/2457459.2457460
[53] S. Kucherenko, O. V. Klymenko, N. Shah, Sobol' indices for problems defined in non-rectangular domains, Reliab. Eng. Syst. Saf., 167 (2017), 218–231. https://doi.org/10.1016/j.ress.2017.06.001