
[1] Guozhen Dong. A pixel-wise framework based on convolutional neural network for surface defect detection. Mathematical Biosciences and Engineering, 2022, 19(9): 8786-8803. doi: 10.3934/mbe.2022408
[2] Xuguo Yan, Liang Gao. A feature extraction and classification algorithm based on improved sparse auto-encoder for round steel surface defects. Mathematical Biosciences and Engineering, 2020, 17(5): 5369-5394. doi: 10.3934/mbe.2020290
[3] Xiaochen Liu, Weidong He, Yinghui Zhang, Shixuan Yao, Ze Cui. Effect of dual-convolutional neural network model fusion for Aluminum profile surface defects classification and recognition. Mathematical Biosciences and Engineering, 2022, 19(1): 997-1025. doi: 10.3934/mbe.2022046
[4] Hongxia Ni, Minzhen Wang, Liying Zhao. An improved Faster R-CNN for defect recognition of key components of transmission line. Mathematical Biosciences and Engineering, 2021, 18(4): 4679-4695. doi: 10.3934/mbe.2021237
[5] Lili Wang, Chunhe Song, Guangxi Wan, Shijie Cui. A surface defect detection method for steel pipe based on improved YOLO. Mathematical Biosciences and Engineering, 2024, 21(2): 3016-3036. doi: 10.3934/mbe.2024134
[6] Zhigao Zeng, Cheng Huang, Wenqiu Zhu, Zhiqiang Wen, Xinpan Yuan. Flower image classification based on an improved lightweight neural network with multi-scale feature fusion and attention mechanism. Mathematical Biosciences and Engineering, 2023, 20(8): 13900-13920. doi: 10.3934/mbe.2023619
[7] Yongmei Ren, Xiaohu Wang, Jie Yang. Maritime ship recognition based on convolutional neural network and linear weighted decision fusion for multimodal images. Mathematical Biosciences and Engineering, 2023, 20(10): 18545-18565. doi: 10.3934/mbe.2023823
[8] Yinghong Xie, Biao Yin, Xiaowei Han, Yan Hao. Improved YOLOv7-based steel surface defect detection algorithm. Mathematical Biosciences and Engineering, 2024, 21(1): 346-368. doi: 10.3934/mbe.2024016
[9] Jiaming Ding, Peigang Jiao, Kangning Li, Weibo Du. Road surface crack detection based on improved YOLOv5s. Mathematical Biosciences and Engineering, 2024, 21(3): 4269-4285. doi: 10.3934/mbe.2024188
[10] Naigong Yu, Hongzheng Li, Qiao Xu. A full-flow inspection method based on machine vision to detect wafer surface defects. Mathematical Biosciences and Engineering, 2023, 20(7): 11821-11846. doi: 10.3934/mbe.2023526
Surface defects are an important factor affecting the quality of steel plates and strips. More than 60% of quality objections raised by users of steel plate and strip products are prompted by surface defects, causing enormous economic losses to steel companies [1]. With the technological advancement of optical instruments, image processing-based steel plate surface defect recognition has become a research focus of scholars worldwide [2,3,4]. High-quality images of steel plates can be captured by coordinating the camera, light source, and laser line, and the surface defects can then be detected and classified by a custom algorithm. An automatic surface defect detection system can perform online detection of surface defects and provide timely feedback; detection and timely feedback are key to improving the surface quality of steel plates and strips. With increasing production line speeds and increasingly stringent user requirements on product quality, it is urgent to improve the accuracy, speed, and efficiency of the defect detection and recognition algorithms in surface inspection systems.
The machine vision detection algorithm consists of two steps: Feature extraction and classification. Multiple sets of features from different aspects, such as gray level, shape, and texture, can be extracted from the defect image and are conducive to the correct classification of defects. However, too many features affect the complexity and performance of the classifier. To achieve the best possible classification result without losing features, feature selection methods are generally used to process the features. Commonly used feature extraction algorithms include the gray-level cooccurrence matrix (GLCM) [5] and scale-invariant feature transformation (SIFT) [6]. With these methods, a high-dimensional space is mapped to a low-dimensional space to generate a linear combination of the original features and to reduce the feature dimension. Defect classification falls under the scope of pattern recognition. Commonly used classification algorithms include support vector machines [7], naïve Bayes [8], K-nearest neighbors [9], and random forests (RFs) [10]. Relevant classification algorithms have received increasing attention and have achieved good results in practice. However, the abovementioned traditional detection methods have poor generalization ability and rely on the personal experience of researchers to design the feature engineering. Hence, it is very difficult to apply these methods in large-scale industrial production.
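As an illustration of this hand-crafted pipeline, the sketch below computes a few GLCM texture features with NumPy. The quantization level, the single pixel offset, and the `glcm_features` helper are illustrative choices, not the implementations used in the cited works.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix (GLCM) features for one offset.

    Quantizes the image to `levels` gray levels, counts co-occurring
    pixel pairs at offset (dy, dx), then derives contrast, energy, and
    homogeneity -- three classic Haralick-style texture features.
    """
    # Quantize to a small number of gray levels to keep the matrix dense.
    q = (img.astype(np.float64) / 256.0 * levels).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels), dtype=np.float64)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm /= glcm.sum()  # normalize to a joint probability table
    i, j = np.indices((levels, levels))
    contrast = np.sum(glcm * (i - j) ** 2)
    energy = np.sum(glcm ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

# A flat patch has zero contrast and maximal energy; a checkerboard does not.
flat = np.full((32, 32), 128, dtype=np.uint8)
checker = ((np.indices((32, 32)).sum(axis=0) % 2) * 255).astype(np.uint8)
f_flat, f_checker = glcm_features(flat), glcm_features(checker)
```

Feature vectors like these would then be fed to one of the classifiers mentioned above (e.g., an SVM or random forest).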
In recent years, deep convolutional neural networks (CNNs) [11,12,13] have sparked a resurgence of visual research because of their ability to learn image features autonomously. Inspired by the biological visual perception mechanism, CNNs substantially reduce the number of training parameters through weight sharing and are trained mainly by supervised learning with backpropagation. However, in practical applications, specific parameters such as the weights and biases cannot be set directly; only the hyperparameters can be adjusted to control the fitting, after which the weight and bias parameters needed for recognition are generated during training. This black-box character has been criticized by researchers. In steel plate surface defect recognition in particular, the image data have low contrast, and CNNs learn image features through randomly initialized convolution kernels. With such kernels of uncertain parameters, important features may fail to be mined, which easily leads to misjudgment and hinders further improvement of the recognition rate.
To detect cracks in nuclear power plant components, Chen and Jahanshahi [14] proposed a deep learning framework based on a CNN and a naïve Bayes data fusion scheme to analyze single video frames for crack detection. The authors presented a new data fusion scheme to aggregate the information extracted from each video frame, enhancing the overall performance and robustness of the system. He [15] proposed a method to extract the potential features of steel plate defects by fusing multiple convolutional network feature layers. However, this method relies on the network to perform the fusion, the learning process involves a certain randomness, and a huge amount of data is needed. Wang et al. [16] proposed an improved RF algorithm with optimal multi-feature-set fusion for distributed defect recognition. This algorithm fuses the histogram of oriented gradients (HOG) feature set and the GLCM feature set by using a multi-feature-set fusion factor to change the number of decision trees corresponding to each feature set in the RF algorithm. In this paper, a feature fusion preprocessing method is designed. This method can achieve a high recognition rate for six categories of defects by using a small amount of data, help the CNN mine the deeper features of steel plate defect images, and give full play to the learning advantage of neural networks.
Since the matrix and the defects of steel plates are mostly gray and black, the overall contrast between the defects and their background is extremely low. Even if an advanced color camera is used to collect images in red-green-blue format, the color features of steel plate defects are not obvious. Therefore, the images acquired by industrial cameras are mostly single-channel grayscale images. In this study, specific contour and texture detection operators are combined to process the original grayscale image acquired by an industrial camera: the image features are extracted, fused with the original image based on the channel model, and finally converted back to a single channel according to a certain weight ratio. This processing not only increases the feature dimension of the image, making it more easily activated by the network for feature extraction, but also preserves the same pixel level as the original grayscale image, so no extra computational cost is wasted. Additionally, this artificial guidance helps the CNN learn image features purposefully and avoids the drawbacks of black-box algorithms, thereby improving the recognition accuracy of low-contrast images.
The main contributions of this study are as follows:
(1) This study promotes the purposeful learning of a CNN, improves the classification capability of the network, and proposes a multichannel fusion strategy based on the combination of feature operators. This strategy not only increases the feature dimension but also retains the feature information of the original image.
(2) This study proposes converting multiple channels into a single channel for network training according to a certain weight ratio to reduce computational cost. Such a conversion not only ensures the classification accuracy but also does not affect the computational speed.
(3) This study obtains the optimal weight ratio of fusion and conversion by using the traversal method, which involves comparing the impacts of different fusion and conversion schemes on the classification results.
When a CNN extracts image features, the shallow convolution kernels mainly extract edge features, and the deep convolution kernels mainly extract high-level abstract features, such as texture. These features are then integrated through the fully connected layer to perform the classification. A convolution kernel with a specific template can extract the directional features of an image, e.g., the Sobel [17], Laplace [18], Prewitt [19], and Roberts [20] operators; the texture features of image data can be encoded by the contrast and integration of pixels, as with the local binary pattern (LBP) [21]. In this paper, feature extraction operators are used to extract the edge and texture information of an image, making it easier for the shallow convolution kernels to learn edge features and for the deep convolution kernels to learn texture features. Additionally, the influence of different fusion schemes on the steel plate surface defect recognition rate is investigated.
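To make the operator idea concrete, the sketch below applies the standard 3×3 Sobel kernels by direct convolution and combines the two responses into a gradient magnitude. The `conv2d` and `sobel_magnitude` helpers are illustrative, not the production implementation.

```python
import numpy as np

# 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def conv2d(img, kernel):
    """Valid-mode 2-D correlation with a 3x3 kernel (no padding)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(img[y:y + 3, x:x + 3] * kernel)
    return out

def sobel_magnitude(img):
    """Edge strength as the gradient magnitude of the two Sobel responses."""
    gx = conv2d(img.astype(np.float64), SOBEL_X)
    gy = conv2d(img.astype(np.float64), SOBEL_Y)
    return np.hypot(gx, gy)

# A vertical step edge produces a strong response only near the edge column.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
mag = sobel_magnitude(img)
```

The Prewitt and Roberts operators differ only in their kernel templates, so the same `conv2d` routine applies.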
Currently, five feature extraction operators are commonly used: Sobel, Laplace, Prewitt, Roberts, and LBP. In this paper, we choose pairs of these operators to process the original grayscale image, and the resulting feature matrices are fused with the original grayscale matrix: the original grayscale matrix is placed in the middle channel, and the images processed by the feature operators are randomly placed on the remaining two channels. Fifteen combination schemes of the same or different feature operators form a three-channel color-effect image. The fused three-channel data burden the model with the calculation of two additional channels, whereas the model for the original grayscale image requires the calculation of only a single channel. To mitigate the increased computational load after multichannel fusion, we use a traversal method to explore the optimal weight ratio and convert the three-channel image data into single-channel image data according to that ratio. The flowchart is shown in Figure 1.
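The channel fusion step itself reduces to stacking three equally sized matrices. A minimal sketch, with a hypothetical `fuse_channels` helper and random placeholder data standing in for real operator responses:

```python
import numpy as np

def fuse_channels(op_a, orig, op_b):
    """Stack two operator responses around the original grayscale image.

    The original image sits in the middle channel and the operator-processed
    images occupy the outer two, giving a pseudo-color H x W x 3 array --
    one of the fusion schemes described above (e.g. Sobel:img:Laplace).
    """
    assert op_a.shape == orig.shape == op_b.shape
    return np.stack([op_a, orig, op_b], axis=-1)

gray = np.random.randint(0, 256, (200, 200)).astype(np.uint8)
# Placeholders: in practice these would be Sobel/Laplace/etc. responses.
fused = fuse_channels(gray, gray, gray)
```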
In this study, the same CNN framework and hyperparameters are used for the training. The sensitivity of the feature fusion preprocessing model at the defect area is verified using a heat map and a feature activation map of the visual learning area. The accuracy (acc)-loss curve is used to observe the convergence of the model before and after feature preprocessing. The confusion matrix is used to assess the accuracy of network recognition before and after feature fusion preprocessing. The goal of the experiment is to find the optimal fusion scheme and conversion weight.
The NEU steel surface defect database created by Song et al. [22] of Northeastern University, China, was selected for use in this study. It contains six categories of defects, namely, crazing, inclusion, patches, pitted, rolled-in, and scratches, with 300 samples per category, resulting in a total of 1800 samples. The image resolution is 200 × 200 pixels. Figure 2 shows the defect samples.
Defect recognition based on the NEU steel plate surface defect database [23] faces three difficulties: (1) Defects of different classes have similar features; (2) illumination and material changes can affect the gray value of the acquired defect image; and (3) various types of noise are present. To prevent overfitting, data augmentation is used in this study to expand the existing database to 5000 samples, increasing the data volume of the experimental set and improving the generalization ability of the model. The experimental set is divided into three subsets: Training set, validation set and test set, distributed according to the ratio of 7:2:1, i.e., 3500 training samples, 1000 validation samples and 500 test samples. To ensure the consistency of the samples, the number of samples for each category in each subset is kept roughly equal.
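The 7:2:1 split above can be sketched as a shuffled index partition; the `split_indices` helper and fixed seed are illustrative choices, not the paper's exact procedure (which also balances categories):

```python
import numpy as np

def split_indices(n, ratios=(0.7, 0.2, 0.1), seed=0):
    """Shuffle n sample indices and split them 7:2:1 into train/val/test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(5000)  # 3500 / 1000 / 500 samples
```

A per-category (stratified) shuffle would be applied in practice to keep each class roughly equally represented in each subset.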
To fairly compare the results before and after feature fusion processing, identical hyperparameters are used: Adam as the optimizer, ReLU as the activation function, random initialization of the weights, and a learning rate decay strategy with an initial learning rate of 0.00001, 100 epochs, and a batch size of 32.
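The paper states only the initial learning rate and that a decay strategy is used; as one common possibility, an exponential schedule could look like the sketch below, where the decay form and the 0.96/100 constants are assumptions for illustration:

```python
def decayed_lr(step, initial_lr=1e-5, decay_rate=0.96, decay_steps=100):
    """Exponential learning-rate decay: lr = lr0 * rate^(step / steps).

    initial_lr matches the paper's 0.00001; decay_rate and decay_steps
    are illustrative assumptions, not values from the paper.
    """
    return initial_lr * decay_rate ** (step / decay_steps)

lrs = [decayed_lr(s) for s in (0, 100, 200)]  # monotonically decreasing
```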
All experiments were run on a graphics workstation with two 10-core Intel Xeon E5-2640 v4 central processing units (CPUs), an NVIDIA GTX 1080 Ti graphics processing unit (GPU), and 128 GB of memory. The development environment was Python on Windows 10, with Keras as the learning framework.
Five classical models, LeNet-5 [24], AlexNet [25], VGG16 [26], Inception V3 [27], and ResNet50 [28], are considered as candidate main frameworks for the CNN. Taking the original image data as samples, the recognition accuracies of these classical network models are compared using identical hyperparameters; the results are shown in Figure 3. VGG16 has the highest recognition rate, up to 95.55%, making it the most suitable for learning the features of the NEU data. Therefore, VGG16 is chosen as the main framework.
To study the influence of fusing the same operator with the original image, the image processed by each operator is combined with the original grayscale image: the original grayscale data are placed in the middle channel, and the operator-processed grayscale matrix is placed in the other two channels. The result is five different fusion schemes, as shown in Figure 4. Five sets of experiments (Nos. 1 to 5) are carried out for each fusion scheme using the above hyperparameters for training, for a total of 25 sets of experiments. The results are shown in Table 1. The Sobel:img:Sobel fusion scheme has the highest average accuracy, reaching 96.22%.
| Fusion scheme | No. 1 | No. 2 | No. 3 | No. 4 | No. 5 | Average accuracy (%) |
|---|---|---|---|---|---|---|
| Sobel:img:Sobel | 97.24 | 96.54 | 94.27 | 93.28 | 96.77 | 96.22 |
| Roberts:img:Roberts | 95.34 | 95.49 | 94.65 | 96.97 | 95.30 | 95.55 |
| Prewitt:img:Prewitt | 95.25 | 93.94 | 95.21 | 94.24 | 94.71 | 94.67 |
| Laplace:img:Laplace | 88.13 | 87.12 | 86.44 | 87.63 | 87.08 | 87.28 |
| LBP:img:LBP | 92.11 | 92.48 | 92.26 | 93.32 | 94.43 | 92.92 |
To study the influence of the feature operator combination schemes on the recognition accuracy of the model, the image processed by each pair of different operators is combined with the original grayscale image: the original grayscale data are placed in the middle channel, while the grayscale matrices processed by the two operators are randomly placed in the other two channels. This channel fusion yields ten combination schemes, as shown in Figure 5. The fused data are all in color; the inclusion areas of images fused with edge operators tend to be darker, while the inclusion area and the steel plate background texture are more prominent in the image data processed by the LBP operator. Using the aforementioned model and hyperparameters, five sets of experiments (Nos. 1 to 5) were carried out for each fusion scheme, for a total of 50 experiments. The accuracy of the fusion results obtained using different operators is shown in Table 2.
| Fusion scheme | No. 1 | No. 2 | No. 3 | No. 4 | No. 5 | Average accuracy (%) |
|---|---|---|---|---|---|---|
| Sobel:img:Roberts | 98.88 | 97.36 | 98.05 | 99.09 | 97.97 | 98.27 |
| Sobel:img:Laplace | 97.51 | 98.14 | 98.98 | 98.91 | 99.45 | 98.61 |
| Sobel:img:Prewitt | 98.51 | 97.04 | 98.17 | 99.01 | 97.27 | 98.00 |
| Sobel:img:LBP | 76.11 | 76.25 | 75.22 | 74.89 | 75.58 | 75.61 |
| Roberts:img:Laplace | 98.02 | 96.94 | 96.01 | 98.57 | 99.61 | 97.83 |
| Roberts:img:Prewitt | 95.17 | 96.37 | 95.25 | 96.44 | 97.32 | 96.11 |
| Roberts:img:LBP | 81.21 | 81.37 | 83.27 | 81.65 | 80.25 | 81.55 |
| Laplace:img:Prewitt | 97.41 | 97.11 | 97.02 | 96.51 | 95.55 | 96.72 |
| Laplace:img:LBP | 70.52 | 67.61 | 65.84 | 64.32 | 67.26 | 67.11 |
| Prewitt:img:LBP | 19.84 | 14.28 | 15.77 | 16.19 | 17.22 | 16.66 |
Table 2 shows that, among the combination schemes, the Sobel:img:Laplace fusion has the highest average accuracy, reaching 98.61%. Images fused with the LBP operator generally perform poorly: although the CNN is sensitive to the visual feature information of an image, it is not good at analyzing the intrinsic meaning of the abstract texture features encoded by LBP.
Processing the fused three-channel data substantially increases the computational cost, whereas the original grayscale model processes only single-channel data. To avoid this drawback of multichannel fusion, the three-channel data are converted in this study into a single-channel grayscale image according to a certain weight ratio, as shown in Eq (1):
Final image = α · (1st channel) + β · (2nd channel) + γ · (3rd channel)    (1)
where α, β, and γ are the weight coefficients of the three channels and sum to 1. The traversal method is used with a step size of 0.1 and each coefficient no smaller than 0.1, yielding a total of 36 combinations. Based on the original model, the average accuracy of each combination is tested, as shown in Figure 6.
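Eq (1) and the weight traversal can be sketched as follows; the `to_single_channel` helper and the constant-valued toy image are illustrative:

```python
import numpy as np

def to_single_channel(fused, weights=(0.2, 0.6, 0.2)):
    """Collapse an H x W x 3 fused image to one channel via Eq (1).

    weights = (alpha, beta, gamma) must sum to 1; the middle channel
    holds the original grayscale image, so beta controls how much of
    the original information survives the conversion.
    """
    alpha, beta, gamma = weights
    assert abs(alpha + beta + gamma - 1.0) < 1e-9
    return alpha * fused[..., 0] + beta * fused[..., 1] + gamma * fused[..., 2]

# Toy image with constant channels: 0.2*100 + 0.6*200 + 0.2*50 = 150.
fused = np.dstack([np.full((4, 4), 100.0),
                   np.full((4, 4), 200.0),
                   np.full((4, 4), 50.0)])
single = to_single_channel(fused)

# Traversing the weight simplex with step 0.1 and every weight >= 0.1
# gives the 36 candidate combinations mentioned in the text.
combos = [(a / 10, b / 10, (10 - a - b) / 10)
          for a in range(1, 9) for b in range(1, 10 - a)]
```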
The results show that when α, β, and γ are 0.2, 0.6, and 0.2, respectively, the computational cost is the same as with the original data, while the accuracy is 1.16% higher than with three-channel fusion. Figure 7 compares the single-channel image converted using the 0.2:0.6:0.2 weight ratio with the original image. The figure shows that the defect edges in the Sobel:img:Sobel feature fusion image are more obvious. The weight of 0.6 assigned to the original grayscale image properly retains the information of the original image, which helps the convolution kernels learn their weights, and the weight of 0.2 assigned to each edge extraction operator highlights the defect features of the steel plate image, making them easier for the CNN to capture. The same computational cost as for the original data is retained, while the recognition rate reaches as high as 99.77%.
The feature activation maps of the first layer of VGG16 before and after feature fusion processing are visualized, and the learning results are analyzed. One image is randomly selected for processing (Figure 8). The yellow area indicates that the defect feature is activated and contains the black inclusion defects; hence, it can be determined that this feature produces the correct response. Figure 8 also shows that, in the activation map without feature fusion, 23 feature maps are correctly activated in the defect area, whereas after feature fusion, 31 feature maps are correctly activated there. These data indicate that, because the edge extraction operator captures the target defect area in advance and is fused with the original image, it enhances the edge pixel features of the defect area while ensuring that the original data are not lost. Consequently, it is easier for the convolution kernels to capture the defect.
In this study, the gradient-weighted class activation mapping (Grad-CAM) algorithm [29] is used to further validate the target area that the model finally learns. This algorithm displays the area the model attends to in the form of a heat map, making it easy to observe whether the final model has learned the features of the correct area. Given an input image, the algorithm obtains the output feature map of a convolution layer through the trained model, without modifying the original model structure, and uses the importance of each channel of this feature map for the target category as a weight to generate a spatial map of the activation intensity of the input image for that category; the heat map is generated from this spatial map. In this paper, this method is used to visualize the model before and after feature fusion. Figure 9 shows the learning result for the inclusion defect. The defect area of the steel plate image processed by feature fusion is red, i.e., a “high-temperature” zone that accurately locates the defect in the image, whereas the defect area of the original image data, not processed by the feature operators, is not well covered. The results show that the data processed by the feature operators have an increased pixel level in the defect area, so the area is more likely to be captured by the CNN. The heat map analysis further verifies that feature fusion preprocessing significantly improves the recognition ability of the CNN model.
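The core of Grad-CAM, once the channel weights are known, is a weighted sum of feature maps followed by a ReLU. A minimal NumPy sketch, where `grad_cam_map` and the toy activations are illustrative (real Grad-CAM derives the weights from gradients of the class score through the trained network):

```python
import numpy as np

def grad_cam_map(feature_maps, channel_weights):
    """Grad-CAM-style heat map from one conv layer's output (a sketch).

    feature_maps: (C, H, W) activations; channel_weights: (C,) importance
    scores (in real Grad-CAM, the class-score gradients averaged over
    each channel). The map is the ReLU of the weighted channel sum,
    rescaled to [0, 1] for display.
    """
    cam = np.tensordot(channel_weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)                                 # ReLU
    if cam.max() > 0:
        cam /= cam.max()                                       # scale to [0, 1]
    return cam

maps = np.zeros((2, 4, 4))
maps[0, 1:3, 1:3] = 1.0          # channel 0 fires on a "defect" region
cam = grad_cam_map(maps, np.array([2.0, -1.0]))
```

In the paper's setting, the resulting map would be upsampled to the input resolution and overlaid on the steel plate image to produce the heat-affected zone of Figure 9.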
The acc and loss curves of model training before and after feature fusion are plotted in Figure 10. The acc curve after feature operator processing fluctuates within a narrow range and is higher overall, whereas the acc curve without feature operator processing fluctuates over a relatively wide range and is lower overall. Likewise, the loss curve after feature operator processing fluctuates within a narrow range and is lower overall, whereas the loss curve without feature operator processing fluctuates over a wide range and is higher overall. It can be concluded that the model with feature fusion has a higher fitting ability and better convergence than the model without it.
The confusion matrix is used in this study to further compare the classification results of the model before and after feature fusion. Each row of the matrix represents the prediction results for one category. The optimal results before and after feature fusion are shown as confusion matrices in Figure 11, from which the per-category recognition accuracies are as follows:

| Category | With feature fusion (%) | Without feature fusion (%) |
|---|---|---|
| Crazing | 100 | 100 |
| Inclusion | 99.33 | 97.58 |
| Patches | 100 | 98.48 |
| Pitted | 100 | 78.48 |
| Rolled-in | 100 | 95.76 |
| Scratches | 99.33 | 95.15 |

These results show that, after feature fusion preprocessing, the recognition accuracy of the model is at least as high in every category and strictly higher in all but the crazing category, indicating the feasibility of feature fusion preprocessing for improving recognition accuracy.
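Extracting such per-category accuracies from a confusion matrix is a one-liner; the sketch below assumes the common convention in which each row holds the prediction counts for one true category, and the toy matrix is hypothetical:

```python
import numpy as np

def per_class_accuracy(cm):
    """Per-class recognition accuracy from a confusion matrix.

    Assumes each row of cm holds the prediction counts for one true
    category, so class accuracy is the diagonal over the row sum.
    """
    cm = np.asarray(cm, dtype=np.float64)
    return np.diag(cm) / cm.sum(axis=1)

# Toy 3-class example: class 1 has 2 of 10 samples misassigned to class 2.
cm = np.array([[10, 0, 0],
               [0, 8, 2],
               [0, 0, 10]])
acc = per_class_accuracy(cm)
```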
This study proposes an image preprocessing method based on feature operator fusion, mainly for defect recognition on low-contrast grayscale images of steel plate surfaces. Through extensive combination tests, the original grayscale images were processed with the Sobel and Laplace operators and then superimposed with the original grayscale image at a weight ratio of 0.2:0.6:0.2 to form the input to a CNN. The experimental results show that this fusion scheme effectively improves the recognition rate on the test set. Without changing the network framework, the accuracy reaches 99.77%, which is 4.22% higher than that of the CNN trained directly on the original images without fusion.
In this study, the influence of several commonly used operators on the accuracy of the model has been investigated, and a solution from the perspective of optimizing the data source has been offered to resolve a bottleneck of CNN learning. The results show that this feature operator-based channel fusion strategy is highly effective at enhancing the pixel features of defects and making them easier for the network to capture, thereby successfully improving classification accuracy.
The authors acknowledge the support from the National Key R & D Program of China (Grant No. 2018YFB1701601).
The authors declare that they have no conflict of interest regarding this work.
[1] |
Chen G, Roy I, Yang C, et al. (2016) Nanochemistry and nanomedicine for nanoparticle-based dagnostics and therapy. Chem Rev 116: 2826–2885. doi: 10.1021/acs.chemrev.5b00148
![]() |
[2] |
Ali I, Lone MN, Suhail M, et al. (2016) Advances in nanocarriers for anticancer drugs delivery. Curr Med Chem 23: 2159–2187. doi: 10.2174/0929867323666160405111152
![]() |
[3] |
Pasqua L, Leggio A, Sisci D, et al. (2016) Mesoporous silica nanoparticles in cancer therapy: relevance of the targeting function. Mini Rev Med Chem 16: 743–753. doi: 10.2174/1389557516666160321113620
![]() |
[4] | Chow EKH, Ho D (2013) Cancer nanomedicine: from drug delivery to imaging. Sci Transl Med 5: 216rv4. |
[5] |
Lee BK, Yun YH, Park K (2015) Smart nanoparticles for drug delivery: boundaries and opportunities. Chem Eng Sci 125: 158–164. doi: 10.1016/j.ces.2014.06.042
![]() |
[6] | Bozzuto G, Molinari A (2015) Liposomes as nanomedical devices. Int J Nanomed 10: 975–999. |
[7] |
Allen TM, Cullis PR (2013) Liposomal drug delivery systems: from concept to clinical applications. Adv Drug Deliver Rev 65: 36–48. doi: 10.1016/j.addr.2012.09.037
![]() |
[8] |
Bobo D, Robinson KJ, Islam J, et al. (2016) Nanoparticle-based medicines: a review of FDA-approved materials and clinical trials to date. Pharm Res 33: 2373–2387. doi: 10.1007/s11095-016-1958-5
![]() |
[9] |
Wilhelm S, Tavares AJ, Dai Q, et al. (2016) Analysis of nanoparticle delivery to tumours. Nat Rev Mater 1: 16014. doi: 10.1038/natrevmats.2016.14
![]() |
[10] |
Iwamoto T (2013) Clinical application of drug delivery systems in cancer chemotherapy: review of the efficacy and side effects of approved drugs. Biol Pharm Bull 36: 715–718. doi: 10.1248/bpb.b12-01102
![]() |
[11] |
Brand W, Noorlander CW, Giannakou C, et al. (2017) Nanomedicinal products: a survey on specific toxicity and side effects. Int J Nanomed 12: 6107–6129. doi: 10.2147/IJN.S139687
![]() |
[12] | Janssen Products Expert Committee, DOXIL (doxorubicin HCl liposome injection), 2018. Available from: https://www.doxil.com. |
[13] |
Ishida T, Harashima H, Kiwada H (2001) Interactions of liposomes with cells in vitro and in vivo: opsonins and receptors. Curr Drug Metab 2: 397–409. doi: 10.2174/1389200013338306
![]() |
[14] |
Ishida T, Harashima H, Kiwada H, et al. (2002) Liposome clearance. Bioscience Rep 22: 197–224. doi: 10.1023/A:1020134521778
![]() |
[15] |
Lombardo D, Calandra P, Barreca D, et al. (2016) Soft interaction in liposome nanocarriers for therapeutic drug delivery. Nanomaterials 6: E125. doi: 10.3390/nano6070125
![]() |
[16] |
Dai Y, Xu C, Sun X, et al. (2017) Nanoparticle design strategies for enhanced anticancer therapy by exploiting the tumour microenvironment. Chem Soc Rev 46: 3830–3852. doi: 10.1039/C6CS00592F
![]() |
[17] | Sackmann E (1995) Physical basis of self-organization and function of membranes: physics of vesicles, In: Lipowsky R, Sackmann E, Handbook of Biological Physics, Elsevier, 213–303. |
[18] |
Israelachvili J, Wennerström H (1996) Role of hydration and water structure in biological and colloidal interactions. Nature 379: 219–225. doi: 10.1038/379219a0
![]() |
[19] | Franks F (1972) Water-a comprehensive treatise, New York, NY, USA: Plenum. |
[20] |
Magazù S, Migliardo F, Telling MT (2007) Study of the dynamical properties of water in disaccharide solutions. Eur Biophys J 36: 163–171. doi: 10.1007/s00249-006-0108-0
![]() |
[21] | Degiorgio V, Corti M (1985) Physics of amphiphiles: micelles, vesicles and microemulsions, Amsterdam: North-Holland. |
[22] | Tanford C (1980) The hydrophobic effect: formation of micelles and biological membranes, 2 Eds., New York: Wiley. |
[23] | Parsegian VA (2006) Van der Waals forces: a handbook for biologists, chemists, engineers, and physicists, Cambridge University Press. |
[24] | Hunter RJ (1986) Foundations of Colloid Science, Oxford University Press. |
[25] | Cevc G (1993) Electrostatic characterization of liposomes. Chem Phys Lipids 64: 163–186. doi: 10.1016/0009-3084(93)90064-A |
[26] | Dan N (2002) Effect of liposome charge and PEG polymer layer thickness on cell-liposome electrostatic interactions. BBA-Biomembranes 1564: 343–348. doi: 10.1016/S0005-2736(02)00468-6 |
[27] | Lombardo D (2014) Modeling dendrimers charge interaction in solution: relevance in biosystems. Biochem Res Int 2014: 837651. |
[28] | Akpinar B, Fielding LA, Cunningham VJ, et al. (2016) Determining the effective density and stabilizer layer thickness of sterically stabilized nanoparticles. Macromolecules 49: 5160–5171. doi: 10.1021/acs.macromol.6b00987 |
[29] | Wang Z, Zhu W, Qiu Y, et al. (2016) Biological and environmental interactions of emerging two-dimensional nanomaterials. Chem Soc Rev 45: 1750–1780. doi: 10.1039/C5CS00914F |
[30] | Moore TL, Rodriguez-Lorenzo L, Hirsch V, et al. (2015) Nanoparticle colloidal stability in cell culture media and impact on cellular interactions. Chem Soc Rev 44: 6287–6305. doi: 10.1039/C4CS00487F |
[31] | Plessis JD, Ramachandran C, Weiner N (1996) The influence of lipid composition and lamellarity of liposomes on the physical stability of liposomes upon storage. Int J Pharm 127: 273–278. doi: 10.1016/0378-5173(95)04281-4 |
[32] | Ceh B, Lasic DD (1995) A rigorous theory of remote loading of drugs into liposomes. Langmuir 11: 3356–3368. doi: 10.1021/la00009a016 |
[33] | Geng S, Yang B, Wang G, et al. (2014) Two cholesterol derivative-based PEGylated liposomes as drug delivery system, study on pharmacokinetics and drug delivery to retina. Nanotechnology 25: 275103. doi: 10.1088/0957-4484/25/27/275103 |
[34] | Kiselev MA, Janich M, Hildebrand A, et al. (2013) Structural transition in aqueous lipid/bile salt [DPPC/NaDC] supramolecular aggregates: SANS and DLS study. Chem Phys 424: 93–99. doi: 10.1016/j.chemphys.2013.05.014 |
[35] | Kiselev MA, Lombardo D, Lesieur P, et al. (2008) Membrane self assembly in mixed DMPC/NaC systems by SANS. Chem Phys 345: 173–180. doi: 10.1016/j.chemphys.2007.09.034 |
[36] | Hernández-Caselles T, Villalaín J, Gómez-Fernández JC (1993) Influence of liposome charge and composition on their interaction with human blood serum proteins. Mol Cell Biochem 120: 119–126. doi: 10.1007/BF00926084 |
[37] | Narenji M, Talae MR, Moghimi HR (2017) Effect of charge on separation of liposomes upon stagnation. Iran J Pharm Res 16: 423–431. |
[38] | Krasnici S, Werner A, Eichhorn ME, et al. (2003) Effect of the surface charge of liposomes on their uptake by angiogenic tumor vessels. Int J Cancer 105: 561–567. doi: 10.1002/ijc.11108 |
[39] | Jain NK, Nahar M (2010) PEGylated nanocarriers for systemic delivery. Methods Mol Biol 624: 221–234. doi: 10.1007/978-1-60761-609-2_15 |
[40] | Dan N (2014) Nanostructured lipid carriers: effect of solid phase fraction and distribution on the release of encapsulated materials. Langmuir 30: 13809–13814. doi: 10.1021/la5030197 |
[41] | Bourgaux C, Couvreur P (2014) Interactions of anticancer drugs with biomembranes: what can we learn from model membranes? J Control Release 190: 127–138. doi: 10.1016/j.jconrel.2014.05.012 |
[42] | Lombardo D, Calandra P, Magazù S, et al. (2018) Soft nanoparticles charge expression within lipid membranes: the case of amino terminated dendrimers in bilayers vesicles. Colloid Surface B 170: 609–616. doi: 10.1016/j.colsurfb.2018.06.031 |
[43] | Dan N (2016) Membrane-induced interactions between curvature-generating protein domains: the role of area perturbation. AIMS Biophys 4: 107–120. |
[44] | Lombardo D, Calandra P, Bellocco E, et al. (2016) Effect of anionic and cationic polyamidoamine (PAMAM) dendrimers on a model lipid membrane. BBA-Biomembranes 1858: 2769–2777. doi: 10.1016/j.bbamem.2016.08.001 |
[45] | Katsaras J, Gutberlet T (2000) Lipid Bilayers: Structure and Interactions, Springer Science & Business Media. |
[46] | Wanderlingh U, D'Angelo G, Branca C (2014) Multi-component modeling of quasielastic neutron scattering from phospholipid membranes. J Chem Phys 140: 05B602. |
[47] | Kiselev MA, Lombardo D (2017) Structural characterization in mixed lipid membrane systems by neutron and X-ray scattering. BBA-Gen Subjects 1861: 3700–3717. doi: 10.1016/j.bbagen.2016.04.022 |
[48] | Kiselev MA, Lesieur P, Kisselev AM, et al. (2001) A sucrose solutions application to the study of model biological membranes. Nucl Instrum Meth A 470: 409–416. doi: 10.1016/S0168-9002(01)01087-7 |
[49] | Blanco E, Shen H, Ferrari M (2015) Principles of nanoparticle design for overcoming biological barriers to drug delivery. Nat Biotechnol 33: 941–951. doi: 10.1038/nbt.3330 |
[50] | Pirollo KF, Chang EH (2008) Does a targeting ligand influence nanoparticle tumor localization or uptake? Trends Biotechnol 26: 552–558. doi: 10.1016/j.tibtech.2008.06.007 |
[51] | Bae YK, Park K (2011) Targeted drug delivery to tumors: myths, reality and possibility. J Control Release 153: 198–205. doi: 10.1016/j.jconrel.2011.06.001 |
[52] | Mura S, Nicolas J, Couvreur P (2013) Stimuli-responsive nanocarriers for drug delivery. Nat Mater 12: 991–1003. doi: 10.1038/nmat3776 |
[53] | Xing H, Hwang K, Lu Y (2016) Recent developments of liposomes as nanocarriers for theranostic applications. Theranostics 6: 1336–1352. doi: 10.7150/thno.15464 |
[54] | Lombardo D, Kiselev AM, Caccamo MT (2019) Smart nanoparticles for drug delivery application: development of versatile nanocarrier platforms in biotechnology and nanomedicine. J Nanomater 2019: 3702518. |
Accuracy (%) over five runs (No. 1–No. 5) for single-operator conversion weights:

| Conversion weights | No. 1 | No. 2 | No. 3 | No. 4 | No. 5 | Average accuracy |
| --- | --- | --- | --- | --- | --- | --- |
| Sobel:img:Sobel | 97.24 | 96.54 | 94.27 | 93.28 | 96.77 | 96.22 |
| Roberts:img:Roberts | 95.34 | 95.49 | 94.65 | 96.97 | 95.30 | 95.55 |
| Prewitt:img:Prewitt | 95.25 | 93.94 | 95.21 | 94.24 | 94.71 | 94.67 |
| Laplace:img:Laplace | 88.13 | 87.12 | 86.44 | 87.63 | 87.08 | 87.28 |
| LBP:img:LBP | 92.11 | 92.48 | 92.26 | 93.32 | 94.43 | 92.92 |
Accuracy (%) over five runs (No. 1–No. 5) for the mixed-operator fusion schemes:

| Fusion scheme | No. 1 | No. 2 | No. 3 | No. 4 | No. 5 | Average accuracy |
| --- | --- | --- | --- | --- | --- | --- |
| Sobel:img:Roberts | 98.88 | 97.36 | 98.05 | 99.09 | 97.97 | 98.27 |
| Sobel:img:Laplace | 97.51 | 98.14 | 98.98 | 98.91 | 99.45 | 98.61 |
| Sobel:img:Prewitt | 98.51 | 97.04 | 98.17 | 99.01 | 97.27 | 98.00 |
| Sobel:img:LBP | 76.11 | 76.25 | 75.22 | 74.89 | 75.58 | 75.61 |
| Roberts:img:Laplace | 98.02 | 96.94 | 96.01 | 98.57 | 99.61 | 97.83 |
| Roberts:img:Prewitt | 95.17 | 96.37 | 95.25 | 96.44 | 97.32 | 96.11 |
| Roberts:img:LBP | 81.21 | 81.37 | 83.27 | 81.65 | 80.25 | 81.55 |
| Laplace:img:Prewitt | 97.41 | 97.11 | 97.02 | 96.51 | 95.55 | 96.72 |
| Laplace:img:LBP | 70.52 | 67.61 | 65.84 | 64.32 | 67.26 | 67.11 |
| Prewitt:img:LBP | 19.84 | 14.28 | 15.77 | 16.19 | 17.22 | 16.66 |
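The `A:img:B` notation in the schemes above suggests a three-channel input built from one edge map, the raw grayscale image, and a second edge map. The following is a minimal sketch of that idea, not the paper's actual pipeline: the `conv2d`, `fuse` helper names and the pure-numpy filtering are illustrative assumptions, and only the Sobel-x and Roberts-x kernels are shown.

```python
import numpy as np

def conv2d(img, k):
    """Valid-mode 2-D cross-correlation via numpy slicing (no SciPy needed)."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * img[i:i + oh, j:j + ow]
    return out

# Standard horizontal-gradient kernels (one of each operator's pair).
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
ROBERTS_X = np.array([[1, 0], [0, -1]], dtype=float)

def fuse(gray, k1, k2):
    """Stack edge map 1, the raw image, and edge map 2 as three channels."""
    e1 = np.abs(conv2d(gray, k1))
    e2 = np.abs(conv2d(gray, k2))
    # Crop everything to the smallest common size before stacking,
    # since valid-mode filtering shrinks each map differently.
    h = min(e1.shape[0], e2.shape[0], gray.shape[0])
    w = min(e1.shape[1], e2.shape[1], gray.shape[1])
    return np.stack([e1[:h, :w], gray[:h, :w], e2[:h, :w]], axis=-1)

gray = np.random.rand(64, 64)          # stand-in for a defect image
x = fuse(gray, SOBEL_X, ROBERTS_X)     # "Sobel:img:Roberts"
print(x.shape)                         # (62, 62, 3)
```

In practice the three channels would then feed a CNN classifier like an ordinary RGB image; the table implies that which two operators flank the raw channel matters considerably (Sobel/Roberts pairings near 98%, Prewitt:img:LBP collapsing to ~17%).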