
Chest X-ray images are an important clinical diagnostic reference for lung diseases, which are a serious threat to human health. With the rapid development of computer vision and deep learning technology, many scholars have carried out fruitful research on building valid models for recognizing lung diseases in chest X-ray images. However, further effort is still needed to improve the performance of recognition models and to enhance the interpretability of their results. In this paper, we construct a multi-scale adaptive residual neural network (MARnet) to identify chest X-ray images of lung diseases. To help the model extract image features more effectively, we cross-transfer the information extracted by the residual blocks and by the adaptive structure to different layers, avoiding the weakening effect of the residual structure on the adaptive function. We compare MARnet with several classical neural networks; the results show that MARnet achieves an accuracy (ACC) of 83.3% and an area under the ROC curve (AUC) of 0.97 in identifying four typical kinds of lung X-ray images (nodules, atelectasis, normal and infection), which is higher than that of the other methods. Moreover, to avoid the randomness of a single train-test split, 5-fold cross-validation is used to verify the generalization ability of the MARnet model, and the results are satisfactory. Finally, Gradient-weighted Class Activation Mapping (Grad-CAM) is adopted to highlight the discriminative regions of the images in the form of heat maps, which provides an explainable and more direct clinical diagnostic reference for lung diseases.
Citation: Boyang Wang, Wenyu Zhang. MARnet: multi-scale adaptive residual neural network for chest X-ray images recognition of lung diseases[J]. Mathematical Biosciences and Engineering, 2022, 19(1): 331-350. doi: 10.3934/mbe.2022017
Pesticide spraying is one of the most important farming processes, as it plays a crucial role in increasing agricultural productivity. Conventional spraying in strawberry greenhouses typically employs hazardous chemicals applied repeatedly by an operator with a manual sprayer. Despite the use of protection equipment (protective suit, gas mask, etc.), farmers are still exposed to toxic and dangerous chemicals that can cause health problems [1,2,3]. In addition, unsuitable spraying can contaminate both the plants and the soil, potentially causing harm to the final consumer. Because of these drawbacks, agricultural robotics has become a major research interest, aiming to reduce the intervention of farmers in risky situations while protecting the environment and final consumers.
Nowadays, robots play a significant role in crop inspection and treatment as well as in the detection of weeds, pests, and diseases. Agricultural robots have improved through the development of mobile platforms equipped with a variety of sensors and algorithms for performing agricultural tasks. Mobile agricultural robots have been used for a variety of purposes, such as harvesting cherry tomatoes based on stereo vision and a fruit collector [4]. An autonomous robot has been developed for intra-row weeding in vineyards by means of a rotary weeder, using sonar and a feeler to detect the trunks of the plants [5]. Another example can be found in [6], where a wheeled robot was designed for precise wheat seeding.
Robust and efficient autonomous navigation is a critical component for mobile robots operating autonomously in agricultural environments, and it is becoming increasingly important in current robotics research. Most crops are planted in straight rows with almost equal spacing between the rows, which favors the use of robots for agricultural tasks such as pesticide spraying. Accordingly, several navigation methods use the plant rows as landmarks for the navigation algorithms [7]. Vision sensors have been widely employed for plant row identification due to their low cost and the large amount of data they provide to guide robots. Methods for detecting crop rows based on the Hough transform are broadly used in visual robot navigation systems [8,9,10]. Astrand and Baerveldt [8,9] developed a crop row recognition method based on the Hough transform, and Chen et al. [10] achieved automatic detection of a transplanting robot's navigation target by extracting the target row with an improved Hough transform. Montalvo et al. [11] proposed a least-squares method for detecting crop rows in a grassy cornfield.
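To illustrate this family of vision-based approaches (related work rather than the method used in this paper), the following Python sketch detects candidate crop-row segments with OpenCV's probabilistic Hough transform; the HSV green thresholds and Hough parameters are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_crop_rows(bgr_image):
    """Return candidate crop-row line segments found with the probabilistic Hough transform."""
    # Segment vegetation by thresholding green hues in HSV space (assumed thresholds).
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
    # Edge map of the vegetation mask.
    edges = cv2.Canny(mask, 50, 150)
    # Probabilistic Hough transform: keep long segments as crop-row candidates.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=20)
    return [] if lines is None else [line[0] for line in lines]

# Hypothetical usage:
# segments = detect_crop_rows(cv2.imread("field.jpg"))
```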
On the other hand, laser rangefinder (LIDAR) sensors have generated a great number of research contributions, making them one of the most commonly used sensing systems on robotic platforms. They provide direct distance measurements, are less sensitive to environmental conditions (e.g., they work in the dark) and have a greater range than other sensors, and recent cost reductions have further increased interest in this technology. For these reasons, they are commonly used for local navigation between crop rows, which entails determining the robot's relative location within the rows and guiding it to avoid collisions [12,13,14]. Barawid et al. [15] used a 2D LIDAR to develop a real-time guidance system for driving an autonomous vehicle in an orchard, again using the Hough transform to extract plant rows for guidance. In [12], Hiremath and colleagues propose an autonomous navigation model for a robot in a maize field using a 2D LIDAR and a particle filter, which estimates the robot's state relative to its surroundings, such as its heading and lateral deviation.
The main contribution of this work is the development of an autonomous navigation method for an agricultural mobile robot, called the "AgriEco Robot", operating in a strawberry greenhouse. We present a 2D LIDAR-based navigation controller that steers the robot using its estimated heading and lateral offset relative to the crop rows. This allows the "AgriEco Robot" to autonomously guide itself between the crop rows while automatically spraying pesticide. After navigating between the rows and detecting their ends, the robot autonomously moves to the next crop rows and resumes navigation and spraying between them. The control software is based on the Robot Operating System (ROS) [16] and runs on a Jetson TX2 high-performance processor.
The "AgriEco Robot" (Figure 1) has been devised to work specifically within an agricultural greenhouse and navigate between rows of strawberries for automatic pesticide spraying. The robot's dimensions (65 cm long x 55 cm wide) were chosen in accordance with strawberry cultivation standards.
The mobile robot platform is composed of a steel chassis with four-wheel drive; a spraying system installed at the rear of the robot, consisting of two motorized arms and a pesticide tank with a submersible pump; and three on-board perception sensors: a stereo camera, a 2D LIDAR and an omnidirectional camera. The rest of this section describes the robot's main components.
The robotic platform comprises a chassis and a four-wheel drive based on brushless direct-current (BLDC) in-wheel motors, each linked to its own controller (see Figure 1). The chassis is made of lightweight steel and was designed and analyzed to support loads of up to 100 kg and to tow up to 80 kg. The four wheels operate independently and rotate about a single axis [17,18]. Each in-wheel motor has an independent controller which, through an Arduino Mega 2560 microcontroller, provides full and smooth control over the following functions: forward and backward motion, braking, and speed variation.
The BLDC motor controller constitutes the driver of the motor. It acts as an intermediary between the motor and the battery, receiving a control signal from the microcontroller and converting it into a higher-current signal that can drive the motor. The controller converts the DC voltage from the battery to AC voltage using power electronic switches. It is rated for a maximum current of 15 A and a power of 250 W and provides inputs for Hall-effect sensors. Pulse-width modulation (PWM) signals, filtered by a low-pass RC network, are generated to control the motor speed. The "AgriEco Robot" is entirely powered by a 36 V/30 Ah lithium-ion battery.
The sensory system of the robot comprises a Hokuyo URG-04LX-UG01 2D LIDAR, a ZED stereo camera, and an omnidirectional camera system, although the latter is not used in this work. The 2D LIDAR (Figure 1) is mounted at the front of the robot, at a height of 12 cm above the ground. According to the manufacturer, this device has a scanning area of 240° with an angular resolution of 0.36° and a scan frequency of 10 Hz. The detection range is approximately 20-5600 mm, with an accuracy of ±30 mm at distances between 60 and 1000 mm and ±3% of the measurement up to 4095 mm. It is connected to the on-board Jetson TX2 via a USB 2.0 interface and powered by an additional 5 V source.
The ZED stereo camera (Figure 1), developed by Stereolabs [19], integrates two cameras with a maximum resolution of 4416 × 1242 px at a frame rate of 15 fps. It has been specially designed for autonomous navigation and 3D analysis applications and provides robust visual odometry estimates. The ZED SDK supports ROS integration through the zed-ros-wrapper package. It uses depth perception to estimate the camera's 6-DoF pose (x, y, z, roll, pitch, yaw) at up to 100 Hz and, thus, the pose of the system it is mounted on. The positional tracking accuracy of the ZED camera is ±(0.01, 0.1, 0.1) m along the x, y and z axes. The camera is positioned on the robot at a height of 44 cm from the ground, with a pitch angle of 15° so that it points towards the ground.
The omnidirectional vision sensor (Figure 1) is composed of a CCD camera and a spherical mirror in a face-to-face configuration. The catadioptric system is positioned perpendicular to the robot and covers a 360° field of view of the robot's environment, which makes it well suited for autonomous navigation, obstacle detection, and localized pesticide spraying.
The "Agri-Eco Robot" systems uses a combination of CPU and GPU cores in a real time application, which implies the need for considerable computing power to fulfill the embedded sensors' requirements, like the Zed camera, which recommend NVIDIA TX2 [20] as the optimal embedded processing card. The Nvidia Jetson TX2 is a unit that is equipped with a Quad-core 2.0 Ghz 64-bit ARMv8 A57, a dual-core 2.0 Ghz ARMv8 Denver, a 256 CUDA core 1.3 MHz Nvidia Pascal and 8 GB memory. This embedded card runs the developed systems in 0.25 s.
The board runs the Ubuntu Linux 18.04 operating system with the open-source ROS Melodic (Robot Operating System) distribution. The general scheme of the complete system is shown in Figure 2.
Our system is based on the versatile and widely used ROS framework [21], which executes multiple programs in parallel in the form of nodes that communicate through services and messages broadcast via topics. ROS includes a set of libraries, a wide range of sensor drivers and a set of tools that ease the implementation of the proposed application. In this work, the "AgriEco Robot" system implements three ROS nodes (a minimal node skeleton is sketched after the list):
● The ZED node (zed-ros-wrapper) broadcasts its data on a set of topics providing access to the stereo images, the depth map, the 3D point cloud and an estimate of the camera's 6-DoF pose.
● The Hokuyo node produces messages containing the values of the laser scans.
● The microcontroller node receives commands from the other processes and controls the robot's motion and spraying systems.
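To make the node structure concrete, the sketch below shows a minimal rospy navigation node that consumes the Hokuyo scans and the ZED odometry and publishes velocity commands for the microcontroller node. The topic names (/scan, /zed/odom, /cmd_vel) and the use of geometry_msgs/Twist are assumptions for illustration; the actual robot may use different interfaces.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import LaserScan
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Twist

class NavigationNode(object):
    """Minimal sketch of the navigation node: LIDAR and odometry in, velocity commands out."""
    def __init__(self):
        self.scan = None
        self.odom = None
        rospy.Subscriber("/scan", LaserScan, self.on_scan)     # Hokuyo node output (assumed topic)
        rospy.Subscriber("/zed/odom", Odometry, self.on_odom)  # zed-ros-wrapper odometry (assumed topic)
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)

    def on_scan(self, msg):
        self.scan = msg

    def on_odom(self, msg):
        self.odom = msg

    def spin(self):
        rate = rospy.Rate(10)  # match the 10 Hz LIDAR scan frequency
        while not rospy.is_shutdown():
            if self.scan is not None:
                cmd = Twist()
                cmd.linear.x = 0.4  # cruise speed; steering is filled in by the controller described later
                self.cmd_pub.publish(cmd)
            rate.sleep()

if __name__ == "__main__":
    rospy.init_node("agrieco_navigation")
    NavigationNode().spin()
```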
The spraying system (Figures 1 and 3) was developed to automate pesticide spraying by the "AgriEco Robot". It is attached to the rear of the robot and is composed of a 10 L polyethylene tank (1) with a submerged pump (2) and two motorized arms (5) built by 3D printing in poly-lactic acid (PLA). The pipe is made of pure silicone and is connected to the pump inside the tank. The subsystem is completed by a T-type connection (3) that distributes the pesticide to both arms. Adjustable-flow nozzles (6) are fixed at the ends of the pipes.
The "AgriEco Robot", must be able to navigate through strawberry crop rows within an agricultural greenhouse to perform tasks such as automatic of spraying pesticide. For this purpose, we have developed a navigation method, based on the 2D LIDAR sensor, allowing the robot to autonomously navigate through the crop rows inside a strawberry greenhouse. As it will be explained later, the proposed autonomous navigation method is divided into two major parts: between and outside the crop rows.
We propose a 2D LIDAR-based navigation method that uses 2D range scans to autonomously drive between the strawberry plants, benefiting from the row-based arrangement of the crops. The robot moves and corrects its trajectory between the crop rows based on its estimated heading and lateral offset until the end of the row is detected. For this task, the LIDAR scans its surroundings in a plane parallel to the ground, and the measurements are used to determine the robot's position relative to the rows. This procedure detects whether the robot is navigating correctly (i.e., it is positioned in the center of the space between the crop rows and moving forward) or deviating to the left or right with respect to the crop rows, as shown in Figure 5.
The deviation is characterized by two parameters: the robot's lateral shift from the center of the row and its angular deviation from the row direction. In each situation, the motion controller generates a decision for the mobile robot: correct its trajectory, or move straight ahead if the robot is well positioned between the rows.
Figure 6 shows the scheme of the navigation system, where R and L represent the distances to the rows detected by the laser scanner at angles of 0° and 180°, respectively. The width of the robot is represented by W, and C is half the length of the robot. In turn, Cr and Cl are the perpendicular distances between the laser scanner (positioned at the center of the robot) and the right and left rows, respectively. If the robot deviates to the left, Cl is extracted from the laser scanner data, and Cr is calculated as follows:
$$f = \frac{C_l + C_r}{2} - C_l - C\cos\theta \tag{1}$$

with

$$\theta = \arccos\left(\frac{C_l}{R}\right) \tag{2}$$

$$C_r = L\cos\theta \tag{3}$$
Similarly, if the robot deviates to the right, Cr is extracted from the laser scanner data, and Cl is calculated.
The robot's deviation angle to the right or left between the crop rows is represented by θ. Finally, f denotes the displacement of the robot's center with respect to the middle point between the rows (see Figure 6b).
In addition, other crucial quantities must be estimated during the navigation of the mobile robot, namely the positions of the robot's limits with respect to the crop rows, to ensure operational safety. As shown in Figure 6a, these are represented by the following parameters: k denotes the distance from the robot's corner to the right row, while m is its analogue for the left crop row. Correctly estimating these parameters allows the robot to navigate safely without hitting the crops; they are computed with the following equations, depending on the direction of the robot's deviation:
Deviation to the left:
$$k = \left(R - \frac{W}{2}\right)\cos\theta \tag{4}$$

$$m = C_r - L\sin\theta \tag{5}$$
Deviation to the right:
$$k = \left(L - \frac{W}{2}\right)\cos\theta \tag{6}$$

$$m = C_l - L\sin\theta \tag{7}$$
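A literal transcription of Eqs. (1)-(7) into Python might look like the following sketch. The function name and argument order are hypothetical, and the mirrored formulas for a rightward deviation (where only Eqs. (6) and (7) are given explicitly in the text) are assumed by symmetry.

```python
import math

def row_geometry(R, L, W, C, C_meas, deviates_left):
    """
    Literal transcription of Eqs. (1)-(7). R and L are the laser ranges at 0 and 180 deg,
    W is the robot width, C is half the robot length, and C_meas is the perpendicular
    distance extracted from the scan (Cl when deviating left, Cr when deviating right).
    Returns (theta, f, k, m).
    """
    if deviates_left:
        Cl = C_meas
        theta = math.acos(Cl / R)              # Eq. (2)
        Cr = L * math.cos(theta)               # Eq. (3)
        k = (R - W / 2.0) * math.cos(theta)    # Eq. (4)
        m = Cr - L * math.sin(theta)           # Eq. (5)
    else:
        # Mirrored case (assumption): only Eqs. (6) and (7) are given explicitly.
        Cr = C_meas
        theta = math.acos(Cr / L)
        Cl = R * math.cos(theta)
        k = (L - W / 2.0) * math.cos(theta)    # Eq. (6)
        m = Cl - L * math.sin(theta)           # Eq. (7)
    f = (Cl + Cr) / 2.0 - Cl - C * math.cos(theta)  # Eq. (1): lateral displacement
    return theta, f, k, m
```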
Once the position of the robot between the crop rows has been determined, the navigation motion controller provides the required speed and steering values. The motion strategies were described in detail in [17]. To keep the robot on a proper trajectory between the rows, we follow the scheme shown in Figure 7, which depicts the motion corrections the robot applies when it deviates from the middle of the crops (i.e., the reference position). The input of the motion controller is thus composed of the lateral and heading offsets of the robot. The controller then outputs three different control actions: i) an angular correction to the left by actuating the two right motors, ii) an angular correction to the right by actuating the two left motors, and iii) a linear velocity by actuating all four motors at the same speed.
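The decision logic described above can be sketched as follows; the sign convention, dead-band thresholds and wheel speeds are assumptions rather than the values tuned on the robot.

```python
def motion_decision(f, theta, f_tol=0.05, theta_tol=0.05, v_cruise=0.4, v_turn=0.25):
    """
    Select one of the three control actions from the lateral offset f (m) and heading
    deviation theta (rad). The sign convention (positive = deviation to the left) and
    the dead-band thresholds are assumptions for illustration.
    Returns (left_side_speed, right_side_speed) in m/s.
    """
    if abs(f) <= f_tol and abs(theta) <= theta_tol:
        # Well centred: drive all four motors at the same speed.
        return v_cruise, v_cruise
    if f > 0 or theta > 0:
        # Deviation to the left: correct to the right by driving the left-side motors faster.
        return v_cruise, v_turn
    # Deviation to the right: correct to the left by driving the right-side motors faster.
    return v_turn, v_cruise
```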
Finally, to increase the safety of the robot in case an emergency occurs and the system loses control of the actuators, the robot stops moving if it does not receive a new command within 0.5 s. Moreover, farmers use many tools daily on the farm and may leave an object in the robot's path. For this reason, during navigation within the greenhouse, the "AgriEco Robot" is coupled with an obstacle detector based on the 2D LIDAR and visual sensors embedded on the robot [18].
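A minimal rospy sketch of such a 0.5 s command watchdog is shown below; the topic names are placeholders.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

TIMEOUT = 0.5  # seconds without a new command before stopping

class CommandWatchdog(object):
    """Forward velocity commands, but publish a zero command if none arrives within TIMEOUT."""
    def __init__(self):
        self.last_cmd_time = rospy.Time.now()
        self.pub = rospy.Publisher("/motors/cmd_vel", Twist, queue_size=1)  # assumed topic
        rospy.Subscriber("/cmd_vel", Twist, self.on_cmd)                    # assumed topic
        rospy.Timer(rospy.Duration(0.1), self.check)

    def on_cmd(self, msg):
        self.last_cmd_time = rospy.Time.now()
        self.pub.publish(msg)

    def check(self, _event):
        if (rospy.Time.now() - self.last_cmd_time).to_sec() > TIMEOUT:
            self.pub.publish(Twist())  # an all-zero twist stops the robot

if __name__ == "__main__":
    rospy.init_node("command_watchdog")
    CommandWatchdog()
    rospy.spin()
```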
Once the robot has safely navigated between the strawberry crop rows and reaches their end, it must handle the transition to the next row, which is performed as follows:
The first step is to detect the end of the current row by means of the LIDAR at the front of the robot, which is easily done by inspecting the laser readings. Then, the odometry provided by the ZED camera is used to estimate the total distance travelled from the beginning of the row, which is compared with the actual row length to confirm that the robot has indeed reached the end. After that, to enter the next row, the robot performs a sequence consisting of a circular-arc movement, a backwards motion and a final circular-arc movement, as shown in Figure 8.
Once the robot is placed at the beginning of the new crop row, the odometry is reset to start measuring the distance travelled along the new row, and the robot continues navigating and spraying pesticides until all the rows have been completed.
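The end-of-row detection and row-change maneuver can be summarized by the following sketch; the range threshold, margins and the robot/odometry interfaces (turn_arc, move_backwards, reset_odometry) are hypothetical placeholders.

```python
import math

def end_of_row_reached(scan_ranges, travelled, row_length, open_range=3.0, margin=0.3):
    """
    Declare the end of the current row when the frontal LIDAR sector is open (no return
    closer than open_range metres) AND the ZED odometry indicates that at least the known
    row length (minus a margin) has been travelled. Thresholds are assumptions.
    """
    mid = len(scan_ranges) // 2
    frontal = scan_ranges[mid - 10: mid + 10]  # beams around the straight-ahead direction
    frontal_open = all(r > open_range or math.isinf(r) for r in frontal)
    return frontal_open and travelled >= row_length - margin

def change_row(robot, aisle_width):
    """Arc out of the current aisle, back up, and arc into the next one (as in Figure 8)."""
    robot.turn_arc(radius=aisle_width / 2.0, angle=math.pi / 2)   # first circular arc
    robot.move_backwards(distance=0.5)                            # hypothetical back-up distance
    robot.turn_arc(radius=aisle_width / 2.0, angle=math.pi / 2)   # final circular arc
    robot.reset_odometry()                                        # start counting the new row
```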
Finally, the pesticide spraying system operates automatically and simultaneously with the autonomous navigation, with the robot following a stop-and-go strategy. The robot moves a specific distance corresponding to the real distance between successive plants (known beforehand), then stops and sprays the plants located on both sides of the robot. It then resumes navigation and repeats this process until the end of the row. The 2D LIDAR-based navigation can encounter certain difficulties, such as a LIDAR sensor failure. Therefore, to improve the robustness and efficiency of our approach, we have incorporated a ZED-based verification system that provides the robot's orientation angle as an auxiliary source during navigation.
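A stop-and-go spraying loop consistent with this description might look like the sketch below; the plant spacing, spraying duration and the robot/sprayer interfaces are assumed placeholders.

```python
def spray_row(robot, sprayer, plant_spacing, n_plants, spray_time=2.0):
    """
    Stop-and-go strategy: advance by the (known) spacing between successive plants,
    stop, spray both sides, and repeat until the end of the row.
    All interfaces (robot.move_forward, robot.stop, sprayer.spray_both_sides) are hypothetical.
    """
    for _ in range(n_plants):
        robot.move_forward(distance=plant_spacing)      # odometry-based displacement
        robot.stop()
        sprayer.spray_both_sides(duration=spray_time)   # both arms spray simultaneously
```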
In this section we present the experiments performed to evaluate the robustness and performance of the "AgriEco Robot" when operating within a real greenhouse, shown in Figure 4, built at the Faculty of Science in Rabat, Morocco (GPS coordinates: 34.008287475248935, -6.838260257670796), with dimensions of 5 m × 9 m × 2.5 m (width, length and height, respectively). The greenhouse contains four rows with an inter-row spacing of 70 cm and an aisle length of 500 cm. Each row is 25 cm high and includes 11 plants. The experiment consisted of automatically navigating between the two rows on the right of the image while spraying pesticides, detecting the end of the rows, moving to the next aisle and continuing to navigate and spray. The 2D LIDAR was calibrated beforehand using two perpendicular walls to ensure that the sensor was properly positioned in the middle of the robot. To assess the performance of the robot navigation, we computed the lateral error (i.e., the deviation of the followed trajectory from the center of the aisle) as well as the heading error of the robot (see Figures 9 and 10).
As can be seen in the figures, the lateral error remains bounded within a few centimeters during the whole experiment, with a root mean square (RMS) value of 2.99 cm, while the heading RMS error is 3.27°. In this experiment, the robot navigated at a speed of 0.44 m/s (which can be considered safe for indoor operation) along a trajectory of approximately 5 m. Finally, it is worth noting that the rows in the actual field were not perfectly aligned, but computing the inter-row distance in real time allowed the robot to reactively adapt its navigation to the actual row alignment.
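The reported RMS values can be reproduced from the logged error signals with a few lines of NumPy; the error arrays below are placeholders for the recorded lateral (cm) and heading (degree) errors.

```python
import numpy as np

def rms(errors):
    """Root mean square of an error signal."""
    e = np.asarray(errors, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))

# Hypothetical logged signals (one sample per control cycle):
# lateral_error_cm = [...]; heading_error_deg = [...]
# print(rms(lateral_error_cm))   # reported value: 2.99 cm
# print(rms(heading_error_deg))  # reported value: 3.27 deg
```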
This work presented an autonomous navigation method for an agricultural robot based on a 2D LIDAR sensor and a motion controller. Our system automatically estimates the robot's lateral deviation with respect to the center of the crop aisles as well as its heading, and control actions are then sent to the robot's motors to correct them. The entire system has been implemented in ROS, which facilitated the integration and communication of the robot's software and hardware. The autonomous navigation method was evaluated in a real-world scenario with the robot spraying pesticides within a strawberry greenhouse, proving its usefulness and performance for agricultural applications. As future work, we will focus on improving the spraying procedure by developing an automatic and targeted pesticide spraying system. For this, we intend to use an image-processing approach based on the omnidirectional vision sensor embedded on the robot to automatically recognize the crops and servo-control the sprayer's motorized arms.
The authors of this paper are thankful to the Ministry of Higher Education, Research and Innovation (MHERI) and the National Center for Scientific and Technical Research of Morocco (CNRST) for financing this project.
The authors declare that there is no conflict of interest in this paper.
[1] A. Bhandary, G. A. Prabhu, V. Rajinikanth, K. P. Thanaraj, S. C. Satapathy, D. E. Robbins, et al., Deep-learning framework to detect lung abnormality a study with chest X-Ray and lung CT scan images, Pattern Recognit. Lett., 129 (2020), 271-278. doi: 10.1016/j.patrec.2019.11.013
[2] T. Han, V. X. Nunes, L. F. De Freitas Souza, A. G. Marques, I. C. L. Silva, M. A. A. F. Junior, et al., Internet of medical things-based on deep learning techniques for segmentation of lung and stroke regions in CT scans, IEEE Access, 8 (2020), 71117-71135. doi: 10.1109/ACCESS.2020.2987932
[3] S. Kumar, P. Singh, M. Ranjan, A review on deep learning based pneumonia detection systems, in Proceedings-International Conference on Artificial Intelligence and Smart Systems, ICAIS, (2021), 289-296. doi: 10.1109/ICAIS50930.2021.9395868
[4] W. S. U. S. Krimsky, Induced Atelectasis and Pulmonary Consolidation Systems and Methods, 2019. Available from: https://patentimages.storage.googleapis.com/d1/1e/27/edb84321a9bb25/US10448886.pdf
[5] C. A. de Pinho Pinheiro, N. Nedjah, L. de Macedo Mourelle, Detection and classification of pulmonary nodules using deep learning and swarm intelligence, Multimedia Tools Appl., 79 (2020), 15437-15465. doi: 10.1007/s11042-019-7473-z
[6] D. Brenner, J. McLaughlin, R. Hung, Previous lung diseases and lung cancer risk: a systematic review and meta-analysis, PLoS One, 6 (2011). doi: 10.1371/journal.pone.0017479
[7] A. Krizhevsky, I. Sutskever, G. E. Hinton, ImageNet classification with deep convolutional neural networks, Commun. ACM, 60 (2017), 84-90. doi: 10.1145/3065386
[8] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, et al., ImageNet large scale visual recognition challenge, Int. J. Comput. Vision, 115 (2015), 211-252. doi: 10.1007/s11263-015-0816-y
[9] L. Zhang, P. Yang, H. Feng, Q. Zhao, H. Liu, Using network distance analysis to predict lncRNA-miRNA interactions, Interdiscip. Sci., 13 (2021), 535-545. doi: 10.1007/s12539-021-00458-z
[10] P. P. Sun, Y. B. Chen, B. Liu, Y. X. Gao, Y. Han, F. He, et al., DeepMRMP: a new predictor for multiple types of RNA modification sites using deep learning, Math. Biosci. Eng., 16 (2019), 6231-6241. doi: 10.3934/mbe.2019310
[11] X. Y. Wang, H. Wang, S. Z. Niu, J. W. Zhang, Detection and localization of image forgeries using improved mask regional convolutional neural network, Math. Biosci. Eng., 16 (2019), 4581-4593. doi: 10.3934/mbe.2019229
[12] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, (2016), 770-778. doi: 10.1109/CVPR.2016.90
[13] P. Wang, E. Fan, P. Wang, Comparative analysis of image classification algorithms based on traditional machine learning and deep learning, Pattern Recognit. Lett., 141 (2021), 61-67. doi: 10.1016/j.patrec.2020.07.042
[14] S. Zeng, Y. Cao, Q. Lin, Z. Man, T. Deng, R. Wang, Deep learning SPECT lung perfusion image classification method based on attention mechanism, J. Phys. Conf. Ser., 1748 (2021). doi: 10.1088/1742-6596/1748/4/042050
[15] T. K. K. Ho, J. Gwak, O. Prakash, J. I. Song, C. M. Park, Utilizing pretrained deep learning models for automated pulmonary tuberculosis detection using chest radiography, in Intelligent Information and Database Systems, Springer, 11432 (2019), 395-403. doi: 10.1007/978-3-030-14802-7_34
[16] R. Zhang, M. Sun, S. Wang, K. Chen, Computed Tomography Pulmonary Nodule Detection Method Based on Deep Learning, 2021. Available from: https://patentimages.storage.googleapis.com/9c/00/cc/4c302cd759496a/US10937157.pdf
[17] C. Tong, B. Liang, Q. Su, M. Yu, J. Hu, A. K. Bashir, et al., Pulmonary nodule classification based on heterogeneous features learning, IEEE J. Sel. Areas Commun., 39 (2021), 574-581. doi: 10.1109/JSAC.2020.3020657
[18] J. H. Lee, H. Y. Sun, S. Park, H. Kim, E. J. Hwang, J. M. Goo, et al., Performance of a deep learning algorithm compared with radiologic interpretation for lung cancer detection on chest radiographs in a health screening population, Radiology, 297 (2020). doi: 10.1148/radiol.2020201240
[19] A. Hosny, C. Parmar, T. P. Coroller, P. Grossmann, R. Zeleznik, A. Kumar, et al., Deep learning for lung cancer prognostication: a retrospective multi-cohort radiomics study, PLoS Med., 15 (2018). doi: 10.1371/journal.pmed.1002711
[20] M. Masud, N. Sikder, A. A. Nahid, A. K. Bairagi, M. A. Alzain, A machine learning approach to diagnosing lung and colon cancer using a deep learning-based classification framework, Sensors (Basel), 21 (2021), 1-21. doi: 10.3390/s21030748
[21] G. Liang, L. Zheng, A transfer learning method with deep residual network for pediatric pneumonia diagnosis, Comput. Methods Programs Biomed., 187 (2020). doi: 10.1016/j.cmpb.2019.06.023
[22] X. Wei, Y. Chen, Z. Zhang, Comparative experiment of convolutional neural network (CNN) models based on pneumonia X-ray images detection, in Proceedings-2020 2nd International Conference on Machine Learning, Big Data and Business Intelligence, MLBDBI, (2020), 449-454. doi: 10.1109/MLBDBI51377.2020.00095
[23] L. Račić, T. Popovic, S. Caki, S. Sandi, Pneumonia detection using deep learning based on convolutional neural network, in 2021 25th International Conference on Information Technology, (2021), 1-4. doi: 10.1109/IT51528.2021.9390137
[24] A. G. Taylor, C. Mielke, J. Mongan, Automated detection of moderate and large pneumothorax on frontal chest X-rays using deep convolutional neural networks: a retrospective study, PLoS Med., 15 (2018). doi: 10.1371/journal.pmed.1002697
[25] S. Roy, W. Menapace, S. Oei, B. Luijten, E. Fini, C. Saltori, et al., Deep learning for classification and localization of COVID-19 markers in point-of-care lung ultrasound, IEEE Trans. Med. Imaging, 39 (2020), 2676-2687. doi: 10.1109/TMI.2020.2994459
[26] T. Hu, K. Mohammad, M. Mokhtar, P. Gholam-Reza, T. K. Sarkhel H, R. Tarik A, Real-time COVID-19 diagnosis from X-Ray images using deep CNN and extreme learning machines stabilized by chimp optimization algorithm, Biomed. Signal Process. Control, 68 (2021). doi: 10.1016/j.bspc.2021.102764
[27] M. A. Khan, S. Kadry, Y. D. Zhang, T. Akram, M. Sharif, A. Rehman, et al., Prediction of COVID-19-pneumonia based on selected deep features and one class kernel extreme learning machine, Comput. Electr. Eng., 90 (2021). doi: 10.1016/j.compeleceng.2020.106960
[28] G. B. Kim, K. H. Jung, Y. Lee, H. J. Kim, N. Kim, S. Jun, et al., Comparison of shallow and deep learning methods on classifying the regional pattern of diffuse lung disease, J. Digit. Imaging, 31 (2018), 415-424. doi: 10.1007/s10278-017-0028-9
[29] R. Sivaramakrishnan, S. Antani, S. Candemir, Z. Xue, J. Abuya, M. Kohli, et al., Comparing deep learning models for population screening using chest radiography, in SPIE Medical Imaging 2018: Computer-Aided Diagnosis, Houston, Texas, USA, 10575 (2018). doi: 10.1117/12.2293140
[30] C. A. de Pinho Pinheiro, N. Nedjah, L. de Macedo Mourelle, Detection and classification of pulmonary nodules using deep learning and swarm intelligence, Multimed. Tools Appl., 79 (2019), 15437-15465. doi: 10.1007/s11042-019-7473-z
[31] X. Yiwen, H. Ahmed, Z. Roman, P. Chintan, C. Thibaud, F. Idalid, et al., Deep learning predicts lung cancer treatment response from serial medical imaging, Clin. Cancer Res., 25 (2019). doi: 10.1158/1078-0432.CCR-18-2495
[32] K. C. Chen, H. R. Yu, W. S. Chen, W. C. Lin, Y. C. Lee, H. H. Chen, et al., Diagnosis of common pulmonary diseases in children by X-ray images and deep learning, Sci. Rep., 10 (2020), 17374. doi: 10.1038/s41598-020-73831-5
[33] P. Rajpurkar, J. Irvin, R. L. Ball, K. Zhu, B. Yang, H. Mehta, et al., Deep learning for chest radiograph diagnosis: a retrospective comparison of the CheXNeXt algorithm to practicing radiologists, PLoS Med., 15 (2018). doi: 10.1371/journal.pmed.1002686
[34] A. I. Aviles-Rivero, N. Papadakis, R. Li, P. Sellars, Q. Fan, R. T. Tan, et al., GraphX NET-chest X-Ray classification under extreme minimal supervision, in Medical Image Computing and Computer Assisted Intervention-MICCAI 2019-22nd International Conference, (2019). doi: 10.1007/978-3-030-32226-7_56
[35] X. Wang, Y. Peng, L. Lu, Z. Lu, R. M. Summers, TieNet: text-image embedding network for common thorax disease classification and reporting in chest X-rays, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (2018), 9049-9058. doi: 10.1109/CVPR.2018.00943
[36] S. Xu, H. Wu, R. Bie, CXNet-m1: anomaly detection on chest X-rays with image-based deep learning, IEEE Access, 7 (2019), 4466-4477. doi: 10.1109/ACCESS.2018.2885997
[37] J. Zhao, M. Li, W. Shi, Y. Miao, Z. Jiang, B. Ji, A deep learning method for classification of chest X-ray images, J. Phys. Conf. Ser., 1848 (2021). doi: 10.1088/1742-6596/1848/1/012030
[38] T. K. K. Ho, J. Gwak, Utilizing knowledge distillation in deep learning for classification of chest X-ray abnormalities, IEEE Access, 8 (2020), 160749-160761. doi: 10.1109/ACCESS.2020.3020802
[39] I. Sirazitdinov, M. Kholiavchenko, T. Mustafaev, Y. Yixuan, R. Kuleev, B. Ibragimov, Deep neural network ensemble for pneumonia localization from a large-scale chest x-ray database, Comput. Electr. Eng., 78 (2019), 388-399. doi: 10.1109/ACCESS.2020.3020802
[40] J. Hu, L. Shen, S. Albanie, G. Sun, E. Wu, Squeeze-and-excitation networks, IEEE Trans. Pattern Anal. Mach. Intell., 42 (2020), 2011-2023. doi: 10.1109/TPAMI.2019.2913372
[41] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, D. Batra, Grad-CAM: visual explanations from deep networks via gradient-based localization, Int. J. Comput. Vision, 128 (2020), 336-359. doi: 10.1109/ICCV.2017.74
[42] N. L. Ramo, K. L. Troyer, C. M. Puttlitz, Comparing predictive accuracy and computational costs for viscoelastic modeling of spinal cord tissues, J. Biomech. Eng., 141 (2019). doi: 10.1115/1.4043033
[43] D. M. Powers, Evaluation: from precision, recall and f-measure to ROC, informedness, markedness and correlation, J. Mach. Learn. Technol., 2 (2011), 2229-3981. Available from: http://hdl.handle.net/2328/27165
[44] T. Fawcett, An introduction to ROC analysis, Pattern Recognit. Lett., 27 (2006), 861-874. doi: 10.1016/j.patrec.2005.10.010
[45] C. X. Ling, J. Huang, H. Zhang, AUC: a better measure than accuracy in comparing learning algorithms, in Advances in Artificial Intelligence, 16th Conference of the Canadian Society for Computational Studies of Intelligence, AI 2003, Halifax, Canada, 2003. doi: 10.1007/3-540-44886-1_25
[46] K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, in 3rd International Conference on Learning Representations, ICLR 2015-Conference Track Proceedings, (2015). arXiv: 1409.1556
[47] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, et al., Going deeper with convolutions, IEEE Comput. Soc., (2015), 1-9. doi: 10.1109/CVPR.2015.7298594
[48] Y. Yang, Z. Zhong, T. Shen, Z. Lin, Convolutional neural networks with alternately updated clique, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (2018), 2413-2422. doi: 10.1109/CVPR.2018.00256
[49] G. Zeng, On the confusion matrix in credit scoring and its analytical properties, Commun. Stat. Theory Methods, 49 (2020), 2080-2093. doi: 10.1080/03610926.2019.1568485