
Exploring ginseng's potential role as an adjuvant therapy in COVID-19

  • Ginseng is a plant from the Panax genus used since ancient times as a prominent component of traditional Chinese medicine, and is prized for its energizing, antiaging and antioxidant properties. Over time, the scientific community has taken a keen interest in ginseng's potential as a supplement in various health sectors. While there is a substantial body of data demonstrating the effectiveness of ginseng and other natural products as adjuncts in the treatment of respiratory diseases, the emergence of the COVID-19 pandemic has amplified the attention on ginseng and its extracts as potential antiviral and antibacterial agents. This review aims to summarize the potential benefits of ginseng in the prevention of COVID-19, the alleviation of symptoms and the enhancement of clinical outcomes for patients. It suggests incorporating ginseng and other natural compounds into complementary therapeutic regimens to augment the effectiveness of vaccines and pharmacological treatments. However, it's important to note that further experiments and clinical studies are necessary to solidify the efficacy of ginseng against COVID-19 and to establish its use as a viable option.

    Citation: Lisa Aielli, Chman Shahzadi, Erica Costantini. Exploring ginseng's potential role as an adjuvant therapy in COVID-19[J]. AIMS Allergy and Immunology, 2023, 7(4): 251-272. doi: 10.3934/Allergy.2023017




    As an image enhancement technology, image fusion is utilized to combine images captured by different types of sensors or under distinct shooting settings, aiming to obtain images with more comprehensive scene representation information [1]. Among various applications of image fusion, infrared and visible image fusion serves as a typical example. Infrared images are captured using infrared sensors, relying on thermal radiation. They are characterized by prominent targets, minimal environmental influence, high imaging noise and blurred details [2]. On the other hand, visible images exhibit rich texture details, high resolution and sensitivity to lighting conditions [3]. Image fusion enables the integration of unique and shared information from both modalities to generate fused images with enhanced texture and salient targets. These fused images play a crucial role in subsequent high-level vision tasks such as semantic segmentation [4] and nighttime vehicle target detection [5].

Research on infrared and visible image fusion has produced a variety of traditional methods [6,7,8,9,10]. These methods are often highly interpretable but rely on hand-designed fusion rules, which can limit their performance on more complex scene fusion tasks. With the advancements in deep learning, however, an increasing number of deep learning methods are being applied to image fusion tasks [11,12,13,14,15]. Deep networks exhibit strong capabilities in characterizing image features, surpassing the limitations of traditional feature extraction methods, and their data-driven approach enables end-to-end generation of fused images.

To enhance fusion performance, many existing deep learning-based fusion methods incorporate complex network modules that require more storage and computational resources. For instance, Long et al. [16] proposed a network that aggregates residual dense blocks, combining densely connected blocks with residually connected blocks. Pu et al. [17] introduced a complex contextual information perceptual module for image reconstruction. Xu et al. [18] employed dissociative representation learning in an auto-encoder-based approach. These methods have demonstrated performance improvements in fusion results; however, the complex modules they introduce also bring greater computational complexity.

    Furthermore, existing fusion algorithms often employ fusion layers that incorporate intricate fusion modules or fusion rules, with the primary aim of improving evaluation metrics. However, these algorithms often overlook the characteristics of different modalities. Notably, auto-encoder-based methods [18,19,20,21] utilize hand-designed fusion strategies for combining depth features. The use of such hand-designed fusion strategies may not assign proper weights to the depth features, leading to limitations in the performance of the fusion methods.

    Currently, there is a lack of research on lightweight fusion models, which aim to reduce model parameters and convolutional depth channels. One example is PMGI [22], which performs information extraction through gradient and intensity scale preservation. It achieves this by reusing and fusing features extracted with fewer convolutional layers. Another lightweight model, FLFuse [23], generates fused images using a weight sharing encoder and feature swapping training strategy to ensure efficiency. However, FLFuse fails to fully extract and fuse image features due to its shallow network channel dimension and simplistic implicit fusion strategy, resulting in subpar visual effects and performance metrics.

    We focus on exploring lightweight fusion methods based on structural re-parameterization. Existing structural re-parameterization methods have demonstrated high performance in training and fast inference speeds, making them effective for advanced vision tasks [24,25,26,27]. They are likely to be crucial in addressing the imbalance between fusion performance and computational resource consumption. However, directly applying these structural re-parameterization blocks designed for high-level vision tasks provides limited improvement for infrared and visible image fusion. Specific structural re-parameterization blocks tailored for fusion tasks are required to efficiently extract richer information from different modal features.

    To address the limitations of existing image fusion methods, this paper proposes a novel approach that combines edge operators with structural re-parameterization. This approach enables the rapid generation of fused images with enhanced edge texture information and prominent targets, effectively addressing the imbalance between fusion performance and computational resource consumption. The major contributions of this paper are outlined as follows:

● A fast edge convolution fusion network (FECFusion) for infrared and visible images is proposed, which combines edge operators with structural re-parameterization for the first time to rapidly generate fused images with rich edge texture information and salient targets, addressing the imbalance between fusion performance and computational resource consumption.

    ● A structural re-parameterization edge convolution block (RECB) is proposed, which can deeply mine the edge information in the source images and improve the performance of the fusion model without introducing additional inference burden.

● An attention fusion module (AFM) is designed to sufficiently fuse the unique and common information of different modal features, effectively integrating the feature information of the source images with less computational effort.

    In the current literature, there are numerous works that primarily focus on preserving texture details in images [28,29,30]. In contrast, our work aims to address the challenge of balancing lightweight design and performance in image fusion networks. One approach is IFCNN [31], where both the encoder and decoder components employ only two convolutional layers for feature extraction and image reconstruction. Additionally, the fusion rules are adjusted based on the source image type, resulting in a unified network capable of handling various fusion tasks. Another method, SDNet [32], tackles the fusion task by incorporating the generated fused image reconstruction into a squeezed network structure of the source image. This forces the fused image to contain more information from the source images. SeAFusion [33] utilizes dense blocks with gradient residuals for feature extraction and employs the semantic segmentation task loss to guide the training of the fusion network. Recently, FLFuse [23] achieves feature extraction implicitly through a weight sharing encoder and feature swapping training strategy, enabling the generation of fused images in a lightweight and fast manner.

    However, existing methods for infrared and visible image fusion only reduce the network model's parameters through conventional lightweight network design approaches, which can lead to a degradation in fusion performance.

    Many existing methods for infrared and visible image fusion rely on attention mechanisms or multi-scale feature extraction to enhance network performance, but these approaches often come at the cost of increased computational complexity. Finding networks that effectively extract image features while maintaining high computational efficiency is challenging. In ACNet [34], Ding et al. proposed a method to convert multi-branch structures into a single-branch structure, thereby improving the performance of convolutional networks. In another work by Ding et al. [35], a concise VGG-like backbone network called RepVGG was introduced. RepVGG utilizes structural re-parameterization as its core technique, enabling efficient feature extraction while reducing network computation. RepVGG has demonstrated excellent performance in target detection tasks. Building upon this work, Ding et al. [36] proposed six network structures that can be structurally re-parameterized. The authors also explained the underlying reasons why the structural re-parameterization method is effective. Given the success of structural re-parameterization in various vision tasks, this approach holds promise for addressing the challenge of balancing fusion performance and computational resource consumption in infrared and visible image fusion tasks.

Unfortunately, directly using structural re-parameterization blocks designed for advanced vision tasks provides little improvement for infrared and visible image fusion. Structural re-parameterization blocks specifically designed for the image fusion task are still required to quickly extract the full wealth of information from the different modal features.

    Therefore, we propose a new image fusion method, FECFusion, which substantially reduces computational resource consumption while maintaining high fusion performance through a well-designed structural re-parameterization technique.

In this paper, FECFusion utilizes end-to-end convolutional neural networks to perform feature extraction, feature fusion and image reconstruction, enabling efficient and straightforward fusion. The network architecture, depicted in Figure 1, consists of three main components: an encoder, a fusion layer and a decoder. The encoder employs a two-branch structure, each branch comprising one convolutional layer and two structural re-parameterization edge convolution blocks, to extract depth features from the infrared and visible images. The fusion layer combines these extracted features, leveraging the complementary information present in the two modalities. Subsequently, the decoder, consisting of three structural re-parameterization edge convolution blocks, reconstructs the hybrid features obtained from the fusion layer to generate the final fused image.

Figure 1.  The overall structure of FECFusion consists of an encoder, a fusion layer (AFM) and a decoder. The infrared image $I_{ir}$ and visible image $I_{vi}$ are simultaneously passed to a two-branch encoder to extract depth features, and to a fusion layer to fuse common and unique features; the result is finally reconstructed by a decoder to obtain the fused image $I_f$. The whole process is guided jointly by the content loss $L_{content}$ and the traditional loss $L_{tradition}$ to generate the fused image.
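To make the data flow concrete, below is a minimal PyTorch sketch of this three-stage pipeline. The class name, channel width and the plain 3 × 3 convolutions standing in for the RECB and AFM (both detailed in the following subsections) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class FECFusionSketch(nn.Module):
    """Illustrative skeleton: two-branch encoder -> fusion layer -> decoder."""
    def __init__(self, ch: int = 16):
        super().__init__()
        # Two-branch encoder: one conv plus two RECB-like blocks per modality
        # (plain 3x3 convolutions stand in for the RECBs described below).
        def branch() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(1, ch, 3, padding=1), nn.LeakyReLU(0.1),
                nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.1),
                nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.1),
            )
        self.enc_ir, self.enc_vi = branch(), branch()
        # Fusion layer: a 1x1 conv over the concatenated features stands in
        # for the attention fusion module (AFM) sketched later.
        self.fuse = nn.Conv2d(2 * ch, ch, 1)
        # Decoder: three RECB-like blocks reconstruct the fused image.
        self.dec = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, i_ir: torch.Tensor, i_vi: torch.Tensor) -> torch.Tensor:
        x_ir, x_vi = self.enc_ir(i_ir), self.enc_vi(i_vi)
        return self.dec(self.fuse(torch.cat([x_ir, x_vi], dim=1)))

# Shape check on dummy single-channel image pairs.
model = FECFusionSketch()
fused = model(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
print(fused.shape)  # torch.Size([1, 1, 64, 64])
```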

To ensure that the fused images better retain the edge feature information of the source images, FECFusion includes a structural re-parameterization edge convolution block (RECB) designed to improve performance in infrared and visible image fusion tasks. In addition, we use an attention fusion module (AFM) to better fuse the feature information of the different modal images extracted by the two branches.

Using standard convolutions to extract infrared and visible feature information has some effect for fusion networks, but it is inferior to complex models in terms of fusion performance; replacing standard convolutions with complex blocks, however, would make the network consume more computational resources. Therefore, the structural re-parameterization technique is introduced in this paper to enrich the characterization capability of the network without increasing its computational resource consumption in the inference stage.

To ensure that the fused images are better able to retain the edge feature information of the source images, we design a structural re-parameterization edge convolution block (RECB) to improve performance in the infrared and visible image fusion task. The specific structure of the RECB is shown in Figure 2.

Figure 2.  The specific design of the structural re-parameterization edge convolution block (RECB). The RECB extracts fine-grained detail information from feature maps.

    In particular, the RECB consists of four elaborated operators as follows.

    1) The branch of standard 3 × 3 convolution

    To guarantee the basic performance of the module, we use a standard 3 × 3 convolution. This convolution is represented as:

$F_n = K_n \otimes X + B_n$, (3.1)

where $F_n$ represents the output feature of the standard 3 × 3 convolution, $K_n$ represents its convolution kernel weights, $X$ represents the input feature and $B_n$ represents its bias.

    2) The branch of feature expansion convolution

This branch improves the representational power for the fusion task by expanding the feature channels, which helps the network extract more feature information. Specifically, it uses a 1 × 1 convolution to expand the channel dimension of the features and a 3 × 3 convolution to extract the feature information, expressed as:

$F_e = K_n \otimes (K_e \otimes X + B_e) + B_n$, (3.2)

where $F_e$ represents the output feature of the feature expansion convolution branch, and $K_e$ and $B_e$ represent the kernel and bias of the 1 × 1 convolution.

    3) The branch of Sobel filter

Edge information is tremendously helpful for improving the performance of the fusion task. Since it is usually difficult for the network to learn the weights of edge detection filters through training, a pre-defined Sobel edge filter is embedded in this branch to extract the first-order spatial derivatives, with only the scaling factors of the filters being learned. Specifically, the input features are first scaled by a 1 × 1 convolution, and then the edge information is extracted by horizontal and vertical Sobel filters:

$D_x = \begin{bmatrix} +1 & 0 & -1 \\ +2 & 0 & -2 \\ +1 & 0 & -1 \end{bmatrix} \quad \text{and} \quad D_y = \begin{bmatrix} +1 & +2 & +1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}$, (3.3)

$F_{D_x} = (S_{D_x} \cdot D_x) \circledast (K_x \otimes X + B_x) + B_{D_x}, \quad F_{D_y} = (S_{D_y} \cdot D_y) \circledast (K_y \otimes X + B_y) + B_{D_y}$, (3.4)

$F_{sobel} = F_{D_x} + F_{D_y}$, (3.5)

where $K_x$, $B_x$ and $K_y$, $B_y$ are the weights and biases of the 1 × 1 convolutions in the horizontal and vertical directions; $S_{D_x}$, $B_{D_x}$ and $S_{D_y}$, $B_{D_y}$ are the scaling parameters and biases with the shape of $C \times 1 \times 1 \times 1$; $\circledast$ and $\otimes$ represent DWConv (depth-wise convolution) and normal convolution, respectively; and $(S_{D_x} \cdot D_x)$, $(S_{D_y} \cdot D_y)$ have the shape $C \times 1 \times 3 \times 3$.
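As a concrete illustration, the following PyTorch sketch implements a Sobel branch of this kind: a learnable 1 × 1 convolution followed by fixed depth-wise Sobel kernels with learnable per-channel scaling factors. Sharing one 1 × 1 convolution for both directions and omitting the DWConv biases are simplifications for brevity; the Laplacian branch below is analogous.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelBranch(nn.Module):
    """Sketch of the RECB Sobel branch (Eqs 3.3-3.5): 1x1 conv scaling,
    then fixed horizontal/vertical Sobel filters applied depth-wise,
    each rescaled by a learnable per-channel factor S."""
    def __init__(self, channels: int):
        super().__init__()
        self.scale = nn.Conv2d(channels, channels, 1)  # K, B (shared for Dx/Dy here)
        dx = torch.tensor([[1., 0., -1.], [2., 0., -2.], [1., 0., -1.]])
        # Fixed kernels with shape (C, 1, 3, 3) for depth-wise convolution.
        self.register_buffer("dx", dx.expand(channels, 1, 3, 3).clone())
        self.register_buffer("dy", dx.t().expand(channels, 1, 3, 3).clone())
        # Learnable scaling parameters S_Dx, S_Dy with shape (C, 1, 1, 1).
        self.s_dx = nn.Parameter(torch.ones(channels, 1, 1, 1))
        self.s_dy = nn.Parameter(torch.ones(channels, 1, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.scale(x)  # 1x1 convolution scaling of the input features
        c = y.shape[1]
        f_dx = F.conv2d(y, self.s_dx * self.dx, padding=1, groups=c)  # F_Dx
        f_dy = F.conv2d(y, self.s_dy * self.dy, padding=1, groups=c)  # F_Dy
        return f_dx + f_dy  # F_sobel
```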

    4) The branch of Laplacian filter

In addition to the Sobel operator for extracting the first-order spatial derivatives, this branch employs a Laplacian edge filter, which is more stable with respect to noise, to extract the second-order spatial derivatives of the image edge information. Similarly, this branch also uses a 1 × 1 convolution for scaling and then applies the Laplacian operator to extract edge information:

$D_{lap} = \begin{bmatrix} 0 & +1 & 0 \\ +1 & -4 & +1 \\ 0 & +1 & 0 \end{bmatrix}$, (3.6)

$F_{lap} = (S_{lap} \cdot D_{lap}) \circledast (K_{lap} \otimes X + B_{lap}) + B'_{lap}$, (3.7)

where $K_{lap}$, $B_{lap}$ are the weights and bias of the 1 × 1 convolution, and $S_{lap}$, $B'_{lap}$ are the scaling factor and bias of the DWConv, respectively.

In addition, unlike structural re-parameterization blocks designed for advanced vision tasks, no BN layer is used in the RECB, because the BN layer would hinder the performance of the fusion network. Finally, the output features of the four branches are summed and mapped through the nonlinear activation layer:

$F = F_n + F_e + F_{sobel} + F_{lap}$, (3.8)

where $F$ is the output feature of the RECB. The nonlinear activation used in this work is LeakyReLU.

The RECB described above is its training-phase structure. After training is completed, the parameters of the four branches are equivalently converted into the parameters of a single 3 × 3 convolution through the structural re-parameterization technique, so that in the inference phase the same result is obtained with only a 3 × 3 convolution.
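The following self-contained demonstration shows the principle behind this collapse on the simplest case, a parallel 3 × 3 + 1 × 1 pair: the 1 × 1 kernel is zero-padded to 3 × 3 and the weights and biases are summed, after which a single convolution reproduces the multi-branch output exactly. The full RECB merge additionally folds in the sequential 1 × 1 convolutions and the fixed Sobel/Laplacian depth-wise kernels, but the idea is the same.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Training-time multi-branch: a 3x3 conv and a parallel 1x1 conv.
c = 8
conv3 = nn.Conv2d(c, c, 3, padding=1)
conv1 = nn.Conv2d(c, c, 1, padding=0)

x = torch.rand(1, c, 32, 32)
y_train = conv3(x) + conv1(x)  # multi-branch output

# Structural re-parameterization: zero-pad the 1x1 kernel so it becomes a
# centred 3x3 kernel, then merge weights and biases into one convolution.
w_merged = conv3.weight + F.pad(conv1.weight, [1, 1, 1, 1])
b_merged = conv3.bias + conv1.bias
y_infer = F.conv2d(x, w_merged, b_merged, padding=1)  # single-conv output

print(torch.allclose(y_train, y_infer, atol=1e-5))  # True (up to float error)
```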

Since these two features come from source images of different modalities, they focus on different scene content and carry both complementary and common information. Thus, the fusion module must attend to fusing both the complementary and the common information of the different modalities.

    In Figure 3, it is evident that detecting pedestrians in visible images at night can be challenging due to inadequate lighting conditions. However, in infrared thermal images, pedestrians are clearly highlighted. Therefore, the key challenge lies in fusing these two features by leveraging the complementary information that exists in only one of the modalities. In the case of a well-illuminated thermal target, such as the vehicle in Figure 3, both cameras are capable of sensing it. During the fusion process, it is important to enhance both features simultaneously. If a method is used that focuses solely on processing the complementary information, there is a risk of weakening one of the features. In order to effectively fuse the infrared and visible features, it is crucial to devise a fusion approach that preserves and enhances the salient information from both modalities. This will ensure that both the infrared highlights and the visible details are effectively integrated, leading to improved detection results.

Figure 3.  Illustrations of registered infrared and visible images, where the infrared and visible images have unique (the red box) and common (the green box) information.

To better fuse the information of the different modalities, element-by-element addition is employed here to extract the complementary information of the hetero-modal images, and element-by-element multiplication to extract their common information:

$X_{add} = X_{vi} + X_{ir}, \quad X_{mul} = X_{vi} \odot X_{ir}$, (3.9)

where $X_{vi}$ and $X_{ir}$ represent the depth features of the visible and infrared images extracted by the encoder, respectively. The element-by-element addition $X_{add}$ adds the visible image features and the thermal target features to accumulate the complementary information of the two modalities, while the element-by-element multiplication $X_{mul}$ multiplies them to enhance the common information of the two modalities.

As in the simple example in Figure 4, 0 represents that the target is not sensed by a sensor and 1 represents that it is. Suppose the infrared and visible sensors can each either sense or miss the target; four cases are generated, and the goal is to retain as much information as possible. Element-by-element addition preserves the target information to the maximum extent (left of Figure 4): the target only needs to be sensed by at least one of the sensors. Element-by-element multiplication filters out the common information (right of Figure 4): the target must be sensed by both sensors at the same time. Once the two types of information are obtained separately, they are preserved by feature concatenation, combining the unique information from both modalities. This approach retains crucial information while effectively combining the features extracted from each sensor.

    Figure 4.  The simple illustration for the element-by-element addition of the complementary information and the element-by-element multiplication of the common information that are extracted from the infrared and visible images.

    Therefore, the attention fusion module (AFM) based on element-by-element addition and element-by-element multiplication is designed to better fuse the information of these two depth features. The specific structure of the AFM is shown in Figure 5.

Figure 5.  The specific design of the attention fusion module (AFM), which is used to fuse unique and common information from different modal images.

The upper branch in the AFM enhances the common information of the different modal features, while the bottom branch enhances the features of each modality through an attention module and aggregates their complementary information by feature summation; the common and complementary information are then concatenated so that both kinds of feature information are retained as much as possible. The AFM is processed as follows:

$Y = Cat(X_{vi} \odot X_{ir}, A(X_{vi}) + A(X_{ir}))$, (3.10)

where $Cat$ represents concatenation and $A$ represents the attention module; CBAM is used in this article.
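A minimal PyTorch sketch of Eq (3.10) is given below. A simple channel-attention gate stands in for the CBAM used in the paper, and sharing one attention module between the two modalities is an assumption made for brevity.

```python
import torch
import torch.nn as nn

class AFMSketch(nn.Module):
    """Sketch of the attention fusion module: the upper branch multiplies
    the two features element-wise to enhance common information, the lower
    branch applies attention A to each feature and sums them to aggregate
    complementary information, and the two results are concatenated."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.attn = nn.Sequential(  # channel-attention stand-in for CBAM
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x_vi: torch.Tensor, x_ir: torch.Tensor) -> torch.Tensor:
        common = x_vi * x_ir                                     # X_mul
        complementary = x_vi * self.attn(x_vi) + x_ir * self.attn(x_ir)
        return torch.cat([common, complementary], dim=1)         # Y

afm = AFMSketch(16)
y = afm(torch.rand(1, 16, 64, 64), torch.rand(1, 16, 64, 64))
print(y.shape)  # torch.Size([1, 32, 64, 64]) -- channels double after Cat
```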

The loss function is the key to guiding the training of deep neural networks towards the desired results. A structural similarity loss is often used to maintain a clear intensity distribution in the fused image. In the fusion task, however, the fused image must be similar to both source images simultaneously, and the two source images carry much complementary information; forcing similarity to both weakens the complementary regions and degrades fusion performance.

In order to better promote the recovery of texture details, this paper uses the content loss $L_{content}$ and the traditional loss $L_{tradition}$ to jointly constrain the training of the network. The total loss $L_{fusion}$ is formulated as:

$L_{fusion} = \lambda L_{tradition} + L_{content}$, (3.11)

where $\lambda$ is the weight coefficient to balance these two losses.

The traditional loss enhances the similarity between the fused image and the two source images, guiding the network to generate fusion results with complete information faster and avoiding fusion results with single, incomplete information. The traditional loss $L_{tradition}$ is calculated as:

$L_{tradition} = \frac{1}{HW} \left\| I_f - 0.5 \left( I_{ir} + I_{vi} \right) \right\|_1$, (3.12)

where $I_f$ represents the fused image, $I_{ir}$ and $I_{vi}$ represent the infrared and visible images, and $\| \cdot \|_1$ refers to the $l_1$-norm.

To promote the fusion of more meaningful information, retaining the saliency of the infrared image and the edge texture information of the source images, a content loss $L_{content}$ with bilateral filtering is designed in this paper. The content loss consists of two parts, the intensity loss $L_{in}$ and the edge gradient loss with bilateral filtering $L_{grad}$:

$L_{content} = \mu_1 L_{in} + \mu_2 L_{grad}$, (3.13)

where $\mu_1$ and $\mu_2$ are the weighting coefficients balancing the two losses.

Among them, the intensity loss $L_{in}$ constrains the overall apparent intensity of the fused image. To better retain the salient target, the pixel intensity of the fused image should be biased towards the maximum intensity of the infrared and visible images. The intensity loss is formulated as:

$L_{in} = \frac{1}{HW} \left\| I_f - Max(I_{ir}, I_{vi}) \right\|_1$, (3.14)

where $Max(\cdot)$ stands for the element-wise maximum.

In addition, to make the network better preserve the edge texture details of the fused image, existing methods use the maximum edge gradient of the source images to constrain the training of the network, but this loss is easily affected by noise in the infrared image. To this end, this paper applies an edge-preserving bilateral filter to denoise the infrared image, thereby reducing noise in the fused image. The edge gradient loss with bilateral filtering is calculated as:

$L_{grad} = \frac{1}{HW} \left\| \, |\nabla I_f| - Max \left( |\nabla Bila(I_{ir})|, |\nabla I_{vi}| \right) \right\|_1$, (3.15)

where $\nabla$ is the gradient operator measuring image texture information (the Sobel operator is used to compute the gradient in this paper), $| \cdot |$ indicates the absolute-value operation and $Bila(\cdot)$ represents the bilateral filter.
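Putting Eqs (3.11)-(3.15) together, the sketch below assembles the total loss in PyTorch. The gradient magnitude is approximated as |g_x| + |g_y| with Sobel kernels, single-channel images are assumed, and the bilaterally filtered infrared image is assumed to be precomputed (e.g., with OpenCV's cv2.bilateralFilter) rather than filtered inside the training loop.

```python
import torch
import torch.nn.functional as F

def sobel_gradient(img: torch.Tensor) -> torch.Tensor:
    """Gradient magnitude |gx| + |gy| via Sobel for (N, 1, H, W) images."""
    kx = torch.tensor([[[[1., 0., -1.], [2., 0., -2.], [1., 0., -1.]]]],
                      device=img.device)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, kx.transpose(-1, -2), padding=1)
    return gx.abs() + gy.abs()

def fusion_loss(i_f, i_ir, i_vi, i_ir_bila, lam=10.0, mu1=12.0, mu2=45.0):
    """Total loss of Eqs (3.11)-(3.15); the weights follow the values listed
    in the experimental settings. `i_ir_bila` is the bilaterally filtered
    infrared image, assumed precomputed."""
    l_trad = F.l1_loss(i_f, 0.5 * (i_ir + i_vi))            # Eq (3.12)
    l_in = F.l1_loss(i_f, torch.max(i_ir, i_vi))            # Eq (3.14)
    l_grad = F.l1_loss(sobel_gradient(i_f),                 # Eq (3.15)
                       torch.max(sobel_gradient(i_ir_bila),
                                 sobel_gradient(i_vi)))
    return lam * l_trad + mu1 * l_in + mu2 * l_grad         # Eqs (3.11), (3.13)
```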

In this paper, FECFusion is trained on the MSRS dataset [37]. Since existing infrared and visible image fusion datasets are small, the MSRS training set is expanded from 1083 to 26,112 image pairs by data augmentation, with augmented training pairs of size 64 pixel × 64 pixel, which basically meets the training requirements. To evaluate the effectiveness of FECFusion, 361 image pairs from the MSRS test set are used for testing. In addition, to more comprehensively evaluate the generalization performance of FECFusion, 42 and 300 image pairs are selected from the TNO [38] and M3FD [39] datasets, respectively, for generalization comparison experiments.

Since none of the current public infrared and visible image fusion datasets provide reference images, the quality of fusion results cannot be evaluated directly against a ground truth. We therefore evaluate the visual results of different algorithms by human subjective perception as a qualitative assessment, and by objective, general-purpose image quality evaluation metrics as a quantitative assessment.

    In this paper, standard deviation (SD) [40], mutual information (MI) [41], visual information fidelity (VIF) [42], sum of correlation differences (SCD) [43], entropy (EN) [44] and Qabf [45] are used. SD evaluates the contrast and distribution of the fused images from a statistical point of view. MI measures the amount of information from the source image to the fused image. VIF reflects the fidelity of the fused information from a human visual point of view. SCD measures the difference between the source image and the fused image. EN measures the amount of information contained in the image. Qabf evaluates the amount of fused edge information from the source image. All the above metrics are positive metrics, and higher values mean better fusion results.
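For illustration, two of these metrics are easy to state exactly. The NumPy sketch below computes SD and EN under their common definitions (standard deviation of intensities and Shannon entropy of the 256-bin grey-level histogram); the exact formulations in the cited papers [40,44] may differ in detail.

```python
import numpy as np

def sd_metric(img: np.ndarray) -> float:
    """Standard deviation: spread/contrast of the image intensities."""
    return float(img.astype(np.float64).std())

def en_metric(img: np.ndarray) -> float:
    """Entropy: Shannon entropy (bits) of the grey-level histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

fused = (np.random.rand(480, 640) * 255).astype(np.uint8)
print(sd_metric(fused), en_metric(fused))
```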

FECFusion is compared with seven fusion algorithms: DenseFuse [46], FusionGAN [47], IFCNN [31], SDNet [32], U2Fusion [48], FLFuse [23] and PIAFusion [37]. All compared algorithms are run with their public code, with the relevant experimental settings kept unchanged. In the hyperparameter settings of the proposed network, the optimizer is Adam, epoch = 10, batch size = 64 and the learning rate is $1 \times 10^{-4}$; the loss function parameters are $\lambda = 10$, $\mu_1 = 12$, $\mu_2 = 45$. The parameters of the bilateral filter are $\sigma_d = 0.05$, $\sigma_r = 8.0$, with a window size of 11 × 11. The training process of FECFusion is summarized in Algorithm 1.

Besides the comparative and generalization experiments, the effectiveness of the RECB and AFM is verified by ablation experiments. In addition, FECFusion is shown to be helpful for advanced vision tasks through segmentation experiments. Finally, we compare the operational efficiency of FECFusion with other methods and compare the computational resource consumption with and without structural re-parameterization. All experiments are conducted on a GeForce RTX 2080Ti 11GB and an Intel Core i5-12600KF, using the PyTorch deep learning framework.

    Algorithm 1: Training procedure
    Input: Infrared images Iir and visible images Ivi
    Output: Fused images If
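The body of Algorithm 1 did not survive extraction. The following is a hypothetical reconstruction of the training loop, consistent with the hyperparameters listed above (Adam, 10 epochs, batch size 64, learning rate 1 × 10⁻⁴); FECFusionSketch and fusion_loss are the illustrative sketches from earlier in this section, not the authors' released code, and random tensors stand in for the MSRS training crops.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

model = FECFusionSketch()  # illustrative stand-in for FECFusion
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
data = TensorDataset(torch.rand(256, 1, 64, 64),   # infrared crops I_ir
                     torch.rand(256, 1, 64, 64),   # visible crops I_vi
                     torch.rand(256, 1, 64, 64))   # bilaterally filtered I_ir
loader = DataLoader(data, batch_size=64, shuffle=True)

for epoch in range(10):
    for i_ir, i_vi, i_ir_bila in loader:
        i_f = model(i_ir, i_vi)                         # fused image
        loss = fusion_loss(i_f, i_ir, i_vi, i_ir_bila)  # Eqs (3.11)-(3.15)
        optim.zero_grad()
        loss.backward()
        optim.step()
# After training, the RECB branches would be merged into single 3x3
# convolutions (structural re-parameterization) before deployment.
```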

Generalizing across different scenes is an important challenge for image fusion algorithms. From the MSRS dataset, we chose two daytime and two nighttime images to evaluate subjective visual performance; the comparison results are shown in Figures 6 and 7. We mark texture detail information with green boxes and highlighted target information with red boxes.

Figure 6.  Qualitative comparison of FECFusion with 7 advanced algorithms on the daytime scenes (00537D and 00633D) from the MSRS dataset. For a clear view of comparative detail, we have selected a textured region (the green box) and a salient region (the red box) in each image.
Figure 7.  Qualitative comparison of FECFusion with 7 advanced algorithms on the nighttime scenes (01023N and 01042N) from the MSRS dataset. For a clear view of comparative detail, we have selected a textured region (the green box) and a salient region (the red box) in each image.

In the daytime scene depicted in Figure 6, we can observe the performance of the different fusion methods. DenseFuse, SDNet and U2Fusion fail to effectively highlight the infrared target and do not fully utilize the background information present in the visible image. FusionGAN manages to highlight the salient target to some extent, but as the green box shows, it blurs the background. Meanwhile, IFCNN and FLFuse weaken the texture details of the background, also evident from the green box. Only PIAFusion and the method proposed in this paper successfully integrate the relevant information, effectively preserving both the infrared target and the background texture details. The proposed method thus exhibits superior performance in the daytime scene, achieving a balanced fusion result that highlights the target while retaining the background information.

    In the night scene depicted in Figure 7, the visible image contains limited texture information, while the infrared image contains both background texture details and a salient target. Many existing fusion methods tend to overemphasize the information from one modality, making it challenging to achieve satisfactory results across different scenes.

Among the fusion methods examined, DenseFuse, SDNet and U2Fusion exhibit a modality bias that weakens the infrared targets. FusionGAN introduces artifacts into the fused images. Only IFCNN, PIAFusion and the method proposed in this paper are capable of generating fused images with higher contrast in dark night scenes. FLFuse, which is also a lightweight method, performs poorly in this scenario: it fails to fully leverage the characteristics of both modal images, leading to degradation of both the background and the infrared target. The method proposed in this paper therefore demonstrates good performance in night scenes as well. It effectively captures the characteristics of both modal images and achieves better contrast, preserving the infrared target while retaining background details.

In this section, we perform quantitative evaluation on the MSRS dataset using the six metrics described above. The comparison of the different methods is shown in Table 1, where bold indicates the best result and underline the second best.

    Table 1.  Quantitative comparisons of the six metrics, i.e., SD, MI, VIF, SCD, EN and Qabf, on image pairs from the MSRS dataset. Bold indicates the best result and underline represents the second best result.
| Algorithm | SD | MI | VIF | SCD | EN | Qabf |
|---|---|---|---|---|---|---|
| DenseFuse | 7.0692 | 2.5409 | 0.6752 | 1.3296 | 5.8397 | 0.3552 |
| FusionGAN | 5.4694 | 1.9155 | 0.4253 | 0.8015 | 5.2260 | 0.1208 |
| IFCNN | 7.5947 | 2.7399 | 0.8283 | 1.6658 | 6.3109 | 0.5540 |
| SDNet | 5.3258 | 1.7398 | 0.3758 | 0.8364 | 4.8891 | 0.2944 |
| U2Fusion | 5.6231 | 1.8953 | 0.3967 | 1.0034 | 4.7525 | 0.2908 |
| FLFuse | 6.4790 | 2.0697 | 0.4860 | 1.1189 | 5.5157 | 0.3198 |
| PIAFusion | _7.9268_ | **4.1774** | _0.9072_ | _1.7395_ | _6.4304_ | **0.6324** |
| Ours | **8.1413** | _3.6805_ | **0.9282** | **1.8153** | **6.5104** | _0.5619_ |


From Table 1, it is clear that our method shows significant advantages in four metrics, SD, VIF, SCD and EN, while its performance in MI and Qabf is second only to PIAFusion. The best SD value indicates that our method achieves high contrast between the infrared target and the background; the highest VIF value indicates that the fused images generated by our method are more consistent with the human visual system; and the highest SCD and EN values indicate that our method generates fused images with more edge details and more realistic content than other methods. In conclusion, the quantitative results show that our method can generate fused images carrying more information while reducing computational effort.

In the fusion task, the fusion model is required to have strong generalization capability across different scenes. Therefore, we selected 42 and 300 pairs of images from the TNO and M3FD datasets, respectively, to evaluate the generalization ability of FECFusion. A qualitative comparison of the different algorithms on the TNO and M3FD datasets is presented in Figures 8 and 9.

    Figure 8.  The visualisation results of FECFusion with 7 advanced algorithms on the TNO dataset. For a clear view of comparative detail, we selected a textured region (the green box) and a salient region (the red box) in each image.
    Figure 9.  The visualisation results of FECFusion with 7 advanced algorithms on the M3FD dataset. For a clear view of comparative detail, we selected a textured region (the green box) and a salient region (the red box) in each image.

    From the figures, it is evident that DenseFuse, SDNet, U2Fusion and FLFuse tend to blend the background and the target together, making it difficult to distinguish the salient infrared target. FusionGAN, on the other hand, exhibits high overlap with the infrared image and lacks the inclusion of background information. In comparison, IFCNN and PIAFusion show performance similar to the proposed method in this paper. However, it is important to note that these methods may not match the inference speed of the proposed method, which offers faster processing capabilities. Therefore, based on objective evaluation and considering the faster inference speed, the proposed method in this paper demonstrates competitive performance and provides a promising solution for infrared and visible image fusion tasks.

    The results of quantitative metrics for the generalization experiments are shown in Table 2. The metrics performance of our method is the best or the second best on both datasets, which indicates that our method can both preserve the texture details of the source image and improve the contrast of the target. In conclusion, the qualitative and quantitative results show that FECFusion performs excellently in generalization. In addition, the method in this paper effectively maintains the intensity distribution of the target region and preserves the texture details of the background region, benefiting from the proposed RECB and AFM.

Table 2.  Quantitative comparisons of the six metrics, i.e., SD, MI, VIF, SCD, EN and Qabf, on the TNO and M3FD datasets. Bold indicates the best result and underline represents the second best result.

| Dataset | Algorithm | SD | MI | VIF | SCD | EN | Qabf |
|---|---|---|---|---|---|---|---|
| TNO | DenseFuse | 8.5765 | 2.1987 | 0.6704 | 1.5916 | 6.3422 | 0.3427 |
| TNO | FusionGAN | 8.6703 | 2.3353 | 0.6541 | 1.3788 | 6.5578 | 0.2339 |
| TNO | IFCNN | 9.0058 | 2.4154 | 0.7996 | 1.6850 | 6.7413 | 0.5066 |
| TNO | SDNet | 9.0679 | 2.2606 | 0.7592 | 1.5587 | 6.6947 | 0.4290 |
| TNO | U2Fusion | 8.8553 | 1.8730 | 0.6787 | 1.5862 | 6.4230 | 0.4245 |
| TNO | FLFuse | _9.2628_ | 2.1925 | 0.8084 | _1.7308_ | 6.3658 | 0.4177 |
| TNO | PIAFusion | 9.1093 | _3.2464_ | _0.8835_ | 1.6540 | _6.8937_ | **0.5556** |
| TNO | Ours | **9.2721** | **3.7136** | **0.9496** | **1.7312** | **6.9856** | _0.5311_ |
| M3FD | DenseFuse | 8.6130 | 2.8911 | 0.6694 | 1.5051 | 6.4264 | 0.3709 |
| M3FD | FusionGAN | 8.8571 | 2.9921 | 0.5176 | 1.1292 | 6.4750 | 0.2530 |
| M3FD | IFCNN | 9.2815 | 2.9560 | 0.7738 | 1.5353 | 6.6966 | 0.6053 |
| M3FD | SDNet | 8.8855 | 3.1798 | 0.6329 | 1.3914 | 6.6102 | 0.5005 |
| M3FD | U2Fusion | 9.0141 | 2.7531 | 0.7061 | _1.5488_ | 6.6285 | 0.5303 |
| M3FD | FLFuse | 8.7580 | 3.2425 | 0.6986 | 1.4975 | 6.5744 | 0.2640 |
| M3FD | PIAFusion | **10.1639** | **4.6942** | _0.9300_ | 1.3363 | **6.8036** | _0.6348_ |
| M3FD | Ours | _9.9899_ | _4.3123_ | **0.9350** | **1.5502** | _6.7685_ | **0.6440** |


To verify the effectiveness of adding the RECB and AFM to FECFusion, ablation experiments are designed in this section to further analyze the role of the two proposed modules in the network model. First, since the RECB equates a multi-branch structure to a single-branch structure through structural re-parameterization, in the ablation experiments the RECB is directly replaced for training with its re-parameterized structure, i.e., a single ordinary convolution. For the AFM ablation, the network is trained with direct feature concatenation instead of the AFM. The experiment is performed on the MSRS dataset, and the results are shown in Figure 10, where the background texture is marked with a green solid box and the infrared salient targets with a red solid box.

    Figure 10.  Visualized results of ablation on the MSRS dataset. From left to right: visible images, infrared images, fused results of FECFusion, FECFusion without RECB, FECFusion without AFM and FECFusion without both RECB and AFM.

From the experimental results, it can be seen that without the RECB, the fused images are blurred at the edges to some extent, which shows that this module contributes to maintaining the edge information of the fusion results. Without the AFM, the saliency of the fusion results decreases. When both the RECB and AFM are removed, the fusion results show decreased target saliency and blurred edge textures. The evaluation metrics for the ablation experiments are shown in Table 3. We observe that removing the RECB and AFM decreases the evaluation metric values to different degrees, proving the effectiveness of each part of FECFusion.

    Table 3.  The results of ablation study for RECB and AFM on the MSRS dataset. The bolded values indicate the best results.
| RECB | AFM | SD | MI | VIF | SCD | EN | Qabf |
|---|---|---|---|---|---|---|---|
| ✓ | ✓ | **8.1413** | **3.6805** | **0.9282** | **1.8153** | **6.5104** | **0.5619** |
| ✗ | ✓ | 7.4551 | 3.0648 | 0.7234 | 1.6117 | 6.0100 | 0.4488 |
| ✓ | ✗ | 7.7954 | 3.0922 | 0.8355 | 1.8060 | 6.2925 | 0.5098 |
| ✗ | ✗ | 6.6285 | 2.6117 | 0.5500 | 1.2793 | 5.6158 | 0.4362 |


To verify the execution efficiency of the proposed algorithm, the average processing time of forward propagation of each fusion method is tested on the MSRS dataset, and the comparison results are shown in Table 4, where bold indicates the best result and underline the second best. It can be seen that our method is more efficient than most methods; while FLFuse is faster than ours, our method produces better fusion results, so this difference in running efficiency is acceptable.

Table 4.  Mean running time (in seconds) of all methods on the MSRS dataset. Bold indicates the best result and underline represents the second best result.

| Algorithm | DenseFuse | FusionGAN | IFCNN | SDNet | U2Fusion | FLFuse | PIAFusion | Ours |
|---|---|---|---|---|---|---|---|---|
| Running time (s) | 0.374 | 0.082 | 0.019 | 0.014 | 0.155 | **0.001** | 0.081 | _0.002_ |


In addition, with reference to the image size used in the MSRS dataset, a 640 × 480 × 1 input is used for the network's forward propagation, and the parameters and weights of the network model are computed with the TorchSummary library. The forward propagation time, running storage space, number of parameters, weight size and the cumulative pixel-wise deviation of the fusion results before and after structural re-parameterization are compared experimentally. The comparison results are shown in Table 5.

Table 5.  Model properties of FECFusion with and without structural re-parameterization.

| Structural re-parameterization | Forward time | Forward pass size | Params | Params size | Cumulative pixel deviation of results |
|---|---|---|---|---|---|
| Without | 0.0299 s | 5451.57 MB | 146,389 | 0.56 MB | / |
| With | 0.0020 s | 2601.57 MB | 145,477 | 0.55 MB | 1 × 10⁻⁴ |


Comparing the results shows that there is almost no difference in the fusion results of the network before and after structural re-parameterization, indicating that structural re-parameterization effectively reduces the running time, running storage space, number of parameters and weight size at negligible deviation.
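For reference, numbers in the style of Table 5 can be produced with the TorchSummary library mentioned above. The snippet below profiles the illustrative FECFusionSketch from earlier (not the authors' released model) on a 640 × 480 single-channel input pair; summary() reports per-layer output shapes, parameter counts and the estimated forward/backward pass size.

```python
from torchsummary import summary  # pip install torchsummary

model = FECFusionSketch()
# Two input sizes, one per modality, in (C, H, W) order for a 640 x 480 image.
summary(model, [(1, 480, 640), (1, 480, 640)], device="cpu")
```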

Semantic segmentation is an important general-purpose computer vision task, and its performance reflects the semantic information preserved in the fused image. To verify that the fused images are helpful for subsequent vision tasks, DeepLabV3+ [49], a semantic segmentation model pre-trained on the Cityscapes dataset [50], is used in this section to evaluate the fused images; the semantic segmentation results are shown in Figure 11.

    Figure 11.  Segmentation results for infrared, visible, and fused images from the MSRS dataset. The segmentation model is Deeplabv3+, pre-trained on the Cityscapes dataset.

From the experimental results, the semantic segmentation results on the fused images are consistently somewhat better than those on the infrared and visible images alone. This is especially true at night, when lighting conditions are poor: visible sensors have difficulty capturing enough information, and semantic segmentation models often struggle to detect warm targets such as pedestrians. To some extent, this shows that image fusion enhances subsequent vision tasks.
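As a pipeline illustration only, fused images can be scored with an off-the-shelf DeepLabV3 model as sketched below; torchvision's pretrained checkpoint differs from the Cityscapes-pretrained DeepLabV3+ used in the paper, so the class set and accuracy are not comparable.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Load a pretrained segmentation model (COCO/VOC classes, not Cityscapes).
model = deeplabv3_resnet50(weights="DEFAULT").eval()
fused_rgb = torch.rand(1, 3, 480, 640)  # fused image replicated to 3 channels
with torch.no_grad():
    logits = model(fused_rgb)["out"]    # (1, num_classes, H, W)
pred = logits.argmax(dim=1)             # per-pixel class map
print(pred.shape)                       # torch.Size([1, 480, 640])
```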

In this paper, we propose FECFusion, an infrared and visible image fusion network based on fast edge convolution. The network consists of several key components. First, the main part of the network employs the RECB to extract features, including detailed texture features and salient image features. These extracted features are then fused using the AFM, and the fused image is reconstructed. After the completion of training, the network undergoes a structural re-parameterization operation to optimize the inference speed and storage space required while preserving the original training effectiveness. Through subjective and objective experimental results, we demonstrate that FECFusion achieves superior fusion results compared to other algorithms. It offers better real-time performance and requires a smaller inference memory footprint, making it more suitable for practical engineering applications that involve the design of custom hardware-accelerated circuits. In future research, we will explore specific applications of FECFusion on mobile devices and further optimize its performance. This includes enhancing the network's ability to learn multi-scale image features and achieving better fusion results with lower computational resource consumption.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

This work is supported by the National Natural Science Foundation of China (No. 62266025).

    The authors declare there is no conflict of interest.




    [63] Sette A, Crotty S (2021) Adaptive immunity to SARS-CoV-2 and COVID-19. Cell 184: 861-880. https://doi.org/10.1016/j.cell.2021.01.007
    [64] Jordan SC (2021) Innate and adaptive immune responses to SARS-CoV-2 in humans: relevance to acquired immunity and vaccine responses. Clin Exp Immunol 204: 310-320. https://doi.org/10.1111/cei.13582
    [65] Cohen SA, Kellogg C, Equils O (2021) Neutralizing and cross-reacting antibodies: implications for immunotherapy and SARS-CoV-2 vaccine development. Hum Vaccin Immunother 17: 84-87. https://doi.org/10.1080/21645515.2020.1787074
    [66] Winheim E, Rinke L, Lutz K, et al. (2021) Impaired function and delayed regeneration of dendritic cells in COVID-19. PloS Pathog 17: e1009742. https://doi.org/10.1371/journal.ppat.1009742
    [67] Carsetti R, Zaffina S, Mortari EP, et al. (2020) Different innate and adaptive immune responses to SARS-CoV-2 infection of asymptomatic, mild, and severe cases. Front Immunol 11: 610300. https://doi.org/10.3389/fimmu.2020.610300
    [68] Liao M, Liu Y, Yuan J, et al. (2020) Single-cell landscape of bronchoalveolar immune cells in patients with COVID-19. Nat Med 26: 842-844. https://doi.org/10.1038/s41591-020-0901-9
    [69] Paces J, Strizova Z, Smrz D, et al. (2020) COVID-19 and the immune system. Physiol Res 69: 379-388. https://doi.org/10.33549/physiolres.934492
    [70] de Candia P, Prattichizzo F, Garavelli S, et al. (2021) T cells: Warriors of SARS-CoV-2 infection. Trends Immunol 42: 18-30. https://doi.org/10.1016/j.it.2020.11.002
    [71] Fernandes Q, Inchakalody VP, Merhi M, et al. (2022) Emerging COVID-19 variants and their impact on SARS-CoV-2 diagnosis, therapeutics and vaccines. Ann Med 54: 524-540. https://doi.org/10.1080/07853890.2022.2031274
    [72] Hwang YC, Lu RM, Su SC, et al. (2022) Monoclonal antibodies for COVID-19 therapy and SARS-CoV-2 detection. J Biomed Sci 29: 1. https://doi.org/10.1186/s12929-021-00784-w
    [73] Nguyen NH, Nguyen CT (2019) Pharmacological effects of ginseng on infectious diseases. Inflammopharmacology 27: 871-883. https://doi.org/10.1007/s10787-019-00630-4
    [74] Alsayari A, Muhsinah AB, Almaghaslah D, et al. (2021) Pharmacological efficacy of ginseng against respiratory tract infections. Molecules 26: 4095. https://doi.org/10.3390/molecules26134095
    [75] Dong W, Farooqui A, Leon AJ, et al. (2017) Inhibition of influenza A virus infection by ginsenosides. PloS One 12: e0171936. https://doi.org/10.1371/journal.pone.0171936
    [76] Lee JS, Lee YN, Lee YT, et al. (2015) Ginseng protects against respiratory syncytial virus by modulating multiple immune cells and inhibiting viral replication. Nutrients 7: 1021-1036. https://doi.org/10.3390/nu7021021
    [77] Lee JS, Cho MK, Hwang HS, et al. (2014) Ginseng diminishes lung disease in mice immunized with formalin-inactivated respiratory syncytial virus after challenge by modulating host immune responses. J Interferon Cytokine Res 34: 902-914. https://doi.org/10.1089/jir.2013.0093
    [78] Yoo DG, Kim MC, Park MK, et al. (2012) Protective effect of Korean red ginseng extract on the infections by H1N1 and H3N2 influenza viruses in mice. J Med Food 15: 855-862. https://doi.org/10.1089/jmf.2012.0017
    [79] Lee WS, Rhee DK (2021) Corona-Cov-2 (COVID-19) and ginseng: Comparison of possible use in COVID-19 and influenza. J Ginseng Res 45: 535-537. https://doi.org/10.1016/j.jgr.2020.12.005
    [80] Boopathi V, Nahar J, Murugesan M, et al. (2023) In silico and in vitro inhibition of host-based viral entry targets and cytokine storm in COVID-19 by ginsenoside compound K. Heliyon 9: e19341. https://doi.org/10.1016/j.heliyon.2023.e19341
    [81] Seo SH (2022) Ginseng protects ACE2-transgenic mice from SARS-CoV-2 infection. Front Biosci 27: 180. https://doi.org/10.31083/j.fbl2706180
    [82] Cho IH (2012) Effects of Panax ginseng in neurodegenerative diseases. J Ginseng Res 36: 342-353. https://doi.org/10.5142/jgr.2012.36.4.342
    [83] de Oliveira Zanuso B, Dos Santos ARO, Miola VFB, et al. (2022) Panax ginseng and aging related disorders: A systematic review. Exp Gerontol 161: 111731. https://doi.org/10.1016/j.exger.2022.111731
    [84] Szczuka D, Nowak A, Zakłos-Szyda M, et al. (2019) American Ginseng (Panax quinquefolium L.) as a source of bioactive phytochemicals with pro-health properties. Nutrients 11: 1041. https://doi.org/10.3390/nu11051041
    [85] Kaiser R, Leunig A, Pekayvaz K, et al. (2021) Self-sustaining IL-8 loops drive a prothrombotic neutrophil phenotype in severe COVID-19. JCI Insight 6: e150862. https://doi.org/10.1172/jci.insight.150862
    [86] Liu T, Zhang J, Yang Y, et al. (2019) The role of interleukin-6 in monitoring severe case of coronavirus disease 2019. EMBO Mol Med 12: e12421. https://doi.org/10.15252/emmm.202012421
    [87] Laforge M, Elbim C, Frère C, et al. (2020) Tissue damage from neutrophil-induced oxidative stress in COVID-19. Nat Rev Immunol 20: 515-516. https://doi.org/10.1038/s41577-020-0407-1
    [88] Saba E, Jeong D, Irfan M, et al. (2018) Anti-inflammatory activity of Rg3-enriched Korean Red Ginseng extract in murine model of sepsis. Evid Based Complement Alternat Med 2018: 6874692. https://doi.org/10.1155/2018/6874692
    [89] Huang WC, Huang TH, Yeh KW, et al. (2021) Ginsenoside Rg3 ameliorates allergic airway inflammation and oxidative stress in mice. J Ginseng Res 45: 654-664. https://doi.org/10.1016/j.jgr.2021.03.002
    [90] Tu C, Wan B, Zeng Y (2020) Ginsenoside Rg3 alleviates inflammation in a rat model of myocardial infarction via the SIRT1/NF-κB pathway. Exp Ther Med 20: 238. https://doi.org/10.3892/etm.2020.9368
    [91] Yang S, Li F, Lu S, et al. (2022) Ginseng root extract attenuates inflammation by inhibiting the MAPK/NF-κB signaling pathway and activating autophagy and p62-Nrf2-Keap1 signaling in vitro and in vivo. J Ethnopharmacol 283: 114739. https://doi.org/10.1016/j.jep.2021.114739
    [92] Yi YS (2022) Potential benefits of ginseng against COVID-19 by targeting inflammasomes. J Ginseng Res 46: 722-730. https://doi.org/10.1016/j.jgr.2022.03.008
    [93] Jung EM, Lee GS (2022) Korean Red Ginseng, a regulator of NLRP3 inflammasome, in the COVID-19 pandemic. J Ginseng Res 46: 331-336. https://doi.org/10.1016/j.jgr.2022.02.003
    [94] Han BC, Ahn H, Lee J, et al. (2017) Nonsaponin fractions of Korean Red Ginseng extracts prime activation of NLRP3 inflammasome. J Ginseng Res 41: 513-523. https://doi.org/10.1016/j.jgr.2016.10.001
    [95] Feng J, Fang B, Zhou D, et al. (2021) Clinical effect of traditional Chinese medicine Shenhuang granule in critically ill patients with COVID-19: A single-centered, retrospective, observational study. J Microbiol Biotechnol 31: 380-386. https://doi.org/10.4014/jmb.2009.09029
    [96] Kang S, Min H (2012) Ginseng, the “Immunity Boost”: The effects of Panax ginseng on immune system. J Ginseng Res 36: 354-368. https://doi.org/10.5142/jgr.2012.36.4.354
    [97] Qu DF, Yu HJ, Liu Z, et al. (2011) Ginsenoside Rg1 enhances immune response induced by recombinant Toxoplasma gondii SAG1 antigen. Vet Parasitol 179: 28-34. https://doi.org/10.1016/j.vetpar.2011.02.008
    [98] Xu ML, Kim HJ, Choi YR, et al. (2012) Intake of korean red ginseng extract and saponin enhances the protection conferred by vaccination with inactivated influenza a virus. J Ginseng Res 36: 396-402. https://doi.org/10.5142/jgr.2012.36.4.396
    [99] Rhee DK (2022) COVID-19 infection and ginseng: Predictive influenza virus strains and non-predictive COVID-19 vaccine strains. J Ginseng Res 47: 347-348. https://doi.org/10.1016/j.jgr.2022.12.007
    [100] Kim JH (2012) Cardiovascular diseases and Panax ginseng: A review on molecular mechanisms and medical applications. J Ginseng Res 36: 16-26. https://doi.org/10.5142/jgr.2012.36.1.16
    [101] Hossain MA, Kim JH (2022) Possibility as role of ginseng and ginsenosides on inhibiting the heart disease of COVID-19: A systematic review. J Ginseng Res 46: 321-330. https://doi.org/10.1016/j.jgr.2022.01.003
    [102] Lee YY, Quah Y, Shin JH, et al. (2022) COVID-19 and Panax ginseng: Targeting platelet aggregation, thrombosis and the coagulation pathway. J Ginseng Res 46: 175-182. https://doi.org/10.1016/j.jgr.2022.01.002
    [103] Quah Y, Lee YY, Lee SJ, et al. (2022) In silico investigation of Panax ginseng lead compounds against COVID-19 associated platelet activation and thromboembolism. J Ginseng Res 47: 283-290. https://doi.org/10.1016/j.jgr.2022.09.001
    [104] Irfan M, Jeong D, Kwon HW, et al. (2018) Ginsenoside-Rp3 inhibits platelet activation and thrombus formation by regulating MAPK and cyclic nucleotide signaling. Vasc Pharmacol 109: 45-55. https://doi.org/10.1016/j.vph.2018.06.002
    [105] Jeong D, Irfan M, Kim SD, et al. (2017) Ginsenoside Rg3-enriched red ginseng extract inhibits platelet activation and in vivo thrombus formation. J Ginseng Res 41: 548-555. https://doi.org/10.1016/j.jgr.2016.11.003
    [106] Yi XQ, Li T, Wang JR, et al. (2010) Total ginsenosides increase coronary perfusion flow in isolated rat hearts through activation of PI3K/Akt-eNOS signaling. Phytomedicine 17: 1006-1015. https://doi.org/10.1016/j.phymed.2010.06.012
    [107] Irfan M, Lee YY, Lee KJ, et al. (2021) Comparative antiplatelet and antithrombotic effects of red ginseng and fermented red ginseng extracts. J Ginseng Res 46: 387-395. https://doi.org/10.1016/j.jgr.2021.05.010
    [108] Kang SY, Kim SH, Schini VB, et al. (1995) Dietary ginsenosides improve endothelium dependent relaxation in the thoracic aorta of hypercholesterolemic rabbit. Gen Pharmacol 26: 483-487. https://doi.org/10.1016/0306-3623(95)94002-X
    [109] Wang Z, Li YF, Han XY, et al. (2018) Kidney protection effect of ginsenoside re and its underlying mechanisms on cisplatin-induced kidney injury. Cell Physiol Biochem 48: 2219-2229. https://doi.org/10.1159/000492562
    [110] Karunasagara S, Hong GL, Park SR, et al. (2020) Korean red ginseng attenuates hyperglycemia-induced renal inflammation and fibrosis via accelerated autophagy and protects against diabetic kidney disease. J Ethnopharmacol 254: 112693. https://doi.org/10.1016/j.jep.2020.112693
    [111] Mariage PA, Hovhannisyan A, Panossian AG (2020) Efficacy of panax ginseng meyer herbal preparation HRG80 in preventing and mitigating stress-induced failure of cognitive functions in healthy subjects: A pilot, randomized, double-blind, placebo-controlled crossover trial. Pharmaceuticals 13: 57. https://doi.org/10.3390/ph13040057
    [112] Teitelbaum J, Goudie S (2021) An open-label, pilot trial of HRG80™ red ginseng in chronic fatigue syndrome, fibromyalgia, and post-viral fatigue. Pharmaceuticals 15: 43. https://doi.org/10.3390/ph15010043
    [113] Yuan HD, Kim JT, Kim SH, et al. (2012) Ginseng and diabetes: the evidences from in vitro, animal and human studies. J Ginseng Res 36: 27-39. https://doi.org/10.5142/jgr.2012.36.1.27
    [114] Yang L, Zou H, Gao Y, et al. (2020) Insights into gastrointestinal microbiota-generated ginsenoside metabolites and their bioactivities. Drug Metab Rev 52: 125-138. https://doi.org/10.1080/03602532.2020.1714645
    [115] Lee JI, Park KS, Cho IH (2019) Panax ginseng: a candidate herbal medicine for autoimmune disease. J Ginseng Res 43: 342-348. https://doi.org/10.1016/j.jgr.2018.10.002
    [116] Zhang M, Ren H, Li K, et al. (2021) Therapeutic effect of various ginsenosides on rheumatoid arthritis. BMC Complement Med Ther 21: 149. https://doi.org/10.1186/s12906-021-03302-5
    [117] Iqbal H, Rhee DK (2020) Ginseng alleviates microbial infections of the respiratory tract: a review. J Ginseng Res 44: 194-204. https://doi.org/10.1016/j.jgr.2019.12.001
    [118] Wang L, Huang Y, Yin G, et al. (2020) Antimicrobial activities of Asian ginseng, American ginseng, and notoginseng. Phytother Res 34: 1226-1236. https://doi.org/10.1002/ptr.6605
    [119] WHO, WHO Director-General's opening remarks at the media briefing on COVID-19. World Health Organization (2020) . Available from: https://www.who.int/director-general/speeches/detail/who-director-general-s-opening-remarks-at-the-media-briefing-on-covid-19---11-march-2020
    [120] WHO, COVID-19 Epidemiological Update - 29 September 2023. World Health Organization (2023) . Available from: https://www.who.int/publications/m/item/covid-19-epidemiological-update---29-september-2023
    [121] Vitiello A, Ferrara F, Troiano V, et al. (2021) COVID-19 vaccines and decreased transmission of SARS-CoV-2. Inflammopharmacology 29: 1357-1360. https://doi.org/10.1007/s10787-021-00847-2
    [122] Nisar B, Sultan A, Rbab SL (2017) Comparison of medicinally important natural products versus synthetic drugs—A short commentary. Nat Prod Chem Res 6: 308. https://doi.org/10.4172/2329-6836.1000308
    [123] Lin L, Hsu W, Lin C (2014) Antiviral natural products and herbal medicines. J Tradit Complement Med 4: 24-35. https://doi.org/10.4103/2225-4110.124335
© 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)