
Plastics have quickly become an integral part of modern life. Due to excessive production and improper waste disposal, they are recognized as contaminants present in practically all habitat types. Although many polymers exist, polyethylene terephthalate (PET) is of particular concern due to its abundance in the environment. A solution that is both cost-effective and ecologically friendly is needed to address this pollutant. Microbial depolymerizing enzymes could offer a biological avenue for plastic degradation, though their full potential is yet to be uncovered. The purpose of this study was to (1) use plate-based screening methods to investigate the plastic degradation potential of marine bacteria of the order Enterobacterales collected from various organismal and environmental sources, and (2) perform genome-based analysis to identify polyesterases potentially related to PET degradation. A total of 126 bacterial isolates were obtained from the strain collection of RD3, Research Unit Marine Symbioses (GEOMAR), and sequentially tested for esterase and polyesterase activity, in combination referred to here as PETase-like activity. The results show that members of the microbial families Alteromonadaceae, Shewanellaceae, and Vibrionaceae, derived from marine sponges and bryozoans, are the most promising candidates within the order Enterobacterales. Furthermore, 389 putative hydrolases of the α/β superfamily were identified in 23 analyzed genomes, of which 22 were sequenced for this study. Several candidates showed similarities with known PETases, indicating underlying enzymatic potential within the order Enterobacterales for PET degradation.
Citation: Denisse Galarza–Verkovitch, Onur Turak, Jutta Wiese, Tanja Rahn, Ute Hentschel, Erik Borchert. Bioprospecting for polyesterase activity relevant for PET degradation in marine Enterobacterales isolates[J]. AIMS Microbiology, 2023, 9(3): 518-539. doi: 10.3934/microbiol.2023027
[1] Qian Zhang, Haigang Li, Ming Li, Lei Ding. Feature extraction of face image based on LBP and 2-D Gabor wavelet transform. Mathematical Biosciences and Engineering, 2020, 17(2): 1578-1592. doi: 10.3934/mbe.2020082
[2] Fang Zhu, Wei Liu. A novel medical image fusion method based on multi-scale shearing rolling weighted guided image filter. Mathematical Biosciences and Engineering, 2023, 20(8): 15374-15406. doi: 10.3934/mbe.2023687
[3] Haohao Xu, Yuchen Gong, Xinyi Xia, Dong Li, Zhuangzhi Yan, Jun Shi, Qi Zhang. Gabor-based anisotropic diffusion with lattice Boltzmann method for medical ultrasound despeckling. Mathematical Biosciences and Engineering, 2019, 16(6): 7546-7561. doi: 10.3934/mbe.2019379
[4] Michael James Horry, Subrata Chakraborty, Biswajeet Pradhan, Maryam Fallahpoor, Hossein Chegeni, Manoranjan Paul. Factors determining generalization in deep learning models for scoring COVID-CT images. Mathematical Biosciences and Engineering, 2021, 18(6): 9264-9293. doi: 10.3934/mbe.2021456
[5] Auwalu Saleh Mubarak, Zubaida Said Ameen, Fadi Al-Turjman. Effect of Gaussian filtered images on Mask RCNN in detection and segmentation of potholes in smart cities. Mathematical Biosciences and Engineering, 2023, 20(1): 283-295. doi: 10.3934/mbe.2023013
[6] Jimin Yu, Jiajun Yin, Shangbo Zhou, Saiao Huang, Xianzhong Xie. An image super-resolution reconstruction model based on fractional-order anisotropic diffusion equation. Mathematical Biosciences and Engineering, 2021, 18(5): 6581-6607. doi: 10.3934/mbe.2021326
[7] Chen Yue, Mingquan Ye, Peipei Wang, Daobin Huang, Xiaojie Lu. SRV-GAN: A generative adversarial network for segmenting retinal vessels. Mathematical Biosciences and Engineering, 2022, 19(10): 9948-9965. doi: 10.3934/mbe.2022464
[8] Hao Wang, Guangmin Sun, Kun Zheng, Hui Li, Jie Liu, Yu Bai. Privacy protection generalization with adversarial fusion. Mathematical Biosciences and Engineering, 2022, 19(7): 7314-7336. doi: 10.3934/mbe.2022345
[9] Hui Yao, Yuhan Wu, Shuo Liu, Yanhao Liu, Hua Xie. A pavement crack synthesis method based on conditional generative adversarial networks. Mathematical Biosciences and Engineering, 2024, 21(1): 903-923. doi: 10.3934/mbe.2024038
[10] Wei-wei Jiang, Guang-quan Zhou, Ka-Lee Lai, Song-yu Hu, Qing-yu Gao, Xiao-yan Wang, Yong-ping Zheng. A fast 3-D ultrasound projection imaging method for scoliosis assessment. Mathematical Biosciences and Engineering, 2019, 16(3): 1067-1081. doi: 10.3934/mbe.2019051
At present, image color rendering, a major branch of image processing, has attracted much attention. With the development of deep learning, image color rendering based on neural networks has gradually become a research hotspot [1,2,3,4,5]. Traditional color rendering methods require manual intervention and place high requirements on reference images; moreover, when the structure and color of the image are complex, the rendering effect is not ideal [6,7,8,9,10]. Color rendering methods based on deep learning can be easily deployed in an actual production environment and overcome the limitations of the traditional methods [11,12,13]. By training a neural network model on a corresponding dataset [14,15], an image can be rendered automatically according to the model, without being affected by human or other factors [16,17,18,19].
Larsson et al. [20] used a convolutional neural network that takes the brightness of the image as input and decomposes the color and saturation of the image with a hypercolumn model to realize color rendering. Iizuka et al. [21] combined the low-dimensional and global features of the image through a fusion layer in a convolutional neural network to generate image colors and process images of any resolution. Zhang et al. [22] designed an appropriate loss function to handle the multi-modal uncertainty in color rendering and maintain color diversity. However, when grayscale image features are extracted with the above-mentioned methods, up-sampling is adopted to keep the image size consistent, resulting in the loss of image information. Moreover, these network structures cannot extract and understand complex image features well, so the rendering effect is limited [23,24,25].
Isola et al. [26] improved conditional generative adversarial networks (CGAN) to achieve image-to-image translation. Their pix2pix model can realize conversion between different image domains; for example, color rendering can be realized by learning the mapping between grayscale and color images [27,28]. However, the generative adversarial network (GAN) underlying the pix2pix model suffers from training instability. Moreover, current deep learning based image rendering methods are not good at rendering robust (low-quality) images. The Gabor filter can readily extract texture information at all scales and directions of the image, and it reduces the influence of illumination changes and noise to a certain extent.
Therefore, we propose a color rendering method for robust images using a Gabor filter based improved pix2pix model. The contributions of this paper are mainly three-fold:
(1) The improved pix2pix model not only completes image rendering automatically with good visual effect, but also achieves more stable training and better image quality.
(2) A Gabor filter is added to enhance the robustness of the model when rendering degraded images.
(3) The metrics from a series of experiments show that the proposed method performs better on robust images.
The rest of the paper is organized as follows. Section 2 introduces the background, including the Gabor filter and the pix2pix model. Section 3 describes the method and its design details. Section 4 presents the experiments, comparisons, and image quality evaluation. Section 5 concludes the paper and outlines future work.
The Fourier transform is a powerful tool in signal processing: it transforms images from the spatial domain to the frequency domain and extracts features that are hard to obtain in the spatial domain. However, after the Fourier transform, the frequency features of different image locations are mixed together, whereas the Gabor filter can extract spatially localized frequency features, making it an effective texture detection tool [29,30]. The Gabor filter is obtained by multiplying a Gaussian envelope with a complex sinusoid [31,32,33]; it is defined as
$$g(x,y;\lambda,\theta,\varphi,\sigma,\gamma)=\exp\!\left(-\frac{x'^2+\gamma^2 y'^2}{2\sigma^2}\right)\exp\!\left(i\left(2\pi\frac{x'}{\lambda}+\varphi\right)\right) \tag{2.1}$$
$$g_{\mathrm{real}}(x,y;\lambda,\theta,\varphi,\sigma,\gamma)=\exp\!\left(-\frac{x'^2+\gamma^2 y'^2}{2\sigma^2}\right)\cos\!\left(2\pi\frac{x'}{\lambda}+\varphi\right) \tag{2.2}$$
$$g_{\mathrm{imag}}(x,y;\lambda,\theta,\varphi,\sigma,\gamma)=\exp\!\left(-\frac{x'^2+\gamma^2 y'^2}{2\sigma^2}\right)\sin\!\left(2\pi\frac{x'}{\lambda}+\varphi\right) \tag{2.3}$$
where x′ = x cos θ + y sin θ and y′ = −x sin θ + y cos θ. Here, x, y denote the pixel coordinates, λ the wavelength of the filter, θ the orientation of the Gabor kernel, φ the phase offset, σ the standard deviation of the Gaussian envelope, and γ the spatial aspect ratio.
In order to make full use of the characteristics of Gabor filters, it is necessary to design filters with different directions and scales. In this study, the Gabor filter extracts the texture features of the image at 6 scales and in 4 directions: the scales are 7, 9, 11, 13, 15 and 17, and the directions are 0°, 45°, 90° and 135°, as shown in Figure 1(a). Effective texture feature sets are extracted from the filter outputs; the extracted sets are shown in Figure 1(b), with 24 texture feature maps in total.
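As an illustration, the real-part kernel of Eq. (2.2) and the 6 × 4 filter bank can be sketched in a few lines of numpy. The choices of σ (tied to the wavelength) and γ are illustrative assumptions, and treating each "scale" value as both kernel size and wavelength is our simplification, not a detail stated in the paper.

```python
import numpy as np

def gabor_kernel(ksize, wavelength, theta, phi=0.0, sigma=None, gamma=0.5):
    """Real part of the Gabor kernel (Eq. 2.2): Gaussian envelope times cosine."""
    if sigma is None:
        sigma = 0.56 * wavelength  # common heuristic, an assumption here
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_p = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinate x'
    y_p = -x * np.sin(theta) + y * np.cos(theta)   # rotated coordinate y'
    envelope = np.exp(-(x_p**2 + gamma**2 * y_p**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * x_p / wavelength + phi)

# Bank of 6 scales x 4 directions, as in the paper
scales = [7, 9, 11, 13, 15, 17]
directions = [0, 45, 90, 135]
bank = [gabor_kernel(s, wavelength=s, theta=np.deg2rad(d))
        for s in scales for d in directions]   # 24 kernels in total
```

Convolving the grayscale image with each kernel and collecting the responses yields the 24 texture feature maps of Figure 1(b).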
At present, image rendering based on generative adversarial networks [34] attracts much attention because it can directly generate color images using learned mapping relations, and it is widely used in image processing, text processing, natural language processing and other fields. The pix2pix model [26] is an image-to-image translation model based on generative adversarial networks that can synthesize images and generate color images well. Its main features are as follows.
(1) Both the generator and the discriminator use Conv-BatchNorm-ReLU units, i.e., a convolutional layer followed by batch normalization and a ReLU activation.
(2) The input of the pix2pix model is a specified image that conditions the output: for label-to-photo translation the input is the label image, and for grayscale-to-color translation the input is the grayscale image. The grayscale image serves as the input of the generator, and the generator's input and output together form the input of the discriminator. This establishes the correspondence between input and output images, realizes user control, and completes image color rendering.
(3) PatchGAN is used as the discriminator of the pix2pix model. Specifically, the image is divided into several fixed-size blocks, the authenticity of each block is judged, and the average value is taken as the final output. A U-net-like network structure is adopted as the generator, with skip connections added between layer i and layer n−i, where n is the total number of layers of the network. The contracting path captures context information, while the symmetric expanding path enables precise localization.
(4) The loss function of the pix2pix model is composed of an L1 loss and the Vanilla GAN loss, as follows, where x is the input image, y the expected output, G the generator, and D the discriminator:
$$G^{*}=\arg\min_{G}\max_{D}\,\mathcal{L}_{cGAN}(G,D)+\lambda\,\mathcal{L}_{L1}(G) \tag{2.4}$$
$$\mathcal{L}_{cGAN}(G,D)=\mathbb{E}_{x,y}\!\left[\log D(x,y)\right]+\mathbb{E}_{x}\!\left[\log\left(1-D(x,G(x))\right)\right] \tag{2.5}$$
$$\mathcal{L}_{L1}(G)=\mathbb{E}_{x,y}\!\left[\lVert y-G(x)\rVert_{1}\right] \tag{2.6}$$
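The two loss terms can be checked numerically. The sketch below evaluates Eqs. (2.5) and (2.6) on toy arrays standing in for discriminator scores and images; the L1 weight λ = 100 is the value commonly used with pix2pix, taken here as an assumption.

```python
import numpy as np

def cgan_loss(d_real, d_fake):
    # Eq. (2.5): E[log D(x,y)] + E[log(1 - D(x,G(x)))], D outputs in (0,1)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

def l1_loss(y, g_x):
    # Eq. (2.6): expected L1 distance between target and generator output
    return np.mean(np.abs(y - g_x))

# Toy stand-ins for a target image y and a generator output G(x)
y   = np.array([0.2, 0.8, 0.5])
g_x = np.array([0.1, 0.9, 0.5])
d_real = np.array([0.9])   # discriminator score on a real pair
d_fake = np.array([0.2])   # discriminator score on a generated pair
lam = 100.0                # assumed L1 weight

# Value of the combined objective of Eq. (2.4) at these toy points
total = cgan_loss(d_real, d_fake) + lam * l1_loss(y, g_x)
```

In training, G minimizes and D maximizes the cGAN term, while the λ-weighted L1 term ties the output to the ground truth.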
To address the detail problems of generative adversarial network based image color rendering in complex scenes, this paper proposes an image color rendering method for robust images using a Gabor filter based improved pix2pix model. The network framework is shown in Figure 2 and the rendering process in Figure 3. After training on the selected dataset, the trained generator is used for color rendering.
Firstly, we preprocess the image with the Gabor filter and extract its texture feature set as input for training and validation. Comparing the 24 Gabor texture feature maps (6 scales and 4 directions), the map with scale 7 and direction 0° gives the best color rendering effect. Secondly, this paper utilizes the existing pix2pix architecture for image translation, performing color rendering by learning the mapping between grayscale and color images. Finally, although the pix2pix model solves some problems of generative adversarial networks, it still suffers from training instability on large-scale image datasets. Therefore, the least squares loss of LSGAN [35] is used in the objective function of the pix2pix model, and a penalty term similar to that of WGAN_GP [36] is added. A series of comparison experiments shows that the improved framework performs better on the rendering of robust images.
The generator in a generative adversarial network aims to make its output distribution close to the real data distribution, while the discriminator must distinguish real data from the generator's output. Traditional generative adversarial networks use the cross-entropy (Vanilla GAN) loss as the loss function; even when classification is correct, gradient vanishing occurs when the generator is updated [36,37]. LSGAN instead uses the least squares loss as the objective function: it penalizes fake samples that are classified as real but lie far from the decision boundary, dragging them back toward the boundary and thereby improving the quality of the generated images.
Therefore, compared with traditional generative adversarial networks, the images generated by LSGAN have higher quality and the training process is more stable, so the least squares loss function is adopted in the framework of this paper.
$$\begin{cases}\min_{D}V_{LSGAN}(D)=\frac{1}{2}\mathbb{E}_{x\sim P_{data}(x)}\!\left[(D(x)-b)^{2}\right]+\frac{1}{2}\mathbb{E}_{z\sim P_{z}(z)}\!\left[(D(G(z))-a)^{2}\right]\\[4pt]\min_{G}V_{LSGAN}(G)=\frac{1}{2}\mathbb{E}_{z\sim P_{z}(z)}\!\left[(D(G(z))-c)^{2}\right]\end{cases} \tag{3.1}$$
where x is the input image, y the expected output, G the generator, D the discriminator, and z the noise; a and b are the labels of the generated and real samples, respectively, and c is the value the generator wants the discriminator to assign to generated data so that it is judged real.
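A numerical sketch of Eq. (3.1) follows, using the common label choice a = 0, b = 1, c = 1; this particular choice is an assumption, since the paper leaves a, b, c abstract.

```python
import numpy as np

# Assumed LSGAN labels: a = 0 (fake), b = 1 (real), c = 1
a, b, c = 0.0, 1.0, 1.0

def lsgan_d_loss(d_real, d_fake):
    # Discriminator objective of Eq. (3.1): push D(x) -> b, D(G(z)) -> a
    return 0.5 * np.mean((d_real - b) ** 2) + 0.5 * np.mean((d_fake - a) ** 2)

def lsgan_g_loss(d_fake):
    # Generator objective of Eq. (3.1): push D(G(z)) -> c
    return 0.5 * np.mean((d_fake - c) ** 2)

d_real = np.array([0.9, 0.8])   # critic scores on real samples
d_fake = np.array([0.3, 0.1])   # critic scores on generated samples
d_loss = lsgan_d_loss(d_real, d_fake)
g_loss = lsgan_g_loss(d_fake)
```

Because the quadratic penalty grows with distance from the target label, confident-but-wrong samples receive large gradients instead of the saturated gradients of the cross-entropy loss.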
Generative adversarial networks can learn good data distributions, but they suffer from training instability, and improving their training stability is a hot topic in deep learning. Wasserstein generative adversarial networks (WGAN) [38] use the Wasserstein distance, whose value function has better theoretical properties than the JS divergence but requires constraining the Lipschitz constant of the discriminator. This largely solves the training instability and mode collapse problems of generative adversarial networks and ensures the diversity of generated samples [39]. WGAN_GP further improves on WGAN: its penalty term is derived from the Wasserstein distance, with penalty coefficient 10.
The objective function of WGAN_GP is as follows, combining the original critic loss with the gradient penalty term.
$$L=\mathbb{E}_{\tilde{x}\sim P_{g}}\!\left[D(\tilde{x})\right]-\mathbb{E}_{x\sim P_{r}}\!\left[D(x)\right]+\lambda\,\mathbb{E}_{\hat{x}\sim P_{\hat{x}}}\!\left[\left(\lVert\nabla_{\hat{x}}D(\hat{x})\rVert_{2}-1\right)^{2}\right] \tag{3.2}$$
where E_{x̃∼P_g}[D(x̃)] − E_{x∼P_r}[D(x)] is the original critic loss, the λ-weighted term is the gradient penalty of WGAN_GP, x̂ = t x̃ + (1−t) x with 0 ≤ t ≤ 1, and λ is the penalty coefficient.
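The penalty term of Eq. (3.2) can be illustrated without autograd by choosing a critic whose gradient is known in closed form. D(x) = Σx² (so ∇D = 2x) and λ = 10 below are illustrative assumptions; in practice the gradient is obtained by automatic differentiation.

```python
import numpy as np

def gradient_penalty(x_real, x_fake, grad_d, lam=10.0, t=None):
    """Eq. (3.2) penalty, evaluated at x_hat = t*x_fake + (1-t)*x_real.
    grad_d returns the critic gradient; autograd provides this in practice."""
    if t is None:
        t = np.random.uniform(0.0, 1.0)
    x_hat = t * x_fake + (1.0 - t) * x_real     # random interpolation point
    g = grad_d(x_hat)
    return lam * (np.linalg.norm(g) - 1.0) ** 2  # push gradient norm toward 1

# Toy critic D(x) = sum(x**2), so grad D(x) = 2x (written by hand here)
grad_d = lambda x: 2.0 * x
x_real = np.array([1.0, 0.0])
x_fake = np.array([0.0, 0.0])
p_mid = gradient_penalty(x_real, x_fake, grad_d, t=0.5)  # gradient norm 1 here
p_end = gradient_penalty(x_real, x_fake, grad_d, t=1.0)  # gradient norm 0 here
```

At t = 0.5 the interpolate has gradient norm exactly 1, so the penalty vanishes; at t = 1 the norm is 0 and the penalty is λ.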
To verify the effectiveness and accuracy of the proposed method, we conducted extensive experiments on the summer dataset [40], with 1231 training images and 309 test images. Experiment 1 tests the effect of the Gabor filter and of different objective functions in the pix2pix model. Experiment 2 tests the rendering effect when different Gabor texture feature maps are given as input. Experiment 3 tests whether the penalty term should be added to the discriminator. Experiment 4 tests the rendering effect on low-quality (robust) images, produced by adding noise and dimming the image brightness, to assess the robustness of the model.
Training parameters: The experiment was performed on a PC with an Intel(R) Core(TM) i7-9750H CPU @ 2.60 GHz, an NVIDIA GeForce GTX 1650 graphics card, and CUDA+cuDNN for accelerated training. The proposed method is implemented in Python 3.7 with the PyTorch framework. The number of training iterations is 200, the optimizer is Adam, the batch size is 1, the learning rate is 0.0002, and the number of worker processes is 4.
Network structures and implementation details: All the models we train are designed for 256 × 256 images. The input image of the model is 512 × 256: the left half is the original color image and the right half is the texture feature map produced by the Gabor filter, as shown in Figure 4. By default, the pix2pix model uses a U-net-like generator, PatchGAN, and the Vanilla GAN loss.
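Under the layout just described, each 512 × 256 training sample can be split back into its two 256 × 256 halves with simple slicing; the left/right assignment follows the description above.

```python
import numpy as np

def split_pair(ab_image):
    """Split a side-by-side training pair into its two square halves:
    left = original color image, right = Gabor texture feature map."""
    h, w = ab_image.shape[:2]
    assert w == 2 * h, "expected a side-by-side pair (width = 2 x height)"
    return ab_image[:, :h], ab_image[:, h:]

pair = np.zeros((256, 512, 3))          # stand-in for one dataset sample
color, texture = split_pair(pair)       # two 256 x 256 x 3 halves
```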
Evaluation metrics: To reflect the color rendering quality of different models more objectively, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) indexes are adopted to evaluate the rendered images [41,42]; both are commonly used in image processing. PSNR is an objective standard for evaluating the quality of the produced color image. It is computed as follows:
$$PSNR=10\log_{10}\frac{(2^{n}-1)^{2}}{MSE} \tag{4.1}$$
$$MSE=\frac{1}{H\times W}\sum_{i=1}^{H}\sum_{j=1}^{W}\left[X(i,j)-Y(i,j)\right]^{2} \tag{4.2}$$
where H and W represent the height and width of the image, respectively, (i,j) indexes each pixel, n is the number of bits per pixel, and X and Y are the two images being compared.
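Eqs. (4.1) and (4.2) translate directly to numpy; for 8-bit images the peak value is 255.

```python
import numpy as np

def psnr(x, y, n_bits=8):
    """PSNR per Eqs. (4.1)-(4.2) for images with n-bit pixels."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mse = np.mean((x - y) ** 2)          # Eq. (4.2)
    if mse == 0:
        return float("inf")              # identical images
    peak = (2 ** n_bits - 1) ** 2
    return 10.0 * np.log10(peak / mse)   # Eq. (4.1), in dB

a = np.zeros((4, 4))
b = np.full((4, 4), 255.0)
worst = psnr(a, b)   # maximal 8-bit error gives 0 dB
```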
Because the PSNR index has limitations and cannot fully reflect the consistency between measured image quality and human visual perception, the SSIM index is used for further comparison. SSIM measures the similarity of two images; by comparing the image rendered by the model with the original color image, the effectiveness and accuracy of the algorithm are demonstrated. It is computed as follows:
$$SSIM=\frac{(2\mu_{x}\mu_{y}+c_{1})(2\sigma_{xy}+c_{2})}{(\mu_{x}^{2}+\mu_{y}^{2}+c_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+c_{2})} \tag{4.3}$$
where μx and μy are the mean values of the real and generated images, σx² and σy² their variances, and σxy their covariance; c1 = (k1L)² and c2 = (k2L)² are constants that maintain stability, L is the dynamic range of the pixel values, k1 = 0.01, and k2 = 0.03.
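A single-window version of Eq. (4.3), computed over the whole image, is sketched below; practical SSIM implementations average this quantity over local windows, so this global form is a simplification.

```python
import numpy as np

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """Eq. (4.3) evaluated once over the whole image (single window)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    return num / den

img = np.arange(16.0).reshape(4, 4)
same = ssim_global(img, img)            # identical images score 1
inv = ssim_global(img, 255.0 - img)     # inverted image scores lower
```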
In this study, the Gabor filter extracts texture features at 6 scales and in 4 directions. For convenience, following the texture feature set in Figure 1(b), the images are numbered from left to right and top to bottom; the direction is denoted d and the scale s, as shown in Figure 5. For example, G1 means "s = 7, d = 0°", i.e., scale 7 and direction 0°, and G6 means "s = 17, d = 0°", i.e., scale 17 and direction 0°. By default, the pix2pix model uses the Vanilla GAN loss. Based on the pix2pix model, the variant using the least squares loss function is called LSpix (least squares pix2pix), and the variant using the n-th Gabor texture map is called pixGn (pix2pix Gabor n), n = 1, 6, 7, 13.
To test the effect of the Gabor filter and of different objective functions in the pix2pix model, we divided the experiment into adding the Gabor filter (Figures 6(c),(e)), not adding it (Figures 6(b),(d)), and using the least squares loss (Figures 6(d),(e)) or the Vanilla GAN loss (Figures 6(b),(c)). Comparing the images in Figure 6 confirms that the rendering effect with least squares loss and Gabor preprocessing, i.e., the LSpixG1 model, is better. This is because the Gabor filter preprocesses the images and obtains their multi-scale, multi-direction features, enabling good and fast feature extraction and learning during training. Moreover, compared with other loss functions, the least squares loss saturates at only one point, making gradient vanishing less likely.
Tables 1 and 2 compare the distortion and structural similarity between the rendered images and the ground truth, showing the maximum, minimum, and average indexes; they complement Figure 6. The LSpix model scores highest in maximum and average PSNR, 3.591 dB and 1.083 dB higher than the pix2pix model, respectively. Meanwhile, the LSpix model scores highest in SSIM, 1.618%, 15.649% and 3.848% higher than the pix2pix model for the maximum, minimum and average values, respectively. This shows that our model is closer to the ground truth in structure and reproduces colors more faithfully.
Network | MAX PSNR | MIN PSNR | AVE PSNR |
pix2pix | 29.204 | 11.126 | 23.024 |
pixG1 | 28.225 | 10.477 | 19.981 |
LSpix | 32.795 | 11.003 | 24.107 |
pixG6 | 27.874 | 9.883 | 20.012 |
LSpixG6 | 32.616 | 10.632 | 21.409 |
LSpixG1 | 32.524 | 11.238 | 21.354 |
Note: Bold font is the best value for each column. |
Network | MAX SSIM | MIN SSIM | AVE SSIM |
pix2pix | 92.888 | 52.474 | 82.163 |
pixG1 | 86.101 | 36.592 | 69.145 |
LSpix | 94.506 | 68.123 | 86.011 |
pixG6 | 85.625 | 33.117 | 68.845 |
LSpixG6 | 91.312 | 56.897 | 78.387 |
LSpixG1 | 91.757 | 54.785 | 78.485 |
Note: Bold font is the best value for each column. |
To test the rendering effect when different Gabor texture feature maps are input, we use different feature maps as input. Figure 7 shows how different Gabor texture maps are rendered when the Vanilla GAN loss is the objective function of the pix2pix model. Figures 5(c),(d), i.e., scale 7 with direction 45° or 90°, contain incomplete details of the original image, so the input texture features are incomplete and the generated images are blurred, as shown in Figures 7(a),(b). Although the 7th and 13th texture maps were used together as the training set (pixG7+G13 model), for a total of 1231 × 2 images, the rendering effect was not significantly improved, as shown in Figure 7(b). Evidently, comparing the images in Figure 7, the visual effect of Figures 7(c)–(e) is good and not blurred. Tables 3 and 4 show the evaluation indexes for the different input feature maps. The data show that an incomplete texture feature map is not a desirable input.
Network | MAX PSNR | MIN PSNR | AVE PSNR |
pixG1 | 28.225 | 10.477 | 19.981 |
pixG6 | 27.874 | 9.883 | 20.012 |
pixG7 | 27.565 | 9.232 | 17.600 |
pixG1+G13 | 28.960 | 9.947 | 20.682 |
Note: Bold font is the best value for each column. |
Network | MAX SSIM | MIN SSIM | AVE SSIM |
pixG1 | 86.101 | 36.592 | 69.145 |
pixG6 | 85.625 | 33.117 | 68.845 |
pixG7 | 91.188 | 4.964 | 39.057 |
pixG1+G13 | 87.615 | 42.562 | 71.630 |
Note: Bold font is the best value for each column. |
To compare the operating efficiency of inputting different texture maps, the training times are listed in Table 5, in hours. Regardless of whether the Gabor filter was used and of which texture map was input, the training time was around 9 hours. However, if two texture maps are used for training, as with G1 and G13 in the pixG1+G13 model, the training set doubles and so does the training time. Even though the result shown in Figure 7(d) is good, this approach is therefore not desirable. Moreover, when filtering, we extract multi-scale, multi-direction features and remove redundant information; if important information is removed, the results are inevitably affected, yielding blurred images.
Model | pix2pix | pixG1 | pixG6 | pixG7 | pixG7+G13 | pixG1+G13 |
Time | 8.72 | 9.00 | 8.76 | 8.43 | 15.27 | 16.52 |
Figure 8 shows the effect of adding a penalty term to the discriminator, based on the pixG1 model. Figure 8(a) shows the result without the penalty term and Figure 8(b) with it. Clearly, Figure 8(b) has fewer errors in detail and better visual effect. The penalty term applies a gradient penalty at interpolated samples so that the model satisfies the Lipschitz constraint. Adding a WGAN_GP-style penalty term largely solves the training instability and mode collapse problems of the GAN model and ensures the diversity of generated samples.
Tables 6 and 7 show the evaluation indexes with and without the penalty term. With the penalty term added, the LSpix_GP model achieves the highest minimum PSNR, 0.904 dB higher than the original pix2pix model. Evidently, among the texture maps extracted by the Gabor filter, the map with scale 7 and direction 0° trains best. Furthermore, when the objective function is the least squares loss, the average SSIM and overall performance improve. With the penalty term added, the maximum and average SSIM scores are the highest, 1.753% and 1.083% higher than those of the pix2pix model. Therefore, the images rendered by the LSpixG1_GP model are better than those of the original model.
Network | MAX PSNR | MIN PSNR | AVE PSNR |
pix2pix | 29.204 | 11.126 | 23.024 |
pixG1 | 28.225 | 10.477 | 19.981 |
pixG6 | 27.874 | 9.883 | 20.012 |
LSpix | 32.795 | 11.003 | 24.107 |
LSpixG1 | 32.524 | 11.238 | 21.354 |
LSpix_GP | 31.859 | 12.030 | 24.019 |
LSpixG1_GP | 32.342 | 11.514 | 21.290 |
LSpixG6 | 32.616 | 10.632 | 21.409 |
LSpixG6_GP | 32.113 | 11.067 | 21.384
Note: Bold font is the best value for each column. |
Network | MAX SSIM | MIN SSIM | AVE SSIM |
pix2pix | 92.888 | 52.474 | 82.163 |
pixG1 | 86.101 | 36.592 | 69.145 |
pixG6 | 85.625 | 33.117 | 68.845 |
LSpix | 94.506 | 68.123 | 86.011 |
LSpixG1 | 91.757 | 54.785 | 78.485 |
LSpix_GP | 94.641 | 67.250 | 85.967 |
LSpixG1_GP | 90.772 | 54.308 | 78.067 |
LSpixG6 | 91.312 | 56.897 | 78.387 |
LSpixG6_GP | 90.941 | 52.740 | 78.236 |
Note: Bold font is the best value for each column. |
To compare the operating efficiency with different objective functions and with the penalty term added, the running times are listed in Table 8, in hours. For example, LSpixG6_GP denotes using the least squares loss, adding the penalty term, with direction 0° and scale 17. Regardless of whether the Gabor filter was used, which texture map was input, and whether the Vanilla GAN loss or the least squares loss was the objective function, the training time was approximately 9 h. Although adding the filter alone barely changes the runtime, using the filter together with the penalty term increases it by 2–3 h. The algorithm in this study nevertheless adopts the LSpixG1_GP model, i.e., the Gabor texture map with scale 7 and direction 0° as input, the least squares loss, and the penalty term.
Model | pix2pix | LSpix | LSpixG1 | LSpix_GP | LSpixG1_GP | LSpixG6 | LSpixG6_GP |
Time | 8.72 | 8.66 | 8.66 | 8.72 | 11.17 | 8.72 | 11.72 |
Note: Bold font is the best value for each row. |
To evaluate the robustness of the model when rendering robust images, the rendering effect on low-quality images was tested by adding noise and dimming the image brightness, as shown in Figure 9. For the noise test, Gaussian noise with mean 0 and variance 10 is added to the image. For the low-illumination test, a power operation with exponent 2.5 is applied to the image pixels to generate low-illumination images.
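Both degradations are easy to reproduce. Whether the power operation acts on intensities normalized to [0, 1] is our assumption, as the paper does not state the pixel scale; the noise parameters (mean 0, variance 10) come from the description above.

```python
import numpy as np

def add_gaussian_noise(img, mean=0.0, var=10.0, seed=0):
    """Gaussian noise with mean 0 and variance 10, as in the noise test."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(mean, np.sqrt(var), img.shape)
    return np.clip(noisy, 0, 255)

def dim_brightness(img, power=2.5):
    """Low-illumination image: per-pixel power with exponent 2.5,
    applied on intensities normalized to [0, 1] (our assumption)."""
    return 255.0 * (img.astype(np.float64) / 255.0) ** power

img = np.full((8, 8), 128.0)
noisy = add_gaussian_noise(img)
dark = dim_brightness(img)   # mid-gray pixels darken noticeably
```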
We use the PSNR metric to evaluate each model's rendering of low-quality images. As shown in Table 9, the LSpix model renders noisy images with higher quality. As shown in Table 10, models with the Gabor filter generally render low-illumination images well. With the Gabor filter, the least squares objective, and the penalty term, the image quality of the LSpixG1_GP model is higher than that of the original model. This is because the Gabor filter avoids noise interference to a certain extent and, when extracting features, captures depth information of the image, reducing the influence of illumination. Clearly, the proposed method is robust when color rendering low-quality images.
Network | MAX PSNR | MIN PSNR | AVE PSNR |
pix2pix | 29.489 | 10.770 | 22.528 |
pixG1 | 25.657 | 10.562 | 18.665 |
LSpix | 29.655 | 11.805 | 22.528 |
LSpixG1 | 27.650 | 12.409 | 19.942 |
LSpix_GP | 29.516 | 11.950 | 22.504 |
LSpixG1_GP | 27.306 | 11.548 | 19.966 |
Note: Bold font is the best value for each column. |
Network | MAX PSNR | MIN PSNR | AVE PSNR |
pix2pix | 21.977 | 8.723 | 12.535 |
pixG1 | 26.158 | 7.864 | 14.441 |
LSpix | 21.457 | 9.119 | 12.579 |
LSpixG1 | 24.565 | 7.946 | 14.171 |
LSpix_GP | 21.948 | 9.334 | 12.563 |
LSpixG1_GP | 24.337 | 7.886 | 14.127 |
Note: Bold font is the best value for each column. |
We proposed a novel image color rendering method using a Gabor filter based improved pix2pix model for robust images and demonstrated its feasibility and superiority on a variety of tasks. It renders robust images automatically and handles low-quality images robustly. The experimental results on the summer dataset demonstrate that the proposed method achieves high-quality image color rendering. At present, the image resolution of deep learning based image processing is limited, which restricts the practical application of rendering methods. In the future, we will focus on increasing the resolution of the network's input images.
This work was partially supported by the National Natural Science Foundation of China (No. 62002285 and No. 61902311).
The authors declare there is no conflict of interest.
| Network | MAX PSNR (dB) | MIN PSNR (dB) | AVE PSNR (dB) |
| --- | --- | --- | --- |
| pix2pix | 29.204 | 11.126 | 23.024 |
| pixG1 | 28.225 | 10.477 | 19.981 |
| LSpix | **32.795** | 11.003 | **24.107** |
| pixG6 | 27.874 | 9.883 | 20.012 |
| LSpixG6 | 32.616 | 10.632 | 21.409 |
| LSpixG1 | 32.524 | **11.238** | 21.354 |

Note: Bold font is the best value for each column.

| Network | MAX SSIM | MIN SSIM | AVE SSIM |
| --- | --- | --- | --- |
| pix2pix | 92.888 | 52.474 | 82.163 |
| pixG1 | 86.101 | 36.592 | 69.145 |
| LSpix | **94.506** | **68.123** | **86.011** |
| pixG6 | 85.625 | 33.117 | 68.845 |
| LSpixG6 | 91.312 | 56.897 | 78.387 |
| LSpixG1 | 91.757 | 54.785 | 78.485 |

Note: Bold font is the best value for each column.

| Network | MAX PSNR (dB) | MIN PSNR (dB) | AVE PSNR (dB) |
| --- | --- | --- | --- |
| pixG1 | 28.225 | **10.477** | 19.981 |
| pixG6 | 27.874 | 9.883 | 20.012 |
| pixG7 | 27.565 | 9.232 | 17.600 |
| pixG1+G13 | **28.960** | 9.947 | **20.682** |

Note: Bold font is the best value for each column.

| Network | MAX SSIM | MIN SSIM | AVE SSIM |
| --- | --- | --- | --- |
| pixG1 | 86.101 | 36.592 | 69.145 |
| pixG6 | 85.625 | 33.117 | 68.845 |
| pixG7 | **91.188** | 4.964 | 39.057 |
| pixG1+G13 | 87.615 | **42.562** | **71.630** |

Note: Bold font is the best value for each column.

| Model | pix2pix | pixG1 | pixG6 | pixG7 | pixG7+G13 | pixG1+G13 |
| --- | --- | --- | --- | --- | --- | --- |
| Time | 8.72 | 9.00 | 8.76 | 8.43 | 15.27 | 16.52 |
| Network | MAX PSNR (dB) | MIN PSNR (dB) | AVE PSNR (dB) |
| --- | --- | --- | --- |
| pix2pix | 29.204 | 11.126 | 23.024 |
| pixG1 | 28.225 | 10.477 | 19.981 |
| pixG6 | 27.874 | 9.883 | 20.012 |
| LSpix | **32.795** | 11.003 | **24.107** |
| LSpixG1 | 32.524 | 11.238 | 21.354 |
| LSpix_GP | 31.859 | **12.030** | 24.019 |
| LSpixG1_GP | 32.342 | 11.514 | 21.290 |
| LSpixG6 | 32.616 | 10.632 | 21.409 |
| LSpixG6_GP | 32.113 | 11.067 | 21.384 |

Note: Bold font is the best value for each column.

| Network | MAX SSIM | MIN SSIM | AVE SSIM |
| --- | --- | --- | --- |
| pix2pix | 92.888 | 52.474 | 82.163 |
| pixG1 | 86.101 | 36.592 | 69.145 |
| pixG6 | 85.625 | 33.117 | 68.845 |
| LSpix | 94.506 | **68.123** | **86.011** |
| LSpixG1 | 91.757 | 54.785 | 78.485 |
| LSpix_GP | **94.641** | 67.250 | 85.967 |
| LSpixG1_GP | 90.772 | 54.308 | 78.067 |
| LSpixG6 | 91.312 | 56.897 | 78.387 |
| LSpixG6_GP | 90.941 | 52.740 | 78.236 |

Note: Bold font is the best value for each column.

| Model | pix2pix | LSpix | LSpixG1 | LSpix_GP | LSpixG1_GP | LSpixG6 | LSpixG6_GP |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Time | 8.72 | **8.66** | **8.66** | 8.72 | 11.17 | 8.72 | 11.72 |

Note: Bold font is the best value for each row.