
[1] Qian Zhang, Haigang Li, Ming Li, Lei Ding. Feature extraction of face image based on LBP and 2-D Gabor wavelet transform. Mathematical Biosciences and Engineering, 2020, 17(2): 1578-1592. doi: 10.3934/mbe.2020082
[2] Fang Zhu, Wei Liu. A novel medical image fusion method based on multi-scale shearing rolling weighted guided image filter. Mathematical Biosciences and Engineering, 2023, 20(8): 15374-15406. doi: 10.3934/mbe.2023687
[3] Haohao Xu, Yuchen Gong, Xinyi Xia, Dong Li, Zhuangzhi Yan, Jun Shi, Qi Zhang. Gabor-based anisotropic diffusion with lattice Boltzmann method for medical ultrasound despeckling. Mathematical Biosciences and Engineering, 2019, 16(6): 7546-7561. doi: 10.3934/mbe.2019379
[4] Michael James Horry, Subrata Chakraborty, Biswajeet Pradhan, Maryam Fallahpoor, Hossein Chegeni, Manoranjan Paul. Factors determining generalization in deep learning models for scoring COVID-CT images. Mathematical Biosciences and Engineering, 2021, 18(6): 9264-9293. doi: 10.3934/mbe.2021456
[5] Auwalu Saleh Mubarak, Zubaida Said Ameen, Fadi Al-Turjman. Effect of Gaussian filtered images on Mask RCNN in detection and segmentation of potholes in smart cities. Mathematical Biosciences and Engineering, 2023, 20(1): 283-295. doi: 10.3934/mbe.2023013
[6] Jimin Yu, Jiajun Yin, Shangbo Zhou, Saiao Huang, Xianzhong Xie. An image super-resolution reconstruction model based on fractional-order anisotropic diffusion equation. Mathematical Biosciences and Engineering, 2021, 18(5): 6581-6607. doi: 10.3934/mbe.2021326
[7] Chen Yue, Mingquan Ye, Peipei Wang, Daobin Huang, Xiaojie Lu. SRV-GAN: A generative adversarial network for segmenting retinal vessels. Mathematical Biosciences and Engineering, 2022, 19(10): 9948-9965. doi: 10.3934/mbe.2022464
[8] Hao Wang, Guangmin Sun, Kun Zheng, Hui Li, Jie Liu, Yu Bai. Privacy protection generalization with adversarial fusion. Mathematical Biosciences and Engineering, 2022, 19(7): 7314-7336. doi: 10.3934/mbe.2022345
[9] Hui Yao, Yuhan Wu, Shuo Liu, Yanhao Liu, Hua Xie. A pavement crack synthesis method based on conditional generative adversarial networks. Mathematical Biosciences and Engineering, 2024, 21(1): 903-923. doi: 10.3934/mbe.2024038
[10] Wei-wei Jiang, Guang-quan Zhou, Ka-Lee Lai, Song-yu Hu, Qing-yu Gao, Xiao-yan Wang, Yong-ping Zheng. A fast 3-D ultrasound projection imaging method for scoliosis assessment. Mathematical Biosciences and Engineering, 2019, 16(3): 1067-1081. doi: 10.3934/mbe.2019051
At present, image color rendering, as a major branch of image processing, has attracted much attention. With the development of deep learning, image color rendering based on neural networks has gradually become a research hotspot [1,2,3,4,5]. Traditional color rendering methods require manual intervention and place high demands on reference images; moreover, when the structure and color of the image are complex, the rendering effect is not ideal [6,7,8,9,10]. Color rendering methods based on deep learning can be easily deployed in production environments and overcome the limitations of the traditional methods [11,12,13]. By training a neural network model on a corresponding dataset [14,15], images can be rendered automatically, without being affected by human or other factors [16,17,18,19].
Larsson et al. [20] used a convolutional neural network that takes the brightness of the image as input and predicts the image's color and saturation through a hypercolumn model to realize color rendering. Iizuka et al. [21] combined low-level and global features of the image using a fusion layer in a convolutional neural network to generate image colors and process images of any resolution. Zhang et al. [22] designed an appropriate loss function to handle the multi-modal uncertainty in color rendering and maintain color diversity. However, when grayscale image features are extracted using the above methods, up-sampling is adopted to keep the image size consistent, resulting in loss of image information. Moreover, these network structures cannot adequately extract and understand complex image features, so the rendering effect is limited [23,24,25].
Isola et al. [26] improved conditional generative adversarial networks (CGAN) to achieve image-to-image translation. Their pix2pix model can convert between different image domains; for example, color rendering can be realized by learning the mapping between grayscale and color images [27,28]. However, the pix2pix model, being based on generative adversarial networks (GAN), suffers from training instability. Moreover, current deep learning based image rendering methods do not perform well on robust (low-quality) images. The Gabor filter can extract texture information at all scales and directions of the image and reduce, to a certain extent, the influence of illumination changes and noise.
Therefore, we propose a color rendering method using a Gabor filter based improved pix2pix model for robust images. The contributions of this paper are mainly three-fold:
(1) The improved pix2pix model not only automatically completes image rendering with good visual effect, but also achieves more stable training and better image quality.
(2) A Gabor filter is added to enhance the robustness of the images rendered by the model.
(3) The metrics from a series of experiments show that the proposed method performs better on robust images.
The rest of the paper is organized as follows. Section 2 introduces the previous work, including the Gabor filter and the pix2pix model. Section 3 describes the method and its design details. Section 4 presents the experiments and comparisons and evaluates image quality. Section 5 concludes the paper and outlines future work.
The Fourier transform is a powerful tool in signal processing that transforms images from the spatial domain to the frequency domain and extracts features that are not easy to obtain in the spatial domain. However, after the Fourier transform, frequency features at different image locations are mixed together, whereas the Gabor filter can extract spatially local frequency features, making it an effective texture detection tool [29,30]. The Gabor filter is obtained by modulating a Gaussian with a complex sinusoid [31,32,33]; it is defined as
$$g(x,y;\lambda,\theta,\varphi,\sigma,\gamma)=\exp\left(-\frac{x'^2+\gamma^2 y'^2}{2\sigma^2}\right)\exp\left(i\left(2\pi\frac{x'}{\lambda}+\varphi\right)\right)\tag{2.1}$$
$$g_{\mathrm{real}}(x,y;\lambda,\theta,\varphi,\sigma,\gamma)=\exp\left(-\frac{x'^2+\gamma^2 y'^2}{2\sigma^2}\right)\cos\left(2\pi\frac{x'}{\lambda}+\varphi\right)\tag{2.2}$$
$$g_{\mathrm{imag}}(x,y;\lambda,\theta,\varphi,\sigma,\gamma)=\exp\left(-\frac{x'^2+\gamma^2 y'^2}{2\sigma^2}\right)\sin\left(2\pi\frac{x'}{\lambda}+\varphi\right)\tag{2.3}$$
where $x'=x\cos\theta+y\sin\theta$ and $y'=-x\sin\theta+y\cos\theta$. Here, $x$ and $y$ are the pixel coordinates, $\lambda$ is the wavelength of the filter, $\theta$ is the orientation of the Gabor kernel, $\varphi$ is the phase offset, $\sigma$ is the standard deviation of the Gaussian function, and $\gamma$ is the aspect ratio.
In order to make full use of the characteristics of Gabor filters, it is necessary to design Gabor filters with different directions and scales to extract features. In this study, the Gabor filter extracts the texture features of the image at 6 scales and in 4 directions: the Gabor scales are 7, 9, 11, 13, 15 and 17, and the Gabor directions are 0°, 45°, 90° and 135°, as shown in Figure 1(a). Effective texture feature sets are extracted from the filter outputs; the extracted texture feature set, 24 texture feature maps in total, is shown in Figure 1(b).
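The filter-bank construction described above can be sketched in NumPy: the real part of Eq. (2.1) is evaluated on a rotated coordinate grid, and a bank of 6 scales × 4 directions yields the 24 feature maps. The heuristic tying σ to the wavelength and the choice γ = 0.5 are illustrative assumptions, not values stated in the paper.

```python
import numpy as np

def gabor_kernel(ksize, wavelength, theta, phi=0.0, sigma=None, gamma=0.5):
    """Real part of the Gabor filter in Eq. (2.1):
    a Gaussian envelope modulated by a cosine carrier."""
    if sigma is None:
        sigma = 0.56 * wavelength  # common heuristic linking sigma to wavelength
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_p = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates x'
    y_p = -x * np.sin(theta) + y * np.cos(theta)   # rotated coordinates y'
    envelope = np.exp(-(x_p**2 + gamma**2 * y_p**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_p / wavelength + phi)
    return envelope * carrier

# 6 scales (kernel sizes 7..17) x 4 orientations -> 24 feature maps
scales = [7, 9, 11, 13, 15, 17]
thetas = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]  # 0, 45, 90, 135 degrees
bank = [gabor_kernel(s, wavelength=s, theta=t) for s in scales for t in thetas]
```

Convolving the grayscale input with each kernel in `bank` would produce the 24 texture feature maps of Figure 1(b).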
At present, image rendering based on generative adversarial networks [34] attracts much attention because it can directly generate color images using learned mappings, and it is widely used in image processing, text processing, natural language processing, and other fields. The pix2pix model [26] is an image-to-image translation model based on generative adversarial networks that can synthesize images and generate color images well. The following are the main features of the pix2pix model.
(1) Both the generator and discriminator use the Conv-Batchnorm-ReLU convolution unit, namely a convolutional layer, batch normalization, and a ReLU activation.
(2) The input of the pix2pix model is a specified image rather than random noise: for label-to-photo translation the input is the label image, and for grayscale-to-color translation the input is the grayscale image. The grayscale image serves as the input of the generator, and the generator's input and output together serve as the input of the discriminator. This establishes the correspondence between the input and output images, enables user control, and completes image color rendering.
(3) PatchGAN is used as the discriminator of the pix2pix model. Specifically, the image is divided into several fixed-size blocks, the authenticity of each block is judged, and the average value is taken as the final output. A U-net-like network structure is adopted as the generator, with skip connections added between layer i and layer n−i, where n is the total number of layers of the network. The contracting path captures context information, while the symmetric expanding path enables precise localization.
(4) The loss function of the pix2pix model, composed of an L1 loss and the Vanilla GAN loss, is as follows, where x is the input image, y the expected output, G the generator, and D the discriminator:
$$G^*=\arg\min_G\max_D \mathcal{L}_{cGAN}(G,D)+\lambda\,\mathcal{L}_{L1}(G)\tag{2.4}$$
$$\mathcal{L}_{cGAN}(G,D)=\mathbb{E}_{x,y}\left[\log D(x,y)\right]+\mathbb{E}_{x}\left[\log\left(1-D(x,G(x))\right)\right]\tag{2.5}$$
$$\mathcal{L}_{L1}(G)=\mathbb{E}_{x,y}\left[\|y-G(x)\|_1\right]\tag{2.6}$$
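The combined objective of Eqs. (2.4)–(2.6) can be sketched numerically as follows. The discriminator outputs are taken as probabilities, and λ = 100, the weighting used in the original pix2pix paper, is assumed as the default; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def cgan_loss(d_real, d_fake):
    """Eq. (2.5): Vanilla GAN loss over real pairs (x, y) and fake
    pairs (x, G(x)); d_real and d_fake are discriminator probabilities."""
    eps = 1e-12  # avoid log(0)
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))

def l1_loss(y, g_x):
    """Eq. (2.6): pixel-wise L1 distance between target and generated image."""
    return np.mean(np.abs(y - g_x))

def pix2pix_objective(d_real, d_fake, y, g_x, lam=100.0):
    """Eq. (2.4): the quantity the generator minimizes and the
    discriminator maximizes, L_cGAN + lambda * L_L1."""
    return cgan_loss(d_real, d_fake) + lam * l1_loss(y, g_x)
```

With a perfect discriminator on real pairs, fully rejected fakes, and a generator output identical to the target, the objective evaluates to approximately zero.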
To address the detail problems of generative adversarial network based image color rendering in complex scenes, this paper proposes an image color rendering method using a Gabor filter based improved pix2pix model for robust images. The network framework is shown in Figure 2, and the rendering process in Figure 3. After training on the selected dataset, the trained generator is used for color rendering.
Firstly, we preprocess the image with the Gabor filter and extract the texture feature set of the image as input for training and validation. Comparing the 24 Gabor texture feature maps across 6 scales and 4 directions, the texture map with scale 7 and direction 0° yields the best color rendering effect. Secondly, this paper uses the existing pix2pix architecture for image translation, performing color rendering by learning the mapping between grayscale and color images. Finally, although the pix2pix model solves some problems of generative adversarial networks, training remains unstable on large-scale image datasets. Therefore, the least squares loss of LSGAN [35] is used in the objective function of the pix2pix model, and a penalty term similar to that of WGAN_GP [36] is added. A series of comparison experiments shows that the improved overall framework performs better on the rendering of robust images.
The generator in generative adversarial networks aims to bring the distribution of its outputs closer to that of the real data, while the discriminator must distinguish the real data from the data produced by the generator. Traditional generative adversarial networks use the cross entropy (Vanilla GAN) loss as the loss function; it classifies correctly, but gradient vanishing occurs when the generator is updated [36,37]. LSGAN uses the squared loss as the objective function: the least squares loss penalizes fake samples that the discriminator classifies as real but that lie far from the decision boundary, dragging them toward the boundary and thereby improving the quality of the generated images.
Therefore, compared with traditional generative adversarial networks, LSGAN generates higher-quality images with a more stable training process, so the least squares loss function is adopted in the framework of this paper.
$$\begin{cases}\displaystyle\min_D V_{\mathrm{LSGAN}}(D)=\frac{1}{2}\,\mathbb{E}_{x\sim p_{\mathrm{data}}(x)}\left[(D(x)-b)^2\right]+\frac{1}{2}\,\mathbb{E}_{z\sim p_z(z)}\left[(D(G(z))-a)^2\right]\\[6pt]\displaystyle\min_G V_{\mathrm{LSGAN}}(G)=\frac{1}{2}\,\mathbb{E}_{z\sim p_z(z)}\left[(D(G(z))-c)^2\right]\end{cases}\tag{3.1}$$
where the input image is x, the expected output is y, the generator is G, the discriminator is D, and the noise is z; a and b are the labels of the generated and real samples, respectively, and c is the value the generator wants the discriminator to assign to generated images so that they pass as real data.
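The two halves of Eq. (3.1) translate into simple score-matching squared errors. The 0-1-1 label scheme below (a = 0, b = 1, c = 1) is the common choice from the LSGAN paper and is assumed here for illustration.

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake, a=0.0, b=1.0):
    """Discriminator half of Eq. (3.1): push real scores toward b
    and fake scores toward a."""
    return 0.5 * np.mean((d_real - b) ** 2) + 0.5 * np.mean((d_fake - a) ** 2)

def lsgan_g_loss(d_fake, c=1.0):
    """Generator half of Eq. (3.1): make the discriminator score
    generated samples as c (i.e., as real)."""
    return 0.5 * np.mean((d_fake - c) ** 2)
```

Unlike the log-based Vanilla GAN loss, these quadratic terms keep a nonzero gradient for confidently misclassified samples far from the decision boundary, which is the stabilizing property the text describes.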
Generative adversarial networks can capture data distributions well but suffer from training instability, and improving their training stability is a hot topic in deep learning. Wasserstein generative adversarial networks (WGAN) [38] use the Wasserstein distance to construct a value function with better theoretical properties than the JS divergence and constrain the Lipschitz constant of the discriminator function, which basically solves the training instability and model collapse problems of generative adversarial networks and ensures the diversity of generated samples [39]. WGAN_GP further improves on WGAN with a gradient penalty term derived from the Wasserstein distance, using a penalty coefficient of 10.
The objective function of WGAN_GP is as follows, combining the original critic loss with the gradient penalty term of WGAN_GP.
$$L=\mathbb{E}_{\tilde{x}\sim P_g}\left[D(\tilde{x})\right]-\mathbb{E}_{x\sim P_r}\left[D(x)\right]+\lambda\,\mathbb{E}_{\hat{x}\sim P_{\hat{x}}}\left[\left(\|\nabla_{\hat{x}}D(\hat{x})\|_2-1\right)^2\right]\tag{3.2}$$
where $\mathbb{E}_{\tilde{x}\sim P_g}[D(\tilde{x})]-\mathbb{E}_{x\sim P_r}[D(x)]$ is the original critic loss, $\lambda\,\mathbb{E}_{\hat{x}\sim P_{\hat{x}}}[(\|\nabla_{\hat{x}}D(\hat{x})\|_2-1)^2]$ denotes the gradient penalty term of WGAN_GP, $\hat{x}=t\tilde{x}+(1-t)x$ with $0\le t\le 1$, and $\lambda$ is the penalty coefficient.
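Eq. (3.2) can be illustrated with a toy critic. A real implementation obtains the gradient at the interpolates by automatic differentiation; here a linear critic is assumed so that the gradient is available in closed form, keeping the sketch self-contained. The critic weights and the interpolation are hypothetical illustrations, not part of the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=4)  # weights of a toy linear critic D(x) = x . w

def critic(x):
    return x @ w

def wgan_gp_loss(x_real, x_fake, lam=10.0):
    """Eq. (3.2): original critic loss plus the gradient penalty
    evaluated at random interpolates x_hat between real and fake
    samples. For this linear critic, grad_x D(x) = w everywhere;
    a neural critic would require automatic differentiation here."""
    critic_loss = critic(x_fake).mean() - critic(x_real).mean()
    t = rng.uniform(size=(x_real.shape[0], 1))
    x_hat = t * x_fake + (1.0 - t) * x_real  # hat{x} = t*x_tilde + (1-t)*x
    grad_norm = np.linalg.norm(w)            # ||grad_{x_hat} D(x_hat)||_2
    penalty = lam * np.mean((grad_norm - 1.0) ** 2)
    return critic_loss + penalty
```

The penalty drives the critic's gradient norm toward 1 at the interpolated points, enforcing the Lipschitz constraint softly instead of clipping weights as the original WGAN does.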
To verify the effectiveness and accuracy of the proposed method, we conducted extensive experiments on the summer dataset [40], with 1231 training images and 309 test images. Experiment 1 tests the effect of the Gabor filter and of different objective functions in the pix2pix model environment. Experiment 2 tests the rendering effect when different Gabor texture feature maps are given as input. Experiment 3 tests whether the penalty term should be added to the discriminator. Experiment 4 tests the rendering effect on low-quality images, produced by adding noise and dimming image brightness, to assess the robustness of the model.
Training parameters: The experiments were performed on a PC with an Intel(R) Core(TM) i7-9750H CPU @ 2.60 GHz and an NVIDIA GeForce GTX 1650 graphics card, with CUDA+cuDNN for accelerated training. The proposed method is implemented in Python 3.7 with the PyTorch framework. Training runs for 200 iterations with the Adam optimizer, batch_size 1, learning rate 0.0002, and 4 worker processes.
Network structures and implementation details: All models are trained on 256 × 256 images. The input image file of the model is 512 × 256: the left half is the original color image and the right half is the texture feature map produced by the Gabor filter, as shown in Figure 4. By default, the pix2pix model uses a U-net-like generator, a PatchGAN discriminator, and Vanilla GAN loss.
Evaluation Metrics: To reflect the image color rendering quality of different models more objectively, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) indexes are adopted to evaluate the rendered images [41,42]. These two indexes are commonly used evaluation metrics in image processing. PSNR is an objective standard for evaluating the quality of the generated color image, calculated as follows:
$$\mathrm{PSNR}=10\log_{10}\frac{(2^n-1)^2}{\mathrm{MSE}}\tag{4.1}$$
$$\mathrm{MSE}=\frac{1}{H\times W}\sum_{i=1}^{H}\sum_{j=1}^{W}\left[X(i,j)-Y(i,j)\right]^2\tag{4.2}$$
where H and W represent the height and width of the image, respectively, $(i,j)$ indexes each pixel, $n$ is the number of bits per pixel, and $X$ and $Y$ are the two images being compared.
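Eqs. (4.1) and (4.2) translate directly into a short function. The 8-bit default (n = 8, peak value 255) is an assumption for typical images, not a value stated in the paper.

```python
import numpy as np

def psnr(x, y, n_bits=8):
    """Eqs. (4.1)-(4.2): PSNR in dB between two same-shape images,
    assuming n_bits-per-channel integer pixel values."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mse = np.mean((x - y) ** 2)  # Eq. (4.2), averaged over all pixels
    if mse == 0:
        return float("inf")      # identical images
    peak = (2 ** n_bits - 1) ** 2
    return 10.0 * np.log10(peak / mse)
```

Two maximally different 8-bit images (all 0 vs. all 255) give 0 dB, and identical images give infinite PSNR, matching the formula's extremes.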
Because the PSNR index has limitations and cannot fully reflect the consistency between image quality and human visual perception, the SSIM index is used for further comparison. SSIM is a metric measuring the similarity of two images; comparing the image rendered by the model with the original color image demonstrates the effectiveness and accuracy of the algorithm. The calculation formula is as follows:
$$\mathrm{SSIM}=\frac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)}{(\mu_x^2+\mu_y^2+c_1)(\sigma_x^2+\sigma_y^2+c_2)}\tag{4.3}$$
where $\mu_x$ and $\mu_y$ represent the mean values of the real and generated images, respectively, $\sigma_x^2$ and $\sigma_y^2$ their variances, and $\sigma_{xy}$ their covariance; $c_1=(k_1 L)^2$ and $c_2=(k_2 L)^2$ are constants that maintain stability, $L$ is the dynamic range of the pixel values, $k_1=0.01$, and $k_2=0.03$.
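Eq. (4.3) can be sketched as a single global computation over the whole image. Note that the standard SSIM implementation averages this statistic over local sliding windows; the single global window here is a simplification for brevity, and L = 255 assumes 8-bit images.

```python
import numpy as np

def ssim_global(x, y, L=255, k1=0.01, k2=0.03):
    """Eq. (4.3) over one global window (standard SSIM averages this
    over local windows; this simplification keeps the sketch short)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (k1 * L) ** 2
    c2 = (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))
```

Comparing an image with itself yields exactly 1, and dissimilar images score lower, consistent with SSIM's role as a similarity measure.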
In this study, the Gabor filter extracts texture features at 6 scales and in 4 directions. For convenience, following the texture feature set shown in Figure 1(b), the maps are numbered from left to right and top to bottom, with direction denoted d and scale denoted s, as shown in Figure 5. For example, G1 means "s = 7, d = 0°", i.e., direction 0° and scale 7, and G6 means "s = 17, d = 0°", i.e., direction 0° and scale 17. By default, the pix2pix model uses Vanilla GAN loss. The pix2pix variant using the least squares loss function is called LSpix (least squares pix2pix), and the variant using Gabor texture map n is called pixGn (pix2pix Gabor n), n = 1, 6, 7, 13.
To test the effect of the Gabor filter and of different objective functions in the pix2pix model environment, we divided the experiment into adding the Gabor filter (Figures 6(c), (e)), not adding the Gabor filter (Figures 6(b), (d)), and using least squares loss (Figures 6(d), (e)) or Vanilla GAN loss (Figures 6(b), (c)). Comparing the images in Figure 6 confirms that the rendering effect with least squares loss and Gabor preprocessing, i.e., the LSpixG1 model, is better. This is because the Gabor filter preprocesses the images and obtains multi-scale, multi-direction features, enabling good and fast feature extraction and learning during network training. Moreover, compared with other loss functions, the least squares loss saturates at only one point, making the vanishing gradient problem less likely.
Tables 1 and 2 compare the distortion and structural similarity between the rendered images and the ground truth, showing the maximum, minimum, and average indexes; they complement the visual comparison in Figure 6. The LSpix model has the highest maximum and average PSNR, 3.591 dB and 1.083 dB higher than those of the pix2pix model, respectively. Meanwhile, the LSpix model has the highest SSIM scores, 1.618%, 15.649% and 3.848% higher than those of the pix2pix model for the maximum, minimum, and average values, respectively. This shows that our model is closer to the ground truth in structure and more faithful in color.
Network | MAX PSNR (dB) | MIN PSNR (dB) | AVE PSNR (dB)
pix2pix | 29.204 | 11.126 | 23.024 |
pixG1 | 28.225 | 10.477 | 19.981 |
LSpix | 32.795 | 11.003 | 24.107 |
pixG6 | 27.874 | 9.883 | 20.012 |
LSpixG6 | 32.616 | 10.632 | 21.409 |
LSpixG1 | 32.524 | 11.238 | 21.354 |
Note: Bold font is the best value for each column. |
Network | MAX SSIM (%) | MIN SSIM (%) | AVE SSIM (%)
pix2pix | 92.888 | 52.474 | 82.163 |
pixG1 | 86.101 | 36.592 | 69.145 |
LSpix | 94.506 | 68.123 | 86.011 |
pixG6 | 85.625 | 33.117 | 68.845 |
LSpixG6 | 91.312 | 56.897 | 78.387 |
LSpixG1 | 91.757 | 54.785 | 78.485 |
Note: Bold font is the best value for each column. |
In order to test the rendering effect when different Gabor texture feature maps are input, we use different feature maps as input. Figure 7 shows how different Gabor texture images are rendered when Vanilla GAN loss is the objective function of the pix2pix model. Figures 5(c), (d), that is, scale 7 with direction 45° or 90°, contain incomplete details of the original image, resulting in incomplete input texture features; therefore, the generated images are blurred, as shown in Figures 7(a), (b). Although the 7th and 13th texture images were combined as a training set (pixG7+G13 model) with a total of 1231 × 2 images, the rendering effect was not significantly improved, as shown in Figure 7(b). Evidently, comparing the images in Figure 7, the visual effect of Figures 7(c)–(e) is good and not blurred. Tables 3 and 4 show the evaluation indexes for the different input feature maps. The data show that an incomplete input texture feature map is undesirable.
Network | MAX PSNR (dB) | MIN PSNR (dB) | AVE PSNR (dB)
pixG1 | 28.225 | 10.477 | 19.981 |
pixG6 | 27.874 | 9.883 | 20.012 |
pixG7 | 27.565 | 9.232 | 17.600 |
pixG1+G13 | 28.960 | 9.947 | 20.682 |
Note: Bold font is the best value for each column. |
Network | MAX SSIM (%) | MIN SSIM (%) | AVE SSIM (%)
pixG1 | 86.101 | 36.592 | 69.145 |
pixG6 | 85.625 | 33.117 | 68.845 |
pixG7 | 91.188 | 4.964 | 39.057 |
pixG1+G13 | 87.615 | 42.562 | 71.630 |
Note: Bold font is the best value for each column. |
To compare the operational efficiency of inputting different texture maps, the training times are listed in Table 5, in hours. Regardless of whether the Gabor filter was used or which texture map was input, the training time was around 9 hours. However, if two texture maps are used for training, as with G1 and G13 in the pixG1+G13 model, the training set doubles and so does the pre-training time. Even though the results shown in Figure 7(d) are good, this approach is not desirable. When filtering, we need to extract multi-scale, multi-direction features and remove redundant information; once important information is removed, the results are inevitably affected, producing blurred images.
Model | pix2pix | pixG1 | pixG6 | pixG7 | pixG7+G13 | pixG1+G13 |
Time | 8.72 | 9.00 | 8.76 | 8.43 | 15.27 | 16.52 |
Figure 8 shows the effect of adding a penalty term to the discriminator of the pixG1 model. Figure 8(a) shows the result without the penalty term, and Figure 8(b) the result with it. Clearly, Figure 8(b) has fewer errors in detail and better visual effect. The penalty term performs gradient punishment at interpolated samples so that the model satisfies the Lipschitz constraint. Adding a penalty term similar to that of WGAN_GP essentially solves the training instability and model collapse problems of the GAN model and ensures the diversity of generated samples.
Tables 6 and 7 show the evaluation indexes with and without the penalty term. With the penalty term, the LSpix_GP model achieves the highest minimum PSNR, 0.904 dB higher than that of the original pix2pix model. Evidently, among the texture maps extracted by the Gabor filter, the image with scale 7 and direction 0° gives the best training effect. Furthermore, when the objective function is the least squares loss, the average SSIM and overall performance improve. With the penalty term added, the maximum and average SSIM scores are the highest, 1.753% and 1.083% higher than those of the pix2pix model. Therefore, the image rendered by the LSpixG1_GP model is better than that of the original model.
Network | MAX PSNR (dB) | MIN PSNR (dB) | AVE PSNR (dB)
pix2pix | 29.204 | 11.126 | 23.024 |
pixG1 | 28.225 | 10.477 | 19.981 |
pixG6 | 27.874 | 9.883 | 20.012 |
LSpix | 32.795 | 11.003 | 24.107 |
LSpixG1 | 32.524 | 11.238 | 21.354 |
LSpix_GP | 31.859 | 12.030 | 24.019 |
LSpixG1_GP | 32.342 | 11.514 | 21.290 |
LSpixG6 | 32.616 | 10.632 | 21.409 |
LSpixG6_GP | 32.113 | 11.067 | 21.384
Note: Bold font is the best value for each column. |
Network | MAX SSIM (%) | MIN SSIM (%) | AVE SSIM (%)
pix2pix | 92.888 | 52.474 | 82.163 |
pixG1 | 86.101 | 36.592 | 69.145 |
pixG6 | 85.625 | 33.117 | 68.845 |
LSpix | 94.506 | 68.123 | 86.011 |
LSpixG1 | 91.757 | 54.785 | 78.485 |
LSpix_GP | 94.641 | 67.250 | 85.967 |
LSpixG1_GP | 90.772 | 54.308 | 78.067 |
LSpixG6 | 91.312 | 56.897 | 78.387 |
LSpixG6_GP | 90.941 | 52.740 | 78.236 |
Note: Bold font is the best value for each column. |
To compare the operational efficiency of different objective functions and of adding the penalty term, the running times are listed in Table 8, in hours. For example, LSpixG6_GP means using the least squares loss and adding the penalty term, with direction 0° and scale 17. Regardless of whether the Gabor filter was used, which texture map was input, or whether Vanilla GAN loss or least squares loss was the objective function, the training time was approximately 9 h. Although the efficiency of adding the filter alone is basically unchanged, using the filter together with the penalty term increases training time by 2–3 h. Therefore, this study adopts the LSpixG1_GP model, namely a Gabor texture map with scale 7 and direction 0° as model input, the least squares loss, and the penalty term.
Model | pix2pix | LSpix | LSpixG1 | LSpix_GP | LSpixG1_GP | LSpixG6 | LSpixG6_GP |
Time | 8.72 | 8.66 | 8.66 | 8.72 | 11.17 | 8.72 | 11.72 |
Note: Bold font is the best value for each row. |
In order to evaluate the robustness of the model when rendering low-quality images, the rendering effect was tested by adding noise and dimming image brightness, as shown in Figure 9. For the noise test, Gaussian noise with mean 0 and variance 10 is added to the image. For the low-illumination test, a power operation is performed on the pixels of the image, with the power set to 2.5, to generate low-illumination images.
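The two degradations described above can be sketched as follows; the clipping to [0, 255], the normalization before the power operation, and the random seed are implementation assumptions, since the paper only states the noise parameters and the exponent.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_gaussian_noise(img, mean=0.0, var=10.0):
    """Noise test image: Gaussian noise with mean 0 and variance 10."""
    noisy = img.astype(np.float64) + rng.normal(mean, np.sqrt(var), img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def darken(img, power=2.5):
    """Low-illumination test image: raise normalized pixels to power 2.5,
    which darkens all mid-range intensities."""
    norm = img.astype(np.float64) / 255.0
    return np.round((norm ** power) * 255.0).astype(np.uint8)
```

Applied to the 309 test images, these transforms would produce the noisy and low-illumination inputs evaluated in Tables 9 and 10.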
We use the PSNR metric to evaluate the rendering results of each model on low-quality images. As shown in Table 9, the LSpix model renders noisy images with higher quality. As shown in Table 10, models with the Gabor filter generally render low-illumination images with good quality. With the Gabor filter, the least squares objective, and the penalty term, the LSpixG1_GP model produces higher image quality than the original model. This is because the method uses the Gabor filter to reduce the interference of noise to a certain extent, and the extracted depth information limits the influence of illumination on the image. Clearly, the proposed method is robust for color rendering of low-quality images.
Network | MAX PSNR (dB) | MIN PSNR (dB) | AVE PSNR (dB)
pix2pix | 29.489 | 10.770 | 22.528 |
pixG1 | 25.657 | 10.562 | 18.665 |
LSpix | 29.655 | 11.805 | 22.528 |
LSpixG1 | 27.650 | 12.409 | 19.942 |
LSpix_GP | 29.516 | 11.950 | 22.504 |
LSpixG1_GP | 27.306 | 11.548 | 19.966 |
Note: Bold font is the best value for each column. |
Network | MAX PSNR (dB) | MIN PSNR (dB) | AVE PSNR (dB)
pix2pix | 21.977 | 8.723 | 12.535 |
pixG1 | 26.158 | 7.864 | 14.441 |
LSpix | 21.457 | 9.119 | 12.579 |
LSpixG1 | 24.565 | 7.946 | 14.171 |
LSpix_GP | 21.948 | 9.334 | 12.563 |
LSpixG1_GP | 24.337 | 7.886 | 14.127 |
Note: Bold font is the best value for each column. |
We proposed a novel image color rendering method using a Gabor filter based improved pix2pix model for robust images and demonstrated its feasibility and superiority on a variety of tasks. It automatically renders robust images and remains robust when rendering low-quality images. The experimental results on the summer dataset demonstrate that the proposed method achieves high-quality image color rendering. At present, the image resolution of deep learning based image processing is limited, which restricts the practical application of rendering methods. In the future, we will focus on improving the resolution of the network model's input images.
This work was partially supported by the National Natural Science Foundation of China (No. 62002285 and No. 61902311).
The authors declare there is no conflict of interest.
[1] | Abdulai RT, Owusu-Ansah A (2011) House price determinants in Liverpool, United Kingdom. Current Politics and Economics of Europe, Nova Science Publishers, Inc., 22: 1–26. |
[2] | Afşar A, Yilmazel Ö, Yilmazel S (2017) Konut fiyatlarini etkileyen faktörlerin hedonik model ile belirlenmesi: Eskişehir Örneği. Selçuk Ün Sos Bil Ens Der, 195–205. |
[3] | Anaman KA, Osei-Amponsah C (2007) Analysis of the causality links between the growth of the construction industry and the growth of the macro-economy in Ghana. Constr Manage Econ 25: 951–961. doi: 10.1080/01446190701411208 |
[4] | Apergis N (2020) Natural disasters and housing prices: fresh evidence from a global country sample. Int Real Estate Rev 23: 815–836. |
[5] | Bahmani-Oskooee M, Ghodsi SH (2018) Asymmetric causality between the US housing market and its stock market: Evidence from state level data. J Econ Asymmetr 18: e00095. |
[6] | Bahmani-Oskooee M, Ghodsi SH (2019) On the Link between Value of the Dollar and Housing Production in the U.S.: Evidence from State Level Data. Int Real Estate Rev 22: 231–274. |
[7] | Beaver WH (1968) The information content of annual earnings announcements. J Account Res 6: 67–92. doi: 10.2307/2490070 |
[8] | Brau J, Holmes A (2006) Why do REITs repurchase stock? Extricating the effect of managerial signaling in open market share repurchase announcements. J Real Estate Res 28: 1–24. doi: 10.1080/10835547.2006.12091169 |
[9] | Brown SJ, Warner JB (1980) Measuring security price performance. J Financ Econ 8: 205–258. doi: 10.1016/0304-405X(80)90002-1 |
[10] | Brown SJ, Warner JB (1985) Using daily stock returns: The case of event studies. J Financ Econ 14: 3–31. doi: 10.1016/0304-405X(85)90042-X |
[11] | Chen H, Michaux M, Roussanov N (2020) Houses as ATMs: mortgage refinancing and macroeconomic uncertainty. J Financ 75: 323–375. doi: 10.1111/jofi.12842 |
[12] | Chen MH, Jang SS, Kim WG (2007) The impact of the SARS outbreak on Taiwanese hotel stock performance: an event-study approach. Int J Hosp Manag 26: 200–212. doi: 10.1016/j.ijhm.2005.11.004 |
[13] | Cloyne J, Huber K, Ilzetzki E, et al. (2019) The effect of house prices on household borrowing: a new approach. Am Econ Rev 109: 2104–2136. doi: 10.1257/aer.20180086 |
[14] | Contat JC, Turnbull GK, Waller BD (2019) Politics and Prices: Presidential Elections and the Housing Market. |
[15] | Dyckman T, Philbrick D, Stephan J (1984) A comparison of event study methodologies using daily stock returns: A simulation approach. J Account Res 22: 1–30. doi: 10.2307/2490855 |
[16] | Erol I, Umut U (2015) Role of construction sector in economic growth: new evidence in Turkey. MPRA Paper, 1–31. |
[17] | Fama E (1991) Efficient capital markets: II. J Financ 46: 1575–1617. doi: 10.1111/j.1540-6261.1991.tb04636.x |
[18] | Fatum R, Hutchison MM (2003) Is sterilised foreign exchange intervention effective after all? An event study approach. Econ J 113: 390–411. doi: 10.1111/1468-0297.00122 |
[19] | Fehrle D (2019) Housing and the business cycle revisited. J Econ Dyn Control 99: 103–115. doi: 10.1016/j.jedc.2018.12.004 |
[20] | Fuerst F (2005) Exogenous shocks and real estate rental markets: An event study of the 9/11 attacks and their impact on the New York office market. Available from: https://ssrn.com/abstract=800006 or http://dx.doi.org/10.2139/ssrn.800006. |
[21] | Giang DT, Pheng LS (2011) Role of construction in economic development: Review of key concepts in the past 40 years. Habitat Int 35: 118–125. doi: 10.1016/j.habitatint.2010.06.003 |
[22] | Glewwe P (1991) Investigating the determinants of household welfare in Côte d'Ivoire. J Dev Econ 35: 307–337. doi: 10.1016/0304-3878(91)90053-X |
[23] | Greene WH (1993) Econometric analysis. 2nd Edition, New Jersey: Prentice Hall. |
[24] | Greene WH (2000) Econometric analysis. Upper Saddle River, New Jersey: Prentice-Hall. |
[25] | Greene WH (2001) Fixed and Random Effects in Nonlinear Models. Available from: http://www.stern.nyu.edu/eco/wkpapers/workingpapers01/EC-01-01.pdf. |
[26] | Gül ZB (2017) Construction industry in Turkey: an input-output analysis using the World Input-Output Database (WIOD) for the 2002–2011 period. Atatürk Üniversitesi İktisadi ve İdari Bilimler Dergisi 31. |
[27] | Gupta A, Mittal V, Peeters J, et al. (2021) Flattening the curve: pandemic-induced revaluation of urban real estate. NBER Working Papers. |
[28] | He P, Sun Y, Zhang Y, et al. (2020) COVID-19's impact on stock prices across different sectors—An event study based on the Chinese stock market. Emerg Mark Financ Trade 56: 2198–2212. doi: 10.1080/1540496X.2020.1785865 |
[29] | Hoesli M, Milcheva S, Moss A (2020) Is Financial Regulation Good or Bad for Real Estate Companies? An Event Study. J Real Estate Finan Econ 61: 369–407. doi: 10.1007/s11146-017-9634-z |
[30] | Hu MR, Lee AD, Zou D (2021) COVID-19 and Housing Prices: Australian Evidence with Daily Hedonic Returns. Available from: https://ssrn.com/abstract=3768953 or http://dx.doi.org/10.2139/ssrn.3768953. |
[31] | Ilhan B, Yaman H (2011) A comparative input-output analysis of the construction sector in Turkey and EU countries. Eng Constr Archit Manag 18: 248–265. doi: 10.1108/09699981111126160 |
[32] | Jung H, Lee J (2017) The effects of macroprudential policies on house prices: Evidence from an event study using Korean real transaction data. J Financ Stab 31: 167–185. doi: 10.1016/j.jfs.2017.07.001 |
[33] | Kothari SP, Warner JB (2006) Econometrics of event studies. Handb Empir Corp Financ 1: 3–36. |
[34] | Kofoworola OF, Gheewala S (2008) An input–output analysis of Thailand's construction sector. Constr Manag Econ 26: 1227–1240. doi: 10.1080/01446190802425560 |
[35] | Lean CS (2001) Empirical tests to discern linkages between construction and other economic sectors in Singapore. Constr Manag Econ 19: 355–363. doi: 10.1080/01446190010022686 |
[36] | Lewis TM (2009) Quantifying the GDP–construction relationship. Econ Mod Built Environ, 34–59. |
[37] | Liargovas P, Repousis S (2011) The impact of mergers and acquisitions on the performance of the Greek banking sector: An event study approach. Int J Econ Financ 3: 89–100. doi: 10.5539/ijef.v3n2p89 |
[38] | Ling DC, Wang C, Zhou T (2020) A first look at the impact of COVID-19 on commercial real estate prices: Asset-level evidence. Rev Asset Pricing Stud 10: 669–704. doi: 10.1093/rapstu/raaa014 |
[39] | Liow KH, Song J (2019) Market Integration Among the US and Asian Real Estate Investment Trusts in Crisis Times. Int Real Estate Rev 22: 463–512. doi: 10.53383/100288 |
[40] | Liu S, Su Y (2021) The impact of the COVID-19 pandemic on the demand for density: Evidence from the US housing market. Econ Lett 207: 110010. |
[41] | Lopes J (2009) Investment in construction and economic growth: a long-term perspective. Econ Mod Built Environ, 94–112. |
[42] | Lopes J (2012) Construction in the economy and its role in socio-economic development. New Perspectives on Construction in Developing Countries, 1st Edition, Routledge, 41–71. |
[43] | Miyajima H, Yafeh Y (2007) Japan's banking crisis: An event-study perspective. J Bank Financ 31: 2866–2885. doi: 10.1016/j.jbankfin.2007.03.006 |
[44] | Nanda A, Ross SL (2012) The impact of property condition disclosure laws on housing prices: Evidence from an event study using propensity scores. J Real Estate Financ Econ 45: 88–109. doi: 10.1007/s11146-009-9206-y |
[45] | Ozkan F, Ozkan O, Gunduz M (2012) Causal relationship between construction investment policy and economic growth in Turkey. Technol Forecast Soc Change 79: 362–370. doi: 10.1016/j.techfore.2011.04.007 |
[46] | Ozturk N, Fitoz E (2009) Türkiye'de Konut Piyasasinin Belirleyicileri: Ampirik Bir Uygulama. Uluslararası Yönetim İktisat ve İşletme Dergisi 5: 21–46. |
[47] | Patell JM (1976) Corporate forecasts of earnings per share and stock price behavior: Empirical tests. J Account Res, 246–276. |
[48] | Redl C (2018) Macroeconomic uncertainty in South Africa. S Afr J Econ 86: 361–380. doi: 10.1111/saje.12198 |
[49] | Rosenthal SS, Strange WC, Urrego JA (2021) JUE insight: Are city centers losing their appeal? Commercial real estate, urban spatial structure, and COVID-19. J Urban Econ, 103381. |
[50] | Ruddock L, Lopes J (2006) The construction sector and economic development: the "Bon curve". Constr Manag Econ 24: 717–723. doi: 10.1080/01446190500435218 |
[51] | Strobel J (2015) On the different approaches of measuring uncertainty shocks. Econ Lett 134: 69–72. doi: 10.1016/j.econlet.2015.06.012 |
[52] | Tse RYC, Ganesan S (1997) Causal relationship between construction flows and GDP: evidence from Hong Kong. Constr Manag Econ 15: 371–376. doi: 10.1080/014461997372926 |
[53] | Wibowo A (2009) The contribution of the construction industry to the economy of Indonesia: A systemic approach. Available from: http://eprints.undip.ac.id/387/1/Agung_Wibowo.pdf. |
[54] | Wigren R, Wilhelmsson M (2007) Construction investments and economic growth in Western Europe. J Policy Model 29: 439–451. doi: 10.1016/j.jpolmod.2006.10.001 |
[55] | Wilhelmsson M, Wigren R (2011) The robustness of the causal and economic relationship between construction flows and economic growth: evidence from Western Europe. Appl Econ 43: 891–900. doi: 10.1080/00036840802600020 |
[56] | Wu X, Zhang Z (2005) Input–output analysis of the Chinese construction sector. Constr Manag Econ 23: 905–912. doi: 10.1080/01446190500183974 |
[57] | Yamak N, Koçak S, Samut S (2018) Türkiye'de inşaat sektörünün kısa ve uzun dönem dinamikleri. Muğla Sıtkı Koçman Üniversitesi İktisadi ve İdari Bilimler Fakültesi Ekonomi ve Yönetim Araştırmaları Dergisi 7: 96–113. |
[58] | Yang Z (2001) An application of the hedonic price model with uncertain attribute: The case of the People's Republic of China. Prop Manag 19: 50–63. |
[59] | Yayar R, Karaca SS (2014) Konut Fiyatlarına Etki Eden Faktörlerin Hedonik Modelle Belirlenmesi: TR83 Bölgesi Örneği. Ege Acad Rev 14: 509–514. |