
Reservoir computing (RC) is a promising approach for model-free prediction of complex nonlinear dynamical systems. Here, we reveal that the randomness in the parameter configurations of the RC has little influence on its short-term prediction accuracy of chaotic systems. This thus motivates us to articulate a new reservoir structure, called homogeneous reservoir computing (HRC). To further gain the optimal input scaling and spectral radius, we investigate the forecasting ability of the HRC with different parameters and find that there is an ellipse-like optimal region in the parameter space, which is completely beyond the area where the spectral radius is smaller than unity. Surprisingly, we find that this optimal region with better long-term forecasting ability can be accurately reflected by the contours of the l2-norm of the output matrix, which enables us to judge the quality of the parameter selection more directly and efficiently.
Citation: Bolin Zhao. Seeking optimal parameters for achieving a lightweight reservoir computing: A computational endeavor[J]. Electronic Research Archive, 2022, 30(8): 3004-3018. doi: 10.3934/era.2022152
In the information era, advances in hardware and software have driven engineering technology toward ever greater computational capability. Intelligent computing and machine learning provide effective strategies for solving complicated problems and for reducing procedural complexity. In computer vision, essential information is often extracted in the frequency domain in order to model events; however, frequency-domain processing is computationally complex, and incorporating intelligence into the modeling stage to address such problems is the current trend. Intelligent computing integrates many technologies and knowledge-based decisions within a computing environment, and computational photography and image processing rely on it for object modeling and feature extraction.

Motion blur arises in many practical scenarios, especially with hand-held cameras and mobile devices. A motion-blurred picture is caused by relative motion between the camera and the imaged scene during exposure. Generally, this relative motion falls into two kinds: camera shake and object motion; sensor movement during exposure leads to unwanted blur in the acquired image. Assuming a static scene and ignoring the effects of defocus and lens aberration, each point in the blurred image can be modeled as the convolution of the unblurred image with a global point spread function (PSF). Image deblurring aims to recover the clear image from the acquired blurry image through a deconvolution process. With an intelligent computing strategy, image deblurring can achieve reduced computational complexity and improved running time.
Artificial neural networks (NNs) have been used extensively in image processing [1,2]. Schuler et al. [3] proposed a learning-based NN that estimates features and then estimates the blur kernel for deblurring via deconvolution. A common assumption in motion deblurring methods is that the motion PSF is spatially invariant, which implies that all pixels are convolved with the same motion blur kernel. Blur kernel estimation, and more generally blind deconvolution, is a long-standing problem in computer vision: restoring a blurry image depends critically on estimating the motion blur kernel before applying an appropriate image restoration method. Many well-known algorithms for estimating the blur kernel have been proposed [4,5,6,7,8]. When the PSF is known, or can be estimated, a deconvolution algorithm such as Richardson-Lucy [9,10,11,12] can be used to deblur the image. Motion blur results from relative motion between the camera and the scene during exposure; mathematically, it is usually modeled as a linear image degradation process:
$$ B = I \otimes K + N, \tag{1.1} $$
where ⊗ denotes the convolution operator, and B, I, K, and N denote the blurred image, the true sharp image, the unknown blur kernel, and a noise term, respectively. Blind image deconvolution is inherently ill-posed, since the blurred image B alone does not provide enough information to determine both I and K. How to estimate the blur kernel K from the blurred image B is therefore a central issue in motion-blurred image restoration. Many studies have addressed it; some are briefly described below.
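As a concrete illustration of the degradation model in Eq. (1.1), the following sketch synthesizes a motion-blurred image from a sharp one. The kernel shape, image size, and noise level here are illustrative assumptions, not the settings used in this paper:

```python
import numpy as np

def motion_blur_kernel(length=9, size=21):
    # Illustrative horizontal motion PSF: a normalized line segment.
    K = np.zeros((size, size))
    c = size // 2
    K[c, c - length // 2 : c - length // 2 + length] = 1.0
    return K / K.sum()

def blur(I, K, noise_sigma=0.0, rng=None):
    # B = I (x) K + N of Eq. (1.1), using circular convolution via the FFT.
    Kpad = np.zeros_like(I, dtype=float)
    kh, kw = K.shape
    Kpad[:kh, :kw] = K
    # Center the kernel at the origin so the blur does not shift the image.
    Kpad = np.roll(Kpad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    B = np.real(np.fft.ifft2(np.fft.fft2(I) * np.fft.fft2(Kpad)))
    if noise_sigma > 0:
        rng = np.random.default_rng() if rng is None else rng
        B = B + rng.normal(0.0, noise_sigma, B.shape)
    return B
```

Because the kernel sums to one, blurring a constant image leaves it unchanged, which is a quick sanity check on any implementation of Eq. (1.1).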
Blur kernel information is usually hidden in regions with edges; if the edges of an image are seriously damaged, the blur kernel estimate will be inaccurate. Tai et al. [10] proposed a modified Richardson-Lucy (RL) method that incorporates a spatially varying blur model under a projective motion path. Yang et al. [11] combined blur kernel estimation with non-blind deconvolution, deblurring the image with a bilateral filter and a gradient-attenuated Richardson-Lucy algorithm. Dobeš et al. [13] and Goldstein and Fattal [5] estimated the kernel in the frequency domain; the blur kernel is then recovered using a phase retrieval algorithm with improved convergence and disambiguation capabilities. Deblurring approaches based on the spectral properties and edge information of an image have been presented in [5,14,15] to retrieve the blur kernel information. Another family of techniques imposes image priors on the deblurred results. Deshpande and Patnaik [16] proposed a motion deblurring algorithm based on the dual Fourier spectrum combined with bit-plane slicing and the Radon transform (RT) for accurate estimation of the PSF parameters (blur length and blur angle). Shao et al. [17] used a non-stationary Gaussian prior to estimate the salient edges of the image as cues for blur kernel estimation. He et al. [18] used different priors for the local region and the motion blur kernel to formulate a minimization energy function that alternates between blur kernel estimation and image restoration. Jia [19] relied on color mixtures to estimate the motion blur kernel of moving objects given their boundary alpha values. Levin et al. [20] used maximum a posteriori (MAP) estimation to estimate the blur kernel and obtain the deblurred result. Besides MAP methods [21,22], many other methods have been developed [13,23,24,25,26].
In this paper, we propose a motion deblurring method based on a fast PSF (FPSF) to achieve image restoration. The proposed system speeds up the running time, finds an optimal blur kernel, and obtains good deblurred image quality. To verify its reliability, the experimental data include both real motion-blurred images and artificially blurred images.
The rest of this paper is organized as follows. Section 2 describes the related techniques. Section 3 describes the proposed method, which includes blur kernel clustering, blur kernel integration, and the optimal blur kernel search. Section 4 presents the experimental results. Finally, a conclusion is given in Section 5.
In this section, we briefly describe the techniques related to our proposed approach.
A particular property of natural image scenes is the following power-law relationship [27,28]:
$$ |\hat{I}(\omega)|^2 \propto \|\omega\|^{-\beta}, \tag{2.1} $$
where ω denotes coordinates in the frequency domain and Î denotes the Fourier transform of a natural image I; according to the literature, β ≈ 2.
As previous research indicates, the blur information is hidden in the power law of the neighborhoods of edges; therefore, filtering an image that obeys Eq. (2.1) can recover this information, expressed as
$$ |\widehat{I*d}(\omega)|^2 = |\hat{I}(\omega)|^2 \cdot |\hat{d}(\omega)|^2 \approx c\,\|\omega\|^{-2}\,\|\omega\|^{2} = c, \tag{2.2} $$
where c is a constant and d is a first-order Laplacian filter. Thus, for a blurry image B = I ∗ k, this filtering process can be used to estimate the blur-kernel power spectrum |k̂(ω)|²:
$$ |\widehat{(B*d)}(\omega)|^2 = |\hat{I}(\omega)|^2 \cdot |\hat{d}(\omega)|^2 \cdot |\hat{k}(\omega)|^2 \approx c\,|\hat{k}(\omega)|^2. \tag{2.3} $$
The power spectrum of any signal F is related to its autocorrelation according to the Wiener-Khinchin theorem [29]:
$$ \hat{R}_F(\omega) = |\hat{F}(\omega)|^2, \tag{2.4} $$
where the autocorrelation is defined by $R_F(x) = (\bar{F} * F)(x)$. The spectral blur approximation in Eq. (2.3) thus has the real-space counterpart
$$ R_{B*d}(x) \approx c\,R_k(x). \tag{2.5} $$
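The Wiener-Khinchin relation behind Eqs. (2.4)-(2.5) is easy to verify numerically: the inverse FFT of a signal's power spectrum equals its circular autocorrelation. The one-dimensional sketch below is illustrative; the kernel estimation applies the same identity in 2D:

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.normal(size=256)

# Power spectrum |F_hat(w)|^2 of the signal
P = np.abs(np.fft.fft(F)) ** 2

# Wiener-Khinchin: the circular autocorrelation is the inverse FFT of the power spectrum
R_wk = np.real(np.fft.ifft(P))

# Direct circular autocorrelation R_F(x) = sum_t F(t) F(t + x)
R_direct = np.array([np.dot(F, np.roll(F, -x)) for x in range(F.size)])

assert np.allclose(R_wk, R_direct)
```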
Evidently, the power spectrum of a natural image varies by multiplicative factors along different directions, that is,
$$ |\hat{I}(\omega)|^2 \approx c_{\theta(\omega)} \cdot \|\omega\|^{-2}, \tag{2.6} $$
where θ(ω)=arctan(ωx,ωy) is the angle of the vector ω.
Blur kernel information, such as c_θ(ω) and the kernel phase, can be recovered by means of the Fourier slice theorem and the Wiener-Khinchin theorem, given autocorrelation functions computed from the input blurry image B(x). From Eq. (2.6), only the single parameter c_θ is unknown. Based on the Fourier slice theorem and the Wiener-Khinchin theorem, Eq. (2.3) can be rewritten in real space as Eq. (2.7) [5]:
$$ f_\theta(x) \approx c_\theta \cdot R_{P_\theta(k)}(x), \quad \theta \in [-\pi, \pi], \tag{2.7} $$
where P_θ is the projection of a 2D signal into 1D obtained by integrating it along the direction orthogonal to θ. By repeating this procedure for all θ, an approximation to the 2D blur-kernel power spectrum |k̂(ω)|² can be obtained; in [5], the procedure is repeated three times. An iterative phase-retrieval algorithm then recovers the blur kernel k from this approximation.
Phase recovery is also called phase retrieval. As described above, recovering the kernel k from its power spectrum |k̂|² requires estimating the phase component of k̂(ω); the spectrum alone leaves the phase unknown, so the algorithm iteratively switches between the Fourier and real-space domains. In addition, the input |k̂|² and the spatial constraints may not guarantee a unique solution, and, as discussed in [30], the iteration may converge to a local minimum. The phase retrieval procedure is therefore repeated multiple times from randomly initialized phase components to estimate the blur kernel.
The Gerchberg-Saxton (GS) algorithm [31] is a common method for phase retrieval, based on iterating the Fourier transform and inverse Fourier transform between the object domain and the Fourier domain. A hybrid input-output scheme is used to estimate the blur kernel in the iterative phase retrieval procedure under appropriate frequency/spatial-domain constraints [5,31]. Once the blur kernel is recovered, the blurry image can be deblurred through deconvolution. The phase retrieval procedure is briefly described as follows; its pseudocode is shown in Algorithm 1 (see [5] for details).
Step 1) Randomly generate the initial phase ϕ(ω) within [−π, π].
Step 2) Transform to the real-space signal g using the inverse Fourier transform.
Step 3) Transform g to ĝ and apply the Fourier-domain constraints.
Step 4) Transform back to real space as g2 using the phase information of ĝ.
Step 5) Compute R(x) using the space-domain constraints.
Step 6) Obtain the hybrid input-output constraint set Ω: the union of the region where R(x) < 0 and the region outside the s × s support.
Step 7) Repeat Steps 3-6 m times, then output kn by zeroing values in Ω, now the union of the region where g2(x) < 0 and the region outside the support.
Algorithm 1: Iterative phase retrieval

1: Input: kernel magnitude spectrum p(ω) = |k̂(ω)| and kernel size s
2: for n = 0 to N_guesses do
3:     // initialize the phase ϕ(ω) randomly
4:     sample ϕ(ω) uniformly from [−π, π]
5:     // transform to real space using the inverse Fourier transform
6:     g = F⁻¹(p · e^{iϕ})
7:     for m = 1 to N_inner do
8:         // apply Fourier-domain constraints
9:         g2 = F⁻¹((αp + (1 − α)|ĝ|) · e^{iϕ(ĝ)})
10:        // apply space-domain constraints
11:        R(x) = 2g2(x) − g(x)
12:        β = β₀ + (1 − β₀)(1 − exp(−(m/7)³))
13:        Ω = {x : R(x) < 0} ∪ {x : x ∉ [0, s] × [0, s]}
14:        g(x) = βg(x) + (1 − 2β)g2(x) if x ∈ Ω, else g2(x)
15:    end for
16:    Ω = {x : g2(x) < 0} ∪ {x : x ∉ [0, s] × [0, s]}
17:    kn(x) = 0 if x ∈ Ω, else g2(x)
18: end for
19: Output: kn
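A compact sketch of Algorithm 1 in Python follows. The parameter values α, β₀, the loop counts, and the return of all candidates are illustrative assumptions; see [5] for the settings actually used:

```python
import numpy as np

def phase_retrieval(p, s, n_guesses=3, n_inner=50, alpha=0.95, beta0=0.75, seed=0):
    """Recover a nonnegative s-by-s kernel from its magnitude spectrum p = |k_hat|.

    Sketch of Algorithm 1: random phase initialization, then alternating
    Fourier-domain (magnitude) and space-domain (support, positivity)
    constraints with a hybrid input-output update.
    """
    rng = np.random.default_rng(seed)
    support = np.zeros(p.shape, dtype=bool)
    support[:s, :s] = True                               # spatial support constraint
    candidates = []
    for _ in range(n_guesses):
        phi = rng.uniform(-np.pi, np.pi, p.shape)        # line 4: random initial phase
        g = np.real(np.fft.ifft2(p * np.exp(1j * phi)))  # line 6
        for m in range(1, n_inner + 1):
            G = np.fft.fft2(g)
            # line 9: blend the measured magnitude with |G|, keep the phase of G
            g2 = np.real(np.fft.ifft2((alpha * p + (1 - alpha) * np.abs(G))
                                      * np.exp(1j * np.angle(G))))
            beta = beta0 + (1 - beta0) * (1 - np.exp(-(m / 7.0) ** 3))  # line 12
            omega = (2 * g2 - g < 0) | ~support          # line 13: constraint violations
            g = np.where(omega, beta * g + (1 - 2 * beta) * g2, g2)     # line 14
        omega = (g2 < 0) | ~support                      # line 16
        candidates.append(np.where(omega, 0.0, g2))      # line 17
    # In the full system the best candidate is chosen by a quality score (NSM).
    return candidates
```

Each run yields one candidate kernel; as discussed below, a quality score is still needed to pick among them.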
However, the phase retrieval method is not guaranteed to produce the same kernel in each run, and it cannot by itself decide which kernel is the best, as shown in Fig. 1. Figure 1 shows thirty blur kernels, including symmetric and failed kernels, obtained after thirty runs of phase retrieval. If phase retrieval is repeated n times, it yields n blur kernels, from which an optimal kernel must then be selected; [5] simply takes the final kernel to deconvolve the blurry image. In our experiments, we utilize the normalized sparsity measure (NSM) [24] to select the optimal blur kernel from the n candidates and to estimate kernel quality.
As noted above, iterating n times yields n blur kernels, and the NSM value of the deconvolution result produced by each kernel can be calculated. Figure 1 shows an example of the blur kernels for thirty iterations; it is evident that symmetric relationships exist among the kernels and that the estimated kernel differs from iteration to iteration. The kernel quality measure must therefore test the symmetry of each blur kernel and assign it a score, which requires calculating the NSM score twice per kernel: with thirty kernels, the NSM is calculated sixty times. The smaller the NSM score, the better the reconstructed image, as shown in Fig. 2. According to our experimental results, the kernel with the minimum NSM value can be taken with confidence as a good kernel.
In our system, we reduce the number of NSM computations and thus speed up the acquisition of an optimal blur kernel from the n candidates.
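The normalized sparsity measure of [24] is the ratio of the ℓ1 to the ℓ2 norm of the image gradients; lower scores favor sharper results. A minimal sketch:

```python
import numpy as np

def nsm(img):
    # Normalized sparsity measure of [24]: ||grad I||_1 / ||grad I||_2.
    # Sparse (sharp-edge) gradients give low scores; dense (blurry) ones give high scores.
    gx = np.diff(img, axis=1).ravel()
    gy = np.diff(img, axis=0).ravel()
    g = np.concatenate([gx, gy])
    l2 = np.linalg.norm(g)
    return float(np.abs(g).sum() / l2) if l2 > 0 else 0.0
```

A sharp step edge scores lower than a smooth ramp covering the same range, which is why minimizing the NSM over candidate kernels tends to select the kernel whose deconvolution result is sharpest.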
Natural image signals are highly structured: neighboring pixels exhibit strong dependencies that carry important information about the structure of objects in the visual scene. To estimate the structural fidelity of the reconstructed image after deconvolution, we adopt the structural-similarity-based image quality measure (SSIM) [32] instead of the mean squared error (MSE). SSIM computes the structural similarity between a reference signal and a distorted signal. Since an overall image quality measure is usually required, the mean SSIM (MSSIM), derived from SSIM, is used for this purpose and exhibits much better consistency with qualitative visual appearance. In our experiments, we adopt the MSSIM to estimate the quality of the reconstructed image. The MSSIM index is defined as
$$ \mathrm{MSSIM}(X, Y) = \frac{1}{M}\sum_{j=1}^{M} \mathrm{SSIM}(x_j, y_j), \tag{2.8} $$
where X and Y are the reference and distorted images, respectively; x_j and y_j are the image contents in the jth local window; and M is the number of local windows in the image. The higher the MSSIM, the more similar the two images.
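A simplified sketch of Eq. (2.8) follows. For clarity it averages SSIM over non-overlapping square windows, whereas [32] uses an 11×11 Gaussian-weighted sliding window; the constants follow the usual choice for 8-bit images:

```python
import numpy as np

def ssim_window(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    # SSIM of one local window: luminance, contrast and structure terms combined [32].
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)))

def mssim(X, Y, win=8):
    # Eq. (2.8): mean of SSIM over the M local windows of the image.
    scores = [ssim_window(X[i:i + win, j:j + win], Y[i:i + win, j:j + win])
              for i in range(0, X.shape[0] - win + 1, win)
              for j in range(0, X.shape[1] - win + 1, win)]
    return float(np.mean(scores))
```

Identical images score 1; any distortion pulls the score below 1.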
In this paper, we propose an image deblurring method using a fast point spread function (FPSF) to search for the optimal blur kernel efficiently and quickly. Because the method of [5] is time-consuming, our approach further improves the computational time and speeds up the selection of the optimal blur kernel for deblurring the blurred image. Figure 3 illustrates the flowchart of the proposed system; the details of the procedures are described below.
Blur kernel estimation is seriously affected by high-frequency components [5]. To reduce this influence and improve kernel estimation for a blurred image, we apply a Gaussian filter [33] of size 5×5 with σ = 0.6 and zero mean; the standard deviation is proportional to the size of the neighborhood on which the filter operates, so pixels more distant from the center of the operator have smaller influence. Filtering the blurry image improves the estimated power spectrum of the 2D blur kernel, after which the optimal blur kernel can be acquired by our proposed method.
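The pre-filter can be built directly; a sketch with the size and σ given above (normalization to unit sum is our assumption):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=0.6):
    # 5x5 Gaussian pre-filter used to suppress high-frequency components
    # before blur kernel estimation; normalized to sum to one.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()
```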
After the blur kernel estimation and iterative phase retrieval described in the previous section, a kernel is obtained for each phase retrieval run. However, as Figure 1 makes clear, these kernels all differ, so there is no guarantee that any given one will yield a good deblurring result. For this reason, we propose the FPSF method to estimate an optimal blur kernel, with which the blurry image can then be deblurred by deconvolution. Figure 4 illustrates the flowchart of the FPSF.
The method consists of blur kernel clustering and blur kernel integration. The clustering step, based on MSSIM, classifies the blur kernels obtained by phase retrieval; the integration step then finds the optimal kernel from the resulting clusters. For clustering, we use the MSSIM to estimate the similarity of all candidate kernels; kernels with a high mutual MSSIM are placed in the same cluster. The clustering procedure is as follows.
First of all, assume there are n candidate blur kernels k1, …, kn.
Step 1) Select the first kernel as the initial cluster base.
Step 2) Calculate the MSSIM values between this base and the remaining candidate kernels.
Step 3) If the MSSIM value is greater than or equal to a threshold, the candidate kernel is assigned to the corresponding cluster.
Step 4) If the MSSIM value is less than the threshold, the candidate becomes a new base and starts a new cluster.
Step 5) Repeat Steps 2-4 until all candidate kernels are clustered.
By the nature of the MSSIM, the higher the value, the more similar the two kernels; according to our experiments, the threshold is set to 0.9. An example of the kernel clustering algorithm is illustrated as follows, assuming nine blur kernels are to be clustered.
First, kernel k1 is chosen as the base and the MSSIM values are computed. The first cycle yields the first cluster, based on k1, which includes kernels k2, k6, and k8; kernel k3 then starts a new cluster. After the clustering procedure completes, all candidate kernels are assigned to their corresponding clusters.
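Steps 1-5 amount to a greedy threshold clustering; a sketch (the normalized-correlation similarity here is a simple stand-in for the MSSIM used in the paper):

```python
import numpy as np

def correlation(a, b):
    # Zero-mean normalized correlation: a simple stand-in similarity in [-1, 1].
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / d) if d > 0 else 1.0

def cluster_kernels(kernels, threshold=0.9, similarity=correlation):
    # Greedy clustering (Steps 1-5): the first unassigned kernel becomes a
    # cluster base; a candidate joins the first cluster whose base it matches
    # with similarity >= threshold, otherwise it starts a new cluster.
    clusters = []                      # each cluster is a list of kernel indices
    for i, k in enumerate(kernels):
        for c in clusters:
            if similarity(kernels[c[0]], k) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])       # new base, new cluster
    return clusters
```

Near-identical kernels land in one cluster while a dissimilar kernel opens a new one, mirroring the k1/k3 example above.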
Based on the above clustering process, n blur kernels are classified into C_m kernel clusters, where m ≤ n. As described above, a good kernel can be obtained by computing the NSM sixty times, but this is very time-consuming. To reduce the number of NSM computations, we propose a blur kernel integration technique consisting of mean blur kernel calculation and refining, described as follows.
◆ Mean blur kernel: We calculate an average blur kernel for each kernel cluster over the relative coordinates, defined as
$$ K_{m,\mathrm{avg}}(x, y) = \frac{1}{g}\sum_{i=1}^{g} C_m(i)(x, y), \tag{3.1} $$
where g denotes the number of kernels in the cluster, m indexes the kernel clusters, and C_m(i)(x, y) denotes the value at coordinates (x, y) of the ith kernel in the mth cluster.
◆ Refining: Owing to differences between the kernels in a cluster, the average kernel may contain noise (see the result numbered 1 in Figure 5). As described above, noise degrades the deblurring process, so to reduce its influence while keeping the important kernel information, we refine the mean blur kernel using a weight matrix. The weight matrix of a kernel cluster counts the number of nonzero kernel values at each coordinate within the cluster and is defined as
$$ w_m(x, y) = \begin{cases} w_m(x, y) + 1, & \text{if } C_m(i)(x, y) > 0, \\ w_m(x, y), & \text{otherwise,} \end{cases} \tag{3.2} $$
where w_m(x, y) is initialized to zero for each of the m clusters. An average weight value is computed by
$$ \mathrm{aveW}_m = \frac{\sum_x \sum_y w_m(x, y)}{S_m}, \tag{3.3} $$
where S_m is the number of nonzero kernel values in the corresponding cluster. The refined mean blur kernel is expressed as
$$ RK_{m,\mathrm{avg}}(x, y) = \begin{cases} K_{m,\mathrm{avg}}(x, y), & \text{if } w_m(x, y) \ge \mathrm{aveW}_m, \\ 0, & \text{otherwise.} \end{cases} \tag{3.4} $$
Figure 5 illustrates an example of kernel integration processing in a kernel cluster.
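Eqs. (3.1)-(3.4) for one cluster can be sketched as follows; interpreting S_m as the number of nonzero positions in the weight matrix is our reading of the text:

```python
import numpy as np

def integrate_cluster(kernels):
    # Blur kernel integration for one cluster: mean kernel plus weight-matrix refining.
    K = np.stack(kernels)                        # g kernels, shape (g, h, w)
    K_avg = K.mean(axis=0)                       # Eq. (3.1): mean blur kernel
    w = (K > 0).sum(axis=0).astype(float)        # Eq. (3.2): nonzero counts per pixel
    S = np.count_nonzero(w)                      # positions supported by any kernel
    ave_w = w.sum() / S if S > 0 else 0.0        # Eq. (3.3): average weight
    return np.where(w >= ave_w, K_avg, 0.0)      # Eq. (3.4): keep well-supported pixels
```

Pixels supported by only a few kernels of the cluster fall below the average weight and are zeroed out, which is the noise-suppression effect the refining step is designed for.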
From the blur kernel integration we obtain a refined mean blur kernel for each kernel cluster. Applying the kernel quality measure of subsection 2.3 to the m refined mean kernels then yields the best mean blur kernel and its corresponding cluster. Because the symmetry of each cluster was recorded during the cluster search, the computational time is greatly reduced and an optimal blur kernel can be found quickly. The procedure for searching an optimal blur kernel (OBK) is as follows.
Step 1) Given the set of blur kernels belonging to the best kernel cluster, together with the corresponding symmetric property of the phase.
Step 2) Deconvolve the blurred image with each of these kernels.
Step 3) Compute the NSM values of all deconvolution results.
Step 4) Find the minimum NSM value and its corresponding kernel; this kernel serves as the optimal kernel.
To verify the performance of our proposed method, the experimental results are compared with Goldstein and Fattal [5] and Krishnan et al. [24] in terms of the visual quality of the reconstructed image and the computational time. Using the above procedures, we obtain the optimal kernel for each blurry image; each of the three color components (red, green, blue) is then deconvolved with this optimal kernel using the non-blind deconvolution method of [34], after which the blurry color image is reconstructed.
All experiments are implemented in Microsoft Visual Studio 2010 C# on a computer with an Intel Core i5 dual-core 3.2 GHz CPU and 4 GB RAM running 64-bit Windows 7.
First, we construct the experimental data using predefined blur kernels. The number of predefined kernels is a trade-off between the objectivity and the variety of the experimental data. In our experiments, we create the ten blur kernels of size 21×21 shown in Figure 6 to simulate ten kinds of blurred images. The database consists of ten images: four of 768×1024, three of 682×1024, two of 1024×682, and one of 720×960 pixels. Blurring each image with the ten blur kernels yields 100 motion-blurred test images. To prevent noise from disturbing the kernel estimation, all experimental data are first filtered with a Gaussian filter; after filtering, our proposed method obtains a more convergent blur kernel. Here we use a Gaussian filter with standard deviation σ = 0.6 and run thirty iterations of phase retrieval in our experiments.
For performance measurement, we use the peak signal-to-noise ratio (PSNR) and the MSSIM. The PSNR and mean square error (MSE) are defined as
$$ \mathrm{PSNR} = 10\log_{10}\frac{S_{\mathrm{Max}}^2}{\mathrm{MSE}}, \qquad \mathrm{MSE} = \frac{1}{h \times w}\sum_{i=1}^{h}\sum_{j=1}^{w} \left|I_1(i,j) - I_2(i,j)\right|^2, \tag{4.1} $$
where I1(i, j) and I2(i, j) denote the reconstructed image and the original one at coordinates (i, j), and h and w denote the height and width of the image, respectively. For a gray-level image, S_Max is 255.
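Eq. (4.1) in code form (a direct transcription; returning an infinite value for identical images is a conventional choice):

```python
import numpy as np

def psnr(I1, I2, s_max=255.0):
    # Eq. (4.1): PSNR in dB from the MSE between reconstruction I1 and reference I2.
    mse = np.mean((np.asarray(I1, float) - np.asarray(I2, float)) ** 2)
    return float('inf') if mse == 0 else float(10.0 * np.log10(s_max ** 2 / mse))
```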
Because our proposed method modifies the Goldstein and Fattal method, the power spectrum of the blur kernel may differ between phase retrieval iterations. To obtain stable measurements, blur kernel estimation and deconvolution are therefore repeated ten times for each blurry image. In addition, since the blur kernel size strongly affects both the execution time and the reconstructed image quality, we test different kernel sizes to evaluate the execution speed. We first take the average execution time of blur kernel estimation over ten runs for each kernel size. Figure 7 shows the average execution time for the different blur kernel sizes on our experimental data: the execution time increases with the kernel size, and our proposed method is faster than the other methods. Considering both the computational time and the visual quality of the reconstructed images, we adopt a blur kernel of size 21×21.
Table 1 lists the PSNR and MSSIM values. Deblurring results for a subset of the tests are shown in Figures 8-9. Besides the synthetic test data, we also apply our proposed method to real motion-blurred images, as shown in Figures 10-11.
| Kernel | Proposed (PSNR/MSSIM) | Goldstein and Fattal [5] | Krishnan et al. [24] |
| --- | --- | --- | --- |
k1 | 22.13/0.94 | 22.15/0.81 | 20.47/0.70 |
k2 | 23/0.96 | 23.24/0.85 | 20.36/0.73 |
k3 | 23.95/0.96 | 24.36/0.53 | 25.94/0.85 |
k4 | 22.6/0.95 | 22.47/0.83 | 22.04/0.79 |
k5 | 21.69/0.93 | 21.48/0.79 | 20.16/0.65 |
k6 | 22.42/0.94 | 22.49/0.82 | 22.8/0.83 |
k7 | 19.43/0.88 | 19.33/0.68 | 18.77/0.57 |
k8 | 24.27/0.95 | 24.28/0.95 | 21.16/0.88 |
k9 | 22.13/0.94 | 22.14/0.80 | 21.63/0.78 |
k10 | 23.4/0.95 | 23.54/0.82 | 20.23/0.66 |
Ave. | 22.5/0.94 | 22.55/0.82 | 21.36/0.75 |
The experimental results have been presented above. In computational time, our proposed method is superior to Goldstein and Fattal's method and Krishnan et al.'s method, as shown in Figure 7. In MSSIM and PSNR, our proposed method is superior to Krishnan et al.'s method and close to Goldstein and Fattal's method, as shown in Table 1. For real blurry images, because many of the factors that caused the motion blur are unknown beforehand, the recovered image can still fail, as shown in Figure 11. In summary, the overall performance in MSSIM and computational time is superior to that of the methods of Goldstein and Fattal and Krishnan et al.
In this paper, we have proposed an image deblurring method based on the FPSF and clustering to recover the sharp image. By applying an intelligent computing strategy to the estimation, the FPSF method substantially reduces the computational complexity while estimating the optimal blur kernel efficiently. The experimental results show that our proposed algorithm reduces the computational time effectively and restores blurry images with good visual quality, which can enhance the image quality shown on many kinds of viewing devices.
There is no conflict of interest in this paper.
[1] H. Jaeger, The "echo state" approach to analysing and training recurrent neural networks — with an erratum note, German National Research Center for Information Technology, Bonn, Germany, 148 (2001), 13.
[2] W. Maass, T. Natschläger, H. Markram, Real-time computing without stable states: A new framework for neural computation based on perturbations, Neural Comput., 14 (2002), 2531–2560. https://doi.org/10.1162/089976602760407955
[3] H. Jaeger, H. Haas, Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication, Science, 304 (2004), 78–80. https://doi.org/10.1126/science.1091277
[4] Z. Lu, J. Pathak, B. Hunt, M. Girvan, R. Brockett, E. Ott, Reservoir observers: Model-free inference of unmeasured variables in chaotic systems, Chaos, 27 (2017), 041102. https://doi.org/10.1063/1.4979665
[5] J. Pathak, Z. Lu, B. R. Hunt, M. Girvan, E. Ott, Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data, Chaos, 27 (2017), 121102. https://doi.org/10.1063/1.5010300
[6] J. Pathak, B. Hunt, M. Girvan, Z. Lu, E. Ott, Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach, Phys. Rev. Lett., 120 (2018), 024102. https://doi.org/10.1103/PhysRevLett.120.024102
[7] L. Appeltant, M. C. Soriano, G. Van der Sande, J. Danckaert, S. Massar, J. Dambre, et al., Information processing using a single dynamical node as complex system, Nat. Commun., 2 (2011), 1–6. https://doi.org/10.1038/ncomms1476
[8] A. Rodan, P. Tino, Minimum complexity echo state network, IEEE Trans. Neural Networks, 22 (2010), 131–144. https://doi.org/10.1109/TNN.2010.2089641
[9] A. Griffith, A. Pomerance, D. J. Gauthier, Forecasting chaotic systems with very low connectivity reservoir computers, Chaos, 29 (2019), 123108. https://doi.org/10.1063/1.5120710
[10] M. Buehner, P. Young, A tighter bound for the echo state property, IEEE Trans. Neural Networks, 17 (2006), 820–824. https://doi.org/10.1109/TNN.2006.872357
[11] M. Lukosevicius, H. Jaeger, Overview of reservoir recipes, Technical Report, Jacobs University Bremen, 2007.
[12] D. Verstraeten, Reservoir Computing: Computation with Dynamical Systems, Ph.D. thesis, Ghent University, 2009.
[13] I. B. Yildiz, H. Jaeger, S. Kiebel, Re-visiting the echo state property, Neural Networks, 35 (2012), 1–9. https://doi.org/10.1016/j.neunet.2012.07.005
[14] G. Manjunath, H. Jaeger, Echo state property linked to an input: Exploring a fundamental characteristic of recurrent neural networks, Neural Comput., 25 (2013), 671–696. https://doi.org/10.1162/neco_a_00411
[15] S. Basterrech, Empirical analysis of the necessary and sufficient conditions of the echo state property, in 2017 International Joint Conference on Neural Networks, IEEE, (2017), 888–896. https://doi.org/10.1109/IJCNN.2017.7965946
[16] J. Jiang, Y. C. Lai, Model-free prediction of spatiotemporal dynamical systems with recurrent neural networks: Role of network spectral radius, Phys. Rev. Res., 1 (2019), 033056. https://doi.org/10.1103/PhysRevResearch.1.033056
[17] C. G. Langton, Computation at the edge of chaos: Phase transitions and emergent computation, Phys. D, 42 (1990), 12–37. https://doi.org/10.1016/0167-2789(90)90064-V
[18] N. Bertschinger, T. Natschläger, Real-time computation at the edge of chaos in recurrent neural networks, Neural Comput., 16 (2004), 1413–1436. https://doi.org/10.1162/089976604323057443
[19] N. Bertschinger, T. Natschläger, R. Legenstein, At the edge of chaos: Real-time computations and self-organized criticality in recurrent neural networks, Adv. Neural Inf. Process. Syst., 17 (2004).
[20] B. Schrauwen, D. Verstraeten, J. Van Campenhout, An overview of reservoir computing: theory, applications and implementations, in Proceedings of the 15th European Symposium on Artificial Neural Networks, (2007), 471–482.
[21] A. Haluszczynski, J. Aumeier, J. Herteux, C. Räth, Reducing network size and improving prediction stability of reservoir computing, Chaos, 30 (2020), 063136. https://doi.org/10.1063/5.0006869
[22] Q. Zhu, H. F. Ma, W. Lin, Detecting unstable periodic orbits based only on time series: When adaptive delayed feedback control meets reservoir computing, Chaos, 29 (2019), 093125. https://doi.org/10.1063/1.5120867
[23] J. W. Hou, H. F. Ma, D. He, J. Sun, Q. Nie, W. Lin, Harvesting random embedding for high-frequency change-point detection in temporal complex, Natl. Sci. Rev., 2022. https://doi.org/10.1093/nsr/nwab228
[24] X. Ying, S. Y. Leng, H. F. Ma, Q. Nie, Y. C. Lai, W. Lin, Continuity scaling: A rigorous framework for detecting and quantifying causality accurately, Research, 2022 (2022), 9870149. https://doi.org/10.34133/2022/9870149
Kernel | Proposed | [5] | [24]
k1 | 22.13/0.94 | 22.15/0.81 | 20.47/0.70 |
k2 | 23/0.96 | 23.24/0.85 | 20.36/0.73 |
k3 | 23.95/0.96 | 24.36/0.53 | 25.94/0.85 |
k4 | 22.6/0.95 | 22.47/0.83 | 22.04/0.79 |
k5 | 21.69/0.93 | 21.48/0.79 | 20.16/0.65 |
k6 | 22.42/0.94 | 22.49/0.82 | 22.8/0.83 |
k7 | 19.43/0.88 | 19.33/0.68 | 18.77/0.57 |
k8 | 24.27/0.95 | 24.28/0.95 | 21.16/0.88 |
k9 | 22.13/0.94 | 22.14/0.80 | 21.63/0.78 |
k10 | 23.4/0.95 | 23.54/0.82 | 20.23/0.66 |
Ave. | 22.5/0.94 | 22.55/0.82 | 21.36/0.75 |
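The paired values in the table read as PSNR (dB) / SSIM scores per blur kernel; this interpretation is inferred from the value ranges, as the excerpt does not label the metric. For reference, PSNR for 8-bit images can be computed as:

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a restored image."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher PSNR (and SSIM closer to 1) indicates a restoration closer to the reference sharp image.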