
Image deblurring or deconvolution is an important topic in image processing due to its wide range of applications in astronomical imaging, robot vision, medical image processing, remote sensing, virtual reality and many others. In the field of image deconvolution, the use of non-linear variational methods has received a great deal of attention in the last few decades. When applying these methods to blurred and noisy images, researchers face two main difficulties: one is the non-linearity, and the other is solving the underlying large matrix system. In this paper, our primary goal is to address these two computational difficulties for a high-order variational model. The total variation (TV) model [1,29,36]
\Im(u)=\frac{1}{2}\|\vec{K}u-z\|^{2}+\alpha\int_{\Omega}|\nabla u|\,dx | (1.1) |
is the most famous non-linear variational model. It has many desirable properties, but it transforms smooth functions into piecewise-constant functions, which generates staircase effects in the deblurred images; that is why the resulting images look blocky. To overcome the problem of staircase effects, one idea is to use mean curvature (MC)-based regularization models [12,19,31,39,40,41],
\Im(u)=\frac{1}{2}\|\vec{K}u-z\|^{2}+\frac{\alpha}{2}\int_{\Omega}\Big(\nabla\cdot\frac{\nabla u}{|\nabla u|}\Big)^{2}dx. | (1.2) |
The MC-based regularization models not only remove the staircase effects but also preserve the edges in the deblurred images. However, the Euler-Lagrange equations of mean curvature models are non-linear fourth-order integro-differential equations; the non-linear high-order term comes from the MC functional. Furthermore, the discretization of an integro-differential equation results in a large ill-conditioned system that causes numerical algorithms like Krylov subspace methods (e.g., the conjugate gradient (CG) method) to converge slowly. The Jacobian matrix of the ill-conditioned system has a block-banded structure with a large bandwidth. An efficient and robust numerical solution is therefore a key issue for MC-based regularization methods, owing to the ill-conditioned system and the high non-linearity.
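For intuition, the following minimal NumPy sketch (an illustration, not the authors' discretization) evaluates the discrete mean curvature κ(u) = ∇·(∇u/|∇u|) of an image; a small constant beta is added under the square root, as is done later in Eq (2.5), to avoid division by zero.

```python
import numpy as np

def mean_curvature(u, beta=1e-6):
    """Discrete kappa(u) = div( grad u / |grad u| ); beta avoids division by zero."""
    ux = np.gradient(u, axis=1)          # central differences along x
    uy = np.gradient(u, axis=0)          # central differences along y
    norm = np.sqrt(ux**2 + uy**2 + beta)
    return np.gradient(ux / norm, axis=1) + np.gradient(uy / norm, axis=0)
```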
In this paper, we propose an augmented Lagrangian method for an MC-based image deblurring problem. Augmented Lagrangian methods have already been successfully applied to optimization problems in image processing and computer vision [9,22,32,37], where they achieve much higher speeds than other numerical methods. Augmented Lagrangian methods decompose the original nontrivial minimization problem into a number of subproblems that are easy and fast to solve: some can be handled by fast solvers like the fast Fourier transform (FFT), and others have closed-form solutions. Therefore, the construction of an efficient augmented Lagrangian method for the minimization of a given functional depends on whether one can break the original functional down into simple ones. In our proposed augmented Lagrangian method, we use a cell-centered finite difference scheme along with the midpoint quadrature rule to discretize the subproblem for the primary variable u (the image) in the associated Lagrangian equations. This leads to a large matrix system whose coefficient matrix is symmetric positive definite (SPD), so the CG method is suitable for its solution. However, the SPD matrix is quite dense, which causes the CG method to converge slowly. To overcome this slow convergence, we introduce a new circulant preconditioner and, instead of applying the ordinary CG method, use a preconditioned CG (PCG) method. With the proposed preconditioner, fast convergence is observed in the numerical results.
The contributions of the paper are as follows: (i) we present a preconditioned augmented Lagrangian method for an MC-based image deblurring model; (ii) we introduce a new circulant preconditioner that overcomes the slow convergence of the CG method inside the augmented Lagrangian scheme; and (iii) we present a comparison with existing numerical methods to demonstrate the efficiency of the preconditioned augmented Lagrangian method. The paper is organized as follows. The second section explains the image deblurring problem. The third section describes our proposed augmented Lagrangian method for an MC-based image deblurring model; the cell discretization and the matrix system are also presented there. The fourth section describes the proposed circulant preconditioner, and the fifth section discusses the numerical experiments. We present the conclusions in the last section of the manuscript.
We start the paper by presenting a concise description of the image deblurring problem. The mathematical relationship between u (the original or true image) and z (the blurred image) is
z=→Ku+ϵ. | (2.1) |
The term ϵ represents a noise function, which can be Gaussian noise, Brownian noise, salt-and-pepper noise, etc. In our experiments, we have only considered Gaussian noise. →K represents the blurring operator,
(→Ku)(x)=∫Ωk(x,y)u(y)dy,x∈Ω, | (2.2) |
where k(x,y)=k(x−y) is known as a translation-invariant kernel. Figure 1 depicts the blurring process.
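For concreteness, a minimal NumPy sketch of the degradation model (2.1) is given below; it forms z = →Ku + ε by applying a translation-invariant Gaussian kernel through FFT-based (periodic) convolution. The kernel size, standard deviation and noise level are illustrative choices, not the values used in the experiments of Section 5.

```python
import numpy as np

def gaussian_kernel(n, sigma):
    """Illustrative n x n Gaussian point-spread function, normalized to sum 1."""
    ax = np.arange(n) - (n - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def blur(u, psf):
    """Apply the translation-invariant operator K by FFT-based periodic convolution."""
    pad = np.zeros_like(u, dtype=float)
    m = psf.shape[0]
    pad[:m, :m] = psf
    pad = np.roll(pad, (-(m // 2), -(m // 2)), axis=(0, 1))  # centre the PSF
    return np.real(np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(pad)))

rng = np.random.default_rng(0)
u_true = rng.random((128, 128))                 # stand-in for the true image u
z = blur(u_true, gaussian_kernel(15, 2.0)) \
    + 1e-3 * rng.standard_normal(u_true.shape)  # z = Ku + eps with Gaussian noise
```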
If →K=I (the identity operator), then Problem (2.1) becomes an image denoising problem. If the blurring operator →K is known, then the technique is referred to as non-blind deconvolution [14,34,38]. However, if the blurring operator is unknown, then it is called blind deconvolution [6,19,23]. Our research focuses on non-blind deconvolution. →K is a Fredholm integral operator of the first kind, so it is compact; that is why Problem (2.1) is ill-posed [1,35,36]. The image intensity function u is defined on a square Ω⊂R2. The position of a pixel in Ω is given by x=(x,y). Additionally, |x|=√(x^2+y^2) and ‖⋅‖ denote the Euclidean norm and the L2(Ω) norm, respectively. The recovery of u from z makes Problem (2.1) an unstable inverse problem [1,35,36]. The use of the MC regularization functional [12,31,39,41],
J(u)=\int_{\Omega}\kappa(u)^{2}dx=\int_{\Omega}\Big(\nabla\cdot\frac{\nabla u}{|\nabla u|}\Big)^{2}dx, | (2.3) |
makes Problem (2.1) stable. Problem (2.1) then consists of finding a u that minimizes the functional
\Im(u)=\frac{1}{2}\|\vec{K}u-z\|^{2}+\frac{\alpha}{2}J(u),\quad \alpha>0. | (2.4) |
In [41], Zhu and Chan explained the well-posedness of Problem (2.4) for the case of synthetic image denoising. The Euler-Lagrange equations of Problem (2.4) are as follows:
\vec{K}^{*}(\vec{K}u-z)+\alpha\nabla\cdot\left(\frac{\nabla\kappa}{\sqrt{|\nabla u|^{2}+\beta^{2}}}-\frac{\nabla\kappa\cdot\nabla u}{\big(\sqrt{|\nabla u|^{2}+\beta^{2}}\big)^{3}}\nabla u\right)=0\quad \text{in }\Omega, | (2.5) |
\frac{\partial u}{\partial n}=0\quad \text{on }\partial\Omega, | (2.6) |
\kappa(u)=0\quad \text{on }\partial\Omega, | (2.7) |
where →K∗ is the adjoint operator of →K and n is the outward unit normal. The constant β>0 is added to make the MC functional differentiable at zero. Equation (2.5) is a non-linear fourth-order differential equation.
The MC-based regularization models not only remove the staircase effects but also preserve edges during the recovery of digital images. However, the Euler-Lagrange equations of the MC model lead to a non-linear fourth-order integro-differential equation, because the MC functional generates a non-linear high-order term. When developing an efficient numerical method, one key issue is determining a proper approximation of the MC functional. In the next section, the non-linearity of the MC term is treated by introducing three new variables for the associated Lagrangian.
In the literature [37,40,42,43], one can find a lot of work on the augmented Lagrangian method (ALM) for the MC-based image denoising problem, but not for the image deblurring problem. In this paper, we extend the ALM to the image deblurring problem. To develop an ALM for the image deblurring model described by Problem (2.4), we introduce three new variables, i.e., w, →v and →t, and consider the following constrained minimization problem
\min_{u,w,\vec{v},\vec{t}}\Big[\frac{1}{2}\int(\vec{K}u-z)^{2}+\int_{\Omega}|w|\Big],\quad \text{with }w=\nabla\cdot\vec{v},\ \vec{v}=\frac{\vec{t}}{\sqrt{|\vec{t}|^{2}+\beta}},\ \vec{t}=\nabla u. | (3.1) |
The associated augmented Lagrangian functional in Problem (3.1) is
L(u,w,\vec{v},\vec{t},\lambda_{1},\vec{\lambda}_{2},\vec{\lambda}_{3})=\frac{1}{2}\int(\vec{K}u-z)^{2}+\alpha\int|w| | (3.2) |
+\frac{r_{1}}{2}\int(w-\nabla\cdot\vec{v})^{2}+\int\lambda_{1}(w-\nabla\cdot\vec{v}) |
+\frac{r_{2}}{2}\int\Big|\vec{v}-\frac{\vec{t}}{\sqrt{|\vec{t}|^{2}+\beta}}\Big|^{2}+\int\vec{\lambda}_{2}\cdot\Big(\vec{v}-\frac{\vec{t}}{\sqrt{|\vec{t}|^{2}+\beta}}\Big) |
+\frac{r_{3}}{2}\int(\vec{t}-\nabla u)^{2}+\int\vec{\lambda}_{3}\cdot(\vec{t}-\nabla u), |
where r1, r2 and r3 are the penalization parameters, λ1∈R and →λ2,→λ3∈R3 are Lagrange multipliers, and →v,→t∈R3. One of the key issues with ALMs is handling the subproblems involving the term →t/√(|→t|^2+β)=▽u/√(|▽u|^2+β). This type of term appears because of the mean curvature functional ▽⋅(▽u/|▽u|). One remedy is to introduce the variable →t=⟨▽u,1⟩ instead of →t=▽u for the curvature term. Accordingly, we will use →v=⟨▽u,1⟩/√(|⟨▽u,1⟩|^2+β). Then, our MC model becomes
\min_{u,w,\vec{v},\vec{t}}\Big[\frac{1}{2}\int(\vec{K}u-z)^{2}+\int_{\Omega}|w|\Big],\quad \text{with }w=\nabla\cdot\vec{v},\ \vec{v}=\frac{\vec{t}}{\sqrt{|\vec{t}|^{2}+\beta}},\ \vec{t}=\langle\nabla u,1\rangle. | (3.3) |
Then, the associated augmented Lagrangian functional is
L(u,w,\vec{v},\vec{n},\vec{t},\lambda_{1},\vec{\lambda}_{2},\lambda_{3},\vec{\lambda}_{4})=\frac{1}{2}\int(\vec{K}u-z)^{2}+\alpha\int|w| | (3.4) |
+r_{1}\int(|\vec{t}|-\vec{t}\cdot\vec{n})+\int\lambda_{1}(|\vec{t}|-\vec{t}\cdot\vec{n}) |
+\frac{r_{2}}{2}\int|\vec{t}-\langle\nabla u,1\rangle|^{2}+\int\vec{\lambda}_{2}\cdot(\vec{t}-\langle\nabla u,1\rangle) |
+\frac{r_{3}}{2}\int(w-\partial_{x}v_{1}-\partial_{y}v_{2})^{2}+\int\lambda_{3}(w-\partial_{x}v_{1}-\partial_{y}v_{2}) |
+\frac{r_{4}}{2}\int|\vec{v}-\vec{n}|^{2}+\int\vec{\lambda}_{4}\cdot(\vec{v}-\vec{n})+\delta_{R}(\vec{n}), |
where r1, r2, r3 and r4 are the penalty parameters, λ1,λ3∈R and →λ2,→λ4∈R3 are Lagrange multipliers, and →v,→n,→t∈R3. Here, we have introduced →n to relax the variable →v. R={→n∈L2(Ω):|→n|≤1 a.e. in Ω}, and δR(→n) is the characteristic function of R, which can be expressed as
\delta_{R}(\vec{n})=\begin{cases}0, & \vec{n}\in R\\ +\infty, & \text{otherwise.}\end{cases} | (3.5) |
To make the term |→t|−→t⋅→n always non-negative, we need →n∈R. To find the minimizers of Problem (3.3), we need to find the saddle points of the augmented Lagrangian functional given by Eq (3.4). So, we fix all variables in Eq (3.4) except one particular variable and find a critical point of the induced functional to update that variable. We repeat the same process for all variables in Eq (3.4). The Lagrange multipliers are advanced once all variables have been updated. The process continues until all variables in Eq (3.4) converge to a steady state. The ALM is summarized in the following algorithm; a schematic code sketch of this alternating procedure is given after Algorithm 1.
Algorithm 1: The Augmented Lagrangian method for MC model |
Step-Ⅰ: Initialize u0,w0,→v0,→n0,→t0, and λ01,→λ02,λ03,→λ04. |
Step-Ⅱ: For k=0,1,2,...: Compute (uk,wk,→vk,→nk,→tk) as an (approximate) minimizer of the augmented Lagrangian functional with the Lagrange multiplier λk−11,→λk−12,λk−13,→λk−14, i.e., |
(uk,wk,→vk,→nk,→tk)≈argminL(u,w,→v,→n,→t,λk−11,→λk−12,λk−13,→λk−14).(3.6) |
Step-Ⅲ: Update the Lagrangian multipliers |
λk1=λk−11+r1(|→tk|−→tk⋅→nk),(3.7) |
→λk2=→λk−12+r2(→tk−⟨▽uk,1⟩),(3.8) |
λk3=λk−13+r3(wk−∂xvk1−∂yvk2),(3.9) |
→λk4=→λk−14+r4(→vk−→nk),(3.10) |
where →n=⟨n1,n2,n3⟩. |
Step-Ⅳ: Stop the iteration if relative residuals are smaller than a threshold ϵr. |
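The following Python skeleton (a sketch only: the five subproblem solvers are left as user-supplied callbacks, and the difference operators assume periodic boundaries with h = 1) mirrors the structure of Algorithm 1 — the alternating sweep of Step II, the multiplier updates (3.7)–(3.10) of Step III and the relative-residual test of Step IV.

```python
import numpy as np

def dxp(f): return np.roll(f, -1, axis=1) - f   # forward difference d+x
def dyp(f): return np.roll(f, -1, axis=0) - f   # forward difference d+y
def dxm(f): return f - np.roll(f, 1, axis=1)    # backward difference d-x
def dym(f): return f - np.roll(f, 1, axis=0)    # backward difference d-y

def alm_mc_skeleton(z, solve_u, solve_w, solve_v, solve_n, solve_t,
                    r1=1.0, r2=1.0, r3=1.0, r4=1.0, max_iter=50, eps_r=1e-7):
    """Outer loop of Algorithm 1; solve_* stand in for the subproblem minimizers of Section 3."""
    u, w = np.zeros_like(z), np.zeros_like(z)
    t = np.stack([dxp(u), dyp(u), np.ones_like(u)])     # t = <grad u, 1>
    v, n = t.copy(), np.zeros_like(t)
    lam1, lam3 = np.zeros_like(z), np.zeros_like(z)
    lam2, lam4 = np.zeros_like(t), np.zeros_like(t)
    for _ in range(max_iter):
        u = solve_u(u, t, lam2, z)          # critical point of f1, Eq (3.16)
        w = solve_w(w, v, lam3)             # shrinkage update, Eq (3.20)
        v = solve_v(v, w, n, lam3, lam4)    # Eqs (3.17)-(3.19)
        n = solve_n(n, v, t, lam1, lam4)    # projection update, Eq (3.22)
        t = solve_t(t, u, n, lam1, lam2)    # shrinkage update, Eq (3.21)
        g = np.stack([dxp(u), dyp(u), np.ones_like(u)])
        lam1 = lam1 + r1 * (np.sqrt((t**2).sum(0)) - (t * n).sum(0))   # (3.7)
        lam2 = lam2 + r2 * (t - g)                                     # (3.8)
        lam3 = lam3 + r3 * (w - dxm(v[0]) - dym(v[1]))                 # (3.9)
        lam4 = lam4 + r4 * (v - n)                                     # (3.10)
        if np.linalg.norm(t - g) <= eps_r * (np.linalg.norm(g) + 1e-15):
            break                                                      # Step IV
    return u
```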
Now we consider the following subproblems and find their critical points
f_{1}(u)=\frac{1}{2}\int(\vec{K}u-z)^{2}+\frac{r_{2}}{2}\int|\vec{t}-\langle\nabla u,1\rangle|^{2}+\int\vec{\lambda}_{2}\cdot(\vec{t}-\langle\nabla u,1\rangle), | (3.11) |
f_{2}(w)=\alpha\int|w|+\frac{r_{3}}{2}\int(w-\partial_{x}v_{1}-\partial_{y}v_{2})^{2}+\int\lambda_{3}(w-\partial_{x}v_{1}-\partial_{y}v_{2}), | (3.12) |
f_{3}(\vec{t})=r_{1}\int(|\vec{t}|-\vec{t}\cdot\vec{n})+\int\lambda_{1}(|\vec{t}|-\vec{t}\cdot\vec{n})+\frac{r_{2}}{2}\int|\vec{t}-\langle\nabla u,1\rangle|^{2}+\int\vec{\lambda}_{2}\cdot(\vec{t}-\langle\nabla u,1\rangle), | (3.13) |
f_{4}(\vec{v})=\frac{r_{3}}{2}\int(w-\partial_{x}v_{1}-\partial_{y}v_{2})^{2}+\int\lambda_{3}(w-\partial_{x}v_{1}-\partial_{y}v_{2})+\frac{r_{4}}{2}\int|\vec{v}-\vec{n}|^{2}+\int\vec{\lambda}_{4}\cdot(\vec{v}-\vec{n}), | (3.14) |
f_{5}(\vec{n})=r_{1}\int(|\vec{t}|-\vec{t}\cdot\vec{n})+\int\lambda_{1}(|\vec{t}|-\vec{t}\cdot\vec{n})+\frac{r_{4}}{2}\int|\vec{v}-\vec{n}|^{2}+\int\vec{\lambda}_{4}\cdot(\vec{v}-\vec{n})+\delta_{R}(\vec{n}). | (3.15) |
The standard procedure gives us the following Euler-Lagrange equations for the functionals f1(u) and f4(→v):
K∗Ku−△u=K∗z−(r2t1+λ21)x−(r2t2+λ22)y, | (3.16) |
r4v1−r3(∂xv1+∂yv2)x=r4n1−λ41−(r3w−λ3)x, | (3.17) |
r4v2−r3(∂xv1+∂yv2)y=r4n2−λ42−(r3w−λ3)y, | (3.18) |
r4v3=r4n3−λ43, | (3.19) |
respectively, where →v=⟨v1,v2,v3⟩,→t=⟨t1,t2,t3⟩,→n=⟨n1,n2,n3⟩,→λ2=⟨λ21,λ22,λ23⟩ and →λ4=⟨λ41,λ42,λ43⟩. To update the functionals f2(w),f3(t) and f5(→n), we need the following equations:
\arg\min_{w}f_{2}(w)=\max\Big\{0,\,1-\frac{\alpha}{r_{3}|\tilde{w}|}\Big\}\tilde{w},\qquad \tilde{w}=\partial_{x}v_{1}+\partial_{y}v_{2}-\frac{\lambda_{3}}{r_{3}}, | (3.20) |
\arg\min_{\vec{t}}f_{3}(\vec{t})=\max\Big\{0,\,1-\frac{r_{1}+\lambda_{1}}{r_{2}|\tilde{t}|}\Big\}\tilde{t},\qquad \tilde{t}=\langle\nabla u,1\rangle-\frac{\vec{\lambda}_{2}}{r_{2}}+\frac{r_{1}+\lambda_{1}}{r_{2}}\vec{n}, | (3.21) |
\arg\min_{\vec{n}}f_{5}(\vec{n})=\begin{cases}\tilde{n}, & |\tilde{n}|\le 1\\ \tilde{n}/|\tilde{n}|, & \text{otherwise,}\end{cases}\qquad \tilde{n}=\vec{v}+\frac{\vec{\lambda}_{4}}{r_{4}}+\frac{r_{1}+\lambda_{1}}{r_{4}}\vec{t}. | (3.22) |
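The updates (3.20) and (3.22) have a direct NumPy transcription, shown below as a sketch: ∂x v1 + ∂y v2 is formed with simple backward differences on a periodic grid, a tiny constant guards against division by zero, and the vector fields t, v, λ4 and ñ are stored with their three components along the first array axis.

```python
import numpy as np

def update_w(v1, v2, lam3, alpha, r3):
    """Closed-form minimizer of f2(w), Eq (3.20): soft shrinkage of w_tilde."""
    w_tilde = (v1 - np.roll(v1, 1, axis=1)) + (v2 - np.roll(v2, 1, axis=0)) - lam3 / r3
    return np.maximum(0.0, 1.0 - alpha / (r3 * np.abs(w_tilde) + 1e-15)) * w_tilde

def update_n(v, lam4, t, lam1, r1, r4):
    """Minimizer of f5(n), Eq (3.22): project n_tilde onto the unit ball R."""
    n_tilde = v + lam4 / r4 + (r1 + lam1) / r4 * t
    norm = np.sqrt((n_tilde**2).sum(axis=0))
    return n_tilde / np.maximum(1.0, norm)   # points with |n_tilde| <= 1 are unchanged
```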
To update the Lagrangian multipliers, we have the following equations:
λnew1=λold1+r1(|→t|−→t⋅→n), | (3.23) |
→λnew2=→λold2+r2(→t−⟨▽u,1⟩), | (3.24) |
λnew3=λold3+r3(w−∂xv1−∂yv2), | (3.25) |
→λnew4=→λold4+r4(→v−→n). | (3.26) |
Since the MC regularizer κ(u)^2=(▽⋅(▽u/|▽u|))^2 of the image deblurring problem is not homogeneous in u, we need to take special care with the spatial discretization, especially for the terms that involve derivatives, because the spatial mesh size plays an important role in the numerical performance of the MC model. So, we partition the domain Ω=(0,1)×(0,1) by δx×δy. Additionally,
δx:0=x1/2<x3/2<...<xnx+1/2=1,δy:0=y1/2<y3/2<...<ynx+1/2=1, |
where the number of equispaced partitions in the direction of x or y is equal to nx and (xi,yj) represents the centers of the cells. Additionally,
x_{i}=\frac{2ih-h}{2}\quad \text{for }i=1,2,3,\ldots,n_{x},
y_{j}=\frac{2jh-h}{2}\quad \text{for }j=1,2,3,\ldots,n_{x},
where h=\frac{1}{n_{x}}; (x_{i\pm 1/2},y_{j}) and (x_{i},y_{j\pm 1/2}) represent the midpoints of the cell edges:
x_{i\pm 1/2}=x_{i}\pm\frac{h}{2}\quad \text{for }i=1,2,3,\ldots,n_{x},
y_{j\pm 1/2}=y_{j}\pm\frac{h}{2}\quad \text{for }j=1,2,3,\ldots,n_{x}.
For each i=1,2,...,nx, and j=1,2,...,nx, define
Ωi,j=(xi−1/2,xi+1/2)×(yj−1/2,yj+1/2). |
For the function θ(x,y), let θ_{k,l} denote θ(x_k, y_l), where k may take the values i−1, i, or i+1 and l may take the values j−1, j, or j+1, for integers i,j≥0. For the backward and forward difference operators, we need values at the proper discrete points, so we define
[d_{x}^{+}\theta]_{i,j}=\frac{\theta_{i+1,j}-\theta_{i,j}}{h},\quad [d_{x}^{-}\theta]_{i,j}=\frac{\theta_{i,j}-\theta_{i-1,j}}{h},\quad [d_{y}^{+}\theta]_{i,j}=\frac{\theta_{i,j+1}-\theta_{i,j}}{h},\quad [d_{y}^{-}\theta]_{i,j}=\frac{\theta_{i,j}-\theta_{i,j-1}}{h}.
Then the central difference discrete functions and discrete gradient are
[d_{x}^{c}\theta]_{i,j}=\frac{[d_{x}^{-}\theta]_{i,j}+[d_{x}^{+}\theta]_{i,j}}{2},\quad [d_{y}^{c}\theta]_{i,j}=\frac{[d_{y}^{-}\theta]_{i,j}+[d_{y}^{+}\theta]_{i,j}}{2},
and
[▽+θ]i,j=⟨[d+xθ]i,j,[d+yθ]i,j⟩,[▽−θ]i,j=⟨[d−xθ]i,j,[d−yθ]i,j⟩, |
respectively. By applying midpoint quadrature approximation, we have that
(Ku)(xi,yj)≅[KhU](ij). |
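A small NumPy sketch of this cell-centred setup is shown below; it builds the cell centres x_i = (2ih−h)/2 and the forward, backward and central difference operators of the preceding definitions (with x along the first array axis and a simple zero extension at the boundary, which is an assumption rather than the paper's exact boundary treatment).

```python
import numpy as np

nx = 8
h = 1.0 / nx
centres = (2 * np.arange(1, nx + 1) * h - h) / 2      # x_i = (2ih - h)/2; same for y_j
X, Y = np.meshgrid(centres, centres, indexing="ij")   # cell centres (x_i, y_j)

def d_plus_x(theta):
    """[d+x theta]_{i,j} with a zero one-sided extension in the last cell row."""
    out = np.zeros_like(theta, dtype=float)
    out[:-1, :] = (theta[1:, :] - theta[:-1, :]) / h
    return out

def d_minus_x(theta):
    """[d-x theta]_{i,j} with a zero one-sided extension in the first cell row."""
    out = np.zeros_like(theta, dtype=float)
    out[1:, :] = (theta[1:, :] - theta[:-1, :]) / h
    return out

def d_central_x(theta):
    """[dcx theta]_{i,j} = ([d-x theta]_{i,j} + [d+x theta]_{i,j}) / 2."""
    return 0.5 * (d_minus_x(theta) + d_plus_x(theta))
```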
Here, we consider the cell-centered finite-difference (CCFD) method for the MC-based image deblurring problem. The CCFD approximations {Ui,j} to {u(xi,yj)} are chosen. So, from Eq (3.16) we have that
[K∗KU]i,j+r2[div−▽+U]i,j=[K∗Z]i,j−[d−x(r2t1+λ21)]i,j−[d−y(r2t2+λ22)]i,j, |
[K∗KU]i,j+r2[div−▽+U]i,j=[K∗Z]i,j+g(i,j), | (3.27) |
where g(i,j)=−[d−x(r2t1+λ21)]i,j−[d−y(r2t2+λ22)]i,j. According to the lexicographical ordering of the unknowns, U=[U11U12...Unxnx]t. Then, from Eq (3.27), we have the following matrix system:
(K∗hKh+r2B∗hBh)U=K∗hZ+Gh. | (3.28) |
Here Kh is a matrix of size n2x×n2x and Bh is a matrix of size 2nx(nx−1)×n2x. The matrices K∗hZ and Gh are of size n2x×1. K∗hKh is symmetric positive semidefinite. The matrix Kh is a block Toeplitz matrix with Toeplitz blocks (BTTB). The structure of the matrix Bh is
B_{h}=\frac{1}{h}\begin{bmatrix}B_{1}\\ B_{2}\end{bmatrix},
where the size of both B1 and B2 is nx(nx−1)×n2x, and
B1=C⊗I,B2=I⊗C. |
The size of
C=\begin{bmatrix}1 & -1 & & & \\ & 1 & -1 & & \\ & & \ddots & \ddots & \\ & & & 1 & -1\end{bmatrix}
is (nx−1)×nx and I is an identity matrix. To get the value of u, one has to solve the large matrix system (3.28).
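The block structure of Bh can be assembled directly with Kronecker products; the following scipy.sparse sketch builds B1 = C⊗I, B2 = I⊗C and Bh = (1/h)[B1; B2] and checks the sizes stated above (illustrative code, not the authors' implementation).

```python
import numpy as np
import scipy.sparse as sp

nx = 64
h = 1.0 / nx
# (nx-1) x nx bidiagonal matrix C with rows [1 -1]
C = sp.diags([np.ones(nx - 1), -np.ones(nx - 1)], offsets=[0, 1], shape=(nx - 1, nx))
I = sp.identity(nx, format="csr")
B1 = sp.kron(C, I)                                   # differences in one direction
B2 = sp.kron(I, C)                                   # differences in the other direction
Bh = (1.0 / h) * sp.vstack([B1, B2], format="csr")
print(Bh.shape)                                      # (2*nx*(nx-1), nx*nx)
BtB = (Bh.T @ Bh).tocsr()                            # sparse block entering Eq (3.28)
```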
Now given Eqs (3.17) and (3.18), after discretizing, we have that
r4v1(i,j)−r3([d+xd−xv1]i,j+[d+xd−yv2]i,j)=r4n1(i,j)−λ41(i,j)−[d+xd+x(r3w−λ3)]i,j, | (3.29) |
r4v2(i,j)−r3([d+xd−xv1]i,j+[d+xd−yv2]i,j)=r4n2(i,j)−λ42(i,j)−[d+xd+y(r3w−λ3)]i,j. | (3.30) |
Then, applying a discrete Fourier transformation followed by a discrete inverse Fourier transformation to Eqs (3.29) and (3.30), one can get v1 and v2, and v3 can be obtained directly from Eq (3.19). To update the variables w,→t,→n and Lagrange multipliers λ1,→λ2,λ3,→λ4, we have to discretize Eqs (3.20)–(3.26) on grid points (i,j). So we need the following equations:
w(i,j)=\max\Big\{0,\,1-\frac{\alpha}{r_{3}|\tilde{w}(i,j)|}\Big\}\tilde{w}(i,j),\qquad \tilde{w}(i,j)=\partial_{x}v_{1}(i,j)+\partial_{y}v_{2}(i,j)-\frac{\lambda_{3}}{r_{3}}, | (3.31) |
\vec{t}(i,j)=\max\Big\{0,\,1-\frac{r_{1}+\lambda_{1}}{r_{2}|\tilde{t}(i,j)|}\Big\}\tilde{t}(i,j), | (3.32) |
\tilde{t}(i,j)=\langle d_{x}^{+}u(i,j),\,d_{y}^{+}u(i,j),\,1\rangle-\frac{\langle\lambda_{21},\lambda_{22},\lambda_{23}\rangle(i,j)}{r_{2}}+\frac{r_{1}+\lambda_{1}}{r_{2}}\langle n_{1},n_{2},n_{3}\rangle(i,j), |
\vec{n}(i,j)=\begin{cases}\tilde{n}(i,j), & |\tilde{n}(i,j)|\le 1\\ \tilde{n}(i,j)/|\tilde{n}(i,j)|, & \text{otherwise,}\end{cases} | (3.33) |
\tilde{n}(i,j)=\vec{v}(i,j)+\frac{\vec{\lambda}_{4}(i,j)}{r_{4}}+\frac{r_{1}+\lambda_{1}}{r_{4}}\vec{t}(i,j).
To update the Lagrangian multipliers, we have the following equations:
λnew1(i,j)=λold1(i,j)+r1(|→t(i,j)|−→t(i,j)⋅→n(i,j)), | (3.34) |
→λnew21(i,j)=→λold21(i,j)+r2(t1(i,j)−d−xu(i,j)), | (3.35) |
→λnew22(i,j)=→λold22(i,j)+r2(t2(i,j)−d−yu(i,j)), | (3.36) |
→λnew23(i,j)=→λold23(i,j)+r2(t3(i,j)−1), | (3.37) |
λnew3(i,j)=λold3(i,j)+r3(w(i,j)−d−xv1(i,j)−d−yv2(i,j)), | (3.38) |
→λnew41(i,j)=→λold41(i,j)+r4(v1(i,j)−n1(i,j)) | (3.39) |
→λnew42(i,j)=→λold42(i,j)+r4(v2(i,j)−n2(i,j)) | (3.40) |
→λnew43(i,j)=→λold43(i,j)+r4(v3(i,j)−n3(i,j)). | (3.41) |
To find the value of our main variable u, one has to solve the matrix system given by Eq (3.28). The Hessian matrix K∗hKh+r2B∗hBh of the system given by Eq (3.28) is extremely large for practical applications and tends to be quite ill-conditioned when r2 is small. This happens because the eigenvalues of the blurring operator ˉK are very small and close to zero [36]. K∗hKh is a full matrix, but an FFT can be used to evaluate K∗hKhu in O(nxlognx) operations [36] because the blurring operator ˉK is translation-invariant. The good thing is that the Hessian matrix is SPD.
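Under the additional assumption of periodic boundary conditions (so that Kh becomes circulant rather than merely BTTB), the FFT evaluation of K∗hKhu reduces to two 2-D FFTs and a pointwise multiplication, as in the following sketch; k_hat denotes the 2-D FFT of the zero-padded, centred PSF.

```python
import numpy as np

def KtK_apply(u, k_hat):
    """Evaluate K*K u with two 2-D FFTs, assuming a periodic (circulant) blur
    whose Fourier symbol is k_hat = fft2(zero-padded, centred PSF)."""
    return np.real(np.fft.ifft2(np.abs(k_hat) ** 2 * np.fft.fft2(u)))
```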
Theorem 4.1. The inner product ⟨ˉAU,U⟩ is positive for any matrix U≠0, where ˉA=K∗hKh+r2B∗hBh is the Hessian matrix of the system given by Eq (3.28).
Proof. For any matrix U≠0, consider that
⟨ˉAU,U⟩=⟨(K∗hKh+r2B∗hBh)U,U⟩ |
=⟨K∗hKhU,U⟩+r2⟨B∗hBhU,U⟩ |
=⟨KhU,KhU⟩+r2⟨BhU,BhU⟩ |
=‖KhU‖2+r2‖BhU‖2>0. |
This completes the proof.
The CG method is suitable for the solution of the system given by Eq (3.28) owing to the above-mentioned properties. However, the CG method has a rather slow convergence rate because the system is large and ill-conditioned. So, we use a PCG method [7,8,10,24,25,26,30]. The preconditioning matrix P must be SPD so that we obtain an effective solution for our system [2,3,4,11,36]. Here, we introduce our SPD circulant preconditioner P of the Strang type [27],
P=~K∗h~Kh+γIh, | (4.1) |
where ~Kh is a circulant approximation of the matrix Kh and γ>0; Ih is an identity matrix. When applying the PCG method to Eq (3.28), one of the requirements is to apply the inverse of the preconditioner P. Since the second term in P is the identity matrix Ih, inversion is not a problem. For the inversion of the first term ~K∗h~Kh, we apply FFTs at a cost of O(nxlognx) floating-point operations [36].
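A matrix-free sketch of the resulting PCG solve of Eq (3.28) is given below (again under the periodic-boundary assumption, so that K itself serves as its circulant approximation ~K and applying P−1 is a pointwise division in Fourier space); it uses SciPy's conjugate gradient with LinearOperator wrappers and is meant only to illustrate the cost structure, not to reproduce the authors' MATLAB code.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def solve_u_pcg(rhs, k_hat, r2, gamma=1.0):
    """PCG for Eq (3.28) with the Strang-type preconditioner P = K~*K~ + gamma*I of Eq (4.1)."""
    n = rhs.shape[0]
    fft2, ifft2 = np.fft.fft2, np.fft.ifft2

    def hess_mv(x):                       # (Kh*Kh + r2 Bh*Bh) u, matrix-free (h = 1)
        u = x.reshape(n, n)
        KtKu = np.real(ifft2(np.abs(k_hat) ** 2 * fft2(u)))
        dx = np.roll(u, -1, axis=1) - u
        dy = np.roll(u, -1, axis=0) - u
        lap = (dx - np.roll(dx, 1, axis=1)) + (dy - np.roll(dy, 1, axis=0))
        return (KtKu - r2 * lap).ravel()

    denom = np.abs(k_hat) ** 2 + gamma    # Fourier eigenvalues of P

    def prec_mv(x):                       # apply P^{-1} with two FFTs
        return np.real(ifft2(fft2(x.reshape(n, n)) / denom)).ravel()

    A = LinearOperator((n * n, n * n), matvec=hess_mv)
    M = LinearOperator((n * n, n * n), matvec=prec_mv)
    u, info = cg(A, np.asarray(rhs, dtype=float).ravel(), M=M)
    return u.reshape(n, n), info
```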
Now, let ˉA=K∗hKh+r2B∗hBh be the Hessian matrix of the system given by Eq (3.28). Let the eigenvalues of K∗hKh and B∗hBh be λKi and λBi, respectively, such that λKi→0 and λBi→∞. Then the eigenvalues of P−1ˉA are
\theta_{i}=\frac{\lambda_{i}^{K}+r_{2}\lambda_{i}^{B}}{\lambda_{i}^{K}+\gamma}. | (4.2) |
One can clearly notice that θi→r2λBi/γ as i→∞. Hence, for λBi≤γ, the spectrum of P−1ˉA is more favorable than that of the Hessian matrix ˉA. This can also be observed in the numerical examples when we use the PCG algorithm.
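The clustering of the spectrum of P−1ˉA can be checked numerically on a small dense example; the sketch below uses a simple circulant stand-in for the blur (so Kh is block circulant rather than BTTB) together with the Kronecker construction of Bh above, with illustrative values of r2 and γ.

```python
import numpy as np

n = 16                                              # small 1-D size for a dense test
c = np.zeros(n); c[[0, 1, -1]] = [0.5, 0.25, 0.25]  # symmetric 1-D averaging kernel
K1 = np.stack([np.roll(c, i) for i in range(n)])    # circulant 1-D blur
Kh = np.kron(K1, K1)                                # circulant stand-in for the 2-D blur
C = np.eye(n - 1, n) - np.eye(n - 1, n, k=1)        # (n-1) x n difference matrix
Bh = np.vstack([np.kron(C, np.eye(n)), np.kron(np.eye(n), C)])
r2, gamma = 1e-3, 1.0
A = Kh.T @ Kh + r2 * Bh.T @ Bh                      # Hessian of Eq (3.28)
P = Kh.T @ Kh + gamma * np.eye(n * n)               # preconditioner of Eq (4.1)
theta = np.sort(np.linalg.eigvals(np.linalg.solve(P, A)).real)
print(theta[:3], theta[-3:])                        # spectrum of P^{-1} A
```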
Here, we present numerical examples for the image deblurring problem. In all experiments, we have used different values of nx, and the resulting matrix system has n2x unknowns; the mesh size is then h=1/nx. The numerical computations were performed using MATLAB software on an Intel(R) Core(TM) i7-4510U CPU @ 2.00 GHz 2.60 GHz. In all experiments, we have taken the initial guess to be the zero vector. The values of the parameters α and β were set by referencing [5,41]. To determine the optimal value of γ, we carried out numerical experiments using the Goldhills image. We found that the value of the peak signal-to-noise ratio (PSNR) does not improve much beyond γ=1, so one can use an optimal value of γ close to one. The results of these experiments are given in Figure 2.
The PSNR is used to measure the quality of the restored images. For numerical calculations, we have used the ke−gen(nx,r,σ) kernel [13,16,17,28]. It is a circular Gaussian filter of size nx×nx with a standard deviation σ and radius r. Figure 3 depicts the ke−gen(120,40,4) kernel.
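For reproducibility of the quality measure, a standard PSNR routine and one possible reading of the ke−gen(nx, r, σ) kernel (a Gaussian of standard deviation σ truncated to a disc of radius r and renormalized; the exact construction in [13,16,17,28] may differ) are sketched below.

```python
import numpy as np

def psnr(u_restored, u_true, peak=255.0):
    """Peak signal-to-noise ratio (dB) used to score the deblurred images."""
    mse = np.mean((u_restored.astype(float) - u_true.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ke_gen(nx, r, sigma):
    """One plausible reading of ke-gen(nx, r, sigma): an nx x nx circular Gaussian
    filter truncated to radius r and normalized to sum to one (an assumption)."""
    ax = np.arange(nx) - (nx - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    k[xx ** 2 + yy ** 2 > r ** 2] = 0.0
    return k / k.sum()
```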
Example 1.
For this example, we have used three different types of images: the Goldhills, Kids and Peppers images (see [33]). The Goldhills image is a real image and the Peppers image is a non-texture image. The Kids image is a complicated image because it contains a small-scale texture part (the shirt) and a large-scale cartoon part (the face). The different aspects of all images can be seen in Figure 5. Each subfigure has a pixel size of 256×256. Here, we have also compared our MC-based ALMs, i.e., the ALM and the preconditioned ALM (PALM), with the TV-based method (TVM) [1,29,36]. The Hessian matrix of the TV-based model is SPD, so for the numerical calculation we used the CG method. The parameters of the ALM and PALM were established by referencing [42,43]: α=8e−9, r1=9.5e−7, r2=1e−6, r3=1e−8, r4=1e−5, β=1 and γ=1. For the TVM algorithm (CG), we used α=1e−6 and β=1, according to [36]. For the numerical calculations, the ke−gen(nx,300,10) kernel was used. The stopping criterion for the numerical methods is based on the tolerance tol=1e−7. Tables 1–3 contain all of the information on this experiment. A performance comparison of the TVM, ALM and PALM is depicted in Figure 7. The relative residuals and eigenvalues of the ALM and PALM are depicted in Figures 4 and 6, respectively.
Image | Pixel size | Blurred PSNR | Method | Deblurred PSNR | CPU time (s)
Goldhills | 128×128 | 22.9784 | TVM | 48.5749 | 37.1002 |
128×128 | 22.9784 | ALM | 56.3756 | 40.8027 | |
128×128 | 22.9784 | PALM | 56.3654 | 30.5428 | |
256×256 | 22.2335 | TVM | 42.3448 | 146.2581 | |
256×256 | 22.2335 | ALM | 51.9872 | 195.2980 | |
256×256 | 22.2335 | PALM | 51.8512 | 126.7891 |
Image | Pixel size | Blurred PSNR | Method | Deblurred PSNR | CPU time (s)
Kids | 128×128 | 20.4859 | TVM | 45.2541 | 21.0522 |
128×128 | 20.4859 | ALM | 52.1174 | 40.1252 | |
128×128 | 20.4859 | PALM | 52.0586 | 23.5469 | |
256×256 | 20.4147 | TVM | 40.5482 | 133.0289 | |
256×256 | 20.4147 | ALM | 46.0595 | 189.2586 | |
256×256 | 20.4147 | PALM | 46.9837 | 141.2561 |
Image | Pixel size | Blurred PSNR | Method | Deblurred PSNR | CPU time (s)
Peppers | 128×128 | 20.5545 | TVM | 46.2567 | 30.2596 |
128×128 | 20.5545 | ALM | 49.7693 | 37.2904 | |
128×128 | 20.5545 | PALM | 49.1567 | 32.2563 | |
256×256 | 20.5531 | TVM | 49.2863 | 118.5327 | |
256×256 | 20.5531 | ALM | 51.0284 | 190.3504 | |
256×256 | 20.5531 | PALM | 51.4522 | 146.3177 |
Remarks:
(1) From Figure 5 one can notice that the deblurred images produced via our methods (ALM and PALM) are much better than those produced via the TVM.
(2) From Tables 1–3, one can observe that the PSNR values of the ALM and PALM were considerably higher than those of the TVM for all images. Although the TVM often requires less computational time than the ALM and PALM, its PSNR values are quite low. For example, for the Goldhills image with a pixel size of 128×128, the TVM took 37.1002 seconds to reach a PSNR of 48.5749, but the ALM and PALM reached higher PSNR values of 56.3756 and 56.3654 in 40.8027 seconds and 30.5428 seconds, respectively. For the Kids image with a pixel size of 256×256, the TVM took 133.0289 seconds to reach a PSNR of 40.5482, but the ALM and PALM reached higher PSNR values of 46.0595 and 46.9837 in 189.2586 seconds and 141.2561 seconds, respectively. Similarly, for the Peppers image with a pixel size of 256×256, the TVM took 118.5327 seconds to reach a PSNR of 49.2863, but the ALM and PALM reached higher PSNR values of 51.0284 and 51.4522 in 190.3504 seconds and 146.3177 seconds, respectively. Similar behavior was observed for the other sizes. So, the proposed ALM and PALM tend to generate higher-quality deblurred images than the TVM.
Example 2.
Here, we compare our MC-based ALMs, i.e., the ALM and PALM, with the MC-based methods proposed by Fairag et al. [18], who developed a One-Level method (OLM) and a Two-Level method (TLM) for the MC-based image deblurring problem. For this experiment, we used three different types of images: the Goldhills (real), Cameraman (complicated) and Moon (non-texture) images. The different aspects of all images can be seen in Figure 8. Each subfigure has a pixel size of 256×256.
The parameters for the ALM and PALM were α=1e−9, r1=9.5e−7, r2=1e−6, r3=1e−8, r4=1e−5, β=1 and γ=1. For the MC-based algorithms (OLM and TLM), we used α=8e−9 and β=1. The Level-Ⅱ parameter ˜α of the TLM was calculated using the formula presented in [18]. For the numerical calculations, the ke−gen(nx,300,10) kernel was used. The stopping criterion for the numerical methods is based on the tolerance tol=1e−8. Tables 4–6 contain all of the information on this experiment. A performance comparison of the ALM, PALM, OLM and TLM is depicted in Figure 9.
Image | Pixel size | Blurred PSNR | Method | Deblurred PSNR | CPU time (s)
Goldhills | 128×128 | 22.9784 | OLM | 50.2303 | 47.9473 |
128×128 | 22.9784 | TLM | 54.6042 | 31.0442 | |
128×128 | 22.9784 | ALM | 56.3756 | 40.8027 | |
128×128 | 22.9784 | PALM | 56.3654 | 30.5428 | |
256×256 | 22.2335 | OLM | 45.1287 | 248.5400 | |
256×256 | 22.2335 | TLM | 49.2369 | 128.4097 | |
256×256 | 22.2335 | ALM | 51.9872 | 195.2980 | |
256×256 | 22.2335 | PALM | 51.8512 | 126.7891 |
Image | Pixel size | Blurred PSNR | Method | Deblurred PSNR | CPU time (s)
Cameraman | 128×128 | 18.6322 | OLM | 43.8732 | 24.5327 |
128×128 | 18.6322 | TLM | 44.1709 | 19.7541 | |
128×128 | 18.6322 | ALM | 48.7714 | 38.4161 | |
128×128 | 18.6322 | PALM | 48.9437 | 18.2997 | |
256×256 | 17.8172 | OLM | 38.8709 | 197.7316 | |
256×256 | 17.8172 | TLM | 40.7359 | 112.5587 | |
256×256 | 17.8172 | ALM | 40.1991 | 164.2474 | |
256×256 | 17.8172 | PALM | 40.9556 | 108.2873 |
Image | Pixel size | Blurred PSNR | Method | Deblurred PSNR | CPU time (s)
Moon | 128×128 | 26.1840 | OLM | 55.2128 | 34.3866 |
128×128 | 26.1840 | TLM | 56.5319 | 24.8366 | |
128×128 | 26.1840 | ALM | 60.2041 | 25.3032 | |
128×128 | 26.1840 | PALM | 60.1598 | 16.4356 | |
256×256 | 26.4905 | OLM | 52.3328 | 179.7899 | |
256×256 | 26.4905 | TLM | 55.4471 | 125.3007 | |
256×256 | 26.4905 | ALM | 56.9144 | 148.3155 | |
256×256 | 26.4905 | PALM | 56.7752 | 119.1213 |
Remarks:
(1) From Figure 8, one can notice that the ALM and PALM methods both generated slightly higher quality results.
(2) From Tables 4–6, it can be observed that the PSNR values of the ALM and PALM were higher than those of the OLM and TLM for all images. The TLM required much less time than the OLM and ALM, but not less than the PALM. For example, for the Goldhills image with a pixel size of 256×256, the OLM, TLM and ALM took 248.5400, 128.4097 and 195.2980 seconds, respectively, to deblur the image, but the PALM required only 126.7891 seconds. For the Cameraman image with a pixel size of 128×128, the OLM, TLM and ALM took 24.5327, 19.7541 and 38.4161 seconds, respectively, to deblur the image, but the PALM required only 18.2997 seconds. Similarly, for the Moon image with a pixel size of 128×128, the OLM, TLM and ALM took 34.3866, 24.8366 and 25.3032 seconds, respectively, to deblur the image, but the PALM required only 16.4356 seconds. Similar behavior was observed for the other sizes. So, our PALM consumed much less CPU time than the other methods while also generating high-quality deblurred images.
An ALM for the primal form of the image deblurring problem with an MC regularization functional has been presented. A new SPD circulant preconditioner has been introduced to overcome the slow convergence of the CG method inside the ALM. Numerical experiments were conducted on different kinds of images (synthetic, real, complicated and non-texture) using our proposed PALM with the new preconditioner. We compared the TV (total variation)-based algorithm with our mean-curvature-based algorithms (ALM and PALM). The proposed methods, i.e., the ALM and PALM, were also compared with the latest MC-based techniques, i.e., the OLM and TLM. The numerical experiments showed the effectiveness of our proposed PALM with the new preconditioner. In the future, we will work on developing an ALM for other computationally expensive regularization functionals, like the fractional TV regularized tensor [21]. Furthermore, the matrix B∗hBh in the system given by Eq (3.28) also has a block Toeplitz structure, so a single-term preconditioner, like the tau preconditioner [15,20], can be used to approximate both B∗hBh and K∗hKh in order to increase the convergence rate of the CG method inside the ALM.
The second and third authors would like to thank King Fahd University of Petroleum and Minerals (KFUPM) for its continuous support. The authors also thank the referees for their very careful reading and valuable comments. This work was funded by KFUPM, Grant No. #SB201012.
The authors declare that there is no conflict of interest.