Research article

Anisotropic variation formulas for imaging applications

  • Received: 16 April 2019 Accepted: 15 May 2019 Published: 30 May 2019
  • MSC : 49Mxx, 68U05, 65D18, 65D25

  • The discrete anisotropic variation, sometimes referred to as the anisotropic gradient, and its integral are important in a variety of image processing applications and set boundary measure computations. We provide a method for computing the weight factors for general anisotropic variation approximations of functions on $\mathbb{R}^2$. The method is developed in the framework of regular arrays, but its applicability extends to arbitrary finite discrete sampling strategies. The mathematical model and computations use concepts from vector calculus and introductory linear algebra, so the discussion is accessible to upper-division undergraduate students.

    Citation: Thomas Asaki, Heather A. Moon. Anisotropic variation formulas for imaging applications[J]. AIMS Mathematics, 2019, 4(3): 576-592. doi: 10.3934/math.2019.3.576



    Let $f:\Omega\to\mathbb{R}$, $\Omega\subset\mathbb{R}^2$, be bounded. The variation of $f$ is given by

    $$\mathrm{Var}f(x,y)=\max_{\substack{(u,v)\in\mathbb{R}^2\\ \|(u,v)\|=1}}\;\lim_{\alpha\to 0}\left|\frac{f(x+\alpha u,\,y+\alpha v)-f(x,y)}{\alpha}\right|. \tag{1.1}$$

    In the case that $f$ is differentiable, $\mathrm{Var}f(x,y)=|\nabla f(x,y)|$. Measures of the total variation, and its generalization the total $p$-variation, in images are used to solve a variety of image and data processing tasks through the use of variational minimization methods [1,6].

    One key application is noise removal [8,11]. The total variation is an approximate measure of the magnitude of the noise because noisy images have more pixel-to-pixel intensity variation. The left and right images in Figure 1 show an image and a noisy version of the same scene, respectively. The computed variation is about 18 times larger for the noisy image.

    Figure 1.  Left: A grayscale image of a set of magnets on a table. Right: A noisy version of the same scene. The total variation of the right image is approximately 18 times that of the left image.

    A second application is edge detection [12]. The image variation is relatively large at image locations with sudden changes in intensity, which is characteristic of a (visual) edge. The left and right images in Figure 2 show an image of a pile of rocks and the corresponding variation image, respectively. The same method applied to characteristic functions (binary images) can be used to measure the length of a set boundary [7]. The left image of Figure 3 shows a binary image of the state of Idaho. The right image of Figure 3 shows the corresponding variation image, for which the sum of the pixel intensities is proportional to the boundary length.

    Figure 2.  Left: A grayscale image of a rock wall. Right: The variation image which approximates the location of image intensity edges or intensity boundaries.
    Figure 3.  Left: A binary image of the state of Idaho. Right: The computed variation image. The sum of the values in the right image approximates the boundary length.
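    The boundary-length computation behind Figure 3 is easy to reproduce. The following Python sketch sums unit-weight nearest-neighbor intensity differences over a binary array, anticipating the discrete formulas of Section 2; the function name and the synthetic disk test image are illustrative choices, not part of the original computation.

```python
import numpy as np

def tv_binary(f):
    """Sum of nearest-neighbor absolute differences of a 2-D array.

    For a 0/1 image this counts horizontal and vertical boundary
    crossings, a quantity proportional to the boundary length.
    """
    f = f.astype(float)                    # avoid 8-bit wrap-around
    dx = np.abs(np.diff(f, axis=0)).sum()  # vertical neighbor differences
    dy = np.abs(np.diff(f, axis=1)).sum()  # horizontal neighbor differences
    return dx + dy

# A binary disk as a stand-in for the Idaho silhouette of Figure 3.
y, x = np.mgrid[0:64, 0:64]
disk = ((x - 32) ** 2 + (y - 32) ** 2 <= 20 ** 2).astype(np.uint8)
print(tv_binary(disk))  # scales with the perimeter of the disk
```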

    Other applications include the removal of blur artifacts due to imperfect focus of camera optics or to long exposures of objects moving in a scene [4]; inpainting [2,5], which seeks to recover missing or corrupted data; image segmentation [3]; compressed sensing [10]; and determining characteristic length scales [14,16].

    An image is the discretization of a function $f:\Omega\to\mathbb{R}$ restricted to a finite regular grid. For example, a digital camera image is a collection of values $\{f_{i,j}\}$ at (pixel) locations $(i,j)$ on a rectangular grid. Figure 4 is an example of such an image. It has image intensities, shown as shades of gray, on a regular grid of square pixels. In displaying the image, each pixel value is assigned a grayscale color which covers its associated pixel. In this case, a completely black pixel corresponds to a pixel value of zero, while a completely white pixel corresponds to a value of 255. We say that this image was obtained using an 8-bit camera because all pixel values are integers satisfying $0\le f_{i,j}\le 255$.

    Figure 4.  A zoomed-in portion of the grayscale image of a rock wall (large white box) reveals pixel intensities (numerical values) arranged in a regular grid of subregions (pixels).
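    One practical caution when implementing the formulas that follow on 8-bit data: unsigned 8-bit pixel values wrap around modulo 256 under subtraction, so differences should be computed only after conversion to a floating-point type. A minimal demonstration (assuming NumPy, as in all of our sketches):

```python
import numpy as np

f = np.array([[200, 10]], dtype=np.uint8)  # two neighboring pixel values
print(np.diff(f, axis=1))                  # [[66]]  (10 - 200 wraps mod 256)
print(np.diff(f.astype(float), axis=1))    # [[-190.]]  the correct difference
```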

    A regular grid may be composed of hexagonal pixels. One example is shown in Figure 5. This type of grid is not in widespread use because of the relative difficulty in simply specifying the geometry and the widespread use of rectangular coordinates in digital displays.

    Figure 5.  An example of a hexagonal array which could be used to display images.

    We propose a method for computing variation on images and image-like data as the sum of weighted differences of pixel intensities. Weights are chosen to be optimal for computing the variation on smooth arbitrary functions and their discrete representations. We begin in Section 2 by defining the anisotropic variation on discrete images and introducing the displacement vector sets used to calculate the variation. In Section 3, we develop the theoretical aspects of the variation on discrete data such as images and define the anisotropic gradient function. This function is defined in terms of a geometric sampling strategy and optimal weight parameters. In Section 4, we provide details on the computation of the anisotropic gradient weights in the general scenario and address existence and uniqueness. In Section 5, we provide specific computational examples of determining optimal weights, including computations on rectangular grids, on hexagonal grids, and for sampling uniformly in angle without reference to a grid. Conclusions are provided in Section 6.

    A central task of image processing is to approximate key continuous functions, such as $\mathrm{Var}f$ in Eq. (1.1), on the scene space using only the discrete image data. Recall that $\nabla f(x,y)$ is the vector whose components are the partial derivatives of $f$, each of which is defined by the limits

    $$f_x(x,y)=\lim_{h\to 0}\frac{f(x+h,y)-f(x,y)}{h}$$

    and

    $$f_y(x,y)=\lim_{h\to 0}\frac{f(x,y+h)-f(x,y)}{h}.$$

    In the case of discrete image data, we cannot take the limit $h\to 0$ because the function is defined at only discrete locations. Thus, for our image analysis tasks, $f$ need not be differentiable. Instead, our plan is to approximate $\mathrm{Var}f$ at each pixel location $(x,y)$. That is, we are interested in a finite sampling strategy for approximating

    $$g\approx|\nabla f(x,y)|.$$

    It is natural to consider a direct computation

    $$g=\sqrt{f_x^2(x,y)+f_y^2(x,y)},\qquad\text{where}\quad f_x=\frac{\partial f}{\partial x}\quad\text{and}\quad f_y=\frac{\partial f}{\partial y}.$$

    However, $f$ is known only at a finite set of locations $(x,y)$, and we know neither $f_x$ nor $f_y$. We can approximate the variation at each array location, $g_{i,j}$, with the forward difference strategy

    $$g_{i,j}\approx\sqrt{\left(\frac{f_{i+1,j}-f_{i,j}}{h_x}\right)^2+\left(\frac{f_{i,j+1}-f_{i,j}}{h_y}\right)^2}, \tag{2.1}$$

    where $h_x$ and $h_y$ are the horizontal and vertical distances between pixels. Eq. (2.1) is an example of an isotropic gradient approximation, which is exact in the following sense for differentiable functions:

    $$\mathrm{Var}f(x,y)=|\nabla f(x,y)|=\lim_{h_x,h_y\to 0}\sqrt{\left(\frac{f(x+h_x,y)-f(x,y)}{h_x}\right)^2+\left(\frac{f(x,y+h_y)-f(x,y)}{h_y}\right)^2}.$$

    Throughout the remaining discussion, we will take $h_x=h_y=1$ for simplicity.
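    A minimal sketch of the forward-difference approximation (2.1), assuming the image is stored as a 2-D NumPy array; trimming the last row and column is one possible boundary choice, not prescribed by Eq. (2.1) itself.

```python
import numpy as np

def isotropic_gradient(f, hx=1.0, hy=1.0):
    """Forward-difference approximation of |grad f| as in Eq. (2.1)."""
    f = np.asarray(f, dtype=float)
    dx = (f[1:, :-1] - f[:-1, :-1]) / hx  # f_{i+1,j} - f_{i,j}
    dy = (f[:-1, 1:] - f[:-1, :-1]) / hy  # f_{i,j+1} - f_{i,j}
    return np.sqrt(dx**2 + dy**2)         # one value per retained pixel
```

    On a sampled plane $f_\theta(x,y)=x\cos\theta+y\sin\theta$ this returns exactly 1 at every retained pixel, consistent with $\mathrm{Var}f_\theta=1$ for the family of functions studied in Section 3.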

    We consider anisotropic approximations for the variation having the following form [8].

    Definition 1. Suppose $\Omega\subset\mathbb{R}^2$ and let $f:\Omega\to\mathbb{R}$. Define the set of nonzero displacement vectors to be $V=\{d_k=(u_k,v_k)\}_{k=1}^{m}$, and let $\{w_k\}_{k=1}^{m}$ be a set of non-negative scalars. The function

    $$g(x,y)\equiv\sum_{k=1}^{m}w_k\,\big|f(x+u_k,\,y+v_k)-f(x,y)\big| \tag{2.2}$$

    is an anisotropic variation of $f$.

    If a location $(x,y)$ corresponds to pixel array indices $(i,j)$ in an image, and if the components of the displacement vectors are integers, then $g(x,y)$ is determined by available pixel intensities. Several possible configurations of displacement vectors are shown in Figure 6. These cases correspond to the following displacement vector sets.

    $$\begin{aligned}
    V_A&=\{(1,0),\,(0,1)\}\\
    V_B&=\{(1,0),\,(0,1),\,(-1,0),\,(0,-1)\}\\
    V_C&=\{(1,0),\,(0,1),\,(-1,-1)\}\\
    V_D&=\{(0,1),\,(1,1),\,(-1,1)\}\\
    V_E&=V_B\cup\{(1,1),\,(-1,1),\,(-1,-1),\,(1,-1)\}\\
    V_F&=V_E\cup\{(2,1),\,(1,2),\,(-1,2),\,(-2,1),\,(-2,-1),\,(-1,-2),\,(1,-2),\,(2,-1)\}
    \end{aligned}$$
    Figure 6.  Six examples of displacement vector sets which can be used to compute discrete local anisotropic gradients on rectangular arrays such as digital images.

    $V_A$ is the two-direction set noted by Condat [6] with unit weights. $V_B$ specifies computations on the Von Neumann neighborhood of four nearest neighbors, and $V_E$ on the Moore neighborhood of eight nearest neighbors. $V_F$ uses 16 nearest neighbors and has been used successfully in some recent computations [9]. $V_A$ is a two-vector forward difference set yielding the approximation

    $$g_{i,j}=w_1|f_{i+u_1,j+v_1}-f_{i,j}|+w_2|f_{i+u_2,j+v_2}-f_{i,j}|=w_1|f_{i+1,j}-f_{i,j}|+w_2|f_{i,j+1}-f_{i,j}|.$$

    $V_C$ shows displacement vectors that lead to the approximation

    $$g_{i,j}=w_1|f_{i+1,j}-f_{i,j}|+w_2|f_{i,j+1}-f_{i,j}|+w_3|f_{i-1,j-1}-f_{i,j}|.$$
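    The following sketch evaluates Definition 1 for an arbitrary direction set and weight vector, assuming integer displacements on a regular grid. The wrap-around (periodic) boundary provided by np.roll is an illustrative choice, not part of Definition 1.

```python
import numpy as np

def anisotropic_variation(f, V, w):
    """Anisotropic variation g of Eq. (2.2) at every pixel.

    V : sequence of integer displacement vectors (u_k, v_k)
    w : sequence of non-negative weights w_k
    """
    f = np.asarray(f, dtype=float)
    g = np.zeros_like(f)
    for (u, v), wk in zip(V, w):
        # shifted[i, j] = f[i + u, j + v]  (periodic boundary)
        shifted = np.roll(np.roll(f, -u, axis=0), -v, axis=1)
        g += wk * np.abs(shifted - f)
    return g

# Example: the forward-difference set V_A with the unit weights of [6].
V_A = [(1, 0), (0, 1)]
g = anisotropic_variation(np.arange(16.0).reshape(4, 4), V_A, [1.0, 1.0])
```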

    These examples show that the anisotropic variation is not a unique function; it is defined by the chosen direction set $V$ and the scalar weights $\{w_k\}$. Notice also that, at a local extreme point of $f$, where $\nabla f=0$, the anisotropic variation is not necessarily equal to zero. This suggests that the anisotropic variation does not accurately approximate the steepness of functions near local extrema.

    The weight computations presented here are not specific to the rectangular grids illustrated in Figure 6. We also consider direction sets on hexagonal tilings, some of which are shown in Figure 7. However, any other nonzero direction set in $\mathbb{R}^2$ can be used. No standard method for determining weights seems to be in general use, though some authors reference weights used in similar computational tasks [15] or weights based on geometric considerations [13]. In this section, we discuss a criterion for choosing optimal weights. We also discuss assumptions about the functions for which we use the anisotropic variation.

    Figure 7.  Three examples of displacement vector sets which can be used to compute discrete local anisotropic gradients on hexagonal arrays.

    The goal in choosing specific weights for the anisotropic variation is to minimize the discrepancy between $g(x,y)$ of Eq. (2.2) and $\mathrm{Var}f(x,y)$ of Eq. (1.1) at pixel locations. Weights should be chosen so that the approximation is good for arbitrary image functions $f(x,y)$, as no set of weights can be optimal for all functions. Thus, we must make assumptions about the types of functions we expect to encounter. Our first assumption is that the gradient of $f(x,y)$ is slowly varying over the sampling distances $\|d_k\|$. In this case, differencing strategies such as Eq. (2.2) should produce good results because the variation calculation is only weakly affected by sampling distance. To understand this, consider a one-dimensional example in which we seek to compute $|df/dx|$ for a function $f:\mathbb{R}\to\mathbb{R}$. We have

    $$\frac{df}{dx}=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}.$$

    Notice that if $f$ is approximately linear or affine on $[x,x+h]$ with slope $s$, then

    $$\frac{f(x+h)-f(x)}{h}\approx\frac{f(x)+sh-f(x)}{h}=s,$$

    and $df/dx\approx s$, independent of the sampling distance $h$.
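    A two-line numerical check of this observation, with an assumed slope of 3:

```python
f = lambda x: 3.0 * x + 1.0            # affine with slope s = 3
for h in (1.0, 0.1, 0.01):
    print((f(0.7 + h) - f(0.7)) / h)   # 3.0 for every h (up to rounding)
```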

    Our second assumption is that we have no prior knowledge concerning the direction of the gradient. If we had such information, we could preferentially weight the anisotropic gradient terms associated with directions which we believe are close to the gradient direction. So, whatever weights we choose must be independent of the actual gradient orientation, though they may depend on the relative angles between sampling directions. We expect the anisotropic variation to perform comparably well for gradients of any orientation.

    With these two assumptions, we begin by considering a family of functions, $\mathcal{F}$, whose graphs are planes and whose gradients have unit length. We seek optimal weights relative to this family. Functions in this family satisfy $\nabla f_\theta=(\cos\theta,\sin\theta)$, where $\theta$ is a parameter identifying a family member. The functions are then given by

    $$f_\theta(x,y)=x\cos\theta+y\sin\theta,\qquad 0\le\theta<2\pi.$$

    Our objective is to approximate the variation $\mathrm{Var}f_\theta$ with the anisotropic variation $g_w^\theta$ as closely as possible by determining optimal weights $\{w_k\}$ for a given direction set $V$. To add clarity of notation, let $w=[w_1,w_2,\ldots,w_m]^T$ and define $g_w$ to be the anisotropic gradient with chosen weights $w_k$, $1\le k\le m$. We measure optimality with the squared error function. That is, for any given function $f_\theta\in\mathcal{F}$, we seek weights $w_1,w_2,\ldots,w_m$ which minimize the function given by

    $$E(w,\theta)=\left(\mathrm{Var}f_\theta-g_w^\theta\right)^2=\left(1-\sum_{k=1}^{m}w_k\big|f_\theta(x+u_k,y+v_k)-f_\theta(x,y)\big|\right)^2=\left(1-\sum_{k=1}^{m}w_k\big|u_k\cos\theta+v_k\sin\theta\big|\right)^2.$$

    However, the weights that minimize the squared error above are optimal for only one such function $f_\theta\in\mathcal{F}$. We are interested in the best overall choice of weights, namely those that minimize the total integrated squared error over all family members (over all possible $\theta$):

    $$H(w)=\int_0^{2\pi}E(w,\theta)\,d\theta=\int_0^{2\pi}\left(1-\sum_{k=1}^{m}w_k\big|u_k\cos\theta+v_k\sin\theta\big|\right)^2 d\theta. \tag{3.1}$$
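    Before deriving the minimizer analytically, $H$ can be evaluated directly by numerical quadrature. A sketch, with the function name and node count as illustrative choices:

```python
import numpy as np

def H(w, V, n=20000):
    """Approximate the integrated squared error of Eq. (3.1) by
    midpoint quadrature over theta in [0, 2*pi)."""
    V, w = np.asarray(V, float), np.asarray(w, float)
    theta = (np.arange(n) + 0.5) * (2 * np.pi / n)
    # row k holds |u_k cos(theta) + v_k sin(theta)| at each node
    proj = np.abs(V[:, :1] * np.cos(theta) + V[:, 1:] * np.sin(theta))
    return np.sum((1.0 - w @ proj) ** 2) * (2 * np.pi / n)

# For V_A = {(1,0), (0,1)}, H is smallest near w1 = w2 = 4/(pi + 2).
print(H([0.77797, 0.77797], [(1, 0), (0, 1)]))
```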

    Our goal is then to find a point $w^*=[w_1^*,w_2^*,\ldots,w_m^*]^T$ whose coordinates are the weights, so that $H(w^*)\le H(w)$ for all $w\in\mathbb{R}^m$. We seek minimizers $w^*$ of $H$ by finding stationary points, where $\nabla_wH(w)=0$. Here, we use the notation $\nabla_w$ to indicate that the derivatives are taken with respect to the variables $w_1,w_2,\ldots,w_m$. The $j$th component of $\nabla_wH(w)$ ($1\le j\le m$) is

    $$\begin{aligned}\frac{\partial H(w)}{\partial w_j}&=-2\int_0^{2\pi}\big|u_j\cos\theta+v_j\sin\theta\big|\left(1-\sum_{k=1}^{m}w_k\big|u_k\cos\theta+v_k\sin\theta\big|\right)d\theta\\&=-2\int_0^{2\pi}\big|u_j\cos\theta+v_j\sin\theta\big|\,d\theta+2\sum_{k=1}^{m}w_k\int_0^{2\pi}\big|u_j\cos\theta+v_j\sin\theta\big|\,\big|u_k\cos\theta+v_k\sin\theta\big|\,d\theta.\end{aligned}$$

    In order to simplify the representation and facilitate further computations, we define the integrals in each of these terms as

    $$J(k)=\int_0^{2\pi}\big|u_k\cos\theta+v_k\sin\theta\big|\,d\theta$$

    and

    $$K(k,j)=\int_0^{2\pi}\big|u_k\cos\theta+v_k\sin\theta\big|\,\big|u_j\cos\theta+v_j\sin\theta\big|\,d\theta.$$

    Thus

    $$\frac{\partial H(w)}{\partial w_j}=-2J(j)+2\sum_{k=1}^{m}w_kK(k,j).$$

    Notice that this second term, involving K, can be written using a matrix product:

    $$2\sum_{k=1}^{m}w_kK(k,j)=2\begin{bmatrix}K(j,1)&K(j,2)&\cdots&K(j,m)\end{bmatrix}\begin{bmatrix}w_1\\w_2\\\vdots\\w_m\end{bmatrix}.$$

    Now, stationary points $w$ of $H$ satisfy $\sum_{k=1}^{m}w_kK(k,j)=J(j)$ for $1\le j\le m$. That is, the matrix equation below holds.

    $$\begin{bmatrix}K(1,1)&K(1,2)&\cdots&K(1,m)\\K(2,1)&K(2,2)&\cdots&K(2,m)\\\vdots&\vdots&\ddots&\vdots\\K(m,1)&K(m,2)&\cdots&K(m,m)\end{bmatrix}\begin{bmatrix}w_1\\w_2\\\vdots\\w_m\end{bmatrix}=\begin{bmatrix}J(1)\\J(2)\\\vdots\\J(m)\end{bmatrix}. \tag{3.2}$$

    The integrals evaluate to

    $$J(k)=4\|d_k\|$$

    and

    $$K(k,j)=\|d_k\|\,\|d_j\|\left[(\pi-2\Delta_{kj})\cos(\Delta_{kj})+2\sin(\Delta_{kj})\right],$$

    where $\Delta_{kj}$ is the minimum positive angle between the sampling directions $d_j$ and $d_k$.

    Collecting the integral computations, we find that the optimal weights w satisfy the following system of equations.

    $$Kw=J,\qquad K(k,j)=\|d_k\|\,\|d_j\|\left((\pi-2\Delta_{kj})\cos(\Delta_{kj})+2\sin(\Delta_{kj})\right),\qquad K(k,k)=\pi\|d_k\|^2,\qquad J(k)=4\|d_k\|.$$

    We see that the entries of row $k$ of the matrix $K$ and the corresponding entry of $J$ contain the common factor $\|d_k\|$. These common factors do not affect the solution and can be removed. Furthermore, the entries of column $j$ of the matrix $K$ contain the common factor $\|d_j\|$; thus, each entry in the solution vector can be rescaled by the corresponding vector norm. This leads to the somewhat simplified system of equations:

    $$Az=B,\qquad A(k,j)=(\pi-2\Delta_{kj})\cos(\Delta_{kj})+2\sin(\Delta_{kj}),\qquad A(k,k)=\pi,\qquad B(k)=4,\qquad w_j=z_j/\|d_j\|. \tag{4.1}$$
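    All of the weight computations in Section 5 can be carried out with a few lines of linear algebra. A sketch, assuming NumPy; the function name is our own, and the pseudo-inverse (discussed next) covers the case of a singular $A$ by returning the minimum-norm solution.

```python
import numpy as np

def optimal_weights(V):
    """Solve A z = B of Eq. (4.1) and return w_j = z_j / ||d_j||."""
    V = np.asarray(V, dtype=float)
    norms = np.linalg.norm(V, axis=1)
    phi = np.arctan2(V[:, 1], V[:, 0])            # direction angles
    d = np.abs(phi[:, None] - phi[None, :]) % np.pi
    delta = np.minimum(d, np.pi - d)              # Delta_kj in [0, pi/2]
    A = (np.pi - 2 * delta) * np.cos(delta) + 2 * np.sin(delta)
    B = np.full(len(V), 4.0)
    z = np.linalg.pinv(A) @ B                     # minimum-norm solution
    return z / norms

print(optimal_weights([(1, 0), (0, 1)]))          # both approx. 0.77797
```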

    Now we consider existence and uniqueness of solutions to $Az=B$. We first show that at least one solution exists. Next, we show that if some pair of direction vectors is collinear, then there exist infinitely many solutions. Finally, we choose, among the possibly infinitely many solutions, the solution $\tilde z$ that has minimum 2-norm.

    Claim 1. The system of equations $Az=B$ given in (4.1) has at least one solution.

    Proof. Recall that solutions to (4.1) are stationary points of the function $H$ given in Eq. (3.1). We will show that $H$ has at least one stationary point. First, note that $H(w)\ge 0$ for all choices of weights $w_1,w_2,\ldots,w_m$.

    Now, let us use the integral computations above to rewrite $H(w)$:

    $$\begin{aligned}H(w)&=\int_0^{2\pi}\left(1-\sum_{k=1}^{m}w_k\big|u_k\cos\theta+v_k\sin\theta\big|\right)^2 d\theta\\&=\int_0^{2\pi}\left[1-2\sum_{k=1}^{m}w_k\big|u_k\cos\theta+v_k\sin\theta\big|+\sum_{k=1}^{m}\sum_{j=1}^{m}w_kw_j\big|u_k\cos\theta+v_k\sin\theta\big|\,\big|u_j\cos\theta+v_j\sin\theta\big|\right]d\theta\\&=2\pi-2w^TJ+w^TKw,\end{aligned}$$

    where $J$ and $K$ are defined above. Notice that $H$ is a quadratic function of the weights $w_1,w_2,\ldots,w_m$.

    Notice also that if $K$ were negative definite or indefinite (possessing at least one negative eigenvalue), then the quadratic $H$ would attain negative values for some $w$. So $K$ must be positive definite or positive semi-definite. Thus, $H$ is a convex quadratic function, though not necessarily strictly convex, and is bounded below by zero. Therefore $H$ has at least one stationary point, and the system of equations (4.1) has at least one solution.

    Another way to understand the result of the above proof is to note that a quadratic function which is bounded below (non-negative in this case) must be either convex quadratic or constant along any given direction. If the function behaved (nonconstant) linearly along some direction, then it could not be bounded below.
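    The positive semi-definiteness of $K$ (equivalently, of $A$) can also be checked numerically for any particular direction set. A quick sanity check with random unit directions, offered as an illustration rather than a substitute for the argument above:

```python
import numpy as np

rng = np.random.default_rng(0)
phi = rng.uniform(0, 2 * np.pi, size=6)   # six random direction angles
d = np.abs(phi[:, None] - phi[None, :]) % np.pi
delta = np.minimum(d, np.pi - d)          # pairwise line angles
A = (np.pi - 2 * delta) * np.cos(delta) + 2 * np.sin(delta)
print(np.linalg.eigvalsh(A))              # all nonnegative, up to rounding
```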

    Claim 2. Suppose there exist two distinct vectors $d_k,d_j\in V$ which are collinear. Then the system of equations $Az=B$ given in (4.1) has infinitely many solutions.

    Proof. The entries in row $k$ of matrix $A$ are determined solely by the minimum angles between each vector in $V$ and the line $\{v=\alpha d_k\;|\;\alpha\in\mathbb{R}\}$. If $d_j$ is collinear with $d_k$, then they define the same line, and rows $j$ and $k$ of $A$ must be equal. Thus, $\mathrm{rank}(A)<m$ and, by the previous claim, $Az=B$ has infinitely many solutions.

    If the solution is unique, then we have $z=A^{-1}B$. If the solution is not unique, we proceed by diagonalizing $A$ and constructing a pseudo-inverse operator $P$ such that $z=PB$.

    Since $A$ is a real symmetric matrix, it is diagonalizable by an orthogonal matrix $Q$ as $A=QDQ^T$, where $D$ is a diagonal matrix consisting of the eigenvalues of $A$. We can order the eigenvalues $\lambda_i$, and the corresponding columns of $Q$, so that $\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_m$. Note also that $\lambda_m\ge 0$ because of the convexity of $H(w)$. Suppose there are $\kappa$ positive (not zero) eigenvalues. Let $\tilde Q$ be the matrix consisting of the first $\kappa$ columns of $Q$, and let $\tilde D$ be the $\kappa\times\kappa$ diagonal matrix with entries $\lambda_1,\lambda_2,\ldots,\lambda_\kappa$.

    Claim 3. The pseudo-inverse solution $\tilde z$ of minimum 2-norm for the system of equations (4.1) is $\tilde z=PB=\tilde Q\tilde D^{-1}\tilde Q^TB$.

    Proof. Consider the unique representation $z=\tilde z+z_0$, where $z_0$ is in $N(A)$, the null space of $A$, and $\tilde z$ is in $N(A)^{\perp}$, the orthogonal complement of $N(A)$. We show that $PB=\tilde z$. That is, $P$ is the invertible operator that maps the range of $A$ to $N(A)^{\perp}$. We write $Q=[\tilde Q\ \ Q_0]$ and note that the columns of $Q_0$ form a basis for $N(A)$ and the columns of $\tilde Q$ form a basis for $N(A)^{\perp}$. Observe,

    $$\begin{aligned}PB=PAz&=\left(\tilde Q\tilde D^{-1}\tilde Q^T\right)\left([\tilde Q\ \ Q_0]\begin{bmatrix}\tilde D&0\\0&0\end{bmatrix}\begin{bmatrix}\tilde Q^T\\Q_0^T\end{bmatrix}\right)z\\&=\left(\tilde Q\tilde D^{-1}\tilde Q^T\right)\left([\tilde Q\ \ Q_0]\begin{bmatrix}\tilde D&0\\0&0\end{bmatrix}\begin{bmatrix}\tilde Q^Tz\\Q_0^Tz\end{bmatrix}\right)\\&=\tilde Q\tilde D^{-1}\tilde Q^T\left(\tilde Q\tilde D\tilde Q^Tz\right)=\tilde Q\tilde D^{-1}\tilde D\tilde Q^Tz=\tilde Q\tilde Q^Tz=\tilde z.\end{aligned}$$

    Thus, $P$ returns the solution $\tilde z$: $A\tilde z=A\tilde z+0=A\tilde z+Az_0=Az=B$.

    Next, we show that $\tilde z$ is the solution of minimum 2-norm. If $\tilde z$ is the unique solution, then it is trivially the solution of minimum norm. Otherwise, let $\tilde y$ be any other solution. As $A\tilde y=B$, we have $\tilde y-\tilde z\in N(A)$. Using the standard inner product on $\mathbb{R}^m$ and noting that any vector in $N(A)$ is orthogonal to $\tilde z$, we have $\langle\tilde y,\tilde y\rangle=\langle\tilde z+(\tilde y-\tilde z),\,\tilde z+(\tilde y-\tilde z)\rangle=\langle\tilde z,\tilde z\rangle+\langle\tilde y-\tilde z,\,\tilde y-\tilde z\rangle$, or, equivalently, $\|\tilde y\|^2=\|\tilde z\|^2+\|\tilde y-\tilde z\|^2$. Thus, $\|\tilde y\|\ge\|\tilde z\|$.
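    The construction in Claim 3 translates directly into code. A sketch using NumPy's symmetric eigendecomposition; the tolerance for deciding which eigenvalues count as positive is an implementation choice.

```python
import numpy as np

def min_norm_solution(A, B, tol=1e-10):
    """Minimum 2-norm solution of A z = B for symmetric A (Claim 3)."""
    lam, Q = np.linalg.eigh(A)             # A = Q diag(lam) Q^T
    keep = lam > tol                       # the kappa positive eigenvalues
    Qk = Q[:, keep]                        # basis for the complement of N(A)
    return Qk @ ((Qk.T @ B) / lam[keep])   # Q~ D~^{-1} Q~^T B

# The Von Neumann example below: A has rank 2, yet A z = B is solvable.
pi = np.pi
A = np.array([[pi, 2, pi, 2], [2, pi, 2, pi],
              [pi, 2, pi, 2], [2, pi, 2, pi]])
print(min_norm_solution(A, np.full(4, 4.0)))   # all entries 2/(pi + 2)
```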

    We next present example computations illustrating the determination of weights for several scenarios from Figures 6 and 7. Table 1 provides weight computation results for each scenario. We end this section with the computation for uniformly distributed (in angle) unit vector directions.

    Table 1.  Weights for each displacement vector set example in Figures 6 and 7.

    Rectangular arrays:
    (A) $d_1=(1,0)$, $d_2=(0,1)$: $w_1=w_2\approx 0.77797$.
    (B) $d_1=(1,0)$, $d_3=(-1,0)$: $w_1=w_3\approx 0.38898$; $d_2=(0,1)$, $d_4=(0,-1)$: $w_2=w_4\approx 0.38898$.
    (C) $d_1=(1,0)$, $d_2=(0,1)$: $w_1=w_2\approx 0.72501$; $d_3=(-1,-1)$: $w_3\approx 0.07635$.
    (D) $d_1=(0,1)$: $w_1\approx 0.10784$; $d_2=(1,1)$, $d_3=(-1,1)$: $w_2=w_3\approx 0.51266$.
    (E) $d_1=(1,0)$, $d_5=(-1,0)$: $w_1=w_5\approx 0.19624$; $d_2=(1,1)$, $d_6=(-1,-1)$: $w_2=w_6\approx 0.13876$; $d_3=(0,1)$, $d_7=(0,-1)$: $w_3=w_7\approx 0.19624$; $d_4=(-1,1)$, $d_8=(1,-1)$: $w_4=w_8\approx 0.13876$.
    (F) $d_1=(1,0)$, $d_9=(-1,0)$: $w_1=w_9\approx 0.12214$; $d_2=(2,1)$, $d_{10}=(-2,-1)$: $w_2=w_{10}\approx 0.04542$; $d_3=(1,1)$, $d_{11}=(-1,-1)$: $w_3=w_{11}\approx 0.04769$; $d_4=(1,2)$, $d_{12}=(-1,-2)$: $w_4=w_{12}\approx 0.04542$; $d_5=(0,1)$, $d_{13}=(0,-1)$: $w_5=w_{13}\approx 0.12214$; $d_6=(-1,2)$, $d_{14}=(1,-2)$: $w_6=w_{14}\approx 0.04542$; $d_7=(-1,1)$, $d_{15}=(1,-1)$: $w_7=w_{15}\approx 0.04769$; $d_8=(-2,1)$, $d_{16}=(2,-1)$: $w_8=w_{16}\approx 0.04542$.

    Hexagonal arrays:
    (A) $d_1=(\sqrt{3}/2,\,1/2)$, $d_2=(0,1)$: $w_1=w_2\approx 0.74112$.
    (B) $d_1=(0,1)$: $w_1\approx 0.52268$; $d_2=(\sqrt{3}/2,\,-1/2)$, $d_3=(-\sqrt{3}/2,\,-1/2)$: $w_2=w_3\approx 0.52268$.
    (C) $d_1=(0,1)$, $d_4=(0,-1)$: $w_1=w_4\approx 0.26134$; $d_2=(\sqrt{3}/2,\,1/2)$, $d_5=(-\sqrt{3}/2,\,-1/2)$: $w_2=w_5\approx 0.26134$; $d_3=(\sqrt{3}/2,\,-1/2)$, $d_6=(-\sqrt{3}/2,\,1/2)$: $w_3=w_6\approx 0.26134$.


    We have direction set $V_A=\{(1,0),(0,1)\}$. We first compute the direction vector norms and the matrix elements $A(k,j)$:

    $\|d_1\|=\|d_2\|=1$.
    $\Delta_{1,2}=\pi/2$.
    $A(1,1)=A(2,2)=\pi$.
    $A(1,2)=A(2,1)=2$.

    For this set, Eq. (4.1) takes the form

    $$\begin{bmatrix}\pi&2\\2&\pi\end{bmatrix}\begin{bmatrix}z_1\\z_2\end{bmatrix}=\begin{bmatrix}4\\4\end{bmatrix}.$$

    The matrix A is invertible. The solution is

    $$z_1=z_2=\frac{4}{\pi+2}\approx 0.77797,$$

    and the weights are

    $$w_1=w_2=\frac{4}{\pi+2}\approx 0.77797.$$

    We have direction set $V_C=\{(1,0),(0,1),(-1,-1)\}$. We first compute the direction vector norms and the matrix elements $A(k,j)$:

    $\|d_1\|=1$, $\|d_2\|=1$, $\|d_3\|=\sqrt{2}$.
    $\Delta_{1,2}=\pi/2$, $\Delta_{1,3}=\Delta_{2,3}=\pi/4$.
    $A(1,1)=A(2,2)=A(3,3)=\pi$.
    $A(1,2)=2$, $A(1,3)=A(2,3)=\dfrac{\pi+4}{2\sqrt{2}}$.

    For this set, Eq. (4.1) takes the form

    $$\begin{bmatrix}\pi&2&\frac{\pi+4}{2\sqrt{2}}\\[2pt]2&\pi&\frac{\pi+4}{2\sqrt{2}}\\[2pt]\frac{\pi+4}{2\sqrt{2}}&\frac{\pi+4}{2\sqrt{2}}&\pi\end{bmatrix}\begin{bmatrix}z_1\\z_2\\z_3\end{bmatrix}=\begin{bmatrix}4\\4\\4\end{bmatrix}.$$

    The matrix A is invertible. The solution is

    $z_1\approx 0.72501$,
    $z_2\approx 0.72501$,
    $z_3\approx 0.10784$.

    Thus,

    $w_1=z_1/\|d_1\|\approx 0.72501$,
    $w_2=z_2/\|d_2\|\approx 0.72501$,
    $w_3=z_3/\|d_3\|\approx 0.07635$.

    Consider the Von Neumann set $V_B=\{(1,0),(0,1),(-1,0),(0,-1)\}$. Proceeding as before, we find

    $\|d_1\|=\|d_2\|=\|d_3\|=\|d_4\|=1$.
    $\Delta_{1,2}=\Delta_{2,3}=\Delta_{3,4}=\Delta_{1,4}=\pi/2$.
    $\Delta_{1,3}=\Delta_{2,4}=0$.
    $A(1,1)=A(2,2)=A(3,3)=A(4,4)=\pi$.
    $A(1,2)=A(2,3)=A(3,4)=A(1,4)=2$.
    $A(1,3)=A(2,4)=\pi$.

    For this set, Eq. (4.1) takes the form

    $$\begin{bmatrix}\pi&2&\pi&2\\2&\pi&2&\pi\\\pi&2&\pi&2\\2&\pi&2&\pi\end{bmatrix}\begin{bmatrix}z_1\\z_2\\z_3\\z_4\end{bmatrix}=\begin{bmatrix}4\\4\\4\\4\end{bmatrix}.$$

    The matrix $A$ has rank 2 and is not invertible. $A$ is diagonalizable with

    $$\tilde D=\begin{bmatrix}2\pi+4&0\\0&2\pi-4\end{bmatrix},\qquad \tilde Q=\begin{bmatrix}1/2&1/2\\1/2&-1/2\\1/2&1/2\\1/2&-1/2\end{bmatrix}.$$

    The pseudo-inverse operator is

    $$P=\tilde Q\tilde D^{-1}\tilde Q^T=\begin{bmatrix}p_1&p_2&p_1&p_2\\p_2&p_1&p_2&p_1\\p_1&p_2&p_1&p_2\\p_2&p_1&p_2&p_1\end{bmatrix},$$

    where

    $$p_1=\frac{\pi}{4\pi^2-16},\qquad p_2=\frac{-2}{4\pi^2-16}.$$

    The solution is $z=PB$:

    $$z_1=z_2=z_3=z_4=\frac{2}{\pi+2}\approx 0.38898.$$

    Since all direction vectors have norm 1, we have

    $$w_1=w_2=w_3=w_4=\frac{2}{\pi+2}\approx 0.38898.$$

    Consider the case of $m$ unit-length displacement vectors uniformly distributed in angle:

    $$(u_k,v_k)=\left(\sin\left(2\pi\frac{k-1}{m}\right),\;\cos\left(2\pi\frac{k-1}{m}\right)\right),\qquad k=1,2,\ldots,m.$$

    Symmetry dictates that $w_1=w_2=\cdots=w_m$, and Eq. (3.2) reduces to

    $$w=\frac{4}{\sum_{k=1}^{m}A(1,k)},$$

    where the unique weight, $w$, is expressed as a function of the number of displacement vectors. If we consider $m\to\infty$, noting that $0<\Delta<\frac{\pi}{2}$, we have

    $$\lim_{m\to\infty}\left(\frac{\pi}{2m}\sum_{k=1}^{m}A(1,k)\right)=\int_0^{\pi/2}\left[(\pi-2\theta)\cos\theta+2\sin\theta\right]d\theta=4.$$

    Thus,

    $$\lim_{m\to\infty}mw=\frac{\pi}{2}.$$
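    A sketch reproducing these values from the formula $w=4/\sum_k A(1,k)$; the helper name is an illustrative choice.

```python
import numpy as np

def uniform_weight(m):
    """Common optimal weight for m unit directions uniform in angle."""
    phi = 2 * np.pi * np.arange(m) / m           # direction angles
    d = np.abs(phi - phi[0]) % np.pi
    delta = np.minimum(d, np.pi - d)             # line angles in [0, pi/2]
    row = (np.pi - 2 * delta) * np.cos(delta) + 2 * np.sin(delta)
    return 4.0 / row.sum()                       # w = 4 / sum_k A(1,k)

for m in (3, 4, 6, 12):
    print(m, round(uniform_weight(m), 5), round(np.pi / (2 * m), 5))
```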

    Several weight values are given in Table 2. Notice that m=3,4,6 are scenarios which also appear in Table 1.

    Table 2.  Optimal anisotropic gradient weight $w$ for $m$ uniformly distributed unit displacement vectors. For $m\ge 3$, $w$ is very closely approximated by $\pi/(2m)$.

     m        w       $\pi/(2m)$
     1     1.27324    1.57080
     2     0.63662    0.78540
     3     0.52268    0.52360
     4     0.38898    0.39270
     5     0.31409    0.31416
     6     0.26134    0.26180
     7     0.22439    0.22440
     8     0.19624    0.19635
     9     0.17453    0.17453
    10     0.15704    0.15708
    11     0.14280    0.14280
    12     0.13089    0.13090


    The discrete anisotropic gradient and its integral are important in a variety of image processing applications and set boundary measure computations. We provided a method for computing weight factors for general anisotropic variation approximations of functions on $\mathbb{R}^2$. Optimal weights are found by minimizing the total integrated squared error over all unit-gradient planar functions. The method was developed in the framework of regular arrays, but it applies more broadly to arbitrary finite discrete sampling strategies.

    We would like to thank the anonymous referees and AIMS editor for their assistance in improving the clarity and focus of this paper.

    The authors declare no conflict of interest.



    [1] V. Caselles, A. Chambolle, D. Cremers, et al., An introduction to total variation for image analysis, In: M. Fornasier (Ed.), Theoretical Foundations and Numerical Methods for Sparse Recovery, De Gruyter, 2010.
    [2] R. H. Chan, S. Setzer and G. Steidl, Inpainting by flexible Haar-wavelet shrinkage, SIAM J. Imaging Sci., 1 (2008), 273-293. doi: 10.1137/070711499
    [3] T. F. Chan and S. Esedoḡlu, A multiscale algorithm for Mumford-Shah image segmentation, UCLA CAM Report 03-57, 2003.
    [4] H. Chen, C. Wang, Y. Song, et al., Split Bregmanized anisotropic total variation model for image deblurring, J. Vis. Commun. Image R., 31 (2015), 282-293. doi: 10.1016/j.jvcir.2015.07.004
    [5] R. Choksi, Y. van Gennip and A. Oberman, Anisotropic total variation regularized L1-approximation and denoising/deblurring of 2D bar code, Inverse Probl. Imag., 5 (2010), 591-617.
    [6] L. Condat, Discrete total variation: New definition and minimization, SIAM J. Imaging Sci., 10 (2017), 1258-1290. doi: 10.1137/16M1075247
    [7] V. Duval, J. F. Aujol and Y. Gousseau, The TVL1 model: A geometric point of view, Multiscale Model. Simul., 8 (2009), 154-189. doi: 10.1137/090757083
    [8] S. Esedoḡlu and S. J. Osher, Decomposition of images by the anisotropic Rudin-Osher-Fatemi model, Commun. Pure Appl. Math., 57 (2004), 1609-1626. doi: 10.1002/cpa.20045
    [9] D. Goldfarb and W. Yin, Parametric maximum flow algorithms for fast total variation minimization, SIAM J. Sci. Comput., 31 (2009), 3712-3743. doi: 10.1137/070706318
    [10] T. Goldstein and S. Osher, The split Bregman method for L1-regularized problems, SIAM J. Imaging Sci., 2 (2009), 323-343. doi: 10.1137/080725891
    [11] Y. Lou, T. Zeng, S. Osher, et al., A weighted difference of anisotropic and isotropic total variation model for image processing, SIAM J. Imaging Sci., 8 (2015), 1798-1823. doi: 10.1137/14098435X
    [12] S. P. Morgan and K. R. Vixie, L1TV computes the flat norm for boundaries, Abstr. Appl. Anal., 2007 (2007), Article ID 45153.
    [13] L. B. Montefusco, D. Lazzaro and S. Papi, Fast sparse image reconstruction using adaptive nonlinear filtering, IEEE Trans. Image Process., 20 (2011), 534-544. doi: 10.1109/TIP.2010.2062194
    [14] H. A. Moon and T. Asaki, A finite hyperplane traversal algorithm for 1-dimensional L1pTV minimization, for 0 < p ≤ 1, Comput. Optim. Appl., 61 (2015), 783-818.
    [15] H. T. Nguyen, M. Worring and R. van den Boomgaard, Watersnakes: Energy-driven watershed segmentation, IEEE Trans. Pattern Anal. Mach. Intell., 25 (2003), 330-342. doi: 10.1109/TPAMI.2003.1182096
    [16] D. Strong and T. F. Chan, Edge-preserving and scale-dependent properties of total variation regularization, Inverse Probl., 19 (2003), S165-S187.
  • © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)