
Some new results on the face index of certain polycyclic chemical networks


  • Received: 11 January 2023 Revised: 01 February 2023 Accepted: 02 February 2023 Published: 23 February 2023
  • Silicate minerals make up the majority of the earth's crust and account for almost 92 percent of the total. Silicate sheets, often known as silicate networks, are characterised as definite connectivity parallel designs. A key idea in studying different generalised classes of graphs in terms of planarity is the face of the graph. It plays a significant role in the embedding of graphs as well. Face index is a recently created parameter that is based on the data from a graph's faces. The current draft is utilizing a newly established face index, to study different silicate networks. It consists of a generalized chain of silicate, silicate sheet, silicate network, carbon sheet, polyhedron generalized sheet, and also triangular honeycomb network. This study will help to understand the structural properties of chemical networks because the face index is more generalized than vertex degree based topological descriptors.

    Citation: Ricai Luo, Khadija Dawood, Muhammad Kamran Jamil, Muhammad Azeem. Some new results on the face index of certain polycyclic chemical networks[J]. Mathematical Biosciences and Engineering, 2023, 20(5): 8031-8048. doi: 10.3934/mbe.2023348




The step-length-based techniques and the trust region methods are two types of optimization methods for the least-squares problem (LSP): \min \dfrac{1}{2}\|F(x)\|^{2} , where F = \{f_{1}, f_{2}, \dots, f_{N}\}^{T} is the vector of the nonlinear functions, M is the number of unknowns x , and M \leq N .

The Newton method and the Gauss-Newton (GN) method, which are frequently utilized for solving LSPs, belong to the step-length-based category [1,2]. The Newton approach calculates the least-squares function's optimum descent direction, resulting in a faster rate of convergence. However, determining the descent direction requires the precise derivatives of the least-squares function, which is time consuming. On the other hand, the GN method approximates the derivatives using a simple approach that is relatively straightforward to evaluate; however, the solution may not converge owing to the over-simplification of the derivatives, which may result in an incorrect descent direction. The Levenberg-Marquardt method (LMM) was introduced by Levenberg [3] and Marquardt [4] for solving the LSP using a trust region approach [5,6]. In the GN method, the simplified Hessian matrix H_{GN} is not always invertible; to avoid this problem a modified positive definite Hessian H_{LM} = H_{GN} + S_{k} is usually used. The LMM takes S_{k} = \beta_{k} I , where \beta_{k} represents the Levenberg-Marquardt parameter, a scalar quantity that changes during the optimization, and I is the identity matrix. The gradient descent method and the GN method are combined in the LM algorithm. When the parameters are far from their ideal value ( \beta_{k} increases) the LM process behaves as a gradient-descent method, while it works as the GN method when the parameters are close to their optimal value ( \beta_{k} decreases) [4,7,8,9,10]. Many researchers have expressed interest in Newton-type methods for solving inverse problems because of their rapid convergence for well-posed problems. The LMM is known to have a quadratic rate of convergence under non-singularity conditions [6,11,12,13,14,15]. Yamashita et al. [6] investigated the solution of a nonlinear system of equations F(x) = 0 under the assumption that \|F(x)\| provides a local error bound, where the convergence rate is still quadratic when \beta_{k} = \|F(x_{k})\|^{2} . Khayat [12] has established the quadratic convergence of the LMM for the inverse problem of identifying a Robin coefficient R for the Stokes system in the case where R is piecewise constant on some non-accessible part of the boundary and the velocity of a given reference solution does not vanish on the boundary. In this case, the regularizing parameter is defined by \beta_{k} = \|U(R_{k}) - Z^{\eta}\|^{2} , where Z^{\eta} is the noisy observed data.
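To make the damping behaviour concrete, the following NumPy sketch runs a plain Levenberg-Marquardt loop on a small synthetic exponential-fitting problem with the Yamashita-type choice \beta_{k} = \|F(x_{k})\|^{2} mentioned above; the model, data and tolerances are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def residual(x, t, y):
    """Residual vector F(x): toy exponential model x0*exp(x1*t) minus data y."""
    return x[0] * np.exp(x[1] * t) - y

def jacobian(x, t):
    """Jacobian of F with respect to x, shape (N, M)."""
    return np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])

# synthetic data (illustrative only)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.5 * t) + 0.001 * rng.standard_normal(t.size)

x = np.array([1.0, -0.5])                      # initial guess
for k in range(100):
    F = residual(x, t, y)
    J = jacobian(x, t)
    beta = float(F @ F)                        # beta_k = ||F(x_k)||^2
    # damped normal equations: (J^T J + beta_k I) d = -J^T F
    d = np.linalg.solve(J.T @ J + beta * np.eye(x.size), -J.T @ F)
    x = x + d
    if np.linalg.norm(d) < 1e-10:
        break

print(k, x)   # x ends up close to the generating parameters (2.0, -1.5)
```

Early on, the large residual makes beta dominate and the step behaves like damped gradient descent; as the residual shrinks, beta shrinks quadratically and the iteration approaches a Gauss-Newton step.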

Elasticity imaging is a relatively new medical technique used to detect several pathologies, especially cancerous tumors. These pathologies are known to affect the elastic properties (elastic modulus, Poisson's ratio, stiffness, etc.) of soft tissues.

Many authors have dealt with the elasticity imaging inverse problem (EIIP). Mei et al. [16] have estimated the elastic modulus from surface displacements during multiple observations. Raghavan et al. [17] have solved the equilibrium equations using finite difference schemes to recover elastic properties. Doyley et al. [18] applied a modified Newton-Raphson method. A variational method based on error functional decomposition was applied by Constantinescu [19] for an anisotropic elastic body. Oberai et al. [20] reconstructed the identification parameter using a first-order adjoint method. Jadamba et al. [21] have reconstructed the Lamé parameters using a conjugate gradient trust-region technique. Arnold et al. [22] have utilized a numerical clustering procedure to identify the Young's modulus. Abdelhamid et al. [23] have investigated the identification problem of reconstructing the modulus of elasticity using the nonlinear conjugate gradient method. Mohammadi et al. [24] have used a statistical technique combined with a fixed-point algorithm. In this study we introduce an interesting and rapidly converging method (the LMM) for solving the elasticity imaging inverse problem. This method has the particularity of transforming a nonlinear, non-convex optimization into a convex one. Recently, the LMM has been used in closely related problems in the field. Indeed, Raja et al. [25] have combined the LMM with artificial neural networks to study the thermal radiation and Hall effect on boundary layer flow. Shoaib et al. [26] have used the same method to study the influence of activation energy on third-grade nanofluid flow. In the following, a mathematical treatment of the LMM and an exhaustive numerical study are given. Realistic examples are treated using the proposed procedure.

Worldwide, about 19.3 million new cancer cases (18.1 million excluding nonmelanoma skin cancer) and almost 10.0 million cancer deaths occurred in 2020 [27]. It was estimated that about 18.1 million new cancer cases and 9.6 million cancer deaths were recorded in 2018, across 20 world regions [28]. In the literature the soft tissue can be modeled as an incompressible, isotropic and continuous elastic object. The linear elasticity system is introduced to simulate the external and tractional forces on the soft tissue. In this study, the Young's modulus E is defined as a function of the position (x, y) and Poisson's ratio is a given constant \nu ( \nu = 0.48 ). Reconstructing E in the elasticity imaging inverse problem is important for characterizing the complex mechanical behaviour of the soft tissue. We derive the basic equations of the theory of elasticity. A free body diagram (FBD) is a diagrammatic depiction of a single body or a subsystem of bodies that is separated from its surroundings and depicts all the forces operating on it, as shown in Figure 1.

Figure 1.  A free body diagram of a two-dimensional body.

Let \omega be an infinitesimal element [29,30]; the FBD of \omega is represented in Figure 1 and the stress components are shown as positive. Summation of the forces in the x and x + dx axes at equilibrium is introduced as follows:

\begin{equation} \sum F_{x} = \left(\sigma_{x}+\dfrac{\partial \sigma_{x}}{\partial x}\right)dxdy-\sigma_{x}\,dxdy+\left(\tau_{xy}+\dfrac{\partial \tau_{xy}}{\partial y}\right)dxdy-\tau_{xy}\,dxdy+f_{x}\,dxdy = 0, \end{equation} (2.1)

    we do the same to the y and y+dy axes:

\begin{equation} \sum F_{y} = \left(\tau_{xy}+\dfrac{\partial \tau_{xy}}{\partial x}\right)dxdy-\tau_{xy}\,dxdy+\left(\sigma_{y}+\dfrac{\partial \sigma_{y}}{\partial y}\right)dxdy-\sigma_{y}\,dxdy+f_{y}\,dxdy = 0. \end{equation} (2.2)

We denote by f_{x} and f_{y} the body forces in the x - and y -directions, which are generally assumed to be positive when acting along the positive axes. Simplifying Eqs (2.1) and (2.2), we derive the basic equations of the theory of elasticity below:

\begin{equation} \dfrac{\partial \sigma_{x}}{\partial x}+\dfrac{\partial \tau_{xy}}{\partial y}+f_{x} = 0, \end{equation} (2.3)
\begin{equation} \dfrac{\partial \tau_{xy}}{\partial x}+\dfrac{\partial \sigma_{y}}{\partial y}+f_{y} = 0. \end{equation} (2.4)

    The stresses-strains relationship (for an isotropic material) is defined by

\begin{equation} (\sigma) = [D](\varepsilon), \end{equation} (2.5)

where (\sigma) = (\sigma_{x}, \sigma_{y}, \tau_{xy})^{T} denotes the stress and (\varepsilon) = (\varepsilon_{x}, \varepsilon_{y}, \gamma_{xy})^{T} is called the small strain tensor:

\varepsilon_{ij}(\boldsymbol{u}) = \dfrac{1}{2}\left(\dfrac{\partial u_{i}}{\partial x_{j}}+\dfrac{\partial u_{j}}{\partial x_{i}}\right),
\varepsilon_{x} = \varepsilon_{11}(\boldsymbol{u}) = \dfrac{\partial u}{\partial x}, \quad \varepsilon_{y} = \varepsilon_{22}(\boldsymbol{u}) = \dfrac{\partial v}{\partial y}, \quad \gamma_{xy} = 2\varepsilon_{12}(\boldsymbol{u}) = 2\varepsilon_{21}(\boldsymbol{u}) = \dfrac{\partial u}{\partial y}+\dfrac{\partial v}{\partial x}.

where \boldsymbol{u} = (u, v) represents the displacement field. Here, Eq (2.5) is equivalent to

\begin{equation} \left(\begin{array}{c} \sigma_{x} \\ \sigma_{y} \\ \tau_{xy} \end{array}\right) = E\,[D]\left(\begin{array}{c} \varepsilon_{x} \\ \varepsilon_{y} \\ \gamma_{xy} \end{array}\right). \end{equation} (2.6)

A plane stress approximation can be used to describe the body's deformation; for this reason we take the material property matrix D as follows:

\begin{equation} D_{\sigma} = \rho_{1}\left[\begin{array}{ccc} 1 & \nu & 0 \\ \nu & 1 & 0 \\ 0 & 0 & \rho_{2} \end{array}\right], \quad \text{such that} \quad \rho_{1} = \dfrac{1}{1-\nu^{2}}, \quad \rho_{2} = \dfrac{1-\nu}{2}. \end{equation} (2.7)

where E and \nu are the Young's modulus and the Poisson's ratio, and \rho_{1} and \rho_{2} are given constants depending on \nu .
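As a quick illustration of Eqs (2.6) and (2.7), the short NumPy sketch below builds the plane-stress matrix D_{\sigma} for a given \nu and maps a displacement gradient to stresses; the numerical values are placeholders chosen only for the example.

```python
import numpy as np

def plane_stress_matrix(nu):
    """D_sigma of Eq (2.7): rho1 * [[1, nu, 0], [nu, 1, 0], [0, 0, rho2]]."""
    rho1 = 1.0 / (1.0 - nu ** 2)
    rho2 = (1.0 - nu) / 2.0
    return rho1 * np.array([[1.0, nu, 0.0],
                            [nu, 1.0, 0.0],
                            [0.0, 0.0, rho2]])

def stress(E, nu, grad_u):
    """(sigma_x, sigma_y, tau_xy) from grad_u = [[du/dx, du/dy], [dv/dx, dv/dy]], Eq (2.6)."""
    strain = np.array([grad_u[0, 0],                   # eps_x  = du/dx
                       grad_u[1, 1],                   # eps_y  = dv/dy
                       grad_u[0, 1] + grad_u[1, 0]])   # gamma_xy = du/dy + dv/dx
    return E * plane_stress_matrix(nu) @ strain

# illustrative values only
print(stress(116.0, 0.48, np.array([[1e-3, 2e-4], [0.0, -5e-4]])))
```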

A tumor can be identified by investigating the elastic properties of the soft tissue, such as Young's modulus and Poisson's ratio, from given measurements. The following forward system of partial differential equations describes the response of an elastic object when external body forces and traction are applied to the boundary. Let \Omega \subset \mathbb{R}^{d} , for d = 2, 3 , be an open, bounded and connected domain such that \partial\Omega = \Gamma_{i} \cup \Gamma_{c} . Consider the following elasticity problem with essential (Dirichlet) and natural (Neumann) boundary conditions, governing the displacement \boldsymbol{u} = (u, v) in the x and y directions when the body forces \boldsymbol{f} = (f_{x}, f_{y}) are applied to \Omega :

\begin{equation} (\mathcal{DP}): \left\{\begin{array}{ll} \dfrac{\partial \sigma_{x}}{\partial x}+\dfrac{\partial \tau_{xy}}{\partial y}+f_{x} = 0 & {\rm { in }}\; \Omega, \\ \ \\ \dfrac{\partial \tau_{xy}}{\partial x}+\dfrac{\partial \sigma_{y}}{\partial y}+f_{y} = 0 & {\rm { in }}\; \Omega, \\ \ \\ \sigma_{x} n_{x}+\tau_{xy} n_{y} = q_{x} & {\rm { on }}\; \Gamma_{i}, \\ \ \\ \tau_{xy} n_{x}+\sigma_{y} n_{y} = q_{y} & {\rm { on }}\; \Gamma_{i}, \\ \ \\ (u, v) = (u_{0}, v_{0}) & {\rm { on }}\; \Gamma_{c}. \end{array}\right. \end{equation} (2.8)

    On the section of the boundary designated by Γi (left, right and top), the traction vectors q=(qx,qy) are specified as Neumann boundary conditions, n=(nx,ny) its unit outward normal, while Γc (bottom) is taken as constant Dirichlet boundary condition. Young's modulus E is considered as a function of location E(x,y) and Poisson's ratio is given as a constant (i.e., ν=0.48). The test function space, represented by W, is defined by

\begin{equation} W = \left\{\boldsymbol{w} = (w_{1}, w_{2}) \in H^{1}(\Omega)\times H^{1}(\Omega): \boldsymbol{w} = 0 \ {\rm { on }}\; \Gamma_{c}\right\}. \end{equation} (2.9)

To determine the weak formulation of the linear elasticity Eq (2.8), we apply the weighted residual method by multiplying Eq (2.8) by a test function and integrating over \Omega , such that:

\begin{equation} \int_{\Omega}\left(\begin{array}{l} w_{1}\left(\dfrac{\partial \sigma_{x}}{\partial x}+\dfrac{\partial \tau_{xy}}{\partial y}\right) \\ w_{2}\left(\dfrac{\partial \tau_{xy}}{\partial x}+\dfrac{\partial \sigma_{y}}{\partial y}\right) \end{array}\right) \mathrm{d} \Omega+\int_{\Omega}\left(\begin{array}{l} w_{1} f_{x} \\ w_{2} f_{y} \end{array}\right) \mathrm{d} \Omega = 0. \end{equation} (2.10)

By utilizing Green's identity, applying integration by parts to the first term of Eq (2.10) and using the boundary conditions, we get:

\begin{equation} -\int_{\Omega}\left(\begin{array}{l} \dfrac{\partial w_{1}}{\partial x}\sigma_{x}+\dfrac{\partial w_{1}}{\partial y}\tau_{xy} \\ \dfrac{\partial w_{2}}{\partial x}\tau_{xy}+\dfrac{\partial w_{2}}{\partial y}\sigma_{y} \end{array}\right) \mathrm{d} \Omega+\int_{\Omega}\left(\begin{array}{l} w_{1} f_{x} \\ w_{2} f_{y} \end{array}\right) \mathrm{d} \Omega+\int_{\Gamma_{i}}\left(\begin{array}{l} w_{1} q_{x} \\ w_{2} q_{y} \end{array}\right) \mathrm{d} \Gamma_{i} = 0. \end{equation} (2.11)

    The variational formulation of Eq (2.8) can be rewritten as follows:

\begin{equation} \int_{\Omega}\left(\begin{array}{ccc} \dfrac{\partial w_{1}}{\partial x} & 0 & \dfrac{\partial w_{1}}{\partial y} \\ 0 & \dfrac{\partial w_{2}}{\partial y} & \dfrac{\partial w_{2}}{\partial x} \end{array}\right)\left(\begin{array}{c} \sigma_{x} \\ \sigma_{y} \\ \tau_{xy} \end{array}\right) \mathrm{d} \Omega = \int_{\Omega}\left(\begin{array}{l} w_{1} f_{x} \\ w_{2} f_{y} \end{array}\right) \mathrm{d} \Omega+\int_{\Gamma_{i}}\left(\begin{array}{l} w_{1} q_{x} \\ w_{2} q_{y} \end{array}\right) \mathrm{d} \Gamma_{i}. \end{equation} (2.12)

    Finally, using Eqs (2.6) and (2.7), the forward problem can be solved to find the displacement field (u,v):

\begin{equation} \int_{\Omega}\left(\begin{array}{ccc} \dfrac{\partial w_{1}}{\partial x} & 0 & \dfrac{\partial w_{1}}{\partial y} \\ 0 & \dfrac{\partial w_{2}}{\partial y} & \dfrac{\partial w_{2}}{\partial x} \end{array}\right) E D_{\sigma}\left(\begin{array}{c} \dfrac{\partial u}{\partial x} \\ \dfrac{\partial v}{\partial y} \\ \dfrac{\partial u}{\partial y}+\dfrac{\partial v}{\partial x} \end{array}\right) \mathrm{d} \Omega = \int_{\Omega}\left(\begin{array}{l} w_{1} f_{x} \\ w_{2} f_{y} \end{array}\right) \mathrm{d} \Omega+\int_{\Gamma_{i}}\left(\begin{array}{l} w_{1} q_{x} \\ w_{2} q_{y} \end{array}\right) \mathrm{d} \Gamma_{i}. \end{equation} (2.13)

The body deformation represents the solution of the forward problem, i.e., the displacement field on the problem domain. This deformation helps in solving the elasticity imaging inverse problem of identifying the elastic properties from the available measurements on a subdomain of the problem.
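For readers who want to see how Eq (2.13) is evaluated in practice, the following sketch computes its left-hand-side integrand at a single quadrature point. In the paper this assembly is performed by the finite element solver (FreeFem++), so the function below and its gradient inputs are only an illustrative stand-in.

```python
import numpy as np

def weak_form_integrand(E, nu, grad_w, grad_u):
    """Pointwise left-hand-side integrand of Eq (2.13).

    grad_w = [[dw1/dx, dw1/dy], [dw2/dx, dw2/dy]]   (test functions)
    grad_u = [[du/dx,  du/dy],  [dv/dx,  dv/dy]]    (trial displacement)
    Returns the 2-vector that is integrated over Omega.
    """
    rho1, rho2 = 1.0 / (1.0 - nu ** 2), (1.0 - nu) / 2.0
    D = rho1 * np.array([[1.0, nu, 0.0], [nu, 1.0, 0.0], [0.0, 0.0, rho2]])
    W = np.array([[grad_w[0, 0], 0.0,          grad_w[0, 1]],
                  [0.0,          grad_w[1, 1], grad_w[1, 0]]])
    strain = np.array([grad_u[0, 0], grad_u[1, 1], grad_u[0, 1] + grad_u[1, 0]])
    return W @ (E * D @ strain)

# placeholder gradient values at one quadrature point
print(weak_form_integrand(1.0, 0.48,
                          np.array([[0.3, -0.1], [0.2, 0.5]]),
                          np.array([[1e-3, 2e-4], [0.0, -5e-4]])))
```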

Let \mathcal{X} and \mathcal{Y} be two Hilbert spaces and \boldsymbol{J}: D(\boldsymbol{J}) \subset \mathcal{X} \rightarrow \mathcal{Y} a continuously differentiable function. Consider the system of nonlinear equations:

\begin{equation} \boldsymbol{J}(x) = 0. \end{equation} (3.1)

    Assume that Eq (3.1) has a solution set K that is not empty. The system (3.1) may not have a solution, i.e., J(x) is not necessarily invertible. To remedy this, we replace Eq (3.1) by the following minimization problem:

\begin{equation} \min\limits_{x \in \mathcal{X}} \dfrac{1}{2}\|\boldsymbol{J}(x)\|^{2}. \end{equation} (3.2)

The LMM is a Newton-type technique for solving nonlinear least squares problems [4,8,9,10]. Particularly, the LMM is a kind of GN technique for reducing a least squares cost \dfrac{1}{2}\| \boldsymbol{J}(x) \|^{2} . In its purest form, the GN technique effectively takes, at each iteration:

    x^{k+1} = x^{k}+d^{k},

    where the direction of descent d^k is

\begin{equation} \left[\nabla \boldsymbol{J}\left(x^{k}\right) \nabla \boldsymbol{J}\left(x^{k}\right)^{\prime}\right]d^{k} = -\nabla \boldsymbol{J}\left(x^{k}\right) \boldsymbol{J}\left(x^{k}\right). \end{equation} (3.3)

As in the GN method, the operator \nabla \boldsymbol{J}\left(x^{k}\right) \nabla \boldsymbol{J}\left(x^{k}\right)^{\prime} is not always invertible. So, the matrix is substituted by the following positive definite matrix:

\nabla \boldsymbol{J}\left(x^{k}\right) \nabla \boldsymbol{J}\left(x^{k}\right)^{\prime}+\mathcal{S}^k.

Here \mathcal{S}^k is the identity matrix I multiplied by \beta_k ( \mathcal{S}^k = \beta_k I ), where \beta_k is selected as a positive multiple chosen so that the modified matrix is always non-singular. This method has been widely used because of its simplicity. It also achieves a quadratic convergence rate under suitable hypotheses.
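A small numerical illustration (a toy example, not part of the paper) of why the shift \mathcal{S}^k = \beta_k I works: with a rank-deficient Jacobian the GN matrix is singular, while the shifted matrix is positive definite.

```python
import numpy as np

rng = np.random.default_rng(2)
Jp = rng.standard_normal((10, 4))
Jp[:, 3] = Jp[:, 0]                        # rank-deficient Jacobian on purpose

H_gn = Jp.T @ Jp                           # GN matrix: singular here
H_lm = H_gn + 0.5 * np.eye(4)              # LM modification with beta_k = 0.5

print(np.linalg.eigvalsh(H_gn).min())      # ~0 (up to rounding): not invertible
print(np.linalg.eigvalsh(H_lm).min())      # >= 0.5: positive definite
```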

    Remark 1. This is equivalent to the problem of unconstrained minimization:

    \begin{equation} \min\limits _{d \in \mathcal{X}} \mathcal{O}^{k}(d), \end{equation} (3.4)

    where \mathcal{O}^{k}: \mathcal{X} \longrightarrow \mathbb{R} is a strictly convex function defined by:

    \mathcal{O}^{k}(d) = \left\|\boldsymbol{J}^{\prime}\left(x^{k}\right) d+\boldsymbol{J}\left(x^{k}\right)\right\|^{2}+\beta_{k}\|d\|^{2}.
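The equivalence stated in Remark 1 can be checked numerically: minimizing \mathcal{O}^{k}(d) as a stacked least-squares problem gives the same d as the damped normal equations. The sketch below uses random stand-ins for \boldsymbol{J}'(x^{k}) , \boldsymbol{J}(x^{k}) and \beta_{k} , chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
Jp = rng.standard_normal((20, 4))       # stand-in for J'(x^k), shape (N, M)
J0 = rng.standard_normal(20)            # stand-in for J(x^k)
beta = 0.3                              # stand-in for beta_k > 0

# (a) minimize O^k(d) = ||J' d + J||^2 + beta ||d||^2 as a stacked least-squares problem
A = np.vstack([Jp, np.sqrt(beta) * np.eye(4)])
b = np.concatenate([-J0, np.zeros(4)])
d_ls = np.linalg.lstsq(A, b, rcond=None)[0]

# (b) solve the damped normal equations (J'^T J' + beta I) d = -J'^T J
d_ne = np.linalg.solve(Jp.T @ Jp + beta * np.eye(4), -Jp.T @ J0)

print(np.allclose(d_ls, d_ne))   # True: both give the same descent direction
```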

First, we define the set of admissible coefficients for \mathcal{E} :

    \begin{eqnarray} { {\mathbb K}} = \left\{\mathcal{E} \in H^{1}(\Omega): \mathcal{E}_{0} \leq \mathcal{E}(x, y) \leq \mathcal{E}_{1} < \infty, \rm { a.e. in } \Omega\right\}, \end{eqnarray} (3.5)

    where \mathcal{E}_{0} and \mathcal{E}_{1} are two positive constants.

    We are concerned in this section with the inverse problem of identifying \mathcal{E} on \Omega from measurements available on \Gamma_i :

    \begin{eqnarray} {( {\mathcal P} {\mathcal I})} \left\{ \begin{array}{l} \mbox{Find } \mathcal{E}(x, y) \in {\mathbb K} \mbox{ s.t. }\\ \boldsymbol{u} = (u, v) \in H^{1}(\Omega)\times H^{1}(\Omega) \mbox{ is the solution of Eq}\; (2.8). \\ \end{array} \right. \end{eqnarray} (3.6)

    For analyzing inverse problems, the usual output least-squares approach is:

    \begin{equation} \mathcal{J}(\mathcal{E}) = \frac{1}{2}\left\|{\bf u}(\mathcal{E})-z^{\eta} \right\|_{L^{2}(\Omega)}^{2}. \end{equation} (3.7)

    We assume the noise level in the observation data z^\eta = (z_u, z_v) of the true solution \boldsymbol{u} of the system (2.8) is of order \eta . Hence

    \begin{equation} \left\|{\bf u}(\tilde{\mathcal{E}})-z^{\eta}\right\|_{L^{2}(\Omega)} \leq \eta, \end{equation} (3.8)

    where \tilde{\mathcal{E}} is the exact solution of Eq (3.6). We transform the linear elasticity imaging inverse problem of identifying Young's modulus \mathcal{E} to the following stable minimization with Tikhonov regularization,

    \begin{eqnarray} \min\limits_{\mathcal{E}\in {\mathbb K}} {\mathcal J}(\mathcal{E}); \mbox{ and} \, \, \mathcal{J}(\mathcal{E}) = \frac{1}{2}\left\|{\bf u}(\mathcal{E})-z^{\eta} \right\|_{L^{2}(\Omega)}^{2}+\beta \Vert \mathcal{E}-\tilde{\mathcal{E}} \Vert^2_{L^2(\Omega)}. \end{eqnarray} (3.9)

where \Vert \mathcal{E}-\tilde{\mathcal{E}} \Vert^2_{L^2(\Omega)} is the Tikhonov regularization term and \beta is the regularization parameter. The LMM iteration is obtained by following the formalism explained above:

    \begin{equation} \begin{aligned} \mathcal{E}^{k+1}& = \underset{\mathcal{E} \in {\mathbb K}}{\operatorname{argmin }}\; \mathcal{J}(\mathcal{E}), \\ & = \underset{\mathcal{E} \in {\mathbb K}}{\operatorname{argmin}}\left\{\left\|\boldsymbol u\left(\mathcal{E}^{k}\right)-z^{\eta}\right\|_{ {\mathbb L}^{2}\left(\Gamma_{0}\right)}^{2}+\mu_{k}\left\|\mathcal{E}-\mathcal{E}^{k}\right\|_{ {\mathbb L}^{2}\left(\Gamma_{o u t}\right)}^{2}\right\}.\end{aligned} \end{equation} (3.10)

    We assume that the identification parameter \mathcal{E}(x, y) is perturbed by a small amount \mathcal{E}+\delta\mathcal{E} , where \delta\mathcal{E} can be any direction in L^{\infty}(\Omega) .

    u(\mathcal{E}+\delta\mathcal{E}) \approx u(\mathcal{E})+u^{\prime}(\mathcal{E}) \delta\mathcal{E}+\mathcal{O}\left(\|\delta\mathcal{E}\|_{L^{\infty}(\Omega)}^{2} \right),

    and

    v(\mathcal{E}+\delta\mathcal{E} )\approx v(\mathcal{E})+v^{\prime}(\mathcal{E})\delta\mathcal{E}+\mathcal{O} \left(\|\delta\mathcal{E}\|_{L^{\infty}(\Omega)}^{2}\right).

Let (u^{1}, v^{1})\equiv(u^{\prime}(\mathcal{E}) \delta\mathcal{E}, v^{\prime}(\mathcal{E}) \delta\mathcal{E}) \in L^{2}(\Omega)\times L^{2}(\Omega) denote the Fréchet derivative in the direction \delta\mathcal{E} of the forward solution \boldsymbol{u}(\mathcal{E}) = (u(\mathcal{E}), v(\mathcal{E})) in Eq (2.8). Abdelhamid et al. [23] obtained the sensitivity problem of elasticity as follows:

    \begin{equation} {(\mathcal{SP}):} \left\{\begin{array}{ll} \frac{\partial \sigma_{x}^{1}}{\partial x}+\frac{\partial \tau_{x y}^{1}}{\partial y} = -\left(\frac{\partial \tilde{\sigma}_{x}}{\partial x}+\frac{\partial \tilde{\tau}_{x y}}{\partial y}\right), & {\rm { in }}\; \Omega \\ \ \\ \frac{\partial \tau_{x y}^{1}}{\partial x}+\frac{\partial \sigma_{y}^{1}}{\partial y} = -\left(\frac{\partial \tilde{\tau}_{x y}}{\partial x}+\frac{\partial \tilde{\sigma}_{y}}{\partial y}\right), & {\rm { in }}\; \Omega \\ \ \\ \sigma_{x}^{1} n_{x}+\tau_{x y}^{1} n_{y} = -\left(\tilde{\sigma}_{x} n_{x}+\tilde{\tau}_{x y} n_{y}\right), & {\rm { on }}\; \Gamma_{i} \\ \ \\ \tau_{x y}^{1} n_{x}+\sigma_{y}^{1} n_{y} = -\left(\tilde{\tau}_{x y} n_{x}+\tilde{\sigma}_{y} n_{y}\right), & {\rm { on }}\; \Gamma_{i} \\ \ \\ u^{1} = v^{1} = u_{0}. & {\rm { on }}\; \Gamma_{c} \end{array}\right. \end{equation} (3.11)

    For the sensitivity Eq (3.11), we may describe the relationship between stresses and strains as follows:

\begin{equation} \left(\begin{array}{c} \sigma^1_{x} \\ \ \\ \sigma^1_{y} \\ \ \\ \tau^1_{x y} \end{array}\right) = \mathcal{E} D_{\sigma}\left(\begin{array}{c} \dfrac{\partial u^1}{\partial x} \\ \ \\ \dfrac{\partial v^1}{\partial y}\\ \ \\ \dfrac{\partial u^1}{\partial y}+\dfrac{\partial v^1}{\partial x} \end{array}\right). \end{equation} (3.12)

    The right-hand side of Eq (3.11) can be rewritten similarly:

\begin{equation} \left(\begin{array}{c} \tilde\sigma_{x} \\ \ \\ \tilde\sigma_{y} \\ \ \\ \tilde\tau_{x y} \end{array}\right) = \delta\mathcal{E} D_{\sigma}\left(\begin{array}{c} \dfrac{\partial u}{\partial x} \\ \ \\ \dfrac{\partial v}{\partial y}\\ \ \\ \dfrac{\partial u}{\partial y}+\dfrac{\partial v}{\partial x} \end{array}\right). \end{equation} (3.13)

Now, we derive the sensitivity problem's variational formulation:

    \begin{equation} \begin{aligned} -& \int_{\Omega}\left(\begin{array}{l} \sigma_{x}^{1} \frac{\partial w_{1}}{\partial x}+\tau_{x y}^{1} \frac{\partial w_{1}}{\partial y} \\ \tau_{x y}^{1} \frac{\partial w_{2}}{\partial x}+\sigma_{y}^{1} \frac{\partial w_{2}}{\partial y} \end{array}\right) \mathrm{d} \Omega+\int_{\Omega}\left(\begin{array}{l} w_{1}\left(\frac{\partial \tilde{\sigma}_{x}}{\partial x}+\frac{\partial \tilde{\tau}_{x y}}{\partial y}\right) \\ w_{2}\left(\frac{\partial \tilde{\tau}_{x y}}{\partial x}+\frac{\partial \tilde{\sigma}_{y}}{\partial y}\right) \end{array}\right) \mathrm{d} \Omega \\ &-\int_{\Gamma_{i}}\left(\begin{array}{l} w_{1}\left(\tilde{\sigma}_{x} n_{x}+\tilde{\tau}_{x y} n_{y}\right) \\ w_{2}\left(\tilde{\tau}_{x y} n_{x}+\tilde{\sigma}_{y} n_{y}\right) \end{array}\right) \mathrm{d} \Gamma_{i} = 0. \end{aligned} \end{equation} (3.14)

    We get the following result as we use the integration by parts for the second term of (3.14):

    \begin{equation} -\int_{\Omega}\left(\begin{array}{c} \sigma_{x}^{1} \frac{\partial w_{1}}{\partial x}+\tau_{x y}^{1} \frac{\partial w_{1}}{\partial y} \\ \tau_{x y}^{1} \frac{\partial w_{2}}{\partial x}+\sigma_{y}^{1} \frac{\partial w_{2}}{\partial y} \end{array}\right) \mathrm{d} \Omega-\int_{\Omega}\left(\begin{array}{l} \tilde{\sigma}_{x} \frac{\partial w_{1}}{\partial x}+\tilde{\tau}_{x y} \frac{\partial w_{1}}{\partial y} \\ \tilde{\tau}_{x y} \frac{\partial w_{2}}{\partial x}+\tilde{\sigma}_{y} \frac{\partial w_{2}}{\partial y} \end{array}\right) \mathrm{d} \Omega = 0. \end{equation} (3.15)

    After that,

\begin{equation} \int_{\Omega}\left(\begin{array}{c} \sigma_{x}^{1} \frac{\partial w_{1}}{\partial x}+\tau_{x y}^{1} \frac{\partial w_{1}}{\partial y} \\ \tau_{x y}^{1} \frac{\partial w_{2}}{\partial x}+\sigma_{y}^{1} \frac{\partial w_{2}}{\partial y} \end{array}\right) \mathrm{d} \Omega = -\int_{\Omega}\left(\begin{array}{c} \tilde{\sigma}_{x} \frac{\partial w_{1}}{\partial x}+\tilde{\tau}_{x y} \frac{\partial w_{1}}{\partial y} \\ \tilde{\tau}_{x y} \frac{\partial w_{2}}{\partial x}+\tilde{\sigma}_{y} \frac{\partial w_{2}}{\partial y} \end{array}\right) \mathrm{d} \Omega. \end{equation} (3.16)

    Using Eqs (3.12) and (3.13), the Eq (3.16) can be rewritten in the following manner:

    \begin{equation} \int_{\Omega}\left(\begin{array}{ccc} \frac{\partial w_{1}}{\partial x} & 0 & \frac{\partial w_{1}}{\partial y} \\ 0 & \frac{\partial w_{2}}{\partial y} & \frac{\partial w_{2}}{\partial x} \end{array}\right) \mathcal{E} D_{\sigma}\left(\begin{array}{c} \frac{\partial u^{1}}{\partial x} \\ \frac{\partial v^{1}}{\partial y} \\ \frac{\partial u^{1}}{\partial y}+\frac{\partial v^{1}}{\partial x} \end{array}\right) \mathrm{d} \Omega = -\int_{\Omega}\left(\begin{array}{ccc} \frac{\partial w_{1}}{\partial x} & 0 & \frac{\partial w_{1}}{\partial y} \\ 0 & \frac{\partial w_{2}}{\partial y} & \frac{\partial w_{2}}{\partial x} \end{array}\right) \delta \mathcal{E} D_{\sigma}\left(\begin{array}{c} \frac{\partial u}{\partial x} \\ \frac{\partial v}{\partial y} \\ \frac{\partial u}{\partial y}+\frac{\partial v}{\partial x} \end{array}\right) \mathrm{d} \Omega. \end{equation} (3.17)

    where D_{\sigma} refers to the material property of the applied plane stress condition.

    The Lagrangian multipliers \mathcal{L} are introduced first in this section. Consider the following problem:

    \begin{equation} \left\{\begin{array}{ll} &\min _{\mathcal{E} \in \mathrm{K}} \mathcal{J}(u, v, \mathcal{E}), \\ \ \\ {\rm { subject\ to }}& c_{x}(u, v, \mathcal{E}) = 0 \quad {\rm { in }}\; \Omega, \\ &c_{y}(u, v, \mathcal{E}) = 0 \quad {\rm { in }}\; \Omega, \\ &B \cdot C_{c_{x}} = 0 \quad {\rm { on }}\; \Gamma_{i}, \\ &B \cdot C_{c_{y}} = 0 \quad {\rm { on }}\; \Gamma_{i}. \end{array}\right. \end{equation} (3.18)

The state equations in the x and y directions are represented by c_x and c_y , respectively. Correspondingly, the Neumann boundary conditions on \Gamma_i associated with c_x and c_y are denoted by B.C_{c_x} and B.C_{c_y} , respectively. The Lagrangian may be written as:

    \begin{equation} \begin{aligned} \mathcal{L}\left(u, v, \mathcal{E}, \lambda_{u}, \lambda_{v}\right)& = \mathcal{J}(u, v, \mathcal{E})+\int_{\Omega} \lambda_{u} c_{x}(u, v, \mathcal{E}) \mathrm{d} \Omega+\int_{\Omega} \lambda_{v} c_{y}(u, v, \mathcal{E}) \mathrm{d} \Omega \\ &-\int_{\Gamma_{i}}\left(\lambda_{u} B \cdot C_{c_{x}}+\lambda_{v} B \cdot C_{c_{y}}\right) \mathrm{d} \Gamma_{i}. \end{aligned} \end{equation} (3.19)

    The Lagrange multiplier is represented with (\lambda_u, \lambda_v) . To find the adjoint equation, the differential of \mathcal{L} must be computed:

    \begin{equation} \begin{aligned} \frac{\partial \mathcal{L}}{\partial u} \delta u& = \frac{\partial \mathcal{J}}{\partial u} \delta u+\int_{\Omega} \lambda_{u} \frac{\partial}{\partial u}\left(\frac{\partial \sigma_{x}}{\partial x}+\frac{\partial \tau_{x y}}{\partial y}+f_{x}-\left(\sigma_{x} n_{x}+\tau_{x y} n_{y}\right)\right) \delta u \mathrm{\; d} \Omega \\ &+\int_{\Omega} \lambda_{v} \frac{\partial}{\partial u}\left(\frac{\partial \tau_{x y}}{\partial x}+\frac{\partial \sigma_{y}}{\partial y}+f_{y}-\left(\tau_{x y} n_{x}+\sigma_{y} n_{y}\right)\right) \delta u \mathrm{\; d} \Omega = 0, \end{aligned} \end{equation} (3.20)

    and

    \begin{equation} \begin{aligned} \frac{\partial \mathcal{L}}{\partial v} \delta v& = \frac{\partial \mathcal{J}}{\partial v} \delta v+\int_{\Omega} \lambda_{u} \frac{\partial}{\partial v}\left(\frac{\partial \sigma_{x}}{\partial x}+\frac{\partial \tau_{x y}}{\partial y}+f_{x}-\left(\sigma_{x} n_{x}+\tau_{x y} n_{y}\right)\right) \delta v \mathrm{\; d} \Omega \\ &+\int_{\Omega} \lambda_{v} \frac{\partial}{\partial v}\left(\frac{\partial \tau_{x y}}{\partial x}+\frac{\partial \sigma_{y}}{\partial y}+f_{y}-\left(\tau_{x y} n_{x}+\sigma_{y} n_{y}\right)\right) \delta v \mathrm{\; d} \Omega = 0. \end{aligned} \end{equation} (3.21)

By subtracting Eq (3.21) from Eq (3.20) and utilizing integration by parts twice, we obtain the adjoint problem:

\begin{equation} {(\mathcal{AP}):} \left\{\begin{array}{ll} \frac{\partial \sigma_{x}^{*}}{\partial x}+\frac{\partial \tau_{x y}^{*}}{\partial y} = -\rho_{1}\left[\frac{\partial}{\partial x}\left(\mathcal{E}\left(\frac{\partial\left(u-z_{u}\right)}{\partial x}+\nu \frac{\partial\left(v-z_{v}\right)}{\partial y}\right)\right)\right. \left.+\rho_{2} \frac{\partial}{\partial y}\left(\mathcal{E}\left(\frac{\partial\left(u-z_{u}\right)}{\partial y}+\frac{\partial\left(v-z_{v}\right)}{\partial x}\right)\right)\right] & {\rm { in }}\; \Omega, \\ \ \\ \frac{\partial \tau_{x y}^{*}}{\partial x}+\frac{\partial \sigma_{y}^{*}}{\partial y} = -\rho_{1}\left[\rho_{2} \frac{\partial}{\partial x}\left(\mathcal{E}\left(\frac{\partial\left(u-z_{u}\right)}{\partial y}+\frac{\partial\left(v-z_{v}\right)}{\partial x}\right)\right)\right. \left.+\frac{\partial}{\partial y}\left(\mathcal{E}\left(\nu \frac{\partial\left(u-z_{u}\right)}{\partial x}+\frac{\partial\left(v-z_{v}\right)}{\partial y}\right)\right)\right] & {\rm { in }}\; \Omega, \\ \ \\ \sigma_{x}^{*} n_{x}+\tau_{x y}^{*} n_{y} = 0 & {\rm { on }}\; \Gamma_i, \\ \ \\ \tau_{x y}^{*} n_{x}+\sigma_{y}^{*} n_{y} = 0 & {\rm { on }}\; \Gamma_{i}, \\ \ \\ \lambda_{u} = \lambda_{v} = \lambda_{0} & {\rm { on }}\; \Gamma_{c}. \end{array}\right. \end{equation} (3.22)

    where (u, v) solves the original forward problem of elasticity (2.8) and \lambda_0 is the initial condition. The relationship between stresses \sigma^* and the adjoint solution can be shown as follows:

    \begin{equation} \left(\begin{array}{c} \sigma_{x}^{*} \\ \ \\ \sigma_{y}^{*} \\ \ \\ \tau_{x y}^{*} \end{array}\right) = \mathcal{E} D_{\sigma}\left(\begin{array}{c} \dfrac{\partial \lambda_{u}}{\partial x} \\ \dfrac{\partial \lambda_{v}}{\partial y} \\ \dfrac{\partial \lambda_{u}}{\partial y}+\dfrac{\partial \lambda_{v}}{\partial x} \end{array}\right). \end{equation} (3.23)

Using the boundary conditions in Eq (3.22) and the relation Eq (3.23), we obtain the variational formulation of Eq (3.22):

\begin{equation} \begin{aligned} &\int_{\Omega}\left(\begin{array}{ccc} \frac{\partial w_{1}}{\partial x} & 0 & \frac{\partial w_{1}}{\partial y} \\ 0 & \frac{\partial w_{2}}{\partial y} & \frac{\partial w_{2}}{\partial x} \end{array}\right) \mathcal{E} D_{\sigma}\left(\begin{array}{c} \frac{\partial \lambda_{u}}{\partial x} \\ \frac{\partial \lambda_{v}}{\partial y} \\ \frac{\partial \lambda_{u}}{\partial y}+\frac{\partial \lambda_{v}}{\partial x} \end{array}\right) \mathrm{d} \Omega \\ & = -\int_{\Omega}\left(\begin{array}{l} \rho_{1} \mathcal{E}\left(\frac{\partial\left(u-z_{u}\right)}{\partial x}+\nu \frac{\partial\left(v-z_{v}\right)}{\partial y}\right) \frac{\partial w_{1}}{\partial x} +\rho_{1} \rho_{2} \mathcal{E}\left(\frac{\partial\left(u-z_{u}\right)}{\partial y}+\frac{\partial\left(v-z_{v}\right)}{\partial x}\right) \frac{\partial w_{1}}{\partial y} \\ \rho_{1} \rho_{2} \mathcal{E}\left(\frac{\partial\left(u-z_{u}\right)}{\partial y}+\frac{\partial\left(v-z_{v}\right)}{\partial x}\right) \frac{\partial w_{2}}{\partial x} +\rho_{1} \mathcal{E}\left(\nu \frac{\partial\left(u-z_{u}\right)}{\partial x}+\frac{\partial\left(v-z_{v}\right)}{\partial y}\right) \frac{\partial w_{2}}{\partial y} \end{array}\right) \mathrm{d} \Omega. \end{aligned} \end{equation} (3.24)

    This section explains how to use the LMM to solve the proposed optimization inverse problem. Table 1 introduces the specifications of different algorithms compared with the LMM in terms of the convergence and computation complexity.

    Table 1.  Specifications of different algorithms.
    Algorithms Convergence Computation Complexity
    Gradient descent Stable, slow Gradient
    Newton Unstable, fast Gradient and Hessian
    Gauss-Newton Unstable, fast Jacobian
    Levenberg-Marquardt Stable, fast Jacobian


    According to the study done above, the different steps for computing the Young's modulus \mathcal{E}(x, y) are given by the following algorithm:

    Algorithm 1:
    1. Initialization
    (1) Choose the initial value \mathcal{E}^0 .
    (2) Choose the observation data z^{\eta} = (z_u, z_v) and its noise order \eta > 0 .
    (3) Choose a tolerance parameter \epsilon > 0 .
    (4) Set k: = 0 .
    2. Solve
    (1) Forward problem (2.8) with \mathcal{E} = \mathcal{E}^k .
    (2) Sensitivity problem (3.11) with \mathcal{E} = \mathcal{E}^k .
    (3) Adjoint problem (3.22) with \mathcal{E} = \mathcal{E}^k .
    3. Calculate: Using the current value of \mathcal{E}^k
            \beta_k = \Vert \boldsymbol{u}(\mathcal{E}^k)- z^{\eta} \Vert^2_{L^2(\Omega)}
    4. Compute the descent direction d^k :
            \begin{equation*} \left[\boldsymbol{u}^{\prime}\left(\mathcal{E}^k\right)^{\star} \boldsymbol{u}^{\prime}\left(\mathcal{E}^k\right)+\beta_{k} I\right] d_{k} = -\boldsymbol{u}^{\prime}\left(\mathcal{E}^k\right)^{\star}\left[\boldsymbol{u}\left(\mathcal{E}^k\right)-z^{\eta}\right] \end{equation*}
    5. Update the identification parameter \mathcal{E}^{k+1} :
             \begin{equation*} \mathcal{E}^{k+1}: = \mathcal{E}^k+d^k. \end{equation*}
    6. If \quad \dfrac{\parallel \mathcal{E}^{k+1}-\mathcal{E}^k\parallel^{2}_{L^{2}(\Omega)}}{\parallel \mathcal{E}^k\parallel^{2}_{L^{2}(\Omega)}}\leq\epsilon , then stop; otherwise set k: = k+1 and go to Step 2.
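The following Python skeleton mirrors Algorithm 1. The paper performs the forward, sensitivity and adjoint solves with FreeFem++; here they are replaced by a linear stand-in operator so that the sketch is runnable, and the names solve_forward, solve_sensitivity and solve_adjoint are illustrative placeholders rather than the authors' code.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Stand-ins for the FEM solves (done with FreeFem++ in the paper): here the
# forward map is a fixed matrix K so that the sketch runs end to end.  In the
# real problem, solve_forward solves (2.8), solve_sensitivity applies u'(E)
# via (3.11), and solve_adjoint applies u'(E)^* via (3.22).
rng = np.random.default_rng(0)
K = rng.standard_normal((80, 30))

def solve_forward(E):
    return K @ E

def solve_sensitivity(E, dE):
    return K @ dE                       # action of u'(E) on a perturbation dE

def solve_adjoint(E, r):
    return K.T @ r                      # action of u'(E)^* on a residual r

def lm_reconstruction(E0, z_eta, tol=1e-10, max_iter=100):
    """Schematic Levenberg-Marquardt loop following Algorithm 1 (a sketch)."""
    E = E0.copy()
    for k in range(max_iter):
        r = solve_forward(E) - z_eta                    # u(E^k) - z^eta
        beta = float(r @ r)                             # beta_k = ||u(E^k) - z^eta||^2
        normal = LinearOperator(                        # [u'(E)^* u'(E) + beta_k I]
            (E.size, E.size),
            matvec=lambda d: solve_adjoint(E, solve_sensitivity(E, d)) + beta * d)
        d, _ = cg(normal, -solve_adjoint(E, r))         # descent direction d^k
        E_new = E + d                                   # E^{k+1} = E^k + d^k
        if np.linalg.norm(E_new - E) ** 2 / np.linalg.norm(E) ** 2 <= tol:
            return E_new, k
        E = E_new
    return E, max_iter

E_true = np.linspace(1.0, 2.0, 30)
z = solve_forward(E_true) * (1.0 + 0.01 * rng.uniform(-1.0, 1.0, 80))   # Eq (5.1)
E_rec, iters = lm_reconstruction(np.full(30, 1.5), z)
print(iters, np.linalg.norm(E_rec - E_true) / np.linalg.norm(E_true))
```

In the actual reconstruction, the three placeholder solvers would be replaced by calls to the FreeFem++ scripts for problems (2.8), (3.11) and (3.22).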

The Young's modulus \mathcal{E}(x, y) can be reconstructed using this process. At each iteration, with \mathcal{E}(x, y) = \mathcal{E}(x, y)^k , the algorithm requires solving the forward, sensitivity and adjoint problems, computing the regularization parameter \beta_k and the descent direction d^k , and updating the identification parameter \mathcal{E} ; finally, we verify the stopping criterion: the estimated parameter must be accurate enough or satisfy the discrepancy principle.

Let us consider the two-dimensional square domain [0, 1] \times [0, 1] with \Gamma_c = [0, 1]\times\{y = 0\} and \Gamma_i the remaining part of the boundary \partial\Omega . In order to identify a smooth \mathcal{E}(x, y) , we give numerical examples to demonstrate our study's effectiveness. All calculations are made using FreeFem++ 4.6 [31] on an Intel(R) Core(TM) i7-4510U CPU @ 2.00 GHz (2601 MHz, 2 cores, 4 processors) with 8 GB of memory. We generate data z^{\eta} = (z_u, z_v)\in L^2(\Omega) experimentally by:

    \begin{equation} z^{\eta} = {\bf u}(x, y)(1+\eta \xi) \quad {\rm { on }}\; \Omega, \end{equation} (5.1)

where \eta is the amount of noise and \xi is a uniformly distributed random variable in [-1, 1] , generated using the FreeFem function Rand1( \cdot ). The exact solution is synthetically generated for the elasticity problem:

    \begin{equation} \boldsymbol{u}(x, y) = (u, v) = (x y, x y). \end{equation} (5.2)

The body forces \boldsymbol{f} = (f_x, f_y) and the traction \boldsymbol{q} = (q_x, q_y) applied on the boundary \Gamma_i can then be derived directly.
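One way to carry out this derivation symbolically, assuming the plane-stress law (2.6)-(2.7), is sketched below with SymPy; it illustrates the manufactured-solution idea and is not the authors' script.

```python
import sympy as sp

x, y, nu = sp.symbols('x y nu')
E = sp.Function('E')(x, y)                 # Young's modulus kept symbolic
u, v = x*y, x*y                            # manufactured displacement, Eq (5.2)

rho1, rho2 = 1/(1 - nu**2), (1 - nu)/2
eps_x, eps_y, gamma_xy = sp.diff(u, x), sp.diff(v, y), sp.diff(u, y) + sp.diff(v, x)
sigma_x = E * rho1 * (eps_x + nu * eps_y)          # plane-stress law (2.6)-(2.7)
sigma_y = E * rho1 * (nu * eps_x + eps_y)
tau_xy = E * rho1 * rho2 * gamma_xy

# body forces from the equilibrium Eqs (2.3)-(2.4): f = -(div sigma)
f_x = -sp.simplify(sp.diff(sigma_x, x) + sp.diff(tau_xy, y))
f_y = -sp.simplify(sp.diff(tau_xy, x) + sp.diff(sigma_y, y))
print(f_x)
print(f_y)
```

The tractions on \Gamma_i follow in the same way from \sigma n evaluated on each edge.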

    Due to changes in sample composition and test technique, Young's modulus might vary somewhat. The constant values presented here [32,33,34,35] are approximated and solely intended for relative comparison.

    Example 1. We consider a constant Young's modulus for various materials such as Aluminum, Titanium, \dots, etc.

    Table 2 presents the reconstruction of \mathcal{E} at a number of elements \mathcal{NE} = 1800 , \eta = 1\% for different materials at constant initial guesses. The obtained results show the rapid convergence and the precision of the introduced procedure.

    Table 2.  The identification of Young's modulus for various materials.
    Material \mathcal{E}_{Exact} \mathcal{E}_{0} \mathcal{E}_{Calculated} k e_{\mathcal{E}}
    Aluminium (_{13}Al) 68 70.25 67.9904 25 0.0096133
    Titanium (_{22}Ti) 116 115 116.011 13 0.0109185
    Bronze 112 113 112.250 13 0.0066388
    Zinc (_{30}Zn) 108 110.25 108.007 25 0.0074127
    Nylon (66) 2.93 2.5 2.92746 6 0.0025417


Figure 2 shows the convergence speed of the proposed LMM at different levels of noise \eta in the measurement data for the Titanium material with initial guess \mathcal{E}_0 = 115 . Figure 3 introduces the convergence speed at \eta = 1\% and \eta = 2\% for the Nylon material with initial guess \mathcal{E}_0 = 2.5 . It is observed that the best convergence speed is obtained for a low level of noise in the data. At a large level of noise, the proposed algorithm takes more than 50 iterations to reach the optimal solution of the reconstructed parameter. Table 3 presents the decline of the relative error e_\mathcal{E} until convergence occurs at the 13th iteration.

    Figure 2.  Convergence speed for \mathcal{E}_0 = 115 (Titanium) with different noise order \eta .
    Figure 3.  Convergence speed for \mathcal{E}_0 = 2.5 (Nylon) with \eta = 1\% and \eta = 2\% .
    Table 3.  The relative error with \eta = 0.01 and \mathcal{E}^{0} = 115 for Example 1 (Titanium).
    k e_\mathcal{E} k e_\mathcal{E} k e_{\mathcal{E}}
    0 1 5 0.729589 10 0.311645
    1 0.925356 6 0.662241 11 0.116378
    2 0.857706 7 0.595521 12 0.042788
    3 0.809445 8 0.553536 13 0.010918
    4 0.768314 9 0.493262


Example 2. The exact Young's modulus to be identified is defined by:

    \begin{equation} \mathcal{E}(x, y) = 3 + 0.05 e^{(x+y)^{2}} \quad { { in }}\; \Omega. \end{equation} (5.3)

    In this Example the initial guess is given by \mathcal{E}^{0} = 3.3 .

Table 4 presents the variation of the relative error e_{\mathcal{E}} with respect to the noise level in the data. We notice that as the amount of noise \eta rises, e_{\mathcal{E}} rises as well, at the chosen \mathcal{NE} = 1800 . However, this relative error remains acceptable up to \eta = 1.5\% . We also see from this table that the solution of the inverse problem fails for \eta > 2.75\% . Figure 4 shows the variation of the residual error (left) and the relative error (right) with respect to the number of iterations k . We remark that these errors decrease rapidly up to k = 10 and then remain almost constant and independent of k .

    Table 4.  The relative error with varying amounts of noise \eta in the data Example 2.
    \mathcal{N} \mathcal{E} \eta(\%) k e_{\mathcal{E}} \mathcal{N} \mathcal{E} \eta(\%) k e_{\mathcal{E}}
    1800 0.25 19 0.0012 1800 1.75 09 0.011
    1800 0.50 10 0.0031 1800 2.00 05 0.019
    1800 0.75 09 0.0044 1800 2.25 03 0.041
    1800 1.00 07 0.0058 1800 2.50 06 0.053
    1800 1.25 09 0.0070 1800 2.75 03 0.068
    1800 1.50 09 0.0074 1800 > 2.75 fail! fail!

    Figure 4.  Residual (left) and relative error (right) corresponding to k for Example 2.

    Figures 5(a), (b) and 6 show the exact (left) and reconstructed (middle) \mathcal{E}^k for Example 2 in 2D and 3D at \eta = 0.5\% and \mathcal{NE} = 20,000 . The obtained results are satisfactory at k = 9 , where the relative error e_\mathcal{E} = 0.0032 . Figures 5(c) and 6 (right) show the residual error of the coefficient \mathcal{E}^k . It is observed that the relative error grows at a part of the boundary of the problem domain. We have also examined the procedure for different values of \mathcal{NE} . Figures 7 and 8 show the exact (left) and reconstructed (middle) \mathcal{E}^k and residual error (right) at \eta = 0.5\% for \mathcal{NE} = 9800 and 1800 respectively. Figure 9 represents the variations and sensitivity of the measured data z^{\eta} with respect to the noise level.

    Figure 5.  2D view of the exact \mathcal{E} (left), reconstructed \mathcal{E}^k (middle) and the residual error (right) for Example 2 at \eta = 0.5\% and \mathcal{NE} = 20,000 , the relative error e_\mathcal{E} = 0.0032 at k = 9 .
    Figure 6.  3D view of the exact \mathcal{E} (left), reconstructed \mathcal{E}^k (middle)and the residual error (right) for Example 2 at \eta = 0.5\% and \mathcal{NE} = 20,000 , the relative error e_\mathcal{E} = 0.0032 at k = 9 .
    Figure 7.  3D view of the exact \mathcal{E} (left), reconstructed \mathcal{E}^k (middle) and the residual error (right) for Example 2 at \eta = 0.5\% and \mathcal{NE} = 9800 .
    Figure 8.  3D view of the exact \mathcal{E} (left), reconstructed \mathcal{E}^k (middle) and the residual error (right) for Example 2 at \eta = 0.5\% and \mathcal{NE} = 1800 .
    Figure 9.  Measurement data at different \eta for Example 2.

    Example 3. The exact coefficient is given by:

    \begin{equation} \mathcal{E}(x, y) = 1+\frac{0.5}{1+\mathrm{e}^{50\left((0.6-x)^{2}+(0.3-y)^{2}\right)-3}}+\frac{0.3}{1+\mathrm{e}^{100\left((0.4-x)^{2}+(0.75-y)^{2}\right)-3}} \quad \quad{ { in }} \;\Omega, \end{equation} (5.4)

    with initial guess \mathcal{E}^0 = 1.1.

    Table 5 shows the numerical convergence for \mathcal{E}^k with respect to the number of iterations k . We observe that the relative and the residual errors drop rapidly until the 8th iteration. Then they decline slowly as shown in Figure 10. Figure 11 depicts the exact (left), reconstructed (middle) \mathcal{E}^k and residual error (right) for \eta = 0.5\% at iterations k = 1, 2, 4, 6\; {\rm{ and }}\; 8 .

    Table 5.  The relative error at \eta = 0.5\% for Example 3.
    k e_\mathcal{E} k e_\mathcal{E} k e_{\mathcal{E}}
    1 0.103244 7 0.019356 13 0.003495
    2 0.078826 8 0.010968 14 0.003495
    3 0.050282 9 0.006122 15 0.003693
    4 0.040450 10 0.004676 16 0.002940
    5 0.031286 11 0.003857 17 0.002623
    6 0.025083 12 0.002983 18 0.002521

    Figure 10.  The relation between the residual (left) and relative error (right) corresponding to k for Example 3.
    Figure 11.  The 2D view of the exact (left), reconstructed (middle) \mathcal{E} and residual error (right) at \eta = 0.5\% for Example 3.

Figures 12(a), (b) and 13 show the exact (left) and reconstructed (middle) \mathcal{E}^k for Example 3 in 2D and 3D at \eta = 0.5\% and \mathcal{NE} = 20,000 . It is observed that the reconstructed modulus of elasticity \mathcal{E}^k has a very satisfying relative error ( e_\mathcal{E} = 0.0025 ) at k = 18 . Figures 12(c) and 13 (right) show the residual error of \mathcal{E}^k for Example 3 in 2D and 3D. Jadamba et al. [36] have implemented both first- and second-order adjoint methods and the Newton method with a simple backtracking line search for solving Example 3. They found that a good precision is achieved at the 139th iteration (for the first-order adjoint method) and at the 49th iteration (for the second-order adjoint method), after which the errors decrease very slowly with increasing k . These results are in good accord with the literature. Furthermore, Abdelhamid et al. [23] implemented the nonlinear conjugate gradient method for solving this example, observing that during the first 25 iterations the relative and residual errors decrease rapidly with increasing k , after which they decrease very slowly to reach the minimizer at k = 34 with relative error e_\mathcal{E} = 0.0128 . It is found that the LMM gives better results in a smaller number of iterations k .

    Figure 12.  2D view of the exact \mathcal{E} (left), reconstructed \mathcal{E}^k (middle) and the residual error of the coefficient \mathcal{E}^k (right) for Example 3 at \eta = 0.5\% and \mathcal{NE} = 20,000, the relative error e_\mathcal{E} = 0.0025 at k = 18.
    Figure 13.  3D view of the exact \mathcal{E} (left), reconstructed \mathcal{E}^k (middle) and the residual error of the coefficient \mathcal{E}^k (right) for Example 3 at \eta = 0.5\% and \mathcal{NE} = 20,000, the relative error e_\mathcal{E} = 0.0025 at k = 18.

    Example 4. The modulus of elasticity coefficient \mathcal{E}(x, y) , body force function \boldsymbol f(x, y) and boundary conditions are as follows:

    \begin{equation} \begin{aligned} &\mathcal{E}(x, y) = 1-\frac{1}{2} \operatorname{sinc}\left[6 \pi\left(x+\frac{1}{10}\right)\left(y+\frac{1}{10}\right)\right], \quad \boldsymbol f(x, y) = \left[\begin{array}{c} -\frac{1}{5} x \\ \cos (\pi x) \end{array}\right]\; { { in }}\;\Omega.\\ &g(x, y) = \frac{1}{10}\left[\begin{array}{l} \sin (\pi y) \\ \sin (\pi x) \end{array}\right] \quad {{on}}\; \; \Gamma_1 \;{{ and }}\;\quad h(x, y) = \frac{1}{10}\left[\begin{array}{l} 1+10 x \\ 1+10 y \end{array}\right] { { on }}\; \Gamma_2, \end{aligned} \end{equation} (5.5)

    where \partial\Omega = \Gamma_1\cup\Gamma_2 , \Gamma_1 represents the bottom and left sides of the border and \Gamma_2 represents the top and the right edges.

Remark 2. The \operatorname{sinc} function, also called the "sampling function", is defined as follows:

    \begin{equation*} \operatorname{sinc}(x) = \left\{\begin{array}{ll} 1 & {for} \quad x = 0, \\ \dfrac{\sin x}{x} & {Otherwise}. \end{array}\right. \end{equation*}
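When evaluating \mathcal{E}(x, y) of Eq (5.5) numerically, note that the sinc of Remark 2 is the unnormalized \sin(x)/x , whereas numpy.sinc is the normalized \sin(\pi x)/(\pi x) ; a minimal sketch, with grid values chosen only for illustration:

```python
import numpy as np

def sinc_unnormalized(x):
    """sin(x)/x as in Remark 2; np.sinc is normalized, so sin(x)/x = np.sinc(x/pi)."""
    return np.sinc(x / np.pi)

def young_modulus(x, y):
    """E(x, y) of Eq (5.5)."""
    return 1.0 - 0.5 * sinc_unnormalized(6.0 * np.pi * (x + 0.1) * (y + 0.1))

xx, yy = np.meshgrid(np.linspace(0.0, 1.0, 5), np.linspace(0.0, 1.0, 5))
print(young_modulus(xx, yy))
```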

For this example, the suggested method is tested with \eta = 0.5\% and \mathcal{NE} = 20,000 . Table 6 shows that the relative error e_\mathcal{E} slowly decreases with increasing k . We reach the minimizer and the stopping criterion is satisfied at k = 19 . Figure 14 shows that the relative error e_\mathcal{E} reduces rapidly as the number of iterations k increases for the first 10 iterations and then slowly approaches the minimizer.

    Table 6.  The relative error at \eta = 0.5\% for Example 4.
    k e_\mathcal{E} k e_\mathcal{E} k e_{\mathcal{E}} k e_{\mathcal{E}}
    1 0.14524 6 0.06519 11 0.02621 16 0.00397
    2 0.13081 7 0.05603 12 0.01979 17 0.00591
    3 0.11370 8 0.04337 13 0.17039 18 0.00287
    4 0.09535 9 0.03191 14 0.00910 19 0.00278
    5 0.07731 10 0.02882 15 0.00701

    Figure 14.  Variation of the residual (left) and the relative error (right) corresponding to k for Example 4.

Figures 15(a), (b) and 16 represent the exact (left) and the reconstructed (middle) \mathcal{E}^k for Example 4 in 2D and 3D, respectively, for \eta = 0.5\% and \mathcal{NE} = 20,000 . Figure 17 shows the same representation for \mathcal{NE} = 9800 . The proposed method yields reasonable results at k = 19 , for which the relative error is e_\mathcal{E} = 0.0028 . Figures 15(c) and 16 (right) introduce the residual error for Example 4 in 2D and 3D. It is observed that e_\mathcal{E} grows at the corners of the boundary due to the influence of the gradient computations.

    Figure 15.  2D view of the exact, reconstructed \mathcal{E}^k and residual error at k = 19 for \eta = 0.5\% and \mathcal{NE} = 20,000 , e_{\mathcal{E}} = 0.0028 for Example 4.
    Figure 16.  3D view of the exact, reconstructed \mathcal{E}^k and residual error at k = 19 for \eta = 0.5\% and \mathcal{NE} = 20,000 , e_{\mathcal{E}} = 0.0028 for Example 4.
    Figure 17.  3D view of the exact \mathcal{E} (left), reconstructed \mathcal{E}^k (middle) and the residual error of the coefficient \mathcal{E}^k (right) at \eta = 0.5\% and \mathcal{NE} = 9800 for Example 4.

Figure 18 gives a graphical representation of the exact and the reconstructed \mathcal{E}^k and the residual error at k = 5, 12, 15\; {\rm{ and }}\; 17 ; the method converges rapidly to the solution in the first 10 iterations, then gives a good reconstruction \mathcal{E}^k at the 19th iteration, as shown in Figures 15–17 and Table 6. Table 7 introduces a comparison of the relative error e_\mathcal{E} , the residual error E and the number of iterations k of the present work with those of Abdelhamid et al. [23].

    Figure 18.  3D view of the exact (left), reconstructed (middle) \mathcal{E}^k and residual error (right) at \eta = 0.5\% for Example 4.
    Table 7.  Comparison for Example 4.
    k Relative error e_\mathcal{E} Residual error E
    Abdelhamid et al. [23], \mathcal{NE}=12,800 67 0.038095 0.036702
    Present work, \mathcal{NE}=9800 23 0.003532 0.005026


In this paper we develop a theoretical framework for the inverse problem of identifying the modulus of elasticity {\mathcal{E}} from measurement data on the boundary. The inverse problem is discretized using the finite element approach, and the optimization problem of reconstructing the elastic modulus (Young's modulus) in the elasticity imaging inverse problem is formulated. The identification parameter is defined on the domain and can be identified from given measurement data on some parts of the boundary. The LMM is used to treat this ill-posed inverse problem, and the non-convex minimization is changed into a convex one. The mathematical formulation of the forward problem of elasticity is introduced in the 2D plane, where \Omega \subset \mathbb{R}^{d}, d = 2, 3 . The descent direction d^{k} is obtained from the solutions of the sensitivity and adjoint equations. The results obtained for the identification of constant and variable parameters for various real materials are satisfactory and show the accuracy and efficiency of the proposed algorithm. The 2D and 3D views of the reconstructed Young's modulus are compared with the exact one. The proposed algorithm is implemented at different levels of noise and shows efficient and accurate results up to \eta = 2.75\% .

    The work of Talaat Abdelhamid is supported by Science, Technology & Innovation Funding Authority (STDF) under grant number 39385.

    The authors declare that there is no conflict of interests regarding the publication of this paper.



  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)