Research article

A swarm intelligence-based ensemble learning model for optimizing customer churn prediction in the telecommunications sector

  • Received: 08 October 2023 Revised: 24 December 2023 Accepted: 26 December 2023 Published: 29 December 2023
  • In today's competitive market, predicting clients' behavior is crucial for businesses to meet their needs and prevent them from being attracted by competitors. This is especially important in industries like telecommunications, where the cost of acquiring new customers exceeds retaining existing ones. To achieve this, companies employ Customer Churn Prediction approaches to identify potential customer attrition and develop retention plans. Machine learning models are highly effective in identifying such customers; however, there is a need for more effective techniques to handle class imbalance in churn datasets and enhance prediction accuracy in complex churn prediction datasets. To address these challenges, we propose a novel two-level stacking-mode ensemble learning model that utilizes the Whale Optimization Algorithm for feature selection and hyper-parameter optimization. We also introduce a method combining K-member clustering and Whale Optimization to effectively handle class imbalance in churn datasets. Extensive experiments conducted on well-known datasets, along with comparisons to other machine learning models and existing churn prediction methods, demonstrate the superiority of the proposed approach.

    Citation: Bijan Moradi, Mehran Khalaj, Ali Taghizadeh Herat, Asghar Darigh, Alireza Tamjid Yamcholo. A swarm intelligence-based ensemble learning model for optimizing customer churn prediction in the telecommunications sector[J]. AIMS Mathematics, 2024, 9(2): 2781-2807. doi: 10.3934/math.2024138




    The Stokes equation [1] is a fundamental tool used to describe viscous fluid flow [2] at low Reynolds numbers (Re) [3], which typically indicate laminar flow conditions [4]. Re characterizes the ratio of inertial forces to viscous forces in fluid dynamics. When Re is very small, the characteristic velocity of the flow can be considered to approach zero. In this limit, the quadratic terms involving velocity in the Navier-Stokes equations become negligible. Consequently, the Navier-Stokes equations simplify to the Stokes equations, which helps to analyze more complicated fluid problems and has a very wide range of applications [5,6]. With the development of computer science, many numerical methods have been developed to solve Stokes problems, such as the finite element method (FEM) [7,8,9], the finite difference method [10], the mixed FEM [11], the boundary element method [12,13], and the coupling of FEM [14]. Among them, the FEM has gradually become an important numerical computational method for approximating partial differential equations (PDEs) because of its many advantages, such as strong program versatility, high accuracy, flexible mesh selection, and the ability to deal with complex boundaries and high-order problems.

    The FEM, as an important numerical method for solving mathematical [15] and physical problems [16], has been widely applied in the field of engineering mechanics [17]. In 1943, Courant [18] introduced the concept of FEM by using continuous functions on triangular regions to solve the torsion problem of St. Venant. By the mid-1960s, Feng [19,20] had independently established the mathematical foundation of FEM, making it a systematic and widely used numerical method. Since then, the scope of FEM's application has expanded from single structural analysis to various fields such as sound field analysis, flow field analysis, and electromagnetic field analysis. Based on the variational principle and subdivision interpolation, FEM uses interpolation functions in each element to approximate the unknown function in the domain piece by piece.

    With the continuous progress of computer science, the FEM has undergone remarkable developments and improvements. Many new computational methods have emerged, including the finite volume method [21], upwind FEM [22], and spectral methods [23]. These new methods not only enrich the technical means of numerical simulation but also play a crucial role in improving computational efficiency and enhancing model accuracy. In the FEM, the selection of finite element basis functions is key, and appropriate test and trial function spaces ensure the accuracy and stability of the solution. Common choices include Lagrange functions [24], Hermite functions [25], Argyris functions [26], and Bernstein functions [27]. In 1979, Shi [28] used cubic B-spline variational methods to solve equilibrium problems in composite elastic structures of plates and beams in regular domains, introducing spline FEM. In the same year, Qin [29] proposed the spline finite point method based on spline functions, beam vibration functions, and energy variation. In 2005, Hughes et al. [30] used spline basis functions for approximate numerical calculations of field variables in physical problems in finite element analysis. In 2007, Bhatti and Bracken [27] proposed applying Bernstein polynomial bases to solve PDEs. Zhu and Kang [31], in 2010, used cubic B-spline quasi-interpolation to numerically solve the Burgers-Fisher equation. Dutykh [32], in 2014, solved the KdV and KdV-BBM equations using B-spline FEM. More recently in 2022, Pranta [33] solved 2D nonlinear parabolic PDEs using bicubic Bernstein polynomial bases. These developments highlight the ongoing evolution and versatility of FEM in addressing complex engineering and scientific challenges.

    Lagrange interpolation functions are typically global, offering high accuracy in certain scenarios but potentially leading to numerical instability, especially with high-degree polynomials or complex boundary conditions. Although the Runge phenomenon [34] is less pronounced in FEM due to the integral approximation approach, it can still occur in specific cases, particularly with high-order polynomials or complex boundaries. Conversely, Bernstein polynomial functions have local support, enhancing numerical stability. They facilitate the construction of higher-order test and trial function spaces and are adept at handling complex boundary conditions. Additionally, Bernstein polynomials are non-negative and shape-preserving, making them uniquely suitable for shape-preserving approximations. Therefore, we have chosen Bernstein polynomial basis functions for our FEM implementation to ensure enhanced numerical stability and accuracy. However, it is important to note that Bernstein polynomials also have some limitations. While they offer stability and flexibility, the computational cost can increase significantly with higher polynomial degrees, making them less efficient for large-scale or real-time applications. Moreover, the theoretical foundation for certain specific problems, such as those involving very-high-order polynomials or highly oscillatory functions, may still require further research and development.

    Consider the boundary value problem of the 2D Stokes equation,

    \begin{equation} \left\{ \begin{aligned} -\nabla \cdot \mathbb{T}(\mathbf{u},p)& = \mathbf{f}, &&\text{in}\ \Omega,\\ \nabla \cdot \mathbf{u}& = 0, &&\text{a.e. in}\ \Omega,\\ \mathbf{u}& = \mathbf{g}, &&\text{a.e. on}\ \partial\Omega, \end{aligned} \right. \end{equation} (1.1)

    where \Omega \subset \mathbb{R}^2 is a bounded polygonal domain, \mathbf{u} = \mathbf{u}(x, y) is the velocity vector, p refers to the fluid pressure, \nabla\cdot\mathbf{u} is the divergence of \mathbf{u} , and \mathbb{T}(\mathbf{u},p) = 2\nu\mathbb{D}(\mathbf{u})-p\mathbb{I} is the stress tensor. In more detail, the deformation tensor can be written as

    \begin{equation} \mathbb{D}(\mathbf{u}) = \begin{pmatrix} \frac{\partial u_1}{\partial x} & \frac{1}{2}\left(\frac{\partial u_1}{\partial y}+\frac{\partial u_2}{\partial x}\right)\\ \frac{1}{2}\left(\frac{\partial u_1}{\partial y}+\frac{\partial u_2}{\partial x}\right) & \frac{\partial u_2}{\partial y} \end{pmatrix}. \end{equation} (1.2)

    Hence, the stress tensor can be written as

    \begin{equation} \mathbb{T}(\mathbf{u},p) = \begin{pmatrix} 2\nu\frac{\partial u_1}{\partial x}-p & \nu\left(\frac{\partial u_1}{\partial y}+\frac{\partial u_2}{\partial x}\right)\\ \nu\left(\frac{\partial u_1}{\partial y}+\frac{\partial u_2}{\partial x}\right) & 2\nu\frac{\partial u_2}{\partial y}-p \end{pmatrix}. \end{equation} (1.3)

    \mathbf{f} describes the external force, \mathbf{g} is the velocity on the domain boundary, and \nu = \frac{UL}{Re} represents the kinematic viscosity of the fluid, where U and L represent the characteristic speed and characteristic length, respectively. The Stokes equation is a basic equation of fluid mechanics, which simulates the motion of low-velocity or viscous fluid [35,36] and has important applications in fluid mechanics [37], geophysics [38], telecommunication technology [39], and aerospace [40], among others [41,42,43]. We use the mixed FEM based on the Bernstein polynomial basis to solve the Stokes equations, and calculate the errors in the L^{\infty} , L^{2} , and H^{1} -semi norms.

    The traditional FEM is a versatile numerical technique that can handle both univariate and multivariate equations. However, when applied to systems involving multiple physical quantities, such as the Stokes equation (Eq (1.1)), traditional FEM requires careful consideration to ensure the existence and uniqueness of the solution. The Stokes equation involves a tight coupling between velocity and pressure, which necessitates precise numerical treatment. To guarantee the uniqueness [44] of the solution to the variational problem, the finite element approximation space must satisfy the Lax-Milgram theorem [45]. Additionally, to ensure the stability of the solution, especially for coupled variables, the inf-sup condition [45] must be satisfied. While traditional FEM can theoretically meet these requirements, the selection of appropriate finite element spaces for velocity and pressure is crucial; if the selection is not appropriate, the solution can become unstable and lose accuracy. Therefore, to ensure that the finite element approximation for the Stokes equation is both convergent and stable, we have chosen the mixed FEM, which handles the coupling between velocity and pressure by selecting suitable finite element spaces for these variables. This approach more effectively satisfies the inf-sup condition, thereby providing a more stable and accurate solution. Moreover, only the gradient of the pressure appears in the Stokes equation, so the pressure is determined only up to an additive constant; to recover uniqueness, we impose an additional condition on the pressure during the solution process. In this article, we fix the pressure at one point in the region. Furthermore, the mixed FEM is not limited to the Stokes equation; it is equally effective for other multivariate systems, such as the velocity-stress formulation of the wave equation [46,47].
By using mixed FEM, we can achieve higher accuracy and stability in solving a wide range of coupled PDEs.
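The effect of fixing the pressure at one point can be illustrated on a toy saddle-point system. The following is a minimal numpy sketch with hypothetical 2x2 blocks (not the matrices assembled later in this paper): the pressure block has the constant vector in its null space, exactly as in the continuous problem, and pinning a single pressure degree of freedom restores unique solvability.

```python
import numpy as np

# Hypothetical Stokes-like saddle-point system K z = rhs.
Av = np.array([[4.0, 1.0], [1.0, 3.0]])   # SPD velocity block, like a(u, v)
B = np.array([[1.0, -1.0], [2.0, -2.0]])  # like b(v, p): rows sum to 0, so constant pressures are invisible
K = np.block([[Av, B], [B.T, np.zeros((2, 2))]])

# The assembled matrix is singular: shifting p by a constant changes nothing.
null = np.array([0.0, 0.0, 1.0, 1.0])
assert np.allclose(K @ null, 0.0)

# Fix the pressure at one "point" (one dof): overwrite its equation with p_0 = 0.
rhs = np.array([1.0, 2.0, 0.0, 0.0])
K_fix = K.copy()
K_fix[2, :] = 0.0
K_fix[2, 2] = 1.0
rhs_fix = rhs.copy()
rhs_fix[2] = 0.0

z = np.linalg.solve(K_fix, rhs_fix)  # now uniquely solvable
assert abs(z[2]) < 1e-12             # the pinned pressure dof is 0
```

Any other pressure solution of the unpinned system differs from this one only by a constant shift of the pressure dofs, which is precisely the nonuniqueness the pinning removes.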

    This paper is organized as follows. In Section 2, we first review some basic contents of Bernstein polynomial basis, Bézier curves, and surfaces. In Section 3, we use the mixed FEM based on the Bernstein polynomial basis to derive the discrete scheme of Stokes equation. In Section 4, the error result is obtained by some numerical examples. In Section 5, we summarize the work.

    In this section, we will recall the definitions and properties of Bernstein polynomial bases, Bézier curves, and surfaces.

    Definition 1. Bernstein polynomial bases of degree n are defined by

    \begin{equation} B_i^n(x) = \binom{n}{i} x^i (1-x)^{n-i},\quad i = 0,1,\cdots,n, \end{equation} (2.1)

    where \binom{n}{i} = \frac{n!}{i!(n-i)!} . For simplicity, when i < 0 or i > n , let B_i^n(x) = 0 for x\in[0,1] .
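The definition above is easy to check numerically. A short Python sketch (the helper name `bernstein` is ours) evaluates B_i^n and verifies two properties used repeatedly below: non-negativity and the partition-of-unity identity \sum_{i=0}^{n} B_i^n(x) = 1 .

```python
from math import comb

def bernstein(n, i, x):
    """B_i^n(x) = C(n,i) x^i (1-x)^(n-i), with B_i^n = 0 for i < 0 or i > n."""
    if i < 0 or i > n:
        return 0.0
    return comb(n, i) * x**i * (1.0 - x)**(n - i)

# Non-negativity and partition of unity on [0,1]:
# sum_i B_i^n(x) = (x + (1-x))^n = 1 by the binomial theorem.
n = 4
for x in (0.0, 0.3, 0.5, 0.8, 1.0):
    vals = [bernstein(n, i, x) for i in range(n + 1)]
    assert all(v >= 0.0 for v in vals)
    assert abs(sum(vals) - 1.0) < 1e-12
```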

    Definition 2. Given control points P_i\in\mathbb{R}^2\ (i = 0,1,\cdots,n) , the Bézier curve of degree n is defined by

    \begin{equation*} P(x) = \sum\limits_{i = 0}^{n} P_i B_i^n(x),\quad x\in[0,1], \end{equation*}

    where B_i^n(x)\ (i = 0,1,\cdots,n) is defined as in Eq (2.1), and the n -edge polygon obtained by connecting two adjacent control points with straight line segments is called the control polygon.
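As a quick illustration, evaluating a Bézier curve directly from the definition agrees with the classical de Casteljau recursion on the control polygon (a standard algorithm, not introduced in this paper; the function names are ours).

```python
from math import comb

def bezier_direct(ctrl, t):
    """P(t) = sum_i P_i B_i^n(t), evaluated straight from Definition 2."""
    n = len(ctrl) - 1
    bx = by = 0.0
    for i, (px, py) in enumerate(ctrl):
        b = comb(n, i) * t**i * (1.0 - t)**(n - i)
        bx += b * px
        by += b * py
    return (bx, by)

def bezier_de_casteljau(ctrl, t):
    """Same curve via repeated linear interpolation of the control polygon."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 3.0), (4.0, 0.0)]
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    d = bezier_direct(ctrl, t)
    c = bezier_de_casteljau(ctrl, t)
    assert abs(d[0] - c[0]) < 1e-12 and abs(d[1] - c[1]) < 1e-12
```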

    Bernstein polynomial bases of tensor product type can be obtained by tensor product from Bernstein polynomial bases of one variable.

    Definition 3. The tensor product type Bernstein polynomial bases of m×n degree are defined by

    \begin{equation} B_{i,j}^{m,n}(s,\tau) = B_i^m(s)B_j^n(\tau),\quad i = 0,1,\cdots,m,\ j = 0,1,\cdots,n. \end{equation} (2.2)

    Definition 4. For a continuous function f(s,τ) defined on [0,1]×[0,1], the tensor product Bernstein polynomial interpolation operator Bh is defined as

    \begin{equation} B_h(f,s,\tau) = \sum\limits_{i = 0}^{m}\sum\limits_{j = 0}^{n} f\left(\frac{i}{m},\frac{j}{n}\right) B_{i,j}^{m,n}(s,\tau). \end{equation} (2.3)

    Next, we prove that Bh is a bounded interpolation operator.

    Since f is a continuous function, it is bounded on [0,1]\times[0,1] . Therefore, there exists a constant M such that |f(s,\tau)|\leq M for all (s,\tau)\in[0,1]\times[0,1] . Hence,

    \begin{equation} |B_h(f,s,\tau)| = \left|\sum\limits_{i = 0}^{m}\sum\limits_{j = 0}^{n} f\left(\frac{i}{m},\frac{j}{n}\right) B_{i,j}^{m,n}(s,\tau)\right| \leq \sum\limits_{i = 0}^{m}\sum\limits_{j = 0}^{n} \left|f\left(\frac{i}{m},\frac{j}{n}\right)\right| B_{i,j}^{m,n}(s,\tau) \leq M\sum\limits_{i = 0}^{m}\sum\limits_{j = 0}^{n} B_{i,j}^{m,n}(s,\tau) = M. \end{equation} (2.4)

    Here, the last two steps use the non-negativity and partition-of-unity property of the Bernstein bases. Thus, B_h is a bounded interpolation operator.
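The bound (2.4) can be reproduced numerically. The sketch below implements the operator of Definition 4 with the nodes (i/m, j/n) ; for any sampled point, |B_h(f)| never exceeds \max|f| , because the tensor-product basis is a non-negative partition of unity.

```python
from math import comb

def b1(n, i, x):
    """One-dimensional Bernstein basis B_i^n(x)."""
    return comb(n, i) * x**i * (1.0 - x)**(n - i)

def bernstein_op(f, m, n, s, tau):
    """Tensor-product operator B_h(f) = sum_{i,j} f(i/m, j/n) B_i^m(s) B_j^n(tau)."""
    return sum(f(i / m, j / n) * b1(m, i, s) * b1(n, j, tau)
               for i in range(m + 1) for j in range(n + 1))

# |f| <= 1.5 on the unit square for this sample function, so |B_h(f)| <= 1.5.
f = lambda s, t: s * s - t + 0.5
m = n = 6
for s in (0.0, 0.4, 1.0):
    for t in (0.1, 0.7):
        assert abs(bernstein_op(f, m, n, s, t)) <= 1.5 + 1e-12
```

Note also that constants are reproduced exactly: applying the operator to f \equiv 1 returns 1 at every point, which is the partition-of-unity identity behind the final equality in (2.4).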

    Since B_h is a bounded interpolation operator, by the Bramble-Hilbert lemma [45], we can conclude that B_h attains the optimal-order interpolation error estimate for sufficiently smooth functions.

    In this section, we first construct function spaces of the mixed FEM with Bernstein polynomial basis, and the discrete scheme of Stokes equation in Eq (1.1) is derived.

    First of all, consider the subspace H^{1}_0(\Omega) of Sobolev space H^{1}(\Omega) :

    H^{1}_0(\Omega) = \left\{u \in H^{1}(\Omega);u|_{\partial \Omega} = 0\right\}.

    Multiplying the first equation of Eq (1.1) by a test vector function \mathbf{v}(x, y) \in H^{1}_0(\Omega)\times H^{1}_0(\Omega) and then integrating over \Omega yields

    \begin{equation*} \displaystyle{\int}_{\Omega}(-\nabla \cdot \mathbb{T}(\mathbf{u},p))\cdot\mathbf{v} \:dxdy = \displaystyle{\int}_{\Omega}f\cdot\mathbf{v}\:dxdy. \end{equation*}

    Second, by multiplying the divergence-free equation by a test function q(x, y) , we get

    \begin{equation*} \displaystyle{\int}_{\Omega}(\nabla\cdot \mathbf{u})q\:dxdy = 0. \end{equation*}

    Then, applying Green's identity,

    \begin{equation*} \begin{aligned} \int_{\Omega}2\nu\mathbb{D}(\mathbf{u}):\mathbb{D}(\mathbf{v})\:dxdy-\int_{\Omega}p(\nabla\cdot\mathbf{v})\:dxdy & = \int_{\Omega}\mathbf{f}\cdot\mathbf{v}\:dxdy, \forall \mathbf{v} \in H_{0}^{1}(\Omega)\times H_{0}^{1}(\Omega), \\ -\int_{\Omega}(\nabla\cdot\mathbf{u})q\:dxdy& = 0, \forall q \in L^{2}(\Omega), \end{aligned} \end{equation*}

    where

    \begin{equation*} \mathbb{D}(\mathbf{u}):\mathbb{D}(\mathbf{v}) = \frac{\partial u_1}{\partial x}\frac{\partial v_1}{\partial x}+\frac{\partial u_2}{\partial y}\frac{\partial v_2}{\partial y}+\frac{1}{2}\frac{\partial u_1}{\partial y}\frac{\partial v_1}{\partial y}+\frac{1}{2}\frac{\partial u_{1}}{\partial y}\frac{\partial v_{2}}{\partial x}+\frac{1}{2}\frac{\partial u_{2}}{\partial x}\frac{\partial v_{1}}{\partial y}+\frac{1}{2}\frac{\partial u_{2}}{\partial x}\frac{\partial v_{2}}{\partial x}. \end{equation*}
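A quick sanity check of this expansion: for arbitrary (made-up) values of the eight partial derivatives, the expanded sum equals the matrix contraction \sum_{k,l}\mathbb{D}(\mathbf{u})_{kl}\,\mathbb{D}(\mathbf{v})_{kl} .

```python
# Hypothetical gradient values; any real numbers work here.
u1x, u1y, u2x, u2y = 0.7, -1.2, 0.4, 2.0
v1x, v1y, v2x, v2y = -0.3, 0.9, 1.1, -0.5

# D(u) and D(v) as in Eq (1.2).
Du = [[u1x, 0.5 * (u1y + u2x)], [0.5 * (u1y + u2x), u2y]]
Dv = [[v1x, 0.5 * (v1y + v2x)], [0.5 * (v1y + v2x), v2y]]
contraction = sum(Du[k][l] * Dv[k][l] for k in range(2) for l in range(2))

# The expanded formula from the text.
expanded = (u1x * v1x + u2y * v2y
            + 0.5 * u1y * v1y + 0.5 * u1y * v2x
            + 0.5 * u2x * v1y + 0.5 * u2x * v2x)

assert abs(contraction - expanded) < 1e-12
```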

    Introducing the bilinear forms

    \begin{equation*} \begin{aligned} a(\mathbf{u},\mathbf{v})& = \int_\Omega2\nu\mathbb{D}(\mathbf{u}):\mathbb{D}(\mathbf{v})\:dxdy,\\ b(\mathbf{u},q)& = -\int_{\Omega}(\nabla\cdot\mathbf{u})q\:dxdy. \end{aligned} \end{equation*}

    Then, the variational formulation of the mixed FEM of Eq (1.1) is to find \mathbf{u}\in H_0^1(\Omega)\times H_0^1(\Omega) and p\in L^2(\Omega) , satisfying the following equation

    \begin{equation} \left\{ \begin{aligned} a(\mathbf{u},\mathbf{v})+b(\mathbf{v},p)& = (\mathbf{f},\mathbf{v}),\\ b(\mathbf{u},q)& = 0, \end{aligned} \right. \end{equation} (3.1)

    for any \mathbf{v}\in H_0^1(\Omega)\times H_0^1(\Omega) and q\in L^2(\Omega) , where (\mathbf{f}, \mathbf{v}) = \int_{\Omega}\mathbf{f}\cdot\mathbf{v}\:dxdy.

    Then, we consider the discrete form of variational Eq (3.1).

    Let \Omega_{h} be a uniform rectangle partition of \Omega , \mathbf{h} = [h_1, h_2] = [\frac{1}{N_1}, \frac{1}{N_2}] is the mesh size, where N_1 and N_2 represent the number of subintervals on the x -axis and y -axis of quasi-uniform subdivision. For each T\in \Omega_{h} , the local finite element space Q(T, m, n) is spanned by Bernstein polynomial basis defined on T , i.e.,

    \begin{equation*} Q(T,m,n) = \left\{v, v\in span\left\{B_{i,j}^{m,n}(s,\tau)\right\} \right\}. \end{equation*}

    Consider a finite element space U_h(m, n)\subset H^1(\Omega) for the velocity \mathbf{u} and a finite element space W_h(h, l)\subset L^2(\Omega) for the pressure p . Assume that the polynomial space in the construction of U_h contains P_k, k\geq 1 and that of W_h contains P_{k-1} , where,

    \begin{equation*} \begin{aligned} & U_h(m,n) = \left\{r, r\in Q(T,m,n), \forall T\in \Omega_{h} \right\},\\ & W_h(h,l) = \left\{w, w\in Q(T,h,l), \forall T\in \Omega_{h} \right\}. \end{aligned} \end{equation*}

    Define U_{h0} to be the space that consists of the functions of U_h with vanishing boundary values.

    Subsequently, the discrete scheme of Eq (3.1) is to find \mathbf{u}_h\in U_h\times U_h and p_h\in W_h , where \mathbf{u}_h = (u_{1h}, u_{2h}) such that

    \begin{equation} \left\{ \begin{aligned} a(\mathbf{u}_h,\mathbf{v}_h)+b(\mathbf{v}_h,p_h)& = (\mathbf{f},\mathbf{v}_h),\\ b(\mathbf{u}_h,q_h)& = 0, \end{aligned} \right. \end{equation} (3.2)

    for any \mathbf{v}_h\in U_{h0}\times U_{h0} and q_h\in W_h .

    In order to verify if \mathbf{v}_h and q_h satisfy the inf-sup condition, we now define an interpolation \pi_h so that it is a modification of B_h , that is, satisfying \pi_h\mathbf{u} = B_h(\mathbf{u}) . From (2.4), we know \pi_h is bounded. Discrete compatibility is similar to that proved in [48], that is, b(\mathbf{u}-\pi_h\mathbf{u}, q) = 0 . So, \mathbf{v}_h and q_h satisfy the following inf-sup condition:

    \begin{equation*} \inf\limits_{0\neq q_h\in W_h}\sup\limits_{0\neq\mathbf{v}_h\in U_{h0}\times U_{h0}}\frac{b(\mathbf{v}_h,q_h)}{\left\|\nabla\mathbf{v}_h\right\|_0\left\|q_h\right\|_0} > \beta, \end{equation*}

    where \beta > 0 is a constant independent of mesh size h .

    In the scalar format, Eq (3.2) is to find u_{1h} \in U_h, u_{2h} \in U_h, and p_h \in W_h such that

    \begin{equation} \left\{ \begin{aligned} &\int_{\Omega}\nu\Big(2\frac{\partial u_{1h}}{\partial x}\frac{\partial v_{1h}}{\partial x}+2\frac{\partial u_{2h}}{\partial y}\frac{\partial v_{2h}}{\partial y}+\frac{\partial u_{1h}}{\partial y}\frac{\partial v_{1h}}{\partial y} \\ &\quad+\frac{\partial u_{1h}}{\partial y}\frac{\partial v_{2h}}{\partial x}+\frac{\partial u_{2h}}{\partial x}\frac{\partial v_{1h}}{\partial y}+\frac{\partial u_{2h}}{\partial x}\frac{\partial v_{2h}}{\partial x}\Big)dxdy \\ &\quad-\int_{\Omega}\left(p_{h}\frac{\partial v_{1h}}{\partial x}+p_{h}\frac{\partial v_{2h}}{\partial y}\right)dxdy = \int_{\Omega}(f_{1}v_{1h}+f_{2}v_{2h})dxdy, \\ &-\int_{\Omega}\left(\frac{\partial u_{1h}}{\partial x}q_{h}+\frac{\partial u_{2h}}{\partial y}q_{h}\right)dxdy = 0, \end{aligned} \right. \end{equation} (3.3)

    for any v_{1h} \in U_h, v_{2h} \in U_h, q_h \in W_h.

    Since u_{1h}, \:u_{2h}\in U_h = span\{r_j\}_{j = 1}^{N_b} and p_h\in W_h = span\{w_j\}_{j = 1}^{N_{bp}} , then

    \begin{equation*} u_{1h} = \sum\limits_{j = 1}^{N_b}u_{1j}r_j,\quad u_{2h} = \sum\limits_{j = 1}^{N_b}u_{2j}r_j,\quad p_h = \sum\limits_{j = 1}^{N_{bp}}p_j w_j, \end{equation*}

    for some coefficients u_{1j}, \:u_{2j} (j = 1, \cdots, N_b) , and p_j (j = 1, \cdots, N_{bp}).

    Now, we set up a linear algebraic system for u_{1j}, u_{2j} (j = 1, \cdots, N_b) and p_j (j = 1, \cdots, N_{bp}). Then we can solve it to obtain the finite element solution \mathbf{u}_h = (u_{1h}, u_{2h})^t and p_h .

    For the first equation in Eq (3.3), as the first set of test functions, we set \mathbf{v}_h = (r_i, 0)^t , namely, v_{1h} = r_i\ (i = 1, \cdots, N_b) and v_{2h} = 0 . Then

    \begin{equation*} \begin{aligned} &2\int_{\Omega}\nu\left(\sum\limits_{j = 1}^{N_b}u_{1j}\frac{\partial r_j}{\partial x}\right)\frac{\partial r_i}{\partial x}dxdy+\int_{\Omega}\nu\left(\sum\limits_{j = 1}^{N_b}u_{1j}\frac{\partial r_j}{\partial y}\right)\frac{\partial r_i}{\partial y}dxdy \\ &+\int_{\Omega}\nu\left(\sum\limits_{j = 1}^{N_{b}}u_{2j}\frac{\partial r_{j}}{\partial x}\right)\frac{\partial r_{i}}{\partial y}dxdy-\int_{\Omega}\left(\sum\limits_{j = 1}^{N_{bp}}p_{j}w_{j}\right)\frac{\partial r_{i}}{\partial x}dxdy \\ & = \int_{\Omega}f_1 r_idxdy. \end{aligned} \end{equation*}

    After that, we let \mathbf{v}_h = (0, r_i)^t , i.e., v_{1h} = 0 and v_{2h} = r_i(i = 1, \cdots, N_b) ,

    \begin{equation*} \begin{aligned} &2\int_{\Omega}\nu\left(\sum\limits_{j = 1}^{N_{b}}u_{2j}\frac{\partial r_{j}}{\partial y}\right)\frac{\partial r_{i}}{\partial y}dxdy+\int_{\Omega}\nu\left(\sum\limits_{j = 1}^{N_{b}}u_{1j}\frac{\partial r_{j}}{\partial y}\right)\frac{\partial r_{i}}{\partial x}dxdy \\ &+\int_{\Omega}\nu\left(\sum\limits_{j = 1}^{N_{b}}u_{2j}\frac{\partial r_{j}}{\partial x}\right)\frac{\partial r_{i}}{\partial x}dxdy-\int_{\Omega}\left(\sum\limits_{j = 1}^{N_{bp}}p_{j} w_{j}\right)\frac{\partial r_{i}}{\partial y}dxdy \\ & = \int_{\Omega}f_2 r_idxdy. \\ \end{aligned} \end{equation*}

    Lastly, set q_h = w_i(i = 1, \cdots, N_{bp}) in the second equation of the Eq (3.3), getting

    \begin{equation*} \begin{aligned} -\int_{\Omega}\left(\sum\limits_{j = 1}^{N_{b}}u_{1j}\frac{\partial r_{j}}{\partial x}\right)w_{i}dxdy-\int_{\Omega}\left(\sum\limits_{j = 1}^{N_{b}}u_{2j}\frac{\partial r_{j}}{\partial y}\right)w_{i}dxdy = 0. \end{aligned} \end{equation*}

    Simplifying the above three sets of equations, we obtain

    \begin{equation*} \begin{aligned} &\sum\limits_{j = 1}^{N_b}u_{1j}\left(2\int_{\Omega}\nu\frac{\partial r_j}{\partial x}\frac{\partial r_i}{\partial x}dxdy+\int_{\Omega}\nu\frac{\partial r_j}{\partial y}\frac{\partial r_i}{\partial y}dxdy\right) \\ &+\sum\limits_{j = 1}^{N_{b}}u_{2j}\left(\int_{\Omega}\nu\frac{\partial r_{j}}{\partial x}\frac{\partial r_{i}}{\partial y}dxdy\right)+\sum\limits_{j = 1}^{N_{bp}}p_{j}\left(-\int_{\Omega}w_{j}\frac{\partial r_{i}}{\partial x}dxdy\right) = \int_{\Omega}f_{1}r_{i}dxdy, \\ &\sum\limits_{j = 1}^{N_b}u_{1j}\left(\int_{\Omega}\nu\frac{\partial r_j}{\partial y}\frac{\partial r_i}{\partial x}dxdy\right) \\ &+\sum\limits_{j = 1}^{N_{b}}u_{2j}\left(2\int_{\Omega}\nu\frac{\partial r_{j}}{\partial y}\frac{\partial r_{i}}{\partial y}dxdy+\int_{\Omega}\nu\frac{\partial r_{j}}{\partial x}\frac{\partial r_{i}}{\partial x}dxdy\right) \\ &+\sum\limits_{j = 1}^{N_{bp}}p_{j}\left(-\int_{\Omega}w_{j}\frac{\partial r_{i}}{\partial y}dxdy\right) = \int_{\Omega}f_{2} r_{i}dxdy, \\ &\sum\limits_{j = 1}^{N_b}u_{1j}\left(-\int_{\Omega}\frac{\partial r_j}{\partial x}w_idxdy\right)+\sum\limits_{j = 1}^{N_b}u_{2j}\left(-\int_{\Omega}\frac{\partial r_j}{\partial y}w_idxdy\right)+\sum\limits_{j = 1}^{N_{bp}}p_j\cdot0 = 0. \end{aligned} \end{equation*}

    Define the stiffness matrix as

    \begin{equation*} A = \left( \begin{array}{ccc} 2A_1+A_2& A_3 & A_5\\ A_4& 2A_2+A_1&A_6\\ A_7& A_8 &\mathbb{O} \end{array} \right), \end{equation*}

    where \mathbb{O} = [0]_{i = 1, j = 1}^{N_{bp}, N_{bp}},

    \begin{equation*} \begin{gathered} A_{1} = \left[\int_{\Omega}\nu\frac{\partial r_{j}}{\partial x}\frac{\partial r_{i}}{\partial x}dxdy\right]_{i,j = 1}^{N_{b}},\quad A_{2} = \left[\int_{\Omega}\nu\frac{\partial r_{j}}{\partial y}\frac{\partial r_{i}}{\partial y}dxdy\right]_{i,j = 1}^{N_{b}}, \\ A_{3} = \left[\int_{\Omega}\nu\frac{\partial r_{j}}{\partial x}\frac{\partial r_{i}}{\partial y}dxdy\right]_{i,j = 1}^{N_{b}},\quad A_{4} = \left[\int_{\Omega}\nu\frac{\partial r_{j}}{\partial y}\frac{\partial r_{i}}{\partial x}dxdy\right]_{i,j = 1}^{N_{b}}, \\ A_{5} = \left[\int_{\Omega}-w_{j}\frac{\partial r_{i}}{\partial x}dxdy\right]_{i = 1,j = 1}^{N_{b},N_{bp}},\quad A_{6} = \left[\int_{\Omega}-w_{j}\frac{\partial r_{i}}{\partial y}dxdy\right]_{i = 1,j = 1}^{N_{b},N_{bp}} ,\\ A_{7} = \left[\int_{\Omega}-\frac{\partial r_{j}}{\partial x}w_{i}dxdy\right]_{i = 1,j = 1}^{N_{bp},N_{b}},\quad A_{8} = \left[\int_{\Omega}-\frac{\partial r_{j}}{\partial y}w_{i}dxdy\right]_{i = 1,j = 1}^{N_{bp},N_{b}} . \end{gathered} \end{equation*}

    Define the load vector

    \begin{equation*} \vec{b} = \left( \begin{array}{c} \vec{b_1}\\ \vec{b_2}\\ \vec{0} \end{array} \right), \end{equation*}

    where \vec{0} = [0]_{i = 1}^{N_{bp}} ,

    \begin{equation*} \vec{b}_1 = \left[\displaystyle{\int}_\Omega f_1 r_idxdy\right]_{i = 1}^{N_b},\:\vec{b}_2 = \left[\displaystyle{\int}_\Omega f_2 r_idxdy\right]_{i = 1}^{N_b}. \end{equation*}

    Define the unknown vector

    \begin{equation*} \vec{X} = \left( \begin{array}{c} \vec{X_1}\\ \vec{X_2}\\ \vec{X_3} \end{array} \right), \end{equation*}

    where,

    \begin{equation*} \vec{X_1} = [u_{1j}]_{j = 1}^{N_b},\:\vec{X_2} = [u_{2j}]_{j = 1}^{N_b},\:\vec{X_3} = [p_j]_{j = 1}^{N_{bp}}. \end{equation*}

    Then, we get a linear algebraic system for u_{1j}, u_{2j}\ (j = 1, \cdots, N_b) and p_j\ (j = 1, \cdots, N_{bp}) ,

    \begin{equation} A\vec{X} = \vec{b}, \end{equation} (3.4)

    so we can solve system (3.4) to obtain the unknown vector \vec{X} .
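The block layout of A can be sketched with numpy. The blocks below are random stand-ins (the real ones come from the quadrature integrals above); only the structural relations A_4 = A_3^t , A_7 = A_5^t , and A_8 = A_6^t are taken from the definitions of the blocks, and they make the assembled saddle-point matrix symmetric.

```python
import numpy as np

rng = np.random.default_rng(0)
Nb, Nbp = 4, 3
G = rng.standard_normal((Nb, Nb))
A1 = G @ G.T                  # symmetric stand-in for [int nu r_{j,x} r_{i,x}]
H = rng.standard_normal((Nb, Nb))
A2 = H @ H.T                  # symmetric stand-in for [int nu r_{j,y} r_{i,y}]
A3 = rng.standard_normal((Nb, Nb))
A4 = A3.T                     # [int nu r_{j,y} r_{i,x}] is the transpose of [int nu r_{j,x} r_{i,y}]
A5 = rng.standard_normal((Nb, Nbp))
A6 = rng.standard_normal((Nb, Nbp))
A7, A8 = A5.T, A6.T           # the divergence rows transpose the pressure columns

A = np.block([
    [2 * A1 + A2, A3,          A5],
    [A4,          2 * A2 + A1, A6],
    [A7,          A8,          np.zeros((Nbp, Nbp))],
])

assert A.shape == (2 * Nb + Nbp, 2 * Nb + Nbp)
assert np.allclose(A, A.T)    # the saddle-point matrix is symmetric
```

In the actual computation, the zero pressure-pressure block means A is singular up to the constant-pressure mode, which is why the pressure is fixed at one point before solving (3.4).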

    In this section, we verify the feasibility and effectiveness of this method using several numerical examples. Tensor product Bernstein polynomial bases are used to construct the trial function space and test function space of the mixed FEM, the approximate solutions are computed in MATLAB R2022b, and the errors and convergence orders between the exact solution and the finite element solution under the L^{\infty} , L^{2} , and H^{1} -semi norms are obtained. The numerical results obtained by solving the Stokes equation with bilinear, biquadratic, and bicubic Lagrange basis functions are consistent with those obtained by using Bernstein polynomial bases of the corresponding orders. Since using Lagrange basis functions of higher than bicubic order leads to the Runge phenomenon when solving the Stokes equations, we only present the error results of the Bernstein polynomial bases.

    Example 1. Consider the following two-dimensional Stokes equation with Dirichlet boundary conditions in the rectangular domain \Omega = [0, 1]\times [0, 1] .

    \begin{equation} \left\{ \begin{aligned} &-\nabla \cdot\mathbf{T}(\mathbf{u}(x,y),p(x,y)) = \mathbf{f}(x,y),\ (x,y)\in \Omega,\\ &\nabla \cdot\mathbf{u}(x,y) = 0,\ (x,y)\in \Omega,\\ &\mathbf{u}(x,y)|_{\partial \Omega} = 0, \end{aligned} \right. \end{equation} (4.1)

    where the exact solution \mathbf{u} = (u_1, u_2)^t is

    \begin{equation*} \begin{aligned} u_1(x,y)& = x^{2}(1-x)^{2}(2y-6y^2+4y^3),\\ u_2(x,y)& = -y^{2}(1-y)^{2}(2x-6x^{2}+4x^{3}), \end{aligned} \end{equation*}

    the exact solution p(x, y) is

    \begin{equation*} p(x,y) = x-x^2, \end{equation*}

    and the body force \mathbf{f} = (f_1, f_2)^t is

    \begin{equation*} \begin{aligned} f_1(x,y) = &\nu(2y(y - 1)^2(12x^2 - 12x + 2) - x^2(24y - 12)(x - 1)^2 \\ &+ y^2(2y - 2)(12x^2 - 12x + 2))- 2x - 2\nu(2(x - 1)^2(4y^3 - 6y^2 + 2y)\\ & + 2x^2(4y^3 - 6y^2 + 2y) + 4x(2x - 2)(4y^3 - 6y^2 + 2y)) + 1,\\ f_2(x,y) = &2\nu(2(y - 1)^2(4x^3 - 6x^2 + 2x) + 2y^2(4x^3 - 6x^2 + 2x)\\ & + 4y(2y - 2)(4x^3 - 6x^2 + 2x)) - \nu(2x(x - 1)^2(12y^2 - 12y + 2)\\ & - y^2(24x - 12)(y - 1)^2 + x^2(2x - 2)(12y^2 - 12y + 2)), \end{aligned} \end{equation*}

    where we set \nu = 1 .
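Before solving, one can verify that this manufactured velocity is admissible: it is divergence-free and satisfies the homogeneous boundary condition of (4.1). A short Python check using central differences:

```python
# Exact velocity components of Example 1.
def u1(x, y):
    return x**2 * (1 - x)**2 * (2 * y - 6 * y**2 + 4 * y**3)

def u2(x, y):
    return -y**2 * (1 - y)**2 * (2 * x - 6 * x**2 + 4 * x**3)

# Central-difference divergence du1/dx + du2/dy should vanish (up to O(h^2)).
h = 1e-5
for (x, y) in [(0.2, 0.3), (0.5, 0.5), (0.7, 0.9), (0.31, 0.62)]:
    div = ((u1(x + h, y) - u1(x - h, y)) / (2 * h)
           + (u2(x, y + h) - u2(x, y - h)) / (2 * h))
    assert abs(div) < 1e-8

# No-slip boundary condition: u vanishes on the edges of the unit square.
assert u1(0.0, 0.4) == 0.0 and u2(0.7, 1.0) == 0.0
```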

    The domain \Omega is partitioned into uniform rectangles. Here, we use biquadratic, bicubic, and biquartic Bernstein polynomial bases to solve the Stokes Eq (4.1), and calculate the L^{\infty} , L^{2} , and H^{1} -semi norms of the difference between the approximate solution and the exact solution. Tables 1 and 2 show the numerical errors for these basis functions in the L^{\infty} , L^{2} , and H^{1} -semi norms; the corresponding convergence orders are shown in Tables 3 and 4. The comparison of errors is shown in Figures 1 and 2.

    Table 1.  The comparison of numerical errors of \mathbf{u} in L^{\infty} , L^{2} , and H^{1} -semi norms.
    basis h_{1} h_{2} ||u-u_{h}||_{L^\infty} ||u-u_{h}||_{L^{2}} |u-u_{h}|_{H^{1}}
    biquadratic Bernstein \frac{1}{4} \frac{1}{4} 2.5683e-04 2.2975e-04 1.3000e-03
    \frac{1}{8} \frac{1}{8} 3.3051e-05 2.9674e-05 1.7101e-04
    \frac{1}{16} \frac{1}{16} 4.4028e-06 3.7355e-06 2.1478e-05
    \frac{1}{32} \frac{1}{32} 5.5386e-07 4.6772e-07 2.6875e-06
    bicubic Bernstein \frac{1}{4} \frac{1}{4} 7.3008e-06 4.9632e-06 2.3383e-04
    \frac{1}{8} \frac{1}{8} 4.4274e-07 3.0623e-07 2.8934e-05
    \frac{1}{16} \frac{1}{16} 2.6941e-08 1.9059e-08 3.6060e-06
    \frac{1}{32} \frac{1}{32} 1.6506e-09 1.1897e-09 4.5039e-07
    biquartic Bernstein \frac{1}{4} \frac{1}{4} 2.1182e-12 6.8916e-13 2.2454e-11
    \frac{1}{8} \frac{1}{8} 7.6057e-13 3.5117e-13 2.3499e-11
    \frac{1}{16} \frac{1}{16} 3.5179e-13 1.7482e-13 2.3665e-11
    \frac{1}{32} \frac{1}{32} 1.6601e-13 8.7608e-14 2.3789e-11

    Table 2.  The comparison of numerical errors of p in L^{\infty} , L^{2} , and H^{1} -semi norms.

    basis                  h_1    h_2    ||p-p_h||_{L^\infty}  ||p-p_h||_{L^2}  |p-p_h|_{H^1}
    bilinear Bernstein     1/4    1/4    1.0700e-02            1.0400e-02       1.4430e-01
                           1/8    1/8    2.6000e-03            2.6000e-03       7.2200e-02
                           1/16   1/16   6.5301e-04            6.5104e-04       3.6100e-02
                           1/32   1/32   1.6289e-04            1.6276e-04       1.8000e-02
    biquadratic Bernstein  1/4    1/4    2.3900e-05            7.1260e-06       1.5554e-04
                           1/8    1/8    1.2875e-06            1.8008e-07       8.0360e-06
                           1/16   1/16   6.6020e-08            4.9431e-09       4.8683e-07
                           1/32   1/32   3.5086e-09            1.8548e-10       4.8902e-08
    bicubic Bernstein      1/4    1/4    2.8915e-10            1.2332e-10       4.8600e-09
                           1/8    1/8    2.6243e-10            1.1638e-10       9.6834e-09
                           1/16   1/16   2.4346e-10            1.1637e-10       1.9344e-08
                           1/32   1/32   1.5440e-09            1.2847e-09       3.8696e-08
    Table 3.  Convergence order under three norms of \mathbf{u} .

    basis                  h/(h/2)       L^{\infty}-order  L^{2}-order  H^{1}-order
    biquadratic Bernstein  (1/4)/(1/8)   2.9580            2.9528       2.9264
                           (1/8)/(1/16)  2.9082            2.9898       2.9931
                           (1/16)/(1/32) 2.9908            2.9976       2.9985
    bicubic Bernstein      (1/4)/(1/8)   4.0435            3.6940       3.0146
                           (1/8)/(1/16)  4.0386            4.0061       3.0043
                           (1/16)/(1/32) 4.0287            4.0018       3.0012
    Table 4.  Convergence order under three norms of p .

    basis                  h/(h/2)       L^{\infty}-order  L^{2}-order  H^{1}-order
    bilinear Bernstein     (1/4)/(1/8)   2.0410            2.0000       0.9990
                           (1/8)/(1/16)  1.9933            1.9977       1.0000
                           (1/16)/(1/32) 2.0032            2.0000       1.0040
    biquadratic Bernstein  (1/4)/(1/8)   4.2144            5.3064       4.2747
                           (1/8)/(1/16)  4.2855            5.1871       4.0450
                           (1/16)/(1/32) 4.2339            4.7361       3.3154
    Figure 1.  Error comparison of u-u_h under L^\infty, L^2 , and H^1 norm.
    Figure 2.  Error comparison of p-p_h under L^\infty, L^2 , and H^1 norm.

    When solving the 2D Stokes equations with equal mesh sizes, for the velocity \mathbf{u} , the numerical accuracy of the bicubic Bernstein polynomial basis is 1 to 2 orders of magnitude higher than that of the biquadratic Bernstein polynomial basis, while the biquartic Bernstein polynomial basis is 4 to 7 orders of magnitude higher than the bicubic. For the pressure p , the numerical accuracy of the biquadratic Bernstein polynomial basis is 3 to 5 orders of magnitude higher than that of the bilinear Bernstein polynomial basis, and the bicubic Bernstein polynomial basis is 1 to 5 orders of magnitude higher than the biquadratic.
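The convergence orders in Tables 3 and 4 follow directly from the tabulated errors: halving h multiplies the error by roughly 2^{-order}, so order = log2(e_h / e_{h/2}). A minimal sketch (the data below are the L^2 errors of \mathbf{u} for the biquadratic Bernstein basis from Table 1):

```python
import math

def convergence_orders(errors):
    """Convergence orders from errors at successively halved mesh sizes."""
    return [math.log2(e_h / e_half) for e_h, e_half in zip(errors, errors[1:])]

# L^2 errors of u, biquadratic Bernstein basis, at h = 1/4, 1/8, 1/16, 1/32
orders = convergence_orders([2.2975e-04, 2.9674e-05, 3.7355e-06, 4.6772e-07])
```

This reproduces the L^{2}-order column of Table 3 (2.9528, 2.9898, 2.9976), consistent with the expected order of degree + 1 = 3 for the biquadratic basis.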

    When solving Eq (4.1) using the biquartic Bernstein polynomial basis for the velocity \mathbf{u} and the bicubic Bernstein polynomial basis for the pressure p , we attempted many remedies, including adjusting the precision settings of MATLAB R2022b, to improve the accuracy and convergence order of the mixed FEM, but the effect was not noticeable due to the limitations of the computer hardware, so the convergence order could not be computed. In the future, we will continue to explore ways to improve performance.

    Example 2. Consider the following Stokes equation

    \begin{equation} \left\{ \begin{aligned} &-\nabla \cdot\mathbf{T}(\mathbf{u}(x,y),p(x,y)) = \mathbf{f}(x,y),\ (x,y)\in \Omega,\\ &\nabla \cdot\mathbf{u}(x,y) = 0,\ (x,y)\in \Omega,\\ &\mathbf{u}(x,y)|_{\partial \Omega} = 0, \end{aligned} \right. \end{equation} (4.2)

    where \Omega = [0, 1]\times[0, 1] , the exact solution \mathbf{u} = (u_1, u_2)^t is

    \begin{equation*} \begin{aligned} u_1(x,y)& = -\cos(2\pi x)\sin(2\pi y)+\sin(2\pi y),\\ u_2(x,y)& = \sin(2\pi x)\cos(2\pi y)-\sin(2\pi x), \end{aligned} \end{equation*}

    the exact solution p(x, y) is

    \begin{equation*} p(x,y) = x^2+y^2, \end{equation*}

    and the body force \mathbf{f} = (f_1, f_2)^t is

    \begin{equation*} \begin{aligned} f_1(x,y) = &2x + 4\nu\pi^2\sin(2\pi y) - 8\nu\pi^2\cos(2\pi x)\sin(2\pi y),\\ f_2(x,y) = &2y - 4\nu\pi^2\sin(2\pi x) + 8\nu\pi^2\cos(2\pi y)\sin(2\pi x), \end{aligned} \end{equation*}

    where we set \nu = 1 .
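As a sanity check (not part of the paper), the manufactured solution of Example 2 can be verified numerically: \nabla\cdot\mathbf{u} = 0, and \mathbf{f} = -\nu\Delta\mathbf{u} + \nabla p, which is the momentum residual of Eq (4.2) for a divergence-free field. The sketch below approximates the derivatives by central finite differences at random interior points.

```python
import numpy as np

nu = 1.0

def u1(x, y): return -np.cos(2*np.pi*x)*np.sin(2*np.pi*y) + np.sin(2*np.pi*y)
def u2(x, y): return  np.sin(2*np.pi*x)*np.cos(2*np.pi*y) - np.sin(2*np.pi*x)
def p(x, y):  return x**2 + y**2

def f1(x, y):
    # first component of the body force as given in the text
    return 2*x + 4*nu*np.pi**2*np.sin(2*np.pi*y) \
               - 8*nu*np.pi**2*np.cos(2*np.pi*x)*np.sin(2*np.pi*y)

h = 1e-4
rng = np.random.default_rng(0)
xs = rng.uniform(0.1, 0.9, 50)
ys = rng.uniform(0.1, 0.9, 50)

# incompressibility residual: du1/dx + du2/dy
div_u = (u1(xs+h, ys) - u1(xs-h, ys)) / (2*h) \
      + (u2(xs, ys+h) - u2(xs, ys-h)) / (2*h)

# first momentum residual: -nu*Laplace(u1) + dp/dx - f1
lap_u1 = (u1(xs+h, ys) - 2*u1(xs, ys) + u1(xs-h, ys)) / h**2 \
       + (u1(xs, ys+h) - 2*u1(xs, ys) + u1(xs, ys-h)) / h**2
dp_dx = (p(xs+h, ys) - p(xs-h, ys)) / (2*h)
res1 = -nu*lap_u1 + dp_dx - f1(xs, ys)
```

Both residuals vanish to finite-difference accuracy, confirming that the stated \mathbf{f} matches the stated exact solution.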

    Analogous to Example 1, the mixed FEM with bivariate Bernstein polynomial bases is used to solve the above problem. The numerical errors in the L^{\infty} , L^{2} , and H^{1} -semi norms are listed in Tables 5 and 6. The corresponding convergence orders are shown in Tables 7 and 8. Figures 3 and 4 display the error plots.
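The bivariate basis used throughout is the tensor product of one-dimensional Bernstein polynomials, B_{i,n}(t) = \binom{n}{i} t^i (1-t)^{n-i} on [0, 1]. A minimal sketch (illustrative only, not the paper's implementation):

```python
from math import comb

def bernstein(n, i, t):
    # 1D Bernstein basis B_{i,n}(t) = C(n,i) * t^i * (1-t)^(n-i) on [0,1]
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bernstein_2d(n, m, i, j, x, y):
    # bivariate (tensor-product) basis on [0,1]^2; n = m = 3 gives the bicubic case
    return bernstein(n, i, x) * bernstein(m, j, y)

# the degree-n basis functions form a nonnegative partition of unity
s1 = sum(bernstein(3, i, 0.3) for i in range(4))
s2 = sum(bernstein_2d(3, 3, i, j, 0.3, 0.7) for i in range(4) for j in range(4))
```

Nonnegativity and the partition-of-unity property are the properties behind the stability advantages over Lagrange bases discussed below.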

    Table 5.  The comparison of numerical errors of \mathbf{u} in L^{\infty} , L^{2} , and H^{1} -semi norms.

    basis                  h_1    h_2    ||u-u_h||_{L^\infty}  ||u-u_h||_{L^2}  |u-u_h|_{H^1}
    biquadratic Bernstein  1/2    1/2    1.6174e-01            1.5937e-01       2.3473e+00
                           1/4    1/4    5.0100e-02            3.8000e-02       3.2450e-01
                           1/8    1/8    7.6000e-03            5.3000e-03       4.2000e-02
                           1/16   1/16   9.7064e-04            6.8074e-04       5.3000e-03
    bicubic Bernstein      1/2    1/2    6.4761e-02            3.8419e-02       9.7374e-01
                           1/4    1/4    4.6000e-03            2.2000e-03       1.0640e-01
                           1/8    1/8    3.8009e-04            1.4194e-04       1.3500e-02
                           1/16   1/16   2.5417e-05            8.9279e-06       1.7000e-03
    biquartic Bernstein    1/2    1/2    5.8918e-03            3.8720e-03       1.1391e-01
                           1/4    1/4    4.4542e-04            2.9417e-04       2.3000e-03
                           1/8    1/8    1.4655e-05            9.4296e-06       7.2741e-05
                           1/16   1/16   4.5821e-07            2.9652e-07       2.2833e-06
    biquintic Bernstein    1/2    1/2    1.1305e-03            7.5132e-04       3.0851e-02
                           1/4    1/4    2.7425e-05            1.9000e-05       1.4613e-04
                           1/8    1/8    6.3294e-07            3.0283e-07       2.3289e-06
                           1/16   1/16   1.0992e-08            4.7554e-09       3.7396e-08
    Table 6.  The comparison of numerical errors of p in L^{\infty} , L^{2} , and H^{1} -semi norms.

    basis                  h_1    h_2    ||p-p_h||_{L^\infty}  ||p-p_h||_{L^2}  |p-p_h|_{H^1}
    bilinear Bernstein     1/2    1/2    1.1055e-01            8.7401e-02       4.0825e-01
                           1/4    1/4    5.2700e-02            2.6300e-02       2.7690e-01
                           1/8    1/8    1.3800e-02            6.3000e-03       1.0890e-01
                           1/16   1/16   2.0000e-03            1.3000e-03       5.1100e-02
    biquadratic Bernstein  1/2    1/2    3.0446e-01            1.5070e-01       1.7746e+00
                           1/4    1/4    1.6700e-02            6.5000e-03       1.7270e-01
                           1/8    1/8    9.8603e-04            4.0417e-04       2.3900e-02
                           1/16   1/16   7.6837e-05            2.6949e-05       3.3000e-03
    bicubic Bernstein      1/2    1/2    2.0060e-02            7.7363e-03       1.4123e-01
                           1/4    1/4    5.0825e-04            2.2814e-04       7.5000e-03
                           1/8    1/8    1.0791e-05            3.8490e-06       2.4536e-04
                           1/16   1/16   1.8552e-07            6.3212e-08       7.9160e-06
    biquartic Bernstein    1/2    1/2    3.9974e-03            2.3690e-03       7.2008e-02
                           1/4    1/4    2.9541e-05            1.1172e-05       6.0166e-04
                           1/8    1/8    2.1714e-07            7.5228e-08       8.0550e-06
                           1/16   1/16   1.2925e-08            3.3008e-09       8.7917e-07
    Table 7.  Convergence order under three norms of \mathbf{u} .

    basis                  h/(h/2)       L^{\infty}-order  L^{2}-order  H^{1}-order
    biquadratic Bernstein  (1/2)/(1/4)   1.6908            2.0683       2.8547
                           (1/4)/(1/8)   2.7207            2.8419       2.9498
                           (1/8)/(1/16)  2.9690            2.9608       2.7866
    bicubic Bernstein      (1/2)/(1/4)   3.8154            4.1262       3.1940
                           (1/4)/(1/8)   3.5972            3.9542       2.9785
                           (1/8)/(1/16)  3.9025            3.9908       2.9894
    biquartic Bernstein    (1/2)/(1/4)   3.7255            3.7184       5.6301
                           (1/4)/(1/8)   4.9257            4.9633       4.9827
                           (1/8)/(1/16)  4.9992            4.9910       4.9936
    biquintic Bernstein    (1/2)/(1/4)   5.7603            6.1691       5.2112
                           (1/4)/(1/8)   5.6375            5.9800       4.9813
                           (1/8)/(1/16)  5.9043            5.9930       4.9953
    Table 8.  Convergence order under three norms of p .

    basis                  h/(h/2)       L^{\infty}-order  L^{2}-order  H^{1}-order
    bilinear Bernstein     (1/2)/(1/4)   1.0688            1.7326       0.5601
                           (1/4)/(1/8)   1.9331            2.0616       1.3464
                           (1/8)/(1/16)  2.7866            2.2768       1.0916
    biquadratic Bernstein  (1/2)/(1/4)   4.1883            4.5351       3.3612
                           (1/4)/(1/8)   4.0821            4.0074       2.8532
                           (1/8)/(1/16)  3.6818            3.9067       2.8565
    bicubic Bernstein      (1/2)/(1/4)   5.3026            5.0837       4.2350
                           (1/4)/(1/8)   5.5576            5.8893       4.9339
                           (1/8)/(1/16)  5.8621            5.9281       4.9540
    biquartic Bernstein    (1/2)/(1/4)   5.5209            6.2951       5.0997
                           (1/4)/(1/8)   5.8011            5.9533       4.8669
                           (1/8)/(1/16)  5.1332            5.3535       4.4198
    Figure 3.  Error comparison of u-u_h under L^\infty, L^2 , and H^1 norm.
    Figure 4.  Error comparison of p-p_h under L^\infty, L^2 , and H^1 norm.

    It can be observed from the error curves in Figures 3 and 4 and from the convergence orders in Tables 7 and 8 that, for equal mesh sizes, a higher-degree Bernstein polynomial basis yields both higher numerical accuracy in each error norm and a higher convergence order.

    Example 3. Consider the following non-homogeneous 2D Stokes equation

    \begin{equation} \left\{ \begin{aligned} &-\nabla \cdot\mathbf{T}(\mathbf{u}(x,y),p(x,y)) = \mathbf{f}(x,y),\ (x,y)\in \Omega,\\ &\nabla \cdot\mathbf{u}(x,y) = 0,\ (x,y)\in \Omega,\\ &\mathbf{u}(x,y)|_{\partial \Omega} = g, \end{aligned} \right. \end{equation} (4.3)

    where \Omega = [0, 1]\times[0, 1] , the exact solution \mathbf{u} = (u_1, u_2)^t is

    \begin{equation*} \begin{aligned} u_1(x,y)& = \pi \sin(\pi x)\cos(\pi y),\\ u_2(x,y)& = -\pi \cos(\pi x)\sin(\pi y), \end{aligned} \end{equation*}

    the exact solution p(x, y) is

    \begin{equation*} p(x,y) = \sin(\pi x)\sin(\pi y), \end{equation*}

    and the body force \mathbf{f} = (f_1, f_2)^t is

    \begin{equation*} \begin{aligned} f_1(x,y) = &2\nu\pi^3 \cos(\pi y)\sin(\pi x)+\pi \cos(\pi x)\sin(\pi y),\\ f_2(x,y) = &-2\nu\pi^3\cos(\pi x)\sin(\pi y)+\pi \cos(\pi y)\sin(\pi x), \end{aligned} \end{equation*}

    where we set \nu = 1 .

    We continue to use the above method to solve the Stokes Eq (4.3). The numerical errors are shown in Tables 9 and 10.

    Table 9.  The comparison of numerical errors of \mathbf{u} in L^{\infty} , L^{2} , and H^{1} -semi norms.

    Bernstein basis                    h_1    h_2    ||u-u_h||_{L^\infty}  ||\mathbf{u}-\mathbf{u}_h||_{L^2}  |\mathbf{u}-\mathbf{u}_h|_{H^1}
    biquadratic for \mathbf{u} ,       1/4    1/4    7.1700e-02            4.5300e-02                         3.0170e-01
    bilinear for p                     1/8    1/8    1.8600e-02            1.0600e-02                         7.6200e-02
                                       1/16   1/16   4.8000e-03            2.6000e-03                         1.9100e-02
                                       1/32   1/32   1.2000e-03            6.4904e-04                         4.8000e-03
    bicubic for \mathbf{u} ,           1/4    1/4    5.8200e-02            2.7800e-02                         2.4340e-01
    bilinear for p                     1/8    1/8    1.5600e-02            6.9000e-03                         7.9500e-02
                                       1/16   1/16   4.0000e-03            1.7000e-03                         2.6800e-02
                                       1/32   1/32   1.0000e-03            4.3277e-04                         9.2000e-03
    Table 10.  The comparison of numerical errors of p in L^{\infty} , L^{2} , and H^{1} -semi norms.

    Bernstein basis                    h_1    h_2    ||p-p_h||_{L^\infty}  ||p-p_h||_{L^2}  |p-p_h|_{H^1}
    biquadratic for \mathbf{u} ,       1/4    1/4    9.5200e-02            4.5800e-02       5.7360e-01
    bilinear for p                     1/8    1/8    2.7400e-02            1.0700e-02       1.6570e-01
                                       1/16   1/16   7.5000e-03            2.6000e-03       6.0500e-02
                                       1/32   1/32   2.0000e-03            6.6176e-04       2.6800e-02
    bicubic for \mathbf{u} ,           1/4    1/4    8.7900e-02            3.0400e-02       4.0270e-01
    bilinear for p                     1/8    1/8    2.1600e-02            7.3000e-03       1.3450e-01
                                       1/16   1/16   5.5000e-03            1.8000e-03       5.5500e-02
                                       1/32   1/32   1.4000e-03            4.5304e-04       2.6100e-02

    As can be seen from Tables 9 and 10, the Bernstein basis functions show good convergence under all three norms as the mesh size decreases, which further verifies their advantage in numerical stability. In particular, cubic and higher-degree Lagrange interpolation exhibits unstable oscillations, whereas the Bernstein basis functions provide a stable and consistent solution while retaining good geometric properties and flexible handling of boundary conditions. The numerical stability and global approximation properties of Bernstein polynomials make the results more reliable than those of Lagrange interpolation.

    The Stokes equations primarily describe fluid flow at low Reynolds numbers (Re), where the inertial forces are significantly smaller than the viscous forces and can therefore be neglected, resulting in smooth, orderly flow. In the three numerical experiments presented above, we observe that the errors gradually diminish as the mesh size decreases, indicating that our numerical solutions progressively approach the true laminar flow state. This trend demonstrates the effectiveness and accuracy of our numerical method for low-Re fluid dynamics problems.

    In this study, we use tensor-product Bernstein polynomial basis functions and Lagrange basis functions to solve the Stokes equation, and we examine basis functions of different degrees in detail. The results show that solutions obtained with bicubic or lower-order Lagrange basis functions essentially match those obtained with Bernstein polynomial basis functions in numerical accuracy and convergence order, differing only in a few trailing digits of some p values, with the Bernstein basis functions performing slightly better overall. This shows that the two bases are equivalent at lower orders, but the Bernstein basis functions exhibit better stability in the details.

    However, when the biquartic Lagrange basis functions are used, the situation changes significantly. In Example 1, the errors of the biquartic Lagrange basis are not as good as those of the biquartic Bernstein basis. In Examples 2 and 3, the solutions obtained with the biquartic Lagrange basis exhibit obvious oscillations, which makes the numerical results unreliable. This shows that as the degree of the basis functions increases, Lagrange basis functions become prone to numerical instability when dealing with complex problems, and this instability is aggravated at high orders. In contrast, Bernstein basis functions show excellent numerical stability and higher accuracy in the high-order cases. By using high-order Bernstein polynomial bases, we not only effectively alleviate the oscillations caused by high-order Lagrange basis functions, but also obtain more stable and accurate numerical solutions. Therefore, to ensure the reliability and stability of the numerical results, we only report the solutions computed with Bernstein polynomial basis functions.

    We reviewed the Bernstein polynomial basis and used it to construct the mixed finite element function spaces. The Galerkin mixed FEM based on bivariate Bernstein polynomial bases was then used to solve the 2D Stokes equation, and the L^{\infty} , L^{2} , and H^{1} -semi norms of the error between the exact solution and the finite element solution, together with the convergence orders, were computed. Compared with the Lagrange basis, solving the Stokes equation with bicubic or lower-order Lagrange interpolation polynomial bases and with Bernstein polynomial bases gives the same numerical accuracy and convergence order. High-order Lagrange interpolation is often limited by Runge's phenomenon, so we use high-order Bernstein polynomial bases to effectively overcome this problem and obtain significantly better numerical results.
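Runge's phenomenon is easy to reproduce: interpolating f(t) = 1/(1+25t^2) with a degree-14 Lagrange polynomial at equispaced nodes produces large oscillations near the interval ends, while the degree-14 Bernstein operator B_n f = \sum_i f(i/n) B_{i,n} stays bounded and free of oscillation (it converges more slowly, but stably). A small illustrative sketch; the test function and node count are our choices, not from the paper:

```python
import numpy as np
from math import comb

def lagrange_eval(xn, yn, x):
    # evaluate the Lagrange interpolant through (xn, yn) at the points x
    out = np.zeros_like(x)
    for i in range(len(xn)):
        li = np.ones_like(x)
        for j in range(len(xn)):
            if j != i:
                li *= (x - xn[j]) / (xn[i] - xn[j])
        out += yn[i] * li
    return out

def bernstein_op(f, n, s):
    # Bernstein operator B_n f(s) = sum_i f(i/n) C(n,i) s^i (1-s)^(n-i), s in [0,1]
    return sum(f(i / n) * comb(n, i) * s**i * (1 - s)**(n - i) for i in range(n + 1))

runge = lambda t: 1.0 / (1.0 + 25.0 * t**2)
n = 14
xx = np.linspace(-1.0, 1.0, 1001)

# equispaced Lagrange interpolation on [-1, 1]: the error blows up with n
xn = np.linspace(-1.0, 1.0, n + 1)
err_lagrange = np.max(np.abs(lagrange_eval(xn, runge(xn), xx) - runge(xx)))

# Bernstein approximation (map [-1, 1] onto [0, 1]): bounded, oscillation-free
ss = (xx + 1.0) / 2.0
err_bernstein = np.max(np.abs(bernstein_op(lambda s: runge(2*s - 1), n, ss) - runge(xx)))
```

The Lagrange error exceeds the size of the function itself, while the Bernstein error remains well below it, which is the qualitative behavior the discussion above relies on.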

    Lanyin Sun: Writing-review & editing, methodology, funding acquisition, conceptualization, visualization, data curation; Siya Wen: Writing-review & editing, writing-original draft, software. All authors have read and approved the final version of the manuscript for publication.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors are very grateful to the editor and anonymous referees for their valuable comments and constructive suggestions, which helped to improve the paper significantly. This work is partly supported by Program for Science Technology Innovation Talents in Universities of Henan Province (No. 22HASTIT021), the Science and Technology Project of Henan Province (No. 212102210394).

    The authors declare that there is no conflict of interest.
