Citation: Christopher C. Tisdell. Embedding opportunities for participation and feedback in large mathematics lectures via audience response systems[J]. STEM Education, 2021, 1(2): 75-91. doi: 10.3934/steme.2021006
Abstract
The purpose of this work is to interpret the experiences of students when audience response systems (ARS) were implemented as a strategy for teaching large mathematics lecture groups at university. Our paper makes several contributions to the literature. Firstly, we furnish a basic model of how ARS can form a teaching and learning strategy. Secondly, we examine the impact of this strategy on student attitudes of their experiences, focusing on the ability of ARS to: assess understanding; identify strengths and weaknesses; furnish feedback; support learning; and to encourage participation. Our findings support the position that there is a place for ARS as part of a strategy for teaching and learning mathematics in large groups.
1. Introduction
Consider the following generalized absolute value equations (GAVE)
$$ Ax - B|x| = b, \qquad (1.1) $$
where $A, B \in \mathbb{R}^{n \times n}$ are two given matrices, $b \in \mathbb{R}^{n}$ is a given vector, and $|\cdot|$ denotes the componentwise absolute value. In particular, if $B = I$, where $I$ stands for the identity matrix, then the GAVE (1.1) reduces to the standard absolute value equations (AVE)
$$ Ax - |x| = b. \qquad (1.2) $$
The GAVE (1.1) and the AVE (1.2) arise in many scientific computing and engineering problems, including linear programming, linear complementarity problems (LCP), bimatrix games, quadratic programming and so on; see [1,2,3,4] for more details. Take the well-known LCP as an example: for a given matrix $M \in \mathbb{R}^{n \times n}$ and a given vector $q \in \mathbb{R}^{n}$, find two vectors $z, w \in \mathbb{R}^{n}$ such that
$$ z \ge 0, \quad w = Mz + q \ge 0 \quad \text{and} \quad z^{T}w = 0. \qquad (1.3) $$
Here and thereafter, $(\cdot)^{T}$ denotes the transpose of either a vector or a matrix. By letting $z = |x| - x$ and $w = |x| + x$, the LCP (1.3) can be equivalently transformed into the GAVE
$$ (M+I)x - (M-I)|x| = q \quad \text{with} \quad x = \tfrac{1}{2}\left((M-I)z + q\right). \qquad (1.4) $$
In fact, the GAVE (1.4) can also be transformed into the LCP (1.3) [5,6]. For more details of the relation between the GAVE (1.1) and the LCP (1.3), please see [7,8,9].
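To make the transformation between the LCP (1.3) and the GAVE (1.4) concrete, the following minimal MATLAB sketch builds the GAVE data $A = M + I$, $B = M - I$ from a small LCP and verifies (1.4) numerically. The matrix $M$ and the complementarity pair $(z, w)$ below are our own arbitrary illustrative choices, not taken from the references.

% Minimal sketch: build a GAVE (1.4) from a small LCP (1.3) and check it numerically.
n = 4;
M = eye(n) + 0.1*randn(n);            % small illustrative matrix (ours)
z = [1; 0; 2; 0];                     % z >= 0
w = [0; 3; 0; 5];                     % w >= 0 and z'*w = 0 (componentwise complementarity)
q = w - M*z;                          % chosen so that w = M*z + q holds exactly
x = 0.5*((M - eye(n))*z + q);         % x = ((M - I)z + q)/2, as in (1.4)
A = M + eye(n);  B = M - eye(n);      % GAVE data
res = norm(A*x - B*abs(x) - q)        % should be of the order of machine precision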
In recent decades, much attention has been paid to the GAVE (1.1) and the AVE (1.2). On one hand, sufficient conditions have been studied to guarantee the existence and uniqueness of the solution of the GAVE (1.1) [10,11,12,13,14]. Rohn showed that the GAVE (1.1) is uniquely solvable for any $b \in \mathbb{R}^{n}$ if $\sigma_{\min}(A) > \sigma_{\max}(|B|)$ [10]. Wu and Li extended the results of [10] and showed that the GAVE (1.1) is uniquely solvable for any $b \in \mathbb{R}^{n}$ if $\sigma_{\min}(A) > \sigma_{\max}(B)$ [11,12]. In [13], Rohn et al. showed that the GAVE (1.1) is uniquely solvable for any $b \in \mathbb{R}^{n}$ if $\rho(|A^{-1}B|) < 1$. Here and in the sequel, $\sigma_{\min}(\cdot)$, $\sigma_{\max}(\cdot)$ and $\rho(\cdot)$ denote the minimal singular value, the maximal singular value and the spectral radius of the corresponding matrix, respectively. On the other hand, many efficient iteration methods have been studied for solving the GAVE (1.1), including the concave minimization algorithm [15,16], the sign accord algorithm [17], the optimization algorithm [18], the hybrid algorithm [19], the preconditioned AOR iterative method [20], the Picard-HSS iteration method [21], the Newton-type methods [22,23,24] and so on.
Due to the existence of the nonlinear term $B|x|$, the GAVE (1.1) can be regarded as a system of nonlinear equations
$$ F(x) = 0 \quad \text{with} \quad F(x) = Ax - B|x| - b. \qquad (1.5) $$
As a result, the well-known Newton iteration method
$$ x^{k+1} = x^{k} - F'(x^{k})^{-1}F(x^{k}), \quad k = 0, 1, 2, \cdots, \qquad (1.6) $$
can be used provided that the Jacobian matrix $F'(x)$ of $F(x)$ exists and is invertible. However, the Newton iteration method (1.6) cannot be applied directly to the GAVE (1.1) since $F(x) = Ax - B|x| - b$ is non-differentiable. For the special case $B = I$, by treating $F(x) = Ax - |x| - b$ as a piecewise linear vector function, Mangasarian in [22] used the generalized Jacobian $\partial|x|$ of $|x|$, based on a subgradient of its components, and presented the following generalized Newton (GN) iteration method
$$ x^{(k+1)} = \left(A - D(x^{(k)})\right)^{-1}b, \quad k = 0, 1, 2, \cdots, \qquad (1.7) $$
to get an approximate solution of the AVE (1.2), where $D(x^{(k)}) = \partial|x^{(k)}| = \mathrm{diag}(\mathrm{sign}(x^{(k)}))$ and $\mathrm{sign}(x)$ stands for a vector with components equal to $1$, $0$ or $-1$ depending on whether the corresponding component of $x$ is positive, zero or negative. Theoretical analysis showed that the GN iteration method (1.7) is globally linearly convergent under certain conditions [22]. Hu et al. extended the GN iteration scheme (1.7) to solve the GAVE (1.1) and proposed a weaker convergence condition [25]. For a general matrix $B$, the specific GN iteration scheme is
$$ x^{(k+1)} = \left(A - BD(x^{(k)})\right)^{-1}b, \quad k = 0, 1, 2, \cdots. \qquad (1.8) $$
Recently, convergence results for the GN iteration schemes (1.7) and (1.8) have been further discussed in [26,27,28]. From the GN iteration scheme (1.7) or (1.8), we can see that the coefficient matrix $A - D(x^{(k)})$ or $A - BD(x^{(k)})$ changes at each iteration step. For large problems, solving the corresponding linear system is expensive, especially when the coefficient matrix is ill-conditioned. In addition, if the generalized Jacobian matrix is singular, then the GN iteration method fails. To remedy these drawbacks, a number of effective improvements have been presented, such as a stable and locally quadratically convergent iteration scheme [28], the generalized Traub's method [29], the modified GN iteration method [24,30], the inexact semi-smooth Newton iteration method [31], a new two-step iterative method [32] and so on. All these improvements greatly accelerate the convergence rate of the GN iteration method. However, when the generalized Jacobian matrix becomes singular, these newly developed iteration methods also fail.
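As a concrete illustration of the breakdown scenario just described, the following bare-bones MATLAB sketch applies the GN scheme (1.8) to a small synthetic GAVE. The test data are our own and are chosen so that $A - BD(x)$ stays safely nonsingular; the rcond guard marks exactly the situation in which the Newton-type methods above fail and which the relaxation introduced below is designed to avoid.

% Bare-bones GN iteration (1.8) on a small synthetic GAVE (illustrative data only).
n = 5;
A = 4*eye(n) + 0.5*randn(n);          % sigma_min(A) > ||B||_2 with high probability
B = 0.5*eye(n);
b = randn(n, 1);
x = zeros(n, 1);
for k = 1:50
    D = diag(sign(x));                % generalized Jacobian of |x| at the current iterate
    J = A - B*D;                      % coefficient matrix changes at every step
    if rcond(J) < eps                 % (numerically) singular generalized Jacobian: GN breaks down
        error('A - B*D(x) is numerically singular');
    end
    x = J \ b;                        % one GN step (1.8)
    if norm(A*x - B*abs(x) - b) <= 1e-10*norm(b), break; end
end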
In this paper, by introducing a relaxation iteration parameter, we propose a relaxed generalized Newton (RGN) iteration method to solve the GAVE (1.1). In fact, the RGN iteration method is a generalization of both the GN iteration method [22] and the recently studied Picard iteration method [13]. The advantage of the new RGN iteration method is twofold: with a suitably chosen iteration parameter, it can not only avoid singularity of the generalized Jacobian matrix but also improve the convergence rate. Theoretically, we prove that the RGN iteration method is well defined and globally linearly convergent under certain conditions. Moreover, a specific sufficient convergence condition is presented when the coefficient matrix $A$ is symmetric positive definite. With two numerical examples, we show that the new RGN iteration method is much more efficient than some existing Newton-type iteration methods.
The rest of this paper is organized as follows. In Section 2, the RGN iteration method is introduced to solve the GAVE (1.1). Convergence analysis is presented in detail in Section 3. In Section 4, two numerical examples from the LCP (1.3) are presented to demonstrate the effectiveness of our new method. Finally, we end this paper with some conclusions and an outlook in Section 5.
2. The relaxed generalized Newton iteration method
In this section, a new relaxed generalized Newton iteration method is introduced to solve the GAVE (1.1).
Let $\theta \ge 0$ be a nonnegative real parameter. Based on the Newton iteration scheme (1.6) and the ideas studied in [22,30], a new iteration scheme is introduced to solve the GAVE (1.1):
$$ F(x^{k}) + \left(\partial F(x^{k}) + (1-\theta)BD(x^{k})\right)(x^{k+1} - x^{k}) = 0. \qquad (2.1) $$
Substituting $F(x^{k}) = Ax^{k} - B|x^{k}| - b$ from (1.5) and the generalized Jacobian $\partial F(x^{k}) = A - BD(x^{k})$ into (2.1), we obtain
$$ Ax^{k} - B|x^{k}| - b + \left(A - \theta BD(x^{k})\right)(x^{k+1} - x^{k}) = 0. \qquad (2.2) $$
Since $D(x) = \mathrm{diag}(\mathrm{sign}(x))$ is a diagonal matrix satisfying $D(x^{k})x^{k} = |x^{k}|$ [22], the iteration scheme (2.1), or equivalently (2.2), simplifies to the final form
$$ \left(A - \theta BD(x^{k})\right)x^{k+1} = b + (1-\theta)B|x^{k}|, \quad k = 0, 1, 2, \cdots. \qquad (2.3) $$
Here, the iteration parameter $\theta$ plays the role of a relaxation parameter: it can avoid the singularity problem and adjust the condition number of the coefficient matrix $A - \theta BD(x^{k})$, so as to improve the convergence rate of the GN iteration method (1.8). We therefore call the new iteration method (2.3) the relaxed generalized Newton (RGN) iteration method. In particular, if $\theta = 1$, then the RGN iteration method (2.3) reduces to the GN iteration method (1.8). If $\theta = 0$, then the RGN iteration method (2.3) becomes
$$ Ax^{k+1} = b + B|x^{k}|, \quad k = 0, 1, 2, \cdots, \qquad (2.4) $$
which is known as the Picard iteration method [7,13].
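A practical consequence of the choice $\theta = 0$ is that the coefficient matrix of (2.4) is the fixed matrix $A$, so it can be factorized once and the factors reused at every Picard step. The following short MATLAB sketch, with our own small test data, illustrates this point; it is meant only as an illustration of the iteration, not as the authors' implementation.

% Picard iteration (2.4): factor the fixed matrix A once and reuse the factors.
n = 5;
A = 4*eye(n) + 0.5*randn(n);          % illustrative data (ours); ||inv(A)||*||B|| < 1 here
B = 0.5*eye(n);
b = randn(n, 1);
[L, U, P] = lu(A);                    % one LU factorization: P*A = L*U
x = zeros(n, 1);
for k = 1:500
    rhs = b + B*abs(x);
    x = U \ (L \ (P*rhs));            % solve A*x_new = b + B*|x^k| with the stored factors
    if norm(A*x - B*abs(x) - b) <= 1e-10*norm(b), break; end
end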
The RGN iteration method (2.3) is well defined provided that the coefficient matrix $A - \theta BD(x^{k})$ is nonsingular at each iteration step. The following theorem presents a sufficient condition. To this end, since the diagonal matrix $D(x) = \mathrm{diag}(\mathrm{sign}(x))$ may change at each iteration step, we first define the set of matrices
$$ \mathcal{D} = \left\{ D = \mathrm{diag}(d_{1}, d_{2}, \cdots, d_{n}) : d_{i} \in \{1, 0, -1\},\ i = 1, 2, \cdots, n \right\}. \qquad (2.5) $$
Theorem 2.1. Let $A, B \in \mathbb{R}^{n \times n}$, let $\theta \ge 0$ be a nonnegative real parameter and let $D \in \mathbb{R}^{n \times n}$ be any matrix in the set $\mathcal{D}$ (2.5). Let $\lambda_{\min}(A^{T}A)$ and $\lambda_{\max}(B^{T}B)$ be the smallest eigenvalue of $A^{T}A$ and the largest eigenvalue of $B^{T}B$, respectively. If
$$ \frac{\lambda_{\min}(A^{T}A)}{\lambda_{\max}(B^{T}B)} > \theta^{2}, \qquad (2.6) $$
then $A - \theta BD$ is nonsingular and the RGN iteration method (2.3) is well defined.
Proof. We argue by contradiction. If $A - \theta BD$ is singular, then there exists a nonzero vector $x$ such that
$$ (A - \theta BD)x = 0. $$
In addition, since $D \in \mathbb{R}^{n \times n}$ is a diagonal matrix with each diagonal element equal to $1$, $-1$ or $0$, the matrix $D^{T}D$ is also diagonal with each diagonal element equal to $1$ or $0$. Thus, it holds that
$$ \lambda_{\min}(A^{T}A)\,x^{T}x \le x^{T}A^{T}Ax = \theta^{2}\,x^{T}D^{T}B^{T}BDx \le \theta^{2}\lambda_{\max}(B^{T}B)\,x^{T}D^{T}Dx \le \theta^{2}\lambda_{\max}(B^{T}B)\,x^{T}x, $$
which contradicts the condition (2.6) since $x \ne 0$. Therefore, $A - \theta BD$ is nonsingular and the RGN iteration method (2.3) is well defined provided that the condition (2.6) holds.
Remark 2.1. It should be noted that the condition given in Theorem 2.1 is a theoretical generalization of some recent results. In particular, if $B = I$ and $\theta = 1$, then the condition (2.6) becomes $\lambda_{\min}(A^{T}A) > 1$, which means that all singular values of $A$ exceed $1$; this agrees with [22, Lemma 2.1]. If only $\theta = 1$, then the condition (2.6) is the one given in [25, Theorem 3.1]. In addition, if $\theta = 0$, then the condition (2.6) is equivalent to requiring that the matrix $A$ be nonsingular, which shows that the Picard iteration method (2.4) is well defined.
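The sufficient condition (2.6) is inexpensive to verify numerically, since $\lambda_{\min}(A^{T}A)/\lambda_{\max}(B^{T}B) > \theta^{2}$ is equivalent to $\sigma_{\min}(A)/\sigma_{\max}(B) > \theta$. The following small MATLAB sketch uses illustrative matrices of our own choosing.

% Check the sufficient condition (2.6) of Theorem 2.1 for given A, B and theta.
n = 5;  theta = 0.8;                  % illustrative data (ours)
A = 4*eye(n) + 0.5*randn(n);
B = 0.5*eye(n) + 0.1*randn(n);
sig_min_A = min(svd(A));              % sqrt of lambda_min(A'*A)
sig_max_B = max(svd(B));              % sqrt of lambda_max(B'*B)
if sig_min_A / sig_max_B > theta      % equivalent form of condition (2.6)
    disp('Condition (2.6) holds: A - theta*B*D is nonsingular for every D in (2.5).');
else
    disp('Condition (2.6) fails: Theorem 2.1 gives no guarantee.');
end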
To better implement the new RGN iteration method (2.3) in actual computations, we present an algorithmic version as follows. Here, $\|\cdot\|_{2}$ denotes the Euclidean norm of a vector or the spectral norm of a matrix.
Algorithm 2.1. (The RGN iteration method)
1). Choose an arbitrary initial vector $x^{0}$ and a nonnegative parameter $\theta$. Given a tolerance $\varepsilon > 0$, set $k = 0$;
2). If $\|Ax^{k} - B|x^{k}| - b\|_{2} \le \varepsilon \|b\|_{2}$, stop;
3). Compute $D(x^{k}) = \mathrm{diag}(\mathrm{sign}(x^{k}))$;
4). Solve the following linear system to obtain $x^{k+1}$:
$$ \left(A - \theta BD(x^{k})\right)x^{k+1} = b + (1-\theta)B|x^{k}|; $$
5). Set $k = k + 1$ and go to Step 2).
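A direct MATLAB transcription of Algorithm 2.1 might look as follows. The function name rgn_gave, the default parameter values and the comments are our own, and the code is meant only as an illustrative sketch rather than the authors' original implementation.

function [x, k, res] = rgn_gave(A, B, b, theta, x0, tol, kmax)
% RGN_GAVE  Relaxed generalized Newton iteration (Algorithm 2.1) for A*x - B*|x| = b.
% theta = 1 recovers the GN method (1.8) and theta = 0 recovers the Picard method (2.4).
    if nargin < 7, kmax = 5000; end
    if nargin < 6, tol  = 1e-7; end
    n   = length(b);
    x   = x0;
    res = norm(A*x - B*abs(x) - b) / norm(b);
    for k = 0:kmax
        if res <= tol, return; end                              % Step 2: stopping test
        D = spdiags(sign(x), 0, n, n);                          % Step 3: D(x^k) = diag(sign(x^k))
        x = (A - theta*(B*D)) \ (b + (1 - theta)*(B*abs(x)));   % Step 4: one RGN step (2.3)
        res = norm(A*x - B*abs(x) - b) / norm(b);
    end
end

For instance, a call such as [x, it, res] = rgn_gave(A, B, b, 0.8, zeros(size(b))) would apply the method with $\theta = 0.8$ from the zero starting vector used in the experiments of Section 4.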
3. Convergence analysis
In this section, we establish the convergence theory of the RGN iteration method (2.3) for solving the GAVE (1.1). Specifically, two general convergence conditions of the RGN iteration method (2.3) are presented first. Then, a sufficient convergence condition is proposed for the case where the system matrix $A$ is symmetric positive definite. In addition, as special cases of the new RGN iteration method (2.3), convergence conditions for the GN iteration method (1.8) and the Picard iteration method (2.4) follow immediately.
3.1. General sufficient convergence conditions
In this subsection, we study some sufficient convergence conditions under the assumption that the RGN iteration method (2.3) is well defined.
Theorem 3.1. Let $A, B \in \mathbb{R}^{n \times n}$ and let $\theta \ge 0$ be a nonnegative real parameter satisfying the condition (2.6). Let $D \in \mathbb{R}^{n \times n}$ be any matrix in the set $\mathcal{D}$ (2.5). If
$$ \|(A - \theta BD)^{-1}\|_{2}\|B\|_{2} < \frac{1}{1+\theta}, \qquad (3.1) $$
then the RGN iteration method (2.3) converges linearly from any starting point to a solution $x^{*}$ of the GAVE (1.1).
Proof. Let $x^{*}$ be a solution of the GAVE (1.1); then it satisfies
$$ Ax^{*} - B|x^{*}| = b. $$
Subtracting this identity from the RGN iteration (2.3) and bounding the resulting error recursion term by term leads to an estimate of the form (3.4) for $\|x^{k+1} - x^{*}\|_{2}$, where the inequality $\||x^{k}| - |x^{*}|\|_{2} \le \|x^{k} - x^{*}\|_{2}$ is used. From (3.4), we can see that the RGN iteration method (2.3) converges linearly from any starting point to a solution $x^{*}$ of the GAVE (1.1) provided that the condition (3.1) is satisfied.
Theorem 3.2. Under the assumptions of Theorem 3.1, further assume that $A$ is nonsingular. If
$$ \|A^{-1}\|_{2}\|B\|_{2} < \frac{1}{1+2\theta}, \qquad (3.5) $$
then the RGN iteration method (2.3) converges linearly from any starting point to a solution $x^{*}$ of the GAVE (1.1).
then the RGN iteration method (2.3) converges linearly from any starting point to a solution x∗ of the GAVE (1.1).
Proof. According to Theorem 3.1, we only need to verify the condition (3.1). Under the condition (3.5), by the Banach perturbation lemma (see [33,Lemma 2.3.3]), we have
Therefore, the RGN iteration method (2.3) converges linearly from any starting point to a solution x∗ of the GAVE (1.1) if the condition (3.5) is satisfied.
As mentioned in Section 2, the well-known GN iteration method (1.8) and the Picard iteration method (2.4) are special cases of the new RGN iteration method (2.3) with relaxation parameter $\theta = 1$ and $\theta = 0$, respectively. By letting $\theta = 1$ and $\theta = 0$, we obtain the following two corollaries, which describe convergence conditions of the GN iteration method (1.8) and the Picard iteration method (2.4), respectively, for solving the GAVE (1.1).
Corollary 3.1. Let $A, B \in \mathbb{R}^{n \times n}$ and let $D \in \mathbb{R}^{n \times n}$ be any matrix in the set $\mathcal{D}$ (2.5). Assume that $A - BD$ is nonsingular. If
$$ \|(A - BD)^{-1}\|_{2}\|B\|_{2} < \frac{1}{2}, $$
or if $A$ is nonsingular and
$$ \|A^{-1}\|_{2}\|B\|_{2} < \frac{1}{3}, $$
then the GN iteration method (1.8) converges linearly from any starting point to a solution $x^{*}$ of the GAVE (1.1).
Corollary 3.2. Let $A, B \in \mathbb{R}^{n \times n}$ and assume that $A$ is nonsingular. If
$$ \|A^{-1}\|_{2}\|B\|_{2} < 1, $$
then the Picard iteration method (2.4) converges linearly from any starting point to a solution $x^{*}$ of the GAVE (1.1).
3.2. The case of symmetric positive definite
In this subsection, we turn to discuss the convergence conditions of the RGN iteration method (2.3) for solving the GAVE (1.1) when the system matrix A is symmetric positive definite.
Theorem 3.3. Let $A \in \mathbb{R}^{n \times n}$ be a symmetric positive definite matrix, $B \in \mathbb{R}^{n \times n}$, and let $D \in \mathbb{R}^{n \times n}$ be any matrix in the set $\mathcal{D}$ (2.5). Let $\theta$ be a positive constant satisfying (2.6). Further, let $\mu_{\min}$ be the smallest eigenvalue of the matrix $A$ and $\|B\|_{2} = \tau$. If
$$ \mu_{\min} > (1 + 2\theta)\tau, \qquad (3.6) $$
then the RGN iteration method (2.3) converges linearly from any starting point to a solution $x^{*}$ of the GAVE (1.1).
Proof. Since the matrix $A$ is symmetric positive definite, it is easy to check that
$$ \|A^{-1}\|_{2}\|B\|_{2} \le \frac{\tau}{\mu_{\min}}. $$
If $\mu_{\min}$ and $\tau$ further satisfy the condition (3.6), we have
$$ \|A^{-1}\|_{2}\|B\|_{2} \le \frac{\tau}{\mu_{\min}} < \frac{1}{1+2\theta}. $$
Therefore, by Theorem 3.2, the RGN iteration method (2.3) converges linearly from any starting point to a solution $x^{*}$ of the GAVE (1.1). This completes the proof.
In Theorem 3.3, by setting $\theta = 1$ and $\theta = 0$, we obtain the following two corollaries, which guarantee the convergence of the GN iteration method (1.8) and the Picard iteration method (2.4), respectively, for solving the GAVE (1.1).
Corollary 3.3. Let $A \in \mathbb{R}^{n \times n}$ be a symmetric positive definite matrix and let $\mu_{\min}$ be its smallest eigenvalue. Let $B \in \mathbb{R}^{n \times n}$ with $\|B\|_{2} = \tau$, and let $D \in \mathbb{R}^{n \times n}$ be any matrix in the set $\mathcal{D}$ (2.5). If
$$ \mu_{\min} > 3\tau, $$
then the GN iteration method (1.8) converges linearly from any starting point to a solution $x^{*}$ of the GAVE (1.1).
Corollary 3.4. Let $A \in \mathbb{R}^{n \times n}$ be a symmetric positive definite matrix and let $\mu_{\min}$ be its smallest eigenvalue. Let $B \in \mathbb{R}^{n \times n}$ with $\|B\|_{2} = \tau$. If
$$ \mu_{\min} > \tau, $$
then the Picard iteration method (2.4) converges linearly from any starting point to a solution $x^{*}$ of the GAVE (1.1).
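Under the assumptions of Theorem 3.3, the convergence test reduces to comparing two scalars, the smallest eigenvalue of $A$ and $(1+2\theta)\|B\|_{2}$, which is cheap to do numerically. The matrices in the following MATLAB sketch are our own illustrative choices.

% Check the sufficient condition (3.6) of Theorem 3.3 for a symmetric positive definite A.
n = 6;  theta = 0.5;                                          % illustrative data (ours)
A = 6*eye(n) - diag(ones(n-1,1), 1) - diag(ones(n-1,1), -1);  % SPD tridiagonal test matrix
B = 0.4*eye(n);
mu_min = min(eig(A));                                         % smallest eigenvalue of A
tau    = norm(B, 2);                                          % tau = ||B||_2
if mu_min > (1 + 2*theta)*tau                                 % condition (3.6)
    disp('Theorem 3.3 applies: the RGN iteration (2.3) converges for this theta.');
end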
4. Numerical experiments
In this section, we present some numerical examples arising from two types of LCP (1.3) to show the effectiveness of the RGN iteration method (2.3), and to demonstrate its advantages over the well-known Lemke's method, the Picard iteration method (2.4), the GN iteration method (1.8) and the modified GN (MGN) iteration method [30] in terms of iteration counts (denoted by "IT") and elapsed CPU times (denoted by "CPU"). The first LCP comes from [4] and has been used as a standard test problem by many researchers. The second LCP arises from practical traffic single-bottleneck models [34], which are usually solved by the well-known Lemke's method. Note that Lemke's method is one of the most efficient direct methods for solving the LCP (1.3).
In our experiments, the initial guess is the zero vector. All runs are terminated once
$$ \mathrm{RES}(x^{(k)}) := \frac{\|Ax^{(k)} - B|x^{(k)}| - b\|_{2}}{\|b\|_{2}} \le 10^{-7} $$
or once the prescribed maximum iteration number $k_{\max} = 5000$ is exceeded. At each step of the RGN, Picard, GN and MGN iteration methods, we need to solve a system of linear equations with coefficient matrix $A - \theta BD$, $A$, $A - BD$ and $A + I - BD$, respectively. These linear systems are solved by the sparse LU factorization when the coefficient matrices are nonsymmetric and by the sparse Cholesky factorization when they are symmetric positive definite. To implement the RGN iteration method efficiently, the relaxation parameter $\theta$ must be chosen in advance. The convergence rates of all parameter-dependent iteration methods depend heavily on the particular choice of the iteration parameter, and the analytic determination of the value of $\theta$ that yields the fastest convergence of the RGN iteration method appears to be quite a difficult problem. Here, the relaxation parameter $\theta$ used in the new RGN iteration method is chosen as the experimentally optimal one, $\theta_{\exp}$, which leads to the smallest iteration count. In the following tables, "-" means that the corresponding iteration method does not converge to an approximate solution within $k_{\max}$ iteration steps or even diverges. All computations are run in MATLAB (version R2014a) in double precision on a Windows 8 system with an Intel(R) Core(TM) i5-3337U CPU and 8 GB RAM. We use the MATLAB codes available at https://ww2.mathworks.cn/matlabcentral/fileexchange/41485 to test Lemke's method.
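To indicate how such a comparison can be scripted, the fragment below runs the Picard iteration ($\theta = 0$), the GN iteration ($\theta = 1$) and one relaxed choice of $\theta$ on a single synthetic GAVE with the stopping rule above, reusing the illustrative rgn_gave function sketched after Algorithm 2.1. The test problem and the parameter grid are our own and are not the test problems or codes used by the authors.

% Compare Picard (theta = 0), GN (theta = 1) and a relaxed RGN choice on one synthetic GAVE.
n = 900;
A = 8*speye(n) + sprandn(n, n, 2/n);       % sparse test matrix with sigma_min(A) > ||B||_2 (ours)
B = speye(n);
x_true = ones(n, 1);  x_true(2:2:end) = -1;
b = A*x_true - B*abs(x_true);              % right-hand side with a known solution
x0 = zeros(n, 1);
for theta = [0, 0.5, 1]
    [x, it, res] = rgn_gave(A, B, b, theta, x0, 1e-7, 5000);
    fprintf('theta = %.1f:  IT = %4d,  RES = %.2e\n', theta, it, res);
end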
Example 4.1. ([4]) Consider the LCP (1.3), in which $M \in \mathbb{R}^{n \times n}$ is given by $M = \hat{M} + \mu I$ and $q \in \mathbb{R}^{n}$ is given by $q = -Mz^{(*)}$, where the matrix $\hat{M} \in \mathbb{R}^{n \times n}$ is the test matrix specified in [4] and $z^{(*)}$ is the unique solution of the LCP (1.3). Here $n = m^{2}$. From the discussion presented in Section 1, the LCP (1.3) can be equivalently expressed as the GAVE (1.1) with $A = M + I$ and $B = M - I$. The exact solution of the GAVE (1.1) is
$$ x^{(*)} = (-0.6, -0.6, -0.6, \cdots, -0.6)^{T} \in \mathbb{R}^{n}. $$
For the first example, we consider two values of the parameter $\mu$, namely $\mu = 0$ and $\mu = -1$. For each value of $\mu$, four increasing sizes $m = 30, 60, 90, 120$ are considered, so the corresponding problem dimensions are $n = 900, 3600, 8100, 14400$, respectively. Note that for $\mu = 0$ both $M$ and $A$ are symmetric positive definite, while for $\mu = -1$ the matrix $M$ is symmetric indefinite and the matrix $A$ is symmetric positive definite. In Tables 1 and 2, we list the numerical results of the different methods for $\mu = 0$ and $\mu = -1$, respectively.
Table 1. Numerical results for Example 4.1 with $\mu = 0$.
From Tables 1 and 2, we can see that the GN iteration method and the new RGN iteration method perform much better than the other three methods in terms of both iteration counts and elapsed CPU times. For the case $\mu = 0$, Lemke's method cannot reach a satisfactory solution for large problems. Although the Picard iteration method and the MGN iteration method converge, the numerical results show that their convergence is very slow. We also notice that the iteration counts of the Picard iteration method and the MGN iteration method are almost the same, but the elapsed CPU times show that the MGN iteration method is much more expensive than the Picard iteration method; the reason is that the coefficient matrix of the MGN iteration method changes at each iteration step. The best choice of the relaxation parameter in the RGN iteration method is $\theta_{\exp} = 1$, which means that the GN iteration method is the best one in this case. For the case $\mu = -1$, Lemke's method computes the exact solution for all test problems, but the iteration counts and the elapsed CPU times show that it is not competitive in actual computations. The Picard iteration method diverges; this is because the matrix $M$ is indefinite and the convergence conditions of Corollary 3.2 and Corollary 3.4 cannot be satisfied. Among the GN, MGN and RGN iteration methods, the numerical results show that the new RGN iteration method is the best one.
Example 4.2. ([4]) Consider the LCP (1.3), in which $M = \hat{M} + \mu I \in \mathbb{R}^{n \times n}$ and $q = -Mz^{(*)}$. Different from Example 4.1, the matrix $\hat{M}$ in the second example is nonsymmetric; its specific form is again taken from [4].
Similar to Example 4.1, the second example can also be equivalently expressed as the GAVE (1.1), and it has the same exact solution as Example 4.1. For this example, we again take two values of $\mu$, namely $\mu = 0$ and $\mu = -1$, and for each value of $\mu$ we consider four increasing sizes $m = 30, 60, 90, 120$, so the total dimensions are $n = 900, 3600, 8100, 14400$, respectively. Different from Example 4.1, both $M$ and $A$ are nonsymmetric positive definite for the case $\mu = 0$; for the case $\mu = -1$, the matrix $M$ is nonsymmetric indefinite and the matrix $A$ is nonsymmetric positive definite.
Tables 3 and 4 list the corresponding numerical results of the different methods for $\mu = 0$ and $\mu = -1$, respectively. These numerical results further confirm the observations drawn from Tables 1 and 2, i.e., the GN iteration method and the new RGN iteration method are superior to the other three methods in terms of computing efficiency. For the case $\mu = 0$, Lemke's method converges very slowly for small problems and does not converge within the given maximum number of iterations for large problems. The other four methods are convergent; however, the Picard iteration method and the MGN iteration method converge very slowly, whereas the GN iteration method and the new RGN iteration method give the same results and converge very fast, which means that the GN iteration method is the best one in this case. Most importantly, the iteration counts of both the GN iteration method and the new RGN iteration method remain constant as the problem size grows. For the case $\mu = -1$, Lemke's method performs much better than in the case $\mu = 0$; however, the results show that it is still not competitive in real applications. The Picard iteration method is still divergent, for the same reason as in Example 4.1. The GN, MGN and RGN iteration methods converge faster than Lemke's method, and from these numerical results we see again that the RGN iteration method performs best among the three Newton-based iteration methods.
Table 3. Numerical results for Example 4.2 with $\mu = 0$.
Example 4.3. ([34]) The third example comes from the single bottleneck model with both homogeneous commuters and heterogeneous commuters. The dynamic equilibrium conditions for the single bottleneck model can be transformed into the LCP (1.3), in which the system matrix $M$ and the vector $q$ have the following block structure:
$$ M = \begin{bmatrix} 0 & M_{1} & M_{2} & -M_{3}^{T} \\ -M_{1}^{T} & S & 0 & 0 \\ 0 & M_{1} & I & 0 \\ M_{3} & 0 & 0 & 0 \end{bmatrix} \quad \text{and} \quad q = \begin{bmatrix} q_{1} \\ s\mathbf{1} \\ q_{2} \\ q_{3} \end{bmatrix}, $$
where the submatrices $M_{1} \in \mathbb{R}^{(\Upsilon G) \times \Upsilon}$, $M_{2} \in \mathbb{R}^{(\Upsilon G) \times (\Upsilon G)}$, $M_{3} \in \mathbb{R}^{G \times (\Upsilon G)}$ and $S \in \mathbb{R}^{\Upsilon \times \Upsilon}$ are defined as in [34].
Here, $\tau \in \mathcal{T} = \{0, 1, 2, \cdots, \Upsilon\}$ and $g \in \mathcal{G} = \{1, 2, \cdots, G\}$ are the indexes for the time interval and the user group, respectively. When $G = 1$ and $G > 1$, the LCP (1.3) describes the homogeneous case and the heterogeneous case, respectively. The scalar $s$ denotes the bottleneck capacity, measured in number of vehicles per time interval, and $N_{g}$ denotes the number of individuals in group $g$. The quantities $\alpha_{g}$, $\beta_{g}$ and $\gamma_{g}$ are the unit costs (or values) of travel time, of arriving early to work and of arriving late to work in group $g$, respectively, and $\tau^{*}_{g}$ is the preferred arrival time in group $g$. The symbol $\mathbf{1}$ stands for a vector of all ones.
For the third example, the total dimension is $n = 2\Upsilon G + \Upsilon + G$. It is proved in [34] that the system matrix $M$ is copositive and that the LCP (1.3) has a unique solution. In [34], the authors used Lemke's method to solve such problems and to simulate the single bottleneck model for both the homogeneous case ($G = 1$) and the heterogeneous case ($G = 3$). They took the total demand $N = 25$, the bottleneck capacity $s = 3$ per time unit and the preferred arrival time $\tau^{*} = 7$; the time duration is 10 time units. For the homogeneous case ($G = 1$), the unit costs are taken as $\alpha = 2$, $\beta = 1$ and $\gamma = 4$ per time unit. For the heterogeneous case ($G = 3$), the unit cost ratios are taken as $\alpha : \beta : \gamma = 2 : 1 : 4$ and $\tau^{*}_{g} = 6, 7, 8$ for groups 1-3, respectively. For a more detailed description of these parameters, please see [34]. Here, the corresponding LCP is further equivalently transformed into the GAVE (1.1) with $A = M + I$ and $B = M - I$, and then the Picard iteration method, the GN iteration method, the MGN iteration method and the new RGN iteration method are applied to solve the GAVE.
Numerical results of the different methods are listed in Tables 5 and 6 for $G = 1$ and $G = 3$, respectively. From these two tables, we can see that Lemke's method successfully solves the single bottleneck model, but the elapsed CPU times indicate that it is very expensive. The GN iteration method fails on the test problems because the coefficient matrix $A - BD(x^{k})$ becomes singular during the iteration. The Picard iteration method and the MGN iteration method can only be applied to some small problems; for large problems, these two methods fail to converge within the prescribed maximum number of iterations. Our new RGN iteration method solves all the test problems successfully and at low cost. Therefore, the new RGN iteration method is a powerful computational method for solving the GAVE (1.1).
Table 5. Numerical results for Example 4.3 with the homogeneous case ($G = 1$).
5. Conclusions and outlook
In this paper, by introducing a relaxation iteration parameter, a new relaxed generalized Newton (RGN) iteration method has been proposed to solve the generalized absolute value equations. We have proved that the RGN iteration method is well defined and converges globally and linearly under certain conditions. Numerical examples arising from two types of the well-known LCP are used to illustrate the efficiency of the new computational method. The numerical results show that the RGN iteration method converges and has much better computing efficiency than some existing methods, provided that a suitable relaxation iteration parameter is chosen.
As with most parameter-dependent iteration methods, the choice of the iteration parameter is an open and challenging problem. Moreover, the RGN iteration method is here proved to be only linearly convergent, whereas in some recent works the GN iteration method has been modified into a globally and quadratically convergent iteration method under rather strong conditions; how to improve the RGN iteration method in this direction needs further in-depth study. In addition, generalized absolute value equations with a general nonlinear term, which arise in nonlinear complementarity problems [35], implicit complementarity problems [36,37] and quasi-complementarity problems [38,39], are of great interest. Future work should focus on estimating the quasi-optimal value of the relaxation iteration parameter, developing a globally and quadratically convergent RGN iteration method, and extending the method to more applications.
Acknowledgments
The authors are very much indebted to An Wang for writing the MATLAB codes. This work is supported by the National Natural Science Foundation of China (Nos. 11771225, 61771265, 71771127), the Humanities and Social Science Foundation of the Ministry of Education of China (No. 18YJCZH274), the Science and Technology Project of Nantong City (No. JC2018142) and the '226' Talent Scientific Research Project of Nantong City.
Conflict of interest
The authors declare there is no conflict of interest.
This article has been cited by:
1. Wan-Chen Zhao, Xin-Hui Shao, New matrix splitting iteration method for generalized absolute value equations, 2023, 8, 2473-6988, 10558, 10.3934/math.2023536
2. Chen-Can Zhou, Qin-Qin Shen, Geng-Chen Yang, Quan Shi, A general modulus-based matrix splitting method for quasi-complementarity problem, 2022, 7, 2473-6988, 10994, 10.3934/math.2022614
3. Yiming Zhang, Dongmei Yu, Yifei Yuan, On the Alternative SOR-like Iteration Method for Solving Absolute Value Equations, 2023, 15, 2073-8994, 589, 10.3390/sym15030589
4. Luotao Wang, Kalidoss Rajakani, Pattern Generation and Design of Floral Patterns Based on Newton's Iterative Algorithm, 2022, 2022, 1530-8677, 1, 10.1155/2022/2116403
5. Xin-Hui Shao, Wan-Chen Zhao, Relaxed modified Newton-based iteration method for generalized absolute value equations, 2023, 8, 2473-6988, 4714, 10.3934/math.2023233
6. Hongjun Sun, Tianyu Yang, Hongbing Ding, Jinxia Li, Wenqiang Zhang, Online Measurement of Gas and Liquid Flow Rates in Wet Gas Using Vortex Flowmeter Coupled With Conductance Ring Sensor, 2022, 71, 0018-9456, 1, 10.1109/TIM.2021.3129222
7. Yan-Xia Dai, Ren-Yi Yan, Ai-Li Yang, Minimum Residual BAS Iteration Method for Solving the System of Absolute Value Equations, 2024, 2096-6385, 10.1007/s42967-024-00403-z
8. Lu-Xin Wang, Yang Cao, Qin-Qin Shen, Two Variants of Robust Two-Step Modulus-Based Matrix Splitting Iteration Methods for Mixed-Cell-Height Circuit Legalization Problem, 2024, 2096-6385, 10.1007/s42967-024-00400-2
9. Lin Zheng, Yangxin Tang, Luigi Rarità, A Two-Step Matrix-Splitting Iterative Method for Solving the Generalized Absolute Value Equation, 2024, 2024, 2314-4629, 10.1155/2024/8396895
10. Rashid Ali, Fuad A. Awwad, Emad A. A. Ismail, The development of new efficient iterative methods for the solution of absolute value equations, 2024, 9, 2473-6988, 22565, 10.3934/math.20241098
11. Chen-Can Zhou, Yang Cao, Qin-Qin Shen, Quan Shi, A modified Newton-based matrix splitting iteration method for generalized absolute value equations, 2024, 442, 03770427, 115747, 10.1016/j.cam.2023.115747
12. Ximing Fang, Minhai Huang, Convergence of the modified Newton-type iteration method for the generalized absolute value equation, 2025, 2193-5343, 10.1007/s40065-025-00500-8