Citation: Min Lun Wu, Lan Li, Yuchun Zhou. Enhancing technology leaders' instructional leadership through a project-based learning online course[J]. STEM Education, 2023, 3(2): 89-102. doi: 10.3934/steme.2023007
Abstract
This study detailed the course design principles and implementation of project-based learning (PBL) in a technology-themed graduate-level online course. Students were trained to develop knowledge and skills in instructional leadership, such as the capability to design, deliver, and evaluate educational technology professional development programs. Pre- and post-survey data were collected to examine changes in students' knowledge and skills in instructional leadership after completing this course (N = 18). Quantitative findings revealed positive learning outcomes, with statistically significant improvement in students' knowledge and skills of instructional leadership, supporting the viability of the PBL approach.
1.
Introduction
In the present investigation, H is a real Hilbert space, C is a nonempty, closed, and convex subset of H, and A:H→H is a continuous mapping. The variational inequality problem (abbreviated VI(A,C)) is of the form: find z∗∈C such that
⟨Az∗,z−z∗⟩≥0,∀z∈C.
(1.1)
Variational inequalities have important applications in numerous domains, and many researchers have studied them and obtained a multitude of results [1,2,3,4].
The problem VI(A,C) (1.1) is equivalent to the following fixed point problem:
z∗=PC(z∗−λAz∗),λ>0.
As a result, VI(A,C) (1.1) can be solved via fixed point methods (see, e.g., [5,6]). The simplest such scheme is the following projection gradient algorithm:
zn+1=PC(zn−λAzn).
(1.2)
However, the convergence of this method requires a rather strong assumption, namely that A is an η-strongly monotone and L-Lipschitz continuous mapping, where η is a positive constant and the step-size satisfies λ∈(0, 2η/L²). Moreover, algorithm (1.2) may fail when A is merely monotone.
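For illustration, the following minimal NumPy sketch (not taken from the paper) runs iteration (1.2) on an affine, strongly monotone operator over a box, where the projection reduces to componentwise clipping; the operator, the set C and the constants are assumptions chosen only for this demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
M = B @ B.T + n * np.eye(n)             # symmetric positive definite => eta-strongly monotone
q = rng.standard_normal(n)

A = lambda z: M @ z + q                  # A(z) = Mz + q
P_C = lambda z: np.clip(z, -1.0, 1.0)    # projection onto the box C = [-1, 1]^n

eigs = np.linalg.eigvalsh(M)
eta, L = eigs.min(), eigs.max()          # strong monotonicity and Lipschitz constants
lam = eta / L**2                         # any fixed step in (0, 2*eta/L^2) gives convergence

z = np.zeros(n)
for _ in range(2000):
    z_new = P_C(z - lam * A(z))          # iteration (1.2)
    if np.linalg.norm(z_new - z) < 1e-10:
        break
    z = z_new
print("approximate solution of VI(A, C):", z)
```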
The extragradient algorithm of the following type was presented by Korpelevich in [7]: given z1∈C,
{yn=PC(zn−λnAzn),zn+1=PC(zn−λnAyn),
(1.3)
where λn∈(0, 1/L). Compared with algorithm (1.2), the assumption on A is relaxed to monotonicity. Moreover, it has been shown that the sequence {zn} converges to a solution of (1.1). However, PC lacks a closed-form formula in general, and (1.3) requires computing PC twice in each iteration, which increases the computational cost of the procedure. Censor et al. carried out extensive research in this direction ([8,9,10]): the difficulty of computing PC was alleviated by projecting onto a half space, or an intersection of half spaces, rather than onto the subset C. The projection and contraction method (PCM) was first proposed by He in [11]. Cai et al. in [12] studied the optimal step sizes ηn for the PCM, yielding the method (1.4).
The benefit of this method is that the assumption on A is as weak as in algorithm (1.3), while the projection needs to be computed only once per iteration. Both theoretical analysis and numerical experiments show that the method is considerably more efficient, and the optimal step size ηn accelerates convergence further. Because of these attractive features, method (1.4) has drawn the attention of many researchers, and numerous significant advances have been built on it (see, e.g., [12,13,14] and others). Recently, Dong et al. in [13] added inertia to method (1.4) in order to obtain a better convergence effect. In [15], Shehu and Iyiola incorporated alternating inertia and adaptive step-sizes:
vn = un + αn(un − un−1) if n is odd,  vn = un if n is even,
¯un = PC(vn − λnAvn),
d(vn, ¯un) = vn − ¯un − λn(Avn − A¯un),
un+1 = vn − γηn d(vn, ¯un),
When the assumption on the mapping A is relaxed to pseudo-monotonicity, convergence of the algorithm is proved. Additionally, they gave an R-linear convergence analysis when A is a strongly pseudo-monotone mapping. In numerical experiments, the algorithm with alternating inertia in [15] performs better than the algorithm with general inertia in [13].
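For concreteness, here is a small illustrative sketch (not from this paper) of Korpelevich's extragradient iteration (1.3) applied to a monotone but not strongly monotone operator, a setting in which the simple iteration (1.2) is not guaranteed to converge; the skew-symmetric operator and the box C are assumptions chosen only for the demo.

```python
import numpy as np

R = np.array([[0.0, 1.0], [-1.0, 0.0]])    # skew-symmetric: <Rz, z> = 0, so A is monotone, L = 1
A = lambda z: R @ z
P_C = lambda z: np.clip(z, -2.0, 2.0)      # C = [-2, 2]^2, so z* = 0 solves (1.1)

lam = 0.5                                   # fixed step in (0, 1/L)
z = np.array([2.0, 1.0])
for _ in range(200):
    y = P_C(z - lam * A(z))                 # first projection
    z = P_C(z - lam * A(y))                 # second projection
print("distance to the solution z* = 0:", np.linalg.norm(z))
```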
A fascinating idea was recently proposed by Malitsky in [16] for solving the mixed variational inequality problem: find z∗∈C such that
⟨Az∗,z−z∗⟩+g(z)−g(z∗)≥0,∀z∈C,
(1.9)
where A is a monotone mapping and g is a proper, convex, lower semicontinuous function. He proposed the following version of the golden ratio algorithm:
¯zn = ((ϕ−1)zn + ¯zn−1)/ϕ,
zn+1 = proxλg(¯zn − λAzn),
(1.10)
where ϕ is the golden ratio, i.e., ϕ=(√5+1)/2. In algorithm (1.10), ¯zn is in fact a convex combination of all previously generated iterates. It is straightforward to verify that when g=ιC, (1.9) reduces to (1.1). The algorithm (1.10) may then be written equivalently as:
¯zn = ((ϕ−1)zn + ¯zn−1)/ϕ,
zn+1 = PC(¯zn − λAzn).
(1.11)
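As a quick illustration, iteration (1.11) can be prototyped in a few lines; the sketch below is not from [16] or this paper, and the operator, feasible set and fixed step size are placeholder choices for a toy monotone problem whose solution is z∗=0.

```python
import numpy as np

phi = (np.sqrt(5) + 1) / 2                  # the golden ratio
R = np.array([[0.0, 1.0], [-1.0, 0.0]])     # monotone (skew-symmetric) operator, solution z* = 0
A = lambda z: R @ z
P_C = lambda z: np.clip(z, -2.0, 2.0)       # C = [-2, 2]^2

lam = 0.5                                    # a conservative fixed step for this toy problem
z = np.array([2.0, 1.0])
z_bar = z.copy()
for _ in range(500):
    z_bar = ((phi - 1) * z + z_bar) / phi    # convex combination of all previous iterates
    z = P_C(z_bar - lam * A(z))              # iteration (1.11)
print("distance to the solution z* = 0:", np.linalg.norm(z))
```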
Numerous inertial algorithms have been published for pseudo-monotone variational inequalities. Moreover, golden ratio algorithms and their convergence have been studied for the mixed variational inequality problem when A is monotone. However, there are still few results on golden ratio methods for solving the variational inequality problem (1.1) when A is pseudo-monotone. The algorithm presented by Malitsky is quite novel and provides us with some inspiration: we aim to solve the variational inequality problem (1.1) under more general conditions using the convex combination structure of this algorithm.
In this paper, we combine the projection and contraction method of [11] with the golden ratio technique to present a new extrapolation projection contraction algorithm for the pseudo-monotone VI(A,C) (1.1). To speed up its convergence, we also present an alternating extrapolation variant. The convex combination structure allows us to greatly expand the selection range of the step size, and enlarging this range has a significant effect on the numerical results. Although the golden ratio itself is ultimately not used in our algorithms, since this work is inspired by Malitsky's golden ratio algorithm, the proposed algorithms are still referred to as projection contraction algorithms based on the golden ratio. In this paper, we primarily make the following improvements:
∙ We propose a projection contraction algorithm and an alternating extrapolation projection contraction algorithm based on the golden ratio. Weak convergence of both algorithms is established when A is pseudo-monotone, sequentially weakly continuous and L-Lipschitz continuous.
∙ We obtain R-linear convergence of both algorithms when A is strongly pseudo-monotone.
∙ Our algorithms use new self-adaptive step-sizes which, like the step in (1.10), are not required to be monotonically decreasing.
∙ In our algorithms, A is a pseudo-monotone mapping, which is a weaker assumption than in [13,17,18]. Additionally, it is not necessary to restrict the extrapolation parameter ψ to (1,(√5+1)/2] as in [19,20]; its value can be extended to (1,+∞).
The structure of the article is as follows:
Section 2 collects the related background material used in the paper. Section 3 presents a projection contraction algorithm based on the golden ratio together with proofs of its weak and R-linear convergence. Section 4 presents an alternating extrapolation projection contraction algorithm based on the golden ratio and proves its weak and R-linear convergence. Section 5 gives two numerical examples to verify the effectiveness of the algorithms.
2.
Preliminaries
Let {zn} be a sequence in H. We write zn⇀z when {zn} converges weakly to z, and zn→z when {zn} converges strongly to z.
Lemma 2.3.[24] Suppose A is pseudo-monotone in VI(A,C) (1.1) and S is the solution set of VI(A,C) (1.1). Then S is closed, convex and
S={z∈C:⟨Aw,w−z⟩≥0,∀w∈C}.
Lemma 2.4.[25] Let {zn} be a sequence in H such that the following two conditions hold:
(i) for any z∈S, limn→∞‖zn−z‖ exists;
(ii) ωw(zn)⊂S.
Then {zn} converges weakly to a point in S.
Lemma 2.5.[19] Let {an} and {bn} be non-negative real sequences which meet
an+1≤an−bn,∀n>N,
where N is some non-negative integer. Then limn→∞bn=0 and limn→∞an exists.
Lemma 2.6.[26] Let {λn} be a non-negative real sequence such that
λn+1≤ξnλn+τn,∀n∈N,
where {ξn} and {τn} meet
{ξn}⊂[1,+∞), ∑∞n=1(ξn−1)<+∞, τn>0 and ∑∞n=1τn<+∞.
Then limn→∞λn exists.
3.
Projection contraction algorithm based on the golden ratio
We provide a PCM algorithm with a new extrapolation step and the corresponding convergence analyses in this section.
Assumption 3.1.In this paper, the following assumptions hold:
(a) A: H→H is pseudo-monotone, sequentially weakly continuous and L-Lipschitz continuous.
(b) The solution set S is nonempty.
(c) {ξn}⊂[1,+∞),∑∞n=1(ξn−1)<+∞,τn>0and∑∞n=1τn<+∞.
Algorithm 3.1. Projection contraction algorithm based on the golden ratio.
Step 0: Take the iterative parameters μ∈(0,1), ψ∈(1,+∞), γ∈(0,2), and ξ1,τ1,λ1>0. Let u1∈H, v0∈H be given starting points, and let the sequences {ξn},{τn} be given. Set n:=1.
Remark 3.1.Observe in Algorithm 3.1 that if Avn≠A¯un, then
μ‖vn−¯un‖/‖Avn−A¯un‖ ≥ μ‖vn−¯un‖/(L‖vn−¯un‖) = μ/L.
Therefore, λn ≥ min{μ/L, λ1} > 0. By (3.3), we have λn+1 ≤ ξnλn + τn. From Lemma 2.6 we obtain limn→∞λn=λ when {ξn}⊂[1,+∞), ∑∞n=1(ξn−1)<+∞, and ∑∞n=1τn<+∞.
Remark 3.2.In our algorithms, it is not necessary to restrict the range of ψ to (1,(√5+1)/2] or (1,2]; ψ only needs to be greater than 1, which greatly relaxes the range of admissible parameters.
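Since the step-by-step listing of Algorithm 3.1 is not reproduced above, the following Python sketch is only one plausible reading of its structure, pieced together from Step 0, Remark 3.1 and the projection contraction update recalled for [15]; in particular, the extrapolation vn=((ψ−1)un+vn−1)/ψ, the step ηn=⟨vn−¯un, d(vn,¯un)⟩/‖d(vn,¯un)‖2 and the exact λn update are assumptions, not the authors' verbatim formulation.

```python
import numpy as np

def golden_ratio_pcm(A, P_C, u1, v0, lam1, psi=2.0, mu=0.6, gamma=1.5,
                     xi=lambda n: 1.0 + 1.0 / n**2, tau=lambda n: 1.0 / n**2,
                     tol=1e-6, max_iter=10000):
    """Hypothetical reconstruction of a golden-ratio-style projection contraction
    iteration; the update formulas are assumptions inferred from Remark 3.1 and
    the related method of [15]. Defaults for xi, tau follow Section 5."""
    u, v_prev, lam = np.asarray(u1, float), np.asarray(v0, float), lam1
    for n in range(1, max_iter + 1):
        v = ((psi - 1.0) * u + v_prev) / psi          # assumed extrapolation step
        u_bar = P_C(v - lam * A(v))
        if np.linalg.norm(v - u_bar) < tol:           # stopping criterion as in Section 5
            return u_bar, n
        d = v - u_bar - lam * (A(v) - A(u_bar))       # d(vn, u_bar_n) as in the text
        eta = np.dot(v - u_bar, d) / np.dot(d, d)     # assumed "optimal" PCM step size (cf. [12])
        u = v - gamma * eta * d
        diff = np.linalg.norm(A(v) - A(u_bar))
        cap = mu * np.linalg.norm(v - u_bar) / diff if diff > 0 else np.inf
        lam = min(xi(n) * lam + tau(n), cap)          # non-monotone self-adaptive step (Remark 3.1)
        v_prev = v
    return u, max_iter
```

Because the cap μ‖vn−¯un‖/‖Avn−A¯un‖ is always at least μ/L and ξnλn+τn ≥ λn, this rule keeps λn ≥ min{μ/L, λ1} without requiring knowledge of L, which is exactly the property used in Remark 3.1.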
Lemma 3.1.Assume {un} is the sequence generated by Algorithm 3.1 under the conditions of Assumption 3.1. Then {un} is bounded and limn→∞‖un−u∗‖ exists, where u∗∈S.
From the above proof, we have an≥0 and bn≥0. According to Lemma 2.5, limn→∞bn=0 and limn→∞an exists. Thus, we further obtain limn→∞‖vn−un+1‖2=0.
Lemma 3.3.Assume that {un} is generated by Algorithm 3.1, then ωw(un)⊂S.
Proof. Since {un} is bounded, ωw(un)≠∅. Arbitrarily choose q∈ωw(un), then there exists a subsequence {unk}⊂{un} such that unk⇀q. Then ¯unk−1⇀q,vnk−1⇀q. From Lemma 2.1(ii) and (3.2) we have
Fixing u∈C and letting k→∞ in (3.23), noting that ‖vnk−¯unk‖→0 and that {¯unk} and {Avnk} are bounded, we obtain
lim infk→∞ ⟨Avnk−1, u−vnk−1⟩ ≥ 0.
(3.24)
Choose a decreasing sequence {ϵk} such that ϵk>0 and limk→∞ϵk=0. For each ϵk,
AvNk≠0 and ⟨Avnj−1, u−vnj−1⟩+ϵk≥0, ∀j≥Nk,
(3.25)
where Nk is the smallest non-negative integer satisfying (3.25). As {ϵk} is decreasing, {Nk} is increasing. For simplicity, we write Nk for nNk. Setting
ϑNk−1 = AvNk−1/‖AvNk−1‖2,
one gets ⟨AvNk−1, ϑNk−1⟩ = ⟨AvNk−1, AvNk−1/‖AvNk−1‖2⟩ = 1. Then, by (3.25), for each k,
Since vnk−1⇀q as k→∞, it follows from Definition 2.1(d) that Avnk−1⇀Aq. Suppose Aq≠0 (if Aq=0, then q∈S). Then, using the sequential weak lower semicontinuity of the norm, we obtain
which means limk→∞‖ϵkϑNk−1‖=0. Letting k→∞ in (3.27), we get
⟨Au,u−q⟩≥0,∀u∈C.
From Lemma 2.3, q∈S, then ωw(un)⊂S. □
Theorem 3.1.Assume {un} is the sequence generated by Algorithm 3.1 under the conditions of Assumption 3.1. There exists u⋆∈S such that un⇀u⋆.
Proof. From Lemmas 3.1 and 3.3, we get limn→∞‖un−u∗‖ exists and ωw(un)⊂S. From Lemma 2.4, un⇀u⋆∈S. □
Theorem 3.2.Suppose that {un} is generated by Algorithm 3.1 and that A is η-strongly pseudo-monotone with η>0. Then {un} converges R-linearly to the unique solution u∗ of VI(A,C) (1.1).
Proof. Since ¯un∈C, from Definition 2.1(a), we have
⟨A¯un,¯un−u∗⟩≥η‖¯un−u∗‖2.
Multiplying both sides of the above inequality by λn, we get
Let δ2 = 1 − γ·((1−ρ)/(1+ρ)2)·min{(1/2)(2−γ)(1−ρ), (1/2)λη}. Then we have 0<δ2<1 and
‖un+1−u∗‖2<δ2‖vn−u∗‖2,∀n≥n′0.
(3.35)
Substituting (3.14) into (3.35) and rearranging, we get
(ψ/(ψ−1))‖vn+1−u∗‖2 < (δ2 + 1/(ψ−1))‖vn−u∗‖2, ∀n≥n′0.
(3.36)
Since 0<δ2<1, δ2 + 1/(ψ−1) < 1 + 1/(ψ−1) = ψ/(ψ−1). And we can get 0 < (δ2 + 1/(ψ−1))/(ψ/(ψ−1)) < 1. Therefore,
‖vn+1−u∗‖2<r2‖vn−u∗‖2,∀n≥n′0,
where r = √[(δ2 + 1/(ψ−1))/(ψ/(ψ−1))]. By induction, we get
‖vn+1−u∗‖2 < r^{2(n−n′0+1)}‖vn′0−u∗‖2, ∀n≥n′0.
By (3.35),
‖un+1−u∗‖2 < δ2·r^{2(n−n′0)}‖vn′0−u∗‖2, ∀n≥n′0.
And we have
‖un+1−u∗‖^{1/n} < r^{(n−n′0)/n}(δ‖vn′0−u∗‖)^{1/n}, ∀n≥n′0.
So
lim supn→∞ ‖un+1−u∗‖^{1/n} ≤ r < 1.
Therefore, {un} converges R-linearly to the unique solution u∗. □
4.
Alternating extrapolation projection contraction algorithm based on the golden ratio
In this section, we present an alternating extrapolation algorithm based on the golden ratio for solving the variational inequality problem and provide proofs of its weak and R-linear convergence.
Algorithm 4.1. Alternating extrapolation projection contraction algorithm based on the golden ratio.
Step 0: Take the iterative parameters μ∈(0,1), ψ∈(1,+∞), γ∈(0,2), and ξ1,τ1,λ1>0. Let u1∈H, v0∈H be given starting points, and let the sequences {ξn},{τn} be given. Set n:=1.
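The precise step listing of Algorithm 4.1 is likewise not reproduced here; the snippet below only illustrates the alternating extrapolation rule suggested by the scheme of [15] and by the identity ‖v2n−u∗‖=‖u2n−u∗‖ used in Lemma 4.1 (extrapolate on odd iterations only, so that vn=un when n is even), and it is an assumption rather than the authors' exact formulation.

```python
def extrapolate(n, u, v_prev, psi):
    # Assumed alternating rule: convex-combination extrapolation on odd n,
    # no extrapolation (v_n = u_n) on even n, cf. the identity used in Lemma 4.1.
    return ((psi - 1.0) * u + v_prev) / psi if n % 2 == 1 else u
```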
Lemma 4.1.Assume {un} is the sequence generated by Algorithm 4.1 under the conditions of Assumption 3.1. Then {u2n} is bounded and limn→∞‖u2n−u∗‖ exists, where u∗∈S.
Proof. Following the lines (3.7)–(3.13) of the proof of Lemma 3.1 and using ‖v2n−u∗‖2=‖u2n−u∗‖2, we obtain
The remaining steps are similar to those in the proof of Lemma 3.3 and are omitted. Thus, we reach the conclusion
⟨Au,u−p⟩≥0,∀u∈C.
Using Lemma 2.3, we get p∈S. □
Theorem 4.1.Assume {un} is the sequence generated by Algorithm 4.1 under the conditions of Assumption 3.1. There exists q∈S such that un⇀q.
Proof. Since {u2n} is bounded, it has weakly convergent subsequences; choose a subsequence {u2nk} of {u2n} such that u2nk⇀q∈H. From Lemmas 4.1 and 4.3, limn→∞‖u2n−q‖ exists and q∈S. The proof that the whole sequence satisfies u2n⇀q∈S is the same as that of Lemma 4.4 in [15]. Hence, un⇀q∈S. □
Theorem 4.2.Suppose that {un} is generated by Algorithm 4.1 and that A is η-strongly pseudo-monotone with η>0. Then {un} converges R-linearly to the unique solution u∗ of VI(A,C) (1.1).
Proof. From (3.35), ∀n≥n′0, we have
‖u2n+1−u∗‖2<δ2‖v2n−u∗‖2=δ2‖u2n−u∗‖2,
(4.17)
and
‖u2n+2−u∗‖2<δ2‖v2n+1−u∗‖2,
(4.18)
where δ2 = 1 − γ·((1−ρ)/(1+ρ)2)·min{(1/2)(2−γ)(1−ρ), (1/2)λη} and 0<δ2<1. Combining (4.9) and (4.18),
Therefore, {un} converges R-linearly to the unique solution u∗. □
5.
Numerical examples
The following sections provide some computational experiments and comparisons between the algorithms considered in Sections 3 and 4 and other algorithms. All codes were written in MATLAB R2016b and run on a desktop PC with an AMD Ryzen R7-5600U CPU @ 3.00 GHz and 16.00 GB of RAM.
We compare our Algorithm 3.1 and Algorithm 4.1 with Algorithm 2 in [15] and Algorithm 1 in [27]; 'Time' in the tables denotes CPU time. In this section, we set the maximum number of iterations nmax=6×10^5, ξn=1+1/n² and τn=1/n².
Example 5.1.Define A:Rm→Rm by
Au=(M+β)(Nu+q),
where M=e^{−uTQu}, N is a positive semi-definite matrix, Q is a positive definite matrix, q∈Rm and β>0. It can be verified that A is pseudo-monotone, differentiable and Lipschitz continuous. Take C={u∈Rm ∣ Bu≤b}, where B is a k∗×m matrix and b∈Rk∗+ with k∗=10. The initial point u1=(1,1,…,1)T is used for all algorithms; the additional initial point v0 of Algorithms 3.1 and 4.1 is generated randomly in Rm. We take ψ=(√5+1)/2, μ=0.6 in Algorithms 3.1 and 4.1, θn=(2−γ)/(1.01γ) in Algorithm 2 of [15] and θ=0.45(1−μ) in Algorithm 1 of [27]. We then take different values of λ1 and γ to compare with the algorithms in the other two papers. In this example, we use ‖¯un−vn‖<10^{−3} as the stopping criterion.
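To make the setup concrete, here is an illustrative (non-MATLAB) sketch of the operator of Example 5.1 and of the projection onto C={u : Bu≤b}, computed by solving the quadratic program min ‖x−z‖² s.t. Bx≤b with SciPy; the dimensions and random data below are placeholders, not the paper's actual test instances.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
m, k_star, beta = 20, 10, 0.5
G = rng.standard_normal((m, m)); Nmat = G @ G.T              # positive semi-definite N
H = rng.standard_normal((m, m)); Qmat = H @ H.T + np.eye(m)  # positive definite Q
q = rng.standard_normal(m)
B = rng.standard_normal((k_star, m))
b = rng.random(k_star)                                       # non-negative entries of b

def A(u):
    M = np.exp(-u @ Qmat @ u)                                # scalar M = exp(-u^T Q u)
    return (M + beta) * (Nmat @ u + q)                       # A(u) = (M + beta)(N u + q)

def project_C(z):
    # Euclidean projection onto the polyhedron {x : Bx <= b}
    res = minimize(lambda x: 0.5 * np.sum((x - z) ** 2), z,
                   jac=lambda x: x - z,
                   constraints=[{"type": "ineq", "fun": lambda x: b - B @ x}])
    return res.x

u1 = np.ones(m)                                              # initial point as in Example 5.1
print("||A(u1)|| =", np.linalg.norm(A(u1)),
      "  projection feasible:", np.all(B @ project_C(u1) <= b + 1e-6))
```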
In Table 1, we give a comparison of our Algorithms 3.1 and 4.1 with Algorithm 1 in [27] and Algorithm 2 in [15] in different dimensions for γ=1.5, λ1=0.05, and a comparison in Figure 1 for m=100. The results illustrate that our two algorithms have certain advantages.
Table 1.
Example 5.1 with γ=1.5,λ1=0.05 and various values of m.
In Tables 2 and 3, we compare Algorithm 3.1 and Algorithm 4.1 in the same dimension for different values of γ, respectively. We find that, for both algorithms in the same dimension, the larger γ∈(0,2) is, the fewer iterations are needed and the shorter the CPU time.
In Example 5.2, the matrices N and P are a randomly generated skew-symmetric matrix and a positive diagonal matrix, respectively. Assume C:={u∈Rm ∣ Mu≤p}, where the matrix M∈Rk×m and the vector p∈Rk are randomly generated, with all entries of p non-negative, so that 0∈C. Here VI(A,C) (1.1) has the unique solution u∗=0. Set ψ=(√5+1)/2, μ=1/√2 in Algorithms 3.1 and 4.1. We choose θn=(2−γ)/(1.01γ) in Algorithm 2 of [15] and θ=0.45(1−μ) in Algorithm 1 of [27]. Additionally, we take different values of λ1 and γ to compare with the algorithms in the other two papers. We use the stopping criterion ‖¯un−yn‖≤10^{−3}.
In Table 4, we give a comparison of our Algorithm 3.1 and Algorithm 4.1 with Algorithm 1 in [27] and Algorithm 2 in [15] in different dimensions for γ=1.5, λ1=0.05, and a comparison in Figure 2 for k=30, m=60.
Table 4.
Example 5.2 with γ=1.5,λ1=0.05 and various values of k,m.
Remark 5.1.From Figures 1 and 2, we can see that the projection contraction algorithms based on the golden ratio have numerical advantages over the inertial extrapolation methods, and that the alternating extrapolation projection contraction algorithm performs better than the projection contraction algorithm based on the golden ratio. Moreover, it can be seen from Figures 3 and 4 that our algorithms converge faster for larger ψ.
6.
Conclusions
We present a projection contraction algorithm and an alternating extrapolation projection contraction algorithm based on the golden ratio for solving the pseudo-monotone variational inequality problem in real Hilbert spaces. We give proofs of weak convergence of the two algorithms when the operator is pseudo-monotone, and we obtain R-linear convergence when A is a strongly pseudo-monotone mapping. We have extended the range of ψ from (1,(√5+1)/2] to (1,+∞), and the proofs for both algorithms are given without requiring knowledge of the Lipschitz constant. Numerical examples demonstrate the advantages of our algorithms; in particular, they are less affected by unfavorable conditions and exhibit a relatively stable rate of convergence.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
The first author is supported by the Fundamental Science Research Funds for the Central Universities (Program No. 3122018L004).
Conflict of interest
The authors declare there is no conflict of interest.
Author's biography
Dr. Min Lun Wu is an associate professor of instruction in Innovative Learning Design and Technology at Ohio University, USA. He specializes in educational technology; his research areas are digital game-based learning, gamification, instructional leadership, computational thinking, educational game design, and online education. Dr. Lan Li is a professor of Classroom Technology at Bowling Green State University, USA. Her research interests include technology-assisted teaching and learning, teacher preparation, peer assessment, and online education. Dr. Yuchun Zhou is an associate professor of Educational Research and Evaluation at Ohio University, USA. Her research areas include mixed methodology, scale development, validation analysis, SEM, and program evaluation.