Research article

Integrating research into clinical practice for hip fracture rehabilitation: Implementation of a pragmatic RCT

  • Received: 05 December 2017 Accepted: 02 March 2018 Published: 17 March 2018
  • Objective: Testing clinical interventions integrated within health care delivery systems provides advantages, but it is important to distinguish between the design of an intervention and the operational elements required for its effective implementation. Thus, the objective of this study was to describe contextual factors for an outpatient follow-up clinic for older adults with hip fracture. Design: Implementation evaluation of a parallel-group 1:1 single-blinded two-arm pragmatic randomized controlled trial. Setting: Hospital-based multi-disciplinary outpatient clinic in Vancouver, Canada. Participants: Community-dwelling older adults (≥ 65 years) with hip fracture in the previous year. Interventions: Usual care vs. usual care plus a comprehensive geriatric clinic for older adults after hip fracture. The primary outcome for the main study was mobility as measured by the Short Physical Performance Battery. Outcome measures: A description of central tenets of implementation, including recruitment, participant characteristics (reach), and aspects of the innovation (e.g., delivery system, fidelity to the intervention, and exercise dose delivered and enacted). Results: We identified the reach for the intervention and delivered the intervention as intended. Fifty-three older adults enrolled in the study; more than 90% of participants returned for the final assessment. We provide a comprehensive description of the intervention and report on dose delivered to participants and participants' 12-month maintenance of balance and strength exercises. Conclusions: It is important to move beyond solely assessing outcomes of an intervention and to describe factors that influence effective implementation. This is essential if we are to replicate interventions across settings or populations, or deliver interventions at broad scale to affect the health of patients in the future. Trial registration: This trial was registered on ClinicalTrials.gov (NCT01254942).

    Citation: Maureen C. Ashe, Khalil Merali, Nicola Edwards, Claire Schiller, Heather M. Hanson, Lena Fleig, Karim M. Khan, Wendy L. Cook, Heather A. McKay. Integrating research into clinical practice for hip fracture rehabilitation: Implementation of a pragmatic RCT[J]. AIMS Medical Science, 2018, 5(2): 102-121. doi: 10.3934/medsci.2018.2.102



    Consider the large sparse linear matrix equation

    $ AXB=C, $ (1.1)

    where $ A\in \mathbb{C}^{m \times m} $ and $ B\in \mathbb{C}^{n \times n} $ are non-Hermitian positive definite matrices, and $ C\in \mathbb{C}^{m \times n} $ is a given complex matrix. Such matrix equations arise in many areas of scientific computing and engineering, such as signal and image processing [9,20], control theory [11], and photogrammetry [19]. Solving them by efficient methods is therefore an important topic.

    We often rewrite the above matrix Eq (1.1) as the following linear system

    $ (B^{T}\otimes A)\mathbf{x}=\mathbf{c}, $ (1.2)

    where the vectors $ \mathbf{x} $ and $ \mathbf{c} $ contain the concatenated columns of the matrices $ X $ and $ C $, respectively, $ \otimes $ denotes the Kronecker product, and $ B^T $ denotes the transpose of the matrix $ B $. Although this equivalent linear system is useful in theoretical analysis, solving (1.2) directly is usually costly, and the system is often ill-conditioned.
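    As a small illustration of this equivalence, the following numpy sketch (our own example, not from the paper) checks that $ \mathrm{vec}(AXB) = (B^T\otimes A)\,\mathrm{vec}(X) $ with the column-stacking (column-major) vec convention, and shows why (1.2) is avoided for large problems: the Kronecker matrix has dimension $ mn \times mn $.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 4, 3

    # Small random test matrices, diagonally shifted so they are nonsingular.
    A = rng.standard_normal((m, m)) + m * np.eye(m)
    B = rng.standard_normal((n, n)) + n * np.eye(n)
    X = rng.standard_normal((m, n))
    C = A @ X @ B

    # vec stacks columns, i.e., column-major (Fortran) order.
    x = X.flatten(order="F")
    c = C.flatten(order="F")
    K = np.kron(B.T, A)          # (B^T ⊗ A), an (m*n) x (m*n) matrix

    assert np.allclose(K @ x, c)  # vec(AXB) = (B^T ⊗ A) vec(X)

    # Solving the Kronecker system recovers X, but at the cost of a
    # much larger dense system -- the reason (1.2) is impractical for
    # large m and n.
    X_rec = np.linalg.solve(K, c).reshape((m, n), order="F")
    assert np.allclose(X_rec, X)
    ```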

    So far there are many numerical methods to solve the matrix Eq (1.1). When the coefficient matrices are not large, we can use some direct algorithms, such as the QR-factorization-based algorithms [13,28]. Iterative methods are usually employed for large sparse matrix Eq (1.1), for instance, least-squares-based iteration methods [26] and gradient-based iteration methods [25]. Moreover, the nested splitting conjugate gradient (NSCG) iterative method, which was first proposed by Axelsson, Bai and Qiu in [1] to solve linear systems, was considered for the matrix Eq (1.1) in [14].

    Bai, Golub and Ng originally established the efficient Hermitian and skew-Hermitian splitting (HSS) iterative method [5] for linear systems with non-Hermitian positive definite coefficient matrices. Subsequently, some HSS-based methods were further developed to improve its robustness for linear systems; see [3,4,8,17,24,27] and other literature. For solving the continuous Sylvester equation, Bai later established an HSS iteration method [2], and several HSS-based methods were then discussed for this Sylvester equation [12,15,16,18,22,30,31,32,33]. For the matrix Eq (1.1), Wang, Li and Dai recently used an inner-outer iteration strategy to propose an HSS iteration method [23]. According to the discussion in [23], if the quasi-optimal parameter is employed, the upper bound of the convergence rate is equal to that of the CG method. After that, Zhang, Yang and Wu considered a more efficient parameterized preconditioned HSS (PPHSS) iteration method [29] to further improve the efficiency of solving the matrix Eq (1.1), and Zhou, Wang and Zhou presented a modified HSS (MHSS) iteration method [34] for solving a class of complex matrix Eq (1.1).

    Moreover, the shift-splitting (SS) iteration method [7] was first presented by Bai, Yin and Su to solve ill-conditioned linear systems. This splitting was subsequently applied to saddle point problems due to its promising performance; see [10,21] and other literature. In this paper, the SS technique is implemented to solve the matrix Eq (1.1). Related convergence theorems of the SS method are discussed in detail. Numerical examples demonstrate that the SS method is superior to the HSS and NSCG methods, especially when the coefficient matrices are ill-conditioned.

    The content of this paper is arranged as follows. In Section 2 we establish the SS method for solving the matrix Eq (1.1), and then some related convergence properties are studied in Section 3. In Section 4, the effectiveness of our method is illustrated by two numerical examples. Finally, our brief conclusions are given in Section 5.

    Based on the shift-splitting proposed by Bai, Yin and Su in [7], we have the shift-splitting of $ A $ and $ B $ as follows:

    $ A=\frac{1}{2}(\alpha I_m+A)-\frac{1}{2}(\alpha I_m-A), $ (2.1)

    and

    $ B=\frac{1}{2}(\beta I_n+B)-\frac{1}{2}(\beta I_n-B), $ (2.2)

    where $ \alpha $ and $ \beta $ are given positive constants.

    Therefore, using the splitting of the matrix $ A $ in (2.1), the following splitting iteration method to solve (1.1) can be defined:

    $ (\alpha I_m+A)X^{(k+1)}B=(\alpha I_m-A)X^{(k)}B+2C. $ (2.3)

    Then, from the splitting of the matrix $ B $ in (2.2), we can solve each step of (2.3) iteratively by

    $ (\alpha I_m+A)X^{(k+1,j+1)}(\beta I_n+B)=(\alpha I_m+A)X^{(k+1,j)}(\beta I_n-B)+2(\alpha I_m-A)X^{(k)}B+4C. $ (2.4)

    Therefore, we can establish the following shift-splitting (SS) iteration method to solve (1.1).

    Algorithm 1 (The SS iteration method). Given an initial guess $ X^{(0)}\in \mathbb{C}^{m \times n} $, for $ k = 0, 1, 2, \ldots $, until $ X^{(k)} $ converges.

    Approximate the solution of

    $ (\alpha I_m+A)Z^{(k)}B=2R^{(k)} $ (2.5)

    with $ R^{(k)} = C-AX^{(k)}B $, i.e., let $ Z^{(k)}: = Z^{(k, j+1)} $ and compute $ Z^{(k, j+1)} $ iteratively by

    $ (\alpha I_m+A)Z^{(k,j+1)}(\beta I_n+B)=(\alpha I_m+A)Z^{(k,j)}(\beta I_n-B)+4R^{(k)}, $ (2.6)

    once the residual $ P^{(k)} = 2R^{(k)}-(\alpha I_m+A)Z^{(k, j+1)}B $ of the outer iteration (2.5) satisfies

    $ \|P^{(k)}\|_F\leq\varepsilon_k\|R^{(k)}\|_F, $

    where $ \|\cdot\|_F $ denotes the Frobenius norm of a matrix. Then compute

    $ X^{(k+1)} = X^{(k)}+Z^{(k)}. $

    Here, $ \{\varepsilon_k\} $ is a given tolerance. In addition, we can choose efficient methods in the process of computing $ Z^{(k, j+1)} $ in (2.6).

    The pseudo-code of this algorithm is as follows:

    The pseudo-code of the SS algorithm for matrix equation $ AXB = C $
    1. Given an initial guess $ X^{(0)}\in \mathbb{C}^{m \times n} $
    2. $ R^{(0)} = C-AX^{(0)}B $
    3. For $ k = 0, 1, 2, \ldots, k_{\text{max}} $ Do:
    4. Given an initial guess $ Z^{(k, 0)}\in \mathbb{C}^{m \times n} $
    5. $ P^{(k, 0)} = 2R^{(k)}-(\alpha I_m+A)Z^{(k, 0)}B $
    6. For $ j = 0, 1, 2, \ldots, j_{\text{max}} $ Do:
    7. Compute $ Z^{(k, j+1)} $ iteratively by
      $ (\alpha I_m+A)Z^{(k, j+1)}(\beta I_n+B) = (\alpha I_m+A)Z^{(k, j)}(\beta I_n-B)+4R^{(k)} $
    8. $ P^{(k, j+1)} = 2R^{(k)}-(\alpha I_m+A)Z^{(k, j+1)}B $
    9. If $ \|P^{(k, j+1)}\|_F\leq\varepsilon_k\|R^{(k)}\|_F $ Go To 11
    10. End Do
    11. $ X^{(k+1)} = X^{(k)}+Z^{(k)} $
    12. $ R^{(k+1)} = C-AX^{(k+1)}B $
    13. If $ \|R^{(k+1)}\|_F\leq \text{tol}\|R^{(0)}\|_F $ Stop
    14. End Do
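    To make the inner-outer structure concrete, here is a minimal dense-matrix sketch of Algorithm 1 in Python. The function name `ss_iteration` and its parameter defaults are ours; the shifted systems are solved directly, whereas for large sparse problems factorizations or sparse solvers would be used instead.

    ```python
    import numpy as np

    def ss_iteration(A, B, C, alpha, beta, eps=1e-2, tol=1e-6,
                     kmax=1000, jmax=1000):
        """Dense sketch of the SS inner-outer iteration (Algorithm 1)."""
        m, n = A.shape[0], B.shape[0]
        Ap = alpha * np.eye(m) + A        # alpha*I + A
        Bp = beta * np.eye(n) + B         # beta*I + B
        Bm = beta * np.eye(n) - B         # beta*I - B
        Bp_inv = np.linalg.inv(Bp)
        X = np.zeros((m, n))
        R = C - A @ X @ B
        R0 = np.linalg.norm(R, "fro")
        for k in range(kmax):
            Z = np.zeros((m, n))          # inner initial guess Z^{(k,0)} = 0
            for _ in range(jmax):
                # (alpha I + A) Z^{(k,j+1)} (beta I + B)
                #     = (alpha I + A) Z^{(k,j)} (beta I - B) + 4 R^{(k)}
                rhs = Ap @ Z @ Bm + 4.0 * R
                Z = np.linalg.solve(Ap, rhs) @ Bp_inv
                # inner residual P^{(k,j+1)} = 2R^{(k)} - (alpha I + A) Z B
                P = 2.0 * R - Ap @ Z @ B
                if np.linalg.norm(P, "fro") <= eps * np.linalg.norm(R, "fro"):
                    break
            X = X + Z                     # outer update
            R = C - A @ X @ B
            if np.linalg.norm(R, "fro") <= tol * R0:
                return X, k + 1
        return X, kmax
    ```

    With the zero initial guess, $ R^{(0)} = C $, so the stopping rule coincides with the relative-residual criterion used in the numerical experiments below.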


    Remark 1. Because the SS iteration scheme is a single-step method, a considerable advantage is that it requires less computational work per step than two-step iteration methods such as the HSS iteration [23] and the modified HSS (MHSS) iteration [34].

    In this section, we denote by

    $ H=\frac{1}{2}\left(A+A^{*}\right)\qquad \text{and}\qquad S=\frac{1}{2}\left(A-A^{*}\right) $

    the Hermitian and skew-Hermitian parts of the matrix $ A $, respectively. Moreover, $ \lambda_{\min} $ and $ \lambda_{\max} $ represent the smallest and the largest eigenvalues of $ H $, respectively, and $ \kappa = \lambda_{\max}/\lambda_{\min} $.

    Firstly, the unconditional convergence property of the SS iteration (2.3) is given as follows.

    Theorem 1. Let $ A\in \mathbb{C}^{m \times m} $ be positive definite, and $ \alpha $ be a positive constant. Denote by

    $ M(\alpha)=I_n\otimes\left((\alpha I_m+A)^{-1}(\alpha I_m-A)\right). $ (3.1)

    Then the convergence factor of the SS iteration method (2.3) is given by the spectral radius $ \rho(M(\alpha)) $ of the matrix $ M(\alpha) $, which is bounded by

    $ \varphi(\alpha):=\left\|(\alpha I_m+A)^{-1}(\alpha I_m-A)\right\|_2. $ (3.2)

    Consequently, we have

    $ \rho(M(\alpha))\leq\varphi(\alpha)<1,\quad \forall\,\alpha>0, $ (3.3)

    i.e., the SS iteration (2.3) is unconditionally convergent to the exact solution $ X^{\star}\in \mathbb{C}^{m \times n} $ of the matrix Eq (1.1).

    Proof. The SS iteration (2.3) can be reformulated as

    $ X^{(k+1)}=(\alpha I_m+A)^{-1}(\alpha I_m-A)X^{(k)}+2(\alpha I_m+A)^{-1}CB^{-1}. $ (3.4)

    Using the Kronecker product, we can rewrite (3.4) as follows:

    $ \mathbf{x}^{(k+1)} = M(\alpha)\mathbf{x}^{(k)}+N(\alpha)\mathbf{c}, $

    where $ M(\alpha) $ is the iteration matrix defined in (3.1), and $ N(\alpha) = 2B^{-T}\otimes(\alpha I_m+A)^{-1} $.

    We can easily see that $ \rho(M(\alpha))\leq\varphi(\alpha) $ holds for all $ \alpha > 0 $. From Lemma 2.1 in [8], we can obtain that $ \varphi(\alpha) < 1 $, $ \forall \alpha > 0 $. This completes the proof.
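    The bound in Theorem 1 is easy to check numerically. The sketch below (our own example) builds a normal positive definite test matrix $ A $, forms the iteration matrix $ M(\alpha) $ of (3.1), and verifies $ \rho(M(\alpha))\leq\varphi(\alpha)<1 $; note that $ \rho(M(\alpha)) = \rho(T) $ and $ \|I_n\otimes T\|_2 = \|T\|_2 $ for $ T=(\alpha I_m+A)^{-1}(\alpha I_m-A) $.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    m, n = 6, 4

    # A normal positive definite test matrix: identity shift plus a
    # skew-symmetric part, so all eigenvalues have positive real part.
    W = rng.standard_normal((m, m))
    A = 2.0 * np.eye(m) + (W - W.T)      # H = 2I > 0, S = W - W^T

    alpha = 1.3
    T = np.linalg.solve(alpha * np.eye(m) + A, alpha * np.eye(m) - A)
    M_alpha = np.kron(np.eye(n), T)      # iteration matrix (3.1)

    rho = np.max(np.abs(np.linalg.eigvals(M_alpha)))  # rho(M(alpha))
    phi = np.linalg.norm(T, 2)                        # bound (3.2)

    assert rho <= phi + 1e-12            # rho(M(alpha)) <= varphi(alpha)
    assert phi < 1.0                     # varphi(alpha) < 1
    ```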

    Noting that the matrix $ (\alpha I_m+A)^{-1}(\alpha I_m-A) $ is an extrapolated Cayley transform of $ A $, from [6], we can obtain another upper bound for the convergence factor of $ \rho(M(\alpha)) $, as well as the minimum point and the corresponding minimal value of this upper bound.

    Theorem 2. Let the conditions of Theorem 1 be satisfied. Denote by

    $ \sigma(\alpha)=\max\limits_{\lambda\in[\lambda_{\min},\lambda_{\max}]}\frac{|\alpha-\lambda|}{\alpha+\lambda},\qquad \zeta(\alpha)=\frac{\|S\|_2}{\alpha+\lambda_{\min}}. $

    Then for the convergence factor of $ \rho(M(\alpha)) $ it holds that

    $ \rho(M(\alpha))\leq\left(\frac{(\sigma(\alpha))^2+(\zeta(\alpha))^2}{1+(\zeta(\alpha))^2}\right)^{1/2}\equiv\phi(\alpha)<1. $ (3.5)

    Moreover, at

    $ \alpha_\star=\begin{cases}\sqrt{\lambda_{\min}\lambda_{\max}}, & \text{for } \|S\|_2\leq\lambda_{\min}\sqrt{\kappa-1},\\[2pt] \sqrt{\lambda_{\min}^2+\|S\|_2^2}, & \text{for } \|S\|_2\geq\lambda_{\min}\sqrt{\kappa-1}, \end{cases} $ (3.6)

    the function $ \phi(\alpha) $ attains its minimum

    $ \phi(\alpha_\star)=\begin{cases}\left(\frac{\eta^2+\tau^2}{1+\tau^2}\right)^{1/2}, & \text{for } \|S\|_2\leq\lambda_{\min}\sqrt{\kappa-1},\\[2pt] \left(\frac{1-\upsilon}{1+\upsilon}\right)^{1/2}, & \text{for } \|S\|_2\geq\lambda_{\min}\sqrt{\kappa-1}, \end{cases} $ (3.7)

    where

    $ \eta=\frac{\kappa-1}{\kappa+1},\qquad \tau=\frac{\|S\|_2}{(\kappa+1)\lambda_{\min}}\qquad \text{and}\qquad \upsilon=\frac{\lambda_{\min}}{\sqrt{\lambda_{\min}^2+\|S\|_2^2}}. $

    Proof. From Theorem 3.1 in [6], we can directly obtain (3.2)–(3.7).

    Remark 2. $ \alpha_\star $ is called the theoretical quasi-optimal parameter of the SS iteration method. Similarly, the theoretical quasi-optimal parameter $ \beta_\star $ of the inner iterations (2.6) can be also obtained, which has the same form as $ \alpha_\star $.
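    Formula (3.6) is straightforward to evaluate in practice. The helper below (our own sketch; the function name is ours) computes the theoretical quasi-optimal shift from the Hermitian and skew-Hermitian parts of $ A $, assuming $ A $ is positive definite as in Theorem 1.

    ```python
    import numpy as np

    def quasi_optimal_alpha(A):
        """Theoretical quasi-optimal shift alpha_star per (3.6)."""
        H = 0.5 * (A + A.conj().T)       # Hermitian part
        S = 0.5 * (A - A.conj().T)       # skew-Hermitian part
        lam = np.linalg.eigvalsh(H)
        lam_min, lam_max = lam[0], lam[-1]
        kappa = lam_max / lam_min
        s2 = np.linalg.norm(S, 2)        # ||S||_2
        if s2 <= lam_min * np.sqrt(max(kappa - 1.0, 0.0)):
            return np.sqrt(lam_min * lam_max)
        return np.sqrt(lam_min**2 + s2**2)
    ```

    For a Hermitian matrix ($ S = 0 $) the first branch always applies and $ \alpha_\star=\sqrt{\lambda_{\min}\lambda_{\max}} $, the familiar HSS-type optimum; a strongly skew-dominated matrix falls into the second branch.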

    In the following, we present another convergence theorem for a new form.

    Theorem 3. Let the conditions of Theorem 1 be satisfied. If $ \{X^{(k)}\}_{k = 0}^{\infty}\subseteq \mathbb{C}^{m \times n} $ is an iteration sequence generated by Algorithm 1 and if $ X^{\star}\in \mathbb{C}^{m \times n} $ is the exact solution of the matrix Eq (1.1), then it holds that

    $ \|X^{(k+1)}-X^{\star}\|_F\leq\left(\varphi(\alpha)+\mu\theta\varepsilon_k\right)\|X^{(k)}-X^{\star}\|_F,\quad k=0,1,2,\ldots, $

    where the constants $ \mu $ and $ \theta $ are given by

    $ \mu=\left\|B^{-T}\otimes(\alpha I_m+A)^{-1}\right\|_2,\qquad \theta=\left\|B^{T}\otimes A\right\|_2. $

    In particular, when

    $ \varphi(\alpha)+\mu\theta\varepsilon_{\max}<1, $ (3.8)

    the iteration sequence $ \{X^{(k)}\}_{k = 0}^{\infty} $ converges to $ X^{\star} $, where $ \varepsilon_{\max} = \max_{k}\{\varepsilon_k\} $.

    Proof. We can rewrite the SS iteration in Algorithm 1 as the following form:

    $ \left(B^{T}\otimes(\alpha I_m+A)\right)\mathbf{z}^{(k)}=2\mathbf{r}^{(k)},\qquad \mathbf{x}^{(k+1)}=\mathbf{x}^{(k)}+\mathbf{z}^{(k)}, $ (3.9)

    with $ \mathbf{r}^{(k)} = \mathbf{c}-(B^{T}\otimes A)\mathbf{x}^{(k)} $, where $ \mathbf{z}^{(k)} $ is such that the residual

    $ \mathbf{p}^{(k)}=2\mathbf{r}^{(k)}-\left(B^{T}\otimes(\alpha I_m+A)\right)\mathbf{z}^{(k)} $

    satisfies $ \|\mathbf{p}^{(k)}\|_2\leq\varepsilon_k\|\mathbf{r}^{(k)}\|_2 $.

    In fact, the inexact variant of the SS iteration method for solving the linear system (1.2) is just the above iteration scheme (3.9). From (3.9), we obtain

    $ \begin{aligned} \mathbf{x}^{(k+1)} &= \mathbf{x}^{(k)}+\left(B^{T}\otimes(\alpha I_m+A)\right)^{-1}\left(2\mathbf{r}^{(k)}-\mathbf{p}^{(k)}\right)\\ &= \mathbf{x}^{(k)}+\left(B^{T}\otimes(\alpha I_m+A)\right)^{-1}\left(2\mathbf{c}-2(B^{T}\otimes A)\mathbf{x}^{(k)}-\mathbf{p}^{(k)}\right)\\ &= \left(I_n\otimes\left((\alpha I_m+A)^{-1}(\alpha I_m-A)\right)\right)\mathbf{x}^{(k)}+2\left(B^{-T}\otimes(\alpha I_m+A)^{-1}\right)\mathbf{c}-\left(B^{-T}\otimes(\alpha I_m+A)^{-1}\right)\mathbf{p}^{(k)}. \end{aligned} $ (3.10)

    Because $ \mathbf{x}^{\star}\in \mathbb{C}^{mn} $ is the exact solution of the linear system (1.2), it must satisfy

    $ \mathbf{x}^{\star}=\left(I_n\otimes\left((\alpha I_m+A)^{-1}(\alpha I_m-A)\right)\right)\mathbf{x}^{\star}+2\left(B^{-T}\otimes(\alpha I_m+A)^{-1}\right)\mathbf{c}. $ (3.11)

    By subtracting (3.11) from (3.10), we have

    $ \mathbf{x}^{(k+1)}-\mathbf{x}^{\star}=\left(I_n\otimes\left((\alpha I_m+A)^{-1}(\alpha I_m-A)\right)\right)\left(\mathbf{x}^{(k)}-\mathbf{x}^{\star}\right)-\left(B^{-T}\otimes(\alpha I_m+A)^{-1}\right)\mathbf{p}^{(k)}. $ (3.12)

    Taking norms on both sides from (3.12), then

    $ \begin{aligned} \|\mathbf{x}^{(k+1)}-\mathbf{x}^{\star}\|_2 &\leq \left\|I_n\otimes\left((\alpha I_m+A)^{-1}(\alpha I_m-A)\right)\right\|_2\|\mathbf{x}^{(k)}-\mathbf{x}^{\star}\|_2+\left\|B^{-T}\otimes(\alpha I_m+A)^{-1}\right\|_2\|\mathbf{p}^{(k)}\|_2\\ &\leq \varphi(\alpha)\|\mathbf{x}^{(k)}-\mathbf{x}^{\star}\|_2+\mu\varepsilon_k\|\mathbf{r}^{(k)}\|_2. \end{aligned} $ (3.13)

    Noticing that

    $ \|\mathbf{r}^{(k)}\|_2=\left\|\mathbf{c}-(B^{T}\otimes A)\mathbf{x}^{(k)}\right\|_2=\left\|(B^{T}\otimes A)\left(\mathbf{x}^{\star}-\mathbf{x}^{(k)}\right)\right\|_2\leq\theta\|\mathbf{x}^{(k)}-\mathbf{x}^{\star}\|_2, $

    by (3.13) the estimate

    $ \|\mathbf{x}^{(k+1)}-\mathbf{x}^{\star}\|_2\leq\left(\varphi(\alpha)+\mu\theta\varepsilon_k\right)\|\mathbf{x}^{(k)}-\mathbf{x}^{\star}\|_2,\quad k=0,1,2,\ldots, $ (3.14)

    can be obtained. Note that for a matrix $ Y\in \mathbb{C}^{m \times n} $, $ \|Y\|_F = ||\mathbf{y}||_2 $, where the vector $ \mathbf{y} $ contains the concatenated columns of the matrix $ Y $. Then the estimate (3.14) can be equivalently rewritten as

    $ \|X^{(k+1)}-X^{\star}\|_F\leq\left(\varphi(\alpha)+\mu\theta\varepsilon_k\right)\|X^{(k)}-X^{\star}\|_F,\quad k=0,1,2,\ldots. $

    This completes the proof.

    Remark 3. From Theorem 3 we know that, in order to guarantee the convergence of the SS iteration, it is not necessary for the condition $ \varepsilon_k\rightarrow 0 $. All we need is that the condition (3.8) is satisfied.

    In this section, two different matrix equations are solved by the HSS, SS and NSCG iteration methods. The efficiencies of the above iteration methods are examined by comparing the number of outer iteration steps (denoted by IT-out), the average number of inner iteration steps (denoted by IT-in-1 and IT-in-2 for the HSS, IT-in for the SS), and the elapsed CPU times (denoted by CPU). The notation "–" shows that no solution has been obtained after 1000 outer iteration steps.

    The initial guess is the zero matrix. All iterations are terminated once $ X^{(k)} $ satisfies

    $ \frac{\|C-AX^{(k)}B\|_F}{\|C\|_F}\leq 10^{-6}. $

    We set $ \varepsilon_k = 0.01 $, $ k = 0, 1, 2, \ldots $ to be the tolerances for all the inner iteration schemes.

    Moreover, in practical computation, we choose direct algorithms to solve all sub-equations involved in each step. We use Cholesky and LU factorization for the Hermitian and non-Hermitian coefficient matrices, respectively.

    Example 1 ([2]) We consider the matrix Eq (1.1) with $ m = n $ and

    $ A=M+5qN+\frac{100}{(n+1)^2}I\qquad \text{and}\qquad B=M+2qN+\frac{100}{(n+1)^2}I, $

    where $ M, N\in \mathbb{R}^{n \times n} $ are two tridiagonal matrices as follows:

    $ M=\operatorname{tridiag}(-1,2,-1)\qquad \text{and}\qquad N=\operatorname{tridiag}(0.5,0,-0.5). $
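    For reference, the test matrices of Example 1 can be assembled as follows (a dense sketch with a function name of our choosing; the tridiagonal signs are reconstructed so that $ M $ is symmetric positive definite and $ N $ is skew-symmetric, and `scipy.sparse` would be used for large $ n $).

    ```python
    import numpy as np

    def example1_matrices(n, q):
        """Coefficient matrices A, B of Example 1."""
        I = np.eye(n)
        M = 2 * I - np.eye(n, k=1) - np.eye(n, k=-1)      # tridiag(-1, 2, -1)
        N = 0.5 * np.eye(n, k=-1) - 0.5 * np.eye(n, k=1)  # tridiag(0.5, 0, -0.5)
        c = 100.0 / (n + 1) ** 2
        A = M + 5 * q * N + c * I
        B = M + 2 * q * N + c * I
        return A, B
    ```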

    In Tables 1 and 2, the theoretical quasi-optimal parameters and experimental optimal parameters of HSS and SS are listed, respectively. In Tables 3 and 4, the numerical results of HSS and SS are listed.

    Table 1.  The theoretical quasi-optimal parameters of HSS and SS for Example 1.
    Method HSS SS
    $ n $ $ q $ $ \alpha_{\text{quasi}} $ $ \beta_{\text{quasi}} $ $ \alpha_{\text{quasi}} $ $ \beta_{\text{quasi}} $
    $ n=16 $ $ q=0.1 $ 1.28 1.28 1.28 1.28
    $ q=0.3 $ 1.28 1.28 1.52 1.28
    $ q=1 $ 1.28 1.28 4.93 2.00
    $ n=32 $ $ q=0.1 $ 0.64 0.64 0.64 0.64
    $ q=0.3 $ 0.64 0.64 1.50 0.64
    $ q=1 $ 0.64 0.64 4.98 1.99
    $ n=64 $ $ q=0.1 $ 0.32 0.32 0.50 0.32
    $ q=0.3 $ 0.32 0.32 1.50 0.60
    $ q=1 $ 0.32 0.32 4.99 2.00
    $ n=128 $ $ q=0.1 $ 0.16 0.16 0.50 0.20
    $ q=0.3 $ 0.16 0.16 1.50 0.60
    $ q=1 $ 0.16 0.16 5.00 2.00

    Table 2.  The experimental optimal iteration parameters of HSS and SS for Example 1.
    Method HSS SS
    $ n $ $ q $ $ \alpha_{\exp} $ $ \beta_{\exp} $ $ \alpha_{\exp} $ $ \beta_{\exp} $
    $ n=16 $ $ q=0.1 $ 1.22 1.14 1.14 0.98
    $ q=0.3 $ 1.40 1.12 1.66 1.16
    $ q=1 $ 1.96 1.30 0.36 1.74
    $ n=32 $ $ q=0.1 $ 1.84 0.72 0.70 0.66
    $ q=0.3 $ 1.04 0.72 1.12 0.68
    $ q=1 $ 1.70 0.90 3.02 0.84
    $ n=64 $ $ q=0.1 $ 3.00 0.40 0.20 0.40
    $ q=0.3 $ 1.10 0.40 0.90 0.50
    $ q=1 $ 1.30 0.60 2.30 0.70
    $ n=128 $ $ q=0.1 $ 3.00 0.30 0.30 0.20
    $ q=0.3 $ 1.10 0.30 0.60 0.30
    $ q=1 $ 1.10 0.80 2.90 0.60

    Table 3.  Numerical results of HSS and SS with the theoretical quasi-optimal parameters for Example 1.
    Method HSS SS
    $ n $ $ q $ IT-out IT-in-1 IT-in-2 CPU IT-out IT-in CPU
    $ n=16 $ $ q=0.1 $ 22 7.2 7.0 0.0151 11 4.0 0.0036
    $ q=0.3 $ 16 7.0 7.0 0.0104 9 4.0 0.0029
    $ q=1 $ 20 6.0 6.0 0.0117 17 5.0 0.0078
    $ n=32 $ $ q=0.1 $ 36 14.0 14.0 0.1610 19 6.9 0.0295
    $ q=0.3 $ 33 14.0 14.0 0.1216 15 7.0 0.0269
    $ q=1 $ 39 11.0 11.0 0.1168 24 10.0 0.0472
    $ n=64 $ $ q=0.1 $ 68 28.3 28.3 1.6490 30 13.0 0.2677
    $ q=0.3 $ 74 24.7 24.8 1.5486 27 16.0 0.2906
    $ q=1 $ 87 25.0 25.0 1.8381 35 20.0 0.4377
    $ n=128 $ $ q=0.1 $ 144 54.5 54.7 34.394 57 21.2 4.3253
    $ q=0.3 $ 188 45.1 45.9 36.729 48 35.0 6.2253
    $ q=1 $ 465 52.0 52.0 104.331 52 38.0 6.7005

    Table 4.  Numerical results of HSS and SS with the experimental optimal iteration parameters for Example 1.
    Method HSS SS
    $ n $ $ q $ IT-out IT-in-1 IT-in-2 CPU IT-out IT-in CPU
    $ n=16 $ $ q=0.1 $ 19 7.0 7.0 0.0096 11 4.0 0.0023
    $ q=0.3 $ 15 8.0 8.0 0.0093 8 4.0 0.0017
    $ q=1 $ 16 6.0 6.0 0.0070 11 4.0 0.0022
    $ n=32 $ $ q=0.1 $ 29 13.0 13.0 0.0853 18 7.0 0.0227
    $ q=0.3 $ 23 13.0 13.0 0.0667 12 7.0 0.0162
    $ q=1 $ 25 11.0 11.0 0.0637 20 7.0 0.0244
    $ n=64 $ $ q=0.1 $ 118 24.0 24.0 2.0849 30 10.5 0.1952
    $ q=0.3 $ 41 23.0 23.0 0.6952 16 11.1 0.1008
    $ q=1 $ 37 18.0 18.0 0.4733 30 10.0 0.1650
    $ n=128 $ $ q=0.1 $ 229 39.7 39.7 32.032 40 20.5 2.4942
    $ q=0.3 $ 70 35.0 35.0 8.6951 22 18.0 1.2280
    $ q=1 $ 53 38.6 38.6 7.9165 45 14.0 1.7385


    From Tables 3 and 4 it can be observed that the SS method outperforms the HSS method for various $ n $ and $ q $, especially when $ q $ is small (i.e., when the coefficient matrices are ill-conditioned).

    Moreover, since the NSCG and SS are both single-step methods, their numerical results are compared in Table 5. From Table 5 we see that the SS method has better computing efficiency than the NSCG method.

    Table 5.  Numerical results of NSCG and SS for Example 1.
    Method NSCG SS
    $ n $ $ q $ IT-out IT-in CPU IT-out IT-in CPU
    $ n=16 $ $ q=0.1 $ 15 25.4 0.0230 11 4.0 0.0023
    $ q=0.3 $ 291 30.5 0.2445 8 4.0 0.0017
    $ q=1 $ 90 170.9 0.4020 11 4.0 0.0022
    $ n=32 $ $ q=0.1 $ – – – 18 7.0 0.0227
    $ q=0.3 $ 45 488.6 1.9451 12 7.0 0.0162
    $ q=1 $ 75 493.3 2.9591 20 7.0 0.0244
    $ n=64 $ $ q=0.1 $ – – – 30 10.5 0.1952
    $ q=0.3 $ 77 497.7 9.5250 16 11.1 0.1008
    $ q=1 $ 62 494.5 7.5782 30 10.0 0.1650
    $ n=128 $ $ q=0.1 $ 74 493.3 48.129 40 20.5 2.4942
    $ q=0.3 $ 69 492.9 44.699 22 18.0 1.2280
    $ q=1 $ 69 492.8 45.885 45 14.0 1.7385


    Example 2 ([2]) We consider the matrix Eq (1.1) with $ m = n $ and

    $ \begin{cases} A=\operatorname{diag}(1,2,\ldots,n)+rL^{T},\\ B=2^{-t}I_n+\operatorname{diag}(1,2,\ldots,n)+rL^{T}+2^{-t}L, \end{cases} $

    where $ L $ is the strictly lower triangular matrix whose entries in the lower triangle are all ones, and $ r $ and $ t $ are given problem parameters. In our tests, we take $ t = 1 $.
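    These matrices can be assembled as below (our own dense sketch; the $ 2^{-t} $ factors are an assumption reconstructed from the related Sylvester-equation test problem in [2], and the function name is ours).

    ```python
    import numpy as np

    def example2_matrices(n, r, t=1):
        """Coefficient matrices A, B of Example 2."""
        # Strictly lower triangular matrix of ones.
        L = np.tril(np.ones((n, n)), k=-1)
        D = np.diag(np.arange(1.0, n + 1.0))              # diag(1, 2, ..., n)
        A = D + r * L.T                                    # upper triangular
        B = 2.0 ** (-t) * np.eye(n) + D + r * L.T + 2.0 ** (-t) * L
        return A, B
    ```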

    In Tables 6 and 7, for various $ n $ and $ r $, we list the theoretical quasi-optimal parameters and experimental optimal parameters of HSS and SS, respectively. In Tables 8 and 9, the numerical results of HSS and SS are listed. Moreover, the numerical results of NSCG and SS are compared in Table 10.

    Table 6.  The theoretical quasi-optimal parameters of HSS and SS for Example 2.
    Method HSS SS
    $ n $ $ r $ $ \alpha_{\text{quasi}} $ $ \beta_{\text{quasi}} $ $ \alpha_{\text{quasi}} $ $ \beta_{\text{quasi}} $
    $ n=32 $ $ r=0.01 $ 5.66 6.75 5.66 6.75
    $ r=0.1 $ 5.63 6.71 5.63 6.71
    $ r=1 $ 4.94 6.36 10.20 6.36
    $ n=64 $ $ r=0.01 $ 8.00 9.47 8.00 10.07
    $ r=0.1 $ 7.96 9.41 7.96 9.41
    $ r=1 $ 6.90 8.89 20.38 10.22
    $ n=128 $ $ r=0.01 $ 11.31 13.31 11.31 20.01
    $ r=0.1 $ 11.25 13.23 11.25 16.35
    $ r=1 $ 9.66 12.46 40.75 20.39
    $ n=256 $ $ r=0.01 $ 16.00 18.75 16.00 39.95
    $ r=0.1 $ 15.91 18.63 15.91 32.62
    $ r=1 $ 13.55 17.50 81.49 40.75

    Table 7.  The experimental optimal iteration parameters of HSS and SS for Example 2.
    Method HSS SS
    $ n $ $ r $ $ \alpha_{\exp} $ $ \beta_{\exp} $ $ \alpha_{\exp} $ $ \beta_{\exp} $
    $ n=32 $ $ r=0.01 $ 7 10 7 13
    $ r=0.1 $ 7 9 7 14
    $ r=1 $ 7 6 30 10
    $ n=64 $ $ r=0.01 $ 10 11 10 25
    $ r=0.1 $ 11 12 10 26
    $ r=1 $ 10 1 60 15
    $ n=128 $ $ r=0.01 $ 16 10 15 49
    $ r=0.1 $ 16 12 15 53
    $ r=1 $ 16 2 120 23
    $ n=256 $ $ r=0.01 $ 24 16 22 98
    $ r=0.1 $ 24 16 24 104
    $ r=1 $ 24 4 239 34

    Table 8.  Numerical results of HSS and SS with the theoretical quasi-optimal parameters for Example 2.
    Method HSS SS
    $ n $ $ r $ IT-out IT-in-1 IT-in-2 CPU IT-out IT-in CPU
    $ n=32 $ $ r=0.01 $ 37 10.3 10.3 0.1743 18 6.0 0.0373
    $ r=0.1 $ 37 10.3 10.3 0.1380 18 7.0 0.0388
    $ r=1 $ 39 10.4 10.5 0.1376 11 9.0 0.0306
    $ n=64 $ $ r=0.01 $ 60 12.9 12.9 0.9061 25 8.0 0.2025
    $ r=0.1 $ 56 12.9 12.9 0.8133 25 9.0 0.2330
    $ r=1 $ 69 18.6 18.7 1.4371 11 12.0 0.1388
    $ n=128 $ $ r=0.01 $ 95 19.7 19.7 9.9527 35 8.0 1.2843
    $ r=0.1 $ 96 20.2 20.3 10.807 35 10.0 1.4921
    $ r=1 $ 100 27.3 27.4 14.947 11 12.0 0.5410
    $ n=256 $ $ r=0.01 $ 100 28.4 28.4 93.739 49 8.0 10.863
    $ r=0.1 $ 100 29.2 29.2 92.406 49 10.0 13.902
    $ r=1 $ 100 39.0 38.8 124.37 11 12.0 3.8920

    Table 9.  Numerical results of HSS and SS with the experimental optimal iteration parameters for Example 2.
    Method HSS SS
    $ n $ $ r $ IT-out IT-in-1 IT-in-2 CPU IT-out IT-in CPU
    $ n=32 $ $ r=0.01 $ 32 7.7 7.7 0.0787 16 3.1 0.0130
    $ r=0.1 $ 31 8.2 8.2 0.0783 16 3.2 0.0127
    $ r=1 $ 30 7.0 7.0 0.0646 2 6.0 0.0028
    $ n=64 $ $ r=0.01 $ 43 12.2 12.2 0.5301 21 3.2 0.0510
    $ r=0.1 $ 42 11.4 11.4 0.4874 21 3.3 0.0647
    $ r=1 $ 39 55.9 55.9 2.1008 2 8.0 0.0117
    $ n=128 $ $ r=0.01 $ 57 24.3 24.3 5.8351 28 3.4 0.3985
    $ r=0.1 $ 54 20.3 20.3 4.8781 27 3.3 0.3821
    $ r=1 $ 53 55.2 55.2 12.885 2 10.8 0.0902
    $ n=256 $ $ r=0.01 $ 75 30.9 30.9 59.116 36 3.1 2.5999
    $ r=0.1 $ 70 30.1 30.1 64.391 34 3.2 3.3392
    $ r=1 $ 66 52.3 52.3 85.011 2 14.0 0.7423

    Table 10.  Numerical results of NSCG and SS for Example 2.
    Method NSCG SS
    $ n $ $ r $ IT-out IT-in CPU IT-out IT-in CPU
    $ n=32 $ $ r=0.01 $ 17 25.4 0.0467 16 3.1 0.0130
    $ r=0.1 $ 18 50.1 0.0717 16 3.2 0.0127
    $ r=1 $ 100 87.3 0.6921 2 6.0 0.0028
    $ n=64 $ $ r=0.01 $ 21 36.2 0.2206 21 3.2 0.0510
    $ r=0.1 $ 23 86.9 0.4631 21 3.3 0.0647
    $ r=1 $ 100 99.5 2.4361 2 8.0 0.0117
    $ n=128 $ $ r=0.01 $ 25 51.0 1.7965 28 3.4 0.3985
    $ r=0.1 $ 29 96.5 3.4998 27 3.3 0.3821
    $ r=1 $ 100 100 12.997 2 10.8 0.0902
    $ n=256 $ $ r=0.01 $ 31 64.0 17.264 36 3.1 2.5999
    $ r=0.1 $ 37 98.1 32.679 34 3.2 3.3392
    $ r=1 $ 100 100 96.801 2 14.0 0.7423


    From Tables 8–10 we draw the same conclusions as for Example 1.

    Therefore, for large sparse matrix equation $ AXB = C $, the SS method is an effective iterative approach.

    By utilizing an inner-outer iteration strategy, we established a shift-splitting (SS) iteration method for the large sparse linear matrix equation $ AXB = C $. Two convergence theorems were analysed in depth, and the theoretical quasi-optimal parameters of the SS iteration were given. Numerical experiments illustrated that the SS method can outperform the HSS and NSCG methods in both outer and inner iteration numbers and in computing time, especially for ill-conditioned coefficient matrices.

    The authors are very grateful to the anonymous referees for their helpful comments and suggestions on the manuscript. This research is supported by the Natural Science Foundation of Gansu Province (No. 20JR5RA464), the National Natural Science Foundation of China (No. 11501272), and the China Postdoctoral Science Foundation funded project (No. 2016M592858).

    The authors declare there is no conflict of interest.

    [1] Handoll HH, Sherrington C, Mak JC (2011) Interventions for improving mobility after hip fracture surgery in adults. John Wiley Sons, Ltd 129: CD001704.
    [2] Hoffmann TC, Glasziou PP, Boutron I, et al. (2014) Better reporting of interventions: Template for intervention description and replication (TIDieR) checklist and guide. BMJ 348: g1687. doi: 10.1136/bmj.g1687
    [3] Eccles MP, Mittman BS (2006) Welcome to Implementation Science. Implementation Sci 1: 1. doi: 10.1186/1748-5908-1-1
    [4] Colditz G, (2012) The Promise and Challenges of Dissemination and Implementation Research, In: Brownson RC, Colditz G, Proctor E, Editors, Dissemination and Implementation Research in Health, New York: Oxford University Press, 3–22.
    [5] Geng EH, Peiris D, Kruk ME (2017) Implementation science: Relevance in the real world without sacrificing rigor. PLoS Med 14: e1002288. doi: 10.1371/journal.pmed.1002288
    [6] Thoma A, Farrokhyar F, Mcknight L, et al. (2010) Practical tips for surgical research: How to optimize patient recruitment. Can J Surg 53: 205–210.
    [7] Walsh MC, Trenthamdietz A, Gangnon RE, et al. (2012) Selection bias in population-based cancer case-control studies due to incomplete sampling frame coverage. Cancer Epidemiol Biomarkers Prev 21: 881–886. doi: 10.1158/1055-9965.EPI-11-1066
    [8] Groves RM, Fowler FJJ, Couper MP, et al. (2004) Survey methodology. John Wiley & Sons 3: 10–11.
    [9] Michie S, Richardson M, Johnston M, et al. (2013) The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: Building an international consensus for the reporting of behavior change interventions. Ann Behav Med 46: 81–95. doi: 10.1007/s12160-013-9486-6
    [10] Hoffmann TC, Erueti C, Glasziou PP (2013) Poor description of non-pharmacological interventions: Analysis of consecutive sample of randomised trials. BMJ 347: f3755. doi: 10.1136/bmj.f3755
    [11] Glasgow RE, Vogt TM, Boles SM (1999) Evaluating the public health impact of health promotion interventions: The RE-AIM framework. Am J Public Health 89: 1322–1327. doi: 10.2105/AJPH.89.9.1322
    [12] Durlak JA, Dupre EP (2008) Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. Am J Community Psychol 41: 327–350. doi: 10.1007/s10464-008-9165-0
    [13] Nilsen P (2015) Making sense of implementation theories, models and frameworks. Implementation Sci 10: 53. doi: 10.1186/s13012-015-0242-0
    [14] Cook WL, Khan KM, Bech MH, et al. (2011) Post-discharge management following hip fracture-get you back to B4: A parallel group, randomized controlled trial study protocol. BMC Geriatr 11: 30. doi: 10.1186/1471-2318-11-30
    [15] Moher D, Hopewell S, Schulz KF, et al. (2010) CONSORT 2010 Explanation and Elaboration: Updated guidelines for reporting parallel group randomised trials. J Clin Epidemiol 63: e1–e37. doi: 10.1016/j.jclinepi.2010.03.004
    [16] Guralnik JM, Simonsick EM, Ferrucci L, et al. (1994) A short physical performance battery assessing lower extremity function: Association with self-reported disability and prediction of mortality and nursing home admission. J Gerontol 49: M85–M94. doi: 10.1093/geronj/49.2.M85
    [17] Panel on Prevention of Falls in Older Persons American Geriatrics Society and British Geriatrics Society (2011) Summary of the updated AGS and BGS clinical practice guideline for prevention of falls in older persons. J Am Geriatr Soc 59: 148–157. doi: 10.1111/j.1532-5415.2010.03234.x
    [18] Cook WL, Schiller C, McAllister MM, et al. (2015) Feasibility of a follow-up hip fracture clinic. J Am Geriatr Soc 63: 598–599. doi: 10.1111/jgs.13285
    [19] Stott-Eveneshen S, Sims-Gould J, McAllister MM, et al. (2017) Reflections on hip fracture recovery from older adults enrolled in a clinical trial. Gerontol Geriatr Med 3: 2333721417697663.
    [20] Sims-Gould J, Stott-Eveneshen S, Fleig L, et al. (2017) Patient perspectives on engagement in recovery after hip fracture: A qualitative study. J Aging Res 2017: 1–9.
    [21] Stewart AL, Mills KM, King AC, et al. (2001) CHAMPS physical activity questionnaire for older adults: Outcomes for interventions. Med Sci Sports Exerc 33: 1126–1141.
    [22] National Institute on Aging. Adverse events. Available from: https://www.nia.nih.gov/research/dgcg/clinical-research-study-investigators-toolbox/adverse-events.
    [23] Fritz S, Lusardi M (2009) White paper: "Walking speed: The sixth vital sign". J Geriatr Phys Ther 32: 46–49.
    [24] Balazs CL, Morello-Frosch R (2013) The three R's: How community based participatory research strengthens the rigor, relevance and reach of science. Environ Justice 6.
    [25] Thomson H (2013) Improving utility of evidence synthesis for healthy public policy: The three R's (relevance, rigor, and readability [and resources]). Am J Public Health 103: e17–e23.
    [26] Gordon WA (2009) Clinical trials in rehabilitation research: Balancing rigor and relevance. Arch Phys Med Rehabil 90: S1–S2. doi: 10.1016/j.apmr.2009.08.138
    [27] Wandersman A, Duffy J, Flaspohler P, et al. (2008) Bridging the gap between prevention research and practice: The interactive systems framework for dissemination and implementation. Am J Community Psychol 41: 171–181. doi: 10.1007/s10464-008-9174-z
    [28] McDonald AM, Knight RC, Campbell MK, et al. (2006) What influences recruitment to randomised controlled trials? A review of trials funded by two UK funding agencies. Trials 7: 9.
    [29] Smit ES, Hoving C, Cox VC, et al. (2012) Influence of recruitment strategy on the reach and effect of a web-based multiple tailored smoking cessation intervention among Dutch adult smokers. Health Educ Res 27: 191–199. doi: 10.1093/her/cyr099
    [30] Ranhoff AH, Holvik K, Martinsen MI, et al. (2010) Older hip fracture patients: Three groups with different needs. BMC Geriatr 10: 65. doi: 10.1186/1471-2318-10-65
    [31] Bell KR, Hammond F, Hart T, et al. (2008) Participant recruitment and retention in rehabilitation research. Am J Phys Med Rehabil 87: 330–338. doi: 10.1097/PHM.0b013e318168d092
    [32] Langford DP, Edwards NY, Gray S, et al. (2017) "Life goes on": Everyday tasks, coping self-efficacy and independence: Exploring older adults' recovery from hip fracture. Qual Health Res.
    [33] Kammerlander C, Roth T, Friedman SM, et al. (2010) Ortho-geriatric service-a literature review comparing different models. Osteoporos Int 21: 637–646.
    [34] McLellan AR, Gallacher SJ, Fraser M, et al. (2003) The fracture liaison service: Success of a program for the evaluation and management of patients with osteoporotic fracture. Osteoporos Int 14: 1028–1034. doi: 10.1007/s00198-003-1507-z
    [35] Bogoch ER, Elliot-Gibson V, Beaton D, et al. (2017) Fracture prevention in the orthopaedic environment: Outcomes of a coordinator-based fracture liaison service. J Bone Joint Surg Am 99: 820–831. doi: 10.2106/JBJS.16.01042
    [36] Makras P, Panagoulia M, Mari A, et al. (2017) Evaluation of the first fracture liaison service in the Greek healthcare setting. Arch Osteoporos 12: 3. doi: 10.1007/s11657-016-0299-7
    [37] Moriwaki K, Noto S (2017) Economic evaluation of osteoporosis liaison service for secondary fracture prevention in postmenopausal osteoporosis patients with previous hip fracture in Japan. Osteoporos Int 28: 621–632. doi: 10.1007/s00198-016-3777-2
    [38] Langridge CR, McQuillian C, Watson WS, et al. (2007) Refracture following fracture liaison service assessment illustrates the requirement for integrated falls and fracture services. Calcif Tissue Int 81: 85–91. doi: 10.1007/s00223-007-9042-0
    [39] Horne R, Weinman J, Barber N, et al. (2005) Concordance, adherence and compliance in medicine taking. London: NCCSDO 2005: 40–46.
    [40] Sabaté E, Adherence to long-term therapies: Evidence for action, 2003. Available from: http://www.who.int/chp/knowledge/publications/adherence_report/en/.
    [41] Johnson SB (1992) Methodological issues in diabetes research. Measuring adherence. Diabetes Care 15: 1658–1667.
    [42] Sidani S, Fox M, Streiner DL, et al. (2015) Examining the influence of treatment preferences on attrition, adherence and outcomes: A protocol for a two-stage partially randomized trial. BMC Nurs 14: 57. doi: 10.1186/s12912-015-0108-4
    [43] Haentjens P, Magaziner J, Colón-Emeric CS, et al. (2010) Meta-analysis: Excess mortality after hip fracture among older women and men. Ann Intern Med 152: 380–390. doi: 10.7326/0003-4819-152-6-201003160-00008
    [44] Eastwood EA, Magaziner J, Wang J, et al. (2002) Patients with hip fracture: Subgroups and their outcomes. J Am Geriatr Soc 50: 1240–1249. doi: 10.1046/j.1532-5415.2002.50311.x
    [45] Panel on Prevention of Falls in Older Persons, American Geriatrics Society and British Geriatrics Society (2011) Summary of the updated American Geriatrics Society/British Geriatrics Society clinical practice guideline for prevention of falls in older persons. J Am Geriatr Soc 59: 148–157. doi: 10.1111/j.1532-5415.2010.03234.x
    [46] Miller WR, Rose GS (2009) Toward a theory of motivational interviewing. Am Psychol 64: 527–537. doi: 10.1037/a0016830
    [47] Michie S, Ashford S, Sniehotta FF, et al. (2011) A refined taxonomy of behaviour change techniques to help people change their physical activity and healthy eating behaviours: The CALO-RE taxonomy. Psychol Health 26: 1479–1498. doi: 10.1080/08870446.2010.540664
    [48] Fried LP, Tangen CM, Walston J, et al. (2001) Frailty in older adults: Evidence for a phenotype. J Gerontol A Biol Sci Med Sci 56: M146–M156. doi: 10.1093/gerona/56.3.M146
  • © 2018 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)