1. Introduction
Saddle point problems occur in many scientific and engineering applications. These applications include mixed finite element approximation of elliptic partial differential equations (PDEs) [1,2,3], parameter identification problems [4,5], constrained and weighted least squares problems [6,7], model order reduction of dynamical systems [8,9], computational fluid dynamics (CFD) [10,11,12], constrained optimization [13,14,15], image registration and image reconstruction [16,17,18], and optimal control problems [19,20,21]. Iterative solvers are mostly used for such problems because of their typically large, sparse, or ill-conditioned nature. However, some application areas, such as optimization problems and the solution of subproblems within other methods, require direct methods for solving the saddle point problem. We refer the reader to [22] for a detailed survey.
The finite element method (FEM) is commonly used to solve coupled systems of differential equations. The FEM algorithm involves solving a set of linear equations possessing the structure of the saddle point problem [23,24]. Recently, Okulicka and Smoktunowicz [25] proposed and analyzed block Gram-Schmidt methods using thin Householder $ QR $ factorization for the solution of $ 2 \times 2 $ block linear systems, with emphasis on saddle point problems. Updating techniques in matrix factorization have been studied by many researchers; for example, see [6,7,26,27,28]. Hammarling and Lucas [29] presented updating $ QR $ factorization algorithms with applications to linear least squares (LLS) problems. Yousaf [30] studied $ QR $ factorization as a solution tool for LLS problems using a repeated partition and updating process. Andrew and Dingle [31] performed a parallel implementation of $ QR $ factorization based updating algorithms on GPUs for the solution of LLS problems. Zeb and Yousaf [32] studied equality constrained LLS problems using $ QR $ updating techniques. A saddle point problem solver based on an improved Variable-Reduction Method (iVRM) has been studied in [33]. The analysis of symmetric saddle point systems with the augmented Lagrangian method using the Generalized Singular Value Decomposition (GSVD) has been carried out by Dluzewska [34]. A null-space approach was suggested by Scott and Tuma to solve large-scale saddle point problems involving small and non-zero (2,2) block structures [35].
In this article, we propose an updating $ QR $ factorization technique for the numerical solution of the saddle point problem given as

$ Mz = \begin{pmatrix} A & B \\ B^{T} & -C \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f_1 \\ f_2 \end{pmatrix} = f, \qquad (1.1) $

which is a linear system where $ A \in \mathcal{R}^{p\times p} $, $ B \in \mathcal{R}^{p\times q} $ ($ q\leq p $) is a full column rank matrix, $ B^T $ represents the transpose of the matrix $ B $, and $ C \in \mathcal{R}^{q\times q} $. There exists a unique solution $ z = (x, y)^T $ of problem (1.1) if the $ 2\times 2 $ block matrix $ M $ is nonsingular. In our proposed technique, instead of working with the large system, with its attendant complexities such as memory consumption and storage requirements, we compute the $ QR $ factorization of matrix $ A $ and then update its upper triangular factor $ R $ by appending $ B $ and $ \begin{pmatrix} B^{T} & -C \end{pmatrix} $ to obtain the solution. The $ QR $ factorization updating process consists of updating the upper triangular factor $ R $ while avoiding the involvement of the orthogonal factor $ Q $, because of its expensive storage requirements [6]. The proposed technique is not only applicable to solving the saddle point problem but can also be used as an updating strategy when the $ QR $ factorization of matrix $ A $ is already available and matrices of conformable dimensions must be appended on the right or at the bottom in order to solve the modified problems.
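For concreteness, the following short MATLAB fragment (an illustrative sketch with arbitrary small sizes, not taken from the paper) assembles the block matrix $ M $ and right hand side of problem (1.1) and computes a reference solution with the backslash operator:

% Sketch: a small instance of problem (1.1); the sizes p and q are arbitrary.
p = 6; q = 3;
A = randn(p); A = A'*A + eye(p);   % (1,1) block, made symmetric positive definite
B = randn(p, q);                   % full column rank with probability one
C = eye(q);                        % (2,2) block
M = [A, B; B', -C];                % block saddle point matrix of problem (1.1)
z = ones(p+q, 1);                  % exact solution with x = ones(p,1), y = ones(q,1)
f = M*z;                           % corresponding right hand side
z_ref = M\f;                       % reference solution by a direct solve
norm(z_ref - z)                    % should be of the order of machine precision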
The paper is organized as follows. Background concepts are presented in Section 2. The core of the proposed technique is presented in Section 3, along with a MATLAB implementation of the algorithm for problem (1.1). In Section 4 we provide numerical experiments to illustrate its applicability and accuracy. Conclusions are given in Section 5.
2. Background study
Some important concepts are given in this section; they will be used in Section 3.
The $ QR $ factorization of a matrix $ S \in \mathcal{R}^{p\times q} $ is defined as

$ S = QR, \qquad (2.1) $

where $ Q $ is an orthogonal matrix and $ R $ is an upper trapezoidal matrix. It can be computed using the Gram-Schmidt orthogonalization process, Givens rotations, or Householder reflections.
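As a quick illustration (not part of the original text), these factors can be obtained with MATLAB's built-in qr command; the matrix below is an arbitrary random example:

S = randn(5, 3);        % p = 5, q = 3
[Q, R] = qr(S);         % Q is 5x5 orthogonal, R is 5x3 upper trapezoidal
norm(Q'*Q - eye(5))     % close to machine precision: Q is orthogonal
norm(Q*R - S)           % close to machine precision: S = QR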
The $ QR $ factorization using Householder reflections can be obtained by successively pre-multiplying matrix $ S $ with a series of Householder matrices $ H_q \cdots H_2 H_1 $, each of which introduces zeros in all the subdiagonal elements of a column simultaneously. The matrix $ H\in \mathcal{R}^{q\times q} $ for a non-zero Householder vector $ u\in \mathcal{R}^{q} $ has the form

$ H = I - \dfrac{2}{u^{T}u}\, uu^{T}. \qquad (2.2) $

The Householder matrix is symmetric and orthogonal. Setting

$ u = t \pm ||t||_{2}\, e_1, \qquad (2.3) $

we have

$ Ht = \mp ||t||_{2}\, e_1 = \alpha e_1, \qquad (2.4) $

where $ t $ is a non-zero vector, $ \alpha $ is a scalar, $ ||{\cdot}||_{2} $ is the Euclidean norm, and $ e_1 $ is a unit vector.
Choosing the negative sign in (2.3), we get a positive value of $ \alpha $. However, severe cancellation error can occur in computing the first component of $ u $ if $ t $ is close to a positive multiple of $ e_1 $. Let $ t\in \mathcal{R}^q $ be a vector and $ t_1 $ be its first element; then the following Parlett's formula [36]

$ u_1 = t_1 - ||t||_{2} = \dfrac{t_1^{2} - ||t||_{2}^{2}}{t_1 + ||t||_{2}} = \dfrac{-(t_2^{2} + \cdots + t_q^{2})}{t_1 + ||t||_{2}} \qquad (2.5) $

can be used to avoid the cancellation error in the case when $ t_1 > 0 $. For further details regarding the $ QR $ factorization, we refer to [6,7].
The Householder vector $ u $ required to form the Householder matrix $ H $ is computed with Algorithm 1.
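Since Algorithm 1 is not reproduced here, the following MATLAB function is only a sketch of how such a Householder vector routine might look, with the sign convention of (2.3)-(2.5) and Parlett's formula used when $ t_1 > 0 $; the function name house_vec is our own, and degenerate inputs ($ t $ already a positive multiple of $ e_1 $) are not handled:

function [u, alpha] = house_vec(t)
% Sketch of a Householder vector computation (not the authors' Algorithm 1).
% For a non-zero vector t, returns u and alpha = ||t||_2 such that
% (I - 2*(u*u')/(u'*u))*t = alpha*e_1.
normt = norm(t);
u = t;
if t(1) > 0
    % Parlett's formula (2.5): avoids cancellation in t(1) - ||t||_2
    u(1) = -sum(t(2:end).^2) / (t(1) + normt);
else
    u(1) = t(1) - normt;
end
alpha = normt;
end

For example, [u, alpha] = house_vec([3; 4]) gives u = [-2; 4] and alpha = 5, and the corresponding Householder matrix maps [3; 4] to [5; 0].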
3. Solution procedure
We consider problem (1.1) as

$ Mz = f, $

where

$ M = \begin{pmatrix} A & B \\ B^{T} & -C \end{pmatrix}, \qquad z = \begin{pmatrix} x \\ y \end{pmatrix}, \qquad f = \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}. $
Computing the $ QR $ factorization of matrix $ A $, we have

$ A = \hat{Q}\hat{R}, \qquad \hat{d} = \hat{Q}^{T} f_1, \qquad (3.1) $

where $ \hat{R} \in \mathcal{R}^{p\times p} $ is the upper triangular factor, $ \hat{d}\in \mathcal{R}^{p} $ is the corresponding right hand side (RHS) vector, and $ \hat{Q}\in \mathcal{R}^{p\times p} $ is the orthogonal factor. Moreover, multiplying the transpose of matrix $ \hat{Q} $ with the matrix $ M_c = B \in \mathcal{R}^{p\times q} $, we get

$ N_c = \hat{Q}^{T} M_c. \qquad (3.2) $
Equation (3.1) is obtained using the MATLAB built-in command qr; it can also be computed by constructing the Householder matrices $ H_1\ldots H_p $ using Algorithm 1 and applying the Householder $ QR $ algorithm [6]. Then, we have

$ \hat{R} = H_{p} \cdots H_{1} A, \qquad \hat{d} = H_{p} \cdots H_{1} f_1, $

where $ \hat{Q} = H_1 \ldots H_p $ and $ N_c = H_{p} \ldots H_1 M_c $. This approach gives positive diagonal values of $ \hat{R} $ and is also economical with respect to storage requirements and computation time [6].
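In MATLAB, this first step can be sketched as follows (variable names mirror the text; A, B and f1 are assumed to be available from problem (1.1), and the orthogonal factor is formed explicitly here only for readability, whereas the text avoids storing it):

[Qhat, Rhat] = qr(A);     % (3.1): A = Qhat*Rhat with Rhat upper triangular
dhat = Qhat' * f1;        % transformed right hand side
Nc   = Qhat' * B;         % (3.2): Nc = Qhat'*Mc with Mc = B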
Appending the matrix $ N_c $ given in Eq (3.2) to the right end of the upper triangular matrix $ \hat{R} $ in (3.1), we get

$ \acute{R} = \begin{pmatrix} \hat{R} & N_c \end{pmatrix}. \qquad (3.3) $

Here, if the factor $ \acute{R} $ has the upper triangular structure, then $ \acute{R} = \bar{R} $. Otherwise, by using Algorithm 1 to form the Householder matrices $ H_{p+1} \ldots H_{p+q} $ and applying them to $ \acute{R} $ as

$ \bar{R} = H_{p+q} \cdots H_{p+1} \acute{R}, \qquad \bar{d} = H_{p+q} \cdots H_{p+1} \hat{d}, \qquad (3.4) $

we obtain the upper triangular matrix $ \bar{R} $.
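Continuing the MATLAB sketch above, appending $ N_c $ to the right of $ \hat{R} $ requires no further work in that setting, because $ \hat{R} $ is square upper triangular and the appended columns lie entirely to the right of the diagonal:

Racute = [Rhat, Nc];      % (3.3): p x (p+q), already upper trapezoidal
Rbar   = Racute;          % so no Householder sweep is needed in this case
dbar   = dhat;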
Now, the matrix $ M_r = \begin{pmatrix} B^{T} & -C \end{pmatrix} $ and its corresponding RHS $ f_2 \in \mathcal{R}^{q} $ are appended to the $ \bar{R} $ factor and $ \bar{d} $ of (3.4), respectively:

$ \bar{R}_r = \begin{pmatrix} \bar{R} \\ M_r \end{pmatrix}, \qquad \bar{d}_r = \begin{pmatrix} \bar{d} \\ f_2 \end{pmatrix}. \qquad (3.5) $

Using Algorithm 1 to build the Householder matrices $ H_{1} \ldots H_{p+q} $ and applying them to $ \bar{R}_r $ and its RHS $ \bar{d}_{r} $ gives

$ \tilde{R} = H_{p+q} \cdots H_{1} \bar{R}_r, \qquad \tilde{d} = H_{p+q} \cdots H_{1} \bar{d}_r. \qquad (3.6) $
Hence, we determine the solution of problem (1.1) as $ \tilde{z} = backsub(\tilde{R}, \tilde{d}) $, where backsub denotes backward substitution for the upper triangular system (in MATLAB this can be carried out with the backslash operator, since $ \tilde{R} $ is upper triangular).
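A continuation of the MATLAB sketch for the row-appending and re-triangularization steps might look as follows; here MATLAB's qr is used in place of the explicit Householder sweep of Algorithm 1, which is a simplification of the procedure described above:

Rbar_r = [Rbar; B', -C];      % (3.5): append the block row (B^T  -C)
dbar_r = [dbar; f2];          %        and the corresponding RHS f_2
[Qt, Rtilde] = qr(Rbar_r);    % (3.6): re-triangularize the bordered factor
dtilde = Qt' * dbar_r;
ztilde = Rtilde \ dtilde;     % backward substitution on the upper triangular system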
The algorithmic representation of the above procedure for solving problem (1.1) is given in Algorithm 2.
4. Numerical experiments
To demonstrate the applicability and accuracy of the suggested algorithm, this section presents several numerical tests carried out in MATLAB. Let $ z = (x, y)^T $ be the exact solution of problem (1.1), where $ x = ones(p, 1) $ and $ y = ones(q, 1) $, and let $ \tilde{z} $ be the solution computed by the proposed Algorithm 2. In our test examples, we consider randomly generated test problems of different sizes and compare the results with the block classical Gram-Schmidt re-orthogonalization method (BCGS2) [25]. Dense matrices are taken in our test problems. We carried out the numerical experiments as follows.
Example 1. We consider
where $ randn('state', 0) $ is the MATLAB command used to reset the random number generator to its initial state; $ A_1 = P_1D_1P_1' $ and $ C_1 = P_2D_2P_2' $, where $ P_1 = orth(rand(p)) $ and $ P_2 = orth(rand(q)) $ are random orthogonal matrices, and $ D_1 = logspace(0, -k, p) $ and $ D_2 = logspace(0, -k, q) $ are diagonal matrices whose entries are $ p $ and $ q $ logarithmically spaced points between the decades $ 1 $ and $ 10^{-k} $, respectively. We describe the test matrices in Table 1 by giving their sizes and condition numbers $ \kappa $. The condition number of a matrix $ S $ is defined as $ \kappa(S) = ||{S}||_{2}||{S^{-1}}||_{2} $. Moreover, the results comparison and the numerical illustration of the backward error tests of the algorithm are given in Tables 2 and 3, respectively.
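A sketch of how the test problems of Example 1 might be generated is given below; the sizes, the exponent k, and the form of the block B are our assumptions, since they are not fully specified above, and diag is used to turn the logspace vectors into diagonal matrices:

p = 100; q = 60; k = 4;                    % assumed sizes and decade exponent
randn('state', 0); rand('state', 0);       % reset the random number generators
P1 = orth(rand(p));  P2 = orth(rand(q));   % random orthogonal matrices
D1 = diag(logspace(0, -k, p));             % diagonal entries from 1 down to 10^-k
D2 = diag(logspace(0, -k, q));
A  = P1*D1*P1';  C = P2*D2*P2';            % A = A_1 and C = C_1 as in the text
B  = randn(p, q);                          % assumed form of the off-diagonal block
M  = [A, B; B', -C];                       % saddle point matrix of problem (1.1)
f  = M*ones(p+q, 1);                       % RHS consistent with the exact solution
kappa_M = cond(M)                          % 2-norm condition number, as in Table 1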
The relative errors for the presented algorithm and their comparison with the BCGS2 method in Table 2 show that the algorithm is applicable and has good accuracy. Moreover, the numerical results for the backward stability analysis of the suggested updating algorithm are given in Table 3.
Example 2. In this experiment, we consider $ A = H $, where $ H $ is a Hilbert matrix generated with the MATLAB command $ hilb(p) $. It is a symmetric, positive definite, and ill-conditioned matrix. Moreover, we consider test matrices $ B $ and $ C $ similar to those given in Example 1 but with different dimensions. Tables 4–6 describe the test matrices, the numerical results, and the backward error results, respectively.
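A corresponding sketch for Example 2, under the same assumptions as for Example 1, only replaces the (1,1) block by a Hilbert matrix:

p = 100; q = 60; k = 4;                    % assumed sizes
A  = hilb(p);                              % symmetric, positive definite, ill-conditioned
P2 = orth(rand(q));
C  = P2*diag(logspace(0, -k, q))*P2';
B  = randn(p, q);                          % assumed, as in Example 1
M  = [A, B; B', -C];
f  = M*ones(p+q, 1);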
From Table 5, it can be seen that the presented algorithm is applicable and shows good accuracy. Table 6 numerically illustrates the backward error results of the proposed Algorithm 2.
5. Conclusions
In this article, we have considered the saddle point problem and studied an updating technique based on the Householder $ QR $ factorization to compute its solution. The results for the considered test problems with dense matrices demonstrate that the proposed algorithm is applicable and achieves good accuracy in solving saddle point problems. In the future, the approach can be studied further for sparse problems, which arise frequently in many applications. For such problems, updating a Givens $ QR $ factorization will be effective in avoiding unnecessary fill-in in sparse data matrices.
Acknowledgments
The authors Aziz Khan, Bahaaeldin Abdalla and Thabet Abdeljawad would like to thank Prince Sultan University for paying the APC and for support through the TAS research lab.
Conflict of interest
The authors declare that they have no competing interests.