Saddle point problems occur in many scientific and engineering applications. These applications include mixed finite element approximation of elliptic partial differential equations (PDEs) [1,2,3], parameter identification problems [4,5], constrained and weighted least squares problems [6,7], model order reduction of dynamical systems [8,9], computational fluid dynamics (CFD) [10,11,12], constrained optimization [13,14,15], image registration and image reconstruction [16,17,18], and optimal control problems [19,20,21]. Iterative solvers are mostly used for such problems because the systems are typically large, sparse, or ill-conditioned. However, some application areas, such as optimization problems and the solution of subproblems within other methods, require direct methods for solving the saddle point problem. We refer the reader to [22] for a detailed survey.
The finite element method (FEM) is commonly used to solve coupled systems of differential equations. The FEM algorithm involves solving a set of linear equations possessing the structure of a saddle point problem [23,24]. Recently, Okulicka and Smoktunowicz [25] proposed and analyzed block Gram-Schmidt methods using thin Householder QR factorization for the solution of 2×2 block linear systems, with emphasis on saddle point problems. Updating techniques in matrix factorization have been studied by many researchers; see, for example, [6,7,26,27,28]. Hammarling and Lucas [29] presented updating algorithms for the QR factorization with applications to linear least squares (LLS) problems. Yousaf [30] studied QR factorization as a solution tool for LLS problems using a repeated partition and updating process. Andrew and Dingle [31] performed a parallel implementation of QR factorization-based updating algorithms on GPUs for the solution of LLS problems. Zeb and Yousaf [32] studied equality-constrained LLS problems using QR updating techniques. A saddle point problem solver with an improved Variable-Reduction Method (iVRM) has been studied in [33]. The analysis of symmetric saddle point systems with the augmented Lagrangian method using the Generalized Singular Value Decomposition (GSVD) was carried out by Dluzewska [34]. A null-space approach was suggested by Scott and Tuma to solve large-scale saddle point problems involving small and non-zero (2,2) block structures [35].
In this article, we propose an updating QR factorization technique for the numerical solution of the saddle point problem given as
Mz=f \;\Leftrightarrow\; \begin{pmatrix} A & B \\ B^{T} & -C \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f_{1} \\ f_{2} \end{pmatrix}, | (1.1) |
which is a linear system where A∈Rp×p, B∈Rp×q (q≤p) is a full column rank matrix, BT denotes the transpose of B, and C∈Rq×q. Problem (1.1) has a unique solution z=(x,y)T if the 2×2 block matrix M is nonsingular. In our proposed technique, instead of working with the large system, which entails complexities such as memory consumption and storage requirements, we compute the QR factorization of matrix A and then update its upper triangular factor R by appending B and (BT −C) to obtain the solution. The updating process works only with the upper triangular factor R and avoids the orthogonal factor Q because of its expensive storage requirements [6]. The proposed technique is not only applicable to solving the saddle point problem but can also serve as an updating strategy when the QR factorization of a matrix A is at hand and matrices of a similar nature must be appended at its right end or at the bottom to solve the modified problems.
The paper is organized as follows. Background concepts are presented in Section 2. The core idea of the suggested technique, along with a MATLAB implementation of the algorithm for problem (1.1), is presented in Section 3. In Section 4, we provide numerical experiments to illustrate its applications and accuracy. The conclusion is given in Section 5.
Some important concepts are given in this section; they will be used in our main Section 3.
The QR factorization of a matrix S∈Rp×q is defined as
S=QR, Q∈Rp×p, R∈Rp×q, | (2.1) |
where Q is an orthogonal matrix and R is an upper trapezoidal matrix. It can be computed using Gram Schmidt orthogonalization process, Givens rotations, and Householder reflections.
The QR factorization using Householder reflections can be obtained by successively pre-multiplying matrix S with a series of Householder matrices Hq⋯H2H1, each of which introduces zeros in all the subdiagonal elements of a column simultaneously. For a non-zero Householder vector u∈Rq, the Householder matrix H∈Rq×q has the form
H=Iq−τuuT, τ=2/(uTu). | (2.2) |
The Householder matrix is symmetric and orthogonal. Setting
u=t±||t||2e1, | (2.3) |
we have
Ht=t−τuuTt=∓αe1, | (2.4) |
where t is a non-zero vector, α is a scalar, ||⋅||2 is the Euclidean norm, and e1 is a unit vector.
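As a quick numerical check of (2.2)–(2.4) — a NumPy sketch of ours, not from the paper, whose experiments use MATLAB — the reflector built from u in (2.3) is symmetric, orthogonal, and maps t to a multiple of e1:

```python
import numpy as np

t = np.array([3.0, 1.0, 2.0])

# Householder vector per Eq (2.3); the sign matching t[0] avoids cancellation
u = t + np.sign(t[0]) * np.linalg.norm(t) * np.eye(len(t))[0]
tau = 2.0 / (u @ u)                     # Eq (2.2): tau = 2 / u^T u
H = np.eye(len(t)) - tau * np.outer(u, u)

print(np.allclose(H, H.T))              # symmetric
print(np.allclose(H @ H.T, np.eye(3)))  # orthogonal
print(H @ t)                            # ≈ [-||t||_2, 0, 0], cf. Eq (2.4)
```

With the plus sign chosen here (t1 > 0), Ht comes out as −||t||2 e1, matching the ∓ convention in (2.4).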
Choosing the negative sign in (2.3), we get a positive value of α. However, severe cancellation error can occur in (2.3) if t is close to a positive multiple of e1. Let t∈Rq be a vector and t1 be its first element; then the following Parlett's formula [36]
u_1 = t_1 - \|t\|_2 = \frac{t_1^2-\|t\|_2^2}{t_1+\|t\|_2} = \frac{-(t_2^2+\cdots+t_q^2)}{t_1+\|t\|_2}, |
can be used to avoid the cancellation error when t1>0. For further details regarding QR factorization, we refer to [6,7].
With the aid of the following algorithm, the Householder vector u required for the Householder matrix H is computed.
Algorithm 1 Computing parameter τ and Householder vector u [6] |
Input: t∈Rq    Output: u, τ
σ = ||t(2:q)||₂², u = t, u(1) = 1
if (σ = 0) then
  τ = 0
else
  μ = √(t(1)² + σ)
  if t(1) ≤ 0 then
    u(1) = t(1) − μ
  else
    u(1) = −σ/(t(1) + μ)
  end if
  τ = 2u(1)²/(σ + u(1)²)
  u = u/u(1)
end if
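Algorithm 1 transcribes directly into code. The sketch below is ours in NumPy (the paper works in MATLAB), reading σ as ||t(2:q)||₂² so that μ = ||t||₂, consistent with Parlett's formula above:

```python
import numpy as np

def householder(t):
    """Algorithm 1: Householder vector u (with u[0] = 1 after scaling) and
    parameter tau such that (I - tau*u*u^T) t = ||t||_2 e_1."""
    t = np.asarray(t, dtype=float)
    sigma = t[1:] @ t[1:]                  # sigma = ||t(2:q)||_2^2
    u = t.copy()
    u[0] = 1.0
    if sigma == 0.0:
        tau = 0.0
    else:
        mu = np.sqrt(t[0] ** 2 + sigma)    # mu = ||t||_2
        if t[0] <= 0:
            u[0] = t[0] - mu
        else:
            u[0] = -sigma / (t[0] + mu)    # Parlett's formula: no cancellation
        tau = 2.0 * u[0] ** 2 / (sigma + u[0] ** 2)
        u = u / u[0]
    return u, tau

t = np.array([4.0, 3.0, 0.0])
u, tau = householder(t)
H = np.eye(t.size) - tau * np.outer(u, u)
print(H @ t)        # ≈ [5, 0, 0]: the subdiagonal entries of t are zeroed
```

For this t, ||t||₂ = 5 and the reflector maps t exactly onto 5e1, which is why the updated R factor has positive diagonal values.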
We consider problem (1.1) as
Mz=f, |
where
M=\begin{pmatrix} A & B \\ B^{T} & -C \end{pmatrix}\in R^{(p+q)\times(p+q)}, \quad z=\begin{pmatrix} x \\ y \end{pmatrix}\in R^{p+q}, \quad \text{and} \quad f=\begin{pmatrix} f_{1} \\ f_{2} \end{pmatrix}\in R^{p+q}.
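To make the block structure concrete in code (an illustrative NumPy sketch; the small random blocks are ours, not the paper's test matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 5, 3
A = rng.standard_normal((p, p))
B = rng.standard_normal((p, q))     # q <= p; full column rank almost surely
C = rng.standard_normal((q, q))

# Assemble the (p+q) x (p+q) saddle point system of problem (1.1)
M = np.block([[A, B], [B.T, -C]])
f = rng.standard_normal(p + q)

# A unique solution z = (x, y)^T exists iff M is nonsingular
z = np.linalg.solve(M, f)
x, y = z[:p], z[p:]
print(np.allclose(M @ z, f))        # True
```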
Computing QR factorization of matrix A, we have
ˆR=ˆQTA, ˆd=ˆQTf1, | (3.1) |
where ˆR∈Rp×p is the upper triangular matrix, ˆd∈Rp is the corresponding right hand side (RHS) vector, and ˆQ∈Rp×p is the orthogonal matrix. Moreover, multiplying the transpose of matrix ˆQ with matrix Mc=B∈Rp×q, we get
Nc=ˆQTMc∈Rp×q. | (3.2) |
Equation (3.1) is obtained using the MATLAB built-in command qr; it can also be computed by constructing the Householder matrices H1,…,Hp using Algorithm 1 and applying the Householder QR algorithm [6]. Then, we have
ˆR=Hp…H1A, ˆd=Hp…H1f1, |
where ˆQ=H1…Hp and Nc=Hp…H1Mc. This gives positive diagonal values of ˆR and is also economical with respect to storage requirements and computation time [6].
Appending matrix Nc given in Eq (3.2) to the right end of the upper triangular matrix ˆR in (3.1), we get
ˊR=[ˆR(1:p,1:p)Nc(1:p,1:q)]∈Rp×(p+q). | (3.3) |
Here, if the factor ˊR already has upper triangular structure, then ˊR=ˉR. Otherwise, we use Algorithm 1 to form the Householder matrices Hp+1,…,Hp+q and apply them to ˊR as
ˉR=Hp+q…Hp+1ˊR and ˉd=Hp+q…Hp+1ˆd, | (3.4) |
we obtain the upper triangular matrix ˉR.
Now, the matrix Mr=(BT −C) and its corresponding RHS f2∈Rq are appended to the factor ˉR and to ˉd from (3.4), respectively:
ˉRr=\begin{pmatrix} ˉR(1:p,\,1:p+q) \\ Mr(1:q,\,1:p+q) \end{pmatrix} \quad \text{and} \quad ˉdr=\begin{pmatrix} ˉd(1:p) \\ f_2(1:q) \end{pmatrix}.
Using Algorithm 1 to build the Householder matrices H1,…,Hp+q and applying them to ˉRr and its RHS ˉdr, we obtain
˜R=Hp+q…H1(ˉRMr), ˜d=Hp+q…H1(ˉdf2). |
Hence, we determine the solution of problem (1.1) as ˜z=backsub(˜R,˜d), where backsub performs backward substitution on the triangular system.
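The whole pipeline of this section can be condensed as follows (a NumPy sketch of ours; for brevity we keep ˆQ explicitly and replace the Householder sweeps of (3.3)–(3.4) with a second dense QR, which is mathematically equivalent but forgoes the storage savings the paper aims for):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 6, 4
A = rng.standard_normal((p, p))
B = rng.standard_normal((p, q))
C = rng.standard_normal((q, q))
f1 = rng.standard_normal(p)
f2 = rng.standard_normal(q)

# Step 1 (Eqs (3.1)-(3.2)): QR of A alone; the paper avoids storing Q-hat
# and works with Householder factors instead.
Qh, Rh = np.linalg.qr(A)
dh = Qh.T @ f1
Nc = Qh.T @ B

# Step 2: append Nc on the right (Eq (3.3)), then the rows M_r = (B^T  -C)
# and f2 at the bottom, and re-triangularize (here via a second dense QR).
Rr = np.vstack([np.hstack([Rh, Nc]),
                np.hstack([B.T, -C])])
dr = np.concatenate([dh, f2])
Q2, Rt = np.linalg.qr(Rr)
dt = Q2.T @ dr

# Step 3: back substitution on the triangular system (cf. backsub)
z = np.linalg.solve(Rt, dt)

M = np.block([[A, B], [B.T, -C]])
print(np.allclose(M @ z, np.concatenate([f1, f2])))   # True
```

The key identity is that pre-multiplying M by diag(ˆQT, I) yields exactly the stacked matrix Rr, so triangularizing Rr solves the original system.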
The algorithmic representation of the above procedure for solving problem (1.1) is given in Algorithm 2.
Algorithm 2 Algorithm for solution of problem (1.1) |
Input: A∈Rp×p, B∈Rp×q, C∈Rq×q, f1∈Rp, f2∈Rq
Output: ˜z∈Rp+q
[ˆQ,ˆR]=qr(A), ˆd=ˆQTf1, and Nc=ˆQTMc
ˆR(1:p,q+1:p+q)=Nc(1:p,1:q)
if p≤p+q then
  ˉR=triu(ˆR), ˉd=ˆd
else
  for m=p−1 to min(p,p+q) do
    [u,τ,ˆR(m,m)]=householder(ˆR(m,m),ˆR(m+1:p,m))
    W=ˆR(m,m+1:p+q)+uTˆR(m+1:p,m+1:p+q)
    ˆR(m,m+1:p+q)=ˆR(m,m+1:p+q)−τW
    if m<p+q then
      ˆR(m+1:p,m+1:p+q)=ˆR(m+1:p,m+1:p+q)−τuW
    end if
    ˉd(m:p)=ˆd(m:p)−τ(1u)(1uT)ˆd(m:p)
  end for
  ˉR=triu(ˆR)
end if
for m=1 to min(p,p+q) do
  [u,τ,ˉR(m,m)]=householder(ˉR(m,m),Mr(1:q,m))
  W1=ˉR(m,m+1:p+q)+uTMr(1:q,m+1:p+q)
  ˉR(m,m+1:p+q)=ˉR(m,m+1:p+q)−τW1
  if m<p+q then
    Mr(1:q,m+1:p+q)=Mr(1:q,m+1:p+q)−τuW1
  end if
  ˉdm=ˉd(m)
  ˉd(m)=(1−τ)ˉd(m)−τuTf2(1:q)
  f3(1:q)=f2(1:q)−τuˉdm−τu(uTf2(1:q))
end for
if p<p+q then
  [ˊQr,ˊRr]=qr(Mr(:,p+1:p+q))
  ˉR(p+1:p+q,p+1:p+q)=ˊRr
  f3=ˊQTrf2
end if
˜R=triu(ˉR)
˜d=f3
˜z=backsub(˜R(1:p+q,1:p+q),˜d(1:p+q))
To demonstrate the applications and accuracy of our suggested algorithm, we present in this section several numerical tests performed in MATLAB. Let z=(x,y)T be the actual solution of problem (1.1), where x=ones(p,1) and y=ones(q,1), and let ˜z be the solution computed by our proposed Algorithm 2. In our test examples, we consider randomly generated test problems of different sizes and compare the results with the block classical Gram-Schmidt re-orthogonalization method (BCGS2) [25]. Dense matrices are taken in our test problems. We carried out numerical experiments as follows.
Example 1. We consider
A=\frac{A_1+A_1'}{2}, \quad B=randn(′state′,0), \quad \text{and} \quad C=\frac{C_1+C_1'}{2},
where randn(′state′,0) is the MATLAB command used to reset the random number generator to its initial state; A1=P1D1P′1 and C1=P2D2P′2, where P1=orth(rand(p)) and P2=orth(rand(q)) are random orthogonal matrices, and D1=logspace(0,−k,p) and D2=logspace(0,−k,q) are diagonal matrices generating p and q points between the decades 1 and 10−k, respectively. We describe the test matrices in Table 1 by giving their sizes and condition numbers κ. The condition number κ of a matrix S is defined as κ(S)=||S||2||S−1||2. Moreover, the results comparison and the numerical backward error tests of the algorithm are given in Tables 2 and 3, respectively.
Problem | size(A) | κ(A) | size(B) | κ(B) | size(C) | κ(C) |
(1) | 16×16 | 1.0000e+05 | 16×9 | 6.1242 | 9×9 | 1.0000e+05 |
(2) | 120×120 | 1.0000e+05 | 120×80 | 8.4667 | 80×80 | 1.0000e+05 |
(3) | 300×300 | 1.0000e+06 | 300×200 | 9.5799 | 200×200 | 1.0000e+06 |
(4) | 400×400 | 1.0000e+07 | 400×300 | 13.2020 | 300×300 | 1.0000e+07 |
(5) | 900×900 | 1.0000e+08 | 900×700 | 15.2316 | 700×700 | 1.0000e+08 |
Problem | size(M) | κ(M) | ||z−˜z||2/||z||2 | ||z−zBCGS2||2/||z||2 |
(1) | 25×25 | 7.7824e+04 | 6.9881e-13 | 3.3805e-11 |
(2) | 200×200 | 2.0053e+06 | 4.3281e-11 | 2.4911e-09 |
(3) | 500×500 | 3.1268e+07 | 1.0582e-09 | 6.3938e-08 |
(4) | 700×700 | 3.5628e+08 | 2.8419e-09 | 4.3195e-06 |
(5) | 1600×1600 | 2.5088e+09 | 7.5303e-08 | 3.1454e-05 |
Problem | ||M−˜Q˜R||F/||M||F | ||I−˜QT˜Q||F |
(1) | 6.7191e-16 | 1.1528e-15 |
(2) | 1.4867e-15 | 2.7965e-15 |
(3) | 2.2052e-15 | 4.1488e-15 |
(4) | 2.7665e-15 | 4.9891e-15 |
(5) | 3.9295e-15 | 6.4902e-15 |
The relative errors of the presented algorithm and their comparison with the BCGS2 method in Table 2 show that the algorithm is applicable and has good accuracy. Moreover, the numerical results for the backward stability analysis of the suggested updating algorithm are given in Table 3.
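For readers without MATLAB, the Example 1 test matrices can be reproduced along these lines (a NumPy sketch of ours; np.linalg.qr of a random matrix stands in for orth(rand(p)), and B is taken as a random p×q matrix, since the text only specifies that the generator is reseeded):

```python
import numpy as np

def example1_matrices(p, q, k, seed=0):
    """Generate test matrices in the spirit of Example 1 (assumed reading:
    A1 = P1*D1*P1', C1 = P2*D2*P2', A = (A1+A1')/2, C = (C1+C1')/2)."""
    rng = np.random.default_rng(seed)
    P1, _ = np.linalg.qr(rng.random((p, p)))   # stands in for orth(rand(p))
    P2, _ = np.linalg.qr(rng.random((q, q)))
    D1 = np.diag(np.logspace(0, -k, p))        # p points between 1 and 10^-k
    D2 = np.diag(np.logspace(0, -k, q))
    A1 = P1 @ D1 @ P1.T
    C1 = P2 @ D2 @ P2.T
    A = (A1 + A1.T) / 2                        # symmetrize
    C = (C1 + C1.T) / 2
    B = rng.standard_normal((p, q))            # assumed; κ(B) stays modest
    return A, B, C

A, B, C = example1_matrices(16, 9, 5)
print(np.linalg.cond(A))     # ≈ 1e5, matching κ(A) of problem (1) in Table 1
```

Since the eigenvalues of A are exactly the logspace values, κ(A) equals 10^k by construction, which explains the condition numbers reported in Table 1.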
Example 2. In this experiment, we consider A=H, where H is a Hilbert matrix generated with the MATLAB command hilb(p). It is a symmetric, positive definite, and ill-conditioned matrix. Moreover, we consider test matrices B and C similar to those given in Example 1 but with different dimensions. Tables 4–6 describe the test matrices, the numerical results, and the backward error results, respectively.
Problem | size(A) | κ(A) | size(B) | κ(B) | size(C) | κ(C) |
(6) | 6×6 | 1.4951e+07 | 6×3 | 2.6989 | 3×3 | 1.0000e+05 |
(7) | 8×8 | 1.5258e+10 | 8×4 | 2.1051 | 4×4 | 1.0000e+06 |
(8) | 12×12 | 1.6776e+16 | 12×5 | 3.6108 | 5×5 | 1.0000e+07 |
(9) | 13×13 | 1.7590e+18 | 13×6 | 3.5163 | 6×6 | 1.0000e+10 |
(10) | 20×20 | 2.0383e+18 | 20×10 | 4.4866 | 10×10 | 1.0000e+10 |
Problem | size(M) | κ(M) | ||z−˜z||2/||z||2 | ||z−zBCGS2||2/||z||2 |
(6) | 9×9 | 8.2674e+02 | 9.4859e-15 | 2.2003e-14 |
(7) | 12×12 | 9.7355e+03 | 2.2663e-13 | 9.3794e-13 |
(8) | 17×17 | 6.8352e+08 | 6.8142e-09 | 1.8218e-08 |
(9) | 19×19 | 2.3400e+07 | 2.5133e-10 | 1.8398e-09 |
(10) | 30×30 | 8.0673e+11 | 1.9466e-05 | 1.0154e-03 |
Problem | ||M−˜Q˜R||F/||M||F | ||I−˜QT˜Q||F |
(6) | 5.0194e-16 | 6.6704e-16 |
(7) | 8.4673e-16 | 1.3631e-15 |
(8) | 7.6613e-16 | 1.7197e-15 |
(9) | 9.1814e-16 | 1.4360e-15 |
(10) | 7.2266e-16 | 1.5554e-15 |
From Table 5, it can be seen that the presented algorithm is applicable and shows good accuracy. Table 6 numerically illustrates the backward error results of the proposed Algorithm 2.
In this article, we have considered the saddle point problem and studied an updating Householder QR factorization technique to compute its solution. The results of the considered test problems with dense matrices demonstrate that the proposed algorithm is applicable and achieves good accuracy in solving saddle point problems. In the future, the problem can be studied further for sparse data problems, which frequently arise in many applications. For such problems, updating the Givens QR factorization will be effective in avoiding unnecessary fill-in in sparse data matrices.
The authors Aziz Khan, Bahaaeldin Abdalla, and Thabet Abdeljawad would like to thank Prince Sultan University for paying the APC and for support through the TAS research lab.
The authors declare that they have no competing interests.
Algorithm 1 Computing parameter τ and Householder vector u [6]
Input: t ∈ R^q
Output: u, τ
σ = ||t(2:q)||₂², u = t, u(1) = 1
if σ = 0 then
    τ = 0
else
    μ = √(t(1)² + σ)
    if t(1) ≤ 0 then
        u(1) = t(1) − μ
    else
        u(1) = −σ/(t(1) + μ)
    end if
    τ = 2u(1)²/(σ + u(1)²)
    u = u/u(1)
end if
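As a concrete check of the steps above, here is a minimal pure-Python sketch of Algorithm 1. The function name `householder` and the list-based vector representation are illustrative choices, not from the paper:

```python
import math

def householder(t):
    """Compute tau and Householder vector u from a vector t (Algorithm 1).

    Returns (u, tau) with u[0] normalized to 1, such that the reflector
    H = I - tau * u * u^T maps t onto a multiple of the first unit vector.
    """
    # sigma collects the squared trailing entries t(2:q); with
    # mu = sqrt(t1^2 + sigma), mu equals the 2-norm of t.
    sigma = sum(x * x for x in t[1:])
    u = list(t)
    u[0] = 1.0
    if sigma == 0.0:
        tau = 0.0  # t is already a multiple of e1, so H = I
    else:
        mu = math.sqrt(t[0] * t[0] + sigma)
        if t[0] <= 0.0:
            u1 = t[0] - mu
        else:
            # Equivalent to t1 - mu, but avoids cancellation when t1 > 0.
            u1 = -sigma / (t[0] + mu)
        tau = 2.0 * u1 * u1 / (sigma + u1 * u1)
        u = [1.0] + [x / u1 for x in t[1:]]
    return u, tau
```

The two branches for u(1) compute the same quantity analytically; the −σ/(t(1)+μ) form is used when t(1) > 0 precisely to avoid subtractive cancellation between two nearly equal numbers.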
Algorithm 2 Algorithm for solution of problem (1.1)
Input: A ∈ R^{p×p}, B ∈ R^{p×q}, C ∈ R^{q×q}, f1 ∈ R^p, f2 ∈ R^q
Output: ˜z ∈ R^{p+q}
[ˆQ, ˆR] = qr(A), ˆd = ˆQᵀf1, and Nc = ˆQᵀMc
ˆR(1:p, q+1:p+q) = Nc(1:p, 1:q)
if p ≤ p+q then
    ˉR = triu(ˆR), ˉd = ˆd
else
    for m = p−1 to min(p, p+q) do
        [u, τ, ˆR(m,m)] = householder(ˆR(m,m), ˆR(m+1:p, m))
        W = ˆR(m, m+1:p+q) + uᵀˆR(m+1:p, m+1:p+q)
        ˆR(m, m+1:p+q) = ˆR(m, m+1:p+q) − τW
        if m < p+q then
            ˆR(m+1:p, m+1:p+q) = ˆR(m+1:p, m+1:p+q) − τuW
        end if
        ˉd(m:p) = ˆd(m:p) − τ[1; u][1; u]ᵀˆd(m:p)
    end for
    ˉR = triu(ˆR)
end if
for m = 1 to min(p, p+q) do
    [u, τ, ˉR(m,m)] = householder(ˉR(m,m), Mr(1:q, m))
    W1 = ˉR(m, m+1:p+q) + uᵀMr(1:q, m+1:p+q)
    ˉR(m, m+1:p+q) = ˉR(m, m+1:p+q) − τW1
    if m < p+q then
        Mr(1:q, m+1:p+q) = Mr(1:q, m+1:p+q) − τuW1
    end if
    ˉdm = ˉd(m)
    ˉd(m) = (1−τ)ˉd(m) − τuᵀf2(1:q)
    f3(1:q) = f2(1:q) − τuˉdm − τu(uᵀf2(1:q))
end for
if p < p+q then
    [ˊQr, ˊRr] = qr(Mr(:, p+1:p+q))
    ˉR(p+1:p+q, p+1:p+q) = ˊRr
    f3 = ˊQrᵀf2
end if
˜R = triu(ˉR)
˜d = f3
˜z = backsub(˜R(1:p+q, 1:p+q), ˜d(1:p+q))
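Setting aside the block-partitioned updates specific to Algorithm 2, its core pattern is generic: annihilate the subdiagonal of each column with a Householder reflector, then back-substitute against the resulting triangular factor. A minimal sketch of that pattern for a dense square system, under the assumption of an unstructured matrix (the names `qr_solve` and `householder` are illustrative, not from the paper):

```python
import math

def householder(t):
    """Parameter tau and Householder vector u for t, as in Algorithm 1."""
    sigma = sum(x * x for x in t[1:])
    if sigma == 0.0:
        u = list(t)
        u[0] = 1.0
        return u, 0.0
    mu = math.sqrt(t[0] * t[0] + sigma)
    u1 = t[0] - mu if t[0] <= 0.0 else -sigma / (t[0] + mu)
    tau = 2.0 * u1 * u1 / (sigma + u1 * u1)
    return [1.0] + [x / u1 for x in t[1:]], tau

def qr_solve(M, f):
    """Solve M z = f: triangularize with Householder reflectors applied
    column by column, then back-substitute (the backsub step)."""
    n = len(M)
    R = [row[:] for row in M]
    d = list(f)
    for m in range(n - 1):
        u, tau = householder([R[i][m] for i in range(m, n)])
        # Apply H = I - tau*u*u^T to the trailing submatrix and to d.
        for j in range(m, n):
            w = sum(u[i - m] * R[i][j] for i in range(m, n))
            for i in range(m, n):
                R[i][j] -= tau * w * u[i - m]
        w = sum(u[i - m] * d[i] for i in range(m, n))
        for i in range(m, n):
            d[i] -= tau * w * u[i - m]
    # Back substitution on the upper-triangular factor.
    z = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = d[i] - sum(R[i][j] * z[j] for j in range(i + 1, n))
        z[i] = s / R[i][i]
    return z
```

The reflectors are never formed explicitly; each is applied as a rank-one update, which is exactly how the W and W1 updates in Algorithm 2 are organized.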
Problem | size(A) | κ(A) | size(B) | κ(B) | size(C) | κ(C) |
(1) | 16×16 | 1.0000e+05 | 16×9 | 6.1242 | 9×9 | 1.0000e+05 |
(2) | 120×120 | 1.0000e+05 | 120×80 | 8.4667 | 80×80 | 1.0000e+05 |
(3) | 300×300 | 1.0000e+06 | 300×200 | 9.5799 | 200×200 | 1.0000e+06 |
(4) | 400×400 | 1.0000e+07 | 400×300 | 13.2020 | 300×300 | 1.0000e+07 |
(5) | 900×900 | 1.0000e+08 | 900×700 | 15.2316 | 700×700 | 1.0000e+08 |
Problem | size(M) | κ(M) | ||z−˜z||₂/||z||₂ | ||z−zBCGS2||₂/||z||₂ |
(1) | 25×25 | 7.7824e+04 | 6.9881e-13 | 3.3805e-11 |
(2) | 200×200 | 2.0053e+06 | 4.3281e-11 | 2.4911e-09 |
(3) | 500×500 | 3.1268e+07 | 1.0582e-09 | 6.3938e-08 |
(4) | 700×700 | 3.5628e+08 | 2.8419e-09 | 4.3195e-06 |
(5) | 1600×1600 | 2.5088e+09 | 7.5303e-08 | 3.1454e-05 |
Problem | ||M−˜Q˜R||F/||M||F | ||I−˜QᵀQ̃||F |
(1) | 6.7191e-16 | 1.1528e-15 |
(2) | 1.4867e-15 | 2.7965e-15 |
(3) | 2.2052e-15 | 4.1488e-15 |
(4) | 2.7665e-15 | 4.9891e-15 |
(5) | 3.9295e-15 | 6.4902e-15 |
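The two columns of the table above are the standard quality measures for a computed QR factorization: the relative residual ||M−Q̃R̃||_F/||M||_F and the orthogonality loss ||I−Q̃ᵀQ̃||_F. A short sketch of how such measures are evaluated, using a classical Gram–Schmidt factorization purely for illustration (the paper's factorization is Householder-based; `gram_schmidt_qr` and `qr_quality` are hypothetical names):

```python
import math

def gram_schmidt_qr(M):
    """Classical Gram-Schmidt QR of a square matrix (illustration only)."""
    n = len(M)
    cols = [[M[i][j] for i in range(n)] for j in range(n)]
    Q = []  # Q[j] holds the j-th orthonormal column
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for k in range(j):
            R[k][j] = sum(Q[k][i] * cols[j][i] for i in range(n))
            v = [vi - R[k][j] * qi for vi, qi in zip(v, Q[k])]
        R[j][j] = math.sqrt(sum(vi * vi for vi in v))
        Q.append([vi / R[j][j] for vi in v])
    return Q, R

def frob(M):
    """Frobenius norm of a matrix given as a list of rows."""
    return math.sqrt(sum(x * x for row in M for x in row))

def qr_quality(M):
    """Relative residual ||M - QR||_F/||M||_F and ||I - Q^T Q||_F."""
    n = len(M)
    Q, R = gram_schmidt_qr(M)
    QR = [[sum(Q[k][i] * R[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    E = [[M[i][j] - QR[i][j] for j in range(n)] for i in range(n)]
    G = [[sum(Q[i][k] * Q[j][k] for k in range(n)) - (1.0 if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return frob(E) / frob(M), frob(G)
```

Values near the unit roundoff (about 1e-16 in double precision), as reported in the table, indicate a backward-stable factorization with well-preserved orthogonality.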
Problem | size(A) | κ(A) | size(B) | κ(B) | size(C) | κ(C) |
(6) | 6×6 | 1.4951e+07 | 6×3 | 2.6989 | 3×3 | 1.0000e+05 |
(7) | 8×8 | 1.5258e+10 | 8×4 | 2.1051 | 4×4 | 1.0000e+06 |
(8) | 12×12 | 1.6776e+16 | 12×5 | 3.6108 | 5×5 | 1.0000e+07 |
(9) | 13×13 | 1.7590e+18 | 13×6 | 3.5163 | 6×6 | 1.0000e+10 |
(10) | 20×20 | 2.0383e+18 | 20×10 | 4.4866 | 10×10 | 1.0000e+10 |
Problem | size(M) | κ(M) | ||z−˜z||₂/||z||₂ | ||z−zBCGS2||₂/||z||₂ |
(6) | 9×9 | 8.2674e+02 | 9.4859e-15 | 2.2003e-14 |
(7) | 12×12 | 9.7355e+03 | 2.2663e-13 | 9.3794e-13 |
(8) | 17×17 | 6.8352e+08 | 6.8142e-09 | 1.8218e-08 |
(9) | 19×19 | 2.3400e+07 | 2.5133e-10 | 1.8398e-09 |
(10) | 30×30 | 8.0673e+11 | 1.9466e-05 | 1.0154e-03 |
Problem | ||M−˜Q˜R||F/||M||F | ||I−˜QᵀQ̃||F |
(6) | 5.0194e-16 | 6.6704e-16 |
(7) | 8.4673e-16 | 1.3631e-15 |
(8) | 7.6613e-16 | 1.7197e-15 |
(9) | 9.1814e-16 | 1.4360e-15 |
(10) | 7.2266e-16 | 1.5554e-15 |