The weak core inverse was introduced in [1], where the authors presented some of its characterizations and properties. In [2], the authors introduced an extension of the weak core inverse. Continuing this line of research, our purpose is to present new characterizations and representations of the weak core inverse. Additionally, we give several equivalent conditions for a matrix to be a weak core matrix.
Let Cm×n be the set of all m×n complex matrices and let Z+ denote the set of all positive integers. The symbols R(A), N(A), A∗, r(A) and In denote the range space, null space, conjugate transpose and rank of A∈Cm×n, and the identity matrix of order n, respectively. Ind(A) denotes the index of A∈Cn×n. Let Cn×nk be the set consisting of all n×n complex matrices with index k. The symbol dim(S) represents the dimension of a subspace S⊆Cn. PL stands for the orthogonal projection onto the subspace L. PA and PA∗ denote the orthogonal projections onto R(A) and R(A∗), respectively, i.e., PA=AA†, PA∗=A†A.
We will now introduce definitions of several generalized inverses that will be used throughout the paper. The Moore-Penrose inverse of A∈Cm×n, denoted by A†, is defined as the unique matrix X∈Cn×m satisfying [3]:
(1) AXA=A, (2) XAX=X, (3) (AX)∗=AX, (4) (XA)∗=XA. |
In particular, X is called an outer inverse of A, denoted by A(2), if XAX=X. For any matrix A∈Cm×n with r(A)=r, let T⊆Cn, S⊆Cm be two subspaces such that dim(T)=t≤r and dim(S)=m−t. Then A has an outer inverse X satisfying R(X)=T and N(X)=S if and only if AT⊕S=Cm. In that case, X is unique and denoted by A(2)T,S [4].
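As a quick numerical illustration (a sketch assuming NumPy is available, not part of the paper), the four Penrose equations can be checked directly for the pseudoinverse returned by `np.linalg.pinv`; the sample matrix is the one later used in (4.5):

```python
import numpy as np

# Any complex (here real) matrix will do; this is the matrix (4.5) used later.
A = np.array([[2.0, 2, 1, 0],
              [3, 4, 2, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])

X = np.linalg.pinv(A)  # Moore-Penrose inverse A†

assert np.allclose(A @ X @ A, A)             # (1) AXA = A
assert np.allclose(X @ A @ X, X)             # (2) XAX = X
assert np.allclose((A @ X).conj().T, A @ X)  # (3) (AX)* = AX
assert np.allclose((X @ A).conj().T, X @ A)  # (4) (XA)* = XA
```

The uniqueness of A† means any method producing a matrix satisfying all four equations must return the same result as `np.linalg.pinv` up to roundoff.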
The Drazin inverse of A∈Cn×nk, denoted by AD, is the unique matrix X∈Cn×n satisfying [5]: XAX=X, AX=XA, XAk+1=Ak.
For any matrix A∈Cn×n1, a new generalized inverse called the core inverse [6] was introduced. Two generalizations of the core inverse for A∈Cn×nk, namely the core-EP inverse [7] and the DMP inverse [8], were also introduced.
In 2018, Wang and Chen [9] defined the weak group inverse of A∈Cn×nk, denoted by AⓌ, as the unique matrix X∈Cn×n such that [9]: AX2=X, AX=A②A, where A② denotes the core-EP inverse of A. Moreover, it was verified that AⓌ=(A②)2A.
Recently, Ferreyra et al. introduced a new generalization of core inverse called the weak core inverse of A∈Cn×nk, denoted by AⓌ,†(or, in short, WC inverse). It is defined as the unique matrix X∈Cn×n satisfying [1]:
XAX=X, AX=CA†, XA=ADC, |
where C=AAⓌA. Moreover, it is proved that AⓌ,†=ADCA†=AⓌAA†.
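For concreteness, the identity AⓌ,†=AⓌAA† stated above can be turned into a small numerical routine. The sketch below (a NumPy-based illustration, not the authors' code) additionally relies on two standard representations from the generalized-inverse literature, not derived in this paper: the core-EP inverse A②=Ak(Ak+1)†, AⓌ=(A②)2A, and AD=Ak(A2k+1)†Ak. It then checks the three defining equations on the matrix of Example 3.4:

```python
import numpy as np

def wc_inverse(A, k):
    """Weak core inverse A^(W,dag) = A^W A A_dag, assembled from two
    standard representations from the literature: the core-EP inverse
    A_cep = A^k (A^(k+1))^dag and A^W = A_cep^2 A (Wang-Chen)."""
    Ak = np.linalg.matrix_power(A, k)
    A_cep = Ak @ np.linalg.pinv(np.linalg.matrix_power(A, k + 1))
    AW = A_cep @ A_cep @ A
    return AW @ A @ np.linalg.pinv(A)

# Matrix of Example 3.4, with Ind(A) = 2.
A = np.array([[1.0, 0, 0],
              [0, 0, 3],
              [0, 0, 0]])
k = 2
X = wc_inverse(A, k)

# Check the three defining equations, with C = A A^W A and the Drazin
# inverse computed by the standard formula A^D = A^k (A^(2k+1))^dag A^k.
Ak = np.linalg.matrix_power(A, k)
Ad = Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak
A_cep = Ak @ np.linalg.pinv(np.linalg.matrix_power(A, k + 1))
AW = A_cep @ A_cep @ A
C = A @ AW @ A
assert np.allclose(X @ A @ X, X)                  # XAX = X
assert np.allclose(A @ X, C @ np.linalg.pinv(A))  # AX  = C A_dag
assert np.allclose(X @ A, Ad @ C)                 # XA  = A^D C
```

Because the WC inverse is unique, any implementation satisfying the three equations must agree with the value reported in Example 3.4.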
The structure of this paper is as follows: In Section 2, we give some preliminaries which will be used later in this paper. In Section 3, we discuss some characterizations of the WC inverse based on its range space, null space and matrix equations. In Section 4, several new representations of the WC inverse are proposed. Section 5 is devoted to deriving some properties of the WC inverse by the core-EP decomposition. Moreover, in Section 6, we present several equivalent conditions for a matrix to be a weak core matrix.
For convenience, we will use the following notations: CCMn, CEPn, CPn and COPn will denote the subsets of Cn×n consisting of core matrices, EP matrices, idempotent matrices and Hermitian idempotent matrices, respectively, i.e.,
● CCMn={A∣A∈Cn×n,r(A2)=r(A)};
● CEPn={A∣A∈Cn×n,R(A)=R(A∗)};
● CPn={A∣A∈Cn×n,A2=A};
● COPn={A∣A∈Cn×n,A2=A=A∗}.
Before giving characterizations of the WC inverse, we first present the following auxiliary lemmas which will be repeatedly used throughout this paper.
Lemma 2.1. [10] Let A∈Cn×nk. Then A can be represented as
A=U[TS0N]U∗, | (2.1) |
where T∈Ct×t is nonsingular and t=r(T)=r(Ak), N is nilpotent with index k, and U∈Cn×n is unitary.
Moreover, the representation of A given by (2.1) is unique [10,Theorem 2.4]. In that case, we have that
Ak=U[Tk˜T00]U∗, | (2.2) |
where ˜T=k−1∑j=0TjSNk−1−j.
Lemma 2.2. [1,9,10,11,12] Let A∈Cn×nk be given by (2.1). Then:
A†=U[T∗△−T∗△SN†(In−t−N†N)S∗△N†−(In−t−N†N)S∗△SN†]U∗, | (2.3) |
AD=U[T−1(Tk+1)−1˜T00]U∗, | (2.4) |
A②=U[T−1000]U∗, | (2.5)
AD,†=U[T−1(Tk+1)−1˜TNN†00]U∗, | (2.6) |
A†,D=U[T∗△T∗△T−k˜T(In−t−N†N)S∗△(In−t−N†N)S∗△T−k˜T]U∗, | (2.7) |
AⓌ=U[T−1T−2S00]U∗, | (2.8) |
AⓌ,†=U[T−1T−2SNN†00]U∗, | (2.9) |
where ˜T=k−1∑j=0TjSNk−1−j and △=[TT∗+S(In−t−N†N)S∗]−1.
˜T and △ will often be used throughout this paper.
Lemma 2.3. [1] Let A∈Cn×nk. Then
(a) AⓌ,†=A(2)R(Ak), N((Ak)∗A2A†);
(b) AAⓌ,†=PR(Ak), N((Ak)∗A2A†);
(c) AⓌ,†A=PR(Ak), N((Ak)∗A2).
Lemma 2.4. Let A∈Cn×nk and C=AAⓌA. The following conditions hold:
(a) [1]
(b) [1] r(AⓌ,†)=r(Ak);
(c) [1] CA†C=C;
(d) [9] AⓌAk+1=Ak;
(e) Ck=AkAⓌA.
Proof. Item (e) can be directly verified by (2.1), (2.2) and (2.8).
Applying existing results for the WC inverse, together with R(X)=R(Ak) and N(X)=N((Ak)∗A2A†), we obtain some new characterizations of the WC inverse in the next theorem.
Theorem 3.1. Let A∈Cn×nk and C=AAⓌA. The following statements are equivalent:
(a) X=AⓌ,†;
(b) N(X)=N((Ak)∗A2A†) and XAA∗=AⓌAA∗;
(c) N(X)=N((Ak)∗A2A†) and XA=AⓌA;
(d) R(X)=R(Ak) and A∗AX=A∗CA†;
(e) R(X)=R(Ak) and AX=CA†;
(f) R(X)=R(Ak) and AkX=CkA†;
(g) R(X)=R(Ak), N(X)=N((Ak)∗A2A†) and XAAⓌ=AⓌ;
(h) R(X)=R(Ak), N(X)=N((Ak)∗A2A†) and XAk+1=Ak.
Proof. (a)⇒(b). By the definition of AⓌ,†, we have that XAA∗=ADCA∗=ADAAⓌAA∗=AⓌAA∗. Hence, by (a) of Lemma 2.3, we now obtain that (b) holds.
(b)⇒(c). Postmultiplying XAA∗=AⓌAA∗ by (A†)∗, we obtain that XA=AⓌA.
(c)⇒(d). From N(X)=N((Ak)∗A2A†), we have that N(AA†)⊆N((Ak)∗A2A†)=N(X), which leads to X=XAA†. Thus we get that X=XAA†=AⓌAA†=AⓌ,† by XA=AⓌA. Hence, by the definition of AⓌ,† and (a) of Lemma 2.3, we have that (d) holds.
(d)⇒(e). Evident.
(e)⇒(f). Since C=AAⓌA and Ck=AkAⓌA, premultiplying AX=CA† by Ak−1, we have that AkX=CkA†.
(f)⇒(g). From (2.2) and R(X)=R(Ak), we can set X=U[X1X200]U∗, where X1∈Ct×t, X2∈Ct×(n−t) and t=r(Ak). Furthermore, it follows from AkX=CkA† and (2.9) that X=AⓌ,†. Therefore, by the definition of AⓌ,† and (a) of Lemma 2.3, we obtain that (g) holds.
(g)⇒(h). It follows from AⓌAk+1=Ak and XAAⓌ=AⓌ that XAk+1=XAAⓌAk+1=AⓌAk+1=Ak.
(h)⇒(a). By R(X)=R(Ak) and XAk+1=Ak, we get that XAX=X. Hence, by (a) of Lemma 2.3, we get that X=AⓌ,†.
Now we will consider other characterizations of the WC inverse by the fact that AⓌ,†AAⓌ,†=AⓌ,†.
Theorem 3.2. Let A∈Cn×nk and C=AAⓌA. The following statements are equivalent:
(a) X=AⓌ,†;
(b) XAX=X, R(X)=R(Ak) and N(X)=N((Ak)∗A2A†);
(c) XAX=X, R(X)=R(Ak) and AX=CA†;
(d) XAX=X, AX=CA† and XAk=AⓌAk;
(e) XAX=X, XA=AⓌA and AkX=CkA†;
(f) XAX=X, XA=AⓌA and N(X)=N((Ak)∗A2A†).
Proof. (a)⇒(b). The proof can be demonstrated by (a) of Lemma 2.3.
(b)⇒(c). By the definition of AⓌ,† and (b) of Lemma 2.3, we get that AX∈CPn, R(AX)=AR(X)=R(Ak+1)=R(Ak)=R(AAⓌ,†)=R(CA†) and N(AX)=N(X)=N((Ak)∗A2A†)=N(AAⓌ,†)=N(CA†). On the other hand, Lemma 2.4 (c) implies CA†∈CPn, hence AX=CA†.
(c)⇒(d). By item (c) of Lemma 2.3, we obtain that R(X)=R(Ak)=R(AⓌ,†A). So we get that AⓌ,†AX=X, which implies that XAk=AⓌ,†AXAk=AⓌ,†CA†Ak=AⓌ,†AAⓌAA†Ak=AⓌAk.
(d)⇒(e). By conditions and AAⓌ=Ak(AⓌ)k, we can infer that X=XCA†=XAAⓌAA†=XAk(AⓌ)kAA†=AⓌAk(AⓌ)kAA†=AⓌ,†. Hence, by AⓌ,†=AⓌAA† and AkAⓌA=Ck, we obtain that (e) holds.
(e)⇒(f). Since XAX=X, XA=AⓌA, we have that R(X)=R(XA)=R(AⓌA)=R(Ak). We now obtain that X=AⓌ,† by (f) of Theorem 3.1. Hence (f) holds by (a) of Lemma 2.3.
(f)⇒(a). It follows from XAX=X that N(AX)=N(X), by conditions and (a) of Lemma 2.3. We now obtain that X=XAX=AⓌAX=AⓌAA†AX=AⓌ,†PR(AX), N(AX)=AⓌ,†.
Notice the fact that XAk+1=Ak if X=AⓌ,†. Therefore, we will characterize the WC inverse in terms of AⓌ,†Ak+1=Ak.
Theorem 3.3. Let A∈Cn×nk and C=AAⓌA. The following statements are equivalent:
(a) X=AⓌ,†;
(b) XAk+1=Ak, A∗AX=A∗CA† and r(X)=r(Ak);
(c) XAk+1=Ak, AX=CA† and r(X)=r(Ak);
(d) XAk+1=Ak, AkX=CkA† and r(X)=r(Ak).
Proof. (a)⇒(b). Since AⓌ,†=AⓌAA†, we can show that XAk+1=Ak, A∗AX=A∗CA†. Then, by (b) of Lemma 2.4, we get that (b) holds.
(b)⇒(c). Obviously.
(c)⇒(d). Premultiplying AX=CA† by Ak−1, we have that AkX=CkA† from AkAⓌA=Ck.
(d)⇒(a). It follows from XAk+1=Ak and r(X)=r(Ak) that R(X)=R(Ak). Hence, we obtain that X=AⓌ,† from (f) of Theorem 3.1.
In the following example, we show that the condition r(X)=r(Ak) in Theorem 3.3 is necessary.
Example 3.4. Let
A=[1 0 0; 0 0 3; 0 0 0], X=[1 0 0; 0 0 2; 0 0 0]. |
Then Ind(A)=2,
A†=[1 0 0; 0 0 0; 0 1/3 0], C=[1 0 0; 0 0 0; 0 0 0] and AⓌ,†=[1 0 0; 0 0 0; 0 0 0]. |
It can be directly verified that XA3=A2, A∗AX=A∗CA† and r(X)≠r(A2), but X≠AⓌ,†. The other cases follow similarly.
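The claims of Example 3.4 can be confirmed numerically; in the NumPy sketch below, C and AⓌ,† are entered from the values displayed above:

```python
import numpy as np

# Matrices from Example 3.4.
A = np.array([[1.0, 0, 0],
              [0, 0, 3],
              [0, 0, 0]])
X = np.array([[1.0, 0, 0],
              [0, 0, 2],
              [0, 0, 0]])
A2 = A @ A
Apinv = np.linalg.pinv(A)
C = np.array([[1.0, 0, 0], [0, 0, 0], [0, 0, 0]])    # C = A A^W A
AWd = np.array([[1.0, 0, 0], [0, 0, 0], [0, 0, 0]])  # A^(W,dag)

assert np.allclose(X @ A @ A2, A2)                   # X A^3 = A^2
assert np.allclose(A.T @ A @ X, A.T @ C @ Apinv)     # A*AX = A*C A_dag
assert np.linalg.matrix_rank(X) != np.linalg.matrix_rank(A2)  # r(X) != r(A^2)
assert not np.allclose(X, AWd)                       # yet X != A^(W,dag)
```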
By Lemma 2.3, it is clear that AX=PR(Ak), N((Ak)∗A2A†) and XA=PR(Ak), N((Ak)∗A2) if X=AⓌ,†. However, the converse is invalid as shown in the next example:
Example 3.5. Let A, X be the same as in Example 3.4. Then
AX=[1 0 0; 0 0 0; 0 0 0], XA=[1 0 0; 0 0 0; 0 0 0] and AⓌ,†=[1 0 0; 0 0 0; 0 0 0]. |
It can be directly verified that AX=PR(A2), N((A2)∗A2A†) and XA=PR(A2), N((A2)∗A2), but X≠AⓌ,†.
In the next result, we will present some new equivalent conditions for the converse implication:
Theorem 3.6. Let A∈Cn×nk and X∈Cn×n. The following statements are equivalent:
(a) X=AⓌ,†;
(b) AX=PR(Ak), N((Ak)∗A2A†),XA=PR(Ak), N((Ak)∗A2) and r(X)=r(Ak);
(c) AX=PR(Ak), N((Ak)∗A2A†),XA=PR(Ak), N((Ak)∗A2) and XAX=X;
(d) AX=PR(Ak), N((Ak)∗A2A†),XA=PR(Ak), N((Ak)∗A2) and AX2=X.
Proof. (a)⇒(b). The proof can be demonstrated by (b) and (c) of Lemma 2.3 and (b) of Lemma 2.4.
(b)⇒(c). By R(XA)=R(Ak) and r(X)=r(Ak), we obtain that R(X)=R(XA)=R(Ak), hence we further derive that XAX=X.
(c)⇒(d). By conditions and (a) of Lemma 2.3, we have that X=AⓌ,†. Therefore, by (2.9), it can be directly verified that AX2=X.
(d)⇒(a). From AX2=X, we have that X=AX2=A2X3=⋯=AkXk+1, which implies R(X)⊆R(Ak). Combining with the condition R(Ak)=R(XA)⊆R(X), we get that R(X)=R(Ak). From (2.2), we now set X=U[X1X200]U∗, where X1∈Ct×t, X2∈Ct×(n−t) and t=r(Ak). On the other hand, it follows from N(AX)=N((Ak)∗A2A†) that (Ak)∗A2A†=(Ak)∗A2X, which yields X1=T−1 and X2=T−2SNN†. Therefore, by (2.9), we obtain that X=AⓌ,†.
In [1], the authors introduced the definition of AⓌ,† with an algebraic approach. In the next result, we will consider characterization of AⓌ,† with a geometrical point of view.
Theorem 3.7. Let A∈Cn×nk. Then:
(a) AⓌ,† is the unique matrix X that satisfies:
AX=PR(Ak), N((Ak)∗A2A†), R(X)⊆R(Ak). | (3.1) |
(b) AⓌ,† is the unique matrix X that satisfies:
XA=PR(Ak), N((Ak)∗A2), N(A∗)⊆N(X). | (3.2) |
Proof. (a). Since R(AD)=R(Ak), it is a consequence of [2,Corollary 3.2] and properties of the Drazin and Moore-Penrose inverses.
(b). By item (c) of Lemma 2.3, AⓌ,† satisfies XA=PR(Ak), N((Ak)∗A2). Additionally, we derive that N(A∗)=N(A†)⊆N(AⓌAA†)=N(X). Now it remains to prove that X is unique.
Assume that X1, X2 satisfy (3.2); then X1A=X2A, N(A∗)⊆N(X1) and N(A∗)⊆N(X2). Furthermore, we get that (X1−X2)A=0 and R(X∗i)⊆R(A) for i=1,2, which further imply that A∗(X∗1−X∗2)=0 and R(X∗1−X∗2)⊆R(A). Therefore we have that R(X∗1−X∗2)⊆R(A)∩N(A∗)=R(A)∩R(A)⊥={0}. Thus, X∗1=X∗2, i.e., X1=X2.
Remark 3.8. In Theorem 3.7, R(X)⊆R(Ak) in (3.1) can be replaced by R(X)=R(Ak). However, if we replace N(A∗)⊆N(X) with N(A∗)=N(X) in (3.2), item (b) of Theorem 3.7 does not hold.
Characterizations of some generalized inverses by using their block matrices have been investigated in [13,14,15,16,17]. In [18,Theorem 3.2], the authors presented a characterization of the WC inverse using its block matrices. Next, we give another proof of it by using characterizations of projection operators.
Theorem 3.9. Let A∈Cn×nk and r(Ak)=t. Then there exists a unique matrix P such that
P2=P, PAk=0, (Ak)∗A2P=0, r(P)=n−t, | (3.3) |
a unique matrix Q such that
Q2=Q, QAk=0, (Ak)∗A2A†Q=0, r(Q)=n−t, | (3.4) |
and a unique matrix X such that
r([A I−Q; I−P X])=r(A). | (3.5)
Furthermore, X is the WC inverse AⓌ,† of A and
P=PN((Ak)∗A2), R(Ak), Q=PN((Ak)∗A2A†), R(Ak). | (3.6) |
Proof. It is not difficult to prove that
the conditions in (3.3) hold ⟺ (I−P)2=I−P, (I−P)Ak=Ak, (Ak)∗A2(I−P)=(Ak)∗A2, r(P)=n−t ⟺ I−P=PR(Ak), N((Ak)∗A2) ⟺ P=PN((Ak)∗A2), R(Ak). |
Similarly, we can show that (3.4) has the unique solution Q=PN((Ak)∗A2A†), R(Ak).
Furthermore, comparing (3.6) and items (b) and (c) of Lemma 2.3 immediately leads to the conclusion that
r([A I−Q; I−P X])=r([A AAⓌ,†; AⓌ,†A X])=r(A)+r(X−AⓌ,†). |
By (3.5), we obtain that X=AⓌ,†.
In [19], Drazin introduced the (b,c)-inverse in semigroups. In [20], Benítez et al. investigated the (B,C)-inverse of A∈Cm×n, defined as the unique matrix X∈Cn×m satisfying [20]:
CAX=C, XAB=B, R(X)=R(B), N(X)=N(C), |
where B,C∈Cn×m. In the next result, we will show that the WC inverse is a special (B,C)-inverse.
Theorem 4.1. Let A∈Cn×nk. Then
AⓌ,†=A(Ak,(Ak)∗A2A†). |
Proof. According to Lemma 2.3, we get that
R(AⓌ,†)=R(Ak), N(AⓌ,†)=N((Ak)∗A2A†). |
Observe that AⓌ,†AAk=AⓌAk+1=Ak and (Ak)∗A2A†AAⓌ,†=(Ak)∗A2AⓌAA†=(Ak)∗A2A†. Thus, we obtain AⓌ,†=A(Ak,(Ak)∗A2A†).
In [21], the authors introduced the Bott-Duffin inverse of A∈Cn×n when APL+PL⊥ is nonsingular, i.e., A(−1)L=PL(APL+PL⊥)−1=PL(APL+I−PL)−1. In [22], the authors expressed the weak group inverse as a special Bott-Duffin inverse. Next we will show that the WC inverse of A is indeed the Bott-Duffin inverse of A2 with respect to R(Ak).
Theorem 4.2. Let A∈Cn×nk be given by (2.1). Then
AⓌ,†=(A2)(−1)(R(Ak))APA=(PAkA2PAk)†APA. | (4.1) |
Proof. It follows from (2.3) and (2.2) that
PA=U[It00NN†]U∗, | (4.2) |
PAk=U[It000]U∗. | (4.3) |
We now obtain that
(A2)(−1)(R(Ak))APA=PAk(A2PAk+I−PAk)−1APA=U[It000][T200In−t]−1[TS0N][It00NN†]U∗=U[T−1T−2SNN†00]U∗=AⓌ,†. |
Similarly, by a direct calculation, we can derive that AⓌ,†=(PAkA2PAk)†APA.
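The second expression in (4.1) is easy to check numerically on the matrix (4.5) of the next section. In the NumPy sketch below, the reference value uses the representation AⓌ,†=A2(A4)†A2A† from Example 4.5:

```python
import numpy as np

A = np.array([[2.0, 2, 1, 0],
              [3, 4, 2, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])   # matrix (4.5), k = Ind(A) = 2
Ak = A @ A

P_Ak = Ak @ np.linalg.pinv(Ak)  # orthogonal projector onto R(A^k)
P_A = A @ np.linalg.pinv(A)     # orthogonal projector onto R(A)

# Right-hand side of (4.1): (P_{A^k} A^2 P_{A^k})^dag A P_A.
X = np.linalg.pinv(P_Ak @ A @ A @ P_Ak) @ A @ P_A

# Reference value via A^(W,dag) = A^2 (A^4)^dag A^2 A^dag (Example 4.5).
ref = Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 4)) @ Ak @ np.linalg.pinv(A)
assert np.allclose(X, ref)
```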
Working with the fact that P=PN((Ak)∗A2), R(Ak) and Q=PN((Ak)∗A2A†), R(Ak) in Theorem 3.9, we will consider other representations of AⓌ,† in the next theorem.
Theorem 4.3. Let A∈Cn×nk and P=PN((Ak)∗A2), R(Ak), Q=PN((Ak)∗A2A†), R(Ak). Then for any a,b≠0, we have
AⓌ,†=(A+aP)−1(I−Q)=(I−P)(A+bQ)−1. | (4.4) |
Proof. From items (b) and (c) of Lemma 2.3, it is not difficult to conclude that
(A+aP)AⓌ,†=I−Q. |
Now we only need to show the invertibility of A+aP. Assume that α=U[α1α2]∈Cn is such that (A+aP)α=0, i.e., Aα=−aPα, where α1∈Ct, α2∈Cn−t. Now it follows from condition (c) of Lemma 2.3 and (3.6) that
[TS0N][α1α2]=−a[0−T−1S−T−2SN0I][α1α2], |
implying α1=0 and α2=0 since a≠0, N is nilpotent and T is nonsingular. Thus A+aP is nonsingular.
Analogously, we can prove that A+bQ is invertible and AⓌ,†=(I−P)(A+bQ)−1.
The limit expressions for some generalized inverses of matrices have been given in [14,15,16,17,23,24]. Similarly, the WC inverse can also be characterized as a limit, as shown in the next result:
Theorem 4.4. Let A∈Cn×nk. Then:
(a) AⓌ,†=limλ→0Ak(λIn+(Ak)∗Ak+2)−1(Ak)∗A2A∗(λIn+AA∗)−1;
(b) AⓌ,†=limλ→0Ak(Ak)∗A(λIn+Ak+1(Ak)∗A)−1AA∗(λIn+AA∗)−1;
(c) AⓌ,†=limλ→0(λIn+Ak(Ak)∗A2)−1Ak(Ak)∗A2A∗(λIn+AA∗)−1;
(d) AⓌ,†=limλ→0Ak(Ak)∗A2A∗(λIn+AA∗)−1(λIn+Ak+1(Ak)∗A2A∗(λIn+AA∗)−1)−1.
Proof. According to condition (a) of Lemma 2.3, it is not hard to show that
AⓌ,†=A(2)R(Ak), N((Ak)∗A2A†)=A(2)R(Ak(Ak)∗A2A†), N(Ak(Ak)∗A2A†). |
Thus, by [25,Theorem 2.1], we have the following results:
(a) Let X=Ak, Y=(Ak)∗A2A† and by A†=limλ→0A∗(λIn+AA∗)−1. We have
AⓌ,†=limλ→0Ak(λIn+(Ak)∗Ak+2)−1(Ak)∗A2A∗(λIn+AA∗)−1. |
(b) Let X=Ak(Ak)∗A, Y=AA† and by A†=limλ→0A∗(λIn+AA∗)−1. We have
AⓌ,†=limλ→0Ak(Ak)∗A(λIn+Ak+1(Ak)∗A)−1AA∗(λIn+AA∗)−1. |
(c) Let X=In, Y=Ak(Ak)∗A2A† and by A†=limλ→0A∗(λIn+AA∗)−1. We have
AⓌ,†=limλ→0(λIn+Ak(Ak)∗A2)−1Ak(Ak)∗A2A∗(λIn+AA∗)−1. |
(d) Let X=Ak(Ak)∗A2A†, Y=In and by A†=limλ→0A∗(λIn+AA∗)−1. We have
AⓌ,†=limλ→0Ak(Ak)∗A2A∗(λIn+AA∗)−1(λIn+Ak+1(Ak)∗A2A∗(λIn+AA∗)−1)−1. |
We end this section with three examples of computing the WC inverse of a matrix using the different expressions in Theorems 4.2–4.4.
Example 4.5. Let
A=[2 2 1 0; 3 4 2 0; 0 0 0 1; 0 0 0 0]. | (4.5)
Then Ind(A)=2 and the weak core inverse of A is
AⓌ,†=A2(A4)†A2A†=[2 −1 −1/2 0; −3/2 1 1/2 0; 0 0 0 0; 0 0 0 0]. |
First, we use expression (4.1) to compute the WC inverse of A. Then
(A2PA2+I−PA2)−1=[11/2 −3 0 0; −9/2 5/2 0 0; 0 0 1 0; 0 0 0 1] and (PA2A2PA2)†=[11/2 −3 0 0; −9/2 5/2 0 0; 0 0 0 0; 0 0 0 0]. |
After simplification, it follows that (A2)(−1)(R(A2))APA=AⓌ,† and (PA2A2PA2)†APA=AⓌ,†.
Secondly, using the expression (4.4), we obtain
(A−6P)−1=[2 −1 −1/2 −19/12; −3/2 1 7/12 97/72; 0 0 −1/6 −1/36; 0 0 0 −1/6] and (A+15Q)−1=[2 −1 −1/2 1/30; −3/2 1 7/15 −7/225; 0 0 1/15 −1/225; 0 0 0 1/15]. |
Therefore, it can be directly verified that (A−6P)−1(I−Q)=AⓌ,† and (I−P)(A+15Q)−1=AⓌ,†.
Finally, we use the limit expression of item (a) in Theorem 4.4.
Let B=A2(λIn+(A2)∗A4)−1(A2)∗A2A∗(λIn+AA∗)−1; then
B=A2(λIn+(A2)∗A4)−1(A2)∗A2A∗(λIn+AA∗)−1=[2(30321λ2+5361λ+580)/λ1 2(54629λ2+6699λ−290)/λ1 (1305λ−58)/λ2 0; (110503λ2+19405λ−870)/λ1 4(49773λ2+6090λ+145)/λ1 (2378λ+58)/λ2 0; 0 0 0 0; 0 0 0 0]. |
where λ1=λ4+38734λ3+1470569λ2+197888λ+580, λ2=λ3+38697λ2+38812λ+116.
After simplification, it follows that
limλ→0B=limλ→0A2(λIn+(A2)∗A4)−1(A2)∗A2A∗(λIn+AA∗)−1=AⓌ,†. |
The other cases in Theorem 4.4 can be similarly verified.
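Instead of simplifying symbolically, the limit expression (a) of Theorem 4.4 can also be checked by evaluating it at a small λ. In the NumPy sketch below the tolerance absorbs both the O(λ) truncation error and floating-point roundoff:

```python
import numpy as np

A = np.array([[2.0, 2, 1, 0],
              [3, 4, 2, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])   # matrix (4.5), k = Ind(A) = 2
k, n = 2, 4
Ak = np.linalg.matrix_power(A, k)
I = np.eye(n)

# Reference A^(W,dag) via A^2 (A^4)^dag A^2 A^dag (Example 4.5).
ref = Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 4)) @ Ak @ np.linalg.pinv(A)

# Limit expression (a) of Theorem 4.4, evaluated at a small lambda.
# (A is real, so the conjugate transpose is just .T here.)
lam = 1e-6
B = Ak @ np.linalg.inv(lam * I + Ak.T @ np.linalg.matrix_power(A, k + 2)) \
       @ Ak.T @ A @ A @ A.T @ np.linalg.inv(lam * I + A @ A.T)
assert np.allclose(B, ref, atol=1e-2)   # B -> A^(W,dag) as lambda -> 0
```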
In this section, we discuss some properties of the WC inverse and consider the connection between the WC inverse and other known classes of matrices.
Lemma 5.1. Let A∈Cn×nk be given by (2.1). Then:
(a) A∈CEPn⇔S=0 and N=0;
(b) A∈CPn⇔T=It and N=0;
(c) A∈COPn⇔T=It, S=0 and N=0.
Proof. (a) The proof can be easily verified from (2.1) and (2.3).
(b) By (2.1), we obtain that A∈CPn is equivalent with
U[T2TS+SN0N2]U∗=U[TS0N]U∗, |
which is further equivalent with T2=T, TS+SN=S and N2=N. Hence, by nonsingularity of T and Nk=0, we can conclude that A∈CPn if and only if T=It and N=0.
(c) Since COPn⊆CPn, it is a direct consequence of item (b) and (2.1).
Theorem 5.2. Let A∈Cn×nk be given by (2.1). The following statements hold:
(a) AⓌ,†=0⇔A is nilpotent;
(b) AⓌ,†=A⇔A∈CEPn and A3=A;
(c) AⓌ,†=A∗⇔A∈CEPn and AA∗=PAk;
(d) AⓌ,†=PA⇔A∈CPn;
(e) AⓌ,†=PA∗⇔A∈COPn.
Proof. (a) By (2.1) and (2.9), we directly get that
AⓌ,†=0⟺r(Ak)=t=0⟺A is nilpotent. |
(b) It follows from (2.1), (2.9) and (a) of Lemma 5.1 that
AⓌ,†=A⟺U[T−1T−2SNN†00]U∗=U[TS0N]U∗⟺S=0 , N=0 and T3=T⟺A∈CEPn and A3=A. |
(c) By (2.1), (2.9) and (a) of Lemma 5.1, we have that
AⓌ,†=A∗⟺U[T−1T−2SNN†00]U∗=U[T∗0S∗N∗]U∗⟺S=0 , N=0 and TT∗=It⟺A∈CEPn and AA∗=PAk. |
(d) From (2.9), (4.2) and (b) of Lemma 5.1, we obtain that
AⓌ,†=PA⟺U[T−1T−2SNN†00]U∗=U[It00NN†]U∗⟺T=It, N=0⟺A∈CPn. |
(e) It follows from (2.1) and (2.3) that
PA∗=A†A=U[T∗△TT∗△S(In−t−N†N)(In−t−N†N)S∗△TN†N+(In−t−N†N)S∗△S(In−t−N†N)]U∗. | (5.1) |
By (2.9) and (5.1), we now get that AⓌ,†=PA∗ is equivalent with
U[T−1T−2SNN†00]U∗=U[T∗△TT∗△S(In−t−N†N)(In−t−N†N)S∗△TN†N+(In−t−N†N)S∗△S(In−t−N†N)]U∗, |
which is further equivalent with T−1=T∗△T, (In−t−N†N)S∗△T=0 and N†N+(In−t−N†N)S∗△S(In−t−N†N)=0. Hence, by nonsingularity of △T and (c) of Lemma 5.1, we can conclude that AⓌ,†=PA∗ if and only if A∈COPn.
From Lemma 2.3, we know that both AAⓌ,† and AⓌ,†A are oblique projectors. The next theorem will further discuss other characterizations of AAⓌ,† and AⓌ,†A.
Theorem 5.3. Let A∈Cn×nk be given by (2.1). The following statements hold:
(a) AAⓌ,†=PA⇔A∈CCMn; (b) AAⓌ,†=PA∗⇔A∈CEPn;
(c) AⓌ,†A=PA⇔A∈CEPn; (d) AⓌ,†A=PA∗⇔A∈CEPn.
Proof. It follows from (2.1) and (2.9) that
AAⓌ,†=U[ItT−1SNN†00]U∗, | (5.2) |
AⓌ,†A=U[ItT−1S+T−2SN00]U∗. | (5.3) |
(a) By (4.2) and (5.2), the result can be directly verified.
(b) By (5.1) and (5.2), we can show that AAⓌ,†=PA∗ if and only if (In−t−N†N)S∗△T=0 and N†N+(In−t−N†N)S∗△S(In−t−N†N)=0, which is further equivalent with S=0 and N=0, i.e., A∈CEPn.
(c) It follows from (4.2) and (5.3) that AⓌ,†A=PA is equivalent with A∈CEPn.
(d) From (5.1) and (5.3), it is similar to the proof of (b).
Recall from [6] that the core inverse is necessarily EP. The next theorem shows that this is not the case with the WC inverse.
Theorem 5.4. Let A∈Cn×nk be given by (2.1) and t∈Z+. The following statements are equivalent:
(a) AⓌ,†∈CEPn; (b) SN=0;
(c) AⓌ,†A=A②A, where A② is the core-EP inverse of A; (d) AⓌ,†=A②;
(e) AⓌ,†At=AtAⓌ.
Proof. (a)⇔(b). AⓌ,†∈CEPn is equivalent with R(AⓌ,†)=R((AⓌ,†)∗). Using (2.9), we have that AⓌ,†∈CEPn if and only if SN=0.
(c)⇔(b). By (2.5) and (5.3), it can be directly verified that AⓌ,†A=A②A if and only if SN=0.
(d)⇔(b). By (2.5) and (2.9), it follows that
AⓌ,†=A②⟺U[T−1T−2SNN†00]U∗=U[T−1000]U∗⟺T−2SNN†=0⟺SN=0. |
(e)⇔(b). From (2.8) and (2.9), it follows that
AⓌ,†At=AtAⓌ⟺U[Tt−1Tt−2S+T−2TtN00]U∗=U[Tt−1Tt−2S00]U∗⟺T−2TtN=0⟺SN=0. |
where Tt=t−1∑j=0TjSNt−1−j.
In [26], the authors introduced the notion of a weak group matrix, i.e., A∈CWGn, which is equivalent with SN=0. Therefore, we have the following remark:
Remark 5.5. It is worth noting that conditions (a), (c)–(e) in Theorem 5.4 are equivalent with A∈CWGn.
The next theorems provide some equivalent conditions for AⓌ,†∈CPn and AⓌ,†∈COPn.
Theorem 5.6. Let A∈Cn×nk be given by (2.1). The following statements are equivalent:
(a) AⓌ,†∈CPn; (b) T=It;
(c) AAⓌ,†=AⓌ,†; (d) AⓌ,†Ak=Ak;
(e) Ak(AⓌ,†)k=AⓌ,†; (f) A(AⓌ,†)k=(AⓌ,†)k.
Proof. (a)⇔(b). From (2.9), it is not hard to prove that AⓌ,†∈CPn is equivalent with T=It.
(c)⇔(b). From (2.9) and (5.2), it follows that AAⓌ,†=AⓌ,† if and only if T=It.
(d)⇔(b). By (2.2) and (2.9), it is easy to verify that AⓌ,†Ak=Ak if and only if T=It.
The proofs (e)⇔(b) and (f)⇔(b) are similar to the proof (d)⇔(b).
Theorem 5.7. Let A∈Cn×nk be given by (2.1). The following statements are equivalent:
(a) AⓌ,†∈COPn; (b) T=It and SN=0;
(c) AAⓌ,†=(AⓌ,†)∗; (d) AⓌ,†A=AⓌ;
(e) (AⓌ,†)kAk=AⓌ; (f) (AⓌ,†)kA=(AⓌ)k.
Proof. (a)⇔(b). From (2.9) and Theorem 5.6, we can show that AⓌ,†∈COPn is equivalent with T=It and SN=0.
(c)⇔(b). By (2.9) and (5.2), it follows from AAⓌ,†=(AⓌ,†)∗ that
[ItT−1SNN†00]=[(T−1)∗0(T−2SNN†)∗0]. |
Hence, we get that AAⓌ,†=(AⓌ,†)∗ is equivalent with T=It and SN=0.
The proofs of (d)⇔(b), (e)⇔(b) and (f)⇔(b) are similar to the proof of (c)⇔(b).
Corollary 5.8. Let A∈Cn×nk. Then AⓌ,†∈COPn if and only if AⓌ,†∈CPn∩CEPn.
Proof. It is a direct consequence from Theorem 5.4 and Theorem 5.6.
Working with Theorem 5.6 and Theorem 5.7, we have the following corollary.
Corollary 5.9. Let A∈Cn×nk and l∈N with l≥k. The following statements hold:
(a) A∈CPn⇔AⓌ,†∈CPn and Al=A;
(b) A∈COPn⇔AⓌ,†∈CPn and Al=A∗.
Proof. (a) The result can be easily derived by Lemma 5.1 and Theorem 5.6.
(b) From Lemma 5.1 and Theorem 5.7, we can show that (b) holds.
Ferreyra et al. [1] introduced the weak core matrix. The set of all n×n weak core matrices is denoted by CWCn, that is:
CWCn={A∣A∈Cn×n,AⓌ,†=AD,†}. |
In this section, we discuss some equivalent conditions satisfied by a matrix A such that A∈CWCn using the core-EP decomposition. For convenience, we introduce a necessary lemma.
Lemma 6.1. [1] Let A∈Cn×nk be given by (2.1). Then the following statements are equivalent:
(a) A∈CWCn;
(b) SN2=0;
(c) AⓌ=AD.
Theorem 6.2. Let A∈Cn×nk be given by (2.1) and t∈Z+. The following statements are equivalent:
(a) A∈CWCn;
(b) AⓌA=ADA;
(c) AtAⓌA=AtADA;
(d) AtAⓌ,†=AtAD,†;
(e) AkAⓌ,†=AkA†;
(f) AkAⓌA=Ak.
Proof. (a)⇒(b). It is a direct consequence from condition (c) of Lemma 6.1.
(b)⇒(c). Evident.
(c)⇒(a). By the assumption, it follows from (2.4) and (2.8) that
[TtTt−1S+Tt−2SN00]=[TtTt−1S+Tt−k−1˜TN00], |
which implies Tt−k−1(Tk−2SN2+⋯+TSNk−1)=0. We now obtain that SN2=0 since T is invertible. By Lemma 6.1, we obtain that A∈CWCn.
(a)⇔(d). It follows from (2.6), (2.9) and Lemma 6.1 that
AtAⓌ,†=AtAD,†⟺U[Tt−1Tt−2SNN†00]U∗=U[Tt−1Tt−k−1˜TNN†00]U∗⟺Tt−2SNN†=Tt−k−1˜TNN†⟺SN2=0⟺A∈CWCn. |
(a)⇒(e). From the definition of the weak core matrix, we have that AkAⓌ,†=AkAD,†=AkA†.
(e)⇒(f). Evident.
(f)⇒(a). If AkAⓌA=Ak, by (2.2) and (2.8), we can conclude that SN2=0. Hence item (a) holds.
Corollary 6.3. Let A∈Cn×nk be given by (2.1). Then A∈CWCn if and only if Ak=Ck, where C=AAⓌA.
Proof. Since AkAⓌA=Ck, the result is a direct consequence of item (f) of Theorem 6.2.
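Corollary 6.3 and the definition of CWCn can be checked numerically on the matrix of Example 3.4, whose core-EP decomposition satisfies SN2=0. The NumPy sketch below uses the standard formulas AD=Ak(A2k+1)†Ak and A②=Ak(Ak+1)† from the literature, together with the DMP inverse AD,†=ADAA†:

```python
import numpy as np

# Example 3.4's matrix has SN^2 = 0 in its core-EP decomposition,
# so A should be a weak core matrix: A^(W,dag) = A^(D,dag) and A^k = C^k.
A = np.array([[1.0, 0, 0],
              [0, 0, 3],
              [0, 0, 0]])
k = 2
Ak = np.linalg.matrix_power(A, k)
Apinv = np.linalg.pinv(A)

Ad = Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak  # Drazin inverse
A_cep = Ak @ np.linalg.pinv(np.linalg.matrix_power(A, k + 1))        # core-EP inverse
AW = A_cep @ A_cep @ A                                               # weak group inverse
C = A @ AW @ A

AWd = AW @ A @ Apinv   # A^(W,dag) = A^W A A_dag
ADd = Ad @ A @ Apinv   # A^(D,dag) = A^D A A_dag (DMP inverse)

assert np.allclose(AWd, ADd)                          # A is a weak core matrix
assert np.allclose(Ak, np.linalg.matrix_power(C, k))  # A^k = C^k (Corollary 6.3)
```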
Theorem 6.4. Let A∈Cn×nk be given by (2.1) and t∈Z+. The following statements are equivalent:
(a) A∈CWCn;
(b) A(AⓌ)tA=(AⓌ)tA2;
(c) (AⓌ)tA=(AⓌ)t+1A2;
(d) A(AⓌ)tA commutes with (AⓌ)tA2;
(e) (AⓌ)tA commutes with (AⓌ)t+1A2.
Proof. By (2.1) and (2.8), we get that
A(AⓌ)tA=U[T−t+2T−t+1S+T−tSN00]U∗, | (6.1) |
(AⓌ)tA2=U[T−t+2T−t+1S+T−tSN+T−t−1SN200]U∗. | (6.2) |
(a)⇔(b). By (6.1), (6.2) and Lemma 6.1, we get that A(AⓌ)tA=(AⓌ)tA2 if and only if A∈CWCn.
(a)⇔(c). Similar to the part (a)⇔(b).
(a)⇔(d). It follows from (6.1), (6.2) and Lemma 6.1 that
A(AⓌ)tA(AⓌ)tA2−(AⓌ)tA2A(AⓌ)tA=U[0T−2t+1SN200]U∗, |
which implies that A(AⓌ)tA commutes with (AⓌ)tA2 if and only if A∈CWCn.
(a)⇔(e). It is analogous to that of the part (a)⇔(d).
Corollary 6.5. Let A∈Cn×nk and t∈Z+. The following statements are equivalent:
(a) A∈CWCn;
Proof. Since , Corollary 6.5 can be directly verified.
In this paper, new characterizations and properties of the WC inverse are derived by using range spaces, null spaces and matrix equations. Several expressions of the WC inverse are also given. Finally, we show various characterizations of the weak core matrix.
According to the current research background, more characterizations and applications of the WC inverse are worthy of further discussion, such as the following:
1) Characterizing the WC inverse by maximal classes of matrices, full rank decomposition, integral expressions and so on;
2) New iterative algorithms and splitting methods for computing the WC inverse;
3) Using the WC inverse to solve appropriately constrained systems of linear equations;
4) Investigating the WC inverse of tensors.
This work was supported by the Natural Science Foundation of China under Grant 11961076. The authors are thankful to the two anonymous referees for their careful reading, detailed corrections and pertinent suggestions on the first version of the paper, which enhanced the presentation of the results distinctly.
All authors read and approved the final manuscript. The authors declare no conflict of interest.
[1] | M. A. Hossain, B. Assiri, Facial emotion verification by infrared image, in International Conference on Emerging Smart Computing and Informatics (ESCI), (2020). https://doi.org/10.1109/ESCI48226.2020.9167616 |
[2] | M. A. Hossain, B. Assiri, Emotion specific human face authentication based on infrared thermal image, in 2020 2nd International Conference on Computer and Information Sciences (ICCIS), (2020). https://doi.org/10.1109/ICCIS49240.2020.9257683 |
[3] | M. Vollmer, K. P. Möllmann, Infrared Thermal Imaging: Fundamentals, Research and Applications, 2nd edition, WILEY-VCH Verlag, 2018. https://doi.org/10.1002/9783527693306 |
[4] |
M. A. Hossain, G. Sanyal, Tracking humans based on interest point over span-space in multifarious situations, Int. J. Software Eng. Appl., 10 (2016), 175–192. https://doi.org/10.14257/ijseia.2016.10.9.15 doi: 10.14257/ijseia.2016.10.9.15
![]() |
[5] | C. Myeon-gyun, A study on the obstacle recognition for autonomous driving RC car using lidar and infrared thermal camera, in 2019 Eleventh International Conference on Ubiquitous and Future Networks (ICUFN), (2019). http://doi.org/10.1109/ICUFN.2019.8806152 |
[6] | Q. Wan, S. P. Rao, A. Kaszowska, V. Voronin, K. Panetta, H. A, Taylor, et al., Face description using anisotropic gradient: infrared thermal to visible face recognition, in Mobile Multimedia Image Processing, Security, and Applications, (2018). https://doi.org/10.1117/12.2304898 |
[7] |
T. Bae, K. Youngchoon, A. Sangho, IR-band conversion of target and background using surface temperature estimation and error compensation for military IR sensor simulation, Sensors, 19 (2019), 2455. https://doi.org/10.3390/s19112455 doi: 10.3390/s19112455
![]() |
[8] | Y. Abdelrahman, P. Knierim, P. W. Wozniak, N. Henze, A. Schmidt, See through the fire: evaluating the augmentation of visual perception of firefighters using depth and thermal cameras, in Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing, (2017), 693–696. https://doi.org/10.1145/3123024.3129269 |
[9] | M. A. Hossain, D. Samanta, G. Sanyal, Eye diseases detection based on covariance, Int. J. Comput. Sci. Inf. Technol. Sec., 2 (2012), 376–379. |
[10] | E. Sousa, R. Vardasca, S. Teixeira, A. Seixas, J. Mendes, A. Costa-Ferreira, A review on the application of medical infrared thermal imaging in hands, Infrared Phys. Technol., 85 (2017), 315–323. https://doi.org/10.1016/j.infrared.2017.07.020 |
[11] | M. A. Hossain, B. Assiri, Facial expression recognition based on active region of interest using deep learning and parallelism, PeerJ Comput. Sci., 8 (2022), e894. https://doi.org/10.7717/peerj-cs.894 |
[12] | N. M. Moacdieh, N. Sarter, The effects of data density, display organization, and stress on search performance: An eye tracking study of clutter, IEEE Trans. Human-Mach. Syst., 47 (2017), 886–895. https://doi.org/10.1109/THMS.2017.2717899 |
[13] | M. A. Hossain, B. Assiri, An enhanced eye tracking approach using pipeline computation, Arab. J. Sci. Eng., 45 (2020), 1–14. https://doi.org/10.1007/s13369-019-04322-7 |
[14] | S. U. Mahmood, F. Crimbly, S. Khan, E. Choudry, S. Mehwish, Strategies for rational use of personal protective equipment (PPE) among healthcare providers during the COVID-19 crisis, Cureus, 12 (2020), e8248. https://doi.org/10.7759/cureus.8248 |
[15] | C. Filippini, D. Perpetuini, D. Cardone, A. M. Chiarelli, A. Merla, Thermal infrared imaging-based affective computing and its application to facilitate human robot interaction: A review, Appl. Sci., 10 (2020), 2924. https://doi.org/10.3390/app10082924 |
[16] | M. A. Eid, N. Giakoumidis, A. El Saddik, A novel eye-gaze-controlled wheelchair system for navigating unknown environments: Case study with a person with ALS, IEEE Access, 4 (2016), 558–573. https://doi.org/10.1109/ACCESS.2016.2520093 |
[17] | M. A. Hossain, D. Samanta, G. Sanyal, Extraction of panic expression depending on lip detection, in 2012 International Conference on Computing Sciences, (2012), 137–141, https://doi.org/10.1109/ICCS.2012.35 |
[18] | M. A. Hossain, D. Samanta, Automated smiley face extraction based on genetic algorithm, Comput. Sci. Inf. Technol., 2012 (2012), 31–37. https://doi.org/10.5121/csit.2012.2304 |
[19] | S. S. Alam, R. Jianu, Analyzing eye-tracking information in visualization and data space: From where on the screen to what on the screen, IEEE Trans. Visualization Comput. Graphics, 23 (2017), 1492–1505. https://doi.org/10.1109/TVCG.2016.2535340 |
[20] | W. Zhang, H. Liu, Toward a reliable collection of eye-tracking data for image quality research: Challenges, solutions, and applications, IEEE Trans. Image Process., 26 (2017), 2424–2437. https://doi.org/10.1109/TIP.2017.2681424 |
[21] | A. Torabi, G. Massé, G. A. Bilodeau, An iterative integrated framework for thermal–visible image registration, sensor fusion, and people tracking for video surveillance applications, Comput. Vision Image Understanding, 116 (2012), 210–221. https://doi.org/10.1016/j.cviu.2011.10.006 |
[22] | Y. Liu, Y. Cao, Y. Li, M. Liu, R. Song, Y. Wang, et al., Facial expression recognition with PCA and LBP features extracting from active facial patches, in 2016 IEEE International Conference on Real-time Computing and Robotics (RCAR), (2016). https://doi.org/10.1109/RCAR.2016.7784056 |
[23] | W. R. Almeida, F. A. Andaló, R. Padilha, G. Bertocco, W. Dias, R. da S. Torres, et al., Detecting face presentation attacks in mobile devices with a patch-based CNN and a sensor aware loss function, PLoS ONE, 15 (2020), 1–24. https://doi.org/10.1155/2020/6385281 |
[24] | F. Khan, Facial expression recognition using facial landmark detection and feature extraction via neural networks, preprint, arXiv: 1812.04510. |
[25] | M. A. Hossain, H. Zogan, Emotion tracking and grading based on sophisticated statistical approach, Int. J. Adv. Electron. Comput. Sci., 5 (2018), 9–13. |
[26] | W. Zhang, X. Sui, G. Gu, Q. Chen, H. Cao, Infrared thermal imaging super-resolution via multiscale spatio-temporal feature fusion network, IEEE Sensors J., 21 (2021), 19176–19185. https://doi.org/10.1109/JSEN.2021.3090021 |
[27] | H. Mady, S. M. S. Hilles, Face recognition and detection using Random forest and combination of LBP and HOG features, in 2018 International Conference on Smart Computing and Electronic Enterprise (ICSCEE), (2018). https://doi.org/10.1109/ICSCEE.2018.8538377 |
[28] | K. T. Islam, R. G. Raj, A. Al-Murad, Performance of SVM, CNN, and ANN with BoW, HOG, and image pixels in face recognition, in 2017 2nd International Conference on Electrical & Electronic Engineering (ICEEE), (2017). https://doi.org/10.1109/CEEE.2017.8412925 |
[29] | M. Sajjad, S. Zahir, A. Ullah, Z. Akhtar, K. Muhammad, Human behavior understanding in big multimedia data using CNN based facial expression recognition, Mobile Networks Appl., 25 (2020), 1611–1621. https://doi.org/10.1007/s11036-019-01366-9 |
[30] | P. Liu, S. Han, Z. Meng, Y. Tong, Facial expression recognition via a boosted deep belief network, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2014). https://doi.org/10.1109/CVPR.2014.233 |
[31] | A. T. Lopes, E. Aguiar, A. F. D. Souza, T. O. Santos, Facial expression recognition with convolutional neural networks: coping with few data and the training sample order, Pattern Recognit., 61 (2017), 610–628. https://doi.org/10.1016/j.patcog.2016.07.026 |
[32] | S. Rajan, P. Chenniappan, S. Devaraj, N. Madian, Novel deep learning model for facial expression recognition based on maximum boosted CNN and LSTM, IET Image Process., 14 (2020), 1373–1381. https://doi.org/10.1049/iet-ipr.2019.1188 |
[33] | M. A. Hossain, G. Sanyal, A new improved tactic to extract facial expression based on genetic algorithm and WVDF, Int. J. Adv. Inf. Technol., 2 (2012), 37. https://doi.org/10.5121/ijait.2012.2504 |
[34] | M. A. Hossain, D. Samanta, G. Sanyal, A novel approach for panic-face extraction based on mutation, in 2012 IEEE International Conference on Advanced Communication Control and Computing Technologies (ICACCCT), (2012), 473–477. https://doi.org/10.1109/ICACCCT.2012.6320825 |
[35] | J. Lee, S. Kim, S. Kim, K. Sohn, Multi-modal recurrent attention networks for facial expression recognition, IEEE Trans. Image Process., 29 (2020), 6977–6991. https://doi.org/10.1109/TIP.2020.2996086 |
[36] | Z. Kang, S. J. Landry, An eye movement analysis algorithm for a multielement target tracking task: Maximum transition-based agglomerative hierarchical clustering, IEEE Trans. Human-Mach. Syst., 45 (2015), 13–24. https://doi.org/10.1109/THMS.2014.2363121 |
[37] | M. A. Hossain, D. Samanta, G. Sanyal, Statistical approach for extraction of panic expression, in 2012 Fourth International Conference on Computational Intelligence and Communication Networks, (2012), 420–424. https://doi.org/10.1109/CICN.2012.189 |
[38] | R. Janarthanan, E. A. Refaee, K. Selvakumar, M. A. Hossain, S. Rajkumar, K. Marimuthu, Biomedical image retrieval using adaptive neuro-fuzzy optimized classifier system, Math. Biosci. Eng., 19 (2022), 8132–8151. https://doi.org/10.3934/mbe.2022380 |
[39] | F. Bu, T. Pu, W. Huang, L. Zhu, Performance and evaluation of five-phase dual random SVPWM strategy with optimized probability density function, IEEE Trans. Ind. Electron., 66 (2019), 3323–3332. https://doi.org/10.1109/TIE.2018.2854570 |
[40] | B. Manda, P. Bhaskare, R. Muthuganapathy, A convolutional neural network approach to the classification of engineering models, IEEE Access, 9 (2021), 22711–22723. https://doi.org/10.1109/ACCESS.2021.3055826 |
[41] | M. A. Hossain, G. Sanyal, A stochastic statistical approach for tracking human activity, IJITMC, 1 (2013), 33–42. https://doi.org/10.5121/ijitmc.2013.1304 |
[42] | A. J. A. AlBdairi, Z. Xiao, M. Alghaili, Identifying ethnics of people through face recognition: A deep CNN approach, Sci. Prog., 2020 (2020), 6385281. https://doi.org/10.1155/2020/6385281 |
[43] | N. Alay, H. H. Al-Baity, Deep learning approach for multimodal biometric recognition system based on fusion of iris, face, and finger vein traits, Sensors, 20 (2020), 5523–5539. https://doi.org/10.3390/s20195523 |