
As the most complex and important organ in the human body, the brain plays an indispensable role in regulating motor functions and in maintaining higher cognitive activities such as consciousness, language, memory, sensation and emotion. Studying the brain therefore not only deepens our understanding of how it works, but also provides auxiliary methods for the diagnosis and treatment of brain diseases. In recent years, many countries have attached great importance to brain science research, and a distinct body of brain science knowledge has taken shape. The brain-computer interface (BCI) [1,2] is a communication and control technology that directly translates the perceptual thinking generated by the brain into corresponding actions of external devices; it is of high scientific value and is widely applied in medical health, vehicle driving [3], daily entertainment [4], etc. However, BCI systems still face problems that urgently need to be solved, such as long user training times and limited online performance [5].
Currently, many advanced machine learning algorithms have been proposed and used for electroencephalography (EEG) classification, including Bayesian classifiers [6], support vector machines [7], linear discriminant analysis (LDA) [8], adaptive classifiers and deep learning classifiers [9]. In recent years, because transfer learning can exploit similarities between data, tasks or models, it has been used to analyze EEG signals, which are non-stationary and exhibit large individual differences, making it possible to apply models and knowledge learned in an old domain to a new domain. In this way, not only can the classification performance on unlabeled data be improved by using labeled data, but the model training time can also be drastically reduced [10,11,12]. In transfer learning, a domain is defined as follows [13,14]: a domain $D$ consists of a feature space $\mathcal{X}$ and its associated marginal probability distribution $P(X)$, i.e., $D=\{\mathcal{X},P(X)\}$, where $X\in\mathcal{X}$. A source domain $D_S$ and a target domain $D_T$ are different if they have different feature spaces, i.e., $\mathcal{X}_S\neq\mathcal{X}_T$, and/or different marginal probability distributions, i.e., $P_S(X)\neq P_T(X)$.
However, most of the existing transfer learning methods train classifiers by batch learning in an offline manner, where all source and target data are given in advance [15,16], but this assumption is often impractical in real application scenarios, where collecting enough data is very time-consuming. In addition, data often arrive as a stream and cannot be collected in their entirety. Therefore, researchers have introduced online learning into the field of transfer learning and proposed an online transfer learning (OTL) framework. Unlike online learning, which merely considers the dynamic changes of data within one data domain, OTL also considers the changes in the data distributions of the source domain and target domain. OTL has been used in areas such as online feature selection [17] and graphical retrieval [18]. It combines the advantage of online learning, dynamically updating the classification model, with the ability of transfer learning to effectively exploit knowledge from source domains [19], aiming to perform online learning tasks in the target domain by transferring knowledge from the source domain. Existing OTL approaches focus on how to use knowledge from the source domain for online learning in the target domain. Most of them use strategies based on ensemble learning, i.e., directly combining source and target classifiers. Zhao et al. [20] proposed a single-source OTL algorithm for homogeneous and heterogeneous spaces, which dynamically weights the combination of source and target classifiers to form the final classifier. Kang et al. [21] proposed an OTL algorithm for multi-class classification, utilizing a new loss function and updating method to extend binary-classification OTL to multi-class tasks. Based on Zhao's work, Ge et al. [22] first proposed a multi-source online transfer framework. Zhou et al. [23] proposed an online transfer strategy that reuses the features extracted from a pre-trained model; it achieves automatic feature extraction and good classification accuracy. However, the OTL algorithm can only handle single-source domain transfer, which can easily lead to negative transfer [13] when facing multiple source domains. Negative transfer occurs when the source domain data and task reduce the performance of learning in the target domain.
Nevertheless, single-source transfer learning requires a high similarity between the source domain samples and the target domain samples during training; for example, in EEG classification, the class features of the source subjects and the target subject should be as similar as possible. Since the knowledge that a single source domain can provide is limited, it is natural to apply transfer learning to the target domain by exploiting knowledge from multiple source domains. To this end, researchers have developed boosting-based algorithms to adjust the weights of different domains or samples [24,25,26]. Eaton and DesJardins [24] proposed a method that uses AdaBoost to assign higher weights to source domains that are more similar to the target domain and to adjust the weights of individual samples in each source domain. Yao and Doretto [25] proposed a multi-source transfer method that first integrates the knowledge from multiple source domains and then transfers it to the target domain; it achieved better performance on some benchmark datasets. Tan et al. [26] utilized different views from different source domains to assist the target domain task. Jiang et al. [27] proposed a general multi-source transfer framework to preserve independent information between different tasks. However, these studies assume that the target domain data are already known, and they conduct transfer only via offline batch learning. Recently, after noticing the shortcomings of OTL, Wu et al. [17] proposed the HomOTLMS algorithm, which trains a classifier in each source domain and classifies samples by weighting the source and target domain classifiers. Du et al. [28] proposed the HomOTL-ODDM algorithm, which updates the mapping matrix in an online manner to further reduce the differences between domains; their experimental results showed that the transfer effect can be significantly improved once the differences in data distributions are considered. Li et al. [29] proposed an online EEG classification method based on instance transfer (OECIT); it aligns the EEG data online and, combined with HomOTL, greatly reduces computational and memory costs.
To address the negative transfer problem in online single-source transfer and to further reduce the individual differences between subjects, this paper presents a multi-source online transfer EEG classification algorithm based on source domain selection (SDS). Different from the HomOTLMS and HomOTL-ODDM algorithms, the proposed algorithm dynamically selects several suitable source domains, trains a classifier on each selected source domain with the multi-source online transfer algorithm and then combines these classifiers with the target domain classifier through weighting; this not only reduces the training time but also achieves better classification results in the target domain. The algorithm was applied to two publicly available motor imagery EEG datasets and compared with multi-source online transfer algorithms of the same type to verify its superiority.
Suppose a subject has n trials; we can have
$\bar{E}=\frac{1}{n}\sum_{i=1}^{n}X_i X_i^{T}$  (1)
where $\bar{E}$ is the Euclidean mean of all EEG trials from a subject and $X_i\in\mathbb{R}^{c\times t}$ denotes the $i$-th EEG trial, where $c$ is the number of EEG channels and $t$ is the number of time samples. Then, we perform alignment by using
$\tilde{X}_i=\bar{E}^{-1/2}X_i$  (2)
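To make the alignment step concrete, the following is a minimal NumPy sketch of Euclidean alignment as given by Eqs (1) and (2); the function names and the eigendecomposition-based inverse square root are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def inv_sqrtm(M):
    """Inverse matrix square root of a symmetric positive (semi-)definite matrix."""
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12))) @ vecs.T

def euclidean_align(trials):
    """Align the EEG trials of one subject (Eqs (1) and (2)).

    trials : array of shape (n_trials, n_channels, n_samples)
    returns: aligned trials of the same shape
    """
    # Eq (1): Euclidean mean of the per-trial matrices X_i X_i^T
    E_bar = np.mean([X @ X.T for X in trials], axis=0)
    # Eq (2): whiten every trial with the inverse square root of the mean matrix
    E_inv_sqrt = inv_sqrtm(E_bar)
    return np.array([E_inv_sqrt @ X for X in trials])
```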
Given a spatiotemporal EEG signal $X$ recorded from $N$ channels, where $X$ is an $N\times T$ matrix and $T$ denotes the number of samples per channel, the normalized covariance matrix of the EEG signal is shown below.
$C=\dfrac{XX^{T}}{\operatorname{trace}(XX^{T})}$  (3)
The covariance matrices $C_1$ and $C_2$ for each class can be calculated as sample means. The projection matrix of CSP is

$W_{csp}=U^{T}P$  (4)

where $U$ is the orthogonal matrix and $P$ is the whitening feature matrix.
After filtering with the projection matrix $W_{csp}$, the feature matrix is obtained:

$Z_0=W_{csp}^{T}X$  (5)

The first $m$ rows and the last $m$ rows of the feature matrix are taken to construct the matrix $Z=(z_1\ z_2\ \cdots\ z_{2m})\in\mathbb{R}^{N\times 2m}$. The feature vector $F=(f_1\ f_2\ \cdots\ f_{2m})^{T}\in\mathbb{R}^{2m\times 1}$ is obtained after normalization.
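As a rough illustration of Eqs (3)–(5), the sketch below computes CSP spatial filters for two classes of aligned trials and extracts log-variance features. It uses the standard generalized-eigenvalue formulation of CSP rather than the exact whitening construction above, and all function and parameter names are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import eigh

def normalized_cov(X):
    # Eq (3): spatial covariance normalized by its trace
    C = X @ X.T
    return C / np.trace(C)

def csp_filters(trials_class1, trials_class2, m=3):
    """Return 2*m CSP spatial filters (as rows) from two classes of trials."""
    C1 = np.mean([normalized_cov(X) for X in trials_class1], axis=0)
    C2 = np.mean([normalized_cov(X) for X in trials_class2], axis=0)
    # generalized eigenvalue problem  C1 w = lambda (C1 + C2) w
    vals, vecs = eigh(C1, C1 + C2)
    order = np.argsort(vals)
    # the first m and last m eigenvectors are the most discriminative filters
    pick = np.concatenate([order[:m], order[-m:]])
    return vecs[:, pick].T                       # shape (2m, n_channels)

def csp_features(W, X):
    # Eq (5) plus normalization: log of the normalized band-power (variance) per filter
    Z = W @ X
    var = np.var(Z, axis=1)
    return np.log(var / var.sum())               # feature vector of length 2m
```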
The SDS method can select the nearest source domain to reduce computational cost and potentially improve classification performance.
Supposing that there are $Z$ different source domains, for the $z$-th source domain, the average feature vector $m_{z,c}$ $(c=1,2)$ of each class is calculated first. After obtaining a small number of target domain labels, the known label information is used to calculate the average feature vector $m_{t,c}$ for each target domain class. Then, the distance between the two domains is expressed as

$d(z,t)=\sum_{c=1}^{2}\left\| m_{z,c}-m_{t,c}\right\|$  (6)
The next step is to cluster these distances $\{d(z,t)\}_{z=1,\dots,Z}$ using k-means. Assuming that the clusters are $(C_1,C_2,\dots,C_k)$, the objective is to minimize the squared error $E$, which can be expressed as

$E=\sum_{i=1}^{k}\sum_{d(z,t)\in C_i}\left\| d(z,t)-\mu_i\right\|_2^2$  (7)

where $\mu_i$ is the mean (also known as the centroid) of cluster $C_i$, with the following expression:

$\mu_i=\frac{1}{|C_i|}\sum_{d(z,t)\in C_i}d(z,t)$  (8)
Finally, the cluster with the smallest centroid is selected, i.e., the source domains closest to the target domain. In this way, OTL is performed for only about $Z/l$ source domains, where $l$ is the number of clusters in the k-means clustering. The larger the value of $l$, the lower the computational cost. However, when $l$ is too large, there may not be enough source domains for classification, resulting in unstable classification performance. Therefore, to balance computational cost and classification performance, $l$ was set to 2.
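A minimal sketch of the SDS step (Eqs (6)–(8)) is given below, using scikit-learn's KMeans with $l=2$ clusters; the variable names, data layout and label conventions are assumptions made for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_source_domains(source_feats, source_labels, target_feats, target_labels, l=2):
    """Return indices of source domains whose class means are closest to the target.

    source_feats : list of (n_i, d) feature arrays, one per source domain
    source_labels: list of (n_i,) label arrays with labels in {-1, +1}
    target_feats : (n_t, d) array of the few labeled target samples
    target_labels: (n_t,) label array
    """
    classes = (-1, 1)
    # class-mean feature vectors of the target domain
    m_t = {c: target_feats[target_labels == c].mean(axis=0) for c in classes}
    # Eq (6): summed class-mean distance between each source domain and the target
    dists = np.array([
        sum(np.linalg.norm(F[y == c].mean(axis=0) - m_t[c]) for c in classes)
        for F, y in zip(source_feats, source_labels)
    ])
    # Eqs (7)-(8): k-means over the distances; keep the cluster with the smallest centroid
    km = KMeans(n_clusters=l, n_init=10, random_state=0).fit(dists.reshape(-1, 1))
    best_cluster = np.argmin(km.cluster_centers_.ravel())
    return np.where(km.labels_ == best_cluster)[0]
```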
In this section, a multi-source OTL with SDS (MSOTL-SDS) algorithm is proposed. The algorithm flow is shown in Figure 1.
We are given $n$ source domains $D_S=\{D_{S_1},D_{S_2},\dots,D_{S_n}\}$ and a certain number of labeled samples $\{(x_t,y_t)\,|\,t=1,2,\dots,m\}$ from the target domain $D_T$. For the $i$-th source domain $D_{S_i}$, $\mathcal{X}_{S_i}\times\mathcal{Y}$ denotes the source domain data space, $x_{S_i}\in\mathbb{R}^{d_i}$ the feature space and $\mathcal{Y}=\{-1,+1\}$ the label space; $f_{S_i}$ denotes the classifier learned on the $i$-th source domain. $\mathcal{X}\times\mathcal{Y}$ denotes the target domain data space, with feature space $x\in\mathbb{R}^{d}$; the target domain and the source domains share the same label space $\mathcal{Y}$.
After the source domain data have been domain-aligned and CSP features extracted, the source domains that differ greatly from the target domain samples are eliminated by SDS, leaving only $k$ $(1<k<n)$ source domains; a classifier is trained on each of these $k$ source domains $D_S=\{D_{S_1},D_{S_2},\dots,D_{S_k}\}$ and weights are assigned accordingly. In online mode, after domain alignment and feature extraction by online Euclidean space data alignment (OEA) and CSP, each sample $(x_t,y_t)$ is sent to the classifier $F_t(\cdot)$, formed by weighting all classifiers, for label prediction. After the true label is obtained, the weight of each classifier and the online target domain classifier are updated through the loss function.
Assume that the target domain classifier is given as follows:
$f_T(x)=\sum_{i=1}^{t}\alpha_i y_i k(x_i,x)$  (9)

where $\alpha_i$ is the coefficient of the $i$-th target sample.
Set the weight vector of the source domain classifiers as $u_t=(u_t^1,u_t^2,\dots,u_t^n)^{T}$, construct the target domain weight variable $v_t$ and apply the Hedge algorithm to dynamically update the weights of the source and target domain classifiers as follows:

$u_{t+1}^{i}=u_t^{i}\beta^{Z_t^{i}},\quad Z_t^{i}=I\bigl(\operatorname{sign}(y_t f_{S_i}(x_t))<0\bigr),\quad i=1,2,\dots,n$  (10)

$v_{t+1}=v_t\beta^{Z_t^{v}},\quad Z_t^{v}=I\bigl(\operatorname{sign}(y_t f_T^{t}(x_t))<0\bigr)$  (11)
Finally, the class label is predicted by the following prediction function:

$\hat{y}_t=\operatorname{sign}\Bigl(\sum_{i=1}^{n}p_t^{i}f_{S_i}(x_t)+p_t^{v}f_T^{t}(x_t)\Bigr)$  (12)

where $p_t^{i}$ and $p_t^{v}$ are the weights of the classifiers in the source domains and the target domain, respectively, calculated as follows:

$p_t^{i}=\dfrac{u_t^{i}}{\sum_{j=1}^{n}u_t^{j}+v_t},\qquad p_t^{v}=\dfrac{v_t}{\sum_{j=1}^{n}u_t^{j}+v_t}$  (13)
Algorithm 1 MSOTL-SDS algorithm
Input: source domain classifiers $f_S=(f_{S_1},f_{S_2},\dots,f_{S_k})$, initial trade-off parameter $C$, discount weight $\beta\in(0,1)$
Initialization: online classifier $f_T^{1}=0$, weights $u_1=\frac{1}{n+1}$, $v_1=\frac{1}{n+1}$
// SDS section
for $s=1,2,\dots,k$ do
  Calculate the distance $d(s,t)$ from the $s$-th source domain to the target domain using Eq (6)
end for
Perform k-means clustering of $\{d(s,t)\}_{s=1,\dots,k}$ and keep the source domains in the cluster with the smaller centroid to form $S'=\{S_1,S_2,\dots,S_n\}$ $(n<k)$
// Multi-source online transfer
for $t=1,2,\dots,m$ do
  Receive the test sample $x_t\in\mathcal{X}$
  Calculate the classifier weights using Eq (13)
  Obtain the predicted label using Eq (12)
  Receive the true label $y_t\in\{-1,+1\}$
  Update the weights of each classifier by Eqs (10) and (11)
  Calculate the loss $l_t=[1-y_t f_T^{t}(x_t)]_+$
  if $l_t>0$ then
    $f_T^{t+1}=f_T^{t}+\tau_t y_t x_t$, where $\tau_t=\min\{C,\,l_t/\|x_t\|^2\}$
  end if
end for
Output: $f_t(x)=\operatorname{sign}\bigl(\sum_{i=1}^{n}p_t^{i}f_{S_i}(x_t)+p_t^{v}f_T^{t}(x_t)\bigr)$
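To make the online loop of Algorithm 1 concrete, here is a simplified sketch of the prediction and weight updates in Eqs (10)–(13). It uses a linear target classifier with a passive-aggressive update, as in the algorithm box (rather than the kernel form of Eq (9)); all function and variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def msotl_online(source_clfs, X_stream, y_stream, C=1.0, beta=0.5):
    """Online multi-source transfer over a stream of target samples.

    source_clfs : list of functions f(x) -> real-valued score (SDS-selected source classifiers)
    X_stream    : (m, d) array of target samples; y_stream: labels in {-1, +1}
    """
    n = len(source_clfs)
    u = np.full(n, 1.0 / (n + 1))        # source-classifier weights
    v = 1.0 / (n + 1)                    # target-classifier weight
    w = np.zeros(X_stream.shape[1])      # linear target classifier f_T(x) = w . x
    predictions = []
    for x, y in zip(X_stream, y_stream):
        s = np.array([f(x) for f in source_clfs])
        fT = w @ x
        p = np.append(u, v) / (u.sum() + v)                 # Eq (13)
        y_hat = np.sign(p[:n] @ s + p[n] * fT)              # Eq (12)
        predictions.append(y_hat if y_hat != 0 else 1.0)
        # Hedge updates of the ensemble weights, Eqs (10) and (11)
        u *= beta ** (np.sign(y * s) < 0)
        v *= beta ** (np.sign(y * fT) < 0)
        # passive-aggressive update of the target classifier (last lines of Algorithm 1)
        loss = max(0.0, 1.0 - y * fT)
        if loss > 0:
            tau = min(C, loss / (x @ x))
            w = w + tau * y * x
    return np.array(predictions)
```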
Here, the algorithm was evaluated on two publicly available datasets of motor imagery EEG signals, where the first dataset was Dataset Ⅱa from BCI Competition Ⅳ [30] and the second dataset was Dataset 2 from BNCI Horizon 2020 [31]. These two datasets have multiple subjects and are suitable for multi-source classification.
1) Dataset Ⅱa of BCI Competition Ⅳ. It comprises 22-channel EEG signals obtained from two different sessions of nine healthy subjects, and the sampling rate was 250 Hz. Each subject was instructed to perform four motor imagery tasks, including movements of the left hand, right hand, feet and tongue. Each task had 72 trials in one session.
2) Dataset 2 (BNCI Horizon 2020 Dataset). It consists of EEG data from 14 healthy subjects, eight of whom were children. The data for each subject consist of two classes of motor imagery EEG (right hand and foot), with 50 samples per class. Each signal was recorded using 15 electrodes positioned according to the international 10–20 system and sampled at 512 Hz.
The preprocessing is introduced first. For BCI Competition Ⅳ Dataset Ⅱa, the EEG signals from 2.5 to 5.5 s of each trial were selected; for Dataset 2 from BNCI Horizon 2020, the signals from 5.5 to 8.5 s were used. Bandpass filtering was performed with a 5th-order Butterworth filter with a passband of 8–30 Hz. The samples in the target domain were randomly permuted 20 times, the online experiment was repeated 20 times and the average results of the 20 repetitions were recorded.
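The following SciPy sketch illustrates the epoching and filtering described above; the exact filter design and epoching details of the original experiments may differ, so the function and its parameters should be read as assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(raw, fs, t_start, t_end, band=(8.0, 30.0), order=5):
    """Epoch and bandpass-filter one trial of raw EEG.

    raw            : (n_channels, n_samples) array
    fs             : sampling rate in Hz
    t_start, t_end : epoch window in seconds (e.g., 2.5-5.5 s for Dataset IIa)
    """
    # cut the motor imagery window
    epoch = raw[:, int(t_start * fs):int(t_end * fs)]
    # 5th-order Butterworth bandpass, 8-30 Hz, applied as a zero-phase filter
    b, a = butter(order, [band[0], band[1]], btype="bandpass", fs=fs)
    return filtfilt(b, a, epoch, axis=1)
```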
To verify its effectiveness, the proposed MSOTL-SDS algorithm is compared against seven state-of-the-art algorithms for EEG classification, as follows:
ⅰ) OECIT-Ⅰ [29]: After aligning the sample domains of the target domain using OEA, the classification was performed with an OTL algorithm.
ⅱ) OECIT-Ⅱ [29]: After aligning the sample domain of the target domain using OEA, the classification was performed with an OTL algorithm; a different weight update strategy was utilized instead of OECIT-Ⅰ.
ⅲ) HomOTLMS [17]: Transfers knowledge from multiple source domains in the OTL process by constructing the final classifier through a weighted integration of the source and target classifiers.
ⅳ) HomOTL-ODDM [28]: A multi-source online transfer algorithm for the simultaneous reduction of marginal distribution and conditional distribution differences between domains via a linear feature transformation process.
ⅴ) EA-CSP-LDA [32]: EA was used to align the data from different domains; then, the source domain data were used to design the filters and an LDA classifier was employed for classification.
ⅵ) CA-Joint distribution adaptation (JDA) [33]: The data were aligned before applying the JDA algorithm.
ⅶ) Riemannian alignment-minimum distance to the Riemannian mean (RA-MDRM) [34]: A Riemannian-space method that centers the covariance matrices relative to a reference covariance matrix.
A comparison of the online classification results for the two datasets is given in Tables 1 and 2. The MSOTL-SDS algorithm achieved the highest average accuracies, 79.29% and 70.86% on the two datasets, respectively.
Subject | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Avg |
RA-MDRM | 72.22 | 56.94 | 84.03 | 65.97 | 60.42 | 67.36 | 61.81 | 86.81 | 82.64 | 70.91 |
CA-JDA | 66.75 | 47.45 | 65.52 | 59.34 | 54.55 | 54.27 | 54.27 | 73.01 | 65.18 | 60.04 |
EA-CSP-LDA | 86.08 | 56.84 | 97.74 | 72.26 | 51.56 | 65.56 | 68.51 | 89.10 | 72.43 | 73.34 |
OECIT-Ⅰ | 88.91 | 56.31 | 97.56 | 73.57 | 58.44 | 67.01 | 71.04 | 92.88 | 75.87 | 75.73 |
OECIT-Ⅱ | 89.51 | 55.49 | 97.88 | 74.06 | 58.03 | 66.64 | 71.09 | 92.98 | 75.94 | 75.74 |
HomOTLMS | 89.25 | 56.36 | 97.84 | 74.22 | 58.90 | 68.17 | 73.92 | 93.69 | 76.51 | 76.54 |
HomOTL-ODDM | 90.91 | 57.31 | 97.54 | 75.57 | 58.84 | 68.01 | 74.04 | 93.88 | 75.87 | 76.89 |
MSOTL-SDS | 90.88 | 60.93 | 97.80 | 76.78 | 60.88 | 73.64 | 78.09 | 94.69 | 79.94 | 79.29 |
Subject | OECIT-Ⅰ | OECIT-Ⅱ | HomOTLMS | HomOTL-ODDM | MSOTL-SDS |
1 | 54.96 | 55.48 | 56.28 | 59.29 | 62.31 |
2 | 63.31 | 64.03 | 67.43 | 68.49 | 71.77 |
3 | 62.74 | 63.21 | 64.33 | 66.53 | 72.24 |
4 | 76.35 | 76.36 | 76.98 | 78.43 | 77.86 |
5 | 55.88 | 55.83 | 56.46 | 57.98 | 60.61 |
6 | 47.29 | 48.68 | 47.36 | 48.26 | 49.67 |
7 | 58.46 | 58.59 | 58.33 | 59.44 | 60.79 |
8 | 84.79 | 83.57 | 84.64 | 85.68 | 84.96 |
9 | 81.73 | 82.45 | 82.92 | 83.79 | 85.93 |
10 | 74.63 | 76.44 | 77.38 | 78.29 | 78.16 |
11 | 70.96 | 71.08 | 72.68 | 73.46 | 72.76 |
12 | 57.46 | 58.44 | 58.69 | 59.78 | 62.21 |
13 | 74.33 | 73.29 | 74.33 | 75.33 | 78.38 |
14 | 68.26 | 68.78 | 70.26 | 71.33 | 74.35 |
Avg | 66.51 | 66.87 | 67.72 | 69.01 | 70.86 |
For BCI Competition Ⅳ Dataset Ⅱa, we mainly used the 22-channel EEG data for the left-hand and right-hand motor imagery classes. The classification accuracy of the MSOTL-SDS algorithm was higher than those of the other seven algorithms for more than half of the subjects. The HomOTLMS and HomOTL-ODDM algorithms had slightly lower accuracies, with HomOTL-ODDM outperforming HomOTLMS. More importantly, except for Subject 3, the classification accuracies of the multi-source online transfer algorithms were higher than that of the single-source online transfer algorithm OECIT, indicating that multi-source online transfer can effectively eliminate the effect of negative transfer when dealing with multiple source domains. The classification accuracy of the MSOTL-SDS algorithm was 2.4% higher than that of the best multi-source online transfer algorithm, HomOTL-ODDM, and 3.55% higher than that of the single-source online learning algorithm OECIT-Ⅱ.
For Dataset 2, the MSOTL-SDS algorithm achieved the best classification accuracy on EEG data for subjects other than Subjects 4, 8, 10 and 11, while HomOTL-ODDM performed best on these four subjects, indicating that, for multi-source online classification, it is necessary to consider the conditional and marginal distributions of the samples. Similarly, the accuracies of the MSOTL-SDS algorithm were 1.85 and 3.99% higher than HomOTL-ODDM and OECIT-Ⅱ, respectively.
Dataset Ⅱa and Dataset 2 are both multi-subject motor imagery datasets recorded from healthy subjects and are therefore suitable for multi-source classification. The difference is that Dataset Ⅱa, from the earlier BCI Competition Ⅳ (2008), has a lower sampling rate but more EEG channels and thus carries more EEG feature information. Dataset 2, from BNCI Horizon 2020, has a higher sampling rate but fewer channels and is more difficult; as a result, all compared algorithms, including MSOTL-SDS, achieved better performance on Dataset Ⅱa.
In Tables 1 and 2, it is noticeable that the MSOTL-SDS algorithm showed a marked improvement in accuracy for some subjects with poorly differentiated class features, such as Subjects 2, 6 and 7 in BCI Competition Ⅳ Dataset Ⅱa and Subjects 1, 2, 3, 5, 9, 12 and 14 in Dataset 2 from BNCI Horizon 2020; for these subjects, it improved upon the single-source OECIT and upon HomOTL-ODDM by averages of 5.85% and 3.69%, respectively. This indicates that, by applying SDS, MSOTL-SDS can effectively identify source subjects whose characteristics are highly similar to those of the target subject when dealing with a multi-source online classification task, thus reducing individual differences and improving classification accuracy. Finally, Table 3 compares the average online classification accuracies of several transfer learning methods on the two datasets [35].
Algorithm | Dataset Ⅱa Acc. (%) | Dataset 2 Acc. (%) |
EA-RCSP-LDA [35] | 73.72 | 64.09 |
EA-RCSP-OwAR [35] | 74.78 | 65.71 |
OECIT-Ⅰ [29] | 75.73 | 66.51 |
OECIT-Ⅱ [29] | 75.74 | 66.87 |
HomOTLMS [17] | 76.54 | 67.72 |
HomOTL-ODDM [28] | 76.89 | 69.01 |
MSOTL-SDS | 79.29 | 70.86 |
Table 4 gives the computation time of the compared algorithms on the two datasets. Dataset 2 is less time-consuming because it has fewer EEG channels than Dataset Ⅱa. OECIT handles an online single-source transfer task and thus takes the least time. HomOTLMS and HomOTL-ODDM cost more time owing to their higher algorithmic complexity. MSOTL-SDS uses a multi-source online transfer framework but, thanks to SDS, needs fewer source domains to train classifiers, so it has an advantage in time consumption. Relative to HomOTLMS, the reduction in computation time was 4.96 s on BCI Competition Ⅳ Dataset Ⅱa and 2.34 s on Dataset 2 from BNCI Horizon 2020. Compared with HomOTL-ODDM, MSOTL-SDS was 36.56 and 31.28 s faster on the two datasets, respectively.
Algorithm | Dataset Ⅱa Mean (s) | Dataset Ⅱa Std (s) | Dataset 2 Mean (s) | Dataset 2 Std (s) |
EA-CSP-LDA | 9.9918 | 0.9250 | - | - |
RA-MDRM | 172.7034 | 27.7032 | - | - |
CA-JDA | 68.0582 | 1.2880 | - | - |
OECIT-Ⅰ | 7.9428 | 0.8685 | 4.6735 | 0.7435 |
OECIT-Ⅱ | 7.8275 | 0.6577 | 4.7823 | 0.6421 |
HomOTLMS | 40.6568 | 1.9549 | 20.8283 | 1.8543 |
HomOTL-ODDM | 72.2592 | 4.9532 | 49.7724 | 4.7286 |
MSOTL-SDS | 35.6982 | 1.9368 | 18.4931 | 1.9254 |
Figure 2(a), (b) show the average online classification accuracy curves as a function of the number of samples for Dataset Ⅱa and Dataset 2, respectively. The poor performance of the OECIT algorithm on the multi-source online classification task is mainly because it treats the multiple source subjects as a single individual during training, which causes negative transfer given the large individual differences among source subjects. For both datasets, MSOTL-SDS demonstrated the best performance in more than half of the per-subject accuracy curves. The HomOTLMS and HomOTL-ODDM algorithms show similar trends in most of the per-subject accuracy curves, which can be explained by their similar multi-source OTL frameworks.
To compare the differences between the proposed MSOTL-SDS algorithm and the comparison algorithms, a paired t-test was applied to the classification accuracies shown in Figure 2, with the significance level set to $\alpha=0.05$. The test results are shown in Table 5. MSOTL-SDS was significantly better than the other algorithms on Dataset Ⅱa, but there was no significant difference between MSOTL-SDS and HomOTL-ODDM on Dataset 2. This indicates that the multi-source online transfer algorithm effectively eliminates the effect of negative transfer when dealing with multiple source domains, and that, for multi-source online classification, reducing the differences in the conditional and marginal distributions of samples is an effective strategy.
Compared algorithm (p-value vs. MSOTL-SDS) | Dataset Ⅱa | Dataset 2 |
EA-CSP-LDA | 0.0004 | - |
RA-MDRM | 0.0081 | - |
CA-JDA | 0.0001 | - |
OECIT-Ⅰ | 0.0010 | 0.0003 |
OECIT-Ⅱ | 0.0005 | 0.0004 |
HomOTLMS | 0.0034 | 0.0021 |
HomOTL-ODDM | 0.0018 | 0.0627 |
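For reference, a paired t-test of this kind can be run with SciPy as sketched below. The accuracy vectors are placeholders (rounded per-subject values from Table 1, paired by subject) used purely to illustrate the call; the paper applies the test to the accuracy curves of Figure 2, so this snippet will not reproduce the exact p-values of Table 5.

```python
from scipy.stats import ttest_rel

# placeholder per-subject accuracies (%) of two methods evaluated on the same subjects
acc_proposed = [90.9, 60.9, 97.8, 76.8, 60.9, 73.6, 78.1, 94.7, 79.9]
acc_baseline = [90.9, 57.3, 97.5, 75.6, 58.8, 68.0, 74.0, 93.9, 75.9]

t_stat, p_value = ttest_rel(acc_proposed, acc_baseline)
print(f"paired t-test: t = {t_stat:.3f}, p = {p_value:.4f}")  # compare p against alpha = 0.05
```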
This study addressed the negative transfer problem in online single-source transfer and sought to reduce the individual differences between subjects. We proposed a multi-source OTL method based on SDS and applied it to the online classification of motor imagery EEG signals. By utilizing a small number of target domain labels in advance, the SDS method selects the source domains that are similar to the target domain while eliminating those that differ from it. Each selected source domain is then used to train a separate classifier, and these classifiers are combined with the online target classifier through weighting coefficients to predict sample labels. The proposed method was validated on two motor imagery EEG datasets and compared with other online transfer methods. The experimental results show that the MSOTL-SDS algorithm achieved the best classification accuracy in the online scenario while keeping its computation time below that of the other multi-source methods. However, this study still has some limitations: the difference in conditional distribution between the source domain and the target domain was not fully considered. Hence, in future work, we can utilize advanced transfer learning algorithms, such as balanced distribution adaptation [37] and manifold embedded distribution alignment [37], to align the conditional distributions between domains and improve the proposed method.
This work was supported by Zhejiang Provincial Natural Science Foundation of China (Grant No. LZ22F010003) and the National Natural Science Foundation of China (Grant Nos. 61871427 and 62071161). The authors also gratefully acknowledge the research funding supported by the National Students' Innovation and Entrepreneurship Training Program (No. 202210336034).
The authors declare that there is no conflict of interest.