
Smart homes aim to provide a comfortable, convenient, and efficient living environment and to effectively alleviate the impact of functional decline [1,2]. Smart homes are also designed to improve energy management [3,4,5]. Achieving these goals requires accurately recognizing the daily activities that take place in the smart home. Several approaches have been proposed to achieve good activity recognition performance, each focusing on a different stage of the activity recognition process [6], such as segmenting sensor event streams [7,8,9,10], extracting and selecting daily activity features [11,12,13,14,15], and developing recognition models [16,17,18,19]. The approach proposed in this paper focuses on extracting and selecting daily activity features.
The primary task of extracting and selecting daily activity features is to establish a feature space and generate a sample space. Daily activity features are divided into temporal features and sensor features. Temporal features include the start time, end time, and duration of a daily activity. For the sensor features, some approaches take all sensors in the smart home as the feature space, while others take sets or sequences of frequent sensors as the feature space. For a given sensor feature and daily activity, most approaches take the frequency with which the sensor is activated during the daily activity as the feature value. The common practice of extracting daily activity features by discretizing sensor event streams loses the time-series character of the streams and limits further improvement of activity recognition performance.
To exploit the time-series character of sensor event streams and thereby improve activity recognition performance, this paper proposes a novel approach for extracting daily activity features. Compared with existing approaches, the proposed approach achieves better activity recognition performance. The main contributions of this paper are as follows.
(1) An algorithm that serves to extract time series data from sensor event streams is proposed.
(2) Several common statistical formulas are proposed to establish an initial feature space.
(3) A feature selection algorithm is employed to generate final daily activity features.
(4) The proposed approach is evaluated on two common datasets. The experimental results show that the proposed approach achieves better performance than previous approaches for constructing daily activity features.
The rest of this paper is arranged as follows: First, related work is introduced; the proposed approach is then introduced; the proposed approach is validated and results discussed. Finally, we summarize our findings.
Approaches for activity recognition in smart homes can be divided into knowledge-driven and data-driven approaches. In knowledge-driven approaches, an activity model is developed as a reusable context model that associates objects, space, and time with activities. The knowledge-driven model is semantically clear and follows agreed-upon conventions. Logic languages and ontologies are the two most common models for representing domain knowledge [20,21,22,23,24,25,26]. After the knowledge model is established, logical reasoning is employed to perform activity recognition. Knowledge-driven approaches are robust but face limitations when the data are uncertain.
Data-driven approaches adopt data mining and machine learning techniques to develop the activity recognition model. Conventional classification algorithms, e.g., Naive Bayesian (NB) [27,28,29,30], Hidden Markov Model (HMM) [31,32], Dynamic Bayesian Network (DBN) [33], Support Vector Machine (SVM) [34], Conditional Random Field (CRF) [35], and Recurrent Neural Network (RNN) [36], have been widely used in activity recognition tasks. Beyond these conventional algorithms, specialized algorithms have also been developed. Wan et al. proposed a novel activity recognition model called COBRA, which combines a sliding window with a logistic regression model for near-real-time activity recognition [37]. To deal with class imbalance and improve model performance, Medina-Quero et al. developed an ensemble classifier based on long short-term memory (LSTM) to recognize daily activities [38].
Besides classification algorithms, well-designed daily activity features are equally vital to activity recognition performance. Daily activity features can be divided into temporal and sensor features. The temporal feature space usually includes the time when a daily activity starts, its duration, and the time when it ends. The sensor feature space is generated directly or indirectly from the set of deployed sensors. Liu et al. took the set of deployed sensors as the sensor feature space, with each sensor serving as a daily activity feature [39]. Because the relationship between sensors is lost when each sensor is treated as an independent feature, daily activities that activate similar sensors are hard to differentiate. To improve activity recognition performance, frequent itemset mining [40], frequent periodic pattern mining [41], and activity modelling based on a low-dimensional feature space [42] have been proposed. In addition, Wen et al. and Nasreen et al. used association rule mining to find frequent sensor combinations and conduct activity modelling in a low-dimensional feature space [43,44]. Twomey proposed an unsupervised method that learns the topology of the sensors in a smart home and mines effective combinations of sensor events as daily behaviour characteristics according to that topology [45]. Yatbaz et al. [46] used Scanpath Trend Analysis (STA) to prioritize sensors and obtain sensor combinations that represent daily activity features, improving the evaluation standard of the model. Compared with the approach that treats each sensor as a daily activity feature, these approaches consume more computing resources even though activity recognition performance is only slightly improved.
For sensor features, the truth value, frequency, and density of activated sensors are the most common feature values [47]. In addition, the term frequency-inverse document frequency (TF-IDF) formula [8], the mutual information formula [48], deep learning techniques [49], and differential representations between activities [50] have been employed to compute these feature values. However, the above-mentioned strategies generate only shallow features, which are far from adequate for describing the time-series nature of sensor event streams. This paper performs in-depth feature mining on the time series data, preserving the essential information carried by time itself. Consequently, the proposed approach can better support the activity recognition model.
In a smart home, different types of non-invasive sensors, e.g., infrared motion sensors and temperature sensors, are deployed in different parts of the house. When residents carry out daily activities, e.g., sleeping and bathing, corresponding sensor readings are generated. Figure 1 shows the sequence of sensors triggered by the daily activity of cooking breakfast; each line denotes a sensor event. Each activated sensor is recorded as a sensor event se, denoted as a four-tuple se = (D, T, I, R), where D and T are the date and the time at which se is generated, I is the identification of the activated sensor, and R is the sensor reading. For example, the sensor event shown in line 1 is generated at 07:58:39.655022 on 2011-06-15; the activated sensor is M007 with reading ON.
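The following minimal sketch illustrates how such a four-tuple could be parsed from one raw event line. It assumes a CASAS-style, whitespace-separated line format as shown in Figure 1; the names SensorEvent and parse_sensor_event are ours and not part of any dataset tooling.

```python
from collections import namedtuple

# Sketch of the four-tuple se = (D, T, I, R) described above, assuming
# CASAS-style lines of the form "2011-06-15 07:58:39.655022 M007 ON".
SensorEvent = namedtuple("SensorEvent", ["date", "time", "sensor_id", "reading"])

def parse_sensor_event(line):
    """Parse one whitespace-separated sensor event line into (D, T, I, R)."""
    fields = line.split()
    # Some lines may append an activity annotation; keep only the first four fields.
    date, time, sensor_id, reading = fields[:4]
    return SensorEvent(date, time, sensor_id, reading)

# Example with the sensor event shown in Figure 1, line 1:
event = parse_sensor_event("2011-06-15 07:58:39.655022 M007 ON")
print(event.sensor_id, event.reading)  # -> M007 ON
```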
Based on the natural time series of sensor events, six categories of common daily activity features are proposed in the form of statistical formulas. Each statistical formula is applied to a given time series. Throughout this section, T = < t1, t2, …, tn > denotes a time series, where ti is the ith time value of the series.
(1) Mean: µ(T) returns the mean of T. µ(T) is defined in formula (1).
$ \mu (T) = \frac{{\sum {{t_i}} }}{n} $ | (1) |
(2) Standard Deviation: σ(T) returns the standard deviation of T. σ(T) is defined in formula (2).
$ \sigma (T) = \sqrt {\frac{{\sum {{{({t_i} - \mu )}^2}} }}{{n - 1}}} $ | (2) |
(3) Skewness: Skew(T) returns the skewness of T. Skew(T) is defined in formula (3).
$ Skew(T) = \frac{\mu_3}{\sigma^3}, \quad \mu_3 = \frac{\sum (t_i - \mu)^3}{n} $ | (3) |
(4) Slope: Slope(T) returns the slope of the linear least-squares regression for the values of T. Slope(T) is defined in formula (4).
$ Slope\left( T \right) = slope\left( {llsr(T)} \right) $ | (4) |
Where llsr returns the linear least-squares regression for the values of T.
(5) Wave: Wave(T) returns the number of peaks and troughs of T. Wave(T) is defined in formula (5).
$ Wave(T) = peaks(T) + troughs(T) $ | (5) |
Where peaks returns the number of peaks of T and troughs returns the number of troughs of T.
(6) Wavelet Transform Coefficients: For a specified width parameter w and t∈T, CWTC(t, w) returns the continuous wavelet transform coefficients. CWTC(t, w) is defined in formula (6.1), where cwt(t, w), defined in formula (6.2), is the Ricker wavelet used as the wavelet function.
$ CWTC(t, w) = wavelettransformcoefficients(t, w, cwt(t, w)) $ | (6.1) |
$ cwt(t, w) = \frac{2}{{\sqrt {3 \cdot w} \cdot {\pi ^{\frac{1}{4}}}}}(1 - \frac{{{t^2}}}{{{w^2}}})exp( - \frac{{{t^2}}}{{2 \cdot {w^2}}}) $ | (6.2) |
Where w is the width parameter in the wavelet transform function, which is 2 in the experiment.
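The following sketch (in Python, the environment used for the experiments) illustrates how these six feature categories could be computed for one time series of trigger times with NumPy/SciPy. The function names, the use of scipy.stats.skew, scipy.stats.linregress and scipy.signal.find_peaks, and the choice to keep only the first few wavelet coefficients are our own assumptions, not part of the formulas above.

```python
import numpy as np
from scipy import stats, signal

def ricker_wavelet(points, w):
    """Ricker ("Mexican hat") wavelet as given in formula (6.2), with width w."""
    t = np.arange(points) - (points - 1) / 2.0
    a = 2.0 / (np.sqrt(3.0 * w) * np.pi ** 0.25)
    return a * (1.0 - t ** 2 / w ** 2) * np.exp(-t ** 2 / (2.0 * w ** 2))

def time_series_features(T, w=2, n_cwt=3):
    """Compute the six feature categories for one time series T of trigger times."""
    T = np.asarray(T, dtype=float)
    n = len(T)
    mean = T.mean()                                                       # formula (1)
    std = T.std(ddof=1) if n > 1 else 0.0                                 # formula (2), n-1 denominator
    skew = float(stats.skew(T)) if n > 2 else 0.0                         # formula (3), standardized 3rd moment
    slope = stats.linregress(np.arange(n), T).slope if n > 1 else 0.0     # formula (4)
    wave = len(signal.find_peaks(T)[0]) + len(signal.find_peaks(-T)[0])   # formula (5)
    # Formula (6): CWT coefficients at width w, obtained here by convolving T with
    # the Ricker wavelet; keeping the first n_cwt coefficients is our own choice.
    coeffs = np.convolve(T, ricker_wavelet(min(10 * w, n), w), mode="same")
    return [mean, std, skew, slope, wave, *coeffs[:n_cwt]]

# Example: trigger times (seconds since the start of the activity) of one sensor.
print(time_series_features([1.0, 2.5, 2.7, 4.0, 6.5, 7.1]))
```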
For the sequence of sensor events activated by a daily activity, time series data are extracted and fed into each feature category, and the feature space is generated by Algorithm 1. Algorithm 1 can be divided into two stages. In the first stage (lines 4-12), the identification and trigger times of each activated sensor are extracted; the trigger times form the time series data. In the second stage (lines 13-22), features are computed by applying the feature formulas to the extracted time series data.
One drawback of the extracted daily activity features is the high dimensionality of the feature set. To retain the features with strong recognition ability and eliminate the weak ones, a feature selection technique is needed to optimize the feature subset. In this paper, the SDSFS algorithm [51] is used to evaluate the activity recognition capability of these features and select an optimal subset.
In the initialisation phase, each agent is assigned a feature subset (its hypothesis) from its search space, i.e., all possible feature combinations of the given subset size. Each agent uses an independent random split to divide the dataset into training and testing subsets at a ratio of 4:1. The hypothesis is a binary string representing the feature subset: a bit of 1 means the corresponding feature is included, and a bit of 0 means it is excluded.
In the test phase, the state of each agent is determined by its fitness, namely the average F-score of multiple classifiers: each agent selects another agent at random and compares fitness values. If the agent's F-score is higher than that of the randomly selected agent, the agent is set to active; otherwise it is set to inactive. All agents repeat this comparison to determine their respective states, after which the diffusion phase begins.
In the diffusion phase, both inactive and active agents choose other agents. If the agent randomly selected by an inactive agent is active, the active agent's hypothesis (feature subset) is offset and shared with the inactive agent; otherwise, the inactive agent chooses a new random hypothesis from its search space (all feature combinations of the subset size). Offsetting randomly removes one selected feature (changing a 1 to 0) and randomly adds another (changing a 0 to 1), which keeps the subset size constant. In addition, when an active agent picks another active agent that maintains a similar hypothesis, the selecting agent is set to inactive and assigned a random hypothesis; this frees up agents and increases diversity. Algorithm 2 is executed repeatedly until the maximum number of iterations (numIterations) is reached.
Algorithm 1. featureExtraction |
Input: S, identifications of the sensors deployed in the smart home; Φ, set of the proposed feature formulas; E, a sequence of sensor events activated by a daily activity a Output: F, the feature set of a 1. F←Ø; 2. TS←Ø; // set of (time, identification) pairs 3. IS←Ø; // set of sensor identifications activated by a 4. while(true) 5. e←getNextSensorEvent(E); // Get the next sensor event e in E. 6. (T, I)←extractTimeAndSensor(e); // Extract T and I of e. 7. TS←TS∪{(T, I)}; 8. IS←IS∪{I}; 9. if(e is the last sensor event in E) then 10. break; 11. end if 12. end while 13. for each I in S 14. for each φ in Φ 15. if I∈IS then 16. TI←getTimeSeries(TS, I); // Time series of trigger times of sensor I. 17. F←F∪{(I, φ(TI))}; 18. else 19. F←F∪{(I, 0)}; 20. end if 21. end for 22. end for 23. return F |
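A compact Python sketch of Algorithm 1 is given below. It assumes sensor events parsed as in the earlier SensorEvent sketch and a dictionary Phi of scalar feature formulas; converting timestamps to POSIX seconds and the microsecond timestamp format are our own assumptions.

```python
from datetime import datetime
import numpy as np

def feature_extraction(sensor_ids, feature_funcs, events):
    """Sketch of Algorithm 1: build the feature set F for one daily activity.

    sensor_ids    -- S, identifications of all sensors deployed in the smart home
    feature_funcs -- Phi, a dict mapping feature name -> scalar function of a time series
    events        -- E, the parsed sensor events activated by the activity (see earlier sketch)
    """
    # Stage 1 (lines 4-12): collect the trigger times of each activated sensor.
    trigger_times = {}                       # sensor identification -> list of trigger times
    for e in events:
        # Assumes timestamps include microseconds, as in the Figure 1 example.
        stamp = datetime.strptime(f"{e.date} {e.time}", "%Y-%m-%d %H:%M:%S.%f")
        trigger_times.setdefault(e.sensor_id, []).append(stamp.timestamp())

    # Stage 2 (lines 13-22): apply every feature formula to every deployed sensor;
    # sensors not activated by the activity contribute 0, as in Algorithm 1.
    F = {}
    for sid in sensor_ids:
        for name, phi in feature_funcs.items():
            F[(sid, name)] = phi(trigger_times[sid]) if sid in trigger_times else 0.0
    return F

# Example Phi with two of the formulas from the previous sketch.
phi = {"mean": np.mean,
       "std": lambda ts: np.std(ts, ddof=1) if len(ts) > 1 else 0.0}
```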
Algorithm 2. Description of SDSFS algorithm |
Input: numIterations, the number of iterations; numAgents, the number of agents Output: Optimal feature subset 1. //Initialisation phase 2. Assign numAgents agents to random hypotheses with inactive states; each agent represents a feature subset. 3. while less than numIterations do 4. //Evaluation phase 5. for each agent in agents 6. Evaluate the fitness value; 7. Find the maximum fitness value; 8. end for 9. //Test phase 10. for each agent in agents 11. if the agent's fitness > a random agent's fitness then 12. Set the agent as active; 13. end if 14. end for 15. //Diffusion phase 16. for each agent in agents 17. if agent is inactive then 18. Select a random agent; 19. if selected agent is active then 20. Copy its hypothesis and offset it; 21. Evaluate the fitness value; 22. else 23. Pick a random hypothesis; 24. Evaluate the fitness value; 25. end if 26. end if 27. end for 28. end while 29. return Optimal feature subset |
Activity recognition performance depends on the daily activity features. We use two common datasets, "Cairo" and "Tulum2009", to evaluate how well the compared feature approaches support activity recognition. "Cairo" and "Tulum2009" are provided by Washington State University [52]. The sensors and daily activities involved are listed in Table 1.
Dataset | Residents and pets | Sensor Categories | Number of sensors | Activity Categories | Number of Activity instances | Measurement Time |
“Cairo” | 2 residents and 1 pet | “Motion sensors” (M001-M027) | 27 | “Night_wandering” | 67 | 57 days |
“Bed_to_toilet” | 30 | |||||
“R1_wake” | 53 | |||||
“R2_wake” | 52 | |||||
“R2_take_medicine” | 44 | |||||
“Breakfast” | 48 | |||||
“Temperature sensors” (T001-T005) | 5 | “Leave_home” | 69 |||
“Lunch” | 37 | |||||
“Dinner” | 42 | |||||
“R2_sleep” | 52 | |||||
“R1_sleep” | 50 | |||||
“R1_work_in_office” | 46 | |||||
“Laundry” | 10 | |||||
“Tulum2009” | 2 residents | “Motion sensors” (M001-M018) | 18 | “Cook_Breakfast” | 80 | 84 days |
“R1_Eat_Breakfast” | 66 | |||||
“Cook_Lunch” | 71 | |||||
“Leave_Home” | 75 | |||||
“Watch_TV” | 528 | |||||
“Temperature sensors” (T001-T002) | 2 | “R1_Snack” | 491 |||
“Enter_Home” | 73 | |||||
“Group_Meeting” | 11 | |||||
“R2_Eat_Breakfast” | 47 | |||||
“Wash_Dishes” | 71 |
For the comparison of daily activity feature approaches, we use Jupyter Notebook to carry out experiments with four methods. The proposed method is called "SR". The second is called "FR", a feature extraction approach in which the activation frequency of each sensor is extracted as a daily activity feature; "FR" and its variants have been the most widely used daily activity features, with feature spaces composed of "st", "et", "du" and sensor features [53]. The other two methods combine SR and FR with the SDSFS algorithm and are called "SR+FS" and "FR+FS", respectively, meaning that SDSFS is employed to select daily activity features after they are extracted.
For a given daily activity, the values of st, et, and du are the start time, end time, and duration of the activity, respectively. Sensor features, each of which corresponds to one sensor, are mapped to all sensors deployed in the smart home. For FR, the value of a sensor feature is the frequency with which the corresponding sensor is activated during the given daily activity; a sketch of this feature vector is given after Table 2. For the SDSFS algorithm, the parameters involved are listed in Table 2.
Configuration Name | Parameter Setting |
the number of iterations | numIterations←150 |
the number of agents | numAgents←30 |
the minimum number of features included in an agent | lowerLim←5 |
the maximum number of features included in an agent | upperLim←30 |
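For reference, a sketch of the FR baseline feature vector described above is shown below. Encoding st and et as seconds since midnight and assuming chronologically ordered events are our own simplifications.

```python
from collections import Counter
from datetime import datetime

def fr_features(sensor_ids, events):
    """FR baseline sketch: [st, et, du] plus the activation frequency of every deployed sensor."""
    # Events are assumed to be in chronological order, as in the raw sensor event stream.
    stamps = [datetime.strptime(f"{e.date} {e.time}", "%Y-%m-%d %H:%M:%S.%f") for e in events]
    st = stamps[0].hour * 3600 + stamps[0].minute * 60 + stamps[0].second     # start time (s since midnight)
    et = stamps[-1].hour * 3600 + stamps[-1].minute * 60 + stamps[-1].second  # end time (s since midnight)
    du = (stamps[-1] - stamps[0]).total_seconds()                             # duration
    freq = Counter(e.sensor_id for e in events)
    return [st, et, du] + [freq.get(sid, 0) for sid in sensor_ids]
```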
For data-driven approaches, activity recognition is usually treated as a classification problem. Without loss of generality, Logistic Regression (LR), Naive Bayesian (NB), Decision Tree (DT) and LSTM classifiers are used to evaluate the proposed approach. Their parameters are listed in Table 3; the remaining parameters use default values. Leave-one-day-out cross validation is used to evaluate the proposed approach (a sketch of this evaluation procedure is given after Table 3). The performance indicators are Recall, Precision, and F-score, defined in formulas (7), (8) and (9), respectively, where Q is the number of activity labels and TPi, FPi, FNi and TNi are the numbers of true positives, false positives, false negatives and true negatives for the ith activity label.
$ Recall = \frac{{\sum\limits_{i = 1}^Q {\frac{{T{P_i}}}{{T{P_i} + F{N_i}}}} }}{Q} $ | (7) |
$ Precision = \frac{{\sum\limits_{i = 1}^Q {\frac{{T{P_i}}}{{T{P_i} + F{P_i}}}} }}{Q} $ | (8) |
$ F\text{-}score = \frac{2 \times Precision \times Recall}{Precision + Recall} $ | (9) |
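These macro-averaged indicators can be computed, for example, with scikit-learn as in the sketch below; the helper name macro_scores is ours.

```python
from sklearn.metrics import precision_score, recall_score

def macro_scores(y_true, y_pred):
    """Macro-averaged Recall, Precision and F-score as defined in formulas (7)-(9)."""
    recall = recall_score(y_true, y_pred, average="macro", zero_division=0)
    precision = precision_score(y_true, y_pred, average="macro", zero_division=0)
    # Formula (9) combines the macro-averaged Precision and Recall directly.
    f_score = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return recall, precision, f_score
```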
Classifiers Name | Parameter Name | Parameter Settings |
LR | regularization intensity, random number seed | C←1.0, random_state←2018 |
DT | random number seed | random_state←2018 |
NB | / | / |
LSTM | The number of units | 16 |
Gradient descent algorithm | AdamOptimizer | |
Learning rate | 1e-3 | |
Batch size | 100 | |
Epoch number | 100 |
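A sketch of the leave-one-day-out evaluation with the three scikit-learn classifiers configured as in Table 3 is given below (the LSTM is omitted). The feature matrix X, label vector y and per-instance day labels are assumed to be prepared as NumPy arrays beforehand; using GaussianNB for NB and adding max_iter for LR convergence are our own assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import precision_score, recall_score

def leave_one_day_out(X, y, days):
    """Leave-one-day-out evaluation with the Table 3 settings for LR, NB and DT."""
    classifiers = {
        "LR": LogisticRegression(C=1.0, random_state=2018, max_iter=1000),  # max_iter is our addition
        "NB": GaussianNB(),                                                 # Gaussian NB assumed
        "DT": DecisionTreeClassifier(random_state=2018),
    }
    results = {}
    for name, clf in classifiers.items():
        y_true, y_pred = [], []
        # Each fold holds out all activity instances recorded on one day.
        for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=days):
            clf.fit(X[train_idx], y[train_idx])
            y_true.extend(y[test_idx])
            y_pred.extend(clf.predict(X[test_idx]))
        p = precision_score(y_true, y_pred, average="macro", zero_division=0)
        r = recall_score(y_true, y_pred, average="macro", zero_division=0)
        results[name] = {"Recall": r, "Precision": p,
                         "F-score": 2 * p * r / (p + r) if (p + r) else 0.0}
    return results
```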
The Recall, Precision, and F-score results are listed in Tables 4-9 and Figures 3 and 5: Tables 4-6 report the Precision, Recall and F-score on dataset "Cairo", and Tables 7-9 report the corresponding metrics on dataset "Tulum2009".
Approaches | LR | NB | DT | LSTM |
FR | 79.263% | 72.207% | 84.346% | 81.021% |
SR | 70.900% | 70.775% | 79.358% | 88.152% |
FR + FS | 83.112% | 81.334% | 87.379% | / |
SR + FS | 87.006% | 83.958% | 89.511% | / |
Approaches | LR | NB | DT | LSTM |
FR | 74.776% | 70.225% | 85.018% | 75.269% |
SR | 66.043% | 68.411% | 78.236% | 88.268% |
FR + FS | 81.310% | 79.023% | 86.295% | / |
SR + FS | 82.918% | 80.634% | 87.114% | / |
Approaches | LR | NB | DT | LSTM |
FR | 75.603% | 69.265% | 84.280% | 76.321% |
SR | 66.254% | 67.167% | 78.074% | 87.748% |
FR + FS | 81.021% | 78.589% | 86.189% | / |
SR + FS | 84.133% | 80.214% | 87.529% | / |
Approaches | LR | NB | DT | LSTM |
FR | 72.627% | 58.688% | 84.993% | 85.907% |
SR | 77.075% | 73.139% | 80.689% | 88.800% |
FR + FS | 81.864% | 65.650% | 86.122% | / |
SR + FS | 90.844% | 81.221% | 85.329% | / |
Approaches | LR | NB | DT | LSTM |
FR | 64.540% | 72.008% | 79.591% | 76.256% |
SR | 74.478% | 75.513% | 78.609% | 86.716% |
FR + FS | 65.519% | 75.564% | 81.985% | / |
SR + FS | 86.101% | 87.574% | 84.771% | / |
Approaches | LR | NB | DT | LSTM |
FR | 65.912% | 58.368% | 81.628% | 80.000% |
SR | 75.543% | 71.112% | 79.265% | 86.909% |
FR + FS | 68.535% | 66.448% | 83.669% | / |
SR + FS | 88.205% | 83.562% | 84.727% | / |
(1) Results on Dataset “Cairo”
After feature selection, the scores of each agent are shown in Figure 2. FR+FS and SR+FS achieve their highest average F-scores with the 14th and 3rd agents, respectively. Based on these results, the following experiments use the feature subsets of the respective best agents.
The Precision obtained using SR+FS is higher than that obtained with the other three methods for all classifiers. The Recall and F-score obtained using SR+FS are higher than those obtained with FR, SR and FR+FS for LR and NB. The highest Precision (89.511%), Recall (87.114%) and F-score (87.529%) are all obtained using SR+FS with DT. Moreover, across the first three classifiers, the average Precision obtained using SR+FS improves by at least 2.883% over the other methods, and the average Recall of SR+FS is 83.555%, a 1.346% improvement over the best of the other three methods. Similarly, the average F-score of SR+FS achieves improvements of at least 7.574%, 13.459% and 2.024% over FR, SR and FR+FS, respectively. Finally, SR also outperforms FR on every metric for the LSTM classifier.
(2) Results on Dataset “Tulum2009”
After feature selection, the scores of each agent are shown in Figure 4. FR+FS and SR+FS achieve their highest average F-scores with the 5th and 22nd agents, respectively. Based on these results, the following experiments use the feature subsets of the respective best agents.
This dataset shows the same pattern of results. The Precision obtained using SR+FS is higher than that obtained with FR and SR for all classifiers, although SR+FS lags slightly behind FR+FS in Precision for DT, and the Recall and F-score obtained using SR+FS are higher than those obtained with FR, SR and FR+FS for LR and NB. Across the first three classifiers, the average Precision obtained using SR+FS improves by at least 7.919% over the other methods, and the average Recall of SR+FS is 86.148%, a 9.948% improvement over the best of the other three methods. Similarly, the average F-score of SR+FS achieves improvements of at least 16.862%, 10.192% and 12.614% over FR, SR and FR+FS, respectively. Finally, SR also outperforms FR on every metric for the LSTM classifier.
We now discuss a few crucial observations from the experiments. As shown in Figures 3 and 5, SR+FS performs better than the other three groups. First, part of this gain may come from feature selection: the original feature set inevitably contains redundant features that are not sensitive to the class label but can disturb the classifier's judgement of a sample, which is why SR alone performs poorly with some classifiers.
In addition, the traditional method only counts sensor activation frequencies and discards the timing information of the sensors. In contrast, our method extracts the trigger times of each sensor within the activity and then computes features from them. These different feature computations increase the diversity of the features, so that both the frequency information and the timing information of the time series data are retained.
We note that the daily activity categories are imbalanced, and this paper does not apply any corresponding processing during modelling. Large differences between class sizes may bias the model towards the majority classes, which may affect the performance evaluation to some extent. Consequently, it may be worthwhile to further study how such imbalance impacts performance, especially in smart home environments.
Daily activity features have a significant influence on activity recognition performance. To improve activity recognition performance, we proposed a statistical representation of daily activity features based on the time-series nature of sensor event streams. We used four classifiers to compare the proposed approach with approaches based on the frequency and truth value of sensor events on two common datasets. The results show that the proposed approach significantly improves activity recognition performance.
This work was supported by the National Natural Science Foundation of China (No. 61976124); the Open Project Program of Artificial Intelligence Key Laboratory of Sichuan Province (Nos. 2018RYJ09, 2019RZJ01); the Opening Project of Key Laboratory of Higher Education of Sichuan Province for Enterprise Informationalization and Internet of Things (No. 2019WZY03); the Major Frontier Project of Science and Technology Plan of Sichuan Province (No. 2018JY0512).
The authors declare no conflicts of interest.