Research article

Mathematical analysis of an SIR respiratory infection model with sex and gender disparity: special reference to influenza A

  • Received: 11 November 2017 Accepted: 01 December 2018 Published: 26 March 2019
  • The aim of this work is to study the impact of sex and gender disparity on the overall dynamics of influenza A virus infection and to explore the direct and indirect effects of influenza A mass vaccination. To this end, a deterministic SIR model has been formulated and thoroughly analysed, where the equilibrium and stability analyses have been explored. The impact of sex disparity (i.e., disparity in susceptibility and in recovery rate between females and males) on the disease outcome (i.e., the basic reproduction number R0 and the endemic prevalence of influenza in females and males) has been investigated. Mathematical and numerical analyses show that sex and gender disparities affect the severity as well as the endemic prevalence of infection in both sexes. The analysis further shows that the efficacy of the vaccine for both sexes (e1 & e2) and the response of each gender to mass-vaccination campaigns ψ play a crucial role in the influenza A containment and elimination process, as they significantly impact the protection ratio as well as the direct, indirect and total effect of vaccination on the burden of infection.

    Citation: Muntaser Safan. Mathematical analysis of an SIR respiratory infection model with sex and gender disparity: special reference to influenza A[J]. Mathematical Biosciences and Engineering, 2019, 16(4): 2613-2649. doi: 10.3934/mbe.2019131



    1. Introduction

    Large-scale deployment of smart meters for residential customers is well underway in many European and other countries. For example, it is anticipated that by 2020 all UK households who give their permission will be equipped with an automatic meter reading system that measures and displays, in real time, aggregate energy usage on an in-home display unit [1]. This large governmental investment promises significant improvements in managing energy demand via automatic, more efficient and better-informed billing. However, to provide deeper energy feedback, information about the consumption of individual appliances is necessary. Indeed, up to a 20% reduction in energy consumption is expected via appliance-level feedback and targeted appliance replacement programs [2].

    Monitoring individual appliances using individual appliance-specific sensors in a house is often impractical and expensive, especially since the number of electrical devices in the home is rapidly increasing. On the other hand, energy disaggregation via non-intrusive appliance load monitoring (NALM) offers a non-intrusive, purely computational, software-based approach to separate aggregate load obtained from a single electricity meter into individual appliance loads.

    NALM appeared in the research literature in the 1980s [3], and since then, many NALM algorithms have been proposed that improve the initial design of [3] and adapt to advances in sensor technology, capturing energy measurands at a range of sampling rates, generally in the order of kHz. See [2,4,5,6,7] for examples of NALM applications. However, with large-scale smart metering deployments on the way, there is an increased interest in NALM algorithms that work at lower sampling rates, in the order of seconds and minutes. It is not only the cost of the sensing technology [2], but also computational and storage costs as well as implementation efficiency that are key drivers towards wide deployment of low-sampling smart meters. For example, in the USA most utilities capture data at 15-min intervals. UK smart meters, as defined by the Smart Metering Equipment Technical Specification (SMETS) proposal by the UK Department of Energy and Climate Change, provide readings at 30-min intervals to energy suppliers, but an 8-10 second sampling rate for load readings is available to households that install a Consumer Access Device in their homes to read the smart meter measurements directly [1]. However, so far, there are no widely available efficient solutions for NALM that offer high accuracy and low complexity at such low sampling rates [4,5].

    Based on the requirement for labelled training, all NALM methods can be grouped into supervised and unsupervised techniques (though hybrid, semi-supervised approaches are also possible). Supervised NALM techniques (see, for example, [8,9,10]) require a labelled dataset for training, are commonly based on event detection, and generally provide the highest disaggregation accuracy. They rely on different optimization and pattern recognition approaches, such as rule-based, neural networks, or Bayes-based classification. However, these approaches are less practical and prone to errors since the training usually relies on customer-filled appliance diaries, that are often unreliable.

    Unsupervised approaches do not require a labelled dataset for training, and are currently probabilistic [11,12,13,14], based on sparse coding [15], or time-series and motif mining [16,17]. All these approaches, however, still require substantial customer input and depend on the availability of time periods when only one appliance is running for building efficient probabilistic models [11,12,13] or database of signatures [17].

    In this paper, based on our initial conference paper [18], we propose an efficient low-complexity supervised NALM approach that combines k-means and Support Vector Machine (SVM). In particular, to benefit from the high classification performance of non-linear SVMs and low computational cost of k-means clustering, we effectively combine conventional k-means and SVM obtaining a hybrid method that outperforms k-means and SVM classification individually. We use k-means to cleverly select a subset of input data used to train SVM. By training the SVM only on a small set of representative samples, we are able to significantly reduce processing time. Note that, combining k-means and SVM has been reported before, but not in the context of NALM. Recognizing that in a majority of cases a large portion of the input data is redundant for training, in [19], k-means is used to decrease the number of support vectors and the training set size. Similarly, in [20,21], k-means is employed to select a subset of original data for SVM training. The clustered SVM of [22] trains a linear SVM on each of the k-means clusters, in a divide-and-conquer manner.

    To make the above approach practical and reduce or remove the customer effort in maintaining a time-diary, a database of appliance signatures is created. Such a database is then used to develop a novel approach that requires no training from the household, and hence no input from the customer. The designed database is a compact collection of appliance power load signatures (active power measurements over a duty cycle) plus statistical features, such as mean, variance, auto-correlation, and a statistical model for each appliance, that are then used for load disaggregation. The database is populated using open source datasets from the USA [23], Austria and Italy [24], and our measurements in 20 UK houses [7]. Similar attempts have recently been reported in [25], but for USA houses only and high sampling rates.

    The main contributions of the paper are:

    · A low-complexity NALM approach, trained on measurements from the house whose aggregate load NALM is being applied on; this is termed Approach 1 using House-specific training data;

    · A generic database of appliance signatures populated from 34 houses in the UK, Europe, and the US, containing over 200 appliance signatures. The database* (* The database is made publicly accessible.) can be used with different energy disaggregation algorithms as well as for appliance mining and load prediction;

    · A low-complexity NALM approach that uses the developed database for training, irrespective of the house, and hence does not require customer input; this is termed Approach 2 using House-agnostic training data.

    The developed approaches are tested in real settings using real house measurements. We tested the supervised approach for different training periods and artificially introduced errors in the training set.

    The rest of the paper is organized as follows. Section 2 provides a brief review on NALM. Section 3 describes the proposed NALM algorithms and the database of appliance signatures. The last two sections discuss the simulation results, conclusion and future work.

    2. Background and literature review

    Non-intrusive Appliance Load Monitoring (NALM), also referred to as NILM or NIALM [3], disaggregates total power readings and identifies each appliance in use at any point in time based on the available measured total household consumption.

    Traditional event-based NALM methods [3] consist of signal pre-processing, edge detection and feature extraction followed by classification. After acquisition, signal pre-processing can be done in the form of power normalization, filtering (for signal smoothing and getting rid of sudden peaks), and thresholding to remove small power loads that would appear as noise as well as the base-load, from appliances that are always running. Next, edge detection is done to identify events of appliances switching on and off. Edge detection is followed by extracting the features in the identified event windows. Classification is then used to group sets of extracted features which have similar characteristics, such as power levels, time profile, reactive components etc.
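    The edge-detection step described above can be sketched as follows; this is a minimal illustration, where the function name, the fixed watt threshold, and the example readings are our own assumptions rather than the paper's implementation:

```python
def detect_edges(power, threshold):
    """Return (index, delta) pairs where consecutive active-power
    readings differ by more than `threshold` watts.

    Positive deltas correspond to switch-on events, negative
    deltas to switch-off events.
    """
    edges = []
    for i in range(1, len(power)):
        delta = power[i] - power[i - 1]
        if abs(delta) > threshold:
            edges.append((i, delta))
    return edges

# Example: a kettle-like 2 kW load switching on and then off
readings = [80, 82, 2081, 2080, 2079, 81, 80]
print(detect_edges(readings, threshold=100))  # -> [(2, 1999), (5, -1998)]
```

    In a full pipeline, the samples between a large positive and the matching negative edge would then form the event window passed on to feature extraction.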

    In this paper, we focus on low complexity, low-rate NALM algorithms, where sampling rates are in the range of seconds and minutes. In particular, we test the proposed methods using 1 sec, 8 sec, and 1 min sampling rates. The sampling rate influences the type of features that can be used. For example, low-rate NALM approaches can use only steady-state parameters, such as active or real power [12], reactive power [3,4], power factor [26], voltage or current waveform [27,28].

    The simplest approach, from an implementation point of view, is to use a current transformer (CT) sensor with a clamp to measure alternating current and an AC-to-AC power adapter with a circuit to measure voltage. This way, active and reactive power components can be calculated from the measured current and voltage. However, measuring voltage in a simple way requires additional plug points, which are often not available close to the electricity meter. Moreover, processing, communicating and storing two dimensional data (active and reactive power) is often impractical, especially because the reactive component is not needed for billing purposes. That is why, in this paper, we consider disaggregation using only active power values, obtained, for example, from the electric current measured via a simple CT sensor, which is a type of metering massively deployed for automated meter reading.

    Recent work on NALM has mainly focused on state-based probabilistic methods. In [11] four different methods for low-rate NALM are proposed using (conditional) factorial Hidden Markov Model (HMM) and Hidden semi-Markov models. The obtained accuracy was in the range of 72% to 99% for 3 sec sampling rate in seven different houses with an average accuracy of 83%. This method cannot disaggregate base load, that is, the lowest most frequent value extracted from the aggregate load data, which is a good indication of the number of appliances being left on standby or background appliances such as boiler control units, fridges and freezers. The method is not of low computational complexity, and is prone to converge to a local minimum.

    In [12] a factorial HMM is used for disaggregation of active power load at 1 min sampling rate. The method builds initial models for state transition probabilities using knowledge of appliance-specific power operation that can be obtained, for example, from study and understanding of the appliance operation. To obtain reliable results, it is necessary to correctly set the a priori-values for each state of each appliance, which in turn is strongly dependent on the particular aggregate dataset on which NALM is being performed. Indeed, a similar factorial HMM-based approach is tested in [23], where it is shown that the disaggregation accuracy drops by up to 25% when different houses are used to set the initial models compared to the case when the same house is used for building the models (training) and testing. Results are reported for REDD dataset [23] with sampling rates of 1 sec and 3 sec.

    In [29] and [8] a decision-tree (DT) classifier is used for pattern matching. The DT-based algorithm developed in [8], is a low-complexity, supervised approach that uses only rising and falling active power edges to build a DT model that is used for classification. The method is not scalable, since re-training is needed whenever a new appliance is added, but is fast and performs very well even when the training period is very short.

    In [13], an unsupervised Additive Factorial Approximate Maximum A-Posteriori (AFMAP) inference algorithm is proposed using differential factorial HMMs. First, all snippets of active power data are extracted using a threshold and modelled by an HMM; next the k-nearest-neighbor graph is used to build nine motifs that are treated as HMMs over which AFMAP is run. The results show average accuracy of 87.2% using 7 appliances and a sampling rate of 60 Hz. In [14] a Hierarchical Dirichlet Process Hidden Semi-Markov Model (HDP-HSMM) factorial structure is used, removing some limitations of the approach of [11], but at increased complexity. The results of [14] are reported for five appliances using 20 sec resolution with 18 24-hour segments across four houses from the REDD dataset [23], outperforming the EM-based method of [23].

    A powerful classification technique used for NALM is SVM. SVM-based NALM has shown good performance [5], is scalable, and is a well-established method for classifying noisy data. Non-linear classifiers, such as kernel SVMs, which map the input feature space into a high-dimensional space and find the optimal separating hyperplane between two classes, are among the most effective classification methods, but have at least quadratic training time complexity.

    The main problem with the above state-based and SVM-based approaches is that they are not suitable for real-time applications due to their high computational complexity. See [30] for some examples. The low-complexity HMM-based method proposed in [31] reduced execution time by a factor of 72.7, but still requires 11.4 seconds for disaggregating two appliances and 94 minutes for 11 appliances.

    Motivated by the increased demand for near real-time approaches with minimal to no customer input, in the next section we first propose a low-complexity supervised NALM approach that remedies the first problem, high complexity, and then build a database of signatures tackling the second issue, customer input in the form of time-diaries, for example.

    3. Methodology

    In this section, we describe our disaggregation algorithms, starting with our first approach using house-specific training data, followed by the design of the appliance power load signature database and finally our second approach using house-agnostic training data from the database.

    The disaggregation procedure always comprises three steps: event detection, feature extraction, and classification, and the proposed two methods differ only in the classification step. We use edge detection [8] to isolate appliance events. Event detection isolates windows of events where an appliance is switched on or off (see the conference version [18] for more details). From each detected event window, different features are extracted and stored. Tested extracted features include (1) all active power readings in the event window, (2) rising/falling edge, (3) maximum/minimum active power value, (4) area, calculated as the area of the irregular polygon formed by the active power (Watt) samples in the event window, i.e., the energy of that event window in Joules. The optimal features to use, for each appliance, will be selected using the training dataset. Extracted features from each detected event are matched to the pre-defined appliance classes using a trained classifier.
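    The feature extraction step can be illustrated with a small sketch; the function name and the approximation of the area feature as the sum of samples times the sampling interval are our own assumptions, chosen so that the area equals the window energy in joules as described above:

```python
def extract_features(window, interval_s):
    """Features computed from one event window of active-power (W) samples.

    `interval_s` is the sampling interval in seconds, so the area
    feature approximates the energy of the window in joules.
    """
    diffs = [window[i + 1] - window[i] for i in range(len(window) - 1)]
    return {
        "rising_edge": max(diffs),     # largest positive step
        "falling_edge": min(diffs),    # largest negative step
        "max_power": max(window),
        "min_power": min(window),
        "area_J": sum(window) * interval_s,
    }

window = [0, 1200, 1210, 1190, 0]      # e.g. a microwave burst at 8 s sampling
feats = extract_features(window, interval_s=8)
print(feats["max_power"], feats["area_J"])  # -> 1210 28800
```

    Which subset of these features works best differs per appliance, which is why the paper selects features during training.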

    3.1. Proposed Approach 1 using house-specific training data

    In the first approach, training is always done on aggregate data using a labelled dataset, which is obtained from time-diaries or sub-metering of the particular house, whose aggregate load is being disaggregated.

    First, we test two well-known techniques to perform classification and pattern matching: k-means and SVM. First we adapt k-means, which we term trained k-means, to perform supervised clustering similarly to [32]. Trained k-means uses a labelled dataset to classify the input data based on minimum distance classification. By a labelled dataset, we mean a collection of event windows with labels indicating which appliance was running. For example, if a microwave was switched on, the resulting event window of active power samples will then be labelled as microwave. During training, aggregate samples with Appliance A label from the entire training dataset are grouped, forming the Appliance A class. Like conventional k-means, the centroid of each appliance class is set as its head. Note that, the number of classes is always equal to the number of known appliances in the household. When a new testing sample (feature vector - active power load) is introduced, it is compared with all heads, and the minimum distance determines the classification outcome.
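    The trained k-means idea — one centroid per labelled appliance class, then minimum-distance classification — can be sketched as follows; the function names and toy feature vectors are hypothetical:

```python
import math

def train_kmeans(labelled):
    """`labelled` maps appliance name -> list of labelled feature vectors.
    The head of each appliance class is the centroid of its samples."""
    heads = {}
    for appliance, vectors in labelled.items():
        dim = len(vectors[0])
        heads[appliance] = tuple(
            sum(v[d] for v in vectors) / len(vectors) for d in range(dim)
        )
    return heads

def classify(heads, x):
    """Minimum-distance classification of feature vector x against the heads."""
    return min(heads, key=lambda a: math.dist(heads[a], x))

# Toy (power, duration) feature vectors for two appliance classes
heads = train_kmeans({
    "kettle": [(2000, 120), (2050, 118)],
    "fridge": [(90, 600), (110, 640)],
})
print(classify(heads, (1980, 125)))  # -> kettle
```

    Note that, as in the paper, the number of classes equals the number of known appliances, so no cluster count needs to be tuned.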

    SVM-based algorithms are optimal classifiers in the presence of noise and proven to perform well for NALM applications [5]. We train binary classifiers to separate one appliance at a time. After an appliance has been classified, its contribution is removed, the threshold used for edge detection is adapted, and disaggregation is attempted on the next appliance.

    While the trained k-means-based NALM is time efficient, it provides low disaggregation accuracy. On the other hand, as shown in next section, the SVM-based NALM method significantly outperforms the trained k-means-based approach, but requires up to 10 times more execution time. In order to design a high-performance, low-complexity solution, we propose to use SVM on a substantially reduced training set, obtained using trained k-means.

    To combine k-means and SVM, we first train k-means as explained above using the entire training dataset. As a result, k classes, each corresponding to one appliance, are formed with a centroid as head. Next, all feature vectors falling in Class i that are at an Euclidean distance larger than r from their head, form a subset of feature vectors Ci that is removed from Class i and used to train an SVM for Appliance i. r is a pre-set threshold, unique for each house, obtained heuristically, that is used to trade off complexity and performance. See Fig.1 for an illustration. Note that, in this way, SVM will be trained using a significantly reduced dataset obtained from the trained k-means classifier, and hence the combined k-means+SVM complexity will be reduced, compared to SVM classification alone. Algorithm 1 shows the training steps, where d(x, y) denotes the Euclidean distance between vectors x and y.

    Figure 1. Filtering data samples in the proposed algorithm. Red rhomboids inside the circle centred at cluster head c will not be fed into the SVM training module.

    During testing, in general, if the Euclidean distance between a tested sample and any cluster head is smaller than a pre-set threshold, then the sample is classified to the closest cluster head. Otherwise, the sample is input to the SVM classifier. The proposed combined method has low execution time, since many samples will be classified rapidly using k-means, and only a small amount of samples that are far away from their heads, will be fed to the SVM classifier. However, the proposed algorithm maintains high performance, since SVM improves classification for samples that would most likely be incorrectly classified using the trained k-means.

    Algorithm 1 Training: perform training on the extracted features of the collected dataset.
    function TRAIN(labelled training dataset L, |M|, r)
        k = |M|                          // number of appliances
        [Cluster, c] = kmeans(k, L)      // call kmeans; returns cluster distribution and cluster heads c
        for i = 1 : k do
            Ci = ∅
            for all l ∈ Cluster_i do     // Cluster_i denotes the i-th cluster in Cluster
                if d(l, c_i) ≥ r then    // c_i denotes the i-th element of the k-length vector c
                    Ci = Ci ∪ {l}
                end if
            end for
            SVMTrain(Ci)                 // call conventional SVM training function
        end for
    end function

    Algorithm 2 Testing: perform testing on the extracted features of the collected dataset.
    function TEST(testing dataset, Cluster, c, |M|, r)
        k = |M|                          // number of appliances
        for i = 1 : k do
            Ci = ∅
            for all l ∈ Cluster_i do
                if d(l, c_i) ≥ r then
                    Ci = Ci ∪ {l}
                else
                    classify sample l to the appliance corresponding to c_i
                end if
            end for
            SVMTest(Ci)                  // call conventional SVM testing function
        end for
    end function

    Testing is straightforward and shown in Algorithm 2. Samples at distance less than r from a cluster head are classified to the appliance corresponding to that cluster head. All other samples are classified using SVM.
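    The testing rule above can be sketched in a few lines; this is a minimal illustration in which the SVM bank is abstracted as a callable (the actual per-appliance SVM training is not reproduced here), and the function name, heads, and radius are our own assumptions:

```python
import math

def combined_classify(sample, heads, r, svm_classify):
    """Testing rule of the combined method: samples within distance r of
    their nearest cluster head are classified by trained k-means; all
    others fall through to the SVM classifier.

    `heads` maps appliance -> centroid; `svm_classify` stands in for the
    trained SVM bank.
    """
    nearest = min(heads, key=lambda a: math.dist(heads[a], sample))
    if math.dist(heads[nearest], sample) < r:
        return nearest                   # fast k-means path
    return svm_classify(sample)          # hard sample -> SVM

heads = {"kettle": (2000.0,), "fridge": (100.0,)}
easy = combined_classify((1990.0,), heads, r=50, svm_classify=lambda s: "svm")
hard = combined_classify((1000.0,), heads, r=50, svm_classify=lambda s: "svm")
print(easy, hard)  # -> kettle svm
```

    Because most samples lie close to a head, the expensive SVM is invoked only for the ambiguous minority, which is where the execution-time savings come from.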

    3.2. Database of appliance signatures

    All domestic appliances are designed to work within a certain active power range, which can often be found in the appliance instruction manual. However, in practice, the actual active power measured will deviate due to electrical noise, interference, ageing, etc. The probability density function (PDF) of active power captures the electrical behaviour of an appliance, e.g., Figures 2 and 3 show the pdf for several domestic appliances in the REFIT and GREEND datasets. Due to the typical low sampling rates (≤ 1 Hz) expected of smart meters, we focus only on steady state operation, automatically removing transient values from each appliance operation during data cleaning. As shown in Figures 2 and 3, the active power follows a Gaussian mixture distribution. See Subsection 4.4, which validates Gaussian modelling using root-mean squared error (RMSE) with respect to the true power load curve obtained through sub-metering.

    Figure 2. Probability density distribution of six REFIT dataset appliances, characterised by Gaussian mixture distribution. Fitted data shows appliance power sampled at 8 sec for five households over a period of 5 weeks.
    Figure 3. Probability density function for four different appliances from the GREEND dataset. Histograms are showing true data obtained via sub-metering.

    A database of 280 unique appliance-load profiles is constructed from REDD [23], GREEND [24] and REFIT [7] datasets, with appliances ranging from standard kitchen appliances, such as kettle, toaster, microwave, cold appliances, washing machines and dishwashers, to electronics such as TV, Hi-Fi, PCs. Each database entry comprises the duty cycle for each appliance at the dataset’s original sampling rate. Additionally, just like [11], each appliance’s power load profile is modelled using a Gaussian mixture model, obtained via curve fitting. The model is represented in the database by its mean, variance, PDF and the first two correlation coefficients, calculated as:

    @R(\tau)=\frac{\sum_t (X_t-\mu)(X_{t+\tau}-\mu)}{\sigma^2},@
    (1)
    where @X_t@ is the active power measurement at time instant t, @\mu@ is the mean power value, @\sigma^2@ is the variance and @\tau@ is the sample lag.
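    A direct computation of the correlation coefficients can be sketched as below; note that we normalise by the sum of squared deviations (i.e., n times the biased variance estimate), a conventional choice, made here as an assumption, that keeps R(0) = 1:

```python
def correlation_coefficient(x, tau):
    """Sample autocorrelation R(tau) of a power series x:
    sum_t (x_t - mu)(x_{t+tau} - mu), normalised by sum_t (x_t - mu)^2."""
    n = len(x)
    mu = sum(x) / n
    denom = sum((v - mu) ** 2 for v in x)        # = n * sigma^2 (biased)
    num = sum((x[t] - mu) * (x[t + tau] - mu) for t in range(n - tau))
    return num / denom

# An alternating load is strongly anti-correlated at lag 1
print(round(correlation_coefficient([1, 2, 1, 2, 1, 2], 1), 3))  # -> -0.833
```

    The database stores only the first two coefficients, R(1) and R(2), alongside the mean, variance and PDF.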

    We aim to build one general model for each type of appliance, i.e., one generic refrigerator model that best fits all refrigerator makes and models encountered in our dataset houses. While this kind of generalization may not work for some appliances, such as televisions, all monitored washing machines and dishwashers have similar signatures. Figure 4 shows Gaussian distribution models for different televisions (TVs). It is evident from the figure that the energy consumption of different TV makes and models varies considerably. Figures 5a and 5b show the Gaussian distribution models for the washing machine and dishwasher obtained using the data from all GREEND houses. It can be seen that an efficient general model can be formed that represents different appliance brands well, because all tested washing machines and dishwashers in our dataset have similar signatures, in the sense that there are clearly identifiable cycles, each with a similar operational power range.

    Figure 4. Different distributions of different TV types from the GREEND dataset. Histograms are showing true data obtained via sub-metering.
    Figure 5. Washing machine and dishwasher general models.

    Some appliances, referred to as multi-state appliances, such as washing machine or dishwasher, have several operating states. For example, the washing machine typically comprises three distinct states: wash, rinse and spin. The number of Gaussian components in the Gaussian mixture model is thus set to the number of operating states. Standby modes are neglected using proper thresholds for each appliance.

    3.3. Approach 2 using house-agnostic training data

    Using the appliance active power signatures from the database, a general appliance model is generated for each appliance type, using the PDF of this appliance type. Note that the appliance PDF is generated from the active power signatures of that appliance, obtained from different makes and models in different houses in our datasets. This way, we generate one general Gaussian mixture model for each appliance type (e.g., washing machine, kettle, etc.), described by its mean and variance.

    Based on the generated model, we design two methods. The first method draws samples from the obtained Gaussian mixture distribution that are used to form a training dataset. Effectively, the labelled dataset, in the case of Approach 1, that needs to be obtained via time-diary or sub-metering, is replaced by data samples generated from the Gaussian distribution for that particular appliance. Then, Algorithm 1 can readily be used to perform labelling, without any need for a time diary or sub-metering. The testing approach is the same as in the supervised method above.
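    Drawing a synthetic training set from a general appliance model can be sketched as follows; the component weights, means and standard deviations below are hypothetical placeholders for the values a database entry would supply:

```python
import random

def sample_gmm(components, n):
    """Draw n synthetic training samples from a Gaussian mixture.

    `components` is a list of (weight, mean_W, std_W) tuples describing
    one appliance type's general model."""
    weights = [w for w, _, _ in components]
    samples = []
    for _ in range(n):
        _, mu, sigma = random.choices(components, weights=weights)[0]
        samples.append(random.gauss(mu, sigma))
    return samples

random.seed(0)
# Hypothetical washing-machine model: wash / rinse / spin states
wm = [(0.5, 150.0, 20.0), (0.3, 300.0, 30.0), (0.2, 2200.0, 100.0)]
training = sample_gmm(wm, 1000)
print(len(training))  # -> 1000
```

    The synthetic samples then play the role of the labelled event windows in Algorithm 1, so no time-diary or sub-metering of the target house is needed.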

    The second proposed method, called the Mean-variance General Model approach, uses only the mean and variance of the generated general Gaussian mixture model. That is, unlike the previous approach, it does not draw samples from the model, which in turn implies that no features besides mean and variance (such as those shown in Table 1) can be used for k-means and SVM classification. Due to this limited feature space and the statistical similarity of appliance signatures, this approach is not expected to perform well.

    Table 1. Comparison between the three methods using FM for REDD data House 2.

    Features               Appliance      k-means FM(%)   SVM FM(%)   Proposed Combined FM(%)
    Area & duration        MW             0               0           25.9
                           Stove          0               0           0
                           Refrigerator   92.1            92.1        80.1
                           Toaster        0               2.6         4.7
                           Dishwasher     0               1.7         8
    Min & area             MW             0               0           0
                           Stove          0               0           44.4
                           Refrigerator   92.1            72.7        92.4
                           Toaster        0               31.7        27.8
                           Dishwasher     0               0           0
    Max, dur. & area       MW             0               0           0
                           Stove          0               0           0
                           Refrigerator   92.4            92.1        92.6
                           Toaster        30.7            65.8        76.5
                           Dishwasher     0               26.08       29.2
    Max, area & max/mean   MW             0               0           0
                           Stove          0               2.1         0
                           Refrigerator   92.1            92.4        93
                           Toaster        0               37.7        72.5
                           Dishwasher     0               0           28.5

    Note that both approaches can be used with different event-based supervised methods. That is, the designed general appliance model can replace training. We test both house-specific and house-agnostic training-based approaches next.

    4. Results and Discussion

    In this section we present our experimental results and discuss our main findings. We use the publicly available REDD [23] and GREEND [24] datasets, with 1 min and 1 sec resolution, respectively, as well as our own measurements that constitute the REFIT dataset [7], acquired at 8 sec resolution. The training size varied across the experiments, but testing is always performed on four weeks' worth of data. We used Spring, Summer and Autumn periods for training and testing.

    We organize the results as follows. First, the performance of the proposed Approach 1 with house-specific training is assessed against the k-means and SVM approaches separately. We show that combining k-means and SVM, as proposed in the previous section, leads to a significant reduction in processing time while providing accuracy similar to that of SVM-based disaggregation. Then, we discuss feature selection and show that different classification features suit different appliances. Hence, we propose that the choice of feature(s) to use for each appliance is made during training.

    The third set of results assesses accuracy of the proposed approach when the training period varies and labelling errors occur. We show that the proposed approach is not sensitive to the size of the training period and presence of labelling errors. We use HMM-based [12], k-means-based, and the SVM-based approach as benchmarks.

    Finally, we show that the proposed statistical modelling methodology introduces very small RMSE. Appliance models are then fed to the disaggregation module using house-agnostic training data. This provides close performance to the approach using house-specific training data, but without the requirement for training on the specific house data.

    4.1. Performance metrics

    The evaluation metrics used are precision (PR), recall (RE) and F-Measure (FM), commonly used in NALM literature ([8,17]) and defined as:

    PR = TP / (TP + FP),        (2)

    RE = TP / (TP + FN),        (3)

    FM = 2 (PR × RE) / (PR + RE),        (4)

    where true positive (TP) represents a correctly detected event, false positive (FP) represents an incorrect detection, and false negative (FN) indicates that the appliance used was not identified.
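    These metrics follow directly from the event counts. As a sanity check, the helper below plugs in the microwave counts quoted for the proposed approach in Section 4.2 (TP = 32, FP = 176, FN = 7):

```python
def precision_recall_fmeasure(tp, fp, fn):
    """Compute PR, RE and FM from event counts, following Eqs. (2)-(4)."""
    pr = tp / (tp + fp) if tp + fp else 0.0
    re = tp / (tp + fn) if tp + fn else 0.0
    fm = 2 * pr * re / (pr + re) if pr + re else 0.0
    return pr, re, fm

# Microwave counts reported for the proposed approach in Section 4.2
pr, re, fm = precision_recall_fmeasure(tp=32, fp=176, fn=7)
# round(100 * fm, 1) -> 25.9, consistent with the microwave entry in Table 1
```

The high recall and low precision of this example illustrate how a flood of false positives drags FM down even when almost all true events are found.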

    4.2. Comparison with k-means and SVM

    First, we evaluate the improvement obtained by combining k-means and SVM into a combined approach. Tables 1 and 2 show results obtained using House 2 from the REDD dataset (see the conference version [18] for other results) for the trained k-means-based, SVM-based, and the combined algorithm. All three algorithms always use the same edge detection and feature extraction method explained in the previous section.

    Table 2. Comparison between the three methods using execution time (training and testing) for REDD data House 2.

    | Features | k-means train (s) | k-means test (s) | SVM train (s) | SVM test (s) | Combined train (s) | Combined test (s) |
    |---|---|---|---|---|---|---|
    | Area & duration | 0.27 | 0.27 | 1.5 | 0.91 | 0.38 | 0.69 |
    | Min & area | 0.24 | 0.29 | 1.19 | 0.72 | 0.15 | 0.53 |
    | Max, duration & area | 0.29 | 0.23 | 1.15 | 0.88 | 0.16 | 0.71 |
    | Max, area & max/mean | 0.21 | 0.28 | 1.11 | 0.8 | 0.19 | 0.61 |

    In House 2, we trained the algorithms with the following five known appliances: refrigerator, stove, microwave (MW), toaster, and dishwasher. All other household appliances were considered unknown and hence contribute to noise. The training set contains 7000 samples, or roughly one week of data, while testing was performed on four weeks of data. All experiments were run on an HP Pavilion 15 Notebook PC with 8 GB RAM, 1 TB hard drive and an AMD A10 quad-core processor at 2.2 GHz with Radeon HD dual graphics. Execution time in Table 2 shows overall time spent for training (for all appliances with one week's worth of data) and testing (four weeks).

    We tested different two-, three-, four-, and five-dimensional classifiers by extracting different features (event window area, time duration, minimum or maximum power value in the event window, or maximum-to-mean value ratio) and present results for the best two two-dimensional and two three-dimensional classifiers.
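    The per-event features listed above can be computed directly from the samples of an event window. The sketch below assumes a fixed sampling interval, so the window area is proportional to the per-sample sum; `event_features` is a hypothetical helper, not the paper's exact implementation:

```python
import numpy as np

def event_features(window):
    """Feature vector for one event window of active-power samples."""
    w = np.asarray(window, dtype=float)
    return {
        "area": float(w.sum()),                      # area under the window (per-sample sum)
        "duration": len(w),                          # duration in samples
        "min": float(w.min()),                       # minimum power in the window
        "max": float(w.max()),                       # maximum power in the window
        "max_over_mean": float(w.max() / w.mean()),  # maximum-to-mean value ratio
    }

# A short hypothetical event window [W]
feats = event_features([100, 1200, 1250, 1180, 90])
```

Any subset of these features can then be selected per appliance, as discussed next.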

    Marked with bold typeface in Table 1 are the better-performing features for each appliance. One can see that different features give the best performance for different appliances. For example, only the area-and-duration classification returns a non-zero disaggregation accuracy for the microwave. Since we classify one appliance at a time, it is possible to adapt the classification features from appliance to appliance. Thus, during training, the best features to use are identified per appliance and are then used during testing. In the following, we refer to this method as the proposed combined method.

    It can be seen from the tables that the SVM-based method outperforms the trained k-means, except for the refrigerator, but requires more time for both training and testing. For example, the best SVM-based NALM result for the toaster, obtained with the 3D classifier using maximum, duration and area, is 2.5 times more accurate than the best k-means-based result, but over 3 times slower in training and testing.

    The combined approach clearly outperforms both the k-means and SVM-based approaches for all appliances, through the appropriate selection of features for every appliance. While execution time for k-means is the smallest as expected, the combined approach executes faster than the SVM-based approach, confirming that combining both k-means and SVM approaches reduces the SVM execution time. Indeed, the proposed method reduces the operation time of SVM by reducing the number of samples fed to the SVM classifier, through clustering.
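    The reduction step behind this speed-up can be illustrated in plain NumPy: k-means compresses the training feature vectors into a small set of centroids, and only those representatives would then be handed to the SVM trainer. This is a minimal sketch of the idea with synthetic data, not the exact pipeline:

```python
import numpy as np

def kmeans_reduce(X, k, iters=20, seed=0):
    """Cluster feature vectors with Lloyd's k-means and return the k
    centroids, i.e. the reduced sample set to feed to the SVM."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign every sample to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned samples
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

# Two well-separated synthetic clusters of 2-D feature vectors
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)),
               rng.normal(8.0, 1.0, (100, 2))])
centroids = kmeans_reduce(X, k=2)  # 200 samples reduced to 2 representatives
```

Training the SVM on the k centroids instead of all raw samples is what shrinks the SVM's execution time in Table 2.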

    The performance of the three approaches can be explained by looking at Figure 6, which shows TP, FP, and FN values for the three approaches for all five appliances. The proposed, combined approach has the largest number of TPs and the lowest number of FNs. The k-means approach has generally low FP, but high FN and low TP. For example, for the microwave, k-means and SVM yield TP = 0, FP = 32 and 0, respectively, and FN = 39. On the other hand, the proposed approach detects instances of the appliance running at TP = 32, with few omissions (FN = 7), but FP = 176. Thus, the proposed approach detected almost all occurrences of the microwave, but was over-sensitive, detecting the microwave running when it was not, due to the microwave's very short duty cycle. This reduced the overall FM for House 2 as shown in Table 8.

    Figure 6. TP, FP, and FN for each appliance, after disaggregation by k-means, SVM and the proposed combined method for House 2 in the REDD dataset. MW = microwave, DishW = Dishwasher.

    4.3. Accuracy and Complexity

    First, we test the accuracy of the algorithms when the training time varies. Intuitively, increasing the training time should improve performance, since more samples are available to train the classifiers. However, a longer training time increases complexity and the burden on customers if time diaries are used. That is why it is desirable to have methods that do not require long training periods.

    Figure 7 shows the average results for three REDD houses, benchmarked against the following three methods: the trained k-means based approach, the SVM-based approach and the HMM-based method of [12]. All methods used the same period for training and testing. K-means, SVM, and the proposed method use the optimal features (see Table 2).

    Figure 7. FM for the four methods for three REDD houses and three different training set sizes.

    FM results are given for three different training sizes, namely 2000 (roughly 2 days), 5000 (roughly 5 days), and 7000 samples (roughly one week). Testing is done using four weeks' worth of data.

    It can be seen that the proposed combined method either outperforms the HMM-based and SVM-based approaches or provides similar accuracy. The k-means-based approach provides lower disaggregation accuracy. The relatively high FM obtained by k-means is somewhat misleading and can be explained by low FP values, despite low TP and high FN values (see Figure 6). On the other hand, the high FP value for the microwave reduced the overall FM of the proposed method.

    The methods are not sensitive to the variation of the training size. The HMM-based method requires one appliance event running alone for a full duty-cycle period to build a model; if other appliances are on, model generation will not succeed. The drop in the performance of the HMM method as the training set shrinks is due to appliances not being modeled properly and hence not being disaggregated. Note that the SVM-based and proposed methods can perform slightly better on smaller training sets due to better quality of the training data. On average over all three houses, the proposed method outperforms all other approaches for training sizes of 2000 and 7000, and is the second-best method for the training size of 5000 after k-means, whose FM is not a true reflection of its performance (see Figure 6).

    Table 8, in the appendix, shows the execution time, which includes time spent on training and testing. The training execution time of the proposed method increases slightly with the training set size but remains significantly lower than that of the HMM-based approach for all houses and all training sizes. Indeed, the proposed method needs roughly 18 and 3.5 times less time for testing than HMM and SVM, respectively, and 13 and 2.5 times less time for training than HMM and SVM, respectively. On average over all three houses, the proposed method is 2.75 times and 2 times faster for training and testing, respectively, than the SVM method, and over 90 and 50 times faster, for training and testing, respectively, than the HMM-based method.

    Table 3 shows the accuracy of the approaches when labelling errors occur, reporting FM versus the error rate for three different appliances. The error rate is the percentage of wrongly labelled data during training. Note that the proposed method is largely insensitive to labelling errors, except for the toaster, whose operation is very short and easily confused with other appliances.
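    The labelling-error experiment can be emulated by relabelling a random fraction of the training samples before the classifiers are trained. `corrupt_labels` below is a hypothetical helper illustrating this under the stated error rates:

```python
import numpy as np

def corrupt_labels(labels, error_rate, n_classes, seed=0):
    """Relabel a fraction of training samples at random, emulating
    time-diary labelling mistakes."""
    rng = np.random.default_rng(seed)
    out = np.array(labels)
    n_bad = int(round(error_rate * len(out)))
    idx = rng.choice(len(out), size=n_bad, replace=False)
    # shift each chosen label by 1..n_classes-1, guaranteeing a wrong class
    out[idx] = (out[idx] + rng.integers(1, n_classes, size=n_bad)) % n_classes
    return out

# 100 samples of appliance class 0, corrupted at a 15% error rate
y_noisy = corrupt_labels(np.zeros(100, dtype=int), error_rate=0.15, n_classes=5)
```

Feeding such corrupted labels into training reproduces the conditions of Table 3.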

    Table 3. FM results for House 2 obtained by introducing errors in the labelled dataset used for training.

    | Method | Error rate | Refrigerator | Toaster | Dishwasher |
    |---|---|---|---|---|
    | k-means | 0% | 93.6 | 46.7 | 0 |
    | | 5% | 91.9 | 57.4 | 0 |
    | | 15% | 91.8 | 2.4 | 0 |
    | | 20% | 91.9 | 2.4 | 0 |
    | SVM | 0% | 92.5 | 69.13 | 26 |
    | | 5% | 92.2 | 43.6 | 44.4 |
    | | 15% | 92 | 45.3 | 0 |
    | | 20% | 91.7 | 0 | 0 |
    | Proposed | 0% | 94.3 | 79.1 | 29.2 |
    | | 5% | 91.8 | 11.4 | 17.39 |
    | | 15% | 93.3 | 11.9 | 42.8 |
    | | 20% | 91.9 | 8.9 | 45.3 |
    | HMM | 0% | 87.69 | 64.9 | 12.32 |
    | | 5% | 83.42 | 64.9 | 12.32 |
    | | 15% | 83.42 | 49.97 | 12.32 |
    | | 20% | 83.55 | 46.97 | 12.32 |

    4.4. Modelling Validation

    In this section, we validate the proposed Gaussian mixture mathematical model for the pdf of active power for domestic appliances as described in Section 3.

    We test the suitability of three models for representing the power signature: Gaussian, Laplace and Log-normal. Figure 8 presents the RMSE for each of the three distribution models. The values are obtained by averaging in time and across different houses. It can be seen that the Gaussian mixture model is the best fit for all appliances, especially for high loads such as the kettle and toaster. As expected, the Gaussian mixture model is also the best for the aggregate readings, since the aggregate is a sum of nearly independent processes. This validates our approach of using the Gaussian mixture model (red line in Figure 2) to build the distribution model of all appliances. We also note that the RMSE is negligibly small except for low consumers such as the TV and stereo player. These findings are similar to those reported in [11].
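    The comparison behind Figure 8 can be reproduced in miniature: fit each candidate distribution to the power readings and compute the RMSE between the model pdf and the empirical (density-normalised) histogram. The data and parameters below are synthetic and purely illustrative:

```python
import numpy as np

def pdf_rmse(samples, model_pdf, bins=50):
    """RMSE between the empirical pdf (density-normalised histogram)
    of the samples and a candidate model pdf."""
    hist, edges = np.histogram(samples, bins=bins, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return float(np.sqrt(np.mean((hist - model_pdf(centres)) ** 2)))

rng = np.random.default_rng(0)
x = rng.normal(100.0, 5.0, 20000)  # synthetic single-state appliance load [W]

# Gaussian fit via moments
mu, var = x.mean(), x.var()
gauss = lambda t: np.exp(-(t - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Laplace fit: location = median, scale = mean absolute deviation
med = np.median(x)
b = np.mean(np.abs(x - med))
laplace = lambda t: np.exp(-np.abs(t - med) / b) / (2 * b)

err_gauss, err_laplace = pdf_rmse(x, gauss), pdf_rmse(x, laplace)
```

For Gaussian-distributed readings the Gaussian model gives the lower RMSE, mirroring the ranking reported for most appliances in Figure 8.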

    Figure 8. RMSE of Gaussian, Laplace and Log-normal models for 13 appliances from REFIT and GREEND datasets, and the REFIT aggregate meter reading, all shown on a log-scale.

    Table 4 shows that, regardless of the TV model, each individual TV appliance is modelled well, resulting in a relatively small RMSE. However, when a general model is built encompassing all the TVs in the dataset, whose pdfs vary widely, the RMSE is relatively high. The implication is that drawing on the general TV model to train the unsupervised method would not yield high accuracy; a supervised approach trained on that household's own data would be more effective in the case of the TV.

    Table 4. RMSE, mean [W], variance, 1st-order and 2nd-order correlation coefficients for different TVs in Houses 4 and 5 from the GREEND dataset. GM denotes the general model obtained by considering data from all GREEND houses. Appliances modelled with several mixture components occupy several rows.

    | Appliance | Mean value | Variance | RMSE | 1st Cor. | 2nd Cor. |
    |---|---|---|---|---|---|
    | House 4 kitchen TV | 42.1 | 42.88 | 5.68E-2 | 0.2698 | 0.2455 |
    | House 4 living room TV | 16.68 | 48.41 | 4.9E-3 | 0.0962 | 0.0184 |
    | House 5 LCD TV | 35.25 | 2.46 | 3.77E-4 | 0.0199 | 0.0768 |
    | | 56.24 | 1.07 | | | |
    | House 5 Plasma TV | 144.47 | 13.94 | 8.3E-3 | 0.1442 | 0.0746 |
    | | 201.45 | 34.67 | | | |
    | GM TV | 28.5 | 4.117 | 0.0169 | | |
    | | 55.98 | 6.047 | | | |
    | | 190.8 | 72.37 | | | |

    Table 5 confirms that the RMSE obtained using the general model, which averages appliance data from all the houses, remains small across all GREEND houses.

    Table 5. RMSE, mean [W], variance, 1st-order and 2nd-order correlation coefficients for different washing machines (WM) and dishwashers (DW) of different GREEND houses. GM denotes the general model obtained by considering data from all GREEND houses. Appliances modelled with several mixture components occupy several rows.

    | Appliance | Mean value | Variance | RMSE | 1st Cor. | 2nd Cor. |
    |---|---|---|---|---|---|
    | H0 WM | 80.1 | 93.8 | 4.2E-3 | 0.0751 | -0.1059 |
    | | 1955.6 | 73.07 | | | |
    | H1 WM | 40.47 | 3.97 | 3.8E-3 | 0.022 | -0.1126 |
    | | 1991.7 | 90.92 | | | |
    | H3 WM | 94.71 | 15.59 | 1.91E-2 | 0.0609 | -0.0838 |
    | | 1957.8 | 69.51 | | | |
    | H4 WM | 54.99 | 3.71 | 3.3E-3 | 0.0298 | -0.2105 |
    | | 597.4 | 13.77 | | | |
    | | 1946.1 | 224 | | | |
    | GM WM | 139.21 | 19.93 | 3.2E-3 | | |
    | | 2009.3 | 90.45 | | | |
    | H0 DW | 77 | 14.05 | 6.17E-5 | 0.1925 | 0.1226 |
    | | 1953.3 | 77.8 | | | |
    | H1 DW | 13.7 | 28.9 | 4.5E-5 | -0.042 | -0.1111 |
    | | 1796 | 29.54 | | | |
    | H2 DW | 18.1 | 33.19 | 9.97E-5 | -0.066 | -0.1186 |
    | | 2071.3 | 38.79 | | | |
    | GM DW | 48 | 42.56 | 6.7E-3 | | |
    | | 2480 | 368.2 | | | |

    4.5. Approach based on house agnostic training data

    Tables 6 and 7 show, for two appliances, the relative performance of the following methods: Approach 1, regular training using house-specific labelled data from the house under test; Approach 2, disaggregation using house-agnostic training data, which derives features from the Gaussian distribution models and draws samples from them to train the k-means and SVM; and, finally, the mean-variance approach, called General Model (GM), which uses the mean and variance features taken directly from our Gaussian general model database. As benchmarks, we used the HMM-based and SVM-based methods. The models were built using at least three houses from the GREEND dataset and tested on two different houses. We selected the washing machine and dishwasher since these are the only two appliances present in at least five houses, and both are known to be significant loads.

    Table 6. Results of washing machine disaggregation in GREEND House 1 (H1) and House 2 (H2), using three different methods. Houses 3, 4 and 5 are used for training.

    | H | % | House-specific SVM | House-specific Combined | HMM | House-agnostic SVM | House-agnostic Combined | GM SVM | GM Combined |
    |---|---|---|---|---|---|---|---|---|
    | H1 | PR | 63.88 | 100 | 0.5 | 78.57 | 72.54 | 0 | 0 |
    | | RE | 100 | 65.21 | 95.5 | 71.73 | 80.43 | 0 | 0 |
    | | FM | 77.96 | 78.94 | 1.04 | 75.00 | 76.28 | 0 | 0 |
    | H2 | PR | 83.33 | 87.50 | 2.14 | 83.33 | 88.23 | 6.26 | 9.09 |
    | | RE | 100 | 93.33 | 96.4 | 100 | 100 | 3.33 | 3.33 |
    | | FM | 90.90 | 90.32 | 4.19 | 90.90 | 93.75 | 4.34 | 4.87 |

    Tables 6 and 7 show that the proposed approach using house-agnostic training data achieves performance competitive with disaggregation using house-specific training data. The GM method could not detect washing machine events with only mean and variance as features, whereas the house-agnostic approach used 2D classifiers with features such as maximum load value, area and duration. The reason is that the mean and variance of dishwashers in our dataset are fairly consistent across houses, which is not the case for washing machines. Hence, we conclude that it is preferable to train the disaggregation methods by drawing samples from the database, rather than using the database features directly for classification.

    Table 7. Results of dishwasher disaggregation in GREEND House 2 (H2) and House 3 (H3), using three different methods. Houses 1, 4 and 5 are used for training.

    | H | % | House-specific SVM | House-specific Combined | HMM | House-agnostic SVM | House-agnostic Combined | GM SVM | GM Combined |
    |---|---|---|---|---|---|---|---|---|
    | H2 | PR | 97.01 | 97.01 | 24.68 | 92.06 | 88.40 | 92.29 | 94.20 |
    | | RE | 100 | 100 | 3.19 | 89.23 | 93.84 | 100 | 100 |
    | | FM | 98.48 | 98.48 | 5.66 | 90.62 | 91.04 | 96.29 | 97.01 |
    | H3 | PR | 87.50 | 87.50 | 2.51 | 87.50 | 87.50 | 83.33 | 83.33 |
    | | RE | 100 | 100 | 82 | 100 | 100 | 71.42 | 71.42 |
    | | FM | 93.33 | 93.33 | 4.88 | 93.33 | 93.33 | 76.92 | 76.92 |

    5. Conclusion

    Designing accurate NALM algorithms for low-sampling-rate data is challenging. In this paper we proposed two low-complexity solutions based on combining k-means and SVM. Appliances from a range of datasets are fitted with appliance-specific Gaussian mixture models. The approach using house-agnostic training data uses a database of signatures to draw samples from the Gaussian mixture models for training. The approach using house-specific training data is accurate even when the training period is short and training errors are present, and is competitive with state-of-the-art approaches. Training using house-agnostic data yields performance close to the case where training is done with house-specific data for large loads such as the washing machine and dishwasher.

    Our study provides the opportunity to trade off accuracy against complexity or execution time. Generally, we observed that the time savings far outweigh the small loss in accuracy. The next steps are further development of the signature database through crowdsourcing and testing the proposed methods on additional datasets.

    Acknowledgments

    This work is supported in part by the UK Engineering and Physical Sciences Research Council (EPSRC) projects REFIT EP/K002368, under the Transforming Energy Demand in Buildings through Digital Innovation (BuildTEDDI) funding programme.

    Conflict of interest

    The authors declare no conflict of interest. All authors contributed to paper writing, data analysis and algorithm design. Hana Altrabalsi implemented the proposed algorithms, ran simulations and generated the results.

    Appendix

    Table 8. F-measure [%] and execution time for training and testing [s] for the three REDD houses (H) using three different training sizes (t. size) given in number of samples.

    | H | t. size | HMM train | HMM test | HMM FM | k-means train | k-means test | k-means FM | SVM train | SVM test | SVM FM | Proposed train | Proposed test | Proposed FM |
    |---|---|---|---|---|---|---|---|---|---|---|---|---|---|
    | 1 | 2000 | 15.18 | 21.29 | 73.53 | 0.18 | 0.18 | 71.9 | 0.37 | 0.57 | 73.8 | 0.19 | 0.32 | 73.2 |
    | | 5000 | 23.79 | 19.27 | 75.58 | 0.25 | 0.25 | 71.2 | 0.76 | 0.65 | 74.2 | 0.27 | 0.25 | 71.7 |
    | | 7000 | 28.32 | 22.90 | 77.06 | 0.2 | 0.27 | 3 | 0.83 | 0.78 | 80.3 | 0.7 | 0.26 | 77.52 |
    | 2 | 2000 | 18.38 | 18.56 | 81.03 | 0.09 | 0.25 | 89.69 | 0.43 | 0.72 | 85.9 | 0.1 | 0.32 | 84.7 |
    | | 5000 | 21.13 | 18.03 | 82.38 | 0.06 | 0.2 | 84.6 | 0.84 | 0.72 | 87.1 | 0.3 | 0.34 | 84.4 |
    | | 7000 | 22.77 | 18.09 | 82.38 | 0.23 | 0.23 | 84.7 | 1.15 | 0.79 | 85.5 | 0.25 | 0.55 | 82.17 |
    | 6 | 2000 | 20.52 | 10.99 | 69.92 | 0.07 | 0.18 | 86.5 | 0.33 | 0.35 | 83.1 | 0.12 | 0.2 | 96.58 |
    | | 5000 | 22.46 | 13.91 | 72.76 | 0.09 | 0.14 | 96.58 | 0.56 | 0.45 | 81.5 | 0.15 | 0.26 | 88 |
    | | 7000 | 30.22 | 16.19 | 72.76 | 0.24 | 0.24 | 97 | 0.69 | 0.53 | 80.6 | 0.11 | 0.28 | 95.58 |


  • © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)