
A lightweight dual-path cascaded network for vessel segmentation in fundus image


  • Automatic and fast segmentation of retinal vessels in fundus images is a prerequisite for the clinical diagnosis of ophthalmic diseases; however, high model complexity and low segmentation accuracy still limit its application. This paper proposes a lightweight dual-path cascaded network (LDPC-Net) for automatic and fast vessel segmentation. We designed a dual-path cascaded network via two U-shaped structures. Firstly, we employed a structured dropout (SD) convolution module to alleviate over-fitting in both the encoder and decoder. Secondly, we introduced the depthwise separable convolution (DSC) technique to reduce the model's parameter count. Thirdly, a residual atrous spatial pyramid pooling (ResASPP) module was constructed in the connection layer to aggregate multi-scale information effectively. Finally, we performed comparative experiments on three public datasets. Experimental results show that the proposed method achieved superior performance in accuracy, connectivity, and parameter count, proving that it can be a promising lightweight assisted tool for ophthalmic diseases.

    Citation: Yanxia Sun, Xiang Li, Yuechang Liu, Zhongzheng Yuan, Jinke Wang, Changfa Shi. A lightweight dual-path cascaded network for vessel segmentation in fundus image[J]. Mathematical Biosciences and Engineering, 2023, 20(6): 10790-10814. doi: 10.3934/mbe.2023479




    Smart homes are designed to maintain residents' personal independence and enhance their sense of well-being. Recognizing daily activities, e.g., sleeping and cooking, is one of the basic functions of a smart home. To recognize daily activities, smart homes are equipped with ambient sensors [1]. These sensors are activated continuously as daily activities are carried out [2]. Recognizing daily activities from the activated sensor events is a challenging subject, and daily activity recognition has therefore been discussed widely.

    Approaches for daily activity recognition can be divided into two categories: data-driven approaches and knowledge-driven approaches. Data-driven approaches include supervised and unsupervised learning approaches. Supervised learning methods require a large amount of labeled data [3], and labeling data is a time-consuming and error-prone task [4]. The goal of research has therefore gradually shifted to reducing data labeling and maximizing the use of existing knowledge to solve similar problems [5]. To address these problems, transferring trained models of daily activity recognition from one smart home to another has become a promising line of study [6].

    Sensors are important features for daily activity recognition, whether in wearable sensor environments or smart home environments [7]. The sensor stream is closely related to activity features [8]. On the one hand, recent studies on activity recognition with wearable sensors have emphasized sensor data. Tang et al. increased the expressiveness of sensor features for activity recognition by using the idea of hierarchical-split (HS) [9]. Meanwhile, Huang et al. improved the normalization of mixed sensor features by proposing a method called Channel Equalization, which applies whitening or de-correlation operations to reactivate the channels suppressed by normalization [10]. In addition, Cheng et al. used conditionally parameterized convolution for real-time HAR on mobile and wearable devices to improve the efficiency of computing sensor features [11]. On the other hand, sensor features are also important for activity recognition in smart homes. Different smart homes have different house configurations and sensor equipment [12], and the sensor equipment in the same house can change over time. Therefore, mapping sensors between heterogeneous smart homes is one of the primary tasks. After this, transfer learning methods can be used to recognize and transfer the features of daily activities between different residents [13]. However, most existing approaches use only sensor profile information or the ontological relationship between sensor location and furniture attachment for sensor mapping. Such rough mapping seriously restricts the performance of daily activity recognition. Intuitively, a superior sensor mapping brings more promising daily activity recognition results in heterogeneous smart homes. To achieve a superior sensor mapping, this paper presents an optimal-search-based sensor mapping strategy.

    The study of sensor mapping methods is often neglected in existing approaches: most of them use sensor profile information, or the ontological relationship between sensor location and furniture attachment, to calculate similarity and thereby obtain only a rough mapping.

    The main contributions of this paper can be summarized as follows.

    1) To find the source most similar to the target among multiple candidate smart homes, we propose an algorithm for computing the similarity between smart homes.

    2) We propose a sensor mapping algorithm to achieve superior sensor mapping.

    3) We employ a domain-adversarial neural network (DANN) to transfer the trained daily activity recognition model.

    4) We evaluate the proposed approach on public datasets.

    The rest of this paper is organized as follows: Section 2 summarizes knowledge-driven and data-driven activity recognition methods for heterogeneous environments. Section 3 describes the concrete implementation of the method. Section 4 presents the experimental setup, evaluation methods and results. Section 5 concludes the paper.

    There are two categories of approaches for daily activity recognition in heterogeneous smart homes. The first is the knowledge-driven approach, in which reasoning is performed to recognize daily activities based on a shared knowledge model (e.g., an ontology). The other is the data-driven approach, which adapts a model learned from sensor events streams collected in source smart homes to recognize daily activities in target smart homes. Transfer learning is widely used for daily activity model evolution.

    Ye et al. proposed a knowledge-driven ensemble learning technique called SLearn [14], which migrates knowledge between multiple datasets based on semantic mapping. They then improved on it with a method named XLearn [15]. Firstly, ontologies are used to map the sensor space and the daily activity space. Then, some daily activities are identified by clustering, and the remaining ones are recognized by ensemble learning. Ye et al. also proposed a knowledge model to represent shared daily activities in smart homes [16]. The knowledge model enables computationally efficient feature space remapping and uncertainty inference, which leads to effective classifier fusion and further improves activity recognition accuracy. Marjan et al. proposed a framework called E-care@home [17]. Semantic interpretation of events and context awareness are achieved by integrating measurement data collected from heterogeneous smart homes into an ontology, and stream inference is performed with an incremental answer set solver to recognize daily activities. Wemlinger and Holder proposed a method called SCEAR [18]. Firstly, an initial common ontology of the semantic feature space is established. Then, raw sensor data are transformed into common conceptual features to revise the initial ontology, and reasoning over the smart home ontology is conducted to recognize daily activities.

    A feature-based knowledge transfer framework was proposed by Chiang et al. [19]. The framework uses transfer learning to relax the requirement that training and testing datasets be highly similar in distribution; it outperforms non-transfer learning models by 8% in accuracy and greatly reduces the labeling effort in the target domain. In addition, Chiang and Hsu used sensor profiles to encode activities and further measured the feature similarity between datasets [20]. A graph matching algorithm is then applied to automatically compute an appropriate feature mapping based on the similarity measures. Zheng et al. used web search to learn similarity functions that evaluate the similarity between the daily activities of the source smart home and the target smart home [21]. With the learned similarity metric, data collected from one smart home are interpreted as data in another smart home with a certain confidence level. Feuz and Cook proposed a novel heterogeneous transfer learning technique named Feature Space Remapping (FSR) [22]. The features of the source and target domains are linked by constructing meta-features, and ensemble learning is used for activity classification. They also proposed a heterogeneous transfer learning method for activity recognition based on heuristic search techniques [23]. Azkune et al. proposed two data-driven daily activity recognition systems, SEMINAR-u and SEMINAR-s, to address the two cases of labeled and unlabeled daily activities in the source domain, respectively [24]. Word embeddings were used to establish a common semantic feature domain for mapping sensors and daily activities across domains. Hu et al. assumed that the sensor feature space is the same among smart homes [25]. Initially, keywords related to daily activities are retrieved using a web search engine. Then, used as weights of the daily activity features in the source domain, the similarity between daily activities is computed with the maximum mean discrepancy (MMD). Finally, pseudo-training datasets of the target domain are generated, completing the feature mapping from the source domain to the target domain. Hu and Yang proposed a transfer learning framework [26] that transforms sensor readings into the same feature space by KL-divergence and Dynamic Time Warping. Daily activity labels of the source domain are used to label the daily activities of the target domain via the sensor distribution; the Google similarity distance metric is then applied to find the target domain labels most similar to the source domain labels. Myagmar et al. proposed a novel heterogeneous transfer learning algorithm called Heterogeneous Daily Living Activity Learning (HDLAL) [27]. HDLAL projects data from the two domains into a derived space based on the maximum mean discrepancy, derives a domain-invariant feature representation from the cross-domain data distribution, trains a multi-label classifier in the new feature space with an ensemble classification algorithm, and finally uses the projected data to predict the labels of the target domain. For unsupervised domain adaptation, Sanabria et al. integrated bidirectional generative adversarial networks (Bi-GAN) and kernel mean matching (KMM) to achieve feature transfer between two heterogeneous domains for daily activity recognition [28].

    In this section, we will illustrate how to select a source domain that is similar to the target one, and how to find the optimal mapping of sensors in different smart home environments.

    To demonstrate the process of selecting a similar source domain, the necessary definitions are given below.

    Definition 1. Let SH = {sh1, sh2, …, shn} be a set of smart homes. For any sh ∈ SH, let sh.FA be the set of function areas of sh and let sh.SC be the set of sensor categories of sh. SH.FS = (sh1.FA ∪ sh2.FA ∪ … ∪ shn.FA) × (sh1.SC ∪ sh2.SC ∪ … ∪ shn.SC) is said to be the feature space of SH.
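    To make Definition 1 concrete, here is a minimal Python sketch (the dict layout and the toy home data are illustrative assumptions, not the paper's code) that builds SH.FS as the cross product of the unions of function areas and sensor categories:

    from itertools import product

    # Toy smart homes (illustrative data): each home lists its
    # function areas (FA) and sensor categories (SC).
    homes = [
        {"FA": {"kitchen", "bedroom"}, "SC": {"M", "LS"}},
        {"FA": {"kitchen", "toilet"},  "SC": {"M", "D"}},
    ]

    def feature_space(homes):
        """SH.FS = (union of all FA) x (union of all SC)."""
        all_fa = set().union(*(sh["FA"] for sh in homes))
        all_sc = set().union(*(sh["SC"] for sh in homes))
        return set(product(all_fa, all_sc))

    FS = feature_space(homes)
    # e.g., ("bedroom", "M"), ("kitchen", "D"), ("toilet", "LS"), ...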

    Definition 2. Let SS = {ss1, ss2, …, ssj} and TS = {ts1, ts2, …, tsm} be the sets of sensors of a source smart home and a target smart home, respectively. ssk denotes the k-th sensor in the source domain, 1 ≤ k ≤ j, and tsh denotes the h-th sensor in the target domain, 1 ≤ h ≤ m.

    Since sensors are activated continuously while daily activities are carried out, they are regarded as important spatial features of daily activity. The location, category and number of sensors vary from one smart home to another. For instance, suppose there are 20 pressure sensors and 10 light sensors in a smart home sh1, whereas there are 10 pressure sensors and 8 light sensors in a smart home sh2. In sh1, some pressure sensors are installed in the bedroom and others in the living room, while in sh2 all pressure sensors are installed in the shower room. When the same daily activity is carried out in the two smart homes sh1 and sh2, two corresponding sensor streams ss1 and ss2 are generated. Intuitively, the more similar the locations, categories and numbers of sensors of the two smart homes are, the more similar the activated sensor events streams are. Thus, ss1 can be used approximately to recognize the daily activity carried out in sh2, and vice versa. From the perspective of daily activity recognition, two smart homes are similar if the locations, categories and numbers of their sensors are similar to each other. Hence, finding the most similar one among multiple source smart homes is an important premise for cross-environment daily activity recognition.

    Algorithm 1 is used to find the source smart home that is most similar to the target one. Given a set of source smart homes SH = {sh1, sh2, …, shn} and a target smart home sh*, the sensors of each smart home are divided into classes according to the feature space of SH ∪ {sh*}. For each smart home and each class, the number q of sensors belonging to that class is counted, and sh.L denotes the feature vector consisting of the numbers of sensors under all classes for a given smart home environment. The similarity between each sh ∈ SH and sh* is then computed, and the source smart home sh# most similar to sh* is selected. When several classifiers are used, a source domain's similarity is the number of classifiers that select this source domain; in Algorithm 1, th holds the maximum similarity found so far. Figure 1 shows a sample.

    Algorithm 1.
    Input: SH = {sh1, sh2, …, shn}, set of source smart homes
         sh*, target smart home
    Output: sh#SH, most similar to sh*
    1.  sh# ← ∅
    2.  for each sh in SH ∪ {sh*}
    3.    sh.L ← ∅
    4.    for each f in (SH ∪ {sh*}).FS
    5.      q ← getQuantity (sh, f)
    6.      sh.Lsh.L ∪ {(f, q)}
    7.    end for
    8.  end for
    9.  th ← 0
    10.  for each sh in SH
    11.    if similarity (sh.L, sh*.L) > th then
    12.      th ← similarity (sh.L, sh*.L)
    13.      sh#sh
    14.    end if
    15.  end for
    16.  return sh#
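    A runnable Python sketch of Algorithm 1 follows; the data layout (each home as a dict with a "sensors" list) and the cosine similarity function are illustrative assumptions, not the paper's implementation:

    import numpy as np

    def sensor_count(sh, f):
        # getQuantity(sh, f): number of sensors of sh in feature class f,
        # where f is a (function_area, category) pair.
        return sum(1 for s in sh["sensors"] if (s["area"], s["cat"]) == f)

    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    def select_source_home(source_homes, target_home, feature_space, similarity=cosine):
        fs = sorted(feature_space)           # fix an order on the feature space
        vec = lambda sh: np.array([sensor_count(sh, f) for f in fs], dtype=float)
        target_vec = vec(target_home)
        best, th = None, float("-inf")
        for sh in source_homes:              # keep the running maximum, as th does
            sim = similarity(vec(sh), target_vec)
            if sim > th:
                th, best = sim, sh
        return best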

    Figure 1.  A sample of source smart home selection.

    Firstly, the sensors of the selected source smart home and the target smart home are merged. Then, the merged sensors are divided into parts according to the sensor category (e.g., motion) and the function area (e.g., bedroom) in which they are located. The values of "category" and "function area" are used as the label of each part. Figure 2 shows a sample of the sensors division, with 7 sensors in the selected source smart home and 7 in the target smart home. c_fa1, c_fa2 and c_fa3 are three different values of "category" and "function area". All sensors are divided into three parts: ({ss1, ss2}, {ts1, ts2}), ({ss3, ss4}, {ts3, ts4, ts5}) and ({ss5, ss6, ss7}, {ts6, ts7}). A minimal sketch of this grouping step follows Figure 2.

    Figure 2.  A sample of sensors division.
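    The grouping step can be sketched as follows (sensor records as dicts are an assumed layout); each part is keyed by its c_fa label:

    from collections import defaultdict

    def divide_sensors(source_sensors, target_sensors):
        # Group source and target sensors by the shared label (category, area).
        parts = defaultdict(lambda: ([], []))
        for s in source_sensors:
            parts[(s["cat"], s["area"])][0].append(s["name"])
        for t in target_sensors:
            parts[(t["cat"], t["area"])][1].append(t["name"])
        # label c_fa -> (source part SS#, target part TS#)
        return dict(parts)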

    To show the process of sensors mapping, the necessary terms are defined as follows.

    Definition 3. Let SP = {(SS#, TS#) | SS# ⊆ SS, TS# ⊆ TS} be a sensors division for SS and TS. For a part p ∈ SP, CMp = {(ss, ts) | ss ∈ SS# ∧ ts ∈ TS#} is said to be a candidate mapping of p if for all (ss1, ts), (ss2, ts) ∈ CMp, ss1 = ss2 holds (i.e., each target sensor is mapped from exactly one source sensor).

    Definition 4. se = (d, t, sn, sv, ar) is called a sensor event, where sn is the sensor name, d is the date when sn was activated, t is the time when sn was activated, sv is the value of sn when sn was activated, and ar is the daily activity occurring when sn was activated.

    Definition 5. Given n sensor events se1, se2, …, sen, < se1, se2, …, sen > is said to be a sensor events stream if, for 1 ≤ i ≤ n−1, sei+1 always follows sei in chronological order.

    Table 1 shows a fragment of a sensor events stream activated by the daily activity "Relax".

    Table 1.  A fragment of sensor events stream.
    d t sn sv ar
    2012/8/25 15:01 M008 OFF Relax
    2012/8/25 15:01 M009 ON
    2012/8/25 15:01 M008 OFF
    2012/8/25 15:02 M009 ON
    2012/8/25 15:03 LS008 25
    2012/8/25 15:05 LS004 31
    2012/8/25 15:06 LS003 3
    2012/8/25 15:07 LS016 8
    2012/8/25 15:07 LS004 8
    2012/8/25 15:09 LS008 24
    2012/8/25 15:09 M008 OFF

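    Definition 4 and Table 1 suggest a straightforward record type; the sketch below (the file name and parsing details are assumptions) parses whitespace-separated CASAS-style lines into sensor events:

    from typing import NamedTuple, Optional

    class SensorEvent(NamedTuple):
        d: str             # date of activation
        t: str             # time of activation
        sn: str            # sensor name
        sv: str            # sensor value, e.g., ON/OFF or a number
        ar: Optional[str]  # daily activity label, if annotated

    def parse_event(line: str) -> SensorEvent:
        # e.g., "2012/8/25 15:01 M008 OFF Relax" (the label column may be absent)
        f = line.split()
        return SensorEvent(f[0], f[1], f[2], f[3], f[4] if len(f) > 4 else None)

    # stream = [parse_event(l) for l in open("hh109.txt")]  # hypothetical file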

    For a part p ∈ SP, there is usually more than one candidate mapping. Algorithm 2 is used to find the optimal sensors mapping among all candidate mappings for each part of SP. To begin with, a handful of sensor events TD*, extracted from the stream TD collected in the target smart home, are used as samples to evaluate the performance of a sensor mapping. For each candidate mapping CMp of p, the sensors of all sensor events of TD* are replaced with the sensors used in the source smart home according to the mapping relations of CMp. Table 2 shows an instance of sensors mapping under the assumption CMp = {(M008, M003), (M009, M004), (LS004, LS002), (LS008, LS005), (LS008, LS006)}. A handful of sensor events, activated by the daily activity "Sleep", are collected from the target smart home; column sn* shows the sensor names generated after the names in column sn are replaced. Next, the sensor events stream SD collected from the selected source smart home is used as the training set, and TD* is used as the test set. A classifier is employed to evaluate the performance of the sensors mapping in terms of accuracy, and the candidate sensors mapping with the best metric is selected as the final sensors mapping.

    Algorithm 2.
    Input: SP, sensors division for SS and TS
          SD, sensors event stream collected from selected source smart home
          TD, sensors event stream collected from target smart home
    Output: SM, optimal sensors mapping
    1.  SM ← ∅; TD* ← handful (TD) // Extract a handful of sensor events stream from TD.
    2.  for each p in SP
    3.    optq ← 0
    4.    optCM ← ∅
    5.    for each CMp in p
    6.      for each (ss, ts) in CMp
    7.        replace (ss, ts, TD*) // Replace ts of TD* with ss.
    8.      end for
    9.      qevaluate (SD, TD*) // Use some classifier to solve performance of CMp on SD and TD*.
    10.      if (q > optq) then
    11.          optqq
    12.          optCMCMp
    13.      end if
    14.    end for
    15.    SMSM ∪ {optCM}
    16.  end for
    17.  return SM
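    In Python, Algorithm 2 reduces to an exhaustive scoring loop; the sketch below assumes the SensorEvent type from earlier and an evaluate(SD, TD) helper that trains a classifier on SD and returns its accuracy on TD (both assumptions, not the authors' code):

    def optimal_sensor_mapping(parts, SD, TD_sample, evaluate):
        # parts: for each part p, an iterable of candidate mappings CMp,
        # each a list of (source_sensor, target_sensor) pairs.
        SM = []
        for candidates in parts:
            opt_q, opt_cm = 0.0, None
            for cm in candidates:
                rename = {ts: ss for ss, ts in cm}
                # Rewrite target events with the source home's sensor names.
                remapped = [ev._replace(sn=rename.get(ev.sn, ev.sn))
                            for ev in TD_sample]
                q = evaluate(SD, remapped)      # accuracy achieved under CMp
                if q > opt_q:
                    opt_q, opt_cm = q, cm
            SM.append(opt_cm)
        return SM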

    Table 2.  A handful of sensor events stream collected from target smart home.
    d t sn sn* sv ar
    2012/9/5 13:07 M004 M009 OFF Sleep
    2012/9/5 13:08 M003 M008 ON
    2012/9/5 13:08 M004 M009 ON
    2012/9/5 13:12 M003 M008 OFF Sleep
    2012/9/5 13:13 LS006 LS008 5
    2012/9/5 13:15 LS002 LS004 42
    2012/9/5 13:15 LS005 LS008 62


    The Center for Advanced Studies in Adaptive Systems (CASAS) at Washington State University is well known for its research on daily activity recognition in smart homes and has published multiple datasets collected from different smart homes [29]. In this paper, four smart homes, HH101, HH105, HH109 and HH110, and the corresponding datasets are employed to evaluate the proposed approach. The sensor layouts of these smart homes are summarized in Table 3. Every smart home is divided into seven parts: Kitchen, Dining, Parlor, Porch, Toilet, Bedroom and Porch_toilet. The installed sensors fall into six categories: temperature sensor (T), infrared motion sensor (M), wide-area infrared motion sensor (MA), light sensor (LS), light switch sensor (L) and door switch sensor (D). LS and T output real values when triggered, while M, MA, D and L output Boolean values. In Table 3, each entry denotes the number of sensors of a given category installed in a given part of each smart home. For the underlined entry 4, 3, 2, 2, these numbers mean that there are 4, 3, 2 and 2 sensors of category M installed in the Kitchen of HH101, HH105, HH109 and HH110, respectively. Ten categories of daily activity, "Bed_Toilet_Transition", "Cook", "Dress", "Eat", "Med", "Personal_Hygiene", "Relax", "Sleep", "Sleep_Out_Of_Bed" and "Toilet", are used to evaluate the proposed approach. Please note that "Cook_Lunch", "Cook_Breakfast" and "Cook_Dinner" are merged into one daily activity, "Cook"; "Eat", "Eat_Lunch", "Eat_Breakfast" and "Eat_Dinner" are merged into "Eat"; and "Take_Medicine", "Morning_Meds" and "Evening_Meds" are merged into "Med".

    Table 3.  Sensor layouts of selected smart homes.
    Kitchen Dining Parlor Porch Toilet Bedroom Porch_toilet
    HH101,
    HH105,
    HH109,
    HH110
    M 4, 3, 2, 2 1, 2, 1, 1 3, 2, 4, 4 1, 1, 1, 1 0, 0, 0, 0 2, 3, 3, 2 1, 2, 2, 1
    MA 1, 1, 1, 1 0, 1, 0, 0 1, 1, 1, 1 0, 0, 0, 0 1, 1, 1, 1 1, 1, 1, 1 0, 0, 0, 0
    D 0, 2, 0, 2 0, 0, 0, 0 1, 1, 1, 1 1, 1, 1, 1 1, 1, 1, 1 0, 0, 0, 0 0, 0, 0, 0
    T 0, 1, 0, 1 0, 0, 0, 0 3, 1, 1, 1 1, 1, 1, 1 1, 2, 1, 1 0, 0, 0, 0 0, 0, 0, 0
    L 0, 2, 0, 2 0, 0, 0, 1 0, 1, 0, 1 0, 0, 0, 1 0, 2, 0, 0 0, 1, 0, 1 0, 0, 0, 0
    LS 5, 4, 3, 3 1, 3, 1, 1 4, 3, 5, 5 1, 1, 1, 1 1, 1, 1, 1 3, 4, 4, 3 1, 2, 2, 1


    Daily activity recognition is a classification task. Hence, the evaluation metrics used are accuracy, precision and F1-score, shown in Eqs (1)–(3), respectively; the recall used by the F1-score is shown in Eq (4). TP is the number of true positives, i.e., positive samples correctly classified by the proposed approach, whereas FP is the number of false positives, i.e., samples incorrectly classified as positive. TN is the number of correctly classified true negatives, while FN is the number of false negatives, i.e., positive samples incorrectly classified as negative.

    $$\mathrm{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN} \tag{1}$$
    $$\mathrm{Precision}=\frac{TP}{TP+FP} \tag{2}$$
    $$\mathrm{F1\text{-}score}=\frac{2\times \mathrm{Precision}\times \mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}} \tag{3}$$
    $$\mathrm{Recall}=\frac{TP}{TP+FN} \tag{4}$$
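    For reference, the four metrics computed directly from the confusion-matrix counts:

    def classification_metrics(tp, tn, fp, fn):
        accuracy  = (tp + tn) / (tp + tn + fp + fn)          # Eq (1)
        precision = tp / (tp + fp)                           # Eq (2)
        recall    = tp / (tp + fn)                           # Eq (4)
        f1 = 2 * precision * recall / (precision + recall)   # Eq (3)
        return accuracy, precision, recall, f1

    # classification_metrics(80, 10, 5, 5) -> (0.9, 0.941..., 0.941..., 0.941...)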

    HH109 is used as the target smart home, and HH101, HH105 and HH110 are used as source smart homes. The similarity between each source smart home and the target smart home is solved as a classification task; K-Nearest Neighbor (KNN), Random Forest (RF), Decision Tree (DT) and Naive Bayes (NB) classifiers are used for the similarity solution. The results are shown in Table 4. Since HH101 is the most similar to HH109 under KNN, RF and DT, it is selected as the most similar source smart home.

    Table 4.  Similarities between HH109 and HH101, HH105, HH110.
    Target Smart Home Source Smart Homes KNN RF DT NB
    HH109 HH101
    HH105
    HH110

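    The paper does not spell out how each classifier "selects" a source home; one plausible reading, sketched below with scikit-learn (the data layout and selection rule are assumptions), trains each classifier on every candidate source and votes for the source whose model scores highest on a held-out target sample:

    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB

    def vote_for_sources(sources, X_target, y_target):
        # sources: dict home name -> (X, y) training data for that candidate.
        clfs = {"KNN": KNeighborsClassifier(), "RF": RandomForestClassifier(),
                "DT": DecisionTreeClassifier(), "NB": GaussianNB()}
        votes = {}
        for cname, clf in clfs.items():
            scores = {name: clf.fit(X, y).score(X_target, y_target)
                      for name, (X, y) in sources.items()}
            votes[cname] = max(scores, key=scores.get)
        return votes   # e.g., {"KNN": "HH101", "RF": "HH101", ...}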

    Data collected from HH109 on six different dates are used independently for the sensor mapping. DANN is employed to evaluate the candidate sensors mappings; its parameters are shown in Table 5. The sensors installed in HH101 and HH109 are divided into 23 parts, shown in Table 6. The accuracy obtained from DANN is used as the evaluation criterion for each part, and the candidate mapping with the highest accuracy is selected as the optimal sensor mapping, as shown in Table 7.

    Table 5.  Parameters of the DANN network.
    Learning_rate Momentum_rate Batch_size Epoch Optimizer
    0.001 0.9 64 100 Momentum optimizer

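    DANN itself is not specified layer by layer in the paper; the following minimal PyTorch sketch shows its defining component, the gradient reversal layer, together with the optimizer settings from Table 5 (all layer sizes are illustrative assumptions):

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        # Identity in the forward pass; gradient negated (and scaled by lam)
        # in the backward pass -- the core trick of DANN.
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)
        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    class DANN(nn.Module):
        def __init__(self, in_dim=100, hid=64, n_classes=10):
            super().__init__()
            self.feature = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
            self.label_head = nn.Linear(hid, n_classes)   # ten daily activities
            self.domain_head = nn.Linear(hid, 2)          # source vs. target
        def forward(self, x, lam=1.0):
            f = self.feature(x)
            return self.label_head(f), self.domain_head(GradReverse.apply(f, lam))

    model = DANN()
    # Table 5: momentum optimizer, lr 0.001, momentum 0.9 (batch 64, 100 epochs).
    opt = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)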
    Table 6.  Division for sensors from HH101 and HH109.
    HH101 HH109
    Part1 LS011 LS016, LS011
    Part2 M011 M016, M011
    Part3 LS005, LS008, LS010, LS013 LS002, LS003, LS004, LS005, LS006
    Part4 M009, M012 M012, M014, M015
    Part5 LS015 LS017
    Part6 MA016 MA009
    Part7 MA015 MA017
    Part8 M001 M001
    Part9 T102 T102
    Part10 LS009, LS012, LS014 LS012, LS013, LS014, LS015
    Part11 D001 D001
    Part12 T101, T104, T105 T101
    Part13 MA014 MA013
    Part14 M005, M008, M010 M002, M003, M004, M006
    Part15 LS002, LS003, LS006, LS007, LS016 LS008, LS010, LS009
    Part16 T103 T103
    Part17 D003 D003
    Part18 D002 D002
    Part19 LS001 LS001
    Part20 LS004 LS007
    Part21 MA013 MA005
    Part22 M002, M003, M006, M007 M008, M010
    Part23 M004 M007

    Table 7.  Optimal mapping for each division in different samples space.
    D* Date 1 Date 2 Date 3 Date 4 Date 5 Date 6
    Part1 {(LS011, LS016),
    (LS011, LS011)}
    {(LS011, LS016),
    (LS011, LS011)}
    {(LS011, LS016),
    (LS011, LS011)}
    {(LS011, LS016),
    (LS011, LS011)}
    {(LS011, LS016),
    (LS011, LS011)}
    {(LS011, LS016),
    (LS011, LS011)}
    Part2 {(M011, M016),
    (M011, M011)}
    {(M011, M016),
    (M011, M011)}
    {(M011, M016),
    (M011, M011)}
    {(M011, M016),
    (M011, M011)}
    {(M011, M016),
    (M011, M011)}
    {(M011, M016),
    (M011, M011)}
    Part3 {(LS008, LS002),
    (LS010, LS003),
    (LS008, LS004),
    (LS010, LS005)
    (LS013, LS006)}
    {(LS013, LS002),
    (LS008, LS003),
    (LS013, LS004),
    (LS010, LS005)
    (LS005, LS006)}
    {(LS010, LS002),
    (LS008, LS003),
    (LS008, LS004),
    (LS010, LS005)
    (LS008, LS006)}
    {(LS005, LS002),
    (LS008, LS003),
    (LS008, LS004),
    (LS010, LS005)
    (LS013, LS006)}
    {(LS008, LS002),
    (LS010, LS003),
    (LS013, LS004),
    (LS010, LS005)
    (LS010, LS006)}
    {(LS008, LS002),
    (LS005, LS003),
    (LS010, LS004),
    (LS010, LS005)
    (LS010, LS006)}
    Part4 {(M012, M012),
    (M010, M014),
    (M012, M015)}
    {(M009, M012),
    (M009, M014),
    (M012, M015)}
    {(M012, M012),
    (M012, M014),
    (M012, M015)}
    {(M012, M012),
    (M009, M014),
    (M012, M015)}
    {(M012, M012),
    (M009, M014),
    (M012, M015)}
    {(M009, M012),
    (M009, M014),
    (M009, M015)}
    Part5 {(LS015, LS017)} {(LS015, LS017)} {(LS015, LS017)} {(LS015, LS017)} {(LS015, LS017)} {(LS015, LS017)}
    Part6 {(MA016, MA009)} {(MA016, MA009)} {(MA016, MA009)} {(MA016, MA009)} {(MA016, MA009)} {(MA016, MA009)}
    Part7 {(MA015, MA017)} {(MA015, MA017)} {(MA015, MA017)} {(MA015, MA017)} {(MA015, MA017)} {(MA015, MA017)}
    Part8 {(M001, M001)} {(M001, M001)} {(M001, M001)} {(M001, M001)} {(M001, M001)} {(M001, M001)}
    Part9 {(T102, T102)} {(T102, T102)} {(T102, T102)} {(T102, T102)} {(T102, T102)} {(T102, T102)}
    Part10 {(LS009, LS012),
    (LS009, LS013),
    (LS012, LS014),
    (LS014, LS015)}
    {(LS009, LS012),
    (LS009, LS013),
    (LS012, LS014),
    (LS014, LS015)}
    {(LS009, LS012),
    (LS012, LS013),
    (LS009, LS014),
    (LS014, LS015)}
    {(LS012, LS012),
    (LS014, LS013),
    (LS014, LS014),
    (LS014, LS015)}
    {(LS009, LS012),
    (LS009, LS013),
    (LS014, LS014),
    (LS012, LS015)}
    {(LS012, LS012),
    (LS014, LS013),
    (LS014, LS014),
    (LS012, LS015)}
    Part11 {(D001, D001)} {(D001, D001)} {(D001, D001)} {(D001, D001)} {(D001, D001)} {(D001, D001)}
    Part12 {(T105, T101)} {(T105, T101)} {(T105, T101)} {(T105, T101)} {(T101, T101)} {(T105, T101)}
    Part13 {(MA014, MA013)} {(MA014, MA013)} {(MA014, MA013)} {(MA014, MA013)} {(MA014, MA013)} {(MA014, MA013)}
    Part14 {(M008, M002),
    (M005, M003),
    (M010, M004),
    (M005, M006)}
    {(M010, M002),
    (M010, M003),
    (M010, M004),
    (M008, M006)}
    {(M005, M002),
    (M005, M003),
    (M010, M004),
    (M005, M006)}
    {(M010, M002),
    (M010, M003),
    (M010, M004),
    (M010, M006)}
    {(M005, M002),
    (M010, M003),
    (M008, M004),
    (M008, M006)}
    {(M008, M002),
    (M008, M003),
    (M005, M004),
    (M008, M006)}
    Part15 {(LS003, LS008),
    (LS006, LS010),
    (LS007, LS009)}
    {(LS007, LS008),
    (LS006, LS010),
    (LS003, LS009)}
    {(LS003, LS008),
    (LS016, LS010),
    (LS008, LS009)}
    {(LS003, LS008),
    (LS006, LS010),
    (LS007, LS009)}
    {(LS003, LS008),
    (LS002, LS010),
    (LS007, LS009)}
    {(LS002, LS008),
    (LS016, LS010),
    (LS006, LS009)}
    Part16 {(T103, T103)} {(T103, T103)} {(T103, T103)} {(T103, T103)} {(T103, T103)} {(T103, T103)}
    Part17 {(D003, D003)} {(D003, D003)} {(D003, D003)} {(D003, D003)} {(D003, D003)} {(D003, D003)}
    Part18 {(D002, D002)} {(D002, D002)} {(D002, D002)} {(D002, D002)} {(D002, D002)} {(D002, D002)}
    Part19 {(LS001, LS001)} {(LS001, LS001)} {(LS001, LS001)} {(LS001, LS001)} {(LS001, LS001)} {(LS001, LS001)}
    Part20 {(LS004, LS007)} {(LS004, LS007)} {(LS004, LS007)} {(LS004, LS007)} {(LS004, LS007)} {(LS004, LS007)}
    Part21 {(MA013, MA005)} {(MA013, MA005)} {(MA013, MA005)} {(MA013, MA005)} {(MA013, MA005)} {(MA013, MA005)}
    Part22 {(M006, M008),
    (M007, M010)}
    {(M003, M008),
    (M007, M010)}
    {(M002, M008),
    (M007, M010)}
    {(M003, M008),
    (M002, M010)}
    {(M002, M008),
    (M006, M010)}
    {(M003, M008),
    (M007, M010)}
    Part23 {(M004, M007)} {(M004, M007)} {(M004, M007)} {(M004, M007)} {(M004, M007)} {(M004, M007)}


    Each activated sensor s of a daily activity instance is represented in the pattern FA_C_N, where FA is the name of the function area in which s is installed, C is the category of s, and N is the sensor's serial number within that area and category. For example, for the sensor event "2012/8/25 15:01 M008 OFF" shown in Table 1, sensor M008 is represented as bedroom_M_2, since M008 is installed in the bedroom. The sensor events stream corresponding to an instance of a daily activity is thus represented as a string vector, which is transformed into a digital vector using the word2vec algorithm. After the sensor events streams of all daily activity instances are represented as digital vectors, the vectors from the source smart home are used to train the DANN. The results are shown in Table 8, and the iteration processes for each date are shown in Figures 3–5. No matter which date is selected, favorable daily activity recognition performance is achieved.
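    As an illustration of this representation pipeline (the layout tables, instance data and mean-pooling aggregation below are toy assumptions; the paper only specifies the FA_C_N pattern and word2vec):

    import numpy as np
    from gensim.models import Word2Vec

    # Illustrative layout tables: function area and per-area serial number.
    area_of   = {"M008": "bedroom", "M009": "bedroom", "LS008": "bedroom"}
    serial_of = {"M008": 2, "M009": 3, "LS008": 1}

    def encode(sn):
        # FA_C_N, e.g., "M008" -> "bedroom_M_2".
        cat = sn.rstrip("0123456789")       # "M008" -> "M", "LS008" -> "LS"
        return f"{area_of[sn]}_{cat}_{serial_of[sn]}"

    # Each daily activity instance becomes a "sentence" of FA_C_N tokens.
    instances = [["M008", "M009", "LS008"], ["M009", "M008"]]
    sentences = [[encode(sn) for sn in inst] for inst in instances]
    w2v = Word2Vec(sentences, vector_size=100, window=5, min_count=1)
    # One fixed-length vector per instance: mean of its token vectors.
    vectors = [np.mean([w2v.wv[t] for t in s], axis=0) for s in sentences]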

    Table 8.  Results of daily activity recognition.
    D* Accuracy Precision F1-score
    Date 1 83.89% 78.14% 80.47%
    Date 2 81.70% 76.88% 76.31%
    Date 3 83.56% 77.88% 79.65%
    Date 4 80.31% 68.83% 73.57%
    Date 5 81.02% 78.58% 78.87%
    Date 6 80.40% 69.07% 73.56%

    Figure 3.  The accuracy of daily activity recognition from the 1st to the 100th epoch for the six dates.
    Figure 4.  The precision of daily activity recognition from the 1st to the 100th epoch for the six dates.
    Figure 5.  The F1-score of daily activity recognition from the 1st to the 100th epoch for the six dates.

    Among HH101, HH105 and HH110, HH101 is the most similar to HH109. We employed DANN to evaluate the performance of daily activity recognition using HH101 and HH105 as training sets, respectively. As shown in Figures 6–8, daily activity recognition trained on data from HH101 outperforms that trained on data from HH105. These experimental results demonstrate the effectiveness of Algorithm 1.

    Figure 6.  The accuracy of daily activity recognition using data collected from HH101 and HH105 as training set, respectively.
    Figure 7.  The precision of daily activity recognition using data collected from HH101 and HH105 as training set, respectively.
    Figure 8.  The F1-score of daily activity recognition using data collected from HH101 and HH105 as training set, respectively.

    We compared the proposed method with two state-of-the-art methods: the ontology sensor mapping method and the word embedding mapping method [15,24]. The results are shown in Figures 9–11. The proposed method is superior to both, showing that its more precise sensor mapping is advantageous.

    Figure 9.  The accuracy of daily activity recognition using different sensor mapping methods.
    Figure 10.  The precision of daily activity recognition using different sensor mapping methods.
    Figure 11.  The F1-score of daily activity recognition using different sensor mapping methods.

    In rough sensor mapping, a sensor installed in the source smart home and a sensor installed in the target smart home are mapped whenever their locations and categories are the same. The results are shown in Figures 12–14. Owing to its precise sensor mapping, which generates more distinguishable features of daily activity, the proposed method is also superior to the rough sensor mapping method; a minimal sketch of this baseline follows Figure 14.

    Figure 12.  The accuracy of daily activity recognition using the proposed method and rough sensors mapping method.
    Figure 13.  The precision of daily activity recognition using the proposed method and rough sensors mapping method.
    Figure 14.  The F1-score of daily activity recognition using the proposed method and rough sensors mapping method.
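    For comparison, the rough baseline is trivial to implement; a sketch (data layout assumed as in the earlier sketches):

    def rough_mapping(source_sensors, target_sensors):
        # Map a target sensor to any source sensor that shares its
        # (location, category) label; ambiguity is resolved arbitrarily.
        by_label = {}
        for s in source_sensors:
            by_label.setdefault((s["area"], s["cat"]), []).append(s["name"])
        return {t["name"]: by_label.get((t["area"], t["cat"]), [None])[0]
                for t in target_sensors}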

    The performance of cross-environment daily activity recognition mainly depends on the sensor mapping between heterogeneous smart homes. This paper presents a novel approach that discovers the optimal sensor mapping by iteratively evaluating each candidate sensor mapping between the most similar source smart home and the target smart home. Two public datasets involving sensor data on ten daily activities are investigated to validate the proposed approach, and the results demonstrate its excellent performance.

    This work was supported by the National Natural Science Foundation of China (Nos. 61976124, 62173053), the Fundamental Research Funds for the Central Universities (No. 3132018194).

    The authors declare there is no conflict of interest.



    [1] S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, M. Goldbaum, Detection of blood vessels in retinal images using two-dimensional matched filters, IEEE Trans. Med. Imaging, 8 (1989), 263–269. https://doi.org/10.1109/42.34715 doi: 10.1109/42.34715
    [2] Q. Li, J. You, D. Zhang, Vessel segmentation and width estimation in retinal images using multi-scale production of matched filter responses, Expert Syst. Appl., 39 (2012), 7600–7610. https://doi.org/10.1016/j.eswa.2011.12.046 doi: 10.1016/j.eswa.2011.12.046
    [3] K. S. Sreejini, V. K. Govindan, Improved multi-scale matched filter for retina vessel segmentation using PSO algorithm, Egyptian Inf. J., 16 (2015), 253–260. https://doi.org/10.1016/j.eij.2015.06.004 doi: 10.1016/j.eij.2015.06.004
    [4] S. K. Saroj, R. Kumar, N. P. Singh, Frechet PDF based matched filter approach for retinal blood vessels segmentation, Comput. Methods Programs Biomed., 194 (2020), 105490. https://doi.org/10.1016/j.cmpb.2020.105490 doi: 10.1016/j.cmpb.2020.105490
    [5] A. M. Aibinu, M. I. Iqbal, A. A. Shafie, M. J. E. Salami, M. Nilsson, Vascular intersection detection in retina fundus images using a new hybrid approach, Comput. Biol. Med., 40 (2010), 81–89. https://doi.org/10.1016/j.compbiomed.2009.11.004 doi: 10.1016/j.compbiomed.2009.11.004
    [6] M. Vlachos, E. Dermatas, Multi-scale retinal vessel segmentation using line tracking, Comput. Med. Imaging Graphics, 34 (2010), 213–227. https://doi.org/10.1016/j.compmedimag.2009.09.006 doi: 10.1016/j.compmedimag.2009.09.006
    [7] F. Zana, J. C. Klein, Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation, IEEE Trans. Image Process., 10 (2001), 1010–1019. https://doi.org/10.1109/83.931095 doi: 10.1109/83.931095
    [8] M. M. Fraz, S. A. Barma, P. Remagnino, A. Hoppe, A. Basit, B. Uyyanonvara, et al., An approach to localize the retinal blood vessels using bit planes and centerline detection, Comput. Methods Programs Biomed., 108 (2012), 600–616. https://doi.org/10.1016/j.cmpb.2011.08.009 doi: 10.1016/j.cmpb.2011.08.009
    [9] Y. Yang, S. Y. Huang, N. N. Rao, An automatic hybrid method for retinal blood vessel extraction, Int. J. Appl. Math. Comput. Sci., 18 (2008), 399–407. https://doi.org/10.2478/v10006-008-0036-5 doi: 10.2478/v10006-008-0036-5
    [10] K. Mardani, K. Maghooli, Enhancing retinal blood vessel segmentation in medical images using combined segmentation modes extracted by DBSCAN and morphological reconstruction, Biomed. Signal Process. Control, 69 (2021), 102837. https://doi.org/10.1016/j.bspc.2021.102837 doi: 10.1016/j.bspc.2021.102837
    [11] J. Staal, M. D. Abràmoff, M. Niemeijer, M. A. Viergever, B. van Ginneken, Ridge-based vessel segmentation in color images of the retina, IEEE Trans. Med. Imaging, 23 (2004), 501–509. https://doi.org/10.1109/TMI.2004.825627 doi: 10.1109/TMI.2004.825627
    [12] J. V. B. Soares, J. J. G. Leandro, R. M. Cesar, Retinal vessel segmentation using the 2-D morlet wavelet and supervised classification, IEEE Trans. Med. Imaging, 25 (2006). https://doi.org/10.1109/TMI.2006.879967 doi: 10.1109/TMI.2006.879967
    [13] S. A. Khowaja, P. Khuwaja, I. A. Ismaili, A framework for retinal vessel segmentation from fundus images using hybrid feature set and hierarchical classification, Signal Image Video Process., 13 (2018), 379–387. https://doi.org/10.1007/s11760-018-1366-x doi: 10.1007/s11760-018-1366-x
    [14] O. Ronneberger, P. Fischer, T. Brox, U-net: Convolutional networks for biomedical image segmentation, in International Conference on Medical Image Computing And Computer-assisted Intervention, (2015), 234–241. https://doi.org/10.48550/arXiv.1505.04597
    [15] J. K. Wang, X. Li, Y. Z. Cheng, Towards an extended efficient net-based u-Net framework for joint optic disc and cup segmentation in the fundus image, Biomed. Signal Process. Control, 85 (2023), 104906. https://doi.org/10.1016/j.bspc.2023.104906 doi: 10.1016/j.bspc.2023.104906
    [16] B. Yang, L. Qin, H. Peng, C. Guo, X. Luo, J. Wang, SDDC-Net: A U-shaped deep spiking neural P convolutional network for retinal vessel segmentation, Dig. Signal Process., 2023 (2023), 4002. https://doi.org/10.1016/j.dsp.2023.104002 doi: 10.1016/j.dsp.2023.104002
    [17] G. X. Xu, C. X. Ren, SPNet: A novel deep neural network for retinal vessel segmentation based on shared decoder and pyramid-like loss, Neurocomputing, 523 (2023), 199–212. https://doi.org/10.1016/j.neucom.2022.12.039 doi: 10.1016/j.neucom.2022.12.039
    [18] Y. Wu, Y. Xia, Y. Song, Y. Zhang, W. Cai, Multi-scale network followed network model for retinal vessel segmentation, in International Conference on Medical Image Computing And Computer-Assisted Intervention, (2018), 119–126. https://doi.org/10.1007/978-3-030-00934-2_14
    [19] J. Zhuang, LadderNet: Multi-path networks based on u-Net for medical image segmentation, preprint, arXiv: 1810.07810.
    [20] M. Z. Alom, C. Yakopcic, M. Hasan, T. M. Taha, V. K. Asari, Recurrent residual u-Net for medical image segmentation, J. Med. Imaging, 6 (2019), 6–14. https://doi.org/10.1117/1.JMI.6.1.014006 doi: 10.1117/1.JMI.6.1.014006
    [21] L. Li, M. Verma, Y. Nakashima, H. Nagahara, R. Kawasaki, IterNet: Retinal image segmentation utilizing structural redundancy in vessel networks, in IEEE Winter Conference on Applications of Computer Vision, (2020). https://doi.org/10.48550/arXiv.1912.05763
    [22] Z. Gu, J. Cheng, H. Fu, K. Zhou, H. Hao, Y. Zhao, CE-Net: Context encoder network for 2D medical image segmentation, IEEE Trans. Med. Imaging, (2019). https://doi.org/10.1109/TMI.2019.2903562 doi: 10.1109/TMI.2019.2903562
    [23] Z. F. Lin, J. P. Huang, Y. Y Chen, X. Zhang, W. Zhao, Y. Li, et al., A high resolution representation network with multi-path scale for retinal vessel segmentation, Comput. Methods Programs Biomed., 208 (2021). https://doi.org/10.1016/j.cmpb.2021.106206 doi: 10.1016/j.cmpb.2021.106206
    [24] X. Li, Y. Jiang, M. Li, S. Yin, Lightweight attention convolutional neural network for retinal vessel image segmentation, IEEE Trans. Ind. Inf., 17 (2021), 1958–1967. https://doi.org/10.1109/TII.2020.2993842 doi: 10.1109/TII.2020.2993842
    [25] Y. Zhang, J. Fang, Y. Chen, L. Jia, Edge-aware U-net with gated convolution for retinal vessel segmentation, Biomed. Signal Process. Control, 73 (2022), 103472. https://doi.org/10.1016/j.bspc.2021.103472 doi: 10.1016/j.bspc.2021.103472
    [26] X. Deng, J. Ye, A retinal blood vessel segmentation based on improved D-MNet and pulse-coupled neural network, Biomed. Signal Process. Control, 73 (2022), 103467. https://doi.org/10.1016/j.bspc.2021.103467 doi: 10.1016/j.bspc.2021.103467
    [27] J. He, Q. Zhu, K. Zhang, P. Yu, J. Tang, An evolvable adversarial network with gradient penalty for COVID-19 infection segmentation, Appl. Soft Comput., 113 (2021), 107947. https://doi.org/10.1016/j.asoc.2021.107947 doi: 10.1016/j.asoc.2021.107947
    [28] N. Mu, H. Wang, Y. Zhang, J. Jiang, J. Tang, Progressive global perception and local polishing network for lung infection segmentation of COVID-19 CT images, Pattern Recognit., 120 (2021), 108168. https://doi.org/10.1016/j.patcog.2021.108168 doi: 10.1016/j.patcog.2021.108168
    [29] C. Zhao, A. Vij, S Malhotra, J. Tang, H. Tang, D. Pienta, et al., Automatic extraction and stenosis evaluation of coronary arteries in invasive coronary angiograms, Comput. Biol. Med., 136 (2021), 104667. https://doi.org/10.1016/j.compbiomed.2021.104667 doi: 10.1016/j.compbiomed.2021.104667
    [30] X. Liu, Z. Guo, J. Cao, J. Tang, MDC-net: A new convolutional neural network for nucleus segmentation in histopathology images with distance maps and contour information, Comput. Biol. Med., 135 (2021), 104543. https://doi.org/10.1016/j.compbiomed.2021.104543 doi: 10.1016/j.compbiomed.2021.104543
    [31] Y. Wu, Y. Xia, Y. Song, Y. Zhang, W. Cai, NFN+: A novel network followed network for retinal vessel segmentation, Neural Networks, 126 (2020), 153–162. https://doi.org/10.1016/j.neunet.2020.02.018 doi: 10.1016/j.neunet.2020.02.018
    [32] G. Ghiasi, T. Y. Lin, Q. V. Le, Dropblock: a regularization method for convolutional networks, Adv. Neural Inf. Process. Syst., 31 (2018). https://doi.org/10.48550/arXiv.1810.12890 doi: 10.48550/arXiv.1810.12890
    [33] Q. Jin, Z. Meng, T. D. Pham, Q. Chen, L. Wei, R. Su, DUNet: a deformable network for retinal vessel segmentation, Knowl. Based Syst., 178 (2019), 149–162. https://doi.org/10.1016/j.knosys.2019.04.025 doi: 10.1016/j.knosys.2019.04.025
    [34] F. Chollet, Xception: Deep learning with depthwise separable convolutions, in Proceedings of the IEEE conference on computer vision and pattern recognition, (2017), 1251–1258. https://doi.org/10.48550/arXiv.1610.02357
    [35] L. Mou, L. Chen, J. Cheng, Z. Gu, Y. Zhao, J. Liu, Dense dilated network with probability regularized walk for vessel detection, IEEE Trans. Med. Imaging, 39 (2020), 1392–1403. https://doi.org/10.1109/TMI.2019.2950051 doi: 10.1109/TMI.2019.2950051
    [36] Z. Yan, X. Yang, K. T. Cheng, A three-stage deep learning model for accurate retinal vessel segmentation, biomedical and health informatics, IEEE J. Biomed. Health Inf., 23 (2019), 1427–1436. https://doi.org/10.1109/JBHI.2018.2872813 doi: 10.1109/JBHI.2018.2872813
    [37] V. Badrinarayanan, A. Kendall, R. Cipolla, Segnet: a deep convolutional encoder-decoder architecture for image segmentation, IEEE Trans. Pattern Anal. Mach. Intell., 39 (2017): 2481–2495. https://doi.org/10.1109/TPAMI.2016.2644615 doi: 10.1109/TPAMI.2016.2644615
    [38] J. Long, E. Shelhamer, T. Darrell, Fully convolutional networks for semantic segmentation, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2014). https://doi.org/10.1109/TPAMI.2016.2572683
    [39] N. Ibtehaz, M. S. Rahman, MultiResUNet: Rethinking the u-Net architecture for multimodal biomedical image segmentation, Neural Networks, 121 (2020), 74–87. https://doi.org/10.1016/j.neunet.2019.08.025 doi: 10.1016/j.neunet.2019.08.025
    [40] A. Chaurasia, E. Culurciello, Linknet: Exploiting encoder representations for efficient semantic segmentation, in 2017 IEEE Visual Communications and Image Processing (VCIP), (2017), 1–4. https://doi.org/10.1109/VCIP.2017.8305148
    [41] L. C. Chen, Y. Zhu, G. Papandreou, F. Schroff, H. Adam, Encoder-decoder with atrous separable convolution for semantic image segmentation, in Proceedings of the European Conference on Computer Vision, (2018), 801–818. https://doi.org/10.48550/arXiv.1802.02611
    [42] O. Oktay, J. Schlemper, L. L. Folgoc, M. Lee, M. Heinrich, K. Misawa, et al., Attention u-net: Learning where to look for the pancreas, preprint, arXiv: 1804.03999.
    [43] A. Glowacz, Thermographic fault diagnosis of shaft of BLDC motor, Sensors, 22 (2022), 8537. https://doi.org/10.3390/s22218537 doi: 10.3390/s22218537
    [44] A. Glowacz, Fault diagnosis of electric impact drills using thermal imaging, Measurement, 171 (2021), 108815. https://doi.org/10.1016/j.measurement.2020.108815 doi: 10.1016/j.measurement.2020.108815
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)