
Diabetes is a serious public health problem in children and adolescents because of its chronicity and the difficulty of controlling blood glucose levels at paediatric ages.
The aim of this study was to assess the association of socio-demographic and anthropometric characteristics with disease management and glycemic control in children with type 1 diabetes (T1D).
The study included a sample of 184 children with T1D aged 15 years or less. A structured questionnaire was used to collect information on participants' socio-demographic status, characteristics and complications of the disease, diabetes management, diet, physical activity, and therapeutic education. Weight and height were measured and body mass index was calculated.
The mean age of the patients surveyed was 8.49 ± 4.1 years; the majority were of school age (68.5%), female (53.2%), and of low socioeconomic level (83.2%). Only 20.1% of the patients had good glycemic control. Low socioeconomic status and overweight or obesity were significantly more prevalent in children with poor glycemic control than in those with good control (P ≤ 0.001). Multivariate analysis revealed an association of poor glycemic control with a family history of diabetes (adjusted OR = 38.70, 95% CI: 11.61-128.98) and the absence of therapeutic education (adjusted OR = 3.29, 95% CI: 1.006-10.801).
This study shows that diabetes is associated with overweight and obesity in children and that the quality of glycemic control is generally poor in these patients. The data also show that improving the quality of life of T1D patients requires good therapeutic education, hence the need for a genuine national policy.
Citation: Sanaa El-Jamal, Houda Elfane, Imane Barakat, Khadija Sahel, Mohamed Mziwira, Aziz Fassouane, Rekia Belahsen. Association of socio-demographic and anthropometric characteristics with the management and glycemic control in type 1 diabetic children from the province of El Jadida (Morocco) [J]. AIMS Medical Science, 2021, 8(2): 87-104. doi: 10.3934/medsci.2021010
Abbreviations: IDF: International Diabetes Federation; T1D: type 1 diabetes; ISPAD: International Society for Paediatric and Adolescent Diabetes; DCCT: Diabetes Control and Complications Trial; HbA1c: glycated haemoglobin; WHO: World Health Organization; WC: waist circumference; WHtR: waist-to-height ratio; BMI: body mass index; SD: standard deviation; FBG: fasting blood glucose; postprandial blood glucose; OR: odds ratio; CI: confidence interval; medical assistance scheme; ADA: American Diabetes Association; vs: versus.
With the rapid growth of digital information in recent years, the effective management and rational utilization of large-scale information have become urgent in practical applications such as cross-modal retrieval [1,2], control systems [3,4,5,6], and data analysis [7,8]. Dimension reduction is an effective tool for finding valuable information in a low-dimensional space with little information loss and strong discriminant ability.
A large number of dimension reduction methods have been proposed over the past few decades. Principal Component Analysis (PCA) [9] and Linear Discriminant Analysis (LDA) [10,11,12] are the most popular owing to their simplicity and efficiency. PCA maximizes the global variance to obtain the low-dimensional subspace, while LDA maximizes the ratio between the trace of the between-class scatter and the trace of the within-class scatter. PCA and LDA are linear methods and cannot achieve satisfactory projections for nonlinear high-dimensional data, so many manifold learning methods have been developed. Representative algorithms include Isometric Mapping [13], Kernel Principal Component Analysis [14], and Maximum Variance Unfolding [15]. These nonlinear dimension reduction methods can effectively reveal the structure of the original data, but they preserve only the global structure information.
Locality Preserving Projection aims to find an optimal subspace that preserves the local relationships between adjacent samples as much as possible [16,17,18,19,20]. Neighborhood Preserving Embedding [21], Locality Sensitive Discriminant Analysis [22], and Marginal Fisher Analysis [23] are all local methods. Sparsity Preserving Projection [24,25] is also one of the most influential methods; it preserves the sparse reconstructive relationship of the data, that is, the reconstruction of each point through basis vectors with few nonzero elements in the sparse subspace. To extend Sparsity Preserving Projection to a semi-supervised embedding method, Weng et al. [26] introduced the idea of in-class constraints and proposed Constrained Sparsity Preserving Embedding. Other representative methods include Sparsity Preserving Discriminant Analysis [27] and Multi-view Sparsity Preserving Projection [24]. Taking both the local and global structure information into account, Cai et al. [28] presented a linear dimension reduction algorithm named Global and Local Structure Preserving Projection that keeps both the global and local clustering structures hidden in the data. A sparse dimension reduction method named Sparse Global-Local Preserving Projections was proposed in [29]; it preserves both the global and local structures of the data set and extracts sparse transformation vectors that reveal meaningful correlations between variables. Liu et al. [30] used collaborative representation to construct within-class and between-class graphs and cast the dimensionality reduction task in a graph embedding framework. Hinton et al. [31] described an effective way of initializing the weights that allows deep auto-encoder networks to learn low-dimensional codes that work much better than PCA for reducing dimensionality. A deep variational auto-encoder [32] has been used for unsupervised dimension reduction of single-cell RNA-seq data; it explicitly models dropout events and finds nonlinear hierarchical feature representations of the original data.
In real applications, some data have special meanings and the importance of each point differs. One example is illustrated by the three images in Figure 1, which depict the same person: the discriminative features that can be extracted from the images are different. This inspires us to pay more attention to the important points in the process of dimension reduction, whereas existing approaches usually ignore the differing importance of points. In this paper, we partition the data in a high-dimensional space into two classes, the representative data and the common data, and treat each kind with a different method. The representative data dominate the features of the entire model, and their projection largely determines the projection of the whole model. Therefore, we focus on the representative data during dimension reduction, and the projections of the common data are then recovered from the projected representative data. The contributions of this paper are as follows:
● A weighted affinity propagation algorithm is proposed to cluster the data with different weights. The weight of each point provides an importance factor for clustering, which makes the data with high weights play more important roles in the clustering process.
● Two dimension reduction approaches are used to preserve the structure information of the representative data and the common data with different aims. Data with more discriminative features or greater importance receive more attention in the process of dimension reduction.
The rest of this paper is organized as follows. Related work is introduced in Section 2. Section 3 describes the details of the proposed method, including the data partition and the structure-information-preserving projections of the representative and common data. The experimental results and discussions are presented in Section 4. Finally, Section 5 summarizes this work.
Affinity Propagation (AP) is a clustering method proposed by Frey and Dueck [33]. Compared with other methods, AP determines the number of clusters automatically through message exchange. By exchanging messages among data points simultaneously, AP decides whether a point should be an exemplar (cluster center) and, if not, which exemplar it should belong to.
The AP algorithm passes real-valued messages between points until a high-quality set of exemplars and corresponding clusters is found. The responsibility $r(i,k)$ is the message sent from point $x_i$ to candidate exemplar $x_k$ and reflects how well suited $x_k$ is to serve as the exemplar of $x_i$. The availability $a(i,k)$ reflects how appropriate it would be for point $x_i$ to choose candidate exemplar $x_k$ as its exemplar. Mathematically, the responsibility $r(i,k)$ and the availability $a(i,k)$ are updated as
$$ r(i,k) = s(i,k) - \max_{k' \neq k}\left\{ a(i,k') + s(i,k') \right\} \tag{2.1} $$

$$ a(i,k) = \begin{cases} \min\left\{ 0,\; r(k,k) + \sum_{i' \notin \{i,k\}} \max\{0, r(i',k)\} \right\}, & i \neq k \\[4pt] \sum_{i' \neq k} \max\{0, r(i',k)\}, & i = k \end{cases} \tag{2.2} $$

where $s(i,j)$ describes the similarity between points $x_i$ and $x_j$; the negative squared Euclidean distance may be used as the similarity measure.
The responsibility and the availability are updated repeatedly until some stopping criterion is met; for example, the iterations can be terminated after a fixed number of steps. The exemplar index $k$ for point $x_i$ is then

$$ k = \arg\max_{j}\left\{ a(i,j) + r(i,j) \right\} \tag{2.3} $$

If $k = i$, point $x_i$ is an exemplar (cluster center); if $k \neq i$, point $x_k$ is the exemplar of point $x_i$.
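As a concrete illustration, the following NumPy sketch implements the updates (2.1)-(2.3); the median preference and the damping of the messages are standard AP defaults rather than choices specified in this paper (the damping corresponds to the factor λ of Eqs. (3.4)-(3.5) below).

```python
import numpy as np

def affinity_propagation(X, max_iter=200, damping=0.5):
    """Minimal sketch of the AP updates (2.1)-(2.3). Returns each point's
    exemplar index and the final evidence matrix A + R (the latter is
    reused by the weighted variant sketched in Section 3.2)."""
    n = len(X)
    # Similarity s(i, k): negative squared Euclidean distance.
    S = -np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    # Preference s(k, k): the median similarity is a common default.
    S[np.diag_indices(n)] = np.median(S)
    R = np.zeros((n, n))  # responsibilities r(i, k)
    A = np.zeros((n, n))  # availabilities a(i, k)
    for _ in range(max_iter):
        # Eq. (2.1): r(i,k) = s(i,k) - max_{k' != k} {a(i,k') + s(i,k')}.
        AS = A + S
        top = AS.argmax(axis=1)
        first = AS[np.arange(n), top]
        AS[np.arange(n), top] = -np.inf          # exclude k' = k at the argmax
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[np.arange(n), top] = S[np.arange(n), top] - second
        R = damping * R + (1 - damping) * R_new
        # Eq. (2.2): availabilities from the positive responsibilities.
        Rp = np.maximum(R, 0)
        Rp[np.diag_indices(n)] = R[np.diag_indices(n)]
        A_new = Rp.sum(axis=0)[None, :] - Rp     # sum over i' not in {i, k}
        dA = A_new[np.diag_indices(n)].copy()    # a(k,k) has no min{0, .} clamp
        A_new = np.minimum(A_new, 0)
        A_new[np.diag_indices(n)] = dA
        A = damping * A + (1 - damping) * A_new
    evidence = A + R
    return evidence.argmax(axis=1), evidence     # Eq. (2.3)
```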
The distance function determines how close the data are. The Euclidean distance is widely used to measure the distance between two data points, but it sometimes fails to reflect the real distance [34,35]. For the points in Figure 2(a), the Euclidean distance between two specified points is represented by the dotted line in Figure 2(c). Under this metric the two points appear deceptively close, although they lie on opposite parts of the manifold. The Geodesic distance between two points is usually defined as the length of the shortest path between them, and Figure 2(d) shows its benefit: the two points linked by the dotted line are not neighbors under the Geodesic distance measure.
For two points $s, t \in X$, there exists a shortest Geodesic path from $s$ to $t$. To compute the Geodesic distance between $s$ and $t$, we approximate the Geodesic path on a graph $G$: if the Euclidean distance between two points is lower than a given threshold, the line segment between them is taken as an edge of $G$, as shown in Figure 2(b), and the Euclidean distance is assigned to the edge as its weight.
The Geodesic distance between s and t is defined as follows
$$ d_G(s,t) = \min_{p \in \mathrm{Path}(s,t)} \left[\, d(s,x_1) + d(x_1,x_2) + \cdots + d(x_{|p|},t) \,\right] \tag{2.4} $$

where $d(x_i,x_j) = \|x_i - x_j\|$, $\mathrm{Path}(s,t)$ denotes the set of all paths in $G$ between $s$ and $t$, $p$ is one of these paths, and $x_1, \dots, x_{|p|}$ are the points on the path $p$.
The graph $G$ is a weighted undirected graph, and several algorithms exist for computing shortest distances on it, such as the Floyd-Warshall algorithm, which finds the shortest distance between every pair of points and outputs the correct result as long as $G$ contains no negative cycles. The shortest graph distance is taken as an approximation of the Geodesic distance, which reflects the true distance more accurately. The distance measure in this paper is based on the Geodesic distance and is applied in the search for the k nearest neighbors.
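A direct implementation of this construction is short. The sketch below builds the threshold graph and runs the Floyd-Warshall algorithm; pairs of points in different connected components keep distance +inf.

```python
import numpy as np

def geodesic_distances(X, epsilon):
    """Approximate the Geodesic distance (2.4) by shortest paths on the
    epsilon-threshold graph G, computed with Floyd-Warshall in O(n^3)."""
    n = len(X)
    D = np.sqrt(np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2))
    G = np.where(D < epsilon, D, np.inf)   # edge weight = Euclidean distance
    np.fill_diagonal(G, 0.0)
    for k in range(n):                     # relax all paths through point k
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    return G                               # G[i, j] ~ geodesic distance
```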
For data whose points have special meanings or different discriminative features, we propose a dimension reduction approach based on data partition and structure information preserving. First, the data points are partitioned into representative data and common data by the weighted affinity propagation clustering algorithm. Then, sparse relationship and Geodesic distance preserving projections are adopted to map the representative data from the original space to the low-dimensional space. Finally, the location of each common point is obtained through a linear combination of its adjacent representative points. The outline of the proposed method is shown in Figure 3.
In the process of dimension reduction the data differ in discriminative features and importance, so treating all points the same is not a good choice. In many practical applications the weights must be estimated, except when the data already carry weights of their own.
Density is related to importance: in general, the higher the density at a point, the more important the point. Based on this assumption, we use a kernel function to estimate the importance of each point as its weight. For any $x \in X$, where $X = \{x_1, \dots, x_n\}$ is the dataset, the kernel weight estimator $f(x;h)$ is given by
$$ f(x;h) = \frac{1}{nh} \sum_{i=1}^{n} k\!\left(\frac{x_i - x}{h}\right) \tag{3.1} $$
where h>0 is the bandwidth, and k(⋅) is the Gaussian kernel function.
The kernel weight estimator uses a fixed bandwidth $h$ over the whole data support. We use the data-driven least-squares cross-validation method to select $h$ by minimizing the cross-validation function $CV_f(h)$ [36]

$$ CV_f(h) = \frac{1}{n^2 h} \sum_{i=1}^{n} \sum_{j=1}^{n} \bar{k}\!\left(\frac{x_i - x_j}{h}\right) - \frac{2}{n(n-1)h} \sum_{i=1}^{n} \sum_{j=1,\,j \neq i}^{n} k\!\left(\frac{x_i - x_j}{h}\right) \tag{3.2} $$

where $\bar{k}$ is the twofold convolution kernel, $\bar{k}(v) = \int k(u)\,k(v-u)\,du$.
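For illustration, the following one-dimensional sketch computes the weights (3.1) and selects h by a grid search over (3.2); for the Gaussian kernel, the convolution kernel k-bar is the N(0, 2) density. The candidate grid is an assumption, not a value taken from the paper.

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def kernel_weights(x, h):
    """Eq. (3.1): density-based importance weight of every sample (1-D case)."""
    u = (x[:, None] - x[None, :]) / h
    return gaussian_kernel(u).sum(axis=1) / (len(x) * h)

def lscv_bandwidth(x, candidates=np.logspace(-2, 1, 30)):
    """Eq. (3.2): pick h minimizing least-squares cross-validation.
    kbar = Gaussian convolved with itself, i.e. the N(0, 2) density."""
    n = len(x)
    diff = x[:, None] - x[None, :]
    cv_vals = []
    for h in candidates:
        u = diff / h
        kbar = np.exp(-0.25 * u ** 2) / np.sqrt(4 * np.pi)
        k = gaussian_kernel(u)
        cv = kbar.sum() / (n ** 2 * h) \
             - 2.0 * (k.sum() - np.trace(k)) / (n * (n - 1) * h)
        cv_vals.append(cv)
    return candidates[int(np.argmin(cv_vals))]
```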
In some practical applications, the weights of the data carry a special meaning. Raw three-dimensional points carry no topological information, which makes operations such as deformation and surface reconstruction difficult. Curvature is a geometric property and a good invariant feature of local surfaces; in 3D space it usually refers to the Gaussian curvature, the mean curvature, or the principal curvatures. The Gaussian curvature $K$ and the mean curvature $H$ can be estimated with the method proposed in [37], and the principal curvatures $k_1$, $k_2$ are then computed from $K$ and $H$:
$$ \begin{cases} k_1 = H - \sqrt{H^2 - K} \\ k_2 = H + \sqrt{H^2 - K} \end{cases} \tag{3.3} $$
Curvature can be negative or positive according to whether it is consistent with the manifold orientation, so the absolute value of the curvature is taken as the weight. Points with large weights are more important than the others and should receive more attention in the process of dimension reduction.
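Equation (3.3) translates directly into code. In the sketch below, the clipping of H^2 - K at zero (to absorb estimation noise) is our addition.

```python
import numpy as np

def principal_curvatures(K, H):
    """Eq. (3.3): principal curvatures from Gaussian (K) and mean (H)
    curvature; H^2 - K is clipped at zero to absorb estimation noise."""
    disc = np.sqrt(np.maximum(H ** 2 - K, 0.0))
    return H - disc, H + disc

# Example weights: for the developable Swiss Roll (K = 0 everywhere),
# Section 4 uses the absolute mean curvature: weights = np.abs(H).
```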
In many applications each point has a special meaning and the points differ in importance, so they clearly should not all have the same chance of becoming an exemplar. Giving the same exemplar probability to data of different importance ignores this difference, so we introduce a weighted affinity propagation clustering algorithm to solve the problem.
In the traditional AP algorithm, a point is more likely to become an exemplar if more neighbors surround it. Therefore, for a point $x$ with a high weight, we add virtual points around it to increase its chance of becoming an exemplar. The virtual points have the same values and position as $x$, and their number depends on the weight of $x$; a function $V(w)$ relating the number of virtual points to the weight of the given point is constructed. Because the range of the weights differs across real applications, the number of virtual points cannot be prescribed exactly.
Let $x_i$ be a point with weight $w_i$ and $n_i$ virtual points. For the point set $X$, the total number of virtual points is $\sum_{i=1}^{N} n_i$, where $N$ is the number of points in $X$. Before the virtual points are added, the responsibility $r(i,k)$ and the availability $a(i,k)$ are $N \times N$ matrices; after $M$ virtual points are added, they become $(N+M) \times (N+M)$ matrices. The virtual points are copies of original points, so their responsibilities and availabilities are the same as those of the corresponding original points.
The rules for message propagation are updated as follows
$$ r^{(t+1)}(i,k) = (1-\lambda)\left( s(i,k) - \max_{k' \neq k}\left\{ a^{(t)}(i,k') + s(i,k') \right\} \right) + \lambda\, r^{(t)}(i,k) \tag{3.4} $$

$$ a^{(t+1)}(i,k) = \begin{cases} (1-\lambda) \min\left\{ 0,\; r^{(t)}(k,k) + \sum_{i' \notin \{i,k\}} \max\{0, r^{(t)}(i',k)\} \right\} + \lambda\, a^{(t)}(i,k), & i \neq k \\[4pt] (1-\lambda) \sum_{i' \neq k} \max\{0, r^{(t)}(i',k)\} + \lambda\, a^{(t)}(i,k), & i = k \end{cases} \tag{3.5} $$
where $\lambda$ is the damping factor of the iterative steps.
A higher value of the damping factor $\lambda$ leads to a slower convergence rate. During the iterations, oscillation and non-convergence can occur when two or more points are equally suitable as exemplars within the same cluster. Since the original points are copied many times, the values of $r(i,k)$ and $a(i,k)$ of an original point and its virtual points should be unified, and their sum is considered when judging whether the point is an exemplar. An example of the weighted AP algorithm is shown in Figure 4.
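One way to realize this construction, reusing the affinity_propagation sketch of Section 2.1, is to replicate each point, run AP on the augmented set, and then sum ("unify") the evidence of each point and its copies. The mapping V(w) from weight to number of copies is problem-specific, so the example below is an assumption.

```python
import numpy as np

def weighted_ap(X, weights, v_of_w, **ap_kwargs):
    """Weighted AP sketch: point x_i gets V(w_i) virtual copies with the
    same value and position; the evidence a + r of a point and its copies
    is summed before exemplars are chosen, as described in the text."""
    n = len(X)
    counts = 1 + np.array([max(0, int(v_of_w(w))) for w in weights])
    owner = np.repeat(np.arange(n), counts)   # augmented index -> original
    _, evidence = affinity_propagation(X[owner], **ap_kwargs)
    unified = np.zeros((n, n))
    # Accumulate evidence over every (copy of i, copy of k) pair.
    np.add.at(unified, (owner[:, None], owner[None, :]), evidence)
    return unified.argmax(axis=1)             # exemplar of each original point

# Hypothetical V(w): up to 3 extra copies, proportional to the weight.
# exemplars = weighted_ap(X, w, lambda wi: round(3 * wi / w.max()))
```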
An exemplar is a cluster center and is assumed to be more important than the other points in its class. We therefore choose the exemplars as the representative points and the rest as the common ones. Besides the exemplars, points whose weights exceed a given threshold are also taken as representative points; such points are important but lie far from the others. In Figure 4(c), the red points are the representative data and the remaining points are the common data.
In this section, we present the structure-information-preserving projection for the representative data, where the structure information refers to the sparse neighborhood and the Geodesic distance. The representative data are more important and determine the distribution of the original data, so their dimension reduction is expected to yield more discriminative features.
The basic idea of the sparse neighborhood preserving projection is that the weights that reconstruct a point $x_i$ from its neighbors should also reconstruct its projection $y_i$. The projection constructs sparse reconstructive weights between representative data and preserves the neighborhood similarity relationship in the low-dimensional subspace. Let $X_i$ denote the set of neighbors of point $x_i$; to minimize the reconstruction error, the weights $w_i^S$ corresponding to $x_i$ are obtained by solving an $\ell_1$-minimization problem
$$ \min \|w_i^S\|_1, \quad \text{s.t.} \ \|x_i - X_i w_i^S\| < \varepsilon \tag{3.6} $$
The weights $w_i^S$ are the reconstruction coefficients under the sparsity rule and are obtained by solving (3.6). The sparse reconstruction error function $E_S(x_i)$ in the projection space can be expressed as

$$ E_S(x_i) = \left\| P_i^T x_i - P_i^T X w_i^S \right\|^2 \tag{3.7} $$
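In practice, problem (3.6) is commonly solved through its Lagrangian (LASSO) relaxation. The sketch below uses scikit-learn's Lasso as an off-the-shelf l1 solver, with the regularization strength alpha playing the role of the tolerance epsilon; the value shown is an assumption.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_weights(X_rep, i, alpha=0.01):
    """Sparse reconstruction weights w_i^S of Eq. (3.6) for point x_i,
    solved in the Lagrangian (LASSO) form min ||x_i - Xw||^2 + alpha*||w||_1.
    X_rep is (n_points, D); the point itself is excluded as a basis."""
    Xi = np.delete(X_rep, i, axis=0).T          # D x (n-1): others as columns
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(Xi, X_rep[i])                     # reconstruct x_i from the rest
    return np.insert(lasso.coef_, i, 0.0)       # w_ii = 0: no self-reconstruction
```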
The sparse neighborhood preserving projection keeps the neighborhood relationship between a point and its neighbors. However, for some data such as the Swiss Roll, the sparse neighborhood relationship is not enough to reflect the real relationship, so the Geodesic distance preserving projection is proposed to describe the structure of the data points in a complementary way.
The Geodesic distance preserving projection keeps the proportions of the Geodesic distances between the points in the set. The matrix $w_i^G$ encodes the Geodesic distance relationship between point $x_i$ and its neighbors; its component $w_{i,j}^G$ is defined as
$$ w_{i,j}^G = \frac{e^{-d_G(x_i,x_j)}}{\sum_{x_j \in \mathrm{InClass}(x_i)} e^{-d_G(x_i,x_j)}} \tag{3.8} $$
The weight $w_{i,j}^G$ denotes the distance relationship between $x_i$ and $x_j$. To keep the Geodesic distance relationship in the projection space, the reconstruction error function $E_D(x_i)$ is

$$ E_D(x_i) = \left\| P_i^T x_i - P_i^T X w_i^G \right\|^2 \tag{3.9} $$
The neighbors of point $x_i$ can be chosen by k-NN or $\varepsilon$-neighborhoods; here k-NN is used. To keep the sparse neighborhood relationship we minimize $E_S(x_i)$, and at the same time we minimize $E_D(x_i)$ to keep the Geodesic distance. To preserve both relationships in the process of dimension reduction, we obtain the following total error function
$$ E(X) = \sum_{i=1}^{N} E(x_i) = \lambda_1 \sum_{i=1}^{N} E_S(x_i) + \lambda_2 \sum_{i=1}^{N} E_D(x_i) = \lambda_1 \sum_{i=1}^{N} \left\| P_i^T x_i - P_i^T X w_i^S \right\|^2 + \lambda_2 \sum_{i=1}^{N} \left\| P_i^T x_i - P_i^T X w_i^G \right\|^2 \tag{3.10} $$
where $\lambda_1$ and $\lambda_2$ are parameters that balance the sparse neighborhood relationship against the Geodesic distance. The $N$ projection matrices are independent of each other, so the objective function can be treated as the sum of $N$ subfunctions $E(x_i)$ that are optimized separately, which gives

$$ E(x_i) = \lambda_1 \left\| P_i^T x_i - P_i^T X_i w_i^S \right\|^2 + \lambda_2 \left\| P_i^T x_i - P_i^T X_i w_i^G \right\|^2 = \lambda_1\, \mathrm{tr}\!\left[ P_i^T (x_i - X_i w_i^S)(x_i - X_i w_i^S)^T P_i \right] + \lambda_2\, \mathrm{tr}\!\left[ P_i^T (x_i - X_i w_i^G)(x_i - X_i w_i^G)^T P_i \right] = \mathrm{tr}\!\left[ P_i^T (\lambda_1 M_i^S + \lambda_2 M_i^G) P_i \right] \tag{3.11} $$

where $M_i^S = (x_i - X_i w_i^S)(x_i - X_i w_i^S)^T$ and $M_i^G = (x_i - X_i w_i^G)(x_i - X_i w_i^G)^T$.
To avoid degenerate solutions, the constraint $P_i^T X_i X_i^T P_i = I$ is added. The objective can therefore be expressed as the optimization problem

$$ \min\ \mathrm{tr}\!\left( P_i^T X_i (I - w_i - w_i^T + w_i^T w_i) X_i^T P_i \right), \quad \text{s.t.} \ P_i^T X_i X_i^T P_i = I \tag{3.12} $$

where $w_i = \lambda_1 w_i^S + \lambda_2 w_i^G$, and $\lambda_1$, $\lambda_2$ satisfy $\lambda_1 + \lambda_2 = 1$.
The projection matrix that minimizes the objective function is given by the minimum-eigenvalue solutions of the generalized eigenvalue problem

$$ X_i (I - w_i - w_i^T + w_i^T w_i) X_i^T P_i = \lambda X_i X_i^T P_i \tag{3.13} $$

This problem can be solved with the generalized eigenvalue decomposition mentioned in the related work. For a data point $x_i$, the projection in the low-dimensional space is $y_i = P_i^T x_i$.
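A sketch of the local solver for (3.12)-(3.13) follows, reading $w_i$ as the combined local weight matrix over the neighborhood of $x_i$; the small ridge added to $X_i X_i^T$ to keep the constraint matrix positive definite is our addition.

```python
import numpy as np
from scipy.linalg import eigh

def projection_matrix(Xi, WS, WG, lam1=0.5, lam2=0.5, d=2, ridge=1e-8):
    """Solve Eq. (3.13) for one local projection P_i.
    Xi: D x n matrix (the point and its neighbors as columns);
    WS, WG: n x n sparse and Geodesic weight matrices; lam1 + lam2 = 1."""
    W = lam1 * WS + lam2 * WG
    I = np.eye(W.shape[0])
    A = Xi @ (I - W - W.T + W.T @ W) @ Xi.T      # left side of (3.13)
    B = Xi @ Xi.T + ridge * np.eye(Xi.shape[0])  # constraint matrix, regularized
    evals, evecs = eigh(A, B)                    # generalized symmetric problem
    return evecs[:, :d]                          # d smallest eigenvalues -> P_i
```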
Algorithm 1: Sparse neighborhood and Geodesic distance preserving projection.
Input: representative dataset $X = \{x_1, x_2, \dots, x_N\}$, the graph $G$, and the values of $\lambda_1$ and $\lambda_2$.
Output: projection matrices $P_i$, $i = 1, 2, \dots, N$.
Step 1: Calculate the sparse weight matrix $w_i^S$ and the Geodesic distance matrix $w_i^G$; transform the optimization problem into the generalized eigenvalue problem (3.13).
Step 2: Solve the problem and sort the eigenvalues as $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_d \geq 0 \geq \lambda_{d+1} \geq \cdots \geq \lambda_D$.
Step 3: Select the eigenvectors corresponding to positive eigenvalues to form the projection matrix.
For a common point, the projection positions of its adjacent common points are themselves uncertain, so neither the quality nor the computational cost of dimension reduction can be guaranteed if its location depends on all adjacent representative and common points. On the other hand, the representative data are more important than the common data, so the projection location of a common point can be obtained solely from its adjacent representative points. Here, structure information preserving means keeping the Geodesic distance relationship between the common point and its adjacent representative points, which both preserves the distance relationship and reduces the running time.
The unknown position of a common point in the low-dimensional space is obtained through a linear combination of its adjacent representative points, as shown in Figure 5. The crucial issue for the success of the linear combination method is whether appropriate weights for all components can be obtained efficiently. The linear combination can be expressed as
$$ y_i = \sum_{j=1}^{n} \lambda_j y_j \tag{3.14} $$
where $y_j$ is the projection coordinate of the $j$-th representative point in the adjacent set $Q_i$ and $\lambda_j$ is the weight assigned to the projection point $y_j$. The adjacent representative set $Q_i$ of the common point $x_i$ is defined as follows: $y_j \in Q_i$ if $d_G(x_i, x_j) < d$, and $y_j \notin Q_i$ otherwise.
Here, we utilize the distance representation to construct the relationship between the common point and its adjacent representative points in the original space. The distance representation can discover the local relationship and has natural discriminating power. Given the distance threshold, all points within the threshold band are included in the neighbor set. The weight $\lambda_j$ is defined as
$$ \lambda_j = \frac{e^{-d_G(x_i,x_j)^p}}{\sum_{x_j \in Q_i} e^{-d_G(x_i,x_j)^p}} \tag{3.15} $$
where $p$ is an adjustable parameter.
Through the linear combination of the adjacent representative points of a common point $x_i$, we directly obtain the projection location of $x_i$ in the low-dimensional space. The method preserves the Geodesic distance relationship between the common point and its representative points and avoids solving a system of equations.
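The recovery step amounts to a few lines. The sketch below assumes a precomputed Geodesic distance matrix (for instance from the geodesic_distances sketch above) and reads the exponent in (3.15) as $d_G(x_i, x_j)^p$.

```python
import numpy as np

def project_common_point(i, dG, Y_rep, rep_idx, d_thresh, p=2.0):
    """Eqs. (3.14)-(3.15): project common point x_i as a weighted linear
    combination of the projections of its adjacent representative points.
    Assumes at least one representative point lies within d_thresh."""
    dists = dG[i, rep_idx]            # Geodesic distances to representatives
    Q = dists < d_thresh              # adjacent representative set Q_i
    lam = np.exp(-dists[Q] ** p)
    lam /= lam.sum()                  # normalized weights, Eq. (3.15)
    return lam @ Y_rep[Q]             # y_i = sum_j lam_j * y_j, Eq. (3.14)
```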
To evaluate the performance of the proposed method, we conduct comparison experiments on benchmark datasets against related methods: Linear Discriminant Analysis (LDA), Locality Preserving Projection (LPP), Multidimensional Scaling (MDS), Factor Analysis (FA) [38], Principal Component Analysis (PCA), Probabilistic Principal Component Analysis (PPCA) [39,40], Stochastic Neighbor Embedding (SNE) [41], Kernel t-distributed Stochastic Neighbor Embedding (Kt-SNE) [42], Symmetric Stochastic Neighbor Embedding (SSNE) [43], and the Variational Auto-encoder (VAE) [32]. FA is a statistical method that describes the variability among correlated variables in terms of a potentially smaller number of unobserved variables called factors; the difference between FA and PCA is whether the objective explicitly identifies certain unobservable factors from the observed variables. PPCA expresses PCA as the maximum likelihood solution of a probabilistic latent variable model, so information about measurement uncertainty can be used to produce a more accurate model. Kt-SNE and SSNE are based on Stochastic Neighbor Embedding, which minimizes the Kullback-Leibler divergence between the similarity matrix of a high-dimensional data set and its counterpart in a low-dimensional embedding; Kt-SNE preserves the flexibility of basic t-SNE while enabling explicit out-of-sample extensions, and SSNE uses a symmetrized cost function with simpler gradients. VAE is an auto-encoder whose training is regularized to avoid overfitting and to ensure that the latent space has good properties. The compared methods thus include both classical methods and improved methods.
Datasets with class labels are projected into the low-dimensional space, and the projected points are classified with the AP algorithm or an SVM classifier, depending on the experiment. Classification accuracy is used to evaluate the performance of the different methods and is defined as
$$ Acc = \frac{\sum_{i=1}^{N} \delta(y_i, c_i)}{N} \tag{4.1} $$
where $N$ is the number of data points, $y_i$ and $c_i$ denote the class labels after and before dimension reduction, and $\delta(y,c)$ equals one if $y = c$ and zero otherwise.
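The metric itself is one line; note that when the projected points are clustered (e.g., by AP), the cluster labels must first be mapped to class labels, for instance by majority vote, before delta can be applied; the text leaves this mapping implicit.

```python
import numpy as np

def accuracy(labels_after, labels_before):
    """Eq. (4.1): the fraction of points whose label is preserved."""
    y, c = np.asarray(labels_after), np.asarray(labels_before)
    return float(np.mean(y == c))
```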
To visualize the results of dimension reduction, 3D datasets are projected onto a 2D surface to show the performance of the different methods intuitively. In our experiments, the 3D datasets are Vowel, Haberman, Swiss Roll, and the Rabbit model. The Vowel dataset has 871 samples in 6 classes, and the Haberman dataset has 306 samples in 2 classes. The visualization results are shown in Figures 6 and 7, and the classification accuracy is given in Table 1.
Table 1. Classification accuracy of the compared methods on the 3D datasets.

| Dataset | FA | LDA | LPP | MDS | PCA | PPCA | Kt-SNE | SNE | SSNE | Ours |
|---|---|---|---|---|---|---|---|---|---|---|
| Swiss Roll | 0.5847 | 0.8423 | 0.6980 | 0.9290 | 0.9290 | 0.8760 | 0.9067 | 0.9713 | 0.5847 | 0.9823 |
| Vowel | 0.5109 | 0.5993 | 0.4811 | 0.6383 | 0.6383 | 0.4937 | 0.7141 | 0.6923 | 0.2181 | 0.7566 |
| Haberman | 0.5131 | 0.4547 | 0.5327 | 0.5948 | 0.5948 | 0.5797 | 0.6634 | 0.6830 | 0.5784 | 0.6830 |
Swiss Roll is a typical dataset in which the Euclidean distance between two points may be small while their Geodesic distance is large. Since the Swiss Roll is developable and the Gaussian curvature at every point is zero, the absolute values of the mean curvature are taken as the weights. In this experiment, the Swiss Roll is generated with a Matlab program, as shown in Figure 8(a): N = 3000; t = (3*pi/2)*(1 + 2*rand(1,N)); s = 20*rand(1,N); X = [t.*cos(t); s; t.*sin(t)];. The projection results of the comparative methods are shown in Figure 8(b)-8(k), and Figure 8(l) shows the projection result of our method.
The comparative methods based on Euclidean distance, such as MDS, PCA, PPCA, and SNE, achieve good dimension reduction results on the Swiss Roll dataset, but the distribution shapes of the projected points do not meet the needs of surface construction. The Swiss Roll is developable, meaning it can be fully flattened onto a 2D surface. In this experiment our method preserves the Geodesic distance of the representative data by setting λ1 = 0 and λ2 = 1 in formula (3.10), so the Swiss Roll is unfolded onto the 2D surface, as shown in Figure 8(l). Compared with these methods, ours not only achieves higher classification accuracy but is also useful in model design.
The Rabbit model is widely used to test results in computer-aided design. The model has 30,000 points, as shown in Figure 9(a), and is divided into 9 classes by the weighted AP clustering algorithm described in Section 3.2.
The results in Figure 10 show that LDA, LPP, MDS, PCA, PPCA, and our method can all reflect the shape of the rabbit; the differences between these methods amount to different viewpoints. We also observe that the projected points overlap in the 2D space. The reason is that the rabbit model is a closed surface, and no method can avoid the overlap without splitting the model into different parts. From the experimental results, the proposed method obtains better results than the compared methods for 3D data points.
In this subsection, we show the classification performance of the proposed method on three high-dimensional datasets: control_uni, YaleB, and caltech101_silhouettes_16_uni. The control_uni dataset is a measurement dataset with 600 samples in 6 classes; the original dimension of each sample is 60. The YaleB dataset contains 2414 front-view face images of 38 individuals, as shown in Figure 11; the original dimension of each sample is 1024. The data in the caltech101_silhouettes_16_uni dataset are sparse, with an original dimension of 256; the dataset has 8641 samples in 101 classes. In our experiments, the methods are compared under different target dimensions.
In the experiments, the values of λ1 and λ2 in formula (3.10) are both 0.5, which makes the projection of the representative points preserve the sparse relationship and the Geodesic distance at the same time. The value of p in formula (3.15) is varied from 0.5 to 3 to study the influence of the common data on the projection results. The classification accuracy curves for different values of p on the control_uni, YaleB, and caltech101_silhouettes_16_uni datasets are shown in Figures 12(a), 13(a), and 14(a), respectively. On the control_uni dataset, the best result of our method, obtained with p = 3, is compared with the results of the other methods in Figure 12(b). For the YaleB and caltech101_silhouettes_16_uni datasets, the accuracy curves are shown in Figures 13(b) and 14(b), where p = 2 in our method. In these graphs, the vertical axis is the classification accuracy and the horizontal axis is the dimension.
The value of p affects the relationship between a common point and its adjacent representative points. The results in Figures 12(a), 13(a), and 14(a) show that the distribution of the common data affects the classification accuracy, and the best value of p varies across datasets.
The accuracy curves of the different methods in Figures 12(b), 13(b), and 14(b) show that the classification results of LDA, LPP, PCA, SNE, and our method vary within a small range, which means that the target dimension has little influence on these methods and that they are stable with respect to the dimensional factor. The worst results on the three datasets are below 30%, 10%, and 10%, respectively, whereas our results are the best at every tested dimension, exceeding 90%, 68%, and 80%. In terms of both stability and accuracy, our method performs well and is superior to the compared methods.
In this subsection, we analyze the effect of the parameters λ1 and λ2 on the dimension reduction results. In the objective (3.10), λ1 and λ2 adjust the proportion of the sparse neighborhood relationship to the Geodesic distance relationship. For the three high-dimensional datasets, the classification accuracies for different values of λ1 and λ2 are listed in Tables 2, 3, and 4.
Table 2. Classification accuracy on the control_uni dataset for different values of λ1 and λ2.

| Dim | λ1, λ2 = 0, 1 | λ1, λ2 = 0.2, 0.8 | λ1, λ2 = 0.5, 0.5 | λ1, λ2 = 0.8, 0.2 | λ1, λ2 = 1, 0 |
|---|---|---|---|---|---|
| 50 | 0.9367 | 0.9367 | 0.9400 | 0.9400 | 0.9400 |
| 40 | 0.9333 | 0.9333 | 0.9367 | 0.9367 | 0.9367 |
| 30 | 0.9333 | 0.9300 | 0.9367 | 0.9400 | 0.9400 |
| 20 | 0.9333 | 0.9333 | 0.9400 | 0.9400 | 0.9400 |
| 10 | 0.9233 | 0.9333 | 0.9433 | 0.9400 | 0.9267 |
Table 3. Classification accuracy on the YaleB dataset for different values of λ1 and λ2.

| Dim | λ1, λ2 = 0, 1 | λ1, λ2 = 0.2, 0.8 | λ1, λ2 = 0.5, 0.5 | λ1, λ2 = 0.8, 0.2 | λ1, λ2 = 1, 0 |
|---|---|---|---|---|---|
| 1000 | 0.5527 | 0.5222 | 0.5752 | 0.5405 | 0.5924 |
| 900 | 0.6023 | 0.6046 | 0.5434 | 0.6695 | 0.6443 |
| 800 | 0.5297 | 0.5846 | 0.6350 | 0.6665 | 0.5814 |
| 700 | 0.5246 | 0.5890 | 0.5717 | 0.6282 | 0.5917 |
| 600 | 0.5477 | 0.5587 | 0.6095 | 0.6339 | 0.6682 |
Table 4. Classification accuracy on the caltech101_silhouettes_16_uni dataset for different values of λ1 and λ2.

| Dim | λ1, λ2 = 0, 1 | λ1, λ2 = 0.2, 0.8 | λ1, λ2 = 0.5, 0.5 | λ1, λ2 = 0.8, 0.2 | λ1, λ2 = 1, 0 |
|---|---|---|---|---|---|
| 240 | 0.7180 | 0.7246 | 0.7213 | 0.7235 | 0.7191 |
| 210 | 0.6998 | 0.6730 | 0.7250 | 0.7179 | 0.7254 |
| 170 | 0.6619 | 0.7534 | 0.7285 | 0.6776 | 0.6863 |
| 120 | 0.6664 | 0.7379 | 0.7531 | 0.7254 | 0.7201 |
| 80 | 0.7155 | 0.7131 | 0.7052 | 0.7209 | 0.6765 |
Tables 2, 3, and 4 show that the classification accuracies vary only slightly with λ1 and λ2. That is, different values of λ1 and λ2 have little influence on the extraction of discriminative features in most cases, although the structure of the feature vectors in the projection space differs. This may not matter for the high-dimensional datasets, but for the 3D datasets the visualization results in 2D space are completely different: as shown in Figure 8, the classification accuracies of SNE and our method are both high, yet the distributions of the projected points differ greatly.
In addition, SVM is used as the classifier in this subsection, with 80% of the data used for training and the rest for testing, whereas the projected points in the other experiments are classified with the AP algorithm. The results in Tables 2, 3, and 4 obtained with λ1 = 0.5 and λ2 = 0.5 therefore differ from the corresponding accuracy values of our method in Figures 12(b), 13(b), and 14(b). The reason is that the projected points are classified with different classifiers, which implies that the classifier affects the classification accuracy.
To evaluate the computational complexity of our method, we compare the running time of each method. Table 5 records the running time of each method on the three high-dimensional datasets. For each method, the parameter setting that yields the best results is chosen. Each method is run 10 independent times on each dataset, and the mean results are reported. The computations are executed on a Linux server with 32 processors (2.10 GHz each) and 64 GB of RAM, using MATLAB implementations.
Table 5. Running time of each method on the three high-dimensional datasets.

| Method | control_uni | YaleB | caltech101_silhouettes_16_uni |
|---|---|---|---|
| FA | 0.1873 | 90.8678 | 16.3910 |
| LDA | 0.0089 | 1.6779 | 0.0580 |
| LPP | 0.0476 | 2.6535 | 12.14476 |
| MDS | 0.0018 | 0.1924 | 0.0405 |
| PCA | 0.0013 | 0.1888 | 0.0378 |
| PPCA | 5.7341 | 131.0023 | 167.1789 |
| Kt-SNE | 6.4335 | 25.3993 | 66.5125 |
| SNE | 104.1011 | 361.7535 | 114.2378 |
| SSNE | 99.7138 | 396.2319 | 164.1256 |
| Ours | 0.2604 | 29.2127 | 16.3784 |
Table 5 shows that comparative methods such as LDA, MDS, and PCA run faster than the proposed method, while the computational efficiency of the SNE-based methods is lower. In our method, the data partition is performed before the objective function is solved, and the calculation of the Geodesic distances and the clustering of the weighted points occupy most of the running time.
The Floyd-Warshall algorithm is adopted to compute the Geodesic distances on the constructed graph. Although there are algorithms with a better worst-case runtime than Floyd-Warshall, they are much more complicated and involve intricate data structures; therefore, the Floyd-Warshall algorithm, with time complexity O(n^3), remains the best choice in our experiments. Compared with the standard AP algorithm, the weighted AP algorithm adds virtual points to the dataset but has the same O(n^3) time complexity per iteration. To reduce the running time, we optimize the data structures and the algorithm flow. In short, some methods are faster than ours, whereas our method achieves a significant improvement in classification accuracy and in the visualization results.
Since the importance of the data differs in many applications, we partition the points into representative data and common data, and two structure-preserving methods are adopted to reduce the dimension. For the representative data, the projection preserves the sparse neighborhood relationship and the Geodesic distance; the position of each common point is then located through a linear combination based on the distribution of the representative data. The proposed method reduces the distortion introduced by dimension reduction and extracts more discriminative information. In future work, we will focus on the dimension reduction of large-scale weighted point sets.
The work is partially supported by the National Natural Science Foundation of China (Nos. 61702310, 61803237, 61772322, U1836216), the major fundamental research project of Shandong, China (No. ZR2019ZD03), and the Taishan Scholar Project of Shandong, China (No. ts20190924).
The authors declare that they have no conflict of interest.