
Persons enduring serious mental illness (SMI) and living in supported housing facilities often receive inadequate care, which can negatively impact their health outcomes. To address these challenges, it is crucial to prioritize interventions that promote personal recovery and address the unique needs of this group. When developing effective, equitable, and relevant interventions, it is essential to consider the experiences of persons with an SMI. By incorporating their perspectives, we can enhance the understanding, and thereby the design and implementation, of activity- and recovery-oriented interventions that promote health, quality of life, and social connectedness in this vulnerable population. Thus, the aim of this study is to explore the stories of participants partaking in Everyday Life Rehabilitation and how they make sense of their engagement in everyday life activities and their recovery processes.
Applying a narrative analysis, this study explores the stories of seven individuals with an SMI residing in Swedish supported housing facilities who participated in the Everyday Life Rehabilitation (ELR) program over six months, and how they retrospectively make meaning of their engagement in everyday life activities and recovery processes.
The participants' stories about their rehabilitation and personal recovery pathways elucidate how the inherent power of the activity, as well as the support the participants received to get started and succeed, had a significant impact on their self-identity, confidence, motivation, mattering, life prospects, and vitality. The participants valued the transparent steps along the process, the weekly meetings, the signals, beliefs, and feedback communicated throughout, and the persistent, adaptive, yet supportive approach to their personal progress.
This study underscores the need for interventions that prioritize meaningful activities and are sensitive to the complexity of the personal recovery process, especially in supported housing facilities. Future research should further explore effective strategies and mechanisms to promote personal recovery and to reduce the stigma associated with SMI.
Citation: Rosaline Bezerra Aguiar, Maria Lindström. Stories of taking part in Everyday Life Rehabilitation - A narrative inquiry of residents with serious mental illness and their recovery pathway[J]. AIMS Public Health, 2024, 11(4): 1198-1222. doi: 10.3934/publichealth.2024062
Connectedness, Hope, Identity, Meaning, and Empowerment;
Everyday Life Rehabilitation;
Housing staff;
Occupational Therapist;
Physiotherapist;
Randomized Controlled Trial;
Sustainable Development Goals;
Serious mental illness;
Simple Taxonomy for Supported Accommodation;
Years lived with disability
Clustering is a powerful technique for data analysis and plays an irreplaceable role in data mining. It exploits the similarities between samples to divide them into several clusters. It is applied in numerous areas, including image processing [1,2], medicine [3,4], text segmentation [5], community network analysis [6], and bioinformatics [7,8]. Researchers have developed many algorithms with satisfactory performance, including the partition-based K-means [9], the hierarchy-based BIRCH [10], the density-based DBSCAN [11], the grid-based WaveCluster [12], and graph-theory-based spectral clustering [13]. Among them, DBSCAN is a well-performing density-based clustering algorithm that determines clusters from density-connectivity relationships. It can cluster arbitrarily shaped datasets and is highly resistant to noise. However, its two parameters ϵ and MinPts strongly influence the final clustering results, and it performs poorly on datasets with varying densities.
In 2014, Rodriguez and Laio [14] published the density peak clustering algorithm (DPC). Owing to its efficiency, robustness, and conceptual simplicity, it has attracted growing attention. DPC selects cluster centers by two criteria: the centers of different clusters are far from each other, and each center is surrounded by points of lower local density. The remaining points are then assigned in one step, each to the same cluster as its nearest neighbor of higher density. While DPC has clear advantages over other classical algorithms, it suffers from several problems: 1) It requires manual involvement when selecting cluster centers, which is difficult on datasets where the boundaries between clusters are unclear. 2) DPC tends to miss peaks in sparse areas, so it does not achieve satisfactory results when clusters differ greatly in density. 3) The assignment of non-peak points ignores the local structure around each sample, so a single misclassification during assignment can trigger a domino effect. In light of these problems, scholars have proposed various improved clustering algorithms based on DPC to address its drawbacks.
In this context, ADPC-KNN [15] used K-nearest neighbors to determine the cut-off distance and proposed a method for automatically selecting cluster centers, but it still underperformed on datasets with varying densities. NDPC [16] reduced the density gap between samples in sparse and dense areas with a new local-density calculation, so that peaks could also be found in sparse regions; unfortunately, it was unable to discover all peaks in datasets with large density differences. AmDPC [17] replaced the original local density with a density deviation and proposed a multi-peak auto-clustering approach that overcame the poor performance of the original DPC on non-convex datasets with low-density peaks, but its parameter selection was complex. FKNN-DPC [18] solved the sample-assignment errors of the DPC strategy with a fuzzy weighted K-nearest-neighbor technique; although more robust than DPC, it still required manual selection of cluster centers. SNN-DPC [19] adopted a two-step assignment of necessary and possible subordination to limit the domino effect caused by the DPC allocation rule, but it required human involvement and was more complex than DPC. 3W-DPET [20] suggested a three-way density peak clustering approach based on evidence theory that solved the mislabel-propagation problem of DPC, yet it could not automatically determine the number of clusters. DPC-KNN [21] incorporated the notion of K-nearest neighbors into distance calculation and sample assignment to improve clustering on non-spherical datasets, but it still required manual selection of cluster centers. This review of recent DPC variants shows that they generally do not solve the poor performance of DPC on datasets with large density differences or non-convex shapes, and even where individual algorithms tackle this issue, manual involvement is still required when selecting cluster centers.
The primary contribution of this study is a novel density peak clustering algorithm (AKDPC) that automatically selects cluster centers and adaptively completes the corresponding clusters. The method first divides the samples into core and non-core points by their mutual K-nearest-neighbor values; non-core points are left for a second assignment stage, reducing their impact on the final clustering. Only the average distance to a sample's K-nearest neighbors is used as the indicator for selecting cluster centers: among the remaining core points, the one with the lowest average K-nearest-neighbor distance is selected as the next cluster center. AKDPC then adds core points that satisfy the merging condition to the corresponding clusters, and finally clusters the non-core points. AKDPC is fully automatic and is better suited to complex datasets with large density differences or non-convex shapes.
DPC is an efficient density-based clustering algorithm. It assumes that cluster centers are surrounded by samples of lower local density and that the centers of different clusters are relatively far from each other. DPC constructs a decision graph from the samples' local densities ρ and relative distances δ to select the centers.
The original DPC computes sample densities with either a cut-off kernel or a Gaussian kernel; the two choices suit different datasets. With the cut-off kernel, the local density of sample i is defined as Eq (1):
$$\rho_i=\sum_{j\neq i}\chi(d_{ij}-d_c),\qquad \chi(x)=\begin{cases}1, & x<0\\ 0, & x\geq 0\end{cases}\tag{1}$$
Alternatively, the density can be obtained with a Gaussian kernel, defined as Eq (2):
$$\rho_i=\sum_{j\neq i}\exp\!\left[-\left(\frac{d_{ij}}{d_c}\right)^{2}\right]\tag{2}$$
where dij is the distance between two different samples, and dc is the cut-off distance.
δ denotes the distance from a sample to the nearest sample with a higher density, and it is defined as follows:
$$\delta_i=\min_{j:\,\rho_j>\rho_i}\left(d_{ij}\right)\tag{3}$$
For the sample with the highest density, the relative distance is set to the largest value among all samples, as defined in Eq (4):
$$\delta_i=\max_{j\neq i}\left(\delta_j\right)\tag{4}$$
After obtaining the ρ and δ values of all samples, DPC takes the points with simultaneously large ρ and δ as the cluster centers. Finally, each remaining point is given the same label as its nearest neighbor of higher density.
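To make the decision-graph quantities concrete, the following is a minimal NumPy sketch (not the original implementation; the function name and the use of SciPy's cdist are our own choices) that computes ρ with the Gaussian kernel of Eq (2) and δ according to Eqs (3) and (4):

```python
# Minimal sketch of the DPC quantities: Gaussian-kernel density (Eq (2)) and
# relative distance (Eqs (3)-(4)).  Illustrative only, not the original implementation.
import numpy as np
from scipy.spatial.distance import cdist

def dpc_rho_delta(X, dc):
    d = cdist(X, X)                                  # pairwise distances d_ij
    rho = np.exp(-(d / dc) ** 2).sum(axis=1) - 1.0   # drop the i == j contribution
    n = len(X)
    delta = np.full(n, np.inf)
    for i in range(n):
        denser = np.where(rho > rho[i])[0]           # samples with higher density
        if denser.size > 0:
            delta[i] = d[i, denser].min()            # Eq (3): distance to the nearest denser sample
    delta[np.argmax(rho)] = delta[np.isfinite(delta)].max()   # Eq (4): highest-density point
    return rho, delta

# Cluster centers are then read off the decision graph as points with large rho and large delta.
```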
To overcome DPC's need for manual intervention when choosing cluster centers and its poor performance on complex datasets with varying densities or non-convex shapes, we propose AKDPC. The AKDPC consists of three main parts: 1) classifying samples into core and non-core points based on their mutual K-nearest-neighbor values; 2) clustering the core points; 3) allocating the remaining non-core points.
The classification method decides whether each sample point is core or non-core from its mutual K-nearest-neighbor value.
The mutual K-nearest-neighbor value MKi of sample xi is the number of points in xi's K-nearest-neighbor set that also include xi in their own K-nearest-neighbor sets, and MKi is defined as Eq (5).
$$MK_i=\sum_{j\in KNN_i} f(x_j),\qquad f(x_j)=\begin{cases}1, & x_i\in KNN_j\\ 0, & \text{otherwise}\end{cases}\tag{5}$$
where KNNi is a collection of K nearest neighbors of sample i, defined as Eq (6), and where K is the first parameter to be specified in our algorithm.
$$KNN_i=\{x_j\mid \min_K(d_{ij}),\ x_i,x_j\in X,\ i\neq j\}\tag{6}$$
The MK values of samples have the following characteristics:
1) The MK value of a point reflects its spatial relationship with its neighbors. If the value is large, the point is closely packed with its neighbors or lies in the same distribution area, so it is treated as a core point. If the value is small, the point is relatively isolated from the surrounding points and is treated as a non-core point.
2) The MK value of a sample point is not biased by the density of the region it lies in, because it only reflects the proximity between the sample and its neighbors.
We classify all points into core and non-core points by defining a classification threshold MKT, given by Eq (7):
$$MKT=\frac{\sum_{j=1}^{N} MK_j}{N}\tag{7}$$
where N is the total number of samples.
Definition 1 (Core point). If the mutual K-nearest neighbor value MKi of sample i is larger than or equal to MKT, then it is a core point.
Definition 2 (Non-core point). If the mutual K-nearest neighbor value MKi of sample i is less than the MKT, then it is a non-core point.
With the method above, non-core points can be identified in advance, preventing them from having a large impact on the final clustering. Algorithm 1 details the classification of the sample points.
Algorithm 1: Classification of core and non-core points |
Input: Dataset D = {x1, x2, ..., xn}, K |
Output: core points CG = {g1, g2, ..., gm}, non_corepoints NG = {n1, n2, ..., nj}, distance matrix {dij}n×n, Sorting Matrix SDn×n |
1: Calculate distance matrix Dn×n = {dij}n×n |
2: Sort the distance matrix Dn×n in ascending order and record it as the SDn×n |
3: Calculate the MK value for all samples based on Eq (5) |
4: Calculate the MKT based on Eq (7) |
5: Create set CG = ∅, NG = ∅ |
6: For each sample x in Data Do |
7: If MKx ≥ MKT Then |
8: CG = CG∪x |
9: If MKx < MKT Then |
10: NG = NG∪x |
11: End if |
12: End for |
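A minimal Python sketch of Algorithm 1 might look as follows, assuming a precomputed n × n distance matrix d; the variable names are illustrative and not taken from the released code:

```python
# A minimal Python sketch of Algorithm 1 (core / non-core split via mutual K-nearest
# neighbors, Eqs (5)-(7)).  `d` is a precomputed n x n distance matrix; names are ours.
import numpy as np

def split_core_points(d, K):
    n = d.shape[0]
    order = np.argsort(d, axis=1)            # SD: neighbors of each sample, nearest first
    knn = order[:, 1:K + 1]                  # K nearest neighbors, excluding the sample itself
    in_knn = np.zeros((n, n), dtype=bool)
    in_knn[np.repeat(np.arange(n), K), knn.ravel()] = True     # in_knn[i, j]: x_j in KNN_i
    mk = np.array([in_knn[knn[i], i].sum() for i in range(n)])  # Eq (5): mutual K-NN value
    mkt = mk.sum() / n                       # Eq (7): average MK value as the threshold
    core = np.where(mk >= mkt)[0]            # Definition 1
    non_core = np.where(mk < mkt)[0]         # Definition 2
    return core, non_core, order
```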
To avoid the original DPC algorithm's need for manual intervention when selecting cluster centers, and its tendency to overlook cluster centers located in low-density areas, we redefine the local density of a sample from its distances to its nearest neighbors and suggest a new approach for merging core points. Our method needs no human involvement in the selection of cluster centers, and the whole clustering process is fully automatic. Figure 1 displays the clustering results of the original DPC algorithm and of our algorithm on the classical dataset Jain [22]. As the figure shows, no matter how the cut-off distance is chosen, the original DPC selects its cluster centers in the denser lower branch and never completes the correct classification, whereas our method accurately identifies the cluster centers in the two regions of different density.
First, we define Dki of core sample point i as the average distance from core point i to its k nearest neighbors, as in Eq (8):
$$Dk_i=\frac{1}{k}\sum_{j\in N^{k}_{i}} d_{ij}\tag{8}$$
where Nki denotes the collection of k nearest neighbors of the core point i, defined as Eq (9), and where k is the second parameter to be specified in our algorithm.
$$N^{k}_{i}=\{x_j\mid \min_k(d_{ij}),\ x_i,x_j\in X,\ i\neq j\}\tag{9}$$
The nearest neighbor mean Dk of a sample point has the following properties:
1) If the Dk of a core point is particularly small, the core point lies in a very dense area and its neighbors are all very close to it. Conversely, if the Dk value is very large, the core point is far from its neighbors.
2) The smaller the Dk value of a core point, the higher its indicated density and the more qualified it is to be a cluster center. However, some clusters have a low overall density, so their Dk values are larger overall, and it would be unwise to select cluster centers directly by comparing Dk values across the whole dataset. We therefore adopt a sequential clustering process: we first select the point with the smallest Dk value among all core points as the center of a new cluster; after the clustering of that cluster is completed, we select the core point with the smallest Dk value among the remaining core points as the center of another new cluster and complete its clustering, and so on until every core point has been allocated to a cluster. With this method, the cluster centers can be selected correctly even in datasets with uneven density distributions.
To better understand the process of clustering, we propose a definition of connectivity.
Definition 3 (Connectivity). A core point i is joinable to cluster Cj by connectivity if the distance from the core point i to the cluster Cj is smaller than the maximum value of Dk for core point i and any core point in cluster Cj. The connectivity of core point i to cluster Cj satisfies Eq (10).
$$\|X_i-C_j\|<\max(Dk_i,\ Dk_t),\qquad t\in C_j\tag{10}$$
where ‖Xi−Cj‖ denotes the minimum distance between core point i and any core point in cluster Cj, and max(Dki,Dkt) denotes the maximum Dk value in core point i and cluster Cj.
As in Figure 2, the nearest distance between core point 5 and cluster 1 is d45 = 0.32, and according to Table 1 the largest nearest-neighbor mean among core point 5 and the core points already in cluster 1 is Dk5 = 0.38. Since d45 is less than Dk5, connectivity is satisfied and core point 5 is added to cluster 1. The closest distance between core point 6 and cluster 1 is d56 = 0.33, and the maximum Dk value among core point 6 and the core points in cluster 1 is Dk6 = 0.41; d56 is less than Dk6, so core point 6 joins cluster 1. For core point 7, the closest distance to cluster 1 is d67 = 0.38 and the maximum Dk value is Dk7 = 0.43; d67 is less than Dk7, so core point 7 joins cluster 1. In contrast, the closest distance between core point 9 and cluster 1 is d69 = 0.73, while the maximum Dk value among core point 9 and the core points in the cluster is Dk9 = 0.67; because d69 is greater than Dk9, connectivity is not satisfied and core point 9 cannot join cluster 1. Likewise, core point 8 cannot join cluster 1. (The short snippet after Table 1 replays these checks.)
Sample | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
Dki(cm) | 0.35 | 0.34 | 0.31 | 0.33 | 0.38 | 0.41 | 0.43 | 0.68 | 0.67 |
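The connectivity decisions in this example can be replayed numerically. The short snippet below is purely illustrative: it uses the distances quoted in the text and the Dk values from Table 1, and the helper function is our own.

```python
# Replaying the connectivity checks (Eq (10)) for the Figure 2 example.
def can_join(dist_to_cluster, dk_point, dk_cluster_max):
    return dist_to_cluster < max(dk_point, dk_cluster_max)

print(can_join(0.32, 0.38, 0.35))   # core point 5: d45 = 0.32 < Dk5 = 0.38 -> True, joins cluster 1
print(can_join(0.73, 0.67, 0.43))   # core point 9: d69 = 0.73 > Dk9 = 0.67 -> False, rejected
```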
Algorithm 2 shows the whole procedure for clustering the core points, where SDn×n is the distance matrix sorted in ascending order and ‖g−Ci‖ is the smallest distance from core point g to cluster Ci. First, we create a new cluster whose center is the core point with the smallest Dk value. Second, we traverse the remaining core points from near to far according to their distance from that cluster's center and add every core point that satisfies the connectivity condition to the cluster. Note that merging core points into a cluster is an adaptive process: each pass may add new core points, so the maximum Dk value inside the cluster can change, i.e., the right-hand side of Eq (10) is updated continuously during clustering. This continues until no more core points can be added to the cluster. We repeat the above procedure until all core points have been grouped into clusters.
Algorithm 2: Clustering of core points |
Input: core points CG = {g1, g2, ..., gm}, SDn×n, {dij}n×n, k |
Output: cluster set C = {c1, c2, ..., co}, Center set T = {t1, t2, ..., to} |
1: Calculate Dki based on Eq (8). |
2: Sort the core points array CG in ascending order according to Dki |
3: Create set C = ∅, T = ∅ |
4: Create a new cluster c1, c1 = c1∪CG[0], T = T∪CG[0], i = 1 /* Create the first new cluster and set the point with the smallest Dki value to be the center of the cluster */ |
5: while existing g in CG Do /* Start Clustering */ |
6: FLAG = 0 /* Used to judge whether a cluster has joined a new core point */ |
7: For each core point g in SD[T[i]] /* Traversing the core points in order from near to far from the center of the ith cluster */ |
8: If ‖g−Ci‖<max(Dkg,Dkci) Then /* Connectivity judgment */ |
9: FLAG = 1 |
10: Ci = Ci∪g, CG = CG \ {g} |
11: End if |
12: End for |
13: If FLAG = = 0 Then /* If the current cluster is no longer changing, a new cluster is created and the point with the smallest Dki value among the remaining core points is used to be the center of clustering */ |
14: Create a new cluster c, c = c∪CG[0], T = T∪CG[0], CG = CG \ CG[0] |
15: i = i + 1 /* The clustering of the ith+1st new cluster is started */ |
16: End if |
17: End While |
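A minimal sketch of Algorithm 2, building on the classification sketch above, could look as follows; the names and structure are illustrative, not the released implementation:

```python
# A minimal sketch of Algorithm 2: sequential, automatic clustering of the core points.
# `d` is the distance matrix, `dk` holds the Dk value (Eq (8)) of every sample, and
# `core` comes from the Algorithm 1 sketch above.
def cluster_core_points(d, dk, core):
    remaining = set(int(i) for i in core)
    clusters = []
    while remaining:
        center = min(remaining, key=lambda i: dk[i])       # smallest Dk among remaining cores
        cluster = [center]
        remaining.remove(center)
        changed = True
        while changed:                                     # grow until no core can still join
            changed = False
            for g in sorted(remaining, key=lambda i: d[i, center]):   # near-to-far traversal
                if min(d[g, t] for t in cluster) < max(dk[g], max(dk[t] for t in cluster)):  # Eq (10)
                    cluster.append(g)
                    remaining.remove(g)
                    changed = True
        clusters.append(cluster)
    return clusters
```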
Once the core points have been clustered, the labels of the non-core samples must be determined. We first sort the non-core samples in ascending order of their Dk values. Empirically, neighboring samples usually belong to the same cluster, so each unclassified non-core point is given the same label as its nearest point that has already been classified. Algorithm 3 explains this method in detail.
Algorithm 3: Allocation of remaining non-core points |
Input: cluster set C = {c1, c2, ..., co}, non_corepoints, NG = {n1, n2, ..., nj} Sorting Matrix SDn×n |
Output: cluster set C = {c1, c2, ..., co} |
1: For each n in NG do |
2: For x in SD[n] do |
3: If x ∈ cj (j = 1, 2, …, o) Then |
4: cj = cj∪n |
5: Break |
6: End if |
7: End for |
8: End for |
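A minimal sketch of Algorithm 3 under the same assumptions (order is the sorted-neighbor matrix SD and labels holds the core-point labels, with −1 marking unlabelled points):

```python
# A minimal sketch of Algorithm 3: each non-core point, taken in ascending order of Dk,
# inherits the label of its nearest already-labelled sample.
import numpy as np

def assign_non_core(order, labels, non_core, dk):
    labels = labels.copy()
    for i in sorted(non_core, key=lambda j: dk[j]):
        for j in order[i, 1:]:            # nearest neighbours first, skipping the point itself
            if labels[j] != -1:
                labels[i] = labels[j]     # copy the label of the nearest classified point
                break
    return labels
```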
To understand our algorithm more intuitively, we show its whole process on the manual dataset Aggregation [23], which contains 788 samples and 7 ground-truth clusters. The results of each step of the clustering procedure are clearly shown in Figure 3 (algorithm parameters K = 11, k = 11).
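Putting the three sketches together, a hypothetical end-to-end run might look as follows; the toy two-moons data from scikit-learn is used only for illustration and does not reproduce the Aggregation experiment of Figure 3:

```python
# Illustrative end-to-end run of the three sketches above on toy data (not Figure 3).
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.datasets import make_moons
from sklearn.preprocessing import MinMaxScaler

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)
X = MinMaxScaler().fit_transform(X)               # map features to [0, 1], as in the experiments
d = cdist(X, X)

K = k = 11                                        # the two AKDPC parameters
core, non_core, order = split_core_points(d, K)
dk = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)  # Eq (8) evaluated for every sample

labels = np.full(len(X), -1)
for lab, members in enumerate(cluster_core_points(d, dk, core)):
    labels[members] = lab                         # label the core points cluster by cluster
labels = assign_non_core(order, labels, non_core, dk)
print(np.unique(labels))                          # discovered cluster labels
```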
Assume that the dataset contains n samples, divided into a core points and b non-core points. The complexity of our method depends mainly on the following six aspects.
1) The complexity of computing MK values for each sample is O(kn);
2) The complexity of classifying samples based on MKT values is O(n);
3) The complexity of the calculation of the Dk value for each sample is O(n);
4) The complexity of defining clustering centers and clustering the core points is O(a²);
5) The complexity of distributing non-core points is O(ab);
6) Based on the analysis above, the AKDPC has an overall time complexity of O(n²), the same as the original DPC.
To demonstrate the feasibility and validity of the proposed AKDPC algorithm, we compared it with DPC, its improved variant DPC-KNN, and DBSCAN. DBSCAN is a classical density-based clustering algorithm, and DPC-KNN, like this paper, improves the original DPC using K-nearest neighbors, so these comparisons can demonstrate the advantages of AKDPC. Experiments were performed on seven classical manual datasets and eight real datasets obtained from [24,25]. To verify the performance of the algorithms under different density distributions and shapes, we used seven manual datasets representing different situations: Aggregation, Jain, Flame [26], ThreeCircles, Pathbased [27], D9 [28], and T4. The eight real datasets differ in their numbers of clusters and features, so experiments on them verify the generalizability of AKDPC. The detailed attributes of the data are given in Table 2. The source code and datasets used in this section are available at https://github.com/wanghuani/AKDPC.
Dataset | Instances | Dimensions | Clusters |
Manual | |||
Aggregation | 788 | 2 | 7 |
Jain | 373 | 2 | 2 |
Flame | 240 | 2 | 2 |
ThreeCircles | 299 | 2 | 3 |
Pathbased | 300 | 2 | 3 |
D9 | 1400 | 2 | 4 |
T4 | 8000 | 2 | 6 |
Real | |||
Zoo | 101 | 16 | 7 |
Thyroid | 215 | 6 | 3 |
Wine | 178 | 13 | 3 |
Wdbc | 569 | 30 | 2 |
Vote | 299 | 2 | 2 |
Pima | 768 | 9 | 2 |
Diabetes | 768 | 8 | 2 |
Ecoli | 336 | 8 | 8 |
We uniformly used the adjusted Rand index (ARI [29]), normalized mutual information (NMI [30]), and clustering accuracy (ACC) as performance metrics throughout the experiments; each metric has an upper limit of 1, and higher values indicate better performance. ACC is one of the most commonly used clustering evaluation metrics, defined as Eq (11).
$$ACC=\frac{\sum_{i=1}^{N}\delta\big(y_i,\ \mathrm{map}(z_i)\big)}{N}\tag{11}$$
where yi is the true label, zi is the label after clustering, and map(.) represents the reassignment of clustering labels, which is generally implemented using the Hungarian algorithm.
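A possible implementation of ACC, using SciPy's Hungarian solver (linear_sum_assignment) for the label mapping, is sketched below; it is one common way to realize Eq (11), not necessarily the exact code used here:

```python
# A possible implementation of ACC (Eq (11)): the Hungarian algorithm finds the
# cluster-to-class mapping that maximizes the number of correctly matched samples.
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes, clusters = np.unique(y_true), np.unique(y_pred)
    cost = np.zeros((clusters.size, classes.size), dtype=int)
    for i, c in enumerate(clusters):
        for j, t in enumerate(classes):
            cost[i, j] = np.sum((y_pred == c) & (y_true == t))   # contingency counts
    row, col = linear_sum_assignment(-cost)                      # maximize matched samples
    return cost[row, col].sum() / y_true.size
```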
ARI is the adjusted Rand index (RI). It measures the degree of agreement between two partitions; Eqs (12) and (13) show how RI and ARI are calculated:
$$RI=\frac{a+b}{C_{n}^{2}}\tag{12}$$
$$ARI=\frac{RI-E(RI)}{\max(RI)-E(RI)}\tag{13}$$
where Y represents the actual labels, Z the clustering result, $C_n^2$ the total number of sample pairs in the dataset, a the number of pairs assigned to the same category in both Y and Z, b the number of pairs assigned to different categories in both Y and Z, and E the expectation.
NMI is also a commonly used measure of the similarity between the clustering results and the true results, as defined by Eq (14).
$$NMI(Y,Z)=\frac{2\, I(Y,Z)}{H(Y)+H(Z)}\tag{14}$$
where Y represents the true categories, Z the clustering result, H the entropy, and I(Y, Z) the mutual information between Y and Z.
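ARI and NMI are available off the shelf in scikit-learn, so the three metrics could be reported together as in the sketch below (the evaluate wrapper is our own):

```python
# ARI (Eqs (12)-(13)) and NMI (Eq (14)) via scikit-learn, combined with the ACC sketch above.
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def evaluate(y_true, y_pred):
    return {
        "ACC": clustering_accuracy(y_true, y_pred),   # from the sketch above
        "ARI": adjusted_rand_score(y_true, y_pred),
        "NMI": normalized_mutual_info_score(y_true, y_pred),
    }
```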
To ensure the fairness of the experiments, all datasets were linearly mapped to the range [0, 1] by preprocessing before the experiments began.
In this paper, the parameters of all four algorithms were tuned so that the experiments show the best possible clustering for each algorithm. AKDPC requires two parameters, K and k, taking values between 1 and 100. In our experience, for simple low-dimensional datasets K and k are generally chosen around 10–20 and can usually take the same value, while for datasets with more samples AKDPC generally performs better with larger K and k. Both DPC and DPC-KNN require a cut-off distance dc; following common practice, we choose dc such that roughly 1–2% of the pairwise distances, sorted in ascending order, fall below it. DPC-KNN additionally requires the number of nearest neighbors K to be set manually, which we choose between 4 and 15. For the two parameters of DBSCAN, we choose ε between 0.01 and 2 and minpts between 1 and 50. Finally, for both DPC-KNN and DPC we set the number of clusters manually.
For this part, we selected classical manual datasets that are commonly used to test clustering algorithms. The final clustering results on the manual datasets are given in Table 3. As Table 3 shows, AKDPC achieves near-perfect results on all types of datasets, while the other algorithms are not as generalizable. DPC cannot handle large density differences or some complex non-convex datasets; DPC-KNN generally does better than DPC on these datasets but still fails on some complex non-convex ones. DBSCAN handles each dataset reasonably well, but it tends to misidentify some samples as noise, so its overall performance is not as good as that of AKDPC.
Dataset | Algorithm | ACC | ARI | NMI | Arg |
Aggregation | AKDPC | 0.9975 | 0.9956 | 0.9924 | 11/11 |
DPC | 0.9975 | 0.9956 | 0.9924 | 0.062 | |
DBSCAN | 0.9797 | 0.975 | 0.9707 | 0.05/10 | |
DPC-KNN | 0.9975 | 0.9956 | 0.9924 | 7/0.062 | |
Jain | AKDPC | 1 | 1 | 1 | 17/17 |
DPC | 0.8954 | 0.6183 | 0.577 | 0.0424 | |
DBSCAN | 0.9732 | 0.9731 | 0.9178 | 0.08/5 | |
DPC-KNN | 1 | 1 | 1 | 7/1.08 | |
Flame | AKDPC | 1 | 1 | 1 | 11/11 |
DPC | 1 | 1 | 1 | 0.09 | |
DBSCAN | 0.9208 | 0.8607 | 0.7923 | 0.07/6 | |
DPC-KNN | 1 | 1 | 1 | 7/0.1 | |
ThreeCircles | AKDPC | 1 | 1 | 1 | 11/11 |
DPC | 0.408 | -0.001 | 0.1123 | 0.0266 | |
DBSCAN | 1 | 1 | 1 | 0.09/4 | |
DPC-KNN | 0.4916 | 0.1089 | 0.2019 | 8/0.0266 | |
Pathbased | AKDPC | 0.9933 | 0.9798 | 0.9659 | 12/17 |
DPC | 0.7333 | 0.453 | 0.539 | 0.0545 | |
DBSCAN | 0.6533 | 0.5319 | 0.675 | 0.08/10 | |
DPC-KNN | 0.74 | 0.4572 | 0.5425 | 8/0.0545 | |
D9 | AKDPC | 0.9971 | 0.99 | 0.9772 | 35/55 |
DPC | 0.3836 | 0.0236 | 0.2701 | 0.0375 | |
DBSCAN | 0.9593 | 0.9108 | 0.8844 | 0.05/6 | |
DPC-KNN | 0.4871 | 0.2128 | 0.4678 | 6/0.0375 | |
T4 | AKDPC | 0.993 | 0.9849 | 0.9747 | 70/20 |
DPC | 0.7013 | 0.6027 | 0.7337 | 0.0653 | |
DBSCAN | 0.9156 | 0.8808 | 0.8828 | 0.02/15 | |
DPC-KNN | 0.6721 | 0.5651 | 0.709 | 8/0.0653 |
We visualize the final clustering results in Figures 4–10, using asterisks to indicate the cluster centers obtained by each algorithm except DBSCAN. In Figure 4, DPC, DPC-KNN, and AKDPC all identify the structure of the Aggregation dataset, which consists of arbitrarily distributed non-spherical clusters; DBSCAN recovers the general shape of each cluster, although some points are labeled as noise.
The clustering result on the classic dataset Jain is shown in Figure 5. Since DPC always prefers to select cluster centers in high-density areas, it cannot detect the cluster center in the sparse upper part of Jain, which leads to a wrong result. At the same time, DBSCAN incorrectly classifies the left side of the sparse upper region as a new cluster and identifies some points at the right end as noise. In contrast, both AKDPC and DPC-KNN identify density peaks distributed over regions of different density.
The results on the Flame dataset are presented in Figure 6. DPC, DPC-KNN, and AKDPC all aggregate the two clusters effectively, while DBSCAN detects both clusters correctly but incorrectly identifies some edge points as noise, which affects its clustering outcome.
The clustering results on the ThreeCircles dataset are presented in Figure 7. As the figure shows, DPC and DPC-KNN select all three cluster centers within the two inner rings, leaving the outer ring without a center and producing incorrect final results. DBSCAN correctly identifies the three clusters, while our algorithm AKDPC not only identifies the cluster centers of the three rings but also produces a completely correct final clustering.
Figure 8 displays the clustering results for the Pathbased dataset. On this typical non-convex dataset, DPC and DPC-KNN do not perform well: they correctly identify the three cluster centers but wrongly assign the left and right sides of the dataset to the two middle clusters, while DBSCAN correctly identifies only two clusters and leaves the remaining samples in the outer sparse region as noise. Only AKDPC both identifies the three cluster centers and produces a nearly perfect final clustering on this dataset.
As shown in Figure 9, dataset D9 has many discrete points, which often have a significant impact on algorithm performance. Not surprisingly, DPC and DPC-KNN were not up to the task of processing such a high-curvature dataset, and only DBSCAN and AKDPC succeeded in identifying the four clusters, with DBSCAN having the slight flaw of identifying almost all of the discrete points as noise.
The results of all algorithms on the T4 dataset are shown in Figure 10. T4 contains six clusters, most of which are intertwined, with many discrete points at the edges of each cluster, testing the ability of the algorithms to handle complex datasets. DBSCAN and AKDPC adapted to these complex situations and successfully identified the six clusters of different shapes, while DPC and DPC-KNN gave less satisfactory results on this heavily intertwined dataset.
We further demonstrate the capability of the AKDPC clustering algorithm using eight real datasets. Table 4 displays the ACC, ARI, and NMI values on the real datasets; bold values denote the best result for each dataset. As Table 4 shows, no algorithm always outperforms the others because of the diversity of the real datasets, but in most cases AKDPC does better, indicating that the AKDPC algorithm is more generalizable and can handle datasets with different characteristics.
Dataset | Algorithm | ACC | ARI | NMI | Arg |
Zoo | AKDPC | 0.8614 | 0.9249 | 0.8885 | 11/15 |
DPC | 0.6337 | 0.4972 | 0.7219 | 1 | |
DBSCAN | 0.8812 | 0.9007 | 0.8584 | 1.1/5 | |
DPC-KNN | 0.6733 | 0.6043 | 0.7815 | 9/1 | |
Thyroid | AKDPC | 0.8884 | 0.7679 | 0.6688 | 20/12 |
DPC | 0.5535 | 0.144 | 0.2819 | 0.676 | |
DBSCAN | 0.8372 | 0.7932 | 0.6405 | 0.13/32 | |
DPC-KNN | 0.5674 | 0.1388 | 0.3035 | 2/0.676 | |
Wine | AKDPC | 0.8652 | 0.7096 | 0.7014 | 18/10 |
DPC | 0.882 | 0.6723 | 0.7104 | 0.4165 | |
DBSCAN | 0.8146 | 0.5292 | 0.5905 | 0.5/21 | |
DPC-KNN | 0.8876 | 0.6855 | 0.7181 | 5/0.4165 | |
Wdbc | AKDPC | 0.9244 | 0.7529 | 0.6361 | 58/8 |
DPC | 0.6203 | -0.0056 | 0.0094 | 0.3528 | |
DBSCAN | 0.8383 | 0.4501 | 0.3376 | 0.46/38 | |
DPC-KNN | 0.8559 | 0.5047 | 0.3932 | 4/0.3528 | |
Vote | AKDPC | 0.9034 | 0.6502 | 0.5706 | 39/26 |
DPC | 0.8759 | 0.5641 | 0.5059 | 0.7071 | |
DBSCAN | 0.8 | 0.465 | 0.387 | 1.0/24 | |
DPC-KNN | 0.8989 | 0.6354 | 0.5534 | 6/0.7071 | |
Pima | AKDPC | 0.638 | 0.0486 | 0.0501 | 16/8 |
DPC | 0.6185 | 0.0146 | 0.002 | 0.2243 | |
DBSCAN | 0.651 | 0 | 0 | 1.4/4 | |
DPC-KNN | 0.6875 | 0.1226 | 0.0611 | 2/0.2243 | |
Diabetes | AKDPC | 0.6263 | 0.0642 | 0.0639 | 17/7 |
DPC | 0.6185 | 0.0146 | 0.002 | 0.2243 | |
DBSCAN | 0.6901 | 0.1184 | 0.0584 | 0.3/31 | |
DPC-KNN | 0.6615 | 0.0495 | 0.0201 | 7/0.2243 | |
Ecoli | AKDPC | 0.7649 | 0.6958 | 0.6663 | 23/7 |
DPC | 0.497 | 0.3086 | 0.4973 | 0.1568 | |
DBSCAN | 0.6458 | 0.5255 | 0.5055 | 0.2/22 | |
DPC-KNN | 0.5417 | 0.4323 | 0.59 | 2/0.1568 |
To address the weaknesses of the original DPC, we propose a density peak clustering algorithm that automatically selects cluster centers based on K-nearest neighbors. We use the relationships between samples and their surrounding neighbors to classify the samples into core and non-core points, and cluster them in batches to eliminate possible adverse effects of non-core samples on the final clustering result. At the same time, we use the distances from a sample to its surrounding neighbors in place of the density used in the original DPC. Through a novel sequential clustering method, we remove the need for human intervention to determine the cluster centers and achieve better performance on complex datasets with large density differences or non-convex shapes. Of course, AKDPC also has some drawbacks. First, the two parameters K and k must be set manually. Second, non-core points are labeled the same as the nearest points that have already been classified, without considering the cluster shape, which may lead to incorrect assignments.
In future research, we will try to reduce the number of parameters while maintaining the current accuracy, and we will propose a new method to reduce incorrect assignments of the unassigned non-core points. Finally, we will try to combine AKDPC with supervised algorithms, test it with excellent variants of the KNN algorithm used in this paper, such as CDNN [31,32] and ECDNN [33], and explore its application in related research [34].
The financial support for this project was provided by the National Natural Science Foundation of China [61962054].
The authors declare no conflict of interest.