Research article

Sexual health and its related factors among Iranian pregnant women: A review study

  • Received: 02 July 2019 Accepted: 09 August 2019 Published: 26 November 2019
  • Introduction: Sexual health is an important dimension of health. Pregnancy is a critical stage in women's lives and can affect couples' sexual health and marital life due to the physiological, anatomical and psychological changes of pregnancy. This review of Iranian studies was conducted to assess sexual health dimensions and influencing factors in Iranian pregnant women. Methods: This narrative review was carried out by searching Iranian scientific articles published between 2000 and 2018 that considered the dimensions of sexual health. Electronic databases including Magiran, the Scientific Information Database (SID), Web of Science, PubMed, Scopus, and the Google Scholar search engine were searched using the following keywords: sexual health, awareness, belief, attitude, sexual activity, sexual violence, prenatal, pregnancy, and pregnant women. Full-text cross-sectional or cohort articles in Persian or English related to the sexual health of Iranian pregnant women were included in the review. Results: Of the 1383 articles initially identified, 63 met the inclusion criteria. The sexual health of pregnant women was examined and categorized into the domains of awareness, attitude and belief, sexual activity, performance and satisfaction, quality of sexual life, and sexual violence. The majority of studies assessed sexual violence (33 studies), followed by sexual function (24 studies), sexual satisfaction and quality of life (4 studies), and knowledge and attitudes about sexuality in pregnancy (4 studies). Conclusion: The review of published studies revealed that the level of awareness and attitude of Iranian pregnant women about sexual activity was low, while the level of sexual dysfunction and sexual violence in pregnancy was high. Therefore, the quality of purposeful care and counseling provided to maintain and improve sexual health during and even before pregnancy should be improved. Further longitudinal and meta-analytic studies on the dimensions of sexual health, including sexual activity and sexual satisfaction, are recommended.

    Citation: Shiva Alizadeh, Hedyeh Riazi, Hamid Alavi Majd, Giti Ozgoli. Sexual health and its related factors among Iranian pregnant women: A review study[J]. AIMS Medical Science, 2019, 6(4): 296-317. doi: 10.3934/medsci.2019.4.296



    Traditional machine learning algorithms can be broadly divided into three categories: (1) Supervised learning, which learns from a large number of labeled samples and predicts the labels of new, unknown samples from the learned knowledge. In practice, we can usually collect a large number of unlabeled samples, but the process of assigning labels to them is time consuming [1]. (2) Unsupervised learning, in which no categories are known in advance, so a model can only be learned from an unlabeled sample set; unsupervised models are usually built on the similarity between samples. (3) Semi-supervised learning, which relies on a small number of labeled samples to guide the label prediction of unlabeled samples [2,3]. Newly labeled samples are continuously added to the training set to compensate for the small number of labeled samples, which would otherwise limit the performance of supervised learning.

    Recently, semi-supervised learning algorithms have been applied more and more widely, and scholars have done a great deal of research in this area. In addition to traditional semi-supervised learning algorithms such as S3VM [4], many new methods have been proposed. J. Levatic et al. [5] proposed an extension of predictive clustering trees for multi-target regression (MTR), and ensembles thereof, towards semi-supervised learning. This approach preserves the attractive properties of decision trees while allowing the use of unlabeled samples; in particular, it is interpretable, easy to understand, quick to learn, and can handle numerical and nominal descriptive attributes. B. Jiang et al. [6] proposed a novel graph-based semi-supervised learning framework which includes a sparse Bayesian semi-supervised learning method and an incremental sparse Bayesian semi-supervised learning method. The proposed algorithms can generate sparse solutions and make probabilistic predictions in both transductive and inductive settings. The label propagation algorithm (LPA) is a graph-based semi-supervised learning method: labels are propagated from known samples (vertices) to unknown ones based on the similarities between the vertices in the graph [7]. LPA has the advantages of low complexity and high efficiency, and it has been widely applied in network community mining [8,9], information classification [10,11] and multimedia recognition and processing [12]. However, the traditional LPA simply takes the similarities between samples as the basis of label propagation and lacks evaluation criteria for newly labeled samples. In addition, since the algorithm incorporates the newly labeled samples into the training set, the propagation error may gradually accumulate, degrading performance [13]. Due to the randomness of the propagation process and the weak ability of LPA to deal with uncertain points, many improved methods have been proposed. C. Gong et al. [14] proposed a novel iterative LPA in which each propagation alternates between two paradigms, teaching to learn and learning to teach (TLLT). In the "teaching to learn" step, the learner propagates labels to the simplest unlabeled examples assigned by the teacher. In the "learning to teach" step, the teacher adjusts the selection of the next simplest examples based on the learner's feedback. J. Hao et al. [15] adopted a fuzzy method when dealing with the categories of unlabeled samples: the categories of unlabeled samples are represented by fuzzy membership degrees in the interval [0, 1], and the final step of the algorithm removes the ambiguity. X. K. Zhang et al. [16] dealt with the problem of random label selection by using node similarity values. B. Wang et al. [17] proposed dynamic label propagation (DLP) to handle multi-class and multi-label problems simultaneously; DLP updates the similarity measures dynamically by fusing multi-label and multi-class information.

    In this paper, we put forward a Label Propagation based on Roll-back and Credibility (LPRC) algorithm to solve these problems. First, the credibility (label confidence) of each unlabeled sample is evaluated. According to the evaluation results, the label propagation order of the samples is determined, and the samples with high credibility are labeled first. Then, in order to avoid the impact of propagating false labels, a roll-back mechanism based on feedback performance evaluation is proposed. During label propagation, after every fixed number of iterations, the newly labeled samples are assumed by default to carry correct labels and are added to the training set. At the same time, the same number of samples from the original training set are moved out as a new dataset waiting to be labeled. The newly labeled samples are not permanently added to the training set until the propagation accuracy on this held-out set reaches a predefined threshold, so these newly labeled samples can be reused in LPA with a certain level of confidence.

    The traditional LPA is described in section 2, and the details of the LPRC algorithm are presented in section 3. In section 4, we give comparison results on artificial synthetic datasets and UCI datasets, respectively. Finally, conclusions and future research plans are given in section 5.

    In the label propagation algorithm, all categories in the sample set are first required to be known. Assuming the number of categories in the sample set is t, $ C = \{c_1, c_2, ..., c_t\} $ represents the collection of all categories in the dataset. The dataset is then defined as follows: $ X_L = \{x_1, x_2, ..., x_l \} $ represents the set of labeled samples, and $ X_U = \{x_1, x_2, ..., x_u \} $ represents the set of unlabeled samples, with $ X = X_L\cup X_U $, $ x_i\in \mathbb R^d $, and $ 1\le l \ll u $. In addition, $ Y_L = \{y_1, y_2, ..., y_l \} $ represents the label set of the labeled samples and $ Y_U = \{y_1, y_2, ..., y_u \} $ represents the label set of the unlabeled samples. At the beginning, all labels of unlabeled samples can be set to 0, since the initial value of $ Y_U $ is not important.

    The label propagation algorithm aims to spread the labels of labeled samples to the samples with the greatest similarity. Taking $ X $ as the initial training matrix, the samples in $ X_U $ are classified using the labeled samples in $ X_L $; each round of newly labeled samples in $ X_U $ is added to $ X_L $, and the updated $ X_L $ is then used to conduct a new round of training on the remaining unlabeled samples in $ X_U $. The label propagation algorithm repeats this process until a certain stopping condition is reached.

    For a labeled sample $ x_a\in X_L $, the similarity between it and an unlabeled sample $ x_b\in X_U $ determines whether $ x_a $ will spread its label to $ x_b $. X. Zhu et al. [18] define the similarity between any samples $ x_i $ and $ x_j $ as:

    $ w_{ij} = \exp\left(-\frac{d_{ij}^2}{\sigma^2}\right) = \exp\left(-\frac{\sum_{d=1}^{D}\left(x_i^d - x_j^d\right)^2}{\sigma^2}\right) $
    (2.1)

    The labels of samples are spread according to the similarity between the samples themselves. Clearly, the greater the similarity between samples $ x_a $ and $ x_b $, the easier it is for $ x_a $ to propagate its own label to $ x_b $. In [18], a $ (l+u)\times (l+u) $ probabilistic transition matrix $ T $ is defined as (2.2), where $ T_{ij} $ represents the probability that node $ i $ propagates its own label to node $ j $.

    $ T_{ij} = P(i \rightarrow j) = \frac{w_{ij}}{\sum_{k=1}^{l+u} w_{kj}} $
    (2.2)

    Meanwhile, a $ (l+u)\times t $ label matrix $ Y $ records the relationship between each sample and each category: $ Y_{ij} $ represents the probability that the $ i $-th sample $ x_i $ belongs to the $ j $-th class. The initial values of the unlabeled samples in the label matrix $ Y $ have little effect on the classification result, so they can all be set to 0. The probability transition matrix $ T $ is used to continuously update the label matrix $ Y $, and then the $ s $ largest values in $ Y $ are chosen. In other words, the $ s $ unlabeled samples with the greatest similarity to the labeled samples are found, assigned the corresponding labels, and moved from $ X_U $ to $ X_L $. This process is repeated until the stopping condition is satisfied.
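
    To make the propagation step concrete, the following minimal Python sketch builds $ W $ and $ T $ from Eqs. (2.1)–(2.2) and performs one propagation round with top-$ s $ selection. The row normalization and the clamping of labeled rows follow common implementations of [18] and are assumptions rather than the exact code behind the reported results.

```python
import numpy as np

def build_transition_matrix(X, sigma=1.0):
    """Affinity w_ij from Eq. (2.1) and column-normalized transition matrix from Eq. (2.2)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # pairwise squared distances
    W = np.exp(-d2 / sigma ** 2)
    T = W / W.sum(axis=0, keepdims=True)  # T[i, j] = w_ij / sum_k w_kj
    return T

def lpa_round(T, Y, labeled_mask, Y_clamp, s):
    """One propagation round: spread labels, clamp the labeled rows, and pick the
    s most confident unlabeled samples to move from X_U to X_L."""
    Y = T.T @ Y                               # node j receives sum_i T[i, j] * Y[i]
    Y = Y / Y.sum(axis=1, keepdims=True)      # keep each row a probability vector
    Y[labeled_mask] = Y_clamp[labeled_mask]   # labeled samples keep their labels
    conf = Y.max(axis=1)
    conf[labeled_mask] = -np.inf              # only consider unlabeled samples
    newly_labeled = np.argsort(conf)[-s:]     # indices of the s most confident samples
    return Y, newly_labeled
```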

    The traditional LPA only needs a small number of labeled samples in the training process. In each subsequent training round, the labeled sample dataset is continuously expanded with the newly labeled samples to improve the classification accuracy. But in this process there are also problems that cannot be ignored. In reality, the differences between samples belonging to different categories are often not so clear, and the samples at the edge of a category are sparser than those at its center.

    As shown in Figure 1(a), the samples on the boundary of the two categories are identified as key nodes. If the key nodes are wrongly labeled during the label propagation procedure, this would have a serious impact on the subsequent label propagation of unknown samples, as Figure 1(b) shows.

    Figure 1.  A distribution assumed to explain the importance of key nodes. (a) The samples with correct classes; (b) The result of a key node being misclassified.

    Therefore, we propose the LPRC algorithm. First, we evaluate the credibility of the unlabeled samples instead of classifying them directly, and the order in which unlabeled samples are classified is determined by their credibility values; samples with high credibility values are classified first. This part is described in detail in section 3.2.

    In addition, considering the differences in the distribution of samples across categories in the dataset, we also put forward a new idea on how to select samples for labeling in each round. This part is described in section 3.1.

    Finally, as the label propagation algorithm constantly updates the training matrix and increases the number of labeled samples, the classification error may keep growing, resulting in drastically reduced accuracy. Therefore, we propose a roll-back mechanism based on feedback detection. In the process of label propagation, the algorithm performs feedback accuracy detection on the currently labeled samples every k rounds of iteration. Only when the set conditions are satisfied does the algorithm continue with the next round of iteration; otherwise, the samples labeled in this round are discarded and the label propagation of this round is carried out again. This part is described in detail in section 3.2.

    In a dataset, the distribution of samples often differs across categories, which also has a certain impact on the label propagation algorithm. Figure 2(a) shows a distribution of the samples in a dataset. We can see that the samples of class A are more densely distributed than those of the other two categories. The distribution of class B samples gradually becomes sparse away from the center. The distribution of class C samples is relatively uniform, with an obvious distance between each pair of adjacent samples, which means class C is sparser than the other two categories.

    Figure 2.  (a) A dataset with all labeled samples. (b)–(d) A propagating step of label propagation. (b) The initial state with only 3 labeled samples. (c) A situation that may occur in the course of propagation in one round. (d) The result caused by the different densities of the sample distribution.

    According to the label propagation algorithm, each round spreads the corresponding labels to the s unlabeled samples that are most similar to the labeled samples. Since the samples of class C are more sparsely distributed than those of the other two classes, the similarity $ w_c $ between any two samples $ x_{c_1} $ and $ x_{c_2} $ of class C is generally lower than the similarity $ w_a $ ($ w_b $) between any two samples $ x_{a_1} $ and $ x_{a_2} $ of class A (class B). As a result, class C may not receive any labels spread from its labeled samples during the first n iterations.

    However, as the labeled sample datasets of the other two classes continue to expand, samples at the edges of those categories may be misclassified in subsequent propagation. As a result, with the constant updating of $ X_L $, the classification errors in the later stages become larger and larger, and class A (or class B) may even absorb the samples of class C. Figure 2(b)–2(d) show this situation. Although the samples of class B in the dashed box are at the edge and their distribution is sparser than that of the samples in the center of the category, compared with the samples of class C, which are evenly and sparsely distributed in the original category, they are more easily assigned labels by similar samples. So, as shown in Figure 2(d), the samples in the solid box that originally belong to class C may be wrongly classified as class B. As the algorithm iterates, this error is gradually amplified, resulting in a large number of these samples being swallowed by class B.

    In order to avoid the situation mentioned above, we no longer select the s most similar samples from the entire unlabeled sample dataset; instead, when selecting unlabeled samples to label in each round, we consider each category separately. That is, $ \beta = s/t $ unlabeled samples that are most similar to the labeled samples of each class are selected from each of the $ t $ classes for labeling. In each round of iteration, new labeled samples are guaranteed to be added to every category, so that the labeled sample dataset of each class expands uniformly. This avoids the rapid expansion of a particular class and a large disparity in size between categories, thus reducing or even avoiding category annexation.

    At a later stage of the algorithm's iterations, the value of s may need to be recalculated: 1. If all unlabeled samples of $ t_1 $ classes have been classified, set $ t = t-t_1 $ and $ s = \beta \times t $. 2. If the number of unlabeled samples $ \alpha $ remaining in some class satisfies $ \alpha < \beta $, set $ \beta = \alpha $ and $ s = \beta \times t $.
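
    A minimal sketch of this recalculation is given below; remaining_per_class is a hypothetical list holding the number of still-unlabeled samples in each class.

```python
def recalc_s(beta, remaining_per_class):
    """Recompute s for the two cases above."""
    active = [n for n in remaining_per_class if n > 0]  # case 1: drop exhausted classes, t = t - t1
    t = len(active)
    if active and min(active) < beta:                   # case 2: some class has only alpha < beta samples left
        beta = min(active)
    return beta * t, beta                               # new s and the (possibly reduced) beta
```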

    As mentioned in section 2, the traditional label propagation algorithm obtains the $ s $ largest values directly from the label matrix $ Y $, which is continuously updated by the probability transition matrix $ T $; that is, it finds the s unlabeled samples with the highest similarity to the labeled samples, assigns them the corresponding labels, and moves them from $ X_U $ to $ X_L $.

    However, such a selection mechanism actually only considers one aspect of "similarity". At this point, we need to conduct a classification credibility assessment of these unlabeled samples.

    When we obtain a labeled sample dataset, we often hope that these samples represent the classes to which they belong well and that there are obvious differences between them, i.e., that the samples lie near the centers of their respective categories. In practice, however, we cannot guarantee the quality of the labeled samples we get. For example, if most of these samples are on the boundary of two adjacent classes, the distance between them is likely not as significant as we expected. If a label propagation algorithm is used in this situation, the similarities of some unlabeled samples to two (or more) labeled samples of different classes may be close to each other and higher than all other similarities, which means the probability that the unlabeled sample is assigned a label by two (or more) different categories is similar. The reason is that the label propagation generated by traditional methods is completely controlled by the adjacency relationship between samples, both labeled and unlabeled. The labels of labeled samples are blindly spread to unlabeled adjacent samples without considering the difficulty and risk of transmission [14].

    As the labeled sample dataset continues to expand with the iteration of the algorithm, we hope that the quality of this dataset can be guaranteed. In this way, the classification accuracy will not be significantly reduced, and the probability of label propagation between samples of different categories can be controlled. Therefore, we need to make an assessment of the label propagation order of unlabeled samples.

    As shown in Figure 3, we assume the following sample distribution: the dividing line between A and B is indicated by the dotted line in the figure, and the existing labeled samples are shown as the red and blue solid circles. In this round of the algorithm's update, samples a–c are waiting to be labeled. The distance $ d_{Ac} $ from sample c to class A is less than the distance $ d_{Aa} $ from sample a to class A, the distance $ d_{Bc} $ from sample c to class B is less than the distance $ d_{Bb} $ from sample b to class B, and here we have $ d_{Ac}-d_{Bc} = \lambda, \lambda \rightarrow 0 $.

    Figure 3.  A distribution of unlabeled samples, the existing labeled samples are shown as the red and blue solid circles.

    So, the probability $ P_{Ac} $ that sample c belongs to class A and the probability $ P_{Bc} $ that sample c belongs to class B are both large and very close. In the traditional label propagation algorithm, the label would be assigned to sample c according to $ max(P_{Ac}, P_{Bc}) $, but this may cause sample c to be classified incorrectly.

    In our newly proposed LPRC algorithm, we further process the label matrix Y after each iteration under the conditions in section 3.1 by sorting the label sequence of the samples, so as to improve the classification accuracy of the label propagation algorithm.

    As mentioned in section 2, in the label matrix $ Y $, $ Y_{ij} $ represents the probability that the $ i $-th sample belongs to the $ j $-th class. When the situation discussed in this section occurs, it means that for the category set of the dataset $ C = \{c_1, c_2, ..., c_t\} $, $ {\exists}x_i\in X_U, Y_{x_i c_1 }-Y_{x_i c_2 }\le \lambda, \lambda \rightarrow 0 $. For any sample $ x_i\in X_U $, there is a label vector $ Y_{x_i C} = \{Y_{x_i c_1 }, Y_{x_i c_2 }, ..., Y_{x_i c_t }\} $ that represents the probability that the sample $ x_i $ belongs to each category. We find the maximum value $ Y_{x_i c_n } $ and the second largest value $ Y_{x_i c_m } $ in this label vector and set $ D_{value} = Y_{x_i c_n }-Y_{x_i c_m } $.

    Here we can get:

    $ Y = \begin{bmatrix} Y_{x_1 c_1} & Y_{x_1 c_2} & \cdots & Y_{x_1 c_t} \\ Y_{x_2 c_1} & Y_{x_2 c_2} & \cdots & Y_{x_2 c_t} \\ \vdots & \vdots & \ddots & \vdots \\ Y_{x_u c_1} & Y_{x_u c_2} & \cdots & Y_{x_u c_t} \end{bmatrix} $

    In $ Y_{x_i C} = \{Y_{x_i c_1}, Y_{x_i c_2}, ..., Y_{x_i c_t}\} $ we have $ Y_{x_i c_n} = D_{max_i} = max(Y_{x_i c_1}, Y_{x_i c_2}, ..., Y_{x_i c_t}) $ and $ DM_{max_i} = max(Y_{x_i c_1}, ..., Y_{x_i c_{n-1}}, Y_{x_i c_{n+1}}, ..., Y_{x_i c_t}) $. Moreover, we use $ Class = \{class_1, class_2, ..., class_u\} $, with each element taken from $ C $, to record the category $ c_n $ indicated by $ Y_{x_i c_n} $. Setting $ D_{vp_i} = (D_{value_i}, class_i) $, we obtain a two-dimensional matrix $ D_{vp} $ that stores the values of $ D_{value} $, where $ class_i $ is the class with the highest probability for sample $ x_i $:

    $ D_{vp} = \begin{bmatrix} D_{max_1} & DM_{max_1} & class_1 \\ D_{max_2} & DM_{max_2} & class_2 \\ \vdots & \vdots & \vdots \\ D_{max_u} & DM_{max_u} & class_u \end{bmatrix} $

    In each round of updating the label matrix $ Y $, the sample set is first divided according to the category set $ C $ and the value of each sample in the set $ Class $: samples with the same value in $ Class $ are put into the same subset. Then the $ \beta $ samples with the largest values of $ D_{value} $ are selected from each subset for labeling. That is, when $ \beta $ unlabeled samples are selected from each of the $ t $ categories, the probability that these samples belong to that particular class should be as large as possible, while the probability that they belong to other classes should be as small as possible. This operation ensures the credibility of the samples added to the labeled sample dataset in each round. We refer to such samples as easy-to-label samples.
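
    The following sketch computes $ D_{value} $ and $ Class $ from the label matrix $ Y $ and performs the per-class selection of easy-to-label samples; credibility and easy_to_label are illustrative helper names, and unlabeled_idx is assumed to be an integer index array into $ Y $.

```python
import numpy as np

def credibility(Y):
    """D_value = largest minus second-largest class probability; Class = most probable class."""
    part = np.sort(Y, axis=1)
    d_value = part[:, -1] - part[:, -2]   # D_max - DM_max
    cls = Y.argmax(axis=1)                # class with the highest probability
    return d_value, cls

def easy_to_label(Y, unlabeled_idx, beta):
    """From each predicted class, pick up to beta unlabeled samples with the largest D_value."""
    d_value, cls = credibility(Y[unlabeled_idx])
    picked = []
    for c in np.unique(cls):
        members = np.where(cls == c)[0]
        best = members[np.argsort(d_value[members])[-beta:]]  # beta largest margins in this class
        picked.extend(unlabeled_idx[best])
    return np.array(picked)
```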

    If samples a–c in Figure 3 are evaluated based on $ D_{value} $ and $ Class $, the order in which they are labeled can be changed.

    Figure 4.  Illustration of the artificial synthetic datasets, showing the distribution of samples.

    The label matrix $ Y $ composed of the label vectors of samples a–c is:

    $ Y = \begin{bmatrix} Y_{aA} & Y_{aB} \\ Y_{bA} & Y_{bB} \\ Y_{cA} & Y_{cB} \end{bmatrix} $

    After calculating $ D_{value} $ and $ Class $ for each of them, we get:

    $ D_{vp} = \begin{bmatrix} D_{max_a} & DM_{max_a} & class_1 \\ D_{max_b} & DM_{max_b} & class_2 \\ D_{max_c} & DM_{max_c} & class_3 \end{bmatrix} $

    According to Figure 3, for sample c, $ Y_{cA}-Y_{cB} = \lambda, \lambda \rightarrow 0 $; therefore, it will be labeled last.

    Compared with sample c, samples a and b are not only close to their own classes but also keep a relatively clear distance from the other classes. Such samples can be classified more reliably.

    Figure 5.  The classification results of the 10 algorithms on the 5 synthetic datasets; the last column shows the original datasets.

    After samples a and b are given their corresponding labels (A and B), sample c is affected by both the original labeled samples and the newly labeled samples a and b. By superimposing these effects, sample c is assigned to class B, which is exactly in line with the actual distribution of the sample dataset.

    The advantages of the label propagation algorithm are that it is simple, fast and efficient, but it also has the drawback that the results of each iteration are unstable and the accuracy is not high. In order to control this instability, we add a detection mechanism, called "roll-back", to the iterations of the algorithm.

    As we know, the traditional label propagation algorithm labels $ s $ new samples in each round of iteration and adds these samples to the labeled sample dataset. If there are classification errors among these newly added samples, the classification errors in the later stages will grow as the algorithm iterates. So here we use the "roll-back" detection mechanism to control this error.

    After each round of iteration, we obtain s newly labeled samples. We add a condition: every $ k $ rounds of iteration, a roll-back detection is carried out on the $ s $ samples newly labeled in that round. Only when the condition is satisfied does the algorithm continue to execute; otherwise, the newly labeled s samples of the round are discarded and the label propagation of the round is carried out again. Under the conditions in section 3.1, the flow of the roll-back detection mechanism for the newly labeled samples of the round is as follows (a code sketch follows the list):

    1. Use $ N_s^k $ to represent the dataset composed of the $ s $ newly labeled samples in the $ k $-th round, and add $ N_s^k = \{x_{n_1 }, x_{n_2 }, ..., x_{n_s }\}\subset X_U $ to $ X_L $. By default, all labels of the samples in $ N_s^k $ are assumed to be correct.

    2. Find the center sample of $ N_s^k $ and calculate the distances $ Dis = \{D_1, D_2, ..., D_s\} $ between each sample and the center sample. Draw a circle with $ max(Dis) $ as the radius $ r $, search for samples of $ X_L^k $ within this range, and move them out as $ U_s $. If the number of samples within this range is $ s_1 $ and $ s_1 < s $, we randomly select $ s-s_1 $ samples from $ X_L^k-U_{s_1 } $ and add them to $ U_s $.

    3. Use $ X_L^k $ to represent the labeled sample dataset used for training in the $ k $-th round; the $ s $ samples removed from $ X_L^k $ form a new independent unlabeled sample dataset $ U_s $.

    4. Conduct label propagation on the samples in $ U_s $, and check the accuracy of the newly propagated labels on the samples in $ U_s $.

    5. Let the accuracy rate be $ R_p $ and the threshold be $ \eta $. If $ R_p < \eta $, the samples in $ N_s^k $ are discarded and this round of label propagation is carried out again. If $ R_p \ge \eta $, the algorithm continues.
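
    The sketch below implements steps 1–5 under stated assumptions: propagate stands for any base label-propagation routine (a hypothetical helper), y[new_idx] holds the tentative labels assumed correct in step 1, and labeled_idx / new_idx are integer index arrays.

```python
import numpy as np

def roll_back_check(X, y, labeled_idx, new_idx, eta, propagate, rng=None):
    """Return True to keep this round's labels, False to roll back and redo the round."""
    if rng is None:
        rng = np.random.default_rng()
    s = len(new_idx)
    center = X[new_idx].mean(axis=0)                          # center of the newly labeled samples N_s^k
    r = np.linalg.norm(X[new_idx] - center, axis=1).max()     # radius max(Dis)
    dist = np.linalg.norm(X[labeled_idx] - center, axis=1)
    held = labeled_idx[dist <= r]                             # previously labeled samples inside the circle
    if held.size < s:                                         # pad with randomly chosen old labeled samples
        rest = np.setdiff1d(labeled_idx, held)
        held = np.concatenate([held, rng.choice(rest, s - held.size, replace=False)])
    train = np.setdiff1d(np.concatenate([labeled_idx, new_idx]), held)
    pred = propagate(X, y, train, held)                       # re-propagate onto the held-out set U_s
    r_p = np.mean(pred == y[held])                            # feedback accuracy R_p
    return r_p >= eta
```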

    In order to evaluate the effectiveness of the proposed method, we conducted experiments on artificial synthetic datasets and UCI datasets, respectively. The hardware environment of the experiments was an Intel(R) Core(TM) i7-9700 CPU @ 3.00 GHz, and the software environment was Windows 10 + Matlab 2017a + Python 3.6.

    In Python, we used the sklearn.datasets module of the scikit-learn library to generate the 5 synthetic datasets [19] we needed, as shown in Figure 4 and Table 1. Dataset circles has a double-ring shape and contains 1500 samples in 2 classes, with 750 samples in each class. Dataset moons consists of 1500 samples and also contains 2 classes, with 750 samples in each class. Datasets varied, aniso and blobs each have 1500 samples and 3 classes, with 500 samples in each class.

    Table 1.  The detail of each artificial synthetic dataset.
    Datasets Total samples Classes Samples per class
    Circles 1500 2 750
    Moons 1500 2 750
    Varied 1500 3 500
    Aniso 1500 3 500
    Blobs 1500 3 500

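    The five datasets of Table 1 can be generated with scikit-learn as sketched below; the noise levels, cluster spreads and random seeds are assumptions (taken from the standard scikit-learn clustering example), since the paper does not list them.

```python
import numpy as np
from sklearn import datasets

n = 1500
circles = datasets.make_circles(n_samples=n, factor=0.5, noise=0.05)            # 2 classes, 750 each
moons = datasets.make_moons(n_samples=n, noise=0.05)                            # 2 classes, 750 each
X_blobs, y_blobs = datasets.make_blobs(n_samples=n, centers=3, random_state=8)  # 3 classes, 500 each
aniso = (X_blobs @ np.array([[0.6, -0.6], [-0.4, 0.8]]), y_blobs)               # anisotropically stretched blobs
varied = datasets.make_blobs(n_samples=n, centers=3,
                             cluster_std=[1.0, 2.5, 0.5], random_state=170)     # blobs with varied variance
```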

    We compared the LPRC algorithm with 9 other algorithms. These algorithms have been shown to have high classification performance, have been applied in many fields, and can adapt well to situations where only a small number of labeled samples is known. The classification results of the 10 algorithms on the five artificially synthesized datasets, together with the original datasets, are shown in Figure 5, and the specific classification accuracies are shown in Table 2. The labeled rate represents the percentage of labeled samples in each dataset.

    Table 2.  The accuracy of the 10 algorithms on 5 synthetic datasets.
    Algorithm Circles Moons Varied Aniso Blobs
    MiniBatchKmeans 33 $ \% $ 17 $ \% $ 38 $ \% $ 7 $ \% $ 0 $ \% $
    Meanshift 50 $ \% $ 87 $ \% $ 2 $ \% $ 32 $ \% $ 33 $ \% $
    SpectralClustering 1$ \% $ 74 $ \% $ 34 $ \% $ 99 $ \% $ 33 $ \% $
    DBSCAN 50 $ \% $ 0 $ \% $ 33 $ \% $ 33 $ \% $ 100 $ \% $
    GaussianMixture 33 $ \% $ 9 $ \% $ 1 $ \% $ 0 $ \% $ 33 $ \% $
    GaussianNB 58.48 $ \% $ 86.34 $ \% $ 93.93 $ \% $ 79.26 $ \% $ 99.86 $ \% $
    DecisionTree 75.77 $ \% $ 89.16 $ \% $ 95.42 $ \% $ 90.37 $ \% $ 100 $ \% $
    KNN 62.25 $ \% $ 82.78 $ \% $ 89.02 $ \% $ 87.88 $ \% $ 100 $ \% $
    LPA 99.86 $ \% $ 99.93 $ \% $ 66.53 $ \% $ 67.80 $ \% $ 100 $ \% $
    LPRC 100 $ \% $ 100 $ \% $ 97.80 $ \% $ 99.80 $ \% $ 100 $ \% $


    The parameter settings were: LPRC (labeled rate = 0.01, s = 15), KNN (labeled rate = 0.01, k = 1) [20], DecisionTreeClassifier (labeled rate = 0.01) [21], GaussianNB (labeled rate = 0.01) [22], and LPA (labeled rate = 0.01, s = 15); the number of labeled samples in each category was kept proportional to the number of samples in that category.
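
    The sketch below illustrates one way the supervised baselines could be set up with a 1% labeled rate drawn proportionally per class; the evaluation protocol (accuracy over the whole dataset) is our assumption, not a detail stated in the paper.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def pick_labeled(y, rate, rng):
    """Choose labeled samples per class in proportion to class size (labeled rate = 0.01)."""
    idx = []
    for c in np.unique(y):
        members = np.where(y == c)[0]
        k = max(1, int(round(rate * members.size)))
        idx.extend(rng.choice(members, k, replace=False))
    return np.array(idx)

X, y = make_blobs(n_samples=1500, centers=3, random_state=8)   # stand-in dataset
rng = np.random.default_rng(0)
labeled = pick_labeled(y, 0.01, rng)
for name, clf in [("KNN (k=1)", KNeighborsClassifier(n_neighbors=1)),
                  ("DecisionTree", DecisionTreeClassifier()),
                  ("GaussianNB", GaussianNB())]:
    clf.fit(X[labeled], y[labeled])
    print(name, round(float((clf.predict(X) == y).mean()), 4))  # accuracy on the whole dataset
```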

    By comparing the classification results of the above 10 algorithms on the 5 synthetic datasets, we can see that the classification ability of the LPRC algorithm is better than that of the other algorithms on the whole, and it also better matches the distribution of the samples.

    The datasets iris, wine, australian and breast, downloaded from the UCI repository [23], were also used to compare the above 10 algorithms. The information for these datasets is shown in Table 3. Dataset iris has 150 samples, 3 classes and 4 features. Dataset wine contains 178 samples in 3 classes, with 13 features. Dataset australian contains 690 samples and 14 features, and dataset breast contains 699 samples and 10 features. Both the australian and breast datasets have 2 classes.

    Table 3.  The details of 4 UCI datasets.
    Datasets Sum of samples Number of features Number of classes
    Iris 150 4 3
    Wine 178 13 3
    Australian 690 14 2
    Breast 699 14 2


    The classification results of 5 algorithms, LPRC, KNN, GaussianNB (GNB), DecisionTreeClassifier (DTC) and LPA, with different labeled rates are shown in Figure 6(a)–(d) and Table 5 (each result is the average of 50 experiments), and the classification results of the other 5 algorithms are shown in Table 4.

    Figure 6.  The recognition accuracies with different labeled rates for different algorithms. (a) The classification accuracies of these algorithms with different labeled rates on dataset wine. (b) The results on dataset iris. (c) The results on dataset australian. (d) The results on dataset breast. These results demonstrate the good classification performance of LPRC, especially when the labeled rate is small.
    Table 4.  Accuracies of the other 5 algorithms on the 4 UCI datasets.
    Algorithm Iris Wine Australian Breast
    MiniBatchKmeans 2$ \% $ 10$ \% $ 47$ \% $ 6$ \% $
    Meanshift 33$ \% $ 7$ \% $ 49$ \% $ 4$ \% $
    SpectralClustering 17$ \% $ 16$ \% $ 32$ \% $ 40$ \% $
    DBSCAN 17$ \% $ 4$ \% $ 52$ \% $ 3$ \% $
    GaussianMixture 0$ \% $ 18$ \% $ 44$ \% $ 9$ \% $

    Table 5.  Accuracies with different labeled rate of the 5 algorithms on these 4 UCI datasets.
    Dataset Labeled rate GaussianNB DecisionTree KNN LPA LPRC
    iris 2$ \% $ 69.96$ \% $ 74.54$ \% $ 75.54$ \% $ 74.00$ \% $ 94.33$ \% $
    4$ \% $ 78.22$ \% $ 84.68$ \% $ 84.82$ \% $ 81.99$ \% $ 95.41$ \% $
    6$ \% $ 84.59$ \% $ 86.41$ \% $ 86.24$ \% $ 87.53$ \% $ 95.94$ \% $
    8$ \% $ 87.82$ \% $ 89.01$ \% $ 87.67$ \% $ 89.71$ \% $ 96.16$ \% $
    10$ \% $ 89.80$ \% $ 90.10$ \% $ 89.62$ \% $ 90.16$ \% $ 96.19$ \% $
    12$ \% $ 92.47$ \% $ 90.77$ \% $ 90.91$ \% $ 91.41$ \% $ 96.28$ \% $
    14$ \% $ 93.00$ \% $ 91.46$ \% $ 91.40$ \% $ 92.05$ \% $ 96.48$ \% $
    16$ \% $ 93.31$ \% $ 91.86$ \% $ 92.46$ \% $ 91.33$ \% $ 96.45$ \% $
    18$ \% $ 93.21$ \% $ 91.68$ \% $ 92.42$ \% $ 92.60$ \% $ 96.73$ \% $
    20$ \% $ 94.11$ \% $ 92.72$ \% $ 93.63$ \% $ 92.60$ \% $ 96.68$ \% $
    wine 2$ \% $ 66.02$ \% $ 67.19$ \% $ 68.17$ \% $ 67.58$ \% $ 90.44$ \% $
    4$ \% $ 71.49$ \% $ 74.81$ \% $ 72.00$ \% $ 82.21$ \% $ 93.99$ \% $
    6$ \% $ 74.06$ \% $ 77.65$ \% $ 73.34$ \% $ 85.71$ \% $ 93.43$ \% $
    8$ \% $ 76.78$ \% $ 80.63$ \% $ 75.61$ \% $ 86.45$ \% $ 94.58$ \% $
    10$ \% $ 78.62$ \% $ 82.37$ \% $ 75.60$ \% $ 89.44$ \% $ 94.47$ \% $
    12$ \% $ 79.68$ \% $ 83.30$ \% $ 76.30$ \% $ 90.07$ \% $ 94.84$ \% $
    14$ \% $ 79.33$ \% $ 84.16$ \% $ 76.17$ \% $ 90.91$ \% $ 95.21$ \% $
    16$ \% $ 81.52$ \% $ 84.35$ \% $ 77.94$ \% $ 91.71$ \% $ 95.24$ \% $
    18$ \% $ 82.34$ \% $ 85.00$ \% $ 77.99$ \% $ 91.97$ \% $ 95.41$ \% $
    20$ \% $ 83.01$ \% $ 87.45$ \% $ 78.17$ \% $ 91.71$ \% $ 95.49$ \% $
    australian 2$ \% $ 73.16$ \% $ 75.08$ \% $ 76.12$ \% $ 75.79$ \% $ 79.29$ \% $
    4$ \% $ 75.21$ \% $ 76.71$ \% $ 78.05$ \% $ 78.09$ \% $ 79.29$ \% $
    6$ \% $ 77.13$ \% $ 77.45$ \% $ 78.78$ \% $ 79.31$ \% $ 80.56$ \% $
    8$ \% $ 80.31$ \% $ 78.87$ \% $ 79.58$ \% $ 80.21$ \% $ 81.15$ \% $
    10$ \% $ 80.45$ \% $ 79.70$ \% $ 81.15$ \% $ 80.49$ \% $ 81.37$ \% $
    12$ \% $ 80.73$ \% $ 80.51$ \% $ 81.62$ \% $ 81.63$ \% $ 82.36$ \% $
    14$ \% $ 80.85$ \% $ 80.93$ \% $ 82.69$ \% $ 82.32$ \% $ 82.83$ \% $
    16$ \% $ 80.81$ \% $ 80.52$ \% $ 83.69$ \% $ 82.91$ \% $ 83.61$ \% $
    18$ \% $ 81.32$ \% $ 80.43$ \% $ 83.47$ \% $ 83.35$ \% $ 84.13$ \% $
    20$ \% $ 81.01$ \% $ 81.40$ \% $ 84.15$ \% $ 83.38$ \% $ 84.37$ \% $
    breast 2$ \% $ 87.98$ \% $ 88.26$ \% $ 90.54$ \% $ 92.89$ \% $ 96.67$ \% $
    4$ \% $ 88.04$ \% $ 89.68$ \% $ 92.15$ \% $ 94.71$ \% $ 96.78$ \% $
    6$ \% $ 88.99$ \% $ 89.81$ \% $ 93.25$ \% $ 94.54$ \% $ 96.82$ \% $
    8$ \% $ 89.50$ \% $ 90.78$ \% $ 95.10$ \% $ 95.30$ \% $ 96.90$ \% $
    10$ \% $ 89.73$ \% $ 91.70$ \% $ 95.47$ \% $ 95.85$ \% $ 96.94$ \% $
    12$ \% $ 90.18$ \% $ 91.93$ \% $ 95.59$ \% $ 95.67$ \% $ 96.94$ \% $
    14$ \% $ 91.82$ \% $ 92.61$ \% $ 95.66$ \% $ 96.18$ \% $ 97.02$ \% $
    16$ \% $ 91.41$ \% $ 92.91$ \% $ 96.24$ \% $ 96.12$ \% $ 97.21$ \% $
    18$ \% $ 91.73$ \% $ 92.43$ \% $ 96.67$ \% $ 96.19$ \% $ 97.16$ \% $
    20$ \% $ 92.47$ \% $ 93.21$ \% $ 96.32$ \% $ 96.33$ \% $ 97.24$ \% $


    In addition, an experiment was conducted to compare the performance of LPRC with TSVM [24] and the negative selection algorithm (NSA); these algorithms also need only a small number of labeled samples. The results of these 3 algorithms on the UCI datasets are shown in Figure 7(a)–(b) and Table 6, and the results on the synthetic datasets are shown in Figure 8.

    Figure 7.  The recognition accuracies with different labeled rate for different algorithms. (a) The classification accuracies of these algorithms with different labeled rate on dataset australian. (b) The results on dataset breast.
    Table 6.  Accuracies with different labeled rate of the 3 algorithms on these 2 UCI datasets.
    Dataset Labeled rate TSVM NSA LPRC
    australian 2$ \% $ 72.17$ \% $ 59.91$ \% $ 79.29$ \% $
    4$ \% $ 76.58$ \% $ 65.75$ \% $ 79.29$ \% $
    6$ \% $ 77.20$ \% $ 68.01$ \% $ 80.56$ \% $
    8$ \% $ 78.56$ \% $ 69.48$ \% $ 81.15$ \% $
    10$ \% $ 81.85$ \% $ 69.71$ \% $ 81.37$ \% $
    12$ \% $ 81.90$ \% $ 71.34$ \% $ 82.36$ \% $
    14$ \% $ 83.23$ \% $ 72.38$ \% $ 82.83$ \% $
    16$ \% $ 83.89$ \% $ 71.35$ \% $ 83.61$ \% $
    18$ \% $ 83.86$ \% $ 72.19$ \% $ 84.13$ \% $
    20$ \% $ 84.67$ \% $ 72.74$ \% $ 84.37$ \% $
    breast 2$ \% $ 93.22$ \% $ 82.13$ \% $ 96.67$ \% $
    4$ \% $ 93.91$ \% $ 88.27$ \% $ 96.78$ \% $
    6$ \% $ 94.77$ \% $ 91.23$ \% $ 96.82$ \% $
    8$ \% $ 94.72$ \% $ 92.57$ \% $ 96.90$ \% $
    10$ \% $ 94.87$ \% $ 93.66$ \% $ 96.94$ \% $
    12$ \% $ 94.99$ \% $ 92.94$ \% $ 96.94$ \% $
    14$ \% $ 95.11$ \% $ 93.16$ \% $ 97.02$ \% $
    16$ \% $ 95.23$ \% $ 93.93$ \% $ 97.21$ \% $
    18$ \% $ 95.35$ \% $ 94.31$ \% $ 97.16$ \% $
    20$ \% $ 95.14$ \% $ 93.53$ \% $ 97.24$ \% $

    Figure 8.  The classification result of the 3 algorithms on 2 synthetic datasets.

    After experimental comparison, we found that the LPRC algorithm has good classification performance, and in particular it achieves high accuracy when only a small number of labeled samples is available. As the number of labeled samples increases, the accuracy also shows an increasing trend. In the process of propagation, the error caused by the addition of newly labeled samples is reduced, and the case of one class of samples being swallowed by another is prevented. The classification performance is significantly improved.

    Another experiment was carried out to further test the proposed LPRC method. The results were analyzed with a significance test to assess the effectiveness of LPRC. Table 7 shows the average results of 50 runs of LPA and LPRC on the 4 UCI datasets, from which we can see that the accuracies of LPRC are higher than those of LPA. The statistical test is based on two hypotheses about the average accuracy acc of LPRC, where $ u_0 $ is the average accuracy of LPA:

    $ \begin{cases} H_0: acc \text{ is similar to } u_0 \\ H_1: acc \text{ is significantly larger than } u_0 \end{cases} $
    Table 7.  The results of statistical test ($ \alpha $ = 0.05).
    Dataset Labeled rate LPA LPRC Statistical test
    $ acc_1 $ $ var_1 $ $ acc_2 $ $ var_2 $
    iris 2$ \% $ 74.00$ \% $ 1.03E-02 94.33$ \% $ 1.74E-03 138.17
    4$ \% $ 81.99$ \% $ 7.25E-03 95.41$ \% $ 7.26E-04 129.57
    6$ \% $ 87.53$ \% $ 6.59E-03 95.94$ \% $ 4.59E-05 89.33
    8$ \% $ 89.71$ \% $ 1.96E-03 96.16$ \% $ 4.01E-05 230.36
    10$ \% $ 90.16$ \% $ 1.67E-03 96.19$ \% $ 4.27E-05 252.75
    12$ \% $ 91.41$ \% $ 1.22E-03 96.28$ \% $ 4.99E-05 279.43
    14$ \% $ 92.05$ \% $ 9.17E-04 96.48$ \% $ 4.27E-05 338.17
    16$ \% $ 91.33$ \% $ 8.02E-04 96.45$ \% $ 5.41E-05 446.88
    18$ \% $ 92.60$ \% $ 5.07E-04 96.73$ \% $ 4.13E-05 570.21
    20$ \% $ 92.60$ \% $ 4.72E-04 96.68$ \% $ 5.60E-05 605.09
    wine 2$ \% $ 67.58$ \% $ 1.67E-02 90.44$ \% $ 7.64E-03 95.82
    4$ \% $ 82.21$ \% $ 1.17E-02 93.99$ \% $ 4.30E-04 70.47
    6$ \% $ 85.71$ \% $ 6.44E-03 93.43$ \% $ 1.65E-03 83.91
    8$ \% $ 86.45$ \% $ 5.85E-03 94.58$ \% $ 3.50E-05 97.28
    10$ \% $ 89.44$ \% $ 1.63E-03 94.47$ \% $ 2.10E-04 216.01
    12$ \% $ 90.07$ \% $ 1.28E-03 94.84$ \% $ 6.53E-05 260.86
    14$ \% $ 90.91$ \% $ 1.33E-03 95.21$ \% $ 7.35E-05 226.32
    16$ \% $ 91.71$ \% $ 7.66E-04 95.24$ \% $ 4.83E-05 319.84
    18$ \% $ 91.97$ \% $ 7.70E-04 95.41$ \% $ 5.98E-05 312.72
    20$ \% $ 91.71$ \% $ 5.85E-04 95.49$ \% $ 6.25E-05 452.31
    australian 2$ \% $ 75.79$ \% $ 5.46E-03 79.29$ \% $ 3.12E-03 44.87
    4$ \% $ 78.09$ \% $ 4.64E-03 79.29$ \% $ 3.79E-03 18.10
    6$ \% $ 79.31$ \% $ 3.51E-03 80.56$ \% $ 1.38E-03 24.93
    8$ \% $ 80.21$ \% $ 2.72E-03 81.15$ \% $ 1.17E-03 24.18
    10$ \% $ 80.49$ \% $ 1.26E-03 81.37$ \% $ 1.16E-03 48.89
    12$ \% $ 81.63$ \% $ 1.36E-03 82.36$ \% $ 9.49E-04 37.57
    14$ \% $ 82.32$ \% $ 6.70E-04 82.83$ \% $ 6.64E-04 53.28
    16$ \% $ 82.91$ \% $ 5.16E-04 83.61$ \% $ 5.95E-04 94.96
    18$ \% $ 83.35$ \% $ 6.27E-04 84.13$ \% $ 4.27E-04 87.08
    20$ \% $ 83.38$ \% $ 6.11E-04 84.37$ \% $ 5.08E-04 113.42
    breast 2$ \% $ 92.89$ \% $ 2.12E-03 96.67$ \% $ 8.78E-06 124.80
    4$ \% $ 94.71$ \% $ 4.79E-04 96.78$ \% $ 1.31E-05 302.50
    6$ \% $ 94.54$ \% $ 7.34E-04 96.82$ \% $ 1.23E-05 217.44
    8$ \% $ 95.30$ \% $ 1.68E-04 96.90$ \% $ 1.05E-05 666.66
    10$ \% $ 95.85$ \% $ 8.29E-05 96.94$ \% $ 1.11E-05 920.39
    12$ \% $ 95.67$ \% $ 1.14E-04 96.94$ \% $ 1.70E-05 779.82
    14$ \% $ 96.18$ \% $ 9.49E-05 97.02$ \% $ 2.14E-05 619.60
    16$ \% $ 96.12$ \% $ 8.01E-05 97.21$ \% $ 1.79E-05 952.55
    18$ \% $ 96.19$ \% $ 1.17E-04 97.16$ \% $ 2.04E-05 580.35
    20$ \% $ 96.33$ \% $ 5.68E-05 97.24$ \% $ 1.87E-05 1121.48


    Based on the central limit theorem, the average accuracy obtained by repeating the algorithm can be assumed to follow a normal distribution. According to [25], $ \frac{(acc-u_0)}{(s/ \sqrt{n})} $ follows a $ T(n-1) $ distribution; if $ H_0 $ holds, the average accuracy $ acc $ would be close to the value of $ u_0 $. Otherwise, $ H_0 $ is rejected with a confidence level of $ 1-\alpha $ when $ \frac{(acc-u_0)}{(s/ \sqrt{n})}\ge T_\alpha (n-1) $ is satisfied. Here $ s $ represents the sample standard deviation and $ n $ is the number of repetitions.
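
    A minimal sketch of this test is shown below, using the accuracies of the 50 repeated runs; scipy.stats.t.ppf supplies the rejection threshold $ T_\alpha(n-1) $.

```python
import numpy as np
from scipy import stats

def one_sided_t_test(acc_runs, u0, alpha=0.05):
    """Compute (acc - u0) / (s / sqrt(n)) and compare it with T_alpha(n - 1)."""
    acc_runs = np.asarray(acc_runs, dtype=float)
    n = acc_runs.size                              # n = 50 repetitions in the paper
    acc = acc_runs.mean()
    s = acc_runs.std(ddof=1)                       # sample standard deviation
    t_value = (acc - u0) / (s / np.sqrt(n))
    threshold = stats.t.ppf(1 - alpha, df=n - 1)   # rejection threshold T_alpha(n-1)
    return t_value, t_value >= threshold           # reject H0 (accept H1) when True
```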

    The results of the statistical test are shown in Table 7, where $ acc_1 $ and $ acc_2 $ represent the average accuracies of LPA and LPRC, respectively.

    As can be observed in Table 7, for the given confidence level, all the test results are higher than the rejection threshold $ T_{\alpha = 0.05} (49) = 1.6777 $. This means that $ H_0 $ is rejected and $ H_1 $ holds. This experiment shows that the proposed LPRC method is effective.

    In this paper, a new algorithm, LPRC, is proposed to improve the stability of the traditional LPA. To achieve better propagation results, a credibility assessment scheme and a roll-back detection scheme are designed. The credibility of each sample is calculated first to determine the label propagation order, which ensures that the newly labeled samples added to the labeled set for future propagation are more reliable. Then, a roll-back mechanism based on feedback detection is used to control the propagation error caused by wrong labels. Only when the exit conditions are satisfied can the newly labeled samples keep their labels and be moved into the labeled sample dataset; otherwise, the samples labeled in this round are discarded.

    LPRC not only maintains the simplicity and efficiency of label propagation, but also increases its classification accuracy. The comparisons on the artificial synthetic datasets and the UCI datasets demonstrate that the classification performance of LPRC is clearly better than that of the traditional algorithms. In particular, it is well suited to situations with only a small number of labeled samples.

    In the future, we will continue our research on the label propagation algorithm in order to achieve its best performance. The research presented here is based on static samples, but in practice samples are often dynamic. Considering this situation, we will focus on dynamic samples in the next step to better fit practical applications.

    This work was supported by the Natural Science Foundation of China (Grant nos. 61872255, U1736212 and U19A2068).

    The authors declare there is no conflict of interest.


    Acknowledgments

    This study (project no. 1397/58819) was funded by the Student Research Committee, Shahid Beheshti University of Medical Sciences, Tehran, Iran. We also appreciate the “Student Research Committee” and “Research & Technology Chancellor” of Shahid Beheshti University of Medical Sciences for their financial support of this study.

    Conflict of interest

    The authors declare no conflict of interest.

    [1] WHO (2010) World Health Organization. Developing sexual health programmes: a framework for action. Geneva. Available from: http://apps.who.int/iris/bitstream/10665/70501/1/WHO_RHR_HRP_10.22_eng.pdf.
    [2] UNFPA (2010) Sexual and Reproductive Health For All. Reducing Poverty, Advancing Development and Protecting Human Rights. Available from: http://www.unfpa.org/publications/sexual-and-reproductive-health-all#sthash.xZTrIzAD.dpuf: UNFPA.
    [3] WHO (2006) World Health Organization. Defining Sexual Health, Report of Technical Consultation on Sexual Health, January 2002, Geneva. Available form: http://www.who.int/reproductivehealth/publications/sexual_health/defining_sexual_health.pdf.
    [4] Higgins JA, Mullinax M, Trussell J, et al. (2011) Sexual satisfaction and sexual health among university students in the United States. Am J Public Health 101: 1643–1654. doi: 10.2105/AJPH.2011.300154
    [5] Alizadeh S, Ebadi A, Kariman N, et al. (2018) Dyadic sexual communication scale: psychometrics properties and translation of the Persian version. Sex Relation Ther, 1–12.
    [6] WHO (2004) Sexual health-a new focus for WHO. Progress in Reproductive Health Research 67: 1–8.
    [7] Addis IB, Van Den Eeden SK, Wassel-Fyr CL, et al. (2006) Sexual activity and function in middle-aged and older women. Obstet Gynecol 107: 755–764. doi: 10.1097/01.AOG.0000202398.27428.e2
    [8] Johnson CE (2011) Sexual Health during pregnancy and the postpartum (CME). J Sex Med 8: 1267–1284. doi: 10.1111/j.1743-6109.2011.02223.x
    [9] Navidian A, Navabi Rigi S, Imani M, et al. (2016) The effect of sex education on the marital relationship quality of pregnant women. J Hayat 22: 115–127.
    [10] Bayrami R, Sattarzadeh N, Ranjbar Koocheksarai F, et al. (2009) Evaluation of sexual behaviors and some of its related factors in pregnant women, Tabriz, Iran 2005. Urmia Med J 20: 1–7.
    [11] Galazka I, Drosdzol-Cop A, Naworska B, et al. (2015) Changes in the sexual function during pregnancy. J Sex Med 12: 445–454. doi: 10.1111/jsm.12747
    [12] Pauls RN, Occhino JA, Dryfhout V, et al. (2008) Effects of pregnancy on pelvic floor dysfunction and body image; a prospective study. Int Urogynecol J 19: 1495–1501. doi: 10.1007/s00192-008-0670-3
    [13] Pauleta JR, Pereira NM, Graca LM (2010) Sexuality during pregnancy. J Sex Med 7: 136–142. doi: 10.1111/j.1743-6109.2009.01538.x
    [14] Vannier SA, Rosen NO (2017) Sexual distress and sexual problems during pregnancy: associations with sexual and relationship satisfaction. J Sex Med 14: 387–395. doi: 10.1016/j.jsxm.2016.12.239
    [15] Ozgoli G, Dolatian M, Ozgoli M, et al. (2008) Alteration in sexual drive during pregnancy in women referring to hospitals affiliated to Shaheed Beheshti Medical University. Adv Nurs Midwifery 18: 5–12.
    [16] Pasha H, HadjAhmadi M (2007) Evaluation of sexual behaviors in pregnant women and some related factors. Hormozgan Med J 10: 343–348.
    [17] Yu T, Pettit GS, Lansford JE, et al. (2010) The interactive effects of marital conflict and divorce on parent–adult children's relationships. J Marriage Fam 72: 282–292. doi: 10.1111/j.1741-3737.2010.00699.x
    [18] Merghati Khoie E, Afshar M, Yavari Kia P, et al. (2012) Sexual belief and behavior of pregnant women referring to public health centers in Karaj-2011. Iran J Obstet Gynecol Infertil 15: 7–14.
    [19] Nematollahzade M, Maasoumi R, Lamyian M, et al. (2010) Study of women's attitude and sexual function during pregnancy. J Ardabil Univ Med Sci 10: 241–249.
    [20] Ozgoli G, Khoshabi K, Velaii N, et al. (2006) Knowledge and attitude of pregnant women toward sex during pregnancy and its related factors in general hospitals referring to Shahid Beheshti University of Medical Sciences 2004. J Fam Res 2: 137–147.
    [21] Heydari M, Kiani Asiabar A, Faghih Zade S (2006) Couples' knowledge and attitude about sexuality in pregnancy. Tehran Univ Med J TUMS Publ 64: 83–89.
    [22] Abasalizadeh F, Abasalizadeh S (2011) Behavioral dichotomy in sexuality during pregnancy and effect of birth-week intercourse on pregnancy outcomes in an Iranian population. Internet J. Gynecol Obstet 14: 1–6.
    [23] Abouzari-Gazafroodi K, Najafi F, Kazemnejad E, et al. (2015) Demographic and obstetric factors affecting women's sexual functioning during pregnancy. Reprod Health 12: 12–72. doi: 10.1186/1742-4755-12-12
    [24] Abouzari-Gazafroodi K, Najafi F, Kazemnejad E, et al. (2013) Comparison of sexual function between nulliparous with multiparous pregnant women. J Hayat 18: 55–63.
    [25] Dadgar S, Karimi FZ, Bakhshi M, et al. (2018) Assessment of sexual dysfunction and its related factors in pregnant women referred to Mashhad health centers (2017–2018). Iran J Obstet Gynecol Infertil 21: 22–29.
    [26] Hajnasiri H, Aslanbeygi N, Moafi F, et al. (2018) Investigating the relationship between sexual function and mental health in pregnant females. J Nurs Edu 6: 33–40.
    [27] Mousazadeh T, Motavalli R (2018) Sexual function and behavior in pregnant women of Ardabil in 2016. J Health Care 20: 40–47. doi: 10.29252/jhc.20.1.40
    [28] Tabande A, Behnampour N, Joudi Mashahd M, et al. (2016) Sexual satisfaction of women with gestational diabetes. J Mazandaran Univ Med Sci 26: 202–205.
    [29] Balali Dehkordi N, Sadat Rouholamini M (2016) The role of body image and obsessive believes in prediction of sexual function among pregnant women. Iran J Obstet Gynecol Infertil 19: 7–16.
    [30] Jamali S, Mosalanejad L (2013) Sexual dysfunction in Iranian pregnant women. Iran J Reprod Med 11: 479–486.
    [31] Nik-Azin A, Nainian MR, Zamani M, et al. (2013) Evaluation of sexual function, quality of life, and mental and physical health in pregnant women. J Fam Reprod Health 7: 171–176.
    [32] Jamali S, Rasekh Jahromi A, Zarei F, et al. (2014) Compression of sexual dysfunction during three trimester of pregnancy in pregnant women who had referred to Peymanieh clinic Jahrom in 2013. Nurs Dev Health 5: 37–45.
    [33] Bostani Khalesi Z, Rahebi SM, Mansour Ghanaee M (2012) Evaluation of women's sexual performance during first pregnancy. Iran J Obstet Gynecol Infertil 15: 14–20.
    [34] Torkestani F, Hadavand SH, Khodashenase Z, et al. (2012) Frequency and perception of sexual activity during pregnancy in Iranian couples. Int J Fertil Steril 6: 107–110.
    [35] Ebrahimian A, Heydari M, Zafarghandi S (2010) Comparison of female sexual dysfunctions before and during pregnancy. Iran J Obstet Gynecol Infertil 13: 30–36.
    [36] Torkestani F, Hadavand S, Davati A, et al. (2009) Effect of coitus during the second half of pregnancy on pregnancy outcome. Daneshvar Med 16: 5–12.
    [37] Ozgoli G, Zaki F, Amir Ali Akbari S, et al. (2008) A Survey upon the sexual function and behaviour of pregnant women referring to state health centers of Ahvaz City-2007. Pajoohandeh J 13: 397–403.
    [38] Bayrami R, Sattarzadeh N, Ranjbar Koochaksariie F, et al. (2008) Sexual dysfunction in couples and its related factors during pregnancy. J Reprod Infertil 9: 271–283.
    [39] Heidari M, Mohamadi KH, Faghihzade S (2006) Study of changes in sexual activity during pregnancy. Daneshvar Med 13: 27–32.
    [40] Rahimi S, Seyyed Rasooli E (2004) Sexual behavior during pregnancy: A descriptive study of pregnant women in Tabriz, Iran. Payesh 3: 291–299.
    [41] Memarian Z, Lamiyan M, Azin A (2016) Levels of sexual satisfaction in third trimester of pregnancy in nulliparous women and related factors. J Mazandaran Univ Med Sci 25: 178–182.
    [42] Ahmadi Z, Molaie Yarandi E, Malekzadegan A, et al. (2011) Sexual satisfaction and its related factors in primigravidas. Iran J Nur 24: 54–62.
    [43] Nezal AJ, Samiee RF, Kalhor M, et al. (2018) Sexual quality of life in pregnant women: A cross sectional study. Payesh 17: 421–429.
    [44] Parhizkar A (2017) Study of the relationship between domestic violence and pregnancy outcomes in mothers referring to Sanandaj comprehensive health centers in 2015–2016. S J Nurs Midwifery Param Fac 2: 33–44.
    [45] Sarayloo K, Mirzaei Najmabadi K, Ranjbar F, et al. (2017) Prevalence and risk factors for domestic violence against pregnant women. Iran J Nur 29: 28–35. doi: 10.29252/ijn.29.104.28
    [46] Hesami K, Dolatian M, Shams J, et al. (2010) Domestic violence before and during pregnancy among pregnant women. Iran J Nur 23: 51–59.
    [47] Jahanfar S, Jamshidi R (2002) The prevalence of domestic violence among pregnant women who were attended in Iran university of medical sciences' hospitals. Iran J Nur 15: 93–99.
    [48] Abadi MNL, Ghazinour M, Nojomi M, et al. (2012) The buffering effect of social support between domestic violence and self-esteem in pregnant women in Tehran, Iran. J Fam Viol 27: 225–231. doi: 10.1007/s10896-012-9420-x
    [49] Taghizadeh Z, Purbakhtyar M, Daneshparvar H, et al. (2015) Comparison of the frequency of domestic violence and problem-solving skills among pregnant women with and without violence in Tehran. Iran J Forensic Med 21: 91–98.
    [50] Jahanfar S, Malekzadegan Z (2007) The prevalence of domestic violence among pregnant women who were attended in Iran University of Medical Science Hospitals. J Fam Viol 22: 643. doi: 10.1007/s10896-007-9084-0
    [51] Mohammad-Alizadeh-Charandabi S, Bahrami-Vazir E, Kamalifard M, et al. (2016) Intimate partner violence during the first pregnancy: A comparison between adolescents and adults in an urban area of Iran. J Forensic Leg Med 43: 53–60. doi: 10.1016/j.jflm.2016.07.002
    [52] Farrokh-Eslamlou H, Oshnouei S, Haghighi N (2014) Intimate partner violence during pregnancy in Urmia, Iran in 2012. J Forensic Leg Med 24: 28–32. doi: 10.1016/j.jflm.2014.03.007
    [53] Noori A, Sanago A, Jouybari L, et al. (2017) Survey of types of domestic violence and its related factors in pregnant mothers in Kalaleh at 2014. Iran J Obstet Gynecol Infertil 19: 54–62.
    [54] Behnam H, Moghadam Hoseini V, Soltanifar A (2008) Domestic violence against the Iranian pregnant women. Horizon Med Sci 14: 70–76.
    [55] Moeini B, Ezzati Rastegar K, Hamidi Y, et al. (2018) Social determinants of intimate partner violence among Iranian pregnant women. Koomesh 20: 350–357.
    [56] Mohammadhosseini E, Sahraean L, Bahrami T (2010) Domestic abuse before, during and after pregnancy in Jahrom, Islamic Republic of Iran. East Mediterr Health J 16: 752–758. doi: 10.26719/2010.16.7.752
    [57] Golchin NAH, Hamzehgardeshi Z, Hamzehgardeshi L, et al. (2014) Sociodemographic characteristics of pregnant women exposed to domestic violence during pregnancy in an Iranian setting. Iran Red Crescent Med J 16.
    [58] Baheri B, Ziaie M, Mohammadi SZ (2012) Frequency of domestic violence in women with adverse pregnancy outcomes (Karaj 2007–2008). Sci J Hamadan Nurs Midwifery Fac 20: 31–41.
    [59] Zadeh H, Nouhjah S, Hassan M (2011) Prevalence of domestic violence against pregnant women and its related factors in women referred to health centers in 2010 Ahvaz, Iran. Jentashapir J Health Res 2: 1–9.
    [60] Ali Kamali M, Rahimi Kian F, Mir Mohamad Ali M, et al. (2015) Comparison of domestic violence and its related factors in pregnant women in both urban and rural population in Zarand city, 2014. J Clin Nurs Midwifery 4: 69–78.
    [61] Khadivzadeh T, Erfanian F (2011) Comparison of domestic violence during pregnancy with the Pre-pregnancy period and its relating factors. Iran J Obstet Gynecol Infertil 14: 47–56.
    [62] Nejatizade AA, Roozbeh N, Yabandeh AP, et al. (2017) Prevalence of domestic violence on pregnant women and maternal and neonatal outcomes in Bandar Abbas, Iran. Electron Physician 9: 5166–5171. doi: 10.19082/5166
    [63] Faramarzi M, Esmaelzadeh S, Mosavi S (2005) Prevalence, maternal complications and birth outcome of physical, sexual and emotional domestic violence during pregnancy. Acta Med Iran 43: 115–122.
    [64] Bagherzadeh R, Keshavarz T, Sharif F, et al. (2008) Relationship between domestic violence during pregnancy and complications of pregnancy, type of delivery and birth weight on delivered women in hospital affiliated to Shiraz University of Medical Sciences. Horizon Med Sci 13: 51–58.
    [65] Baheri B, Ziaie M, Mohammadi Z (2012) Effect of domestic violence on pregnancy outcomes among pregnant women referring to Karaj Medical Centers. Hakim Health Sys Res 15: 140–146.
    [66] Hossieni VM, Toohill J, Akaberi A, et al. (2017) Influence of intimate partner violence during pregnancy on fear of childbirth. Sex Reprod Healthc 14: 17–23. doi: 10.1016/j.srhc.2017.09.001
    [67] Moghaddam Hossieni V, Toohill J, Akaberi A, et al. (2017) Influence of intimate partner violence during pregnancy on fear of childbirth. Sex Reprod Healthc 14: 17–23. doi: 10.1016/j.srhc.2017.09.001
    [68] Pazandeh F, Sheikhan Z, Keshavarz Z, et al. (2017) Effects of sex hormones in combined oral contraceptives and cyclofem on female sexual dysfunction score: A study on Iranian Females. Adv Nurs Midwifery 27: 9–14.
    [69] Gharacheh M, Azadi S, Mohammadi N, et al. (2016) Domestic violence during pregnancy and women's health-related quality of life. Glob J Health Sci 8: 27–34. doi: 10.5539/gjhs.v8n12p27
    [70] Ramezani S, Keramat A, Motaghi Z, et al. (2015) The relationship of sexual satisfaction and marital satisfaction with domestic violence against pregnant women. Int J Pediatr 3: 951–958.
    [71] Hassan M, Kashanian M, Roohi M, et al. (2014) Maternal outcomes of intimate partner violence during pregnancy: Study in Iran. Public Health 128: 410–415. doi: 10.1016/j.puhe.2013.11.007
    [72] Hassan M, Kashanian M, Hassan M, et al. (2013) Assessment of association between domestic violence during pregnancy with fetal outcome. Iran J Obstet Gynecol Infertil 16: 21–29.
    [73] Abdollahi F, Yazdani-Cherati J, Majidi Z (2015) Intimate partner violence during pregnancy in the Northern Iran (2010). J Gorgan Univ Med Sci 17: 89–96.
    [74] Ebrahimi E, Karimian Z, Bonab SKM, et al. (2017) The prevalence of domestic violence and its association with gestational hypertension in pregnant women. Int J Health Stud 3: 21–24.
    [75] Shakerinezhad M (2013) Domestic violence and related factors in pregnant women. ZUMS J 21: 117–126.
    [76] Salehi S, Mehralian H (2006) The prevalence and types of domestic violence against pregnant women referred to maternity clinics in Shahrekord, 2003. J Shahrekord Univ Med Sci 8: 72–77.
    [77] Breuner CC, Mattson G (2016) Sexuality education for children and adolescents. Pediatrics 138.
    [78] Anzaku SA, Ogbe EA, Ogbu GI, et al. (2016) Evaluation of changes in sexual response and factors influencing sexuality during pregnancy among Nigerian women in Jos, Nigeria. Int J Reprod Contracept Obstet Gynecol 5: 3576–3582. doi: 10.18203/2320-1770.ijrcog20163448
    [79] Malarewicz A, Szymkiewicz J, Rogala J (2006) Sexuality of pregnant women. Ginekol Pol 77: 733–739.
    [80] Riazi H, BanooZadeh S, MoghimBeigi A, et al. (2013) The effect of sexual health education on sexual function during pregnancy. Payesh 12: 367–374.
    [81] Fok WY, Chan LY-S, Yuen PM (2005) Sexual behavior and activity in Chinese pregnant women. Acta Obstet Gynecol Scand 84: 934–938. doi: 10.1111/j.0001-6349.2005.00743.x
    [82] Senkumwong N, Chaovisitsaree S, Rugpao S, et al. (2006) The changes of sexuality in Thai women during pregnancy. J Med Assoc Thai 89: 124–129.
    [83] Merghati Khoie E, Abolghasemi N, Taghdisi MH (2013) Child sexual health: qualitative study, explaining the views of parents. J Sch Public Health Inst Public Health Res 11: 65–74.
    [84] Leite AP, Campos AA, Dias AR, et al. (2009) Prevalence of sexual dysfunction during pregnancy. Rev Assoc Med Bras 55: 563–568. doi: 10.1590/S0104-42302009000500020
    [85] Erol B, Sanli O, Korkmaz D, et al. (2007) A cross-sectional study of female sexual function and dysfunction during pregnancy. J Sex Med 4: 1381–1387. doi: 10.1111/j.1743-6109.2007.00559.x
    [86] Ahmed MR, Madny EH, Ahmed WAS (2014) Prevalence of female sexual dysfunction during pregnancy among Egyptian women. J Obstet Gynaecol Res 40: 1023–1029. doi: 10.1111/jog.12313
    [87] Kuljarusnont S, Russameecharoen K, Thitadilok W (2011) Prevalence of sexual dysfunction in Thai pregnant women. Thai J Obstet Gynaecol 19: 172–180.
    [88] Stright B (2004) Maternal Neonatal Nursing. Philadelphia: Lippincott Williams and Wilkins.
    [89] Aslan G, Aslan D, Kizilyar A, et al. (2005) A prospective analysis of sexual functions during pregnancy. Int J Impot Res 17: 154–157. doi: 10.1038/sj.ijir.3901288
    [90] Jamali S, Rasekh JA, Fatmeh Z (2012) Comparison of domains of sexual response between the three trimesters of pregnancy. Int J Gynaecol Obstet 119: S594. doi: 10.1016/S0020-7292(12)61384-8
    [91] Schomerus G, Appel K, Meffert PJ, et al. (2013) Personality-related factors as predictors of help-seeking for depression: a population-based study applying the behavioral model of health services use. Soc Psychiatry Psychiatr Epidemiol 48: 1809–1817. doi: 10.1007/s00127-012-0643-1
    [92] Khajehpour M, Simbar M, Jannesari S, et al. (2013) Health status of women with intended and unintended pregnancies. Public Health 127: 58–64. doi: 10.1016/j.puhe.2012.08.011
    [93] Gessessew A (2009) Unwanted pregnancy and its impact on maternal health and utilization of health services in Tigray Region (Adigrat Hospital). Ethiop Med J 47: 1–8.
    [94] Serati M, Salvatore S, Siesto G, et al. (2010) Female sexual function during pregnancy and after childbirth. J Sex Med 7: 2782–2790. doi: 10.1111/j.1743-6109.2010.01893.x
    [95] Brotto L, Atallah S, Johnson-Agbakwu C, et al. (2016) Psychological and interpersonal dimensions of sexual function and dysfunction. J Sex Med 13: 538–571. doi: 10.1016/j.jsxm.2016.01.019
    [96] Van Minnen A, Kampman M (2000) The interaction between anxiety and sexual functioning: A controlled study of sexual functioning in women with anxiety disorders. Sex Relation Ther 15: 47–57. doi: 10.1080/14681990050001556
    [97] Heider N, Spruyt A, De Houwer J (2015) Implicit beliefs about ideal body image predict body image dissatisfaction. Front Psychol 6: 1402. doi: 10.3389/fpsyg.2015.01402
    [98] Mehta UJ, Siega-Riz AM, Herring AH (2011) Effect of body image on pregnancy weight gain. Matern Child Health J 15: 324–332. doi: 10.1007/s10995-010-0578-7
    [99] Solimany A, Delpisheh A, Khademi N, et al. (2016) Prevalence of violence against women during pregnancy in Iran: A systematic review and meta-analysis. J Urmia Nurs Midwifery Fac 13: 973–986.
    [100] Shamu S, Abrahams N, Temmerman M, et al. (2011) A systematic review of African Studies on intimate partner violence against pregnant women: Prevalence and risk factors. PLoS One 6.
    [101] Ergönen AT, Özdemir MH, Can İÖ, et al. (2009) Domestic violence on pregnant women in Turkey. J Forensic Leg Med 16: 125–129. doi: 10.1016/j.jflm.2008.08.009
    [102] Hammoury N, Khawaja M (2007) Screening for domestic violence during pregnancy in an antenatal clinic in Lebanon. Eur J Public Health 17: 605–606. doi: 10.1093/eurpub/ckm009
    [103] Afifi ZE, Al-Muhaideb NS, Hadish NF, et al. (2011) Domestic violence and its impact on married women's health in Eastern Saudi Arabia. Saudi Med J 32: 612–620.
    [104] Soleimani M, Jamshidimanesh M, Daneshkojuri M, et al. (2012) Correlation between partner violence and preterm labor. J Qazvin Univ Med Sci 15: 53–59.
    [105] Cavalin C (2010) WHO Multi-country study on women's health and domestic violence against women. Initial results on prevalence, health outcomes and women's responses. JSTOR, 837–839. Available from: https://www.who.int/reproductivehealth/publications/violence/24159358X/en/.
    [106] Silverman JG, Decker MR, Reed E, et al. (2006) Intimate partner violence victimization prior to and during pregnancy among women residing in 26 US states: Associations with maternal and neonatal health. Am J Obstet Gynecol 195: 140–148. doi: 10.1016/j.ajog.2005.12.052
    [107] Records K (2007) A critical review of maternal abuse and infant outcomes: Implications for newborn nurses. Newborn Infant Nurs Rev 7: 7–13. doi: 10.1053/j.nainr.2006.12.005
    [108] Hill A, Pallitto C, McCleary‐Sills J, et al. (2016) A systematic review and meta‐analysis of intimate partner violence during pregnancy and selected birth outcomes. Int J Gynaecol Obstet 133: 269–276. doi: 10.1016/j.ijgo.2015.10.023
    [109] Han A, Stewart DE (2014) Maternal and fetal outcomes of intimate partner violence associated with pregnancy in the Latin American and Caribbean region. Int J Gynaecol Obstet 124: 6–11. doi: 10.1016/j.ijgo.2013.06.037
  • © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)