Research article

Navigating the herd: The dynamics of investor behavior in the Brazilian stock market

  • Received: 06 June 2024 Revised: 20 September 2024 Accepted: 24 September 2024 Published: 27 September 2024
  • JEL Codes: G11, G40, G41

  • We investigated under-researched dimensions of market-wide herding behavior in the Brazilian stock market using a sample from January 2010 to December 2022. Employing OLS and quantile regressions, we found no evidence of herding in the sample or across market conditions, including return, trading volume, and volatility. However, dynamic analysis via rolling window regressions revealed intermittent herding behavior during various subperiods, including at the onset of the COVID-19 pandemic and around the beginning of the war in Ukraine. Additionally, regression results differentiate between herding driven by fundamental and non-fundamental factors, elucidating the predominance of negative herding attributable to non-fundamental influences. These findings underscore the presence of irrational behavior among investors, potentially leading to increased price instability and deviations from fundamental values. Moreover, the association of negative herding with diversifiable risk suggests potential implications for portfolio composition. Overall, this study contributes to understanding investor behavior in emerging markets and highlights the impact of herding on market dynamics and portfolio management strategies.

    Citation: Júlio Lobão, Luís Pacheco, Maria Beatriz Naia. Navigating the herd: The dynamics of investor behavior in the Brazilian stock market[J]. Quantitative Finance and Economics, 2024, 8(3): 635-657. doi: 10.3934/QFE.2024024




    Plant virus diseases have caused great losses to agriculture. RNA interference (RNAi) is attracting increasing attention as an important mechanism of plant resistance to viruses [1]. Three main types of key proteins act in RNAi: Dicer-like (DCL), RNA-dependent RNA polymerase (RDR), and Argonaute (AGO) [2,3,4]. The main process is as follows: (1) DCL cuts double-stranded RNA (dsRNA) into primary small interfering RNAs (siRNAs); (2) RDR reconstitutes siRNAs into dsRNA and then cuts the newly synthesized dsRNA into additional secondary siRNAs; (3) AGO combines with siRNAs to form the RNA-induced silencing complex (RISC) [5]. Through complementary base pairing, the RISC targets and ultimately degrades viral RNA sequences. siRNAs, in the size range of 21–24 nucleotides, mediate RNAi and constitute the core mechanism of the entire process [6]. The main activity of siRNAs is the negative regulation of specific mRNAs or gene expression through target degradation, translational repression, or the direction of chromatin modification [7,8].

    Phasic small interfering RNAs (phasiRNAs) are plant secondary siRNAs that are typically produced by miRNAs targeting polyadenylated mRNAs [9]. A growing number of studies have shown that miRNA-initiated phasiRNAs play crucial roles in regulating plant growth and stress responses [10,11,12]. Substantial analyses of genome and small RNA (sRNA) sequences have enhanced the annotation of sRNAs, notably phasiRNAs and their targets [13]; accordingly, relevant databases have been established in succession. Recently, Liu et al. [14] established a database named TarDB that contains 62,888 cross-species conserved miRNA targets, 4304 degradome/PARE-seq supported miRNA targets, and 3182 miRNA-triggered phasiRNA loci.

    Given the importance of phasiRNAs in plant–pathogen interactions, we propose an efficient deep-learning-based predictor, named DIGITAL, for identifying miRNA-triggered phasiRNA loci. We collected experimentally verified miRNA–mRNA duplexes and phasiRNAs from the TarDB database, and generated the negative dataset by randomly substituting a certain proportion of nucleotides in the positive samples. The key architecture of DIGITAL consists of a multi-scale residual network (multi-scale ResNet) and a bi-directional long short-term memory (bi-LSTM) network. When tested on two independent test sets of 21-nt and 24-nt phasiRNAs, DIGITAL reached accuracies of 98.45% and 94.02%, respectively, demonstrating good robustness and generalization ability.

    Figure 1 illustrates the overall design of DIGITAL. The input layer transforms each nucleotide into a four-dimensional binary vector by one-hot encoding: A, C, G, and T are represented as (1 0 0 0), (0 1 0 0), (0 0 1 0), and (0 0 0 1), respectively. To obtain feature vectors of the same dimension, shorter sequences are padded with zeros. A deep residual block formed by multi-scale CNN layers is then employed to extract locally relevant features from the input vectors, while a bi-directional long short-term memory (bi-LSTM) network explores long-range global contextual information. Finally, the resulting latent information is integrated through a flatten layer, and a fully connected layer with softmax is adopted for label classification.
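    The encoding scheme above can be sketched as follows; the names `encode` and `max_len` are illustrative and not taken from the authors' code.

```python
# Illustrative sketch of the one-hot scheme described above (our own
# helper, not the authors' implementation).
ONE_HOT = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
           "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}

def encode(seq, max_len):
    """One-hot encode a nucleotide sequence, zero-padding to max_len rows."""
    mat = [ONE_HOT[base] for base in seq.upper()]
    mat += [[0, 0, 0, 0]] * (max_len - len(mat))  # pad with zero vectors
    return mat
```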

    Figure 1.  The overall framework of DIGITAL.

    We collected the siRNA sequence information from the TarDB database [14]. This database contains three categories of relatively high-confidence plant miRNA targets: (i) cross-species conserved miRNA targets; (ii) degradome/PARE (Parallel Analysis of RNA Ends) sequencing supported miRNA targets; and (iii) miRNA-triggered phasiRNA loci. Only the miRNA-triggered phasiRNAs were used to construct our prediction model, because they have been identified by previous well-documented criteria [15,16,17,18].

    The TarDB platform deposits both 21-nt and 24-nt phasiRNAs from various plants. We obtained 6389 miRNA–phasiRNA target duplexes in which the miRNA triggered a 21-nt phasiRNA, as well as 526 duplexes in which the miRNA triggered a 24-nt phasiRNA, across 43 plant species. After removing repetitive miRNA–target pairs, 5408 duplexes remained for miRNA-initiated 21-nt phasiRNAs, together with 443 duplexes for miRNA-initiated 24-nt phasiRNAs; these served as the positive samples.

    The approach to constructing the corresponding negative dataset is similar to the method proposed by Oubounyt et al. [19] and is based on the fact that positive and negative sets with little overlap are easier to distinguish [20]. In detail, each positive sequence is divided into multiple 1-bp fragments, 60% of which are selected and replaced randomly, with the remaining 40% conserved. In this way, each negative sequence is generated from a positive sequence of equal length, and the number of negative samples equals the number of positive samples.
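    A minimal sketch of this negative-sample construction (our own illustrative helper, not the authors' code; note that a random replacement may occasionally re-draw the original base at a position):

```python
import random

def make_negative(seq, frac=0.6, rng=None):
    """Replace a random `frac` of positions in a positive sequence with
    random nucleotides, leaving the remaining positions conserved."""
    rng = rng or random.Random(0)
    positions = rng.sample(range(len(seq)), k=int(len(seq) * frac))
    out = list(seq)
    for i in positions:
        out[i] = rng.choice("ACGT")  # may coincide with the original base
    return "".join(out)
```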

    In addition, the dataset of miRNA-initiated 21-nt phasiRNAs is further divided into three subsets: a training set (60% of the original dataset) used to train the classifier, a validation set (20%) used to optimize hyper-parameters, and an independent test set (20%, denoted test_21) used to evaluate the performance of DIGITAL. The dataset of miRNA-initiated 24-nt phasiRNAs is used as a second independent test set, denoted test_24. The statistics of each dataset are shown in Table 1.
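    Assuming a simple random shuffle (the paper does not specify the exact splitting procedure), the 60/20/20 split can be sketched as:

```python
import random

def split_dataset(samples, seed=42):
    """Shuffle and split into 60% train, 20% validation, 20% test."""
    rng = random.Random(seed)
    data = list(samples)
    rng.shuffle(data)
    n = len(data)
    n_train, n_val = int(n * 0.6), int(n * 0.2)
    train = data[:n_train]
    val = data[n_train:n_train + n_val]
    test = data[n_train + n_val:]
    return train, val, test
```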

    Table 1.  The statistics of datasets.
    Dataset Positive Negative
    Training 3244 3244
    Validation 1082 1082
    Test_21 1082 1082
    Test_24 443 443


    The fundamental structures in DIGITAL are a multi-scale ResNet and a bi-LSTM architecture, both of which have been used in several studies [21,22,23]. Compared with a traditional CNN, a residual network improves the flow of information and avoids the gradient-vanishing and degradation problems caused by network depth, so we used a multi-scale ResNet with identity mappings. To additionally extract long-range global contextual information, we combined the multi-scale ResNet with a bi-LSTM. Details are as follows.

    The multi-scale ResNet comprises three channels of 1-dimensional CNNs with 64 convolution filters each. The first channel contains one convolution layer with its kernel size fixed to 1; the second employs two convolution layers with kernel sizes 1 and 3; the third uses three convolution layers with kernel sizes 1, 5, and 5. The bi-LSTM with a self-attention network consists of 121 hidden units, followed by a fully-connected layer with 16 units. All layers are trained jointly with the Adam optimizer and a batch size of 110, with the learning rate regulated by the learning rate scheduler in Keras. Early stopping is applied based on the validation loss. To provide insight into the training process of DIGITAL, the evolution of the average validation loss and accuracy during training is shown in Supplementary Figure S1.
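    The multi-scale idea of running several kernel sizes over the same input and concatenating the results can be illustrated with a minimal numpy sketch; the toy averaging filters below stand in for the model's 64 learned filters per channel, and this is not the authors' Keras implementation.

```python
import numpy as np

def conv1d(x, kernel):
    """'Same'-padded 1-D convolution over x of shape (length, channels)."""
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, k - 1 - pad), (0, 0)))
    return np.array([(xp[i:i + k] * kernel).sum() for i in range(x.shape[0])])

def multi_scale(x):
    feats = []
    for k in (1, 3, 5):                        # one kernel size per branch
        kernel = np.ones((k, x.shape[1])) / k  # fixed toy filter for illustration
        feats.append(conv1d(x, kernel))
    return np.stack(feats, axis=1)             # shape: (length, n_branches)
```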

    We evaluate DIGITAL using four common metrics: sensitivity (Sn), specificity (Sp), accuracy (Acc), and the Matthews correlation coefficient (MCC). The formulas are as follows:

$$
\begin{aligned}
Sn &= \frac{TP}{TP+FN}, \qquad Sp = \frac{TN}{TN+FP}, \qquad Acc = \frac{TP+TN}{TP+TN+FP+FN},\\[4pt]
MCC &= \frac{TP\times TN - FP\times FN}{\sqrt{(TP+FN)(FP+TN)(TP+FP)(FN+TN)}}
\end{aligned}
\tag{1}
$$

    where TP, TN, FP, and FN denote the numbers of true positives, true negatives, false positives, and false negatives, respectively. The area under the receiver operating characteristic curve (AUC) is also used to examine the performance of DIGITAL.
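    These metrics can be computed directly from the confusion counts; a small sketch consistent with Eq. (1):

```python
import math

def metrics(tp, tn, fp, fn):
    """Compute Sn, Sp, Acc and MCC from confusion-matrix counts."""
    denom = math.sqrt((tp + fn) * (fp + tn) * (tp + fp) * (fn + tn))
    return {
        "Sn": tp / (tp + fn),
        "Sp": tn / (tn + fp),
        "Acc": (tp + tn) / (tp + tn + fp + fn),
        "MCC": (tp * tn - fp * fn) / denom if denom else 0.0,
    }
```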

    In this study, we propose a deep learning model, named DIGITAL, based on a multi-scale ResNet and a bi-LSTM, to predict miRNA-triggered phasiRNA loci. During training, Bayesian optimization was used to search for the most appropriate hyper-parameters for identifying miRNA-triggered phasiRNA sites. DIGITAL reaches satisfactory accuracies of 98.45% and 94.02% on the independent datasets test_21 and test_24, respectively. In addition, six traditional classification algorithms were constructed and compared with DIGITAL; in these independent tests, DIGITAL outperforms all six, demonstrating the effectiveness of our model. The robustness and generalization ability of DIGITAL suggest that it can readily be extended to recognize miRNA targets in other species.

    Bayesian optimization is a very effective global optimization algorithm widely used in prediction tasks in bioinformatics [24,25,26,27]. In this work, to further improve the performance of DIGITAL, we applied it to optimize key hyper-parameters during training. As in previous works [28,29], the difference between the experimental and predicted values on the validation set is used as the fitness function for hyper-parameter optimization. The number of units in the bi-LSTM [30,31,32] and in the fully-connected layer, as well as the batch size, each vary over the range [16, 128]. The results for each combination are listed in Supplementary Table S1; the best results, with an Acc of 98.71%, MCC of 96.13%, and AUC of 99.78%, were achieved with the combination (121, 16, 110).
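    The search loop can be illustrated with the following minimal sketch. For brevity it uses plain random search over the same ranges as a stand-in for the Gaussian-process acquisition step of true Bayesian optimization, and `validation_accuracy` is a hypothetical callback representing "train DIGITAL with this configuration and score it on the validation set".

```python
import random

def search(validation_accuracy, n_trials=30, seed=0):
    """Random search over (bi-LSTM units, dense units, batch size),
    each drawn from [16, 128] as in the paper's search space."""
    rng = random.Random(seed)
    best_score, best_cfg = float("-inf"), None
    for _ in range(n_trials):
        cfg = (rng.randint(16, 128), rng.randint(16, 128), rng.randint(16, 128))
        score = validation_accuracy(cfg)  # fitness on the validation set
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_cfg, best_score
```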

    In addition, we also chose parameters by empirical methods [33,34], setting the number of bi-LSTM units to 64, the number of fully-connected units to 32, and the batch size to 100. The prediction performance of this combination is shown in Figure 2 as DIGITAL_E, where DIGITAL denotes the Bayesian-optimized model and DIGITAL_E the empirically tuned one. As shown in Figure 2, the model based on Bayesian optimization achieved superior results on the validation dataset. Thus, the final model for phasiRNA identification uses 121 bi-LSTM units, 16 fully-connected units, and a batch size of 110.

    Figure 2.  Results of empirical tuning and Bayesian optimization on the validation dataset.

    In this section, the independent datasets test_21 and test_24 are used to further evaluate the robustness and generalization ability of DIGITAL. As shown in Table 2, DIGITAL obtains an Acc of 98.45%, Sn of 98.95%, Sp of 98.02%, and MCC of 96.95% on the independent dataset test_21, and achieves an Acc of 94.02%, Sn of 95.04%, Sp of 93.00%, and MCC of 88.05% on the independent dataset test_24. To display the prediction results more intuitively, we plot the ROC curves and calculate the AUC values, as shown in Figure 3. Our model achieves satisfactory AUCs of 99.88% on test_21 and 98.41% on test_24. The similar prediction performance demonstrates that DIGITAL has good robustness and generalization ability. These two sets of results also indicate that sequence length has a considerable influence on prediction performance; as more data become available, it will be worthwhile to build dedicated predictors for different sequence lengths.

    Figure 3.  The ROC curves of two independent datasets.

    In addition, we implemented 5-fold and 10-fold cross-validation tests to further evaluate generalization capability and list the average results in Supplementary Table S2. DIGITAL achieved average Acc values of 98.14% and 98.30% under 5-fold and 10-fold cross-validation, respectively. The k-fold (k = 5, 10) results are essentially consistent with those on the validation dataset.

    In addition to the deep learning classifier, we also applied six commonly used traditional machine learning methods to develop predictive models: support vector machine (SVM), naive Bayes (NB), k-nearest neighbors (KNN), XGBoost, logistic regression (LR), and random forest (RF). For each algorithm, we performed parameter selection to achieve the best prediction results; performance before and after selection on the validation dataset is shown in Supplementary Figure S2. Interestingly, except for KNN, the models show no significant change after parameter selection. We therefore tested the six models with default parameters on our two independent datasets and compared them with DIGITAL. As shown in Table 2, DIGITAL performs better than the other predictors in terms of MCC, Acc, and Sn, although random forest reaches the best Sp. Specifically, the MCC of DIGITAL is 1% higher than that of the second-best method (SVM) on test_21 and 16.9% higher than that of the second-best method (XGBoost) on test_24. The high MCC also indicates that Sn and Sp are balanced and relatively similar.
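    As a hedged illustration of this baseline comparison, default-parameter scikit-learn models can be fit and scored as below. Toy synthetic data stands in for the phasiRNA features, and XGBoost is omitted since it is a separate package; this is not the authors' evaluation code.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Toy binary classification data in place of the encoded sequences.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

models = {"SVM": SVC(), "NB": GaussianNB(), "KNN": KNeighborsClassifier(),
          "LR": LogisticRegression(max_iter=1000), "RF": RandomForestClassifier()}
# Accuracy of each default-parameter baseline on the held-out split.
scores = {name: m.fit(Xtr, ytr).score(Xte, yte) for name, m in models.items()}
```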

    Table 2.  The performance of DIGITAL and other six machine learning algorithms on two independent datasets.
    Method Dataset Sn(%) Sp(%) Acc(%) MCC AUC
    DIGITAL test_21 98.95 98.02 98.45 0.969 0.999
    test_24 95.04 93.00 94.02 0.881 0.984
    SVM test_21 96.08 99.81 97.92 0.959 0.979
    test_24 43.57 99.09 71.33 0.513 0.713
    KNN test_21 97.72 12.57 55.78 0.197 0.551
    test_24 83.97 73.81 78.89 0.581 0.789
    NB test_21 88.89 99.81 94.27 0.891 0.944
    test_24 1.58 98.65 50.11 0.009 0.501
    XGBoost test_21 97.63 98.87 98.24 0.965 0.983
    test_24 73.14 96.36 84.65 0.712 0.847
    LR test_21 95.26 94.28 94.78 0.896 0.948
    test_24 11.29 92.10 51.69 0.058 0.517
    RF test_21 95.81 100.00 97.87 0.958 0.979
    test_24 4.51 100.00 52.26 0.152 0.523


    As shown in Table 2, for all seven classification algorithms, prediction results on dataset test_24 are inferior to those on dataset test_21. This may be because the models were trained on miRNA-initiated 21-nt phasiRNAs. In future work, we will seek to overcome the influence of sequence length on the model.

    In this section, we constructed a classification model based on the word2vec embedding method. We adopted 1-grams, a context window of 4, and an embedding dimension of 4, matching the dimension of the one-hot encoding, and used our training set as the corpus for training the embedding matrix. The comparison between one-hot and word2vec is shown in Figure 4. The model based on one-hot encoding reached the best performance on the validation set for all five indicators, and gave relatively low Sp but high values for the other four indicators on both the test_21 and test_24 datasets. We provide the code for both models at https://github.com/yuanyuanbu/DIGITAL.

    Figure 4.  The performance evaluation results of one-hot and word2vec models.

    The hybrid network of DIGITAL is composed of two parts, the multi-scale ResNet and the bi-LSTM. To analyze the role of each part, we built two base models, one using only the multi-scale ResNet and one using only the bi-LSTM. The prediction results, measured by the five evaluation indicators, are listed in Table 3. DIGITAL clearly outperforms the other two models on four indicators (Sn, Acc, MCC, and AUC), with an improvement of more than 5% in Sn, although the model based only on the bi-LSTM achieved a higher Sp of 99.34% and the model based only on the multi-scale ResNet a Sp of 98.88%. Why the integration of the multi-scale ResNet and bi-LSTM improves Sn so markedly is worth studying in the future.

    Table 3.  The performance of ablation experiment.
    Model Sn(%) Sp(%) Acc(%) MCC AUC
    DIGITAL 98.86 97.31 98.06 0.961 0.998
    Only bi-LSTM 91.57 99.34 95.37 0.910 0.994
    Only multi-scale ResNet 93.86 98.88 96.35 0.928 0.969


    To intuitively display how the deep model learns to distinguish samples, we employed the popular visualization algorithm t-distributed stochastic neighbor embedding (t-SNE), which has been widely used in bioinformatics [35,36]. As illustrated in Figure 5A and 5B, the two classes of points are heavily intermixed under the raw one-hot encoding and after the multi-scale ResNet. In contrast, most points of the two classes are separated after the bi-LSTM, although the boundary is not yet sharp (Figure 5C). After the final dense layer, the two classes are almost completely separated, with a clear boundary. Taken together, these results indicate that the DIGITAL framework can effectively learn discriminative information from the one-hot encoding of the RNA sequences.

    Figure 5.  Visualization of training process projected in 2D space.

    This work was supported by the Fundamental Research Funds for the Central Universities 3132022204.

    The authors declare no competing interests.

    Table S1.  The details of Bayesian optimization.
    Iter Target Bi-LSTM Dense Batch_size
    1 0.9815 43 75 23
    2 0.9815 28 105 61
    3 0.9797 45 73 41
    4 0.9852 58 125 64
    5 0.9838 127 67 26
    6 0.9797 74 128 60
    7 0.9871 121 16 110
    8 0.9834 113 73 123
    9 0.9866 98 42 61
    10 0.9838 97 79 106
    11 0.9783 38 66 98
    12 0.9810 30 107 116
    13 0.9806 126 86 18
    14 0.9834 42 19 81
    15 0.9806 70 41 55
    16 0.9834 99 111 35
    17 0.9801 98 41 60
    18 0.9838 106 99 89
    19 0.9815 56 97 39
    20 0.9866 112 104 95
    21 0.9861 76 119 52
    22 0.9847 81 89 112
    23 0.9797 110 24 61
    24 0.9820 44 89 106
    25 0.9810 69 65 85
    26 0.9857 82 19 109
    27 0.9838 79 87 73
    28 0.9834 61 43 38
    29 0.9783 97 80 106
    30 0.9857 98 21 59

    Table S2.  The performance of the 5-fold and 10-fold cross validation tests.
    Sn(%) Sp(%) Acc(%) MCC AUC
    5-fold 98.44 97.83 98.14 0.963 0.997
    10-fold 98.61 97.99 98.30 0.966 0.998

    Figure S1.  The loss and accuracy trend with different number of epochs on the DIGITAL.
    Figure S2.  Accuracy comparison of six machine learning methods before and after parameter selection on validation datasets.


    [50] Shrotryia VK, Kalra H (2020) Herding and BRICS markets: A study of distribution tails. Rev Behav Financ 14: 91–114. https://doi.org/10.1108/rbf-04-2020-0086 doi: 10.1108/rbf-04-2020-0086
    [51] Signorelli PFCL, Camilo-da-Silva E, Barbedo CHdS (2021) An examination of herding behavior in the Brazilian equity market. BBR. Brazilian Bus Rev 18: 236–254. https://doi.org/10.15728/bbr.2021.18.3.1 doi: 10.15728/bbr.2021.18.3.1
    [52] Spyrou S (2013) Herding in financial markets: A review of the literature. Rev Behav Financ 5: 175–194. https://doi.org/10.1108/rbf-02-2013-0009 doi: 10.1108/rbf-02-2013-0009
    [53] Tan L, Chiang TC, Mason JR, et al. (2008) Herding behavior in Chinese stock markets: An examination of A and B shares. Pac-Basin Financ J 16: 61–77. https://doi.org/10.1016/j.pacfin.2007.04.004 doi: 10.1016/j.pacfin.2007.04.004
    [54] Vartanian PR, dos Santos HF, da Silva WM, et al. (2022). Macroeconomic and financial variables' influence on Brazilian stock and real estate markets: A comparative analysis in the period from 2015 to 2019. Modern Economy 13: 747–769. https://doi.org/10.4236/me.2022.135040 doi: 10.4236/me.2022.135040
    [55] Vo XV, Phan DBA (2016) Herd behavior in emerging equity markets: Evidence from Vietnam. Asian J Law Econ 7: 369–383. https://doi.org/10.1515/ajle-2016-0020 doi: 10.1515/ajle-2016-0020
    [56] Zhou J, Anderson RI (2011) An empirical investigation of herding behavior in the U.S. REIT market. J Real Estate Financ Econ 47: 83–108. https://doi.org/10.1007/s11146-011-9352-x doi: 10.1007/s11146-011-9352-x
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)