While the atmosphere, a complex natural gaseous system, is essential to supporting life on Earth, air pollution is recognized as a threat to human health as well as to the Earth's ecosystems. Among airborne particles, those less than 2.5 micrometers in diameter are called "fine" particles, i.e., PM2.5.
Ever since urban air quality was listed as one of the world's worst toxic pollution problems in the 2008 Blacksmith Institute World's Worst Polluted Places report, an increasing number of air quality monitoring stations have been established to inform people of the real-time concentrations of air pollutants such as PM2.5 and O3.
1 GB3095-2012 Ambient Air Quality Standards, released by the Ministry of Environmental Protection of the People's Republic of China in February 2012
Unfortunately, current air quality monitoring stations are still insufficient, because building and maintaining such a station costs a great deal of money, land, and human resources. Even Beijing, the capital of China, has only 22 stations covering the entire city.
Although many statistics-based models have been proposed by environmental scientists to approximate the quantitative link between factors such as traffic and wind and air quality, the empirical assumptions and parameters on which they are based may not be applicable to all urban environments. Some methodologies, e.g., those based on crowd and participatory sensing using sensor-equipped mobile phones, only work for a very few kinds of gases such as CO.
In this paper, we analyse and decompose one year of real-time PM2.5 concentration data according to time series decomposition theory, and infer future fine-grained air quality information throughout a city using historical and real-time air quality data reported by existing monitoring stations. We also apply stochastic modelling for fitting and forecasting. We compare the two methodologies and discuss their respective strengths and weaknesses in PM2.5 prediction.
Contributions. The contributions of this paper are as follows:
1. We propose a practical system for time-series-based PM2.5 prediction built on limited real-time data, without expensive devices. Predicting fine particles such as PM2.5 provides effective support for air quality management. Our experimental results demonstrate the effectiveness of our method.
2. We compare and analyse the characteristics of two essentially distinct methods applied to PM2.5. Variations in PM2.5 are intrinsically caused by complex human activities, and deterministic and stochastic methods can separately uncover different aspects of the hidden patterns of those activities.
Organization. The rest of the paper is organized as follows: Section 2 introduces the background material. Sections 3 and 4 present in detail the process of deterministic and stochastic prediction, respectively. Section 5 discusses the characteristics of the two methods. Related work and conclusions are given in Sections 6 and 7.
This section presents the basic concepts related to this research.
Definition 2.1. (Air Quality Index (AQI)). AQI is a number used by government agencies to communicate to the public how polluted the air currently is. As the AQI increases, an increasingly large percentage of the population is likely to experience increasingly severe adverse health effects. Computing the AQI requires an air pollutant concentration from a monitor or a model. The function used to convert an air pollutant concentration to AQI varies by pollutant and differs between countries. Air quality index values are divided into ranges, and each range is assigned a descriptor and a color code. In this paper, we use the standard issued by the Ministry of Environmental Protection of the People's Republic of China2, as shown in Table 1. The descriptor of each AQI level is regarded as the class to be inferred, and the color is employed in the following visualization figures.
2 HJ 633-2012 Technical Regulation on Ambient Air Quality Index (on trial), released by the Ministry of Environmental Protection of the People's Republic of China in February 2012
Specifically, the calculation for AQI follows Equation (1) below:
$ \text{AQI} = \max \left\{ {{\rm{IAQ}}{{\rm{I}}_1},{\rm{IAQ}}{{\rm{I}}_2}, \cdots ,{\rm{IAQ}}{{\rm{I}}_n}} \right\} $ | (1) |
where IAQIi denotes the individual air quality index (sub-index) of the i-th pollutant and n is the number of pollutants considered.
Recalling the AQI of Wuhan in 2013, PM2.5 was the primary pollutant on most days (illustrated in Figure 1(a)). In this paper, we therefore concentrate our prediction on the IAQI for PM2.5 only, as it is the main culprit of air pollution. Nevertheless, our time-series-based method extends straightforwardly to AQI prediction.
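Equation (1) can be made concrete with a short sketch: each pollutant's IAQI is obtained by linear interpolation between breakpoints, and the AQI is the maximum sub-index. The PM2.5 breakpoint values below follow the commonly published HJ 633-2012 table for 24-hour averages and should be treated as illustrative, not as data taken from this paper.

```python
# Illustrative IAQI/AQI computation in the spirit of Equation (1).
# Breakpoints: (concentration in ug/m^3, IAQI) pairs for 24-hour PM2.5,
# as commonly cited from HJ 633-2012 -- an assumption, not paper data.
PM25_BREAKPOINTS = [
    (0, 0), (35, 50), (75, 100), (115, 150),
    (150, 200), (250, 300), (350, 400), (500, 500),
]

def iaqi(conc, breakpoints=PM25_BREAKPOINTS):
    """Linearly interpolate a concentration onto the IAQI scale."""
    for (c_lo, i_lo), (c_hi, i_hi) in zip(breakpoints, breakpoints[1:]):
        if conc <= c_hi:
            return (i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo
    return breakpoints[-1][1]  # clip above the top breakpoint

def aqi(concentrations):
    """Equation (1): the AQI is the maximum of the sub-indices."""
    return max(iaqi(c, bp) for c, bp in concentrations)

print(aqi([(75, PM25_BREAKPOINTS)]))  # -> 100.0
```

With only one pollutant supplied, the AQI equals that pollutant's IAQI, which is exactly the "primary pollutant" situation described above for PM2.5.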
In order to deconstruct the time series into notional components, we identify and construct a number of component series, each representing a certain characteristic or type of behaviour, as follows:
-the Trend Component T that reflects the long term progression of the series
-the Cyclical Component C that describes repeated but non-periodic fluctuations
-the Seasonal Component S reflecting seasonality
-the Irregular Component I (or "noise") that describes random, irregular influences. It represents the residuals of the time series after the other components have been removed.
Since identifying cyclicality requires a complex process and is less productive, we here concentrate on identifying components in the order of trend, seasonality, and irregularity.
Trend identification. Figure 2 exhibits the autocorrelation of the time series. We find that the autocorrelation coefficient of PM2.5 attenuates slowly, which suggests that a trend component exists in the series.
Seasonality identification. We denote spring, summer, fall and winter respectively in blue, green, red and green in Figure 3(a). It is easily observed that significant differences exist in PM2.5 across the four seasons, so a seasonal component is considered to exist in the time series.
Irregularity identification. Figure 3(b) shows the data after moving-average smoothing over 5 (green) and 20 (red) intervals. It is clearly discerned that the random fluctuations decrease as the interval increases. Therefore, an irregular component is considered to exist in the time series.
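The smoothing step just described can be sketched as a simple moving average in plain Python (the toy data below are illustrative, not the Wuhan series):

```python
def moving_average(series, window):
    """Simple moving average; edge positions without a full window are
    skipped, so the result has len(series) - window + 1 points."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

daily_pm25 = [120, 80, 95, 140, 60, 75, 110, 90, 85, 100]  # toy values
smooth5 = moving_average(daily_pm25, 5)  # 6 smoothed points; noise damped
```

A wider window (20 intervals, as used later for trend analysis) damps the irregular component more strongly, at the cost of shortening the smoothed series.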
Currently, there is a variety of time-series decomposition models, each suited to one specific shape. Figure 3.1 shows the tendency features of two models, namely the additive model and the multiplicative model [9]. We pick the multiplicative model to decompose the PM2.5 time series, as it is easily observed that the fluctuation amplitude of the series roughly scales with its level. This leads us to assume that PM2.5 can be decomposed by the multiplicative model, i.e., as the product of its components.
Trend analysis. Since both trend and seasonality are observed in the data, we first minimize the influence of irregularity via a 20-interval moving-average process and then fit a seasonal multivariate regression model.
Figure 5 illustrates the fitted curves from cubic (Figure 5(a)) and trigonometric (Figure 5(b)) curve fitting, separately. According to the goodness of fit in Table 2, cubic curve fitting gives the best fitting result. However, an unrealistic upward trend is observed in the tail of the cubic curve. Therefore, although the fitting result of the trigonometric curve is not as good as the cubic one's, we choose trigonometric curve fitting as the final trend substitute. The trend fitting equation can thus be written as:
$ ST(t) = 1130\sin \left( {0.01295t - 1.094} \right) + 1089\sin \left( {0.01412t + 1.803} \right) $ | (2)
Curve Fitting | SSE | R-Square | Adjusted R-square | RMSE |
Cubic Fitting | | 0.9544 | 0.954 | 11.3 |
Trigonometric Fitting | | 0.6862 | 0.6814 | 29.74 |
Seasonality analysis. During the moving-average process, not only the irregular component but also part of the seasonal component is removed. On the one hand, we aim to remove the irregular component to eliminate its interference with the other components; on the other hand, we need to retain the seasonal component. To guarantee the effectiveness of prediction, we should add back a factor representing the removed seasonal component. Specifically, we define the generalized seasonal index as follows.
Definition 3.1. (Generalized seasonal index). The average remaining PM2.5 (after the trend component is removed) over the dates of each season; the per-date values are listed in Table 3.
Date | Sp. | Sum. | Fall | Win. |
1st | -44.33 | 9.14 | -3.75 | 9.66 |
2nd | 8.44 | 10.61 | -4.37 | 25.82 |
3rd | -37.76 | 5.76 | -10.01 | 49.94 |
4th | -39.62 | -19.09 | -11.33 | 37.69 |
5th | -45.78 | -48.95 | 2.67 | 79.07 |
6th | 6.07 | -54.47 | -4.02 | 0.41 |
7th | 9.61 | -46.65 | 3.93 | 5.38 |
8th | -2.16 | -23.5 | -7.46 | -62.35 |
9th | -1.25 | -37.69 | -0.54 | -22.83 |
10th | 85.68 | -16.87 | 15.69 | -56.35 |
11th | 70.64 | -18.72 | 19.23 | -65.58 |
12th | 50.94 | -7.57 | 29.42 | -56.18 |
13th | 22.6 | -14.76 | 14.93 | -92.15 |
14th | 31.94 | -40.61 | 14.41 | -79.5 |
15th | 36.3 | -18.13 | 11.21 | -76.88 |
16th | 21.35 | -18.32 | -1.35 | -79.65 |
17th | 20.75 | -17.18 | 15.41 | -68.78 |
18th | 36.49 | -9.71 | 2.14 | -57.29 |
19th | -28.07 | 8.43 | 13.85 | -30.17 |
20th | -6.62 | 7.24 | 4.54 | 9.57 |
21st | 31.84 | 5.7 | 0.54 | 3.27 |
22nd | 55.66 | 31.84 | -23.15 | -91.08 |
23rd | 28.83 | 26.97 | -0.19 | -110.79 |
24th | 14.35 | -11.58 | 6.74 | -100.22 |
25th | 21.88 | -23.79 | 10.32 | -106.02 |
26th | 94.76 | -16 | 2.21 | -106.87 |
27th | 167.66 | -25.22 | 13.4 | -89.42 |
28th | 60.24 | -30.44 | 9.25 | -46.69 |
29th | 8.84 | -11 | 9.73 | -24.06 |
30th | -0.55 | -22.23 | 22.52 | -63.51 |
31st | 19.49 | -14.87 | 38.36 | -42 |
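The generalized seasonal index can be sketched as a per-season average of the detrended residuals. The grouping below is an assumed reading of how Table 3 is aggregated, with a few of its spring and winter values reused as sample input:

```python
# Sketch: average detrended residuals by season to obtain one
# generalized seasonal index b_i per season. The aggregation is an
# assumed reading of Table 3, not the paper's exact procedure.
from collections import defaultdict

def seasonal_index(residuals):
    """residuals: list of (season, value); returns {season: mean value}."""
    buckets = defaultdict(list)
    for season, value in residuals:
        buckets[season].append(value)
    return {s: sum(v) / len(v) for s, v in buckets.items()}

sample = [("spring", -44.33), ("spring", 8.44),
          ("winter", 9.66), ("winter", 25.82)]
print(round(seasonal_index(sample)["winter"], 2))  # -> 17.74
```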
Note that we utilize the mean to decrease the interference from the irregular and cyclical components. Thus the trend-plus-seasonality model becomes
$ ST(t) = 1130\sin \left( {0.01295t - 1.094} \right) + 1089\sin \left( {0.01412t + 1.803} \right) + \alpha \sum\limits_{i = 1}^4 {{Q_i}{b_i}} $ | (3)
where bi is the generalized seasonal index of the i-th season and

$ {Q_i} = \begin{cases} 1, & {\text{if }} t \in i{\text{-th season}}, \\ 0, & {\text{otherwise}}. \end{cases} $
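Equation (3) can be evaluated directly once the season of a day is known; the season boundaries and the b_i values in the sketch below are illustrative placeholders, not the fitted quantities from the paper:

```python
import math

# Sketch of Equation (3): the fitted trigonometric trend plus a seasonal
# correction gated by the indicator Q_i. The season boundaries, ALPHA,
# and the b_i values are illustrative assumptions.
B = {"spring": -17.9, "summer": -15.0, "fall": 5.9, "winter": -40.2}
ALPHA = 1.0  # scaling factor for the seasonal term (assumed)

def season_of(t):
    """Map a day index t to a season via simplified calendar boundaries."""
    day = t % 365
    if day < 90:
        return "winter"
    if day < 181:
        return "spring"
    if day < 273:
        return "summer"
    return "fall"

def st(t):
    trend = (1130 * math.sin(0.01295 * t - 1.094)
             + 1089 * math.sin(0.01412 * t + 1.803))
    # Q_i selects exactly one b_i: the index of the current season.
    return trend + ALPHA * B[season_of(t)]
```

The indicator Q_i simply turns the sum in Equation (3) into a lookup of the current season's index, which is what the dictionary access implements.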
Cyclicality analysis. A naive method to detect the cyclical component is to observe whether any cyclicality exists in the remaining series after removing the trend and seasonal components. However, most real-world time series do not repeat strictly at every cyclical time point. In our case, little cyclicality can be detected after removing trend and seasonality.
In fact, a real-world time series can be regarded as cyclical under a certain degree of confidence. To detect that kind of cyclicality in PM2.5, we use autoregressive support vector regression (SVR_AR) with the RBF kernel function, i.e., $K(x, x') = \exp \left( { - \gamma {{\left\| {x - x'} \right\|}^2}} \right)$.
We apply cross-validation to select and verify the parameter values. Specifically, we divide the dataset equally into 10 parts and repeat the following operation ten times: at each iteration, one part is held out for validation while the remaining nine parts are used for training.
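The 10-fold split just described can be sketched in plain Python; the index partition below is generic, and the SVR fitting itself is deliberately left out since its hyperparameters are not specified here.

```python
# Sketch of the 10-fold cross-validation split: each of the 10 parts
# serves as the validation set exactly once.
def k_fold_indices(n, k=10):
    """Yield (train_idx, val_idx) pairs over k roughly equal folds."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in sizes:
        val = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        yield train, val
        start += size

folds = list(k_fold_indices(20, k=10))
# 10 folds; every index appears in exactly one validation set
```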
Thus, the final prediction model can be expressed as
$ PM_{2.5} = ST(t)\cdot C(t) $ | (4) |
where ST(t) is calculated according to Equation (3) and C(t) is fitted by SVR_AR with the RBF kernel, the penalty term being chosen by the cross-validation above.
The basic approach to stochastic modelling is as follows:
Definition 4.1. (Box-Jenkins model identification). The Box–Jenkins method applies autoregressive moving average ARMA or ARIMA models to find the best fit of a time-series model to past values of a time series.
The original model uses an iterative three-stage modeling approach:
(1). Model identification and model selection: guaranteeing stationarity of the variables, identifying seasonality in the dependent series (seasonally differencing it if necessary), and using plots of the autocorrelation and partial autocorrelation functions of the dependent time series to determine the autoregressive (if any) and moving-average components.
(2). Parameter estimation: computationally arriving at coefficients that best fit the selected ARIMA model. Maximum likelihood estimation and non-linear least-squares estimation are the most common methods.
(3). Model checking: testing the estimated model conformity with the specifications of a stationary univariate process. In particular, the residuals should be independent of each other and constant in mean and variance over time. If inadequate, return to step one and attempt to build a better model.
The ARIMA (autoregressive integrated moving average) model can be used for time series prediction based on a limited number of observations. The basic intuition behind ARIMA is that a non-stationary sequence is first made stationary via differencing of appropriate order and then fitted by an ARMA model. Since the differenced sequence is a weighted summation of the original sequence, it can be written via the backshift operator as $ {\nabla ^d}{x_t} = {(1 - B)^d}{x_t} $.
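The differencing step, the "I" in ARIMA, can be sketched in plain Python together with its inverse (integration), which is needed to map forecasts of the differenced series back to the original scale. No fitted model is implied here.

```python
# Sketch of d-th order differencing and its inverse; toy data only.
def difference(x, d=1):
    """Apply first-order differencing d times."""
    for _ in range(d):
        x = [b - a for a, b in zip(x, x[1:])]
    return x

def undifference(dx, heads):
    """Invert differencing. heads holds the first value of each level,
    ordered from the original series down to the last-but-one level."""
    for head in reversed(heads):
        out = [head]
        for v in dx:
            out.append(out[-1] + v)
        dx = out
    return dx

x = [3, 5, 9, 15, 23]        # non-stationary toy series
dx = difference(x, d=1)      # [2, 4, 6, 8]: still trending
ddx = difference(x, d=2)     # [2, 2, 2]: constant, i.e., stationary
```

Here second-order differencing yields a constant series, mirroring the comparison of first- and second-order differencing carried out below.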
Definition 4.2. (ARIMA(p, d, q) model). The ARIMA(p, d, q) model is defined as
$ \left\{ \begin{array}{l} \Phi (B){\nabla ^d}{x_t} = \Theta (B){\varepsilon _t}\\ E({\varepsilon _t}) = 0,\;Var({\varepsilon _t}) = \sigma _\varepsilon ^2,\;E({\varepsilon _s}{\varepsilon _t}) = 0,\;s \ne t\\ E({x_s}{\varepsilon _t}) = 0,\;\forall s < t \end{array} \right. $ | (5)
in which $ \Phi (B) = 1 - {\phi _1}B - \cdots - {\phi _p}{B^p} $ is the autoregressive coefficient polynomial and $ \Theta (B) = 1 - {\theta _1}B - \cdots - {\theta _q}{B^q} $ is the moving-average coefficient polynomial, so that Equation (5) can be rewritten compactly as
$ {\nabla ^d}{x_t} = \frac{{{\rm{\Theta }}(B)}}{{{\rm{\Phi }}\left( B \right)}}{\varepsilon _t} $ | (6) |
while $ {\varepsilon _t} $ is a zero-mean white-noise sequence and $ B $ is the backshift operator, i.e., $ B{x_t} = {x_{t - 1}} $.
Order identification. Since the observed data are identified as non-stationary, we utilize differencing to achieve stationarity. We choose first-order and second-order differencing separately and compare their accuracy. Figures 9 and 10 show the autocorrelation and partial autocorrelation coefficients over 20 lags for first-order and second-order differencing, respectively. Twice the standard deviation of the corresponding coefficients is represented by the red line in each figure.
It can be observed in both the first-order (Figure 9) and second-order (Figure 10) results that the autocorrelation coefficients at lags over 2 all fall within twice the standard deviation (Figures 9(a) and 10(a)). Tailing can be identified since the autocorrelation coefficient gradually approaches zero, thus q = 2. As for the partial autocorrelation coefficient, it is less than twice the standard deviation at lags over 19 and gradually approaches 0, so tailing can be identified and p = 19 (Figures 9(b) and 10(b)). Therefore, according to Table 4, the model can be identified as ARIMA(19, d, 2) with d = 1 or 2.
Model | ACF | PACF
White Noise | all within the confidence bounds | all within the confidence bounds
AR(p) | attenuated to zero (geometric or oscillatory) | censored after the p-th lag
MA(q) | censored after the q-th lag | attenuated to zero (geometric or oscillatory)
ARMA(p, q) | attenuated to zero (geometric or oscillatory) after the q-th lag | attenuated to zero (geometric or oscillatory) after the p-th lag
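The sample autocorrelation used for this order identification can be sketched in plain Python. Under a white-noise hypothesis, the twice-standard-deviation band is approximately ±2/√n (a standard Bartlett-type approximation, stated here as background rather than taken from the paper):

```python
# Sample autocorrelation function (ACF) over lags 0..max_lag.
def acf(x, max_lag):
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    out = []
    for k in range(max_lag + 1):
        cov = sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k))
        out.append(cov / var)
    return out  # out[0] is always 1.0

series = [1, 2, 3, 4, 5, 4, 3, 2, 1, 2]  # toy data
rho = acf(series, 3)
band = 2 / len(series) ** 0.5  # approximate white-noise band
```

Lags whose |rho| stays inside the band are consistent with white noise; a slow decay toward zero is the "tailing" pattern referenced in Table 4.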
Model fitting and prediction. The prediction results under first-order and second-order differencing are obtained and compared.
Residual test. For the identified ARIMA models, we test whether the residual sequence is white noise; as required by step (3) of the Box-Jenkins procedure, the residuals should be independent with constant mean and variance over time.
The two previous models are evaluated on the real-time urban PM2.5 concentration data obtained in Wuhan from December 1 to 10, 2013. As can be seen from Figure 12, the stochastic time series analysis method yields a better fit.
The deterministic time series analysis method is relatively simple and leads to a more in-depth understanding of the various characteristics of a time series. It allows more flexibility, which on the other hand means that more parameters must be determined empirically. That is, it works with a certain degree of subjectivity: assumptions are required in advance, and a tiny inaccuracy in those assumptions can cause large deviations.
The stochastic time series analysis method achieves higher accuracy and stronger generalization ability, and its process is more fixed. However, its less transparent process also makes the results more difficult to understand and analyse.
We briefly review related work in four directions.
Classical bottom-up emission models. There are two major "bottom-up" approaches to calculating air quality from the observed emissions at ground surfaces. The most common one is referring to nearby air quality monitoring stations, usually applied by public websites reporting AQIs. However, it has low accuracy, since air quality varies non-linearly, as illustrated before. The other approach is classical dispersion models: Gaussian Plume models, Operational Street Canyon models, and Computational Fluid Dynamics are the most widely used in this methodology. These models are in most cases a function of meteorology, street geometry, receptor locations, traffic volumes, and emission factors (e.g., g/km per single vehicle), based on a number of empirical assumptions and parameters that might not be applicable to all urban environments [6].
Satellite remote sensing. Satellite remote sensing of surface air quality is regarded as a top-down method in this field, as in [4] and [5]. However, besides its high cost, the result can only reflect the air quality of the atmosphere rather than that at ground level.
Crowd sensing. Significant efforts [3], [2] have been devoted to crowd sensing, and it may be a potential solution to air pollution monitoring in the future. The devices for sensing PM2.5 and NO2, however, are not yet readily portable.
Urban computing. Big data has attracted a series of research efforts on urban computing to promote urban life quality, including managing air pollution. Data from various aspects, such as human mobility data and POIs [7], taxi trajectories [11], and GPS-equipped vehicles [8], can be used to extract useful patterns in urban life. This kind of method relies on sufficient urban data, sometimes private, which are difficult to acquire. Besides, it requires a long pre-processing time for data cleaning and reduction.
Different from classical models, methods requiring specialized devices, and approaches demanding tremendous data processing, our method offers a simple but efficient way to infer air quality. Effectiveness is guaranteed on the basis of real-time data, without expensive devices or long pre-processing.
In this paper, from the perspective of time series, we infer the fine-grained air quality in a city based on the historical PM2.5 concentrations reported by air quality monitoring stations. Using deterministic and stochastic theories, we make two predictions. From the deterministic point of view, we identify and decompose the historical PM2.5 concentrations into trend, seasonal, cyclical, and irregular factors, based on which we derive the PM2.5 concentration equation. From the stochastic point of view, we compare first-order and second-order differencing and compute the quantitative models. Finally, we analyse the strengths and weaknesses of the deterministic and stochastic methodologies and conclude that the stochastic one is more accurate for PM2.5 concentration prediction.