[1] | Wenxue Huang, Xiaofeng Li, Yuanyi Pan . Increase statistical reliability without losing predictive power by merging classes and adding variables. Big Data and Information Analytics, 2016, 1(4): 341-348. doi: 10.3934/bdia.2016014 |
[2] | Jian-Bing Zhang, Yi-Xin Sun, De-Chuan Zhan . Multiple-instance learning for text categorization based on semantic representation. Big Data and Information Analytics, 2017, 2(1): 69-75. doi: 10.3934/bdia.2017009 |
[3] | Dongyang Yang, Wei Xu . Statistical modeling on human microbiome sequencing data. Big Data and Information Analytics, 2019, 4(1): 1-12. doi: 10.3934/bdia.2019001 |
[4] | Wenxue Huang, Qitian Qiu . Forward Supervised Discretization for Multivariate with Categorical Responses. Big Data and Information Analytics, 2016, 1(2): 217-225. doi: 10.3934/bdia.2016005 |
[5] | Jinyuan Zhang, Aimin Zhou, Guixu Zhang, Hu Zhang . A clustering based mate selection for evolutionary optimization. Big Data and Information Analytics, 2017, 2(1): 77-85. doi: 10.3934/bdia.2017010 |
[6] | Ricky Fok, Agnieszka Lasek, Jiye Li, Aijun An . Modeling daily guest count prediction. Big Data and Information Analytics, 2016, 1(4): 299-308. doi: 10.3934/bdia.2016012 |
[7] | Minlong Lin, Ke Tang . Selective further learning of hybrid ensemble for class imbalanced increment learning. Big Data and Information Analytics, 2017, 2(1): 1-21. doi: 10.3934/bdia.2017005 |
[8] | David E. Bernholdt, Mark R. Cianciosa, David L. Green, Kody J.H. Law, Alexander Litvinenko, Jin M. Park . Comparing theory based and higher-order reduced models for fusion simulation data. Big Data and Information Analytics, 2018, 3(2): 41-53. doi: 10.3934/bdia.2018006 |
[9] | Cai-Tong Yue, Jing Liang, Bo-Fei Lang, Bo-Yang Qu . Two-hidden-layer extreme learning machine based wrist vein recognition system. Big Data and Information Analytics, 2017, 2(1): 59-68. doi: 10.3934/bdia.2017008 |
[10] | Yaru Cheng, Yuanjie Zheng . Frequency filtering prompt tuning for medical image semantic segmentation with missing modalities. Big Data and Information Analytics, 2024, 8(0): 109-128. doi: 10.3934/bdia.2024006 |
In high-dimensional, large-sample categorical data analysis, feature selection or dimension reduction is usually involved. Existing feature selection procedures either work on the original variables or rely on linear models (or generalized linear models), and linear models are constrained by distributional assumptions on the data. Moreover, some categories of the original categorical explanatory variables may be insufficiently informative, redundant, or even irrelevant to the response variable, and the statistical reliability of a regular feature selection procedure may be jeopardized if it picks variables with large domains. We therefore propose a category-based probabilistic approach to feature selection.
One can refer to [10,8] for introductions to various data types and algorithms in feature selection. Reliability was characterized by variance in [9,3] and by the proportion of categories in [5]. The reliability measure used here was proposed in [6] and is denoted as $E(\mathrm{Gini}(X \mid Y))$; it is defined in Section 2.
As in [6], we propose a category-based feature selection method in this article to improve the statistical reliability and to increase the overall point-hit accuracy by merging or removing the less informative or redundant categories of the categorical explanatory variables. Unlike [6], we first transform each original categorical explanatory variable into multiple dummy variables, then select the more informative ones by a stepwise forward feature selection approach, and finally merge the unselected categories. The merging process in [6], on the other hand, finds less informative categories within pre-selected original explanatory variables and merges them. Our approach can therefore compare categories not only within one explanatory variable but also across different explanatory variables. Introductions to and applications of dummy variables can be found in [1,2].
The rest of this article is organized as follows. Section 2 introduces the association measures and the reliability measure; Section 3 introduces the dummy variable approach, proves two propositions, and describes the detailed feature selection steps; two experiments are conducted in Section 4; and the last section briefly summarizes the results.
Assume we are given a data set with one categorical explanatory variable $X$, taking $n_X$ distinct values, and one categorical response variable $Y$, taking $n_Y$ distinct values.
The GK-lambda (denoted as $\lambda$) is defined as
$$ \lambda = \frac{\sum_x \rho_{xm} - \rho_{\cdot m}}{1 - \rho_{\cdot m}}, $$
where
$$ \rho_{\cdot m} = \max_y \rho_{\cdot y} = \max_y p(Y = y), \qquad \rho_{xm} = \max_y \rho_{xy} = \max_y p(X = x;\, Y = y). $$
Please note that $0 \le \lambda \le 1$. The GK-tau (denoted as $\tau$) is defined as
$$ \tau = \frac{\sum_x \sum_y \rho_{xy}^2 / \rho_{x\cdot} - \sum_y \rho_{\cdot y}^2}{1 - \sum_y \rho_{\cdot y}^2}, $$
where
$$ \rho_{x\cdot} = p(X = x). $$
Both $\lambda$ and $\tau$ take values in $[0, 1]$ and measure the association of $Y$ with $X$ in terms of the proportional reduction in prediction error: the larger the value, the stronger the association.
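To make these definitions concrete, the following minimal sketch (ours, purely illustrative and not part of this article's procedure) computes $\lambda$ and $\tau$ from a joint probability table stored as a NumPy array; the toy table and the function names are assumptions.

```python
import numpy as np

def gk_lambda(p_xy):
    """GK-lambda for predicting Y from X; p_xy[i, j] = p(X=i, Y=j)."""
    rho_dot_m = p_xy.sum(axis=0).max()      # max_y p(Y=y)
    rho_xm_sum = p_xy.max(axis=1).sum()     # sum_x max_y p(X=x, Y=y)
    return (rho_xm_sum - rho_dot_m) / (1.0 - rho_dot_m)

def gk_tau(p_xy):
    """GK-tau for predicting Y from X."""
    p_x = p_xy.sum(axis=1)                  # p(X=x)
    p_y = p_xy.sum(axis=0)                  # p(Y=y)
    num = (p_xy ** 2 / p_x[:, None]).sum() - (p_y ** 2).sum()
    return num / (1.0 - (p_y ** 2).sum())

# toy joint distribution: 3 categories of X (rows), 2 categories of Y (columns)
p = np.array([[0.30, 0.05],
              [0.05, 0.25],
              [0.20, 0.15]])
print(gk_lambda(p), gk_tau(p))
```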
Given a categorical data set with two variables $X$ and $Y$, the reliability measure used in this article, the expected conditional Gini index, is defined as
$$ E(\mathrm{Gini}(X \mid Y)) = 1 - \sum_{i=1}^{n_X} \sum_{j=1}^{n_Y} p(X = i \mid Y = j)^2\, p(Y = j). $$
Notice that
$$ 0 \le E(\mathrm{Gini}(X \mid Y)) \le 1 - \frac{1}{|\mathrm{Domain}(X)|} \le 1 - \frac{1}{|\mathrm{Domain}(X, Y)|}; $$
and the smaller $E(\mathrm{Gini}(X \mid Y))$ is, the higher the statistical reliability of using $X$ (or a selected set of variables) to explain $Y$.
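Continuing the illustrative sketch above (again our own code with an assumed toy table), the reliability measure can be computed from the same kind of joint probability array:

```python
import numpy as np

def expected_conditional_gini(p_xy):
    """E(Gini(X|Y)) = 1 - sum_{i,j} p(X=i|Y=j)^2 p(Y=j); smaller means more reliable."""
    p_y = p_xy.sum(axis=0)                   # p(Y=j)
    p_x_given_y = p_xy / p_y[None, :]        # p(X=i | Y=j)
    return 1.0 - ((p_x_given_y ** 2) * p_y[None, :]).sum()

p = np.array([[0.30, 0.05],
              [0.05, 0.25],
              [0.20, 0.15]])
print(expected_conditional_gini(p))
```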
We transform each original categorical explanatory variable $X$ with $n_X$ categories into $n_X$ dummy (binary) variables $X_1, X_2, \cdots, X_{n_X}$, where $X_i = 1$ if $X = i$ and $X_i = 0$ otherwise.
Proposition 3.1. The dummy transformation preserves the association degree, that is,
$$ \tau(Y \mid X_1, X_2, \cdots, X_{n_X}) = \tau(Y \mid X). $$
Proof. Since $\tau(Y \mid X)$ is a strictly increasing function of
$$ \omega_{Y \mid X} = \sum_{i,s} p(Y = s \mid X = i)^2\, p(X = i). $$
Thus we only need to prove that
$$ \omega_{Y \mid X_1, X_2, \cdots, X_{n_X}} = \omega_{Y \mid X}. $$
Since
$$ \omega_{Y \mid X_1, X_2, \cdots, X_{n_X}} = \sum_{j=1}^{n_Y} \sum_{i_1=0}^{1} \sum_{i_2=0}^{1} \cdots \sum_{i_{n_X}=0}^{1} \frac{p(X_1 = i_1, X_2 = i_2, \cdots, X_{n_X} = i_{n_X}, Y = j)^2}{p(X_1 = i_1, X_2 = i_2, \cdots, X_{n_X} = i_{n_X})} $$
and
$$ X_i = 1 \ \text{if and only if} \ X_s = 0 \ \text{for all } s \ne i, \ s = 1, 2, \cdots, n_X, $$
we have
$$ p(X_1 = 0, X_2 = 0, \cdots, X_i = 1, \cdots, X_{n_X} = 0, Y = j) = p(X = i, Y = j), $$
$$ j = 1, 2, \cdots, n_Y. $$
So
$$ \omega_{Y \mid X_1, X_2, \cdots, X_{n_X}} = \sum_{j=1}^{n_Y} \sum_{i=1}^{n_X} \frac{p(X_1 = 0, X_2 = 0, \cdots, X_i = 1, \cdots, X_{n_X} = 0, Y = j)^2}{p(X_1 = 0, X_2 = 0, \cdots, X_i = 1, \cdots, X_{n_X} = 0)} = \sum_{j=1}^{n_Y} \sum_{i=1}^{n_X} \frac{p(X = i, Y = j)^2}{p(X = i)} = \omega_{Y \mid X}, $$
that is,
$$ \omega_{Y \mid X_1, \cdots, X_{n_X}} = \omega_{Y \mid X} \iff \tau(Y \mid X_1, \cdots, X_{n_X}) = \tau(Y \mid X). $$
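Proposition 3.1 can also be checked numerically. The sketch below (ours; the simulated data, estimator and helper name are illustrative assumptions) estimates $\tau$ from samples and confirms that the dummy-encoded variables give the same value as the original variable:

```python
import numpy as np
import pandas as pd

def gk_tau_samples(cols, y):
    """Sample estimate of GK-tau of y given the joint categories of the columns in `cols`."""
    key = pd.Series([str(t) for t in zip(*cols)])               # joint category of the feature tuple
    joint = pd.crosstab(key, pd.Series(y), normalize=True).to_numpy()
    p_x, p_y = joint.sum(axis=1), joint.sum(axis=0)
    num = (joint ** 2 / p_x[:, None]).sum() - (p_y ** 2).sum()
    return num / (1.0 - (p_y ** 2).sum())

rng = np.random.default_rng(0)
x = rng.integers(0, 4, size=5000)                               # X with categories 0..3
y = (x + rng.integers(0, 2, size=5000)) % 3                     # Y associated with X
dummies = [(x == i).astype(int) for i in range(4)]              # dummy variables X_1,...,X_4
print(gk_tau_samples([x], y), gk_tau_samples(dummies, y))       # equal, as Proposition 3.1 states
```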
It is of interest to introduce the following notion.
Definition 3.1. For a given categorical response variable $Y$, two categories $s$ and $t$ of an explanatory variable $X$ are called equivalent with respect to $Y$ if $p(Y = j \mid X = s) = p(Y = j \mid X = t)$ for all $j = 1, 2, \cdots, n_Y$.
The next proposition tells us that merging two categories of $X$ into one never increases the association degree, and leaves it unchanged exactly when the two categories are equivalent with respect to $Y$.
Proposition 3.2. If two categories, $s$ and $t$, of $X$ are merged into a single new category $m$, then
$$ \tau(Y \mid X') \le \tau(Y \mid X), $$
where $X'$ denotes the variable obtained from $X$ after the merge.
Proof. Notice that this proposition is equivalent to the following inequality.
$$ \omega_{Y \mid X'} \le \omega_{Y \mid X}. $$
Let
$$ \frac{p(X = s;\, Y = j)}{p(X = s)} = b_s, \qquad \frac{p(X = t;\, Y = j)}{p(X = t)} = b_t, \qquad \text{for each } j = 1, 2, \cdots, n_Y $$
(the dependence of $b_s$ and $b_t$ on $j$ is suppressed for brevity).
We have
$$ p(X = s;\, Y = j) = p(X = s)\, b_s, \qquad p(X = t;\, Y = j) = p(X = t)\, b_t, \qquad \text{and} \qquad \omega_{Y \mid X'} = \sum_{j=1}^{n_Y} \sum_{i \ne s, t} \frac{p(X = i;\, Y = j)^2}{p(X = i)} + \sum_{j=1}^{n_Y} \frac{p(X = m;\, Y = j)^2}{p(X = m)}, $$
where $m$ denotes the category obtained by merging $s$ and $t$. Notice that
$$ \sum_{j=1}^{n_Y} \frac{p(X = m;\, Y = j)^2}{p(X = m)} = \sum_{j=1}^{n_Y} \frac{\bigl(p(X = s;\, Y = j) + p(X = t;\, Y = j)\bigr)^2}{p(X = s) + p(X = t)} = \sum_{j=1}^{n_Y} \frac{\bigl(p(X = s)\, b_s + p(X = t)\, b_t\bigr)^2}{p(X = s) + p(X = t)}, \tag{1} $$
and
$$ \sum_{j=1}^{n_Y} \left( \frac{p(X = s;\, Y = j)^2}{p(X = s)} + \frac{p(X = t;\, Y = j)^2}{p(X = t)} \right) = \sum_{j=1}^{n_Y} \bigl( b_s^2\, p(X = s) + b_t^2\, p(X = t) \bigr). \tag{2} $$
Multiplying both (1) and (2) by $p(X = s) + p(X = t)$ gives
$$ \begin{aligned} (1) \times \bigl(p(X = s) + p(X = t)\bigr) &= \sum_{j=1}^{n_Y} \bigl( b_s^2\, p(X = s)^2 + b_t^2\, p(X = t)^2 + 2 b_s b_t\, p(X = s)\, p(X = t) \bigr), \\ (2) \times \bigl(p(X = s) + p(X = t)\bigr) &= \sum_{j=1}^{n_Y} \bigl( b_s^2\, p(X = s)^2 + b_t^2\, p(X = t)^2 + (b_s^2 + b_t^2)\, p(X = s)\, p(X = t) \bigr). \end{aligned} $$
Since $2 b_s b_t \le b_s^2 + b_t^2$, we have
$$ \omega_{Y \mid X'} \le \omega_{Y \mid X} \iff \tau(Y \mid X') \le \tau(Y \mid X); $$
and the equality holds if and only if $b_s = b_t$ for every $j$, that is, if and only if the two merged categories are equivalent with respect to $Y$.
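Similarly, Proposition 3.2 can be verified on a small joint probability table (again a sketch of ours with an assumed toy table): merging two rows of the table never increases $\tau$.

```python
import numpy as np

def gk_tau(p_xy):
    """GK-tau for predicting Y from X; p_xy[i, j] = p(X=i, Y=j)."""
    p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
    num = (p_xy ** 2 / p_x[:, None]).sum() - (p_y ** 2).sum()
    return num / (1.0 - (p_y ** 2).sum())

p = np.array([[0.30, 0.05],      # category 1 of X
              [0.05, 0.25],      # category 2
              [0.20, 0.15]])     # category 3
p_merged = np.vstack([p[0], p[1] + p[2]])   # merge categories 2 and 3 into one
print(gk_tau(p), gk_tau(p_merged))          # the merged value is never larger
```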
In actual high-dimensional data analysis projects, there are usually categories in some explanatory variables that can be merged such that the decrease in association degree is negligible while the merge significantly raises the statistical reliability of the selected features. This is especially the case when the data set is high dimensional and many explanatory variables have many categories. Two experiments are conducted in the next section to support this claim by showing that merging categories can significantly improve the statistical reliability without significantly reducing the association degree.
A feature selection procedure usually follows a stepwise forward variable selection scheme, in which explanatory variables are selected one by one until a pre-assigned threshold is reached. A reasonable stopping threshold is set by an acceptable association degree and statistical reliability. Specifically, for a given set of explanatory variables $\mathbf{X}$ and a categorical response variable $Y$, each step proceeds as follows:
1. identify the subset of explanatory variables, denoted as $D_1$, whose members maximize the association degree when added to the currently selected set:
$$ D_1 = \left\{ X_h \in \mathbf{X} \;\middle|\; \tau(Y \mid \{X_h\} \cup D_0) = \max_{X_i \in \mathbf{X} \setminus D_0} \tau(Y \mid \{X_i\} \cup D_0) \right\}, $$
where $D_0$ denotes the set of variables selected in the previous steps ($D_0 = \emptyset$ at the start);
2. select the one in $D_1$ with the highest statistical reliability, i.e., the smallest expected conditional Gini:
$$ X_{i_1} = \left\{ X_k \;\middle|\; E(\mathrm{Gini}(\{X_k\} \cup D_0 \mid Y)) = \min_{X_h \in D_1} E(\mathrm{Gini}(\{X_h\} \cup D_0 \mid Y)) \right\}; $$
3. define the new set of selected variables as follows.
$$ D_2 = \{X_{i_1}\} \cup D_0; $$
4. repeat the previous steps until the stopping criterion is met.
Thus the idea of this general feature selection process is, at each step, either to attain the highest association degree while preserving the reliability reached in the previous step, or to preserve the association degree of the previous step while attaining the highest statistical reliability. More detailed explanations and similar procedures can be found in [8].
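The following sketch illustrates one way the stepwise scheme above could be implemented. It is our own simplified reading, with illustrative helper names, toy data and an assumed stopping threshold, not the authors' actual implementation: at each step it adds the candidate with the highest $\tau$, breaking ties by the smallest $E(\mathrm{Gini})$.

```python
import numpy as np
import pandas as pd

def joint_table(cols, y):
    """Empirical joint distribution of the feature tuple in `cols` and the response y."""
    key = pd.Series([str(t) for t in zip(*cols)])
    return pd.crosstab(key, pd.Series(y), normalize=True).to_numpy()

def gk_tau(p_xy):
    p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
    return ((p_xy ** 2 / p_x[:, None]).sum() - (p_y ** 2).sum()) / (1.0 - (p_y ** 2).sum())

def exp_gini(p_xy):
    p_y = p_xy.sum(axis=0)
    return 1.0 - ((p_xy / p_y[None, :]) ** 2 * p_y[None, :]).sum()

def forward_select(X, y, tau_stop=0.95, max_vars=10):
    """Greedy forward selection: maximize tau, break ties by the smallest E(Gini)."""
    selected, remaining = [], list(X.columns)
    while remaining and len(selected) < max_vars:
        scores = {}
        for c in remaining:
            p = joint_table([X[v].to_numpy() for v in selected + [c]], y)
            scores[c] = (gk_tau(p), -exp_gini(p))    # larger is better for both entries
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
        if scores[best][0] >= tau_stop:              # stop once the association target is reached
            break
    return selected

# toy categorical data (illustrative only)
rng = np.random.default_rng(1)
X = pd.DataFrame({"a": rng.integers(0, 3, 2000),
                  "b": rng.integers(0, 4, 2000),
                  "c": rng.integers(0, 2, 2000)})
y = (X["a"] + X["c"] + rng.integers(0, 2, 2000)) % 3
print(forward_select(X, y, tau_stop=0.5))
```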
The category-based version of the previous procedure transforms all the original (non-binary) categorical explanatory variables into dummy variables before applying the general steps. The unselected categories are then merged into a new category within each original variable, as described below.
1. Transform each original variable $X$ with $n_X$ categories into the $n_X$ dummy variables $X_1, X_2, \cdots, X_{n_X}$ described above;
2. Follow the steps in Section 3.2 to select the informative dummy variables (i.e., categories);
3. Merge, within each original variable, the remaining unselected categories into a single new category (see the code sketch after this list).
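A sketch of the category-based variant is given below. It reuses the hypothetical forward_select helper from the previous sketch (an assumption on our part) and pandas' get_dummies for the transformation; the merge step collapses all unselected categories of each original variable into a single placeholder category.

```python
import pandas as pd

def to_dummies(X):
    """Replace each categorical column by one 0/1 dummy column per category (names like 'a=2')."""
    return pd.get_dummies(X.astype("category"), prefix_sep="=").astype(int)

def merge_unselected(X, selected_dummies):
    """Within each original variable, merge all categories whose dummies were not selected."""
    merged = pd.DataFrame(index=X.index)
    for col in X.columns:
        kept = {d.split("=", 1)[1] for d in selected_dummies if d.startswith(col + "=")}
        col_str = X[col].astype(str)
        merged[col] = col_str.where(col_str.isin(kept), other="_merged")
    return merged

# With X, y and forward_select from the previous sketch in scope:
# picked = forward_select(to_dummies(X), y, tau_stop=0.5)   # step 2: select informative categories
# X_reduced = merge_unselected(X, picked)                   # step 3: merge the unselected ones
```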
Notice that, despite the genuine advantage of the category-based forward selection process, it has a higher time cost than the corresponding original-variable-based approach: it has to go through more loops to reach the same target because more features need to be scanned. In general, a complexity analysis must be tied to a specific algorithm design and implementation; a detailed treatment is beyond the scope of this article, so only a brief discussion follows.
Assume that the time cost of evaluating one candidate variable set's association is a constant. The dummy transformation multiplies the number of candidate features roughly by the number of categories per original variable, so each forward step of the category-based process scans proportionally more candidates; it also typically needs more steps to reach the same association target (six merged features versus four original variables in the first experiment). Its overall time cost is therefore a corresponding multiple of that of the original-variable approach.
The purpose of the experiments is to evaluate the association and reliability differences between the category-based and the original-variable-based feature selection processes. The first experiment uses the mushroom data set from the UCI Machine Learning Repository [13]. It has 8124 observations, each described by categorical attributes (e.g., odor, cap shape and gill color) and a binary class label (edible or poisonous).
The mushroom's type is chosen as the response variable while the other 21 variables are the explanatory ones. We compare the feature selection results obtained from the original variables with those obtained from the transformed dummy variables. Please note that the response variable (the mushroom's type) is binary.
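For readers who wish to reproduce this setup, a possible way to fetch the data is sketched below; the download URL follows the UCI repository's customary layout and may need adjusting if the repository has been reorganized, and the commented calls assume the forward_select and to_dummies sketches above.

```python
import pandas as pd

# UCI mushroom data: no header row; the first column is the class (edible 'e' / poisonous 'p').
URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/mushroom/agaricus-lepiota.data"
df = pd.read_csv(URL, header=None)
y = df.iloc[:, 0]        # response: mushroom type
X = df.iloc[:, 1:]       # categorical explanatory variables
# original_run = forward_select(X, y)                # selection on the original variables
# category_run = forward_select(to_dummies(X), y)    # selection on the dummy-encoded categories
```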
Table 1. Feature selection on the mushroom data: original variables versus merged categories.

| Number of selected features | Domain size | τ | λ | E(Gini) |
|---|---|---|---|---|
| Original features | | | | |
| 1 | 18 | 0.9429 | 0.9693 | 0.4797 |
| 2 | 46 | 0.9782 | 0.9877 | 0.7718 |
| 3 | 108 | 0.9907 | 0.9939 | 0.9076 |
| 4 | 192 | 1 | 1 | 0.9490 |
| Merged features | | | | |
| 4 | 16 | 0.9445 | 0.9693 | 0.2098 |
| 4 | 24 | 0.9908 | 0.9939 | 0.2143 |
| 5 | 30 | 0.9962 | 0.9979 | 0.4669 |
| 6 | 38 | 1 | 1 | 0.6638 |
As described in Table 1, only four variables are selected by the feature selection on the original variables before the association reaches its maximum (data-based) degree of $\tau = 1$, with a reliability measure of $E(\mathrm{Gini}) = 0.9490$.
The category-based feature selection always gives rise to remarkably better reliability (for example, $E(\mathrm{Gini}) = 0.6638$ versus $0.9490$ when both reach the full association $\tau = 1$ in Table 1).
It can also be seen from these two tables that, in both experiments, the categories yield a higher association than the original variables at the same reliability threshold: for an almost equal reliability, say $E(\mathrm{Gini}) \approx 0.47$ in Table 1, the category-based selection attains an association of $\tau = 0.9962$ whereas the original-variable selection attains only $\tau = 0.9429$.
In the second experiment, the variable HouseholdType is chosen as the response variable and all other variables serve as the explanatory ones; the results are reported in Table 2.
Table 2. Feature selection in the second experiment: original variables versus merged categories.

| Number of selected features | Domain size | τ | λ | E(Gini) |
|---|---|---|---|---|
| Original features | | | | |
| 1 | 66 | 0.3005 | 0.3444 | 0.8201 |
| 2 | 252 | 0.3948 | 0.4391 | 0.9046 |
| 3 | 1830 | 0.4383 | 0.4648 | 0.9833 |
| Merged features | | | | |
| 2 | 24 | 0.3242 | 0.3934 | 0.5491 |
| 2 | 36 | 0.3573 | 0.4165 | 0.6242 |
| 2 | 48 | 0.3751 | 0.4234 | 0.6388 |
| 3 | 96 | 0.3901 | 0.4234 | 0.7035 |
| 4 | 186 | 0.4017 | 0.4269 | 0.7774 |
| 4 | 282 | 0.4121 | 0.4317 | 0.8066 |
| 5 | 558 | 0.4221 | 0.4548 | 0.8782 |
| 6 | 966 | 0.4314 | 0.4768 | 0.8968 |
| 7 | 1716 | 0.4436 | 0.4856 | 0.9135 |
One can see from these two tables that the category-based approach produces an association degree matching or exceeding that of the original-variable approach at a smaller domain size and better reliability: for instance, seven merged features with a domain size of 1716 reach $\tau = 0.4436$ with a reliability measure of $0.9135$, compared with $\tau = 0.4383$ and $0.9833$ for three original variables with a domain size of 1830.
By transforming the categorical explanatory variables into their dummy forms and applying the feature selection procedure to the transformed variables, we can select the informative categories and merge the less informative or redundant categories of the explanatory variables, thereby increasing the association and raising the statistical reliability.