Research article

The moderating effect of pro-environmental self-identity in the relationship between abnormally-shaped foods and purchase intention

  • Received: 30 December 2019 Accepted: 18 May 2020 Published: 01 June 2020
  • The assumption that consumers reject food that deviates physically from the norm contributes to global food waste, because food retailers generally do not offer abnormally shaped food. This study empirically examines how food shape abnormality affects purchase intentions and how pro-environmental self-identity might moderate the food shape abnormality-purchase intention relationship for consumers in Taiwan. A representative sample of 400 Taiwanese consumers indicated their purchase intentions for two fruits and two vegetables with varying levels of food shape abnormality (normal, moderately abnormal, and extremely abnormal). The results demonstrate that food shape influences purchase intentions; consumers are more likely to purchase normally shaped fruits and vegetables than moderately or extremely abnormally shaped food. Pro-environmental self-identity also drives purchase intentions, such that participants with high levels of pro-environmental self-identity express higher purchase intentions toward abnormally shaped food. The results show that pro-environmental self-identity has a stronger positive impact on purchase intention for extremely abnormal food than for moderately abnormal or normal food. Government entities and food industry actors should take these findings into account when developing effective communication strategies.

    Citation: Kufen Su, Yen-Lun Su, Yao-Ming Kuo. The moderating effect of pro-environmental self-identity in the relationship between abnormally-shaped foods and purchase intention[J]. AIMS Environmental Science, 2020, 7(3): 247-257. doi: 10.3934/environsci.2020015



    In high-dimensional, large-sample categorical data analysis, feature selection or dimension reduction is usually involved. Existing feature selection procedures either operate on the original variables or rely on linear models (or generalized linear models), and linear models are constrained by assumptions on the multivariate distribution of the data. Some categories in the original categorical explanatory variables may not be informative enough; they may be redundant or even irrelevant to the response variable. Besides, a regular feature selection's statistical reliability may be jeopardized if it picks up variables with large domains. We therefore propose a category-based probabilistic approach to feature selection.

    One can refer to [10,8] for more introductions to various data types and algorithms in feature selection. Reliability was characterized by variance in [9,3] and by the proportion of categories in [5]. The reliability measure used here was proposed in [6] and is denoted $E(\mathrm{Gini}(X|Y))$; it measures the conditional concentration of an explanatory variable $X$ with respect to the response variable $Y$. One straightforward motivation for this measure is that an explanatory variable is more statistically reliable when its domain is more concentrated with respect to the given response variable.

    As in [6], we propose a category-based feature selection method in this article to improve the statistical reliability and increase the overall point-hit accuracy by merging or removing the less informative or redundant categories in the categorical explanatory variables. Unlike [6], we first transform each original categorical explanatory variable into multiple dummy variables, then select the more informative ones by a stepwise forward feature selection approach, and finally merge the unselected categories. The merging process in [6], on the other hand, finds less informative categories within the pre-selected original explanatory variables and merges them. Our proposed approach can thus compare categories not only with one another within a single explanatory variable, but also with the categories of other explanatory variables. Introductions and applications of dummy variables can be found in [1,2].

    The rest of this article is organized as follows. Section 2 introduces the association measures and the reliability measure; Section 3 introduces the dummy variable approach, proves two propositions, and presents the detailed feature selection steps; two experiments are conducted in Section 4; we briefly summarize the results in the last section.

    Assume we are given a data set with one categorical explanatory variable $X$ with domain $\mathrm{Domain}(X)=\{1,2,\dots,n_X\}$ and one response variable $Y$ with domain $\mathrm{Domain}(Y)=\{1,2,\dots,n_Y\}$. We show our schemes with two Goodman-Kruskal association measures, the GK-lambda and the GK-tau. More details of these associations and the feature selection issues can be found in [8,4]. Please note that these measures can be extended to the categorical multivariate case, since the multiple values of a multivariate set can be regarded as "one-dimensional" nominal values.

    The GK-lambda (denoted as $\lambda$ hereafter) is for modal (or optimal) prediction, given as follows.

    $$\lambda=\frac{\sum_x \rho_{xm}-\rho_m}{1-\rho_m},$$

    where

    $$\rho_m=\max_y \rho_y=\max_y p(Y=y),\qquad \rho_{xm}=\max_y \rho_{xy}=\max_y p(X=x,\,Y=y).$$

    Please note that $p(\cdot)$ denotes the probability of an event. One can see that $\lambda$ is the relative decrease in the error of predicting $Y$ with $X$ compared to predicting without $X$ when modal prediction is applied, while the GK-tau (denoted as $\tau$ hereafter) is the counterpart for proportional prediction, as follows.

    $$\tau=\frac{\sum_x\sum_y \rho_{xy}^2/\rho_x-\sum_y \rho_y^2}{1-\sum_y \rho_y^2},$$

    where

    $$\rho_x=p(X=x).$$

    Both $\tau$ and $\lambda$ measure prediction errors: the first aims to maximize the point-to-point accuracy and the second maximizes the point-hit accuracy, under the reasonable assumption that the predicted response variable and the actual response variable share the same distribution (in other words, the training and predicted responses are identical in distribution). In practice, a feature selection procedure needs to take care of both the data-based association and the statistical reliability of the selected features. Sometimes even the balance between modal and proportional association has to be considered, cf. Huang and Pan [7].
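As a concrete illustration, both measures can be computed directly from a joint probability table. The following sketch (pure Python, with hypothetical helper names; not code from any of the cited works) assumes `p_xy[x][y]` holds $p(X=x, Y=y)$:

```python
def gk_lambda(p_xy):
    """Goodman-Kruskal lambda for predicting Y from X.

    p_xy[x][y] holds the joint probability p(X=x, Y=y); entries sum to 1.
    """
    n_y = len(p_xy[0])
    rho_m = max(sum(row[y] for row in p_xy) for y in range(n_y))  # max_y p(Y=y)
    rho_xm = sum(max(row) for row in p_xy)  # sum_x max_y p(X=x, Y=y)
    return (rho_xm - rho_m) / (1.0 - rho_m)


def gk_tau(p_xy):
    """Goodman-Kruskal tau for predicting Y from X."""
    n_y = len(p_xy[0])
    rho_x = [sum(row) for row in p_xy]  # marginals p(X=x)
    rho_y = [sum(row[y] for row in p_xy) for y in range(n_y)]  # marginals p(Y=y)
    s_y = sum(p * p for p in rho_y)
    num = sum(p * p / rho_x[i] for i, row in enumerate(p_xy) for p in row) - s_y
    return num / (1.0 - s_y)


# A perfectly predictive table gives 1 for both measures;
# an independence table gives 0 for both.
perfect = [[0.5, 0.0], [0.0, 0.5]]
independent = [[0.25, 0.25], [0.25, 0.25]]
```

Both measures are normalized to $[0,1]$: they attain 1 when $X$ determines $Y$ exactly and 0 when $X$ carries no information about $Y$.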

    Given a categorical data set with two variables X, Y as above, Huang, Li and Pan [6] proposed the following statistical reliability measure of predicting Y based on the information of X.

    $$E(\mathrm{Gini}(X|Y))=1-\sum_{i=1}^{n_X}\sum_{j=1}^{n_Y}p(X=i\,|\,Y=j)^2\,p(Y=j).$$

    Notice that

    $$0\le E(\mathrm{Gini}(X|Y))\le 1-\frac{1}{|\mathrm{Domain}(X)|}\le 1-\frac{1}{|\mathrm{Domain}(X,Y)|};$$

    and the smaller E(Gini(X|Y)), the more reliable the explanatory information. The size of Domain(X,Y) determines the upper bound of E(Gini(X|Y)).
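A direct implementation of this reliability measure is straightforward; the sketch below (a hypothetical helper, not the authors' code) again takes a joint probability table `p_xy[i][j]` holding $p(X=i, Y=j)$:

```python
def expected_gini(p_xy):
    """E(Gini(X|Y)) = 1 - sum_{i,j} p(X=i | Y=j)^2 * p(Y=j).

    p_xy[i][j] holds the joint probability p(X=i, Y=j). Smaller values
    indicate a more concentrated, hence more statistically reliable,
    explanatory variable.
    """
    n_y = len(p_xy[0])
    total = 0.0
    for j in range(n_y):
        p_y = sum(row[j] for row in p_xy)  # marginal p(Y=j)
        if p_y > 0:
            total += sum((row[j] / p_y) ** 2 for row in p_xy) * p_y
    return 1.0 - total


# When Y pins X down exactly, the measure is 0 (fully reliable); when X is
# uniform given every Y, it attains the bound 1 - 1/|Domain(X)|.
concentrated = [[0.5, 0.0], [0.0, 0.5]]
diffuse = [[0.25, 0.25], [0.25, 0.25]]
```

The two example tables land exactly on the two ends of the bound above: 0 for the concentrated table and $1-1/2=0.5$ for the diffuse one with $|\mathrm{Domain}(X)|=2$.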

    We transform $X$ into $n_X$ dummy variables, written as $X_1,X_2,\dots,X_{n_X}$, where $X_i$ takes value 1 if $X=i$ and 0 otherwise. The following proposition ensures that this transformation does not change the association of $Y$ with $X$ (when the sample size is large enough).

    Proposition 3.1.

    $$\tau(Y|X_1,X_2,\dots,X_{n_X})=\tau(Y|X).$$

    Proof. Since $\sum_y \rho_y^2$ is a constant for a given data set, $\tau(Y|X)$ is a strictly increasing function of $\omega_{Y|X}$, where

    $$\omega_{Y|X}=\sum_{i,s}p(Y=s\,|\,X=i)^2\,p(X=i).$$

    Thus we only need to prove that

    $$\omega_{Y|X_1,X_2,\dots,X_{n_X}}=\omega_{Y|X}.$$

    Since

    $$\omega_{Y|X_1,X_2,\dots,X_{n_X}}=\sum_{j=1}^{n_Y}\sum_{i_1=0}^{1}\sum_{i_2=0}^{1}\cdots\sum_{i_{n_X}=0}^{1}\frac{p(X_1=i_1,X_2=i_2,\dots,X_{n_X}=i_{n_X},\,Y=j)^2}{p(X_1=i_1,X_2=i_2,\dots,X_{n_X}=i_{n_X})}$$

    and

    $$X_i=1\ \text{if and only if}\ X_s=0\ \text{for all}\ s\ne i,\ s=1,2,\dots,n_X,$$

    we have

    $$p(X_1=0,\dots,X_{i-1}=0,\,X_i=1,\,X_{i+1}=0,\dots,X_{n_X}=0,\,Y=j)=p(X=i,\,Y=j),\qquad j=1,2,\dots,n_Y.$$

    So

    $$\omega_{Y|X_1,\dots,X_{n_X}}=\sum_{j=1}^{n_Y}\sum_{i=1}^{n_X}\frac{p(X_1=0,\dots,X_i=1,\dots,X_{n_X}=0,\,Y=j)^2}{p(X_1=0,\dots,X_i=1,\dots,X_{n_X}=0)}=\sum_{j=1}^{n_Y}\sum_{i=1}^{n_X}\frac{p(X=i,\,Y=j)^2}{p(X=i)}=\omega_{Y|X},$$

    that is,

    $$\omega_{Y|X_1,\dots,X_{n_X}}=\omega_{Y|X}\quad\Longrightarrow\quad\tau(Y|X_1,\dots,X_{n_X})=\tau(Y|X).$$
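Proposition 3.1 can also be checked numerically: since the dummy tuple $(X_1,\dots,X_{n_X})$ is in bijection with $X$, the empirical $\tau$ is unchanged. A sketch with a made-up data set (the helper `tau_from_samples` is hypothetical, not from the cited works):

```python
from collections import Counter

def tau_from_samples(xs, ys):
    """Empirical GK-tau of predicting ys from xs (two equal-length lists)."""
    n = len(xs)
    cxy, cx, cy = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    s_y = sum((c / n) ** 2 for c in cy.values())
    num = sum((c / n) ** 2 / (cx[x] / n) for (x, _), c in cxy.items()) - s_y
    return num / (1.0 - s_y)

# Hypothetical data set: X has three categories, Y has two.
xs = [1, 1, 2, 2, 3, 3, 3, 1]
ys = ['a', 'a', 'b', 'a', 'b', 'b', 'a', 'b']

# Dummy transform: each observation of X becomes the tuple (X1, X2, X3).
dummies = [tuple(1 if x == i else 0 for i in (1, 2, 3)) for x in xs]

# tau is unchanged, because the tuple determines X and vice versa.
assert abs(tau_from_samples(xs, ys) - tau_from_samples(dummies, ys)) < 1e-12
```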

    It is of interest to introduce the following notion.

    Definition 3.1. For a given categorical response variable $Y$ and an association measure $\beta_{Y|X}$, a categorical explanatory variable $X$ is referred to as reduced if merging any two classes of $X$ does not strictly decrease $\beta$.

    The next proposition shows that merging two categories in $X$ never increases, and usually strictly decreases, the (data-based) association.

    Proposition 3.2. If two categories $s,t$ in $X$ are merged into a new category, we have

    $$\tau(Y|X')\le\tau(Y|X),$$

    where $X'$ is the merged new explanatory variable with only $n_X-1$ categories.

    Proof. Notice that this proposition is equivalent to the following inequality.

    $$\omega_{Y|X'}\le\omega_{Y|X}.$$

    Let

    $$\frac{p(X=s,\,Y=j)}{p(X=s)}=b_s,\qquad \frac{p(X=t,\,Y=j)}{p(X=t)}=b_t,\qquad \text{for } j=1,2,\dots,n_Y$$

    (both $b_s$ and $b_t$ depend on $j$; the index is suppressed for brevity).

    We have

    $$p(X=s,\,Y=j)=p(X=s)\,b_s,\qquad p(X=t,\,Y=j)=p(X=t)\,b_t,$$ and
    $$\omega_{Y|X'}=\sum_{j=1}^{n_Y}\sum_{i\ne s,t}\frac{p(X=i,\,Y=j)^2}{p(X=i)}+\sum_{j=1}^{n_Y}\frac{p(X'=m,\,Y=j)^2}{p(X'=m)},$$

    where m is the new category merged from s and t, because

    $$\sum_{j=1}^{n_Y}\frac{p(X'=m,\,Y=j)^2}{p(X'=m)}=\sum_{j=1}^{n_Y}\frac{\big(p(X=s,\,Y=j)+p(X=t,\,Y=j)\big)^2}{p(X=s)+p(X=t)}=\sum_{j=1}^{n_Y}\frac{\big(p(X=s)\,b_s+p(X=t)\,b_t\big)^2}{p(X=s)+p(X=t)},\quad(1)$$

    and

    $$\sum_{j=1}^{n_Y}\left(\frac{p(X=s,\,Y=j)^2}{p(X=s)}+\frac{p(X=t,\,Y=j)^2}{p(X=t)}\right)=\sum_{j=1}^{n_Y}\left(b_s^2\,p(X=s)+b_t^2\,p(X=t)\right).\quad(2)$$

    Multiplying both sides of (1) and of (2) by $p(X=s)+p(X=t)$, we have

    $$(1)=\sum_{j=1}^{n_Y}\left(b_s^2\,p(X=s)^2+b_t^2\,p(X=t)^2+2b_sb_t\,p(X=s)\,p(X=t)\right),$$
    $$(2)=\sum_{j=1}^{n_Y}\left(b_s^2\,p(X=s)^2+b_t^2\,p(X=t)^2+(b_s^2+b_t^2)\,p(X=s)\,p(X=t)\right).$$

    Since $2b_sb_t\le b_s^2+b_t^2$, we have

    $$\omega_{Y|X'}\le\omega_{Y|X}\quad\Longrightarrow\quad\tau(Y|X')\le\tau(Y|X);$$

    and the equality holds if and only if $b_s=b_t$ for every $j$.
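The inequality and its equality condition are easy to observe on a toy data set; the sketch below (hypothetical data and helper, not the authors' implementation) merges two categories and compares the empirical $\tau$ before and after:

```python
from collections import Counter

def tau(xs, ys):
    """Empirical GK-tau of predicting ys from xs."""
    n = len(xs)
    cxy, cx, cy = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    s_y = sum((c / n) ** 2 for c in cy.values())
    num = sum((c / n) ** 2 / (cx[x] / n) for (x, _), c in cxy.items()) - s_y
    return num / (1 - s_y)

# Categories 2 and 3 have different conditional distributions of Y,
# so merging them strictly loses association.
xs = [1, 1, 2, 2, 3, 3, 3, 3]
ys = ['a', 'b', 'a', 'a', 'b', 'b', 'b', 'a']
merged = ['m' if x in (2, 3) else x for x in xs]
assert tau(merged, ys) <= tau(xs, ys) + 1e-12

# Equality case: categories 2 and 3 have identical conditional
# distributions (b_s = b_t for every j), so tau is unchanged.
xs2 = [1, 1, 2, 2, 3, 3]
ys2 = ['a', 'a', 'a', 'b', 'a', 'b']
merged2 = ['m' if x in (2, 3) else x for x in xs2]
assert abs(tau(merged2, ys2) - tau(xs2, ys2)) < 1e-12
```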

    In actual high-dimensional data analysis projects, there are usually categories in some explanatory variables that can be merged such that the decrease in association degree is negligible while the merge significantly raises the selected features' statistical reliability. This is especially the case when the data set is high dimensional and many explanatory variables have many categories. Two experiments are conducted in the next section to support this supposition by showing that merging categories can significantly improve the statistical reliability without significantly reducing the association degree.

    A feature selection procedure usually follows a stepwise forward variable selection scheme, in which explanatory variables are selected one by one until a pre-assigned threshold is reached. A reasonable stopping threshold is set by an acceptable association degree and statistical reliability. Specifically, for a given set of explanatory variables $\mathbf{X}=\{X_1,X_2,\dots,X_n\}$ and a response variable $Y$,

    1. identify the subset of explanatory variables, denoted $D_1$, that attain the highest association degree among the unselected explanatory variables $\mathbf{X}\setminus D_0$:

    $$D_1=\Big\{X_h\in\mathbf{X}\setminus D_0 \;\Big|\; \tau(Y|\{X_h\}\cup D_0)=\max_{X_i\in\mathbf{X}\setminus D_0}\tau(Y|\{X_i\}\cup D_0)\Big\},$$

    where $D_0$ is the set of previously selected explanatory variables;

    2. select the one in D1 with the highest reliability:

    $$X_{i_1}=\Big\{X_k \;\Big|\; E(\mathrm{Gini}(\{X_k\}\cup D_0\,|\,Y))=\min_{X_h\in D_1}E(\mathrm{Gini}(\{X_h\}\cup D_0\,|\,Y))\Big\};$$

    3. define the new set of selected variables as follows.

    $$D_2=\{X_{i_1}\}\cup D_0;$$

    4. repeat the previous steps until the stopping criterion is met.

    Thus the idea of this general feature selection process is to select, at each step, the variables with the highest association degree and then, among those, the one with the best statistical reliability (or, symmetrically, those with the best reliability and then the highest association degree). More detailed explanations and similar procedures can be found in [8].
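Steps 1-4 above can be sketched as a greedy loop. In the sketch below the helper names, the tie-breaking via a score tuple, and the stopping threshold are illustrative assumptions, not the authors' implementation:

```python
from collections import Counter

def tau(cols, y):
    """Empirical GK-tau of predicting y from the joint values of cols."""
    xs, n = list(zip(*cols)), len(y)
    cxy, cx, cy = Counter(zip(xs, y)), Counter(xs), Counter(y)
    s_y = sum((c / n) ** 2 for c in cy.values())
    num = sum((c / n) ** 2 / (cx[x] / n) for (x, _), c in cxy.items()) - s_y
    return num / (1 - s_y)

def eg(cols, y):
    """Empirical E(Gini(X|Y)) for the joint explanatory variable given by cols."""
    xs, n = list(zip(*cols)), len(y)
    cxy, cy = Counter(zip(xs, y)), Counter(y)
    total = sum((c / cy[yv]) ** 2 * (cy[yv] / n) for (_, yv), c in cxy.items())
    return 1.0 - total

def forward_select(variables, y, tau_target=0.95):
    """Greedy forward selection: at each step pick the candidate giving the
    highest tau (step 1); break ties by the lowest, i.e. best, EG (step 2)."""
    selected = []  # the set D0 as (name, column) pairs
    remaining = dict(variables)
    while remaining:
        chosen = max(
            remaining,
            key=lambda name: (tau([c for _, c in selected] + [remaining[name]], y),
                              -eg([c for _, c in selected] + [remaining[name]], y)))
        selected.append((chosen, remaining.pop(chosen)))  # step 3: D2 = {X_i1} u D0
        if tau([c for _, c in selected], y) >= tau_target:
            break  # step 4: stopping criterion met
    return [name for name, _ in selected]

# Toy usage: 'b' predicts y perfectly, 'a' is pure noise.
y = [0, 0, 1, 1]
variables = {'a': [0, 1, 0, 1], 'b': [5, 5, 7, 7]}
```

On this toy input the loop selects `'b'` in the first step and stops immediately, since $\tau$ already exceeds the target.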

    The category based version of the previous procedure is to transform all the original (non-binary categorical) explanatory variables into dummy variables before the general steps. The unselected categories are then merged into a new category in each original variable as described below.

    1. Transform each original variable $X_i$ into $n_i$ dummy variables. There are then $M=\sum_{i=1}^{n}n_i$ binary variables in total, denoted $X_{1,1},\dots,X_{1,n_1},\dots,X_{n,n_n}$.

    2. Follow the steps in Section 3.2 to select $m$ out of the $M$ binary variables, where $m=\sum_{i=1}^{n_t}n_i'$, $1\le n_t\le n$, and $n_i'$ is the number of selected categories in $X_i$.

    3. Merge the remaining $n_i-n_i'$ categories in $X_i$ into one new category and denote the new variable by $X_i'$. We then have the selected new variables $X_1',X_2',\dots,X_{n_t}'$.
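The transform-and-merge bookkeeping around the selection can be sketched as follows (hypothetical helpers and labels; a sketch, not the authors' code):

```python
def to_dummies(name, col):
    """Step 1: split one categorical column into one 0/1 column per category."""
    return {f"{name}={c}": [1 if v == c else 0 for v in col]
            for c in sorted(set(col))}

def merge_unselected(col, kept, merged_label="OTHER"):
    """Step 3: merge every category whose dummy was not selected into one
    new category (the label "OTHER" is an arbitrary choice)."""
    return [v if v in kept else merged_label for v in col]

col = ['a', 'b', 'c', 'a', 'c']
dummies = to_dummies('x', col)          # {'x=a': ..., 'x=b': ..., 'x=c': ...}
# Suppose the forward selection kept only category 'a' of this variable:
merged = merge_unselected(col, {'a'})   # ['a', 'OTHER', 'OTHER', 'a', 'OTHER']
```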

    Notice that despite the genuine advantages of the category-based forward selection process, it has a higher time cost than the corresponding original-variable-based approach: it must go through more loops to reach the same target because more features are scanned. In general, a complexity analysis must be tied to a specifically designed and implemented algorithm; a detailed treatment is beyond this article's objective, so only a brief discussion follows.

    Assume that the time cost of evaluating one variable set's association is a constant $C$, and that there are $N$ independent variables in the data set with $m$ categories in each variable on average. Further assuming that the original-variable-based process stops at $M_1$ variables and the category-based one at $M_2$ binary variables, the time cost of the original-variable-based feature selection is $O\!\left(C\,\frac{M_1(2N-M_1+1)}{2}\right)$ and that of the category-based one is $O\!\left(C\,\frac{M_2(2Nm-M_2+1)}{2}\right)$.
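The counts behind these bounds follow from the usual triangular sum for a greedy forward pass: step $k$ scans the remaining candidates, so stopping after $M$ of $N$ candidates costs $\sum_{k=1}^{M}(N-k+1)=M(2N-M+1)/2$ association evaluations (each multiplied by $C$). A quick sanity check of that closed form:

```python
def forward_evaluations(n_candidates, n_selected):
    """Candidate evaluations in a greedy forward pass: step k scans
    n_candidates - k + 1 remaining variables, for k = 1..n_selected."""
    return sum(n_candidates - k + 1 for k in range(1, n_selected + 1))

# Matches the closed form M(2N - M + 1)/2 used in the cost estimates.
assert forward_evaluations(10, 4) == 4 * (2 * 10 - 4 + 1) // 2
assert forward_evaluations(21, 4) == 4 * (2 * 21 - 4 + 1) // 2
```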

    The experiments' purpose is to evaluate the association and reliability differences between the category-based and the original-variable-based feature selection processes. The first experiment uses the mushroom data set from the UCI Machine Learning Repository [13]; it has 8,124 observations with 22 categorical variables. The second uses the data set from a 1996 Canadian family expenditure survey [12], shortened as FAMEX96; it has 10,417 instances with more than 250 continuous and categorical variables.

    The mushroom's type is chosen as the response variable while the other 21 variables are the explanatory ones. We compare the feature selection results obtained from the original variables with those obtained from the transformed dummy variables. Please note that $X_2$ in Table 1 is the set of selected original variables, while $X_2$ in Table 2 is the set of selected new variables with merged categories after feature selection on the dummy transformation. We use EG as shorthand for $E(\mathrm{Gini}(X|Y))$.

    Table 1.  Feature selection by the original variables.

        Original Features    |Domain(X2,Y)|    τ(Y|X2)    λ(Y|X2)    EG
        1                    18                0.9429     0.9693     0.4797
        2                    46                0.9782     0.9877     0.7718
        3                    108               0.9907     0.9939     0.9076
        4                    192               1          1          0.9490
    Table 2.  Feature selection by the dummy variables.

        Merged Features    |Domain(X2,Y)|    τ(Y|X2)    λ(Y|X2)    EG
        4                  16                0.9445     0.9693     0.2098
        4                  24                0.9908     0.9939     0.2143
        5                  30                0.9962     0.9979     0.4669
        6                  38                1          1          0.6638

    As shown in Table 1, feature selection through the original variables yields only four variables, with the final association reaching the maximum (data-based) association degree of 1 and a reliability (EG) as poor as 0.9076 already by the third selected variable.

    The category-based feature selection always gives rise to remarkably better reliability (EG(X|Y)) and higher associations (λ or τ).

    One can also see from these two tables that, at the same reliability threshold, the category-based selection achieves a higher association than the original variables: for an almost equal reliability, say EG ≈ 0.46, the original-variable version reaches $\tau=0.9429$ while the category version reaches $\tau=0.9962$. The same holds for $\lambda$.

    In this experiment, the variable HouseholdType is chosen as the response variable and the other 24 categorical variables are considered as explanatory variables. Following a similar pattern to the previous experiment, we report the feature selection results by the original variables in Table 3 and by the dummy variables (or categories) in Table 4.

    Table 3.  Feature selection by the original variables.

        Original Features    |Domain(X2,Y)|    τ(Y|X2)    λ(Y|X2)    EG
        1                    66                0.3005     0.3444     0.8201
        2                    252               0.3948     0.4391     0.9046
        3                    1830              0.4383     0.4648     0.9833
    Table 4.  Feature selection by the dummy variables.

        Merged Features    |Domain(X2,Y)|    τ(Y|X2)    λ(Y|X2)    EG
        2                  24                0.3242     0.3934     0.5491
        2                  36                0.3573     0.4165     0.6242
        2                  48                0.3751     0.4234     0.6388
        3                  96                0.3901     0.4234     0.7035
        4                  186               0.4017     0.4269     0.7774
        4                  282               0.4121     0.4317     0.8066
        5                  558               0.4221     0.4548     0.8782
        6                  966               0.4314     0.4768     0.8968
        7                  1716              0.4436     0.4856     0.9135

    One can see from these two tables that the category-based approach produces an association degree of $\tau=0.3242$ with a reliability measure of 0.5491, while the original-variable approach gives $\tau=0.3005$ with a significantly worse reliability of EG = 0.8201. When the association reaches $\tau=0.3948$ in the original version, its reliability is as poor as 0.9046, yet the category version still achieves about 20% better reliability, 0.7035, at a comparable association. Another way to view this is that the dummy method offers more choices when trading off association against reliability: the original method gives only three options before the reliability degrades to an unreliable 0.9833, whereas the category-based approach offers many more.

    By transforming the categorical explanatory variables into their dummy forms and applying the feature selection procedure to the transformed variables, we can select the informative categories and merge the less informative or redundant ones, thereby increasing the association and improving the reliability, as measured by $E(\mathrm{Gini}(X|Y))$. Experiments show that both the association and the reliability are significantly better with category-based feature selection than with the original variables. Of course, the cost of the category-based approach is higher than that of the original-variable-based one; when the dimension is not too high, however, its time cost (or computing complexity) is acceptable.



  • © 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
