
The Paris Agreement goals require a rapid and deep reduction in global greenhouse gas emissions. Recent studies have shown the large potential of the circular economy to reduce global emissions through improved resource and material efficiency practices. However, most large-scale energy system and Integrated Assessment Models used for mitigation analysis ignore or inadequately represent circular economy measures. This study fills this research gap by enhancing a leading global energy system model with a representation of energy efficiency and circular economy considerations. The scenario-based analysis offers an improved understanding of the potentials, costs and impacts of the circular economy in the decarbonisation context. The study shows that enhanced energy efficiency and increased material circularity can reduce energy consumption in all sectors, most importantly in industry. They can also reduce the carbon price required to achieve the Paris goals and the dependence on expensive, immature, and risky technologies such as Carbon Capture and Storage. Circular economy measures should be properly integrated with broad climate policies to provide a holistic and self-consistent framework for deep reductions in carbon emissions.
Citation: Panagiotis Fragkos. Analysing the systemic implications of energy efficiency and circular economy strategies in the decarbonisation context[J]. AIMS Energy, 2022, 10(2): 191-218. doi: 10.3934/energy.2022011
Determining the authenticity of an image is a significant task in several scenarios, such as the media industry, digital image forensics and academic appraisal. It is important to know whether an image has been tampered with, for instance to decide whether it can serve as valid evidence in a case or as a genuine experimental result. Tampering operations include, but are not limited to, copy-paste, blurring and scale transformation. We want to make a fast and reliable preliminary judgment on tampering, and the widely used JPEG compression algorithm gives us a good opportunity: we can design a detection algorithm based on it. Identification of JPEG compression history has received increasing attention in recent years. When an image has been JPEG compressed but is saved in bitmap format, we cannot access the JPEG file header that contains the compression information, yet we may still need to know the image's compression history.
Among lossy compression algorithms, JPEG (Joint Photographic Experts Group) is one of the most popular and widely used standards. Almost all image software offers JPEG compression when saving digital images. Sometimes images that have been JPEG compressed are later saved as bitmaps, and we cannot tell directly from the files whether they were ever compressed, because the JPEG file header is lost once the image is resaved as a bitmap.
However, this information may be crucial in some cases, for instance in digital image forensics. If the JPEG compression history is exposed, we can make a preliminary judgment that the image may have been tampered with. Methods for detecting the compression history of bitmaps have therefore become an important issue and received widespread attention.
Many efforts have been made in this direction, and many solid results have been achieved. Most of these works rely on JPEG coefficients, the JPEG quantization table, the DCT transform or the wavelet transform, and different approaches have been proposed on these bases.
Thanh et al. [1] proposed a method based on combining the quantization effect with the statistics of discrete cosine transform coefficients, characterized by a statistical model. Hernandez et al. [2] proposed a method designed to avoid returning false results: when it cannot recover a quantization table, the bitmap either was not compressed or was not compressed by the JPEG algorithm. These methods reveal characteristics of JPEG coefficients that are very useful for further work in this area.
There are also JPEG history detection methods that do not need to estimate the quantization table. Fan et al. [3] proposed a detection method based on blocking artifacts in the pixel domain: pixel values across block boundaries should remain consistent in a never-compressed image, whereas compression introduces discontinuities. However, Fan's method [3] has a relatively high computational complexity. Yang et al. [4] used the factor histogram to detect the JPEG compression history of bitmaps: for uncompressed bitmaps the factor histogram decreases as its bin index increases, while no obvious decrease is found in decompressed JPEG images. However, Yang's method [4] suffers a sudden drop in accuracy when the compression quality factor is high, because the blocking-artifact phenomenon is weak under such circumstances; when the quality factor is 98 or higher, the accuracy can fall below 50%. Zhang et al. [5] found that the tetrolet transform proposed by Krommweh et al. [6] can be used to exploit the structure of images. The tetrolet transform is a Haar-type wavelet transform that uses at most 117 different tetromino covers as components to decompose an image. The authors proposed a detection method based on the tetrolet transform to distinguish uncompressed bitmap images from decoded JPEG images. As far as we know, Zhang's method [5] has the highest accuracy to date.
Because JPEG is a lossy compression algorithm, a compressed image loses some information. As shown in [7], the number of zero JPEG coefficients is a major factor determining JPEG compression quality. For the same bitmap image, image quality improves as the JPEG quality factor increases, while the percentage of zeros among the 64 JPEG coefficient positions decreases. We present a method based on this observation.
In this paper, we propose a fast and reliable method to detect the compression history of bitmaps based on image information loss. Our method is faster than most existing methods: many of them include costly compression steps because they need a comparison version of the image to obtain their results. Instead of the quality-factor-100 comparison image produced in [5], we obtain an estimated original image created directly from the test image, which costs much less time than compression. Extensive experimental results demonstrate that the proposed method outperforms the state of the art with respect to detection accuracy and computational complexity. The accuracy is high, especially when the quality factor of the test image is below 97, and even at quality factors of 98 and 99 the method still gives acceptable results. Moreover, the proposed method works whether the test image was compressed with a standard or a non-standard JPEG quantization table: as long as the image was JPEG compressed, the detection is effective.
The remainder of the paper is organized as follows. Section 2 introduces the relationship between JPEG coefficients and the image information loss caused by JPEG compression, and describes how the estimated original image is created. The framework and details of the algorithm are given in Section 3. Section 4 presents and discusses the experimental results, and conclusions are drawn in Section 5.
In this paper, the quality factor Q determines the quality of a JPEG image, and the DCT coefficients after quantization are called JPEG coefficients; they can be read directly from the JPEG file. The number of zero JPEG coefficients is a major factor determining the compression quality of JPEG images. Through extensive experiments, we find that the proportion of zero JPEG coefficients at the 64 DCT positions shows a downward trend as the quality factor increases. In other words, for the same bitmap image, the higher the compression quality, the less information is lost and the lower the percentage of zeros among the 64 JPEG coefficient positions. The percentage of zero JPEG coefficients at the different frequencies can therefore be used as an index of the amount of information lost when a bitmap is JPEG compressed.
When an image is JPEG compressed, it is first divided into 8 × 8 blocks and each block is transformed by the DCT separately, giving 64 coefficient positions per block. The first step of our method is to count the zeros at each of the 64 positions over all blocks. Denote by n(j) the total number of zeros at the jth position and by m the number of blocks. The amount of image information loss at the 64 DCT positions is then expressed as:
p(j) = n(j) / m,  j = 1, 2, ..., 64        (2.1)
and the average image information loss is expressed as:
average_loss = (1/64) Σ_{j=1}^{64} p(j)        (2.2)
Figure 1 illustrates the average information loss of a bitmap after compression into JPEG images with quality factors from 60 to 100: the average image information loss decreases as the quality factor Q grows.
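The indexes of Eqs. (2.1) and (2.2) can be sketched in a few lines of NumPy. This is our own illustration, not the authors' code: the function names are ours, and JPEG quantization at Q = 100 is approximated by simply rounding the per-block DCT coefficients, since the quantization steps are then close to 1.

```python
import numpy as np

# Orthonormal 8x8 DCT-II matrix: D[u, x] = c(u) * cos((2x + 1) * u * pi / 16).
_k = np.arange(8)
_D = np.sqrt(2.0 / 8.0) * np.cos(np.pi * (2 * _k[None, :] + 1) * _k[:, None] / 16)
_D[0, :] = np.sqrt(1.0 / 8.0)

def zero_fraction_per_position(img):
    """p(j) of Eq. (2.1): fraction of zero quantized DCT coefficients at
    each of the 64 positions, approximating JPEG at Q = 100 where the
    quantization step is ~1 and quantization reduces to rounding."""
    h, w = (d - d % 8 for d in img.shape)          # crop to whole 8x8 blocks
    x = img[:h, :w].astype(np.float64) - 128.0     # JPEG level shift
    blocks = x.reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2).reshape(-1, 8, 8)
    coeffs = np.round(_D @ blocks @ _D.T)          # per-block 2-D DCT + rounding
    return (coeffs == 0).mean(axis=0).reshape(64)  # n(j) / m for each j

def average_loss(p):
    """average_loss of Eq. (2.2): mean of p(j) over the 64 positions."""
    return float(p.mean())
```

For a flat mid-gray image every level-shifted block is all zeros, so every p(j) is 1 and the average loss is 1; for detailed images the low-frequency positions have fewer zeros.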
Based on this observation, we obtain JPEG images from an uncompressed image Ibmp with different quality factors, and then decode these JPEG images back to decompressed bitmaps. The uncompressed image and the decompressed images are each JPEG compressed with quality factor 100 to obtain IJPEG1 and IJPEG2 respectively. IJPEG1 is thus a single JPEG compressed image, while IJPEG2 has undergone double JPEG compression. Comparing the amount of image information loss between IJPEG1 and IJPEG2 lets us make a preliminary judgment on compression: a larger difference between IJPEG1 and IJPEG2 indicates greater information loss.
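The Q = 100 recompression step can be sketched with Pillow. This is a sketch under our assumptions, not the paper's implementation: the function name `recompress_q100` is ours, and any JPEG codec would serve equally well.

```python
from io import BytesIO
from PIL import Image

def recompress_q100(img):
    """JPEG-compress a PIL image at quality factor 100 and decode it
    back, producing intermediates such as I_JPEG1 / I_JPEG2 above."""
    buf = BytesIO()
    img.convert("L").save(buf, format="JPEG", quality=100)
    buf.seek(0)
    out = Image.open(buf)
    out.load()  # force decoding while the in-memory buffer is alive
    return out
```

Applying this once to the test bitmap yields IJPEG1; applying it to the estimated original (described below) yields IJPEG2.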
Note, however, that this example assumes we have the original lossless image and then compress it, making the judgment by comparison. In a real case the original lossless image is usually unavailable, so we must first estimate it.
As described in [8,9], an image is divided into blocks when it undergoes JPEG compression, and these blocks are processed separately. To shrink the file size, some information loss is tolerated during quantization: certain frequency components are discarded, especially high-frequency harmonics that cause little or no perceptible change for the human visual system (HVS). These components are redundant for the HVS but carry a lot of information, which is why they matter for compression detection. An image that has never been compressed, or was compressed with a relatively high quality factor, retains more of this information, i.e., signals at many frequencies. A typical compressed image has lost a considerable number of harmonics: most high-frequency components are set to zero, and low-frequency components small enough are zeroed as well. What was discarded cannot be recovered, because JPEG compression is lossy, but it is still possible to estimate the information that existed in the original version of the test image. During JPEG compression, the DCT and quantization are applied per block rather than to the full image, as discussed in [10]; harmonics that are lost within each separate 8 × 8 block therefore still exist across the full-size image. To expose this information, we need to break the existing block structure. We employ a method widely used in image steganalysis [11]: cutting 4 rows and 4 columns from the test image to estimate a counterpart of the original image.
Removing the top 4 rows and left 4 columns has been shown to be an excellent way to estimate a counterpart of the original, uncompressed image, in the sense of similar statistical features, because the cut destroys the block-based structure of JPEG. The row and column cutting is illustrated in Figure 2.
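The cropping step above is a one-liner; we sketch it here (function name ours) mainly to make the grid-breaking effect explicit.

```python
import numpy as np

def estimate_original(img):
    """Estimate a counterpart of the uncompressed original by removing
    the top 4 rows and left 4 columns. The half-block shift breaks the
    8x8 JPEG block grid: each 8x8 block of the result straddles four
    blocks of the original grid."""
    return img[4:, 4:]
```

Because every new 8 × 8 block mixes pixels from four old blocks, coefficients zeroed inside each original block reappear in the cropped image's block DCT, which is exactly what the detector exploits.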
Based on image information loss, we propose a novel algorithm to detect the JPEG compression history, as illustrated in Figure 3. The idea of extracting features from the JPEG file is based on [12].
The whole process is as follows:
ⅰ. To obtain IJPEG1, the test bitmap image is JPEG compressed with quality factor Q = 100.
ⅱ. The counterpart of the original image is estimated by cutting 4 rows and 4 columns from the test image. IJPEG2 is obtained by compressing this counterpart with quality factor Q = 100 as well.
ⅲ. Features related to the image information loss are extracted from the two JPEG images and fed into the classifier to decide whether the test bitmap image has been compressed.
Consider the test image as a possibly decompressed JPEG image, as Figure 4 illustrates. IJPEG1 then actually undergoes double JPEG compression, with an unknown previous quality factor followed by a quality factor of 100, while IJPEG2, built from the counterpart estimated by cutting rows and columns, can be regarded as a single JPEG compressed image with quality factor 100. Since the percentages of zero JPEG coefficients at the 64 DCT positions serve as indexes of the information lost when the bitmap is JPEG compressed, there are disparities between the corresponding indexes of IJPEG1 and IJPEG2 at the 64 DCT positions, as shown in Figure 5: higher information loss is expected for a previously compressed test image. Conversely, if the test image is uncompressed, there should be no obvious difference between the indexes at corresponding positions, as shown in Figure 6.
Let p1(j) denote the indexes of image information loss of IJPEG1 and p2(j) those of IJPEG2. We then describe the difference in information loss as
pdif(j) = p1(j) / p2(j),  j = 1, 2, ..., 64        (3.1)
pdif_average = (1/64) Σ_{j=1}^{64} pdif(j)        (3.2)
pdif_average indicates how much more detail is found in the estimated counterpart than in the test image. If the test image is uncompressed, pdif_average will be close to 1, meaning there is no obvious difference between the test image and the estimated original. If the test image was compressed, this value will be clearly greater than 1, reflecting the bias observed between IJPEG1 and IJPEG2 once the 8 × 8 block structure of the test image is broken by cutting. After extracting this feature, an SVM classifier is trained, and the resulting model is used to detect the JPEG compression history of bitmaps.
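Putting Eqs. (3.1) and (3.2) together, the feature extraction can be sketched as below. The names are ours, the Q = 100 quantization is approximated by rounding the per-block DCT coefficients, and flooring p2 by a small epsilon to avoid division by zero is an implementation detail the paper does not spell out.

```python
import numpy as np

# Orthonormal 8x8 DCT-II matrix.
_k = np.arange(8)
_D = np.sqrt(0.25) * np.cos(np.pi * (2 * _k[None, :] + 1) * _k[:, None] / 16)
_D[0, :] = np.sqrt(0.125)

def _p(img):
    """p(j) of Eq. (2.1): per-position fraction of zero rounded DCT
    coefficients over all 8x8 blocks (Q = 100 quantization ~ rounding)."""
    h, w = (d - d % 8 for d in img.shape)
    x = img[:h, :w].astype(np.float64) - 128.0
    b = x.reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2).reshape(-1, 8, 8)
    return (np.round(_D @ b @ _D.T) == 0).mean(axis=0).reshape(64)

def extract_feature(img, eps=1e-6):
    """pdif(j) = p1(j) / p2(j) (Eq. 3.1) and its mean (Eq. 3.2): p1 from
    the test image, p2 from the 4-row/4-column cropped estimate."""
    p1 = _p(img)            # information loss indexes of I_JPEG1
    p2 = _p(img[4:, 4:])    # information loss indexes of I_JPEG2
    p_dif = p1 / np.maximum(p2, eps)
    return p_dif, float(p_dif.mean())
```

For a never-compressed image the ratio vector stays near 1 at every position; a previously compressed image pushes the average well above 1.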
Two image databases are used to evaluate the performance of the proposed method. First, 1338 uncompressed images from the UCID database, saved in TIFF format at a resolution of 512 × 384, are used. A series of standard JPEG quality factors (60, 70, 75, 80, 85, 90, 95, 96, 97, 98, 99) is applied to obtain JPEG images of different qualities, which are then resaved in TIFF format for evaluating the proposed algorithm. In the following, this dataset is called dataset1.
The other 480 images come from the well-known Dresden database. Unlike the UCID images, the Dresden images were captured by consumer cameras and originally saved as JPEG. We use 480 JPEG images from 4 cameras (Agfa DC-830i, Canon PowerShot A640, Nikon D200 and Sony DSC-W170), 120 images from each. Unlike the JPEG images in dataset1, these were compressed with the custom JPEG quantization tables of the various camera models. These images, with a resolution of 3872 × 2592, are resaved as bitmaps for the experiments and named dataset2.
We take 500 uncompressed images and 11 × 500 decompressed JPEG images from dataset1 as labeled samples to train an SVM classifier with an RBF kernel. The resulting model is used to test the remaining images at the different quality factors (60, 70, 75, 80, 85, 90, 95, 96, 97, 98, 99). We also compare the proposed method with Yang's [4], Fan's [3] and Zhang's [5] methods in terms of detection accuracy and algorithm complexity. The results are shown in Table 1 to Table 3.
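A minimal sketch of the classification stage, assuming scikit-learn is available. The feature vectors below are synthetic stand-ins (ratios near 1 for never-compressed bitmaps, noticeably larger for decompressed JPEGs), not real extracted features, and the standardization step is our own addition.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical 64-D pdif vectors: ~1 for never-compressed bitmaps,
# clearly larger for previously JPEG-compressed images.
X_uncompressed = rng.normal(1.0, 0.05, size=(500, 64))
X_decompressed = rng.normal(2.0, 0.40, size=(500, 64))
X = np.vstack([X_uncompressed, X_decompressed])
y = np.r_[np.zeros(500), np.ones(500)]  # 1 = has JPEG compression history

# RBF-kernel SVM as in the paper.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
```

At test time, `clf.predict(features)` returns 1 when a compression history is detected for the bitmap.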
Table 1. Detection accuracy (%) on dataset1 for different quality factors Q and for original (uncompressed) images.
Method | Q=60 | Q=70 | Q=80 | Q=85 | Q=90 | Q=95 | Q=96 | Q=97 | Q=98 | Q=99 | Original
Fan's | 97.10 | 96.68 | 96.00 | 95.14 | 89.78 | 69.14 | 59.80 | 48.33 | 25.53 | 17.27 | 84.10
Yang's | 99.90 | 100 | 100 | 100 | 99.80 | 98.69 | 96.58 | 88.74 | 78.16 | 39.79 | 96.59
Zhang's | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 99.93 | 99.10 | 95.65 | 99.88
Proposed | 100 | 100 | 100 | 100 | 100 | 100 | 99.93 | 99.48 | 99.03 | 89.31 | 99.92
Table 2. Detection accuracy (%) on dataset2 (custom quantization tables).
Method | Accuracy (%)
Zhang's | 34.08
Proposed | 100
Table 3. Time cost for an image with resolution of 384 × 512.
Method | Time cost (s)
Fan's | 2.73
Yang's | 0.91
Zhang's | 9.64
Proposed | 0.60
As shown in Table 1, Fan's method gives relatively good results when the quality factor is below 85, but its accuracy drops below 90% once the quality factor reaches 90. Yang's method has a similar shortcoming: it performs well up to a quality factor of 96, but its accuracy falls below 90% for higher factors. Zhang's method performs very well, reaching 95.65% accuracy even at a quality factor of 99. Our method outperforms Fan's and Yang's methods, and its detection results are similar to Zhang's: while Zhang's method is better at quality factor 99, ours gives results in the shortest time, as shown in Table 3. Simple computation also shows that the average time cost per pixel is stable. Time cost may not be the most important criterion in this area, but obtaining reliable results in less time is a real advantage in some cases.
Another comparison between the proposed method and Zhang's method [5] uses 480 compressed images from dataset2 to show that our method works on all JPEG compressed images. None of these bitmaps was compressed with the standard JPEG quantization tables; they were taken by cameras and compressed with custom quantization tables. The results are shown in Table 2: Zhang's method turns out to be effective only for images compressed with the standard JPEG quantization tables, while the proposed method still performs well.
The detection of the compression history of images has received increasing attention in recent years. In this paper, we propose a novel and fast detection method based on a feature of image information loss: as information is lost to JPEG compression, the proportion of zero JPEG coefficients at the 64 DCT positions grows. We estimate a counterpart of the original image by cutting 4 rows and 4 columns from the test image and compute the differences between the index values at the 64 DCT positions. The feature extracted from these differences is fed into an SVM to train a model that classifies test bitmap images. Extensive experiments demonstrate that the proposed method outperforms the state of the art, especially in the cases of high compression quality factors and custom quantization tables, and has lower computational complexity than previous works.
This work was supported by the National Natural Science Foundation of China (No. 61502076, No. 61772111).
All authors declare no conflicts of interest in this paper.
[1] | IRP, Global Resources Outlook (2019): Natural Resources for the Future We Want. A Report of the International Resource Panel. 2019, United Nations Environment Programme.: Nairobi, Kenya. Available from: https://www.resourcepanel.org/reports/global-resources-outlook. |
[2] |
Van der Voet E, Van Oers L, Verboon M, et al. (2018) Environmental implications of future demand scenarios for metals: Methodology and application to the case of seven major metals. J Ind Ecol 23: 141-155. https://doi.org/10.1111/jiec.12722 doi: 10.1111/jiec.12722
![]() |
[3] |
Hertwich EG (2021) Increased carbon footprint of materials production driven by rise in investments. Nat Geosci 14: 151-155. https://doi.org/10.1038/s41561-021-00690-8 doi: 10.1038/s41561-021-00690-8
![]() |
[4] |
Rogelj J, Luderer G, Pietzcker RC, et al. (2015) Energy system transformations for limiting end-of-century warming to below 1.5 ℃. Nat Clim Change 5: 519-527. https://doi.org/10.1038/nclimate2572 doi: 10.1038/nclimate2572
![]() |
[5] |
Kirchherr J, Reike D, Hekkert M (2017) Conceptualizing the circular economy: An analysis of 114 definitions. Resour, Conserv Recycl 127: 221-232. https://doi.org/10.1016/j.resconrec.2017.09.005 doi: 10.1016/j.resconrec.2017.09.005
![]() |
[6] | Bocken N, Miller K, Evans S (2016) Assessing the environmental impact of new Circular business models. Conference " New Business Models" —Exploring a changing view on organizing value creation—Toulouse, France. Available from: https://www.researchgate.net/publication/305264490_Assessing_the_environmental_impact_of_new_Circular_business_models. |
[7] | European Commission, Circular Economy Action Plan (2019) Available from: https://ec.europa.eu/environment/strategy/circular-economy-action-plan_en. |
[8] | European Commission (2018) In-Depth Analysis in Support of the Commission Communication COM (2018) 773, A Clean Planet for all: A European long-term strategic vision for a prosperous, modern, competitive and climate neutral economy. Available from: https://ec.europa.eu/clima/system/files/2018-11/com_2018_733_analysis_in_support_en.pdf. |
[9] | Material Economics AB (2018) The Circular Economy. Available from: https://materialeconomics.com/publications/the-circular-economy-a-powerful-force-for-climate-mitigation-1. |
[10] |
Fragkos P, Fragkiadakis K, Paroussos L, et al. (2018) Coupling national and global models to explore policy impacts of NDCs. Energy Policy 118: 462-473. https://doi.org/10.1016/j.enpol.2018.04.002 doi: 10.1016/j.enpol.2018.04.002
![]() |
[11] |
Pauliuk S, Heeren N, Berrill P, et al. (2021) Global scenarios of resource and emission savings from material efficiency in residential buildings and cars. Nat Commun 12: 5097. https://doi.org/10.1038/s41467-021-25300-4 doi: 10.1038/s41467-021-25300-4
![]() |
[12] |
Edelenbosch OY, Kermeli K, Crijns-Graus W, et al. (2017) Comparing projections of industrial energy demand and greenhouse gas emissions in long-term energy models. Energy 122: 701-710. https://doi.org/10.1016/j.energy.2017.01.017 doi: 10.1016/j.energy.2017.01.017
![]() |
[13] |
Geissdoerfer M, Pieroni M, Pigosso D, et al. (2020) Circular Business Models: A Review. J Cleaner Prod 277: 123741. https://doi.org/10.1016/j.jclepro.2020.123741 doi: 10.1016/j.jclepro.2020.123741
![]() |
[14] | European Commission, Study on the review of the list of critical raw materials: Non-critical raw materials factsheets (2020) Available from: https://op.europa.eu/en/publication-detail/-/publication/6f1e28a7-98fb-11e7-b92d-01aa75ed71a1/language-en. |
[15] |
Mayer A, Haas W, Wiedenhofer D, et al. (2019) Measuring progress towards a circular economy: A monitoring framework for Economy-wide material loop closing in the EU28. (2019) J Ind Ecol 23: 62-76. https://doi.org/10.1111/jiec.12809 doi: 10.1111/jiec.12809
![]() |
[16] | UNEP (2017) Resource Efficiency: Potential and Economic Implications. Available from: http://www.resourcepanel.org/reports/resource-efficiency. |
[17] | European Commission (2018) A European strategy for plastics in a circular economy. Available from: https://www.europarc.org/wp-content/uploads/2018/01/Eu-plastics-strategy-brochure.pdf. |
[18] | Trinomics (2018) Cooperation fostering industrial symbiosis: market potential, good practice and policy actions. Available from: https://op.europa.eu/en/publication-detail/-/publication/174996c9-3947-11e8-b5fe-01aa75ed71a1/language-en. |
[19] |
Fragkos P (2020) Global energy system transformations to 1.5 ℃: The impact of revised intergovernmental panel on climate change carbon budgets. Energy Technol 8: 2000395. https://doi.org/10.1002/ente.202000395 doi: 10.1002/ente.202000395
![]() |
[20] |
Fragkos P, Kouvaritakis N (2018) Model-based analysis of intended nationally determined contributions and 2 ℃ pathways for major economies. Energy 160: 965-978. https://doi.org/10.1016/j.energy.2018.07.030 doi: 10.1016/j.energy.2018.07.030
![]() |
[21] | Capros P, DeVita A, Tasios N, et al. (2016) EU Reference Scenario 2016—Energy, Transport and GHG Emissions Trends to 2050; European Commission Directorate General for Energy, Directorate General for Climate Action and Directorate General for Mobility and Transport: Brussels, Belgium, 2016. Available from: http://www.e3mlab.eu/e3mlab/reports/referencescenario2016report.pdf. |
[22] |
Fragkos P, Kouvaritakis N, Capros P (2015) Incorporating uncertainty into world energy modelling: The Prometheus model. Environ Model Assess 20: 549-569. https://doi.org/10.1007/s10666-015-9442-x doi: 10.1007/s10666-015-9442-x
![]() |
[23] |
Fragkos P, Kouvaritakis N (2018) Investments in power generation under uncertainty—a MIP specification and Large-Scale application for EU. Environ Model Assess 23: 511-527. https://doi.org/10.1007/s10666-017-9583-1 doi: 10.1007/s10666-017-9583-1
![]() |
[24] |
Grubler A, Wilson C, Bento N, et al. (2018) A low energy demand scenario for meeting the 1.5 C target and sustainable development goals without negative emission technologies. Nature Energy 3: 515-527. https://doi.org/10.1038/s41560-018-0172-6 doi: 10.1038/s41560-018-0172-6
![]() |
[25] |
Fotiou T, de Vita A, Capros P (2019) Economic-Engineering modelling of the buildings sector to study the transition towards deep decarbonisation in the EU. Energies 12: 2745. https://doi.org/10.3390/en12142745 doi: 10.3390/en12142745
![]() |
[26] |
Capros P, Zazias G, Evangelopoulou S, et al. (2019) Energy-system modelling of the EU strategy towards climate-neutrality. Energy Policy 134: 110960. https://doi.org/10.1016/j.enpol.2019.110960 doi: 10.1016/j.enpol.2019.110960
![]() |
[27] | Towards the Circular Economy vol. 1, 2013, Ellen McArthur Foundation. Available from: https://ellenmacarthurfoundation.org/towards-the-circular-economy-vol-1-an-economic-and-business-rationale-for-an. |
[28] | IEA (2020) Iron and Steel. Paris. Available from: https://www.iea.org/reports/iron-and-steel. |
[29] | ISRI (2019) 2019 Recycling Industry Year Book. Available from: https://www.isri.org/recycling-commodities-old/recycling-industry-yearbook. |
[30] | WSP, Parsons Brinckerhoff, DNV GL (2015) Industrial Decarbonisation & Energy Efficiency Roadmaps to 2050—Cement. Available from: https://www.gov.uk/government/publications/industrial-decarbonisation-and-energy-efficiency-roadmaps-to-2050. |
[31] | Accenture (2017) Taking the EU chemicals industry into the circular economy. Available from: https://www.accenture.com/us-en/_acnmedia/PDF-45/Accenture-CEFIC-Report-Exec-Summary.pdf. |
[32] | JRC (2018) Prospective scenarios for the pulp and paper industry. Available from: https://publications.jrc.ec.europa.eu/repository/handle/JRC111652. |
[33] | van Soest HL, Aleluia Reis L, Baptista LB, et al. (2021) Global roll-out of comprehensive policy measures may aid in bridging emissions gap. Nat Commun, 12. https://doi.org/10.1038/s41467-021-26595-z |
[34] |
Fragkos P (2021) Assessing the role of carbon capture and storage in mitigation pathways of developing economies. Energies 14: 1879. https://doi.org/10.3390/en14071879 doi: 10.3390/en14071879
![]() |
[35] | McCollum DL, Zhou W, Bertram C, et al. (2018) Energy investment needs for fulfilling the Paris Agreement and achieving the Sustainable Development Goals. Nat Energy 3: 589-599. https://doi.org/10.1038/s41560-018-0179-z |
[36] | Rogelj J, Shindell D, Jiang K, et al. (2018) Mitigation pathways compatible with 1.5 ℃ in the context of sustainable development. In Global Warming of 1.5 ℃: an IPCC special report on the impacts of global warming of 1.5 ℃ above pre-industrial levels, Geneva, Switzerland: IPCC, In press. Available from: https://www.ipcc.ch/sr15/. |
[37] | Fricko O, Havlik P, Rogelj J, et al. (2017) The marker quantification of the shared socioeconomic pathway 2: A middle-of-the-road scenario for the 21st century. Global Environ Change 42: 251-267. https://doi.org/10.1016/j.gloenvcha.2016.06.004 |
[38] | OECD (2020) OECD Economic Outlook, Volume 2020, Issue 1. OECD Publishing, Paris, France. Available from: https://www.oecd-ilibrary.org/economics/oecd-economic-outlook/volume-2020/issue-1_0d1d1e2e-en. |
[39] | The World Bank (2021) Global Economic Prospects, June 2021. Available from: https://www.worldbank.org/en/publication/global-economic-prospects. |
[40] | Rochedo PRR, Fragkos P, Garaffa R, et al. (2021) Is green recovery enough? Analysing the impacts of Post-COVID-19 economic packages. Energies 14: 5567. https://doi.org/10.3390/en14175567 |
[41] | Fragkos P, Fragkiadakis K, Sovacool B, et al. (2021) Equity implications of climate policy: Assessing the social and distributional impacts of emission reduction targets in the European Union. Energy 237: 121591. https://doi.org/10.1016/j.energy.2021.121591 |
[42] | Vona F (2019) Job losses and political acceptability of climate policies: why the 'job-killing' argument is so persistent and how to overturn it. Climate Policy 19: 524-532. https://doi.org/10.1080/14693062.2018.1532871 |
[43] | Madeddu S, Ueckerdt F, Pehl M, et al. (2020) The CO2 reduction potential for the European industry via direct electrification of heat supply (power-to-heat). Environ Res Lett 15: 124004. https://doi.org/10.1088/1748-9326/abbd02 |
[44] | Levesque A, Pietzcker R, Luderer G (2019) Halving energy demand from buildings: The impact of low consumption practices. Technol Forecast Soc Change 146: 253-266. https://doi.org/10.1016/j.techfore.2019.04.025 |
[45] | Rodrigues R, Pietzcker R, Fragkos P, et al. (2021) Narrative-driven alternative roads to achieve mid-century CO2 net neutrality in Europe. Energy 239: 121908. https://doi.org/10.1016/j.energy.2021.121908 |
[46] | Brodny J, Tutak M (2022) Analysis of the efficiency and structure of energy consumption in the industrial sector in the European Union countries between 1995 and 2019. Sci Total Environ 808: 152052. https://doi.org/10.1016/j.scitotenv.2021.152052 |
[47] | Falcone PM, Hiete M, Sapio A (2021) Hydrogen economy and sustainable development goals: Review and policy insights. Curr Opin Green Sustainable Chem 31: 100506. https://doi.org/10.1016/j.cogsc.2021.100506 |
[48] | Sharma HB, Vanapalli KR, Samal B, et al. (2021) Circular economy approach in solid waste management system to achieve UN-SDGs: Solutions for post-COVID recovery. Sci Total Environ 800: 149605. https://doi.org/10.1016/j.scitotenv.2021.149605 |
[49] | Rocca R, Rosa P, Sassanelli C, et al. (2020) Integrating virtual reality and digital twin in circular economy practices: A laboratory application case. Sustainability 12: 2286. https://doi.org/10.3390/su12062286 |
[50] | D'Adamo I, Falcone PM, Martin M, et al. (2020) A sustainable revolution: Let's go sustainable to get our globe cleaner. Sustainability 12: 4387. https://doi.org/10.3390/su12114387 |
[51] | Dalla Longa F, Fragkos P, Nogueira LP, et al. (2022) System-level effects of increased energy efficiency in global low-carbon scenarios: A model comparison. Comput Ind Eng 167: 108029. https://doi.org/10.1016/j.cie.2022.108029 |
[52] | Bordage F (2019) The Environmental Footprint of the Digital World. GreenIT.fr, p. 39. Available from: https://www.greenit.fr/wp-content/uploads/2019/11/GREENIT_EENM_etude_EN_accessible.pdf. |