
According to the World Health Organization (WHO), cancer is the leading cause of death worldwide [1]. Early detection is possible in many cases, yet it does not guarantee survival. A tumor, as opposed to cancer, can be benign, pre-cancerous, or malignant. Benign tumors differ from malignant ones in that they do not usually spread to other organs or tissues and can typically be removed completely [2]. Medical imaging encompasses a wide range of procedures that provide non-invasive views of the inside of the body [3]. It combines a number of imaging modalities and processes for therapeutic and diagnostic purposes, and it therefore plays an important and decisive role in guiding clinical actions that improve patient welfare. Image segmentation enables higher-level image analysis and is a significant and critical step in image processing [4].
The main goals of image segmentation in medical image processing are to identify tumors or lesions, support effective machine vision, and obtain reliable results. Improving the sensitivity and specificity of tumor or lesion detection has been a top priority in the design of computer-aided diagnostic (CAD) systems. Brain and other nervous system cancer is the tenth leading cause of death, and the five-year survival rate for people with a malignant brain tumor is 34% for men and 36% for women. Furthermore, the WHO predicts that 400,000 people worldwide will be affected by brain tumors in the coming year, with 120,000 deaths, and an additional 86,970 cases of non-malignant brain and other central nervous system (CNS) tumors were expected to be diagnosed in the United States in 2019 [5]. A brain tumor develops when abnormal cells grow within the brain [6]. Tumors are classified as either malignant or benign. Malignant brain tumors are dangerous because they start inside the brain, grow quickly, and damage the surrounding tissues; they can spread to other areas of the brain and affect the central nervous system. Brain metastases are of two kinds: malignancies that develop within the brain and those that have spread from elsewhere in the body. A benign neoplasm, by contrast, is a collection of cells that grows gradually within the brain. Early detection of brain cancer is therefore critical for improving treatment outcomes and the chances of survival. However, because MRI images are produced in large volumes in daily clinical practice, manual segmentation of tumors or lesions can be laborious, time-consuming, and error-prone. Magnetic resonance imaging (MRI) is the primary modality considered here, and its main purpose in this context is to detect tumors or lesions. Tumor segmentation from MRI is one of the most essential steps in medical image processing because it typically involves a large amount of data, and tumors with soft-tissue boundaries are challenging to delineate. As a result, precise segmentation of tumors in the human brain is a daunting task. We therefore developed an efficient approach, based on a convolutional neural network, that supports the segmentation and identification of neoplasms without human intervention.
Brain tumor segmentation is one of the most vital and challenging tasks within the landscape of medical image processing, since human-assisted manual classification can result in inaccurate prediction and diagnosis. The task becomes all the more demanding when a larger-than-usual amount of data must be handled. Because brain tumors vary widely in appearance, and tumor tissue can resemble normal tissue, an efficient and robust framework is required for extracting tumor regions from images.
The primary aim here is to implement a method for automatic detection of neoplasms using a convolutional neural network (CNN) with MRI scans as samples. A CNN is used to build the model, train it on records of previous tumor patients, and then apply that model to predict whether a new image shows disease or not.
The significant contributions of this paper are as follows:
1) A pipeline based on ensemble DL is presented for efficient and cost-effective tumor identification in the brain.
2) An ensemble method is proposed that uses four distinct deep learning architectures (CNN, AlexNet, ResNet50, and VGG19) to establish the optimal recognition system by selecting the best-fitting architecture parameters.
3) We comprehensively analyze the effectiveness of the key variables affecting the tuning of pretrained systems.
4) We compare the proposed models using several critical characteristics for brain tumor detection.
This section reviews the research already carried out in this area and identifies gaps that have yet to be filled. Brain tumor segmentation is among the most important and demanding tasks within the landscape of medical image processing, as human-assisted manual classification can result in incorrect prediction and diagnosis, and it becomes even more difficult when a large amount of data must be sorted. Extracting tumor regions from images is hard because brain tumors vary widely in appearance and share similarities with normal tissue. One proposed method uses a Fuzzy C-Means clustering algorithm followed by conventional classifiers and a convolutional neural architecture to detect malignancies in 2D magnetic resonance brain images (MRI); the experiments used a consistent dataset covering various tumor sizes, locations, shapes, and image qualities.
Deep convolutional neural networks (ConvNets) have been examined for brain tumor classification [7]. Using MRI patches, slices, and multi-planar volumetric slices, three ConvNets were trained entirely from scratch. The findings show that accuracy improves in all cases when the model is trained on multi-planar volumetric data, reaching a testing accuracy of 97% with no additional effort spent on feature extraction and selection, as required in conventional models. Compared with state-of-the-art systems that rely on manual feature engineering, ConvNets improved grading accuracy by up to 12%. By visualizing the outputs of the intermediate layers, the authors also examined the self-learned kernels and filters at different levels.
State-of-the-art MRI-based brain tumor segmentation procedures are described in detail in [8]. Because MRI is non-invasive and highly sensitive to tissue contrast, a large number of brain tumor segmentation methods operate on MRI images and employ a variety of attributes to gather and organize information, including spatial information in local neighborhoods. The motivation behind these techniques is to support critical decisions on tumor detection and therapy while delivering reliable results within a reasonable time. That work presents new methodologies for MRI images of a patient's brain: preprocessing is completed with a Gaussian filter, which is a linear filter; GLCM features are then extracted from the images; and finally, classification is carried out using a convolutional neural network, which can detect tumor regions. Automated brain tumor detection can be of great assistance to doctors and to clinical imaging workflows built on collected CT and MRI scans. Tumor detection and extraction are discussed in [9]: the region is segmented, and the assessment of the tumor's character using the proposed tool helps specialists in planning treatment and monitoring the tumor's state. The advantages of this method are that it improves the segmentation quality and spatial localization, as well as the efficiency, compared with the alternative frameworks; it requires less computation time and makes training easier, with fewer parameters than comparable systems. The framework's accuracy is further improved by using an artificial neural network as the classifier. The automated detection strategy consists of three main stages: (1) preprocessing, (2) classification using a CNN, and (3) post-processing.
A new CNN model for three different forms of brain tumor is demonstrated in [10]. The network was thoroughly evaluated on T1-weighted contrast-enhanced magnetic resonance images and is less complex than commonly used pretrained networks. Its performance was assessed using four procedures: two 10-fold cross-validation strategies and two databases. Subject-wise cross-validation, one of the 10-fold methods, was used to assess the network's generalization capability, and a larger image database was used to test the improvement. The record-wise cross-validation on the augmented dataset yielded the best result among the 10-fold cross-validation strategies, with an accuracy of 96.56%. Owing to its strong generalization and speed, the newly developed CNN design can serve as an alternative decision-support tool for radiologists in clinical diagnostics. A CNN model considering local and contextual information was presented in [11]: a preprocessing stage standardizes the images, and a post-processing phase eliminates false positives. Another study proposes a hybrid strategy combining neutrosophic sets and CNN for neoplasm diagnosis, outperforming traditional CNN, SVM, and k-nearest neighbor (KNN) methods with an accuracy of 95.62%. A further study uses the faster R-CNN approach to locate and identify malignancies within MRI brain images, classifying tumors into one of three classes: meningioma, glioma, or pituitary tumor. This approach was chosen because it performs classification more rapidly and precisely than regular R-CNN.
Brain tumor detection is performed by acquiring and analyzing MRI scans in [12]. Machine learning can be a powerful tool for detecting malignant tumors on MRI. The authors achieved a training accuracy of 99% and a validation accuracy of 98.6%, with validation loss falling from 0.704 to near 0.000 over more than 35 epochs. The model was built on the CPU version of TensorFlow; the GPU version trains far faster, which can lead to much quicker model creation. A convolutional neural network (CNN) is used, with steps including data acquisition, augmentation, and model creation. The system presented in that paper is essentially an image classification approach based on the LeNet architecture, although more distinctive methodologies are available.
Deep convolutional neural networks (ConvNets) are investigated in [13] for grading brain tumors using multisequence MR images. For MRI patches, slices, and multi-planar volumetric slices, three ConvNets are constructed from scratch. The results show that ConvNets achieve greater precision overall when the model is trained on a multi-planar volumetric dataset, reaching a testing precision of 97% with no additional effort spent on feature extraction and selection, as needed in conventional models. Compared with state-of-the-art approaches requiring manual feature design, ConvNets improve grading performance dramatically, by 12%. By visualizing the outputs of the intermediate layers, the authors further examine the characteristics of the self-learned kernels and filters at different levels.
A system for classifying fresh brain MRI scans into those with and those without tumors, with no human intervention, is developed in [14]. To apply several varieties of classifiers, the images are first preprocessed in terms of color, region of interest, file format, and intensity levels, using two mainstream tools, viz. ImageJ and MATLAB. The most significant features of the preprocessed images are then extracted; ten distinct features are obtained at this stage. Finally, the WEKA 3.6 tool is used to apply four different classification algorithms to these features and to calculate the precision/recall, the F-measure, the fraction of correctly classified images, and the time taken to build each model. One of the basic techniques used to identify tumors within the brain [15] is magnetic resonance imaging (MRI); it provides essential information for examining the internal structure of the brain. Because of the variety and complexity of brain tumors, classifying MR images is far from straightforward. The suggested approach for detecting a tumor in MR images includes sigma filtering, adaptive thresholding, and region detection. It employs two supervised classifiers: the first is the C4.5 decision tree algorithm, followed by the multi-layer perceptron (MLP) algorithm. The classifiers categorize the brain as normal or abnormal; one benign tumor type and five different malignant tumor types may be found in the abnormal brain. Using 174 samples of brain MR images and the MLP computation, a maximum precision of nearly 95% is achieved.
The authors of [16] proposed a method for automated brain tumor diagnosis that uses a computer-assisted system to improve accuracy and efficiency compared with manual diagnosis. The process involves preprocessing the brain MRI image, obtaining tumor proposals using an agglomerative-clustering-based method, transferring the proposals to a backbone architecture for feature extraction, refining the proposals through a refinement network, aligning the refined proposals to the same size, and finally using a head network for classification. The method was tested on a publicly available brain tumor dataset and showed an overall classification accuracy of 98.04%, outperforming existing approaches; the model achieved high accuracy and sensitivity in the classification task. In [17], the authors proposed a method for detecting brain tumors using an automated magnetic resonance imaging (MRI) technique. The process includes preprocessing the MRI images, using two different deep learning models to extract features, combining the features into a hybrid feature vector using the partial least squares (PLS) method, revealing the top tumor locations via agglomerative clustering, aligning the proposals to a predetermined size, and finally using a head network for classification. The method showed a high classification accuracy of 98.95%, outperforming existing approaches, and can potentially improve the accuracy and efficiency of brain tumor diagnosis.
Convolutional neural networks are widely used to process medical image data, and many researchers have spent considerable effort developing models that detect tumors more reliably. We wanted to develop a dependable approach for correctly categorizing tumors from 2D brain MRI scans. Although a fully connected neural network can also identify tumors, we chose a CNN for our model because of its parameter sharing and sparsity of connections [18], as shown in Figure 1.
A 5-layer convolutional neural network is described and utilized for tumor localization. The model exploits the hidden layers to provide the first tangible evidence of the tumor's presence. Feature extraction is carried out by several convolutional modules that make up a CNN; each module starts with a convolutional layer and ends with a pooling layer [19]. The last convolutional module is followed by at least one dense layer that performs classification. The final dense layer in a CNN uses the softmax activation function to produce a value between 0 and 1 for each node, one for each target class in the model (i.e., each of the possible classes the model may predict); the sum of these softmax values equals 1. The softmax [20] values for a given image can be interpreted as relative estimates of the probability that the image belongs to each target class, as shown in Figure 2.
Using the convolutional layer together with the input layer, an input shape of 64 × 64 × 3 is created for the MRI images, converting each image to homogeneous dimensions. After bringing all images to the same shape, we used 32 convolutional filters, each of size 3 × 3, over the three-channel tensors to produce a convolutional feature map coupled with the input layer. ReLU is used as the activation function. The spatial size of the representation is progressively reduced in this ConvNet model to cut the number of parameters and the computation time of the network [21]. Brain MRI images are also prone to overfitting, and our max pooling layer works well against this: we employ MaxPooling2D to capture the spatial information of the input image. The pool size is (2, 2), i.e., a tuple of two integers by which the input is downscaled vertically and horizontally, halving both spatial dimensions. After this pooling layer, the feature maps have size 31 × 31 × 32 [22]. One of the most critical layers after pooling is flattening: it is essential because the input images must be represented as a single column vector before being fed to the neural network for processing, as shown in Figure 3.
We used two fully connected layers, denoted Dense-1 and Dense-2. In Keras, the Dense function is used to build these layers of the neural network, and the flattened vector serves as the input to this stage. The hidden layer contains 128 nodes. We kept this number as low as possible, since the number of nodes is proportional to the computational resources needed to fit the model, and 128 nodes produced the most favorable result. ReLU is used because it offers better convergence performance [23]. Following the first dense layer, the model's final layer was created using the second fully connected layer. We used the sigmoid function as the activation, with a single output node, because we wanted to reduce the computational resources used and thereby the execution time. Although training deep networks with the sigmoid can be hampered by saturation, we scale the sigmoid input, and the number of nodes is small enough to handle in this deep network [24]. The functional evolution of the suggested CNN model is shown here.
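To make the layer sequence concrete, here is a minimal Keras sketch of the model described above (64 × 64 × 3 input, 32 filters of size 3 × 3, 2 × 2 max pooling, flattening, a 128-node ReLU dense layer, and a single sigmoid output node); the optimizer and loss settings are our own assumptions rather than details reported above:

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),               # MRI inputs resized to 64 x 64 x 3
    layers.Conv2D(32, (3, 3), activation="relu"),  # 32 convolutional filters of size 3 x 3
    layers.MaxPooling2D(pool_size=(2, 2)),         # downscale both spatial dimensions by 2
    layers.Flatten(),                              # flatten feature maps to a single vector
    layers.Dense(128, activation="relu"),          # hidden dense layer with 128 nodes (Dense-1)
    layers.Dense(1, activation="sigmoid"),         # single-node tumor / no-tumor output (Dense-2)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])  # assumed settings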
Data augmentation is a strategy that allows practitioners to significantly increase the range of data available for training models without gathering new data. Since this is a small dataset, there were few examples with which to train the neural network; the augmentation also helped address the class-imbalance problem in the data. Before augmentation, the dataset (Radhamadhab Dalai, July 1, 2021, "Brain Tumor Dataset", https://dx.doi.org/10.21227/2qfw-bf10) comprised:
i. 155 positive and 98 negative examples, giving 253 images in total [25].
ii. After augmentation, the dataset comprises:
iii. 1085 positive and 980 negative examples, resulting in 2065 images.
iv. Note that these 2065 examples also include the 253 original images. (A sketch of such an augmentation pipeline follows this list.)
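As a sketch of how such an augmentation step might be implemented, the following uses Keras' ImageDataGenerator; the specific transformations, parameter values, and directory layout are illustrative assumptions, not the exact pipeline used here:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=15,            # small random rotations
    width_shift_range=0.1,        # random horizontal translations
    height_shift_range=0.1,       # random vertical translations
    brightness_range=(0.8, 1.2),  # mild brightness jitter
    horizontal_flip=True,         # mirror images left-right
)
# Assumes images are sorted into "yes"/"no" subfolders of brain_tumor_dataset/.
batches = augmenter.flow_from_directory(
    "brain_tumor_dataset/", target_size=(240, 240), batch_size=32, class_mode="binary")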
The following image preprocessing techniques are used to enhance the image quality:
1) Rescaling: Rescaling the image to a desired size is often one of the first steps in preprocessing. This step is essential to ensure that the image has a consistent size, which makes it easier to process.
2) Denoising: Removing noise from the image can improve its visual quality and make it easier for the algorithm to process. There are various techniques for denoising an image, including Gaussian and median filtering.
3) Color Correction: Adjusting an image's brightness, contrast, and saturation can improve its visual quality and make it easier for the algorithm to identify patterns and features.
4) Rotations and translations: Images may need to be rotated or translated to align with a reference image. It is crucial in applications such as medical imaging, where the images must be registered accurately to obtain meaningful results.
5) Image segmentation: Image segmentation is the process of dividing an image into multiple regions, or segments, based on the similarity of pixel values. It is an essential step in object recognition and classification tasks.
The following preprocessing procedures were conducted for each image:
Crop the part of the image that contains only the brain (the most informative region of the image). Resize the image so that its shape is (240, 240, 3) = (image width, image height, number of channels): the dataset's photos come in different sizes, and all images must have a consistent shape before being fed as input to the neural network. Apply normalization to scale pixel values from 0 to 1. A code sketch of these steps is given below.
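The following OpenCV sketch illustrates these preprocessing steps; the contour-based cropping and the threshold value are plausible assumptions on our part, not documented details of the original pipeline:

import cv2
import numpy as np

def preprocess(path):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    # Threshold (the value 45 is an assumed setting) and take the largest
    # contour, which is assumed to outline the brain.
    _, thresh = cv2.threshold(gray, 45, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
    cropped = img[y:y + h, x:x + w]            # keep only the brain region
    resized = cv2.resize(cropped, (240, 240))  # shape (240, 240, 3) as described above
    return resized.astype(np.float32) / 255.0  # scale pixel values to [0, 1]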
The data were divided as follows (a split sketch follows the list):
Training data = 70%
Validation data = 15%
Testing data = 15%.
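A sketch of this 70/15/15 split using scikit-learn, assuming the images and labels are held in arrays X and y, is:

from sklearn.model_selection import train_test_split

# Hold out 30% first, then divide it equally into validation and test sets (15% each).
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=42)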
CNNs contain different layers, each with a different function.
Convolution Layer: This is where the learning process occurs; it computes the convolutions between the neurons and the different patches of the input. Image data are stored in a 4D tensor [26] (a tensor is a multidimensional array of components that describe functions of the coordinates of a space) and are usually processed by 2D convolutional layers.
Pooling Layer: This layer decreases the network's count of parameters (weights). A model that fits the training data too well is said to overfit: it becomes so attuned to the details and noise of the training data that it performs poorly on new data. This layer also increases the network's robustness [27]. The pooling layer preserves the essential characteristics while reducing the size of the image and is mainly placed between two convolution layers.
Flattening Layer: The fully connected layers that follow operate on 1D data; this layer converts the 2D feature maps (tensors/arrays) into a 1D vector.
Fully Connected Layer: Here, the output of the previous layer is fed to the FC layer [28]. This is the last layer placed before the output layer; it comprises the weights along with the network's neurons and connects the neurons between two different layers. Attaching a fully connected layer is a feasible and cost-effective way to learn a nonlinear combination of these features.
Activation Function: The activation function is a significant parameter of the convolutional neural network model. There are several activation functions, such as softmax, ReLU, and sigmoid; below we discuss the two most suitable activation functions for our problem.
The ReLU correction layer applies the activation function to the output of the preceding layer, adding nonlinearity to the network. Such activation functions allow the network to learn continuous and complex relationships between network variables.
ReLU is usually defined as ReLU(i) = max(0, i) and is visually represented in Figure 4:

f(i) = max(0, i)
SoftMax Layer: Softmax extends binary classification ideas to the multiclass setting and is closely related to the cross-entropy function. This layer is used to assess the model's efficiency by employing a loss function, the cross-entropy (which measures the randomness of the information being processed), to maximize the network's performance.
SoftMax function:

s_i = exp(a_i) / ∑_{j=1}^{n} exp(a_j)    (1)

for i = 1, 2, ..., n.
The softmax function is often employed as the final activation function of a neural network to normalize its output.
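Both activation functions are one-liners in NumPy; the following sketch matches the definitions above (the max-shift in softmax is a standard numerical-stability detail we add, and it does not change the result):

import numpy as np

def relu(x):
    return np.maximum(0, x)        # ReLU(i) = max(0, i)

def softmax(a):
    e = np.exp(a - np.max(a))      # subtract the max to avoid overflow
    return e / e.sum()             # s_i = exp(a_i) / sum_j exp(a_j)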
Architecture detail: Each input x (an image) is supplied to the neural network with shape (240, 240, 3). It then passes through the following layers (a code sketch follows the list):
1) A zero-padding layer with a padding size of (2, 2).
2) A 32-filter convolutional layer with a stride of 1 and a filter size of (7, 7) [29].
3) A batch normalization layer that normalizes the activations to speed up and stabilize the calculations.
4) A ReLU activation layer.
5) A max pooling layer with f and s both equal to 4.
6) A second max pooling layer, again with f = 4 and s = 4.
7) A layer that flattens the three-dimensional matrix into a one-dimensional vector.
8) A fully connected dense (output) layer with one neuron and sigmoid activation (since this is a binary classification task).
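A minimal Keras sketch of this eight-step architecture, assuming the layers follow exactly the order listed (the compilation settings are our assumptions), is:

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(240, 240, 3)),
    layers.ZeroPadding2D(padding=(2, 2)),                    # step 1: zero-padding
    layers.Conv2D(32, (7, 7), strides=(1, 1)),               # step 2: 32 filters, 7 x 7, stride 1
    layers.BatchNormalization(),                             # step 3: normalize activations
    layers.Activation("relu"),                               # step 4: ReLU
    layers.MaxPooling2D(pool_size=(4, 4), strides=(4, 4)),   # step 5: f = s = 4
    layers.MaxPooling2D(pool_size=(4, 4), strides=(4, 4)),   # step 6: f = s = 4
    layers.Flatten(),                                        # step 7: 3D volume -> 1D vector
    layers.Dense(1, activation="sigmoid"),                   # step 8: binary output unit
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])  # assumed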
The training process involves updating the model's parameters to minimize the difference between the model's predicted outputs and the actual outputs (i.e., the labels) in the training data. This process is known as supervised learning and is typically performed using a stochastic gradient descent (SGD) variant. The choice of hyperparameters, such as the learning rate, the number of layers in the model, the number of attention heads, and the model's size, among others, can significantly impact the model's performance. Finding the optimal hyperparameters is often performed through hyperparameter tuning, which involves training multiple models with different hyperparameters and comparing their performance on a validation set. The best-performing model is then selected and used for inference. The number of hyperparameters that can be adjusted and the optimal values for each can vary greatly depending on the specific task and dataset being used. Additionally, many other techniques can be used to improve the performance of deep models, such as transfer learning, fine-tuning, and data augmentation.
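As an illustration of such a search, a simple grid over two hyperparameters might look as follows; build_model is a hypothetical factory returning a fresh uncompiled Keras model, and the value grids, epoch count, and data arrays are assumptions:

import itertools
import tensorflow as tf

learning_rates = [0.01, 0.001, 0.0001]
batch_sizes = [16, 32]
best = {"val_acc": 0.0}

for lr, bs in itertools.product(learning_rates, batch_sizes):
    model = build_model()  # hypothetical factory for a fresh model
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    history = model.fit(X_train, y_train, validation_data=(X_val, y_val),
                        epochs=20, batch_size=bs, verbose=0)
    val_acc = max(history.history["val_accuracy"])  # best validation accuracy seen
    if val_acc > best["val_acc"]:
        best = {"val_acc": val_acc, "lr": lr, "batch_size": bs}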
The architecture we proceed with is GoogLeNet, also known as Inception v1. GoogLeNet consists of nine inception modules stacked linearly and has 22 layers (27 when the pooling layers are counted). The Inception network substantially lowers the computational cost without sacrificing accuracy or speed; as a result, GoogLeNet has only about seven million parameters. The model is composed of nine inception modules, three softmax layers for the main and auxiliary classifiers, four max-pooling layers, four convolutional layers, three average-pooling layers, and five fully connected layers. The architecture uses the ReLU [30] operation in all convolutional layers and dropout regularization in the fully connected layer.
1 × 1 Convolution: 1 × 1 convolutions are used in the Inception architecture to increase the depth of the network while reducing the number of parameters in the design.
Global average pooling: This is used at the network's end to reduce the number of trainable parameters there to zero and increases the top-1 accuracy by 0.6%.
Inception module: In the inception module, 3 × 3 max pooling and 1 × 1, 3 × 3, and 5 × 5 convolutions are performed in parallel on the input, and the outputs are stacked together to produce the final output, as shown in Figure 5. The first convolutional layer of the network uses a patch size of 7 × 7, which is moderately large compared with the other patch sizes within the network; its main purpose is to reduce the input image immediately without losing information and necessary detail. By the second convolutional layer, the input image has been reduced by a factor of four, and by a factor of eight before reaching the first inception module, while a large number of feature maps are still generated.
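To make the parallel-branch structure explicit, here is a sketch of one inception module in Keras' functional API; the helper name and argument order are our own, while the filter counts in the usage comment follow the inception (3a) row of Table 2:

from tensorflow.keras import layers

def inception_module(x, f1, f3r, f3, f5r, f5, fpool):
    # Four parallel branches over the same input, concatenated channel-wise.
    b1 = layers.Conv2D(f1, (1, 1), padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f3r, (1, 1), padding="same", activation="relu")(x)     # 3x3 "reduce"
    b3 = layers.Conv2D(f3, (3, 3), padding="same", activation="relu")(b3)
    b5 = layers.Conv2D(f5r, (1, 1), padding="same", activation="relu")(x)     # 5x5 "reduce"
    b5 = layers.Conv2D(f5, (5, 5), padding="same", activation="relu")(b5)
    bp = layers.MaxPooling2D((3, 3), strides=(1, 1), padding="same")(x)       # 3x3 max pooling
    bp = layers.Conv2D(fpool, (1, 1), padding="same", activation="relu")(bp)  # pool projection
    return layers.Concatenate(axis=-1)([b1, b3, b5, bp])

# Example, inception (3a): x = inception_module(x, 64, 96, 128, 16, 32, 32)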
GoogLeNet requires a particular configuration to produce accurate results; the configuration details are given in Table 2. The use of nine inception modules makes it distinct from other architectures. Figure 6 delineates a straightforward inception module of the GoogLeNet architecture, and Figure 7 portrays the complete GoogLeNet.
Factor(s) | Values |
Number of convolutional layers | 32-filter convolutional layer with a stride of at least 1 and a filter size of (7, 7) |
Number of cross-channel normalization layers | 1, 2, 3 |
Number of drop out layers | 1, 2, 3 |
Maximum epochs | 20, 40, 50, 60, 80, 100 |
Number of fully connected layers | 1, 2, 3 |
Number of convolutional kernels | 8, 16, 32, 64, 128, 256 |
Kernel size | 1, 3, 5, 7, 9, 10, 11 |
Pooling layer | Max pooling, average pooling
Pooling layer window size | 2, 3, 4, 5
Optimizers | GoogLeNet, Adam, VGG |
Mini-batch size | 1, 4, 8, 16, 32, 64, 128 |
Dropout rate | 0.1, 0.15, 0.2, 0.25, 0.5 |
Initial learning rate | 0.01, 0.001, 0.0001 |
Learning rate drop factor | 0.1, 0.2, 0.3 |
Type | Patch Size/Stride | Depth | #1×1 | #3×3 Reduce | #3×3 | #5×5 Reduce | #5×5 | Pool Proj | Params | Output Size | Ops
Convolution | 7×7/2 | 1 | | | | | | | 2.7K | 112×112×64 | 34M
Max pool | 3×3/2 | 0 | | | | | | | | 56×56×64 |
Convolution | 3×3/1 | 2 | | 64 | 192 | | | | 112K | 56×56×192 | 360M
Max pool | 3×3/2 | 0 | | | | | | | | 28×28×192 |
Inception (3a) | | 2 | 64 | 96 | 128 | 16 | 32 | 32 | 159K | 28×28×256 | 128M
Inception (3b) | | 2 | 128 | 128 | 192 | 32 | 96 | 64 | 380K | 28×28×480 | 304M
Max pool | 3×3/2 | 0 | | | | | | | | 14×14×480 |
Inception (4a) | | 2 | 192 | 96 | 208 | 16 | 48 | 64 | 364K | 14×14×512 | 73M
Inception (4b) | | 2 | 160 | 112 | 224 | 24 | 64 | 64 | 437K | 14×14×512 | 88M
Inception (4c) | | 2 | 128 | 128 | 256 | 24 | 64 | 64 | 463K | 14×14×512 | 100M
Inception (4d) | | 2 | 112 | 144 | 288 | 32 | 64 | 64 | 580K | 14×14×528 | 119M
Inception (4e) | | 2 | 256 | 160 | 320 | 32 | 128 | 128 | 840K | 14×14×832 | 170M
Max pool | 3×3/2 | 0 | | | | | | | | 7×7×832 |
Inception (5a) | | 2 | 256 | 160 | 320 | 32 | 128 | 128 | 1072K | 7×7×832 | 54M
Inception (5b) | | 2 | 384 | 192 | 384 | 48 | 128 | 128 | 1388K | 7×7×1024 | 71M
Average pool | 7×7/1 | 0 | | | | | | | | 1×1×1024 |
Dropout (40%) | | 0 | | | | | | | | 1×1×1024 |
Linear | | 1 | | | | | | | 1000K | 1×1×1000 | 1M
Softmax | | 0 | | | | | | | | 1×1×1000 |
The algorithm below takes the preprocessed image set as input, applies all the inception modules to it, and returns the output scores, a vector of the size we specify (1000 by default); the class with the highest score is taken as the prediction, after which we can proceed with further processing of the image. The GoogLeNet class comprises four convolutional layers, four max-pooling layers, nine inception modules, three softmax layers for the main and auxiliary classifiers, three average-pooling layers, and five fully connected layers [31].
Algorithm 2: GoogLeNet factory over the preprocessed image set (a runnable Python sketch following torchvision's implementation; the weight loading is elided).

import warnings
import torch
from torchvision.models import GoogLeNet  # class containing all the inception layers

def googlenet(pretrained: bool = False, **kwargs):
    # Input: a preprocessed image batch.
    # Output: an array of class scores (1000 per image by default, as specified above).
    if pretrained:
        if "transform_input" not in kwargs:
            kwargs["transform_input"] = True
        if "aux_logits" not in kwargs:
            kwargs["aux_logits"] = False
        if kwargs["aux_logits"]:
            warnings.warn("auxiliary heads are not pretrained")
        original_aux_logits = kwargs["aux_logits"]
        kwargs["aux_logits"] = True  # build with auxiliary heads, then strip them
        model = GoogLeNet(**kwargs)
        # (pretrained weights would be loaded into `model` here)
        if not original_aux_logits:
            model.aux_logits = False
            model.aux1 = None
            model.aux2 = None
        return model
    return GoogLeNet(**kwargs)

# During training with auxiliary classifiers, GoogLeNet.forward returns
# GoogLeNetOutputs(x, aux2, aux1); otherwise it returns the main logits x alone.
model = googlenet()
model.eval()  # inference mode: forward returns the main logits only
scores = model(torch.randn(1, 3, 224, 224))  # one 224 x 224 RGB image -> shape (1, 1000)
In this section, we analyze the results and compare them across different parameters.
We use six evaluation metrics to observe performance: time cost, area under the curve (AUC), sensitivity, specificity, accuracy, and loss. The five metrics other than the time cost are calculated using the formulas below:
Sensitivity: Sensitivity relates to the correctness on positive examples: it indicates how many positive cases were labeled correctly and can be determined with the equation below.
Sensitivity=TP/(TP+FN) |
where TP is the number of true positives, the positive cases that are correctly identified, and FN is the number of false negatives, the positive cases that are mistakenly classified as negative.
Specificity: Specificity corresponds to the conditional probability of true negatives and may be calculated using the formula below.
Specificity=TN/(TN+FP) |
where FP is the number of false positives, the negative instances that are mistakenly categorized as positive, and TN is the number of true negatives, the negative cases that are correctly classified as negative.
Accuracy: The number of correct assessments divided by the total number of assessments:
Accuracy=(TP+TN)/(TP+TN+FP+FN) |
Loss = −∑ y_i log(ŷ_i)

(y_i is the actual value, and ŷ_i is the predicted value.)
AUC: AUC corresponds to the area under the curve, computed here as the average of sensitivity and specificity:
AUC = 0.5 (Sensitivity + Specificity)
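Given arrays of true and predicted binary labels (the names y_true and y_pred are our assumptions), these formula-based metrics can be computed from the confusion matrix as in the sketch below; note that the AUC here follows the paper's formula, i.e., the average of sensitivity and specificity, rather than the integral under a ROC curve:

from sklearn.metrics import confusion_matrix

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()  # binary case: 2 x 2 matrix
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
auc = 0.5 * (sensitivity + specificity)  # the formula used above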
1) The performance of AlexNet (in %) has been measured using the metrics mentioned above and is arranged in Table 3.
Performance measures | AlexNet
Accuracy | 98.95
Sensitivity | 98.4
Specificity | 99.58
Area under the curve | 99.05
Time | 95.01
2) Performance of GoogLeNet in (%):
The performance of GoogLeNet has been measured using the measures mentioned earlier and arranged in Table 4.
Performance measures | GoogLeNet
Accuracy | 99.45
Sensitivity | 99.75
Specificity | 99.91
Area under the curve | 99.8
Time | 135.40
From Tables 3 and 4 above, we can see the difference between the performance metrics of AlexNet and GoogLeNet: the accuracy of AlexNet is 98.95, whereas that of GoogLeNet is 99.45. Furthermore, the sensitivity of AlexNet is 98.4 and that of GoogLeNet is 99.75, so from these values we can infer that GoogLeNet is more accurate.
Model | FLOPs | # Params | fps | Latency | Accuracy
CNN | 83.811 | 32.464 | 1406.23 | 0.0465 | 76.37 |
AlexNet | 167.685 | 32.464 | 860.84 | 0.0764 | 77.85 |
GoogleNet | 85.685 | 60.369 | 1114.82 | 0.0564 | 76.14 |
GoogLeNet consumes significantly fewer parameters: AlexNet has a depth of 8, takes 60 million parameters, and uses an image input size of 227 × 227, while GoogLeNet has a depth of 22, takes 7 million parameters, and uses an input size of 224 × 224.
Our model has been tested with three different learning rates, and when compared with the performance of AlexNet, GoogLeNet proves more effective; the results are shown in Tables 6 and 7.
PERFORMANCE MEASURES | LR = 0.01 | LR = 0.001 | LR = 0.0001
TRAINING SET | TESTING SET | TRAINING SET | TESTING SET | TRAINING SET | TESTING SET | |
Accuracy | 74.01% | 77.10% | 100% | 98.10% | 100% | 100% |
Loss | 2.81 | 2.25 | 0.021 | 0.001 | 1.0095 × 10−6 | 9.6568 × 10−8
Specificity | 0 | 0 | 1 | 1 | 1 | 1 |
Sensitivity | 0.71 | 0.75 | 1 | 1 | 1 | 1 |
AUC | 0.50 | 0.50 | 1 | 0.9756 | 1 | 1 |
PERFORMANCE MEASURES | LR = 0.01 | LR = 0.001 | LR = 0.0001
TRAINING SET | TESTING SET | TRAINING SET | TESTING SET | TRAINING SET | TESTING SET | |
Accuracy | 74.01% | 77.10% | 100% | 100% | 100% | 100% |
Loss | 1.09 | 3.65 | 3.67 × 10−8 | 9.1 × 10−8 | 3.9714 × 10−8 | 3.1243 × 10−8
Specificity | 0 | 0 | 1 | 1 | 1 | 1 |
Sensitivity | 0.71 | 0.75 | 1 | 1 | 1 | 1 |
AUC | 0.5 | 0.5 | 1 | 1 | 1 | 1 |
Accuracy is a metric that measures the fraction of correct predictions made by the model over the total number of predictions. It is a value between 0 and 1, where 1 indicates that the model is making perfect predictions, and 0 indicates that the model is making no correct predictions. Accuracy is an excellent metric to use when the classes are balanced and have roughly equal numbers of samples.
The above-given Figures 8–10 are the graphs depicting the accuracy comparison, specificity comparison, and sensitivity comparison, respectively.
Both loss and accuracy must be relative to the problem being solved and the data being used. A model with a low loss and high accuracy on the training data does not necessarily mean that the model will perform well on new, unseen data. This is why it is common to split the data into training and validation sets, and to monitor both loss and accuracy on the validation set as the model is trained.
Table 8 shows that the proposed work outperformed the existing methods and achieves higher accuracy. The runtime cost of the learning process for AlexNet is 19:01 minutes at learning rate (LR) 0.0001, 15 minutes at LR 0.001, and 8:95 minutes at LR 0.01, while for GoogLeNet the runtime cost is 34:45 minutes at LR 0.0001, 35:25 minutes at LR 0.001, and 33:21 minutes at LR 0.01. Hence, we can conclude that our model is well suited to this disease.
Reference | Dataset | Approach | Accuracy (%) |
W. Ayadi et al. (2021) [5] | Publicly available database | CNN | 92.98
R. L. Kumar et al. (2021) [15] | Figshare | GoogLeNet, AlexNet, and VGGNet | 98.69
F. Abdolkarimzadeh et al. (2021) [1] | Harvard Medical School | DL, K-NN, LDA | 95.45–96.97
S. N. Shivhare et al. (2021) [27] | Figshare, Brainweb, Radiopaedia | LeNet | 88
V. V. S. Sasank et al. (2021) [26] | Kaggle | CNN | 96
O. Polat and C. Gungen (2021) [20] | Figshare | Fuzzy C-Means + CNN | 97.5
X. L. Lei (2021) [18] | Nanfang Hospital and General Hospital, Tianjin Medical University | CapsNet | 86.5
V. V. S. Sasank and S. Venkateswarlu (2021) [25] | BraTS 2015 | CNN | 92.13
P. Wang and A. C. S. Chung (2021) [29] | Kaggle | CNN | 92.00
M. M. Badža and M. Č. Barjaktarović (2022) [38] | Radhamadhab Dalai, IEEE, Figshare | Convolutional neural network (CNN) | 97.28
Y. Guan et al. (2021) [36] | Radhamadhab Dalai, IEEE, Figshare | Capsule networks | 86.56
M. Aamir et al. (2022) [37] | Radhamadhab Dalai, IEEE, Figshare | CapsuleNet + SVM | 92.60
M. M. Badža and M. Č. Barjaktarović (2020) [38] | Radhamadhab Dalai, IEEE, Figshare | GLCM and wavelet packets | 93.30
N. Zheng et al. (2023) [39] | Radhamadhab Dalai, IEEE, Figshare | Arithmetic optimization algorithm | 96.8
Q. Zhou (2023) [40] | Radhamadhab Dalai, IEEE, Figshare | Lightweight convolutional neural network (CNN) with SCM-GL | 95.8
S. Deepak and P. M. Ameer (2023) [41] | Radhamadhab Dalai, IEEE, Figshare | KNN + SVM | 98.2
G. Xiao et al. (2023) [42] | Radhamadhab Dalai, IEEE, Figshare | Jigsaw puzzle | 97.4
Proposed Work | Radhamadhab Dalai, IEEE, Figshare | CNN, AlexNet & GoogLeNet | 99.45
Figure 11 shows the comparative analysis of loss during training and validation of the proposed model.
Figure 12 shows the comparative analysis of accuracy during training and validation of the proposed model.
A total of 70% of the dataset is used for training, 20% for validation, and 10% for testing. Python is used for developing the system. Figure 12 displays how the proposed approach improves validation accuracy for segmentation to 99.45% and reduces validation loss to 0.01%, suggesting that the proposed model outperforms the state-of-the-art methods in classifying and segmenting images of brain tumors.
This study presented a new technique for detecting tumors at an early stage. We used an image edge-detection strategy to locate the high-value region in MRI images and then used data augmentation to expand our training data. As a second step, we propose a simple CNN as a practical tumor classification method. Our experimental results show that high precision can be achieved even on a minimal, low-volume dataset. Our accuracy is very high compared with the VGG-16, ResNet-50, and Inception-v3 models, which require a larger-than-usual amount of data to train for up-to-date and precise results; because of this smaller initial footprint, our model also has less stringent computational requirements. Our proposed framework could improve the prognosis of patients with brain tumors. Extensive hyperparameter tuning and an improved pretraining procedure are often considered potential methods for further lifting model proficiency. Our proposed framework addresses binary classification problems; in the future, the proposed strategies can be extended to multiclass classification problems, such as identifying tumor types like glioma, meningioma, and pituitary tumors, or adapted to distinguish other brain anomalies. Cancers in other organs are also becoming increasingly common worldwide, and our proposed framework can aid in the early detection of these dangerous diseases in other clinical areas related to medical imaging. Given the current availability of massive data, we will also apply this methodology to other scientific domains, or use alternative transfer learning strategies with the same proposed approach.
The authors declare there is no conflict of interest.
[1] |
Lisette van Beek, Maarten Hajer, et al. (2020) Anticipating futures through models: the rise of integrated assessment modelling in the climate science-policy interface since 1970. Global Environ Chang 65:102191. https://doi.org/10.1016/j.gloenvcha.2020.102191 doi: 10.1016/j.gloenvcha.2020.102191
![]() |
[2] |
Jean-Francois Mercure, Florian Knobloch, Hector Pollitt, et al. (2019) Modelling innovation and the macroeconomics of low-carbon transitions: theory, perspectives and practical use. Clim Policy 19: 1019–1037. https://doi.org/10.1080/14693062.2019.1617665 doi: 10.1080/14693062.2019.1617665
![]() |
[3] |
Ajay Gambhir, Isabela Butnar, Pei-Hao Li, et al. (2019) A review of criticisms of integrated assessment models and proposed approaches to address these, through the lens of beccs. Energies 12. https://doi.org/10.3390/en12091747 doi: 10.3390/en12091747
![]() |
[4] |
Sarah Hafner, Annela Anger-Kraavi, Irene Monasterolo, et al. (2020) Emergence of New Economics Energy Transition Models: A Review. Ecol Econ 177. https://doi.org/10.1016/j.ecolecon.2020.106779 doi: 10.1016/j.ecolecon.2020.106779
![]() |
[5] |
I Keppo, I Butnar, N Bauer, et al. (2021) Exploring the possibility space: taking stock of the diverse capabilities and gaps in integrated assessment models. Environ Res Lett 16: 053006. https://doi.org/10.1088/1748-9326/abe5d8 doi: 10.1088/1748-9326/abe5d8
![]() |
[6] |
Isak Stoddard, Kevin Anderson, Stuart Capstick, et al. (2021) Three decades of climate mitigation: Why haven't we bent the global emissions curve? Annu Rev Environ Resour 46: 653–689. https://doi.org/10.1146/annurev-environ-012220-011104 doi: 10.1146/annurev-environ-012220-011104
![]() |
[7] |
Michael Grubb, Claudia Wieners, Pu Yang (2021) Modeling myths: On dice and dynamic realism in integrated assessment models of climate change mitigation. WIREs Clim Chang 12: e698. https://doi.org/10.1002/wcc.698 doi: 10.1002/wcc.698
![]() |
[8] |
Roberto Pasqualino, Cristina Peñasco, Peter Barbrook-Johnson, et al. (2024) Modelling induced innovation for the low-carbon energy transition: a menu of options. Environ Res Lett 19: 073004. https://doi.org/10.1088/1748-9326/ad4c79 doi: 10.1088/1748-9326/ad4c79
![]() |
[9] |
Ploy Achakulwisut, Peter Erickson, Céline Guivarch, et al. Global fossil fuel reduction pathways under different climate mitigation strategies and ambitions. Nat Commun, 14, 09 2023. https://doi.org/10.1038/s41467-023-41105-z doi: 10.1038/s41467-023-41105-z
![]() |
[10] | Irene Monasterolo, María J. Nieto, Edo Schets (2023) The good, the bad and the hot house world: conceptual underpinnings of the NGFS scenarios and suggestions for improvement. Occasional Papers 2302, Banco de España, January 2023. https://doi.org/10.53479/29533 |
[11] | E. Byers, E. Brutschin, F. Sferra, et al. (2023) Scenarios processing, vetting and feasibility assessment for the european scientific advisory board on climate change. Iiasa report, IIASA, Laxenburg, Austria, June 2023. |
[12] | European Environment Agency (2023) Scientific advice for the determination of an EU-wide 2040 climate target and a greenhouse gas budget for 2030–2050. Publications Office of the European Union. |
[13] | Renato Rodrigues, Robert Pietzcker, Joanna Sitarz, et al. (2023) 2040 greenhouse gas reduction targets and energy transitions in line with the eu green deal, 07 2023. https://doi.org/10.21203/rs.3.rs-3192471/v1 |
[14] | Jiankun He, Zheng Li, Xiliang Zhang (2022) China's Long-Term Low-Carbon Development Strategies and Pathways: Comprehensive Report. China Environment Publishing Group Company, Limited. |
[15] | Oliver Richters, Elmar Kriegler, Alaa Al Khourdajie, et al. (2023) NGFS Climate Scenarios Technical Documentation V4.2, November 2023. |
[16] |
Charlie Wilson, Céline Guivarch, Elmar Kriegler, et al. (2021) Evaluating process-based integrated assessment models of climate change mitigation. Clim Change 166. https://doi.org/10.1007/s10584-021-03099-9 doi: 10.1007/s10584-021-03099-9
![]() |
[17] |
Glen Peters, Alaa Al Khourdajie, Ida Sognnaes, et al. (2323) Ar6 scenarios database: an assessment of current practices and future recommendations. npj Climate Action 2. https://doi.org/10.1038/s44168-023-00050-9 doi: 10.1038/s44168-023-00050-9
![]() |
[18] | V. Krey, P. Havlik, P. N. Kishimoto, et al. (2020) MESSAGEix-GLOBIOM Documentation – 2020 release. Technical report, International Institute for Applied Systems Analysis (IIASA), Laxenburg, Austria. |
[19] | Gunnar Luderer, Nico Bauer, Lavinia Baumstark, et al. (2023) Remind - regional model of investments and development - version 3.2.0. |
[20] |
J.-F. Mercure, H. Pollitt, U. Chewpreecha, et al. (2014) The dynamics of technology diffusion and the impacts of climate policy instruments in the decarbonisation of the global electricity sector. Energy Policy 73: 686–700. https://doi.org/10.1016/j.enpol.2014.06.029 doi: 10.1016/j.enpol.2014.06.029
![]() |
[21] | J. Hilaire, C. Bertram (2020) The remind-magpie model and scenarios for transition risk analysis. Technical report, Potsdam Institute for Climate Impact Research. |
[22] | Bas van Ruijven, Ji ho Min (2020) The messageix-globiom model and scenarios for transition risk analysis. 2020. |
[23] |
C. C. Gong, F. Ueckerdt, R. Pietzcker, et al. (2023) Bidirectional coupling of the long-term integrated assessment model regional model of investments and development (remind) v3.0.0 with the hourly power sector model dispatch and investment evaluation tool with endogenous renewables (dieter) v1.0.2. Geosci Model Dev 16: 4977–5033. https://doi.org/10.5194/gmd-16-4977-2023 doi: 10.5194/gmd-16-4977-2023
![]() |
[24] |
T. Brown, L. Reichenberg (2021) Decreasing market value of variable renewables can be avoided by policy action. Energy Econ 100: 105354. https://doi.org/10.1016/j.eneco.2021.105354 doi: 10.1016/j.eneco.2021.105354
![]() |
[25] | Chen Chris Gong, Falko Ueckerdt, Christoph Bertram, et al. (2024) Multi-level emission impacts of electrification and coal pathways in china's netzero transition. |
[26] | I. Weber, J. Thie, J. Jauregui, et al. (2024) Carbon prices and inflation in a world of shocks-systemically significant prices and industrial policy targeting in germany. Bertelsmann Stiftung, Sustainable Social Market Economies, Gütersloh. |
[27] | Falko Ueckerdt, Philipp Verpoort, Rahul Anantharaman, et al. (2022) On the cost competitiveness of blue and green hydrogen. https://doi.org/10.21203/rs.3.rs-1436022/v1 |
[28] |
Jonas Meckling, Bentley B. Allan (2020) The evolution of ideas in global climate policy. Nat Clima Chang 10: 434–438. https://doi.org/10.1038/s41558-020-0739-7 doi: 10.1038/s41558-020-0739-7
![]() |
[29] |
John E T Bistline (2021) The importance of temporal resolution in modeling deep decarbonization of the electric power sector. Environ Res Lett 16: 084005. https://doi.org/10.1088/1748-9326/ac10df doi: 10.1088/1748-9326/ac10df
![]() |
[30] |
Y. Alimou, N. Maïzi, J.-Y. Bourmaud, et al. (2020) Assessing the security of electricity supply through multi-scale modeling: The times-antares linking approach. Appl Energy 279: 115717. https://doi.org/10.1016/j.apenergy.2020.115717 doi: 10.1016/j.apenergy.2020.115717
![]() |
[31] | M. Brinkerink (2020) Assessing 1.5–2 c scenarios of integrated assessment models from a power system perspective – linkage with a detailed hourly global electricity model. Monograph, IIASA, Laxenburg, Austria. |
[32] |
P. Seljom, E. Rosenberg, L. E. Schäffer, et al. (2020) Bidirectional linkage between a long-term energy system and a short-term power market model. Energy, 198: 117311. https://doi.org/10.1016/j.energy.2020.117311 doi: 10.1016/j.energy.2020.117311
![]() |
[33] |
F. Guo, B. J. van Ruijven, B. Zakeri, et al. (2020) Implications of intercontinental renewable electricity trade for energy systems and emissions. Nature Energy 7 1144–1156. https://doi.org/10.1038/s41560-022-01136-0 doi: 10.1038/s41560-022-01136-0
![]() |
[34] |
A. Younis, R. Benders, J. Ramírez, et al. (2022) Scrutinizing the intermittency of renewable energy in a long-term planning model via combining direct integration and soft-linking methods for colombia's power system. Energies 15: 7604. https://doi.org/10.3390/en15207604 doi: 10.3390/en15207604
![]() |
[35] |
M. Brinkerink, B. Zakeri, D. Huppmann, et al. (2022) Assessing global climate change mitigation scenarios from a power system perspective using a novel multi-model framework. Environ Modell Softw 150: 105336. https://doi.org/10.1016/j.envsoft.2022.105336 doi: 10.1016/j.envsoft.2022.105336
![]() |
[36] |
M. Mowers, B. K. Mignone, D. C. Steinberg (2023) Quantifying value and representing competitiveness of electricity system technologies in economic models. Appl Energy 329: 120132. https://doi.org/10.1016/j.apenergy.2022.120132 doi: 10.1016/j.apenergy.2022.120132
![]() |
[37] | NewClimate Institute, Wageningen University and Research, and PBL Netherlands Environmental Assessment Agency (2323) Climate policy database. Technical report. |
[38] | Ernest Orlando, Jayant. Sathaye, Tengfang T. Xu, et al. (2011) Bottom-up representation of industrial energy efficiency technologies in integrated assessment models for the cement sector. Lawrence Berkeley National Laboratory. |
[39] | Yiyi Ju, Masahiro Sugiyama, Etsushi Kato, et al. (2022) Job creation in response to japan's energy transition towards deep mitigation: An extension of partial equilibrium integrated assessment models. Appl Energy 318: 119178. https://doi.org/10.1016/j.apenergy.2022.119178 |
[40] | Nur Firdaus, Yiyi Ju, Tao Cao (2023) Industry's role in japan's energy transition: soft-linking gcam and national io table with extended electricity supply sectors. Econ Syst Res 1–21. |
[41] | Simone Boldrini, George Krivorotov (2024) Long-run sectoral transition risk using a hybrid mrio/iam approach. https://doi.org/10.2139/ssrn.4811268 |
[42] | F. Sferra, B. van Ruijven, K. Riahi (2021) Downscaling iams results to the country level – a new algorithm. IIASA report, Laxenburg, Austria. |
[43] | Climate Analytics (2023) 1.5°C National Pathways Explorer. Technical report, Climate Analytics. |
[44] | Silvia Madeddu, Falko Ueckerdt, Michaja Pehl, et al. (2020) The co2 reduction potential for the european industry via direct electrification of heat supply (power-to-heat). Environ Res Lett 15. https://doi.org/10.1088/1748-9326/abbd02 |
[45] | Jay Fuhrman, Haewon McJeon, Scott C. Doney, et al. (2019) From zero to hero? Why integrated assessment modeling of negative emissions technologies is hard and how we can do better. Front Clim 1. https://doi.org/10.3389/fclim.2019.00011 |
[46] | Jay Fuhrman, Candelaria Bergero, Maridee Weber, et al. (2023) Diverse carbon dioxide removal approaches could reduce impacts on the energy–water–land system. Nat Clim Chang 13: 1–10. https://doi.org/10.1038/s41558-023-01604-9 |
[47] | Saheed A. Gbadegeshin, Anas Al Natsheh, Kawtar Ghafel, et al. (2022) Overcoming the valley of death: A new model for high technology startups. Sustain Futures 4: 100077. https://doi.org/10.1016/j.sftr.2022.100077 |
[48] | Enrica De Cian, Johannes Buhl, Samuel Carrara, et al. (2016) Knowledge creation between integrated assessment models and initiative-based learning – an interdisciplinary approach. Nota di Lavoro 66.2016, Milano. https://doi.org/10.2139/ssrn.2871828 |
[49] | Yueming Qiu, Laura D. Anadon (2012) The price of wind power in china during its expansion: Technology adoption, learning-by-doing, economies of scale, and manufacturing localization. Energy Econ 34: 772–785. https://doi.org/10.1016/j.eneco.2011.06.008 |
[50] | John Paul Helveston, Gang He, Michael R. Davidson (2022) Quantifying the cost savings of global solar photovoltaic supply chains. Nature 612: 83–87. https://doi.org/10.1038/s41586-022-05316-6 |
[51] | Efrem Castelnuovo, Marzio Galeotti, Gretel Gambarelli, et al. (2005) Learning-by-doing vs. learning by researching in a model of climate change policy analysis. Ecol Econ 54: 261–276 (special issue: Technological Change and the Environment). https://doi.org/10.1016/j.ecolecon.2004.12.036 |
[52] | Yixin Sun, Hongbo Duan (2024) Endogenous technological change in iams: Takeaways in the e3metl model. Energy Climate Management. |
[53] | Roland Sturm (1993) Nuclear power in Eastern Europe. Learning or forgetting curves? Energy Econ 15: 183–189. https://doi.org/10.1016/0140-9883(93)90004-B |
[54] | Mohamad Y. Jaber, Maurice Bonney (1997) A comparative study of learning curves with forgetting. Appl Math Model 21: 523–531. https://doi.org/10.1016/S0307-904X(97)00055-3 |
[55] | C. Lanier Benkard (2000) Learning and forgetting: The dynamics of aircraft production. Am Econ Rev 90: 1034–1054. https://doi.org/10.1257/aer.90.4.1034 |
[56] | José Ângelo Ferreira, Edson Luiz Valmorbida, Bruno Goulart Sato, et al. (2024) Forgetting curve models: A systematic review aimed at consolidating the main models and outlining possibilities for future research in production. Expert Syst 41: e13405. https://doi.org/10.1111/exsy.13405 |
[57] | Annika Stechemesser, Nicolas Koch, Ebba Mark, et al. (2024) Climate policies that achieved major emission reductions: Global evidence from two decades. Science 385: 884–892. https://doi.org/10.1126/science.adl6547 |
[58] | R. Pietzcker, J. Feuerhahn, L. Haywood, et al. (2021) Ariadne-hintergrund: Notwendige co2-preise zum erreichen des europäischen klimaziels 2030. Technical report, Ariadne Project. |
[59] | Yannis Dafermos, Maria Nikolaidi (2019) Fiscal policy and ecological sustainability: A post-keynesian perspective. FMM Working Paper 52, Düsseldorf. |
[60] | Michael Grubb, Alexandra Poncia, Paul Drummond, et al. (2023) Policy complementarity and the paradox of carbon pricing. Oxf Rev Econ Policy 39: 711–730. https://doi.org/10.1093/oxrep/grad045 |
[61] | Thibault Briera, Julien Lefevre (2024) Credible climate policy commitments are needed for keeping long-term climate goals within reach. |
[62] | Matthias Kalkuhl, Ottmar Edenhofer, Kai Lessmann (2013) Renewable energy subsidies: Second-best policy or fatal aberration for mitigation? Resour Energy Econ 35: 217–234. https://doi.org/10.1016/j.reseneeco.2013.01.002 |
[63] | J. Jewell, D. McCollum (2014) Report on improving the representation of existing energy policies (taxes and subsidies) in iams. Technical report, ADVANCE Deliverable No. 3.1. |
[64] | Christoph Bertram, Gunnar Luderer, Robert C. Pietzcker, et al. (2015) Complementing carbon prices with technology policies to keep climate targets within reach. Nat Clim Chang 5: 235–239. https://doi.org/10.1038/nclimate2514 |
[65] | Anselm Schultes, Marian Leimbach, Gunnar Luderer, et al. (2018) Optimal international technology cooperation for the low-carbon transformation. Clim Policy 18: 1165–1176. https://doi.org/10.1080/14693062.2017.1409190 |
[66] | Runsen Zhang (2020) Chapter 2 - the role of the transport sector in energy transition and climate change mitigation: insights from an integrated assessment model. In Junyi Zhang, editor, Transport Energy Res, 15–30. https://doi.org/10.1016/B978-0-12-815965-1.00002-8 |
[67] | J.-F. Mercure, A. Lam, S. Billington, et al. (2018) Integrated assessment modelling as a positive science: private passenger road transport policies to meet a climate target well below 2 ℃. Clim Change 151: 109–129. https://doi.org/10.1007/s10584-018-2262-7 |
[68] | K. Calvin, P. Patel, L. Clarke, et al. (2019) Gcam v5.1: representing the linkages between energy, water, land, climate, and economic systems. Geosci Model Dev 12: 677–698. https://doi.org/10.5194/gmd-12-677-2019 |
[69] | Aileen Lam (2019) Modelling the impact of policy incentives on co2 emissions from passenger light duty vehicles in five major economies with a dynamic model of technological change. |
[70] | Runsen Zhang, Shinichiro Fujimori (2020) The role of transport electrification in global climate change mitigation scenarios. Environ Res Lett 15: 034019. https://doi.org/10.1088/1748-9326/ab6658 |
[71] | Marianna Rottoli, Alois Dirnaichner, Page Kyle, et al. (2021) Coupling a detailed transport model to the integrated assessment model remind. Environ Model Assess 26. https://doi.org/10.1007/s10666-021-09760-y |
[72] | Aileen Lam, Jean-Francois Mercure (2021) Which policy mixes are best for decarbonising passenger cars? simulating interactions among taxes, subsidies and regulations for the united kingdom, the united states, japan, china, and india. Energy Res Soc Sci 75: 101951. https://doi.org/10.1016/j.erss.2021.101951 |
[73] | Leon Rostek, Meta Thurid Lotz, Sabine Wittig, et al. (2022) A dynamic material flow model for the european steel cycle. Working Papers "Sustainability and Innovation" S07/2022, Fraunhofer Institute for Systems and Innovation Research (ISI). |
[74] | Shaohui Zhang, Bo-Wen Yi, Ernst Worrell, et al. (2019) Integrated assessment of resource-energy-environment nexus in china's iron and steel industry. J Clean Prod 232: 235–249. https://doi.org/10.1016/j.jclepro.2019.05.392 |
[75] | Katerina Kermeli, Oreane Y. Edelenbosch, Wina Crijns-Graus, et al. (2022) Improving material projections in integrated assessment models: The use of a stock-based versus a flow-based approach for the iron and steel industry. Energy 239: 122434. https://doi.org/10.1016/j.energy.2021.122434 |
[76] | Katrin E. Daehn, André Cabrera Serrenho, Julian M. Allwood (2017) How will copper contamination constrain future global steel recycling? Environ Sci Technol 51: 6599–6606. PMID: 28445647. https://doi.org/10.1021/acs.est.7b00997 |
[77] | Katrin Daehn, André Serrenho, Julian Allwood (2019) Finding the most efficient way to remove residual copper from steel scrap. Metall Mater Trans B 50: 1–16. https://doi.org/10.1007/s11663-019-01537-9 |
[78] | Paul Stegmann, Vassilis Daioglou, Marc Londo, et al. (2022) Plastic futures and their co2 emissions. Nature 612: 272–276. https://doi.org/10.1038/s41586-022-05422-5 |
[79] | G. Ünlü, F. Maczek, J. Min, et al. (2024) Messageix-materials v1.0.0: Representation of material flows and stocks in an integrated assessment model. EGUsphere 2024: 1–41. https://doi.org/10.5194/egusphere-2023-3035-supplement |
[80] | J. Doyne Farmer, Cameron Hepburn, Penny Mealy, et al. (2015) A third wave in the economics of climate change. Environ Resour Econ 62: 329–357. https://doi.org/10.1007/s10640-015-9965-2 |
[81] | Cambridge Econometrics (2022) E3me model manual. Cambridge Econometrics: Cambridge, UK. |
[82] | Frank W. Geels, Frans Berkhout, Detlef van Vuuren (2016) Bridging analytical approaches for low-carbon transitions. Nat Clim Chang 6: 576–583. https://doi.org/10.1038/nclimate2980 |
[83] | Evelina Trutnevyte, Léon F. Hirt, Nico Bauer, et al. (2019) Societal transformations in models for energy and climate policy: The ambitious next step. One Earth 1: 423–433. https://doi.org/10.1016/j.oneear.2019.12.002 |
[84] | Ingrid Schulte, Ping Yowargana, Jonas Ø. Nielsen, et al. (2024) Towards integration? Considering social aspects with large-scale computational models for nature-based solutions. Glob Sustain 7: e4. https://doi.org/10.1017/sus.2023.26 |
[85] | Auke Hoekstra, Maarten Steinbuch, Geert Verbong (2017) Creating agent-based energy transition management models that can uncover profitable pathways to climate change mitigation. Complexity 2017: 1967645. https://doi.org/10.1155/2017/1967645 |
[86] | D. L. McCollum, A. Gambhir, J. Rogelj, et al. (2020) Energy modellers should explore extremes more systematically in scenarios. Nat Energy 5. https://doi.org/10.1038/s41560-020-0555-3 |
[87] | Richard Hanna, Robert Gross (2021) How do energy systems model and scenario studies explicitly represent socio-economic, political and technological disruption and discontinuity? Implications for policy and practitioners. Energy Policy 149: 111984. https://doi.org/10.1016/j.enpol.2020.111984 |
[88] | Ajay Gambhir, Gaurav Ganguly, Shivika Mittal (2022) Climate change mitigation scenario databases should incorporate more non-iam pathways. Joule 6: 2663–2667. https://doi.org/10.1016/j.joule.2022.11.007 |
[89] | Vadim Vinichenko, Marta Vetier, Jessica Jewell, et al. (2023) Phasing out coal for 2 °C target requires worldwide replication of most ambitious national plans despite security and fairness concerns. Environ Res Lett 18: 014031. https://doi.org/10.1088/1748-9326/acadf6 |
[90] | Greg Muttitt, James Price, Steve Pye, et al. (2023) Socio-political feasibility of coal power phase-out and its role in mitigation pathways. Nat Clim Chang 13: 1–8. https://doi.org/10.1038/s41558-022-01576-2 |
[91] | Stephen L. Bi, Nico Bauer, Jessica Jewell (2023) Coal-exit alliance must confront freeriding sectors to propel Paris-aligned momentum. Nat Clim Chang 13: 130–139. https://doi.org/10.1038/s41558-022-01570-8 |
[92] | Daron Acemoglu (2002) Directed technical change. Rev Econ Stud 69: 781–809. https://doi.org/10.1111/1467-937X.00226 |
[93] | André Grimaud, Luc Rouge (2008) Environment, directed technical change and economic policy. Environ Resour Econ 41: 439–463. https://doi.org/10.1007/s10640-008-9201-4 |
[94] | Johannes Behrens, Elisabeth Zeyen, Maximilian Hoffmann, et al. (2024) Reviewing the complexity of endogenous technological learning for energy system modeling. Adv Appl Energy 16: 100192. https://doi.org/10.1016/j.adapen.2024.100192 |
[95] | Raphael Calel, Antoine Dechezlepretre (2016) Environmental policy and directed technological change: evidence from the european carbon market. Rev Econ Stat 98: 173–191. |
[96] | Philippe Aghion, Antoine Dechezlepretre, David Hemous, et al. (2016) Carbon taxes, path dependency, and directed technical change: Evidence from the auto industry. J Polit Econ 124: 1–51. https://doi.org/10.1086/684581 |
[97] | Daron Acemoglu, Ufuk Akcigit, Douglas Hanley, et al. (2023) Transition to clean technology. J Polit Econ 131: 143–199. |
[98] | International Energy Agency (2023) World energy investment 2023. Report, IEA, Paris, May. |
[99] | International Energy Agency (2023) Energy technology perspectives 2023. Report, IEA, Paris, September. |
[100] | Sabrina T. Howell (2017) Financing innovation: Evidence from r&d grants. Am Econ Rev 107: 1136–1164. https://doi.org/10.1257/aer.20150808 |
[101] | Antoine Dechezleprêtre, Ralf Martin, Myra Mohnen (2017) Knowledge spillovers from clean and dirty technologies: A patent citation analysis. Grantham Research Institute on Climate Change and the Environment Working Paper 135. |
[102] | Daron Acemoglu, Philippe Aghion, Lint Barrage, et al. (2021) Climate change, directed innovation and energy transition: The long-run consequences of the shale gas revolution. Mimeo. |
[103] | Valentina Bosetti, Carlo Carraro, Marzio Galeotti, et al. (2006) A world induced technical change hybrid model. Energy J 27: 13–37. https://doi.org/10.5547/ISSN0195-6574-EJ-VolSI2006-NoSI2-2 |
[104] | Valentina Bosetti, Carlo Carraro, Emanuele Massetti, et al. (2009) Optimal energy investment and r&d strategies to stabilize atmospheric greenhouse gas concentrations. Resour Energy Econ 31: 123–137. https://doi.org/10.1016/j.reseneeco.2009.01.001 |
[105] | Feng Song, Zichao Yu, Weiting Zhuang, et al. (2021) The institutional logic of wind energy integration: What can china learn from the united states to reduce wind curtailment? Renew Sust Energ Rev 137: 110440. https://doi.org/10.1016/j.rser.2020.110440 |
[106] | Qi Zhang, Hailong Li, Lijing Zhu, et al. (2018) Factors influencing the economics of public charging infrastructures for ev – a review. Renew Sust Energ Rev 94: 500–509. https://doi.org/10.1016/j.rser.2018.06.022 |
[107] | Charles M. Macal, Michael J. North (2010) Tutorial on agent-based modelling and simulation. J Simul 4: 151–162. |
[108] | Joshua M. Epstein, Robert Axtell (1996) Growing artificial societies: Social science from the bottom up. Brookings Institution Press; The MIT Press. https://doi.org/10.7551/mitpress/3374.001.0001 |
[109] | Eric Bonabeau (2002) Agent-based modeling: Methods and techniques for simulating human systems. Proc Natl Acad Sci 99: 7280–7287. https://doi.org/10.1073/pnas.082080899 |
[110] | Nigel Gilbert, Klaus Troitzsch (2005) Simulation for the Social Scientist. Open University Press. |
[111] | Flaminio Squazzoni, Wander Jager, Bruce Edmonds (2014) Social simulation in the social sciences: A brief overview. Soc Sci Comput Rev 32. https://doi.org/10.1177/0894439313512975 |
[112] | Steven F. Railsback, Volker Grimm (2012) Agent-Based and Individual-Based Modeling: A Practical Introduction. Princeton University Press. |
[113] | Christina Nägeli, Clara Camarasa, Yorick Ostermeyer (2021) An agent-based modeling framework for energy transitions in building stocks. Energy Build 250: 111273. |
[114] | Jiankun Li, Yufeng Gao, Jiahai Yuan (2020) Agent-based modeling of china's coal phase-out. Energy Policy 140: 111402. |
[115] | Yan Zhang, Hua Liao (2021) An agent-based modeling approach for understanding the policy impact on the diffusion of renewable energy technologies. Energy Policy 148: 111983. |
[116] | Yong Chen, Dimitrios Zafirakis (2021) Agent-based modelling of energy systems integration. Appl Energy 290: 116723. |
[117] | Tatiana Filatova (2015) Empirical agent-based land market: Integrating adaptive economic behavior in urban land-use models. Comput Environ Urban Syst 54: 397–413. https://doi.org/10.1016/j.compenvurbsys.2014.06.007 |
[118] | H. Zhang, Y. Vorobeychik (2019) Empirically grounded agent-based models of innovation diffusion: a critical review. Artif Intell Rev 52: 707–741. https://doi.org/10.1007/s10462-017-9577-z |
[119] | F. Lamperti, V. Bosetti, A. Roventini, et al. (2019) The role of financial market regulation in the transition to a low-carbon economy. Econ Energy Environ Policy 61–82. |
[120] | F. Lamperti, G. Dosi, M. Napoletano, et al. (2020) Climate change and the green transition in a keynesian, schumpeterian model. J Econ Dyn Control 114: 103890. |
[121] | K. Safarzynska, J. C. J. M. van den Bergh (2022) Abm-iam: Optimal climate policy under bounded rationality and multiple inequalities. Environ Res Lett 17: 094022. https://doi.org/10.1088/1748-9326/ac8b25 |
[122] | Michał Czupryna, Samuel Fankhauser, Pablo Cristóbal Salas (2020) Agent-based integrated assessment modeling: A tool for exploring energy transitions. Energy Policy 137: 111090. https://doi.org/10.1016/j.enpol.2019.111090 |
[123] | Michał Czupryna, Samuel Fankhauser, Pablo Cristóbal Salas (2021) An agent-based integrated assessment model for studying interactions between climate and economic systems. Energy Policy 156: 112417. |
[124] | L. Gerdes, B. Rengs, M. Scholz-Wäckerle (2022) Labor and environment in global value chains: An evolutionary policy study with a three-sector and two-region agent-based macroeconomic model. J Evol Econ 32: 123–173. https://doi.org/10.1007/s00191-021-00750-7 |
[125] | Christoph Schimeczek, Michael Sonnenschein (2020) AMIRIS – a simulation model for analyzing the market integration of renewable energies by simulating the electricity spot market. In 2020 17th International Conference on the European Energy Market (EEM), pages 1–6. IEEE. |
[126] | Sara Giarola, Shivika Mittal, Adam Hawkes, Paul Deane, et al. (2021) The MUSE integrated assessment model for global energy systems: An application of high-performance computing. Environ Modell Softw 137: 104929. |
[127] | Leila Niamir, Gregor Kiesewetter, Fabian Wagner, et al. (2020) Assessing the macroeconomic impacts of individual behavioral changes on carbon emissions. Clim Change 158: 141–160. https://doi.org/10.1007/s10584-019-02566-8 |
[128] | Karolina Safarzyńska (2021) Optimal climate policies under bounded rationality: A behavioral agent-based integrated assessment model. Environ Innov Soc Trans 41: 73–85. |
[129] | Jinkun Dai, Jiajia Gao, Shuxian Zhou, et al. (2021) An agent-based model for simulating land use change in agricultural regions. Ecol Model 440: 109376. |
[130] | Mohamed Shaaban, Zeina Nakat, Dima Ouahrani (2021) Combining qualitative scenarios with agent-based modeling for understanding the future of agri-food systems. Futures 131: 102771. |
[131] | Alessandro Di Noia, Ferdinando Villa (2021) Agent-based modeling for climate change adaptation planning in coastal zones: A systematic review. Ocean Coastal Manage 205: 105570. https://doi.org/10.2139/ssrn.4180554 |
[132] | Karl Naumann-Woleske (2023) Agent-based integrated assessment models: Alternative foundations to the environment-energy-economics nexus. Preprint, https://hal.science/hal-04238110v1. https://doi.org/10.2139/ssrn.4399363 |
[133] | Ilona M. Otto, Marc Wiedermann, Roger Cremades, et al. (2017) Modeling the dynamics of climate change adaptation with large-scale agent-based models. Earth Syst Dynam 8: 177–195. |
[134] | Stefan Pauliuk, Niken Mardiana Dhaniati, Daniel B. Müller (2017) Recycling and climate change mitigation: Potentials and limitations of material efficiency strategies in the global steel cycle. J Ind Ecol 21: 327–340. |
[135] | Ugo Bardi, Anabela Carvalho Pereira (2021) The empty sea: The future of the blue economy. Springer Nature. |
[136] | Helmut Haberl, Dominik Wiedenhofer, Dénes Virág, et al. (2020) A systematic review of the evidence on decoupling of gdp, resource use and ghg emissions, part ii: synthesizing the insights. Environ Res Lett 15: 065003. https://doi.org/10.1088/1748-9326/ab842a |
[137] | Steve Pye, Dan Welsby, Will McDowall, et al. (2022) Regional uptake of direct reduction iron production using hydrogen under climate policy. Energy Clim Change 3: 100087. https://doi.org/10.1016/j.egycc.2022.100087 |
[138] | Marian Leimbach, Anselm Schultes, Lavinia Baumstark, et al. (2017) Solution algorithms for regional interactions in large-scale integrated assessment models of climate change. Ann Oper Res 255: 29–45. https://doi.org/10.1007/s10479-016-2340-z |
[139] | H. von Stackelberg (1934) Marktform und Gleichgewicht. Julius Springer, Vienna/Berlin. English translation: The Theory of the Market Economy (1952), William Hodge, London. |
[140] | Stephen W. Salant (1976) Exhaustible resources and industrial structure: A nash-cournot approach to the world oil market. J Polit Econ 84: 1079–1093. https://doi.org/10.1086/260497 |
[141] | Tracy R. Lewis, Richard Schmalensee (1980) Cartel and oligopoly pricing of nonreplenishable natural resources. In: Dynamic Optimization and Mathematical Economics, 133–156. |
[142] | Tracy R. Lewis, Richard Schmalensee (1980) On oligopolistic markets for nonrenewable natural resources. Q J Econ 95: 475–491. https://doi.org/10.2307/1885089 |
[143] | Roger L. Tobin (1992) Uniqueness results and algorithm for stackelberg-cournot-nash equilibria. Ann Oper Res 34: 21–36. |
[144] | E. Saltari, W. Semmler, G. Di Bartolomeo (2022) A nash equilibrium for differential games with moving-horizon strategies. Comput Econ 60: 1041–1054. https://doi.org/10.1007/s10614-021-10177-8 |
[145] | Bruno Nkuiya (2015) Transboundary pollution game with potential shift in damages. J Environ Econ Manage 72: 1–14. https://doi.org/10.1016/j.jeem.2015.04.001 |
[146] | Michael Finus (2008) Game theoretic research on the design of international environmental agreements: insights, critical remarks, and future challenges. Int Rev Environ Resour Econ 2: 29–67. https://doi.org/10.1561/101.00000011 |
[147] | Giovanni Di Bartolomeo, Behnaz Minooei Fard, Willi Semmler (2023) Greenhouse gases mitigation: global externalities and short-termism. Environ Dev Econ 28: 230–241. https://doi.org/10.1017/S1355770X22000249 |
[148] | Nicola Acocella, Giovanni Di Bartolomeo (2019) Natural resources and environment preservation: Strategic substitutability vs. complementarity in global and local public good provision. https://doi.org/10.1561/101.00000109 |
[149] | Chuyu Sun, Jing Wei, Xiaoli Zhao, et al. (2022) Impact of carbon tax and carbon emission trading on wind power in china: based on the evolutionary game theory. Front Energy Res 9: 811234. https://doi.org/10.3389/fenrg.2021.811234 |
[150] | Kai Kang, Bing Qing Tan (2023) Carbon emission reduction investment in sustainable supply chains under cap-and-trade regulation: An evolutionary game-theoretical perspective. Expert Syst Appl 227: 120335. https://doi.org/10.1016/j.eswa.2023.120335 |
[151] | Ming Zhong, Jingjing Yu, Jun Xiao, et al. (2024) Dynamic stepwise carbon trading game scheduling with accounting and demand-side response. Appl Math Nonlinear Sci 9. https://doi.org/10.2478/amns-2024-0410 |
[152] | Rui Zhao, Gareth Neighbour, Jiaojie Han, et al. (2012) Using game theory to describe strategy selection for environmental risk and carbon emissions reduction in the green supply chain. J Loss Prev Process Ind 25: 927–936. https://doi.org/10.1016/j.jlp.2012.05.004 |
[153] | Kourosh Halat, Ashkan Hafezalkotob (2019) Modeling carbon regulation policies in inventory decisions of a multi-stage green supply chain: A game theory approach. Comput Ind Eng 128: 807–830. https://doi.org/10.1016/j.cie.2019.01.009 |
[154] | Reyer Gerlagh, Eline van der Heijden (2024) Going green: Framing effects in a dynamic coordination game. J Behav Exp Econ 108: 102148. https://doi.org/10.1016/j.socec.2023.102148 |
[155] | Michael Finus, Francesco Furini, Anna Viktoria Rohrer (2024) The stackelberg vs. nash-cournot folk-theorem in international environmental agreements. Econ Lett 234: 111481. https://doi.org/10.1016/j.econlet.2023.111481 |
[156] | Rui Bai, Boqiang Lin (2024) Green finance and green innovation: Theoretical analysis based on game theory and empirical evidence from china. Int Rev Econ Financ 89: 760–774. https://doi.org/10.1016/j.iref.2023.07.046 |
[157] | Yuxuan Cao, Wanyu Ren, Li Yue (2024) Environmental regulation and carbon emissions: New mechanisms in game theory. Cities 149: 104945. https://doi.org/10.1016/j.cities.2024.104945 |
[158] | Rafaela Vital Caetano, António Cardoso Marques (2023) Could energy transition be a game changer for the transfer of polluting industries from developed to developing countries? An application of game theory. Struct Change Econ Dyn 65: 351–363. https://doi.org/10.1016/j.strueco.2023.03.007 |
[159] | Songying Fang, Amy Myers Jaffe, Ted Loch-Temzelides, et al. (2024) Electricity grids and geopolitics: A game-theoretic analysis of the synchronization of the baltic states' electricity networks with continental europe. Energy Policy 188: 114068. https://doi.org/10.1016/j.enpol.2024.114068 |
[160] | Steven A. Gabriel, Florian U. Leuthold (2010) Solving discretely-constrained mpec problems with applications in electric power markets. Energy Econ 32: 3–14. https://doi.org/10.1016/j.eneco.2009.03.008 |
[161] | Benteng Zou, Stéphane Poncin, Luisito Bertinelli (2022) The us-china supply competition for rare earth elements: a dynamic game view. Environ Model Assess 27: 883–900. https://doi.org/10.1007/s10666-022-09819-4 |
[162] | Maciej Filip Bukowski (2023) Pax climatica: the nash equilibrium and the geopolitics of climate change. J Environ Manage 348: 119217. https://doi.org/10.1016/j.jenvman.2023.119217 |
[163] | W. Semmler, G. Di Bartolomeo, B. M. Fard, et al. (2022) Limit pricing and entry game of renewable energy firms into the energy sector. Struct Change Econ Dyn 61: 179–190. https://doi.org/10.1016/j.strueco.2022.01.008 |
[164] | Hassan Benchekroun, Gerard van der Meijden, Cees Withagen (2023) Economically exhaustible resources in an oligopoly-fringe model with renewables. J Environ Econ Manage 121: 102853. https://doi.org/10.1016/j.jeem.2023.102853 |
[165] | F. Groot, C. Withagen, A. de Zeeuw (2003) Strong time-consistency in the cartel-versus-fringe model. J Econ Dyn Control 28: 287–306. https://doi.org/10.1016/S0165-1889(02)00154-9 |
[166] | B. Minooei Fard, W. Semmler, G. Di Bartolomeo (2023) Rare earth elements: a game between china and the rest of the world. SSRN Electron J. https://doi.org/10.2139/ssrn.4365405 |
[167] | Jiuh-Biing Sheu, Yenming J. Chen (2012) Impact of government financial intervention on competition among green supply chains. Int J Prod Econ 138: 201–213. https://doi.org/10.1016/j.ijpe.2012.03.024 |
[168] | Seyed Reza Madani, Morteza Rasti-Barzoki (2017) Sustainable supply chain management with pricing, greening and governmental tariffs determining strategies: A game-theoretic approach. Comput Ind Eng 105: 287–298. https://doi.org/10.1016/j.cie.2017.01.017 |
[169] | Xin Chen, Jiannan Li, Decai Tang, et al. (2024) Stackelberg game analysis of government subsidy policy in green product market. Environ Dev Sustain 26: 13273–13302. https://doi.org/10.1007/s10668-023-04176-y |
[170] | Valentina Bosetti, Carlo Carraro, Enrica De Cian, et al. (2009) The Incentives to Participate in, and the Stability of, International Climate Coalitions: A Game-theoretic Analysis Using the Witch Model. Working Papers 2009.64, Fondazione Eni Enrico Mattei. https://doi.org/10.2139/ssrn.1463244 |
[171] | Shunwu Huang, Lili Wang, Rui Zhuang, et al. (2019) Quality competition versus price competition: Why does china dominate the global solar photo-voltaic market? Emerg Mark Financ Trade 55: 1326–1342. https://doi.org/10.1080/1540496X.2018.1507905 |
[172] | Bruno Turnheim, Mike Asquith, Frank W. Geels (2020) Making sustainability transitions research policy-relevant: Challenges at the science-policy interface. Environ Innov Soc Trans 34: 116–120. https://doi.org/10.1016/j.eist.2019.12.009 |
[173] | Bruno Turnheim, Frank W. Geels (2012) Regime destabilisation as the flipside of energy transitions: Lessons from the history of the british coal industry (1913–1997). Energy Policy 50: 35–49. Special Section: Past and Prospective Energy Transitions – Insights from History. https://doi.org/10.1016/j.enpol.2012.04.060 |
[174] | Bruno Turnheim (2023) The historical dismantling of tramways as a case of destabilisation and phase-out of established system. Proc Natl Acad Sci 120: e2206227120. https://doi.org/10.1073/pnas.2206227120 |
[175] | Anders Enggaard Bødker (2018) From climate science and scenarios to energy policy. Master's thesis, International Business and Politics, Copenhagen Business School. |
[176] | Mert Duygan, Aya Kachi, Pinar Temocin, et al. (2023) A tale of two coal regimes: An actor-oriented analysis of destabilisation and maintenance of coal regimes in germany and japan. Energy Res Soc Sci 105: 103297. https://doi.org/10.1016/j.erss.2023.103297 |
[177] | Hirokazu Takizawa (2017) Masahiko aoki's conception of institutions. Evol Inst Econ Rev 14: 523–540. https://doi.org/10.1007/s40844-017-0087-0 |
[178] | Masahiko Aoki (2007) Endogenizing institutions and institutional changes. J Inst Econ 3: 1–31. https://doi.org/10.1017/S1744137406000531 |
[179] | Masahiko Aoki (2000) Institutional evolution as punctuated equilibria. Institutions, contracts and organizations: Perspectives from new institutional economics 11–33. |
[180] | Asjad Naqvi, Engelbert Stockhammer (2018) Directed technological change in a post-keynesian ecological macromodel. Ecol Econ 154: 168–188. https://doi.org/10.1016/j.ecolecon.2018.07.008 |
[181] | Darlington Akam, Oluwasegun Owolabi, Solomon Nathaniel (2021) Linking external debt and renewable energy to environmental sustainability in heavily indebted poor countries: new insights from advanced panel estimators. Environ Sci Pollut Res 28: 1–13. https://doi.org/10.1007/s11356-021-15191-9 |
[182] | Xingyuan Yao, Xiaobo Tang (2021) Does financial structure affect co2 emissions? Evidence from g20 countries. Finance Res Lett 41: 101791. https://doi.org/10.1016/j.frl.2020.101791 |
[183] | Tian Zhao, Zhixin Liu (2022) Drivers of co2 emissions: A debt perspective. Int J Environ Res Public Health 19. https://doi.org/10.3390/ijerph19031847 |
[184] | Fatima Farooq, Aurang Zaib, Faheem, et al. (2023) Public debt and environment degradation in oic countries: the moderating role of institutional quality. Environ Sci Pollut Res 30: 1–18. https://doi.org/10.1007/s11356-023-26061-x |
[185] | Marina Dabic, Vladimir Cvijanović, Miguel Gonzalez-Loureiro (2011) Keynesian, post-keynesian versus schumpeterian, neo-schumpeterian: an integrated approach to the innovation theory. Manag Decis 49: 195–207. https://doi.org/10.1108/00251741111109115 |
[186] | Luke Petach, Daniele Tavani (2022) Aggregate demand externalities, income distribution, and wealth inequality. Struct Change Econ Dyn 60: 433–446. https://doi.org/10.1016/j.strueco.2022.01.002 |