Research article

A nonlinear diffusion equation with reaction localized in the half-line

  • We study the behaviour of the solutions to the quasilinear heat equation with a reaction restricted to a half-line,

    u_t = (u^m)_xx + a(x) u^p,

    with m, p > 0 and a(x) = 1 for x > 0, a(x) = 0 for x < 0. We first characterize the global existence exponent p_0 = 1 and the Fujita exponent p_c = m + 2. We then study the grow-up rate in the case p ≤ 1 and the blow-up rate for p > 1. In particular, we show that the grow-up rate differs from the one for a global reaction if p > m or p = 1 < m.

    Citation: Raúl Ferreira, Arturo de Pablo. A nonlinear diffusion equation with reaction localized in the half-line[J]. Mathematics in Engineering, 2022, 4(3): 1-24. doi: 10.3934/mine.2022024


    Load forecasting makes it possible to anticipate upcoming load demands so that power outages can be avoided. Faults caused by overloading, and other electrical failures such as blackouts, can be prevented if the required precautionary measures are taken to manage load demand. Load demand, however, keeps increasing day by day, and it fluctuates considerably on an hourly basis [1], so short-term load forecasting is highly desirable to automate. Load forecasting is essentially a stochastic problem rather than a purely random one: nothing in this phenomenon is certain, because forecasters have to deal with stochasticity and randomness. The outcome of the process is accordingly expressed in a richer form, for instance a forecast interval, a prediction level, or a quantile of interest. Forecasting is considered crucial and high-priority in the power sector. The two previous global energy forecasting competitions, GEFCom2012 and GEFCom2014, attracted many academic data scientists as well as various industries, and resolved numerous forecasting challenges faced by the power sector, including probabilistic forecasting [2] and hierarchical forecasting [3].

    There are three widely used horizons in load forecasting: short-term load forecasting (STLF), medium-term load forecasting (MTLF) and long-term load forecasting (LTLF) [1]. STLF has been explored extensively and modelled at the generation, distribution and transmission stages. The problem arises mostly at the distribution stage, because distribution utilities are concerned with generation capacity, its scheduling and system peaking, and have no direct relation to the short-term (ST) load demand of individual customers. ST load forecasting is thus a central problem for the operation and transmission of energy networks, as it helps avert critical outcomes related to failures in power systems. It is imperative for the economic strength of energy networks and for dispatching and assembling start-up/shut-down schemes that play a crucial part in automatically controlled power networks. An accurate power load forecast does not merely help consumers adopt a more suitable power utilization plan; it also decreases electricity expenditure while enhancing equipment utilization, thereby minimizing the cost of production and improving economic welfare. It is also conducive to optimizing the operation of the power system, enhancing the efficiency of the power supply, and attaining the goals of energy conservation and emission reduction.

    Proper and efficient electric load forecasting can save electricity and make distribution easier and more efficient by utilizing minimum energy resources. If load forecasting is not accurate, the cost of electricity may increase and energy resources may be wasted. Conversely, if load forecasting is done properly and accurately, it helps to manage future electric load demands through efficient load planning [4].

    In a large power system, the usage patterns of individual appliances may differ from each other, and a single electrical device is used randomly. Discrete loads fluctuate widely, but when these loads are summed into a larger facility load, the emerging pattern can be predicted statistically [5]. The factors that can affect electric load include weather, time factors, economic factors, and random effects.

    Weather is the most influential of the factors listed above, and weather details are essential elements in the load forecasting scenario. Usually, load forecasting models are built and tested using actual weather readings. Operational models, however, must rely on weather forecasts, along with their associated forecast errors; an inaccurate weather forecast obviously leads to degradation of model performance. Forecast quantities such as wind speed, air temperature and cloud cover play a massive role in STLF by modifying the load curve. The characteristics of these elements are mirrored in the load requirements, although some influence the load far more than others.

    Weather has a significant effect on ST electrical load forecasting for power networks [6]. Weather-sensitive loads such as heating, air conditioning and ventilating devices have a great effect on small-scale industrial energy structures. Weather elements that can influence hourly power load forecasting include barometric pressure, precipitation, wind speed, solar irradiance and humidity. On days of maximum humidity, cooling equipment runs for prolonged duty cycles to remove unneeded condensation from the conditioned air. Precipitation can lower the air temperature, which in turn reduces the cooling load [5].

    Three further time factors can influence electric loads: holidays, the day-to-day weekly cycle, and seasonal effects. The daily power load on a public holiday is almost the same as that found on weekends; during holidays, the magnitude of power loads is lower than on working days. The day-to-day weekly cycle describes load patterns that repeat through every day and week, while seasonal effects account for long-term modifications in the growth of demand and the patterns of weather [7].

    Economic factors depend on the investment that expands the underlying infrastructure by establishing laboratories and new buildings, which add load to the power system.

    All random disturbances other than the three factors mentioned earlier are grouped under random effects, which can perturb the energy load profile. These disturbances include large unplanned loads and the absence of employees, both of which make prediction difficult [5].

    Load forecasting has gained importance in smart energy management systems in recent years, and the number of users of load forecasting keeps increasing daily. STLF is mostly utilized to manage load dispatch and to control the energy transfer schedule from thirty minutes up to a whole day. Therefore, any improvement in the precision and management of STLF reduces the expenses of the electrical management system and enhances the efficiency of the energy network [8].

    According to Gross and Galiana [5], STLF plays an important part in the creation of feasible, secure, dependable, economic and consistent operating techniques for the electrical management network. STLF provides information about the latest weather prediction, the recent load forecasting strategy, and the random behaviour of the power system [8]. Load forecasting has attracted increasing attention from researchers in recent years because of its importance and the growing role of microgrids, smart grids, and renewable sources of energy. Different strategies have been implemented for load forecasting and management, such as auto-regressive integrated moving average models, seasonal auto-regressive fractionally integrated moving average models, regression, and Kalman filtering. Al-Hamadi and Soliman [9] presented a load management model to meet the time-varying demand of users on an hourly basis. In another approach, a Kalman filter was implemented to calculate the optimal load forecast on an hourly basis [8]. Song et al. [10] introduced a technique based on fuzzy linear regression built on the load data.

    In recent years, numerous other techniques have been applied to enhance the accuracy and efficiency of load forecast management in power systems, such as fuzzy logic and artificial intelligence (AI). These effective solutions are built on AI technologies to solve load forecasting issues. AI-based intelligent systems have become widespread and are being developed on a large scale; because of their explanation capability, flexibility, and symbolic reasoning, they are deployed worldwide in several applications. Ranaweera et al. presented a technique based on fuzzy rules, developed using a learning-type algorithm, to incorporate load management and historical weather data [11]. He et al. [12] proposed a novel method for quantifying the unpredictability attached to the power load and for obtaining information about the future load; a neural network was then used to construct a quantile regression framework for developing probabilistic forecasting techniques.

    Recently, artificial intelligence (AI) based methods have been utilized for load forecasting, including smart grids and buildings [13], next-day load demands [14], load forecasting in distributed systems [15] and autoregressive ANNs with exogenous vector inputs [16]. Moreover, other AI-based methods, including fuzzy logic techniques [17], expert network structures [18], Bayesian neural systems [19] and support vector machines [20], are widely used in handling a number of forecasting issues. Even though extensive research has been carried out, an error-free and precise STLF remains a challenge, because load data are non-stationary and carry long-range dependencies across forecasting horizons. For this reason, long short-term memory (LSTM) is applied [21], a peculiar kind of recurrent neural network (RNN) structure [22], to resolve the STLF issue. LSTM functions effectively over long forecasting horizons compared with the rest of the AI techniques, because its output is governed by the past load statistics and the connections across the time series.

    The researchers in [23] suggested a hybrid technique based on the wavelet transform and Bayesian neural networks (BNN) to extract the load characteristics for training the BNN model. In this technique, a weighted sum of BNN outputs is used to predict the load for a specific day. Fan et al. [24] introduced another hybrid design based on the combination of a Bi-Square Kernel (BSK) regression framework and the phase space reconstruction (PSR) method. In this approach, the load statistics were reconstructed via the PSR method to obtain the developmental modes of the recorded load statistics and enhance prediction reliability [24]. The researchers in [25] proposed a fuzzy logic controller-based hourly load forecast depending on different conditional variables, e.g., random disturbances, historical load statistics, time and climate. Finally, a hybrid model was successfully established by employing a combination of evolutionary algorithms and neural networks.

    In another study, Metaxiotis et al. [26] provided a comparison of models based on CNNs and AI, in which CNN models showed distinctive performance in load forecasting. Fukushima [27] presented CNNs in a very simple form. LeCun et al. proposed the current form of CNNs with more advanced concepts, and there have been many further extensions and improvements, such as batch normalization and the max-pooling layer [28]. More specifically, some strategies were devised to simplify CNN structures, for instance the addition of pooling layers to the design [29]. In short, the CNN is considered a potential candidate for load forecasting implementations, as far as the control of over-fitting is concerned. On the other side, fuzzy time series (FTS) have been employed in pattern-acquisition-based techniques in numerous applications, including load forecasting. In 2005, Yu [30] proposed a novel kind of weighted FTS for stock market forecasting. In 2009, Wang et al. proposed another FTS technique that was applied to stock index forecasting and temperature forecasting [31]. In summary, many other studies show that FTS has been widely used in load forecasting management systems, for example a hybrid dynamic and fuzzy time series model for mid-term power load forecasting [32], a new linguistic out-of-sample approach of fuzzy time series for daily load forecasting [33], and an imperialist competitive algorithm combined with a refined high-order weighted fuzzy time series (RHWFTS–ICA) for short-term load forecasting [34].

    AI-based machine learning and deep convolutional neural network methods have also been used successfully in many other areas involving complex physiological signals and image processing problems. Applications of AI-based methods include seizure detection using entropy-based methods [35] and machine learning methods [36], congestive heart failure detection by extracting multimodal features and employing machine learning techniques [37], Alzheimer detection via machine learning [38], brain tumor detection based on hybrid features and machine learning techniques [39], arrhythmia detection [40], lung cancer detection by extracting refined fuzzy entropy [41], and prostate cancer detection based on deep learning [42] and machine learning [43] methods.

    Deep learning methods such as CNNs mostly improve prediction performance using big data and have advanced traditional computer vision tasks such as image classification. Recently, CNNs have been used for both imaging and non-imaging data. There are many applications of the 1D-CNN to time series data, including electricity load forecasting [44], electricity load forecasting for each day of the week [45], hydrological time-series prediction [46], short-term load forecasting of the Romanian power system [47] and short-term wind power forecasting [48].

    Figure 1 shows the schematic diagram of the electric load forecasting system for short-term and medium-term load forecasting. Short-term load forecasting (STLF) is used for planning power systems over horizons ranging from one hour up to one week; in this case, we computed the STLF for the next 24 hours, 72 hours, and one week. Medium-term load forecasting (MTLF) is used to plan maintenance over horizons ranging from one day to a few months; in this case, we computed the load forecast for the next 24 hours, 72 hours, one week, two weeks, and one month. After applying data preprocessing, we optimized and initialized robust neural network and deep learning models, namely the multilayer perceptron (MLP), LSTM and CNN. Performance was computed on the test set with standard error metrics: R-squared, MAPE, MAE, MSE, and RMSE. Finally, the STLF and MTLF forecasts were computed, and performance was evaluated in terms of the errors between actual and predicted load demands.

    Figure 1.  Schematic diagram of the electric load forecasting system for short-term and medium-term load forecasting.

    In this study, we optimized and employed robust machine learning and deep learning-based methods to predict the load demands for STLF and MTLF. For the multilayer perceptron (MLP), we optimized the network by changing the number of neurons in the hidden layer; this number must be high enough to model the problem, yet not so high as to cause overfitting. The iterative backpropagation algorithm was used for this purpose. Moreover, to minimize the cost function with respect to the connection weights, the gradient descent algorithm was used in conjunction with backpropagation.
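    The interplay of backpropagation and gradient descent described here can be sketched in NumPy; the one-hidden-layer network below, its width, learning rate and epoch count are illustrative choices, not the exact configuration used in the study:

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.2, epochs=2000, seed=0):
    """Train a one-hidden-layer MLP with plain full-batch gradient descent.

    Backpropagation computes the gradient of the mean-squared-error cost
    with respect to the connection weights; gradient descent then moves
    the weights in the negative-gradient direction.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        # forward pass
        h = np.tanh(X @ W1 + b1)            # hidden activations
        out = h @ W2 + b2                   # linear output
        err = out - y                       # residuals, shape (n, 1)
        # backward pass (chain rule)
        g_out = 2.0 * err / n               # d(MSE)/d(out)
        gW2 = h.T @ g_out; gb2 = g_out.sum(axis=0)
        g_h = (g_out @ W2.T) * (1.0 - h ** 2)   # tanh derivative
        gW1 = X.T @ g_h; gb1 = g_h.sum(axis=0)
        # gradient-descent update
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Z: np.tanh(Z @ W1 + b1) @ W2 + b2
```

    On a toy regression problem (e.g., fitting y = x^2 on [-1, 1]) the returned predictor reaches a training error well below that of a constant predictor.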

    The following methods were used to fine-tune the neural network parameters:

    We created an LSTM model with one LSTM layer of 48 neurons and "ReLU" as the activation function. We also added a dense layer containing 24 neurons, and the last layer, which acts as the output layer, contains one neuron. Finally, we compiled the model with the "Adam" optimizer and trained it for 100 epochs with a batch size of 24.

    For the Conv1D model, we first defined 48 filters and a kernel size of 2 with "ReLU" as the activation function. To reduce the complexity of the output and prevent overfitting, we used a max-pooling layer of size 2 after the CNN layer. Moreover, a dense layer containing 24 neurons was added, and the final output layer contains one neuron. The model was compiled with the "Adam" optimizer and fitted for 100 epochs with a batch size of 24 samples.

    The MLP model contained a single layer with 48 neurons and "ReLU" as the activation function. As for the LSTM and Conv1D models, a dense layer with 24 neurons was added, followed by a final output layer containing a single neuron. Lastly, the model was fitted using the efficient Adam optimization algorithm and the mean squared error loss function for 100 epochs.
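    The three configurations described above can be rendered as Keras sketches. The 24-step input window (`WINDOW`) and the exact layer ordering are assumptions; the layer widths (48/24/1), ReLU activations, Adam optimizer, 100 epochs and batch size 24 come from the text:

```python
# Minimal Keras sketch of the three models described in the text.
# WINDOW is an assumed number of lagged hourly loads per sample.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 24  # assumption: one day of hourly history per input sample

def build_lstm():
    return models.Sequential([
        layers.Input((WINDOW, 1)),
        layers.LSTM(48, activation="relu"),   # one LSTM layer, 48 neurons
        layers.Dense(24),                     # dense layer with 24 neurons
        layers.Dense(1)])                     # single-neuron output layer

def build_cnn():
    return models.Sequential([
        layers.Input((WINDOW, 1)),
        layers.Conv1D(48, kernel_size=2, activation="relu"),
        layers.MaxPooling1D(pool_size=2),     # reduces output complexity
        layers.Flatten(),
        layers.Dense(24),
        layers.Dense(1)])

def build_mlp():
    return models.Sequential([
        layers.Input((WINDOW,)),
        layers.Dense(48, activation="relu"),  # single hidden layer
        layers.Dense(24),
        layers.Dense(1)])

for build in (build_lstm, build_cnn, build_mlp):
    model = build()
    model.compile(optimizer="adam", loss="mse")
    # model.fit(X_train, y_train, epochs=100, batch_size=24)  # as in the text
```

    Each builder returns a compiled single-output regressor; the commented `fit` call shows the training settings stated above.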

    The data were taken from the Al-Khwarizmi Institute of Technology, The University of Punjab, Lahore, from one of its projects concerning the hourly electricity load demands of the complete year 2008 and July to December 2009 for one grid, as used in [4]. The data were taken from a feeder so as to capture the maximum load requirements for this study. Load time series were generated for 24 hours, 72 hours, one week, and one month ahead to predict the short-term and medium-term load demands.
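    Turning an hourly load record into supervised (input, target) pairs for the horizons above can be done with a simple sliding window; this sketch and its function name are illustrative, not the study's actual preprocessing code:

```python
import numpy as np

def make_windows(series, window=24, horizon=24):
    """Build supervised pairs from an hourly load series.

    Each row of X holds `window` consecutive past hours; the matching y
    is the load `horizon` hours after the end of that window (e.g.,
    horizon=24 for day-ahead, 72 for 3-day-ahead, 168 for week-ahead).
    """
    X, y = [], []
    for t in range(len(series) - window - horizon + 1):
        X.append(series[t:t + window])
        y.append(series[t + window + horizon - 1])
    return np.array(X), np.array(y)
```

    For example, a 100-hour record with `window=24` and `horizon=24` yields 53 samples, the first one pairing hours 0–23 with the load at hour 47.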

    Paul Werbos developed the MLP in 1974; it generalizes the simple perceptron in a non-linear way by using a sigmoidal activation function such as

    F(x) = tanh(x) (2.1)

    It has become one of the most popular neural networks for supervised learning. The multilayer perceptron consists of three parts: an input layer, an intermediate part formed by at least one hidden layer, and an output layer; information is transmitted in one direction, from the input layer towards the output layer. Through an iterative adjustment process comparing outputs and targets, the MLP adjusts the weights of the neural connections to find an optimal weight structure via the gradient backpropagation method. The network generally converges to a state where the calculation error is low [49].

    A local learning procedure is used to train the MLP [50]. For this purpose, a few samples are picked from the neighbourhood of a selected query point x* to train the MLP, where the neighbourhood is the group of k nearest neighbours of the selected point in the training set Φ. The framework is trained to fit the target function around the selected point x* and is competent for this query only. The local complexity is lower than the global complexity of the target function; therefore, an elementary MLP design with few hidden neurons can be used, which learns fast. In [50] the researchers reported that a single neuron carrying a sigmoid activation function delivered better outcomes than systems with numerous neurons in the hidden layer. The MLP can be implemented with the backpropagation algorithm and is considered one of the most popular and commonly used networks. The backpropagation training algorithm is a supervised learning algorithm and has been applied on a large scale to prediction problems such as pan evaporation (EP) prediction [51] and the prediction of annual gross electricity demand based on socio-economic indicators and climate conditions [52]. The number of nearest neighbours (that is, learning points) is the only hyper-parameter tuned in the local learning procedure (LLP) method. The MLP learns using the Levenberg-Marquardt algorithm with Bayesian regularization [53], which minimizes a combination of squared errors and network weights to avoid overfitting. The optimization of the MLP is laid out in the algorithm given below:

    Figure 2 reflects the general architecture and working of the MLP algorithm: the input time series of normalized load demands, the hidden neurons and activation functions, the learning stage, and finally the error metrics computed between the actual and predicted load demands.

    Figure 2.  Multilayer perceptron (MLP) architecture.

    Multilayer Perceptron Algorithm:

    1: Start

    2: Find the set Θ of l nearest neighbours of query pattern x* in Φ.

    3: Do for each i ∈ Θ

        3.1: Create the set Φ' = Φ \ {(xi, yi)}

        3.2: Do for k=kmin to kmax

            3.2.1: Find the set Φ" of k nearest neighbors of xi in Φ'

            3.2.2: Learn MLP on Φ"

            3.2.3: Test MLP on xi

        3.3: Select the best value of k for xi

    4: Calculate the optimal value of nearest neighbours kopt as the mean value of k selected in 3.3

    5: Find the set Θ' of kopt nearest neighbors of x* in Φ

    6: Learn MLP on Θ'

    7: Test MLP on x*

    8: Stop
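    The leave-one-out selection of k in steps 2–7 can be sketched as follows. For compactness, a least-squares linear fit stands in for the local MLP (a hypothetical substitution), and all function names are illustrative:

```python
import numpy as np

def knn_indices(x, X, k):
    """Indices of the k nearest neighbours of x among the rows of X."""
    d = np.linalg.norm(X - x, axis=1)
    return np.argsort(d)[:k]

def local_fit_predict(Xn, yn, x):
    # Stand-in for "learn MLP on the neighbourhood": a least-squares
    # linear model fitted to the selected samples.
    A = np.c_[Xn, np.ones(len(Xn))]
    w, *_ = np.linalg.lstsq(A, yn, rcond=None)
    return np.r_[x, 1.0] @ w

def local_learning_predict(X, y, x_star, l=10, k_min=3, k_max=8):
    """Steps 2-7: choose k by leave-one-out error over the l-neighbourhood
    of the query, then fit one local model with the averaged best k."""
    theta = knn_indices(x_star, X, l)                # step 2
    best_ks = []
    for i in theta:                                  # step 3
        mask = np.arange(len(X)) != i                # step 3.1: drop (xi, yi)
        errs = {}
        for k in range(k_min, k_max + 1):            # step 3.2
            nb = knn_indices(X[i], X[mask], k)
            pred = local_fit_predict(X[mask][nb], y[mask][nb], X[i])
            errs[k] = abs(pred - y[i])               # steps 3.2.2-3.2.3
        best_ks.append(min(errs, key=errs.get))      # step 3.3
    k_opt = int(round(np.mean(best_ks)))             # step 4
    nb = knn_indices(x_star, X, k_opt)               # step 5
    return local_fit_predict(X[nb], y[nb], x_star)   # steps 6-7
```

    On exactly linear data the local model recovers the target value at any query point, which makes the control flow easy to check.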

    Deep learning is a sub-domain of artificial intelligence (AI) that works along similar lines to machine learning (ML) and artificial neural networks (ANN). Like a human brain, an ANN takes in information and processes it using groups of neurons arranged in layers. These neurons pass information to other neurons, some information is fed back to the previous layer, and finally the processed information is sent to the output layer in the form of a classification or regression. Deep learning extracts features from the data automatically, and its methods improve the prediction of complex problems [54].

    LSTM is an artificial recurrent neural network (RNN) architecture used in the field of deep learning [55]. LSTM is a popular model for time series forecasting that can efficiently deal with both long-term and short-term data dependencies. LSTM was designed to overcome the vanishing-gradient issues of the RNN architecture, specifically when dealing with long-term data dependencies, which led to the long short-term memory network. The LSTM framework adds an input gate, a forget gate, and an output gate to the neurons of the recurrent neural network architecture; this approach efficiently manages the vanishing-gradient problem [56] and makes the LSTM structure more appropriate for long-term data dependency problems. LSTM can learn long- and short-term dependencies from any given input sequence, which is why it is widely employed in time series prediction [57].

    LSTM methods from the RNN family have been used in many applications, such as speech and language modelling, speech recognition, and the classification of neurocognitive performance by Greff et al. [58]. Moreover, Nagabushanam et al. [59] employed LSTM to improve the classification of EEG signals. Senyurek et al. [60] employed LSTM for puff detection in smoking by capturing the temporal dynamics of sensors. LSTM also improved the recognition of gait in a neurodegenerative disease compared with older recurrent neural network (RNN) methods [59]. Conventional machine learning algorithms face the vanishing-gradient problem; LSTM, by contrast, solves this problem, since it is based on an appropriate gradient-based learning algorithm that handles the error backflow. Moreover, the LSTM algorithm remains appropriate when the input sequence is noisy or incompressible, without losing its short time lag capabilities. LSTM learns quickly and adaptively compared with other machine learning algorithms, and it can solve very long time lag tasks and complex problems that conventional machine learning methods cannot.

    The hidden layers of an LSTM are linear, but self-loop memory blocks allow the gradient to flow through long sequences. An LSTM is composed of recurrent blocks termed memory blocks; each block contains recurrent memory cells and three multiplicative units, namely the input, output and forget gates [61]. These cells allow memory blocks to store and access information over long periods, solving the vanishing-gradient problem [62]. The LSTM originally comprised only input and output gates; the forget gate was added in [63] to improve functionality by resetting the memory cells. Figure 3 reflects the general architecture of the LSTM model.

    Figure 3.  Architecture of LSTM Model.

    The memory cell of the LSTM is considered its major innovation; it is used as an accumulator to store state information. In the first step, the forget gate is used to get rid of unnecessary information: a sigmoid operation is applied to compute the forget state ft.

    f_t = σ(W_f · [h_{t-1}, x_t] + b_f) (2.2)

    The second step decides which new data should be saved in the cell state. Another sigmoid layer, known as the "input gate layer", selects the values to update. Then a tanh function creates a vector c̃_t of new candidate values to be incorporated into the upcoming state.

    i_t = σ(W_i · [h_{t-1}, x_t] + b_i) (2.3)
    c̃_t = tanh(W_c · [h_{t-1}, x_t] + b_c) (2.4)

    In the third step, the old cell state c_{t-1} is updated to the new cell state c_t. To delete information from the old cell we multiply c_{t-1} by f_t, and then add i_t ⊙ c̃_t. The new cell state is given by

    c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t (2.5)

    In the last step, the output has to be decided. This step consists of two further steps: a sigmoid function is used as the output gate to filter the cell state, then the cell state is passed through tanh(·) and multiplied by the gate output o_t to produce the desired information.

    o_t = σ(W_o · [h_{t-1}, x_t] + b_o) (2.6)
    h_t = o_t ⊙ tanh(c_t) (2.7)

    In equations (2.2)–(2.7), W_i, W_f, W_c and W_o denote the weight matrices, and b_i, b_f, b_c and b_o are the bias vectors.
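    Equations (2.2)–(2.7) translate directly into a single NumPy update; the dictionary layout chosen for the weights and biases is an illustrative convention:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step implementing Eqs. (2.2)-(2.7).

    W maps gate name ("f", "i", "c", "o") to a weight matrix acting on
    the concatenation [h_{t-1}, x_t]; b maps gate name to a bias vector.
    """
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f = sigmoid(W["f"] @ z + b["f"])         # (2.2) forget gate
    i = sigmoid(W["i"] @ z + b["i"])         # (2.3) input gate
    c_tilde = np.tanh(W["c"] @ z + b["c"])   # (2.4) candidate values
    c = f * c_prev + i * c_tilde             # (2.5) new cell state
    o = sigmoid(W["o"] @ z + b["o"])         # (2.6) output gate
    h = o * np.tanh(c)                       # (2.7) hidden state
    return h, c
```

    Iterating `lstm_step` over a sequence, carrying (h, c) forward, reproduces the recurrence the gates implement; note |h| < 1 always, since both o and tanh(c) are bounded.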

    The CNN is another class of ANN that has become dominant in many applications, including computer vision, signal and image processing, and electricity load forecasting. A CNN is designed to automatically and adaptively learn spatial hierarchies of features through backpropagation by using multiple building blocks, such as convolution layers, pooling layers, and fully connected layers. In the field of computer vision, artificial neural networks have improved the results of various problems compared with several classical techniques, and the convolutional neural network is the most common type of artificial neural network used for computer vision tasks such as image classification.

    The scope of CNNs is very wide and they have numerous applications; for example, in the healthcare domain, where saving lives is the top priority, CNNs can be used for health risk assessment. In radiology, CNNs can be used for the efficient classification of certain diseases, the segmentation of organs or anatomical structures, and the detection of abnormalities in computed tomography (CT) or MRI images. Drug discovery is another major healthcare field with extensive use of CNNs. There are many other applications of CNNs in computer vision, such as object localization, object detection and image segmentation [64], face recognition [65], video classification, depth estimation and image captioning [66]. It is worth mentioning that the scope of CNN applications is not limited to image processing tasks; CNNs have many other applications as well, such as natural language processing [67], day-ahead building-level load forecasts [68], anomaly detection [69], and time series stream forecasting [70].

    This idea was later taken up by specialists in industrial areas, where the technology became well known. In 1998, LeCun et al. [71] proposed a design that improved on the architecture of [72]; the present convolutional neural network resembles LeCun's design. The convolutional neural network became widely popular after winning the ILSVRC 2012 competition [73].

    In recent years, many other advanced strategies have been implemented for load forecasting, for example artificial intelligence and fuzzy logic. Smart solutions based on AI technologies for short-term load forecasting are becoming more and more widespread these days. AI-based frameworks are created and deployed globally in numerous implementations, essentially due to their symbolic insight, adaptability, and explanation capabilities. For instance, Ranaweera et al. suggested a technique that utilized fuzzy rules to consolidate recorded climate and load information; these fuzzy rules were acquired from the historical data using a learning-type algorithm [11]. He et al. [12] introduced an architecture to evaluate the uncertainty associated with the electrical load and to gain more information on the subsequent load; a neural network was then utilized to build a quantile regression model aimed at a probabilistic forecasting technique. In one of the studies, a hybrid forecasting technique combining the wavelet transform, a neural network, and an evolutionary algorithm was suggested [74] and utilized for STLF: the WT decomposed the time series into its components, and then every component was forecast by a combination of a neural network and an evolutionary algorithm. In a comparable methodology, Ghofrani et al. [23] presented a mix of a Bayesian neural network (BNN) and a wavelet transform (WT) to create definite load attributes for BNN training; in this design, a weighted total of the Bayesian neural network outputs was used to predict the load for a particular day. In another hybrid model, an STLF design suggested by [24] combines a phase space reconstruction calculation with a bi-square kernel (BSK) regression design. In this structure, the load information is reconstructed by the phase space reconstruction algorithm to determine developmental patterns in the historical load details, with the embedded significant features expanding the reliability of the forecast. The BSK design, in turn, links the spatial structures between the regression points and their neighbouring points to capture the rotational rules and the disturbance in each dimension. The suggested model, including multi-dimensional regression, was effectively established and utilized for STLF [24]. Mamlook et al. [25] utilized a fuzzy logic controller to predict, hour by hour, the effect of different conditional parameters, e.g., time, climate, historical load data and random disturbances, on load forecasting through fuzzy sets.

    Figure 4 shows the architecture of the CNN model. A 1D load time series was used as input to the CNN model; it was processed through the successive layers of the network, and the output was finally evaluated as the error between the actual load demand and the predicted values.

    Figure 4.  CNN Architecture for load forecasting.
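    To make the convolution step concrete, the following sketch (an illustration only, not the paper's actual model, which uses multiple layers with learned kernels) shows how a 1D convolution slides a kernel over an hourly load series to produce local-pattern features:

```python
import numpy as np

def conv1d(series, kernel, stride=1):
    """Valid-mode 1D convolution: slide `kernel` over `series`."""
    k = len(kernel)
    n = (len(series) - k) // stride + 1
    return np.array([np.dot(series[i * stride : i * stride + k], kernel)
                     for i in range(n)])

# Toy hourly load for two days (purely illustrative values)
load = 500 + 100 * np.sin(np.linspace(0, 4 * np.pi, 48))

# A fixed difference kernel highlights sharp changes in demand;
# a real CNN layer would learn these kernel weights from data.
features = conv1d(load, np.array([-1.0, 0.0, 1.0]))
print(features.shape)  # (46,)
```

    Stacking several such layers with learned kernels, nonlinearities, and pooling gives an architecture of the kind sketched in Figure 4.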

    The quality of the predictor was examined by quantitatively measuring accuracy in terms of the root mean squared error (RMSE), coefficient of determination (R2), mean squared error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). These well-known error metrics, detailed in [75], are defined as follows. The RMSE is given by:

    RMSE=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i-y_i)^2} (2.8)

    where x_i and y_i denote the measured and predicted values of the i-th sample, and n denotes the total number of samples in the training dataset. A smaller RMSE indicates better predictive performance.

    R2 can be computed using the following formula:

    R^2=1-\frac{\sum_{i=1}^{n}(x_i-y_i)^2}{\sum_{i=1}^{n}(x_i-\bar{y})^2} (2.9)

    Here \bar{y} denotes the average of the measured values over all samples.

    MSE is mathematically computed as follows:

    MSE=\frac{1}{n}\sum_{i=1}^{n}(x_i-y_i)^2 (2.10)

    The MSE of an estimator measures the average of the squared errors or deviations. MSE is also the second moment of the error, and thus incorporates both the variance and the bias of the estimator.

    MAE measures the average absolute difference between two paired variables; for example, if y and x denote the predicted and observed values, then MAE is calculated as:

    MAE=\frac{1}{n}\sum_{i=1}^{n}|y_i-x_i| (2.11)

    The MAPE is computed using the following formula.

    MAPE=\frac{100}{n}\sum_{i=1}^{n}\left|\frac{y_i-x_i}{x_i}\right| (2.12)
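    The five metrics above can be collected into one helper. The sketch below assumes the common conventions (MAPE normalized by the measured values x_i, R2 as one minus the residual ratio); the paper's exact conventions are not fully recoverable from Eqs. (2.8)-(2.12), so treat this as illustrative:

```python
import numpy as np

def forecast_metrics(x, y):
    """Error metrics of Eqs. (2.8)-(2.12); x: measured, y: predicted."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    mse = np.sum((x - y) ** 2) / n                                  # Eq. (2.10)
    rmse = np.sqrt(mse)                                             # Eq. (2.8)
    mae = np.sum(np.abs(y - x)) / n                                 # Eq. (2.11)
    mape = 100.0 / n * np.sum(np.abs((y - x) / x))                  # Eq. (2.12)
    r2 = 1.0 - np.sum((x - y) ** 2) / np.sum((x - x.mean()) ** 2)   # Eq. (2.9)
    return {"RMSE": rmse, "R2": r2, "MSE": mse, "MAE": mae, "MAPE": mape}

m = forecast_metrics([100, 110, 120], [98, 112, 119])
print(round(m["MSE"], 3))  # 3.0
```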

    In this study, we applied machine learning and deep learning methods, namely MLP, LSTM, and CNN, to load time series data from January 01, 2008 to December 31, 2009. The performance was evaluated in terms of R2, MAPE, MAE, MSE, and RMSE. We computed the next 24 hours, 72 hours, one week, and one month forecasts using the proposed methods.
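    Producing these multi-step-ahead forecasts implies a windowing step that turns the hourly series into supervised input/target pairs. A minimal sketch follows; the one-week lookback is an assumption for illustration, since the paper does not state its window length:

```python
import numpy as np

def make_windows(series, lookback, horizon):
    """Split a 1D series into (X, Y): each X row holds `lookback` past
    hours, and each Y row the next `horizon` hours to be predicted."""
    X, Y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i : i + lookback])
        Y.append(series[i + lookback : i + lookback + horizon])
    return np.array(X), np.array(Y)

hourly_load = np.arange(200.0)  # stand-in for the real hourly load data
X, Y = make_windows(hourly_load, lookback=168, horizon=24)  # 1 week in, 24 h out
print(X.shape, Y.shape)  # (9, 168) (9, 24)
```

    Longer horizons (72 hours, one week, one month) change only the `horizon` argument; the model then maps each input window to its target window.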

    The distance between the actual and predicted values is computed; if the difference between the observed and predicted values is small and unbiased, the model fits the data well. Statistically, goodness of fit is also assessed using residual plots, which can reveal unwanted residual patterns that indicate biased results more effectively than summary numbers alone. R-squared, also called the coefficient of determination, is a statistical measure of how closely the data fit the regression.

    Table 1 reports the next 24 hours ahead load forecasts. In terms of R2, MLP gives the best prediction (0.6217), followed by CNN (0.5462) and LSTM (0.5160). In terms of MAPE, the best 24 hours ahead prediction was obtained with MLP (4.97), followed by LSTM (5.17) and CNN (5.62). Likewise, in terms of MAE, MLP performs best (104.33), followed by LSTM (109.2) and CNN (115.62). In terms of MSE, MLP performs best (17936.03), followed by CNN (21515.55) and LSTM (22947.59). Finally, in terms of RMSE, MLP performs best (133.92), followed by CNN (146.68) and LSTM (151.48).

    Table 1.  Prediction of the next 24 hours ahead load forecasting.
    Method R2 MAPE MAE MSE RMSE
    MLP 0.6217 4.97 104.33 17936.03 133.92
    LSTM 0.5160 5.17 109.2 22947.59 151.48
    CNN 0.5462 5.62 115.62 21515.55 146.68


    Figure 5 shows the results of the 24-hour-ahead load forecasts obtained by the three methods (MLP, LSTM, and CNN). From the plotted curves, the MLP forecast is closest to the actual load curve, followed by LSTM and CNN. The corresponding error values are reported in Table 1.

    Figure 5.  24 hours ahead load forecasting using a) MLP, b) LSTM, c) CNN.

    Table 2 reports the next 72 hours ahead load forecasts. In terms of R2, MLP gives the best prediction (0.7588), followed by CNN (0.7176) and LSTM (0.7153). In terms of MAPE, the best 72 hours ahead prediction was obtained with MLP (7.04), followed by CNN (7.44) and LSTM (8.07). Likewise, in terms of MAE, MLP performs best (125.92), followed by CNN (140.21) and LSTM (144.84). In terms of MSE, MLP performs best (35393.93), followed by CNN (41451.11) and LSTM (41786.91). Finally, in terms of RMSE, MLP performs best (188.13), followed by CNN (203.59) and LSTM (204.42).

    Table 2.  Prediction of the next 72 hours ahead load forecasting.
    Method R2 MAPE MAE MSE RMSE
    MLP 0.7588 7.04 125.92 35393.93 188.13
    LSTM 0.7153 8.07 144.84 41786.91 204.42
    CNN 0.7176 7.44 140.21 41451.11 203.59


    Figure 6 shows the results of the 72-hour-ahead load forecasts obtained by the three methods (MLP, LSTM, and CNN). From the plotted curves, the MLP forecast is closest to the actual load curve, followed by CNN and LSTM, consistent with the error values reported in Table 2.

    Figure 6.  72 hours ahead load forecasting using a) MLP, b) LSTM, c) CNN.

    Table 3 reports the next one week ahead load forecasts. In terms of R2, MLP gives the best prediction (0.8879), followed by LSTM (0.8814) and CNN (0.7616). In terms of MAPE, the best one-week ahead prediction was obtained with MLP (6.162), followed by LSTM (6.74) and CNN (9.79). Likewise, in terms of MAE, MLP performs best (103.156), followed by LSTM (107.13) and CNN (158.27). In terms of MSE, MLP performs best (22746.21), followed by LSTM (24060.60) and CNN (48390.42). Finally, in terms of RMSE, MLP performs best (150.81), followed by LSTM (155.11) and CNN (219.97).

    Table 3.  Prediction of the next one week ahead load forecasting.
    Method R2 MAPE MAE MSE RMSE
    MLP 0.8879 6.162 103.156 22746.21 150.81
    LSTM 0.8814 6.74 107.13 24060.60 155.11
    CNN 0.7616 9.79 158.27 48390.42 219.97


    Figure 7 shows the results of the one-week-ahead load forecasts obtained by the three methods (MLP, LSTM, and CNN). From the plotted curves, the MLP and LSTM forecasts are closest to the actual load curve, followed by CNN, consistent with the error values reported in Table 3.

    Figure 7.  One week ahead load forecasting using a) MLP, b) LSTM, c) CNN.

    Table 4 reports the next 15 days ahead load forecasts. In terms of R2, LSTM gives the best prediction (0.89), followed by MLP (0.87) and CNN (0.76). In terms of MAPE, the best two-weeks ahead prediction was obtained with LSTM (5.44), followed by MLP (5.54) and CNN (8.67). Likewise, in terms of MAE, LSTM performs best (93.67), followed by MLP (99.50) and CNN (141.89). In terms of MSE, LSTM performs best (17395.14), followed by MLP (19963.23) and CNN (36938.71). Finally, in terms of RMSE, LSTM performs best (131.89), followed by MLP (141.29) and CNN (192.19).

    Table 4.  Prediction of the next 15 days ahead load forecasting.
    Method R2 MAPE MAE MSE RMSE
    MLP 0.87 5.54 99.50 19963.23 141.29
    LSTM 0.89 5.44 93.67 17395.14 131.89
    CNN 0.76 8.67 141.89 36938.71 192.19



    Figure 8 shows the results of the two-weeks-ahead load forecasts obtained by the three methods (MLP, LSTM, and CNN). From the plotted curves, the LSTM forecast is closest to the actual load curve, followed by MLP and CNN, consistent with the error values reported in Table 4.

    Figure 8.  Two weeks ahead load forecasting using a) MLP, b) LSTM, c) CNN.

    Table 5 reports the next one month ahead load forecasts. In terms of R2, MLP and LSTM give the best prediction (0.92 each), followed by CNN (0.90). In terms of MAPE, the best one-month ahead prediction was obtained with MLP (4.23), followed by LSTM (4.38) and CNN (5.10). Likewise, in terms of MAE, MLP performs best (71.26), followed by LSTM (75.12) and CNN (84.95). In terms of MSE, MLP performs best (11017.13), followed by LSTM (11258.74) and CNN (14584.13). Finally, in terms of RMSE, MLP performs best (104.96), followed by LSTM (106.10) and CNN (120.76).

    Table 5.  Prediction of the next one month ahead load forecasting.
    Method R2 MAPE MAE MSE RMSE
    MLP 0.92 4.23 71.26 11017.13 104.96
    LSTM 0.92 4.38 75.12 11258.74 106.10
    CNN 0.90 5.10 84.95 14584.13 120.76


    Figure 9 shows the results of the one-month-ahead load forecasts obtained by the three methods (MLP, LSTM, and CNN). From the plotted curves, the MLP and LSTM forecasts are closest to the actual load curve, followed by CNN, consistent with the error values reported in Table 5.

    Figure 9.  One month ahead load forecasting using a) MLP, b) LSTM, c) CNN.

    Accurate load forecasting can help alleviate the impact of renewable-energy access to the network, facilitate power plants in scheduling unit maintenance, and help power-brokerage companies develop reasonable electricity-price plans. Many activities within the power system, such as generator maintenance scheduling, renewable-energy integration, and even investment in power plants and grids, depend on load forecasting. In the electricity market, regulators monitor activities based on the forecast load and power generation, while customers and power brokers decide their action strategies accordingly.

    Convolutional neural networks (CNNs) are extensively used in forecasting. CNNs can capture local pattern features, with scale invariance, when nearby data points have strong relationships with one another [76], so the locally ordered patterns in the load data of adjacent hours can be extracted by a CNN. In [77], a load forecasting design built on a CNN is presented and compared with other neural networks; its MAPE and CV-RMSE of 9.77% and 11.66% are the smallest among all the compared designs. Those experiments demonstrate that a CNN infrastructure is effective for load forecasting and that hidden features can be extracted by the designed 1D convolution layers. In light of the above, both LSTM and CNN give highly accurate predictions in STLF because of their ability to capture hidden features, so it is natural to build a hybrid neural architecture that captures and integrates these different hidden traits for effective performance. More precisely, such an architecture consists of three parts: an LSTM module, a CNN module, and a feature-fusion module. The LSTM module learns useful long-range information through its forget gate and memory cell; the CNN module extracts local-trend patterns and similar patterns that appear in different regions; and the feature-fusion module integrates these hidden features to make the final forecast. The CNN-LSTM framework was built and applied to predict a real-world electric load time series, and several other strategies were implemented for comparison. To demonstrate the validity of the model, the CNN and LSTM modules were also trained separately.
Moreover, the dataset was split into several segments to test the robustness of the model. In outline, this paper proposes a deep-learning structure that can effectively capture and integrate the hidden features of the LSTM and CNN models to achieve better accuracy and robustness.
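    The feature-fusion step described above can be sketched as concatenating the two branches' feature vectors and applying a linear head. The dimensions and random features below are placeholders; a real model learns the branch features and the weights `w`, `b` end-to-end:

```python
import numpy as np

def fuse_and_predict(cnn_feats, lstm_feats, w, b):
    """Feature fusion: concatenate branch features, then a linear layer."""
    fused = np.concatenate([cnn_feats, lstm_feats])
    return fused @ w + b

rng = np.random.default_rng(0)
cnn_feats = rng.normal(size=8)        # local-pattern features (CNN branch)
lstm_feats = rng.normal(size=16)      # long-memory features (LSTM branch)
w = rng.normal(size=(24, 24)) * 0.1   # untrained fusion weights (illustrative)
b = np.zeros(24)

forecast = fuse_and_predict(cnn_feats, lstm_feats, w, b)  # next-24-hour output
print(forecast.shape)  # (24,)
```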

    In this study, we computed ahead forecasts of one day, one week, two weeks, and one month by applying MLP, LSTM, and CNN. The computational performance was measured in terms of R2, MAPE, MAE, MSE, and RMSE. We optimized the parameters of these algorithms to improve the ahead STLF and MTLF performance, as reflected in Table 6. Comparing the results of our study with previous findings in terms of MAPE for one day, one week, two weeks, and one month reveals that our proposed models with parameter optimization give the best ahead-prediction performance.

    Table 6.  Comparison of MAPE results with previous studies.
    Method Day-ahead 72 hrs ahead Week-ahead 15 days ahead Month ahead
    RDNN [78] 5.66% - 7.55% - -
    SOM-SVM [79] 6.05% - 8.03% - -
    Copula DBN [80] 5.63% - 7.26% - -
    LSTM [81] - 9.34% - - -
    ANN [82] - 12.96% - - -
    NP-ARMA [83] - - - - 4.83%
    CNN [84] - - - - 5.08%
    LSTM [85] - - - - 4.83%
    XGBoost [86] - - - - 10.08%
    This Work 4.97% 6.34% 6.16% 5.44% 4.23%


    In this study, we have taken load data from a feeder and computed load forecasts for the next 24 hours, 72 hours, one week, two weeks, and one month, covering both short-term and medium-term load demands. We optimized the AI algorithms MLP, LSTM, and CNN to improve the forecasting performance, which we measured with the robust error metrics R-squared, MAPE, MAE, MSE, and RMSE. A small, unbiased error between the observed and predicted values at each of these horizons indicates that the model fits the data well, while the R2 statistic indicates how closely the predictions track the data. For STLF, MLP and LSTM give the better forecasts; for MTLF, CNN and LSTM give the better forecasts. This indicates that as the data demands grow, deep learning models with a larger number of neurons and optimized activation functions provide better predictions. The results show that optimizing the deep learning models yields good predictions for short-term and medium-term forecasting, and suggest that power systems, with their complexity, growth, and the many factors influencing power generation, consumption, and planning, can be better predicted using this approach.

    There are various factors which affect load growth, such as peak hours during the day, when the demand for electricity increases, or environmental factors which affect energy demand. In the present work, we computed ahead load demands with the proposed models using data from January 2008 to December 2008 and July to December 2009 of one grid. In future work, we will consider each of these factors separately to study its effect on load growth.

    The authors declare no conflict of interest in this paper.


