
With a load forecasting service in place, the key task is to anticipate upcoming load demands so that power shortages can be avoided. Faults caused by excess loading, and other electrical failures such as blackouts, can be prevented if the proper precautionary measures are taken to manage load demand. Load demand, however, keeps increasing day by day and fluctuates considerably on an hourly basis [1], so automated short-term load forecasting is highly desirable. Load forecasting is essentially a stochastic rather than a deterministic problem: nothing about the process is certain, and forecasters must deal with randomness and haphazardness. The outcome is therefore expressed in probabilistic form, for instance a forecast interval, a prediction level, or a quantile of interest. Forecasting is considered crucial and a high priority in the power sector. The two previous global energy forecasting competitions, GEFCom2012 and GEFCom2014, attracted many data scientists from academia and industry and resolved numerous forecasting problems faced by the power sector, including probabilistic forecasting [2] and hierarchical forecasting [3].
Three approaches are most widely used in load forecasting: short-term load forecasting (STLF), medium-term load forecasting (MTLF), and long-term load forecasting (LTLF) [1]. STLF has been explored extensively and modeled at the generation, distribution, and transmission stages. The problem mostly arises at the distribution stage, because distribution utilities are concerned with generation capacity, its scheduling, and system peaking, and have no direct relation to the short-term (ST) load demand of individual customers. ST load forecasting is therefore a central problem for the operation and transmission of energy networks, helping to avert critical outcomes such as flashovers and failures in energy systems. It is imperative to the economic strength of energy networks and to dispatching and start-up/shut-down scheduling, which play a crucial part in automatically controlled power networks. An accurate power load forecast not only helps consumers adopt a more suitable power utilization plan, but also decreases electricity expenditure, enhances equipment utilization, minimizes the cost of production, and improves economic welfare. It is also conducive to optimizing the operation of the power system, enhancing the efficiency of the power supply, and attaining the goals of energy conservation and emission reduction.
Proper and efficient electric load forecasting can save electricity and make distribution easier and more efficient by utilizing minimal energy resources. The cost of electricity may increase if load forecasting is inaccurate, and improper forecasting may also waste energy resources. Conversely, if load forecasting is done properly and accurately, it helps to meet future electric load demands through efficient load planning [4].
In a large power system, the usage patterns of individual devices may differ from one another, and any single electrical device is used at random. Discrete loads fluctuate widely, but when these loads are summed into a larger facility load, an emerging pattern can be predicted statistically [5]. The factors that can affect electric load include weather, time factors, economic factors, and random effects.
Weather is the most influential of the factors listed above, and weather details are essential elements of the load forecasting scenario. Usually, load forecasting models are built and tested using actual weather readings. Operational models, however, must rely on weather forecasts, with their associated errors and miscalculations; these errors inevitably lead to some degradation in model performance. Weather variables, for instance wind speed, air temperature, and cloud cover, play a massive role in STLF by modifying the load curve. The characteristics of these elements are mirrored in the load requirements, although some have far more influence than others.
Weather has a significant effect on ST electrical load forecasting for power networks [6]. Weather-sensitive loads such as heating, air conditioning, and ventilating devices have a great effect on small-scale industrial energy structures. Weather elements that can influence hourly power load forecasting include barometric pressure, precipitation, wind speed, solar irradiance, and humidity. On days of maximum humidity, cooling apparatus must run for prolonged duty cycles to remove unneeded condensation from the conditioned air. Precipitation can reduce the air temperature, which in turn reduces the cooling load [5].
There are three further time factors that can influence electric loads: holidays, the day-to-day weekly cycle, and seasonal effects. The daily power load on a holiday is almost the same as that found on weekends, and during holidays the magnitude of the power load is smaller than on working days. The day-to-day weekly cycle refers to load patterns that repeat through each day and week, while seasonal effects account for long-term changes in load growth and weather patterns [7].
Economic factors depend on investment that expands the underlying infrastructure, such as new laboratories and buildings, which adds load to the power system.
All random disturbances other than the three factors mentioned above are included in random effects, which can disturb the energy load profile. These disturbances include unplanned large loads and employee absences, which make prediction difficult [5].
Load forecasting has gained importance in smart energy management systems in recent years, and the number of users relying on daily load forecasts is continually increasing. STLF is mostly used to manage load dispatch and to control the energy transfer schedule from thirty minutes up to an entire day. Any improvement in the precision and management of STLF therefore reduces the expenses of the electrical management system and enhances the efficiency of the energy network [8].
According to Gross and Galiana [5], STLF plays an important part in the creation of feasible, secure, dependable, economic, and consistent operating techniques for the electrical management network. STLF provides information about the latest weather prediction, the recent load forecasting strategy, and the random behavior of the power system [8]. Load forecasting has attracted growing research attention in recent years because of its importance and the increasing use of microgrids, smart grids, and renewable energy sources. Different strategies have been implemented for load forecasting and management, such as auto-regressive integrated moving average models, seasonal auto-regressive fractionally integrated moving average models, regression, and Kalman filtering. Al-Hamadi and Soliman [9] presented a load management model to meet the time-varying demand of users on an hourly basis; the Kalman filter was also applied to calculate the optimal load forecast on an hourly basis [8]. Song et al. [10] introduced another technique based on a fuzzy linear regression method built on load data.
In recent years, numerous other techniques have been applied to enhance the accuracy and efficiency of load forecast management in power systems, such as fuzzy logic and artificial intelligence (AI). These solutions are built on AI technologies to solve load forecasting issues. AI-based intelligent systems have gained importance, are becoming more widespread, and are being developed on a large scale; because of their explanation capability, flexibility, and symbolic reasoning, they are deployed worldwide in several applications. Ranaweera et al. presented a technique based on fuzzy rules, developed with a learning-type algorithm, that incorporates load management and historical weather data [11]. He et al. [12] proposed a novel method to quantify the unpredictability attached to the power load and to obtain future load information; a neural network was then used to construct a quantile regression framework for developing probabilistic forecasting techniques.
Recently, artificial intelligence (AI) based methods have been utilized for load forecasting, including smart grids and buildings [13], next-day load demands [14], load forecasting in distributed systems [15], and autoregressive ANNs with exogenous vector inputs [16]. Moreover, other AI-based methods, including fuzzy logic techniques [17], expert network structures [18], Bayesian neural systems [19], and support vector machines [20], are widely used to handle a number of forecasting issues. Although extensive research has been carried out, accurate and error-free STLF remains a challenge because load data are non-stationary and carry long-term dependencies across forecasting horizons. For this reason, the long short-term memory (LSTM) network is applied [21], a particular kind of recurrent neural network (RNN) structure [22], to resolve the STLF issue. LSTM functions effectively over long-horizon forecasting compared with other artificial intelligence techniques because it is driven by past load statistics that govern the outcome and the connections across the time series.
The researchers in [23] suggested a hybrid technique based on the wavelet transform and Bayesian neural networks (BNN) to obtain the load characteristics for training the BNN model. In this technique, a weighted sum of the BNN outputs is used to predict the load for a specific day. Fan et al. [24] introduced another hybrid design based on the combination of a Bi-Square Kernel (BSK) regression framework and the phase space reconstruction (PSR) method. In this approach, load statistics were reconstructed using the PSR method to obtain the developmental modes of the recorded load statistics and enhance prediction reliability [24]. The researchers in [25] proposed an hourly fuzzy logic controller for load forecasting depending on different conditional variables, e.g., random disturbances, historical load statistics, time, and climate. Finally, the developed hybrid model was successfully established by employing a combination of evolutionary algorithms and neural networks.
In another study, Metaxiotis et al. [26] provide a comparison of models based on CNNs and AI, where CNN models show distinctive performance in load forecasting. Fukushima [27] presented CNNs in a very simple form; LeCun et al. proposed the current form of CNNs with more advanced concepts, and there have since been many further extensions and improvements, such as batch normalization and the max-pooling layer [28]. More specifically, some strategies have been devised to simplify CNN structures, for instance the addition of pooling layers in the design [29]. In short, CNN is considered a potential candidate for load forecasting implementations as far as the control of over-fitting is concerned. On the other side, fuzzy time series (FTS) have been employed in pattern-acquisition-based techniques in numerous applications, including load forecasting. In 2005, Yu [30] proposed a novel kind of weighted FTS for stock market forecasting. In 2009, Wang et al. proposed another FTS technique that was applied to stock index and temperature forecasting [31]. In summary, many other relevant studies show that FTS has been widely used in load forecasting management systems, for example a hybrid dynamic and fuzzy time series model for mid-term power load forecasting [32], a new linguistic out-of-sample approach of fuzzy time series for daily load forecasting [33], and an imperialist competitive algorithm combined with refined high-order weighted fuzzy time series (RHWFTS-ICA) for short-term load forecasting [34].
AI-based machine learning and deep convolutional neural network methods have also been used successfully in many other areas involving complex physiological signals and image processing problems. Applications of AI-based methods include seizure detection using entropy-based methods [35] and machine learning methods [36], congestive heart failure detection by extracting multimodal features and employing machine learning techniques [37], Alzheimer's detection via machine learning [38], brain tumor detection based on hybrid features and machine learning techniques [39], arrhythmia detection [40], lung cancer detection by extracting refined fuzzy entropy [41], and prostate cancer detection based on deep learning [42] and machine learning [43] methods.
Deep learning methods such as CNN mostly improve prediction performance using big data and have advanced traditional computer vision tasks such as image classification. Recently, CNN has been used for both imaging and non-imaging data. There are many applications of 1D-CNNs to time series data, including electricity load forecasting [44], electricity load forecasting for each day of the week [45], hydrological time-series prediction [46], short-term load forecasting of the Romanian power system [47], and short-term wind power forecasting [48].
Figure 1 shows the schematic diagram of the electric load forecasting system for short-term and medium-term load forecasting. Short-term load forecasting (STLF) is used for planning power systems over horizons ranging from one hour up to one week; in this case, we computed the STLF for the next 24 hours, 72 hours, and one week. Medium-term load forecasting (MTLF) is used to plan maintenance and similar activities over horizons ranging from one day to a few months; in this case, we computed the load forecast for the next day (24 hours), 72 hours, one week, two weeks, and one month. After data preprocessing, we optimized and initialized robust neural network and deep learning models, namely the multilayer perceptron (MLP), LSTM, and CNN. Performance was computed on the test set using the standard error metrics R-squared, MAPE, MAE, MSE, and RMSE. Finally, the STLF and MTLF forecasts were computed and performance was evaluated in terms of the errors between actual and predicted load demands.
In this study, we optimized and employed robust machine learning and deep learning methods to predict load demands for STLF and MTLF. For the multilayer perceptron (MLP), we optimized the network by changing the number of neurons in the hidden layer; this number must be high enough to model the problem, but not so high that it causes overfitting. The iterative backpropagation algorithm was used for training, and the gradient descent algorithm was used in conjunction with backpropagation to minimize the cost function with respect to the connection weights.
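As a toy illustration of the gradient descent procedure just described, the following sketch fits a single linear weight by repeatedly stepping against the gradient of the squared-error cost. The data, learning rate, and epoch count are illustrative, not values from the study.

```python
# Minimal gradient-descent sketch: fit one weight w so that w * x
# approximates y by stepping against the gradient of the squared error.
# Data and hyper-parameters are illustrative, not from the paper.

def gradient_descent(xs, ys, lr=0.01, epochs=200):
    w = 0.0
    for _ in range(epochs):
        # dL/dw for L = (1/n) * sum((w*x - y)^2)
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # underlying relation y = 2x
w = gradient_descent(xs, ys)
```

With these values the weight converges to approximately 2.0, the slope of the underlying relation.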
The following methods were used to fine-tune the neural network parameters:
We created an LSTM model with one LSTM layer of 48 neurons and ReLU as the activation function. We then added a dense layer containing 24 neurons and a final output layer containing one neuron. Finally, we compiled the model with the Adam optimizer and trained it for 100 epochs with a batch size of 24.
For the Conv1D model, we first defined 48 filters with a kernel size of 2 and ReLU as the activation function. To reduce the complexity of the output and prevent overfitting, a max-pooling layer of size 2 was used after the convolutional layer. A dense layer containing 24 neurons was then added, followed by a final output layer containing one neuron. The model was compiled with the Adam optimizer and fit for 100 epochs with a batch size of 24 samples.
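The convolution, ReLU, and max-pooling stages described above can be sketched in plain Python. The filter weights here are illustrative, not learned values, and the kernel size and pool size of 2 match the configuration stated in the text.

```python
# Sketch of the Conv1D -> ReLU -> max-pool pipeline (kernel size 2,
# pool size 2). The kernel [1, -1] is an illustrative difference filter.

def conv1d(series, kernel):
    k = len(kernel)
    return [sum(series[i + j] * kernel[j] for j in range(k))
            for i in range(len(series) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def max_pool(xs, size=2):
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

load = [3.0, 5.0, 4.0, 6.0, 8.0, 7.0]   # stand-in hourly load values
feature_map = max_pool(relu(conv1d(load, kernel=[1.0, -1.0])))
```

Pooling halves the length of the feature map, which is how the layer reduces output complexity.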
The MLP model contained a single hidden layer with 48 neurons and ReLU as the activation function. As with the LSTM and Conv1D models, a dense layer with 24 neurons was added, followed by a final output layer containing a single neuron. Lastly, the model was fit for 100 epochs using the efficient Adam optimization algorithm and the mean squared error loss function.
The data were taken from the Al-Khwarizmi Institute of Technology, The University of Punjab, Lahore, from one of its projects on hourly electricity load demands, covering the complete year 2008 and July to December 2009 for one grid, as used in [4]. The data were taken from a feeder to fulfill the maximum load requirements of this study. Load time series were generated 24 hours, 72 hours, one week, and one month ahead for predicting short-term and medium-term load demands.
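Turning an hourly load series into supervised (window, target) pairs for h-hours-ahead forecasting can be sketched as below. The window length and horizon are illustrative choices, not the exact preprocessing of the study.

```python
# Hedged sketch: build (past-window, future-target) pairs for
# horizon-hours-ahead forecasting from an hourly load series.

def make_windows(series, window=24, horizon=24):
    pairs = []
    for i in range(len(series) - window - horizon + 1):
        x = series[i:i + window]              # past `window` hours
        y = series[i + window + horizon - 1]  # load `horizon` hours ahead
        pairs.append((x, y))
    return pairs

hourly_load = list(range(100))  # stand-in for real hourly demand data
pairs = make_windows(hourly_load, window=24, horizon=24)
```

Changing `horizon` to 72 or 168 produces the 72-hours-ahead and one-week-ahead series in the same way.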
Paul Werbos developed the MLP in 1974; it generalizes the simple perceptron in a non-linear fashion using the hyperbolic tangent activation function
F(x)=tanh(x) | (2.1) |
It has become one of the most popular neural networks for supervised learning. The multilayer perceptron consists of three parts: an input layer, an intermediate (hidden) part that may comprise one or more layers, and an output layer; information is transmitted in one direction, from the input layer towards the output layer. Through an iterative adjustment process comparing outputs with targets, the MLP adjusts the weights of the neural connections, seeking an optimal weight structure via the gradient backpropagation method. The network generally converges to a state where the calculation error is low [49].
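The one-directional forward pass just described can be sketched for a tiny one-hidden-layer MLP with the tanh activation of Eq (2.1). The weights here are illustrative, not trained values.

```python
import math

# Forward pass of a tiny MLP: input layer -> tanh hidden layer ->
# linear output, matching the three-part structure described above.

def mlp_forward(x, w_hidden, w_out):
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)))
              for row in w_hidden]
    return sum(wo * h for wo, h in zip(w_out, hidden))

x = [0.5, -0.2]                      # input layer (2 features)
w_hidden = [[1.0, 0.0], [0.0, 1.0]]  # 2 hidden neurons (illustrative)
w_out = [0.5, 0.5]                   # output layer weights

y = mlp_forward(x, w_hidden, w_out)
```

Training would then adjust `w_hidden` and `w_out` by backpropagating the output error, as described in the text.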
A local learning procedure is used to train the MLP [50]. For this purpose, a few samples are picked from the neighbourhood of a selected query point x* to train the MLP. The neighbourhood means the set of k nearest neighbours of the selected point within the training set Φ. The model is trained to fit the target function around the selected point x*, making it competent for this query only. The local complexity is lower than the global complexity of the target function, so an elementary MLP design with few hidden neurons, which learns fast, can be used. In [50], the researchers reported that a single neuron with a sigmoid activation function delivered better outcomes than systems with numerous neurons in the hidden layer. The MLP is implemented with the backpropagation algorithm and is considered one of the most popular and commonly used networks. The backpropagation training algorithm is a supervised learning algorithm and has been applied on a large scale to prediction problems such as pan evaporation (EP) prediction [51] and the prediction of annual gross electricity demand from socio-economic indicators and climate conditions [52]. The number of nearest neighbours (that is, learning points) is the only hyper-parameter tuned in the local learning procedure (LLP) method. The MLP learns using the Levenberg-Marquardt algorithm with Bayesian regularization [53], which minimizes a combination of squared errors and network weights to avoid overfitting. The optimization of the MLP is set out in the algorithm given below:
Figure 2 reflects the general architecture and working of the MLP algorithm: the input time series of normalized load demands, the hidden neurons and activation functions, the learning stage, and finally the error metrics computed between actual and predicted load demands.
Multilayer Perceptron Algorithm:
1: Start
2: Find the set Θ of l nearest neighbours of query pattern x* in Φ.
3: Do for each i∈Θ
3.1: Create the set Φ'=Φ∖{(xi,yi)}
3.2: Do for k=kmin to kmax
3.2.1: Find the set Φ" of k nearest neighbors of xi in Φ'
3.2.2: Learn MLP on Φ"
3.2.3: Test MLP on xi
3.3: Select the best value of k for xi
4: Calculate the optimal value of nearest neighbours kopt as the mean value of k selected in 3.3
5: Find the set Θ' of kopt nearest neighbors of x* in Φ
6: Learn MLP on Θ'
7: Test MLP on x*
8: Stop
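The neighbour-selection steps of the algorithm above (steps 2 and 5) can be sketched as follows: pick the k nearest training points to a query before fitting a small MLP on just that neighbourhood. The distance metric and data are illustrative.

```python
# Sketch of nearest-neighbour selection for local learning: rank the
# training pairs by distance from the query point and keep the k closest.

def k_nearest(train, query, k):
    # train: list of (x, y) pairs with scalar x; query: scalar x*
    ranked = sorted(train, key=lambda p: abs(p[0] - query))
    return ranked[:k]

train_set = [(1.0, 10.0), (2.0, 20.0), (5.0, 50.0), (9.0, 90.0)]
neighbors = k_nearest(train_set, query=2.4, k=2)
```

In the full procedure, an MLP would then be trained only on `neighbors`, and the loop over candidate k values would pick the best neighbourhood size.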
Deep learning is a sub-domain of artificial intelligence (AI) that works on strategies similar to those of machine learning (ML) and artificial neural networks (ANNs). Like a human brain, an ANN processes information using groups of neurons organized in layers. These neurons pass information to other neurons, some information is fed back to previous layers, and the processed information finally reaches the output layer as a classification or regression result. Deep learning extracts features from data automatically, and its methods improve prediction on complex problems [54].
LSTM is an artificial recurrent neural network (RNN) architecture used in the field of deep learning [55]. It is a popular and capable model for time series forecasting that can efficiently handle both long-term and short-term data dependencies. LSTM was designed to overcome the vanishing gradient problem of RNN architectures, which appears especially when dealing with long-term dependencies; this motivation led to the long short-term memory neural network. The LSTM framework adds an input gate, a forget gate, and an output gate to the neurons of a recurrent neural network architecture. This approach manages the vanishing gradient problem efficiently [56] and makes the LSTM structure more appropriate for long-term dependency problems. LSTM can learn long- and short-term dependencies from any given input sequence, which is why it is widely employed in time series prediction [57].
LSTM-based RNNs have been used in many applications, such as speech and language modelling, speech recognition, and the classification of neurocognitive performance by Greff et al. [58]. Nagabushanam et al. [59] employed LSTM to improve the classification of EEG signals. Senyurek et al. [60] employed LSTM to detect puffs in smoking from the temporal dynamics of sensor data. LSTM also improved gait recognition in neurodegenerative disease compared with older recurrent neural network (RNN) methods [59]. Conventional machine learning algorithms face the problem of gradient learning; LSTM, in contrast, solves this problem because it is based on an appropriate gradient-based learning algorithm that resolves error back-flow problems. Moreover, the LSTM algorithm copes with noisy or incompressible input sequences without losing short time-lag capabilities. LSTM also learns faster and more adaptively than other machine learning algorithms, and it is capable of solving very long time-lag tasks and complex problems that cannot be solved with conventional machine learning methods.
The hidden layers of LSTM are linear, but self-looping memory blocks allow the gradient to flow through long sequences. LSTM is composed of recurrent blocks termed memory blocks; each block contains recurrent memory cells and three multiplicative units, namely the input, output, and forget gates [61]. These cells allow memory blocks to store and access information over long periods, solving the vanishing gradient problem [62]. LSTM originally comprised only input and output gates; the forget gate was added by [63] to improve functionality by resetting memory cells. Figure 3 reflects the general architecture of the LSTM model.
The memory cell of LSTM is considered its major innovation; it serves as an accumulator that stores state information. In the first step, the forget gate is used to discard unnecessary information: a sigmoid operation computes the activation of the forget gate ft.
ft=σ(Wf.[ht−1,xt]+bf) | (2.2) |
The second step determines which new information should be stored in the cell state. Another sigmoid layer, known as the "input gate layer", decides which values to update. A tanh layer then creates a vector c̃t of candidate values that could be added to the state.
it=σ(Wi.[ht−1,xt]+bi) | (2.3) |
c̃t=tanh(Wc.[ht−1,xt]+bc) | (2.4) |
In the third step, the old cell state ct−1 is updated to the new cell state ct: the old state is multiplied by ft to discard the information marked for forgetting, and it∗c̃t is added as the scaled candidate values:
ct=ft∗ct−1+it∗c̃t | (2.5) |
In the last step, the output is decided, which involves two further stages: a sigmoid layer acts as the output gate, producing ot, which filters the cell state; the cell state is then passed through tanh(∙) and multiplied by ot to obtain the desired output.
ot=σ(Wo.[ht−1,xt]+bo) | (2.6) |
ht=ot∗tanh(ct) | (2.7) |
In equations (2.2)–(2.7), Wi, Wf, Wc and Wo denote the weight matrices, and bi, bf, bc and bo the bias vectors.
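One step of the LSTM cell defined by Eqs (2.2)–(2.7) can be sketched with scalar states; all weights and biases below are illustrative scalars, not trained values, and the concatenation [ht−1, xt] is simplified to a sum for readability.

```python
import math

# Scalar sketch of one LSTM cell step following Eqs (2.2)-(2.7):
# forget gate f_t, input gate i_t, candidate c~_t, cell state c_t,
# output gate o_t, and hidden state h_t.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x_t, h_prev, c_prev, w, b):
    z = h_prev + x_t                            # stand-in for [h_{t-1}, x_t]
    f_t = sigmoid(w["f"] * z + b["f"])          # Eq (2.2): forget gate
    i_t = sigmoid(w["i"] * z + b["i"])          # Eq (2.3): input gate
    c_tilde = math.tanh(w["c"] * z + b["c"])    # Eq (2.4): candidate values
    c_t = f_t * c_prev + i_t * c_tilde          # Eq (2.5): new cell state
    o_t = sigmoid(w["o"] * z + b["o"])          # Eq (2.6): output gate
    h_t = o_t * math.tanh(c_t)                  # Eq (2.7): hidden state
    return h_t, c_t

w = {"f": 0.5, "i": 0.5, "c": 0.5, "o": 0.5}   # illustrative weights
b = {"f": 0.0, "i": 0.0, "c": 0.0, "o": 0.0}   # illustrative biases
h_t, c_t = lstm_step(x_t=1.0, h_prev=0.0, c_prev=0.0, w=w, b=b)
```

Iterating `lstm_step` over a load sequence, feeding each h_t and c_t into the next step, is what lets the cell carry information across long horizons.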
CNN is also a class of ANN that has become dominant in many applications, including computer vision, signal and image processing, and electricity load forecasting. A CNN is designed to automatically and adaptively learn spatial hierarchies of features through backpropagation by using multiple building blocks, such as convolution layers, pooling layers, and fully connected layers. In the field of computer vision, artificial neural networks have improved on several classical techniques across various problems, and the convolutional neural network is the most common type of artificial neural network used for computer vision tasks such as image classification.
The scope of CNN is very wide and it has numerous applications. In the healthcare domain, where saving lives is the top priority, CNN can be used for health risk assessment. In radiology, CNN can be used for the efficient classification of certain diseases, the segmentation of organs or anatomical structures, and the detection of abnormalities in computed tomography (CT) or MRI images. Drug discovery is another major healthcare field with extensive use of CNNs. There are many other applications of CNN in computer vision, such as object localization, object detection and image segmentation [64], face recognition [65], video classification, depth estimation, and image captioning [66]. It is worth mentioning that the scope of CNN is not limited to image processing tasks; CNN has many other applications as well, such as natural language processing [67], day-ahead building-level load forecasts [68], anomaly detection [69], and time series stream forecasting [70].
This idea was then adopted by specialists in industrial areas, where the technology became popular. In 1998, LeCun et al. [71] proposed a design that improved on the architecture of [72]; the present convolutional neural network resembles LeCun's design. CNNs became widely known after winning the ILSVRC2012 competition [73].
In recent years, many other advanced strategies, such as artificial intelligence and fuzzy logic, have been implemented for load forecasting. Intelligent AI-based solutions for short-term load forecasting are becoming increasingly widespread, largely because of their symbolic reasoning, flexibility, and explanation capabilities. For instance, Ranaweera et al. suggested a technique that uses fuzzy rules to consolidate recorded climate and load information; these fuzzy rules were acquired from the historical data using a learning-type algorithm [11]. He et al. [12] introduced an architecture to evaluate the uncertainty associated with the electrical load and gain more information about the upcoming load, after which a neural network was used to build a quantile regression model for probabilistic forecasting. In one study, a hybrid forecasting technique combining the wavelet transform, a neural network, and an evolutionary algorithm was proposed [74] and applied to STLF. In a comparable approach, Ghofrani et al. [23] combined a Bayesian neural network (BNN) with a wavelet transform (WT) to derive definite load characteristics for BNN training; a weighted sum of the BNN outputs was then used to predict the load for a particular day. In another hybrid model, the STLF design of [24] combines a phase space reconstruction algorithm with a bi-square kernel (BSK) regression model.
In this structure, the load data are reconstructed by the phase space reconstruction algorithm to identify evolutionary patterns in the historical load data, and the embedded features are used to improve the reliability of the forecast. The BSK model, in turn, links the spatial structures between the regression points and their neighbouring points to capture the rotation rules and the disturbance in each dimension. The proposed multi-dimensional regression model was successfully implemented and applied to STLF [24]. Mamlook et al. [25] used a fuzzy-logic controller to predict, hour by hour, the effect of different conditioning variables, e.g., time, weather, historical load data, and random disturbances, on the load forecast through fuzzy sets. In the study of [74], the WT decomposed the time series into its components, and each component was then forecast by a combination of a neural network and an evolutionary algorithm.
Figure 4 shows the architecture of the CNN model. A 1D load time series was used as input to the CNN model; it was processed by the successive layers of the network, and the output was finally evaluated in terms of the error between the actual load demand and the predicted values.
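The paper does not give the exact filter weights or layer sizes, so as a minimal illustration of the core operation, a hypothetical single 1D convolutional filter sliding over a short load window can be sketched as follows (all values are invented for the example):

```python
import numpy as np

def conv1d_valid(x, kernel, bias=0.0):
    """Naive 'valid' 1D convolution (cross-correlation) of a load window
    with a kernel, as a single CNN filter would apply it."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) + bias
                     for i in range(len(x) - k + 1)])

# Hypothetical 8-hour load window (MW) and a 3-tap kernel.
window = np.array([310., 305., 298., 302., 340., 390., 420., 415.])
kernel = np.array([0.25, 0.5, 0.25])

# ReLU activation applied to the filter responses.
features = np.maximum(conv1d_valid(window, kernel), 0.0)
print(features.shape)  # one feature per valid position: (6,)
```

A real network stacks many such filters, followed by pooling and dense layers that map the extracted features to the forecast.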
The quality of the predictor was examined by quantitatively measuring its accuracy in terms of the root mean squared error (RMSE), the coefficient of determination (R2), the mean squared error (MSE), the mean absolute error (MAE), and the mean absolute percentage error (MAPE). These well-known error metrics, detailed in [75], are defined below. The RMSE is given by:
$RMSE=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i-y_i)^2}$ | (2.8) |
where $x_i$ and $y_i$ denote the measured and predicted values of the i-th sample, and $n$ denotes the total number of samples in the dataset. A smaller RMSE indicates a better predictor.
R2 is computed as follows:
$R^2=1-\frac{\sum_{i=1}^{n}(x_i-y_i)^2}{\sum_{i=1}^{n}(x_i-\bar{x})^2}$ | (2.9) |
Here $\bar{x}$ denotes the average of the measured values over all samples.
The MSE is computed as follows:
$MSE=\frac{1}{n}\sum_{i=1}^{n}(x_i-y_i)^2$ | (2.10) |
The MSE of an estimator measures the average of the squared errors or deviations. The MSE also corresponds to the second moment of the error and therefore incorporates both the variance and the bias of the estimator.
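The variance-and-bias statement above corresponds to the standard decomposition for an estimator $\hat{y}$ of a quantity $y$:

$$\mathrm{MSE}(\hat{y})=\mathbb{E}\big[(\hat{y}-y)^2\big]=\mathrm{Var}(\hat{y})+\big(\mathbb{E}[\hat{y}]-y\big)^2$$

so an estimator can have a small MSE only if both its variance and its bias are small.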
The MAE measures the average difference between two paired variables; if y and x denote the predicted and observed values, the MAE is calculated as:
$MAE=\frac{1}{n}\sum_{i=1}^{n}|y_i-x_i|$ | (2.11) |
The MAPE is computed using the following formula:
$MAPE=\frac{100}{n}\sum_{i=1}^{n}\left|\frac{y_i-x_i}{x_i}\right|$ | (2.12) |
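The five metrics defined in Eqs. (2.8)-(2.12) can be computed directly from a pair of measured/predicted series; a minimal NumPy sketch (the sample values are invented for illustration):

```python
import numpy as np

def forecast_metrics(x, y):
    """x: measured loads, y: predicted loads (equal-length sequences).
    Returns the five error measures used above (Eqs. 2.8-2.12)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    err = x - y
    mse  = np.mean(err ** 2)                                  # Eq. (2.10)
    rmse = np.sqrt(mse)                                       # Eq. (2.8)
    mae  = np.mean(np.abs(err))                               # Eq. (2.11)
    mape = 100.0 * np.mean(np.abs(err / x))                   # Eq. (2.12)
    r2   = 1.0 - err.dot(err) / np.sum((x - x.mean()) ** 2)   # Eq. (2.9)
    return {"RMSE": rmse, "R2": r2, "MSE": mse, "MAE": mae, "MAPE": mape}

# Hypothetical measured vs. predicted hourly loads (MW).
m = forecast_metrics([300, 320, 350, 400], [310, 315, 340, 390])
print(m["MAE"])  # (10 + 5 + 10 + 10) / 4 = 8.75
```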
In this study, we applied machine learning and deep learning methods, namely MLP, LSTM, and CNN, to load time series data from January 01, 2008 to December 31, 2009. The performance was evaluated in terms of R2, MAPE, MAE, MSE, and RMSE. Using the proposed methods, we computed forecasts for the next 24 hours, 72 hours, one week, two weeks, and one month.
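The paper does not detail how the hourly series was framed into training samples for the different horizons; one common scheme, shown here as an assumed sketch with a synthetic series, slides a fixed-length history window over the data and pairs it with the following horizon of values (24 hours for day-ahead, 72 for three days, 168 for week-ahead, and so on):

```python
import numpy as np

def make_windows(series, input_len=24, horizon=24):
    """Build (history, future) pairs from an hourly load series:
    each sample uses `input_len` past hours as input and the next
    `horizon` hours as the forecasting target."""
    X, Y = [], []
    for t in range(len(series) - input_len - horizon + 1):
        X.append(series[t:t + input_len])
        Y.append(series[t + input_len:t + input_len + horizon])
    return np.array(X), np.array(Y)

# Synthetic month of hourly load with a daily cycle (30 days x 24 h).
hourly = np.sin(np.arange(24 * 30) * 2 * np.pi / 24)
X, Y = make_windows(hourly, input_len=24, horizon=24)
print(X.shape, Y.shape)  # (673, 24) (673, 24)
```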
The distance between the actual and predicted values is computed. If the differences between the observed and predicted values are small and unbiased, the model fits the data well. Statistically, goodness of fit is also assessed with residual plots, which can reveal unwanted residual patterns that indicate biased results more effectively than summary numbers. R-squared, also called the coefficient of determination, is a statistical measure of how closely the data are fitted.
Table 1 reports the next-24-hours-ahead load forecasting results. In terms of R2, the best prediction was obtained with MLP (R2 = 0.6217), followed by CNN (0.5462) and LSTM (0.5160). In terms of MAPE, the best 24-hours-ahead prediction was obtained with MLP (4.97), followed by LSTM (5.17) and CNN (5.62). In terms of MAE, the best prediction was obtained with MLP (104.33), followed by LSTM (109.2) and CNN (115.62). In terms of MSE, the best prediction was obtained with MLP (17936.03), followed by CNN (21515.55) and LSTM (22947.59). Likewise, in terms of RMSE, the best 24-hours-ahead prediction was obtained with MLP (133.92), followed by CNN (146.68) and LSTM (151.48).
Method | R2 | MAPE | MAE | MSE | RMSE |
MLP | 0.6217 | 4.97 | 104.33 | 17936.03 | 133.92 |
LSTM | 0.5160 | 5.17 | 109.2 | 22947.59 | 151.48 |
CNN | 0.5462 | 5.62 | 115.62 | 21515.55 | 146.68 |
Figure 5 shows the 24-hours-ahead load forecasts obtained by the three methods (MLP, LSTM, and CNN). Among the plotted curves, the MLP forecast is closest to the actual load curve, followed by LSTM and CNN. The corresponding error values are reported in Table 1.
Table 2 reports the next-72-hours-ahead load forecasting results. In terms of R2, the best prediction was obtained with MLP (R2 = 0.7588), followed by CNN (0.7176) and LSTM (0.7153). In terms of MAPE, the best 72-hours-ahead prediction was obtained with MLP (7.04), followed by CNN (7.44) and LSTM (8.07). In terms of MAE, the best prediction was obtained with MLP (125.92), followed by CNN (140.21) and LSTM (144.84). In terms of MSE, the best prediction was obtained with MLP (35393.93), followed by CNN (41451.11) and LSTM (41786.91). Likewise, in terms of RMSE, the best 72-hours-ahead prediction was obtained with MLP (188.13), followed by CNN (203.59) and LSTM (204.42).
Method | R2 | MAPE | MAE | MSE | RMSE |
MLP | 0.7588 | 7.04 | 125.92 | 35393.93 | 188.13 |
LSTM | 0.7153 | 8.07 | 144.84 | 41786.91 | 204.42 |
CNN | 0.7176 | 7.44 | 140.21 | 41451.11 | 203.59 |
Figure 6 shows the 72-hours-ahead load forecasts obtained by the three methods (MLP, LSTM, and CNN). Consistent with the error values reported in Table 2, the MLP forecast tracks the actual load curve most closely, followed by CNN and LSTM.
Table 3 reports the next-one-week-ahead load forecasting results. In terms of R2, the best prediction was obtained with MLP (R2 = 0.8879), followed by LSTM (0.8814) and CNN (0.7616). In terms of MAPE, the best one-week-ahead prediction was obtained with MLP (6.162), followed by LSTM (6.74) and CNN (9.79). In terms of MAE, the best prediction was obtained with MLP (103.156), followed by LSTM (107.13) and CNN (158.27). In terms of MSE, the best prediction was obtained with MLP (22746.21), followed by LSTM (24060.60) and CNN (48390.42). Likewise, in terms of RMSE, the best one-week-ahead prediction was obtained with MLP (150.81), followed by LSTM (155.11) and CNN (219.97).
Method | R2 | MAPE | MAE | MSE | RMSE |
MLP | 0.8879 | 6.162 | 103.156 | 22746.21 | 150.81 |
LSTM | 0.8814 | 6.74 | 107.13 | 24060.60 | 155.11 |
CNN | 0.7616 | 9.79 | 158.27 | 48390.42 | 219.97 |
Figure 7 shows the one-week-ahead load forecasts obtained by the three methods (MLP, LSTM, and CNN). Consistent with the error values reported in Table 3, the MLP and LSTM forecasts are closest to the actual load curve, followed by CNN.
Table 4 reports the next-two-weeks-ahead (15 days) load forecasting results. In terms of R2, the best prediction was obtained with LSTM (R2 = 0.89), followed by MLP (0.87) and CNN (0.76). In terms of MAPE, the best two-weeks-ahead prediction was obtained with LSTM (5.44), followed by MLP (5.54) and CNN (8.67). In terms of MAE, the best prediction was obtained with LSTM (93.67), followed by MLP (99.50) and CNN (141.89). In terms of MSE, the best prediction was obtained with LSTM (17395.14), followed by MLP (19963.23) and CNN (36938.71). Likewise, in terms of RMSE, the best two-weeks-ahead prediction was obtained with LSTM (131.89), followed by MLP (141.29) and CNN (192.19).
Method | R2 | MAPE | MAE | MSE | RMSE |
MLP | 0.87 | 5.54 | 99.50 | 19963.23 | 141.29 |
LSTM | 0.89 | 5.44 | 93.67 | 17395.14 | 131.89 |
CNN | 0.76 | 8.67 | 141.89 | 36938.71 | 192.19 |
Figure 8 shows the two-weeks-ahead load forecasts obtained by the three methods (MLP, LSTM, and CNN). Consistent with the error values reported in Table 4, the LSTM forecast is closest to the actual load curve, followed by MLP and CNN.
Table 5 reports the next-one-month-ahead load forecasting results. In terms of R2, the best predictions were obtained with MLP and LSTM (R2 = 0.92 each), followed by CNN (0.90). In terms of MAPE, the best one-month-ahead prediction was obtained with MLP (4.23), followed by LSTM (4.38) and CNN (5.1). In terms of MAE, the best prediction was obtained with MLP (71.26), followed by LSTM (75.12) and CNN (84.95). In terms of MSE, the best prediction was obtained with MLP (11017.13), followed by LSTM (11258.74) and CNN (14584.13). Likewise, in terms of RMSE, the best one-month-ahead prediction was obtained with MLP (104.96), followed by LSTM (106.10) and CNN (120.76).
Method | R2 | MAPE | MAE | MSE | RMSE |
MLP | 0.92 | 4.23 | 71.26 | 11017.13 | 104.96 |
LSTM | 0.92 | 4.38 | 75.12 | 11258.74 | 106.10 |
CNN | 0.90 | 5.10 | 84.95 | 14584.13 | 120.76 |
Figure 9 shows the one-month-ahead load forecasts obtained by the three methods (MLP, LSTM, and CNN). Consistent with the error values reported in Table 5, the MLP and LSTM forecasts are closest to the actual load curve, followed by CNN.
Accurate load forecasting can help mitigate the impact of integrating renewable energy into the network, help power plants schedule unit maintenance, and enable power-brokerage companies to develop reasonable electricity pricing plans. Many activities within the power system, such as generator maintenance scheduling, renewable energy integration, and even investment in power plants and grids, depend on load forecasting. In the electricity market, regulators monitor activities based on the forecast load and generation, and customers and power brokers decide their action strategies accordingly.
Convolutional Neural Networks (CNNs) are used extensively in forecasting. CNNs can capture local pattern characteristics, along with scale-invariant characteristics, when nearby observations have strong relationships with one another [76]. The locally ordered course of the load data in adjacent hours can therefore be extracted by a CNN. In [77], a load forecasting design based on a CNN is presented and compared with other neural networks; the reported MAPE and CV-RMSE of the proposed algorithm are 9.77% and 11.66%, respectively, the smallest among all the compared designs. These experiments demonstrate that the CNN structure is well suited to load forecasting and that hidden features can be extracted by the designed 1D convolution layers. In light of the above, both LSTM and CNN models have been shown to give highly accurate STLF predictions because of their ability to capture hidden features. It is therefore desirable to build a hybrid neural architecture that can capture and integrate these different hidden traits to deliver effective performance. More specifically, such an architecture comprises three parts: an LSTM module, a CNN module, and a feature-fusion module. The LSTM module learns useful long-term information through its forget gate and memory cell, while the CNN module extracts local patterns and similar motifs that appear in different regions of the series. The feature-fusion module integrates these hidden components and makes the final forecast. The proposed CNN-LSTM framework was built and applied to forecast a real-world electric load time series, and several baseline strategies were implemented for comparison. To demonstrate the validity of the proposed model, the CNN and LSTM modules were also trained separately.
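At a shape level, the feature-fusion step described above amounts to concatenating the two branches' feature vectors and mapping them to the forecast with a dense layer. The sketch below uses invented dimensions and untrained random weights purely to illustrate the data flow, not the paper's actual trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors produced by the two branches for one sample:
lstm_feat = rng.standard_normal(32)  # long-range temporal features (LSTM module)
cnn_feat  = rng.standard_normal(16)  # local-pattern features (CNN module)

# Feature-fusion module: concatenate, then one dense layer to the forecast.
fused = np.concatenate([lstm_feat, cnn_feat])     # shape (48,)
W = rng.standard_normal((24, fused.size)) * 0.01  # untrained weight matrix
b = np.zeros(24)
day_ahead_forecast = W @ fused + b                # 24 hourly load values
print(day_ahead_forecast.shape)  # (24,)
```

In a trained model, W and b (and both branches) would of course be learned jointly by backpropagation.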
Moreover, the data record was divided into several segments to test the robustness of the proposed model. In summary, this paper proposes a deep learning structure that can effectively capture and integrate the hidden characteristics of the LSTM and CNN models to achieve better accuracy and robustness.
In this study, we computed ahead forecasts for one day, 72 hours, one week, two weeks, and one month by applying MLP, LSTM, and CNN. The performance was computed in terms of R2, MAPE, MAE, MSE, and RMSE. We optimized the parameters of these algorithms to improve the ahead STLF and MTLF performance, as reflected in Table 6. Comparing the results of our study with previous findings in terms of MAPE for one day, 72 hours, one week, two weeks, and one month reveals that our proposed models with parameter optimization give the best ahead forecasting performance.
Method | Day-ahead | 72 hrs ahead | Week-ahead | 15 days ahead | Month ahead |
RDNN [78] | 5.66% | - | 7.55% | - | - |
SOM-SVM [79] | 6.05% | - | 8.03% | - | - |
Capula DBN [80] | 5.63% | - | 7.26% | - | - |
LSTM [81] | - | 9.34% | - | - | - |
ANN [82] | - | 12.96% | - | - | - |
NP-ARMA [83] | - | - | - | - | 4.83% |
CNN [84] | - | - | - | - | 5.08% |
LSTM [85] | - | - | - | - | 4.83% |
XGBoost [86] | - | - | - | - | 10.08% |
This Work | 4.97% | 6.34% | 6.16% | 5.44% | 4.23% |
In this study, we took load data from a feeder and computed load forecasts for the next 24 hours, 72 hours, one week, two weeks, and one month, covering both short-term and medium-term load demands. We optimized the AI algorithms, namely MLP, LSTM, and CNN, to improve the forecasting performance, and we measured that performance with robust error metrics: R-squared, MAPE, MAE, MSE, and RMSE. A small, unbiased error between the observed and predicted values at each forecasting horizon indicates that the model fits the data well, and the R2 statistic indicates how closely the data are fitted. For STLF, the MLP and LSTM give the better forecasts; for MTLF, the CNN and LSTM forecast better. This indicates that as data demands grow, deep learning models with more neurons and optimized activation functions provide better predictions. The results show that the proposed approach, with optimized deep learning models, yields good predictions for short-term and medium-term forecasting, which suggests that power systems, with their complexity, growth, and the many other factors influencing power generation, consumption, and planning, can be better predicted using this approach.
Various factors affect load growth, such as peak hours of the day, during which the demand for electricity increases, and environmental factors that influence energy demand. In the present work, we computed ahead load demands for one grid using the proposed models over January 2008 to December 2008 and July to December 2009. In future work, we will consider each of these factors separately to examine its effect on load growth.
The authors declare no conflict of interest in this paper.
| Method | R² | MAPE (%) | MAE | MSE | RMSE |
| --- | --- | --- | --- | --- | --- |
| MLP | 0.6217 | 4.97 | 104.33 | 17936.03 | 133.92 |
| LSTM | 0.5160 | 5.17 | 109.20 | 22947.59 | 151.48 |
| CNN | 0.5462 | 5.62 | 115.62 | 21515.55 | 146.68 |

| Method | R² | MAPE (%) | MAE | MSE | RMSE |
| --- | --- | --- | --- | --- | --- |
| MLP | 0.7588 | 7.04 | 125.92 | 35393.93 | 188.13 |
| LSTM | 0.7153 | 8.07 | 144.84 | 41786.91 | 204.42 |
| CNN | 0.7176 | 7.44 | 140.21 | 41451.11 | 203.59 |

| Method | R² | MAPE (%) | MAE | MSE | RMSE |
| --- | --- | --- | --- | --- | --- |
| MLP | 0.8879 | 6.162 | 103.156 | 22746.21 | 150.81 |
| LSTM | 0.8814 | 6.74 | 107.13 | 24060.60 | 155.11 |
| CNN | 0.7616 | 9.79 | 158.27 | 48390.42 | 219.97 |

| Method | R² | MAPE (%) | MAE | MSE | RMSE |
| --- | --- | --- | --- | --- | --- |
| MLP | 0.87 | 5.54 | 99.50 | 19963.23 | 141.29 |
| LSTM | 0.89 | 5.44 | 93.67 | 17395.14 | 131.89 |
| CNN | 0.76 | 8.67 | 141.89 | 36938.71 | 192.19 |

| Method | R² | MAPE (%) | MAE | MSE | RMSE |
| --- | --- | --- | --- | --- | --- |
| MLP | 0.92 | 4.23 | 71.26 | 11017.13 | 104.96 |
| LSTM | 0.92 | 4.38 | 75.12 | 11258.74 | 106.10 |
| CNN | 0.90 | 5.10 | 84.95 | 14584.13 | 120.76 |
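The error measures reported in the tables above (R², MAPE, MAE, MSE, RMSE) follow their standard definitions. A minimal sketch of how they can be computed from paired actual and predicted series, assuming NumPy arrays and non-zero actual values (the function name `regression_metrics` is illustrative, not from the original):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute R^2, MAPE (%), MAE, MSE, and RMSE for a forecast."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residuals = y_true - y_pred

    mse = np.mean(residuals ** 2)            # mean squared error
    rmse = np.sqrt(mse)                      # root mean squared error
    mae = np.mean(np.abs(residuals))         # mean absolute error
    # MAPE assumes no zeros in y_true (e.g., load is strictly positive)
    mape = 100.0 * np.mean(np.abs(residuals / y_true))
    # R^2 = 1 - SS_res / SS_tot
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    r2 = 1.0 - ss_res / ss_tot

    return {"R2": r2, "MAPE": mape, "MAE": mae, "MSE": mse, "RMSE": rmse}
```

Because MSE (and hence RMSE) is scale dependent while MAPE is relative, rankings of the three models can differ between the two metrics, as the tables show.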
| Method | Day-ahead | 72 hrs ahead | Week-ahead | 15 days ahead | Month-ahead |
| --- | --- | --- | --- | --- | --- |
| RDNN [78] | 5.66% | - | 7.55% | - | - |
| SOM-SVM [79] | 6.05% | - | 8.03% | - | - |
| Copula DBN [80] | 5.63% | - | 7.26% | - | - |
| LSTM [81] | - | 9.34% | - | - | - |
| ANN [82] | - | 12.96% | - | - | - |
| NP-ARMA [83] | - | - | - | - | 4.83% |
| CNN [84] | - | - | - | - | 5.08% |
| LSTM [85] | - | - | - | - | 4.83% |
| XGBoost [86] | - | - | - | - | 10.08% |
| This Work | 4.97% | 6.34% | 6.16% | 5.44% | 4.23% |