Research article

Scenario-based financial planning: the case of Ukrainian railways

  • The crisis in the global and national economies negatively affects the predictability of companies' performance. In such conditions, it becomes impossible to forecast and plan correctly using traditional methods and models. The aim of this study is therefore to develop an approach to forecasting and planning under the high uncertainty in which companies operate. The methodological basis of the study is evolutionary-institutional management, which makes it possible to view the company's development as a complex evolutionary process. A scenario approach is used as the basis for forecasting, as it allows the dynamics of any company in an unstable environment to be studied. The study comprises three blocks: scenario development, scenario forecasting, and financial planning. It is conducted using data on the operations of Ukrainian Railways, JSC, the operator of the rail infrastructure and the national carrier of goods and passengers in Ukraine. Scenarios are developed with regard to the specifics of rail companies' products and are based on the factors (key uncertainties) that determine the effectiveness of their operations. Scenario forecasting takes into account the peculiarities of the company's production process, its cash cycle, and the formation of its financial resources. As a result, forecast estimates are obtained for three scenarios (optimistic, pessimistic, and negative). The financial planning model is developed as a system of interrelated cash flow planning models (operational, investment, and financial). This approach makes it possible to plan the company's sources of funds and their use while accounting for possible changes in its external and internal environment and, as a result, to ensure its stable functioning under any scenario.

    Citation: Olha Kravchenko, Nadiia Bohomolova, Oksana Karpenko, Maryna Savchenko, Nataliia Bondar. Scenario-based financial planning: the case of Ukrainian railways[J]. National Accounting Review, 2020, 2(3): 217-248. doi: 10.3934/NAR.2020013



    It is a significant task to identify the authenticity of an image in several scenarios, such as the media industry, digital image forensics and academic appraisal. It is important to know whether an image has been tampered with, because we need to establish whether it can serve as valid evidence in a case or as the genuine result of an experiment. Tampering operations include, but are not limited to, copy-paste, blurring and scale transformation. We want to make a fast and reliable preliminary judgment on tampering, and the widely used JPEG compression algorithm gives us a good opportunity: we can design a detection algorithm based on it. Identification of JPEG compression history has received increasing attention in recent years. When an image has been compressed by JPEG but is then saved in bitmap format, we cannot access the JPEG file header that contains the compression information, yet we may still need to know the image's compression history.

    Among all lossy compression algorithms, JPEG (Joint Photographic Experts Group) is one of the most popular and widely used standards. Almost all software offers JPEG compression when saving digital images. Sometimes images that have been JPEG compressed are later saved as bitmaps. In that case we cannot tell directly from the file whether the image was compressed, because we no longer have access to the JPEG file header once the image has been saved as a bitmap.

    However, this information may be crucial in some cases, for instance in the field of digital image forensics. If the JPEG compression history is exposed, we can make a preliminary judgment that the image may have been tampered with. Thus, methods for detecting the compression history of bitmaps have become an important issue and have received widespread attention.

    Many efforts have been made in this area, and many good results have been achieved. Most of these works are related to JPEG coefficients, JPEG quantization tables, the DCT transform and wavelet transforms. Based on these, different approaches have been proposed.

    Thanh et al. [1] proposed a method based on combining the quantization effect with the statistics of discrete cosine transform coefficients characterized by a statistical model. Hernandez et al. [2] proposed a method that avoids giving false results: when their method cannot recover a quantization table, the bitmap is either uncompressed or was not compressed with the JPEG algorithm. These methods revealed characteristics of JPEG coefficients that are very useful for further work in this area.

    There are also JPEG history detection methods that do not need to estimate the quantization table. Fan et al. [3] proposed a detection method based on block artifacts in the pixel domain, since pixel values across block boundaries should be consistent in an uncompressed image, unlike in a previously compressed one. However, Fan's method [3] has relatively high computational complexity. Yang et al. [4] used the factor histogram to detect the JPEG compression history of bitmaps: for uncompressed bitmaps the histogram values decrease as the bin index increases, while no obvious decrease is found in decompressed JPEG images. But Yang's method [4] suffers a sudden drop in accuracy when the compression quality factor is high, because the block artifacts are then no longer obvious; when the quality factor is 98 or higher, the accuracy can fall below 50%. Zhang et al. [5] found that the tetrolet transform proposed by Krommweh et al. [6] can be used to exploit the structure of images. The tetrolet transform is a kind of Haar wavelet transform that uses at most 117 different tetromino configurations to decompose images. The authors proposed a detection method based on the tetrolet transform to distinguish uncompressed bitmap images from decoded JPEG images. As far as we know, Zhang's method [5] has had the highest accuracy until now.

    Because JPEG is a lossy compression algorithm, the compressed image loses some information during compression. As proposed in [7], the number of zeros among the JPEG coefficients is a major factor affecting the compression quality of JPEG. For the same bitmap image, the image quality improves as the JPEG compression quality factor increases, while the percentage of zeros among the 64 JPEG coefficient positions decreases. We present a method based on this observation.

    In this paper, we propose a fast and reliable method for detecting the compression history of bitmaps based on image information loss. Our method is faster than most existing methods because we do not need to compress the test image during processing. Many proposed methods contain a compression step because they need a comparison version of the image to obtain results. Instead of producing a compressed image with quality factor 100 as in [5], we obtain an estimated original image created from the test image, which costs much less time than compression. Extensive experimental results demonstrate that the proposed method outperforms the state of the art with respect to both detection accuracy and computational complexity. The accuracy of our method is high, especially when the quality factor of the test image is below 97; even at quality factors as high as 98 and 99 it still gives acceptable results. Moreover, the proposed method works whether the test image was compressed with a standard or a non-standard JPEG quantization table: as long as the image was JPEG compressed, our detection is effective.

    The remainder of the paper is organized as follows. In Section 2, we introduce the relationship between the JPEG coefficients and the image information loss caused by JPEG compression, and describe the method for creating the estimated original image. The framework and details of the algorithm are presented in Section 3. In Section 4, the experimental results are shown and discussed, and conclusions are drawn in Section 5.

    In this paper, the quality factor Q is an important parameter that determines the quality of a JPEG image, and the DCT coefficients after quantization are called the JPEG coefficients, which can be read directly from the JPEG image file. The number of zeros among the JPEG coefficients is a major factor affecting the compression quality of JPEG images. Through extensive experiments, we find that the proportion of zero JPEG coefficients at the 64 DCT positions shows a downward trend as the image compression quality factor increases. In other words, for the same bitmap image, the higher the compression quality, the smaller the image information loss and the lower the percentage of zeros among the 64 JPEG coefficient positions. The percentage of zero JPEG coefficients at the different frequencies can therefore be used as an index of the amount of information lost when a bitmap is JPEG compressed.

    When an image is compressed by JPEG, it is first divided into 8 × 8 blocks, and the DCT is applied to each block separately. Each block has 64 coefficient positions. The first step of our method is to count the zeros at each of the 64 positions across all blocks. Let n(j) denote the total number of zeros at the jth position and m the number of blocks. The amount of image information loss at the 64 DCT positions can then be expressed as:

    p(j) = n(j) / m,  j = 1, 2, ..., 64  (2.1)

    and the average image information loss is expressed as:

    average_loss = (Σ_{j=1}^{64} p(j)) / 64  (2.2)
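As a concrete illustration, Eqs (2.1) and (2.2) can be computed with a few lines of numpy. This is our own sketch, and it assumes the quantized JPEG coefficients are already available as an (m, 8, 8) array (reading them from a file would require a codec-level library):

```python
import numpy as np

def information_loss(jpeg_coeffs):
    """Per-position zero fraction p(j) (Eq 2.1) and its mean (Eq 2.2).

    jpeg_coeffs: array of shape (m, 8, 8) holding the quantized DCT
    (JPEG) coefficients of the m blocks of an image."""
    m = jpeg_coeffs.shape[0]
    flat = jpeg_coeffs.reshape(m, 64)          # one row per block
    n = np.count_nonzero(flat == 0, axis=0)    # n(j): zeros at position j
    p = n / m                                  # p(j) = n(j) / m
    return p, p.mean()                         # average loss over 64 positions

# Toy example: 2 blocks, one all zeros, one all nonzero.
blocks = np.stack([np.zeros((8, 8)), np.ones((8, 8))])
p, avg = information_loss(blocks)
print(p[0], avg)   # 0.5 0.5
```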

    Figure 1 illustrates the average information loss of a bitmap after compression into JPEG images with quality factors varying from 60 to 100. The average image information loss decreases as the quality factor Q grows.

    Figure 1.  The curve of average information loss with the increase of quality factor Q.

    Based on this observation, we obtain JPEG images from an uncompressed image Ibmp with different quality factors, and then decode these JPEG images into decompressed bitmap images. The uncompressed image and the decompressed images are JPEG compressed with quality factor 100 to obtain IJPEG1 and IJPEG2 respectively. IJPEG1 is thus a single JPEG compressed image, while IJPEG2 has undergone double JPEG compression. We can compare the amount of image information loss between IJPEG1 and IJPEG2 to make a preliminary judgment on compression: a larger difference between IJPEG1 and IJPEG2 indicates greater information loss.

    Note, however, that this example assumes we have the original lossless image and then compress it, making the judgment by comparison. In a real case the original lossless image is usually unavailable, so we must first estimate it.

    As proposed in [8,9], an image is divided into blocks when it undergoes JPEG compression, and these blocks are processed separately. To shrink the file size, some information loss is tolerated during quantization: certain frequency components are discarded, especially high-frequency harmonics that cause little or no perceptible change to the human visual system (HVS). These signals are redundant to the HVS, yet they carry a lot of information, which is why they are important for compression detection. An image that has not been compressed, or has been compressed only with a relatively high quality factor, retains more of this information, i.e., a range of signals at different frequencies. A compressed image, by contrast, has normally lost a considerable number of harmonics: most high-frequency components are set to zero, as are low-frequency components that are small enough. What has been discarded cannot be recovered exactly because JPEG compression is lossy, but it is still possible to estimate the information that existed in the original of the test image. During JPEG compression, the DCT and quantization are applied to each block rather than to the full image, as discussed in [10]. So even though those harmonics are lost within each separate 8 × 8 block, they still exist across the full-size image. To expose this information, we need to break the existing block artifacts. A method widely used in image steganalysis [11], cutting 4 rows and 4 columns of the test image, is employed to estimate the counterpart of the original image.

    Removing the top 4 rows and left 4 columns has been shown to be an excellent way to estimate a counterpart of the original image, in the sense of similar statistical features, because the cut destroys the block-based structure of JPEG. The row and column cutting is illustrated in Figure 2.
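The cropping itself is trivial; a minimal sketch (our own, using numpy indexing) that removes the top 4 rows and left 4 columns and thereby misaligns any existing 8 × 8 JPEG block grid:

```python
import numpy as np

def estimate_original(img):
    """Estimate the counterpart of the original image by dropping the
    top 4 rows and left 4 columns, destroying the 8x8 block alignment."""
    return img[4:, 4:]

test_img = np.arange(64 * 64).reshape(64, 64)
print(estimate_original(test_img).shape)   # (60, 60)
```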

    Figure 2.  Original image estimation.

    Based on the image information loss, we propose a novel algorithm to detect the JPEG compression history, as illustrated in Figure 3. The idea of extracting features from the JPEG file is based on [12].

    Figure 3.  The framework of algorithm based on image information loss.

    The whole process is as follows:

    ⅰ. To obtain IJPEG1, the test bitmap image is JPEG compressed with quality factor Q = 100.

    ⅱ. The counterpart of the original image is estimated by cutting 4 rows and 4 columns from the test image. The IJPEG2 is acquired by compressing the counterpart with quality factor Q = 100 as well.

    ⅲ. The features related to the image information loss are extracted from the two JPEG images, and then fed into the classifier to detect whether the test bitmap image has been compressed.
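The three steps might be sketched as below. Reading true quantized JPEG coefficients requires a codec-level library, so as a stand-in for "compress at Q = 100 and read the coefficients" we use a blockwise 8 × 8 DCT followed by rounding (at Q = 100 the quantization steps are close to 1); all function names here are our own.

```python
import numpy as np
from scipy.fft import dctn

def q100_coeffs(img):
    """Approximate JPEG coefficients at Q = 100: blockwise DCT + rounding."""
    h, w = img.shape[0] - img.shape[0] % 8, img.shape[1] - img.shape[1] % 8
    blocks = (img[:h, :w].reshape(h // 8, 8, w // 8, 8)
                         .swapaxes(1, 2).reshape(-1, 8, 8))
    return np.round(np.stack([dctn(b, norm='ortho') for b in blocks]))

def loss_indexes(coeffs):
    """Fraction of zeros at each of the 64 DCT positions (step iii input)."""
    return np.count_nonzero(coeffs.reshape(-1, 64) == 0, axis=0) / len(coeffs)

def compression_features(img):
    """Steps i-iii: indexes for I_JPEG1 (test image) and I_JPEG2 (estimate)."""
    return loss_indexes(q100_coeffs(img)), loss_indexes(q100_coeffs(img[4:, 4:]))

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
p1, p2 = compression_features(img)
print(p1.shape, p2.shape)   # (64,) (64,)
```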

    Consider the test image as a decompressed JPEG image, as Figure 4 illustrates. IJPEG1 then actually undergoes double JPEG compression, with an unknown previous quality factor and a second quality factor of 100. Because the counterpart is estimated by cutting rows and columns, IJPEG2 can be considered a single JPEG compressed image with quality factor 100. Since the percentages of zero JPEG coefficients at the 64 DCT positions are defined as the indexes of information loss after the bitmap is JPEG compressed, there are disparities between the corresponding indexes of IJPEG1 and IJPEG2 at the 64 DCT positions, as shown in Figure 5. A higher information loss is expected for a JPEG compressed test image. On the contrary, if the test image is uncompressed, there should be no obvious differences between the indexes at corresponding positions, as shown in Figure 6.

    Figure 4.  The original decompressed image.
    Figure 5.  The comparison of testing image and estimated original image in the case that the test image is decompressed from the JPEG image with quality factor Q = 90.
    Figure 6.  The comparison of testing image and estimated original image in the case that the test image is uncompressed.

    Let p1(j) denote the indexes of the image information loss of IJPEG1, and p2(j) those of IJPEG2. We then describe the difference in information loss as

    pdif(j) = p1(j) / p2(j),  j = 1, 2, ..., 64  (3.1)
    pdif_average = (Σ_{j=1}^{64} pdif(j)) / 64  (3.2)

    pdif_average indicates how much detail is found in the estimated counterpart compared with the test image. If the test image is uncompressed, pdif_average will be close to 1, meaning there is no obvious difference between the test image and the estimated original. If the test image was compressed, this value will be much greater than 1, meaning a bias is observed between IJPEG1 and IJPEG2 once the 8 × 8 blocks in the test image are broken by cutting. After extracting this feature from the images, an SVM classifier is trained, and we can then detect the JPEG compression history of bitmaps with this model.
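Under our reading of Eqs (3.1)-(3.2) as a per-position ratio (which matches the "close to 1 when uncompressed" behavior described above), the feature can be sketched as:

```python
import numpy as np

def loss_ratio_feature(p1, p2, eps=1e-9):
    """p_dif(j) = p1(j) / p2(j) and its average over the 64 positions.
    eps guards against division by zero at positions with no zeros."""
    p_dif = p1 / np.maximum(p2, eps)
    return p_dif, p_dif.mean()

# Uncompressed case: both indexes are similar, so the average is near 1.
_, avg = loss_ratio_feature(np.full(64, 0.30), np.full(64, 0.30))
print(avg)   # 1.0
```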

    Two image databases are used in our experiments to evaluate the performance of the proposed method. First, 1338 uncompressed images from the UCID image database are used. These images are saved in TIFF format with a resolution of 512 × 384. A series of standard JPEG quality factors (60, 70, 75, 80, 85, 90, 95, 96, 97, 98, 99) is applied to the images to obtain JPEG compressed images of different qualities, which are then resaved in TIFF format for evaluating the proposed algorithm. In the following, this image dataset is referred to as dataset1.

    The other 480 images come from the well-known Dresden database. Unlike the UCID images, the images in the Dresden database were captured by consumer cameras and originally saved as JPEG. In our experiments we use 480 JPEG images from 4 different cameras (Agfa DC-830i, Canon PowerShot A640, Nikon D200 and Sony DSC-W170), 120 images from each camera. Unlike the JPEG images in dataset1, these images were compressed with different custom JPEG quantization tables depending on the camera model. These images, with a resolution of 3872 × 2592, are resaved as bitmap images for the experiments and referred to as dataset2.

    We take 500 uncompressed images and 11 × 500 decompressed JPEG images from dataset1 as labeled samples to train an SVM classifier with an RBF kernel. With the resulting model we test the remaining images with the different quality factors (60, 70, 75, 80, 85, 90, 95, 96, 97, 98, 99). We also compare the proposed method with Yang's [4], Fan's [3] and Zhang's [5] methods in terms of detection accuracy and algorithmic complexity. The results are shown in Tables 1 to 3.
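A hypothetical training sketch with scikit-learn (not the authors' code; the synthetic features below merely mimic the behavior described in Section 3, with uncompressed images clustering near 1 and compressed ones well above it):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in 64-dimensional p_dif features for the two classes.
X = np.vstack([rng.normal(1.0, 0.05, (500, 64)),    # uncompressed
               rng.normal(2.5, 0.40, (500, 64))])   # decompressed JPEG
y = np.array([0] * 500 + [1] * 500)

clf = SVC(kernel='rbf').fit(X, y)   # RBF kernel, as in the paper
print(clf.predict(np.full((1, 64), 2.5))[0])   # 1 (detected as compressed)
```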

    Table 1.  Identification accuracy (%) of the proposed method and baselines for dataset1.

    Method    Q = 60   70      80      85      90      95      96      97      98      99      Original
    Fan's     97.10    96.68   96.00   95.14   89.78   69.14   59.80   48.33   25.53   17.27   84.10
    Yang's    99.90    100     100     100     99.80   98.69   96.58   88.74   78.16   39.79   96.59
    Zhang's   100      100     100     100     100     100     100     99.93   99.10   95.65   99.88
    Proposed  100      100     100     100     100     100     99.93   99.48   99.03   89.31   99.92

    Table 2.  Identification accuracy (%) of the proposed method and the baseline for images in dataset2.

    Method     Accuracy (%)
    Zhang's    34.08
    Proposed   100

    Table 3.  Average time cost (image with resolution of 384 × 512).

    Method     Time cost (s)
    Fan's      2.73
    Yang's     0.91
    Zhang's    9.64
    Proposed   0.60


    As shown in Table 1, Fan's method gives relatively good results when the quality factor is below 85, but its detection accuracy falls below 90% when the quality factor exceeds 90. Yang's method has a similar shortcoming: it performs well up to a quality factor of 96, but its accuracy falls below 90% at higher factors. Zhang's method achieves very good results, with a detection accuracy of 95.65% even at a quality factor as high as 99. Our method outperforms Fan's and Yang's methods, and similar detection results are observed between Zhang's method and ours. While Zhang's method works better at a quality factor of 99, our method gives results in the shortest time, as shown in Table 3, and simple computation shows that the average time cost per pixel is stable. Time cost may not be the most important index in this field, but obtaining reliable results in less time can be a great advantage in some cases.

    Another comparison between the proposed method and Zhang's method [5] uses the 480 compressed images from dataset2 to show that our method works on all JPEG compressed images. These bitmap images were not compressed with the standard JPEG quantization tables but were taken by cameras, which means they were compressed with custom JPEG quantization tables. The results are shown in Table 2: Zhang's method turns out to be effective only for images compressed with the standard JPEG quantization tables, while the proposed method still performs well.

    The issue of detecting the compression history of images has received increasing attention in recent years. In this paper, we propose a novel, fast detection method based on a new feature reflecting image information loss: as the compression quality factor increases, the proportion of zero JPEG coefficients at the 64 DCT positions falls. We estimate the counterpart of the original image by cutting 4 rows and 4 columns from the test image and compute the differences between the values at the 64 DCT positions. The feature extracted from these differences is fed into an SVM to train a model that classifies the test bitmap images. Extensive experiments demonstrate that the proposed method outperforms the state of the art, especially for high compression quality factors and custom quantization tables, and that it has lower computational complexity than previous works.

    This work is supported by the National Natural Science Foundation of China (No. 61502076, No. 61772111).

    All authors declare no conflicts of interest in this paper.



  • © 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)