
Runtime verification in uncertain environment based on probabilistic model learning


  • Runtime verification (RV) is a lightweight approach to detecting temporal errors of a system at runtime. It confines verification to the observed trajectory, which avoids the state explosion problem. To predict future violations, some work has proposed predictive RV, which uses information from models or static analysis. But for software whose models and code cannot be obtained, or for systems running in uncertain environments, these predictive methods cannot take effect. Meanwhile, RV generally takes multi-valued logics as specification languages, for example the true, false and inconclusive verdicts in three-valued semantics. These logics cannot give an accurate quantitative description of correctness when inconclusive is encountered. In this paper we present an RV method that learns a probabilistic model of the system and its environment from historical traces and then generates a probabilistic runtime monitor to quantitatively predict the satisfaction of a temporal property at each runtime state. In this approach, a Hidden Markov Model (HMM) is first learned and then transformed into a Discrete Time Markov Chain (DTMC). To construct an incremental monitor, the monitored LTL property is translated into a Deterministic Rabin Automaton (DRA). The final probabilistic monitor is obtained by building the product of the DTMC and the DRA and computing the probability for each state. With such a method, one can give an early warning once the probability of correctness falls below a pre-defined threshold, and thus has the chance to make adjustments in advance. The method has been implemented and evaluated on a realistic UAV (Unmanned Aerial Vehicle) simulation platform.

    Citation: Ge Zhou, Chunzheng Yang, Peng Lu, Xi Chen. Runtime verification in uncertain environment based on probabilistic model learning[J]. Mathematical Biosciences and Engineering, 2022, 19(12): 13607-13627. doi: 10.3934/mbe.2022635




Runtime verification (RV) is a lightweight formal verification technique in software verification [1]. It decides whether the running system satisfies specific properties based only on the currently observed trace [2]. RV is closely related to model checking and testing, two commonly used methods for evaluating the credibility of software. However, when the software to be verified is very complicated, model checking encounters the state explosion problem [3], and testing can hardly cover most paths of the software [4]. RV complements model checking and testing because it draws conclusions about the properties using only the observed trace, which is insensitive to software scale. RV can also monitor properties related to the operating environment after the software is deployed on an actual platform.

Traditional RV techniques only give judgements on the observed trace, while predictive RV [5] predicts the running trend of the target system in advance. This feature provides great potential to avoid software failures rather than only detect them. One main existing predictive RV method uses predictive words [6] to realize advance prediction. The so-called predictive words are auxiliary monitoring information generated from the models or code of the target software with methods such as model abstraction or static analysis. However, for legacy software and black-box systems, or for systems running in uncertain environments, these methods are not applicable.

RV generally takes multi-valued logic as the specification language, and the monitoring process stops whenever a conclusive value (i.e., true or false) is encountered. A major drawback of these methods is that they cannot provide an accurate quantitative description of the satisfaction of a property. To resolve this problem and make RV suitable for infinite-trace semantics, the temporal logic LTL3 adds inconclusive to the truth values, and a corresponding monitor generation method was proposed [7]. But the evaluation of a formula remains unknown when inconclusive is encountered, which may cover most situations in the monitoring process. Hence, in these settings, one cannot give a precise prediction of the trend of correctness. If, in such situations, a probabilistic value could be computed for the verdict inconclusive to evaluate the satisfaction of properties in a quantitative way, it would compensate for the deficiency of existing predictive RV methods. Furthermore, for different target systems and different properties, we can set corresponding acceptance thresholds on the probabilistic values based on requirements or expert experience. When the monitor detects that the value of the current trace crosses the threshold, alarms or control operations can be issued to steer the running of the system.

In this paper, we propose a method that learns a probabilistic model from historical traces, which include system and environment events, and generates a monitor that gives the probability that the property will be satisfied (or violated) whenever a new state of the current trace is observed. For this purpose, a Hidden Markov Model (HMM) is learned from historical traces and translated into a Discrete Time Markov Chain (DTMC), and a Deterministic Rabin Automaton (DRA) is generated from the property given in Linear Temporal Logic (LTL). The probabilistic monitor is then generated from the DTMC and the DRA. Such a probabilistic RV method can predict the trend of the target system by quantitatively judging the extent to which the current system state satisfies the monitored property. Compared with previous predictive RV methods, our approach needs neither the model nor the code of the target system, and the environment is also taken into account. The probabilistic monitor can also provide guidance information for software execution. When the software deviates from the expected properties, the running of the target system can be adjusted by predefined behaviors to avoid malfunction. The corresponding tool for learning and generating the probabilistic monitor has been implemented, and the experiments show the effectiveness of the proposed method.

The paper is organized as follows. Section 1 introduces the motivation and related work, and Section 2 presents some basic concepts. Section 3 gives the method of learning the HMM and DTMC, while Section 4 elaborates the way to generate the probabilistic monitor. Section 5 describes the implementation and experiments. The paper is concluded in Section 6.

Verifying probabilistic properties was first considered in probabilistic model checking, which has been studied for a long time, with adequate theoretical results and important application fields. For example, in [8], probabilistic model checking is applied to the provisioning of cloud resources, showing good results in experiments. The work in [9] discusses two categories of probabilistic model checking when applied to self-adaptive systems. Learning is used in [10] to obtain an abstract model of stochastic systems without source code for statistical model checking.

In recent years, combining RV with probability and statistics has become a novel research direction. To perform statistical checking of probabilistic properties at runtime, [11] provides a method to decompose a trace into several samples based on the specification of two kinds of events. In [12], a probabilistic model is extracted and combined with traditional instrumentation and monitoring methods in RV to evaluate property probabilities in quantitative and qualitative ways. [13] did similar work on the WMI and .NET framework. The above works mainly study methods for determining whether a probabilistic property is satisfied by an observed trace, but they cannot determine the probability that a deterministic property will be satisfied or violated in an uncertain environment.

Traditional RV methods cannot predict whether the future running of the target system satisfies given properties. Predictive RV tries to add this ability, such as the Anticipatory Active Monitoring [14] proposed by us. The main idea is to process the models or code of the software through static analysis or other methods, so as to obtain the necessary information to assist the runtime monitor in making decisions as early as possible. One of our previous works in predictive RV is based on the system model, with the framework shown in Figure 1(A). G is the model of the software to be monitored. At runtime, the monitor M generated from the LTL property uses the information of model G to predict whether the current system trace ρ̄ will violate the property in the future. When violation paths exist from the current state, the active monitor generates a corresponding intervention behavior V(ρ̄) that can guide the run onto correct paths and feeds it back to the running system P.

    Figure 1.  Two types of existing predictive RV methods.

For software without models, we also proposed another predictive RV method that depends on static code analysis [6], as shown in Figure 1(B). The Control Flow Graph (CFG) of the code is extracted before the software is deployed. For each control node in the CFG, the event sequences related to the property within its scope are generated, and the variables whose values are computed at runtime and used in branch decision expressions are also recorded. At runtime, the monitor predicts property violations based on these event sequences and the variable values at the current state. Although this method can predict violations more accurately, since branch decisions are determined at runtime, the cost may be huge for complex software.

One obvious shortcoming of the above predictive RV methods is that they are not suitable for software without models or code, or for systems running in uncertain environments. These problems are the focus of this paper.

In this paper, we use the Baum-Welch algorithm for hidden Markov models to model black-box systems and uncertain environments. There are many other methods to model black-box systems. For example, [15] proposes a method based on BP neural networks to model a nonlinear maneuvering ship as a black box, while [16] uses deep neural networks to model the system. Another kind of modeling builds the state model of the system by providing inputs to the black-box system and observing its outputs; for example, [17] uses the L* algorithm to model the target system. [18] uses Bayesian networks to model the system, while [19] applies reinforcement learning from machine learning. The purpose of system modeling in this paper is to provide auxiliary information for subsequent runtime verification, so as to improve its capability. Therefore, the model we need is mainly the state model of the target system, which differs from these works.

As is well known, the target system where the runtime monitor is deployed evolves over time, which makes keeping the verification information up to date critical. For this reason, [20] presents incremental verification techniques that exploit the results of previous analyses. Unlike this paper, the target systems there are modelled as Markov decision processes, with incremental methods for constructing models from high-level system descriptions and for numerical solution using policy iteration based on strongly connected components. In [21], instead of assuming such a model is given, the authors describe a runtime verification workflow where the model is learned and incrementally refined using process mining techniques. In addition, Stoller et al. proposed runtime verification with state estimation. This technique improves the efficiency of the monitor by reducing the sampling points, and the runtime monitor with state estimation is probabilistic in nature. As in this paper, a Hidden Markov Model (HMM) is used in that work. The difference is that we use the Baum-Welch algorithm to model the system based on previously observed event sequences, while [22] uses the classical forward algorithm to determine the probability of a state sequence given an observation sequence. [23] presents Adaptive Runtime Verification (ARV), a new approach in which overhead control, runtime verification with state estimation, and predictive analysis are synergistically combined. Overhead control maintains the overhead of runtime verification at a specified target level by enabling and disabling the monitoring of events for each monitor instance as needed. In ARV, predictive analysis based on a probabilistic model of the monitored system estimates how likely each monitor instance is to violate a given temporal property in the near future, and these criticality levels are fed to the overhead controllers, which allocate a larger fraction of the target overhead to monitor instances with higher criticality, thereby increasing the probability of violation detection.

Linear-time Temporal Logic (LTL) is usually used to specify the temporal behavior and properties of a system. Assuming that P is a set of atomic propositions, and Σ = 2^P denotes the finite alphabet, LTL formulae are defined as

φ ::= p | ¬φ | φ₁ ∨ φ₂ | Xφ | φ₁ U φ₂

where p ∈ P. There are some standard derived operators, such as:

⊤ ≡ p ∨ ¬p
φ₁ ∧ φ₂ ≡ ¬(¬φ₁ ∨ ¬φ₂)
φ₁ → φ₂ ≡ ¬φ₁ ∨ φ₂
Fφ ≡ ⊤ U φ
Gφ ≡ ¬(F¬φ)
φ₁ R φ₂ ≡ ¬(¬φ₁ U ¬φ₂)

Semantics of LTL formulae are defined with respect to an infinite trace π ∈ Σ^ω and a position i. The i-th letter π(i) of the trace π is a subset of P, which can be viewed as an assignment to P. The standard semantics of LTL formulae is defined as follows [7]:

π, i ⊨ p ∈ P ⟺ p ∈ π(i)
π, i ⊨ ¬φ ⟺ π, i ⊭ φ
π, i ⊨ φ₁ ∨ φ₂ ⟺ π, i ⊨ φ₁ or π, i ⊨ φ₂
π, i ⊨ Xφ ⟺ π, i+1 ⊨ φ
π, i ⊨ φ₁ U φ₂ ⟺ ∃ j ≥ i: π, j ⊨ φ₂ and ∀ i ≤ k < j: π, k ⊨ φ₁

RV is a lightweight formal verification method that only focuses on whether the current execution of the system meets certain properties. At runtime, a system trace can be treated as a finite sequence of system states, and different definitions of state should be given for different concerns. At the hardware level, the state of the system may be defined by the combination of values of registers, memories and so on. At the software level, the system state is usually defined by program locations, variable values, function-call events and so on.

The current system execution at runtime is a finite prefix of an infinite trajectory. The goal of RV is to check whether this prefix satisfies the given property. Commonly, a monitor is generated from the given property according to a specific method to do this checking. Thus, from the perspective of formal languages, RV solves the problem of whether a given word is included in the language corresponding to the given property. The monitor is therefore a device that reads a finite trace and yields a certain verdict.

Monitoring semantics [24] is a set of high-level logical protocols that formally define what decision the monitor makes in different situations. Different monitoring semantics may lead to different conclusions. The standard semantics of LTL is defined on infinite traces, so it cannot be applied directly to finite traces. The following three approaches address this problem [25]. Given a finite trace μ and an LTL formula φ:

(1) Weak semantics: φ is violated iff μ is a bad prefix of it. Namely, for any infinite word ν, we have μν ⊭ φ.

(2) Strong semantics: In this case, φ is satisfied iff μ is a good prefix of it. That is, μν ⊨ φ for every infinite word ν.

(3) Multi-valued semantics: There exist consistency problems in the two-valued semantics of LTL. Thus multi-valued semantics defined on infinite traces was proposed, such as LTL3:

[μ ⊨ φ]_{LTL3} =
    true,  if ∀σ ∈ Σ^ω: μσ ⊨ φ;
    false, if ∀σ ∈ Σ^ω: μσ ⊭ φ;
    ?,     otherwise.
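As a concrete illustration of these verdicts, the LTL3 semantics of two simple property shapes can be hand-rolled directly; the properties F p ("eventually p") and G ¬q ("never q") and the event names are our own illustrative choices, not properties from the paper:

```python
# LTL3 verdicts for two simple property shapes (illustrative choices).
def ltl3_eventually(ap, prefix):
    """[mu |= F ap]: 'true' once ap has occurred (good prefix), else '?'.
    No finite prefix is a bad prefix of F ap, so 'false' never arises."""
    return "true" if any(ap in state for state in prefix) else "?"

def ltl3_never(ap, prefix):
    """[mu |= G not-ap]: 'false' once ap has occurred (bad prefix), else '?'.
    No finite prefix is a good prefix of G not-ap, so 'true' never arises."""
    return "false" if any(ap in state for state in prefix) else "?"

# Each letter of the finite trace mu is the set of propositions holding there.
mu = [set(), {"p"}, {"p", "q"}]
v1 = ltl3_eventually("p", mu)  # 'true': every extension satisfies F p
v2 = ltl3_never("q", mu)       # 'false': every extension violates G not-q
```

The inconclusive verdict `?` is exactly the case where the probabilistic monitor developed below supplies a quantitative value instead.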

A Discrete Time Markov Chain is a stochastic process over discrete events. In such a process, given knowledge of the current state, the historical states are irrelevant when predicting the future state; this is called the Markov property. Moreover, under suitable conditions (e.g., ergodicity), a Markov process gradually stabilizes over time, and the stationary distribution does not depend on the initial state.

With a countable set P of propositions, a Discrete Time Markov Chain (DTMC) M is a tuple (S, I, T, L), where

● S is a finite set of states;

● I is an initial distribution over S with Σ_{s∈S} I(s) = 1;

● T: S × S → [0, 1] is the transition matrix fulfilling Σ_{s′∈S} T(s, s′) = 1 for each s ∈ S;

● L: S → 2^P is the labelling function.
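A DTMC of this shape can be represented directly as dictionaries; the following sketch (with invented numbers, not a model from the paper) checks the two stochasticity conditions and samples one transition:

```python
import random

# A DTMC (S, I, T, L) as plain dictionaries; all numbers are invented.
S = ["s0", "s1", "s2"]
I = {"s0": 1.0, "s1": 0.0, "s2": 0.0}             # initial distribution
T = {                                              # transition matrix, one row per state
    "s0": {"s0": 0.2, "s1": 0.5, "s2": 0.3},
    "s1": {"s0": 0.0, "s1": 0.6, "s2": 0.4},
    "s2": {"s0": 1.0, "s1": 0.0, "s2": 0.0},
}
L = {"s0": set(), "s1": {"p"}, "s2": {"p", "q"}}   # labelling function

def is_stochastic(dist, eps=1e-9):
    """I and every row of T must sum to 1."""
    return abs(sum(dist.values()) - 1.0) < eps

def step(state):
    """Sample a successor of `state` according to row T[state]."""
    r, acc = random.random(), 0.0
    for succ, prob in T[state].items():
        acc += prob
        if r < acc:
            return succ
    return succ  # guard against floating-point round-off

assert is_stochastic(I) and all(is_stochastic(T[s]) for s in S)
```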

Events are usually used to encapsulate the behavior and interactions of many software systems, and event-triggered RV is used to ensure the reliability of a system by observing the occurrence of events. During the process of monitoring the target system running in a specific environment, there is a kind of events that seem to be independent of each other; however, there often exist hidden correlations among them.

For example, as shown in Figure 2, a drone often flies from point A to point B, and there is a control tower at point C that sends instructions to the drone. Event p is defined as the drone reaching the area within 100 meters around point B. The operator obtains position information from the drone and sends instructions through the control tower. In a real environment, the communication between the drone and the control tower may be hindered by obstacles, thereby leading to flight deviations. Another event q is defined as the drone sending its position information back to control tower C. We want to monitor whether the mission can be completed successfully. On the surface, p and q are independent events, but they contain potential probabilistic relationships, since the landform, such as mountains or interference sources nearby, may affect the sending and receiving of information. This case will also be used below when describing the proposed method.

    Figure 2.  An example with uncertain environment in practice.

Figure 3 shows the framework for learning the DTMC model of the target system and environment. First, event traces related to the monitored property are extracted from the log history. Then these traces are joined into one training sequence according to the distribution of their occurrence, from which the HMM is learned. Here the forward-backward (Baum-Welch) algorithm from machine learning is adopted. Finally, the HMM is transformed into a DTMC, which is used in monitor generation.

    Figure 3.  Framework of learning the model of target system and environment.

For software provided without models or code, running trajectories can be obtained from historical running data. In the case of Figure 2, for example, we can randomly extract a large number of trajectories from the log database to learn the model, focusing on the events p and q. The traces can be expressed in a form such as π = S_init ∅ p pq p S_End. The symbol ∅ indicates that neither p nor q occurs; p or q alone shows that the corresponding event occurs; the symbol pq represents the set {p, q}, i.e., the occurrence of both events p and q.

Before formalizing the problem, some definitions need to be introduced. A finite trace π is a string in Σ*, and the length of the trace is denoted by len(π). π(i) denotes the i-th position of π for each i < len(π); therefore, π(i) ⊆ P. A DTMC is a stochastic process that satisfies the following provisos:

Proviso 1: The probability distribution of the system state at time t+1 is only related to the state at time t and is independent of the states before t;

Proviso 2: The state transition from time t to t+1 is independent of the value of t.

Given a DTMC M = (S, I, T, L) and a trace π, and writing [n] for the set {0, …, n−1} for n ∈ ℕ, a mapping from π to M is defined as a function δ: [len(π)] → S such that L(δ(i)) = π(i). Then, the probability of π w.r.t. M under δ is defined as

prob_δ(M, π) = I(δ(0)) · ∏_{i=0}^{len(π)−2} T(δ(i), δ(i+1))

We denote by Δ_{M,π} the set of all mappings from π to M. We now calculate the value sup_{δ∈Δ_{M,π}} prob_δ(M, π) and the corresponding mapping. In other words, our goal is to find a mapping δ that makes prob_δ(M, π) as large as possible.

We can canonically lift this problem to a given (finite) path set Π. In this setting, our goal is to determine a mapping δ_π ∈ Δ_{M,π} for each π ∈ Π so as to maximize the value of Σ_{π∈Π} prob_{δ_π}(M, π). Several existing methods can be used to solve this problem; in this paper, we solve it by learning a Hidden Markov Model (HMM) from the traces.
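For a single trace, the optimization above can be solved by a Viterbi-style dynamic program over the DTMC; the following sketch (with an invented two-state model) keeps, for each position of π, the best-probability state whose label matches the letter read:

```python
# Viterbi-style search for a mapping delta maximizing prob_delta(M, pi).
# The model (S, I, T, L) and the trace pi are illustrative, not from the paper.
S = ["s0", "s1"]
I = {"s0": 0.6, "s1": 0.4}
T = {"s0": {"s0": 0.7, "s1": 0.3}, "s1": {"s0": 0.4, "s1": 0.6}}
L = {"s0": frozenset(), "s1": frozenset({"p"})}

def best_mapping(pi):
    """Return (max probability, mapping delta) over all delta with L(delta(i)) == pi(i)."""
    # position 0: only states whose label matches the first letter
    layer = {s: (I[s], [s]) for s in S if L[s] == pi[0] and I[s] > 0}
    for letter in pi[1:]:
        nxt = {}
        for s, (p, path) in layer.items():
            for t in S:
                if L[t] != letter:
                    continue
                q = p * T[s][t]
                if q > nxt.get(t, (0.0, None))[0]:
                    nxt[t] = (q, path + [t])
        layer = nxt
    if not layer:
        return 0.0, None       # no mapping exists for this trace
    return max(layer.values(), key=lambda v: v[0])

pi = [frozenset(), frozenset({"p"}), frozenset({"p"})]
prob, delta = best_mapping(pi)  # best path s0 -> s1 -> s1, prob 0.6*0.3*0.6
```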

HMM is a directed dynamic Bayesian network [26] with a simple structure that is mainly used for modeling sequential data, speech recognition, and natural language processing. As shown in Figure 4, an HMM has two groups of variables. The first group consists of the hidden (state) variables {y₁, ..., yₙ}, where yᵢ represents the hidden, unobservable state at time t = i. Assuming the number of all states in the model is N, that is, the state space is Y = {s₁, ..., s_N}, we have yᵢ ∈ Y. The second group consists of the observation variables {x₁, ..., xₙ}, where xⱼ represents the state that can be observed directly at t = j. The observed variables can be either continuous or discrete; here we only consider discrete variables, since the systems monitored in our work are assumed to be discrete-time systems. Assuming the number of possible observations is M, that is, the observation space is X = {o₁, ..., o_M}, we have xⱼ ∈ X.

    Figure 4.  Mapping structure of HMM.

In Figure 4, the arrows represent the dependencies among variables. An HMM has the following properties:

● The change of hidden states is a DTMC; that is, the process (y₁ → y₂ → ... → yₙ) satisfies the Markov property.

● The observed variable xₜ is determined solely by the hidden variable yₜ and is independent of the other observed and hidden variables.

Based on the above independence relationships, the joint distribution of X and Y is

P(x₁, y₁, ..., xₙ, yₙ) = P(y₁)P(x₁|y₁) ∏_{i=2}^{n} P(yᵢ|yᵢ₋₁)P(xᵢ|yᵢ)
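As a companion to this factorization, the classical forward algorithm evaluates P(x | λ) by summing the joint distribution over all hidden state sequences; the two-state model below is invented for illustration:

```python
# Forward algorithm for P(x | lambda), lambda = [A, B, theta].
# The two-state, two-observation model is illustrative, not learned from logs.
A = [[0.7, 0.3],        # A[i][j] = P(y_{t+1} = s_j | y_t = s_i)
     [0.4, 0.6]]
B = [[0.9, 0.1],        # B[i][j] = P(x_t = o_j | y_t = s_i)
     [0.2, 0.8]]
theta = [0.5, 0.5]      # theta[i] = P(y_1 = s_i)

def forward(obs):
    """alpha[i] = P(x_1..x_t, y_t = s_i), updated left to right;
    returns P(obs | lambda) as the sum of the final alpha."""
    alpha = [theta[i] * B[i][obs[0]] for i in range(len(theta))]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(len(alpha))) * B[j][o]
                 for j in range(len(alpha))]
    return sum(alpha)

p = forward([0, 1])  # probability of observing o_1 then o_2
```

Summing `forward` over all observation sequences of a fixed length yields 1, which is a convenient sanity check for hand-built models.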

If we want to determine a hidden Markov model M, the following parameters are required in addition to the state space Y and observation space X:

● Hidden state transition probability matrix A = [aᵢⱼ] of size N×N, where aᵢⱼ = P(yₜ₊₁ = sⱼ | yₜ = sᵢ), 1 ≤ i, j ≤ N, which represents the probability of transitions between states in the model.

● Observation probability matrix B = [bᵢⱼ] of size N×M, where bᵢⱼ = P(xₜ = oⱼ | yₜ = sᵢ), 1 ≤ i ≤ N and 1 ≤ j ≤ M, which gives the probability of each observation given a hidden state. In other words, bᵢⱼ is the probability that the value oⱼ is observed at time t if the hidden state at that time is sᵢ.

● Initial distribution θ = (θ₁, ..., θ_N), the probability of each state at the initial time, where θᵢ = P(y₁ = sᵢ) for 1 ≤ i ≤ N. In other words, θᵢ is the probability that the initial state is sᵢ.

In practice, one of the basic problems of HMMs is how to learn the optimal model parameters from a sample set. It can be stated as: given the observation sequence x = {x₁, x₂, ..., xₙ}, how should the model parameters λ = [A, B, θ] be learned so as to maximize the probability P(x|λ) of the sequence? Based on the traces we obtain from the log history, an HMM needs to be constructed that best describes the observed data.

The most popular approach to constructing an HMM is the forward-backward (Baum-Welch) algorithm. However, this algorithm can only process one trajectory when constructing an HMM, so multiple trajectories need to be combined. First, we process all the traces obtained from the log repository (the number of traces is assumed to be N). For example, if a trace πᵢ appears mᵢ times in total, we use the pair (πᵢ, mᵢ) to represent them. The corresponding HMM synthesis algorithm is presented in Algorithm 1.

Merging all the traces directly would introduce errors and non-existent paths into the final model, such as transitions from end states to initial states. Therefore, we add two states, S_init and S_end, around every resulting trace. The introduction of these two states improves the accuracy of the final model [27]. The classical forward-backward algorithm is not incremental; fortunately, there is much research addressing this problem, such as the work in [28].

Algorithm 1: HMM generation algorithm
Input: A finite set traces = {(π₁, m₁), (π₂, m₂), …, (πₙ, mₙ)}
Output: HMM model Hm
1 Π ← ε; // initialize Π as an empty string
2 N = Σ_{i=1}^{n} mᵢ; // N is the total number of traces
3 a ← random(1, N); // a is a random value
4 for j ← 1 to N do

8 Hm = hmm(Π); // call the forward-backward algorithm
9 return Hm;
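A merging step of this kind can be sketched as follows, assuming the loop draws traces in proportion to their multiplicities mᵢ and concatenates them with the padding states S_init/S_end introduced above; this is an illustrative sketch rather than a literal transcription of Algorithm 1:

```python
import random

# Sketch of merging weighted traces into one training sequence for
# Baum-Welch; assumes sampling proportional to multiplicities m_i.
def merge_traces(traces, rng=random.Random(0)):
    """traces: list of (trace, multiplicity) pairs;
    returns one long training sequence Pi with S_init/S_end padding."""
    pool, combined = [], []
    for pi, m in traces:
        pool.extend([pi] * m)            # expand each trace by its multiplicity
    rng.shuffle(pool)                    # randomize the order of the samples
    for pi in pool:
        combined += ["S_init"] + list(pi) + ["S_end"]
    return combined

seq = merge_traces([(("p", "q"), 2), (("p",), 1)])
```

The padding states ensure that the learned model contains no spurious transitions from the end of one trace to the start of the next.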

For the example in Figure 2, we simulate the process of generating the DTMC on the UAV platform Ardupilot [29]. The data of all control commands and sensors are recorded in the log system of Ardupilot. We implemented an event parser to obtain the concerned events from the logs. After simulating the flight 10,000 times, the corresponding traces were extracted and used to generate the HMM with Algorithm 1. The left part of Figure 5 depicts the HMM [A, B, θ] learned from the 10,000 traces, and the graphical representation of its DTMC, containing only p and q, is shown in the right part of Figure 5. These logs are obtained by monitoring the target drone and its environment. The DTMC model can be obtained from matrix A of the HMM directly. Through experiments, we find that the accuracy of the forward-backward algorithm is closely related to two factors. First, a longer observation sequence corresponds to higher accuracy. Second, the similarity between the initial model and the target model directly affects the efficiency of the algorithm.
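The extraction step can be sketched as follows. Per the remark above, the transition structure is read off matrix A directly; labelling each hidden state with its most likely observation is a simplifying assumption of this sketch, and all numbers are invented rather than learned from the Ardupilot logs:

```python
# Sketch: read a DTMC off a learned HMM. Transition structure comes from A;
# labelling by the most probable observation per state is our assumption.
states = ["S_init", "s1", "s2", "S_end"]
obs_names = ["empty", "p", "q"]
A = [[0.0, 0.7, 0.3, 0.0],
     [0.0, 0.1, 0.6, 0.3],
     [0.0, 0.5, 0.1, 0.4],
     [0.0, 0.0, 0.0, 1.0]]   # S_end modelled as absorbing here
B = [[1.0, 0.0, 0.0],
     [0.0, 0.9, 0.1],
     [0.0, 0.2, 0.8],
     [1.0, 0.0, 0.0]]

def hmm_to_dtmc():
    """Return (T, L): transition rows of A as dicts, plus a labelling
    that tags each hidden state with its most probable observation."""
    T = {s: dict(zip(states, row)) for s, row in zip(states, A)}
    L = {s: obs_names[max(range(len(obs_names)), key=row.__getitem__)]
         for s, row in zip(states, B)}
    return T, L

T, L = hmm_to_dtmc()
```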

Figure 5.  The DTMC (right) corresponding to the learned HMM (left; note that matrix A contains the extra states S_init and S_end); the data in the DTMC are rounded to two decimals.

The generated DTMC is the learned model of the system and environment, which is used together with the monitored property to determine and predict the satisfaction of the property in a specific state. Thus, we generate a probabilistic monitor from the DTMC and the automaton corresponding to the property. Figure 6 shows the framework of the procedure for generating the probabilistic monitor.

    Figure 6.  The framework of probabilistic monitor generation.

ω-automata: An ω-automaton A = (Q, Σ, δ, Q₀, Acc) includes the following elements:

● Q is a finite set of states;

● Σ is a finite set called the alphabet;

● δ: Q × Σ → 2^Q is the transition function;

● Q₀ ⊆ Q is the set of initial states;

● Acc is the acceptance condition.

ω-automata can be classified into deterministic and non-deterministic ones, which differ mainly in their transition functions. An automaton is deterministic if |Q₀| = 1 and |δ(q, a)| = 1 for each q ∈ Q and a ∈ Σ. In this case, we consider the transition function δ to be of type Q × Σ → Q.

An input for A is an infinite string over the alphabet Σ, i.e., an infinite sequence α = a₀a₁a₂…. A run of A on such an input is an infinite sequence ρ = r₀r₁r₂… of states, defined as follows: r₀ ∈ Q₀ and rᵢ ∈ δ(rᵢ₋₁, aᵢ₋₁) for i ≥ 1. Let ρ denote a run of A over the infinite word σ ∈ Σ^ω, and Inf(ρ) be the set of states occurring infinitely often in ρ. Depending on the acceptance condition, ω-automata are classified into different types such as:

● Büchi automata (BA): for an accepting state set F ⊆ Q, the condition is F ∩ Inf(ρ) ≠ ∅;

● Rabin automata (RA): for a set of pairs {(E₁, F₁), (E₂, F₂), ..., (Eₘ, Fₘ)}, where Eᵢ, Fᵢ ⊆ Q, there exists some 1 ≤ i ≤ m such that Eᵢ ∩ Inf(ρ) = ∅ and Fᵢ ∩ Inf(ρ) ≠ ∅.

    In this paper, we use NBA and DRA to designate nondeterministic Büchi automata and deterministic Rabin automata, respectively.
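The Rabin condition above is easy to operationalize. The following sketch, using a single made-up acceptance pair, checks whether a run, summarized by its infinitely-visited state set Inf(ρ), is accepted:

```python
def rabin_accepts(inf_states, acc_pairs):
    """Rabin acceptance: some pair (E_i, F_i) must satisfy
    E_i ∩ Inf(ρ) = ∅ and F_i ∩ Inf(ρ) ≠ ∅."""
    inf_states = set(inf_states)
    return any(not (E & inf_states) and bool(F & inf_states)
               for E, F in acc_pairs)

# A single hypothetical acceptance pair ({e}, {f})
pairs = [({"e"}, {"f"})]
```

For instance, a run visiting only "f" infinitely often is accepted, while one that also visits "e" infinitely often is not.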

    Directed Graph and SCC A directed graph G is a tuple (V,E), where:

    V is a finite non-empty set of vertices;

    E ⊆ V × V is a finite set of directed edges.

    Given a directed graph G = (V,E) and V′ ⊆ V, if for each pair of vertices x, y ∈ V′ both x → y and y → x hold (i.e., x and y are mutually reachable), then we call the subgraph of G induced by the node set V′ a strongly connected component, denoted SCC V′.

    Maximal SCC, Bottom SCCs and Intermediate SCCs If SCC V′ ⊆ C implies V′ = C for each SCC C of G, then we consider SCC V′ a maximal SCC and denote it by SCCs V′.

    If an SCCs V′ exists in G = (V,E) with C = V ∖ V′ such that for each vertex x ∈ V′ and y ∈ C there is no edge x → y, then we call it a bottom SCCs, denoted BSCCs V′.

    If a node set V′ is an SCCs but not a BSCCs, we say V′ is an intermediate SCCs, denoted MSCCs.
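These definitions can be checked mechanically. The sketch below uses a recursive Tarjan's algorithm to enumerate SCCs and then classifies each as bottom (no edge leaves the component) or intermediate; the three-node graph is a hypothetical example, not the one in Figure 7.

```python
def tarjan_sccs(graph):
    """Recursive Tarjan's algorithm (fine for small product graphs);
    graph maps each node to a list of its successors."""
    index, low, onstack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def visit(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); onstack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in onstack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            scc = set()
            while True:
                w = stack.pop(); onstack.discard(w)
                scc.add(w)
                if w == v:
                    break
            sccs.append(frozenset(scc))

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

def classify(graph):
    """Split SCCs into bottom (no outgoing edge) and intermediate."""
    bottom, intermediate = [], []
    for scc in tarjan_sccs(graph):
        leaves = any(w not in scc for v in scc for w in graph.get(v, ()))
        (intermediate if leaves else bottom).append(scc)
    return bottom, intermediate

# Hypothetical graph: {a, b} form an intermediate SCC, {c} is bottom
graph = {"a": ["b"], "b": ["a", "c"], "c": ["c"]}
bottom, intermediate = classify(graph)
```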

    Then, given the learned DTMC and the LTL property to be monitored, we generate the probabilistic monitor according to the process shown in Figure 6.

    Various methods of translating an LTL formula into an equivalent Büchi automaton have been proposed in the literature. Unlike model checking, here we focus on the probability of property satisfaction when a new event is observed, so the monitor should be deterministic. Since deterministic Büchi automata are strictly less expressive than non-deterministic ones, there is in general no way to translate an NBA into a DBA. However, McNaughton's theorem and Safra's construction provide an algorithm that translates a Büchi automaton into a deterministic Rabin automaton [30]. Thus we use DRA as the automaton formalism for the LTL property, which still requires a BA as an intermediate representation.

    The tool LTL2BA [31] converts LTL formulas into Büchi automata in three steps. First, the LTL formula is transformed into a very weak alternating automaton (VWAA); this step mainly handles the logical relations among the sub-formulas. Second, the VWAA is converted into a generalized Büchi automaton (GBA), which mainly deals with the combination of states. Third, the GBA is converted into a BA by a standard, well-known construction. Finally, we use the tool LTL2dstar [32] to convert the BA into a DRA, with a worst-case complexity of 2^{O(n log n)}, where n denotes the number of states in the BA.

    With the DTMC learned and the DRA constructed from the property, we can now generate the probabilistic monitor as follows.

    Step 1: Product of DRA and DTMC [33]. Given a DRA A = (Q, Σ, δ, q0, ACC) and a DTMC M = (S, I, T, L), where Σ = S, we first construct their product. Define the directed graph G = A × M = (V, E), where V = {(q, s) | q ∈ Q, s ∈ S}. An edge e ∈ E has the form e = ((q, s), (q′, s′)), which must satisfy T(s, s′) > 0 and q′ = δ(q, s′). Figure 7 presents an example of this construction, where the DRA is generated from the LTL formula φ = FG p, a property that should be satisfied by the case in Figure 2.
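Step 1 can be sketched as follows, assuming the DRA reads the label of the successor DTMC state (q′ = δ(q, s′)); the transition function and matrix here are toy placeholders, not those of Figure 7:

```python
def product(dra_delta, dra_states, dtmc_T, dtmc_states):
    """Product graph G = A x M: nodes are (q, s) pairs, with an edge
    (q, s) -> (q', s') whenever T(s, s') > 0 and q' = delta(q, s')."""
    graph = {}
    for q in dra_states:
        for si, s in enumerate(dtmc_states):
            graph[(q, s)] = [
                (dra_delta[(q, s2)], s2)
                for sj, s2 in enumerate(dtmc_states)
                if dtmc_T[si][sj] > 0
            ]
    return graph

# Hypothetical DRA transition function and DTMC matrix
delta = {("A", "p"): "A", ("A", "q"): "B",
         ("B", "p"): "B", ("B", "q"): "B"}
T = [[0.5, 0.5],
     [0.0, 1.0]]
G = product(delta, ["A", "B"], T, ["p", "q"])
```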

    Figure 7.  Generating probability monitor for property φ of UAS.

    Step 2: Probability calculation. We use P(v) to denote the probability that node v = (q, s) ∈ V will satisfy the LTL property in the finite graph G. In order to calculate the probability of each node in G, the concept of Accepted_SCCs is introduced. The acceptance condition of the DRA generated from the LTL formula is ACC = {(E1, F1), ..., (Ek, Fk)}. A BSCCs C is accepted if there is some i such that C ∩ Ei = ∅ and C ∩ Fi ≠ ∅; otherwise C is rejected. For a node v = (q, s) in an SCCs C, its probability is obtained as follows:

    (1) If C is an Accepted_SCCs, then P(v) = 1;

    (2) If C is a Rejected_SCCs, then P(v) = 0;

    (3) If C is an Intermediate_SCCs, then P(v) = Σ_{u∈W} T(s, s′) × P(u), where W is the set of all successors of v, and each u ∈ W has the form (q′, s′) with q′ = δ(q, s′).
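Rules (1)-(3) reduce the intermediate-component probabilities to a linear system. A minimal sketch, pinning accepted-BSCC nodes to 1 and rejected ones to 0 and solving the rest with numpy (the three-node graph is a hypothetical example, not the product of Figure 7):

```python
import numpy as np

def node_probabilities(graph, trans_prob, accepted, rejected):
    """Solve P(v) = sum over successors u of T(v,u) * P(u),
    with P pinned to 1 on accepted-BSCC nodes and 0 on rejected ones."""
    nodes = list(graph)
    idx = {v: i for i, v in enumerate(nodes)}
    A = np.eye(len(nodes))
    b = np.zeros(len(nodes))
    for v in nodes:
        i = idx[v]
        if v in accepted:
            b[i] = 1.0                      # P(v) = 1
        elif v not in rejected:             # rejected rows stay P(v) = 0
            for u in graph[v]:
                A[i, idx[u]] -= trans_prob[(v, u)]
    return dict(zip(nodes, np.linalg.solve(A, b)))

# Hypothetical fragment: v reaches an accepted and a rejected
# bottom component with probability 0.5 each
graph = {"v": ["acc", "rej"], "acc": ["acc"], "rej": ["rej"]}
tp = {("v", "acc"): 0.5, ("v", "rej"): 0.5,
      ("acc", "acc"): 1.0, ("rej", "rej"): 1.0}
P = node_probabilities(graph, tp, accepted={"acc"}, rejected={"rej"})
```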

    We use the case in Figure 7 to describe the procedure. Figure 7(A),(B) present the DRA and DTMC that are the inputs of the algorithm, and Figure 7(C) gives their product. The initial states are the nodes (S2,p) and (S2,q); we then obtain all SCCs using the classic Tarjan algorithm, which we call FindSCCs. As shown in Figure 7(C), except for the nodes (S2,p), (S2,q), (S1,q), (S1,p), (S0,p), (S0,q), which together make up one SCCs, every other node is an SCCs by itself.

    According to the previous definitions, we can easily see that node (S0,pq) and node (S2,∅) are BSCCs. Based on the definition of Accepted_BSCCs and the DRA structure, we obtain the probability of the Accepted_BSCCs (S0,pq) with rule (1) above and that of the Rejected_BSCCs (S2,∅) with rule (2). For all MSCCs, we obtain P by solving the equations given by rule (3). For instance, we can calculate the probability of the nodes in one MSCCs with the following equations:

    P(S0, p) = P(S0,p)·T(p,p) + P(S0,pq)·T(p,pq) + P(S2,q)·T(p,q) + P(S2,∅)·T(p,∅)
    P(S0, q) = P(S2,p)·T(q,p) + P(S2,∅)·T(q,∅) + P(S2,pq)·T(q,pq) + P(S2,q)·T(q,q)
    P(S1, p) = P(S0,p)·T(p,p) + P(S0,pq)·T(p,pq) + P(S0,∅)·T(p,∅) + P(S0,q)·T(p,q)
    P(S1, q) = P(S2,p)·T(q,p) + P(S2,q)·T(q,q) + P(S2,∅)·T(q,∅) + P(S2,pq)·T(q,pq)
    P(S2, p) = P(S1,p)·T(p,p) + P(S1,pq)·T(p,pq) + P(S2,q)·T(p,q) + P(S2,∅)·T(p,∅)
    P(S2, q) = P(S1,pq)·T(q,pq) + P(S1,p)·T(q,p) + P(S2,q)·T(q,q) + P(S2,∅)·T(q,∅)

    The above equations are solved by Gaussian elimination, and the probability of each node is then obtained. As shown in these equations, to calculate the probability of node (S2,q) we must already know the probability of node (S2,∅), which we worked out before. So the calculation is in fact a depth-first search (DFS), which is consistent with the traversal order of the Tarjan algorithm. Because of this, once we find an SCCs with the Tarjan algorithm, we can immediately calculate the probability with which each node in this SCCs satisfies the formula φ = FG p, as shown in Figure 7(C).

    Step 3: Optimization. After calculating the probability of each state in G = A × M, the graph G together with the probability labels makes up the runtime monitor. Some states in the monitor carry the probability label 0 or 1; such states can be merged to simplify the structure of the monitor. In this example, since the probability labels of states (S0,pq) and (S1,pq) are both 1, they can be merged, yielding the optimized monitor shown in Figure 7(D).
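The merging in Step 3 can be sketched as collapsing all probability-1 nodes into a single TRUE sink and all probability-0 nodes into a single FALSE sink, redirecting edges accordingly (node names and probabilities here are illustrative):

```python
def merge_extremes(graph, probs):
    """Collapse probability-1 nodes into one TRUE sink and
    probability-0 nodes into one FALSE sink, redirecting edges."""
    def rep(v):
        if probs[v] == 1.0:
            return "TRUE"
        if probs[v] == 0.0:
            return "FALSE"
        return v

    merged = {}
    for v, succs in graph.items():
        rv = rep(v)
        merged.setdefault(rv, set())
        if rv in ("TRUE", "FALSE"):
            merged[rv].add(rv)            # sinks keep a self-loop
        else:
            merged[rv] |= {rep(u) for u in succs}
    return merged

# Hypothetical monitor fragment
graph = {"a": ["b", "c"], "b": ["b"], "c": ["c"]}
probs = {"a": 0.5, "b": 1.0, "c": 0.0}
merged = merge_extremes(graph, probs)
```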

    For the generated probabilistic monitor, there are two traditional deployment methods. The first is the embedded method: the monitor code is woven into the source program of the monitored target system, and monitor state transitions are realized by capturing event triggers. This method offers good real-time performance, but its biggest problem is that it threatens the safety of the original system. The second is the external method: the monitor is deployed outside the target system and performs its transitions by acquiring event-occurrence flags through the bus or other channels; the monitor can even be implemented in hardware for better results. This method has the advantage of minimizing interference with the target system, but information may be lost due to communication failure, which can lead to judgment errors, and its real-time performance is not as good as that of the embedded method.

    The tool framework and running process of the probabilistic runtime monitor are briefly described in Figure 8. The events of interest are extracted by the event acquirer module. When an event occurs, the probabilistic monitor performs a transition based on its current state and the received event. Based on the system requirements, if the probability value crosses a specified threshold, some predefined behavior is executed, such as raising an alarm. When an alarm occurs, the events labelled on the transitions leaving the monitor's current state can help the controller change the direction of the system's execution by triggering a specific event.

    Figure 8.  The framework of tool implementation in the UAS simulation platform Ardupilot.

    While the system runs, the Acquirer obtains the events from the target system and its environment, and Prob_M, the main part of the monitor platform, stores the structure of the monitor and changes its state based on the events obtained by the Acquirer. The Controller decides which control event should be issued next when the system encounters a problem, or prompts which external command interventions are needed to keep the system in a safe state. In the static phase, we can use the traces newly added to the log library to update the learned model, which continuously improves the accuracy of the probabilistic monitor.
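The runtime loop described above can be sketched as a small class; the transition table, probabilities, and threshold below are hypothetical placeholders, not Ardupilot's real event set:

```python
class ProbMonitor:
    """Minimal runtime skeleton: step on each observed event and
    flag an alarm when the satisfaction probability of the current
    state falls below a user-chosen threshold."""
    def __init__(self, transitions, probs, init, threshold=0.5):
        self.delta = transitions   # (state, event) -> next state
        self.probs = probs         # state -> P(satisfy property)
        self.state = init
        self.threshold = threshold

    def step(self, event):
        self.state = self.delta[(self.state, event)]
        p = self.probs[self.state]
        status = "ALARM" if p < self.threshold else "OK"
        return status, p           # hook predefined behaviour on ALARM

# Hypothetical monitor: event q (abnormal instruction) leads to a
# low-probability state
delta = {("s0", "p"): "s1", ("s0", "q"): "s2",
         ("s1", "p"): "s1", ("s2", "q"): "s2"}
probs = {"s0": 0.7, "s1": 0.9, "s2": 0.3}
```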

    Compared with our work, a traditional RV monitor under the binary semantics of LTL faces a semantic-consistency problem, since it must give a new semantics of LTL on finite traces. Although monitoring under the three-valued semantics of LTL satisfies the impartiality and anticipation requirements [7], the monitor produces no quantitative result when the satisfaction of the property in many states is output as "?" (inconclusive). By contrast, the probabilistic monitor gives an exact quantitative evaluation of the degree to which the LTL property is satisfied in the current state, thereby greatly expanding its application scenarios and making up for the insufficiency of other RV methods.

    Furthermore, our probabilistic monitor can adjust the subsequent execution of the target system. Specifically, while the system is running, one knows which event will increase or decrease the future probability that the system satisfies the property. Existing runtime monitors are unable to achieve this.

    To show the effectiveness of the proposed method, we applied probabilistic RV and its tool to the actual UAS platform Ardupilot. It is well known that hardware or software defects and external malicious attacks pose a great threat to the safety of a UAS. Consider the following situation.

    When the UAS performs reconnaissance over a hostile area and encounters dangerous conditions (such as abnormal signal interference), in order to balance task completion against UAS safety we set the following rules. First, if the detection task has been initialized, the drone will not accept abnormal signals (whether from the region or from the console) until the task has finished or is interrupted (e.g., by receiving a normal command to stop detection). Second, if the task has not been initialized after entering the hostile area, the task will not be started, to ensure the safety of the UAS.

    Therefore, we define the following events: (1) p: the UAS is in the state of executing the critical task, and (2) q: the UAS has received an abnormal instruction. Then the property can be expressed by the LTL formula:

    φ = G((p → (¬q U ¬p)) ∧ (¬p → G¬p))

    The DRA corresponding to property φ is shown in Figure 9(A), and the DTMC model (over propositions p and q) of this UAS, obtained by learning the log history of multiple flight simulations, is shown in Figure 9(B). The probabilistic monitor generated from this DRA and DTMC is shown in Figure 9(C). Based on this monitor, 200 system traces of length 1000 (in fact a trace should be infinite, but length 1000 suffices to demonstrate the effectiveness of the method) are checked for satisfaction of this safety property. As a result, 140 traces finally reach a state with probability 1.000, satisfying the property, while the remaining 60 traces reach probability 0.000. The ratio of traces satisfying the property is thus 70%, which is close to the 72.59% probability labelled on the initial node.

    Figure 9.  Experiments with the probability monitor compared with the monitor based on three-valued semantics.

    As shown in Table 1, for comparison, similar experiments with the same data were conducted under the LTL3 semantics. First, the same LTL formula was transformed into the monitor shown in Figure 9(D) by applying the LTL3 tool. Obviously, there is no true state in this monitor, because the formula begins with the operator G and true cannot be obtained on infinite words under the LTL3 semantics; the LTL3 method does not consider the environment the system runs in, whereas our probabilistic monitor learns a model of both system and environment from history and uses this information. Replaying the same 200 traces, the LTL3 monitor outputs 140 inconclusive and 60 false verdicts. It must be pointed out that although each trace monitored by the probabilistic monitor ultimately either satisfies or violates the property, this is the expected outcome for normal traces. Unlike the LTL3 monitor, the probabilistic monitor gives a quantitative degree of satisfaction while the system is running normally, and this conclusion can guide people to intervene in the execution of the target system so as to avoid software failure. This capability is beyond the LTL3 monitor.

    Table 1.  Comparison with the LTL3 semantic monitor.
    No. of traces   Probabilistic monitor   LTL3 monitor
    140             1                       ?
    60              0                       0


    Runtime verification is a lightweight formal verification technique. It determines whether the execution of a system satisfies the monitored property based only on the currently observed trace. Runtime verification, model checking and testing have their own advantages and disadvantages; in practice they are often used together, and they are the three common methods for software quality evaluation. When the software to be verified is very complex, model checking encounters the state explosion problem, and testing can hardly cover most of the paths of the software. Runtime verification complements model checking and testing because it uses only the observed trace to draw conclusions about properties, which makes it insensitive to the size of the software. Since runtime verification is often deployed after software implementation, it is frequently used to monitor properties related to the operating environment.

    Traditional runtime verification can only raise a warning after an error has occurred. Predictive runtime verification can predict the development direction of the target system, which makes it possible to avoid software failure. Existing predictive runtime verification, also known as software active monitoring, is mainly realized by extracting "predictive words" from the target system in a static stage; a predictive word is obtained from the model or code of the target software by model abstraction or static analysis. However, for legacy software, black-box systems, or systems running in uncertain environments, these existing active monitoring techniques are not applicable.

    In addition, runtime verification often uses binary monitoring semantics, i.e., the monitoring process stops whenever the property is judged to be violated or satisfied. The main drawback of such methods is that they cannot provide an accurate quantitative description of property satisfaction, and they also suffer from semantic ambiguity. To solve this problem and make runtime verification suitable for infinite-trace semantics, LTL3 adds an inconclusive value to the truth domain, and a corresponding monitor generation method was proposed. However, during monitoring most verdicts turn out to be inconclusive, and an inconclusive verdict gives no quantitative judgment of the relationship between the current system state and the monitored property. If instead the output of the runtime monitor is a probability value that quantitatively measures the degree to which the target system satisfies the monitored property, the shortcomings of existing predictive runtime verification methods can be made up for. Moreover, for different target systems and properties, we can set corresponding thresholds according to needs or expert experience to trigger a reminder mechanism: when the monitor detects that the probability of the current trace crosses the threshold, it can raise an alarm or issue control operations to guide the execution of the system.

    To solve the above two problems, this paper proposes a method that learns a probabilistic model of the target system and its operating environment from historical traces, and then combines it with the monitored property to generate a probabilistic runtime monitor. This monitor gives the probability that the target system satisfies (or violates) the monitored property in each new state. To this end, a hidden Markov model is learned from the historical traces and transformed into a discrete-time Markov chain, and a deterministic Rabin automaton (DRA) is generated from the linear temporal logic property. The probabilistic monitor is then generated from the DTMC and the DRA. This probabilistic runtime verification method can predict the trend of the target system by quantitatively judging the degree to which the current state satisfies the monitored property. Compared with previous predictive runtime verification methods, our method needs neither the model nor the code of the target system, and it takes the environment into account. The probabilistic monitor can also provide guidance for software execution: when the software deviates from the expected property, the execution of the target system can be adjusted through predefined behavior to avoid failure. The corresponding tools for learning and monitor generation have been implemented, and experiments show the effectiveness of the method.

    For systems that have no accurate model or program code, or that run in uncertain environments where threats cannot be predicted before deployment, we propose a method for constructing probabilistic monitors that detect property violations at runtime. This method uses the historical traces from the log base of the target system to learn a hidden Markov model representing the behavior of the system and the environment; a DTMC model is then obtained from the hidden Markov model. Combined with the DRA generated from the LTL formula, the state probabilities in the product of the DTMC and DRA are calculated to generate a probabilistic runtime monitor. The probabilistic monitor has good application scenarios in which LTL properties cannot be quantitatively judged by existing runtime verification methods, and it can also provide directional guidance for system intervention. We have implemented the corresponding tools on the UAS platform Ardupilot; compared with other methods such as LTL3, experiments demonstrate the effectiveness of this method.

    The authors declare there is no conflict of interest.



    [1] P. Zhang, Z. Su, Y. Zhu, W. Li, B. Li, Ws-psc monitor: A tool chain for monitoring temporal and timing properties in composite service based on property sequence chart, in International Conference on Runtime Verification, 6418 (2010), 485–489. https://doi.org/10.1007/978-3-642-16612-9_39
    [2] I. O. Electrical, I. S. Board, IEEE standard for software verification and validation, Software Qual. Prof., 2005 (2005), 1–217. https://doi.org/10.1109/IEEESTD.2005.96278 doi: 10.1109/IEEESTD.2005.96278
    [3] E. M. Clarke, Model checking-my 27-year quest to overcome the state explosion problem, in 2009 24th Annual IEEE Symposium on Logic In Computer Science, IEEE, (2008). https://doi.org/10.1109/LICS.2009.42
    [4] O. J. Dahl, E. W. Dijkstra, C. A. R. Hoare, Structured Programming, Academic Press, 1972. https://doi.org/10.5555/1243380
    [5] C. Zhao, W. Dong, J. Wang, P. Sui, Z. Qi, Software active online monitoring under anticipatory semantics, Shm, 2009. Available from: https://smcit.ecs.baylor.edu/2011/www-smcit09/abstracts-contri_papers/karsai/DongWei_SHM.pdf.
    [6] K. Yu, Z. Chen, W. Dong, A predictive runtime verification framework for cyber-physical systems, in 2014 IEEE Eighth International Conference on Software Security and Reliability-Companion, (2014), 247–250. https://doi.org/10.1109/SERE-C.2014.43
    [7] A. Bauer, M. Leucker, C. Schallhart, Comparing ltl semantics for runtime verification, J. Log. Comput., 20 (2010), 651–674. https://doi.org/10.1093/logcom/exn075 doi: 10.1093/logcom/exn075
    [8] A. Naskos, E. Stachtiari, P. Katsaros, A. Gounaris, Probabilistic Model Checking at Runtime for the Provisioning of Cloud Resources, Springer International Publishing, 9333 (2015). https://doi.org/10.1007/978-3-319-23820-3_18
    [9] A. Filieri, G. Tamburrelli, Probabilistic verification at runtime for self-adaptive systems, in Assurances for Self-Adaptive Systems, 7740 (2013). https://doi.org/10.1007/978-3-642-36249-1_2
    [10] A. Nouri, B. Raman, M. Bozga, A. Legay, S. Bensalem, Faster statistical model checking by means of abstraction and learning, in Runtime Verification, 8734 (2014), 340–355. https://doi.org/10.1007/978-3-319-11164-3_28
    [11] U. Sammapun, I. Lee, O. Sokolsky, J. Regehr, Statistical runtime checking of probabilistic properties, Springer, Berlin Heidelberg, 2007 (2007). https://doi.org/10.1007/978-3-540-77395-5_14
    [12] V. C. Ngo, A. Legay, V. Joloboff, Pscv: A runtime verification tool for probabilistic systemc models, in International Conference on Computer Aided Verification, 9779 (2016). https://doi.org/10.1007/978-3-319-41528-4_5
    [13] J. Jayaputera, I. Poernomo, H. Schmidt, Runtime verification of timing and probabilistic properties using wmi and.net, in Proceedings. 30th Euromicro Conference, IEEE, (2004), 100–106. https://doi.org/10.1109/EURMIC.2004.1333361
    [14] C. Zhao, W. Dong, Z. Qi, Active monitoring for control systems under anticipatory semantics, in 2010 10th International Conference on Quality Software, (2010), 318–325. https://doi.org/10.1109/QSIC.2010.82
    [15] H. He, Z. Zou, Black-box modeling of ship maneuvering motion using system identification method based on BP neural network, in The 39th International Conference on Ocean, Offshore and Arctic Engineering, (2020). https://doi.org/10.1115/OMAE2020-18069
    [16] S. Kundu, A. Soyyigit, K. A. Hoque, K. Basu, High-level modeling of manufacturing faults in deep neural network accelerators, in 2020 IEEE 26th International Symposium on On-Line Testing and Robust System Design (IOLTS), (2020). https://doi.org/10.1109/iolts50870.2020.9159704
    [17] D. Angluin, Learning regular sets from queries and counterexamples, Inf. Comput., 75 (1987), 87–106. https://doi.org/10.1016/0890-5401(87)90052-6 doi: 10.1016/0890-5401(87)90052-6
    [18] Y. Kan, K. Yue, H. Wu, X. Fu, Z. Sun, Online learning of parameters for modeling user preference based on bayesian network, Int. J. Uncertainty Fuzzines Knowledge Based Syst., 30 (2022), 285–310. https://doi.org/10.1142/S021848852250012X doi: 10.1142/S021848852250012X
    [19] S. Tao, J. Jiang, D. Lian, K. Zheng, E. Chen, Predicting human mobility with reinforcement-learning-based long-term periodicity modeling, ACM Tran. Intell. Syst. Technol., 12 (2021), 1–23. https://doi.org/10.1145/3469860 doi: 10.1145/3469860
    [20] V. Forejt, M. Kwiatkowska, D. Parker, H. Qu, M. Ujma, Incremental runtime verification of probabilistic systems, in International Conference on Runtime Verification, 7687 (2012), 314–319. https://doi.org/10.1007/978-3-642-35632-2_30
    [21] A. Ferrando, G. Delzanno, Incrementally predictive runtime verification. in CILC, (2021), 92–106. https://doi.org/10.1007/978-3-642-28891-3_37
    [22] S. D. Stoller, E. Bartocci, J. Seyster, R. Grosu, K. Havelund, S. A. Smolka, et al., Runtime verification with state estimation, in International Conference on Runtime Verification, (2011), 193–207. https://doi.org/10.1007/978-3-642-29860-8_15
    [23] E. Bartocci, R. Grosu, A. Karmarkar, S. A. Smolka, S. D. Stoller, E. Zadok, et al., Adaptive runtime verification, in International Conference on Runtime Verification. (2012), 168–182. https://doi.org/10.1007/978-3-642-35632-2_18
    [24] Z. Chen, O. Wei, Z. Huang, H. Xi, Formal semantics of runtime monitoring, verification, enforcement and control, in International Symposium on Theoretical Aspects of Software Engineering, (2015), 63–70. https://doi.org/10.1109/TASE.2015.11
    [25] D. Giannakopoulou, K. Havelund, Runtime analysis of linear temporal logic specifications, in IEEE International Conference on Automated Software Engineering, (2001). Available from: http://www.riacs.edu/trs/.
    [26] S. M. Chu, T. S. Huang, An experimental study of coupled hidden markov models, in IEEE International Conference on Acoustics, Speech, and Signal Processing, (2002). https://doi.org/10.1109/ICASSP.2002.5745559
    [27] N. M. Abbasi, Hidden markov methods. algorithms and implementation, 2015. Available from: https://manualzz.com/doc/o/cklvd/.
    [28] B. Motik, Y. Nenov, R. Piro, I. Horrocks, Incremental update of datalog materialisation: the backward/forward algorithm, in Twenty-Ninth AAAI Conference on Artificial Intelligence, (2015), 1560–1568. https://doi.org/10.5555/2886521.2886537
    [29] A simulation platform of unmanned aerial vehicles. Available from: https://www.ardupilot.org/.
    [30] S. Safra, On the complexity of ω-automata, in Foundations of Computer Science, W. H. Freeman, (1988), 319–327. https://doi.org/10.1109/SFCS.1988.21948
    [31] P. Gastin, D. Oddoux, Fast LTL to Büchi automata translation, in International Conference on Computer Aided Verification, (2001), 53–65. https://doi.org/10.1007/3-540-44585-4_6
    [32] LTL to deterministic Streett and Rabin automata. Available from: https://www.ltl2dstar.de/.
    [33] W. Liu, F. Song, G. Zhou, Reasoning about periodicity on infinite words, Appl. Microbiol. Biotechnol., 2017 (2017), 200–215. https://doi.org/10.1007/978-3-319-69483-2_12 doi: 10.1007/978-3-319-69483-2_12