

    1. Introduction

The Business Process Model and Notation (BPMN) [15] has been widely adopted in recent years as one of the standard languages for the visual description of business processes. In particular, BPMN has been extensively used to facilitate effective communication among technical experts, business users, and other stakeholders within a work group, for business process design and implementation or for problem solving in business domains.

    BPMN, however, does not have a formal semantics. Consequently, rigorously specifying the dynamical behaviors of objects in a BPMN-specified business process model can be challenging and error-prone. For critical systems (e.g., a medical process specified in BPMN), the existence of semantic ambiguities is costly from every perspective and thus should be eliminated as much as possible. For these reasons, several efforts have been made in recent years to equip BPMN with a formal semantics. Among them, [3] and [4] in particular proposed to map a BPMN model into a Petri net, which is a mathematical modeling language (although a Petri net can be graphical too) with a precise execution semantics. In addition, decades of research in Petri nets offer well-established theories to support the analysis of dynamic properties in Petri-net-specified systems. In [3] and [4], a given BPMN model is first dissected into several BPMN elements. For each type of element, a mapping from the element to a Petri net module is proposed. After the dissection and mapping steps, these Petri net modules are integrated to construct a single Petri net.

    In our previous work ([17], [18], [19]), meanwhile, we presented a straightforward first-order-logic (FOL) based framework for Petri nets. More specifically, we have shown that: a) we can represent Petri nets and their variants as Situation-Calculus1 based action theories, where the theories for basic Petri nets are called SCOPE for short, standing for Situation Calculus Ontology for PEtri nets ([17], [18]); b) given an instance of a SCOPE theory, we can further build up Golog procedures2, where sequential, iterative, or nondeterministic composite sequences of transition firings in Petri nets and their variants are axiomatized by defining macros; these macros are built on top of more primitive procedures and the only basic action in the domain, transition firing [19]; c) executability testing of Golog procedures via theorem proving can be implemented through efficient logic programming in Prolog ([18], [19]).

    1The Situation Calculus ([10,16]) is a particular formal language for describing changes upon actions in dynamical domains.

    2In the Situation Calculus, a Golog procedure defines a complex action that is composed of basic actions.

    In this paper, we show that the mapping method proposed in [3] and [4] can be naturally extended to the SCOPE context. That is, using SCOPE, we can easily define a Petri net module, resulting from mapping a BPMN object, as a complex Golog procedure. These Golog procedures act as interfaces that shield the Petri net from direct access by end users. Consequently, stakeholders can still work at the BPMN level, while the model is enriched with a Petri-net-specified formal semantics. Additionally, with these procedures, users can write more complicated user-defined BPMN procedures in a hierarchical way. Since a BPMN procedure can be further translated into Prolog programs [19], our approach supports automated verification of important dynamic system properties, such as the executability of these procedures.

    The remainder of this paper is organized as follows. Section 2 provides introductory reviews to the key technologies employed in this paper. Enriched with an example, Section 3 is the core section that explains our approach. Conclusive remarks are presented in Section 4.


    2. Preliminaries

    In this section, we give a brief introduction to BPMN and Petri Nets. The method of mapping from BPMN onto Petri Nets as stated in Section 3 of [4] is then briefly reviewed. We also introduce here Situation Calculus/Golog and SCOPE.


    2.1. Petri nets & BPMN

    Business Process Model and Notation (BPMN) is a notational system for the graphical specification of processes in business models. BPMN was developed and has been maintained by the Object Management Group (OMG) [15]. Control-flow-related elements in BPMN are the focus of this paper; they include event, task, and gateway objects.

    An event denotes the happening of something in a model and is represented as a circle. In particular, a start/end event signals the start/finish of a process. An activity denotes work to be done in a model and is represented as a rounded-corner rectangle. A task is an atomic activity. Gateways are diamond-shaped quadrilaterals and denote different routing constructs. Among them, a fork gateway (AND-split) creates concurrent flows and a join gateway synchronises these flows. A decision gateway (XOR-split) selects exactly one flow from a set of them based on certain criteria (data or event), and a merge gateway (XOR-join) merges the alternative flows in the set into one flow. When the choice is non-unique, the pair of gateways OR-split and OR-join is applied instead. Graphically, these BPMN objects are connected to each other by arrows. Figure 2 shows an example business process in BPMN (adapted from Fig. 13(a) in [4]).

    Figure 2. An Order Process in BPMN.

    A Petri net (PN) [14] is a pair (N, M0), where N is a triple (P, T, F) such that P is a finite set of node elements called places, T is a finite set of node elements called transitions, and F ⊆ (P×T) ∪ (T×P) consists of ordered pairs; M0, the initial marking, is a mapping of the form M: P → ℕ, indicating the initial assignment of a non-negative integer k to each place p in P. (In this case, we say that the place p is marked with k tokens.) A marking M for N in Petri net PN is defined as a vector (M(p1), …, M(pm)), where p1, …, pm is an enumeration of P and M(pi) tokens are assigned to node pi, for all i such that 1 ≤ i ≤ m. The same process, depicted in Figure 2 as a BPMN model, is presented as a Petri net in Figure 3.
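The tuple definition above translates directly into a data structure. The following is a minimal sketch in Python (class and field names are ours, purely illustrative): a net holds P, T, F, and a marking, and a transition is enabled when every input place is marked.

```python
from dataclasses import dataclass

@dataclass
class PetriNet:
    places: set        # P: finite set of places
    transitions: set   # T: finite set of transitions
    flow: set          # F ⊆ (P×T) ∪ (T×P), as ordered pairs
    marking: dict      # M: P → non-negative integers

    def enabled(self, t):
        # t may fire iff every place entering t holds at least one token
        inputs = [p for (p, t2) in self.flow if t2 == t and p in self.places]
        return all(self.marking[p] >= 1 for p in inputs)

net = PetriNet(
    places={"P1", "P2"}, transitions={"T1"},
    flow={("P1", "T1"), ("T1", "P2")},
    marking={"P1": 1, "P2": 0},
)
print(net.enabled("T1"))   # True: P1 is marked
```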

    Figure 3. A Petri Net for the Order Process (Transformed from BPMN).

    The method of mapping a BPMN process into a Petri net introduced in [4] assumes that the BPMN process is in the first place "well-formed". A well-formed BPMN process must satisfy several conditions (e.g., a start event has one outgoing flow and no incoming flow). It has been shown that a given BPMN process can always be transformed into a well-formed one ([1,9,25,26]). The mapping process itself is straightforward and is summarized in Figure 1 (several issues are identified and addressed in [4]; they are not discussed here). In the figure, note that places in the Petri net modules/components are dashed, meaning that these places act as connectors linking multiple components into a single Petri net during mapping. As an example, the Petri net in Figure 3 results from performing the mapping on the BPMN model in Figure 2.

    Figure 1. Mapping tasks, events, and gateways onto Petri-net components (Fig. 3. in [4] is copied here).

    2.2. Situation Calculus & SCOPE

    The Situation Calculus is a logical language for representing actions and changes in a dynamical domain. It was first proposed by McCarthy and Hayes in 1969 [10]. The language L of the Situation Calculus as stated in [16] is a second-order many-sorted language with equality. Any Situation Calculus based action theory includes: fluents, whose values can be updated by applications of relevant actions; and actions, whose applications bring about effects in terms of changes to the values of certain fluents in the system. The initial situation is called S0. Starting from S0, the system evolves through a sequence of actions into its future situations. Golog is a logic programming language for the description and execution of complex actions using domain-specific Situation Calculus primitive actions. It provides imperative programming constructs, including (1) a, a primitive action; (2) α;β, action α followed by action β; (3) p?, a test action on the condition p; (4) if p then α else β, a conditional; (5) α|β, nondeterministic choice of action α or action β; (6) (πx)α(x), nondeterministic choice of arguments; (7) α*, nondeterministic iteration; and (8) procedures. The semantics of Golog programs is defined on the abbreviation Do(δ, s1, do(a, s1)), which denotes that execution of the Golog program δ in the situation s1 leads to do(a, s1), an abbreviation for the situation reached by performing the sequence of actions a = [α1, …, αn−1, αn] starting from s1, i.e., do(αn, do(αn−1, …, do(α1, s1)…)). The structure of δ is defined inductively through macro-expansions on the above eight constructs.
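The macro-expansion reading of Do can be illustrated operationally. The following toy interpreter (not Reiter's actual definition; all names are ours) treats programs as nested tuples, situations as lists of performed actions, and enumerates the situations a program can reach for the sequence, choice, test, and primitive-action constructs:

```python
def Do(delta, s, poss, holds):
    """Return all situations reachable by executing program delta in s."""
    kind = delta[0]
    if kind == "act":                      # primitive action a
        _, a = delta
        return [s + [a]] if poss(a, s) else []
    if kind == "seq":                      # alpha ; beta
        _, alpha, beta = delta
        return [s2 for s1 in Do(alpha, s, poss, holds)
                   for s2 in Do(beta, s1, poss, holds)]
    if kind == "choice":                   # alpha | beta
        _, alpha, beta = delta
        return Do(alpha, s, poss, holds) + Do(beta, s, poss, holds)
    if kind == "test":                     # p?
        _, p = delta
        return [s] if holds(p, s) else []
    raise ValueError(kind)

# Every action is possible and every test succeeds in this tiny example:
prog = ("seq", ("act", "a1"), ("choice", ("act", "a2"), ("act", "a3")))
runs = Do(prog, [], lambda a, s: True, lambda p, s: True)
print(runs)  # [['a1', 'a2'], ['a1', 'a3']]
```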

    The theory Dscope, for Situation Calculus Ontology of PEtri nets (SCOPE), was first proposed in [17] and [18]. In Dscope, the only action is fire and the only fluent is Tkns.

    Primitive Action fire(t): the transition t fires.

    Fluent Tkns(p,s): the number of tokens at place p at situation s.

    Situation-Independent relations pre and post are introduced to specify the topology of a given Petri net.

    pre(m,n). Node m enters node n.

    post(m,n). Node n enters node m.

    pre(m, n) ≡ post(n, m).

    The Foundational Axioms Df (not listed here) characterize a generic situation tree for any Basic Action Theory (BAT) written in the Situation Calculus. Based on Df, the Primitive Action Precondition Axiom for primitive actions in Dscope

    (∀s, t)(Poss(fire(t), s) ≡ (∀p)(pre(p, t) ⊃ Tkns(p, s) ≥ 1))

    defines the condition for a transition node t to fire legally. That is, the transition t is enabled to fire in situation s iff each place that enters the transition node t contains at least one token. The Successor State Axiom in Dscope defines the effects that firing a transition node brings to the system.

    (∀s, p, a, n)(Tkns(p, do(a, s)) = n ≡ γf(p, n, a, s) ∨
    (Tkns(p, s) = n ∧ ¬(∃n′)γf(p, n′, a, s))),

    where γf(p, n, a, s) =def γfe(p, n, a, s) ∨ γfl(p, n, a, s), referring to the two sets of firing actions that cause the number of tokens at place p in situation do(a, s) to equal n:

    γfe(p, n, a, s) =def (∃t)(pre(t, p) ∧ ¬post(t, p) ∧ n = Tkns(p, s) + 1 ∧ a = fire(t)) (the number of tokens at place p at situation s is n − 1, and a is an action of firing transition t, which enters p);

    γfl(p, n, a, s) =def (∃t)(pre(p, t) ∧ ¬post(p, t) ∧ n = Tkns(p, s) − 1 ∧ a = fire(t)) (the number of tokens at place p at situation s is n + 1, and a is an action of firing transition t, which leaves p).

    The above Successor State Axiom summarizes all conditions under which the number of tokens at place p is n in situation do(a, s): n could be achieved by action a from situation s, or the number of tokens at p is already n in situation s and the action a that occurs in s does not change it to some other value. Note in particular that, aside from the accommodation of time, this axiom is the same as the one in Dscope, since introducing an inhibitor arc only changes the preconditions under which a transition node is enabled to fire.
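Read computationally, the precondition axiom is an enabledness test and the successor-state axiom is a token update: one token consumed per input place, one produced per output place, and every other place keeps its value (the frame part). A minimal sketch in Python (function names are ours; `pre` is the set of flow pairs and `tkns` maps places to token counts):

```python
def enabled(pre, tkns, t):
    # Poss(fire(t), s): every place p with pre(p, t) holds at least one token
    return all(tkns[p] >= 1 for (p, t2) in pre if t2 == t and p in tkns)

def fire(pre, tkns, t):
    """Apply the successor-state axiom for the action fire(t)."""
    if not enabled(pre, tkns, t):
        raise ValueError(f"{t} is not enabled")
    new = dict(tkns)              # frame: untouched places keep their value
    for (m, n) in pre:
        if n == t and m in tkns:  # γfl: pre(p, t) — t leaves p, count drops
            new[m] -= 1
        if m == t and n in tkns:  # γfe: pre(t, p) — t enters p, count grows
            new[n] += 1
    return new

tkns = {"P1": 1, "P2": 0}
print(fire({("P1", "T1"), ("T1", "P2")}, tkns, "T1"))  # {'P1': 0, 'P2': 1}
```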

    As an example, we define the Petri net in Figure 3 as a SCOPE theory. The axiomatic part of SCOPE, including the Foundational Axioms, Precondition Axioms, and Successor State Axioms, remains unchanged. We only need to specify the initial marking and the situation-independent, topological structure of the Petri net in Figure 3. That is, the number of tokens at place P1 is one, and for any other place node the number is zero: Tkns(P1) = 1, and Tkns(Pi) = 0 for 2 ≤ i ≤ 12. Place node P1 enters transition node T1, T1 enters P2, and so on:

    pre(P1, T1), pre(T1, P2), …, pre(T11, P12).

    3. An ontological account of BPMN flow-control components

    This section includes two parts. We first specify formally the restrictions for well-formed BPMN processes; this would facilitate the implementation of automated tools for formally evaluating whether a BPMN process is well-formed (using a FOL theorem prover, for example). We then demonstrate how the Petri net modules for their corresponding BPMN objects can be formally described by carrying out further axiomatization on top of SCOPE.


    3.1. Well-formed BPMN process

    From Section 3 of [4], a well-formed BPMN process has to satisfy the following five criteria:

    1. a start event or an exception event has just one outgoing (sequence) flow but no incoming flow;

    2. an end event has just one incoming flow but no outgoing flow;

    3. activities and intermediate events have exactly one incoming flow and one outgoing flow;

    4. fork or decision gateways have one incoming flow and more than one outgoing flow; and

    5. join or merge gateways have one outgoing flow and more than one incoming flow.

    In this section, a first-order axiomatization of these criteria is presented (assuming that objects in the domain are sorted in the logical sense).

    startM(x) Message x starts.

    endM(x) Message x ends.

    (∀x, y)(pre(x, y) ≡ post(y, x)).

    (∀x)(startE(x) ≡ start(x) ∨ startM(x)).

    (∀x)(interE(x) ≡ message(x) ∨ timer(x) ∨ error(x)).

    (∀x)(endE(x) ≡ (end(x) ∨ endM(x))).

    • A start/exception event has one outgoing flow and does not have incoming flow

    (∀x, y)(¬pre(x, start(y)) ∧ ¬pre(x, exception(y))),
    (∀y)((start(y) ∨ exception(y)) ⊃ ((∃x)pre(y, x) ∧
    ¬(∃x1, x2)(pre(y, x1) ∧ pre(y, x2) ∧ x1 ≠ x2))).

    • An end event has one incoming flow and does not have outgoing flow

    (∀x, y)(¬pre(end(y), x)),
    (∀y)(end(y) ⊃ ((∃x)pre(x, y) ∧
    ¬(∃x1, x2)(pre(x1, y) ∧ pre(x2, y) ∧ x1 ≠ x2))).

    • An activity/intermediate-event has exactly one incoming flow and exactly one outgoing flow

    (∀y)((activity(y) ∨ interE(y)) ⊃
    (((∃x)pre(y, x) ∧ ¬(∃x1, x2)(pre(y, x1) ∧ pre(y, x2) ∧ x1 ≠ x2)) ∧
    ((∃z)pre(z, y) ∧ ¬(∃z1, z2)(pre(z1, y) ∧ pre(z2, y) ∧ z1 ≠ z2)))).

    • A fork or decision gateway has one incoming flow and more than one outgoing flow

    (∀y)((fork(y) ∨ decision(y)) ⊃ (((∃x)pre(x, y) ∧
    ¬(∃x1, x2)(pre(x1, y) ∧ pre(x2, y) ∧ x1 ≠ x2)) ∧
    (∃x1, x2)(pre(y, x1) ∧ pre(y, x2) ∧ x1 ≠ x2))).

    • A join or merge gateway has one outgoing flow and more than one incoming flow

    (∀y)((join(y) ∨ merge(y)) ⊃ (((∃x)pre(y, x) ∧
    ¬(∃x1, x2)(pre(y, x1) ∧ pre(y, x2) ∧ x1 ≠ x2)) ∧
    (∃x1, x2)(pre(x1, y) ∧ pre(x2, y) ∧ x1 ≠ x2))).

    3.2. A theory of BPMN using SCOPE

    It is known from our previous work that, given a SCOPE theory, we can write SCOPE procedures by defining macros, where these macros are built on top of more primitive, existing procedures and the only basic action in the domain, transition firing. With respect to the Petri net in Figure 3, we can define a set of simple SCOPE procedures. For example,

    proc checkCreditCardFirst

    fire(T1);fire(T2); fire(T4); fire(T6)

    endProc

    proc prepareAndShipProducts

    fire(T3);fire(T5); fire(T8)

    endProc

    The procedure checkCreditCardFirst sequentially fires the transition nodes T1, T2, T4 (the activity of checking the credit card), and T6. The procedure is executable from the initial situation, as there is a token in P1, so T1 is initially enabled to fire. Firing T4, however, increases the number of tokens in P6 by one, which non-deterministically enables either T6 or T7 (but not both) to fire, reflecting the fact that the credit card can be either accepted or rejected.

    The procedure prepareAndShipProducts sequentially fires the transition nodes T3 (the activity of preparing products), T5, and T8 (the activity of shipping products). Note that, if the procedure prepareAndShipProducts is executable, then each of these three transition nodes, including T5, is executable. This means that the credit card must have been accepted: otherwise T7, not T6, would have been enabled to fire, so P7 would never receive a token and T5 could consequently never be enabled to fire.
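The executability argument above can be replayed concretely with the token-game semantics of SCOPE: we encode the Petri net of Figure 3 (the pre pairs are taken from Section 3.3; helper names are ours) and fire the two procedures' transition sequences, choosing T6 (card accepted) at the decision point:

```python
PRE = [("P1","T1"), ("T1","P2"), ("P2","T2"), ("T2","P3"), ("T2","P4"),
       ("P3","T3"), ("T3","P5"), ("P4","T4"), ("T4","P6"),
       ("P6","T6"), ("T6","P7"), ("P6","T7"), ("T7","P9"),
       ("P5","T5"), ("P7","T5"), ("T5","P8"), ("P8","T8"), ("T8","P10"),
       ("P9","T9"), ("P10","T10"), ("T9","P11"), ("T10","P11"),
       ("P11","T11"), ("T11","P12")]
PLACES = {f"P{i}" for i in range(1, 13)}

def fire(tkns, t):
    # precondition: every input place of t is marked
    if any(tkns[p] < 1 for (p, t2) in PRE if t2 == t and p in PLACES):
        raise ValueError(f"{t} not enabled")
    new = dict(tkns)
    for (m, n) in PRE:
        if n == t and m in PLACES: new[m] -= 1   # token consumed
        if m == t and n in PLACES: new[n] += 1   # token produced
    return new

tkns = {p: 0 for p in PLACES}; tkns["P1"] = 1    # initial marking
for t in ["T1", "T2", "T4", "T6"]:   # checkCreditCardFirst (T6: accepted)
    tkns = fire(tkns, t)
for t in ["T3", "T5", "T8"]:         # prepareAndShipProducts
    tkns = fire(tkns, t)
print(tkns["P10"])                   # 1: the products have been shipped
```

Had T7 been fired instead of T6, the call `fire(tkns, "T5")` would raise, matching the argument that T5 is never enabled when the card is rejected.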

    With the two procedures checkCreditCardFirst and prepareAndShipProducts, we define below a new procedure called exampleExecutionOfAnOrder. The procedure first checks the validity of the credit card; if the card is accepted, the process then prepares and ships the products.

    proc exampleExecutionOfAnOrder

    checkCreditCardFirst;

    (πn)[Tkns(P7, n) ∧ n = 0]? | prepareAndShipProducts

    endProc

    At this point, it is quite clear how, methodologically, we are able to axiomatize these BPMN flow-control components: we simply define the Petri net modules transformed from BPMN objects, as illustrated in Figure 1, as SCOPE procedures.

    We start by axiomatizing a Petri net module for the start object N in BPMN. Suppose that in the module, place P1 enters transition T, which in turn enters P2. Accordingly, we define a SCOPE procedure

    proc startEvent(N)

    [pre(P1, T) ∧ pre(T, P2)]?; fire(T)

    endProc

    That is, as long as the topological relationship is verified, T is suggested to fire in this procedure. In almost the same way, we can define procedures endEvent and task (definitions are skipped here).

    The definition of a procedure corresponding to a fork gate in BPMN involves one place node that enters (and multiple place nodes that leave) the transition node to be fired. Suppose that P1 enters T, and T in turn enters P2, …, Pn; we have

    proc forkGate(N)

    [pre(P1, T) ∧ pre(T, P2) ∧ ⋯ ∧ pre(T, Pn)]?; fire(T)

    endProc

    The definition of a procedure corresponding to a join gate in BPMN involves one place node that leaves (and multiple place nodes that enter) the transition node to be fired. Suppose that P1 leaves T and P2, …, Pn enter T; we have

    proc joinGate(N)

    [pre(P2, T) ∧ ⋯ ∧ pre(Pn, T) ∧ pre(T, P1)]?; fire(T)

    endProc

    The definition of a procedure corresponding to a decision gate in BPMN involves one place node that enters multiple transition nodes, where each transition node in turn enters a place node. Suppose that P enters T1, …, Tn and, in turn, T1 enters P1, T2 enters P2, …, Tn enters Pn; we have

    proc decisionGate(N)

    [pre(P, T1) ∧ ⋯ ∧ pre(P, Tn) ∧ pre(T1, P1) ∧ ⋯ ∧ pre(Tn, Pn)]?

    ; [fire(T1) | ⋯ | fire(Tn)]

    endProc

    The definition of a procedure corresponding to a merge gate in BPMN involves one place node that multiple transition nodes enter, where for each transition node there is a place node that enters it. Suppose that T1, …, Tn enter P and, in addition, P1 enters T1, P2 enters T2, …, Pn enters Tn; we have

    proc mergeGate(N)

    [pre(T1, P) ∧ ⋯ ∧ pre(Tn, P) ∧ pre(P1, T1) ∧ ⋯ ∧ pre(Pn, Tn)]?

    ; [fire(T1) | ⋯ | fire(Tn)]

    endProc
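All of the module procedures above follow one pattern: a topological test ([ … ]?) followed by a fire, with nondeterministic choice among fires for the decision and merge gates. That pattern can be sketched in Python as guard-then-fire helpers (function names are ours; `pre` is the set of flow pairs and `fire` is any firing function, e.g. the SCOPE token update):

```python
def start_event(pre, fire, p1, t, p2):
    # [pre(P1, T) ∧ pre(T, P2)]?; fire(T)
    assert (p1, t) in pre and (t, p2) in pre
    return fire(t)

def fork_gate(pre, fire, p1, t, outs):
    # one place enters T, T enters each place in outs
    assert (p1, t) in pre and all((t, p) in pre for p in outs)
    return fire(t)

def join_gate(pre, fire, ins, t, p1):
    # each place in ins enters T, T enters one place
    assert all((p, t) in pre for p in ins) and (t, p1) in pre
    return fire(t)

def decision_gate(pre, fire, p, branches, choose):
    # branches: list of (Ti, Pi); `choose` resolves the nondeterministic
    # choice fire(T1) | ... | fire(Tn)
    assert all((p, t) in pre and (t, q) in pre for (t, q) in branches)
    return fire(choose(branches)[0])

pre = {("P1", "T1"), ("T1", "P2")}
fired = []
start_event(pre, fired.append, "P1", "T1", "P2")
print(fired)   # ['T1']
```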


    3.3. The order process example

    This section includes an example of an order process (adapted from Fig. 13(a) in [4]). When an order request arrives from a customer, both tasks "CheckCreditCard" and "Prepare Products" are initiated. If the credit card is OK, the packed products are shipped (the task "ShipProducts" is executed) and the process finishes. Alternatively, if the credit card fails, the process finishes directly. The BPMN representation of the process is depicted in Figure 2.

    The following sentences capture the topology of the Order Process Diagram.

    start(N1), pre(N1, N2), fork(N2), pre(N2, N3), pre(N2, N4),

    activity(N3), pre(N3, N5), activity(N4), pre(N4, N6),

    decision(N5), pre(N5, N8), pre(N5, N6), join(N6), pre(N6, N7),

    activity(N7), pre(N7, N8), merge(N8), pre(N8, N9), end(N9).

    The nine BPMN objects in Figure 2 are defined as the following SCOPE procedures.

    proc startEvent(N1)

    [pre(P1, T1) ∧ pre(T1, P2)]?; fire(T1)

    endProc

    proc forkGate(N2)

    [pre(P2, T2) ∧ pre(T2, P3) ∧ pre(T2, P4)]?; fire(T2)

    endProc

    proc task(N3)

    [pre(P3, T3) ∧ pre(T3, P5)]?; fire(T3)

    endProc

    proc task(N4)

    [pre(P4, T4) ∧ pre(T4, P6)]?; fire(T4)

    endProc

    proc decisionGate(N5)

    [pre(P6, T6) ∧ pre(P6, T7) ∧ pre(T6, P7) ∧ pre(T7, P9)]?

    ; [fire(T6) | fire(T7)]

    endProc

    proc joinGate(N6)

    [pre(P5, T5) ∧ pre(P7, T5) ∧ pre(T5, P8)]?; fire(T5)

    endProc

    proc task(N7)

    [pre(P8, T8) ∧ pre(T8, P10)]?; fire(T8)

    endProc

    proc mergeGate(N8)

    [pre(P9, T9) ∧ pre(P10, T10) ∧ pre(T9, P11) ∧ pre(T10, P11)]?

    ; [fire(T9) | fire(T10)]

    endProc

    proc endEvent(N9)

    [pre(P11, T11) ∧ pre(T11, P12)]?; fire(T11)

    endProc

    It is handy to create further user-defined procedures through macro-actions, i.e., by hierarchically constructing more complex procedures from existing ones. For example, the following procedure presumably specifies all possible executions of the BPMN process specified in Figure 2.

    proc orderProcess

    startEvent(N1);forkGate(N2);

    [task(N3);task(N4)]|[task(N4);task(N3)];

    decisionGate(N5);

    [[joinGate(N6); task(N7)] | (true)?]; mergeGate(N8); endEvent(N9)

    endProc

    Feasibility/executability of the procedure orderProcess can be evaluated by querying the theory with

    ? Do(orderProcess, S0, S),

    where any instantiation of the situation variable S corresponds to an execution from the initial situation S0. We can also work directly on the Petri net resulting from the mapping (Figure 3): we can run queries to test the Petri-net markings resulting from a given execution instance of orderProcess. It can be verified that for some situation Si the number of tokens in P5 is one, indicating that the order process might complete improperly: since the credit card is rejected, the order is discarded, yet the products have already been prepared for shipping, although shipping will never happen in this case.
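The improper-completion scenario just described can be checked with the same token game: if the decision gate fires T7 (card rejected), the run T1, T2, T3, T4, T7, T9, T11 reaches the end event while P5 still holds the token produced by "Prepare Products". A self-contained sketch (the pre pairs are those of Section 3.3; helper names are ours):

```python
PRE = [("P1","T1"), ("T1","P2"), ("P2","T2"), ("T2","P3"), ("T2","P4"),
       ("P3","T3"), ("T3","P5"), ("P4","T4"), ("T4","P6"),
       ("P6","T6"), ("T6","P7"), ("P6","T7"), ("T7","P9"),
       ("P5","T5"), ("P7","T5"), ("T5","P8"), ("P8","T8"), ("T8","P10"),
       ("P9","T9"), ("P10","T10"), ("T9","P11"), ("T10","P11"),
       ("P11","T11"), ("T11","P12")]
PLACES = {f"P{i}" for i in range(1, 13)}

def fire(tkns, t):
    # enabledness check, then token update (consume inputs, produce outputs)
    assert all(tkns[p] >= 1 for (p, t2) in PRE if t2 == t and p in PLACES), t
    new = dict(tkns)
    for (m, n) in PRE:
        if n == t and m in PLACES: new[m] -= 1
        if m == t and n in PLACES: new[n] += 1
    return new

tkns = {p: 0 for p in PLACES}; tkns["P1"] = 1
for t in ["T1", "T2", "T3", "T4", "T7", "T9", "T11"]:   # card rejected: T7
    tkns = fire(tkns, t)
print(tkns["P5"], tkns["P12"])   # 1 1: process ended, yet a token is stuck in P5
```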


    4. Conclusive remarks

    This paper proposes to axiomatize the Petri net modules, resulting from transforming BPMN objects, as SCOPE-based procedures. In fact, the impressive features of high adaptability and extensibility are shared by all Situation-Calculus-based Basic Action Theories, including SCOPE. In a similar way, we recently used SCOPE to aggregate event patterns in Supply Chain Management [21], and to mitigate adverse interactions in the concurrent application of clinical practice guidelines for comorbid patients [11,20]. Ongoing approaches that use first-order logic directly for integrating medical treatments to manage multimorbidity include [11,27,12,13,22]. Formal process knowledge representation and reasoning might also find applicability in combination with probabilistic graphical models to build frameworks and systems in areas such as information retrieval [23,24].

    Our previous research ([18] and [19]) indicates that we can easily implement software tools using the logic programming language Prolog [2]. Further, we might consider adopting a tractable variant of the Situation Calculus (for example, the sequence of research on this topic proposed in [6,7,5] is quite interesting and relevant) so that an actual implementation becomes practically feasible. Given the rapid development of first-order theorem proving technologies in recent years, theorem provers have recently been used to support reasoning in ontologies [8]. As part of our initiative, we plan to adapt theorem proving technologies for process ontologies such as SCOPE theories.

    Finally, we remark that our approach is able to incorporate processes specified with temporal constraints, since our previous research reported in [19] presents a variant of SCOPE that axiomatizes Time Petri nets.


    Acknowledgments

    We are thankful for the insightful reviews we received, which helped improve both the content and presentation of the paper. We gratefully acknowledge the support of the NSERC (Natural Sciences and Engineering Research Council of Canada) CREATE (Collaborative Research and Training Experience Program) award in ADERSIM (Advanced Disaster, Emergency and Rapid-response Simulation)3, the ORF-RE (Ontario Research Fund - Research Excellence)4 award in BRAIN Alliance5, Dapasoft Inc. in Toronto, ON, Canada6, and the York Research Chairs Program.

    3http://www.yorku.ca/adersim/NSERC/

    4https://www.ontario.ca/page/ontario-research-fund-research-excellence

    5http://www.brainalliance.ca/

    6http://dapasoft.com/


    [40] [ G. Liu, H. Xu and S. Yan, Exact subspace segmentation and outlier detection by low-rank representation, International Conference on Artificial Intelligence and Statistics, 2012, 703-711.
    [41] [ G. Liu and S. Yan, Latent low-rank representation for subspace segmentation and feature extraction, in IEEE International Conference on Computer Vision, IEEE, 2011, 1615-1622.
    [42] [ J. Liu, P. Musialski, P. Wonka and J. Ye, Tensor completion for estimating missing values in visual data, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(2013), 208-220.
    [43] [ R. Liu, Z. Lin, Z. Su and J. Gao, Linear time principal component pursuit and its extensions using l1 filtering, Neurocomputing, 142(2014), 529-541.
    [44] [ R. Liu, Z. Lin, F. Torre and Z. Su, Fixed-rank representation for unsupervised visual learning, IEEE Conference on Computer Vision and Pattern Recognition, 2012, 598-605.
    [45] [ C. Lu, J. Feng, Z. Lin and S. Yan, Correlation adaptive subspace segmentation by trace lasso, International Conference on Computer Vision, 2013, 1345-1352.
    [46] [ C. Lu, Z. Lin and S. Yan, Smoothed low rank and sparse matrix recovery by iteratively reweighted least squared minimization, IEEE Transactions on Image Processing, 24(2015), 646-654.
    [47] [ C. Lu, H. Min, Z. Zhao, L. Zhu, D. Huang and S. Yan, Robust and efficient subspace segmentation via least squares regression, European Conference on Computer Vision, 7578(2012), 347-360.
    [48] [ C. Lu, C. Zhu, C. Xu, S. Yan and Z. Lin, Generalized singular value thresholding, AAAI Conference on Artificial Intelligence, 2015, 1805-1811.
    [49] [ X. Lu, Y. Wang and Y. Yuan, Graph-regularized low-rank representation for destriping of hyperspectral images, IEEE Transactions on Geoscience and Remote Sensing, 51(2013), 4009-4018.
    [50] [ Y. Ma, S. Soatto, J. Kosecka and S. Sastry, An Invitation to 3-D Vision:From Images to Geometric Models, 1st edition, Springer, 2004.
    [51] [ K. Min, Z. Zhang, J. Wright and Y. Ma, Decomposing background topics from keywords by principal component pursuit, in ACM International Conference on Information and Knowledge Management, 2010, 269-278.
    [52] [ Y. Ming and Q. Ruan, Robust sparse bounding sphere for 3D face recognition, Image and Vision Computing, 30(2012), 524-534.
    [53] [ L. Mukherjee, V. Singh, J. Xu and M. Collins, Analyzing the subspace structure of related images:Concurrent segmentation of image sets, European Conference on Computer Vision, 7575(2012), 128-142.
    [54] [ Y. Nesterov, A method of solving a convex programming problem with convergence rate O(1/k2), (Russian) Dokl. Akad. Nauk SSSR, 269(1983), 543-547.
    [55] [ Y. Panagakis and C. Kotropoulos, Automatic music tagging by low-rank representation, International Conference on Acoustics, Speech, and Signal Processing, 2012, 497-500.
    [56] [ Y. Peng, A. Ganesh, J. Wright, W. Xu and Y. Ma, RASL:Robust alignment by sparse and low-rank decomposition for linearly correlated images, IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(2012), 2233-2246.
    [57] [ J. Qian, J. Yang, F. Zhang and Z. Lin, Robust low-rank regularized regression for face recognition with occlusion, The Workshop of IEEE Conference on Computer Vision and Pattern Recognition, 2014, 21-26.
    [58] [ X. Ren and Z. Lin, Linearized alternating direction method with adaptive penalty and warm starts for fast solving transform invariant low-rank textures, International Journal of Computer Vision, 104(2013), 1-14.
    [59] [ A. P. Singh and G. J. Gordon, A unified view of matrix factorization models, in Proceedings of Machine Learning and Knowledge Discovery in Databases, 5212(2008), 358-373.
    [60] [ H. Tan, J. Feng, G. Feng, W. Wang and Y. Zhang, Traffic volume data outlier recovery via tensor model, Mathematical Problems in Engineering, 2013(2013), 164810.
    [61] [ M. Tso, Reduced-rank regression and canonical analysis, Journal of the Royal Statistical Society, Series B (Methodological), 43(1981), 183-189.
    [62] [ R. Vidal, Y. Ma and S. Sastry, Generalized principal component analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(2005), 1945-1959.
    [63] [ R. Vidal, Subspace clustering, IEEE Signal Processing Magazine, 28(2011), 52-68.
    [64] [ J. Wang, V. Saligrama and D. Castanon, Structural similarity and distance in learning, Annual Allerton Conf. Communication, Control and Computing, 2011, 744-751.
    [65] [ Y.-X. Wang and Y.-J. Zhang, Nonnegative matrix factorization:A comprehensive review, IEEE Transactions on Knowledge and Data Engineering, 25(2013), 1336-1353.
    [66] [ S. Wei and Z. Lin, Analysis and improvement of low rank representation for subspace segmentation, arXiv:1107.1561.
    [67] [ Z. Wen, W. Yin and Y. Zhang, Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm, Mathematical Programming Computation, 4(2012), 333-361.
    [68] [ J. Wright, A. Ganesh, S. Rao, Y. Peng and Y. Ma, Robust principal component analysis:Exact recovery of corrupted low-rank matrices via convex optimization, Advances in Neural Information Processing Systems, 2009, 2080-2088.
    [69] [ L. Wu, A. Ganesh, B. Shi, Y. Matsushita, Y. Wang and Y. Ma, Robust photometric stereo via low-rank matrix completion and recovery, Asian Conference on Computer Vision, 2010, 703-717.
    [70] [ L. Yang, Y. Lin, Z. Lin and H. Zha, Low rank global geometric consistency for partial-duplicate image search, International Conference on Pattern Recognition, 2014, 3939-3944.
    [71] [ M. Yin, J. Gao and Z. Lin, Laplacian regularized low-rank representation and its applications, IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2016), 504-517.
    [72] [ Y. Yu and D. Schuurmans, Rank/norm regularization with closed-form solutions:Application to subspace clustering, Uncertainty in Artificial Intelligence, 2011, 778-785.
    [73] [ H. Zhang, Z. Lin and C. Zhang, A counterexample for the validity of using nuclear norm as a convex surrogate of rank, European Conference on Machine Learning, 8189(2013), 226-241.
    [74] [ H. Zhang, Z. Lin, C. Zhang and E. Chang, Exact recoverability of robust PCA via outlier pursuit with tight recovery bounds, AAAI Conference on Artificial Intelligence, 2015, 3143-3149.
    [75] [ H. Zhang, Z. Lin, C. Zhang and J. Gao, Robust latent low rank representation for subspace clustering, Neurocomputing, 145(2014), 369-373.
    [76] [ H. Zhang, Z. Lin, C. Zhang and J. Gao, Relation among some low rank subspace recovery models, Neural Computation, 27(2015), 1915-1950.
    [77] [ T. Zhang, B. Ghanem, S. Liu and N. Ahuja, Low-rank sparse learning for robust visual tracking, European Conference on Computer Vision, 7577(2012), 470-484.
    [78] [ Z. Zhang, A. Ganesh, X. Liang and Y. Ma, TILT:Transform invariant low-rank textures, International Journal of Computer Vision, 99(2012), 1-24.
    [79] [ Z. Zhang, X. Liang and Y. Ma, Unwrapping low-rank textures on generalized cylindrical surfaces, International Conference on Computer Vision, 2011, 1347-1354.
    [80] [ Z. Zhang, Y. Matsushita and Y. Ma, Camera calibration with lens distortion from low-rank textures, IEEE Conference on Computer Vision and Pattern Recognition, 2011, 2321-2328.
    [81] [ Y. Zheng, X. Zhang, S. Yang and L. Jiao, Low-rank representation with local constraint for graph construction, Neurocomputing, 122(2013), 398-405.
    [82] [ X. Zhou, C. Yang, H. Zhao and W. Yu, Low-rank modeling and its applications in image analysis, ACM Computing Surveys, 47(2014), p36.
    [83] [ G. Zhu, S. Yan and Y. Ma, Image tag refinement towards low-rank, content-tag prior and error sparsity, in International conference on Multimedia, 2010, 461-470.
    [84] [ L. Zhuang, H. Gao, Z. Lin, Y. Ma, X. Zhang and N. Yu, Non-negative low rank and sparse graph for semi-supervised learning, IEEE International Conference on Computer Vision and Pattern Recognition, 2012, 2328-2335.
    [85] [ W. Zuo and Z. Lin, A generalized accelerated proximal gradient approach for total-variation-based image restoration, IEEE Transactions on Image Processing, 20(2011), 2748-2759.
  • © 2016 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)