The Business Process Model and Notation (BPMN) [15] has been widely adopted in recent years as one of the standard languages for the visual description of business processes. In particular, BPMN has been used extensively to facilitate effective communication among technical experts, business users, and other stakeholders within a work group, whether for business process design and implementation or for problem solving in business domains.
BPMN, however, does not have a formal semantics. Consequently, rigorously specifying the dynamic behaviors of objects in a BPMN-specified business process model can be challenging and error-prone. For critical systems (e.g., a medical process specified in BPMN), semantic ambiguities are costly from every perspective and should be eliminated as far as possible. For these reasons, several efforts have been made in recent years to equip BPMN with a formal semantics. Among them, [3] and [4] in particular proposed to map a BPMN model into a Petri net, a mathematical modeling language (although a Petri net can be graphical too) that has a precise execution semantics. In addition, decades of research on Petri nets offer well-established theories to support the analysis of dynamic properties in Petri-net-specified systems. In [3] and [4], a given BPMN model is first dissected into several BPMN elements. For each type of element, a mapping from the element to a Petri net module is proposed. After the dissection and mapping steps, these Petri net modules are integrated to construct a single Petri net.
In our previous work ([17], [18], [19]), meanwhile, we presented a straightforward first-order-logic (FOL) based framework for Petri nets. More specifically, we have shown that: a) we can represent Petri nets and their variants as Situation-Calculus1-based action theories, where the theories for basic Petri nets are called SCOPE for short, standing for Situation Calculus Ontology for PEtri nets ([17], [18]); b) given an instance of a SCOPE theory, we can further build Golog procedures2, where sequential, iterative, or nondeterministic composite sequences of transition firings in Petri nets and their variants are axiomatized by defining macros; these macros are built on top of more primitive procedures and the only basic action in the domain, transition firing [19]; c) executability testing of Golog procedures via theorem proving can be implemented through efficient logic programming in Prolog ([18], [19]).
1The Situation Calculus ([10,16]) is a particular formal language for describing changes upon actions in dynamical domains.
2In the Situation Calculus, a Golog procedure defines a complex action that is composed of basic actions.
In this paper, we show that the mapping method proposed in [3] and [4] can be naturally extended into the SCOPE context. That is, using SCOPE, we can easily define a Petri net module, resulting from the mapping of a BPMN object, as a Golog complex procedure. These Golog procedures act as interfaces that encapsulate the Petri net, shielding it from direct access by end users. Consequently, stakeholders can still work at the BPMN level, while the model is enriched with a Petri-net-specified formal semantics. Additionally, with these procedures, users can write more complicated user-defined BPMN procedures in a hierarchical way. Since a BPMN procedure can be further translated into a Prolog program [19], our approach supports automated verification of important system dynamic properties, such as the executability of these procedures.
The remainder of this paper is organized as follows. Section 2 provides an introductory review of the key technologies employed in this paper. Section 3, enriched with an example, is the core section that explains our approach. Concluding remarks are presented in Section 4.
In this section, we give a brief introduction to BPMN and Petri nets. The method of mapping BPMN onto Petri nets, as presented in Section 3 of [4], is then briefly reviewed. We also introduce the Situation Calculus, Golog, and SCOPE.
Business Process Model and Notation (BPMN) is a notational system for the graphical specification of processes in business models. BPMN was developed and is maintained by the Object Management Group (OMG) [15]. The control-flow-related elements of BPMN, which are the focus of this paper, include events, tasks, and gateways.
An event denotes the happening of something in a model and is represented as a circle. In particular, a start/end event signals the start/finish of a process. An activity denotes work to be done in a model and is represented as a rounded-corner rectangle. A task is an atomic activity. Gateways are diamond-shaped quadrilaterals that denote different routing constructs. Among them, a fork gateway (AND-split) creates concurrent flows, and a join gateway (AND-join) synchronises these flows. A decision gateway (XOR-split) selects exactly one flow from a set of flows based on certain criteria (data or event), and a merge gateway (XOR-join) merges the alternative flows in the set into one flow. When the choice is non-exclusive, the pair of gateways OR-split and OR-join is applied instead. Graphically, these BPMN objects are connected to each other by arrows. Figure 2 shows an example business process in BPMN (adapted from Fig. 13(a) in [4]).
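The control-flow vocabulary above can be captured by a small typed structure. The following Python sketch is purely illustrative (the class and relation names are ours, not the paper's):

```python
from dataclasses import dataclass, field

# Kinds of BPMN control-flow objects discussed above.
KINDS = {"start", "end", "task", "fork", "join", "decision", "merge"}

@dataclass
class BPMNObject:
    name: str
    kind: str  # one of KINDS

@dataclass
class BPMNProcess:
    objects: dict = field(default_factory=dict)   # name -> BPMNObject
    flows: set = field(default_factory=set)       # (source, target) arrows

    def add(self, name, kind):
        assert kind in KINDS
        self.objects[name] = BPMNObject(name, kind)

    def connect(self, src, dst):
        self.flows.add((src, dst))

# A tiny fragment: start event -> task A -> end event.
p = BPMNProcess()
p.add("s", "start"); p.add("A", "task"); p.add("e", "end")
p.connect("s", "A"); p.connect("A", "e")
```

Representing flows as a binary relation makes the well-formedness criteria of Section 3 directly checkable over this structure.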
A Petri net (PN) [14] is a pair (N, M0), where N = (P, T, F) is a net consisting of a set P of places, a set T of transitions, and a flow relation F ⊆ (P × T) ∪ (T × P), and M0 is the initial marking that assigns to each place a number of tokens.
The method of mapping a BPMN process into a Petri net introduced in [4] assumes in the first place that the BPMN process is "well-formed". A well-formed BPMN process satisfies several conditions (e.g., a start event has one outgoing flow and no incoming flow). It has been shown that a given BPMN process can always be transformed into a well-formed one ([1,9,25,26]). The mapping process itself is straightforward and is summarized in Figure 1 (several issues are identified and addressed in [4]; they are not discussed here). In the figure, note that the places in the Petri net modules/components are dashed, meaning that these places act as connectors linking multiple components into a single Petri net during mapping. As an example, the Petri net in Figure 3 results from performing the mapping on the BPMN model in Figure 2.
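The dissect-map-glue pipeline can be sketched programmatically. The fragment below covers only task objects and is a simplification of the mapping in [4]; all function and node names are our own:

```python
# A task X on a flow p_in -> X -> p_out becomes a transition t_X with input
# place p_in and output place p_out. Shared (dashed) places glue modules.

def task_module(task, p_in, p_out):
    """Return one Petri net module as (places, transitions, pre, post)."""
    t = "t_" + task
    return {p_in, p_out}, {t}, {(p_in, t)}, {(t, p_out)}

def glue(modules):
    """Union the modules; shared boundary places are identified by name."""
    P, T, pre, post = set(), set(), set(), set()
    for Pm, Tm, prem, postm in modules:
        P |= Pm; T |= Tm; pre |= prem; post |= postm
    return P, T, pre, post

# Two tasks in sequence: p1 -[A]-> p2 -[B]-> p3; p2 is the shared connector.
net = glue([task_module("A", "p1", "p2"), task_module("B", "p2", "p3")])
```

Because modules only overlap on their boundary places, gluing is a plain set union, which is what makes the mapping compositional.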
The Situation Calculus is a logical language for representing actions and change in dynamical domains. It was first proposed by McCarthy and Hayes in 1969 [10]. The language
The theory
Primitive Action
Fluent
Situation-Independent relations
•
•
•
The Foundational Axioms
(∀s,t)(Poss(fire(t),s) ≡ (∀p)(pre(p,t) ⊃ Tkns(p,s) ≥ 1))

defines the condition for a transition node t to be firable in situation s: fire(t) is possible if and only if every input place p of t holds at least one token.
(∀s,p,a,n)(Tkns(p,do(a,s)) = n ≡ γf(p,n,a,s) ∨
(Tkns(p,s) = n ∧ ¬(∃n′)γf(p,n′,a,s))),
where
•
•
The above Successor State Axiom summarizes all conditions under which the number of tokens at place p becomes n after performing action a in situation s.
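Read operationally, the two axioms say: fire(t) is possible in s iff every input place of t holds a token, and firing moves one token over each arc. The Python sketch below gives this operational reading, with situations represented as action histories; it is an illustration, not the SCOPE encoding itself, and the small net used is ours:

```python
# Situations are lists of fired transitions; the fluent Tkns is evaluated by
# replaying the history from the initial marking, mirroring the successor
# state axiom above.

pre  = {("p1", "t1"), ("p2", "t1")}   # pre(p, t): p is an input place of t
post = {("t1", "p3")}                 # t1 outputs to p3
m0   = {"p1": 1, "p2": 1, "p3": 0}    # Tkns(p, S0)

def tkns(p, s):
    """Number of tokens at place p in situation s (a list of firings)."""
    n = m0[p]
    for t in s:
        if (p, t) in pre:             # t consumed a token from p
            n -= 1
        if (t, p) in post:            # t produced a token in p
            n += 1
    return n

def poss_fire(t, s):
    """Precondition axiom: every input place of t holds at least one token."""
    return all(tkns(p, s) >= 1 for p, t2 in pre if t2 == t)

s0 = []                               # the initial situation S0
assert poss_fire("t1", s0)
s1 = s0 + ["t1"]                      # the situation do(fire(t1), S0)
```

After the firing, t1 is no longer enabled, since its input places p1 and p2 are empty, which matches the precondition axiom.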
As an example, we define the Petri net in Figure 3 as a SCOPE theory. The axiomatic part of SCOPE, including the Foundational Axioms, Precondition Axioms, and Successor State Axioms, remains unchanged. We only need to specify the initial marking and the situation-independent, topological structure of the Petri net in Figure 3. That is, the number of tokens at place
pre(P1,T1), pre(T1,P2), …, pre(T11,P12).
This section has two parts. We first formally specify the restrictions on well-formed BPMN processes; this facilitates the implementation of automated tools for formally checking whether a BPMN process is well-formed (using a FOL theorem prover, for example). We then demonstrate how the Petri net modules for their corresponding BPMN objects can be formally described through further axiomatization on top of SCOPE.
From Section 3 of [4], a well-formed BPMN process has to satisfy the following five criteria:
1. a start event or an exception event has exactly one outgoing (sequence) flow but no incoming flow;
2. an end event has exactly one incoming flow but no outgoing flow;
3. activities and intermediate events have exactly one incoming flow and one outgoing flow;
4. fork and decision gateways have one incoming flow and more than one outgoing flow; and
5. join and merge gateways have one outgoing flow and more than one incoming flow.
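The five criteria can also be evaluated mechanically over a flow relation. The following sketch is ours (function names, kind labels, and the sample process are illustrative, not from [4]):

```python
# pre_rel holds the sequence flows (x, y): "x is followed by y";
# kind maps each object name to its BPMN kind.
def well_formed(kind, pre_rel):
    """Check the five well-formedness criteria for a BPMN process."""
    def outs(y): return [b for a, b in pre_rel if a == y]
    def ins(y):  return [a for a, b in pre_rel if b == y]
    for y, k in kind.items():
        if k in ("start", "exception") and not (len(outs(y)) == 1 and not ins(y)):
            return False                              # criterion 1
        if k == "end" and not (len(ins(y)) == 1 and not outs(y)):
            return False                              # criterion 2
        if k in ("task", "intermediate") and not (len(ins(y)) == 1 == len(outs(y))):
            return False                              # criterion 3
        if k in ("fork", "decision") and not (len(ins(y)) == 1 and len(outs(y)) > 1):
            return False                              # criterion 4
        if k in ("join", "merge") and not (len(outs(y)) == 1 and len(ins(y)) > 1):
            return False                              # criterion 5
    return True

kind = {"s": "start", "A": "task", "e": "end"}
flows = {("s", "A"), ("A", "e")}
assert well_formed(kind, flows)
```

A FOL theorem prover would check the same conditions declaratively; the procedural version above is merely a convenient sanity check.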
In this section, a first-order axiomatization of these criteria is presented (assuming that objects in the domain are sorted in the logical sense).
•
•
•
•
•
•
• A start/exception event has one outgoing flow and no incoming flow:

(∀x,y)((start(y) ∨ exception(y)) ⊃ ¬pre(x,y)),

(∀y)((start(y) ∨ exception(y)) ⊃ ((∃x)pre(y,x) ∧
¬(∃x1,x2)(pre(y,x1) ∧ pre(y,x2) ∧ x1 ≠ x2))).
• An end event has one incoming flow and no outgoing flow:

(∀x,y)(end(y) ⊃ ¬pre(y,x)),

(∀y)(end(y) ⊃ ((∃x)pre(x,y) ∧
¬(∃x1,x2)(pre(x1,y) ∧ pre(x2,y) ∧ x1 ≠ x2))).
• An activity/intermediate event has exactly one incoming flow and exactly one outgoing flow:

(∀y)((activity(y) ∨ interE(y)) ⊃
((∃x)pre(y,x) ∧ ¬(∃x1,x2)(pre(y,x1) ∧ pre(y,x2) ∧ x1 ≠ x2)) ∧
((∃z)pre(z,y) ∧ ¬(∃z1,z2)(pre(z1,y) ∧ pre(z2,y) ∧ z1 ≠ z2))).
• A fork or decision gateway has one incoming flow and more than one outgoing flow:

(∀y)((fork(y) ∨ decision(y)) ⊃
((∃x)pre(x,y) ∧ ¬(∃x1,x2)(pre(x1,y) ∧ pre(x2,y) ∧ x1 ≠ x2) ∧
(∃x1,x2)(pre(y,x1) ∧ pre(y,x2) ∧ x1 ≠ x2))).
• A join or merge gateway has one outgoing flow and more than one incoming flow:

(∀y)((join(y) ∨ merge(y)) ⊃
((∃x)pre(y,x) ∧ ¬(∃x1,x2)(pre(y,x1) ∧ pre(y,x2) ∧ x1 ≠ x2) ∧
(∃x1,x2)(pre(x1,y) ∧ pre(x2,y) ∧ x1 ≠ x2))).
It is known from our previous work that, given a SCOPE theory, we can write SCOPE procedures by defining macros, where these macros are built on top of more primitive, existing procedures and the only basic action in the domain, transition firing. With respect to the Petri net in Figure 3, we can define a set of simple SCOPE procedures. For example,
proc
endProc
proc
endProc
The procedure
The procedure
With two procedures:
proc
endProc
At this point, it is clear how we can methodologically axiomatize these BPMN flow-control components: we simply define the Petri-net modules transformed from the BPMN objects, as illustrated in Figure 1, as SCOPE procedures.
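The idea of wrapping a module as a procedure over the single primitive action fire(t) can be rendered as the following illustrative Python sketch; it is not the Situation Calculus encoding itself, and all names and the example net are ours:

```python
# Procedures map a marking to the list of possible successor markings;
# composing them mirrors Golog's sequence operator over the primitive fire(t).

def prim_fire(t, pre, post):
    """Primitive action: fire transition t if all its input places are marked."""
    def run(m):
        if not all(m.get(p, 0) >= 1 for p, t2 in pre if t2 == t):
            return []                       # precondition axiom violated
        m2 = dict(m)
        for p, t2 in pre:
            if t2 == t:
                m2[p] -= 1                  # consume one token per input arc
        for t2, p in post:
            if t2 == t:
                m2[p] = m2.get(p, 0) + 1    # produce one token per output arc
        return [m2]
    return run

def seq(*procs):
    """Golog-style sequence: run the procedures one after another."""
    def run(m):
        states = [m]
        for proc in procs:
            states = [m2 for s in states for m2 in proc(s)]
        return states
    return run

# A fork module: input place p0, transition t_f, output places p1 and p2,
# followed by a task module p1 -> t_a -> p3.
pre  = {("p0", "t_f"), ("p1", "t_a")}
post = {("t_f", "p1"), ("t_f", "p2"), ("t_a", "p3")}
fork = prim_fire("t_f", pre, post)
out = fork({"p0": 1})

final = seq(prim_fire("t_f", pre, post), prim_fire("t_a", pre, post))({"p0": 1})
```

Nondeterministic choice and iteration can be added analogously, mirroring the remaining Golog connectives used for decision and merge gates.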
We start by axiomatizing a Petri-net module for the start object
proc
endProc
That is, as long as the topological relationship is verified,
The definition of a procedure corresponding to a fork gate in BPMN involves one place node that enters (and multiple place nodes that leave) the transition node to be fired. Suppose that
proc
endProc
The definition of a procedure corresponding to a join gate in BPMN involves one place node that leaves (and multiple place nodes that enter) the transition node to be fired. Suppose
proc
endProc
The definition of a procedure corresponding to a decision gate in BPMN involves one place node that enters at least one transition node, where each transition node in turn enters a place node. Suppose that
proc
;
endProc
The definition of a procedure corresponding to a merge gate in BPMN involves one place node that multiple transition nodes enter, where for each transition node there is a place node that enters the node. Suppose that
proc
;
endProc
This section presents an example of an order process (adapted from Fig. 13(a) in [4]). When an order request arrives from a customer, the tasks "CheckCreditCard" and "PrepareProducts" are both initiated. If the credit card check succeeds, the packed products are shipped (the task "ShipProducts" is executed) and the process finishes. Alternatively, if the credit card check fails, the process finishes directly. The BPMN representation of the process is depicted in Figure 2.
The following sentences capture the topology of the Order Process Diagram.
The nine BPMN objects in Figure 2 are defined as the following SCOPE procedures.
proc
endProc
proc
endProc
proc
endProc
proc
endProc
proc
;
endProc
proc
endProc
proc
endProc
proc
;
endProc
proc
endProc
It is straightforward to create further user-defined procedures through macro-actions, i.e., by hierarchically constructing more complex procedures from existing ones. For example, the following procedure specifies all possible executions of the BPMN process in Figure 2.
proc
endProc
The feasibility/executability of the procedure can be tested by posing the query

?− Do(orderProcess, S0, S),

where any instantiation of the situation variable S returned by the query corresponds to a legal execution of the process.
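The effect of such a query can be imitated by a small search: find some sequence of transition firings that drives the net from the initial marking to a marking with the goal place filled. The sketch below uses a simplified order-process net whose place and transition names are our guesses from the description of Figure 2, not the paper's exact net:

```python
from collections import deque

# Simplified order-process net: a fork starts the credit check and product
# preparation in parallel; shipping needs both credit ok and prepared goods,
# while a declined credit card ends the process directly.
pre = {("start", "t_fork"),
       ("c0", "t_check"), ("q0", "t_prepare"),
       ("c1", "t_ship"), ("q1", "t_ship"),
       ("c1", "t_fail")}
post = {("t_fork", "c0"), ("t_fork", "q0"),
        ("t_check", "c1"), ("t_prepare", "q1"),
        ("t_ship", "done"), ("t_fail", "done")}

def enabled(t, m):
    return all(m.get(p, 0) >= 1 for p, t2 in pre if t2 == t)

def fire(t, m):
    m2 = dict(m)
    for p, t2 in pre:
        if t2 == t:
            m2[p] -= 1
    for t2, p in post:
        if t2 == t:
            m2[p] = m2.get(p, 0) + 1
    return m2

def executable(m0, goal):
    """Breadth-first search for a firing sequence marking the goal place."""
    trans = {t for _, t in pre}
    queue, seen = deque([(m0, [])]), set()
    while queue:
        m, hist = queue.popleft()
        if m.get(goal, 0) >= 1:
            return hist                      # an executable run was found
        key = frozenset(m.items())
        if key in seen:
            continue
        seen.add(key)
        for t in trans:
            if enabled(t, m):
                queue.append((fire(t, m), hist + [t]))
    return None                              # the procedure is not executable

run = executable({"start": 1}, "done")
```

A Prolog/Golog interpreter performs essentially this search declaratively, with the returned situation term playing the role of the firing sequence found here.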
This paper proposes to axiomatize the Petri net modules resulting from transforming BPMN objects as SCOPE-based procedures. In fact, the notable adaptability and extensibility exhibited here are shared by all Situation-Calculus-based Basic Action Theories, including SCOPE. In a similar way, we recently used SCOPE to aggregate event patterns in Supply Chain Management [21], and to mitigate adverse interactions in the concurrent application of clinical practice guidelines for comorbid patients [11,20]. Ongoing approaches that use first-order logic directly for integrating medical treatments to manage multimorbidity include [11,27,12,13,22]. Formal process knowledge representation and reasoning might also find applicability in combination with probabilistic graphical models for building frameworks and systems in areas such as information retrieval [23,24].
Our previous research ([18] and [19]) indicates that we can readily implement software tools in the logic programming language Prolog [2]. Further, we might consider adopting a tractable variant of the Situation Calculus (for example, the line of research on this topic in [6,7,5] is quite interesting and relevant) so that an actual implementation becomes practically feasible. Given the rapid development of first-order theorem-proving technologies in recent years, theorem provers have been used to support reasoning in ontologies [8]. As part of our initiative, we plan to adapt theorem-proving technologies for process ontologies such as SCOPE theories.
Finally, we remark that our approach is able to incorporate processes specified with temporal constraints, since our previous research [19] presented a variant of SCOPE that axiomatizes Time Petri nets.
We are thankful for the insightful reviews that helped improve both the content and presentation of the paper. We gratefully acknowledge the support of an NSERC (Natural Sciences and Engineering Research Council of Canada) CREATE (Collaborative Research and Training Experience Program) award in ADERSIM (Advanced Disaster, Emergency and Rapid-response Simulation)3, an ORF-RE (Ontario Research Fund - Research Excellence)4 award in the BRAIN Alliance5, Dapasoft Inc. in Toronto, ON, Canada6, and the York Research Chairs Program.
3http://www.yorku.ca/adersim/NSERC/
4https://www.ontario.ca/page/ontario-research-fund-research-excellence