
Practitioners employ operator splitting methods—such as alternating direction method of multipliers (ADMM) and its "dual" Douglas-Rachford method (DR)—to solve many kinds of optimization problems. We provide a gentle introduction to these algorithms, and illustrations of their duality-like relationship in the context of solving basis pursuit problems for audio signal recovery. Recently, researchers have used the dynamical systems associated with the iterates of splitting methods to motivate the development of schemes to improve performance. These developments include a class of methods that act by iteratively minimizing surrogates for a Lyapunov function for the dynamical system. An exemplar of this class is currently state-of-the-art for the feasibility problem of finding wavelets with special structure. Early experimental evidence has also suggested that, when implemented in a primal-dual (ADMM and DR) framework, this exemplar may provide improved performance for basis pursuit problems. We provide a reasonable way to compute the updates for this exemplar, and we study the application of this method to the aforementioned basis pursuit audio problems. We provide experimental results and visualizations of the dynamical system for the dual DR sequence. We observe that for highly structured problems with real data, the algorithmic behavior is noticeably different than for randomly generated problems.
Citation: Andrew Calcan, Scott B. Lindstrom. The ADMM algorithm for audio signal recovery and performance modification with the dual Douglas-Rachford dynamical system[J]. AIMS Mathematics, 2024, 9(6): 14640-14657. doi: 10.3934/math.2024712
Throughout, $\mathcal{H}$ denotes a Hilbert space. Additionally, $\Gamma_0(\mathcal{H})$ denotes the set of proper lower semicontinuous convex functions from $\mathcal{H}$ to $(-\infty, +\infty]$. For readers unfamiliar with these notions, it suffices to know that the functions from the problems described herein all belong to the class $\Gamma_0(\mathcal{H})$.
The alternating direction method of multipliers (ADMM) solves problems of the form
$$\underset{x \in \mathbb{R}^n,\ z \in \mathbb{R}^m}{\text{minimize}} \quad f(x) + g(z) \quad \text{subject to} \quad Mx = z, \tag{p}$$
where $f$ and $g$ are convex functions, $M \in \mathbb{R}^{m \times n}$, $x \in \mathbb{R}^n$, and $z \in \mathbb{R}^m$. The ADMM algorithm deals with the two functions $f$ and $g$ separately, and so it is said to be an operator splitting method. Depending upon how we choose the functions $f$ and $g$, we can model many different problems in this very flexible form. These include regression and classification problems, image and signal denoising, probabilistic optimization problems like progressive hedging, and basis pursuit [13]. We can use the latter to recover an approximate audio signal from a compressed set of sampling data.
This paper is outlined as follows. In Section 2 we introduce the basis pursuit audio problem. In Section 3 we introduce ADMM, motivating with basis pursuit. In Section 4, we describe the Douglas-Rachford method (DR) and use the basis pursuit problem to illustrate DR's duality-like relationship with ADMM. In Section 5, we describe the exemplar Lyapunov surrogate method with which we will work and provide, in Theorem 2, a clean way to compute it. The remainder of the paper contains our experimental results, which we summarize in Section 7. This paper is intended to be widely accessible, including to researchers who are unfamiliar with ADMM. As such, we adopt a somewhat tutorial approach to the topic.
When digital audio is recorded, a microphone's diaphragm vibrates in response to sound waves, and measurements are taken of the diaphragm's displacement, which we model as a function $f$ of time. A typical rate at which these samples are collected is 44,100 per second (44.1 kHz). Recording $n$ samples over a time interval of length $T$, we denote by $b_k$ the measurement value at the $k$th sample time:
$$b_k := f\!\left(\frac{kT}{n}\right), \quad k \in \{1, \dots, n\}. \tag{2.1}$$
We can define a continuous function that agrees with these samples,
$$\hat{f}(t) := \sum_{j=1}^{n} x_j\, \psi_j(t),$$
where each $\psi_j$ is a suitably shifted and dilated cosine function. The unique constants $x_1, \dots, x_n$ can be computed as follows. First, writing down the equality $\hat{f}(kT/n) = b_k$ for the $k$th sample yields
$$\sum_{j=1}^{n} x_j\, \psi_j\!\left(\frac{kT}{n}\right) = b_k.$$
Since we have $n$ such equalities (one for each sample $b_k$) in $n$ unknowns ($x_1, \dots, x_n$), we can write them as the following linear system:
$$\Psi x = b, \quad \text{where} \quad \Psi \in \mathbb{R}^{n \times n} \text{ has entries } \Psi_{kj} := \psi_j\!\left(\frac{kT}{n}\right).$$
The matrix $\Psi$ is the inverse discrete cosine transform matrix, and it is orthogonal. Its inverse $\Psi^{-1} = \Psi^{\mathsf T}$—the discrete cosine transform matrix—allows us to compute the values as $x = \Psi^{\mathsf T} b$. The nonzero values in $x$ determine the shifts and dilates of the cosine function (i.e., the frequencies) that are present in the signal $\hat{f}$. For this reason, $x$ is said to be the spectrum of $\hat{f}$.
To reduce the size of audio files, we can delete some of the samples. The deletion of a sample $b_k$ is reflected in the system by the removal of row $k$ from $\Psi$ and from $b$. For example, removing samples $b_2$ and $b_4$, the previous linear system becomes
$$\begin{bmatrix} \Psi_{11} & \Psi_{12} & \cdots & \Psi_{1n} \\ \Psi_{31} & \Psi_{32} & \cdots & \Psi_{3n} \\ \Psi_{51} & \Psi_{52} & \cdots & \Psi_{5n} \\ \vdots & \vdots & & \vdots \\ \Psi_{n1} & \Psi_{n2} & \cdots & \Psi_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b_3 \\ b_5 \\ \vdots \\ b_n \end{bmatrix}.$$
Of course, the solution is no longer unique. Denoting by $A \in \mathbb{R}^{m \times n}$ and $\hat{b} \in \mathbb{R}^m$ the matrix and vector that remain, the set of solutions to $Ax = \hat{b}$ is an affine subspace of dimension $n - m$, since $n - m$ is the number of samples we have deleted and therefore the number of free variables we have gained. We can still use the Moore-Penrose pseudo-inverse to obtain a spectrum $x^{\dagger} := A^{\dagger}\hat{b}$. The signal that corresponds to this spectrum still matches the samples we have not thrown away. Assuming we have not thrown away many samples, $x^{\dagger}$ may still provide a good approximation of $x$. In practice, of course, we want to reduce file size appreciably. When we remove 90% of samples (as is typical), the spectrum $x^{\dagger}$ no longer provides a good approximation signal. Note that while we removed entries 2 and 4 for illustrative purposes, there is nothing special about removing even-numbered entries; in our experiments, we remove 90% of entries randomly.
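A minimal Matlab sketch of this pipeline (distinct from our (P-signal) script; the two-tone test signal is purely illustrative, and `dct`/`idct` are Signal Processing Toolbox functions):

```matlab
% Sketch: sample a two-tone test signal, discard 90% of the samples,
% and recover the least-norm spectrum with the pseudo-inverse.
n    = 4410;                        % 1/10 s of audio at 44.1 kHz
t    = (0:n-1)'/44100;
b    = sin(2*pi*440*t) + 0.5*sin(2*pi*880*t);  % illustrative signal
Psi  = idct(eye(n));                % inverse DCT matrix (orthogonal)
x    = dct(b);                      % full spectrum, i.e., x = Psi'*b
keep = sort(randperm(n, round(0.1*n)));        % retain 10% of samples
A    = Psi(keep,:);                 % delete rows of Psi ...
bhat = b(keep);                     % ... and the matching samples
xdag = pinv(A)*bhat;                % least-norm spectrum A^+ * bhat
```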
We would expect that the sparsest solution consists of the frequencies that actually matter, while avoiding the superfluous ones. Under suitable conditions on the system, finding the sparsest solution is achieved by solving the convex optimization problem:
$$\underset{x \in \mathbb{R}^n}{\text{minimize}} \quad \|x\|_1 \quad \text{subject to} \quad Ax = b. \tag{2.2}$$
This is called the basis pursuit problem. For our signal recovery application, we solve this optimization problem with $A$ and $b := \hat{b}$ the reduced matrix and sample vector described above. Using a Matlab code (P-signal) to generate a signal $f$, we compare in Figure 1* the solution from (2.2) to a solution obtained using the Moore-Penrose pseudo-inverse. For this problem, $f$ has a prescribed list of frequencies (Freq) with corresponding amplitudes (Amp). Here $f$ represents 1/10th of a second of audio, and so is defined over the domain $[0, 1/10]$.
*The specific signal of Figure 1 can be generated by fixing the random generators with the additional code rng(3).
We can use ADMM to solve the basis pursuit problem. We first define the following set and associated function:
$$C := \{x \in \mathbb{R}^n : Ax = b\} \quad \text{and} \quad \iota_C(x) := \begin{cases} 0 & \text{if } x \in C, \\ +\infty & \text{otherwise.} \end{cases}$$
We can use these to rewrite (2.2) in the following form:
$$\underset{x,\, z \in \mathbb{R}^n}{\text{minimize}} \quad \iota_C(x) + \|z\|_1 \quad \text{subject to} \quad x = z. \tag{BP-p}$$
The function $\iota_C$ is called the indicator of the set $C$. We introduce it so that we can take advantage of ADMM's ability to treat the functions $f := \iota_C$ and $g := \|\cdot\|_1$ separately. ADMM solves the problem
$$\max_{y}\ \min_{x,\, z}\ L_\rho(x, z, y), \quad \text{where} \quad L_\rho(x, z, y) := f(x) + g(z) + \langle y,\, Mx - z \rangle + \frac{\rho}{2}\|Mx - z\|_2^2, \tag{3.1}$$
by finding a triple $(x^\star, z^\star, y^\star)$. This triple is a saddle point for $L_\rho$ in the sense that $(x^\star, z^\star)$ minimizes $L_\rho(x, z, y^\star)$ while $y^\star$ maximizes $L_\rho(x^\star, z^\star, y)$. See, for example, [9, Section 5.5] and [10, Section 4.3]. Here $\rho > 0$ is a chosen constant. The multiplier variable $y$ generalizes the classical notion of a Lagrange multiplier for a general convex optimization problem, and is also called a dual variable. The primal variables $x$ and $z$ have the property that, for the splitting above with $M = \operatorname{Id}$, the optimal pair $(x^\star, z^\star)$ is a solution to (BP-p). The ADMM update steps are
$$x^{k+1} := \operatorname*{argmin}_{x}\ L_\rho(x, z^k, y^k), \tag{3.2a}$$
$$z^{k+1} := \operatorname*{argmin}_{z}\ L_\rho(x^{k+1}, z, y^k), \tag{3.2b}$$
$$y^{k+1} := y^k + \rho\,(Mx^{k+1} - z^{k+1}). \tag{3.2c}$$
The multiplier update (3.2c) maximizes a concave quadratic approximation to the dual objective $y \mapsto \min_{x, z} L_\rho(x, z, y)$. ADMM draws its name from this iterative process of minimizing with respect to the primal variables $x$ and $z$, and then maximizing over the dual variable $y$.
Let us look at how these updates are computed for the basis pursuit problem (where $M = \operatorname{Id}$). We begin with the $x$ update:
$$x^{k+1} = \operatorname*{argmin}_{x}\left( \iota_C(x) + \frac{\rho}{2}\left\|x - z^k + \frac{y^k}{\rho}\right\|_2^2 \right) = P_C\!\left(z^k - \frac{y^k}{\rho}\right). \tag{3.3}$$
Here $P_C$ is the closest-point projection operator for the set $C$. As $C$ is closed and convex in a Hilbert space, $P_C$ has nonempty, single-valued images [3]. Moreover, since $C$ is an affine subspace defined by $Ax = b$, $P_C$ has a closed form [11]. We can use that closed form to write (3.3) explicitly:
$$x^{k+1} = \left(I - A^{\mathsf T}(AA^{\mathsf T})^{-1}A\right)\left(z^k - \frac{y^k}{\rho}\right) + A^{\mathsf T}(AA^{\mathsf T})^{-1}b.$$
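Continuing the sketch above, where the rows of $A$ are orthonormal (they are rows of the orthogonal matrix $\Psi$), we have $AA^{\mathsf T} = I$ and the projection collapses to a one-liner:

```matlab
% Projection onto C = {x : A*x = bhat}; since A*A' = I here, the
% general formula w - A'*((A*A')\(A*w - bhat)) simplifies.
P_C = @(w) w - A'*(A*w - bhat);
```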
We compute the $z$ update as follows:
$$z^{k+1} = \operatorname*{argmin}_{z}\left( \|z\|_1 + \frac{\rho}{2}\left\|x^{k+1} - z + \frac{y^k}{\rho}\right\|_2^2 \right) = S_{1/\rho}\!\left(x^{k+1} + \frac{y^k}{\rho}\right).$$
The operator $S_\kappa$ is known as the soft thresholding function. It is computable as follows:
$$\left(S_\kappa(w)\right)_i = \begin{cases} w_i - \kappa & \text{if } w_i > \kappa, \\ 0 & \text{if } |w_i| \le \kappa, \\ w_i + \kappa & \text{if } w_i < -\kappa. \end{cases}$$
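In Matlab, this casewise formula collapses to a single vectorized expression (a sketch in our notation):

```matlab
% Elementwise soft thresholding S_kappa.
shrink = @(w, kappa) sign(w).*max(abs(w) - kappa, 0);
```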
Both the projection operator $P_C$ and the shrinkage operator $S_\kappa$ are examples of proximity operators. The proximity operator for a function $h \in \Gamma_0(\mathcal{H})$ is defined as follows:
$$\operatorname{prox}_h(w) := \operatorname*{argmin}_{x \in \mathcal{H}}\left( h(x) + \frac{1}{2}\|x - w\|^2 \right).$$
Straight from the definition, one sees that the projection operator is $P_C = \operatorname{prox}_{\iota_C}$, while the soft thresholding operator is $S_\kappa = \operatorname{prox}_{\kappa\|\cdot\|_1}$. Such operators are used to compute the steps for operator splitting methods.
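Combining the two proximity operators sketched above gives a minimal ADMM loop for basis pursuit (fixed $\rho$ and iteration count; the stopping tests used in our experiments are omitted for brevity):

```matlab
% Minimal ADMM loop for (BP-p), continuing the sketches above.
rho = 1;  maxit = 500;
x = zeros(n,1);  z = zeros(n,1);  y = zeros(n,1);
for k = 1:maxit
    x = P_C(z - y/rho);            % (3.2a): prox of the indicator
    z = shrink(x + y/rho, 1/rho);  % (3.2b): prox of the 1-norm
    y = y + rho*(x - z);           % (3.2c): multiplier ascent step
end
```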
We will next look at another operator splitting method, one that has a special duality-like relationship with ADMM. Let $\operatorname{Id}$ be the identity map and, for $\gamma > 0$ and $h \in \Gamma_0(\mathcal{H})$, let $R_{\gamma h}$ denote the reflected proximal mapping:
$$R_{\gamma h} := 2\operatorname{prox}_{\gamma h} - \operatorname{Id}. \tag{4.1}$$
The Douglas-Rachford algorithm (DR) minimizes $d_1 + d_2$ where $d_1, d_2 \in \Gamma_0(\mathcal{H})$ by generating a sequence $(\zeta^k)$ as
$$\zeta^{k+1} := T(\zeta^k), \quad \text{where} \quad T := \frac{1}{2}\left(\operatorname{Id} + R_{\gamma d_2} R_{\gamma d_1}\right). \tag{4.2}$$
In [2], Attouch showed† that if $d_1$ and $d_2$ are convex and some point of $\operatorname{dom} d_2$ belongs to the interior of $\operatorname{dom} d_1$, then the pair $(d_1, d_2)$ satisfies the necessary conditions for the classical DR convergence result of Lions and Mercier.
†Attouch's actual requirements are less restrictive than what we have presented.
Theorem 1. ([2,26]) If $d_1, d_2 \in \Gamma_0(\mathcal{H})$, a minimizer of $d_1 + d_2$ exists, and Attouch's criterion holds, then $(\zeta^k)$ from (4.2) converges weakly to some $\zeta$ such that $\operatorname{prox}_{\gamma d_1}(\zeta)$ minimizes $d_1 + d_2$.
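As a sketch, one DR step (4.2) can be written with function handles for the two proximity operators:

```matlab
% One DR step: T = (1/2)(Id + R_{d2} R_{d1}), expanded so that the
% reflected mappings R = 2*prox - Id need not be formed explicitly.
T_DR = @(zeta, prox1, prox2) ...
    zeta + prox2(2*prox1(zeta) - zeta) - prox1(zeta);
% Governing sequence: zeta = T_DR(zeta, prox1, prox2);
% shadow sequence:    xk   = prox1(zeta).
```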
For our context, we are interested in the setting where the Douglas-Rachford method is applied to the problem
$$\underset{y}{\text{minimize}} \quad d_1(y) + d_2(y), \quad \text{where} \quad d_1 := g^* \quad \text{and} \quad d_2 := f^* \circ (-M^{\mathsf T}), \tag{d}$$
where $M$ is a matrix of appropriate dimension. Here $f^*$ denotes the Fenchel-Moreau-Rockafellar conjugate of $f$ [3,24], defined as
$$f^*(y) := \sup_{x}\ \big( \langle y, x \rangle - f(x) \big).$$
We are particularly interested in the case when $f = \iota_C$, $g = \|\cdot\|_1$, and $M = \operatorname{Id}$. The conjugates are
$$f^* = \iota_C^* : y \mapsto \sup_{x \in C} \langle y, x \rangle \quad \text{and} \quad g^* = \iota_{[-1,1]^n},$$
the latter being the indicator of the unit ball of the $\infty$-norm, which is the dual norm of the 1-norm. Note that Attouch's condition is satisfied for $d_1$ and $d_2$ as defined above, as long as the system $Ax = b$ is consistent (so that $0 \in \operatorname{dom} d_2$ lies in the interior of $\operatorname{dom} d_1$). The general problem (d) is said to be dual to the general problem (p), while the specific problem
$$\underset{y \in \mathbb{R}^n}{\text{minimize}} \quad \iota_C^*(-y) + \iota_{[-1,1]^n}(y) \tag{BP-d}$$
is dual to the primal basis pursuit problem (BP-p). For fully convex problems, DR and ADMM are closely related, with ADMM for (p) being equivalent to DR for (d) [19,20,24,25]. The linear relationship between DR and ADMM variables is outlined in Table 1.
Primal (ADMM) | Dual (DR)
$y^k + \rho z^k$ | governing sequence $\zeta^k$
$y^k$ | shadow sequence $\operatorname{prox}_{\gamma d_1}(\zeta^k)$, with $\gamma = \rho$
From Theorem 1 and the relationships in the table, it is clear that the multiplier sequence $(y^k)$ converges to the solution of (d). For the basis pursuit problem (BP-p) and its dual problem (BP-d), the relationship in Table 1 is illustrated in Figure 2. Because the DR operator has particularly desirable properties, DR convergence results are often cited as particularly elegant proofs of the convergence of ADMM, and the two algorithms are frequently studied together [22,23].
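Under the correspondence in Table 1, the dual DR governing sequence can be logged during an ordinary ADMM run; a sketch, continuing the loop above:

```matlab
% Log the dual DR governing sequence zeta^k = y^k + rho*z^k while
% running ADMM, e.g., for the visualizations discussed later.
zeta_log = zeros(n, maxit);
for k = 1:maxit
    x = P_C(z - y/rho);
    z = shrink(x + y/rho, 1/rho);
    y = y + rho*(x - z);
    zeta_log(:,k) = y + rho*z;     % governing sequence (Table 1)
end
```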
Benoist, Dao, Tam, Giladi, and Rüffer used Lyapunov functions to describe the dynamics of DR in cases when it exhibited spiraling patterns similar to the one in Figure 2 [8,14,21]. All of their Lyapunov functions $V$ shared the property that
$$V \ge 0 \quad \text{and} \quad V(\zeta) = 0 \iff \zeta \in \operatorname{Fix} T. \tag{5.1}$$
Borrowing inspiration from their constructions, the method introduced in [24] provides an update candidate that minimizes a spherical surrogate for such a Lyapunov function. Figure 2 shows an example of such a surrogate, and the update obtained from it. When $\zeta^k, \zeta^{k+1}, \zeta^{k+2}$ are collinear, the construction is not possible and so a normal algorithmic update would be accepted. Otherwise, this update, denoted $u$, is constructed to belong to the 2-dimensional affine subspace containing $\zeta^k$, $\zeta^{k+1}$, and $\zeta^{k+2}$, and to have the property that $\|u - \zeta^k\| = \|u - \zeta^{k+1}\| = \|u - \zeta^{k+2}\|$. A candidate update for the multiplier is
$$y_u := \operatorname{prox}_{\gamma d_1}(u) = P_{[-1,1]^n}(u).$$
We can propagate the update to the primal variables by $z_u := (u - y_u)/\rho$ and $x_u := P_C(z_u - y_u/\rho)$. Then $(x_u, z_u, y_u)$ is a candidate for updating $(x^{k+1}, z^{k+1}, y^{k+1})$. There are various criteria we can use to decide whether to accept it or reject it in favor of a regular ADMM update. In this paper, the main criterion we use in our experiments is whether $z_u$ has a smaller 1-norm value than a regular update would yield. Based on the symbol used in [24] for the operator that returns $u$, we refer to the full algorithm in this case as LT. For comparison, in some experiments we instead accept the update always. We denote the algorithm in such cases by LTA. The operator may be computed efficiently, as we now show.
Theorem 2. Let $x_1, x_2, x_3 \in \mathcal{H}$. Let $a := x_2 - x_1$ and $b := x_3 - x_2$. If $\|a\|^2\|b\|^2 - \langle a, b \rangle^2 = 0$, then $x_1, x_2, x_3$ are collinear. Otherwise, let $u := x_3 + \alpha a + \beta b$ where
$$\begin{bmatrix} \alpha \\ \beta \end{bmatrix} := \begin{bmatrix} \|a\|^2 & \langle a, b \rangle \\ \langle a, b \rangle & \|b\|^2 \end{bmatrix}^{-1} \begin{bmatrix} -\tfrac{1}{2}\|a\|^2 - \langle a, b \rangle \\ -\tfrac{1}{2}\|b\|^2 \end{bmatrix},$$
and $u$ is the unique point in the 2-dimensional affine subspace of $\mathcal{H}$ containing $x_1, x_2, x_3$ that satisfies
$$\left\langle u - \frac{x_1 + x_2}{2},\ a \right\rangle = 0 \quad \text{and} \quad \left\langle u - \frac{x_2 + x_3}{2},\ b \right\rangle = 0.$$
Proof. Letting $w := u - x_3$, the linear system rewrites to
$$\left\langle w + \tfrac{1}{2}a + b,\ a \right\rangle = 0 \quad \text{and} \quad \left\langle w + \tfrac{1}{2}b,\ b \right\rangle = 0.$$
The second equality yields $\langle w, b \rangle = -\tfrac{1}{2}\|b\|^2$, whereupon the first equality yields
$$\langle w, a \rangle = -\tfrac{1}{2}\|a\|^2 - \langle a, b \rangle.$$
As $u$ belongs to the 2-dimensional affine subspace containing $x_1, x_2, x_3$, there must exist $\alpha, \beta \in \mathbb{R}$ such that $w = \alpha a + \beta b$. Substituting this identity into the two equalities above yields the system:
$$\begin{bmatrix} \|a\|^2 & \langle a, b \rangle \\ \langle a, b \rangle & \|b\|^2 \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} -\tfrac{1}{2}\|a\|^2 - \langle a, b \rangle \\ -\tfrac{1}{2}\|b\|^2 \end{bmatrix}.$$
The left matrix is invertible so long as its determinant $\|a\|^2\|b\|^2 - \langle a, b \rangle^2$ is nonzero. Bearing in mind that $\langle a, b \rangle = \|a\|\|b\|\cos\theta$ where $\theta$ is the angle between $a$ and $b$, we have that this determinant is zero exactly when the angle between $a$ and $b$ is $0$ or $\pi$ (i.e., $x_1, x_2, x_3$ are collinear). When the determinant is nonzero, we take the inverse of the matrix on the left and obtain the claimed identity for $u$. $\square$
Remark 1 (Checking collinearity). In computational practice, to decide whether the determinant in Theorem 2 is nonzero, one must choose some tolerance $\epsilon > 0$ and replace the condition $\|a\|^2\|b\|^2 - \langle a, b \rangle^2 = 0$ with $\|a\|^2\|b\|^2 - \langle a, b \rangle^2 < \epsilon$. In our experiments, we used a small fixed threshold, and never encountered the case when $\|a\|^2\|b\|^2 - \langle a, b \rangle^2 < \epsilon$. If $\epsilon$ is chosen very small and the condition number of the matrix in Theorem 2 is very large, the computed $u$ may suffer numerical inaccuracy. This is another important reason for the inclusion of the objective function check.
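A Matlab sketch of the Theorem 2 computation, matching our statement above and including the tolerance check of Remark 1 (the function name is ours):

```matlab
% Candidate point u of Theorem 2 from three successive iterates, or
% [] when the determinant test deems the iterates near-collinear.
function u = lt_point(x1, x2, x3, eps_c)
    a = x2 - x1;  b = x3 - x2;
    G = [a'*a, a'*b; a'*b, b'*b];        % Gram matrix of (a, b)
    if det(G) < eps_c                    % near-collinear: no candidate
        u = [];  return
    end
    ab = G \ [-0.5*(a'*a) - (a'*b); -0.5*(b'*b)];
    u  = x3 + ab(1)*a + ab(2)*b;         % u = x3 + alpha*a + beta*b
end
```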
Interestingly, for structured feasibility problems of the kind studied in [8,14,21], it was shown in [24] that the circumcentered reflections method (see, for example, [1,4,5,6,7,15,17]) is another example of an algorithm that returns minimizers for surrogates of a Lyapunov function for the DR dynamical system. For reasons described in [24], the natural generalization of the circumcentered reflections method does not work for basis pursuit problems, and so we do not include it here. Indeed, a key motivation for the algorithm LT is that whenever a Lyapunov function has the property (5.1), LT and LTA return minimizers of spherical surrogates for that Lyapunov function [24]. This means that they may be useful for a wider variety of applications than those listed here. Algorithm 1 shows their implementation for basis pursuit specifically. In Table 2, we summarize the criteria that we use in our experiments to decide whether to accept an update candidate derived from $u$ or reject it in favor of a regular update.
Algorithm 1 LT/LTA.
Data: $\rho > 0$, absolute and relative tolerances $\epsilon_{\text{abs}}, \epsilon_{\text{rel}}$ such that $\epsilon_{\text{abs}}, \epsilon_{\text{rel}} > 0$, a collinearity tolerance $\epsilon > 0$, and a data vector $b \in \mathbb{R}^m$.
Initialisation: starting points $x^0, z^0, y^0$. While the stopping tolerances are not met: perform the ADMM updates (3.2); form the candidate $u$ of Theorem 2 from the three most recent dual iterates $\zeta^k$; accept or reject the derived candidate according to Table 2.
Algorithm variation | Criterion used |
LT | $\|z_u\|_1 < \|z^{k+1}\|_1$, where $z^{k+1}$ is the regular ADMM update
LTA | true |
Performance profiles are a popular way of comparing algorithm performance; see [18], whose notation we closely follow. We use them to compare the performances of LT, LTA, and ADMM. For each problem $p$ and solver $s$, we define
$$t_{p,s} := \text{the number of passes through (3.2) required by solver } s \text{ to solve problem } p.$$
The performance of each algorithm is evaluated relative to the superior algorithm in each instance. This is done according to the following ratio, where $\mathcal{S}$ denotes the set of solvers:
$$r_{p,s} := \frac{t_{p,s}}{\min\{t_{p,s'} : s' \in \mathcal{S}\}}.$$
Our performance profile graphs depict the cumulative distribution functions $\rho_s$, defined as
$$\rho_s(\tau) := \frac{1}{n_p}\left|\left\{p : r_{p,s} \le \tau\right\}\right|,$$
where $n_p$ is the total number of problems solved by $s$. For example, if $\rho_s(1.5) = 0.8$, then algorithm $s$ solved 80% of problems using no more than 150% of the passes through (3.2) required by the algorithm that used the fewest passes.
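A sketch of how such a profile can be computed and plotted in Matlab, given a matrix t whose entry t(p,s) holds the iteration count of solver s on problem p (the variable names are ours):

```matlab
% Performance profile: each curve is the empirical CDF of the ratios
% r(p,s) = t(p,s) / (best count on problem p).
r   = t ./ min(t, [], 2);              % ratios to the best solver
tau = linspace(1, max(r(:)), 200);     % grid of thresholds
for s = 1:size(t, 2)
    plot(tau, mean(r(:,s) <= tau, 1)); hold on
end
xlabel('\tau'); ylabel('\rho_s(\tau)')
```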
In [24], LT was found to reliably outperform ADMM for problem (2.2), where the problems were randomly generated according to the specifications in Boyd, Parikh, Chu, Peleato and Eckstein's Matlab scripts [12]. The code for these problems is (P-random). Here, the entries of $A$ are drawn from the standard normal distribution, while the true spectrum $x_0$ is sparse with a small proportion of nonzero entries. The data vector is given by $b := Ax_0$. The performance profile for LT, LTA, and ADMM in solving 1000 problems of this kind is Figure 3b. For these problems, the algorithms stopped once locating solutions within a prescribed tolerance of optimality.
Following [24], we say that an algorithm shows signs of spiraling when the sequence of step sizes (i.e., $(\|\zeta^{k+1} - \zeta^k\|)_{k \in \mathbb{N}}$) exhibits a distinct "tombstone" pattern (observable in Figure 3a). This phenomenon is often associated with the existence of a Lyapunov function satisfying condition (5.1). Problems generated using (P-random) typically exhibit signs of spiraling.
In Figure 4, we illustrate the behavior of the DR dual sequence $(\zeta^k)$ for a problem generated using (P-random)‡. We visualize the sequence's behavior by projecting it onto 10 different 2-dimensional coordinate subspaces (i.e., plotting coordinate pairs), with each pair shown in a different color. In Figures 4b and 4c, we zoom in on the tail of the sequence and plot two of these coordinate-pair sequences (in blue shades and green shades, respectively). The accepted updates are shown as squares.
‡For the example shown, the random seed state was rng(8).
Because LT and LTA afforded measurable improvement for solving basis pursuit problems with random structure, we next benchmarked their performance on the audio compression and recovery problem.
To evaluate the performance of LT and LTA for audio recovery problems, we generated signals using (P-signal). The interval $[0, 1/10]$ was used as the domain of each signal $f$, while $f$ is the sum of shifted and dilated sine functions with amplitudes and frequencies specified by Amp and Freq. The number of frequencies (Freq) corresponds to the number of such functions composing $f$, with a higher number of frequencies resulting in more complex signals. To assess the algorithms on a variety of signals, we ran experiments with several values of Freq. For each Freq value, 1000 problems were solved, with the problem dimensions as described in Section 2. We include the performance profiles for these problems in Figure 5.
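The following is a hypothetical stand-in for (P-signal) — the actual script is in the linked repository, and the parameter ranges below are illustrative:

```matlab
% Illustrative multi-tone test signal: Freq sinusoids with random
% amplitudes and frequencies, sampled on [0, 1/10] at 44.1 kHz.
Freq  = 10;  fs = 44100;
t     = (0:fs/10 - 1)'/fs;
amp   = rand(Freq, 1);                 % random amplitudes
freqs = randi([20, 4000], Freq, 1);    % random frequencies (Hz)
f     = sin(2*pi*t*freqs') * amp;      % sampled signal, n-by-1
```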
We provide a sense of how the algorithms pursue a sparse solution in Figure 6, where we define the superfluous entries of the spectrum as those having absolute values less than a small threshold. This threshold was determined based on plotting the spectra for various solved problems (as in Figure 1). Figures 6a–c depict the recovered spectra at successive stages of the iteration, as produced by ADMM solving (2.2) with $A$ and $b$ specified by (P-signal)§. We graph, in Figure 6d, the percentage of spectrum entries that fall below the threshold as the iteration proceeds. These signals were reconstructed to the prescribed accuracy. Although not perfect, we found that lower $\epsilon_{\text{abs}}$ and $\epsilon_{\text{rel}}$ values did not result in better approximations for $f$, so this accuracy is used throughout our experiments.
§For the example shown, the random seed state was rng(3).
We likewise applied LT and LTA to recovering the waveform signal of a performance of Frédéric Chopin's Scherzo No. 2. We decomposed the performance into a sequence of problems, each recovering a short window of audio. Using the file's sampling rate, each problem was formulated with the same dimensions as in the experiments above. The performance profile for this experiment is Figure 7.
The meta-algorithms exhibited noticeably less improvement relative to ADMM when the random problem structure of (P-random) was replaced by the structure of the simulated audio problem (P-signal), and even less improvement for a real audio problem. For problems we solved using (P-signal), we typically did not observe signs of spiraling for the dual sequence $(\zeta^k)$; when we did observe signs of spiraling, we only observed them after the signal was recovered (i.e., when computing more iterates than necessary). Figure 8 shows coordinate pairs for an example problem generated using (P-signal)¶ with Freq = 10. Enhanced views of two of these coordinate-pair sequences for this problem are included in Figure 9a. Figure 9b enhances one of the sequences, and also shows the sequence of coordinate pairs generated by LT, as well as the sequence of coordinate pairs generated by the variant LTA. For the latter two sequences, regular DR updates are circles while updates accepted from the surrogate are squares.
¶For the example shown, the random seed state was rng(4).
Our results highlight the importance of testing algorithms on problems with real data. While the Lyapunov surrogate variant LTA is state-of-the-art (much faster than DR and equally reliable) for the real-world problem of finding wavelets [16], its performance for the real-world audio compression and recovery problem is less impressive than experiments on random problems might suggest.
Additionally, our results are consistent with the hypothesis in [24] that Lyapunov surrogate methods are more likely to deliver improvement in situations when signs of spiraling are present, as we suspect that LT's inconsistent performance is due to differences in the dynamics of the underlying DR dual algorithm.
Finally, the visualizations in Figures 4 and 9 are interesting in two ways. First, they support the hypothesis that signs of spiraling do, in fact, reflect the dynamic they purport to. The dynamics we observe for the simulated audio problem in Figure 9, when signs of spiraling were not present, appear much less regular than those we observed for the fully random problem in Figure 4b, when signs of spiraling were present. Second, the latter dynamics still exhibit patterns that may lend themselves to extrapolation methods or other dynamics-based schemes. We suggest this as future research.
The authors declare that they have not used artificial intelligence tools in the creation of this article.
The authors declare no conflict of interest.
The code used to generate all examples in this paper is available at https://github.com/AndrewCalcan/Basis-Pursuit-via-LT-LTA-ADMM.git.
[1] R. Arefidamghani, R. Behling, Y. Bello-Cruz, A. Iusem, L. Santos, The circumcentered-reflection method achieves better rates than alternating projections, Comput. Optim. Appl., 79 (2021), 507–530. http://dx.doi.org/10.1007/s10589-021-00275-6
[2] H. Attouch, On the maximality of the sum of two maximal monotone operators, Nonlinear Anal.-Theor., 5 (1981), 143–147. http://dx.doi.org/10.1016/0362-546X(81)90039-0
[3] H. Bauschke, P. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, 2 Eds., Cham: Springer, 2017. http://dx.doi.org/10.1007/978-3-319-48311-5
[4] H. Bauschke, H. Ouyang, X. Wang, On circumcenters of finite sets in Hilbert spaces, arXiv: 1807.02093.
[5] R. Behling, Y. Bello-Cruz, L. Santos, Circumcentering the Douglas-Rachford method, Numer. Algor., 78 (2018), 759–776. http://dx.doi.org/10.1007/s11075-017-0399-5
[6] R. Behling, Y. Bello-Cruz, L. Santos, On the linear convergence of the circumcentered-reflection method, Oper. Res. Lett., 46 (2018), 159–162. http://dx.doi.org/10.1016/j.orl.2017.11.018
[7] R. Behling, Y. Bello-Cruz, L. Santos, On the circumcentered-reflection method for the convex feasibility problem, Numer. Algor., 86 (2021), 1475–1494. http://dx.doi.org/10.1007/s11075-020-00941-6
[8] J. Benoist, The Douglas-Rachford algorithm for the case of the sphere and the line, J. Glob. Optim., 63 (2015), 363–380. http://dx.doi.org/10.1007/s10898-015-0296-1
[9] D. Bertsekas, Convex optimization theory, Melrose: Athena Scientific, 2009.
[10] J. Borwein, A. Lewis, Convex analysis and nonlinear optimization: theory and examples, 2 Eds., New York: Springer, 2006. http://dx.doi.org/10.1007/978-0-387-31256-9
[11] S. Boyd, L. Vandenberghe, Convex optimization, Cambridge: Cambridge University Press, 2004. http://dx.doi.org/10.1017/CBO9780511804441
[12] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Matlab scripts for alternating direction method of multipliers, 2011. Available from: https://web.stanford.edu/~boyd/papers/admm/.
[13] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, New York: Now Foundations and Trends, 2010. http://dx.doi.org/10.1561/2200000016
[14] M. Dao, M. Tam, A Lyapunov-type approach to convergence of the Douglas-Rachford algorithm, J. Glob. Optim., 73 (2019), 83–112. http://dx.doi.org/10.1007/s10898-018-0677-3
[15] N. Dizon, J. Hogan, S. Lindstrom, Circumcentered reflections method for wavelet feasibility problems, ANZIAM J., 62 (2020), C98–C111. http://dx.doi.org/10.21914/anziamj.v62.16118
[16] N. Dizon, J. Hogan, S. Lindstrom, Centering projection methods for wavelet feasibility problems, In: Current trends in analysis, its applications and computation, Cham: Birkhäuser, 2022. http://dx.doi.org/10.1007/978-3-030-87502-2_66
[17] N. Dizon, J. Hogan, S. Lindstrom, Circumcentering reflection methods for nonconvex feasibility problems, Set-Valued Var. Anal., 30 (2022), 943–973. http://dx.doi.org/10.1007/s11228-021-00626-9
[18] E. Dolan, J. Moré, Benchmarking optimization software with performance profiles, Math. Program., 91 (2002), 201–213. http://dx.doi.org/10.1007/s101070100263
[19] J. Eckstein, W. Yao, Understanding the convergence of the alternating direction method of multipliers: theoretical and computational perspectives, Pac. J. Optim., 11 (2015), 619–644.
[20] D. Gabay, Applications of the method of multipliers to variational inequalities, Studies in Mathematics and its Applications, 15 (1983), 299–331. http://dx.doi.org/10.1016/S0168-2024(08)70034-1
[21] O. Giladi, B. Rüffer, A Lyapunov function construction for a non-convex Douglas-Rachford iteration, J. Optim. Theory Appl., 180 (2019), 729–750. http://dx.doi.org/10.1007/s10957-018-1405-3
[22] B. He, X. Yuan, On the O(1/n) convergence rate of the Douglas-Rachford alternating direction method, SIAM J. Numer. Anal., 50 (2012), 700–709. http://dx.doi.org/10.1137/110836936
[23] B. He, X. Yuan, On the convergence rate of Douglas-Rachford operator splitting method, Math. Program., 153 (2015), 715–722. http://dx.doi.org/10.1007/s10107-014-0805-x
[24] S. B. Lindstrom, Computable centering methods for spiralling algorithms and their duals with motivations from the theory of Lyapunov functions, Comput. Optim. Appl., 83 (2022), 999–1026. http://dx.doi.org/10.1007/s10589-022-00413-8
[25] S. B. Lindstrom, B. Sims, Survey: sixty years of Douglas-Rachford, J. Aust. Math. Soc., 110 (2021), 333–370. http://dx.doi.org/10.1017/S1446788719000570
[26] P. Lions, B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16 (1979), 964–979. http://dx.doi.org/10.1137/0716071