A subgradient-based continuous-time algorithm for constrained distributed quadratic programming
Abstract
In this paper, a constrained distributed quadratic programming over a directed graph is studied. Inspired by the iterative idea, a smooth penalty-based quadratic programming problem is proposed. To solve the penalty-based quadratic programming problem, a novel distributed subgradient-based continuous-time algorithm is presented. With the help of the Łojasiewicz inequality, the state solution of the presented algorithm is proved to converge exponentially to a Karush–Kuhn–Tucker (KKT) point of the smooth penalty-based quadratic programming. It should be noted that the exponential convergence of the presented algorithm is independent of the strong convexity of the objective functions. In particular, finite-time convergence is obtained when the considered optimization problem degenerates to a linear one. Finally, a numerical example and an application to robust estimation in wireless sensor networks are presented to verify the effectiveness of the proposed algorithm.
Introduction
Quadratic programming is one of the most important components of nonlinear programming and arises frequently in engineering modeling, design and control. A great many practical problems [1], [2], [3], [4], such as economies-of-scale problems, intelligent microgrids, signal and image processing, pattern recognition, facility allocation and location problems, and quadratic assignment problems, can be formulated as quadratic programming problems. Over the past two decades, a variety of algorithms have been developed for solving quadratic programming problems (see [1], [2], [5], [6]). Thanks to the inspiring theoretical and practical progress in multi-agent networks and control networks [7], [8], [9], distributed optimization has drawn great attention, and an increasing number of researchers have sought distributed algorithms for optimization problems. Numerous distributed discrete-time (see [10], [11], [12]) and continuous-time algorithms (see [13], [14], [15], [16], [17], [18]) have been put forward in the past few years.
Since the first successful attempt to study linear programming by means of a continuous-time algorithm in [19], unremitting efforts have been devoted to continuous-time algorithms for optimization problems. It has been found that continuous-time algorithms are easy to implement because of their capacity for parallel and real-time computation [20]. In view of these advantages, an increasing number of researchers have focused on constructing distributed continuous-time algorithms for distributed optimization. Sun et al. [21] presented a gradient-based continuous-time seeking method to solve a distributed time-varying quadratic optimization problem. Based on the KKT conditions, Liu et al. [14] presented a continuous-time algorithm for solving a distributed nonsmooth convex optimization problem with local equality and inequality constraints. However, the continuous-time algorithms mentioned above deal with distributed optimization over undirected connected graphs, and a counterexample was given in [13] to demonstrate that the continuous-time algorithm proposed there, which successfully solved the unconstrained distributed optimization over an undirected graph, did not work for the same problem over a directed graph. To solve distributed problems over directed graphs, Gharesifard and Cortés [13] presented a saddle-point continuous-time algorithm incorporating a designed parameter to solve the unconstrained distributed convex optimization. In [22], a distributed continuous-time algorithm with a differentiated projection operation was constructed to cope with the nonsmooth constrained optimization problem. Nevertheless, the aforesaid continuous-time algorithms can only handle distributed optimization problems over weight-balanced directed graphs and may not be appropriate for problems over weight-unbalanced graphs.
Distributed optimization has important applications in wireless sensor networks, where it helps to monitor spatial environments more effectively. A traditional approach in sensor network systems is to let a central processor collect and process all data, which places high demands on the performance of the central processor and the capacity of the communication lines. Therefore, it is an urgent task to propose an effective and robust method that processes the collected data in a distributed way and decentralizes the processing assignments. The convergence rate of a continuous-time algorithm is a key criterion for measuring its quality. However, to the best of our knowledge, the existing continuous-time algorithms seldom establish convergence rates for distributed optimization without the strong convexity hypothesis.
As is known, the Łojasiewicz inequality is a powerful tool to analyze the convergence as well as the convergence rate of gradient and subgradient continuous-time algorithms (see [23], [24], [25]). With the variations of the Łojasiewicz exponent around equilibrium points, diverse convergence rates of continuous-time algorithms can be obtained, such as exponential convergence, algebraic convergence and finite-time convergence. We believe that the Łojasiewicz inequality has great potential in solving distributed optimization problems.
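As a generic illustration (the precise form and exponent used later in the paper may differ), the nonsmooth Łojasiewicz inequality around an equilibrium x* states that there exist C > 0, an exponent α ∈ [0, 1) and a neighborhood U of x* such that

```latex
\[
  |f(x) - f(x^{*})|^{\alpha} \;\le\; C \,\operatorname{dist}\bigl(0, \partial f(x)\bigr),
  \qquad \forall\, x \in U .
\]
```

Smaller exponents yield faster rates along (sub)gradient flows; this dependence is made precise in the convergence analysis below.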
Enlightened by the aforementioned research works, in this paper, we explore a distributed quadratic optimization problem with linear equality and linear inequality constraints. The main contributions of this paper are listed as follows.
- (I) Exponential convergence of the proposed algorithm is derived via the nonsmooth Łojasiewicz inequality under a general convexity assumption on the objective functions. Unlike the continuous-time algorithms in [22], [26], the strong convexity assumption is no longer necessary for exponential convergence.
- (II) The proposed subgradient-based algorithm solves the considered distributed problem while dispensing with the weight-balance requirement on the directed graph. Hence, the proposed subgradient-based algorithm has wider applicability than the algorithms presented in [13], [14], [22], [27] in terms of the connection topology.
- (III) Our subgradient-based algorithm is of lower dimension than those in [28], [29]. To be specific, the per-agent state dimension of the algorithms in [28], [29] is larger than that of our algorithm. Therefore, in this sense, our algorithm shows advantages in computation when the dimension of the inequality constraints is relatively large.
Notations: Let ℝⁿ be the n-dimensional Euclidean space equipped with the inner product ⟨ · , · ⟩ and the induced norm ‖ · ‖. B(x, r) is the open ball centered at x with radius r. The symbol ⊗ represents the Kronecker product. Denote by Iₙ the n-dimensional identity matrix. For x ∈ ℝⁿ, diag{x₁, x₂, …, xₙ} denotes the diagonal matrix of the vector x. blkdiag{A, B} represents the block diagonal matrix with diagonal blocks A and B. int(E) denotes the interior of a set E. Denote by ker(A) the kernel of a matrix A. For a linear subspace S, S⊥ denotes its orthogonal complement. Let Ω be a closed convex set; the least-norm element of Ω is the unique element of Ω with the smallest norm, that is, arg min_{x ∈ Ω} ‖x‖.
Section snippets
Preliminaries
In this section, we introduce some basic preliminaries; further details can be found in [30].
Definition 2.1
If a function f : ℝⁿ → ℝ is convex, then a subgradient of f at x is any g ∈ ℝⁿ satisfying f(y) ≥ f(x) + ⟨g, y − x⟩ for all y ∈ ℝⁿ; the set of all subgradients of f at x is denoted by ∂f(x).
Next, we recall the subgradient of |t| as follows:
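For concreteness, the standard subdifferential of |t| is recorded below, together with that of max{0, t}, which typically accompanies it in exact-penalty reformulations of inequality constraints (the snippet does not show which second function the paper actually recalls, so the latter is only an assumption):

```latex
\[
  \partial |t| =
  \begin{cases}
    \{\operatorname{sgn}(t)\}, & t \neq 0,\\
    [-1, 1], & t = 0,
  \end{cases}
  \qquad
  \partial \max\{0, t\} =
  \begin{cases}
    \{1\}, & t > 0,\\
    [0, 1], & t = 0,\\
    \{0\}, & t < 0.
  \end{cases}
\]
```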
Lemma 2.1
(Chain Rule [31]) If x(·) : [0, +∞) → ℝⁿ is absolutely continuous and V : ℝⁿ → ℝ is regular at x(t), then V(x(t)) is differentiable at almost every t and d/dt V(x(t)) = ⟨ξ, ẋ(t)⟩ for every ξ ∈ ∂V(x(t)).
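As a generic illustration of how this chain rule is used (a standard argument, not a step quoted from the paper): along a subgradient flow ẋ(t) ∈ −∂V(x(t)), choosing ξ(t) ∈ ∂V(x(t)) with ẋ(t) = −ξ(t) gives, for almost every t,

```latex
\[
  \frac{d}{dt} V(x(t)) \;=\; \bigl\langle \xi(t), \dot{x}(t) \bigr\rangle \;=\; -\,\|\xi(t)\|^{2} \;\le\; 0 ,
\]
```

so V is nonincreasing along the flow, which is the basic decrease estimate behind the Lyapunov arguments in the sequel.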
Problem formulation and model description
In this paper, we consider a network of m agents over a graph aiming to cooperatively solve a constrained distributed quadratic programming (CDQP) problem, in which the matrix of the quadratic term is symmetric and positive semidefinite, the matrix of the linear equality constraint is of full row rank, and the constraint set is a nonempty bounded closed convex polyhedron.
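The snippet omits the explicit formulation; a typical CDQP of the kind described takes the following form (our notation, and the exact way the data and constraints are distributed among the agents may differ in the paper):

```latex
\[
  \min_{x \in \mathbb{R}^{n}} \; \sum_{i=1}^{m} f_i(x),
  \qquad f_i(x) = \tfrac{1}{2}\, x^{\top} Q_i\, x + c_i^{\top} x,
  \qquad \text{s.t.} \quad A x = b, \quad x \in \Omega ,
\]
```

where each Q_i is symmetric positive semidefinite and known only to agent i, A has full row rank, and Ω is a nonempty bounded closed convex polyhedron.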
Assumption 1
The connection graph is directed and strongly connected.
Lemma 3.1
[27] Suppose Assumption 1 holds. Then:
- (i) There exists a left eigenvector of the graph Laplacian associated with the zero eigenvalue whose entries are all positive.
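As a quick numerical check of this property, the positive left eigenvector associated with the zero eigenvalue of the Laplacian can be computed as follows (a minimal sketch with a hypothetical 4-node weight-unbalanced digraph; the adjacency weights are illustrative only):

```python
import numpy as np

# Weighted adjacency matrix of a strongly connected, weight-unbalanced digraph
# (hypothetical 4-node example; a_ij > 0 means agent i receives information from agent j).
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [1.0, 0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0, 0.0]])

# Graph Laplacian L = D - A with D = diag(row sums of A); note L @ 1 = 0.
L = np.diag(A.sum(axis=1)) - A

# Left eigenvector xi associated with the zero eigenvalue: xi^T L = 0,
# i.e. xi is a right null vector of L^T.
eigvals, eigvecs = np.linalg.eig(L.T)
k = np.argmin(np.abs(eigvals))
xi = np.real(eigvecs[:, k])
xi = xi / xi.sum()          # normalize so that the entries sum to one (and become positive)

print("zero eigenvalue:", eigvals[k])
print("positive left eigenvector:", xi)   # all entries positive, as in Lemma 3.1(i)
print("xi^T L ~ 0:", xi @ L)
```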
Convergence analysis
In this section, we first prove the Łojasiewicz inequality around the equilibrium points of the algorithm (29) and calculate the related Łojasiewicz exponent. Then we prove the exponential convergence of the algorithm (29) by taking advantage of the Łojasiewicz inequality. In addition, when the considered distributed optimization problem degenerates into a constrained distributed linear programming, the convergence is shown to improve to finite-time convergence.
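Generically, a Łojasiewicz exponent translates into a convergence rate as follows (our notation; the paper's precise Lyapunov function and constants are not reproduced in the snippet). With V(t) = f(x(t)) − f* along a subgradient flow, combining the decrease estimate dV/dt = −‖ξ(t)‖² with the Łojasiewicz inequality gives

```latex
\[
  \dot{V}(t) \;\le\; -\,C^{-2}\, V(t)^{2\alpha} .
\]
```

Hence α = 1/2 yields exponential decay V(t) ≤ V(0) e^{−t/C²}, α < 1/2 (e.g. polyhedral objectives with sharp minima) forces V to reach zero in finite time, and α ∈ (1/2, 1) gives the algebraic rate V(t) = O(t^{−1/(2α−1)}).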
Simulation and application
In this section, some examples are presented to show the effectiveness and advantages of the proposed algorithm (29).
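To convey the flavor of such simulations, the sketch below integrates an exact-penalty subgradient flow for a toy QP by forward Euler; it is a simplified, centralized illustration under assumed data (Q, c, a, b, G, h, the penalty weight sigma and the step size are all hypothetical), not a reproduction of algorithm (29):

```python
import numpy as np

# Hypothetical small QP:  min 0.5 x'Qx + c'x  s.t.  a'x = b,  Gx <= h.
Q = np.array([[2.0, 0.0], [0.0, 0.0]])       # positive semidefinite, not strongly convex
c = np.array([-2.0, 1.0])
a = np.array([1.0, 1.0]); b = 1.0
G = np.array([[-1.0, 0.0], [0.0, -1.0]])      # encodes x >= 0
h = np.zeros(2)

sigma = 10.0      # penalty weight (assumed large enough for exactness)

def penalty_subgradient(x):
    """One subgradient of F(x) = f(x) + sigma*(|a'x - b| + sum max(0, Gx - h))."""
    g = Q @ x + c
    g += sigma * np.sign(a @ x - b) * a                 # subgradient of |a'x - b|
    g += sigma * G.T @ (G @ x - h > 0).astype(float)    # subgradient of sum max(0, Gx - h)
    return g

# Forward-Euler discretization of the subgradient flow  dx/dt in -partial F(x).
x, dt = np.array([2.0, 2.0]), 1e-3
for _ in range(20000):
    x = x - dt * penalty_subgradient(x)

print("approximate minimizer:", x)
print("equality residual:", a @ x - b, " inequality residual:", np.maximum(G @ x - h, 0.0))
```

In this toy problem the minimizer is (1, 0); the discretized flow settles in a small neighborhood of it, and the residual chattering shrinks as the step size decreases.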
Conclusions
This paper proposes a subgradient-based continuous-time algorithm to handle the constrained distributed quadratic programming over weight-unbalanced strongly connected directed graphs. The convergence as well as the convergence rate of the presented algorithm are rigorously proved by means of the nonsmooth Łojasiewicz inequality. Finally, a simulation and a practical application to robust estimation in wireless sensor networks are conducted to illustrate the feasibility and effectiveness of the proposed algorithm.
Acknowledgments
This work is supported by the National Natural Science Foundation of China under Grants 11731010, 11671109, 61773136 and 11871178.
© 2020 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.