
Gradient Algorithm and Convergence Analysis for Distributed Consensus Optimization

  • Abstract: This paper studies consensus optimization problems with set constraints in multi-agent networks and proposes a distributed algorithm with a fixed step size based on a primal-dual gradient scheme. The algorithm's parameters, including the step size, affect its convergence, so convergence must be analyzed first and suitable parameters then set according to the convergence conditions. For general fixed-step-size iteration schemes, this paper first proposes a convergence analysis paradigm based on Lyapunov functions, analogous to the Lyapunov stability analysis of general differential equations. Then, for the distributed gradient algorithm under consideration, a suitable Lyapunov function is constructed, and the admissible range of the algorithm's parameters is derived from the convergence conditions, avoiding lengthy and intricate case-by-case arguments. The theory and method proposed here also provide a framework for systematically analyzing other types of distributed algorithms.


    Abstract: In distributed optimization, a problem is solved cooperatively by a network of agents. Such problems have significant applications in many fields of science and engineering, such as metallurgical engineering. Complex industrial processes with multi-level characteristics, varying working conditions, and long production chains give rise to numerous micro- and macro-level optimization and decision-making problems, such as product quality control, production planning and scheduling, and comprehensive energy deployment. The theory and methods of distributed optimization are key to promoting strategic decision-making for the integration of industrialization with the new-generation industrial revolution. Their development enhances the ability to handle large-scale, complex big-data problems, with important practical value and economic benefits. In this study, consensus optimization with set constraints in multi-agent networks was explored, and a distributed algorithm with a fixed step size was proposed on the basis of a primal-dual gradient scheme. Parameters such as the step size affect the convergence of the algorithm, so convergence should be analyzed first and appropriate parameters then set in accordance with the convergence conditions. Existing works have constructed different Lyapunov functions by exploiting the specific iteration scheme of each algorithm to analyze convergence. In contrast, this study proposes a convergence analysis paradigm based on a Lyapunov function for general fixed-step-size iteration schemes, analogous to the Lyapunov stability analysis of general differential equations. A suitable Lyapunov function was constructed for the distributed gradient algorithm, and a range of admissible parameter settings was obtained in accordance with the convergence conditions. The proposed method avoids tedious and complicated case-by-case analysis of algorithm convergence and parameter assignment. The theory and method presented in this study also provide a framework and a systematic line of argument for other types of distributed algorithms, and may point to future directions of distributed optimization.
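To make the class of algorithm described above concrete, the following is a minimal sketch of a fixed-step-size primal-dual gradient method for consensus optimization with a set constraint. The quadratic local costs, ring communication graph, box constraint, and step size are illustrative assumptions, not the paper's setting or exact iteration scheme:

```python
import numpy as np

def laplacian_ring(n):
    """Graph Laplacian of an undirected ring network over n agents."""
    L = 2.0 * np.eye(n)
    for i in range(n):
        L[i, (i + 1) % n] -= 1.0
        L[i, (i - 1) % n] -= 1.0
    return L

def primal_dual_consensus(a, lo=-1.0, hi=1.0, alpha=0.05, iters=5000):
    """Illustrative fixed-step-size primal-dual consensus iteration.

    Each agent i holds a local cost f_i(x) = 0.5 * (x - a[i])**2 and a box
    constraint [lo, hi]; agents exchange values only with graph neighbors.
    With an inactive constraint, all local estimates converge to mean(a).
    """
    n = len(a)
    L = laplacian_ring(n)
    x = np.zeros(n)  # primal variables: each agent's local estimate
    v = np.zeros(n)  # dual variables: accumulated disagreement
    for _ in range(iters):
        grad = x - a  # local gradients of the quadratic costs
        # Projected primal descent step (projection enforces the set constraint).
        x_new = np.clip(x - alpha * (grad + L @ x + v), lo, hi)
        # Dual ascent step on the consensus constraint L x = 0.
        v = v + alpha * (L @ x)
        x = x_new
    return x

a = np.array([0.2, 0.9, -0.4, 0.5])
x = primal_dual_consensus(a)
print(x)  # each entry is close to mean(a) = 0.3
```

As the abstract notes, the fixed step size `alpha` must lie in a range determined by the convergence analysis: too large a step destabilizes the coupled primal-dual iteration, which is exactly the kind of parameter condition a Lyapunov-function argument yields.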
