1. Introduction
Over the past decades, the use of biochemical reactors and related techniques has increased greatly because of their fruitful application in converting biomass or cells into pharmaceutical or chemical products, such as vaccines [1], antibiotics [2], beverages [3], and industrial solvents [4]. Among the various classes and operating regimes of bioreactors, fed-batch modes have been used extensively in the biotechnological industry due to their considerable economic benefits [5,6,7]. The main objective of these reactors is to achieve a given or maximal product concentration at the end of the operation, which can be accomplished by applying suitable feed rates [8,9,10]. Thus, in order to ensure the economic benefit and product quality of fed-batch processes, the control of these units is a very important topic for engineers [11,12,13].
Switched dynamical systems provide a flexible modeling framework for a variety of engineering systems, such as financial systems [14], train control systems [15], hybrid electric vehicles [16], chemical process systems [17], and biological systems [18,19,20,21]. Generally speaking, a switched dynamical system is formed by a number of continuous-time or discrete-time subsystems and a switching rule [22]. Four types of switching rules are commonly used: time-dependent switching [23], state-dependent switching [24], average dwell time switching [25], and minimum dwell time switching [26]. Recently, switched dynamical system optimal control problems have become increasingly attractive due to their significance in theory and industrial production [27,28,29,30]. Because of the discrete nature of the switching rules, it is very challenging to solve switched dynamical system optimal control problems directly using classical optimal control approaches such as the maximum principle and dynamic programming [31,32,33,34]. In addition, analytical methods cannot be applied to obtain a solution of switched dynamical system optimal control problems because of their nonlinear nature [35,36,37]. Thus, in recent work, two kinds of well-known numerical optimization algorithms have been developed to obtain numerical solutions of switched dynamical system optimal control problems: the bi-level algorithm [38,39] and the embedding algorithm [40,41]. Besides these two well-known classes, many other numerical optimization algorithms have also been developed for solving switched dynamical system optimal control problems [42]. Unfortunately, most of these algorithms rest on the assumption that the switching rule is designed by a time-dependent switching strategy, which implies that the system dynamics must be continuously differentiable with respect to the system state [43,44,45]. However, this assumption is not always reasonable, since small perturbations of the system state may cause the dynamic equations to change discontinuously; the solution obtained is then usually not optimal. In addition, although these approaches have proved effective on many practical problems, they only provide an open-loop control [46,47,48,49,50,51,52,53]. Unfortunately, such open-loop controls are usually not robust in practice. Thus, optimal feedback controllers have become increasingly popular.
In this paper, we consider an optimal feedback control problem for a class of fed-batch fermentation processes by using a switched dynamical system approach. Our main contributions are as follows. Firstly, a dynamic optimization problem for a class of fed-batch fermentation processes is modeled as a switched dynamical system optimal control problem, and a general state-feedback controller is designed for this dynamic optimization problem. Unlike in existing works, the state-dependent switching method is applied to design the switching rule, and the structure of this state-feedback controller is not restricted to a particular form. In general, traditional methods for obtaining an optimal feedback control require solving the well-known Hamilton-Jacobi-Bellman partial differential equation, which is very difficult even for unconstrained optimal control problems. To overcome this difficulty, the problem is transformed into a mixed-integer optimal control problem by introducing a discrete-valued function. Furthermore, each of these discrete variables is represented by a set of 0-1 variables. Then, by using a quadratic constraint, these 0-1 variables are relaxed so that they are continuous on the closed interval [0,1]. Accordingly, the original mixed-integer optimal control problem is transformed into a nonlinear parameter optimization problem, which can be solved by any gradient-based numerical optimization algorithm. Unlike in existing works, the constraint introduced for these 0-1 variables is at most quadratic; thus, it does not increase the number of locally optimal solutions of the original problem. During the past decades, many iterative approaches have been proposed for solving nonlinear parameter optimization problems by using information about the objective function. The idea of these iterative approaches is usually to generate an iterative sequence such that the corresponding sequence of objective function values is monotonically decreasing. However, the existing algorithms have the following disadvantage: if an iterate is trapped in a curved narrow valley bottom of the objective function, the iterative methods lose their efficiency, because insisting on monotonically decreasing objective function values may lead to very short iterative steps. To overcome this challenge, an improved gradient-based algorithm is developed based on a novel search approach, in which the sequence of objective function values is not required to be monotonically decreasing. A large number of numerical experiments show that this novel search approach can effectively improve the convergence speed of the algorithm when an iterate is trapped in a curved narrow valley bottom of the objective function. Finally, an optimal feedback control problem of the 1,3-propanediol fermentation process is provided to illustrate the effectiveness of the method developed in this paper. Numerical simulation results show that the proposed method is less time-consuming, has a faster convergence speed, and obtains a better result than the existing approaches.
The rest of this paper is organized as follows. Section 2 presents the optimal feedback control problem for a class of fed-batch fermentation processes. In Section 3, by introducing a discrete-valued function and using a relaxation technique, this problem is transformed into a nonlinear parameter optimization problem, which can be solved by any gradient-based numerical optimization algorithm. An improved gradient-based numerical optimization algorithm is developed in Section 4. In Section 5, the convergence results of this numerical optimization algorithm are established. In Section 6, an optimal feedback control problem of the 1,3-propanediol fermentation process is provided to illustrate the effectiveness of the algorithm developed in this paper.
2. Problem formulation
In this section, a general state-feedback controller is proposed for a class of fed-batch fermentation process dynamic optimization problems, which will be modeled as an optimal control problem of switched dynamical systems under state-dependent switching.
Let \alpha_1 = \left[ \alpha_{11}, \cdots, \alpha_{1r_1} \right]^{\rm T} \in R^{r_1} and \alpha_2 = \left[ \alpha_{21}, \cdots, \alpha_{2r_2} \right]^{\rm T} \in R^{r_2} be two parameter vectors satisfying
and
respectively, where a_i , \bar{a}_i , i = 1, \cdots, r_1 , and b_j , \bar{b}_j , j = 1, \cdots, r_2 , are given constants. Suppose that t_f > 0 is a given terminal time. Then, a class of fed-batch fermentation process dynamic optimization problems can be described as follows: choose two parameter vectors \alpha_1 \in R^{r_1} , \alpha_2 \in R^{r_2} , and a general state-feedback controller
to minimize the objective function
subject to the switched dynamical system under state-dependent switching
with the initial condition
where x\left(t\right) \in R^n denotes the system state; x_0 denotes a given initial system state; u\left(t\right) \in R^m denotes the control input; \vartheta = \left[ \vartheta_1, \cdots, \vartheta_{r_3} \right]^{\rm T} \in R^{r_3} denotes a state-feedback parameter vector satisfying
c_k and \bar{c}_k , k = 1, \cdots, r_3 , are given constants; \Upsilon: R^n \times R^{r_3} \to R^m , \phi: R^n \to R , f_1: R^n \times \left[0, t_f\right] \to R^n , f_2: R^n \times R^m \times \left[0, t_f\right] \to R^n , g_1: R^n \times R^{r_1} \times \left[0, t_f\right] \to R^n , and g_2: R^n \times R^{r_2} \times \left[0, t_f\right] \to R^n are continuously differentiable functions. For convenience, this problem is referred to as Problem 1.
Remark 1. In the switched dynamical system (2.5), Subsystem 1 denotes the batch mode, during which there is no input feed (i.e., control input) u(t), and Subsystem 2 denotes the feeding mode, during which there is an input feed (i.e., control input) u(t). The fed-batch fermentation process oscillates between Subsystem 1 (the batch mode) and Subsystem 2 (the feeding mode), and g1(x(t),α1,t)=0 and g2(x(t),α2,t)=0 denote the active conditions of Subsystems 1 and 2, respectively.
Remark 2. Note that an integral term, which is used to measure the system running cost, can easily be incorporated into the objective function (2.4) by augmenting the switched dynamical system (2.5) with an additional system state variable (see Chapter 8 of [54]). Thus, the absence of an integral term in the objective function (2.4) is not a serious restriction.
Remark 3. The structure of the general state-feedback controller (2.3) is governed by the given continuously differentiable function Υ, and the state-feedback parameter vector ϑ is a decision variable vector, which will be chosen optimally. For example, the linear state-feedback controller described by u(t)=Kx(t) is a very common state-feedback controller, where K∈Rm×n denotes a state-feedback gain matrix to be found optimally.
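As a minimal illustration of Remark 3, the following sketch (in Python, with hypothetical dimensions and numerical values) shows how the general controller u(t) = \Upsilon(x(t), \vartheta) reduces to the linear case u(t) = Kx(t) when the parameter vector \vartheta simply stacks the entries of the gain matrix K:

```python
import numpy as np

def linear_feedback(x, theta, m, n):
    # Hypothetical instance of the general controller u = Upsilon(x, theta):
    # the parameter vector theta (of length r3 = m * n) is reshaped into the
    # gain matrix K, which gives the linear state feedback u = K x.
    K = np.asarray(theta, dtype=float).reshape(m, n)
    return K @ np.asarray(x, dtype=float)

# Illustrative use: n = 4 states, m = 1 input, so theta has r3 = 4 components.
x = np.array([5.0, 0.1, 400.0, 0.0])
theta = np.array([0.0, 0.02, -0.001, 0.0])
print(linear_feedback(x, theta, m=1, n=4))   # -> [-0.398]
```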
3. Problem transformation and relaxation
3.1. Problem transformation
In Problem 1, the state-dependent switching strategy is adopted to design the switching rule, which differs from existing switched dynamical system optimal control problems. Consequently, the solution of Problem 1 cannot be obtained by directly using the existing numerical computation approaches for switched dynamical system optimal control problems, in which the switching rule is designed by a time-dependent strategy. In order to overcome this difficulty, by introducing a discrete-valued function, the problem is transformed in this subsection into an equivalent nonlinear dynamical system optimal control problem with discrete and continuous variables.
Firstly, by substituting the general state-feedback controller (2.3) into the switched dynamical system (2.5), Problem 1 can be equivalently written as the following problem:
Problem 2. Choose (α1,α2,ϑ)∈Rr1×Rr2×Rr3 to minimize the objective function
subject to the switched dynamical system under state-dependent switching
and the three bound constraints (2.1), (2.2), and (2.7), where \bar{f}_2 \left( x\left(t\right), \vartheta, t \right) = f_2 \left( x\left(t\right), \Upsilon\left( x\left(t\right), \vartheta \right), t \right) .
Next, note that the solution of Problem 1 cannot be obtained by directly using the existing numerical computation approaches for switched dynamical system optimal control problems, in which the switching rule is designed by a time-dependent rather than a state-dependent strategy. In order to overcome this difficulty, a novel discrete-valued function y(t) is introduced as follows:
Then, Problem 2 can be transformed into the following equivalent optimization problem with discrete and continuous variables:
Problem 3. Choose (α1,α2,ϑ,y(t))∈Rr1×Rr2×Rr3×{1,2} to minimize the objective function
subject to the nonlinear dynamical system
the equality constraint
and the three bound constraints (2.1), (2.2), and (2.7).
3.2. Problem relaxation
Note that standard nonlinear numerical optimization algorithms are usually developed for nonlinear optimization problems with only continuous variables, for example, the sequential quadratic programming algorithm and the interior-point method. Thus, the solution of Problem 3, which has both discrete and continuous variables, cannot be obtained by directly using these existing standard algorithms. In order to overcome this difficulty, this subsection introduces a relaxed problem, which has only continuous variables.
Define
where σ(t)=[σ1(t),σ2(t)]T. Then, a theorem can be established as follows.
Theorem 1. If the nonnegative functions σ1(t) and σ2(t) satisfy the following equality:
then two results can be obtained as follows:
(1) For any t∈[0,tf], the function P(σ(t)) is nonnegative;
(2) For any t∈[0,tf], P(σ(t))=0 if and only if σi(t)=1 for one i∈{1,2} and σi(t)=0 for the other i∈{1,2}.
Proof. (1) By using the equality (3.8) and the Cauchy-Schwarz inequality, we have
Note that the functions σ1(t) and σ2(t) are nonnegative. Then, squaring both sides of the inequality (3.9) yields
which implies that for any t∈[0,tf], the function P(σ(t)) is nonnegative.
(2) To establish the second part of Theorem 1, it suffices to prove the following result: for any t∈[0,tf], P(σ(t))=0 implies that σ_{i∗}(t)=1 for one i∗∈{1,2} and σ_i(t)=0 for the other i∈{1,2} with i≠i∗.
Define
Then, the inequality (3.9) can be equivalently transformed into as follows:
where \cdot and \left\| \cdot \right\| denote the vector dot product and the Euclidean norm, respectively. Note that the equality
holds if and only if there exists a constant \beta \in R such that
By using the equality (3.8), one obtains v_1 \left(t \right) \neq \textbf{0} and v_2 \left(t \right) \neq \textbf{0} , where \textbf{0} denotes the zero vector. Then, \beta is a nonzero constant, and the equality (3.12) implies
Furthermore, the constant \beta can be set equal to one integer i^* \in \left\{ {1, 2} \right\} , and for the other integer i \in \left\{ {1, 2} \right\} , we have
From the two equalities (3.8) and (3.15), we obtain \sigma_{i^*} \left(t \right) = 1 . This completes the proof of Theorem 1.
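The claims of Theorem 1 can also be checked numerically. The sketch below assumes that the equality (3.8) is the simplex condition \sigma_1(t) + \sigma_2(t) = 1 and that P(\sigma(t)) has the same form as the function \bar P used later in Problem 5, i.e., P(\sigma) = \sum_{j=1}^2 j^2 \sigma_j - (\sum_{j=1}^2 j \sigma_j)^2; both are assumptions inferred from the surrounding construction rather than the paper's displayed formulae.

```python
import numpy as np

def P(sigma):
    # Assumed form of the relaxation function, mirroring the \bar P construction
    # in Problem 5: P(sigma) = sum_j j^2 sigma_j - (sum_j j sigma_j)^2, j = 1, 2.
    j = np.array([1.0, 2.0])
    return float(np.dot(j**2, sigma) - np.dot(j, sigma)**2)

# Sample nonnegative pairs satisfying the assumed equality sigma_1 + sigma_2 = 1.
for s2 in np.linspace(0.0, 1.0, 11):
    sigma = np.array([1.0 - s2, s2])
    print(f"sigma = ({sigma[0]:.1f}, {sigma[1]:.1f}),  P = {P(sigma):.4f}")
# P is nonnegative along the whole segment and vanishes only at the binary
# points (1, 0) and (0, 1), which is exactly the content of Theorem 1.
```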
Now, Problem 3 can be rewritten as a relaxation problem as follows:
Problem 4. Choose \left({\alpha _1, \alpha _2, \vartheta, \sigma\left(t \right)} \right) \in R^{r_1 } \times R^{r_2 } \times R^{r_3 } \times R^2 to minimize the objective function
subject to the nonlinear dynamical system
the two equality constraints
the bound constraint
the equality constraint (3.8), and the three bound constraints (2.1), (2.2), and (2.7), where
By using Theorem 1, one can derive that Problems 3 and 4 are equivalent.
3.3. A nonlinear parameter optimization problem
Note that the bound constraint (3.20) essentially consists of continuous-time inequality constraints. Thus, the solution of Problem 4 also cannot be obtained by directly using the existing standard algorithms. In order to obtain the solution of Problem 4, this subsection introduces a nonlinear parameter optimization problem with continuous-time equality constraints and several bound constraints.
Suppose that \tau_i denotes the i th switching time. Then, we have
where M \geqslant 1 is a given fixed integer. It is important to note that the switching times are not independent optimization variables; their values can be obtained indirectly from the state trajectory of the switched dynamical system (2.5). Then, Problem 4 can be transformed into the following equivalent optimization problem:
Problem 5. Choose \left({\alpha _1, \alpha _2, \vartheta, \xi} \right) \in R^{r_1 } \times R^{r_2 } \times R^{r_3 } \times R^{2M} to minimize the objective function
subject to the nonlinear dynamical system
the equality constraints
the bound constraint
and the three bound constraints (2.1), (2.2), and (2.7), where \xi _i^1 and \xi _i^2 denote, respectively, the values of \sigma _1 \left(t \right) and \sigma _2 \left(t \right) on the i th subinterval {\left[{\tau _{i - 1}, \tau _i } \right)} , i = 1, \cdots, M ; \xi = \left[{\left({\xi ^1 } \right)^{\rm T}, \left({\xi ^2 } \right)^{\rm T} } \right]^{\rm T} , \xi ^1 = \left[{\xi _1^1, \cdots, \xi _M^1 } \right]^{\rm T} , \xi ^2 = \left[{\xi _1^2, \cdots, \xi _M^2 } \right]^{\rm T} ; \bar P\left(\xi, t \right) = \sum\limits_{i = 1}^M {\left({\sum\limits_{j = 1}^2 {j^2 \xi _i^j } - \left({\sum\limits_{j = 1}^2 {j\xi _i^j } } \right)^2 } \right)} \chi _{\left[{\tau _{i - 1}, \tau _i } \right)} \left(t \right) ; and \chi _I \left(t \right) is given by
which denotes the indicator (characteristic) function of the subinterval I \subset \left[{0, t_f } \right] .
Because the switching times are unknown, it is very challenging to compute the gradient of the objective function (3.23). In order to overcome this challenge, the following time-scaling transformation is introduced to map the variable switching times to fixed time points:
Suppose that the function t\left(s \right):\left[{0, M} \right] \to R is continuously differentiable and is governed by the following equation:
with the boundary condition
where \theta_i is the subsystem dwell time on the i th subinterval \left[{i- 1, i} \right) \subset \left[{0, M } \right] . In general, the transformation (3.30)–(3.31) is referred to as a time-scaling transformation.
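As an illustration, the following sketch maps the new horizon s \in [0, M] back to physical time, assuming (as suggested by (3.30)–(3.31)) that t(s) satisfies \dot t(s) = \theta_i on [i-1, i) with t(0) = 0; under this assumption the i th switching time corresponds to the fixed point s = i.

```python
import numpy as np

def time_scaling(s, theta):
    # Map the scaled time s in [0, M] to physical time t(s), assuming the
    # time-scaling dynamics dt/ds = theta_i on [i - 1, i) with t(0) = 0, so
    # theta_i is the dwell time of the active subsystem on the ith subinterval.
    theta = np.asarray(theta, dtype=float)
    i = min(int(np.floor(s)), len(theta) - 1)   # index of the current subinterval
    return theta[:i].sum() + theta[i] * (s - i)

theta = [2.0, 5.0, 3.0]                 # hypothetical dwell times with sum t_f = 10
print(time_scaling(1.0, theta))         # 2.0  -> the first switching time
print(time_scaling(2.5, theta))         # 8.5  -> halfway through the third mode
print(time_scaling(3.0, theta))         # 10.0 -> the terminal time t_f
```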
Define \theta = \left[{\theta _1, \cdots, \theta _M } \right]^{\rm T} , where
Then, by using the time-scaling transformation (3.30) and (3.31), we can rewrite Problem 5 as the following equivalent nonlinear parameter optimization problem with fixed switching times.
Problem 6. Choose \left({\alpha _1, \alpha _2, \vartheta, \xi, \theta} \right) \in R^{r_1 } \times R^{r_2 } \times R^{r_3 } \times R^{2M} \times R^M to minimize the objective function
subject to the nonlinear dynamical system
the continuous-time equality constraints
the linear equality constraint (3.27), and the bound constraints (2.1), (2.2), (2.7), (3.28), and (3.32), where \hat x\left(s \right) = x\left({t\left(s \right)} \right) and \hat P\left({\xi, s} \right) = \sum\limits_{i = 1}^M {\theta _i \left({\sum\limits_{j = 1}^2 {j^2 \xi _i^j } - \left({\sum\limits_{j = 1}^2 {j\xi _i^j } } \right)^2 } \right)} \chi _{\left[{i- 1, i} \right)} \left(s \right) .
4. An improved gradient-based numerical optimization algorithm
In this section, an improved gradient-based numerical optimization algorithm will be proposed for obtaining the solution of Problem 1.
4.1. A penalty problem
In order to handle the continuous-time equality constraints (3.35) and (3.36), this subsection adopts the idea of the l_1 penalty function [55] to rewrite Problem 6 as a nonlinear parameter optimization problem with a linear equality constraint and several simple bound constraints.
Problem 7. Choose \left({\alpha _1, \alpha _2, \vartheta, \xi, \theta} \right) \in R^{r_1 } \times R^{r_2 } \times R^{r_3 } \times R^{2M} \times R^M to minimize the objective function
subject to the nonlinear dynamical system (3.34), the linear equality constraint (3.27), and the bound constraints (2.1), (2.2), (2.7), (3.28), and (3.32), where
where \gamma > 0 denotes the penalty parameter.
The theory of the l_1 penalty function [47] indicates that any solution of Problem 7 is also a solution of Problem 6. In addition, it is straightforward to compute the gradient of the linear function in the equality constraint (3.27), and the gradient of the objective function (4.1) will be presented in Section 4.2. Thus, the solution of Problem 7 can be obtained by applying any gradient-based numerical computation method.
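For orientation only, the sketch below illustrates an l_1-type penalization of continuous-time equality constraints in the spirit of (4.1); the exact form of (4.1) is given in the paper, and the function names and sample numbers here are hypothetical.

```python
import numpy as np

def penalized_objective(terminal_cost, residuals, s_grid, gamma):
    # Hypothetical l1-type penalty objective in the spirit of (4.1):
    #   J_gamma = phi(x(M)) + gamma * integral over [0, M] of |c(s)| ds,
    # where c(s) collects the continuous-time equality constraints (for example,
    # the relaxation constraint \hat P(xi, s) = 0) sampled on s_grid.
    r = np.abs(np.asarray(residuals, dtype=float))
    violation = np.sum(0.5 * (r[:-1] + r[1:]) * np.diff(s_grid))  # trapezoidal rule
    return terminal_cost + gamma * violation

# Illustrative numbers: terminal cost -x4(t_f) plus a small sampled residual profile.
s_grid = np.linspace(0.0, 3.0, 301)
residuals = 0.05 * np.sin(np.pi * s_grid)     # hypothetical constraint violation
print(penalized_objective(-1200.0, residuals, s_grid, gamma=100.0))
```

Any violation of the equality constraints thus increases the penalized objective, so driving J_gamma down simultaneously drives the constraint residuals toward zero.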
4.2. Gradient formulae
In order to obtain the solution of Problem 7, the gradient formulae of the objective function (4.1) are presented in this subsection via the following theorem.
Theorem 2. For any s \in \left[{0, M} \right] , the gradient formulae of the objective function (4.1) with respect to the decision variables \alpha _1 , \alpha _2 , \vartheta , \xi , and \theta are given by
where H\left({\hat x\left(s \right), \alpha _1, \alpha _2, \vartheta, \xi, \theta, \lambda \left(s \right)} \right) denotes the Hamiltonian function defined by
and the function \lambda \left(s \right) denotes the costate, which satisfies the following costate system:
with the terminal condition
Proof. Similar to the discussion of Theorem 5.2.1 in [56], the gradient formulae (4.2)–(4.6) can be obtained. This completes the proof of Theorem 2.
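For readers unfamiliar with this construction, costate-based gradient formulae of this type typically follow the standard pattern below (an illustrative sketch for one decision vector only, not the paper's exact expressions (4.2)–(4.6)):

\frac{\partial J_\gamma }{\partial \theta } = \int_0^M {\frac{\partial H\left({\hat x\left(s \right), \alpha _1, \alpha _2, \vartheta, \xi, \theta, \lambda \left(s \right)} \right)}{\partial \theta }} \, {\rm d}s , \qquad \dot \lambda \left(s \right) = - \left({\frac{\partial H}{\partial \hat x}} \right)^{\rm T} , \qquad \lambda \left(M \right) = \left({\frac{\partial \phi \left({\hat x\left(M \right)} \right)}{\partial \hat x}} \right)^{\rm T} ,

so that one forward integration of the state system and one backward integration of the costate system yield all gradient components needed by a gradient-based method.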
4.3. Algorithm
For simplicity of notation, let g \left({\eta } \right) = \nabla J_\gamma \left(\eta \right) denote the gradient of the objective function J_\gamma described by (4.1) at \eta , where \eta = \left[{\left({\alpha _1 } \right)^{\rm T}, \left({\alpha _2 } \right)^{\rm T}, \vartheta ^{\rm T}, \xi ^{\rm T}, \theta ^{\rm T} } \right]^{\rm T} . In addition, let \left\| \cdot \right\| and \left\| \cdot \right\|_\infty denote, respectively, the Euclidean norm and the infinity norm, and let the subscript k denote the value of a function at the point \eta_k or in the k th iteration, for instance, g_k and \left({J_\gamma } \right)_k . Then, based on the above discussion, an improved gradient-based numerical optimization algorithm is provided in this subsection to obtain the solution of Problem 1.
Remark 4. During the past decades, many iterative approaches have been proposed for solving nonlinear parameter optimization problems by using information about the objective function [57]. The idea of these iterative approaches is usually to generate an iterative sequence such that the corresponding sequence of objective function values is monotonically decreasing. However, the existing algorithms have the following disadvantage: if an iterate is trapped in a curved narrow valley bottom of the objective function, the iterative methods lose their efficiency, because insisting on monotonically decreasing objective function values may lead to very short iterative steps. To overcome this challenge, an improved gradient-based algorithm is developed in Algorithm 1 based on a novel search approach, in which the sequence of objective function values is not required to be monotonically decreasing. In addition, an improved adaptive strategy for the memory element N_k described by (4.12), which is used in (4.13), is proposed in the iterative process of Algorithm 1. The explanation of the equality (4.12) is as follows. If the first condition described by (4.12) holds, the iterate is trapped in a curved narrow valley bottom of the objective function; thus, in order to avoid creeping along the bottom of this narrow curved valley, the value of the memory element N_k should be increased. If the second condition described by (4.12) holds, it is better to keep the value of the memory element N_k unchanged. If the third condition described by (4.12) holds, the iterate is in a flat region; thus, in order to decrease the objective function value, the value of the memory element N_k is decreased. The above discussion implies that the novel search approach described in Algorithm 1 is also an adaptive method.
Remark 5. The sufficient descent condition is extremely important for the convergence of any gradient-based numerical optimization algorithm. Thus, the goal of lines 12–16 of Algorithm 1 is to avoid uphill directions and keep \{ \rho_k \} uniformly bounded. In fact, for any k , \rho _{\min } \leqslant \rho _k \leqslant \rho _{\max } and d_k = - \rho _k g_k ensure that there exist two constants l_1 > 0 and l_2 > 0 such that d_k satisfies the following two conditions:
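To make the above discussion concrete, the following Python sketch implements a nonmonotone gradient iteration of the kind described in Remarks 4 and 5: the direction is d_k = -\rho_k g_k with \rho_k clipped to [\rho_{\min}, \rho_{\max}], the line search only requires sufficient decrease relative to the worst of the last N_k objective values, and N_k is adapted heuristically. The choice of \rho_k, the adaptation thresholds, and the test function are illustrative assumptions, not the specific rules (4.12)–(4.16) used by Algorithm 1.

```python
import numpy as np

def nonmonotone_gradient_descent(J, grad, eta0, rho_min=1e-4, rho_max=1.0,
                                 N_max=10, delta=1e-4, varpi=0.5,
                                 max_iter=500, tol=1e-6):
    # Illustrative nonmonotone gradient method in the spirit of Algorithm 1.
    # The memory length N_k controls how many recent objective values the line
    # search may compare against; N_k = 1 recovers a monotone Armijo search.
    eta = np.asarray(eta0, dtype=float)
    J_hist = [J(eta)]
    N_k = 1
    for _ in range(max_iter):
        g = grad(eta)
        if np.linalg.norm(g) < tol:
            break
        rho = np.clip(1.0 / (1.0 + np.linalg.norm(g)), rho_min, rho_max)
        d = -rho * g                      # bounded descent direction d_k = -rho_k g_k
        J_ref = max(J_hist[-N_k:])        # worst of the last N_k objective values
        omega = 1.0
        for _ in range(60):               # backtracking with a safeguard
            if J(eta + omega * d) <= J_ref + delta * omega * g.dot(d):
                break
            omega *= varpi
        eta = eta + omega * d
        J_hist.append(J(eta))
        # Hypothetical adaptation of the memory element N_k (not the paper's (4.12)):
        # lengthen the memory when the step is creeping (narrow curved valley),
        # shorten it in nearly flat regions, and keep it unchanged otherwise.
        if omega < 1e-3:
            N_k = min(N_k + 1, N_max)
        elif np.linalg.norm(g) < 1e-2:
            N_k = max(N_k - 1, 1)
    return eta, J_hist

# Usage on the Rosenbrock function, whose curved narrow valley is exactly the
# situation that Remark 4 targets.
J = lambda z: (1.0 - z[0])**2 + 100.0 * (z[1] - z[0]**2)**2
grad = lambda z: np.array([-2.0 * (1.0 - z[0]) - 400.0 * z[0] * (z[1] - z[0]**2),
                           200.0 * (z[1] - z[0]**2)])
eta_star, history = nonmonotone_gradient_descent(J, grad, [-1.2, 1.0])
print(eta_star, history[-1])
```

Setting N_max = 1 in this sketch forces the classical monotone search, which is precisely the behavior that Remark 4 identifies as inefficient along a narrow curved valley.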
5. Convergence analysis
This section establishes the convergence results of Algorithm 1 developed in Section 4. In order to establish these results, we suppose that the following two conditions hold:
Assumption 1. J_\gamma is a continuously differentiable function and is bounded below on R^{r_1 } \times R^{r_2 } \times R^{r_3 } \times R^{2M} \times R^M .
Assumption 2. For any \eta_1 \in \Omega and \eta_2 \in \Omega , there is a constant l_3 such that
where \Omega denotes an open set and g\left(\eta \right) denotes the gradient of J_\gamma \left(\eta \right) .
Theorem 3. Suppose that Assumptions 1 and 2 hold. Let \{ \eta_k \} be a sequence obtained by using Algorithm 1. Then, there is a constant \varsigma > 0 such that the following inequality holds:
Proof. Let \varsigma _0 be defined by \varsigma _0 = \mathop {\inf }\limits_{\forall k} \left\{ {\omega _k } \right\} \geq 0 .
If \varsigma _0 > 0 , then by using the inequalities (4.14) and (4.15), one can obtain
Let \varsigma be defined by \varsigma = l_1 \varsigma _0 . Then, the proof of Theorem 3 is complete for the case \varsigma _0 > 0 .
If \varsigma _0 = 0 , then there is a subset \Lambda \subseteq \left\{ {0, 1, 2, \cdots } \right\} such that the following equality holds:
which indicates that there exists a \hat k such that the following inequality holds:
for any k > \hat k and {k \in \Lambda } . Let \omega = \omega _k \varpi . Then, the inequality (4.14) does not hold. That is, one can obtain
which implies
Applying the mean value theorem to the left-hand side of the inequality (5.7) yields
where 0 \leqslant \zeta _k \leqslant 1 . From the inequality (5.8), we obtain
By using Assumption 2 and the Cauchy-Schwarz inequality, from (4.15) and (5.9), we have
Furthermore, applying \omega = \omega _k \varpi and the inequality (4.16) to the inequality (5.10), we obtain
for any k > \hat k and {k \in \Lambda } . Clearly, the inequalities (5.4) and (5.11) are contradictory. Thus, \varsigma _0 > 0 . This completes the proof of Theorem 3.
Lemma 1. Suppose that Assumptions 1 and 2 hold. Let \{ \eta_k \} be a sequence obtained by using Algorithm 1. Then, the following inequalities
are true, where A = N_{\max } .
Proof. Note that if the following inequality
is true, then the inequality (5.12) also holds. Here, the inequality (5.14) will be proved by using mathematical induction.
Firstly, Theorem 3 indicates
where q\left({Ap} \right) = \min \left\{ {Ap, N_{Ap} } \right\} . By using 0 \leqslant q\left({Ap} \right) \leqslant A and the inequality (5.15), one can derive that the inequality (5.14) is true for j = 1 .
Suppose that the inequality (5.14) is true for 1 \leqslant j \leqslant A - 1 . Note that \varsigma > 0 and the term \left\| {g_{Ap + j - 1} } \right\|^2 described in (5.14) is nonnegative. Then, one can obtain
for 1 \leqslant j \leqslant A - 1 .
Next, by using 0 \leqslant q\left({Ap} \right) \leqslant A , the inequality (5.2), and the inequality (5.16), one can derive
which implies that the inequality (5.14) is also true for j+1 . Then, the inequality (5.14) is true for 1 \leqslant j \leqslant A by using mathematical induction. Thus, the inequality (5.12) holds.
In addition, Assumption 1 shows that J_\gamma is a continuously differentiable function and bounded below on R^{r_1 } \times R^{r_2 } \times R^{r_3 } \times R^{2M} \times R^M , which indicates that
Then, summing the inequality (5.12) over p yields
Thus, the inequality (5.13) holds. This completes the proof of Lemma 1.
Theorem 4. Suppose that these conditions of Theorem 3 are true. Then, the following equality holds:
where g\left({\eta _k } \right) presents the gradient of the objective function J_\gamma described by (4.1) at the point \eta_k .
Proof. Firstly, the following result will be proved: there is a constant l_4 such that
By using Assumptions 1 and 2, one can obtain
Let the constant l_4 be defined by l_4 = 1 + l_2 l_3 \omega _k . Then, the inequality (5.21) implies that the inequality (5.20) is true.
Define the function \psi \left(p \right) by
Then, Lemma 1 indicates that the following equality holds:
By using the inequality (5.20), one can obtain
Thus, from (5.23) and (5.24), one can deduce that the equality (5.19) is true. This completes the proof of Theorem 4.
6. Numerical results
In this section, an optimal feedback control problem of the 1,3-propanediol fermentation process is provided to illustrate the effectiveness of the approach developed in Sections 2–5. All numerical simulations are implemented on a personal computer with an Intel Skylake dual-core i5-6200U CPU (2.3 GHz).
The 1,3-propanediol fermentation process can be described by switching between two subsystems: a batch subsystem and a feeding subsystem. There is no input feed during the batch subsystem, while alkali and glycerol are added to the fermentor during the feeding subsystem. In general, a subsystem switch occurs when the glycerol concentration reaches the given upper or lower threshold. Following [58], the 1,3-propanediol fermentation process can be modeled as the following switched dynamical system under state-dependent switching:
where t_f denotes the given terminal time; the system states x_1 \left(t \right) , x_2 \left(t \right) , x_3 \left(t \right) , x_4 \left(t \right) denote the volume of fluid ( L ), the concentration of biomass ( gL^{-1} ), the concentration of glycerol ( mmolL^{-1} ), and the concentration of 1,3-propanediol ( mmolL^{-1} ), respectively; the control input u\left(t \right) denotes the feeding rate ( Lh^{-1} ); x\left(t \right) = \left[{x_1 \left(t \right), x_2 \left(t \right), x_3 \left(t \right), x_4 \left(t \right)} \right]^{\rm T} denotes the system state vector; Subsystem 1 and Subsystem 2 denote the batch subsystem and the feeding subsystem, respectively; \alpha _1 and \alpha _2 (two parameters that need to be optimized) denote the upper and lower thresholds of the glycerol concentration, respectively; and the functions f_1 \left({x\left(t \right), t} \right) , f_2 \left({x\left(t \right), u\left(t \right), t} \right) are given by
Subsystem 1 is essentially a natural fermentation process, since there is no input feed. The functions \varphi , \Delta_1 , and \Delta_2 are defined by
which denote the growth rate of the cells, the consumption rate of the substrate, and the formation rate of 1,3-propanediol, respectively. In the equality (6.4), the parameters x_3^ * and x_4^ * denote the critical concentrations of glycerol and 1,3-propanediol, respectively; h_1 , h_2 , h_3 , Y_1 , Y_2 , Y_3 , Z_1 , Z_2 , l_7 , and l_8 are given parameters.
Note that the feeding subsystem does not consist only of the natural fermentation process. Thus, the function f_2 \left({x\left(t \right), u\left(t \right), t} \right) is introduced to describe the process dynamics arising from the control input feed in Subsystem 2. In the equality (6.3), the given parameters l_5 and l_6 denote the proportion and concentration of glycerol in the control input feed, respectively.
In general, as the biomass increases, the consumption of glycerol also increases. Thus, during Subsystem 1 (the batch subsystem), the concentration of glycerol eventually becomes too low because no new glycerol is added, and Subsystem 1 switches to Subsystem 2 (the feeding subsystem) when the equality x_3 \left(t \right) - \alpha _2 = 0 (the active condition of Subsystem 2) is satisfied. On the other hand, during Subsystem 2 (the feeding subsystem), the concentration of glycerol eventually becomes too high because new glycerol is added, which inhibits cell growth. Thus, Subsystem 2 switches to Subsystem 1 (the batch subsystem) when the equality x_3 \left(t \right) - \alpha _1 = 0 (the active condition of Subsystem 1) is satisfied.
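The threshold-based switching logic just described can be sketched as follows; the kinetic terms below are placeholders rather than the functions f_1 , f_2 of (6.2)–(6.3), and all numerical values are only illustrative.

```python
import numpy as np

def simulate_switching(x0, alpha1, alpha2, u, t_f, dt=0.01):
    # Illustrative simulation of the batch/feeding switching rule: switch to the
    # feeding mode (Subsystem 2) when the glycerol concentration x3 falls to
    # alpha2, and back to the batch mode (Subsystem 1) when x3 rises to alpha1.
    # The kinetics are placeholders, not the paper's f1, f2 of (6.2)-(6.3).
    x = np.array(x0, dtype=float)            # [volume, biomass, glycerol, 1,3-PD]
    mode, switch_times = 1, []               # start in the batch mode
    for step in range(int(t_f / dt)):
        t = step * dt
        mu = 0.1 * x[2] / (50.0 + x[2])      # placeholder specific growth rate
        if mode == 1:                        # batch mode: no input feed
            dx = np.array([0.0, mu * x[1], -20.0 * mu * x[1], 30.0 * mu * x[1]])
        else:                                # feeding mode: glycerol added at rate u
            D = u / x[0]                     # dilution rate
            dx = np.array([u,
                           mu * x[1] - D * x[1],
                           -20.0 * mu * x[1] + D * (700.0 - x[2]),
                           30.0 * mu * x[1] - D * x[3]])
        x += dt * dx                         # explicit Euler step
        if mode == 1 and x[2] <= alpha2:     # active condition of Subsystem 2
            mode = 2
            switch_times.append(round(t, 2))
        elif mode == 2 and x[2] >= alpha1:   # active condition of Subsystem 1
            mode = 1
            switch_times.append(round(t, 2))
    return x, switch_times

x_final, switches = simulate_switching(x0=[5.0, 2.0, 400.0, 0.0],
                                       alpha1=584.4, alpha2=246.5,
                                       u=2.0, t_f=48.0)
print(np.round(x_final, 2), switches)
```

Running this sketch produces a trajectory that alternates between the two modes whenever x_3(t) crosses the assumed thresholds, mirroring the oscillation between Subsystems 1 and 2 described above.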
Suppose that the feeding rate u \left(t \right) , the upper threshold \alpha _1 of the glycerol concentration, and the lower threshold \alpha _2 of the glycerol concentration satisfy the following bound constraints:
respectively.
The model parameters of the dynamic optimization problem for the 1, 3-propanediol fermentation process are presented by
Suppose that the control input u\left(t \right) takes the form of the piecewise state-feedback controller u\left(t \right) = \sum\limits_{i = 1}^{M} {k_i x\left(t \right)\chi _{\left[{\tau _{i - 1}, \tau _i } \right)} \left(t \right)} . Our main objective is to maximize the concentration of 1,3-propanediol at the terminal time t_f . Thus, the optimal feedback control problem of the 1,3-propanediol fermentation process can be stated as follows: choose a control input u\left(t \right) to minimize the objective function J \left(u\left(t \right) \right) = - x_4 \left(t_f \right) subject to the switched dynamical system (6.1) with the initial condition x \left(0 \right) = x_0 and the bound constraints (6.7)–(6.9). The improved gradient-based numerical optimization algorithm (Algorithm 1 in Section 4.3) is then adopted to solve this problem in MATLAB 2010a. The optimal objective function value is J^* = -x_4 \left(t_f \right) = -1265.5597 , and the optimal values of the parameters \alpha_1 and \alpha_2 are 584.3908 and 246.5423 , respectively. The optimal feedback gain matrices K_i^* , i = 1, \cdots, 9 , are given by
and the corresponding numerical simulation results are presented in Figures 1–4.
Note that Problem 6 is an optimal control problem of nonlinear dynamical systems with state constraints. Thus, the finite difference approximation approach developed by Nikoobin and Moradi [59] can also be applied to this dynamic optimization problem of the 1,3-propanediol fermentation process. In order to compare it with the improved gradient-based numerical optimization algorithm (Algorithm 1 in Section 4.3), the finite difference approximation approach [59] is also adopted to solve this problem with the same model parameters under the same conditions, and the numerical comparison results are presented in Figure 5 and Table 1.
Figure 5 shows that the improved gradient-based numerical optimization algorithm (Algorithm 1 in Section 4.3) takes only 67 iterations to obtain the satisfactory result x_4(t_f) = 1265.5597 , while the finite difference approximation approach developed by Nikoobin and Moradi [59] takes 139 iterations to reach x_4(t_f) = 1052.9140 . That is, the number of iterations of the improved gradient-based numerical optimization algorithm is reduced by 51.7986\% . In addition, Table 1 shows that the result x_4(t_f) = 1052.9140 obtained by the finite difference approximation approach [59] is inferior to the result x_4(t_f) = 1265.5597 obtained by the improved gradient-based numerical optimization algorithm, which also saves 60.4695\% of the computation time.
In conclusion, the above numerical simulation results show that the improved gradient-based numerical optimization algorithm (Algorithm 1 in Section 4.3) is less time-consuming, has a faster convergence speed, and obtains a better numerical solution than the finite difference approximation approach developed by Nikoobin and Moradi [59]. That is, an effective numerical optimization algorithm has been presented for solving the dynamic optimization problem of the 1,3-propanediol fermentation process.
7. Conclusions
In this paper, the dynamic optimization problem for a class of fed-batch fermentation processes is modeled as an optimal control problem of switched dynamical systems under state-dependent switching, and a general state-feedback controller is designed for this dynamic optimization problem. Then, by introducing a discrete-valued function and using a relaxation technique, this problem is transformed into a nonlinear parameter optimization problem. Next, an improved gradient-based algorithm is developed based on a novel search approach, and a large number of numerical experiments show that this novel search approach can effectively improve the convergence speed of the algorithm when an iterate is trapped in a curved narrow valley bottom of the objective function. Finally, an optimal feedback control problem of the 1,3-propanediol fermentation process is provided to illustrate the effectiveness of the method developed in this paper, and the numerical simulation results show that this method is less time-consuming, has a faster convergence speed, and obtains a better result than the existing approaches. In the future, we will continue to study the dynamic optimization problem for a class of fed-batch fermentation processes with uncertainty constraints.
Acknowledgments
The authors express their sincere gratitude to the anonymous reviewers for their constructive comments, which helped improve the presentation and quality of this manuscript. This work was supported by the National Natural Science Foundation of China under Grant Nos. 61963010 and 61563011, and the Special Project for Cultivation of New Academic Talent and Innovation Exploration of Guizhou Normal University in 2019 under Grant No. 11904-0520077.
Conflict of interest
The authors declare no conflicts of interest.