
Finite Dimensional Approximation of the Monodromy Operator of a Periodic Delay Differential Equation

Vol. 09, No. 11 (2018), Article ID: 88878, 23 pages

DOI: 10.4236/am.2018.911086

Eli A. Vazquez, Joaquin Collado

Automatic Control Department, CINVESTAV-IPN, Mexico City, Mexico

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: October 20, 2018; Accepted: November 27, 2018; Published: November 30, 2018

ABSTRACT

Using piecewise constant orthonormal functions, an approximation of the monodromy operator of a linear Periodic Delay Differential Equation (PDDE) is obtained by projecting the integral equation corresponding to the PDDE onto a finite dimensional subspace, yielding a linear operator over the space of initial conditions. This approximation allows us to treat the state space as finite dimensional, resulting in a finite matrix approximation whose spectrum converges to the spectrum of the monodromy operator.

Keywords:

Monodromy Operator, Periodic Delay Differential Equations, Walsh Functions, Block Pulse Functions, Finite Rank Approximation

1. Introduction

Linear Periodic Delay Differential Equations (PDDEs) are important in the study of problems in vibrations, mechanics, astronomy, electric circuits and biology, among others. In [1] several examples of delay effects on mechanical systems are given, and in [2] and [3] effects of the delay in physical and biological processes are considered. Neglecting the fact that the interaction between particles does not occur instantaneously is sometimes no longer possible or practical; these finite-velocity interactions introduce new behaviors that significantly modify the behavior of the system, see for example [4] and [5]. In the study of these Delay Differential Equations many problems arise, mainly due to the infinite dimensional nature of the system. In the case of linear PDDEs the stability depends on the spectrum of the monodromy operator, which in the non-delayed case corresponds to the monodromy matrix.

Approximation methods for the monodromy operator have been proposed a number of times: [6] and [7] make use of pseudospectral collocation methods to approximate the monodromy operator, and the well known semidiscretization method [8] has also been used to determine the stability of a PDDE. In [9], using a Walsh approximation method from [10], a set of approximated solutions of a PDDE was used to construct an approximation of the monodromy operator by numerical integration.

In this work, the main contribution is an approximation of the monodromy operator of the PDDE by a linear Equation (35) of the form $x = U_k x_0$, where the directly obtained matrix $U_k$ corresponds to the approximated monodromy operator, with no need of approximating solutions or of numerical integration. Stability of the PDDE can then be determined from the spectrum of $U_k$ without solving any equation. Convergence of $U_k$ and of its spectrum is stated in Theorems 10, 14 and 15. The approximation is made by projecting the integral equation corresponding to the PDDE onto a finite dimensional subspace spanned by finitely many piecewise constant functions. The functions used must belong to a complete set of piecewise constant orthonormal functions with discontinuities only at dyadic rationals, such as Haar, Walsh or Block Pulse Functions (BPF). The theoretical framework of this paper is based on Walsh functions, since most results are stated for these functions. Once obtained, the finite dimensional approximation of the monodromy operator is stated in terms of BPF to reduce the computational cost of the stability analysis.

The main goal of this paper is to provide a computationally light method, with straightforward implementation, to approximate the monodromy operator of a delay differential equation, in order to facilitate the computation of stability diagrams used to study the behavior of the equation with respect to changes in its parameters.

2. Linear Periodic Delay Differential Equations

Consider the linear PDDE:

$$\dot{x}(t) = A(t)\,x(t) + B(t)\,x(t-\tau), \qquad (1)$$

where $x(t) \in \mathbb{R}^n$, $t \in \mathbb{R}$, $\tau \in \mathbb{R}^+$, $0 < \tau \le \omega$, and $A(t)$, $B(t)$ are $n \times n$ matrices of $\omega$-periodic functions continuous on $[0,1]$. Denote by $C$ the space of continuous functions from $[-\tau, 0]$ into $\mathbb{R}^n$; this space is a Banach space with the norm $\|\varphi\| = \max_{-\tau \le \theta \le 0} \|\varphi(\theta)\|$ [11].

A solution of (1) with initial condition $\varphi \in C$ at $t_0$ is understood as a mapping $\xi : [-\tau, 0] \times \mathbb{R} \times C \to \mathbb{R}^n$ such that $\xi(\theta, t_0, \varphi) = \varphi(\theta)$ for $-\tau \le \theta \le 0$, and $\xi(\theta, t, \varphi) = x(t + \theta)$ for $-\tau \le \theta \le 0$ and $t + \theta \ge t_0$, where $x(t)$ satisfies (1) for $t \ge t_0$ with $x(\theta) = \varphi(\theta)$ for $\theta \in [-\tau, 0]$. At each time $t$ a solution is an element of the space $C$. Explicitly we will have:

$$\xi\big(\theta, t, \xi(\theta, t_0, \varphi)\big) = T(t_0, t_0 + t)\,\xi(\theta, t_0, \varphi), \qquad (2)$$

with $\theta \in [-\tau, 0]$. The operator $T$ is called the solution mapping and is analogous to the state transition matrix of the undelayed case.
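As an illustration (not from the original text), the solution mapping over one period can be realized numerically by the method of steps: integrate (1) forward while reading the delayed term from the stored history, and return the solution segment on the last delay interval as the new state. A minimal sketch in Python for the scalar case, assuming a fixed-step forward Euler integrator (the function name, step size and example coefficients are illustrative assumptions):

    import numpy as np

    def advance_one_period(A, B, phi, tau, omega, h=1e-3):
        """Approximate T(0, omega) for x'(t) = A(t) x(t) + B(t) x(t - tau):
        advance one period by forward Euler, reading x(t - tau) from history."""
        n_hist = int(round(tau / h))        # samples covering one delay interval
        n_step = int(round(omega / h))      # steps covering one period
        x = np.zeros(n_hist + n_step + 1)   # trajectory on [-tau, omega]
        for i in range(n_hist + 1):
            x[i] = phi(-tau + i * h)        # initial history segment
        for j in range(n_step):
            t = j * h
            cur = n_hist + j
            x[cur + 1] = x[cur] + h * (A(t) * x[cur] + B(t) * x[cur - n_hist])
        return x[-(n_hist + 1):]            # new state: segment on [omega - tau, omega]

    # usage with illustrative scalar coefficients
    A = lambda t: -0.1
    B = lambda t: 0.2 * np.cos(t)
    segment = advance_one_period(A, B, lambda th: 1.0, tau=np.pi / 4, omega=2 * np.pi)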

2.1. Monodromy Operator

Taking into account the periodic nature of (1), it is relevant to consider (2) when $t = t_0 + \omega$; in this case we denote the solution mapping as $U(t_0) \triangleq T(t_0, t_0 + \omega)$.

Next, some properties that are consequences of $U(\cdot)$ being completely continuous are listed [12]:

• The spectrum $\sigma(U(t_0))$ is a countable compact set in the complex plane.

• $0 \in \sigma(U(t_0))$, and if $\lambda \in \sigma(U(t_0))$, $\lambda \neq 0$, then $U(t_0)\varphi = \lambda\varphi$ has a nonzero solution $\varphi \in C$.

• If the cardinality of the set $\sigma(U(t_0))$ is infinite, then the only limit point of $\sigma(U(t_0))$ is 0.

• The cardinality of $\sigma(U(t_0))$ outside any disc of radius $r$, that is, in $R_r = \{ z \in \mathbb{C} : |z| \ge r \}$, is finite.

• $\sigma(U(t_0)) = \sigma(U(t_1))$ for all $t_0, t_1 \in \mathbb{R}$.

$U(0)$, henceforth denoted simply as $U$, will be called the monodromy operator of (1), and the elements $\lambda \in \sigma(U)$, $\lambda \neq 0$, will be called characteristic multipliers.

2.2. Stability

It was shown in [12] that Floquet theory is valid for PDDEs in a certain sense, and that the stability of (1) is related to the spectrum of $U$. The following theorem from [13] establishes the conditions for the stability of (1):

Theorem 1. If the characteristic multipliers are situated inside the unit circle $\{\lambda \in \mathbb{C} : |\lambda| < 1\}$, then the zero solution of the system is uniformly asymptotically stable. If the multipliers of the system are inside the closed unit circle, and if the multipliers situated on the unit circumference correspond to simple elementary divisors, then the zero solution is uniformly stable.

Remark 1. If $\tau > \omega$, all the above statements are valid for $U^r$, where $r\omega \ge \tau$ and $r \in \mathbb{N}$ [12].

Remark 2. Note that Theorem 1 extends the corresponding properties of the undelayed case [14] [15].

3. Walsh Functions

3.1. Definition

Walsh functions are a complete set of piecewise constant orthonormal functions introduced in [16] and defined on the interval $[0,1)$, although they are easily translated to any other interval. A formal definition of the Walsh functions may be given in many ways [17]; using Rademacher functions, the $l$-th Walsh function may be defined as [17]:

$$w_l(t) = \prod_{i=0}^{k} \left[ \operatorname{sign}\left( \sin 2^{i+1} \pi t \right) \right]^{\varepsilon_i}, \qquad (3)$$

where the $\varepsilon_i$ are the coefficients of the unique binary expansion of $l$, namely

$$l = \sum_{i=0}^{k} \varepsilon_i 2^i, \quad \varepsilon_i \in \{0,1\}. \qquad (4)$$

There are likewise many ways of ordering the Walsh functions; in this paper we use the so-called dyadic ordering [18]. Figure 1 shows the first eight Walsh functions in this ordering.

To a function $f \in L^2[0,1]$ corresponds the Walsh approximation:

$$f(t) \sim \sum_{i=0}^{n} h_i w_i(t), \qquad (5)$$

where $h_i = \int_0^1 f(t) w_i(t)\,dt$. The right side of (5) converges in norm to $f$ for any $f \in L^2[0,1]$ and, if $n = 2^k$, it converges uniformly for any function that is continuous except, at most, for jump discontinuities at dyadic rationals (numbers of the form $a/2^b$, $a, b \in \mathbb{N}$) [16]. When $n = \infty$, (5) will be called a Walsh expansion.
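For concreteness (this illustration is not part of the original paper), the Walsh functions in dyadic ordering can be generated directly from (3)-(4), and the coefficients in (5) approximated by midpoint quadrature on the dyadic subintervals; the sampling scheme is an assumption made for illustration:

    import numpy as np

    def walsh(l, t):
        """l-th Walsh function in dyadic (Paley) ordering, per (3)-(4)."""
        w = np.ones_like(t)
        i = 0
        while l >> i:
            if (l >> i) & 1:                 # epsilon_i = 1 in the binary expansion of l
                w = w * np.sign(np.sin(2.0 ** (i + 1) * np.pi * t))
            i += 1
        return w

    def walsh_coefficients(f, k):
        """Approximate h_i = integral of f(t) w_i(t) over [0,1], i = 0..2^k - 1."""
        m = 2 ** k
        t = (np.arange(m) + 0.5) / m         # midpoints of the dyadic subintervals
        return np.array([np.mean(f(t) * walsh(i, t)) for i in range(m)])

    # usage: first 8 Walsh coefficients of f(t) = t
    h = walsh_coefficients(lambda t: t, k=3)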

3.2. Properties of Walsh Functions

We will define a Walsh matrix $W_{[k]}$ as the $2^k \times 2^k$ matrix whose rows are the values taken by the first $2^k$ Walsh functions on the $2^k$ dyadic subintervals of $[0,1)$.

For example, for the Walsh matrix of order 4 we have:

$$W_{[4]} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix}.$$

We can also define the Delayed Walsh Matrix $W_{[-m,k]}$ as a Walsh matrix of order $2^k$ shifted $m$ columns to the right, with the first $m$ columns being zeros. In the same manner we define a Forwarded Walsh Matrix $W_{[+m,k]}$, shifted $m$ columns to the left:

Figure 1. Walsh functions in dyadic ordering.

$$W_{[-2,4]} = \begin{bmatrix} 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 1 & -1 \end{bmatrix}; \qquad W_{[+2,4]} = \begin{bmatrix} 1 & 1 & 0 & 0 \\ -1 & -1 & 0 & 0 \\ 1 & -1 & 0 & 0 \\ -1 & 1 & 0 & 0 \end{bmatrix} \qquad (6)$$
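The Walsh matrix and its shifted versions can be built numerically as follows (reusing the walsh helper sketched above; the column-shifting convention follows the description given here and is my reading of the notation):

    import numpy as np

    def walsh_matrix(k):
        """2^k x 2^k Walsh matrix: rows are Walsh functions sampled on the
        2^k dyadic subintervals of [0,1)."""
        m = 2 ** k
        t = (np.arange(m) + 0.5) / m
        return np.array([walsh(l, t) for l in range(m)])

    def delayed_walsh(shift, k):
        """W_[-shift,k]: columns moved `shift` places right, first columns zero."""
        W = walsh_matrix(k)
        D = np.zeros_like(W)
        D[:, shift:] = W[:, :W.shape[1] - shift]
        return D

    def forwarded_walsh(shift, k):
        """W_[+shift,k]: columns moved `shift` places left, last columns zero."""
        W = walsh_matrix(k)
        F = np.zeros_like(W)
        F[:, :W.shape[1] - shift] = W[:, shift:]
        return F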

We define the vector consisting of the first $2^k$ Walsh functions as:

$$\bar{w}_k(t) = \begin{bmatrix} w_0(t) & w_1(t) & \cdots & w_{2^k-1}(t) \end{bmatrix}^T. \qquad (7)$$

Since the order of the approximation will be clear from context, we write simply $\bar{w}(t)$. We also define the dyadic sum between two nonnegative integers as:

$$q \oplus r = \sum_{i=0}^{k} |q_i - r_i|\, 2^i, \qquad (8)$$

where $q_i$ and $r_i$ are respectively the coefficients of the binary expansions of $q$ and $r$ as in (4). To a vector $\sigma = [\sigma_0\ \sigma_1\ \cdots\ \sigma_{2^k-1}]^T$ we associate a $2^k \times 2^k$ symmetric matrix $\Lambda(\sigma)$ whose $(i,j)$-th entry is given by $\sigma_{(i-1)\oplus(j-1)}$. If, for example, $\sigma = [\sigma_0\ \sigma_1\ \sigma_2\ \sigma_3]^T$, then:

$$\Lambda(\sigma) = \begin{bmatrix} \sigma_0 & \sigma_1 & \sigma_2 & \sigma_3 \\ \sigma_1 & \sigma_0 & \sigma_3 & \sigma_2 \\ \sigma_2 & \sigma_3 & \sigma_0 & \sigma_1 \\ \sigma_3 & \sigma_2 & \sigma_1 & \sigma_0 \end{bmatrix}.$$

We make use of the following Lemma:

Lemma 2. [19] Let $\sigma \in \mathbb{R}^{2^k}$; then:

$$\bar{w}(t)\,\bar{w}^T(t)\,\sigma = \Lambda(\sigma)\,\bar{w}(t). \qquad (9)$$
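Since the dyadic sum (8) of two nonnegative integers coincides with their bitwise exclusive-or, the matrix $\Lambda(\sigma)$ is straightforward to build, and identity (9) can be checked numerically on the dyadic grid. A small sketch, reusing the walsh helper above (illustrative only):

    import numpy as np

    def Lambda(sigma):
        """XOR-indexed symmetric matrix Lambda(sigma) built from (8)."""
        m = len(sigma)
        return np.array([[sigma[i ^ j] for j in range(m)] for i in range(m)])

    # numerical check of Lemma 2 / identity (9) at the dyadic midpoints
    k = 3
    m = 2 ** k
    t = (np.arange(m) + 0.5) / m
    Wbar = np.array([walsh(l, t) for l in range(m)])   # column j is w_bar(t_j)
    sigma = np.random.rand(m)
    for j in range(m):
        lhs = Wbar[:, j] * (Wbar[:, j] @ sigma)        # w_bar(t) w_bar(t)^T sigma
        rhs = Lambda(sigma) @ Wbar[:, j]
        assert np.allclose(lhs, rhs)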

The integral of a Walsh vector can be approximated by a Walsh expansion of the form:

$$\int_0^t \bar{w}(s)\,ds \approx P\,\bar{w}(t), \qquad (10)$$

where $P$ is given by [20]:

(11)

The approximated integral converges to the exact integral uniformly [21] .

To a function matrix $A(t)$ we associate the Walsh approximation $\hat{A}(t)$:

$$\hat{A}(t) = \alpha\,\Omega(t), \qquad (12)$$

where

$$\alpha = \begin{bmatrix} \bar{\alpha}_{11}^T & \bar{\alpha}_{12}^T & \cdots & \bar{\alpha}_{1n}^T \\ \bar{\alpha}_{21}^T & \bar{\alpha}_{22}^T & \cdots & \bar{\alpha}_{2n}^T \\ \vdots & \vdots & \ddots & \vdots \\ \bar{\alpha}_{n1}^T & \bar{\alpha}_{n2}^T & \cdots & \bar{\alpha}_{nn}^T \end{bmatrix}, \qquad \Omega(t) = \begin{bmatrix} \bar{w}(t) & 0 & \cdots & 0 \\ 0 & \bar{w}(t) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \bar{w}(t) \end{bmatrix}, \qquad (13)$$

and each $\bar{\alpha}_{ij}$ is the vector of Walsh coefficients of the approximation of the corresponding entry of $A(t)$.

Let $A(t)$ be an $n \times n$ function matrix with Walsh approximation $\hat{A}(t)$ and $f(t)$ a function vector with Walsh approximation $\hat{f}(t)$, whose elements are of the form $f_i(t) = h_i^T \bar{w}(t)$, $i = 1, \dots, n$, where $h_i \in \mathbb{R}^{2^k}$ is the vector of Walsh coefficients of the approximation of each element of $f$. Define $H = [h_1^T\ \cdots\ h_n^T]^T$; then from (9) we have:

$$\hat{A}(t)\hat{f}(t) = \alpha\,\Omega(t) \begin{bmatrix} \bar{w}^T(t) h_1 \\ \bar{w}^T(t) h_2 \\ \vdots \\ \bar{w}^T(t) h_n \end{bmatrix} = \alpha \begin{bmatrix} \bar{w}(t)\bar{w}^T(t) h_1 \\ \bar{w}(t)\bar{w}^T(t) h_2 \\ \vdots \\ \bar{w}(t)\bar{w}^T(t) h_n \end{bmatrix} = \begin{bmatrix} \sum_{r=1}^{n} \left( h_r^T \bar{w}(t)\bar{w}^T(t)\,\bar{\alpha}_{1,r} \right)^T \\ \sum_{r=1}^{n} \left( h_r^T \bar{w}(t)\bar{w}^T(t)\,\bar{\alpha}_{2,r} \right)^T \\ \vdots \\ \sum_{r=1}^{n} \left( h_r^T \bar{w}(t)\bar{w}^T(t)\,\bar{\alpha}_{n,r} \right)^T \end{bmatrix} = \begin{bmatrix} \sum_{r=1}^{n} \bar{w}^T(t) \Lambda(\bar{\alpha}_{1,r}) h_r \\ \sum_{r=1}^{n} \bar{w}^T(t) \Lambda(\bar{\alpha}_{2,r}) h_r \\ \vdots \\ \sum_{r=1}^{n} \bar{w}^T(t) \Lambda(\bar{\alpha}_{n,r}) h_r \end{bmatrix} = \Omega^T(t)\,\bar{\Lambda}(\alpha)\,H, \qquad (14)$$

where:

$$\bar{\Lambda}(\alpha) = \begin{bmatrix} \Lambda(\bar{\alpha}_{11}) & \Lambda(\bar{\alpha}_{12}) & \cdots & \Lambda(\bar{\alpha}_{1n}) \\ \Lambda(\bar{\alpha}_{21}) & \Lambda(\bar{\alpha}_{22}) & \cdots & \Lambda(\bar{\alpha}_{2n}) \\ \vdots & \vdots & \ddots & \vdots \\ \Lambda(\bar{\alpha}_{n1}) & \Lambda(\bar{\alpha}_{n2}) & \cdots & \Lambda(\bar{\alpha}_{nn}) \end{bmatrix}. \qquad (15)$$

3.3. Approximation Error

Regarding the error of Walsh approximation we have:

Lemma 3. [17] If $f$ satisfies a Lipschitz condition, then:

$$E_k = \left\| \sum_{i=0}^{2^k - 1} c_i w_i(t) - f(t) \right\| \le \frac{C}{2^k}, \qquad (16)$$

for some constant $C$.

3.4. Block Pulse Functions

The set of Block Pulse Functions of order $p$ is defined on the interval $[0,1)$ as the set $\{\psi_0(t), \dots, \psi_{p-1}(t)\}$, where [22]:

$$\psi_i(t) = \begin{cases} 1, & t \in \left[ \dfrac{i}{p}, \dfrac{i+1}{p} \right) \\ 0, & \text{otherwise} \end{cases} \qquad i = 0, \dots, p-1. \qquad (17)$$

Block Pulse Functions, shown in Figure 2, are orthogonal, easily normalizable [23] and, when $p \to \infty$, they form a complete set [22]. We restrict ourselves to

Figure 2. Block pulse functions.

the case $p = 2^k$ for integer $k$. Walsh functions and Block Pulse Functions are related by a one-to-one correspondence, meaning that there exists a unique bijective linear transformation that maps the first $2^k$ Walsh functions onto the set of Block Pulse Functions of order $2^k$ [24]. The existence of this transformation and the completeness of the Block Pulse Functions ensure that the properties of Walsh functions are inherited by Block Pulse Functions. In particular we have the following:

Lemma 4. [24] Matrix $P$ in (10) is similar to the upper triangular matrix $P^* = W_{[k]}^{-1} P\, W_{[k]} = \frac{1}{2^k}\left( \frac{1}{2} I_{2^k} + Q + Q^2 + \cdots + Q^{2^k - 1} \right)$, where $Q$ is the nilpotent matrix with ones immediately above the diagonal and zeros everywhere else:

$$P^* = \frac{1}{2^k} \begin{bmatrix} \frac{1}{2} & 1 & \cdots & 1 \\ 0 & \frac{1}{2} & \cdots & 1 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{2} \end{bmatrix}. \qquad (18)$$

Lemma 5. [25] Let $\hat{f}(t) = \sigma^T \bar{w}(t)$ be the Walsh approximation of the function $f(t)$, with $\sigma \in \mathbb{R}^{2^k}$ being the vector of coefficients of the Walsh approximation. Then $\Lambda(\sigma)$ is similar to a diagonal matrix:

$$\Lambda^*(\sigma) = W_{[k]}^{-1} \Lambda(\sigma)\, W_{[k]} = \begin{bmatrix} \hat{f}\!\left(\frac{0}{2^k}\right) & 0 & \cdots & 0 \\ 0 & \hat{f}\!\left(\frac{1}{2^k}\right) & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \hat{f}\!\left(\frac{2^k - 1}{2^k}\right) \end{bmatrix}. \qquad (19)$$
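Lemma 5, together with the Walsh-Block Pulse correspondence, can be verified numerically: the values of $\hat{f}$ on the dyadic subintervals are $W_{[k]}^T \sigma$, and conjugating $\Lambda(\sigma)$ by the Walsh matrix yields the diagonal matrix of those values. A short check, reusing the walsh_matrix and Lambda helpers sketched above (illustrative only):

    import numpy as np

    k = 3
    m = 2 ** k
    W = walsh_matrix(k)                  # rows: Walsh functions on the dyadic intervals
    sigma = np.random.rand(m)            # Walsh coefficients of some f_hat

    interval_values = W.T @ sigma        # f_hat(i / 2^k): the Block Pulse (value) form
    Lam_star = np.linalg.inv(W) @ Lambda(sigma) @ W
    assert np.allclose(Lam_star, np.diag(interval_values))   # Lemma 5 / Eq. (19)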

4. Approximation of the Monodromy Operator

Without loss of generality we assume $\omega = 1$. We approximate the monodromy operator by projecting (1) onto a finite dimensional subspace of $L^2[0,1)$ formed by the span of $2^k$ piecewise constant orthonormal functions. We will take the orthonormal functions to be Walsh functions; the analysis carries over to the case of Block Pulse Functions or Haar functions by means of similarity transformations. We also restrict ourselves to the case of commensurable delays, that is, $m\tau = \omega$ for some $m \in \mathbb{N}$.

Integrating (1) from 0 to $t$ we have:

$$x(t) - x(0) = \int_0^t A(s)x(s)\,ds + \int_0^t B(s)x(s-\tau)\,ds, \qquad (20)$$

with $x(\theta) = \varphi(\theta)$ for $\theta \in [-\tau, 0]$. A solution of (20) corresponds to a solution of (1) [26]. Let $\pi_k$ denote the projection mapping that takes the Walsh expansion $f(t) = \sum_{l=0}^{\infty} c_l w_l(t)$ to the Walsh approximation $\hat{f}(t) = \sum_{l=0}^{2^k - 1} c_l w_l(t)$. Along with (20) we introduce its projection onto the space $M = \operatorname{span}\{w_0, w_1, \dots, w_{2^k-1}\}^n$:

$$\hat{x}(t) - x(0) = \pi_k \int_0^t \hat{A}(s)\hat{x}(s)\,ds + \pi_k \int_0^t \hat{B}(s)\hat{x}(s-\tau)\,ds, \qquad (21)$$

where $\hat{x}(t) \in M$ and $\varphi(0) \in M$ since it is a constant. $\hat{A}(t)$ and $\hat{B}(t)$ correspond to $\pi_k A(t)$ and $\pi_k B(t)$, that is, the approximations of $A(t)$ and $B(t)$, respectively, as in (12). $\hat{x}(\theta)$ is still not defined for $\theta \in [-\tau, 0)$; we cannot yet define the projection of the initial condition, since it is defined on a domain different from the domain of definition of the Walsh functions. For this we have:

Proposition 6. The value of the Walsh approximation of order $2^k$ on an interval $\left[\frac{i}{2^k}, \frac{i+1}{2^k}\right)$, $i = 0, \dots, 2^k - 1$, depends only on the values of the function on that same interval.

Proof. It follows immediately from Theorem 2.1.3 in [17].

Thus we define the projection $\pi_k \varphi(\theta) = \sum_{l=0}^{2^k - 1} c_l w_l(\theta + \omega) \triangleq \hat{\varphi}(\theta)$ as the Walsh approximation of order $2^k$ of an integrable function $\varphi^*(\theta)$ defined on $[-\omega, 0)$ that is equal to $\varphi(\theta)$ on $[-\tau, 0)$ and 0 everywhere else, and we set $\hat{x}(\theta) = \hat{\varphi}(\theta)$ for $\theta \in [-\tau, 0]$. Since the Walsh functions are not defined at $t = 1$, we simply set $\hat{\varphi}(0) = \hat{\varphi}\!\left(-\frac{1}{2^k}\right)$.

We split the second integral in (20) as:

$$\begin{aligned} \int_0^t B(s)x(s-\tau)\,ds &= \int_0^{\tau} B(s)x(s-\tau)\,ds + \int_{\tau}^{t} B(s)x(s-\tau)\,ds \\ &= \int_0^{\tau} B(s)\varphi(s-\tau)\,ds + \int_{\tau}^{t} B(s)x(s-\tau)\,ds \\ &= \int_0^{t} \big(1 - S(s-\tau)\big)B(s)\varphi(s-\tau)\,ds + \int_0^{t} S(s-\tau)B(s)x(s-\tau)\,ds, \end{aligned} \qquad (22)$$

where $S(t)$ is the unit step function.

We make use of the following Lemma:

Lemma 7. [17] Any function $f(t)$ constant on the intervals of the form $\left[\frac{i}{2^k}, \frac{i+1}{2^k}\right)$, $0 \le i \le 2^k - 1$, can be represented in the form:

$$f(t) = \sum_{l=0}^{2^k - 1} c_l w_l(t), \qquad (23)$$

i.e., the nonzero coefficients of the Walsh expansion of $f(t)$ have indices no greater than $2^k - 1$. Moreover, this representation is unique.

From the linearity of $\pi_k$ and from Lemma 7, we can split the second integral in (21) as in (22), this being consistent with the projection:

$$\pi_k \int_0^t \hat{B}(s)\hat{x}(s-\tau)\,ds = \pi_k \int_0^t \big(1 - S(s-\tau)\big)\hat{B}(s)\hat{\varphi}(s-\tau)\,ds + \pi_k \int_0^t S(s-\tau)\hat{B}(s)\hat{x}(s-\tau)\,ds. \qquad (24)$$

Since $\hat{x}(t) \in M$ we have $\hat{x}(t) = [h_1^T \bar{w}(t), \dots, h_n^T \bar{w}(t)]^T$; we also have $\hat{\varphi}(\theta) = [c_1^T \bar{w}(\theta+\omega), \dots, c_n^T \bar{w}(\theta+\omega)]^T$, with each $h_i$ and $c_i$ being a vector of Walsh coefficients. Defining $H = [h_1^T, \dots, h_n^T]^T$, $\Phi = [c_1^T, \dots, c_n^T]^T$, and recalling (7) and (12), we can write (21) as:

$$\Omega^T(t)H - \hat{x}(0) = \pi_k \int_0^t \alpha\,\Omega(s)\Omega^T(s)H\,ds + \pi_k \int_0^t \beta\,\Omega(s)S(s-\tau)\Omega^T(s-\tau)H\,ds + \pi_k \int_0^t \beta\,\Omega(s)\big(1 - S(s-\tau)\big)\Omega^T(s-\tau+\omega)\Phi\,ds, \qquad (25)$$

From Lemma 7 we have that both $S(t-\tau)\Omega^T(t-\tau)$ and $\big(1 - S(t-\tau)\big)\Omega^T(t-\tau+\omega)$ can be uniquely represented in terms of $\Omega(t)$:

$$S(t-\tau)\,\Omega^T(t-\tau) = \Omega^T(t)\,\bar{W}_D,$$

$$\big(1 - S(t-\tau)\big)\,\Omega^T(t-\tau+\omega) = \Omega^T(t)\,\bar{W}_F. \qquad (26)$$

$\bar{W}_F$ and $\bar{W}_D$ are $2^k n \times 2^k n$ block diagonal matrices whose diagonal blocks are given respectively by $W_F^T$ and $W_D^T$, which satisfy $W_F\,\bar{w}(t) = \big(1 - S(t-\tau)\big)\bar{w}(t-\tau+\omega)$ and $W_D\,\bar{w}(t) = S(t-\tau)\bar{w}(t-\tau)$. When approximating by Walsh functions, the matrix $W_D$ is given by $W_D = \frac{1}{2^k}\big(W_{[-m,k]}\,W_{[k]}\big)$ and is called the Walsh Shift Operator [10]. $W_F$ is given analogously as $W_F = \frac{1}{2^k}\big(W_{[+(1-m),k]}\,W_{[k]}\big)$. The term $\hat{x}(0)$ also has a unique representation:

$$\hat{x}(0) = \Omega^T(t)\,\bar{W}_C\,\Phi, \qquad (27)$$

since $\hat{x}(0) = \hat{\varphi}(0)$. $\bar{W}_C$ is again a $2^k n \times 2^k n$ block diagonal matrix whose diagonal blocks are given by $W_C^T$. In the case of Walsh functions, the matrix $W_C$ evaluates $\hat{\varphi}$ on the last dyadic subinterval (time $\frac{2^k-1}{2^k}$ in the shifted variable $\theta + \omega$) and assigns its value to the coefficient of the constant Walsh function $w_0(t)$; hence $W_C^T = \left[ \bar{w}\!\left(\frac{2^k-1}{2^k}\right), 0, \dots, 0 \right]^T$.

From (14), (27) and (26) we can write (25) as:

$$\Omega^T(t)H - \Omega^T(t)\bar{W}_C\Phi = \pi_k \int_0^t \Omega^T(s)\,ds\;\bar{\Lambda}(\alpha)H + \pi_k \int_0^t \Omega^T(s)\,ds\;\bar{\Lambda}(\beta)\bar{W}_F\Phi + \pi_k \int_0^t \Omega^T(s)\,ds\;\bar{\Lambda}(\beta)\bar{W}_D H. \qquad (28)$$

From (10) we have that:

$$\pi_k \int_0^t \Omega^T(s)\,ds = \Omega^T(t)\,\bar{P}, \qquad (29)$$

where $\bar{P}$ is a $2^k n \times 2^k n$ matrix given by:

$$\bar{P} = \begin{bmatrix} P^T & 0 & \cdots & 0 \\ 0 & P^T & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & P^T \end{bmatrix}, \qquad (30)$$

from which we arrive at:

$$\Omega^T(t)H - \Omega^T(t)\bar{W}_C\Phi = \Omega^T(t)\bar{P}\bar{\Lambda}(\alpha)H + \Omega^T(t)\bar{P}\bar{\Lambda}(\beta)\bar{W}_F\Phi + \Omega^T(t)\bar{P}\bar{\Lambda}(\beta)\bar{W}_D H. \qquad (31)$$

Since $\Omega(t)$ is nonsingular, we have:

$$H = \left[ I - \bar{P}\bar{\Lambda}(\alpha) - \bar{P}\bar{\Lambda}(\beta)\bar{W}_D \right]^{-1} \left[ \bar{W}_C + \bar{P}\bar{\Lambda}(\beta)\bar{W}_F \right] \Phi. \qquad (32)$$

From (32) we can define a mapping $T_k : M \to M$:

$$T_k = \left[ I - \bar{P}\bar{\Lambda}(\alpha) - \bar{P}\bar{\Lambda}(\beta)\bar{W}_D \right]^{-1} \left[ \bar{W}_C + \bar{P}\bar{\Lambda}(\beta)\bar{W}_F \right]. \qquad (33)$$

However, by construction, the domain of $T_k$ is in fact the subspace of $M$ consisting of all functions generated by linear combinations of the first $2^k$ Walsh functions that are equal to 0 for $t \in [0, \omega - \tau)$; we denote this subspace by $M'$. Likewise, we can extend the domain of definition of the solution map of (1) from the space $C[-\tau, 0]$ to the subspace of $L^2[-\omega, 0]$ consisting of all functions that are continuous on $[-\tau, 0]$ and equal to 0 for $t \in [-\omega, -\tau)$; this space, denoted $C'$, is isomorphic to the space $C[-\tau, 0]$ and its projection onto the space $M$ corresponds to $M'$. We can now say that $T_k$ is an approximation of the solution mapping $T$ of (1): we obtain an approximated solution $\hat{x}(t) = \Omega^T(t)H$ which satisfies (21), and thus we have an approximation of the solution map of (1).

In order to study the monodromy operator of (1) we must study the state at $t = \omega$; that is, we must know the solution of (1) on $t \in [\omega - \tau, \omega]$ corresponding to an initial condition. Since $M'$ is the projection onto the space $M$ of $C'$, and since $C'$ is isomorphic to $C[\omega - \tau, \omega]$, the approximation of the state at $t = \omega$ is given by the projection of the approximated solution onto $M'$. Hence, by Lemma 7 we will have:

$$H' = \bar{W}_P H, \qquad (34)$$

where $\bar{W}_P$ is a $2^k n \times 2^k n$ block diagonal matrix which projects the approximated solution $\Omega^T(t)H$ onto the subspace $M'$, with diagonal blocks $W_P^T$. In the case of Walsh functions we will have

$$W_P^T = \frac{1}{2^k}\, W_{S(t-(1-m))[k]}\, W_{[k]},$$

where $W_{S(t-(1-m))[k]}$ is a Walsh matrix of order $2^k$ in which the first columns, those corresponding to $t < \omega - \tau$, are replaced by zeros.

From (34) we obtain our approximated monodromy operator, given by:

$$U_k = \bar{W}_P \left[ I - \bar{P}\bar{\Lambda}(\alpha) - \bar{P}\bar{\Lambda}(\beta)\bar{W}_D \right]^{-1} \left[ \bar{W}_C + \bar{P}\bar{\Lambda}(\beta)\bar{W}_F \right]. \qquad (35)$$

4.1. Approximation by Block Pulse Functions

Equation (35) for the approximation $U_k$ is valid for any set of orthonormal functions that can be obtained by linear combinations of Walsh functions. For the numerical calculation of the approximation of the monodromy operator, Block Pulse Functions are the most advantageous set, since their simplicity results in a lower computational load. In particular, for the Block Pulse Function approximation we will have:

$$\bar{W}_P = \operatorname{diag}\{W_P^T, \dots, W_P^T\}, \qquad (36)$$

$$W_P^T = \begin{bmatrix} 0_{(2^k - m)\times(2^k - m)} & 0_{(2^k - m)\times m} \\ 0_{m\times(2^k - m)} & I_{m\times m} \end{bmatrix},$$

$$\bar{W}_C = \operatorname{diag}\{W_C^T, \dots, W_C^T\}, \qquad (37)$$

$$W_C^T = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & \cdots & 0 & 1 \\ \vdots & & & & \vdots \\ 0 & 0 & \cdots & 0 & 1 \end{bmatrix},$$

$$\bar{P} = \operatorname{diag}\{P^T, \dots, P^T\}, \qquad (38)$$

$$P = \frac{1}{2^k} \begin{bmatrix} \frac{1}{2} & 1 & \cdots & 1 \\ 0 & \frac{1}{2} & \cdots & 1 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{2} \end{bmatrix}.$$

If we define:

$$Q_- = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \qquad Q_+ = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \qquad (39)$$

we will have:

$$\bar{W}_D = \operatorname{diag}\{W_D^T, \dots, W_D^T\}, \qquad (40)$$

$$W_D^T = Q_-^{\,m},$$

and

$$\bar{W}_F = \operatorname{diag}\{W_F^T, \dots, W_F^T\}, \qquad (41)$$

$$W_F^T = Q_+^{\,2^k - m}.$$

Finally, if in (15) we have $A(t) = \begin{bmatrix} a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\ a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\ \vdots & & \ddots & \vdots \\ a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t) \end{bmatrix}$, then $\bar{\Lambda}(\alpha)$ is given as in (15) with:

$$\Lambda(\bar{\alpha}_{ij}) = \begin{bmatrix} \hat{a}_{ij}\!\left(\frac{0}{2^k}\right) & 0 & \cdots & 0 \\ 0 & \hat{a}_{ij}\!\left(\frac{1}{2^k}\right) & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \hat{a}_{ij}\!\left(\frac{2^k-1}{2^k}\right) \end{bmatrix}, \qquad (42)$$

where

$$\hat{a}_{ij}\!\left(\frac{q}{2^k}\right) = 2^k \int_{q/2^k}^{(q+1)/2^k} a_{ij}(t)\,dt, \qquad q = 0, 1, \dots, 2^k - 1, \qquad (43)$$

and the same holds for $\bar{\Lambda}(\beta)$ and $B(t)$.
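Putting this subsection together, the approximated monodromy operator (35) can be assembled directly from sparse, highly structured matrices. The sketch below builds both the coefficient map $T_k$ of (33) and $U_k$ of (35) with Block Pulse Functions, for $\omega = 1$ and $\tau = m/2^k$; it follows Equations (36)-(43) as reconstructed here, uses midpoint sampling in place of the exact averages (43), and should be read as an illustration rather than as the authors' reference implementation:

    import numpy as np

    def bpf_operators(A, B, k, m):
        """Build T_k of (33) and U_k of (35) with Block Pulse Functions of order
        2^k for x'(t) = A(t)x(t) + B(t)x(t - tau), with omega = 1, tau = m / 2^k."""
        N = 2 ** k
        n = A(0.0).shape[0]

        # sampled (midpoint) values of the entries, standing in for the averages (43)
        Av = np.array([A((q + 0.5) / N) for q in range(N)])   # shape (N, n, n)
        Bv = np.array([B((q + 0.5) / N) for q in range(N)])

        def Lambda_bar(vals):
            # block matrix of diagonal blocks, Eq. (15) with (42)
            return np.block([[np.diag(vals[:, i, j]) for j in range(n)]
                             for i in range(n)])

        blk = lambda M: np.kron(np.eye(n), M)     # n diagonal copies of an N x N block

        P = (np.triu(np.ones((N, N))) - 0.5 * np.eye(N)) / N       # Eq. (38)
        WD_T = np.linalg.matrix_power(np.eye(N, k=-1), m)          # Q_-^m, Eq. (40)
        WF_T = np.linalg.matrix_power(np.eye(N, k=1), N - m)       # Q_+^(2^k - m), Eq. (41)
        WP_T = np.zeros((N, N)); WP_T[N - m:, N - m:] = np.eye(m)  # Eq. (36)
        WC_T = np.zeros((N, N)); WC_T[:, -1] = 1.0                 # Eq. (37)

        Pbar = blk(P.T)
        La, Lb = Lambda_bar(Av), Lambda_bar(Bv)
        Tk = np.linalg.solve(np.eye(n * N) - Pbar @ La - Pbar @ Lb @ blk(WD_T),
                             blk(WC_T) + Pbar @ Lb @ blk(WF_T))    # Eq. (33)
        Uk = blk(WP_T) @ Tk                                        # Eq. (35)
        return Tk, Uk

    # usage: stability of an illustrative delayed, damped Mathieu-type system
    A = lambda t: np.array([[0.0, 1.0], [-(2.0 + np.cos(2 * np.pi * t)), -0.1]])
    B = lambda t: np.array([[0.0, 0.0], [0.3, 0.0]])
    Tk, Uk = bpf_operators(A, B, k=6, m=8)                 # tau = 8/64 = 1/8
    stable = np.max(np.abs(np.linalg.eigvals(Uk))) < 1.0   # Theorem 1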

5. Convergence of $U_k$

We denote by $C_\Delta$ the space of functions from $[0,1]$ to $\mathbb{R}^n$ whose limit from the left exists at every point of $(0,1]$, whose limit from the right exists at every point of $[0,1)$, and which are continuous at every point that is not a dyadic rational; that is, $C_\Delta$ is the space of functions on $[0,1]$ that are continuous except possibly for jump discontinuities at dyadic rationals.

Proposition 8. The space $C_\Delta$ is a Banach space with the sup norm.

Proof. Let $\{x_n\}$ be a Cauchy sequence in $C_\Delta$; then for every $\varepsilon > 0$ there exists $N$ such that $n, m \ge N$ implies:

$$\sup_{t \in [0,1]} \| x_n(t) - x_m(t) \| < \varepsilon.$$

Therefore, for every $t_0 \in [0,1]$ we have $\| x_n(t_0) - x_m(t_0) \| < \varepsilon$ for $n, m \ge N$. Thus $\{x_n(t_0)\}$ is a Cauchy sequence in $\mathbb{R}^n$ and therefore convergent; we denote its limit by $x^*(t_0)$. Then for every $\varepsilon$ there exists $N$ such that for $n \ge N$:

$$\| x_n(t) - x^*(t) \| \le \varepsilon, \quad \forall t \in [0,1],$$

that is, $\{x_n\}$ converges to $x^*$ uniformly.

Now let $t_0 \in [0,1)$; then for every $x_n$ the limit from the right at $t_0$ exists, that is, for every $x_n \in \{x_n\}$ there exists a unique $L_n^+$ such that for every $\varepsilon > 0$ there exists $\delta_n > 0$ such that:

$$0 < t - t_0 < \delta_n \implies \| x_n(t) - L_n^+ \| < \varepsilon. \qquad (44)$$

The sequence $\{L_n^+\}$ lies in $\mathbb{R}^n$, and is therefore convergent if and only if it is Cauchy. Assume by contradiction that $\{L_n^+\}$ is not Cauchy; then there exists $\varepsilon_1 > 0$ such that for every $N_1$ there exist $n_1, m_1 \ge N_1$ such that:

$$\| L_{m_1}^+ - L_{n_1}^+ \| \ge \varepsilon_1. \qquad (45)$$

Since $\{x_n\}$ is Cauchy, there exists $N_2$ such that $n, m \ge N_2$ implies:

$$\sup_{t \in [0,1]} \| x_n(t) - x_m(t) \| < \frac{\varepsilon_1}{3}.$$

Let $n_1, m_1 \ge N_2$ be such that (45) holds. Let $\delta_1$ be such that for $0 < t - t_0 < \delta_1$ we have:

$$\| x_{n_1}(t) - L_{n_1}^+ \| < \frac{\varepsilon_1}{3},$$

and similarly let $\delta_2$ be such that for $0 < t - t_0 < \delta_2$ we have:

$$\| x_{m_1}(t) - L_{m_1}^+ \| < \frac{\varepsilon_1}{3}.$$

Take $\delta = \min\{\delta_1, \delta_2\}$; then for $0 < t - t_0 < \delta$ we have:

$$\| x_{n_1}(t) - x_{m_1}(t) \| < \frac{\varepsilon_1}{3},$$

and writing $x_{n_1}(t) - x_{m_1}(t) = \big( x_{n_1}(t) - L_{n_1}^+ \big) - \big( x_{m_1}(t) - L_{m_1}^+ \big) - \big( L_{m_1}^+ - L_{n_1}^+ \big)$, the triangle inequality gives:

$$\| L_{m_1}^+ - L_{n_1}^+ \| - \| x_{n_1}(t) - L_{n_1}^+ \| - \| x_{m_1}(t) - L_{m_1}^+ \| \le \| x_{n_1}(t) - x_{m_1}(t) \| < \frac{\varepsilon_1}{3}.$$

But $\| L_{m_1}^+ - L_{n_1}^+ \| \ge \varepsilon_1$ and $\| x_{n_1}(t) - L_{n_1}^+ \| + \| x_{m_1}(t) - L_{m_1}^+ \| < \frac{2\varepsilon_1}{3}$, therefore:

$$\varepsilon_1 \le \| L_{m_1}^+ - L_{n_1}^+ \| < \frac{\varepsilon_1}{3} + \| x_{n_1}(t) - L_{n_1}^+ \| + \| x_{m_1}(t) - L_{m_1}^+ \| < \frac{\varepsilon_1}{3} + \frac{2\varepsilon_1}{3} = \varepsilon_1,$$

which is a contradiction; therefore the sequence $\{L_n^+\}$ converges to a limit $L^+$.

Now let $\varepsilon > 0$, and choose $N$ and $\delta$ such that for $0 < t - t_0 < \delta$:

$$\| x_N(t) - x^*(t) \| < \frac{\varepsilon}{3},$$

$$\| x_N(t) - L_N^+ \| < \frac{\varepsilon}{3},$$

$$\| L_N^+ - L^+ \| < \frac{\varepsilon}{3};$$

then:

$$\| x^*(t) - L^+ \| \le \| x_N(t) - x^*(t) \| + \| x_N(t) - L_N^+ \| + \| L_N^+ - L^+ \| < \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \frac{\varepsilon}{3} = \varepsilon.$$

Therefore $L^+$ is the right limit of $x^*$ at $t = t_0$, hence the right limits exist. Repeating the argument for the left limits, we conclude that the left limits of $x^*$ exist where required.

Now let $t_0 \in [0,1]$, $t_0$ not a dyadic rational; then each $x_n$ is continuous at $t_0$, and since the convergence is uniform, $x^* = \lim_{n\to\infty} x_n$ is continuous at $t_0$. Thus $x^* \in C_\Delta$, which concludes the proof.

We make use of the following:

Lemma 9. [20] Let the dyadic intervals be $\Delta_k(l) = \left[ \frac{l-1}{2^k}, \frac{l}{2^k} \right)$, $l \in \Gamma_k$, with $\Gamma_k = \{1, \dots, 2^k\}$, and let $\chi_{\Delta_k(l)}(t)$ be the characteristic function of the interval $\Delta_k(l)$; then:

$$\pi_k \int_0^t \chi_{\Delta_k(l)}(s)\,ds = \begin{cases} 0, & 0 \le t < \dfrac{l-1}{2^k} \\[4pt] \dfrac{1}{2^{k+1}}, & \dfrac{l-1}{2^k} \le t < \dfrac{l}{2^k} \\[4pt] \dfrac{1}{2^k}, & \dfrac{l}{2^k} \le t \le 1 \end{cases} \qquad (46)$$
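As a quick numerical sanity check (not from the paper), one can apply $\pi_k$ to the exact integral of a characteristic function by averaging over the dyadic subintervals, since the order-$2^k$ Walsh projection of a function coincides with its average on each dyadic subinterval, and compare with (46):

    import numpy as np

    k, l = 3, 5                        # illustrative order and interval index
    m = 2 ** k
    # exact integral of the characteristic function of Delta_k(l), finely sampled
    t = np.linspace(0, 1, m * 1000, endpoint=False)
    F = np.clip(t - (l - 1) / m, 0.0, 1.0 / m)
    # pi_k: average over each dyadic subinterval
    proj = F.reshape(m, -1).mean(axis=1)
    idx = np.arange(1, m + 1)
    expected = np.where(idx < l, 0.0, np.where(idx == l, 1 / 2 ** (k + 1), 1 / 2 ** k))
    assert np.allclose(proj, expected, atol=1e-3)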

Now we state:

Theorem 10. On the Banach space $C_\Delta$, the approximated monodromy operator $U_k$ converges in the strong operator sense to the monodromy operator $U$.

Proof. Let $\pi_k$ denote, as before, the projection mapping that takes the Walsh expansion $f(t) = \sum_{l=0}^{\infty} c_l w_l(t)$ to the Walsh approximation $\hat{f}(t) = \sum_{l=0}^{2^k - 1} c_l w_l(t)$. Without risk of confusion, we will denote by $U_k \varphi(\theta)$ the operator $U_k \pi_k \varphi(\theta)$ and by $T_k \varphi(\theta)$ the operator $T_k \pi_k \varphi(\theta)$.

Denote by $x(t) = T\varphi(\theta)$ the solution of (20) corresponding to an initial condition $\varphi(\theta)$ for $\theta \in [-\tau, 0]$. Likewise, denote by $\hat{x}(t) = T_k \hat{\varphi}(\theta)$ the solution of (21). Clearly $x(t) = U\varphi(\theta)$ and $\hat{x}(t) = U_k \varphi(\theta)$ for $t \in [\omega - \tau, \omega]$.

Let $\hat{A}(t) = \pi_k A(t)$, $\hat{B}(t) = \pi_k B(t)$ and $\hat{\varphi}(t) = \pi_k \varphi(t)$. Then:

$$\| \hat{x}(t) - x(t) \| \le \| \pi_k x(t) - x(t) \| + \| \hat{x}(t) - \pi_k x(t) \|. \qquad (47)$$

Since the solution $x(t)$ is continuous, the first term on the right converges to 0 as $k \to \infty$. We now observe that $\hat{x}(t)$ and $x(t)$ satisfy:

$$\hat{x}(t) = \varphi(0) + \pi_k \int_0^t \hat{A}(s)\hat{x}(s)\,ds + \pi_k \int_0^t S(s-\tau)\hat{B}(s)\hat{x}(s-\tau)\,ds + \pi_k \int_0^t \big(1 - S(s-\tau)\big)\hat{B}(s)\hat{\varphi}(s-\tau)\,ds \qquad (48)$$

and

$$x(t) = \varphi(0) + \int_0^t A(s)x(s)\,ds + \int_0^t S(s-\tau)B(s)x(s-\tau)\,ds + \int_0^t \big(1 - S(s-\tau)\big)B(s)\varphi(s-\tau)\,ds, \qquad (49)$$

respectively. Set $\mathcal{A}(t) = A(t) + S(t-\tau)B(t)$ and $\mathcal{B}(t) = \big(1 - S(t-\tau)\big)B(t)$, and in the same way set $\hat{\mathcal{A}}(t)$ and $\hat{\mathcal{B}}(t)$ as their respective approximations. Define $y_k(t) = \hat{x}(t) - \pi_k x(t)$. We have:

$$y_k(t) = \pi_k \int_0^t \hat{\mathcal{A}}(s)\hat{x}(s)\,ds - \pi_k \int_0^t \hat{\mathcal{A}}(s)\pi_k x(s)\,ds + \pi_k \int_0^t \hat{\mathcal{A}}(s)\pi_k x(s)\,ds - \pi_k \int_0^t \mathcal{A}(s)x(s)\,ds + \pi_k \left[ \int_0^t \hat{\mathcal{B}}(s)\hat{\varphi}(s-\tau)\,ds - \int_0^t \mathcal{B}(s)\varphi(s-\tau)\,ds \right], \qquad (50)$$

$$y_k(t) = \pi_k \int_0^t \hat{\mathcal{A}}(s)y_k(s)\,ds + \pi_k \int_0^t \left[ \hat{\mathcal{A}}(s)\pi_k x(s) - \mathcal{A}(s)x(s) \right] ds + \pi_k \int_0^t \left[ \hat{\mathcal{B}}(s)\hat{\varphi}(s-\tau) - \mathcal{B}(s)\varphi(s-\tau) \right] ds. \qquad (51)$$

The right-hand side of (51) is piecewise constant, so if we define the dyadic intervals $\Delta_k(l) = \left[ \frac{l-1}{2^k}, \frac{l}{2^k} \right)$ for $l \in \Gamma$, with $\Gamma = \{1, \dots, 2^k\}$, we can write:

$$\pi_k \int_0^t \hat{\mathcal{A}}(s)y_k(s)\,ds = \pi_k \int_0^t \sum_{q=1}^{2^k} \hat{\mathcal{A}}\!\left(\tfrac{q-1}{2^k}\right) y_k\!\left(\tfrac{q-1}{2^k}\right) \chi_{\Delta_k(q)}(s)\,ds, \qquad (52)$$

where $\chi_{\Delta_k(q)}(t)$ is the characteristic function of the interval $\Delta_k(q)$. From Lemma 9 we have, for $t \in \Delta_k(l)$:

$$\pi_k \int_0^t \hat{\mathcal{A}}(s)y_k(s)\,ds = \frac{1}{2^k} \sum_{q=1}^{l-1} \hat{\mathcal{A}}\!\left(\tfrac{q-1}{2^k}\right) y_k\!\left(\tfrac{q-1}{2^k}\right) + \frac{1}{2^{k+1}} \hat{\mathcal{A}}\!\left(\tfrac{l-1}{2^k}\right) y_k\!\left(\tfrac{l-1}{2^k}\right). \qquad (53)$$

Since $y_k(t)$ is constant on the dyadic intervals, $y_k(t) = y_k\!\left(\frac{l-1}{2^k}\right)$ for $t \in \Delta_k(l)$. Therefore:

$$\begin{aligned} \left\| y_k\!\left(\tfrac{l-1}{2^k}\right) \right\| \le{}& \frac{1}{2^k} \left\| \sum_{q=1}^{l-1} \hat{\mathcal{A}}\!\left(\tfrac{q-1}{2^k}\right) y_k\!\left(\tfrac{q-1}{2^k}\right) \right\| + \frac{1}{2^{k+1}} \left\| \hat{\mathcal{A}}\!\left(\tfrac{l-1}{2^k}\right) \right\| \left\| y_k\!\left(\tfrac{l-1}{2^k}\right) \right\| \\ &+ \max_{0 \le t \le 1} \left\| \pi_k \int_0^t \left[ \hat{\mathcal{B}}(s)\hat{\varphi}(s-\tau) - \mathcal{B}(s)\varphi(s-\tau) \right] ds \right\| + \max_{0 \le t \le 1} \left\| \pi_k \int_0^t \left[ \hat{\mathcal{A}}(s)\pi_k x(s) - \mathcal{A}(s)x(s) \right] ds \right\|. \end{aligned} \qquad (54)$$

Define $C_k^* = 1 - \frac{1}{2^{k+1}} \max_{q \in \Gamma} \left\| \hat{\mathcal{A}}\!\left(\tfrac{q-1}{2^k}\right) \right\|$ and assume $C_k^* > 0$. This restriction is not significant since $A(t)$ and $B(t)$ are bounded, therefore $\hat{\mathcal{A}}(t)$ and $\hat{\mathcal{B}}(t)$ are bounded, and $\frac{1}{2^{k+1}} \to 0$ as $k \to \infty$.

We have:

$$\left\| \sum_{q=1}^{l-1} \hat{\mathcal{A}}\!\left(\tfrac{q-1}{2^k}\right) y_k\!\left(\tfrac{q-1}{2^k}\right) \right\| \le \max_{q \in \Gamma} \left\| \hat{\mathcal{A}}\!\left(\tfrac{q-1}{2^k}\right) \right\| \sum_{q=1}^{l-1} \left\| y_k\!\left(\tfrac{q-1}{2^k}\right) \right\|. \qquad (55)$$

From all of the above, defining $D_k^* \triangleq \frac{1}{2^k} \frac{A_k^*}{C_k^*}$, with $A_k^* = \max_{q \in \Gamma} \left\| \hat{\mathcal{A}}\!\left(\tfrac{q-1}{2^k}\right) \right\|$, and denoting:

$$B_k^* = \frac{1}{C_k^*} \left[ \max_{0 \le t \le 1} \left\| \pi_k \int_0^t \left[ \hat{\mathcal{A}}(s)\pi_k x(s) - \mathcal{A}(s)x(s) \right] ds \right\| + \max_{0 \le t \le 1} \left\| \pi_k \int_0^t \left[ \hat{\mathcal{B}}(s)\hat{\varphi}(s-\tau) - \mathcal{B}(s)\varphi(s-\tau) \right] ds \right\| \right],$$

we arrive at:

$$\left\| y_k\!\left(\tfrac{l-1}{2^k}\right) \right\| \le B_k^* + D_k^* \sum_{q=1}^{l-1} \left\| y_k\!\left(\tfrac{q-1}{2^k}\right) \right\|. \qquad (56)$$

This gives:

$$\left\| y_k\!\left(\tfrac{l-1}{2^k}\right) \right\| \le B_k^* \left( 1 + D_k^* \right)^{l-1}. \qquad (57)$$

Indeed, for $l = 1$, $\left\| y_k(0) \right\| \le B_k^*$, and assuming $B_k^* + D_k^* \sum_{q=1}^{l-2} \left\| y_k\!\left(\tfrac{q-1}{2^k}\right) \right\| \le B_k^* \left(1 + D_k^*\right)^{l-2}$, we have:

$$\left\| y_k\!\left(\tfrac{l-1}{2^k}\right) \right\| \le B_k^* + D_k^* \sum_{q=1}^{l-2} \left\| y_k\!\left(\tfrac{q-1}{2^k}\right) \right\| + D_k^* \left\| y_k\!\left(\tfrac{l-2}{2^k}\right) \right\| \le B_k^* \left(1 + D_k^*\right)^{l-2} + D_k^* B_k^* \left(1 + D_k^*\right)^{l-2} \le B_k^* \left(1 + D_k^*\right)^{l-1}. \qquad (58)$$

For $t \in \Delta_k(l)$ we have $\frac{l-1}{2^k} \le t$, hence $l - 1 \le 2^k t$, and therefore:

$$\| y_k(t) \| \le B_k^* \left( 1 + D_k^* \right)^{2^k t}. \qquad (59)$$

Taking into account the definition of $D_k^*$, we have:

$$\left( 1 + D_k^* \right)^{2^k t} = \left( 1 + \frac{A_k^*}{2^k - \frac{A_k^*}{2}} \right)^{2^k t} = \left( 1 + \frac{A_k^*}{2^k - \frac{A_k^*}{2}} \right)^{\left(2^k - \frac{A_k^*}{2}\right) t} \left( 1 + \frac{A_k^*}{2^k - \frac{A_k^*}{2}} \right)^{\frac{A_k^*}{2} t}. \qquad (60)$$

Since $t \in [0,1]$ and we assumed $1 - \frac{1}{2^{k+1}} A_k^* > 0$, then:

$$\left( 1 + \frac{A_k^*}{2^k - \frac{A_k^*}{2}} \right)^{\left(2^k - \frac{A_k^*}{2}\right) t} \le \left( 1 + \frac{A_k^*}{2^k - \frac{A_k^*}{2}} + \frac{1}{2!} \left( \frac{A_k^*}{2^k - \frac{A_k^*}{2}} \right)^2 + \cdots \right)^{\left(2^k - \frac{A_k^*}{2}\right) t} = e^{A_k^* t} \le e^{A_k^*} \qquad (61)$$

and

$$\left( 1 + \frac{A_k^*}{2^k - \frac{A_k^*}{2}} \right)^{\frac{A_k^*}{2} t} = \left( 1 + D_k^* \right)^{\frac{A_k^*}{2} t} \le \left( 1 + D_k^* \right)^{\frac{A_k^*}{2}}. \qquad (62)$$

From (61), (62) and (59) we arrive at:

$$\| y_k(t) \| \le B_k^* \, e^{A_k^*} \left( 1 + D_k^* \right)^{\frac{A_k^*}{2}}, \quad \forall t \in [0,1]. \qquad (63)$$

The expression $e^{A_k^*} \left( 1 + D_k^* \right)^{A_k^*/2}$ is bounded; moreover $x \in C_\Delta$ and $\varphi(t - \omega) \in C_\Delta$, so $B_k^* \to 0$ as $k \to \infty$, and therefore $y_k \to 0$ as $k \to \infty$, that is:

$$\| T_k \varphi - T\varphi \| \xrightarrow[k \to \infty]{} 0, \quad \forall \varphi \in C_\Delta, \qquad (64)$$

from which it follows immediately that:

$$\| U_k \varphi - U\varphi \| \xrightarrow[k \to \infty]{} 0, \quad \forall \varphi \in C_\Delta, \qquad (65)$$

which concludes the proof.

Corollary 1. If $\varphi$ is Lipschitz, then the error of the approximation $U_k$ satisfies $\| U_k \varphi - U\varphi \| = O\!\left(\frac{1}{2^k}\right)$.

Proof. From (59), since $\varphi$ is Lipschitz and $x(t)$ is differentiable, the result follows immediately from Lemma 3.

We now prove that the approximated solution is indeed equal to the Walsh approximation of the exact solution. First we state:

Lemma 11. [21] Let $\sum_{l=0}^{\infty} c_l w_l(t)$ be the Walsh expansion of a function $f(t)$; if $\sum_{l=0}^{\infty} c_l w_l(t)$ converges to zero everywhere, then $c_i = 0$ for all $i \ge 0$.

Now we are ready to prove:

Theorem 12. Let $x(t) = T\varphi(\theta)$ be the solution of (20) corresponding to an initial condition $\varphi(\theta)$ for $\theta \in [-\tau, 0]$, and likewise let $\hat{x}(t) = T_k \hat{\varphi}(\theta)$ be the solution of (21); then $\hat{x}(t) = \pi_k x(t)$.

Proof. We have $\hat{x}(t) = \sum_{l=0}^{2^k - 1} c_l w_l(t)$ and $\pi_k x(t) = \sum_{l=0}^{2^k - 1} d_l w_l(t)$ for some coefficients $c_l$ and $d_l$; then:

$$\| \pi_k x(t) - \hat{x}(t) \| \le \| x(t) - \hat{x}(t) \| + \| \pi_k x(t) - x(t) \|. \qquad (66)$$

Since the solution $x(t)$ is continuous, and from (64), the two terms on the right converge to zero; therefore $\sum_{l=0}^{2^k - 1} (c_l - d_l) w_l(t)$ converges to zero uniformly, hence everywhere, and from Lemma 11 we have $c_l = d_l$ for $l = 0, \dots, 2^k - 1$, that is, $\hat{x}(t) = \pi_k x(t)$.

Corollary 2. $U_k \varphi(t) = \pi_k U\varphi(t)$.

Lemma 13. Let $X \subset C_\Delta$ be compact; then for every $\varepsilon > 0$ there exists $k$ such that:

$$\| \pi_k x - x \| < \varepsilon, \quad \forall x \in X. \qquad (67)$$

Proof. Let $\varepsilon > 0$. Since $X$ is compact, there exist finitely many open balls of radius $\frac{\varepsilon}{3}$ centered at finitely many points $x_1, \dots, x_N \in X$ that cover $X$; then for every $x \in X$, $\| x - x_i \| < \frac{\varepsilon}{3}$ for some $i \in \{1, \dots, N\}$. Let $k$ be such that for every $i \in \{1, \dots, N\}$, $\| \pi_k x_i - x_i \| < \frac{\varepsilon}{3}$. Let $x \in X$; then for some $i \in \{1, \dots, N\}$:

$$\| \pi_k x - x \| \le \| \pi_k x - \pi_k x_i \| + \| \pi_k x_i - x_i \| + \| x_i - x \| \le \| x - x_i \| + \| \pi_k x_i - x_i \| + \| x_i - x \| < \varepsilon,$$

where we used that $\pi_k$ does not increase the sup norm, since it averages over the dyadic subintervals.

Since there are finitely many $x_i$, the desired result follows immediately.

The solution of (1) is given by:

$$x(t) = X(t, 0)\varphi(0) + \int_{-\tau}^{0} X(t, s + \tau) B(s + \tau) \varphi(s)\,ds, \qquad (68)$$

where $X$ is a solution matrix such that $X(0,0) = I$ and $X(t, 0) = 0$ for $t < 0$ [13]. If $\varphi \in C_\Delta$, then the solution $x(t)$ is continuous and the solution matrix $X$ is the same as in $C$. Furthermore we will have a bounded operator:

$$x(t) = T(\varphi(\theta)) \triangleq T\varphi(\theta), \qquad (69)$$

and, as in $C$, $T$ maps arbitrary bounded sequences into equicontinuous sequences; then by the Arzelà-Ascoli Theorem, $T$ is compact in $C_\Delta$.

We now have:

Theorem 14. The approximated monodromy operator $U_k$ converges uniformly (in operator norm) to the monodromy operator $U$.

Proof. Let $\bar{B}$ denote the closed unit ball in $C_\Delta$. Since $T$ is compact, the closure of the image $T\bar{B}$ is compact, and by Lemma 13 we will have that for any $\varepsilon > 0$ there exists $k$ such that:

$$\| T_k - T \| = \sup_{\|x\|=1} \| \pi_k T x - T x \| < \varepsilon. \qquad (70)$$

The fact that $\| U_k - U \| \to 0$ as $k \to \infty$ follows immediately.

With this we can establish convergence of the spectrum:

Theorem 15. The spectrum of the approximated monodromy operator $U_k$ converges to the spectrum of the monodromy operator $U$. More precisely, for any open set $V \subset \mathbb{C}$ such that $\sigma(U) \subset V$, there exists $K$ such that $\sigma(U_k) \subset V$ for every $k \ge K$. Furthermore, let $\lambda_0 \in \sigma(U)$, $\lambda_0 \neq 0$, and let $\Gamma$ be a small circle centered at $\lambda_0$ such that every other eigenvalue of $U$ lies outside $\Gamma$; then, for $k$ sufficiently large, the sum of the multiplicities of the eigenvalues of $U_k$ within $\Gamma$ is equal to the multiplicity of $\lambda_0$.

Proof. Let $\Gamma$ be a small circle with center at $\lambda_0 \in \sigma(U)$ such that every other eigenvalue of $U$ lies outside this circle. Then the spectral projection associated with $\lambda_0$ is given by:

$$P = \frac{1}{2\pi i} \oint_{\Gamma} (\lambda I - U)^{-1}\,d\lambda. \qquad (71)$$

The spectral projection associated with the part of the spectrum of $U_k$ inside $\Gamma$ is given by:

$$P_k = \frac{1}{2\pi i} \oint_{\Gamma} (\lambda I - U_k)^{-1}\,d\lambda. \qquad (72)$$

Since $\| U - U_k \| \to 0$, $P_k$ converges uniformly to $P$; therefore there must exist $k$ such that part of the spectrum of $U_k$ lies inside $\Gamma$.

Since $P$ and $P_k$ are finite dimensional (finite rank) operators, and since $P_k$ converges uniformly to $P$, we will have, for sufficiently large $k$:

$$\dim(\operatorname{range} P) = \dim(\operatorname{range} P_k), \qquad (73)$$

that is, the algebraic multiplicity of $\lambda_0$ is the same as the algebraic multiplicity of the spectrum of $U_k$ contained in $\Gamma$.

6. Approximation of the Solution of a Delayed Mathieu Equation

If one considers a generalization of the Mathieu differential equation to a functional differential equation, different cases can be considered depending on where the parametric excitation is placed [9]. We consider the scalar equation:

$$\ddot{x} + (\alpha + \beta\cos t)\,x = (\gamma\cos t)\,x(t - \tau). \qquad (74)$$

The operator $T_k$ in (33) provides a natural way to approximate the solution of a periodic delay differential equation. Consider Equation (74) with $\alpha = 5.35$, $\beta = 2.5$, $\gamma = 0.5$, and $\tau = 128$. Figure 3 and Figure 4 show the comparison between the solution obtained by simulation and the approximation of the solution obtained from $T_k$, for $k = 7$ and $k = 10$ respectively, for $t \in [0, 2\pi]$.

Figure 5 and Figure 6 show the magnitude of the error of the approximation of the solution obtained with the approximated operator $T_k$, for $k = 7$ and $k = 10$, respectively.
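In outline, the comparison above can be reproduced by applying the coefficient map $T_k$ of (33) to the Block Pulse coefficients $\Phi$ of the zero-extended initial function and reading the approximate solution off the resulting coefficient vector $H$, whose entries are the values of $\hat{x}(t)$ on the dyadic subintervals. A sketch built on the hypothetical bpf_operators helper of Section 4.1; the first-order rewriting of (74), the normalization of time so that the period is 1, the one-subinterval delay and the constant initial history are all illustrative assumptions:

    import numpy as np

    # first-order form of (74) with time normalized so that the period 2*pi becomes 1
    alpha, beta, gamma = 5.35, 2.5, 0.5
    A = lambda t: np.array([[0.0, 2 * np.pi],
                            [-2 * np.pi * (alpha + beta * np.cos(2 * np.pi * t)), 0.0]])
    B = lambda t: np.array([[0.0, 0.0],
                            [2 * np.pi * gamma * np.cos(2 * np.pi * t), 0.0]])

    k, m = 7, 1                     # 2^k subintervals, delay of one subinterval
    N, n = 2 ** k, 2
    Tk, Uk = bpf_operators(A, B, k, m)

    # Block Pulse coefficients of the initial condition: zero on [-1, -tau),
    # constant equal to (1, 0) on [-tau, 0)
    Phi = np.zeros(n * N)
    Phi[N - m:N] = 1.0              # first component on the last m subintervals

    H = Tk @ Phi                    # coefficients of the approximate solution, Eq. (32)
    x_hat = H[:N]                   # first state on the dyadic grid of one period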

Figure 3. Solution of (74) for $t \in [0, 2\pi]$. The solid line corresponds to the simulated solution and the dashed line corresponds to the approximated solution for $k = 7$.

Figure 4. Solution of (74) for $t \in [0, 2\pi]$. The solid line corresponds to the simulated solution and the dashed line corresponds to the approximated solution for $k = 10$.

Figure 5. Magnitude of error of the approximated solution corresponding to k = 7 .

Figure 6. Magnitude of error of the approximate solution corresponding to k = 10 .

7. Stability Chart of the Delayed Mathieu Equation

We will use the approximated monodromy operator to determine the stability chart of a particular case of the delayed Mathieu equation:

$$\ddot{x}(t) + \big(\alpha + \beta\cos(t)\big)x(t) = \gamma\cos(t)\,x(t - \tau). \qquad (75)$$

Stability charts are used to determine the stability of periodic differential equations with respect to some parameters. Figure 7 shows the stability diagram of (75) in the $\alpha$-$\beta$ plane with $\tau = 32$ and $\gamma = 1.5$. For this equation the stability zones are disconnected and there is no symmetry with respect to the horizontal axis, contrary to the case of the undelayed Mathieu equation.

Now consider the equation:

$$\ddot{x}(t) + \big[\alpha + \beta\cos(2\pi t)\big]x(t) = \delta\,x(t - \tau). \qquad (76)$$

This equation has been studied in [8]. Figure 8 shows the stability diagram for the parametric plane $\alpha$-$\beta$ with $\delta = 0.15$ and $\tau = 16$.
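The charts in Figure 7 and Figure 8 are produced by sweeping the parameter plane and testing, for each pair $(\alpha, \beta)$, whether the spectral radius of $U_k$ is below one (Theorem 1). A minimal sweep for Equation (76) in first-order form, written against the hypothetical bpf_operators helper of Section 4.1 (grid resolution, parameter ranges and the delay $\tau = m/2^k$ are illustrative choices):

    import numpy as np

    def is_stable(alpha, beta, delta, k=6, m=4):
        """Spectral-radius test of Theorem 1 for Eq. (76) with tau = m / 2^k."""
        A = lambda t: np.array([[0.0, 1.0],
                                [-(alpha + beta * np.cos(2 * np.pi * t)), 0.0]])
        B = lambda t: np.array([[0.0, 0.0], [delta, 0.0]])
        _, Uk = bpf_operators(A, B, k, m)
        return np.max(np.abs(np.linalg.eigvals(Uk))) < 1.0

    # coarse stability chart on an illustrative alpha-beta grid
    alphas = np.linspace(-1.0, 5.0, 61)
    betas = np.linspace(-3.0, 3.0, 61)
    chart = np.array([[is_stable(a, b, delta=0.15) for a in alphas] for b in betas])
    # `chart` can be displayed (e.g. with matplotlib's imshow) to reproduce the
    # grey (stable) / white (unstable) regions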

8. Conclusion

The use of Walsh functions provides the finite dimensional approximation (35) of the monodromy operator. This approximation and the analysis leading to it are virtually the same for any piecewise constant orthonormal basis which can be formed by linear combinations of Walsh functions such as Block Pulse Functions. The use of Block Pulse Functions provides a computationally

Figure 7. Stability diagram in the $\alpha$-$\beta$ parameter plane of Equation (75). Asymptotically stable zones (grey) and unstable zones (white) are shown.

Figure 8. Stability diagram in the $\alpha$-$\beta$ parameter plane of Equation (76). Asymptotically stable zones (grey) and unstable zones (white) are shown.

inexpensive method that is useful when obtaining stability diagrams, due to the simplicity of the functions and the sparse structure of the matrices involved. The rate of decay of the error is maintained regardless of the orthonormal set used [27]. Implementation of the algorithm is straightforward: it is only necessary to compute the matrices (42) corresponding to the matrices $A(t)$ and $B(t)$ in (1) and to substitute the remaining matrices of Section 4.1. Furthermore, the approximated monodromy operator might be used to provide insight into the nature of the PDDE, especially for a second order equation if the solution space is confined to a two dimensional vector space [28], since a similar approximation with Block Pulse Functions has been used to analytically prove properties of periodic ordinary differential equations [29]. However, a downside of the proposed method lies in its rate of convergence, which for certain cases is slower than the convergence of Fourier methods, and is certainly slower than the rate of convergence of approximations with, for example, Chebyshev polynomials.

Fund

This work was supported by CONACyT, Mexico, CVU 418725.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Vazquez, E.A. and Collado, J. (2018) Finite Dimensional Approximation of the Monodromy Operator of a Periodic Delay Differential Equation with Piecewise Constant Orthonormal Func- tions. Applied Mathematics, 9, 1315-1337. https://doi.org/10.4236/am.2018.911086

References

1. Kolmanovskii, V. and Myshkis, A. (2013) Introduction to the Theory and Applications of Functional Differential Equations. Springer Science & Business Media, Vol. 463.

2. Cui, H.-Y., Han, Z.-J. and Xu, G.-Q. (2016) Stabilization for Schrödinger Equation with a Time Delay in the Boundary Input. Applicable Analysis, 95, 963-977. https://doi.org/10.1080/00036811.2015.1047830

3. Srividhya, J. and Gopinathan, M. (2006) A Simple Time Delay Model for Eukaryotic Cell Cycle. Journal of Theoretical Biology, 241, 617-627. https://doi.org/10.1016/j.jtbi.2005.12.020

4. Datko, R., Lagnese, J. and Polis, M. (1986) An Example on the Effect of Time Delays in Boundary Feedback Stabilization of Wave Equations. SIAM Journal on Control and Optimization, 24, 152-156. https://doi.org/10.1137/0324007

5. Datko, R. and You, Y. (1991) Some Second-Order Vibrating Systems Cannot Tolerate Small Time Delays in Their Damping. Journal of Optimization Theory and Applications, 70, 521-537. https://doi.org/10.1007/BF00941300

6. Gilsinn, D.E., Potra, F.A., et al. (2006) Integral Operators and Delay Differential Equations. Journal of Integral Equations and Applications, 18, 297-336. https://doi.org/10.1216/jiea/1181075393

7. Breda, D., Maset, S. and Vermiglio, R. (2014) Pseudospectral Methods for Stability Analysis of Delayed Dynamical Systems. International Journal of Dynamics and Control, 2, 143-153. https://doi.org/10.1007/s40435-013-0041-x

8. Insperger, T. and Stepan, G. (2002) Semi-Discretization Method for Delayed Systems. International Journal for Numerical Methods in Engineering, 55, 503-518. https://doi.org/10.1002/nme.505

9. Vazquez, E.A. and Collado, J. (2016) Monodromy Operator Approximation of Periodic Delay Differential Equations by Walsh Functions. 13th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), IEEE.

10. Chen, W.-L. and Shih, Y.-P. (1978) Shift Walsh Matrix and Delay-Differential Equations. IEEE Transactions on Automatic Control, 23, 1023-1028. https://doi.org/10.1109/TAC.1978.1101888

11. Kreyszig, E. (1978) Introductory Functional Analysis with Applications, Ser. Wiley Classics Library. John Wiley & Sons.

12. Stokes, A. (1962) A Floquet Theory for Functional Differential Equations. Proceedings of the National Academy of Sciences of the United States of America, 48, 1330. https://doi.org/10.1073/pnas.48.8.1330

13. Halanay, A. (1966) Differential Equations: Stability, Oscillations, Time Lags. Academic Press, Vol. 23.

14. Yakubovich, V.A. and Starzhinski, V.M. (1975) Linear Differential Equations with Periodic Coefficients. Vol. 1, John Wiley, New York.

15. Magnus, W. and Winkler, S. (2004) Hill's Equation, Ser. Dover Books on Mathematics Series. Dover Publications.

16. Walsh, J.L. (1923) A Closed Set of Normal Orthogonal Functions. American Journal of Mathematics, 45, 5-24. https://doi.org/10.2307/2387224

17. Golubov, B., Efimov, A. and Skvortsov, V. (2012) Walsh Series and Transforms: Theory and Applications. Springer Science & Business Media, Berlin, Vol. 64.

18. Ahmed, N. and Rao, K.R. (2012) Orthogonal Transforms for Digital Signal Processing. Springer Science & Business Media, Berlin.

19. Karanam, V., Frick, P. and Mohler, R. (1978) Bilinear System Identification by Walsh Functions. IEEE Transactions on Automatic Control, 23, 709-713. https://doi.org/10.1109/TAC.1978.1101806

20. Gát, G. and Toledo, R. (2015) Estimating the Error of the Numerical Solution of Linear Differential Equations with Constant Coefficients via Walsh Polynomials. Acta Mathematica Academiae Paedagogicae Nyíregyháziensis, 31, 309-330.

21. Fine, N.J. (1949) On the Walsh Functions. Transactions of the American Mathematical Society, 65, 372-414. https://doi.org/10.1090/S0002-9947-1949-0032833-2

22. Kwong, C. and Chen, C. (1981) The Convergence Properties of Block-Pulse Series. International Journal of Systems Science, 12, 745-751. https://doi.org/10.1080/00207728108963780

23. Rao, G. and Srinivasan, T. (1978) Remarks on "Author's Reply" to "Comments on Design of Piecewise Constant Gains for Optimal Control via Walsh Functions". IEEE Transactions on Automatic Control, 23, 762-763. https://doi.org/10.1109/TAC.1978.1101782

24. Cheng, C., Tsay, Y. and Wu, T. (1977) Walsh Operational Matrices for Fractional Calculus and Their Application to Distributed Systems. Journal of the Franklin Institute, 303, 267-284. https://doi.org/10.1016/0016-0032(77)90029-1

25. Gulamhusein, M. (1973) Simple Matrix-Theory Proof of the Discrete Dyadic Convolution Theorem. Electronics Letters, 9, 238-239. https://doi.org/10.1049/el:19730172

26. Hale, J.K. and Lunel, S.M.V. (2013) Introduction to Functional Differential Equations. Springer Science & Business Media, Berlin, Vol. 99.

27. Deb, A., Sarkar, G. and Sen, S.K. (1994) Block Pulse Functions, the Most Fundamental of All Piecewise Constant Basis Functions. International Journal of Systems Science, 25, 351-363. https://doi.org/10.1080/00207729408928964

28. Norkin, S. (1973) Introduction to the Theory and Application of Differential Equations with Deviating Arguments. Academic Press, Cambridge, Vol. 105. https://doi.org/10.1016/S0076-5392(08)62969-0

29. Franco, C. and Collado, J. (2017) A Novel Discriminant Approximation of Periodic Differential Equations.

