
Random Attractors of Stochastic Non-Autonomous Nonclassical Diffusion Equations with Linear Memory on a Bounded Domain

Vol.09 No.11 (2018), Article ID: 88875, 16 pages
10.4236/am.2018.911085

Ahmed Eshag Mohamed1,2*, Qiaozhen Ma1, Mohamed Y. A. Bakhet1,3

1College of Mathematics and Statistics, Northwest Normal University, Lanzhou, China

2Faculty of Pure and Applied Sciences, International University of Africa, Khartoum, Sudan

3Department of Mathematics, College of Education, Rumbek University of Science and Technology, Rumbek, South Sudan

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: October 27, 2018; Accepted: November 27, 2018; Published: November 30, 2018

ABSTRACT

In this article, we discuss the long-time dynamical behavior of stochastic non-autonomous nonclassical diffusion equations with linear memory and additive white noise in the weak topological space $H_0^1(\Omega)\times L_\mu^2(\mathbb{R}^+, H_0^1(\Omega))$. By a decomposition of the solution, we verify the asymptotic compactness of the solutions and then prove the existence of a D-random attractor, while the time-dependent forcing term $g\in L_b^2(\mathbb{R}; L^2(\Omega))$ only satisfies an integral condition.

Keywords:

Stochastic Nonclassical Diffusion Equation, Random Attractors, Asymptotic Compactness, Linear Memory

1. Introduction

In this article, we investigate the asymptotic behavior of solutions to the following stochastic nonclassical diffusion equations driven by additive noise and linear memory:

$$\begin{cases} u_t-\Delta u_t-\Delta u-\displaystyle\int_0^{\infty}k(s)\Delta u(t-s)\,ds+\lambda u+f(x,u)=g(x,t)+h\dot{W}, & x\in\Omega,\ t>0,\\ u(x,t)=0, & x\in\partial\Omega,\ t\ge 0,\\ u(x,\tau)=u_0(x,\tau), & x\in\Omega,\ \tau\le 0, \end{cases}\qquad(1.1)$$

where $\Omega$ is a bounded domain in $\mathbb{R}^n$ ($n\ge 3$), the initial datum $u_0\in H_0^1(\Omega)$, $u=u(x,t)$ is a real-valued function of $x\in\Omega$ and $t\in\mathbb{R}$, $h\in H_0^1(\Omega)\cap H^2(\Omega)$, $g\in L_b^2(\mathbb{R};L^2(\Omega))$, $\lambda>0$, and $\dot{W}(t)$ is the generalized time derivative of a two-sided real-valued Wiener process $W(t)$ defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$, where $\Omega=\{\omega\in C(\mathbb{R},\mathbb{R}):\omega(0)=0\}$, $\mathcal{F}$ is the $\sigma$-algebra of Borel sets induced by the compact-open topology of $\Omega$, and $\mathbb{P}$ is the corresponding Wiener measure on $\mathcal{F}$, for which the canonical Wiener process $W(t)$ satisfies that both $W(t)|_{t\ge 0}$ and $W(-t)|_{t\ge 0}$ are usual one-dimensional Brownian motions. We may identify $W(t)$ with $\omega(t)$, that is, $W(t)=W(t,\omega)=\omega(t)$ for all $t\in\mathbb{R}$.

To consider system (1.1), we assume that the memory kernel satisfies

$$k(s)\in C^2(\mathbb{R}^+),\qquad k(s)\ge 0,\qquad k'(s)\le 0,\qquad \forall s\in\mathbb{R}^+,\qquad(1.2)$$

and there exists a constant $\delta>0$ such that the function $\mu(s)=-k'(s)$ satisfies

$$\mu\in C^1(\mathbb{R}^+)\cap L^1(\mathbb{R}^+),\qquad \mu(s)\ge 0,\qquad \mu'(s)+\delta\mu(s)\le 0,\qquad \forall s\ge 0.\qquad(1.3)$$
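As a concrete illustration (not part of the original assumptions, but a standard kernel satisfying them), one may take the exponential kernel

$$k(s)=\frac{1}{\delta_0}e^{-\delta_0 s},\qquad \mu(s)=-k'(s)=e^{-\delta_0 s},\qquad s\in\mathbb{R}^+,\ \delta_0>0,$$

for which $k\in C^2(\mathbb{R}^+)$, $k\ge 0$, $k'\le 0$, $\mu\in C^1(\mathbb{R}^+)\cap L^1(\mathbb{R}^+)$, $\mu\ge 0$ and $\mu'(s)+\delta\mu(s)=(\delta-\delta_0)e^{-\delta_0 s}\le 0$ whenever $\delta\le\delta_0$.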

Suppose further that the nonlinearity has the decomposition $f(x,s)=f_1(x,s)+f_2(x,s)$ for $s\in\mathbb{R}$, where, for every fixed $x\in\Omega$, $f_1(x,\cdot)\in C(\mathbb{R},\mathbb{R})$ satisfies

$$f_1(x,s)s\ge \alpha_1|s|^p-\psi_1(x),\qquad \psi_1\in L^1(\Omega)\cap L^{\frac{2n}{n-2}}(\Omega),\qquad(1.4)$$

$$|f_1(x,s)|\le \beta_1|s|^{p-1}+\psi_2(x),\qquad \psi_2\in L^2(\Omega)\cap L^{q}(\Omega),\qquad(1.5)$$

and $f_2(x,\cdot)\in C(\mathbb{R},\mathbb{R})$ satisfies

$$f_2(x,s)s\ge \alpha_2|s|^p-\gamma,\qquad(1.6)$$

$$|f_2(x,s)|\le \beta_2|s|^{p-1}+\delta,\qquad(1.7)$$

where $\alpha_i,\beta_i$ $(i=1,2)$, $\gamma$, $\delta$ and $l$ are positive constants, and $q$ is the conjugate exponent of $p$. In addition, we assume that $2\le p\le \frac{2n}{n-2}$ for $n\ge 3$, and $p>2$ for $n=1,2$.
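As an illustration (this example is ours, not taken from the paper), conditions (1.4)-(1.7) are satisfied, for instance, by splitting an odd power nonlinearity evenly,

$$f_1(x,s)=f_2(x,s)=\tfrac12|s|^{p-2}s,$$

with $\alpha_1=\alpha_2=\beta_1=\beta_2=\tfrac12$, $\psi_1=\psi_2=0$ and arbitrary positive constants $\gamma,\delta$, since then $f_i(x,s)s=\tfrac12|s|^{p}$ and $|f_i(x,s)|=\tfrac12|s|^{p-1}$ for $i=1,2$.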

We assume that the time-dependent external forcing term $g(x,t)$ satisfies the condition

$$\int_{-\infty}^{t}e^{\sigma s}\|g(\cdot,s)\|^2\,ds<\infty,\qquad \forall t\in\mathbb{R},\qquad(1.8)$$

for some constant $\sigma>0$ to be specified later.

Equation (1.1) has its physical background in the mathematical description of viscoelastic materials. It is well known that viscoelastic materials exhibit natural damping, which is due to the special property of these materials to retain a memory of their past history. From the material point of view, the memory effect comes from the memory kernel $k(s)$, which decays to zero at an exponential rate. Many authors have constructed the mathematical model through concrete examples, see [1] - [7] . In [8] the authors considered the nonclassical diffusion equation with hereditary memory on a 3D bounded domain for a very general class of memory kernels $k$; setting the problem both in the classical past-history framework and in the more recent minimal-state one, the related solution semigroups are shown to possess finite-dimensional regular exponential attractors. Equation (1.1) is a special case of the nonclassical diffusion equation used in fluid mechanics, solid mechanics, and heat conduction theory (see [1] [4] [5] ). In [1] Aifantis discussed some basic mathematical results concerning certain new classes of equations, in particular results showing how their solutions can be expressed in terms of solutions of the heat equation, and also discussed diffusion in viscoelastic and plastic solids. In [4] Kuttler and Aifantis presented a class of diffusion models that arise in certain nonclassical physical situations and discussed existence and uniqueness of solutions of the resulting evolution equations.

The long-time behavior of Equation (1.1) without additive white noise and with $\mu=0$ has been considered by many researchers on bounded domains; see, e.g., [9] [10] [11] [12] [13] and the references therein. In [10] the authors proved the existence and regularity of time-dependent global attractors for a class of nonclassical reaction-diffusion equations when the forcing term $g(x)\in H^{-1}(\Omega)$ and the nonlinear function satisfies critical exponent growth. In [11] Sun and Yang proved the existence of a global attractor for the autonomous case provided that the nonlinearity is critical and $g(x)\in H^{-1}(\Omega)$. The authors of [12] obtained pullback attractors for nonclassical diffusion equations with variable delay on a bounded domain, where the nonlinearity has at most quadratic growth. As far as the long-time behavior of solutions of system (1.1) on unbounded domains is concerned, it has most recently been studied by the tail-estimate technique and some omega-limit compactness arguments; for more details, see [14] [15] [16] [17] [18] . In [14] Ma studied the existence of global attractors for nonclassical diffusion equations with arbitrary-order polynomial growth conditions. By a similar technique, Zhang in [16] obtained pullback attractors for the non-autonomous case in $H^1(\mathbb{R}^N)$, where the growth order of the nonlinearity is assumed to be controlled by the space dimension $N$, such that the Sobolev embedding $H^1\hookrightarrow L^{2p-2}$ is continuous. However, it is regrettable that some terms are missing in the proof of Lemma 3.4 in [16] . Anh et al. [17] established the existence of a pullback attractor in the space $H^1(\mathbb{R}^N)\cap L^p(\mathbb{R}^N)$, where the nonlinearity satisfies an arbitrary polynomial growth condition, but some additional assumptions on the primitive function of the nonlinearity were required. For the case $\mu\neq 0$ with additive noise on a bounded domain, Cheng [2] used a decomposition of the solution operator to study the stochastic nonclassical diffusion equation with fading memory. For the case $\mu=0$, Zhao [19] studied the dynamics of stochastic nonclassical diffusion equations on unbounded domains perturbed by an $\epsilon$-random term ("intensity of noise"). (For more details see [2] [19] [20] [21] [22] .)

To the best of our knowledge, Equation (1.1) with a time-dependent forcing term, considered on a bounded domain in the weak topological space, has not been treated before.

The article is organized as follows. In Section 2, we recall fundamental results on some basic function spaces and on the existence of random attractors. In Section 3, we first define a continuous random dynamical system and establish the existence and uniqueness of the solution; we then prove the existence of a closed random absorbing set, establish the asymptotic compactness of the random dynamical system, and finally prove the existence of a D-random attractor.

2. Preliminaries

In this section, we recall some basic concepts and results related to function spaces and the existence of random attractors of the RDSs. For a comprehensive exposition on this topic, there is a large volume of literature, see [2] [3] [19] [23] - [29] .

Let $A=-\Delta$ with domain $D(A)=H_0^1(\Omega)\cap H^2(\Omega)$, and consider the fractional power spaces $D(A^{\frac r2})$, $r\in\mathbb{R}$, with inner product $(\cdot,\cdot)_{D(A^{\frac r2})}$ and norm $\|\cdot\|_{D(A^{\frac r2})}$, respectively. For convenience, we write $H^r=D(A^{\frac r2})$ with norm $\|\cdot\|_{H^r}=\|\cdot\|_{D(A^{\frac r2})}$, and in particular $H^0=L^2(\Omega)$, $H^1=H_0^1(\Omega)$.
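In particular, for $r=1$ this notation agrees with the usual Sobolev setting: since $A=-\Delta$ with Dirichlet boundary conditions,

$$H^1=D(A^{\frac12})=H_0^1(\Omega),\qquad \|u\|_{H^1}=\|A^{\frac12}u\|=\|\nabla u\|,$$

which is the norm appearing in the energy estimates of Section 3.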

Similar to [3] , for the memory kernel $\mu(\cdot)$ we denote by $L_\mu^2(\mathbb{R}^+;H^r)$ the Hilbert space of functions $\phi:\mathbb{R}^+\to H^r$ endowed with the inner product and norm, respectively,

$$\langle\phi_1,\phi_2\rangle_{\mu,H^r}=\int_0^{\infty}\mu(s)\langle\phi_1(s),\phi_2(s)\rangle_{H^r}\,ds,\qquad \|\phi\|^2_{\mu,H^r}=\int_0^{\infty}\mu(s)\|\phi(s)\|^2_{H^r}\,ds.\qquad(2.1)$$

Define the space

$$H_\mu^1(\mathbb{R}^+;H^r)=\left\{\phi\ \middle|\ \phi(s),\ \partial_s\phi(s)\in L_\mu^2(\mathbb{R}^+;H^r)\right\}$$

with the inner product

$$\langle\phi_1,\phi_2\rangle_{H_\mu^1(\mathbb{R}^+;H^r)}=\int_0^{\infty}\mu(s)\langle\phi_1(s),\phi_2(s)\rangle_{H^r}\,ds+\int_0^{\infty}\mu(s)\langle\partial_s\phi_1(s),\partial_s\phi_2(s)\rangle_{H^r}\,ds,$$

and the norm

$$\|\phi\|^2_{H_\mu^1(\mathbb{R}^+;H^r)}=\|\phi\|^2_{L_\mu^2(\mathbb{R}^+;H^r)}+\|\partial_s\phi\|^2_{L_\mu^2(\mathbb{R}^+;H^r)}.$$

We also introduce the family of Hilbert spaces $M^r=H^r\times L_\mu^2(\mathbb{R}^+;H^r)$, endowed with the norm

$$\|z\|^2_{M^r}=\|(u,\upsilon)\|^2_{M^r}=\frac12\left(\|u\|^2_{H^r}+\|\upsilon\|^2_{\mu,H^r}\right).$$

In the rest of this article we write $\|\cdot\|^2_{\mu,H^r}:=\|\cdot\|^2_{r,\mu}$. See [2] [3] [23] for more details.

Let $\Omega=\{\omega\in C(\mathbb{R},\mathbb{R}):\omega(0)=0\}$, let $\mathcal{F}$ be the Borel $\sigma$-algebra on $\Omega$, and let $\mathbb{P}$ be the corresponding Wiener measure. Define

$$\theta_t\omega(\cdot)=\omega(\cdot+t)-\omega(t),\qquad \omega\in\Omega,\ t\in\mathbb{R}.$$

Then $\theta=(\theta_t)_{t\in\mathbb{R}}$ is a family of measurable maps, $\theta_0$ is the identity on $\Omega$, and $\theta_{t+s}=\theta_t\circ\theta_s$ for all $s,t\in\mathbb{R}$; that is, $(\Omega,\mathcal{F},\mathbb{P},(\theta_t)_{t\in\mathbb{R}})$ is a metric dynamical system.

Definition 2.1. $(\Omega,\mathcal{F},\mathbb{P},(\theta_t)_{t\in\mathbb{R}})$ is called a metric dynamical system if $\theta:\mathbb{R}\times\Omega\to\Omega$ is $(\mathcal{B}(\mathbb{R})\times\mathcal{F},\mathcal{F})$-measurable, $\theta_0$ is the identity on $\Omega$, $\theta_{t+s}=\theta_t\circ\theta_s$ for all $s,t\in\mathbb{R}$, and $\theta_t\mathbb{P}=\mathbb{P}$ for all $t\in\mathbb{R}$.

Definition 2.2. A continuous random dynamical system (RDS) on $X$ over a metric dynamical system $(\Omega,\mathcal{F},\mathbb{P},(\theta_t)_{t\in\mathbb{R}})$ is a mapping

$$\phi:\mathbb{R}^+\times\Omega\times X\to X,\qquad (t,\omega,x)\mapsto\phi(t,\omega,x),$$

which is $(\mathcal{B}(\mathbb{R}^+)\times\mathcal{F}\times\mathcal{B}(X),\mathcal{B}(X))$-measurable and satisfies, for $\mathbb{P}$-a.e. $\omega\in\Omega$,

1) $\phi(0,\omega,\cdot)$ is the identity on $X$;

2) $\phi(t+s,\omega,\cdot)=\phi(t,\theta_s\omega,\cdot)\circ\phi(s,\omega,\cdot)$ for all $t,s\in\mathbb{R}^+$;

3) $\phi(t,\omega,\cdot):X\to X$ is continuous for all $t\in\mathbb{R}^+$.

Definition 2.3. A random bounded set $B=\{B(\omega)\}_{\omega\in\Omega}$ of nonempty subsets of $X$ is called tempered with respect to $(\theta_t)_{t\in\mathbb{R}}$ if for $\mathbb{P}$-a.e. $\omega\in\Omega$ and all $\beta>0$,

$$\lim_{|t|\to\infty}e^{-\beta|t|}d(B(\theta_{-t}\omega))=0,$$

where $d(B)=\sup_{x\in B}\|x\|_X$.
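For example, every deterministic bounded set $B\subset X$ (that is, $B(\omega)\equiv B$) is tempered, since in this case

$$e^{-\beta|t|}d(B(\theta_{-t}\omega))=e^{-\beta|t|}d(B)\to 0\quad\text{as }|t|\to\infty\ \text{for every }\beta>0.$$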

Definition 2.4. Let $\mathcal{D}$ be the collection of all tempered random sets in $X$. A set $K=\{K(\omega):\omega\in\Omega\}\in\mathcal{D}$ is called a random absorbing set for the RDS $\phi$ in $\mathcal{D}$ if for every $B\in\mathcal{D}$ and $\mathbb{P}$-a.e. $\omega\in\Omega$, there exists $t_B(\omega)>0$ such that

$$\phi(t,\theta_{-t}\omega,B(\theta_{-t}\omega))\subseteq K(\omega)\qquad\text{for all }t\ge t_B(\omega).$$

Definition 2.5. Let $\mathcal{D}$ be the collection of all tempered random subsets of $X$. Then $\phi$ is said to be asymptotically compact in $X$ if for $\mathbb{P}$-a.e. $\omega\in\Omega$, the sequence $\{\phi(t_n,\theta_{-t_n}\omega,x_n)\}_{n=1}^{\infty}$ has a convergent subsequence in $X$ whenever $t_n\to\infty$ and $x_n\in B(\theta_{-t_n}\omega)$ with $\{B(\omega)\}_{\omega\in\Omega}\in\mathcal{D}$.

Definition 2.6. (See [30] , [31] , [32] ) Let $\mathcal{D}$ be the collection of all tempered random subsets of $X$ and $\{A(\omega)\}_{\omega\in\Omega}\in\mathcal{D}$. Then $\{A(\omega)\}_{\omega\in\Omega}$ is called a $\mathcal{D}$-random attractor for $\phi$ if the following conditions are satisfied for $\mathbb{P}$-a.e. $\omega\in\Omega$:

1) $A(\omega)$ is compact, and $\omega\mapsto d(x,A(\omega))$ is measurable for every $x\in X$;

2) $\{A(\omega)\}_{\omega\in\Omega}$ is invariant, that is, $\phi(t,\omega,A(\omega))=A(\theta_t\omega)$ for all $t\ge 0$;

3) $\{A(\omega)\}_{\omega\in\Omega}$ attracts every set in $\mathcal{D}$, that is, for every $B=\{B(\omega)\}_{\omega\in\Omega}\in\mathcal{D}$,

$$\lim_{t\to\infty}d\big(\phi(t,\theta_{-t}\omega,B(\theta_{-t}\omega)),A(\omega)\big)=0,$$

where $d$ is the Hausdorff semi-metric given by

$$d(Z,Y)=\sup_{z\in Z}\inf_{y\in Y}\|z-y\|_X$$

for any $Z\subseteq X$ and $Y\subseteq X$.

Theorem 2.1. Let $\phi$ be a continuous random dynamical system with state space $X$ over $(\Omega,\mathcal{F},\mathbb{P},(\theta_t)_{t\in\mathbb{R}})$. If there is a closed random absorbing set $B(\omega)$ of $\phi$ and $\phi$ is asymptotically compact in $X$, then $A(\cdot)$ is a random attractor of $\phi$, where

$$A(\omega)=\bigcap_{t\ge 0}\overline{\bigcup_{\tau\ge t}\phi(\tau,\theta_{-\tau}\omega,B(\theta_{-\tau}\omega))},\qquad \omega\in\Omega.$$

Moreover, { A ( ω ) } is the unique random attractor of ϕ .

As mentioned in [23] , we can define a new variable to reflect the memory kernel of (1.1)

$$\eta^t(x,s)=\int_0^{s}u(x,t-r)\,dr,\qquad s\ge 0.\qquad(2.2)$$

Hence,

$$\eta_t^t+\eta_s^t=u,\qquad s\ge 0,\qquad(2.3)$$

where $\eta_t^t=\partial_t\eta^t$ and $\eta_s^t=\partial_s\eta^t$.
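Indeed, (2.3) can be checked directly by differentiating (2.2) with respect to $t$ and $s$:

$$\partial_t\eta^t(x,s)=\int_0^{s}\partial_t u(x,t-r)\,dr=-\int_0^{s}\partial_r\big(u(x,t-r)\big)\,dr=u(x,t)-u(x,t-s),\qquad \partial_s\eta^t(x,s)=u(x,t-s),$$

so that $\eta_t^t+\eta_s^t=u(x,t)$.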

Therefore, we can rewrite (1.1) as follows.

$$\begin{cases} u_t-\Delta u_t-\Delta u-\displaystyle\int_0^{\infty}\mu(s)\Delta\eta^t(s)\,ds+\lambda u+f(x,u)=g(x,t)+h\dot{W},\\ \partial_t\eta^t(x,s)=u(x,t)-\partial_s\eta^t(x,s),\\ u(x,t)=0,\quad \eta^t(x,s)=0,\qquad (x,t)\in\partial\Omega\times\mathbb{R}^+,\ t\ge 0,\\ u(x,0)=u_0(x,0),\qquad x\in\Omega,\\ \eta^0(x,s)=\eta_0(x,s)=\displaystyle\int_0^{s}u_0(x,-r)\,dr,\qquad (x,s)\in\Omega\times\mathbb{R}^+,\end{cases}\qquad(2.4)$$

where the initial datum $u_0(x,s)$, $s\le 0$, satisfies that there exist two positive constants $C$ and $k$ such that

$$\int_{-\infty}^{0}e^{ks}\|u_0(s)\|^2\,ds\le C.\qquad(2.5)$$

Lemma 2.1. ( [3] [33] ) Assume that $\mu\in C^1(\mathbb{R}^+)\cap L^1(\mathbb{R}^+)$ is a nonnegative function such that, if there exists $s_0\in\mathbb{R}^+$ with $\mu(s_0)=0$, then $\mu(s)=0$ for all $s\ge s_0$. Moreover, let $B_0$, $B_1$ and $B_2$ be three Banach spaces with $B_0$ and $B_1$ reflexive and

$$B_0\hookrightarrow B_1\hookrightarrow B_2,$$

where the embedding $B_0\hookrightarrow B_1$ is compact. Let $K\subseteq L_\mu^2(\mathbb{R}^+;B_1)$ satisfy

1) $K$ is bounded in $L_\mu^2(\mathbb{R}^+;B_0)\cap H_\mu^1(\mathbb{R}^+;B_2)$;

2) $\sup_{\eta\in K}\|\eta(s)\|^2_{B_1}\le N$ for a.e. $s\in\mathbb{R}^+$ and some $N\ge 0$.

Then K is relatively compact in L μ 2 ( + ; B 1 ) .

3. The Random Attractor

In this section, we prove that the stochastic nonclassical diffusion problem (2.4) has a $\mathcal{D}$-random attractor. First, we convert system (2.4), which contains a random perturbation term and linear memory, into a deterministic system with a random parameter $\omega$. For this purpose, we introduce the Ornstein-Uhlenbeck process taking the form

$$Y(t)=Y(\theta_t\omega):=-\int_{-\infty}^{0}e^{s}(\theta_t\omega)(s)\,ds,\qquad t\in\mathbb{R},$$

where $\omega(t)=W(t)$ is the two-sided real-valued Wiener process defined in the introduction. Furthermore, $Y(t)$ satisfies the stochastic differential equation

$$dY+Y\,dt=dW(t),\qquad t\in\mathbb{R}.\qquad(3.1)$$
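A standard way to see (3.1), not spelled out in the paper, is to note that a pathwise integration by parts gives the representation

$$Y(\theta_t\omega)=-\int_{-\infty}^{0}e^{s}(\theta_t\omega)(s)\,ds=\int_{-\infty}^{t}e^{-(t-s)}\,dW(s),$$

which is the stationary Ornstein-Uhlenbeck process; the variation of constants formula then shows that it satisfies $dY+Y\,dt=dW(t)$.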

It is known that there exists a $\theta_t$-invariant set $\bar{\Omega}\subseteq\Omega$ of full $\mathbb{P}$-measure such that $t\mapsto Y(\theta_t\omega)$ is continuous for every $\omega\in\bar{\Omega}$, and the random variable $|Y(\omega)|$ is tempered; see, e.g., [2] . Now put $\Delta z(\theta_t\omega)=hY(\theta_t\omega)$, where $-\Delta$ is the Laplacian with domain $H_0^1(\Omega)\cap H^2(\Omega)$. It is easy to show that

$$d(\Delta z)=h\,dY=-\Delta z\,dt+h\,dW(t).\qquad(3.2)$$

Using the change of variable $\upsilon(t)=u(t)-z(\theta_t\omega)$, $\upsilon(t)$ satisfies the following equation (which depends on the random parameter $\omega$):

$$\begin{cases} \upsilon_t-\Delta\upsilon_t-\Delta\upsilon-\displaystyle\int_0^{\infty}\mu(s)\Delta\eta^t(s)\,ds+\lambda(\upsilon+z(\theta_t\omega))+f(x,\upsilon+z(\theta_t\omega))=g(x,t)+\Delta z(\theta_t\omega),\\ \partial_t\eta^t+\partial_s\eta^t=\upsilon+z(\theta_t\omega),\\ \upsilon(x,0)=:\upsilon_0(x)=u_0(x,0)-z(\omega),\qquad \eta^t(x,0)=0,\\ \eta^0(x,s)=\eta_0(x,s)=\displaystyle\int_0^{s}\upsilon_0(x,-r)\,dr,\\ \upsilon(x,t)=0,\quad \eta^t(x,s)=0,\qquad (x,t)\in\partial\Omega\times\mathbb{R}^+,\ t\ge 0.\end{cases}\qquad(3.3)$$

By the Galerkin method as in [34] , under assumptions (1.2)-(1.8), for $\mathbb{P}$-a.e. $\omega\in\Omega$ and for all $z_0=(\upsilon_0,\eta_0)\in M^1$, problem (3.3) has a unique solution $z=(\upsilon,\eta^t)\in M^1$ satisfying $z(\cdot,z_0,\omega)\in C([0,\infty);M^1)\cap L^{\infty}([0,\infty);M^1)$.

Throughout this article, we always write

$$u(t,\omega,u_0)=\upsilon(t,\omega,\upsilon_0)+z(\theta_t\omega).\qquad(3.4)$$

If $u$ is the solution of problem (1.1) in some sense, we can define a continuous random dynamical system

$$\Upsilon(t,\omega,u_0)=u(t,\omega,u_0)=\upsilon(t,\omega,\upsilon_0)+z(\theta_t\omega).\qquad(3.5)$$

In order to prove the asymptotic compactness and the existence of the random attractor, we give the following results.

Lemma 3.1. ( [23] ) Set $I=[0,T]$, $T>0$. Let the memory kernel $\mu(s)$ satisfy (1.3). Then for any $\eta^t\in C(I;L_\mu^2(\mathbb{R}^+;H^r))$, $0<r<3$, there exists a constant $\delta>0$ such that

$$\langle\eta^t,\eta_s^t\rangle_{\mu,H^r}\ge\frac{\delta}{2}\|\eta^t\|^2_{\mu,H^r}.\qquad(3.6)$$

We first show that the random dynamical system ϒ has a closed random absorbing set in D , and then prove that ϒ is asymptotically compact.

Lemma 3.2. Assume that $h\in H_0^1(\Omega)\cap H^2(\Omega)$ and (1.2)-(1.8) hold. Let $B=\{B(\omega)\}_{\omega\in\Omega}\in\mathcal{D}$. Then for $\mathbb{P}$-a.e. $\omega\in\Omega$, there exist a positive random function $r_1(\omega)$ and a constant $T=T(B,\omega)>0$ such that, for all $t\ge T$ and every

$$z_0=(\upsilon_0(\theta_{-t}\omega),\eta_0(\theta_{-t}\omega))\in B(\theta_{-t}\omega),$$

the solution of (3.3) satisfies the uniform estimate

$$\|\upsilon(t,\theta_{-t}\omega,\upsilon_0(\theta_{-t}\omega))\|^2+\|\nabla\upsilon(t,\theta_{-t}\omega,\upsilon_0(\theta_{-t}\omega))\|^2+\|\eta^t(t,\theta_{-t}\omega,\eta_0(\theta_{-t}\omega))\|^2_{1,\mu}\le r_1(\omega).\qquad(3.7)$$

Proof. Taking the inner product of the first equation of (3.3) with $\upsilon$ in $L^2(\Omega)$, we have

$$\frac12\frac{d}{dt}\left(\|\upsilon\|^2+\|\nabla\upsilon\|^2\right)+\|\nabla\upsilon\|^2+\lambda\|\upsilon\|^2-\int_0^{\infty}\mu(s)(\Delta\eta^t(s),\upsilon)\,ds=-(f(x,\upsilon+z(\theta_t\omega)),\upsilon)+(g(x,t)+\Delta z(\theta_t\omega),\upsilon).\qquad(3.8)$$

From (2.2) and (2.3), we obtain

$$-\int_0^{\infty}\mu(s)(\Delta\eta^t(s),\upsilon)\,ds=-\int_0^{\infty}\mu(s)(\Delta\eta^t(s),\eta_t^t+\eta_s^t)\,ds+\int_0^{\infty}\mu(s)(\Delta\eta^t(s),z(\theta_t\omega))\,ds=\frac12\frac{d}{dt}\|\eta^t\|^2_{1,\mu}+\langle\eta^t,\eta_s^t\rangle_{1,\mu}-\int_0^{\infty}\mu(s)(\nabla\eta^t(s),\nabla z(\theta_t\omega))\,ds.\qquad(3.9)$$

Hence, we can rewrite (3.8) as follows:

$$\frac12\frac{d}{dt}\left(\|\upsilon\|^2+\|\nabla\upsilon\|^2+\|\eta^t\|^2_{1,\mu}\right)+\|\nabla\upsilon\|^2+\lambda\|\upsilon\|^2+\delta\|\eta^t\|^2_{1,\mu}-\int_0^{\infty}\mu(s)(\nabla\eta^t(s),\nabla z(\theta_t\omega))\,ds\le-(f(x,\upsilon+z(\theta_t\omega)),\upsilon)+(g(x,t)+\Delta z(\theta_t\omega),\upsilon).\qquad(3.10)$$

By the Young inequality and Lemma 3.1, we get

$$\int_0^{\infty}\mu(s)(\nabla\eta^t(s),\nabla z(\theta_t\omega))\,ds\le\frac{\delta}{2}\|\eta^t\|^2_{1,\mu}+\frac{1}{2\delta}\|\nabla z(\theta_t\omega)\|^2.\qquad(3.11)$$

For the first term on the right-hand side of (3.8), recall that $f=f_1+f_2$. First we estimate $f_1$. By (1.4)-(1.5), and using similar arguments to (4.2) in [35] , we have

$$-(f_1(x,\upsilon+z(\theta_t\omega)),\upsilon)=-(f_1(x,u),u-z(\theta_t\omega))=-(f_1(x,u),u)+(f_1(x,u),z(\theta_t\omega))\le-\frac{\alpha_1}{2}\|u\|_p^p+c\left(\|z(\theta_t\omega)\|_p^p+\|z(\theta_t\omega)\|^2\right)+c\left(\|\psi_1\|_{L^1}+\|\psi_2\|^2\right).\qquad(3.12)$$

By using (1.6)-(1.7), we arrive at

$$-(f_2(x,\upsilon+z(\theta_t\omega)),\upsilon)=-(f_2(x,u),u-z(\theta_t\omega))\le-\alpha_2\int_{\Omega}|u|^p\,dx+\gamma|\Omega|+\beta_2\int_{\Omega}|u|^{p-1}|z(\theta_t\omega)|\,dx+\delta\int_{\Omega}|z(\theta_t\omega)|\,dx.\qquad(3.13)$$

By the Young inequality, and using assumption (1.6), we see that

$$\beta_2\int_{\Omega}|u|^{p-1}|z(\theta_t\omega)|\,dx\le\frac{\alpha_2}{2}\int_{\Omega}|u|^p\,dx+c\int_{\Omega}|z(\theta_t\omega)|^p\,dx,\qquad(3.14)$$

$$\delta\int_{\Omega}|z(\theta_t\omega)|\,dx\le\int_{\Omega}|z(\theta_t\omega)|^2\,dx+\frac{\delta^2}{4}|\Omega|,\qquad(3.15)$$

where $c=c(\alpha_2,\beta_2,p)$. Then, it follows from (3.13)-(3.15) that

$$-(f_2(x,\upsilon+z(\theta_t\omega)),\upsilon)\le-\frac{\alpha_2}{2}\|u\|_p^p+c\left(\|z(\theta_t\omega)\|_p^p+\|z(\theta_t\omega)\|^2\right)+c.\qquad(3.16)$$
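For the reader's convenience (this elementary step is not written out in the paper), (3.14) and (3.15) are instances of the $\varepsilon$-Young inequality

$$ab\le\varepsilon a^{q}+\frac{1}{q'}(q\varepsilon)^{-\frac{q'}{q}}b^{q'},\qquad \frac1q+\frac1{q'}=1,\ \varepsilon>0,$$

applied pointwise with $a=|u|^{p-1}$, $b=\beta_2|z(\theta_t\omega)|$, $q=\frac{p}{p-1}$, $q'=p$, $\varepsilon=\frac{\alpha_2}{2}$ (which, after integration over $\Omega$, yields (3.14) with $c=c(\alpha_2,\beta_2,p)$), and with $q=q'=2$ for (3.15).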

On the other hand, we have

$$(g,\upsilon)\le\frac{\lambda}{2}\|\upsilon\|^2+\frac{1}{2\lambda}\|g(t)\|^2.\qquad(3.17)$$

For the last term of (3.8), we obtain

$$(\Delta z(\theta_t\omega),\upsilon)\le\frac12\|\nabla z(\theta_t\omega)\|^2+\frac12\|\nabla\upsilon\|^2.\qquad(3.18)$$

Then, substituting (3.11), (3.12) and (3.16)-(3.18) into (3.10), we conclude that

$$\frac12\frac{d}{dt}\left(\|\upsilon\|^2+\|\nabla\upsilon\|^2+\|\eta^t\|^2_{1,\mu}\right)+\frac12\|\nabla\upsilon\|^2+\frac{\lambda}{2}\|\upsilon\|^2+\delta\|\eta^t\|^2_{1,\mu}+\frac{\alpha_1}{2}\|u\|_p^p-c\left(\|z(\theta_t\omega)\|_p^p+\|z(\theta_t\omega)\|^2\right)-c\left(\|\psi_1\|_{L^1}+\|\psi_2\|^2\right)+\frac{\alpha_2}{2}\int_{\Omega}|u|^p\,dx-c\left(\|z(\theta_t\omega)\|_p^p+\|z(\theta_t\omega)\|^2\right)\le\frac{\delta}{2}\|\eta^t\|^2_{1,\mu}+\frac{1}{2\delta}\|\nabla z(\theta_t\omega)\|^2+\frac{1}{2\lambda}\|g(t)\|^2+\frac12\|\nabla z(\theta_t\omega)\|^2,$$

and therefore

$$\frac12\frac{d}{dt}\left(\|\upsilon\|^2+\|\nabla\upsilon\|^2+\|\eta^t\|^2_{1,\mu}\right)+\frac12\|\nabla\upsilon\|^2+\frac{\lambda}{2}\|\upsilon\|^2+\frac{\delta}{2}\|\eta^t\|^2_{1,\mu}+\frac12(\alpha_1+\alpha_2)\|u\|_p^p\le\frac{1}{2\lambda}\|g(t)\|^2+C\left(\|z(\theta_t\omega)\|_p^p+\|z(\theta_t\omega)\|^2+\|\nabla z(\theta_t\omega)\|^2\right)+C.\qquad(3.19)$$

Furthermore, let

$$2\sigma=\min\{1,\lambda,\delta\}.\qquad(3.20)$$

Then, from (3.19) and (3.20), it follows that

$$\frac{d}{dt}\left(\|\upsilon\|^2+\|\nabla\upsilon\|^2+\|\eta^t\|^2_{1,\mu}\right)+2\sigma\left(\|\upsilon\|^2+\|\nabla\upsilon\|^2+\|\eta^t\|^2_{1,\mu}\right)\le C\left(1+|Y(\theta_t\omega)|^2+|Y(\theta_t\omega)|^p\right)+\frac{1}{2\lambda}\|g(t)\|^2.\qquad(3.21)$$
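The form of Gronwall's Lemma used in the next step (recalled here for clarity) is: if $y'(t)+2\sigma y(t)\le F(t)$ for $t\ge 0$, then

$$y(t)\le e^{-2\sigma t}y(0)+\int_0^{t}e^{-2\sigma(t-s)}F(s)\,ds,$$

applied with $y(t)=\|\upsilon\|^2+\|\nabla\upsilon\|^2+\|\eta^t\|^2_{1,\mu}$ and $F(t)=C\left(1+|Y(\theta_t\omega)|^2+|Y(\theta_t\omega)|^p\right)+\frac{1}{2\lambda}\|g(t)\|^2$.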

According to Gronwall's Lemma, we obtain

$$\|\upsilon(t,\omega,\upsilon_0(\omega))\|^2+\|\nabla\upsilon(t,\omega,\upsilon_0(\omega))\|^2+\|\eta^t(t,\omega,\eta_0(\omega))\|^2_{1,\mu}\le e^{-2\sigma t}\left(\|\upsilon_0(\omega)\|^2+\|\nabla\upsilon_0(\omega)\|^2+\|\eta_0(\omega)\|^2_{1,\mu}\right)+C\int_0^{t}e^{-2\sigma(t-s)}\left(1+|Y(\theta_s\omega)|^2+|Y(\theta_s\omega)|^p\right)ds+\frac{1}{2\lambda}\int_0^{t}e^{-2\sigma(t-s)}\|g(s)\|^2\,ds.\qquad(3.22)$$

Replacing $\omega$ by $\theta_{-t}\omega$ in (3.22), we have

$$\|\upsilon(t,\theta_{-t}\omega,\upsilon_0(\theta_{-t}\omega))\|^2+\|\nabla\upsilon(t,\theta_{-t}\omega,\upsilon_0(\theta_{-t}\omega))\|^2+\|\eta^t(t,\theta_{-t}\omega,\eta_0(\theta_{-t}\omega))\|^2_{1,\mu}\le e^{-2\sigma t}\left(\|\upsilon_0(\theta_{-t}\omega)\|^2+\|\nabla\upsilon_0(\theta_{-t}\omega)\|^2+\|\eta_0(\theta_{-t}\omega)\|^2_{1,\mu}\right)+C\int_{-t}^{0}e^{2\sigma r}\left(1+|Y(\theta_r\omega)|^2+|Y(\theta_r\omega)|^p\right)dr+\frac{1}{2\lambda}\int_{-t}^{0}e^{2\sigma r}\|g(r)\|^2\,dr.\qquad(3.23)$$

Recalling that $B$ is tempered and $z_0=(\upsilon_0(\theta_{-t}\omega),\eta_0(\theta_{-t}\omega))\in B(\theta_{-t}\omega)$, we have

$$\lim_{t\to+\infty}e^{-2\sigma t}\left(\|\upsilon_0(\theta_{-t}\omega)\|^2+\|\nabla\upsilon_0(\theta_{-t}\omega)\|^2+\|\eta_0(\theta_{-t}\omega)\|^2_{1,\mu}\right)=0.\qquad(3.24)$$

Note that $|Y(\theta_s\omega)|$ is tempered and $z(\theta_t\omega)=\Delta^{-1}hY(\theta_t\omega)$ with $h(x)\in H_0^1(\Omega)\cap H^2(\Omega)$; hence we can choose

$$r_1(\omega)=2C\int_{-\infty}^{0}e^{2\sigma r}\left(1+|Y(\theta_r\omega)|^2+|Y(\theta_r\omega)|^p\right)dr+\frac{1}{2\lambda}\int_{-\infty}^{0}e^{2\sigma r}\|g(r)\|^2\,dr.\qquad(3.25)$$
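Note that, with (1.8) as stated above, the integral involving $g$ in (3.25) is indeed finite: since $e^{2\sigma r}\le e^{\sigma r}$ for $r\le 0$,

$$\int_{-\infty}^{0}e^{2\sigma r}\|g(r)\|^2\,dr\le\int_{-\infty}^{0}e^{\sigma r}\|g(\cdot,r)\|^2\,dr<\infty,$$

which is the role of the constant $\sigma$ "to be specified later" in (1.8).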

Then $r_1(\omega)$ is tempered, since $|Y(\theta_s\omega)|$ has at most linear growth rate at infinity. The proof is completed.

To prove the asymptotic compactness of the solution, we decompose the solution x ( t ) = ( u ( t ) , η t ) of (3.3) as follows [3] [23] :

x ( t ) = x 1 ( t ) + x 2 ( t ) , u ( t ) = u 1 ( t ) + u 2 ( t ) , η t = η 1 t + η 2 t ,

where x 1 ( t ) = ( u 1 ( t ) , η 1 t ) , x 2 ( t ) = ( u 2 ( t ) , η 2 t ) satisfy the following problems, respectively

$$\begin{cases} \partial_t u_1-\Delta\partial_t u_1-\Delta u_1-\displaystyle\int_0^{\infty}\mu(s)\Delta\eta_1^t(s)\,ds+\lambda u_1+f_1(x,u_1)=g(x,t)-g_1(x,t)+(h-h_1)\dot{W},\\ \partial_t\eta_1^t(x,s)=u_1(x,t)-\partial_s\eta_1^t(x,s),\\ u_1(x,t)=0,\quad \eta_1^t(x,s)=0,\qquad (x,t)\in\partial\Omega\times\mathbb{R}^+,\ t\ge 0,\\ u_1(x,0)=u_0(x,0)-z_1(\omega),\qquad x\in\Omega,\\ \eta_1^0(x,s)=\eta_0(x,s),\qquad (x,s)\in\Omega\times\mathbb{R}^+,\end{cases}\qquad(3.26)$$

and

$$\begin{cases} \partial_t u_2-\Delta\partial_t u_2-\Delta u_2-\displaystyle\int_0^{\infty}\mu(s)\Delta\eta_2^t(s)\,ds+\lambda u_2+f(x,u)-f_1(x,u_1)=g_1(x,t)+h_1\dot{W},\\ \partial_t\eta_2^t(x,s)=u_2(x,t)-\partial_s\eta_2^t(x,s),\\ u_2(x,t)=0,\quad \eta_2^t(x,s)=0,\qquad (x,t)\in\partial\Omega\times\mathbb{R}^+,\ t\ge 0,\\ u_2(x,0)=z_1(\omega),\qquad x\in\Omega,\\ \eta_2^0(x,s)=0,\qquad (x,s)\in\Omega\times\mathbb{R}^+,\end{cases}\qquad(3.27)$$

where the nonlinearity $f=f_1+f_2$ satisfies (1.4)-(1.7), the noise coefficients $h,h_1\in H_0^1(\Omega)\cap H^2(\Omega)$, and the forcing term $g_1(x,t)\in L_b^2(\mathbb{R};L^2(\Omega))$ satisfies a condition as in (1.8); moreover, for any $\epsilon>0$, $g_1$ and $h_1$ can be chosen such that

$$\|g-g_1\|<\epsilon,\qquad \|h-h_1\|_{H^2(\Omega)}<\epsilon.\qquad(3.28)$$

Set $\Delta z_1(\theta_t\omega)=h_1Y(\theta_t\omega)$; we find that

$$d(\Delta z_1)=h_1\,dY=-\Delta z_1\,dt+h_1\,dW.\qquad(3.29)$$

Let $\upsilon_1(t,\omega)=u_1(t,\omega)-z(\theta_t\omega)+z_1(\theta_t\omega)$, where $u_1(t,\omega)$ satisfies (3.26), and let $\upsilon_2(t,\omega)=u_2(t,\omega)-z_1(\theta_t\omega)$, where $u_2(t,\omega)$ is the solution of (3.27). Then $\upsilon_1(t,\omega)$ and $\upsilon_2(t,\omega)$ satisfy

$$\begin{cases} \partial_t\upsilon_1-\Delta\partial_t\upsilon_1-\Delta\upsilon_1-\displaystyle\int_0^{\infty}\mu(s)\Delta\eta_1^t(s)\,ds+\lambda(\upsilon_1+z(\theta_t\omega)-z_1(\theta_t\omega))+f_1(x,\upsilon_1+z(\theta_t\omega)-z_1(\theta_t\omega))=g(x,t)-g_1(x,t)+\Delta(z(\theta_t\omega)-z_1(\theta_t\omega)),\\ \partial_t\eta_1^t(x,s)=\upsilon_1+z(\theta_t\omega)-z_1(\theta_t\omega)-\partial_s\eta_1^t(x,s),\\ \upsilon_1(x,0)=:\upsilon_{10}(x)=u_0(x,0)-z(\omega)+z_1(\omega),\qquad \eta_1^t(x,0)=:\eta_{10}^t=0,\\ \eta_1^0(x,s)=\eta_{10}(x,s)=\displaystyle\int_0^{s}u_0(x,-r)\,dr,\\ \upsilon_1(x,t)=0,\quad \eta_1^t(x,s)=0,\qquad (x,t)\in\partial\Omega\times\mathbb{R}^+,\ t\ge 0,\end{cases}\qquad(3.30)$$

and

$$\begin{cases} \partial_t\upsilon_2-\Delta\partial_t\upsilon_2-\Delta\upsilon_2-\displaystyle\int_0^{\infty}\mu(s)\Delta\eta_2^t(s)\,ds+\lambda(\upsilon_2+z_1(\theta_t\omega))+f(x,\upsilon+z(\theta_t\omega))-f_1(x,\upsilon_1+z(\theta_t\omega)-z_1(\theta_t\omega))=g_1(x,t)+\Delta z_1(\theta_t\omega),\\ \partial_t\eta_2^t(x,s)=\upsilon_2+z_1(\theta_t\omega)-\partial_s\eta_2^t(x,s),\\ \upsilon_2(x,0)=:\upsilon_{20}(x)=z_1(\omega),\qquad \eta_2^t(x,0)=:\eta_{20}^t=0,\\ \eta_2^0(x,s)=\eta_{20}(x,s)=0,\\ \upsilon_2(x,t)=0,\quad \eta_2^t(x,s)=0,\qquad (x,t)\in\partial\Omega\times\mathbb{R}^+,\ t\ge 0.\end{cases}\qquad(3.31)$$

As for problem (3.3), we also have the corresponding existence and uniqueness of solutions for (3.30) and (3.31). For convenience, we denote the solution operators of (3.30) and (3.31) by $\{S_1(t)\}_{t\ge 0}$ and $\{S_2(t)\}_{t\ge 0}$, respectively. Then, for every $z_0\in M^1$, we get

$$z(t,\omega)=S(t)z_0=S_1(t)z_0+S_2(t)z_0,\qquad t\ge 0.$$

Next, we give some Lemmas to prove the asymptotic compactness.

Lemma 3.3. Assume that the conditions on $f,f_1,f_2,g,g_1$ hold. Let $B=\{B(\omega)\}_{\omega\in\Omega}\in\mathcal{D}$. Then for $\mathbb{P}$-a.e. $\omega\in\Omega$ and any $\epsilon>0$, there is a constant $T_2=T_2(B,\omega,\epsilon)>0$ such that, if

$$z_{10}=(\upsilon_{10}(\theta_{-t}\omega),\eta_{10}(\theta_{-t}\omega))\in B(\theta_{-t}\omega),$$

then for all $t\ge T_2$ the solution of (3.30) satisfies the uniform estimate

$$\|S_1(t)z_{10}(\omega)\|^2_{M^1}\le e^{-2\sigma t}\|z_{10}\|^2+\epsilon\, r_1(\omega),\qquad(3.32)$$

where the positive random function $r_1(\omega)$ is defined in Lemma 3.2.

where the positive random function r 1 ( ω ) is defined in Lemma 3.2.

Proof. In (3.10) we replace $f$, $g$ and $z(\theta_t\omega)$ by $f_1$, $g-g_1$ and $z(\theta_t\omega)-z_1(\theta_t\omega)$, respectively. Similar to the proof of Lemma 3.2, we compute

$$\|\upsilon_1(t,\theta_{-t}\omega,\upsilon_{10}(\theta_{-t}\omega))\|^2+\|\nabla\upsilon_1(t,\theta_{-t}\omega,\upsilon_{10}(\theta_{-t}\omega))\|^2+\|\eta_1^t(t,\theta_{-t}\omega,\eta_{10}(\theta_{-t}\omega))\|^2_{1,\mu}\le e^{-2\sigma t}\left(\|\upsilon_{10}(\theta_{-t}\omega)\|^2+\|\nabla\upsilon_{10}(\theta_{-t}\omega)\|^2+\|\eta_{10}(\theta_{-t}\omega)\|^2_{1,\mu}\right)+C\int_{-t}^{0}e^{2\sigma r}\left(1+|Y(\theta_r\omega)|^2+|Y(\theta_r\omega)|^p\right)dr+\frac{1}{2\lambda}\int_{-t}^{0}e^{2\sigma r}\|g(r)-g_1(r)\|^2\,dr.\qquad(3.33)$$

Since $z_{10}=(\upsilon_{10}(\theta_{-t}\omega),\eta_{10}(\theta_{-t}\omega))\in B(\theta_{-t}\omega)$ and $|Y(\theta_s\omega)|$ is tempered, recalling (3.28) we can choose $T_2>0$ such that (3.32) is satisfied for all $t\ge T_2$.

Lemma 3.4. Assume that the conditions on $f,f_1,f_2,g,g_1,h,h_1$ hold. Let $B=\{B(\omega)\}_{\omega\in\Omega}\in\mathcal{D}$. Then for $\mathbb{P}$-a.e. $\omega\in\Omega$ there is a positive random function $r_2(\omega)$ such that, for

$$z_{10}=(\upsilon_0(\theta_{-t}\omega),\eta_0(\theta_{-t}\omega))\in B(\theta_{-t}\omega)$$

and every given $T\ge 0$, the solution of (3.31) satisfies the uniform estimate

$$\|S_2(T,z_0(\omega))\|^2_{M^{1+l}}\le r_2(\omega),\qquad(3.34)$$

where $l=\min\left\{1,\frac{2n-p(n-2)}{2}\right\}$.

Proof. Multiplying (3.31) by $A^{l}\upsilon_2$ and integrating over $\Omega$, we get

$$\frac12\frac{d}{dt}\left(\|A^{\frac l2}\upsilon_2\|^2+\|A^{\frac{l+1}2}\upsilon_2\|^2\right)+\lambda\|A^{\frac l2}\upsilon_2\|^2+\|A^{\frac{l+1}2}\upsilon_2\|^2-\int_0^{\infty}\mu(s)(\Delta\eta_2^t(s),A^{l}\upsilon_2)\,ds+(f(x,\upsilon+z(\theta_t\omega)),A^{l}\upsilon_2)-(f_1(x,\upsilon_1+z(\theta_t\omega)-z_1(\theta_t\omega)),A^{l}\upsilon_2)=(g_1(x,t)+\Delta z_1(\theta_t\omega),A^{l}\upsilon_2).\qquad(3.35)$$

From (2.2) and (3.31), we obtain

$$-\int_0^{\infty}\mu(s)(\Delta\eta_2^t(s),A^{l}\upsilon_2)\,ds=-\int_0^{\infty}\mu(s)\big(\Delta\eta_2^t(s),A^{l}(\partial_t\eta_2^t(s)+\partial_s\eta_2^t(s)-z_1(\theta_t\omega))\big)\,ds=\frac12\frac{d}{dt}\|\eta_2^t\|^2_{1+l,\mu}+\langle\eta_2^t,\partial_s\eta_2^t\rangle_{1+l,\mu}-\int_0^{\infty}\mu(s)(\nabla\eta_2^t(s),A^{l}\nabla z_1(\theta_t\omega))\,ds,\qquad(3.37)$$

hence

$$\left|\int_0^{\infty}\mu(s)(\nabla\eta_2^t(s),A^{l}\nabla z_1(\theta_t\omega))\,ds\right|\le\varepsilon\|\eta_2^t\|^2_{1+l,\mu}+C\|A^{\frac{l+1}2}z_1(\theta_t\omega)\|^2,$$

and, by Lemma 3.1,

$$\langle\eta_2^t,\partial_s\eta_2^t\rangle_{1+l,\mu}\ge\frac{\delta}{2}\|\eta_2^t\|^2_{1+l,\mu}.$$

By the assumptions on $f,f_1,f_2$ and the mean value theorem, we have

$$(f(x,\upsilon+z(\theta_t\omega)),A^{l}\upsilon_2)-(f_1(x,\upsilon_1+z(\theta_t\omega)-z_1(\theta_t\omega)),A^{l}\upsilon_2)=(f_2(x,\upsilon_1+z(\theta_t\omega)-z_1(\theta_t\omega)),A^{l}\upsilon_2)\le\beta_2\int_{\Omega}|\upsilon_1+z(\theta_t\omega)-z_1(\theta_t\omega)|^{p-1}|A^{l}\upsilon_2|\,dx+\delta|\Omega|.\qquad(3.38)$$

Using the embedding theorem, we have

$$\beta_2\int_{\Omega}|\upsilon_1+z(\theta_t\omega)-z_1(\theta_t\omega)|^{p-1}|A^{l}\upsilon_2|\,dx+\delta|\Omega|\le C\|\upsilon_1+z(\theta_t\omega)-z_1(\theta_t\omega)\|^{p-1}_{L^{\frac{2n(p-1)}{n+2-2l}}}\|A^{l}\upsilon_2\|_{L^{\frac{2n}{n-2+2l}}}+\delta|\Omega|\le C\|\nabla(\upsilon_1+z(\theta_t\omega)-z_1(\theta_t\omega))\|^{p-1}\|A^{\frac{1+l}2}\upsilon_2\|+\delta|\Omega|,\qquad(3.39)$$

where we have used the inequality $\frac{(n-2)(p-1)}{n+2-2l}\le 1$, so that $\frac{2n(p-1)}{n+2-2l}\le\frac{2n}{n-2}$, together with the embeddings

$$H^1=D(A^{\frac12})\hookrightarrow L^{\frac{2n}{n-2}},\qquad H^{1+l}=D(A^{\frac{1+l}2})\hookrightarrow L^{\frac{2n}{n-2(1+l)}},\qquad H^{1-l}=D(A^{\frac{1-l}2})\hookrightarrow L^{\frac{2n}{n-2(1-l)}}.$$
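To make explicit where the restriction $l\le\frac{2n-p(n-2)}{2}$ in Lemma 3.4 comes from (a short calculation added for clarity), combine the Hölder exponents in (3.39) with the embeddings above: the bound $\|u\|_{L^{2n(p-1)/(n+2-2l)}}\le C\|\nabla u\|$ requires

$$\frac{2n(p-1)}{n+2-2l}\le\frac{2n}{n-2}\iff(n-2)(p-1)\le n+2-2l\iff l\le\frac{2n-p(n-2)}{2},$$

while $\|A^{l}\upsilon_2\|_{L^{2n/(n-2+2l)}}\le C\|A^{\frac{1+l}{2}}\upsilon_2\|$ follows from $A^{l}\upsilon_2\in D(A^{\frac{1-l}{2}})=H^{1-l}\hookrightarrow L^{\frac{2n}{n-2(1-l)}}$.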

Note that

$$(g_1(x,t)+\Delta z_1(\theta_t\omega),A^{l}\upsilon_2)\le C_{\varepsilon}\left(\|g_1(x,t)\|^2+\|\Delta z(\theta_t\omega)\|^2+\|\Delta z_1(\theta_t\omega)\|^2\right)+\varepsilon\|A^{\frac{1+l}2}\upsilon_2\|^2.\qquad(3.40)$$

Thanks to Lemma 3.1, the properties of the solutions of (3.3) and (3.26), and (3.36)-(3.40), we conclude that

$$\frac12\frac{d}{dt}\left(\|A^{\frac l2}\upsilon_2\|^2+\|A^{\frac{l+1}2}\upsilon_2\|^2+\|\eta_2^t\|^2_{1+l,\mu}\right)+\|A^{\frac{l+1}2}\upsilon_2\|^2+\lambda\|A^{\frac l2}\upsilon_2\|^2+\left(\frac{\delta}{2}-\varepsilon\right)\|\eta_2^t\|^2_{1+l,\mu}\le C\left(1+\|\Delta z(\theta_t\omega)\|^2+\|\Delta z_1(\theta_t\omega)\|^2+\|g_1(t)\|^2\right),$$

and hence

$$\frac{d}{dt}\left(\|A^{\frac l2}\upsilon_2\|^2+\|A^{\frac{l+1}2}\upsilon_2\|^2+\|\eta_2^t\|^2_{1+l,\mu}\right)+2\beta\left(\|A^{\frac l2}\upsilon_2\|^2+\|A^{\frac{l+1}2}\upsilon_2\|^2+\|\eta_2^t\|^2_{1+l,\mu}\right)\le C\left(1+\|\Delta z(\theta_t\omega)\|^2+\|\Delta z_1(\theta_t\omega)\|^2+\|g_1(t)\|^2\right),\qquad(3.42)$$

where $2\beta=\min\{2,2\lambda,\delta-2\varepsilon\}$. Applying Gronwall's Lemma, we obtain

$$\|z_2(t,\omega)\|^2_{M^{1+l}}\le\|A^{\frac l2}\upsilon_2\|^2+\|A^{\frac{l+1}2}\upsilon_2\|^2+\|\eta_2^t\|^2_{1+l,\mu}\le e^{-2\beta t}\left(\|A^{\frac l2}\upsilon_{20}(\omega)\|^2+\|A^{\frac{l+1}2}\upsilon_{20}(\omega)\|^2+\|\eta_{20}(\omega)\|^2_{1+l,\mu}\right)+C\int_0^{t}e^{-2\beta(t-s)}\left(1+|Y(\theta_s\omega)|^2\right)ds+C\int_0^{t}e^{-2\beta(t-s)}\|g_1(s)\|^2\,ds\le C\int_0^{t}e^{-2\beta(t-s)}\left(1+|Y(\theta_s\omega)|^2\right)ds+C\int_0^{t}e^{-2\beta(t-s)}\|g_1(s)\|^2\,ds.\qquad(3.43)$$

Thus, for every given $T>0$, we get

$$\|S_2(T,z_0(\omega))\|^2_{M^{1+l}}\le r_2(\omega),\qquad(3.44)$$

where $r_2(\omega)=C\int_0^{T}e^{-2\beta(T-s)}\left(1+|Y(\theta_s\omega)|^2\right)ds+C\int_0^{T}e^{-2\beta(T-s)}\|g_1(s)\|^2\,ds$ is a positive random function.

The proof is complete.

Since $\eta^t(x,s)=\int_0^{s}u(x,t-r)\,dr$, $s\ge 0$, and $\eta_2^0\equiv 0$ by (3.31), it follows that

$$\eta_2^t(x,s)=\begin{cases}\displaystyle\int_0^{s}u_2(x,t-r)\,dr, & 0<s\le t,\\[2mm] \displaystyle\int_0^{t}u_2(x,t-r)\,dr, & s>t.\end{cases}\qquad(3.45)$$
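A brief justification of (3.45) (added here for clarity): by (3.31), $\eta_2^t$ solves the transport equation $\partial_t\eta_2^t+\partial_s\eta_2^t=u_2$ with $\eta_2^0\equiv 0$ and $\eta_2^t(x,0)=0$, so integrating along the characteristics $s-t=\text{const}$ gives

$$\eta_2^t(x,s)=\begin{cases}\displaystyle\int_0^{s}u_2(x,t-r)\,dr, & 0<s\le t,\\[2mm] \displaystyle\eta_2^0(x,s-t)+\int_0^{t}u_2(x,t-r)\,dr=\int_0^{t}u_2(x,t-r)\,dr, & s>t.\end{cases}$$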

For more information on $\eta^t(x,s)$, see [23] . We then have:

Lemma 3.5. Let $\Pi:H^1\times L_\mu^2(\mathbb{R}^+,H^1)\to L_\mu^2(\mathbb{R}^+,H^1)$ be the projection operator and set $\Gamma_2^T:=\Pi S_2(T,B_0(\omega))$, where $B_0(\omega)$ is the random bounded absorbing set from Lemma 3.4 and $S_2(T,\cdot)$ is the solution operator of (3.31). Under the assumptions of Lemma 3.4, there is a positive random function $r_3(\omega)$, depending on $T$, such that

1) $\Gamma_2^T$ is bounded in $L_\mu^2(\mathbb{R}^+,H^{1+l})\cap H_\mu^1(\mathbb{R}^+,H^1)$;

2) $\sup_{\eta\in\Gamma_2^T}\|\eta(s)\|^2_{H^1}\le r_3(\omega)$.

Proof. By the random translation, (3.44) and Lemma 3.4, we can prove this Lemma.

Therefore, Lemma 2.1 implies that $\Gamma_2^T$ is relatively compact in $L_\mu^2(\mathbb{R}^+,H^1)$. Using the compact embedding $H^{1+l}\hookrightarrow H^1$, we obtain:

Lemma 3.6. Let $S_2(t,\cdot)$ be the corresponding solution operator of (3.31), and let the assumptions of Lemmas 3.4 and 3.5 hold. Then for any $T>0$, $S_2(T,B_0(\omega))$ is relatively compact in $M^1$.

Now we are in a position to prove the existence of a random attractor for the stochastic nonclassical diffusion equation with linear memory and additive white noise.

Theorem 3.1. Let $\{S(t)\}_{t\ge 0}$ be the solution operator of Equation (3.3), and let the conditions of Lemma 3.6 hold. Then the random dynamical system $\Upsilon$ has a unique $\mathcal{D}$-random attractor in $M^1$.

Proof. Notice that $\Upsilon$ has a closed random absorbing set $B=\{B(\omega)\}_{\omega\in\Omega}\in\mathcal{D}$ by Lemma 3.2, and $\Upsilon$ is asymptotically compact in $M^1$ by Lemma 3.3 and Lemma 3.6. Hence the existence of a unique $\mathcal{D}$-random attractor follows immediately from Theorem 2.1.

Funding

This work was supported by the NSFC (11561064), and NWNU-LKQN-14-6.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Mohamed, A.E., Ma, Q.Z. and Bakhet, M.Y.A. (2018) Random Attractors of Stochastic Non-Autonomous Nonclassical Diffusion Equations with Linear Memory on a Bounded Domain. Applied Mathematics, 9, 1299-1314. https://doi.org/10.4236/am.2018.911085

References

  1. Aifantis, E.C. (1980) On the Problem of Diffusion in Solids. Acta Mechanica, 37, 265-296. https://doi.org/10.1007/BF01202949

  2. Cheng, S.L. (2015) Random Attractor for the Nonclassical Diffusion Equation with Fading Memory. Journal of Partial Differential Equations, 28, 253-268. https://doi.org/10.4208/jpde.v28.n3.4

  3. Wang, X. and Zhong, C. (2009) Attractors for the Non-Autonomous Nonclassical Diffusion Equations with Fading Memory. Nonlinear Analysis: Theory, Methods & Applications, 71, 5733-5746. https://doi.org/10.1016/j.na.2009.05.001

  4. Kuttler, K. and Aifantis, E. (1987) Existence and Uniqueness in Nonclassical Diffusion. Quarterly of Applied Mathematics, 45, 549-560. https://doi.org/10.1090/qam/910461

  5. Kuttler, K. and Aifantis, E. (1988) Quasilinear Evolution Equations in Nonclassical Diffusion. SIAM Journal on Mathematical Analysis, 19, 110-120. https://doi.org/10.1137/0519008

  6. Xie, Y., Li, Y. and Zeng, Y. (2016) Uniform Attractors for Nonclassical Diffusion Equation with Memory. Journal of Function Spaces, 2016, Article ID: 5340489.

  7. Conti, M. and Marchini, E.M. (2016) A Remark on Nonclassical Diffusion Equations with Memory. Applied Mathematics and Optimization, 73, 1-21. https://doi.org/10.1007/s00245-015-9290-8

  8. Wang, B. (2014) Random Attractors for Non-Autonomous Stochastic Wave Equations with Multiplicative Noise. Discrete and Continuous Dynamical Systems, 34, 269-300.

  9. Sun, C.Y., Wang, S.Y. and Zhong, C.K. (2007) Global Attractors for a Nonclassical Diffusion Equation. Acta Mathematica Sinica, 23, 1271-1280. https://doi.org/10.1007/s10114-005-0909-6

  10. Ma, Q.Z., Wang, X.P. and Xu, L. (2016) Existence and Regularity of Time-Dependent Global Attractors for the Nonclassical Reaction-Diffusion Equations with Lower Forcing Term. Boundary Value Problems, 2016, 10. https://doi.org/10.1186/s13661-015-0513-3

  11. Sun, C. and Yang, M. (2008) Dynamics of the Nonclassical Diffusion Equations. Asymptotic Analysis, 59, 51-81.

  12. Hu, Z. and Wang, Y. (2012) Pullback Attractors for a Nonautonomous Nonclassical Diffusion Equation with Variable Delay. Journal of Mathematical Physics, 53, 072702. https://doi.org/10.1063/1.4736847

  13. Morillas, F. and Valero, J. (2005) Attractors for Reaction-Diffusion Equations in $\mathbb{R}^N$ with Continuous Nonlinearity. Asymptotic Analysis, 44, 111-130.

  14. Ma, Q.Z., Liu, Y. and Zhang, F. (2012) Global Attractors in $H^1(\mathbb{R}^N)$ for Nonclassical Diffusion Equations. Discrete Dynamics in Nature and Society, 2012, Article ID: 672762. https://doi.org/10.1155/2012/672762

  15. Xie, Y., Li, Q. and Zhu, K. (2016) Attractors for Nonclassical Diffusion Equations with Arbitrary Polynomial Growth Nonlinearity. Nonlinear Analysis: Real World Applications, 31, 23-37. https://doi.org/10.1016/j.nonrwa.2016.01.004

  16. Zhang, F. and Liu, Y. (2014) Pullback Attractors in $H^1(\mathbb{R}^N)$ for Non-Autonomous Nonclassical Diffusion Equations. Dynamical Systems, 29, 106-118. https://doi.org/10.1080/14689367.2013.854317

  17. Anh, C.T. and Bao, T.Q. (2012) Dynamics of Non-Autonomous Nonclassical Diffusion Equations on $\mathbb{R}^N$. Communications on Pure and Applied Analysis, 11, 1231-1252.

  18. Zhao, W. (2014) Random Attractors in $H^1$ for Stochastic Two-Dimensional Micropolar Fluid Flows with Spatial-Valued Noises. Electronic Journal of Differential Equations, 2014, 1-19.

  19. Zhao, W. and Song, S. (2015) Dynamics of Stochastic Nonclassical Diffusion Equations on Unbounded Domains. Electronic Journal of Differential Equations, 2015, 1-22.

  20. Anh, C.T. and Toan, N.D. (2014) Uniform Attractors for Non-Autonomous Nonclassical Diffusion Equations on $\mathbb{R}^N$. Bulletin of the Korean Mathematical Society, 51, 1299-1324. https://doi.org/10.4134/BKMS.2014.51.5.1299

  21. Wang, B.X. (2009) Random Attractors for Stochastic Benjamin-Bona-Mahony Equation on Unbounded Domains. Journal of Differential Equations, 246, 2506-2537. https://doi.org/10.1016/j.jde.2008.10.012

  22. Anh, C. and Toan, N.D. (2012) Pullback Attractors for Nonclassical Diffusion Equations in Noncylindrical Domains. International Journal of Mathematics and Mathematical Sciences, 2012, Article ID: 875913.

  23. Wang, X., Yang, L. and Zhong, C.K. (2010) Attractors for the Nonclassical Diffusion Equations with Fading Memory. Journal of Mathematical Analysis and Applications, 362, 327-335. https://doi.org/10.1016/j.jmaa.2009.09.029

  24. Robinson, J.C. (2001) Infinite-Dimensional Dynamical Systems. Cambridge University Press, Cambridge.

  25. Temam, R. (1997) Infinite-Dimensional Dynamical Systems in Mechanics and Physics. Springer-Verlag, New York. https://doi.org/10.1007/978-1-4612-0645-3

  26. Arnold, L. (1998) Random Dynamical Systems. Springer-Verlag, New York. https://doi.org/10.1007/978-3-662-12878-7

  27. Da Prato, G. and Zabczyk, J. (1992) Stochastic Equations in Infinite Dimensions. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511666223

  28. Chueshov, I. (2002) Monotone Random Systems Theory and Applications. Springer-Verlag, Berlin. https://doi.org/10.1007/b83277

  29. Temam, R. (1977) Navier-Stokes Equations, Theory and Numerical Analysis. North-Holland, Amsterdam, New York.

  30. Crauel, H. and Flandoli, F. (1994) Attractors for Random Dynamical Systems. Probability Theory and Related Fields, 100, 365-393. https://doi.org/10.1007/BF01193705

  31. Crauel, H., Debussche, A. and Flandoli, F. (1997) Random Attractors. Journal of Dynamics and Differential Equations, 9, 307-341. https://doi.org/10.1007/BF02219225

  32. Ahmed, E., Abdelmajid, A., Xu, L. and Ma, Q. (2015) Random Attractors for Stochastic Reaction-Diffusion Equations with Distribution Derivatives on Unbounded Domains. Applied Mathematics, 6, 1790-1807. https://doi.org/10.4236/am.2015.610159

  33. Dafermos, C.M. (1970) Asymptotic Stability in Viscoelasticity. Archive for Rational Mechanics and Analysis, 37, 297-308. https://doi.org/10.1007/BF00251609

  34. Giorgi, C., Marzocchi, A. and Pata, V. (1998) Asymptotic Behavior of a Semilinear Problem in Heat Conduction with Memory. Nonlinear Differential Equations and Applications, 5, 333-354. https://doi.org/10.1007/s000300050049

  35. Wang, B.X. (2009) Upper Semicontinuity of Random Attractors for Non-Compact Random Dynamical Systems. Electronic Journal of Differential Equations, 2009, 1-18.
