All Chapters Note
Introduction:
Unit Objectives:
Upon successful completion of this chapter, the student will be able to:
State the basic definitions and terminology of differential equations and discuss the central
issues and objectives of the course.
Classify Ordinary Differential Equations (ODEs) and distinguish ODEs from Partial
Differential Equations (PDEs).
Explain what is meant by an integrating factor, find the integrating factor for a linear
first-order equation with constant coefficients and use it to solve the equation.
Solve “Exact” and “Homogeneous” equations.
Decide which (if any) of the above methods can be used to solve a given first-order
differential equation.
Use a given change of variable to transform a first-order differential equation into one
that can more easily be solved.
1.1 Basic Concepts (Some Definitions and Terminology)
Section Objectives:
Definition 1.1 A Differential Equation (DE) is an equation involving a dependent variable and its
derivatives with respect to one or more independent variables.
Definition 1.2 A differential equation involving functions of only a single independent variable is called an
Ordinary Differential Equation (ODE). That is, if a differential equation contains only ordinary derivatives
of one or more unknown functions with respect to a single independent variable, it is said to be an
ordinary differential equation (ODE).
A Partial Differential Equation (PDE) is an equation which involves partial derivatives of
an unknown function of two or more variables, that is, derivatives of functions of more than
one variable.
Definition 1.3 The order of a DE is the order of the highest derivative that appears in it.
Definition 1.4 The degree of a DE is the power to which the highest-order derivative is raised in the equation.
An ODE is said to be of order n if the nth derivative of the unknown function y is the highest
derivative of y in the equation.
Examples 1.1
Definition 1.5 The most general nth order linear differential equation can be written as
a₀(x)y⁽ⁿ⁾ + a₁(x)y⁽ⁿ⁻¹⁾ + … + aₙ(x)y = f(x),   (*)
where a₀(x), a₁(x), …, aₙ(x) are called the coefficients of the equation.
The known function f(x) is called the non-homogeneous term; equation (*) is called homogeneous if
f(x) = 0.
If the coefficients in (*) are constants, (*) is called a constant coefficient linear equation. Here a₀ ≠ 0,
otherwise the equation would not be of nth order. An ordinary differential equation that is not linear is said to be non-linear.
Example 1.4
ODE : Property
1. dy/dx + 2xy = sin x : linear, variable coefficient, non-homogeneous, first order equation
2. x³ d³y/dx³ − 2x dy/dx + 6y = eˣ : linear, variable coefficient, non-homogeneous, 3rd order equation
3. d²y/dx² + a dy/dx + by = sin φx, with φ a constant : linear, constant coefficient (a, b constants), non-homogeneous, 2nd order equation
4. d²θ/dt² + k sin θ = 0, with k a constant : non-linear, 2nd order equation, because θ occurs non-linearly in the function sin θ
5. d⁴y/dx⁴ + y² = 0 : non-linear, homogeneous, 4th order equation
Definition 1.6 A functional relation between the two variables (dependent and independent)
which satisfies the given DE is called a solution (or integral curve) of the ODE. A solution of an nth
order equation that contains n arbitrary constants is called the general solution of the equation. If the
arbitrary constants in the general solution are assigned specific values, the result is called a particular
solution of the equation.
Note: it is possible to have more than one solution of a differential equation. For instance,
y = 2x³ + A and y = 2x³ are both solutions of the differential equation y' = 6x².
Definition 1.7 An ODE together with an initial condition is called an initial value problem.
Thus, if the ordinary differential equation is explicit,
y' = f(x, y),
the initial value problem is of the form
y' = f(x, y),  y(x₀) = y₀.
Existence and Uniqueness: Two fundamental questions arise in considering an initial value problem:
Existence: Does the differential equation dy/dx = f(x, y) possess solutions?
Do any of the solution curves pass through the point (x₀, y₀)?
Uniqueness: When can we be certain that there is precisely one solution curve passing
through the point (x₀, y₀)?
Section Objectives:
If the differential equation has the form
dy/dx = f(x)  ⟹  dy = f(x)dx,
it is solved by direct integration. For example, solve dy/dx = cos x.
Solution: We have
dy/dx = cos x.
This implies that
dy = cos x dx.   (*)
Integrating both sides of (*), i.e.
∫dy = ∫cos x dx,
we obtain
y = sin x + c.
We begin our study of how to solve differential equations with the simplest of all differential
equations: first-order equations with separable variables. Because the method in this section, and
many techniques for solving differential equations, involve integration, you are urged to refresh
your memory on important formulas (such as ∫du/u) and techniques (such as integration by
parts) by consulting a calculus text.
Definition 1.8 An equation y' = f(x, y) is called separable if it can be written in the form
y' = F(x)G(y)   (*)
for some function F(x) depending only on x and G(y) depending only on y. Separating the
variables and integrating then gives
∫(1/G(y))dy = M₁(y)  and  ∫F(x)dx = M₂(x) + c.
Example 1.7: Solve (1 + x)dy − y dx = 0.
Solution: Dividing by (1 + x)y, we can write
dy/y = dx/(1 + x),
from which it follows that
∫dy/y = ∫dx/(1 + x),
ln|y| = ln|1 + x| + c₁,
y = e^(ln|1 + x| + c₁) = e^(ln|1 + x|)·e^(c₁)
  = |1 + x|·e^(c₁)
  = ±e^(c₁)(1 + x).
Replacing ±e^(c₁) by c then gives
y = c(1 + x).
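A minimal cross-check of Example 1.7 with a computer algebra system; this is only a sketch and assumes the SymPy package is available.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# (1 + x) dy - y dx = 0 rewritten as dy/dx = y / (1 + x)
ode = sp.Eq(y(x).diff(x), y(x) / (1 + x))
sol = sp.dsolve(ode, y(x))
print(sol)   # Eq(y(x), C1*(x + 1)), i.e. y = c(1 + x) as found above
```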
Example 1.8: (An initial value problem)
Separating the variables and integrating,
∫(e^y − y e^(−y))dy = 2∫sin x dx.
Using integration by parts on the left yields
e^y + y e^(−y) + e^(−y) = −2cos x + c.
The initial condition y = 0 when x = 0 implies c = 4.
Thus, a solution of the initial value problem is
e^y + y e^(−y) + e^(−y) = 4 − 2cos x.
Exercises
(b) y' = x²(1 + y)    (d) dx + e^(3x)dy = 0    (f) y' + (y − 1)/(1 − x) = 0
1.4 Homogenous Equations
Sometimes, the best way of solving a DE is to use a change of variables that will put the DE into
a form whose solution method we know. We now consider a class of DEs that are not directly
solvable by separation of variables, but, through a change of variables, can be solved by that
method.
Definition 1.9 A function f(x, y) is said to be algebraically homogenous of degree n, or simply
homogenous of degree n, if
f(kx, ky) = kⁿ f(x, y)
for some real number n and all k > 0, for (x, y) ≠ (0, 0).
Example 1.9
a) f(x, y) = x² + 3xy + 4y² is homogenous (of degree 2), since f(kx, ky) = k²f(x, y).
Example 1.10: Solve the following homogenous equations.
a) 2x²y' = x² + y²
Solution: If we divide both sides by 2x², we obtain
y' = 1/2 + (1/2)(y/x)²,
which is homogeneous. Now, setting
u = y/x  ⟹  y = ux,  y' = u + xu',
yields the equation
u + xu' = 1/2 + (1/2)u²,  i.e.  xu' = (1/2)u² − u + 1/2 = (1/2)(u − 1)².
After rearranging,
2u'/(u − 1)² = 1/x.
b) y' = (x² + y²)/(xy)
Solution: If we divide the numerator and denominator of the fraction by x², we obtain
y' = (1 + (y/x)²)/(y/x),
which is homogeneous. Now setting u = y/x, or y = ux, yields
u + xu' = (1 + u²)/u,  so  xu' = (1 + u²)/u − u = 1/u.
Separating variables, u du = dx/x, hence
u²/2 = ln(x) + c,
and then, since u = y/x,
y = x√(2 ln(x) + c).
Exercises
Solve  a) (y² + 2xy)dx − x²dy = 0
b) (x² + y²)dx = 2xy dy
c) x²y dx − (x³ + y³)dy = 0
To determine k(y), we differentiate f from (*) with respect to y, use (b) to get dk/dy, and
then integrate to get k.
Example 1.13: Solve 2xy dx + (x² − 1)dy = 0.
Solution: With M(x, y) = 2xy and N(x, y) = x² − 1, we have
∂M/∂y = 2x = ∂N/∂x.
Thus the equation is exact, and so there exists a function f(x, y) such that
∂f/∂x = 2xy  and  ∂f/∂y = x² − 1.
From the first of these equations we obtain, after integrating with respect to x,
f(x, y) = x²y + k(y).
Taking the partial derivative of the last expression with respect to y and setting the result equal to
N(x, y) gives
∂f/∂y = x² + k'(y) = x² − 1.
It follows that k'(y) = −1 and k(y) = −y.
Hence f(x, y) = x²y − y, and the solutions are given implicitly by x²y − y = c.
Note: In the above example, the equation could also be solved by separation of variables.
Example 1.14: Solve (e^(2y) − y cos xy)dx + (2x e^(2y) − x cos xy + 2y)dy = 0.
Solution: The equation is exact because
∂M/∂y = 2e^(2y) + xy sin xy − cos xy = ∂N/∂x.
Hence a function f(x, y) exists for which
M(x, y) = ∂f/∂x  and  N(x, y) = ∂f/∂y.
Now, for variety, we shall start with the assumption that ∂f/∂y = N(x, y); that is,
∂f/∂y = 2x e^(2y) − x cos xy + 2y,
f(x, y) = 2x∫e^(2y)dy − x∫cos xy dy + 2∫y dy.
It follows that
f(x, y) = x e^(2y) − sin xy + y² + h(x),
∂f/∂x = e^(2y) − y cos xy + h'(x) = e^(2y) − y cos xy,
and so
h'(x) = 0, or h(x) = c.
Hence a family of solutions is
x e^(2y) − sin xy + y² + c = 0.
Exercises
Definition 1.12: An integrating factor is a factor by which we multiply a given non-exact equation
to make it exact.
Case 2: We can also look to see if there is an integrating factor that depends only on y and not on
x. We can do the same calculation, this time using the fact that ∂I/∂x would be zero, to see that such an
integrating factor exists if the ratio (Nₓ − M_y)/M is a function Q(y) of y only (and not of x); then
I(y) = e^(∫Q(y)dy).
Exercises
1. Determine whether the given differential equation is exact. If it is exact, solve it.
a) (2x − 1)dx + (3y + 7)dy = 0
b) (2x + y)dx − (x + 6y)dy = 0
c) (sin y − y sin x)dx + (cos x + x cos y − y)dy = 0
d) (2xy² − 3)dx + (2x²y + 4)dy = 0
e) (1 + ln x + y/x)dx = (1 − ln x)dy

a) y dx − x dy + 3x²y²eˣ dx = 0
b) y' = −2/y − 3y/(2x)
c) 6xy dx + (4y + 9x²)dy = 0
d) cos x dx + (1 + 2/y)sin x dy = 0
e) y(x + y + 1)dx + (x + 2y)dy = 0
f) (−xy sin x + 2y cos x)dx + 2x cos x dy = 0
Linear ODEs, and ODEs that can be transformed to linear form, are models of various phenomena, for
instance in physics, biology, population dynamics, and ecology.
If the first term is f(x)y' (instead of y'), divide the equation by f(x) to get the "standard form"
(*) with y' as the first term.
For instance,
y'cos x + y sin x = x
is a linear ODE, and its standard form is
y' + y tan x = x sec x.
To find the general solution of (a) we use an "integrating factor": we multiply the equation by a
function I(x) to obtain
I(x)dy/dx + I(x)P(x)y = I(x)r(x).   (b)
What we would like to happen is for
I(x)dy/dx + I(x)P(x)y
to be the derivative of something nice. When written this way, this sum looks like the
output of the product rule. If we can find I(x) so that the derivative of I(x) is I(x)P(x), then
this sum will be the derivative d/dx[I(x)·y]. What we want is
I'(x) = I(x)P(x).
This is now a (very easy) separable equation for the function I(x), and the solution is
I(x) = e^(∫P(x)dx).
If r(x) = 0, then the ODE (a) becomes
y' + P(x)y = 0   (c)
and is called homogenous. By separating variables and integrating we then obtain
dy/y = −P(x)dx,
ln|y| = −∫P(x)dx + c*.
Taking exponentials on both sides, we obtain the general solution of the homogenous ODE (c):
y(x) = c e^(−∫P(x)dx),  where c = ±e^(c*).
Step 3. Multiply both sides of the standard-form equation by the integrating factor. The left-hand
side of the resulting equation is automatically the derivative of the product of the
integrating factor e^(∫P(x)dx) and y:
d/dx[e^(∫P(x)dx)·y] = e^(∫P(x)dx) r(x).
Step 4. Integrate both sides of the last equation and solve for y.
Step 5. If an initial condition y(x₀) = y₀ is given, the required solution of the initial value
problem is obtained by choosing the arbitrary constant c in the general solution found in Step 4 so
that
y = y₀ when x = x₀.
Example 1.17: Solve dy/dx − 3y = 0.
Solution: This linear equation can be solved by separation of variables. Alternatively, since the
differential equation is already in standard form, we identify P(x) = −3, and so the integrating
factor is e^(∫(−3)dx) = e^(−3x). We then multiply the given equation by this factor and recognize that
e^(−3x)dy/dx − 3e^(−3x)y = e^(−3x)·0
is the same as
d/dx[e^(−3x)y] = 0.
Integrating the last equation,
∫(d/dx)[e^(−3x)y]dx = ∫0 dx,
then yields
e^(−3x)y = c, or y = ce^(3x), −∞ < x < ∞.
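A minimal sketch of the integrating-factor recipe of Steps 1 to 4 using SymPy; SymPy is assumed to be available and the helper name is ours, not the text's. It is checked against Examples 1.17 and 1.18.

```python
import sympy as sp

x, c = sp.symbols('x c')

def solve_linear_first_order(p, r):
    """General solution of y' + p(x) y = r(x) via I(x) = exp(integral of p)."""
    I = sp.exp(sp.integrate(p, x))        # integrating factor
    y = (sp.integrate(I * r, x) + c) / I  # y = (integral of I*r + c) / I
    return sp.simplify(y)

# Example 1.17:  y' - 3y = 0        ->  y = c e^(3x)
print(solve_linear_first_order(-3, 0))
# Example 1.18:  y' + (4/x) y = x^3 ->  y = x^4/8 + c x^(-4)
print(solve_linear_first_order(4 / x, x**3))
```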
Example 1.18: Solve xy' = x⁴ − 4y.
Solution: In standard form we have
y' + (4/x)y = x³,
so
P(x) = 4/x  and  r(x) = x³.
Multiplying both sides by
e^(∫P(x)dx) = e^(4 ln x) = x⁴,
we get
x⁴y' + 4x³y = x⁷,  i.e.  d/dx[x⁴y] = x⁷,
and integrating gives x⁴y = x⁸/8 + c, that is, y = x⁴/8 + c x⁻⁴.
Plugging in yields −1/2 = (−1/2)·1 + c.
Hence, c = 0.
Solving for y gives y = −cos(2x)/(2 cos x).
Example 1.20: Solve the IVP y'cos x + y = sin x, y(0) = 2. (Similar; do it.)
Exercises
The order of a DE is the order of the highest derivative that appears in it.
The degree of a DE is the power to which the highest-order derivative is raised in the equation.
The nth order linear differential equation is written as
a₀(x)y⁽ⁿ⁾ + a₁(x)y⁽ⁿ⁻¹⁾ + … + aₙ(x)y = f(x),   (*)
where a₀(x), a₁(x), …, aₙ(x) are called the coefficients of the equation. The known function f(x) is
called the non-homogeneous term; equation (*) is called homogeneous if f(x) = 0.
If the coefficients in (*) are constants, so that (*) becomes a₀y⁽ⁿ⁾ + a₁y⁽ⁿ⁻¹⁾ + … + aₙy = f(x),
it is called a constant coefficient linear equation. Here a₀ ≠ 0, otherwise the equation would not
be of nth order. An ordinary differential equation that is not linear is said to be non-linear.
A functional relation between the two variables (dependent and independent) which
satisfies the given DE is called a solution (or integral curve) of the ODE. A solution of an nth
order equation that contains n arbitrary constants is called the general solution of the
equation. If the arbitrary constants in the general solution are assigned specific values, the
result is called a particular solution of the equation.
If the differential equation has the form y' = f(x), then y = ∫f(x)dx + c, which is a general
solution.
An equation y' = f(x, y) is called separable if it can be written in the form
y' = F(x)G(y)   (*)
for some function F(x) depending only on x and G(y) depending only on y. Equation (*) is then
said to be of variable-separable type.
A function f(x, y) is said to be algebraically homogenous of degree n, or simply
homogenous of degree n, if f(kx, ky) = kⁿf(x, y) for some real number n and all k > 0, for (x, y) ≠ (0, 0).
The first order differential equation M(x, y)dx + N(x, y)dy = 0 is said to be exact if a
function F(x, y) exists such that the total differential d[F(x, y)] = M(x, y)dx + N(x, y)dy.
The differential equation M(x, y)dx + N(x, y)dy = 0 is exact if and only if ∂M/∂y = ∂N/∂x.
An integrating factor is a factor by which we multiply a given non-exact equation to make it
exact. To find an integrating factor I(x, y), solve the partial differential equation
I_y M − I_x N + I(M_y − N_x) = 0.
This is just as tricky to solve as the original equation. Only in a few special cases are there
methods for computing the integrating factor I(x, y).
Case 1: Suppose we want to see if there exists an integrating factor that depends only on x (and
not on y). Then ∂I/∂y would be zero, since I does not depend on y, and so I(x) needs to satisfy
I'/I = (M_y − N_x)/N.
This can only happen if the ratio (M_y − N_x)/N is a function P(x) of x only (and not of y);
then I(x) = e^(∫P(x)dx).
Case 2: We can also look to see if there is an integrating factor that depends only on y and not on
x. We can do the same calculation, this time using the fact that ∂I/∂x would be zero, to see that such an
integrating factor exists if the ratio (Nₓ − M_y)/M is a function Q(y) of y only (and not of x); then
I(y) = e^(∫Q(y)dy).
A first order ODE is said to be linear if it can be written as y' + P(x)y = r(x) (standard
form)   (*)
where P and r are functions of x. If the first term is f(x)y' (instead of y'), divide the equation
by f(x) to get the standard form (*) with y' as the first term.
Miscellaneous Exercises
b. sin x dy/dx + (cos x)y = 0,  y(7π/6) = −2
c. dy/dx + 2(x + 1)y² = 0,  y(0) = −1/8
d. (2xy² − sin x)dx + (2 + 2x²y)dy = 0,  y(0) = 1
e. [2y + y²/x + eˣ(1 + 1/x)]dx + (x + 2y)dy = 0,  y(1) = 1
Chapter Two
Ordinary Linear Differential Equations of the 2nd Order
Introduction
In chapter one we saw that we could solve a few first-order differential equations by recognizing
them as separable, linear, exact or homogeneous equations. We turn now to the solution of
ordinary differential equations of order two or higher. These equations have important
engineering applications, especially in connection with mechanical and electrical vibrations.
Linear second order differential equations with constant coefficients are the simplest of the
higher order differential equations, and they have many applications.
The most general linear second order differential equation is of the form
p(x)y'' + q(x)y' + r(x)y = g(x).
The equation is called non-homogenous when g(x) is not identically zero; otherwise it is called
homogenous.
Example 2.1: y'' + xy' + y = 0 is a homogenous 2nd order linear differential equation;
y'' = sin x is a non-homogenous 2nd order linear differential equation.
Definition 2.1. A function y = h(x) is called a solution of a (linear or non-linear) second order ODE on
some open interval I if h is defined and twice differentiable throughout the interval and is such that the
ODE becomes an identity when we replace the unknown y by h and its successive derivatives.
Theorem (Superposition or Linearity Principle):
If y₁ and y₂ are solutions of the differential equation
y'' + p(x)y' + q(x)y = 0,
then c₁y₁, c₂y₂ and c₁y₁ + c₂y₂ are also solutions of the given differential equation.
Proof: Let y₁ and y₂ be solutions of the differential equation on I. Then, substituting
y = c₁y₁ + c₂y₂
and its derivatives into the differential equation, and using the familiar rule
(c₁y₁ + c₂y₂)' = c₁y₁' + c₂y₂',
we get
y'' + py' + qy = (c₁y₁ + c₂y₂)'' + p(c₁y₁ + c₂y₂)' + q(c₁y₁ + c₂y₂)
= c₁y₁'' + c₂y₂'' + p(c₁y₁' + c₂y₂') + q(c₁y₁ + c₂y₂)
= c₁(y₁'' + py₁' + qy₁) + c₂(y₂'' + py₂' + qy₂) = 0,
since each bracket in the last line is 0, because y₁ and y₂ are solutions of the differential equation on I.
Note: The superposition principle holds for homogenous linear ODEs only; it does not hold for
non-homogenous linear or non-linear equations.
Example 2.4. Show that y = cos x and y = sin x are solutions of
y'' + y = 0
for all x, and show that any linear combination of the two functions is a solution of the given
ODE.
Example 2.5. Verify that the given initial conditions determine the constants c₁ and c₂ in the general solution.
Generally: we defined an initial-value problem for a general nth-order differential equation; for
a linear second order differential equation, an initial-value problem prescribes the values of y and y' at a single point.
Example 2.6: Solve the following IVP:
y'' − 9y = 0,  y(0) = 2,  y'(0) = −1.
Solution: The two functions y(t) = e^(3t) and y(t) = e^(−3t) are enough to form the general solution of
the differential equation. The general solution of our differential equation is then
y(t) = c₁e^(−3t) + c₂e^(3t).
Now all we need to do is apply the initial conditions:
y'(t) = −3c₁e^(−3t) + 3c₂e^(3t).
Plugging in the initial conditions,
y(0) = 2 = c₁ + c₂,
y'(0) = −1 = −3c₁ + 3c₂.
This gives us a system of two equations in two unknowns that can be solved. Doing this yields
c₁ = 7/6,  c₂ = 5/6.
The solution to the IVP is then
y(t) = (7/6)e^(−3t) + (5/6)e^(3t).
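A quick SymPy cross-check of Example 2.6; this assumes SymPy is installed and simply confirms the constants found above.

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

ode = sp.Eq(y(t).diff(t, 2) - 9 * y(t), 0)
sol = sp.dsolve(ode, y(t), ics={y(0): 2, y(t).diff(t).subs(t, 0): -1})
print(sol)   # Eq(y(t), 7*exp(-3*t)/6 + 5*exp(3*t)/6), i.e. c1 = 7/6, c2 = 5/6
```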
Note: The number of linearly independent solutions is equal to the order of the differential
equation, so a 2nd order differential equation has two linearly independent solutions.
Definition: The Wronskian of n functions y₁(x), y₂(x), y₃(x), …, yₙ(x), each (n − 1) times
differentiable, is
W(y) = | y₁        y₂        ⋯  yₙ        |
       | y₁'       y₂'       ⋯  yₙ'       |
       | y₁''      y₂''      ⋯  yₙ''      |
       | ⋮         ⋮            ⋮         |
       | y₁⁽ⁿ⁻¹⁾   y₂⁽ⁿ⁻¹⁾   ⋯  yₙ⁽ⁿ⁻¹⁾   |,  where | | denotes the determinant.
Example 7. Consider the functions y₁(x) = eˣ, y₂(x) = e^(2x). Then
W(eˣ, e^(2x)) = | y₁   y₂  | = | eˣ   e^(2x)  |
                | y₁'  y₂' |   | eˣ   2e^(2x) |
= eˣ·2e^(2x) − eˣ·e^(2x) = e^(3x).
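The same Wronskian can be computed symbolically; a short sketch assuming SymPy is available.

```python
import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.exp(x), sp.exp(2 * x)

# Wronskian as the 2x2 determinant of the functions and their derivatives
W = sp.Matrix([[y1, y2], [y1.diff(x), y2.diff(x)]]).det()
print(sp.simplify(W))   # exp(3*x): nonzero for every x, so e^x and e^(2x) are LI
```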
Theorem: If W ( y 1 , y 2 , … y n ) ( x 0) ≠ 0 for some x 0 ∈ I then the set { y 1 , y 2 , … , y n } is LI on I .
Theorem: Given
y'' + py' + qy = 0,   (*)
if y₁ and y₂ are two linearly independent solutions of (*) on I, then W(y) ≠ 0 on I (i.e. W(y₁, y₂) ≠ 0).
Corollary: Two solutions y₁ and y₂ of y'' + py' + qy = 0 whose Wronskian is nonzero are LI.
(Proofs of the above theorems and corollary are left as exercises for you.)
y₂ = y₁ ∫ (e^(−∫p dx) / y₁²) dx
Method of Finding y₂
We substitute
y = y₂ = uy₁,  y' = y₂' = u'y₁ + uy₁',  y'' = u''y₁ + 2u'y₁' + uy₁''
into the homogenous linear equation
y'' + p(x)y' + q(x)y = 0.
This gives
u''y₁ + 2u'y₁' + uy₁'' + p(u'y₁ + uy₁') + quy₁ = 0.
Collecting terms in u'', u' and u, we have
u''y₁ + u'(2y₁' + py₁) + u(y₁'' + py₁' + qy₁) = 0.
Now comes the key point. Since y₁ is a solution of
y'' + p(x)y' + q(x)y = 0,
the expression y₁'' + py₁' + qy₁ = 0.
Hence the u-term is gone, and we are left with an ODE in u' alone. Dividing by y₁ and setting
u' = U,  u'' = U',
we get
U' + ((2y₁' + py₁)/y₁)U = 0,
that is,
U' + (2y₁'/y₁ + p)U = 0.
This is the desired first order ODE, the reduced ordinary differential equation. Separation of
variables and integration gives
dU/U = −(2y₁'/y₁ + p)dx.
Exercises
λ² + aλ + b = 0   (2)
λ₁ = (1/2)(−a + √(a² − 4b)),
λ₂ = (1/2)(−a − √(a² − 4b)).
Depending on the sign of the discriminant a² − 4b, the quadratic equation (2) may have three
kinds of roots.
Case I: Two distinct real roots λ₁, λ₂ (a² − 4b > 0). Here y₁ = e^(λ₁x) and y₂ = e^(λ₂x) are defined
(and real) for all x and their quotient is not constant. The corresponding general solution is
y = c₁e^(λ₁x) + c₂e^(λ₂x).
Example: Solve y'' + 11y' + 24y = 0, y(0) = 0, y'(0) = −7.
λ² + 11λ + 24 = 0,
(λ + 8)(λ + 3) = 0.
Its roots are λ₁ = −8 and λ₂ = −3, and the general solution and its derivative are
y(x) = c₁e^(−8x) + c₂e^(−3x),
y'(x) = −8c₁e^(−8x) − 3c₂e^(−3x).
Now, plug in the initial conditions to get the following system of equations:
y(0) = 0 = c₁ + c₂,
y'(0) = −7 = −8c₁ − 3c₂.
Solving gives
c₁ = 7/5  and  c₂ = −7/5,
so
y(x) = (7/5)e^(−8x) − (7/5)e^(−3x).
Example: Solve y'' + 3y' − 10y = 0, y(0) = 4, y'(0) = −2.
λ² + 3λ − 10 = 0,
(λ + 5)(λ − 2) = 0,
so λ₁ = −5, λ₂ = 2, and
y(x) = c₁e^(−5x) + c₂e^(2x),
y'(x) = −5c₁e^(−5x) + 2c₂e^(2x).
Now, plug in the initial conditions to get the following system of equations:
y(0) = 4 = c₁ + c₂,
y'(0) = −2 = −5c₁ + 2c₂.
Solving gives
c₁ = 10/7  and  c₂ = 18/7,
so
y(x) = (10/7)e^(−5x) + (18/7)e^(2x).
Case II: Real double root λ = −a/2
If the discriminant a² − 4b = 0, the characteristic equation has only one root, λ = λ₁ = λ₂ = −a/2.
Hence one solution is y₁ = e^(−ax/2). To obtain a second independent solution y₂ (needed for a basis) we use
y₂ = y₁ ∫ (e^(−∫p(x)dx) / y₁²) dx,  where here p(x) = a is a constant.
Then it becomes
y₂ = y₁ ∫ (e^(−∫a dx) / y₁²) dx = e^(−ax/2) ∫ (e^(−ax) / e^(−ax)) dx = x e^(−ax/2).
Hence, in the case of a double root of (2), a basis of solutions of (1) on any interval is e^(−ax/2), x e^(−ax/2).
That is, if the roots of the characteristic equation are λ = λ₁ = λ₂, then the general solution is
y(x) = c₁e^(λx) + c₂x e^(λx).
Example: Solve y'' − 4y' + 4y = 0, y(0) = 12, y'(0) = −3.
λ² − 4λ + 4 = (λ − 2)² = 0,
i.e. λ₁ = λ₂ = 2, so
y(x) = c₁e^(2x) + c₂x e^(2x),
y'(x) = 2c₁e^(2x) + c₂e^(2x) + 2c₂x e^(2x).
12 = y(0) = c₁,
−3 = y'(0) = 2c₁ + c₂, so c₂ = −27, and
y(x) = 12e^(2x) − 27x e^(2x).
Example: With a characteristic equation having the double root λ = 5/4 and initial conditions
y(0) = 3, y'(0) = −9/4:
y(x) = c₁e^(5x/4) + c₂x e^(5x/4),
y'(x) = (5/4)c₁e^(5x/4) + c₂e^(5x/4) + (5/4)c₂x e^(5x/4).
3 = y(0) = c₁,
−9/4 = y'(0) = (5/4)c₁ + c₂, so c₂ = −6, and
y(x) = 3e^(5x/4) − 6x e^(5x/4).
Case III: Complex roots λ = −a/2 + iω and λ = −a/2 − iω
This case occurs if the discriminant a² − 4b < 0 of the characteristic equation is negative;
then the characteristic equation has no real root but two complex conjugate roots,
λ₁, λ₂ = (1/2)(−a ± √(a² − 4b)) = (1/2)(−a ± i√(4b − a²)).
Put α = −a/2 and β = (1/2)√(4b − a²).
Thus,
y₁ = e^(λ₁x) = e^((α + iβ)x)  and  y₂ = e^(λ₂x) = e^((α − iβ)x)
are a basis of complex solutions of the ODE, and using Euler's formula
e^(ix) = cos x + i sin x  ⟹  e^(iβx) = cos βx + i sin βx,
the general solution can be written as
y = c₁e^(αx)(cos βx + i sin βx) + c₂e^(αx)(cos βx − i sin βx).
A real basis of solutions is
y₁ = e^(αx)cos βx,  y₂ = e^(αx)sin βx.
Example: With characteristic roots λ = 4 ± i (so α = 4, β = 1) and initial conditions y(0) = −4, y'(0) = −1:
y(x) = c₁e^(4x)cos x + c₂e^(4x)sin x,
y'(x) = 4c₁e^(4x)cos x − c₁e^(4x)sin x + 4c₂e^(4x)sin x + c₂e^(4x)cos x.
−4 = y(0) = c₁ and −1 = y'(0) = 4c₁ + c₂, so c₂ = 15, and
y(x) = −4e^(4x)cos x + 15e^(4x)sin x.
Case I: Distinct real roots λ₁, λ₂.  Basis: e^(λ₁x), e^(λ₂x).  General solution: y = c₁e^(λ₁x) + c₂e^(λ₂x).
Case II: Real double root λ = −a/2.  Basis: e^(−ax/2), x e^(−ax/2).  General solution: y = e^(−ax/2)(c₁ + c₂x).
Case III: Complex conjugate roots λ₁ = α + iβ, λ₂ = α − iβ.  Basis: e^(αx)cos βx, e^(αx)sin βx.  General solution: y = e^(αx)(c₁cos βx + c₂sin βx).
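A small helper, hypothetical and not from the text, that reports which of the three cases applies to y'' + ay' + by = 0 for given constants a and b, matching the summary above; only Python's standard cmath module is assumed.

```python
import cmath

def classify(a, b):
    disc = a * a - 4 * b
    r1 = (-a + cmath.sqrt(disc)) / 2
    r2 = (-a - cmath.sqrt(disc)) / 2
    if disc > 0:
        return f"Case I: distinct real roots {r1.real:.4g}, {r2.real:.4g}"
    if disc == 0:
        return f"Case II: real double root {r1.real:.4g}"
    return f"Case III: complex roots {r1.real:.4g} +/- {abs(r1.imag):.4g} i"

print(classify(11, 24))   # Case I  (roots -3, -8)
print(classify(-4, 4))    # Case II (double root 2)
print(classify(-8, 17))   # Case III (4 +/- i)
```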
Exercises
Solve the following
a. y ' ' +2 k y ' +k 2 y=0 , k ≠ 0 , y ( 0 )=2 , y ' ( 0 )=4 e) 9 y ' ' −30 y ' +25 y=0
b. y ' ' + y ' + y=0 , y ( 0 ) =1, y ' ( 0 )=3 f) y ' ' + y ' −6 y=0
c. y ' ' +2 y ' + 3 y=0 y ( 0 )=1 , y ' ( 0 ) =3 g) 4 y ' ' −4 y ' −3 y=0
d. y ' ' + y =0 y ( π ) =2, y ' ( π ) =1 h) y ' ' + 9 y ' +20 y=0
A general solution of the non Homogenous ODE (1) on an open interval I is a solution of the
form y ( x) = yh ( x ) + y p ( x ) (3)
Where , y h=c1 y 1 +c 2 y 2 is a general solution of the homogeneous ODE (2) on I and y p is any
solution of (1) on I containing no arbitrary constant.
A particular solution of (1) on I is a solution obtained from (3) by assigning specific value to the
arbitrary constant c 1 and c 2 in y h .
The method of undetermined coefficients is suitable for linear ODEs with constant coefficients a
and b
a. Basic rule: if r(x) in (4) is one of the functions in the first column of Table 2.1, choose y_p
in the same line and determine its undetermined coefficients by substituting y_p and its
derivatives into (4).
b. Modification rule: if a term in your choice for y p happens to be a solution of the
homogenous ODE corresponding to (4), multiply this term by x (or x 2 if this solution
corresponding to a double root of the characteristic equation of the homogenous ODE)
c. Sum rule: if r ( x ) is a sum of functions in the first column, choose y p the sum of
functions in the corresponding lines of the 2nd column.
Example: Solve y'' + y = 0.001x², y(0) = 0, y'(0) = 1.5.
Step 1: General solution of the homogenous ODE. The characteristic equation and its roots are
λ² + 1 = 0  ⟹  λ = ±i,
so
y_h = A cos x + B sin x.
Step 2: Particular solution of the non-homogenous ODE. Choose
y_p = K₂x² + K₁x + K₀.
Then
y_p'' + y_p = 2K₂ + K₂x² + K₁x + K₀ = 0.001x².
Hence K₂ = 0.001, K₁ = 0 and K₀ = −2K₂ = −0.002.
This gives
y_p = 0.001x² − 0.002, and
y = y_h + y_p = A cos x + B sin x + 0.001x² − 0.002.
Step 3: Solution of the initial value problem.
y(0) = A − 0.002 = 0, or A = 0.002,
y'(0) = B = 1.5.
Hence
y = 0.002cos x + 1.5 sin x + 0.001x² − 0.002.
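A quick SymPy check that the solution above satisfies both the ODE and the initial data of Step 3; SymPy is assumed to be available.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Rational(2, 1000) * sp.cos(x) + sp.Rational(3, 2) * sp.sin(x) \
    + sp.Rational(1, 1000) * x**2 - sp.Rational(2, 1000)

# Residual of y'' + y - 0.001 x^2 should be exactly zero
print(sp.simplify(y.diff(x, 2) + y - sp.Rational(1, 1000) * x**2))   # 0
# Initial values y(0) and y'(0)
print(y.subs(x, 0), y.diff(x).subs(x, 0))                            # 0, 3/2
```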
Exercises
d. y ' ' +2 y ' +10 y=17 sin x−37 sin 3 x y ( 0 )=6.6 , y ' ( 0 )=2.2
y'' + p(x)y' + q(x)y = r(x)   … (1)
In the previous discussion, we have seen that a general solution of (1) is the sum of a general
solution y_h of the corresponding homogeneous ODE and any particular solution y_p of (1). To
obtain y_p when r(x) is not too complicated we can often use the method of undetermined
coefficients. However, since this method is restricted to functions r(x) whose derivatives are of a
form similar to r(x) itself (powers, exponential functions, etc.), it is desirable to have a method
valid for more general ODEs (1), which we shall now develop. It is called the method of variation of
parameters and is credited to Lagrange. Here p, q, r in (1) may be variable (given functions of
x), but we assume that they are continuous on some open interval I.
y_p = −y₁∫(y₂r/W)dx + y₂∫(y₁r/W)dx,   … (2)
where
W = y₁y₂' − y₂y₁'.
CAUTION! The solution formula (2) is obtained under the assumption that the ODE is written
in standard form, with y'' as the first term as shown in (1). If it starts with f(x)y'', divide first
by f(x).
Example: Solve
y'' + y = sec x = 1/cos x.
Here y₁ = cos x, y₂ = sin x, so W = cos²x + sin²x = 1 and
y_h = c₁y₁ + c₂y₂ = c₁cos x + c₂sin x,
y_p = −cos x ∫sin x sec x dx + sin x ∫cos x sec x dx = cos x ln|cos x| + x sin x,
y = y_h + y_p.
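A sketch of formula (2) applied to y'' + y = sec x with SymPy (SymPy is assumed); it reproduces the particular solution given above and checks it against the ODE.

```python
import sympy as sp

x = sp.symbols('x')
y1, y2, r = sp.cos(x), sp.sin(x), 1 / sp.cos(x)

W = y1 * y2.diff(x) - y2 * y1.diff(x)                       # Wronskian = 1
yp = -y1 * sp.integrate(y2 * r / W, x) + y2 * sp.integrate(y1 * r / W, x)
print(sp.simplify(yp))                                      # x*sin(x) + log(cos(x))*cos(x)

# Residual of yp'' + yp - sec x should simplify to zero
print(sp.simplify(yp.diff(x, 2) + yp - 1 / sp.cos(x)))      # 0
```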
Exercises
a. y'' + 2y' + y = x e^(−x)        e. y'' − 2y' + y = eˣ sin x
b. y'' + y = sec x                 f. y'' + y = tan x
c. x²y'' − 2x²y' + x²y = eˣ        g. y'' + y = cos x + sec x
d. x²y'' + xy' − y = x² ln x       h. xy'' − 2xy' + 2y = x³cos x
A general system of n first order linear variable coefficient DEs involving the n dependent
variables x₁(t), x₂(t), …, xₙ(t) that are functions of the independent variable t (in applications t is
often the time), the variable coefficients a_ij(t) and the non-homogenous terms
f₁(t), f₂(t), …, fₙ(t) can be written as x'(t) = A(t)x(t) + b(t), with
x(t) = [x₁(t), x₂(t), …, xₙ(t)]ᵀ,  x'(t) = [x₁'(t), x₂'(t), …, xₙ'(t)]ᵀ,   (4)
A(t) = [a_ij(t)] an n × n matrix, and b(t) = [f₁(t), f₂(t), …, fₙ(t)]ᵀ.
The n × 1 vector x(t) is called the solution vector, the n × n matrix A(t) is called
the coefficient matrix, and the n × 1 vector b(t) is called the non-homogenous
term of the system. The system (3) becomes an initial value problem for the
solution x(t) when at t = t₀ the vector x(t) is required to satisfy the initial
condition
x(t₀) = [k₁, k₂, …, kₙ]ᵀ.   (5)
Example: Write the following system in matrix form:
x₁' = 2x₁ − x₂ + 4 − t²
x₂' = −x₁ + 2x₂ + 1,  with x₁(0) = 1, x₂(0) = 0.
Solution: In matrix form this is x'(t) = A(t)x(t) + b(t),
where A = [[2, −1], [−1, 2]], x(t) = [x₁, x₂]ᵀ and b(t) = [4 − t², 1]ᵀ, with initial vector x(0) = [1, 0]ᵀ.
We now restrict our discussion to homogeneous first-order systems with constant coefficients,
those of the form
x₁'(t) = a₁₁x₁(t) + a₁₂x₂(t) + a₁₃x₃(t) + … + a₁ₙxₙ(t)
x₂'(t) = a₂₁x₁(t) + a₂₂x₂(t) + a₂₃x₃(t) + … + a₂ₙxₙ(t)
x₃'(t) = a₃₁x₁(t) + a₃₂x₂(t) + a₃₃x₃(t) + … + a₃ₙxₙ(t)
⋮
xₙ'(t) = aₙ₁x₁(t) + aₙ₂x₂(t) + aₙ₃x₃(t) + … + aₙₙxₙ(t),
where the a_jk are constants.
Let A = [a_jk] be an n × n matrix and consider the equation Ax = λx, where λ is a scalar (real or
complex). A scalar λ which satisfies
Ax = λx   … (*)
for some x ≠ 0 is called an eigenvalue of A, and this non-zero vector x is called an
eigenvector of A corresponding to this λ. (*) can be written as
Ax − λx = 0   … (**)
(A − λI)x = 0.   … (***)
For these equations to have a solution x ≠ 0, the determinant of the coefficient matrix
A − λI must be zero. In the 2 × 2 case, (***) becomes
[[a₁₁ − λ, a₁₂], [a₂₁, a₂₂ − λ]][x₁, x₂]ᵀ = [0, 0]ᵀ,
i.e.
(a₁₁ − λ)x₁ + a₁₂x₂ = 0
a₂₁x₁ + (a₂₂ − λ)x₂ = 0.   …(i)
Solving
det[[a₁₁ − λ, a₁₂], [a₂₁, a₂₂ − λ]] = 0
for λ and substituting each root into (i), we obtain an eigenvector x⃗₁ corresponding to λ₁ and an
eigenvector x⃗₂ corresponding to λ₂.
Theorem: Given x⃗' = Ax⃗, if λ₁ and λ₂ are the eigenvalues of the matrix A and y⃗₁ and y⃗₂ are
corresponding eigenvectors, then the general solution of the system is
y = C₁e^(λ₁t)y⃗₁ + C₂e^(λ₂t)y⃗₂.
For λ₁ = 7, the eigenspace is 1-dimensional with a basis given by [2, 1]ᵀ.
Similarly, for λ₂ = −5, we have
[[6, 12], [3, 6]][x₁, x₂]ᵀ = 0,
which gives the eigenvector [−2, 1]ᵀ. Therefore,
y = C₁e^(7t)[2, 1]ᵀ + C₂e^(−5t)[−2, 1]ᵀ.
Similarly, another system has general solution
[x₁, x₂]ᵀ = C₁[−3, 1]ᵀe^(2t) + C₂[−1, 1]ᵀe^(4t).
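A numerical sketch of the eigenvalue method for x' = Ax, using the system x₁' = x₁ + x₂, x₂' = 4x₁ + x₂ from exercise (c) below; NumPy is assumed, and the initial condition is made up purely for illustration.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
lam, V = np.linalg.eig(A)      # eigenvalues and eigenvector columns
print(lam)                     # [ 3. -1.]

x0 = np.array([1.0, 0.0])      # hypothetical initial condition x(0)
c = np.linalg.solve(V, x0)     # constants so that x(0) = c1*v1 + c2*v2

def x(t):
    # x(t) = c1 e^(lam1 t) v1 + c2 e^(lam2 t) v2
    return V @ (c * np.exp(lam * t))

print(x(0.0))                  # recovers x0
```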
Exercises
x⃗' = [[1, −1, 4], [3, 2, −1], [2, 1, −1]]x⃗, where λ = 1, −2, 3
c. x₁' = x₁ + x₂
   x₂' = 4x₁ + x₂
d. Given x⃗' = Ax⃗, where
   A = [[4, 0, −1], [2, 2, −1], [3, 1, 0]]
Note: The definition is motivated by the Taylor series for the exponential of a real or complex
number z; namely
e^z = Σ_{n=0}^{∞} zⁿ/n!.
Theorem: The infinite sum e^A = Σ_{n=0}^{∞} Aⁿ/n! converges for every matrix A.
Theorem: If A is an n × n matrix, then the unique solution to the initial value problem
y⃗' = A·y⃗ with y⃗(0) = y⃗₀ is given by y⃗(x) = e^(Ax)·y⃗₀.
Theorem: If P is invertible, then e^(P⁻¹AP) = P⁻¹[e^A]P.
Proof:
e^(P⁻¹AP) = Σ_{n=0}^{∞} (P⁻¹AP)ⁿ/n! = P⁻¹(Σ_{n=0}^{∞} Aⁿ/n!)P = P⁻¹[e^A]P,
where the middle step uses the fact that (P⁻¹AP)ⁿ = P⁻¹(Aⁿ)P.
Example: Compute e^(Ax) for A = [[0, −2], [3, 5]], whose eigenvalues are λ = 2 and λ = 3.
For λ = 2, we solve [[0, −2], [3, 5]]·[a, b]ᵀ = 2[a, b]ᵀ, i.e.
[−2b, 3a + 5b]ᵀ = [2a, 2b]ᵀ,
and thus a = −b.
The eigenvectors are of the form [−b, b]ᵀ, so a basis for the λ = 2 eigenspace is [−1, 1]ᵀ.
For λ = 3, we need to solve
[[0, −2], [3, 5]]·[a, b]ᵀ = 3[a, b]ᵀ,
so
[−2b, 3a + 5b]ᵀ = [3a, 3b]ᵀ,
and thus a = −(2/3)b.
The eigenvectors are of the form [−(2/3)b, b]ᵀ, so a basis for the λ = 3 eigenspace is [−2, 3]ᵀ.
Since the eigenvalues are distinct, A is clearly diagonalizable: we can write
A = PDP⁻¹ for D = [[2, 0], [0, 3]] and P = [[−1, −2], [1, 3]].
We also compute P⁻¹ = [[−3, −2], [1, 1]].
Now we compute
e^(Dx) = [[e^(2x), 0], [0, e^(3x)]]
from the formula for exponentiating diagonal matrices.
Finally we have
e^(Ax) = P e^(Dx) P⁻¹ = [[−1, −2], [1, 3]][[e^(2x), 0], [0, e^(3x)]][[−3, −2], [1, 1]]
       = [[3e^(2x) − 2e^(3x), 2e^(2x) − 2e^(3x)], [−3e^(2x) + 3e^(3x), −2e^(2x) + 3e^(3x)]].
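A cross-check of the matrix exponential above with SymPy (SymPy assumed), using the matrix A = [[0, −2], [3, 5]] from the example.

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[0, -2],
               [3,  5]])

E = sp.simplify((A * x).exp())   # matrix exponential e^(Ax)
print(E)
# Matrix([[3*exp(2*x) - 2*exp(3*x),  2*exp(2*x) - 2*exp(3*x)],
#         [-3*exp(2*x) + 3*exp(3*x), -2*exp(2*x) + 3*exp(3*x)]])
```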
Unit Summary:
Linear second order differential equations with constant coefficients are the simplest of the
higher order differential equations, and they have many applications.
The most general linear second order differential equation is of the form
p(x)y'' + q(x)y' + r(x)y = g(x).
A function y = h(x) is called a solution of a (linear or non-linear) second order ODE on some
open interval I if h is defined and twice differentiable throughout the interval and is such
that the ODE becomes an identity when we replace the unknown y by h and its successive
derivatives.
Two functions y₁(x) and y₂(x) are said to be linearly independent (LI) over an interval I if
the equation
c₁y₁(x) + c₂y₂(x) = 0 for all x in I
holds only when c₁ = c₂ = 0.
The Wronskian of n functions y₁(x), y₂(x), y₃(x), …, yₙ(x), each (n − 1) times differentiable, is
W(y) = | y₁        y₂        ⋯  yₙ        |
       | y₁'       y₂'       ⋯  yₙ'       |
       | y₁''      y₂''      ⋯  yₙ''      |
       | ⋮         ⋮            ⋮         |
       | y₁⁽ⁿ⁻¹⁾   y₂⁽ⁿ⁻¹⁾   ⋯  yₙ⁽ⁿ⁻¹⁾   |,  where | | denotes the determinant.
λ 2+ aλ+b=0 (2)
is called the characteristic equation of the given differential equation.
Depending on the sign of the discriminant a 2−4 b the quadratic equation (2) may have three
kinds of roots.
A general solution of the non Homogenous ODE (1) on an open interval I is a solution of the
form y ( x) = yh ( x ) + y p ( x ) (3)
Where , y h=c1 y 1 +c 2 y 2 is a general solution of the homogeneous ODE (2) on I and y p is any
solution of (1) on I containing no arbitrary constant.
Miscellaneous Exercises
Chapter-Three
Fourier series And Integrals
Introduction
The central starting point of Fourier analysis is Fourier series. They arise naturally while
analyzing many physical phenomena such as electrical oscillations, vibrating mechanical systems,
longitudinal oscillations in crystals, etc. They are infinite series designed to represent general
periodic functions in terms of simple ones, namely cosines and sines.
Fourier series are very important to the engineer and physicist because they allow the solution of
ODEs in connection with forced oscillations and the approximation of periodic functions.
Fourier integrals, on the other hand, extend the concept of Fourier series to non-periodic functions
defined for all x, since in many practical problems we come across functions defined on −∞ < x < ∞.
Moreover, periodic functions can be represented in complex Fourier series form and non-periodic
functions in complex Fourier integral form.
In this chapter we will explore representations of periodic functions in Fourier series and in complex
Fourier series, and representations of non-periodic functions in Fourier integral and complex
Fourier integral form. Before introducing Fourier series we will see some preliminary concepts:
periodic functions, even and odd functions, orthogonal functions and trigonometric series.
Unit Objectives:
understand the definition of periodic ,even, odd functions and orthogonal functions;
understand and find Fourier series representation of periodic functions;
find Fourier integral representation of non- periodic functions
Understand and find the Complex Fourier series representation of periodic functions;
Identify and understand the idea of complex Fourier integral representation.
Overview:
In this section, we are going to consider the definition of periodic, even, odd, orthogonal
functions and trigonometric series with examples.
Section Objectives:
A function f(x) is called periodic with period p if f(x + p) = f(x) for all x. The smallest positive
period is often called the fundamental period. The graph of a periodic function has the
characteristic that it can be obtained by periodic repetition of its graph in any interval of
length p, as shown in Figure 3.1 below.
Fig. 3.1. Periodic function of period p
Familiar periodic functions are the cosine, sine, tangent, and cotangent, with fundamental periods
2π, 2π, π and π respectively. Examples of functions that are not periodic are x, x², x³, eˣ,
cosh x and ln x.
If f(x) has period p, it also has the period 2p, because f(x + 2p) = f((x + p) + p) = f(x + p) = f(x).
Example 3.1:
State the period of each of the following functions.
a) f(x) = cos 4x     b) f(x) = sin 3x
Solution: a) f(x) = cos 4x = cos(4x + 2π) = cos(4(x + π/2)) = f(x + π/2),
so f(x + π/2) = f(x), and P = π/2 is the period of f(x).
b) Similarly, f(x + 2π/3) = f(x), so P = 2π/3 is the period of f(x).
Definition (Even and odd functions): A function f(x) is called an Even function if
f(−x) = f(x) for all x in the domain of f, and an Odd function if
f(−x) = −f(x) for all x in the domain of f.
If f and g are both odd functions, or both even functions, then the product fg is even; if one
is even and the other is odd, then the product is odd. Moreover, even functions are symmetric
about the y-axis, whereas odd functions are symmetric about the origin.
Solution: To show that a function f is odd, we need to evaluate it at −x and show that this is the
same as the value of −f at x; that is, f(−x) = −f(x).
Definition (Orthogonal functions): Two functions f(x) and g(x) are said to be Orthogonal on the
interval [−l, l] if
∫_{−l}^{l} f(x)g(x)dx = 0.
Example 3.3: Show that f(x) = sin x and g(x) = cos 2x are orthogonal on [−π, π].
Solution: Using the identity sin A cos B = (1/2)[sin(A + B) + sin(A − B)],
∫_{−π}^{π} sin x cos 2x dx = (1/2)∫_{−π}^{π} [sin 3x + sin(−x)]dx
= (1/2)∫_{−π}^{π} (sin 3x − sin x)dx
= (1/2)[−cos 3x/3 + cos x]_{−π}^{π}
= (1/2)[(−cos 3π/3 + cos π) − (−cos(−3π)/3 + cos(−π))]
= (1/2)[(1/3 − 1) − (1/3 − 1)] = 0.
Hence f and g are orthogonal functions.
(Alternatively, sin x cos 2x is an odd function, so its integral over the symmetric interval [−π, π] is zero.)
For positive integers m and n, the following orthogonality relations hold on [−π, π]:
a) ∫_{−π}^{π} sin nx sin mx dx = 0 if n ≠ m, and = π if n = m;
b) ∫_{−π}^{π} cos nx cos mx dx = 0 if n ≠ m, and = π if n = m;
c) ∫_{−π}^{π} sin nx cos mx dx = 0 for all n, m.
Proof
a) We use the product-to-sum trigonometric identities. For n ≠ m,
∫_{−π}^{π} sin nx sin mx dx = (1/2)∫_{−π}^{π}[cos(n − m)x − cos(n + m)x]dx = 0.
For n = m,
∫_{−π}^{π} sin²nx dx = (1/2)∫_{−π}^{π}(1 − cos 2nx)dx = (1/2)(2π) = π.
Similarly, b) and c) can be proved in the same fashion.
Definition (Trigonometric series): Any series that can be expressed in the form
a₀ + a₁cos x + b₁sin x + a₂cos 2x + b₂sin 2x + … = a₀ + Σ_{n=1}^{∞}(aₙcos nx + bₙsin nx),
where a₀, a₁, b₁, a₂, b₂, … are constants, called the coefficients of the series, and each term has
period 2π, is called a trigonometric series.
Note: If the coefficients of a trigonometric series are such that the series converges, its sum will be a
function of period 2π.
Exercises 3.1
(a) f(x) = cos πx  (b) f(x) = sin nx  (c) g(x) = cos(2πnx/k)  (d) h(x) = sin(2πx/k)
2. If f(x) and g(x) have period p, show that h(x) = af(x) + bg(x) (a, b constants) has period p.
3. Show that f = constant is periodic with any period but has no fundamental period.
5) Show that if f is an even function and g is odd, then the product f·g is an odd function.
   (c) f(x) = sin 2x, g(x) = sin 3x
6. Show that ∫_{−a}^{a} x⁵cos x dx = 0. What would you conclude from this?
Overview:
In this section, we are going to consider the definition of Fourier series together with some
examples
Section Objectives:
Definition (Fourier series): If f is a periodic function of period 2π and integrable on the interval (−π, π),
then the Fourier series of f is defined as
f(x) = a₀ + Σ_{n=1}^{∞}(aₙcos nx + bₙsin nx),   (1)
where a₀, aₙ and bₙ are called the Fourier coefficients of f and are given by the Euler formulas
a) a₀ = (1/2π)∫_{−π}^{π} f(x)dx
b) aₙ = (1/π)∫_{−π}^{π} f(x)cos nx dx,  n = 1, 2, …
c) bₙ = (1/π)∫_{−π}^{π} f(x)sin nx dx,  n = 1, 2, …
Proof (Verification):
Integrating (1) term by term from −π to π, all the cosine and sine terms integrate to zero, so
∫_{−π}^{π} f(x)dx = 2πa₀,  hence  a₀ = (1/2π)∫_{−π}^{π} f(x)dx.
Multiplying (1) by cos mx and integrating from −π to π, the orthogonality relations leave only the term with n = m:
∫_{−π}^{π} f(x)cos nx dx = aₙπ,  hence  aₙ = (1/π)∫_{−π}^{π} f(x)cos nx dx.
Proof for bₙ:
Multiplying both sides of (1) by sin mx for any fixed positive integer m and integrating on
both sides from −π to π, we get, by the orthogonality relations (with m = n a positive integer),
∫_{−π}^{π} f(x)sin mx dx = 0 + bₙπ = bₙπ,
⇒ bₙ = (1/π)∫_{−π}^{π} f(x)sin nx dx,  as required.
Hence bₙ = (1/π)∫_{−π}^{π} f(x)sin nx dx.
Example 3.4: find the Fourier series representation of the periodic function
f(x) = −k if −π < x < 0,  k if 0 < x < π,  and f(x + 2π) = f(x).
Solution: To find the Fourier series representation of f we first find the Fourier coefficients
a₀, aₙ and bₙ of f using Euler's formulas.
a₀ = (1/2π)∫_{−π}^{π} f(x)dx = (1/2π)[∫_{−π}^{0}(−k)dx + ∫_{0}^{π} k dx] = (1/2π)[−kπ + kπ] = 0.
Hence a₀ = 0.
Similarly,
aₙ = (1/π)∫_{−π}^{π} f(x)cos nx dx = (1/π)[∫_{−π}^{0}(−k)cos nx dx + ∫_{0}^{π} k cos nx dx]
= (−k/nπ)[sin nx]_{−π}^{0} + (k/nπ)[sin nx]_{0}^{π}
= (−k/nπ)[sin 0 − sin(−nπ)] + (k/nπ)(sin nπ − sin 0) = 0.
Hence aₙ = 0.
Moreover,
bₙ = (1/π)∫_{−π}^{π} f(x)sin nx dx = (1/π)[∫_{−π}^{0}(−k)sin nx dx + ∫_{0}^{π} k sin nx dx]
= (k/nπ)[cos nx]_{−π}^{0} + (−k/nπ)[cos nx]_{0}^{π}
= (k/nπ)(1 − cos nπ) + (k/nπ)(1 − cos nπ)
= (2k/nπ)(1 − cos nπ) = (2k/nπ)(1 − (−1)ⁿ),  because cos nπ = (−1)ⁿ.
Here (−1)ⁿ = −1 for odd n and 1 for even n, so
(2k/nπ)(1 − (−1)ⁿ) = 4k/nπ for odd n, and 0 for even n.
Hence bₙ = 4k/nπ for odd n and bₙ = 0 for even n.
From these, the Fourier series of f is
f(x) = a₀ + Σ_{n=1}^{∞}(aₙcos nx + bₙsin nx) = Σ_{n odd}(4k/nπ)sin nx
= (4k/π)(sin x + (1/3)sin 3x + (1/5)sin 5x + …).
Hence f(x) = (4k/π)(sin x + (1/3)sin 3x + (1/5)sin 5x + …).
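A quick numerical look at the partial sums of the series just obtained; NumPy is assumed, and k = 1 and the truncation level are choices made only for illustration.

```python
import numpy as np

k = 1.0
x = np.linspace(-np.pi, np.pi, 9)

def partial_sum(x, terms=200):
    n = np.arange(1, 2 * terms, 2)                   # odd n = 1, 3, 5, ...
    return (4 * k / np.pi) * np.sum(np.sin(np.outer(x, n)) / n, axis=1)

print(np.round(partial_sum(x), 3))
# values approach -k on (-pi, 0), +k on (0, pi), and 0 at the jump points x = 0, +/-pi
```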
Example 3.5: Find the Fourier series of the periodic function f(x) = 0 for −π < x < 0, f(x) = 1 for
0 < x < π, with f(x + 2π) = f(x).
Solution: To find a Fourier series representation of this periodic function we first find the Fourier
coefficients a₀, aₙ and bₙ.
By Euler's formulas,
a₀ = (1/2π)∫_{−π}^{π} f(x)dx = (1/2π)∫_{0}^{π} 1 dx = (1/2π)[x]₀^π = (1/2π)(π − 0) = 1/2.
Hence a₀ = 1/2.
Similarly,
aₙ = (1/π)∫_{−π}^{π} f(x)cos nx dx = (1/π)∫_{0}^{π} cos nx dx = (1/nπ)[sin nx]₀^π = (1/nπ)(sin nπ − sin 0) = 0.
Hence aₙ = 0.
Now,
bₙ = (1/π)∫_{−π}^{π} f(x)sin nx dx = (1/π)∫_{0}^{π} sin nx dx = (−1/nπ)[cos nx]₀^π = (1/nπ)(1 − cos nπ)
⇒ bₙ = (1/nπ)(1 − (−1)ⁿ) = 2/nπ for odd n, and 0 for even n.
Hence bₙ = 2/nπ for odd n and bₙ = 0 for even n.
From these, the Fourier series of f is
f(x) = a₀ + Σ_{n=1}^{∞}(aₙcos nx + bₙsin nx) = 1/2 + (2/π)Σ_{n odd}(1/n)sin nx.
Hence f(x) = 1/2 + (2/π)(sin x + (1/3)sin 3x + (1/5)sin 5x + …).
Example 3.6: Obtain the Fourier series of f ( x )=x 2 over the interval (−π , π ) where
f ( x+ 2 π )=f ( x)
Solution: We first find the Fourier coefficients.
a₀ = (1/2π)∫_{−π}^{π} f(x)dx = (1/2π)·2∫_{0}^{π} x²dx = (1/π)∫_{0}^{π} x²dx
(because f is even, so ∫_{−π}^{π} f(x)dx = 2∫_{0}^{π} f(x)dx)
= (1/π)[x³/3]₀^π = (1/3π)(π³ − 0) = π²/3.
Hence a₀ = π²/3.
Similarly,
aₙ = (1/π)∫_{−π}^{π} f(x)cos nx dx = (1/π)∫_{−π}^{π} x²cos nx dx = (2/π)∫_{0}^{π} x²cos nx dx  (since f is even).
Using integration by parts we have aₙ = 4(−1)ⁿ/n².
Now,
bₙ = (1/π)∫_{−π}^{π} f(x)sin nx dx = (1/π)∫_{−π}^{π} x²sin nx dx = 0  (since x²sin nx is an odd function).
Hence bₙ = 0, and the Fourier series is
f(x) = π²/3 + Σ_{n=1}^{∞} (4(−1)ⁿ/n²)cos nx.
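A short SymPy check of the coefficient aₙ = 4(−1)ⁿ/n² computed above; SymPy is assumed to be available.

```python
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', integer=True, positive=True)

a_n = (2 / sp.pi) * sp.integrate(x**2 * sp.cos(n * x), (x, 0, sp.pi))
print(sp.simplify(a_n))    # 4*(-1)**n/n**2
```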
So far we have considered the Fourier series expansion of functions with period 2π. In many
applications, we need to find the Fourier series expansion of periodic functions with arbitrary
period, say 2l. The transition from period P = 2l to period P = 2π is quite simple and involves
only a proportional change of scale.
Consider the periodic function f(x) with period 2l in (−l, l). To change the problem to period 2π,
set v = πx/l, which gives x = lv/π; thus x = ±l corresponds to v = ±π, and the function f(x) of
period 2l in (−l, l) may be regarded as a function g(v) of period 2π in (−π, π).
Hence,
g(v) = a₀ + Σ_{n=1}^{∞}(aₙcos nv + bₙsin nv),   (1)
where a₀, aₙ and bₙ are the Fourier coefficients of g, given by the Euler formulas
a) a₀ = (1/2π)∫_{−π}^{π} g(v)dv
b) aₙ = (1/π)∫_{−π}^{π} g(v)cos nv dv,  n = 1, 2, …
c) bₙ = (1/π)∫_{−π}^{π} g(v)sin nv dv,  n = 1, 2, …
Making the inverse substitution v = πx/l and g(v) = f(x) in the above, we obtain the Fourier
series expansion
f(x) = a₀ + Σ_{n=1}^{∞}(aₙcos(nπx/l) + bₙsin(nπx/l)),   (2)
with coefficients
a) a₀ = (1/2l)∫_{−l}^{l} f(x)dx
b) aₙ = (1/l)∫_{−l}^{l} f(x)cos(nπx/l)dx,  n = 1, 2, …
c) bₙ = (1/l)∫_{−l}^{l} f(x)sin(nπx/l)dx,  n = 1, 2, …
Note: we may replace the interval of integration by any interval of length P = 2l, say by the
interval (0, 2l).
Example 3.7: Find the Fourier series for the function f(x) = x if −1 ≤ x ≤ 0, and f(x) = x + 2 if
0 ≤ x ≤ 1, where f(x + 2) = f(x).   (Figure 3.3: graph of f)
Solution: Here 2l = 2, so l = 1.
a₀ = (1/2)∫_{−1}^{1} f(x)dx = (1/2)[∫_{−1}^{0} x dx + ∫_{0}^{1}(x + 2)dx]
= (1/2)[x²/2]_{−1}^{0} + (1/2)[x²/2 + 2x]_{0}^{1}
= (1/2)(0 − 1/2) + (1/2)(1/2 + 2 − 0) = −1/4 + 1/4 + 1 = 1.
Hence a₀ = 1.
aₙ = ∫_{−1}^{1} f(x)cos nπx dx = ∫_{−1}^{0} x cos nπx dx + ∫_{0}^{1}(x + 2)cos nπx dx
= ∫_{−1}^{1} x cos nπx dx + 2∫_{0}^{1} cos nπx dx = 0 + 0 = 0,
since x cos nπx is an odd function (so its integral over [−1, 1] vanishes) and
∫_{0}^{1}cos nπx dx = [sin nπx/(nπ)]₀¹ = 0.
Hence aₙ = 0.
bₙ = ∫_{−1}^{1} f(x)sin nπx dx = ∫_{−1}^{1} x sin nπx dx + 2∫_{0}^{1} sin nπx dx
= 2∫_{0}^{1} x sin nπx dx + 2∫_{0}^{1} sin nπx dx
= 2(−cos nπ/(nπ)) + 2(1 − cos nπ)/(nπ)
= (2 − 4cos nπ)/(nπ).
Thus bₙ = 6/(nπ) for odd n and bₙ = −2/(nπ) for even n.
Hence
f(x) = a₀ + Σ_{n=1}^{∞} bₙsin nπx = 1 + Σ_{n=1}^{∞} bₙsin nπx
⟹ f(x) = 1 + (2/π)[3 sin πx − (1/2)sin 2πx + sin 3πx − (1/4)sin 4πx + (3/5)sin 5πx − …].
Further, for x = 1/2, f(x) = x + 2 = 1/2 + 2 = 5/2.
Setting x = 1/2 on both sides of the series above, we obtain
5/2 = 1 + (2/π)[3 sin(π/2) − (1/2)sin π + sin(3π/2) − (1/4)sin 2π + (3/5)sin(5π/2) − (1/6)sin 3π + (3/7)sin(7π/2) − …]
= 1 + (2/π)[3 − 1 + 3/5 − 3/7 + …],
or
5/2 − 1 = (2/π)[3 − 1 + 3/5 − 3/7 + …]
⟹ 3π/4 = 3 − 1 + 3/5 − 3/7 + …
This gives π/4 = 1 − 1/3 + 1/5 − 1/7 + ….
Example 3.8: Obtain the Fourier series for the periodic function f(x) = e^(−x) on (−l, l), with
f(x + 2l) = f(x).
Solution:
a₀ = (1/2l)∫_{−l}^{l} e^(−x)dx = (1/2l)(e^l − e^(−l))  ⟹  a₀ = sinh l / l.
aₙ = (1/l)∫_{−l}^{l} f(x)cos(nπx/l)dx = (1/l)∫_{−l}^{l} e^(−x)cos(nπx/l)dx.
Using integration by parts (twice) we have
aₙ = (1/l)·(2l²(−1)ⁿ sinh l/(l² + n²π²))  ⟹  aₙ = 2l(−1)ⁿ sinh l/(l² + n²π²).
Similarly,
bₙ = (1/l)∫_{−l}^{l} f(x)sin(nπx/l)dx = (1/l)∫_{−l}^{l} e^(−x)sin(nπx/l)dx
⟹ bₙ = 2nπ(−1)ⁿ sinh l/(l² + n²π²).
Hence the Fourier series of f(x) on (−l, l) is
f(x) = a₀ + Σ_{n=1}^{∞}(aₙcos(nπx/l) + bₙsin(nπx/l))
⟹ e^(−x) = sinh l/l + 2 sinh l Σ_{n=1}^{∞} (−1)ⁿ[l cos(nπx/l) + nπ sin(nπx/l)]/(l² + n²π²)
= sinh l [1/l − 2l((1/(l² + π²))cos(πx/l) − (1/(l² + 2²π²))cos(2πx/l) + (1/(l² + 3²π²))cos(3πx/l) − …)
− 2π((1/(l² + π²))sin(πx/l) − (2/(l² + 2²π²))sin(2πx/l) + (3/(l² + 3²π²))sin(3πx/l) − …)].
Fourier integrals
Fourier integrals extend the concept of Fourier series to non-periodic functions defined for all x.
A non-periodic function which cannot be represented as a Fourier series over the entire real line
may be represented in an integral form. In many practical problems we come across functions
defined on −∞ < x < ∞ that are non-periodic, e.g. f(x) = e^(−x²).   (Figure 3.4: f(x) = e^(−x²))
We cannot expand such functions in Fourier series since they are not periodic; however we can
consider such a function to be periodic, but with infinite period. The Fourier integral representation of f(x) is
f(x) = ∫_{0}^{∞}[A(ω)cos ωx + B(ω)sin ωx]dω,
where
A(ω) = (1/π)∫_{−∞}^{∞} f(u)cos ωu du  and  B(ω) = (1/π)∫_{−∞}^{∞} f(u)sin ωu du
are the Fourier integral coefficients.
The sufficient conditions under which the Fourier integral representation is valid are:
1. f(x) is piecewise continuous on every finite interval [−l, l];
2. f(x) has a left-hand and a right-hand derivative at every point;
3. f(x) is absolutely integrable on the real line, i.e. ∫_{−∞}^{∞}|f(x)|dx < ∞.
Theorem (Fourier integral theorem): If f(x) satisfies the conditions 1 to 3 stated above, then the
Fourier integral of f converges to f(x) at every point x at which f is continuous, and to the mean
value [f(x + 0) + f(x − 0)]/2 at every point x at which f is discontinuous, where f(x+) and f(x−) are the
right- and left-hand limits respectively.
Example 3.9: Find the Fourier integral representation of f(x) = 1 for |x| < 1 and f(x) = 0 for |x| > 1.
Solution: f(x) is piecewise smooth and absolutely integrable over (−∞, ∞). Thus f(x) has a
Fourier integral representation.
A(ω) = (1/π)∫_{−∞}^{∞} f(u)cos ωu du = (1/π)∫_{−1}^{1} cos ωu du = (1/πω)[sin ωu]_{−1}^{1} = 2 sin ω/(πω),
and
B(ω) = (1/π)∫_{−∞}^{∞} f(u)sin ωu du = (1/π)∫_{−1}^{1} sin ωu du = 0  (because sin ωu is an odd function).
Thus
f(x) = ∫_{0}^{∞} A(ω)cos ωx dω = (2/π)∫_{0}^{∞} (sin ω cos ωx/ω)dω,
and by the Fourier integral theorem
∫_{0}^{∞} (cos ωx sin ω/ω)dω = π/2 for −1 < x < 1,  π/4 for x = ±1,  0 for |x| > 1.
Setting x = 0, we have ∫_{0}^{∞} (sin ω/ω)dω = π/2.
Example 3.10: Find the Fourier integral representation of f(x) = e^(−x) for x > 0 and f(x) = 0 for x ≤ 0,
and find the value of the resulting integral when (a) x < 0, (b) x = 0, (c) x > 0. Also derive that
∫_{0}^{∞} dω/(1 + ω²) = π/2.
Solution: The given function f(x) is piecewise smooth and is absolutely integrable over (−∞, ∞),
since ∫_{−∞}^{∞}|f(x)|dx = ∫_{0}^{∞} e^(−x)dx = lim_{b→∞}(1 − e^(−b)) = 1 − 0 = 1.
Integrating by parts twice gives
∫_{0}^{∞} e^(−u)cos ωu du = 1/(1 + ω²)  and  ∫_{0}^{∞} e^(−u)sin ωu du = ω/(1 + ω²),
so that
A(ω) = (1/π)∫_{−∞}^{∞} f(u)cos ωu du = (1/π)∫_{0}^{∞} e^(−u)cos ωu du = 1/(π(1 + ω²)),
and similarly
B(ω) = (1/π)∫_{−∞}^{∞} f(u)sin ωu du = (1/π)∫_{0}^{∞} e^(−u)sin ωu du = ω/(π(1 + ω²)).
Thus the Fourier integral representation is
f(x) = (1/π)∫_{0}^{∞} (cos ωx + ω sin ωx)/(1 + ω²) dω,
and by the Fourier integral theorem
∫_{0}^{∞} (cos ωx + ω sin ωx)/(1 + ω²) dω = 0 for x < 0,  π/2 for x = 0,  πe^(−x) for x > 0.
In particular, setting x = 0 gives ∫_{0}^{∞} dω/(1 + ω²) = π/2.
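Two quick numerical checks of the integrals used in this example; SciPy and NumPy are assumed to be available.

```python
import numpy as np
from scipy.integrate import quad

# 1) integral of e^(-u) cos(wu) over [0, inf) equals 1/(1+w^2); checked at w = 3
w = 3.0
val, _ = quad(lambda u: np.exp(-u) * np.cos(w * u), 0, np.inf)
print(val, 1 / (1 + w**2))          # both approximately 0.1

# 2) integral of 1/(1+w^2) over [0, inf) equals pi/2 (the x = 0 case of the result)
val, _ = quad(lambda w: 1 / (1 + w**2), 0, np.inf)
print(val, np.pi / 2)
```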
Exercises 3.2
1.
(a) f(x) = (1/2)(π − x) in −π < x < π    (b) f(x) = x² in −π < x < π
(d) f(x) = 1 − x² over the interval (−1, 1)
(e) f(x) = 0 for −2 < x ≤ −1, k for −1 < x < 1, 0 for 1 ≤ x < 2, with f(x) = f(x + 4)
2. In each of the following derive the Fourier integral representation. At which points, if any, does the
Fourier integral fail to converge to f(x)? To what value does the integral converge at those points?
(a) f(x) = 1 for 0 ≤ x ≤ 2, 0 otherwise    (b) f(x) = cos x for |x| ≤ π/2, 0 for |x| > π/2
(c) f(x) = bx/a for |x| ≤ a, 0 for |x| > a (a, b > 0)    (d) f(x) = 0 for x < 0, k for 0 ≤ x ≤ π, 0 for x > π
In this section, we are going to introduce the complex Fourier series and the complex Fourier
integral representation of real functions together with some examples.
Section Objectives:
Definition (Complex Fourier series): Let f(x) be a real periodic function of period 2l over the
interval (−l, l). Then the complex Fourier series representation of f is defined as
f(x) = lim_{k→∞} Σ_{n=−k}^{k} cₙ e^(inπx/l)  for −l < x < l,
where cₙ = (1/2l)∫_{−l}^{l} f(x)e^(−inπx/l)dx, n = 0, ±1, ±2, …, are the complex Fourier coefficients.
Note: In a complex Fourier series, at points of continuity of f(x) the series converges to f(x), while
at a point of discontinuity it converges to the midpoint of the jump.
Example 3.11: Find the complex Fourier series representation of the function
f(x) = 0 for 0 < x ≤ 1, and 1 for 1 < x < 4, where f(x) = f(x + 4).
Solution: The function f(x) is periodic with period 4, defined on the interval (0, 4), with 2l = 4,
l = 2. Thus the complex Fourier coefficient cₙ is given by
cₙ = (1/4)∫_{0}^{4} f(x)e^(−inπx/2)dx = (1/4)∫_{1}^{4} e^(−inπx/2)dx.
For n = 0, we get c₀ = (1/4)∫_{1}^{4} dx = 3/4.
For n ≠ 0,
cₙ = (1/4)[e^(−inπx/2)·(−2/(inπ))]_{1}^{4} = (i/(2πn))[e^(−2inπ) − e^(−inπ/2)] = (i/(2πn))[1 − e^(−inπ/2)].
Hence
f(x) = 3/4 + lim_{k→∞} Σ_{n=−k, n≠0}^{k} (i/(2πn))(1 − e^(−inπ/2)) e^(inπx/2).
Example 3.12: Find the complex Fourier series representation of the function f(x) = e^(−x),
−π < x < π, with f(x + 2π) = f(x).
Solution: The function f(x) is periodic with period 2π, defined on the interval (−π, π). Here
l = π. Thus the complex Fourier coefficients are
cₙ = (1/2π)∫_{−π}^{π} e^(−x)e^(−inx)dx = (1/2π)∫_{−π}^{π} e^(−(1 + in)x)dx = (−1/(2π(1 + in)))[e^(−(1 + in)x)]_{−π}^{π}
= (−1/(2π(1 + in)))[e^(−(1 + in)π) − e^((1 + in)π)] = (−1/(2π(1 + in)))(e^(−π)e^(−inπ) − e^(π)e^(inπ))
= (−1/(2π(1 + in)))[e^(−π)(cos nπ − i sin nπ) − e^(π)(cos nπ + i sin nπ)]
= ((1 − in)/(2π(1 + n²)))[(e^(π) − e^(−π))cos nπ] = (−1)ⁿ(1 − in)sinh π/(π(1 + n²)).
Hence
f(x) = (sinh π/π) lim_{k→∞} Σ_{n=−k}^{k} (−1)ⁿ((1 − in)/(1 + n²)) e^(inx).
The complex Fourier integral representation of f is
f(x) = ∫_{−∞}^{∞} c(ω)e^(iωx)dω,
where c(ω) = (1/2π)∫_{−∞}^{∞} f(u)e^(−iωu)du is the complex Fourier integral coefficient.
Example 3.13: If f(x) = e^(−a|x|) for all real x, with a > 0 a positive constant, find the
complex Fourier integral representation of f.
Solution: The function is e^(−ax) for x > 0 and e^(ax) for x < 0, since |x| = x for x > 0 and −x for x < 0,
a being a constant.
Obviously, f(x) is piecewise smooth and is absolutely integrable over the interval (−∞, ∞).
c(ω) = (1/2π)∫_{−∞}^{∞} f(u)e^(−iωu)du = (1/2π)[∫_{−∞}^{0} e^(au)e^(−iωu)du + ∫_{0}^{∞} e^(−au)e^(−iωu)du]
= (1/2π)[∫_{−∞}^{0} e^((a − iω)u)du + ∫_{0}^{∞} e^(−(a + iω)u)du]
= (1/2π)[[e^((a − iω)u)/(a − iω)]_{−∞}^{0} + [e^(−(a + iω)u)/(−(a + iω))]_{0}^{∞}].
As u → 0, e^((a − iω)u)/(a − iω) → 1/(a − iω), and as u → −∞ it tends to 0.
Similarly, as u → ∞, e^(−(a + iω)u)/(−(a + iω)) → 0, and as u → 0 it tends to −1/(a + iω).
Hence
c(ω) = (1/2π)[1/(a + iω) + 1/(a − iω)] = a/(π(a² + ω²))
⟹ c(ω) = a/(π(a² + ω²)).
Hence the complex Fourier integral representation is
e^(−a|x|) = ∫_{−∞}^{∞} c(ω)e^(iωx)dω = (a/π)∫_{−∞}^{∞} e^(iωx)/(a² + ω²)dω.
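Since c(ω) above is even in ω, the sine part of e^(iωx) integrates to zero and the representation reduces to e^(−a|x|) = (2a/π)∫_{0}^{∞} cos(ωx)/(a² + ω²)dω. A quick numerical check of this real form follows; SciPy is assumed, and the infinite range is truncated at ω = 500 for the quadrature.

```python
import numpy as np
from scipy.integrate import quad

a, x = 2.0, 1.0
val, _ = quad(lambda w: np.cos(w * x) / (a**2 + w**2), 0, 500, limit=500)
print((2 * a / np.pi) * val, np.exp(-a * abs(x)))   # both approximately 0.1353
```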
Example 3.14: Find the complex Fourier integral of f(x) = sin πx for −5 ≤ x ≤ 5 and f(x) = 0 for |x| > 5.
Solution: Clearly f is piecewise continuous and absolutely integrable on (−∞, ∞) (i.e. over the real
line).
So, to find the complex Fourier integral, we first find the complex Fourier integral coefficient
c(ω); that is,
c(ω) = (1/2π)∫_{−∞}^{∞} f(u)e^(−iωu)du = (1/2π)∫_{−5}^{5} e^(−iωu)sin πu du.
Integrating by parts twice (first with dv = sin πu du, v = −cos πu/π, then with dv = cos πu du,
v = sin πu/π) gives
∫_{−5}^{5} e^(−iωu)sin πu du = [−e^(−iωu)cos πu/π − (iω/π²)e^(−iωu)sin πu]_{−5}^{5} + (ω²/π²)∫_{−5}^{5} e^(−iωu)sin πu du,
so that
(1 − ω²/π²)∫_{−5}^{5} e^(−iωu)sin πu du = [−e^(−iωu)cos πu/π − (iω/π²)e^(−iωu)sin πu]_{−5}^{5},
i.e.
∫_{−5}^{5} e^(−iωu)sin πu du = (−1/(ω² − π²))[−πe^(−iωu)cos πu − iωe^(−iωu)sin πu]_{−5}^{5}.
Since cos(±5π) = −1 and sin(±5π) = 0, this becomes
∫_{−5}^{5} e^(−iωu)sin πu du = (−π/(ω² − π²))[e^(−5iω) − e^(5iω)]
= (−π/(ω² − π²))[cos 5ω − i sin 5ω − cos 5ω − i sin 5ω]
= (π/(ω² − π²))(2i sin 5ω) = 2πi sin 5ω/(ω² − π²).
Thus
c(ω) = (1/2π)∫_{−∞}^{∞} f(u)e^(−iωu)du = (1/2π)·2πi sin 5ω/(ω² − π²) = i sin 5ω/(ω² − π²).
Hence
f(x) = ∫_{−∞}^{∞} c(ω)e^(iωx)dω = i∫_{−∞}^{∞} (sin 5ω/(ω² − π²))e^(iωx)dω.
Exercises 3.3
1. In each of the following find the complex Fourier series representation of f(x) on the given interval.
2. In each of the following problems, find the complex Fourier integral of the function and determine what this
integral converges to.
(a) f(x) = x e^(−|x|), for all real x
(b) f(x) = cos πx for |x| ≤ 2, and 0 for |x| > 2
Unit Summary:
Fourier series are infinite series designed to represent general periodic functions in terms
of simple ones, namely, cosines and sines.
A function f(x) is called a periodic function of period p if f(x) is defined for all x and
f(x + p) = f(x); the smallest positive period p is called the fundamental period.
If f(x) has period p, it also has the period 2p, and in general, for any integer n ≥ 1,
f(x + np) = f(x) for all x.
In Fourier series representations, even and odd functions are very important in finding the
Fourier coefficientsa 0,a n and b n
A function f (x)is called an Even function if f (−x )=f ( x ) for all x in the domain of f
A function f(x) is called an odd function if f(−x) = −f(x) for all x in the domain of f.
If f ( x) is a periodic function of period P=2 l and integrable over the interval (−l .l).
Then, the Fourier series expansion of f is;
∞
nπx nπx
f (x)=ao + ∑ (a n cos + bn sin )
n=1 l l
With coefficients,
l
1
a) a o= ∫ f ( x ) d x
2l −l
l
1 nπx
b) a n= ∫ f ( x ) cos dx n=1,2 ,…
l −l l
l
1 nπx
c) b n= ∫ f ( x ) sin dx n=1,2 ,…
l −l l
Fourier series are powerful tools for problems involving functions that are periodic or are
of interest on a finite interval only.
Fourier integrals extend the concept of Fourier series to non-periodic functions defined
for all x.
The Fourier integral representation of f ( x) can be defined as;
∞
f(x) = ∫_{0}^{∞}[A(ω)cos ωx + B(ω)sin ωx]dω,
∞ ∞
1 1
Where A ( ω )= ∫ f (u ) cos ωudu∧B ( ω )= ∫ f ( u ) sin ωudu
π −∞ π −∞
The sufficient conditions under which the Fourier integral representation is valid are:
1. f(x) is piecewise continuous on every finite interval [−l, l];
2. f(x) has a left-hand and a right-hand derivative at every point;
3. f(x) is absolutely integrable on the real line, i.e. ∫_{−∞}^{∞}|f(x)|dx < ∞.
If f(x) be a real periodic function of period 2 l over the interval (−l .l).then the complex
Fourier series representation of f is defined as.
k inπx
f ( x )= lim ∑
k⟶ ∞ n=−k
cn e l
for−l< x <l
l −inπx
1
Where c n= ∫ f ( x ) e l
dx , n=0 , ±1 , , ±2 are the complex Fourier coefficients
2 l −l
∞
1
Where, c ( ω ) = ∫ f ( u ) e−iωu du is the complex Fourier integral coefficient.
2 π −∞
Miscellaneous Exercises
(c) Sums and products of even functions (d) Sums and products of odd functions
(e) Absolute values of odd functions (f) Products of an odd and an even function
(c) f(x) = e^(−ax) in the interval (−π, π), and deduce that csch π = Σₙ₌₂^∞ (−1)ⁿ/(n² + 1)
(d) f(x) = x − x², −π < x < π, and deduce that 1/1² − 1/2² + 1/3² − 1/4² + … = π²/12
(g) f(x) = { 1 + 4x/3, −3/2 < x ≤ 0 ; 1 − 4x/3, 0 ≤ x < 3/2 },  f(x + 3) = f(x)
(c) f(x) = { π, −π < x < 0 ; π − x, 0 ≤ x ≤ π }
(d) f(x) = { x, 0 < x < 1 ; 0, 1 < x < 2 }
5. Let f be a periodic function of period 2π such that f(x) = π² − x² for x ∈ (−π, π); then show that
π² − x² = 2π²/3 + Σₙ₌₁^∞ (−4/n²)(−1)ⁿ cos nx
(a) 3x = Σₙ₌₁^∞ (−6/n)(−1)ⁿ sin nx   (b) x³ = Σₙ₌₁^∞ 2(−1)ⁿ ( 6/n³ − π²/n ) sin nx
(a) ∫₀^∞ ( (1 − cos πω)/ω ) sin(xω) dω = { π/2, 0 < x ≤ π ; 0, x > π }
(b) ∫₀^∞ ( sin πω sin ωx /(1 − ω²) ) dω = { (π/2) sin x, 0 ≤ x ≤ π ; 0, x > π }
8. If ∫₀^∞ f(x) sin ax dx = { 1, 0 < a < 1 ; 0, a > 1 }, then find f(x).
References
Allan Pinkus, Fourier Series and Integral Transforms, Cambridge University Press, 1997.
Alan Jeffrey, Advanced engineering mathematics, RR Donnelley & Sons, Inc, 2002
Abramowitz, M. and I. A. Stegun (eds.), Handbook of Mathematical Functions. 10th
Courant, R., Differential and Integral Calculus. 2 vols. Hoboken, NJ: Wiley, 1988.
Churchill, R. V., Operational Mathematics. 3rd ed.New York: McGraw-Hill, 1972.
Erwin Kreyszig, Advanced Engineering Mathematics, 10th ed., Wiley, 2000.
G. B. Folland, Fourier Analysis and Its Applications, Wadsworth and Brooks/Cole, Pacific Grove, CA, 1992.
Graham, R. L. et al., Concrete Mathematics. 2nd ed. Reading, MA: Addison-Wesley, 1994.
Hanna, J. R. and J. H. Rowland, Fourier Series, Transforms, and Boundary Value Problems.
2nd ed.New York: Wiley, 2008.
Jerri, A. J., The Gibbs Phenomenon in Fourier Analysis, Splines, and Wavelet
Approximations. Boston: Kluwer, 1998.
Szegö, G., Orthogonal Polynomials. 4th ed. Reprinted. New York: American
Mathematical Society, 2003.
Tolstov, G. P., Fourier Series. New York: Dover, 1976
Thomas, G. et al., Thomas’ Calculus, Early Transcendental Update. 10th ed. Reading, MA:
Addison-Wesley, 2003.
W. Brown and R. V. Churchill, Fourier Series and Boundary Value Problems, 5th ed.,McGraw-Hill,
New York, 1993
Zygmund, Trigonometric Series, 2nd ed. (Volumes I and II combined),Cambridge University
Press, Cambridge, UK, 1988
Unit-four
Fourier and Laplace Transformation
Introduction
An integral transform is a transformation that produces from a given function a new function,
that depends on a different variable and appears in the form of an integral. These transformations
are mainly employed as a tool to solve certain initial and boundary value problems in ordinary
and partial differential equations arising in many areas of science and engineering. Fourier
transforms are integral transforms which are of vital importance from the applications view point
in solving initial and boundary value problems.
In this chapter we will discuss three transforms: the Fourier cosine transform, the Fourier sine transform, and the Fourier transform; the first two are real and the last one is complex. These transforms are obtained from the corresponding Fourier integrals. We will also see Laplace transforms, the inverse Laplace transform, differentiation and integration of Laplace transforms, convolution, and integral equations.
Unit Objectives:
Section Objectives:
The Fourier cosine and sine transforms can be considered as special cases of the Fourier transform of f(x) when f(x) is an even or odd function over the real axis.
Definition: If f(x) is piecewise continuous on each finite interval [0, l] and absolutely integrable over the positive real axis so that its Fourier transform F(w) exists, then the Fourier cosine and Fourier sine transforms of f(x), denoted by F_c(w) (or f̂_c) and F_s(w) (or f̂_s) respectively, are defined as
F_c(w) = √(2/π) ∫₀^∞ f(x) cos wx dx,   F_s(w) = √(2/π) ∫₀^∞ f(x) sin wx dx.
Example 4.1: find the Fourier cosine and sine transforms of f(x) = { 1, 0 ≤ x ≤ a ; 0, x > a }.
Solution: by definition F_c(w) = √(2/π) ∫₀^∞ f(x) cos wx dx
= √(2/π) ∫₀^a 1 · cos wx dx = √(2/π) [ sin wx/w ]₀^a = √(2/π) · sin aw/w.
Hence F_c(w) = √(2/π) · sin aw/w.
Similarly, F_s(w) = √(2/π) ∫₀^∞ f(x) sin wx dx
= √(2/π) ∫₀^a sin wx dx = √(2/π) [ −cos wx/w ]₀^a = √(2/π) · ( 1 − cos aw )/w.
Hence F_s(w) = √(2/π) · ( 1 − cos aw )/w.
Example 4.2: find the Fourier cosine and sine transforms of f(x) = { cos x, 0 ≤ x ≤ a ; 0, x > a }.
Solution: by definition F_c(w) = √(2/π) ∫₀^∞ f(x) cos wx dx = √(2/π) ∫₀^a cos x cos wx dx
= √(2/π) ∫₀^a ½ ( cos(1−w)x + cos(1+w)x ) dx      (since cos x cos y = ½( cos(x−y) + cos(x+y) ))
= ½ √(2/π) [ sin(1−w)x/(1−w) + sin(1+w)x/(1+w) ]₀^a
= ½ √(2/π) ( sin(1−w)a/(1−w) + sin(1+w)a/(1+w) )
= (1/√(2π)) ( sin(1−w)a/(1−w) + sin(1+w)a/(1+w) ).
Hence F_c(w) = (1/√(2π)) ( sin(1−w)a/(1−w) + sin(1+w)a/(1+w) ).
Similarly, F_s(w) = √(2/π) ∫₀^∞ f(x) sin wx dx = √(2/π) ∫₀^a cos x sin wx dx
= √(2/π) ∫₀^a ½ ( sin(1+w)x − sin(1−w)x ) dx      (since sin wx cos x = ½( sin(w+1)x + sin(w−1)x ))
= ½ √(2/π) [ −cos(1+w)x/(1+w) + cos(1−w)x/(1−w) ]₀^a
= ½ √(2/π) ( (1 − cos(1+w)a)/(1+w) − (1 − cos(1−w)a)/(1−w) )
= ½ √(2/π) ( 1/(1+w) − 1/(1−w) ) + (1/√(2π)) ( cos(1−w)a/(1−w) − cos(1+w)a/(1+w) )
= √(2/π) · w/(w² − 1) + (1/√(2π)) ( cos(1−w)a/(1−w) − cos(1+w)a/(1+w) ).
Hence F_s(w) = √(2/π) · w/(w² − 1) + (1/√(2π)) ( cos(1−w)a/(1−w) − cos(1+w)a/(1+w) ).
Like the Fourier transform, the Fourier cosine and sine transforms also satisfy certain properties which are useful from the applications point of view.
Property 1 (Linearity): for any two functions f(x) and g(x) whose Fourier cosine and sine transforms exist and for any constants a and b,
(a) F_c[ a f(x) + b g(x) ] = a F_c[ f(x) ] + b F_c[ g(x) ], and
(b) F_s[ a f(x) + b g(x) ] = a F_s[ f(x) ] + b F_s[ g(x) ].
Proof:
(a) By definition F_c[ a f(x) + b g(x) ] = √(2/π) ∫₀^∞ ( a f(x) + b g(x) ) cos wx dx
= √(2/π) ∫₀^∞ a f(x) cos wx dx + √(2/π) ∫₀^∞ b g(x) cos wx dx
= a √(2/π) ∫₀^∞ f(x) cos wx dx + b √(2/π) ∫₀^∞ g(x) cos wx dx = a F_c[ f(x) ] + b F_c[ g(x) ].
(b) Similarly, by definition F_s[ a f(x) + b g(x) ] = √(2/π) ∫₀^∞ ( a f(x) + b g(x) ) sin wx dx
= √(2/π) ∫₀^∞ a f(x) sin wx dx + √(2/π) ∫₀^∞ b g(x) sin wx dx
= a √(2/π) ∫₀^∞ f(x) sin wx dx + b √(2/π) ∫₀^∞ g(x) sin wx dx = a F_s[ f(x) ] + b F_s[ g(x) ].
If F_c(w) and F_s(w) are the Fourier cosine and sine transforms of f(x), then
a) F_c[ cos(w₀x) f(x) ] = ½ [ F_c(w + w₀) + F_c(w − w₀) ]
b) F_c[ sin(w₀x) f(x) ] = ½ [ F_s(w + w₀) − F_s(w − w₀) ]
c) F_s[ cos(w₀x) f(x) ] = ½ [ F_s(w + w₀) + F_s(w − w₀) ]
d) F_s[ sin(w₀x) f(x) ] = ½ [ F_c(w − w₀) − F_c(w + w₀) ]
e) F_c[ f(ax) ] = (1/a) F_c(w/a), a > 0
f) F_s[ f(ax) ] = (1/a) F_s(w/a), a > 0
These results follow directly from the definitions of the Fourier cosine and sine transforms.
Proof of (b): By definition F_c[ sin(w₀x) f(x) ] = √(2/π) ∫₀^∞ sin(w₀x) cos(wx) f(x) dx.
Since sin w₀x cos wx = ½ [ sin(w₀ + w)x + sin(w₀ − w)x ] = ½ [ sin(w₀ + w)x − sin(w − w₀)x ],
F_c[ sin(w₀x) f(x) ] = ½ √(2/π) ∫₀^∞ [ sin(w + w₀)x − sin(w − w₀)x ] f(x) dx = ½ [ F_s(w + w₀) − F_s(w − w₀) ].
Hence proved.
Proof of (e): By definition F_c[ f(ax) ] = √(2/π) ∫₀^∞ f(ax) cos wx dx = (1/a) √(2/π) ∫₀^∞ f(x) cos( (w/a)x ) dx = (1/a) F_c(w/a).
Let f(x) and f′(x) be continuous and absolutely integrable on the interval [0, ∞) and let f″(x) be piecewise continuous on every subinterval [0, l]; then
a) F_c[ f′(x) ] = w F_s(w) − √(2/π) f(0)
b) F_s[ f′(x) ] = −w F_c(w)
c) F_c[ f″(x) ] = −w² F_c(w) − √(2/π) f′(0)
d) F_s[ f″(x) ] = −w² F_s(w) + w √(2/π) f(0)
Proof
(a) By definition F_c[ f′(x) ] = √(2/π) ∫₀^∞ f′(x) cos wx dx = √(2/π) ( [ f(x) cos wx ]₀^∞ + w ∫₀^∞ f(x) sin wx dx )
= w F_s(w) − √(2/π) f(0), this by assuming that f(x) → 0 as x → ∞.
Hence F_c[ f′(x) ] = w F_s(w) − √(2/π) f(0).
Proof of (c):
By definition F_c[ f″(x) ] = √(2/π) ∫₀^∞ f″(x) cos wx dx
= √(2/π) ( [ f′(x) cos wx + w f(x) sin wx ]₀^∞ − w² ∫₀^∞ f(x) cos wx dx )
= −w² F_c(w) − √(2/π) f′(0), this is by assuming that f(x), f′(x) → 0 as x → ∞.
Example 4.3: find the Fourier cosine and sine transforms of f(x) = e^(−ax), x ≥ 0, a > 0, by using the Fourier cosine and sine transforms of derivatives.
Solution: here f(x) = e^(−ax), which gives f′(x) = −a e^(−ax) and f″(x) = a² e^(−ax), with f(0) = 1 and f′(0) = −a.
From (c), a² F_c(w) = F_c[ f″(x) ] = −w² F_c(w) − √(2/π) f′(0) = −w² F_c(w) + a √(2/π), or F_c(w) = √(2/π) · a/(w² + a²).
Hence F_c[ f(x) ] = F_c[ e^(−ax) ] = √(2/π) · a/(w² + a²).
From (d), a² F_s(w) = F_s[ f″(x) ] = −w² F_s(w) + w √(2/π) f(0) = −w² F_s(w) + w √(2/π), or F_s(w) = √(2/π) · w/(w² + a²).
Hence F_s[ f(x) ] = F_s[ e^(−ax) ] = √(2/π) · w/(w² + a²).
Exercises 4.1
1. Find the Fourier cosine and sine transform of each of the following
a) f(x) = e^(−x), x > 0   (b) f(x) = { cos x, 0 ≤ x ≤ a ; 0, x > a }   c) ∫₀^∞ f(x) g(x) dx
2. Find the Fourier cosine and sine transform of each of the following functions
a) f ( x )=1 b) f ( x )=e x
Section Objectives:
Fourier transforms of a function f( x) can be derived from the complex Fourier integral
representation of f (x) on the real line, that is,
Recall the complex Fourier integral representation of f (x) on the real line
f(x) = ∫₋∞^∞ c(w) e^(iwx) dw = (1/2π) ∫₋∞^∞ ∫₋∞^∞ f(u) e^(−iw(u−x)) du dw,
where c(w) = (1/2π) ∫₋∞^∞ f(u) e^(−iwu) du (taking ω = w).
⟹ f(x) = (1/2π) ∫₋∞^∞ ∫₋∞^∞ f(u) e^(−iw(u−x)) du dw = (1/√(2π)) ∫₋∞^∞ [ (1/√(2π)) ∫₋∞^∞ f(u) e^(−iwu) du ] e^(iwx) dw      (1)
Here, the expression in the bracket, a function of w denoted by F(w), is called the Fourier transform of f, and since u is a dummy variable we replace u by x and have
F(w) = (1/√(2π)) ∫₋∞^∞ f(x) e^(−iwx) dx, so that (1) becomes
f(x) = (1/√(2π)) ∫₋∞^∞ F(w) e^(iwx) dw, which is called the inverse Fourier transform of F(w).
Other common notations used for the Fourier transform of f(x) are f̂(w) or F(f(x)).
Definition (Fourier transform): The Fourier transform, denoted by F(w) or F(f(x)), of a function f(x) is defined as
F(w) = (1/√(2π)) ∫₋∞^∞ f(x) e^(−iwx) dx.
The sufficient conditions for the existence of the Fourier transform of f(x) are:
1. f(x) is piecewise continuous on every finite interval, and
2. f(x) is absolutely integrable on the real axis.
Example 4.4: find the Fourier transform of f(x) = { k, 0 < x < a ; 0, otherwise }, where k is a constant.
Solution: By definition F(f(x)) = (1/√(2π)) ∫₋∞^∞ f(x) e^(−iwx) dx = (1/√(2π)) ∫₀^a k e^(−iwx) dx
= (k/√(2π)) [ e^(−iwx)/(−iw) ]₀^a = ( k/(iw√(2π)) ) ( 1 − e^(−iwa) ).
Hence F(w) = ( k/(iw√(2π)) ) ( 1 − e^(−iwa) ).
Example 4.5: find the Fourier transform of f(x) = { 1, |x| ≤ a ; 0, |x| > a }.
Solution: F(w) = (1/√(2π)) ∫₋ₐ^a e^(−iwx) dx = (1/√(2π)) [ e^(−iwx)/(−iw) ]₋ₐ^a = (1/(w√(2π))) ( ( e^(iwa) − e^(−iwa) )/i )
= (1/(w√(2π))) ( ( cos wa + i sin wa − cos wa + i sin wa )/i )
= (1/(w√(2π))) ( 2i sin wa )/i = 2 sin wa/(w√(2π)) = √(2/π) · sin wa/w.
Hence F(w) = √(2/π) · sin wa/w.
Example 4.6: find the Fourier transform of f(x) = e^(−|x|).
Solution: The function can also be written as f(x) = e^(−|x|) = { e^x, −∞ < x ≤ 0 ; e^(−x), 0 < x < ∞ } (by the definition of absolute value).
Now by the definition of the Fourier transform, F(w) = (1/√(2π)) ∫₋∞^∞ f(x) e^(−iwx) dx
⇒ F(w) = (1/√(2π)) [ ∫₋∞^0 e^x e^(−iwx) dx + ∫₀^∞ e^(−x) e^(−iwx) dx ]
= (1/√(2π)) [ ∫₋∞^0 e^((1−iw)x) dx + ∫₀^∞ e^(−(1+iw)x) dx ]
= (1/√(2π)) [ e^((1−iw)x)/(1−iw) ]₋∞^0 − (1/√(2π)) [ e^(−(1+iw)x)/(1+iw) ]₀^∞
= (1/√(2π)) ( 1/(1−iw) + 1/(1+iw) ),
since e^((1−iw)x)/(1−iw) → 1/(1−iw) as x → 0 and → 0 as x → −∞, while e^(−(1+iw)x)/(1+iw) → 0 as x → ∞ and → 1/(1+iw) as x → 0.
= (1/√(2π)) ( (1 + iw + 1 − iw)/(1 + w²) ) = (1/√(2π)) · 2/(1 + w²) = √(2/π) · 1/(1 + w²).
Hence F(w) = √(2/π) · 1/(1 + w²).
Example 4.7: find the Fourier transform of f(x) = e^(−a x²), a > 0.
Solution: F(f(x)) = (1/√(2π)) ∫₋∞^∞ e^(−(a x² + iwx)) dx = (1/√(2π)) ∫₋∞^∞ e^(−[ ( √a x + iw/(2√a) )² + w²/(4a) ]) dx
= (1/√(2π)) e^(−w²/(4a)) ∫₋∞^∞ e^(−( √a x + iw/(2√a) )²) dx.
Putting t = √a x + iw/(2√a), dt = √a dx, we get
⟹ F(f(x)) = (1/√(2π)) e^(−w²/(4a)) · (1/√a) ∫₋∞^∞ e^(−t²) dt = (1/√(2πa)) e^(−w²/(4a)) · √π = (1/√(2a)) e^(−w²/(4a)).
Hence F(f(x)) = F( e^(−a x²) ) = (1/√(2a)) e^(−w²/(4a)).
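A numerical spot-check of Example 4.7 is straightforward; the sketch below assumes numpy/scipy are available and uses arbitrary test values a = 1, w = 2 (my choice, not from the text).

```python
import numpy as np
from scipy.integrate import quad

a, w = 1.0, 2.0

# Only the cosine part of e^{-iwx} survives because e^{-a x^2} is even.
integrand = lambda x: np.exp(-a*x**2) * np.cos(w*x)
val, _ = quad(integrand, -np.inf, np.inf)

F_numeric = val / np.sqrt(2*np.pi)
F_formula = np.exp(-w**2/(4*a)) / np.sqrt(2*a)
print(F_numeric, F_formula)   # both ≈ 0.2601
```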
4.2.1 Properties of Fourier Transform
The properties of Fourier transform help to simplify the calculations involving Fourier transform
and to obtain some results which are otherwise difficult to obtain.
Theorem (Linearity Theorem): For any functions f(x) and g(x) whose Fourier transforms exist and for any constants a, b,
F[ a f(x) + b g(x) ] = a F( f(x) ) + b F( g(x) ).
Proof:
By definition F[ a f(x) + b g(x) ] = (1/√(2π)) ∫₋∞^∞ ( a f(x) + b g(x) ) e^(−iwx) dx
= (1/√(2π)) ∫₋∞^∞ a f(x) e^(−iwx) dx + (1/√(2π)) ∫₋∞^∞ b g(x) e^(−iwx) dx
= a (1/√(2π)) ∫₋∞^∞ f(x) e^(−iwx) dx + b (1/√(2π)) ∫₋∞^∞ g(x) e^(−iwx) dx
= a F( f(x) ) + b F( g(x) ).
Theorem (Transforms of derivatives):
a) F( f′(x) ) = iw F[ f(x) ]
b) F( f⁽ⁿ⁾(x) ) = (iw)ⁿ F[ f(x) ],
and this holds for all n such that the derivatives f⁽ʳ⁾(x), r = 1, 2, …, n, satisfy the sufficient conditions for the existence of the Fourier transforms.
Proof:
(a) By definition F( f′(x) ) = (1/√(2π)) ∫₋∞^∞ f′(x) e^(−iwx) dx; integrating by parts we obtain
F( f′(x) ) = (1/√(2π)) ( [ f(x) e^(−iwx) ]₋∞^∞ + iw ∫₋∞^∞ f(x) e^(−iwx) dx ).
Since f(x) ⟶ 0 as |x| ⟶ ∞, therefore
F( f′(x) ) = iw F[ f(x) ].
(b) The repeated application of result (a) gives result (b), provided that the desired conditions are satisfied at each step.
Example 4.8: find the Fourier transform of f(x) = x e^(−a x²), a > 0.
Solution: note that x e^(−a x²) = (−1/2a) ( e^(−a x²) )′, so
F[ x e^(−a x²) ] = (−1/2a) F[ ( e^(−a x²) )′ ] = (−1/2a) (iw) F[ e^(−a x²) ]      (using the transform of derivatives)
= ( −iw/2a ) · (1/√(2a)) e^(−w²/(4a))      (refer to Example 4.7, the Fourier transform of e^(−a x²))
Hence F( f(x) ) = ( −iw/(2a√(2a)) ) e^(−w²/(4a)).
Example 4.9: show that (a) F[ xⁿ f(x) ] = iⁿ dⁿ/dwⁿ [ F(w) ]
(b) F[ xᵐ f⁽ⁿ⁾(x) ] = i^(m+n) dᵐ/dwᵐ [ wⁿ F(w) ]
Solution: (a) Differentiating F(w) = (1/√(2π)) ∫₋∞^∞ f(x) e^(−iwx) dx with respect to w gives
dF/dw = (1/√(2π)) ∫₋∞^∞ (−ix) f(x) e^(−iwx) dx = −i F( x f(x) )
⟹ F( x¹ f(x) ) = ( 1/(−i) ) d/dw [ F(w) ] = i d/dw [ F(w) ].
The repeated application of the differentiation w.r.t. w leads to the desired result
F( xⁿ f(x) ) = iⁿ dⁿ/dwⁿ [ F(w) ].
(b) Consider F[ xᵐ f⁽ⁿ⁾(x) ] = iᵐ dᵐ/dwᵐ [ F( f⁽ⁿ⁾(x) ) ]      (this is by using part (a))
= iᵐ dᵐ/dwᵐ [ (iw)ⁿ F(w) ] = iᵐ · iⁿ dᵐ/dwᵐ [ wⁿ F(w) ],
because by the above theorem (transforms of derivatives) F( f⁽ⁿ⁾(x) ) = (iw)ⁿ F(w).
⟹ F[ xᵐ f⁽ⁿ⁾(x) ] = i^(m+n) dᵐ/dwᵐ [ wⁿ F(w) ].
Example 4.10: using the property of the Fourier transform of derivatives, find the Fourier transform of f(x) = e^(−a x²), a > 0.
Solution: clearly f(x) satisfies the requisite conditions of continuity and absolute integrability over the real axis for the existence of the Fourier transform.
It is easy to see that f(x) satisfies the differential equation f′(x) + 2ax f(x) = 0.
Taking the Fourier transform of both sides, F[ f′(x) ] + F[ 2ax f(x) ] = F[ f′(x) ] + 2a F[ x f(x) ] = 0.
This gives iw F(w) + 2a ( i F′(w) ) = 0, i.e. w F(w) + 2a F′(w) = 0      (by the above example, part (a)),
a separable first-order equation whose solution with F(0) = 1/√(2a) is F(w) = (1/√(2a)) e^(−w²/(4a)), in agreement with Example 4.7.
Theorem (shifting and scaling): Let F(w) be the Fourier transform of f(x). Then
(a) F[ f(x − x₀) ] = e^(−iw x₀) F(w)      (shifting x by x₀);
(b) F[ f(ax) ] = (1/a) F(w/a), a > 0      (scaling x by a).
Proof: The results follow immediately from the definition of the Fourier transform (you try!).
Example 4.11: find the Fourier transform of f(x) = e^(−a(x−5)²), a > 0.
Solution: By the shifting property in the above theorem part (a) with x₀ = 5, we have
F[ e^(−a(x−5)²) ] = e^(−i5w) F[ e^(−a x²) ] = e^(−i5w) (1/√(2a)) e^(−w²/(4a))      (by the above example, refer to Example 4.7)
= (1/√(2a)) e^(−( w²/(4a) + i5w )).
Example 4.12: find the Fourier transform of f(x) = 4 e^(−|x|) − 5 e^(−3|x+2|).
Solution: By linearity and the shifting property (with x₀ = −2),
F( f(x) ) = 4 F[ e^(−|x|) ] − 5 e^(2iw) F[ e^(−3|x|) ] = 4 F[ e^(−|x|) ] − (5/3) e^(2iw) F[ e^(−|x|) ]|_(w → w/3)      (using scaling)
= 4 · (1/√(2π)) · 2/(1 + w²) − (5/3) e^(2iw) · (1/√(2π)) · 2/(1 + (w/3)²)      (refer to the previous example)
= (1/√(2π)) [ 8/(1 + w²) − 30 e^(2iw)/(9 + w²) ].
Exercises 4.2
1. Find the Fourier transform of each of the following functions.
(a) f(x) = { e^x, |x| < a ; 0, otherwise }   (b) f(x) = { a − |x|, |x| < a ; 0, |x| > a }
(c) f(x) = u(x+1) − u(x−1), where u(x) is the unit-step function
(d) f(x) = sin ax / x, a > 0   (e) f(x) = { 1, |x| ≤ a ; 0, |x| > a }
2. Find the Fourier transform of each of the following functions.
(a) f(x) = { e^(kx), x < 0 (k > 0) ; 0, x > 0 }   (b) f(x) = { x, 0 < x < a ; 0, otherwise }
(c) f(x) = { |x|, −1 < x < 1 ; 0, otherwise }   (d) f(x) = { −1, −1 < x < 0 ; 1, 0 < x < 1 ; 0, otherwise }
(e) f(x) = { x e^(−x), −1 < x < 0 ; 0, otherwise }
Section Objectives:
The Laplace transforms which transforms a function f of one variable (t) into function F of
another variable (s) is named in honor of the French mathematician and Astronomer Pierre-
Simon Marquis de Laplace (1749–1827).
Integral Transform; If f (x, y) is a function of two variables, then a definite integral of f with
respect to one of the variables leads to a function of the other variable. For example, by holding y
constant, we see that ∫₁² 2xy² dx = 3y². Similarly, a definite integral such as ∫_a^b K(s,t) f(t) dt transforms a function f of the variable t into a function F of the variable s.
We are particularly interested in an integral transform, where the interval of integration is the
unbounded interval [0, ∞ ). If f (t) is defined fort ≥0, then the improper integral is defined as a
limit.
∞ b
∫ K ( s , t ) f (t) dt =lim
b →∞
∫ K ( s , t ) f (t) dt (1)
0 0
If the limit in (1) exists, then we say that the integral exists or is convergent; if the limit does not
exist, the integral does not exist and is divergent. The limit in (1) will, in general, exist for only
certain values of the variable s.
Definition: The function K(s, t) in (1) is called the kernel of the transform. The choice
−st
K ( s ,t )=e as the kernel gives us an especially important integral Transform.
Definition (Laplace Transform): Let f be a function defined for t ≥ 0. Then the integral
L{ f(t) } = F(s) = ∫₀^∞ e^(−st) f(t) dt      (2)
is said to be the Laplace transform of f, provided that the integral converges.
Example: evaluate L{ t }.
Solution: Let u = t ⇒ du = dt and let dv = e^(−st) dt ⇒ v = −e^(−st)/s, so that
L{ t } = ∫₀^∞ e^(−st) t dt = [ −t e^(−st)/s ]₀^∞ + (1/s) ∫₀^∞ e^(−st) dt = [ −t e^(−st)/s ]₀^∞ + (1/s) L{ 1 }      (by the above example).
As t → ∞, −t e^(−st)/s → 0 (for s > 0) and as t → 0, −t e^(−st)/s → 0, so [ −t e^(−st)/s ]₀^∞ = 0.
Hence L{ t } = ∫₀^∞ e^(−st) t dt = (1/s) L{ 1 } = (1/s)·(1/s) = 1/s², whenever s > 0.
For a), L{ e^(−3t) } = ∫₀^∞ e^(−st) e^(−3t) dt = ∫₀^∞ e^(−(s+3)t) dt = [ −e^(−(s+3)t)/(s+3) ]₀^∞ = 1/(s+3).
Hence L{ e^(−3t) } = 1/(s+3), whenever s > −3.
Example 4.16: evaluate L{ sin 2t }.
Solution: Let u = sin 2t, du = 2 cos 2t dt and let dv = e^(−st) dt, v = −e^(−st)/s. Then
L{ sin 2t } = ∫₀^∞ e^(−st) sin 2t dt = [ −e^(−st) sin 2t/s ]₀^∞ + (2/s) ∫₀^∞ e^(−st) cos 2t dt = (2/s) ∫₀^∞ e^(−st) cos 2t dt.
Now integrating ∫₀^∞ e^(−st) cos 2t dt by parts, with u = cos 2t, du = −2 sin 2t dt and dv = e^(−st) dt, v = −e^(−st)/s, we have
∫₀^∞ e^(−st) cos 2t dt = [ −e^(−st) cos 2t/s ]₀^∞ − (2/s) ∫₀^∞ e^(−st) sin 2t dt = 1/s − (2/s) ∫₀^∞ e^(−st) sin 2t dt,
because as t ⟶ ∞, −e^(−st) cos 2t/s ⟶ 0 (for s > 0) and as t ⟶ 0, −e^(−st) cos 2t/s ⟶ −1/s.
∫₀^∞ e^(−st) cos 2t dt = 1/s − (2/s) ∫₀^∞ e^(−st) sin 2t dt, s > 0      (2)
Substituting (2) into the first equation,
L{ sin 2t } = ∫₀^∞ e^(−st) sin 2t dt = (2/s) ( 1/s − (2/s) ∫₀^∞ e^(−st) sin 2t dt ) = 2/s² − (4/s²) ∫₀^∞ e^(−st) sin 2t dt
⟹ ∫₀^∞ e^(−st) sin 2t dt + (4/s²) ∫₀^∞ e^(−st) sin 2t dt = 2/s²
⟹ ( 1 + 4/s² ) ∫₀^∞ e^(−st) sin 2t dt = 2/s² ⟹ ( (s²+4)/s² ) ∫₀^∞ e^(−st) sin 2t dt = 2/s²
⟹ L{ sin 2t } = ∫₀^∞ e^(−st) sin 2t dt = (2/s²)( s²/(s²+4) ) = 2/(s²+4).
Hence L{ sin 2t } = 2/(s²+4), s > 0.
Property (L is a linear transform): for a linear combination of functions we can write
∫₀^∞ e^(−st) [ αf(t) + βg(t) ] dt = α ∫₀^∞ e^(−st) f(t) dt + β ∫₀^∞ e^(−st) g(t) dt
whenever both integrals converge for s > c. Hence it follows that
L{ αf(t) + βg(t) } = α L{ f(t) } + β L{ g(t) } = α F(s) + β G(s).
Example 4.17:
a) For s > 0,
L{ 1 + 5t } = L{ 1 } + L{ 5t } = L{ 1 } + 5 L{ t } = 1/s + 5/s²      (by the linearity of L and the above examples)
b) For s > 5,
L{ 4e^(5t) − 10 sin 2t } = 4 L{ e^(5t) } − 10 L{ sin 2t } = 4/(s−5) − 20/(s²+4).
We state the generalization of some of the preceding examples by means of the next theorem.
From this point on we shall also refrain from stating any restrictions on s; it is understood that s
is sufficiently restricted to guarantee the convergence of the appropriate Laplace transform.
Theorem: Transforms of Some Basic Functions
(a) L{ 1 } = 1/s
(b) L{ tⁿ } = n!/s^(n+1), n = 1, 2, 3, …
(c) L{ e^(at) } = 1/(s−a)  and  L{ e^(−at) } = 1/(s+a)
(d) L{ sin kt } = k/(s²+k²)
(e) L{ cos kt } = s/(s²+k²)
(f) L{ sinh kt } = k/(s²−k²)
(g) L{ cosh kt } = s/(s²−k²)
These can be proved by the direct application of the definition of the Laplace transform.
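The table entries can be reproduced mechanically; the short sketch below assumes sympy is available (a tooling assumption, not part of the text) and checks a few of them.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, k = sp.symbols('a k', positive=True)
n = 3   # a sample exponent for L{t^n}

for f in (1, t**n, sp.exp(a*t), sp.sin(k*t), sp.cosh(k*t)):
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f, '->', sp.simplify(F))
# Expected: 1/s, 6/s**4, 1/(s - a), k/(k**2 + s**2), s/(s**2 - k**2)
```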
Sufficient Conditions for Existence of L{ f(t) }: The integral that defines the Laplace transform does not have to converge (i.e. if L{ f(t) } = F(s) = ∫₀^∞ e^(−st) f(t) dt doesn't converge, then the Laplace transform of f doesn't exist). For example, neither L{ 1/t } nor L{ e^(t²) } exists. Sufficient conditions guaranteeing the existence of L{ f(t) } are that f be piecewise continuous on [0, ∞) and that f be of exponential order for t ≥ T. Recall that a function f is piecewise continuous on [0, ∞) if, in any interval 0 ≤ a ≤ t ≤ b, there are at most a finite number of points tₖ, k = 1, 2, …, n (tₖ₋₁ < tₖ) at which f has finite discontinuities and is continuous on each open interval (tₖ₋₁, tₖ). See Figure 4.1 below. The concept of exponential order is defined in the following manner.
Definition: A function f is said to be of exponential order c if there exist constants c, M > 0 and T > 0 such that |f(t)| ≤ M e^(ct) for all t > T.
Example 4.18: f(t) = t, f(t) = e^(−t) and f(t) = 2 cos t are all of exponential order with c = 1 for t > T,
since we have respectively |f(t)| = |t| ≤ e^t, |f(t)| = |e^(−t)| ≤ e^t and |f(t)| = |2 cos t| ≤ 2e^t.
Hence, L{ f(t) } = 2e^(−3s)/s, s > 0.
Definition (Inverse Laplace transform): If F(s) represents the Laplace transform of a function f(t), that is, L{ f(t) } = F(s), then we say f(t) is the inverse Laplace transform of F(s) and write
f(t) = L⁻¹{ F(s) }.
Example 4.20: Evaluate the inverse Laplace transform of each of the following.
a. F(s) = 1/s   b. F(s) = 1/s²   c. F(s) = 1/(s+3)
Solution: a) f(t) = L⁻¹{ 1/s } = 1   b) f(t) = L⁻¹{ 1/s² } = t   c) f(t) = L⁻¹{ F(s) } = L⁻¹{ 1/(s+3) } = e^(−3t)
Theorem (some inverse transforms)
(a) 1 = L⁻¹{ 1/s }
(b) tⁿ = L⁻¹{ n!/s^(n+1) }, n = 1, 2, 3, …
(c) e^(at) = L⁻¹{ 1/(s−a) }
(d) sin kt = L⁻¹{ k/(s²+k²) }
(e) cos kt = L⁻¹{ s/(s²+k²) }
(f) sinh kt = L⁻¹{ k/(s²−k²) }
(g) cosh kt = L⁻¹{ s/(s²−k²) }
Example 4.21: evaluate (a) L⁻¹{ 1/s⁵ } and (b) L⁻¹{ 1/(s²+7) }.
Solution: (a) By the above theorem, identifying n + 1 = 5, i.e. n = 4, and then multiplying and dividing by 4!, we have
L⁻¹{ 1/s⁵ } = L⁻¹{ 1/s^(4+1) } = (1/4!) L⁻¹{ 4!/s^(4+1) } = (1/4!) t⁴ = (1/24) t⁴.
(b) L⁻¹{ 1/(s²+7) } = L⁻¹{ (1/√7) · √7/( s² + (√7)² ) } = (1/√7) L⁻¹{ √7/( s² + (√7)² ) } = (1/√7) sin √7 t.
Here we have fixed up the expression 1/(s²+7) by multiplying and dividing by √7.
L⁻¹ is a linear transform: The inverse Laplace transform is also a linear transform; that is, for constants α and β and for functions F and G that are the transforms of f and g respectively,
L⁻¹{ αF(s) + βG(s) } = α L⁻¹{ F(s) } + β L⁻¹{ G(s) }.
Example 4.22: evaluate L⁻¹{ (−2s+6)/(s²+4) }.
Solution:
L⁻¹{ (−2s+6)/(s²+4) } = L⁻¹{ −2s/(s²+4) + 6/(s²+4) } = −2 L⁻¹{ s/(s²+4) } + 6 L⁻¹{ 1/(s²+4) }
= −2 L⁻¹{ s/(s²+4) } + (6/2) L⁻¹{ 2/(s²+4) }      (by linearity and fixing up the second expression)
= −2 cos 2t + 3 sin 2t.
Example 4.23: evaluate L⁻¹{ (s+3)/(s²−7s+12) }.
Solution: s²−7s+12 = (s−4)(s−3), so we decompose (s+3)/( (s−4)(s−3) ) into a sum of partial fractions:
(s+3)/( (s−4)(s−3) ) = A/(s−4) + B/(s−3) = ( A(s−3) + B(s−4) )/( (s−4)(s−3) ) ⟹ A(s−3) + B(s−4) = s+3.
Setting s = 4 gives A = 7, and setting s = 3 gives −B = 6, i.e. B = −6. From this
(s+3)/(s²−7s+12) = (s+3)/( (s−4)(s−3) ) = 7/(s−4) + (−6)/(s−3).
Now L⁻¹{ (s+3)/(s²−7s+12) } = L⁻¹{ 7/(s−4) + (−6)/(s−3) } = L⁻¹{ 7/(s−4) } + L⁻¹{ −6/(s−3) }
= 7 L⁻¹{ 1/(s−4) } − 6 L⁻¹{ 1/(s−3) } = 7e^(4t) − 6e^(3t).
Transforms of derivatives
As was pointed out in the introduction to this chapter, the Laplace transform is used to solve differential equations. To that end we need to evaluate quantities such as L{ dy/dt } and L{ d²y/dt² }.
For example, if f′ is continuous for t ≥ 0, then integration by parts gives
L{ f′(t) } = ∫₀^∞ e^(−st) f′(t) dt = [ e^(−st) f(t) ]₀^∞ + s ∫₀^∞ e^(−st) f(t) dt = −f(0) + s L{ f(t) }
or L{ f′(t) } = s F(s) − f(0),      (1)
here we have assumed that e^(−st) f(t) → 0 as t → ∞. Similarly, with the aid of (1),
L{ f″(t) } = ∫₀^∞ e^(−st) f″(t) dt = [ e^(−st) f′(t) ]₀^∞ + s ∫₀^∞ e^(−st) f′(t) dt = −f′(0) + s L{ f′(t) }
= s [ s F(s) − f(0) ] − f′(0) = s² F(s) − s f(0) − f′(0).      (2)
These results can be generalized in the following theorem.
Theorem (Transform of a Derivative): If f, f′, …, f⁽ⁿ⁻¹⁾ are continuous on [0, ∞) and are of exponential order, and if f⁽ⁿ⁾(t) is piecewise continuous on [0, ∞), then
L{ f⁽ⁿ⁾(t) } = sⁿ F(s) − s^(n−1) f(0) − s^(n−2) f′(0) − … − f⁽ⁿ⁻¹⁾(0),
where F(s) = L{ f(t) }.
In solving ODEs it is apparent from the general result given in the above theorem (Transform of a Derivative) that L{ dⁿy/dtⁿ } depends on Y(s) = L{ y(t) } and the n−1 derivatives of y(t) evaluated at t = 0. This property makes the Laplace transform ideally suited for solving linear initial-value problems in which the differential equation has constant coefficients. Such a differential equation is simply a linear combination of terms y, y′, y″, …, y⁽ⁿ⁾:
aₙ dⁿy/dtⁿ + aₙ₋₁ d^(n−1)y/dt^(n−1) + … + a₀ y = g(t),  y(0) = y₀, y′(0) = y₁, …, y⁽ⁿ⁻¹⁾(0) = yₙ₋₁      (4)
Taking the Laplace transform of both sides of (4) and using linearity gives
aₙ L{ dⁿy/dtⁿ } + aₙ₋₁ L{ d^(n−1)y/dt^(n−1) } + … + a₀ L{ y } = L{ g(t) }.      (5)
In other words,
the Laplace transform of a linear differential equation with constant coefficients becomes an algebraic equation in Y(s).
If we solve the general transformed equation (5) for the symbol Y(s), we first obtain
Y(s) = Q(s)/P(s) + G(s)/P(s)      (6)
where P(s) = aₙ sⁿ + aₙ₋₁ s^(n−1) + … + a₀, Q(s) is a polynomial in s of degree less than or equal to n−1 consisting of the various products of the coefficients aᵢ, i = 0, 1, …, n, and the prescribed initial conditions y₀, y₁, …, yₙ₋₁, and G(s) is the Laplace transform of g(t). Typically, we put the two terms in (6) over the least common denominator and then decompose the expression into two or more partial fractions. Finally, the solution y(t) of the original initial-value problem is y(t) = L⁻¹{ Y(s) }, where the inverse transform is done term by term. Let us summarize the procedure in the following diagram:
find the unknown y(t) that satisfies the DE and the initial conditions ⟶ apply the Laplace transform L ⟶ the transformed DE becomes an algebraic equation in Y(s).
Example 4.24: Use the Laplace Transform to solve the initial-value problem
dy/dt + 3y = 13 sin 2t,  y(0) = 6.
Solution: We first take the transform of each member of the differential equation:
L{ dy/dt } + 3 L{ y } = 13 L{ sin 2t }.
From (1), L{ dy/dt } = s Y(s) − y(0) = s Y(s) − 6, and we know that L{ sin 2t } = 2/(s²+4), so
s Y(s) − 6 + 3 Y(s) = 26/(s²+4), or ( s + 3 ) Y(s) = 6 + 26/(s²+4), and we get
Y(s) = 6/(s+3) + 26/( (s+3)(s²+4) ) = ( 6s² + 50 )/( (s+3)(s²+4) )      (7)
Since the quadratic polynomial s²+4 does not factor using real numbers, its assumed numerator in the partial fraction decomposition is a linear polynomial in s:
( 6s² + 50 )/( (s+3)(s²+4) ) = A/(s+3) + (Bs+C)/(s²+4).
Putting the right-hand side of the equality over a common denominator and equating numerators gives 6s² + 50 = A(s²+4) + (Bs+C)(s+3). Setting s = −3 gives 104 = 13A, so A = 8.
Since the denominator has no more real zeros, we equate the coefficients of s² and s:
6 = A + B and 0 = 3B + C. Using the value of A in the first equation gives B = −2, and then using this in the second equation gives C = 6. Thus
Y(s) = ( 6s² + 50 )/( (s+3)(s²+4) ) = 8/(s+3) + (−2s+6)/(s²+4) = 8/(s+3) − 2s/(s²+4) + 6/(s²+4)      (8)
Hence, the solution of the initial-value problem is y(t) = 8e^(−3t) − 2 cos 2t + 3 sin 2t.
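As an independent check on Example 4.24, the same IVP can be handed to sympy's ODE solver (an assumed tool, not part of the text); the output should match the Laplace-transform answer.

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# dy/dt + 3y = 13 sin 2t, with y(0) = 6
ode = sp.Eq(y(t).diff(t) + 3*y(t), 13*sp.sin(2*t))
sol = sp.dsolve(ode, y(t), ics={y(0): 6})

print(sp.simplify(sol.rhs))   # 8*exp(-3*t) - 2*cos(2*t) + 3*sin(2*t)
```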
Example 4.25: Use the Laplace transform to solve the initial-value problem y″ − 3y′ + 2y = e^(−4t), y(0) = 1, y′(0) = 5.
Solution: Proceeding as in the example above, we transform the DE. We take the sum of the transforms of each term, use the given initial conditions, and then solve for Y(s):
L{ y″ } − 3 L{ y′ } + 2 L{ y } = L{ e^(−4t) }
s² Y(s) − s y(0) − y′(0) − 3 [ s Y(s) − y(0) ] + 2 Y(s) = 1/(s+4)
( s² − 3s + 2 ) Y(s) = s + 2 + 1/(s+4)
Y(s) = (s+2)/(s²−3s+2) + 1/( (s²−3s+2)(s+4) ) = ( s² + 6s + 9 )/( (s−1)(s−2)(s+4) ).
Then decomposition into partial fractions,
( s² + 6s + 9 )/( (s−1)(s−2)(s+4) ) = A/(s−1) + B/(s−2) + C/(s+4),
yields (on setting s = 1, 2 and −4 in turn)
−5A = 16 or A = −16/5,   6B = 25 or B = 25/6,   30C = 1 or C = 1/30.
Thus Y(s) = (−16/5)·1/(s−1) + (25/6)·1/(s−2) + (1/30)·1/(s+4), and taking the inverse transform term by term,
y(t) = L⁻¹{ Y(s) } = (−16/5) L⁻¹{ 1/(s−1) } + (25/6) L⁻¹{ 1/(s−2) } + (1/30) L⁻¹{ 1/(s+4) }
= (−16/5) eᵗ + (25/6) e^(2t) + (1/30) e^(−4t).
Hence, the solution of the initial-value problem is
y(t) = (−16/5) eᵗ + (25/6) e^(2t) + (1/30) e^(−4t).
We next examine operational properties of the Laplace transform that enable us to build up a more extensive list of transforms without having to resort to the basic definition and integration.
Theorem (First Translation Theorem): If L{ f(t) } = F(s) and a is any real number, then
L{ e^(at) f(t) } = F(s−a).
Proof: By definition
L{ e^(at) f(t) } = ∫₀^∞ e^(−st) e^(at) f(t) dt = ∫₀^∞ e^(−(s−a)t) f(t) dt = F(s−a).
If we consider s as a real variable, then the graph of F(s−a) is the graph of F(s) shifted on the s-axis by the amount |a|. If a > 0, the graph of F(s) is shifted a units to the right, whereas if a < 0, it is shifted |a| units to the left, as shown in Fig 4.3.
It is sometimes useful to write L{ e^(at) f(t) } = L{ f(t) }|_(s → s−a), where s → s−a means that in the Laplace transform F(s) of f(t) we replace the symbol s wherever it appears by s−a.
Inverse form of the first translation theorem: To compute the inverse of F(s−a), we must recognize F(s), find f(t) by taking the inverse Laplace transform of F(s), and then multiply f(t) by the exponential function e^(at). This procedure can be summarized symbolically in the following manner:
L⁻¹{ F(s−a) } = L⁻¹{ F(s)|_(s → s−a) } = e^(at) f(t)      (1)
where f(t) = L⁻¹{ F(s) }.
Example 4.26: Solve the Initial-Value Problem
y″ − 6y′ + 9y = t² e^(3t),  y(0) = 2, y′(0) = 17.
Solution: Using linearity, the transform of derivatives and the initial conditions, we simplify and then solve for Y(s) = L{ y(t) }:
L{ y″ } − 6 L{ y′ } + 9 L{ y } = L{ t² e^(3t) }.
Now we use the transforms of derivatives together with the first translation theorem:
s² Y(s) − s y(0) − y′(0) − 6 [ s Y(s) − y(0) ] + 9 Y(s) = 2/(s−3)³
( s² − 6s + 9 ) Y(s) = 2s + 5 + 2/(s−3)³
(s−3)² Y(s) = 2s + 5 + 2/(s−3)³
Y(s) = (2s+5)/(s−3)² + 2/(s−3)⁵.
From this, decomposition of (2s+5)/(s−3)² into partial fractions yields
Y(s) = 2/(s−3) + 11/(s−3)² + 2/(s−3)⁵.
y(t) = L⁻¹{ 2/(s−3) } + 11 L⁻¹{ 1/(s−3)² } + (2/4!) L⁻¹{ 4!/(s−3)⁵ }
y(t) = 2e^(3t) + 11 t e^(3t) + (1/12) t⁴ e^(3t).
Hence, y(t) = 2e^(3t) + 11 t e^(3t) + (1/12) t⁴ e^(3t) is the solution to the IVP.
Example 4.27: Solve the Initial-Value Problem
y″ + 4y′ + 6y = 1 + e^(−t),  y(0) = 0, y′(0) = 0.
Solution: L{ y″ } + 4 L{ y′ } + 6 L{ y } = L{ 1 } + L{ e^(−t) }
s² Y(s) − s y(0) − y′(0) + 4 [ s Y(s) − y(0) ] + 6 Y(s) = 1/s + 1/(s+1)
( s² + 4s + 6 ) Y(s) = (2s+1)/( s(s+1) )
Y(s) = (2s+1)/( s(s+1)(s²+4s+6) ).
Since the quadratic term in the denominator does not factor into real linear factors, the partial fraction decomposition for Y(s) is found to be
Y(s) = (1/6)/s + (1/3)/(s+1) − ( s/2 + 5/3 )/( s²+4s+6 ).
Taking the inverse Laplace transform of each term and completing the square on s²+4s+6 = (s+2)²+2,
y(t) = L⁻¹{ Y(s) } = (1/6) L⁻¹{ 1/s } + (1/3) L⁻¹{ 1/(s+1) } − (1/2) L⁻¹{ (s+2)/( (s+2)²+2 ) } − (2/(3√2)) L⁻¹{ √2/( (s+2)²+2 ) }
= 1/6 + (1/3) e^(−t) − (1/2) e^(−2t) cos √2 t − (√2/3) e^(−2t) sin √2 t.
Exercises 4.3
1. In each of the following find the inverse Laplace transform.
(a) L⁻¹{ 1/(s²+3s) }   (b) L⁻¹{ (s+1)/(s²−4s) }   (c) L⁻¹{ (s+1)/(s²−2s−3) }   (d) L⁻¹{ 1/(s²+s−20) }
5. Use the Laplace transform to solve the given initial-value problem.
(a) dy/dt − y = 5, y(0) = 0   (b) 2 dy/dt + y = 0, y(0) = −3   (c) y′ + 6y = e^(4t), y(0) = 2
(d) y″ − 4y′ = 4e^(3t) − 3e^(−t), y(0) = 1, y′(0) = −1
That is, L{ t f(t) } = −d/ds F(s) = −d/ds L{ f(t) }.
Similarly, by the above result, L{ t² f(t) } = L{ t · t f(t) } = −d/ds L{ t f(t) } = −d/ds ( −d/ds L{ f(t) } ) = d²/ds² L{ f(t) },
and in general L{ tⁿ f(t) } = (−1)ⁿ dⁿ/dsⁿ F(s).
Example 4.28: evaluate L{ t sin kt }.
Solution: Here f(t) = sin kt, n = 1 and F(s) = k/(s²+k²); then by the above theorem
L{ t sin kt } = (−1) d/ds L{ sin kt } = −d/ds ( k/(s²+k²) ) = −( (s²+k²)·0 − k·2s )/(s²+k²)² = 2ks/(s²+k²)².
Hence L{ t sin kt } = 2ks/(s²+k²)².
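A quick symbolic check of this derivative-of-a-transform rule (sympy assumed as the tool, not part of the text):

```python
import sympy as sp

t, s, k = sp.symbols('t s k', positive=True)

F = sp.laplace_transform(sp.sin(k*t), t, s, noconds=True)      # k/(s**2 + k**2)
lhs = sp.laplace_transform(t*sp.sin(k*t), t, s, noconds=True)  # L{t sin kt}
rhs = -sp.diff(F, s)                                           # -dF/ds

print(sp.simplify(lhs - rhs))   # 0: both equal 2*k*s/(s**2 + k**2)**2
```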
Example 4.29: evaluate L{ t e^(3t) }.
Solution: f(t) = e^(3t), F(s) = 1/(s−3), s > 3 and n = 1, so
L{ t e^(3t) } = (−1) d/ds L{ e^(3t) } = −d/ds ( 1/(s−3) ) = −( (s−3)·0 − 1 )/(s−3)² = −( −1 )/(s−3)² = 1/(s−3)².
Hence L{ t e^(3t) } = 1/(s−3)².
Integration of transforms: If F(s) = L{ f(t) } and the limit of f(t)/t as t → 0⁺ exists, then
L{ f(t)/t } = ∫ₛ^∞ F(u) du.
To see this, set G(s) = L{ f(t)/t } = ∫₀^∞ e^(−st) ( f(t)/t ) dt. Then
G′(s) = ∫₀^∞ e^(−st) (−t) ( f(t)/t ) dt = −∫₀^∞ e^(−st) f(t) dt = −F(s).
To proceed further we now make use of the fact that the condition | f(t)/t | ≤ M e^(kt) implies that
lim_(s→∞) G(s) = 0, showing that G(s) = L{ f(t)/t } = ∫ₛ^∞ F(u) du, for s > k.
The converse result follows by taking the inverse Laplace transform and using the fact that
L⁻¹{ G(s) } = f(t)/t = −(1/t) L⁻¹{ G′(s) }, together with L{ f(t) } = F(s) = −G′(s).
Example 4.30: Evaluate L{ ( e^(−2t) − e^(−3t) )/t }.
Solution: The function ( e^(−2t) − e^(−3t) )/t is defined and finite for all t > 0, where f(t) = e^(−2t) − e^(−3t) and
L{ f(t) } = L{ e^(−2t) − e^(−3t) } = F(s) = 1/(s+2) − 1/(s+3).
G(s) = L{ ( e^(−2t) − e^(−3t) )/t } = ∫ₛ^∞ F(u) du = ∫ₛ^∞ ( 1/(u+2) − 1/(u+3) ) du = [ ln(u+2) − ln(u+3) ]ₛ^∞ = [ ln( (u+2)/(u+3) ) ]ₛ^∞
= 0 − ln( (s+2)/(s+3) ) = ln( (s+3)/(s+2) ).
Hence G(s) = L{ ( e^(−2t) − e^(−3t) )/t } = ln( (s+3)/(s+2) ).
Example 4.31: Evaluate L⁻¹{ ln( (s+3)/(s+2) ) }.
Solution: Let G(s) = ln( (s+3)/(s+2) ); then G′(s) = d/ds ln( (s+3)/(s+2) ) = d/ds ( ln(s+3) − ln(s+2) ) = 1/(s+3) − 1/(s+2)
⟹ G′(s) = 1/(s+3) − 1/(s+2).
Now L⁻¹{ G′(s) } = L⁻¹{ 1/(s+3) − 1/(s+2) } = L⁻¹{ 1/(s+3) } − L⁻¹{ 1/(s+2) } = e^(−3t) − e^(−2t),
and so L⁻¹{ G(s) } = L⁻¹{ ln( (s+3)/(s+2) ) } = −(1/t) L⁻¹{ G′(s) } = −(1/t) ( e^(−3t) − e^(−2t) ) = ( e^(−2t) − e^(−3t) )/t.
Hence L⁻¹{ ln( (s+3)/(s+2) ) } = ( e^(−2t) − e^(−3t) )/t.
Exercises 4.4
1. Use the derivative of a transform to find the Laplace transform of each of the following.
(a) t cos wt   (b) t² sin 3t   (c) t² cosh 2t   (d) t e^(−kt) sin t   (e) tⁿ e^(kt)
2. Use the integral of a transform to find the Laplace transform of each of the following.
(a) L{ sin 3t / t }   (b) L{ e^(−5t)/t }   (c) L{ cos πt/(8t) }
4.5.1 Convolution
Definition: let the functions f (t ) and g(t ) be defined fort ≥ 0. Then the convolution of the
functions f and g denoted by(f∗g)(t), and in abbreviated form by f∗g is a function of t defined
as the integral;
( f∗g )(t) = ∫₀ᵗ f(τ) g(t−τ) dτ.
Note: From the definition it follows almost immediately that the convolution has properties similar to those of multiplication of numbers; for example, f∗g = g∗f and f∗0 = 0∗f = 0. However, there are differences of which you should be aware.
Example: let f(t) = eᵗ and g(t) = sin t; find f∗g.
Solution: ( f∗g )(t) = ∫₀ᵗ e^τ sin(t−τ) dτ. Integrating by parts,
∫₀ᵗ e^τ sin(t−τ) dτ = [ e^τ sin(t−τ) ]₀ᵗ + ∫₀ᵗ e^τ cos(t−τ) dτ = −sin t + ∫₀ᵗ e^τ cos(t−τ) dτ      (1)
and ∫₀ᵗ e^τ cos(t−τ) dτ = [ e^τ cos(t−τ) ]₀ᵗ − ∫₀ᵗ e^τ sin(t−τ) dτ = eᵗ − cos t − ∫₀ᵗ e^τ sin(t−τ) dτ.      (2)
Substituting (2) into (1),
⟹ 2 ∫₀ᵗ e^τ sin(t−τ) dτ = eᵗ − cos t − sin t
⟹ ∫₀ᵗ e^τ sin(t−τ) dτ = ½ ( eᵗ − cos t − sin t ).
Hence f∗g = ( g∗f )(t) = ∫₀ᵗ f(τ) g(t−τ) dτ = ∫₀ᵗ e^τ sin(t−τ) dτ = ½ ( eᵗ − cos t − sin t ).
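The same convolution can be computed and checked against the convolution theorem symbolically; the sketch below assumes sympy is available.

```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)

# (e^t * sin t)(t) from the definition of convolution ...
conv = sp.integrate(sp.exp(tau)*sp.sin(t - tau), (tau, 0, t))
print(sp.simplify(conv))   # exp(t)/2 - sin(t)/2 - cos(t)/2

# ... and its Laplace transform, compared with L{e^t} L{sin t}.
lhs = sp.laplace_transform(conv, t, s, noconds=True)
rhs = 1/((s - 1)*(s**2 + 1))
print(sp.simplify(lhs - rhs))   # 0, as the convolution theorem predicts
```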
Theorem (Convolution theorem): If f(t) and g(t) are piecewise continuous on [0, ∞) and of exponential order, then
L{ f∗g } = L{ f(t) } L{ g(t) } = F(s) G(s),
or equivalently
L⁻¹{ F(s) G(s) } = f∗g = ∫₀ᵗ f(τ) g(t−τ) dτ.
Proof
Let F(s) = ∫₀^∞ e^(−sτ) f(τ) dτ and G(s) = ∫₀^∞ e^(−sp) g(p) dp.
We now set t = p + τ, where τ is at first constant. Then p = t − τ, and t varies from τ to ∞. Thus,
G(s) = ∫_τ^∞ e^(−s(t−τ)) g(t−τ) dt = e^(sτ) ∫_τ^∞ e^(−st) g(t−τ) dt.
τ in F and t in G vary independently. Hence we can insert the G-integral into the F-integral:
F(s) G(s) = ∫₀^∞ e^(−sτ) f(τ) e^(sτ) ∫_τ^∞ e^(−st) g(t−τ) dt dτ = ∫₀^∞ f(τ) ∫_τ^∞ e^(−st) g(t−τ) dt dτ.
Here we integrate for fixed τ over t from τ to ∞; this is the shaded region in Fig 4.4. Under the assumptions on f and g the order of integration can be reversed: we then integrate first over τ from 0 to t and then over t from 0 to ∞, so that
F(s) G(s) = ∫₀^∞ e^(−st) ( ∫₀ᵗ f(τ) g(t−τ) dτ ) dt = L{ f∗g }.
For the functions of the previous example,
L{ f∗g } = L{ ∫₀ᵗ e^τ sin(t−τ) dτ } = L{ f(t) } · L{ g(t) } = L{ eᵗ } · L{ sin t } = ( 1/(s−1) ) ( 1/(s²+1) ) = 1/( (s−1)(s²+1) ).
Example: (a) evaluate L{ t²∗cos t }. Writing L{ t² } = 2/s³ and L{ cos t } = s/(s²+1), then by the convolution theorem
L{ t²∗cos t } = L{ t² } L{ cos t } = (2/s³) ( s/(s²+1) ) = 2/( s²(s²+1) ).
(b) Writing s/(s²+a²)² = ( 1/(s²+a²) ) ( s/(s²+a²) ) and letting F(s) = 1/(s²+a²) and G(s) = s/(s²+a²), we have
L⁻¹{ F(s) } = L⁻¹{ 1/(s²+a²) } = (1/a) sin at and, similarly, L⁻¹{ G(s) } = L⁻¹{ s/(s²+a²) } = cos at; then it follows from the convolution theorem that
L⁻¹{ s/(s²+a²)² } = L⁻¹{ F(s) G(s) } = (1/a) ( sin at ∗ cos at ) = (1/2a) t sin at.
Note: rather than computing terms such as cos a(t−τ) and sin a(t−τ) using integration by parts, it is often quicker to replace sin at and cos a(t−τ) by
sin at = ( e^(iat) − e^(−iat) )/(2i),   cos a(t−τ) = ( e^(ia(t−τ)) + e^(−ia(t−τ)) )/2
before performing the integrations, and again use these identities to interpret the result in terms of trigonometric functions.
Convolution helps in solving certain integral equations, that is, equations in which the unknown
function y (t) appears in an integral (and perhaps also outside of it).This concerns equations with
an integral of the form of a convolution.
Note: the Volterra integral equation
f(t) = g(t) + ∫₀ᵗ f(τ) h(t−τ) dτ
has the convolution form, with the symbol h playing the part of g in the convolution.
Example: solve the integral equation f(t) = 3t² − e^(−t) − ∫₀ᵗ f(τ) e^(t−τ) dτ for f(t).
Solution: taking the Laplace transform of both sides and using the convolution theorem,
L{ f(t) } = L{ 3t² − e^(−t) } − L{ ∫₀ᵗ f(τ) e^(t−τ) dτ } = L{ 3t² } − L{ e^(−t) } − L{ f(t) } L{ eᵗ }
⟹ F(s) = 3 · (2/s³) − 1/(s+1) − F(s) · 1/(s−1) = 6/s³ − 1/(s+1) − F(s)/(s−1)
⟹ ( 1 + 1/(s−1) ) F(s) = 6/s³ − 1/(s+1) ⟹ ( s/(s−1) ) F(s) = 6/s³ − 1/(s+1)
⟹ F(s) = ( (s−1)/s ) ( 6/s³ − 1/(s+1) ) = 6(s−1)/s⁴ − (s−1)/( s(s+1) ) = 6/s³ − 6/s⁴ − (s−1)/( s(s+1) )      (1)
Decomposing (s−1)/( s(s+1) ) into partial fractions we have
(s−1)/( s(s+1) ) = A/s + B/(s+1) = ( A(s+1) + Bs )/( s(s+1) ) = ( (A+B)s + A )/( s(s+1) )
⟹ (A+B)s + A = s − 1 ⟹ A = −1 and A + B = 1 ⟹ B = 2.
Now (s−1)/( s(s+1) ) = −1/s + 2/(s+1), and from this (1) becomes
F(s) = 6/s³ − 6/s⁴ − (s−1)/( s(s+1) ) = 6/s³ − 6/s⁴ + 1/s − 2/(s+1).
Taking the inverse Laplace transform of each term we have
f(t) = L⁻¹{ F(s) } = 3 L⁻¹{ 2!/s³ } − L⁻¹{ 3!/s⁴ } + L⁻¹{ 1/s } − 2 L⁻¹{ 1/(s+1) } = 3t² − t³ + 1 − 2e^(−t).
Integro-differential equations
We now consider a differential equation of an unusual type, these equations occur in many
applications of mathematics, one of which arises in the continuum mechanics of polymers, where
the dynamical response y(t) of certain types of material at time t depends on a derivative of y(t) and the time-weighted cumulative effect of what has happened to the material prior to time t. For
Definition: Differential equations in which the function y (t) occurs not only as the dependent
variable in the differential equation, but also inside a convolution integral that forms the
Nonhomogeneous term are called Integro-differential equations.
In other words, equations that involve both the integral of an unknown function and its
derivatives are called Integro-differential equations.
Example: solve the integro-differential equation y″(t) + y(t) = ∫₀ᵗ sin τ y(t−τ) dτ, y(0) = 1, y′(0) = 0.
Solution: transforming both sides,
s² Y(s) − s + Y(s) = L{ ∫₀ᵗ sin τ y(t−τ) dτ }.
Here the last term is the Laplace transform of a convolution integral, so from the convolution theorem it follows that
L{ ∫₀ᵗ sin τ y(t−τ) dτ } = L{ sin t } L{ y(t) } = Y(s)/( s²+1 ).
Using this result in the transformed equation, solving for Y(s), and expanding the result using partial fractions gives
s² Y(s) − s + Y(s) = Y(s)/( s²+1 ),  or  ( s²+1 ) Y(s) = Y(s)/( s²+1 ) + s,
i.e. Y(s) = s( s²+1 )/( (s²+1)² − 1 ) = ( s²+1 )/( s( s²+2 ) ) = (1/2)·1/s + (1/2)·s/( s²+2 ),
so that
y(t) = ½ ( 1 + cos √2 t ) for t > 0.
Exercises 4.5
2. If f ( t )=t 2and g ( t ) =cost , then show that ( f∗g ) ( t )=2 ( t−sint ) =(g∗f )(t)
4. In each of the following use the Laplace transform to solve the given integral equation or Integro-
differential equation.
(a) f(t) + ∫₀ᵗ (t−τ) f(τ) dτ = t   (b) f(t) = 2t − 4 ∫₀ᵗ sin τ f(t−τ) dτ
(c) y′(t) = 1 − sin t − ∫₀ᵗ y(τ) dτ, y(0) = 0
(d) dy/dt + 6y(t) + 9 ∫₀ᵗ y(τ) dτ = 1, y(0) = 0
Unit Summary:
The Fourier cosine and sine transforms can be considered as special cases of the Fourier transform of f(x) when f(x) is an even or odd function over the real axis.
The Fourier cosine and Fourier sine transforms of f(x), denoted by F_c(w) (or f̂_c) and F_s(w) (or f̂_s) respectively, are defined as
F_c(w) = √(2/π) ∫₀^∞ f(x) cos wx dx,   F_s(w) = √(2/π) ∫₀^∞ f(x) sin wx dx.
The Fourier cosine and sine transforms are linear transforms, i.e. for any two functions f(x) and g(x) whose Fourier cosine and sine transforms exist and for any constants a and b,
(a) F_c[ a f(x) + b g(x) ] = a F_c[ f(x) ] + b F_c[ g(x) ] and
(b) F_s[ a f(x) + b g(x) ] = a F_s[ f(x) ] + b F_s[ g(x) ].
Let f(x) and f′(x) be continuous and absolutely integrable on [0, ∞) and f″(x) be piecewise continuous on every subinterval; then the Fourier cosine and sine transforms of derivatives are
a) F_c[ f′(x) ] = w F_s(w) − √(2/π) f(0)
b) F_s[ f′(x) ] = −w F_c(w)
Fourier transforms of a function f(x) can be derived from the complex Fourier integral representation of f(x) on the real line.
The Fourier transform, denoted by F(w) or F(f(x)), of a function f(x) is defined as
F(w) = (1/√(2π)) ∫₋∞^∞ f(x) e^(−iwx) dx.
The Fourier transform is linear, that is, for any functions f(x) and g(x) whose Fourier transforms exist and for any constants a, b,
F[ a f(x) + b g(x) ] = a F( f(x) ) + b F( g(x) ).
The integral L{ f(t) } = F(s) = ∫₀^∞ e^(−st) f(t) dt, where f is a function defined for t ≥ 0, is called the Laplace transform of f, provided that the integral converges.
The Laplace transform is a linear transform, i.e. for a linear combination of functions we can write
∫₀^∞ e^(−st) [ αf(t) + βg(t) ] dt = α ∫₀^∞ e^(−st) f(t) dt + β ∫₀^∞ e^(−st) g(t) dt
or L{ αf(t) + βg(t) } = α L{ f(t) } + β L{ g(t) } = α F(s) + β G(s).
A function f is said to be of exponential order c if there exist constants c, M > 0 and T > 0 such that |f(t)| ≤ M e^(ct) for all t > T.
If f is piecewise continuous on [0, ∞) and of exponential order c, then L{ f(t) } exists for s > c.
If L{ f(t) } = F(s), we then say f(t) is the inverse Laplace transform of F(s) and write f(t) = L⁻¹{ F(s) }.
The inverse Laplace transform is also a linear transform, that is, for constants α and β and for functions F and G that are transforms of f and g respectively,
L⁻¹{ αF(s) + βG(s) } = α L⁻¹{ F(s) } + β L⁻¹{ G(s) }.
An equation of the form f(t) = g(t) + ∫₀ᵗ f(τ) h(t−τ) dτ, where g(t) and h(t) are known functions, is called a Volterra integral equation for f(t).
Equations that involve both the integral of an unknown function and its derivatives are called integro-differential equations.
Miscellaneous Exercises
1. Find the Fourier cosine and Fourier sine transforms of each of the stated functions.
(a) f(x) = { sin x, 0 ≤ x ≤ π ; 0, otherwise }   (b) f(x) = { cos x, 0 ≤ x ≤ π ; 0, otherwise }
(c) f(x) = { x, 0 ≤ x ≤ 1 ; 2−x, 1 ≤ x ≤ 2 ; 0, otherwise }   (d) f(x) = { 1−x², 0 ≤ x < 1 ; 0, otherwise }
2. Find the Fourier sine transform of f(x) = e^(−ax), a > 0, and prove that
∫₀^∞ ( x sin αx /( a² + x² ) ) dx = (π/2) e^(−aα),  α > 0.
3. Find the Fourier cosine and Fourier sine transforms of each of the following.
(a) f(x) = { aiax, 0 < x < 1 ; 0, otherwise }   (b) f(x) = { x, 0 < x < a ; 0, otherwise }   (c) f(x) = { e^x, |x| < a ; 0, otherwise }
(d) f(x) = e^(−x²/2)   (e) f(x) = sin ax / x, a > 0   (f) f(x) = { x, |x| < x₀ ; 0, otherwise }
(g) f(x) = { x^a e^(−x), x > 0 ; 0, x ≤ 0 }
5. Find the Fourier transform of f(x) = { 1−x², |x| < 1 ; 0, |x| > 1 } and hence show that
∫₀^∞ ( ( x cos x − sin x )/x³ ) cos(x/2) dx = −3π/16.
6. Find the Laplace transform of each of the following.
(a) f(t) = sin 2t cos 2t   (b) f(t) = 10 cos( t − π/6 )   (c) f(t) = ( 1 + e^(2t) )²
(d) f(t) = ( eᵗ − e^(−t) )²   (e) f(t) = eᵗ sin 5t   (f) f(t) = e^(2t) (t−1)²   (g) f(t) = t¹⁰ e^(−7t)
(h) f(t) = ( 1 − eᵗ + 3e^(−4t) ) cos 5t   (i) f(t) = e^(3t) ( 9 − 4t + 10 sin(t/2) )
7. Find the inverse Laplace transform of each of the following.
(a) L⁻¹{ 1/( (s²+1)(s²+4) ) }   (b) L⁻¹{ 1/( s⁴−9 ) }   (c) L⁻¹{ (s−3)/( (s−√3)(s+√3) ) }   (d) L⁻¹{ (6s+3)/( s⁴+5s²+4 ) }
(e) L⁻¹{ 1/(s+2)³ }   (f) L⁻¹{ 1/( s²+2s+5 ) }   (g) L⁻¹{ (2s−1)/( s²(s+1)³ ) }   (h) L⁻¹{ (s+1)²/(s+2)⁴ }
8. Use the Laplace transform to solve the given initial-value and boundary-value problems.
9. In each of the following use the convolution theorem to find the Laplace transform.
(a) L{ ∫₀ᵗ e^τ dτ }   (b) L{ ∫₀ᵗ e^(−τ) cos τ dτ }   (c) L{ ∫₀ᵗ τ sin τ dτ }   (d) L{ ∫₀ᵗ τ e^(t−τ) dτ }
10. In each of the following use the Laplace transform to solve the given integral equation or integro-differential equation.
(a) f(t) = t eᵗ + ∫₀ᵗ τ f(t−τ) dτ   (b) f(t) + 2 ∫₀ᵗ f(τ) cos(t−τ) dτ = 4e^(−t) + sin t
(c) f(t) + ∫₀ᵗ f(τ) dτ = 1   (d) f(t) = cos t + ∫₀ᵗ e^(−τ) f(t−τ) dτ
(e) t − 2f(t) = ∫₀ᵗ ( e^τ − e^(−τ) ) f(t−τ) dτ
(f) y′ + 4y = 4 ∫₀ᵗ sin τ y(t−τ) dτ, with y(0) = 1   (g) y′ + y = 4 ∫₀ᵗ e^(−2τ) y(t−τ) dτ, with y(0) = 3
(h) y″ − y = ∫₀ᵗ sinh τ y(t−τ) dτ, with y(0) = 4   (k) y″ − 4y = 2 ∫₀ᵗ sinh 2τ y(t−τ) dτ, with y(0) = 1
References
UNIT FIVE
VECTOR CALCULUS
Introduction
Vector calculus deals with the application of calculus operations on vectors (vector fields) .We
will often need to evaluate integrals, derivatives, and other operations that use integrals and
derivatives. The rules needed for these evaluations constitute vector calculus. In particular, line,
volume, and surface integration are important, as are directional derivatives. The relations
defined here are very useful in the context of electromagnetics but, even without reference to electromagnetics, we will show that the definitions given here are simple extensions of familiar concepts and that they simplify a number of important aspects of calculation.
We will discuss in particular the ideas of line, surface, and volume integration, and the general
ideas of gradient, divergence, and curl, as well as the divergence and Stokes theorems. These
notions are of fundamental importance for the understanding of electromagnetic fields.
More over, Vector fields have many important applications, as they can be used to represent
many physical quantities: the vector at a point may represent the strength of some force (gravity,
electricity, and magnetism) or a velocity (wind speed or the velocity of some other fluid).
Unit Objectives:
In this section, we are going to deal with the definition of scalar fields and vector fields by
considering various examples.
Section Objectives:
A two-dimensional vector field is a function f that maps each point (x, y) in R2 to a two
dimensional Vector⟨ u , v ⟩ , and similarly a three-dimensional vector field maps (x , y , z) to
⟨ u , v , w ⟩ . Since a vector has no position, we typically indicate a vector field in graphical form by
placing the vector f(x, y) with its tail at (x, y). For such a graph to be readable, the vectors must
be fairly short, which is accomplished by using a different scale for the vectors than for the axes.
Such graphs are thus useful for understanding the sizes of the vectors relative to each other but
not their absolute size.
Definition: If to each point P of a set D ⊆ R 3 (¿ R2) is assigned a scalar f ( P) , then a scalar field
is said to be defined in D and the function f : D ⟶ R is called a scalar function (or a scalar
field). Likewise, if to each point P in D is assigned a vector F ( P)∈ R3 (¿ R 2) then a vector field
is said to be defined in D and the vector-valued function F : D ⟶ R 3 (¿ R2) is called a vector
function (or a vector field).
F ( x 1 , x 2 , … , x n ) =( F 1 ( x 1 , x 2 , … , x n ) , F 2 ( x1 , x 2 , … , x n) , … , F n ( x 1 , x 2 , … , x n ) ), where F 1 , F2 , … , F n
are the components of F . Ifn=2, f (resp. F ) is called a scalar (resp. vector) field in the plane. If
n=3, f (resp. F ) is called a scalar (resp. vector) field in space.
(ii). A scalar field or a vector field arising from geometric or physical considerations must
depend only on the points P where it is defined and not on the particular choice of Cartesian
coordinates.
Example 5.1: The scalar function of position F (x , y , z )=xy z 2 for (x , y , z) inside the unit
sphere x 2+ y 2+ z 2=1 defines a scalar field throughout the unit sphere
Example 5.2 (Euclidean distance): Let D = R³ and f(P) = ‖P P₀‖, the distance of a point P from a fixed point P₀ in space. f(P) defines a scalar field in space. If we introduce a Cartesian coordinate system in which P₀ : (x₀, y₀, z₀) and P : (x, y, z), then
f(P) = √( (x−x₀)² + (y−y₀)² + (z−z₀)² ).
Note that the value of f(P) does not depend on the particular choice of Cartesian coordinate system.
The best way to picture a vector field is to draw the arrow representing the vector F(x, y) starting
at the point( x , y ) of course, it’s impossible to do this for all points ( x , y ) ,but we can gain a
reasonable impression of F by doing it for a few representative points in D as shown in the
figure below. Since F(x, y) is a two-dimensional vector, we can write it in terms of its
component functions.
Example 5.3: A vector field on R² is defined by F(x, y) = −y i + x j. Describe F by sketching some of the vectors F(x, y).
Solution: Since F(1, 0) = j, we draw the vector j = ⟨0, 1⟩ starting at the point (1, 0). Since F(0, 1) = −i, we draw the vector ⟨−1, 0⟩ with starting point (0, 1). Continuing in this way, we calculate several other representative values of F(x, y) in the table and draw the corresponding vectors.
(x , y) F (x , y ) (x , y ) F (x , y )
(1 , 0) ⟨ 0,1 ⟩ (−1 , 0) ⟨ 0 ,−1 ⟩
(2,2) ⟨ −2,2 ⟩ (−2 ,−2) ⟨ 2,−2 ⟩
(3,0) ⟨ 0,3 ⟩ (−3,0) ⟨ 0 ,−3 ⟩
(0,1) ⟨ −1,0 ⟩ (0 ,−1) ⟨ 1,0 ⟩
(−2,2) ⟨ −2 ,−2 ⟩ (2 ,−2) ⟨ 2,2 ⟩
(0,3) ⟨ −3,0 ⟩ (0 ,−3) ⟨ 3,0 ⟩
It appears from the figure that each arrow is tangent to a circle with center the origin. To confirm this, we take the dot product of the position vector x = x i + y j with the vector F(x) = F(x, y):
x · F(x) = ( x i + y j ) · ( −y i + x j ) = −xy + yx = 0.
This shows that F(x, y) is perpendicular to the position vector ⟨x, y⟩ and is therefore tangent to the circle with center the origin and radius |x| = √(x² + y²). Notice also that |F(x, y)| = √(y² + x²) = |x|, so the magnitude of the vector F(x, y) is equal to the radius of the circle.
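A rough plot of this field is easy to generate; the sketch below assumes numpy and matplotlib are available (a tooling assumption, not part of the text) and mirrors the hand-drawn table above.

```python
import numpy as np
import matplotlib.pyplot as plt

# The vector field F(x, y) = -y i + x j on a coarse grid.
x, y = np.meshgrid(np.linspace(-3, 3, 13), np.linspace(-3, 3, 13))
u, v = -y, x

plt.quiver(x, y, u, v)
plt.gca().set_aspect('equal')
plt.title('F(x, y) = -y i + x j')
plt.show()
```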
Some computer algebra systems are capable of plotting vector fields in two or three dimensions.
They give a better impression of the vector field than is possible by hand because the computer
can plot a large number of representative vectors. Figure 5.4 shows a computer plot of the vector
field in Example 1; Figures 5.5 and 5.6 show two other vector fields. Notice that the computer
scales the lengths of the vectors so they are not too long and yet are proportional to their true
lengths.
Example 5.4: Sketch the vector field on R³ given by F(x, y, z) = z k.
Solution: The sketch is shown in the figure below. Notice that all vectors are vertical and point upward above the xy-plane or downward below it. The magnitude increases with the distance from the xy-plane.
Fig 5.7: F(x, y, z) = z k
We were able to draw the vector field in Example 5.4 by hand because of its particularly simple
formula. Most three-dimensional vector fields, however, are virtually impossible to sketch by
hand and so we need to resort to a computer algebra system. Examples are shown in Figures
Fig 5.8 and Fig 5.9. If the vector field in Fig 5.9 represents a velocity field, then a particle would be swept upward and would spiral around the z-axis in the clockwise direction as viewed from above.
Fig 5.8: F(x, y, z) = y i + z j + x k     Fig 5.9: F(x, y, z) = (y/z) i + (x/z) j + (z/4) k
Example 5.5: Newton’s Law of Gravitation states that the magnitude of the gravitational force
between two objects with masses m and M is
mMG
|F|= 2
r
Where r is the distance between the objects and G is the gravitational constant.(this is an
example of an inverse square law .) let us assume that the object with mass M is located at the
origin in R³. (For instance, M could be the mass of the earth and the origin would be at its center.) Let the position vector of the object with mass m be x = ⟨x, y, z⟩. Then r = |x|, so r² = |x|². The gravitational force exerted on this second object acts toward the origin, and the unit vector in this direction is −x/|x|. Therefore the gravitational force acting on the object at x is
F(x) = −( mMG/|x|³ ) x      (∗)
[Physicists often use the notation r instead of x for the position vector, so you may see formula
(¿) written in the form F=−¿ ¿.] The function given by equation ¿ is an example of a vector
field, called the gravitational field, because it associates a vector [the force F ( x )] with every
point x in the space.
Formula (∗) is a compact way of writing the gravitational field, but we can also write it in terms of its component functions by using the fact that x = x i + y j + z k and |x| = √(x² + y² + z²):
F(x, y, z) = −mMGx/( x² + y² + z² )^(3/2) i − mMGy/( x² + y² + z² )^(3/2) j − mMGz/( x² + y² + z² )^(3/2) k.
Exercise 5.1
Section Objectives:
In general, a function is a rule that assigns to each element in the domain an element in the range.
A vector-valued function, or vector function, is simply a function whose domain is a set of real
numbers and whose range is a set of vectors. We are most interested in vector functions r whose
values are three-dimensional vectors. This means that for every number t in the domain of r
there is a unique vectorV 3 in denoted byr (t ).If f (t ), g ( t ) and h ( t ) are the components of the vector
r (t ).then f , gand h real-valued functions called the component functions of r and we can write
The limit of a vector function r is defined by taking the limits of its component functions as
follows.
Definition: if r(t) = ⟨ f(t), g(t), h(t) ⟩, then
lim_(t→a) r(t) = ⟨ lim_(t→a) f(t), lim_(t→a) g(t), lim_(t→a) h(t) ⟩,
provided the limits of the component functions exist.
Solution According to the above definition, the limit of r is the vector whose components are the
limits of the component functions of r :
lim r ( t )=¿ ¿ ]i+¿] j +¿
t →0
Note limits of vector functions obey the same rules as limits of real-valued functions
Definition: A vector function r is continuous at a if
lim_(t→a) r(t) = r(a).
In view of the above definition of limit, we see that r is continuous at a if and only if its component functions f, g and h are continuous at a.
Definition: The derivative r′ of a vector function r is defined in much the same way as for real-valued functions:
dr/dt = r′(t) = lim_(h→0) ( r(t+h) − r(t) )/h.
The following theorem gives us a convenient method for computing the derivative of a vector function r: just differentiate each component of r.
Example 5.7: Find the derivative of r(t) = ( 1 + t³ ) i + t e^(−t) j + sin 2t k.
Solution
According to the above theorem, we differentiate each component of r:
r′(t) = 3t² i + ( 1 − t ) e^(−t) j + 2 cos 2t k.
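The component-wise differentiation of Example 5.7 can be reproduced symbolically; a minimal sympy sketch (the library choice is an assumption, not part of the text):

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Matrix([1 + t**3, t*sp.exp(-t), sp.sin(2*t)])   # the r(t) of Example 5.7

# Differentiate a vector function component by component.
print(r.diff(t))   # Matrix([[3*t**2], [-t*exp(-t) + exp(-t)], [2*cos(2*t)]])
```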
Differentiation Rules
The next theorem shows the differentiation formulas for real-valued functions have their
counterparts for vector-valued functions.
Theorem: suppose u and v are differentiable vector functions, c is a scalar, and f is a real-valued function; then
1. d/dt [ u(t) + v(t) ] = d/dt [ u(t) ] + d/dt [ v(t) ]
2. d/dt [ c u(t) ] = c d/dt [ u(t) ]
3. d/dt [ f(t) u(t) ] = f′(t) u(t) + f(t) u′(t)
4. d/dt [ u(t) · v(t) ] = u′(t) · v(t) + u(t) · v′(t)
5. d/dt [ u(t) × v(t) ] = u′(t) × v(t) + u(t) × v′(t)
6. d/dt [ u( f(t) ) ] = f′(t) u′( f(t) )      (chain rule)
Proof (Exercise)
Integrals
The definite integral of a continuous vector function r(t) can be defined in much the same way as for real-valued functions, except that the integral is a vector. We can express the integral of r in terms of the integrals of its component functions f, g and h as follows:
∫_a^b r(t) dt = lim_(n→∞) Σ_(i=1)^n r(tᵢ*) Δt
= lim_(n→∞) [ ( Σ_(i=1)^n f(tᵢ*) Δt ) i + ( Σ_(i=1)^n g(tᵢ*) Δt ) j + ( Σ_(i=1)^n h(tᵢ*) Δt ) k ]
and so
∫_a^b r(t) dt = ( ∫_a^b f(t) dt ) i + ( ∫_a^b g(t) dt ) j + ( ∫_a^b h(t) dt ) k.
This means that we can evaluate the integral of a vector function by integrating each component function.
Note: 1. We can extend the fundamental theorem of calculus to continuous vector functions as follows:
∫_a^b r(t) dt = R(t) ]_a^b = R(b) − R(a), where R is an antiderivative of r, that is, R′(t) = r(t).
Example 5.8: evaluate ∫₀^(π/2) r(t) dt for r(t) = 2 cos t i + sin t j + 2t k.
Solution: ∫₀^(π/2) r(t) dt = [ 2 sin t i − cos t j + t² k ]₀^(π/2) = 2i + j + (π²/4) k.
Exercises
1. Find the limit.
(b) lim_(t→0⁺) ⟨ arctan t, e^(−2t), ln t / t ⟩
2. Find the derivative of the vector function.
(a) r(t) = ⟨ t², 1−t, √t ⟩   (b) r(t) = ⟨ cos 3t, t, sin 3t ⟩
3. Evaluate the following integrals.
(a) ∫₀^1 ( 16t³ i − 9t² j + 25t⁴ k ) dt
(b) ∫₁^4 ( √t i + t e^(−t) j + (1/t²) k ) dt
Overview:
In this section, we are going to introduce Curves, Arc length and Tangent by considering various
examples.
Section Objectives:
Curves
Vector calculus has important applications to curves and surfaces in physics and geometry. The
application of vector calculus to geometry is a field known as differential geometry.
Differential geometric methods are applied to problems in mechanics, computer-aided as well as
traditional engineering design, geodesy, geography, space travel, and relativity theory.
Bodies that move in space form paths that may be represented by curves C. This and other
applications show the need for parametric representations of C with parameter t, which may
denote time or something else .A typical parametric representation is given by.
r ( t )=[ x ( t ) , y ( t ) , z ( t ) ]=x ( t ) i+ y ( t ) j+ z ( t ) k
Example 5.9: The line is the simplest curve in the plane, as its coordinate functions are linear. Explicitly, the curve
r(t) = p + t v = ( x₀ + t u, y₀ + t v )
is a straight line through the reference point p = r(0) = (x₀, y₀) in the direction v = (u, v).
Here, t is the signed distance from the point r(t) on the line to p, as scaled by ‖v‖.
As shown in the figure above, the vector from p to a point (x, y) on the line must be either in the direction of (u, v) or in the opposite direction. Hence, the cross product of the two vectors must be zero, that is,
( x−x₀, y−y₀ ) × ( u, v ) = 0.
Expansion of the above cross product yields an implicit equation of the line that relates the x and y coordinates of every incident point:
v x − u y − v x₀ + u y₀ = 0.
Example 5.10: sketch and identify the curve defined by the parametric equations x = t² − 2t, y = t + 1.
Solution: Here r(t) = [ x(t), y(t) ] = [ t² − 2t, t + 1 ]. Each value of t gives a point on the curve, as shown in the table. For instance, if t = 0, then x = 0, y = 1 and so the corresponding point is (0, 1). In Fig 5.13 we plot the points (x, y) determined by several values of the parameter and join them to produce a curve.
t :  −2  −1   0   1   2   3   4
x :   8   3   0  −1   0   3   8
y :  −1   0   1   2   3   4   5
Fig 5.13: tabular values of the curve x = t² − 2t, y = t + 1.   Fig 5.14: the graph of the curve x = t² − 2t, y = t + 1.
A particle whose position is given by the parametric equations moves along the curve in the
direction of the arrows as t increases. Notice that the consecutive points marked on the curve
appear at equal time intervals but not at equal distances. That is because the particle slows down
and then speeds up as t increases.
It appears from Fig5.14 that the curve traced out by the particle may be a parabola. This can be
confirmed by eliminating the parameter t as follows. We obtain t= y−1 from the second
equation and substitute into the first equation. This gives
2 2 2
x=t −2 t=( y −1) −2 ( y−1 )= y −4 y +3
And so the curve represented by the given parametric equation is the parabola
2
x= y −4 y +3.
Note: in the example above no restriction was placed on the parameter t, so we assumed that t could be any real number. But sometimes we restrict t to lie in a finite interval. For instance, the parametric curve
x = t² − 2t,  y = t + 1,  0 ≤ t ≤ 4,
shown in Fig 5.15, is the part of the parabola in the above example that starts at the point (0, 1) and ends at the point (8, 5). The arrowhead indicates the direction in which the curve is traced as t increases from 0 to 4.
Fig 5.15: x = t² − 2t, y = t + 1, 0 ≤ t ≤ 4.
In general, the parametric curve $x=f(t),\ y=g(t),\ a\le t\le b$, has initial point $(f(a),g(a))$ and terminal point $(f(b),g(b))$.
Example: Sketch and identify the curve $x=\cos t,\ y=\sin t,\ 0\le t\le 2\pi$.
Solution: If we plot points, it appears that the curve is a circle. We can confirm this impression by eliminating t. Observe that
$$x^{2}+y^{2}=\cos^{2}t+\sin^{2}t=1 .$$
Thus the point (x, y) moves on the unit circle $x^{2}+y^{2}=1$. Notice that in this example the parameter t can be interpreted as the angle (in radians) shown in the figure. As t increases from 0 to $2\pi$, the point $(x,y)=(\cos t,\sin t)$ moves once around the circle in the counterclockwise direction, starting from the point (1, 0).
Example: The vector function $\mathbf r(t)=[a\cos t,\ b\sin t,\ 0]$ represents an ellipse in the xy-plane with center at the origin and principal axes in the directions of the x- and y-axes. In fact, since $\cos^{2}t+\sin^{2}t=1$, we obtain from this representation
$$\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}=1,\qquad z=0 .$$
If $b=a$, the ellipse becomes a circle of radius a.
Fig. 5.17 Circle of the above example          Fig. 5.18 Ellipse of the above example
Example: The curve $\mathbf r(t)=[a\cos t,\ a\sin t,\ ct]$ is called a circular helix; it lies on the cylinder $x^{2}+y^{2}=a^{2}$. If $c>0$, the helix is shaped like a right-handed screw (Fig. 5.19); if $c<0$, it looks like a left-handed screw (Fig. 5.20); if $c=0$, it is a circle.
Fig 5.19 right-handed circular Helix Fig 5.20 Left-handed circular helix
A simple curve is a curve without multiple points, that is, without points at which the curve intersects or touches itself. The circle and the helix are simple curves. Curves that are not simple also occur; an example is $[\sin 2t,\ \cos t,\ 0]$. Can you sketch it?
An arc of a curve is the portion between any two points of the curve. For simplicity, we say
“curve” for curves as well as for arcs.
Arc length
Recall from the application of integration that the length L of a curve C given in the form
y=F ( x ), a ≤ x ≤ b, F being continuous is given by
$$L=\int_a^b\sqrt{1+\Big(\frac{dy}{dx}\Big)^{2}}\;dx$$
Suppose that C can also be described by the parametric equations $x=f(t)$ and $y=g(t)$, $\alpha\le t\le\beta$, where $dx/dt=f'(t)>0$. This means that C is traversed once, from left to right, as t increases from $\alpha$ to $\beta$, with $f(\alpha)=a$ and $f(\beta)=b$. Putting $\dfrac{dy}{dx}=\dfrac{dy/dt}{dx/dt}$ into the formula above and using the Substitution Rule, we obtain
$$L=\int_a^b\sqrt{1+\Big(\frac{dy}{dx}\Big)^{2}}\;dx
=\int_\alpha^\beta\sqrt{1+\Big(\frac{dy/dt}{dx/dt}\Big)^{2}}\;\frac{dx}{dt}\,dt,$$
$$L=\int_\alpha^\beta\sqrt{\Big(\frac{dx}{dt}\Big)^{2}+\Big(\frac{dy}{dt}\Big)^{2}}\;dt .$$
If the curve is in space, that is, if $\mathbf r(t)=(x(t),\,y(t),\,z(t))$ where $x=f(t)$, $y=g(t)$ and $z=h(t)$, then the arc length is
$$L=\int_a^b\sqrt{\Big(\frac{dx}{dt}\Big)^{2}+\Big(\frac{dy}{dt}\Big)^{2}+\Big(\frac{dz}{dt}\Big)^{2}}\;dt .$$
Thus, using Leibniz notation, we have the following result, which has the same form as the two formulas above:
$$L=\int_a^b\sqrt{\Big(\frac{dx}{dt}\Big)^{2}+\Big(\frac{dy}{dt}\Big)^{2}}\;dt
\qquad\text{or}\qquad
L=\int_a^b\sqrt{\Big(\frac{dx}{dt}\Big)^{2}+\Big(\frac{dy}{dt}\Big)^{2}+\Big(\frac{dz}{dt}\Big)^{2}}\;dt .$$
Example 5.13: Find the arc length of the curve traced out by the end points of the vector
function
$$\mathbf r(t)=(2t,\ \ln t,\ t^{2}),\qquad 1\le t\le e .$$
Solution: Here $x=f(t)=2t$, $y=g(t)=\ln t$ and $z=h(t)=t^{2}$. By the theorem above,
$$L=\int_a^b\sqrt{\Big(\frac{dx}{dt}\Big)^{2}+\Big(\frac{dy}{dt}\Big)^{2}+\Big(\frac{dz}{dt}\Big)^{2}}\;dt,
\qquad\text{where}\quad \frac{dx}{dt}=2,\ \ \frac{dy}{dt}=\frac1t,\ \ \frac{dz}{dt}=2t .$$
$$\Rightarrow\ L=\int_1^e\sqrt{(2)^{2}+\Big(\frac1t\Big)^{2}+(2t)^{2}}\;dt
=\int_1^e\sqrt{4+\frac{1}{t^{2}}+4t^{2}}\;dt
=\int_1^e\sqrt{\frac{4t^{4}+4t^{2}+1}{t^{2}}}\;dt$$
$$=\int_1^e\sqrt{\Big(\frac{2t^{2}+1}{t}\Big)^{2}}\;dt
=\int_1^e\frac{2t^{2}+1}{t}\,dt
=\int_1^e\Big(2t+\frac1t\Big)dt
=\big[t^{2}\big]_1^e+\big[\ln t\big]_1^e=(e^{2}-1)+1 .$$
Hence $L=e^{2}$.
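As a quick numerical cross-check of this result, the arc-length integral can be approximated with SciPy. The sketch below is illustrative only (it is not part of the original example); the helper name speed is ours.

import numpy as np
from scipy.integrate import quad

# speed ||r'(t)|| for r(t) = (2t, ln t, t^2)
def speed(t):
    return np.sqrt(2.0**2 + (1.0 / t)**2 + (2.0 * t)**2)

L, _ = quad(speed, 1.0, np.e)      # numerical arc length on [1, e]
print(L, np.e**2)                  # both are approximately 7.389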
Example 5.13: find the length of one arch of the cycloid x=r (θ−sinθ), y=r (1−cosθ ).
Solution: one arch of cycloid is described by the parameter interval, 0 ≤ θ ≤2 π .
Since $\dfrac{dx}{d\theta}=r(1-\cos\theta)$ and $\dfrac{dy}{d\theta}=r\sin\theta$, we have
$$L=\int_0^{2\pi}\sqrt{\Big(\frac{dx}{d\theta}\Big)^{2}+\Big(\frac{dy}{d\theta}\Big)^{2}}\;d\theta
=\int_0^{2\pi}\sqrt{\big(r(1-\cos\theta)\big)^{2}+\big(r\sin\theta\big)^{2}}\;d\theta$$
$$=\int_0^{2\pi}\sqrt{r^{2}\big(1-2\cos\theta+\cos^{2}\theta+\sin^{2}\theta\big)}\;d\theta
=\int_0^{2\pi}\sqrt{r^{2}(2-2\cos\theta)}\;d\theta
=r\int_0^{2\pi}\sqrt{2(1-\cos\theta)}\;d\theta .$$
To evaluate this integral we use the identity $\sin^{2}x=\tfrac12(1-\cos 2x)$ with $\theta=2x$, which gives $1-\cos\theta=2\sin^{2}(\theta/2)$. Since $0\le\theta\le 2\pi$, we have $0\le\theta/2\le\pi$ and so $\sin(\theta/2)\ge 0$. Therefore
$$\sqrt{2(1-\cos\theta)}=\sqrt{4\sin^{2}(\theta/2)}=2\,\big|\sin(\theta/2)\big|=2\sin(\theta/2),$$
and hence
$$L=r\int_0^{2\pi}2\sin\Big(\frac\theta2\Big)d\theta
=2r\Big[-2\cos\Big(\frac\theta2\Big)\Big]_0^{2\pi}=2r\,[\,2+2\,]=8r .$$
Tangents
In the preceding section we saw that some curves defined by parametric equations $x=f(t)$ and $y=g(t)$ can also be expressed, by eliminating the parameter, in the form $y=F(x)$. That is, if $f'$ is continuous and $f'(t)\neq 0$ for $a\le t\le b$, then the parametric curve $x=f(t)$, $y=g(t)$, $a\le t\le b$, can be put in the form $y=F(x)$. If we substitute $x=f(t)$ and $y=g(t)$ into the equation $y=F(x)$, we get
$$g(t)=F(f(t)),$$
and differentiating both sides with the Chain Rule gives $g'(t)=F'(f(t))\,f'(t)$, so that
$$F'(x)=\frac{g'(t)}{f'(t)}\,. \qquad(1)$$
Since the slope of the tangent to the curve $y=F(x)$ at $(x,F(x))$ is $F'(x)$, equation (1) enables us to find tangents to parametric curves without having to eliminate the parameter. Using Leibniz notation, we can rewrite equation (1) in an easily remembered form:
$$\frac{dy}{dx}=\frac{dy/dt}{dx/dt}\qquad\text{if}\quad \frac{dx}{dt}\neq 0 . \qquad(2)$$
It can be seen from equation (2) that the curve has a horizontal tangent when $dy/dt=0$ (provided that $dx/dt\neq 0$) and a vertical tangent when $dx/dt=0$ (provided that $dy/dt\neq 0$). This information is useful for sketching parametric curves.
It is also useful to consider $d^{2}y/dx^{2}$. This can be found by replacing y by $dy/dx$ in equation (2):
$$\frac{d^{2}y}{dx^{2}}=\frac{d}{dx}\Big(\frac{dy}{dx}\Big)=\frac{\dfrac{d}{dt}\Big(\dfrac{dy}{dx}\Big)}{\dfrac{dx}{dt}}\;.$$
Example 5.14: A curve C is defined by the parametric equation x=t 2, y=t 3−3 t .
(a) Show that C has two tangents at the point (3,0) and find their equations
(b) Find the points on C where the tangent is horizontal or vertical
Solution
(a) Notice that $y=t^{3}-3t=t(t^{2}-3)=0$ when $t=0$ or $t=\pm\sqrt3$. Therefore the point (3, 0) on C arises from two values of the parameter, $t=\sqrt3$ and $t=-\sqrt3$; this indicates that C crosses itself at (3, 0). Since
$$\frac{dy}{dx}=\frac{dy/dt}{dx/dt}=\frac{3t^{2}-3}{2t}=\frac32\Big(t-\frac1t\Big),$$
the slope of the tangent when $t=\pm\sqrt3$ is $dy/dx=\pm 6/(2\sqrt3)=\pm\sqrt3$, so the equations of the tangents at (3, 0) are
$$y=\sqrt3\,(x-3)\qquad\text{and}\qquad y=-\sqrt3\,(x-3).$$
(b) C has a horizontal tangent when $dy/dx=0$, that is, when $dy/dt=0$ and $dx/dt\neq 0$. Since $dy/dt=3t^{2}-3$, this happens when $t^{2}=1$, that is, $t=\pm1$; the corresponding points on C are (1, −2) and (1, 2). C has a vertical tangent when $dx/dt=2t=0$, that is, $t=0$ (note that $dy/dt\neq 0$ there); the corresponding point on C is (0, 0).
Example 5.15
π
(a) Find the tangent to the cycloid x=r ( θ−sinθ), y=r (1−cosθ ) at the point where θ=
3
(b) At what points is the tangent horizontal? When it is vertical?
Solution
(a) The slope of the tangent line is
$$\frac{dy}{dx}=\frac{dy/d\theta}{dx/d\theta}=\frac{r\sin\theta}{r(1-\cos\theta)}=\frac{\sin\theta}{1-\cos\theta}\;.$$
When $\theta=\pi/3$, we have
$$x=r\Big(\frac\pi3-\sin\frac\pi3\Big)=r\Big(\frac\pi3-\frac{\sqrt3}{2}\Big),\qquad
y=r\Big(1-\cos\frac\pi3\Big)=\frac r2,\qquad
\frac{dy}{dx}=\frac{\sin(\pi/3)}{1-\cos(\pi/3)}=\frac{\sqrt3/2}{1-\tfrac12}=\sqrt3 .$$
Therefore the tangent line has slope $\sqrt3$ and its equation is
$$y-\frac r2=\sqrt3\Big(x-\frac{r\pi}{3}+\frac{r\sqrt3}{2}\Big)
\qquad\text{or}\qquad
\sqrt3\,x-y=r\Big(\frac{\pi}{\sqrt3}-2\Big).$$
The tangent is sketched in Fig 5.22 below
Fig 5.22 tangents of x=r ( θ−sinθ), y=r (1−cosθ )
(b) The tangent is horizontal whendy /dx=0, which occurs when sin θ=0 and 1−cos θ ≠0 , that is,
θ=( 2 n−1 ) π , n an integer. The corresponding point on the cycloid is
( ( 2 n−1 ) πr ,2 r ).
When θ=2 nπ , both dx /d θ and dy /d θ are 0. It appears from the graph that there are vertical
tangents at those points. We can verify this by using L’Hospital’s rule as follows:
$$\lim_{\theta\to 2n\pi^{+}}\frac{dy}{dx}
=\lim_{\theta\to 2n\pi^{+}}\frac{\sin\theta}{1-\cos\theta}
=\lim_{\theta\to 2n\pi^{+}}\frac{\cos\theta}{\sin\theta}=\infty .$$
A similar computation shows that $dy/dx\to-\infty$ as $\theta\to 2n\pi^{-}$, so indeed there are vertical tangents when $\theta=2n\pi$, that is, when $x=2n\pi r$.
Exercise 5.2
1. Find parametric equations for the circle with center (h , k ) and radius r
4. In each of the following find an equation of the tangent to the curve at the point corresponding to the
given value of the parameter.
Section Objectives:
Upon successful completion of this chapter, the student will be able to:
The gradient is one of the simplest and most important types of vector field. We may have noticed that taking a partial derivative is very much like taking the derivative in a particular direction: the partial derivative $\partial f/\partial x$ measures the rate of increase, or the slope, of the function f in the x direction. Since there are only three coordinate directions in three-dimensional space, there is a neat and elegant way of summarizing all the information about how the function is increasing: we simply put all the partial derivatives of the function into a vector. This vector is known as the gradient of the function.
Definition: Let $f:\mathbb R^{3}\to\mathbb R$ be a scalar field, that is, a function of three variables. The gradient of f, denoted $\nabla f$, is the vector field given by
$$\nabla f=\Big\langle\frac{\partial f}{\partial x},\frac{\partial f}{\partial y},\frac{\partial f}{\partial z}\Big\rangle
=\frac{\partial f}{\partial x}\,\mathbf i+\frac{\partial f}{\partial y}\,\mathbf j+\frac{\partial f}{\partial z}\,\mathbf k,
\qquad\text{where}\quad
\nabla=\frac{\partial}{\partial x}\,\mathbf i+\frac{\partial}{\partial y}\,\mathbf j+\frac{\partial}{\partial z}\,\mathbf k .$$
$\nabla f$ is a vector field.
$\nabla f$ measures the rate of increase of the scalar function f in each of the three coordinate directions.
$\nabla f$ points in the direction in which f increases the most.
In two variables, $\nabla f(x,y)=f_x(x,y)\,\mathbf i+f_y(x,y)\,\mathbf j$; in three variables, $\nabla f(x,y,z)=f_x\,\mathbf i+f_y\,\mathbf j+f_z\,\mathbf k$.
For example, if $f(x,y,z)=x^{2}+y^{2}+\tfrac32 z^{2}$, then $\nabla f=2x\,\mathbf i+2y\,\mathbf j+3z\,\mathbf k$.
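As a small illustrative sketch (not part of the original notes), the gradient can also be computed symbolically with SymPy; the scalar field used below is only an example.

import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y**2 + sp.Rational(3, 2) * z**2      # example scalar field
grad_f = [sp.diff(f, v) for v in (x, y, z)]     # list of gradient components
print(grad_f)                                   # [2*x, 2*y, 3*z]  ->  2x i + 2y j + 3z k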
Divergence and curl are two measurements of vector fields that are very useful in a variety of
applications. Both are most easily understood by thinking of the vector field as representing a
flow of a liquid or gas; that is, each vector in the vector field should be interpreted as a velocity
vector. Roughly speaking, divergence measures the tendency of the fluid to collect or disperse at
a point, and curl measures the tendency of the fluid to swirl around the point. Divergence is a
scalar, that is, a single number, while curl is itself a vector. The magnitude of the curl measures
how much the fluid is swirling, the direction indicates the axis around which it tends to swirl.
Definition: Let F be a vector field given by $\mathbf F=f\,\mathbf i+g\,\mathbf j+h\,\mathbf k$, where f, g, and h are scalar functions. The divergence of F is
$$\operatorname{div}\mathbf F=\frac{\partial f}{\partial x}+\frac{\partial g}{\partial y}+\frac{\partial h}{\partial z},$$
and the curl of F is
$$\operatorname{curl}\mathbf F=\nabla\times\mathbf F
=\Big\langle\frac{\partial h}{\partial y}-\frac{\partial g}{\partial z},\ \frac{\partial f}{\partial z}-\frac{\partial h}{\partial x},\ \frac{\partial g}{\partial x}-\frac{\partial f}{\partial y}\Big\rangle .$$
A useful mnemonic for the divergence and curl is the following: let
$$\nabla=\Big\langle\frac{\partial}{\partial x},\frac{\partial}{\partial y},\frac{\partial}{\partial z}\Big\rangle,$$
that is, we pretend that $\nabla$ is a vector with rather odd-looking entries. We can then think of the gradient as
$$\nabla f=\Big\langle\frac{\partial}{\partial x},\frac{\partial}{\partial y},\frac{\partial}{\partial z}\Big\rangle f
=\Big\langle\frac{\partial f}{\partial x},\frac{\partial f}{\partial y},\frac{\partial f}{\partial z}\Big\rangle,$$
that is, we simply multiply f into the vector.
The divergence and curl can now be defined in terms of this same odd vector $\nabla$ by using the dot product and the cross product. The divergence of a vector field $\mathbf F=\langle f,g,h\rangle$ is
$$\nabla\cdot\mathbf F=\Big\langle\frac{\partial}{\partial x},\frac{\partial}{\partial y},\frac{\partial}{\partial z}\Big\rangle\cdot\langle f,g,h\rangle
=\frac{\partial f}{\partial x}+\frac{\partial g}{\partial y}+\frac{\partial h}{\partial z}\;.$$
The curl of F is
$$\nabla\times\mathbf F=\begin{vmatrix}\mathbf i&\mathbf j&\mathbf k\\[2pt]\dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial y}&\dfrac{\partial}{\partial z}\\[2pt] f&g&h\end{vmatrix}
=\Big\langle\frac{\partial h}{\partial y}-\frac{\partial g}{\partial z},\ \frac{\partial f}{\partial z}-\frac{\partial h}{\partial x},\ \frac{\partial g}{\partial x}-\frac{\partial f}{\partial y}\Big\rangle .$$
Here are two simple but useful facts about divergence and curl.
i. $\nabla\cdot(\nabla\times\mathbf F)=0$. In words, this says that the divergence of the curl is zero.
ii. $\nabla\times(\nabla f)=\mathbf 0$. That is, the curl of a gradient is the zero vector. Recalling that gradients are conservative vector fields, this says that the curl of a conservative vector field is the zero vector. Under suitable conditions, it is also true that if the curl of F is 0, then F is conservative.
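A short symbolic check of these two identities, added here as an illustrative sketch (the particular fields phi and F below are arbitrary choices, not taken from the notes), can be written with SymPy's vector module.

import sympy as sp
from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

phi = x**2 * sp.sin(y) + y * z                       # arbitrary scalar field
F = (x * y) * N.i + (y * z) * N.j + (z * x) * N.k    # arbitrary vector field

print(sp.simplify(divergence(curl(F))))              # 0        -> div(curl F) = 0
print(sp.simplify(curl(gradient(phi))))              # 0 vector -> curl(grad f) = 0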
Example: Consider the vector field
$$\mathbf F(x,y,z)=2xy\,\mathbf i+(x^{2}+z^{2})\,\mathbf j+2yz\,\mathbf k .$$
Is F irrotational?
Solution: The curl of F is given by
$$\operatorname{curl}\mathbf F(x,y,z)=\nabla\times\mathbf F
=\begin{vmatrix}\mathbf i&\mathbf j&\mathbf k\\[2pt]\dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial y}&\dfrac{\partial}{\partial z}\\[2pt] 2xy&x^{2}+z^{2}&2yz\end{vmatrix}
=(2z-2z)\,\mathbf i-(0-0)\,\mathbf j+(2x-2x)\,\mathbf k=\mathbf 0 .$$
Because $\operatorname{curl}\mathbf F=\mathbf 0$, F is irrotational.
Example 5.19: Find the divergence at (2, 1, −1) of the vector field
$$\mathbf F(x,y,z)=x^{3}y^{2}z\,\mathbf i+x^{2}z\,\mathbf j+x^{2}y\,\mathbf k .$$
Solution: The divergence of F is
$$\operatorname{div}\mathbf F(x,y,z)=\frac{\partial}{\partial x}\big[x^{3}y^{2}z\big]+\frac{\partial}{\partial y}\big[x^{2}z\big]+\frac{\partial}{\partial z}\big[x^{2}y\big]=3x^{2}y^{2}z .$$
At the point (2, 1, −1) the divergence is
$$\operatorname{div}\mathbf F(2,1,-1)=3(2)^{2}(1)^{2}(-1)=-12 .$$
Piecewise Smooth Curves
A classic property of gravitational fields is that, subject to certain physical constraints, the work
done by gravity on an object moving between two points in the field is independent of the path
taken by the object. One of the constraints is that path must be a piecewise smooth curve. Recall
that a plane curve C given by
r ( t )=x ( t ) i+ y ( t ) j , a ≤ t ≤ b
is smooth if
dx dy
and
dt dt
are continuous on [a , b] and not simultaneously 0 on ( a , b ) . Similarly, a space curve C given
$\mathbf r(t)=x(t)\,\mathbf i+y(t)\,\mathbf j+z(t)\,\mathbf k,\qquad a\le t\le b$
is smooth if
dx dy dz
, and
dt dt dt
are continuous on [a, b] and not simultaneously 0 on (a, b). A curve C is piecewise smooth if the interval [a, b] can be partitioned into a finite number of subintervals, on each of which C is smooth.
Example 5.19: Find a piecewise smooth parameterizations of the graph of C shown in figure
below
Solution: Since C consists of three line segmentsC 1, C 2∧C 3, you can construct a smooth
parameterization for each segment and piece them together by making the last t -value in C i
Correspond to the first t-value in C i+1 , as follows.
C 1 : x ( t )=0 , y ( t )=2 t , z ( t )=0 , 0 ≤ t ≤1
C 2 : x (t )=t−1 , y ( t )=2, z ( t )=0 ,1 ≤ t ≤2
C 3 : x ( t )=1 , y ( t )=2, z (t )=t−2, 2 ≤t ≤3
Section Objectives:
Line Integrals
Introduction: In this section , we consider some new concepts of line integrals. This new kinds
of integrals will be defined as limits of sums in the same general way that single integrals are
defined . An ordinary single integral
b
∫ f (x )dx
a
is an integral of a function which is defined along a line segment (an interval of a co –ordinate
axis). There is a corresponding kind of integral for a function which is defined along a curve.
Such an integral might well be called a curvilinear integral; the usual name is line integral, where
line means, in general, a curved line.
Definition: If f is defined in a region containing a smooth curve C of finite length, then the line integral of f along C is given by
$$\int_C f(x,y)\,ds=\lim_{\lVert\Delta\rVert\to 0}\sum_{i=1}^{n}f(x_i,y_i)\,\Delta s_i\qquad\text{(plane)}.$$
If C is given by $\mathbf r(t)=x(t)\,\mathbf i+y(t)\,\mathbf j$, use the fact that
$$ds=\lVert\mathbf r'(t)\rVert\,dt=\sqrt{[x'(t)]^{2}+[y'(t)]^{2}}\;dt .$$
Example: Evaluate $\int_C (x^{2}-y-3z)\,ds$, where C is the line segment shown in Figure 5.1 from (0, 0, 0) to (1, 2, −1).
Figure 5.1
Solution: Parameterize C as $\mathbf r(t)=t\,\mathbf i+2t\,\mathbf j-t\,\mathbf k$, $0\le t\le 1$, so that $\lVert\mathbf r'(t)\rVert=\sqrt6$ and, on C, $x^{2}-y-3z=t^{2}-2t+3t=t^{2}+t$. Then
$$\int_C (x^{2}-y-3z)\,ds=\sqrt6\int_0^1(t^{2}+t)\,dt=\sqrt6\Big(\frac13+\frac12\Big)=\frac{5\sqrt6}{6}\;.$$
Figure 5.2
Example: Evaluate $\int_C x\,ds$, where C is the piecewise smooth curve shown in Figure 5.2 consisting of $C_1$, the segment of the line $y=x$ from (0, 0) to (1, 1), followed by $C_2$, the arc of the parabola $y=x^{2}$ from (1, 1) back to (0, 0).
Solution: Begin by integrating up the line $y=x$, using the parameterization
$$C_1:\ x=t,\ y=t,\qquad 0\le t\le 1,$$
so that $ds=\sqrt2\,dt$ and $\int_{C_1}x\,ds=\int_0^1 t\sqrt2\,dt=\dfrac{\sqrt2}{2}$.
For $C_2$ use the parameterization $x=1-t,\ y=(1-t)^{2},\ 0\le t\le 1$, so that $ds=\sqrt{1+4(1-t)^{2}}\,dt$ and
$$\int_{C_2}x\,ds=\int_0^1(1-t)\sqrt{1+4(1-t)^{2}}\;dt
=\Big[-\frac{1}{12}\big(1+4(1-t)^{2}\big)^{3/2}\Big]_0^1
=\frac{1}{12}\big(5^{3/2}-1\big).$$
Consequently,
$$\int_C x\,ds=\int_{C_1}x\,ds+\int_{C_2}x\,ds
=\frac{\sqrt2}{2}+\frac{1}{12}\big(5^{3/2}-1\big)\approx 1.56 .$$
For a space curve with
$$\lVert\mathbf r'(t)\rVert=\sqrt{[x'(t)]^{2}+[y'(t)]^{2}+[z'(t)]^{2}}=\sqrt{1+4t+t^{2}},$$
it follows that the line integral becomes
$$\frac12\int_0^2 2(t+2)\big(1+4t+t^{2}\big)^{1/2}dt
=\Big[\frac13\big(1+4t+t^{2}\big)^{3/2}\Big]_0^2
=\frac13\big(13\sqrt{13}-1\big)\approx 15.29 .$$
where $(x_i,y_i,z_i)$ is a point in the i-th subarc. Consequently, the total work done is given by the following integral:
$$W=\int_C \mathbf F\cdot\mathbf T\,ds .$$
This line integral appears in other contexts and is the basis of the following definition of the line integral of a vector field. Note that in this definition
$$\mathbf F\cdot\mathbf T\,ds
=\mathbf F\cdot\frac{\mathbf r'(t)}{\lVert\mathbf r'(t)\rVert}\,\lVert\mathbf r'(t)\rVert\,dt
=\mathbf F\cdot\mathbf r'(t)\,dt=\mathbf F\cdot d\mathbf r .$$
Definition: Let F be a continuous vector field defined on a smooth curve C given by $\mathbf r(t)$, $a\le t\le b$. The line integral of F on C is given by
$$\int_C \mathbf F\cdot d\mathbf r=\int_a^b \mathbf F\big(x(t),y(t),z(t)\big)\cdot\mathbf r'(t)\,dt .$$
Example: Find the work done by the force field $\mathbf F(x,y,z)=-\tfrac12x\,\mathbf i-\tfrac12y\,\mathbf j+\tfrac14\mathbf k$ on a particle moving along the helix $\mathbf r(t)=\cos t\,\mathbf i+\sin t\,\mathbf j+t\,\mathbf k$ for $0\le t\le 3\pi$.
Solution: On C, $\mathbf F=-\tfrac12\cos t\,\mathbf i-\tfrac12\sin t\,\mathbf j+\tfrac14\mathbf k$ and $\mathbf r'(t)=-\sin t\,\mathbf i+\cos t\,\mathbf j+\mathbf k$, so
$$W=\int_0^{3\pi}\Big(-\frac12\cos t\,\mathbf i-\frac12\sin t\,\mathbf j+\frac14\mathbf k\Big)\cdot\big(-\sin t\,\mathbf i+\cos t\,\mathbf j+\mathbf k\big)\,dt$$
$$=\int_0^{3\pi}\Big(\frac12\sin t\cos t-\frac12\sin t\cos t+\frac14\Big)dt
=\int_0^{3\pi}\frac14\,dt=\frac14\big[t\big]_0^{3\pi}=\frac{3\pi}{4}\;.$$
Theorem (Fundamental Theorem of Line Integrals)
Let C be a piecewise smooth curve lying in an open region R and given by
r ( t )=x ( t ) i + y ( t ) j , a ≤ t ≤ b .
If F ( x , y ) =M i+ N j is conservative in R , and M and N are continuous in R ,
Then
$$\int_C\mathbf F\cdot d\mathbf r=\int_C\nabla f\cdot d\mathbf r
=f\big(x(b),y(b)\big)-f\big(x(a),y(a)\big).$$
Proof: A proof is provided only for a smooth curve; for piecewise smooth curves the procedure is carried out separately on each smooth portion. Because F is conservative,
$$\mathbf F(x,y)=\nabla f(x,y)=f_x(x,y)\,\mathbf i+f_y(x,y)\,\mathbf j,$$
and it follows that
$$\int_C\mathbf F\cdot d\mathbf r=\int_a^b\mathbf F\big(x(t),y(t)\big)\cdot\mathbf r'(t)\,dt
=\int_a^b\Big(f_x\,\frac{dx}{dt}+f_y\,\frac{dy}{dt}\Big)dt
=\int_a^b\frac{d}{dt}\,f\big(x(t),y(t)\big)\,dt
=f\big(x(b),y(b)\big)-f\big(x(a),y(a)\big).$$
The last step is an application of the Fundamental Theorem of Calculus.
In space, the Fundamental Theorem of Line Integrals takes the following form. Let C be a piecewise smooth curve lying in an open region Q and given by
$$\mathbf r(t)=x(t)\,\mathbf i+y(t)\,\mathbf j+z(t)\,\mathbf k,\qquad a\le t\le b .$$
If $\mathbf F(x,y,z)=M\,\mathbf i+N\,\mathbf j+P\,\mathbf k$ is conservative and M, N, and P are continuous, then
$$\int_C\mathbf F\cdot d\mathbf r=\int_C\nabla f\cdot d\mathbf r
=f\big(x(b),y(b),z(b)\big)-f\big(x(a),y(a),z(a)\big),$$
where $\mathbf F(x,y,z)=\nabla f(x,y,z)$.
The Fundamental Theorem of Line Integrals states that if the vector field F is conservative, then the line integral between any two points is simply the difference in the values of the potential function f at these points.
Example 5.24: Using the Fundamental Theorem of Line Integrals
Evaluate $\int_C\mathbf F\cdot d\mathbf r$, where C is a piecewise smooth curve from (−1, 4) to (1, 2) and
$$\mathbf F(x,y)=2xy\,\mathbf i+(x^{2}-y)\,\mathbf j .$$
Figure 5.4
Solution: F is the gradient of f, where
$$f(x,y)=x^{2}y-\frac{y^{2}}{2}+k .$$
Consequently, F is conservative, and by the Fundamental Theorem of Line Integrals it follows that
$$\int_C\mathbf F\cdot d\mathbf r=f(1,2)-f(-1,4)
=\Big[(1)^{2}(2)-\frac{2^{2}}{2}\Big]-\Big[(-1)^{2}(4)-\frac{4^{2}}{2}\Big]=0-(-4)=4 .$$
Note that it is unnecessary to include the constant k as part of f, because it is canceled by the subtraction.
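A quick symbolic sketch of the same reasoning, added here for illustration (not part of the original notes): SymPy can confirm that F is conservative and reconstruct the potential f.

import sympy as sp

x, y = sp.symbols('x y')
M, N = 2*x*y, x**2 - y                    # components of F = M i + N j

print(sp.diff(N, x) - sp.diff(M, y))      # 0  ->  F is conservative
f = sp.integrate(M, x)                    # x**2*y, potential up to a function of y
f += sp.integrate(N - sp.diff(f, y), y)   # adds -y**2/2
print(f.subs({x: 1, y: 2}) - f.subs({x: -1, y: 4}))   # 4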
Green’s Theorem
We now come to the first of three important theorems that extend the Fundamental Theorem of
Calculus to higher dimensions. (The Fundamental Theorem of Line Integrals has already done
this in one way, but in that case we were still dealing with an essentially one-dimensional
integral.) They all share with the Fundamental Theorem the following rather vague description:
To compute a certain sort of integral over a region, we may do a computation on the boundary
of the region that involves one fewer integration. Note that this does indeed describe the
Fundamental Theorem of Calculus and the Fundamental Theorem of Line Integrals: to compute
a single integral over an interval, we do a computation on the boundary (the endpoints) that
involves one fewer integrations, namely, no integrations at all.
In this section, we will study Green’s Theorem, named after the English mathematician George
Green (1793-1841). This theorem states that the value of a double integral over a simply
connected plane region R is determined by the value of a line integral around the boundary of
R . A curve C given by r ( t )=x ( t ) i+ y ( t ) j , where a ≤ t ≤ b , is simple if it does not cross itself,
that is , r ( c ) ≠ r (d) for all c and d in the open interval (a , b). A plane region R is simply
connected if every simple closed curve in R encloses only points that are in R .
Theorem (Green's Theorem): Let R be a simply connected region whose boundary C is a piecewise smooth simple closed curve, oriented counterclockwise (so that R always lies to the left of C). If M and N have continuous first partial derivatives in an open region containing R, then
$$\int_C M\,dx+N\,dy=\iint_R\Big(\frac{\partial N}{\partial x}-\frac{\partial M}{\partial y}\Big)dA . \qquad\blacksquare$$
To indicate that an integral $\int_C$ is taken over a closed curve in the counterclockwise direction, we usually write $\oint_C$. We also use the notation $\partial D$ to mean the boundary of D.
Proof: A proof is given only for a region that is both vertically simple and horizontally simple, as shown in the figure below.
Figure
Writing the vertically simple region as $a\le x\le b$, $f_1(x)\le y\le f_2(x)$, the boundary integral of M dx is
$$\int_C M\,dx=\int_a^b M\big(x,f_1(x)\big)\,dx+\int_b^a M\big(x,f_2(x)\big)\,dx
=\int_a^b\Big[M\big(x,f_1(x)\big)-M\big(x,f_2(x)\big)\Big]dx .$$
On the other hand,
$$\iint_R\frac{\partial M}{\partial y}\,dA=\int_a^b\!\int_{f_1(x)}^{f_2(x)}\frac{\partial M}{\partial y}\,dy\,dx
=\int_a^b M(x,y)\Big|_{f_1(x)}^{f_2(x)}dx
=\int_a^b\Big[M\big(x,f_2(x)\big)-M\big(x,f_1(x)\big)\Big]dx .$$
Consequently, $\displaystyle\int_C M\,dx=-\iint_R\frac{\partial M}{\partial y}\,dA$.
Similarly, you can use $g_1(y)$ and $g_2(y)$, the horizontally simple description of R, to show that $\displaystyle\int_C N\,dy=\iint_R\frac{\partial N}{\partial x}\,dA$. Adding the integrals $\int_C M\,dx$ and $\int_C N\,dy$, you obtain the conclusion stated in the theorem. ▄
Example 5.26:
Use Green’s Theorem to evaluate the line integral
$$\int_C y^{3}\,dx+\big(x^{3}+3xy^{2}\big)\,dy,$$
where C is the path from (0, 0) to (1, 1) along the graph of $y=x^{3}$ and from (1, 1) to (0, 0) along the graph of $y=x$, as shown in the figure below.
C is simple and closed, and the region R always lies to the left of C
Figure 5.5
Solution:
Since $M=y^{3}$ and $N=x^{3}+3xy^{2}$, it follows that
$$\frac{\partial N}{\partial x}=3x^{2}+3y^{2}\qquad\text{and}\qquad\frac{\partial M}{\partial y}=3y^{2}.$$
Applying Green's Theorem, we then have
$$\int_C y^{3}\,dx+\big(x^{3}+3xy^{2}\big)\,dy=\iint_R\Big(\frac{\partial N}{\partial x}-\frac{\partial M}{\partial y}\Big)dA$$
$$=\int_0^1\!\!\int_{x^{3}}^{x}\big[(3x^{2}+3y^{2})-3y^{2}\big]\,dy\,dx
=\int_0^1\!\!\int_{x^{3}}^{x}3x^{2}\,dy\,dx
=\int_0^1 3x^{2}\,y\Big|_{x^{3}}^{x}\,dx$$
$$=\int_0^1\big(3x^{3}-3x^{5}\big)\,dx
=\Big[\frac{3x^{4}}{4}-\frac{x^{6}}{2}\Big]_0^1=\frac14 .$$
Example 5.27: While subject to the force
$$\mathbf F(x,y)=y^{3}\,\mathbf i+(x^{3}+3xy^{2})\,\mathbf j,$$
a particle travels once around the circle of radius 3 shown in the figure below. Using Green's Theorem, find the work done by F.
Figure 5.6
Solution: From Example 5.26 (using Green's Theorem), we have
$$\int_C y^{3}\,dx+\big(x^{3}+3xy^{2}\big)\,dy=\iint_R 3x^{2}\,dA .$$
In polar coordinates, with $0\le r\le 3$ and $0\le\theta\le 2\pi$,
$$W=\iint_R 3x^{2}\,dA=\int_0^{2\pi}\!\!\int_0^{3}3(r\cos\theta)^{2}\,r\,dr\,d\theta
=3\int_0^{2\pi}\cos^{2}\theta\;\frac{r^{4}}{4}\Big|_0^{3}\,d\theta
=\frac{243}{4}\int_0^{2\pi}\cos^{2}\theta\,d\theta$$
$$=\frac{243}{8}\int_0^{2\pi}(1+\cos2\theta)\,d\theta
=\frac{243}{8}\Big[\theta+\frac{\sin2\theta}{2}\Big]_0^{2\pi}
=\frac{243\pi}{4}\;.$$
Note: When evaluating line integrals over closed curves, remember that for conservative vector fields (those for which $\partial N/\partial x=\partial M/\partial y$), the value of the line integral is 0. This is easily seen from the statement of Green's Theorem:
$$\int_C M\,dx+N\,dy=\iint_R\Big(\frac{\partial N}{\partial x}-\frac{\partial M}{\partial y}\Big)dA=0 .$$
For instance, for any closed curve C,
$$\int_C y^{3}\,dx+3xy^{2}\,dy=0,$$
because here $M=y^{3}$ and $N=3xy^{2}$, so that $\partial N/\partial x=3y^{2}=\partial M/\partial y$.
Example 5.29: Using a line integral, find the area of the ellipse
$$\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}=1 .$$
Solution: We can induce a counterclockwise orientation on the elliptical path by letting
$$x=a\cos t,\qquad y=b\sin t,\qquad 0\le t\le 2\pi .$$
So, using the line-integral formula for area, $A=\tfrac12\oint_C(x\,dy-y\,dx)$,
$$A=\frac12\int_C x\,dy-y\,dx
=\frac12\int_0^{2\pi}\big[(a\cos t)(b\cos t)-(b\sin t)(-a\sin t)\big]dt
=\frac{ab}{2}\int_0^{2\pi}(\cos^{2}t+\sin^{2}t)\,dt
=\frac{ab}{2}\big[t\big]_0^{2\pi}=\pi ab .$$
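The same computation can be checked symbolically; the short SymPy sketch below is an illustration added here (it simply repeats the parameterization used in the example).

import sympy as sp

t, a, b = sp.symbols('t a b', positive=True)
x, y = a * sp.cos(t), b * sp.sin(t)                 # counterclockwise parameterization
A = sp.Rational(1, 2) * sp.integrate(x * sp.diff(y, t) - y * sp.diff(x, t), (t, 0, 2 * sp.pi))
print(sp.simplify(A))                               # pi*a*b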
For $\mathbf F=M\,\mathbf i+N\,\mathbf j$ (with components that do not depend on z),
$$(\operatorname{curl}\mathbf F)\cdot\mathbf k
=\Big[-\frac{\partial N}{\partial z}\,\mathbf i+\frac{\partial M}{\partial z}\,\mathbf j
+\Big(\frac{\partial N}{\partial x}-\frac{\partial M}{\partial y}\Big)\mathbf k\Big]\cdot\mathbf k
=\frac{\partial N}{\partial x}-\frac{\partial M}{\partial y}\;.$$
With appropriate conditions on F, C, and R, we can therefore write Green's Theorem in the vector form
$$\oint_C\mathbf F\cdot d\mathbf r=\oint_C\mathbf F\cdot\mathbf T\,ds
=\iint_R(\operatorname{curl}\mathbf F)\cdot\mathbf k\,dA .$$
The extension of this vector form of Green's Theorem to surfaces in space produces Stokes's Theorem, which we will discuss in the next section.
For the second vector form of Green's Theorem, assume the same conditions on F, C, and R. Using the arc length parameter s for C, we have $\mathbf r(s)=x(s)\,\mathbf i+y(s)\,\mathbf j$, so a unit tangent vector T to the curve C is given by $\mathbf r'(s)=\mathbf T=x'(s)\,\mathbf i+y'(s)\,\mathbf j$. Using the figure below,
figure
T =cosθ i+ sinθ j
we can see that the outward unit normal vector N can then be written as
$$\mathbf N=\sin\theta\,\mathbf i-\cos\theta\,\mathbf j=y'(s)\,\mathbf i-x'(s)\,\mathbf j .$$
Consequently, for $\mathbf F(x,y)=M\,\mathbf i+N\,\mathbf j$, we can apply Green's Theorem to obtain
$$\oint_C\mathbf F\cdot\mathbf N\,ds
=\int_a^b\Big(M\,\frac{dy}{ds}-N\,\frac{dx}{ds}\Big)ds
=\int_C M\,dy-N\,dx
=\int_C -N\,dx+M\,dy$$
$$=\iint_R\Big(\frac{\partial M}{\partial x}+\frac{\partial N}{\partial y}\Big)dA\quad\text{(Green's Theorem)}
=\iint_R\operatorname{div}\mathbf F\,dA .$$
Therefore,
$$\oint_C\mathbf F\cdot\mathbf N\,ds=\iint_R\operatorname{div}\mathbf F\,dA .$$
The extension of this form to three dimensions is called the Divergence Theorem.
Exercises 3.3
$\displaystyle\int_C (y-x)\,dx+(2x-y)\,dy$
1. $\displaystyle\int_C 2xy\,dx+(x+y)\,dy$, where C is the boundary of the region lying between the graphs of $y=0$ and $y=1-x^{2}$.
2. $\displaystyle\int_C y^{2}\,dx+xy\,dy$, where C is the boundary of the region lying between the graphs of $y=0$, $y=\sqrt x$, and $x=9$.
Section Objectives:
Definition: Let S be a surface given by z=g ( x , y ) and let R be its projection on to the xy−¿
plane. Suppose that g , g x ,∧g y are continuous at all points in R and that f is defined on S.
$$\iint_S f(x,y,z)\,dS=\lim_{\lVert\Delta\rVert\to 0}\sum_{i=1}^{n}f(x_i,y_i,z_i)\,\Delta S_i,$$
where $\Delta S_i\approx\sqrt{1+\big[g_x(x_i,y_i)\big]^{2}+\big[g_y(x_i,y_i)\big]^{2}}\,\Delta A_i$ is the surface area of the patch of S lying over $\Delta A_i$. Provided the limit of this sum as $\lVert\Delta\rVert$ approaches 0 exists, it is called the surface integral of f over S.
Theorem 5.6.1 (Evaluation): Let S be a surface with equation $z=g(x,y)$ and let R be its projection onto the xy-plane. If $g,\ g_x,$ and $g_y$ are continuous at all points in R and f is defined on S, then
$$\iint_S f(x,y,z)\,dS=\iint_R f\big(x,y,g(x,y)\big)\sqrt{1+\big[g_x(x,y)\big]^{2}+\big[g_y(x,y)\big]^{2}}\;dA .$$
Remark 1: If S is the graph of $y=g(x,z)$ and R is its projection onto the xz-plane, then
$$\iint_S f(x,y,z)\,dS=\iint_R f\big(x,g(x,z),z\big)\sqrt{1+\big[g_x(x,z)\big]^{2}+\big[g_z(x,z)\big]^{2}}\;dA .$$
Remark 2: If S is the graph of $x=g(y,z)$ and R is its projection onto the yz-plane, then
$$\iint_S f(x,y,z)\,dS=\iint_R f\big(g(y,z),y,z\big)\sqrt{1+\big[g_y(y,z)\big]^{2}+\big[g_z(y,z)\big]^{2}}\;dA .$$
Example: Evaluate the surface integral
$$\iint_S\big(y^{2}+2yz\big)\,dS,$$
where S is the first-octant portion of the plane $2x+y+2z=6$ (Fig. 5.6.1.2).
Solution: Begin by writing S as $z=\tfrac12(6-2x-y)$, that is, $g(x,y)=\tfrac12(6-2x-y)$.
Using the partial derivatives $g_x(x,y)=-1$ and $g_y(x,y)=-\tfrac12$, you can write
$$\sqrt{1+\big[g_x(x,y)\big]^{2}+\big[g_y(x,y)\big]^{2}}=\sqrt{1+1+\frac14}=\frac32 .$$
Using Theorem 5.6.1 we have
$$\iint_S\big(y^{2}+2yz\big)\,dS
=\iint_R\Big[y^{2}+2y\cdot\frac12(6-2x-y)\Big]\Big(\frac32\Big)dA
=\int_0^3\!\!\int_0^{2(3-x)}3y(3-x)\,dy\,dx$$
$$=6\int_0^3(3-x)^{3}\,dx
=\Big[-\frac32(3-x)^{4}\Big]_0^3=\frac{243}{2}\;.$$
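A numerical cross-check of the value 243/2, added here as an illustrative sketch (the lambda below just restates the integrand over the projection R used in the example):

from scipy.integrate import dblquad

# integrand on R: [y^2 + y*(6 - 2x - y)] * 3/2, with 0 <= y <= 2*(3 - x), 0 <= x <= 3
integrand = lambda y, x: (y**2 + y * (6 - 2*x - y)) * 1.5
val, _ = dblquad(integrand, 0, 3, lambda x: 0, lambda x: 2 * (3 - x))
print(val)   # 121.5 = 243/2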
Example: Evaluate the surface integral
$$\iint_S (x+z)\,dS,$$
where S is the first-octant portion of the cylinder $y^{2}+z^{2}=9$ between $x=0$ and $x=4$ (Fig. 5.6.1.3).
Solution: Project S onto the xy-plane, so that $z=g(x,y)=\sqrt{9-y^{2}}$, and obtain
$$\sqrt{1+\big[g_x(x,y)\big]^{2}+\big[g_y(x,y)\big]^{2}}
=\sqrt{1+\Big(\frac{-y}{\sqrt{9-y^{2}}}\Big)^{2}}
=\frac{3}{\sqrt{9-y^{2}}}\;.$$
Now Theorem 5.6.1 does not apply directly, because $g_y$ is not continuous when $y=3$. However, you can apply the theorem for $0\le b<3$ and then take the limit as b approaches 3, as follows:
$$\iint_S (x+z)\,dS
=\lim_{b\to 3^{-}}\int_0^{b}\!\!\int_0^{4}\big(x+\sqrt{9-y^{2}}\big)\,\frac{3}{\sqrt{9-y^{2}}}\;dx\,dy
=\lim_{b\to 3^{-}}3\int_0^{b}\!\!\int_0^{4}\Big(\frac{x}{\sqrt{9-y^{2}}}+1\Big)dx\,dy$$
$$=\lim_{b\to 3^{-}}3\int_0^{b}\Big[\frac{x^{2}}{2\sqrt{9-y^{2}}}+x\Big]_0^{4}dy
=\lim_{b\to 3^{-}}3\int_0^{b}\Big(\frac{8}{\sqrt{9-y^{2}}}+4\Big)dy
=\lim_{b\to 3^{-}}3\Big[4y+8\arcsin\frac y3\Big]_0^{b}$$
$$=\lim_{b\to 3^{-}}3\Big(4b+8\arcsin\frac b3\Big)=36+24\Big(\frac\pi2\Big)=36+12\pi .$$
Parametric Surface and Surface Integrals
Note: ds∧dS can be written as ds=‖r ' (t)‖dt and dS=‖r u (u , v )× r v (u , v )‖dA .
Example: Evaluate the same integral, $\iint_S (x+z)\,dS$, using a parametric representation of S (Fig. 5.5.1.4).
Solution: In parametric form the surface is given by
$$\mathbf r(x,\theta)=x\,\mathbf i+3\cos\theta\,\mathbf j+3\sin\theta\,\mathbf k,
\qquad 0\le x\le 4,\ \ 0\le\theta\le\frac\pi2 .$$
To evaluate the surface integral in parametric form we compute
$$\mathbf r_x\times\mathbf r_\theta
=\begin{vmatrix}\mathbf i&\mathbf j&\mathbf k\\ 1&0&0\\ 0&-3\sin\theta&3\cos\theta\end{vmatrix}
=-3\cos\theta\,\mathbf j-3\sin\theta\,\mathbf k,
\qquad \lVert\mathbf r_x\times\mathbf r_\theta\rVert=\sqrt{9\cos^{2}\theta+9\sin^{2}\theta}=3 .$$
Therefore
$$\iint_S (x+z)\,dS
=\int_0^4\!\!\int_0^{\pi/2}(x+3\sin\theta)(3)\,d\theta\,dx
=\int_0^4\Big[3x\theta-9\cos\theta\Big]_0^{\pi/2}dx
=\int_0^4\Big(\frac{3\pi}{2}x+9\Big)dx
=\Big[\frac{3\pi}{4}x^{2}+9x\Big]_0^4=12\pi+36 .$$
Orientation of a Surface
Unit normal vectors are used to induce an orientation on a surface S in space. A surface is called orientable if a unit normal vector N can be defined at every non-boundary point of S in such a way that the normal vectors vary continuously over the surface S. If this is possible, S is called an oriented surface.
Most common surfaces, such as spheres, paraboloids, ellipsoids, and planes, are orientable. Moreover, for an orientable surface, the gradient vector provides a convenient way to find a unit normal vector. That is, for an orientable surface S given by $z=g(x,y)$, let $G(x,y,z)=z-g(x,y)$; then S can be oriented by either of the unit normal vectors
$$\mathbf N=\frac{\nabla G(x,y,z)}{\lVert\nabla G(x,y,z)\rVert}\ \ (\text{upward})
\qquad\text{or}\qquad
\mathbf N=\frac{-\nabla G(x,y,z)}{\lVert\nabla G(x,y,z)\rVert}\ \ (\text{downward}).$$
Fig. 5.6.1.6
For a surface given parametrically by $\mathbf r(u,v)$, the unit normal vectors are given by
$$\mathbf N=\frac{\mathbf r_u\times\mathbf r_v}{\lVert\mathbf r_u\times\mathbf r_v\rVert}\;.$$
Note: If the orientable surface is given by $y=g(x,z)$ or $x=g(y,z)$, you can use the gradient vector $\nabla G(x,y,z)=-g_x(x,z)\,\mathbf i+\mathbf j-g_z(x,z)\,\mathbf k$ with $G(x,y,z)=y-g(x,z)$ (and analogously for $x=g(y,z)$).
Flux Integrals
To motivate the definition, suppose F is the velocity field of a fluid flowing through an oriented surface S, and consider a small patch of S with area ΔS over which F is nearly constant. Then the amount of fluid crossing this region per unit of time is approximately $(\mathbf F\cdot\mathbf N)\,\Delta S$, the volume of a column with base ΔS and height F·N.
Consequently, the volume of fluid crossing the surface S per unit of time (called the flux of F across S) is given by the surface integral in the following definition.
Definition: Let $\mathbf F(x,y,z)=M\,\mathbf i+N\,\mathbf j+P\,\mathbf k$, where M, N and P have continuous first partial derivatives on the surface S oriented by a unit normal vector N. The flux integral of F across S is given by
$$\iint_S\mathbf F\cdot\mathbf N\,dS .$$
Geometrically, a flux integral is the surface integral over S of the normal component of F. If $\rho(x,y,z)$ is the density of the fluid at (x, y, z), the flux integral
$$\iint_S\rho\,\mathbf F\cdot\mathbf N\,dS$$
represents the mass of the fluid flowing across S per unit of time. To evaluate a flux integral for a surface given by $z=g(x,y)$, let $G(x,y,z)=z-g(x,y)$. Then N dS can be written as follows:
$$\mathbf N\,dS=\frac{\nabla G(x,y,z)}{\lVert\nabla G(x,y,z)\rVert}\,dS
=\frac{\nabla G(x,y,z)}{\sqrt{1+[g_x]^{2}+[g_y]^{2}}}\,\sqrt{1+[g_x]^{2}+[g_y]^{2}}\;dA
=\nabla G(x,y,z)\,dA .$$
Evaluation: Let S be an oriented surface given by $z=g(x,y)$ and let R be its projection onto the xy-plane. Then
$$\iint_S\mathbf F\cdot\mathbf N\,dS=\iint_R\mathbf F\cdot\big[-g_x(x,y)\,\mathbf i-g_y(x,y)\,\mathbf j+\mathbf k\big]\,dA,$$
$$\iint_S\mathbf F\cdot\mathbf N\,dS=\iint_R\mathbf F\cdot\big[g_x(x,y)\,\mathbf i+g_y(x,y)\,\mathbf j-\mathbf k\big]\,dA .$$
For the first integral the surface is oriented upward, and for the second integral the surface is oriented downward.
Example 5.33: Using a flux integral to find the rate of mass flow.
Let S be the portion of the paraboloid $z=g(x,y)=4-x^{2}-y^{2}$ lying above the xy-plane, oriented by an upward unit normal vector, and suppose a fluid of constant density ρ flows through S with velocity field $\mathbf F(x,y,z)=x\,\mathbf i+y\,\mathbf j+z\,\mathbf k$. Find the rate of mass flow through S.
Solution: Since $-g_x=2x$ and $-g_y=2y$, the rate of mass flow is
$$\rho\iint_S\mathbf F\cdot\mathbf N\,dS
=\rho\iint_R\big[2x^{2}+2y^{2}+(4-x^{2}-y^{2})\big]\,dA
=\rho\iint_R\big(4+x^{2}+y^{2}\big)\,dA$$
$$=\rho\int_0^{2\pi}\!\!\int_0^{2}(4+r^{2})\,r\,dr\,d\theta\quad(\text{polar coordinates})
=\rho\int_0^{2\pi}12\,d\theta=24\pi\rho .$$
Hence a representation of S is
S :r= [ u , u2 , v ] ( 0 ≤u ≤ 2,0 ≤ v ≤ 3 ) .
Hence F ( s ) ⦁ N =6 u v2 −6 ,
$$\iint_S\mathbf F\cdot\mathbf n\,dA
=\int_0^3\!\!\int_0^2\big(6uv^{2}-6\big)\,du\,dv
=\int_0^3\big[3u^{2}v^{2}-6u\big]_0^{2}\,dv
=\int_0^3\big(12v^{2}-12\big)\,dv
=\big[4v^{3}-12v\big]_{v=0}^{3}=108-36=72 .$$
Example 5.35: Find the flux integral $\iint_S\mathbf F\cdot\mathbf N\,dA$ when $\mathbf F=(x^{2},0,3y^{2})$ and S is the portion of the plane $x+y+z=1$ in the first octant (Fig. 5.6.1.10), represented parametrically by
$$\mathbf r(u,v)=[\,u,\ v,\ 1-u-v\,].$$
Fig. 5.6.1.10
We obtain the first-octant portion S of this plane by restricting $x=u$ and $y=v$ to the projection R of S in the xy-plane. R is the triangle bounded by the two coordinate axes and the straight line $x+y=1$, obtained from $x+y+z=1$ by setting $z=0$; thus $0\le x\le 1-y$, $0\le y\le 1$.
Here $\mathbf N=\mathbf r_u\times\mathbf r_v=(1,0,-1)\times(0,1,-1)=(1,1,1)$ and, on S, $\mathbf F\cdot\mathbf N=u^{2}+3v^{2}$, so
$$\iint_S\mathbf F\cdot\mathbf N\,dA=\int_0^1\!\!\int_0^{1-v}\big(u^{2}+3v^{2}\big)\,du\,dv
=\int_0^1\Big[\frac{(1-v)^{3}}{3}+3v^{2}(1-v)\Big]dv=\frac{1}{12}+\frac{1}{4}=\frac13 .$$
2. Find the flux of F through S,∬ F ⦁ N dS,where N is the upward unit normal vector to
S
In this section we discuss another “big” integral theorem, the divergence theorem, which
transforms surface integrals into triple integrals. So let us begin with a review of the latter.
Triple integrals can be transformed into surface integrals over the boundary surface of a region in
space and conversely. Such a transformation is of practical interest because one of the two kinds
of integral is often simpler than the other. It also helps in establishing fundamental equations in
fluid flow, heat conduction, etc., as we shall see. The transformation is done by the divergence
theorem, which involves the divergence of a vector function
F=[ F1 , F2 , F 3 ]=F 1 i+ F 2 j+ F 3 k
namely,
$$\operatorname{div}\mathbf F=\frac{\partial F_1}{\partial x}+\frac{\partial F_2}{\partial y}+\frac{\partial F_3}{\partial z}\;.$$
Theorem (Divergence Theorem): Let T be a closed bounded region in space whose boundary is a piecewise smooth orientable surface S. Let F(x, y, z) be a vector function that is continuous and has continuous first partial derivatives in some domain containing T. Then
$$\iiint_T\operatorname{div}\mathbf F\,dV=\iint_S\mathbf F\cdot\mathbf n\,dA .$$
In components of $\mathbf F=[F_1,F_2,F_3]$ and of the outer unit normal vector $\mathbf n=[\cos\alpha,\cos\beta,\cos\gamma]$ of S, this becomes
$$\iiint_T\Big(\frac{\partial F_1}{\partial x}+\frac{\partial F_2}{\partial y}+\frac{\partial F_3}{\partial z}\Big)dx\,dy\,dz
=\iint_S\big(F_1\cos\alpha+F_2\cos\beta+F_3\cos\gamma\big)\,dA .$$
Fig. 5.6.2.1
Example: Evaluate $I=\iiint_T 5x^{2}\,dx\,dy\,dz$, where T is the solid cylinder $x^{2}+y^{2}\le a^{2}$, $0\le z\le b$.
The form of the region suggests that we introduce polar coordinates r, θ defined by $x=r\cos\theta,\ y=r\sin\theta$ (thus cylindrical coordinates r, θ, z). Then the volume element is $dx\,dy\,dz=r\,dr\,d\theta\,dz$, and we obtain
$$I=\iiint_T 5x^{2}\,dx\,dy\,dz
=\int_{z=0}^{b}\int_{\theta=0}^{2\pi}\int_{r=0}^{a}5\,r^{2}\cos^{2}\theta\;r\,dr\,d\theta\,dz$$
$$=5\int_{z=0}^{b}\int_{\theta=0}^{2\pi}\frac{a^{4}}{4}\cos^{2}\theta\,d\theta\,dz
=5\int_{z=0}^{b}\frac{a^{4}}{4}\,\pi\,dz=\frac{5\pi}{4}\,a^{4}b .$$
Now on S (the sphere of radius 2, parameterized by $x=2\cos v\cos u,\ y=2\cos v\sin u,\ z=2\sin v$) we have $x=2\cos v\cos u$ and $z=2\sin v$, so that $\mathbf F=[7x,0,-z]$ becomes, on S,
$$\mathbf F(S)=[\,14\cos v\cos u,\ 0,\ -2\sin v\,],$$
and, with the normal $\mathbf N=4\cos v\,[\cos v\cos u,\ \cos v\sin u,\ \sin v]$,
$$\mathbf F(S)\cdot\mathbf N=(14\cos v\cos u)(4\cos^{2}v\cos u)+(-2\sin v)(4\cos v\sin v)
=56\cos^{3}v\cos^{2}u-8\cos v\sin^{2}v .$$
The integral of $\cos v\sin^{2}v$ equals $(\sin^{3}v)/3$, and that of $\cos^{3}v=\cos v(1-\sin^{2}v)$ equals $\sin v-(\sin^{3}v)/3$. On S we have $-\pi/2\le v\le\pi/2$ and $0\le u\le 2\pi$, so substituting these limits we get
$$56\pi\Big(2-\frac23\Big)-16\pi\cdot\frac23=64\pi .$$
Exercises 3.3
The divergence theorem has many important applications: In fluid flow, it helps characterize
sources and sinks of fluids. In heat flow, it leads to the heat equation. In potential theory, it gives
properties of the solutions of Laplace’s equation. In this section, we assume that the region T and
its boundary surface S are such that the divergence theorem applies.
5.7 Stokes’s Theorem; Applications
Overview
In this subtopic we are going to learn that we can transform surface integrals into line integrals
and conversely, line integrals into surface integrals is called Stokes’s Theorem and we will see
examples.
Section Objectives:
A second higher-dimensional analog of Green's Theorem is called Stokes's Theorem, after the English mathematical physicist George Gabriel Stokes. In addition to making contributions to physics, Stokes worked with infinite series and differential equations, as well as with the integration result presented in this section.
Stokes's Theorem gives the relationship between a surface integral over an oriented surface S and a line integral along a closed space curve C forming the boundary of S, as shown in Fig. 5.7.1.1. The positive direction along C is counterclockwise relative to the normal vector N. That is, if you imagine grasping the normal vector N with your right hand, with your thumb pointing in the direction of N, your fingers will point in the positive direction along C, as shown in Fig. 5.7.1.2.
Fig. 5.7.1.1 Fig. 5.7.1.2
Let S be a piecewise smooth oriented surface with unit normal vector N, and let the boundary of S be a piecewise smooth simple closed curve C. Let F(x, y, z) be a continuous vector function that has continuous first partial derivatives in a domain in space containing S. Then
$$\int_C\mathbf F\cdot d\mathbf r=\iint_S(\operatorname{curl}\mathbf F)\cdot\mathbf N\,dS,
\qquad\text{where}\quad
\operatorname{curl}\mathbf F=\begin{vmatrix}\mathbf i&\mathbf j&\mathbf k\\[2pt]\dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial y}&\dfrac{\partial}{\partial z}\\[2pt] F_1&F_2&F_3\end{vmatrix}.$$
OR
Theorem 5.7.1: Stokes’s Theorem (Transformation between Surface and Line Integrals)
Let S be a piecewise smooth oriented surface in space and let the boundary of S be a piecewise smooth
simple closed curve C. Let F (x , y , z ) be a continuous vector function that has continuous first partial
derivatives in a domain in space containing S. Then
$$\iint_S(\operatorname{curl}\mathbf F)\cdot\mathbf n\,dA=\oint_C\mathbf F\cdot\mathbf r'\,ds \qquad(*)$$
Here n is a unit normal vector of S and, depending on n, the integration around C is taken in the sense shown in Fig. 5.6.4. Furthermore, $\mathbf r'=d\mathbf r/ds$ is the unit tangent vector and s the arc length of C. In components, formula (*) becomes
$$\iint_S\Big[\Big(\frac{\partial F_3}{\partial y}-\frac{\partial F_2}{\partial z}\Big)N_1
+\Big(\frac{\partial F_1}{\partial z}-\frac{\partial F_3}{\partial x}\Big)N_2
+\Big(\frac{\partial F_2}{\partial x}-\frac{\partial F_1}{\partial y}\Big)N_3\Big]\,du\,dv
=\oint_C\big(F_1\,dx+F_2\,dy+F_3\,dz\big). \qquad(**)$$
Fig. 5.7.1.3
Example: Let $\mathbf F(x,y,z)=-y^{2}\,\mathbf i+z\,\mathbf j+x\,\mathbf k$, and let S be the portion of the plane $2x+2y+z=6$ in the first octant, oriented upward. Evaluate $\iint_S(\operatorname{curl}\mathbf F)\cdot\mathbf N\,dS$.
Solution:
$$\operatorname{curl}\mathbf F=\begin{vmatrix}\mathbf i&\mathbf j&\mathbf k\\[2pt]\dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial y}&\dfrac{\partial}{\partial z}\\[2pt]-y^{2}&z&x\end{vmatrix}
=-\mathbf i-\mathbf j+2y\,\mathbf k .$$
Considering $z=6-2x-2y=g(x,y)$, you can use Theorem 5.6.2 with an upward normal vector, $\mathbf N\,dA=(-g_x,-g_y,1)\,dA=(2,2,1)\,dA$, to obtain
$$\iint_S(\operatorname{curl}\mathbf F)\cdot\mathbf N\,dS
=\int_0^3\!\!\int_0^{3-y}(-2-2+2y)\,dx\,dy
=\int_0^3\!\!\int_0^{3-y}(2y-4)\,dx\,dy$$
$$=\int_0^3(2y-4)(3-y)\,dy
=\int_0^3\big(-2y^{2}+10y-12\big)\,dy
=\Big[-\frac{2y^{3}}{3}+5y^{2}-12y\Big]_0^3=-9 .$$
Example: Let $\mathbf F(x,y,z)=-y\sqrt{x^{2}+y^{2}}\,\mathbf i+x\sqrt{x^{2}+y^{2}}\,\mathbf j$, and let S be the disk $x^{2}+y^{2}\le 4$ in a horizontal plane, oriented by $\mathbf N=\mathbf k$. Then
$$\operatorname{curl}\mathbf F=\begin{vmatrix}\mathbf i&\mathbf j&\mathbf k\\[2pt]\dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial y}&\dfrac{\partial}{\partial z}\\[2pt]-y\sqrt{x^{2}+y^{2}}&x\sqrt{x^{2}+y^{2}}&0\end{vmatrix}
=3\sqrt{x^{2}+y^{2}}\;\mathbf k .$$
Letting $\mathbf N=\mathbf k$, you have
$$\iint_S(\operatorname{curl}\mathbf F)\cdot\mathbf N\,dS=\iint_R 3\sqrt{x^{2}+y^{2}}\;dA
=\int_0^{2\pi}\!\!\int_0^2 3r\cdot r\,dr\,d\theta
=\int_0^{2\pi}\big[r^{3}\big]_0^2\,d\theta
=\int_0^{2\pi}8\,d\theta=16\pi .$$
Example: Verification of Stokes's Theorem. Let $\mathbf F=[y,z,x]$ and let S be the paraboloid $z=f(x,y)=1-(x^{2}+y^{2})$, $z\ge 0$, whose boundary curve C is the circle $\mathbf r(s)=[\cos s,\sin s,0]$. Its unit tangent vector is $\mathbf r'(s)=[-\sin s,\cos s,0]$, and the function $\mathbf F=[y,z,x]$ on C is $\mathbf F(\mathbf r(s))=[\sin s,0,\cos s]$, so
$$\oint_C\mathbf F\cdot\mathbf r'\,ds=\int_0^{2\pi}\big[(\sin s)(-\sin s)+0+0\big]\,ds=-\pi .$$
We now consider the surface integral. We have $F_1=y,\ F_2=z,\ F_3=x$, so that $\operatorname{curl}\mathbf F=[-1,-1,-1]$ and, with $\mathbf N=\operatorname{grad}(z-f)=[2x,2y,1]$, $(\operatorname{curl}\mathbf F)\cdot\mathbf N=-2x-2y-1$.
Now $\mathbf n\,dA=\mathbf N\,dx\,dy$ with x, y instead of u, v. Using polar coordinates r, θ defined by $x=r\cos\theta,\ y=r\sin\theta$, and denoting the projection of S onto the xy-plane by R, we thus obtain
$$\iint_S(\operatorname{curl}\mathbf F)\cdot\mathbf n\,dA
=\int_{\theta=0}^{2\pi}\!\int_{r=0}^{1}(-2r\cos\theta-2r\sin\theta-1)\,r\,dr\,d\theta
=\int_{\theta=0}^{2\pi}\Big(-\frac23(\cos\theta+\sin\theta)-\frac12\Big)d\theta
=0+0-\frac12(2\pi)=-\pi ,$$
in agreement with the line integral.
Example 5.41: Evaluation of a Line Integral by Stokes's Theorem
Evaluate $\oint_C\mathbf F\cdot\mathbf r'\,ds$, where C is the circle $x^{2}+y^{2}=4,\ z=-3$, oriented counterclockwise as seen by a person standing at the origin, and, with respect to right-handed Cartesian coordinates, $\mathbf F=[\,y,\ xz^{3},\ -zy^{3}\,]$.
Solution: As a surface S bounded by C we can take the plane circular disk $x^{2}+y^{2}\le 4$ in the plane $z=-3$.
Hence $(\operatorname{curl}\mathbf F)\cdot\mathbf n$ is simply the component of $\operatorname{curl}\mathbf F$ in the positive z-direction. Since F with $z=-3$ has the components $F_1=y,\ F_2=-27x,\ F_3=3y^{3}$, we thus obtain
$$(\operatorname{curl}\mathbf F)\cdot\mathbf n=\frac{\partial F_2}{\partial x}-\frac{\partial F_1}{\partial y}=-27-1=-28 ,$$
so that, by Stokes's Theorem,
$$\oint_C\mathbf F\cdot\mathbf r'\,ds=\iint_S(\operatorname{curl}\mathbf F)\cdot\mathbf n\,dA
=-28\cdot(\text{area of the disk})=-28\,(4\pi)=-112\pi .$$
Note: If $\operatorname{curl}\mathbf F=\mathbf 0$ throughout a region R, then the rotation of F about each unit normal N is 0.
Path dependence of line integrals is practically and theoretically so important that we formulate it as a theorem.
Theorem (Path Independence): The line integral
$$\int_C\mathbf F(\mathbf r)\cdot d\mathbf r=\int_C\big(F_1\,dx+F_2\,dy+F_3\,dz\big)\qquad(1)$$
is path independent in a domain D, that is, for every pair of endpoints A, B in D, (1) has the same value for all paths in D from A to B, if and only if
$$\mathbf F=\operatorname{grad}f,\qquad\text{that is,}\qquad
F_1=\frac{\partial f}{\partial x},\ \ F_2=\frac{\partial f}{\partial y},\ \ F_3=\frac{\partial f}{\partial z},\qquad(2)$$
for some function f in D.
Proof: We assume that (2) holds for some function f in D and show that this implies path independence. Let C be any path in D from any point A to any point B in D, given by $\mathbf r(t)=[x(t),y(t),z(t)]$, where $a\le t\le b$. Then from (2) and the chain rule,
$$\int_C\mathbf F\cdot d\mathbf r
=\int_a^b\Big(\frac{\partial f}{\partial x}\frac{dx}{dt}+\frac{\partial f}{\partial y}\frac{dy}{dt}+\frac{\partial f}{\partial z}\frac{dz}{dt}\Big)dt
=\int_a^b\frac{df}{dt}\,dt
=f\big[x(t),y(t),z(t)\big]\Big|_{t=a}^{t=b}
=f(B)-f(A).$$
Example 5.42: Show that the line integral of $\mathbf F=[2x,\ 2y,\ 4z]$ is path independent in any domain in space, and find its value in the integration from A: (0, 0, 0) to B: (2, 2, 2).
Solution: With $f(x,y,z)=x^{2}+y^{2}+2z^{2}$ we have
$$\frac{\partial f}{\partial x}=2x=F_1,\qquad \frac{\partial f}{\partial y}=2y=F_2,\qquad \frac{\partial f}{\partial z}=4z=F_3 .$$
Hence the integral is independent of path according to Theorem 1, and its value is $f(B)-f(A)=f(2,2,2)-f(0,0,0)=4+4+8=16$.
If you want to check this, use the most convenient path $C:\ \mathbf r(t)=[t,t,t],\ 0\le t\le 2$, on which $\mathbf F(\mathbf r(t))=[2t,2t,4t]$, so that $\mathbf F(\mathbf r(t))\cdot\mathbf r'(t)=2t+2t+4t=8t$, and integration from 0 to 2 gives
$$\int_0^2 8t\,dt=\Big[\frac{8t^{2}}{2}\Big]_0^2=16 .$$
Example 5.43: Evaluate the integral $I=\displaystyle\int_C\big(3x^{2}\,dx+2yz\,dy+y^{2}\,dz\big)$ from A: (0, 1, 2) to B: (1, −1, 7).
Solution: We look for a potential f with
$$f_x=F_1=3x^{2},\qquad f_y=F_2=2yz,\qquad f_z=F_3=y^{2}.$$
From $f_x=3x^{2}$ we get $f=x^{3}+g(y,z)$; then $f_y=g_y=2yz$ gives $g=y^{2}z+h(z)$, so $f=x^{3}+y^{2}z+h(z)$; finally $f_z=y^{2}+h'(z)=y^{2}$ gives $h'=0$, and we may take $f=x^{3}+y^{2}z$. Hence the integral is path independent and
$$I=f(B)-f(A)=f(1,-1,7)-f(0,1,2)=(1+7)-(0+2)=6 .$$
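As a small illustrative check of this answer (added here, not part of the original example), the line integral can be evaluated directly along the straight segment from A to B, which must give the same value because the form is exact:

import sympy as sp

x, y, z, t = sp.symbols('x y z t')
F = sp.Matrix([3*x**2, 2*y*z, y**2])

A, B = sp.Matrix([0, 1, 2]), sp.Matrix([1, -1, 7])
r = A + t * (B - A)                                   # straight path r(t), 0 <= t <= 1
Ft = F.subs({x: r[0], y: r[1], z: r[2]})
I = sp.integrate(Ft.dot(r.diff(t)), (t, 0, 1))
print(I)                                              # 6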
1. $\displaystyle\int\Big(\tfrac12\cos\tfrac12 x\,\cos 2y\,dx-2\sin\tfrac12 x\,\sin 2y\,dy\Big)$ from $\big(\tfrac\pi2,\ \pi\big)$
2. $\displaystyle\int_{(4,0)}^{(6,1)} e^{4y}\big(2x\,dx+4x^{2}\,dy\big)$
3. $\displaystyle\int_{(1,1,0)}^{(2,\ 1/2,\ \pi/2)}\ \cdots$
4. $\displaystyle\int_{(0,0,0)}^{(1,1,1)} e^{x^{2}+y^{2}+z^{2}}\big(x\,dx+y\,dy+z\,dz\big)$
Vector calculus deals with the application of calculus operations on vectors (vector).
if D is a subset of Rn , then a scalar field in D is a function
f : D ⟶ R and a vector field in D is a function F : D ⟶ R n.
A curve in R2(or R3 ) is a differentiable function r :[a , b ]⟶ R2(or R3). The initial point is
r ( a) and the final point is r ( b).the domain of the curve is the interval[a , b].
If a curve C is described by the parametric equation x=f (t ) , y=g(t) and z ¿ h ( t ) , a ≤ t ≤ b,
where f ' , g' andh ' are continuous on [a , b] and C is traversed exactly once as t increases
from a to b , then the length L of C is.
$$L=\int_a^b\sqrt{\Big(\frac{dx}{dt}\Big)^{2}+\Big(\frac{dy}{dt}\Big)^{2}}\;dt
\qquad\text{or}\qquad
L=\int_a^b\sqrt{\Big(\frac{dx}{dt}\Big)^{2}+\Big(\frac{dy}{dt}\Big)^{2}+\Big(\frac{dz}{dt}\Big)^{2}}\;dt .$$
Let f : R3 → R be a scalar field, that is a function of three variables. The gradient of f ,
denoted ∇ f , is the vector field given by
$$\nabla f=\Big\langle\frac{\partial f}{\partial x},\frac{\partial f}{\partial y},\frac{\partial f}{\partial z}\Big\rangle
=\frac{\partial f}{\partial x}\,\mathbf i+\frac{\partial f}{\partial y}\,\mathbf j+\frac{\partial f}{\partial z}\,\mathbf k,
\qquad\text{where}\quad
\nabla=\frac{\partial}{\partial x}\,\mathbf i+\frac{\partial}{\partial y}\,\mathbf j+\frac{\partial}{\partial z}\,\mathbf k .$$
Let F be a vector field given by F=f i+ g j+h k ,where, f , g ,andh are scalar functions.
The divergence of F is
$$\operatorname{div}\mathbf F=\frac{\partial f}{\partial x}+\frac{\partial g}{\partial y}+\frac{\partial h}{\partial z},$$
and the curl of F is
$$\operatorname{curl}\mathbf F=\nabla\times\mathbf F
=\Big\langle\frac{\partial h}{\partial y}-\frac{\partial g}{\partial z},\ \frac{\partial f}{\partial z}-\frac{\partial h}{\partial x},\ \frac{\partial g}{\partial x}-\frac{\partial f}{\partial y}\Big\rangle .$$
The line integral of f along a plane curve C is
$$\int_C f(x,y)\,ds=\lim_{\lVert\Delta\rVert\to 0}\sum_{i=1}^{n}f(x_i,y_i)\,\Delta s_i .$$
Let S be a surface given by z=g ( x , y ) and let R be its projection on to the xy−¿ plane.
Suppose that g , g x ,∧g y are continuous at all points in R and that f is defined on S.
$$\iint_S f(x,y,z)\,dS=\lim_{\lVert\Delta\rVert\to 0}\sum_{i=1}^{n}f(x_i,y_i,z_i)\,\Delta S_i .$$
Let F ( x , y , z )=Mi+ Nj+ Pk , where M, N and P have continuous first partial derivatives
on the surface S oriented by a unit normal vector N. the flux integral of F across S is
given by
∬ F ⦁ N dS
S
Let S be a piecewise smooth oriented surface with unit normal vector N, and let the
boundary of S be a piecewise smooth simple closed curve C. Let F (x , y , z ) be a
continuous vector function that has continuous first partial derivatives in a domain in
space containing S. Then
$$\int_C\mathbf F\cdot d\mathbf r=\iint_S(\operatorname{curl}\mathbf F)\cdot\mathbf N\,dS,
\qquad\text{where}\quad
\operatorname{curl}\mathbf F=\begin{vmatrix}\mathbf i&\mathbf j&\mathbf k\\[2pt]\dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial y}&\dfrac{\partial}{\partial z}\\[2pt] F_1&F_2&F_3\end{vmatrix}.$$
Miscellaneous Exercises
12. Evaluate line integral, with F and C as given , by the method that seems most suitable.
Recall that if F is a force, the integral gives the work done in a displacement along C .
a. F=[x 2 , y 2 , z 2 ]
C the straight line segment from (4,1,8) to (0,2,3)
b. F=[ yz ,2 zx , xy ] ,
C The circle x 2+ y 2=9 , z=1, counterclockwise
c. F=[ sinπy , cosπx , sinπx ] ,
1
C the boundary of 0 ≤ x ≤ , 0≤ y ≤2 , z=2 x
2
d. F=[x− y , 0 , e x ]
C : y=3 x 2 , z=2 x for x from 0 to 2
13. Using Green’s Theorem evaluate the line integral
14. Evaluate the integral ∬ (curl F )∙ ndA directly for the give: F and S.
S
a. $\mathbf F=[4z,\ 16x,\ 0]$,  S: $z=y^{2}$  $(0\le x\le 1,\ 0\le y\le 1)$
b. $\mathbf F=[0,\ 0,\ 5x\cos z]$,  S: $x^{2}+y^{2}=4,\ y\ge 0,\ 0\le z\le\tfrac12\pi$
c. $\mathbf F=[-e^{y},\ e^{z},\ e^{x}]$,  S: $z=x+y$  $(0\le x\le 1,\ 0\le y\le 1)$
3. A. Ganesh and Etla, Engineering Mathematics II, New age International press,2009
5.Salas Hille Etgen, Calculus – One and Several variables,10th edition, WILLEY PLUS
8.Kaplan, W.: "Advanced Calculus," 5th ed., Addison-Wesley Higher Mathematics, Boston,
2003
9. Knopp, K., Theory of Functions. 2 parts. New York:Dover, Reprinted 1996.
10. Krantz, S. G., Complex Analysis: The GeometricViewpoint. Washington, DC: The
MathematicalAssociation of America, 1990.
11.Lang, S., Complex Analysis. 4th ed. New York:Springer, 1999.
12. Narasimhan, R., Compact Riemann Surfaces. New York: Springer, 1996.
13. Nehari, Z., Conformal Mapping. Mineola, NY:Dover, 1975.
14. Springer, G., Introduction to Riemann Surfaces.Providence, RI: American Mathematical
Society, 2001
CHAPTER-6
Introduction
The transition from “real calculus” to “complex calculus” starts with a discussion of complex
numbers and their geometric representation in the complex plane. We desire functions to be
analytic because these are the “useful functions” in the sense that they are differentiable in some
domain and operations of complex analysis can be applied to them. The most important
equations are the Cauchy–Riemann equations because they allow a test of analyticity of such
functions. Moreover, we show how the Cauchy–Riemann equations are related to the important
Laplace equation.
The remaining sections of the chapter are devoted to elementary complex functions (exponential,
trigonometric, hyperbolic, and logarithmic functions). These generalize the familiar real
functions of calculus.
Unit Objectives:
On the completion of this unit, students should be able to:
Overview:
In this section, we are going to deal with the definition and notation of the complex numbers by
considering various examples.
Section Objectives:
Definition: A complex number is an order pair (x , y ) of real number x and y that is z=(x , y )
x is called the real part and y is called the imaginary part of z, written x=ℜ z and y=ℑ z
The ordered pair (x, 0) is identified with the real number x, and (0, 1) is denoted by i. By definition, two complex numbers are equal if and only if their real parts are equal and their imaginary parts are equal.
Now the complex number $z=(x,y)$ can be written $z=x+iy$; if $x=0$, then $z=iy$ is called pure imaginary.
Addition is defined by
z 1+ z2 =( x 1 , y 1 ) +( x ¿ ¿ 2 , y 2)=(x 1+ x 2 , y 1 + y 2 )¿
Multiplication is defined by
z 1 z 2=( x 1 , y1 ) ( x 2 , y 2 ) =(x 1 x 2− y1 y 2 , x 1 y 2 + x 2 y 1 )
In particular, $i^{2}=-1$, since $i^{2}=i\,i=(0,1)(0,1)=(-1,0)=-1$.
From this we see that continued multiplication by positive powers of i leads to the following pattern:
$$i,\quad i^{2}=-1,\quad i^{3}=-i,\quad i^{4}=1,\quad i^{5}=i,\ \dots$$
Example 6.1: Let $z_1=8+3i$ and $z_2=9-2i$. Find the real and imaginary parts, the sum, and the product.
Solution: $\operatorname{Re}z_1=8,\ \operatorname{Im}z_1=3,\ \operatorname{Re}z_2=9,\ \operatorname{Im}z_2=-2$; $z_1+z_2=17+i$; and
$$z_1z_2=(8,3)(9,-2)=\big(8(9)-3(-2),\ 8(-2)+9(3)\big)=(78,11)=78+11i .$$
Subtraction and Division are defined as the inverse operation of addition and multiplication,
respectively. Thus the difference is z=z 1−z 2 the complex number z for which z 1=z + z 2
If we equate the real and the imaginary parts on both sides of this equation, setting $z=x+iy$, we obtain
$$x_1=x_2x-y_2y,\qquad y_1=y_2x+x_2y .$$
The solution is
$$z=\frac{z_1}{z_2}=X+iY,\qquad
X=\frac{x_1x_2+y_1y_2}{x_2^{2}+y_2^{2}},\qquad
Y=\frac{x_2y_1-x_1y_2}{x_2^{2}+y_2^{2}}\;.$$
The practical rule used to get this is to multiply numerator and denominator of $z_1/z_2$ by $x_2-iy_2$ and simplify:
$$\frac{z_1}{z_2}=\frac{x_1+iy_1}{x_2+iy_2}
=\frac{(x_1+iy_1)(x_2-iy_2)}{(x_2+iy_2)(x_2-iy_2)}
=\frac{x_1x_2+y_1y_2}{x_2^{2}+y_2^{2}}+i\,\frac{x_2y_1-x_1y_2}{x_2^{2}+y_2^{2}}\;.$$
Example 6.2: Let $z_1=8+3i$ and $z_2=9-2i$. Find the difference and quotient.
$$z_1-z_2=-1+5i,\qquad
\frac{z_1}{z_2}=\frac{8+3i}{9-2i}=\frac{(8+3i)(9+2i)}{81+4}=\frac{66+43i}{85}=\frac{66}{85}+\frac{43}{85}\,i .$$
Complex Plane
So far we discussed the algebraic manipulation of complex numbers. Consider the geometric
representation of complex numbers, which is of great practical importance. We choose two
perpendicular coordinate axes, the horizontal x -axis, called the real axis, and the vertical y -axis,
called the imaginary axis. On both axes we choose the same unit of length (Fig. 6.1.1.1). This is
called a Cartesian coordinate system.
The complex plane fig.6.1.1.1 The complex plane in 4−3 i in fig.6.1.1.2
Definition: We now plot a given complex number z=( x , y )=x+ iy as the point P with
coordinates x , y . The xy plane in which the complex numbers are represented in this way is
called the complex plane.
Addition and subtraction can now be visualized as illustrated in Figs. 6.1.1.3 and 6.1.1.4
Complex Conjugate Numbers: The complex conjugate z of a complex number z=x +iy is
defined by
z=x−iy
It is obtained geometrically by reflecting the point z in the real axis. Figure 6.1.1.5 shows this for
z=5+2 i and it’s conjugate z=5−2i
Fig. 6.1.1.5 Complex conjugate numbers
The complex conjugate is important because it permits us to switch from complex to real.
Indeed, by multiplication, z z =x2 + y 2 (verify!). By addition and subtraction,
z + z=2 x , z−z=2 iy .We thus obtain for the real part x and the imaginary part y (not iy !) of
z=x +iy . The important formulas are
$$\operatorname{Re}z=x=\frac12\,(z+\bar z),\qquad \operatorname{Im}z=y=\frac{1}{2i}\,(z-\bar z).$$
If z is real, $z=x$, then $\bar z=z$ by the definition of $\bar z$, and conversely. Working with conjugates is easy, since we have
$$\overline{z_1+z_2}=\bar z_1+\bar z_2,\qquad \overline{z_1-z_2}=\bar z_1-\bar z_2,\qquad
\overline{z_1z_2}=\bar z_1\,\bar z_2,\qquad \overline{\Big(\frac{z_1}{z_2}\Big)}=\frac{\bar z_1}{\bar z_2}\;.$$
b) ℜ[ ( 1+i )16 z 2 ]
c) ℜ z 4 −( ℜ z 2 )2
Polar Form of Complex Numbers: We gain further insight into the arithmetic operations of
complex numbers if, in addition to the xy -coordinates in the complex plane, we also employ the
usual polar coordinates θ defined by x=r cos θ , y=r sin θ then z=x +iy hence
z=r ( cosθ+isinθ) is called polar form
r is called the absolute value or modulus of z and is denoted by |z|. Hence
|z|=r=√ x 2+ y 2=√ z z
Geometrically, |z| is the distance of the point z from the origin .Similarly, |z 1−z 2|is the distance
between z 1∧z 2 .
θ is called the argument of z and is denoted by θ=arg z.
y
tan θ= (z ≠ 0)
x
Geometrically,θ is the directed angle from the positive x -axis to OP in Fig.6.01. Here, as in
calculus, all angles are measured in radians and positive in the counterclockwise sense.
For z=0 this angleθ is undefined. (Why?) For a given z ≠ 0 it is determined only up to integer
multiples of 2 π since cosine and sine are periodic with period 2 π . But one often wants to
specify a unique value of argz of a given z ≠ 0 . For this reason one defines the principal value
Arg z (with capital A!) of arg z by the double inequality $-\pi<\operatorname{Arg}z\le\pi$.
Then we have $\operatorname{Arg}z=0$ for positive real $z=x$, which is practical, and $\operatorname{Arg}z=\pi$ (not $-\pi$) for negative real z. Obviously, for a given $z\neq 0$, the other values of arg z are
$$\arg z=\operatorname{Arg}z\pm 2n\pi\qquad(n=1,2,\dots).$$
Fig. 6.1.1.6.Complex plane, Fig. 6.1.1.7 Distance between two points in the complex plane
Polar form of a complex number
Example: Let $z=1+i$. Then $|z|=\sqrt2$ and the polar form is
$$z=\sqrt2\Big(\cos\frac14\pi+i\sin\frac14\pi\Big)\qquad\text{(Fig. 6.1.1.8)}.$$
Hence we obtain
$$\arg z=\frac14\pi\pm 2n\pi\ \ (n=0,1,\dots),\qquad \operatorname{Arg}z=\frac14\pi\ \ \text{(the principal value)}.$$
We can now express the polar representation of a complex number in the form $z=re^{i\theta}$; note that $|e^{i\theta}|=1$.
This representation is convenient for finding the n-th roots of a complex number. To do so, we first write $z_0=r_0e^{i\theta_0}$ and look for all z with $z^{n}=z_0$. Writing $z=re^{i\theta}$, the equation becomes
$$r^{n}e^{in\theta}=r_0e^{i\theta_0}.$$
From this relation it follows that $r^{n}=r_0$ and $n\theta=\theta_0+2k\pi$, from which we deduce
$$r=\sqrt[n]{r_0}\,,\qquad \theta=\frac{\theta_0+2k\pi}{n}\,,\qquad k=0,\pm1,\pm2,\dots$$
Thus the n distinct n-th roots are
$$z=\sqrt[n]{r_0}\;e^{\,i\left(\frac{\theta_0+2k\pi}{n}\right)},\qquad k=0,1,2,\dots,n-1 .$$
The n-th root of any complex number z can therefore be expressed as
$$z^{1/n}=\omega_{k+1}=\sqrt[n]{|z|}\;e^{\,i\left(\frac{\theta_0+2k\pi}{n}\right)},\qquad k=0,1,2,\dots,n-1,$$
where $|z|=r$ and $\theta_0=\operatorname{Arg}(z)$; more generally,
$$z^{m/n}=\big(\sqrt[n]{|z|}\,\big)^{m}\,e^{\,i\,m\left(\frac{\theta_0+2k\pi}{n}\right)},\qquad m=1,2,\dots,\ \ k=0,1,2,\dots,n-1 .$$
Example: Find all values of $(-1+i\sqrt3)^{3/2}$.
Solution: Here $z=-1+i\sqrt3$, $|z|=2$, and $\theta_0=\operatorname{Arg}z=\dfrac{2\pi}{3}$, so
$$\omega_{k+1}=2^{3/2}\,e^{\,i\,\frac32\left(\frac{2\pi}{3}+2k\pi\right)}
=2\sqrt2\;e^{\,i(\pi+3k\pi)},\qquad k=0,1 .$$
$$k=0:\ \ \omega_1=2\sqrt2\,e^{i\pi}=-2\sqrt2,\qquad
k=1:\ \ \omega_2=2\sqrt2\,e^{i4\pi}=2\sqrt2 .$$
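The same two values can be generated numerically with Python's cmath module; this is an illustrative sketch added here (the list comprehension simply implements the root formula above).

import cmath

z = complex(-1, 3**0.5)
r, theta = abs(z), cmath.phase(z)        # 2 and 2*pi/3
roots = [(r**1.5) * cmath.exp(1j * 1.5 * (theta + 2 * cmath.pi * k)) for k in (0, 1)]
print(roots)   # approximately -2.828 and +2.828 (i.e. -2*sqrt(2) and 2*sqrt(2)), up to rounding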
In this section, we are going to deal with the definition and notation of the limit, derivative,
analytic function by considering various examples.
Section Objectives:
Represent the example of a limit, derivative and analytic function using the notation;
6.2.1 Function of Complex Variable
Let S be a set of complex numbers. A function f defined on S is a rule that assigns to every z
in Sa complex numberw , called the value of f at z . We write w=f ( z)
Here z varies in S and is called a complex variable. The set S is called the domain of definition
of f or, briefly, the domain of f . (In most cases S will be open and connected, thus a domain as
defined just before.)
The set of all values of a function f is called the range of f . w is complex, and we write w=u+iv
where u and v are the real and imaginary parts, respectively. Now w depends on z=x +iy . Hence
u becomes a real function of x and y , and so does v. We may thus write
w=f ( z ) =u ( x , y )+ iv( x , y )
This shows that a complex function f ( z) is equivalent to a pair of real functions u ( x , y ) and
v ( x , y ) , each depending on the two real variables x and y .
Example 6.9: Letw=f ( z ) =z2 +3 z . Find u ( x , y ) and v ( x , y ) the value of f at z=1+3 i
Solution: f ( z )=z 2 +3 z=( x +iy )2 +3( x +iy)
¿ x 2− y 2 +3 x+i (2 xy +3 y)
Now
u ( x , y )=ℜ f ( z )=x 2− y 2 +3 xand v ( x , y ) =2 xy +3 y ,
Also
2
f ( 1+3 i )=( 1+3 i ) + 3 ( 1+3 i )
¿ 1−9+ 6 i+ 3+9 i=−5+15 i
So
u ( 1,3 )=−5 and v ( 1,3 )=15
Example 6.10: Letw=f ( z ) =z2 . Find u ( x , y ) and v ( x , y ) the value of f at z=5 i
Solution: $f(z)=z^{2}=(x+iy)^{2}=x^{2}-y^{2}+i(2xy)$.
Now
$$u(x,y)=\operatorname{Re}f(z)=x^{2}-y^{2}\qquad\text{and}\qquad v(x,y)=2xy .$$
Also
$$f(5i)=(5i)^{2}=-25 .$$
1
Exercise: Letw=f ( z ) =2iz+ 6 z . Find u ( x , y )and v ( x , y ) the value of f at z= + 4 i
2
Definition 1: A sequence of complex numbers $\{z_n\}_1^{\infty}$ is said to have the limit $z_0$, or to converge to $z_0$, written $\lim_{n\to\infty}z_n=z_0$ (or $z_n\to z_0$ as $n\to\infty$), if for any $\varepsilon>0$ there exists an integer N such that $|z_n-z_0|<\varepsilon$ for all $n>N$.
Definition 2: Let f be a function defined in some neighborhood of $z_0$, except possibly at $z_0$ itself. We say that the limit of f as z approaches $z_0$ is $w_0$, written $\lim_{z\to z_0}f(z)=w_0$ (or $f(z)\to w_0$ as $z\to z_0$), if for any $\varepsilon>0$ there exists a positive number δ such that $|f(z)-w_0|<\varepsilon$ whenever $0<|z-z_0|<\delta$.
Example 6.11: Show that $\lim_{z\to i}z^{2}=-1$.
Solution: We must show that for any given $\varepsilon>0$ there is a positive number δ such that $|z^{2}-(-1)|<\varepsilon$ whenever $0<|z-i|<\delta$.
In other words, for f to be continuous at $z_0$, it must have a limiting value at $z_0$ and this limiting value must be $f(z_0)$. A function f is said to be continuous on a set S if it is continuous at each point of S.
Fig.6.2.2.1 Limit
If $\lim_{z\to z_0}f(z)=A$ and $\lim_{z\to z_0}g(z)=B$, then
I. $\lim_{z\to z_0}\big(f(z)\pm g(z)\big)=A\pm B$
II. $\lim_{z\to z_0}f(z)\,g(z)=AB$
III. $\lim_{z\to z_0}\dfrac{f(z)}{g(z)}=\dfrac AB$ if $B\neq 0$
Example 6.12: Evaluate the limit $\displaystyle\lim_{z\to -i}\frac{z+i}{z^{2}+1}$.
Solution: $\displaystyle\lim_{z\to -i}\frac{z+i}{z^{2}+1}
=\lim_{z\to -i}\frac{z+i}{(z+i)(z-i)}
=\lim_{z\to -i}\frac{1}{z-i}=\frac{1}{-2i}\,.$
Example 6.13: Evaluate the limit $\displaystyle\lim_{z\to i}\frac{z-i}{z^{2}+1}$.
Solution: $\displaystyle\lim_{z\to i}\frac{z-i}{z^{2}+1}
=\lim_{z\to i}\frac{z-i}{(z+i)(z-i)}
=\lim_{z\to i}\frac{1}{z+i}=\frac{1}{2i}\,.$
Definition: The derivative of a complex function w=f ( z ) at a fixed point z 0 is written f ' (z 0)
and is defined by
f ( z 0+ ∆ z )−f (z 0)
f ' ( z 0 ) = lim … … … … … … … … … … … … … … … ..(¿)
∆ z →0 ∆z
f ( z )−f ( z 0)
f ' ( z 0 ) =lim
z → z0 z−z 0
Example: Show that $f(z)=\bar z$ is not differentiable at any point in the complex plane.
Solution:
$$f'(z)=\lim_{\Delta z\to 0}\frac{\overline{z+\Delta z}-\bar z}{\Delta z}
=\lim_{\Delta z\to 0}\frac{\overline{\Delta z}}{\Delta z}\,.$$
If $\Delta z\to 0$ along the real axis, $\overline{\Delta z}/\Delta z=1$; if $\Delta z\to 0$ along the imaginary axis, $\overline{\Delta z}/\Delta z=-1$. Since these values differ, the limit does not exist, so $f(z)=\bar z$ is nowhere differentiable.
Example 6.15: The non-negative of power integer 1 , z , z 2 , … is analytic in the entire complex
plane
Section Objectives:
Represent the example of Cauchy-Riemann equation and Laplace equation using the
notation;
Writing $f(z)=u(x,y)+iv(x,y)$ and letting $\Delta z=\Delta x\to 0$ (so $\Delta y=0$) in the difference quotient gives
$$f'(z)=\frac{\partial u}{\partial x}+i\,\frac{\partial v}{\partial x}\;. \qquad(1)$$
Next let $\Delta z=i\Delta y$ (so $\Delta x=0$) and let $\Delta y\to 0$:
$$f'(z)=\lim_{\Delta y\to 0}\frac1i\Big(\frac{\Delta u}{\Delta y}+i\,\frac{\Delta v}{\Delta y}\Big)
=\frac{\partial v}{\partial y}-i\,\frac{\partial u}{\partial y}\;. \qquad(2)$$
Comparing equations (1) and (2), we have
$$\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y},\qquad
\frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}\;.$$
These conditions are called the Cauchy–Riemann equations.
Example 6.17: using the Cauchy-Riemann equation Show that
3
f ( z )=z is analytic everywhere.
Solution: We have $f(z)=z^{3}=(x+iy)^{3}=(x^{3}-3xy^{2})+i(3x^{2}y-y^{3})$.
Here
$$u(x,y)=x^{3}-3xy^{2}\qquad\text{and}\qquad v(x,y)=3x^{2}y-y^{3}.$$
Thus
$$u_x=3x^{2}-3y^{2},\qquad v_x=6xy,\qquad u_y=-6xy,\qquad v_y=3x^{2}-3y^{2}.$$
These partial derivatives are continuous everywhere and satisfy $u_x=v_y$ and $u_y=-v_x$ at every point, so the Cauchy–Riemann equations hold everywhere and $f(z)=z^{3}$ is analytic everywhere.
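A quick symbolic Cauchy–Riemann check, added here as an illustrative sketch (not part of the original example), applied to the same function f(z) = z³:

import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.expand((x + sp.I * y)**3)
u, v = sp.re(f), sp.im(f)

print(sp.simplify(sp.diff(u, x) - sp.diff(v, y)))   # 0  ->  u_x = v_y
print(sp.simplify(sp.diff(u, y) + sp.diff(v, x)))   # 0  ->  u_y = -v_x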
Example: Show that $f(z)=|z|^{2}$ is differentiable only at $z=0$ and hence is analytic nowhere.
Solution: We have $f(z)=|z|^{2}=z\bar z=x^{2}+y^{2}$. Here
$$u(x,y)=x^{2}+y^{2}\qquad\text{and}\qquad v(x,y)=0,$$
so
$$u_x=2x,\qquad v_x=0,\qquad u_y=2y,\qquad v_y=0 .$$
We observe that $u_x,\ u_y,\ v_x,$ and $v_y$ are continuous everywhere. Moreover, $u_x=v_y$ is satisfied only at points with $x=0$ (the imaginary axis) and $u_y=-v_x$ only at points with $y=0$ (the real axis), so both Cauchy–Riemann equations are satisfied only at (0, 0).
Thus $f(z)=|z|^{2}$ is differentiable only at (0, 0), and hence it is analytic nowhere.
Exercise: Show that the function f ( z )= √ xy is not analytic at the origin even though the Cauchy-
Riemann equations are satisfied.
Note: In polar coordinates $(r,\theta)$ the Cauchy–Riemann equations take the form $\dfrac{\partial u}{\partial r}=\dfrac1r\dfrac{\partial v}{\partial\theta}$ and $\dfrac1r\dfrac{\partial u}{\partial\theta}=-\dfrac{\partial v}{\partial r}$.
In this section, we are going to deal with the definition and notation of the elementary
functions, exponential functions and trigonometric functions by considering various examples.
Section Objectives:
The complex exponential function is defined by $e^{z}=e^{x+iy}=e^{x}(\cos y+i\sin y)$. Its basic properties are:
I. $e^{z_1}e^{z_2}=e^{z_1+z_2}$, and $e^{z}\neq 0$ for all z;
II. $|e^{z}|=e^{x}$;
III. $|e^{iy}|=1$ for real y, and $(e^{z})'=e^{z}$.
Example 6.20: Show that $|e^{-3iz+5i}|=e^{3y}$.
Solution: $-3iz+5i=-3i(x+iy)+5i=3y+i(5-3x)$, so $|e^{-3iz+5i}|=e^{3y}$.
Note: For all finite x this implies that $e^{z}$ is nonzero for all finite z; also
$$\arg e^{z}=y\pm 2n\pi,\qquad n=0,1,2,\dots$$
6.4.3. Trigonometric Function
If we add and subtract the Euler formulas
$$e^{iy}=\cos y+i\sin y,\qquad e^{-iy}=\cos y-i\sin y,$$
we are led to the real trigonometric functions
$$\cos y=\frac12\big(e^{iy}+e^{-iy}\big),\qquad \sin y=\frac1{2i}\big(e^{iy}-e^{-iy}\big).$$
This suggests defining the complex trigonometric functions by
$$\cos z=\frac12\big(e^{iz}+e^{-iz}\big),\qquad \sin z=\frac1{2i}\big(e^{iz}-e^{-iz}\big). \qquad(*)$$
d d
And Formulas for the derivatives follow readily from cos z=−sin z , sin z=cos z
dz dz
Using the Periodicity
cos (z+ 2 π )=cos z
sin(z +2 π)=sin z
And even and odd function
cos (−z)=cos z
sin(−z)=−sin z
Example 6.21: Real, Imaginary Parts and Absolute value.
Show that
a. cos z=cos x cosh y−i sin x sinh y
b. sin z=sin x cosh y +i cos x sinh y
c. cos iz=cosh y
d. sin iz=i sinh y
e. cos 2 z +sin2 z=1
2
f. |cos z| =cos2 x+ sinh 2 y
2
g. |sin z| =sin2 x +sinh 2 y
Solution (a):
$$\cos z=\frac12\big(e^{i(x+iy)}+e^{-i(x+iy)}\big)
=\frac12 e^{-y}(\cos x+i\sin x)+\frac12 e^{y}(\cos x-i\sin x)
=\frac12\big(e^{y}+e^{-y}\big)\cos x-\frac i2\big(e^{y}-e^{-y}\big)\sin x .$$
We know from calculus that $\cosh y=\frac12(e^{y}+e^{-y})$ and $\sinh y=\frac12(e^{y}-e^{-y})$; therefore $\cos z=\cos x\cosh y-i\sin x\sinh y$. The other properties follow similarly.
General formulas for the real trigonometric functions continue to hold for complex values. This follows immediately from the definitions. We mention in particular the addition rules
$$\cos(z_1\pm z_2)=\cos z_1\cos z_2\mp\sin z_1\sin z_2,\qquad
\sin(z_1\pm z_2)=\sin z_1\cos z_2\pm\cos z_1\sin z_2 .$$
6.5 Hyperbolic and Logarithm function; General power
Overview:
In this section, we are going to deal with the definition and notation of the hyperbolic,
logarithm function and general power by considering various examples.
Section Objectives:
Represent the example of hyperbolic , logarithm function and general power using the
notation;
Now comes an important point (without analog in real calculus). Since the argument of z is determined only up to integer multiples of $2\pi$, the complex natural logarithm $\ln z$ ($z\neq 0$) is infinitely many-valued.
The value of $\ln z$ corresponding to the principal value Arg z is denoted by Ln z (with capital L) and is called the principal value of $\ln z$; thus, with Arg z restricted by $-\pi<\operatorname{Arg}z\le\pi$,
$$\operatorname{Ln}z=\ln|z|+i\operatorname{Arg}z,\qquad z\neq 0 .$$
The uniqueness of Arg z for given $z\ (z\neq 0)$ implies that Ln z is single-valued, that is, a function in the usual sense. Since the other values of arg z differ by integer multiples of $2\pi$, the other values of $\ln z$ are given by
$$\ln z=\operatorname{Ln}z\pm i\,2n\pi\qquad(n=1,2,\dots).$$
They all have the same real part, and their imaginary parts differ by integer multiples of $2\pi$.
If z is positive real, then $\operatorname{Arg}z=0$ and Ln z becomes identical with the real natural logarithm known from calculus. If z is negative real (so that the natural logarithm of calculus is not defined!), then $\operatorname{Arg}z=\pi$ and
$$\operatorname{Ln}z=\ln|z|+\pi i\qquad(z\ \text{negative real}).$$
From $e^{\ln r}=r$ for positive real r we obtain $e^{\ln z}=z$ as expected; but since $\arg(e^{z})=y\pm 2n\pi$ is multivalued, so is
$$\ln(e^{z})=z\pm 2n\pi i,\qquad n=0,1,\dots$$
Example: Find the general power $i^{\,i}$.
Solution:
$$i^{\,i}=e^{\,i\ln i}=\exp(i\ln i)=\exp\Big[i\Big(\frac\pi2\,i\pm 2n\pi i\Big)\Big]=e^{-\frac\pi2\mp 2n\pi}.$$
The principal value ($n=0$) is $e^{-\pi/2}$.
Example 6.24: Find the general power $(1+i)^{2-i}$.
Solution: By direct calculation, and multiplying out in the exponent,
$$(1+i)^{2-i}=\exp\big[(2-i)\ln(1+i)\big]
=\exp\Big[(2-i)\Big(\ln\sqrt2+\frac14\pi i\pm 2n\pi i\Big)\Big]
=2e^{\frac\pi4\pm 2n\pi}\Big[\sin\Big(\frac12\ln 2\Big)+i\cos\Big(\frac12\ln 2\Big)\Big].$$
The n-th roots of a complex number z are
$$z^{1/n}=\omega_{k+1}=\sqrt[n]{|z|}\;e^{\,i\left(\frac{\theta_0+2k\pi}{n}\right)},\qquad k=0,1,\dots,n-1,$$
where $|z|=r$ and $\theta_0=\operatorname{Arg}(z)$; more generally,
$$z^{m/n}=\big(\sqrt[n]{|z|}\,\big)^{m}\,e^{\,i\,m\left(\frac{\theta_0+2k\pi}{n}\right)},\qquad m=1,2,\dots,\ \ k=0,1,\dots,n-1 .$$
6. A sequence of complex numbers $\{z_n\}_1^{\infty}$ is said to have the limit $z_0$, or to converge to $z_0$, if for any $\varepsilon>0$ there exists an integer N such that $|z_n-z_0|<\varepsilon$ for all $n>N$.
7. Let f be a function defined in some neighborhood of $z_0$, except possibly at $z_0$ itself. We say that $\lim_{z\to z_0}f(z)=w_0$ if for any $\varepsilon>0$ there exists $\delta>0$ such that $|f(z)-w_0|<\varepsilon$ whenever $0<|z-z_0|<\delta$.
12. In polar coordinates the Cauchy–Riemann equations assume the form
$$\frac{\partial u}{\partial r}=\frac1r\,\frac{\partial v}{\partial\theta},\qquad
\frac1r\,\frac{\partial u}{\partial\theta}=-\frac{\partial v}{\partial r}\;.$$
13. Laplace’s equation: If f ( z )=u ( x , y )+iv ( x , y ) is analytic in a domain D, then
both u and v satisfy Laplace’s equation.∇ 2 u=u xx + u yy =0 and ∇ 2 v=v xx + v yy =0 in
D and have continuous second partial derivatives in D.
14. Complex exponential function is defined by e z =e x+ iy=e x ( cos y +i sin y ) .
15. complex trigonometric functions is defined by
1 iz −iz
cos z= ( e +e )=cos x cosh y−i sin x sinh y
2
1 iz −iz
sin z= ( e −e ) =sin x cosh y +icos x sinh y
2i
16. The complex hyperbolic cosine and sine are defined by the formulas
$$\cosh z=\frac12\big(e^{z}+e^{-z}\big)=\cos iz,\qquad
\sinh z=\frac12\big(e^{z}-e^{-z}\big)=-i\sin iz .$$
2 2
17. The natural logarithm is defined by
$$\ln z=\ln|z|+i\operatorname{Arg}z\pm i\,2n\pi,\qquad z\neq 0,\ \ n=0,1,2,\dots,$$
where Arg z is the principal value of arg z, that is, restricted by $-\pi<\operatorname{Arg}z\le\pi$.
18. General Powers are defined by z c =e cln z (c complex , z ≠ 0)
Miscellaneous Exercises
1. Verify that each of the two numbers z=1 ±i satisfies the equation z 2−2 z+ 2=0
2. Divide 15+23 i by−3+7 i .
3. Find , in form x + yi
a. (2+3 i)2
b. (1−i )10
c. √i
d. e πi /2
4. Represent in polar form, with the principal argument.
a) −4−4 i
b) −15 i
c) 12+i
d) 0.6+0.8i
5. Find the principal argument Arg z when
i
I. z= −2 i
−2
II. $z=(\sqrt3-i)^{6}$
6. Find a roots and graph all values of :
a. √ 81
b. √ −32i
c. √3 1
d. √4 −1
7. State the Cauchy Riemann equation and proof.
8. Find f ( z )=u (x , y )+ iv( x , y ) as u or v are given
a) u=xy
b) v=−e−2 x sin 2 y
c) v= y /(x 2 + y 2)
d) u=cos 3 x cosh 3 y
9. Find all values of z such that
z
I. e =−2
e =1+ √ 3i
z
II.
III. exp ( 2 z−1 )=1
10. Find the value of:
a. cos (3−i)
b. tani
c. sinh(1+ πi)
d. cosh ( π + πi)
e. ln (0.6+0.8 i)
11. Show that
a. exp ( 2 ±3 πi )=−e 2
b. exp ( z +πi ) =−exp z
c. $\exp\Big(2+\dfrac{\pi i}{4}\Big)=\dfrac{e^{2}}{\sqrt2}\,(1+i)$
CHAPTER 7
COMPLEX INTEGRALS
Introduction
The previous chapter laid the groundwork for the study of complex analysis, covered complex numbers and the complex plane, limits, and differentiation, and introduced the central concept of analyticity. A complex function is analytic in some domain if it is differentiable in that domain. Complex analysis deals with such functions and their applications. Analytic functions satisfy the Cauchy–Riemann equations and also Laplace's equation. Furthermore, the Cauchy integral formula shows the surprising result that analytic functions have derivatives of all orders. Hence, in this respect, complex analytic functions behave much more simply than real-valued functions of real variables, which may have derivatives only up to a certain order. Complex
integration is attractive for several reasons. Some basic properties of analytic functions are
difficult to prove by other methods. This includes the existence of derivatives of all orders just
discussed. A main practical reason for the importance of integration in the complex plane is that
such integration can evaluate certain real integrals that appear in applications and that are not
accessible by real integral calculus.
Unit Objectives:
In this section we deal with the definition and the notation of complex line integrals.
Section Objectives:
Complex definite integrals are called (complex) line integrals and are written
∫_C f(z) dz.
Here the integrand f(z) is integrated over a given curve C or a portion of it (an arc, but we shall
say “curve” in either case, for simplicity). This curve C in the complex plane is called the path
of integration. We may represent C by a parametric representation
z(t) = x(t) + iy(t)  (a ≤ t ≤ b) ……(1)
The sense of increasing t is called the positive sense on C, and we say that C is oriented
by(1).
We assume C to be a smooth curve, that is, C has a continuous and nonzero derivative
z'(t) = dz/dt = x'(t) + iy'(t)
at each point. Geometrically this means that C has everywhere a continuously turning tangent, as follows directly from the definition
z'(t) = lim_{Δt→0} [z(t + Δt) − z(t)] / Δt.
Here the prime denotes differentiation with respect to the real parameter t.
Definition of the Complex Line Integral
This is similar to the method in calculus. Let C be a smooth curve in the complex plane given by
(1), and let f ( z) be a continuous function given (at least) at each point of C . We now subdivide
(we “partition”) the interval a ≤ t ≤ b in (1) by points
t_0 (= a), t_1, …, t_{n−1}, t_n (= b),
where t_0 < t_1 < t_2 < … < t_n. To this subdivision there corresponds a subdivision of C by points
z_0, z_1, z_2, …, z_{n−1}, z_n (= Z).
Fig. 7.1. Tangent vector z'(t) of a curve C in the complex plane given by z(t); the arrowhead on the curve indicates the positive sense (sense of increasing t).
Fig. 7.2. Complex line integral.
Where z j =z ( t j ) . On each portion of subdivision of C we choose an arbitrary point, say, a point
ζ 1 between z 0 and z 1 (that is, ζ 1 =z( t)where t satisfiest 0 ≤ t ≤ t 1 ), a point ζ 2 between z 1 and z 2 etc.
Then we form the sum
s_n = Σ_{m=1}^{n} f(ζ_m) Δz_m,  where Δz_m = z_m − z_{m−1} ……(2)
We do this for each n=2,3 , ⋯ in a completely independent manner, but so that the greatest
|∆ t m|=|t m−t m−1| approaches zero asn → ∞. This implies that the greatest |∆ z m|also approaches
zero. Indeed, it cannot exceed the length of the arc ofC from z m−1 to z m and the latter goes to zero
since the arc length of the smooth curve C is a continuous function oft . The limit of the sequence
of complex numbers s2 , s 3 … thus obtained is called the line integral (or simply the integral) of
f ( z) over the path of integration C with the orientation given by (1).
This line integral is denoted by
∫_C f(z) dz,  or by  ∮_C f(z) dz
if C is a closed path (one whose terminal point Z coincides with its initial point z_0, as for a circle or for a curve shaped like a figure 8).
General Assumption: - All paths of integration for complex line integrals are assumed to
be piecewise smooth, that is, they consist of finitely many smooth curves joined end to end.
Basic Properties Directly Implied by the Definition
1. Linearity: - Integration is a linear operation, that is, we can integrate sums term by term
and can take out constant factors from under the integral sign. This means that if the
integrals of f 1and f 2over a path C exist, so does the integral of k 1 f 1+ k 2 f 2 over the same
path and
∫_C [k_1 f_1(z) + k_2 f_2(z)] dz = k_1 ∫_C f_1(z) dz + k_2 ∫_C f_2(z) dz.
2. Sense reversal: integrating over the same path, from z_0 to Z (left) and from Z to z_0 (right), introduces a minus sign:
∫_{z_0}^{Z} f(z) dz = −∫_{Z}^{z_0} f(z) dz.
Existence of the complex line integral: writing f(z) = u(x, y) + iv(x, y), ζ_m = ξ_m + iη_m and Δz_m = Δx_m + iΔy_m, the sum (2) becomes
s_n = Σ (u + iv)(Δx_m + iΔy_m),
where u = u(ξ_m, η_m), v = v(ξ_m, η_m) and we sum over m from 1 to n. Performing the multiplication, we may now split up s_n into four sums:
s_n = Σ u Δx_m − Σ v Δy_m + i [Σ u Δy_m + Σ v Δx_m].
These sums are real. Since f is continuous, u and v are continuous. Hence, if we let n approach
infinity in the aforementioned way, then the greatest ∆ x m and ∆ y m will approach zero and each
sum on the right becomes a real line integral:
lim_{n→∞} s_n = ∫_C f(z) dz = ∫_C u dx − ∫_C v dy + i [∫_C u dy + ∫_C v dx] ……(8)
This shows that under our assumptions on f and C the line integral (3) exists and its value is
independent of the choice of subdivisions and intermediate points ζ m .
Theorem 1: (Indefinite Integration of Analytic Functions)
Let f ( z) be analytic in a simply connected domain D. Then there exists an indefinite integral of
f (z) in the domain D, that is, an analytic function F ( z )such that F ' ( z )=f ( z) in D, and for all
paths in D joining two points z 0 and z 1 in D. We have
∫_{z_0}^{z_1} f(z) dz = F(z_1) − F(z_0) ……(9)
(Note that we can write z_0 and z_1 instead of C, since we get the same value for all those paths C from z_0 to z_1.)
Theorem (Integration by the Use of the Path): Let C be a piecewise smooth path, represented by z = z(t), a ≤ t ≤ b, and let f(z) be a continuous function on C. Then
∫_C f(z) dz = ∫_a^b f(z(t)) z'(t) dt ……(10)
PROOF: The left side of (10) is given by (8) in terms of real line integrals and we show that the
right side of (10) also equals (8). We have z = x + iy, hence z' = x' + iy'. We simply write u for u[x(t), y(t)] and v for v[x(t), y(t)]. We also have dx = x' dt and dy = y' dt. Consequently, in (10),
∫_a^b f[z(t)] z'(t) dt = ∫_a^b (u + iv)(x' + iy') dt = ∫_C (u dx − v dy) + i ∫_C (u dy + v dx),
which is the right side of (8).
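Both the defining sum (2) and the parametric formula (10) lend themselves to a direct numerical check. The Python sketch below (an added illustration; the integrand f(z) = z² and the segment from 0 to 1 + i are arbitrary choices) approximates the sum (2) on a fine subdivision and compares the result with the value F(1 + i) − F(0) given by Theorem 1 for F(z) = z³/3:

# Python sketch (illustrative): the complex line integral as the limit of the sum (2)
import numpy as np

def line_integral(f, z, a, b, n=4000):
    # sum of f(zeta_m) * delta z_m over a fine subdivision of a <= t <= b
    t = np.linspace(a, b, n + 1)
    mid = z((t[:-1] + t[1:]) / 2)        # intermediate points zeta_m on the curve
    dz = z(t[1:]) - z(t[:-1])            # increments delta z_m = z_m - z_{m-1}
    return np.sum(f(mid) * dz)

approx = line_integral(lambda w: w ** 2, lambda t: (1 + 1j) * t, 0.0, 1.0)
print(approx, (1 + 1j) ** 3 / 3)         # agrees with F(1+i) - F(0), F(z) = z^3/3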
Dependence on path: Now comes a very important fact. If we integrate a given function f (z)
from a point z 0 to a point z 1 along different paths, the integrals will in general have different
values. In other words, a complex line integral depends not only on the endpoints of the path
but in general also on the path itself. The next example gives a first impression
of this, and a systematic discussion follows in the next section.
Example 7.2: Integral of a Nonanalytic Function. Dependence on Path
Integrate f(z) = Re z = x from 0 to 1 + 2i
(a) along C* in Fig. 7.4,
(b) along C consisting of C_1 and C_2.
Solution: (a) C* can be represented by z(t) = t + 2it (0 ≤ t ≤ 1). Hence z'(t) = 1 + 2i and f[z(t)] = x(t) = t on C*. We now calculate
∫_{C*} Re z dz = ∫_0^1 t (1 + 2i) dt = ½ (1 + 2i) = ½ + i.
Fig. 7.4
(b) We now have
C_1: z(t) = t, z'(t) = 1, f(z(t)) = x(t) = t  (0 ≤ t ≤ 1),
C_2: z(t) = 1 + it, z'(t) = i, f(z(t)) = x(t) = 1  (0 ≤ t ≤ 2).
Using (6) we calculate
∫_C Re z dz = ∫_{C_1} Re z dz + ∫_{C_2} Re z dz = ∫_0^1 t dt + i ∫_0^2 1 dt = ½ + 2i.
Note that this result differs from the result in (a).
Example 7.3: Bound for an Integral (ML-Inequality). Find an upper bound for the absolute value of the integral ∫_C z² dz, where C is the straight-line segment from 0 to 1 + i (Fig. 7.5).
Solution: L = √2 and |f(z)| = |z²| ≤ 2 on C; hence the ML-inequality (11), |∫_C f(z) dz| ≤ ML, gives
|∫_C z² dz| ≤ 2√2 ≈ 2.8284.
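The two results of Example 7.2 and the bound of Example 7.3 can be reproduced numerically with the same midpoint-sum idea as in the earlier sketch; the Python code below is an added illustration, not part of the original text:

# Python sketch (illustrative): path dependence (Example 7.2) and the ML-bound (Example 7.3)
import numpy as np

def line_integral(f, z, a, b, n=4000):
    t = np.linspace(a, b, n + 1)
    mid = z((t[:-1] + t[1:]) / 2)
    dz = z(t[1:]) - z(t[:-1])
    return np.sum(f(mid) * dz)

re = lambda w: w.real
print(line_integral(re, lambda t: (1 + 2j) * t, 0, 1))        # path C*: ~0.5 + 1.0i
print(line_integral(re, lambda t: t + 0j, 0, 1)
      + line_integral(re, lambda t: 1 + 1j * t, 0, 2))        # path C1 + C2: ~0.5 + 2.0i
val = line_integral(lambda w: w ** 2, lambda t: (1 + 1j) * t, 0, 1)
print(abs(val), 2 * np.sqrt(2))                               # ~0.943 <= 2.8284 (Example 7.3)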
Example 7.4: Evaluate ∫_C z² dz, where C is the straight line joining the origin O to the point P(2, 1).
Solution: Along OP we have y = x/2, so z = x + iy = (2 + i)y, dz = (2 + i) dy and z² = (2 + i)² y² = (3 + 4i) y², with 0 ≤ y ≤ 1. Hence
∫_C z² dz = ∫_0^1 (3 + 4i) y² (2 + i) dy = (2 + 11i) ∫_0^1 y² dy = (1/3)(2 + 11i).
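A numerical check of this value (a Python sketch added for illustration, not part of the original text): since z² is analytic, the same value is also obtained directly from the antiderivative z³/3 evaluated at the endpoints 0 and 2 + i.

# Python sketch (illustrative): Example 7.4 along the segment from 0 to 2+i
import numpy as np
t = np.linspace(0, 1, 4001)
zm = (2 + 1j) * (t[:-1] + t[1:]) / 2           # midpoints on the segment z(t) = (2+i)t
dz = (2 + 1j) * (t[1:] - t[:-1])               # increments of the path
print(np.sum(zm ** 2 * dz))                    # ~ (2 + 11i)/3
print((2 + 11j) / 3, (2 + 1j) ** 3 / 3)        # exact value; also (2+i)^3/3 from the antiderivative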
Example 7.5: Evaluate ∫_0^{1+i} (x − y + i x²) dz
(a) along the straight line OP from O(0, 0) to P(1, 1),
(b) along the path OMP, where M is the point (1, 0).
Fig. 7.6. The paths OP (along the line y = x) and OMP from O(0, 0) to P(1, 1)
Solution: (a) The equation of the straight line OP (see Fig. 7.6) is y = x; thus along OP, z = x + iy = x + ix = (1 + i)x, which gives dz = (1 + i) dx, 0 ≤ x ≤ 1. Hence
∫_0^{1+i} (x − y + i x²) dz = ∫_0^1 (x − x + i x²)(1 + i) dx = i(1 + i) ∫_0^1 x² dx = −(1/3)(1 − i).
(b) Along the path OM we have y = 0, thus z = x + iy = x and dz = dx, 0 ≤ x ≤ 1. Along the path MP we have x = 1, thus z = x + iy = 1 + iy and dz = i dy, 0 ≤ y ≤ 1. Therefore, the line integral is
∫_0^{1+i} (x − y + i x²) dz = ∫_0^1 (x + i x²) dx + ∫_0^1 [(1 − y) + i] i dy
= [x²/2 + i x³/3]_0^1 + [(i − 1)y − i y²/2]_0^1
= 1/2 + i/3 + (i − 1) − i/2 = −1/2 + (5/6) i.
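Because the integrand x − y + ix² is not analytic, the values along OP and along OMP differ; the Python sketch below (an added illustration) reproduces both results of Example 7.5 numerically:

# Python sketch (illustrative): Example 7.5 along the two paths OP and OMP
import numpy as np
g = lambda z: z.real - z.imag + 1j * z.real ** 2    # the integrand x - y + i x^2
t = np.linspace(0, 1, 4001)
mid, dt = (t[:-1] + t[1:]) / 2, t[1:] - t[:-1]
om = np.sum(g(mid + 0j) * dt)                       # O -> M: z = x, dz = dx
mp = np.sum(g(1 + 1j * mid) * (1j * dt))            # M -> P: z = 1 + iy, dz = i dy
op = np.sum(g((1 + 1j) * mid) * ((1 + 1j) * dt))    # O -> P along y = x
print(om + mp)                                      # ~ -0.5 + 0.8333i, i.e. -1/2 + (5/6)i
print(op)                                           # ~ -0.3333 + 0.3333i, i.e. -(1/3)(1 - i)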
In this section we state and prove Cauchy's Integral Theorem and Cauchy's Integral Formula and illustrate them with examples.
Section Objectives:
2. A simply connected domain D in the complex plane is a domain such that every simple
closed path in D encloses only points of D. Examples: The interior of a circle (“open disk”),
ellipse, or any simple closed curve. A domain that is not simply connected is called multiply
connected. Examples: An annulus, a disk without the center, for example, 0<| z|< 1. See also Fig.
7.7.
More precisely, a bounded domain D (that is, a domain that lies entirely in some circle about the origin) is called p-fold connected if its boundary consists of p closed connected sets without common points. These sets can be curves, segments, or single points (such as z = 0 for 0 < |z| < 1, for which p = 2). Thus D has p − 1 “holes,” where a “hole” may also be a segment or even a single point. Hence an annulus is doubly connected (p = 2).
Fig. 7.7. Simply and multiply connected domains (from left to right: simply, simply, doubly, and triply connected)
Cauchy's Integral Theorem: If f(z) is analytic in a simply connected domain D, then for every simple closed path C in D,
∮_C f(z) dz = 0 ……(*)
Proof (under the additional assumption that f'(z) is continuous):
Since f'(z) is continuous, the partial derivatives ∂u/∂x, ∂u/∂y, ∂v/∂x, ∂v/∂y are also continuous in D, and hence in the region enclosed by C. Thus Green's theorem gives
∮_C f(z) dz = −∬_E (∂v/∂x + ∂u/∂y) dx dy + i ∬_E (∂u/∂x − ∂v/∂y) dx dy ……(**)
where E is the region bounded by the closed curve C. Since f(z) is analytic, u and v satisfy the Cauchy–Riemann equations, and thus the integrands of the two double integrals on the right side of (**) are identically zero. Hence we obtain
∮_C f(z) dz = 0.
Example 7.6: Evaluate the following integrals by applying Cauchy's integral theorem, where applicable; in each case C is the unit circle |z| = 1, taken counterclockwise:
a) ∮_C cos z dz
b) ∮_C sec z dz
c) ∮_C dz/(z² − 5z + 6)
d) ∮_C z̄ dz  (z̄ the complex conjugate of z)
Solution: a) The integrand f(z) = cos z is analytic for all z, and f'(z) = −sin z is continuous everywhere, hence on and inside C; thus by Cauchy's theorem ∮_C cos z dz = 0.
b) The integrand f(z) = sec z = 1/cos z is not analytic at the points z = ±π/2, ±3π/2, ⋯, but all these points lie outside the unit circle |z| = 1; hence f(z) is analytic and f'(z) is continuous in and on C, and by Cauchy's theorem ∮_C sec z dz = 0.
c) The integrand f(z) = 1/(z² − 5z + 6) = 1/[(z − 2)(z − 3)] is not analytic at the points z = 2 and z = 3, which lie outside the unit circle |z| = 1; hence f(z) is analytic and f'(z) is continuous in and on C. Thus by Cauchy's theorem ∮_C dz/(z² − 5z + 6) = 0.
d) The integrand f(z) = z̄ is not analytic anywhere, and hence Cauchy's theorem is not applicable. In fact, on C: |z| = 1 we have z = e^{it}, dz = i e^{it} dt, so
∮_C z̄ dz = ∫_0^{2π} e^{−it} i e^{it} dt = i ∫_0^{2π} dt = 2πi.
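Parts a) and d) of Example 7.6 can be confirmed numerically by parametrizing the unit circle as z = e^{it}; the Python sketch below is an added illustration, not part of the original text:

# Python sketch (illustrative): Example 7.6, parts a) and d), on the unit circle
import numpy as np
t = np.linspace(0, 2 * np.pi, 20001)
z = np.exp(1j * t)                             # the unit circle, counterclockwise
zm = np.exp(1j * (t[:-1] + t[1:]) / 2)         # midpoints on the circle
dz = z[1:] - z[:-1]
print(np.sum(np.cos(zm) * dz))                 # ~ 0, as Cauchy's theorem predicts (part a)
print(np.sum(np.conj(zm) * dz))                # ~ 2*pi*i, since conj(z) is not analytic (part d)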
Independence of path
An integral of f(z) is independent of path in a domain D if, for every z_1, z_2 in D, the value of ∫_{z_1}^{z_2} f(z) dz depends only on the end points z_1 and z_2 and not on the choice of the path C joining z_1 to z_2.
Theorem (Independence of Path): If f(z) is analytic in a simply connected domain D, then ∫_C f(z) dz is independent of path for every piecewise smooth curve C lying entirely within D.
Theorem (Extension of Cauchy's Integral Theorem): If f(z) is analytic on and between two closed paths C_1 and C_2, then
∫_{C_1} f(z) dz = ∫_{C_2} f(z) dz.
Theorem: If f(z) is analytic in the region between a closed curve C and closed curves C_1, C_2, C_3, ⋯ lying inside C, and on these curves, then
∮_C f(z) dz = ∮_{C_1} f(z) dz + ∮_{C_2} f(z) dz + ∮_{C_3} f(z) dz + ⋯
Example 7.7: Evaluate I = ∫_C sin z dz, where C is composed of the circular arc C_1 and the straight-line segment C_2 joining the points z = 0 and z = iπ.
Solution: Since sin z is entire, the integral is independent of the path, so
I = ∫_0^{iπ} sin z dz = [−cos z]_0^{iπ} = 1 − cos(iπ) = 1 − cosh π.
Exercise: Evaluate 1) ∫_0^1 z³ e^{2z} dz and 2) ∫_0^{2i} sinh z dz.
Theorem (Cauchy's Integral Formula): Let f(z) be analytic in a simply connected domain D. Then for any point z_0 in D and any simple closed path C in D that encloses z_0 (taken counterclockwise),
f(z_0) = (1/(2πi)) ∮_C f(z)/(z − z_0) dz.
Example 7.8: Evaluate the integral ∮_C (z² + 1)/(z² − 1) dz,  C: |z − 1| = 1.
Solution: Writing the integrand as (z² + 1)/(z² − 1) = [(z² + 1)/(z + 1)] / (z − 1), we observe that f(z) = (z² + 1)/(z + 1) is analytic on and inside C, and here z_0 = 1 (the center of C is the point (1, 0), as shown in the figure). Hence, by Cauchy's integral formula,
∮_C (z² + 1)/(z² − 1) dz = 2πi f(1) = 2πi.
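A numerical check of Example 7.8 (a Python sketch added for illustration): parametrize C: |z − 1| = 1 by z = 1 + e^{it} and sum f(z) dz over a fine subdivision; the result should be close to 2πi ≈ 6.2832i.

# Python sketch (illustrative): Example 7.8 on the circle |z - 1| = 1
import numpy as np
t = np.linspace(0, 2 * np.pi, 20001)
z = 1 + np.exp(1j * t)                         # C: |z - 1| = 1, counterclockwise
zm = 1 + np.exp(1j * (t[:-1] + t[1:]) / 2)     # midpoints on C
dz = z[1:] - z[:-1]
print(np.sum((zm ** 2 + 1) / (zm ** 2 - 1) * dz))   # ~ 6.2832i
print(2j * np.pi)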
Example 7.9: Evaluate the integral ∮_C (z² + 1)/(z(2z − 1)) dz,  C: |z| = 1.
Solution: Let I = ∮_C (z² + 1)/(z(2z − 1)) dz.
The integrand (z² + 1)/(z(2z − 1)) is not analytic at the points z = 0 and z = 1/2, both of which lie inside C. Writing it as
(z² + 1)/(z(2z − 1)) = (z² + 1) [1/(z − 1/2) − 1/z],
we obtain, by Cauchy's integral formula,
I = ∮_C (z² + 1)/(z(2z − 1)) dz = ∮_C (z² + 1)/(z − 1/2) dz − ∮_C (z² + 1)/z dz
= 2πi [z² + 1]_{z=1/2} − 2πi [z² + 1]_{z=0} = 5πi/2 − 2πi = πi/2.
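As before, the value πi/2 ≈ 1.5708i can be confirmed numerically on the unit circle; the Python sketch below is an added illustration:

# Python sketch (illustrative): Example 7.9 on the unit circle
import numpy as np
t = np.linspace(0, 2 * np.pi, 20001)
zm = np.exp(1j * (t[:-1] + t[1:]) / 2)                    # midpoints on |z| = 1
dz = np.exp(1j * t[1:]) - np.exp(1j * t[:-1])             # increments of the path
print(np.sum((zm ** 2 + 1) / (zm * (2 * zm - 1)) * dz))   # ~ 1.5708i
print(1j * np.pi / 2)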
Overview:
In this section we state and prove the formula for the derivatives of an analytic function (the generalized Cauchy integral formula) and illustrate it with examples.
Section Objectives:
As mentioned, a surprising fact is that complex analytic functions have derivatives of all orders.
This differs completely from real calculus. Even if a real function is once differentiable we
cannot conclude that it is twice differentiable nor that any of its higher derivatives exist. This
makes the behavior of complex analytic functions simpler than real functions in this aspect. To
prove the surprising fact we use Cauchy’s integral formula.
Theorem (Derivatives of an Analytic Function): If f(z) is analytic in a domain D, then it has derivatives of all orders in D, which are then also analytic in D, and the values of these derivatives at a point z_0 in D are given by
f^(n)(z_0) = (n!/(2πi)) ∮_C f(z)/(z − z_0)^{n+1} dz,  n = 1, 2, 3, …,
where C is any simple closed path in D that encloses z_0 (taken counterclockwise). Indeed, starting from Cauchy's integral formula
f(z_0) = (1/(2πi)) ∮_C f(z)/(z − z_0) dz
and differentiating it under the integral sign with respect to z_0, we obtain
f'(z_0) = (1!/(2πi)) ∮_C f(z)/(z − z_0)² dz.
Similarly,
f''(z_0) = (2!/(2πi)) ∮_C f(z)/(z − z_0)³ dz,
and in general
f^(n)(z_0) = (n!/(2πi)) ∮_C f(z)/(z − z_0)^{n+1} dz.
Example: If C is any simple closed path enclosing z = 0 (counterclockwise), then
I = ∮_C e^z/z³ dz = (2πi/2!) [d²/dz² e^z]_{z=0} = πi.
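The value πi in this example can be checked numerically; the Python sketch below (an added illustration) uses the unit circle as a particular closed path enclosing z = 0:

# Python sketch (illustrative): the integral of e^z / z^3 around a circle enclosing 0
import numpy as np
t = np.linspace(0, 2 * np.pi, 20001)
zm = np.exp(1j * (t[:-1] + t[1:]) / 2)            # midpoints on the unit circle
dz = np.exp(1j * t[1:]) - np.exp(1j * t[:-1])
print(np.sum(np.exp(zm) / zm ** 3 * dz))          # ~ 3.1416i
print(1j * np.pi)                                 # = pi*i, the value obtained above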
Example: Evaluate
I = ∮_C (z + 1)/(z(z − 2)(z − 4)³) dz,
where C is a closed path that encloses the points z = 2 and z = 4 but not z = 0, and C_1, C_2 are small circles inside C around z = 2 and z = 4, respectively.
Solution: By the extension of Cauchy's integral theorem,
I = ∮_{C_1} [(z + 1)/(z(z − 4)³)] / (z − 2) dz + ∮_{C_2} [(z + 1)/(z(z − 2))] / (z − 4)³ dz = I_1 + I_2, say.
Now, using Cauchy's integral formula,
I_1 = ∮_{C_1} [(z + 1)/(z(z − 4)³)] / (z − 2) dz = 2πi [(z + 1)/(z(z − 4)³)]_{z=2} = −3πi/8,
and, using the formula for derivatives with n = 2,
I_2 = ∮_{C_2} [(z + 1)/(z(z − 2))] / (z − 4)³ dz = (2πi/2!) [d²/dz² ((z + 1)/(z(z − 2)))]_{z=4} = 23πi/64.
Therefore, I = I_1 + I_2 = −3πi/8 + 23πi/64 = −πi/64.
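A numerical check of this result (a Python sketch added for illustration): the circle |z − 3| = 2 is one concrete choice of C that encloses z = 2 and z = 4 but not z = 0, so summing the integrand over a fine subdivision of this circle should give approximately −πi/64 ≈ −0.0491i.

# Python sketch (illustrative): the integral over the circle |z - 3| = 2
import numpy as np
t = np.linspace(0, 2 * np.pi, 40001)
zm = 3 + 2 * np.exp(1j * (t[:-1] + t[1:]) / 2)    # midpoints on |z - 3| = 2 (encloses 2 and 4, not 0)
dz = 2 * (np.exp(1j * t[1:]) - np.exp(1j * t[:-1]))
print(np.sum((zm + 1) / (zm * (zm - 2) * (zm - 4) ** 3) * dz))   # ~ -0.0491i
print(-1j * np.pi / 64)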
Unit Summary:
- A general method of integration, not restricted to analytic functions, uses the equation z = z(t), a ≤ t ≤ b, of C:
∫_C f(z) dz = ∫_a^b f(z(t)) z'(t) dt  (z' = dz/dt).
- Cauchy’s integral theorem is the most important theorem in this chapter. It states that if
f ( z)is analytic in a simply connected domain D, then for every closed path C in D,
∮_C f(z) dz = 0.
Under the same assumptions and for any z_0 in D and closed path C in D containing z_0 in its interior, we also have Cauchy's integral formula
f(z_0) = (1/(2πi)) ∮_C f(z)/(z − z_0) dz.
This implies Morera’s theorem (the converse of Cauchy’s integral theorem) and
Cauchy’s inequality which in turn implies Liouville’s theorem that an entire function that
is bounded in the whole complex plane must be constant.
Miscellaneous Exercises
1. If F(a) = ∮_C (4z² + z + 5)/(z − a) dz, where C: (x/2)² + (y/3)² = 1 is taken in the counterclockwise sense, find F(3.5), F(i), F'(−1), and F''(−i).
2. ∮_C (sin z)/z⁴ dz, integrated clockwise around the unit circle.
3. ∮_C e^z/z^n dz, integrated clockwise around the unit circle.
4. ∮_C z⁶/(2z − 1)⁶ dz, integrated clockwise around the unit circle.
5. ∮_C dz/[(z − 2i)²(z − i/2)²], integrated clockwise around the unit circle.
6. ∮_C (1 + z) sin z/(2z − 1)² dz,  C: |z − i| = 2, counterclockwise.
7. ∮_C exp(z²)/[z(z − 2i)²] dz,  C: |z − 3i| = 2, clockwise.
8. ∮_C ln(z + 3)/[(z − 2)(z + 1)²] dz,  C the boundary of the square with vertices ±1.5, ±1.5i, counterclockwise.
References: