Numerical Analysis.docx

Uploaded by Pravat Satpathy, May 2020


CONTENTS

CHAPTER 1 INTRODUCTION
CHAPTER 2 LITERATURE REVIEW
CHAPTER 3 OBJECTIVE
CHAPTER 4 RESEARCH METHODOLOGY
CHAPTER 5 DATA ANALYSIS
REFERENCES


CHAPTER 1 INTRODUCTION


One of the most important tasks in a study of dynamical systems is the numerical calculation of the trajectories. Thus far we have considered the integration method to be a black box into which we pop the system, initial conditions, method and time range and out pops a plot of the trajectories. Although this approach is common in courses on dynamical systems it obscures many of the pitfalls of numerical integration.

It is not possible, at the present state of the art, to choose a 'best' algorithm for the calculation of trajectories. There are several types of numerical algorithm, each with its own advantages and disadvantages. We shall consider a class of methods known as discrete variable methods. These methods approximate the continuous time dynamical system by a discrete time system. This means that we are not really simulating the continuous system but a discrete system which may have different dynamical properties. This is an extremely important point.

The discrete variable methods which we consider fall into two main types, Runge-Kutta methods and Linear Multistep methods. Maple has implementations of both types of method as well as a number of more sophisticated techniques designed to overcome some of the pitfalls of numerical solution. The more sophisticated methods still fall into the discrete variable category.

5.1 TYPES OF METHOD

Although the dynamical systems which we are simulating are usually in more than one dimension we can, without loss of generality, restrict our numerical analysis of the methods to the single non-autonomous differential equation

x' = f(t, x)

subject to the initial condition

x(t_0) = x_0

We shall usually refer to the differential equation together with the initial condition as an initial value problem (IVP). Discrete variable methods yield a series of approximations

X n  x (tn )

on the set of points

t n 1  tn  h,

n  0,1 N

where h is the stepsize.


Taylor Series Method

These methods are based on the Taylor series expansion

x(tn 1 )  x (tn )  hx(tn ) 

h2 h3 ( x tn )  ( x tn )  2! 3!

If the series is truncated and x ( t n ) is replaced by the approximation X n we obtain the Taylor Series Method of order p

h2  h3  h p ( p)  X n1  X n  hX n  X n  X n  Xn 2! 3! p!

where X ( p ) 

dpX . dt p

Although there is an implementation of this method in Maple it is not much used in practice due to the necessity of computing the higher order derivatives of X. We shall only use it as a reference method when discussing the accuracy of other methods.
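As an illustration (not part of the original notes), the second-order Taylor method can be coded directly once the partial derivatives of f are supplied by hand; this hand-differentiation is precisely the burden mentioned above. The right-hand side f(t, x) = x below is a made-up test problem:

```python
import math

def taylor2_step(t, x, h, f, f_t, f_x):
    # One step of the order-2 Taylor method:
    # X_{n+1} = X_n + h X'_n + (h^2/2!) X''_n,
    # where x' = f and x'' = f_t + f_x * f by the chain rule.
    xp = f(t, x)
    xpp = f_t(t, x) + f_x(t, x) * xp
    return x + h * xp + (h ** 2 / 2) * xpp

# Illustrative IVP: x' = x, x(0) = 1, with exact solution x(t) = exp(t).
f   = lambda t, x: x
f_t = lambda t, x: 0.0   # partial derivative of f with respect to t
f_x = lambda t, x: 1.0   # partial derivative of f with respect to x

t, x, h = 0.0, 1.0, 0.1
for _ in range(10):
    x = taylor2_step(t, x, h, f, f_t, f_x)
    t += h
print(abs(x - math.e))   # small: the error of the order-2 method behaves like h^2
```

Each extra order of accuracy would require another row of hand-computed derivatives of f, which is why the Runge-Kutta methods discussed next are usually preferred.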

Runge-Kutta Methods

These methods are based on the notion of finding a formula which agrees with the Taylor series as closely as possible without involving derivatives. For example consider the possibility of matching the second order Taylor series method


X_{n+1} = X_n + h X'_n + (h^2/2!) X''_n

by using a formula of the form

X_{n+1} = X_n + h Φ(t_n, X_n, h)

where

Φ(t, x, h) = α f(t, x) + β f(t + ah, x + bh f(t, x))

The equivalent Taylor series expression to Φ(t, x, h) is

Φ_T(t, x, h) = x'(t) + (h/2) x''(t) = f(t, x) + (h/2) [ f_t(t, x) + f_x(t, x) f(t, x) ]

Expanding Φ(t, x, h) in a Taylor series up to terms of O(h) gives

Φ(t, x, h) = (α + β) f(t, x) + βh [ a f_t(t, x) + b f_x(t, x) f(t, x) ] + O(h^2)

Comparing the two expressions we see that

α + β = 1,    βa = 1/2,    βb = 1/2

resulting in the family of solutions


  1 ,

  ,

where   0 is a free parameter. For  

X n 1  X n 

1 2

a b

1 2

we obtain the improved Euler method

b

h f (tn , X n )  f tn  h, X n  hf (tn , X n ) 2

g

and for   1 the modified Euler method

b

X n1  X n  hf tn  h2 , X n  h2 f (tn , X n )

g

Unfortunately the terminology for naming second-order Runge-Kutta methods is not standardised: Maple calls the improved Euler method the Heun formula, and the modified Euler method the improved polygon method.
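As a sketch in Python (illustrative, not part of the Maple session), the two schemes differ only in where the slope is sampled:

```python
def improved_euler_step(f, t, x, h):
    # gamma = 1/2: average the slopes at the two ends of the step.
    k1 = f(t, x)
    k2 = f(t + h, x + h * k1)
    return x + (h / 2) * (k1 + k2)

def modified_euler_step(f, t, x, h):
    # gamma = 1: sample the slope once, at the midpoint of the step.
    k = f(t + h / 2, x + (h / 2) * f(t, x))
    return x + h * k

# On the linear test problem x' = x one step of either method multiplies
# x by 1 + h + h^2/2, in agreement with the second-order Taylor method.
f = lambda t, x: x
h = 0.1
print(improved_euler_step(f, 0.0, 1.0, h))   # 1.105 up to rounding
print(modified_euler_step(f, 0.0, 1.0, h))   # 1.105 up to rounding
```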

The procedure above can be extended to give higher order methods such as the classical 4th order method


h X n 1  X n  ( k1  2 k 2  2 k 3  k 4 ) 6 k1  f (t n , X n ) k 2  f (t n  21 h, X n  21 hk1 ) k 3  f (t n  21 h, X n  21 hk 2 ) k 4  f (t n  h, X n  hk 3 ) Linear Multistep Methods

These methods are based on integration of an interpolating polynomial having the values f_n = f(t_n, X_n) on a set of points t_n at which X_n has already been computed. Integrating this polynomial over a single step yields the update formula; the Adams-Bashforth methods, for example, are obtained in this way.
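For instance (a sketch, not from the original text): interpolating f linearly through (t_{n-1}, f_{n-1}) and (t_n, f_n) and integrating over [t_n, t_{n+1}] gives the two-step Adams-Bashforth method X_{n+1} = X_n + (h/2)(3 f_n − f_{n-1}). In Python, with the second starting value bootstrapped by an improved Euler step:

```python
import math

def adams_bashforth2(f, t0, x0, h, n_steps):
    # Two-step Adams-Bashforth: X_{n+1} = X_n + (h/2) * (3*f_n - f_{n-1}).
    # The second starting value is bootstrapped with an improved Euler step.
    k1 = f(t0, x0)
    k2 = f(t0 + h, x0 + h * k1)
    x = x0 + (h / 2) * (k1 + k2)
    f_prev = k1
    t = t0 + h
    for _ in range(n_steps - 1):
        f_cur = f(t, x)
        x, f_prev = x + (h / 2) * (3 * f_cur - f_prev), f_cur
        t += h
    return x

# Illustrative IVP: x' = x, x(0) = 1; the method is second order, so the
# error at t = 1 is small for modest stepsizes.
approx = adams_bashforth2(lambda t, x: x, 0.0, 1.0, 0.01, 100)
print(abs(approx - math.e))
```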


CHAPTER 2 LITERATURE REVIEW


Classical Methods

Maple contains a number of one-step methods for the numerical solution of initial value problems. These are referred to as classical methods and are invoked by including the option method=classical[type] in the call to dsolve. Here type can be one of

foreuler - the forward Euler method;

heunform - the Heun formula (also known as the trapezoidal rule, or the improved Euler method);

impoly - the improved polygon method (also known as the modified Euler method);

rk2 - the second-order classical Runge-Kutta method;

rk3 - the third-order classical Runge-Kutta method;

rk4 - the fourth-order classical Runge-Kutta method;

adambash - the Adams-Bashforth method (a "predictor" method);

abmoulton - the Adams-Bashforth-Moulton method (a "predictor-corrector" method).

If no type is specified the forward Euler method is used.



Worked Example 2 - The Forward Euler Method

Consider the IVP

y   2 xy 2 ,

bg

y 0 1

We can define this in Maple as

> ivp:={diff(y(x),x)=-2*x*y(x)^2,y(0)=1}:

In this case Maple can find the exact solution using dsolve

> Exactsoln:=rhs(dsolve(ivp,y(x)));

Exactsoln := 1/(x^2 + 1)

Now we use Euler's method to obtain the numerical solution. Note that this method, like all the other methods of type classical, uses a fixed stepsize which we provide.

> es0:=dsolve(ivp,y(x),type=numeric, method=classical[foreuler],stepsize=0.001):

Thus we can find the solution at x = 0.4

> es0(0.4);

[x = .4, y(x) = .8623085097414066]

and plot the solution for a range of values of x

> odeplot(es0,[x,y(x)],0..6,labels=[x,y]);
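The same computation is easy to check outside Maple. The following Python sketch (an illustration, not part of the worked example) applies the forward Euler recurrence Y_{n+1} = Y_n + h f(x_n, Y_n) with h = 0.001 and reproduces the value reported by dsolve at x = 0.4:

```python
def forward_euler(f, x0, y0, h, n_steps):
    # Forward Euler: Y_{n+1} = Y_n + h * f(x_n, Y_n)
    x, y = x0, y0
    for _ in range(n_steps):
        y += h * f(x, y)
        x += h
    return y

# The IVP of the worked example: y' = -2xy^2, y(0) = 1, exact y = 1/(1 + x^2).
f = lambda x, y: -2.0 * x * y ** 2
y04 = forward_euler(f, 0.0, 1.0, 0.001, 400)
print(y04)                      # agrees with the Maple value .8623085097414066
print(1.0 / (1.0 + 0.4 ** 2))   # exact value .86206896...
```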


CHAPTER 3 OBJECTIVE

a) To understand the techniques of numerical analysis.
b) To understand numerical analysis as a set of objects.
c) To find the numerical analysis of different objects.


CHAPTER 4 RESEARCH METHODOLOGY


• Research Design: Descriptive Research. Descriptive research includes surveys and fact-finding enquiries of different kinds. The major purpose of descriptive research is description of the state of affairs as it exists at present.

• Data Collection: The study is based on data collected through primary and secondary sources.

• Primary Data: An interview schedule was designed to collect primary data from various broadband users.

• Secondary Data: Secondary data was collected from journals, magazines, websites and other relevant publications.

• Sampling Design: The sampling design mainly consists of the sample taken for the study along with the sample size, sample frame and sampling method.

• Sample Universe: All customers using a broadband connection were taken as the sample universe.

• Sample Size: From the universe, a sample size of 200 respondents was selected for the purpose of the study.

• Sampling Method: Convenience sampling was used, based on the willingness and availability of the respondents. The study was conducted on consumers with different types of business.

Sample Size - 200 respondents.
Sample Unit - Students of graduation and post-graduation have been taken as the sample unit.
Sampling Area - Bhubaneswar.
Sampling Technique - Random sampling.

DATA COLLECTION:

• Primary data has been used in the form of a questionnaire and observation, which are the two basic methods of collecting primary data and which suffice for all research objectives.

• Secondary data sources such as the catalogue of the company, the product range book of the company and various internet sites such as motorola.com and google.com have been used.


CHAPTER 5 DATA ANALYSIS


Construct an array whose elements compare the exact solution to the numerical solution for three different stepsizes.

> mm:=array(1..8,1..5):
> mm[1,1]:=`x(k)`: mm[1,2]:=`Exactsoln`: mm[1,3]:=`h=0.1`:
> mm[1,4]:=`h=0.01`: mm[1,5]:=`h=0.001`:
> for i from 2 to 8 do
>   mm[i,1]:=0.1*(i-2):
>   mm[i,2]:=evalf(ExactSoln(x(i-2)),5):
>   for j from 3 to 5 do
>     mm[i,j]:=evalf(EulerSoln(x(i-2),10^(-j+2)),5)
>   od:
> od:

> eval(mm);


x(k)   Exact soln   h=0.1    h=0.01   h=0.001
0      1.           1.       1.       1.
.1     .99010       1.       .99107   .99020
.2     .96154       .98000   .96330   .96171
.3     .91743       .94158   .91969   .91766
.4     .86207       .88839   .86448   .86231
.5     .80000       .82525   .80229   .80023
.6     .73529       .75715   .73727   .73549

Another possibility is to compare the errors at each stepsize. Firstly define a function giving the error

> err:=(x,h)->ExactSoln(x)-EulerSoln(x,h):

> tt:=array(1..8,1..4):
> tt[1,1]:=`x(k)`: tt[1,2]:=`h=0.1`: tt[1,3]:=`h=0.01`: tt[1,4]:=`h=0.001`:
> for i from 2 to 8 do
>   tt[i,1]:=0.1*(i-2):
>   for j from 2 to 4 do
>     tt[i,j]:=evalf(err(x(i-2),10^(-j+1)),5)
>   od:
> od:

> eval(tt);


x(k)   h=0.1     h=0.01    h=0.001
0      0         0         0
.1     -.00990   -.00097   -.00010
.2     -.01846   -.00176   -.00017
.3     -.02415   -.00226   -.00023
.4     -.02632   -.00241   -.00024
.5     -.02525   -.00229   -.00023
.6     -.02186   -.00198   -.00020

Worked Example 3 - The Classical Second Order Runge-Kutta Method

Consider the solution of the IVP

y' = -4y + 4x,    y(0) = 1

> IVP:={diff(y(x),x)=-4*y(x)+4*x,y(0)=1}:

The exact solution is given by

> dsolve(IVP,y(x));

1 5 ( 4 x ) y( x )x  e 4 4


Use the 2nd order classical Runge-Kutta method

> rk2:=h->dsolve(IVP,y(x),type=numeric,method=classical[rk2],stepsize=h):
> x:=k->k*0.5:
> RK2Soln:=(x,h)->rhs(rk2(h)(x)[2]):
> ExactSoln:=x->x-1/4+5/4*exp(-4*x);

ExactSoln := x - 1/4 + (5/4) exp(-4x)

> mm:=array(1..10,1..5):
> mm[1,1]:=`x(k)`: mm[1,2]:=`Exactsoln`: mm[1,3]:=`h=0.25`:
> mm[1,4]:=`h=0.5`: mm[1,5]:=`h=0.75`:
> for i from 2 to 10 do
>   mm[i,1]:=0.5*(i-2):
>   mm[i,2]:=evalf(ExactSoln(x(i-2)),5):
>   for j from 3 to 5 do
>     mm[i,j]:=evalf(RK2Soln(x(i-2),0.25*(j-2)),5)
>   od:
> od:

> eval(mm);


x(k)   Exact soln   h=0.25   h=0.5    h=0.75
0      1.           1.       1.       1.
.5     .41918       .56250   1.5000   1.5000
1.0    .77290       .82813   2.       2.3125
1.5    1.2531       1.2695   2.5000   9.0625
2.0    1.7504       1.7549   3.       9.5625
2.5    2.2501       2.2512   3.5000   12.016
3.0    2.7500       2.7503   4.       51.578
3.5    3.2500       3.2501   4.5000   52.078
4.0    3.7500       3.7500   5.       64.785

Note that as the stepsize is increased the numerical solution fails to represent the exact solution accurately. Indeed, for a stepsize of 0.75 the numerical solution 'blows up'. This non-convergence is a result of the numerical method becoming unstable. We shall consider this phenomenon next.
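This behaviour can be reproduced in a few lines. In the Python sketch below (illustrative, not part of the Maple session; it uses the improved Euler form of the second-order method), the deviation of the numerical solution from the exact one is multiplied at each step by R(z) = 1 + z + z^2/2 with z = -4h, so it decays for h = 0.25 (R = 0.5) but grows for h = 0.75 (R = 2.5):

```python
def rk2_solve(f, x0, y0, h, n_steps):
    # Second-order Runge-Kutta (improved Euler form) with fixed step h.
    x, y = x0, y0
    for _ in range(n_steps):
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)
        y += (h / 2) * (k1 + k2)
        x += h
    return y

f = lambda x, y: -4.0 * y + 4.0 * x

# Stable stepsize: at x = 3 the exact solution is essentially x - 1/4 = 2.75.
y_stable = rk2_solve(f, 0.0, 1.0, 0.25, 12)    # about 2.7503, as in the table

# Unstable stepsize: the deviation from the exact solution grows by a
# factor of 2.5 per step, so by x = 6 the numerical solution has blown up.
y_unstable = rk2_solve(f, 0.0, 1.0, 0.75, 8)

print(y_stable, y_unstable)
```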


Exercises 1

1. Use the classical numerical methods foreuler, heunform, rk3, rk4 and adambash to attempt to obtain a numerical solution of the IVPs

(a)  dx/dt = -2tx^2,    x(0) = 1

(b)  dx/dt = 5x(1 - x),    x(0) = 0.5

Use a range of stepsizes in the interval [0,1]. At what approximate value of the stepsize do the methods become unstable?

2. Use each of the methods above to solve the systems of differential equations

(a)  x' = x - xy,    x(0) = 0.5
     y' = -y + xy,   y(0) = 0.5

(b)  x' = y,                  x(0) = 0
     y' = -x + μ(1 - x^2)y,   y(0) = 0.5

where μ is a parameter. (Try μ = 0.5, 1, 5, 10.)


(c)  x' = -(y + z),        x(0) = 1
     y' = x + 0.2y,        y(0) = 1
     z' = 0.2 - 8z + xz,   z(0) = 1

In each case use odeplot to obtain time series and phase plots.


5.3 LOCAL AND GLOBAL ERRORS

The output of a discrete variable method is a set of points {t_n, X_n} and the output of the dynamical system is a continuous trajectory x(t). For the numerical results to provide a good approximation to the trajectory we require that the difference

|X_N - x(t_N)| ≤ ε

where ε is some defined error tolerance, at each solution point. This difference is called the global error and is the accumulated error over all solution steps. Unfortunately it is extremely difficult to accomplish this and we have to confine ourselves to controlling the local error

|X̃_n - x(t_n)|

at each step, where X̃_n is the numerical solution obtained on the assumption that the numerical solution at the previous solution point is exact. There are two sources of local error, the roundoff error and the truncation error.

Roundoff Error

The roundoff error is the error which arises from the fact that numerical methods are implemented on digital computers, which only calculate results to a fixed precision that is dependent on the computer system used. Note that the roundoff error depends only on the number and type of arithmetic operations per step and is thus independent of the integration stepsize h.

Truncation Error

The truncation error of a numerical method results from the approximation of a continuous dynamical system by a discrete one. The truncation error is machine independent, depending only on the algorithm used and the stepsize h.

An important concept in the analysis of the truncation error is that of consistency. Basically, consistency requires that the discrete variable method becomes an exact representation of the dynamical system as the stepsize h → 0. Consistency conditions can be derived for both Linear Multistep and Runge-Kutta methods.
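One practical consequence, sketched below in Python (an illustration, not from the original text): for a consistent first-order method such as forward Euler the global error is proportional to h, so halving the stepsize should roughly halve the error.

```python
import math

def euler_error(h):
    # Global error of forward Euler for x' = -x, x(0) = 1, measured at t = 1.
    n = round(1.0 / h)
    x = 1.0
    for _ in range(n):
        x *= 1.0 - h          # one Euler step: x + h * (-x)
    return abs(x - math.exp(-1.0))

e1 = euler_error(0.01)
e2 = euler_error(0.005)
print(e1 / e2)   # close to 2, confirming first-order convergence
```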


