A Very Quick Review Of Differential Equations.

  • Uploaded by: Hao Lian
  • October 2019
Hao Lian, programmer-poet. This review of an introduction to differential equations course assumes in various, tiny ways that you've been exposed to a formal class. Otherwise, you probably should read a textbook first.

1 FIRST-ORDER LINEAR

When using the separable method, losing zeroes occurs. For example, the naïve solution to y' = x²(y − 13) misses y(x) = 13.

Given a1(x)y'(x) + a0(x)y(x) = b(x), there are two trivial cases: a0(x) = 0 and a0(x) = a1'(x), the latter due to the product rule. We can reduce all such equations to this case. Writing the equation in the standard form y' + Py = Q, let µ(x) be such that µ' = µP, implying µ = exp(∫ P dx). Then

    µQ = µ(y' + Py) = d(µy)/dx

    µy = ∫ µQ dx + C.

Definition 1.1 (Exact equation). An exact equation on rectangle R is of the form M(x, y) dx + N(x, y) dy = 0 such that there exists F such that Fx = M and Fy = N.

Theorem 1.1. An equation is exact iff My = Nx. The solution follows from the properties of a total derivative:

    M dx + N dy = 0  ⇒  Fx dx + Fy dy = 0  ⇒  F(x, y) = C.

2 SECOND-ORDER LINEAR

Guessing exp(rt) for ay'' + by' + cy = 0, the equation resolves to exp(rt)(ar² + br + c) = 0. With two distinct roots r1, r2, superposition gives y(t) = c1 exp(r1 t) + c2 exp(r2 t).

Theorem 2.1 (Existence and uniqueness). Given the universe of reals, there exists a unique solution to second-order linear equations with initial conditions y(t0) = Y0 and y'(t0) = Y1, valid for all t ∈ R.

Theorem 2.2 (Representation). For y1, y2 that are linindep solutions on I with t0 ∈ I, there exist c1, c2 such that c1 y1 + c2 y2 satisfies the IVP on I with the initial conditions at t0. The proof follows from knowing W[y1, y2] = 0 iff the two solutions are lindep, and some non-obvious algebra.

2.2 Non-homogeneous

Theorem 2.3 (Existence and uniqueness). For a, b, c, t0, Y0, Y1 ∈ R and y1, y2 linindep on I with t0 ∈ I, there exists a unique solution to the non-homogeneous IVP.

Proof. We use the EUT for the homogeneous case and the superposition principle to construct the solution; therefore, it exists. To prove the solution yg = yp + c1 y1 + c2 y2 is unique, assume there exists another solution ψ such that ψ ≠ yg. Let y = yg − ψ. Trivially, y satisfies the IVP ay'' + by' + cy = 0 with initial values y(t0) = 0 and y'(t0) = 0. Note that this is a homogeneous equation. However, f(t) = 0 is already a unique solution to this IVP. By the homogeneous EUT, it must be the same solution as y, implying y = yg − ψ = 0 or yg = ψ, a contradiction.

For a particular solution, let

    v1 = ∫ −g y2 / (a W[y1, y2]) dt        v2 = ∫ g y1 / (a W[y1, y2]) dt

where ay'' + by' + cy = g. Then yp = v1 y1 + v2 y2. This is the method of variation of parameters.

2.3 Miscellaneous

Linear equations always obey the superposition principle, distinguishing them from non-linear equations. Non-linear equations also do not guarantee uniqueness of solutions.
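The integrating-factor recipe from the first-order section can be spot-checked numerically. A minimal sketch, using my own example y' + (1/x)y = x² (so P = 1/x, Q = x², µ = x) and nothing beyond the standard library; the constant C and sample point are arbitrary:

```python
import math

# Integrating-factor check for y' + P(x) y = Q(x) with P = 1/x, Q = x^2
# (a made-up example).  mu = exp(int P dx) = x, so (mu*y)' = mu*Q = x^3
# and therefore y = x^3/4 + C/x.
C = 2.0

def y(x):
    return x**3 / 4 + C / x

# Plug the closed form back into the equation with a central difference.
h = 1e-6
x = 1.3
yp = (y(x + h) - y(x - h)) / (2 * h)
residual = yp + y(x) / x - x**2
print(abs(residual))  # ~0 up to finite-difference error
```

The same residual trick works for any candidate solution, which makes it a cheap way to catch algebra mistakes in the antiderivative of µQ.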

2.1 Working from the characteristic polynomial roots

For two identical roots r, use a t exp(rt) term. For characteristic equations with complex-conjugate roots α ± βi, there exist two solutions: exp(αt) cos βt and exp(αt) sin βt. Note that complex conjugates only occur if a, b, c ∈ R. Otherwise, two non-conjugate complex roots may arise.

3 HIGHER-ORDER LINEAR

We can easily generalize the homogeneous linear equation to higher orders using the characteristic equation. We can also generalize the method of undetermined coefficients by finding a linear differential operator A (an annihilator) such that A[f](x) = 0. So suppose we have a linear equation in operator form L[y](x) = f(x). Then AL[y](x) = A[f](x) = 0. We have reduced the non-homogeneous equation to homogeneous form. Boo-ya.

Constructing an annihilator is like constructing a guess using the method of undetermined coefficients: memorize a brief table. If x^k exp(rx), then (D − r)^m where m is any integer such that m > k. If sin βx or cos βx, then (D² + β²). If x^k exp(rx) sin βx, then [(D − r)² + β²]^m. For example,

    0 = (D − 1)³(D² + 1)(D + 1)[y]
    y = (C1 e^x + C2 x e^x + C3 x² e^x) + (C4 sin x + C5 cos x) + C6 e^(−x).
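The characteristic-equation guess for constant-coefficient second-order equations is easy to verify in code. A minimal sketch with my own example y'' − 3y' + 2y = 0 and initial conditions y(0) = 0, y'(0) = 1; the equation and tolerances are assumptions of mine, not from the notes:

```python
import cmath

# Characteristic roots of a*r^2 + b*r + c = 0 for y'' - 3y' + 2y = 0.
a, b, c = 1.0, -3.0, 2.0
disc = cmath.sqrt(b * b - 4 * a * c)
r1, r2 = (-b + disc) / (2 * a), (-b - disc) / (2 * a)  # roots 2 and 1

# Distinct roots, so y(t) = c1*exp(r1 t) + c2*exp(r2 t).  Fitting c1, c2
# to y(0) = 0, y'(0) = 1 by hand:
#   c1 + c2 = 0,  r1*c1 + r2*c2 = 1  =>  c1 = 1/(r1 - r2), c2 = -c1.
c1 = 1 / (r1 - r2)
c2 = -c1

def y(t):
    return (c1 * cmath.exp(r1 * t) + c2 * cmath.exp(r2 * t)).real

# Check the residual y'' - 3y' + 2y at one point with central differences.
h = 1e-5
t = 0.7
ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
yp = (y(t + h) - y(t - h)) / (2 * h)
residual = ypp - 3 * yp + 2 * y(t)
print(abs(residual))  # small, on the order of the finite-difference error
```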

4 MATRICES

The following statements are equivalent: (1) A is singular (has no inverse); (2) |A| = 0; (3) Ax = 0 has non-trivial solutions, i.e. where x ≠ 0; and (4) the rows of A are lindep. If A is singular, Ax = 0 has infinitely many solutions, all a scalar multiple of some x0 ≠ 0. Furthermore, Ax = b either has no solutions or infinitely many of the form x = xp + xh.
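A tiny sketch of these equivalences, with a singular matrix of my own; none of the numbers come from the notes:

```python
# Rows of A are lindep (second row is half the first), so det A = 0 and
# A x = 0 has a non-trivial solution x0; every scalar multiple of x0 is
# also a solution.
A = [[2.0, 4.0],
     [1.0, 2.0]]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
x0 = [2.0, -1.0]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

print(det)                             # 0.0
print(matvec(A, x0))                   # [0.0, 0.0]
print(matvec(A, [2 * c for c in x0]))  # scalar multiples also map to zero
```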

Another property, which is significant in the light that matrix multiplication is not commutative:

    d(AB)/dt = (dA/dt)B + A(dB/dt).

Theorem 4.1 (Existence and uniqueness). If A, f are continuous on an open interval I, t0 ∈ I, and x0 is given, then there exists a unique x(t) on I to the IVP x'(t) = Ax + f.

Theorem 4.2. Let xi be linindep solutions to x' = Ax on I. Then the Wronskian is never zero on I.

Proof. Suppose for a contradiction that W(t0) = 0 for some t0. Then X(t0)c = 0 for some c ≠ 0, because the columns are linearly dependent when the matrix has a zero determinant, per the above theorem. Another solution to x' = Ax is z(t) = 0. The solutions X(t)c and z are identical on I by the EUT, implying X(t)c = 0 for all t. This implies the xi are lindep, contradicting the initial assumption.

Two implications: W(t) is always zero or never zero on I. And the set of solutions is linindep iff W(t) is never zero, whence the representation theorem:

Theorem 4.3 (Representation). Let xi be linindep solutions to the homogeneous system x' = Ax on I. Then every solution is of the form x(t) = X(t)c. Denote X as the fundamental matrix for the fundamental solution set.

4.1 Non-homogeneous

Theorem 4.4 (Superposition principle). Let L[x] := x' − Ax. If x1, x2 are solutions to L[x] = g1 and L[x] = g2, then c1x1 + c2x2 is a solution to L[x] = c1g1 + c2g2.

Theorem 4.5 (Representation). Let xp solve x' = Ax + f on I and X be the fundamental matrix for the homogeneous system x' = Ax. Then every solution on I is of the form x = xp + Xc by the superposition principle. Denote this as the general solution.

4.2 Eigenapalooza

For the system x' = Ax, guess x = exp(rt)u, implying r exp(rt)u = exp(rt)Au, or (A − rI)u = 0. Eigenvalues are numbers r such that that equation has a non-trivial (u ≠ 0) solution; the corresponding u are eigenvectors. Non-trivial solutions, from a previous theorem, occur iff |A − rI| = 0. The determinant is the characteristic polynomial, and this is the characteristic equation.

Theorem 4.6. exp(ri t)ui is a fundamental set for linearly independent {ui}.

Proof. The Wronskian over the set of those solutions is exp(t Σ ri)|U| where U = [u1, ..., un]. The determinant is never zero because the eigenvectors are linearly independent; therefore, the Wronskian is never zero for all t.

Theorem 4.7. If r1, r2 are distinct eigenvalues, then the eigenvectors u1, u2 are linearly independent.

Proof. Suppose u1 = cu2. Then (r1 − r2)u1 = 0, implying for a contradiction that r1 = r2, because we know u1 ≠ 0.

As a corollary, n distinct eigenvalues and their eigenvectors imply a fundamental solution set for x' = Ax.

The conjugates for complex eigenvalues r = α ± βi with eigenvectors a ± ib create two linearly independent real vector solutions:

    exp(αt)(a cos βt − b sin βt)        exp(αt)(a sin βt + b cos βt).

4.3 Non-homogeneous

The method of undetermined coefficients works similarly. However, variation of parameters yields the equation

    x = Xc + X ∫ X⁻¹ f dt.

4.4 Matrix exponential

Definition 4.1 (Matrix exponential).

    exp(At) = I + At + A² t²/2 + ···.

The inverse, exp(At)⁻¹, is exp(−At). The derivative is A exp(At) by virtue of differentiating that pseudo-Taylor-series pseudo-polynomial. Therefore, the exponential is a solution to X' = AX. Because exp(At) is invertible, the columns are linindep solutions to the system. The general solution is then x(t) = exp(At)c, and exp(At) is the fundamental matrix for the system.

Definition 4.2 (Nilpotent). A matrix B is nilpotent iff there exists k > 0 such that B^k = 0. In such a case, the exponential has a finite number of terms.

If A − rI is nilpotent, then there is a finite expansion:

    e^(At) = e^(rt) e^((A−rI)t) = e^(rt) (I + (A − rI)t + ··· + (A − rI)^(n−1) t^(n−1)/(n−1)!).    (1)
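The finite expansion (1) can be checked against the brute-force series. A minimal sketch with my own nilpotent example A = [[1, 1], [0, 1]] and r = 1, so (A − I)² = 0 and exp(At) = e^t (I + (A − I)t) = e^t [[1, t], [0, 1]]:

```python
import math

t = 0.5

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def exp_series(A, t, terms=30):
    # Brute-force truncation of I + At + A^2 t^2/2 + ... for comparison.
    result = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for n in range(1, terms):
        power = matmul(power, A)
        fact *= n
        for i in range(2):
            for j in range(2):
                result[i][j] += power[i][j] * t**n / fact
    return result

A = [[1.0, 1.0], [0.0, 1.0]]
# Closed form from the finite expansion: e^t * [[1, t], [0, 1]].
closed = [[math.exp(t), t * math.exp(t)], [0.0, math.exp(t)]]
series = exp_series(A, t)
err = max(abs(closed[i][j] - series[i][j]) for i in range(2) for j in range(2))
print(err)  # tiny: the two computations agree
```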

4.5 Generalized eigenvectors

With repeated eigenvalues, we need a strategy to find additional eigenvectors.

Lemma. If X, Y are fundamental matrices for the system x' = Ax, then there exists a constant matrix C such that X(t) = Y(t)C.

What is the relationship between exp(At) and a given fundamental matrix X? By the lemma, exp(At) = X(t)C for some C. Plugging in t = 0, we find I = X(0)C. Therefore, exp(At) = X(t)X⁻¹(0).

So, to obtain the fundamental solution matrix X, require that its columns be of the form exp(At)u, which can be decomposed by (1). Therefore, we need n vectors u whose calculations are feasible. To do so, find p(r) = |A − rI| and therefore the eigenvalues. For each ri with multiplicity mi, find mi linindep generalized eigenvectors. (A nonzero vector u that satisfies (A − rI)^k u = 0 for some positive k is a generalized eigenvector.) Compute x = exp(rt) exp(t(A − rI))u, using the Taylor expansion for the last exponential. It becomes one of the n linindep solutions to the system. The expansion will be finite because (A − rI)^i ui = 0 for some i ∈ [1, mi]. The hardest part is finding such a ui each time.

5 LAPLACE TRANSFORM

Definition 5.1 (Laplace transform). If f is defined on [0, ∞), then

    L{f}(s) = ∫₀^∞ e^(−st) f(t) dt.

The power lies within its ability (usually) to replace differential equations with algebraic equations by recursively applying L{x'} = sL{x} − x(0).

Definition 5.2. A function is piecewise continuous on the interval [a, b] iff it is continuous on that interval except at a finite number of points at which a jump discontinuity occurs. The jump at f(x) = 1/x for x = 0 would, for example, count as an "infinite" jump and not a jump discontinuity.

Theorem 5.1. Roughly, L{f} exists for s > α if f does not grow faster than an exponential function of some positive order α and f is piecewise continuous on [0, ∞). The proof follows from expanding out the exponential-order test and then performing an integral comparison test.

For s > 0: 1 to 1/s, t^n to n!/s^(n+1), sin bt to b/(s² + b²), and cos bt to s/(s² + b²). For s > a, e^(at) to 1/(s − a) and e^(at) t^n to n!/(s − a)^(n+1). Finally, f^(n) to s^n F(s) − s^(n−1) f(0) − ··· − f^(n−1)(0). Also,

    L{e^(at) f(t)}(s) = F(s − a)
    L{t^n f(t)}(s) = (−1)^n F^(n)(s).

5.1 Inverse Laplace

L⁻¹, like L, is a linear operator, implying L⁻¹{f1 + f2} = L⁻¹{f1} + L⁻¹{f2} and L⁻¹{cf} = cL⁻¹{f}. To take the inverse, you basically consult the table. For rational polynomial fractions, partial fraction decomposition is needed with a small twist: for 2s² + 10s over s² − 2s + 5, the irreducible quadratic term is rewritten as

    [A(s − 1) + 2B] / [(s − 1)² + 2²].

Completing the square is necessary to remove any cs^n terms. Then those factors have to go into the numerator so the inverse Laplace rules for sin and cos work correctly.

For t coefficients in linear equations, the transform turns these non-constant equations into constant-coefficient first-order linear equations:

    y'' + 2ty' − 4y = 1 with y(0) = y'(0) = 0.

Transforming term by term (using L{ty'}(s) = −(d/ds)[sY(s)]) and dividing by −2s gives

    Y' + (3/s − s/2)Y = −1/(2s²)  ⇒  d(µY)/ds = −(s/2)e^(−s²/4)

with integrating factor µ = s³e^(−s²/4), whence

    Y(s) = 1/s³ + C e^(s²/4)/s³.    (2)

Theorem 5.2. If f(t) is piecewise continuous on [0, ∞) and of exponential order, then lim(s→∞) L{f}(s) = 0.

How to determine C? Using the above theorem, we take the limit of the solution at (2) and set it equal to zero, finding easily that C = 0. Therefore, with C, we can now find y. From Y(s) = 1/s³, it follows that y(t) = t²/2. Magic.

5.2 Discontinuous and periodic functions

Definition 5.3 (Heaviside step function).

    u(t | t < 0) = 0        u(t | 0 < t) = 1.

We can then express any piecewise function in terms of the step function for easier transformation. For example,

    f(t | t < 2) = 3
    f(t | 2 < t < 5) = 1
    f(t | 5 < t < 8) = t
    f(t | 8 < t) = t²/10

    f(t) = 3 + (1 − 3)u(t − 2) + (t − 1)u(t − 5) + (t²/10 − t)u(t − 8).

Properties:

    L{u(t − a)}(s) = e^(−as)/s
    L⁻¹{e^(−as) F(s)}(t) = f(t − a)u(t − a).

N.B. L{e^(at) f(t)}(s) = F(s − a).

The solution to a constant-coefficient linear second-order differential equation with a stepped non-homogeneous term can be, magically, a continuous function. However, its second derivative is instead discontinuous.
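The step-function rewriting of the piecewise example is easy to sanity-check; the breakpoints 2, 5, 8 come from the text, while the sample points below are my own:

```python
import math

def u(t):
    # Heaviside step: 0 for t < 0, 1 for t > 0.
    return 1.0 if t > 0 else 0.0

def f_piecewise(t):
    if t < 2:
        return 3.0
    if t < 5:
        return 1.0
    if t < 8:
        return t
    return t * t / 10

def f_steps(t):
    # The step-function form: each u(t - a) switches in the jump needed
    # to move from the previous branch to the next.
    return (3 + (1 - 3) * u(t - 2) + (t - 1) * u(t - 5)
            + (t * t / 10 - t) * u(t - 8))

samples = [0.5, 3.0, 6.5, 9.0, 12.0]
agrees = all(math.isclose(f_piecewise(t), f_steps(t)) for t in samples)
print(agrees)  # True
```

The same pattern (add "next branch minus previous branch" at each breakpoint) works for any finite piecewise function.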
