LECTURE NOTES: ADVANCED NUMERICAL COMPUTING - CS 543
Michael J. Johnson Spring 2008
1. Notation

N := {1, 2, 3, . . . } is the set of natural numbers. N0 := {0, 1, 2, 3, . . . } is the set of non-negative integers. Z := {. . . , −2, −1, 0, 1, 2, . . . } is the set of integers. R denotes the set of real numbers.
Π denotes the space of univariate polynomials. Πk denotes the space of univariate polynomials of degree at most k (the degree of the 0 polynomial is −1). Pk,Ξ denotes the space of piecewise polynomials of degree k with knot sequence Ξ. C^k[a, b] denotes the space of functions f which are k-times continuously differentiable on [a, b]. Sk,Ξ denotes the space of splines of degree k with knot sequence Ξ. Cc(R) denotes the space of compactly supported functions in C(R).

2. Piecewise Polynomials

For k ∈ N0, the space of univariate polynomials of degree at most k is denoted by Πk. Let Ξ = {a = ξ_1, ξ_2, . . . , ξ_{N+1} = b} be a set of increasing nodes (ξ_i < ξ_{i+1}) and let k ∈ N0. The space of piecewise polynomials of degree k with knots Ξ, denoted Pk,Ξ, is the collection of functions s : [a, b] → R which satisfy

    s|_{[ξ_i, ξ_{i+1})} ∈ Πk|_{[ξ_i, ξ_{i+1})}   for i = 1, 2, . . . , N,

where the above is taken with the closed interval [ξ_N, ξ_{N+1}] when i = N. Note that we have adopted the somewhat arbitrary convention that piecewise polynomials (pp) are continuous from the right.
Example 2.1. Verify that s belongs to P3,Ξ if Ξ = {0, 1, 3, 4} and

    s(x) := x^3 + 1               if 0 ≤ x < 1,
            2(x − 1)^2 + (x − 1)  if 1 ≤ x < 3,
            4(x − 3)^3 + 2        if 3 ≤ x ≤ 4.

Is s continuous?
Theorem 2.2. The dimension of Pk,Ξ is N(k + 1).

Piecewise polynomials with Octave

In Octave, the polynomial p(x) = p_1 x^k + p_2 x^{k−1} + · · · + p_k x + p_{k+1} is represented by the vector [p_1, p_2, . . . , p_{k+1}]. For example, the vector [3, 0, 1, −2] represents the polynomial p(x) = 3x^3 + x − 2. Note that the vector [0, 0, 3, 0, 1, −2] also represents this same polynomial, while the vector [3, 0, 1, −2, 0, 0] represents the polynomial x^2 p(x) = 3x^5 + x^3 − 2x^2. Some Octave commands which relate to polynomials are polyval (evaluation of a polynomial), conv (multiplication of two polynomials), polyderiv (derivative of a polynomial), and polyinteg (integral of a polynomial).

Piecewise polynomials are not a built-in part of Octave; however, I have written some routines to handle them. To describe how we represent a piecewise polynomial in Octave, let s ∈ Pk,Ξ. Since s is composed of N polynomial pieces, we can write s as s(x) = s_i(x − ξ_i) for x ∈ [ξ_i, ξ_{i+1}), 1 ≤ i ≤ N, where each polynomial s_i is of degree at most k. The coefficients of these polynomials s_i, 1 ≤ i ≤ N, are stored in an N × (k + 1) matrix S whose i-th row contains the vector representation of the polynomial s_i. Thus the piecewise polynomial s is represented by the pair (Ξ, S). Some commands which I've written for piecewise polynomials are pp_val (evaluate a pp), pp_add (add two pp), pp_mult (multiply two pp), pp_deriv (differentiate a pp), pp_integ (anti-derivative of a pp), pp_def_integ (definite integral of a pp), and pp_knot_insert (insert knots into a pp).

Example 2.3. For example, the Octave representation of

    s(x) := x^3 + 1               if 0 ≤ x < 1,
            2(x − 1)^2 + (x − 1)  if 1 ≤ x < 3,
            4(x − 3)^3 + 2        if 3 ≤ x ≤ 4
is the pair Ξ = [0, 1, 3, 4] and

    S = [ 1  0  0  1
          0  2  1  0
          4  0  0  2 ].
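To illustrate the (Ξ, S) representation, here is a minimal Python sketch (the course's own routines are Octave; the helper name pp_val below merely mirrors them, and row i of S holds the coefficients of s_i, highest power first, with s(x) = s_i(x − ξ_i)):

```python
def pp_val(Xi, S, x):
    """Evaluate the piecewise polynomial (Xi, S) at x.

    As in the notes, the first piece is used for x < Xi[0] and the
    last piece for x >= Xi[-1]."""
    N = len(S)
    # locate the piece: largest i with Xi[i] <= x, clamped to 0..N-1
    i = 0
    while i < N - 1 and x >= Xi[i + 1]:
        i += 1
    # Horner evaluation of s_i at the local coordinate t = x - Xi[i]
    t = x - Xi[i]
    v = 0.0
    for c in S[i]:
        v = v * t + c
    return v

Xi = [0, 1, 3, 4]
S = [[1, 0, 0, 1],   # s_1(t) = t^3 + 1
     [0, 2, 1, 0],   # s_2(t) = 2t^2 + t
     [4, 0, 0, 2]]   # s_3(t) = 4t^3 + 2
print(pp_val(Xi, S, 0.5))  # 0.5^3 + 1 = 1.125
```

Note that each row is evaluated in the local coordinate x − ξ_i, matching the convention s(x) = s_i(x − ξ_i) above.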
Knot insertion

If two pp's have identical knots, then adding them or multiplying them is fairly straightforward; however, if they have different knots, then one must first insert knots, as needed, until both pp's have been rendered with identical knots.
Example (knot insertion). Let s be as in the previous example and let Ξ1 = {0, 1, 2, 3, 4}. Note that Ξ1 has been obtained from Ξ by inserting the knot 2. Find the representation of s with respect to the knot sequence Ξ1 (i.e. as an element of P3,Ξ1).

In the above example, we see that the computation required is that of expanding s_2(x + 1) in powers of x, where s_2(x) = 2x^2 + x. This yields s_2(x + 1) = 2x^2 + 5x + 3. In general, the work involved in a knot insertion is simply that of finding, for a given polynomial p(x) = p_1 x^k + · · · + p_k x + p_{k+1}, the coefficients q_1, . . . , q_{k+1} such that the polynomial q(x) = q_1 x^k + · · · + q_k x + q_{k+1} satisfies q(x) = p(x + τ). In other words, we have to translate the polynomial p by a distance −τ. One can easily write an algorithm for this based on the binomial formula; however, the number of flops needed for execution is about 2k^2. A better algorithm, which uses only k^2 + 1 flops, is the following (assuming that p is the Octave representation of the polynomial p(x) above):

    q=p;
    s=tau*p(1);
    for i=k+1:-1:2
      q(2)=q(2)+s;
      for j=3:i
        q(j)=q(j)+tau*q(j-1);
      end
    end

It is easy to repeat the above example using this efficient algorithm and verify that the same result is obtained.

3. Splines and B-splines

The space Pk,Ξ contains functions with various smoothness properties (assuming N ≥ 2). The following definition allows us to give a rather precise categorization of the smooth functions contained in Pk,Ξ.

Definition 3.1. The space of continuous functions on [a, b] is denoted C[a, b] (or C^0[a, b]). For a positive integer k, we define C^k[a, b] to be the space of functions f : [a, b] → R for which f, f', . . . , f^(k) are continuous on [a, b].

Theorem 3.2. For ℓ = 0, 1, . . . , k, the dimension of Pk,Ξ ∩ C^ℓ[a, b] equals N(k − ℓ) + ℓ + 1. Moreover, Pk,Ξ ∩ C^k[a, b] equals Πk|_{[a,b]}.
Of particular interest is the subspace Pk,Ξ ∩ C^{k−1}[a, b], which has dimension N + k.

Definition 3.3. The subspace Pk,Ξ ∩ C^{k−1}[a, b], denoted Sk,Ξ, is called the space of splines of degree k having knot sequence Ξ.

Example 3.4. Determine whether s belongs to S2,Ξ if Ξ = {0, 1, 2, 4} and

    s(x) = x^2 − 2                       if 0 ≤ x < 1,
           2(x − 1)^2 + 2(x − 1) − 1     if 1 ≤ x < 2,
           −(x − 2)^2 + 6(x − 1) + 3     if 2 ≤ x ≤ 4.
Theorem 3.5. Let s ∈ Sk,Ξ.
a) If k > 0, then the derivative s' belongs to Sk−1,Ξ.
b) If s̃ is an anti-derivative of s, then s̃ ∈ Sk+1,Ξ.
The B-splines, which we now define, are important to the theory and applications of splines because they can be used as the elements of a basis for a given spline space Sk,Ξ. Let us begin by assuming, as usual, that we are given knots Ξ = {ξ_1, ξ_2, . . . , ξ_{N+1}} satisfying a = ξ_1 < ξ_2 < · · · < ξ_{N+1} = b. Additionally, we assume that Ξ has been extended to an infinite sequence of knots {. . . , ξ_{−1}, ξ_0, ξ_1, ξ_2, ξ_3, . . . } satisfying ξ_i < ξ_{i+1} for all i ∈ Z. The exact values of the additional knots are of no concern, so long as the entire sequence is increasing. The B-splines of degree 0 are defined by

    B_i^0(x) := 1 if x ∈ [ξ_i, ξ_{i+1}),
                0 if x ∉ [ξ_i, ξ_{i+1}).
Recursive Definition 3.6. For k = 1, 2, 3, . . . , the B-splines of degree k are defined by

    B_i^k(x) := ((x − ξ_i)/(ξ_{i+k} − ξ_i)) B_i^{k−1}(x) + ((ξ_{i+k+1} − x)/(ξ_{i+k+1} − ξ_{i+1})) B_{i+1}^{k−1}(x).

Note that the B-spline is defined on all of R. In the important special case when the knots are simply ξ_i = i, the B-splines are called cardinal B-splines and the above recursion reduces to

    B_i^k(x) := (1/k) [ (x − i) B_i^{k−1}(x) + ((i + k + 1) − x) B_{i+1}^{k−1}(x) ];

moreover, we have (in the cardinal case) B_i^k(x) = B_0^k(x − i), so that B_i^k is simply a translate of B_0^k. The Octave script ex_Bspline_recursion.m gives a visual demonstration of this construction.

Example 3.7. Find a formula for B_i^1.

Partition of Unity Theorem 3.8. Let k ∈ N0. If lim_{i→±∞} ξ_i = ±∞, then

    Σ_{i=−∞}^{∞} B_i^k(x) = 1 for all x ∈ R.
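Recursive Definition 3.6 translates directly into code. The Python sketch below (a stand-in for the visual demonstration in ex_Bspline_recursion.m; the helper name bspline is hypothetical) evaluates B_i^k by the recursion, using cardinal knots, and checks the partition of unity numerically:

```python
def bspline(i, k, x, xi=lambda i: float(i)):
    """Evaluate B_i^k(x) by Recursive Definition 3.6.

    xi gives the knot sequence; the default is cardinal knots xi_i = i."""
    if k == 0:
        return 1.0 if xi(i) <= x < xi(i + 1) else 0.0
    left = (x - xi(i)) / (xi(i + k) - xi(i)) * bspline(i, k - 1, x, xi)
    right = ((xi(i + k + 1) - x) / (xi(i + k + 1) - xi(i + 1))
             * bspline(i + 1, k - 1, x, xi))
    return left + right

# Partition of unity (Theorem 3.8): at x = 5.3 only B_3^2, B_4^2, B_5^2
# are nonzero (support [i, i+3]), and they sum to 1.
x, k = 5.3, 2
total = sum(bspline(i, k, x) for i in range(0, 10))
print(total)  # 1.0 up to rounding
```

The sum ranges over enough indices to cover every B-spline whose support contains x; all other terms vanish.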
Properties of B-splines. For k ∈ N0 and i ∈ Z, the following hold:
(i) B_i^k is a pp of degree k with knots ξ_i, ξ_{i+1}, . . . , ξ_{i+k+1}.
(ii) B_i^k(x) > 0 if x ∈ (ξ_i, ξ_{i+k+1}), and B_i^k(x) = 0 if x ∉ [ξ_i, ξ_{i+k+1}].
(iii) If k > 0, then

    (d/dx) B_i^k(x) = (k/(ξ_{i+k} − ξ_i)) B_i^{k−1}(x) − (k/(ξ_{i+k+1} − ξ_{i+1})) B_{i+1}^{k−1}(x).
(iv) B_i^k is (k − 1)-times continuously differentiable (i.e. B_i^k ∈ C^{k−1}(−∞, ∞)).
(v) ∫_{−∞}^{x} B_i^k(t) dt = ((ξ_{i+k+1} − ξ_i)/(k + 1)) Σ_{j=i}^{∞} B_j^{k+1}(x).

The support of a function f : R → R is defined to be the closure of the set {x ∈ R : f(x) ≠ 0}. One consequence of property (ii) is that the support of B_i^k is the interval [ξ_i, ξ_{i+k+1}].

Question. Which B-splines have some of their support on the interval (a, b)?

The following Corollary extends properties (iii) and (v) to a linear combination of B-splines.

Corollary 3.9. Let f(x) = Σ_{i=−∞}^{∞} c_i B_i^k(x).
(i) If k > 0, then

    (d/dx) f(x) = k Σ_{i=−∞}^{∞} ((c_i − c_{i−1})/(ξ_{i+k} − ξ_i)) B_i^{k−1}(x).

(ii) If k ≥ 0 and only finitely many {c_i} are nonzero, then

    ∫_{−∞}^{x} f(t) dt = Σ_{i=−∞}^{∞} d_i B_i^{k+1}(x),  where d_i = (1/(k + 1)) Σ_{j=−∞}^{i} c_j (ξ_{j+k+1} − ξ_j).
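The derivative formula in Corollary 3.9(i) is easy to sanity-check numerically. The Python sketch below uses cardinal knots (so each denominator ξ_{i+k} − ξ_i equals k) and compares the B-spline expansion of f' against a centered difference quotient of f; the helper names are hypothetical:

```python
def bspline(i, k, x):
    """B_i^k(x) by Recursive Definition 3.6 with cardinal knots xi_i = i."""
    if k == 0:
        return 1.0 if i <= x < i + 1 else 0.0
    return ((x - i) * bspline(i, k - 1, x)
            + (i + k + 1 - x) * bspline(i + 1, k - 1, x)) / k

c = {0: 2.0, 1: -1.0, 2: 3.0, 3: 1.0}   # coefficients; c_i = 0 otherwise
k = 3

def f(x):
    return sum(ci * bspline(i, k, x) for i, ci in c.items())

def fprime(x):
    # Corollary 3.9(i); for cardinal knots k (c_i - c_{i-1}) / k = c_i - c_{i-1}
    return sum((c.get(i, 0.0) - c.get(i - 1, 0.0)) * bspline(i, k - 1, x)
               for i in range(0, 5))

x, h = 2.7, 1e-5
approx = (f(x + h) - f(x - h)) / (2 * h)
print(abs(fprime(x) - approx))  # small (centered differences are O(h^2))
```

Any point x away from the knots works equally well; the agreement is limited only by the accuracy of the difference quotient.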
As mentioned above, the knots in Ξ are denoted a = ξ_1 < ξ_2 < · · · < ξ_{N+1} = b, and we assume that we have chosen additional knots ξ_i for i < 1 and i > N + 1, maintaining ξ_i < ξ_{i+1} for all i ∈ Z.

Theorem 3.10. A basis for the space Sk,Ξ is formed by the functions B_i^k|_{[a,b]} for i = 1 − k, 2 − k, . . . , N.

Note that the number of functions in the basis above is N + k, which of course is also the dimension of Sk,Ξ. We also note that the B-splines B_i^k, for i = 1 − k, 2 − k, . . . , N, are precisely those B-splines whose support has some overlap with the interval (a, b). The Octave script ex_Bspline_basis.m gives a visual demonstration.

A consequence of Theorem 3.10 is that every function s ∈ Sk,Ξ can be written in the form

    s(x) = Σ_{j=1−k}^{N} c_j B_j^k(x),   x ∈ [a, b],

for some scalars (numbers) {c_j}. This form is known as the bb-form of s, where bb is meant to connote the phrase B-spline basis. Since the B-splines in use are determined by the
knots {ξ_{1−k}, ξ_{2−k}, . . . , ξ_{N+k+1}}, the bb-form of s is determined simply by these knots along with the coefficients {c_j}, j = 1 − k, 2 − k, . . . , N. As illustrated by the following example, given the bb-form of s one can use Corollary 3.9 to obtain the bb-form of the derivative of s or of an anti-derivative of s.

Example. Let ξ_i = i for all i, and define

    s(x) := 2B_{−1}^2(x) − B_0^2(x) + 3B_1^2(x) + B_2^2(x) − 3B_3^2(x) − B_5^2(x) + B_6^2(x),   x ∈ [1, 7].

Find the bb-forms for the derivative s'(x) and the antiderivative s̃(x) = ∫_{−1}^{x} s(t) dt, where x ∈ [1, 7].
4. B-Splines in Octave

As usual we assume that we are given the knots Ξ = [a = ξ_1, ξ_2, . . . , ξ_{N+1} = b]. In order to use the B-spline basis for Sk,Ξ we need k additional knots preceding a and k additional knots following b. These additional knots must be chosen to ensure that the extended knot sequence Ξext = [ξ_{1−k}, ξ_{2−k}, . . . , ξ_{N+k+1}] is increasing. One suggestion, which is motivated by a desire to maintain numerical stability, is to choose the additional knots to be equispaced with spacing h = (b − a)/N.

In Octave, the first index of a vector is always 1, so the vector Ξext is stored as [Ξext(1), Ξext(2), . . . , Ξext(N + 2k + 1)]. Thus one has to keep in mind the correspondence ξ_i = Ξext(i + k) for i = 1 − k, 2 − k, . . . , N + k + 1, or equivalently Ξext(i) = ξ_{i−k} for i = 1, 2, . . . , N + 2k + 1.

By Theorem 3.10, the restriction to [a, b] of the B-splines B_i^k, i = 1 − k, 2 − k, . . . , N, forms a basis for Sk,Ξ. In order to construct the B-spline B_i^k in Octave, one must first form the vector x which contains the knots of B_i^k, namely, x = [ξ_i, ξ_{i+1}, . . . , ξ_{i+k+1}]. This can be accomplished with the Octave command

    x=Xi_ext(i+k : i+2*k+1);

The B-spline B_i^k can then be constructed using the supplementary Octave function B_spline. The command

    C=B_spline(x);

will produce the (k + 1) × (k + 1) matrix C so that the pair (x,C) represents B_i^k|_{[ξ_i, ξ_{i+k+1}]} as a pp in Sk,x (keep in mind that B_i^k = 0 outside [ξ_i, ξ_{i+k+1}]).
Warning! Mathematically speaking, a pp s ∈ Pk,Ξ (as defined in Section 2) is a function defined on the interval [a, b]. However, in Octave one is allowed to evaluate (using pp_val) a pp s at any point x ∈ R: if x < a, then the first piece is used (i.e. s(x) := s_1(x)), while if x > b, then the last piece is used (i.e. s(x) := s_N(x)). In other words, the first piece of s is
extended all the way down to −∞ and the last piece is extended all the way up to ∞. This extension is wisely chosen and usually very convenient, except in the case of B-splines. The problem is that the B-spline B_i^k should be 0 outside the interval [ξ_i, ξ_{i+k+1}], but the first and last pieces produced by the Octave command C=B_spline(x); extend, to the left and right, as non-zero polynomials. We get around this difficulty with the command

    [X,C1]=B_spline(x);

which returns the pp (X,C1) which has an extra 0 piece at the beginning and end. Specifically, C1 is obtained from C by adding a row of zeros to the top and bottom, while X is obtained from x by adjoining an extra knot at the beginning and at the end. The Octave script ex_Bspline_extension.m illustrates this problem.

In practice, one usually first constructs the bb-form of a spline s ∈ Sk,Ξ (we discuss several methods in the next section). In order to use this spline s, it is desirable to have s in its pp-form. The supplemental Octave command

    S=bb2pp(Xi_ext,d);

returns the coefficient matrix S of the pp (Xi_ext,S) which corresponds to

    s(x) = Σ_{j=1−k}^{N} c_j B_j^k(x),

where d = [c_{1−k}, c_{2−k}, . . . , c_N]. If one prefers the restriction of this pp to the interval [a, b], then one should use instead the command

    [Xi,S]=bb2pp(Xi_ext,d,k);
5. Spline Interpolation

Let us consider the following interpolation problem: Given points (ξ_i, y_i), i = 1, 2, . . . , N + 1, we seek a nice function s which satisfies

(5.1)    s(ξ_i) = y_i for i = 1, 2, . . . , N + 1.

The function s is said to interpolate the given data. We assume that the nodes Ξ := {a = ξ_1, ξ_2, . . . , ξ_{N+1} = b} are increasing (i.e. ξ_i < ξ_{i+1}), and we desire to choose s as an element of the spline space Sk,Ξ, where k ≥ 1 is chosen to reflect the desired degree of smoothness in s.

The case k = 1, known as linear spline interpolation, is quite easy: the i-th piece of s is simply

    s_i(x) = ((y_{i+1} − y_i)/(ξ_{i+1} − ξ_i)) (x − ξ_i) + y_i.

Example. Find the linear spline s which passes through the points (1, 2), (3, 6), (4, 4), (7, 1), and write s in both pp-form and bb-form.

In addition to the nodes in Ξ, we need nodes ξ_i for i = 1 − k, 2 − k, . . . , 0 and i = N + 2, N + 3, . . . , N + 1 + k chosen so that ξ_i < ξ_{i+1} for i = 1 − k, 2 − k, . . . , N + k. For the sake of brevity, we will write the B-spline B_j^k simply as B_j. By Theorem 3.10,
the restriction to [a, b] of the B-splines B_j, j = 1 − k, 2 − k, . . . , N, forms a basis for Sk,Ξ. Hence, we can write s in the bb-form

(5.2)    s(x) = Σ_{j=1−k}^{N} c_j B_j(x),   x ∈ [a, b],

where the coefficients {c_j} are unknown at present. The condition s(ξ_i) = y_i becomes

    s(ξ_i) = Σ_{j=1−k}^{N} c_j B_j(ξ_i) = y_i,   1 ≤ i ≤ N + 1.

These equations can be written as the matrix equation

(5.3)    [ B_{1−k}(ξ_1)      B_{2−k}(ξ_1)      · · ·  B_N(ξ_1)     ] [ c_{1−k} ]   [ y_1     ]
         [ B_{1−k}(ξ_2)      B_{2−k}(ξ_2)      · · ·  B_N(ξ_2)     ] [ c_{2−k} ] = [ y_2     ]
         [     ...               ...           . . .    ...        ] [   ...   ]   [  ...    ]
         [ B_{1−k}(ξ_{N+1})  B_{2−k}(ξ_{N+1})  · · ·  B_N(ξ_{N+1}) ] [  c_N    ]   [ y_{N+1} ]

It is very important to note that since B_j(x) = 0 for all x ∉ (ξ_j, ξ_{j+k+1}), the above (N + 1) × (N + k) matrix has exactly k(N + 1) non-zero entries. Indeed, the i-th row reduces to

    [0, 0, . . . , 0, B_{i−k}(ξ_i), B_{i−k+1}(ξ_i), . . . , B_{i−1}(ξ_i), 0, 0, . . . , 0],

which has only k non-zero entries.

When k is greater than 1, the dimension of Sk,Ξ (or equivalently, the number of unknowns c_{1−k}, c_{2−k}, . . . , c_N) exceeds the number of conditions in (5.1), and it turns out that there are infinitely many splines s ∈ Sk,Ξ which satisfy the interpolation conditions (5.1). In order to arrive at a unique spline s ∈ Sk,Ξ it is necessary to impose k − 1 additional conditions. In the literature, there are four prominent ways of imposing these additional conditions; these lead to the natural spline, the complete spline, the not-a-knot spline, and the periodic spline.

The Natural Spline

Theorem 5.4. If k > 1 is an odd integer, then there exists a unique spline s ∈ Sk,Ξ which satisfies the interpolation conditions (5.1) along with the additional end conditions

    s^(ℓ)(a) = 0 = s^(ℓ)(b) for ℓ = (k + 1)/2, (k + 3)/2, . . . , k − 1.

With s written in the form (5.2), these additional end conditions become

    Σ_{j=1−k}^{N} c_j B_j^(ℓ)(a) = 0 = Σ_{j=1−k}^{N} c_j B_j^(ℓ)(b).
Adjoining these extra equations to those of (5.3) and setting m = (k + 1)/2, we arrive at the linear system

    [ B_{1−k}^{(m)}(a)    · · ·  B_N^{(m)}(a)    ]               [ 0       ]
    [       ...           . . .       ...        ]               [ ...     ]
    [ B_{1−k}^{(k−1)}(a)  · · ·  B_N^{(k−1)}(a)  ] [ c_{1−k} ]   [ 0       ]
    [ B_{1−k}(ξ_1)        · · ·  B_N(ξ_1)        ] [ c_{2−k} ]   [ y_1     ]
    [       ...           . . .       ...        ] [   ...   ] = [ ...     ]
    [ B_{1−k}(ξ_{N+1})    · · ·  B_N(ξ_{N+1})    ] [  c_N    ]   [ y_{N+1} ]
    [ B_{1−k}^{(m)}(b)    · · ·  B_N^{(m)}(b)    ]               [ 0       ]
    [       ...           . . .       ...        ]               [ ...     ]
    [ B_{1−k}^{(k−1)}(b)  · · ·  B_N^{(k−1)}(b)  ]               [ 0       ]

Let us refer to the above (N + k) × (N + k) matrix as A, and note that those additional equations associated with left end-conditions have been placed at the top of A and those associated with right end-conditions have been placed at the bottom of A. It follows from Theorems 5.4 and 3.10 that A is non-singular. Since B_j^(ℓ)(x) = 0 = B_j(x) for all x ∉ (ξ_j, ξ_{j+k+1}), it turns out that A is a banded matrix, and the above system can be solved very efficiently using Doolittle's LU decomposition for banded matrices.

Example 5.5. Find the natural cubic spline s which passes through the points (1, 2), (3, 6), (4, 4), (7, 1). This example is solved by the Octave script ex_natural.m.

The natural splines are famous for the following property:

Theorem 5.6. Let s be the natural spline as described in Theorem 5.4. If f ∈ C^{k−1}[a, b] interpolates the given data:

    f(ξ_i) = y_i,   i = 1, 2, . . . , N + 1,

then

    ∫_a^b (s^(m)(t))^2 dt ≤ ∫_a^b (f^(m)(t))^2 dt,  where m = (k + 1)/2.
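Example 5.5 can be cross-checked against an independent implementation. The sketch below uses SciPy's CubicSpline (not the course's Octave routines); its bc_type='natural' imposes exactly the k = 3 end conditions s''(a) = s''(b) = 0 of Theorem 5.4:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Natural cubic spline through the data of Example 5.5.
xi = np.array([1.0, 3.0, 4.0, 7.0])
y = np.array([2.0, 6.0, 4.0, 1.0])
s = CubicSpline(xi, y, bc_type='natural')

print(s(xi))             # reproduces the data 2, 6, 4, 1
print(s(1, 2), s(7, 2))  # second derivatives at the ends: both zero
```

Here s(x, nu) evaluates the nu-th derivative, so s(1, 2) and s(7, 2) verify the natural end conditions directly.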
The proof of this Theorem relies heavily on the following lemma:

Lemma. Let n ∈ N. If p ∈ Sn,Ξ satisfies p(a) = p'(a) = · · · = p^(n−1)(a) = 0 and p(b) = p'(b) = · · · = p^(n−1)(b) = 0, and if g ∈ C^{n+1}[a, b] satisfies g(ξ_i) = 0 for i = 1, 2, . . . , N + 1, then

    ∫_a^b p(t) g^(n+1)(t) dt = 0.
The proof of this Lemma proceeds by induction, where the case n = 1 is obtained using integration by parts:

    ∫_a^b p(t) g''(t) dt = [p(t) g'(t)]_a^b − ∫_a^b p'(t) g'(t) dt = 0 − Σ_{i=1}^{N} λ_i (g(ξ_{i+1}) − g(ξ_i)) = 0,

since p(a) = p(b) = 0 and g(ξ_i) = 0; here λ_i denotes the constant value of p' on (ξ_i, ξ_{i+1}) (recall p ∈ S1,Ξ, so p' is piecewise constant).

The proof of the Theorem is then: Let g = f − s and note that f = s + g and g(ξ_i) = 0. Hence

    ∫_a^b (f^(m)(t))^2 dt = ∫_a^b (s^(m)(t))^2 dt + ∫_a^b (g^(m)(t))^2 dt + 2 ∫_a^b s^(m)(t) g^(m)(t) dt ≥ ∫_a^b (s^(m)(t))^2 dt,

since the latter term is 0 (by the Lemma).
The Complete Spline

Theorem 5.7. If k > 1 is an odd integer, then there exists a unique spline s ∈ Sk,Ξ which satisfies the interpolation conditions (5.1) along with the additional end conditions

    s^(ℓ)(a) = y_{a,ℓ} and s^(ℓ)(b) = y_{b,ℓ} for ℓ = 1, 2, . . . , (k − 1)/2.

With s written in the form (5.2), these additional end conditions become

    Σ_{j=1−k}^{N} c_j B_j^(ℓ)(a) = y_{a,ℓ} and Σ_{j=1−k}^{N} c_j B_j^(ℓ)(b) = y_{b,ℓ}.

Adjoining these extra equations to those of the linear system (5.3) and setting m = (k + 1)/2, we arrive at

    [ B_{1−k}^{(1)}(a)    · · ·  B_N^{(1)}(a)    ]               [ y_{a,1}   ]
    [       ...           . . .       ...        ]               [ ...       ]
    [ B_{1−k}^{(m−1)}(a)  · · ·  B_N^{(m−1)}(a)  ] [ c_{1−k} ]   [ y_{a,m−1} ]
    [ B_{1−k}(ξ_1)        · · ·  B_N(ξ_1)        ] [ c_{2−k} ]   [ y_1       ]
    [       ...           . . .       ...        ] [   ...   ] = [ ...       ]
    [ B_{1−k}(ξ_{N+1})    · · ·  B_N(ξ_{N+1})    ] [  c_N    ]   [ y_{N+1}   ]
    [ B_{1−k}^{(1)}(b)    · · ·  B_N^{(1)}(b)    ]               [ y_{b,1}   ]
    [       ...           . . .       ...        ]               [ ...       ]
    [ B_{1−k}^{(m−1)}(b)  · · ·  B_N^{(m−1)}(b)  ]               [ y_{b,m−1} ]

As with the linear system obtained for the natural spline, the above (N + k) × (N + k) matrix is nonsingular and banded, and consequently, the above system can be solved very efficiently using Doolittle's LU decomposition for banded matrices.
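For k = 3 the complete spline prescribes s'(a) and s'(b), and SciPy's CubicSpline accepts exactly such first-derivative end conditions through its bc_type argument. A sketch (anticipating the data of Example 5.8; this is an independent cross-check, not the course's Octave ex_complete.m):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Complete cubic spline: interpolate the data and prescribe the first
# derivative at each end via bc_type = ((order, value), (order, value)).
xi = np.array([1.0, 3.0, 4.0, 7.0])
y = np.array([2.0, 6.0, 4.0, 1.0])
s = CubicSpline(xi, y, bc_type=((1, 0.0), (1, -2.0)))  # s'(1)=0, s'(7)=-2

print(s(xi))             # reproduces the data 2, 6, 4, 1
print(s(1, 1), s(7, 1))  # end slopes: 0 and -2
```

The pair ((1, 0.0), (1, -2.0)) reads "derivative of order 1 equals 0 at the left end, and equals −2 at the right end".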
Example 5.8. Find the complete cubic spline s which passes through the points (1, 2), (3, 6), (4, 4), (7, 1) and additionally satisfies s'(1) = 0 and s'(7) = −2. This example is solved by the Octave script ex_complete.m.

The complete splines are famous for the following property:

Theorem 5.9. Let s be the complete spline as described in Theorem 5.7. If f ∈ C^{k−1}[a, b] interpolates the given data:

    f(ξ_i) = y_i,   i = 1, 2, . . . , N + 1,

and also satisfies the end conditions

    f^(ℓ)(a) = y_{a,ℓ} and f^(ℓ)(b) = y_{b,ℓ} for ℓ = 1, 2, . . . , (k − 1)/2,

then

    ∫_a^b (s^(m)(t))^2 dt ≤ ∫_a^b (f^(m)(t))^2 dt,  where m = (k + 1)/2.

The complete splines have another property which can be used to find the bb-form of a spline given in pp-form.

Theorem 5.10. Let f ∈ Sk,Ξ and let s be the complete spline (as in Theorem 5.7) determined by the interpolation conditions

    s(ξ_i) = f(ξ_i),   i = 1, 2, . . . , N + 1,

along with the end conditions

    s^(ℓ)(a) = f^(ℓ)(a) and s^(ℓ)(b) = f^(ℓ)(b) for ℓ = 1, 2, . . . , (k − 1)/2.

Then s = f.

The not-a-knot Spline

Let k > 1 be odd and assume that N ≥ k. The not-a-knot spline is constructed by imposing the interpolation conditions (5.1) and additionally insisting that s^(k) be continuous at the first and last (k − 1)/2 interior knots in Ξ. We thus impose k − 1 additional equations, the effect of which is that these k − 1 interior knots are no longer knots of s, hence the descriptor not-a-knot. Rather than add additional equations, however, in practice we simply remove these specified knots from s directly. For this, let Ξ̃ be obtained from Ξ by removing the first and last (k − 1)/2 interior knots:

    Ξ̃ := {ξ_1, ξ_{(k+3)/2}, ξ_{(k+5)/2}, . . . , ξ_{N−(k−1)/2}, ξ_{N+1}}.
Theorem 5.11. There exists a unique spline s ∈ Sk,Ξ̃ which satisfies the interpolation conditions (5.1).

Let us write Ξ̃ =: {ξ̃_1, ξ̃_2, . . . , ξ̃_{Ñ+1}}, where Ñ = N − k + 1, and as usual we suppose we have additional knots ξ̃_i so that ξ̃_{1−k} < ξ̃_{2−k} < · · · < ξ̃_{Ñ+1+k}. With B̃_j, j = 1 − k, 2 − k, . . . , Ñ, denoting our B-spline basis for Sk,Ξ̃, we write s in the bb-form

    s(x) = Σ_{j=1−k}^{Ñ} c_j B̃_j(x).

The interpolation conditions (5.1) lead to the linear system

    [ B̃_{1−k}(ξ_1)      B̃_{2−k}(ξ_1)      · · ·  B̃_Ñ(ξ_1)     ] [ c_{1−k} ]   [ y_1     ]
    [ B̃_{1−k}(ξ_2)      B̃_{2−k}(ξ_2)      · · ·  B̃_Ñ(ξ_2)     ] [ c_{2−k} ] = [ y_2     ]
    [     ...               ...            . . .    ...        ] [   ...   ]   [  ...    ]
    [ B̃_{1−k}(ξ_{N+1})  B̃_{2−k}(ξ_{N+1})  · · ·  B̃_Ñ(ξ_{N+1}) ] [  c_Ñ    ]   [ y_{N+1} ]

As with the linear systems obtained for the natural and complete splines, the above (N + 1) × (N + 1) matrix is nonsingular and banded, and consequently, the above system can be solved very efficiently using Doolittle's LU decomposition for banded matrices.

Example 5.12. Find the not-a-knot cubic spline s which passes through the points (1, 2), (3, 6), (4, 4), (7, 1), (8, 0), and (9, 2). This example is solved by the Octave script ex_notaknot.m.

The Periodic Spline

Definition 5.13. Let k > 1 be an odd integer. The periodic spline s ∈ Sk,Ξ is obtained by imposing the interpolation conditions (5.1) along with the additional end conditions

    s^(ℓ)(a) = s^(ℓ)(b) for ℓ = 1, 2, . . . , k − 1.

With s written in the form (5.2), these additional end conditions become

    Σ_{j=1−k}^{N} c_j (B_j^(ℓ)(a) − B_j^(ℓ)(b)) = 0.
Adjoining these extra equations to those of (5.3), we arrive at the linear system

    [ B_{1−k}(ξ_1)                     · · ·  B_N(ξ_1)                     ]               [ y_1     ]
    [     ...                          . . .    ...                        ] [ c_{1−k} ]   [ ...     ]
    [ B_{1−k}(ξ_{N+1})                 · · ·  B_N(ξ_{N+1})                 ] [ c_{2−k} ] = [ y_{N+1} ]
    [ B_{1−k}^{(1)}(a) − B_{1−k}^{(1)}(b)    · · ·  B_N^{(1)}(a) − B_N^{(1)}(b)   ] [   ...   ]   [ 0       ]
    [     ...                          . . .    ...                        ] [  c_N    ]   [ ...     ]
    [ B_{1−k}^{(k−1)}(a) − B_{1−k}^{(k−1)}(b)  · · ·  B_N^{(k−1)}(a) − B_N^{(k−1)}(b) ]           [ 0       ]

The above (N + k) × (N + k) matrix fails to be banded due to the bottom k − 1 rows. However, it is still possible to efficiently solve this system (assuming it has a unique solution) using a technique called the shooting method.

Example 5.14. Find the periodic cubic spline s which passes through the points (0, 0), (1, 1), (2, 0), (3, −1), (4, 0). This example is solved by the Octave script ex_periodic.m.
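Example 5.14 can also be cross-checked independently: SciPy's CubicSpline supports bc_type='periodic' (the data must satisfy y_1 = y_{N+1}), which for k = 3 matches the end conditions of Definition 5.13. A sketch:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Periodic cubic spline through the data of Example 5.14.
xi = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.0, 0.0, -1.0, 0.0])   # y[0] == y[-1], as required
s = CubicSpline(xi, y, bc_type='periodic')

# The end conditions s^(l)(a) = s^(l)(b), l = 1, 2, hold:
print(s(0, 1) - s(4, 1))  # ~0
print(s(0, 2) - s(4, 2))  # ~0
```

For this symmetric data the periodic spline resembles one full period of a smooth wave, which is what Example 5.14 is designed to show.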
6. Error Estimates and Applications of Spline Interpolation

In the following results, we let

    h := max_{1≤i≤N} (ξ_{i+1} − ξ_i)

denote the length of the longest subinterval cut by the knot sequence {a = ξ_1, ξ_2, . . . , ξ_{N+1} = b}. For the natural spline, we have the following error estimate:

Theorem 6.1. Let k ∈ N be an odd integer, set m = (k + 1)/2, and assume that f ∈ C^m[a, b]. There exists a constant C (depending only on f and k) such that if s is the natural spline of degree k which satisfies the interpolation conditions s(ξ_i) = f(ξ_i) for i = 1, 2, . . . , N + 1, then

    max_{x∈[a,b]} |s(x) − f(x)| ≤ C h^m.
The complete spline admits a stronger error estimate:

Theorem 6.2. Let k ∈ N be an odd integer and let r be any integer satisfying (k + 1)/2 ≤ r ≤ k + 1. Assume that f ∈ C^r[a, b]. There exists a constant C (depending only on f and r) such that if s is the complete spline of degree k which satisfies the interpolation conditions

    s(ξ_i) = f(ξ_i) for i = 1, 2, . . . , N + 1,

and the end conditions

    s^(ℓ)(a) = f^(ℓ)(a) and s^(ℓ)(b) = f^(ℓ)(b) for ℓ = 1, 2, . . . , m − 1,

then

    max_{x∈[a,b]} |s(x) − f(x)| ≤ C h^r.
Remark. In case k = 1, the natural linear spline and the complete linear spline are identical to what we have been calling the linear spline interpolant. Both of the above theorems apply to the linear spline interpolant.

Example 6.3. Let f(x) = sin(x), 0 ≤ x ≤ 10. For a given N, let Ξ consist of the N + 1 equi-spaced knots from 0 to 10. Compute the quintic natural, complete, and not-a-knot spline interpolants to f at Ξ for N = 10, 20, 40, and compare the obtained accuracies.
Solution. See the Octave script ex_splinecompare.m.

Example 6.4. Construct a spline of degree 6 which approximates

    f(x) := ∫_{−10}^{x} e^{−t²} dt,   −10 ≤ x ≤ 10,

and discuss the accuracy of the obtained spline approximation.
Solution. See the Octave script ex_errorfunction.m.

Example 6.5. A discrete ODE solver is used to obtain points (ξ_i, y_i), 1 ≤ i ≤ N + 1, which are expected to be close to a solution of the ODE

    y'''(t) − cos(t) y(t) = 1/(1 + t²),   0 ≤ t ≤ 5.

Find a spline s which passes through these points and see how well it satisfies the ODE.
Solution. See the Octave script ex_checkode.m.

Example 6.6. Construct a parametric cubic spline curve which passes through the points (0, 0), (1, 1), (0, 0), (1, −1), (0, 0), (−1, −1), (0, 0), (−1, 1), (0, 0). Try it with both natural and periodic end conditions and compare the results.
Solution. See the Octave script ex_splinecurve.

The above applications were easily implemented using the following supplementary Octave commands:

    [X,S]=natural_spline(Xi,y,k)

produces the pp (X,S) which represents the natural spline interpolant as described in Theorem 5.4.

    [X,S]=complete_spline(Xi,y,k,y_a,y_b)

produces the pp (X,S) which represents the complete spline interpolant as described in Theorem 5.7.

    [X,S]=notaknot_spline(Xi,y,k)

produces the pp (X,S) which represents the not-a-knot spline interpolant as described in Theorem 5.11.

    [X,S]=periodic_spline(Xi,y,k)

produces the pp (X,S) which represents the periodic spline interpolant as described in Definition 5.13.
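The error estimates above can be observed numerically. The sketch below (Python with SciPy rather than the Octave scripts above) interpolates f(x) = sin x on [0, 10] by natural cubic splines, doubling N and watching the maximum error shrink:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Natural cubic spline interpolation of sin on [0, 10]; for k = 3 we have
# m = 2, so Theorem 6.1 predicts the max error to behave like O(h^2).
f = np.sin
t = np.linspace(0.0, 10.0, 2001)    # fine grid for estimating the max error

errs = []
for N in (10, 20, 40):
    xi = np.linspace(0.0, 10.0, N + 1)
    s = CubicSpline(xi, f(xi), bc_type='natural')
    errs.append(float(np.max(np.abs(s(t) - f(t)))))

print(errs)  # decreasing; successive ratios roughly 2^2 = 4
```

Swapping bc_type for the other end conditions reproduces the comparison asked for in Example 6.3 (for cubics rather than quintics).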
7. The Smoothing Spline

Let us consider the following approximation problem: Given points (ξ_i, y_i), i = 1, 2, . . . , N + 1, we seek a nice function s which satisfies

(7.1)    s(ξ_i) ≈ y_i for i = 1, 2, . . . , N + 1.

The function s is said to approximate the given data. We assume that the nodes Ξ := {a = ξ_1, ξ_2, . . . , ξ_{N+1} = b} are increasing (i.e. ξ_i < ξ_{i+1}), and we desire to choose s as an element of the spline space Sk,Ξ, where k ≥ 1 is an odd integer chosen to reflect the desired degree of smoothness in s.

Definition 7.2. Put m := (k + 1)/2. For λ > 0 and f ∈ C^m[a, b] we define

    Jλ(f) := ∫_a^b (f^(m)(x))² dx + λ Σ_{i=1}^{N+1} |f(ξ_i) − y_i|².
Theorem 7.3. For each λ > 0, there exists a unique spline s ∈ Sk,Ξ such that Jλ(s) ≤ Jλ(f) for all f ∈ C^m[a, b].

The spline s above is called a smoothing spline. The parameter λ controls the trade-off between the curviness of s and the degree to which the given data is approximated. Small values of λ will produce a gentle curve s which does not follow the given data very closely, whereas large values of λ will produce a curvy function s which closely follows the given data.

Solving for the smoothing spline

We now sketch how one solves for the smoothing spline s. As usual, let Ξ be extended to Ξext = {ξ_{1−k}, ξ_{2−k}, . . . , ξ_{N+1+k}} and let us write the B-spline B_j^k simply as B_j. We write s in bb-form as

    s(x) = Σ_{j=1−k}^{N} c_j B_j(x),

where the coefficients {c_j} are unknown for the moment. Our first task is to express Jλ(s) in terms of {c_j}. If G is the (N + k) × (N + k) symmetric matrix given by

    G(i, j) = ∫_a^b B_{i−k}^{(m)}(t) B_{j−k}^{(m)}(t) dt,

and X is the column vector X = [c_{1−k}, c_{2−k}, . . . , c_N]', then

    ∫_a^b (s^(m)(t))² dt = X · GX.

If A is the (N + 1) × (N + k) matrix given by A(i, j) = B_{j−k}(ξ_i), then

    Σ_{i=1}^{N+1} |s(ξ_i) − y_i|² = (AX − Y) · (AX − Y),

where Y denotes the column vector Y = [y_1, y_2, . . . , y_{N+1}]'. Thus we can write Jλ(s) as

    Jλ(s) = X · GX + λ (AX − Y) · (AX − Y).

The vector X which minimizes Jλ(s) is found by solving the equations

    ∂Jλ(s)/∂c_j = 0 for j = 1 − k, 2 − k, . . . , N.

Using the gradient operator ∇X, these equations become

(7.4)    ∇X [X · GX + λ(AX − Y) · (AX − Y)] = 0.

Proposition 7.5. If C and D are p × n matrices and b and d are p × 1 column vectors, then

    ∇X [(CX + d) · (DX + b)] = C'(DX + b) + D'(CX + d),

where C' denotes the transpose of C.

Applying Proposition 7.5 yields

    ∇X [X · GX + λ(AX − Y) · (AX − Y)] = G'X + GX + λA'(AX − Y) + λA'(AX − Y) = 2(GX + λA'(AX − Y)),

using G' = G. Substituting this into equation (7.4) and simplifying yields the following linear system:

    (G + λA'A)X = λA'Y.

The (N + k) × (N + k) matrix G + λA'A is nonsingular and banded, and consequently the above system can be efficiently solved using Doolittle's LU decomposition for banded matrices. The Octave script ex_smoothing demonstrates the smoothing spline.
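Proposition 7.5 is easy to test numerically. The Python sketch below compares the stated gradient with a finite-difference gradient for random matrices (a check of the identity itself, not of the Octave smoothing-spline code; since F is quadratic in X, the centered difference is exact up to rounding):

```python
import random

# Check: grad_X [(CX + d) . (DX + b)] = C'(DX + b) + D'(CX + d).
random.seed(0)
n = 4
C = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
D = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
d = [random.uniform(-1, 1) for _ in range(n)]
b = [random.uniform(-1, 1) for _ in range(n)]
X = [random.uniform(-1, 1) for _ in range(n)]

def mv(M, v):                       # matrix-vector product
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def F(Z):
    return dot([u + di for u, di in zip(mv(C, Z), d)],
               [w + bi for w, bi in zip(mv(D, Z), b)])

Ct = [[C[j][i] for j in range(n)] for i in range(n)]   # C'
Dt = [[D[j][i] for j in range(n)] for i in range(n)]   # D'
u = [ui + di for ui, di in zip(mv(C, X), d)]           # CX + d
w = [wi + bi for wi, bi in zip(mv(D, X), b)]           # DX + b
grad = [p + q for p, q in zip(mv(Ct, w), mv(Dt, u))]

h = 1e-6
errs = []
for j in range(n):
    Xp = list(X); Xp[j] += h
    Xm = list(X); Xm[j] -= h
    errs.append(abs((F(Xp) - F(Xm)) / (2 * h) - grad[j]))
print(max(errs))  # essentially zero
```

Taking C = D = I, d = b = 0 recovers the special case ∇X(X · GX) used in the derivation above (with G = I).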
8. Approximating Equispaced Data

Let f ∈ C(R) and h > 0. In this section we consider the problem of approximating f, given the values f(hj), j ∈ Z. In real life, of course, one would actually work on a bounded interval [a, b], but the theory is cleaner when we work on the entire real line; it is also easy to transfer our results and methods to the interval [a, b].

Definition 8.1. A function φ ∈ C(R) is compactly supported if there exists a > 0 such that φ(x) = 0 whenever |x| ≥ a. The space of all compactly supported functions φ ∈ C(R) is denoted Cc(R).

Definition 8.2. Let φ, ψ ∈ C(R) and assume that φ or ψ is compactly supported. The convolution of φ and ψ is defined by

    (φ ∗ ψ)(x) = ∫_{−∞}^{∞} φ(t) ψ(x − t) dt.

In the following theorem, we use the notation D^k to denote the k-th derivative operator; in other words, D^k φ = φ^(k).

Theorem 8.3. For φ ∈ Cc(R) and ψ, p ∈ C(R), the following hold:
(i) φ ∗ ψ = ψ ∗ φ.
(ii) φ ∗ (αψ + βp) = α(φ ∗ ψ) + β(φ ∗ p), for all α, β ∈ R.
(iii) φ ∗ ψ ∈ C(R).
(iv) If φ ∈ C^k(R), then φ ∗ ψ ∈ C^k(R) and D^k(φ ∗ ψ) = (D^k φ) ∗ ψ.
(v) If ψ ∈ C^k(R), then φ ∗ ψ ∈ C^k(R) and D^k(φ ∗ ψ) = φ ∗ (D^k ψ).

Remark. Item (i) above shows that the convolution operator is commutative. It is also associative, provided that all three functions are compactly supported. Items (ii) and (iii) show that φ∗ is a linear operator on C(R).

Definition 8.4. For φ ∈ Cc(R) and p ∈ C(R), the semi-discrete convolution of φ and p is defined by

    (φ ∗′ p)(x) = Σ_{j∈Z} p(j) φ(x − j),   x ∈ R.
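The semi-discrete convolution is easy to compute when φ is compactly supported, since only finitely many terms are nonzero. The Python sketch below uses the hat function that appears later in Example 8.8 and illustrates part (b) of that example, reproduction of linear polynomials (helper names are hypothetical):

```python
def hat(x):
    """The hat function: 1 - |x| for |x| <= 1, and 0 otherwise."""
    return max(0.0, 1.0 - abs(x))

def semi_conv(p, x, support=50):
    """(phi *' p)(x) = sum_j p(j) phi(x - j) with phi = hat.

    hat vanishes for |x| >= 1, so the truncated sum is exact for
    |x| < support."""
    return sum(p(j) * hat(x - j) for j in range(-support, support + 1))

p = lambda x: 2.0 * x + 1.0          # a polynomial in Pi_1
for x in (0.3, 4.75, -2.5):
    print(semi_conv(p, x), p(x))     # equal up to rounding
```

At each x only the two neighbouring integer translates of the hat contribute, and their weighted average reproduces p exactly when p is linear.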
Theorem 8.5. For φ, ψ ∈ Cc(R) and p, q ∈ C(R) the following hold:
(i) φ ∗′ (αp + βq) = α(φ ∗′ p) + β(φ ∗′ q), for all α, β ∈ R.
(ii) (αφ + βψ) ∗′ p = α(φ ∗′ p) + β(ψ ∗′ p), for all α, β ∈ R.
(iii) φ ∗′ p ∈ C(R).
(iv) If φ ∈ C^k(R), then φ ∗′ p ∈ C^k(R) and D^k(φ ∗′ p) = (D^k φ) ∗′ p.

Remark. Item (i) shows that p ↦ φ ∗′ p is a linear operator on C(R), while (ii) shows that φ ↦ φ ∗′ p is a linear mapping from Cc(R) into C(R).

Given the values f(j), j ∈ Z, our plan is to approximate f by s = φ ∗′ f. Of course one must carefully choose the function φ, but we'll discuss that issue momentarily. In case the given data is f(hj), j ∈ Z, then we could approximate the function f̃(x) = f(hx) (note that f̃(j), j ∈ Z, is given) by s̃ = φ ∗′ f̃. In that case, our approximation to f would be s(x) = s̃(x/h) = · · · = Σ_{j∈Z} f(hj) φ(x/h − j). With this as motivation, we make the

Definition 8.6. For φ ∈ Cc(R) and p ∈ C(R), we define

    (φ ∗′_h p)(x) = Σ_{j∈Z} p(hj) φ(x/h − j),   x ∈ R.
Remark 8.7. It is easy to see that Theorem 8.5 remains true when we replace ∗0 with ∗0h, provided the last display is written as D^k(φ ∗0h p) = h^(−k) (D^k φ) ∗0h p.

Example 8.8. Let

φ(x) := { 1 − |x|   if |x| ≤ 1,
        { 0         if |x| > 1.

(a) Find φ ∗0 f if

f(x) := { sin(πx/4)   if 0 ≤ x ≤ 4,
        { 0           if x < 0 or x > 4.

(b) Show that φ ∗0 p = p for all p ∈ Π1.

In the following we use the notation

‖f‖_R := sup_{x∈R} |f(x)|.
(sup is similar in meaning to max, except that the supremum is allowed to equal ∞ and there is no guarantee that the supremum is actually attained.)

Approximation Theorem 8.9. Let φ ∈ Cc(R) and k ∈ N0. If φ ∗0 p = p for all p ∈ Πk, then there exists a constant C (depending only on φ and k) such that for all h > 0 and f ∈ C^(k+1)(R),

‖f − φ ∗0h f‖_R ≤ C h^(k+1) ‖D^(k+1) f‖_R.

Our proof of this theorem requires the following lemma and Taylor's theorem.
Lemma 8.10. Let φ ∈ Cc(R) and k ∈ N0. If φ ∗0 p = p for all p ∈ Πk, then φ ∗0h p = p for all p ∈ Πk and h > 0.

Definition 8.11. Let k ∈ N0, c ∈ R, and let f ∈ C^k(R). The k-th degree Taylor polynomial to f at c is the polynomial p ∈ Πk given by

p(x) := Σ_{ℓ=0}^{k} (f^(ℓ)(c) / ℓ!) (x − c)^ℓ.

Taylor's Theorem 8.12. Let f ∈ C^(k+1)(R) and let p be the k-th degree Taylor polynomial to f at c. For every x ≠ c, there exists z, between x and c, such that

f(x) = p(x) + (f^(k+1)(z) / (k+1)!) (x − c)^(k+1).
Proof of Theorem 8.9. Since φ is compactly supported, there exists M ∈ N such that φ(x) = 0 whenever |x| ≥ M. It follows that for all y ∈ [−1, 1] and g ∈ C(R),

(φ ∗0 g)(y) = Σ_{j=−M}^{M} g(j)φ(y − j)

because φ(y − j) will equal 0 whenever |j| > M. Now let c ∈ R, say c ∈ [hℓ, hℓ + h) for some integer ℓ, and let p be the k-th degree Taylor polynomial to f at c. Then

|f(c) − (φ ∗0h f)(c)| = |p(c) − (φ ∗0h f)(c)| = |(φ ∗0h p)(c) − (φ ∗0h f)(c)|, by the Lemma,

= |Σ_{j∈Z} (p(hj) − f(hj)) φ(c/h − j)| ≤ Σ_{j∈Z} |p(hj + hℓ) − f(hj + hℓ)| |φ((c − hℓ)/h − j)|

= Σ_{j=−M}^{M} |p(hj + hℓ) − f(hj + hℓ)| |φ((c − hℓ)/h − j)|,  since (c − hℓ)/h ∈ [−1, 1],

= Σ_{j=−M}^{M} (|f^(k+1)(zj)| / (k+1)!) |hj + hℓ − c|^(k+1) |φ((c − hℓ)/h − j)|,  by Taylor's Theorem,

≤ (2M + 1) ((M + 1)^(k+1) / (k+1)!) ‖φ‖_R h^(k+1) ‖D^(k+1) f‖_R.

The conclusion of the theorem now follows with C = (2M + 1) ((M + 1)^(k+1) / (k+1)!) ‖φ‖_R.
Remark. Approximation Theorem 8.9 shows that if our approximation f ≈ φ ∗0h f is exact whenever f is a polynomial in Πk, then the error estimate ‖f − φ ∗0h f‖_R ≤ C h^(k+1) ‖D^(k+1) f‖_R follows. This is a common theme in approximation theory; in fact, if we look back to Theorem 6.1, we see the error estimate ‖f − s‖_[a,b] ≤ C h^m when s is the natural spline interpolant to the data function f. With the hypothesis of Approximation Theorem 8.9 in view, one might conjecture that the natural spline s equals the data function f whenever f is a polynomial in Πm−1. This conjecture is indeed true, and the reader is challenged to provide a proof (hint: use the uniqueness in Theorem 5.4). Similarly, with the complete spline, one might, on the basis of Theorem 6.2, make a similar conjecture where the space of polynomials is now Πk, and the reader should be able to prove this conjecture using Theorem 5.7.
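The h^(k+1) rate in Theorem 8.9 can also be observed experimentally. In the Python/NumPy sketch below (illustrative only; the choice f = sin is mine), φ is the hat function, which satisfies φ ∗0 p = p for p ∈ Π1 (so k = 1), and the theorem predicts that halving h should cut ‖f − φ ∗0h f‖ by roughly a factor of 2^(k+1) = 4.

```python
import numpy as np

def phi(x):
    """Hat function (reproduces Pi_1, i.e. k = 1)."""
    return np.where(np.abs(x) <= 1, 1.0 - np.abs(x), 0.0)

def approx(f, h, x):
    """Evaluate (phi *0h f)(x) = sum_j f(hj) phi(x/h - j) over the relevant j."""
    j = np.arange(int(np.min(x) / h) - 2, int(np.max(x) / h) + 3)
    return np.sum(f(h * j) * phi(x[:, None] / h - j), axis=1)

f = np.sin
x = np.linspace(0.0, 3.0, 601)
errors = [np.max(np.abs(approx(f, h, x) - f(x))) for h in (0.1, 0.05, 0.025)]
ratios = [errors[i] / errors[i + 1] for i in (0, 1)]   # expect ~4 = 2^(k+1)
```

The measured ratios hover near 4, consistent with second-order (h²) convergence for the hat function.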
9. The Strang-Fix conditions

Throughout this section we assume that k ∈ N0. We see from the above Approximation Theorem that it is desirable to have a function φ ∈ Cc(R) which satisfies φ ∗0 p = p for all p ∈ Πk. In this section we discuss the construction of such a function.

Definition 9.1. A function φ ∈ Cc(R) satisfies the Strang-Fix conditions of order k + 1 if φ ∗0 p ∈ Πk for all p ∈ Πk.

The above formulation of the Strang-Fix conditions is equivalent to the usual formulation, which is written in terms of the Fourier transform of φ (see G. Strang & G. Fix, A Fourier analysis of the finite element variational method, in "Constructive Aspects of Functional Analysis" (G. Geymonat, Ed.), pp. 793–840, C.I.M.E., 1973). We omit the proof of the following theorem because it requires the theory of tempered distributions and their Fourier transforms.

Theorem 9.2. If φ ∈ Cc(R) satisfies the Strang-Fix conditions of order k + 1, then φ ∗0 p = φ ∗ p for all p ∈ Πk.

The canonical example of a function which satisfies the Strang-Fix conditions is that of a cardinal B-spline. For that, let us employ the integer knots ξi = i, i ∈ Z, and let Bik, i ∈ Z, be the B-splines of degree k as defined in Section 3. These B-splines are called cardinal B-splines because the knots are the integers. Since the knots are equi-spaced, the B-splines Bik are all translates of B0k; indeed,

Bik(x) = B0k(x − i),   x ∈ R.
Proposition 9.3. Let k ∈ N. The cardinal B-spline B0k satisfies the Strang-Fix conditions of order k + 1.
Proof. (by induction on k) The base case k = 1 is covered by our Partition of Unity theorem and a homework problem. For the induction step, assume the proposition is true for k − 1 and consider k. Let q ∈ Πk and note that

(d/dx)(B0k ∗0 q)(x) = Σ_j (q(j) − q(j − 1)) Bj^(k−1)(x) = (B0^(k−1) ∗0 p)(x),

where p(x) := q(x) − q(x − 1). Since p ∈ Πk−1, we have by the induction hypothesis that B0^(k−1) ∗0 p is a polynomial in Πk−1. Since the derivative of B0k ∗0 q belongs to Πk−1, it follows that B0k ∗0 q belongs to Πk.

Proposition 9.4. Suppose ψ ∈ Cc(R) satisfies the Strang-Fix conditions of order k + 1. Then the function φ, defined by

φ(x) := Σ_{j=1}^{n} λj ψ(x − τj),

also satisfies the Strang-Fix conditions of order k + 1.

Proof. HW #3.

The upshot of the above propositions is that any function φ of the form

φ(x) := Σ_{j=1}^{n} λj B0k(x − τj)

will satisfy the Strang-Fix conditions of order k + 1, and consequently we will have φ ∗0 p = φ ∗ p for all p ∈ Πk. We can then focus our efforts on choosing the scalars {λj} and translation points {τj} to achieve φ ∗ p = p for all p ∈ Πk.

Example 9.5. Show that if φ ∈ Cc(R) and p ∈ Πk, then φ ∗ p is also a polynomial in Πk.

Lemma 9.6. Let φ ∈ Cc(R) and let p ∈ Πℓ be defined by p(x) = x^ℓ. Then φ ∗ p = p if and only if

∫_{−∞}^{∞} t^r φ(t) dt = δr,0   for r = 0, 1, . . . , ℓ,

where δi,j := { 1 if i = j, 0 if i ≠ j } denotes the Kronecker δ-function.
Theorem 9.7. Let φ ∈ Cc(R) and let q ∈ Πk be defined by q(x) = x^k. The following are equivalent:
(i) φ ∗ p = p for all p ∈ Πk.
(ii) φ ∗ q = q.
(iii) ∫_{−∞}^{∞} t^ℓ φ(t) dt = δℓ,0 for ℓ = 0, 1, . . . , k.
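Theorem 9.7 reduces polynomial reproduction under ∗ to moment conditions, which are easy to inspect numerically. The Python sketch below (the Cox–de Boor recursion and all grid sizes are my own choices, not from the course code) evaluates the cardinal B-spline B0k and checks two facts: Σ_j B0k(x − j) = 1 (the partition of unity behind the base case of Proposition 9.3), and the moments m0 = 1 but m1 = (k+1)/2 ≠ 0, so B0k alone reproduces constants but not Π1; this is what the upcoming construction with translates repairs.

```python
import numpy as np

def B(k, x):
    """Cardinal B-spline of degree k with knots 0, 1, ..., k+1 (Cox-de Boor)."""
    if k == 0:
        return np.where((x >= 0) & (x < 1), 1.0, 0.0)
    return (x * B(k - 1, x) + (k + 1 - x) * B(k - 1, x - 1)) / k

k = 3
t = np.linspace(-1.0, k + 2.0, 60001)     # grid covering supp B0k = [0, k+1]
dt = t[1] - t[0]
m0 = np.sum(B(k, t)) * dt                  # mean of B0k: equals 1
m1 = np.sum(t * B(k, t)) * dt              # first moment: (k+1)/2, not 0

# partition of unity: sum_j B0k(x - j) = 1 on [0, 1]
x = np.linspace(0.0, 1.0, 101)
pou = sum(B(k, x - j) for j in range(-k - 1, 2))
```

(The Riemann sums equal trapezoidal sums here because the integrands vanish at the ends of the grid.)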
Definition 9.8. The mean of a function φ ∈ Cc(R) is the quantity ∫_{−∞}^{∞} φ(t) dt. The function φ is said to have nonzero mean if ∫_{−∞}^{∞} φ(t) dt ≠ 0.
Our proof of the next theorem makes essential use of the following result about polynomials.

Basic Fact 9.9. If p is a polynomial of degree k and τ1, τ2, . . . , τk+1 ∈ R are distinct, then the functions p(x − τ1), p(x − τ2), . . . , p(x − τk+1) form a basis for Πk. Consequently, for every polynomial q ∈ Πk, there exist unique scalars λ1, λ2, . . . , λk+1 such that

q(x) = Σ_{j=1}^{k+1} λj p(x − τj),   x ∈ R.
Theorem 9.10. Assume ψ ∈ Cc(R) has nonzero mean. If τ1, τ2, . . . , τk+1 ∈ R are distinct, then there exist unique scalars λ1, λ2, . . . , λk+1 such that

φ(x) := Σ_{j=1}^{k+1} λj ψ(x − τj)

satisfies φ ∗ p = p for all p ∈ Πk.

The above results lead to the following

Corollary 9.11. Let k ∈ N0 and let ψ ∈ Cc(R) have nonzero mean and satisfy the Strang-Fix conditions of order k + 1. If τ1, τ2, . . . , τk+1 ∈ R are distinct, then there exist unique scalars λ1, λ2, . . . , λk+1 such that

φ(x) := Σ_{j=1}^{k+1} λj ψ(x − τj)

satisfies φ ∗0 p = p for all p ∈ Πk. Moreover, the scalars λ1, λ2, . . . , λk+1 are uniquely determined by the equations

∫_{−∞}^{∞} t^ℓ φ(t) dt = δℓ,0   for ℓ = 0, 1, . . . , k.
Note that the equation ∫_{−∞}^{∞} t^ℓ φ(t) dt = δℓ,0 can be written in terms of the unknowns {λj} as

Σ_{j=1}^{k+1} λj ∫_{−∞}^{∞} t^ℓ ψ(t − τj) dt = δℓ,0.
If A is the (k + 1) × (k + 1) matrix defined by

A(i, j) := ∫_{−∞}^{∞} t^(i−1) ψ(t − τj) dt,   1 ≤ i, j ≤ k + 1,

then the unknowns {λj} are determined by the linear system

A (λ1, λ2, . . . , λk+1)^T = (1, 0, . . . , 0)^T.
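As a small concrete instance of this linear system (Python sketch; the choices ψ = ψ1, the centered hat function, k = 1 and τ = (−1/2, 1/2) are mine, for illustration): the moments are ∫ ψ1(t − τj) dt = 1 and ∫ t ψ1(t − τj) dt = τj, so A = [[1, 1], [−1/2, 1/2]] and the solve gives λ1 = λ2 = 1/2.

```python
import numpy as np

def psi1(t):
    """Centered hat function psi_1 (B-spline of degree 1 with knots -1, 0, 1)."""
    return np.where(np.abs(t) <= 1, 1.0 - np.abs(t), 0.0)

tau = np.array([-0.5, 0.5])
t = np.linspace(-3.0, 3.0, 120001)                 # quadrature grid
dt = t[1] - t[0]
A = np.array([[np.sum(t**i * psi1(t - tj)) * dt for tj in tau]
              for i in range(2)])                  # A(i+1, j+1) = int t^i psi1(t - tau_j) dt
lam = np.linalg.solve(A, np.array([1.0, 0.0]))     # expect [1/2, 1/2]
```

By Corollary 9.11, the resulting φ(x) = (ψ1(x + 1/2) + ψ1(x − 1/2))/2 then satisfies φ ∗0 p = p for all p ∈ Π1.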
10. A B-spline construction of φ

Throughout this section we assume that k ∈ N. As indicated by Corollary 9.11, we seek a function ψ ∈ Cc(R) which has nonzero mean and satisfies the Strang-Fix conditions of order k + 1.

Definition 10.1. The centered cardinal B-spline of degree k, denoted ψk, is defined by

ψk(x) := B0k(x + m),   where m = (k + 1)/2

and B0k is the B-spline of degree k having knots 0, 1, 2, . . . , k + 1.

We could also have described ψk as the B-spline of degree k having equi-spaced knots −m, −m + 1, . . . , m. Since these knots are symmetric about the origin, it follows that ψk is an even function (i.e. ψk(−x) = ψk(x)). We also note that ψk is supported on the interval [−m, m] and belongs to C^(k−1)(R). Since k ≥ 1 (i.e. k = 0 is excluded), we have ψk ∈ Cc(R). Moreover, it follows from Proposition 9.3 and Proposition 9.4 that ψk satisfies the Strang-Fix conditions of order k + 1. Lastly, the mean of ψk is nonzero (in fact positive) because ψk is positive on (−m, m) and 0 otherwise.

We can now follow Corollary 9.11 and the ensuing discussion to construct our function φ in the form

φ(x) = Σ_{j=1}^{k+1} λj ψk(x − τj).

Two recommended choices of {τj} when k is even

Using Octave notation, we mention two recommended choices of the numbers {τj} when k is even: tau = linspace( -k/2, k/2, k+1) and tau = linspace( -1/2, 1/2, k+1). If one adopts the first choice, then the obtained pp φ will have the 2(k + 1) knots {−k − 1/2, −k + 1/2, . . . , k + 1/2}, but φ will be supported on the (large) interval [−k − 1/2, k + 1/2]. Whereas if one adopts the second choice, then φ will be supported on the (small) interval [−m − 1/2, m + 1/2], but will have the (k + 1)² knots {−m − 1/2, −m − 1/2 + 1/k, . . . , m + 1/2}. The main difference between these choices is that the first choice produces a pp φ having few knots but large support, whereas the second choice produces a pp φ having more knots but a smaller support.
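The even-k construction can be carried out end-to-end numerically. The Python sketch below (illustrative only; it is not the course's Octave script) takes k = 2 with the second recommended choice tau = linspace(-1/2, 1/2, k+1), builds A from moments of ψ2 by quadrature, solves for λ, and verifies that the resulting φ satisfies φ ∗0 p = p for a quadratic p.

```python
import numpy as np

def B(k, x):
    """Cardinal B-spline of degree k with knots 0, 1, ..., k+1 (Cox-de Boor)."""
    if k == 0:
        return np.where((x >= 0) & (x < 1), 1.0, 0.0)
    return (x * B(k - 1, x) + (k + 1 - x) * B(k - 1, x - 1)) / k

k = 2
psi = lambda t: B(k, t + (k + 1) / 2.0)            # centered B-spline psi_k
tau = np.linspace(-0.5, 0.5, k + 1)                # second recommended choice

t = np.linspace(-4.0, 4.0, 160001)                 # quadrature grid
dt = t[1] - t[0]
A = np.array([[np.sum(t**i * psi(t - tj)) * dt for tj in tau]
              for i in range(k + 1)])
lam = np.linalg.solve(A, np.eye(k + 1)[0])         # lambda = (-1/2, 2, -1/2)

phi = lambda x: sum(lj * psi(x - tj) for lj, tj in zip(lam, tau))

p = lambda x: x**2                                  # test polynomial in Pi_2
x = np.linspace(-2.0, 2.0, 81)
j = np.arange(-8, 9)
s = np.sum(p(j.astype(float)) * phi(x[:, None] - j), axis=1)
err = np.max(np.abs(s - p(x)))                      # ~0 up to quadrature error
```

Solving the 3 × 3 moment system by hand gives λ = (−1/2, 2, −1/2), which the code reproduces.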
Example 10.2. Let k = 4 and ψ := ψk. Construct the function φ in Corollary 9.11 using (a) the values τ1 = −2, τ2 = −1, τ3 = 0, τ4 = 1, τ5 = 2, and (b) the values τ1 = −1/2, τ2 = −1/4, τ3 = 0, τ4 = 1/4, τ5 = 1/2. Plot both functions together.

Solution. See the Octave script ex_phifour.m.

Two recommended choices of {τj} when k is odd

Using Octave notation, we mention two recommended choices of the numbers {τj} when k is odd: tau = linspace( -(k-1)/2, (k+1)/2, k+1) and tau = linspace( -1/2, 1/2 + 1/(k-1), k+1).

Bonus Problem 10.3. Prove that with either of the above choices, one will have λk+1 = 0, and hence

φ(x) = Σ_{j=1}^{k} λj ψk(x − τj),   x ∈ R.

With the conclusion of the above bonus problem in mind, if one adopts the first choice, then the obtained pp φ will have the 2k + 1 knots {−k, −k + 1, . . . , k}, but φ will be supported on the (large) interval [−k, k]. Whereas if one adopts the second choice, then φ will be supported on the (small) interval [−m − 1/2, m + 1/2], but will have the (k + 2)(k − 1) + 1 knots {−m − 1/2, −m − 1/2 + 1/(k − 1), . . . , m + 1/2}.

Example 10.4. Let k = 5 and ψ := ψk. Construct the function φ in Corollary 9.11 using (a) the values τ1 = −2, τ2 = −1, τ3 = 0, τ4 = 1, τ5 = 2, τ6 = 3, and (b) the values τ1 = −1/2, τ2 = −1/4, τ3 = 0, τ4 = 1/4, τ5 = 1/2, τ6 = 3/4. Plot both functions together.

Solution. See the Octave script ex_phifive.m.
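Bonus Problem 10.3 can be sanity-checked numerically before attempting a proof. In the Python sketch below (illustrative assumptions as before), k = 3 with the first recommended odd choice tau = linspace(-1, 2, 4); solving the moment system yields λ ≈ (−1/6, 4/3, −1/6, 0), so λk+1 does indeed vanish.

```python
import numpy as np

def B(k, x):
    """Cardinal B-spline of degree k with knots 0, 1, ..., k+1 (Cox-de Boor)."""
    if k == 0:
        return np.where((x >= 0) & (x < 1), 1.0, 0.0)
    return (x * B(k - 1, x) + (k + 1 - x) * B(k - 1, x - 1)) / k

k = 3
psi = lambda t: B(k, t + (k + 1) / 2.0)                  # centered cubic B-spline
tau = np.linspace(-(k - 1) / 2.0, (k + 1) / 2.0, k + 1)  # first odd choice: -1, 0, 1, 2

t = np.linspace(-4.0, 5.0, 180001)                       # covers supp psi(. - tau_j)
dt = t[1] - t[0]
A = np.array([[np.sum(t**i * psi(t - tj)) * dt for tj in tau]
              for i in range(k + 1)])
lam = np.linalg.solve(A, np.eye(k + 1)[0])               # last entry ~ 0
```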
11. Numerical Considerations and Applications

Let k ∈ N0 and assume that φ ∈ Cc(R) satisfies φ ∗0 p = p for all p ∈ Πk. As demonstrated in Theorem 8.9, one can effectively approximate f with

s(x) = (φ ∗0h f)(x) = Σ_{j∈Z} f(hj)φ(x/h − j).
Since φ is compactly supported, for each x ∈ R, the above sum is a finite sum. From a numerical point of view, it is important to know precisely which terms in the above sum can be nonzero.
Proposition 11.1. Assume that φ is supported on the interval [c, d] (i.e. φ(t) = 0 for all t ∉ (c, d)). If x ∈ R, then

s(x) = Σ_{j=⌊x/h−d⌋+1}^{⌈x/h−c⌉−1} f(hj) φ(x/h − j).

Example. Suppose that the spline φ has Octave representation [X,S] (in pp form). Assuming that the values f(hj) are available from the Octave function f.m, write Octave code which, given x, will compute y = (φ ∗0h f)(x).

Example. Consider the problem of approximating the integral I(f) = ∫_{−1}^{1} f(x) dx.
(a) Describe how one could construct a quadrature rule of the form

Q(f) = Σ_{j=jmin}^{jmax} wj f(hj)

which is exact for all polynomials in Π4.
(b) Is your quadrature rule from part (a) exact for Π5?

Example 11.2. Design a second derivative module for use in a digital control system according to the following description: A certain quantity f(t) is sampled at integer times, and at time t = n, the module will know the values . . . , f(n−2), f(n−1), f(n). Based on this data, the module should output s″(n) and s″(n + 1/2), where s = φ ∗0 f.

Comment: Use the function

φ = λ1 ψ3(· − 3/2) + λ2 ψ3(· − 3/2 − 1/3) + λ3 ψ3(· − 3/2 − 2/3) + λ4 ψ3(· − 5/2),

which is supported on [−0.5, 4.5]. The solution has the form

[ s″(n)     ]   [ φ″(4)   φ″(3)   φ″(2)   φ″(1)   φ″(0)   ]   [ f(n−4) ]
[ s″(n+1/2) ] = [ φ″(4.5) φ″(3.5) φ″(2.5) φ″(1.5) φ″(0.5) ] · [ f(n−3) ]
                                                              [ f(n−2) ]
                                                              [ f(n−1) ]
                                                              [ f(n)   ]
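The first Example above asks for Octave code; here is the same logic as a Python sketch (the data function f and the hat-function φ are placeholders — in practice φ would be evaluated from its pp representation [X,S]). It is a direct transcription of the finite sum in Proposition 11.1.

```python
import math

def phi(t):
    """Placeholder phi: hat function, supported on [c, d] = [-1, 1]."""
    return 1.0 - abs(t) if abs(t) <= 1 else 0.0

def f(x):
    """Placeholder for the data function f.m."""
    return math.sin(x)

def s(x, h, c=-1.0, d=1.0):
    """Evaluate y = (phi *0h f)(x) via the finite sum of Proposition 11.1."""
    jlo = math.floor(x / h - d) + 1
    jhi = math.ceil(x / h - c) - 1
    return sum(f(h * j) * phi(x / h - j) for j in range(jlo, jhi + 1))

y = s(1.0, 0.1)    # with the hat phi this is broken-line interpolation of f
```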
So far we have been assuming that the function f is defined on R and values f(hj) are available for all j ∈ Z. In practice, however, it often happens that f(hj) is only available when hj belongs to some interval [a, b]. A simple solution to this difficulty is to extend f according to

f̃(x) = { pa(x)   if x < a,
       { f(x)    if x ∈ [a, b],
       { pb(x)   if x > b,

where pa (resp. pb) is the polynomial in Πk which interpolates f at the first (resp. last) k + 1 nodes in hZ ∩ [a, b]. One then approximates f by s̃ = φ ∗0h f̃. It is possible to prove that the function s̃ provides good approximation to f on the interval [a, b]. Specifically, one can prove that if φ ∈ Cc(R) satisfies φ ∗0 p = p for all p ∈ Πk, then

‖f − s̃‖_[a,b] ≤ C h^(k+1) ‖D^(k+1) f‖_[a,b],

provided f ∈ C^(k+1)[a, b].

Example. Find f̃ if k = 2, h = 1 and f(x) = 1/(36 − x²), −5 ≤ x ≤ 5.
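For the Example just above (k = 2, h = 1), the extension can be computed with a short Python sketch (`numpy.polyfit`/`polyval` build and evaluate the interpolating quadratics pa and pb; a degree-2 fit through exactly 3 points is interpolation):

```python
import numpy as np

a, b, h, k = -5.0, 5.0, 1.0, 2
f = lambda x: 1.0 / (36.0 - x**2)

nodes = np.arange(a, b + h, h)                             # -5, -4, ..., 5
pa = np.polyfit(nodes[:k + 1], f(nodes[:k + 1]), k)        # interpolates f at -5, -4, -3
pb = np.polyfit(nodes[-(k + 1):], f(nodes[-(k + 1):]), k)  # interpolates f at 3, 4, 5

def f_ext(x):
    """The extension f-tilde: f on [a, b], quadratics pa, pb outside."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.empty_like(x)
    left, right = x < a, x > b
    mid = ~(left | right)
    out[left] = np.polyval(pa, x[left])
    out[right] = np.polyval(pb, x[right])
    out[mid] = f(x[mid])
    return out
```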
12. The Fourier Transform

Definition 12.1. A function f : R → R is said to be integrable if

∫_R |f(t)| dt < ∞.
For an integrable function f, the Fourier transform of f, denoted f̂ or F f, is defined by

f̂(w) := ∫_R e^(−iwt) f(t) dt,   w ∈ R,

where e^(iθ) = cos θ + i sin θ.

Basic Properties 12.2. Let f and g be integrable functions and let a, b ∈ R. The following hold:
(i) F(af + bg) = a f̂ + b ĝ (i.e. F is a linear transformation).
(ii) Writing f = fe + fo, where fe(t) = (1/2)(f(t) + f(−t)) is the even part of f and fo(t) = (1/2)(f(t) − f(−t)) is the odd part of f, we have

f̂(w) = 2 ∫_0^∞ cos(wt) fe(t) dt − 2i ∫_0^∞ sin(wt) fo(t) dt.
Example 12.3. Let f(t) = e^(−|t|), t ∈ R. Show that f is integrable and that

f̂(w) = 2/(1 + w²).
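Example 12.3 is easy to confirm by numerical quadrature (Python sketch; truncating the integral to [−40, 40] is harmless since e^(−|t|) < 10^(−17) beyond that):

```python
import numpy as np

t = np.linspace(-40.0, 40.0, 400001)     # t = 0 lands on a grid node
dt = t[1] - t[0]
f = np.exp(-np.abs(t))

def fhat(w):
    """Truncated quadrature for the Fourier transform of exp(-|t|)."""
    return np.sum(np.exp(-1j * w * t) * f) * dt

w = np.array([0.0, 1.0, 2.5])
err = np.max(np.abs(np.array([fhat(wi) for wi in w]) - 2.0 / (1.0 + w**2)))
```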
Proposition 12.4. If f, g are integrable, then so is the convolution f ∗ g.
Theorem 12.5. For an integrable function f and constants τ ∈ R, h > 0, the following hold:
(i) F[f(· + τ)](w) = e^(iτw) f̂(w).
(ii) F[f(h·)](w) = h^(−1) f̂(w/h).
(iii) f̂ is a continuous (complex-valued) function and lim_{|w|→∞} f̂(w) = 0.
(iv) F(f ∗ g) = f̂ ĝ.
Example 12.6. Let φ, f ∈ Cc(R) and h > 0, and put s := φ ∗0h f. Show that φ and s are integrable and

ŝ(w) = h φ̂(hw) Σ_{j∈Z} f(hj) e^(−ij(hw)).
Derivative Property 12.7. If f is absolutely continuous and f and f′ are integrable, then F(f′)(w) = iw f̂(w), w ∈ R.

Inversion Formula 12.8. Let f : R → C be integrable. If f̂ is also integrable, then f is continuous and

f(t) = (1/(2π)) ∫_R e^(itw) f̂(w) dw,   t ∈ R.
A great deal of insight into the purpose of the Fourier transform can be obtained from a detailed look at the above inversion formula. For demonstration purposes, let's take our function f to be the centered B-spline ψ1. Since ψ1 is an even function, we can write

ψ̂1(w) = 2 ∫_0^∞ cos(wt) ψ1(t) dt = 2 ∫_0^1 cos(wt)(1 − t) dt = (4/w²) sin²(w/2).
Electrical engineers adopt the viewpoint that the variable t is time, f(t) is a signal, w is frequency, and f̂(w) corresponds to the amount of frequency w contained in the signal f. In order to make this a little more precise, let's consider a discrete set of frequencies wj = hj, j = 0, 1, 2, . . . . The integral in the inversion formula can be approximated by a Riemann sum, in which case we write

f(t) ≈ (1/(2π)) ( f̂(0) h + 2 Σ_{j=1}^{∞} h f̂(wj) cos(wj t) ).

Thus we see that the signal f(t) has been approximated by a linear combination of cosine functions having frequencies w0, w1, . . . and corresponding amplitudes proportional to h f̂(wj). As h approaches 0, our resolution of the frequency domain (i.e. the variable w) becomes finer and our approximation of f(t) above approaches equality.
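This Riemann-sum picture can be tested directly in a Python sketch (illustrative; I keep the 1/(2π) factor from Inversion Formula 12.8 and use ψ̂1(w) = (4/w²) sin²(w/2) computed above, with ψ̂1(0) = 1):

```python
import numpy as np

h = 0.01
wj = h * np.arange(1, 200001)                  # frequencies h, 2h, ..., up to 2000
fhat = 4.0 * np.sin(wj / 2.0) ** 2 / wj**2     # psi1-hat at the positive frequencies

def recover(t):
    """Riemann-sum approximation of the inversion formula for the even psi_1."""
    return (h * 1.0 + 2.0 * np.sum(h * fhat * np.cos(wj * t))) / (2.0 * np.pi)

vals = [recover(t) for t in (0.0, 0.5, 2.0)]   # hat function: expect ~1, ~1/2, ~0
```

The reconstruction error here is dominated by the truncation of the frequency sum (the tail of 4/w² beyond w = 2000), so the recovered values match ψ1 to a few parts in ten thousand.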