Lecture 7: Fourier Series

Periodic Functions

Fourier series are very useful to represent or approximate periodic functions. Functions that are not periodic can be considered periodic with infinite period, and can be represented with Fourier integrals instead of Fourier series. A function f(t) is said to be periodic with period T if

f(t + T) = f(t)   for any t.

Based on this definition, periodic functions of period T are also periodic with period 2T, 3T, ..., nT:

f(t + nT) = f(t + (n-1)T) = f(t + (n-2)T) = ... = f(t).

The smallest period is called the fundamental period of the function. A periodic function can be represented as a sum of sines and cosines whose fundamental period is the same as that of the function, or an integer fraction T/n of it. We can write the sines and cosines of fundamental period T/n as

cos(2πnt/T),
sin(2πnt/T).

Of course all these functions also have period T. Since the fundamental period T/n can be made as small as we want (just take n larger), we can use these functions to represent periodic functions of fundamental period as small as we like, or functions with rather sharp features. Let's visualize the sum of some of these functions with Mathematica:
<< Graphics`; T = 1;
Plot[{Sin[2 Pi t/T], Sin[4 Pi t/T], Sin[6 Pi t/T]}, {t, 0, T}, PlotStyle -> {Red, Blue, Purple}]
[Plot: the three sine modes over one period, 0 <= t <= T]
<< Graphics`; T = 1;
Plot[{Sin[2 Pi t/T] + Sin[4 Pi t/T] + Sin[6 Pi t/T]}, {t, 0, T}]
[Plot: the sum of the three sine modes over one period, 0 <= t <= T]
So with just three sine functions we can represent this rather complex function, which of course is periodic with fundamental period T, as you can easily verify by plotting it over a larger interval:
<< Graphics`; T = 1;
Plot[{Sin[2 Pi t/T] + Sin[4 Pi t/T] + Sin[6 Pi t/T]}, {t, 0, 3 T}]
[Plot: the same sum over three periods, 0 <= t <= 3T]
We call a Fourier series a general linear combination of these sines and cosines:

a_0 + \sum_{n=1}^{\infty} \left[ a_n \cos(2\pi n t/T) + b_n \sin(2\pi n t/T) \right]
The constants a_n and b_n are called Fourier coefficients, and the functions cos(2πnt/T) and sin(2πnt/T) are called Fourier modes. Since these sines and cosines have period T (even if that is not the fundamental period for all of them), the series also has period T, so it is appropriate to represent a function of period T. As you can already guess, if you play with the values of the Fourier coefficients, you can get the series to take different functional forms. If you use many modes, you can approximate many different functions of period T with such a series. This is actually guaranteed by a theorem that tells you when this is going to work for sure (it is a sufficient condition). First let's define a sectionally continuous periodic function: "A sectionally continuous periodic function is one that is continuous in finite-size sections, with at most a finite number of discontinuities in one period of the function." Then the theorem:
If a periodic function f(t) is continuous, and its derivative is nowhere infinite and is sectionally continuous, then it is possible to construct a Fourier series that converges uniformly to f(t) for all t:
f(t) = a_0 + \sum_{n=1}^{\infty} \left[ a_n \cos(2\pi n t/T) + b_n \sin(2\pi n t/T) \right]   for all t.
Uniform convergence means that the series converges to the function f(t) for every value of t. So we say that for a uniformly convergent series, for any small number ε independent of t, we can always find a value of M (the number of Fourier modes) such that

\left| f_{approx}(t, M) - f(t) \right| < \epsilon

for any value of t.
We will see later what we mean by this when we discuss the Gibbs phenomenon.
Orthogonality

We could try to play with hundreds of Fourier coefficients until we get a decent representation of the function f(t), but that would be a silly thing to do. It's much better to find a general way to define those coefficients, based on the given function. The trick is based on an important property of the sines and cosines called orthogonality. In general, two functions g(t) and h(t) are said to be orthogonal on the interval [a, b] if

\int_a^b g(t)\, h(t)\, dt = 0.

Our Fourier modes satisfy this property on the interval [t_0, t_0 + T] for any t_0, as can be easily seen:
\int_{t_0}^{t_0+T} \sin(2\pi n t/T) \sin(2\pi m t/T)\, dt = \int_{t_0}^{t_0+T} \cos(2\pi n t/T) \cos(2\pi m t/T)\, dt = 0,   m \neq n,

\int_{t_0}^{t_0+T} \sin(2\pi n t/T) \cos(2\pi m t/T)\, dt = 0.
Notice that for the product of sine and cosine we can include the case m = n, while for the products sin*sin and cos*cos we must exclude the case m = n, because a real function cannot be orthogonal to itself: for any real function g(t) that is nonzero on a finite range within [a, b], \int_a^b g^2(t)\, dt must be greater than zero. This follows simply because g^2(t) \geq 0, so there is a finite positive area under the g^2(t) curve. It is important to consider the result for m = n, because that will be the leftover when we make use of the orthogonality to find the Fourier coefficients:
\int_{t_0}^{t_0+T} \sin^2(2\pi n t/T)\, dt = \int_{t_0}^{t_0+T} \cos^2(2\pi n t/T)\, dt = T/2,   n > 0,

\int_{t_0}^{t_0+T} \cos^2(2\pi \cdot 0 \cdot t/T)\, dt = T,

\int_{t_0}^{t_0+T} \sin^2(2\pi \cdot 0 \cdot t/T)\, dt = 0.
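These relations are easy to check directly in Mathematica. Here is a minimal sketch (not part of the original notebook), assuming integer n and m, a symbolic period T > 0, and t_0 = 0 for brevity; the three integrals should come out as 0, 0 and T/2, matching the relations above.

(* Check of the orthogonality and normalization relations, under explicit assumptions. *)
Integrate[Sin[2 Pi n t/T] Sin[2 Pi m t/T], {t, 0, T},
  Assumptions -> Element[{n, m}, Integers] && n != m && T > 0]   (* expect 0 *)
Integrate[Sin[2 Pi n t/T] Cos[2 Pi m t/T], {t, 0, T},
  Assumptions -> Element[{n, m}, Integers] && T > 0]             (* expect 0 *)
Integrate[Sin[2 Pi n t/T]^2, {t, 0, T},
  Assumptions -> Element[n, Integers] && n > 0 && T > 0]         (* expect T/2 *)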
Finally we are ready to compute our Fourier coefficients for

f(t) = a_0 + \sum_{n=1}^{\infty} \left[ a_n \cos(2\pi n t/T) + b_n \sin(2\pi n t/T) \right].
We first multiply both sides by cos(2πmt/T) and integrate over one period, from t_0 to t_0 + T for some choice of t_0. That will eliminate all sines and all cosines apart from the cosines with n = m, and so will give us the a coefficients. Then we multiply both sides by sin(2πmt/T) and integrate over one period, from t_0 to t_0 + T for some choice of t_0. That will eliminate all cosines and all sines apart from the sines with n = m, and so will give us the b coefficients:
\int_{t_0}^{t_0+T} f(t) \cos(2\pi m t/T)\, dt = \sum_{n=0}^{\infty} a_n \int_{t_0}^{t_0+T} \cos(2\pi n t/T) \cos(2\pi m t/T)\, dt + \sum_{n=1}^{\infty} b_n \int_{t_0}^{t_0+T} \sin(2\pi n t/T) \cos(2\pi m t/T)\, dt

Thus:

a_0 = \frac{1}{T} \int_{t_0}^{t_0+T} f(t)\, dt,

a_m = \frac{2}{T} \int_{t_0}^{t_0+T} f(t) \cos(2\pi m t/T)\, dt,   m > 0,

and

\int_{t_0}^{t_0+T} f(t) \sin(2\pi m t/T)\, dt = \sum_{n=0}^{\infty} a_n \int_{t_0}^{t_0+T} \cos(2\pi n t/T) \sin(2\pi m t/T)\, dt + \sum_{n=1}^{\infty} b_n \int_{t_0}^{t_0+T} \sin(2\pi n t/T) \sin(2\pi m t/T)\, dt

Thus:

b_m = \frac{2}{T} \int_{t_0}^{t_0+T} f(t) \sin(2\pi m t/T)\, dt,   m > 0.
The problem is solved! We have found all the Fourier coefficients, so now the Fourier series is completely defined. Of course the coefficients depend on the function itself (you cannot approximate the function without any information on its shape...), so we will have to solve those integrals of the function in order to find its Fourier approximations. For simple functions those integrals will be easy enough; for complicated functions they may be pretty hard. In any case, we can always rely on Mathematica, using the built-in function Integrate if there is an analytic primitive of the function, or NIntegrate otherwise, for a numerical integration (a short numerical sketch follows). Let's now apply these formulas to two examples: the triangle wave and the square wave.
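Before turning to the examples, here is a minimal sketch of the numerical route (not part of the original notebook): g is a hypothetical smooth periodic function chosen purely for illustration, and its coefficients are obtained with NIntegrate exactly as in the formulas above, with t_0 = 0.

(* Numerical Fourier coefficients of an illustrative function g of period T. *)
T = 1; g[t_] := Exp[Cos[2 Pi t/T]];              (* hypothetical example function *)
aNum[0] = (1/T) NIntegrate[g[t], {t, 0, T}];
aNum[m_] := (2/T) NIntegrate[g[t] Cos[2 Pi m t/T], {t, 0, T}]
bNum[m_] := (2/T) NIntegrate[g[t] Sin[2 Pi m t/T], {t, 0, T}]
Table[{m, aNum[m], bNum[m]}, {m, 1, 4}]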
The Triangle Wave

We will proceed with Mathematica, so we can plot the results immediately. The triangle wave is a periodic function with the shape of a triangle. Notice that this function is continuous and has a sectionally continuous (though not continuous) first derivative, which should be enough for uniform convergence of its Fourier representation. First we define the triangle:
f[t_] := 2 t/T /; 0 <= t < T/2;
f[t_] := 2 - 2 t/T /; T/2 <= t < T;
T = 1; Plot[f[t], {t, 0, T}]
[Plot: the triangle over one period, 0 <= t <= T]
Then we tell Mathematica to repeat the same shape, first to the right and then to the left, using two recursive relations:

f[t_] := f[t - T] /; t > T;
f[t_] := f[t + T] /; t < 0;
Plot[f[t], {t, -3 T, 3 T}]
[Plot: the periodic triangle wave over -3T <= t <= 3T]
Now we can compute the coefficients based on the formula we found:
Clear[T];
a[n_] = FullSimplify[(2/T) (Integrate[Cos[2 Pi n t/T] 2 t/T, {t, 0, T/2}] +
      Integrate[Cos[2 Pi n t/T] (2 - 2 t/T), {t, T/2, T}])]
a[0] = Simplify[(1/T) (Integrate[2 t/T, {t, 0, T/2}] + Integrate[2 - 2 t/T, {t, T/2, T}])]
a_0 = 1/2.
Let's compute explicitly a few of them:

Table[a[n], {n, 0, 10}]
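For reference, carrying out the integrals by hand (a useful check on the table above) gives

a_n = \frac{2\,[(-1)^n - 1]}{\pi^2 n^2},   n > 0,

that is, a_n = -4/(\pi^2 n^2) for odd n and a_n = 0 for even n > 0, which is the 1/n^2 decay we will comment on below.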
fapprox[t_, M_] := Sum[a[n] Cos[2 Pi n t/T], {n, 0, M}];
T = 1;
Table[Plot[fapprox[t, M], {t, 0, 2 T}, PlotLabel -> "M = " <> ToString[M]], {M, 1, 11, 2}]
[Plots: partial sums fapprox[t, M] of the triangle wave for M = 1, 3, 5, 7, 9, 11]
Evaluate this cell and animate the plots. You will see that to describe well the sharp edges of the function you need many modes, because those sharp features require sines and cosines of very small period T/n (large n, or large frequency). Important: smooth functions are well described by a Fourier series with a small number of modes; functions with rapid variations need more terms in the series.
The Square Wave and Uniform Convergence

While the triangle wave satisfies the convergence theorem, we will now see a case of a function that does not, and we will learn about the possible consequences of that. Let's draw the square wave within one period, then use the recursive relations to extend it to the right and to the left:
Clear[f];
f[t_] := 1 /; 0 <= t < T/2;
f[t_] := -1 /; -T/2 <= t < 0;
f[t_] := f[t + T] /; t < -T/2;
f[t_] := f[t - T] /; t > T/2;
T = 1; Plot[f[t], {t, -3, 3}]
[Plot: the square wave over -3 <= t <= 3]
This time we have an odd function of t, f(-t) = -f(t), so we can represent it using just the sine Fourier modes. This function is not continuous, and its derivative is infinite at the points t = m T/2. It does not satisfy the theorem. The corresponding b coefficients are:

b[n_] = FullSimplify[(2/T) (-Integrate[Sin[2 Pi n t/T], {t, -T/2, 0}] +
      Integrate[Sin[2 Pi n t/T], {t, 0, T/2}])]
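Evaluating these integrals by hand, one should find

b_n = \frac{2\,[1 - (-1)^n]}{\pi n},   n > 0,

that is, b_n = 4/(\pi n) for odd n and b_n = 0 for even n. The truncated series is therefore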
f_{approx}(t, M) = \frac{4}{\pi} \sum_{n=1,\ n\ \mathrm{odd}}^{M} \frac{1}{n} \sin(2\pi n t/T)
fapprox[t_, M_] := Sum[b[n] Sin[2 Pi n t/T], {n, 1, M}];
Table[Plot[fapprox[t, M], {t, -T/2, T}, PlotLabel -> "M = " <> ToString[M]], {M, 4, 20, 4}]
[Plots: square-wave partial sums fapprox[t, M] for M = 4, 8, 12, 16, 20]
If you animate these plots you will see that as M grows the oscillations do not decrease in amplitude, but they are pushed toward the values of t where the first derivative of the function is singular: t = m T / 2. We can see this better by plotting the error:
errorplot[M_] := (a = fapprox[t, M];
   Plot[a - f[t], {t, -0.5, 0.5}, PlotRange -> {-1, 1}, PlotPoints -> 100 M,
     PlotLabel -> "Error, M = " <> ToString[M]]);
Table[errorplot[M], {M, 10, 50, 10}]
[Plots: the error fapprox[t, M] - f[t] for M = 10, 20, 30, 40, 50]
which you should try to animate again. At the discontinuities, t = m T/2, the error is maximum and its value is ±1. The amplitude of the other oscillations (errors) is also independent of M. So in a sense we have failed; after all, we did not satisfy the sufficient condition of the theorem, so there was a good chance of failure. However, as M grows, the width of the peaks decreases and they all get squeezed toward the singular points. This is called the Gibbs phenomenon, and it commonly occurs when you convolve a signal with a sharp filter (like a square filter). Sharp filters are good because they select a very specific frequency, but they also cause this unwanted Gibbs phenomenon. Anyway, away from the singular points our approximation of the square wave is indeed improving with growing M, so in a sense we have succeeded in our purpose, as long as we don't ask to describe the function well very close to the singular points. Notice as well that the Fourier coefficients of the triangle wave were decreasing as 1/n^2, while the Fourier coefficients of the square wave decrease only as 1/n, so more terms are needed for a good description of the square wave than for the triangle wave.

Now we can finally understand what the theorem meant by uniform convergence, because here we see a case of non-uniform convergence. Uniform convergence means that the series converges to the function f(t) for every value of t. So we say that for a uniformly convergent series, for any small number ε independent of t, we can always find a value of M (the number of Fourier modes) such that
\left| f_{approx}(t, M) - f(t) \right| < \epsilon

for any value of t.
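To make the non-uniformity concrete, here is a small sketch (not part of the original notebook) that samples the error of the square-wave partial sums on a grid that includes points very close to the discontinuity at t = 0; for these values of M the maximum sampled error stays close to 1 instead of shrinking as M grows, unlike what happens for the triangle wave.

(* The maximum error of the square-wave partial sums does not shrink with M.       *)
(* The grid 0.001, 0.002, ..., 0.499 includes points very close to the jump at 0.  *)
maxError[M_] := Max[Table[Abs[fapprox[t, M] - f[t]], {t, 0.001, 0.499, 0.001}]];
Table[{M, maxError[M]}, {M, 10, 50, 10}]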
Exponential Fourier Series

As we have already found when we solved for the undetermined coefficients of the particular solution of the forced oscillator, the complex notation for oscillatory functions can sometimes make life easier. Now we will define Fourier series with a complex notation. The advantage is not only to make life easier; it is also that with complex Fourier series we can represent complex functions as well. A way to derive the Fourier series in complex form is to simply translate our previous expression (ω = 2π/T is the angular frequency),

f(t) = a_0 + \sum_{n=1}^{\infty} a_n \cos(n\omega t) + \sum_{n=1}^{\infty} b_n \sin(n\omega t),
into complex form using the trigonometric identities:
\cos x = (e^{i x} + e^{-i x})/2,   \sin x = (e^{i x} - e^{-i x})/(2 i).

With this substitution we get:

f(t) = a_0 + \sum_{n=1}^{\infty} \frac{a_n + i b_n}{2}\, e^{-i n \omega t} + \sum_{n=1}^{\infty} \frac{a_n - i b_n}{2}\, e^{i n \omega t}.
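To see where this comes from, write out a single term of the sum and group the exponentials (a one-line check):

a_n \cos(n\omega t) + b_n \sin(n\omega t) = a_n \frac{e^{i n\omega t} + e^{-i n\omega t}}{2} + b_n \frac{e^{i n\omega t} - e^{-i n\omega t}}{2 i} = \frac{a_n - i b_n}{2}\, e^{i n\omega t} + \frac{a_n + i b_n}{2}\, e^{-i n\omega t}.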
In case you forgot basic facts about complex numbers, notice that

(a + i b) + (a - i b) = 2 a.

Hence, in general, z + z^* = 2 Re[z]. This applies to the a and b coefficients only if they are real. So our Fourier series can be written as

f(t) = a_0 + Re\left[ \sum_{n=1}^{\infty} (a_n + i b_n)\, e^{-i n \omega t} \right]
or in the more compact form

f(t) = Re\left[ \sum_{n=0}^{\infty} C_n\, e^{-i n \omega t} \right]
if we define the complex Fourier coefficients as
C_0 = a_0,   C_n = a_n + i b_n,   n > 0.

This result may be useful, but it is not very general, because this complex Fourier series can be used only to represent real functions (we assumed a, b and f(t) were real). With a little trick we can re-express the Fourier series, after substitution with the trigonometric identities, in a different way that does not assume a and b are real:
c_0 = a_0,
c_n = (a_n + i b_n)/2,   n > 0,
c_n = (a_{-n} - i b_{-n})/2,   n < 0.

The result is:

f(t) = \sum_{n=-\infty}^{\infty} c_n\, e^{-i n \omega t}.
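As a quick sanity check (a sketch, not part of the original notebook), we can build these c_n from the square-wave coefficients computed earlier (there a_n = 0 because the wave is odd) and verify at an arbitrary test point that the two-sided complex sum reproduces the real sine series:

(* For the square wave a_n = 0, so c_n = I b[n]/2 for n > 0 and c_n = -I b[-n]/2 for n < 0; *)
(* each pair n, -n should combine into b_n Sin[2 Pi n t/T].                                 *)
w = 2 Pi/T;
c[0] = 0;
c[n_ /; n > 0] := I b[n]/2;
c[n_ /; n < 0] := -I b[-n]/2;
nmax = 9;
Chop[(Sum[c[n] Exp[-I n w t], {n, -nmax, nmax}] -
    Sum[b[n] Sin[2 Pi n t/T], {n, 1, nmax}]) /. t -> 0.3]       (* expect 0 *)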
Basically, we have reserved the negative frequencies for the complex-conjugate terms. In the next lecture we will find the coefficients using the orthogonality of the exponential Fourier modes. Let's now find the complex Fourier coefficients in the same way we found the real ones, that is, by taking advantage of the orthogonality of the exponential functions. With complex numbers, orthogonality is defined in a slightly different way: