Basic arithmetic/calculus.
In the series: Note 13.
Subject: Fourier and Taylor expansions.
Date : 20 June, 2016
Version: 0.3
By: Albert van der Sel
Doc. Number: Note 13.
For who: for beginners.
Status: Ready.
This note is especially for beginners.
Maybe you need to pick up some basic mathematics rather quickly.
So really..., my emphasis is on "rather quickly".
I am not sure of it, but I hope that this note can be of use.
Of course, I hope you like my "style", and that you try the note anyway.
This note: Note 13: Fourier expansion and Taylor expansion.
Each note in this series is built "on top" of the preceding ones.
Please be sure that you are at a "level" at least equivalent to the contents up to, and including, note 12.
This note is a very short introduction into "Fourier expansion" or "Fourier series".
The basic idea of an expansion of a function into a series of sin() or cos() "basis" functions, will be explained.
Also, we will take a look at the Taylor (or Maclaurin) series.
Next, the basic ideas behind "Fourier transforms" will be touched upon.
To keep it relatively simple, my explanations will be mostly on the conceptual level.
However, my goal is that you really get to understand the Fourier and Taylor series.
Chapter 1. What is a Fourier expansion (or Fourier series)?
Let y=g(x) represent any (continuous) periodic function. Then it can be represented by a series
of sin(Nx) and cos(Nx) functions (where N are integer numbers, so that for example a member in the series is "sin(5x)").
Note the fact that the sin(Nx) and cos(Nx) resemble "basis" vectors. Indeed, in note 12 you saw that a vector can
be written as a superposition (or a sum) of unit or basis vectors.
Here is Fourier's basic expansion:
g(x) = a_{0}/2 + Σ_{n=1}^{∞} ( a_{n} cos(nx) + b_{n} sin(nx) )
     = a_{0}/2 + a_{1} cos(x) + b_{1} sin(x) + a_{2} cos(2x) + b_{2} sin(2x) + a_{3} cos(3x) + b_{3} sin(3x) + ...   (equation 1)
Please note that the "n"'s are simply integers, so n=1, 2, 3, 4, ......
Also, generic formulas are provided by the theory to find the coefficients like a_{1}, b_{1}, a_{2}, b_{2}, etc.
Although the series is "infinite", it's truly amazing how closely you can approximate any such function g(x) by taking,
for example, as few as 7 pairs of sin() and cos() terms.
The more terms you add to the sum, the more precise the approximation gets.
The "x" interval where the function g(x) is equated to the expansion (the sum in equation 1) is often at first limited
to one period T of g(x) (like for example the interval [0, 2π], or [−π, π], etc.), whatever the period is.
This is done to calculate the coefficients like "b_{n}".
Below is an example that will hopefully make it "likely" (or convincing), that such an expansion is really possible.
1.1 How to make it plausible.
Example 1:
Suppose you start with a function which does not resemble a sin() or cos() at all.
We could take one of the "famous" examples, like a sawtooth graph, or a square wave graph.
Let's investigate a square wave. In figure 1 below, in the upper left corner, we see a square wave.
Figure 1. Representing a "square wave" by a small number of sine harmonics.
Although it's not proven right now, if we just take a few terms of a Fourier expansion:
sin(x)
1/3 sin(3x)
1/5 sin(5x)
1/7 sin(7x)
and we simply add those terms, then we get g(x) = sin(x) + 1/3 sin(3x) + 1/5 sin(5x) + 1/7 sin(7x)
In the lower right, you can see the graph of this superposition. It's remarkable how much the result
already resembles the original square wave.
You can probably imagine that if we would take, for example, 50 terms of the expansion,
then the resemblance would be very precise.
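If you want to check this yourself, here is a small Python sketch (the function name is mine, not from the note). It sums the odd harmonics sin(nx)/n from the example above and evaluates the partial sum at x = π/2, where this scaled square wave sits at its plateau value π/4 ≈ 0.785:

```python
import math

# Partial sum of the square-wave expansion used in the text:
# sin(x) + 1/3 sin(3x) + 1/5 sin(5x) + ...  (odd harmonics only)
def square_wave_partial_sum(x, n_terms):
    return sum(math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms))

# At x = pi/2 the square wave has its plateau value, pi/4 ~ 0.785.
# Adding more terms brings the partial sum closer to that value.
print(square_wave_partial_sum(math.pi / 2, 4))   # the 4 terms shown above
print(square_wave_partial_sum(math.pi / 2, 50))  # 50 terms: much closer to pi/4
```

With 4 terms you get about 0.72; with 50 terms the sum is already within about 0.01 of π/4.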
1.2 Fourier series or Fourier expansion.
As already listed above, a Fourier expansion of a periodic function is:
g(x) = a_{0} /2 + Σ _{n=1}^{∞} ( a_{n} cos(nx) + b_{n} sin(nx) )
= a_{0} /2 + a_{1} cos(x) + b_{1} sin(x) + a_{2} cos(2x) + b_{2} sin(2x) + a_{3} cos(3x) + b_{3} sin(3x) + ... (equation 1)
Most often, the analysis of functions g(x) is done over the "period" of g(x), like for example the interval on the x-axis
where x is in [0, π], or [0, 2π], or [−π, π].
Suppose the period of g(x) is the interval [−π, π]. So, the period of g(x) is 2π.
The coefficients a_{n} and b_{n}, are calculated as follows:
a_{n} = 1/π ∫_{−π}^{π} g(x) cos(nx) dx   (equation 2)
b_{n} = 1/π ∫_{−π}^{π} g(x) sin(nx) dx   (equation 3)
a_{0} = 1/π ∫_{−π}^{π} g(x) dx   (equation 4)
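You can approximate these integrals numerically and see the coefficients of the square-wave example come out. In this sketch (function names are mine) the square wave is scaled to amplitude π/4, so that the coefficients match the terms 1, 1/3, 1/5, ... used earlier:

```python
import math

# Square wave on [-pi, pi], scaled to amplitude pi/4 so that the
# coefficients come out as 1, 1/3, 1/5, ... as in the text's example.
def g(x):
    return math.pi / 4 if x >= 0 else -math.pi / 4

# Equation 3: b_n = (1/pi) * integral of g(x) sin(nx) over [-pi, pi],
# approximated here by a simple midpoint Riemann sum.
def b(n, steps=100_000):
    h = 2 * math.pi / steps
    total = 0.0
    for i in range(steps):
        x = -math.pi + (i + 0.5) * h
        total += g(x) * math.sin(n * x) * h
    return total / math.pi

print(round(b(1), 4))  # close to 1
print(round(b(2), 4))  # close to 0 (the even harmonics vanish)
print(round(b(3), 4))  # close to 1/3
```

Note that all a_{n} (and a_{0}) are 0 here, because this square wave is an odd function: only the sin() terms contribute.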
So, why must these coefficients (as of a_{1}) be calculated that way?
Let's take a look at, for example, b_{3}. This is the coefficient for sin(3x).
In general, if b_{3} is large, then the contribution of sin(3x) to the series is large.
Likewise, if b_{3} is very small, then the contribution of sin(3x) to the series is very small.
Thus a coefficient like b_{3} determines how much the corresponding component (like sin(3x)) contributes to the series.
Many interpret it this way: the integral of "g(x) sin(3x)" over the interval [−π, π] is equivalent
to the average (or "weighted") contribution of sin(3x) to the expansion.
For most functions in practice, it's also true that as "n" increases, the associated terms quickly get smaller and smaller.
Note:
The coefficient (or number, actually) "a_{0}", which determines the first term in the series, has a rather special status.
You may interpret that first term, a_{0}/2, as the average value of g(x) over the "period" (like the interval [−π, π]).
Maybe you still have problems understanding coefficients like a_{n}.
As said above, you may interpret them as the "weighted average" contribution of sin(nx) or cos(nx)
to the expansion.
However, you may also view equation 1 in the sense of a vector expansion of g(x) on the basis vectors "sin(nx)" and "cos(nx)".
In that sense, you can view the coefficients in the same way as you would view the vector components, written out
as the sum of unitvectors.
So, it looks like an expansion of a vector A on unit vectors. For example, if A=(7,18,1), then A = 7 (1,0,0) + 18 (0,1,0) + 1 (0,0,1).
The contribution of (0,0,1) to A is rather small. As you know from note 12, "7", for example, is the projection of A onto (1,0,0).
Thus:
a_{n} = 1/π ∫_{−π}^{π} g(x) cos(nx) dx
then "resembles" the component of g(x) on cos(nx).
Thus, g(x) in equation 1 is formed from an (infinite) superposition of basis vectors, and the coefficients
can be interpreted exactly as you did in vector calculus.
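The vector analogy works because the sin(nx) and cos(nx) "basis vectors" are orthogonal: the "dot product" integral of two different basis functions over [−π, π] is 0, just like the dot product of two different unit vectors. A small numerical check (function name is mine):

```python
import math

# "Dot product" of two functions over [-pi, pi], approximated
# by a midpoint Riemann sum.
def inner(f, g, steps=20_000):
    h = 2 * math.pi / steps
    return sum(f(-math.pi + (i + 0.5) * h) * g(-math.pi + (i + 0.5) * h) * h
               for i in range(steps))

s2 = lambda x: math.sin(2 * x)
s3 = lambda x: math.sin(3 * x)
c2 = lambda x: math.cos(2 * x)

print(round(inner(s2, s3), 6))  # two different basis functions: ~0
print(round(inner(s2, c2), 6))  # sin and cos are also orthogonal: ~0
print(round(inner(s2, s2), 6))  # a basis function with itself: ~pi
```

The value π for a basis function with itself is exactly where the 1/π in equations 2 and 3 comes from: it normalizes the projection.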
The whole purpose of Chapter 1 was to give you a "reasonable" impression of how a Fourier expansion works.
If that worked, then for now, it's good enough for me.
Chapter 2. Taylor series.
This subject resembles a bit what we have seen in Chapter 1, in the sense that a function can be expanded into a series of mathematical terms.
While Fourier developed his insights on the expansion of a function around 1800, Taylor developed his own type
of expansion around 1714. But it was not until Lagrange rediscovered its power in 1772 that Taylor's work became
really famous. No doubt the work inspired many others and pushed differential calculus forward.
All in all, it's a different approach from the Fourier series. Here is the general equation of a Taylor series:
f(x) = f(a) + f '(a)/1! (x−a) + f ''(a)/2! (x−a)^{2} + f '''(a)/3! (x−a)^{3} + ...   (equation 5)
Actually, the Taylor series is quite easy to use. Before we provide an explanation, let's do a few examples in a minute.
Of course, we recognize the first derivative f '(a), the second derivative f ''(a), etc., all evaluated at x=a.
Yes, that's really remarkable. Just choose a point x=a on the x-axis. Then calculate the derivatives f ', f '', f ''', etc.,
and multiply each derivative by the corresponding power of "(x−a)", just as you can see in equation 5.
It turns out that you can then "calculate" f(x), for any x (at least, for x close enough to a).
Just like the Fourier series, the Taylor series gets more precise if you add more terms.
The denominators like "2!" and "3!" (or more generally, "n!") that you see in equation 5 are a shorthand notation.
It means this:
1! = 1
2! = 2 x 1
3! = 3 x 2 x 1
4! = 4 x 3 x 2 x 1
etc..
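Equation 5 can be written directly as code. In this sketch (the function name and the example are mine), you supply the values of f(a), f '(a), f ''(a), ... as a list, and the Taylor polynomial is summed term by term:

```python
import math

# Equation 5 as code: a Taylor polynomial around x = a, given the
# values of the derivatives f(a), f'(a), f''(a), ... as a list.
def taylor(derivs_at_a, a, x):
    return sum(d * (x - a) ** n / math.factorial(n)
               for n, d in enumerate(derivs_at_a))

# Example: f(x) = x^2 around a = 1, so f(1)=1, f'(1)=2, f''(1)=2.
# A polynomial is reproduced exactly by its own Taylor expansion.
print(taylor([1, 2, 2], a=1, x=3))  # 9.0, which is indeed 3^2
```

Note that math.factorial(n) computes the "n!" from the list above, e.g. math.factorial(4) is 24.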
Examples:
Example 1: Taylor Expansion of f(x)=e^{x}:
Let's try to expand e^{x}. It's actualy a simple example, since from from note 7, you might remember
that the derivate of e^{x}, is e^{x} again! Indeed, it's true for all derivates of e^{x}
If needed, check out note 7 again.
And, we will pick x=0, as the "x=a" in the general equation (equation 5).
So, we have:
e^{x} = e^{0} + e^{0}/1! x + e^{0}/2! x^{2} + e^{0}/3! x^{3} + ...
And recall that e^{0} = 1. So, we get:
e^{x} = 1 + x + x^{2}/2! + x^{3}/3! + ...
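You can verify this expansion numerically by summing a number of terms and comparing against the built-in exponential (the function name is mine):

```python
import math

# Partial sum of 1 + x + x^2/2! + x^3/3! + ...
def exp_series(x, n_terms):
    return sum(x ** n / math.factorial(n) for n in range(n_terms))

print(exp_series(1.0, 10))  # close to e ~ 2.71828
print(math.exp(1.0))        # the "true" value, for comparison
```

Already with 10 terms the partial sum agrees with math.exp(1.0) to about 7 decimal places.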
Example 2: Taylor Expansion of f(x)=cos(x):
Here we will choose "a=0" too.
Remember from note 5, that:
If f(x)=sin(x), then f '(x)=cos(x)
If f(x)=cos(x), then f '(x)=−sin(x)
So, we have:
cos(x) = cos(0) − sin(0)/1! x − cos(0)/2! x^{2} + sin(0)/3! x^{3} + cos(0)/4! x^{4} − ...
Since cos(0)=1 and sin(0)=0, we get:
cos(x) = 1 − x^{2}/2! + x^{4}/4! − ...
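As with e^{x}, this alternating series can be checked numerically against the built-in cosine (the function name is mine):

```python
import math

# Partial sum of 1 - x^2/2! + x^4/4! - ... (only even powers survive).
def cos_series(x, n_terms):
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(n_terms))

print(cos_series(1.0, 6))  # close to cos(1) ~ 0.5403
print(math.cos(1.0))       # the "true" value, for comparison
```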
That's it ! Hope you liked it.