The Fourier series and the Fourier transform
2025-04-17
math programming
(This is a work in progress; some sections are still missing.)
The Fourier series: approximating periodic functions
A periodic function with period \(L\) repeats itself: \(f(x) = f(x + L)\). A Fourier series approximates a periodic function on a single interval as an infinite sum of sines and cosines. It allows us to decompose an arbitrary periodic function into simpler terms, find a solution for each term, and recompose the solved pieces to obtain a solution for the original function. In many cases the solution for the simpler terms is easy to obtain. The solution for the original function can then be obtained either analytically or to arbitrary accuracy by truncating the infinite sum after a finite number of terms. Note that there are actually two simplifying steps:
- We break down a periodic function into intervals of length \(L\) (“one period”)
- We decompose each interval into a sum of sine and cosine terms
The Fourier series of a function \(f(x)\) over the interval \([-\pi, \pi]\) is given by
\[ a_0 + \sum_{n=1}^\infty a_n \cos(n x) + \sum_{n=1}^\infty b_n \sin(n x) \]
where
\[ \begin{align} a_0 &= \frac{1}{2 \pi} \int_{-\pi}^\pi f(x) \,\mathrm{d}x \\ a_n &= \frac{1}{\pi} \int_{-\pi}^\pi f(x) \cos(n x) \,\mathrm{d}x \\ b_n &= \frac{1}{\pi} \int_{-\pi}^\pi f(x) \sin(n x) \,\mathrm{d}x \\ \end{align} \]
For functions of period \(\tau\), we can use the substitution \(x = \frac{2\pi}{\tau} y\).
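As a sanity check on these formulae, here is a minimal sketch that approximates the coefficients numerically with a Riemann sum on a uniform grid. The test function, a square wave \(f(x) = \operatorname{sign}(x)\), has the known expansion \(\sum_{n \text{ odd}} \frac{4}{\pi n} \sin(n x)\), so all the \(a_n\) vanish and the \(b_n\) alternate between \(4/(\pi n)\) and zero:

```python
import numpy as np

def fourier_coefficients(f, n_max, num_points=200_000):
    """Approximate a_0, a_n, b_n on [-pi, pi] with a simple Riemann sum."""
    x = np.linspace(-np.pi, np.pi, num_points, endpoint=False)
    dx = 2 * np.pi / num_points
    fx = f(x)
    a0 = fx.sum() * dx / (2 * np.pi)
    a = np.array([(fx * np.cos(n * x)).sum() * dx / np.pi
                  for n in range(1, n_max + 1)])
    b = np.array([(fx * np.sin(n * x)).sum() * dx / np.pi
                  for n in range(1, n_max + 1)])
    return a0, a, b

# Square wave f(x) = sign(x): analytically, b_n = 4/(pi n) for odd n, else 0.
a0, a, b = fourier_coefficients(np.sign, 5)
print(b)  # ≈ [4/pi, 0, 4/(3 pi), 0, 4/(5 pi)]
```

The grid excludes the right endpoint so that each point is counted once per period; on periodic integrands this simple sum is equivalent to the trapezoidal rule.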
Convergence
Although the above definition is completely general, a few questions are in order. Why does it work? Does the series converge, and if so, to what? Are there functions that cannot be approximated by a Fourier series? These are very important questions, as most mathematicians before Fourier thought that approximation with trigonometric functions couldn’t work. It is easy to show that, if the Fourier series converges to \(f(x)\), then the coefficients \(a_0\), \(a_n\), \(b_n\) must be given by the above formulae: multiply both sides by \(\cos(m x)\) (or \(\sin(m x)\)), integrate over \([-\pi, \pi]\), and watch most terms cancel out, leaving the expression for the coefficient. But the existence of the coefficients does not guarantee the convergence of the series. However, the fact that we had to integrate suggests that the natural notion of convergence for the Fourier series is convergence in \(\mathcal{L}^p\). The proof of convergence requires some deep results and I will not reproduce it here. We can, however, find an easy example for which the Fourier series does not exist: \(\tan(x)\), a periodic function over \([-\frac{\pi}{2}, \frac{\pi}{2}]\) (hint: check the coefficients for \(n=1\)). Interestingly, the necessary conditions for convergence do not include continuity: functions with jumps can be perfectly well expanded in a Fourier series.
Wave interpretation
The Fourier series decomposes a periodic function into a sum of simple sine and cosine waves. This decomposition is called the harmonic analysis of the function. The term \(a_1 \cos(x) + b_1 \sin(x)\) is called the fundamental harmonic, and all other terms have frequencies that are integer multiples of the fundamental harmonic’s. Musical instruments produce standing waves, which are then amplified by the body of the instrument. Except for tuning forks, which produce pure tones, standing waves can be produced in more than one way, potentially infinitely many. A plucked string, for example, can vibrate at its “natural” frequency \(\nu\) (which depends on the length, radius, and tension of the string), but also at frequencies \(2\nu\), \(3\nu\), and so on. The sum of all these simple waves, after they are amplified in the instrument’s body, results in the timbre of the instrument.
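The “sum of simple waves” picture can be heard directly with additive synthesis: superimpose harmonics \(n\nu\) with some choice of amplitudes and you get a tone with pitch \(\nu\) but a richer timbre. The \(1/n\) amplitude profile below is made up for illustration; a real instrument’s harmonic amplitudes depend on how and where it is excited and on its body:

```python
import numpy as np

fs = 44_100                        # sampling rate in Hz (CD quality, assumed)
t = np.arange(0, 1.0, 1 / fs)      # one second of audio
nu = 220.0                         # fundamental frequency in Hz, illustrative

# Additive synthesis: harmonics n*nu with 1/n amplitudes (an invented
# spectrum, not measured from any instrument).
tone = sum(np.sin(2 * np.pi * n * nu * t) / n for n in range(1, 9))
tone /= np.max(np.abs(tone))       # normalize to [-1, 1] for playback
```

Writing `tone` to a WAV file and changing the amplitude profile (while keeping \(\nu\) fixed) changes the perceived timbre but not the pitch.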
I asked Gemini to write some code for a physics-based simulation of what happens when we pluck a string that is fixed at both ends. This simulation uses the wave equation (\(\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}\)) with appropriate boundary conditions to calculate the motion of the string.
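A minimal sketch of this kind of simulation is below (not the Gemini code itself): a leapfrog finite-difference scheme for the 1-D wave equation with fixed ends and a triangular “pluck” as the initial shape. All parameters are illustrative, and the first time step uses a simple first-order start for the zero-initial-velocity condition:

```python
import numpy as np

# 1-D wave equation u_tt = c^2 u_xx on [0, L], fixed ends, leapfrog scheme.
L, c = 1.0, 1.0                 # string length and wave speed (arbitrary units)
nx, nt = 200, 1000              # spatial cells and time steps
dx = L / nx
dt = 0.5 * dx / c               # Courant number 0.5 < 1 keeps the scheme stable
r2 = (c * dt / dx) ** 2

x = np.linspace(0.0, L, nx + 1)
# Triangular initial shape: string plucked at x = 0.3 L, released from rest.
pluck = 0.3
u_prev = np.where(x < pluck * L, x / (pluck * L), (L - x) / (L - pluck * L))
u = u_prev.copy()               # crude first step for zero initial velocity

for _ in range(nt):
    u_next = np.zeros_like(u)   # endpoints stay 0: fixed ends
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next
```

Sampling `u` at a fixed position over time and Fourier-analyzing that record recovers the string’s harmonics \(\nu, 2\nu, 3\nu, \dots\)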
The motion of the string at each point changes the motion of nearby air molecules, producing sound waves. These are then amplified by the body of the instrument, and resonance plays a big role. In my current understanding, although the string vibrates with different amplitudes at different positions (e.g. it is almost still near the fixed ends), the air molecules are perturbed all along the string and the perturbation quickly propagates, interfering with itself along the way. The sound that travels through the air and reaches our ears is the result of a lot of interference and resonance, and I think simulating the sound from first principles using the motion of the string and the shape of the instrument would be extremely hard. In other words, there is no easy relationship between the Fourier series of the string’s wave and the sound that we hear, except perhaps that the fundamental frequency is often the same, though even that doesn’t have to be the case: the body of the instrument could amplify higher frequencies until they are much stronger than the fundamental. Nevertheless, we can use a microphone to record the sound produced by the instrument and apply Fourier analysis to that signal to find the combination of frequencies that end up being amplified the most. To analyze a full soundtrack, where the frequencies also change over time as different notes are played, we first divide the track into small intervals, then compute the Fourier series within each interval, obtaining the frequency composition during that interval. Decreasing the duration of the interval gives higher temporal resolution, but lower accuracy in the frequency domain. This tradeoff is inherent to Fourier analysis and is the same principle behind the Heisenberg uncertainty principle in quantum physics.
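The interval-by-interval analysis described above is known as the short-time Fourier transform. A minimal sketch, using a synthetic “soundtrack” with two made-up notes and non-overlapping rectangular windows (real analyses usually overlap tapered windows):

```python
import numpy as np

fs = 8000                          # sampling rate in Hz, illustrative
t = np.arange(0, 1.0, 1 / fs)
# Two "notes": 440 Hz for the first half second, 880 Hz for the second.
signal = np.where(t < 0.5,
                  np.sin(2 * np.pi * 440 * t),
                  np.sin(2 * np.pi * 880 * t))

win = 1024                         # samples per interval (~0.13 s of audio)
dominant = []
for start in range(0, len(signal) - win + 1, win):
    chunk = signal[start:start + win]
    spectrum = np.abs(np.fft.rfft(chunk))            # frequency content
    freqs = np.fft.rfftfreq(win, 1 / fs)             # bin centers in Hz
    dominant.append(freqs[spectrum.argmax()])        # loudest frequency
# Early windows peak near 440 Hz, late ones near 880 Hz.
```

The time–frequency tradeoff is visible in the bin spacing `fs / win`: a shorter window locates the note change more precisely in time but spaces the frequency bins further apart.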
Another interesting phenomenon in music is the “missing fundamental”, where a combination of frequencies that are not multiples of one another results in a wave with a frequency that is lower than all of the initial frequencies. This is not a trick of our brain: the resulting wave really has a lower frequency—its period, the inverse of the frequency, is the least common multiple of the periods of the harmonics.
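This can be checked numerically. With components at 400 Hz and 600 Hz (neither a multiple of the other), the periods are \(1/400\) s and \(1/600\) s, whose least common multiple is \(1/200\) s: the sum repeats at 200 Hz, lower than both components.

```python
import numpy as np

fs = 48_000                        # sampling rate in Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)
# Two harmonics at 400 Hz and 600 Hz; neither is a multiple of the other.
wave = np.sin(2 * np.pi * 400 * t) + np.sin(2 * np.pi * 600 * t)

# lcm(1/400 s, 1/600 s) = 1/200 s, so the sum should repeat every 240 samples.
shift = fs // 200                  # 240 samples = one period at 200 Hz
repeats = np.allclose(wave[:-shift], wave[shift:], atol=1e-6)
```

Shifting by anything shorter, say `fs // 400`, does not reproduce the wave, confirming that 200 Hz is the true fundamental of the combination.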
Complex coefficients
The original Fourier series was expressed with sines and cosines, but using complex numbers can actually simplify the calculations. In the complex plane, multiplying a vector by \(e^{i\theta}\) is equivalent to a rotation of angle \(\theta\). We also have the famous Euler formula: \[ e^{ix} = \cos(x) + i\sin(x). \] Using this, we can rewrite the Fourier series in a more compact form:
\[ \sum_{n=-\infty}^\infty \gamma_n e^{inx} \]
and the coefficients become
\[ \begin{align} \gamma_n &= \frac{1}{2 \pi} \int_{-\pi}^\pi f(x) e^{-inx} \,\mathrm{d}x \\ \end{align} \]
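The complex coefficients relate to the real ones by \(\gamma_0 = a_0\), \(\gamma_n = (a_n - i b_n)/2\), and \(\gamma_{-n} = (a_n + i b_n)/2\) for \(n \geq 1\). A small numerical check, using the same Riemann-sum approach as before: for \(f(x) = \sin(x)\) we have \(a_1 = 0\), \(b_1 = 1\), so \(\gamma_1 = -i/2\) and \(\gamma_{-1} = i/2\), with every other coefficient vanishing.

```python
import numpy as np

def gamma(f, n, num_points=200_000):
    """Approximate the complex Fourier coefficient gamma_n on [-pi, pi]."""
    x = np.linspace(-np.pi, np.pi, num_points, endpoint=False)
    dx = 2 * np.pi / num_points
    return (f(x) * np.exp(-1j * n * x)).sum() * dx / (2 * np.pi)

g1 = gamma(np.sin, 1)    # expected: -i/2
```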
Cartesian interpretation
There is a sense in which the infinite set of sines and cosines (or complex exponentials) in the Fourier series forms an orthogonal basis. Consider a vector space where the elements are functions over a domain \(D\). Define the inner product as \[ \langle x, y \rangle := \int_D x(t) \overline{y(t)} \,\mathrm{d}t, \] where \(\overline{y(t)}\) denotes the complex conjugate of \(y(t)\). Then, it can be shown that the elements of the set \(\left\{ \frac{1}{\sqrt{2\pi}} e^{int}, n \in \mathbb{Z} \right\}\) are all orthogonal. They constitute an orthonormal basis for the function space. Under appropriate conditions, the basis is complete and can be used to express any function in the space. Thus, the Fourier coefficients can be interpreted as the “Cartesian components” of the function in this infinite-dimensional space.
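Orthonormality is easy to verify numerically for \(D = [-\pi, \pi]\). The sketch below approximates the inner product with a Riemann sum; distinct basis elements give (essentially) zero, and each element has unit norm:

```python
import numpy as np

def inner(m, n, num_points=200_000):
    """<e_m, e_n> for e_n(t) = exp(i n t)/sqrt(2 pi) on [-pi, pi]."""
    t = np.linspace(-np.pi, np.pi, num_points, endpoint=False)
    dt = 2 * np.pi / num_points
    e_m = np.exp(1j * m * t) / np.sqrt(2 * np.pi)
    e_n = np.exp(1j * n * t) / np.sqrt(2 * np.pi)
    return (e_m * np.conj(e_n)).sum() * dt   # integrand times conjugate

# inner(3, 3) is close to 1, inner(3, 5) is close to 0.
```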
Geometric interpretation
The \(y\) of a function