How to compute Pi in Calculus Style

Last week we dipped our toes into the strange world of limits. In the last post we saw that many discrete particles interacting with each other can lead to a continuous phenomenon like diffusion! This week, we use calculus to compute the actual value of pi. Before doing so, we learn about one of the crown jewels of calculus: the Taylor series. It is the mathematical equivalent of taking apart a function’s DNA and rebuilding it out of simple pieces. This is a natural follow-up to last week’s post. First of all, we need to ask ourselves one big question.

What is the Taylor series?

In calculus, Taylor’s theorem gives an approximation of an $n$-times differentiable function around a given point $a$ by a polynomial of degree $n$: the $n$th-order Taylor polynomial. For a smooth function, we can build this polynomial for an arbitrary degree; in that case, the Taylor polynomial is the truncation at order $n$ of the Taylor series of the function. The first-order Taylor polynomial is just the linear approximation of the function, i.e. the tangent line at the point $a$; the second-order Taylor polynomial is the quadratic approximation.

If this sounds extremely vague, think of it this way: each extra term is like adjusting a slider on an audio equalizer. The linear term gives you the basic melody, the quadratic one adds harmony, the cubic one adds richness, and with enough sliders you can reproduce the full song. In the same way, with enough terms you capture more and more of the behaviour of the function.

So in short if $f$ is $n$ times differentiable, we can approximate it near a point $a$ using a polynomial:

$$ P_n(x)=f(a)+f'(a)(x-a)+\frac{f''(a)}{2!}(x-a)^2+\cdots+\frac{f^{(n)}(a)}{n!}(x-a)^n. $$

At first glance this formula looks like dark magic: we take a function, extract its derivatives, sprinkle factorials everywhere, and somehow a perfect approximation appears. But there’s nothing mystical here. What we are doing, in essence, is performing the world’s smartest version of zooming in.

Assuming that all derivatives of the original function $f$ exist, or in other words that $f$ is smooth, then as $n$ increases the polynomial captures more and more of the true behaviour of $f$. In the limit, the approximation becomes exact (for well-behaved functions, at least):

$$ f(x)=\lim_{n\to\infty} P_n(x). $$

This is wild if you think about it for a moment: infinitely complicated functions like exponentials, trig functions, even logarithms can be rebuilt entirely from the humble powers of $x$.

This equation shows that every smooth function is, in a sense, the limit of its own polynomial approximations. In other words, you can state that

$$ f(x)=f(a)+f'(a)(x-a)+\frac{f''(a)}{2!}(x-a)^2+\cdots=\sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}(x-a)^n. $$

This idea is monumental. Just as in the last post, it means we can replace complicated functions with simpler polynomial ones. Best of all, we know exactly how good our approximation is. The idea results in simple arithmetic formulas that accurately compute values of many transcendental functions, such as the exponential and trigonometric functions.

A Taylor polynomial has a finite number of terms, whereas a Taylor series has infinitely many terms. The Taylor polynomials are the partial sums of the Taylor series. For instance, the Taylor series of $e^x$ around $x = 0$ is

$$ 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots $$

The $4$th-degree Taylor polynomial of $e^x$ about $x = 0$ is

$$ 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!}. $$

So for $x=1$ we obtain

$$ e = 1 + 1 + \frac{1^2}{2!} + \frac{1^3}{3!} + \frac{1^4}{4!} + \cdots = \sum_{n=0}^{\infty} \frac{1}{n!}. $$

Using a Taylor polynomial instead, we can approximate $e$ with our previously found $4$th-degree Taylor polynomial:

$$ e \approx 1 + 1 + \frac{1^2}{2!} + \frac{1^3}{3!} + \frac{1^4}{4!}. $$

However, I should mention some things here. Each extra term improves the approximation around the considered point, but how fast it improves in the area around that point really depends on the function. Here are some animations where the Taylor series is applied to $e^x$, $\sin(x)$ and $\ln(x)$.

This is where things get visually fun: you can literally watch the Taylor series lock onto the true curve as you add terms. The more terms you add, the more accurate the Taylor series becomes around the point it was built for. This can be seen in the following table, which approximates the value of $e$ using the Taylor series for $e^x$.

| \( n \) | \( n \)-th Order Approximation of \( e \) |
|---|---|
| \(0\) | \(1.0\) |
| \(1\) | \(2.0\) |
| \(2\) | \(2.5\) |
| \(3\) | \(2.6666666667\) |
| \(4\) | \(2.7083333333\) |
| \(5\) | \(2.7166666667\) |
| \(6\) | \(2.7180555556\) |
| \(7\) | \(2.7182539683\) |
| \(8\) | \(2.7182787698\) |
| \(9\) | \(2.7182815256\) |
| \(10\) | \(2.7182818011\) |
| \(11\) | \(2.7182818262\) |
| \(12\) | \(2.7182818283\) |
| \(13\) | \(2.7182818285\) |
| \(14\) | \(2.7182818285\) |
| \(15\) | \(2.7182818285\) |
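If you want to reproduce these numbers yourself, a few lines of Python suffice. This is a sketch using only the standard library; the function name `e_partial_sum` is my own.

```python
import math

def e_partial_sum(n):
    """n-th order Taylor approximation of e: the sum of 1/k! for k = 0..n."""
    return sum(1 / math.factorial(k) for k in range(n + 1))

# Print a few rows of the table above.
for n in (0, 1, 2, 5, 10, 15):
    print(n, round(e_partial_sum(n), 10))
```

By $n=15$ the partial sum already agrees with `math.e` to ten decimal places, matching the table.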

When approximating non-polynomial functions with a Taylor polynomial, at some point the approximation breaks down and behaves poorly. As $x$ goes to $\pm \infty$, every non-constant polynomial blows up to $\pm \infty$, while a bounded function like the sine does not. An illustration of this is shown below for the sine function:

This is the mathematical version of using a bicycle to chase a rocket. The bike works beautifully near the launch pad; one kilometer later, it is hopeless. In conclusion, when the function $f$ behaves completely differently for large values of $x$, a fixed-degree Taylor polynomial is guaranteed to stop approximating it properly at some point. If you still want a good approximation of $f$ for large $x$, you may need a very high-order polynomial.

Taylor series are very interesting when you want to approximate $f$ with very high quality near a single point. When you instead need a good approximation over a whole interval, other methods are usually used. Still, just as Riemann sums are fundamental to integrals, the Taylor series is the foundation of approximating complicated functions.

Relation to L’Hôpital

When computing limits, L’Hôpital’s rule and Taylor polynomials are often named in the same breath. Here comes a fun dirty secret of calculus: they are basically the same idea wearing a different coat. Consider a limit of the following form:

$$ \lim_{x\to a}\frac{f(x)}{g(x)}. $$

However, $f$ and $g$ both become $0$ at the point $x=a$, so we get a $0/0$ indeterminate form. L’Hôpital would tell us to differentiate both the numerator and the denominator, but for now let’s just expand both as Taylor polynomials around the point $x=a$:

$$ \lim_{x\to a}\frac{f(x)}{g(x)} = \lim_{x\to a}\frac{f(a)+f'(a)(x-a)+\frac{f''(a)}{2!}(x-a)^2+\cdots} {g(a)+g'(a)(x-a)+\frac{g''(a)}{2!}(x-a)^2+\cdots}. $$

In the case of $f(a)=g(a)=0$ and $f'(a),g'(a)\neq 0$, this just comes down to

$$ \lim_{x\to a}\frac{f(x)}{g(x)} = \lim_{x\to a}\frac{f(a)+f'(a)(x-a)}{g(a)+g'(a)(x-a)} = \lim_{x\to a}\frac{f'(a)(x-a)}{g'(a)(x-a)} = \frac{f'(a)}{g'(a)}. $$

However, this only works when $f'(a),g'(a)\neq 0$, because when $f'(a)=g'(a)=0$ we again get a $0/0$ indeterminate form. Usually we would apply L’Hôpital again, but once more this just comes down to looking at the Taylor polynomial:

$$ \lim_{x\to a}\frac{f(x)}{g(x)} = \lim_{x\to a}\frac{f(a)+f'(a)(x-a)+\frac{f''(a)}{2!}(x-a)^2+\frac{f^{(3)}(a)}{3!}(x-a)^3+\cdots} {g(a)+g'(a)(x-a)+\frac{g''(a)}{2!}(x-a)^2+\frac{g^{(3)}(a)}{3!}(x-a)^3+\cdots}. $$

Then, assuming that $f(a)=g(a)=f'(a)=g'(a)=0$ and $g''(a)\neq 0$, this reduces to:

$$ \lim_{x\to a}\frac{f(x)}{g(x)}=\frac{f''(a)}{g''(a)}. $$

So in essence, as long as you keep hitting a $0/0$ indeterminate form, you either add more terms of the Taylor polynomial or keep applying L’Hôpital’s rule; both amount to the same thing.
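We can check this prediction numerically. For $f(x)=1-\cos x$ and $g(x)=x^2$ we have $f(a)=g(a)=f'(a)=g'(a)=0$ at $a=0$, so the argument above says the limit should be $f''(0)/g''(0)=1/2$. A quick sketch (standard library only):

```python
import math

# Double 0/0 case: f(x) = 1 - cos(x), g(x) = x^2 at a = 0.
# Two L'Hopital steps (or the quadratic Taylor terms) predict the
# limit f''(0)/g''(0) = 1/2.
for x in (1e-1, 1e-2, 1e-3):
    print(x, (1 - math.cos(x)) / x**2)
```

The printed ratios approach $0.5$ as $x$ shrinks, exactly as the Taylor argument predicts. (Don't push $x$ much smaller in floating point, though: the subtraction $1-\cos x$ eventually loses all its significant digits.)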

Cook up your own Taylor series at home

You may ask yourself where on earth Taylor series come from. The short answer is to think of this section as a recipe. First pick a function, chop it into derivatives, season with factorials, and voilà: your very own Taylor series.

Why is this the case? It all starts from this idea: we want to find a power series based at the point $x=a$ of the following form.

$$ f(x)=\sum_{k=0}^{\infty} C_k (x-a)^k = C_0 + C_1 (x-a) + C_2 (x-a)^2 + C_3 (x-a)^3 + \cdots $$

To find the coefficients in this power series, we can just substitute $x=a$ in the equation.

$$ f(x) = \sum_{k=0}^{\infty} C_k (x-a)^k = C_0 + C_1 (x-a) + C_2 (x-a)^2 + C_3 (x-a)^3 + \cdots \quad\Rightarrow\quad f(a) = C_0. $$

Very nice! What do you do when you want the next coefficient? Differentiate and then substitute $x=a$.

\[ f'(x)=\sum_{k=1}^{\infty} C_k k (x-a)^{k-1} = C_1 + 2C_2 (x-a) + 3C_3 (x-a)^2 + 4C_4 (x-a)^3 + \cdots \] \[ \Rightarrow f'(a) = C_1. \]

We can do so repeatedly…

\[ f''(x)=\sum_{k=2}^\infty C_k k (k-1) (x-a)^{k-2} = 2C_2 + 6C_3 (x-a) + 12C_4 (x-a)^2 + \cdots \] \[ \Rightarrow f''(a) = 2C_2. \]
\[ f^{(3)}(x)=\sum_{k=3}^\infty C_k k (k-1)(k-2) (x-a)^{k-3} = 6C_3 + 24C_4 (x-a) + 60C_5 (x-a)^2 + \cdots \] \[ \Rightarrow f^{(3)}(a) = 6C_3. \]

So basically we can conclude the following from these observations:

$$ f^{(k)}(a) = k! \, C_k \quad\Rightarrow\quad C_k = \frac{f^{(k)}(a)}{k!}. $$

This tiny formula is one of the most powerful ideas in calculus: every derivative you take is another piece of the puzzle. We can do this for any $k$, provided that the function is infinitely differentiable and defined at $a$.

So, how can you cook up your own Taylor series? Let’s start with the most common examples. We already discussed one iconic example, namely $f(x) = e^x$. Since $f^{(n)}(x) = e^x$, we have $f^{(n)}(0) = e^0 = 1$. The series is then

$$ e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!} = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots. $$

Another iconic example is the series of $\sin(x)$. The sine function is infinitely differentiable at $0$, and the following holds:

  • $f(x)=\sin x\Rightarrow f(0)=0$
  • $f'(x)=\cos x\Rightarrow f'(0)=1$
  • $f''(x)=-\sin x\Rightarrow f''(0)=0$
  • $f^{(3)}(x)=-\cos x\Rightarrow f^{(3)}(0)=-1$
  • $f^{(4)}(x)=\sin x\Rightarrow f^{(4)}(0)=0$
  • $f^{(5)}(x)=\cos x\Rightarrow f^{(5)}(0)=1$
  • $f^{(6)}(x)=-\sin x\Rightarrow f^{(6)}(0)=0$
  • $f^{(7)}(x)=-\cos x\Rightarrow f^{(7)}(0)=-1$

We note that $f^{(2n)}(0)=0$ and $f^{(2n+1)}(0)=(-1)^n$. Then the Taylor series is

$$ \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots = \sum_{k=0}^\infty (-1)^k \frac{x^{2k+1}}{(2k+1)!}. $$
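This series is easy to put to work. Here is a minimal sketch (the function name `sin_taylor` is my own) that sums the first few terms and compares them with the library sine:

```python
import math

def sin_taylor(x, terms):
    """Sum of the first `terms` terms of the series for sin(x)."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

x = 1.0
for terms in (1, 2, 3, 5):
    print(terms, sin_taylor(x, terms))
print("math.sin(1):", math.sin(x))
```

With just five terms the partial sum already agrees with `math.sin(1)` to several decimal places; the factorials in the denominators make this series converge very fast.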

The exponential function $e^x$ is the poster child of Taylor series: it behaves so nicely that all of its derivatives are again $e^x$ and evaluate to $1$ at $x=0$. The derivatives of sine, on the other hand, act like a moody teenager: zero, one, zero, minus one, repeat.

We can apply the same idea to the Taylor series of the cosine. Just like the sine function, the cosine function is infinitely differentiable at $0$, and the following holds:

  • $f(x)=\cos x\Rightarrow f(0)=1$
  • $f'(x)=-\sin x\Rightarrow f'(0)=0$
  • $f''(x)=-\cos x\Rightarrow f''(0)=-1$
  • $f^{(3)}(x)=\sin x\Rightarrow f^{(3)}(0)=0$
  • $f^{(4)}(x)=\cos x\Rightarrow f^{(4)}(0)=1$
  • $f^{(5)}(x)=-\sin x\Rightarrow f^{(5)}(0)=0$
  • $f^{(6)}(x)=-\cos x\Rightarrow f^{(6)}(0)=-1$

We note that $f^{(2n)}(0)=(-1)^n$ and $f^{(2n+1)}(0)=0$, which results in

$$ \cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots = \sum_{k=0}^\infty (-1)^k \frac{x^{2k}}{(2k)!}. $$

We could also have found this series by just differentiating the Taylor series of $\sin x$.

Our best friend

One last series I want to discuss belongs to the following function.

$$ f(x)=\frac{1}{1-x} $$ So let's consider its derivatives: \[ f'(x)=(1-x)^{-2} \] \[ f''(x)=2(1-x)^{-3} \] \[ f^{(3)}(x)=6(1-x)^{-4} \] \[ \vdots \] \[ f^{(n)}(x)=n!(1-x)^{-(n+1)} \]

Then considering $f^{(n)}(0)=n!$, the Taylor series of this function around $x=0$ is:

$$ \frac{1}{1-x} = 1 + x + x^2 + x^3 + x^4 + \cdots = \sum_{n=0}^\infty x^n. $$

This is actually a weird example, because it lets us sneak up on a division by zero. It is also the Swiss army knife of Taylor series: once you know this one, half the other famous series are just clever rewrites of it.

But the math stays consistent. If we actually substitute $x=1$, then we divide by zero on one side and add an infinite number of ones on the other; both sides blow up together. The reason I show this example is that it is iconic, and a lot of other Taylor series can be derived from it. The famous math YouTuber BlackPenRedPen calls it his best friend when working with Taylor series.

The only catch is that this Taylor series only converges for $|x|<1$; outside that interval the series blows up. With that restriction in mind, we can still find some pretty nice results. Let’s see in the following section what we can do.
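The convergence behaviour of our best friend is easy to see numerically. A small sketch (the function name `geom_partial` is my own):

```python
# Partial sums of the geometric series versus the function 1/(1 - x).
# Inside |x| < 1 the partial sums converge to the function; outside,
# they blow up while 1/(1 - x) stays finite.
def geom_partial(x, n):
    """Sum of x^k for k = 0..n."""
    return sum(x ** k for k in range(n + 1))

print(geom_partial(0.5, 10), 1 / (1 - 0.5))   # partial sum approaches 2
print(geom_partial(2.0, 10), 1 / (1 - 2.0))   # partial sum explodes; the function is -1
```

At $x=0.5$ the partial sums home in on $2$, but at $x=2$ they race off to infinity even though $1/(1-x)=-1$ is perfectly finite there: the series and the function simply part ways outside $|x|<1$.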

Pi is also here!

And now comes one of my favorite party tricks in all of mathematics. Substituting $-x^2$ into our best friend gives, for $|x|<1$:

$$ \frac{1}{1+x^2} = \sum_{n=0}^\infty (-x^2)^n = \sum_{n=0}^\infty (-1)^n x^{2n}. $$

Then by integrating both sides (the integration constant vanishes, since $\arctan 0 = 0$) we obtain

$$ \arctan x = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{2n+1} = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots. $$

Ok, where does this example even come from? It might seem pretty random, but since $\tan(\pi/4)=1$, i.e. $\arctan(1)=\pi/4$, we can approximate the actual value of $\pi$ by substituting $x=1$. The alternating series still converges at this endpoint, so we get

$$ \pi = 4\sum_{n=0}^\infty \frac{(-1)^n}{2n+1} = 4\left(1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots\right). $$

From nothing but powers of $x$, you suddenly pull $\pi$ out of the universe. That’s the power of Taylor series.
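This is the famous Leibniz series, and we can watch it crawl toward $\pi$ with a few lines of Python (a sketch; the function name `leibniz_pi` is my own):

```python
import math

# The Leibniz series: pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...).
# It converges, but painfully slowly: the error after n terms
# is on the order of 1/n.
def leibniz_pi(terms):
    return 4 * sum((-1) ** n / (2 * n + 1) for n in range(terms))

for terms in (10, 1_000, 100_000):
    print(terms, leibniz_pi(terms))
print("math.pi: ", math.pi)
```

Notice how slow this is: even a hundred thousand terms only pin down $\pi$ to about five decimal places. In practice, people compute $\pi$ with much faster-converging series, but none of them are as charming as this one.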

Conclusion

Taylor series turn the chaotic, messy world of functions into something we can tame, study, and compute. They’re the bridge between the wild continuous universe and the simple algebra we can actually handle.

And once you see them at work and realize what they can do, rebuilding exponentials, wave patterns, logarithms, even $\pi$ itself, one thing becomes clear:

Nearly every smooth curve hides a secret polynomial inside it. Taylor series are how we make that secret talk.
