Taylor’s Theorem

Since the first-year calculus textbooks I have been exposed to push the proofs to an appendix, or dismiss them altogether with “this is beyond the scope of the text”, the theory behind Taylor series (and polynomials) has always intrigued me. Recently, I have been reading Walter Rudin’s Principles of Mathematical Analysis, which contains a simple and beautiful proof of this theorem, and I present it to you:

Theorem. Suppose f is a real function on [a,b], n is a positive integer, f^{(n-1)} is continuous on [a,b], and f^{(n)}(t) exists for every t\in (a,b). Let \alpha and \beta be distinct points of [a,b], and define

\displaystyle P(t) =\sum\limits_{k=0}^{n-1}\frac{f^{(k)}(\alpha)}{k!}(t-\alpha)^k

Then there exists a point x between \alpha and \beta such that

\displaystyle f(\beta)=P(\beta)+\frac{f^{(n)}(x)}{n!}(\beta-\alpha)^n 

Proof: Since \alpha\neq\beta, there is a unique real number M such that:

f(\beta)=P(\beta)+M(\beta-\alpha)^n

Our goal is to show that M has the form \frac{f^{(n)}(x)}{n!} for some x between \alpha and \beta. Now set

g(t)=f(t)-P(t)-M(t-\alpha)^n

If we differentiate both sides n times, we get

g^{(n)}(t)=f^{(n)}(t)-n!M
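To see this, note that P^{(n)}\equiv 0, since P is a polynomial of degree at most n-1, while the last term contributes

\displaystyle \frac{d^n}{dt^n}\Big[M(t-\alpha)^n\Big]=M\cdot n(n-1)\cdots 2\cdot 1=n!\,M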

If we can prove that g^{(n)}(x)=0 for some x between \alpha and \beta, then we are done, since this will give us the desired value of M. First observe that P^{(k)}(\alpha)=f^{(k)}(\alpha) for k=0,1,\cdots,n-1. For example, for the case k=0, this is easily seen by

\displaystyle P(\alpha)=f(\alpha)+\sum\limits_{k=1}^{n-1}\frac{f^{(k)}(\alpha)}{k!}(\alpha-\alpha)^k=f(\alpha)

Now, differentiating g(t) k times for each k=0,1,2,\cdots, n-1 and setting t=\alpha yields

g(\alpha)=g^{(1)}(\alpha)  =g^{(2)}(\alpha)=\cdots=g^{(n-1)}(\alpha)=0
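In general, after k\le n-1 differentiations the last term of g still carries a factor of (t-\alpha)^{n-k}, which vanishes at t=\alpha:

\displaystyle g^{(k)}(\alpha)=f^{(k)}(\alpha)-P^{(k)}(\alpha)-M\,\frac{n!}{(n-k)!}(\alpha-\alpha)^{n-k}=0

where the first two terms cancel by the observation above.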

By construction, g(\beta)=0. Since g(\alpha)=0 as well, by the mean value theorem there exists some c_1 between \alpha and \beta such that g'(c_1)=0. Similarly, since g'(\alpha)=0 and g'(c_1)=0, again by the mean value theorem there exists some c_2 between \alpha and c_1 such that g''(c_2)=0. Continuing inductively in this manner, we find some c_n between \alpha and c_{n-1} (and hence between \alpha and \beta) such that g^{(n)}(c_n)=0. In particular, letting x=c_n, we obtain

\displaystyle 0=f^{(n)}(x)-n!M \ \ \Rightarrow \ \ M=\frac{f^{(n)}(x)}{n!}

as desired. ∎

This theorem, known as Taylor’s Theorem, shows that any real function satisfying certain requirements (namely, the existence of higher-order derivatives) can be approximated by a polynomial. Interestingly, the error term is available too. Let’s see how we can use this theorem. For convenience, assume the domain of f is \mathbb{R} (although this is not important). Now, if we let \alpha=0 and \beta=\epsilon>0, we get the Taylor polynomial around the origin. Observe that the closer \epsilon is to 0, the better the approximation, because (\beta-\alpha)^n = \epsilon^n gets smaller and smaller as \epsilon\to 0. Furthermore, assuming \epsilon<1, we see that taking more terms (that is, making n bigger) also shrinks the error term, provided the derivatives f^{(n)} do not grow too fast. This is just the heuristic behind the Taylor series, as we can now let n\to\infty and force the error term to go to zero, but one must be very careful when it comes to taking things to infinity.
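To make this heuristic concrete, here is a small numerical experiment in Python (my own illustrative sketch; the choice of f=\sin is an assumption, not something from Rudin). It approximates \sin(\epsilon) by its Taylor polynomial at \alpha=0 and compares the actual error with the remainder bound \epsilon^n/n!, which applies to sine because every derivative of \sin is bounded by 1 in absolute value:

import math

def taylor_sin(t, n):
    # Taylor polynomial of sin at alpha = 0, with terms of degree 0, ..., n-1.
    # The k-th derivative of sin at 0 cycles through 0, 1, 0, -1.
    cycle = [0.0, 1.0, 0.0, -1.0]
    return sum(cycle[k % 4] / math.factorial(k) * t ** k for k in range(n))

eps = 0.5
for n in (2, 4, 6, 8):
    error = abs(math.sin(eps) - taylor_sin(eps, n))
    bound = eps ** n / math.factorial(n)  # valid since |sin^(n)(x)| <= 1
    print(f"n = {n}: error = {error:.3e}  <=  bound = {bound:.3e}")

Running this shows the error sitting below the bound, with both shrinking rapidly as n grows, exactly as the remainder term \frac{f^{(n)}(x)}{n!}\epsilon^n predicts.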

 
