# Gamma function

If $n \ge 0$ is an integer, then, using integration by parts repeatedly, we see that $\displaystyle n!=\int_0^{\infty} x^ne^{-x}dx$ (see Exercise 2 in this post!). We can now use this integral form of the factorial of a non-negative integer to define the factorial of any real number $x$ which is not a negative integer.

Definition 1. The gamma function $\Gamma: (0,\infty) \longrightarrow (0,\infty)$ is defined by

$\displaystyle \Gamma(x)=\int_0^{\infty}t^{x-1}e^{-t}dt.$

Remark. By definition, $\displaystyle \Gamma(n)=\int_0^{\infty} t^{n-1}e^{-t}dt=(n-1)!$ for all integers $n > 0$ and so $n!=\Gamma(n+1)$ for all integers $n \ge 0.$ So we may define $\displaystyle x! :=\Gamma(x+1)=\int_0^{\infty}t^xe^{-t}dt$ for all real numbers $x > -1$ but, in order to do that, we need to show that $\Gamma$ is well-defined, i.e. $\displaystyle \int_0^{\infty}t^{x-1}e^{-t}dt$ is convergent for all real numbers $x > 0$ (Problem 2).

Example 1. Show that $\displaystyle \Gamma\left(\frac{1}{2}\right)=\sqrt{\pi}.$ In other words, $\displaystyle \left(\frac{-1}{2}\right)!=\sqrt{\pi}.$

Solution. By definition, $\displaystyle \Gamma \left(\frac{1}{2}\right)=\int_0^{\infty}t^{\frac{-1}{2}}e^{-t} dt$ and so the substitution $\sqrt{t}=x$ gives $\displaystyle \Gamma \left(\frac{1}{2}\right)=2\int_0^{\infty}e^{-x^2}dx=\sqrt{\pi}$ (see this post!). $\Box$
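For the skeptical reader, here is a quick numeric sanity check (a Python sketch, not part of the proof): the standard library's `math.gamma` agrees with $\sqrt{\pi},$ and so does a midpoint Riemann sum for $2\int_0^{\infty}e^{-x^2}dx$ truncated at $10$ (the tail beyond that is negligible).

```python
import math

# Γ(1/2) via the standard library's gamma function, against √π.
print(math.gamma(0.5))        # ≈ 1.7724538509...
print(math.sqrt(math.pi))     # ≈ 1.7724538509...

# The substitution √t = x turns Γ(1/2) into 2∫₀^∞ e^{-x²} dx;
# a midpoint Riemann sum over [0, 10] already matches to many digits.
n, T = 100000, 10.0
h = T / n
approx = 2 * sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(n)) * h
print(approx)
```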

Problem 1. Show that $\displaystyle \Gamma(x+1)=x\Gamma(x)$ for all $x > 0.$

Solution. Integration by parts with $u=e^{-t}$ and $t^{x-1}dt=dv$ gives

$\displaystyle \Gamma(x)=\int_0^{\infty}t^{x-1}e^{-t}dt=\frac{1}{x}\int_0^{\infty}t^xe^{-t}dt=\frac{1}{x}\Gamma(x+1). \ \Box$

Example 2. Show that $\displaystyle \Gamma\left(n+\frac{1}{2}\right)=\frac{(2n)!}{4^nn!}\sqrt{\pi}$ for all integers $n \ge 0.$ In other words, $\displaystyle \left(n-\frac{1}{2}\right)!=\frac{(2n)!}{4^nn!}\sqrt{\pi}.$

Solution. By Problem 1 and Example 1, we have

\displaystyle \begin{aligned} \Gamma\left(n+\frac{1}{2}\right)=\left(n-\frac{1}{2}\right)\Gamma\left(n-\frac{1}{2}\right)=\left(n-\frac{1}{2}\right)\left(n-\frac{3}{2}\right)\Gamma\left(n-\frac{3}{2}\right) = \cdots \\ =\left(n-\frac{1}{2}\right)\left(n-\frac{3}{2}\right) \cdots \left(\frac{1}{2}\right)\Gamma\left(\frac{1}{2}\right) \\ =\frac{(2n-1)(2n-3) \cdots 3 \cdot 1}{2^n}\sqrt{\pi} \\=\frac{(2n)!}{4^nn!}\sqrt{\pi}. \ \Box \end{aligned}
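The closed form is easy to check numerically for small $n$ (a Python sketch, not part of the proof):

```python
import math

# Check Γ(n + 1/2) = (2n)!·√π / (4^n·n!) for n = 0, 1, ..., 5.
for n in range(6):
    lhs = math.gamma(n + 0.5)
    rhs = math.factorial(2 * n) / (4 ** n * math.factorial(n)) * math.sqrt(math.pi)
    print(n, lhs, rhs)
```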

Problem 2. Show that $\Gamma$ is well-defined, i.e. $\displaystyle \int_0^{\infty}t^{x-1}e^{-t}dt$ is convergent for all real numbers $x >0.$

Solution. By Problem 1, we have $\displaystyle \Gamma(x)=\frac{\Gamma(x+1)}{x}$ and so we only need to show that $\displaystyle \Gamma(x+1)=\int_0^{\infty}t^xe^{-t}dt$ is convergent for $x >0.$ We have

$\displaystyle \int_0^{\infty}t^xe^{-t}dt=\int_0^1 t^xe^{-t}dt+\int_1^{\infty}t^xe^{-t}dt. \ \ \ \ \ \ \ \ \ \ \ \ \ \ (*)$

The integral $\displaystyle \int_0^1 t^xe^{-t}dt, \ x >0,$ is a proper integral and so it is convergent. Now choose an integer $n > 0$ such that $0 < x \le n.$ Then

$\displaystyle \int_1^{\infty}t^xe^{-t}dt \le \int_1^{\infty}t^ne^{-t}dt < \int_0^{\infty}t^ne^{-t}dt=n!.$

So, by the comparison test, $\displaystyle \int_1^{\infty}t^xe^{-t}dt$ is convergent. Thus, by $\displaystyle (*), \ \int_0^{\infty}t^xe^{-t}dt$ is convergent too. $\Box$

Definition 2. We now use Problem 1 to extend the domain of $\Gamma$ from $(0,\infty)$ to the set $\displaystyle \{x \in \mathbb{R}: \ x \ne 0, -1,-2, -3, \cdots \}.$ If $x > -1,$ then $x+1 > 0$ and so $\Gamma(x+1)$ is defined; we then define $\displaystyle \Gamma(x):=\frac{\Gamma(x+1)}{x}.$ If $x > -2,$ then $x+1 > -1$ and we just defined $\Gamma(x+1);$ we then again define $\displaystyle \Gamma(x):=\frac{\Gamma(x+1)}{x},$ etc. Now if $x$ is a real number which is not a negative integer, we may define $x!:=\Gamma(x+1).$

Example 3. By the above definition and Example 1, $\displaystyle \Gamma\left(\frac{-1}{2}\right)=-2\Gamma\left(\frac{1}{2}\right)=-2\sqrt{\pi}.$

Exercise. Show that if $x \le 0,$ then $\displaystyle \int_0^{\infty}t^{x-1}e^{-t}dt$ is divergent.
Hint. $\displaystyle \int_0^{\infty}t^{x-1}e^{-t}dt \ge \int_0^1 t^{-1}e^{-t}dt \ge \int_0^1 t^{-1}(1-t) \ dt = \infty.$
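The divergence in the hint can also be seen numerically: truncating the worst case $x=0$ at $\epsilon$ and shrinking $\epsilon,$ the values grow without bound, roughly like $\ln(1/\epsilon)$ (a Python sketch; the helper `truncated` is mine, not from the post).

```python
import math

# For x = 0 the integrand behaves like 1/t near 0: the truncated integral
# ∫_ε^1 t^{-1} e^{-t} dt grows roughly like ln(1/ε) as ε → 0.
def truncated(eps, n=200000):
    h = (1.0 - eps) / n
    return sum(math.exp(-t) / t for t in (eps + (i + 0.5) * h for i in range(n))) * h

for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    print(eps, truncated(eps))   # each decade of ε adds about ln(10) ≈ 2.3
```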

# Tannery’s theorem

This post goes a little more than epsilon beyond the standard one-variable calculus and so it shouldn’t be in this blog and I should resist posting things like this in the future.

Here I proved that $\displaystyle \lim_{n\to\infty} \frac{1}{n^n} \sum_{k=1}^n k^n=\frac{e}{e-1}.$ My solution was nice and elementary but not very natural. A more natural solution would begin with the following line

$\displaystyle \frac{1}{n^n} \sum_{k=1}^n k^n=\sum_{k=1}^n \left(\frac{k}{n}\right)^n= \sum_{k=0}^n \left(1-\frac{k}{n}\right)^n.$

Then it would continue as follows: but $\displaystyle \lim_{n\to\infty} \left(1-\frac{k}{n}\right)^n=e^{-k}$ and so if we can write $\displaystyle \lim_{n\to\infty} \sum_{k=0}^n \left(1-\frac{k}{n}\right)^n=\sum_{k=0}^{\infty} \lim_{n\to\infty} \left(1-\frac{k}{n}\right)^n,$ then the answer would be $\displaystyle \sum_{k=0}^{\infty}e^{-k}=\frac{1}{1-e^{-1}}=\frac{e}{e-1}.$
Right, but only if we can write it that way. In fact, it turns out that we can!
In general, we have the following result.

Problem (Tannery’s theorem). For an integer $k \ge 0,$ let $f_k$ be a function of the integer variable $n.$ Suppose that there exist sequences $\{L_k\}$ and $\{M_k\},$ where each $M_k$ is independent of $n,$ such that $\displaystyle \lim_{n\to\infty} f_k(n)=L_k$ and $|f_k(n)| \le M_k,$ for all $k, n.$ Show that if $\displaystyle \sum_{k=0}^{\infty} M_k$ is convergent, then

$\displaystyle \lim_{n\to\infty} \sum_{k=0}^n f_k(n)=\sum_{k=0}^{\infty} \lim_{n\to\infty}f_k(n)=\sum_{k=0}^{\infty} L_k.$

Solution. Let $\epsilon > 0$ be given. Since $\displaystyle \sum_{k=0}^{\infty} M_k$ is convergent, $\displaystyle \lim_{m\to\infty} \sum_{k=m}^{\infty}M_k=0$ and so

$\displaystyle \sum_{k=m}^{\infty} M_k < \epsilon \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)$

if $m$ is large enough. Note that $m$ doesn’t depend on $n$ because $M_k$ doesn’t depend on $n.$ Thus if $m$ is large enough and $n \ge m,$ then

$\displaystyle \left|\sum_{k=m}^n f_k(n) \right| \le \sum_{k=m}^n |f_k(n)| \le \sum_{k=m}^n M_k \le \sum_{k=m}^{\infty} M_k < \epsilon. \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2)$

Also, letting $n\to\infty$ in $|f_k(n)| \le M_k$ gives $\displaystyle |L_k| =\lim_{n\to\infty} |f_k(n)| \le M_k$ and so, by $(1),$ if $m$ is large enough, then

$\displaystyle \left|\sum_{k=m}^{\infty} L_k \right| \le \sum_{k=m}^{\infty}|L_k| \le \sum_{k=m}^{\infty}M_k < \epsilon. \ \ \ \ \ \ \ \ \ \ \ \ \ \ (3)$

Therefore, if $m$ is large enough and $n \ge m,$ then $(2)$ and $(3)$ give

\displaystyle \begin{aligned} \left|\sum_{k=0}^nf_k(n)-\sum_{k=0}^{\infty}L_k\right|=\left|\sum_{k=0}^{m-1}(f_k(n)-L_k)+\sum_{k=m}^n f_k(n) - \sum_{k=m}^{\infty}L_k\right| \\ \le \sum_{k=0}^{m-1}|f_k(n)-L_k|+ \left|\sum_{k=m}^n f_k(n) \right| + \left|\sum_{k=m}^{\infty} L_k \right| \\ \le \sum_{k=0}^{m-1}|f_k(n)-L_k|+ 2\epsilon \end{aligned} \ \ \ \ \ \ \ \ \ \ \ \ \ (4)

Finally, since for each $\displaystyle k, \ \lim_{n\to\infty} f_k(n)=L_k$ and $m$ doesn’t depend on $n,$ we have $\displaystyle \lim_{n\to\infty} \sum_{k=0}^{m-1}|f_k(n)-L_k|=0$ and so $\displaystyle \sum_{k=0}^{m-1}|f_k(n)-L_k| < \epsilon$ provided that $n$ is large enough. Hence, by $(4),$

$\displaystyle \left|\sum_{k=0}^nf_k(n)-\sum_{k=0}^{\infty}L_k\right| < 3\epsilon,$

for all $n$ that are large enough. So $\displaystyle \lim_{n\to\infty} \sum_{k=0}^n f_k(n)=\sum_{k=0}^{\infty} L_k,$ by the definition of limit. $\Box$

Example 1. Using Tannery’s theorem, show that $\displaystyle \lim_{n\to\infty} \frac{1}{n^n} \sum_{k=1}^n k^n=\frac{e}{e-1}.$

Solution. We have

$\displaystyle \frac{1}{n^n} \sum_{k=1}^n k^n=\sum_{k=1}^n\left(\frac{k}{n}\right)^n=\sum_{k=0}^n \left(\frac{n-k}{n}\right)^n=\sum_{k=0}^n \left(1-\frac{k}{n}\right)^n.$

Let $\displaystyle f_k(n):=\left(1-\frac{k}{n}\right)^n.$ Then $\displaystyle L_k:=\lim_{n\to\infty}f_k(n)=e^{-k}.$ Also, since $e^x \ge 1+x$ for all real numbers $x,$ we also have $\displaystyle |f_k(n)|=f_k(n)=\left(1-\frac{k}{n}\right)^n \le e^{-k}:=M_k.$ Clearly $M_k$ doesn’t depend on $n$ and the series $\displaystyle \sum_{k=0}^{\infty}M_k=\sum_{k=0}^{\infty}e^{-k}$ is convergent. Thus, by Tannery’s theorem,

$\displaystyle \lim_{n\to\infty} \frac{1}{n^n} \sum_{k=1}^n k^n=\lim_{n\to\infty} \sum_{k=0}^nf_k(n)=\sum_{k=0}^{\infty} L_k = \sum_{k=0}^{\infty} e^{-k}=\frac{e}{e-1}. \ \Box$
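A quick numeric check of this limit (a Python sketch, not part of the solution): the partial sums approach $\frac{e}{e-1} \approx 1.58198$ at a rate of roughly $O(1/n).$

```python
import math

# (1/n^n) Σ_{k=1}^n k^n, computed as Σ (k/n)^n to avoid huge integers.
def s(n):
    return sum((k / n) ** n for k in range(1, n + 1))

target = math.e / (math.e - 1)
for n in (10, 100, 1000):
    print(n, s(n), target)
```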

Example 2. Let $a \in \mathbb{R}$. Using Tannery’s theorem, show that $\displaystyle \lim_{n\to\infty} \left(1+\frac{a}{n}\right)^n=\sum_{k=0}^{\infty} \frac{a^k}{k!}.$

Solution. By the binomial theorem, $\displaystyle \left(1+\frac{a}{n}\right)^n=\sum_{k=0}^n \binom{n}{k}\frac{a^k}{n^k}.$ Let $\displaystyle f_k(n):=\binom{n}{k}\frac{a^k}{n^k}.$ Then

$\displaystyle f_k(n)=\frac{n!}{n^k(n-k)!} \cdot \frac{a^k}{k!}=\frac{n(n-1) \cdots (n-k+1)}{n^k} \cdot \frac{a^k}{k!}$

and so $\displaystyle L_k:=\lim_{n\to\infty}f_k(n)=\frac{a^k}{k!}.$ Also $\displaystyle |f_k(n)| \le \frac{|a|^k}{k!}:=M_k$ because $\displaystyle \frac{n(n-1) \cdots (n-k+1)}{n^k} \le 1.$ Clearly $M_k$ doesn’t depend on $n$ and the series $\displaystyle \sum_{k=0}^{\infty}M_k=\sum_{k=0}^{\infty}\frac{|a|^k}{k!}$ is convergent (you may, for example, use the ratio test). Thus, by Tannery’s theorem,

$\displaystyle \lim_{n\to\infty} \left(1+\frac{a}{n}\right)^n=\lim_{n\to\infty} \sum_{k=0}^nf_k(n)=\sum_{k=0}^{\infty} L_k =\sum_{k=0}^{\infty} \frac{a^k}{k!}. \ \Box$
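Numerically, both sides indeed agree with $e^a$ (a Python sketch; the truncated-series helper `exp_series` is mine, not from the post):

```python
import math

# Truncated series Σ_{k<terms} a^k/k!; 60 terms is far past machine
# precision for the moderate values of a used below.
def exp_series(a, terms=60):
    return math.fsum(a ** k / math.factorial(k) for k in range(terms))

# (1 + a/n)^n for large n against the series; both should be close to e^a.
for a in (-1.5, 0.3, 2.0):
    n = 10 ** 6
    print(a, (1 + a / n) ** n, exp_series(a))
```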

Exercise (a stronger version of Tannery’s theorem). Show that Tannery’s theorem still holds if, in the theorem, we replace $\displaystyle \sum_{k=0}^n f_k(n)$ with $\displaystyle \sum_{k=0}^{g(n)}f_k(n),$ where $g: \mathbb{N} \longrightarrow \mathbb{N}$ is any strictly increasing function of $n$ (so that $\displaystyle \lim_{n\to\infty}g(n)=\infty$).

# Irrational functions

All polynomials in this post are in $\mathbb{R}[x],$ i.e. their coefficients are real numbers.
A real-valued function $f$ is said to be rational on an interval $(a,b)$ if there exist polynomials $p(x),q(x)$ such that $\displaystyle f(x)=\frac{p(x)}{q(x)}$ for all $x \in (a,b);$ otherwise $f$ is called irrational.
Two polynomials $p(x),q(x)$ are called coprime if they have no common non-constant factor in $\mathbb{R}[x].$ Clearly every rational function can be written as $\displaystyle \frac{p(x)}{q(x)}$ for some coprime polynomials $p(x),q(x).$ Now let’s see some examples of irrational functions.

Example 1. Let $n \ge 1$ be an integer and let $p(x)$ be a polynomial. Show that over an interval $(a,b), \ \sqrt[n]{p(x)}$ is rational if and only if $p(x)=q(x)^n$ for some polynomial $q(x).$

Solution. Suppose that $\displaystyle \sqrt[n]{p(x)}=\frac{u(x)}{v(x)}$ for some coprime polynomials $u(x),v(x).$ So

$p(x)v(x)^n=u(x)^n \ \ \ \ \ \ \ \ \ \ \ \ \ (\star)$

and hence, since $u(x)^n, v(x)^n$ are coprime, $u(x)^n$ must divide $p(x).$ Let $p(x)=c(x)u(x)^n$ for some polynomial $c(x).$ Then $(\star)$ gives $c(x)v(x)^n=1$ and so $c(x)$ is a constant. $\Box$

Example 2. Show that the function $y=\ln x$ is irrational on its domain.

Solution. Suppose that there exist polynomials $p(x),q(x)$ such that $\displaystyle \ln x = \frac{p(x)}{q(x)}$ for all $x > 0.$ Since $\displaystyle \lim_{x\to\infty} \ln x = \infty,$ we must have $\deg p(x) > \deg q(x)$ and this is impossible because then

$\displaystyle 0=\lim_{x\to\infty} \frac{\ln x}{x} = \lim_{x\to\infty} \frac{p(x)}{xq(x)} \neq 0. \ \Box$

Example 3. Show that the function $y=e^x$ is irrational on its domain.

Solution. Suppose that there exist polynomials $p(x),q(x)$ such that $\displaystyle e^x = \frac{p(x)}{q(x)}$ for all $x \in \mathbb{R}.$ Then $p(x)e^{-x}=q(x),$ which is impossible because $\displaystyle \lim_{x\to+\infty}p(x)e^{-x}=0$ but $\displaystyle \lim_{x\to+\infty}q(x) \ne 0. \ \Box$

Example 4. Show that the function $y=\sin x$ is irrational on its domain.

Solution. The limit of a rational function at infinity always exists (it’s either infinity or a finite number) but $\displaystyle \lim_{x\to\infty} \sin x$ does not exist because, for example, for integers $n$ we have $\displaystyle \lim_{n\to\infty} \sin(n\pi)=0$ but $\displaystyle \lim_{n\to\infty} \sin \left(2n\pi + \frac{\pi}{2}\right)=1. \ \Box$

It is obvious that $\sin^{-1}x$ and $\cos^{-1}x$ are irrational because their derivatives are irrational (why?).

Example 5. Show that the function $y=\tan^{-1}x$ is irrational on its domain.

Solution. Suppose that there exist polynomials $p(x),q(x)$ such that $\displaystyle \tan^{-1}x = \frac{p(x)}{q(x)}$ for all $x \in \mathbb{R}.$ Then, since $\displaystyle \lim_{x\to+\infty}\tan^{-1}x=\frac{\pi}{2} < \infty,$ we must have $\deg p(x)=\deg q(x).$ Then

$\displaystyle \frac{\pi}{2}=\lim_{x\to+\infty} \tan^{-1}x=\lim_{x\to+\infty} \frac{p(x)}{q(x)}= \lim_{x\to-\infty} \frac{p(x)}{q(x)}=\lim_{x\to-\infty} \tan^{-1}x=\frac{-\pi}{2},$

which is nonsense. $\Box$

Exercise 1. Show that in fact $y=\ln x$ is irrational on any interval $(a,b) \subseteq (0,\infty).$
Hint. Suppose that there exist coprime polynomials $p(x),q(x)$ such that $\displaystyle \ln x = \frac{p(x)}{q(x)}$ for all $x \in (a,b).$ Then differentiating gives

$\displaystyle q(x)^2=xp'(x)q(x)-xp(x)q'(x).$

So $x$ divides $q(x).$ Let $n \ge 1$ be the largest integer such that $q(x)=x^nv(x)$ for some polynomial $v(x).$ Show that $x^{n+1}$ divides $q(x),$ contradicting maximality of $n.$

Exercise 2. Show that in fact $y=e^x$ is irrational on any interval $(a,b) \subseteq \mathbb{R}.$
Hint. Suppose that there exist coprime polynomials $p(x),q(x)$ such that $\displaystyle e^x = \frac{p(x)}{q(x)}$ for all $x \in (a,b).$ Then differentiating gives

$\displaystyle p(x)(q(x)+q'(x))=p'(x)q(x).$

So $p(x)$ divides $p'(x)q(x)$ and hence, since $p(x), q(x)$ have no common factor, $p(x)$ must divide $p'(x).$

Exercise 3. Show that in fact $y=\sin x$ is irrational on any interval $(a,b) \subseteq \mathbb{R}.$
Hint. Suppose that there exist coprime polynomials $p(x),q(x)$ such that $\displaystyle \sin x = \frac{p(x)}{q(x)}$ for all $x \in (a,b).$ Then $\displaystyle \cos x =\frac{p'(x)q(x)-p(x)q'(x)}{q(x)^2}$ and so $\sin^2x+\cos^2x=1$ gives

$\displaystyle (p'(x)q(x)-p(x)q'(x))^2=q(x)^2(q(x)^2-p(x)^2).$

So $q(x)$ divides $p'(x)q(x)-p(x)q'(x)$ and thus $q(x)$ divides $p(x)q'(x)$ implying that $q(x)$ divides $q'(x).$

Exercise 4. Show that in fact $y=\tan^{-1}x$ is irrational on any interval $(a,b) \subseteq \mathbb{R}.$
Hint. Suppose that there exist coprime polynomials $p(x),q(x)$ such that $\displaystyle \tan^{-1}x = \frac{p(x)}{q(x)}$ for all $x \in (a,b).$ Then differentiating gives

$\displaystyle q(x)^2=(1+x^2)(p'(x)q(x)-p(x)q'(x)).$

So $1+x^2$ divides $q(x)^2$ and hence it divides $q(x)$ because $1+x^2$ cannot be factored in $\mathbb{R}[x].$ Now let $n \ge 1$ be the largest integer such that $q(x)=(1+x^2)^nv(x)$ for some polynomial $v(x).$ Show that $(1+x^2)^{n+1}$ divides $q(x),$ contradicting maximality of $n.$

Exercise 5. Show that over any interval $(a,b) \subseteq (-1,1),$ the function $\displaystyle y=\frac{1}{\sqrt{1-x^2}}$ is irrational and conclude that $\sin^{-1}x, \ \cos^{-1}x$ are irrational on any interval in their domains.

# Limit of integrals (10)

Problem. Show that if $f:[-1,1] \longrightarrow \mathbb{R}$ is continuous, then

$\displaystyle \lim_{n\to\infty} \frac{\int_{-1}^1 f(x)(1-x^2)^n dx}{\int_{-1}^1 (1-x^2)^n dx}=f(0).$

Solution. Let $g(x):=f(x)-f(0).$ Then $g$ is continuous on $[-1,1], \ g(0)=0$ and

$\displaystyle \frac{\int_{-1}^1 f(x)(1-x^2)^n dx}{\int_{-1}^1 (1-x^2)^n dx}=f(0)+\frac{\int_{-1}^1 g(x)(1-x^2)^n dx}{\int_{-1}^1 (1-x^2)^n dx}.$

So we only need to show that $\displaystyle \lim_{n\to\infty} \frac{\int_{-1}^1 g(x)(1-x^2)^n dx}{\int_{-1}^1 (1-x^2)^n dx}=0.$
Let $\epsilon > 0.$ Since $g$ is continuous, $\displaystyle \lim_{x\to0} g(x)=g(0)=0$ and so there exists $0 < \delta < 1$ such that $|g(x)| < \epsilon$ whenever $|x| < \delta.$ Also, by the extreme value theorem, there exists a real number $M$ such that $|g(x)| \le M$ for all $x \in [-1,1].$ Thus

\displaystyle \begin{aligned} \left| \frac{\int_{-1}^1 g(x)(1-x^2)^n dx}{\int_{-1}^1 (1-x^2)^n dx}\right|= \left|\frac{\int_{-1}^{-\delta} g(x)(1-x^2)^n dx + \int_{-\delta}^{\delta}g(x)(1-x^2)^n dx + \int_{\delta}^1g(x)(1-x^2)^n dx }{\int_{-1}^1 (1-x^2)^n dx}\right| \\ \le \frac{\int_{-1}^{-\delta} |g(x)|(1-x^2)^n dx}{\int_{-1}^1(1-x^2)^n dx}+ \frac{\int_{-\delta}^{\delta} |g(x)|(1-x^2)^n dx}{\int_{-1}^1(1-x^2)^n dx}+ \frac{\int_{\delta}^1 |g(x)|(1-x^2)^n dx}{\int_{-1}^1(1-x^2)^n dx} \\ \le \frac{M\int_{-1}^{-\delta} (1-x^2)^n dx}{\int_{-1}^1(1-x^2)^n dx}+ \frac{\epsilon \int_{-\delta}^{\delta}(1-x^2)^n dx}{\int_{-1}^1(1-x^2)^n dx} + \frac{M\int_{\delta}^1 (1-x^2)^n dx}{\int_{-1}^1(1-x^2)^n dx} \\ \le \epsilon + \frac{M\int_{-1}^{-\delta} (1-x^2)^n dx}{\int_{-1}^1(1-x^2)^n dx} +\frac{M\int_{\delta}^1 (1-x^2)^n dx}{\int_{-1}^1(1-x^2)^n dx} \\ = \epsilon + \frac{M\int_{\delta}^1 (1-x^2)^n dx}{\int_0^1(1-x^2)^n dx}. \end{aligned} \ \ \ \ \ \ \ \ \ \ \ \ (1)

Let $1-x^2=(1-\delta^2)(1-t^2).$ Then

\displaystyle \begin{aligned} \int_{\delta}^1 (1-x^2)^n dx=(1-\delta^2)^{n+1}\int_0^1 \frac{t(1-t^2)^n dt}{\sqrt{(1-\delta^2)t^2+\delta^2}} \le \frac{(1-\delta^2)^{n+1}}{\delta} \int_0^1(1-t^2)^ndt \end{aligned}

and so

$\displaystyle \frac{\int_{\delta}^1 (1-x^2)^n dx}{\int_0^1(1-x^2)^n dx} \le \frac{(1-\delta^2)^{n+1}}{\delta}. \ \ \ \ \ \ \ \ \ \ \ \ (2)$

Now $(1)$ and $(2)$ together give

$\displaystyle \left| \frac{\int_{-1}^1 g(x)(1-x^2)^n dx}{\int_{-1}^1 (1-x^2)^n dx}\right| \le \epsilon + \frac{M(1-\delta^2)^{n+1}}{\delta}. \ \ \ \ \ \ \ \ \ \ \ \ \ \ (3)$

But $\displaystyle \lim_{n\to\infty}(1-\delta^2)^{n+1}=0,$ because $0 < 1-\delta^2 < 1,$ and hence there exists an integer $N > 0$ such that $\displaystyle \frac{M(1-\delta^2)^{n+1}}{\delta} < \epsilon$ for $n \ge N.$
Therefore, by $\displaystyle (3), \ \left|\frac{\int_{-1}^1 g(x)(1-x^2)^n dx}{\int_{-1}^1 (1-x^2)^n dx}\right| \le 2\epsilon$ for $n \ge N$ and so $\displaystyle \lim_{n\to\infty} \frac{\int_{-1}^1 g(x)(1-x^2)^n dx}{\int_{-1}^1 (1-x^2)^n dx}=0. \ \Box$
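The concentration of the kernel $(1-x^2)^n$ at $0$ is easy to watch numerically (a Python sketch with midpoint sums; the test function $f=\cos$ is my choice, not from the post):

```python
import math

# Midpoint-rule check that ∫_{-1}^1 f(x)(1-x²)^n dx / ∫_{-1}^1 (1-x²)^n dx → f(0);
# here f = cos, so the ratio should approach cos(0) = 1 as n grows.
def ratio(f, n, m=20000):
    h = 2.0 / m
    xs = [-1.0 + (i + 0.5) * h for i in range(m)]
    num = sum(f(x) * (1.0 - x * x) ** n for x in xs)
    den = sum((1.0 - x * x) ** n for x in xs)
    return num / den

for n in (10, 100, 1000):
    print(n, ratio(math.cos, n))
```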

Example. Show that if $f:[-1,1] \longrightarrow \mathbb{R}$ is continuous, then

$\displaystyle \lim_{n\to\infty} \sqrt{n}\int_{-1}^1 f(x)(1-x^2)^ndx=\sqrt{\pi}f(0).$

Solution. The substitution $x=\cos \theta$ and the first part of the problem in this post give

$\displaystyle \int_{-1}^1(1-x^2)^ndx=2\int_0^1(1-x^2)^ndx=2\int_0^{\frac{\pi}{2}} \sin^{2n+1}\theta \ d\theta=\frac{2}{(2n+1)a_n},$

where $\displaystyle a_n=\frac{\binom{2n}{n}}{4^n}.$ So, by the above problem,

$\displaystyle \lim_{n\to\infty}\frac{(2n+1)a_n}{2}\int_{-1}^1f(x)(1-x^2)^ndx=f(0). \ \ \ \ \ \ \ \ \ \ \ \ (\star)$

But we know from Wallis’ formula that $\displaystyle \lim_{n\to\infty}\sqrt{n}a_n=\frac{1}{\sqrt{\pi}}$ and thus, by $(\star),$

\displaystyle \begin{aligned} f(0)=\lim_{n\to\infty}\frac{(2n+1)\sqrt{n}a_n}{2n} \sqrt{n}\int_{-1}^1f(x)(1-x^2)^ndx =\frac{1}{\sqrt{\pi}}\lim_{n\to\infty} \sqrt{n}\int_{-1}^1f(x)(1-x^2)^ndx. \ \Box \end{aligned}
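A numeric check of the scaled limit (a Python sketch; $f(x)=e^x$ with $f(0)=1$ is my choice of test function):

```python
import math

# Check √n ∫_{-1}^1 f(x)(1-x²)^n dx → √π·f(0), here with f(x) = e^x.
def scaled(n, m=20000):
    h = 2.0 / m
    total = sum(
        math.exp(x) * (1.0 - x * x) ** n
        for x in (-1.0 + (i + 0.5) * h for i in range(m))
    )
    return math.sqrt(n) * total * h

for n in (100, 1000, 10000):
    print(n, scaled(n), math.sqrt(math.pi))
```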

# Just another AMM problem

Problem (American Mathematical Monthly, 2017). Let $k \ge 0$ be an integer and let $\displaystyle e_n:=e-1-\frac{1}{1!}-\frac{1}{2!}- \cdots - \frac{1}{n!}.$ Consider the power series

$\displaystyle f_k(x):=\sum_{n=k}^{\infty} \binom{n}{k}e_nx^n.$

i) Find the interval of convergence of $f_k.$
ii) Find a closed-form formula for $f_k.$

Solution. i) Let $R$ be the radius of convergence of $f_k.$ I showed here that $\displaystyle e_n=\frac{e^{c_n}}{(n+1)!}$ for some sequence $\{c_n\}$ with $\displaystyle \lim_{n\to\infty}c_n=0.$ Thus

\displaystyle \begin{aligned} \frac{1}{R}=\lim_{n\to\infty} \frac{\binom{n+1}{k}e_{n+1}}{\binom{n}{k}e_n}=\lim_{n\to\infty}\frac{n+1}{n+1-k}\left(1-\frac{1}{(n+1)!e_n}\right)=\lim_{n\to\infty} (1-e^{-c_n})=0 \end{aligned}

and so the interval of convergence is $(-\infty,+\infty).$

ii) I first show that

$\displaystyle f_k(x)=\frac{x^k}{k!}f_0^{(k)}(x), \ \ \ \ \ \ \ \ \ \ \ \ \ (1)$

where $f_0^{(k)}(x)$ is the $k$-th derivative of $\displaystyle f_0(x)=\sum_{n=0}^{\infty}e_nx^n.$ Well, we have

$\displaystyle f_0(x)=\sum_{n=0}^{\infty}e_nx^n=\sum_{n=0}^{k-1}e_nx^n+\sum_{n=k}^{\infty}e_nx^n$

and hence

\displaystyle \begin{aligned} f_0^{(k)}(x)=\left(\sum_{n=k}^{\infty}e_nx^n\right)^{(k)}=\sum_{n=k}^{\infty}n(n-1) \cdots (n-k+1)e_nx^{n-k}=k!x^{-k}\sum_{n=k}^{\infty}\binom{n}{k}e_nx^n=k!x^{-k}f_k(x), \end{aligned}

which proves $(1).$ Now we find a closed-form formula for $f_0.$ We have

$\displaystyle e_n=e-1-\frac{1}{1!}-\frac{1}{2!}- \cdots - \frac{1}{n!}=\sum_{m=n+1}^{\infty}\frac{1}{m!}$

and so

\displaystyle \begin{aligned} f_0(x)=\sum_{n=0}^{\infty}e_nx^n=\sum_{n=0}^{\infty} \sum_{m=n+1}^{\infty} \frac{x^n}{m!}=\sum_{m=1}^{\infty} \frac{1}{m!}\sum_{n=0}^{m-1}x^n=\sum_{m=1}^{\infty}\frac{x^m-1}{m!(x-1)} \\ =\frac{1}{x-1}\left(\sum_{m=1}^{\infty} \frac{x^m}{m!}-\sum_{m=1}^{\infty} \frac{1}{m!}\right)=\frac{1}{x-1}(e^x-1-(e-1))=\frac{e^x-e}{x-1}. \end{aligned}

Therefore, by $(1),$

$\displaystyle f_k(x)=\frac{x^k}{k!}\left(\frac{e^x-e}{x-1}\right)^{(k)}. \ \ \ \ \ \ \ \ \ \ \ \ \ (2)$

Finally, the Leibniz rule for the $k$-th derivative of the product of two functions (see Exercise 2 in this post) gives

\displaystyle \begin{aligned} \left(\frac{e^x-e}{x-1}\right)^{(k)}=\sum_{i=0}^k \binom{k}{i}(e^x-e)^{(i)}\left(\frac{1}{x-1}\right)^{(k-i)}=(e^x-e)\left(\frac{1}{x-1}\right)^{(k)}+e^x\sum_{i=1}^k \binom{k}{i}\left(\frac{1}{x-1}\right)^{(k-i)} \\ =-e\left(\frac{1}{x-1}\right)^{(k)}+e^x\sum_{i=0}^k \binom{k}{i}\left(\frac{1}{x-1}\right)^{(k-i)}=\frac{(-1)^{k+1}k!e}{(x-1)^{k+1}}+e^x \sum_{i=0}^k \binom{k}{i} \frac{(-1)^{k-i}(k-i)!}{(x-1)^{k-i+1}} \\ = \frac{k!e}{(1-x)^{k+1}} -\frac{k!e^x}{(1-x)^{k+1}} \sum_{i=0}^k \frac{(1-x)^i}{i!}=\frac{k!e^x}{(1-x)^{k+1}} \left(e^{1-x}-\sum_{i=0}^k \frac{(1-x)^i}{i!}\right). \end{aligned}

So, by $(2),$ we have

$\displaystyle f_k(x)=\frac{x^ke^x}{(1-x)^{k+1}}\left(e^{1-x}-\sum_{i=0}^k \frac{(1-x)^i}{i!}\right). \ \Box$

Example. Let $e_n$ be as defined in the above problem and let $k \ge 0$ be an integer. Evaluate $\displaystyle \sum_{n=k}^{\infty}\binom{n}{k}e_n.$

Solution. So we want to evaluate $f_k(1),$ where $f_k$ is the power series defined in the problem. Using the closed-form of $f_k(x),$ we have

\displaystyle \begin{aligned} f_k(x)= \frac{x^ke^x}{(1-x)^{k+1}}\left(e^{1-x}-\sum_{i=0}^k \frac{(1-x)^i}{i!}\right)= \frac{x^ke^x}{(1-x)^{k+1}} \sum_{i=k+1}^{\infty} \frac{(1-x)^i}{i!}=x^ke^x \left(\frac{1}{(k+1)!} + \sum_{i=1}^{\infty} \frac{(1-x)^i}{(k+i+1)!}\right). \end{aligned}

So $\displaystyle f_k(1)=\frac{e}{(k+1)!}. \ \Box$
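Partial sums confirm this value quickly (a Python sketch; the tail-form helper `e_tail` is mine, chosen to avoid the catastrophic cancellation in $e-1-\frac{1}{1!}-\cdots$):

```python
import math

# e_n = Σ_{m>n} 1/m!, computed directly from the tail to avoid cancellation.
def e_tail(n, terms=40):
    return math.fsum(1.0 / math.factorial(m) for m in range(n + 1, n + 1 + terms))

# Partial sums of Σ_{n≥k} C(n,k)·e_n against the closed form e/(k+1)!.
for k in (0, 1, 2, 3):
    partial = math.fsum(math.comb(n, k) * e_tail(n) for n in range(k, k + 60))
    print(k, partial, math.e / math.factorial(k + 1))
```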

# Limit of integrals (9)

Problem. Consider the sequence $\displaystyle I_n:= \int_0^1 \sqrt[n]{x^n+(1-x)^n} \ dx, \ n \ge 1.$ Find $\displaystyle \lim_{n\to\infty} I_n.$

Solution. The substitution $1-x=t$ gives  $\displaystyle \int_0^{\frac{1}{2}} \sqrt[n]{x^n+(1-x)^n} \ dx = \int_{\frac{1}{2}}^1 \sqrt[n]{x^n+(1-x)^n} \ dx$ and so

\displaystyle \begin{aligned} I_n= \int_0^{\frac{1}{2}} \sqrt[n]{x^n+(1-x)^n} \ dx + \int_{\frac{1}{2}}^1 \sqrt[n]{x^n+(1-x)^n} \ dx=2\int_0^{\frac{1}{2}} \sqrt[n]{x^n+(1-x)^n} \ dx. \ \ \ \ \ \ \ \ \ \ (1) \end{aligned}

Let $x=(1-x)s.$ Then $\displaystyle x=\frac{s}{1+s}$ and so $\displaystyle dx=\frac{ds}{(1+s)^2}.$ Hence $(1)$ gives

$\displaystyle I_n=2\int_0^1 \frac{\sqrt[n]{1+s^n}}{(1+s)^3} \ ds. \ \ \ \ \ \ \ \ \ \ \ (2)$

Clearly $1 \le \sqrt[n]{1+s^n} \le \sqrt[n]{2}$ for all $s \in [0,1]$ and so, by $(2),$

$\displaystyle 2\int_0^1 \frac{ds}{(1+s)^3} \le I_n \le 2 \sqrt[n]{2}\int_0^1 \frac{ds}{(1+s)^3}.$

Thus, since $\displaystyle \lim_{n\to\infty} \sqrt[n]{2}=1,$ we have, by the squeeze theorem,

$\displaystyle \lim_{n\to\infty} I_n=2\int_0^1 \frac{ds}{(1+s)^3}=\frac{3}{4}. \ \Box$
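Watching $I_n$ decrease toward $\frac{3}{4}$ numerically (a Python sketch with midpoint sums, not part of the solution):

```python
# Midpoint-rule values of I_n = ∫_0^1 (x^n + (1-x)^n)^{1/n} dx; they
# decrease from I_1 = 1 toward the limit 3/4.
def I(n, m=100000):
    h = 1.0 / m
    return sum(
        (x ** n + (1.0 - x) ** n) ** (1.0 / n)
        for x in ((i + 0.5) * h for i in range(m))
    ) * h

for n in (1, 2, 10, 100):
    print(n, I(n))
```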

# Limit of (a + 1/n)(a + 2/n) … (a + n/n)

Problem (American Mathematical Monthly, 2018). Given a real number $a > 0,$ evaluate $\displaystyle \lim_{n\to\infty} \prod_{k=1}^n \left(a+\frac{k}{n}\right).$

Solution. Let $\displaystyle u_n:=\prod_{k=1}^n \left(a+\frac{k}{n}\right).$ Then

$\displaystyle \ln u_n=\sum_{k=1}^n \ln \left(a+\frac{k}{n}\right)=n \cdot \frac{1}{n}\sum_{k=1}^n \ln \left(a+\frac{k}{n}\right). \ \ \ \ \ \ \ \ \ \ \ \ \ (1)$

The sum $\displaystyle \frac{1}{n}\sum_{k=1}^n \ln \left(a+\frac{k}{n}\right)$ is a Riemann sum and so

\displaystyle \begin{aligned}\lim_{n\to\infty} \frac{1}{n}\sum_{k=1}^n \ln \left(a+\frac{k}{n}\right)=\int_0^1 \ln(a+x)\ dx=(a+1)\ln(a+1)-a\ln a - 1. \ \ \ \ \ \ \ \ \ \ \ \ (2) \end{aligned}

Now consider the function $g(a):=(a+1)\ln(a+1)-a\ln a - 1, \ a > 0.$ See that $g$ is strictly increasing, $\displaystyle \lim_{a\to 0^+}g(a)=-1$ and $g(1)=2\ln 2 - 1 > 0.$ Thus $g$ has a unique root, say $a_0,$ and $g(a) > 0$ for $a > a_0$ and $g(a) < 0$ for $a < a_0.$
Hence, by $(1), (2),$ we have $\displaystyle \lim_{n\to\infty}\ln u_n=\begin{cases} +\infty & \text{if} \ a > a_0 \\ -\infty & \text{if} \ a < a_0 \end{cases}$ and so

$\displaystyle \lim_{n\to\infty} u_n=\begin{cases} +\infty & \text{if} \ a > a_0 \\ 0 & \text{if} \ a < a_0 \end{cases}.$

So the only thing left is to evaluate $\displaystyle \lim_{n\to\infty} u_n$ for $a=a_0.$ Well, since $g(a_0)=0,$ we have, by $\displaystyle (2), \ \int_0^1 \ln(a_0+x)\ dx=0$ and so, by $\displaystyle (1), \ \ln u_n=n\left(\frac{1}{n}\sum_{k=1}^n \ln \left(a_0+\frac{k}{n}\right)-\int_0^1 \ln(a_0+x) \ dx \right).$ Therefore, by this post,

\displaystyle \begin{aligned} \lim_{n\to\infty} \ln u_n=\lim_{n\to\infty} n\left(\frac{1}{n}\sum_{k=1}^n \ln \left(a_0+\frac{k}{n}\right)-\int_0^1 \ln(a_0+x) \ dx \right)=\frac{\ln(a_0+1)-\ln a_0}{2}=\ln \sqrt{1+\frac{1}{a_0}} \end{aligned}

and hence $\displaystyle \lim_{n\to\infty} u_n=\sqrt{1+\frac{1}{a_0}}.$ So we have proved that

$\displaystyle \lim_{n\to\infty} u_n=\begin{cases} +\infty & \text{if} \ a > a_0 \\ 0 & \text{if} \ a < a_0 \\ \sqrt{1+\frac{1}{a_0}} & \text{if} \ a = a_0 \end{cases}$

and that completes the solution. $\Box$

Remark. Let $g$ and $a_0$ be as defined in the solution of the above problem, i.e. $g(a)=(a+1)\ln(a+1)-a\ln a - 1, \ a > 0,$ and $g(a_0)=0.$ See that $g(0.5) < 0$ and so $0.5 < a_0 < 1.$ In fact, sketching the graph of $g$ gives $a_0 \approx 0.542.$
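Instead of sketching the graph, one can pin down $a_0$ by bisection, since $g$ is strictly increasing and changes sign on $[0.5, 1]$ (a Python sketch):

```python
import math

# Bisection for the root a_0 of g(a) = (a+1)·ln(a+1) - a·ln(a) - 1 on [0.5, 1].
def g(a):
    return (a + 1) * math.log(a + 1) - a * math.log(a) - 1

lo, hi = 0.5, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid

print(lo)   # a_0 ≈ 0.542
```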

Exercise. Evaluate $\displaystyle \lim_{n\to\infty} \frac{(n+2)(n+4)(n+6) \cdots (3n)}{(2n)^n}.$

# Divergence of the sequence {sin(an+b)}

Throughout this post, $a,b$ are constants, $a$ is not an integer multiple of $\pi$ and $b$ is any real number.

Problem. Show that the sequence $x_n:=\sin(an+b)$ is divergent.

Solution. Suppose that $\displaystyle \lim_{n\to\infty}x_n:=x$ exists and let $y_n:=\cos(an+b).$ We have

$x_{n+1}=x_n\cos a + y_n \sin a, \ \ x_{n-1}=x_n\cos a - y_n \sin a$

and thus

$x_{n+1}+x_{n-1}=2x_n\cos a, \ \ \ \ \ \ \ \ \ \ (1)$
$x_{n+1}-x_{n-1}=2y_n\sin a. \ \ \ \ \ \ \ \ \ \ \ (2)$

Note that since $a$ is not an integer multiple of $\pi,$ we have $\cos a \ne 1, \ \sin a \ne 0.$
Now, if, in $(1),$ we take limit as $n\to\infty,$ we get $2x=2x \cos a$ and hence $x=0.$
Also, since $\displaystyle \lim_{n\to\infty}(x_{n+1}-x_{n-1})=0,$ we get from $(2)$ that $\displaystyle \lim_{n\to\infty}y_n$ exists and $\displaystyle \lim_{n\to\infty}y_n=0.$
So we have proved that $\displaystyle \lim_{n\to\infty} x_n=\lim_{n\to\infty}y_n=0$ and that is a contradiction because $x_n^2+y_n^2=1. \ \Box$

Example. Show that the sequences $y_n:=\cos(an+b), \ u_n:=\sin^2(an+b), \ v_n:=\cos^2(an+b)$ are divergent.

Solution. If $\{y_n\}$ diverges, then both $\{u_n\}$ and $\{v_n\}$ diverge too because $v_n=1-u_n$ and $2u_n=1-\cos(2an+2b).$
Now suppose that $\{y_n\}$ is convergent. Then, since $y_{n-1}-y_{n+1}=2\sin a \sin(an+b)$ and $\sin a \ne 0,$ the sequence $\{\sin(an+b)\}$ must also be convergent, contradicting the above problem. $\Box$

Exercise. Show that the sequences $\{\tan(an+b)\}$ and $\{\cot(an+b)\}$ are divergent.

# The sequence defined by the relation a_n=1/(a_1^k + … +a_(n-1)^k)

Problem. Let $k \ge 1$ be an integer and define the sequence $\{a_n\}$ by

$\displaystyle a_1=1, \ \ a_n=\frac{1}{a_1^k+a_2^k+ \cdots + a_{n-1}^k}, \ n \ge 2.$

Show that
i) $\{a_n\}$ is strictly decreasing for $n \ge 2,$
ii) $\displaystyle \frac{1}{1+(k+1)\sqrt[k+1]{n-2}} < a_n < \frac{1}{\sqrt[k+1]{n-1}}$ for $n \ge 3$ and, as a result, $\displaystyle \lim_{n\to\infty}a_n=0,$
iii) $\displaystyle \lim_{n\to\infty} \sqrt[k+1]{n}a_n=\frac{1}{\sqrt[k+1]{k+1}}.$ In other words, $\displaystyle a_n \sim \frac{1}{\sqrt[k+1]{(k+1)n}}.$

Solution. i) By the definition of the sequence, we have $a_1=a_2=1.$ For $n \ge 2,$

$\displaystyle \frac{1}{a_{n+1}}-\frac{1}{a_n}=a_1^k + \cdots +a_n^k - (a_1^k + \cdots + a_{n-1}^k)=a_n^k > 0$

and so $a_n > a_{n+1}.$
ii) By i), we have $a_i > a_n$ for $1 \le i \le n-1$ and so $\displaystyle \frac{1}{a_n}=a_1^k + \cdots + a_{n-1}^k > (n-1)a_n^k,$ which gives $\displaystyle a_n < \frac{1}{\sqrt[k+1]{n-1}}.$ Thus

\displaystyle \begin{aligned} \frac{1}{a_n}=1 + \sum_{i=2}^{n-1}a_i^k \le 1 + \sum_{i=2}^{n-1}\frac{1}{\sqrt[k+1]{(i-1)^k}}= 1+ \sum_{i=2}^{n-1}(i-1)^{\frac{-k}{k+1}}< 1 + \int_0^{n-2}x^{\frac{-k}{k+1}}dx=1+(k+1)\sqrt[k+1]{n-2} \end{aligned}

and so $\displaystyle a_n > \frac{1}{1+(k+1)\sqrt[k+1]{n-2}}.$
iii) As we saw in i), we have $\displaystyle \frac{1}{a_{n+1}}=\frac{1}{a_n}+a_n^k$ and so

$\displaystyle \frac{1}{a_{n+1}^{k+1}}=\left(\frac{1}{a_n}+a_n^k\right)^{k+1}=\frac{1}{a_n^{k+1}}+k+1 + \sum_{i=2}^{k+1} \binom{k+1}{i}a_n^{(k+1)(i-1)}. \ \ \ \ \ \ \ \ \ \ \ (*)$

Since $(k+1)(i-1) > 0$ for $i \ge 2,$ and, by ii), $\displaystyle \lim_{n\to\infty}a_n=0,$ we have $\displaystyle \lim_{n\to\infty} a_n^{(k+1)(i-1)}=0$ for all $i \ge 2$ and hence $(*)$ gives

$\displaystyle \lim_{n\to\infty} \left(\frac{1}{a_{n+1}^{k+1}}-\frac{1}{a_n^{k+1}}\right)=k+1.$

Thus, by the Stolz–Cesàro lemma, $\displaystyle \lim_{n\to\infty} \frac{1}{na_n^{k+1}}=k+1$ and so $\displaystyle \lim_{n\to\infty} \sqrt[k+1]{n}a_n=\frac{1}{\sqrt[k+1]{k+1}}. \ \Box$
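The asymptotics in iii) show up cleanly in a direct computation of the sequence (a Python sketch; the helper `a_seq` just iterates the recurrence):

```python
# a_1 = 1, a_n = 1/(a_1^k + ... + a_{n-1}^k); check a_n·((k+1)n)^{1/(k+1)} → 1.
def a_seq(k, N):
    a, s = 1.0, 0.0          # a holds a_n, s holds a_1^k + ... + a_{n-1}^k
    for _ in range(1, N):
        s += a ** k
        a = 1.0 / s
    return a                 # this is a_N

for k in (1, 2, 3):
    N = 100000
    print(k, a_seq(k, N) * ((k + 1) * N) ** (1.0 / (k + 1)))
```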

# Chebyshev’s integral inequality

Let’s begin with an important little point which is not quite directly related to Chebyshev’s inequality.

Problem 1. Let $f: [a,b] \longrightarrow [0, \infty)$ be a continuous function and suppose that $\displaystyle \int_a^b f(x) \ dx =0.$ Show that $f$ is identically zero on $[a,b],$ i.e. $f(x)=0$ for all $x \in [a,b].$

Solution. If $f$ is not identically zero, then, since $f$ is continuous and non-negative, $f$ has an absolute maximum $M > 0,$ by the extreme value theorem. By continuity, there is a subinterval $[c,d] \subseteq [a,b],$ with $c < d,$ on which $\displaystyle f(x) \ge \frac{M}{2}.$ But then

$\displaystyle 0 =\int_a^b f(x) \ dx \ge \int_c^d f(x) \ dx \ge \frac{M}{2}(d-c) > 0,$

which is a contradiction.

Note that the result still holds if one or both of $a, b$ are infinite: if, for example, $\displaystyle \int_a^{\infty} f(x) \ dx =0$ and $b > a,$ then, since $f \ge 0,$ we have $\displaystyle \int_a^b f(x) \ dx =0$ and so $f(x)=0$ for all $x \in [a,b]. \ \Box$

Now, an example that explains the idea behind Chebyshev’s inequality.

Problem 2. Show that $\displaystyle I:=\int_0^{\frac{\pi}{4}} e^{-\tan x} dx > \frac{\pi}{4}(1-e^{-1}).$

Solution. Let $\displaystyle \tan x = t$ to get $\displaystyle I=\int_0^1 \frac{e^{-t}}{1+t^2} \ dt.$
Now choose $c \in (0,1)$ such that $\displaystyle \frac{1}{1+c^2}=\frac{\pi}{4}.$ Then, since both functions $\displaystyle \frac{1}{1+t^2}$ and $\displaystyle e^{-t}$ are decreasing on the interval $[0,1],$ we have

$\displaystyle \left(\frac{1}{1+t^2}-\frac{\pi}{4}\right)\left(e^{-t}-e^{-c}\right)= \left(\frac{1}{1+t^2}-\frac{1}{1+c^2}\right)\left(e^{-t}-e^{-c}\right)\ge 0$

for all $t \in [0,1]$ and so

$\displaystyle \int_0^1 \left(\frac{1}{1+t^2}-\frac{\pi}{4}\right)\left(e^{-t}-e^{-c}\right) dt > 0. \ \ \ \ \ \ \ \ \ \ (*)$

Note that, by Problem 1, we can’t have equality in $(*)$. Now it follows from $(*)$ that

$\displaystyle I-\frac{\pi}{4}e^{-c}-\frac{\pi}{4}\int_0^1e^{-t}dt+\frac{\pi}{4}e^{-c} > 0$

and the result follows. $\Box$

Problem 3 (Chebyshev’s integral inequality). Let $f,g: [a,b] \longrightarrow \mathbb{R}$ be two continuous functions which are either both increasing or both decreasing. Show that

$\displaystyle \int_a^b f(x)g(x) \ dx \ge \frac{1}{b-a} \int_a^bf(x) \ dx \int_a^b g(x) \ dx.$

Solution. First, note that, since $f,g$ are either both increasing or both decreasing, we have $(f(x)-f(y))(g(x)-g(y)) \ge 0$ for all $x,y \in [a,b].$
Now, by the mean value theorem for integrals, there exists $c \in (a,b)$ such that

$\displaystyle \int_a^b f(x) \ dx=(b-a)f(c).$

Thus

$\displaystyle \left(f(t)-\frac{1}{b-a}\int_a^b f(x) \ dx \right)(g(t)-g(c))=(f(t)-f(c))(g(t)-g(c)) \ge 0$

for all $t \in [a,b]$ and so $\displaystyle \int_a^b \left(f(t)-\frac{1}{b-a}\int_a^b f(x) \ dx \right)(g(t)-g(c)) \ dt \ge 0$ which gives

$\displaystyle \int_a^b f(t)g(t) \ dt-g(c)\int_a^b f(t) \ dt-\frac{1}{b-a}\int_a^bf(x) \ dx \int_a^b g(t) \ dt+g(c) \int_a^bf(x) \ dx \ge 0.$

The two terms involving $g(c)$ cancel, and the result follows. $\Box$
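The inequality is easy to test numerically. Here is a check with an arbitrarily chosen pair of increasing functions, $f(x)=x$ and $g(x)=x^3$ on $[0,1],$ where both sides can also be computed exactly: $\displaystyle \int_0^1 x^4 \ dx=\frac{1}{5}$ and $\displaystyle \int_0^1 x \ dx \int_0^1 x^3 \ dx=\frac{1}{8}.$

```python
# Check Chebyshev's integral inequality for a sample pair of increasing
# functions, f(x) = x and g(x) = x^3 on [0, 1]; exactly, 1/5 ≥ 1/8.
def midpoint(f, a, b, n=100_000):
    """Midpoint-rule approximation of ∫_a^b f(x) dx."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

a, b = 0.0, 1.0
f = lambda x: x
g = lambda x: x ** 3
lhs = midpoint(lambda x: f(x) * g(x), a, b)            # ≈ 1/5
rhs = midpoint(f, a, b) * midpoint(g, a, b) / (b - a)  # ≈ 1/8
assert lhs >= rhs
```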

Remark 1. Let $c \in (a,b)$ be as defined in the solution of Problem 3 and suppose that for any $r \in [a,b]$ there exists $x \in [a,b]$ such that $(f(x)-f(r))(g(x)-g(r)) \ne 0.$
Then, by Problem 1,

$\displaystyle \int_a^b (f(x)-f(c))(g(x)-g(c)) \ dx \ne 0$

and so, in this case, Chebyshev’s inequality is strict, i.e.

$\displaystyle \int_a^b f(x)g(x) \ dx > \frac{1}{b-a} \int_a^bf(x) \ dx \int_a^b g(x) \ dx.$

Remark 2. Let $f,g:[a,b] \longrightarrow \mathbb{R}$ be two continuous functions and suppose that $f$ is increasing and $g$ is decreasing. Then both $f,-g$ are increasing and so, by Problem 3, $\displaystyle \int_a^b f(x)(-g(x)) \ dx \ge \frac{1}{b-a} \int_a^bf(x) \ dx \int_a^b (-g(x)) \ dx,$ which gives

$\displaystyle \int_a^b f(x)g(x) \ dx \le \frac{1}{b-a} \int_a^bf(x) \ dx \int_a^b g(x) \ dx.$
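The reversed inequality of Remark 2 can also be illustrated numerically, say with the arbitrarily chosen pair $f(x)=x$ (increasing) and $g(x)=e^{-x}$ (decreasing) on $[0,1],$ where exactly $\displaystyle \int_0^1 xe^{-x} \ dx=1-\frac{2}{e} \approx 0.264$ and $\displaystyle \int_0^1 x \ dx \int_0^1 e^{-x} \ dx=\frac{1-e^{-1}}{2} \approx 0.316.$

```python
import math

# Remark 2: with f increasing and g decreasing, the inequality reverses.
# Sample pair f(x) = x, g(x) = e^(-x) on [0, 1]:
#   ∫ f g = 1 - 2/e ≈ 0.264  ≤  ∫f · ∫g / (b - a) = (1 - 1/e)/2 ≈ 0.316.
def midpoint(f, a, b, n=100_000):
    """Midpoint-rule approximation of ∫_a^b f(x) dx."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

a, b = 0.0, 1.0
f = lambda x: x
g = lambda x: math.exp(-x)
lhs = midpoint(lambda x: f(x) * g(x), a, b)
rhs = midpoint(f, a, b) * midpoint(g, a, b) / (b - a)
assert lhs <= rhs
```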

Example 1. Let’s use Chebyshev’s integral inequality to solve Problem 2. Put $\displaystyle \tan x = t$ to get $\displaystyle I=\int_0^1 \frac{e^{-t}}{1+t^2} \ dt.$ Since both functions $\displaystyle \frac{1}{1+t^2}$ and $\displaystyle e^{-t}$ are decreasing on the interval $[0,1],$ we have, by Chebyshev’s inequality and Remark 1,

$\displaystyle I > \int_0^1 \frac{dt}{1+t^2} \int_0^1 e^{-t}dt=\frac{\pi}{4}(1-e^{-1}).$

Example 2. Let $n \ge 0$ be an integer. Show that $\displaystyle \int_0^{\frac{\pi}{2}} \left(\frac{\sin x}{x}\right)^{2n+1}dx > \left(\frac{4}{\pi}\right)^{2n+1} \binom{2n+1}{n}^{-1}.$

Solution. Let $\displaystyle f(x):=x^{2n+1}$ and $\displaystyle g(x):=\left(\frac{\sin x}{x}\right)^{2n+1}.$ Note that $g(x)$ is decreasing on the interval $\displaystyle \left[0, \frac{\pi}{2}\right]$ (this is Exercise 5 below). So, since $f(x)$ is increasing, we have, by Chebyshev's inequality and Remarks 1 and 2,

$\displaystyle \int_0^{\frac{\pi}{2}} \sin^{2n+1}x \ dx=\int_0^{\frac{\pi}{2}}f(x)g(x) \ dx < \frac{2}{\pi} \int_0^{\frac{\pi}{2}}f(x) \ dx \int_0^{\frac{\pi}{2}}g(x) \ dx=\frac{1}{2n+2} \left(\frac{\pi}{2}\right)^{2n+1} \int_0^{\frac{\pi}{2}} g(x) \ dx.$

Thus $\displaystyle \int_0^{\frac{\pi}{2}} g(x) \ dx > (2n+2) \left(\frac{2}{\pi}\right)^{2n+1} \int_0^{\frac{\pi}{2}} \sin^{2n+1}x \ dx$ and the result now follows from the first part of the problem in this post. $\Box$
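The case $n=1$ can be checked numerically (a sanity check, not a solution); conveniently, the midpoint rule below never evaluates the integrand at $x=0,$ where $\frac{\sin x}{x}$ must be read as its limit $1.$

```python
import math

# n = 1 instance of Example 2 (numerical sanity check, not a proof):
#   ∫_0^{π/2} (sin x / x)^3 dx  >  (4/π)^3 / C(3, 1).
def midpoint(f, a, b, n=100_000):
    """Midpoint-rule approximation of ∫_a^b f(x) dx."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

n = 1
lhs = midpoint(lambda x: (math.sin(x) / x) ** (2 * n + 1), 0.0, math.pi / 2)
rhs = (4 / math.pi) ** (2 * n + 1) / math.comb(2 * n + 1, n)
print(lhs, rhs)  # lhs ≈ 1.09 > rhs ≈ 0.69
assert lhs > rhs
```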

Exercise 1. Let $f_1, f_2, \cdots, f_n: [a,b] \longrightarrow [0,\infty)$ be continuous functions which are either all increasing or all decreasing. Show that

$\displaystyle \int_a^b f_1(x)f_2(x) \cdots f_n(x) \ dx \ge \frac{1}{(b-a)^{n-1}} \int_a^bf_1(x) \ dx \int_a^b f_2(x) \ dx \cdots \int_a^b f_n(x) \ dx.$

Note that, for $n \ge 3,$ we need $f_i$ to be non-negative in order to generalize Chebyshev’s inequality.

Exercise 2. Given integer $n \ge 1,$ let $N_n(k)$ be the number of ways we can write $k$ as $k=x_1+ \cdots + x_m$ for some integers $m, x_i$ with $0 \le x_1 < \cdots < x_m \le n.$ Show that

$\displaystyle \sum_{k \ge 0} \frac{N_n(k)}{k+1} \ge 1 + \frac{n}{2}.$

Hint. Use the above exercise and exercise 5 in this post.

Exercise 3. Solve Problem 2 without using Chebyshev’s inequality.
Hint. You only need to show that $\displaystyle \tan x \le \frac{4}{\pi}x$ for $\displaystyle 0 \le x \le \frac{\pi}{4}.$

Exercise 4. Let $n \ge 1$ be an integer. Show that $\displaystyle \int_0^{\frac{\pi}{2}} \left(\frac{\sin x}{x}\right)^{2n}dx > \frac{2n+1}{2\pi^{2n-1}} \binom{2n}{n}.$
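For Exercise 4, here is the $n=1$ case checked numerically (again, only a sanity check with an ad hoc quadrature routine): the claim reads $\displaystyle \int_0^{\frac{\pi}{2}} \left(\frac{\sin x}{x}\right)^2 dx > \frac{3}{\pi}.$

```python
import math

# n = 1 instance of Exercise 4 (numerical sanity check only):
#   ∫_0^{π/2} (sin x / x)^2 dx  >  (2n+1)/(2 π^(2n-1)) · C(2n, n) = 3/π.
def midpoint(f, a, b, n=100_000):
    """Midpoint-rule approximation of ∫_a^b f(x) dx."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

n = 1
lhs = midpoint(lambda x: (math.sin(x) / x) ** (2 * n), 0.0, math.pi / 2)
rhs = (2 * n + 1) / (2 * math.pi ** (2 * n - 1)) * math.comb(2 * n, n)
assert lhs > rhs  # ≈ 1.22 > ≈ 0.95
```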

Exercise 5. Let $n \ge 1$ be an integer. Show that $\displaystyle g(x):=\left(\frac{\sin x}{x}\right)^{2n+1}, \ 0 \le x \le \frac{\pi}{2},$ is a decreasing function.