P$. (That is, assume the terms of the two series are equal after some point.) Then, $\sum_{n=0}^\infty a_n$ converges if and only if $\sum_{n=0}^\infty b_n$ converges. That is, the convergence or divergence of a series is not affected by mucking about with finitely many terms of the series. \medskip \noindent This is a powerful and highly useful fact that I'll make repeated use of, often without being explicit about it, so keep a sharp eye out. \label{first-few} \end{fact} \begin{example} Fix a real number $r\ne 0$ and consider the series $\sum_{n=0}^\infty r^n$. We can determine for which values of $r$ this series converges, directly from the definition. Namely, consider the $k^{th}$ partial sum $S_k$: \[ S_k =\sum_{n=0}^k r^n = 1 + r + r^2 + \cdots +r^k =\frac{1-r^{k+1}}{1-r}. \] We now need to evaluate the limit $\lim_{k\rightarrow\infty} S_k$. However, as $k\rightarrow\infty$, we know how $r^{k+1}$ behaves: \begin{itemize} \item if $|r| <1$, then $\lim_{k\rightarrow\infty} r^{k+1} =0$; \item if $r=1$, then $S_k =k+1$ (that is, the formula above breaks down in this case), and so $\{ S_k\}$ diverges; \item if $r > 1$, then $\lim_{k\rightarrow\infty} r^{k+1} =\infty$; \item if $r \le -1$, then $\lim_{k\rightarrow\infty} r^{k+1}$ diverges. \end{itemize} Hence, $\lim_{k\rightarrow\infty} r^{k+1}$ exists if and only if $|r| <1$, and in this case $\lim_{k\rightarrow\infty} r^{k+1} =0$. Hence, $\{ S_k\}$ converges if and only if $|r| <1$, and in this case $\{ S_k\}$ converges to $\frac{1}{1-r}$. \label{geometric-example} \end{example} \begin{exercise} A ball has {\bf bounce coefficient} $r$, with $0 <r <1$, if whenever it falls to the ground, it bounces back up to $r$ times the height from which it fell. Suppose that a ball with bounce coefficient $r$ is dropped from a height $h$ and is allowed to bounce forever. Use a geometric series to determine the total distance the ball travels. \end{exercise} \begin{example} Consider the series $\sum_{n=1}^\infty \frac{1}{n^s}$ for a real number $s >0$. We will see later, using the integral test, that this series converges for $s >1$. \medskip \noindent For $s =1$, this series is called the {\bf harmonic series}, and we can prove directly that it diverges.
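\medskip \noindent The idea is to group the terms of the harmonic series into blocks of lengths $1, 2, 4, 8, \ldots$, each of which contributes at least $\frac{1}{2}$ to the sum: \[ \sum_{n=1}^\infty \frac{1}{n} = 1 + \frac{1}{2} + \left( \frac{1}{3} + \frac{1}{4} \right) + \left( \frac{1}{5} + \cdots + \frac{1}{8} \right) + \cdots. \]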
Note that $\frac{1}{3} + \frac{1}{4} > \frac{1}{2}$, that $\frac{1}{5} + \cdots + \frac{1}{8} > 4\cdot \frac{1}{8} = \frac{1}{2}$, and in general that \[ \frac{1}{2^{k-1} +1} + \frac{1}{2^{k-1} +2} +\cdots +\frac{1}{2^k} > 2^{k-1}\cdot \frac{1}{2^k} =\frac{1}{2}. \] Hence, the $(2^k)^{th}$ partial sum $S_{2^k}$ satisfies $S_{2^k} \ge 1 +\frac{k}{2}$. Since the terms in the harmonic series are all positive, the sequence of partial sums is monotonically increasing; by the calculation just done, it is also unbounded, and so it diverges. Hence, the harmonic series diverges. \label{zeta-series} \end{example} \begin{exercise} Prove that $\sum_{n=1}^\infty \frac{1}{n^s}$ diverges for $s <1$, by estimating its partial sums. \label{zeta-exercise} \end{exercise} \begin{theorem} {\bf Arithmetic of series:} Let $\sum_{n=0}^\infty a_n$ and $\sum_{n=0}^\infty b_n$ be convergent series, with $\sum_{n=0}^\infty a_n =A$ and $\sum_{n=0}^\infty b_n =B$. \begin{enumerate} \item {\bf sums:} $\sum_{n=0}^\infty (a_n + b_n) =\sum_{n=0}^\infty a_n + \sum_{n=0}^\infty b_n = A+B$; \item {\bf differences:} $\sum_{n=0}^\infty (a_n - b_n) =\sum_{n=0}^\infty a_n - \sum_{n=0}^\infty b_n = A -B$; \item {\bf multiplication by a constant:} for a constant $c$, $\sum_{n=0}^\infty c\: a_n =c \sum_{n=0}^\infty a_n = cA$. \end{enumerate} \label{series-arithmetic} \end{theorem} \noindent \begin{proof} {\bf of Theorem \ref{series-arithmetic}:} Let $S_k =\sum_{n=0}^k a_n$ and $V_k =\sum_{n=0}^k b_n$ be the partial sums of the two series. Since the series are both convergent, we have that $\lim_{k\rightarrow\infty} S_k =A$ and $\lim_{k\rightarrow\infty} V_k =B$. \begin{enumerate} \item this follows immediately from the definition of convergence of a series in terms of partial sums: the partial sums of the series $\sum_{n=0}^\infty (a_n +b_n)$ are \[ T_k =\sum_{n=0}^k (a_n +b_n) =\sum_{n=0}^k a_n + \sum_{n=0}^k b_n = S_k + V_k \] (since the sums are finite).
Since $\lim_{k\rightarrow\infty} S_k =A$ and $\lim_{k\rightarrow\infty} V_k =B$, we have that \[ \lim_{k\rightarrow\infty} T_k = \lim_{k\rightarrow\infty} (S_k + V_k) = \lim_{k\rightarrow\infty} S_k + \lim_{k\rightarrow\infty} V_k = A+B. \] So, not only does the series of sums $\sum_{n=0}^\infty (a_n +b_n)$ converge (since its sequence of partial sums converges), but it converges to $A+B$, as expected. \item much as the rule for sums just done, this follows immediately from the definition of convergence of a series in terms of partial sums: the partial sums of the series $\sum_{n=0}^\infty (a_n -b_n)$ are \[ W_k =\sum_{n=0}^k (a_n -b_n) =\sum_{n=0}^k a_n - \sum_{n=0}^k b_n = S_k - V_k \] (since the sums are finite). Since $\lim_{k\rightarrow\infty} S_k =A$ and $\lim_{k\rightarrow\infty} V_k =B$, we have that \[ \lim_{k\rightarrow\infty} W_k = \lim_{k\rightarrow\infty} (S_k - V_k) =\lim_{k\rightarrow\infty} S_k - \lim_{k\rightarrow\infty} V_k = A -B. \] So, not only does the series of differences $\sum_{n=0}^\infty (a_n -b_n)$ converge (since its sequence of partial sums converges), but it converges to $A -B$, as expected. \item much as the rules for sums and differences just done, this follows immediately from the definition of convergence of a series in terms of partial sums: the partial sums of the series $\sum_{n=0}^\infty c\: a_n$ are \[ Z_k =\sum_{n=0}^k c\: a_n =c\: \sum_{n=0}^k a_n = c\: S_k \] (since the sums are finite). Since $\lim_{k\rightarrow\infty} S_k =A$, we have that \[ \lim_{k\rightarrow\infty} Z_k = \lim_{k\rightarrow\infty} c\: S_k =c\: \lim_{k\rightarrow\infty} S_k = cA. \] So, not only does the series of constant multiples $\sum_{n=0}^\infty c\: a_n$ converge (since its sequence of partial sums converges), but it converges to $cA$, as expected.
\end{enumerate} \end{proof} \begin{exercise} \begin{enumerate} \item Show that, if $\sum_{n=0}^\infty a_n$ converges and if $\sum_{n=0}^\infty b_n$ diverges, then the series of sums $\sum_{n=0}^\infty (a_n +b_n)$ diverges. \item Show that, if $\sum_{n=0}^\infty a_n$ diverges and if $c\ne 0$, then the series of multiples $\sum_{n=0}^\infty c\: a_n$ diverges. \end{enumerate} \label{some-series-things} \end{exercise} \begin{example} Construct an example of convergent series $\sum_{n=0}^\infty a_n$ and $\sum_{n=0}^\infty b_n$ with positive terms for which the series of products $\sum_{n=0}^\infty a_n\: b_n$ diverges, or prove that no such example exists. \medskip \noindent No such example exists: since $\sum_{n=0}^\infty a_n$ and $\sum_{n=0}^\infty b_n$ both converge, the sequences of partial sums $\{ S_k =\sum_{n=0}^k a_n \}$ and $\{ V_k =\sum_{n=0}^k b_n \}$ both converge. Note that the $k^{th}$ partial sum $W_k$ of the series of products satisfies \[ W_k =\sum_{n=0}^k a_n\: b_n \le \left( \sum_{n=0}^k a_n \right) \left( \sum_{n=0}^k b_n \right) = S_k\: V_k. \] Since $S_k \le A =\sum_{n=0}^\infty a_n$ and $V_k \le B =\sum_{n=0}^\infty b_n$ for all $k$, we have that $W_k\le AB$ for all $k$. Since $\{ W_k\}$ is a monotonically increasing sequence (as $a_n$ and $b_n$ are positive for all $n$) and since $\{ W_k\}$ is bounded (by $AB$), we have that $\{ W_k\}$ converges, and hence that the series of products $\sum_{n=0}^\infty a_n\: b_n$ converges. \end{example} \begin{exercise} Unlike with sequences, the convergence of series built from products or quotients of the terms of convergent series does not necessarily follow. Exploring this phenomenon is the purpose of this exercise.
Construct examples of each of the following, or prove that no such example exists: \begin{enumerate} \item convergent series $\sum_{n=0}^\infty a_n$ and $\sum_{n=0}^\infty b_n$ with positive terms for which the series of products $\sum_{n=0}^\infty a_n\: b_n$ converges; \item divergent series $\sum_{n=0}^\infty a_n$ and $\sum_{n=0}^\infty b_n$ with positive terms for which the series of products $\sum_{n=0}^\infty a_n\: b_n$ diverges; \item divergent series $\sum_{n=0}^\infty a_n$ and $\sum_{n=0}^\infty b_n$ with positive terms for which the series of products $\sum_{n=0}^\infty a_n\: b_n$ converges; \item convergent series $\sum_{n=0}^\infty a_n$ and $\sum_{n=0}^\infty b_n$ with positive terms for which the series of quotients $\sum_{n=0}^\infty \frac{a_n}{b_n}$ diverges; \item convergent series $\sum_{n=0}^\infty a_n$ and $\sum_{n=0}^\infty b_n$ with positive terms for which the series of quotients $\sum_{n=0}^\infty \frac{a_n}{b_n}$ converges; \item divergent series $\sum_{n=0}^\infty a_n$ and $\sum_{n=0}^\infty b_n$ with positive terms for which the series of quotients $\sum_{n=0}^\infty \frac{a_n}{b_n}$ diverges; \item divergent series $\sum_{n=0}^\infty a_n$ and $\sum_{n=0}^\infty b_n$ with positive terms for which the series of quotients $\sum_{n=0}^\infty \frac{a_n}{b_n}$ converges; \end{enumerate} \label{product-quotient-series-examples} \end{exercise} \begin{fact} This is a useful fact that we have already run across at least once. If the terms of a series $\sum_{n=0}^\infty a_n$ are all positive, then the sequence of partial sums is monotonically increasing, since \[ S_{k+1} =\sum_{n=0}^{k+1} a_n = \sum_{n=0}^k a_n + a_{k+1} > \sum_{n=0}^k a_n =S_k. \] (If all the terms in the series are non-negative, then the sequence of partial sums is monotonically non-decreasing.) This fact makes an appearance in the proofs of several of the following tests for convergence or divergence of series. 
\end{fact} \begin{theorem} {\bf Series convergence tests:} Be careful when reading the hypotheses, as not all these tests have the same hypotheses. In particular, some only apply to series with non-negative terms, while others apply to all series. \begin{itemize} \item {\bf $n^{{\rm th}}$ term test for divergence:} If $\lim_{n\rightarrow\infty} a_n\ne 0$ (so that either $\{ a_n\}$ diverges, or $\{ a_n\}$ converges to $a\ne 0$), then the series $\sum_{n=1}^\infty a_n$ diverges. \item {\bf First comparison test:} If $\sum_{n=0}^\infty a_n$ and $\sum_{n=0}^\infty b_n$ are series with non-negative terms, if $a_n\leq b_n$ for all $n\geq 0$, and if $\sum_{n=0}^\infty a_n$ diverges, then $\sum_{n=0}^\infty b_n$ diverges. \item {\bf Second comparison test:} If $\sum_{n=0}^\infty a_n$ and $\sum_{n=0}^\infty b_n$ are series with non-negative terms, if $a_n\leq b_n$ for all $n\geq 0$, and if $\sum_{n=0}^\infty b_n$ converges, then $\sum_{n=0}^\infty a_n$ converges. \item {\bf Limit comparison test:} If $\sum_{n=0}^\infty a_n$ and $\sum_{n=0}^\infty b_n$ are series with non-negative terms and if the limit $\lim_{n\rightarrow\infty} \frac{a_n}{b_n} =L$ exists with $0 <L <\infty$, then either both series converge or both series diverge. \item {\bf Integral test:} Let $f(x)$ be a positive, decreasing function on $[0,\infty)$, and set $a_n =f(n)$. Then, the series $\sum_{n=1}^\infty a_n$ converges if and only if the improper integral $\int_0^\infty f(x) {\rm d}x$ converges. \item {\bf Ratio test:} Let $\sum_{n=0}^\infty a_n$ be a series of positive terms and suppose $\lim_{n\rightarrow\infty} \frac{a_{n+1}}{a_n} = L$ exists. If $L <1$, then $\sum_{n=0}^\infty a_n$ converges. If $L > 1$, then $\sum_{n=0}^\infty a_n$ diverges. If $L =1$, this test gives no information. \item {\bf Root test:} Let $\sum_{n=0}^\infty a_n$ be a series of positive terms and suppose $\lim_{n\rightarrow\infty} (a_n)^{1/n} = L$ exists. If $L <1$, then $\sum_{n=0}^\infty a_n$ converges. If $L > 1$, then $\sum_{n=0}^\infty a_n$ diverges. If $L =1$, this test gives no information. \item {\bf Alternating series test:} Consider a series of the form $\sum_{n=0}^\infty (-1)^n a_n$, where $a_n >0$ for all $n \ge 0$. If $a_{n+1}\le a_n$ for all $n\ge 0$ and $\lim_{n\rightarrow\infty} a_n =0$, then the series converges.
\end{itemize} \label{series-tests} \end{theorem} \begin{proof} {\bf of Theorem \ref{series-tests}:} \begin{itemize} \item {\bf $n^{{\rm th}}$ term test for divergence:} we prove this by proving its contrapositive: If the series $\sum_{n=1}^\infty a_n$ converges, then $\lim_{n\rightarrow\infty} a_n = 0$. Let $S_k =\sum_{n=1}^k a_n$ be the $k^{th}$ partial sum of the series $\sum_{n=1}^\infty a_n$. By definition, the sequence of partial sums $\{ S_k\}$ converges. By the Cauchy criterion, we then have that for every $\varepsilon >0$, there exists $M$ so that if $p$, $q >M$, then $| S_p -S_q| < \varepsilon$. In particular, taking any $p >M$ and $q =p+1$, we see that $| S_p -S_q| = |a_{p+1}| < \varepsilon$. Hence, if we set $Q =M+1$, then $|a_n| <\varepsilon$ for every $n >Q$, and so $\lim_{n\rightarrow\infty} a_n =0$, as desired. \item {\bf First comparison test:} again, we use partial sums: let $S_k =\sum_{n=0}^k a_n$ and $T_k =\sum_{n=0}^k b_n$ be the partial sums of the two series. Since $a_n\le b_n$ for all $n$, we have that $S_k\le T_k$ for all $k$. Further, since both the series have non-negative terms, we have that both sequences $\{ S_k\}$ and $\{ T_k\}$ are monotonically non-decreasing. Since $\sum_{n=0}^\infty a_n$ diverges, it must be that $\{ S_k\}$ is unbounded, since bounded monotonic sequences converge. Hence, since $S_k\le T_k$ for all $k$, we have that $\{ T_k\}$ is also an unbounded monotonic sequence, hence divergent, and so $\sum_{n=0}^\infty b_n$ must diverge as well. \item {\bf Second comparison test:} yet again, we use partial sums: let $S_k =\sum_{n=0}^k a_n$ and $T_k =\sum_{n=0}^k b_n$ be the partial sums of the two series. Since $a_n\le b_n$ for all $n$, we have that $S_k\le T_k$ for all $k$. Further, since both the series have non-negative terms, we have that both sequences $\{ S_k\}$ and $\{ T_k\}$ are monotonically non-decreasing.
Since $\sum_{n=0}^\infty b_n$ converges, it must be that $\{ T_k\}$ is bounded, since a monotonic sequence converges if and only if it is bounded. Hence, $\{ S_k\}$ is also a bounded monotonic sequence, bounded by $\lim_{k\rightarrow\infty} T_k$ since $S_k\le T_k$ for all $k$, and so $\sum_{n=0}^\infty a_n$ is also a convergent series. [Note that the proofs of the first and second comparison tests rely heavily on the fact that the series have non-negative terms, thus forcing the sequences of partial sums to be monotonic.] \item {\bf Limit comparison test:} since $\lim_{n\rightarrow\infty} \frac{a_n}{b_n} =L >0$, we can apply the definition of limit with $\varepsilon =\frac{1}{2} L$ to get that there exists $M$ so that $\frac{1}{2} L< \frac{a_n}{b_n} <\frac{3}{2}L$ for $n >M$. In particular, applying a bit of algebraic massage, we have that $a_n <\frac{3}{2}L b_n$ for all $n >M$ and that $b_n < \frac{2}{L} a_n$ for $n >M$. Let $S_k =\sum_{n=0}^k a_n$ and $T_k =\sum_{n=0}^k b_n$ be the partial sums of the two series. As above, since both the series have non-negative terms, we have that both sequences $\{ S_k\}$ and $\{ T_k\}$ are monotonically non-decreasing. For the sake of precision, remove the first $M$ terms of both series, which does not affect the convergence or divergence of either. This is done so that the two inequalities $a_n <\frac{3}{2}L b_n$ and $b_n < \frac{2}{L} a_n$ hold true for all $n$. \medskip \noindent Suppose that $\sum_{n=0}^\infty b_n$ converges, so that $\{ T_k\}$ is a bounded monotonic sequence. Since $a_n <\frac{3}{2}L b_n$ for all $n >M$, we have that $S_k < \frac{3}{2} L T_k$; hence, the sequence $\{ S_k\}$ is bounded by $\frac{3}{2} L\lim_{k\rightarrow\infty} T_k$, and so $\sum_{n=0}^\infty a_n$ converges. \medskip \noindent Suppose now that $\sum_{n=0}^\infty a_n$ converges, so that $\{ S_k\}$ is a bounded monotonic sequence. 
Since $b_n <\frac{2}{L} a_n$ for all $n >M$, we have that $T_k < \frac{2}{L} S_k$; hence, the sequence $\{ T_k\}$ is bounded by $\frac{2}{L} \lim_{k\rightarrow\infty} S_k$, and so $\sum_{n=0}^\infty b_n$ converges. \item {\bf Integral test:} the definition of convergence for the integral $\int_0^\infty f(x) {\rm d}x$ is that the limit $\lim_{M\rightarrow\infty} \int_0^M f(x) {\rm d}x$ exists (and is finite). Recall also that $\int_0^M f(x) {\rm d}x$ is the area under the graph of $f(x)$ over the interval $[0, M]$, and that $\int_0^\infty f(x) {\rm d}x$ is the area under the graph of $f(x)$ over $[0,\infty)$. \medskip \noindent Suppose that $\lim_{M\rightarrow\infty} \int_0^M f(x) {\rm d}x$ exists. For each $n$ satisfying $1\le n\le M$, consider the rectangle $R_n$ over the interval $[n-1, n]$ with height $f(n) = a_n$. Since $f$ is decreasing, the rectangle $R_n$ is contained entirely under the graph of $f$, and the area of $R_n$ is ${\rm base}\cdot {\rm height} = 1\cdot f(n) = a_n$. So, comparing areas, we see that \[ \sum_{n=1}^M (\mbox{area of } R_n) =\sum_{n=1}^M a_n \le \int_0^M f(x) {\rm d}x. \] Since the sequence $\{ \int_0^M f(x) {\rm d}x\}$ is monotone increasing (since each $\int_M^{M+1} f(x) {\rm d}x$ is positive) and bounded (by $\lim_{M\rightarrow\infty} \int_0^M f(x) {\rm d}x$), we see that the sequence of partial sums of $\sum_{n=1}^\infty a_n$ is also a bounded monotone sequence, hence convergent. That is, $\sum_{n=1}^\infty a_n$ converges. \medskip \noindent Suppose now that $\sum_{n=0}^\infty a_n$ converges. For each $n\ge 1$, let $W_n$ be the rectangle over the interval $[n-1,n]$ with height $f(n-1) =a_{n-1}$. The region under the graph of $f$ over $[0,M]$ is contained in the union of the rectangles $W_1\cup\cdots \cup W_M$, and so comparing areas, we see that \[ \int_0^M f(x) {\rm d}x \le \sum_{n=1}^M (\mbox{area of } W_n) =\sum_{n=1}^M a_{n-1}.
\] As above, the sequence $\{ \int_0^M f(x) {\rm d}x\}$ is monotonically increasing and bounded (by $\sum_{n=0}^\infty a_n$), and so $\lim_{M\rightarrow\infty} \int_0^M f(x) {\rm d}x$ exists (and is finite), as desired. \item {\bf Ratio test:} [note: the proofs of the ratio and root tests are similar to each other, but different from the proofs already given, in that they don't use partial sums, but instead use comparison to an appropriately chosen geometric series.] \medskip \noindent We are given that $\lim_{n\rightarrow\infty} \frac{a_{n+1}}{a_n} = L$ exists. Suppose that $L <1$. Choose some $\mu$ so that $L < \mu < 1$; applying the definition of limit with $\varepsilon = \mu -L$, there exists $M$ so that $\frac{a_{n+1}}{a_n} <\mu$ for $n \ge M$. (Note the change from the usual $n >M$ to $n\ge M$, made here purely for notational convenience.) So, $a_{M+1} < \mu a_M$, and $a_{M+2} <\mu a_{M+1} < \mu^2 a_M$, and in general, we have that $a_{M+k} < \mu^k a_M$ for $k\ge 0$. (We're using here that the $a_n$ are all positive, so that among other things, the inequalities don't change direction when we multiply through by $a_n$.) Since the geometric series $\sum_{k=0}^\infty \mu^k$ converges (since $\mu <1$), the second comparison test yields that the truncated series $\sum_{n=M}^\infty a_n$ converges, and hence that the original series $\sum_{n=0}^\infty a_n$ converges. \medskip \noindent Suppose now that $L >1$, and essentially repeat the argument. Choose some $\eta$ so that $1 < \eta < L$; applying the definition of limit with $\varepsilon = L -\eta$, there exists $M$ so that $\frac{a_{n+1}}{a_n} >\eta$ for $n \ge M$. (Note the change from the usual $n >M$ to $n\ge M$, made here purely for notational convenience.) So, $a_{M+1} > \eta a_M$, and $a_{M+2} > \eta a_{M+1} > \eta^2 a_M$, and in general, we have that $a_{M+k} >\eta^k a_M$ for $k\ge 0$. 
(We're using here that the $a_n$ are all positive, so that among other things, the inequalities don't change direction when we multiply through by $a_n$.) Since the geometric series $\sum_{k=0}^\infty \eta^k$ diverges (since $\eta >1$), the first comparison test yields that the truncated series $\sum_{n=M}^\infty a_n$ diverges, and hence that the original series $\sum_{n=0}^\infty a_n$ diverges. \medskip \noindent [The reason this proof does not work when $L =1$ is that we cannot find a number between $L$ and $1$, as we did in both of the parts of the proof just given.] \item {\bf Root test:} The proof here is very similar to the proof just given (and fails when $L =1$ for the same reason). When $L <1$, again choose $\mu$ satisfying $L <\mu <1$, and then apply the definition of limit to find $M$ so that $(a_n)^{1/n} <\mu$ for $n \ge M$. Then, taking the $n^{th}$ power of both sides, we get that $a_n < \mu^n$ for all $n\ge M$, and so again we can use the second comparison test with the convergent geometric series $\sum_{n=M}^\infty \mu^n$ to get convergence of $\sum_{n=0}^\infty a_n$. \medskip \noindent When $L >1$, choose $\eta$ satisfying $1 <\eta <L$, and then apply the definition of limit to find $M$ so that $(a_n)^{1/n} >\eta$ for $n\ge M$, so that $a_n > \eta^n$ for $n\ge M$. By the first comparison test with the divergent geometric series $\sum_{n=M}^\infty \eta^n$, we get that $\sum_{n=0}^\infty a_n$ diverges. \item {\bf Alternating series test:} start by considering the partial sums $S_k$ for $k$ odd: \[ S_{2p+1} =\sum_{n=0}^{2p+1} (-1)^n a_n = (a_0 - a_1) + (a_2 -a_3) +\cdots + (a_{2p} - a_{2p+1}). \] Each term in parentheses $a_{2s} - a_{2s+1}$ is non-negative, since $a_{2s+1}\le a_{2s}$ by assumption, and so the odd partial sums $S_{2p+1}$ are all non-negative, and are monotonically non-decreasing, as $S_{2p+3} =S_{2p+1} + (a_{2p+2} - a_{2p+3}) \ge S_{2p+1}$.
Also, by grouping the terms in $S_{2p+1}$ differently, namely as \[ S_{2p+1} =\sum_{n=0}^{2p+1} (-1)^n a_n = a_0 - (a_1 -a_2) - (a_3 -a_4) - \cdots - (a_{2p-1} -a_{2p}) - a_{2p+1}, \] and again using that the parenthetical terms are non-negative, we see that $S_{2p+1}\le a_0$ for all $p$, and so the odd partial sums form a bounded monotone sequence. Let $S =\lim_{p\rightarrow\infty} S_{2p+1}$. \medskip \noindent We need to show now that the even partial sums $S_{2p}$ converge to the same limit. However, since $S_{2p} = S_{2p-1} + a_{2p}$ and since $\lim_{p\rightarrow\infty} a_{2p} =0$, we have that \[ \lim_{p\rightarrow\infty} S_{2p} = \lim_{p\rightarrow\infty} S_{2p-1} + \lim_{p\rightarrow\infty} a_{2p} = S + 0 = S,\] and so the sequence $\{ S_k\}$ of all partial sums converges to $S$. That is, the series $\sum_{n=0}^\infty (-1)^n a_n$ converges. \end{itemize} \end{proof} \begin{example} We use the integral test to show that the series $\sum_{n=1}^\infty \frac{1}{n^s}$ from Example \ref{zeta-series} converges for $s >1$. Recall that we have already seen that this series diverges for $s\le 1$. \medskip \noindent So, consider the function $f(x) =\frac{1}{x^s} = x^{-s}$, so that $\frac{1}{n^s} =f(n)$. Since $s >1$, $f'(x) = -s \frac{1}{x^{s +1}} <0$ for all $x >0$, and so $f(x)$ is decreasing. Further, \begin{eqnarray*} \int_1^\infty f(x) {\rm d}x & = & \lim_{M\rightarrow\infty} \int_1^M x^{-s} {\rm d}x \\ & = & \lim_{M\rightarrow\infty} \left. \frac{1}{-s+1}\, x^{-s+1} \right|_1^M \\ & = & \lim_{M\rightarrow\infty} \frac{1}{-s+1} \left( \frac{1}{M^{s-1}} - 1 \right) = \frac{1}{s-1}. \end{eqnarray*} Since the integral converges, the series converges, as desired. \medskip \noindent It is known that for $s$ an even positive integer, $\sum_{n=1}^\infty \frac{1}{n^s}$ is a rational multiple of $\pi^s$. Moreover, there is an explicit formula for the sum of this series.
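\medskip \noindent For instance, Euler famously showed that \[ \sum_{n=1}^\infty \frac{1}{n^2} =\frac{\pi^2}{6} \quad \mbox{ and } \quad \sum_{n=1}^\infty \frac{1}{n^4} =\frac{\pi^4}{90}. \]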
\medskip \noindent For $s$ an odd positive integer, we have already seen that this series diverges for $s =1$ (as this is the harmonic series). Further, it is known that $\sum_{n=1}^\infty \frac{1}{n^3}$ is an irrational number, but it is not known whether $\sum_{n=1}^\infty \frac{1}{n^3}$ is a rational multiple of $\pi^3$. Little is known about $\sum_{n=1}^\infty \frac{1}{n^s}$ for $s$ an odd positive integer $s\ge 5$, other than that it is a convergent series. \end{example} \begin{method} The first test to apply is always the $n^{th}$ term test for divergence, whether you write out the details or just apply the test mentally. Beyond that, you need to sort the remaining tests into your own personal order of preference, and then go through your list with each series until you get to a test that yields either convergence or divergence. \medskip \noindent My personal preference is to try to use the comparison tests before trying any of the others. I'll then move on to the ratio test, the limit comparison test, and end with the root and integral tests. This is just the way that I work. I also sometimes tend not to use the most obvious test, but to try and see if I can be clever using one of the others. \medskip \noindent In all the series problems which follow, there is no single correct way to do any problem. For each problem, there are many methods that work. \end{method} \begin{example} For a convergent series $\sum_{n=1}^\infty a_n$ with positive terms, prove that $\sum_{n=1}^\infty \frac{a_n}{n}$ converges. \medskip \noindent Let $S_k =\sum_{n=1}^k a_n$ be the $k^{th}$ partial sum of $\sum_{n=1}^\infty a_n$. Consider the $k^{th}$ partial sum $T_k =\sum_{n=1}^k \frac{a_n}{n}$ of the new series $\sum_{n=1}^\infty \frac{a_n}{n}$, and compare $T_k$ to $S_k$: since $\frac{a_n}{n} \le a_n$ for all $n\ge 1$, we have that $T_k \le S_k$ for all $k\ge 1$.
Since $\sum_{n=1}^\infty a_n$ is a convergent series with positive terms, its sequence of partial sums $\{ S_k\}$ is a monotonically increasing sequence that converges to the sum $S =\sum_{n=1}^\infty a_n$. In particular, $S_k\le S$ for all $k\ge 1$. Since $T_k\le S_k\le S$, we see that $\{ T_k\}$ is a bounded monotonically increasing sequence, and hence converges. So, $\sum_{n=1}^\infty \frac{a_n}{n}$ is a convergent series. \end{example} \begin{exercise} In each of the following, $\sum_{n=1}^\infty a_n$ is a convergent series with positive terms. \begin{enumerate} \item Prove that, if $\{ c_n\}$ is a sequence of positive terms satisfying $\lim_{n\rightarrow\infty} c_n =0$, then $\sum_{n=1}^\infty a_n c_n$ converges; \item Prove that, if $\{ c_n\}$ is a sequence of positive terms satisfying $\lim_{n\rightarrow\infty} c_n =c\neq 0$, then $\sum_{n=1}^\infty a_n c_n$ converges. \end{enumerate} \label{mucking-series} \end{exercise} \medskip \noindent In general, a series whose terms are positive is much easier to handle, particularly in terms of determining convergence and divergence. One way to handle a general series, that is, one without the restriction that the terms be positive, is to compare it to a series with positive terms. \begin{definition} Let $\sum_{n=0}^\infty a_n$ be a series. Consider the associated series $\sum_{n=0}^\infty |a_n|$, whose terms are all positive (or at least non-negative). Say that $\sum_{n=0}^\infty a_n$ {\bf converges absolutely} if the associated series $\sum_{n=0}^\infty |a_n|$ converges. \medskip \noindent Note that absolute convergence and convergence are the same for a series with positive terms. \end{definition} \medskip \noindent The connection between convergence and absolute convergence is given in the following proposition. \begin{proposition} Let $\sum_{n=0}^\infty a_n$ be a series. If $\sum_{n=0}^\infty a_n$ converges absolutely, then $\sum_{n=0}^\infty a_n$ converges.
\label{absolute-implies-convergence} \end{proposition} \noindent \begin{proof} {\bf of Proposition \ref{absolute-implies-convergence}:} Let $\sum_{n=0}^\infty a_n$ be a series that converges absolutely, so that $\sum_{n=0}^\infty |a_n|$ converges. By the arithmetic of series, the series $\sum_{n=0}^\infty 2|a_n|$ then also converges. \medskip \noindent We wish to understand whether or not the original series $\sum_{n=0}^\infty a_n$ converges. Note that $0\le a_n +|a_n|\le 2|a_n|$, and so by the second comparison test, the series $\sum_{n=0}^\infty (a_n +|a_n|)$ converges. Since $\sum_{n=0}^\infty |a_n|$ converges, by assumption, their difference $\sum_{n=0}^\infty (a_n +|a_n|) -\sum_{n=0}^\infty |a_n| =\sum_{n=0}^\infty a_n$ converges, by the arithmetic of series, and we are done. \end{proof} \medskip \noindent In Theorem \ref{series-tests}, we stated the ratio and root tests for series with positive terms. Combining Theorem \ref{series-tests} with Proposition \ref{absolute-implies-convergence}, we obtain the ratio and root tests for series with non-zero terms, as tests to determine whether the series converges absolutely or diverges. \begin{proposition} {\bf Ratio and root tests for general series:} Let $\sum_{n=0}^\infty a_n$ be a series with non-zero terms, so that $a_n\ne 0$ for all $n$. \begin{itemize} \item {\bf Ratio test:} Suppose that $\lim_{n\rightarrow\infty} \left| \frac{a_{n+1}}{a_n}\right| = L$ exists. If $L <1$, then $\sum_{n=0}^\infty a_n$ converges absolutely. If $L > 1$, then $\sum_{n=0}^\infty a_n$ diverges. If $L =1$, this test gives no information. \item {\bf Root test:} Suppose that $\lim_{n\rightarrow\infty} (| a_n| )^{1/n} = L$ exists. If $L <1$, then $\sum_{n=0}^\infty a_n$ converges absolutely. If $L > 1$, then $\sum_{n=0}^\infty a_n$ diverges. If $L =1$, this test gives no information. 
\end{itemize} \label{ratio-root-general} \end{proposition} \begin{definition} Proposition \ref{absolute-implies-convergence} gives us that a series that converges absolutely necessarily converges. The converse however is not true: there are series that converge but do not converge absolutely. \medskip \noindent To give this possibility a name, say that a series {\bf converges conditionally} if it converges but does not converge absolutely. \end{definition} \begin{example} The alternating series test gives us a way to construct an example of a series that converges conditionally. Consider the {\bf alternating harmonic series} $\sum_{n=1}^\infty (-1)^n \frac{1}{n}$. Since $\frac{1}{n} >\frac{1}{n+1}$ for all $n\ge 1$ and since $\lim_{n\rightarrow\infty} \frac{1}{n} =0$, the alternating series test yields that $\sum_{n=1}^\infty (-1)^n \frac{1}{n}$ converges. However, when we take absolute values of all the terms in this series, we get the harmonic series $\sum_{n=1}^\infty |(-1)^n \frac{1}{n}| =\sum_{n=1}^\infty \frac{1}{n}$, which we have already seen diverges. So, the alternating harmonic series $\sum_{n=1}^\infty (-1)^n \frac{1}{n}$ converges but does not converge absolutely. That is, it converges conditionally. \label{alternating-harmonic} \end{example} \begin{example} Determine whether the series $\sum_{n=0}^\infty e^{-n}$ converges absolutely, converges conditionally, or diverges. If the series converges, determine its limit, where possible. \medskip \noindent {\bf Converges absolutely:} we apply the ratio test, as \[ \lim_{n\rightarrow\infty} \frac{e^{-(n+1)}}{e^{-n}} =\lim_{n\rightarrow\infty} e^{-1} =\frac{1}{e} <1, \] and so $\sum_{n=0}^\infty e^{-n}$ converges. (We make implicit use of the fact that for a series of positive terms, convergence and absolute convergence are the same notion.)
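\medskip \noindent We can also determine the limit here: $\sum_{n=0}^\infty e^{-n} =\sum_{n=0}^\infty \left( \frac{1}{e} \right)^n$ is a geometric series with ratio $\frac{1}{e} <1$, and so by Example \ref{geometric-example}, \[ \sum_{n=0}^\infty e^{-n} =\frac{1}{1 -\frac{1}{e}} =\frac{e}{e -1}. \]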
\end{example} \begin{exercise} {\bf The series scavenger hunt:} for each of the infinite series given below, do the following: \begin{itemize} \item Determine whether the series converges absolutely, converges conditionally, or diverges; \item if the series converges, determine its limit, where possible. \end{itemize} \begin{enumerate} \item $\sum_{n=0}^\infty \frac{2^{n-1}}{3^n}$; \item $\sum_{n=0}^\infty (1.01)^n$; \item $\sum_{n=1}^\infty (\frac{e}{10})^n$; \item $\sum_{n=1}^\infty \frac{1}{n^2+n+1}$; \item $\sum_{n=1}^\infty \frac{1}{n + \sqrt{n}}$; \item $\sum_{n=1}^\infty \frac{1}{1+3^n}$; \item $\sum_{n=2}^\infty \frac{10 n^2}{n^3 - 1}$; \item $\sum_{n=1}^\infty \frac{1}{\sqrt{37n^3 + 3}}$; \item $\sum_{n=1}^\infty \frac{\sqrt{n}}{n^2+n}$; \item $\sum_{n=2}^\infty \frac{2}{\ln(n)}$; \item $\sum_{n=1}^\infty \frac{\sin^2(n)}{n^2+1}$; \item $\sum_{n=1}^\infty \frac{n+2^n}{n+3^n}$; \item $\sum_{n=2}^\infty \frac{1}{n^2\ln(n)}$; \item $\sum_{n=1}^\infty \frac{n^3+1}{n^4+2}$; \item $\sum_{n=1}^\infty \frac{1}{n + n^{3/2}}$; \item $\sum_{n=1}^\infty \frac{10 n^2}{n^4+1}$; \item $\sum_{n=2}^\infty \frac{n^2 -n}{n^4 +2}$; \item $\sum_{n=1}^\infty \frac{1}{\sqrt{n^2+1}}$; \item $\sum_{n=1}^\infty \frac{1}{3+5^n}$; \item $\sum_{n=2}^\infty \frac{1}{n-\ln(n)}$; \item $\sum_{n=1}^\infty \frac{\cos^2(n)}{3^n}$; \item $\sum_{n=1}^\infty \frac{1}{2^n+3^n}$; \item $\sum_{n=1}^\infty \frac{1}{n^{(1+\sqrt{n})}}$; \item $\sum_{n=1}^\infty 1 / (2^n (n+1))$; \item $\sum_{n=1}^\infty n! / (n^2 e^n)$; \item $\sum_{n=2}^\infty \sqrt{n} / (3^n \ln(n))$; \item $\sum_{n=2}^\infty (2n)! 
/ (n!)^3$; \item $\sum_{n=1}^\infty (1 - (-1)^n) / n^4$; \item $\sum_{n=1}^\infty (2+\cos(n)) / (n + \ln(n))$; \item $\sum_{n=3}^\infty 1 / (n \ln(n) \sqrt{\ln(\ln(n))})$; \item $\sum_{n=1}^\infty n^n / (\pi^n n!)$; \item $\sum_{n=1}^\infty 2^{n+1} / n^n$; \item $\sum_{n=1}^\infty (-1)^{n-1} / \sqrt{n}$; \item $\sum_{n=1}^\infty \cos(\pi n) / ( (n+1) \ln(n+1) )$; \item $\sum_{n=1}^\infty (-1)^n (n^2 -1) / ( n^2+1)$; \item $\sum_{n=1}^\infty (-1)^n / (n \pi^n)$; \item $\sum_{n=1}^\infty (-1)^n (20n^2 -n -1) / (n^3+n^2+33 )$; \item $\sum_{n=1}^\infty n! / (-100)^n$; \item $\sum_{n=3}^\infty 1 / (n \ln(n) (\ln(\ln(n)))^2)$; \item $\sum_{n=1}^\infty (1 + (-1)^n) / \sqrt{n}$; \item $\sum_{n=1}^\infty e^n \cos^2(n) / (1+\pi^n)$; \item $\sum_{n=2}^\infty n^4 / n!$; \item $\sum_{n=1}^\infty (2n)! 6^n / (3n)!$; \item $\sum_{n=1}^\infty n^{100} 2^n / \sqrt{n!}$; \item $\sum_{n=3}^\infty (1+n!) / (1+n)!$; \item $\sum_{n=1}^\infty 2^{2n} (n!)^2 / (2n)!$; \item $\sum_{n=1}^\infty (-1)^n / ( n^2 + \ln(n) )$; \item $\sum_{n=1}^\infty (-1)^{2n} / 2^n $; \item $\sum_{n=1}^\infty (-2)^n / n!$; \item $\sum_{n=0}^\infty -n / (n^2+1)$; \item $\sum_{n=1}^\infty 100\cos(n\pi) / (2n+3)$; \item $\sum_{n=10}^\infty \sin((n+1/2)\pi) / \ln(\ln(n))$; \item $\sum_{n=1}^\infty (2n)! 
/ ( 2^{2n} (n!)^2)$; \item $\sum_{n=1}^\infty (n / (n+1) )^{n^2}$; \item $\sum_{n=1}^\infty 1 / (1+2+\cdots+n)$; \item $\sum_{n=1}^\infty \ln(n) / (2n^3 - 1)$; \item $\sum_{n=1}^\infty \sin(n) / n^2$; \item $\sum_{n=1}^\infty (-1)^n (n-1) / n$; \item $\sum_{n=1}^\infty (-1)^n 2^{3n} / 7^n$; \item $\sum_{n=1}^\infty \cos(n) / n^4$; \item $\sum_{n=1}^\infty (-1)^n 3^n / (n(2^n + 1))$; \item $\sum_{n=1}^\infty (-1)^{n-1} n / (n^2+1)$; \item $\sum_{n=2}^\infty (-1)^{n-1} / (n\ln^2(n))$; \item $\sum_{n=1}^\infty (-1)^{n-1} 2^n / n^2$; \item $\sum_{n=1}^\infty (-1)^n \sin(\sqrt{n}) / n^{3/2}$; \item $\sum_{n=1}^\infty n^4 e^{-n^2}$; \item $\sum_{n=1}^\infty \sin(n\pi /2) / n$; \item $\sum_{n=2}^\infty 1 / (\ln(n))^8$; \item $\sum_{n=13}^\infty 1 /( n\ln(n) (\ln(\ln(n)))^p )$, where $p >0$ is an arbitrary positive real number; \end{enumerate} \label{series-scavenger} \end{exercise} \begin{exercise} Let $\sum_{n=1}^\infty a_n$ be a convergent series of positive terms. Show that for each $s\ge 1$, the series $\sum_{n=1}^\infty a_n^s$ is also convergent. \label{another-series} \end{exercise} \begin{example} {\bf rearranging conditionally convergent series:} There is a rather strange fact that illustrates the difference between an absolutely convergent and a conditionally convergent series. First, we note that for an absolutely convergent series, rearranging the terms does not affect the sum of the series. \medskip \noindent However, for a conditionally convergent series, rearranging the terms can affect the sum of the series, and in fact, we can play a wonderful game. Let $\sum_{n=0}^\infty a_n$ be a conditionally convergent series with non-zero terms, so that $\sum_{n=0}^\infty a_n$ converges but $\sum_{n=0}^\infty |a_n|$ diverges. (The restriction to a series with non-zero terms is not essential, but it makes the exposition a bit smoother.) Choose any number $S\in {\bf R}$.
Then, there is a {\bf rearrangement} $\sum_{n=0}^\infty b_n$ of $\sum_{n=0}^\infty a_n$ (so the same terms, but in a different order) so that $\sum_{n=0}^\infty b_n$ converges to $S$. \medskip \noindent Start by rewriting the original series $\sum_{n=0}^\infty a_n$: set \[ p_n = \left\{ \begin{array}{ll} a_n & \mbox{ if $a_n >0$}; \\ 0 & \mbox{ if $a_n \le 0$}; \end{array}\right. \] and \[ q_n = \left\{ \begin{array}{ll} 0 & \mbox{ if $a_n > 0$}; \\ a_n & \mbox{ if $a_n \le 0$}; \end{array}\right. \] Note that both $\sum_{n=0}^\infty p_n$ and $\sum_{n=0}^\infty q_n$ diverge: since $a_n =p_n +q_n$ for all $n$, if $\sum_{n=0}^\infty p_n$ converges, then $\sum_{n=0}^\infty q_n =\sum_{n=0}^\infty (a_n - p_n)$ converges, by the arithmetic of series. However, $\sum_{n=0}^\infty p_n$ is a series of non-negative terms, for which convergence and absolute convergence are the same notion, and $\sum_{n=0}^\infty q_n$ is a series of non-positive terms, for which convergence and absolute convergence are the same notion. But if both $\sum_{n=0}^\infty p_n$ and $\sum_{n=0}^\infty q_n$ converge absolutely, then so does their sum $\sum_{n=0}^\infty a_n$, a contradiction. Hence, both $\sum_{n=0}^\infty p_n$ and $\sum_{n=0}^\infty q_n$ diverge. \medskip \noindent Given $S$, build the new series as follows: start by choosing elements $b_0 = p_0$, $b_1 = p_1,\ldots, b_m = p_m$ (ignoring all the $p_n$ that are equal to $0$) from the series $\sum_{n=0}^\infty p_n$ until $\sum_{n=0}^m b_n > S$ (but $\sum_{n=0}^{m-1} b_n \le S$). Then, choose elements $b_{m+1} = q_0$, $b_{m+2} = q_1,\ldots, b_{m+k+1} = q_k$ (ignoring all the $q_n$ that are equal to $0$) until $\sum_{n=0}^{m+k+1} b_n < S$ (but $\sum_{n=0}^{m+k} b_n \ge S$). 
Then, choose the next elements of $\sum_{n=0}^\infty p_n$ (again ignoring the terms equal to $0$) until the sum is greater than $S$, and then choose the next elements of $\sum_{n=0}^\infty q_n$ (again ignoring the terms equal to $0$) until the sum is less than $S$, and repeat indefinitely. This gives a rearrangement $\sum_{n=0}^\infty b_n$ of the original series $\sum_{n=0}^\infty a_n$. (Ignoring the terms equal to $0$ in constructing the $b_n$ means that the only terms appearing in the series $\sum_{n=0}^\infty b_n$ are the same as those appearing in the original series $\sum_{n=0}^\infty a_n$.) The divergence of $\sum_{n=0}^\infty p_n$ and $\sum_{n=0}^\infty q_n$ enters into this construction, as it ensures that we can in fact continue this process indefinitely. \medskip \noindent It remains only to check that $\sum_{n=0}^\infty b_n$ converges to $S$, but this follows immediately from the construction of this new series, and the fact that $\lim_{n\rightarrow\infty} p_n =\lim_{n\rightarrow\infty} q_n =0$. \end{example} \section{Power series} \label{power-series} \begin{definition} A {\bf power series} is an infinite series with a variable. Specifically, a power series is an infinite series of the form \[ \sum_{n=0}^\infty a_n (x -a)^n, \] where the $a_n$ are real numbers, where $x$ is a variable, and where $a$ is a real number, the {\bf center} of the power series. \end{definition} \medskip \noindent The main question we ask about the power series $\sum_{n=0}^\infty a_n (x -a)^n$ is, for what values of $x$ does this series converge? The set of values of $x$ for which the power series converges will always be an interval, the {\bf interval of convergence}, centered at $a$. Note that the series always converges for $x =a$. The interval of convergence will have some radius $r$, the {\bf radius of convergence}. 
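As a quick numerical sanity check on these definitions, the ratio $|a_n / a_{n+1}|$ tends to the radius of convergence whenever that limit exists, and partial sums settle down for $x$ inside the interval of convergence while growing without bound outside it. The following Python sketch illustrates this; the helper names and the sample coefficients $a_n = 1/(n+1)$ are chosen purely for illustration and are not part of the development.

```python
from fractions import Fraction

def ratio_radius_estimate(coeff, n=5000):
    # |a_n / a_(n+1)| tends to the radius of convergence
    # whenever this limit exists (the ratio test).
    return abs(coeff(n) / coeff(n + 1))

def partial_sum(coeff, x, k):
    # k-th partial sum S_k = sum_{n=0}^{k} a_n * x**n
    return sum(coeff(n) * x**n for n in range(k + 1))

# Sample coefficients a_n = 1/(n+1): a power series centered at 0
# whose radius of convergence is 1.
a = lambda n: Fraction(1, n + 1)

print(float(ratio_radius_estimate(a)))  # close to 1
print(partial_sum(a, 0.5, 50))          # stabilizes near 2 ln 2, inside the interval
print(partial_sum(a, 2.0, 50))          # partial sums blow up outside the interval
```

Exact rational coefficients keep the ratio estimate free of floating-point error. Note that the behavior at the two endpoints of the interval of convergence cannot be detected by an experiment of this kind; the endpoints must be checked by hand.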
\medskip \noindent So, if the power series $\sum_{n=0}^\infty a_n (x -a)^n$ has radius of convergence $r$, then the series converges absolutely for all values of $x$ in the open interval $(a -r, a+r)$ and diverges for all values of $x$ in the two open rays $(-\infty, a-r)$ and $(a+r, \infty)$. The power series may or may not converge at the two endpoints of the interval; these need to be checked separately. \medskip \noindent Note that there are power series whose radius of convergence is $0$, and these series converge only at their center value. There are also power series whose radius of convergence is $\infty$, and these series converge for all values of $x$ and hence their interval of convergence is all of ${\bf R}$. \medskip \noindent This split between convergence and divergence, with only two points at which convergence needs to be checked by hand, namely the endpoints of the interval of convergence, follows from the ratio and root tests, Proposition \ref{ratio-root-general}. Consider the following example. \begin{example} Consider the power series $\sum_{n=0}^\infty x^n / (n+1)$, which is a power series centered at $a =0$. We always begin the same way with a power series, by using the ratio test. The ratio test asks us to calculate \[ \lim_{n\rightarrow\infty} \left| \frac{x^{n+1}/((n+1) +1)}{x^n/(n+1)} \right| = |x| \lim_{n\rightarrow\infty} \frac{n+1}{n+2} = |x|. \] By Proposition \ref{ratio-root-general}, this series converges absolutely for $|x| <1$ and diverges for $|x| >1$. So, the radius of convergence is $1$. \medskip \noindent Proposition \ref{ratio-root-general} yields that the open interval $(-1,1)$ lies in the interval of convergence. In order to determine the interval of convergence, we need to check the behavior of the series at the two endpoints of this interval, namely $x =1$ and $x =-1$. \medskip \noindent At $x =1$, the series becomes $\sum_{n=0}^\infty 1/(n+1)$, which is the harmonic series and hence diverges.
\medskip \noindent At $x =-1$, this series becomes $\sum_{n=0}^\infty (-1)^n/(n+1)$, which is the alternating harmonic series, and hence converges conditionally. \medskip \noindent So, the interval of convergence is the half-open interval $[-1,1)$. \end{example} \begin{exercise} {\bf The power series scavenger hunt:} for each of the power series given below, determine the radius and interval of convergence. \begin{enumerate} \item $\sum_{n=0}^\infty (-1)^n x^n / n!$; \item $\sum_{n=1}^\infty 5^n x^n / n^2$; \item $\sum_{n=1}^\infty x^n / (n(n+1))$; \item $\sum_{n=1}^\infty (-1)^n x^n / \sqrt{n}$; \item $\sum_{n=0}^\infty (-1)^n x^{2n+1} / (2n + 1)!$; \item $\sum_{n=0}^\infty 3^n x^n / n!$; \item $\sum_{n=0}^\infty x^n / (1+n^2)$; \item $\sum_{n=1}^\infty (-1)^{n+1} (x+1)^n / n$; \item $\sum_{n=0}^\infty 3^n (x+5)^n / 4^n$; \item $\sum_{n=1}^\infty (-1)^n (x+1)^{2n+1} / (n^2+4)$; \item $\sum_{n=0}^\infty \pi^n (x-1)^{2n} / (2n+1)!$; \item $\sum_{n=2}^\infty x^n / (\ln(n))^n$; \item $\sum_{n=0}^\infty 3^n x^n$; \item $\sum_{n=0}^\infty n! x^n / 2^n$; \item $\sum_{n=1}^\infty (-2)^n x^{n+1} / (n+1)$; \item $\sum_{n=1}^\infty (-1)^n x^{2n} / (2n)!$; \item $\sum_{n=1}^\infty (-1)^n x^{3n} / n^{3/2}$; \item $\sum_{n=2}^\infty (-1)^{n+1} x^n / (n\ln^2(n))$; \item $\sum_{n=0}^\infty (x-3)^n / 2^n$; \item $\sum_{n=1}^\infty (-1)^n (x-4)^n / (n+1)^2$; \item $\sum_{n=0}^\infty (2n+1)! (x-2)^n / n^3$; \item $\sum_{n=1}^\infty \ln(n) (x-3)^n / n$; \item $\sum_{n=0}^\infty (2x-3)^n / 4^{2n}$; \item $\sum_{n=2}^\infty (x-a)^n / b^n$, where $b>0$ is arbitrary. \item $\sum_{n=0}^\infty (n+p)! x^n / (n!(n+q)!)$, where $p$, $q\in {\bf N}$; \item $\sum_{n=1}^\infty x^{n-1} / (n 3^n)$; \item $\sum_{n=1}^\infty (-1)^{n-1} x^{2n-1} / (2n-1)!$; \item $\sum_{n=1}^\infty n! 
(x-a)^n$, where $a\in {\bf R}$ is arbitrary; \item $\sum_{n=1}^\infty n (x-1)^n / (2^n (3n-1))$; \end{enumerate} \label{power-series-scavenger} \end{exercise} \begin{exercise} Prove, if $\{ a_n\}$ is a sequence satisfying $\lim_{n\rightarrow\infty} |a_n|^{1/n} = L\neq 0$, then the power series $\sum_{n=0}^\infty a_n x^n$ has radius of convergence $\frac{1}{L}$. \label{radius-exercise} \end{exercise} \medskip \noindent Note that, if we use the ratio test to determine the radius of convergence of a power series, we cannot then use the ratio test to determine whether the series converges or diverges at the endpoints of the interval of convergence. This is because the limit is equal to $1$ at the endpoints of the interval, and a limit equal to $1$ is precisely the case in which the ratio test gives no information. \begin{exercise} For each of the following series, determine the values of $x$ for which the series converges. \begin{enumerate} \item $\sum_{n=1}^\infty ((x+2)/(x-1))^n / (2n-1)$; \item $\sum_{n=1}^\infty 1/((x+n)(x+n-1))$; \end{enumerate} \label{semi-power-series} \end{exercise} \section{Continuity} \label{continuity} \begin{definition} $f$ is {\bf continuous at $a$} if $\lim_{x\rightarrow a} f(x) = f(a)$. This is actually a very concise definition, containing several independent pieces: \begin{itemize} \item first, that $\lim_{x\rightarrow a} f(x)$ exists; \item second, that $f$ is defined at $a$; \item third, that these two numbers $\lim_{x\rightarrow a} f(x)$ and $f(a)$ are equal. \end{itemize} \medskip \noindent In general, a function $f: (c,d)\rightarrow {\bf R}$ with domain an interval in ${\bf R}$ is continuous if $f$ is continuous at every $a$ in the interval $(c,d)$. \end{definition} \begin{exercise} Prove, using the definition, that each of the following functions is continuous at all points of ${\bf R}$.
\begin{enumerate} \item $h_n(x) =x^n$, where $n\in {\bf N}$; \item $g(x) = c$, where $c\in {\bf R}$; \item $f$ is a function on ${\bf R}$ which satisfies $|f(x)-f(y)|\leq c|x-y|$ for all $x$, $y\in {\bf R}$, where $c >0$ is a constant. \end{enumerate} \label{some-continuous} \end{exercise} \medskip \noindent Since continuous functions are defined in terms of limits, the rules of arithmetic for limits of functions, as given in Theorem \ref{function-limit-thm}, extend immediately to rules of arithmetic for continuous functions. \begin{theorem} Let $f$ and $g$ be functions that are continuous at $a$. Then, the following hold. \begin{enumerate} \item the {\bf sum} $f+g$, defined by setting $(f+g)(x) =f(x) + g(x)$, is continuous at $a$; \item the {\bf difference} $f-g$, defined by setting $(f-g)(x) =f(x)- g(x)$, is continuous at $a$; \item the {\bf product} $f\cdot g$, defined by setting $(f\cdot g)(x) =f(x)g(x)$, is continuous at $a$; \item if $g(a)\ne 0$, then the {\bf quotient} $f/g$, defined by setting $(f/g)(x) =f(x)/g(x)$, is continuous at $a$; \end{enumerate} \label{continuous-facts} \end{theorem} \begin{proposition} If $f$ is continuous at $a$ and if $g$ is continuous at $f(a)$, then the composition $g\circ f(x) = g(f(x))$ is continuous at $a$. \label{continuous-composition} \end{proposition} \noindent \begin{proof} {\bf of Proposition \ref{continuous-composition}:} We need to show that $\lim_{x\rightarrow a} g(f(x)) = g(f(a))$. Since $g$ is continuous at $f(a)$, we know that for each $\varepsilon >0$, there exists $\mu >0$ so that if $| z - f(a) | <\mu$, then $|g(z) -g(f(a))| <\varepsilon$. [Here I'm using $z$ as a variable so that there aren't too many $x$'s running around.] \medskip \noindent We also know that $f$ is continuous at $a$, so that for each $\mu >0$, there exists $\delta >0$ so that if $| x-a| <\delta$, then $| f(x) -f(a)| <\mu$.
Take the $\mu$ that is output by the definition of continuity of $g$ at $f(a)$ and input it into the definition of continuity of $f$ at $a$: this produces a $\delta >0$ so that if $|x-a| <\delta$, then $|f(x) -f(a)| <\mu$, and so, applying the definition of continuity of $g$ at $f(a)$ with $z =f(x)$, we get that $|g(f(x)) - g(f(a))| <\varepsilon$, as desired. \end{proof} \begin{exercise} Prove, if $f$ is continuous and if $\lim_{x\rightarrow\infty} (f(x+1)-f(x)) =0$, that $\lim_{x\rightarrow\infty} f(x)/x =0$. \label{limit-cont-exercise} \end{exercise} \begin{definition} Let $f: [a,b]\rightarrow {\bf R}$ be a real-valued function whose domain is a closed interval. Say that $f$ is {\bf continuous on $[a,b]$} if $f$ is continuous at each point of the open interval $(a,b)$, and if $f(a) =\lim_{x\rightarrow a+} f(x)$ and $f(b) =\lim_{x\rightarrow b-} f(x)$. \label{continuous-on-closed} \end{definition} \medskip \noindent The following two theorems, Theorem \ref{max-value-prop} (Maximum value property for continuous functions) and Theorem \ref{int-value-prop} (Intermediate value property for continuous functions) are two of the most important properties of continuous functions. \begin{theorem} {\bf Maximum value property for continuous functions:} Let $f$ be a function that is continuous on the closed interval $[a,b]$. Then $f$ achieves its maximum on $[a,b]$; that is, there exists some $x_0$ in $[a,b]$ so that $f(x_0)\ge f(x)$ for all $x\in [a,b]$. \label{max-value-prop} \end{theorem} \begin{theorem} {\bf Intermediate value property for continuous functions:} Let $f$ be a function that is continuous on the closed interval $[a,b]$, and let $c$ be a number lying between $f(a)$ and $f(b)$. Then, there exists some $x_0$ in the open interval $(a,b)$ so that $f(x_0) =c$. [Pictorially, this theorem says that any horizontal line whose height lies between $f(a)$ and $f(b)$ must intersect the graph of $f$ over the interval $[a,b]$.]
\label{int-value-prop} \end{theorem} \begin{exercise} The {\bf minimum value property} states that, if $f$ is continuous on $[a,b]$, then $f$ achieves its minimum on $[a,b]$; that is, there exists some $y_0$ in $[a,b]$ so that $f(y_0)\le f(x)$ for all $x\in [a,b]$. Prove that a continuous function $f: [a,b]\rightarrow {\bf R}$ satisfies the minimum value property if it satisfies the maximum value property. \label{min-value} \end{exercise} \begin{example} For the function $f(x)$ which is continuous on the closed interval $[a,b]$ and satisfies $a < f(x) < b$ for all $x$ in $[a,b]$, we show that the equation $f(x) =x$ has a solution in $[a,b]$. Consider the function $g(x) = f(x) -x$, which is continuous on $[a,b]$ and satisfies $g(a) = f(a) -a > 0$ and $g(b) = f(b) -b< 0$. Applying the Intermediate value property to $g$, we see that there exists $c$ in $(a,b)$ so that $g(c) =0$, and hence so that $f(c) -c =0$. That is, $f(c) =c$, and so the equation $f(x) =x$ has a solution in $[a,b]$, as desired. \end{example} \begin{exercise} For each of the following functions described below, use the Intermediate value property for continuous functions to determine whether there is a solution to the given equation in the specified set. \begin{enumerate} \item $f(x) = x$, where $f(x)$ is continuous on the closed interval $[a,b]$ and satisfies $f(a) < a$ and $f(b) > b$; \item $f(x) = 0$ on the interval $[-a,a]$, where $a$ is an arbitrary positive real number and $f(x) = x^{1995} + 7654 x^{123} + x$; \item $x^2=\cos(x)$ for $x$ in $[-2,2]$; \item $\tan(x)=e^{-x}$ for $x$ in $[-1,1]$; \item $3\sin^2(x)=2\cos^3(x)$ for $x>0$; \item $3+x^5-1001x^2=0$ for $x>0$; \end{enumerate} \label{int-value-exercises} \end{exercise} \begin{example} {\bf solving equations by the method of bisection:} The intermediate value property allows us to determine whether an equation $f(x) =0$ has a solution on a closed interval $[a,b]$, but we may also iterate this process to find the location of a solution to an arbitrary degree of accuracy. Let's illustrate this by taking a specific example; the method works the same for all continuous functions. \medskip \noindent Consider $g(x) = x^2 -\cos(x)$ from the previous exercise. We determined that there exists a solution $c_1$ to $g(x) = 0$ in the interval $[0, 2]$, since $g(0) = -1 <0 $ and $g(2) = 4.4161 ... >0$.
To isolate this solution, let's break the interval in half, and see which half contains the solution: the value of $g$ at $1$ is $g(1) = (1)^2 -\cos(1) = 0.4597 ... >0$. Since $g(0) <0$ and $g(1) >0$, the intermediate value property yields the existence of a solution to $g(x) =0$ in $(0, 1)$. [Since $g(1) >0$ and $g(2) >0$, the intermediate value property yields no information about the possible existence of solutions in $[1,2]$. To answer that question, we would need to do something else.] \medskip \noindent Now break $[0, 1]$ in half: the value of $g$ at $0.5$ is $g(0.5) = (0.5)^2 -\cos(0.5) = -0.6276 ...$, and so there is a solution to $g(x) =0$ in $[0.5, 1]$. \medskip \noindent Now, break $[0.5, 1]$ in half: the value of $g$ at $0.75$ is $g(0.75) = (0.75)^2 -\cos(0.75) = -0.1692 ... <0$, and so there is a solution to $g(x) =0$ in $[0.75, 1]$. \medskip \noindent Now, break $[0.75, 1]$ in half: the value of $g$ at $0.875$ is $g(0.875) = (0.875)^2 -\cos(0.875) = 0.1246 ... >0$, and so there is a solution to $g(x) =0$ in $[0.75, 0.875]$. We're closing in on the solution, and we can continue in this way as long as we like. \medskip \noindent This is an easy method to teach a computer, since it involves evaluating a function, comparing numbers, and dividing by $2$. It is also possible to make this method a bit more intelligent: there is no reason to divide the intervals in the middle. For instance, in the last step done above, it would make sense to break the interval $[0.75, 0.875]$ closer to $0.875$ than to $0.75$, since the value of $g$ at $0.875$ is closer to $0$ than the value of $g$ at $0.75$. \end{example} \begin{proposition} Suppose that $f$ is continuous and that the sequence $\{ a_n\}$ converges to $a$. Then, the sequence $\{ f(a_n)\}$ converges to $f(a)$.
\label{convergence-cont} \end{proposition} \noindent \begin{proof} Since $f$ is continuous at $a$, for every $\varepsilon >0$, there exists some $\delta >0$ so that if $|x-a| <\delta$, then $|f(x) -f(a)| <\varepsilon$. Since $\{ a_n\}$ converges to $a$, for each $\mu >0$, there exists some $M$ so that if $n >M$, then $|a_n -a| <\mu$. So, suppose we are given some $\varepsilon >0$; let $\delta$ be the output of the definition of continuity of $f$ at $a$ for this $\varepsilon$, and take $\mu =\delta$ as the input in the definition of $\{ a_n\}$ converging to $a$. Then, for $n >M$, we have that $|a_n -a| <\mu =\delta$, and hence that $|f(a_n) -f(a)| <\varepsilon$, which is precisely the definition of $\{ f(a_n)\}$ converging to $f(a)$, as desired. [This proof should convince you, if you have not already been convinced, of the power of appropriate definitions.] \end{proof} \begin{exercise} Suppose that $f$ is continuous and that the sequence $c$, $f(c)$, $f(f(c))$, $f(f(f(c))),\ldots$ converges to $a$. Prove that $f(a)=a$. \label{interated-sequence} \end{exercise} \begin{definition} A function $f: {\bf R}\rightarrow {\bf R}$ is {\bf uniformly continuous} if for each $\varepsilon >0$, there exists $\delta >0$ so that if $|x-y| < \delta$, then $|f(x) -f(y)| <\varepsilon$. \end{definition} \medskip \noindent Note that this definition is very similar to the definition of continuity, except in one aspect: in the definition of continuity, the value of $\delta$ depends on both $\varepsilon$ and on the point at which continuity is being checked, while for uniform continuity, the value of $\delta$ depends only on $\varepsilon$ and not on the point at which the definition is being checked. To see that the two definitions are in fact different, consider the following example. \begin{example} The function $f: {\bf R}\rightarrow {\bf R}$ given by $f(x) =x^2$ is NOT uniformly continuous. Note however that since $f$ is a polynomial, it is continuous.
\medskip \noindent To see that $f$ is not uniformly continuous, we argue by contradiction. We start with a bit of algebra, namely $|f(x) -f(y)| = |x^2 -y^2 | = |x-y|\: |x+y|$. Suppose now that $f$ were uniformly continuous, so that for each $\varepsilon >0$, there exists $\delta >0$ so that if $|x-y| <\delta$, then $| f(x) -f(y)| <\varepsilon$. In particular, there is a value $\delta_1$ of $\delta$ that works for $\varepsilon =1$. That is, if $f$ were uniformly continuous, then there would exist $\delta_1 >0$ so that if $|x-y| <\delta_1$, then $| f(x) -f(y)| <1$. \medskip \noindent Now, take $x$ to be very large and positive. Since we are working with $x$ and $y$ satisfying $| x-y| < \delta_1$, we may take $y = x + \frac{1}{2}\delta_1$. In particular, the value of $|x+y|$ satisfies $|x+y| = x+y = 2x +\frac{1}{2}\delta_1$, and so $|f(x) -f(y)| = |x^2 -y^2 | = |x-y|\: |x+y| = \frac{1}{2}\delta_1\: (2x +\frac{1}{2}\delta_1)$. Now we need only take $x$ large enough for $\frac{1}{2}\delta_1\: (2x +\frac{1}{2}\delta_1) > 2$ (which we can do, since $\delta_1$ is fixed and we have complete freedom to vary $x$) to get a contradiction to $|f(x) -f(y)| <1$. \end{example} \begin{definition} A sequence $\{ f_n\}$ of functions $f_n: {\bf R}\rightarrow {\bf R}$ {\bf converges pointwise} to a function $f: {\bf R}\rightarrow {\bf R}$ if for each $a\in {\bf R}$, the sequence $\{ f_n(a)\}$ converges to $f(a)$. \end{definition} \begin{definition} A sequence $\{ f_n\}$ of functions $f_n: {\bf R}\rightarrow {\bf R}$ {\bf converges uniformly} to a function $f: {\bf R}\rightarrow {\bf R}$ if for each $\varepsilon >0$, there exists $M$ so that if $n >M$, then $|f_n(a) - f(a)| <\varepsilon$ for each $a\in {\bf R}$. \end{definition} \medskip \noindent As with the difference between continuity and uniform continuity, the difference between these two definitions is on one level small, merely the placement of a quantifier, but it has major effects.
To see this, if we rewrite the definition of pointwise convergence, we get: \medskip \noindent {\em A sequence $\{ f_n\}$ of functions $f_n: {\bf R}\rightarrow {\bf R}$ {\bf converges pointwise} to a function $f: {\bf R}\rightarrow {\bf R}$ if for each $\varepsilon >0$ and for each $a\in {\bf R}$, there exists $M$ so that if $n >M$, then $| f_n(a) -f(a)| <\varepsilon$.} \medskip \noindent Namely, the difference is in the placement of the quantifier {\bf for each $a\in {\bf R}$}. We demonstrate that these two definitions are different in two steps, one a theorem and the other an example. \begin{theorem} Suppose that $\{ f_n\}$ is a sequence of functions $f_n: {\bf R}\rightarrow {\bf R}$, where each $f_n$ is continuous. Suppose further that $\{ f_n\}$ converges uniformly to $f$. Then $f$ is continuous. \label{uniform-conv-cont} \end{theorem} \noindent \begin{proof} We show that $f$ is continuous at $a$. So, take an arbitrary $\varepsilon >0$; we need to show that there exists $\delta >0$ so that if $|x-a| <\delta$, then $|f(x) -f(a)| <\varepsilon$. \medskip \noindent Since $\{ f_n\}$ converges to $f$ uniformly, there exists $M$ so that if $n >M$, then $|f_n(b) -f(b)|< \frac{1}{3}\varepsilon$ for all $b\in {\bf R}$. We also know that $f_{M+1}$ is continuous at $a$, and so there exists some $\delta >0$ so that if $|x-a|<\delta$, then $|f_{M+1}(x) -f_{M+1}(a)| <\frac{1}{3}\varepsilon$. Therefore: \begin{eqnarray*} |f(x) -f(a)| & = & |f(x) -f_{M+1}(x) + f_{M+1}(x) -f_{M+1}(a) + f_{M+1}(a) - f(a)| \\ & \le & |f(x) -f_{M+1}(x)| + |f_{M+1}(x) -f_{M+1}(a)| + | f_{M+1}(a) - f(a)| \\ & < & \frac{1}{3}\varepsilon + \frac{1}{3}\varepsilon + \frac{1}{3}\varepsilon = \varepsilon, \end{eqnarray*} and so $f$ is continuous at $a$. (Here, the bounds on the first and third terms follow from the uniform convergence of $\{ f_n\}$ to $f$, while the bound on the middle term follows from the continuity of $f_{M+1}$.)
\end{proof} \begin{example} For $n\ge 1$, define $f_n: [0,1]\rightarrow [0,1]$ by $f_n(x) =x^n$. Then, $\{ f_n\}$ converges pointwise to the discontinuous function \[ f(x) = \left\{ \begin{array}{ll} 0 & \mbox{ for } 0\le x < 1 \\ 1 & \mbox{ for } x = 1\end{array}\right. \] This is just a reflection of the fact that for $0 \le a < 1$, the sequence $\{ a^n\}$ converges to $0$, but the rate of convergence depends on the value of $a$; if $a$ is close to $0$, then the convergence is much quicker than if $a$ is close to $1$. Since the pointwise limit of $\{ f_n\}$ is not continuous, we have by Theorem \ref{uniform-conv-cont} that the convergence of $\{ f_n\}$ to $f$ cannot be uniform. (It is also possible to show that the convergence of $\{ f_n\}$ to $f$ cannot be uniform by direct application of the definition.) \end{example} \section{Differentiability} \label{differentiability} \begin{definition} The function $f$ is {\bf differentiable at $a$} if the limit \[ f'(a) =\lim_{h\rightarrow 0} \frac{f(a+h) -f(a)}{h} =\lim_{w\rightarrow a} \frac{f(w) -f(a)}{w-a} \] exists. $f$ is {\bf differentiable} if it is differentiable at every point of its domain. \end{definition} \begin{example} For the function $f(x) =x^2$ and any $a\in {\bf R}$, we can calculate directly from the definition that \[ f'(a) =\lim_{h\rightarrow 0} \frac{(a+h)^2 -a^2}{h} =\lim_{h\rightarrow 0} \frac{2ah +h^2}{h} =\lim_{h\rightarrow 0} (2a +h) =2a, \] and so $f$ is differentiable. \end{example} \begin{proposition} Suppose that $f$ is differentiable at $a$. Then, $f$ is continuous at $a$. \label{diff-implies-cont} \end{proposition} \noindent \begin{proof} The proof of this is the evaluation of a single limit. Recall that $f$ is continuous at $a$ if $\lim_{x\rightarrow a} f(x) = f(a)$, or equivalently, if $\lim_{x\rightarrow a} (f(x) -f(a)) =0$. So, \[ \lim_{x\rightarrow a} (f(x) -f(a)) = \lim_{x\rightarrow a} \frac{f(x) -f(a)}{x-a} (x-a) = \lim_{x\rightarrow a} \frac{f(x) -f(a)}{x-a} \lim_{x\rightarrow a} (x-a) = f'(a) \cdot 0 = 0, \] as desired. \end{proof} \begin{theorem} {\bf Rolle's theorem:} Suppose that the function $f$ is continuous on the closed interval $[a,b]$ and differentiable on the open interval $(a,b)$, and that $f(a) = f(b)$.
Then, there exists a number $c$ in the interval $(a,b)$ so that $f'(c) =0$. \end{theorem} \noindent \begin{proof} The proof of Rolle's theorem is a direct consequence of the maximum value property for continuous functions on a closed interval, the definition of the derivative, and a bit of calculation. Since $f$ is continuous on $[a,b]$, it achieves its maximum at some point $c$ in $[a,b]$. Assume to start that $f$ achieves its maximum at a point $c$ in the open interval $(a,b)$. Consider the derivative of $f$ at $c$, which is defined since $f$ is assumed to be differentiable on $(a,b)$: $f'(c) =\lim_{h\rightarrow 0} \frac{f(c+h) -f(c)}{h}$ exists. Since this limit exists, the two one-sided limits $\lim_{h\rightarrow 0+} \frac{f(c+h) -f(c)}{h}$ and $\lim_{h\rightarrow 0-} \frac{f(c+h) -f(c)}{h}$ exist and are both equal to $f'(c)$. Let's examine them individually. \medskip \noindent For $\lim_{h\rightarrow 0+} \frac{f(c+h) -f(c)}{h}$: since $f$ achieves its maximum at $c$, we have that $f(c+h) \le f(c)$ for all values of $h$ for which $c+h$ lies in $(a,b)$, and so $f(c+h) -f(c)\le 0$. Hence, $\lim_{h\rightarrow 0+} \frac{f(c+h) -f(c)}{h} \le 0$, since the numerator is negative or $0$ and the denominator is positive. \medskip \noindent For $\lim_{h\rightarrow 0-} \frac{f(c+h) -f(c)}{h}$: again since $f$ achieves its maximum at $c$, we have that $f(c+h) \le f(c)$ for all values of $h$ for which $c+h$ lies in $(a,b)$, and so $f(c+h) -f(c)\le 0$. Hence, $\lim_{h\rightarrow 0-} \frac{f(c+h) -f(c)}{h} \ge 0$, since the numerator is negative or $0$ and the denominator is also negative. \medskip \noindent Since $\lim_{h\rightarrow 0+} \frac{f(c+h) -f(c)}{h} =\lim_{h\rightarrow 0-} \frac{f(c+h) -f(c)}{h}$ and since the right hand limit is non-positive and the left hand limit is non-negative, it must be that both are equal to $0$, and hence that $f'(c) =0$ as well.
\medskip \noindent If $f$ does not achieve its maximum at some point in $(a,b)$, then it must achieve its maximum at the endpoints, and so $f(x)\le f(a)$ for all $x\in [a,b]$. We can make the same argument at the point $c$ in $(a,b)$ at which $f$ achieves its minimum, making use of the minimum value property for $f$, and again argue that if $f$ achieves its minimum at a point $c$ in $(a,b)$, then $f'(c) =0$. \medskip \noindent The only remaining alternative is that $f$ achieves both its maximum and its minimum at the endpoints of $[a,b]$, in which case it must be that $f$ is constant on $[a,b]$. In this case, we can easily calculate that $f'(c) =0$ at every $c$ in $(a,b)$. This completes the proof of Rolle's theorem. \end{proof} \begin{theorem} {\bf Mean value theorem:} Suppose that the function $f$ is continuous on the closed interval $[a,b]$ and differentiable on the open interval $(a,b)$. Then, there exists a number $c$ in the interval $(a,b)$ so that $f'(c)(b-a) =f(b) -f(a)$. \end{theorem} \noindent \begin{proof} Consider the new function \[ g(x) = f(x) -f(a) -\left( \frac{f(b) -f(a)}{b-a} \right) (x-a), \] and note that $g$ is continuous on $[a,b]$ and differentiable on $(a,b)$, since it is constructed from $f$ and a linear polynomial, and moreover we have that $g(b) = g(a) = 0$. Hence, we may apply Rolle's theorem to $g$ to obtain a point $c$ in $(a,b)$ at which $g'(c) =0$. Calculating, we see that \[ g'(c) = f'(c) - \frac{f(b) -f(a)}{b-a}, \] and so when $g'(c) =0$, we have that $f'(c) = \frac{f(b) -f(a)}{b-a}$, which is the conclusion of the mean value theorem. \end{proof} \begin{proposition} Let $f:{\bf R}\rightarrow {\bf R}$ be a differentiable function. If $f'(x) >0$ for all $x$, then $f(x)$ is increasing; that is, if $a < b$, then $f(a) < f(b)$. \end{proposition} \noindent \begin{proof} Given $a < b$, apply the mean value theorem to $f$ on the interval $[a,b]$: there exists some $c$ in $(a,b)$ so that $f(b) -f(a) =f'(c)(b-a)$. Since $f'(c) > 0$ by assumption and since $b -a >0$, we have that $f(b) -f(a) >0$, that is, that $f(b) > f(a)$, as desired.
\end{proof} \begin{exercise} Show that $f(x) = |x-2|$ on the interval $[1,4]$ satisfies neither the hypotheses nor the conclusion of the Mean Value Theorem. \label{mean-value-example} \end{exercise} \begin{example} Use the mean value theorem to prove that if $f'(x)$ is constant on ${\bf R}$, then $f(x)$ is a linear function; that is, there exist constants $a$ and $b$ so that $f(x) = ax+b$. \medskip \noindent Since $f'(x)$ is constant on ${\bf R}$, there exists some $a\in {\bf R}$ so that $f'(x) =a$ for all $x\in {\bf R}$. Consider the function $g(x) = f(x) - ax$. Since both $f$ and the linear polynomial $ax$ are differentiable on all of ${\bf R}$, and hence continuous on all of ${\bf R}$, we have that $g$ is differentiable on all of ${\bf R}$, and hence is also continuous on all of ${\bf R}$. In order to apply the mean value theorem, we need to work on closed intervals. \medskip \noindent So, for $x_0 >0$, consider the interval $[0, x_0]$. Since $g$ is continuous on $[0, x_0]$ and differentiable on $(0, x_0)$, the mean value theorem states that there exists some $c$ in $(0,x_0)$ so that $g(x_0) -g(0) = g'(c) (x_0 -0)$. However, $g'(c) = f'(c) - a = a -a =0$, and so $g(x_0) -g(0) =0$, and so $g(x_0) =g(0)$ for all $x_0 >0$. To show that $g(x_0) =g(0)$ for all $x_0 <0$ as well, work with the interval $[x_0, 0]$ and repeat the argument just given. \medskip \noindent So, $g(x)$ is constant, that is, there is $b\in {\bf R}$ so that $g(x) = b$ for all $x\in {\bf R}$. Substituting in $g(x) = f(x) - ax$, this yields that $f(x) -ax =b$ for all $x\in {\bf R}$, or that $f(x) = ax +b$ for all $x\in {\bf R}$, where $a$ and $b$ are constants, as desired. \label{specific-mean-value} \end{example} \begin{exercise} Use the mean value theorem to prove each of the following statements. 
\begin{enumerate} \item If $g'(x)$ is a polynomial of degree $n-1$, then $g(x)$ is a polynomial of degree $n$; \item $x/(x+1) <\ln(1+x) < x$ for $x >0$; \item $\sin(x) < x$ for $x > 0$; \item $x < \tan(x)$ for $0 < x < \pi /2$; \end{enumerate} \label{mean-value-stmts} \end{exercise} \begin{example} For the function $g(x) = x^2 - \cos(x)$, the same as in Exercise \ref{int-value-exercises}, use Rolle's theorem or the mean value theorem to determine whether the solutions described in Exercise \ref{int-value-exercises} to the equation $g(x) = 0$ are the only ones. \medskip \noindent In Exercise \ref{int-value-exercises}, we saw that there exist at least two solutions $c_1$ and $c_2$ to this equation, where $0 < c_1 < 2$ and $-2 < c_2 < 0$. Suppose there were a third solution $c_3$ to $g(x) = 0$. Then, since there are three points $c_1$, $c_2$, and $c_3$ at which $g(x) =0$, by Rolle's theorem there would exist two points $e_1$ and $e_2$ at which $g'(x) = 0$. (For instance, if $c_3 < c_2$, then $e_1$ would lie between $c_3$ and $c_2$, and $e_2$ would lie between $c_2$ and $c_1$.) Note that there is already one point at which $g'(x) =0$, namely $x =0$. \medskip \noindent However, by the same sort of argument used in the solution to parts $3$ and $4$ of Exercise \ref{mean-value-stmts}, we have that $g'(x)$ satisfies $g'(x) = 2x + \sin(x) \ne 0$ for all $x \ne 0$. (Specifically, we have that $g'(0) =0$, and that $g''(x) = 2 +\cos(x) >0$ for all $x\in {\bf R}$, since $-1 \le \cos(x)\le 1$ for all $x\in {\bf R}$. Hence, $g'(x) <0$ for all $x <0$ and $g'(x) > 0$ for all $x >0$.) Hence, by Rolle's theorem, there are only the two solutions to $g(x) =0$ that we had already found. \end{example} \begin{exercise} For each of the following functions, the same as in Exercise \ref{int-value-exercises}, use Rolle's theorem or the mean value theorem to determine whether the solutions described in Exercise \ref{int-value-exercises} are the only ones.
\begin{enumerate} \item $f(x) = 0$ on the interval $[-a,a]$, where $a$ is an arbitrary positive real number and $f(x) = x^{1995} + 7654 x^{123} + x$; \item $\tan(x)=e^{-x}$ for $x$ in $[-1,1]$; \item $3\sin^2(x)=2\cos^3(x)$ for $x>0$; \item $3+x^5-1001x^2=0$ for $x>0$; \end{enumerate} \label{mean-value-exercises} \end{exercise} \begin{exercise} For each of the equations described below, determine whether there is a solution in the specified set. \begin{enumerate} \item $g'(a)=0=g'(b)$, where $a < b$; \end{enumerate} \label{more-mean-val-exercises} \end{exercise} \section{The Cauchy mean value theorem and l'Hopital's rule} \label{cauchy-lhopital} \begin{theorem} {\bf Cauchy mean value theorem:} Let $f$ and $g$ be two functions that are both continuous on $[a,b]$ and differentiable on $(a,b)$. Suppose further that $g'(x)$ is never zero on $(a,b)$. Then, there exists some $c$ in $(a,b)$ so that \[ \frac{f(b) -f(a)}{g(b) -g(a)} = \frac{f'(c)}{g'(c)}. \] \label{cauchy-mean-value} \end{theorem} \noindent \begin{proof} Note first that $g(b)\ne g(a)$: otherwise, Rolle's theorem applied to $g$ would yield a point of $(a,b)$ at which $g'$ vanishes. Consider the function \[ \varphi(x) = f(x) -f(a) - \left( \frac{f(b) -f(a)}{g(b) -g(a)} \right) (g(x) - g(a)). \] Since both $f$ and $g$ are continuous on $[a,b]$ and differentiable on $(a,b)$, the new function $\varphi(x)$ is as well, as it is a linear combination of $f$ and $g$. Applying the mean value theorem to $\varphi$, there exists a point $c$ in $(a,b)$ so that $\varphi'(c) =\frac{\varphi(b) -\varphi(a)}{b-a}$. That is, \[ \varphi'(c) = f'(c) -\left( \frac{f(b) -f(a)}{g(b) -g(a)} \right) g'(c) = 0, \] since $\varphi(b) = \varphi(a) =0$. Hence, $f'(c) = \left( \frac{f(b) -f(a)}{g(b) -g(a)} \right) g'(c)$. Since $g'(c)\ne 0$ no matter the value of $c$, this is equivalent to \[ \frac{f'(c)}{g'(c)} = \frac{f(b) -f(a)}{g(b) -g(a)}, \] which is the desired conclusion.
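\medskip \noindent As a concrete numerical illustration of the theorem, one can locate such a point $c$ for a specific pair of functions; the choice $f =\sin$, $g =\cos$ on $[a,b] =[0,1]$ below is a hypothetical example, not taken from the text, and the bisection search is only a sketch:

```python
import math

# Numerical illustration of the Cauchy mean value theorem with the
# hypothetical choice f = sin, g = cos on [a, b] = [0, 1];
# here g'(x) = -sin(x) is nonzero on (0, 1), as the theorem requires.
f, fp = math.sin, math.cos
g, gp = math.cos, lambda x: -math.sin(x)
a, b = 0.0, 1.0

def h(c):
    # The theorem asserts a root of h in (a, b), where
    # h(c) = f'(c)(g(b) - g(a)) - g'(c)(f(b) - f(a)).
    return fp(c) * (g(b) - g(a)) - gp(c) * (f(b) - f(a))

# Bisection: h(a) < 0 < h(b) for this pair, so a sign change is bracketed.
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if h(lo) * h(mid) <= 0:
        hi = mid
    else:
        lo = mid
c = (lo + hi) / 2

ratio_secant = (f(b) - f(a)) / (g(b) - g(a))
ratio_deriv = fp(c) / gp(c)
print(c, ratio_secant, ratio_deriv)
```

For this particular pair the root is $c =\frac{1}{2}$, since $\frac{\sin b -\sin a}{\cos b -\cos a} =-\cot\left( \frac{a+b}{2} \right)$.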
\end{proof} \medskip \noindent The Cauchy mean value theorem can be thought of as a variant of the mean value theorem that holds simultaneously for two functions. Also, note that the Cauchy mean value theorem follows as an immediate application of the mean value theorem, which is an immediate application of Rolle's theorem, which is an immediate application of the maximum value property for continuous functions on a closed interval. \medskip \noindent The main use of the Cauchy mean value theorem for us is to give a proof of l'Hopital's rule. \begin{theorem} {\bf l'Hopital's rule:} Suppose that $f$ and $g$ are differentiable on the union $I =(a-\varepsilon, a)\cup (a,a +\varepsilon)$ for some $\varepsilon >0$, and that $g'(x)$ is non-zero on $I$. Suppose also that \[ \lim_{x\rightarrow a} f(x) = \lim_{x\rightarrow a} g(x) = 0. \] Then, \[ \lim_{x\rightarrow a} \frac{f(x)}{g(x)} = \lim_{x\rightarrow a} \frac{f'(x)}{g'(x)}, \] provided that the right hand limit either exists or is $\pm \infty$. \label{lhoptial} \end{theorem} \noindent \begin{proof} Since $\lim_{x\rightarrow a} f(x) = 0$, we set $f(a) =0$ in order to insure that $f$ is a continuous function on $(a-\varepsilon, a+\varepsilon)$, and similarly we set $g(a) =0$. Fix a value of $x$ in $I$, and apply the Cauchy mean value theorem to $f$ and $g$ on the interval $[a,x]$ (if $x >a$; or on the interval $[x,a]$, if $x <a$). This yields a point $c_x$ strictly between $a$ and $x$ so that \[ \frac{f(x)}{g(x)} =\frac{f(x) -f(a)}{g(x) -g(a)} =\frac{f'(c_x)}{g'(c_x)}, \] using that $f(a) =g(a) =0$. As $x\rightarrow a$, the point $c_x$ is squeezed between $a$ and $x$, and so $c_x\rightarrow a$ as well. Hence, \[ \lim_{x\rightarrow a} \frac{f(x)}{g(x)} =\lim_{x\rightarrow a} \frac{f'(c_x)}{g'(c_x)} =\lim_{x\rightarrow a} \frac{f'(x)}{g'(x)}, \] as desired. \end{proof} \section{Taylor series} \medskip \noindent Recall that if $f$ has derivatives of all orders at $a$, then the {\bf Taylor series} for $f$ centered at $a$ is the power series $\sum_{n=0}^\infty \frac{1}{n!} f^{(n)}(a) (x-a)^n$. We begin with a uniqueness statement. \begin{lemma} Let $\sum_{n=0}^\infty a_n (x-a)^n$ and $\sum_{n=0}^\infty b_n (x-a)^n$ be two power series, both convergent on the interval $(a-\varepsilon, a+\varepsilon)$ for some $\varepsilon >0$. If $\sum_{n=0}^\infty a_n (x-a)^n =\sum_{n=0}^\infty b_n (x-a)^n$ for all $x$ in $(a-\varepsilon, a+\varepsilon)$, then $a_n =b_n$ for all $n\ge 0$. \label{series-uniqueness} \end{lemma} \medskip \noindent We now present a test to determine when a function is equal to its Taylor series, followed by an example of a function {\bf NOT} equal to its Taylor series, to show that the condition in Theorem \ref{function-equals-series} does not hold for all functions. \begin{theorem} Let $f$ be a function which has derivatives of all orders in the interval $(a-\beta, a+\beta)$ for some $\beta >0$.
Then, \[ f(x) =\sum_{n=0}^\infty \frac{1}{n!} f^{(n)}(a) (x-a)^n \] (that is, $f$ is equal to its Taylor series) if and only if \[ \lim_{n\rightarrow\infty} R_n(x) = 0, \] where \[ R_n(x) = \frac{1}{(n+1)!} f^{(n+1)}(z) (x-a)^{n+1}, \] where $z$ is some number between $a$ and $x$ (and so $z$ depends on $a$, $x$, and $n$). \label{function-equals-series} \end{theorem} \begin{example} Consider the Taylor series for the function $f(x) =e^x$ centered at $a =0$, namely \[ \sum_{n=0}^\infty \frac{1}{n!} f^{(n)}(0) x^n = \sum_{n=0}^\infty \frac{1}{n!} x^n, \] since $f^{(n)}(0) =e^0 =1$ for all $n\ge 0$. In order to show that $e^x =\sum_{n=0}^\infty \frac{1}{n!} x^n$ for all $x$ in ${\bf R}$, we need to show that for each $x$, \[ \lim_{n\rightarrow\infty} R_n(x) =\lim_{n\rightarrow\infty} \frac{1}{(n+1)!} e^z x^{n+1} =0, \] where $z$ lies between $0$ and $x$ and depends on $0$, $x$, and $n$. \medskip \noindent First take the case that $x >0$. In this case, we have that $1 \le e^z \le e^x$, since $0 < z < x$, and so \[ | R_n(x)| =\frac{1}{(n+1)!} e^z x^{n+1} \le e^x \frac{x^{n+1}}{(n+1)!}. \] Since $\lim_{n\rightarrow\infty} \frac{x^{n+1}}{(n+1)!} =0$ for each fixed $x$ (the factorial in the denominator eventually dominates the fixed-base power in the numerator), we have that $\lim_{n\rightarrow\infty} R_n(x) =0$, as needed. For $x <0$, we instead have $e^x \le e^z \le 1$, and so $| R_n(x)| \le \frac{| x|^{n+1}}{(n+1)!}$, which again tends to $0$ as $n\rightarrow\infty$; for $x =0$, there is nothing to prove. Hence, $e^x =\sum_{n=0}^\infty \frac{1}{n!} x^n$ for all $x\in {\bf R}$. \end{example} \begin{proposition} {\bf Arithmetic of power series:} Let $f(x) =\sum_{n=0}^\infty a_n (x-a)^n$ and $g(x) =\sum_{n=0}^\infty b_n (x-a)^n$ be functions given by power series centered at $a$, both convergent on the interval $(a-\varepsilon, a+\varepsilon)$ for some $\varepsilon >0$. Then, the following hold on the open interval $(a-\varepsilon, a+\varepsilon)$: \begin{itemize} \item the {\bf sum} $(f+g)(x)$ is given by the power series $(f+g)(x) = \sum_{n=0}^\infty (a_n +b_n)(x-a)^n$; \item the {\bf difference} $(f-g)(x)$ is given by the power series $(f-g)(x) = \sum_{n=0}^\infty (a_n -b_n)(x-a)^n$; \item the {\bf product} $(f\cdot g)(x)$ is given by the power series $(f\cdot g)(x) = \sum_{n=0}^\infty c_n (x-a)^n$, where $c_n =\sum_{k=0}^n a_k\cdot b_{n-k}$; \item the {\bf derivative} of $f(x)$ is given by differentiating the power series term by term: \[ f'(x) = \sum_{n=1}^\infty n\: a_n (x-a)^{n-1}; \] \item the {\bf (indefinite) integral} of $f(x)$ is given by integrating the power series term by term: \[ \int f(x) {\rm d}x = c + \sum_{n=0}^\infty \frac{a_n}{n+1} (x-a)^{n+1}. \] \end{itemize} \label{power-series-arith} \end{proposition} \begin{example} Determine a series representation for the function $f(x) = (x+1)/(x+2)$ centered at $a =0$.
\medskip \noindent One way would be to calculate the Taylor series for $f(x)$ centered at $a =0$, but this gets complicated, as the derivatives of $f(x)$ get complicated. Another way is to use the arithmetic of power series. We start by deriving a series representation for $1/(x+2)$, using the fact that $\frac{1}{1-r} =\sum_{n=0}^\infty r^n$ for $|r| <1$. Hence, for $| -\frac{1}{2}x| <1$, that is, for $| x| <2$, we have: \begin{eqnarray*} \frac{1}{x+2} & = & \frac{1}{2(\frac{1}{2}x +1)} \\ & = & \frac{1}{2} \frac{1}{1 -(-\frac{1}{2}x)} \\ & = & \frac{1}{2} \sum_{n=0}^\infty \left( -\frac{1}{2}x \right)^n \\ & = & \sum_{n=0}^\infty (-1)^n \frac{1}{2^{n+1}} x^n. \end{eqnarray*} Hence, a series representation for $f(x)$ centered at $0$ is: \begin{eqnarray*} f(x) = \frac{x+1}{x+2} & = & (x+1) \sum_{n=0}^\infty (-1)^n \frac{1}{2^{n+1}} x^n \\ & = & \sum_{n=0}^\infty (-1)^n \frac{1}{2^{n+1}} x^{n+1} + \sum_{n=0}^\infty (-1)^n \frac{1}{2^{n+1}} x^n \\ & = & \sum_{n=1}^\infty (-1)^{n+1} \frac{1}{2^n} x^n + \sum_{n=0}^\infty (-1)^n \frac{1}{2^{n+1}} x^n \\ & = & \frac{1}{2} + \sum_{n=1}^\infty \left( (-1)^{n+1} \frac{1}{2^n} + (-1)^n \frac{1}{2^{n+1}} \right) x^n = \frac{1}{2} + \sum_{n=1}^\infty (-1)^{n+1} \frac{1}{2^{n+1}} x^n. \end{eqnarray*} \label{example-uniqueness} \end{example} \section{Last year's exam} \label{past-exams} \medskip \noindent {\bf Semester 1, 1999:} \medskip \noindent {\bf Rubric:} Full marks may be obtained by giving {\bf COMPLETE} and {\bf CORRECT} answers to {\bf ALL} questions. Be sure to justify all of your answers. Each question is worth 5 marks, giving a total of 100 marks for the exam. \begin{enumerate} \item Give an example of a sequence that is bounded but not convergent, or prove that no such sequence exists. Also, give an example of a sequence that is convergent but not bounded, or prove that no such sequence exists. \medskip \noindent {\bf Solution:} The sequence $\{ a_n =(-1)^n\}$ is bounded below by $-1$ and bounded above by $1$, and so is bounded.
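\medskip \noindent Both halves of this claim about $\{ (-1)^n\}$ can be illustrated numerically; a minimal sketch (the cutoff of $20$ terms is arbitrary):

```python
# First terms of a_n = (-1)**n: bounded in [-1, 1], but consecutive
# terms always differ by 2, so the terms never settle down to a limit.
terms = [(-1) ** n for n in range(20)]

print(min(terms), max(terms))                        # bounds: -1 and 1
gaps = {abs(terms[n] - terms[n + 1]) for n in range(len(terms) - 1)}
print(gaps)                                          # every gap is 2
```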
This sequence does not converge, though; since $|a_n -a_{n+1}| =2$ for all $n$, this sequence fails the Cauchy criterion, and hence diverges. \medskip \noindent For the other part, we know that every convergent sequence is bounded. This is Proposition \ref{conv-implies-bounded}. (Note that you are asked in this question to state and to write out the proof of this proposition.) \item Determine whether the sequence \[ \left\{ a_n = \frac{\left( \frac{2}{3} \right)^n}{2 - n^{1/n}} \right\} \] converges or diverges. If the sequence converges, determine its limit. \medskip \noindent {\bf Solution:} We know that $\lim_{n\rightarrow\infty} (\frac{2}{3})^n =0$, since $\frac{2}{3} <1$. Hence, we need to evaluate $\lim_{n\rightarrow\infty} n^{1/n}$: start by writing \[ n^{1/n} = \exp\left( \ln\left( n^{1/n} \right) \right) = \exp\left( \frac{\ln(n)}{n} \right). \] Since $\lim_{n\rightarrow\infty} n^{1/n} =\exp\left( \lim_{n\rightarrow\infty} \frac{\ln(n)}{n} \right)$ (by the continuity of $\exp$), and since $\lim_{n\rightarrow\infty} \frac{\ln(n)}{n}$ has the indeterminate form $\frac{\infty}{\infty}$, we may use l'Hopital's rule to evaluate: \[ \lim_{n\rightarrow\infty} \frac{\ln(n)}{n} =\lim_{n\rightarrow\infty} \frac{ \frac{1}{n}}{1} =0, \] and so \[ \lim_{n\rightarrow\infty} n^{1/n} = \exp\left( \lim_{n\rightarrow\infty} \frac{\ln(n)}{n} \right) = e^0 =1.\] Hence, the original limit can be evaluated using the arithmetic of limits: \[ \lim_{n\rightarrow\infty} \frac{\left( \frac{2}{3} \right)^n}{2 - n^{1/n}} = \frac{0}{2-1} =0, \] and so the sequence converges to $0$. \item Prove that if a sequence $\{ a_n\}$ is increasing and bounded above, then it is convergent. \medskip \noindent {\bf Solution:} Since $\{ a_n\}$ is bounded above, it has a supremum $a$. By the definition of supremum, for every $\varepsilon >0$, there exists $M$ so that $| a_M -a | <\varepsilon$. Since $\{ a_n\}$ is increasing and since $a$ is an upper bound for $\{ a_n\}$, we have that $a_M \le a_n \le a$ for every $n >M$.
In particular, we have that $| a_n -a| \le |a_M -a| < \varepsilon$ for every $n >M$, and this is just the definition that $\{ a_n\}$ converges to $a$. \item Determine whether the infinite series \[ \sum_{n=3}^\infty {1\over n \ln(n)} \] converges or diverges. (You do not need to evaluate the sum of the series in the case that it converges.) \medskip \noindent {\bf Solution:} Since the terms in the series are all positive, we may use the integral test, with $f(x) =\frac{1}{x\ln(x)}$. This function is continuous for $x \ge 3$ and is decreasing, since $f'(x) = -\frac{\ln(x) +1}{x^2\ln^2(x)} <0$ for $x \ge 3$. Then, the series converges if and only if the improper integral $\int_3^\infty \frac{1}{x\ln(x)} {\rm d}x =\lim_{M\rightarrow\infty} \int_3^M \frac{1}{x\ln(x)} {\rm d}x$ converges. Calculating, we see that \[ \lim_{M\rightarrow\infty} \int_3^M \frac{1}{x\ln(x)} {\rm d}x =\lim_{M\rightarrow\infty} \ln(\ln(x))\left|_3^M \right. =\lim_{M\rightarrow\infty} (\ln(\ln(M)) -\ln(\ln(3))) =\infty. \] Since the integral diverges, the series diverges. \item Determine whether the infinite series \[ \sum_{n=1}^\infty \frac{(-1)^n}{\sqrt{n}} \] converges absolutely, converges conditionally, or diverges. (You do not need to evaluate the sum of the series in the case that it converges.) \medskip \noindent {\bf Solution:} Notice that this is an alternating series. Since $\lim_{n\rightarrow\infty} \frac{1}{\sqrt{n}} =0$ and since $\frac{1}{\sqrt{n+1}} <\frac{1}{\sqrt{n}}$, the alternating series test yields that this series converges. \medskip \noindent However, the series $\sum_{n=1}^\infty \frac{1}{\sqrt{n}}$ diverges, for instance by comparison to the harmonic series, as $\frac{1}{\sqrt{n}} \ge \frac{1}{n}$ for all $n\ge 1$, and so this series does not converge absolutely. That is, this series converges conditionally. \item By explicitly calculating its partial sums, show that the infinite series \[ \sum_{n=1}^\infty \left( \frac{1}{n} - \frac{1}{n+1} \right) \] is convergent.
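\medskip \noindent Before computing the partial sums by hand, it can help to sample a few of them numerically; a sketch in Python, using exact rational arithmetic:

```python
from fractions import Fraction

# Partial sums S_k of sum_{n=1}^k (1/n - 1/(n+1)), computed exactly.
def S(k):
    return sum(Fraction(1, n) - Fraction(1, n + 1) for n in range(1, k + 1))

for k in (1, 2, 10, 100):
    print(k, S(k))    # the sum telescopes: S_k = 1 - 1/(k+1)
```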
\medskip \noindent {\bf Solution:} Calculating, we see that the $k^{th}$ partial sum is a telescoping sum, namely \[ S_k =\sum_{n=1}^k \left( \frac{1}{n} - \frac{1}{n+1} \right) = \left( \frac{1}{1} - \frac{1}{1+1} \right) + \left( \frac{1}{2} - \frac{1}{2+1} \right) + \cdots + \left( \frac{1}{k} - \frac{1}{k+1} \right) = 1 -\frac{1}{k+1}. \] Therefore, $\lim_{k\rightarrow\infty} S_k =1 -\lim_{k\rightarrow\infty} \frac{1}{k+1} =1$, and so this series converges. \item Determine the radius of convergence and the interval of convergence of the power series \[ \sum_{n=1}^\infty \left( 1 + \frac{1}{n}\right)^n (x-1)^n. \] \medskip \noindent {\bf Solution:} Apply the ratio test: \[ \lim_{n\rightarrow\infty} \left| \frac{ \left( 1 + \frac{1}{n+1} \right)^{n+1} (x-1)^{n+1}}{ \left( 1 + \frac{1}{n} \right)^n (x-1)^n} \right| = |x-1| \lim_{n\rightarrow\infty} \frac{ \left( 1 +\frac{1}{n+1} \right)^{n+1}}{ \left( 1+ \frac{1}{n} \right)^n} = |x-1| \frac{e}{e} = |x-1|. \] So, the radius of convergence is $1$, and this series converges absolutely for $| x-1| <1$. We need to check the endpoints of this interval. \medskip \noindent At $x =0$, the series becomes $\sum_{n=1}^\infty \left( 1 + \frac{1}{n}\right)^n (-1)^n$, which diverges by the $n^{th}$ term test for divergence, since $\lim_{n\rightarrow\infty} \left( 1 + \frac{1}{n}\right)^n (-1)^n$ does not exist, because $\lim_{n\rightarrow\infty} \left( 1 + \frac{1}{n} \right)^n =e \ne 0$. \medskip \noindent At $x =2$, the series becomes $\sum_{n=1}^\infty \left( 1 + \frac{1}{n}\right)^n$, which diverges since $\lim_{n\rightarrow\infty} \left( 1 + \frac{1}{n}\right)^n =e \ne 0$. \medskip \noindent So, the interval of convergence is $(0,2)$. \item What can be said about a sequence $\{ a_n\}$ if it converges and if every $a_n$ is an integer?
Also, give a qualitative description of all of the convergent subsequences of the sequence \[ 1, 1, 2, 1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4, 5, \ldots.\] \medskip \noindent {\bf Solution:} A convergent sequence of integers must be eventually constant; that is, there exists $M$ so that $a_n =a_p$ for all $n$, $p >M$. This follows from the Cauchy criterion with $\varepsilon =\frac{1}{2}$ and the fact that the difference of two non-equal integers is at least $1$. \medskip \noindent For this given sequence, the convergent subsequences are all of the following form: pick a positive integer $p$, and note that $p$ appears infinitely many times in the given sequence. Then, a convergent subsequence is of the form $a_0, a_1, \ldots, a_M, a_{M+1} =p, a_{M+2} =p, \ldots$ for some $M$, where $a_0,\ldots, a_M$ are arbitrary positive integers. \item Explain {\it exactly} what is meant by the statement \[ \lim_{x\rightarrow 4} (x^2 -e^x) =16 - e^4.\] \medskip \noindent {\bf Solution:} For every $\varepsilon >0$, there exists $\delta >0$ so that if $0 <|x-4| <\delta$, then $| (x^2 -e^x) - (16 - e^4)| <\varepsilon$. \item Evaluate the limit \[ \lim_{h\rightarrow 0} \frac{ \frac{1}{2+h} - \frac{1}{2}}{h}.\] \medskip \noindent {\bf Solution:} Either use l'Hopital's rule, since it has the indeterminate form $\frac{0}{0}$, or notice that this is the definition of the derivative of $f(x) =\frac{1}{x}$ at $x_0 =2$, namely \[ \lim_{h\rightarrow 0} \frac{ \frac{1}{2+h} - \frac{1}{2}}{h} =f'(2) =-\frac{1}{4}. \] \item Define what it means for a function $f: {\bf R}\rightarrow {\bf R}$ to be continuous. Using the definition, show that the function $f(x) = 2x-5$ is continuous. \medskip \noindent {\bf Solution:} $f$ is continuous at $a$ if $\lim_{x\rightarrow a} f(x) =f(a)$. $f$ is continuous if it is continuous at every point in its domain. \medskip \noindent To show that $f(x) = 2x-5$ is continuous, we show that it is continuous at $a$ for every $a$.
That is, we need to show that \[ \lim_{x\rightarrow a} (2x-5) =2a-5. \] So, for any $\varepsilon >0$, take $\delta =\frac{1}{2}\varepsilon$. Then, if $| x-a| < \delta =\frac{1}{2} \varepsilon$, we have \[ |f(x) -f(a) | =| (2x-5) -(2a-5)| = 2|x-a| < 2\cdot\frac{1}{2}\varepsilon =\varepsilon, \] and so the definition of $\lim_{x\rightarrow a} f(x) =f(a)$ is satisfied. \item Consider the function $g: {\bf R}\rightarrow {\bf R}$ given by setting $g(x) = 1$ if $x$ is a rational number and $g(x) = 0$ if $x$ is an irrational number. Determine whether $g$ is or is not continuous. \medskip \noindent {\bf Solution:} This function is not continuous at $0$, since there are numbers arbitrarily close to $0$, namely all the irrational numbers of the form $\frac{\pi}{n}$ for $n\in {\bf N}$, and we have that $| g(0) -g( \frac{\pi}{n})| = | 1 - 0| =1$. Hence, for $\varepsilon =\frac{1}{2}$, there does not exist $\delta >0$ so that if $| 0-a| < \delta$, then $|g(0) -g(a)| <\varepsilon =\frac{1}{2}$. So, $\lim_{x\rightarrow 0} g(x) \ne g(0)$. (In fact, $\lim_{x\rightarrow 0} g(x)$ does not exist.) \item Let $f$ be a function which is continuous on the closed interval $[a,b]$, where $a < b$. Suppose that $f(b) < f(a)$. Determine whether there exists a point $c$ in the open interval $(a,b)$ so that $f(c) = c$. \medskip \noindent {\bf Solution:} Not necessarily: take $f(x) =100 -x$ on the interval $[a,b] = [0,1]$. Then, $f(1) =99 < f(0) =100$, but there are no solutions to $x =100 -x$ in the interval $[0,1]$. (The only solution is at $x =50$.) \item Show that the function $h(x) = \sqrt{x-1}$ satisfies the hypotheses of the Mean Value Theorem on the interval $[2,5]$. Find all the numbers $c$ in $(2,5)$ that satisfy the conclusion of the Mean Value Theorem. \medskip \noindent {\bf Solution:} (Be sure to state the mean value theorem first, so that it is clear to me that you know what the hypotheses and the conclusions are.)
Note that $h(x)$ is continuous and differentiable on all of $(1,\infty)$, since $x -1 >0$ on $x >1$, and so in particular $h$ is continuous on $[2,5]$ and differentiable on $(2,5)$ (i.e., satisfies the hypotheses). \medskip \noindent So, there exists some $c$ in $(2,5)$ at which \[ h'(c) =\frac{h(5) -h(2)}{5-2} =\frac{1}{3}. \] In fact, since $h'(c) = \frac{1}{2\sqrt{c -1}}$, the only solution to $h'(c) =\frac{1}{3}$ occurs at $c = \frac{13}{4}$ (which does lie in $(2,5)$, as expected). \item Use the Mean Value theorem to prove that if $f$ and $g$ are two differentiable functions on the closed interval $[a,b]$, where $a < b$, and if $f'(x) =g'(x)$ for all $x$ in $(a,b)$, then there is a constant $c$ so that $g(x) =f(x) +c$ for all $x$ in $[a,b]$. \end{enumerate} \medskip \noindent Suppose now that there were an order $<$ making ${\bf C}$ into an ordered field, and consider the two possibilities for ${\rm i}$. Suppose first that $0 <{\rm i}$. Then, multiplying both sides of this inequality by ${\rm i}$ yields $0 <{\rm i}^2 =-1$, contradicting the second condition in the definition of an order. Suppose instead that ${\rm i} <0$, so that $0 <-{\rm i}$. Then, multiplying both sides of ${\rm i} <0$ by the positive quantity $-{\rm i}$ yields $-{\rm i}^2 <0$, that is, $1 <0$, again contradicting the second condition in the definition of an order. \medskip \noindent Hence, since we have that neither $0 <{\rm i}$ nor ${\rm i} <0$, we see that there cannot exist an order on ${\bf C}$ that makes ${\bf C}$ into an ordered field. \medskip \noindent {\bf Solution \ref{rationals-not-complete}:} To see that ${\bf Q}$ is not a complete ordered field, note that the subset $A =\{ a\in {\bf Q}\: |\: a < \sqrt{2}\}$ is bounded above, for instance by $s = 2$, but has no supremum in ${\bf Q}$: that is, for every rational number $s$ so that $a \le s$ for every $a\in A$, we have that there exists another rational number $t$ so that $t < s$ and $a\le t$ for every $a\in A$. (One way to see this is to use decimal expansions, and to recall that a number is rational if and only if its decimal expansion is either repeating or terminating.) \medskip \noindent {\bf Solution \ref{bounded}:} \begin{enumerate} \item Bounded above by $1$ (since for $n\in {\bf Z} -\{ 0\}$, either $n\ge 1$ in which case $\frac{1}{n}\le 1$, or $n\le -1$, in which case $\frac{1}{n}\le 0$), and so has a supremum. Again making use of Exercise \ref{inf-sup-properties}, since $1$ is an upper bound for $S$ and since $1\in S$, $1 = \sup(S)$. In this case, $\sup(S)\in S$.
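\medskip \noindent A finite truncation of $S$ illustrates this supremum numerically (a sketch only: the cutoff $N$ is arbitrary, and a finite sample cannot prove the claim):

```python
from fractions import Fraction

# Finite truncation of S = { 1/n : n in Z, n != 0 }, in exact arithmetic.
N = 1000
sample = [Fraction(1, n) for n in range(-N, N + 1) if n != 0]

print(max(sample))   # the maximum is 1, attained at n = 1
```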
\medskip \noindent Bounded below by $-1$ (since for $n\in {\bf Z} -\{ 0\}$, either $n\ge 1$, in which case $0 <\frac{1}{n}$, or $n\le -1$, in which case $\frac{1}{n}\ge \frac{1}{-1} = -1$), and so has an infimum. Again making use of Exercise \ref{inf-sup-properties}, since $-1$ is a lower bound for $S$ and since $-1\in S$, $-1 = \inf(S)$. In this case, $\inf(S)\in S$. \medskip \noindent Since $S$ is both bounded above and bounded below, it is bounded. \item Bounded below by $0$ (since $2^x >0$ for all $x\in {\bf R}$, we certainly have that $2^x >0$ for all $x\in {\bf Z}$), and so has an infimum. Given any $\varepsilon >0$, we can always find $x$ so that $2^x < \varepsilon$, namely take $\log_2$ of both sides, and take $x$ to be any integer less than $\log_2(\varepsilon)$. Hence, there is no positive lower bound, and so the greatest lower bound, the infimum, is $\inf(S) =0$. Since there are no solutions to $2^x =0$, in this case $\inf(S)\not\in S$. \medskip \noindent Since $2^x >x$ for positive integers $x$, given any $C >0$ we can find an $x$ so that $2^x >C$, and so there is no upper bound. That is, $S$ is not bounded above. \medskip \noindent Since $S$ is not bounded above, it is not bounded. \item Bounded below by $-1$ (since $[-1,1] =\{ x\in {\bf R}\: |\: -1\le x\le 1\}$ and since $-1 < 5$), and so has an infimum. Again making use of Exercise \ref{inf-sup-properties}, since $-1$ is a lower bound for $S$ and since $-1\in S$, $-1 = \inf(S)$. In this case, $\inf(S)\in S$. \medskip \noindent Bounded above by $5$, and so has a supremum. Again making use of Exercise \ref{inf-sup-properties}, since $5$ is an upper bound for $S$ and since $5\in S$, $5 = \sup(S)$. In this case, $\sup(S)\in S$. \medskip \noindent Since $S$ is both bounded above and bounded below, it is bounded. \item Considering the subset of $S$ in which $y = 1$, we have that $S$ contains the natural numbers ${\bf N}$, and hence $S$ is not bounded above. 
\medskip \noindent Since $x$ and $2^y$ are both positive for $x$, $y\in {\bf N}$, we have that $\frac{x}{2^y} >0$ for all $x$, $y\in {\bf N}$. Therefore, $S$ is bounded below by $0$, and so has an infimum. Considering the subset of $S$ in which $x = 1$, we have that $S$ contains $\frac{1}{2^y}$ for all $y\in {\bf N}$. In particular, for each $\varepsilon >0$, we can find $y\in {\bf N}$ so that $\frac{1}{2^y} <\varepsilon$, namely take $\log_2$ of both sides to get $-y <\log_2(\varepsilon)$, or equivalently $y >-\log_2(\varepsilon)$. Hence, there is no positive lower bound, and so $0 = \inf(S)$. Since $\frac{x}{2^y}$ is never $0$ for $x$, $y>0$, in this case $\inf(S)\not\in S$. \medskip \noindent Since $S$ is not bounded above, it is not bounded. \item Write $\frac{n+1}{n} =1+\frac{1}{n}$. Bounded below by $1$, since $\frac{1}{n} >0$ for all $n\in {\bf N}$, and hence $1+\frac{1}{n} >1$ for all $n\in {\bf N}$. Moreover, since for each $\varepsilon >1$ we can find $n$ so that $1+\frac{1}{n} <\varepsilon$, there is no lower bound greater than $1$, and so $\inf(S) =1$. In this case, $\inf(S)\not\in S$, since $1+\frac{1}{n}\ne 1$ for all $n\in {\bf N}$. \medskip \noindent Bounded above by $2$, since $\frac{1}{n}\le 1$ for all $n\in {\bf N}$ and hence $1+\frac{1}{n}\le 2$. In this case, $2 =1+\frac{1}{1}$ and so $2\in S$. Since $2$ is an upper bound for $S$ that is contained in $S$, we have that $2 =\sup(S)$ and so $\sup(S) \in S$. \medskip \noindent Since $S$ is both bounded above and bounded below, it is bounded. \item Break $S$ up into two subsets, one of the positive terms (when $n$ is even) and the negative terms (when $n$ is odd). So, $S =\{ -2, -\frac{4}{3}, -\frac{6}{5},\ldots \}\cup\{ \frac{3}{2}, \frac{5}{4}, \frac{7}{6},\ldots\}$. \medskip \noindent The positive terms are all of the form $1+\frac{1}{n}$ where $n$ is even.
Since $\frac{1}{n}$ decreases as $n$ increases, the largest positive term is $1+\frac{1}{2} =\frac{3}{2}$, and so $S$ is bounded above and hence has a supremum. Since $S$ is bounded above by $\frac{3}{2}$ and since $\frac{3}{2}\in S$, $\sup(S) =\frac{3}{2}$, and in this case $\sup(S)\in S$. \medskip \noindent The negative terms are all of the form $-\left( 1+\frac{1}{n} \right)$ where $n$ is odd. Since $\frac{1}{n}$ decreases as $n$ increases, $-\left( 1+\frac{1}{n} \right)$ increases as $n$ increases, and so the smallest negative term is $-\left( 1+\frac{1}{1} \right) =-2$, and so $S$ is bounded below and hence has an infimum. Since $S$ is bounded below by $-2$ and since $-2\in S$, $\inf(S) = -2$, and in this case $\inf(S)\in S$. \medskip \noindent Since $S$ is both bounded above and bounded below, it is bounded. \item We can rewrite $S$ as $S =(-\sqrt{10}, \sqrt{10})\cap {\bf Q}$. By the definition of $(-\sqrt{10}, \sqrt{10})$, $S$ is bounded below by $-\sqrt{10}$, and hence has an infimum. Since there are rational numbers greater than $-\sqrt{10}$ but arbitrarily close to $-\sqrt{10}$ (as can be seen by taking the decimal expansion of $-\sqrt{10}$ and truncating it after some number of places to get a rational number near $-\sqrt{10}$), there is no lower bound greater than $-\sqrt{10}$, and so $\inf(S) =-\sqrt{10}$. In this case, $\inf(S)\not\in S$. \medskip \noindent $S$ is bounded above by $\sqrt{10}$, and hence has a supremum. Since there are rational numbers less than $\sqrt{10}$ but arbitrarily close to $\sqrt{10}$ (as can be seen by taking the decimal expansion of $\sqrt{10}$ and truncating it after some number of places to get a rational number near $\sqrt{10}$), there is no upper bound less than $\sqrt{10}$, and so $\sup(S) =\sqrt{10}$. In this case, $\sup(S)\not\in S$. \medskip \noindent Since $S$ is both bounded above and bounded below, it is bounded. \item Rewrite $S$ as $S =(-\infty, -2)\cup (2, \infty)$.
This set is neither bounded above (since for each real number $r$, there is $s\in S$ with $s > r$, namely the larger of $3$ and $r+1$) nor bounded below (since for each real number $r$, there is $s\in S$ with $s < r$, namely the smaller of $-3$ and $r-1$). \medskip \noindent Since $S$ is not bounded below, it has no infimum. Since $S$ is not bounded above, it has no supremum. \medskip \noindent Since $S$ is neither bounded above nor bounded below, it is not bounded. \end{enumerate} \medskip \noindent {\bf Solution \ref{inf-sup-properties}:} \begin{enumerate} \item Assume without loss of generality that $\inf(A)\le\inf(B)$, so that $\min( \inf(A), \inf(B)) =\inf(A)$. To show that $\inf(A\cup B) =\inf(A)$, we need to show two things, that $\inf(A)$ is a lower bound for $A\cup B$ and that if $t$ is any lower bound for $A\cup B$, then $t\le\inf(A)$. \medskip \noindent If $a\in A$, then $a\ge \inf(A)$ by definition (since $\inf(A)$ is less than or equal to every element of $A$). Similarly, if $b\in B$, then $b\ge\inf(B)$; since $\inf(B)\ge\inf(A)$, this yields that $b\ge \inf(A)$ for all $b\in B$. Since every element $c$ of $A\cup B$ satisfies either $c\in A$ or $c\in B$ (or both), we see that $c\ge \inf(A)$, and so $\inf(A)$ is a lower bound for $A\cup B$. \medskip \noindent Let $t$ be any lower bound for $A\cup B$. Since $t\le c$ for every $c\in A\cup B$, we also have that $t\le c$ for every $c\in A$. In particular, $t$ is a lower bound for $A$, and so by the definition of infimum, $t\le\inf(A)$. Therefore, $\inf(A)$ is a lower bound for $A\cup B$ that is greater than or equal to any other lower bound for $A\cup B$. That is, $\inf(A\cup B) =\inf(A)$. \item The easiest way to do this is to begin with an intermediate fact: if $A\subset B$ and if $\sup(B)$ exists, then $\sup(A)$ exists and $\sup(A)\le \sup(B)$.
The proof uses the definition of supremum: since $\sup(B)$ exists, we have that $b\le \sup(B)$ for all $b\in B$ and that if $u$ is an upper bound for $B$, then $\sup(B)\le u$. Since $b\le\sup(B)$ for all $b\in B$ and since $A\subset B$, we have that $a\le \sup(B)$ for all $a\in A$. In particular, $A$ is bounded above, and so $\sup(A)$ exists. To see the second statement, note that since $\sup(B)$ is an upper bound for $A$, we have that $\sup(A)\le\sup(B)$ by definition. \medskip \noindent So, since $A\cap B\subset A$, we have that $\sup(A\cap B)\le \sup(A)$. Similarly, $A\cap B\subset B$, and so $\sup(A\cap B)\le \sup(B)$. Hence, $\sup(A\cap B)\le \min(\sup(A),\sup(B))$. \medskip \noindent To have an example in which $\sup(A\cap B) < \min(\sup(A), \sup(B))$, take $A =\{ 0, 1\}$ and $B =\{ 0, 2\}$. Then, $\sup(A) = 1$, $\sup(B) = 2$, and $\sup(A\cap B) = 0$ since $A\cap B =\{ 0\}$. \item The easiest way to do this is to begin with an intermediate fact: if $A\subset B$ and if $\inf(B)$ exists, then $\inf(A)$ exists and $\inf(A)\ge \inf(B)$. The proof uses the definition of infimum: since $\inf(B)$ exists, we have that $b\ge \inf(B)$ for all $b\in B$ and that if $t$ is a lower bound for $B$, then $\inf(B)\ge t$. Since $b\ge\inf(B)$ for all $b\in B$ and since $A\subset B$, we have that $a\ge \inf(B)$ for all $a\in A$. In particular, $A$ is bounded below, and so $\inf(A)$ exists. To see the second statement, note that since $\inf(B)$ is a lower bound for $A$, we have that $\inf(A)\ge\inf(B)$ by definition. \medskip \noindent So, since $A\cap B\subset A$, we have that $\inf(A\cap B)\ge \inf(A)$. Similarly, $A\cap B\subset B$, and so $\inf(A\cap B)\ge \inf(B)$. Hence, $\inf(A\cap B)\ge \max(\inf(A),\inf(B))$. \medskip \noindent We note that it is possible to construct an example in which $\inf(A\cap B) > \max(\inf(A), \inf(B))$. Namely, take $A =\{ -1, 0\}$ and $B =\{ -2, 0\}$. Then, $\inf(A) = -1$, $\inf(B) = -2$, and $\inf(A\cap B) = 0$ since $A\cap B =\{ 0\}$. 
\item Since $u$ is an upper bound for $A$, we have that $u\ge \sup(A)$, by the definition of supremum. (And note that $\sup(A)$ exists since $A$ is bounded above.) Since $u\in A$, we also have that $u\le \sup(A)$. Since $u\ge\sup(A)$ and $u\le\sup(A)$, it must be that $u =\sup(A)$. \item Since $t$ is a lower bound for $A$, we have that $t\le \inf(A)$, by the definition of infimum. (And note that $\inf(A)$ exists since $A$ is bounded below.) Since $t\in A$, we also have that $t\ge \inf(A)$. Since $t\le\inf(A)$ and $t\ge\inf(A)$, it must be that $t =\inf(A)$. \item Set $X =\{ y\: |\: y\mbox{ is a lower bound for A} \}$. By definition, $\inf(A)\in X$, since $\inf(A)$ is a lower bound for $A$. Now take any element $y$ of $X$, so that $y$ is a lower bound for $A$. Again by the definition of the infimum, $y\le \inf(A)$. So, $\inf(A)$ is an upper bound for $X$ and $\inf(A)\in X$, and so $\inf(A) =\sup(X) =\sup\{ y\: |\: y\mbox{ is a lower bound for A} \}$. (Note that the assumption that $\inf(A)$ exists is equivalent to the assumption that $A$ is bounded below, which insures that $X$ is non-empty.) \item Set $X =\{ y\: |\: y\mbox{ is an upper bound for A} \}$. By definition, $\sup(A)\in X$, since $\sup(A)$ is an upper bound for $A$. Now take any element $y$ of $X$, so that $y$ is an upper bound for $A$. Again by the definition of the supremum, $y\ge \sup(A)$. So, $\sup(A)$ is a lower bound for $X$ and $\sup(A)\in X$, and so $\sup(A) =\inf(X) =\inf\{ y\: |\: y\mbox{ is an upper bound for A} \}$. (Note that the assumption that $\sup(A)$ exists is equivalent to the assumption that $A$ is bounded above, which insures that $X$ is non-empty.) \item This one we argue by contradiction. Suppose that a set $A$ has two suprema, and call them $x_1$ and $x_2$. Both $x_1$ and $x_2$ are upper bounds for $A$, by definition. Since $x_1$ is a supremum for $A$, it is less than or equal to all other upper bounds, and so $x_1\le x_2$. 
Similarly, since $x_2$ is a supremum for $A$, it is less than or equal to all other upper bounds, and so $x_2\le x_1$. Since $x_1\le x_2\le x_1$, it must be that $x_1 =x_2$, and so the supremum of $A$ is unique. (Note that this exercise justifies why we call it 'the supremum' instead of 'a supremum'.) \item This one we argue by contradiction. Suppose that a set $A$ has two infima, and call them $x_1$ and $x_2$. Both $x_1$ and $x_2$ are lower bounds for $A$, by definition. Since $x_1$ is an infimum for $A$, it is greater than or equal to all other lower bounds, and so $x_1\ge x_2$. Similarly, since $x_2$ is an infimum for $A$, it is greater than or equal to all other lower bounds, and so $x_2\ge x_1$. Since $x_1\ge x_2\ge x_1$, it must be that $x_1 =x_2$, and so the infimum of $A$ is unique. (Note that this exercise justifies why we call it 'the infimum' instead of 'an infimum'.) \end{enumerate} \medskip \noindent {\bf Solution \ref{opposites}:} \begin{enumerate} \item Since $\sup(A)$ exists, the set $A$ is bounded above. Let $u$ be any upper bound for $A$, so that $a\le u$ for all $a\in A$. Multiplying through by $-1$, this becomes $-a\ge -u$ for all $a\in A$. Since $-a$ ranges over all of $A^-$ as $a$ ranges over $A$, this yields that $-u$ is a lower bound for $A^-$, and so $\inf(A^-)$ exists. In particular, taking $u =\sup(A)$, we have that $-\sup(A)$ is a lower bound for $A^-$. \medskip \noindent To see that there is no lower bound for $A^-$ that is greater than $-\sup(A)$, note that $t$ is a lower bound for $A^-$ if and only if $-t$ is an upper bound for $A$. Therefore, a lower bound for $A^-$ greater than $-\sup(A)$ exists if and only if an upper bound for $A$ less than $\sup(A)$ exists, but by the definition of supremum no such upper bound can exist. Hence, $-\sup(A)$ is the greatest lower bound for $A^-$, or in other words, $-\sup(A) =\inf(A^-)$, as desired. \item Since $\inf(A)$ exists, the set $A$ is bounded below.
Let $t$ be any lower bound for $A$, so that $a\ge t$ for all $a\in A$. Multiplying through by $-1$, this becomes $-a\le -t$ for all $a\in A$. Since $-a$ ranges over all of $A^-$ as $a$ ranges over $A$, this yields that $-t$ is an upper bound for $A^-$, and so $\sup(A^-)$ exists. In particular, taking $t =\inf(A)$, we have that $-\inf(A)$ is an upper bound for $A^-$. \medskip \noindent To see that there is no upper bound for $A^-$ that is less than $-\inf(A)$, note that $u$ is an upper bound for $A^-$ if and only if $-u$ is a lower bound for $A$. Therefore, an upper bound for $A^-$ less than $-\inf(A)$ exists if and only if a lower bound for $A$ greater than $\inf(A)$ exists, but by the definition of infimum no such lower bound can exist. Hence, $-\inf(A)$ is the least upper bound for $A^-$, or in other words, $-\inf(A) =\sup(A^-)$, as desired. \end{enumerate} \medskip \noindent {\bf Solution \ref{some-subsets}:} [Note that each of these exercises has many, many possible solutions. And yes, it is a very silly question.] \begin{enumerate} \item Take $S =\{ x\in {\bf R}\: |\: x > \sqrt{2} \}$, so that $\inf(S) =\sqrt{2}$, which is irrational, and $S$ is also bounded below by $0$, which is rational. (In fact, any set of real numbers that is bounded below has both infinitely many rational lower bounds and infinitely many irrational lower bounds.) \item Take $S = (0, \infty)$, so that $\inf(S) = 0$, which is rational, and $S$ is bounded below by $-1$, which is also rational. \item Take $S =(2, 4)$, so that $\inf(S) =2$, which is rational, and $S$ is also bounded below by $-\pi$, which is irrational. \item Take $S =(\sqrt{3}, \infty)$, so that $\inf(S) =\sqrt{3}$, which is irrational, and $S$ is also bounded below by $\sqrt{2}$, which is also irrational. 
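These examples can also be probed numerically. The following snippet is an illustrative addition (not part of the original solution), spot-checking the last example $S =(\sqrt{3}, \infty)$: points of $S$ approach $\inf(S) =\sqrt{3}$ from above, while the lower bound $\sqrt{2}$ lies strictly below every point of $S$.

```python
import math

# Illustrative numerical check (an addition, not part of the original solution):
# for S = (sqrt(3), infinity), sample points of S approach inf(S) = sqrt(3),
# and sqrt(2) is a lower bound lying strictly below every point of S.
inf_S = math.sqrt(3)
samples = [inf_S + 10 ** (-k) for k in range(1, 8)]  # points of S near inf(S)
assert all(s > inf_S for s in samples)          # every sample lies in S
assert all(s > math.sqrt(2) for s in samples)   # sqrt(2) is a lower bound
print(min(samples) - inf_S)  # the gap above inf(S) can be made arbitrarily small
```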
\end{enumerate} \medskip \noindent {\bf Solution \ref{specific-sequence}:} \begin{itemize} \item $u_1 = \frac{3(1)-1}{4(1)-5} = \frac{2}{-1} = -2$; \item $u_5 = \frac{3(5)-1}{4(5)-5} = \frac{14}{15} \approx 0.9333$; \item $u_{10} = \frac{3(10)-1}{4(10)-5} = \frac{29}{35} \approx 0.8286$; \item $u_{100} = \frac{3(100)-1}{4(100)-5} = \frac{299}{395} \approx .7570$; \item $u_{1000} = \frac{3(1000)-1}{4(1000)-5} = \frac{2999}{3995} \approx 0.7507$; \item $u_{10000} = \frac{3(10000)-1}{4(10000)-5} = \frac{29999}{39995} \approx 0.7501$; \item $u_{100000} = \frac{3(100000)-1}{4(100000)-5} = \frac{299999}{399995} \approx 0.7500$; \end{itemize} \medskip \noindent So, it seems that a reasonable guess would be that $L =\lim_{n\rightarrow\infty} u_n$ exists and equals $0.75 = \frac{3}{4}$. To verify this, we use the definition: we need to show that for any choice of $\varepsilon >0$, we can find $M$ so that $|u_n -L| <\varepsilon$ for all $n >M$. \medskip \noindent Calculating, we see that \[ |u_n -L| = \left| \frac{3n-1}{4n-5} - \frac{3}{4}\right| = \left| \frac{4(3n-1) - 3(4n-5)}{4(4n-5)} \right| = \left| \frac{11}{4(4n-5)} \right| = \frac{11}{4(4n-5)}. \] (The last equality follows since $u_n -L$ is positive for $n >1$.) \medskip \noindent To find the value of $M$ so that $|u_n -L| <\varepsilon$ for $n>M$, we start by solving for $n$: since $\frac{11}{4(4n-5)} <\varepsilon$, we have that $\frac{11}{4\varepsilon} < 4n-5$, and so $\frac{11}{16\varepsilon} + \frac{5}{4} < n$. That is, for a specified value of $\varepsilon$, we can take $M = \frac{11}{16\varepsilon} + \frac{5}{4} = \frac{11+20\varepsilon}{16\varepsilon}$. Then, for any choice of $\varepsilon >0$, we set $M = \frac{11+20\varepsilon}{16\varepsilon}$, and then if we take $n >M$, working backwards we have that $|u_n -L| <\varepsilon$. \medskip \noindent {\bf Solution \ref{another-specific-sequence}:} Set $a_n = \frac{1+2\cdot 10^n}{5+3\cdot 10^n}$ and $L = \frac{2}{3}$. 
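As a quick numerical check before running through the definition (this snippet is an illustrative addition, not part of the original solution), the terms $a_n$ do settle down near $\frac{2}{3}$, at the rate the error estimate below predicts:

```python
# Illustrative numerical check (an addition): a_n = (1 + 2*10^n)/(5 + 3*10^n)
# approaches 2/3, with error 7/(15 + 9*10^n) < 7/(9*10^n).
def a(n):
    return (1 + 2 * 10**n) / (5 + 3 * 10**n)

for n in range(1, 6):
    assert abs(a(n) - 2 / 3) < 7 / (9 * 10**n)  # the error bound holds
print(a(3))  # already close to 2/3
```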
For each choice of $\varepsilon >0$, we need to show that there exists $M$ so that $|a_n -L| <\varepsilon$ for all $n>M$. \medskip \noindent Calculating, we see that \[ |a_n -L| = \left| \frac{1+2\cdot 10^n}{5+3\cdot 10^n} -\frac{2}{3} \right| = \left| \frac{3 + 6\cdot 10^n - (10 + 6\cdot 10^n)}{3(5 +3\cdot 10^n)} \right| = \left| \frac{7}{15 +9\cdot 10^n}\right|. \] \noindent Hence, for a given value of $\varepsilon >0$, we want to find $M$ so that $\left| \frac{7}{15 + 9\cdot 10^n} \right| <\varepsilon$ for $n>M$. So, we solve for $n$ in terms of $\varepsilon$. First, note that $\frac{7}{15 + 9\cdot 10^n} >0$ for all positive integers $n$. So, we need only solve $\frac{7}{15 + 9\cdot 10^n} <\varepsilon$ for $n$. \medskip \noindent So, $\frac{7}{\varepsilon} < 15 + 9\cdot 10^n$, and so $-15 + \frac{7}{\varepsilon} < 9\cdot 10^n$, and so $-\frac{15}{9} + \frac{7}{9\varepsilon} < 10^n$. Performing a final bit of simplification, we get $\frac{-15\varepsilon + 7}{9\varepsilon} < 10^n$. If the numerator is positive, that is if $\varepsilon < \frac{7}{15}$, we can solve for $n$ by taking $\log_{10}$ of both sides. If on the other hand the numerator is negative, then any positive integer will do. So, set \[ M = \left\{ \begin{array}{ll} 1 & \mbox{ if $\varepsilon \ge \frac{7}{15}$}; \\ \log_{10}\left( \frac{-15\varepsilon + 7}{9\varepsilon} \right) & \mbox{ otherwise} \end{array}\right. \] \medskip \noindent To get a specific value of $M$ so that $|a_n -L|<10^{-3}$ for $n >M$, we substitute $\varepsilon = 10^{-3}$ into the above equation to get that $n > \log_{10} \left( \frac{-15\cdot 10^{-3} + 7}{9\cdot 10^{-3}} \right) \approx 2.8899$. So, we can take $M =3$. \medskip \noindent {\bf Solution \ref{euler-exercise}:} We start with the first part of the inequality, that $\frac{1}{n+1} <\ln(n+1)-\ln(n) =\ln \left( \frac{n+1}{n} \right)$. Set $f(x) =\ln \left( \frac{x+1}{x} \right) -\frac{1}{x+1}$ and $b_n =f(n)$. We want to show that $f(x) >0$ for all $x\ge 1$. 
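Before differentiating, a quick numerical spot-check (an illustrative addition, not part of the original argument) suggests that $f$ is indeed positive and decreasing on $[1,\infty)$:

```python
import math

# Illustrative numerical check (an addition): f(x) = ln((x+1)/x) - 1/(x+1)
# looks positive and decreasing for x >= 1; the argument in the text proves this.
def f(x):
    return math.log((x + 1) / x) - 1 / (x + 1)

xs = [1, 2, 5, 10, 100, 1000]
vals = [f(x) for x in xs]
assert all(v > 0 for v in vals)                    # f > 0 at the sample points
assert all(u > w for u, w in zip(vals, vals[1:]))  # f decreasing on the samples
```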
Calculating, we see that $f'(x) = -\frac{1}{x(x+1)^2} <0$ for all $x >0$. This implies that $f(x)$ is decreasing, and hence that $\{ b_n\}$ is a monotonically decreasing sequence. Since $\lim_{n\rightarrow\infty} b_n =0$, this yields that $b_n >0$ for all $n$. (Because, if some $b_M <0$, then since $\{ b_n\}$ is a monotonically decreasing sequence, we would have that $b_{M+k}\le b_M <0$ for all $k\ge 0$, and so $\{ b_n\}$ could not converge to $0$.) Since $b_n >0$ for all $n$, we have that $\ln \left( \frac{n+1}{n} \right) >\frac{1}{n+1}$ for all $n$, as desired. \medskip \noindent To handle the other part of the inequality, consider $c_n =\frac{1}{n} - \ln \left( \frac{n+1}{n} \right)$ and set $g(x) =\frac{1}{x} - \ln \left( \frac{x+1}{x} \right)$, so that $c_n =g(n)$. Since $g'(x) =-\frac{1}{x^2(x+1)} <0$ for all $x >0$, we see that $\{ c_n\}$ is monotonically decreasing. Again, since $\lim_{n\rightarrow\infty} c_n =0$, we see that $c_n >0$ for all $n$, and hence that $\frac{1}{n} > \ln \left( \frac{n+1}{n} \right)$ for all $n$, as desired. \medskip \noindent It remains to show that $\{ a_n\}$ is bounded below and monotonically decreasing. Since \[ a_{n+1} -a_n =\left( \sum_{k=1}^{n+1} \frac{1}{k} \right) -\ln(n+1) -\left( \sum_{k=1}^n \frac{1}{k} \right) + \ln(n) = \frac{1}{n+1} -\ln(n+1) +\ln(n) = \frac{1}{n+1} -\ln\left( \frac{n+1}{n}\right), \] we see that $a_{n+1} -a_n <0$ by the first part of the inequality. That is, $\{ a_n\}$ is monotonically decreasing. \medskip \noindent Since $\frac{1}{n} >\ln\left( \frac{n+1}{n}\right)$ for all $n$, we have that \[ a_n =\left( \sum_{k=1}^n \frac{1}{k}\right) -\ln(n) > \left( \sum_{k=1}^{n} \ln \left( \frac{k+1}{k}\right) \right) -\ln(n) = \ln(n+1) -\ln(n) > 0, \] since the sum of logarithms telescopes to $\ln(n+1)$, and so $\{ a_n\}$ is bounded below. \medskip \noindent Since $\{ a_n\}$ is monotonically decreasing and bounded below, the sequence $\{ a_n\}$ converges. \medskip \noindent {\bf Solution:} \begin{itemize} \item for every $\varepsilon >0$, there exists $M$ so that $3^{2n-1} >\varepsilon$ for all $n >M$. \item for every $\varepsilon >0$, there exists $M$ so that $1-2n <-\varepsilon$ for all $n >M$. 
\item for every $\varepsilon >0$, there exists $M$ so that $| e^{-n} -0| <\varepsilon$ for all $n >M$. \end{itemize} \medskip \noindent {\bf Solution \ref{sequence-scavenger}:} \begin{enumerate} \item {\bf converges:} whenever we are evaluating a limit in which the variable (in this case $n$) appears in both the base and the exponent, we follow the same basic procedure. First use the identity $x =\exp(\ln(x))$ to rewrite the term. Here, \[ a_n= (n+2)^{1/n} =\exp\left( \frac{\ln(n+2)}{n}\right). \] Next, we check to see whether we are dealing with an indeterminate form. Since the limit $\lim_{n\rightarrow\infty} \frac{\ln(n+2)}{n}$ has the indeterminate form $\frac{\infty}{\infty}$, we may use l'Hopital's rule to evaluate \[ \lim_{n\rightarrow\infty} \frac{\ln(n+2)}{n} =\lim_{n\rightarrow\infty} \frac{1}{n+2} =0. \] Hence, $\{ a_n\}$ converges to $e^0 =1$. \item {\bf converges:} there is a standard way of evaluating the limit as $n\rightarrow\infty$ of a rational function in $n$ (where a rational function is the quotient of two polynomials). First, locate the highest power of $n$ that appears in either the numerator or the denominator, and then multiply both numerator and denominator by its reciprocal. Here, the highest power of $n$ that appears is $n^3$, and so we calculate \[ a_n =\frac{n^2 + 3n + 2}{6n^3 + 5} =\frac{n^2 + 3n + 2}{6n^3 + 5} \cdot \frac{\frac{1}{n^3}}{\frac{1}{n^3}} =\frac{\frac{1}{n} + \frac{3}{n^2} + \frac{2}{n^3}}{6+ \frac{5}{n^3}}. \] We then use several properties of limits: that the limit of a quotient is the quotient of the limits, that the limit of a sum is the sum of the limits, and that $\lim_{n\rightarrow\infty} \frac{1}{n} =0$. Here, \[ \lim_{n\rightarrow\infty} a_n =\lim_{n\rightarrow\infty} \frac{\frac{1}{n} + \frac{3}{n^2} + \frac{2}{n^3}}{6+ \frac{5}{n^3}} =\frac{0}{6} =0. \] Hence, $\{ a_n\}$ converges to $0$. \item {\bf converges:} as above, we first rewrite the term using $x =\exp(\ln(x))$. 
Here, \[ a_n = \left( 1+\frac{1}{n} \right)^n =\exp \left( n\ln\left( 1+\frac{1}{n} \right) \right) =\exp\left( \frac{\ln\left( 1 + \frac{1}{n}\right) }{\frac{1}{n}} \right). \] We then concentrate on the exponent and check to see whether we are dealing with an indeterminate form, which in this case we are, since both $\lim_{n\rightarrow\infty} \ln(1+\frac{1}{n})$ and $\lim_{n\rightarrow\infty} \frac{1}{n}$ are equal to $0$. Hence, we may apply l'Hopital's rule to evaluate \[ \lim_{n\rightarrow\infty} \frac{\ln \left( 1 + \frac{1}{n} \right) }{\frac{1}{n}} =\lim_{n\rightarrow\infty} \frac{1}{1+ \frac{1}{n}} =1. \] Hence, $\{ a_n\}$ converges to $e^1 =e$. \item {\bf converges:} here we use the squeeze law. Since $-1\le \sin(n)\le 1$ for all $n$, we have that $-\frac{1}{3^n} \le \frac{\sin(n)}{3^n} \le \frac{1}{3^n}$. Since $\lim_{n\rightarrow\infty} \frac{1}{3^n} =0$, we have that $\lim_{n\rightarrow\infty} -\frac{1}{3^n} = 0$ as well, and so $\{ a_n\}$ converges to $0$. \item {\bf diverges:} write \[ a_n=(\sqrt{2n+3} -\sqrt{n+1})\cdot\frac{\sqrt{2n+3} +\sqrt{n+1}}{\sqrt{2n+3} +\sqrt{n+1}} = \frac{n+2}{\sqrt{2n+3} +\sqrt{n+1}}. \] We now massage algebraically, in order to simplify: \[ \frac{n+2}{\sqrt{2n+3} +\sqrt{n+1}}\ge \frac{n+2}{2 \sqrt{2n+3}} = \frac{n+ \frac{3}{2} + \frac{1}{2}}{2 \sqrt{2(n+\frac{3}{2} )}} > \frac{n+\frac{3}{2}}{2 \sqrt{2(n+\frac{3}{2} )}} =\frac{1}{2\sqrt{2}}\sqrt{n+\frac{3}{2} }. \] Since $\lim_{n\rightarrow\infty} \sqrt{n+\frac{3}{2} } =\infty$, we see by the comparison test that $\lim_{n\rightarrow\infty} a_n =\infty$, and so $\{ a_n\}$ diverges. \item {\bf diverges:} for $n = 8k$, $a_{8k}=\cos \left( \frac{8k\pi}{4} \right) = 1$, while for $n=8k+1$, $a_{8k+1}=\cos \left( \frac{(8k+1)\pi}{4} \right) =\frac{1}{\sqrt{2}}$. In particular, $|a_{8k} -a_{8k+1}| = 1-\frac{1}{\sqrt{2}} >0$ for all $k$, and so the sequence fails the Cauchy criterion, and so diverges. 
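The failure of the Cauchy criterion in the last item can also be seen numerically (this snippet is an illustrative addition, not part of the original solution): the terms $a_{8k}$ and $a_{8k+1}$ never get close together.

```python
import math

# Illustrative numerical check (an addition): a_n = cos(n*pi/4) oscillates forever,
# so the gap between a_{8k} and a_{8k+1} never shrinks below a fixed bound.
def a(n):
    return math.cos(n * math.pi / 4)

assert all(abs(a(8 * k) - a(8 * k + 1)) > 0.25 for k in range(1, 100))
```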
\item {\bf converges:} write $a_n= \left( 1+\frac{1}{n} \right)^{1/n} =\exp\left( \frac{ \ln \left( 1+\frac{1}{n} \right)}{n} \right)$. Since $\lim_{n\rightarrow\infty} \ln(1+\frac{1}{n} ) = 0$, we have that $\lim_{n\rightarrow\infty} \frac{\ln(1+\frac{1}{n})}{n} = 0$ (by the squeeze law for instance, since $0\le \frac{ \ln(1+\frac{1}{n})}{n} \le \ln(1+ \frac{1}{n})$ for $n\ge 1$). Hence, $\lim_{n\rightarrow\infty} \exp \left( \frac{ \ln(1+\frac{1}{n})}{n} \right) =e^0 =1$, and so $\{ a_n\}$ converges to $1$. \item {\bf diverges:} given $\varepsilon >0$, we show that there exists $M$ so that $a_n >\varepsilon$ for $n >M$. Since $a_n =\ln(n)$, this becomes $\ln(n) >\varepsilon$ for $n >M$. Exponentiating both sides of $\ln(n) >\varepsilon$, we get that $n >e^\varepsilon$ (and vice versa, that if $n >e^\varepsilon$, then $\ln(n) >\varepsilon$, since $e^x$ is an increasing function), and so we can take $M =e^\varepsilon$. \item {\bf diverges:} very similar to the question just done. Given $\varepsilon >0$, we show that there exists $M$ so that $a_n >\varepsilon$ for $n>M$. Taking logs of both sides of $a_n =e^n >\varepsilon$, we get that $n >\ln(\varepsilon)$. So, we may take $M =\ln(\varepsilon)$. \item {\bf converges:} since $\lim_{n\rightarrow\infty} a_n$ has the indeterminate form $\frac{\infty}{\infty}$ (as both $\ln(n)\rightarrow\infty$ and $\sqrt{n}\rightarrow\infty$ as $n\rightarrow\infty$), we may apply l'Hopital's rule to see that \[ \lim_{n\rightarrow\infty} \frac{\ln(n)}{\sqrt{n}} =\lim_{n\rightarrow\infty} \frac{ \frac{1}{n} }{ \frac{1}{2\sqrt{n}} } =\lim_{n\rightarrow\infty} \frac{2}{\sqrt{n}} =0. \] Hence, $\{ a_n\}$ converges to $0$. \item {\bf converges:} as always, we first rewrite each term as \[ a_n = \left( 1- \frac{2}{n^2} \right)^n =\exp \left (n\ln \left( 1- \frac{2}{n^2} \right) \right) =\exp \left( \frac{ \ln \left( 1- \frac{2}{n^2} \right)}{\frac{1}{n}} \right). 
\] As $n\rightarrow\infty$, the exponent reveals itself to have the indeterminate form $\frac{0}{0}$, and so we may evaluate using l'Hopital's rule: \[ \lim_{n\rightarrow\infty} \frac{ \ln \left( 1-\frac{2}{n^2} \right) }{ \frac{1}{n} } = \lim_{n\rightarrow\infty} \frac{ \frac{1}{1-\frac{2}{n^2} }\cdot \frac{4}{n^3}}{\frac{-1}{n^2}} =\lim_{n\rightarrow\infty} \frac{- \frac{4}{1 -\frac{2}{n^2} }}{n} = 0. \] Hence, $\{ a_n\}$ converges to $e^0 =1$. \item {\bf diverges:} we could use either l'Hopital's rule (since the limit has the indeterminate form $\frac{\infty}{\infty}$) or the standard trick for dealing with limits of rational functions (multiply numerator and denominator by the reciprocal of the highest power of $n$ appearing anywhere in the term), but instead we massage algebraically: \[ a_n = \frac{n^3}{10n^2+1} > \frac{n^3}{10n^2+ 10n^2} = \frac{n}{20}. \] Since $\{ \frac{n}{20} \}$ diverges, the comparison test gives that $\{ a_n\}$ diverges as well. \item {\bf converges:} it is a reasonable guess that $\{ a_n =x^n\}$ converges to $0$, which by definition means that given $\varepsilon >0$, there exists $M$ so that $| x^n -0| =| x^n| <\varepsilon$ for $n >M$. For $x =0$, this is true, since $\{ x^n\}$ becomes the constant sequence $\{ a_n =0\}$. So, we can assume that $x\ne 0$. Taking $\ln$ of both sides of $|x^n| <\varepsilon$ and using that $|x^n| =|x|^n$, we get that $n\ln(|x|) <\ln(\varepsilon)$, and so $n > \frac{\ln(\varepsilon)}{\ln(|x|)}$. (The direction of the inequality changes since $|x| <1$ and so $\ln(|x|) <0$.) Hence, we may take $M =\frac{\ln(\varepsilon)}{\ln(|x|)}$. [Then, if $n >M =\frac{\ln(\varepsilon)}{\ln(|x|)}$, then $n\ln(|x|) <\ln(\varepsilon)$, and exponentiating we get that $|x|^n <\varepsilon$, as desired.] \item {\bf converges:} recall that $n^p \ge n$ for $p\ge 1$ and that $n\rightarrow\infty$ as $n\rightarrow\infty$, and so $n^p\rightarrow\infty$ as $n\rightarrow\infty$; more generally, $n^p =\exp(p\ln(n))\rightarrow\infty$ as $n\rightarrow\infty$ for any $p >0$. 
Hence, $\{ \frac{1}{n^p} \}$ converges to $0$, and therefore $\{ a_n =\frac{c}{n^p} \}$ converges to $c\cdot 0 =0$. \item {\bf converges:} using the standard trick for rational functions, write \[ a_n =\frac{2n}{5n-3} =\frac{2n}{5n-3}\cdot\frac{\frac{1}{n} }{ \frac{1}{n} } =\frac{2}{5-\frac{3}{n} }. \] As $n\rightarrow\infty$, $\frac{1}{n} \rightarrow 0$ and so $\{ a_n\}$ converges to $\frac{2}{5}$. \item {\bf converges:} using the standard trick for rational functions, write \[ a_n =\frac{1-n^2}{2+3n^2} =\frac{1-n^2}{2+3n^2}\cdot\frac{ \frac{1}{n^2} }{ \frac{1}{n^2} } =\frac{ \frac{1}{n^2} -1}{ \frac{2}{n^2} +3}. \] As $n\rightarrow\infty$, $\frac{1}{n^2} \rightarrow 0$ and so $\{ a_n\}$ converges to $-\frac{1}{3}$. \item {\bf converges:} using the standard trick for rational functions, write \[ a_n =\frac{n^3-n+7}{2n^3+n^2} =\frac{n^3-n+7}{2n^3+n^2} \cdot\frac{ \frac{1}{n^3} }{\frac{1}{n^3} } =\frac{1- \frac{1}{n^2} +\frac{7}{n^3} }{2+\frac{1}{n} }. \] As $n\rightarrow\infty$, both $\frac{1}{n^2} \rightarrow 0$ and $\frac{1}{n} \rightarrow 0$, and so $\{ a_n\}$ converges to $\frac{1}{2}$. \item {\bf converges:} by a previous part of this exercise, we know that $\{ (\frac{9}{10} )^n\}$ converges to $0$, since $| \frac{9}{10} | <1$, and so $\lim_{n\rightarrow\infty} ( 1 +( \frac{9}{10} )^n) =1+\lim_{n\rightarrow\infty} ( \frac{9}{10} )^n =1$. \item {\bf converges:} by a previous part of this exercise, we know that $\{ (- \frac{1}{2} )^n\}$ converges to $0$, since $|- \frac{1}{2} | <1$, and so $\lim_{n\rightarrow\infty} (2-(- \frac{1}{2} )^n) =2-\lim_{n\rightarrow\infty} (- \frac{1}{2} )^n =2$. \item {\bf diverges:} for $n$ even, $a_n =2$, while for $n$ odd, $a_n =0$. In particular, $|a_n -a_{n+1}| =2$ for all $n$, and so the sequence fails the Cauchy criterion and hence diverges. 
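The three rational-function limits computed above can be confirmed numerically (this snippet is an illustrative addition, not part of the original solutions):

```python
# Illustrative numerical check (an addition): the three rational-function limits
# above, evaluated at a large n, sit close to 2/5, -1/3, and 1/2 respectively.
def r1(n):
    return 2 * n / (5 * n - 3)

def r2(n):
    return (1 - n**2) / (2 + 3 * n**2)

def r3(n):
    return (n**3 - n + 7) / (2 * n**3 + n**2)

n = 10**7
assert abs(r1(n) - 2 / 5) < 1e-6
assert abs(r2(n) + 1 / 3) < 1e-6
assert abs(r3(n) - 1 / 2) < 1e-6
```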
\item {\bf converges:} note that $0\le 1+(-1)^n\le 2$ for all $n$, so that $0\le a_n\le \frac{2}{n}$; since $\lim_{n\rightarrow\infty} \frac{2}{n} =0$, the squeeze law yields that $\lim_{n\rightarrow\infty} a_n =0$. \item {\bf converges:} we begin by noting that \[ \left| \frac{ 1+(-1)^n\sqrt{n}}{ (\frac{3}{2} )^n} \right| \le \frac{2\sqrt{n}}{ ( \frac{3}{2})^n} \] for $n\ge 1$, and so we'll concentrate on evaluating $\lim_{n\rightarrow\infty} \frac{2\sqrt{n}}{ (\frac{3}{2})^n}$ and hope to be able to apply the squeeze law. Since $\lim_{n\rightarrow\infty} \frac{2\sqrt{n}}{ (\frac{3}{2} )^n}$ has the indeterminate form $\frac{\infty}{\infty}$, we may use l'Hopital's rule to evaluate \[ \lim_{n\rightarrow\infty} \frac{2\sqrt{n}}{( \frac{3}{2} )^n} = \lim_{n\rightarrow\infty} \frac{ \frac{1}{\sqrt{n}} }{\ln(\frac{3}{2}) \exp(n\ln(\frac{3}{2} ))} = \lim_{n\rightarrow\infty} \frac{1}{\ln(\frac{3}{2} ) \sqrt{n} (\frac{3}{2} )^n} =0 \] (where we differentiate $(\frac{3}{2} )^n$ by first writing it as $\exp(n\ln(\frac{3}{2}))$). Hence, since $-\frac{2\sqrt{n}}{(\frac{3}{2})^n} \le a_n\le \frac{2\sqrt{n}}{(\frac{3}{2})^n}$, we may use the squeeze law to see that $\{ a_n\}$ converges to $0$. \item {\bf converges:} since $0\le \sin^2(n)\le 1$ for all $n$, we have that $0\le \frac{\sin^2(n)}{\sqrt{n}} \le \frac{1}{\sqrt{n}}$; since $\frac{1}{\sqrt{n}} \rightarrow 0$ as $n\rightarrow\infty$ (since $\sqrt{n}\rightarrow\infty$ as $n\rightarrow\infty$), the squeeze law yields that $\frac{\sin^2(n)}{\sqrt{n}} \rightarrow 0$ as $n\rightarrow\infty$. That is, $\{ a_n\}$ converges to $0$. \item {\bf converges:} since $1\le \sqrt{2+\cos(n)}\le \sqrt{3}$ for all $n$, we have that $\frac{1}{\sqrt{n}} \le \sqrt{\frac{2+\cos(n)}{n}} \le \frac{\sqrt{3}}{\sqrt{n}}$; since $\frac{1}{\sqrt{n}} \rightarrow 0$ as $n\rightarrow\infty$, the squeeze law yields that $\sqrt{\frac{2+\cos(n)}{n} }\rightarrow 0$ as $n\rightarrow\infty$. That is, $\{ a_n\}$ converges to $0$. \item {\bf converges:} since $\sin(\pi n) =0$ for all integers $n$, this sequence is the constant sequence $a_n =n\cdot 0 =0$ for all $n$. In particular, $\{ a_n\}$ converges to $0$. \item {\bf diverges:} since $\cos(\pi n) =(-1)^n$, this sequence can be rewritten as $a_n =(-1)^n n$. 
For $n\ge 1$, $|a_{n+1} -a_n|\ge 2$, and so the sequence fails the Cauchy criterion, and so diverges. \item {\bf converges:} since $-1\le -\sin(n)\le 1$ for all $n$, we have that $-\frac{1}{n} \le -\frac{\sin(n)}{n} \le \frac{1}{n}$ for all $n$, and so $\{ -\frac{\sin(n)}{n} \}$ converges to $0$. Hence, $\{ a_n\}$ converges to $\pi^0 =1$. \item {\bf diverges:} for $n$ even, $\cos(\pi n) =1$ and for $n$ odd, $\cos(\pi n) =-1$. In particular, $|a_{n+1} -a_n| =|2^1 -2^{-1}| =\frac{3}{2}$ for all $n$, and so this sequence fails the Cauchy criterion, and hence $\{ a_n\}$ diverges. \item {\bf converges:} we could use l'Hopital's rule, since $\lim_{n\rightarrow\infty} \frac{\ln(2n)}{\ln(3n)}$ has the indeterminate form $\frac{\infty}{\infty}$, but we proceed in a more low tech way. Using the laws of logarithms and a variant of the standard trick for rational functions, we rewrite \[ a_n =\frac{\ln(2n)}{\ln(3n)} =\frac{\ln(2) +\ln(n)}{\ln(3) +\ln(n)} =\frac{\ln(2) +\ln(n)}{\ln(3) +\ln(n)}\cdot \frac{\frac{1}{\ln(n)} }{ \frac{1}{\ln(n)} } =\frac{1 + \frac{\ln(2)}{\ln(n)} }{1 + \frac{\ln(3)}{\ln(n)} }. \] Since $\ln(n)\rightarrow\infty$ as $n\rightarrow\infty$, we have that both $\frac{\ln(2)}{\ln(n)}$ and $\frac{\ln(3)}{\ln(n)}$ go to $0$ as $n\rightarrow\infty$, and so $\lim_{n\rightarrow\infty} a_n =1$. \item {\bf converges:} since $\lim_{n\rightarrow\infty} \frac{\ln^2(n)}{n}$ has the indeterminate form $\frac{\infty}{\infty}$, we can use l'Hopital's rule: \[ \lim_{n\rightarrow\infty} \frac{\ln^2(n)}{n} =\lim_{n\rightarrow\infty} \frac{2 \ln(n) \frac{1}{n} }{1} =\lim_{n\rightarrow\infty} \frac{2 \ln(n)}{n}. \] This limit still has the indeterminate form $\frac{\infty}{\infty}$, and we can apply l'Hopital's rule again to get \[ \lim_{n\rightarrow\infty} \frac{2 \ln(n)}{n} =\lim_{n\rightarrow\infty} \frac{ \frac{2}{n} }{1} =0. \] Hence, $\{ a_n\}$ converges to $0$. 
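Both logarithmic limits above converge quite slowly, which a numerical check makes visible (this snippet is an illustrative addition, not part of the original solutions):

```python
import math

# Illustrative numerical check (an addition): ln(2n)/ln(3n) -> 1 and ln(n)^2/n -> 0,
# but both approach their limits slowly.
n = 10**9
assert abs(math.log(2 * n) / math.log(3 * n) - 1) < 0.05  # close to 1, slowly
assert math.log(n) ** 2 / n < 1e-6                        # close to 0
# the ratio creeps upward toward 1 as n grows
assert math.log(2 * 10) / math.log(3 * 10) < math.log(2 * n) / math.log(3 * n)
```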
\item {\bf converges:} write \[ a_n =n\sin \left( \frac{1}{n} \right) = \frac{\sin( \frac{1}{n} ) }{ \frac{1}{n} }. \] Since $\lim_{n\rightarrow\infty} a_n$ has the indeterminate form $\frac{0}{0}$, we can apply l'Hopital's rule to get \[ \lim_{n\rightarrow\infty} \frac{\sin \left( \frac{1}{n} \right)}{ \frac{1}{n} } =\lim_{n\rightarrow\infty} \frac{\cos \left( \frac{1}{n} \right) \left(-\frac{1}{n^2} \right) }{-\frac{1}{n^2} } =\lim_{n\rightarrow\infty} \cos \left( \frac{1}{n} \right) =\cos(0) =1. \] Hence, $\{ a_n\}$ converges to $1$. (There is also a geometric argument for evaluating this limit, that can be found in Adams (p. 116, Theorem 7).) \item {\bf converges:} as $n\rightarrow\infty$, $\arctan(n)\rightarrow \frac{\pi}{2}$, and so $\lim_{n\rightarrow\infty} \frac{\arctan(n)}{n} =0$. (This is an application of the squeeze law, since the numerator is bounded by $0$ and $\pi$.) \item {\bf converges:} since $\lim_{n\rightarrow\infty} \frac{n^3}{e^{n/10}}$ has the indeterminate form $\frac{\infty}{\infty}$, we may use l'Hopital's rule: \[ \lim_{n\rightarrow\infty} \frac{n^3}{e^{n/10}} = \lim_{n\rightarrow\infty} \frac{3n^2}{\frac{1}{10} e^{n/10}}. \] Since this latter limit still has the indeterminate form $\frac{\infty}{\infty}$, we use l'Hopital's rule again: \[ \lim_{n\rightarrow\infty} \frac{3n^2}{\frac{1}{10} e^{n/10}} =\lim_{n\rightarrow\infty} \frac{6 n}{\frac{1}{100} e^{n/10}}. \] And as we still have the indeterminate form $\frac{\infty}{\infty}$, we apply l'Hopital's rule yet again: \[ \lim_{n\rightarrow\infty} \frac{6 n}{\frac{1}{100} e^{n/10}} = \lim_{n\rightarrow\infty} \frac{6}{\frac{1}{1000} e^{n/10}}. \] The right hand limit evaluates to $0$, and so $\{ a_n\}$ converges to $0$. \item {\bf converges:} write \[ a_n =\frac{2^n+1}{e^n} = \frac{2^n}{e^n} + \frac{1}{e^n} =\frac{2^n}{e^n} + \frac{1^n}{e^n} = \left(\frac{2}{e}\right)^n + \left(\frac{1}{e}\right)^n. 
\] Since both $\frac{2}{e} <1$ and $\frac{1}{e} <1$, we have that both $( \frac{2}{e} )^n$ and $( \frac{1}{e} )^n$ go to $0$ as $n\rightarrow\infty$, and so their sum goes to $0$ as $n\rightarrow\infty$. That is, $\{ a_n\}$ converges to $0$. \item {\bf converges:} again there are several possible approaches, including l'Hopital's rule, but again we take a low tech approach, and begin by expressing $\sinh(n)$ and $\cosh(n)$ in terms of $e^n$ and $e^{-n}$, to get \[ a_n =\frac{\sinh(n)}{\cosh(n)} =\frac{e^n -e^{-n}}{e^n +e^{-n}} =\frac{e^n -e^{-n}}{e^n +e^{-n}}\cdot \frac{e^{-n}}{e^{-n}} =\frac{1 -e^{-2n}}{1 +e^{-2n}}. \] Since $e^{-2n} =( \frac{1}{e^2} )^n \rightarrow 0$ as $n\rightarrow\infty$, we see that $\lim_{n\rightarrow\infty} a_n =1$. That is, $\{ a_n\}$ converges to $1$. \item {\bf converges:} as with all limits in which the variable appears in both the base and the exponent, we begin by rewriting using the identity $m =\exp(\ln(m))$ to get $a_n =(2n+5)^{1/n} =\exp \left( \frac{\ln(2n+5)}{n} \right)$. We may now use l'Hopital's rule to evaluate the limit of the exponent $\lim_{n\rightarrow\infty} \frac{\ln(2n+5)}{n}$ (as it has the indeterminate form $\frac{\infty}{\infty}$) to get \[ \lim_{n\rightarrow\infty} \frac{\ln(2n+5)}{n} =\lim_{n\rightarrow\infty} \frac{ \frac{2}{2n+5}}{1} =0. \] Therefore, $\{ a_n\}$ converges to $e^0 =1$. \item {\bf converges:} as with all limits in which the variable appears in both the base and the exponent, we begin by rewriting using the identity $m =\exp(\ln(m))$ to get \[ a_n = \left(\frac{n-1}{n+1}\right)^n = \left( \frac{n+1-2}{n+1}\right)^n =\left( 1-\frac{2}{n+1} \right)^n =\exp\left( n\ln\left( 1-\frac{2}{n+1} \right) \right). \] Since the exponent has the indeterminate form $0\cdot \infty$ as $n\rightarrow\infty$, we rewrite it as \[ n\ln\left( 1- \frac{2}{n+1} \right) = \frac{ \ln(1- \frac{2}{n+1} )}{\frac{1}{n} }, \] which has the indeterminate form $\frac{0}{0}$ as $n\rightarrow\infty$. 
We now apply l'Hopital's rule to evaluate \[ \lim_{n\rightarrow\infty} \frac{ \ln \left(1-\frac{2}{n+1}\right)}{\frac{1}{n}} =\lim_{n\rightarrow\infty} \frac{\frac{1}{1-\frac{2}{n+1}}\cdot \frac{2}{(n+1)^2}}{-\frac{1}{n^2}} =\lim_{n\rightarrow\infty} \frac{-2n^2}{\left( 1-\frac{2}{n+1} \right)\cdot (n+1)^2} = -2. \] Hence, $\{ a_n\}$ converges to $e^{-2}$. \item {\bf converges:} since $-\frac{1}{n} \rightarrow 0$ as $n\rightarrow\infty$, we see that $\{ a_n\}$ converges to $(0.001)^0 =1$. \item {\bf converges:} as $n\rightarrow\infty$, $\frac{n+1}{n} =1+ \frac{1}{n} \rightarrow 1$, and so $\{ a_n\}$ converges to $2^1 =2$. \item {\bf converges:} one way to evaluate this limit is to write $a_n =( \frac{2}{n} )^{3/n} = \frac{2^{3/n}}{n^{3/n}}$ and to evaluate the limits of the numerator and denominator separately. To evaluate $\lim_{n\rightarrow\infty} 2^{3/n}$, all we need note is that $\lim_{n\rightarrow\infty} \frac{3}{n} = 0$, and so $\{ 2^{3/n}\}$ converges to $2^0 =1$. \medskip \noindent To evaluate $\lim_{n\rightarrow\infty} n^{3/n}$, we rewrite $n^{3/n}$ as $n^{3/n} =\exp(\ln(n) \frac{3}{n})$ and use l'Hopital's rule to evaluate $\lim_{n\rightarrow\infty} \frac{3 \ln(n)}{n}$ (since it has the indeterminate form $\frac{\infty}{\infty}$). Using l'Hopital's rule, we get that \[ \lim_{n\rightarrow\infty} \frac{3\ln(n)}{n} =\lim_{n\rightarrow\infty} \frac{ \frac{3}{n} }{1} =0, \] and so $\{ n^{3/n}\}$ converges to $e^0 =1$. Therefore, \[ \lim_{n\rightarrow\infty} \frac{2^{3/n}}{n^{3/n}} = \frac{\lim_{n\rightarrow\infty} 2^{3/n}}{\lim_{n\rightarrow\infty} n^{3/n}} =\frac{1}{1} =1. \] \item {\bf diverges:} begin by ignoring the $(-1)^n$ and worrying about what happens to the rest of the term. Using the standard trick, massage to get $(n^2+1)^{1/n} =\exp( \frac{\ln(n^2 +1)}{n} )$. 
Since $\lim_{n\rightarrow\infty} \frac{\ln(n^2 +1)}{n}$ has the indeterminate form $\frac{\infty}{\infty}$, we may use l'Hopital's rule to evaluate \[ \lim_{n\rightarrow\infty} \frac{\ln(n^2 +1)}{n} =\lim_{n\rightarrow\infty} \frac{ \frac{2n}{n^2 +1} }{1} =0, \] and so \[ \lim_{n\rightarrow\infty} \exp \left( \frac{\ln(n^2 +1)}{n} \right) =e^0 =1. \] So, putting the $(-1)^n$ back into the picture, we see that $\{ a_n\}$ fails the Cauchy criterion: specifically, since $\{ (n^2+1)^{1/n} \}$ converges to $1$, for any $\varepsilon >0$, there exists $M$ so that $\left| (n^2+1)^{1/n} -1 \right| <\varepsilon$ for $n>M$. Choose $\varepsilon =\frac{1}{2}$, and note that for $n>M$, we get that $|a_n -a_{n+1}| > 1$, since one of $a_n$, $a_{n+1}$ is within $\frac{1}{2}$ of $1$ and the other is within $\frac{1}{2}$ of $-1$ (remember the alternating signs). So, $\{ a_n\}$ diverges. \item {\bf converges:} we perform a bit of algebraic massage: note that \[ a_n =\frac{ \left( \frac{2}{3} \right)^n}{ \left( \frac{1}{2} \right)^n+ \left( \frac{9}{10} \right)^n} < \frac{ \left( \frac{2}{3} \right)^n}{ \left( \frac{9}{10} \right)^n} =\left(\frac{20}{27} \right)^n. \] Since $\left(\frac{20}{27} \right)^n\rightarrow 0$ as $n\rightarrow\infty$ (since $\frac{20}{27} <1$) and since $a_n >0$ for all $n$, the squeeze law yields that $\{ a_n\}$ converges to $0$ as well. \end{enumerate} \medskip \noindent {\bf Solution \ref{fibonacci}:} Suppose that $\{ q_n\}$ converges and set $x =\lim_{n\rightarrow\infty} q_n$. Now, note that \[ q_n =\frac{a_n}{a_{n-1}} =\frac{a_{n-1} +a_{n-2}}{a_{n-1}} =1+\frac{a_{n-2}}{a_{n-1}} =1 + \frac{1}{q_{n-1}}. \] Hence, \[ x =\lim_{n\rightarrow\infty} q_n =\lim_{n\rightarrow\infty} \left( 1+\frac{1}{q_{n-1}} \right) =1+\frac{1}{\lim_{n\rightarrow\infty} q_{n-1}} =1+\frac{1}{x}, \] since $\lim_{n\rightarrow\infty} q_{n-1} =x$ as well. Therefore, $x =1+\frac{1}{x}$, and so (multiplying through by $x$ and simplifying) $x$ satisfies the quadratic equation $x^2 -x-1=0$. 
By the quadratic formula, this yields that $x =\frac{1}{2}\left(1 \pm\sqrt{5} \right)$. However, since $q_n \ge 0$ for all $n$, it must be that $x\ge 0$ as well, and so $x =\frac{1}{2}\left(1 +\sqrt{5}\right)$. \medskip \noindent {\bf Solution \ref{sequence-proofs}:} In all three of these statements, we start with the same piece of information, namely that $\lim_{n\rightarrow\infty} x_n =-4$. That is, for each $\varepsilon >0$, there exists $M$ (which depends on $\varepsilon$) so that $|x_n - (-4)| =|x_n +4| <\varepsilon$ for $n >M$. \begin{enumerate} \item we need to show that $\lim_{n\rightarrow\infty} \sqrt{|x_n|} = 2$, which is phrased mathematically as needing to show that for each $\mu >0$, there exists $P$ so that $| \sqrt{|x_n|} -2| < \mu$ for $n >P$. We start by rewriting $| \sqrt{|x_n|} -2|$, using the standard trick for handling differences of square roots, namely \[ | \sqrt{|x_n|} -2| =|\sqrt{|x_n|} -2|\cdot \frac{|\sqrt{|x_n|} + 2|}{|\sqrt{|x_n|} + 2| } = \frac{|\: |x_n| -4|}{|\sqrt{|x_n|} + 2|}\le \frac{|\: |x_n| -4|}{2}. \] (The last inequality follows from the fact that $|\: \sqrt{|x_n|} +2|\ge 2$ for all possible values of $x_n$.) Since for any $\mu >0$, there exists $M$ so that $ |\: |x_n| -4| < 2\mu$ (by using the definition of $\lim_{n\rightarrow\infty} |x_n| =4$) for $n >M$, we have that \[ | \sqrt{|x_n|} -2| \le \frac{|\: |x_n| -4|}{2} < \frac{2\mu}{2} =\mu \] for $n >M$, and so we are done. \item we need to show that $\lim_{n\rightarrow\infty} x_n^2 =16$, which is phrased mathematically as needing to show that for each $\mu >0$, there exists $P$ so that $| x_n^2 -16 | < \mu$ for $n >P$. We start by rewriting $| x_n^2 - 16|$, using that it is the difference of two squares: \[ | x_n^2 - 16| =| (x_n -4)(x_n +4)| =| x_n -4|\: |x_n +4|. \] Now apply the definition of $\lim_{n\rightarrow\infty} x_n =-4$ with $\varepsilon =1$, so that there exists $M$ so that if $n >M$, then $|x_n - (-4)| < 1$. 
In particular, if $n >M$, then $-5 < x_n < -3$, and so $|x_n| < 5$, and so $|x_n -4| \le |x_n| + 4 < 9$. \medskip \noindent Since $x_n\rightarrow -4$ by assumption, we know that for any $\varepsilon >0$, there is $Q$ so that $|x_n - (-4)| = |x_n +4| <\frac{1}{9} \varepsilon$ for $n>Q$. Hence, if $n > P = {\rm max}(M,Q)$, then \[ | x_n^2 -16 | =| x_n -4|\: |x_n +4| < 9 \: \frac{1}{9} \varepsilon =\varepsilon, \] as desired. \item we need to show that $\lim_{n\rightarrow\infty} \frac{x_n}{3} =-\frac{4}{3}$, which is phrased mathematically as needing to show that for each $\mu >0$, there exists $P$ so that $| \frac{x_n}{3}- (-\frac{4}{3})| =| \frac{x_n}{3} + \frac{4}{3} | <\mu$ for $n >P$. Note that $\left| \frac{x_n}{3}- \left( -\frac{4}{3} \right) \right| =\left| \frac{x_n}{3} +\frac{4}{3} \right| =\frac{1}{3}|x_n +4|$. We know from the definition of $\lim_{n\rightarrow\infty} x_n =-4$ given above that for any $\mu >0$, there exists $M$ so that $|x_n - (-4)| =|x_n +4| <3 \mu$ for $n >M$. Hence, for $n >M$, we have that $\frac{1}{3} |x_n +4| <\frac{1}{3} 3 \mu =\mu$ for $n >M$, and so we are done. \end{enumerate} \medskip \noindent {\bf Solution \ref{sequences-functions}:} \begin{enumerate} \item since $a >0$, we can apply the definition of $\lim_{n\rightarrow\infty} a_n =a$ with $\varepsilon =\frac{1}{2} a$ to see that there exists $P$ so that $a_n >0$ for $n >P$ (since the interval of radius $\frac{1}{2} a$ centered at $a$ contains only positive numbers), and so for $n >P$, $\sqrt{a_n}$ makes sense. \medskip \noindent We need to get our hands on $| \sqrt{a_n} -\sqrt{a}|$, which we do with our usual trick for handling differences of square roots: \[ | \sqrt{a_n} -\sqrt{a}| =| \sqrt{a_n} -\sqrt{a}| \frac{| \sqrt{a_n} +\sqrt{a}|}{| \sqrt{a_n} +\sqrt{a}|} =\frac{|a_n -a|}{\sqrt{a_n} +\sqrt{a}}. \] (Here we're using that both $\sqrt{a_n} >0$ and $\sqrt{a} >0$ to say that $| \sqrt{a_n} +\sqrt{a}| =\sqrt{a_n} +\sqrt{a}$.) 
Since $\sqrt{a_n} +\sqrt{a} > \sqrt{a}$ for $n >P$, we have that \[ | \sqrt{a_n} -\sqrt{a}| = \frac{|a_n -a|}{\sqrt{a_n} +\sqrt{a}} < \frac{|a_n -a|}{\sqrt{a}} \] for $n >P$. Since $\{ a_n\}$ converges to $a$, for every $\varepsilon >0$, we can choose $M >P$ so that $|a_n -a| <\varepsilon \sqrt{a}$ for $n >M$. For this choice of $M$, we have that \[ | \sqrt{a_n} -\sqrt{a}| = \frac{|a_n -a|}{\sqrt{a_n} +\sqrt{a}} < \frac{|a_n -a|}{\sqrt{a}} <\frac{\varepsilon \sqrt{a}}{\sqrt{a}} =\varepsilon, \] and so $\{\sqrt{a_n}\}$ converges to $\sqrt{a}$. \item this one, we break into three cases. If $a >0$, then (applying the definition of $\lim_{n\rightarrow\infty} a_n =a$ with $\varepsilon =a$) there exists $M_0$ so that $a_n >0$ for $n >M_0$. In this case, we have $|a_n| =a_n$ for $n >M_0$ and $|a| =a$, and so $|| a_n| -|a|| =|a_n -a|$. Since there is $M_1$ so that $|a_n -a| <\varepsilon$ for $n >M_1$, we have that $|| a_n| -|a|| <\varepsilon$ for $n > M ={\rm max}(M_0, M_1)$, and so $\lim_{n\rightarrow\infty} |a_n| =|a|$. \medskip \noindent If $a <0$, then (applying the definition of $\lim_{n\rightarrow\infty} a_n =a$ with $\varepsilon = |a|$) there exists $M_0$ so that $a_n <0$ for $n >M_0$. In this case, we have $|a_n| = -a_n$ for $n >M_0$ and $|a| = -a$, and so $|| a_n| -|a|| =|-a_n +a| =|a_n -a|$. Since there is $M_1$ so that $|a_n -a| <\varepsilon$ for $n >M_1$, we have that $|| a_n| -|a|| <\varepsilon$ for $n > M ={\rm max}(M_0, M_1)$, and so $\lim_{n\rightarrow\infty} |a_n| =|a|$. \medskip \noindent If $a =0$, then the definition of $\lim_{n\rightarrow\infty} a_n =a$ becomes: for every $\varepsilon >0$, there exists $M$ so that $|a_n -0| =|a_n| <\varepsilon$ for $n >M$. Since $|\: |a_n|\: | =|a_n|$, we have that the definition of $\lim_{n\rightarrow\infty} |a_n| =0$ is satisfied without any further work. \item since $\lim_{n\rightarrow\infty} a_n =\infty$, for each $\varepsilon >0$, there exists $M$ so that $a_n >\varepsilon$ for $n >M$. 
Inverting both sides, we see that $\frac{1}{a_n} < \frac{1}{\varepsilon}$ for $n >M$. So, given $\mu >0$, choose $\varepsilon >0$ so that $\frac{1}{\varepsilon} <\mu$, which can be done by taking $\varepsilon$ large enough. Then, there exists $M$ so that $\left| \frac{1}{a_n} -0\right| =\frac{1}{a_n} <\frac{1}{\varepsilon} <\mu$ for $n >M$, as desired. \item if $a\ne 0$, consider the definition of $\lim_{n\rightarrow\infty} a_n =a$ with $\varepsilon =\frac{1}{2} |a|$: there exists $M$ so that $|a_n -a| < \frac{1}{2} |a|$ for $n >M$. That is, $a_n$ lies in the interval centered at $a$ with radius $\frac{1}{2} |a|$, and so $|a_n| > \frac{1}{2} |a|$. \medskip \noindent Now consider the sequence $\{ (-1)^n a_n \}$. For $n >M$ and $n$ even, $(-1)^n a_n =a_n$ lies in the interval centered at $a$ with radius $\frac{1}{2} |a|$. For $n >M$ and $n$ odd, $(-1)^n a_n =-a_n$ lies in the interval centered at $-a$ with radius $\frac{1}{2} |a|$. In particular, we have, regardless of whether $n$ is odd or even, that $|(-1)^n a_n - (-1)^{n+1} a_{n+1}| > |a|$ for $n >M$, since $(-1)^n a_n$ and $(-1)^{n+1} a_{n+1}$ lie on opposite sides of $0$ and are both distance at least $\frac{1}{2} |a|$ from the origin. Hence, $\{ (-1)^n a_n\}$ violates the Cauchy criterion (see Theorem \ref{sequence-test-thm} below), and so diverges. \item if $a =0$, the definition of $\lim_{n\rightarrow\infty} a_n =0$ becomes: for every $\varepsilon >0$, there exists $M$ so that $|a_n -0| =|a_n| <\varepsilon$ for $n >M$. However, note that $|(-1)^n a_n -0| =|a_n|$ as well, and so the definition of $\lim_{n\rightarrow\infty} (-1)^n a_n =0$ is satisfied without any further work. \end{enumerate} \medskip \noindent {\bf Solution \ref{more-sequence-proofs}:} Since $\lim_{n\rightarrow\infty} x_n =x$, we have that for each $\varepsilon >0$, there exists $M$ so that $|x_n -x| <\frac{1}{3} \varepsilon$ for $n >M$. 
For any $m >0$ and $n >M$, we now have that \begin{eqnarray*} |x_{n+1}+\cdots +x_{n+m} -mx| & = & |x_{n+1} -x +\cdots +x_{n+m} -x| \\ & \le & |x_{n+1} -x| +\cdots +|x_{n+m} -x| \\ & < & m\frac{1}{3}\varepsilon. \end{eqnarray*} Dividing by $n+m$, we obtain that \[ \left| \frac{1}{n+m}(x_{n+1}+\cdots +x_{n+m}) -\frac{m}{n+m} x \right| \le \frac{m}{n+m} \frac{1}{3}\varepsilon <\frac{1}{3}\varepsilon \] (since $\frac{m}{n+m} <1$). Viewing $n$ as fixed for the moment, choose $m$ so that both $|\frac{m}{n+m}x -x| <\frac{1}{3}\varepsilon$ (which we can do since $\lim_{m\rightarrow\infty} \frac{m}{n+m} =1$ for $n$ fixed) and $\frac{1}{n+m} \left| x_1 +x_2+\cdots +x_n \right| <\frac{1}{3}\varepsilon$ (which we can do since $x_1 +x_2+\cdots +x_n$ is a constant when $n$ is fixed); note that both conditions then continue to hold for all larger $m$ as well. Then, \begin{eqnarray*} \lefteqn{ \left| \frac{1}{n+m}(x_1+\cdots +x_{n+m}) -x \right|} \\ & = & \left| \frac{1}{n+m}(x_1+\cdots +x_n) + \frac{1}{n+m}( x_{n+1} +\cdots +x_{n+m}) -\frac{m}{n+m} x +\frac{m}{n+m} x -x \right| \\ & \le & \left| \frac{1}{n+m}(x_1+\cdots +x_n) \right| + \left| \frac{1}{n+m}( x_{n+1} +\cdots +x_{n+m}) -\frac{m}{n+m} x \right| + \left| \frac{m}{n+m} x -x \right| \\ & < & \frac{1}{3}\varepsilon +\frac{1}{3}\varepsilon + \frac{1}{3}\varepsilon =\varepsilon \end{eqnarray*} for all sufficiently large $m$. Fixing $n =M+1$ and writing $p =n+m$, we conclude that $\left| \frac{1}{p}(x_1+\cdots +x_{p}) -x \right| <\varepsilon$ for all sufficiently large $p$, as desired. \medskip \noindent {\bf Solution \ref{sequence-examples}:} \begin{enumerate} \item $\{ a_n =(-1)^n\}$, bounded above by $1$ and bounded below by $-1$, hence bounded. This sequence fails the Cauchy criterion, since $|a_n -a_{n+1}| =2$ for all $n$, and so diverges. \item $\{ \sin(n)\}$, bounded above by $1$ and bounded below by $-1$, hence bounded. Though it seems fairly clear why this sequence diverges, the actual proof is a bit subtle, and we do not give it here. If you are intrigued, ask me after class, or come to my office hours.
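Though the divergence proof for $\{ \sin(n)\}$ is omitted, a quick numerical look is consistent with it; a sketch (the window of indices is an arbitrary choice of mine, not part of the solution):

```python
import math

# Sample sin(n) over a window of integers far out in the sequence: the
# values still spread across essentially all of [-1, 1], rather than
# settling toward any single limit.
window = [math.sin(n) for n in range(10 ** 6, 10 ** 6 + 1000)]
print(min(window), max(window))  # close to -1 and +1
```

Of course, a computation over a finite window proves nothing; it merely illustrates the behavior.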
\item $\{ 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, \ldots\}$, bounded above by $1$ and bounded below by $0$, hence bounded. Arbitrarily far out in the sequence, there are consecutive terms taking the values $0$ and $1$, so the sequence fails the Cauchy criterion and hence diverges. \item $\{ a_n = \mbox{ the ${\rm n}^{\rm th}$ digit of $\pi$} \}$, bounded above by $9$ and bounded below by $0$, hence bounded. Does not converge, because the only way for a sequence of integers to converge is for it to be {\bf eventually constant}, that is, constant past some index, which in this case would then imply that $\pi$ is a repeating decimal, hence a rational number, which it isn't. (In fact, fixing an irrational number $x$ and taking $a_n$ to be the $n^{th}$ digit of the decimal expansion of $x$ gives a sequence that is bounded but not convergent, by the same argument.) \item $\{ a_n = \mbox{ the ${\rm n}^{\rm th}$ digit of the rational number $\frac{1}{7} = .\overline{142857}$} \}$, bounded above by $8$ and bounded below by $1$, hence bounded, and divergent by the same argument as above (which works for rational numbers, as long as the length of the repeating section in the decimal expansion is longer than one digit). \end{enumerate} \medskip \noindent {\bf Solution \ref{function-sup-inf}:} Let $c =\sup(f)$, so that $c =\sup\{ f(a)\: |\: a\in A\}$. In particular, $c\ge f(a)$ for all $a\in A$, and if $u$ is any number satisfying $u\ge f(a)$ for all $a\in A$, then $u \ge c$. Multiplying by $-1$, we see that $-c\le -f(a)$ for all $a\in A$ and that if $s$ is any number satisfying $s\le -f(a)$ for all $a\in A$, then $s\le -c$. However, this is exactly the definition of $-c =\inf(-f)$, as desired. \medskip \noindent {\bf Solution \ref{limit-def-cont}:} (Note that we are not asked to determine whether the statement is correct or not, and if it is correct we are not asked to prove it. This is an exercise in writing down the definition of $\lim_{x\rightarrow a} f(x) =L$ for specific values of $a$ and $L$ and a specific function $f(x)$.)
\begin{enumerate} \item For every $\varepsilon >0$, there exists $\delta >0$ so that if $0 <|x -1| <\delta$, then $|(2x)^4 -16| <\varepsilon$. \item For every $\varepsilon >0$, there exists $\delta >0$ so that if $0 <|x - (-3)| = | x+3| <\delta$, then $|(3x^2 +e^x) - (81 + e^{-3})| <\varepsilon$. \end{enumerate} \medskip \noindent {\bf Solution \ref{limit-exercises}:} \begin{enumerate} \item use the squeeze law. We have that $-1\le \sin(\frac{1}{x} )\le 1$ for all $x\ne 0$, and that $\lim_{x\rightarrow 0} \sin(x) =0$. So, we can bound $f(x)$ below by $-|\sin(x)|$ and above by $|\sin(x)|$. Since $\lim_{x\rightarrow 0} -|\sin(x)| = \lim_{x\rightarrow 0} |\sin(x)| = 0$, we have that $\lim_{x\rightarrow 0} \sin(x)\sin( \frac{1}{x} ) =0$. [Note that the fact that $f(x)$ is not defined at $0$ does not matter, since evaluating $\lim_{x\rightarrow 0} f(x)$ depends only on what's happening with $f(x)$ near $0$, and not at all on what's happening at $0$.] \item since $\lim_{x\rightarrow 0} \cos(x) = 1$, and since $f(x)=\cos(x)$ except at $0$, we have that $\lim_{x\rightarrow 0} f(x) = \lim_{x\rightarrow 0} \cos(x) =1$. [This is another reflection of the fact that $\lim_{x\rightarrow 0} f(x)$ does not care about the value of $f(x)$ at $0$, but only on the values of $f(x)$ near $0$.] \item note that $f(x) = 0$ for $-\frac{1}{3} < x < \frac{1}{3}$, and so $\lim_{x\rightarrow 0} f(x) =0$. \item for $x > 0$, we have that $|x| =x$, and so $\lim_{x\rightarrow 0+} \frac{\sin(x)}{|x|} =\lim_{x\rightarrow 0+} \frac{\sin(x)}{x} = 1$. However, for $x <0$, we have that $|x| =-x$, and so $\lim_{x\rightarrow 0-} \frac{\sin(x)}{|x|} = -\lim_{x\rightarrow 0-} \frac{\sin(x)}{x} = -1$. Since $\lim_{x\rightarrow 0+} \frac{\sin(x)}{|x|} \ne \lim_{x\rightarrow 0-} \frac{\sin(x)}{|x|}$, we see that $\lim_{x\rightarrow 0} \frac{\sin(x)}{|x|}$ does not exist.
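The mismatch of the two one-sided limits in the last item can also be observed numerically; a small sketch (the sample points are an arbitrary choice, not part of the solution):

```python
import math

def f(x):
    # f(x) = sin(x) / |x|, defined for x != 0
    return math.sin(x) / abs(x)

# approach 0 from the right and from the left along x = +/- 10^(-k)
right = [f(10.0 ** -k) for k in range(1, 8)]
left = [f(-10.0 ** -k) for k in range(1, 8)]
print(right[-1], left[-1])  # near +1 and -1 respectively
```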
\end{enumerate} \medskip \noindent {\bf Solution \ref{interated-functions}:} \medskip \noindent {\bf Solution \ref{three-series}:} \begin{enumerate} \item Before hitting the ground the first time, the ball travels distance $a$. Between hitting the ground the first and second times, the ball travels distance $2ra$ (distance $ra$ up from the ground, and then distance $ra$ back down to earth again). Between hitting the ground the second and third times, the ball travels distance $2r^2a$ (distance $r^2a$ up from the ground, and then distance $r^2a$ back down to earth again). Between hitting the ground the $n^{th}$ and the $(n+1)^{st}$ times, the ball travels distance $2r^na$ (distance $r^na$ up from the ground, and then distance $r^na$ back down to earth again). Hence, the total distance travelled is \[ a + 2ra + 2 r^2 a + \ldots =a +\sum_{n=1}^\infty 2r^n a = a + 2ra \sum_{n=1}^\infty r^{n-1} =a + 2ra\sum_{k=0}^\infty r^k = a + \frac{2ra}{1-r} = \frac{a + ra}{1-r}. \] \item One way to do this problem is to write out the appropriate geometric series and sum it. The easier way is to note that the cars will crash exactly one hour after the fly leaves the front of Jack's car, and in that hour (given the assumption that the fly loses no time in changing direction) the fly flies exactly 257 miles. \end{enumerate} \medskip \noindent {\bf Solution \ref{zeta-exercise}:} Note that for $s <1$ and $n\ge 2$, we have that $n^s < n$, and so $\frac{1}{n^s} > \frac{1}{n}$. Hence, if we let $S_k$ be the $k^{th}$ partial sum of the harmonic series $\sum_{n=1}^\infty \frac{1}{n}$, and $T_k$ be the $k^{th}$ partial sum of the series $\sum_{n=1}^\infty \frac{1}{n^s}$ under consideration, then $T_k \ge S_k$. Since $\frac{1}{n^s} >0$ for all $n$, the sequence $\{ T_k\}$ is monotonically increasing, and it is unbounded since $\{ S_k\}$ is unbounded by the argument given in Example \ref{zeta-series}; hence $\{ T_k\}$ diverges. So, by definition, $\sum_{n=1}^\infty \frac{1}{n^s}$ diverges.
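As a numerical illustration of the comparison $T_k \ge S_k$ and of the bound $S_{2^k} > 1 + k\frac{1}{2}$ from Example \ref{zeta-series} (a sketch; the cutoff $k = 2^{16}$ is an arbitrary choice of mine):

```python
def partial_sum(s, k):
    # k-th partial sum of sum_{n=1}^{k} 1/n^s
    return sum(1.0 / n ** s for n in range(1, k + 1))

k = 2 ** 16
S = partial_sum(1.0, k)   # harmonic partial sum S_k
T = partial_sum(0.5, k)   # partial sum T_k for s = 1/2
print(S, T)  # T_k dominates S_k, and S_{2^16} exceeds 1 + 16/2 = 9
```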
\medskip \noindent {\bf Solution \ref{some-series-things}:} \begin{enumerate} \item we argue by contradiction: suppose that $\sum_{n=0}^\infty (a_n +b_n)$ converges. Since $\sum_{n=0}^\infty a_n$ converges by assumption, the arithmetic of series, Theorem \ref{series-arithmetic}, yields that their difference also converges. However, their difference is $\sum_{n=0}^\infty (a_n +b_n -a_n) =\sum_{n=0}^\infty b_n$, which diverges by assumption, yielding the desired contradiction. \item again we argue by contradiction: suppose that the series of multiples $\sum_{n=0}^\infty c a_n$ converges. Then, the sequence $\{ T_k =\sum_{n=0}^k c a_n\}$ of partial sums converges. Note though that $T_k =\sum_{n=0}^k c a_n =c \sum_{n=0}^k a_n =c S_k$, where $S_k$ is the $k^{th}$ partial sum of the series $\sum_{n=0}^\infty a_n$. Since $\{ T_k\}$ converges, the sequence $\{ \frac{1}{c} T_k =S_k\}$ also converges, by the arithmetic of sequences (since the constant sequence $\{ \frac{1}{c}\}$ converges), and so the original series converges, a contradiction. \end{enumerate} \medskip \noindent {\bf Solution \ref{product-quotient-series-examples}:} \begin{enumerate} \item by what has just been done, all we need are two convergent series. For instance, take $a_n = (0.5)^n$ and $b_n =(0.3)^n$ for all $n\ge 0$. Then, $\sum_{n=0}^\infty a_n$, $\sum_{n=0}^\infty b_n$, and $\sum_{n=0}^\infty a_n\: b_n$ are all convergent geometric series. \item take $a_n =1$ for all $n\ge 0$ and $b_n = 1$ for all $n\ge 0$. Then, $\sum_{n=0}^\infty a_n$ and $\sum_{n=0}^\infty b_n$ are both divergent geometric series, as is $\sum_{n=0}^\infty a_n\: b_n$ (since $a_n\: b_n =1$ for all $n\ge 0$). \item for this one, let's take $a_n =b_n =\frac{1}{n}$ for all $n\ge 1$. Then, both $\sum_{n=1}^\infty a_n$ and $\sum_{n=1}^\infty b_n$ are the harmonic series, and hence divergent.
However, the series of products $\sum_{n=1}^\infty a_n\: b_n =\sum_{n=1}^\infty \frac{1}{n^2}$ is convergent, by the discussion in Example \ref{zeta-series}. \item take any convergent series, for example $\sum_{n=0}^\infty (0.5)^n$, and set $a_n =b_n =(0.5)^n$. Then, the series of quotients is $\sum_{n=0}^\infty \frac{a_n}{b_n} =\sum_{n=0}^\infty 1$, which diverges. \item here, we can take $a_n =\frac{1}{n^2}$ and $b_n =\frac{1}{n^4}$ for $n\ge 1$. Then, both $\sum_{n=1}^\infty a_n =\sum_{n=1}^\infty \frac{1}{n^2}$ and $\sum_{n=1}^\infty b_n =\sum_{n=1}^\infty \frac{1}{n^4}$ converge by Example \ref{zeta-series}, as does the series of quotients, as $\frac{a_n}{b_n} =\frac{1}{n^2}$. \item let's use geometric series again: both of $\sum_{n=0}^\infty a_n =\sum_{n=0}^\infty 6^n$ and $\sum_{n=0}^\infty b_n =\sum_{n=0}^\infty 2^n$ are divergent geometric series, and the series of quotients $\sum_{n=0}^\infty \frac{a_n}{b_n} =\sum_{n=0}^\infty 3^n$ is also a divergent geometric series. \item $\sum_{n=1}^\infty a_n =\sum_{n=1}^\infty 1$ and $\sum_{n=1}^\infty b_n =\sum_{n=1}^\infty n^2$ both diverge, but the corresponding series of quotients $\sum_{n=1}^\infty \frac{a_n}{b_n} =\sum_{n=1}^\infty \frac{1}{n^2}$ converges. \end{enumerate} \medskip \noindent {\bf Solution \ref{mucking-series}:} Let $S_k =\sum_{n=1}^k a_n$ be the $k^{th}$ partial sum of $\sum_{n=1}^\infty a_n$. \begin{enumerate} \item Since $\lim_{n\rightarrow\infty} c_n =0$ and $c_n >0$ for all $n$, there exists $M >0$ so that $0 < c_n <1$ for $n >M$. Let $C ={\rm max}(1, c_1, c_2, \ldots, c_M)$, and note that $c_n\le C$ for all $n\ge 1$. In particular, the $k^{th}$ partial sum of the series $\sum_{n=1}^\infty a_n c_n$ satisfies \[ \sum_{n=1}^k a_n c_n \le \sum_{n=1}^k a_n C = C\: S_k. \] Hence, the sequence of partial sums of the series $\sum_{n=1}^\infty a_n c_n$ forms a monotonic (since all the $a_n$ and $c_n$ are positive), bounded (above by $C\sum_{n=1}^\infty a_n$, below by $0$) sequence, and so converges. That is, $\sum_{n=1}^\infty a_n c_n$ converges. \item The proof in the case that $\lim_{n\rightarrow\infty} c_n =c\ne 0$ is very similar to the proof in the case that $\lim_{n\rightarrow\infty} c_n =0$. Since $\lim_{n\rightarrow\infty} c_n =c\ne 0$, there exists $M >0$ so that $c_n < c+1$ for $n >M$. Let $C ={\rm max}(c+1, c_1, c_2, \ldots, c_M)$, and note that $c_n\le C$ for all $n\ge 1$. In particular, the $k^{th}$ partial sum of the series $\sum_{n=1}^\infty a_n c_n$ satisfies \[ \sum_{n=1}^k a_n c_n \le \sum_{n=1}^k a_n C = C\: S_k. \] Hence, the sequence of partial sums of the series $\sum_{n=1}^\infty a_n c_n$ forms a monotonic (since all the $a_n$ and $c_n$ are positive), bounded (above by $C\sum_{n=1}^\infty a_n$, below by $0$) sequence, and so converges. That is, $\sum_{n=1}^\infty a_n c_n$ converges. \end{enumerate} \medskip \noindent {\bf Solution \ref{series-scavenger}:} we make implicit use of the fact that convergence and absolute convergence are the same for series with positive terms. \begin{enumerate} \item {\bf converges absolutely:} we could apply the ratio test, but we do not need to use such heavy machinery. Instead, we note that \[ \sum_{n=0}^\infty \frac{2^{n-1}}{3^n} = \frac{1}{2} \sum_{n=0}^\infty \frac{2^n}{3^n} =\frac{1}{2} \sum_{n=0}^\infty \left( \frac{2}{3}\right)^n =\frac{1}{2} \frac{1}{(1-2/3)} =\frac{3}{2}, \] since $\sum_{n=0}^\infty \frac{2^n}{3^n}$ is a convergent geometric series. \item {\bf diverges:} this is a geometric series, and since $1.01 >1$, it is a divergent geometric series.
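The value $\frac{3}{2}$ computed for the first of these two geometric series can be double-checked with a partial sum; a quick numerical sketch (60 terms is an arbitrary cutoff, not part of the solution):

```python
# Partial sum of sum_{n=0}^infty 2^(n-1)/3^n, computed above to be 3/2.
partial = sum(2 ** (n - 1) / 3 ** n for n in range(0, 60))
print(partial)  # very close to 1.5
```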
\item {\bf converges absolutely:} this is a convergent geometric series, since $\frac{e}{10} <1$, and it converges to \[ \sum_{n=1}^\infty \left( \frac{e}{10}\right)^n =\sum_{n=0}^\infty \left( \frac{e}{10}\right)^n -1 =\frac{1}{1 -e/10} -1 = \frac{10}{10 - e} - \frac{10 -e}{10 -e} =\frac{e}{10 -e}. \] \item {\bf converges absolutely:} we use the second comparison test: since $n^2+n+1 >n^2$ for all $n\ge 1$, we have that $\frac{1}{n^2+n+1} < \frac{1}{n^2}$ for all $n\ge 1$. Since $\sum_{n=1}^\infty \frac{1}{n^2}$ converges, we have that $\sum_{n=1}^\infty \frac{1}{n^2+n+1}$ converges. \item {\bf diverges:} note that for $n\ge 1$, we have that $n \ge \sqrt{n}$, and so $n +\sqrt{n}\le 2n$. Therefore, $\frac{1}{n+\sqrt{n}} \ge \frac{1}{2n}$ for $n\ge 1$. Since the harmonic series $\sum_{n=1}^\infty \frac{1}{n}$ diverges, its multiple $\sum_{n=1}^\infty \frac{1}{2n}$ diverges, and hence by the first comparison test the series $\sum_{n=1}^\infty 1 / (n + \sqrt{n})$ diverges. \item {\bf converges absolutely:} since $1 + 3^n > 3^n$ for all $n\ge 1$, we have that $\frac{1}{1+3^n} < \frac{1}{3^n}$ for all $n\ge 1$. Since $\sum_{n=1}^\infty \frac{1}{3^n} =\sum_{n=1}^\infty \left( \frac{1}{3}\right)^n$ converges, the second comparison test yields that $\sum_{n=1}^\infty 1 / (1+3^n)$ converges. \item {\bf diverges:} we'll use the limit comparison test: for large values of $n$, it seems that $\frac{10 n^2}{n^3 - 1}$ behaves like a constant multiple of $\frac{1}{n}$, and in fact \[ \lim_{n\rightarrow\infty} \frac{10 n^2/(n^3 - 1)}{1/n} =\lim_{n\rightarrow\infty} \frac{10 n^3}{n^3 -1} =10 =L. \] Since the limit exists and $0 < L =10 < \infty$, and since $\sum_{n=1}^\infty \frac{1}{n}$ diverges, the limit comparison test yields that $\sum_{n=2}^\infty 10 n^2 / (n^3 - 1)$ diverges.
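As a quick numerical sanity check of the last limit comparison (the sample values of $n$ are an arbitrary choice, not part of the solution):

```python
def ratio(n):
    # ratio of the n-th terms: (10 n^2 / (n^3 - 1)) / (1 / n) = 10 n^3 / (n^3 - 1)
    return (10 * n ** 2 / (n ** 3 - 1)) / (1 / n)

print(ratio(10), ratio(10 ** 6))  # settles toward L = 10
```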
\item {\bf converges absolutely:} again we'll use the limit comparison test: for large values of $n$, it seems that $1 / \sqrt{37n^3 + 3}$ behaves like $1/n^{3/2}$, and in fact \[ \lim_{n\rightarrow\infty} \frac{1 / \sqrt{37n^3 + 3}}{1/n^{3/2}} =\lim_{n\rightarrow\infty} \frac{n^{3/2}}{\sqrt{37n^3 + 3}} =\lim_{n\rightarrow\infty} \frac{1}{\sqrt{37 + 3/n^3}} =\frac{1}{\sqrt{37}} =L. \] Since the limit exists and $0 < L =\frac{1}{\sqrt{37}} < \infty$, and since $\sum_{n=1}^\infty \frac{1}{n^{3/2}}$ converges, the limit comparison test yields that $\sum_{n=1}^\infty 1 / \sqrt{37n^3 + 3}$ converges. \item {\bf converges absolutely:} we start this one with a bit of algebra, namely \[ \frac{\sqrt{n}}{n^2+n} < \frac{\sqrt{n}}{n^2} =\frac{1}{n^{3/2}}. \] From Example \ref{zeta-series}, we know that $\sum_{n=1}^\infty 1/n^{3/2}$ converges, and so by the second comparison test, $\sum_{n=1}^\infty \sqrt{n} / (n^2+n)$ converges. \item {\bf diverges:} since $\ln(n) < n$ for all $n\ge 2$, we have that $\frac{2}{\ln(n)} > \frac{2}{n} > \frac{1}{n}$ for all $n\ge 2$, and so $\sum_{n=2}^\infty 2 / \ln(n)$ diverges by the first comparison test, comparing it to the harmonic series $\sum_{n=1}^\infty \frac{1}{n}$. \item {\bf converges absolutely:} since $0 < \sin^2(n)\le 1$ for all $n\ge 1$, we have that \[ 0 < \frac{\sin^2(n)}{n^2+1} \le \frac{1}{n^2+1} < \frac{1}{n^2} \] for all $n\ge 1$. Since we are dealing with a series with positive terms and since $\sum_{n=1}^\infty \frac{1}{n^2}$ converges by Example \ref{zeta-series}, we have that $\sum_{n=1}^\infty \sin^2(n) / (n^2+1)$ converges by the second comparison test. \item {\bf converges absolutely:} for this series, we start with a bit of algebraic massage: \[ \frac{n+2^n}{n+3^n} < \frac{n+ 2^n}{3^n} < \frac{2^n + 2^n}{3^n} = 2\left( \frac{2}{3}\right)^n. \] So, the second comparison test, comparing with the convergent geometric series $2\sum_{n=0}^\infty \left( \frac{2}{3}\right)^n$ yields that $\sum_{n=1}^\infty (n+2^n) / (n+3^n)$ converges.
\item {\bf converges absolutely:} since $\ln(n)\ge 1$ for $n\ge 3$, we have that $1/( n^2 \ln(n)) < 1/n^2$ for $n\ge 3$, and so we have by the second comparison test that $\sum_{n=2}^\infty 1/( n^2 \ln(n))$ converges. \item {\bf diverges:} for large values of $n$, it seems that the $n^{th}$ term in the series is approximately $\frac{1}{n}$, and so we might guess that the series diverges by the limit comparison test. To check this guess, we need to evaluate \[ \lim_{n\rightarrow\infty} \frac{ (n^3+1) / (n^4+2)}{1/n} =\lim_{n\rightarrow\infty} \frac{ n^4+n}{n^4+2} = 1 = L. \] Since the limit exists and since $0 < L = 1< \infty$, and since the harmonic series $\sum_{n=1}^\infty \frac{1}{n}$ diverges, we have that $\sum_{n=1}^\infty (n^3+1) / (n^4+2)$ diverges by the limit comparison test. \item {\bf converges absolutely:} since $\frac{1}{n + n^{3/2}} < \frac{1}{n^{3/2}}$ for all $n\ge 1$ and since $\sum_{n=1}^\infty \frac{1}{n^{3/2}}$ converges by Example \ref{zeta-series}, we have that $\sum_{n=1}^\infty 1 / (n + n^{3/2})$ converges by the second comparison test. \item {\bf converges absolutely:} for large values of $n$, it seems that the $n^{th}$ term in this series is approximately equal to $\frac{10}{n^2}$, and so we might guess that this series converges by use of the limit comparison test. To verify this guess, we calculate \[ \lim_{n\rightarrow\infty} \frac{10 n^2 / (n^4+1)}{10/n^2} =\lim_{n\rightarrow\infty} \frac{n^4}{n^4+1} =1 =L. \] Since the limit exists and since $0 < L=1 <\infty$, and since $\sum_{n=1}^\infty \frac{10}{n^2}$ converges by Example \ref{zeta-series}, we have that $\sum_{n=1}^\infty 10 n^2 / (n^4+1)$ converges by the limit comparison test. \item {\bf converges absolutely:} for large values of $n$, it seems again that the $n^{th}$ term in this series is approximately equal to $\frac{1}{n^2}$, and so we might guess that this series converges by use of the limit comparison test.
To verify this guess, we calculate \[ \lim_{n\rightarrow\infty} \frac{(n^2 -n) / (n^4 +2)}{1/n^2} =\lim_{n\rightarrow\infty} \frac{n^4 -n^3}{n^4+2} =1 =L. \] Since the limit exists and since $0 < L=1 <\infty$, and since $\sum_{n=1}^\infty \frac{1}{n^2}$ converges by Example \ref{zeta-series}, we have that $\sum_{n=2}^\infty (n^2 -n) / (n^4 +2)$ converges by the limit comparison test. \item {\bf diverges:} for large values of $n$, it seems that the $n^{th}$ term of this series is approximately equal to $\frac{1}{n}$, and so we might guess that this series then diverges by the limit comparison test. To verify this guess, we calculate \[ \lim_{n\rightarrow\infty} \frac{1 / \sqrt{n^2+1}}{1/n} =\lim_{n\rightarrow\infty} \frac{n}{\sqrt{n^2 +1}} = \lim_{n\rightarrow\infty} \frac{n}{n\sqrt{ 1 + 1/n^2}} = 1 = L. \] Since the limit exists and since $0 < L=1 <\infty$, and since $\sum_{n=1}^\infty \frac{1}{n}$ diverges by Example \ref{zeta-series}, we have that $\sum_{n=2}^\infty 1/\sqrt{n^2 +1}$ diverges by the limit comparison test. \item {\bf converges absolutely:} since \[ \frac{1}{3+5^n} < \frac{1}{5^n} =\left( \frac{1}{5}\right)^n, \] and since $\sum_{n=0}^\infty \left( \frac{1}{5}\right)^n$ converges, the second comparison test yields that $\sum_{n=1}^\infty 1 / (3+5^n)$ converges. \item {\bf diverges:} first note that since $\ln(n) < n$ for $n\ge 2$, we have that $0 < n -\ln(n) < n$, and so $\frac{1}{n -\ln(n)} > \frac{1}{n}$. Hence, since $\sum_{n=1}^\infty \frac{1}{n}$ diverges, we have that $\sum_{n=2}^\infty 1 / (n-\ln(n))$ diverges, by the first comparison test. \item {\bf converges absolutely:} since $0 <\cos^2(n) \le 1$ for all $n\ge 1$, we have that $\cos^2(n) / 3^n < 1/3^n$. Since $\sum_{n=0}^\infty \frac{1}{3^n}$ is a convergent geometric series, we have by the second comparison test that $\sum_{n=1}^\infty \cos^2(n) / 3^n$ converges. \item {\bf converges absolutely:} since $1 / (2^n+3^n) < 1/2^n$ and since $\sum_{n=0}^\infty \frac{1}{2^n}$ converges, the second comparison test yields that $\sum_{n=1}^\infty 1 / (2^n+3^n)$ converges.
\item {\bf converges absolutely:} since $1 + \sqrt{n}\ge 2$ for $n\ge 1$, we have that $n^{1+\sqrt{n}} \ge n^2$ for $n\ge 1$, and so $1 / n^{(1+\sqrt{n})}\le 1/n^2$ for $n\ge 1$. Hence, since $\sum_{n=1}^\infty \frac{1}{n^2}$ converges by Example \ref{zeta-series}, we have by the second comparison test that $\sum_{n=1}^\infty 1 / n^{(1+\sqrt{n})}$ converges. \item {\bf converges absolutely:} since $2^n (n+1) > 2^n$ for $n\ge 1$, we have that $1 / (2^n (n+1)) <1/2^n$ for $n\ge 1$. Since $\sum_{n=1}^\infty \frac{1}{2^n}$ is a convergent geometric series, we have by the second comparison test that $\sum_{n=1}^\infty 1 / (2^n (n+1))$ converges. \item {\bf diverges:} since factorials are involved, we first see whether the ratio test gives us any information, and so we evaluate \[ \lim_{n\rightarrow\infty} \frac{(n+1)! / ((n+1)^2 e^{n+1})}{n!/(n^2 e^n)} =\lim_{n\rightarrow\infty} \frac{(n+1)!n^2 e^n}{n! (n+1)^2 e^{n+1}} = \lim_{n\rightarrow\infty} \frac{n^2}{(n+1)^2} \frac{n+1}{e} =\infty, \] and since $\infty >1$, the ratio test implies that $\sum_{n=1}^\infty n! / (n^2 e^n)$ diverges. \medskip \noindent [Though it's not obvious how, we could also have applied the $n^{th}$ term test for divergence, since for large values of $n$ we have \[ \frac{n!}{n^2 e^n} = \frac{(n-1)(n-2)!}{n e^n} =\frac{n-1}{n}\cdot\frac{n-2}{e}\cdots \frac{2}{e}\cdot \frac{1}{e}\cdot \frac{1}{e^2} > \frac{n-1}{n}\cdot \frac{2}{e}\cdot \frac{1}{e^3} > \frac{1}{e^4}. \] We simplified by noting that the middle terms $\frac{n-2}{e},\ldots, \frac{3}{e}$ are all greater than $1$ and that $\frac{n-1}{n} >\frac{1}{2}$ for $n\ge 3$. Hence, $\lim_{n\rightarrow\infty} \frac{n!}{n^2 e^n}\ne 0$.]
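Both behaviors noted here, the ratios growing without bound and the terms staying away from $0$, are easy to observe numerically; a quick sketch (the sample indices are an arbitrary choice, not part of the solution):

```python
import math

def term(n):
    # a_n = n! / (n^2 * e^n)
    return math.factorial(n) / (n ** 2 * math.exp(n))

def ratio(n):
    # a_{n+1} / a_n = (n^2 / (n+1)^2) * ((n+1) / e)
    return term(n + 1) / term(n)

print(ratio(10), ratio(100))  # the ratios grow roughly like (n + 1)/e
print(term(50))  # the terms themselves blow up, so they cannot tend to 0
```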
\item {\bf converges absolutely:} there is not an obvious comparison to make, and so we try the ratio test: \[ \lim_{n\rightarrow\infty} \frac{\sqrt{n+1}/ (3^{n+1} \ln(n+1))}{\sqrt{n} / (3^n \ln(n))} =\lim_{n\rightarrow\infty} \frac{1}{3} \frac{\ln(n)}{\ln(n+1)} \sqrt{\frac{n+1}{n}} =\frac{1}{3}, \] since $\lim_{n\rightarrow\infty} \frac{\ln(n)}{\ln(n+1)} =1$, for instance using l'H\^{o}pital's rule. Since $\frac{1}{3} <1$, the ratio test yields that $\sum_{n=1}^\infty \sqrt{n} / (3^n \ln(n))$ converges. \item {\bf converges absolutely:} since there are factorials involved, we first try the ratio test: \[ \lim_{n\rightarrow\infty} \frac{(2(n+1))!/((n+1)!)^3}{(2n)! / (n!)^3} =\lim_{n\rightarrow\infty} \frac{(2n+2)(2n+1)}{(n+1)^3} =0 < 1, \] and so the ratio test yields that $\sum_{n=2}^\infty (2n)! / (n!)^3$ converges. \item {\bf converges absolutely:} note that the numerator of each term is either $0$ or $2$, and so this is a series with non-negative terms. Also, $(1 - (-1)^n) / n^4 \le 2/n^4$ for all $n\ge 1$ and $\sum_{n=1}^\infty \frac{1}{n^4}$ converges by Example \ref{zeta-series}, and so by the second comparison test $\sum_{n=1}^\infty (1 - (-1)^n) / n^4$ converges. \item {\bf diverges:} we start with a bit of algebraic simplification: \[ \frac{2+\cos(n)}{n + \ln(n)} \ge \frac{1}{n + \ln(n)} > \frac{1}{2n}. \] (The first inequality holds since $2+\cos(n) \ge 2 + (-1) = 1$ for all $n\ge 1$, and the second inequality holds since $\ln(n) < n$ for all $n\ge 1$, and so $n +\ln(n) < n+n =2n$.) Since $\sum_{n=1}^\infty \frac{1}{2n}$ diverges (as it is a constant multiple of the harmonic series), the first comparison test yields that $\sum_{n=1}^\infty (2+\cos(n)) / (n + \ln(n))$ diverges. \item {\bf diverges:} for this one, we use the integral test. Set \[ f(x) =\frac{1}{x \ln(x) \sqrt{\ln(\ln(x))}}, \] so that $a_n =f(n)$ for all $n\ge 3$. (The restriction that $n\ge 3$ is to ensure that $\sqrt{\ln(\ln(n))}$ is well defined.)
In order to apply the integral test, we need to know that $f(x)$ is decreasing, which involves calculating a derivative and checking its sign: \[ f'(x) =\frac{-\left( \ln(x)\sqrt{\ln(\ln(x))} + \sqrt{\ln(\ln(x))} + x\ln(x) \frac{1}{2\sqrt{\ln(\ln(x))}}\frac{1}{x\ln(x)} \right)}{(x \ln(x) \sqrt{\ln(\ln(x))})^2} < 0. \] Hence, the integral test can be applied, and says that $\sum_{n=3}^\infty 1 / (n \ln(n) \sqrt{\ln(\ln(n))})$ converges if and only if $\int_3^\infty f(x) {\rm d}x = \lim_{M\rightarrow\infty} \int_3^M f(x) {\rm d}x$ exists. So, we calculate: \[ \lim_{M\rightarrow\infty} \int_3^M f(x) {\rm d}x =\lim_{M\rightarrow\infty} \int_3^M \frac{1}{x \ln(x) \sqrt{\ln(\ln(x))}} {\rm d}x =\lim_{M\rightarrow\infty} 2\sqrt{\ln(\ln(x))}\left|_3^M \right., \] which diverges, and so $\sum_{n=3}^\infty 1 / (n \ln(n) \sqrt{\ln(\ln(n))})$ diverges. \item {\bf converges absolutely:} try the ratio test, since there are factorials about: \[ \lim_{n\rightarrow\infty} \frac{(n+1)^{(n+1)} / (\pi^{(n+1)} (n+1)!)}{n^n / (\pi^n n!)} =\lim_{n\rightarrow\infty} \left( \frac{n+1}{n}\right)^n \frac{1}{\pi} =\lim_{n\rightarrow\infty} \left( 1 +\frac{1}{n}\right)^n \frac{1}{\pi} =\frac{e}{\pi} =L. \] Since the limit exists and since $L <1$, the ratio test yields that $\sum_{n=1}^\infty n^n / (\pi^n n!)$ converges. \item {\bf converges absolutely:} since both the numerator and the denominator are raised to (essentially) the same power, we try the root test, and so need to calculate: \[ \lim_{n\rightarrow\infty} \left( \frac{2^{n+1}}{n^n}\right)^{1/n} = \lim_{n\rightarrow\infty} 2^{1/n} \frac{2}{n} =L=0 \] (since $\lim_{n\rightarrow\infty} 2^{1/n} =2^0 =1$). Since the limit exists and since $L <1$, the root test yields that $\sum_{n=1}^\infty 2^{n+1} / n^n$ converges. 
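For the root test computation just done, the simplified form $2^{1/n}\cdot \frac{2}{n}$ also makes a numerical check easy; a sketch (using the simplified form avoids computing the enormous number $n^n$; the sample indices are an arbitrary choice, not part of the solution):

```python
def nth_root_of_term(n):
    # (2^(n+1) / n^n)^(1/n) = 2^(1/n) * 2 / n
    return 2 ** (1 / n) * 2 / n

print(nth_root_of_term(5), nth_root_of_term(1000))  # heads to 0
```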
\item {\bf converges conditionally:} we first test for absolute convergence, by considering the related series $\sum_{n=1}^\infty | (-1)^{n-1} / \sqrt{n}| =\sum_{n=1}^\infty 1 / \sqrt{n}$, which diverges by Exercise \ref{zeta-exercise}. \medskip \noindent We now test for convergence. This is an alternating series, and so we use the alternating series test: write \[ \sum_{n=1}^\infty \frac{(-1)^{n-1}}{\sqrt{n}} =(-1) \sum_{n=1}^\infty \frac{(-1)^n}{\sqrt{n}} = (-1) \sum_{n=1}^\infty (-1)^n a_n, \] where $a_n =\frac{1}{\sqrt{n}} >0$ for all $n\ge 1$. Since $\lim_{n\rightarrow\infty} a_n =\lim_{n\rightarrow\infty} \frac{1}{\sqrt{n}} =0$ and since $a_{n+1} =\frac{1}{\sqrt{n+1}} <\frac{1}{\sqrt{n}} =a_n$ for all $n\ge 1$, the alternating series test applies and yields that this series converges. \medskip \noindent Hence, this series converges but does not converge absolutely. That is, the series converges conditionally. \item {\bf converges conditionally:} we first check for absolute convergence, that is, convergence of the associated series $\sum_{n=1}^\infty |\cos(\pi n) / ( (n+1) \ln(n+1) ) | = \sum_{n=1}^\infty 1 / ( (n+1) \ln(n+1) )$. For this series, we apply the integral test, with $f(x) = 1 / ( (x+1) \ln(x+1) )$. Since \[ f'(x) =\frac{-\left( \ln(x+1) + (x+1)\frac{1}{x+1}\right) }{(x+1)^2 (\ln(x+1))^2} =\frac{-( \ln(x+1) + 1) }{(x+1)^2(\ln(x+1))^2} < 0 \] for $x\ge 1$, the integral test yields that the series converges if and only if $\int_1^\infty f(x) {\rm d}x = \lim_{M\rightarrow\infty} \int_1^M f(x) {\rm d}x$ exists, so we calculate: \[ \lim_{M\rightarrow\infty} \int_1^M \frac{1}{(x+1)\ln(x+1)} {\rm d}x =\lim_{M\rightarrow\infty} \ln(\ln(x+1))\left|_1^M \right. =\lim_{M\rightarrow\infty} (\ln(\ln(M+1)) -\ln(\ln(2))), \] which diverges (very very slowly). So, the series does not converge absolutely. \medskip \noindent We now test for convergence. Since $\cos(\pi n) =(-1)^n$, this is an alternating series, and we start with the alternating series test.
Since $(n+1) \ln(n+1) < (n+2) \ln(n+2)$ for all $n\ge 1$, we have that $1 / ( (n+1) \ln(n+1) ) > 1/ ( (n+2) \ln(n+2) )$ for $n\ge 1$. Since $\lim_{n\rightarrow\infty} 1/ ( (n+1) \ln(n+1) ) =0$ (and since $1 / ( (n+1) \ln(n+1) ) >0$ for $n\ge 1$), the alternating series test applies and yields that the series converges. \medskip \noindent Hence, this series converges but does not converge absolutely. That is, the series converges conditionally. \item {\bf diverges:} since $\lim_{n\rightarrow\infty} (n^2 -1) / ( n^2+1) =1$, we have that $\lim_{n\rightarrow\infty} (-1)^n (n^2 -1) / ( n^2+1)$ does not exist, and so $\sum_{n=1}^\infty (-1)^n (n^2 -1) / ( n^2+1)$ diverges by the $n^{th}$ term test for divergence. \item {\bf converges absolutely:} we first test for absolute convergence, by considering the associated series $\sum_{n=1}^\infty |(-1)^n / (n \pi^n)| =\sum_{n=1}^\infty 1 / (n \pi^n)$. Since $1/(n\pi^n)\le 1/\pi^n$ for $n\ge 1$ and since $\sum_{n=0}^\infty \frac{1}{\pi^n}$ converges, the second comparison test yields that $\sum_{n=1}^\infty 1 / (n \pi^n)$ converges, and hence that $\sum_{n=1}^\infty (-1)^n / (n \pi^n)$ converges absolutely. \item {\bf converges conditionally:} we first test for absolute convergence, that is, convergence of the associated series $\sum_{n=1}^\infty |(-1)^n (20n^2 -n -1) / (n^3+n^2+33 )| =\sum_{n=1}^\infty (20n^2 -n -1) / (n^3+n^2+33 )$. Since the $n^{th}$ term looks like a constant multiple of $\frac{1}{n}$ for large $n$, let's try the limit comparison test: \[ \lim_{n\rightarrow\infty} \frac{(20n^2 -n -1) / (n^3+n^2+33)}{1/n} =\lim_{n\rightarrow\infty} \frac{20n^3 -n^2 -n}{n^3+n^2+33} =20 =L. \] Since the limit exists and satisfies $0 < L =20 <\infty$, and since the harmonic series $\sum_{n=1}^\infty \frac{1}{n}$ diverges, the limit comparison test yields that this series diverges, and so the original series does not converge absolutely. \medskip \noindent We now test for convergence. This is an alternating series $\sum_{n=1}^\infty (-1)^n a_n$, where $a_n =(20n^2 -n -1) / (n^3+n^2+33 ) >0$ for $n\ge 1$, and so let's check whether it satisfies the conditions of the alternating series test. Since $(20n^2 -n -1) / (n^3+n^2+33 )$ is a rational function and the denominator has higher degree than the numerator, we have that $\lim_{n\rightarrow\infty} (20n^2 -n -1) / (n^3+n^2+33 ) =0$.
All that remains to check is whether the $a_n$ are monotonically decreasing. For this, let $f(x) = (20x^2 -x -1) / (x^3+x^2+33 )$, so that $f(n) =a_n$, and check that it's decreasing, which involves calculating $f'(x)$: \[ f'(x) =\frac{-20x^4 +2x^3 +4x^2 +1322x -33}{(x^3 +x^2 +33)^2} <0 \] for all $x$ greater than any of the roots of the numerator. So, the alternating series test applies, and yields that this series converges. \medskip \noindent Hence, this series converges but does not converge absolutely. That is, the series converges conditionally. \item {\bf diverges:} note that, for $n\ge 101$, we have \[ \frac{n!}{100^n} = \frac{ n(n-1)\cdots 1}{100^n} = \frac{n}{100}\frac{n-1}{100}\cdots \frac{101}{100}\frac{100}{100} \frac{99}{100}\cdots \frac{1}{100} > \frac{99}{100}\cdots \frac{1}{100}, \] and so $\lim_{n\rightarrow\infty} n! / (-100)^n$ does not exist. Hence, by the $n^{th}$ term test for divergence, the series diverges. \item {\bf converges absolutely:} we apply the integral test, with the function $f(x) =\frac{1}{x \ln(x) (\ln(\ln(x)))^2}$. First, we check to see that $f(x)$ is decreasing, by calculating its derivative: \[ f'(x) =\frac{-\left( \ln(x) (\ln(\ln(x)))^2 + (\ln(\ln(x)))^2 + 2\ln(\ln(x)) \right)}{(x \ln(x) (\ln(\ln(x)))^2)^2} < 0 \] for $x\ge 3$ (since $\ln(\ln(x)) >0$ for $x\ge 3$, each term in the numerator is positive and the denominator is non-zero). So, now we need to calculate \begin{eqnarray*} \int_3^\infty f(x) {\rm d}x & = & \lim_{M\rightarrow\infty} \int_3^M \frac{1}{x\ln(x) (\ln(\ln(x)))^2} {\rm d}x \\ & = & \lim_{M\rightarrow\infty} \frac{-1}{\ln(\ln(x))} \left|_3^M \right. =\lim_{M\rightarrow\infty} \left( \frac{-1}{\ln(\ln(M))} + \frac{1}{\ln(\ln(3))} \right) =\frac{1}{\ln(\ln(3))}. \end{eqnarray*} Since the limit exists, the integral test yields that $\sum_{n=3}^\infty 1 / (n \ln(n) (\ln(\ln(n)))^2)$ converges absolutely. \item {\bf diverges:} we start with a bit of arithmetic, noting that the numerator satisfies: $(1 + (-1)^n) =0$ for $n$ odd and $(1 + (-1)^n) =2$ for $n$ even.
Hence, the terms of the series are non-zero only for $n$ even, so let's make the substitution $n =2k$ for $k\ge 1$. Then, for $n$ even, we have that \[ \frac{1 + (-1)^n}{\sqrt{n}} =\frac{2}{\sqrt{2k}} = \frac{\sqrt{2}}{\sqrt{k}} > \frac{1}{\sqrt{k}}. \] Hence, by the first comparison test and Example \ref{zeta-series}, we have that $\sum_{n=1}^\infty (1 + (-1)^n) / \sqrt{n}$ diverges. \item {\bf converges absolutely:} again, we begin with a bit of algebra, simplifying the $n^{th}$ term in the series by noting that \[ \frac{e^n \cos^2(n)}{1+\pi^n} \le \frac{e^n}{1+\pi^n} \le \frac{e^n}{\pi^n} =\left( \frac{e}{\pi}\right)^n, \] where the first inequality follows from $\cos^2(n)\le 1$ for all $n\ge 1$. Since $\sum_{n=0}^\infty (e/\pi)^n$ converges, the second comparison test yields that $\sum_{n=1}^\infty e^n \cos^2(n) / (1+\pi^n)$ converges. \item {\bf converges absolutely:} since there are factorials involved, let's first try the ratio test: \[ \lim_{n\rightarrow\infty} \frac{(n+1)^4 / (n+1)!}{n^4/n!} =\lim_{n\rightarrow\infty} \left( \frac{n+1}{n}\right)^4 \frac{n!}{(n+1)!} =\lim_{n\rightarrow\infty} \left( \frac{n+1}{n}\right)^4 \frac{1}{n+1} =0 =L. \] Since the limit exists and since $L <1$, the ratio test yields that the series converges. \item {\bf converges absolutely:} again, since there are factorials involved, we first try the ratio test: \[ \lim_{n\rightarrow\infty} \frac{(2(n+1))! 6^{(n+1)} / (3(n+1))!}{ (2n)! 6^n / (3n)!} =\lim_{n\rightarrow\infty} \frac{6(2n+2)(2n+1)}{(3n+3)(3n+2)(3n+1)} =0 =L. \] Since the limit exists and since $L <1$, the ratio test yields that the series converges. \item {\bf converges absolutely:} and yet again, since there are factorials involved, our first attempt should be with the ratio test: \[ \lim_{n\rightarrow\infty} \frac{ (n+1)^{100} 2^{(n+1)} / \sqrt{(n+1)!}}{n^{100} 2^n / \sqrt{n!}} =\lim_{n\rightarrow\infty} \left( \frac{n+1}{n}\right)^{100} \frac{2}{\sqrt{n+1}} =0 =L.
\] Since the limit exists and since $L <1$, the ratio test yields that this series converges. \item {\bf diverges:} since there are factorials involved, we first try the ratio test: \[ \lim_{n\rightarrow\infty} \frac{ (1+(n+1)!) / (1+(n+1))!}{ (1+n!) / (1+n)!} =\lim_{n\rightarrow\infty} \frac{1 +(n+1)!}{(1+n!)(n+2)} =\lim_{n\rightarrow\infty} \frac{1/n! + n+1}{(1/n! +1)(n+2)} =1, \] and so the ratio test gives no information. (This discussion was put in to remind you that the ratio test doesn't always work with factorials.) \medskip \noindent Hmm. Notice that when $n$ is large, $1 +n!$ is very nearly equal to $n!$, and so $(1+n!)/(n+1)!$ is very nearly equal to $n!/(n+1)! =1/(n+1)$. So, let's try the limit comparison test with $1/(n+1)$: \[ \lim_{n\rightarrow\infty} \frac{(1+n!) / (1+n)!}{1/(n+1)} =\lim_{n\rightarrow\infty} \frac{(n+1)(1+n!)}{(n+1)!} =\lim_{n\rightarrow\infty} \frac{1+n!}{n!} =1=L. \] Since the limit exists and satisfies $0 < L =1 <\infty$, and since $\sum_{n=0}^\infty 1/(n+1)$ diverges (as it is just the harmonic series reindexed), the series $\sum_{n=3}^\infty (1+n!) / (1+n)!$ diverges by the limit comparison test. \item {\bf diverges:} again, since there are factorials involved, we first try the ratio test: \[ \lim_{n\rightarrow\infty} \frac{2^{2(n+1)} ((n+1)!)^2}{(2(n+1))!} \frac{(2n)!}{2^{2n} (n!)^2} =\lim_{n\rightarrow\infty} \frac{4(n+1)^2}{(2n+1)(2n+2)} =1, \] and so the ratio test yields no information. \medskip \noindent So, let's explicitly try the $n^{th}$ term test for divergence. We start with a bit of algebraic massage, namely: \[ 2^{2n} (n!)^2 =(2^n\cdot n!)^2 = ( (2n)\cdot (2n-2)\cdot (2n-4)\cdots 4\cdot 2)^2, \] and so \[ \frac{2^{2n}(n!)^2}{(2n)!} =\frac{ (2n)\cdot (2n) \cdot (2n-2)\cdot (2n-2) \cdots 2\cdot 2}{(2n)\cdot (2n-1)\cdot (2n-2)\cdot (2n-3) \cdots 2\cdot 1} = \frac{ (2n)\cdot (2n-2)\cdots 2}{(2n-1)\cdot (2n-3) \cdots 1} > 1.
\] In particular, the limit $\lim_{n\rightarrow\infty} 2^{2n} (n!)^2 / (2n)!$ cannot be zero, and so the $n^{th}$ term test yields that $\sum_{n=1}^\infty 2^{2n} (n!)^2 / (2n)!$ diverges. \item {\bf converges absolutely:} we first check for absolute convergence, namely the convergence of the series $\sum_{n=1}^\infty |(-1)^n / ( n^2 + \ln(n) )| =\sum_{n=1}^\infty 1 / ( n^2 + \ln(n) )$. Since $n^2 +\ln(n) > n^2$, we have that $1/(n^2 +\ln(n)) < 1/n^2$, and so by the second comparison test, the series $\sum_{n=1}^\infty 1 / ( n^2 + \ln(n) )$ converges. That is, the original series $\sum_{n=1}^\infty (-1)^n / ( n^2 + \ln(n) )$ converges absolutely. \item {\bf converges absolutely:} we begin with a bit of algebraic massage, noting that \[ \sum_{n=1}^\infty \frac{(-1)^{2n}}{2^n} =\sum_{n=1}^\infty \frac{((-1)^2)^n }{2^n} = \sum_{n=1}^\infty \frac{1}{2^n} =\sum_{n=1}^\infty \left( \frac{1}{2}\right)^n. \] This is a convergent geometric series, converging to \[ \frac{1}{1-\frac{1}{2}} -1 = 1. \] (The subtraction of $1$ arises from the fact that the starting index in this series is not $0$, so that \[ \sum_{n=1}^\infty \frac{1}{2^n} =\sum_{n=0}^\infty \frac{1}{2^n} - \left( \frac{1}{2}\right)^0 = \sum_{n=0}^\infty \frac{1}{2^n} - 1 = 2-1 = 1.) \] \item {\bf converges absolutely:} we first check for absolute convergence, namely the convergence of the series $\sum_{n=1}^\infty |(-2)^n / n!| =\sum_{n=1}^\infty 2^n / n!$. Since there are factorials involved, we make use of the ratio test: \[ \lim_{n\rightarrow\infty} \frac{2^{(n+1)} / (n+1)!}{2^n/n!} =\lim_{n\rightarrow\infty} \frac{2}{n+1} =0 =L. \] Since this limit exists and satisfies $L <1$, the ratio test yields that $\sum_{n=1}^\infty 2^n / n!$ converges, and hence that the original series $\sum_{n=1}^\infty (-2)^n / n!$ converges absolutely. \item {\bf diverges:} first, note that this is not an alternating series, but is a series with all non-positive terms. 
Hence, for this series, convergence and absolute convergence are equivalent, as they are for series with non-negative terms. \medskip \noindent Now, for $n$ large, $n/(n^2 +1)$ is approximately equal to $1/n$, and so let's try the limit comparison test with $\frac{1}{n}$. So, we calculate: \[ \lim_{n\rightarrow\infty} \frac{ n / (n^2+1)}{1/n} =\lim_{n\rightarrow\infty} \frac{ n^2}{n^2+1} =1 =L. \] Since the limit exists and since $0 < L =1 <\infty$, and since $\sum_{n=1}^\infty -1/n$ diverges (as it is a constant multiple of the harmonic series), the limit comparison test yields that the series $\sum_{n=1}^\infty -n / (n^2+1)$ diverges. \item {\bf converges conditionally:} we start by noting that $\cos(n\pi) =(-1)^n$, and so this is an alternating series. So, we first check for absolute convergence, namely the convergence of the series $\sum_{n=1}^\infty |100\cos(n\pi) / (2n+3)| = \sum_{n=1}^\infty 100 / (2n+3)$. Here, there are many tests that yield divergence, for instance we may use the limit comparison test with the harmonic series $\sum_{n=1}^\infty \frac{1}{n}$: \[ \lim_{n\rightarrow\infty} \frac{100/(2n+3)}{1/n} =\lim_{n\rightarrow\infty} \frac{100n}{2n+3} =50 =L; \] since this limit exists and satisfies $0 < L =50 <\infty$, and since the harmonic series diverges, the limit comparison test yields that $\sum_{n=1}^\infty 100 / (2n+3)$ diverges. \medskip \noindent However, since $\frac{100}{2(n+1) +3} =\frac{100}{2n+5} < \frac{100}{2n+3}$ and since $\lim_{n\rightarrow\infty} \frac{100}{2n+3} =0$, the alternating series test yields that $\sum_{n=1}^\infty 100\cos(n\pi) / (2n+3)$ converges. \medskip \noindent Hence, this series converges but does not converge absolutely. That is, the series converges conditionally. \item {\bf converges conditionally:} as before, we begin by simplifying the expression of each term. Here, note that $\sin((n+1/2)\pi) =(-1)^n$, and so this is an alternating series. 
As always, we first check for absolute convergence, namely the convergence of the series $\sum_{n=10}^\infty |\sin((n+1/2)\pi) / \ln(\ln(n))| =\sum_{n=10}^\infty 1 / \ln(\ln(n))$. Since $n > \ln(\ln(n))$ for all $n\ge 10$, we have that $1/\ln(\ln(n)) > 1/n$ for all $n\ge 10$, and so the series $\sum_{n=10}^\infty 1/ \ln(\ln(n))$ diverges by the first comparison test. That is, the original series does not converge absolutely. \medskip \noindent We are now ready to determine convergence of the original series. As this is an alternating series, let's check whether the hypotheses of the alternating series test are satisfied. Since $1/\ln(\ln(n)) > 1/\ln(\ln(n+1))$ and since $\lim_{n\rightarrow\infty} 1/\ln(\ln(n)) =0$ (since $\lim_{n\rightarrow\infty} \ln(\ln(n)) =\infty$), the alternating series test applies to this series, and yields that the series $\sum_{n=10}^\infty \sin((n+1/2)\pi) / \ln(\ln(n))$ converges. \medskip \noindent Hence, this series converges but does not converge absolutely. That is, the series converges conditionally. \item {\bf diverges:} similar to the algebraic manipulation we performed on the series whose terms were the reciprocals of the terms in this series, we calculate: \begin{eqnarray*} \frac{(2n)!}{2^{2n} (n!)^2} & = & \frac{(2n)!}{(2^n n!)^2} \\ & = & \frac{(2n)\cdot (2n-1)\cdot (2n-2)\cdot (2n-3) \cdots 2\cdot 1}{(2n)\cdot (2n) \cdot (2n-2) \cdot (2n-2)\cdots 2\cdot 2} \\ & = & \frac{(2n-1)\cdot (2n-3)\cdots 3\cdot 1}{(2n) \cdot (2n-2)\cdots 4\cdot 2 } \\ & = & \frac{1}{2n} \frac{2n-1}{2n-2} \frac{2n-3}{2n-4} \cdots \frac{5}{4} \frac{3}{2} > \frac{1}{2n}. \end{eqnarray*} Hence, since the series $\sum_{n=1}^\infty \frac{1}{2n}$ diverges (as it is a constant multiple of the harmonic series), the first comparison test yields that $\sum_{n=1}^\infty (2n)! / ( 2^{2n} (n!)^2)$ diverges.
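\medskip \noindent (Though it is not needed for the argument, it may be illuminating to note that the terms of this last series are scaled central binomial coefficients: since $2^{2n} (n!)^2 =4^n (n!)^2$, we have \[ \frac{(2n)!}{2^{2n} (n!)^2} =\frac{1}{4^n} \frac{(2n)!}{n!\: n!} =\frac{1}{4^n} \binom{2n}{n}, \] so the divergence just established reflects the fact that $\binom{2n}{n}$ is not much smaller than $4^n$.)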
\item {\bf converges absolutely:} since each term is a power, we first attempt to apply the root test, and so we calculate: \[ \lim_{n\rightarrow\infty} \left[ \left( \frac{n}{n+1} \right)^{n^2} \right]^{1/n} = \lim_{n\rightarrow\infty} \left( \frac{n}{n+1} \right)^n =\lim_{n\rightarrow\infty} \left( \frac{n+1}{n}\right)^{-n} = \frac{1}{\lim_{n\rightarrow\infty} \left( 1 +\frac{1}{n}\right)^n} = \frac{1}{e} =L. \] Since the limit exists and since $L < 1$, the root test yields that $\sum_{n=1}^\infty (n / (n+1) )^{n^2}$ converges. \item {\bf converges absolutely:} we begin with a bit of algebraic manipulation, namely noting that \[ 1+2+\cdots+n =\frac{n(n+1)}{2} \] for $n\ge 1$, and so \[ \frac{1}{1+2+\cdots+n} =\frac{2}{n(n+1)} < \frac{2}{n^2} \] for $n\ge 1$. Since $\sum_{n=1}^\infty 1/n^2$ converges, by Example \ref{zeta-series}, the second comparison test yields that $\sum_{n=1}^\infty 1 / (1+2+\cdots+n)$ converges. \item {\bf converges absolutely:} we begin with a bit of simplification, namely noting that \[ 0\le \frac{\ln(n)}{2n^3 - 1} \le \frac{n}{2n^3 -1}\le \frac{n}{n^3} = \frac{1}{n^2} \] for $n\ge 1$. (The first inequality follows since $\ln(n)\le n$ for $n\ge 1$, while the second inequality follows since $2n^3 -1 \ge n^3$ for $n\ge 1$.) Since $\sum_{n=1}^\infty 1/n^2$ converges by Example \ref{zeta-series}, the second comparison test yields that $\sum_{n=1}^\infty \ln(n) / (2n^3 - 1)$ converges. \item {\bf converges absolutely:} note that this is not an alternating series, even though the terms are not all of the same sign (since $\sin(n)$ behaves a bit strangely). However, we still begin testing for convergence by testing for absolute convergence, namely the convergence of the series $\sum_{n=1}^\infty |\sin(n) / n^2|$. Since $|\sin(n)|\le 1$ for all $n\ge 1$, and since $\sum_{n=1}^\infty 1/n^2$ converges by Example \ref{zeta-series}, the second comparison test yields that $\sum_{n=1}^\infty \sin(n) / n^2$ converges absolutely. 
\item {\bf diverges:} since $\lim_{n\rightarrow\infty} (n-1) / n =1$, we have that $\lim_{n\rightarrow\infty} (-1)^n (n-1) / n$ does not exist (since for large $n$, it is oscillating between numbers near $1$ and numbers near $-1$). Since this limit does not exist, the $n^{th}$ term test for divergence yields that $\sum_{n=1}^\infty (-1)^n (n-1) / n$ diverges. \item {\bf diverges:} we can rewrite this series as a geometric series, to wit: \[ \sum_{n=1}^\infty \frac{(-1)^n 2^{3n}}{7^n} =\sum_{n=1}^\infty \frac{(-8)^n}{7^n} = \sum_{n=1}^\infty \left( \frac{-8}{7}\right)^n. \] Since $|-\frac{8}{7}| \ge 1$, this is a divergent geometric series. \item {\bf converges absolutely:} this is similar to a series we handled a few problems ago. Even though the terms are not of the same sign and are not of alternating signs, we still begin our check for convergence by checking for absolute convergence. Since $|\cos(n) / n^4| \le 1/n^4$ (since $|\cos(n)| \le 1$ for all $n\ge 1$) and since $\sum_{n=1}^\infty 1/n^4$ converges, the second comparison test yields that $\sum_{n=1}^\infty \cos(n) / n^4$ converges absolutely. \item {\bf diverges:} even though this is an alternating series, I personally feel the need to try the $n^{th}$ term test first, since for $n$ large, the dominant terms are the $3^n$ in the numerator and the $2^n$ in the denominator, and so I expect the value of $3^n / (n(2^n + 1))$ to be large for large values of $n$. Let's check this: \[ \frac{3^n}{n(2^n + 1)} =\frac{3^n}{n\: 2^n + n} > \frac{3^n}{n\: 2^n + n\: 2^n} =\frac{3^n}{2n\: 2^n} =\left(\frac{3}{2}\right)^n \frac{1}{2n}. \] Now, notice that $(3/2)^n > n$ for $n\ge 3$ (since $(3/2)^3 >3$ and the derivative of $(3/2)^n -n$ is positive for $n\ge 3$), and so \[ \frac{3^n}{n(2^n + 1)} > \left(\frac{3}{2}\right)^n \frac{1}{2n} > \frac{1}{2} \] for $n\ge 3$. (So, not exactly large for large values of $n$, but big enough to do the trick.)
Hence, the limit $\lim_{n\rightarrow\infty} (-1)^n 3^n / (n(2^n + 1))$ does not exist (as it oscillates positive and negative and never settles down to $0$), and so by the $n^{th}$ term test for divergence, $\sum_{n=1}^\infty (-1)^n 3^n / (n(2^n + 1))$ diverges. \item {\bf converges conditionally:} we first check for absolute convergence, namely the convergence of the series $\sum_{n=1}^\infty |(-1)^{n-1} n / (n^2+1)| =\sum_{n=1}^\infty n / (n^2+1)$. Since $n/(n^2 +1) > n/(n^2 + n^2) = 1/(2n)$ for all $n\ge 1$ and since $\sum_{n=1}^\infty 1/(2n)$ diverges (as it is a constant multiple of the harmonic series), the first comparison test yields that $\sum_{n=1}^\infty n / (n^2+1)$ diverges, and so the original series does not converge absolutely. \medskip \noindent As it is an alternating series, we can attempt to check convergence by seeing if we can apply the alternating series test. Since $\lim_{n\rightarrow\infty} n/(n^2 +1) =0$ and since $n/(n^2 +1) > (n+1)/((n+1)^2 +1)$ for all $n\ge 1$, the hypotheses of the alternating series test are met, and so $\sum_{n=1}^\infty (-1)^{n-1} n / (n^2+1)$ converges. \medskip \noindent Hence, this series converges but does not converge absolutely. That is, the series converges conditionally. \item {\bf converges absolutely:} we first check absolute convergence, namely the convergence of the series $\sum_{n=2}^\infty |(-1)^{n-1} / (n\ln^2(n))| =\sum_{n=2}^\infty 1 / (n\ln^2(n))$. For this series, we use the integral test: set $f(x) =1 / (x\ln^2(x))$. We need to check that $f(x)$ is decreasing, which we do by calculating its derivative: \[ f'(x) =\frac{-(\ln^2(x) + 2\ln(x))}{x^2 \ln^4(x)} <0 \] for $x\ge 2$ (since $\ln(x) > 0$ for $x\ge 2$). 
We now calculate: \begin{eqnarray*} \int_2^\infty f(x) {\rm d}x & = & \lim_{M\rightarrow\infty} \int_2^M \frac{1}{x\ln^2(x)} {\rm d}x \\ & = & \lim_{M\rightarrow\infty} \frac{-1}{\ln(x)} \left|_2^M \right.\\ & = & \lim_{M\rightarrow\infty} \left( \frac{-1}{\ln(M)} + \frac{1}{\ln(2)}\right) =\frac{1}{\ln(2)}. \end{eqnarray*} Since this limit exists, the integral test yields that the series $\sum_{n=2}^\infty 1 / (n\ln^2(n))$ converges, and hence that the original series $\sum_{n=2}^\infty (-1)^{n-1} / (n\ln^2(n))$ converges absolutely. \item {\bf diverges:} we apply the ratio test (Proposition \ref{ratio-root-general}), as this is a series with non-zero terms: \[ \lim_{n\rightarrow\infty} \left| \frac{(-1)^n 2^{(n+1)} / (n+1)^2}{(-1)^{n-1} 2^n / n^2}\right| =\lim_{n\rightarrow\infty} \frac{2n^2}{(n+1)^2} =2 =L. \] Since this limit exists and satisfies $L >1$, the series $\sum_{n=1}^\infty (-1)^{n-1} 2^n / n^2$ diverges. \item {\bf converges absolutely:} we first check for absolute convergence, namely the convergence of the series $\sum_{n=1}^\infty |(-1)^n \sin(\sqrt{n}) / n^{3/2}| =\sum_{n=1}^\infty |\sin(\sqrt{n})| / n^{3/2}$. Since $ |\sin(\sqrt{n})| / n^{3/2} \le 1/ n^{3/2}$ for $n\ge 1$ (since $|\sin(\sqrt{n})| \le 1$ for $n\ge 1$), and since $\sum_{n=1}^\infty 1 / n^{3/2}$ converges by Example \ref{zeta-series}, the second comparison test yields that $\sum_{n=1}^\infty | \sin(\sqrt{n})| / n^{3/2}$ converges, and hence that the original series $\sum_{n=1}^\infty (-1)^n \sin(\sqrt{n}) / n^{3/2}$ converges absolutely. \item {\bf converges absolutely:} even though there are no factorials, let us apply the ratio test. So, we calculate: \[ \lim_{n\rightarrow\infty} \frac{ (n+1)^4 e^{-(n+1)^2}}{n^4 e^{-n^2}} =\lim_{n\rightarrow\infty} \left( \frac{n+1}{n}\right)^4 e^{-2n-1} =0 =L. \] Since this limit exists and since $L <1$, the ratio test yields that the series $\sum_{n=1}^\infty n^4 e^{-n^2}$ converges. 
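\medskip \noindent (The step with the exponentials in this last calculation deserves to be made explicit: \[ \frac{e^{-(n+1)^2}}{e^{-n^2}} =e^{n^2 -(n+1)^2} =e^{-2n-1}, \] and since $\left( \frac{n+1}{n}\right)^4 \rightarrow 1$ while $e^{-2n-1}\rightarrow 0$, the product tends to $0$.)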
\item {\bf converges conditionally:} before testing for absolute convergence, we perform a bit of algebraic simplification, by noting that \[ \sin\left( \frac{n \pi}{2}\right) = \sin\left( \frac{2k\pi}{2} \right) = \sin(k\pi) =0\] for $n =2k$ even and \[ \sin\left( \frac{\pi n}{2} \right) =\sin\left( \frac{\pi (2k+1)}{2} \right) = \sin\left( k\pi + \frac{\pi}{2}\right) =(-1)^k\] for $n =2k+1$ odd. Hence, setting $n =2k+1$ for $k\ge 0$, we may rewrite the series as \[ \sum_{n=1}^\infty \frac{\sin(n\pi /2)}{n} =\sum_{k=0}^\infty \frac{\sin(\pi (2k+1)/2)}{2k+1} =\sum_{k=0}^\infty \frac{(-1)^k}{2k+1}. \] \medskip \noindent We first test for absolute convergence, namely the convergence of the series $\sum_{k=0}^\infty |(-1)^k/(2k+1)| = \sum_{k=0}^\infty 1/(2k+1)$. However, since $1/(2k+1) > 1/(2k+2) =\frac{1}{2}\cdot \frac{1}{k+1}$ and since $\sum_{k=0}^\infty 1/(k+1)$ is the harmonic series, the series $\sum_{k=0}^\infty 1/(2k+1)$ diverges by the first comparison test, and hence the original series does not converge absolutely. \medskip \noindent To test convergence, we use the alternating series test. Since $1/(2k+1) > 1/(2(k+1) +1)$ for all $k\ge 0$ and since $\lim_{k\rightarrow\infty} 1/(2k+1) =0$, the alternating series test yields that $\sum_{k=0}^\infty (-1)^k/(2k+1)$ converges. \medskip \noindent Hence, this series converges but does not converge absolutely. That is, the series converges conditionally. \item {\bf diverges:} for this series, we first note that $\ln(x) < x^{1/8}$ for $x$ large ($x > e^{32}$ works), as follows: consider the function $f(x) =x^{1/8} -\ln(x)$, and note that \[ f(e^{8k}) = (e^{8k})^{1/8} -\ln(e^{8k}) = e^k -8k, \] and so $f(e^{32}) =e^4 - 32 = 22.5982 ... > 0$. \medskip \noindent Moreover, for $x\ge e^{32}$, we have that $f(x)$ is increasing: differentiating, we see that \[ f'(x) = \frac{1}{8} x^{-7/8} -\frac{1}{x} =\frac{1}{x} \left( \frac{1}{8} x^{1/8} -1 \right), \] and so $f'(x) >0$ for $x > 8^8$ (and in particular for $x\ge e^{32}$).
\medskip \noindent So, for $n > e^{32}$, we have that \[ \frac{1}{\ln(n)^8} > \frac{1}{(n^{1/8})^8} =\frac{1}{n}, \] and hence by the first comparison test, $\sum_{n=2}^\infty 1 / (\ln(n))^8$ diverges. (Note that we are making heavy use of Fact \ref{first-few}, that ignoring finitely many terms of a series does not affect its convergence or divergence.) \item {\bf converges if and only if $p >1$:} (note that the lower limit $13$ for the series yields that $\ln(n)$ and $\ln(\ln(n))$ are positive for all terms in the series.) We apply the integral test, using the function \[ f(x) = \frac{ 1}{x\ln(x) (\ln(\ln(x)))^p}. \] We first check that $f(x)$ is decreasing: \[ f'(x) = \frac{-\left( \ln(x)(\ln(\ln(x)))^p + (\ln(\ln(x)))^p + p(\ln(\ln(x)))^{p-1} \right)}{(x\ln(x)(\ln(\ln(x)))^p)^2} < 0 \] for $x >13$, since both $\ln(x) >0$ and $\ln(\ln(x))>0$ for $x >13$ and since $p >0$ by assumption. \medskip \noindent In order to apply the integral test, we now need to calculate: \[ \int_{13}^\infty f(x) {\rm d}x = \lim_{M\rightarrow\infty} \int_{13}^M \frac{ 1}{x\ln(x) (\ln(\ln(x)))^p} {\rm d}x. \] There are two cases: if $p = 1$, we get \begin{eqnarray*} \lim_{M\rightarrow\infty} \int_{13}^M \frac{ 1}{x\ln(x) \ln(\ln(x))} {\rm d}x & = & \lim_{M\rightarrow\infty} \ln(\ln(\ln(x))) \left|_{13}^M \right. \\ & = & \lim_{M\rightarrow\infty} \left( \ln(\ln(\ln(M))) -\ln(\ln(\ln(13))) \right) =\infty, \end{eqnarray*} and so for $p =1$ the series diverges. \medskip \noindent For $p\ne 1$, we get: \begin{eqnarray*} \lim_{M\rightarrow\infty} \int_{13}^M \frac{ 1}{x\ln(x) (\ln(\ln(x)))^p} {\rm d}x & = & \lim_{M\rightarrow\infty} \frac{1}{-p+1} \frac{1}{(\ln(\ln(x)))^{p-1}} \left|_{13}^M \right. \\ & = & \frac{1}{-p+1} \lim_{M\rightarrow\infty} \left( (\ln(\ln(M)))^{-p+1} -(\ln(\ln(13)))^{-p+1} \right), \end{eqnarray*} which converges for $p >1$ (since $-p+1 <0$) and diverges for $p <1$ (since $-p +1 >0$). Hence, the series $\sum_{n=13}^\infty 1 /( n\ln(n) (\ln(\ln(n)))^p )$ converges if and only if $p >1$.
(Note that this is really just Example \ref{zeta-series} in a bit of disguise.) \end{enumerate} \medskip \noindent {\bf Solution \ref{another-series}:} By the contrapositive to the $n^{th}$ term test for divergence, since the series $\sum_{n=1}^\infty a_n$ converges, we have that $\lim_{n\rightarrow\infty} a_n =0$. In particular, taking $\varepsilon = 1$ and remembering that each $a_n >0$, there exists $M$ so that $0 < a_n <1$ for all $n >M$. Since $0 < a_n < 1$ for $n >M$ and since $s\ge 1$, we have that $a_n^s \le a_n$ for $n >M$, and so by the second comparison test, we have that $\sum_{n=M+1}^\infty a_n^s$ converges by comparison to $\sum_{n=M+1}^\infty a_n$. Since $\sum_{n=M+1}^\infty a_n^s$ converges, we see that $\sum_{n=0}^\infty a_n^s$ converges, as desired. \medskip \noindent {\bf Solution \ref{power-series-scavenger}:} \begin{enumerate} \item {\bf radius of convergence is $\infty$, interval of convergence is ${\bf R}$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{ (-1)^{n+1} x^{n+1} / (n+1)!}{(-1)^n x^n / n!}\right| = |x| \lim_{n\rightarrow\infty} \frac{1}{n+1} =0. \] Hence, this series converges absolutely for all values of $x$ (since this limit is $0$ for every value of $x$). \item {\bf radius of convergence is $\frac{1}{5}$, interval of convergence is $\left[ -\frac{1}{5}, \frac{1}{5}\right]$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{5^{n+1} x^{n+1} / (n+1)^2}{5^n x^n / n^2}\right| = |x|\lim_{n\rightarrow\infty} \frac{5n^2}{(n+1)^2} =5 |x|. \] Hence, this series converges absolutely for $5|x| <1$, that is for $|x| < \frac{1}{5}$, and so the radius of convergence is $\frac{1}{5}$. We now need to check the endpoints of the interval $(-\frac{1}{5}, \frac{1}{5})$: \medskip \noindent At $x =-\frac{1}{5}$, the series becomes $\sum_{n=1}^\infty 5^n (-1/5)^n / n^2 =\sum_{n=1}^\infty (-1)^n / n^2$, which converges absolutely.
\medskip \noindent At $x =\frac{1}{5}$, the series becomes $\sum_{n=1}^\infty 5^n (1/5)^n / n^2 =\sum_{n=1}^\infty 1 / n^2$, which converges absolutely. \medskip \noindent So the series converges absolutely for all $x$ in the closed interval $\left[ -\frac{1}{5}, \frac{1}{5}\right]$, and diverges elsewhere. \item {\bf radius of convergence is $1$, interval of convergence is $[-1,1]$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{ x^{n+1} / ((n+1)(n+2))}{x^n / (n(n+1))} \right| = |x| \lim_{n\rightarrow\infty} \frac{n}{n+2} =|x|. \] Hence, this series converges absolutely for $|x| <1$, and so the radius of convergence is $1$. We now need to check the endpoints of the interval $(-1,1)$: \medskip \noindent At $x =-1$, the series becomes $\sum_{n=1}^\infty (-1)^n / (n(n+1))$, which converges absolutely. \medskip \noindent At $x =1$, the series becomes $\sum_{n=1}^\infty 1 / (n(n+1))$, which converges absolutely. \medskip \noindent So, the series converges absolutely for all $x$ in the closed interval $[-1,1]$, and diverges elsewhere. \item {\bf radius of convergence is $1$, interval of convergence is $[-1,1)$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{(-1)^{n+1} x^{n+1} / \sqrt{n+1}}{(-1)^n x^n / \sqrt{n}} \right| =|x|\lim_{n\rightarrow\infty} \sqrt{\frac{n}{n+1}} =|x|. \] Hence, this series converges absolutely for $|x| <1$, and so the radius of convergence is $1$. We now need to check the endpoints of the interval $(-1,1)$: \medskip \noindent At $x =-1$, the series becomes $\sum_{n=1}^\infty (-1)^n / \sqrt{n}$, which converges conditionally. (The alternating series test yields convergence, but this series does not converge absolutely, by comparison to the harmonic series.) \medskip \noindent At $x =1$, the series becomes $\sum_{n=1}^\infty 1 / \sqrt{n}$, which diverges. 
\medskip \noindent So, the series converges absolutely for all $x$ in the open interval $(-1,1)$, converges conditionally at $x =-1$, and diverges elsewhere. \item {\bf radius of convergence is $\infty$, interval of convergence is ${\bf R}$:} Apply the ratio test and calculate: \begin{eqnarray*} \lim_{n\rightarrow\infty} \left| \frac{(-1)^{n+1} x^{2(n+1)+1} / (2(n+1) + 1)!}{(-1)^n x^{2n+1} / (2n + 1)!} \right| & = & |x|^2 \lim_{n\rightarrow\infty} \frac{(2n+1)!}{(2n+3)!} \\ & = & |x|^2 \lim_{n\rightarrow\infty} \frac{1}{(2n+3)(2n+2)} = 0. \end{eqnarray*} Hence, this series converges absolutely for all values of $x$ (since this limit is $0$ for every value of $x$). \item {\bf radius of convergence is $\infty$, interval of convergence is ${\bf R}$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{3^{n+1} x^{n+1} / (n+1)!}{ 3^n x^n / n!} \right| =|x|\lim_{n\rightarrow\infty} \frac{3}{n+1} =0. \] Hence, this series converges absolutely for all values of $x$ (since this limit is $0$ for every value of $x$). \item {\bf radius of convergence is $1$, interval of convergence is $[-1,1]$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{x^{n+1} / (1+(n+1)^2)}{x^n / (1+n^2)} \right| =|x| \lim_{n\rightarrow\infty} \frac{1+n^2}{2 +2n +n^2} =|x|. \] Hence, this series converges absolutely for $|x| <1$, and so the radius of convergence is $1$. We now need to check the endpoints of the interval $(-1,1)$: \medskip \noindent At $x =-1$, the series becomes $\sum_{n=0}^\infty (-1)^n / (1+n^2)$, which converges absolutely. \medskip \noindent At $x =1$, the series becomes $\sum_{n=0}^\infty 1 / (1+n^2)$, which converges absolutely. \medskip \noindent So, the series converges absolutely for all $x$ in the closed interval $[-1,1]$, and diverges elsewhere. 
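\medskip \noindent (It is worth pausing to note why the endpoints always require separate treatment: at $x =\pm 1$ in this last example, the ratio test limit is exactly \[ |x| \lim_{n\rightarrow\infty} \frac{1+n^2}{2 +2n +n^2} =1, \] and the ratio test gives no information when the limit is $1$. The same thing happens, whenever the ratio test limit exists, at the endpoints of the interval of convergence of any power series with a finite, non-zero radius of convergence, which is why we fall back on the convergence tests for series of constants there.)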
\item {\bf radius of convergence is $1$, interval of convergence is $(-2,0]$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{(-1)^{(n+1)+1} (x+1)^{n+1} / (n+1)}{(-1)^{n+1} (x+1)^n / n} \right| =|x+1| \lim_{n\rightarrow\infty} \frac{n}{n+1} =|x+1|. \] Hence, this series converges absolutely for $|x +1| <1$, and so the radius of convergence is $1$. We now need to check the endpoints of the interval $(-2,0)$: \medskip \noindent At $x =-2$, the series becomes $\sum_{n=1}^\infty (-1)^{n+1} (-1)^n / n = -\sum_{n=1}^\infty 1/n$, which diverges, being a constant multiple of the harmonic series. \medskip \noindent At $x =0$, the series becomes $\sum_{n=1}^\infty (-1)^{n+1} / n$, which converges conditionally, as it is the alternating harmonic series. \medskip \noindent So, the series converges absolutely for all $x$ in the open interval $(-2,0)$, converges conditionally at $x =0$, and diverges elsewhere. \item {\bf radius of convergence is $\frac{4}{3}$, interval of convergence is $(-\frac{19}{3}, -\frac{11}{3})$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{3^{n+1} (x+5)^{n+1} / 4^{n+1}}{3^n (x+5)^n / 4^n} \right| = \frac{3}{4} |x+5|. \] Hence, this series converges absolutely for $\frac{3}{4} |x+5| <1$, that is for $|x+5| < \frac{4}{3}$, and so the radius of convergence is $\frac{4}{3}$. We now need to check the endpoints of the interval $(-\frac{19}{3}, -\frac{11}{3})$. \medskip \noindent At $x =-\frac{19}{3}$, the series becomes \[ \sum_{n=0}^\infty \frac{3^n \left(-\frac{19}{3} +5\right)^n}{4^n} =\sum_{n=0}^\infty (-1)^n, \] which diverges (being, for instance, a divergent geometric series). \medskip \noindent At $x =-\frac{11}{3}$, the series becomes \[ \sum_{n=0}^\infty \frac{3^n \left( -\frac{11}{3}+5 \right)^n}{4^n} =\sum_{n=0}^\infty 1, \] which diverges (again being, for instance, a divergent geometric series). 
\medskip \noindent So, the series converges absolutely for all $x$ in the open interval $(-\frac{19}{3}, -\frac{11}{3})$, and diverges elsewhere. \item {\bf radius of convergence is $1$, interval of convergence is $[-2,0]$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{(-1)^{n+1} (x+1)^{2(n+1)+1} / ((n+1)^2+4)}{(-1)^n (x+1)^{2n+1} / (n^2+4)} \right| =|x+1|^2 \lim_{n\rightarrow\infty} \frac{n^2 +4}{n^2 +2n +5} =|x+1|^2. \] Hence, this series converges absolutely for $|x+1|^2 <1$, that is for $|x+1| < 1$, and so the radius of convergence is $1$. We now need to check the endpoints of the interval $(-2,0)$. \medskip \noindent At $x =-2$, the series becomes $\sum_{n=1}^\infty (-1)^n (-1)^{2n+1} / (n^2+4) =\sum_{n=1}^\infty (-1)^{n+1} / (n^2+4)$, which converges absolutely. \medskip \noindent At $x =0$, the series becomes $\sum_{n=1}^\infty (-1)^n / (n^2+4)$, which again converges absolutely. \medskip \noindent So, the series converges absolutely for all $x$ in the closed interval $[-2,0]$, and diverges elsewhere. \item {\bf radius of convergence is $\infty$, interval of convergence is ${\bf R}$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{\pi^{n+1} (x-1)^{2(n+1)} / (2(n+1)+1)!}{\pi^n (x-1)^{2n} / (2n+1)!} \right| =|x-1|^2 \lim_{n\rightarrow\infty} \frac{\pi}{(2n+2)(2n+3)} =0. \] Hence, this series converges absolutely for all values of $x$ (since this limit is $0$ for every value of $x$). \item {\bf radius of convergence is $\infty$, interval of convergence is ${\bf R}$:} This time, since the coefficients are $n^{th}$ powers, we apply the root test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{x^n}{(\ln(n))^n} \right|^{1/n} = |x| \lim_{n\rightarrow\infty} \frac{1}{\ln(n)} =0. \] Hence, this series converges absolutely for all values of $x$ (since this limit is $0$ for every value of $x$).
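\medskip \noindent For a series with infinite radius of convergence, the partial sums settle down at any fixed $x$, however large. As a numerical illustration (not part of the text), the series $\sum_{n=0}^\infty 3^n x^n / n!$ treated above sums to $e^{3x}$, and a short Python check shows the partial sums reaching that value quickly:

```python
import math

# Partial sums of sum_{n=0}^{terms-1} (3x)^n / n!, a series with
# infinite radius of convergence whose sum is e^{3x}.
def partial_sum(x, terms):
    return sum((3 * x) ** n / math.factorial(n) for n in range(terms))

x = 2.0
print(partial_sum(x, 40), math.exp(3 * x))  # both approximately 403.4288
```

With only 40 terms the partial sum already agrees with $e^{6}$ to many decimal places, reflecting the factorial decay of the terms that drives the ratio-test limit to $0$.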
\item {\bf radius of convergence is $\frac{1}{3}$, interval of convergence is $( -\frac{1}{3}, \frac{1}{3} )$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{3^{n+1} x^{n+1}}{3^n x^n} \right| =3|x|. \] Hence, this series converges absolutely for $3|x| <1$, that is $|x| <\frac{1}{3}$, and so the radius of convergence is $\frac{1}{3}$. We now need to check the endpoints of the interval $(-\frac{1}{3}, \frac{1}{3})$. \medskip \noindent At $x =-\frac{1}{3}$, the series becomes \[ \sum_{n=0}^\infty 3^n \left(-\frac{1}{3}\right)^n =\sum_{n=0}^\infty (-1)^n, \] which diverges (being, for instance, a divergent geometric series). \medskip \noindent At $x =\frac{1}{3}$, the series becomes \[ \sum_{n=0}^\infty 3^n \left( \frac{1}{3} \right)^n =\sum_{n=0}^\infty 1, \] which diverges (again being, for instance, a divergent geometric series). \medskip \noindent So, the series converges absolutely for all $x$ in the open interval $(-\frac{1}{3}, \frac{1}{3})$, and diverges elsewhere. \item {\bf radius of convergence is $0$, interval of convergence is $\{ 0\}$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{(n+1)! x^{n+1} / 2^{n+1}}{n! x^n / 2^n} \right| =|x| \lim_{n\rightarrow\infty} \frac{n +1}{2} =\infty. \] Hence, this series converges only for $x =0$ and diverges elsewhere. \item {\bf radius of convergence is $\frac{1}{2}$, interval of convergence is $(-\frac{1}{2}, \frac{1}{2}]$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{ (-2)^{n+1} x^{(n +1)+1} / ((n+1)+1)}{(-2)^n x^{n+1} / (n+1)} \right| = |x| \lim_{n\rightarrow\infty} \frac{2(n+1)}{n+2} =2|x|. \] Hence, this series converges absolutely for $2|x| <1$, that is $|x| <\frac{1}{2}$, and so the radius of convergence is $\frac{1}{2}$. We now need to check the endpoints of the interval $(-\frac{1}{2}, \frac{1}{2})$.
\medskip \noindent At $x =-\frac{1}{2}$, the series becomes \[ \sum_{n=1}^\infty \frac{(-2)^n \left( -\frac{1}{2}\right)^{n+1}}{n+1} =-\frac{1}{2}\sum_{n=1}^\infty \frac{1}{n+1}, \] which diverges, as it is a constant multiple of the harmonic series. \medskip \noindent At $x =\frac{1}{2}$, the series becomes \[ \sum_{n=1}^\infty \frac{(-2)^n \left( \frac{1}{2}\right)^{n+1}}{n+1} =\frac{1}{2} \sum_{n=1}^\infty \frac{(-1)^n}{n+1} , \] which converges, as it is a constant multiple of the alternating harmonic series. \medskip \noindent So, the series converges absolutely for all $x$ in the open interval $(-\frac{1}{2}, \frac{1}{2})$, converges conditionally at $x =\frac{1}{2}$, and diverges elsewhere. \item {\bf radius of convergence is $\infty$, interval of convergence is ${\bf R}$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{(-1)^{n+1} x^{2(n+1)} / (2(n+1))!}{ (-1)^n x^{2n} / (2n)!} \right| =|x|^2 \lim_{n\rightarrow\infty} \frac{1}{(2n+2)(2n+1)} =0. \] Hence, this series converges absolutely for all values of $x$ (since this limit is $0$ for every value of $x$). \item {\bf radius of convergence is $1$, interval of convergence is $[-1,1]$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{ (-1)^{n+1} x^{3(n+1)} / (n+1)^{3/2}}{ (-1)^n x^{3n} / n^{3/2}} \right| =|x|^3 \lim_{n\rightarrow\infty} \frac{n^{3/2}}{(n+1)^{3/2}} =|x|^3. \] Hence, this series converges absolutely for $|x|^3 <1$, that is $|x| <1$, and so the radius of convergence is $1$. We now need to check the endpoints of the interval $(-1,1)$. \medskip \noindent At $x = -1$, the series becomes $\sum_{n=1}^\infty (-1)^n (-1)^n / n^{3/2} =\sum_{n=1}^\infty 1 /n^{3/2}$, which converges, by Example \ref{zeta-series}. \medskip \noindent At $x = 1$, the series becomes $\sum_{n=1}^\infty (-1)^n / n^{3/2}$, which converges absolutely, by Example \ref{zeta-series}. 
\medskip \noindent So, the series converges absolutely for all $x$ in the closed interval $[-1,1]$, and diverges elsewhere. \item {\bf radius of convergence is $1$, interval of convergence is $[-1,1]$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{ (-1)^{(n+1)+1} x^{n+1} / ((n+1)\ln^2(n+1))}{(-1)^{n+1} x^n / (n\ln^2(n))} \right| =|x| \lim_{n\rightarrow\infty} \frac{n\ln^2(n)}{(n+1)\ln^2(n+1)} =|x|. \] Hence, this series converges absolutely for $|x| <1$, and so the radius of convergence is $1$. We now need to check the endpoints of the interval $(-1,1)$. \medskip \noindent At $x = -1$, the series becomes $\sum_{n=2}^\infty (-1)^{n+1} (-1)^n / (n\ln^2(n)) = -\sum_{n=2}^\infty 1 / (n\ln^2(n))$, which converges by the integral test: take $f(x) = 1/(x \ln^2(x))$. Then, \[ f'(x) = \frac{-(\ln^2(x) + 2\ln(x))}{x^2 \ln^4(x)} <0 \] for $x \ge 2$, and so $f(x)$ is decreasing. Then, we evaluate \begin{eqnarray*} \int_2^\infty f(x) {\rm d}x & = & \lim_{M\rightarrow\infty} \int_2^M \frac{1}{x\ln^2(x)} {\rm d}x \\ & = & \lim_{M\rightarrow\infty} \left. \frac{-1}{\ln(x)} \right|_2^M \\ & = & \lim_{M\rightarrow\infty} \left( \frac{-1}{\ln(M)} +\frac{1}{\ln(2)} \right) =\frac{1}{\ln(2)}, \end{eqnarray*} which converges. Hence, by the integral test, the series converges. \medskip \noindent At $x = 1$, the series becomes $\sum_{n=2}^\infty (-1)^{n+1}/ (n\ln^2(n))$, which converges absolutely by the argument just given. \medskip \noindent So, the series converges absolutely for all $x$ in the closed interval $[-1,1]$, and diverges elsewhere. \item {\bf radius of convergence is $2$, interval of convergence is $(1,5)$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{(x-3)^{n+1} / 2^{n+1}}{(x-3)^n / 2^n} \right| =\frac{1}{2} |x-3|. \] Hence, this series converges absolutely for $\frac{1}{2}|x -3| <1$, that is $|x-3| < 2$, and so the radius of convergence is $2$. We now need to check the endpoints of the interval $(1,5)$.
\medskip \noindent At $x = 1$, the series becomes $\sum_{n=0}^\infty (-2)^n / 2^n =\sum_{n=0}^\infty (-1)^n$, which diverges, being for instance a divergent geometric series. \medskip \noindent At $x = 5$, the series becomes $\sum_{n=0}^\infty 1$, which diverges, again being for instance a divergent geometric series. \medskip \noindent So, the series converges absolutely for all $x$ in the open interval $(1,5)$, and diverges elsewhere. \item {\bf radius of convergence is $1$, interval of convergence is $[3,5]$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{(-1)^{n+1} (x-4)^{n+1} / ((n+1)+1)^2}{(-1)^n (x-4)^n / (n+1)^2} \right| =|x-4| \lim_{n\rightarrow\infty} \frac{(n+1)^2}{(n+2)^2} =|x-4|. \] Hence, this series converges absolutely for $|x -4| <1$, and so the radius of convergence is $1$. We now need to check the endpoints of the interval $(3,5)$. \medskip \noindent At $x = 3$, the series becomes $\sum_{n=1}^\infty (-1)^n (-1)^n / (n+1)^2 =\sum_{n=1}^\infty 1/(n+1)^2$, which converges by Example \ref{zeta-series}. \medskip \noindent At $x = 5$, the series becomes $\sum_{n=1}^\infty (-1)^n / (n+1)^2$, which converges absolutely, again by Example \ref{zeta-series}. \medskip \noindent So, the series converges absolutely for all $x$ in the closed interval $[3,5]$, and diverges elsewhere. \item {\bf radius of convergence is $0$, interval of convergence is $\{ 2\}$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{ (2(n+1)+1)!\: (x-2)^{n+1} / (n+1)^3}{ (2n+1)!\: (x-2)^n / n^3}\right| =|x-2|\lim_{n\rightarrow\infty} \frac{(2n+3)! \: n^3}{(2n+1)!\: (n+1)^3} =\infty \] for all $x\ne 2$. Hence, the series converges only for $x =2$. \item {\bf radius of convergence is $1$, interval of convergence is $[2,4)$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{\ln(n+1) (x-3)^{n+1} / (n+1)}{\ln(n) (x-3)^n / n} \right| =|x-3| \lim_{n\rightarrow\infty} \frac{n\ln(n+1)}{(n+1)\ln(n)} = |x-3|.
\] Hence, this series converges absolutely for $|x -3| <1$, and so the radius of convergence is $1$. We now need to check the endpoints of the interval $(2,4)$. \medskip \noindent At $x = 2$, the series becomes $\sum_{n=1}^\infty \ln(n) (-1)^n / n$, which converges by the alternating series test (but does not converge absolutely). \medskip \noindent At $x = 4$, the series becomes $\sum_{n=1}^\infty \ln(n) / n$, which diverges by the first comparison test, since $\ln(n)/n > 1/n$ for $n\ge 3$ and the harmonic series $\sum_{n=1}^\infty 1/n$ diverges. \medskip \noindent So, the series converges absolutely for all $x$ in the open interval $(2,4)$, converges conditionally at $x =2$, and diverges elsewhere. \item {\bf radius of convergence is $8$, interval of convergence is $(-\frac{13}{2}, \frac{19}{2})$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{ (2x-3)^{n+1} / 4^{2(n+1)}}{(2x-3)^n / 4^{2n}} \right| =\frac{1}{16} \left| 2x-3\right| = \frac{1}{8} \left| x -\frac{3}{2}\right|. \] Hence, this series converges absolutely for $\frac{1}{8} \left| x -\frac{3}{2}\right| <1$, that is for $\left| x-\frac{3}{2}\right| < 8$, and so the radius of convergence is $8$. We now need to check the endpoints of the interval $(-\frac{13}{2}, \frac{19}{2})$. \medskip \noindent At $x = -\frac{13}{2}$, the series becomes $\sum_{n=0}^\infty (2(-13/2)-3)^n / 4^{2n} = \sum_{n=0}^\infty (-1)^n$, which diverges. \medskip \noindent At $x = \frac{19}{2}$, the series becomes $\sum_{n=0}^\infty (2(19/2)-3)^n / 4^{2n} =\sum_{n=0}^\infty 1$, which diverges. \medskip \noindent So, the series converges absolutely for all $x$ in the open interval $(-\frac{13}{2}, \frac{19}{2})$, and diverges elsewhere. \item {\bf radius of convergence is $b$, interval of convergence is $(a-b, a+b)$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{ (x-a)^{n+1} / b^{n+1}}{ (x-a)^n / b^n}\right| =\frac{1}{b} |x-a|. 
\] Hence, this series converges absolutely for $\frac{1}{b} | x-a| <1$, that is for $| x- a| < b$, and so the radius of convergence is $b$. We now need to check the endpoints of the interval $(a -b, a+b)$. \medskip \noindent At $x = a-b$, the series becomes $\sum_{n=2}^\infty (a-b-a)^n / b^n =\sum_{n=2}^\infty (-1)^n$, which diverges. \medskip \noindent At $x = a+b$, the series becomes $\sum_{n=2}^\infty (a+b-a)^n / b^n =\sum_{n=2}^\infty 1$, which diverges. \medskip \noindent So, the series converges absolutely for all $x$ in the open interval $(a-b, a+b)$, and diverges elsewhere. (Note that the previous series is a specific example of this general phenomenon, with $a =\frac{3}{2}$ and $b =8$.) \item {\bf radius of convergence is $\infty$, interval of convergence is ${\bf R}$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{ ((n+1)+p)! x^{n+1} / ((n+1)!((n+1)+q)!)}{(n+p)! x^n / (n!(n+q)!)} \right| =|x| \lim_{n\rightarrow\infty} \frac{n+1+p}{(n+1)(n+1+q)} =0. \] Hence, this series converges absolutely for all values of $x$ (since this limit is $0$ for every value of $x$). \item {\bf radius of convergence is $3$, interval of convergence is $[-3,3)$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{ x^{(n+1)-1} / ((n+1) 3^{n+1})}{ x^{n-1} / (n 3^n)} \right| =|x|\lim_{n\rightarrow\infty} \frac{n}{3(n+1)} =\frac{1}{3}|x|. \] Hence, this series converges absolutely for $\frac{1}{3} | x| <1$, that is for $| x| < 3$, and so the radius of convergence is $3$. We now need to check the endpoints of the interval $(-3,3)$. \medskip \noindent At $x = -3$, the series becomes $\sum_{n=1}^\infty (-3)^{n-1} / (n 3^n) =\frac{1}{3} \sum_{n=1}^\infty \frac{ (-1)^{n-1}}{n}$, which converges conditionally, as it is a constant multiple of the alternating harmonic series. 
\medskip \noindent At $x = 3$, the series becomes $\sum_{n=1}^\infty 3^{n-1} / (n 3^n) =\frac{1}{3} \sum_{n=1}^\infty \frac{1}{n}$, which diverges, as it is a constant multiple of the harmonic series. \medskip \noindent So, the series converges absolutely for all $x$ in the open interval $(-3,3)$, converges conditionally at $x =-3$, and diverges elsewhere. \item {\bf radius of convergence is $\infty$, interval of convergence is ${\bf R}$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{ (-1)^{(n+1)-1} x^{2(n+1)-1} / (2(n+1)-1)!}{ (-1)^{n-1} x^{2n-1} / (2n-1)!} \right| =|x|^2\lim_{n\rightarrow\infty} \frac{1}{2n(2n+1)} =0. \] Hence, this series converges absolutely for all values of $x$ (since this limit is $0$ for every value of $x$). \item {\bf radius of convergence is $0$, interval of convergence is $\{ a\}$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{ (n+1)! (x-a)^{n+1}}{ n! (x-a)^n} \right| = |x-a|\lim_{n\rightarrow\infty} (n+1) =\infty \] for all $x\ne a$. Hence, the series converges only for $x =a$. \item {\bf radius of convergence is $2$, interval of convergence is $(-1,3)$:} Apply the ratio test and calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{ (n+1) (x-1)^{n+1} / (2^{n+1} (3(n+1)-1))}{ n (x-1)^n / (2^n (3n-1))} \right| =|x-1| \lim_{n\rightarrow\infty} \frac{(n+1)(3n-1)}{2n(3n+2)} =\frac{1}{2}|x-1|. \] Hence, this series converges absolutely for $\frac{1}{2} | x-1| <1$, that is for $| x -1| < 2$, and so the radius of convergence is $2$. We now need to check the endpoints of the interval $(-1,3)$. \medskip \noindent At $x = -1$, the series becomes $\sum_{n=1}^\infty n (-1-1)^n / (2^n (3n-1)) =\sum_{n=1}^\infty (-1)^n n/(3n-1)$, which diverges by the $n^{th}$ term test for divergence, as $\lim_{n\rightarrow\infty} \frac{n}{3n-1} =\frac{1}{3}$, and so $\lim_{n\rightarrow\infty} \frac{(-1)^n n}{3n-1}$ does not exist. 
\medskip \noindent At $x = 3$, the series becomes $\sum_{n=1}^\infty n (3-1)^n / (2^n (3n-1)) =\sum_{n=1}^\infty n/(3n-1)$, which again diverges by the $n^{th}$ term test for divergence. \medskip \noindent So, the series converges absolutely for all $x$ in the open interval $(-1,3)$, and diverges elsewhere. \end{enumerate} \medskip \noindent {\bf Solution \ref{radius-exercise}:} The condition that the $a_n$ satisfy is similar to the condition of the root test, and so we apply the root test to the power series $\sum_{n=0}^\infty a_n x^n$. Namely, we calculate \[ \lim_{n\rightarrow\infty} \left| a_n x^n\right| ^{1/n} = |x| \lim_{n\rightarrow\infty} \left| a_n \right| ^{1/n} = L |x|. \] Hence, the series converges absolutely for $L |x| < 1$, that is $|x| < \frac{1}{L}$, and diverges for $L|x| > 1$, and so the radius of convergence of this series is $\frac{1}{L}$, as desired. \medskip \noindent {\bf Solution \ref{semi-power-series}:} We can use the same techniques that we have developed for power series for other series, that are not strictly speaking power series. For instance, we can apply the ratio test to the series, for all the values of $x$ for which the terms are defined. \begin{enumerate} \item first, we note that this series is not defined at $x =1$, but is defined for all other values of $x$. Applying the ratio test, we calculate: \[ \lim_{n\rightarrow\infty} \left| \frac{ ((x+2)/(x-1))^{n+1} / (2(n+1)-1)}{ ((x+2)/(x-1))^n / (2n-1)} \right| =\frac{|x+2|}{|x-1|} \lim_{n\rightarrow\infty} \frac{2n-1}{2n+1} = \frac{|x+2|}{|x-1|}. \] Hence, this series converges absolutely for $\frac{|x+2|}{|x-1|} < 1$, that is for $|x+2| <|x-1|$, which is the open ray $(-\infty, -\frac{1}{2})$, and diverges for $\frac{|x+2|}{|x-1|} > 1$, which is the union $(-\frac{1}{2}, 1)\cup (1, \infty)$. 
\medskip \noindent At $x =-\frac{1}{2}$, the only remaining point at which to test for convergence, the series becomes \[ \sum_{n=1}^\infty \frac{1}{2n-1} \left( \frac{-\frac{1}{2} +2}{ -\frac{1}{2}-1} \right)^n = \sum_{n=1}^\infty \frac{1}{2n-1} (-1)^n, \] which converges conditionally, by the alternating series test. Hence, the series converges on the closed ray $(-\infty, -\frac{1}{2}]$. \item for this series, first note that the series is not defined at $x =0$, $x=-1$, $x=-2$, et cetera, and so the domain of consideration is the complement in ${\bf R}$ of the set of non-positive integers $\{ 0, -1, -2, \ldots\}$. Applying the ratio test, we calculate \[ \lim_{n\rightarrow\infty} \left| \frac{1/((x+n +1)(x+n +1-1))}{ 1/((x+n)(x+n-1))} \right| =\lim_{n\rightarrow\infty} \left| \frac{x+n-1}{x+n+1} \right| =1 \] for every (allowable) value of $x$, and so yields no information. However, we are saved by the observation that the series \[ \sum_{n=1}^\infty \frac{1}{(n+\alpha)(n+\beta)} \] converges for all $\alpha$, $\beta$ for which its terms are defined, by limit comparison to the series $\sum_{n=1}^\infty \frac{1}{n^2}$. Hence, taking $\alpha =x$ and $\beta =x-1$, we have that $\sum_{n=1}^\infty 1/((x+n)(x+n-1))$ converges at every value of $x$ for which it is defined, namely the union \[ \cdots \cup (-3,-2)\cup (-2,-1)\cup (-1,0) \cup (0,\infty). \] \end{enumerate} \medskip \noindent {\bf Solution \ref{some-continuous}:} \begin{enumerate} \item To show that $h_n(x)$ is continuous at $a\in {\bf R}$, we need to show that $\lim_{x\rightarrow a} h_n(x) =h_n(a)$. Recalling the definition of limit, this translates to showing that for each $\varepsilon >0$, there exists $\delta >0$ so that if $| x-a| < \delta$, then $|h_n(x) -h_n(a)| <\varepsilon$. Since $h_n(x) =x^n$, this is the same as showing that for each $\varepsilon >0$, there exists $\delta >0$ so that if $| x-a| < \delta$, then $| x^n -a^n| <\varepsilon$. Let's break the proof into cases.
\medskip \noindent If $n =1$, then all we need to do to satisfy the definition is take $\delta =\varepsilon$. So, we can assume that $n\ge 2$. If in addition we have that $a =0$, then by the definition of limit, we need to show that for each $\varepsilon >0$, there is $\delta >0$ so that if $|x| <\delta$, then $|x^n| = |x|^n <\varepsilon$. So, taking $\delta =\varepsilon^{1/n}$, we are done in this case as well. \medskip \noindent Consider now the case that $n\ge 2$ and $a >0$, and factor $|x^n -a^n|$ to get $ |x^n -a^n| = | (x-a)(x^{n-1} + ax^{n-2} +\cdots +a^{n-2}x + a^{n-1})|$. Recall that we have a great deal of choice in how we choose $\delta$, so we may restrict our attention to the interval $|x-a| < \frac{1}{2}a$, so that $\frac{1}{2} a < x < \frac{3}{2} a$ (and in particular, so that $x > 0$). Calculating, we see that \begin{eqnarray*} |x^n -a^n| & = & | (x-a)(x^{n-1} + ax^{n-2} +\cdots +a^{n-2}x + a^{n-1})| \\ & \le & |x-a| (x^{n-1} + ax^{n-2} +\cdots + a^{n-2} x + a^{n-1}) \\ & < & |x-a| \left( \left(\frac{3}{2} a\right)^{n-1} + a \left(\frac{3}{2} a\right)^{n-2} +\cdots + a^{n-2} \frac{3}{2} a + a^{n-1}\right) \\ & = & |x-a| a^{n-1}\sum_{k=0}^{n-1} \left( \frac{3}{2}\right)^k \\ & = & |x-a| a^{n-1}\frac{1 -(3/2)^n}{1-(3/2)} = C |x-a|, \end{eqnarray*} where $C =a^{n-1}\frac{1 -(3/2)^n}{1-(3/2)} > 0$ depends on both $a >0$ and $n\ge 2$. So, take $\delta$ to be the smaller of $\frac{1}{C}\varepsilon$ and $\frac{1}{2}a$. Then, for $|x-a| <\delta$, we have that $|x^n -a^n | < C|x-a| < C\delta \le \varepsilon$ as desired. (The first inequality follows from the calculation above, which applies since $|x-a| < \delta \le \frac{1}{2} a$, while the last inequality follows from $\delta \le \frac{1}{C} \varepsilon$.) \medskip \noindent A similar argument, with appropriate placements of absolute values, holds for $a <0$. (Note that for a given $\varepsilon >0$, the choice of $\delta$ depends on $\varepsilon$, on $a$, and on $n$.)
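\medskip \noindent The constant $C$ produced by this argument can be spot-checked numerically. The Python sketch below (illustrative only; the names are mine) verifies the bound $|x^n - a^n| < C|x-a|$ at several sample points with $|x - a| < \frac{1}{2}a$:

```python
# Check the bound |x^n - a^n| < C |x - a| from the continuity proof,
# where C = a^(n-1) * ((3/2)^n - 1) / (1/2), valid for |x - a| < a/2.
def C(a, n):
    return a ** (n - 1) * ((3.0 / 2.0) ** n - 1.0) / (1.0 / 2.0)

a, n = 2.0, 5
for x in (1.02, 1.9, 2.3, 2.98):  # all satisfy |x - a| < a/2 = 1
    assert abs(x**n - a**n) < C(a, n) * abs(x - a)
print("bound holds at all sample points")
```

Note that the bound is tightest near $x = \frac{3}{2}a$, as expected from the geometric-sum estimate in the proof.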
\item To show that $g(x)$ is continuous at $a\in {\bf R}$, we need to show that $\lim_{x\rightarrow a} g(x) =g(a)$. Recalling the definition of limit, this translates to showing that for each $\varepsilon >0$, there exists $\delta >0$ so that if $| x-a| < \delta$, then $|g(x) -g(a)| <\varepsilon$. Since $g(x) =c$ for all $x$, this is the same as showing that for each $\varepsilon >0$, there exists $\delta >0$ so that if $| x-a| < \delta$, then $| c - c| = 0 <\varepsilon$. So, regardless of the value of $\varepsilon$, taking $\delta = 1$ (or whatever your favorite positive number happens to be today) satisfies the definition. \item To show that $f(x)$ is continuous at $a\in {\bf R}$, we need to show that $\lim_{x\rightarrow a} f(x) =f(a)$. Recalling the definition of limit, this translates to showing that for each $\varepsilon >0$, there exists $\delta >0$ so that if $| x-a| < \delta$, then $|f(x) -f(a)| <\varepsilon$. Since $|f(x) -f(a)| \le c|x-a|$, taking $\delta =\frac{1}{c}\varepsilon$ satisfies the definition. (If $|x-a| < \delta = \frac{1}{c}\varepsilon$, then $|f(x) -f(a)| \le c|x-a| < c \frac{1}{c}\varepsilon =\varepsilon$, as desired.) (Functions that satisfy this condition are often referred to as {\bf Lipschitz functions}.) \end{enumerate} \medskip \noindent {\bf Solution \ref{limit-cont-exercise}:} First, since $\lim_{x\rightarrow \infty} (f(x+1) -f(x)) =0$, for any $\varepsilon >0$, there exists $x_0$ (which we can take to be positive) so that $|f(x+1) -f(x)| <\frac{1}{2} \varepsilon$ for $x >x_0$. Now, using the maximum value property, Theorem \ref{max-value-prop}, there exists a maximum value $M$ of $|f(x)|$ on the interval $[x_0, x_0+1]$. \medskip \noindent The first claim is that for any $k\ge 0$, we have that $|f(x)|\le \frac{k}{2} \varepsilon +M$ for $x$ in the interval $[x_0+k, x_0+k+1]$. To see this, let $K$ be the maximum value of $|f(x)|$ on $[x_0+k, x_0+k+1]$, occurring at $y$. Then, $x_0 +k\le y\le x_0+k+1$, and so $x_0\le y -k\le x_0+1$.
We now engage in some algebraic manipulation: \begin{eqnarray*} |f(y)| & = & |f(y) - f(y-k) + f(y-k)| \\ & \le & |f(y) - f(y-k)| + |f(y-k)| \\ & \le & |f(y) - f(y-1) + f(y-1) -\cdots - f(y-k+1) + f(y-k+1) - f(y-k)| + |f(y-k)| \\ & \le & |f(y) - f(y-1)| + |f(y-1) - f(y-2)| + \cdots + |f(y-k+1) - f(y-k)| + |f(y-k)| \\ & \le & \frac{1}{2} \varepsilon + \frac{1}{2} \varepsilon + \cdots + \frac{1}{2} \varepsilon + M \\ & = & \frac{k}{2} \varepsilon + M. \end{eqnarray*} In particular, this tells us that \[ \frac{|f(y)|}{y}\le \frac{\frac{k}{2} \varepsilon +M}{y}\le \frac{\frac{k}{2} \varepsilon}{y} +\frac{M}{y} \le \frac{\frac{k}{2} \varepsilon}{x_0+k} +\frac{M}{y} < \frac{\frac{k}{2} \varepsilon}{k} +\frac{M}{y} = \frac{1}{2} \varepsilon +\frac{M}{y} \] for all $y$ in the interval $[x_0 +k, x_0+k+1]$. \medskip \noindent Now, choose $x_1 > x_0$ so that $\frac{M}{x_1} < \frac{1}{2} \varepsilon$. Then, for all $y >x_1$, we have that \[ \left| \frac{f(y)}{y}\right| = \frac{|f(y)|}{y} < \frac{1}{2} \varepsilon +\frac{M}{y} < \frac{1}{2}\varepsilon + \frac{1}{2}\varepsilon =\varepsilon. \] In particular, we have that the definition of $\lim_{x\rightarrow\infty} \frac{f(x)}{x} =0$ is satisfied, as desired. \medskip \noindent {\bf Solution \ref{min-value}:} Since $f$ is continuous on $[a,b]$, so is $g(x) =-f(x)$. Since $g$ is continuous on the closed interval $[a,b]$, the maximum value property applied to $g$ yields that there exists some $x_0$ in $[a,b]$ so that $g(x_0) \ge g(x)$ for all $x$ in $[a,b]$. Hence, $-f(x_0) \ge -f(x)$ for all $x$ in $[a,b]$, and so $f(x_0)\le f(x)$ for all $x$ in $[a,b]$. That is, $f$ satisfies the minimum value property. \medskip \noindent {\bf Solution \ref{int-value-exercises}:} \begin{enumerate} \item as before, consider the continuous function $g(x) = f(x) -x$. Since $f(a) < a$, we have that $g(a) =f(a) -a <0$, and since $f(b) > b$, we have that $g(b) =f(b) -b >0$. Hence, the intermediate value property applied to $g$ yields that there exists $c$ in $(a,b)$ with $g(c) =0$.
That is, $f(c) -c =0$, and so $f(c) =c$. Hence, the equation $f(x) =x$ has a solution in $[a,b]$. \item first of all, note that $g(x) =x^2 -\cos(x)$ is continuous on all of ${\bf R}$, and so is continuous on every closed interval $[a,b]$ in ${\bf R}$. In order to apply the intermediate value property to find a point $c$ at which $g(c) =0$, we need to find $a$ and $b$ so that $g(a) >0$ and $g(b) <0$ (or vice versa), and the intermediate value property then implies the existence of such a number $c$ between $a$ and $b$. \medskip \noindent So, let's start plugging numbers into $g$: $g(0) = -\cos(0) = -1 <0$ and $g(2) = (2)^2 -\cos(2) = 4.6536 ... > 0$, and so there exists a number $c_1$ between $0$ and $2$ with $g(c_1) = 0$. (Note that since $(2)^2 = (-2)^2$ and $\cos(2) = \cos(-2)$, we also have that there exists $c_2$ between $-2$ and $0$ with $g(c_2) =0$.) \item for $f(x) = x^{1995} + 7654 x^{123} + x$ on the closed interval $[-a,a]$, start by verifying continuity: $f$, being a polynomial, is continuous on all of ${\bf R}$, and hence is continuous on $[-a,a]$. Now, check the sign of $f$ on the endpoints of the given interval: $f(a) = a^{1995} + 7654 a^{123} + a > 0$ (since $a >0$) and $f(-a) = (-a)^{1995} + 7654 (-a)^{123} + (-a) = -f(a) < 0$, and so the intermediate value property implies that there exists some $c$ in $(-a,a)$ with $f(c) =0$. (And actually, casual inspection reveals that $f(0) =0$.) \item for $\tan(x)=e^{-x}$ for $x$ in $[-1,1]$, start by defining $g(x) = \tan(x) -e^{-x}$, so that $\tan(c) =e^{-c}$ if and only if $g(c) =0$, as was done above. Note that $g$ is continuous on $[-1,1]$, since $e^{-x}$ is continuous on all of ${\bf R}$ and $\tan(x)$ is continuous as long as its denominator $\cos(x)$ is non-zero, which holds true on $[-1,1]$. Since we are working on the closed interval $[-1,1]$, check the values of $g$ on the endpoints: $g(1) = \tan(1) -e^{-1} = 1.1895 ... >0$ and $g(-1) = -4.2757 ...
<0$, and so there exists some $c$ in $(-1,1)$ with $g(c) =0$, and hence with $\tan(c) = e^{-c}$. \item as above, $f(x) = x^3+2x^5+(1+x^2)^{-2}$ is continuous on $[-1,1]$, as it is the sum of a polynomial and a rational function whose denominator is non-zero on $[-1,1]$. As always, check the endpoints of the interval first: $f(1) = \frac{13}{4}$ and $f(-1) = -\frac{11}{4}$, and so by the intermediate value property, there is some $c$ in $(-1,1)$ at which $f(c) = 0$. \item consider $f(x) = 3\sin^2(x) - 2\cos^3(x)$. Since both $\sin(x)$ and $\cos(x)$ are continuous on all of ${\bf R}$, we have that $f$ is continuous on all of ${\bf R}$. Since no specific closed interval is given, we need to find an appropriate interval on which to apply the intermediate value property for $f$, if in fact such an interval exists. Fortunately, we remember that $\sin(k\pi) =0$ for all integers $k$, and so we may consider the interval $[k\pi, (k+1)\pi]$ for any integer $k\ge 1$, so that the interval lies in $(0,\infty)$. At the endpoints of this interval, $f(k\pi) = -2\cos^3(k\pi)$ and $f((k+1)\pi) = -2\cos^3((k+1)\pi)$. Since $\cos(k\pi)$ and $\cos((k+1)\pi)$ are equal to $\pm 1$ and have opposite signs, $f(k\pi)$ and $f((k+1)\pi)$ are both non-zero and have opposite signs, and so by the intermediate value property, there is a point $c_k$ in $(k\pi, (k+1)\pi)$ at which $f(c_k) =0$, that is, at which $3\sin^2(c_k) = 2\cos^3(c_k)$, as desired. \item first, note that $f(x)= 3+x^5-1001x^2$ is a polynomial and so is continuous on all of ${\bf R}$, and in particular is continuous for $x>0$. As above, we need to choose a closed interval on which to apply the intermediate value property. Let's start by evaluating $f$ at some of the natural numbers: $f(1) = -997$; $f(2) = -3969$; $f(10) = -97$; $f(11) = 39933$. Hence, the intermediate value property implies that there is a number $c$ in the open interval $(10,11)$ at which $f(c) =0$.
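\medskip \noindent The root whose existence the intermediate value property guarantees can be located numerically by repeated bisection, which is nothing more than the intermediate value property applied to smaller and smaller intervals. A short Python sketch (illustrative only, not part of the text):

```python
# Bisection: f changes sign on [10, 11], so halving the interval while
# preserving the sign change closes in on a root of f(x) = 3 + x^5 - 1001 x^2.
def f(x):
    return 3 + x**5 - 1001 * x**2

lo, hi = 10.0, 11.0
assert f(lo) < 0 < f(hi)  # the sign change found above
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
print(lo)  # a root of f, trapped between 10 and 11
```

Each pass of the loop keeps $f(\mathrm{lo}) < 0 < f(\mathrm{hi})$ while halving the interval, so the intermediate value property continues to guarantee a root between the two endpoints.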
\end{enumerate} \medskip \noindent {\bf Solution \ref{interated-sequence}:} first, for the sake of notational clarity, define the $n$-fold composition of $f$ with itself by $f^{\circ n}$, so that $f^{\circ n} =f\circ f^{\circ (n-1)}$. The hypothesis can then be restated as saying that the sequence $\{ f^{\circ n}(c)\}$ converges to $a$. Now, apply $f$ to both sides. Since $f$ is continuous, the sequence $\{ f(f^{\circ n}(c))\}$ converges to $f(a)$, by Proposition \ref{convergence-cont}. However, since $f(f^{\circ n}(c)) =f\circ f^{\circ n}(c) = f^{\circ (n+1)}(c)$, the sequence $\{ f(f^{\circ n}(c))\}$ is the same as the sequence $\{ f^{\circ n}(c)\}$ with the first term removed, and so $\{ f(f^{\circ n}(c))\}$ converges to $a$ as well. Hence, since $\{ f(f^{\circ n}(c))\}$ converges to both $a$ and $f(a)$, we have that $a = f(a)$. \medskip \noindent {\bf Solution \ref{mean-value-example}:} First, note that $f$ is continuous on $[1,4]$, as it is the composition of two continuous functions, namely absolute value and a linear polynomial. However, $f$ is not differentiable at $x=2$ (since absolute value is not differentiable at $0$), and so the hypotheses of the mean value theorem are not satisfied. \medskip \noindent To see that $f$ does not satisfy the conclusion of the mean value theorem, we calculate: $f(4) -f(1) = |4-2| - |1-2| = 2 - 1 = 1$ and $4 - 1 = 3$. However, for $x > 2$, we have that $f'(x) =1$ and for $x < 2$ we have that $f'(x) = -1$, and so there cannot be a point $c$ in $(1,4)$ at which $f'(c) = (f(4) -f(1))/(4 -1) =1/3$. \medskip \noindent {\bf Solution \ref{mean-value-stmts}:} \begin{enumerate} \item This proof follows the same general outline as the proof just given. Suppose that $g'(x) = a_{n-1} x^{n-1} + a_{n-2} x^{n-2} +\cdots + a_1 x + a_0$, and consider the new function $h(x) = \frac{1}{n} a_{n-1} x^n + \frac{1}{n-1} a_{n-2} x^{n-1} +\cdots + \frac{1}{2} a_1 x^2 + a_0 x - g(x)$. 
Note that since $g$ and polynomials are differentiable, and hence continuous, on all of ${\bf R}$, we have that $h$ is differentiable, and hence continuous, on all of ${\bf R}$. Also, $h'(x) = a_{n-1} x^{n-1} + a_{n-2} x^{n-2} +\cdots + a_1 x + a_0 - g'(x) = 0$ for all $x\in {\bf R}$. \medskip \noindent For $x_0 >0$, apply the mean value theorem to $h$ on the interval $[0, x_0]$. Since $h$ is continuous on $[0, x_0]$ and differentiable on $(0, x_0)$, the mean value theorem yields that there exists some $c$ in $(0, x_0)$ so that $h(x_0) - h(0) = h'(c) (x_0 -0) = 0$, since $h'(c) =0$. That is, $h(x_0) =h(0)$ for all $x_0 >0$. As above, we also get that $h(x_0) =h(0)$ for all $x_0 <0$ by applying the mean value theorem to $h$ on the interval $[x_0, 0]$. \medskip \noindent Hence, setting $b =h(0)$, we have that $h(x) =b$ for all $x\in {\bf R}$. Substituting in the definition of $h$, this yields that $\frac{1}{n} a_{n-1} x^n + \frac{1}{n-1} a_{n-2} x^{n-1} +\cdots + \frac{1}{2} a_1 x^2 + a_0 x - g(x) = b$ for all $x\in {\bf R}$, that is, $g(x) = \frac{1}{n} a_{n-1} x^n + \frac{1}{n-1} a_{n-2} x^{n-1} +\cdots + \frac{1}{2} a_1 x^2 + a_0 x - b$ for all $x\in {\bf R}$, and so $g$ is a polynomial of degree $n$. \item This is a slightly different sort of argument, and we break it into two pieces, corresponding to the two inequalities. \medskip \noindent Set $h(x) = x -\ln(x+1)$, and note that $h$ is differentiable, and hence continuous, on $(-1,\infty)$. The two cases, of $-1 < x <0$ and of $x >0$, are handled in the same fashion, and we write out the details only for the case $x >0$. Apply the mean value theorem to $h$ on any closed interval in $[0, \infty)$. Note that $h(0) =0 -\ln(1) =0$. If there were another point $x_0> 0$ at which $h(x_0) =0$, then by applying either Rolle's theorem or the mean value theorem to $h$ on the interval $[0, x_0]$, there would exist a point $c$ in $(0,x_0)$ at which $h'(c) =0$. 
However, $h'(c) = 1 -\frac{1}{c+1}$, which is non-zero for $c\ne 0$. Hence, $h(x)\ne 0$ for all $x\in (0,\infty)$. By the intermediate value theorem, this forces either $h(x) >0$ for all $x >0$ or $h(x) <0$ for all $x >0$ (because if there are points $a$ and $b$ in $(0,\infty)$ at which $h(a) >0$ and $h(b) <0$, then there is a point $c$ between $a$ and $b$ at which $h(c) =0$). Since $h(1) = 1 -\ln(2) = 0.3068... >0$, we have that $h(x) >0$ on $(0,\infty)$, that is, that $x > \ln(x+1)$ for all $x >0$, as desired. (As noted above, the argument to show that $h(x) >0$ for $-1 < x <0$ proceeds in the same fashion, and so $x > \ln(x+1)$ for $-1 < x <0$ as well.) \medskip \noindent For the other inequality, we show that $\ln(x+1) > \frac{x}{x+1}$ for all $x > -1$ with $x\ne 0$. (As above, we give the details in the case that $x >0$, and leave the case of $-1 < x <0$ to the reader.) Set $g(x) = \ln(x+1) -\frac{x}{x+1}$, and note that $g(0) =0$ and that $g'(x) = \frac{1}{x+1} -\frac{1}{(x+1)^2} = \frac{x}{(x+1)^2} >0$ for $x >0$. In particular, applying the mean value theorem to $g$ on the interval $[0, x_0]$, we see that there is $c$ in $(0, x_0)$ so that $g(x_0) -g(0) =g'(c) (x_0 -0) >0$, since both $g'(c) >0$ and $x_0 >0$. Hence, $g(x_0) > g(0) = 0$ for all $x >0$. That is, $\ln(x+1) > \frac{x}{x+1}$ for all $x >0$. \item Here, set $g(x) = x -\sin(x)$. We wish to show that $g(x) >0$ for all $x >0$. First, note that since $-1 \le \sin(x) \le 1$ for all $x\in {\bf R}$, we have that $g(x) >0$ for $x >1$, and so we can restrict our attention henceforth to $0 < x \le 1$. For $0 < x_0 \le 1$, apply the mean value theorem to $g$ on the interval $[0, x_0]$: there is $c$ in $(0, x_0)$ so that $g(x_0) -g(0) =g'(c) (x_0 -0)$. Since $g(0) =0$, since $x_0 >0$, and since $g'(c) = 1 -\cos(c) >0$ because $\cos(c) < 1$ for $c\in (0, 1)$, we have that $g(x_0) >0$ for all $0 < x_0 \le 1$. Hence, $g(x) >0$ for all $x >0$, as desired. \end{enumerate} \medskip \noindent {\bf Solution \ref{mean-value-exercises}:} \begin{enumerate} \item we know that there is one solution to $f(x) =0$ in $[-a,a]$, namely $x =0$ (which can be found using the intermediate value theorem or by inspection). To see that there are no others, we again use Rolle's theorem: if there were $b$ in $[-a,a]$, $b\ne 0$, with $f(b) =0$, then there would exist some point $c$ between $b$ and $0$ with $f'(c) =0$. However, $f'(x) = 1995 x^{1994} + 941442x^{122} + 1$ and so $f'(c) \ge 1 > 0$ for all $c \in {\bf R}$. Hence, by Rolle's theorem, there is no second solution to $f(x) =0$.
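The monotonicity argument above can be sketched numerically. The statement of $f$ itself is not repeated in this solution; integrating the derivative $f'(x) = 1995 x^{1994} + 941442 x^{122} + 1$ with $f(0) =0$ suggests the hypothetical reconstruction $f(x) = x^{1995} + 7654 x^{123} + x$, and the sketch below assumes exactly that.

```python
# Hypothetical reconstruction (an assumption, not from the solution text):
# integrating f'(x) = 1995 x^1994 + 941442 x^122 + 1 with f(0) = 0 gives
# f(x) = x^1995 + 7654 x^123 + x.
def f(x):
    return x**1995 + 7654 * x**123 + x

def fprime(x):
    return 1995 * x**1994 + 941442 * x**122 + 1

# f'(x) >= 1 > 0 at every sample point (the even powers are non-negative),
# so f is strictly increasing there, and x = 0 is the only root among them.
samples = list(range(-5, 6))
assert all(fprime(x) >= 1 for x in samples)
values = [f(x) for x in samples]
assert all(a < b for a, b in zip(values, values[1:]))
assert f(0) == 0 and all(f(x) != 0 for x in samples if x != 0)
```

Exact integer arithmetic sidesteps any overflow concerns with the huge exponents; the inequality $f'(c)\ge 1$ of course holds for every real $c$, as in the solution.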
\item again working with $g(x) = \tan(x) -e^{-x}$, we saw earlier that there is a solution to $g(x) =0$ in the interval $[-1,1]$. However, since $g'(x) = \sec^2(x) +e^{-x} >0$ for all $x\in (-1,1)$, Rolle's theorem implies that there can be no second solution to $g(x) =0$ in the interval $[-1,1]$. (It is the same reasoning as before: if there were two solutions to $g(x) =0$, then there would exist a point $c$ between them at which $g'(c) =0$; however, the calculation above shows that $g'(c)\ne 0$ for all $c$ in $(-1,1)$.) \item we don't have enough information to decide whether we've found all the solutions to $f(x) =0$. With $f(x) = 3\sin^2(x) -2\cos^3(x)$, we have that $f'(x) = 6\sin(x)\cos(x) + 6\cos^2(x)\sin(x) = 6\sin(x)\cos(x) (1+\cos(x)) = 0$ when $x = k\pi$ for $k\in {\bf N}$ (since $\sin(k\pi) =0$) and when $x = (k +\frac{1}{2})\pi$ (since $\cos((k +\frac{1}{2})\pi) =0$ for $k\in {\bf N}$). Note that $f(k\pi) = -2\cos^3(k\pi) = (-1)^{k+1}2\ne 0$ and that $f((k+\frac{1}{2})\pi) = 3\sin^2((k+\frac{1}{2})\pi) = 3\ne 0$. So, for any $m\in {\bf N}$, consider the interval $(m\pi, (m+2)\pi)$. \medskip \noindent There exist three points in this interval at which $f'(x) =0$, namely $(m+\frac{1}{2})\pi$, $(m+1)\pi$, and $(m+\frac{3}{2})\pi$, and our earlier analysis using the intermediate value theorem found only two points in this interval at which $f(x) =0$. However, while Rolle's theorem yields that two points at which $f(x) =0$ yield one point at which $f'(x) =0$, we are unable to argue the other way: there may be many points at which $f'(x) =0$ and still no points at which $f(x) =0$. This example shows the limitations of this sort of analysis. \item for $f(x) =3+x^5-1001x^2$ on $x>0$, again differentiate: $f'(x) = 5x^4 -2002x = x(5x^3 - 2002)$, and so there is only one point in $(0,\infty)$ at which $f'(x) =0$, namely the solution $c$ of $5c^3 -2002 =0$.
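This critical point is easy to pin down numerically; nothing beyond the displayed equation $5c^3 -2002 =0$ is used in the sketch below.

```python
# The unique positive critical point of f(x) = 3 + x^5 - 1001 x^2
# solves 5c^3 - 2002 = 0, i.e. c = (2002/5)^(1/3).
c = (2002 / 5) ** (1 / 3)

def fprime(x):
    return 5 * x**4 - 2002 * x

assert abs(5 * c**3 - 2002) < 1e-9   # c satisfies the cubic
assert abs(fprime(c)) < 1e-6         # so f'(c) = c (5c^3 - 2002) = 0
```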
By calculation, we have that $c = 7.3705 ...$, and so if there is a second solution to $f(x) =0$ in $(0,\infty)$, it must lie in the interval $(0,c)$ (since by Rolle's theorem, if there are two solutions to $f(x) =0$, then there exists at least one solution to $f'(x) =0$ between them). \medskip \noindent Since $f(0) =3$ and since $f(c) = -32624.3179...$, the intermediate value property implies that there is a solution to $f(x) =0$ in the interval $(0,c)$. Since the only solution to $f'(x) =0$ on $(0,\infty)$ occurs at $c$, Rolle's theorem implies that there can be at most two solutions to $f(x) =0$ in $(0,\infty)$, and we have found them both. \end{enumerate} \medskip \noindent {\bf Solution \ref{more-mean-val-exercises}:} (In these problems, I've stopped explicitly checking the continuity and differentiability hypotheses of the intermediate value property and of Rolle's theorem and the mean value theorem, because they have been checked so many times already and they hold true for all the functions in this exercise.) \begin{enumerate} \item using the general mantra that two solutions to $g(x) =0$ yield one solution to $g'(x) =0$ via Rolle's theorem, let's see if we can find three solutions to $g(x) =0$ for $g(x) = x^3 - 12\pi x^2 + 44\pi^2 x - 48\pi^3 + \cos(x) - 1$. Factoring, we see that $g(x) = (x-2\pi)(x-4\pi)(x-6\pi) + \cos(x) -1$, and so $g(2\pi) = g(4\pi) = g(6\pi) =0$. By Rolle's theorem, there then exists $a$ in $(2\pi, 4\pi)$ and $b$ in $(4\pi, 6\pi)$ so that $g'(a) =g'(b) =0$, as desired. (Also, note that the mixture of polynomial and trigonometric functions makes it unlikely that we would find solutions to $g'(x) =0$ by direct calculation.) \item a still slightly different method: calculating, we see that $f'(x)=4x^3 - \pi^3 -\cos(x)$, and that $f'(-10) = -4000 -\pi^3 -\cos(-10) <0$ and that $f'(10) = 4000 -\pi^3 -\cos(10) >0$.
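These endpoint signs, and the point $a$ they guarantee via the intermediate value property, can be sketched numerically; only the derivative formula $f'(x) = 4x^3 -\pi^3 -\cos(x)$ from the solution is used, and the bisection loop is an illustrative add-on rather than part of the argument.

```python
import math

def fprime(x):
    # f'(x) = 4x^3 - pi^3 - cos(x), as computed in the solution
    return 4 * x**3 - math.pi**3 - math.cos(x)

# endpoint signs, as in the solution
assert fprime(-10) < 0 and fprime(10) > 0

# Bisection maintains fprime(lo) < 0 <= fprime(hi), closing in on a
# point a in (-10, 10) with f'(a) = 0.
lo, hi = -10.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if fprime(mid) < 0:
        lo = mid
    else:
        hi = mid
a = (lo + hi) / 2
assert abs(fprime(a)) < 1e-6
```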
Since $f'$ is continuous on ${\bf R}$, it is certainly continuous on the interval $[-10, 10]$, and so by the intermediate value property, there is some $a$ in $(-10,10)$ at which $f'(a) =0$. \item label the points at which $g$ vanishes as $a_1 < a_2 <\cdots < a_n$. For each consecutive pair $a_k$, $a_{k+1}$, Rolle's theorem yields that there exists a point $b_k$ between $a_k$ and $a_{k+1}$ at which $g'(b_k) =0$. This yields $n-1$ points $b_1,\ldots, b_{n-1}$ at which the derivative $g'(x)$ vanishes, as desired. \item let $h(x) = x^3 +px +q$. Suppose that $h$ has two real roots; by Rolle's theorem, there is then a number $c$ between these roots at which $h'(c) =0$. However, calculating directly we see that $h'(x) = 3x^2 +p \ge p >0$ for all $x\in {\bf R}$, and so there are no solutions to $h'(x) =0$. Hence, there can be at most one root of $h$. \medskip \noindent To see that there is a root, we note that since $h$ has odd degree (and since the coefficient of the highest degree term is positive), we have that $\lim_{x\rightarrow\infty} h(x) =\infty$ and $\lim_{x\rightarrow -\infty} h(x) =-\infty$. Hence, we can find a point $a$ at which $h(a) >0$ and a point $b$ at which $h(b) <0$, and the intermediate value property then implies that there is a point between $a$ and $b$ at which $h(x) =0$. \end{enumerate} \medskip \noindent {\bf Solution \ref{lhopital-exercises}:} [Note that for some of these limits, we do not need to use as heavy a piece of machinery as l'Hopital's rule, just some clever simplifying.] \begin{enumerate} \item since this limit has the indeterminate form $\frac{0}{0}$ (since both $\lim_{x\rightarrow 2} (1-\cos(\pi x)) =0$ and $\lim_{x\rightarrow 2} \sin^2(\pi x) =0$), we may use l'Hopital's rule: \[ \lim_{x\rightarrow 2} \frac{1-\cos(\pi x)}{\sin^2(\pi x)} =\lim_{x\rightarrow 2} \frac{\pi \sin(\pi x)}{2\pi\sin(\pi x)\cos(\pi x)} = \lim_{x\rightarrow 2} \frac{1}{2\cos(\pi x)} =\frac{1}{2}.
\] \medskip \noindent (Note that we may also evaluate this limit without l'Hopital's rule, using the trigonometric identity $\sin^2(\theta) +\cos^2(\theta) =1$, as follows: \[ \lim_{x\rightarrow 2} \frac{1-\cos(\pi x)}{\sin^2(\pi x)} = \lim_{x\rightarrow 2} \frac{1-\cos(\pi x)}{1 -\cos^2(\pi x)} = \lim_{x\rightarrow 2} \frac{1}{1 +\cos(\pi x)} = \frac{1}{2}. \left. \right) \] \item again, here we have the choice of factoring or using l'Hopital's rule. I feel like factoring: \begin{eqnarray*} \lim_{x\rightarrow {-1}} \frac{x^7+1}{x^3+1} & = & \lim_{x\rightarrow {-1}} \frac{(x+1)(x^6 -x^5 +x^4 -x^3 +x^2 -x +1)}{(x+1)(x^2 -x+1)} \\ & = & \lim_{x\rightarrow {-1}} \frac{x^6 -x^5 +x^4 -x^3 +x^2 -x +1}{x^2 -x+1} = \frac{7}{3}. \end{eqnarray*} \item write $\tan(z) = \sin(z) /\cos(z)$ and simplify: \begin{eqnarray*} \lim_{x\rightarrow 3} \frac{1+\cos(\pi x)}{\tan^2(\pi x)} & = & \lim_{x\rightarrow 3} \frac{(1+\cos(\pi x))\cos^2(\pi x)}{\sin^2(\pi x)} \\ & = & \lim_{x\rightarrow 3} \frac{(1+\cos(\pi x))\cos^2(\pi x)}{1 -\cos^2(\pi x)} = \lim_{x\rightarrow 3} \frac{\cos^2(\pi x)}{1 -\cos(\pi x)} = \frac{1}{2}. \end{eqnarray*} \item as this has the indeterminate form $\frac{0}{0}$, and since there seems to be no easy simplification possible, we use l'Hopital's rule: \[ \lim_{x\rightarrow 1} \frac{1-x+\ln(x)}{1+\cos(\pi x)} =\lim_{x\rightarrow 1} \frac{ -1 +\frac{1}{x}}{-\pi \sin(\pi x)}. \] Since this limit still has the indeterminate form $\frac{0}{0}$, we may use l'Hopital's rule again: \[ \lim_{x\rightarrow 1} \frac{ -1 +\frac{1}{x}}{-\pi \sin(\pi x)} = \lim_{x\rightarrow 1} \frac{ -\frac{1}{x^2}}{-\pi^2 \cos(\pi x)} = -\frac{1}{\pi^2}. \] \item this has the indeterminate form $\infty^0$, and so we rewrite it: \[ \lim_{x\rightarrow\infty} (\ln(x))^{1/x} =\lim_{x\rightarrow\infty} \left( e^{\ln(\ln(x))}\right)^{1/x} = e^{\lim_{x\rightarrow\infty} \ln(\ln(x))/x}. 
\] The exponent has the indeterminate form $\frac{\infty}{\infty}$, and so we may use l'Hopital's rule: \[ \lim_{x\rightarrow\infty} \frac{\ln(\ln(x))}{x} = \lim_{x\rightarrow\infty} \frac{\frac{1}{\ln(x)}\cdot \frac{1}{x}}{1} = 0. \] Hence, we see that \[ \lim_{x\rightarrow\infty} (\ln(x))^{1/x} = e^{\lim_{x\rightarrow\infty} \ln(\ln(x))/x} =e^0 =1. \] \item factoring, we see that \[ \lim_{x\rightarrow 2} \frac{x^2+x-6}{x^2-4} =\lim_{x\rightarrow 2} \frac{(x-2)(x+3)}{(x-2)(x+2)} = \lim_{x\rightarrow 2} \frac{x+3}{x+2} =\frac{5}{4}. \] \item as this limit has the indeterminate form $\frac{0}{0}$, we may use l'Hopital's rule: \[ \lim_{x\rightarrow 0} \frac{x+\sin(2x)}{x-\sin(2x)} = \lim_{x\rightarrow 0} \frac{1+ 2\cos(2x)}{1 -2\cos(2x)} = \frac{1 +2}{1-2} = -3. \] \item since this limit has the indeterminate form $\frac{\infty}{\infty}$, we may apply l'Hopital's rule: \[ \lim_{x\rightarrow\infty} \frac{e^x-1}{x^2} = \lim_{x\rightarrow\infty} \frac{e^x}{2 x} = \lim_{x\rightarrow\infty} \frac{e^x}{2} = \infty. \] (The second equality follows from applying l'Hopital's rule a second time, which is valid since the limit still has the indeterminate form $\frac{\infty}{\infty}$.) \item in this limit, though we need to check at each stage, we will apply l'Hopital's rule four times, as the original limit has the indeterminate form $\frac{0}{0}$, and each of the first three applications of l'Hopital's rule results in a limit still in the indeterminate form $\frac{0}{0}$. \begin{eqnarray*} \lim_{x\rightarrow 0} \frac{e^x+e^{-x}-x^2-2}{\sin^2(x)-x^2} = \lim_{x\rightarrow 0} \frac{e^x -e^{-x}-2x}{2\sin(x)\cos(x)-2x} & = & \lim_{x\rightarrow 0} \frac{e^x -e^{-x}-2x}{\sin(2x) -2x} \\ & = & \lim_{x\rightarrow 0} \frac{e^x +e^{-x}-2}{2\cos(2x) -2} \\ & = & \lim_{x\rightarrow 0} \frac{e^x -e^{-x}}{-4\sin(2x)} \\ & = & \lim_{x\rightarrow 0} \frac{e^x +e^{-x}}{-8\cos(2x)} = -\frac{1}{4}.
\end{eqnarray*} \item this limit has the indeterminate form $\frac{\infty}{\infty}$, and so we apply l'Hopital's rule: \[ \lim_{x\rightarrow\infty} \frac{\ln(x)}{x} = \lim_{x\rightarrow\infty} \frac{\frac{1}{x}}{1} = 0. \] \item here, we first attempt to evaluate the limit by factoring, a sensible first step for limits of rational functions: \[ \lim_{x\rightarrow 2} \frac{x^3-x^2-x-2}{x^3-3x^2+3x-2} =\lim_{x\rightarrow 2} \frac{(x-2)(x^2 +x+1)}{(x-2)(x^2 -x+1)} =\lim_{x\rightarrow 2} \frac{x^2 +x+1}{x^2 -x+1} = \frac{7}{3}. \] \item again, we first attempt to evaluate the limit by factoring: \[ \lim_{x\rightarrow 1} \frac{x^3-x^2-x+1}{x^3-2x^2+x} =\lim_{x\rightarrow 1} \frac{(x-1)(x^2 -1)}{x(x-1)^2} = \lim_{x\rightarrow 1} \frac{x+1}{x} = 2.\] \end{enumerate} \medskip \noindent {\bf Solution \ref{improper-exercise}:} \begin{enumerate} \item this is an improper integral because $1/x^{3/2}$ is continuous on $(0,4]$ and $\lim_{x\rightarrow 0+} 1/x^{3/2} =\infty$. So, we evaluate: \begin{eqnarray*} \int_0^4 \frac{1}{x^{3/2}} \: {\rm d}x & = & \lim_{c\rightarrow 0+} \int_c^4 \frac{1}{x^{3/2}} \: {\rm d}x \\ & = & \lim_{c\rightarrow 0+} \int_c^4 x^{-3/2} \: {\rm d}x \\ & = & \lim_{c\rightarrow 0+} \left( -\frac{2}{\sqrt{4}} + \frac{2}{\sqrt{c}}\right) \\ & = & -1 + 2\lim_{c\rightarrow 0+} \frac{1}{\sqrt{c}} =\infty, \end{eqnarray*} and so this improper integral {\bf diverges}. \item this is an improper integral because the interval of integration is $[1,\infty)$, which is not a closed, bounded interval. So, we evaluate: \begin{eqnarray*} \int_1^\infty \frac{1}{x+1} \: {\rm d}x & = & \lim_{M\rightarrow\infty} \int_1^M \frac{1}{x+1}\: {\rm d}x \\ & = & \lim_{M\rightarrow\infty} \left[ \ln(M+1) -\ln(2) \right] =\infty, \end{eqnarray*} and so this improper integral {\bf diverges}. \item this is an improper integral, as the interval of integration is $[5,\infty)$, which is not a closed, bounded interval.
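As a numerical aside on the preceding integral $\int_1^\infty \frac{1}{x+1}\: {\rm d}x$: the partial integrals track the antiderivative and grow without bound, which a quadrature sketch confirms. (The midpoint rule and the cutoffs are illustrative choices, not part of the solution.)

```python
import math

def partial(M):
    # exact partial integral of 1/(x+1) over [1, M]
    return math.log(M + 1) - math.log(2)

def midpoint(M, n=200000):
    # midpoint-rule approximation of the same partial integral
    h = (M - 1) / n
    return sum(h / ((1 + (k + 0.5) * h) + 1) for k in range(n))

# quadrature agrees with the antiderivative...
assert abs(midpoint(100.0) - partial(100.0)) < 1e-3
# ...and the partial integrals increase without any visible bound
values = [partial(10.0**k) for k in range(1, 7)]
assert all(a < b for a, b in zip(values, values[1:]))
```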
So, we evaluate: \begin{eqnarray*} \int_5^\infty \frac{1}{(x-1)^{3/2}}\: {\rm d}x & = & \lim_{M\rightarrow\infty} \int_5^M \frac{1}{(x-1)^{3/2}}\: {\rm d}x \\ & = & \lim_{M\rightarrow\infty} \int_5^M (x-1)^{-3/2}\: {\rm d}x \\ & = & \lim_{M\rightarrow\infty} \left[ -\frac{2}{\sqrt{M-1}} + 1\right] = 1, \end{eqnarray*} and so this improper integral {\bf converges to $1$}. \item this is an improper integral because $1/(9-x)^{3/2}$ is continuous on $[0,9)$ and $\lim_{x\rightarrow 9-} 1/(9-x)^{3/2} =\infty$. So, we evaluate: \begin{eqnarray*} \int_0^9 \frac{1}{(9-x)^{3/2}}\: {\rm d}x & = & \lim_{c\rightarrow 9-} \int_0^c \frac{1}{(9-x)^{3/2}}\: {\rm d}x \\ & = & \lim_{c\rightarrow 9-} \int_0^c (9-x)^{-3/2}\: {\rm d}x \\ & = & \lim_{c\rightarrow 9-} \left[ -\frac{2}{3} +\frac{2}{\sqrt{9-c}} \right] =\infty, \end{eqnarray*} and so this improper integral {\bf diverges}. \item this is an improper integral, since the interval of integration is $(-\infty, -2]$ and so is not a closed, bounded interval. So, we evaluate: \begin{eqnarray*} \int_{-\infty}^{-2} \frac{1}{(x+1)^3}\: {\rm d}x & = & \lim_{M\rightarrow -\infty} \int_{M}^{-2} \frac{1}{(x+1)^3}\: {\rm d}x \\ & = & \lim_{M\rightarrow -\infty} \left[ -\frac{1}{2} \frac{1}{(-2 +1)^2} +\frac{1}{2} \frac{1}{(M+1)^2}\right] = -\frac{1}{2}, \end{eqnarray*} and so this improper integral {\bf converges to $-\frac{1}{2}$}. \item this is an improper integral, since the integrand is not continuous on $[-1,8]$ as it has a discontinuity at $0$. Hence, we can break it up as the sum of two improper integrals: \[ \int_{-1}^8 {\rm d}x/x^{1/3} = \int_{-1}^0 {\rm d}x/x^{1/3} + \int_0^8 {\rm d}x/x^{1/3}, \] and we have that $\int_{-1}^8 {\rm d}x/x^{1/3}$ converges if and only if both $\int_{-1}^0 {\rm d}x/x^{1/3}$ and $\int_0^8 {\rm d}x/x^{1/3}$ converge.
So, we evaluate: \begin{eqnarray*} \int_{-1}^0 \frac{1}{x^{1/3}} {\rm d}x & = & \lim_{c\rightarrow 0-} \int_{-1}^c \frac{1}{x^{1/3}} {\rm d}x \\ & = & \lim_{c\rightarrow 0-} \int_{-1}^c x^{-1/3} {\rm d}x \\ & = & \lim_{c\rightarrow 0-} \left[ \frac{3}{2} c^{2/3} - \frac{3}{2} \right] = -\frac{3}{2}, \end{eqnarray*} and \begin{eqnarray*} \int_0^8 \frac{1}{x^{1/3}} {\rm d}x & = & \lim_{c\rightarrow 0+} \int_c^8 \frac{1}{x^{1/3}} {\rm d}x \\ & = & \lim_{c\rightarrow 0+} \int_c^8 x^{-1/3} {\rm d}x \\ & = & \lim_{c\rightarrow 0+} \left[ \frac{3}{2} 8^{2/3} - \frac{3}{2} c^{2/3} \right] = 6. \end{eqnarray*} Since both these improper integrals converge, we see that the original improper integral $\int_{-1}^8 {\rm d}x/x^{1/3}$ {\bf converges to $\frac{9}{2}$}. \item this is an improper integral, since the interval of integration is $[2,\infty)$ and hence is not a closed, bounded interval. So, we evaluate: \begin{eqnarray*} \int_2^\infty \frac{1}{(x-1)^{1/3}}\: {\rm d}x & = & \lim_{M\rightarrow\infty} \int_2^M \frac{1}{(x-1)^{1/3}}\: {\rm d}x \\ & = & \lim_{M\rightarrow\infty} \int_2^M (x-1)^{-1/3}\: {\rm d}x \\ & = & \lim_{M\rightarrow\infty} \left[ \frac{3}{2} (M-1)^{2/3} - \frac{3}{2} \right] =\infty, \end{eqnarray*} and so this improper integral {\bf diverges}. \item this is an improper integral since the interval of integration is $(-\infty, \infty)$ and hence is not a closed, bounded interval. We evaluate this improper integral by breaking it up as the sum of two improper integrals $\int_{-\infty}^\infty x {\rm d}x/(x^2+4) = \int_{-\infty}^0 x {\rm d}x/(x^2+4) + \int_0^\infty x {\rm d}x/(x^2+4)$, and evaluating the two resulting improper integrals separately. So, \begin{eqnarray*} \int_{-\infty}^0 \frac{x}{x^2+4}\: {\rm d}x & =& \lim_{M\rightarrow -\infty} \int_M^0 \frac{x}{x^2+4}\: {\rm d}x \\ & = & \lim_{M\rightarrow -\infty} \left[ \frac{1}{2}\ln(4) - \frac{1}{2}\ln(M^2 +4)\right] = -\infty.
\end{eqnarray*} Since one of these two improper integrals diverges, we don't need to evaluate the other one, as the original improper integral $\int_{-\infty}^\infty x\: {\rm d}x/(x^2 +4)$ necessarily {\bf diverges}. \item this is an improper integral, as the integrand is continuous on $(0,1]$ and $\lim_{x\rightarrow 0+} e^{\sqrt{x}}/\sqrt{x}= \infty$. So, we evaluate: \begin{eqnarray*} \int_0^1 \frac{e^{\sqrt{x}}}{\sqrt{x}}\: {\rm d}x & = & \lim_{c\rightarrow 0+} \int_c^1 \frac{e^{\sqrt{x}}}{\sqrt{x}}\: {\rm d}x \\ & = & \lim_{c\rightarrow 0+} \left( 2e -2e^{\sqrt{c}}\right) = 2e -2, \end{eqnarray*} and so this improper integral {\bf converges to $2e -2$}. \item this is an improper integral, as the interval of integration is $[1,\infty)$ and so is not a closed, bounded interval. Moreover, the integrand is not defined at $x =1$, with $\lim_{x\rightarrow 1+} 1/(x\ln(x)) =\infty$, and so we need to break this improper integral into the sum of two improper integrals $\int_1^\infty {\rm d}x/x\ln(x) = \int_1^2 {\rm d}x/x\ln(x) + \int_2^\infty {\rm d}x/x\ln(x)$, and evaluate the two resulting improper integrals separately. So, \begin{eqnarray*} \int_1^2 \frac{1}{x\ln(x)} \: {\rm d}x & = & \lim_{c\rightarrow 1+} \int_c^2 \frac{1}{x\ln(x)} \: {\rm d}x \\ & = & \lim_{c\rightarrow 1+} (\ln(\ln(2)) -\ln(\ln(c))) = \infty, \end{eqnarray*} and so this improper integral diverges, and so the original improper integral $\int_1^\infty {\rm d}x/x\ln(x)$ necessarily {\bf diverges}. \end{enumerate} \medskip \noindent {\bf Solution \ref{improper-bizarre}:} We first need to write $\int_{-\infty}^\infty (1+x){\rm d}x/(1+x^2)$ as the sum of two improper integrals, for instance \[ \int_{-\infty}^\infty \frac{1+x}{1+x^2}\: {\rm d}x = \int_{-\infty}^0 \frac{1+x}{1+x^2}\: {\rm d}x + \int_0^\infty \frac{1+x}{1+x^2}\: {\rm d}x, \] and then evaluate the two resulting improper integrals separately.
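Before carrying out the evaluation, the contrast at stake can be sketched numerically: one-sided partial integrals of $\frac{1+x}{1+x^2}$ grow without bound, while the symmetric ones settle toward $\pi$. (The midpoint rule and the cutoffs below are illustrative choices, not part of the solution.)

```python
import math

def integrand(x):
    return (1 + x) / (1 + x**2)

def midpoint(a, b, n=200000):
    # midpoint-rule approximation of the integral of (1+x)/(1+x^2) over [a, b]
    h = (b - a) / n
    return sum(h * integrand(a + (k + 0.5) * h) for k in range(n))

# one-sided: the integral over [0, M] keeps growing, like (1/2) ln(1+M^2)
assert midpoint(0.0, 1000.0) > midpoint(0.0, 100.0) + 2
# symmetric: the integral over [-t, t] approaches pi
assert abs(midpoint(-1000.0, 1000.0) - math.pi) < 0.01
```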
So, \begin{eqnarray*} \int_0^\infty \frac{1+x}{1+x^2}\: {\rm d}x & = & \lim_{M\rightarrow\infty} \int_0^M \frac{1+x}{1+x^2}\: {\rm d}x \\ & = & \lim_{M\rightarrow\infty} \left[ \int_0^M \frac{1}{1+x^2}\: {\rm d}x + \int_0^M \frac{x}{1+x^2}\: {\rm d}x \right] \\ & = & \lim_{M\rightarrow\infty} \left[ (\arctan(M) - \arctan(0)) + \left( \frac{1}{2} \ln(1+M^2) - \frac{1}{2}\ln(1)\right) \right] = \infty, \end{eqnarray*} since $\lim_{M\rightarrow\infty} \ln(1+M^2) =\infty$, and so the original improper integral $\int_{-\infty}^\infty (1+x){\rm d}x/(1+x^2)$ diverges. \medskip \noindent However, when we evaluate $\lim_{t\rightarrow\infty} \int_{-t}^t (1+x){\rm d}x/(1+x^2)$, we get \begin{eqnarray*} \lim_{t\rightarrow\infty} \int_{-t}^t \frac{1+x}{1+x^2}\: {\rm d}x & = & \lim_{t\rightarrow\infty} \left[ \int_{-t}^t \frac{1}{1+x^2}\: {\rm d}x + \int_{-t}^t \frac{x}{1+x^2}\: {\rm d}x \right] \\ & = & \lim_{t\rightarrow\infty} \left[ (\arctan(t) -\arctan(-t)) + \frac{1}{2}\left( \ln(1 +t^2) - \ln(1 + (-t)^2) \right) \right] \\ & = & \lim_{t\rightarrow\infty} 2\arctan(t) =2\cdot\frac{\pi}{2} =\pi, \end{eqnarray*} and so $\lim_{t\rightarrow\infty} \int_{-t}^t (1+x){\rm d}x/(1+x^2)$ converges. (Here, we use that $\arctan(-t) =-\arctan(t)$.) \medskip \noindent {\bf Solution \ref{taylor-exercise}:} \begin{enumerate} \item we start by calculating the derivatives of $f$ at $a =6$: \[ f^{(0)}(6) =f(6) = 455; f^{(1)}(6) =f'(6) = 185; f^{(2)}(6) = 48; f^{(3)}(6) = 6; f^{(n)}(6) = 0 \mbox{ for } n\ge 4. \] Hence, the Taylor series for $f$ centered at $a =6$ is \[ \sum_{n=0}^\infty \frac{1}{n!} f^{(n)}(6) (x-6)^n = 455 + 185 (x-6) + \frac{1}{2} \: 48 (x-6)^2 + \frac{1}{6} \: 6 (x-6)^3. \] The radius of convergence of this series is $\infty$ (using the root test, for instance), and so the interval of convergence is ${\bf R}$. \item we start by calculating that $f^{(n)}(x) = 3^n e^{3x}$ for $n\ge 0$, and so $f^{(n)}(-2) = 3^n e^{-6}$.
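As a numerical sketch of the coefficient formula just computed, the partial sums of $\sum_{n} \frac{3^n e^{-6}}{n!} (x+2)^n$ should track $e^{3x}$; the truncation order $N =40$ and the sample points below are illustrative choices.

```python
import math

def taylor_partial(x, N=40):
    # partial sum of the Taylor series of e^{3x} centered at a = -2,
    # using the coefficients f^(n)(-2)/n! = 3^n e^{-6} / n!
    return sum(3**n * math.exp(-6) / math.factorial(n) * (x + 2)**n
               for n in range(N + 1))

# the partial sums agree with e^{3x} to high accuracy at sample points
for x in (-3.0, -2.0, 0.0, 1.0):
    assert abs(taylor_partial(x) - math.exp(3 * x)) < 1e-8
```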
Hence, the Taylor series for $f$ centered at $a =-2$ is \[ \sum_{n=0}^\infty \frac{1}{n!} f^{(n)}(-2) (x+2)^n = e^{-6} \sum_{n=0}^\infty \frac{3^n}{n!} (x+2)^n. \] The radius of convergence of this series is $\infty$ (using the ratio test, for instance), and so the interval of convergence is ${\bf R}$. \item we start here by recalling that \[ f^{(n)}(x) = \left\{ \begin{array}{ll} \cosh(x) & \mbox{ for } n\mbox{ even, and } \\ \sinh(x) & \mbox{ for } n \mbox{ odd.} \end{array}\right. \] So, we have that $f^{(n)}(1) =\cosh(1) = \frac{1}{2}(e+\frac{1}{e})$ for $n$ even, and $f^{(n)}(1) =\sinh(1) = \frac{1}{2}(e-\frac{1}{e})$ for $n$ odd. Hence, the Taylor series for $f$ centered at $a =1$ is \begin{eqnarray*} \sum_{n=0}^\infty \frac{1}{n!} f^{(n)}(1) (x-1)^n & = & \sum_{k=0}^\infty \frac{1}{(2k)!} f^{(2k)}(1) (x-1)^{2k} + \sum_{k=0}^\infty \frac{1}{(2k+1)!} f^{(2k+1)}(1) (x-1)^{2k+1} \\ & = & \frac{e^2+1}{2e} \sum_{k=0}^\infty \frac{1}{(2k)!} (x-1)^{2k} + \frac{e^2 -1}{2e} \sum_{k=0}^\infty \frac{1}{(2k+1)!} (x-1)^{2k+1}. \end{eqnarray*} The radius of convergence of this series is $\infty$ (using the ratio test, for instance), and so the interval of convergence is ${\bf R}$. \end{enumerate} \end{document}