Power Series Solutions to Differential Equations
There are many techniques for finding solutions to different kinds of differential equations, but once you get to second-order (or higher) equations, most such equations do not have solutions that can be expressed in terms of elementary functions.
Fortunately, there is a method that works for the large majority of such equations. I’ll go through an example that you could solve by other techniques just so you can check that the method gives the correct answers.
Things you will need to know in this section:
Note especially this very important theorem:
Term-wise calculus on an infinite series:
If a power series for the function f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ⋯ converges absolutely on an interval, then the series can be integrated or differentiated term by term on that interval, so that (for instance) f'(x) = a_1 + 2 a_2 x + 3 a_3 x^2 + ⋯.
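As a quick numerical sanity check of term-wise differentiation, here is a sketch in Python (the notes themselves work in Mathematica), using the exponential series as the example:

```python
import math

# Term-wise differentiation, checked numerically on the series for e^x:
# f(x) = sum_{k=0}^{N-1} x^k / k!, whose term-by-term derivative is
# sum_{k=1}^{N-1} k x^(k-1) / k! = sum x^(k-1)/(k-1)! -- the same series.
N = 20

def f(x):
    return sum(x**k / math.factorial(k) for k in range(N))

def f_prime(x):
    # differentiate each term x^k/k! separately and add the results
    return sum(k * x**(k - 1) / math.factorial(k) for k in range(1, N))

x = 0.5
print(f(x), f_prime(x), math.exp(x))  # all three agree to machine precision
```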
In general, we can assume that most differential equations have a power series solution of the form y = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ⋯.
One useful approach is to see if we can't determine a coefficient when we know the previous one. If we can find such a recursion relationship (something like a_{k+1} = f(k, a_k)), then we can calculate the whole sequence of coefficients.
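To see what a recursion relationship buys us, here is a tiny Python sketch; the particular rule a_{k+1} = a_k/(k+1) used below is just an illustrative assumption (it happens to be the one the worked example in these notes produces):

```python
# Given a starting coefficient a_0 and a recursion a_{k+1} = f(k, a_k),
# we can churn out as many coefficients as we like.
def coefficients(a0, n, f):
    a = [a0]
    for k in range(n - 1):
        a.append(f(k, a[-1]))
    return a

# Illustrative recursion (an assumption for demo purposes):
coeffs = coefficients(1.0, 6, lambda k, ak: ak / (k + 1))
print(coeffs)  # a_k = 1/k!, the coefficients of the series for e^x
```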
To accomplish this, we rely upon a straightforward requirement: the power series solution must satisfy the original differential equation (duh!).
Thus, we begin by assuming there is a power-series solution to the given differential equation. We write its general form, and then we substitute that general expression into the original differential equation, taking derivatives and multiplying by things as required by the original equation. Here’s how it works.
Oh, and btw: this approach can be used either with an actual infinite series to get an exact answer to the equation (assuming one exists), or with a finite series of a given number of terms in order to approximate a solution numerically to any desired accuracy. Since for most people it's easier to work with finite polynomials than with infinite series, we'll start with the second approach.
Example: Use a power-series approach to find a solution to the following initial-value problem:
y - y' = 0;  y(0) = 1
We start by assuming a polynomial solution y(x) of some number of terms:
Now, we find the other terms in the equation, in this case y'. To do that, of course, we simply differentiate term by term:
Now, for the difference of two polynomials to be equal to zero for every value of the variable, the difference of the coefficients of each power of the variable must be zero. Make sure you see why this is true, since this observation is the key to the entire process!
Notice that there is still an arbitrary constant, a0. That shouldn’t surprise you since this is a first-order equation—think of it as a constant of integration. We can now solve for it using the initial value condition we were given.
Here’s what you should end up with when you do:
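As a sketch of the bookkeeping (the notes do this by hand and in Mathematica), the whole finite computation fits in a few lines of Python:

```python
import math

# y - y' = 0 with a nine-term polynomial y = a_0 + a_1 x + ... + a_8 x^8.
# Matching the coefficient of x^k on both sides gives a_k = (k+1) a_{k+1},
# and the initial condition y(0) = 1 gives a_0 = 1.
a = [1.0]                      # a_0 from the initial condition
for k in range(8):
    a.append(a[k] / (k + 1))   # a_{k+1} = a_k / (k+1)

print(a)  # a_k = 1/k!, i.e. the Taylor coefficients of e^x
```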
Now, I originally told you I’d pick an equation you could solve explicitly so the solution could be checked.
The question, of course, is whether the answer you got by hand is the same as the power series we obtained above. While you actually (ahem!) should know the answer to that already, if you don’t, Mathematica has a command that will help refresh your memory. It’s called Series[ ], and it will give you the power series equivalent of any elementary function for as many terms as you like. It also includes, as its last term, an estimation of the order of magnitude of the remaining terms of the infinite series that it’s leaving out. We don’t very often want that information when we’re actually solving and plotting solutions of differential equations, so we can leave it off by putting Series[ ] inside a Normal[ ] command.
which may look a tad more familiar if you remember that 24 = 4!....
Obviously, you’ve not “solved” the original differential equation with a nine-term power series.
You should have, though, approximated it pretty closely:
As you should expect, of course, the approximation gets worse as you get farther and farther away from the point at which the series is generated (in this case, zero).
Once you've gotten all the way to x = 2, the error is already more than 0.02%! (So you can see why people who use numerical algorithms don't worry too much about "exact" answers in any problem that will ultimately result in something being built or measured in "the real world." While there are some places where a tolerance of 0.02% is too large, they're very few and far between.)
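That error figure is easy to reproduce with a quick check in Python:

```python
import math

# Relative error of the nine-term series for e^x at x = 2:
# p(2) = sum_{k=0}^{8} 2^k / k!  versus the exact value e^2.
p = sum(2.0**k / math.factorial(k) for k in range(9))
rel_error = abs(math.exp(2.0) - p) / math.exp(2.0)
print(rel_error)  # roughly 0.00024, i.e. a bit over 0.02 %
```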
Now, there’s a different way to look at the accuracy of an approximation. Instead of looking at how accuracy diminishes as the point at which we apply an approximation gets farther away from the point at which we created it, we can look at how increasing the number of terms of an approximation increases its accuracy at a given point.
We can even combine both those measures of accuracy graphically in the following manner. Doing so will be easiest in Mathematica. First, define the series as a two-variable expression, so we can plot the approximation at various values of x as well as for various numbers of terms n. Look at the plot below, where I have plotted ex in red and a two-term and three-term approximation (in black and blue, respectively). Notice two things: a) the approximations are both better closer to zero (where they are derived) than farther away; b) the blue curve is closer to the red curve than the black one over the whole plotted domain (not surprisingly, more terms give better results).
You can see how adding more terms works with the sin x function, and it also becomes clear that since we are using polynomial approximations, once an approximation starts going wrong, it heads off to infinity quickly, just as a high-order polynomial can be expected to do. In the graph below, again, the red curve is the actual sine curve, and the other colored curves are polynomial approximations with increasing numbers of terms (and hence of increasing order). You should, just by looking at the graph, be able to put the curves in order of increasing number of terms. I'll start you off: black, blue…
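Both effects are easy to quantify numerically; here is a Python sketch using the standard Taylor series for sin x:

```python
import math

# Partial sums of sin x = x - x^3/3! + x^5/5! - ...
def sin_approx(x, n_terms):
    return sum((-1)**k * x**(2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n_terms))

# More terms -> smaller error at a fixed point (here x = pi):
for n in (2, 4, 6):
    print(n, abs(math.sin(math.pi) - sin_approx(math.pi, n)))

# Far from 0, a truncated polynomial blows up like its leading term:
print(abs(math.sin(10.0) - sin_approx(10.0, 6)))  # error around a thousand
```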
Back to our series-solution process….
For those of you who don't think our 24-decimal-place approximation to e^x is an acceptable solution: since this particular equation does have an exact, elementary solution, you can in fact find it by a modification of the basic process.
The only difference from the method used to find an approximate answer involves using summation notation so we can write out the infinite series that we approximated in the previous process. And, taking derivatives and setting coefficients equal is just a bit trickier when there are an infinite number of them….
Unfortunately (or fortunately, for you purists), Mathematica isn’t a great deal of help here:
It seems promising…
until you try to work with it….
y[x] - y'[x]
Neither Factor[ ] nor Expand[ ] nor Collect[ ] seems to help a great deal. So, we’ll do it by hand:
To get to the point where we can set corresponding coefficients equal to one another (which is still the key to the solution), we need to rewrite the infinite series so that they have the same power of x in corresponding places. Our first one has x^i and the second one has x^{i-1}, so we have to make an adjustment.
We’ll rewrite the first one as follows:
Now, we can work with the series. First, let’s write out the equation to be solved using our assumed solution and start factoring and collecting terms:
If the coefficient of each term of the series must be zero, then
a_{i-1} - i·a_i = 0
for every value of the index i. This condition is equivalent to saying that a_{i-1} = i·a_i for each value of i. A few results are presented in the following table, where the first column simply uses the recursion formula just stated and the second column puts the result into a form that should look familiar:
a_0 = 1·a_1    so    a_1 = a_0/1!
a_1 = 2·a_2    so    a_2 = a_0/2!
a_2 = 3·a_3    so    a_3 = a_0/3!
a_3 = 4·a_4    so    a_4 = a_0/4!
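The pattern in the table telescopes: unwinding the recursion a_{i-1} = i·a_i all the way down gives a closed form for every coefficient:

```latex
a_i \;=\; \frac{a_{i-1}}{i} \;=\; \frac{a_{i-2}}{i\,(i-1)} \;=\;\cdots\;=\; \frac{a_0}{i!},
\qquad\text{so}\qquad
y(x) \;=\; a_0 \sum_{i=0}^{\infty} \frac{x^i}{i!} \;=\; a_0\, e^{x}.
```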
Notice that since we now have an infinite series and not a finite one, our solution below will be an exact solution to the differential equation.
where a_0 is a constant determined by the initial conditions of the problem. BTW, notice that this infinite series would be an exact solution even if it weren't elementary! As it is, you probably recognize it as the power series for e^x, so it is both exact and elementary. The fact that it's exact, however, is much more important than the fact that it's elementary.
Many differential equations (most second-and-higher-order ones, in fact) do not have elementary solutions. However, the general method described above works for a very wide array of differential equations that meet some fairly modest conditions for continuity. Sometimes, in order to meet specific conditions of a specific problem, one has to tweak the method a bit. And sometimes, you’ll only be able to find a recursive formula for the coefficients (like the table) and not an nth-term one (like the explicit power-series solution below it).
But in general, you’ll be able to write as many terms of the solution to most differential equations as you have patience to figure out. Interestingly, in the 19th century this was a very fertile ground for mathematical research, and literally hundreds of such equations were investigated. Many of them have great importance in various physics problems, and those have often been named after their principal investigators and their solutions worked out in great detail.
Before going on, it would be a really good idea to make sure you can solve the problem below. It has an elementary solution, so you’ll be able to check your work.
y’’ + y = 0; y(0) = 1, y’(0) = 1
Note: Since there’s a second derivative in this problem, at some point you’ll need to change indices on a second derivative. In case you have problems with that, here’s how it works:
Start off just like before:
You're gonna wanna (note colloquialism) rewrite this last expression as follows, because we need all our sigmas to have the same power of x in order to be able to put the appropriate coefficients together:
Reindexed Series 3
If you can't see where that comes from, go back to the original expression for the second derivative (Series 3) and make the substitution i = j + 2, which of course means that j = i - 2.
Now, just go back and replace the j’s by i’s, and you’ve got Reindexed Series 3.
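In symbols, the reindexing just described is the substitution j = i - 2 applied under the sigma:

```latex
y'' \;=\; \sum_{i=2}^{\infty} i\,(i-1)\,a_i\,x^{\,i-2}
\;=\; \sum_{j=0}^{\infty} (j+2)(j+1)\,a_{j+2}\,x^{\,j}
\;=\; \sum_{i=0}^{\infty} (i+2)(i+1)\,a_{i+2}\,x^{\,i}.
```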
At this point, when you add y’’ and y, you’ll be able to combine the series and then do the necessary factoring to find a recursion formula for the coefficients.
An equation that turns out to be of interest in the physics of light is the following one:
; y(0) = 1, y’(0)=0
Use the techniques you developed above to find a general power-series solution to the equation.
You should eventually come up with the following recurrence relation:
Because a_{i+2} and a_{i-1} are three apart, the coefficients are determined in three different series. I'll do the first one. If you start with a_0, then the first coefficient the recursion formula gives you is a_3.
Depending on how good you are at finding patterns, the sequence above might suggest the following general formula:
There's another formula for a_{3n+1} that you should try to come up with. The third formula is easy because it turns out that a_2 = 0, so all the coefficients that come from a_2 (like a_5, a_8, a_{11}, and so on) are also 0.
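The three interleaved series are easy to generate numerically. Here is a Python sketch, assuming a recurrence of the form a_{i+2} = a_{i-1}/((i+1)(i+2)) with a_{-1} = 0 (the exact constants and sign here are my assumption, since the recurrence display isn't reproduced above; substitute the one you derived):

```python
# Hypothetical sketch: generate coefficients from an assumed recurrence
# a_{i+2} = a_{i-1} / ((i+1)*(i+2)), taking a_{-1} = 0, with the given
# initial data y(0) = 1, y'(0) = 0, i.e. a_0 = 1 and a_1 = 0.
N = 12
a = [0.0] * (N + 1)
a[0], a[1] = 1.0, 0.0
for i in range(N - 1):
    prev = a[i - 1] if i >= 1 else 0.0   # a_{i-1}, with a_{-1} = 0
    a[i + 2] = prev / ((i + 1) * (i + 2))

print([a[3 * n] for n in range(4)])  # the a_0-series: a_0, a_3, a_6, a_9
print(a[2], a[5], a[8], a[11])       # the a_2-series is identically zero
```

With these numbers in hand you can check any closed-form pattern you conjecture against the computed values.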
You should define the appropriate function in Mathematica so you can plot various polynomial approximations to the infinite series solution. For that purpose, defining the coefficients as follows will be a huge help:
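The notes set this up in Mathematica; as an illustrative analogue (same caveat as before: the recurrence a_i = a_{i-3}/((i-1)·i) is my assumption, carried over from the sketch above), a memoized coefficient function in Python looks like this:

```python
from functools import lru_cache

# Hypothetical Python analogue of the coefficient definition: a memoized
# recursive function, assuming a_i = a_{i-3} / ((i-1)*i) with the
# initial data a_0 = 1, a_1 = 0 (and hence a_2 = 0).
@lru_cache(maxsize=None)
def a(i):
    if i == 0:
        return 1.0
    if i in (1, 2):
        return 0.0
    return a(i - 3) / ((i - 1) * i)

def y_approx(x, n_terms):
    """Partial sum a_0 + a_1 x + ... + a_{n-1} x^(n-1)."""
    return sum(a(i) * x**i for i in range(n_terms))

print(y_approx(1.0, 30))  # series value near x = 1
```

Memoization matters here: without it, each coefficient would be recomputed from scratch every time the partial sum is evaluated.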
Here are a few polynomial approximations plotted on the same set of axes for the first series of terms:
Here’s an approximation to n = 100:
There is another solution, with initial conditions y(0) = 0, y'(0) = 1. You should find the first three non-zero terms of that solution and plot them. Here's a plot with a large number of terms.