Examples of Numerical Approximation of the Inverse Laplace Transform
This script demonstrates using the included Talbot and Euler algorithms for numerical approximations of the inverse Laplace transform. The examples cover functions with known inverses so that the accuracy can easily be assessed.
Note that two versions of each algorithm are included, e.g. talbot_inversion.m and talbot_inversion_sym.m. The "_sym" suffix denotes that these functions use variable precision arithmetic, available in the Symbolic Toolbox™, for much, much greater precision. This is demonstrated below.

Tucker McClure @ The MathWorks
Copyright 2012, The MathWorks, Inc.
A step function is simply 1/s. We can compare the numerical results to the exact results.
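To make the idea concrete outside MATLAB, here is a minimal Python sketch of the fixed-contour Talbot formula from the Abate and Whitt paper cited at the end, applied to the step function 1/s. The function name, signature, and default M = 32 are illustrative choices for this sketch, not the MATLAB API from this demo.

```python
import cmath
import math

def talbot_inversion(F, t, M=32):
    """Approximate the inverse Laplace transform of F at a time t > 0.

    A minimal sketch of Talbot's method in the fixed-contour form given
    by Abate & Whitt (2006). M is the number of terms in the sum.
    """
    total = 0.0
    for k in range(M):
        if k == 0:
            # First contour node and weight.
            delta = 2.0 * M / 5.0
            gamma = 0.5 * cmath.exp(delta)
        else:
            cot = 1.0 / math.tan(k * math.pi / M)
            delta = (2.0 * k * math.pi / 5.0) * (cot + 1j)
            gamma = (1 + 1j * (k * math.pi / M) * (1 + cot**2)
                     - 1j * cot) * cmath.exp(delta)
        total += (gamma * F(delta / t)).real
    return 2.0 / (5.0 * t) * total

# A step function is 1/s in the Laplace domain; its inverse is f(t) = 1.
print(talbot_inversion(lambda s: 1.0 / s, 2.0))  # approximately 1.0
```

The same sketch handles other simple transforms, e.g. 1/(s + 1) recovers exp(-t) to high accuracy.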
Let's try a simple ramp (1/s^2 in the Laplace domain) with more compact notation, defining both the function and the times at which we want it evaluated directly in the call.
Exponentially Decaying Sine
Let's plot the results along with the theoretical values for an exponentially decaying sine function.
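For a quick cross-check of this example outside MATLAB, the third-party mpmath package (assumed available here) provides a numerical inverse-Laplace routine, invertlaplace, with a Talbot option. F(s) = 1/((s + 1)^2 + 1) is the transform of e^(-t) sin(t):

```python
import math
import mpmath

# Transform of f(t) = exp(-t)*sin(t) is F(s) = 1/((s+1)^2 + 1).
F = lambda s: 1 / ((s + 1)**2 + 1)

t = 1.5
numerical = float(mpmath.invertlaplace(F, t, method='talbot'))
exact = math.exp(-t) * math.sin(t)
print(abs(numerical - exact))  # small absolute error
```

Rather than plotting, this just reports the absolute error at one time point; sweeping t over a grid reproduces the comparison plot described above.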
We can try a natural logarithm too.
t = 0
Inverse Laplace transforms aren't defined for t = 0, only for t > 0.
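One way to see why these algorithms cannot evaluate t = 0: the Talbot contour nodes are scaled by 1/t, so t = 0 produces a division by zero. A tiny Python sketch (node formula per Abate & Whitt; M = 32 is an illustrative choice):

```python
# The first Talbot contour node is delta_0 / t with delta_0 = 2*M/5,
# so evaluating at t = 0 divides by zero.
M, t = 32, 0.0
try:
    node = (2.0 * M / 5.0) / t
except ZeroDivisionError:
    print("t = 0 is outside the method's domain")
```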
Sine and Using "M"
Sine oscillates and is a bit trickier on these algorithms, but it works fine here. We specify an "M" parameter -- a higher M yields higher resolution, but if M gets too high, there can be problems. Here we'll use M = 32, and pass this as the third argument to the inversion function.
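In mpmath's implementation (again assuming the third-party mpmath package is available), the analogous knob is the degree option: more terms buy accuracy, at the cost of needing more working precision, which mpmath raises automatically. A sketch for sine:

```python
import math
import mpmath

# Sine: F(s) = 1/(s^2 + 1), f(t) = sin(t). 'degree' plays the role of M here.
F = lambda s: 1 / (s**2 + 1)

t = 3.0
approx = float(mpmath.invertlaplace(F, t, method='talbot', degree=32))
print(abs(approx - math.sin(t)))  # small absolute error
```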
Cosine and Low M
When M isn't high enough, we see numerical problems.
Cosine and Good M Selection
Increasing M allows us to increase the number of periods we can compute.
Cosine and Large M Difficulty with Double Precision
We can't just set M arbitrarily high because the numerical precision required is greater than what doubles can provide.
Cosine and Large M Accuracy with Variable Precision Arithmetic
Here, we need cosine calculated out very far. This is not possible with doubles. Therefore, we use the symbolic implementation of Talbot's method (the version that ends with "_sym") and simply specify the required M. The symbolic implementations are capable of arbitrary precision by using the "vpa" function. Note that this takes much longer but might be the only way to solve some problems. Variable precision arithmetic (and therefore this function) requires the Symbolic Toolbox.
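As a rough analogue of what vpa does, Python's standard-library decimal module also lets you dial the working precision well beyond double precision. This is only a stand-in to illustrate the idea of variable precision arithmetic, not a substitute for the Symbolic Toolbox:

```python
from decimal import Decimal, getcontext

# Raise the working precision to 50 significant digits, far beyond the
# roughly 16 digits of double precision (a stand-in for MATLAB's vpa).
getcontext().prec = 50
one_seventh = Decimal(1) / Decimal(7)
print(one_seventh)  # 0.142857142857... carried to 50 significant digits
```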
That's it! If you have an arbitrary function derived in the Laplace domain, you can use these methods to determine its response over time.
For most realistic, difficult problems that people address, it's likely that the symbolic implementations are the best resource, despite the increased run time. These implementations are possible primarily due to the vpa function mentioned above. How does this work? Let's suppose we want to use the binomial theorem for some very large numbers. This involves large factorials. For n choose k with normal double precision, we get:
NaN! The calculation breaks down with huge products on top and bottom, resulting in numerical noise when using double precision. But instead let's make these numbers symbolic. Then the factorials can be carried out symbolically, allowing common terms on top and bottom to be canceled out. Then we can evaluate at the very end to the desired precision. (The binomial coefficient calculation here results in an integer, so we won't actually see 32 significant digits; everything after the decimal will be 0.) But we do get precisely 200, without round-off errors.
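The same breakdown is easy to reproduce in Python, where exact integer arithmetic plays the role of the symbolic cancellation. The choice of 200 choose 100 below is an illustrative assumption, not necessarily the numbers used in the original demo:

```python
import math

def factorial_double(n):
    """n! accumulated in double-precision floats; overflows to inf for n >= 171."""
    result = 1.0
    for i in range(2, n + 1):
        result *= float(i)
    return result

# 200 choose 100 in double precision: inf / (inf * inf) yields nan.
n, k = 200, 100
double_version = factorial_double(n) / (factorial_double(k) * factorial_double(n - k))
print(double_version)  # nan

# Exact integer arithmetic (the analogue of carrying factorials symbolically):
exact_version = math.factorial(n) // (math.factorial(k) * math.factorial(n - k))
print(exact_version == math.comb(n, k))  # True
```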
That worked as expected. The vpa function is actually used on binomial coefficients in the symbolic Euler implementation for precisely this type of exact answer, albeit in a more complicated equation. Here's a snippet showing the use of vpa to evaluate code that is handled symbolically. Note that this uses vectorization, complex numbers, and elementwise operations, and all of it can be handled symbolically gracefully!

```matlab
% Binomial function
bnml = @(n, z) factorial(n)/(factorial(z)*factorial(n-z));

xi = sym([0.5, ones(1, M), zeros(1, M-1), 2^-sym(M)]);
for k = 1:M-1
    xi(2*M-k + 1) = xi(2*M-k + 2) + 2^-sym(M) * bnml(sym(M), sym(k));
end
k    = sym(0:2*M);                              % Iteration index
beta = vpa(sym(M)*log(sym(10))/3 + 1i*pi*k, P);
eta  = vpa((1-mod(k, 2)*2) .* xi, P);
```
If you wish to understand these methods in more detail, be sure to look at this great summary of these techniques.
Abate, Joseph, and Ward Whitt. "A Unified Framework for Numerically Inverting Laplace Transforms." INFORMS Journal on Computing 18.4 (2006): 408-421. Print.