By Jan S. Hesthaven
Spectral methods are well-suited to solving problems modeled by time-dependent partial differential equations: they are fast, efficient, and accurate, and are widely used by mathematicians and practitioners. This class-tested introduction, the first on the subject, is ideal for graduate courses or self-study. The authors describe the basic theory of spectral methods, allowing the reader to understand the techniques through numerous examples as well as more rigorous developments. They provide a detailed treatment of methods based on Fourier expansions and orthogonal polynomials (including discussions of stability, boundary conditions, filtering, and the extension from the linear to the nonlinear situation). Computational solution techniques for integration in time are dealt with by Runge-Kutta type methods. Several chapters are devoted to material not previously covered in book form, including stability theory for polynomial methods, techniques for problems with discontinuous solutions, round-off errors, and the formulation of spectral methods on general grids. These will be especially helpful for practitioners.
Best computational mathematics books
Emergent Computation: Emphasizing Bioinformatics
Emergent Computation emphasizes the interrelationship of the different classes of languages studied in mathematical linguistics (regular, context-free, context-sensitive, and type 0) with aspects of the biochemistry of DNA, RNA, and proteins. In addition, aspects of sequential machines such as parity checking and semi-groups are extended to the study of the biochemistry of DNA, RNA, and proteins.
Reviews in Computational Chemistry Volume 2
This second volume of the series 'Reviews in Computational Chemistry' explores new applications, new methodologies, and new perspectives. The topics covered include conformational analysis, protein folding, force field parameterizations, hydrogen bonding, charge distributions, electrostatic potentials, electronic spectroscopy, molecular property correlations, and the computational chemistry literature.
Introduction to applied numerical analysis
This book by a prominent mathematician is suitable for a single-semester course in applied numerical analysis for computer science majors and other upper-level undergraduate and graduate students. Although it does not cover actual programming, it focuses on the applied topics most pertinent to science and engineering professionals.
Additional resources for Spectral Methods for Time-Dependent Problems
Example text
Note that the smoother the function, the larger the value of q, and therefore the better the approximation. This is in contrast to finite difference or finite element approximations, where the rate of convergence is fixed, regardless of the smoothness of the function. This rate of convergence is referred to in the literature as spectral convergence. If u(x) is analytic then $\|u^{(q)}\|_{L^2[0,2\pi]} \le C\, q!\, \|u\|_{L^2[0,2\pi]}$, and so $\|u - \mathcal{P}_{2N} u\|_{L^2[0,2\pi]} \le C N^{-q} \|u^{(q)}\|_{L^2[0,2\pi]} \le C \frac{q!}{N^q} \|u\|_{L^2[0,2\pi]}$. Using Stirling's formula, $q!$ ...
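The excerpt breaks off at the Stirling step; the following is a sketch of how the standard argument continues, not a verbatim quotation from the book. Stirling's formula $q! \approx \sqrt{2\pi q}\, q^{q} e^{-q}$ turns the last bound into

$\|u - \mathcal{P}_{2N} u\|_{L^2[0,2\pi]} \le C \frac{q!}{N^{q}} \|u\|_{L^2[0,2\pi]} \le \tilde{C}\,\sqrt{q}\left(\frac{q}{eN}\right)^{q} \|u\|_{L^2[0,2\pi]},$

so choosing q proportional to N (for instance q = N) yields an error decaying like $e^{-cN}$ for some $c > 0$: exponential decay, the hallmark of spectral convergence for analytic functions.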
Historically, the early availability of the fast Fourier transform (FFT), which is highly efficient for $2^p$ points, has motivated the use of the even number of points approach. However, fast methods are now available for an odd as well as an even number of grid points.

A first look at the aliasing error. Let us consider the connection between the continuous Fourier series and the discrete Fourier series based on an even number of grid points.
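As an illustration of the aliasing phenomenon introduced here, the following minimal NumPy sketch (the grid size, mode numbers, and variable names are illustrative choices, not taken from the book) samples the high mode $e^{i(k+N)x}$ on an N-point grid and shows that its energy shows up in the low mode k, since the two modes are indistinguishable at the grid points.

```python
import numpy as np

N = 8                                      # number of equidistant grid points (even here; odd works too)
x = 2 * np.pi * np.arange(N) / N
k, m = 3, 1                                # low mode k, aliased from mode k + m*N

high_mode = np.exp(1j * (k + m * N) * x)   # e^{i(k+N)x} sampled on the coarse grid
coeffs = np.fft.fft(high_mode) / N         # discrete Fourier coefficients

# At the grid points e^{i(k+N)x_j} = e^{i k x_j}, so all the energy appears in mode k.
print(np.round(np.abs(coeffs), 12))        # 1.0 at index k, 0 elsewhere
```

This is exactly the aliasing error: on the grid, mode k + N cannot be distinguished from mode k, so the discrete coefficient of mode k mixes in contributions from modes k ± N, k ± 2N, and so on.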
The differentiation matrix takes us from physical space to physical space, and the act of differentiation is hidden in the matrix itself. The computational cost of the matrix method is the cost of a matrix-vector product, which is an $O(N^2)$ operation, rather than the cost of $O(N \log N)$ in the method using expansion coefficients. However, the efficiency of the FFT is machine dependent and for small values of N it may be faster to perform the matrix-vector product. Also, since the differentiation matrices are all circulant, one need only store one column of the operator, thereby reducing the memory usage to that of the FFT.
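A minimal NumPy sketch of the two routes compared in this paragraph (the grid size, test function, and the helper name fourier_diff are illustrative assumptions): it assembles a Fourier differentiation matrix column by column by applying FFT-based differentiation to the unit vectors, then checks that the $O(N^2)$ matrix-vector product and the $O(N \log N)$ FFT route give the same derivative, and that the matrix is circulant.

```python
import numpy as np

def fourier_diff(u):
    """Differentiate periodic grid values via the FFT: O(N log N) work."""
    N = u.size
    k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers
    return np.fft.ifft(1j * k * np.fft.fft(u)).real

N = 9                                         # odd N avoids special-casing the Nyquist mode
x = 2 * np.pi * np.arange(N) / N

# The differentiation matrix is what the FFT route does to each unit vector,
# so assembling it column by column makes the equivalence explicit.
D = np.column_stack([fourier_diff(e) for e in np.eye(N)])

u = np.exp(np.sin(x))                         # smooth periodic test function
du_matrix = D @ u                             # matrix-vector product, O(N^2)
du_fft = fourier_diff(u)                      # FFT route, O(N log N)

print(np.max(np.abs(du_matrix - du_fft)))     # the two routes agree to rounding error
print(np.allclose(D[:, 1], np.roll(D[:, 0], 1)))  # circulant: each column is a shift of the first
```

Because D is circulant, only its first column needs to be stored, and applying D is a circular convolution, which is essentially how the FFT route earns its $O(N \log N)$ cost.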