Use Taylor Expansion Again to Derive the Improved Error Bound
Taylor Series
While a Taylor series of the first or second order is usually sufficient for a wide range of combustion responses, there exist situations in which the response is of higher order within the model parameter space.
From: Computer Aided Chemical Engineering, 2019
Taylor Series
Qingkai Kong, ... Alexandre M. Bayen, in Python Programming and Numerical Methods, 2021
18.4 Summary and Problems
18.4.1 Summary
- 1.
-
Some functions can be perfectly represented by a Taylor series, which is an infinite sum of polynomials.
- 2.
-
Functions that have a Taylor series expansion can be approximated by truncating their Taylor series.
- 3.
-
The linear approximation is a common local approximation for functions.
- 4.
-
The truncation error can be estimated using the Taylor Remainder Estimation Theorem.
- 5.
-
Be mindful of the round-off error in the Taylor series.
18.4.2 Problems
- 1.
-
Use Taylor series expansions to show that , where .
- 2.
-
Use the linear approximation of around to show that for small x.
- 3.
-
Write the Taylor series expansion for around . Write a function my_double_exp(x, n), which computes an approximation of using the first n terms of the Taylor series expansion. Be sure that my_double_exp can take array inputs.
- 4.
-
Write a function that gives the Taylor series approximation to the np.exp function around 0 for orders 1 through 7. Calculate the truncation error bound for order 7.
- 5.
-
Compute the fourth order Taylor expansions for sin(x), cos(x), and sin(x)cos(x) around 0. Which produces a smaller error for sin(x)cos(x): computing the Taylor expansions for sin and cos separately and then multiplying the results together, or computing the Taylor expansion for the product first and then plugging in x?
- 6.
-
Use the fourth order Taylor series to approximate and determine the truncation error bound.
- 7.
-
Write a function my_cosh_approximator(x, n) where the output is the nth order Taylor series approximation for cosh(x), the hyperbolic cosine of x, taken around . You may assume that x is an array, and n is a positive integer (including zero). Recall that cosh(x) = (e^x + e^(-x))/2. (A sketch of one possible implementation is given after this list.)
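For Problem 7, a minimal Python sketch is shown below. It assumes the expansion point is 0 and that the nth order approximation keeps every term of the cosh Maclaurin series up to degree n; both are assumptions, since the expansion point and the series itself are not reproduced above.

```python
import numpy as np
from math import factorial

def my_cosh_approximator(x, n):
    """n-th order Taylor approximation of cosh(x), assumed to be taken around 0.

    Only even powers contribute, because the odd derivatives of cosh vanish at 0.
    """
    x = np.asarray(x, dtype=float)
    approx = np.zeros_like(x)
    for k in range(0, n + 1, 2):          # even powers x^0, x^2, ..., up to degree n
        approx += x**k / factorial(k)
    return approx

# quick check against NumPy's cosh (illustrative values only)
x = np.linspace(-2, 2, 5)
print(my_cosh_approximator(x, 4))   # 1 + x^2/2 + x^4/24
print(np.cosh(x))                   # reference values
```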
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128195499000282
Description of Elements
Zhuming Bi, in Finite Element Analysis Applications, 2018
3.6.1 Taylor expansion
A Taylor series is a series of polynomial terms used to approximate a function f(x) at a point x = a. A one-dimensional Taylor series is given by
(3.25)  f(x) = f(a) + f'(a)(x − a) + f''(a)(x − a)^2/2! + ⋯ + f^(n)(a)(x − a)^n/n! + ⋯
If a is set to 0, the corresponding Taylor expansion is known as a Maclaurin series,
(3.26)  f(x) = f(0) + f'(0)x + f''(0)x^2/2! + ⋯ + f^(n)(0)x^n/n! + ⋯
Taylor's theorem states that any function f(x) satisfying certain conditions can be expressed as a Taylor series: assuming f^(n)(0) (n = 1, 2, 3, …) is finite and |x| < 1, the higher-order terms become less and less significant in comparison with the lower-order terms when n is small.
Therefore, a function over the continuous domain of x can be approximated as,
(3.27)  f(x) ≈ C_0 + C_1 x + C_2 x^2 + ⋯ + C_n x^n
where C_i = f^(i)(0) (i = 1, 2, …, n) are the constants determined by initial conditions, and n is an integer giving the number of constraints applied on the function.
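As a minimal illustration of a truncated series of this kind, the sketch below keeps the first few Maclaurin terms of exp(x) and compares them with the exact value; the choice of exp as the test function and of the truncation orders is an assumption of the example, not part of the chapter.

```python
import numpy as np
from math import factorial

def truncated_maclaurin_exp(x, n):
    """Approximate exp(x) by its first n+1 Maclaurin terms, sum_{i=0}^{n} x**i / i!
    (for exp, every derivative at 0 equals 1)."""
    return sum(x**i / factorial(i) for i in range(n + 1))

x = 0.5
for n in (1, 2, 4, 8):
    approx = truncated_maclaurin_exp(x, n)
    print(n, approx, abs(approx - np.exp(x)))   # the error shrinks as more terms are kept
```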
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128099520000030
Low-level feature extraction (including edge detection)
Mark S. Nixon, Alberto S. Aguado, in Feature Extraction & Image Processing for Computer Vision (Third Edition), 2012
4.2.1.2 Analysis of the basic operators
Taylor series analysis reveals that differencing adjacent points provides an estimate of the first-order derivative at a point. If the difference is taken between points separated by Δx, then by Taylor expansion for f(x+Δx) we obtain:
(4.5)  f(x + Δx) = f(x) + Δx·f'(x) + (Δx^2/2!)·f''(x) + O(Δx^3)
By rearrangement, the first-order derivative f′(x) is:
(4.6)  f'(x) = (f(x + Δx) − f(x))/Δx + O(Δx)
This shows that the difference between adjacent points is an estimate of the first-order derivative, with error O(Δx). This error depends on the size of the interval Δx and on the complexity of the curve. When Δx is large this error can be significant. The error is also large when the high-order derivatives have large values. In practice, the short sampling of image pixels and the reduced high-frequency content make this approximation adequate. However, the error can be reduced by spacing the differenced points by one pixel. This is equivalent to computing the first-order difference delivered by Eq. (4.1) at two adjacent points, as a new horizontal difference Exx where
(4.7)
This is equivalent to incorporating spacing to detect the edges Exx by:
(4.8)
To analyze this, again by Taylor series, we expand f(x−Δx) as:
(4.9)  f(x − Δx) = f(x) − Δx·f'(x) + (Δx^2/2!)·f''(x) − O(Δx^3)
By subtracting Eq. (4.9) from Eq. (4.5), we obtain the first-order derivative as
(4.10)  f'(x) = (f(x + Δx) − f(x − Δx))/(2Δx) + O(Δx^2)
Equation (4.10) suggests that the estimate of the first-order difference is now the difference between points separated by one pixel, with error O(Δx^2). If Δx<1, this error is clearly smaller than the error associated with differencing adjacent pixels, in Eq. (4.6). Again, averaging has reduced noise or error. The template for a horizontal edge-detection operator is given in Figure 4.4(a). This template gives the vertical edges detected at its center pixel. A transposed version of the template gives a vertical edge-detection operator (Figure 4.4(b)).
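The two error orders can be checked numerically. The sketch below compares the one-sided difference of Eq. (4.6) with the centred difference of Eq. (4.10) on a smooth test function; the choice of sin and of the sample point is an illustrative assumption.

```python
import numpy as np

f, fprime = np.sin, np.cos   # test function and its exact derivative
x0 = 1.0                     # arbitrary sample point

for dx in (0.1, 0.01, 0.001):
    forward = (f(x0 + dx) - f(x0)) / dx                 # Eq. (4.6): error O(dx)
    central = (f(x0 + dx) - f(x0 - dx)) / (2 * dx)      # Eq. (4.10): error O(dx^2)
    print(dx, abs(forward - fprime(x0)), abs(central - fprime(x0)))
```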
The Roberts cross operator (Roberts, 1965) was one of the earliest edge-detection operators. It implements a version of basic first-order edge detection and uses two templates that differentiate pixel values in a diagonal manner, as opposed to along the axes' directions. The two templates are called M+ and M− and are given in Figure 4.5.
In implementation, the maximum value delivered by application of these templates is stored as the value of the edge at that point. The edge point E x,y is then the maximum of the two values derived by convolving the two templates at an image point P x,y :
(4.11)  E_x,y = max{ |M+ ∗ P_x,y|, |M− ∗ P_x,y| }
The application of the Roberts cross operator to the image of the square is shown in Figure 4.6. The results of the two templates are shown in Figure 4.6(a) and (b), and the result delivered by the Roberts operator is shown in Figure 4.6(c). Note that the corners of the square now appear in the edge image, by virtue of the diagonal differencing action, whereas they were less apparent in Figure 4.2(d) (where the top left corner did not appear).
An alternative to taking the maximum is simply to add the results of the two templates together to combine horizontal and vertical edges. There are of course more varieties of edges, and it is often better to consider the two templates as providing components of an edge vector: the strength of the edge along the horizontal and vertical axes. These give components of a vector and can be added in a vectorial way (which is perhaps more usual for the Roberts operator). The edge magnitude is the length of the vector and the edge direction is the vector's orientation, as shown in Figure 4.7.
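A minimal NumPy sketch of the Roberts cross operator is given below. Since Figure 4.5 is not reproduced here, the exact sign convention of the two diagonal templates is an assumption; both the maximum combination of Eq. (4.11) and the vectorial (magnitude) combination are shown.

```python
import numpy as np

def roberts_cross(image, combine="max"):
    """Apply the two 2x2 diagonal Roberts templates and combine their responses."""
    img = np.asarray(image, dtype=float)
    m_plus  = img[:-1, :-1] - img[1:, 1:]    # difference along one diagonal
    m_minus = img[:-1, 1:]  - img[1:, :-1]   # difference along the other diagonal
    if combine == "max":
        return np.maximum(np.abs(m_plus), np.abs(m_minus))   # Eq. (4.11) style
    return np.hypot(m_plus, m_minus)                         # vectorial magnitude

# toy image: a bright square on a dark background (illustration only)
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
print(roberts_cross(img))
```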
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780123965493000045
Overview of uncertainties in modern power systems: uncertainty models and methods
Mohamed Ebeed, Shady H. E. Abdel Aleem, in Uncertainties in Modern Power Systems, 2021
3.1.2.3.3 Taylor series expansion
Mathematically, the Taylor series (TS) is a series expansion of a function into an infinite sum of terms; it can be applied to approximate a complex function while providing quantitative estimates of the error in this approximation. In Ref. [90], the TS expansion method was applied to the linearization of the cartesian coordinate formulation of nodal load flow equations for power system state estimation. The authors in Ref. [91] proposed TS expansion of the Markov chain stationary distribution to propagate parametric uncertainty to reliability and performability indices in Markov reliability models. Zhao et al. proposed the TS approximation and MCS combined with nonparametric probability density to consider parameter uncertainties [92]. Generally, in the case of a function that has several random variables, . By application, TS can be applied to expand the function around the value as follows:
(1.37)
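The chapter's multivariate expansion (Eq. 1.37) is not reproduced above, so the sketch below only illustrates the common first-order special case of Taylor-series uncertainty propagation (the so-called delta method) for independent inputs; the function g and the input statistics are illustrative assumptions.

```python
import numpy as np

def g(x):
    """Illustrative response function of three random inputs."""
    return x[0] * np.sqrt(x[1]) + x[2] ** 2

mu    = np.array([2.0, 4.0, 1.0])    # assumed mean values of the random inputs
sigma = np.array([0.1, 0.2, 0.05])   # assumed standard deviations (inputs independent)

# numerical partial derivatives of g at the mean (central differences)
eps = 1e-6
grad = np.array([(g(mu + eps * e) - g(mu - eps * e)) / (2 * eps) for e in np.eye(3)])

# first-order Taylor estimates: mean ~ g(mu), variance ~ sum_i (dg/dx_i)^2 * var(x_i)
var_g = np.sum((grad * sigma) ** 2)
print(g(mu), np.sqrt(var_g))
```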
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128204917000013
Numerical Computation
A. Kayode Coker, in Fortran Programs for Chemical Process Design, Analysis, and Simulation, 1995
Euler and Modified Euler Methods
Using the Taylor series,
(1-118)  y(x0 + h) = y(x0) + h·y'(x0) + (h^2/2!)·y''(x0) + ⋯
The value of y(x0) is given by the initial condition, and y'(x0) is evaluated from f(x0, y0), as given by the differential equation
The Euler method can be expressed as:
(1-119)  y_(n+1) = y_n + h·f(x_n, y_n)
For the modified Euler method, we expand the Taylor series as
(1-120)  y_(n+1) = y_n + h·y'_n + (h^2/2!)·y''_n + ⋯
Replacing the second derivative by the forward difference approximation for f', that is
having an error of O(h), we have
(1-121)  y''_n ≈ (f_(n+1) − f_n)/h
(1-122)  y_(n+1) = y_n + (h/2)·(f_n + f_(n+1)), where f_n denotes f(x_n, y_n).
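A minimal Python sketch of the two schemes follows; the predictor-corrector form is used to evaluate f at the new point, and the test problem dy/dx = −2y with y(0) = 1 is an illustrative choice, not one from the chapter.

```python
import numpy as np

def euler_step(f, x, y, h):
    """One Euler step, y_{n+1} = y_n + h*f(x_n, y_n)."""
    return y + h * f(x, y)

def modified_euler_step(f, x, y, h):
    """One modified Euler (Heun) step: average the slopes at x_n and x_{n+1}."""
    y_pred = y + h * f(x, y)                            # Euler predictor
    return y + 0.5 * h * (f(x, y) + f(x + h, y_pred))   # trapezoidal corrector

f = lambda x, y: -2.0 * y          # test ODE dy/dx = -2y, exact solution exp(-2x)
h, x, y_e, y_me = 0.1, 0.0, 1.0, 1.0
for _ in range(10):                # integrate from x = 0 to x = 1
    y_e  = euler_step(f, x, y_e, h)
    y_me = modified_euler_step(f, x, y_me, h)
    x += h

print(y_e, y_me, np.exp(-2.0 * x))  # the modified Euler value is closer to the exact one
```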
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780884152804500029
Nonlinear Optics
Francisco J. Duarte, in Tunable Laser Optics, 2003
8.2.3 The Refractive Index as a Function of Intensity
Using a Taylor series to expand an expression for the refractive index yields
(8.57)  n = n0 + (∂n/∂I)·I + (1/2!)·(∂^2n/∂I^2)·I^2 + ⋯
Neglecting the second-order and higher terms, this expression reduces to
(8.58)  n = n0 + (∂n/∂I)·I
where n0 is the normal weak-field refractive index, defined in Chapter 12 for various materials. The quantity (∂n/∂I) is not dimensionless and has units that are the inverse of the laser intensity, or W^(−1) cm^2. Using polarization arguments this derivative can be expressed as (Boyd, 1992)
(8.59)
This quantity is known as the second-order index of refraction and is traditionally referred to as n2. Setting ∂n/∂I = n2, Eq. (8.58) can be restated in its usual form as
(8.60)  n = n0 + n2·I
The change in refractive index as a function of laser intensity is known as the optical Kerr effect. For a description of the electro-optical Kerr effect, the reader should refer to Agrawal (1995).
A well-known consequence of the optical Kerr effect is the phenomenon of self-focusing. This results from the propagation of a laser beam with a near-Gaussian spatial intensity profile, since, according to Eq. (8.60), the refractive index at the center of the beam is higher than the refractive index at the wings of the beam. This results in an intensity-dependent lensing effect, as illustrated in Fig. 8.6.
The phenomenon of self-focusing, or intensity-dependent lensing, is important in ultrafast lasers or femtosecond lasers (Diels, 1990; Diels and Rudolph, 1996), where it gives rise to what is known as Kerr lens mode locking (KLM). This is applied to spatially select the high-intensity mode-locked pulses from the background CW lasing. This can be accomplished simply by inserting an aperture near the gain medium to restrict lasing to the central, high-intensity portion of the intracavity beam. This technique has become widely used in femtosecond laser cavities.
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780122226960500557
Series
Bastian E. Rapp, in Microfluidics: Modelling, Mechanics and Mathematics, 2017
4.3.1 Introduction
After discussing the Taylor series, we now look at the second important series we will require, the Fourier series. Just as the Taylor series can be used to approximate analytical functions by a sum of individual terms, i.e., a polynomial, the Fourier series approximates a function as a sum of sine and cosine functions.
Generally speaking, a Fourier series is a special case of a Taylor series as it uses sine and cosine terms, whereas the Taylor series uses polynomials. A good example is given in section 4.2.2.3 where we approximated the sine function using a polynomial only. Given that a Fourier series uses sine and cosine terms, it is best used to approximate a periodic function, i.e., a function for which f(x) = f(x + 2π). However, as we will see, many functions that we approximate using a Fourier series are not actually periodic. We may simply transform them into a Fourier series, thus making them periodic, even though we are only interested in the function in the first interval, thus ignoring the periodicity. We will look at an example in a moment.
Index | Number
---|---
0 | 1
1 |
2 |
3 | 0
4 |
5 | 0
6 |
7 | 0
8 |
9 | 0
10 |
11 | 0
12 |
13 | 0
14 |
15 | 0
16 |
This periodicity gives rise to another important difference between the Taylor and Fourier series. A Taylor series always needs to be expanded around a point and is a good approximation of the function only near this expansion point. A Fourier series, on the other hand, will approximate the function in the whole domain in which it is defined. Therefore it approximates the function well for any point of the domain for which it was defined.
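The local nature of the Taylor approximation can be seen numerically: the sketch below evaluates the 7th-order Maclaurin polynomial of sin(x) close to and far from the expansion point 0. The choice of sin, of order 7, and of the test points is an illustrative assumption.

```python
import numpy as np
from math import factorial

def taylor_sin(x, order=7):
    """Maclaurin polynomial of sin(x) keeping odd powers up to the given order."""
    x = np.asarray(x, dtype=float)
    return sum((-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
               for k in range(order // 2 + 1))

for x in (0.5, np.pi, 2 * np.pi):
    print(x, abs(taylor_sin(x) - np.sin(x)))   # the error grows away from the expansion point
```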
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9781455731411500046
Series
Alexandre M. Bayen, Timmy Siauw, in An Introduction to MATLAB® Programming and Numerical Methods for Engineers, 2015
Problems
- 1.
-
Use Taylor series expansions to show that , where .
- 2.
-
Use the linear approximation of around to show that for small .
- 3.
-
Write the Taylor series expansion for around . Write a function with header [approx] = myDoubleExp(x, N), which computes an approximation of using the first terms of the Taylor series expansion. Be sure that myDoubleExp can take array inputs.
- 4.
-
Write a function that gives the Taylor series approximation to the exp function around 0 for orders 1 through 7.
- 5.
-
Compute the fourth order Taylor expansions for and and around 0. Which produces less error for : computing the Taylor expansions for and separately and then multiplying the results together, or computing the Taylor expansion for the product first and then plugging in ?
- 6.
-
Write a function with header [yApprox] = myCoshApproximator(x, n), where yApprox is the n-th order Taylor series approximation for cosh(x), the hyperbolic cosine of x, taken around . You may assume that x is a vector and n is a positive integer (including 0). Note that your function should be vectorized for x. Recall that cosh(x) = (e^x + e^(-x))/2.
Test Cases:
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780124202283000154
TAYLOR'S POLYNOMIAL AND SERIES
G.M. PHILLIPS, P.J. TAYLOR, in Theory and Applications of Numerical Analysis (Second Edition), 1996
Section 3.5
- 3.9
-
Show that the Taylor series at x = 0 of log(1 + x) has radius of convergence 1. (The relevant series is recalled after this problem set.)
- 3.10
-
Show that the Taylor series at x = 0 of sin x has infinite radius of convergence.
- 3.11
-
Show, using Problem 3.7, that
- 3.12
-
If A is a given constant and h = A/n where n is a positive integer, show that
- 3.13
-
Show that for x ≥ 0 and any fixed integer n ≥ 1:
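As an aid for Problems 3.9 and 3.10, the standard expansions (well-known results, not reproduced from the book) are:

```latex
\log(1+x) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}\,x^{n}, \qquad
\sin x = \sum_{n=0}^{\infty} \frac{(-1)^{n}}{(2n+1)!}\,x^{2n+1}.
```

Applying the ratio test to the coefficients gives radius of convergence 1 for the first series and an infinite radius of convergence for the second.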
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780125535601500045
Explicit distortion modeling
Jinyu Li, ... Yifan Gong, in Robust Automatic Speech Recognition, 2016
6.2.3 VTS Feature Enhancement
As shown in Li et al. (2009), VTS model adaptation achieves much better accuracy than several popular model adaptation technologies. Although VTS model adaptation can achieve high accuracy, the computational cost is very high, as all the Gaussian parameters in the recognizer need to be updated every time the environmental parameters change. This time-consuming requirement prevents VTS model adaptation from being widely used, especially in LVCSR tasks where the number of model parameters is large.
On the other hand, VTS feature enhancement has been proposed as a lower-cost alternative to VTS model adaptation. For example, a number of techniques have been proposed that can be categorized as model-based feature enhancement schemes (Moreno, 1996; Stouten et al., 2003; Droppo et al., 2003; Li et al., 2011). These methods use a small GMM in the front-end and the same methodology used in VTS model adaptation to derive a minimum-mean-square-error (MMSE) estimate of the clean speech features given the noisy observations. In addition to the advantage of a low runtime cost, VTS feature enhancement can be easily combined with other popular feature-based technologies, such as CMN, HLDA, fMPE, etc., which is challenging for VTS model adaptation.
In general, the MMSE method can be used to obtain the estimate of clean speech
(6.42)  x̂ = E[x|y]
Denote the clean-trained GMM as
(6.43)  p(x) = Σ_k c_k · N(x; μ_{x,k}, Σ_{x,k})
Along with Equation 6.13, the MMSE estimate of clean speech becomes
(6.44)
where P(k|y) is the Gaussian posterior probability, calculated as
(6.45)  P(k|y) = c_k · N(y; μ_{y,k}, Σ_{y,k}) / Σ_j c_j · N(y; μ_{y,j}, Σ_{y,j})
If the 0th-order VTS approximation is used for the nonlinear term in Equation 6.44, the MMSE estimate of cleaned speech x is obtained as
(6.46)
This formulation was first proposed in Moreno (1996). In Stouten et al. (2003), another solution was proposed when expanding Equation 6.13 with the 1st-order VTS. For the kth GMM component, the joint distribution of x and y is modeled as
(6.47)
The following can be derived (Stouten et al., 2003)
(6.48)
The covariance between x and y can be derived as
(6.49)
(6.50)
(6.51)
(6.52)
Equation 6.50 is obtained by subtracting Equation 6.20 from the 1st-order VTS expansion in Equation 6.19. Equation 6.51 is obtained by using the property that speech, channel, and noise are independent.
Then the MMSE estimate of clean speech is (Stouten et al., 2003)
(6.53)
The flowchart of feature VTS is in Figure 6.3. The following are the detailed implementation steps (Li et al., 2011):
- 1.
-
read in a distorted speech utterance;
- 2.
-
set the channel mean vector to all zeros;
- 3.
-
initialize the noise mean vector and diagonal covariance matrix using the first and last N frames (speech-free) of the utterance using sample estimates;
- 4.
-
compute the Gaussian-dependent G(k) with Equation 6.18, and adapt the GMM parameters with Equations 6.20–6.26;
- 5.
-
re-estimate noise and channel distortions with the formulations in Section 6.2.2;
- 6.
-
adapt the GMM parameters with Equations 6.20–6.26;
- 7.
-
use the final adapted GMM model to clean the distorted speech feature with Equation 6.46 or Equation 6.53 (a schematic sketch of this cleaning step is given after this list);
- 8.
-
use the clean-trained HMM model to decode the cleaned speech feature obtained in step 7 and get the output transcription.
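Since Equations 6.44–6.53 are not reproduced above, the sketch below only mirrors the general structure of the cleaning step: the noisy feature y is cleaned by subtracting a posterior-weighted, Gaussian-dependent correction, with the posteriors computed from the adapted front-end GMM. The GMM parameters and the per-component correction vectors are placeholders, not the chapter's actual mismatch terms.

```python
import numpy as np

def gauss_pdf(y, mean, cov):
    """Multivariate Gaussian density, used for the posteriors P(k|y)."""
    d = y - mean
    norm = np.sqrt(((2 * np.pi) ** len(y)) * np.linalg.det(cov))
    return np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / norm

def enhance_frame(y, weights, means_y, covs_y, corrections):
    """MMSE-style estimate: y minus the posterior-weighted per-Gaussian correction."""
    likes = np.array([w * gauss_pdf(y, m, c)
                      for w, m, c in zip(weights, means_y, covs_y)])
    post = likes / likes.sum()                 # P(k|y) for each adapted Gaussian
    return y - post @ np.asarray(corrections)  # posterior-weighted correction

# toy 2-component, 2-dimensional example (illustrative numbers only)
weights = [0.6, 0.4]
means_y = [np.array([1.0, 0.0]), np.array([-1.0, 2.0])]
covs_y  = [np.eye(2), 0.5 * np.eye(2)]
corrections = [np.array([0.3, -0.1]), np.array([0.1, 0.4])]

print(enhance_frame(np.array([0.5, 0.5]), weights, means_y, covs_y, corrections))
```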
Two key aspects of VTS feature enhancement are how to obtain reliable estimates of the noise and channel distortion parameters and how to accurately calculate the Gaussian occupancy probability. In contrast to using static features alone to calculate the Gaussian occupancy probability (Stouten, 2006), both static and dynamic features are used to obtain more reliable Gaussian occupancy probabilities. Then, these probabilities are plugged into Equation 6.46 or Equation 6.53. In Li et al. (2012b), it is shown that recent improvements in VTS model adaptation can be incorporated into VTS feature enhancement to improve the algorithm performance: updating all of the environment distortion parameters (Li et al., 2007) and then carrying out noise adaptive training (Kalinli et al., 2010).
Figures 6.4 (a) and (b) show the distribution of the C1 and C0 of the word "oh" in Aurora 2 test set A with noise type 1. Comparing with Figure 3.2a and b, it is clear that after VTS feature enhancement (fVTS), the distributions of cleaned signals in all SNR conditions are now very close to the original distribution of the clean signal. This benefits both training and testing. In training, only small numbers of Gaussians are now needed to model the distributions of all data. In testing, even a clean-trained model can deal well with the cleaned low-SNR signals because the distributions are now similar.
A common concern with feature enhancement is that after the enhancement, the clean speech signal is distorted and the accuracy on clean test sets will drop. As shown in Li et al. (2012a), VTS feature enhancement enjoys the nice property that it significantly improves accuracy in noisy test conditions without degrading accuracy in clean test conditions. This can also be visualized in Figure 6.4 (c) and (d), which shows the distribution of the C1 and C0 of the word "oh" in the Aurora 2 test set A clean condition. The solid line denotes the distribution of raw clean data, and the dotted line denotes the distribution of clean data after feature VTS enhancement. As shown in the figures, they are very close.
By incorporating the recent advances in VTS model adaptation, VTS feature enhancement can obtain very high accuracy on some noisy tasks (Li et al., 2012b). However, it is shown that there is still a small accuracy gap between VTS feature enhancement and VTS model adaptation (Li et al., 2012b). Regarding the runtime cost, VTS model adaptation needs to adapt the back-end HMM parameters twice, while VTS feature enhancement needs to adjust the front-end GMM parameters twice. Usually, the number of parameters in a front-end GMM is much smaller than that in the back-end HMM. Furthermore, two rounds of decoding are needed in VTS model adaptation while only one round of decoding is performed in VTS feature enhancement. As a result, VTS feature enhancement has a much lower computational cost than VTS model adaptation. Therefore, the tradeoff between accuracy and computational cost will determine which technology is more suitable in a real-world deployment scenario if the underlying acoustic model is a GMM. However, if the underlying acoustic model is a DNN, VTS feature enhancement is a more natural choice. In Section 6.2.5, we will discuss how to combine the VTS technology with a DNN.
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128023983000064
Source: https://www.sciencedirect.com/topics/engineering/taylor-series