Taylor Series

While a Taylor series of the first or second order is usually sufficient for a wide range of combustion responses, there exist situations in which the response is of higher order within the model parameter space.

From: Computer Aided Chemical Engineering, 2019

Taylor Series

Qingkai Kong, ... Alexandre M. Bayen, in Python Programming and Numerical Methods, 2021

18.4 Summary and Problems

18.4.1 Summary

1.

Some functions can be perfectly represented by a Taylor series, which is an infinite sum of polynomials.

2.

Functions that have a Taylor series expansion can be approximated by truncating their Taylor series.

3.

The linear approximation is a common local approximation for functions.

4.

The truncation error can be estimated using the Taylor Remainder Estimation Theorem.

5.

Be mindful of the round-off error in the Taylor series.
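As a quick illustration of points 2 and 4, a truncated Taylor series of e^x converges as more terms are kept (a generic sketch; the name taylor_exp is ours, not from the chapter):

```python
import math

def taylor_exp(x, n):
    """Approximate e**x by the first n terms of its Taylor series around 0."""
    return sum(x**k / math.factorial(k) for k in range(n))

# The truncation error shrinks as more terms are kept.
exact = math.exp(1.0)
errors = [abs(taylor_exp(1.0, n) - exact) for n in (2, 4, 8)]
assert errors[0] > errors[1] > errors[2]
```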

18.4.2 Problems

1.

Use Taylor series expansions to show that e^(ix) = cos(x) + i sin(x), where i = √(−1).

2.

Use the linear approximation of sin(x) around a = 0 to show that sin(x)/x ≈ 1 for small x.

3.

Write the Taylor series expansion for e^(x^2) around a = 0. Write a function my_double_exp(x, n), which computes an approximation of e^(x^2) using the first n terms of the Taylor series expansion. Be sure that my_double_exp can take array inputs.

4.

Write a function that gives the Taylor series approximation to the np.exp function around 0 for orders 1 through 7. Calculate the truncation error bound for order 7.

5.

Compute the fourth order Taylor expansion for sin(x) and cos(x) and sin(x)cos(x) around 0. Which produces less error for x = π/2: computing the Taylor expansions for sin and cos separately and then multiplying the results together, or computing the Taylor expansion for the product first and then plugging in x?

6.

Use the fourth order Taylor series to approximate cos(0.2) and determine the truncation error bound.

7.

Write a function my_cosh_approximator(x, n), whose output is the nth order Taylor series approximation for cosh(x), the hyperbolic cosine of x, taken around a = 0. You may assume that x is an array and n is a positive integer (including zero). Recall that

cosh(x) = (e^x + e^(−x))/2.

Warning: The approximations for n = 0 and n = 1 will be equivalent, the approximations for n = 2 and n = 3 will be equivalent, etc.
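One possible sketch of problem 7, under the even-term reading implied by the warning (this is our illustration, not the book's reference solution):

```python
import math
import numpy as np

def my_cosh_approximator(x, n):
    """n-th order Taylor approximation of cosh(x) around a = 0.

    cosh has only even-degree terms x**(2k)/(2k)!, so we keep every
    term with 2k <= n; orders n and n+1 (n even) coincide, as warned.
    """
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for k in range(n // 2 + 1):
        out += x**(2 * k) / math.factorial(2 * k)
    return out
```

For example, my_cosh_approximator(0.5, 4) already agrees with cosh(0.5) to about six digits.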

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128195499000282

Description of Elements

Zhuming Bi , in Finite Element Analysis Applications, 2018

3.6.1 Taylor expansion

A Taylor series is a series of polynomial terms used to approximate the function f(x) at a point x = a. A one-dimensional Taylor series is given by

(3.25) f(x) = f(a) + f′(a)(x − a) + (f″(a)/2!)(x − a)^2 + … + (f^(n)(a)/n!)(x − a)^n + …

If a is set as 0, the corresponding Taylor expansion is known as a Maclaurin series,

(3.26) f(x) = f(0) + f′(0)x + (f″(0)/2!)x^2 + … + (f^(n)(0)/n!)x^n + …

Taylor's theorem states that any function f(x) satisfying certain conditions can be expressed as a Taylor series: assuming f^(n)(0) (n = 1, 2, 3, …) is finite and |x| < 1, the term (f^(n)(0)/n!)x^n becomes less and less significant in contrast to the terms for which n is small.

Therefore, a function over the continuous domain of x can be approximated as,

(3.27) f(x) ≈ C_0 + C_1 x + C_2 x^2 + … + C_n x^n

where C_i = f^(i)(0)/i! (i = 1, 2, …, n) are the constants determined by initial conditions, and n is an integer for the number of constraints applied on the function.
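The Maclaurin construction of Eq. (3.26) can be checked numerically with f = sin, whose derivatives at 0 cycle through 0, 1, 0, −1 (an illustrative sketch, not code from the chapter):

```python
import math

# Maclaurin coefficients C_i = f^(i)(0)/i! for f = sin:
# the derivatives of sin at 0 cycle through 0, 1, 0, -1, ...
deriv_at_0 = [0.0, 1.0, 0.0, -1.0]

def maclaurin_sin(x, n):
    """Partial sum of Eq. (3.26) for f = sin, up to degree n."""
    return sum(deriv_at_0[i % 4] / math.factorial(i) * x**i
               for i in range(n + 1))

assert abs(maclaurin_sin(0.3, 9) - math.sin(0.3)) < 1e-10
```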

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128099520000030

Low-level feature extraction (including edge detection)

Mark S. Nixon , Alberto S. Aguado , in Feature Extraction & Image Processing for Computer Vision (Third Edition), 2012

4.2.1.2 Analysis of the basic operators

Taylor series analysis reveals that differencing adjacent points provides an estimate of the first-order derivative at a point. If the difference is taken between points separated by Δx, then by Taylor expansion for f(x + Δx) we obtain:

(4.5) f(x + Δx) = f(x) + Δx × f′(x) + (Δx^2/2!) × f″(x) + O(Δx^3)

By rearrangement, the first-order derivative f′(x) is:

(4.6) f′(x) = (f(x + Δx) − f(x))/Δx − O(Δx)

This shows that the difference between adjacent points is an estimate of the first-order derivative, with error O(Δx). This error depends on the size of the interval Δx and on the complexity of the curve. When Δx is large this error can be significant. The error is also large when the high-order derivatives have large values. In practice, the short sampling of image pixels and the reduced high-frequency content make this approximation adequate. However, the error can be reduced by spacing the differenced points by one pixel. This is equivalent to calculating the first-order difference delivered by Eq. (4.1) at two adjacent points, as a new horizontal difference Exx where

(4.7) Exx x,y = Ex x+1,y + Ex x,y = P x+1,y − P x,y + P x,y − P x−1,y = P x+1,y − P x−1,y

This is equivalent to incorporating spacing to detect the edges Exx by:

(4.8) Exx x,y = |P x+1,y − P x−1,y|  ∀ x ∈ 2, N−1; y ∈ 1, N

To analyze this, again by Taylor series, we expand f(x − Δx) as:

(4.9) f(x − Δx) = f(x) − Δx × f′(x) + (Δx^2/2!) × f″(x) − O(Δx^3)

By differencing Eq. (4.9) from Eq. (4.5), we obtain the first-order derivative as

(4.10) f′(x) = (f(x + Δx) − f(x − Δx))/(2Δx) − O(Δx^2)

Equation (4.10) suggests that the estimate of the first-order difference is now the difference between points separated by one pixel, with error O(Δx^2). If Δx < 1, this error is clearly smaller than the error associated with differencing adjacent pixels, in Eq. (4.6). Again, averaging has reduced noise or error. The template for a horizontal edge-detection operator is given in Figure 4.4(a). This template gives the vertical edges detected at its center pixel. A transposed version of the template gives a vertical edge-detection operator (Figure 4.4(b)).
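The error orders in Eqs. (4.6) and (4.10) are easy to verify numerically on a smooth function (a generic sketch, not code from the book):

```python
import math

def forward_diff(f, x, dx):
    # Eq. (4.6): error O(dx)
    return (f(x + dx) - f(x)) / dx

def central_diff(f, x, dx):
    # Eq. (4.10): error O(dx**2)
    return (f(x + dx) - f(x - dx)) / (2 * dx)

x, dx = 1.0, 1e-3
exact = math.cos(x)                      # d/dx sin(x)
err_fwd = abs(forward_diff(math.sin, x, dx) - exact)
err_cen = abs(central_diff(math.sin, x, dx) - exact)
assert err_cen < err_fwd                 # central difference is more accurate
```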

Figure 4.4. Templates for improved first-order difference.

The Roberts cross operator (Roberts, 1965) was one of the earliest edge-detection operators. It implements a version of basic first-order edge detection and uses two templates that differentiate pixel values in a diagonal manner, as opposed to along the axes' directions. The two templates are called M+ and M− and are given in Figure 4.5.

Figure 4.5. Templates for Roberts cross operator.

In implementation, the maximum value delivered by application of these templates is stored as the value of the edge at that point. The edge point E x,y is then the maximum of the two values derived by convolving the two templates at an image point P x,y:

(4.11) E x,y = max{ |M+ * P x,y|, |M− * P x,y| }  ∀ x, y ∈ 1, N−1

The application of the Roberts cross operator to the image of the square is shown in Figure 4.6. The results of the two templates are shown in Figure 4.6(a) and (b), and the result delivered by the Roberts operator is shown in Figure 4.6(c). Note that the corners of the square now appear in the edge image, by virtue of the diagonal differencing action, whereas they were less apparent in Figure 4.2(d) (where the top left corner did not appear).

Figure 4.6. Applying the Roberts cross operator.

An alternative to taking the maximum is to simply add the results of the two templates together to combine horizontal and vertical edges. There are of course more varieties of edges, and it is often better to consider the two templates as providing components of an edge vector: the strength of the edge along the horizontal and vertical axes. These give components of a vector and can be added in a vectorial manner (which is perhaps more usual for the Roberts operator). The edge magnitude is the length of the vector and the edge direction is the vector's orientation, as shown in Figure 4.7.
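A minimal NumPy sketch of Eq. (4.11); the template signs below follow one common convention for M+ and M−, which may differ from Figure 4.5:

```python
import numpy as np

def roberts_cross(image):
    """Roberts cross edge magnitude, Eq. (4.11): the maximum of the two
    absolute diagonal-difference responses at each point."""
    p = np.asarray(image, dtype=float)
    d1 = p[:-1, :-1] - p[1:, 1:]    # M+ : difference along one diagonal
    d2 = p[:-1, 1:] - p[1:, :-1]    # M- : difference along the other diagonal
    return np.maximum(np.abs(d1), np.abs(d2))

img = np.zeros((5, 5))
img[:, 2:] = 1.0                    # vertical step edge
edges = roberts_cross(img)
assert edges.max() == 1.0           # the step is detected
```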

Figure 4.7. Edge detection in vectorial format.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123965493000045

Overview of uncertainties in modern power systems: uncertainty models and methods

Mohamed Ebeed, Shady H. E. Abdel Aleem, in Uncertainties in Modern Power Systems, 2021

3.1.2.3.3 Taylor series expansion

Mathematically, the Taylor series (TS) is a series expansion of a function into an infinite sum of terms, which can be applied to approximate a complex function while providing quantitative estimates of the error in this approximation. In Ref. [90], the TS expansion method was applied to the linearization of the cartesian coordinate formulation of nodal load flow equations for power system state estimation. The authors in Ref. [91] proposed TS expansion of the Markov chain stationary distribution to propagate parametric uncertainty to reliability and performability indices in Markov reliability. Zhao et al. proposed the TS approximation and MCS combined with nonparametric probability density to consider parameter uncertainties [92]. Generally, consider a function of several random variables, Y = f(x_1, x_2, …, x_n). TS can be applied to expand the function around the values x_oi (i = 1, 2, …, n) as follows:

(1.37) Y = f(x_o1, x_o2, …, x_on) + Σ_{i=1}^{n} (x_i − x_oi) ∂f/∂x_i |_{x_o} + (1/2) [ (x_1 − x_o1) ∂/∂x_1 + (x_2 − x_o2) ∂/∂x_2 + … + (x_n − x_on) ∂/∂x_n ]^2 f |_{x_o}
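In practice, uncertainty propagation often keeps only the first-order term of Eq. (1.37). A sketch with a numerically estimated gradient (the function name and the independent-inputs assumption are ours, for illustration only):

```python
def linearized_stats(f, x0, sigmas, eps=1e-6):
    """First-order TS propagation around x0:
    mean ~ f(x0), var ~ sum_i (df/dx_i)^2 * sigma_i^2,
    assuming the inputs are independent."""
    grads = []
    for i in range(len(x0)):
        xp = list(x0); xp[i] += eps
        xm = list(x0); xm[i] -= eps
        grads.append((f(*xp) - f(*xm)) / (2 * eps))   # central difference
    var = sum((g * s) ** 2 for g, s in zip(grads, sigmas))
    return f(*x0), var

# Y = a*b with a ~ (2.0, sigma 0.1), b ~ (3.0, sigma 0.2)
mean, var = linearized_stats(lambda a, b: a * b, [2.0, 3.0], [0.1, 0.2])
assert abs(var - ((3.0 * 0.1) ** 2 + (2.0 * 0.2) ** 2)) < 1e-4
```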

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128204917000013

Numerical Computation

A. Kayode Coker, in Fortran Programs for Chemical Process Design, Analysis, and Simulation, 1995

Euler and Modified Euler Methods

Using the Taylor series,

(1-118) y(x_0 + h) = y(x_0) + h f(x_0) + f′(ε)h^2/2,  x_0 < ε < x_0 + h

The value of y(x_0) is given by the initial condition, and f(x_0) is evaluated as f(x_0, y_0) from the differential equation

dy/dx = f(x, y)

The Euler method can be expressed as:

(1-119) y_{n+1} = y_n + h f_n + O(h^2) error

For the modified Euler method, we expand the Taylor series as

(1-120) y_{n+1} = y_n + f_n h + (f′_n/2) h^2 + (f″(ε)/6) h^3,  x_n < ε < x_n + h

Replacing the second derivative of y, that is f′, by the forward difference approximation

f′_n = (f_{n+1} − f_n)/h

having an error of O(h), we have

(1-121) y_{n+1} = y_n + h{ f_n + (1/2)[(f_{n+1} − f_n)/h + O(h)] h } + O(h^3)

(1-122) y_{n+1} = y_n + h{ f_n + (1/2)f_{n+1} − (1/2)f_n } + O(h^3) = y_n + h(f_n + f_{n+1})/2 + O(h^3)
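The Euler and modified Euler updates, Eqs. (1-119) and (1-122), can be sketched as follows (the modified step uses an Euler predictor for f_{n+1}; Python here rather than the book's Fortran):

```python
import math

def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)                 # Eq. (1-119), local error O(h^2)
        x += h
    return y

def modified_euler(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        fn = f(x, y)
        fp = f(x + h, y + h * fn)        # Euler predictor for f_{n+1}
        y += h * (fn + fp) / 2.0         # Eq. (1-122), local error O(h^3)
        x += h
    return y

# Test problem: dy/dx = y, y(0) = 1, so y(1) = e.
f = lambda x, y: y
err_e = abs(euler(f, 0.0, 1.0, 0.1, 10) - math.e)
err_m = abs(modified_euler(f, 0.0, 1.0, 0.1, 10) - math.e)
assert err_m < err_e                     # the modified method is more accurate
```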

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780884152804500029

Nonlinear Optics

Francisco J. Duarte , in Tunable Laser Eyes, 2003

8.2.3 The Refractive Index equally a Role of Intensity

Using a Taylor series to expand an expression for the refractive index yields

(8.57) n = n_0 + (∂n/∂I)I + (1/2!)(∂^2 n/∂I^2)I^2 + …

Neglecting the second-order and higher terms, this expression reduces to

(8.58) n = n_0 + (∂n/∂I)I

where n_0 is the normal weak-field refractive index, defined in Chapter 12 for various materials. The quantity (∂n/∂I) is not dimensionless and has units that are the inverse of the laser intensity, or W^−1 cm^2. Using polarization arguments this derivative can be expressed as (Boyd, 1992)

(8.59) ∂n/∂I = 12π^2 χ^(3)/(n_0^2(ω)c)

This quantity is known as the second-order index of refraction and is traditionally referred to as n_2. Setting ∂n/∂I = n_2, Eq. (8.58) can be restated in its usual form as

(8.60) n ( ω ) = north 0 ( ω ) + n 2 ( ω ) I ( ω )

The change in refractive index as a function of laser intensity is known as the optical Kerr effect. For a description of the electro-optical Kerr effect, the reader should refer to Agrawal (1995).
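A back-of-the-envelope illustration of Eq. (8.60); the values of n_0 and n_2 below are order-of-magnitude placeholders, not data from Chapter 12:

```python
# Eq. (8.60): n(I) = n0 + n2 * I, with illustrative sample values.
n0 = 1.45             # weak-field refractive index (placeholder)
n2 = 3e-16            # second-order index, cm^2/W (placeholder order of magnitude)

n_low = n0 + n2 * 1e9     # moderate intensity, W/cm^2
n_high = n0 + n2 * 1e12   # high intensity, W/cm^2
delta = n_high - n0       # intensity-dependent index change
assert n_high > n_low > n0
```

Even this tiny index change (here about 3e-4) is enough to produce the lensing behavior described below, because it is largest at the beam center.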

A well-known result of the optical Kerr effect is the phenomenon of self-focusing. This results from the propagation of a laser beam with a near-Gaussian spatial intensity profile, since, according to Eq. (8.60), the refractive index at the center of the beam is higher than the refractive index at the wings of the beam. This results in an intensity-dependent lensing effect, as illustrated in Fig. 8.6.

Figure 8.6. Simplified representation of self-focusing due to n = n_0 + n_2 I in an optical medium, due to propagation of a laser beam with a near-Gaussian intensity profile.

The phenomenon of self-focusing, or intensity-dependent lensing, is important in ultrafast or femtosecond lasers (Diels, 1990; Diels and Rudolph, 1996), where it gives rise to what is known as Kerr lens mode locking (KLM). This is applied to spatially select the high-intensity mode-locked pulses from the background CW lasing. This can be accomplished simply by inserting an aperture near the gain medium to restrict lasing to the central, high-intensity portion of the intracavity beam. This technique has become widely used in femtosecond laser cavities.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780122226960500557

Series

Bastian E. Rapp , in Microfluidics: Modelling, Mechanics and Mathematics, 2017

4.3.1 Introduction

After discussing the Taylor series, we now look at the second important series we will require, the Fourier series. Just as the Taylor series can be used to approximate analytical functions by a sum of individual terms, i.e., a polynomial, the Fourier series approximates a function using a sum of sine and cosine functions.

Generally speaking, a Fourier series is a special type of series in that it uses sine and cosine terms, whereas the Taylor series uses polynomials. A good example is given in section 4.2.2.3 where we approximated the sine function using a polynomial only. Given that a Fourier series uses sine and cosine terms, it is best used to approximate a periodic function, i.e., a function for which f(x) = f(x + 2π). However, as we will see, many functions that we approximate using a Fourier series are not actually periodic. We may simply transform them into a Fourier series, thus making them periodic, even though we are only interested in the function in the first interval, thus ignoring the periodicity. We will look at an example in a moment.

Tab. 4.3. The first 16 Bernoulli numbers

Index Number
0 1
1 ±1/2
2 1/6
3 0
4 −1/30
5 0
6 1/42
7 0
8 −1/30
9 0
10 5/66
11 0
12 −691/2730
13 0
14 7/6
15 0
16 −3617/510

This periodicity gives rise to another important difference between the Taylor and Fourier series. A Taylor series always needs to be expanded around a point and is a good approximation of the function only near this expansion point. A Fourier series, on the other hand, will approximate the function in the whole domain in which it is defined. Therefore it approximates the function well for any point of the domain for which it was defined.
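The locality of a Taylor expansion is easy to see numerically (a generic sketch, not code from the chapter):

```python
import math

# Degree-5 Taylor polynomial of sin around 0 ...
def sin_taylor5(x):
    return x - x**3 / 6 + x**5 / 120

# ... is excellent near the expansion point but poor far from it.
err_near = abs(sin_taylor5(0.1) - math.sin(0.1))
err_far = abs(sin_taylor5(3.0) - math.sin(3.0))
assert err_near < 1e-9 and err_far > 0.1
```

A truncated Fourier series of a periodic function, by contrast, spreads its error over the whole period rather than concentrating accuracy at one point.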

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781455731411500046

Serial

Alexandre M. Bayen , Timmy Siauw , in An Introduction to MATLAB® Programming and Numerical Methods for Engineers, 2015

Problems

1.

Use Taylor series expansions to prove that e^(ix) = cos(x) + i sin(x), where i = √(−1).

2.

Use the linear approximation of sin(x) around a = 0 to show that sin(x)/x ≈ 1 for small x.

3.

Write the Taylor series expansion for e^(x^2) around a = 0. Write a function with header [approx] = myDoubleExp(x, N), which computes an approximation of e^(x^2) using the first N terms of the Taylor series expansion. Be sure that myDoubleExp can take array inputs.

4.

Write a function that gives the Taylor series approximation to the exp function around 0 for orders 1 through 7.

5.

Compute the fourth order Taylor expansion for sin(x) and cos(x) and sin(x)cos(x) around 0. Which produces less error for x = π/2: computing the Taylor expansions for sin and cos separately and then multiplying the results together, or computing the Taylor expansion for the product first and then plugging in x?

6.

Write a role with header [yApprox] = myCoshApproximator(x, n), where yApprox is the n -th lodge Taylor Series approximation for cosh ( x ) , the hyperbolic cosine of x, taken around a = 0 . You may presume that x is a vector and northward is a positive integer (including 0). Note that your office should be vectorized for x. Recall that

cosh(x) = (e^x + e^(−x))/2.

Warning: The approximations for n = 0 and n = 1 will be equivalent, the approximations for n = 2 and n = 3 will be equivalent, and so on.

Test Cases:

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780124202283000154

TAYLOR'S POLYNOMIAL AND SERIES

G.M. PHILLIPS, P.J. TAYLOR, in Theory and Applications of Numerical Analysis (Second Edition), 1996

Section 3.5

3.9

Show that the Taylor series at x = 0 of log(1 + x) has radius of convergence 1.

3.10

Show that the Taylor series at x = 0 of sin x has infinite radius of convergence.

3.11

Show, using Problem 3.7, that

(1 + x)^(1/2) = 1 + (1/2)x + O(x^2).

3.12

If A is a given constant and h = A/n where n is a positive integer, show that

(1 + O(h^2))^n = 1 + O(h), as n → ∞

and

(1 + O(h^3))^n = 1 + O(h^2), as n → ∞

3.13

Show that for x ≥ 0 and any fixed integer n ≥ 1:

e^x ≥ 1 + x,  e^x = 1 + x + O(x^2),  e^(nx) ≥ (1 + x)^n,  e^(nx) = (1 + x)^n + O(x^2).
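Problem 3.12 can also be checked numerically; with h = A/n, (1 + h^2)^n ≈ e^(nh^2) = e^(Ah) = 1 + O(h) (a sketch, not part of the exercise set):

```python
# Numeric illustration of Problem 3.12 with h = A/n:
# (1 + h**2)**n = 1 + O(h) and (1 + h**3)**n = 1 + O(h**2) as n grows.
A = 2.0
for n in (10, 100, 1000):
    h = A / n
    assert abs((1 + h**2)**n - 1) < 10 * h        # O(h) behavior
    assert abs((1 + h**3)**n - 1) < 10 * h**2     # O(h**2) behavior
```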

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780125535601500045

Explicit distortion modeling

Jinyu Li , ... Yifan Gong , in Robust Automatic Speech Recognition, 2016

6.2.3 VTS Feature Enhancement

As shown in Li et al. (2009), VTS model adaptation achieves much better accuracy than several popular model adaptation technologies. Although VTS model adaptation can achieve high accuracy, the computational cost is very high, as all the Gaussian parameters in the recognizer need to be updated every time the environmental parameters change. This time-consuming requirement prevents VTS model adaptation from being widely used, especially in LVCSR tasks where the number of model parameters is large.

On the other hand, VTS feature enhancement has been proposed as a lower-cost alternative to VTS model adaptation. For example, a number of techniques have been proposed that can be categorized as model-based feature enhancement schemes (Moreno, 1996; Stouten et al., 2003; Droppo et al., 2003; Li et al., 2011). These methods use a small GMM in the front-end and the same methodology used in VTS model adaptation to derive a minimum-mean-square-error (MMSE) estimate of the clean speech features given the noisy observations. In addition to the advantage of a low runtime cost, VTS feature enhancement can be easily combined with other popular feature-based technologies, such as CMN, HLDA, fMPE, etc., which is difficult for VTS model adaptation.

In general, the MMSE method can be used to obtain the estimate of clean speech

(6.42) x̂ = E(x|y) = ∫ x p(x|y) dx.

Denote the clean-trained GMM as

(6.43) p_Λ(x) = Σ_{k=1}^{K} c(k) N(x; μ_x(k), Σ_x(k)),

along with Equation 6.13, the MMSE estimate of clean speech becomes

(6.44) x̂ = y − h − ∫ C log(1 + exp(C^(−1)(n − x − h))) p(x|y) dx
= y − h − ∫ C log(1 + exp(C^(−1)(n − x − h))) Σ_{k=1}^{K} P(k|y) p(x|y, k) dx
= y − h − Σ_{k=1}^{K} P(k|y) ∫ C log(1 + exp(C^(−1)(n − x − h))) p(x|y, k) dx,

where P(k|y) is the Gaussian posterior probability, calculated as

(6.45) P(k|y) = c(k) N(y; μ_y(k), Σ_y(k)) / Σ_{m=1}^{K} c(m) N(y; μ_y(m), Σ_y(m)).

If the 0th-order VTS approximation is used for the nonlinear term in Equation 6.44, the MMSE estimate of clean speech x is obtained as

(6.46) x̂ = y − h − Σ_{k=1}^{K} P(k|y) C log(1 + exp(C^(−1)(μ_n − μ_x(k) − μ_h))).

This formulation was first proposed in Moreno (1996). In Stouten et al. (2003), another solution was proposed when expanding Equation 6.13 with the 1st-order VTS. For the kth GMM component, the joint distribution of x and y is modeled as

(6.47) N( [x; y]; [μ_x(k); μ_y(k)], [Σ_x(k), Σ_xy(k); Σ_yx(k), Σ_y(k)] ).

The following can be derived (Stouten et al., 2003)

(6.48) E(x|y, k) = μ_{x|y}(k) = μ_x(k) + Σ_xy(k) Σ_y^(−1)(k) (y − μ_y(k)).

The covariance between x and y can be derived as

(6.49) Σ_xy(k) = E[(x − μ_x(k))(y − μ_y(k))^T]

(6.50) = E[(x − μ_x(k))(G(k)(x − μ_x(k)) + G(k)(h − μ_h) + F(k)(n − μ_n))^T]

(6.51) = E[(x − μ_x(k))(G(k)(x − μ_x(k)))^T]

(6.52) = Σ_x(k) G(k)^T

Equation 6.50 is obtained by subtracting Equation 6.20 from the 1st-order VTS expansion in Equation 6.19. Equation 6.51 is obtained by using the property that speech, channel, and noise are independent.

Then the MMSE estimate of clean speech is (Stouten et al., 2003)

(6.53) x̂ = Σ_{k=1}^{K} P(k|y) [ μ_x(k) + Σ_x(k) G(k)^T Σ_y^(−1)(k) (y − μ_y(k)) ].

The flowchart of feature VTS is in Figure 6.3. The following are the detailed implementation steps (Li et al., 2011):

Figure 6.3. VTS feature enhancement.

1.

read in a distorted speech utterance;

2.

set the channel mean vector to all zeros;

3.

initialize the noise mean vector and diagonal covariance matrix using the first and last N frames (speech-free) from the utterance using sample estimates;

4.

compute the Gaussian-dependent G(k) with Equation 6.18, and adapt the GMM parameters with Equations 6.20–6.26;

5.

re-estimate noise and channel distortions with formulations in Section 6.2.2;

6.

adapt the GMM parameters with Equations 6.20–6.26;

7.

use the final adapted GMM model to clean the distorted speech feature with Equation 6.46 or Equation 6.53;

8.

use the clean-trained HMM model to decode the cleaned speech feature obtained in step 7 and get the output transcription.
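The enhancement in step 7, using the 0th-order estimate of Equation 6.46, can be sketched in NumPy; the dimensions, GMM parameters, and identity stand-in for the DCT matrix C below are placeholder assumptions for illustration only:

```python
import numpy as np

def vts_enhance_0th(y, mu_n, mu_h, mu_x, posteriors, C, C_inv):
    """0th-order VTS feature enhancement, Eq. (6.46):
    x_hat = y - h - sum_k P(k|y) C log(1 + exp(C^-1 (mu_n - mu_x(k) - mu_h)))."""
    correction = np.zeros_like(y)
    for k, p_k in enumerate(posteriors):         # posteriors from Eq. (6.45)
        g = C @ np.log1p(np.exp(C_inv @ (mu_n - mu_x[k] - mu_h)))
        correction += p_k * g
    return y - mu_h - correction

# Toy example in a 2-dimensional cepstral space with C = identity.
D, K = 2, 3
C = C_inv = np.eye(D)
y = np.array([5.0, 1.0])                         # noisy observation
mu_n = np.zeros(D)                               # noise mean
mu_h = np.zeros(D)                               # channel mean (step 2: zeros)
mu_x = np.random.default_rng(0).normal(size=(K, D))
posteriors = np.array([0.5, 0.3, 0.2])
x_hat = vts_enhance_0th(y, mu_n, mu_h, mu_x, posteriors, C, C_inv)
assert x_hat.shape == y.shape
```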

Two key aspects of VTS feature enhancement are how to obtain reliable estimates of the noise and channel distortion parameters and how to accurately calculate the Gaussian occupancy probability. In contrast to using static features alone to calculate the Gaussian occupancy probability (Stouten, 2006), both static and dynamic features are used to obtain more reliable Gaussian occupancy probabilities. Then, these probabilities are plugged into Equation 6.46 or Equation 6.53. In Li et al. (2012b), it is shown that recent improvements in VTS model adaptation can be incorporated into VTS feature enhancement to improve the algorithm performance: updating all of the environment distortion parameters (Li et al., 2007) and later carrying out noise adaptive training (Kalinli et al., 2010).

Figures 6.4 (a) and (b) show the distribution of the C1 and C0 of word oh in Aurora 2 test set A with noise type 1. Compared with Figure 3.2a and b, it is clear that after VTS feature enhancement (fVTS), the distributions of cleaned signals in all SNR conditions are now very close to the original distribution of the clean signal. This benefits both training and testing. In training, only small numbers of Gaussians are now needed to model the distributions of all data. In testing, even a clean-trained model can deal well with the cleaned low-SNR signals because the distributions are now similar.

Figure 6.4. Cepstral distribution of word oh in Aurora 2 after VTS feature enhancement (fVTS).

A common concern with feature enhancement is that after the enhancement, the clean speech signal is distorted and the accuracy on clean test sets will drop. As shown in Li et al. (2012a), VTS feature enhancement enjoys the nice property that it significantly improves accuracy in noisy test conditions without degrading accuracy in clean test conditions. This can also be visualized in Figure 6.4 (c) and (d), which shows the distribution of the C1 and C0 of word "oh" in the Aurora 2 test set A clean condition. The solid line denotes the distribution of raw clean data, and the dotted line denotes the distribution of clean data after feature VTS enhancement. As shown in the figures, they are very close.

By incorporating the recent advances in VTS model adaptation, VTS feature enhancement can obtain very high accuracy on some noisy tasks (Li et al., 2012b). However, it is shown that there is still a small accuracy gap between VTS feature enhancement and VTS model adaptation (Li et al., 2012b). Regarding the runtime cost, VTS model adaptation needs to adapt the back-end HMM parameters twice, while VTS feature enhancement needs to adapt the front-end GMM parameters twice. Usually, the number of parameters in a front-end GMM is much smaller than that in the back-end HMM. Furthermore, two rounds of decoding are needed in VTS model adaptation, while only one round of decoding is performed in VTS feature enhancement. As a result, VTS feature enhancement has a much lower computational cost than VTS model adaptation. Therefore, the tradeoff between accuracy and computational cost will determine which technology is more suitable in a real-world deployment scenario if the underlying acoustic model is a GMM. However, if the underlying acoustic model is a DNN, VTS feature enhancement is a more natural choice. In Section 6.2.5, we will discuss how to combine the VTS technology with a DNN.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128023983000064