995 results for Exponential models


Relevance: 20.00%

Publisher:

Abstract:

This research project, carried out with specialist music teachers in primary education, presents several models for the interpretation of songs, after first setting out the various elements that shape a song's character.

Relevance: 20.00%

Publisher:

Abstract:

Interest in applying long-memory models, and ARFIMA models in particular, to economic variables has grown considerably in recent years. The method most widely used to estimate these models in economic analysis is undoubtedly the one proposed by Geweke and Porter-Hudak (GPH), even though recent work has shown that, in some cases, this estimator suffers from a substantial bias. We therefore propose an extension of this estimator, based on the exponential model introduced by Bloomfield, which corrects this bias. We then analyse and compare the behaviour of both estimators in moderately sized samples and show that the proposed estimator has a smaller mean squared error than the GPH estimator.
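For reference, the sketch below shows the baseline Geweke-Porter-Hudak (GPH) log-periodogram regression that the Bloomfield-based estimator extends. It is a minimal NumPy implementation; the bandwidth choice m = n**0.5 and the white-noise test series are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def gph_estimate(x, power=0.5):
    """Estimate the long-memory parameter d via the Geweke-Porter-Hudak
    (GPH) log-periodogram regression, using the m = n**power lowest
    Fourier frequencies (the bandwidth is a user choice)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    m = int(n ** power)                        # number of frequencies used
    j = np.arange(1, m + 1)
    lam = 2.0 * np.pi * j / n                  # Fourier frequencies
    # Periodogram at the first m Fourier frequencies
    fft = np.fft.fft(x - x.mean())
    I = (np.abs(fft[1:m + 1]) ** 2) / (2.0 * np.pi * n)
    # GPH regression: log I(lam_j) = c - d * log(4 sin^2(lam_j / 2)) + e_j
    X = np.log(4.0 * np.sin(lam / 2.0) ** 2)
    Y = np.log(I)
    slope = np.polyfit(X, Y, 1)[0]
    return -slope                              # estimated d

# Illustrative use on white noise (true d = 0)
rng = np.random.default_rng(0)
print(gph_estimate(rng.standard_normal(1024)))
```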

Relevance: 20.00%

Publisher:

Abstract:

The analysis of multiexponential decays is challenging because of their complex nature. When analyzing these signals, not only the parameters, but also the orders of the models, have to be estimated. We present an improved spectroscopic technique specially suited for this purpose. The proposed algorithm combines an iterative linear filter with an iterative deconvolution method. A thorough analysis of the noise effect is presented. The performance is tested with synthetic and experimental data.
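The abstract does not spell out the algorithm, so the following is only a generic baseline: a sketch of fitting a two-component multiexponential decay by nonlinear least squares with SciPy, with the model order fixed in advance. The synthetic data and parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Two-component exponential decay model."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic noisy decay (illustrative parameters)
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 500)
y = biexp(t, 1.0, 0.5, 0.4, 3.0) + 0.01 * rng.standard_normal(t.size)

# Nonlinear least-squares fit; bounds keep amplitudes and time constants positive
p0 = (1.0, 1.0, 0.5, 5.0)
popt, pcov = curve_fit(biexp, t, y, p0=p0, bounds=(0.0, np.inf))
print("estimated parameters:", popt)
```

In practice the model order itself would also have to be selected, e.g. by comparing fits of increasing order, which is the harder part of the problem the abstract addresses.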

Relevance: 20.00%

Publisher:

Abstract:

Whereas numerical modeling using finite-element methods (FEM) can provide the transient temperature distribution in a component with sufficient accuracy, it is of the utmost importance to develop compact dynamic thermal models that can be used for electrothermal simulation. While in most cases single power sources are considered, here we focus on the simultaneous presence of multiple sources. The thermal model takes the form of a thermal impedance matrix containing the thermal impedance transfer functions between two arbitrary ports. Each individual transfer function element Zij is obtained from the analysis of the temperature transient at node i after a power step at node j. Different options for multiexponential transient analysis are detailed and compared. Among the options explored, small thermal models can be obtained by constrained nonlinear least squares (NLSQ) methods if the order is selected properly using validation signals. The methods are applied to the extraction of dynamic compact thermal models for a new ultrathin chip stack technology (UTCS).
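As an illustration of the kind of constrained NLSQ multiexponential fit mentioned above, the sketch below fits a Foster-form thermal step response Zth(t) = sum_i Ri*(1 - exp(-t/tau_i)) with non-negativity constraints. The synthetic transient, the fixed order of three, and the parameter values are assumptions, not data from the UTCS study.

```python
import numpy as np
from scipy.optimize import least_squares

def zth(params, t):
    """Foster-form thermal step response: sum_i R_i * (1 - exp(-t / tau_i))."""
    k = params.size // 2
    R, tau = params[:k], params[k:]
    return np.sum(R[:, None] * (1.0 - np.exp(-t[None, :] / tau[:, None])), axis=0)

def fit_foster(t, z_meas, order=3):
    """Constrained NLSQ fit with R_i, tau_i >= 0; the order is chosen by the
    user, e.g. with the help of a separate validation transient."""
    x0 = np.concatenate([np.full(order, z_meas[-1] / order),
                         np.logspace(-3, 0, order)])
    res = least_squares(lambda p: zth(p, t) - z_meas, x0, bounds=(0.0, np.inf))
    return res.x

# Illustrative synthetic transient
t = np.logspace(-4, 1, 200)
true = np.array([0.2, 0.5, 1.0, 1e-3, 1e-2, 1e-1])   # R1..R3, tau1..tau3
z = zth(true, t) + 1e-3 * np.random.default_rng(2).standard_normal(t.size)
print(fit_foster(t, z, order=3))
```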

Relevance: 20.00%

Publisher:

Abstract:

Introduction: Prior repeated sprints (6) have become an interesting method for resolving the debate about the principal factors that limit oxygen uptake (V'O2) kinetics at the onset of exercise [i.e., muscle O2 delivery (5) or metabolic inertia (3)]. The aim of this study was to compare the effects of two repeated-sprint sets of 6x6 s, separated by different recovery durations between sprints, on V'O2 and muscle deoxygenation [HHb] kinetics during a subsequent heavy-intensity exercise. Methods: 10 male subjects performed a 6-min constant-load cycling test (T50) at an intensity corresponding to half of the difference between V'O2max and the ventilatory threshold. They then performed two all-out repeated-sprint sets of 6x6 s with different recovery durations between sprints (S1: 30 s and S2: 3 min), each followed, after 7 min of recovery, by the T50 (S1T50 and S2T50, respectively). V'O2, [HHb] of the vastus lateralis (VL), and surface electromyography activity [i.e., root mean square (RMS) and median frequency of the power density spectrum (MDF)] from the VL and vastus medialis (VM) were recorded throughout T50. A bi-exponential model for the overall T50 and a mono-exponential model for the first 90 s of T50 were used to characterize the V'O2 and [HHb] kinetics, respectively. Results: Mean V'O2 was higher in S1 (2.9±0.3 l·min-1) than in S2 (1.2±0.3 l·min-1) (p<0.001). Peripheral blood flow was increased after the sprints, as attested by a higher baseline heart rate (HRbaseline) (S1T50: +22%; S2T50: +17%; p≤0.008). The [HHb] time delay was shorter for S1T50 and S2T50 than for T50 (-22% for both; p≤0.007), whereas the mean response time of V'O2 was accelerated only after S1 (S1T50: 32.3±2.5 s; S2T50: 34.4±2.6 s; T50: 35.7±5.4 s; p=0.031). There were no significant differences in RMS between the three conditions (p>0.05). The MDF of the VM was higher during the first 3 min in S1T50 than in T50 (+6%; p≤0.05). Conclusion: The study shows that V'O2 kinetics were speeded up by prior repeated sprints with a short (30 s) but not a long (3 min) inter-sprint recovery, even though the [HHb] kinetics were accelerated and peripheral blood flow was enhanced after both sets. S1, which induced greater PCr depletion (1) and a change in the fibre recruitment pattern (increase in MDF) compared with S2, may decrease metabolic inertia (2), stimulate the activation of oxidative phosphorylation (4), and accelerate V'O2 kinetics at the onset of the subsequent high-intensity exercise.
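A minimal sketch of the kind of delayed mono-exponential model commonly used for such on-kinetics is shown below (a bi-exponential version would add a second, slower component). The parameter names, the synthetic data, and the fitting routine are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def monoexp(t, baseline, amp, delay, tau):
    """Mono-exponential response with time delay, as commonly used for
    V'O2 or [HHb] on-kinetics: flat until `delay`, then exponential rise."""
    y = baseline + amp * (1.0 - np.exp(-(t - delay) / tau))
    return np.where(t < delay, baseline, y)

# Illustrative synthetic on-transient (values assumed)
rng = np.random.default_rng(3)
t = np.arange(0.0, 90.0, 1.0)
y = monoexp(t, 0.8, 2.0, 15.0, 30.0) + 0.05 * rng.standard_normal(t.size)

popt, _ = curve_fit(monoexp, t, y, p0=(0.8, 2.0, 10.0, 25.0))
print("baseline, amplitude, time delay, tau:", popt)
```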

Relevance: 20.00%

Publisher:

Abstract:

Gas sensing systems based on low-cost chemical sensor arrays are gaining interest for the analysis of multicomponent gas mixtures. These sensors suffer from several problems, e.g., nonlinearities and a slow time response, which can be partially overcome by digital signal processing. Our approach is based on building a nonlinear inverse dynamic system. Results for different identification techniques, including artificial neural networks and Wiener series, are compared in terms of measurement accuracy.
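As a rough illustration of the inverse-dynamics idea, the sketch below identifies a polynomial (Volterra-like) model that maps a window of past sensor readings back to the quantity of interest. This crude expansion only stands in for the Wiener-series and neural-network models of the abstract, and all signals and parameters are synthetic assumptions.

```python
import numpy as np

# --- Illustrative synthetic data (assumed, not from the paper) ---------------
rng = np.random.default_rng(4)
c = np.clip(np.cumsum(0.01 * rng.standard_normal(2000)), 0.0, None)  # "true" concentration
static = np.tanh(c)                               # static sensor nonlinearity
y = np.zeros_like(static)
for k in range(1, y.size):                        # slow first-order sensor dynamics
    y[k] = 0.9 * y[k - 1] + 0.1 * static[k]
y += 0.01 * rng.standard_normal(y.size)           # measurement noise

# --- Nonlinear inverse dynamic model: recover c from lagged sensor readings --
def lagged_features(sig, n_lags):
    """Linear + quadratic (Volterra-like) features of the last n_lags samples."""
    n = sig.size - n_lags + 1
    lin = np.stack([sig[i:i + n] for i in range(n_lags)], axis=1)
    quad = np.stack([lin[:, i] * lin[:, j]
                     for i in range(n_lags) for j in range(i, n_lags)], axis=1)
    return np.hstack([np.ones((n, 1)), lin, quad])

n_lags = 5
X = lagged_features(y, n_lags)
target = c[n_lags - 1:]                           # concentration at the newest sample
w, *_ = np.linalg.lstsq(X, target, rcond=None)
print("training RMS error:", np.sqrt(np.mean((X @ w - target) ** 2)))
```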

Relevance: 20.00%

Publisher:

Abstract:

In this paper we highlight the importance of operational costs in explaining economic growth and analyze how the industrial structure affects the growth rate of the economy. If there is monopolistic competition only in an intermediate-goods sector, then production growth coincides with consumption growth. Moreover, the pattern of growth depends on the particular form of the operational cost. If the monopolistically competitive sector is instead the final-goods sector, then per capita production is constant but per capita effective consumption, or welfare, grows. Finally, we modify the industrial structure of the economy once more and obtain an economy with two different growth speeds, one for production and another for effective consumption. Thus, both the operational cost and the particular structure of the sector that produces the final goods ultimately determine the pattern of growth.

Relevance: 20.00%

Publisher:

Abstract:

This paper provides, from a theoretical and quantitative point of view, an explanation of why taxes on capital returns are high (around 35%) by analyzing the optimal fiscal policy in an economy with intergenerational redistribution. For this purpose, the government is modeled explicitly and can choose (and commit to) an optimal tax policy in order to maximize society's welfare. In an infinitely lived economy with heterogeneous agents, the long-run optimal capital tax is zero. If heterogeneity is due to the existence of overlapping generations, this result in general no longer holds. I provide sufficient conditions for zero capital and labor taxes, and show that a general class of preferences, commonly used in the macro and public finance literature, violates these conditions. For a version of the model calibrated to the US economy, the main results are as follows. First, if the government is restricted to a set of instruments, the observed fiscal policy cannot be dismissed as suboptimal, and capital taxes are positive and quantitatively relevant. Second, if the government can use age-specific taxes for each generation, then the age profile of capital taxes implies subsidizing the asset returns of the younger generations and taxing at higher rates the asset returns of the older ones.

Relevance: 20.00%

Publisher:

Abstract:

In groundwater applications, Monte Carlo methods are employed to model the uncertainty in geological parameters. However, their brute-force application becomes computationally prohibitive for highly detailed geological descriptions, complex physical processes, and large numbers of realizations. The Distance Kernel Method (DKM) overcomes this issue by clustering the realizations in a multidimensional space based on the flow responses obtained by means of an approximate (computationally cheaper) model; the uncertainty is then estimated from the exact responses, which are computed only for one representative realization per cluster (the medoid). Usually, DKM is employed to decrease the size of the sample of realizations considered when estimating the uncertainty. We propose to use the information from the approximate responses for uncertainty quantification. The subset of exact solutions provided by DKM is then employed to construct an error model and correct the potential bias of the approximate model. Two error models are devised; both employ the difference between the approximate and exact medoid solutions, but they differ in the way the medoid errors are interpolated to correct the whole set of realizations. The Local Error Model rests upon the clustering defined by DKM and can be seen as a natural way to account for intra-cluster variability; the Global Error Model employs a linear interpolation of all medoid errors regardless of the cluster to which a given realization belongs. These error models are evaluated on an idealized pollution problem in which the uncertainty of the breakthrough curve needs to be estimated. For this numerical test case, we demonstrate that the error models improve the uncertainty quantification provided by the DKM algorithm and are effective in correcting the bias of the estimate computed solely from the MsFV (multiscale finite-volume) results. The framework presented here is not specific to the methods considered and can be applied to other combinations of approximate models and techniques for selecting a subset of realizations.
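The sketch below illustrates the medoid-based correction idea behind the Local Error Model: cluster the approximate responses, run the exact model only at each cluster's medoid, and shift every member of a cluster by its medoid error. KMeans with a nearest-to-centroid medoid is used here merely as a stand-in for the paper's distance-kernel k-medoid clustering, and the data and bias are synthetic assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def local_error_correction(approx, exact_fn, n_clusters=5, seed=0):
    """Schematic Local-Error-Model-style correction.

    approx   : (n_realizations, n_features) approximate flow responses
    exact_fn : callable returning the exact response for one realization index
    """
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(approx)
    corrected = approx.copy()
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        # medoid = member closest to the cluster centre (stand-in for k-medoids)
        d = np.linalg.norm(approx[members] - km.cluster_centers_[c], axis=1)
        medoid = members[np.argmin(d)]
        error = exact_fn(medoid) - approx[medoid]     # exact run only here
        corrected[members] += error                   # intra-cluster correction
    return corrected

# Illustrative use: "exact" model = approximate response plus a smooth bias
rng = np.random.default_rng(5)
approx = rng.random((200, 10))                        # e.g. breakthrough-curve samples
exact = approx + 0.1 * np.sin(approx)                 # hypothetical bias
corrected = local_error_correction(approx, lambda i: exact[i], n_clusters=5)
print("mean |bias| before:", np.abs(exact - approx).mean(),
      "after:", np.abs(exact - corrected).mean())
```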

Relevance: 20.00%

Publisher:

Abstract:

In this paper we analyse, using Monte Carlo simulation, the possible consequences of incorrect assumptions about the true structure of the random-effects covariance matrix and the true correlation pattern of the residuals on the performance of an estimation method for nonlinear mixed models. The procedure under study is the well-known linearization method due to Lindstrom and Bates (1990), implemented in the nlme library of S-Plus and R. Its performance is studied in terms of bias, mean square error (MSE), and true coverage of the associated asymptotic confidence intervals. Setting aside other criteria, such as the convenience of avoiding over-parameterised models, it appears worse to erroneously assume some structure than to assume no structure at all when the latter would be adequate.
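As a small illustration of the evaluation criteria named above, the sketch below computes bias, MSE, and the empirical coverage of Wald-type confidence intervals from a set of simulated estimates; the inputs are fake replicates, not results from the study.

```python
import numpy as np

def mc_performance(estimates, ses, true_value, z=1.96):
    """Monte Carlo performance metrics for an estimator: bias, MSE, and the
    empirical coverage of asymptotic (Wald-type) confidence intervals."""
    estimates, ses = np.asarray(estimates), np.asarray(ses)
    bias = estimates.mean() - true_value
    mse = np.mean((estimates - true_value) ** 2)
    covered = np.abs(estimates - true_value) <= z * ses
    return bias, mse, covered.mean()

# Illustrative use with fake replicates (true parameter = 1.0)
rng = np.random.default_rng(6)
est = 1.0 + 0.1 * rng.standard_normal(1000)
se = np.full(1000, 0.1)
print(mc_performance(est, se, true_value=1.0))
```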