1000 results for SPIN MODELS


Relevance: 20.00%

Abstract:

The method of stochastic dynamic programming is widely used in behavioral ecology, but it has some shortcomings arising from its use of finite time horizons. The authors present an alternative approach based on renewal theory. The suggested method uses the cumulative energy reserve per unit time as its criterion, which leads to stationary cycles in the state space. This approach makes it possible to study optimal feeding by analytic methods.
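The renewal criterion described above can be illustrated with a toy sketch (our own construction, not the authors' model): by the renewal-reward theorem, a stationary feeding cycle is scored by its long-run rate of energy gain, i.e. the ratio of expected net energy per cycle to expected cycle duration, and policies are compared by that rate rather than by a finite-horizon dynamic program. All numbers below are invented for illustration.

```python
def long_run_rate(gains, durations):
    """Long-run energy gain per unit time for a stationary feeding cycle.

    gains     -- net energy obtained in each renewal cycle
    durations -- time spent in each corresponding cycle
    """
    return sum(gains) / sum(durations)

# Two hypothetical feeding policies observed over their own cycles.
policy_a = long_run_rate(gains=[5.0, 6.0, 4.0], durations=[2.0, 3.0, 2.0])
policy_b = long_run_rate(gains=[8.0, 7.0], durations=[5.0, 4.0])

# The renewal criterion prefers the policy with the higher rate.
best = max(("A", policy_a), ("B", policy_b), key=lambda p: p[1])
```

Note that policy B gains more energy per cycle, yet policy A wins because its gain per unit time is higher; this is exactly the distinction the rate criterion captures.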

Relevance: 20.00%

Abstract:

This research project, carried out with specialist music teachers at the primary-school level, presents several models for the interpretation of song, preceded by a discussion of the various elements that shape its character.

Relevance: 20.00%

Abstract:

Whereas numerical modeling using finite-element methods (FEM) can provide the transient temperature distribution in a component with sufficient accuracy, it is of utmost importance to develop compact dynamic thermal models that can be used for electrothermal simulation. While in most cases single power sources are considered, here we focus on the simultaneous presence of multiple sources. The thermal model takes the form of a thermal impedance matrix containing the thermal impedance transfer functions between two arbitrary ports. Each individual transfer function element is obtained from the analysis of the temperature transient at one node after a power step at another node. Different options for multiexponential transient analysis are detailed and compared. Among the options explored, small thermal models can be obtained by constrained nonlinear least squares (NLSQ) methods if the order is selected properly using validation signals. The methods are applied to the extraction of dynamic compact thermal models for a new ultrathin chip stack technology (UTCS).
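As a minimal sketch of one of the extraction options discussed above (not the authors' implementation), a multiexponential step response of the form Zth(t) = Σk Rk·(1 − exp(−t/τk)) can be fitted to a thermal transient by constrained nonlinear least squares. The model order, parameter values, and noise level below are invented for illustration; a real extraction would select the order with validation signals as the abstract describes.

```python
import numpy as np
from scipy.optimize import curve_fit

def zth(t, r1, tau1, r2, tau2):
    """Two-term multiexponential thermal impedance step response."""
    return r1 * (1.0 - np.exp(-t / tau1)) + r2 * (1.0 - np.exp(-t / tau2))

# Synthetic transient: assumed "true" parameters plus a little noise.
t = np.logspace(-4, 1, 200)
rng = np.random.default_rng(0)
y = zth(t, 1.5, 1e-2, 3.0, 0.5) + rng.normal(0.0, 1e-3, t.size)

# Constrained NLSQ fit: resistances and time constants kept positive.
popt, _ = curve_fit(zth, t, y,
                    p0=[1.0, 1e-3, 1.0, 1.0],
                    bounds=([0.0, 1e-6, 0.0, 1e-6], [np.inf] * 4))
```

The positivity bounds are what make the problem "constrained": without them, the optimizer may return non-physical negative resistances or time constants when the chosen order is too high.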

Relevance: 20.00%

Abstract:

Gas sensing systems based on low-cost chemical sensor arrays are gaining interest for the analysis of multicomponent gas mixtures. These sensors suffer from several problems, e.g., nonlinearities and slow time response, which can be partially overcome by digital signal processing. Our approach is based on building a nonlinear inverse dynamic system. Results for different identification techniques, including artificial neural networks and Wiener series, are compared in terms of measurement accuracy.
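The inverse-dynamic-system idea can be sketched with a toy model of our own (much simpler than the neural-network or Wiener-series identification in the text): assume the sensor is a static nonlinearity followed by slow first-order linear dynamics, so the inverse system undoes the two stages in reverse order to recover the fast, linear input signal.

```python
import numpy as np

A, B = 0.95, 0.05  # assumed first-order sensor dynamics (slow response)

def sensor(u):
    """Forward toy model: static nonlinearity, then a slow first-order filter."""
    x = np.sqrt(u)                      # assumed static nonlinearity
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = A * y[n - 1] + B * x[n]  # slow dynamic response
    return y

def inverse(y):
    """Inverse dynamic system: deconvolve the filter, then invert the nonlinearity."""
    x = np.zeros_like(y)
    x[1:] = (y[1:] - A * y[:-1]) / B    # invert the first-order dynamics
    return x ** 2                       # invert the static nonlinearity

# A step change in gas concentration: the raw sensor output rises slowly,
# while the inverse system reconstructs the step.
u = np.concatenate([np.zeros(10), np.full(90, 4.0)])
u_hat = inverse(sensor(u))
```

In this noiseless toy case the inversion is exact; with real sensors the deconvolution amplifies noise, which is one reason the text compares identification techniques in terms of measurement accuracy.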

Relevance: 20.00%

Abstract:

The correlation between the structural (average size and density) and optoelectronic properties [band gap and photoluminescence (PL)] of Si nanocrystals embedded in SiO2 is among the essential factors in understanding their emission mechanism. This correlation has been difficult to establish in the past due to the lack of reliable methods for measuring the size distribution of nanocrystals from electron microscopy, mainly because of the insufficient contrast between Si and SiO2. With this aim, we have recently developed a successful method for imaging Si nanocrystals in SiO2 matrices. This is done by using high-resolution electron microscopy in conjunction with conventional electron microscopy in dark field conditions. Then, by varying the time of annealing in a large time scale we have been able to track the nucleation, pure growth, and ripening stages of the nanocrystal population. The nucleation and pure growth stages are almost completed after a few minutes of annealing time at 1100°C in N2 and afterward the ensemble undergoes an asymptotic ripening process. In contrast, the PL intensity steadily increases and reaches saturation after 3-4 h of annealing at 1100°C. Forming gas postannealing considerably enhances the PL intensity but only for samples annealed previously in less time than that needed for PL saturation. The effects of forming gas are reversible and do not modify the spectral shape of the PL emission. The PL intensity shows at all times an inverse correlation with the amount of Pb paramagnetic centers at the Si-SiO2 nanocrystal-matrix interfaces, which have been measured by electron spin resonance. Consequently, the Pb centers or other centers associated with them are interfacial nonradiative channels for recombination and the emission yield largely depends on the interface passivation. We have correlated as well the average size of the nanocrystals with their optical band gap and PL emission energy. 
The band gap and emission energy shift to the blue as the nanocrystal size shrinks, in agreement with models based on quantum confinement. As a main result, we have found that the Stokes shift is independent of the average size of nanocrystals and has a constant value of 0.26±0.03 eV, which is almost twice the energy of the Si-O vibration. This finding suggests that among the possible channels for radiative recombination, the dominant one for Si nanocrystals embedded in SiO2 is a fundamental transition spatially located at the Si-SiO2 interface with the assistance of a local Si-O vibration.
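The "almost twice the Si-O vibration energy" remark can be checked with a quick unit conversion (our own arithmetic; the ~1080 cm⁻¹ wavenumber for the Si-O stretching mode is an assumed typical literature value, not a number taken from the text):

```python
# E[eV] = wavenumber[cm^-1] * h*c, with h*c expressed in eV*cm.
HC_EV_CM = 1.2398e-4
si_o_wavenumber = 1080.0            # cm^-1, assumed typical Si-O stretch
e_vib = si_o_wavenumber * HC_EV_CM  # one vibrational quantum, ~0.134 eV
stokes = 2.0 * e_vib                # two quanta, ~0.27 eV
```

Two vibrational quanta land within the quoted 0.26±0.03 eV Stokes shift, consistent with the vibration-assisted interface transition proposed above.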

Relevance: 20.00%

Abstract:

In this paper we highlight the importance of operational costs in explaining economic growth and analyze how the industrial structure affects the growth rate of the economy. If there is monopolistic competition only in an intermediate goods sector, then production growth coincides with consumption growth. Moreover, the pattern of growth depends on the particular form of the operational cost. If the monopolistically competitive sector is the final goods sector, then per capita production is constant but per capita effective consumption, or welfare, grows. Finally, we modify the industrial structure of the economy once more and obtain an economy with two different growth speeds, one for production and another for effective consumption. Thus, both the operational cost and the particular structure of the sector that produces the final goods ultimately determine the pattern of growth.

Relevance: 20.00%

Abstract:

This paper provides, from a theoretical and quantitative point of view, an explanation of why taxes on capital returns are high (around 35%) by analyzing the optimal fiscal policy in an economy with intergenerational redistribution. For this purpose, the government is modeled explicitly and can choose (and commit to) an optimal tax policy in order to maximize society's welfare. In an infinitely lived economy with heterogeneous agents, the long-run optimal capital tax is zero. If heterogeneity is due to the existence of overlapping generations, this result in general no longer holds. I provide sufficient conditions for zero capital and labor taxes, and show that a general class of preferences, commonly used in the macroeconomics and public finance literature, violates these conditions. For a version of the model, calibrated to the US economy, the main results are as follows. First, if the government is restricted to a limited set of instruments, the observed fiscal policy cannot be dismissed as suboptimal, and capital taxes are positive and quantitatively relevant. Second, if the government can use age-specific taxes for each generation, then the age profile of capital taxes implies subsidizing the asset returns of the younger generations and taxing at higher rates the asset returns of the older ones.

Relevance: 20.00%

Abstract:

In groundwater applications, Monte Carlo methods are employed to model the uncertainty in geological parameters. However, their brute-force application becomes computationally prohibitive for highly detailed geological descriptions, complex physical processes, and large numbers of realizations. The Distance Kernel Method (DKM) overcomes this issue by clustering the realizations in a multidimensional space based on the flow responses obtained by means of an approximate (computationally cheaper) model; the uncertainty is then estimated from the exact responses, which are computed only for one representative realization per cluster (the medoid). Usually, DKM is employed to decrease the size of the sample of realizations considered when estimating the uncertainty. We propose to use the information from the approximate responses for uncertainty quantification. The subset of exact solutions provided by DKM is then employed to construct an error model and correct the potential bias of the approximate model. Two error models are devised that both employ the difference between approximate and exact medoid solutions, but differ in the way medoid errors are interpolated to correct the whole set of realizations. The Local Error Model rests upon the clustering defined by DKM and can be seen as a natural way to account for intra-cluster variability; the Global Error Model employs a linear interpolation of all medoid errors regardless of the cluster to which the single realization belongs. These error models are evaluated for an idealized pollution problem in which the uncertainty of the breakthrough curve needs to be estimated. For this numerical test case, we demonstrate that the error models improve the uncertainty quantification provided by the DKM algorithm and are effective in correcting the bias of the estimate computed solely from the approximate (multiscale finite volume, MsFV) results.
The framework presented here is not specific to the methods considered and can be applied to other combinations of approximate models and techniques for selecting a subset of realizations.
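The two error models can be sketched schematically (a toy construction of our own, not the paper's code): each realization's approximate response is corrected using the approximate-vs-exact mismatch observed at the cluster medoids only. All responses, cluster labels, and medoid indices below are invented for illustration.

```python
import numpy as np

def local_error_model(approx, labels, medoid_idx, exact_medoid):
    """Shift every realization by the error of its own cluster's medoid."""
    err = exact_medoid - approx[medoid_idx]   # one error per cluster
    return approx + err[labels]

def global_error_model(approx, medoid_idx, exact_medoid):
    """Interpolate medoid errors linearly in the approximate response,
    ignoring cluster membership."""
    err = exact_medoid - approx[medoid_idx]
    order = np.argsort(approx[medoid_idx])
    return approx + np.interp(approx, approx[medoid_idx][order], err[order])

# Toy example: 6 realizations, 2 clusters, medoids at indices 1 and 4.
approx = np.array([1.0, 1.2, 1.4, 3.0, 3.2, 3.4])  # cheap-model responses
labels = np.array([0, 0, 0, 1, 1, 1])              # cluster of each realization
medoid_idx = np.array([1, 4])                      # one medoid per cluster
exact_medoid = np.array([1.5, 3.0])                # exact runs at medoids only

corrected_local = local_error_model(approx, labels, medoid_idx, exact_medoid)
corrected_global = global_error_model(approx, medoid_idx, exact_medoid)
```

Both corrections reproduce the exact response at the medoids themselves; they differ only in how the medoid errors are spread over the other realizations, which is exactly the local-vs-global distinction drawn above.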

Relevance: 20.00%

Abstract:

In this paper we analyse, using Monte Carlo simulation, the possible consequences of incorrect assumptions about the true structure of the random effects covariance matrix and the true correlation pattern of residuals on the performance of an estimation method for nonlinear mixed models. The procedure under study is the well-known linearization method due to Lindstrom and Bates (1990), implemented in the nlme library of S-Plus and R. Its performance is studied in terms of bias, mean square error (MSE), and true coverage of the associated asymptotic confidence intervals. Setting aside other criteria, such as the convenience of avoiding over-parameterised models, it appears worse to erroneously assume some structure than to assume no structure when the latter would be adequate.
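The three performance criteria used above can be illustrated with a generic Monte Carlo sketch (our own simplified example for a sample mean rather than the Lindstrom-Bates linearization, which would require a full mixed-model fit): repeatedly simulate data, compute the estimate and its asymptotic confidence interval, and summarize bias, MSE, and true coverage across replicates.

```python
import numpy as np

rng = np.random.default_rng(42)
true_mu, sigma = 10.0, 2.0   # assumed true parameter and noise level
n, reps, z = 50, 2000, 1.96  # sample size, MC replicates, normal quantile

estimates, covered = [], 0
for _ in range(reps):
    sample = rng.normal(true_mu, sigma, size=n)
    m = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n)     # asymptotic standard error
    estimates.append(m)
    covered += (m - z * se <= true_mu <= m + z * se)

estimates = np.array(estimates)
bias = estimates.mean() - true_mu            # should be near 0 here
mse = np.mean((estimates - true_mu) ** 2)    # near sigma^2 / n = 0.08
coverage = covered / reps                    # near the nominal 95%
```

In a misspecification study like the one above, the data-generating step would use the true covariance structure while the fitting step assumes a wrong one; bias, MSE, and coverage then reveal the cost of the incorrect assumption.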