965 results for Approximat Model (scheme)
Abstract:
A time-efficient optical model is proposed for GATE simulation of a LYSO scintillation matrix coupled to a photomultiplier. The purpose is to avoid the excessively long computation time incurred when activating the optical processes in GATE. The usefulness of the model is demonstrated by comparing the simulated and experimental energy spectra obtained with the dual planar head equipment for dosimetry with a positron emission tomograph (DoPET). The procedure to apply the model is divided into two steps. Firstly, a simplified simulation of a single crystal element of DoPET is used to fit an analytic function that models the optical attenuation inside the crystal. In a second step, the model is employed to calculate the influence of this attenuation on the energy registered by the tomograph. The use of the proposed optical model is around three orders of magnitude faster than a GATE simulation with optical processes enabled. A good agreement was found between the experimental and simulated data using the optical model. The results indicate that optical interactions inside the crystal elements play an important role in the energy resolution and induce a considerable degradation of the spectral information acquired by DoPET. Finally, the same approach employed by the proposed optical model could be useful to simulate a scintillation matrix coupled to a photomultiplier using a single or dual readout scheme.
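The first of the two steps — fitting an analytic attenuation function to single-crystal data — can be sketched as follows. This is a minimal illustration assuming an exponential attenuation law and fabricated depth/light-yield data; the paper's actual functional form and simulation output are not reproduced here:

```python
import numpy as np

def attenuation(z, a, mu):
    """Illustrative attenuation model: collected light decays
    exponentially with interaction depth z inside the crystal."""
    return a * np.exp(-mu * z)

# Hypothetical depth / light-yield pairs, standing in for the
# simplified single-crystal simulation (arbitrary units).
rng = np.random.default_rng(0)
depth = np.linspace(0.0, 20.0, 21)                              # mm
light = attenuation(depth, 1.0, 0.05) * rng.normal(1.0, 0.01, depth.size)

# Fit the exponential by linear regression on log(light).
slope, intercept = np.polyfit(depth, np.log(light), 1)
mu_fit, a_fit = -slope, np.exp(intercept)
```

The fitted function can then be evaluated, at negligible cost, in place of a full optical-photon simulation when computing the registered energy.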
Abstract:
The nonequilibrium phase transition of the one-dimensional triplet-creation model is investigated using the n-site approximation scheme. We find that the phase diagram in the space of parameters (gamma, D), where gamma is the particle decay probability and D is the diffusion probability, exhibits a tricritical point for n >= 4. However, the fitting of the tricritical coordinates (gamma(t), D(t)) using data for 4 <= n <= 13 predicts that gamma(t) becomes negative for n >= 26, thus indicating that the phase transition is always continuous in the limit n -> infinity. However, the large discrepancies between the critical parameters obtained in this limit and those obtained by Monte Carlo simulations, as well as a puzzling non-monotonic dependence of these parameters on the order of the approximation n, argue for the inadequacy of the n-site approximation to study the triplet-creation model for computationally feasible values of n.
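The extrapolation of the tricritical coordinate in n can be sketched as follows, using fabricated gamma_t(n) values and a simple linear ansatz in 1/n (the paper's actual fitting form and data are not reproduced here):

```python
import numpy as np

# Fabricated tricritical decay probabilities gamma_t(n) for the
# n-site approximations n = 4..13 (illustrative values only).
n = np.arange(4, 14)
gamma_t = -0.02 + 0.52 / n          # chosen to change sign near n = 26

# One simple extrapolation ansatz: gamma_t(n) = gamma_inf + c / n.
c, gamma_inf = np.polyfit(1.0 / n, gamma_t, 1)

# Order n at which the fitted gamma_t(n) crosses zero; beyond this
# order the fit predicts a negative (unphysical) tricritical gamma.
n_cross = -c / gamma_inf
```

A negative extrapolated intercept gamma_inf is what signals, in this toy setting, that the tricritical point disappears in the n -> infinity limit.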
Abstract:
This paper develops a bias correction scheme for a multivariate heteroskedastic errors-in-variables model. The applicability of this model is justified in areas such as astrophysics, epidemiology and analytical chemistry, where the variables are subject to measurement errors and the variances vary with the observations. We conduct Monte Carlo simulations to investigate the performance of the corrected estimators. The numerical results show that the bias correction scheme yields nearly unbiased estimates. We also give an application to a real data set.
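The flavor of such a Monte Carlo check can be conveyed with a much simpler, classical errors-in-variables example: attenuation bias of the naive slope estimator and its reliability-ratio correction. This is a generic single-covariate illustration, not the paper's multivariate heteroskedastic scheme:

```python
import numpy as np

rng = np.random.default_rng(42)
beta_true = 2.0       # true slope
sigma_u2 = 0.5        # (assumed known) measurement-error variance
n_rep, n_obs = 2000, 200

naive, corrected = [], []
for _ in range(n_rep):
    x = rng.normal(0.0, 1.0, n_obs)                    # true covariate
    w = x + rng.normal(0.0, np.sqrt(sigma_u2), n_obs)  # observed with error
    y = beta_true * x + rng.normal(0.0, 0.3, n_obs)
    slope = np.cov(w, y)[0, 1] / np.var(w, ddof=1)     # naive OLS slope
    naive.append(slope)
    # Correction: divide by the estimated reliability ratio var(x)/var(w).
    reliability = (np.var(w, ddof=1) - sigma_u2) / np.var(w, ddof=1)
    corrected.append(slope / reliability)

bias_naive = np.mean(naive) - beta_true        # attenuated toward zero
bias_corrected = np.mean(corrected) - beta_true  # close to zero
```

The naive slope is systematically pulled toward zero by the measurement error, while the corrected estimator is nearly unbiased — the same qualitative conclusion the paper's simulations reach for its bias-corrected estimators.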
Abstract:
The goal of this paper is to present an approximation scheme for a reaction-diffusion equation with finite delay, which has been used as a model to study the evolution of a population with density distribution u. The scheme is constructed so that the resulting finite-dimensional ordinary differential system contains the same asymptotic dynamics as the reaction-diffusion equation.
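The reduction to a finite-dimensional ODE system can be sketched with a standard method-of-lines spatial discretization of a delayed reaction-diffusion equation. The Hutchinson-type nonlinearity, zero-flux boundaries, and explicit Euler stepping below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def step_delayed_rd(u_hist, d, tau_steps, dt, dx, r=1.0):
    """One explicit Euler step of the method-of-lines ODE system for
    u_t = d*u_xx + r*u*(1 - u(t - tau)) with zero-flux boundaries.
    u_hist: list of past states, newest last, spanning the delay."""
    u = u_hist[-1]
    u_delayed = u_hist[-1 - tau_steps]          # state at t - tau
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2.0 * (u[1] - u[0]) / dx**2        # Neumann (zero-flux)
    lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    return u + dt * (d * lap + r * u * (1.0 - u_delayed))

# Usage: a uniform sub-capacity population relaxes to the steady state u = 1.
dx, dt, d, tau_steps = 0.1, 0.001, 0.01, 10
hist = [0.5 * np.ones(50) for _ in range(tau_steps + 1)]
for _ in range(5000):
    hist.append(step_delayed_rd(hist, d, tau_steps, dt, dx))
    hist.pop(0)
u_final = hist[-1]
```

For a small delay the discretized system inherits the attractor of the logistic dynamics, illustrating the sense in which the finite system can retain the asymptotic behavior of the PDE.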
Abstract:
This article is dedicated to harmonic wavelet Galerkin methods for the solution of partial differential equations. Several variants of the method are proposed and analyzed, using the Burgers equation as a test model. The computational complexity can be reduced when the localization properties of the wavelets and restricted interactions between different scales are exploited. The resulting variants of the method have computational complexities ranging from O(N^3) to O(N) (N being the space dimension) per time step. A pseudo-spectral wavelet scheme is also described and compared to the methods based on connection coefficients. The harmonic wavelet Galerkin scheme is applied to a nonlinear model for the propagation of precipitation fronts, with the front locations revealed by the sizes of the localized wavelet coefficients. (C) 2011 Elsevier Ltd. All rights reserved.
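For orientation, the pseudo-spectral idea mentioned above can be sketched for the Burgers test model in a Fourier (rather than wavelet) basis: derivatives are computed in transform space while the nonlinear product is formed in physical space. Explicit Euler stepping and the parameter values are illustrative assumptions:

```python
import numpy as np

def burgers_step(u, dt, nu, k):
    """One explicit Euler step for u_t + u*u_x = nu*u_xx on a periodic
    grid: derivatives in Fourier space, the product u*u_x in physical
    space -- the defining trick of a pseudo-spectral scheme."""
    u_hat = np.fft.fft(u)
    u_x = np.real(np.fft.ifft(1j * k * u_hat))
    u_xx = np.real(np.fft.ifft(-(k ** 2) * u_hat))
    return u + dt * (-u * u_x + nu * u_xx)

n = 128
x = 2.0 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)     # integer wavenumbers
u = np.sin(x)                        # smooth initial profile
for _ in range(200):                 # integrate to t = 0.2
    u = burgers_step(u, dt=1e-3, nu=0.1, k=k)
```

A wavelet variant replaces the FFT pair with forward/inverse wavelet transforms, which is where the localization and scale-restriction savings enter.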
Abstract:
This paper derives the second-order biases of maximum likelihood estimates from a multivariate normal model where the mean vector and the covariance matrix have parameters in common. We show that the second-order bias can always be obtained by means of ordinary weighted least-squares regressions. We conduct simulation studies which indicate that the bias correction scheme yields nearly unbiased estimators. (C) 2009 Elsevier B.V. All rights reserved.
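The kind of simulation evidence cited — a correction scheme yielding nearly unbiased estimates — can be conveyed with the textbook example of the maximum-likelihood variance estimator, a far simpler setting than the paper's multivariate normal model with common parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2_true, n_obs, n_rep = 4.0, 10, 20000

mle, corrected = [], []
for _ in range(n_rep):
    x = rng.normal(0.0, np.sqrt(sigma2_true), n_obs)
    s2 = np.mean((x - x.mean()) ** 2)            # MLE of the variance, biased
    mle.append(s2)
    corrected.append(s2 * n_obs / (n_obs - 1))   # bias-corrected estimator

bias_mle = np.mean(mle) - sigma2_true              # close to -sigma2/n
bias_corrected = np.mean(corrected) - sigma2_true  # close to zero
```

The MLE underestimates the variance by roughly sigma2/n, while the corrected estimator is nearly unbiased — the same pattern the paper's (analytically derived, second-order) correction exhibits in its simulations.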
Abstract:
Millions of unconscious calculations are made daily by pedestrians walking through the Colby College campus. I used ArcGIS to make a predictive spatial model that chose paths similar to those that are actually used by people on a regular basis. To make a viable model of how most travelers choose their way, I considered both the distance required and the type of traveling surface. I used an iterative process to develop a scheme for weighting travel costs, which allowed accurate least-cost paths to be predicted by ArcMap. The accuracy was confirmed when the calculated routes were compared to satellite photography and were found to overlap well-worn “shortcuts” taken between the paved paths throughout campus.
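The weighted least-cost idea can be sketched outside ArcGIS with Dijkstra's algorithm on a small grid of hypothetical per-cell travel costs (pavement cheap, grass expensive); the actual campus cost surface and weighting scheme are not reproduced here:

```python
import heapq

# Hypothetical per-cell travel costs: paved = 1, grass = 3.
grid = [
    [1, 1, 1, 3],
    [3, 3, 1, 3],
    [3, 3, 1, 1],
    [3, 3, 3, 1],
]

def least_cost_path(grid, start, goal):
    """Dijkstra over 4-connected grid cells; returns the minimum
    accumulated cost (including the start cell) from start to goal."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: grid[start[0]][start[1]]}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

cost = least_cost_path(grid, (0, 0), (3, 3))
```

With these weights the cheapest route hugs the paved cells rather than cutting straight across the grass — the same trade-off pedestrians resolve when they wear in shortcuts.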
Abstract:
In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for large-scale systems. Nonetheless, a critical obstacle, which needs to be overcome in MPC, is the large computational burden when a large-scale system is considered or a long prediction horizon is involved. In order to solve this problem, we use an adaptive prediction accuracy (APA) approach that can reduce the computational burden almost by half. The proposed MPC-APA scheme is tested on the northern Dutch water system, which comprises Lake IJssel, Lake Marker, the River IJssel and the North Sea Canal. The simulation results show that by using the MPC-APA scheme, the computational time can be reduced to a large extent and a flood protection problem with longer prediction horizons can be solved effectively.
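The APA idea — fine temporal resolution for the near future, coarser steps further out — can be sketched as a horizon-gridding helper. The function, its parameters, and the roughly-halved variable count are illustrative assumptions, not the paper's formulation:

```python
def adaptive_horizon(n_total, n_fine, coarse_factor=4):
    """Split a prediction horizon of n_total base intervals into
    decision steps: unit-length steps for the first n_fine intervals,
    then aggregated steps of up to coarse_factor intervals each."""
    steps = [1] * n_fine
    remaining = n_total - n_fine
    while remaining > 0:
        s = min(coarse_factor, remaining)
        steps.append(s)
        remaining -= s
    return steps

fine = [1] * 48                      # uniform grid: 48 decision variables
apa = adaptive_horizon(48, 12)       # 12 fine + 9 coarse steps = 21 variables
```

Since the optimization cost grows with the number of decision variables per controlled structure, shrinking the grid from 48 to 21 steps while keeping the same physical horizon is what yields the large reduction in computation time.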
Abstract:
This paper investigates the introduction of type dynamics in Laffont and Tirole's regulation model. The regulator and the firm are engaged in a two-period relationship governed by short-term contracts, in which the regulator observes cost but cannot distinguish how much of the cost is due to effort on cost reduction or to the efficiency of the firm's technology, called its type. There is asymmetric information about the firm's type. Our model is developed in a framework in which the regulator learns from the firm's choice in the first period and uses that information to design the best second-period incentive scheme. The regulator is aware of the possibility of changes in types and takes that into account. We show how type dynamics build a bridge between commitment and non-commitment situations. In particular, the possibility of changing types mitigates the "ratchet effect". We show that for a small degree of type dynamics the equilibrium shows separation and the welfare achieved is close to its upper bound (given by the commitment allocation).
Abstract:
We propose a scheme in which the masses of the heavier leptons obey seesaw-type relations. The light lepton masses, except those of the electron and the electron neutrino, are generated by one-loop-level radiative corrections. We work in a version of the 3-3-1 electroweak model that predicts singlets (charged and neutral) of heavy leptons beyond the known ones. An extra U(1)(Omega) symmetry is introduced in order to prevent the light leptons from acquiring masses at tree level. The electron mass induces an explicit breaking of the U(1)(Omega) symmetry. We also discuss the mixing matrix among four neutrinos. The new energy scale required is not higher than a few TeV.
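The seesaw-type relations invoked here have, in their generic (type-I) textbook form, the following shape — shown only for orientation, since the exact relations of this 3-3-1 construction are model-specific:

```latex
% Generic seesaw pattern: a heavy scale M suppresses the light mass.
m_{\mathrm{light}} \simeq \frac{m_D^{2}}{M},
\qquad
m_{\mathrm{heavy}} \simeq M \gg m_D .
```

Here m_D would be an electroweak-scale Dirac mass and M the heavy-lepton scale, which in the scenario above is no higher than a few TeV.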
Abstract:
We employ the NJL model to calculate mesonic correlation functions at finite temperature and compare results with recent lattice QCD simulations. We employ an implicit regularization scheme to deal with the divergent amplitudes to obtain ambiguity-free, scale-invariant and symmetry-preserving physical amplitudes. Making the coupling constants of the model temperature dependent, we show that at low momenta our results agree qualitatively with lattice simulations.
Abstract:
Recently there have been suggestions that for a proper description of hadronic matter and hadronic correlation functions within the NJL model at finite density/temperature the parameters of the model should be taken density/temperature dependent. Here we show that qualitatively similar results can be obtained using a cutoff-independent regularization of the NJL model. In this regularization scheme one can express the divergent parts at finite density/temperature of the amplitudes in terms of their counterparts in vacuum.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Toda lattice hierarchy and the associated matrix formulation of the 2M-boson KP hierarchies provide a framework for the Drinfeld-Sokolov reduction scheme realized through Hamiltonian action within the second KP Poisson bracket. By working with free currents, which Abelianize the second KP Hamiltonian structure, we are able to obtain a unified formalism for the reduced SL(M + 1, M - k) KdV hierarchies interpolating between the ordinary KP and KdV hierarchies. The corresponding Lax operators are given as superdeterminants of graded SL(M + 1, M - k) matrices in the diagonal gauge and we describe their bracket structure and field content. In particular, we provide explicit free field representations of the associated W(M, M - k) Poisson bracket algebras generalising the familiar nonlinear W-M+1 algebra. Discrete Bäcklund transformations for SL(M + 1, M - k) KdV are generated naturally from lattice translations in the underlying Toda-like hierarchy. As an application we demonstrate the equivalence of the two-matrix string model to the SL(M + 1, 1) KdV hierarchy.
Abstract:
A boundary element method (BEM) formulation to predict the behavior of solids exhibiting displacement (strong) discontinuity is presented. In this formulation, the effects of the displacement jump of a discontinuity interface embedded in an internal cell are reproduced by an equivalent strain field over the cell. To compute the stresses, this equivalent strain field is assumed as the inelastic part of the total strain. As a consequence, the non-linear BEM integral equations that result from the proposed approach are similar to those of the implicit BEM based on initial strains. Since discontinuity interfaces can be introduced inside the cell independently of the cell boundaries, the proposed BEM formulation, combined with a tracking scheme to trace the discontinuity path during the analysis, allows for arbitrary discontinuity propagation using a fixed mesh. A simple technique to track the crack path is outlined. This technique is based on the construction of a polygonal line formed by segments inside the cells in which the assumed failure criterion is reached. Two experimental concrete fracture tests were analyzed to assess the performance of the proposed formulation.