981 results for G-Hilbert Scheme


Relevance:

30.00%

Publisher:

Abstract:

Site-specific meteorological forcing appropriate for applications such as urban outdoor thermal comfort simulations can be obtained using a newly coupled scheme that combines a simple slab convective boundary layer (CBL) model and an urban land surface model (ULSM) (two ULSMs are considered here). The former simulates daytime CBL height, air temperature and humidity, while the latter estimates urban surface energy and water balance fluxes, accounting for changes in land surface cover. The coupled models are tested at a suburban site and two rural grass sites, one irrigated and one unirrigated, in Sacramento, U.S.A. All the modelled variables compare well with measurements (e.g. coefficient of determination = 0.97 and root mean square error = 1.5 °C for air temperature). The current version is applicable to daytime conditions and requires initial state conditions for the CBL model within an appropriate range to obtain the required performance. The coupled model allows routine observations from distant sites (e.g. rural, airport) to be used to predict air temperature and relative humidity in an urban area of interest. This simple model, which can be applied rapidly, could provide urban data for applications such as air quality forecasting and building energy modelling, in addition to outdoor thermal comfort.
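The abstract does not give the coupling in code form; as a rough illustration of the kind of exchange such a scheme performs, the sketch below integrates a generic zero-order slab (mixed-layer) CBL model driven by a surface sensible heat flux of the sort the land surface model would supply. The flux series, entrainment ratio, lapse rate and initial state are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

# Toy zero-order slab CBL: mixed-layer potential temperature theta_m [K] and depth h [m]
# respond to the surface sensible heat flux Q_H [W m-2] supplied (in the real scheme) by the ULSM.
rho, cp = 1.2, 1005.0       # air density [kg m-3], specific heat [J kg-1 K-1]
beta = 0.2                  # entrainment ratio (illustrative)
gamma = 0.005               # potential temperature lapse rate above the CBL [K m-1]
dt = 600.0                  # time step [s]

theta_m, h = 288.0, 200.0   # assumed initial mixed-layer temperature and depth
Q_H = 150.0 * np.sin(np.linspace(0.0, np.pi, 72))   # idealised daytime flux over 12 h

for qh in Q_H:
    w_theta_s = qh / (rho * cp)                        # kinematic surface heat flux [K m s-1]
    theta_m += dt * (1.0 + beta) * w_theta_s / h       # mixed-layer warming
    h += dt * (1.0 + 2.0 * beta) * w_theta_s / (gamma * h)   # encroachment-style CBL growth

print(f"final mixed-layer temperature {theta_m:.1f} K, CBL depth {h:.0f} m")
```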

Relevance:

30.00%

Publisher:

Abstract:

This paper discusses an important issue related to the implementation and interpretation of the analysis scheme in the ensemble Kalman filter. It is shown that the observations must be treated as random variables at the analysis steps. That is, one should add random perturbations with the correct statistics to the observations and generate an ensemble of observations that then is used in updating the ensemble of model states. Traditionally, this has not been done in previous applications of the ensemble Kalman filter and, as will be shown, this has resulted in an updated ensemble with a variance that is too low. This simple modification of the analysis scheme results in a completely consistent approach if the covariance of the ensemble of model states is interpreted as the prediction error covariance, and there are no further requirements on the ensemble Kalman filter method, except for the use of an ensemble of sufficient size. Thus, there is a unique correspondence between the error statistics from the ensemble Kalman filter and the standard Kalman filter approach.
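The central point, that each ensemble member should be updated with its own perturbed copy of the observations so that the analysis ensemble keeps the correct spread, fits in a few lines. The sketch below is a generic stochastic-EnKF analysis step; the dimensions, observation operator and noise levels are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 40, 10, 50                         # state size, number of obs, ensemble size (illustrative)

X = rng.normal(size=(n, N))                  # forecast ensemble of model states (columns)
obs_idx = np.arange(0, n, 4)                 # observe every 4th state variable
H = np.zeros((m, n)); H[np.arange(m), obs_idx] = 1.0
R = 0.5**2 * np.eye(m)                       # observation error covariance
y = rng.normal(size=m)                       # synthetic observation vector

# Ensemble (prediction error) covariance from the forecast members
A = X - X.mean(axis=1, keepdims=True)
Pf = A @ A.T / (N - 1)

# Kalman gain built from the ensemble covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)

# Analysis step: each member is updated with its own perturbed observation,
# so the analysis ensemble variance is consistent with the Kalman filter result.
Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T
Xa = X + K @ (Y - H @ X)

# Using the same unperturbed y for every member instead would shrink the analysis
# spread below its correct value, which is exactly the issue the abstract raises.
```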

Relevance:

30.00%

Publisher:

Abstract:

Within the context of the normalized variable formulation (NVF) of Leonard and the total variation diminishing (TVD) constraints of Harten, this paper presents an extension of previous work by the authors for solving unsteady incompressible flow problems. The main contributions of the paper are threefold. First, it presents the results of the development and implementation of a bounded high-order upwind adaptive QUICKEST scheme in the robust 3D code (Freeflow), for the numerical solution of the full incompressible Navier-Stokes equations. Second, it reports numerical simulation results for the 1D shock tube problem, a 2D impinging jet and 2D/3D broken dam flows, and these results are compared with existing analytical and experimental data. Third, it presents the application of the numerical method to solving 3D free surface flow problems. (C) 2007 IMACS. Published by Elsevier B.V. All rights reserved.
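The bounded QUICKEST scheme itself is not reproduced in the abstract; as a rough illustration of the kind of TVD-constrained upwind flux it refers to, here is a generic flux-limited upwind update for 1D linear advection. The van Leer limiter, grid and parameters are stand-ins, not the authors' adaptive QUICKEST limiter.

```python
import numpy as np

def van_leer(r):
    """Generic TVD flux limiter (a stand-in for the paper's bounded QUICKEST limiter)."""
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def advect_step(u, a, dx, dt):
    """One TVD-limited upwind step for u_t + a u_x = 0 with a > 0, periodic BCs."""
    c = a * dt / dx
    du = np.roll(u, -1) - u                              # forward differences u_{i+1} - u_i
    r = np.roll(du, 1) / np.where(du == 0.0, 1e-30, du)  # smoothness ratio at face i+1/2
    phi = van_leer(r)
    u_face = u + 0.5 * phi * (1.0 - c) * du              # upwind value plus limited correction
    flux = a * u_face
    return u - (dt / dx) * (flux - np.roll(flux, 1))

# Illustrative use: advect a square pulse without spurious oscillations
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)
for _ in range(200):
    u = advect_step(u, a=1.0, dx=x[1] - x[0], dt=0.002)
```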

Relevance:

30.00%

Publisher:

Abstract:

In a previous paper, we developed a phenomenological-operator technique aiming to simplify the estimate of losses due to dissipation in cavity quantum electrodynamics. In this paper, we apply that technique to estimate losses during an entanglement concentration process in the context of dissipative cavities. In addition, some results, previously used without proof to justify our phenomenological-operator approach, are now formally derived, including an equivalent way to formulate the Wigner-Weisskopf approximation.
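For reference, the Wigner-Weisskopf approximation mentioned at the end of the abstract amounts to the familiar exponential-decay result for an excited emitter coupled to a broad continuum of modes; schematically (this is general background, not the paper's derivation):

$$\dot{c}_e(t) \simeq -\frac{\Gamma}{2}\, c_e(t)
\quad\Longrightarrow\quad
c_e(t) \simeq c_e(0)\, e^{-\Gamma t/2},
\qquad |c_e(t)|^2 = |c_e(0)|^2\, e^{-\Gamma t},$$

where $c_e(t)$ is the excited-state amplitude and $\Gamma$ the decay rate obtained in the flat-spectrum (Markovian) limit of the system-reservoir coupling.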

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

The scheme is based on Ami Harten's ideas (Harten, 1994), the main tools coming from wavelet theory, in the framework of multiresolution analysis for cell averages. But instead of evolving cell averages on the finest uniform level, we propose to evolve just the cell averages on the grid determined by the significant wavelet coefficients. Typically, there are only a few cells in each time step: big cells in smooth regions and smaller ones close to irregularities of the solution. For the numerical flux, we use a simple uniform central finite difference scheme, adapted to the size of each cell. If any of the required neighboring cell averages is not present, it is interpolated from coarser scales. But we switch to an ENO scheme in the finest part of the grids. To show the feasibility and efficiency of the method, it is applied to a system arising in polymer flooding of an oil reservoir. In terms of CPU time and memory requirements, it outperforms Harten's multiresolution algorithm.

The proposed method applies to systems of conservation laws in 1D,

$$\partial_t u(x,t) + \partial_x f(u(x,t)) = 0, \qquad u(x,t) \in \mathbb{R}^m. \tag{1}$$

In the spirit of finite volume methods, we consider the explicit scheme

$$v_\mu^{\,n+1} = v_\mu^{\,n} - \frac{\Delta t}{h_\mu}\left(\bar f_\mu - \bar f_{\mu^-}\right) = [D v^n]_\mu, \tag{2}$$

where $\mu$ is a point of an irregular grid $\Gamma$, $\mu^-$ is the left neighbor of $\mu$ in $\Gamma$, $v_\mu^{\,n} \approx \frac{1}{\mu-\mu^-}\int_{\mu^-}^{\mu} u(x,t_n)\,dx$ are approximate cell averages of the solution, $\bar f_\mu = \bar f_\mu(v^n)$ are the numerical fluxes, and $D$ is the numerical evolution operator of the scheme.

According to the definition of $\bar f_\mu$, several schemes of this type have been proposed and successfully applied (LeVeque, 1990); Godunov, Lax-Wendroff, and ENO are some of the popular names. The Godunov scheme resolves shocks well, but its first-order accuracy is poor in smooth regions. Lax-Wendroff is second order, but produces dangerous oscillations close to shocks. ENO schemes are good alternatives, with high order and without serious oscillations, but the price is a high computational cost.

Ami Harten proposed in (Harten, 1994) a simple strategy to save expensive ENO flux calculations. The basic tools come from multiresolution analysis for cell averages on uniform grids, and the principle is that wavelet coefficients can be used for the characterization of local smoothness. Typically, only a few wavelet coefficients are significant. At the finest level, they indicate discontinuity points, where ENO numerical fluxes are computed exactly. Elsewhere, cheaper fluxes can be safely used, or just interpolated from coarser scales. Different applications of this principle have been explored by several authors; see for example (G.-Müller and Müller, 1998).

Our scheme also uses Ami Harten's ideas. But instead of evolving the cell averages on the finest uniform level, we propose to evolve the cell averages on sparse grids associated with the significant wavelet coefficients. This means that the total number of cells is small, with big cells in smooth regions and smaller ones close to irregularities. This task requires improved new tools, which are described next.
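As a rough sketch of the multiresolution ingredient described above (not the authors' sparse-grid evolution itself), the following computes cell-average detail ("wavelet") coefficients on one level of a dyadic hierarchy and flags the cells whose details exceed a threshold. The prediction operator is one standard third-order reconstruction for cell averages; the threshold and test function are illustrative assumptions.

```python
import numpy as np

def coarsen(v):
    """Coarse cell averages: mean of each pair of fine cells."""
    return 0.5 * (v[0::2] + v[1::2])

def predict(vc):
    """Predict fine cell averages from coarse ones using a standard third-order
    central reconstruction for cell averages (simple edge replication at the boundaries)."""
    left = np.concatenate(([vc[0]], vc[:-1]))
    right = np.concatenate((vc[1:], [vc[-1]]))
    slope = (right - left) / 8.0
    fine = np.empty(2 * vc.size)
    fine[0::2] = vc - slope      # left child of each coarse cell
    fine[1::2] = vc + slope      # right child
    return fine

def details(v):
    """Detail coefficients = fine averages minus the prediction from the coarser level."""
    return v - predict(coarsen(v))

# Illustrative data: a smooth profile with one jump
x = (np.arange(256) + 0.5) / 256.0
v = np.sin(2.0 * np.pi * x) + (x > 0.6)

eps = 1e-3
significant = np.abs(details(v)) > eps       # cells that must stay at the fine level
print(f"{significant.sum()} of {v.size} fine cells flagged as significant")
```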

Relevance:

30.00%

Publisher:

Abstract:

Within the context of the normalized variable formulation (NVF) of Leonard and the total variation diminishing (TVD) constraints of Harten, this paper presents an extension of previous work by the authors for solving unsteady incompressible flow problems. The main contributions of the paper are threefold. First, it presents the results of the development and implementation of a bounded high-order upwind adaptive QUICKEST scheme in the robust 3D code (Freeflow), for the numerical solution of the full incompressible Navier-Stokes equations. Second, it reports numerical simulation results for the 1D shock tube problem, a 2D impinging jet and 2D/3D broken dam flows, and these results are compared with existing analytical and experimental data. Third, it presents the application of the numerical method to solving 3D free surface flow problems. (C) 2007 IMACS. Published by Elsevier B.V. All rights reserved.
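As a pointer to the notation used above (a textbook statement, not the authors' specific boundedness conditions), the normalized variable is $\hat\phi = (\phi - \phi_U)/(\phi_D - \phi_U)$ for upstream, central and downstream nodes $U$, $C$, $D$; the classical QUICK face value and the convection boundedness criterion then read

$$\hat\phi_f = \tfrac{3}{8} + \tfrac{3}{4}\,\hat\phi_C ,
\qquad
\hat\phi_C \le \hat\phi_f \le 1 \ \text{ for } 0 < \hat\phi_C < 1,
\qquad
\hat\phi_f = \hat\phi_C \ \text{ otherwise,}$$

and schemes such as QUICKEST add Courant-number-dependent corrections to the face value.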

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Traditional cutoff regularization schemes of the Nambu-Jona-Lasinio model limit the applicability of the model to energy-momentum scales much below the value of the regularizing cutoff. In particular, the model cannot be used to study quark matter with Fermi momenta larger than the cutoff. In the present work, an extension of the model to high temperatures and densities recently proposed by Casalbuoni, Gatto, Nardulli, and Ruggieri is used in connection with an implicit regularization scheme. This is done by making use of scaling relations of the divergent one-loop integrals that relate these integrals at different energy-momentum scales. Fixing the pion decay constant at the chiral symmetry breaking scale in the vacuum, the scaling relations predict a running coupling constant that decreases as the regularization scale increases, implementing in a schematic way the property of asymptotic freedom of quantum chromodynamics. If the regularization scale is allowed to increase with density and temperature, the coupling will decrease with density and temperature, extending in this way the applicability of the model to high densities and temperatures. These results are obtained without specifying an explicit regularization. As an illustration of the formalism, numerical results are obtained for the finite density and finite temperature quark condensate and applied to the problem of color superconductivity at high quark densities and finite temperature.
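As background only, here is a minimal sketch of the standard vacuum NJL gap equation with a sharp three-momentum cutoff, i.e. the textbook setup that the paper's implicit-regularization treatment generalizes. The parameter values are typical illustrative choices, not those of the paper, and conventions for the coupling constant differ between references.

```python
import numpy as np

# Illustrative textbook-style SU(2) NJL parameter set (not the paper's values)
Lambda = 0.65    # sharp three-momentum cutoff [GeV]
G      = 5.0     # scalar coupling [GeV^-2]
m0     = 0.0055  # current quark mass [GeV]
Nc     = 3

def I(M):
    """Closed form of int_0^Lambda p^2 / sqrt(p^2 + M^2) dp."""
    E = np.sqrt(Lambda**2 + M**2)
    return 0.5 * (Lambda * E - M**2 * np.arcsinh(Lambda / M))

def gap_rhs(M):
    """RHS of the vacuum gap equation M = m0 + (4 G Nc / pi^2) * M * I(M)."""
    return m0 + 4.0 * G * Nc / np.pi**2 * M * I(M)

M = 0.3                       # initial guess for the constituent mass [GeV]
for _ in range(200):          # simple fixed-point iteration
    M = gap_rhs(M)
print(f"constituent quark mass M ~ {1000 * M:.0f} MeV")
```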

Relevance:

30.00%

Publisher:

Abstract:

This work is a natural continuation of our recent study in quantizing relativistic particles. There it was demonstrated that, by applying a consistent quantization scheme to the classical model of a spinless relativistic particle as well as to the Berezin-Marinov model of a 3 + 1 Dirac particle, it is possible to obtain a consistent relativistic quantum mechanics of such particles. In the present paper, we apply a similar approach to the problem of quantizing the massive 2 + 1 Dirac particle. However, we stress that such a problem differs in a nontrivial way from the one in 3 + 1 dimensions. The point is that in 2 + 1 dimensions each spin polarization describes different fermion species. Technically this fact manifests itself through the presence of a bifermionic constant and of a bifermionic first-class constraint. In particular, this constraint does not admit a conjugate gauge condition at the classical level. The quantization problem in 2 + 1 dimensions is also interesting from the physical viewpoint (e.g., anyons). In order to quantize the model, we first derive a classical formulation in an effective phase space, restricted by constraints and gauges. Then the condition of preservation of the classical symmetries allows us to realize the operator algebra in an unambiguous way and construct an appropriate Hilbert space. The physical sector of the constructed quantum mechanics contains spin-1/2 particles and antiparticles without an infinite number of negative-energy levels, and exactly reproduces the one-particle sector of the 2 + 1 quantum theory of a spinor field.
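The statement that in 2 + 1 dimensions each spin polarization describes a different fermion species can be traced to the two inequivalent irreducible representations of the 2 + 1 Dirac algebra; a standard choice (general background, not the paper's notation) is

$$\gamma^0 = \sigma^3,\qquad \gamma^1 = i\sigma^1,\qquad \gamma^2 = i\sigma^2,
\qquad \{\gamma^\mu,\gamma^\nu\} = 2\,\eta^{\mu\nu}, \quad \eta = \mathrm{diag}(+,-,-),$$

with the second inequivalent representation obtained by $\gamma^\mu \to -\gamma^\mu$; the two choices correspond to the two spin (fermion species) sectors.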

Relevance:

30.00%

Publisher:

Abstract:

Measurement-based quantum computation is an efficient model to perform universal computation. Nevertheless, theoretical questions have been raised, mainly with respect to realistic noise conditions. In order to shed some light on this issue, we evaluate the exact dynamics of some single-qubit-gate fidelities using the measurement-based quantum computation scheme when the qubits which are used as a resource interact with a common dephasing environment. We report a necessary condition for the fidelity dynamics of a general pure N-qubit state, interacting with this type of error channel, to present an oscillatory behavior, and we show that for the initial canonical cluster state, the fidelity oscillates as a function of time. This state fidelity oscillatory behavior brings significant variations to the values of the computational results of a generic gate acting on that state depending on the instants we choose to apply our set of projective measurements. As we shall see, considering some specific gates that are frequently found in the literature, the fast application of the set of projective measurements does not necessarily imply high gate fidelity, and likewise the slow application thereof does not necessarily imply low gate fidelity. Our condition for the occurrence of the fidelity oscillatory behavior shows that the oscillation presented by the cluster state is due exclusively to its initial geometry. Other states that can be used as resources for measurement-based quantum computation can present the same initial geometrical condition. Therefore, it is very important for the present scheme to know when the fidelity of a particular resource state will oscillate in time and, if this is the case, what are the best times to perform the measurements.
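To make the fidelity oscillation under a common dephasing environment concrete, here is a toy calculation (not the authors' exact master-equation dynamics): coherences between eigenstates of the collective operator $\sum_i \sigma_z^{(i)}$ are multiplied by a decay factor and a bath-induced phase, and the fidelity of the two-qubit cluster state is tracked in time. The specific linear-in-time decay and phase functions, and their rates, are illustrative assumptions.

```python
import numpy as np

# Two-qubit cluster state |psi> = CZ (|+>|+>) = (|00> + |01> + |10> - |11>) / 2
psi = np.array([1, 1, 1, -1], dtype=complex) / 2.0
rho0 = np.outer(psi, psi.conj())

# Eigenvalues of the collective sigma_z for basis states |00>, |01>, |10>, |11>
m = np.array([2, 0, 0, -2])

def rho_t(t, kappa=0.3, omega=1.0):
    """Toy collective pure-dephasing map: the (m, m') coherence is multiplied by
    exp(-kappa*t*(m-m')^2) damping and exp(-1j*omega*t*(m^2-m'^2)) bath-induced phase."""
    dm = m[:, None] - m[None, :]
    sm = m[:, None] ** 2 - m[None, :] ** 2
    return rho0 * np.exp(-kappa * t * dm**2) * np.exp(-1j * omega * t * sm)

times = np.linspace(0.0, 5.0, 200)
fidelity = [float(np.real(psi.conj() @ rho_t(t) @ psi)) for t in times]

# The fidelity decays non-monotonically in this toy model, so the instant chosen
# for the projective measurements affects the outcome, as the abstract emphasizes.
print(f"fidelity range for t > 1: {min(fidelity[40:]):.3f} .. {max(fidelity[40:]):.3f}")
```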