995 results for Exactly Solvable Model


Relevance:

30.00%

Publisher:

Abstract:

A refined version of the edge-to-edge matching model is described here. In the original model, the matching directions were obtained from planes in which all the atomic centers lay exactly in the plane, or in which the distance from an atomic center to the plane was less than the atomic radius. The direction-matching pairs were straight rows matched with straight rows and zigzag rows matched with zigzag rows. In the refined model, the matching directions are obtained only from planes in which all the atomic centers lie exactly in the plane.

Relevance:

30.00%

Publisher:

Abstract:

This is an author-created, un-copyedited version of an article accepted for publication in Acta Physica Polonica A. The Version of Record is available online at http://przyrbwn.icm.edu.pl/APP/PDF/118/a118z2p31.pdf

Relevance:

30.00%

Publisher:

Abstract:

The proposed model, called the combinatorial and competitive spatio-temporal memory (CCSTM), provides an elegant solution to the general problem of storing and recalling spatio-temporal patterns in which states, or sequences of states, can recur in various contexts. For example, Fig. 1 shows two state sequences that share a common subsequence, C and D. The CCSTM assumes that any state has a distributed representation as a collection of features. Each feature has an associated competitive module (CM) containing K cells. On any given occurrence of a particular feature, A, exactly one of the cells in CM_A is chosen to represent it. The particular set of cells active on the previous time step determines which cells are chosen to represent instances of their associated features on the current time step. If we assume that typically S features are active in any state, then any state has K^S different neural representations. This huge space of possible neural representations of any state is what underlies the model's ability to store and recall numerous context-sensitive state sequences. The purpose of this paper is simply to describe this mechanism.
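
As a rough illustration of the selection mechanism described above, the following Python sketch picks one cell per active feature's competitive module as a deterministic function of the previously active cell set. The module size K, the feature names, and the hash-based winner rule are assumptions made here for the sketch; they are not the paper's actual selection or learning rule.

```python
import hashlib

K = 8  # illustrative number of cells per competitive module

def choose_cells(active_features, previous_cells):
    """Pick one cell per feature's competitive module (CM).

    The winner in CM_f depends on the set of cells active on the previous
    time step, so the same feature can be represented by different cells
    in different temporal contexts.
    """
    context = ",".join(sorted(previous_cells))
    chosen = set()
    for f in sorted(active_features):
        digest = hashlib.sha256(f"{f}|{context}".encode()).hexdigest()
        cell_index = int(digest, 16) % K          # one of the K cells in CM_f
        chosen.add(f"{f}:{cell_index}")
    return chosen

# Two sequences sharing a common element C get distinct codes for C
# because the preceding context (A vs. X) differs.
seq1_prev = choose_cells({"A"}, set())
seq2_prev = choose_cells({"X"}, set())
print(choose_cells({"C"}, seq1_prev))
print(choose_cells({"C"}, seq2_prev))
```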

Relevance:

30.00%

Publisher:

Abstract:

We describe an empirical, self-consistent, orthogonal tight-binding model for zirconia, which allows for the polarizability of the anions at the dipole and quadrupole levels and for crystal-field splitting of the cation d orbitals. This is achieved by mixing the orbitals of different symmetry on a site with coupling coefficients driven by the Coulomb potentials up to octupole level. The additional forces on atoms due to the self-consistency and polarizabilities are obtained exactly by straightforward electrostatics, by analogy with the Hellmann-Feynman theorem as applied in first-principles calculations. The model correctly orders the zero-temperature energies of all zirconia polymorphs. The Zr-O matrix elements of the Hamiltonian, which measure covalency, make a greater contribution than the polarizability to the energy differences between phases. Results for the elastic constants of the cubic and tetragonal phases and the phonon frequencies of the cubic phase are also presented and compared with experimental data and first-principles calculations. We suggest that the model will be useful for studying finite-temperature effects by means of molecular dynamics.

Relevance:

30.00%

Publisher:

Abstract:

High-resolution spectra of six early B-type main-sequence stars with galactocentric distances between 10 and 18 kpc are presented. We list the equivalent widths of the metal lines and illustrate the hydrogen and helium line profiles. The stars are analysed using LTE line-blanketed model atmosphere techniques to derive atmospheric parameters and surface chemical compositions. All six stars have similar effective temperatures and surface gravities, allowing a reliable comparison of their metal abundances and distances. Significant variations in the photospheric abundances are evident, and we discuss the need for a more detailed line-by-line differential analysis to quantify the differences exactly. This will be presented in a companion paper (Smartt et al. 1996).

Relevance:

30.00%

Publisher:

Abstract:

In this paper we present TANC, a tree-augmented naive credal classifier based on imprecise probabilities; it models prior near-ignorance via the Extreme Imprecise Dirichlet Model (EDM) (Cano et al., 2007) and deals conservatively with missing data in the training set, without assuming them to be missing at random. The EDM is an approximation of the global Imprecise Dirichlet Model (IDM) that considerably simplifies the computation of upper and lower probabilities; yet, having been only recently introduced, the quality of the approximation it provides still needs to be verified. As a first contribution, we extensively compare the output of the naive credal classifier (one of the few cases in which the global IDM can be implemented exactly) when learned with the EDM and with the global IDM; the outputs are identical in the vast majority of cases, supporting the adoption of the EDM in real classification problems. Then, through experiments, we show that TANC is more reliable than the precise TAN (learned with a uniform prior) and that it provides better performance than a previous TAN model based on imprecise probabilities (Zaffalon, 2003). TANC treats missing data by considering all possible completions of the training set, while avoiding an exponential increase in computational time; finally, we present some preliminary results with missing data.
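
For context, the minimal Python sketch below shows how the global IDM turns observed class counts into lower and upper probabilities, using the standard formulas lower = n/(N+s) and upper = (n+s)/(N+s). The counts, the class labels and the value of the hyperparameter s are illustrative assumptions; this is not the TANC implementation itself.

```python
def idm_interval(count, total, s=1.0):
    """Lower/upper probability of a class under the Imprecise Dirichlet Model.

    count : observed occurrences of the class
    total : total number of observations
    s     : IDM hyperparameter (prior strength); s = 1 or 2 is typical
    """
    lower = count / (total + s)
    upper = (count + s) / (total + s)
    return lower, upper

# Illustrative counts for three classes out of 20 training instances.
counts = {"a": 12, "b": 5, "c": 3}
total = sum(counts.values())
for label, n in counts.items():
    lo, up = idm_interval(n, total, s=2.0)
    print(f"P({label}) in [{lo:.3f}, {up:.3f}]")
```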

Relevance:

30.00%

Publisher:

Abstract:

We consider a Bertrand duopoly model with unknown costs. Each firm aims to choose the price of its product according to the well-known concept of Bayesian Nash equilibrium, and the choices are made simultaneously by both firms. In this paper we suppose that each firm has two different technologies and uses one of them according to a certain probability distribution; the technology used determines the unit production cost. We show that this game has exactly one Bayesian Nash equilibrium. We analyse the advantages, for firms and for consumers, of using the technology with the highest production cost versus the one with the lowest production cost. We prove that the expected profit of each firm increases with the variance of its production costs. We also show that the expected price of each good increases with both expected production costs, the effect of the rival's expected production cost being dominated by the effect of the firm's own expected production cost.
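
As a rough numerical illustration of a price-setting Bayesian Nash equilibrium with two cost types per firm, the sketch below assumes linear demand for differentiated goods, q_i = a - b*p_i + d*p_j, and two equally likely unit costs for each firm. The demand form and all parameter values are assumptions made here, not the paper's specification. Firm i's best response given its realized cost c is p_i(c) = (a + d*E[p_j] + b*c)/(2b); taking expectations gives a linear system in the expected prices.

```python
import numpy as np

# Assumed linear demand q_i = a - b*p_i + d*p_j; each firm observes its own
# unit cost (two equally likely values) before setting its price.
a, b, d = 10.0, 1.0, 0.5
costs = {1: (1.0, 3.0), 2: (2.0, 4.0)}          # (low, high) unit costs
exp_cost = {i: 0.5 * (lo + hi) for i, (lo, hi) in costs.items()}

# Expected-price system from the best responses:
#   2b*E[p_1] - d*E[p_2] = a + b*E[c_1]
#  -d*E[p_1] + 2b*E[p_2] = a + b*E[c_2]
coeff = np.array([[2 * b, -d],
                  [-d, 2 * b]])
rhs = np.array([a + b * exp_cost[1],
                a + b * exp_cost[2]])
exp_price = dict(zip((1, 2), np.linalg.solve(coeff, rhs)))

# Type-contingent equilibrium prices.
for i, j in ((1, 2), (2, 1)):
    for c in costs[i]:
        p = (a + d * exp_price[j] + b * c) / (2 * b)
        print(f"firm {i}, cost {c}: price {p:.3f}")
```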

Relevance:

30.00%

Publisher:

Abstract:

Stimuli outside classical receptive fields significantly influence the neurons' activities in primary visual cortex. We propose that such contextual influences are used to segment regions by detecting the breakdown of homogeneity or translation invariance in the input, thus computing global region boundaries using local interactions. This is implemented in a biologically based model of V1, and demonstrated in examples of texture segmentation and figure-ground segregation. By contrast with traditional approaches, segmentation occurs without classification or comparison of features within or between regions and is performed by exactly the same neural circuit responsible for the dual problem of the grouping and enhancement of contours.

Relevance:

30.00%

Publisher:

Abstract:

Data assimilation aims to incorporate measured observations into a dynamical system model in order to produce accurate estimates of all the current (and future) state variables of the system. The optimal estimates minimize a variational principle and can be found using adjoint methods. The model equations are treated as strong constraints on the problem. In reality, the model does not represent the system behaviour exactly and errors arise due to lack of resolution and inaccuracies in physical parameters, boundary conditions and forcing terms. A technique for estimating systematic and time-correlated errors as part of the variational assimilation procedure is described here. The modified method determines a correction term that compensates for model error and leads to improved predictions of the system states. The technique is illustrated in two test cases. Applications to the 1-D nonlinear shallow water equations demonstrate the effectiveness of the new procedure.
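
In the spirit of the augmented variational problem described above, the following Python sketch estimates both an initial state and a constant model-error (bias) correction for a toy scalar model by minimizing a quadratic cost function. The toy dynamics, error variances and synthetic observations are assumptions for illustration only and bear no relation to the shallow water test cases in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy "truth": damped scalar dynamics with an unknown constant forcing error.
a_model, true_bias, n_steps = 0.95, 0.3, 40
truth = np.empty(n_steps)
truth[0] = 2.0
for k in range(1, n_steps):
    truth[k] = a_model * truth[k - 1] + true_bias

obs = truth + rng.normal(0.0, 0.1, n_steps)     # noisy observations
x_b, sigma_b, sigma_o, sigma_q = 1.5, 1.0, 0.1, 0.5

def run_model(x0, bias):
    x = np.empty(n_steps)
    x[0] = x0
    for k in range(1, n_steps):
        x[k] = a_model * x[k - 1] + bias        # model plus error correction
    return x

def cost(params):
    x0, bias = params
    x = run_model(x0, bias)
    jb = ((x0 - x_b) / sigma_b) ** 2            # background term
    jo = np.sum(((x - obs) / sigma_o) ** 2)     # observation misfit
    jq = (bias / sigma_q) ** 2                  # penalty on the model-error term
    return jb + jo + jq

result = minimize(cost, x0=np.array([x_b, 0.0]))
print("estimated initial state and bias:", result.x)
```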

Relevance:

30.00%

Publisher:

Abstract:

Multiple equilibria in a coupled ocean–atmosphere–sea ice general circulation model (GCM) of an aquaplanet with many degrees of freedom are studied. Three different stable states are found for exactly the same set of parameters and external forcings: a cold state in which a polar sea ice cap extends into the midlatitudes; a warm state, which is ice free; and a completely sea ice–covered “snowball” state. Although low-order energy balance models of the climate are known to exhibit intransitivity (i.e., more than one climate state for a given set of governing equations), the results reported here are the first to demonstrate that this is a property of a complex coupled climate model with a consistent set of equations representing the 3D dynamics of the ocean and atmosphere. The coupled model notably includes atmospheric synoptic systems, large-scale circulation of the ocean, a fully active hydrological cycle, sea ice, and a seasonal cycle. There are no flux adjustments, with the system being solely forced by incoming solar radiation at the top of the atmosphere. It is demonstrated that the multiple equilibria owe their existence to the presence of meridional structure in ocean heat transport: namely, a large heat transport out of the tropics and a relatively weak high-latitude transport. The associated large midlatitude convergence of ocean heat transport leads to a preferred latitude at which the sea ice edge can rest. The mechanism operates in two very different ocean circulation regimes, suggesting that the stabilization of the large ice cap could be a robust feature of the climate system. Finally, the role of ocean heat convergence in permitting multiple equilibria is further explored in simpler models: an atmospheric GCM coupled to a slab mixed layer ocean and an energy balance model.
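
The intransitivity of low-order energy balance models mentioned above can be reproduced in a few lines. The Python sketch below finds the equilibria of a zero-dimensional EBM with a temperature-dependent albedo; all parameter values and the albedo form are chosen here purely for illustration and this is in no way the coupled GCM of the study.

```python
import numpy as np

# Zero-dimensional energy balance: C dT/dt = S0/4 * (1 - albedo(T)) - (A + B*T)
S0 = 1365.0            # solar constant, W m^-2
A, B = 203.3, 2.09     # linearized outgoing longwave radiation (T in deg C)

def albedo(T):
    """Smooth transition from an ice-covered (0.62) to an ice-free (0.30) planet."""
    return 0.62 - 0.32 * 0.5 * (1.0 + np.tanh(T / 5.0))

def net_flux(T):
    return S0 / 4.0 * (1.0 - albedo(T)) - (A + B * T)

# Scan for sign changes of the net flux: each bracketed root is an equilibrium.
T_grid = np.linspace(-60.0, 60.0, 2401)
F = net_flux(T_grid)
roots = []
for k in range(len(T_grid) - 1):
    if F[k] == 0.0 or F[k] * F[k + 1] < 0.0:
        lo, hi = T_grid[k], T_grid[k + 1]
        for _ in range(60):                       # bisection refinement
            mid = 0.5 * (lo + hi)
            if net_flux(lo) * net_flux(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))

for T_eq in roots:
    stable = net_flux(T_eq + 0.1) < net_flux(T_eq - 0.1)  # dF/dT < 0 => stable
    print(f"equilibrium at {T_eq:6.2f} C, {'stable' if stable else 'unstable'}")
```

With these illustrative values the scan finds a cold, a warm and an intermediate unstable equilibrium for the same forcing, which is the low-order analogue of the behaviour reported for the coupled model.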

Relevance:

30.00%

Publisher:

Abstract:

Current state-of-the-art global climate models produce different values for Earth’s mean temperature. When comparing simulations with each other and with observations it is standard practice to compare temperature anomalies with respect to a reference period. It is not always appreciated that the choice of reference period can affect conclusions, both about the skill of simulations of past climate, and about the magnitude of expected future changes in climate. For example, observed global temperatures over the past decade are towards the lower end of the range of CMIP5 simulations irrespective of what reference period is used, but exactly where they lie in the model distribution varies with the choice of reference period. Additionally, we demonstrate that projections of when particular temperature levels are reached, for example 2K above ‘pre-industrial’, change by up to a decade depending on the choice of reference period. In this article we discuss some of the key issues that arise when using anomalies relative to a reference period to generate climate projections. We highlight that there is no perfect choice of reference period. When evaluating models against observations, a long reference period should generally be used, but how long depends on the quality of the observations available. The IPCC AR5 choice to use a 1986-2005 reference period for future global temperature projections was reasonable, but a case-by-case approach is needed for different purposes and when assessing projections of different climate variables. Finally, we recommend that any studies that involve the use of a reference period should explicitly examine the robustness of the conclusions to alternative choices.
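
To make the point about reference periods concrete, this small Python sketch computes anomalies of the same synthetic temperature series relative to two different reference periods and shows how the apparent model-minus-observations offset over a recent decade shifts with that choice. The series, the trends and the periods are invented for illustration and are not the CMIP5 or observational data discussed above.

```python
import numpy as np

years = np.arange(1900, 2021)
rng = np.random.default_rng(1)

# Synthetic global-mean temperatures: a common warming trend, with the
# "model" running slightly warm and both series carrying independent noise.
trend = 0.008 * (years - 1900)
obs = 13.8 + trend + rng.normal(0.0, 0.08, years.size)
model = 14.1 + 1.1 * trend + rng.normal(0.0, 0.08, years.size)

def anomalies(series, ref_start, ref_end):
    """Subtract the mean over the reference period ref_start..ref_end (inclusive)."""
    mask = (years >= ref_start) & (years <= ref_end)
    return series - series[mask].mean()

for ref in [(1961, 1990), (1986, 2005)]:
    obs_anom = anomalies(obs, *ref)
    model_anom = anomalies(model, *ref)
    recent = years >= 2011
    gap = (model_anom[recent] - obs_anom[recent]).mean()
    print(f"reference {ref}: mean model-minus-obs anomaly 2011-2020 = {gap:+.3f} K")
```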

Relevance:

30.00%

Publisher:

Abstract:

The evolution of the mass of a black hole embedded in a universe filled with dark energy and cold dark matter is calculated in closed form within a test fluid model in a Schwarzschild metric, taking into account the cosmological evolution of both fluids. The result describes exactly how accretion switches asymptotically from the matter-dominated to the Lambda-dominated regime. At early epochs the black hole mass increases due to dark matter accretion, and at later epochs the increase in mass stops as dark energy accretion takes over. Thus, the unphysical behaviour of previous analyses is remedied in this simple exact model. (C) 2010 Elsevier B.V. All rights reserved.
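
A crude numerical illustration of this switching behaviour is given below, integrating the commonly used test-fluid accretion rate dM/dt proportional to M^2 (rho + p) for pressureless matter (rho_m proportional to a^-3, p = 0) alongside a cosmological constant (for which rho + p = 0) in a flat Lambda CDM background. The units, the accretion constant and the initial values are arbitrary choices for the sketch, not those of the paper's closed-form solution.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless setup: time in units of 1/H0, densities in units of the
# critical density today, and an arbitrary accretion constant `acc`.
omega_m, omega_l, acc = 0.3, 0.7, 0.05

def rhs(t, y):
    a, m = y
    hubble = np.sqrt(omega_m / a**3 + omega_l)
    rho_m = omega_m / a**3                 # pressureless matter: rho + p = rho
    # Dark energy with w = -1 has rho + p = 0, so it does not drive accretion.
    dadt = a * hubble
    dmdt = acc * m**2 * rho_m
    return [dadt, dmdt]

sol = solve_ivp(rhs, t_span=(0.01, 3.0), y0=[0.1, 1.0], dense_output=True, rtol=1e-8)

# The mass grows while matter dominates and levels off once Lambda takes over.
for t in (0.05, 0.2, 0.5, 1.0, 2.0, 3.0):
    a, m = sol.sol(t)
    print(f"t = {t:4.2f}/H0: a = {a:6.3f}, M/M0 = {m:6.3f}")
```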

Relevance:

30.00%

Publisher:

Abstract:

A new accelerating cosmology driven only by baryons plus cold dark matter (CDM) is proposed in the framework of general relativity. In this scenario the present accelerating stage of the Universe is powered by the negative pressure describing the gravitationally induced particle production of cold dark matter particles. This kind of scenario has only one free parameter, and the differential equation governing the evolution of the scale factor is exactly the same as that of the Lambda CDM model. For a spatially flat Universe, as predicted by inflation (Omega(dm) + Omega(baryon) = 1), it is found that the effectively observed matter density parameter is Omega(meff) = 1 - alpha, where alpha is the constant parameter specifying the CDM particle creation rate. The supernovae test based on the Union data (2008) requires alpha ~ 0.71, so that Omega(meff) ~ 0.29, as independently derived from weak gravitational lensing, the large-scale structure and other complementary observations.
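
Since the abstract states that the expansion history is governed by the same equation as in Lambda CDM with Omega(meff) = 1 - alpha, the small sketch below evaluates the corresponding Hubble rate and luminosity distance for an assumed alpha. The values of alpha and H0 and the integration grid are illustrative choices only, not a reproduction of the paper's supernova fit.

```python
import numpy as np

alpha, H0 = 0.71, 70.0          # creation-rate parameter and H0 in km/s/Mpc (assumed)
c_light = 299792.458            # speed of light, km/s
omega_meff = 1.0 - alpha        # effective matter density parameter

def hubble(z):
    """Expansion rate, identical in form to flat Lambda CDM with Omega_Lambda -> alpha."""
    return H0 * np.sqrt(omega_meff * (1.0 + z) ** 3 + alpha)

def luminosity_distance(z, n=2000):
    """D_L in Mpc for a flat universe, via trapezoidal integration of c/H."""
    zs = np.linspace(0.0, z, n)
    integrand = c_light / hubble(zs)
    comoving = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zs))
    return (1.0 + z) * comoving

for z in (0.1, 0.5, 1.0):
    d_l = luminosity_distance(z)
    mu = 5.0 * np.log10(d_l) + 25.0   # distance modulus used in supernova fits
    print(f"z = {z}: D_L = {d_l:8.1f} Mpc, mu = {mu:5.2f}")
```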

Relevance:

30.00%

Publisher:

Abstract:

In interval-censored survival data, the event of interest is not observed exactly but is only known to occur within some time interval. Such data appear very frequently. In this paper we are concerned only with parametric forms, and so a location-scale regression model based on the exponentiated Weibull distribution is proposed for modeling interval-censored data. We show that the proposed log-exponentiated Weibull regression model for interval-censored data represents a parametric family that includes other regression models broadly used in lifetime data analysis. Assuming interval-censored data, we employ a frequentist analysis, a jackknife estimator, a parametric bootstrap and a Bayesian analysis for the parameters of the proposed model. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes and present some ways to assess global influence. Furthermore, various simulations are performed for different parameter settings, sample sizes and censoring percentages; in addition, the empirical distribution of some modified residuals is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to a modified deviance residual in log-exponentiated Weibull regression models for interval-censored data. (C) 2009 Elsevier B.V. All rights reserved.
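
To illustrate the basic likelihood construction for interval-censored data, the sketch below fits an exponentiated Weibull distribution (without covariates) by maximizing the sum of log[F(R_i) - F(L_i)] over the observed intervals, using the parameterization F(t) = [1 - exp(-(t/sigma)^gamma)]^theta. The simulated data, inspection grid and optimizer settings are assumptions for this example rather than the paper's full location-scale regression model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

def ew_cdf(t, gamma, sigma, theta):
    """Exponentiated Weibull CDF, F(t) = [1 - exp(-(t/sigma)^gamma)]^theta."""
    return (1.0 - np.exp(-(t / sigma) ** gamma)) ** theta

# Simulate event times by inverse-CDF sampling, then censor each one into
# the half-open inspection interval [L, R) that contains it.
true_gamma, true_sigma, true_theta = 1.5, 2.0, 0.8
u = rng.uniform(size=300)
times = true_sigma * (-np.log(1.0 - u ** (1.0 / true_theta))) ** (1.0 / true_gamma)
grid = np.arange(0.0, 12.0, 0.5)                 # inspection times
left = grid[np.searchsorted(grid, times, side="right") - 1]
right = left + 0.5

def neg_loglik(params):
    gamma, sigma, theta = np.exp(params)          # log-parameters stay positive
    prob = ew_cdf(right, gamma, sigma, theta) - ew_cdf(left, gamma, sigma, theta)
    return -np.sum(np.log(np.clip(prob, 1e-300, None)))

fit = minimize(neg_loglik, x0=np.zeros(3), method="Nelder-Mead")
print("estimated (gamma, sigma, theta):", np.exp(fit.x))
```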

Relevance:

30.00%

Publisher:

Abstract:

Two fundamental processes usually arise in the production planning of many industries. The first consists of deciding how many final products of each type have to be produced in each period of a planning horizon, the well-known lot sizing problem. The other consists of cutting raw materials in stock in order to produce smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper the decision variables of these two problems are made dependent on each other in order to obtain a globally optimal solution. The setups typically present in lot sizing problems are relaxed, together with the integer frequencies of cutting patterns in the cutting problem. A large-scale linear optimization problem therefore arises, which is solved exactly by a column generation technique. It is worth noting that this combined problem still takes into account the trade-off between storage costs (for final products and parts) and trim losses (in the cutting process). We present several sets of computational tests, analysed over three different scenarios. The results show that, by combining the problems and using an exact method, it is possible to obtain significant gains compared with the usual industrial practice, which solves them in sequence. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
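
The core of a column generation scheme for the cutting part is the pricing subproblem: given dual prices from the restricted master LP, find the cutting pattern with the most negative reduced cost. The Python sketch below solves that pricing step as an unbounded knapsack by dynamic programming, in the spirit of Gilmore and Gomory; the roll length, part sizes and dual values are invented for illustration, and the restricted master LP that would supply the duals is not shown.

```python
def price_pattern(roll_length, part_lengths, duals):
    """Unbounded-knapsack DP: maximize sum_i pi_i * a_i subject to
    sum_i length_i * a_i <= roll_length, a_i non-negative integers.
    Returns (best dual value, pattern as a list of counts)."""
    dp = [0.0] * (roll_length + 1)
    take = [-1] * (roll_length + 1)   # part added at this capacity, -1 if none
    for cap in range(1, roll_length + 1):
        dp[cap] = dp[cap - 1]          # leaving one unit of the roll unused
        for i, (length, pi) in enumerate(zip(part_lengths, duals)):
            if length <= cap and dp[cap - length] + pi > dp[cap] + 1e-12:
                dp[cap] = dp[cap - length] + pi
                take[cap] = i
    # Reconstruct the pattern by walking back through the taken parts.
    pattern = [0] * len(part_lengths)
    cap = roll_length
    while cap > 0:
        if take[cap] == -1:
            cap -= 1
        else:
            pattern[take[cap]] += 1
            cap -= part_lengths[take[cap]]
    return dp[roll_length], pattern

roll_length = 100
part_lengths = [45, 36, 31, 14]
duals = [0.50, 0.40, 0.35, 0.15]       # illustrative duals from a master LP

value, pattern = price_pattern(roll_length, part_lengths, duals)
reduced_cost = 1.0 - value             # each roll has unit cost in the master
print("pattern:", pattern, "value:", round(value, 3),
      "reduced cost:", round(reduced_cost, 3))
# A negative reduced cost means the pattern enters the master LP as a new column;
# the loop stops when no pattern prices out.
```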