75 results for GIBBS FORMALISM
Abstract:
Variational data assimilation in continuous time is revisited. The central techniques applied in this paper are in part adopted from the theory of optimal nonlinear control. Alternatively, the investigated approach can be considered as a continuous-time generalization of what is known as weakly constrained four-dimensional variational assimilation (4D-Var) in the geosciences. The technique makes it possible to assimilate trajectories in the case of partial observations and in the presence of model error. Several mathematical aspects of the approach are studied. Computationally, it amounts to solving a two-point boundary value problem. For imperfect models, the trade-off between small dynamical error (i.e. the trajectory obeys the model dynamics) and small observational error (i.e. the trajectory closely follows the observations) is investigated. This trade-off turns out to be trivial if the model is perfect. However, even in this situation, allowing for minute deviations from the perfect model is shown to have positive effects, namely to regularize the problem. The presented formalism is dynamical in character. No statistical assumptions on dynamical or observational noise are imposed.
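As a hedged illustration of the variational problem described (notation assumed here, not taken from the abstract), the continuous-time weak-constraint cost functional is typically of the form

J[x] = \int_0^T \left( \| \dot{x}(t) - f(x(t)) \|_{Q^{-1}}^2 + \| y(t) - h(x(t)) \|_{R^{-1}}^2 \right) dt,

minimized over trajectories x(·), where f is the model vector field, h the observation operator, y the observations, and the weights Q^{-1} and R^{-1} set the trade-off between dynamical and observational error; the perfect-model (strong-constraint) case corresponds to letting the dynamical penalty dominate, and the stationarity conditions of J lead to the two-point boundary value problem mentioned above.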
Abstract:
An integration by parts formula is derived for the first-order differential operator corresponding to the action of translations on the space of locally finite simple configurations of infinitely many points on R^d. As reference measures, tempered grand canonical Gibbs measures are considered, corresponding to a non-constant, non-smooth intensity (one-body potential) and translation-invariant potentials fulfilling the usual conditions. It is proven that such Gibbs measures fulfill the intuitive integration by parts formula if and only if the translation symmetry is not broken for the particular measure. The latter is automatically fulfilled in the high temperature and low intensity regime.
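For orientation only (a schematic form under assumed notation, not quoted from the paper), an integration by parts formula of this kind reads

\int_\Gamma \nabla^\Gamma_v F \, d\mu = - \int_\Gamma F \, B^\mu_v \, d\mu,

where \nabla^\Gamma_v is the directional derivative along translations generated by a vector field v and B^\mu_v is the logarithmic derivative of the Gibbs measure \mu, which formally collects the directional derivatives of the one-body intensity and of the interaction potential.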
Abstract:
The primary objective was to determine the fatty acid composition of skinless chicken breast and leg meat portions and of chicken burgers and nuggets from the economy price range, the standard price range (both conventional intensive rearing) and the organic range from four leading supermarkets. Few significant differences in the SFA, MUFA and PUFA composition of breast and leg meat portions were found among price ranges, and supermarket had no effect. No significant differences in fatty acid concentrations between economy and standard chicken burgers were found, whereas economy chicken nuggets had higher C16:1, C18:1 cis, C18:1 trans and C18:3 n-3 concentrations than standard ones. Overall, processed chicken products had much higher fat and SFA contents than whole meat. Long-chain n-3 fatty acids were present at considerably lower concentrations in processed products than in whole meat. Overall, there was no evidence that organic chicken breast or leg meat had a more favourable fatty acid composition than meat from conventionally reared birds.
Abstract:
Cross-bred cow adoption is an important and potent policy variable precipitating subsistence-household entry into emerging milk markets. This paper focuses on the problem of designing policies that encourage and sustain milk-market expansion among a sample of subsistence households in the Ethiopian highlands. In this context it is desirable to measure households’ ‘proximity’ to market in terms of the level of deficiency of essential inputs. This problem is compounded by four factors. One is the existence of cross-bred cow numbers (count data) as an important, endogenous decision by the household; the second is the lack of a multivariate generalization of the Poisson regression model; the third is the censored nature of the milk sales data (sales from non-participating households are, essentially, censored at zero); and the fourth is an important simultaneity between the decision to adopt a cross-bred cow, the decision about how much milk to produce, the decision about how much milk to consume and the decision to market the milk that is produced but not consumed internally by the household. Routine application of Gibbs sampling and data augmentation overcomes these problems in a relatively straightforward manner. We model the count data from two sites close to Addis Ababa in a latent, categorical-variable setting with known bin boundaries. The single-equation model is then extended to a multivariate system that accommodates the covariance between the cross-bred-cow adoption, milk-output and milk-sales equations. The latent-variable procedure proves tractable in extension to the multivariate setting and provides important information for policy formation in emerging-market settings.
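A schematic statement of the latent-variable device described, with symbols introduced purely for illustration: the observed cross-bred cow count c_i is tied to a latent continuous index z_i through known bin boundaries a_0 < a_1 < ...,

z_i = x_i' \beta + \varepsilon_i, \qquad c_i = k \iff a_k \le z_i < a_{k+1},

so that data augmentation can draw z_i from a truncated normal given c_i and the regression parameters can then be updated by standard Gibbs steps; stacking this equation with the milk-output and milk-sales equations yields the multivariate system with its cross-equation covariance.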
Abstract:
Data augmentation is a powerful technique for estimating models with latent or missing data, but applications in agricultural economics have thus far been few. This paper showcases the technique in an application to data on milk market participation in the Ethiopian highlands. There, a key impediment to economic development is an apparently low rate of market participation. Consequently, economic interest centers on the “locations” of nonparticipants in relation to the market and their “reservation values” across covariates. These quantities are of policy interest because they provide measures of the additional inputs necessary for nonparticipants to enter the market. One quantity of primary interest is the minimum amount of surplus milk (the “minimum efficient scale of operations”) that the household must acquire before market participation becomes feasible. We estimate this quantity through routine application of data augmentation and Gibbs sampling applied to a random-censored Tobit regression. Incorporating random censoring markedly affects the estimated marketable-surplus requirements of the household but only slightly affects the covariate-requirement estimates and, generally, leads to more plausible policy estimates than those obtained from the zero-censored formulation.
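A minimal Python sketch of Gibbs sampling with data augmentation for a censored (Tobit-type) regression in the spirit of the approach described; the function, priors and variable names are illustrative assumptions, not the paper's code.

import numpy as np
from scipy.stats import truncnorm

def tobit_gibbs(y, X, c, censored, n_iter=2000, seed=0):
    """Gibbs sampler with data augmentation for y_i* = x_i'beta + e_i, e_i ~ N(0, s2).
    y_i is observed when not censored; otherwise only y_i* <= c_i is known,
    where c_i is the (possibly random) censoring point."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    beta, s2 = np.zeros(k), 1.0
    ystar = y.astype(float).copy()
    draws = []
    for _ in range(n_iter):
        # 1) Data augmentation: impute the latent y* for censored observations
        mu, sd = X @ beta, np.sqrt(s2)
        for i in np.where(censored)[0]:
            ub = (c[i] - mu[i]) / sd                      # standardized censoring point
            ystar[i] = mu[i] + sd * truncnorm.rvs(-np.inf, ub, random_state=rng)
        # 2) beta | y*, s2: normal full conditional (flat prior assumed)
        XtX_inv = np.linalg.inv(X.T @ X)
        beta_hat = XtX_inv @ (X.T @ ystar)
        beta = rng.multivariate_normal(beta_hat, s2 * XtX_inv)
        # 3) s2 | y*, beta: inverse-gamma full conditional (flat prior assumed)
        resid = ystar - X @ beta
        s2 = 1.0 / rng.gamma(n / 2.0, 2.0 / (resid @ resid))
        draws.append((beta.copy(), s2))
    return draws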
Abstract:
An important feature of agribusiness promotion programs is their lagged impact on consumption. Efficient investment in advertising requires reliable estimates of these lagged responses, and it is desirable from both applied and theoretical standpoints to have a flexible method for estimating them. This note derives an alternative Bayesian methodology for estimating lagged responses when investments occur intermittently within a time series. The method exploits a latent-variable extension of the natural-conjugate, normal-linear model, Gibbs sampling and data augmentation. It is applied to a monthly time series on Turkish pasta consumption (1993:5-1998:3) and three nonconsecutive promotion campaigns (1996:3, 1997:3, 1997:10). The results suggest that responses were greatest to the second campaign, which allocated its entire budget to television media; that its impact peaked in the sixth month following expenditure; and that the rate of return (measured in metric tons of additional consumption per thousand dollars expended) was around a factor of 20.
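As a hedged illustration of the quantity being estimated (symbols assumed for exposition), the lagged response can be written as a finite distributed lag

y_t = \alpha + \sum_{k=0}^{K} \beta_k A_{t-k} + \varepsilon_t,

where A_t is promotion expenditure, y_t is consumption and the coefficients \beta_k trace the response profile whose peak and cumulative return are reported; with intermittent campaigns most A_{t-k} are zero, and the latent-variable extension of the natural-conjugate normal-linear model with Gibbs sampling is what makes the \beta_k estimable from such sparse investment data.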
Abstract:
We present a model of market participation in which the presence of non-negligible fixed costs leads to random censoring of the traditional double-hurdle model. Fixed costs arise when household resources must be devoted a priori to the decision to participate in the market. These costs, usually of time, are manifested in non-negligible minimum efficient supplies and a supply correspondence that requires modification of the traditional Tobit regression. The costs also complicate econometric estimation of household behavior. These complications are overcome by application of the Gibbs sampler. The algorithm thus derived provides robust estimates of the fixed-costs double-hurdle model. The model and procedures are demonstrated in an application to milk market participation in the Ethiopian highlands.
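A schematic statement of the model under assumed notation (not quoted from the paper): a participation hurdle combined with a supply equation that is censored at a non-negligible minimum efficient scale,

d_i^* = w_i' \gamma + u_i, \qquad y_i^* = x_i' \beta + \varepsilon_i, \qquad y_i = y_i^* \ \text{if}\ d_i^* > 0 \ \text{and}\ y_i^* > c_i, \quad y_i = 0 \ \text{otherwise},

where c_i is the household-specific threshold induced by the fixed (largely time) costs; because c_i is unobserved the censoring point is random, and the Gibbs sampler augments both d_i^* and the censored y_i^*.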
Abstract:
The steadily accumulating literature on technical efficiency in fisheries attests to the importance of efficiency as an indicator of fleet condition and as an object of management concern. In this paper, we extend previous work by presenting a Bayesian hierarchical approach that yields both efficiency estimates and, as a byproduct of the estimation algorithm, probabilistic rankings of the relative technical efficiencies of fishing boats. The estimation algorithm is based on recent advances in Markov chain Monte Carlo (MCMC) methods, Gibbs sampling in particular, which have not been widely used in fisheries economics. We apply the method to a sample of 10,865 boat trips in the US Pacific hake (or whiting) fishery during 1987–2003. We uncover systematic differences between efficiency rankings based on sample mean efficiency estimates and those that exploit the full posterior distributions of boat efficiencies to estimate the probability that a given boat has the highest true mean efficiency.
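A small Python sketch, illustrative only, of the ranking step described: given posterior draws of boat-level mean efficiencies (e.g. from the Gibbs sampler), estimate the probability that each boat has the highest true mean efficiency.

import numpy as np

def prob_highest_efficiency(draws):
    """draws: array of shape (n_draws, n_boats) holding posterior draws of each
    boat's mean technical efficiency. Returns, for each boat, the share of draws
    in which that boat is the most efficient one."""
    winners = np.argmax(draws, axis=1)                    # best boat in each draw
    counts = np.bincount(winners, minlength=draws.shape[1])
    return counts / draws.shape[0]

Ranking boats by these probabilities uses the full posterior and can disagree with a ranking based on posterior mean efficiencies alone, which is the contrast the abstract highlights.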
Abstract:
The dielectric constant, ε', and the dielectric loss, ε'', for gelatin films were measured in the glassy and rubbery states over a frequency range from 20 Hz to 10 MHz; ε' and ε'' were transformed into the M* formalism (M* = 1/(ε' - iε'') = M' + iM''; i, the imaginary unit). The peak of ε'' was masked, probably due to DC conduction, but the peak of M'', i.e. the conductivity relaxation, for the gelatin used was observed. By fitting the M'' data to a Havriliak-Negami-type equation, the relaxation time, τ_HN, was evaluated. The value of the activation energy, E_τ, evaluated from an Arrhenius plot of 1/τ_HN, agreed well with the value of E_σ evaluated from the DC conductivity σ_0, both in the glassy and rubbery states, indicating that the conductivity relaxation observed for the gelatin films was ascribed to ionic conduction. The value of the activation energy in the glassy state was larger than that in the rubbery state.
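A brief Python sketch of the transformation and fit described; the particular Havriliak-Negami parameterisation of M'' used below is an illustrative assumption.

import numpy as np
from scipy.optimize import curve_fit

def electric_modulus(eps_real, eps_imag):
    """Convert the measured permittivity into the electric modulus M* = 1/(eps' - i eps'')."""
    m_star = 1.0 / (eps_real - 1j * eps_imag)
    return m_star.real, m_star.imag                 # M', M''

def hn_modulus_imag(freq_hz, delta_m, tau, alpha, beta):
    """Havriliak-Negami-type shape for the M'' relaxation peak (illustrative form);
    the sign is chosen so the peak is positive under the eps' - i eps'' convention."""
    omega = 2.0 * np.pi * freq_hz
    return -np.imag(delta_m / (1.0 + (1j * omega * tau) ** alpha) ** beta)

# Fitting the M'' peak then yields the conductivity-relaxation time tau_HN, e.g.:
# popt, _ = curve_fit(hn_modulus_imag, freq_hz, m_imag, p0=[0.01, 1e-4, 0.8, 0.9])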
Abstract:
We consider an equilibrium birth-and-death type process for a particle system in infinite volume; the latter is described by the space of all locally finite point configurations on R^d. These Glauber-type dynamics are Markov processes constructed for pre-given reversible measures. A representation for the "carré du champ" and "second carré du champ" of the associated infinitesimal generators L is calculated in infinite volume and for a large class of functions in a generalized sense. The corresponding coercivity identity is derived, and explicit sufficient conditions for the appearance of, and bounds for the size of, the spectral gap of L are given. These techniques are applied to Glauber dynamics associated with Gibbs measures, and conditions are derived extending all previously known results; in particular, potentials with negative parts can now be treated. The high temperature regime is extended substantially, and potentials with a non-trivial negative part can be included. Furthermore, a special class of potentials is defined for which the size of the spectral gap is at least as large as for the free system and, surprisingly, the spectral gap is independent of the activity. This type of potential should not show any phase transition for a given temperature at any activity.
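For reference, the standard definitions behind these objects (not a quotation from the paper): for a generator L,

\Gamma(F,G) = \tfrac{1}{2}\left( L(FG) - F\,LG - G\,LF \right), \qquad \Gamma_2(F,G) = \tfrac{1}{2}\left( L\,\Gamma(F,G) - \Gamma(F,LG) - \Gamma(LF,G) \right),

and a coercivity (Poincaré-type) inequality \int \Gamma(F,F)\, d\mu \ge \lambda\, \mathrm{Var}_\mu(F), valid for all F in a suitable core, gives a spectral gap of L of size at least \lambda.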
Abstract:
A multi-proxy study of a Holocene sediment core (RF 93-30) from the western flank of the central Adriatic, in 77 m of water, reveals a sequence of changes in terrestrial vegetation, terrigenous sediment input and benthic fauna, as well as evidence for variations in sea surface temperature, spanning most of the last 7000 yr. The chronology of sedimentation is based on several lines of evidence, including AMS 14C dates of foraminifera extracted from the core, palaeomagnetic secular variation, pollen indicators and dated tephra. The temporal resolution increases towards the surface and, for some of the properties measured, is sub-decadal for the last few centuries. The main changes recorded in vegetation, sedimentation and benthic foraminiferal assemblages appear to be directly related to human activity in the sediment source area, which includes the Po valley and the eastern flanks of the central and northern Apennines. The most striking episodes of deforestation and expanding human impact begin around 3600 BP (Late Bronze Age) and 700 BP (Medieval), and each leads to an acceleration in mass sedimentation and an increase in the proportion of terrigenous material, reflecting the response of surface processes to widespread forest clearance and cultivation. Although human impact appears to be the proximal cause of these changes, climatic effects may also have been important. During these periods, signs of stress are detectable in the benthic foram morphotype assemblages. Between these two periods of increased terrigenous sedimentation there is a smaller peak in sedimentation rate around 2400 BP which is not associated with evidence for deforestation, shifts in the balance between terrigenous and authigenic sedimentation, or changes in benthic foraminifera. The mineral magnetic record provides a sensitive indicator of changing sediment sources: during forested periods of reduced terrigenous input it is dominated by authigenic bacterial magnetite, whereas during periods of increased erosion anti-ferromagnetic minerals (haematite and/or goethite) become more important, as do both paramagnetic minerals and super-paramagnetic magnetite. Analysis of the alkenone, U37K′, record provides an indication of possible changes in sea surface temperature during the period, but it is premature to place too much reliance on these inferred changes until the indirect effects of past changes in the depth of the halocline and in circulation have been more fully evaluated. The combination of methods used and the results obtained illustrate the potential value of such high-resolution near-shore marine sedimentary sequences for recording wide-scale human impact, documenting the effects of this on marine sedimentation and fauna and, potentially, disentangling evidence for human activities from that for past changes in climate.
Abstract:
We discuss several methods of calculating the DIS structure function F2(x,Q2) based on BFKL-type small-x resummations. Taking into account new HERA data ranging down to small x and low Q2, the pure leading-order BFKL-based approach is excluded. Other methods based on high-energy factorization are closer to conventional renormalization group equations. Despite several difficulties and ambiguities in combining the renormalization group equations with small-x resummed terms, we find that a fit to the current data is hardly feasible, since the data in the low-Q2 region are not as steep as the BFKL formalism predicts. Thus we conclude that deviations from the (successful) renormalization group approach towards summing up logarithms in 1/x are disfavoured by experiment.
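For context, the steepness at issue is the leading-order BFKL prediction of a power-like rise at small x (a textbook relation, stated here as an assumption rather than quoted from the paper),

F_2(x,Q^2) \sim x^{-\lambda}, \qquad \lambda = \frac{4 N_c \ln 2}{\pi}\,\alpha_s \approx 0.5,

which is considerably steeper than the growth observed in the low-Q2 HERA data, whereas conventional renormalization group (DGLAP) evolution accommodates the milder observed rise.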