933 results for Random Pore Model
Abstract:
New radiocarbon calibration curves, IntCal04 and Marine04, have been constructed and internationally ratified to replace the terrestrial and marine components of IntCal98. The new calibration data sets extend an additional 2000 yr, from 0-26 cal kyr BP (Before Present, 0 cal BP = AD 1950), and provide much higher resolution, greater precision, and more detailed structure than IntCal98. For the Marine04 curve, dendrochronologically-dated tree-ring samples, converted with a box diffusion model to marine mixed-layer ages, cover the period from 0-10.5 cal kyr BP. Beyond 10.5 cal kyr BP, high-resolution marine data become available from foraminifera in varved sediments and U/Th-dated corals. The marine records are corrected with site-specific C-14 reservoir age information to provide a single global marine mixed-layer calibration from 10.5-26.0 cal kyr BP. A substantial enhancement relative to IntCal98 is the introduction of a random walk model, which takes into account the uncertainty in both the calendar age and the C-14 age to calculate the underlying calibration curve (Buck and Blackwell, this issue). The marine data sets and calibration curve for marine samples from the surface mixed layer (Marine04) are discussed here. The tree-ring data sets, sources of uncertainty, and regional offsets are presented in detail in a companion paper by Reimer et al. (this issue).
Abstract:
A new calibration curve for the conversion of radiocarbon ages to calibrated (cal) ages has been constructed and internationally ratified to replace IntCal98, which extended from 0-24 cal kyr BP (Before Present, 0 cal BP = AD 1950). The new calibration data set for terrestrial samples extends from 0-26 cal kyr BP, but with much higher resolution beyond 11.4 cal kyr BP than IntCal98. Dendrochronologically-dated tree-ring samples cover the period from 0-12.4 cal kyr BP. Beyond the end of the tree rings, data from marine records (corals and foraminifera) are converted to the atmospheric equivalent with a site-specific marine reservoir correction to provide terrestrial calibration from 12.4-26.0 cal kyr BP. A substantial enhancement relative to IntCal98 is the introduction of a coherent statistical approach based on a random walk model, which takes into account the uncertainty in both the calendar age and the C-14 age to calculate the underlying calibration curve (Buck and Blackwell, this issue). The tree-ring data sets, sources of uncertainty, and regional offsets are discussed here. The marine data sets and calibration curve for marine samples from the surface mixed layer (Marine04) are discussed in brief, but details are presented in Hughen et al. (this issue a). We do not make a recommendation for calibration beyond 26 cal kyr BP at this time; however, potential calibration data sets are compared in another paper (van der Plicht et al., this issue).
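The random-walk idea underlying both curves can be illustrated with a minimal sketch. This is not the Buck and Blackwell construction (which also models calendar-age uncertainty); here the calendar ages are treated as fixed and only the C-14 measurement error is propagated, and all names and parameter values are hypothetical:

```python
import numpy as np

def random_walk_smooth(t, y, sigma, tau):
    """Posterior mean of a Gaussian random-walk prior on the calibration
    curve, given noisy radiocarbon determinations y (lab errors sigma)
    at fixed calendar ages t. tau is the random-walk innovation s.d.
    per unit calendar time (a hypothetical tuning value here)."""
    n = len(t)
    # Random-walk prior precision: penalize first differences of the curve,
    # scaled by the calendar-age spacing.
    D = np.diff(np.eye(n), axis=0)           # (n-1, n) first-difference matrix
    w = 1.0 / (tau**2 * np.diff(t))          # innovation precisions
    Q = D.T @ (w[:, None] * D)               # prior precision matrix
    P = np.diag(1.0 / sigma**2)              # observation precision
    return np.linalg.solve(Q + P, P @ y)     # posterior mean of the curve

# Toy example: a drifting "true" relation observed with lab error.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1000.0, 51)             # calendar ages (cal BP)
truth = 100.0 + 0.9 * t                      # hypothetical true C-14 ages
y = truth + rng.normal(0.0, 25.0, size=t.size)
curve = random_walk_smooth(t, y, sigma=np.full(t.size, 25.0), tau=0.5)
```

The smoothed curve follows the drift of the determinations while suppressing the point-to-point scatter.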
Abstract:
This research is associated with the goal of the horticultural sector of the Colombian southwest, which is to obtain climatic information, specifically, to predict the monthly average temperature at sites where it has not been measured. The data correspond to monthly average temperature and were recorded in meteorological stations at Valle del Cauca, Colombia, South America. Two components are identified in the data of this research: (1) a component due to temporal aspects, determined by characteristics of the time series, the distribution of the monthly average temperature through the months, and the temporal phenomena that increase (El Niño) and decrease (La Niña) the temperature values, and (2) a component due to the sites, determined by the clear differentiation of two populations, the valley and the mountains, which are associated with the pattern of monthly average temperature and with altitude. Finally, due to the closeness between meteorological stations it is possible to find spatial correlation between data from nearby sites. In the first instance, a random coefficient model without a spatial covariance structure in the errors is obtained for each month and geographical location (mountains and valley). Models for wet periods in the mountains show a normal distribution in the errors; models for the valley and for dry periods in the mountains do not exhibit a normal pattern in the errors. In models for the mountains and wet periods, omni-directional weighted variograms for the residuals show spatial continuity. Both the random coefficient model without a spatial covariance structure in the errors and the random coefficient model with a spatial covariance structure in the errors capture the influence of the El Niño and La Niña phenomena, which indicates that the inclusion of the random part in the model is appropriate. The altitude variable contributes significantly in the models for the mountains.
In general, the cross-validation process indicates that the random coefficient models with spatial spherical and spatial Gaussian covariance structures are the best models for wet periods in the mountains, and the worst model is the one used by the Colombian Institute for Meteorology, Hydrology and Environmental Studies (IDEAM) to predict temperature.
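The omni-directional variogram diagnostic used above can be sketched as follows. The binning scheme, names, and toy data are illustrative, not the authors' or IDEAM's code:

```python
import numpy as np

def empirical_variogram(coords, values, n_bins=6):
    """Omnidirectional empirical semivariogram: half the mean squared
    difference between pairs of values, binned by pairwise distance."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    gamma = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)        # each pair counted once
    dist, gamma = dist[iu], gamma[iu]
    edges = np.linspace(0.0, dist.max() + 1e-9, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    sv = np.array([gamma[(dist >= lo) & (dist < hi)].mean()
                   if np.any((dist >= lo) & (dist < hi)) else np.nan
                   for lo, hi in zip(edges[:-1], edges[1:])])
    return centers, sv

# Spatially continuous toy field: semivariance should grow with lag.
gx, gy = np.meshgrid(np.arange(6.0), np.arange(6.0))
coords = np.column_stack([gx.ravel(), gy.ravel()])
field = coords[:, 0] + coords[:, 1]               # smooth surface
lags, sv = empirical_variogram(coords, field)
```

A rising semivariogram, as produced here, is the spatial-continuity signal that motivates fitting spherical or Gaussian covariance models to the residuals.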
Abstract:
The direct radiative forcing of 65 chlorofluorocarbons, hydrochlorofluorocarbons, hydrofluorocarbons, hydrofluoroethers, halons, iodoalkanes, chloroalkanes, bromoalkanes, perfluorocarbons and nonmethane hydrocarbons has been evaluated using a consistent set of infrared absorption cross sections. For the radiative transfer models, both line-by-line and random band model approaches were employed for each gas. The line-by-line model was first validated against measurements taken by the Airborne Research Interferometer Evaluation System (ARIES) of the U.K. Meteorological Office; the computed spectrally integrated radiance agreed to within 2% with the experimental measurements. Three model atmospheres, derived from a three-dimensional climatology, were used in the radiative forcing calculations to more accurately represent hemispheric differences in water vapor, ozone concentrations, and cloud cover. Instantaneous, clear-sky radiative forcing values calculated by the line-by-line and band models were in close agreement. The band model values were subsequently modified to ensure exact agreement with the line-by-line model values. Calibrated band model radiative forcing values, for atmospheric profiles with clouds and using stratospheric adjustment, are reported and compared with previous literature values. Fourteen of the 65 molecules have forcings that differ by more than 15% from those in the World Meteorological Organization [1999] compilation. Eleven of the molecules have not been reported previously. The 65-molecule data set reported here is the most comprehensive and consistent database yet available to evaluate the relative impact of halocarbons and hydrocarbons on climate change.
Abstract:
This paper forecasts daily Sterling exchange rate returns using various naive, linear and non-linear univariate time-series models. The accuracy of the forecasts is evaluated using mean squared error and sign prediction criteria. These show only a very modest improvement over forecasts generated by a random walk model. The Pesaran–Timmermann test and a comparison with forecasts generated artificially show that even the best models display no evidence of market timing ability.
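The two evaluation criteria are easy to state concretely. The sketch below (illustrative data, not the paper's) computes MSE and the sign-prediction hit rate for one-step-ahead return forecasts; a pure random-walk-in-prices benchmark corresponds to forecasting a zero return:

```python
import numpy as np

def mse(returns, forecasts):
    """Mean squared forecast error."""
    return float(np.mean((np.asarray(returns) - np.asarray(forecasts)) ** 2))

def sign_hit_rate(returns, forecasts):
    """Share of days on which the forecast called the direction correctly.
    Note: the random-walk benchmark forecasts a zero return and hence
    makes no directional call at all."""
    r, f = np.asarray(returns), np.asarray(forecasts)
    return float(np.mean(np.sign(r) == np.sign(f)))

# Illustrative daily returns and a competitor model's forecasts
r = [0.010, -0.020, 0.030]
f = [0.020, -0.010, -0.010]
```

With these toy numbers, `mse(r, f)` is 0.0006 and `sign_hit_rate(r, f)` is 2/3; the Pesaran-Timmermann test then asks whether such a hit rate exceeds what independent signs would produce by chance.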
Abstract:
Various studies have indicated a relationship between enteric methane (CH4) production and milk fatty acid (FA) profiles of dairy cattle. However, the number of studies investigating such a relationship is limited and the direct relationships reported are mainly obtained by variation in CH4 production and milk FA concentration induced by dietary lipid supplements. The aim of this study was to perform a meta-analysis to quantify relationships between CH4 yield (per unit of feed and unit of milk) and milk FA profile in dairy cattle and to develop equations to predict CH4 yield based on milk FA profile of cows fed a wide variety of diets. Data from 8 experiments encompassing 30 different dietary treatments and 146 observations were included. Yield of CH4 measured in these experiments was 21.5 ± 2.46 g/kg of dry matter intake (DMI) and 13.9 ± 2.30 g/kg of fat- and protein-corrected milk (FPCM). Correlation coefficients were chosen as effect size of the relationship between CH4 yield and individual milk FA concentration (g/100 g of FA). Average true correlation coefficients were estimated by a random-effects model. Milk FA concentrations of C6:0, C8:0, C10:0, C16:0, and C16:0-iso were significantly or tended to be positively related to CH4 yield per unit of feed. Concentrations of trans-6+7+8+9 C18:1, trans-10+11 C18:1, cis-11 C18:1, cis-12 C18:1, cis-13 C18:1, trans-16+cis-14 C18:1, and cis-9,12 C18:2 in milk fat were significantly or tended to be negatively related to CH4 yield per unit of feed. Milk FA concentrations of C10:0, C12:0, C14:0-iso, C14:0, cis-9 C14:1, C15:0, and C16:0 were significantly or tended to be positively related to CH4 yield per unit of milk. Concentrations of C4:0, C18:0, trans-10+11 C18:1, cis-9 C18:1, cis-11 C18:1, and cis-9,12 C18:2 in milk fat were significantly or tended to be negatively related to CH4 yield per unit of milk.
Mixed model multiple regression and a stepwise selection procedure of milk FA based on the Bayesian information criterion to predict CH4 yield with milk FA as input (g/100 g of FA) resulted in the following prediction equations: CH4 (g/kg of DMI) = 23.39 + 9.74 × C16:0-iso – 1.06 × trans-10+11 C18:1 – 1.75 × cis-9,12 C18:2 (R2 = 0.54), and CH4 (g/kg of FPCM) = 21.13 – 1.38 × C4:0 + 8.53 × C16:0-iso – 0.22 × cis-9 C18:1 – 0.59 × trans-10+11 C18:1 (R2 = 0.47). This indicated that milk FA profile has a moderate potential for predicting CH4 yield per unit of feed and a slightly lower potential for predicting CH4 yield per unit of milk. Key words: methane, milk fatty acid profile, meta-analysis, dairy cattle
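The two reported prediction equations translate directly into code. The coefficients below are exactly those given in the abstract; the function names and sample inputs are ours:

```python
def predict_ch4_per_dmi(c16_0_iso, t10_11_c18_1, c9_12_c18_2):
    """CH4 yield (g/kg of DMI) from milk FA concentrations (g/100 g of FA),
    using the regression reported in the abstract (R2 = 0.54)."""
    return 23.39 + 9.74 * c16_0_iso - 1.06 * t10_11_c18_1 - 1.75 * c9_12_c18_2

def predict_ch4_per_fpcm(c4_0, c16_0_iso, c9_c18_1, t10_11_c18_1):
    """CH4 yield (g/kg of FPCM) from milk FA concentrations (R2 = 0.47)."""
    return 21.13 - 1.38 * c4_0 + 8.53 * c16_0_iso - 0.22 * c9_c18_1 - 0.59 * t10_11_c18_1

# Hypothetical milk FA concentrations, for illustration only
ch4_dmi = predict_ch4_per_dmi(0.2, 1.0, 1.0)
```

Note that with the moderate R2 values reported, these equations give indicative rather than precise CH4 estimates.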
Abstract:
Influences of inbreeding on daily milk yield (DMY), age at first calving (AFC), and calving intervals (CI) were determined on a highly inbred zebu dairy subpopulation of the Guzerat breed. Variance components were estimated using animal models in single-trait analyses. Two approaches were employed to estimate inbreeding depression: using individual increase in inbreeding coefficients or using inbreeding coefficients as possible covariates included in the statistical models. The pedigree file included 9,915 animals, of which 9,055 were inbred, with an average inbreeding coefficient of 15.2%. The maximum inbreeding coefficient observed was 49.45%, and the average inbreeding for the females still in the herd during the analysis was 26.42%. Heritability estimates were 0.27 for DMY and 0.38 for AFC. The genetic variance ratio estimated with the random regression model for CI ranged around 0.10. Increased inbreeding caused poorer performance in DMY, AFC, and CI. However, some of the cows with the highest milk yield were among the highly inbred animals in this subpopulation. Individual increase in inbreeding used as a covariate in the statistical models accounted for inbreeding depression while avoiding overestimation that may result when fitting inbreeding coefficients.
Abstract:
We consider independent edge percolation models on Z, with edge occupation probabilities. We prove that oriented percolation occurs when beta > 1 provided p is chosen sufficiently close to 1, answering a question posed in Newman and Schulman (Commun. Math. Phys. 104: 547, 1986). The proof is based on multi-scale analysis.
Abstract:
In this paper we present a novel approach for multispectral image contextual classification by combining iterative combinatorial optimization algorithms. The pixel-wise decision rule is defined using a Bayesian approach to combine two MRF models: a Gaussian Markov Random Field (GMRF) for the observations (likelihood) and a Potts model for the a priori knowledge, to regularize the solution in the presence of noisy data. Hence, the classification problem is stated according to a Maximum a Posteriori (MAP) framework. In order to approximate the MAP solution we apply several combinatorial optimization methods using multiple simultaneous initializations, making the solution less sensitive to the initial conditions and reducing both computational cost and time in comparison to Simulated Annealing, often unfeasible in many real image processing applications. Markov Random Field model parameters are estimated by a Maximum Pseudo-Likelihood (MPL) approach, avoiding manual adjustments in the choice of the regularization parameters. Asymptotic evaluations assess the accuracy of the proposed parameter estimation procedure. To test and evaluate the proposed classification method, we adopt metrics for quantitative performance assessment (Cohen's Kappa coefficient), allowing a robust and accurate statistical analysis. The obtained results clearly show that combining sub-optimal contextual algorithms significantly improves the classification performance, indicating the effectiveness of the proposed methodology.
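One of the sub-optimal combinatorial optimizers typically combined in this setting is Iterated Conditional Modes (ICM). The following minimal sketch handles a single-band image with an isotropic Gaussian likelihood standing in for the full GMRF, and hand-set parameters rather than MPL estimates:

```python
import numpy as np

def icm_segment(obs, means, sigma, beta, n_iter=5):
    """Contextual classification by Iterated Conditional Modes:
    Gaussian class likelihoods regularized by a Potts prior that
    rewards agreement with the 4-neighborhood."""
    means = np.asarray(means, dtype=float)
    H, W = obs.shape
    labels = np.abs(obs[..., None] - means).argmin(-1)   # pixel-wise ML start
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                nb = [labels[x, y]
                      for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                      if 0 <= x < H and 0 <= y < W]
                # Posterior energy per candidate label: data term minus
                # Potts reward for agreeing neighbors
                energy = [0.5 * ((obs[i, j] - means[k]) / sigma) ** 2
                          - beta * sum(n == k for n in nb)
                          for k in range(len(means))]
                labels[i, j] = int(np.argmin(energy))
    return labels

# Two-class toy image: left half near 0, right half near 1
truth = np.zeros((8, 8), dtype=int)
truth[:, 4:] = 1
out = icm_segment(truth.astype(float), [0.0, 1.0], sigma=0.5, beta=1.0)
```

Because ICM only descends to a local minimum, running it from multiple simultaneous initializations, as the paper does, mitigates its sensitivity to the starting labeling.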
Abstract:
We review some issues related to the implications of different missing data mechanisms on statistical inference for contingency tables and consider simulation studies to compare the results obtained under such models to those where the units with missing data are disregarded. We confirm that although, in general, analyses under the correct missing at random and missing completely at random models are more efficient even for small sample sizes, there are exceptions where they may not improve the results obtained by ignoring the partially classified data. We show that under the missing not at random (MNAR) model, estimates on the boundary of the parameter space as well as lack of identifiability of the parameters of saturated models may be associated with undesirable asymptotic properties of maximum likelihood estimators and likelihood ratio tests; even in standard cases the bias of the estimators may be low only for very large samples. We also show that the probability of a boundary solution obtained under the correct MNAR model may be large even for large samples and that, consequently, we may not always conclude that a MNAR model is misspecified because the estimate is on the boundary of the parameter space.
Abstract:
This thesis consists of four empirically oriented papers on central bank independence (CBI) reforms. Paper [1] is an investigation of why politicians around the world have chosen to give up power to independent central banks, thereby reducing their ability to control the economy. A new data-set, including the possible occurrence of CBI-reforms in 132 countries during 1980-2005, was collected. Politicians in non-OECD countries were more likely to delegate power to independent central banks if their country had been characterized by high variability in inflation and if they faced a high probability of being replaced. No such effects were found for OECD countries. Paper [2], using a difference-in-difference approach, studies whether CBI reform matters for inflation performance. The analysis is based on a dataset including the possible occurrence of CBI-reforms in 132 countries during the period of 1980-2005. CBI reform is found to have contributed to bringing down inflation in high-inflation countries, but it seems unrelated to inflation performance in low-inflation countries. Paper [3] investigates whether CBI-reforms are important in reducing inflation and maintaining price stability, using a random-effects random-coefficients model to account for heterogeneity in the effects of CBI-reforms on inflation. CBI-reforms are found to have reduced inflation on average by 3.31 percent, but the effect is only present when countries with historically high inflation rates are included in the sample. Countries with more modest inflation rates have achieved low inflation without institutional reforms that grant central banks more independence, thus undermining the time-inconsistency theory case for CBI. 
There is furthermore no evidence that CBI-reforms have contributed to lower inflation variability. Paper [4] studies the relationship between CBI and a suggested trade-off between price variability and output variability using data on CBI-levels and data on the implementation dates of CBI-reforms. The results question the existence of such a trade-off, but indicate that there may still be potential gains in stabilization policy from CBI-reforms.
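In its simplest two-period form, the difference-in-difference design of Paper [2] reduces to the following estimator (toy inflation numbers, not the thesis data):

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: the change in mean inflation among
    reforming countries minus the change among non-reforming countries."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical inflation rates (%) before/after a CBI reform window:
# high-inflation reformers vs. low-inflation non-reformers
effect = did_estimate([30.0, 40.0], [10.0, 16.0], [8.0, 12.0], [7.0, 9.0])
```

The control group's change nets out common trends, so `effect` attributes only the excess decline among reformers to the reform, which is the comparison that isolates CBI's contribution in high-inflation countries.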
Abstract:
We present a new version of the hglm package for fitting hierarchical generalized linear models (HGLM) with spatially correlated random effects. A CAR family for conditional autoregressive random effects was implemented. Eigen decomposition of the matrix describing the spatial structure (e.g. the neighborhood matrix) was used to transform the CAR random effects into an independent, but heteroscedastic, Gaussian random effect. A linear predictor is fitted for the random effect variance to estimate the parameters in the CAR model. This gives a computationally efficient algorithm for moderately sized problems (e.g. n < 5000).
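The eigen-decomposition trick can be shown numerically. With CAR precision tau * (I - rho * D), decomposing the neighborhood matrix D diagonalizes the precision, so the rotated effects are independent with eigenvalue-specific variances. The tiny neighborhood matrix and parameter values below are hypothetical:

```python
import numpy as np

# Toy neighborhood (adjacency) matrix for 4 regions on a line
D = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

lam, V = np.linalg.eigh(D)      # eigen decomposition of the spatial structure
rho, tau = 0.3, 2.0             # hypothetical CAR parameters

# The CAR precision tau * (I - rho * D) is diagonal in the eigenbasis:
prec_diag = tau * (1.0 - rho * lam)
Q = tau * (np.eye(4) - rho * D)
assert np.allclose(V @ np.diag(prec_diag) @ V.T, Q)
# So V.T @ u is an independent but heteroscedastic Gaussian effect with
# variances 1 / prec_diag, which a standard HGLM fitter can handle.
```

Because the decomposition of D is computed once, only the diagonal precisions change as rho and tau are updated, which is what makes the algorithm efficient for moderate n.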
Abstract:
We consider methods for estimating causal effects of treatment in the situation where the individuals in the treatment and the control group are self-selected, i.e., the selection mechanism is not randomized. In this case, a simple comparison of treated and control outcomes will not generally yield valid estimates of causal effects. The propensity score method is frequently used for the evaluation of treatment effects. However, this method is based on some strong assumptions, which are not directly testable. In this paper, we present an alternative modeling approach to draw causal inference by using a shared random-effects model, together with a computational algorithm to draw likelihood-based inference with such a model. With small numerical studies and a real data analysis, we show that our approach gives not only more efficient estimates but is also less sensitive to the model misspecifications we consider than the existing methods.
Abstract:
Gibrat's law predicts that firm growth is purely random and should be independent of firm size. We use a random effects-random coefficient model to test whether Gibrat's law holds on average in the studied sample as well as at the individual firm level in the Swedish energy market. No study has yet investigated whether Gibrat's law holds for individual firms, previous studies having instead estimated whether the law holds on average in the samples studied. The present results support the claim that Gibrat's law is more likely to be rejected ex ante when an entire firm population is considered, but more likely to be confirmed ex post after market selection has "cleaned" the original population of firms or when the analysis treats more disaggregated data. From a theoretical perspective, the results are consistent with models based on passive and active learning, indicating a steady state in the firm expansion process and that Gibrat's law is violated in the short term but holds in the long term once firms have reached a steady state. These results indicate that approximately 70 % of firms in the Swedish energy sector are in steady state, with only random fluctuations in size around that level over the 15 studied years.
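The core empirical question, whether growth depends on initial size, can be sketched as a simple regression. The simulated data below obey Gibrat's law by construction and are not the Swedish firm panel; a full random effects-random coefficient model would add firm-specific intercepts and slopes on top of this:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
log_size = rng.normal(5.0, 1.0, n)       # initial firm sizes (log scale)
growth = rng.normal(0.02, 0.05, n)       # Gibrat: growth drawn independently of size

# OLS of growth on log size; under Gibrat's law the slope should be near 0
X = np.column_stack([np.ones(n), log_size])
beta_hat, *_ = np.linalg.lstsq(X, growth, rcond=None)
slope = beta_hat[1]
```

A significantly negative slope in real data would mean small firms grow faster, the ex-ante pattern the paper reports before market selection "cleans" the population.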
Abstract:
In this paper, we present a simple random-matching model of seasons, where different seasons translate into different propensities to consume and produce. We find that the cyclical creation and destruction of money is beneficial for welfare under a wide variety of circumstances. Our model of seasons can be interpreted as providing support for the creation of the Federal Reserve System, with its mandate of supplying an elastic currency for the nation.