965 results for LOG-S DISTRIBUTIONS
Abstract:
We have used a fully coupled chemistry-climate model (CCM), which generates its own wind and temperature quasi-biennial oscillation (QBO), to study the effect of coupling on the QBO and to examine the QBO signals in stratospheric trace gases, particularly ozone. Radiative coupling of the interactive chemistry to the underlying general circulation model tends to prolong the QBO period and to increase the QBO amplitude in the equatorial zonal wind in the lower and middle stratosphere. The model ozone QBO agrees well with Stratospheric Aerosol and Gas Experiment II and Total Ozone Mapping Spectrometer satellite observations in terms of vertical and latitudinal structure. The model captures the ozone QBO phase change near 28 km over the equator and the column phase change near +/- 15 degrees latitude. Diagnosis of the model chemical terms shows that variations in NOx are the main chemical driver of the O3 QBO around 35 km, i.e., above the O3 phase change.
Abstract:
Many models of immediate memory predict the presence or absence of various effects, but none have been tested to see whether they predict an appropriate distribution of effect sizes. The authors show that the feature model (J. S. Nairne, 1990) produces appropriate distributions of effect sizes for both the phonological confusion effect and the word-length effect. The model produces the appropriate number of reversals, in which participants are more accurate with similar items or with long items, and also correctly predicts that participants performing less well overall demonstrate smaller and less reliable phonological similarity and word-length effects and are more likely to show reversals. These patterns appear within the model without the need to assume a change in encoding or rehearsal strategy or the deployment of a different storage buffer. The implications of these results and the wider applicability of the distribution-modeling approach are discussed.
Abstract:
Europe is a densely populated region that is a significant global source of black carbon (BC) aerosol, but there is a lack of information regarding the physical properties and spatial/vertical distribution of BC in the region. We present the first aircraft observations of sub-micron refractory BC (rBC) aerosol concentrations and physical properties measured by a single particle soot photometer (SP2) in the lower troposphere over Europe. The observations spanned a region roughly bounded by 50° to 60° N and from 15° W to 30° E. The measurements, made between April and September 2008, showed that average rBC mass concentrations ranged from about 300 ng m−3 near urban areas to approximately 50 ng m−3 in remote continental regions, lower than previous surface-based measurements. rBC represented between 0.5 and 3% of the sub-micron aerosol mass. Black carbon mass size distributions were log-normally distributed and peaked at approximately 180 nm, but shifted to smaller diameters (~160 nm) near source regions. rBC was correlated with carbon monoxide (CO) but had different ratios to CO depending on location and air mass. Light absorption coefficients were measured by particle soot absorption photometers on two separate aircraft and showed similar geographic patterns to rBC mass measured by the SP2. We summarize the rBC and light absorption measurements as a function of longitude and air mass age and also provide profiles of rBC mass concentrations and size distribution statistics. Our results will help evaluate model-predicted regional rBC concentrations and properties and determine regional and global climate impacts from rBC due to atmospheric heating and surface dimming.
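A minimal sketch of the log-normal mass size distribution described above, with the mode placed at the reported 180 nm; the geometric standard deviation, total mass loading, and function name are illustrative assumptions rather than values from the study.

```python
# Log-normal black-carbon mass size distribution dM/dlnD with its mode at 180 nm.
# sigma_g and the total mass loading are assumed values for illustration only.
import numpy as np

def lognormal_mass_distribution(D_nm, D_mode_nm=180.0, sigma_g=1.6, M_total_ng_m3=300.0):
    """dM/dlnD for a single log-normal mode peaking at D_mode_nm."""
    ln_sigma = np.log(sigma_g)
    return (M_total_ng_m3 / (np.sqrt(2.0 * np.pi) * ln_sigma)
            * np.exp(-(np.log(D_nm / D_mode_nm)) ** 2 / (2.0 * ln_sigma ** 2)))

D = np.logspace(np.log10(50), np.log10(600), 200)   # diameters in nm
dMdlnD = lognormal_mass_distribution(D)
print(f"distribution peaks near {D[dMdlnD.argmax()]:.0f} nm")
```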
Abstract:
We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration by time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently the 90% confidence interval for the differences in the means on the natural log scale should be within the interval (-0.2231, 0.2231). We compare the gold standard method for calculation of the sample size based on the non-central t distribution with those based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desirable level of power might be under- or overestimated compared to the gold standard method. However, in some situations the approximate methods produce very similar sample sizes to the gold standard method. Copyright © 2005 John Wiley & Sons, Ltd.
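As a rough illustration of the sample-size comparison discussed above, the sketch below computes TOST power for a 2 × 2 crossover using the non-central t distribution and then two simpler approximations (a shifted central t and a normal form, which are common textbook variants rather than necessarily the exact ones compared in the paper), and searches for the smallest even total sample size reaching a target power. The CV, true ratio, and target power are assumed example values.

```python
# Sample size for average bioequivalence in a 2x2 crossover via the TOST procedure,
# comparing non-central t, shifted central t, and normal approximations to power.
import numpy as np
from scipy import stats

def tost_power(n, cv, ratio=0.95, alpha=0.05, theta=np.log(1.25), method="nct"):
    """Approximate TOST power for a 2x2 crossover with n subjects in total."""
    sigma_w = np.sqrt(np.log(1.0 + cv**2))   # within-subject SD on the natural log scale
    delta = np.log(ratio)                     # true difference on the log scale
    se = sigma_w * np.sqrt(2.0 / n)           # SE of the estimated difference
    df = n - 2
    tcrit = stats.t.ppf(1.0 - alpha, df)
    ncp_lo = (delta + theta) / se             # non-centrality vs. lower bound ln(0.80)
    ncp_hi = (delta - theta) / se             # non-centrality vs. upper bound ln(1.25)
    if method == "nct":                       # non-central t ("gold standard")
        power = stats.nct.cdf(-tcrit, df, ncp_hi) - stats.nct.cdf(tcrit, df, ncp_lo)
    elif method == "t":                       # shifted central t approximation
        power = stats.t.cdf(-tcrit - ncp_hi, df) - stats.t.cdf(tcrit - ncp_lo, df)
    else:                                     # normal approximation
        zcrit = stats.norm.ppf(1.0 - alpha)
        power = stats.norm.cdf(-zcrit - ncp_hi) - stats.norm.cdf(zcrit - ncp_lo)
    return max(power, 0.0)

def sample_size(cv, ratio=0.95, target=0.8, method="nct"):
    """Smallest even total n giving at least the target power."""
    for n in range(4, 1000, 2):
        if tost_power(n, cv, ratio=ratio, method=method) >= target:
            return n
    return None

for m in ("nct", "t", "norm"):
    print(m, sample_size(cv=0.25, ratio=0.95, method=m))
```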
Abstract:
The problem of estimating the individual probabilities of a discrete distribution is considered. The true distribution of the independent observations is a mixture of a family of power series distributions. First, we ensure identifiability of the mixing distribution assuming mild conditions. Next, the mixing distribution is estimated by non-parametric maximum likelihood and an estimator for individual probabilities is obtained from the corresponding marginal mixture density. We establish asymptotic normality for the estimator of individual probabilities by showing that, under certain conditions, the difference between this estimator and the empirical proportions is asymptotically negligible. Our framework includes Poisson, negative binomial and logarithmic series as well as binomial mixture models. Simulations highlight the benefit in achieving normality when using the proposed marginal mixture density approach instead of the empirical one, especially for small sample sizes and/or when interest is in the tail areas. A real data example is given to illustrate the use of the methodology.
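As a small illustration of estimating individual probabilities through a mixture rather than from raw empirical proportions, the sketch below fits mixing weights for a Poisson mixture supported on a fixed grid by EM and compares the resulting marginal probabilities with the empirical ones. The grid, the simulated data, and the use of EM on a fixed grid (rather than the full non-parametric MLE of the mixing distribution) are simplifying assumptions.

```python
# Individual probabilities of a discrete distribution estimated through a
# Poisson mixture with a fixed grid of means, fitted by EM.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
# Simulated counts from a two-component Poisson mixture (the "true" model).
x = np.where(rng.random(200) < 0.6, rng.poisson(1.0, 200), rng.poisson(5.0, 200))

grid = np.linspace(0.1, 10.0, 50)            # candidate Poisson means
w = np.full(grid.size, 1.0 / grid.size)      # initial mixing weights

for _ in range(500):                         # EM iterations
    # E-step: responsibility of each grid point for each observation.
    lik = poisson.pmf(x[:, None], grid[None, :]) * w     # shape (n_obs, n_grid)
    resp = lik / lik.sum(axis=1, keepdims=True)
    # M-step: update the mixing weights.
    w = resp.mean(axis=0)

# Individual probabilities from the marginal mixture density...
ks = np.arange(0, 12)
p_mixture = (poisson.pmf(ks[:, None], grid[None, :]) * w).sum(axis=1)
# ...compared with the raw empirical proportions.
p_empirical = np.array([(x == k).mean() for k in ks])
print(np.round(p_mixture, 3))
print(np.round(p_empirical, 3))
```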
Abstract:
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a well-developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it can readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
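A minimal sketch of the Horvitz-Thompson idea mentioned above, using the simplest case of a single (unmixed) zero-truncated Poisson rather than a mixture; the simulated data and the true population size are illustrative assumptions, and a mixture of truncated Poissons would be fitted analogously.

```python
# Horvitz-Thompson population-size estimate from zero-truncated Poisson counts.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
y = rng.poisson(1.5, 1000)
observed = y[y > 0]                 # zero counts correspond to unobserved individuals
n, xbar = observed.size, observed.mean()

# MLE of lambda for the zero-truncated Poisson solves  lambda / (1 - exp(-lambda)) = xbar.
lam = brentq(lambda l: l / (1.0 - np.exp(-l)) - xbar, 1e-6, 50.0)

# Horvitz-Thompson: each observed unit is seen with probability 1 - p0.
p0 = np.exp(-lam)
N_hat = n / (1.0 - p0)
print(f"observed n = {n}, lambda_hat = {lam:.3f}, N_hat = {N_hat:.1f} (true N = 1000)")
```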
Abstract:
The sensitivity of 73 isolates of Mycosphaerella graminicola, collected over the period 1993–2002 from wheat fields in southern England, was tested in vitro against the triazole fluquinconazole, the strobilurin azoxystrobin and the imidazole prochloraz. Over the sampling period, sensitivity of the population to fluquinconazole and prochloraz decreased by factors of approximately 10 and 2, respectively, but there was no evidence of changes in sensitivity to azoxystrobin. There was no correlation between sensitivity to fluquinconazole and prochloraz, but there was a weak negative cross-resistance between fluquinconazole and azoxystrobin.
Abstract:
Analyses of high-density single-nucleotide polymorphism (SNP) data, such as genetic mapping and linkage disequilibrium (LD) studies, require phase-known haplotypes to allow for the correlation between tightly linked loci. However, current SNP genotyping technology cannot determine phase, which must be inferred statistically. In this paper, we present a new Bayesian Markov chain Monte Carlo (MCMC) algorithm for population haplotype frequency estimation, particularly in the context of LD assessment. The novel feature of the method is the incorporation of a log-linear prior model for population haplotype frequencies. We present simulations to suggest that 1) the log-linear prior model is more appropriate than the standard coalescent process in the presence of recombination (> 0.02 cM between adjacent loci), and 2) there is substantial inflation in measures of LD obtained by a "two-stage" approach to the analysis that treats the "best" haplotype configuration as correct, without regard to uncertainty in the recombination process. Genet Epidemiol 25:106-114, 2003. © 2003 Wiley-Liss, Inc.
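The Bayesian MCMC sampler with a log-linear prior is beyond a short sketch, but the phase-inference problem it addresses can be illustrated with the classical EM approach for two SNPs: only double heterozygotes are phase-ambiguous, and their expected haplotype counts are split according to the current frequency estimates. The genotype coding, the example data, and the choice of plain EM (rather than the paper's MCMC with a log-linear prior) are assumptions made purely for illustration.

```python
# Maximum-likelihood haplotype frequency estimation for two biallelic SNPs by EM.
# Genotypes are coded 0/1/2 copies of the alternate allele at each SNP.
import numpy as np

def haplotype_em(genotypes, n_iter=200):
    """Return frequencies of haplotypes 00, 01, 10, 11 for two biallelic SNPs."""
    p = np.full(4, 0.25)                       # p[2*a + b] = frequency of haplotype (a, b)
    for _ in range(n_iter):
        counts = np.zeros(4)
        for g1, g2 in genotypes:
            if g1 == 1 and g2 == 1:
                # Double heterozygote: phase is ambiguous, so split the expected
                # haplotype counts between the pairs {00, 11} and {01, 10}.
                w_cis = p[0] * p[3]
                w_trans = p[1] * p[2]
                frac = w_cis / (w_cis + w_trans)
                counts[[0, 3]] += frac
                counts[[1, 2]] += 1.0 - frac
            else:
                # At most one SNP is heterozygous, so the two haplotypes are unique.
                alleles1 = [0] * (2 - g1) + [1] * g1
                alleles2 = [0] * (2 - g2) + [1] * g2
                counts[2 * alleles1[0] + alleles2[0]] += 1
                counts[2 * alleles1[1] + alleles2[1]] += 1
        p = counts / counts.sum()
    return p

data = [(0, 0), (1, 0), (1, 1), (2, 1), (1, 1), (2, 2), (0, 1), (1, 2)]
print(np.round(haplotype_em(data), 3))
```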
Abstract:
The distributions of times to first cell division were determined for populations of Escherichia coli stationary-phase cells inoculated onto agar media. This was accomplished by using automated analysis of digital images of individual cells growing on agar and calculation of the "box area ratio." Using approximately 300 cells per experiment, the mean time to first division and standard deviation for cells grown in liquid medium at 37 degrees C and inoculated on agar and incubated at 20 degrees C were determined as 3.0 h and 0.7 h, respectively. Distributions were observed to tail toward the higher values, but no definitive model distribution was identified. Both preinoculation stress by heating cultures at 50 degrees C and postinoculation stress by growth in the presence of higher concentrations of NaCl increased mean times to first division. Both stresses also resulted in an increase in the spread of the distributions that was proportional to the mean division time, the coefficient of variation being constant at approximately 0.2 in all cases. The "relative division time," which is the time to first division for individual cells expressed in terms of the cell size doubling time, was used as a measure of the "work to be done" to prepare for cell division. Relative division times were greater for heat-stressed cells than for those growing under osmotic stress.
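A quick check of the summary statistics quoted above: the coefficient of variation follows directly from the reported mean and standard deviation, and the "relative division time" is simply the ratio of division time to the cell-size doubling time; the doubling time used here is an assumed example value, not one from the study.

```python
# Coefficient of variation from the reported mean and standard deviation,
# and the "relative division time" ratio; the doubling time is assumed.
mean_first_division_h = 3.0    # reported mean time to first division
sd_first_division_h = 0.7      # reported standard deviation
cv = sd_first_division_h / mean_first_division_h
print(f"coefficient of variation = {cv:.2f}")          # ~0.2, as reported

doubling_time_h = 1.5          # assumed cell-size doubling time (illustrative only)
relative_division_time = mean_first_division_h / doubling_time_h
print(f"relative division time = {relative_division_time:.1f} doublings")
```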
Abstract:
Cloud radar and lidar can be used to evaluate the skill of numerical weather prediction models in forecasting the timing and placement of clouds, but care must be taken in choosing the appropriate metric of skill to use due to the non-Gaussian nature of cloud-fraction distributions. We compare the properties of a number of different verification measures and conclude that, of existing measures, the Log of Odds Ratio is the most suitable for cloud fraction. We also propose a new measure, the Symmetric Extreme Dependency Score, which has very attractive properties, being equitable (for large samples), difficult to hedge and independent of the frequency of occurrence of the quantity being verified. We then use data from five European ground-based sites and seven forecast models, processed using the ‘Cloudnet’ analysis system, to investigate the dependence of forecast skill on cloud fraction threshold (for binary skill scores), height, horizontal scale and (for the Met Office and German Weather Service models) forecast lead time. The models are found to be least skillful at predicting the timing and placement of boundary-layer clouds and most skillful at predicting mid-level clouds, although in the latter case they tend to underestimate mean cloud fraction when cloud is present. It is found that skill decreases approximately inverse-exponentially with forecast lead time, enabling a forecast ‘half-life’ to be estimated. When considering the skill of instantaneous model snapshots, we find typical values ranging between 2.5 and 4.5 days. Copyright © 2009 Royal Meteorological Society
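As a rough illustration of the binary skill scores discussed above, the sketch below computes the log of the odds ratio and a Symmetric Extreme Dependency Score from a 2 × 2 contingency table, and converts an e-folding decay time of skill into a half-life. The contingency counts and decay time are invented example numbers, and the SEDS expression is written in its commonly quoted form and should be treated as an assumption rather than a quotation of the paper's definition.

```python
# Binary skill scores from a 2x2 forecast/observation contingency table.
# a = hits, b = false alarms, c = misses, d = correct rejections (assumed counts).
import numpy as np

a, b, c, d = 120.0, 40.0, 60.0, 780.0
n = a + b + c + d

# Log of Odds Ratio.
log_odds_ratio = np.log((a * d) / (b * c))

# Symmetric Extreme Dependency Score: 1 for a perfect forecast, ~0 for a random one
# (formula given in its commonly quoted form; treat as an assumption).
p_forecast = (a + b) / n
p_observed = (a + c) / n
seds = np.log(p_forecast * p_observed) / np.log(a / n) - 1.0

print(f"log odds ratio = {log_odds_ratio:.2f}, SEDS = {seds:.2f}")

# If skill decays as exp(-t / tau), the forecast "half-life" is tau * ln(2).
tau_days = 4.0                                # assumed e-folding time of the skill score
print(f"forecast half-life = {tau_days * np.log(2):.1f} days")
```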