929 results for Voltage distributions
Abstract:
The problem of estimating the individual probabilities of a discrete distribution is considered. The true distribution of the independent observations is a mixture of a family of power series distributions. First, we ensure identifiability of the mixing distribution assuming mild conditions. Next, the mixing distribution is estimated by non-parametric maximum likelihood and an estimator for individual probabilities is obtained from the corresponding marginal mixture density. We establish asymptotic normality for the estimator of individual probabilities by showing that, under certain conditions, the difference between this estimator and the empirical proportions is asymptotically negligible. Our framework includes Poisson, negative binomial and logarithmic series as well as binomial mixture models. Simulations highlight the benefit in achieving normality when using the proposed marginal mixture density approach instead of the empirical one, especially for small sample sizes and/or when interest is in the tail areas. A real data example is given to illustrate the use of the methodology.
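As a minimal illustration of the marginal-mixture idea (a sketch of the final step only, not the paper's NPMLE machinery): once a discrete mixing distribution with support points λ_j and weights w_j is in hand, individual probabilities are read off from the marginal mixture pmf. For the Poisson case:

```python
import math

def poisson_mixture_pmf(y, supports, weights):
    """Marginal probability of count y under a discrete Poisson mixture.

    supports: mixing-distribution support points (lambda_j)
    weights:  corresponding mixing weights (summing to 1)
    """
    return sum(w * math.exp(-lam) * lam ** y / math.factorial(y)
               for lam, w in zip(supports, weights))
```

With an estimated mixing distribution, this marginal pmf would replace the raw empirical proportion of each count value, which is the smoothing that the abstract credits for better normality in small samples and tail areas.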
Abstract:
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
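As a single-component caricature of the Horvitz-Thompson step (the paper works with mixtures; a plain zero-truncated Poisson is assumed here purely for illustration): the truncated mean satisfies m = λ/(1 - e^(-λ)), and the population size estimate is N = n/(1 - e^(-λ)), with λ estimated from the observed nonzero counts.

```python
import math

def zt_poisson_lambda(mean, iters=200, tol=1e-12):
    """Solve mean = lam / (1 - exp(-lam)) for lam by fixed-point iteration.
    Requires mean > 1 (the zero-truncated Poisson mean always exceeds 1)."""
    lam = mean
    for _ in range(iters):
        new = mean * (1.0 - math.exp(-lam))
        if abs(new - lam) < tol:
            break
        lam = new
    return lam

def horvitz_thompson_size(counts):
    """Population-size estimate n / (1 - p0) from zero-truncated counts,
    where p0 = exp(-lam) is the estimated probability of a zero count."""
    n = len(counts)
    lam = zt_poisson_lambda(sum(counts) / n)
    return n / (1.0 - math.exp(-lam))
```

For example, four observed individuals with capture counts 1, 1, 2, 3 inflate to an estimated population of roughly 5.6, since some individuals are expected to have been captured zero times.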
Abstract:
The sensitivity of 73 isolates of Mycosphaerella graminicola collected over the period 1993–2002 from wheat fields in southern England was tested in vitro against the triazole fluquinconazole, the strobilurin azoxystrobin and the imidazole prochloraz. Over the sampling period, sensitivity of the population to fluquinconazole and prochloraz decreased by factors of approximately 10 and 2, respectively, but there was no evidence of changes in sensitivity to azoxystrobin. There was no correlation between sensitivity to fluquinconazole and prochloraz, but there was a weak negative cross-resistance between fluquinconazole and azoxystrobin.
Abstract:
The mechanism by which Ca2+ enters electrically non-excitable cells is unclear. The sensitivity of the Ca2+ entry pathway in electrically non-excitable cells to inhibition by extracellular Ni2+ was used to direct the synthesis of a library of simple, novel compounds. These novel compounds inhibit Ca2+ entry into and, consequently, proliferation of several cancer cell lines. They showed stereoselective inhibition of proliferation and Ca2+ influx with identical stereoselective inhibition of heterologously expressed Cav3.2 isoform of T-type Ca2+ channels. Proliferation of human embryonic kidney (HEK)293 cells transfected with the Cav3.2 Ca2+ channel was also blocked. Cancer cell lines sensitive to our compounds express message for the Cav3.2 T-type Ca2+ channel isoform, its delta25B splice variant, or both, while a cell line resistant to our compounds does not. These observations raise the possibility that clinically useful drugs can be designed based upon the ability to block these Ca2+ channels.
Abstract:
The distributions of times to first cell division were determined for populations of Escherichia coli stationary-phase cells inoculated onto agar media. This was accomplished by using automated analysis of digital images of individual cells growing on agar and calculation of the "box area ratio." Using approximately 300 cells per experiment, the mean time to first division and standard deviation for cells grown in liquid medium at 37 degrees C and inoculated on agar and incubated at 20 degrees C were determined as 3.0 h and 0.7 h, respectively. Distributions were observed to tail toward the higher values, but no definitive model distribution was identified. Both preinoculation stress by heating cultures at 50 degrees C and postinoculation stress by growth in the presence of higher concentrations of NaCl increased mean times to first division. Both stresses also resulted in an increase in the spread of the distributions that was proportional to the mean division time, the coefficient of variation being constant at approximately 0.2 in all cases. The "relative division time," which is the time to first division for individual cells expressed in terms of the cell size doubling time, was used as a measure of the "work to be done" to prepare for cell division. Relative division times were greater for heat-stressed cells than for those growing under osmotic stress.
Abstract:
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm h⁻¹ to 20% at 14 mm h⁻¹. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%–80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day⁻¹) in comparison with the random error resulting from infrequent satellite temporal sampling (8%–35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%–15% at 5 mm day⁻¹, with proportionate reductions in latent heating sampling errors.
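The database-search-and-composite step can be caricatured as a likelihood-weighted average over simulated profiles. The sketch below is a generic Bayesian Monte Carlo retrieval, not the algorithm's actual configuration: the database contents and the single channel-error parameter sigma are illustrative assumptions.

```python
import math

def bayesian_composite(obs_tb, database, sigma):
    """Composite rain-rate estimate from profiles radiatively
    consistent with the observed brightness temperatures.

    obs_tb:   observed brightness temperatures, one per channel (K)
    database: list of (simulated_tb, rain_rate) pairs
    sigma:    assumed combined model/observation error (K)
    """
    num = den = 0.0
    for sim_tb, rain in database:
        # Squared radiance misfit, normalized by the error variance
        misfit = sum((o - s) ** 2 for o, s in zip(obs_tb, sim_tb)) / sigma ** 2
        w = math.exp(-0.5 * misfit)  # Gaussian likelihood weight
        num += w * rain
        den += w
    return num / den
```

Profiles whose simulated radiances match the observation dominate the weighted average, which is the sense in which the composite is restricted to "radiatively consistent" profiles.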
Abstract:
The design of high-voltage equipment encompasses the study of oscillatory surges caused by transients such as those produced by switching. By obtaining a model, the response of which reconstructs that observed in the actual system, simulation studies and critical tests can be carried out on the model rather than on the equipment itself. In this paper, methods for the construction of simplified models are described and it is shown how the use of a complex model does not necessarily result in superior response pattern reconstruction.
Abstract:
To ensure minimum loss of system security and revenue it is essential that faults on underground cable systems be located and repaired rapidly. Currently in the UK, the impulse current method is used to prelocate faults, prior to using acoustic methods to pinpoint the fault location. The impulse current method is heavily dependent on the engineer's knowledge and experience in recognising/interpreting the transient waveforms produced by the fault. The development of a prototype real-time expert system aid for the prelocation of cable faults is described. Results from the prototype demonstrate the feasibility and benefits of the expert system as an aid for the diagnosis and location of faults on underground cable systems.
Abstract:
We compare rain event size distributions derived from measurements in climatically different regions, which we find to be well approximated by power laws of similar exponents over broad ranges. Differences can be seen in the large-scale cutoffs of the distributions. Event duration distributions suggest that the scale-free aspects are related to the absence of characteristic scales in the meteorological mesoscale.
Abstract:
Investment risk models with infinite variance provide a better description of distributions of individual property returns in the IPD UK database over the period 1981 to 2003 than normally distributed risk models. This finding mirrors results in the US and Australia using identical methodology. Real estate investment risk is heteroskedastic, but the characteristic exponent of the investment risk function is constant across time, yet it may vary by property type. Asset diversification is far less effective at reducing the impact of non-systematic investment risk on real estate portfolios than in the case of assets with normally distributed investment risk. The results, therefore, indicate that multi-risk factor portfolio allocation models based on measures of investment codependence from finite-variance statistics are ineffective in the real estate context.
Abstract:
Much of the literature on the construction of mixed-asset portfolios and the case for property as a risk diversifier rests on correlations measured over the whole of a given time series. Recent developments in finance, however, focus on dependence in the tails of the distribution. Does property offer diversification from equity markets when it is most needed, when equity returns are poor? The paper uses an empirical copula approach to test tail dependence between property and equity for the UK and for a global portfolio. Results show strong tail dependence: in the UK, the dependence in the lower tail is stronger than in the upper tail, casting doubt on the defensive properties of real estate stocks.
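A minimal empirical-copula sketch of the lower-tail dependence measure in question (an illustrative estimator, not necessarily the paper's exact one): transform each return series to normalized ranks and count joint exceedances below a quantile q.

```python
def lower_tail_dependence(x, y, q):
    """Empirical lower-tail dependence estimate P(U <= q, V <= q) / q,
    where U, V are the normalized ranks (empirical copula margins)."""
    n = len(x)

    def ranks(values):
        order = sorted(range(n), key=lambda i: values[i])
        r = [0.0] * n
        for pos, i in enumerate(order):
            r[i] = (pos + 1) / n  # normalized rank in (0, 1]
        return r

    u, v = ranks(x), ranks(y)
    joint = sum(1 for ui, vi in zip(u, v) if ui <= q and vi <= q)
    return joint / (n * q)
```

Under independence the estimate is near q; a value near 1 signals that the two series crash together, the situation in which diversification fails.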
Abstract:
Investment risk models with infinite variance provide a better description of distributions of individual property returns in the IPD database over the period 1981 to 2003 than Normally distributed risk models, which mirrors results in the U.S. and Australia using identical methodology. Real estate investment risk is heteroscedastic, but the Characteristic Exponent of the investment risk function is constant across time yet may vary by property type. Asset diversification is far less effective at reducing the impact of non-systematic investment risk on real estate portfolios than in the case of assets with Normally distributed investment risk. Multi-risk factor portfolio allocation models based on measures of investment codependence from finite-variance statistics are ineffectual in the real estate context.
Abstract:
Magmas in volcanic conduits commonly contain microlites in association with preexisting phenocrysts, as often indicated by volcanic rock textures. In this study, we present two different experiments that investigate the flow behavior of these bidisperse systems. In the first experiments, rotational rheometric methods are used to determine the rheology of monodisperse and polydisperse suspensions consisting of smaller, prolate particles (microlites) and larger, equant particles (phenocrysts) in a bubble-free Newtonian liquid (silicate melt). Our data show that increasing the relative proportion of prolate microlites to equant phenocrysts in a magma at constant total particle content can increase the relative viscosity by up to three orders of magnitude. Consequently, the rheological effect of particles in magmas cannot be modeled by assuming a monodisperse population of particles. We propose a new model that uses interpolated parameters based on the relative proportions of small and large particles and produces a considerably better fit to the data than earlier models. In a second series of experiments we investigate the textures produced by shearing bimodal suspensions in gradually solidifying epoxy resin in a concentric cylinder setup. The resulting textures show that the prolate particles are aligned with the flow lines and spherical particles are found in well-organized strings, with sphere-depleted shear bands in high-shear regions. These observations may explain the measured variation in the shear thinning and yield stress behavior with increasing solid fraction and particle aspect ratio. The implications for magma flow are discussed, and rheological results and textural observations are compared with observations on natural samples.
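The interpolation idea can be caricatured with a Maron-Pierce relative-viscosity law whose maximum packing fraction is interpolated between prolate and equant endmembers. The endmember packing fractions used here are illustrative assumptions, not the paper's fitted parameters.

```python
def relative_viscosity(phi, xi, phi_m_equant=0.6, phi_m_prolate=0.4):
    """Maron-Pierce relative viscosity (1 - phi/phi_m)^-2, with the maximum
    packing fraction phi_m linearly interpolated by the fraction xi of
    prolate (microlite) particles in the total solid load phi.

    Prolate particles pack less efficiently (assumed lower phi_m), so
    raising xi at fixed phi drives the suspension closer to jamming.
    """
    phi_m = xi * phi_m_prolate + (1.0 - xi) * phi_m_equant
    return (1.0 - phi / phi_m) ** -2
```

At fixed total particle content, shifting the population toward prolate microlites raises the relative viscosity sharply, consistent with the orders-of-magnitude effect reported above.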
Abstract:
This paper presents evidence for several features of the population of chess players, and the distribution of their performances measured in terms of Elo ratings and by computer analysis of moves. Evidence that ratings have remained stable since the inception of the Elo system in the 1970s is given in several forms: by showing that the population of strong players fits a simple logistic-curve model without inflation, by plotting players’ average error against the FIDE category of tournaments over time, and by showing that skill parameters from a model employing computer analysis keep a nearly constant relation to Elo rating across that time. The distribution of the model’s Intrinsic Performance Ratings can hence be used to compare populations that have limited interaction, such as players in a national chess federation and FIDE, and to ascertain relative drift in their respective rating systems.
Abstract:
This article introduces generalized beta-generated (GBG) distributions. Sub-models include all classical beta-generated, Kumaraswamy-generated and exponentiated distributions. They are maximum entropy distributions under three intuitive conditions, which show that the classical beta generator skewness parameters only control tail entropy and an additional shape parameter is needed to add entropy to the centre of the parent distribution. This parameter controls skewness without necessarily differentiating tail weights. The GBG class also has tractable properties: we present various expansions for moments, generating function and quantiles. The model parameters are estimated by maximum likelihood and the usefulness of the new class is illustrated by means of some real data sets.
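The GBG family builds on the classical beta generator, which for a parent cdf F and pdf f defines g(x) = f(x) F(x)^(a-1) (1 - F(x))^(b-1) / B(a, b). A sketch with a standard-normal parent, chosen here purely for illustration:

```python
import math

def beta_generated_pdf(x, a, b, f, F):
    """Classical beta-generated density built from a parent pdf f and cdf F.
    Requires a > 0, b > 0; B(a, b) is the beta function."""
    beta_ab = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    Fx = F(x)
    return f(x) * Fx ** (a - 1) * (1.0 - Fx) ** (b - 1) / beta_ab

def std_normal_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```

Since the construction is F(x) plugged into a Beta(a, b) density, g integrates to one for any valid parent, and the shape parameters a and b reweight the lower and upper tails of the parent distribution.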