925 results for Statistical packages


Relevance:

20.00%

Publisher:

Abstract:

In this paper, we propose a novel and efficient algorithm for modelling sub-65 nm clock interconnect networks in the presence of process variation. We develop a method for delay analysis of interconnects considering the impact of Gaussian metal process variations. The resistance and capacitance of a distributed RC line are expressed as correlated Gaussian random variables, which are then used to compute the standard deviation of the delay Probability Distribution Function (PDF) at all nodes in the interconnect network. The main objective is to obtain the delay PDF at low computational cost. The approach converges in probability distribution, though not in the mean of the delay. We validate our approach against SPICE-based Monte Carlo simulations; the proposed method entails significantly lower computational cost.
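The core variance calculation can be illustrated with a first-order (delta-method) estimate of the standard deviation of an Elmore-style delay T = R·C under correlated Gaussian R and C, checked against a Monte Carlo run. This is a minimal sketch with hypothetical nominal values, not the paper's algorithm; the Monte Carlo loop stands in for the SPICE-based validation:

```python
import numpy as np

# Hypothetical nominal values for one lumped RC stage (not from the paper).
R_mean, C_mean = 100.0, 2e-13            # ohms, farads
sigma_R, sigma_C, rho = 5.0, 1e-14, 0.6  # correlated Gaussian variations

# First-order std of the delay T = R*C:
# Var(T) ~= C^2 Var(R) + R^2 Var(C) + 2 R C Cov(R, C)
cov_RC = rho * sigma_R * sigma_C
var_T = (C_mean * sigma_R) ** 2 + (R_mean * sigma_C) ** 2 \
        + 2.0 * R_mean * C_mean * cov_RC
sigma_T_analytic = np.sqrt(var_T)

# Monte Carlo check over the correlated Gaussian (R, C) pair.
rng = np.random.default_rng(0)
cov = [[sigma_R ** 2, cov_RC], [cov_RC, sigma_C ** 2]]
R, C = rng.multivariate_normal([R_mean, C_mean], cov, size=200_000).T
sigma_T_mc = (R * C).std()

print(sigma_T_analytic, sigma_T_mc)
```

With a few percent of relative variation the first-order estimate agrees with the sampled standard deviation to well under a percent, at a tiny fraction of the sampling cost.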


Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census in the 1940s had developed a sampling design for the Current Population Survey (CPS). A significant factor was also that digital computers became available to statisticians. In the beginning of the 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by a French scientist, P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem, both published in a memoir in 1774 which is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly, which he depicted by assuming a priori distributions for parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive. At the 1894 meeting of the International Statistical Institute, the Norwegian Anders Kiaer presented the idea of the Representative Method for drawing samples: the sample should be a miniature of the population. That idea still prevails. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and, at the beginning of the 20th century, carried out several surveys in the UK. He also developed the theory of statistical inference for finite populations, based on Laplace's inference model. R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science: he revolutionized the theory of statistics.
In addition, he introduced a new statistical inference model which is still the prevailing paradigm. Its essential ideas are that samples are drawn repeatedly from the same population and that population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's model makes no assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling presented a theory for double sampling. This gave statisticians at the U.S. Census Bureau the central idea for developing the complex survey design of the CPS. An important criterion was to have a method whose data-collection costs were acceptable and which provided approximately equal interviewer workloads, in addition to sufficient accuracy in estimation.


A systematic structure analysis of the correlation functions of statistical quantum optics is carried out. From a suitably defined auxiliary two-point function we are able to identify the excited modes in the wave field. The relative simplicity of the higher-order correlation functions emerges as a byproduct, and the conditions under which these are made pure are derived. These results depend in a crucial manner on the notions of coherence indices and of unimodular coherence indices. A new class of approximate expressions for the density operator of a statistical wave field is worked out, based on discrete characteristic sets. These are even more economical than the diagonal coherent-state representations. An appreciation of the subtleties of quantum theory is obtained. Certain implications for the physics of light beams are cited.


The absorption produced by the audience in concert halls is considered a random variable. Beranek's proposal [L. L. Beranek, Music, Acoustics and Architecture (Wiley, New York, 1962), p. 543] that audience absorption is proportional to the area the audience occupies, and not to their number, is subjected to a statistical hypothesis test. A two-variable linear regression model of the absorption, with audience area and residual area as regressor variables, is postulated for concert halls without added absorptive materials. Since Beranek's contention amounts to the statement that audience absorption is independent of the seating density, the test of the hypothesis lies in categorizing halls by seating density and examining for significant differences among the slopes of the regression planes of the different categories. Such a test shows that Beranek's hypothesis can be accepted. It is also shown that the audience area is a better predictor of the absorption than the audience number. The absorption coefficients and their 95% confidence limits are given for the audience and residual areas. A critique of the regression model is presented.
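The regression setup can be sketched as follows. This uses synthetic hall data (the coefficients, areas, and noise level are illustrative, not the measured halls): total absorption A is modelled as a plane through the origin in audience area Sa and residual area Sr, fitted separately per seating-density category so the audience-area slopes can be compared:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for hall data: A = a*Sa + r*Sr + noise, with the
# audience coefficient `a` shared across seating-density categories,
# which is what Beranek's hypothesis asserts.
def make_halls(n, a=0.85, r=0.10):
    Sa = rng.uniform(300, 1500, n)   # audience area, m^2
    Sr = rng.uniform(500, 3000, n)   # residual area, m^2
    A = a * Sa + r * Sr + rng.normal(0, 20, n)
    return np.column_stack([Sa, Sr]), A

def fit_slopes(X, A):
    # Least-squares regression plane through the origin: A = a*Sa + r*Sr.
    coef, *_ = np.linalg.lstsq(X, A, rcond=None)
    return coef

low = fit_slopes(*make_halls(40))    # low seating-density category
high = fit_slopes(*make_halls(40))   # high seating-density category

# Under the hypothesis, the audience-area slopes agree across categories.
print(low[0], high[0])
```

A formal test would compare the slope difference against its standard error; here the two fitted audience-area slopes simply land close to the common value used to generate the data.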


We have developed a theory for an electrochemical way of measuring the statistical properties of a nonfractally rough electrode. We obtained the expression for the current transient on a rough electrode, which exhibits three time regimes: the short- and long-time limits and the transition region between them. The expressions for these time ranges are exploited to extract morphological information about the surface roughness. In the short- and long-time regimes, we extract information regarding various morphological features such as the roughness factor, average roughness, curvature, correlation length, dimensionality of roughness, and a polynomial approximation for the correlation function. The formulas for the surface structure factors (the measure of surface roughness) of rough surfaces in terms of the measured reversible and diffusion-limited current transients are also obtained. Finally, we explore the feasibility of making such measurements.
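The short-time limit can be illustrated with the classical Cottrell transient. This is a sketch under assumed parameter values, not the paper's derivation: at short times the diffusion layer is thin and follows the surface, so the diffusion-limited current on a rough electrode is the planar Cottrell transient scaled by the roughness factor R* (true area over projected area), while at long times it relaxes back to the planar law:

```python
import numpy as np

F = 96485.0                   # Faraday constant, C/mol
n_e, c0, D = 1, 1e-6, 1e-9    # assumed: electrons, mol/cm^3, cm^2/s
A_proj, R_star = 1.0, 1.4     # projected area (cm^2), roughness factor

def cottrell(t):
    # Diffusion-limited current on a planar electrode of area A_proj.
    return n_e * F * A_proj * c0 * np.sqrt(D / (np.pi * t))

t = np.logspace(-4, 1, 50)
I_planar = cottrell(t)
I_rough_short = R_star * I_planar   # short-time limit: current sees full area

# The ratio of the two transients at short times recovers the roughness
# factor, which is one of the morphological features extracted above.
print(I_rough_short[0] / I_planar[0])   # -> 1.4
```

In this simplified picture, fitting the short-time transient against the planar Cottrell prediction reads off R*; the transition region carries the finer structure-factor information.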


The problem of estimating the time-dependent statistical characteristics of a random dynamical system is studied under two different settings. In the first, the system dynamics is governed by a differential equation parameterized by a random parameter, while in the second it is governed by a differential equation with an underlying parameter sequence characterized by a continuous-time Markov chain. We propose, for the first time in the literature, stochastic approximation algorithms for estimating various time-dependent process characteristics of the system. In particular, we provide efficient estimators for quantities such as the mean, variance and distribution of the process at any given time, as well as the joint distribution and the autocorrelation coefficient at different times. A novel aspect of our approach is that we assume that information on the parameter model (i.e., its distribution in the first case and the transition probabilities of the Markov chain in the second) is not available in either case. This is unlike most other work in the literature, which assumes the availability of such information. Also, most prior work in the literature is geared towards analyzing the steady-state behavior of the random dynamical system, whereas our focus is on analyzing the time-dependent statistical characteristics, which are in general difficult to obtain. We prove the almost sure convergence of our stochastic approximation scheme in each case to the true value of the quantity being estimated. We provide a general class of strongly consistent estimators for the aforementioned statistical quantities, with regular sample-average estimators being a specific instance of these. We also present an application of the proposed scheme to a widely used model in population biology. Numerical experiments in this framework show that the time-dependent process characteristics obtained using our algorithm in each case exhibit excellent agreement with exact results.
(C) 2010 Elsevier Inc. All rights reserved.
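The flavor of the first setting can be sketched with a stochastic-approximation estimator of the mean of the process at a fixed time. The toy system below is illustrative and not from the paper: x'(t) = -lam*x(t), x(0) = 1, with lam drawn from a distribution that the estimator never sees; it only observes simulated trajectories:

```python
import numpy as np

rng = np.random.default_rng(2)
t_obs = 1.0  # fixed time at which the mean of the process is estimated

def sample_trajectory():
    # Hidden parameter draw; the estimator does not know this distribution.
    lam = rng.exponential(1.0)
    return np.exp(-lam * t_obs)   # closed-form solution x(t_obs) = e^{-lam t}

theta = 0.0
for k in range(1, 50_001):
    x = sample_trajectory()
    theta += (x - theta) / k      # Robbins-Monro step with gain a_k = 1/k

# With a_k = 1/k this recursion is exactly the running sample mean, a
# specific instance of the strongly consistent estimators described above.
# For lam ~ Exp(1), the exact mean is E[exp(-lam * t)] = 1/(1 + t) = 0.5.
print(theta)
```

Smaller, non-vanishing gains would track a slowly changing quantity instead of averaging all history equally; the 1/k gain is the choice that recovers the plain sample average.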


The statistical thermodynamics of adsorption in caged zeolites is developed by treating the zeolite as an ensemble of M identical cages or subsystems. Within each cage adsorption is assumed to occur onto a lattice of n identical sites. Expressions for the average occupancy per cage are obtained by minimizing the Helmholtz free energy in the canonical ensemble subject to the constraints of constant M and constant number of adsorbates N. Adsorbate-adsorbate interactions in the Bragg-Williams or mean field approximation are treated in two ways. The local mean field approximation (LMFA) is based on the local cage occupancy, and the global mean field approximation (GMFA) is based on the average coverage of the ensemble. The GMFA is shown to be equivalent in formulation to treating the zeolite as a collection of interacting single-site subsystems. In contrast, the treatment in the LMFA retains the description of the zeolite as an ensemble of identical cages, whose thermodynamic properties are conveniently derived in the grand canonical ensemble. For a z-coordinated lattice within the zeolite cage, with epsilon(aa) as the adsorbate-adsorbate interaction parameter, the comparisons for different values of epsilon(aa)(*)=epsilon(aa)z/2kT, and number of sites per cage, n, illustrate that for -1 0. We compare the isotherms predicted with the LMFA with previous GMFA predictions [K. G. Ayappa, C. R. Kamala, and T. A. Abinandanan, J. Chem. Phys. 110, 8714 (1999)] (which incorporate both the site volume reduction and a coverage-dependent epsilon(aa)) for xenon and methane in zeolite NaA. In all cases the predicted isotherms are very similar, with the exception of a small steplike feature present in the LMFA for xenon at higher coverages. (C) 1999 American Institute of Physics. [S0021-9606(99)70333-8].
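The global-mean-field picture can be sketched with a standard Bragg-Williams lattice-gas isotherm, in which every site sees the ensemble-average coverage theta. This is a generic mean-field illustration, not the paper's cage model; the reduced chemical potential mu_star, the sign convention for eps_star, and the parameter values are all assumptions:

```python
import numpy as np

def gmfa_coverage(mu_star, eps_star, iters=200):
    # Solve the mean-field self-consistency relation
    #   theta / (1 - theta) = exp(mu* + 2 eps* theta)
    # by fixed-point iteration (the map is a contraction for weak coupling).
    theta = 0.5
    for _ in range(iters):
        x = np.exp(mu_star + 2.0 * eps_star * theta)
        theta = x / (1.0 + x)
    return theta

# Isotherm: average coverage versus reduced chemical potential for a
# weakly attractive interaction (eps_star = 0.5, below the critical value).
isotherm = [gmfa_coverage(mu, eps_star=0.5) for mu in np.linspace(-3, 3, 7)]
print(np.round(isotherm, 3))
```

For weak attraction the isotherm rises smoothly and monotonically with mu_star; stronger attraction steepens it toward the step-like condensation behavior that distinguishes the LMFA and GMFA predictions at higher coverages.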


Fetal lung and liver tissues were examined by ultrasound in 240 subjects between 24 and 38 weeks of gestational age in order to investigate the feasibility of predicting the maturity of the lung from the textural features of sonograms. A region of interest of 64 x 64 pixels was used for extracting textural features. Since the histological properties of the liver are claimed to remain constant with respect to gestational age, features obtained from the lung region are compared with those from the liver. Though the mean values of some of the features show a specific trend with respect to gestational age, the variance is too high to guarantee definite prediction of the gestational age. Thus, we restricted our purview to an investigation into the feasibility of fetal lung maturity prediction using statistical textural features. Out of the 64 features extracted, those that are correlated with gestational age and less computationally intensive were selected. The results of our study show that the sonographic features hold some promise in determining whether the fetal lung is mature or immature.
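Statistical texture features of the kind used in such studies are commonly derived from a gray-level co-occurrence matrix (GLCM). This is a minimal sketch on a synthetic 64 x 64 region of interest; the specific features, gray-level quantization, and pixel offset are illustrative assumptions, not the study's feature set:

```python
import numpy as np

rng = np.random.default_rng(3)
roi = rng.integers(0, 8, size=(64, 64))   # synthetic ROI, 8 gray levels

def glcm_features(img, levels=8):
    # Co-occurrence matrix for horizontally adjacent pixel pairs (offset 0,1).
    glcm = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()                 # joint probability of gray-level pairs
    i, j = np.indices(p.shape)
    energy = (p ** 2).sum()               # angular second moment (uniformity)
    contrast = ((i - j) ** 2 * p).sum()   # local intensity variation
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    return energy, contrast, homogeneity

print(glcm_features(roi))
```

In a study like the one above, such scalar features would be computed for the lung and liver ROIs of each subject and then screened for correlation with gestational age.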


We propose a scheme for the compression of tree-structured intermediate code consisting of a sequence of trees specified by a regular tree grammar. The scheme is based on arithmetic coding, and the model that works in conjunction with the coder is automatically generated from the syntactical specification of the tree language. Experiments on data sets consisting of intermediate code trees yield compression ratios ranging from 2.5 to 8, for file sizes ranging from 167 bytes to 1 megabyte.
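The model side of such a scheme can be sketched as follows. An arithmetic coder spends about -log2(p) bits on a symbol of probability p, so given production probabilities for a tree grammar, the ideal compressed size of a tree is the sum of those code lengths over its nodes. The single-nonterminal grammar and its probabilities below are hypothetical, not taken from the paper:

```python
import math

# Hypothetical production probabilities for one nonterminal of a regular
# tree grammar; in the paper's scheme the model is generated automatically
# from the syntactical specification of the tree language.
PROBS = {
    "Expr": {"Add": 0.5, "Mul": 0.3, "Const": 0.2},
}

def code_length(tree):
    """Ideal arithmetic-code length (bits) of a tree: the sum over nodes of
    -log2 P(production | nonterminal)."""
    op, children = tree
    bits = -math.log2(PROBS["Expr"][op])
    return bits + sum(code_length(c) for c in children)

# The tree (Add (Const) (Mul (Const) (Const))):
tree = ("Add", [("Const", []), ("Mul", [("Const", []), ("Const", [])])])
print(round(code_length(tree), 3))   # -> 9.703
```

The better the grammar-derived probabilities match the actual production frequencies in the intermediate code, the closer the arithmetic coder gets to this ideal length, which is where the reported compression ratios come from.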