991 results for Random variables
Abstract:
Let (Xi) be a sequence of i.i.d. random variables, and let N be a geometric random variable independent of (Xi). Geometric stable distributions are weak limits of (normalized) geometric compounds, SN = X1 + · · · + XN, as the mean of N converges to infinity. By an appropriate representation of the individual summands in SN we obtain a series representation of the limiting geometric stable distribution. In addition, we study the asymptotic behavior of the partial-sum process SN(t) = X1 + · · · + X[Nt], and derive series representations of the limiting geometric stable process and the corresponding stochastic integral. We also obtain strong invariance principles for stable and geometric stable laws.
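As a minimal numerical sketch of the compounding scheme described in this abstract (not taken from the paper), one can simulate SN with N geometric and standard normal summands, a choice under which the normalized compound is known to converge to a Laplace law, a special geometric stable distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def geometric_compound(p, n_samples=100_000):
    """Simulate normalized geometric compounds S_N = X_1 + ... + X_N with
    N ~ Geometric(p) independent of the iid X_i (standard normal here).
    As p -> 0 (E[N] -> infinity), sqrt(p) * S_N converges weakly to a
    geometric stable law -- a Laplace law for finite-variance summands."""
    n = rng.geometric(p, size=n_samples)      # N has mean 1/p
    # Given N = n, the sum of n standard normals is Normal(0, sqrt(n)).
    s = rng.normal(0.0, np.sqrt(n))
    return np.sqrt(p) * s

z = geometric_compound(p=1e-3)
# The Laplace limit here has variance 1 and excess kurtosis 3.
print("variance ~", z.var())
print("excess kurtosis ~", ((z - z.mean())**4).mean() / z.var()**2 - 3)
```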
Abstract:
In the area of stress-strength models there has been a large amount of work on the estimation of the reliability R = Pr(X2 < X1) when X1 and X2 are independent random variables belonging to the same univariate family of distributions. The algebraic form of R = Pr(X2 < X1) has been worked out for the majority of the well-known distributions, including the normal, uniform, exponential, gamma, Weibull and Pareto. However, there are still many other distributions for which the form of R is not known. We have identified some 30 distributions with no known form for R. In this paper we consider some of these distributions and derive the corresponding forms for the reliability R. The calculations involve the use of various special functions.
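For one of the classical cases mentioned above, a quick sketch (illustrative, not from the paper) checks the closed form against Monte Carlo for independent exponentials, where R = Pr(X2 < X1) = λ2/(λ1 + λ2):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stress-strength reliability R = Pr(X2 < X1) for independent exponentials
# with rates lam1 (strength X1) and lam2 (stress X2); the closed form is
# R = lam2 / (lam1 + lam2), which the Monte Carlo estimate should match.
lam1, lam2, n = 0.5, 2.0, 1_000_000
x1 = rng.exponential(1.0 / lam1, n)   # numpy parameterizes by the mean
x2 = rng.exponential(1.0 / lam2, n)
print("Monte Carlo :", np.mean(x2 < x1))
print("closed form :", lam2 / (lam1 + lam2))
```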
Abstract:
Evaluating the reliability of the manufacturing process is a crucial task in product development. Process reliability is a measure of the production ability of a reconfigurable manufacturing system (RMS), serving as an integrated performance indicator of the production process under specified technical constraints, including time, cost and quality. An integrated framework for evaluating manufacturing process reliability within the product development process is presented. A mathematical model and an algorithm based on the universal generating function (UGF) are developed for calculating the reliability of the manufacturing process with respect to task intensity and process capacity, which are both independent random variables. Rework strategies of the RMS under different task intensities are then analyzed on the basis of process reliability, and the optimization of these rework strategies is discussed.
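A minimal sketch of the UGF idea as applied here (the numbers and the capacity/demand framing are hypothetical, not the paper's model): each discrete random variable is a list of (value, probability) terms of u(z) = Σ p_k z^{g_k}, and process reliability is the probability mass on pairs where capacity meets task intensity:

```python
from itertools import product

# UGF sketch: a discrete random variable as (value, probability) terms.
capacity = [(0, 0.05), (50, 0.25), (100, 0.70)]   # process capacity (hypothetical)
demand   = [(40, 0.30), (80, 0.50), (120, 0.20)]  # task intensity (hypothetical)

# Composition over the two independent UGFs: process reliability is the
# total probability of all (capacity, demand) pairs with capacity >= demand.
reliability = sum(pc * pd
                  for (c, pc), (d, pd) in product(capacity, demand)
                  if c >= d)
print("process reliability:", reliability)
```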
Abstract:
MSC Subject Classification: 65C05, 65U05.
Abstract:
2000 Mathematics Subject Classification: 60G70, 60F05.
Abstract:
2000 Mathematics Subject Classification: 60G70, 60F12, 60G10.
Abstract:
2000 Mathematics Subject Classification: 33C90, 62E99.
Abstract:
The purpose of this work is to show that engineers can be motivated to study statistical concepts through applications from their own experience connected with statistical ideas. The main idea is to take data from a manufacturing facility (for example, output from a CMM machine) and explain that parts are used in production even when they do not meet exact specifications. By graphing the data one can show that the error is random but follows a distribution; that is, there is regularity in the data in the statistical sense. Since the error distribution is continuous, we advocate introducing the concept of randomness starting with continuous random variables, with probabilities connected with areas under the density. Discrete random variables are then introduced in terms of decisions connected with the size of the errors, before generalizing to the abstract concept of probability. Using software, students can then be motivated to study statistical analysis of the data they encounter and to use this analysis to make engineering and management decisions.
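A small sketch of the "probability as area under the density" idea advocated above, using illustrative numbers rather than real CMM data:

```python
from scipy.stats import norm

# Treat the measurement error from a CMM data set as roughly normal with the
# sample mean and standard deviation (hypothetical values, in mm).
mu, sigma = 0.002, 0.010
spec_lo, spec_hi = -0.025, 0.025  # tolerance band (hypothetical)

# Probability as area under the density: fraction of parts inside tolerance.
p_in_spec = norm.cdf(spec_hi, mu, sigma) - norm.cdf(spec_lo, mu, sigma)
print(f"P(part within spec) ~ {p_in_spec:.4f}")
```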
Abstract:
Dependence in the world of uncertainty is a complex concept. Nevertheless, it exists, is asymmetric, has magnitude and direction, and can be measured. We use some measures of dependence between random events to illustrate how to apply them in the study of dependence between non-numeric bivariate variables and numeric random variables. Graphics show the inner dependence structure in the Clayton Archimedean copula and the bivariate Poisson distribution. This approach is valid for studying the local dependence structure of any pair of random variables determined by their empirical or theoretical distribution. It can also be used to simulate dependent events and dependent random variables, although some restrictions apply. ACM Computing Classification System (1998): G.3, J.2.
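A minimal sketch of simulating the dependence structure named above (standard conditional-inversion sampler for the Clayton copula; not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def clayton_sample(theta, n):
    """Sample (U, V) from a Clayton copula by conditional inversion:
    V = (U**(-theta) * (W**(-theta/(1+theta)) - 1) + 1)**(-1/theta),
    with U, W independent Uniform(0, 1); theta > 0 gives lower-tail dependence."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = (u**(-theta) * (w**(-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    return u, v

u, v = clayton_sample(theta=2.0, n=50_000)
# Kendall's tau for Clayton is theta / (theta + 2) = 0.5 here; lower-tail
# clustering shows the asymmetry of the dependence.
print("P(U < 0.1, V < 0.1) ~", np.mean((u < 0.1) & (v < 0.1)))
```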
Abstract:
Traditional voting games are special cooperative games with transferable utility, so-called simple games, where the players are the parties and the value of a coalition is 1 or 0 depending on whether the coalition is strong enough to pass a given piece of legislation. In this article the authors introduce the concept of generalized weighted voting games, where the parties' seat counts are random variables, and use examples from Hungary to illustrate the usefulness of the new approach.
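A toy sketch of the generalized setting (all numbers hypothetical, not the Hungarian data; the Poisson seat model is an illustrative assumption): with random seat counts, a coalition's ability to pass legislation becomes a probability rather than a 0/1 value:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical expected seat counts and majority quota.
seat_means = {"A": 110, "B": 55, "C": 25}
quota = 100

def coalition_win_prob(coalition, n_draws=100_000):
    """Estimate the probability that a coalition reaches the quota when each
    party's seat count is Poisson around its mean (a simple modeling choice)."""
    seats = sum(rng.poisson(seat_means[p], n_draws) for p in coalition)
    return np.mean(seats >= quota)

for coal in [("A",), ("A", "C"), ("B", "C")]:
    print(coal, coalition_win_prob(coal))
```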
Abstract:
In the finance literature many economic theories and models have been proposed to explain and estimate the relationship between risk and return. Assuming risk aversion and rational behavior on the part of the investor, models are developed that are supposed to help in forming efficient portfolios, ones that either maximize the expected rate of return for a given level of risk or minimize risk for given rates of return. One of the most widely used models for forming these efficient portfolios is Sharpe's Capital Asset Pricing Model (CAPM). In the development of this model it is assumed that investors have homogeneous expectations about the future probability distribution of the rates of return; that is, every investor assumes the same values for the parameters of that distribution. Likewise, homogeneity of financial volatility is commonly assumed, where volatility is taken as investment risk and is usually measured by the variance of the rates of return. Typically the square root of the variance is used to define financial volatility; furthermore, it is often assumed that the data generating process consists of independent and identically distributed random variables, which again implies that financial volatility is measured from homogeneous time series with stationary parameters. In this dissertation we investigate the assumptions of homogeneity of market agents. We provide evidence of heterogeneity in market participants' information, objectives, and expectations about the parameters of the probability distribution of prices, as given by the differences in the empirical distributions corresponding to different time scales, which in this study are associated with different classes of investors. We also demonstrate that the statistical properties of the underlying data generating processes, including the volatility in the rates of return, are quite heterogeneous. In other words, we provide empirical evidence against the traditional views about homogeneity using non-parametric wavelet analysis of trading data. The results show heterogeneity of financial volatility at different time scales, and time scale is one of the most important aspects in which trading behavior differs. In fact we conclude that heterogeneity, as posited by the Heterogeneous Markets Hypothesis, is the norm and not the exception.
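A small sketch of the scale-invariance implication being tested (illustrative only, using simulated iid returns as the null case rather than the dissertation's wavelet machinery): under the iid assumption, the variance of aggregated returns grows linearly with the aggregation interval, so variance divided by scale should be flat; heterogeneity across time scales shows up as deviations from this line:

```python
import numpy as np

rng = np.random.default_rng(4)

# 1-period log returns under the iid null hypothesis.
r = rng.standard_normal(2**16) * 0.01

for k in [1, 4, 16, 64]:                      # aggregation scales
    agg = r[: len(r) // k * k].reshape(-1, k).sum(axis=1)
    # Under iid returns, agg.var() / k is roughly constant across scales.
    print(f"scale {k:3d}: variance/k = {agg.var() / k:.3e}")
```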
Abstract:
The first objective of this research was to develop closed-form and numerical probabilistic methods of analysis that can be applied to otherwise conventional analyses of unreinforced and geosynthetic reinforced slopes and walls. These probabilistic methods explicitly include the influence of random variability of soil and reinforcement, spatial variability of the soil, and cross-correlation between soil input parameters on the probability of failure. The quantitative impact of simultaneously considering random and/or spatial variability in soil properties in combination with cross-correlation in soil properties is investigated for the first time in the research literature. Depending on the magnitude of these statistical descriptors, margins of safety based on conventional notions of safety may be very different from margins of safety expressed in terms of probability of failure (or reliability index). The thesis work also shows that intuitive notions of margin of safety using the conventional factor of safety and the probability of failure can be brought into alignment when cross-correlation between soil properties is considered in a rigorous manner. The second objective of this thesis work was to develop a general closed-form solution for the true probability of failure (or reliability index) of a simple linear limit state function with one load term and one resistance term, expressed first in general probabilistic terms and then migrated to an LRFD format for the purpose of LRFD calibration. The formulation considers contributions to the probability of failure due to model type, uncertainty in bias values, bias dependencies, uncertainty in estimates of nominal values for correlated and uncorrelated load and resistance terms, and the average margin of safety expressed as the operational factor of safety (OFS). Bias is defined as the ratio of measured to predicted value. Parametric analyses were carried out to show that ignoring possible correlations between random variables can lead to conservative (safe) values of the resistance factor in some cases and to non-conservative (unsafe) values in others. Example LRFD calibrations were carried out using different load and resistance models for the pullout internal stability limit state of steel strip and geosynthetic reinforced soil walls, together with matching bias data reported in the literature.
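To make the link between factor of safety and probability of failure concrete, a sketch of the textbook closed form for a linear limit state g = R - Q with uncorrelated lognormal resistance and load (a standard result, not the thesis's more general formulation; the OFS and COV values are hypothetical):

```python
import numpy as np
from scipy.stats import norm

# Limit state g = R - Q with lognormal R (resistance) and Q (load).
ofs = 1.8              # operational factor of safety, mean(R)/mean(Q) (hypothetical)
cov_r, cov_q = 0.3, 0.2  # coefficients of variation (hypothetical)

# Reliability index for uncorrelated lognormals: beta depends on both the
# mean margin of safety (OFS) and the spread of R and Q.
beta = (np.log(ofs * np.sqrt((1 + cov_q**2) / (1 + cov_r**2)))
        / np.sqrt(np.log((1 + cov_r**2) * (1 + cov_q**2))))
print("reliability index beta :", beta)
print("probability of failure :", norm.cdf(-beta))
```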
Abstract:
The Dirichlet distribution is a multivariate generalization of the Beta distribution and an important multivariate continuous distribution in probability and statistics. In this report we review the Dirichlet distribution and study its properties, including statistical and information-theoretic quantities involving this distribution. Relationships between the Dirichlet distribution and other distributions are also discussed. There are several ways to generate random variables with a Dirichlet distribution; the stick-breaking approach and the Pólya urn method are discussed. In Bayesian statistics, the Dirichlet distribution and the generalized Dirichlet distribution can both serve as a conjugate prior for the Multinomial distribution. The Dirichlet distribution has many applications in different fields. We focus on the unsupervised learning of a finite mixture model based on the Dirichlet distribution. The Initialization Algorithm and the Dirichlet Mixture Estimation Algorithm are both reviewed for estimating the parameters of a Dirichlet mixture. Experimental results are shown for three tasks: the estimation of artificial histograms, the summarization of image databases and human skin detection.
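A minimal sketch of the stick-breaking construction mentioned above (the standard construction, written from scratch rather than taken from the report):

```python
import numpy as np

rng = np.random.default_rng(5)

def dirichlet_stick_breaking(alpha):
    """Draw one sample from Dirichlet(alpha) by stick breaking:
    V_i ~ Beta(alpha_i, alpha_{i+1} + ... + alpha_K), and the i-th weight
    is V_i times the length of the stick remaining after the first i-1 breaks."""
    alpha = np.asarray(alpha, dtype=float)
    x, remaining = np.empty_like(alpha), 1.0
    for i in range(len(alpha) - 1):
        v = rng.beta(alpha[i], alpha[i + 1:].sum())
        x[i] = v * remaining
        remaining *= 1.0 - v
    x[-1] = remaining
    return x

draws = np.array([dirichlet_stick_breaking([2.0, 3.0, 5.0]) for _ in range(20_000)])
# The empirical mean should approach alpha / sum(alpha) = [0.2, 0.3, 0.5].
print("empirical mean:", draws.mean(axis=0))
```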
Abstract:
One of the biggest challenges that contaminant hydrogeology is facing is how to adequately address the uncertainty associated with model predictions. Uncertainty arises from multiple sources, such as interpretive error, calibration accuracy, parameter sensitivity and variability. This critical issue needs to be properly addressed in order to support environmental decision-making processes. In this study we perform Global Sensitivity Analysis (GSA) on a contaminant transport model for the assessment of hydrocarbon concentration in groundwater. We provide a quantification of the environmental impact and, given the incomplete knowledge of hydrogeological parameters, we evaluate which parameters are the most influential and therefore require greater accuracy in the calibration process. Parameters are treated as random variables and a variance-based GSA is performed in an optimized numerical Monte Carlo framework. The Sobol indices are adopted as sensitivity measures and are computed by employing meta-models to characterize the migration process while reducing the computational cost of the analysis. The proposed methodology allows us to extend the number of Monte Carlo iterations, identify the influence of uncertain parameters and achieve considerable savings in computational time while maintaining acceptable accuracy.
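A minimal sketch of first-order Sobol indices via a Saltelli-style pick-freeze estimator (the `model` function below is a cheap analytic stand-in, not the transport meta-model used in the study):

```python
import numpy as np

rng = np.random.default_rng(6)

def model(x):
    """Stand-in for the (meta-)model: an analytic function of 3 uncertain
    parameters, each Uniform(0, 1) here for simplicity."""
    return x[:, 0] + 2.0 * x[:, 1]**2 + 0.5 * x[:, 0] * x[:, 2]

n, d = 50_000, 3
A, B = rng.uniform(size=(n, d)), rng.uniform(size=(n, d))
fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))

# First-order Sobol index for parameter i: replace column i of B with the
# matching column of A ("pick-freeze") and correlate with f(A).
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]
    s_i = np.mean(fA * (model(ABi) - fB)) / var_y
    print(f"S_{i + 1} ~ {s_i:.3f}")
```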
Abstract:
During the last decade, wind power generation has seen rapid development. According to the U.S. Department of Energy, achieving 20% wind power penetration in the U.S. by 2030 will require: (i) enhancement of the transmission infrastructure, (ii) improvement of the reliability and operability of wind systems and (iii) increased U.S. manufacturing capacity for wind generation equipment. This research concentrates on improving the reliability and operability of wind energy conversion systems (WECSs). The increased penetration of wind energy into the grid imposes new operating conditions on power systems, a change that requires the development of an adequate reliability framework. This thesis proposes a framework for assessing WECS reliability in the face of external disturbances (e.g., grid faults) and internal component faults. The framework is illustrated using a detailed model of a type C WECS (doubly fed induction generator) with the corresponding deterministic and random variables in a simplified grid model. Fault parameters and performance requirements essential to reliability measurements are included in the simulation. The proposed framework allows quantitative analysis of WECS designs; analysis of WECS control schemes, e.g., fault ride-through mechanisms; discovery of key parameters that influence overall WECS reliability; and computation of WECS reliability with respect to different grid codes and performance requirements.
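A toy sketch of the kind of reliability computation such a framework performs (the fault-parameter distributions and the ride-through envelope below are illustrative assumptions, not any specific WECS model or grid code):

```python
import numpy as np

rng = np.random.default_rng(7)

# Random fault parameters: per-unit voltage dip depth and fault duration.
n = 200_000
depth = rng.uniform(0.0, 1.0, n)        # voltage dip depth (per unit)
duration = rng.exponential(0.15, n)     # fault duration in seconds

# Toy low-voltage ride-through (LVRT) envelope: the allowed fault duration
# shrinks linearly as the dip gets deeper; the WECS "rides through" if the
# fault clears within the allowed time.
allowed = 0.625 * (1.0 - depth) + 0.15
reliability = np.mean(duration <= allowed)
print("ride-through reliability:", reliability)
```

Changing the envelope then models a different grid code or performance requirement, which is the comparison the proposed framework enables.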