944 results for Rayleigh Random Variables


Relevance: 80.00%

Abstract:

In this review paper we collect several results about copula-based models, especially concerning regression models, by focusing on some insurance applications. (C) 2009 Elsevier B.V. All rights reserved.

Relevance: 80.00%

Abstract:

In this paper a new approach is considered for studying the triangular distribution, using the theoretical development behind skew distributions. The new triangular distributions are obtained by reparametrizing the usual triangular distribution. The main probabilistic properties of the distribution are studied, including moments, asymmetry and kurtosis coefficients, and a stochastic representation, which provides a simple and efficient method for generating random variables. Moment estimation is also implemented. Finally, a simulation study is conducted to illustrate the behavior of the proposed estimation approach.
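A minimal sketch of how a stochastic representation yields a generator, assuming the standard triangular distribution on [0, 1] with mode c (the paper's reparametrized family is not reproduced here):

```python
import numpy as np

def rtriangular(n, c, seed=None):
    """Draw n samples from the triangular distribution on [0, 1] with mode c,
    via the inverse-CDF representation: X = sqrt(U*c) when U < c,
    and X = 1 - sqrt((1-U)(1-c)) otherwise, with U ~ Uniform(0, 1)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    left = u < c
    x = np.empty(n)
    x[left] = np.sqrt(u[left] * c)
    x[~left] = 1.0 - np.sqrt((1.0 - u[~left]) * (1.0 - c))
    return x

samples = rtriangular(100_000, c=0.3, seed=42)
# The theoretical mean of triangular(0, c, 1) is (0 + c + 1) / 3.
mean_theory = (0.0 + 0.3 + 1.0) / 3.0
```

The sample mean should sit close to the theoretical value, which is a quick sanity check on the representation.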

Relevance: 80.00%

Abstract:

Although the asymptotic distributions of the likelihood ratio for testing hypotheses of null variance components in linear mixed models derived by Stram and Lee [1994. Variance components testing in longitudinal mixed effects model. Biometrics 50, 1171-1177] are valid, their proof is based on the work of Self and Liang [1987. Asymptotic properties of maximum likelihood estimators and likelihood tests under nonstandard conditions. J. Amer. Statist. Assoc. 82, 605-610] which requires identically distributed random variables, an assumption not always valid in longitudinal data problems. We use the less restrictive results of Vu and Zhou [1997. Generalization of likelihood ratio tests under nonstandard conditions. Ann. Statist. 25, 897-916] to prove that the proposed mixture of chi-squared distributions is the actual asymptotic distribution of such likelihood ratios used as test statistics for null variance components in models with one or two random effects. We also consider a limited simulation study to evaluate the appropriateness of the asymptotic distribution of such likelihood ratios in moderately sized samples. (C) 2008 Elsevier B.V. All rights reserved.
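For the simplest case (one random effect), the mixture described above is an equal-weight mixture of a point mass at zero and a chi-squared with one degree of freedom. A sketch of the resulting p-value, compared with the naive chi-squared reference:

```python
from scipy.stats import chi2

def mixture_pvalue(lrt):
    """P-value for testing one null variance component using the 50:50
    mixture of chi2_0 (point mass at 0) and chi2_1 (Stram and Lee, 1994)."""
    if lrt <= 0:
        return 1.0
    return 0.5 * chi2.sf(lrt, df=1)

lrt = 3.84
p_mix = mixture_pvalue(lrt)        # mixture reference
p_naive = chi2.sf(lrt, df=1)       # naive chi2_1 reference
```

The naive reference is conservative: its p-value is exactly twice the mixture p-value, so using it makes rejection of a null variance component harder than it should be.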

Relevance: 80.00%

Abstract:

We consider the issue of assessing the influence of observations in the class of beta regression models, which are useful for modelling random variables that assume values in the standard unit interval and are affected by independent variables. We propose a Cook-like distance and also measures of local influence under different perturbation schemes. Applications using real data are presented. (c) 2008 Elsevier B.V. All rights reserved.
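The influence diagnostics above build on a fitted beta regression. A minimal sketch of the underlying model, assuming a logit link and a constant precision parameter phi estimated jointly by maximum likelihood (the paper's exact diagnostics are not reproduced, and the data here are synthetic):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def negloglik(params, X, y):
    """Negative log-likelihood of a beta regression: y ~ Beta(mu*phi, (1-mu)*phi),
    with mu = logit^{-1}(X @ beta) and phi = exp(log_phi)."""
    beta, log_phi = params[:-1], params[-1]
    mu = expit(X @ beta)
    phi = np.exp(log_phi)
    a, b = mu * phi, (1.0 - mu) * phi
    return -np.sum(gammaln(phi) - gammaln(a) - gammaln(b)
                   + (a - 1.0) * np.log(y) + (b - 1.0) * np.log1p(-y))

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([0.5, 1.0])
mu = expit(X @ true_beta)
phi = 30.0
y = rng.beta(mu * phi, (1.0 - mu) * phi)   # responses in (0, 1)

fit = minimize(negloglik, x0=np.zeros(3), args=(X, y), method="BFGS")
beta_hat = fit.x[:2]
```

A Cook-like distance would then compare the full-sample estimate with leave-one-out refits of this same likelihood.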

Relevance: 80.00%

Abstract:

This paper uses Shannon's information theory to give a quantitative definition of information flow in systems that transform inputs to outputs. For deterministic systems, the definition is shown to specialise to a simpler form when the information source and the known inputs jointly determine the outputs. For this special case, the definition is related to the classical security condition of non-interference and an equivalence is established between non-interference and independence of random variables. Quantitative information flow for deterministic systems is then presented in relational form. With this presentation, it is shown how relational parametricity can be used to derive upper and lower bounds on information flows through families of functions defined in the second order lambda calculus.
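For a deterministic system with a uniformly distributed source, the quantitative flow to an observer reduces to the Shannon entropy of the observed output distribution. A small illustration (the example programs are hypothetical, not from the paper):

```python
import math
from collections import Counter

def output_entropy(f, inputs):
    """Shannon entropy (bits) of f's output under a uniform input distribution.
    For a deterministic f this measures the information flow to an observer
    of the output."""
    counts = Counter(f(x) for x in inputs)
    n = len(inputs)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

secrets = range(256)                                     # an 8-bit secret
leak_parity = output_entropy(lambda s: s & 1, secrets)   # output is the low bit
leak_all = output_entropy(lambda s: s, secrets)          # output is the secret
```

Revealing only the parity flows one bit; revealing the whole value flows all eight.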

Relevance: 80.00%

Abstract:

The paper investigates which of Shannon’s measures (entropy, conditional entropy, mutual information) is the right one for the task of quantifying information flow in a programming language. We examine earlier relevant contributions from Denning, McLean and Gray and we propose and motivate a specific quantitative definition of information flow. We prove results relating equivalence relations, interference of program variables, independence of random variables and the flow of confidential information. Finally, we show how, in our setting, Shannon’s Perfect Secrecy theorem provides a sufficient condition to determine whether a program leaks confidential information.
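The independence results can be made concrete with mutual information between a confidential input and the output. A sketch with hypothetical example programs: copying the secret leaks it entirely, while XOR with a uniform public value leaks nothing, echoing Shannon's perfect-secrecy condition mentioned above:

```python
import math
from collections import Counter
from itertools import product

def mutual_information(pairs):
    """I(X;Y) in bits from a list of equally weighted (x, y) observations."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Programs taking a confidential input h and a public input l, both uniform on {0..3};
# each pair records (h, observed output).
inputs = list(product(range(4), range(4)))

i_copy = mutual_information([(h, h) for h, l in inputs])      # output = h
i_otp = mutual_information([(h, h ^ l) for h, l in inputs])   # one-time-pad style
i_safe = mutual_information([(h, 2 * l) for h, l in inputs])  # output ignores h
```

Zero mutual information between h and the output is exactly the independence that characterises non-interference.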

Relevance: 80.00%

Abstract:

Climate change has resulted in substantial variations in annual extreme rainfall quantiles across different durations and return periods. Predicting future changes in extreme rainfall quantiles is essential for water resources design, assessment, and decision-making purposes. Current predictions of future rainfall extremes, however, exhibit large uncertainties. According to extreme value theory, rainfall extremes are random variables whose distributions vary with return period; there is therefore uncertainty even under current climate conditions. Regarding future conditions, our large-scale knowledge is obtained from global climate models forced with particular emission scenarios. Climate models have widely known deficiencies, particularly with respect to precipitation projections, and emission scenarios are recognized to be limited representations of future global change. Apart from these large-scale uncertainties, downscaling methods add further uncertainty to estimates of future extreme rainfall when they convert the larger-scale projections to the local scale. The aim of this research is to address these uncertainties in future projections of extreme rainfall for different durations and return periods. We combined three emission scenarios with two global climate models and used LARS-WG, a well-known weather generator, to stochastically downscale daily climate-model projections for the city of Saskatoon, Canada, to 2100. The downscaled projections were further disaggregated to hourly resolution using our new stochastic, non-parametric rainfall disaggregator. Extreme rainfall quantiles can then be identified for different durations (1-hour, 2-hour, 4-hour, 6-hour, 12-hour, 18-hour and 24-hour) and return periods (2-year, 10-year, 25-year, 50-year, 100-year) using the Generalized Extreme Value (GEV) distribution.
By providing multiple realizations of future rainfall, we attempt to measure the extent of the total predictive uncertainty contributed by climate models, emission scenarios, and the downscaling/disaggregation procedures. The results show different proportions for these contributors across durations and return periods.
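The last step, reading T-year quantiles off a fitted GEV, can be sketched as follows (the annual-maximum series here is synthetic, standing in for the study's downscaled data):

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic annual-maximum rainfall series (mm), a stand-in for one
# downscaled realization.
rng = np.random.default_rng(1)
annual_max = genextreme.rvs(c=-0.1, loc=30.0, scale=8.0, size=80,
                            random_state=rng)

# Fit the GEV by maximum likelihood and evaluate return-period quantiles.
c, loc, scale = genextreme.fit(annual_max)
return_periods = np.array([2, 10, 25, 50, 100])
# The T-year event has annual non-exceedance probability 1 - 1/T.
quantiles = genextreme.ppf(1.0 - 1.0 / return_periods, c, loc=loc, scale=scale)
```

Repeating this over many realizations (scenarios x models x disaggregations) gives the spread that the study uses to apportion predictive uncertainty.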

Relevance: 80.00%

Abstract:

We apply the concept of exchangeable random variables to the case of non-additive probability distributions exhibiting uncertainty aversion, in the class generated by a convex core (convex non-additive probabilities with a convex core). We are able to prove two versions of the law of large numbers (de Finetti's theorems). By making use of two definitions of independence we prove two versions of the strong law of large numbers. It turns out that we cannot assure the convergence of the sample averages to a constant. We then model the case in which there is a "true" probability distribution behind the successive realizations of the uncertain random variable; in this case convergence occurs. This result is important because it renders true the intuition that it is possible to "learn" the "true" additive distribution behind an uncertain event if one observes it repeatedly (a sufficiently large number of times). We also provide a conjecture regarding the "learning" (or updating) process above, and prove a partial result for the case of the Dempster-Shafer updating rule and binomial trials.
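The convergence result for the "true"-distribution case can be illustrated with binomial trials, the setting of the partial result above (a plain simulation under an additive law, not the non-additive machinery of the paper):

```python
import numpy as np

# When a "true" additive distribution governs the repeated realizations,
# sample averages converge, so the true parameter can be learned by
# observing the event a sufficiently large number of times.
rng = np.random.default_rng(7)
p_true = 0.3
trials = rng.binomial(1, p_true, size=100_000)
running_mean = np.cumsum(trials) / np.arange(1, trials.size + 1)
err_late = abs(running_mean[-1] - p_true)   # error after all observations
```

After 100,000 trials the running average is within a fraction of a percent of the true probability.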

Relevance: 80.00%

Abstract:

Bounds on the distribution function of the sum of two random variables with known marginal distributions obtained by Makarov (1981) can be used to bound the cumulative distribution function (c.d.f.) of individual treatment effects. Identification of the distribution of individual treatment effects is important for policy purposes if we are interested in functionals of that distribution, such as the proportion of individuals who gain from the treatment and the expected gain from the treatment for these individuals. Makarov bounds on the c.d.f. of the individual treatment effect distribution are pointwise sharp, i.e. they cannot be improved in any single point of the distribution. We show that the Makarov bounds are not uniformly sharp. Specifically, we show that the Makarov bounds on the region that contains the c.d.f. of the treatment effect distribution in two (or more) points can be improved, and we derive the smallest set for the c.d.f. of the treatment effect distribution in two (or more) points. An implication is that the Makarov bounds on a functional of the c.d.f. of the individual treatment effect distribution are not best possible.
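The pointwise Makarov bounds for the sum of two random variables can be computed by brute force over a grid of split points. A sketch using standard normal marginals (an illustrative choice, not the paper's setting):

```python
import numpy as np
from scipy.stats import norm

def makarov_bounds(z, F1, F2, grid):
    """Pointwise Makarov bounds on P(X + Y <= z) given marginal CDFs F1, F2:
    lower = sup_x max(F1(x) + F2(z - x) - 1, 0),
    upper = 1 + inf_x min(F1(x) + F2(z - x) - 1, 0),
    approximated over a finite grid of split points x."""
    s = F1(grid) + F2(z - grid)
    lower = max(np.max(s) - 1.0, 0.0)
    upper = 1.0 + min(np.min(s) - 1.0, 0.0)
    return lower, upper

grid = np.linspace(-10.0, 10.0, 4001)
z = 0.5
lo, hi = makarov_bounds(z, norm.cdf, norm.cdf, grid)

# Any joint distribution with these marginals must lie between the bounds;
# independence (sum ~ N(0, 2)) is one such case.
true_indep = norm.cdf(z, scale=np.sqrt(2.0))
```

At a single point each bound is attained by some dependence structure (here the upper bound 1 is reached by the countermonotone coupling Y = -X); the paper's contribution is that the bounds at two or more points jointly cannot all be attained.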

Relevance: 80.00%

Abstract:

The Java Platform is increasingly being adopted in the development of distributed systems with high user demand. This kind of application is more complex because, beyond meeting the functional requirements, it must fulfil pre-established performance parameters. This work studies the Java Virtual Machine (JVM), covering its internal aspects and exploring the garbage collection strategies found in the literature and used by the JVM. It also presents a set of tools that help in optimizing applications, and others that help in monitoring applications in the production environment. Due to the great number of technologies that aim to solve problems common to the application layer, it becomes difficult to choose the one with the best response time and lowest memory usage. This work presents a brief introduction to each of the candidate technologies and carries out comparative tests through a statistical analysis of the response time and garbage collection activity random variables. The results supply engineers and managers with a basis for deciding which technologies to use in large applications, through knowledge of how they behave in their environments and the amount of resources they consume. The relation between the productivity of a technology and its performance is also considered an important factor in this choice.
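A statistical comparison of response times between two technologies can be sketched with a non-parametric test, which avoids assuming normality of the (typically skewed) latency distribution. The data below are synthetic stand-ins for measured benchmarks, and the test choice is an assumption, not the thesis's exact procedure:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Synthetic response-time samples (ms) for two hypothetical technologies;
# log-normal shapes mimic typical latency distributions.
rng = np.random.default_rng(3)
tech_a = rng.lognormal(mean=3.0, sigma=0.4, size=500)
tech_b = rng.lognormal(mean=3.2, sigma=0.4, size=500)

# One-sided Mann-Whitney U test: is tech_a stochastically faster than tech_b?
stat, p_value = mannwhitneyu(tech_a, tech_b, alternative="less")
```

A small p-value supports choosing the technology with the lower response times, without any normality assumption on the samples.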

Relevance: 80.00%

Abstract:

Following the new tendency of interdisciplinarity in modern science, a new field called neuroengineering has come to light in the last decades. After 2000, scientific journals and conferences on this theme have been created all around the world. The present work comprises three subareas related to neuroengineering, electrical engineering and biomedical engineering: neural stimulation; theoretical and computational neuroscience; and neuronal signal processing. The research can be divided into three parts: (i) A new method of neuronal photostimulation was developed based on the use of caged compounds. Using the inhibitory neurotransmitter GABA caged by a ruthenium complex, it was possible to block neuronal population activity with a laser pulse. The results were evaluated by wavelet analysis and tested by non-parametric statistics. (ii) A mathematical method was created to identify neuronal assemblies. Neuronal assemblies, proposed by Donald Hebb as the basis of learning, remain the most accepted theory for the neuronal representation of external stimuli. Using the Marcenko-Pastur law of eigenvalue distribution, it was possible to detect neuronal assemblies and to compute their activity with high temporal resolution. Applying the method to real electrophysiological data revealed that neurons from the neocortex and hippocampus can be part of the same assembly, and that neurons can participate in multiple assemblies. (iii) A new method of automatic classification of heart beats was developed, which does not rely on a database for training and is not specialized in specific pathologies. The method is based on wavelet decomposition and normality measures of random variables. Taken together, the results in these three fields represent contributions to neural and biomedical engineering.
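The Marcenko-Pastur idea in part (ii) can be sketched as follows: for uncorrelated unit-variance data, eigenvalues of the sample correlation matrix stay below the bound (1 + sqrt(N/B))^2, so eigenvalues above it signal correlated neuron groups. The spike data below are synthetic, with one planted assembly:

```python
import numpy as np

rng = np.random.default_rng(5)
n_neurons, n_bins = 50, 5000
spikes = rng.poisson(1.0, size=(n_neurons, n_bins)).astype(float)
# Plant one assembly: neurons 0-4 share a common fluctuating drive.
drive = rng.poisson(2.0, size=n_bins)
spikes[:5] += drive

# Z-score each neuron, form the correlation matrix, take its spectrum.
z = (spikes - spikes.mean(axis=1, keepdims=True)) / spikes.std(axis=1, keepdims=True)
corr = (z @ z.T) / n_bins
eigvals = np.linalg.eigvalsh(corr)

# Marcenko-Pastur upper edge for N/B aspect ratio; eigenvalues beyond it
# indicate assemblies.
lambda_max = (1.0 + np.sqrt(n_neurons / n_bins)) ** 2
n_assemblies = int(np.sum(eigvals > lambda_max))
```

The planted co-fluctuating group produces one eigenvalue well above the bulk edge; its eigenvector weights identify the member neurons.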

Relevance: 80.00%

Abstract:

In this work, the paper of Campos and Dorea [3] is examined in detail. In that article a kernel estimator was applied to a sequence of independent and identically distributed random variables with a general state space. In chapter 2, the estimator's properties, such as asymptotic unbiasedness, consistency in quadratic mean, strong consistency and asymptotic normality, are verified. In chapter 3, numerical experiments developed with the R software give a visual idea of the estimation process.
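A minimal sketch of the kernel density estimator in the real-valued i.i.d. case (Gaussian kernel; the general-state-space setting of the dissertation is not reproduced, and the data are synthetic):

```python
import numpy as np

def kde(x_eval, data, h):
    """Gaussian kernel density estimate at points x_eval from an i.i.d. sample:
    f_hat(x) = (1 / (n*h)) * sum_i K((x - X_i) / h), K the standard normal pdf."""
    u = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (data.size * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
data = rng.normal(size=2000)
x = np.linspace(-3.0, 3.0, 61)
fhat = kde(x, data, h=0.3)

# Compare with the true N(0, 1) density to see consistency at work.
f_true = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
max_err = np.abs(fhat - f_true).max()
```

With n = 2000 and a moderate bandwidth, the estimate tracks the true density closely, which is the visual behaviour the numerical experiments in chapter 3 illustrate.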

Relevance: 80.00%

Abstract:

The aim of this work is to test an algorithm that estimates, in real time, the attitude of an artificial satellite using real data supplied by attitude sensors on board the CBERS-2 satellite (China-Brazil Earth Resources Satellite). The real-time estimator used in this work for attitude determination is the Unscented Kalman Filter. This filter is a newer alternative to the Extended Kalman Filter, which is usually applied to attitude and orbit estimation and control problems. The algorithm can estimate the states of nonlinear systems without linearizing the nonlinear functions present in the model. This is possible thanks to a transformation that generates a set of vectors which, after undergoing the nonlinear transformation, preserve the mean and covariance of the random variables before the transformation. The performance is evaluated and analyzed by comparing the Unscented Kalman Filter and Extended Kalman Filter results using real onboard data.
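The transformation mentioned above is the unscented transform: 2n+1 weighted sigma points are propagated through the nonlinearity instead of linearizing it. A minimal sketch (a textbook formulation with scaling parameter kappa, not the thesis's exact tuning):

```python
import numpy as np

def sigma_points(mean, cov, kappa):
    """2n+1 sigma points and weights that exactly capture mean and covariance."""
    n = mean.size
    L = np.linalg.cholesky((n + kappa) * cov)   # columns are the spread directions
    pts = np.vstack([mean, mean + L.T, mean - L.T])
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return pts, w

def unscented_transform(f, mean, cov, kappa):
    """Propagate (mean, cov) through f via sigma points; return output moments."""
    pts, w = sigma_points(mean, cov, kappa)
    y = np.array([f(p) for p in pts])
    y_mean = w @ y
    d = y - y_mean
    y_cov = (w[:, None] * d).T @ d
    return y_mean, y_cov

mean = np.array([1.0, 0.5])
cov = np.array([[0.10, 0.02], [0.02, 0.05]])
# Through the identity map, the input moments must be reproduced exactly.
m_out, c_out = unscented_transform(lambda x: x, mean, cov, kappa=1.0)
```

Inside the filter, f would be the nonlinear state-transition or measurement model; the identity check above verifies the moment-matching property the abstract describes.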

Relevance: 80.00%

Abstract:

In this paper an efficient algorithm for the probabilistic analysis of unbalanced three-phase weakly meshed distribution systems is presented. The algorithm uses the Two-Point Estimate Method to calculate the probabilistic behavior of the system's random variables. Additionally, the deterministic analysis of the state variables is performed by means of a Compensation-Based Radial Load Flow (CBRLF), which efficiently exploits the topological characteristics of the network. To deal with distributed generation, a strategy for incorporating a simplified generator model into the CBRLF is proposed: depending on the type of control and the generator operating conditions, the node with distributed generation can be modeled either as a PV or a PQ node. To validate the efficiency of the proposed algorithm, the IEEE 37-bus test system is used, and the probabilistic results are compared with those obtained using the Monte Carlo method.
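The Two-Point Estimate Method evaluates the deterministic model only 2m times for m random inputs, instead of the thousands of runs Monte Carlo needs. A sketch of the zero-skewness 2m-point scheme (a simplified variant with independent inputs, not the paper's full formulation; g stands in for a load-flow evaluation):

```python
import numpy as np

def two_point_estimate(g, means, stds):
    """Hong's 2m-point estimate for Y = g(X), independent inputs, zero skewness:
    evaluate g with each input k moved to mu_k +/- sigma_k*sqrt(m) (others at
    their means), weight each evaluation by 1/(2m), and accumulate E[Y], E[Y^2]."""
    m = len(means)
    ey, ey2 = 0.0, 0.0
    for k in range(m):
        for sign in (+1.0, -1.0):
            x = np.array(means, dtype=float)
            x[k] += sign * stds[k] * np.sqrt(m)
            y = g(x)
            w = 1.0 / (2.0 * m)
            ey += w * y
            ey2 += w * y * y
    return ey, np.sqrt(max(ey2 - ey**2, 0.0))

# Check on a linear "system", where the method is exact:
# Y = 2*X1 + 3*X2 with X1 ~ (mean 1, sd 0.5) and X2 ~ (mean 2, sd 1.0),
# so E[Y] = 8 and sd(Y) = sqrt(4*0.25 + 9*1) = sqrt(10).
mu_y, sd_y = two_point_estimate(lambda x: 2 * x[0] + 3 * x[1],
                                [1.0, 2.0], [0.5, 1.0])
```

In the paper's setting, each call to g would be one CBRLF solution with the uncertain injections set to the corresponding concentration point.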

Relevance: 80.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)