907 results for Bivariate Normal Distribution
Abstract:
In multivariate statistical process monitoring, to address the false and missed alarms that arise when the statistical distribution of the process data is unknown, a combined method integrating multiway independent component analysis (MICA) with generalized correlation coefficient (GCC) based data prediction is proposed and applied in simulations of online process monitoring. MICA effectively decomposes the relationships among the variables without requiring the modelling data to follow a normal distribution, and the independent component variables it computes describe the evolution of the process more faithfully. To improve the ability to forecast future process faults, data prediction by the generalized correlation coefficient method is proposed: the trajectory in the monitoring model library most similar to the current operating trajectory is identified, and its corresponding segment is appended after the operating trajectory. Simulations on data collected from an industrial poly(vinyl chloride) polymerization process show that, for both online monitoring and online fault diagnosis, this new prediction method outperforms conventional approaches to the prediction problem.
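The abstract does not give the exact form of the generalized correlation coefficient, but the prediction step it describes (match the running trajectory against a library of historical trajectories, then append the best match's continuation) can be sketched as below; the Pearson correlation and all names here are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

def predict_by_similarity(running, library, horizon):
    """Sketch of the similarity-based prediction step: find the complete
    historical trajectory whose first len(running) samples correlate best
    with the current running trajectory, and append its continuation.
    Pearson correlation stands in for the paper's generalized correlation
    coefficient, whose exact form the abstract does not give."""
    k = len(running)
    best, best_r = None, -np.inf
    for traj in library:
        if len(traj) < k + horizon:
            continue                      # too short to supply a forecast
        r = np.corrcoef(running, traj[:k])[0, 1]
        if r > best_r:
            best, best_r = traj, r
    return np.concatenate([running, best[k:k + horizon]])

# Toy usage: three phase-shifted historical batches, forecast 30 samples.
library = [np.sin(np.linspace(0, 10, 300) + ph) for ph in (0.0, 0.4, 1.2)]
print(predict_by_similarity(library[1][:120], library, horizon=30)[-5:])
```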
Abstract:
The analysis of energy detector systems is a well-studied topic in the literature: numerous models have been derived describing the behaviour of single and multiple antenna architectures operating in a variety of radio environments. However, in many cases of interest, these models are not in closed form and so their evaluation requires the use of numerical methods. In general, these are computationally expensive, which can cause difficulties in certain scenarios, such as the optimisation of device parameters on low-cost hardware. The problem becomes acute in situations where the signal-to-noise ratio is small and reliable detection is to be ensured, or where the number of samples of the received signal is large. Furthermore, due to the analytic complexity of the models, further insight into the behaviour of various system parameters of interest is not readily apparent. In this thesis, an approximation-based approach is taken towards the analysis of such systems. By focusing on the situations where exact analyses become complicated, and by making a small number of astute simplifications to the underlying mathematical models, it is possible to derive novel, accurate and compact descriptions of system behaviour. Approximations are derived for the analysis of energy detectors with single and multiple antennae operating on additive white Gaussian noise (AWGN) and independent and identically distributed Rayleigh, Nakagami-m and Rice channels; in the multiple antenna case, approximations are derived for systems with maximal ratio combiner (MRC), equal gain combiner (EGC) and square law combiner (SLC) diversity. In each case, error bounds are derived describing the maximum error resulting from the use of the approximations. In addition, it is demonstrated that the derived approximations require fewer computations of simple functions than any of the exact models available in the literature. Consequently, the regions of applicability of the approximations directly complement the regions of applicability of the available exact models. Further novel approximations for other system parameters of interest, such as sample complexity, minimum detectable signal-to-noise ratio and diversity gain, are also derived. In the course of the analysis, a novel theorem describing the convergence of the chi-square, noncentral chi-square and gamma distributions towards the normal distribution is derived. The theorem describes a tight upper bound on the error resulting from the application of the central limit theorem to random variables of the aforementioned distributions and gives a much better description of the resulting error than existing Berry-Esseen type bounds. A second novel theorem, providing an upper bound on the maximum error resulting from the use of the central limit theorem to approximate the noncentral chi-square distribution when the noncentrality parameter is a multiple of the number of degrees of freedom, is also derived.
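As a concrete illustration of the kind of approximation studied in the thesis, the following sketch (not taken from the thesis) compares the exact noncentral chi-square detection probability of an AWGN energy detector with a moment-matched normal approximation; the sample count, false-alarm rate and SNR are arbitrary, chosen from the large-sample, low-SNR regime where exact evaluation is most expensive.

```python
import numpy as np
from scipy import stats

def pd_exact(threshold, n, snr):
    """Detection probability of an AWGN energy detector: under H1 the test
    statistic is noncentral chi-square with 2n degrees of freedom and
    noncentrality 2*n*snr (complex baseband samples assumed)."""
    return stats.ncx2.sf(threshold, df=2 * n, nc=2 * n * snr)

def pd_normal_approx(threshold, n, snr):
    """CLT approximation: a normal with the same mean (df + nc) and
    variance (2 * (df + 2 * nc)) as the noncentral chi-square."""
    df, nc = 2 * n, 2 * n * snr
    return stats.norm.sf(threshold, loc=df + nc, scale=np.sqrt(2 * (df + 2 * nc)))

n, snr = 1000, 0.05                           # many samples, low SNR
threshold = stats.chi2.isf(0.01, df=2 * n)    # threshold for Pfa = 0.01
print(pd_exact(threshold, n, snr))
print(pd_normal_approx(threshold, n, snr))
```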
Abstract:
In regression analysis of counts, a lack of simple and efficient algorithms for posterior computation has made Bayesian approaches appear unattractive and thus underdeveloped. We propose a lognormal and gamma mixed negative binomial (NB) regression model for counts and present efficient closed-form Bayesian inference; unlike conventional Poisson models, the proposed approach has two free parameters to accommodate two different kinds of random effects, and allows the incorporation of prior information, such as sparsity in the regression coefficients. By placing a gamma prior on the NB dispersion parameter r and a lognormal prior on the logit of the NB probability parameter p, efficient Gibbs sampling and variational Bayes inference are both developed. The closed-form updates are obtained by exploiting conditional conjugacy via both a compound Poisson representation and a Polya-Gamma distribution based data augmentation approach. The proposed Bayesian inference can be implemented routinely, while being easily generalizable to more complex settings involving multivariate dependence structures. The algorithms are illustrated using real examples.
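A minimal generative sketch may help fix ideas. It assumes one plausible reading of the model: the dispersion r drawn from a gamma prior, and the logit of the probability parameter p given a Gaussian perturbation around a linear predictor (so the odds are lognormal), with the NB count generated through its standard gamma-mixed-Poisson representation. The abstract does not pin down these details, so treat the parameterization as illustrative.

```python
import numpy as np

def sample_mixed_nb(X, beta, a, b, sigma, rng):
    """Draw counts from one plausible form of the lognormal and gamma mixed
    NB model: r ~ Gamma(a, scale=1/b); logit(p_i) = x_i' beta + eps_i with
    eps_i ~ Normal(0, sigma^2). Parameterization assumed, not the paper's."""
    n = X.shape[0]
    r = rng.gamma(a, 1.0 / b)                     # NB dispersion parameter
    psi = X @ beta + rng.normal(0.0, sigma, n)    # logit of NB probability
    p = 1.0 / (1.0 + np.exp(-psi))
    lam = rng.gamma(r, p / (1.0 - p), n)          # NB(r, p) generated via its
    return rng.poisson(lam)                       # gamma-mixed Poisson form

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = sample_mixed_nb(X, beta=np.array([0.5, -1.0]), a=2.0, b=1.0, sigma=0.3, rng=rng)
print(y.mean(), y.var())                          # overdispersed: var > mean
```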
Abstract:
Association studies of quantitative traits have often relied on methods in which a normal distribution of the trait is assumed. However, quantitative phenotypes from complex human diseases are often censored, highly skewed, or contaminated with outlying values. We recently developed a rank-based association method that takes into account censoring and makes no distributional assumptions about the trait. In this study, we applied our new method to age-at-onset data on ALDX1 and ALDX2. Both traits are highly skewed (skewness > 1.9) and often censored. We performed a whole genome association study of age at onset of the ALDX1 trait using Illumina single-nucleotide polymorphisms. Only slightly more than 5% of markers were significant. However, we identified two regions on chromosomes 14 and 15, which each have at least four significant markers clustering together. These two regions may harbor genes that regulate age at onset of ALDX1 and ALDX2. Future fine mapping of these two regions with densely spaced markers is warranted.
Abstract:
This brief examines the application of nonlinear statistical process control to the detection and diagnosis of faults in automotive engines. In this statistical framework, the computed score variables may have a complicated nonparametric distribution function, which hampers statistical inference, notably for fault detection and diagnosis. This brief shows that introducing the statistical local approach into nonlinear statistical process control produces statistics that follow a normal distribution, thereby enabling a simple statistical inference for fault detection. Further, for fault diagnosis, this brief introduces a compensation scheme that approximates the fault condition signature. Experimental results from a Volkswagen 1.9-L turbo-charged diesel engine are included.
Abstract:
Value-at-risk (VaR) forecasting generally relies on a parametric density function of portfolio returns that ignores higher moments or assumes them constant. In this paper, we propose a simple approach to forecasting portfolio VaR. We employ the Gram-Charlier expansion (GCE), augmenting the standard normal distribution with the first four moments, which are allowed to vary over time. In an extensive empirical study, we compare the GCE approach to other models of VaR forecasting and conclude that it provides accurate and robust estimates of the realized VaR. In spite of its simplicity, on our dataset the GCE outperforms estimates generated by both constant and time-varying higher-moment models.
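For the time-invariant case, the GCE has a closed-form CDF, F(z) = Phi(z) - phi(z) * [ (s/6)(z^2 - 1) + (k/24)(z^3 - 3z) ], with s the skewness and k the excess kurtosis, so the VaR quantile can be obtained by simple root-finding. The sketch below illustrates that calculation generically; it is not the paper's time-varying estimation procedure, and the moment values are arbitrary (kept small so the expanded density stays non-negative).

```python
import numpy as np
from scipy import stats, optimize

def gce_cdf(z, skew, exkurt):
    """CDF of the Gram-Charlier expanded standard density
    f(z) = phi(z) * (1 + skew/6 * He3(z) + exkurt/24 * He4(z)),
    integrated in closed form via He2(z) = z^2 - 1, He3(z) = z^3 - 3z."""
    he2, he3 = z**2 - 1.0, z**3 - 3.0 * z
    return stats.norm.cdf(z) - stats.norm.pdf(z) * (skew / 6.0 * he2 + exkurt / 24.0 * he3)

def gce_var(alpha, mu, sigma, skew, exkurt):
    """VaR at level alpha: the alpha-quantile of the GCE return distribution,
    found by root-finding and reported as a positive loss number."""
    z = optimize.brentq(lambda x: gce_cdf(x, skew, exkurt) - alpha, -8.0, 8.0)
    return -(mu + sigma * z)

print(gce_var(0.01, mu=0.0, sigma=0.02, skew=-0.4, exkurt=1.5))  # GCE VaR
print(-0.02 * stats.norm.ppf(0.01))                              # normal VaR
```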
Abstract:
Metallographic characterisation is combined with statistical analysis to study the microstructure of a BT16 titanium alloy after different heat treatment processes. It was found that the length, width and aspect ratio of α plates in this alloy follow the three-parameter Weibull distribution. Increasing annealing temperature or time causes the probability distribution of the length and the width of α plates to tend toward a normal distribution. The phase transformation temperature of the BT16 titanium alloy was found to be 875±5°C.
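As a methodological aside, a three-parameter Weibull fit of the kind reported here can be reproduced with standard tools: scipy's weibull_min becomes the three-parameter form when its location (threshold) parameter is left free. The synthetic lengths below are a stand-in for measured α-plate dimensions, which the abstract does not provide.

```python
import numpy as np
from scipy import stats

# Stand-in for measured alpha-plate lengths; parameters are invented.
lengths = stats.weibull_min.rvs(c=1.8, loc=0.5, scale=4.0, size=400,
                                random_state=np.random.default_rng(3))

# Maximum-likelihood fit of shape c, threshold loc and scale.
c, loc, scale = stats.weibull_min.fit(lengths)
print(f"shape={c:.2f}, threshold={loc:.2f}, scale={scale:.2f}")

# Goodness of fit, e.g. Kolmogorov-Smirnov against the fitted distribution.
print(stats.kstest(lengths, "weibull_min", args=(c, loc, scale)))
```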
Abstract:
The kinetics of the recovery of the photoinduced transient bleaching of colloidal CdS in the presence of different electron acceptors are examined. In the presence of the zwitterionic viologen, N,N'-dipropyl-2,2'-bipyridinium disulphonate, excitation of colloidal CdS at different flash intensities generates a series of decay profiles which are superimposed when normalized. The shapes of the decay curves are as predicted by a first-order activation-controlled model for a log-normal distribution of particle sizes. In contrast, varying the flash intensity in the presence of a second viologen, N,N'-dipropyl-4,4'-bipyridinium sulphonate, generates normalized decay traces which broaden with increasing flash intensity. This behaviour is predicted by a zero-order diffusion-controlled model for a log-normal distribution of particle radii. The photoreduction of a number of other oxidants sensitized by colloidal CdS is examined, and the shapes of the decay kinetics are interpreted via either the first- or zero-order kinetic model. The rate constants and activation energies derived using these models are consistent with the values expected for an activation- or diffusion-controlled reaction.
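The diagnostic used above, whether normalized decay traces superimpose or broaden with flash intensity, follows directly from the kinetic order. The minimal sketch below isolates that signature, leaving out the log-normal size averaging of the full model:

```python
import numpy as np

t = np.linspace(0.0, 5.0, 200)

def first_order(c0, k=1.0):
    return c0 * np.exp(-k * t)           # dc/dt = -k*c

def zero_order(c0, k=1.0):
    return np.maximum(c0 - k * t, 0.0)   # dc/dt = -k while c > 0

for c0 in (1.0, 2.0, 5.0):               # stand-ins for flash intensities
    # First order: normalized profiles superimpose for every c0 ...
    assert np.allclose(first_order(c0) / c0, first_order(1.0))
    # ... zero order: normalized profiles broaden as c0 grows.
    print(c0, zero_order(c0)[50] / c0)
```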
Abstract:
This paper compares the applicability of three ground survey methods for modelling terrain: one-man electronic tachymetry (TPS), real-time kinematic GPS (GPS), and terrestrial laser scanning (TLS). The vertical accuracy of digital terrain models (DTMs) derived from GPS, TLS and airborne laser scanning (ALS) data is assessed. Point elevations acquired by the four methods represent two sections of a mountainous area in Cumbria, England, chosen so that the presence of non-terrain features was kept to a minimum. The vertical accuracy of the DTMs was addressed by subtracting each DTM from the TPS point elevations. The error was assessed using exploratory measures including summary statistics, histograms, and normal probability plots. The results showed that the internal measurement accuracy of TPS, GPS, and TLS was below a centimetre. TPS and GPS can be considered equally applicable alternatives for sampling the terrain in areas accessible on foot. The highest DTM vertical accuracy was achieved with GPS data, both on sloped terrain (RMSE 0.16 m) and flat terrain (RMSE 0.02 m). TLS surveying was the most efficient overall, but the veracity of the terrain representation was affected by dense vegetation cover. Therefore, the DTM accuracy was the lowest for the sloped area with dense bracken (RMSE 0.52 m), although it was the second highest on the flat unobscured terrain (RMSE 0.07 m). ALS data represented the sloped terrain more realistically (RMSE 0.23 m) than the TLS. However, due to a systematic bias identified on the flat terrain, the ALS DTM accuracy was the lowest there (RMSE 0.29 m), which was above the level stated by the data provider. Error distributions were more closely approximated by a normal distribution defined using the median and the normalized median absolute deviation, which supports the use of robust measures in DEM error modelling and its propagation.
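The robust measures mentioned in the closing sentence are straightforward to compute alongside the RMSE. The sketch below uses synthetic checkpoint data rather than the Cumbria dataset, with NMAD in its usual form, 1.4826 times the median absolute deviation:

```python
import numpy as np

def dtm_error_stats(dtm_elev, check_elev):
    """Vertical accuracy of a DTM against checkpoint elevations (e.g. TPS
    points): RMSE plus the robust measures, median and NMAD. NMAD
    (1.4826 * median absolute deviation) estimates the standard deviation
    without being inflated by outliers."""
    err = dtm_elev - check_elev
    rmse = np.sqrt(np.mean(err**2))
    med = np.median(err)
    nmad = 1.4826 * np.median(np.abs(err - med))
    return rmse, med, nmad

rng = np.random.default_rng(4)
truth = rng.uniform(200.0, 400.0, 1000)       # hypothetical checkpoint heights
dtm = truth + rng.normal(0.0, 0.05, 1000)     # 5 cm Gaussian error
dtm[:20] += rng.normal(0.0, 1.0, 20)          # a few vegetation-induced outliers
print(dtm_error_stats(dtm, truth))            # RMSE inflated, NMAD robust
```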
Abstract:
This study presents the findings of an empirical channel characterisation for an ultra-wideband off-body optic-fibre-fed multiple-antenna array within an office and corridor environment. The results show that, for the received-power experiments, the office and corridor were best modelled by lognormal and Rician distributions, respectively [for both line-of-sight (LOS) and non-LOS (NLOS) scenarios]. In the office, LOS measurements for τ and τRMS were both described by the normal distribution for all channels, whereas NLOS measurements for τ and τRMS were Nakagami and Weibull distributed, respectively. For the corridor measurements, LOS values for τ and τRMS were either Nakagami or normally distributed for all channels, with NLOS measurements for τ and τRMS being Nakagami and normally distributed, respectively. This work also shows that the achievable diversity gain was influenced by both mutual coupling and cross-correlation coefficients. Although the best diversity gain was 1.8 dB for three-channel selective diversity combining, the authors present recommendations for improving these results.
Abstract:
A randomly distributed multi-particle model has been constructed that considers the effects of the particle/matrix interface and of the strengthening mechanisms introduced by the particles. Particle shape, distribution and volume fraction, as well as the particle/matrix interface affected by factors such as element diffusion, were considered in the model. The effects of the strengthening mechanisms caused by the introduction of particles on the mechanical properties of the composites, including grain refinement strengthening, dislocation strengthening and Orowan strengthening, are incorporated. In the model, the particles are assumed to be spheroidal, with the centre, long-axis length and inclination angle uniformly distributed; the axis ratio follows a right half-normal distribution. Using the Monte Carlo method, the location and shape parameters of the spheroids are randomly selected. The particle volume fraction is calculated from the area ratio of the spheroids. The effects of the particle/matrix interface and the strengthening mechanisms on the distribution of Mises stress and equivalent strain and on the flow behaviour of the composites are then discussed.
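A sketch of the sampling step described above might look as follows; the domain size, long-axis range and half-normal scale are invented for illustration, since the abstract does not report the actual parameter values:

```python
import numpy as np

def sample_spheroids(n, rng, domain=100.0, a_mean=3.0, ratio_sigma=0.4):
    """Monte Carlo sampling of spheroid (ellipse) sections per the scheme
    above: centres, long-axis lengths and inclination angles uniformly
    distributed; the axis ratio half-normal (clipped at 1 so the long axis
    remains the long axis). All parameter values are illustrative."""
    centres = rng.uniform(0.0, domain, size=(n, 2))
    a = rng.uniform(0.5 * a_mean, 1.5 * a_mean, n)    # long semi-axis
    theta = rng.uniform(0.0, np.pi, n)                # inclination angle
    ratio = np.minimum(np.abs(rng.normal(0.0, ratio_sigma, n)), 1.0)
    return centres, a, ratio * a, theta

rng = np.random.default_rng(5)
centres, a, b, theta = sample_spheroids(200, rng)
# Particle volume fraction taken as the area ratio of the ellipses:
print(np.sum(np.pi * a * b) / 100.0**2)
```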
Abstract:
To resolve outstanding questions on the heating of coronal loops, we study intensity fluctuations in inter-moss portions of active region core loops as observed with AIA/SDO. The 94Å fluctuations (Figure 1) have structure on timescales shorter than the radiative and conductive cooling times. Each of several strong 94Å brightenings is followed after ~8 min by a broader peak in the cooler 335Å emission. This indicates that we see emission from the hot component of the 94Å contribution function. No hotter contributions appear, and we conclude that the 94Å intensity can be used as a proxy for energy injection into the loop plasma. The probability density function of the observed 94Å intensity has 'heavy tails' that approach zero more slowly than the tails of a normal distribution. Hence, large fluctuations dominate the behavior of the system. The resulting 'intermittence' is associated with power-law or exponential scaling of the related variables, which in turn are associated with turbulent phenomena. The intensity plots in Figure 1 resemble multifractal time series, which are common to various forms of turbulent energy dissipation. In these systems a single fractal dimension is insufficient to describe the dynamics; instead there is a spectrum of fractal dimensions that quantify the self-similar properties. Figure 2 shows the multifractal spectrum from our data to be invariant over timescales from 24 s to 6.4 min. We compare these results to outputs from theoretical energy dissipation models based on MHD turbulence, and in some cases we find substantial agreement in terms of intermittence, multifractality and scale invariance.
Figure 1. Time traces of 94Å intensity in the inter-moss portions of four AR core loops.
Figure 2. Multifractal spectra showing timescale invariance; the four cases of Figure 1 are included.
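The heavy-tail criterion used here, a density approaching zero more slowly than a normal, can be checked on any time series with a pair of standard statistics. The sketch below is generic, using a Student-t sample as a stand-in rather than the AIA 94Å intensities:

```python
import numpy as np
from scipy import stats

def tail_heaviness(x):
    """Two simple indicators of tails heavier than a normal distribution:
    positive excess kurtosis, and more standardized samples beyond 3 sigma
    than the normal tail mass (about 0.0027) predicts."""
    z = (x - x.mean()) / x.std()
    return stats.kurtosis(z), np.mean(np.abs(z) > 3.0), 2.0 * stats.norm.sf(3.0)

rng = np.random.default_rng(1)
print(tail_heaviness(rng.normal(size=20000)))          # light tails
print(tail_heaviness(rng.standard_t(3, size=20000)))   # heavy tails
```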
Abstract:
Most cryptographic devices must incorporate resistance against the threat of side-channel attacks. To this end, masking and hiding schemes have been proposed since 1999. The security validation of these countermeasures is an ongoing research topic, as a wider range of new and existing attack techniques are tested against them. This paper examines the side-channel security of the balanced encoding countermeasure, whose aim is to process the secret key-related data under constant Hamming weight and/or Hamming distance leakage. Unlike previous works, we assume that the leakage model coefficients conform to a normal distribution, producing a model with closer fidelity to real-world implementations. We perform analysis on the balanced encoded PRINCE block cipher with a simulated leakage model and also with an implementation on an AVR board. We consider both standard correlation power analysis (CPA) and bit-wise CPA. We confirm the resistance of the countermeasure against standard CPA; however, we find that a bit-wise CPA can reveal the key with only a few thousand traces.
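For readers unfamiliar with CPA, the sketch below runs the standard first-order attack on simulated Hamming-weight leakage through a 4-bit S-box (PRINCE's, per its specification). It is a textbook illustration only, not a reproduction of the paper's balanced-encoding experiments or of its normally distributed leakage coefficients.

```python
import numpy as np

SBOX = np.array([0xB, 0xF, 0x3, 0x2, 0xA, 0xC, 0x9, 0x1,
                 0x6, 0x7, 0x8, 0x0, 0xE, 0x5, 0xD, 0x4])  # PRINCE S-box
HW = np.array([bin(v).count("1") for v in range(16)])       # Hamming weights

rng = np.random.default_rng(2)
true_key = 0x6
plaintexts = rng.integers(0, 16, size=5000)
# Simulated unprotected leakage: HW of the S-box output plus Gaussian noise.
traces = HW[SBOX[plaintexts ^ true_key]] + rng.normal(0.0, 1.0, 5000)

def cpa(traces, plaintexts):
    """Standard CPA: the key guess whose hypothetical HW leakage correlates
    best with the measured traces is reported as the recovered key."""
    scores = [abs(np.corrcoef(HW[SBOX[plaintexts ^ k]], traces)[0, 1])
              for k in range(16)]
    return int(np.argmax(scores))

print(cpa(traces, plaintexts) == true_key)  # True: the key is recovered
```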
Abstract:
Beyond the classical statistical approaches (determination of basic statistics, regression analysis, ANOVA, etc.), a new set of applications of different statistical techniques has increasingly gained relevance in the analysis, processing and interpretation of data concerning the characteristics of forest soils. This can be seen in several recent publications in the context of Multivariate Statistics. These new methods require additional care that is not always taken or reported. In the particular case of geostatistical applications it is necessary, besides geo-referencing all the data acquisition, to collect the samples on regular grids and in sufficient quantity so that the variograms can reflect the spatial distribution of soil properties in a representative manner. As for the great majority of Multivariate Statistics techniques (Principal Component Analysis, Correspondence Analysis, Cluster Analysis, etc.), although in most cases they do not require the assumption of a normal distribution, they nevertheless need a proper and rigorous strategy for their use. In this work, some reflections on these methodologies are presented, in particular on the main constraints that often arise during data collection and on the various ways of linking these different techniques. Finally, illustrations of some particular applications of these statistical methods are also presented.
Abstract:
The exposure to dust and polynuclear aromatic hydrocarbons (PAH) of 15 truck drivers from Geneva, Switzerland, was measured. The drivers were divided between "long-distance" drivers and "local" drivers and between smokers and nonsmokers, and were compared with a control group of 6 office workers who were also divided into smokers and nonsmokers. Dust was measured on 1 workday both by a direct-reading instrument and by sampling. The local drivers showed higher exposure to dust (0.3 mg/m3) and PAH than the long-distance drivers (0.1 mg/m3), who showed no difference from the control group. This observation may be due to the fact that the local drivers spend more time in more polluted areas, such as streets with heavy traffic and construction sites, than do the long-distance drivers. Smoking does not influence the exposure to dust and PAH of professional truck drivers, as measured in this study, probably because the ventilation rate of the truck cabins is relatively high even during cold days (11-15 r/h). The distribution of dust concentrations was shown in some cases to be quite different from the expected log-normal distribution. The contribution of diesel exhaust to these exposures could not be estimated since no specific tracer was used. However, the relatively low level of dust exposure does not support the hypothesis that present-day levels of diesel exhaust particulates play a significant role in the excess occurrence of lung cancer observed in professional truck drivers.
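The departure from log-normality noted above can be assessed by testing the normality of the log-transformed concentrations, for instance with a Shapiro-Wilk test or a probability plot. A generic sketch with synthetic data, not the Geneva measurements:

```python
import numpy as np
from scipy import stats

def lognormality_pvalue(conc):
    """Shapiro-Wilk test on log concentrations: a small p-value indicates
    the usual log-normal exposure assumption is questionable."""
    return stats.shapiro(np.log(conc)).pvalue

rng = np.random.default_rng(6)
print(lognormality_pvalue(rng.lognormal(-1.0, 0.8, 60)))  # consistent
print(lognormality_pvalue(rng.uniform(0.05, 0.6, 60)))    # often rejected
```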