967 results for Bivariate Gaussian distribution


Relevance:

80.00%

Publisher:

Abstract:

The Curie-Weiss model is defined by a Hamiltonian according to which the spins interact. For particular values of the parameters, the sum of the spins under square-root normalization either converges to a Gaussian distribution or fails to. In this thesis we investigate the connection between the behaviour of this sum and central limit theorems for interacting random variables.
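
For concreteness, a standard parametrization of the model and of the normalized sum in question is sketched below; this is the common mean-field form, and the thesis's exact conventions may differ.

```latex
% Curie-Weiss Hamiltonian for N spins \sigma_i \in \{-1,+1\} with coupling J
% and external field h (standard mean-field form, assumed for illustration),
% together with the square-root-normalized spin sum:
H_N(\sigma) = -\frac{J}{2N}\Bigl(\sum_{i=1}^{N}\sigma_i\Bigr)^{2}
              - h\sum_{i=1}^{N}\sigma_i,
\qquad
S_N = \frac{1}{\sqrt{N}}\sum_{i=1}^{N}\sigma_i .
```

The classical dichotomy is that away from the critical point, S_N (suitably centred) has a Gaussian limit, while at the critical point the fluctuations are non-Gaussian and only the weaker N^{3/4} normalization yields a nondegenerate limit.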

Relevance:

80.00%

Publisher:

Abstract:

PURPOSE: Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability there is a need for methods to match different PET/CT systems through elimination of this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into the image of the same object as it would have been seen by a different tomograph. The proposed method, termed Transconvolution, compensates for the differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials.

METHODS: To solve the problem of image normalization, the theory of Transconvolution was mathematically established, together with new methods to handle point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows one to determine a Transconvolution function that converts one image into the other. This function is calculated by convolving one point spread function with the inverse of the other, which, when certain boundary conditions are adhered to, such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of the point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating (68)Ge/(68)Ga-filled spheres was developed. To iteratively determine and represent these point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were matched. A Hann window served as the modulation transfer function of the virtual PET; its apodization properties suppress spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system.

RESULTS: The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The largest difference in measured activity concentration between the two PET systems, 18.2%, was found in spheres of 2 ml volume; Transconvolution reduced this difference to 1.6%. In addition to reestablishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs.

CONCLUSIONS: By matching different tomographs to a virtual standardized imaging system, Transconvolution provides a new, comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
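
A minimal numerical sketch of the core idea, assuming idealized one-dimensional Gaussian point spread functions and an ideal Hann low-pass window; the function names, the crude regularization and all parameter values are illustrative, not the authors' implementation.

```python
import numpy as np

def transconvolve(image, psf_a, psf_b, eps=1e-6):
    """Transform `image`, measured on system A, into the image the target
    (virtual) system B would have produced: in Fourier space the
    Transconvolution kernel is H_B / H_A, apodized by a Hann window so the
    division never amplifies frequencies beyond the critical frequency."""
    H_a = np.fft.rfft(np.fft.ifftshift(psf_a))
    H_b = np.fft.rfft(np.fft.ifftshift(psf_b))
    hann = 0.5 * (1 + np.cos(np.pi * np.arange(H_a.size) / (H_a.size - 1)))
    kernel = hann * H_b / np.where(np.abs(H_a) > eps, H_a, eps)
    return np.fft.irfft(np.fft.rfft(image) * kernel, n=image.size)

# Toy demonstration: a 10-unit object imaged by a sharper system A is
# transformed into what the blurrier virtual system B would have measured.
x = np.linspace(-20, 20, 401)
dx = x[1] - x[0]
gauss = lambda s: np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
psf_a, psf_b = gauss(2.0), gauss(3.0)
obj = (np.abs(x) < 5).astype(float)
img_a = np.convolve(obj, psf_a, mode="same") * dx
img_b = transconvolve(img_a, psf_a, psf_b)
```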

Relevance:

80.00%

Publisher:

Abstract:

Alterations in oncogenes and tumor suppressor genes (TSGs) are considered to be critical steps in oncogenesis. Consistent deletions and loss of heterozygosity (LOH) of polymorphic markers within a particular chromosomal fragment are known to be indicative of a closely mapping TSG. Deletion of the long arm of chromosome 7 (hchr 7) is a frequent trait in many kinds of human primary tumors. LOH was studied with an extensive set of markers on chromosome 7q in several types of human neoplasias (primary breast, prostate, colon, ovarian, and head and neck carcinomas) to determine the location of a putative TSG. The extent of LOH varied depending on the type of tumor studied, but all the LOH curves we obtained peaked at the (C-A)n microsatellite repeat D7S522 at 7q31.1 and showed a Gaussian distribution. The high incidence of LOH in all tumor types studied suggests that a TSG relevant to the development of epithelial cancers is present at 7q31.1. To investigate whether the putative TSG is conserved in the syntenic mouse locus, we studied LOH of 30 markers along mouse chromosome 6 (mchr 6) in chemically induced squamous cell carcinomas (SCCs). Tumors were obtained from SENCAR and C57BL/6 x DBA/2 F1 females by a two-stage carcinogenesis protocol. The high incidence of LOH in the tumor types studied suggests that a TSG relevant to the development of epithelial cancers is present on mchr 6 A1. Since this segment is syntenic with hchr 7q31, these data indicate that the putative TSG is conserved in both species. Functional evidence for the existence of a TSG on hchr 7 was obtained by microcell fusion transfer of a single hchr 7 into a murine SCC-derived cell line. Five out of seven hybrids had two- to three-fold longer latency periods in in vivo tumorigenicity assays than the parental cells. One of the unrepressed hybrids had a deletion in the introduced chromosome 7 involving q31.1-q31.3, confirming the LOH data.

Relevance:

80.00%

Publisher:

Abstract:

Nuclear morphometry (NM) uses image analysis to measure features of the cell nucleus, classified as bulk properties, shape or form, and DNA distribution. Studies have used these measurements as diagnostic and prognostic indicators of disease, with inconclusive results. The distributional properties of these variables have not been systematically investigated, although much medical data exhibit non-normal distributions. Measurements are made on several hundred cells per patient, so summary measures reflecting the underlying distribution are needed. Distributional characteristics of 34 NM variables from prostate cancer cells were investigated using graphical and analytical techniques. Cells per sample ranged from 52 to 458. A small sample of patients with benign prostatic hyperplasia (BPH), representing non-cancer cells, was used for general comparison with the cancer cells. Data transformations such as log, square root, and 1/x did not yield normality as measured by the Shapiro-Wilk test. A modulus transformation, used for distributions with abnormal kurtosis values, also did not produce normality. Kernel density histograms of the 34 variables exhibited non-normality, and 18 variables also exhibited bimodality. A bimodality coefficient was calculated, and three variables (DNA concentration, shape, and elongation) showed the strongest evidence of bimodality and were studied further. Two analytical approaches were used to obtain a summary measure per variable per patient: cluster analysis to determine significant clusters, and a mixture model analysis using a two-component Gaussian model with equal variances. The mixture component parameters were used to bootstrap the log-likelihood ratio to determine the significant number of components, one or two. These summary measures were used as predictors of disease severity in several proportional odds logistic regression models. The disease severity scale had five levels and was constructed from three components, extracapsular penetration (ECP), lymph node involvement (LN+), and seminal vesicle involvement (SV+), which represent surrogate measures of prognosis. The summary measures were not strong predictors of disease severity. There was some indication from the mixture model results of changes in the mean levels and proportions of the components at the lower severity levels.
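
A minimal sketch of the mixture-model step, assuming a parametric bootstrap of the log-likelihood ratio for one versus two equal-variance Gaussian components; the helper name, the tied-covariance approximation and the number of bootstrap replicates are illustrative, not the study's protocol.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def significant_components(values, n_boot=200, alpha=0.05, seed=0):
    """Return 2 if a second Gaussian component is significant, else 1.

    Fits 1- and 2-component mixtures ("tied" covariance enforces equal
    variances) and parametrically bootstraps the log-likelihood ratio
    under the 1-component null."""
    rs = np.random.RandomState(seed)        # shared so each resample varies
    X = np.asarray(values, float).reshape(-1, 1)
    fit = lambda k, D: GaussianMixture(k, covariance_type="tied",
                                       random_state=rs).fit(D)
    lr_obs = 2 * len(X) * (fit(2, X).score(X) - fit(1, X).score(X))
    null = fit(1, X)
    exceed = 0
    for _ in range(n_boot):
        Xb, _ = null.sample(len(X))         # simulate from the null fit
        lr_b = 2 * len(Xb) * (fit(2, Xb).score(Xb) - fit(1, Xb).score(Xb))
        exceed += lr_b >= lr_obs
    return 2 if exceed / n_boot < alpha else 1

# Example: a clearly bimodal "patient" sample is flagged as 2 components.
rs = np.random.RandomState(1)
cells = np.concatenate([rs.normal(1.0, 0.3, 150), rs.normal(2.5, 0.3, 150)])
print(significant_components(cells, n_boot=50))
```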

Relevance:

80.00%

Publisher:

Abstract:

Air was sampled from the porous firn layer at the NEEM site in Northern Greenland. We use an ensemble of ten reference tracers of known atmospheric history to characterise the transport properties of the site. By analysing uncertainties in both the data and the reference-gas atmospheric histories, we can objectively assign weights to each of the gases used for the depth-diffusivity reconstruction. We define an objective root-mean-square criterion that is minimised in the model tuning procedure. Each tracer constrains the firn profile differently through its unique atmospheric history and free-air diffusivity, making our multiple-tracer characterisation method a clear improvement over the commonly used single-tracer tuning. Six firn air transport models are tuned to the NEEM site; all models successfully reproduce the data within a 1σ Gaussian distribution. A comparison between two replicate boreholes drilled 64 m apart shows differences in measured mixing ratio profiles that exceed the experimental error. We find evidence that diffusivity does not vanish completely in the lock-in zone, as is commonly assumed. The ice age-gas age difference (Δage) at the firn-ice transition is calculated to be 182 (+3/−9) yr. We further present the first intercomparison study of firn air models, in which we introduce diagnostic scenarios designed to probe specific aspects of the model physics. Our results show that there are major differences in the way the models handle advective transport. Furthermore, diffusive fractionation of isotopes in the firn is poorly constrained by the models, which has consequences for attempts to reconstruct the isotopic composition of trace gases back in time using firn air and ice core records.
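
The minimised criterion is not spelled out in the abstract; a weighted root-mean-square mismatch of the general form below is one plausible reading (all notation assumed for illustration, not the paper's own).

```latex
% Weighted RMS mismatch over N tracer data points: C_i are modelled and
% measured mixing ratios, \sigma_i the combined uncertainties, and w_i the
% objectively assigned tracer weights (notation assumed):
E = \left[\frac{1}{N}\sum_{i=1}^{N} w_i
    \left(\frac{C_i^{\mathrm{model}} - C_i^{\mathrm{data}}}
               {\sigma_i}\right)^{2}\right]^{1/2}
```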

Relevance:

80.00%

Publisher:

Abstract:

Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs), and it determines their behavior and performance in optimization. Regularization is a well-known statistical technique for obtaining an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. ℓ1-regularization is one such technique, with the appealing variable-selection property of producing sparse model estimates. In this thesis, we study the use of regularization techniques for model learning in EDAs. Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented and analyzed from different aspects when used for optimization in a high-dimensional setting, where the population size of the EDA scales logarithmically with the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs more robust optimization and achieves significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when using this model estimation method. Our study also covers multi-objective optimization, and we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. It is shown that with this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective-variable and objective-objective relationships. An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable-objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimation of the multi-objective problem structure. Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions in these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that ranking methods based on this dominance relation can result in competitive EDA performance with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on ℓ1-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets of small to medium dimensionality, using two different Bayesian classifiers, shows that Pareto sets of feature subsets comparable or superior to those of standard methods are approximated.
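
A minimal sketch of one generation of a continuous EDA with ℓ1-regularized Gaussian model estimation, using scikit-learn's graphical lasso as the sparse estimator; the population sizing, selection ratio, regularization strength and test function are illustrative, not the thesis's experimental setup.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def eda_generation(pop, fitness, sel_ratio=0.3, alpha=0.1, rng=None):
    """One EDA generation: truncation selection, then an L1-regularized
    (sparse-precision) Gaussian fitted to the selected set and resampled."""
    rng = rng or np.random.default_rng()
    sel = pop[np.argsort(fitness(pop))[: int(sel_ratio * len(pop))]]
    model = GraphicalLasso(alpha=alpha).fit(sel)   # sparse precision matrix
    return rng.multivariate_normal(sel.mean(axis=0), model.covariance_,
                                   size=len(pop))

sphere = lambda X: np.sum(X**2, axis=1)            # minimization test problem
rng = np.random.default_rng(0)
pop = rng.normal(0.0, 5.0, size=(100, 20))         # 20 variables
for _ in range(15):
    pop = eda_generation(pop, sphere, rng=rng)
print(sphere(pop).min())                           # decreases towards 0
```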

Relevance:

80.00%

Publisher:

Abstract:

This paper presents some initial attempts to mathematically model the dynamics of a continuous estimation of distribution algorithm (EDA) based on a Gaussian distribution and truncation selection. Case studies are conducted on both unimodal and multimodal problems to highlight the effectiveness of the proposed technique and to explore some important properties of the EDA. Under some general assumptions, we show that, for 1D unimodal problems with the (mu, lambda) scheme: (1) the behaviour of the EDA depends only on the general shape of the test function, rather than its specific form; (2) when initialized far from the global optimum, the EDA has a tendency to converge prematurely; (3) given a certain selection pressure, there is a unique value of the proposed amplification parameter that helps the EDA achieve desirable performance. For 1D multimodal problems: (1) the EDA can get stuck with the (mu, lambda) scheme; (2) the EDA will never get stuck with the (mu + lambda) scheme.
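
A minimal sketch of the algorithm being modelled, assuming the usual form of a univariate Gaussian EDA with (mu, lambda) truncation selection; the amplification parameter here rescales the re-estimated standard deviation, and all names and values are illustrative.

```python
import numpy as np

def gaussian_eda(f, mu=20, lam=100, a=2.2, m0=10.0, s0=1.0, steps=60, seed=0):
    """(mu, lambda) continuous EDA: sample lambda points from N(m, s^2),
    keep the mu best, re-estimate m and s, and amplify s by the factor a
    to counteract the variance shrinkage induced by selection."""
    rng = np.random.default_rng(seed)
    m, s = m0, s0                            # start far from the optimum at 0
    for _ in range(steps):
        x = rng.normal(m, s, lam)
        best = x[np.argsort(f(x))[:mu]]      # truncation selection
        m, s = best.mean(), a * best.std()
    return m, s

print(gaussian_eda(lambda x: x**2))          # large enough a: reaches ~0
print(gaussian_eda(lambda x: x**2, a=1.2))   # small a: premature convergence
```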

Relevance:

80.00%

Publisher:

Abstract:

Using techniques from statistical physics, the annealed VC entropy for hyperplanes in high-dimensional spaces is calculated as a function of the margin for a spherical Gaussian distribution of inputs.

Relevance:

80.00%

Publisher:

Abstract:

We have proposed a novel robust inversion-based neurocontroller that searches for the optimal control law by sampling from the estimated Gaussian distribution of the inverse plant model. However, for problems involving the prediction of continuous variables, a Gaussian model approximation provides only a very limited description of the properties of the inverse model. This is usually the case for problems in which the mapping to be learned is multi-valued or involves hysteretic transfer characteristics, which often arise in inverse plant models. In order to obtain a complete description of the inverse model, a more general multicomponent distribution must be modeled. In this paper we test whether our proposed sampling approach can be used with arbitrary conditional probability distributions, modeled here by a mixture density network. Importance sampling provides a structured and principled approach to constraining the complexity of the search space for the ideal control law. The effectiveness of importance sampling from an arbitrary conditional probability distribution is demonstrated using a simple single-input single-output static nonlinear system with hysteretic characteristics in the inverse plant model.
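
A minimal sketch of the sampling step, assuming mixture parameters already predicted by a trained mixture density network for the conditional inverse-model density; the two-component numbers, the toy plant and the weighting kernel are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mixture parameters an MDN might output for p(u | y_target): mixing
# coefficients, component means and widths (illustrative values).
pi = np.array([0.6, 0.4])
mu = np.array([-1.0, 2.0])
sigma = np.array([0.3, 0.5])

comp = rng.choice(len(pi), size=500, p=pi)       # choose mixture components
u = rng.normal(mu[comp], sigma[comp])            # candidate control actions

plant = np.tanh                                  # toy SISO static nonlinearity
y_target = 0.9
w = np.exp(-0.5 * ((plant(u) - y_target) / 0.05) ** 2)  # importance weights
u_star = u[np.argmax(w)]                         # select the best candidate
print(u_star, plant(u_star))
```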

Relevance:

80.00%

Publisher:

Abstract:

The effect of having a fixed differential group delay (DGD) term in the coarse-step method is a periodic pattern in the autocorrelation function. We solve this problem by inserting a varying DGD term at each integration step, drawn according to a Gaussian distribution. Simulation results are given to illustrate the phenomenon and provide some evidence about its statistical nature.

Relevance:

80.00%

Publisher:

Abstract:

Pearson's correlation coefficient ('r') is one of the most widely used of all statistics. Nevertheless, care is needed in interpreting the results, because with large numbers of observations quite small values of 'r' become significant, and the X variable may account for only a small proportion of the variance in Y. Hence, 'r squared' should always be calculated and included in a discussion of the significance of 'r'. The use of 'r' also assumes that the data follow a bivariate normal distribution (see Statnote 17), and this assumption should be examined prior to the study. If the data do not conform to such a distribution, the use of a non-parametric correlation coefficient should be considered. A significant correlation should not be interpreted as indicating 'causation', especially in observational studies, in which the two variables may be correlated because of their mutual correlations with other confounding variables.
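
A short numerical illustration of the point about sample size, using synthetic data: with ten thousand observations a correlation this weak is "significant", even though r squared shows that X explains about 0.1% of the variance in Y.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 10_000
x = rng.normal(size=n)
y = 0.03 * x + rng.normal(size=n)       # Y is almost entirely noise

r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p = {p:.3g}, r squared = {r**2:.4f}")
```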

Relevance:

80.00%

Publisher:

Abstract:

1. Pearson's correlation coefficient only tests whether the data fit a linear model. With large numbers of observations, quite small values of r become significant, and the X variable may account for only a minute proportion of the variance in Y. Hence, the value of r squared should always be calculated and included in a discussion of the significance of r. 2. The use of r assumes that a bivariate normal distribution is present, and this assumption should be examined prior to the study. If Pearson's r is not appropriate, then a non-parametric correlation coefficient such as Spearman's rs may be used. 3. A significant correlation should not be interpreted as indicating causation, especially in observational studies, in which there is a high probability that the two variables are correlated because of their mutual correlations with other variables. 4. In studies of measurement error, there are problems in using r as a test of reliability, and the 'intra-class correlation coefficient' should be used as an alternative. A correlation test provides only limited information about the relationship between two variables. Fitting a regression line to the data using the method known as 'least squares' provides much more information, and the methods of regression and their application in optometry will be discussed in the next article.
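
A short illustration of point 2, using synthetic data that violate bivariate normality: Spearman's rs, being rank-based, is far less distorted by the heavy tails than Pearson's r.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.standard_cauchy(200)            # heavy-tailed: not bivariate normal
y = x**3 + rng.standard_cauchy(200)     # monotone but nonlinear relationship

print(stats.pearsonr(x, y))             # dominated by a few extreme points
print(stats.spearmanr(x, y))            # rank-based alternative
```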

Relevance:

80.00%

Publisher:

Abstract:

The effect of having a fixed differential group delay (DGD) term in the coarse-step method is a periodic pattern in the autocorrelation function. We solve this problem by inserting a varying DGD term at each integration step, drawn according to a Gaussian distribution. Simulation results are given to illustrate the phenomenon and provide some evidence about its statistical nature.
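
A minimal sketch of the modification, assuming the usual coarse-step construction (birefringent sections separated by random polarization scattering); the section count, mean DGD and Gaussian spread are illustrative. Comparing frequency autocorrelations of the two outputs shows the periodic artefact of the fixed-DGD variant disappear.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sec, tau, spread = 100, 0.5, 0.1      # sections, mean DGD (ps), std (ps)
omega = np.linspace(-20.0, 20.0, 512)   # angular frequency offsets (rad/ps)

def scatter():
    """Random unitary polarization rotation between sections."""
    v = rng.normal(size=4)
    a, b = v[0] + 1j * v[1], v[2] + 1j * v[3]
    n = np.sqrt(abs(a) ** 2 + abs(b) ** 2)
    a, b = a / n, b / n
    return np.array([[a, -np.conj(b)], [b, np.conj(a)]])

def propagate(dgds):
    """Jones vector across the frequency grid after all sections."""
    E = np.tile([1.0 + 0j, 0.0 + 0j], (omega.size, 1))
    for t in dgds:
        E = E @ scatter().T                       # mode scrambling
        E[:, 0] *= np.exp(+1j * omega * t / 2)    # birefringent delay
        E[:, 1] *= np.exp(-1j * omega * t / 2)
    return E

E_fixed = propagate(np.full(n_sec, tau))             # fixed DGD: artefact
E_gauss = propagate(rng.normal(tau, spread, n_sec))  # Gaussian-drawn DGD
```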

Relevance:

80.00%

Publisher:

Abstract:

We introduce a general technique for revealing, in experiments whose electrical bandwidth is lower than the optical bandwidth of the signal under study, whether the statistical properties of the light source obey a Gaussian distribution or whether mode correlations exist. To do this, one performs measurements at successively decreasing measurement bandwidths. We develop a simple model of bandwidth-limited measurements and predict universal laws describing how the intensity probability density function and the intensity autocorrelation function of an ideal, completely stochastic source with Gaussian statistics depend on the limited measurement bandwidth and the measurement noise level. The results of our experimental investigation are in good agreement with the model predictions. In particular, we reveal partial mode correlations in the radiation of a quasi-CW Raman fibre laser.
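
A minimal sketch of the measurement model, assuming an ideal low-pass electrical filter acting on the intensity of a field with Gaussian statistics; the sample count and bandwidth fractions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 16
# Complex field with Gaussian statistics: the intensity PDF is exponential
# and the intensity contrast std(I)/mean(I) equals 1 at full bandwidth.
field = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
I = np.abs(field) ** 2

def lowpass(sig, frac):
    """Ideal electrical filter keeping the lowest `frac` of the spectrum."""
    S = np.fft.rfft(sig)
    S[int(frac * S.size):] = 0.0
    return np.fft.irfft(S, n=sig.size)

for frac in (1.0, 0.1, 0.01):
    If = lowpass(I, frac)
    print(f"bandwidth fraction {frac}: contrast {If.std() / If.mean():.3f}")
```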

Relevance:

80.00%

Publisher:

Abstract:

We report an efficient power-tapping device working in the near-infrared (800 nm) wavelength region, based on a UV-inscribed 45° tilted fiber grating (45°-TFG) structure. Five 45°-TFGs were UV-inscribed in hydrogenated PS750 fiber using a custom-designed phase mask, with grating lengths of 3 mm, 5 mm, 9 mm, 12 mm and 15 mm, showing polarization dependent losses (PDLs) of 1 dB, 3 dB, 7 dB, 10 dB and 13 dB, respectively. The side-tapping efficiency clearly depends on the grating strength: the power tapping efficiency increases with the grating strength and decreases along the grating length. The side-tapped power profile has also been examined in the azimuthal direction, showing a near-Gaussian distribution. These experimental results demonstrate that 45°-TFGs may be used as in-fiber power-tapping devices for applications requiring in-line signal monitoring.