907 results for Bivariate Normal Distribution
Abstract:
PURPOSE: To evaluate diffusion-weighted magnetic resonance (MR) imaging of the human placenta in fetuses with and without intrauterine growth restriction (IUGR) who were suspected of having placental insufficiency. MATERIALS AND METHODS: The study was approved by the local ethics committee, and written informed consent was obtained. The authors retrospectively evaluated 1.5-T fetal MR images from 102 singleton pregnancies (mean gestation ± standard deviation, 29 weeks ± 5; range, 21-41 weeks). Morphologic and diffusion-weighted MR imaging were performed. A region-of-interest analysis of the apparent diffusion coefficient (ADC) of the placenta was independently performed by two observers who were blinded to clinical data and outcome. Placental insufficiency was diagnosed if flattening of the growth curve was detected at obstetric ultrasonography (US), if the birth weight was at or below the 10th percentile, or if the fetal weight estimated with US was below the 10th percentile. Abnormal findings at Doppler US of the umbilical artery and histopathologic examination of placental specimens were recorded. The ADCs in fetuses with placental insufficiency were compared with those in fetuses of the same gestational age without placental insufficiency and tested for normal distribution. The t tests and Pearson correlation coefficients were used to compare these results at the 5% level of significance. RESULTS: Thirty-three of the 102 pregnancies were ultimately categorized as having an insufficient placenta. MR imaging depicted morphologic changes (eg, infarction or bleeding) in 27 fetuses. Placental dysfunction was suspected in 33 fetuses at diffusion-weighted imaging (mean ADC, 146.4 sec/mm² ± 10.63 for fetuses with placental insufficiency vs 177.1 sec/mm² ± 18.90 for fetuses without placental insufficiency; P < .01, with one false-positive case). The use of diffusion-weighted imaging in addition to US increased sensitivity for the detection of placental insufficiency from 73% to 100%, increased accuracy from 91% to 99%, and preserved specificity at 99%. CONCLUSION: Placental dysfunction associated with growth restriction is associated with restricted diffusion and a reduced ADC. A decreased ADC, used as an early marker of placental damage, might be indicative of pregnancy complications such as IUGR.
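For readers who want to reproduce the group comparison described in this abstract, the sketch below runs a two-sample (Welch) t test on two sets of placental ADC values; the arrays are simulated placeholders that only borrow the reported group means and standard deviations, not the study data, and the Welch variant is an assumption rather than the authors' stated choice.

```python
# Minimal sketch (not the authors' code): comparing placental ADC values between
# two groups with a two-sample t test. The ADC arrays are hypothetical values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
adc_insufficient = rng.normal(146.4, 10.63, size=33)   # hypothetical group values
adc_normal = rng.normal(177.1, 18.90, size=69)          # hypothetical group values

# Test each group for normality before applying the t test.
print(stats.shapiro(adc_insufficient).pvalue, stats.shapiro(adc_normal).pvalue)

# Welch's t test (does not assume equal variances), 5% significance level.
t_stat, p_value = stats.ttest_ind(adc_insufficient, adc_normal, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```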
Abstract:
Locally affine (polyaffine) image registration methods capture intersubject non-linear deformations with a low number of parameters, while providing an intuitive interpretation for clinicians. Considering the mandible bone, anatomical shape differences can be found at different scales, e.g. left or right side, teeth, etc. Classically, sequential coarse-to-fine registrations are used to handle multiscale deformations; instead, we propose a simultaneous optimization of all scales. To avoid local minima, we incorporate a prior on the polyaffine transformations. This kind of groupwise registration approach is natural in a polyaffine context if we assume one configuration of regions that describes an entire group of images, with varying transformations for each region. In this paper, we reformulate polyaffine deformations in a generative statistical model, which enables us to incorporate deformation statistics as a prior in a Bayesian setting. We find optimal transformations by maximizing the a posteriori probability. We assume that the polyaffine transformations follow a normal distribution with a given mean and concentration matrix. Parameters of the prior are estimated from an initial coarse-to-fine registration. Knowing the region structure, we develop a blockwise pseudoinverse to obtain the concentration matrix. To our knowledge, we are the first to introduce simultaneous multiscale optimization through groupwise polyaffine registration. We show results on 42 mandible CT images.
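The MAP formulation can be illustrated numerically. Assuming, purely for illustration, a data term that is linear-Gaussian in the stacked transformation parameters, the maximum a posteriori estimate under a normal prior with mean μ and concentration matrix Λ reduces to a regularized least-squares solve; all matrices below are hypothetical stand-ins, not the paper's registration model.

```python
# Minimal sketch of MAP estimation under a Gaussian prior, assuming (for
# illustration only) a linear-Gaussian data term in the stacked parameters x:
# residual = A @ x - b with noise variance sigma2. A, b, mu (prior mean) and
# Lam (prior concentration matrix) are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_params = 12                        # e.g. one affine transform (3x4) flattened
A = rng.normal(size=(200, n_params))
b = rng.normal(size=200)
sigma2 = 0.5
mu = np.zeros(n_params)              # prior mean of the transformations
Lam = np.eye(n_params) / 0.1         # prior concentration (inverse covariance)

# MAP of a Gaussian likelihood times a Gaussian prior is a regularized
# least-squares solve: (A^T A / sigma2 + Lam) x = A^T b / sigma2 + Lam mu.
lhs = A.T @ A / sigma2 + Lam
rhs = A.T @ b / sigma2 + Lam @ mu
x_map = np.linalg.solve(lhs, rhs)
print(np.round(x_map, 3))
```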
Abstract:
BACKGROUND: Social cognition is an important aspect of social behavior in humans. Social cognitive deficits are associated with neurodevelopmental and neuropsychiatric disorders. In this study we examined the neural substrates of social cognition and face processing in a group of healthy young adults. METHODS: Fifty-seven undergraduates completed a battery of social cognition tasks and were assessed with electroencephalography (EEG) during a face-perception task. A subset (N=22) was administered a face-perception task during functional magnetic resonance imaging (fMRI). RESULTS: Variance in the N170 EEG component was predicted by social attribution performance and by a quantitative measure of empathy. Neurally, face processing was more bilateral in females than in males. Variance in fMRI voxel count in the face-sensitive fusiform gyrus was predicted by quantitative measures of social behavior, including the Social Responsiveness Scale (SRS) and the Empathizing Quotient. CONCLUSIONS: When measured as a quantitative trait, social behaviors in typical and pathological populations share common neural pathways. The results highlight the importance of viewing neurodevelopmental and neuropsychiatric disorders as spectrum phenomena that may be informed by studies of the normal distribution of relevant traits in the general population.
Abstract:
This paper proposes a numerically simple routine for locally adaptive smoothing. The locally heterogeneous regression function is modelled as a penalized spline whose smoothly varying smoothing parameter is itself modelled as another penalized spline. This is formulated as a hierarchical mixed model, with spline coefficients following a normal distribution whose variances in turn have a smooth structure. The modelling exercise is in line with Baladandayuthapani, Mallick & Carroll (2005) and Crainiceanu, Ruppert & Carroll (2006), but in contrast to these papers, Laplace's method is used for estimation based on the marginal likelihood. This is numerically simple and fast and provides satisfactory results. We also extend the idea to spatial smoothing and to smoothing in the presence of non-normal responses.
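A minimal sketch of the penalized-spline building block referred to above is given below, using a single global smoothing parameter with a second-order difference penalty; the paper's contribution, letting the smoothing parameter itself vary smoothly, and the Laplace-based estimation are not reproduced, and the data, knot layout, and penalty weight are arbitrary illustration choices.

```python
# Penalized spline with a fixed global smoothing parameter (a simplified sketch,
# not the locally adaptive method of the paper). Data are simulated.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=x.size)

# Clamped cubic B-spline basis on equally spaced knots in [0, 1].
degree, n_interior = 3, 20
knots = np.concatenate(([0.0] * degree, np.linspace(0, 1, n_interior), [1.0] * degree))
n_basis = len(knots) - degree - 1
basis = np.column_stack([
    BSpline(knots, np.eye(n_basis)[i], degree)(x) for i in range(n_basis)
])

# Second-order difference penalty on the spline coefficients, ridge-type solve.
D = np.diff(np.eye(n_basis), n=2, axis=0)
lam = 1.0                                     # single global smoothing parameter
coef = np.linalg.solve(basis.T @ basis + lam * D.T @ D, basis.T @ y)
fitted = basis @ coef
print(np.round(fitted[:5], 3))
```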
Abstract:
Markov chain Monte Carlo is a method of producing a correlated sample in order to estimate features of a complicated target distribution via simple ergodic averages. A fundamental question in MCMC applications is when should the sampling stop? That is, when are the ergodic averages good estimates of the desired quantities? We consider a method that stops the MCMC sampling the first time the width of a confidence interval based on the ergodic averages is less than a user-specified value. Hence calculating Monte Carlo standard errors is a critical step in assessing the output of the simulation. In particular, we consider the regenerative simulation and batch means methods of estimating the variance of the asymptotic normal distribution. We describe sufficient conditions for the strong consistency and asymptotic normality of both methods and investigate their finite sample properties in a variety of examples.
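The batch means estimator and the fixed-width stopping rule described here are easy to sketch. The code below is an illustrative implementation under common default choices (batch size of roughly the square root of the chain length, a 95% interval, and a tolerance of 0.05); the AR(1) series is only a stand-in for an actual MCMC output, and none of the tuning values come from the paper.

```python
# Sketch of the batch-means Monte Carlo standard error and a fixed-width
# stopping rule: stop once the confidence-interval half-width is below a
# user-specified tolerance. The AR(1) chain stands in for MCMC output.
import numpy as np
from scipy import stats

def batch_means_se(x):
    n = len(x)
    b = int(np.floor(np.sqrt(n)))          # batch size ~ sqrt(n)
    a = n // b                             # number of batches
    means = x[:a * b].reshape(a, b).mean(axis=1)
    var_hat = b * np.var(means, ddof=1)    # estimate of the asymptotic variance
    return np.sqrt(var_hat / n)

rng = np.random.default_rng(3)
phi, tol = 0.7, 0.05                       # AR(1) coefficient, target half-width
x = np.empty(0)
while True:
    new = np.empty(5000)
    prev = x[-1] if x.size else 0.0
    for i in range(new.size):              # simulate more of the "chain"
        prev = phi * prev + rng.normal()
        new[i] = prev
    x = np.concatenate([x, new])
    half_width = stats.norm.ppf(0.975) * batch_means_se(x)
    if half_width < tol:
        break
print(f"stopped at n = {x.size}, estimate = {x.mean():.4f} +/- {half_width:.4f}")
```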
Abstract:
To test the role of telomere biology in T-cell prolymphocytic leukemia (T-PLL), a rare aggressive disease characterized by the expansion of a T-cell clone derived from immunocompetent post-thymic T-lymphocytes, we analyzed telomere length and telomerase activity in subsets of peripheral blood leukocytes from 11 newly diagnosed or relapsed patients with sporadic T-PLL. Telomere length values of the leukemic T cells (mean ± s.d.: 1.53 ± 0.65 kb) were all below the 1st percentile of telomere length values observed in T cells from healthy age-matched controls, whereas telomere lengths of normal T and B cells fell between the 1st and 99th percentiles of the normal distribution. Leukemic T cells exhibited high levels of telomerase and were sensitive to the telomerase inhibitor BIBR1532 at doses that showed no effect on normal, unstimulated T cells. Targeting the short telomeres and telomerase activity in T-PLL seems an attractive strategy for the future treatment of this devastating disease.
Abstract:
Four papers, written in collaboration with the author’s graduate school advisor, are presented. In the first paper, uniform and non-uniform Berry-Esseen (BE) bounds on the convergence to normality of a general class of nonlinear statistics are provided; novel applications to specific statistics, including the non-central Student’s, Pearson’s, and the non-central Hotelling’s, are also stated. In the second paper, a BE bound on the rate of convergence of the F-statistic used in testing hypotheses from a general linear model is given. The third paper considers the asymptotic relative efficiency (ARE) between the Pearson, Spearman, and Kendall correlation statistics; conditions sufficient to ensure that the Spearman and Kendall statistics are equally (asymptotically) efficient are provided, and several models are considered which illustrate the use of such conditions. Lastly, the fourth paper proves that, in the bivariate normal model, the ARE between any of these correlation statistics possesses certain monotonicity properties; quadratic lower and upper bounds on the ARE are stated as direct applications of such monotonicity patterns.
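Although the fourth paper derives the asymptotic relative efficiencies analytically, a simple Monte Carlo sketch under the bivariate normal model illustrates the comparison. The code below estimates the same correlation ρ from the Pearson statistic and from Spearman's rho and Kendall's tau via the classical relations ρ = 2 sin(π r_s / 6) and ρ = sin(π τ / 2); the sample size, ρ, and replication count are arbitrary choices for the demonstration, not values from the thesis.

```python
# Monte Carlo comparison (illustrative only) of Pearson, Spearman and Kendall
# based estimates of the correlation of a bivariate normal distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
rho, n, reps = 0.5, 200, 2000
cov = np.array([[1.0, rho], [rho, 1.0]])

est = {"pearson": [], "spearman": [], "kendall": []}
for _ in range(reps):
    xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    x, y = xy[:, 0], xy[:, 1]
    est["pearson"].append(stats.pearsonr(x, y)[0])
    # Rank statistics mapped back to the normal-model correlation scale.
    est["spearman"].append(2 * np.sin(np.pi * stats.spearmanr(x, y)[0] / 6))
    est["kendall"].append(np.sin(np.pi * stats.kendalltau(x, y)[0] / 2))

for name, values in est.items():
    mse = np.mean((np.array(values) - rho) ** 2)
    print(f"{name:9s} mean = {np.mean(values):.4f}  MSE = {mse:.6f}")
```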
Abstract:
Monte Carlo simulation was used to evaluate properties of a simple Bayesian MCMC analysis of the random effects model for single group Cormack-Jolly-Seber capture-recapture data. The MCMC method is applied to the model via a logit link, so parameters p, S are on a logit scale, where logit(S) is assumed to have, and is generated from, a normal distribution with mean μ and variance σ². Marginal prior distributions on logit(p) and μ were independent normal with mean zero and standard deviation 1.75 for logit(p) and 100 for μ; hence minimally informative. The marginal prior distribution on σ² was placed on τ² = 1/σ² as a gamma distribution with α = β = 0.001. The study design has 432 points spread over 5 factors: occasions (t), new releases per occasion (u), p, μ, and σ. At each design point 100 independent trials were completed (hence 43,200 trials in total), each with sample size n = 10,000 from the parameter posterior distribution. At 128 of these design points comparisons are made to previously reported results from a method of moments procedure. We looked at properties of point and interval inference on μ and σ based on the posterior mean, median, and mode and the equal-tailed 95% credibility interval. Bayesian inference did very well for the parameter μ, but under the conditions used here, MCMC inference performance for σ was mixed: poor for sparse data (i.e., only 7 occasions) or σ = 0, but good when there were sufficient data and σ was not small.
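The random-effects structure and priors described above can be written down compactly. The sketch below generates occasion-specific survival probabilities from the logit-normal model and sets up the stated priors; the values of μ, σ, and the number of occasions are arbitrary illustration choices, and no MCMC sampler or capture-recapture likelihood is included.

```python
# Sketch (not the paper's code) of the random-effects structure: survival S with
# logit(S) ~ Normal(mu, sigma^2) and a Gamma(0.001, 0.001) prior on tau^2 = 1/sigma^2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
mu, sigma, t = 0.5, 0.3, 7                       # hypothetical design values

# Generate occasion-specific survival probabilities on the logit scale.
logit_S = rng.normal(mu, sigma, size=t - 1)
S = 1.0 / (1.0 + np.exp(-logit_S))               # inverse-logit back-transform
print("survival probabilities:", np.round(S, 3))

# Minimally informative priors from the study design:
prior_logit_p = stats.norm(loc=0.0, scale=1.75)      # prior on logit(p)
prior_mu = stats.norm(loc=0.0, scale=100.0)           # prior on mu
prior_tau2 = stats.gamma(a=0.001, scale=1.0 / 0.001)  # prior on tau^2 = 1/sigma^2
print("prior mean of tau^2:", prior_tau2.mean())      # alpha = beta gives mean 1
```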
Abstract:
Environmental data sets of pollutant concentrations in air, water, and soil frequently include unquantified sample values reported only as being below the analytical method detection limit. These values, referred to as censored values, should be considered in the estimation of distribution parameters, as each represents some value of pollutant concentration between zero and the detection limit. Most of the currently accepted methods for estimating the population parameters of environmental data sets containing censored values rely upon the assumption of an underlying normal (or transformed normal) distribution. This assumption can result in unacceptable levels of error in parameter estimation due to the unbounded left tail of the normal distribution. With the beta distribution, which is bounded on the same range as a distribution of concentrations, [0 ≤ x ≤ 1], parameter estimation errors resulting from improper distribution bounds are avoided. This work developed a method that uses the beta distribution to estimate population parameters from censored environmental data sets and evaluated its performance in comparison to currently accepted methods that rely upon an underlying normal (or transformed normal) distribution. Data sets were generated assuming typical values encountered in environmental pollutant evaluation for the mean, standard deviation, and number of variates. For each set of model values, data sets were generated assuming that the data were distributed either normally, lognormally, or according to a beta distribution. For varying levels of censoring, two established methods of parameter estimation, regression on normal ordered statistics and regression on lognormal ordered statistics, were used to estimate the known mean and standard deviation of each data set. The method developed for this study, employing a beta distribution assumption, was also used to estimate parameters, and the relative accuracy of all three methods was compared. For data sets of all three distribution types, and for censoring levels up to 50%, the performance of the new method equaled, if not exceeded, the performance of the two established methods. Because of its robustness in parameter estimation regardless of distribution type or censoring level, the method employing the beta distribution should be considered for full development in estimating parameters for censored environmental data sets.
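As background for the two established comparison methods, the sketch below shows regression on normal ordered statistics for a left-censored sample with a single detection limit: detected values are regressed on their normal quantiles, and the fitted intercept and slope estimate the mean and standard deviation. The simulated data, the detection limit, and the Blom plotting-position formula are illustrative assumptions, not taken from this work.

```python
# Simplified regression on normal ordered statistics for left-censored data
# (one detection limit, all detects above it). Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
true_mean, true_sd, n = 5.0, 2.0, 100
sample = rng.normal(true_mean, true_sd, size=n)
detection_limit = 4.0                            # roughly 30% censoring here
detects = np.sort(sample[sample >= detection_limit])
n_censored = n - detects.size

# Blom plotting positions for the detected (largest) ranks among all n values.
ranks = np.arange(n_censored + 1, n + 1)
pp = (ranks - 0.375) / (n + 0.25)
z = stats.norm.ppf(pp)

# Intercept estimates the mean, slope estimates the standard deviation.
slope, intercept, *_ = stats.linregress(z, detects)
print(f"estimated mean = {intercept:.2f}, estimated sd = {slope:.2f} "
      f"({n_censored} of {n} values censored)")
```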
Abstract:
The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears to be more sensitive to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).
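For orientation, the sketch below implements a basic amplitude-based nonlinear prediction error of the kind used here as the comparison measure: the signal is delay-embedded, each point is predicted from the future of its nearest embedded neighbour (outside a Theiler window), and the root-mean-square error is reported. The embedding parameters and the noisy test signal are arbitrary choices, and the rank-based predictability score itself is not reproduced.

```python
# Illustrative amplitude-based nonlinear prediction error via nearest neighbours
# in a delay embedding (a sketch under simplifying assumptions).
import numpy as np

def nonlinear_prediction_error(x, dim=3, delay=2, horizon=1, exclude=10):
    n = len(x) - (dim - 1) * delay - horizon
    emb = np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])
    targets = x[(dim - 1) * delay + horizon: (dim - 1) * delay + horizon + n]
    errors = []
    for i in range(n):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[max(0, i - exclude): i + exclude + 1] = np.inf   # Theiler window
        j = np.argmin(d)                                   # nearest neighbour
        errors.append(targets[j] - targets[i])             # predicted - observed
    return np.sqrt(np.mean(np.square(errors)))

rng = np.random.default_rng(7)
t = np.arange(2000) * 0.05
signal = np.sin(t) + 0.2 * rng.normal(size=t.size)         # stand-in noisy signal
print(f"prediction error: {nonlinear_prediction_error(signal):.3f}")
```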
Abstract:
The role of Soil Organic Carbon (SOC) in mitigating climate change and in indicating soil quality and ecosystem function has created research interest in the nature of SOC at the landscape level. The objective of this study was to examine the variation and distribution of SOC under long-term land management at the watershed and plot levels. The study was based on a meta-analysis of three case studies and 128 surface soil samples from Ethiopia. Three sites (Gununo, Anjeni and Maybar) were compared after considering two Land Management Categories (LMC) and three land-use types (LUT) in a quasi-experimental design. Shapiro-Wilk tests showed a non-normal distribution of the data (p = 0.002, α = 0.05). The SOC median values showed the effect of long-term land management, with 2.29 and 2.38 g kg⁻¹ for the less and better-managed watersheds, respectively. SOC values were 1.7, 2.8 and 2.6 g kg⁻¹ for Crop (CLU), Grass (GLU) and Forest Land Use (FLU), respectively. The rank order for SOC variability was FLU > GLU > CLU. Mann-Whitney U and Kruskal-Wallis tests showed a significant difference in the medians and distributions of SOC among the LUT and between soil profiles (p < 0.05, 95% confidence interval, α = 0.05), while the difference was not significant (p > 0.05) for LMC. The mean and sum of ranks from the Mann-Whitney U and Kruskal-Wallis tests also showed this difference at the watershed and plot levels. Using SOC as a predictor, cross-validated correct classification with discriminant analysis was 46% and 49% for LUT and LMC, respectively. The study showed how to categorize landscapes using SOC with respect to land management for decision-makers.
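The testing sequence described in this abstract (normality check, then rank-based tests) can be illustrated in a few lines; the SOC values below are simulated stand-ins that merely echo the reported group levels, and the split into management categories is hypothetical.

```python
# Sketch of the test sequence: Shapiro-Wilk normality check, Kruskal-Wallis
# across land-use types, Mann-Whitney U between management categories.
# All values are simulated, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
soc_crop = rng.lognormal(mean=np.log(1.7), sigma=0.4, size=45)
soc_grass = rng.lognormal(mean=np.log(2.8), sigma=0.4, size=42)
soc_forest = rng.lognormal(mean=np.log(2.6), sigma=0.4, size=41)

# Normality check on the pooled sample; a small p-value motivates rank tests.
pooled = np.concatenate([soc_crop, soc_grass, soc_forest])
print("Shapiro-Wilk p =", stats.shapiro(pooled).pvalue)

# Kruskal-Wallis across the three land-use types.
print("Kruskal-Wallis p =", stats.kruskal(soc_crop, soc_grass, soc_forest).pvalue)

# Mann-Whitney U between two (hypothetical) management categories.
less_managed = np.concatenate([soc_crop, soc_grass[:20]])
better_managed = np.concatenate([soc_grass[20:], soc_forest])
print("Mann-Whitney p =", stats.mannwhitneyu(less_managed, better_managed).pvalue)
```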
Abstract:
Purpose. This retrospective cohort study evaluated factors for peri-implant bone level changes (ΔIBL) associated with an implant type with an inner-cone implant-abutment connection, rough neck surface, and platform switching (AT). Materials and Methods. All AT placed at the Department of Prosthodontics of the University of Bern between January 2004 and December 2005 were included in this study. All implants were examined on single radiographs taken with the parallel technique at surgery (T0) and at least 6 months after surgery (T1). Possible influencing factors were analysed first using the t-test (normal distribution) or the nonparametric Wilcoxon test (non-normal distribution), and then a mixed-model variance analysis was performed. Results. 43 patients were treated with 109 implants. Five implants in 2 patients failed (survival rate: 95.4%). Mean ΔIBL was −0.65 ± 0.82 mm in group 1 (T1: 6-12 months after surgery) and −0.69 ± 0.82 mm in group 2 (T1: >12 months after surgery).
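The mixed-model step mentioned above can be sketched as a random-intercept model with implants nested within patients; the data frame below is entirely fabricated for illustration, and the fixed effect "group" only mimics the follow-up grouping described in the abstract.

```python
# Sketch of a mixed model for bone level change with a random intercept per
# patient (several implants per patient). All values are fabricated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n_patients, implants_per_patient = 40, 3
patient = np.repeat(np.arange(n_patients), implants_per_patient)
group = rng.integers(1, 3, size=patient.size)                  # follow-up group 1 or 2
delta_ibl = (-0.65 - 0.04 * (group - 1)
             + rng.normal(0, 0.2, size=n_patients)[patient]    # patient-level effect
             + rng.normal(0, 0.7, size=patient.size))          # implant-level noise

df = pd.DataFrame({"patient": patient, "group": group, "delta_ibl": delta_ibl})
model = smf.mixedlm("delta_ibl ~ C(group)", df, groups=df["patient"])
print(model.fit().summary())
```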
Abstract:
BACKGROUND: Patients with downbeat nystagmus syndrome suffer from oscillopsia, which leads to an unstable visual perception and therefore impaired visual acuity. The aim of this study was to use real-time computer-based visual feedback to compensate for the destabilizing slow phase eye movements. METHODS: The patients were seated in front of a computer screen with the head fixed on a chin rest. The eye movements were recorded by an eye tracking system (EyeSeeCam®). We tested visual acuity with a fixed Landolt C (static condition) and during a real-time feedback-driven condition (dynamic), in gaze straight ahead and in 20° sideward gaze. In the dynamic condition, the Landolt C moved according to the slow phase eye velocity of the downbeat nystagmus. The Shapiro-Wilk test was used to test for normal distribution and one-way ANOVA for comparison. RESULTS: Ten patients with downbeat nystagmus were included in the study. Median age was 76 years and the median duration of symptoms was 6.3 years (SD ± 3.1 years). The mean slow phase velocity was moderate during gaze straight ahead (1.44°/s, SD ± 1.18°/s) and increased significantly in sideward gaze (mean left 3.36°/s; right 3.58°/s). In gaze straight ahead, we found no difference between the static and feedback-driven conditions. In sideward gaze, visual acuity improved in five out of ten subjects during the feedback-driven condition (p = 0.043). CONCLUSIONS: This study provides proof of concept that non-invasive real-time computer-based visual feedback compensates for the slow phase velocity in downbeat nystagmus. Therefore, real-time visual feedback may be a promising aid for patients suffering from oscillopsia and impaired text reading on screen. Recent technological advances in the area of virtual reality displays might soon render this approach feasible in fully mobile settings.
Abstract:
The rates of childhood and adolescent obesity in the United States have been increasing steadily. American youth continue to eat more (increased energy intake) and reduce physical activity (decreased energy expenditure), resulting in increased body weight and body fatness. One way to help reduce body weight in children is to increase physical activity. The purpose of this study was to determine whether an age-appropriate before-school physical activity intervention would be successful in increasing energy expenditure, intensity of activity, and behavioral approaches in overweight girls. The subjects were recruited from Parker Memorial School in Tolland, Connecticut, and two testing periods occurred over an eight-week period. Video recordings of each physical activity session were analyzed to determine energy expenditure, exercise intensity, and behaviors during exercise. Data were evaluated for normal distribution, and paired t-tests were used to determine statistical significance. This study showed that the age-appropriate before-school physical activity intervention was able to increase energy expenditure and exercise intensity and had a positive effect on behavioral approaches in overweight girls.
Abstract:
In this paper, we extend the debate concerning Credit Default Swap valuation to include time-varying correlations and covariances. Traditional multivariate techniques treat the correlations between covariates as constant over time; however, this view is not supported by the data. Secondly, since financial data do not follow a normal distribution because of their heavy tails, modeling the data using a Generalized Linear Model (GLM) incorporating copulas emerges as a more robust technique than traditional approaches. This paper also includes an empirical analysis of the regime-switching dynamics of credit risk in the presence of liquidity, following the general practice of assuming that credit and market risk follow a Markov process. The study was based on Credit Default Swap data obtained from Bloomberg spanning the period January 1, 2004 to August 8, 2006. The empirical examination of the regime-switching tendencies provided quantitative support to the anecdotal view that liquidity decreases as credit quality deteriorates. The analysis also examined the joint probability distribution of the credit risk determinants across credit quality through the use of a copula function, which disaggregates the behavior embedded in the marginal gamma distributions so as to isolate the level of dependence captured in the copula function. The results suggest that the time-varying joint correlation matrix performed far better than the constant correlation matrix, the centerpiece of linear regression models.
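The copula idea referred to here, a dependence structure separated from gamma marginals, can be illustrated with a Gaussian copula; the correlation and gamma parameters below are arbitrary, and the variable names are only hypothetical stand-ins for credit-risk determinants, not the paper's data or model.

```python
# Illustrative Gaussian copula with gamma marginals (simulated inputs only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
rho, n = 0.6, 5000
cov = np.array([[1.0, rho], [rho, 1.0]])

# 1. Draw from a bivariate normal and map to uniforms (the Gaussian copula).
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
u = stats.norm.cdf(z)

# 2. Push the uniforms through gamma inverse CDFs to get the margins.
spread = stats.gamma.ppf(u[:, 0], a=2.0, scale=50.0)      # e.g. CDS spread (bps)
liquidity = stats.gamma.ppf(u[:, 1], a=1.5, scale=0.02)   # e.g. bid-ask measure

# The rank correlation is inherited from the copula, not from the margins.
print("Spearman rho:", stats.spearmanr(spread, liquidity)[0])
```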