920 results for Estimation Of Distribution Algorithms
Abstract:
Recent attempts to detect mutations involving single base changes or small deletions that are specific to genetic diseases provide an opportunity to develop a two-tier mutation-screening program through which the incidence of rare genetic disorders and the frequency of gene carriers may be precisely estimated. A two-tier survey consists of mutation screening in a sample of patients with specific genetic disorders and in a second sample of newborns from the same population, in which the mutation frequency is evaluated. We provide the statistical basis for evaluating the incidence of affected individuals and gene carriers in such two-tier mutation-screening surveys, from which the precision of the estimates is derived. Sample-size requirements of such two-tier mutation-screening surveys are evaluated. Considering the examples of cystic fibrosis (CF) and medium-chain acyl-CoA dehydrogenase deficiency (MCAD), the two most frequent autosomal recessive diseases in Caucasian populations, and the two most frequent mutations (delta F508 and G985) that occur on these disease allele-bearing chromosomes, we show that, with 50-100 patients and a 20-fold larger sample of newborns screened for these mutations, the incidence of such diseases and their gene carriers in a population may be quite reliably estimated. The theory developed here is also applicable to rare autosomal dominant diseases for which disease-specific mutations are found.
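The abstract does not reproduce the estimators themselves; the following is a minimal sketch of the two-tier logic under a Hardy-Weinberg assumption, with hypothetical counts. Tier 1 gives the share of disease alleles carrying the screened mutation, tier 2 gives that mutation's frequency in the population, and the two combine into the disease allele frequency:

```python
# Minimal sketch of the two-tier logic (hypothetical counts; the paper's
# exact estimators and variance formulas are not reproduced here).

def two_tier_estimates(n_patients, k_patient_chrom, n_newborns, k_newborn_chrom):
    """Estimate disease allele frequency, incidence, and carrier frequency.

    n_patients       -- patients screened (2*n_patients chromosomes)
    k_patient_chrom  -- patient chromosomes carrying the screened mutation
    n_newborns       -- newborns screened (2*n_newborns chromosomes)
    k_newborn_chrom  -- newborn chromosomes carrying the screened mutation
    """
    p_m = k_patient_chrom / (2 * n_patients)   # mutation's share of disease alleles
    f = k_newborn_chrom / (2 * n_newborns)     # mutation frequency in the population
    q = f / p_m                                # disease allele frequency
    incidence = q ** 2                         # affected, under Hardy-Weinberg
    carriers = 2 * q * (1 - q)                 # heterozygous carriers
    return q, incidence, carriers

# Example with delta F508-like numbers: ~70% of CF chromosomes carry it.
q, inc, car = two_tier_estimates(100, 140, 2000, 56)
print(f"q = {q:.4f}, incidence = {inc:.6f}, carriers = {car:.4f}")
# q = 0.02, incidence = 1/2500, carriers ~ 0.039
```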
Abstract:
Haldane (1935) developed a method for estimating the male-to-female ratio of mutation rate ($\alpha$) by using sex-linked recessive genetic disease, but in six different studies using hemophilia A data the estimates of $\alpha$ varied from 1.2 to 29.3. Direct genomic sequencing is a better approach, but it is laborious and not readily applicable to non-human organisms. To study the sex ratios of mutation rate in various mammals, I used an indirect method proposed by Miyata et al. (1987). This method takes advantage of the fact that different chromosomes are transmitted differently between males and females, and uses the ratios of mutation rates in sequences on different chromosomes to estimate the male-to-female ratio of mutation rate. I sequenced the last intron of the ZFX and ZFY genes in 6 species of primates and 2 species of rodents; I also sequenced partial genomic sequences of the Ube1x and Ube1y genes of mice and rats. The purposes of my study, in addition to the estimation of $\alpha$ in different mammalian species, are to test the hypothesis that most mutations are replication dependent and to examine the generation-time effect on $\alpha$. The $\alpha$ value estimated from the ZFX and ZFY introns of the six primate species is $\sim$6. This estimate is the same as an earlier estimate using only 4 species of primates, but the 95% confidence interval has been reduced from (2, 84) to (2, 33). The estimate of $\alpha$ in the rodents obtained from the Zfx and Zfy introns is $\sim$1.9, and that derived from the Ube1x and Ube1y introns is $\sim$2. Both estimates have a 95% confidence interval from 1 to 3. These two estimates are very close to each other, but are only one-third of that of the primates, suggesting a generation-time effect on $\alpha$. The $\alpha$ values of 6 in primates and 2 in rodents are close to the estimates of the male-to-female ratio of the number of germ-cell divisions per generation in humans and mice, which are 6 and 2, respectively, assuming that the generation time is 20 years in humans and 5 months in mice. These findings suggest that errors during germ-cell DNA replication are the primary source of mutation and that $\alpha$ decreases with decreasing generation time.
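The Miyata et al. estimator follows from the fraction of each generation that a chromosome spends in the male germ line: with a male mutation rate $\alpha\mu$ and a female rate $\mu$, the Y chromosome evolves at rate $\alpha\mu$ and the X at $(\alpha+2)\mu/3$, so an observed Y/X substitution-rate ratio $R = 3\alpha/(\alpha+2)$ inverts to $\alpha = 2R/(3-R)$. A minimal sketch:

```python
def alpha_from_yx_ratio(R):
    """Male-to-female mutation rate ratio alpha from the observed Y/X
    substitution-rate ratio R, via Miyata et al. (1987):
    R = 3*alpha/(alpha + 2), hence alpha = 2R/(3 - R)."""
    return 2.0 * R / (3.0 - R)

# A Y/X rate ratio of 2.25 corresponds to alpha = 6 (primate-like),
# and a ratio of 1.5 corresponds to alpha = 2 (rodent-like).
print(alpha_from_yx_ratio(2.25))  # 6.0
print(alpha_from_yx_ratio(1.5))   # 2.0
```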
Abstract:
Environmental data sets of pollutant concentrations in air, water, and soil frequently include unquantified sample values reported only as being below the analytical method detection limit. These values, referred to as censored values, should be considered in the estimation of distribution parameters, as each represents some value of pollutant concentration between zero and the detection limit. Most of the currently accepted methods for estimating the population parameters of environmental data sets containing censored values rely upon the assumption of an underlying normal (or transformed normal) distribution. This assumption can result in unacceptable levels of error in parameter estimation due to the unbounded left tail of the normal distribution. With the beta distribution, which is bounded on the same interval as a scaled distribution of concentrations, $[0 \le x \le 1]$, parameter estimation errors resulting from improper distribution bounds are avoided. This work developed a method that uses the beta distribution to estimate population parameters from censored environmental data sets and evaluated its performance in comparison to currently accepted methods that rely upon an underlying normal (or transformed normal) distribution. Data sets were generated assuming typical values encountered in environmental pollutant evaluation for the mean, standard deviation, and number of variates. For each set of model values, data sets were generated assuming that the data were distributed normally, lognormally, or according to a beta distribution. For varying levels of censoring, two established methods of parameter estimation, regression on normal order statistics and regression on lognormal order statistics, were used to estimate the known mean and standard deviation of each data set. The method developed for this study, employing a beta distribution assumption, was also used to estimate parameters, and the relative accuracy of all three methods was compared. For data sets of all three distribution types, and for censoring levels up to 50%, the performance of the new method equaled, if not exceeded, the performance of the two established methods. Because of its robustness in parameter estimation regardless of distribution type or censoring level, the method employing the beta distribution should be considered for full development in estimating parameters for censored environmental data sets.
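The abstract does not spell out the fitting procedure; a minimal sketch of one standard way to fit a beta distribution to left-censored data is a censored maximum likelihood, where quantified values contribute the density and censored values contribute the CDF at the detection limit (a generic illustration, not the study's exact method):

```python
import numpy as np
from scipy import stats, optimize

def fit_beta_censored(observed, n_censored, detection_limit):
    """Maximum-likelihood beta fit for left-censored data on [0, 1].

    observed        -- quantified values (all >= detection_limit)
    n_censored      -- count of values reported only as < detection_limit
    detection_limit -- the analytical detection limit, in (0, 1)
    """
    def neg_log_lik(params):
        a, b = np.exp(params)  # keep shape parameters positive
        ll = stats.beta.logpdf(observed, a, b).sum()
        ll += n_censored * stats.beta.logcdf(detection_limit, a, b)
        return -ll

    res = optimize.minimize(neg_log_lik, x0=np.log([2.0, 5.0]),
                            method="Nelder-Mead")
    a, b = np.exp(res.x)
    mean = a / (a + b)
    sd = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return a, b, mean, sd

# Synthetic check: 40% censoring of beta(2, 8)-distributed concentrations.
rng = np.random.default_rng(0)
x = rng.beta(2, 8, size=200)
dl = np.quantile(x, 0.4)
print(fit_beta_censored(x[x >= dl], int((x < dl).sum()), dl))
```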
Abstract:
Several approaches for the non-invasive, MRI-based measurement of the aortic pressure waveform over the heart cycle have been proposed in recent years. These methods are typically based on time-resolved, two-dimensional phase-contrast sequences with uni-directionally encoded velocities (2D PC-MRI). In contrast, three-dimensional acquisitions with three-directional velocity encoding (4D PC-MRI) have been shown to be a suitable data source for detailed investigations of blood flow and spatial blood pressure maps. To avoid additional MR acquisitions, it would be advantageous if the aortic pressure waveform could also be computed from this form of MRI. We therefore propose an approach for computing the aortic pressure waveform entirely from 4D PC-MRI data. After the application of a segmentation algorithm, the approach computes the aortic pressure waveform automatically, without any manual steps. We show that our method agrees well with catheter measurements in an experimental phantom setup and produces physiologically realistic results in three healthy volunteers.
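The abstract gives no numerical scheme; one common simplification for obtaining relative pressure differences from PC-MRI velocities is the inviscid unsteady Bernoulli equation integrated along the vessel centerline. The sketch below illustrates that idea only, under assumed blood density and synthetic data, and is not the authors' full method:

```python
import numpy as np

RHO = 1060.0  # assumed blood density, kg/m^3

def pressure_drop_unsteady_bernoulli(v, ds, dt):
    """Relative pressure drop along a centerline from velocity samples.

    v  -- velocities, shape (n_times, n_points), in m/s
    ds -- spacing between centerline points, in m
    dt -- time between frames, in s

    Unsteady Bernoulli (inviscid):
      dp = -rho * int(dv/dt ds) - rho/2 * (v_end^2 - v_start^2)
    Returns the pressure drop in Pa, one value per frame.
    """
    dvdt = np.gradient(v, dt, axis=0)  # local acceleration per point
    # Trapezoidal integration of the inertial term along the centerline.
    inertial = -RHO * 0.5 * ds * (dvdt[:, :-1] + dvdt[:, 1:]).sum(axis=1)
    convective = -0.5 * RHO * (v[:, -1] ** 2 - v[:, 0] ** 2)
    return inertial + convective

# Synthetic check: pulsatile flow along 10 cm of vessel, 50 points, 40 frames.
t = np.linspace(0.0, 0.8, 40)
profile = 0.8 * np.maximum(np.sin(2 * np.pi * t / 0.8), 0.0)
v = profile[:, None] * np.linspace(1.0, 0.8, 50)[None, :]
print(pressure_drop_unsteady_bernoulli(v, ds=0.002, dt=t[1] - t[0])[:5])
```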
Abstract:
Introduction The aim of this study was to determine which single measurement on post-mortem cardiac MR reflects actual heart weight as measured at autopsy, assess the intra- and inter-observer reliability of MR measurements, derive a formula to predict heart weight from MR measurements, and test the accuracy of the formula to prospectively predict heart weight. Materials and methods 53 human cadavers underwent post-mortem cardiac MR and forensic autopsy. In Phase 1, left ventricular area and wall thickness were measured on short axis and four chamber view images of 29 cases. All measurements were correlated to heart weight at autopsy using linear regression analysis. In Phase 2, single left ventricular area measurements on four chamber view images (LVA_4C) from 24 cases were used to predict heart weight at autopsy based on equations derived during Phase 1. The intra-class correlation coefficient (ICC) was used to determine inter- and intra-reader agreement. Results Heart weight strongly correlates with LVA_4C (r=0.78; p<0.001). Intra-reader and inter-reader reliability was excellent for LVA_4C (ICC=0.81–0.91, p<0.001 and ICC=0.90, p<0.001, respectively). A simplified formula for heart weight (HW [g] ≈ LVA_4C [mm²] × 0.11) was derived based on linear regression analysis. Conclusions This study shows that single circumferential area measurements of the left ventricle in the four chamber view on post-mortem cardiac MR reflect actual heart weight as measured at autopsy. These measurements yield an excellent intra- and inter-reader reliability and can be used to predict heart weight prior to autopsy or to give a reasonable estimate of heart weight in cases where autopsy is not performed.
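The simplified formula reported above applies directly; a one-line sketch (the example area value is hypothetical):

```python
def predicted_heart_weight(lva_4c_mm2):
    """Heart weight in grams from the left ventricular area (mm^2)
    measured in the four chamber view, per the simplified formula
    HW [g] ~ LVA_4C [mm^2] * 0.11 reported above."""
    return 0.11 * lva_4c_mm2

print(predicted_heart_weight(3200))  # ~352 g for a hypothetical 3200 mm^2 area
```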
Abstract:
Pulmonary airways are subdivided into conducting and gas-exchanging airways. An acinus is defined as the small tree of gas-exchanging airways that is fed by the most distal purely conducting airway. Until now, a disector of five consecutive sections or airway casts was used to count acini. We developed a faster method to estimate the number of acini in young adult rats. Right middle lung lobes were critical-point dried or paraffin embedded after heavy metal staining and imaged by X-ray micro-CT or synchrotron radiation-based X-ray tomographic microscopy. The entrances of the acini were counted in three-dimensional (3D) stacks of images by scrolling through them and using morphological criteria (airway wall thickness and appearance of alveoli). Segmentation stoppers were placed at the acinar entrances for 3D visualizations of the conducting airways. We observed that acinar airways start at various generations and that one transitional bronchiole may serve more than one acinus. A mean of 5612 (±547) acini per lung and a mean airspace volume of 0.907 (±0.108) μL per acinus were estimated. In 60-day-old rats, neither the number of acini nor the mean acinar volume correlated with body weight or lung volume.
Abstract:
Instruments for on-farm determination of colostrum quality, such as refractometers and densimeters, are increasingly used on dairy farms. The colour of colostrum is also supposed to reflect its quality: a paler, mature milk-like colour is associated with a lower colostrum value in terms of its general composition compared with a more yellowish and darker colour. The objective of this study was to investigate the relationships between colour measurement of colostrum using the CIELAB colour space (CIE L*=from white to black, a*=from red to green, b*=from yellow to blue, chroma value G=visually perceived colourfulness) and its composition. Dairy cow colostrum samples (n=117) obtained at 4·7±1·5 h after parturition were analysed for immunoglobulin G (IgG) by ELISA and for fat, protein and lactose by infrared spectroscopy. For colour measurements, a calibrated spectrophotometer was used. At a cut-off value of 50 mg IgG/ml, colour measurement had a sensitivity of 50·0%, a specificity of 49·5%, and a negative predictive value of 87·9%. Colostral IgG concentration was not correlated with the chroma value G, but was correlated with the relative lightness L*. While milk fat content showed a relationship to the parameters L*, a*, b* and G from the colour measurement, milk protein content was not correlated with a*, but was with L*, b*, and G. Lactose concentration in colostrum showed a relationship only with b* and G. In conclusion, parameters of the colour measurement showed clear relationships to colostral IgG, fat, protein and lactose concentrations in dairy cows. Implementation of colour measuring devices in automatic milking systems and milking parlours might be a potential instrument to assess colostrum quality as well as to detect abnormal milk.
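The reported diagnostic metrics follow from a 2x2 table at the 50 mg IgG/ml cut-off. A minimal sketch of the computation; the per-cell counts below are not given in the abstract and are reconstructed only to be consistent with n=117 and the three reported values:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and negative predictive value from a
    2x2 confusion table (positive = poor-quality colostrum,
    i.e. IgG < 50 mg/ml at the stated cut-off)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, npv

# Reconstructed counts (hypothetical; chosen so the totals give n=117
# and reproduce the reported 50.0% / 49.5% / 87.9%).
print(diagnostic_metrics(tp=7, fp=52, tn=51, fn=7))
```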
Abstract:
During November 2010–February 2011, we used camera traps to estimate the population density of Eurasian lynx Lynx lynx in Ciglikara Nature Reserve, Turkey, an isolated population in southwest Asia. Lynx density was calculated through spatial capture-recapture models. In a sampling effort of 1093 camera trap days, we identified 15 independent individuals and estimated a density of 4.20 independent lynx per 100 km², a density higher than previously reported for this species. Camera trap results also indicated that the lynx is likely to be preying on brown hare Lepus europaeus, which accounted for 63% of the non-target species pictured. As lagomorph populations tend to fluctuate, the high lynx density recorded in Ciglikara may be temporary and may decline with prey fluctuations. We therefore recommend surveying other protected areas in southwestern Turkey where lynx is known or assumed to exist, and continuously monitoring the lynx populations with reliable methods in order to understand population structure and dynamics, and to define sensible measures and management plans to conserve this important species.
Abstract:
We present an application- and sample-independent method for the automatic discrimination of noise and signal in optical coherence tomography (OCT) B-scans. The proposed algorithm models the observed noise probabilistically and allows for a dynamic determination of image noise parameters and the choice of appropriate image rendering parameters. This overcomes observer variability and the need for a priori information about the content of sample images, both of which are challenging to estimate systematically with current systems. As such, our approach has the advantage of automatically determining crucial parameters for evaluating rendered image quality in a systematic and task-independent way. We tested our algorithm on data from four different biological and non-biological samples (index finger, lemon slices, sticky tape, and detector cards) acquired with three different experimental spectral-domain OCT measurement systems, including a swept-source OCT system. The results are compared to parameters determined manually by four experienced OCT users. Overall, our algorithm works reliably regardless of which system and sample are used and in all cases estimates noise parameters within the confidence interval of those found by the observers.
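The abstract does not name the noise model; a common probabilistic choice for OCT background magnitude is a Rayleigh distribution. The sketch below illustrates that generic idea only (estimate the Rayleigh scale from a signal-free patch, then derive a display floor from its quantile function), not the paper's specific algorithm:

```python
import numpy as np

def noise_floor_from_background(background, percentile=99.5):
    """Estimate a display floor from a signal-free region of a B-scan,
    assuming Rayleigh-distributed background magnitude (an assumption;
    the paper's exact model is not reproduced here)."""
    background = np.asarray(background, dtype=float).ravel()
    # Rayleigh MLE: sigma^2 = mean(x^2) / 2
    sigma = np.sqrt(np.mean(background ** 2) / 2.0)
    # Rayleigh quantile: x = sigma * sqrt(-2 ln(1 - q))
    q = percentile / 100.0
    floor = sigma * np.sqrt(-2.0 * np.log(1.0 - q))
    return sigma, floor

rng = np.random.default_rng(1)
bg = rng.rayleigh(scale=4.0, size=10000)  # synthetic noise-only patch
print(noise_floor_from_background(bg))    # scale ~4.0 and its 99.5% floor
```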
Abstract:
In this paper, we propose a new method for fully automatic landmark detection and shape segmentation in X-ray images. To detect landmarks, we estimate the displacements from some randomly sampled image patches to the (unknown) landmark positions, and then we integrate these predictions via a voting scheme. Our key contribution is a new algorithm for estimating these displacements. Unlike other methods, where each image patch independently predicts its displacement, we jointly estimate the displacements from all patches together in a data-driven way, by considering not only the training data but also geometric constraints on the test image. The displacement estimation is formulated as a convex optimization problem that can be solved efficiently. Finally, we use the sparse shape composition model as the a priori information to regularize the landmark positions and thus generate the segmented shape contour. We validate our method on X-ray image datasets of three different anatomical structures: complete femur, proximal femur and pelvis. Experiments show that our method is accurate and robust in landmark detection and, combined with the shape model, gives a better or comparable performance in shape segmentation compared to state-of-the-art methods. Finally, a preliminary study using CT data shows the extensibility of our method to 3D data.
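The joint convex displacement estimation is the paper's contribution and is not reproduced here; the sketch below illustrates only the generic voting step, in which each patch's predicted displacement casts a vote into an accumulator and the landmark is taken at the accumulator's peak:

```python
import numpy as np

def vote_landmark(patch_centers, displacements, image_shape):
    """Accumulate patch votes for a single landmark position.

    patch_centers -- (n, 2) array of (row, col) patch centers
    displacements -- (n, 2) predicted offsets patch -> landmark
                     (assumed given here; in the paper they are
                     estimated jointly via convex optimization)
    Returns the (row, col) cell receiving the most votes.
    """
    votes = np.zeros(image_shape, dtype=int)
    for (r, c), (dr, dc) in zip(patch_centers, displacements):
        rr = int(round(r + dr))
        cc = int(round(c + dc))
        if 0 <= rr < image_shape[0] and 0 <= cc < image_shape[1]:
            votes[rr, cc] += 1
    return np.unravel_index(np.argmax(votes), votes.shape)

# Tiny synthetic check: noisy votes around a true landmark at (40, 60).
rng = np.random.default_rng(2)
centers = rng.integers(0, 100, size=(50, 2))
disp = np.array([40, 60]) - centers + rng.normal(0, 0.4, size=(50, 2))
print(vote_landmark(centers, disp, (100, 100)))  # ~(40, 60)
```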
The impact of common versus separate estimation of orbit parameters on GRACE gravity field solutions
Abstract:
Gravity field parameters are usually determined from observations of the GRACE satellite mission together with arc-specific parameters in a generalized orbit determination process. When the estimation of gravity field parameters is separated from the determination of the satellites' orbits, correlations between orbit parameters and gravity field coefficients are ignored and the latter parameters are biased towards the a priori force model. We are thus confronted with a kind of hidden regularization. To decipher the underlying mechanisms, the Celestial Mechanics Approach is complemented by tools to modify the impact of the pseudo-stochastic arc-specific parameters at the normal-equation level and to efficiently generate ensembles of solutions. By introducing a time-variable a priori model and solving for hourly pseudo-stochastic accelerations, a significant reduction of noisy striping in the monthly solutions can be achieved. Setting up more frequent pseudo-stochastic parameters results in a further reduction of the noise, but also in a notable damping of the observed geophysical signals. To quantify the effect of the a priori model on the monthly solutions, the process of fixing the orbit parameters is replaced by an equivalent introduction of special pseudo-observations, i.e., by explicit regularization. The contribution of the a priori information introduced in this way is determined by a contribution analysis. The presented mechanism is universally valid: it may be used to separate any subset of parameters by pseudo-observations of a special design and to quantify the damage imposed on the solution.
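The mechanism can be illustrated generically on plain normal equations: adding pseudo-observations adds their normal-equation contribution, and the share of the data versus the a priori information in each estimated parameter can be read off the resolution matrix. A minimal sketch of this explicit regularization and contribution analysis (a generic illustration, not the authors' full GRACE setup):

```python
import numpy as np

def regularized_solution(N, b, N_prior, b_prior):
    """Combine observation normals (N, b) with pseudo-observation
    normals (N_prior, b_prior).

    The data's contribution to each parameter is the diagonal of
    (N + N_prior)^{-1} N; the prior's share is its complement.
    """
    N_tot = N + N_prior
    x = np.linalg.solve(N_tot, b + b_prior)
    data_share = np.diag(np.linalg.solve(N_tot, N))
    return x, data_share, 1.0 - data_share

# Toy example: 3 parameters, weak data on the third one, so the
# pseudo-observations dominate that component.
N = np.diag([100.0, 50.0, 0.5])
b = N @ np.array([1.0, 2.0, 3.0])   # data alone would give x = (1, 2, 3)
N_prior = np.eye(3)                 # unit-weight pseudo-observations
b_prior = N_prior @ np.zeros(3)     # prior pulls towards x = 0
x, data_share, prior_share = regularized_solution(N, b, N_prior, b_prior)
print(x)           # third component pulled strongly toward the prior
print(data_share)  # ~[0.99, 0.98, 0.33]
```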