949 results for MAXIMUM LIKELIHOOD
Abstract:
A post-classification change detection technique based on a hybrid classification approach (unsupervised and supervised) was applied to Landsat Thematic Mapper (TM), Landsat Enhanced Thematic Mapper Plus (ETM+), and ASTER images acquired in 1987, 2000, and 2004, respectively, to map land use/cover changes in the Pic Macaya National Park in the southern region of Haiti. Each image was classified individually into six land use/cover classes: built-up, agriculture, herbaceous, open pine forest, mixed forest, and barren land, using the unsupervised ISODATA and supervised maximum likelihood classifiers with the aid of ground truth data collected in the field. Ground truth information, collected in the field in December 2007 and comprising equalized stratified random points that were visually interpreted, was used to assess the accuracy of the classification results. The overall accuracy of the land classification for each image was, respectively: 1987 (82%), 2000 (82%), 2004 (87%). A post-classification change detection technique was used to produce change images for 1987 to 2000, 1987 to 2004, and 2000 to 2004. Significant changes in land use/cover occurred over the 17-year period. The results showed increases in built-up (from 10% to 17%) and herbaceous (from 5% to 14%) areas between 1987 and 2004. The increase in herbaceous cover was mostly caused by the abandonment of exhausted agricultural land. At the same time, open pine forest and mixed forest lost 75% and 83% of their area, respectively, to other land use/cover types: open pine forest (from 20% to 14%) and mixed forest (from 18% to 12%) were transformed into agricultural land or barren land. This study illustrates the continuing deforestation, land degradation, and soil erosion in the region, which in turn are leading to a decrease in vegetative cover.
The study also showed the importance of Remote Sensing (RS) and Geographic Information System (GIS) technologies for timely estimation of changes in land use/cover and for evaluating their causes, in order to design an ecologically based management plan for the park.
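As an illustration of the supervised step, a minimal Gaussian maximum likelihood classifier of the kind used for per-pixel land-cover mapping can be sketched as follows. This is a generic sketch, not the study's software; the class names and two-band pixel values below are invented.

```python
import numpy as np

def fit_classes(samples):
    """samples: dict mapping class name -> (n_pixels, n_bands) training array.
    Stores per-class mean, inverse covariance, and log-determinant."""
    params = {}
    for name, X in samples.items():
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        params[name] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return params

def classify(pixels, params):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    names = list(params)
    scores = []
    for name in names:
        mu, inv_cov, logdet = params[name]
        d = pixels - mu
        # Mahalanobis term d^T C^-1 d per pixel, plus the log-determinant penalty.
        ll = -0.5 * (np.einsum('ij,jk,ik->i', d, inv_cov, d) + logdet)
        scores.append(ll)
    return [names[i] for i in np.argmax(scores, axis=0)]
```

In practice each class would be trained from ground-truth polygons and the classifier applied band-stacked image arrays, but the decision rule is exactly this per-pixel log-likelihood comparison.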
Abstract:
Hall-effect thrusters (HETs) are compact electric propulsion devices with high specific impulse used for a variety of space propulsion applications. HET technology is well developed, but the electron properties in the discharge are not completely understood, mainly because of the difficulty of performing accurate measurements in the discharge. Measurements of electron temperature and density have been performed using electrostatic probes, but the presence of the probes can significantly disrupt thruster operation and thus alter the electron temperature and density. While fast-probe studies have expanded understanding of HET discharges, a non-invasive method of measuring the electron temperature and density in the plasma is highly desirable. An alternative to electrostatic probes is a non-perturbing laser diagnostic technique that measures Thomson scattering from the plasma. Thomson scattering is the process by which photons are elastically scattered from the free electrons in a plasma. Since the electrons have thermal energy, their motion causes a Doppler shift in the scattered photons that is proportional to their velocity. Like electrostatic probes, laser Thomson scattering (LTS) can be used to determine the temperature and density of free electrons in the plasma. Since Thomson scattering measures the electron velocity distribution function directly, no assumptions about the plasma conditions are required, allowing accurate measurements in anisotropic and non-Maxwellian plasmas. LTS requires a complicated measurement apparatus, but it has the potential to provide accurate, non-perturbing measurements of electron temperature and density in HET discharges. To assess the feasibility of LTS diagnostics on HETs, non-invasive measurements of electron temperature and density in the near-field plume of a Hall thruster were performed using a custom-built laser Thomson scattering diagnostic.
Laser measurements were processed using a maximum likelihood estimation method, and the results were compared to conventional electrostatic double-probe measurements performed at the same thruster conditions. Electron temperature was found to range from approximately 1 to 40 eV, and density ranged from approximately 1.0 × 10^17 m^-3 to 1.3 × 10^18 m^-3, over discharge voltages from 250 to 450 V and mass flow rates of 40 to 80 SCCM using xenon propellant.
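For a Maxwellian plasma the Thomson-scattered spectrum is approximately Gaussian in wavelength shift, with a width that scales as the square root of the electron temperature. The sketch below fits that width by maximizing a Poisson photon-count likelihood, which loosely mirrors the kind of MLE processing described above; the wavelength grid, amplitude, and width units are invented for illustration and are not the dissertation's actual pipeline.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(width, shifts, counts, amplitude):
    """Negative Poisson log-likelihood of a Gaussian spectrum of given width."""
    expected = amplitude * np.exp(-(shifts / width) ** 2)
    expected = np.clip(expected, 1e-12, None)
    # Poisson log-likelihood up to a constant (the log k! term does not depend on width).
    return -np.sum(counts * np.log(expected) - expected)

def fit_width(shifts, counts, amplitude):
    """ML estimate of the spectral width from binned photon counts."""
    res = minimize_scalar(neg_log_likelihood, bounds=(0.01, 10.0),
                          args=(shifts, counts, amplitude), method="bounded")
    return res.x
```

The fitted width would then be converted to an electron temperature through the scattering geometry; that conversion is instrument-specific and omitted here.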
Abstract:
All optical systems that operate in or through the atmosphere suffer from turbulence-induced image blur. Both military and civilian surveillance, gun-sighting, and target identification systems are interested in terrestrial imaging over very long horizontal paths, but atmospheric turbulence can blur the resulting images beyond usefulness. My dissertation explores the performance of a multi-frame blind deconvolution technique applied under anisoplanatic conditions for both Gaussian and Poisson noise model assumptions. The technique is evaluated for use in reconstructing images of scenes corrupted by turbulence in long horizontal-path imaging scenarios and compared to other speckle imaging techniques. Performance is evaluated via the reconstruction of a common object from three sets of simulated turbulence-degraded imagery representing low, moderate, and severe turbulence conditions; each set consists of 1000 simulated images. The MSE performance of the estimator is evaluated as a function of the number of images and the number of Zernike polynomial terms used to characterize the point spread function. I compare the mean-square-error (MSE) performance of speckle imaging methods and a maximum-likelihood, multi-frame blind deconvolution (MFBD) method applied to long-path horizontal imaging scenarios, with both methods used to reconstruct a scene from simulated imagery featuring anisoplanatic turbulence-induced aberrations. The comparison shows that speckle imaging techniques reduce the MSE by 46, 42, and 47 percent on average for the low, moderate, and severe cases, respectively, using 15 input frames under daytime conditions and moderate frame rates. Similarly, the MFBD method provides 40, 29, and 36 percent improvements in MSE on average under the same conditions.
The comparison is repeated under low-light conditions (less than 100 photons per pixel), where improvements of 39, 29, and 27 percent are available using speckle imaging methods and 25 input frames, and of 38, 34, and 33 percent, respectively, for the MFBD method and 150 input frames. The MFBD estimator is applied to three sets of field data and the results are presented. Finally, a combined Bispectrum-MFBD hybrid estimator is proposed and investigated. This technique consistently provides a lower MSE and a smaller variance in the estimate under all three simulated turbulence conditions.
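The MFBD estimator itself is far beyond a short sketch, but its Poisson-noise maximum-likelihood core is closely related to the classic Richardson-Lucy iteration, shown here for a single 1-D frame with a known PSF. This is a deliberate simplification for illustration, not the dissertation's multi-frame, unknown-PSF method.

```python
import numpy as np

def richardson_lucy(observed, psf, iters=50):
    """ML deconvolution under a Poisson noise model (Richardson-Lucy),
    1-D signal, known PSF."""
    psf = psf / psf.sum()
    psf_flipped = psf[::-1]
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(iters):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.clip(blurred, 1e-12, None)
        # Multiplicative update; preserves non-negativity of the estimate.
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate
```

MFBD generalizes this idea by jointly estimating the object and a parameterized PSF (e.g. via Zernike coefficients) over many frames.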
Abstract:
BACKGROUND/AIMS: While several risk factors for the histological progression of chronic hepatitis C have been identified, the contribution of HCV genotypes to liver fibrosis evolution remains controversial. The aim of this study was to assess independent predictors of fibrosis progression. METHODS: We identified 1189 patients from the Swiss Hepatitis C Cohort database with at least one biopsy prior to antiviral treatment and an assessable date of infection. The stage-constant fibrosis progression rate was assessed as the ratio of the Metavir fibrosis score to the duration of infection. Stage-specific fibrosis progression rates were obtained using a Markov model. Risk factors were assessed by univariate and multivariate regression models. RESULTS: Independent risk factors for accelerated stage-constant fibrosis progression (>0.083 fibrosis units/year) included male sex (OR=1.60 [95% CI 1.21-2.12], P<0.001), age at infection (OR=1.08 [1.06-1.09], P<0.001), histological activity (OR=2.03 [1.54-2.68], P<0.001), and genotype 3 (OR=1.89 [1.37-2.61], P<0.001). Slower progression rates were observed in patients infected by blood transfusion (P=0.02) and by invasive procedures or needle stick (P=0.03), compared to those infected by intravenous drug use. Maximum likelihood estimates (95% CI) of stage-specific progression rates (fibrosis units/year) for genotype 3 versus the other genotypes were: F0→F1, 0.126 (0.106-0.145) versus 0.091 (0.083-0.100); F1→F2, 0.099 (0.080-0.117) versus 0.065 (0.058-0.073); F2→F3, 0.077 (0.058-0.096) versus 0.068 (0.057-0.080); and F3→F4, 0.171 (0.106-0.236) versus 0.112 (0.083-0.142); overall P<0.001. CONCLUSIONS: This study shows a significant association of genotype 3 with accelerated fibrosis using both stage-constant and stage-specific estimates of fibrosis progression rates. This observation may have important consequences for the management of patients infected with this genotype.
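The stage-constant rate used above is simply the Metavir fibrosis score divided by the estimated duration of infection; a one-line helper makes the >0.083 fibrosis-units/year cut-off concrete. The example patient below is invented.

```python
def stage_constant_rate(metavir_score, years_infected):
    """Stage-constant fibrosis progression in fibrosis units per year:
    Metavir score divided by duration of infection."""
    return metavir_score / years_infected

# A hypothetical patient at Metavir stage F2 after 30 years of infection
# progresses at 2/30 ~ 0.067 units/year, below the 0.083 threshold for
# "accelerated" progression used in the study.
```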
Abstract:
Most studies of selection in plants estimate female fitness components and neglect male mating success, although the latter may also be fundamental to understanding adaptive evolution. Information from molecular genetic markers can be used to assess determinants of male mating success through parentage analyses. We estimated paternal selection gradients on floral traits in a large natural population of the herb Mimulus guttatus using a paternity probability model and maximum likelihood methods. This analysis revealed more significant selection gradients than a previous analysis based on regression of estimated male fertilities on floral traits. There were differences between the results of univariate and multivariate analyses, most likely due to the underlying covariance structure of the traits. Multivariate analysis, which corrects for this covariance structure, indicated that male mating success declined with distance from the mother plants and depended on the direction to them. Moreover, there was directional selection for plants with fewer open flowers, which have smaller corollas, a smaller anther-stigma separation, more red dots on the corolla, and a larger fluctuating asymmetry therein. For most of these traits, however, there was also stabilizing selection, indicating intermediate optima. The large number of significant selection gradients in this study shows that, even in relatively large natural populations where not all males can be sampled, it is possible to detect significant paternal selection gradients, and that such studies can provide valuable information for better understanding adaptive plant evolution.
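The paternity-likelihood machinery is study-specific, but the underlying notion of a directional selection gradient, the multiple-regression slope of relative fitness on standardized traits in the Lande-Arnold tradition, can be sketched generically. The variable names and data below are invented; the study estimated its gradients through a paternity probability model, not through ordinary regression as shown here.

```python
import numpy as np

def selection_gradients(traits, fitness):
    """Directional selection gradients: multiple regression of relative
    fitness (fitness / mean fitness) on variance-standardized traits."""
    Z = (traits - traits.mean(axis=0)) / traits.std(axis=0, ddof=1)
    w = fitness / fitness.mean()
    X = np.column_stack([np.ones(len(w)), Z])
    beta, *_ = np.linalg.lstsq(X, w, rcond=None)
    return beta[1:]  # drop the intercept
```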
Abstract:
In this paper we compare the performance of two image classification paradigms (object- and pixel-based) for creating a land cover map of Asmara, the capital of Eritrea, and its surrounding areas using Landsat ETM+ imagery acquired in January 2000. The classification methods used were maximum likelihood for the pixel-based approach and Bhattacharyya distance for the object-oriented approach, available in the ArcGIS and SPRING software packages, respectively. Advantages and limitations of both approaches are presented and discussed. Classification outputs were assessed using overall accuracy and Kappa indices. The pixel- and object-based classification methods resulted in overall accuracies of 78% and 85%, respectively; the Kappa coefficients were 0.74 and 0.82, respectively. Although the pixel-based approach is the most commonly used method, assessment and visual interpretation of the results clearly reveal that the object-oriented approach has advantages for this specific case study.
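Both accuracy measures reported above are computed directly from a confusion matrix: overall accuracy is the fraction of correctly classified samples, and Cohen's kappa discounts the agreement expected by chance. A minimal sketch, with an invented 2-class example matrix:

```python
import numpy as np

def overall_accuracy(cm):
    """Fraction of correctly classified samples (diagonal over total)."""
    return np.trace(cm) / cm.sum()

def kappa(cm):
    """Cohen's kappa: observed agreement corrected for chance agreement
    implied by the row and column marginals."""
    n = cm.sum()
    po = np.trace(cm) / n
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n ** 2
    return (po - pe) / (1 - pe)
```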
Abstract:
Truncated distributions of the exponential family play an important role in simulation models. This paper discusses the truncated Weibull distribution specifically. The truncated distribution is fitted by maximum likelihood estimation, either alone or combined with the expressions for the expectation and variance. After fitting, goodness-of-fit tests (the chi-square test and the Kolmogorov-Smirnov test) are applied to screen out rejected fits. Finally, the distributions are integrated into various simulation models, e.g. a shipment consolidation model, to compare the influence of the truncated and original versions of the Weibull distribution on the model.
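A hedged sketch of the maximum likelihood step for a Weibull distribution truncated above at T: the truncated density is the Weibull pdf divided by its CDF at T, and the two parameters are found by minimizing the negative log-likelihood. Parameter names, starting values, and the optimizer choice are our own, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def fit_truncated_weibull(data, T):
    """ML fit of (shape, scale) for a Weibull truncated above at T."""
    def nll(params):
        shape, scale = params
        if shape <= 0 or scale <= 0:
            return np.inf
        logpdf = weibull_min.logpdf(data, shape, scale=scale)
        # Normalize by the probability mass below the truncation point T.
        log_norm = weibull_min.logcdf(T, shape, scale=scale)
        return -np.sum(logpdf - log_norm)
    res = minimize(nll, x0=[1.0, float(np.mean(data))], method="Nelder-Mead")
    return res.x  # (shape, scale)
```

A Kolmogorov-Smirnov check against the fitted truncated CDF (Weibull CDF divided by its value at T) would complete the goodness-of-fit step described above.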
Abstract:
BACKGROUND: HCV coinfection remains a major cause of morbidity and mortality among HIV-infected individuals, and its incidence has increased dramatically in HIV-infected men who have sex with men (MSM). METHODS: Hepatitis C virus (HCV) coinfection in the Swiss HIV Cohort Study (SHCS) was studied by combining clinical data with HIV-1 pol sequences from the SHCS Drug Resistance Database (DRDB). We inferred maximum-likelihood phylogenetic trees, determined Swiss HIV transmission pairs as monophyletic patient pairs, and then considered the distribution of HCV on those pairs. RESULTS: Among the 9748 patients in the SHCS-DRDB with known HCV status, 2768 (28%) were HCV-positive. Focusing on subtype B (7644 patients), we identified 1555 potential HIV-1 transmission pairs. We found that, even after controlling for transmission group, calendar year, age, and sex, the odds of an HCV coinfection were increased by an odds ratio (OR) of 3.2 [95% confidence interval (CI) 2.2-4.7] if a patient clustered with another HCV-positive case. This strong association persisted when the transmission groups of intravenous drug users (IDU), MSM, and heterosexuals (HET) were considered separately (in all cases, OR > 2). Finally, we found that HCV incidence was increased by a hazard ratio of 2.1 (1.1-3.8) for individuals paired with an HCV-positive partner. CONCLUSIONS: Patients whose HIV is closely related to the HIV of HIV/HCV-coinfected patients have a higher risk of carrying or acquiring HCV themselves. This indicates the occurrence of domestic and sexual HCV transmission and allows the identification of patients with a high risk of HCV infection.
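As a small aside on the statistics quoted above, an odds ratio with a Wald 95% confidence interval can be computed directly from a 2×2 table. The counts below are invented for illustration, not the SHCS data; the study's ORs came from adjusted regression models, not a raw table.

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio and Wald 95% CI for the 2x2 table [[a, b], [c, d]]:
    rows = exposed/unexposed, columns = outcome yes/no."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of the summed reciprocal cell counts.
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi
```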
Abstract:
Hybrid zones provide excellent opportunities to study the processes and mechanisms underlying reproductive isolation and speciation. Here we investigated sex-specific clines of molecular markers in hybrid zones of morphologically cryptic yet genetically highly diverged evolutionary lineages of the European common vole (Microtus arvalis). We analyzed the position and width of four secondary contact zones along three independent transects in the region of the Alps using maternally (mitochondrial DNA) and paternally (Y-chromosome) inherited genetic markers. Given male-biased dispersal in the common vole, a selectively neutral secondary contact would show broader paternal marker clines than maternal ones. In a selective case, for example involving a form of Haldane's rule, Y-chromosomal clines would not be expected to be broader than maternal clines, because Y chromosomes are transmitted by the heterogametic sex and gene flow through them would be restricted. Consistent with the selective case, paternal clines were significantly narrower than, or at most equal in width to, maternal clines in all contact zones. In addition, analyses using maximum likelihood cline fitting detected a shift of paternal relative to maternal clines in three of four contact zones. These patterns suggest that processes at the contact zones in the common vole are not selectively neutral, and that partial reproductive isolation is already established between these evolutionary lineages. We conclude that hybrid zone movement, sexual selection, and/or genetic incompatibilities are likely associated with an unusual unidirectional manifestation of Haldane's rule in this common European mammal.
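Maximum likelihood cline fitting of the kind referred to above can be sketched by modeling marker frequency along a transect as a sigmoid with a centre and width, fitted to binomial counts. The parameterization and data below are invented for illustration; dedicated cline-fitting software models considerably more structure (tails, introgression asymmetry, etc.).

```python
import numpy as np
from scipy.optimize import minimize

def cline(x, centre, width):
    """Sigmoid cline: marker frequency rising from ~0 to ~1 over `width`
    around `centre` (maximum slope = 1/width)."""
    return 1.0 / (1.0 + np.exp(-4.0 * (x - centre) / width))

def fit_cline(x, n_marker, n_sampled):
    """ML fit of (centre, width) from per-site marker counts (binomial model)."""
    def nll(params):
        centre, width = params
        if width <= 0:
            return np.inf
        p = np.clip(cline(x, centre, width), 1e-9, 1 - 1e-9)
        return -np.sum(n_marker * np.log(p) + (n_sampled - n_marker) * np.log(1 - p))
    res = minimize(nll, x0=[np.median(x), np.ptp(x) / 2.0], method="Nelder-Mead")
    return res.x  # (centre, width)
```

Comparing fitted centres and widths between maternal and paternal markers is exactly the kind of contrast the study draws.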
Abstract:
The distribution of the number of heterozygous loci in two randomly chosen gametes or in a random diploid zygote provides information regarding the nonrandom association of alleles among different genetic loci. Two alternative statistics may be employed to detect nonrandom association of genes at different loci when observations are made on these distributions: the observed variance of the number of heterozygous loci (s^2_k) and a goodness-of-fit criterion (X^2) contrasting the observed distribution with that expected under the hypothesis of random association of genes. It is shown, by simulation, that s^2_k is statistically more efficient than X^2 at detecting a given extent of nonrandom association. Asymptotic normality of s^2_k is justified, and X^2 is shown to follow a chi-square (χ²) distribution with a partial loss of degrees of freedom arising from the estimation of parameters from the marginal gene frequency data. Whenever direct evaluation of linkage disequilibrium values is possible, tests based on maximum likelihood estimators of linkage disequilibria require a smaller sample size (number of zygotes or gametes) to detect a given level of nonrandom association than tests based on s^2_k. Summarizing multilocus genotype (or haplotype) data into classes defined by the number of heterozygous loci thus entails an appreciable loss of information.
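Under random association of alleles across loci, the number of heterozygous loci per individual is a sum of independent Bernoulli indicators, so its variance should equal the sum of h_j(1 - h_j) over loci; comparing the observed s^2_k against that sum is the essence of the variance test. A hedged simulation sketch, with invented heterozygosities:

```python
import numpy as np

def observed_s2k(het_indicators):
    """s^2_k: sample variance of the per-individual count of heterozygous loci.
    het_indicators: (n_individuals, n_loci) boolean array."""
    k = het_indicators.sum(axis=1)
    return k.var(ddof=1)

def expected_var_random(het_rates):
    """Variance of the count if loci associate at random: sum of h*(1-h)."""
    h = np.asarray(het_rates, float)
    return float(np.sum(h * (1.0 - h)))
```

An excess of observed over expected variance signals positive association (linkage disequilibrium) among loci.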
Abstract:
Objectives: The purpose of this meta-analysis was to examine the moderating impact of substance use disorder as an inclusion/exclusion criterion, as well as the percentage of racial/ethnic minorities, on the strength of the alliance-outcome relationship in psychotherapy. It was hypothesized that the presence of a DSM Axis I substance use disorder as a criterion and the presence of racial/ethnic minority status as a psychosocial indicator are confounded client factors reducing the relationship between alliance and outcome. Methods: A random-effects restricted maximum-likelihood estimator was used for the omnibus and moderator models (k = 94). Results: The presence of (a) substance use disorder and (b) racial/ethnic minorities (overall, and specific to African Americans) partially moderated the alliance-outcome correlation. The percentage of substance use disorders and racial/ethnic minority status were highly correlated. Conclusions: Socio-cultural contextual variables should be considered along with DSM Axis I diagnoses of substance use disorders in analyzing and interpreting mechanisms of change.
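A random-effects meta-analysis with a REML estimator of the between-study variance τ², as used for the omnibus model above, can be sketched for study-level effect sizes y_i with known sampling variances v_i. This is a simplified sketch with invented data, not the paper's analysis.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def reml_tau2(y, v):
    """REML estimate of between-study variance tau^2 for effects y with
    sampling variances v (restricted log-likelihood up to a constant)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    def neg_restricted_ll(tau2):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)
        return 0.5 * (np.sum(np.log(v + tau2)) + np.log(np.sum(w))
                      + np.sum(w * (y - mu) ** 2))
    return minimize_scalar(neg_restricted_ll, bounds=(0.0, 10.0),
                           method="bounded").x

def pooled_effect(y, v, tau2):
    """Inverse-variance weighted pooled effect under the random-effects model."""
    w = 1.0 / (np.asarray(v, float) + tau2)
    return np.sum(w * np.asarray(y, float)) / np.sum(w)
```

For homogeneous studies the estimator shrinks τ² toward zero and the model reduces to a fixed-effect analysis.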
Abstract:
Variable number of tandem repeat (VNTR) loci are genetic loci at which short sequence motifs are found repeated different numbers of times among chromosomes. To explore the potential utility of VNTR loci in evolutionary studies, I conducted a series of studies to address the following questions: (1) What are the population genetic properties of these loci? (2) What are the mutational mechanisms of repeat number change at these loci? (3) Can DNA profiles be used to measure the relatedness between a pair of individuals? (4) Can DNA fingerprints be used to measure the relatedness between populations in evolutionary studies? (5) Can microsatellite and short tandem repeat (STR) loci, which mutate in a stepwise fashion, be used in evolutionary analyses? A large number of VNTR loci typed in many populations were studied by means of recently developed statistical methods. The results of this work indicate that there is no significant departure from Hardy-Weinberg expectation (HWE) at VNTR loci in most of the human populations examined, and that the departures from HWE at some VNTR loci are not solely caused by the presence of population sub-structure. A statistical procedure is developed to investigate the mutational mechanisms of VNTR loci by studying their allele frequency distributions. Comparisons of frequency distribution data on several hundred VNTR loci with the predictions of two mutation models demonstrate that there are differences among VNTR loci grouped by repeat unit size. By extending the ITO method, I derived the distribution of the number of shared bands between individuals with any kinship relationship.
A maximum likelihood estimation procedure is proposed to estimate the relatedness between individuals from the observed number of bands they share. It was believed that classical measures of genetic distance are not applicable to the analysis of DNA fingerprints, which reveal many minisatellite loci simultaneously across the genome, because information regarding the underlying alleles and loci is not available. I propose a new measure of genetic distance based on band sharing between individuals that is applicable to DNA fingerprint data. To address the concern that microsatellite and STR loci may not be useful for evolutionary studies because of the convergent nature of their mutation mechanisms, I conclude from a theoretical study as well as computer simulation that the possible bias caused by convergent mutations can be corrected, and a novel measure of genetic distance that makes this correction is suggested. In summary, I conclude that hypervariable VNTR loci are useful in evolutionary studies of closely related populations or species, especially in the study of human evolution and the history of geographic dispersal of Homo sapiens. (Abstract shortened by UMI.)
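For the band-sharing part, a commonly used similarity index is twice the number of shared bands divided by the total number of bands in the two profiles; the ML relatedness estimator described above builds on the distribution of the shared-band count. A minimal sketch with invented band labels:

```python
def band_sharing(bands_a, bands_b):
    """Band-sharing similarity: 2 * shared bands / (bands in A + bands in B)."""
    a, b = set(bands_a), set(bands_b)
    return 2.0 * len(a & b) / (len(a) + len(b))
```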
Abstract:
Models of DNA sequence evolution and methods for estimating evolutionary distances are needed for studying the rate and pattern of molecular evolution and for inferring the evolutionary relationships of organisms or genes. In this dissertation, several new models and methods are developed. Rate variation among nucleotide sites: To obtain unbiased estimates of evolutionary distances, the rate heterogeneity among nucleotide sites of a gene should be considered. Commonly, it is assumed that the substitution rate varies among sites according to a gamma distribution (gamma model) or, more generally, an invariant+gamma model that includes some invariable sites. A maximum likelihood (ML) approach was developed for estimating the shape parameter of the gamma distribution (α) and/or the proportion of invariable sites (θ). Computer simulation showed that (1) under the gamma model, α can be well estimated from 3 or 4 sequences if the sequences are long; and (2) the distance estimate is unbiased and robust against violations of the assumptions of the invariant+gamma model. However, this ML method requires a huge amount of computational time and is practical only for fewer than 6 sequences. I therefore developed a fast method for estimating α that is easy to implement and requires no knowledge of the tree, and a computer program for estimating α and evolutionary distances that can handle as many as 30 sequences. Evolutionary distances under the stationary, time-reversible (SR) model: The SR model is a general model of nucleotide substitution that assumes (i) stationary nucleotide frequencies and (ii) time-reversibility. It can be extended to the SRV model, which allows rate variation among sites. I developed a method for estimating distances under the SR or SRV model, as well as the variance-covariance matrix of the distances.
Computer simulation showed that the SR method is better than a simpler method when the sequence length L > 1,000 bp and is robust against deviations from time-reversibility. As expected, when the rate varies among sites, the SRV method is much better than the SR method. Evolutionary distances under nonstationary nucleotide frequencies: The statistical properties of the paralinear and LogDet distances under nonstationary nucleotide frequencies were studied. First, I developed formulas for correcting the estimation biases of the paralinear and LogDet distances; the performance of these formulas and of the formulas for sampling variances was examined by computer simulation. Second, I developed a method for estimating the variance-covariance matrix of the paralinear distance, so that statistical tests of phylogenies can be conducted when the nucleotide frequencies are nonstationary. Third, a new method for testing the molecular clock hypothesis was developed for the nonstationary case.
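As a hedged illustration of the gamma model of among-site rate variation: if per-site substitution rates could be observed directly, the shape parameter α would be a one-line ML fit. The estimators described above instead work from aligned sequences; the rates below are simulated with an invented α (small α means strong rate heterogeneity).

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(1)
true_alpha = 0.5  # invented: strong among-site rate heterogeneity
# Gamma rates with mean 1, as conventionally parameterized in this setting.
rates = gamma.rvs(true_alpha, scale=1.0 / true_alpha, size=5000,
                  random_state=rng)
# ML fit of the shape parameter with the location fixed at zero.
alpha_hat, _, _ = gamma.fit(rates, floc=0)
```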
Abstract:
The use of group-randomized trials is particularly widespread in the evaluation of health care, educational, and screening strategies. Group-randomized trials represent a subset of a larger class of designs often labeled nested, hierarchical, or multilevel, characterized by the randomization of intact social units or groups, rather than individuals. The application of random effects models to group-randomized trials requires the specification of fixed and random components of the model, and the underlying assumption is usually that the random components are normally distributed. This research is intended to determine whether the Type I error rate and power are affected when the assumption of normality for the random component representing the group effect is violated. In this study, simulated data are used to examine the Type I error rate, power, bias, and mean squared error of the estimates of the fixed effect and the observed intraclass correlation coefficient (ICC) when the random component representing the group effect has a distribution with non-normal characteristics, such as heavy tails or severe skewness. The simulated data are generated with various characteristics (e.g. number of schools per condition, number of students per school, and several within-school ICCs) observed in most small, school-based, group-randomized trials. The analysis is carried out using SAS PROC MIXED, Version 6.12, with the random effects specified in a RANDOM statement and restricted maximum likelihood (REML) estimation. The results from the non-normally distributed data are compared to results obtained from the analysis of data with similar design characteristics but normally distributed random effects. The results suggest that violation of the normality assumption for the group component by a skewed or heavy-tailed distribution does not appear to influence the estimation of the fixed effect, the Type I error rate, or power.
Negative biases were detected when estimating the sample ICC, and these increased dramatically in magnitude as the true ICC increased. The biases were not as pronounced when the true ICC was within the range observed in most group-randomized trials (i.e. 0.00 to 0.05). The normally distributed group effect also resulted in biased ICC estimates when the true ICC was greater than 0.05; however, this may be a result of higher correlation within the data.
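The one-way ANOVA estimator of the ICC that such simulations examine can be sketched directly. The group counts and variance components below are invented, chosen so the true ICC is 0.05, at the upper end of the range quoted above; the study itself used SAS PROC MIXED with REML rather than this moment-based estimator.

```python
import numpy as np

def anova_icc(data):
    """One-way ANOVA intraclass correlation, ICC(1), for a balanced
    (groups, members) array of outcomes."""
    g, m = data.shape
    grand = data.mean()
    # Between-group and within-group mean squares.
    msb = m * np.sum((data.mean(axis=1) - grand) ** 2) / (g - 1)
    msw = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (g * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw)
```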
Abstract:
(1) A mathematical theory for computing the probabilities of various nucleotide configurations is developed, and the probability of obtaining the correct phylogenetic tree (model tree) from sequence data is evaluated for six phylogenetic tree-making methods (UPGMA, distance Wagner method, transformed distance method, Fitch-Margoliash's method, maximum parsimony method, and compatibility method). The number of nucleotides (m*) necessary to obtain the correct tree with a probability of 95% is estimated with special reference to the human, chimpanzee, and gorilla divergence. m* is at least 4,200, but the availability of outgroup species greatly reduces m* for all methods except UPGMA. m* increases if transitions occur more frequently than transversions, as in the case of mitochondrial DNA. (2) A new tree-making method called the neighbor-joining method is proposed. This method is applicable to either distance data or character-state data. Computer simulation has shown that the neighbor-joining method is generally better than UPGMA, Farris' method, Li's method, and the modified Farris method at recovering the true topology when distance data are used. A related method, the simultaneous partitioning method, is also discussed. (3) The maximum likelihood (ML) method for phylogeny reconstruction under the assumption of both constant and varying evolutionary rates is studied, and a new algorithm for obtaining the ML tree is presented. This method gives a tree similar to that obtained by UPGMA when a constant evolutionary rate is assumed, whereas it gives a tree similar to those obtained by the maximum parsimony method and the neighbor-joining method when varying evolutionary rates are assumed.
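The neighbor-joining method proposed in (2) is simple enough to sketch: it repeatedly joins the pair of nodes minimizing the Q-criterion and collapses them into a new node, recomputing distances until the tree is resolved. This is a compact topology-only sketch; branch-length estimation, which the full method also provides, is omitted.

```python
def neighbor_joining(D, labels):
    """Neighbor-joining on a symmetric distance matrix D (list of lists);
    returns the unrooted topology as nested tuples of the input labels."""
    D = [row[:] for row in D]
    nodes = list(labels)
    while len(nodes) > 2:
        n = len(nodes)
        r = [sum(row) for row in D]
        # Q-criterion: the minimizing pair are neighbors on the true tree.
        best, bi, bj = None, 0, 1
        for i in range(n):
            for j in range(i + 1, n):
                q = (n - 2) * D[i][j] - r[i] - r[j]
                if best is None or q < best:
                    best, bi, bj = q, i, j
        # Distances from the new internal node to the remaining nodes.
        new_dist = [0.5 * (D[bi][k] + D[bj][k] - D[bi][bj])
                    for k in range(n) if k not in (bi, bj)]
        keep = [k for k in range(n) if k not in (bi, bj)]
        D = [[D[a][b] for b in keep] for a in keep]
        for idx, d in enumerate(new_dist):
            D[idx].append(d)
        D.append(new_dist + [0.0])
        nodes = [nodes[k] for k in keep] + [(nodes[bi], nodes[bj])]
    return (nodes[0], nodes[1])
```

On additive distances the algorithm recovers the generating topology, which is the property the simulations in (2) exploit.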