899 results for two-Gaussian mixture model
Abstract:
Localized short-echo-time (1)H-MR spectra of human brain contain contributions of many low-molecular-weight metabolites and baseline contributions of macromolecules. Two approaches to model such spectra are compared, and the data acquisition sequence, optimized for reproducibility, is presented. Modeling relies on prior-knowledge constraints and linear combination of metabolite spectra. We investigated what can be gained by basis parameterization, i.e., description of basis spectra as sums of parametric lineshapes. Effects of basis composition and of adding experimentally measured macromolecular baselines were also investigated. Both fitting methods yielded quantitatively similar values, model deviations, error estimates, and reproducibility in the evaluation of 64 spectra of human gray and white matter from 40 subjects. Major advantages of parameterized basis functions are the possibilities to evaluate fitting parameters separately, to treat subgroup spectra as independent moieties, and to incorporate deviations from straightforward metabolite models. Most of the 22 basis metabolites used may provide meaningful data when comparing patient cohorts. In individual spectra, sums of closely related metabolites are often more meaningful. Inclusion of a macromolecular basis component leads to relatively small, but significantly different, tissue content for most metabolites. It provides a means to quantitate baseline contributions that may contain crucial clinical information.
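At its core, the linear-combination modeling described above reduces to a least-squares fit of the measured spectrum against metabolite basis spectra. A minimal sketch, assuming just two hypothetical basis spectra and solving the 2x2 normal equations directly:

```python
# Hedged sketch: fit a measured spectrum as a linear combination of two
# metabolite basis spectra by solving the 2x2 normal equations by hand.
# Real fits use many more basis functions plus baseline terms.
def fit_two_basis(spectrum, basis_a, basis_b):
    """Return concentrations (ca, cb) minimizing ||spectrum - ca*A - cb*B||^2."""
    aa = sum(a * a for a in basis_a)
    bb = sum(b * b for b in basis_b)
    ab = sum(a * b for a, b in zip(basis_a, basis_b))
    ay = sum(a * y for a, y in zip(basis_a, spectrum))
    by = sum(b * y for b, y in zip(basis_b, spectrum))
    det = aa * bb - ab * ab
    ca = (ay * bb - by * ab) / det
    cb = (by * aa - ay * ab) / det
    return ca, cb

# Synthetic example: spectrum built as exactly 2*A + 3*B is recovered.
A = [1.0, 0.0, 1.0, 0.0]
B = [0.0, 1.0, 0.0, 1.0]
y = [2 * a + 3 * b for a, b in zip(A, B)]
print(fit_two_basis(y, A, B))  # → (2.0, 3.0)
```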
Abstract:
BACKGROUND AND OBJECTIVES We aimed to study the impact of size, maturation and cytochrome P450 2D6 (CYP2D6) genotype activity score as predictors of intravenous tramadol disposition. METHODS Tramadol and O-desmethyl tramadol (M1) observations in 295 human subjects (postmenstrual age 25 weeks to 84.8 years, weight 0.5-186 kg) were pooled. A population pharmacokinetic analysis was performed using a two-compartment model for tramadol and two additional M1 compartments. Covariate analysis included weight, age, sex, disease characteristics (healthy subject or patient) and CYP2D6 genotype activity. A sigmoid maturation model was used to describe age-related changes in tramadol clearance (CLPO), M1 formation clearance (CLPM) and M1 elimination clearance (CLMO). A phenotype-based mixture model was used to identify CLPM polymorphism. RESULTS Differences in clearances were largely accounted for by maturation and size. The time to reach 50 % of adult clearance (TM50) was used to describe maturation. CLPM (TM50 39.8 weeks) and CLPO (TM50 39.1 weeks) displayed fast maturation, while CLMO matured more slowly, similar to glomerular filtration rate (TM50 47 weeks). The phenotype-based mixture model identified a slow and a faster metabolizer group. Slow metabolizers comprised 9.8 % of subjects and had 19.4 % of the faster metabolizers' CLPM. Low CYP2D6 genotype activity was associated with 25 % lower CLPM than in faster metabolizers, but only 32 % of those with low genotype activity were in the slow metabolizer group. CONCLUSIONS Maturation and size are key predictors of variability. A two-group polymorphism was identified based on phenotypic M1 formation clearance. Maturation of tramadol elimination occurs early (50 % of adult value at term gestation).
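The sigmoid maturation model mentioned here has the standard Hill form, in which clearance rises with postmenstrual age (PMA) and reaches half its adult value at TM50. A minimal sketch (the Hill coefficient of 3 is an illustrative assumption, not a fitted value from the study):

```python
# Hedged sketch of a sigmoid (Hill-type) maturation function:
# fraction = PMA^hill / (TM50^hill + PMA^hill)
def maturation_fraction(pma_weeks, tm50, hill):
    """Fraction of adult clearance reached at a given postmenstrual age."""
    return pma_weeks ** hill / (tm50 ** hill + pma_weeks ** hill)

# By construction, at PMA == TM50 the model returns exactly 50% of adult clearance,
# consistent with CLPM reaching half its adult value near term (TM50 39.8 weeks).
print(maturation_fraction(39.8, 39.8, 3.0))  # → 0.5
```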
Abstract:
Though E2F1 is deregulated in most human cancers by mutations of the p16-cyclin D-Rb pathway, it also exhibits tumor suppressive activity. A transgenic mouse model overexpressing E2F1 under the control of the bovine keratin 5 (K5) promoter exhibits epidermal hyperplasia and spontaneously develops tumors in the skin and other epithelial tissues after one year of age. In a p53-deficient background, aberrant apoptosis in K5 E2F1 transgenic epidermis is reduced and tumorigenesis is accelerated. In sharp contrast, K5 E2F1 transgenic mice are resistant to papilloma formation in the DMBA/TPA two-stage carcinogenesis protocol. K5 E2F4 and K5 DP1 transgenic mice were also characterized; both display epidermal hyperplasia but do not develop spontaneous tumors, even in cooperation with p53 deficiency. These transgenic mice do not have increased levels of apoptosis in their skin and are more susceptible to papilloma formation in the two-stage carcinogenesis model. These studies show that deregulated proliferation does not necessarily lead to tumor formation and that the ability to suppress skin carcinogenesis is unique to E2F1. E2F1 can also suppress skin carcinogenesis when okadaic acid is used as the tumor promoter and when a pre-initiated mouse model is used, demonstrating that E2F1's tumor suppressive activity is not specific for TPA and occurs at the promotion stage. E2F1 was thought to induce p53-dependent apoptosis through upregulation of the p19ARF tumor suppressor, which inhibits mdm2-mediated p53 degradation. Consistent with in vitro studies, the overexpression of E2F1 in mouse skin results in the transcriptional activation of p19ARF and the accumulation of p53. Inactivation of either p19ARF or p53 restores the sensitivity of K5 E2F1 transgenic mice to DMBA/TPA carcinogenesis, demonstrating that an intact p19ARF-p53 pathway is necessary for E2F1 to suppress carcinogenesis.
Surprisingly, while p53 is required for E2F1 to induce apoptosis in mouse skin, p19ARF is not, and inactivation of p19ARF actually enhances E2F1-induced apoptosis and proliferation in transgenic epidermis. This indicates that ARF is important for E2F1-induced tumor suppression but not apoptosis. Senescence is another potential mechanism of tumor suppression that involves p53 and p19ARF. K5 E2F1 transgenic mice initiated with DMBA and treated with TPA show an increased number of senescent cells in their epidermis. These experiments demonstrate that E2F1's unique tumor suppressive activity in two-stage skin carcinogenesis can be genetically separated from E2F1-induced apoptosis, and suggest that senescence acting through the p19ARF-p53 pathway plays a role in tumor suppression by E2F1.
Abstract:
Mixture modeling is commonly used to model categorical latent variables that represent subpopulations in which population membership is unknown but can be inferred from the data. In recent years, finite mixture models have been applied to time-to-event data. However, the commonly used survival mixture model assumes that the effects of the covariates on failure times differ across latent classes while the covariate distribution is homogeneous. The aim of this dissertation is to develop a method to examine time-to-event data in the presence of unobserved heterogeneity under a mixture modeling framework. A joint model is developed that incorporates the latent survival trajectory along with the observed information for the joint analysis of a time-to-event variable, its discrete and continuous covariates, and a latent class variable. It is assumed that both the effects of covariates on survival times and the distribution of covariates vary across latent classes. The unobservable survival trajectories are identified by estimating the probability that a subject belongs to a particular class based on the observed information. We applied this method to a Hodgkin lymphoma study with long-term follow-up and observed four distinct latent classes in terms of long-term survival and distributions of prognostic factors. Our results from simulation studies and from the Hodgkin lymphoma study demonstrated the superiority of our joint model compared with the conventional survival model. This flexible inference method provides more accurate estimation and accommodates unobservable heterogeneity among individuals while taking interactions between covariates into consideration.
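The joint model's key idea, class-specific survival distributions combined with class-specific covariate distributions feeding one posterior class probability, can be sketched for a toy two-class case with exponential failure times and a single binary covariate (all parameter values below are illustrative, not from the dissertation):

```python
import math

# Hedged sketch of a two-class survival mixture: each latent class has its own
# exponential hazard AND its own covariate distribution (a class-specific
# Bernoulli covariate), so the posterior class probability uses both sources.
def posterior_class1(t, x, pi1, lam, p_x):
    """P(class 1 | survival time t, binary covariate x).

    pi1: prior probability of class 1
    lam = (lam1, lam2): class-specific exponential hazards
    p_x = (p1, p2): class-specific P(x = 1)
    """
    def lik(lam_k, p_k):
        f_t = lam_k * math.exp(-lam_k * t)      # exponential density of t
        f_x = p_k if x == 1 else 1.0 - p_k      # likelihood of the covariate
        return f_t * f_x
    l1 = pi1 * lik(lam[0], p_x[0])
    l2 = (1.0 - pi1) * lik(lam[1], p_x[1])
    return l1 / (l1 + l2)

# A long survivor with the class-1-typical covariate is assigned mostly to class 1.
print(round(posterior_class1(t=1.0, x=1, pi1=0.5, lam=(0.2, 1.0), p_x=(0.8, 0.2)), 3))
```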
Abstract:
Complex diseases such as cancer result from multiple genetic changes and environmental exposures. Due to the rapid development of genotyping and sequencing technologies, we are now able to more accurately assess causal effects of many genetic and environmental factors. Genome-wide association studies have been able to localize many causal genetic variants predisposing to certain diseases. However, these studies only explain a small portion of the heritability of diseases. More advanced statistical models are urgently needed to identify and characterize additional genetic and environmental factors and their interactions, which will enable us to better understand the causes of complex diseases. In the past decade, thanks to increasing computational capabilities and novel statistical developments, Bayesian methods have been widely applied in genetics/genomics research and have demonstrated superiority over some standard approaches in certain research areas. Gene-environment and gene-gene interaction studies are among the areas where Bayesian methods may fully exert their advantages. This dissertation focuses on developing new Bayesian statistical methods for data analysis with complex gene-environment and gene-gene interactions, as well as extending some existing methods for gene-environment interactions to other related areas. It includes three sections: (1) deriving the Bayesian variable selection framework for hierarchical gene-environment and gene-gene interactions; (2) developing the Bayesian Natural and Orthogonal Interaction (NOIA) models for gene-environment interactions; and (3) extending the applications of two Bayesian statistical methods, which were developed for gene-environment interaction studies, to other related areas such as adaptive borrowing of historical data.
We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene-gene interactions (epistasis) and gene-environment interactions in the same model. It is well known that, in many practical situations, there exists a natural hierarchical structure between the main effects and interactions in a linear model. Here we propose a model that incorporates this hierarchical structure into the Bayesian mixture model, such that irrelevant interaction effects can be removed more efficiently, resulting in more robust, parsimonious and powerful models. We evaluate both the 'strong hierarchical' and 'weak hierarchical' models, which specify that both or at least one, respectively, of the main effects of interacting factors must be present for the interaction to be included in the model. Extensive simulation results show that the proposed strong and weak hierarchical mixture models control the proportion of false positive discoveries and yield a powerful approach to identifying predisposing main effects and interactions in studies with complex gene-environment and gene-gene interactions. We also compare these two models with the 'independent' model that does not impose the hierarchical constraint and observe their superior performance in most of the considered situations. The proposed models are applied in real data analyses of gene-environment interactions in lung cancer and cutaneous melanoma case-control studies. Bayesian statistical models have the advantage of being able to incorporate useful prior information in the modeling process. Moreover, the Bayesian mixture model outperforms the multivariate logistic model in terms of parameter estimation and variable selection in most cases.
Our proposed models impose hierarchical constraints that further improve the Bayesian mixture model by reducing the proportion of false positive findings among the identified interactions and by successfully identifying the reported associations. This is practically appealing for investigating causal factors among a moderate number of candidate genetic and environmental factors together with a relatively large number of interactions. The natural and orthogonal interaction (NOIA) models of genetic effects were previously developed to provide an analysis framework in which the estimates of effects for a quantitative trait are statistically orthogonal regardless of the existence of Hardy-Weinberg Equilibrium (HWE) within loci. Ma et al. (2012) recently developed a NOIA model for gene-environment interaction studies and showed its advantages for detecting true main effects and interactions, compared with the usual functional model. In this project, we propose a novel Bayesian statistical model that combines the Bayesian hierarchical mixture model with the NOIA statistical model and the usual functional model. The proposed Bayesian NOIA model demonstrates more power at detecting non-null effects with higher marginal posterior probabilities. We also review two Bayesian statistical models (the Bayesian empirical shrinkage-type estimator and Bayesian model averaging), which were developed for gene-environment interaction studies. Inspired by these models, we develop two novel statistical methods that can handle related problems such as borrowing data from historical studies. The proposed methods are analogous to the gene-environment interaction methods in that they balance statistical efficiency and bias within a unified model.
Through extensive simulation studies, we compare the operating characteristics of the proposed models with those of existing models, including the hierarchical meta-analysis model. The results show that the proposed approaches adaptively borrow historical data in a data-driven way. These novel models may have a broad range of statistical applications in both genetic/genomic and clinical studies.
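The strong/weak hierarchy constraint described in this abstract can be illustrated with a small sketch of the inclusion-indicator logic (the variable names and indicator values are hypothetical):

```python
# Hedged sketch of the hierarchical inclusion constraints on interaction terms.
# gamma_main: inclusion indicators for main effects; the interaction (a, b) may
# enter the model only if its parent main effects satisfy the chosen rule.
def interaction_allowed(gamma_main, a, b, rule="strong"):
    if rule == "strong":   # both parent main effects must be in the model
        return gamma_main[a] and gamma_main[b]
    if rule == "weak":     # at least one parent main effect must be in the model
        return gamma_main[a] or gamma_main[b]
    return True            # 'independent' model: no hierarchical constraint

# Hypothetical state: gene G1 and environment E are in the model, gene G2 is not.
gamma = {"G1": True, "G2": False, "E": True}
print(interaction_allowed(gamma, "G1", "G2", "strong"))  # → False
print(interaction_allowed(gamma, "G1", "G2", "weak"))    # → True
```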
Abstract:
This study was conducted to model the spatial distribution of maize head smut caused by Sporisorium reilianum during 2006 in the State of Mexico and to visualize it through the generation of density maps. Sampling was carried out in 100 georeferenced plots in each locality analyzed. Disease incidence (percentage of diseased plants) was determined by establishing five points per plot; at each point, 100 plants were counted. A geostatistical analysis was performed to estimate the experimental semivariogram, which was then fitted to a theoretical model (spherical, exponential or Gaussian) using the Variowin 2.2 software; the fit was validated through cross-validation. Subsequently, disease aggregation maps were produced with the geostatistical interpolation method known as kriging. The results indicated that the disease was present in 20 localities across 19 municipalities of the State of Mexico; all localities showed spatially aggregated disease behavior, with 16 localities fitting the spherical model, two the exponential model and two the Gaussian model. For all models, aggregation maps were established that will make it possible to target management actions at specific points or sites.
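The theoretical semivariogram models mentioned above have standard closed forms; for example, the spherical model that 16 of the 20 localities fitted can be sketched as follows (nugget, sill and range values are illustrative, not fitted values from the study):

```python
# Hedged sketch of the spherical semivariogram model used in kriging:
# gamma(h) rises from the nugget to nugget + sill at lag h = a (the range)
# and stays constant beyond it.
def spherical_semivariogram(h, nugget, sill, a):
    """Semivariance at lag distance h for a spherical model with range a."""
    if h >= a:
        return nugget + sill
    r = h / a
    return nugget + sill * (1.5 * r - 0.5 * r ** 3)

print(spherical_semivariogram(0.0, 0.1, 0.9, 100.0))    # nugget only → 0.1
print(spherical_semivariogram(100.0, 0.1, 0.9, 100.0))  # nugget + sill → 1.0
```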
Abstract:
We have developed a new projector model specifically tailored for fast list-mode tomographic reconstructions in positron emission tomography (PET) scanners with parallel planar detectors. The model provides an accurate estimation of the probability distribution of coincidence events defined by pairs of scintillating crystals. This distribution is parameterized with 2D elliptical Gaussian functions defined in planes perpendicular to the main axis of the tube of response (TOR). The parameters of these Gaussian functions have been obtained by fitting Monte Carlo simulations that include positron range, acolinearity of gamma rays, as well as detector attenuation and scatter effects. The proposed model has been applied efficiently to list-mode reconstruction algorithms. Evaluation with Monte Carlo simulations of a rotating high-resolution PET scanner indicates that this model yields a better recovery-to-noise ratio in OSEM (ordered-subsets expectation-maximization) reconstruction than list-mode reconstruction with a symmetric circular Gaussian TOR model, and than histogram-based OSEM with a precalculated system matrix using Monte Carlo simulated models and symmetries.
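Evaluating the 2D elliptical Gaussian weight in a plane perpendicular to the TOR reduces to a bivariate normal density; a minimal axis-aligned sketch (the sigma values are illustrative, not the fitted Monte Carlo parameters):

```python
import math

# Hedged sketch of the 2D elliptical Gaussian weighting a point (u, v) in the
# plane perpendicular to the tube of response. Axis-aligned case with no
# correlation term; a rotated TOR would add an orientation angle.
def elliptical_gaussian(u, v, sigma_u, sigma_v):
    """Normalized bivariate Gaussian density with distinct per-axis widths."""
    norm = 1.0 / (2.0 * math.pi * sigma_u * sigma_v)
    return norm * math.exp(-0.5 * ((u / sigma_u) ** 2 + (v / sigma_v) ** 2))

# The density peaks on the TOR axis and falls off anisotropically.
peak = elliptical_gaussian(0.0, 0.0, 1.2, 0.8)
print(round(peak, 4))
```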
Abstract:
We present a novel approach using both sustained vowels and connected speech to detect obstructive sleep apnea (OSA) cases within a homogeneous group of speakers. The proposed scheme is based on state-of-the-art GMM-based classifiers, and specifically addresses the way in which acoustic models are trained on standard databases, as well as the complexity of the resulting models and their adaptation to specific data. Our experimental database contains a suitable number of connected-speech utterances and sustained vowels from healthy (i.e., control) and OSA Spanish speakers. Finally, a 25.1% relative reduction in classification error is achieved when fusing the continuous- and sustained-speech classifiers. Index Terms: obstructive sleep apnea (OSA), Gaussian mixture models (GMMs), background model (BM), classifier fusion.
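The fusion step can be sketched as a weighted combination of the per-branch GMM log-likelihood-ratio scores (OSA model vs. background model); the weight and threshold below are illustrative assumptions, not the values used in the paper:

```python
# Hedged sketch of late score fusion for two GMM-based classifiers: each branch
# emits a log-likelihood ratio (target model vs. background model) and the
# fused score is a weighted sum compared against a decision threshold.
def fused_score(llr_connected, llr_sustained, w=0.5):
    return w * llr_connected + (1.0 - w) * llr_sustained

def classify(llr_connected, llr_sustained, threshold=0.0, w=0.5):
    score = fused_score(llr_connected, llr_sustained, w)
    return "OSA" if score > threshold else "control"

# A strong connected-speech score can outweigh a weak sustained-vowel score.
print(classify(llr_connected=0.4, llr_sustained=-0.1))  # → OSA (fused = 0.15)
```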
Abstract:
In this paper we present an adaptive multi-camera system for real-time object detection able to efficiently adjust the computational requirements of the video processing blocks to the available processing power and the activity of the scene. The system is based on a two-level adaptation strategy that works at both the local and the global level. Object detection is based on a Gaussian mixture model background subtraction algorithm. Results show that the system can efficiently adapt the algorithm parameters without a significant loss in detection accuracy.
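Gaussian mixture background subtraction keeps a small per-pixel mixture of Gaussians and flags pixels that match none of them as foreground; a minimal single-pixel sketch (the mixture values and the 2.5-sigma matching rule are illustrative, in the spirit of Stauffer-Grimson):

```python
# Hedged single-pixel sketch of Gaussian-mixture background subtraction: each
# background Gaussian is (weight, mean, var); a pixel matches a Gaussian if it
# lies within k standard deviations of its mean, otherwise it is foreground.
# (The weights would rank/update components in the full algorithm.)
def is_foreground(pixel, mixture, k=2.5):
    return not any(abs(pixel - mu) <= k * var ** 0.5 for _, mu, var in mixture)

# Illustrative bimodal background model: a dark mode and a bright mode.
mixture = [(0.7, 120.0, 16.0), (0.3, 200.0, 25.0)]
print(is_foreground(118.0, mixture))  # near the 120 mode → False (background)
print(is_foreground(60.0, mixture))   # far from both modes → True (foreground)
```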
Abstract:
In this paper we propose a two-component polarimetric model for soil moisture estimation on vineyards suited for C-band radar data. According to a polarimetric analysis carried out here, this scenario is made up of one dominant direct return from the soil and a multiple scattering component accounting for disturbing and nonmodeled signal fluctuations from soil and short vegetation. We propose a combined X-Bragg/Fresnel approach to characterize the polarized direct response from the soil. A validation of this polarimetric model has been performed in terms of its consistency with respect to the available data, both from RADARSAT-2 and from indoor measurements. High inversion rates are reported for different phenological stages of vines, and the model gives a consistent interpretation of the data as long as the volume component power remains at or below about 50% of the surface contribution power. However, the scarcity of soil moisture measurements in this study prevents the validation of the algorithm in terms of the accuracy of soil moisture retrieval, and an extensive campaign is required to fully demonstrate the validity of the model. Different sources of mismatch between the model and the data have also been discussed and analyzed.
Abstract:
The Import Substitution Process in Latin America was an attempt to enhance GDP growth and productivity by raising trade barriers on capital-intensive products. Our main goal is to analyze how an increase in the import tariff on a particular type of good affects the production choices and trade pattern of an economy. We develop an extension of the dynamic Heckscher-Ohlin model – a combination of a static two-good, two-factor Heckscher-Ohlin model and a two-sector growth model – allowing for an import tariff. We then calibrate the closed-economy model to the US. The results show that the economy will produce less of both consumption and investment goods under autarky for low and high levels of capital stock per worker. We also find that total GDP may be lower under free trade in comparison to autarky.
Abstract:
Subaerially erupted tholeiites at Hole 642E were never exposed to the high-temperature seawater circulation and alteration conditions that are found at subaqueous ridges. Alteration of Site 642 rocks is therefore the product of the interaction of rocks and fluids at low temperatures. The alteration mineralogy can thus be used to provide information on the geochemical effects of low temperature circulation of seawater. Rubidium-strontium systematics of leached and unleached tholeiites and underlying, continentally-derived dacites reflect interactions with seawater in fractures and vesicular flow tops. The secondary mineral assemblage in the tholeiites consists mainly of smectite, accompanied in a few flows by the assemblage celadonite + calcite (± native Cu). Textural relationships suggest that smectites formed early and that celadonite + calcite, which are at least in part cogenetic, formed later than and partially at the expense of smectite. Smectite precipitation occurred under variable, but generally low, water/rock conditions. The smectites contain much lower concentrations of alkali elements than has been reported in seafloor basalts, and sequentially leached fractions of smectite contain Sr that has not achieved isotopic equilibrium. 87Sr/86Sr results of the leaching experiments suggest that Sr was mostly derived from seawater during early periods of smectite precipitation. The basalt-like 87Sr/86Sr of the most readily exchangeable fraction seems to suggest a late period of exposure to very low water/rock. Smectite formation may have primarily occurred in the interval between the nearly 58-Ma age given by the lower series dacites and the 54.5 ± 0.2 Ma model age given by a celadonite from the top of the tholeiitic section. The 54.5 ± 0.2 Ma Rb-Sr model age may be recording the timing of foundering of the Voring Plateau.
Celadonites precipitated in flows below the top of the tholeiitic section define a Rb-Sr isochron with a slope corresponding to an age of 24.3 ± 0.4 Ma. This isochron may reflect mixing effects due to long-term chemical interaction between seawater and basalts, in which case the age provides only a minimum for the timing of late alteration. Alternatively, inferential arguments can be made that the 24.3 ± 0.4 Ma isochron age reflects the timing of the late Oligocene-early Miocene erosional event that affected the Norwegian-Greenland Sea. Correlation of 87Sr/86Sr and 1/Sr in calcites results in a two-component mixing model for late alteration products. One end-member of the mixing trend is Eocene or younger seawater. Strontium from the nonradiogenic end-member cannot, however, have been derived directly from the basalts. Rather, the data suggest that Sr in the calcites is a mixture of Sr derived from seawater and from pre-existing smectites. For Site 642, the reaction involved can be generalized as smectite + seawater → celadonite + calcite. The geochemical effects of this reaction include net gains of K and CO2 by the secondary mineral assemblage. The gross similarity of the reactions involved in late, low-temperature alteration at Site 642 to those observed in other seafloor basalts suggests that the transfer of K and CO2 to the crust during low-temperature seawater-ocean crust interactions may be significant in calculations of global fluxes.
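The 87Sr/86Sr vs. 1/Sr correlation in the calcites reflects the standard two-component conservative mixing relation, which can be sketched numerically (end-member concentrations and isotope ratios below are illustrative, not Site 642 data):

```python
# Hedged sketch of two-component Sr mixing: for conservative mixing of two
# reservoirs, the isotope ratio of the mixture is a concentration-weighted
# average, which makes 87Sr/86Sr linear in 1/[Sr] across the mixing trend.
def mixing_ratio(f, sr_a, r_a, sr_b, r_b):
    """87Sr/86Sr of a mixture with mass fraction f of end-member A.

    sr_a, sr_b: Sr concentrations of the end-members (e.g. ppm)
    r_a, r_b:   their 87Sr/86Sr ratios
    """
    sr_mix = f * sr_a + (1 - f) * sr_b
    return (f * sr_a * r_a + (1 - f) * sr_b * r_b) / sr_mix

# Pure end-members are recovered at f = 1 and f = 0.
print(round(mixing_ratio(1.0, sr_a=8.0, r_a=0.7092, sr_b=100.0, r_b=0.7035), 4))
print(round(mixing_ratio(0.0, sr_a=8.0, r_a=0.7092, sr_b=100.0, r_b=0.7035), 4))
```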
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
The research was aimed at developing a technology to combine the production of useful microfungi with the treatment of wastewater from food processing. A recycle bioreactor equipped with a micro-screen was developed as a laboratory-scale wastewater treatment system to contain a Rhizopus culture and maintain its dominance under non-aseptic conditions. Competitive growth of bacteria was observed, but this was minimised by manipulation of the solids retention time and the hydraulic retention time. Removal of about 90% of the waste organic material (as BOD) from the wastewater was achieved simultaneously. Since essentially all fungi are retained behind the 100 μm aperture screen, the solids retention time could be controlled by the rate of harvesting. The hydraulic retention time was employed to control bacterial growth, as the bacteria were washed through the screen at a short HRT. A steady-state model was developed to determine these two parameters; this model predicts the effluent quality. Experimental work is still needed to determine the growth characteristics of the selected fungal species under optimum conditions (pH and temperature).
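The central balance in such a steady-state model is the chemostat relation: at steady state the specific growth rate of the retained biomass must equal 1/SRT plus decay, which fixes the effluent substrate concentration under Monod kinetics. A hedged sketch (all kinetic parameter values are illustrative, not fitted to the Rhizopus system):

```python
# Hedged sketch of the steady-state substrate balance for biomass retained
# behind the screen: mu(S) = 1/SRT + b, with Monod kinetics
# mu(S) = mu_max * S / (Ks + S), solved for the effluent substrate S.
def effluent_substrate(srt, ks, mu_max, b):
    """Effluent substrate concentration S at steady state (same units as Ks).

    srt: solids retention time (d), ks: half-saturation constant,
    mu_max: maximum specific growth rate (1/d), b: decay coefficient (1/d)
    """
    growth_needed = 1.0 / srt + b
    return ks * growth_needed / (mu_max - growth_needed)

# Longer SRT (slower harvesting) → lower effluent substrate, i.e. better
# BOD removal, which is the control lever described in the abstract.
print(effluent_substrate(srt=2.0, ks=50.0, mu_max=4.0, b=0.1) >
      effluent_substrate(srt=10.0, ks=50.0, mu_max=4.0, b=0.1))  # → True
```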
Abstract:
Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.
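The posterior-probability assignment rule described here can be sketched directly for univariate normal components (the mixture parameters below are illustrative):

```python
import math

# Hedged sketch of outright clustering via a fitted normal mixture: compute the
# posterior probability (responsibility) of each component for an observation x
# and assign x to the component with the highest posterior.
def assign_cluster(x, weights, means, sds):
    def dens(mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    post = [w * dens(m, s) for w, m, s in zip(weights, means, sds)]
    total = sum(post)
    post = [p / total for p in post]                 # normalize to posteriors
    return max(range(len(post)), key=post.__getitem__), post

# With equal weights and sds, the observation joins the component whose mean
# it is closest to.
label, post = assign_cluster(1.8, weights=[0.5, 0.5], means=[0.0, 2.0], sds=[1.0, 1.0])
print(label)  # → 1
```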