68 results for "two-step Langmuir model"


Relevance: 100.00%

Abstract:

It has long been recognized that highly polymorphic genetic markers can lead to underestimation of divergence between populations when migration is low. Microsatellite loci, which are characterized by extremely high mutation rates, are particularly likely to be affected. Here, we report genetic differentiation estimates in a contact zone between two chromosome races of the common shrew (Sorex araneus), based on 10 autosomal microsatellites, a newly developed Y-chromosome microsatellite, and mitochondrial DNA. These results are compared to previous data on proteins and karyotypes. Estimates of genetic differentiation based on F- and R-statistics are much lower for autosomal microsatellites than for all other genetic markers. We show by simulations that this discrepancy stems mainly from the high mutation rate of microsatellite markers for F-statistics and from deviations from a single-step mutation model for R-statistics. The sex-linked genetic markers show that all gene exchange between races is mediated by females. The absence of male-mediated gene flow most likely results from male hybrid sterility.
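The mutation-rate sensitivity described above can be made concrete with a toy simulation. The sketch below is not the authors' simulation code: it evolves microsatellite allele sizes in two fully isolated populations under a strict single-step mutation model (population size, mutation rate and run length are arbitrary placeholders) and contrasts a heterozygosity-based F_ST-style estimate with an allele-size-variance-based R_ST-style estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(n=500, mu=1e-3, generations=2000):
    """Evolve allele sizes (repeat counts) under a single-step mutation model."""
    pop = np.full(n, 20)               # start monomorphic at 20 repeats
    for _ in range(generations):
        pop = rng.choice(pop, size=n)  # Wright-Fisher resampling (drift)
        mutants = rng.random(n) < mu   # each copy mutates with probability mu
        steps = rng.choice([-1, 1], size=n)
        pop = pop + mutants * steps    # +/- one repeat unit
    return pop

def het(pop):
    """Expected heterozygosity 1 - sum(p_i^2)."""
    _, counts = np.unique(pop, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

pop1, pop2 = evolve(), evolve()        # two isolated populations

# F_ST-style estimate from heterozygosities: (H_T - H_S) / H_T
hs = 0.5 * (het(pop1) + het(pop2))
ht = het(np.concatenate([pop1, pop2]))
fst = (ht - hs) / ht

# R_ST-style estimate from allele-size variances: (S - S_W) / S
sw = 0.5 * (pop1.var() + pop2.var())
s = np.concatenate([pop1, pop2]).var()
rst = (s - sw) / s

print(f"F_ST-like: {fst:.3f}   R_ST-like: {rst:.3f}")
```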

Relevance: 100.00%

Abstract:

Zero correlation between measurement error and model error has been assumed in existing panel data models dealing specifically with measurement error. We extend this literature and propose a simple model in which one regressor is mismeasured, allowing the measurement error to correlate with the model error; the conventional zero-correlation setting is the special case in which the correlated measurement error equals zero. We ask two research questions. First, can the correlated measurement error be identified in the context of panel data? Second, do classical instrumental variables in panel data need to be adjusted when the correlation between measurement error and model error cannot be ignored? Under some regularity conditions, the answer to both questions is yes. We then propose a two-step estimation corresponding to the two questions: the first step estimates the correlated measurement error from a reverse regression, and the second step estimates the usual coefficients of interest using adjusted instruments.
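A short simulation makes the motivation concrete. The sketch below is not the paper's estimator: it simply shows that when measurement error correlates with model error, naive OLS is biased and even a "classical" instrument (a second noisy measurement of the regressor, whose error also correlates with the model error) remains biased, which is what the adjusted instruments of the second step are meant to repair. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 200_000, 1.0

x = rng.normal(size=n)                      # true regressor
# Model error e, measurement error u, and the error u2 of a second,
# "classical" instrumenting measurement; u and u2 both correlate with e.
rho = 0.5
cov = [[1.0, rho, rho],
       [rho, 1.0, 0.0],
       [rho, 0.0, 1.0]]
e, u, u2 = rng.multivariate_normal([0.0, 0.0, 0.0], cov, size=n).T

x_obs = x + u                               # mismeasured regressor
z = x + u2                                  # classical instrument (repeat measurement)
y = beta * x + e

ols = (x_obs @ y) / (x_obs @ x_obs)         # biased: attenuation + corr(u, e)
iv = (z @ y) / (z @ x_obs)                  # still biased, since corr(u2, e) != 0
print(f"OLS: {ols:.3f}   classical IV: {iv:.3f}   true beta: {beta}")
```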

Relevance: 100.00%

Abstract:

Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive, low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity and the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust, and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges.
In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
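The gradual-deformation idea behind the new perturbation scheme is compact enough to sketch. Assuming two independent zero-mean, unit-variance Gaussian fields m1 and m2 with the same covariance, the combination m(θ) = m1·cos θ + m2·sin θ is again a realization of the same geostatistical model, and θ tunes the perturbation strength. This is a generic one-dimensional illustration, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_field(n, corr_len):
    """Crude 1-D Gaussian random field via moving-average smoothing of white noise."""
    w = rng.normal(size=n + 10 * corr_len)
    kernel = np.exp(-0.5 * (np.arange(-3 * corr_len, 3 * corr_len + 1) / corr_len) ** 2)
    f = np.convolve(w, kernel, mode="same")[5 * corr_len:5 * corr_len + n]
    return (f - f.mean()) / f.std()          # normalize to zero mean, unit variance

n, corr_len = 200, 10
m1 = gaussian_field(n, corr_len)             # current MCMC model
m2 = gaussian_field(n, corr_len)             # independent proposal field

theta = 0.2                                  # small angle -> gentle perturbation
proposal = m1 * np.cos(theta) + m2 * np.sin(theta)

# cos^2 + sin^2 = 1 preserves the mean, variance and covariance structure,
# so the proposal remains a valid realization of the same geostatistical model.
print(np.std(proposal))                      # ~1, like m1 and m2
```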

Relevance: 100.00%

Abstract:

INTRODUCTION. The role of turbine-based NIV ventilators (TBV) versus ICU ventilators with the NIV mode activated (ICUV) for delivering NIV in severe respiratory failure remains debated. OBJECTIVES. To compare the response time and pressurization capacity of TBV and ICUV during simulated NIV with normal and increased respiratory demand, under normal and obstructive respiratory mechanics. METHODS. In a two-chamber lung model, a ventilator simulated normal (P0.1 = 2 mbar, respiratory rate RR = 15/min) or increased (P0.1 = 6 mbar, RR = 25/min) respiratory demand. NIV was simulated by connecting the lung model (compliance 100 ml/mbar; resistance 5 or 20 mbar/l/s) to a dummy head equipped with a naso-buccal mask. Connections allowed intentional leaks (29 ± 5 % of insufflated volume). The ventilators tested, Servo-i (Maquet), V60 and Vision (Philips Respironics), were connected via a standard circuit to the mask. Applied pressure support levels (PSL) were 7 mbar for normal and 14 mbar for increased demand. Airway pressure and flow were measured in the ventilator circuit and in the simulated airway. Ventilator performance was assessed by determining trigger delay (Td, ms), pressure-time product at 300 ms (PTP300, mbar·s) and inspiratory tidal volume (VT, ml), compared by three-way ANOVA for the effects of inspiratory effort, resistance and ventilator. Differences between ventilators for each condition were tested by one-way ANOVA and contrast (JMP 8.0.1, p < 0.05). RESULTS. Inspiratory demand and resistance had a significant effect throughout all comparisons. Ventilator data are given in Tables 1 (normal demand) and 2 (increased demand): (a) different from Servo-i, (b) different from V60. CONCLUSION. In this NIV bench study with leaks, trigger delay was shorter for TBV with normal respiratory demand. By contrast, it was shorter for ICUV when respiratory demand was high. ICUV afforded better pressurization (PTP300) with increased demand and PSL, particularly with increased resistance. TBV provided a higher inspiratory VT (i.e., downstream from the leaks) with normal demand, and a significantly (although minimally) lower VT with increased demand and PSL.
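For reference, the two performance metrics can be computed from an airway-pressure trace as in the sketch below; the simulated trace, the 0.5 mbar trigger-detection threshold and the convention of integrating from effort onset are illustrative assumptions, not the study's exact definitions.

```python
import numpy as np

fs = 1000                                   # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)               # 1 s of signal

# Toy airway-pressure trace: effort starts at 100 ms, the ventilator begins
# pressurizing 60 ms later and rises exponentially towards PSL = 7 mbar.
effort_onset = 0.100
vent_onset = 0.160
paw = np.where(t < vent_onset, 0.0,
               7.0 * (1 - np.exp(-(t - vent_onset) / 0.05)))

# Trigger delay: time from effort onset until Paw exceeds a small threshold.
trigger_idx = np.argmax(paw > 0.5)          # first sample above 0.5 mbar
td_ms = (t[trigger_idx] - effort_onset) * 1000

# PTP300: area between Paw and baseline over the 300 ms following effort onset.
win = (t >= effort_onset) & (t < effort_onset + 0.300)
ptp300 = np.trapz(paw[win], t[win])         # mbar * s

print(f"Td = {td_ms:.0f} ms, PTP300 = {ptp300:.2f} mbar*s")
```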

Relevance: 100.00%

Abstract:

Background: Multiple logistic regression is precluded from many practical applications in ecology that aim to predict the geographic distributions of species because it requires absence data, which are rarely available or are unreliable. In order to use multiple logistic regression, many studies have simulated "pseudo-absences" through a number of strategies, but it is unknown how the choice of strategy influences models and their geographic predictions of species. In this paper we evaluate the effect of several prevailing pseudo-absence strategies on the predictions of the geographic distribution of a virtual species whose "true" distribution and relationship to three environmental predictors was predefined. We evaluated the effect of using (a) real absences, (b) pseudo-absences selected randomly from the background, and (c) two-step approaches: pseudo-absences selected from low-suitability areas predicted by either Ecological Niche Factor Analysis (ENFA) or BIOCLIM. We compared how the choice of pseudo-absence strategy affected model fit, predictive power, and information-theoretic model selection results. Results: Models built with true absences had the best predictive power, best discriminatory power, and the "true" model (the one that contained the correct predictors) was supported by the data according to AIC, as expected. Models based on random pseudo-absences had among the lowest fit, but yielded the second highest AUC value (0.97), and the "true" model was also supported by the data. Models based on two-step approaches had intermediate fit, the lowest predictive power, and the "true" model was not supported by the data. Conclusion: If ecologists wish to build parsimonious GLM models that will allow them to make robust predictions, a reasonable approach is to use a large number of randomly selected pseudo-absences and perform model selection based on an information-theoretic approach. However, the resulting models can be expected to have limited fit.
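A minimal sketch of strategy (b), random pseudo-absences, on a toy virtual species; the suitability function, sample sizes and predictors are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Virtual species: presence probability driven by two environmental predictors.
n_cells = 5000
env = rng.normal(size=(n_cells, 2))                     # e.g. temperature, rainfall
true_p = 1 / (1 + np.exp(-(1.5 * env[:, 0] - 1.0 * env[:, 1] - 1.0)))
occupied = rng.random(n_cells) < true_p                 # "true" distribution

# Presences sampled from occupied cells; pseudo-absences from random background.
pres_idx = rng.choice(np.flatnonzero(occupied), size=200, replace=False)
pa_idx = rng.choice(n_cells, size=1000, replace=False)  # random background

X = np.vstack([env[pres_idx], env[pa_idx]])
y = np.concatenate([np.ones(len(pres_idx)), np.zeros(len(pa_idx))])

glm = LogisticRegression().fit(X, y)
pred = glm.predict_proba(env)[:, 1]
print("AUC vs true distribution:", round(roc_auc_score(occupied, pred), 3))
```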

Relevance: 100.00%

Abstract:

In the context of Systems Biology, computer simulations of gene regulatory networks provide a powerful tool to validate hypotheses and to explore possible system behaviors. Nevertheless, modeling a system poses challenges of its own: in particular, the step of model calibration is often difficult due to insufficient data. For example, when considering developmental systems, mostly qualitative data describing the developmental trajectory are available, while common calibration techniques rely on high-resolution quantitative data. Focusing on the calibration of differential equation models for developmental systems, this study investigates different approaches to utilizing the available data to overcome these difficulties. More specifically, the fact that developmental processes are hierarchically organized is exploited to increase convergence rates of the calibration process as well as to save computation time. Using a gene regulatory network model for stem cell homeostasis in Arabidopsis thaliana, the performance of the different investigated approaches is evaluated, documenting considerable gains provided by the proposed hierarchical approach.
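The hierarchical idea — calibrate the parameters governing upstream developmental stages first, then freeze them while fitting downstream parameters — can be sketched on a toy two-gene cascade. The model, data and stage split below are hypothetical placeholders, not the Arabidopsis network of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def cascade(t, y, k1, d1, k2, d2):
    """Toy two-gene cascade: gene 1 activates gene 2."""
    g1, g2 = y
    return [k1 - d1 * g1, k2 * g1 - d2 * g2]

def simulate(params, t_eval):
    sol = solve_ivp(cascade, (0, t_eval[-1]), [0.0, 0.0],
                    t_eval=t_eval, args=tuple(params))
    return sol.y

t_obs = np.linspace(0, 10, 20)
true = (1.0, 0.5, 0.8, 0.3)
data = simulate(true, t_obs) + np.random.default_rng(4).normal(0, 0.02, (2, 20))

# Step 1: calibrate only the upstream parameters (k1, d1) against gene 1,
# whose dynamics do not depend on the downstream part of the hierarchy.
def loss1(p):
    return np.sum((simulate((*p, 1.0, 1.0), t_obs)[0] - data[0]) ** 2)
p_up = minimize(loss1, [0.5, 0.5], method="Nelder-Mead").x

# Step 2: freeze (k1, d1) and calibrate the downstream parameters (k2, d2).
def loss2(p):
    return np.sum((simulate((*p_up, *p), t_obs)[1] - data[1]) ** 2)
p_down = minimize(loss2, [0.5, 0.5], method="Nelder-Mead").x

print("upstream:", p_up.round(2), "downstream:", p_down.round(2))
```

Splitting the fit this way shrinks each search space from four dimensions to two, which is the kind of convergence gain the hierarchical approach exploits.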

Relevance: 100.00%

Abstract:

The development of model observers for mimicking human detection strategies has progressed from symmetric signals in simple noise to increasingly complex backgrounds. In this study we implement different model observers for the complex task of detecting a signal in a 3D image stack. The backgrounds come from real breast tomosynthesis acquisitions and the signals were simulated and reconstructed within the volume. Two different tasks relevant to the early detection of breast cancer were considered: detecting an 8 mm mass and detecting a cluster of microcalcifications. The model observers were calculated using a channelized Hotelling observer (CHO) with dense difference-of-Gaussian channels, and a modified partial-prewhitening (PPW) observer adapted to realistic signals that are not circularly symmetric. The sustained temporal sensitivity function was used to filter the images before applying the spatial templates. At a frame rate of five frames per second, the only CHO that we calculated performed worse than the humans in a 4-AFC experiment. The other observers were variations of PPW and outperformed human observers in every single case. This initial frame rate was rather slow, and at this speed the temporal filtering did not affect the results compared to a data set with no human temporal effects taken into account. We subsequently investigated two higher speeds, 15 and 30 frames per second. We observed that for large masses the two types of model observers investigated outperformed the human observers and would be suitable with the appropriate addition of internal noise. However, for microcalcifications only the PPW observer consistently outperformed the humans. The study demonstrated the possibility of using a model observer which takes into account the temporal effects of scrolling through an image stack while being able to effectively detect a range of mass sizes and distributions.
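The CHO computation itself is compact: project each image onto a small set of channels, then build a prewhitened Hotelling template from the channel statistics. The sketch below uses difference-of-Gaussian-style radial channels on synthetic noise; channel widths, signal and image size are placeholders rather than the study's settings.

```python
import numpy as np

rng = np.random.default_rng(5)
size = 64

# Difference-of-Gaussian radial channels (placeholder parameters).
yy, xx = np.mgrid[:size, :size] - size // 2
r = np.hypot(xx, yy)
sigmas = [2, 4, 8, 16]
channels = np.stack(
    [np.exp(-r**2 / (2 * s**2)) - np.exp(-r**2 / (2 * (1.66 * s)**2))
     for s in sigmas]).reshape(len(sigmas), -1)

# Synthetic training images: white noise, with/without a faint Gaussian signal.
signal = 0.4 * np.exp(-r**2 / (2 * 3.0**2)).ravel()
noise = rng.normal(size=(400, size * size))
v_sp = (noise[:200] + signal) @ channels.T          # signal-present channel outputs
v_sa = noise[200:] @ channels.T                     # signal-absent channel outputs

# Hotelling template in channel space: S^-1 (mean difference).
S = 0.5 * (np.cov(v_sp.T) + np.cov(v_sa.T))
dv = v_sp.mean(0) - v_sa.mean(0)
w = np.linalg.solve(S, dv)

# Detectability index d' from the template responses.
t_sp, t_sa = v_sp @ w, v_sa @ w
dprime = (t_sp.mean() - t_sa.mean()) / np.sqrt(0.5 * (t_sp.var() + t_sa.var()))
print(f"d' = {dprime:.2f}")
```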

Relevance: 100.00%

Abstract:

The bacterial insertion sequence IS21 shares with many insertion sequences a two-step, reactive junction transposition pathway, for which a model is presented in this review: a reactive junction with abutted inverted repeats is first formed and subsequently integrated into the target DNA. The reactive junction occurs in IS21-IS21 tandems and IS21 minicircles. In addition, IS21 shows a unique specialization of transposition functions. By alternative translation initiation, the transposase gene codes for two products: the transposase, capable of promoting both steps of the reactive junction pathway, and the cointegrase, which only promotes the integration of reactive junctions but with higher efficiency. This review also includes a survey of the IS21 family and speculates on the possibility that other members present a similar transpositional specialization.

Relevance: 100.00%

Abstract:

OBJECTIVES: We developed a population model that describes the ocular penetration and pharmacokinetics of penciclovir in human aqueous humour and plasma after oral administration of famciclovir. METHODS: Fifty-three patients undergoing cataract surgery received a single oral dose of 500 mg of famciclovir prior to surgery. Concentrations of penciclovir in both plasma and aqueous humour were measured by HPLC with fluorescence detection. Concentrations in plasma and aqueous humour were fitted using a two-compartment model (NONMEM software). Inter-individual and intra-individual variabilities were quantified, and the influence of demographic, physiopathological and environmental variables on penciclovir pharmacokinetics was explored. RESULTS: Drug concentrations were fitted using a two-compartment, open model with first-order transfer rates between the plasma and aqueous humour compartments. Among the tested covariates, creatinine clearance, co-intake of angiotensin-converting enzyme inhibitors and body weight significantly influenced penciclovir pharmacokinetics. Plasma clearance was 22.8 ± 9.1 L/h and clearance from the aqueous humour was 8.2 × 10⁻⁵ L/h. AUCs were 25.4 ± 10.2 and 6.6 ± 1.8 μg·h/mL in plasma and aqueous humour, respectively, yielding a penetration ratio of 0.28 ± 0.06. Simulated concentrations in the aqueous humour after administration of 500 mg of famciclovir three times daily were in the range of values required for 50% growth inhibition of non-resistant strains of the herpes zoster virus family. CONCLUSIONS: Plasma and aqueous penciclovir concentrations showed significant variability that could only be partially explained by renal function, body weight and comedication. Concentrations in the aqueous humour were much lower than in plasma, suggesting that factors in the blood-aqueous humour barrier might prevent ocular penetration or that redistribution occurs into other ocular compartments.
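The structure of such a model can be sketched as two compartments linked by transfer clearances. In the toy simulation below, only the plasma clearance (22.8 L/h) and the aqueous clearance (8.2 × 10⁻⁵ L/h) come from the abstract; the plasma volume, absorption rate, dose conversion and the plasma-to-aqueous transfer clearance (chosen to reproduce the reported 0.28 penetration ratio at steady state) are assumed placeholders, so this illustrates the model structure, not the NONMEM fit.

```python
import numpy as np
from scipy.integrate import solve_ivp

# CL and CL_aq are from the abstract; the remaining values are assumed.
CL = 22.8                   # plasma clearance (L/h)
CL_aq = 8.2e-5              # clearance from aqueous humour (L/h)
Q_in = 0.28 * CL_aq         # plasma->aqueous transfer clearance, chosen so that
                            # AUC_aq / AUC_plasma -> 0.28 (reported ratio)
V_p, V_aq = 50.0, 3e-4      # volumes (L): plasma assumed, aqueous ~0.3 mL
ka = 1.5                    # first-order absorption rate (1/h), assumed
dose = 500 * 0.79           # 500 mg famciclovir -> penciclovir (assumed fraction)

def pk(t, y):
    gut, a_p, a_aq = y                      # amounts (mg)
    c_p, c_aq = a_p / V_p, a_aq / V_aq      # concentrations (mg/L = ug/mL)
    return [-ka * gut,
            ka * gut - CL * c_p - Q_in * c_p + CL_aq * c_aq,
            Q_in * c_p - CL_aq * c_aq]

t = np.linspace(0, 24, 400)
sol = solve_ivp(pk, (0, 24), [dose, 0.0, 0.0], t_eval=t, method="LSODA")

auc_p = np.trapz(sol.y[1] / V_p, t)
auc_aq = np.trapz(sol.y[2] / V_aq, t)
print(f"24 h AUC ratio aqueous/plasma ~ {auc_aq / auc_p:.2f}")
```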

Relevance: 100.00%

Abstract:

The purpose of this article is to analyse the conditions under which referendum campaigns have an impact on voting choices. Based on a model of opinion formation that integrates both campaign effects and partisan effects, we argue that campaign effects vary according to the context of the popular vote (size and type of conflict among the party elite, and intensity and direction of the referendum campaign). We test our hypotheses with two-step estimations for hierarchical models on data covering 25 popular votes on foreign, European and immigration policy in Switzerland. Our results show strong campaign effects and suggest that their strength and nature are indeed highly conditional on the context of the vote: the type of party coalition pre-structures the patterns of individual voting choices, campaign effects are stronger when the campaign is highly intense, and they are more symmetric when the campaign is balanced.
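Generically, a two-step estimation for hierarchical data fits an individual-level model within each context and then regresses the context-specific coefficients on context-level variables. The sketch below illustrates this with simulated votes and a single context variable (campaign intensity); it is not the authors' specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n_votes, n_resp = 25, 400

campaign_effects = []
intensity = rng.uniform(0, 1, n_votes)         # context variable per popular vote
for j in range(n_votes):
    true_b = 0.5 + 2.0 * intensity[j]          # campaign effect grows with intensity
    exposure = rng.normal(size=n_resp)         # individual campaign exposure
    p = 1 / (1 + np.exp(-true_b * exposure))
    vote = (rng.random(n_resp) < p).astype(float)
    # Step 1: vote-specific logit of individual choice on campaign exposure.
    fit = sm.Logit(vote, sm.add_constant(exposure)).fit(disp=0)
    campaign_effects.append(fit.params[1])

# Step 2: regress the estimated effects on the context-level variable.
step2 = sm.OLS(np.array(campaign_effects), sm.add_constant(intensity)).fit()
print(step2.params)   # slope ~ 2: campaign effects increase with intensity
```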

Relevance: 100.00%

Abstract:

This paper examines the application of the guidelines for evidence-based treatments in family therapy developed by Sexton and collaborators to a set of treatment models. These guidelines classify the models using criteria that take into account the distinctive features of couple and family treatments. A two-step approach was taken: (1) the quality of each of the studies supporting the treatment models was assessed according to a list of ad hoc core criteria; (2) the level of evidence of each treatment model was determined using the guidelines. To reflect the stages of empirical validation present in the literature, nine models were selected: three models each with high, moderate, and low levels of empirical validation, determined by the number of randomized clinical trials (RCTs). The quality ratings highlighted the strengths and limitations of each of the studies that provided evidence backing the treatment models. The classification by level of evidence indicated that four of the models were level III, "evidence-based" treatments; one was a level II, "evidence-informed treatment with promising preliminary evidence-based results"; and four were level I, "evidence-informed" treatments. Using the guidelines helped identify treatments that are solid in terms of not only the number of RCTs but also the quality of the evidence supporting the efficacy of a given treatment. From a research perspective, this analysis highlighted areas to be addressed before some models can move up to a higher level of evidence. From a clinical perspective, the guidelines can help identify the models whose studies have produced clinically relevant results.

Relevance: 100.00%

Abstract:

The T-cell receptor (TCR) interaction with antigenic peptides (p) presented by the major histocompatibility complex (MHC) molecule is a key determinant of the immune response. In addition, TCR-pMHC interactions offer examples of features more generally pertaining to protein-protein recognition: subtle specificity and cross-reactivity. Despite their importance, the molecular details determining TCR-pMHC binding remain unresolved. Molecular simulation, however, provides the opportunity to investigate some of these aspects. In this study, we perform extensive equilibrium and steered molecular dynamics simulations to study the unbinding of three TCR-pMHC complexes. As a function of the dissociation reaction coordinate, we are able to obtain converged H-bond counts and energy decompositions at different levels of detail, ranging from the full proteins, to separate residues and water molecules, down to single atoms at the interface. Many observed features do not support a previously proposed two-step model for TCR recognition. Our results also provide keys to interpreting experimental point-mutation results. We highlight the role of water both in terms of interface resolvation and of water molecules trapped in the bound complex. Importantly, we illustrate how two TCRs with similar reactivity and structures can have essentially different binding strategies.
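One standard steered-MD quantity along such a dissociation coordinate is the accumulated pulling work, W(x) = ∫ F dx. The sketch below integrates a synthetic force profile as a placeholder for forces extracted from SMD output; it is purely illustrative and not derived from the study's trajectories.

```python
import numpy as np

# Synthetic pulling-force profile along the dissociation coordinate (nm):
# a placeholder for forces extracted from steered-MD output files.
x = np.linspace(0.0, 3.0, 300)
force = (400 * np.exp(-((x - 0.8) / 0.4) ** 2)
         - 50 * np.exp(-((x - 1.8) / 0.3) ** 2))

# Accumulated work W(x) = integral of F dx, via a cumulative trapezoidal rule.
dx = np.diff(x)
w = np.concatenate([[0.0], np.cumsum(0.5 * (force[1:] + force[:-1]) * dx)])

peak = x[np.argmax(force)]
print(f"rupture-force peak at x = {peak:.2f} nm, total work = {w[-1]:.0f} pN*nm")
```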

Relevance: 100.00%

Abstract:

In this work we present a method for the image analysis of Magnetic Resonance Imaging (MRI) of fetuses. Our goal is to segment the brain surface from multiple volumes (axial, coronal and sagittal acquisitions) of a fetus. To this end we propose a two-step approach: first, a Finite Gaussian Mixture Model (FGMM) will segment the image into 3 classes: brain, non-brain and mixture voxels. Second, a Markov Random Field scheme will be applied to re-distribute mixture voxels into either brain or non-brain tissue. Our main contributions are an adapted energy computation and an extended neighborhood from multiple volumes in the MRF step. Preliminary results on four fetuses of different gestational ages will be shown.
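A minimal 2-D sketch of this two-step idea: a Gaussian mixture supplies class posteriors, then an ICM-style Markov Random Field pass re-decides the mixture voxels from their data term plus neighbor agreement. The synthetic image, the smoothness weight beta and the single-volume neighborhood are placeholders; the paper's adapted energy and multi-volume neighborhood are not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

# Synthetic slice: a bright "brain" square on a darker background, plus noise.
truth = np.zeros((64, 64), dtype=int)
truth[16:48, 16:48] = 1
img = np.where(truth == 1, 0.7, 0.3) + rng.normal(0, 0.12, truth.shape)

# Step 1: fit a 3-class Gaussian mixture (brain / non-brain / mixture voxels).
gmm = GaussianMixture(n_components=3, random_state=0).fit(img.reshape(-1, 1))
post = gmm.predict_proba(img.reshape(-1, 1)).reshape(64, 64, 3)
non_brain, mixture, brain = np.argsort(gmm.means_.ravel())  # by mean intensity

labels = np.where(post[..., brain] > post[..., non_brain], 1, 0)
is_mix = post.argmax(-1) == mixture

# Step 2: ICM-style MRF pass, re-deciding mixture voxels from data + neighbors.
beta = 1.0                                   # smoothness weight (assumed)
for _ in range(5):
    for i, j in zip(*np.nonzero(is_mix)):
        nb = labels[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
        n_total = nb.size - 1                # neighbors, excluding the voxel itself
        n_one = nb.sum() - labels[i, j]      # neighbors currently labeled brain
        cost0 = -np.log(post[i, j, non_brain] + 1e-9) + beta * n_one
        cost1 = -np.log(post[i, j, brain] + 1e-9) + beta * (n_total - n_one)
        labels[i, j] = int(cost1 < cost0)    # keep the lower-energy label

print("agreement with truth:", (labels == truth).mean())
```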

Relevance: 100.00%

Abstract:

The present research deals with an application of artificial neural networks to multitask learning from spatial environmental data. The real case study (sediment contamination of Geneva Lake) consists of 8 pollutants. There are different relationships between these variables, from linear correlations to strong nonlinear dependencies. The main idea is to construct subsets of pollutants which can be efficiently modeled together within the multitask framework. The proposed two-step approach is based on: (1) a criterion of nonlinear predictability of each variable 'k', obtained by analyzing all possible models composed from the rest of the variables, using a General Regression Neural Network (GRNN) as the model; (2) multitask learning of the best model using a multilayer perceptron, and spatial predictions. The results of the study are analyzed using both machine learning and geostatistical tools.
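Since a GRNN is essentially Nadaraya-Watson kernel regression, the step-1 predictability criterion can be sketched in a few lines; the toy variables and kernel bandwidth below stand in for the Geneva Lake pollutants.

```python
import numpy as np

rng = np.random.default_rng(8)

def grnn_predict(X_train, y_train, X_query, sigma):
    """GRNN = Nadaraya-Watson kernel regression with a Gaussian kernel."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Toy "pollutants": the target is a nonlinear function of two other variables.
X = rng.normal(size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 300)

# Step-1 flavour: leave-one-out predictability of y from the other variables.
preds = np.empty_like(y)
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    preds[i] = grnn_predict(X[mask], y[mask], X[i:i + 1], sigma=0.3)[0]

r2 = 1 - np.sum((y - preds) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"LOO R^2 = {r2:.2f}  (high -> good candidate for joint multitask modeling)")
```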

Relevance: 100.00%

Abstract:

We present a novel and straightforward method for estimating recent migration rates between discrete populations using multilocus genotype data. The approach builds upon a two-step sampling design, where individual genotypes are sampled before and after dispersal. We develop a model that estimates all pairwise backwards migration rates (m(ij), the probability that an individual sampled in population i is a migrant from population j) between a set of populations. The method is validated with simulated data and compared with the methods of BayesAss and Structure. First, we use data for an island model, and then we consider more realistic data simulations for a metapopulation of the greater white-toothed shrew (Crocidura russula). We show that the precision and bias of estimates primarily depend upon the proportion of individuals sampled in each population. Weak sampling designs may particularly affect the quality of the coverage provided by 95% highest posterior density intervals. We further show that the method is relatively insensitive to the number of loci sampled and to the overall strength of genetic structure. The method can easily be extended and makes fewer assumptions about the underlying demographic and genetic processes than currently available methods. It allows backwards migration rates to be estimated across a wide range of realistic conditions.
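A stripped-down flavour of the estimation problem: with pre-dispersal samples fixing each source population's allele frequencies, the backwards migration rates for a post-dispersal sample are mixture weights that can be recovered by EM. The sketch below handles one receiving population with two sources and biallelic loci; all parameters are hypothetical, and the paper's full Bayesian machinery (and its joint treatment of all populations) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(9)

# Two source populations, L biallelic loci, with distinct allele frequencies.
L, n_post = 100, 200
freqs = rng.uniform(0.1, 0.9, size=(2, L))      # "pre-dispersal" frequencies
true_m = np.array([0.9, 0.1])                   # m(0j): P(origin j | sampled in pop 0)

# Post-dispersal sample in population 0: a mixture of residents and migrants.
origins = rng.choice(2, size=n_post, p=true_m)
genotypes = rng.binomial(2, freqs[origins])     # diploid genotypes (0/1/2)

# Log-likelihood of each genotype under each source (Hardy-Weinberg; the
# binomial coefficient is identical across sources and cancels in the EM).
logL = np.zeros((n_post, 2))
for j in range(2):
    p = freqs[j]
    logL[:, j] = (genotypes * np.log(p) + (2 - genotypes) * np.log(1 - p)).sum(1)

# EM for the mixture weights = backwards migration rates m(0j).
m = np.array([0.5, 0.5])
for _ in range(100):
    resp = m * np.exp(logL - logL.max(1, keepdims=True))  # responsibilities
    resp /= resp.sum(1, keepdims=True)
    m = resp.mean(0)

print("estimated m(0j):", m.round(3), " true:", true_m)
```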