883 results for spatiotemporal epidemic prediction model
Abstract:
In recent years the resolution of numerical weather prediction (NWP) models has steadily increased with advances in technology and knowledge. As a consequence, a large amount of initial data has become fundamental for correct initialization of the models. The potential of radar observations for improving the initial conditions of high-resolution NWP models has long been recognized, and operational application is becoming more frequent. The fact that many NWP centres have recently put convection-permitting forecast models into operation, many of which assimilate radar data, emphasizes the need for an approach to providing quality information, which is needed to prevent radar errors from degrading the model's initial conditions and, therefore, its forecasts. Environmental risks can be related to various causes: meteorological, seismic, hydrological/hydraulic. Flash floods have a horizontal dimension of 1-20 km and fall within the mesoscale gamma subscale; this scale can be modeled only with NWP models of the highest resolution, such as the COSMO-2 model. One of the problems of modeling extreme convective events concerns the atmospheric initial conditions: the scale at which atmospheric conditions are assimilated into a high-resolution model is about 10 km, a value too coarse for a correct representation of convective initial conditions. Assimilation of radar data, with its resolution of about 1 km every 5 or 10 minutes, can be a solution to this problem. In this contribution a pragmatic and empirical approach to deriving a radar data quality description is proposed, to be used in radar data assimilation and more specifically in the latent heat nudging (LHN) scheme. The convective capabilities of the COSMO-2 model are then investigated through some case studies. Finally, this work presents some preliminary experiments coupling a high-resolution meteorological model with a hydrological one.
Abstract:
The determination of skeletal loading conditions in vivo, and their relationship to the health of bone tissues, remains an open question. Computational modeling of the musculoskeletal system is the only practicable method providing a valuable approach to muscle and joint loading analyses, although crucial shortcomings limit the translation of computational methods into orthopedic and neurological practice. Growing attention has focused on subject-specific modeling, particularly when pathological musculoskeletal conditions need to be studied. Nevertheless, subject-specific data cannot always be collected in research and clinical practice, and there is a lack of efficient methods and frameworks for building models and incorporating them in simulations of motion. The overall aim of the present PhD thesis was to introduce improvements to state-of-the-art musculoskeletal modeling for the prediction of physiological muscle and joint loads during motion. A threefold goal was articulated as follows: (i) develop state-of-the-art subject-specific models and analyze skeletal load predictions; (ii) analyze the sensitivity of model predictions to relevant musculotendon model parameters and kinematic uncertainties; (iii) design an efficient software framework simplifying the effort-intensive pre-processing phases of subject-specific modeling. The first goal underlined the relevance of subject-specific musculoskeletal modeling for determining physiological skeletal loads during gait, corroborating the choice of full subject-specific modeling for the analysis of pathological conditions. The second goal characterized the sensitivity of skeletal load predictions to major musculotendon parameters and kinematic uncertainties, and robust probabilistic methods were applied for methodological and clinical purposes. The last goal produced an efficient software framework for subject-specific modeling and simulation, which is practical, user friendly and effort-effective.
Future research development aims at the implementation of more accurate models describing lower-limb joint mechanics and musculotendon paths, and the assessment of an overall scenario of the crucial model parameters affecting the skeletal load predictions through probabilistic modeling.
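Goal (ii) above, the sensitivity of load predictions to musculotendon parameter uncertainty, can be illustrated with a minimal Monte Carlo sketch. The toy joint-load function, parameter names, and uncertainty levels below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

# Hypothetical toy model: joint load as a function of two musculotendon
# parameters (maximum isometric force F_max and tendon slack length L_s).
# The model form and all numbers are illustrative only.
def joint_load(f_max, l_slack):
    return 2.8 * f_max / (1.0 + 5.0 * l_slack)

rng = np.random.default_rng(0)
n = 10_000
# Sample parameters with roughly +/-10% uncertainty around nominal values.
f_max = rng.normal(1000.0, 100.0, n)   # N
l_slack = rng.normal(0.20, 0.02, n)    # m

loads = joint_load(f_max, l_slack)
print(f"mean load: {loads.mean():.0f} N, CV: {loads.std() / loads.mean():.2%}")
```

The coefficient of variation of the output, relative to the input uncertainties, is the simplest sensitivity summary; probabilistic frameworks refine this with variance decomposition per parameter.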
Abstract:
The development of a multibody model of a motorbike engine cranktrain is presented in this work, with an emphasis on flexible component model reduction. A modelling methodology based upon the adoption of non-ideal joints at interface locations, and the inclusion of component flexibility, is developed: both are necessary if one wants to capture the dynamic effects which arise in lightweight, high-speed applications. With regard to the first topic, both a ball bearing model and a journal bearing model are implemented in order to properly capture the dynamic effects of the main connections in the system: angular contact ball bearings are modelled according to a five-DOF nonlinear scheme in order to grasp the crankshaft main bearing behaviour, while an impedance-based hydrodynamic bearing model is implemented to provide an enhanced operation prediction at the conrod big-end locations. Concerning the second matter, flexible models of the crankshaft and the connecting rod are produced. The well-established Craig-Bampton reduction technique is adopted as a general framework to obtain reduced model representations which are suitable for the subsequent multibody analyses. A particular component mode selection procedure is implemented, based on the concept of Effective Interface Mass, allowing an assessment of the accuracy of the reduced models prior to the nonlinear simulation phase. In addition, a procedure to alleviate the effects of modal truncation, based on the Modal Truncation Augmentation approach, is developed. In order to assess the performance of the proposed modal reduction schemes, numerical tests are performed on the crankshaft and conrod models in both the frequency and modal domains. A multibody model of the cranktrain is eventually assembled and simulated using commercial software. Numerical results are presented, demonstrating the effectiveness of the implemented flexible model reduction techniques.
The advantages over the conventional frequency-based truncation approach are discussed.
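As a rough illustration of the Craig-Bampton idea referred to above, the following sketch reduces a small spring-mass system by retaining interface DOFs plus a few fixed-interface normal modes. This is a generic textbook implementation, not the thesis code:

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, boundary, n_modes):
    """Reduce (K, M) with the Craig-Bampton method.

    boundary: indices of interface DOFs kept physically;
    n_modes: number of fixed-interface normal modes retained.
    """
    ndof = K.shape[0]
    interior = np.setdiff1d(np.arange(ndof), boundary)
    Kii = K[np.ix_(interior, interior)]
    Kib = K[np.ix_(interior, boundary)]
    Mii = M[np.ix_(interior, interior)]

    # Static constraint modes: interior response to unit boundary motion.
    Psi = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes of the interior partition.
    _, Phi = eigh(Kii, Mii)
    Phi = Phi[:, :n_modes]

    # Assemble the transformation matrix in (boundary, interior) ordering.
    nb = len(boundary)
    T = np.zeros((ndof, nb + n_modes))
    T[boundary, :nb] = np.eye(nb)
    T[np.ix_(interior, range(nb))] = Psi
    T[np.ix_(interior, range(nb, nb + n_modes))] = Phi
    return T.T @ K @ T, T.T @ M @ T

# Usage: a 5-DOF fixed-fixed spring-mass chain reduced to
# 1 boundary DOF plus 2 fixed-interface modes.
n = 5
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
Kr, Mr = craig_bampton(K, M, boundary=np.array([0]), n_modes=2)
print(Kr.shape)  # (3, 3)
```

Because the reduction is a Rayleigh-Ritz projection, the reduced eigenvalues bound the full ones from above and converge as more fixed-interface modes are retained, which is what mode selection criteria such as Effective Interface Mass exploit.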
Abstract:
Resolution of infections with Leishmania major depends on the secretion of IFN-γ by both CD4+ and CD8+ T cells. To date, only one epitope from the parasite's LACK protein has been described in the literature as driving an effective CD4+ T cell-mediated immune response. The aim of the present work was therefore to investigate possible MHC class I-dependent CD8+ T cell responses. To this end, the effect of vaccination with LACK protein fused to the protein transduction domain of HIV-1 (TAT) was analyzed first. The efficacy of TAT-LACK with respect to CD8+ T cells was demonstrated by in vivo protein vaccination of resistant C57BL/6 mice in depletion experiments. Processing of proteins prior to the presentation of immunogenic peptides to T cells is strictly required. Therefore, the role of the IFN-γ-inducible immunoproteasome in the processing of parasite proteins and the presentation of peptides bound to MHC class I molecules was investigated in this work by in vivo and in vitro experiments. An immunoproteasome-independent processing could be demonstrated. Furthermore, parasite lysate (SLA) from both promastigotes and amastigotes was fractionated; in follow-up experiments these fractions can be screened for immunodominant proteins/peptides. Finally, epitope predictions for CD8+ T cells were performed for both parasite life stages using computer-based software. Three hundred of these epitopes were synthesized and will be used in further experiments to characterize their immunogenic properties. Taken together, the present work contributes substantially to the understanding of the complex processing mechanisms and, ultimately, to the identification of possible CD8+ T cell epitopes.
A detailed understanding of the processing of CD8+ T cell epitopes of Leishmania major via the MHC class I pathway is of utmost importance. The characterization and identification of these peptides will have a decisive influence on further vaccine development against this important human-pathogenic parasite.
Abstract:
Spatial prediction of hourly rainfall via radar calibration is addressed. The change of support problem (COSP), which arises when the spatial supports of different data sources do not coincide, is faced in a non-Gaussian setting; in fact, hourly rainfall in the Emilia-Romagna region of Italy is characterized by an abundance of zero values and right-skewness of the distribution of positive amounts. Rain gauge direct measurements at sparsely distributed locations and hourly cumulated radar grids are provided by ARPA-SIMC Emilia-Romagna. We propose a three-stage Bayesian hierarchical model for radar calibration, exploiting rain gauges as the reference measure. Rain probability and amounts are modeled via linear relationships with radar in the log scale; spatially correlated Gaussian effects capture the residual information. We employ a probit link for rainfall probability and a Gamma distribution for positive rainfall amounts; the two steps are joined via a two-part semicontinuous model. Three model specifications differently addressing the COSP are presented; in particular, a stochastic weighting of all radar pixels, driven by a latent Gaussian process defined on the grid, is employed. Estimation is performed via MCMC procedures implemented in C, linked to the R software. Communication and evaluation of probabilistic, point, and interval predictions are investigated. A non-randomized PIT histogram is proposed for correctly assessing calibration and coverage of two-part semicontinuous models. Predictions obtained with the different model specifications are evaluated via graphical tools (reliability plot, sharpness histogram, PIT histogram, Brier score plot and quantile decomposition plot), proper scoring rules (Brier score, continuous ranked probability score) and consistent scoring functions (root mean square error and mean absolute error, addressing the predictive mean and median, respectively).
Calibration is achieved, and the inclusion of neighbouring information slightly improves predictions. All specifications outperform a benchmark model with uncorrelated effects, confirming the relevance of spatial correlation for modeling rainfall probability and accumulation.
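A minimal sketch of the two-part semicontinuous idea described above (probit occurrence plus Gamma amounts) is given below; all coefficients are made up for illustration and are not the fitted posterior values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical two-part semicontinuous rain model: occurrence via a probit
# link on log-radar, positive amounts via a Gamma model with a log link.
# Coefficients a0, a1, b0, b1 and the shape are illustrative only.
def simulate_rain(log_radar, a0=-0.5, a1=0.8, b0=0.1, b1=0.7, shape=2.0):
    p_rain = stats.norm.cdf(a0 + a1 * log_radar)   # probit occurrence
    wet = rng.random(log_radar.size) < p_rain
    mean_amt = np.exp(b0 + b1 * log_radar)         # Gamma mean (log link)
    amounts = rng.gamma(shape, mean_amt / shape)
    return np.where(wet, amounts, 0.0), p_rain

log_radar = rng.normal(0.0, 1.0, 5000)
rain, p_rain = simulate_rain(log_radar)

# Brier score for the occurrence part (lower is better).
brier = np.mean((p_rain - (rain > 0)) ** 2)
print(f"zero fraction: {(rain == 0).mean():.2f}, Brier: {brier:.3f}")
```

The same simulated structure is what the PIT histogram and proper scoring rules mentioned above are designed to evaluate: the zero mass is handled by the occurrence part, the skewed positive amounts by the Gamma part.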
Abstract:
Microemulsions are thermodynamically stable, macroscopically homogeneous but microscopically heterogeneous mixtures of water and oil stabilised by surfactant molecules. They have unique properties such as ultralow interfacial tension, large interfacial area and the ability to solubilise other immiscible liquids. Depending on the temperature and concentration, non-ionic surfactants self-assemble into micelles, flat lamellar, hexagonal and sponge-like bicontinuous morphologies. Microemulsions exhibit three different macroscopic phase behaviours: (a) a one-phase microemulsion (isotropic); (b) a two-phase system, in which the microemulsion coexists with either expelled water or expelled oil; and (c) a three-phase system, in which the microemulsion coexists with both expelled water and expelled oil.
One of the most important fundamental questions in this field is the relation between the properties of the surfactant monolayer at the water-oil interface and those of the microemulsion. This monolayer forms an extended interface whose local curvature determines the structure of the microemulsion. The main part of my thesis deals with quantitative measurements of the temperature-induced phase transitions of water-oil-nonionic microemulsions and their interpretation using the temperature-dependent spontaneous curvature [c0(T)] of the surfactant monolayer. In the one-phase region, conservation of the components determines the droplet (domain) size (R), whereas in the two-phase region it is determined by the temperature dependence of c0(T); the Helfrich bending free energy density captures this dependence of the droplet size on c0(T).
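The standard textbook form of this bending energy (the thesis's exact notation may differ) for a spherical droplet of radius R is:

```latex
f_{\text{bend}} = 2\kappa \left(\frac{1}{R} - c_0(T)\right)^{2} + \frac{\bar{\kappa}}{R^{2}},
```

where \kappa is the bending rigidity and \bar{\kappa} the saddle-splay modulus; minimizing f_bend drives R toward 1/c_0(T), which is how a temperature-dependent spontaneous curvature sets the equilibrium droplet size in the two-phase region.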
Abstract:
This study aims at a comprehensive understanding of aerosol-cloud interactions and their effects on cloud properties and climate using the chemistry-climate model EMAC. In this study, CCN activation is regarded as the dominant driver in aerosol-cloud feedback loops in warm clouds. The CCN activation is calculated prognostically using two different cloud droplet nucleation (CDN) parameterizations, the STN and HYB schemes. Both CDN schemes account for size and chemistry effects on droplet formation based on the same aerosol properties; the calculation of the solute effect (hygroscopicity) is the main difference between them. The kappa-method is for the first time incorporated into the Abdul-Razzak and Ghan (ARG) activation scheme to calculate the hygroscopicity and critical supersaturation of aerosols (HYB), and the performance of the modified scheme is compared with the osmotic coefficient model (STN), which is the standard in the ARG scheme. Reference simulations (REF) with a prescribed cloud droplet number concentration have also been carried out in order to understand the effects of aerosol-cloud feedbacks. In addition, since the calculated cloud cover is an important determinant of cloud radiative effects and influences the nucleation process, two cloud cover parameterizations (a relative humidity threshold, RH-CLC, and a statistical cloud cover scheme, ST-CLC) have been examined together with the CDN schemes, and their effects on the simulated cloud properties and relevant climate parameters have been investigated. The distinct cloud droplet spectra show strong sensitivity to aerosol composition effects on cloud droplet formation in all particle sizes, especially for the Aitken mode.
As Aitken particles are the major component of the total aerosol number concentration and CCN, and are the most sensitive to the aerosol chemical composition (solute) effect on droplet formation, the activation of Aitken particles contributes strongly to total cloud droplet formation and thereby provides different cloud droplet spectra. These different spectra influence cloud structure, cloud properties, and climate, and show regionally varying sensitivity to meteorological and geographical conditions as well as to spatiotemporal aerosol properties (i.e., particle size, number, and composition). The changes in response to the different CDN schemes are more pronounced at lower altitudes than at higher altitudes. Among regions, the subarctic regions show the strongest changes, as the lower surface temperature amplifies the effects of the activated aerosols; in contrast, the Sahara desert, an extremely dry area, is less influenced by changes in CCN number concentration. The aerosol-cloud coupling effects have been examined by comparing the prognostic CDN simulations (STN, HYB) with the reference simulation (REF). The most pronounced effects are found in the cloud droplet number concentration, cloud water distribution, and cloud radiative effect. The aerosol-cloud coupling generally increases the cloud droplet number concentration; this decreases the efficiency of the formation of weak stratiform precipitation and increases the cloud water loading. These large-scale changes lead to larger cloud cover and longer cloud lifetime, and contribute to high optical thickness and strong cloud cooling effects. This cools the Earth's surface, increases atmospheric stability, and reduces convective activity. These changes corresponding to aerosol-cloud feedbacks are also simulated differently depending on the cloud cover scheme.
The ST-CLC scheme is more sensitive to aerosol-cloud coupling, since it links local dynamics and cloud water distributions more tightly in the cloud formation process than the RH-CLC scheme does. For the calculated total cloud cover, the RH-CLC scheme simulates a pattern more similar to observations than the ST-CLC scheme does, but the overall properties (e.g., total cloud cover, cloud water content) in the RH simulations are overestimated, particularly over the ocean. This originates mainly from the difference in simulated skewness in each scheme: the RH simulations calculate negatively skewed distributions of cloud cover and the relevant cloud water, similar to the observations, while the ST simulations yield positively skewed distributions, resulting in lower mean values than in the RH-CLC scheme. The underestimation of total cloud cover over the ocean, particularly over the intertropical convergence zone (ITCZ), relates to a systematic deficiency of the prognostic calculation of skewness in the current set-up of the ST-CLC scheme.
Overall, the current EMAC model set-ups perform better over continents for all combinations of the cloud droplet nucleation and cloud cover schemes. For considering aerosol-cloud feedbacks, the HYB scheme is a better method for predicting cloud and climate parameters than the STN scheme for both cloud cover schemes. The RH-CLC scheme offers a better simulation of total cloud cover and the relevant parameters with the HYB scheme and single-moment microphysics (REF) than the ST-CLC scheme does, but is not very sensitive to aerosol-cloud interactions.
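The hygroscopicity (kappa) method mentioned above links aerosol dry size and composition to the critical supersaturation for activation. A standard kappa-Koehler approximation (in the style of Petters and Kreidenweis, not the EMAC implementation) can be sketched as:

```python
import numpy as np

# kappa-Koehler estimate of critical supersaturation (approximation valid
# for kappa >~ 0.2); water properties taken near 298 K.
def critical_supersaturation(d_dry, kappa, T=298.15):
    sigma_w = 0.072   # J m^-2, surface tension of water
    M_w = 0.018015    # kg mol^-1, molar mass of water
    rho_w = 997.0     # kg m^-3, density of water
    R = 8.314         # J mol^-1 K^-1
    A = 4.0 * sigma_w * M_w / (R * T * rho_w)  # Kelvin parameter (m)
    return np.sqrt(4.0 * A**3 / (27.0 * kappa * d_dry**3))  # as a fraction

# Aitken-mode ammonium-sulfate-like particle (kappa ~ 0.61), 50 nm dry diameter.
s_c = critical_supersaturation(50e-9, 0.61)
print(f"critical supersaturation: {100 * s_c:.2f}%")
```

The strong inverse dependence on dry diameter and on kappa is why Aitken-mode particles are the most sensitive to the solute effect, as noted above.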
Abstract:
Biorelevant media have been developed to mimic the conditions in the gastrointestinal tract before and after a meal. With FaSSIF and FeSSIF, media were introduced that not only reflect the pH and buffer capacity of the small intestine but also contain lipids and physiological surfactant species. These media (FaSSIF-V2 and FaSSIFmod6.5) have been continuously refined over the years for bioavailability studies in drug development. Nevertheless, the commercially available media are still not able to simulate the real physiological conditions: in their current composition they do not contain all the components that naturally occur in the duodenum. Moreover, only a 1:5 dilution from FeSSIF to FaSSIF is assumed, which only crudely simulates the individual water intake accompanying drug administration, although this can vary from patient to patient.
The aim of this dissertation was to improve the prediction of the dissolution and absorption of lipophilic drugs by simulating the conditions in the second part of the duodenum with new biorelevant media, as well as under the influence of additional detergents as drug carriers.
To study the effect of dilution rate and residence time in the small intestine, the evolution of nanoparticles in the gastrointestinal fluid FaSSIFmod6.5 was examined at different time points and water contents. For this purpose, kinetic studies were performed on model media of different concentrations after a dilution jump. The model corresponds to the mixing of bile with the intestinal contents at variable volume. The results show that the type and size of the nanoparticles depend strongly on dilution and exposure time.
Human intestinal fluid contains cholesterol, which is missing from all earlier model media. Therefore, biocompatible and physiological model fluids, FaSSIF-C, were developed.
The cholesterol content of FaSSIF-7C corresponds to the bile of a healthy woman, FaSSIF-10C to that of a healthy man, and FaSSIF-13C to that found in some disease states. Investigation of the intestinal particle structure by dynamic light scattering (DLS) and small-angle neutron scattering (SANS) showed that the vesicle size decreased with increasing cholesterol concentration. Excessively high cholesterol concentrations additionally produced very large particles, presumably consisting of cholesterol-rich "disks". The solubilities of several BCS class II drugs (fenofibrate, griseofulvin, carbamazepine, danazol) in these new media showed that solubility correlated with cholesterol content in different ways, and this effect was drug-specific.
Furthermore, the effect of several surfactants on the colloidal structure and the solubility of fenofibrate in FaSSIFmod6.5 and FaSSIF-7C was investigated. Structure and solubility were surfactant- and concentration-dependent. In the case of FaSSIFmod6.5, the results showed a three-way branching of the solubilization pathways: at intermediate surfactant concentrations, a solubility gap for the drug was observed between the destruction of the bile liposomes and the formation of surfactant-rich micelles. In FaSSIF-7C, surfactants at higher concentrations destroyed the liposome structure despite the general stabilization of the membranes by cholesterol.
The results presented in this work show that the presence of cholesterol, as a previously missing component of human intestinal fluid, is important in biorelevant media and can help to better predict the in vivo behavior of poorly soluble drugs in the body. The degree of dilution influences the nanoparticle structure, and surfactants influence drug solubility in biorelevant media; this effect depends on both the concentration and the type of surfactant.
Abstract:
We present a geospatial model to predict the radiofrequency electromagnetic field from fixed site transmitters for use in epidemiological exposure assessment. The proposed model extends an existing model toward the prediction of indoor exposure, that is, at the homes of potential study participants. The model is based on accurate operation parameters of all stationary transmitters of mobile communication base stations, and radio broadcast and television transmitters for an extended urban and suburban region in the Basel area (Switzerland). The model was evaluated by calculating Spearman rank correlations and weighted Cohen's kappa (kappa) statistics between the model predictions and measurements obtained at street level, in the homes of volunteers, and in front of the windows of these homes. The correlation coefficients of the numerical predictions with street level measurements were 0.64, with indoor measurements 0.66, and with window measurements 0.67. The kappa coefficients were 0.48 (95%-confidence interval: 0.35-0.61) for street level measurements, 0.44 (95%-CI: 0.32-0.57) for indoor measurements, and 0.53 (95%-CI: 0.42-0.65) for window measurements. Although the modeling of shielding effects by walls and roofs requires considerable simplifications of a complex environment, we found a comparable accuracy of the model for indoor and outdoor points.
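The two evaluation statistics used above can be reproduced with standard tools. The data below are synthetic, and the tertile categorization is an assumption for illustration:

```python
import numpy as np
from scipy import stats

# Synthetic "measured" field strengths and multiplicative-error "predictions".
rng = np.random.default_rng(7)
measured = rng.lognormal(mean=-1.0, sigma=0.5, size=200)
predicted = measured * rng.lognormal(mean=0.0, sigma=0.3, size=200)

# Spearman rank correlation on the numerical values.
rho, _ = stats.spearmanr(predicted, measured)

def weighted_kappa(a, b, n_cat, weights="linear"):
    """Cohen's kappa with linear (or quadratic) disagreement weights."""
    conf = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):
        conf[i, j] += 1
    w = np.abs(np.subtract.outer(np.arange(n_cat), np.arange(n_cat)))
    if weights == "quadratic":
        w = w ** 2
    expected = np.outer(conf.sum(1), conf.sum(0)) / conf.sum()
    return 1.0 - (w * conf).sum() / (w * expected).sum()

# Categorize into tertiles before computing the weighted kappa.
cats_m = np.digitize(measured, np.quantile(measured, [1/3, 2/3]))
cats_p = np.digitize(predicted, np.quantile(predicted, [1/3, 2/3]))
kappa = weighted_kappa(cats_m, cats_p, n_cat=3)
print(f"Spearman rho: {rho:.2f}, weighted kappa: {kappa:.2f}")
```

Spearman's rho assesses rank agreement of the continuous predictions, while the weighted kappa assesses agreement of exposure categories, penalizing distant misclassifications more heavily.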
Abstract:
Modeling of tumor growth has been performed according to various approaches addressing different biocomplexity levels and spatiotemporal scales. Mathematical treatments range from partial differential equation based diffusion models to rule-based cellular level simulators, aiming at both improving our quantitative understanding of the underlying biological processes and, in the mid- and long term, constructing reliable multi-scale predictive platforms to support patient-individualized treatment planning and optimization. The aim of this paper is to establish a multi-scale and multi-physics approach to tumor modeling taking into account both the cellular and the macroscopic mechanical level. Therefore, an already developed biomodel of clinical tumor growth and response to treatment is self-consistently coupled with a biomechanical model. Results are presented for the free growth case of the imageable component of an initially point-like glioblastoma multiforme tumor. The composite model leads to significant tumor shape corrections that are achieved through the utilization of environmental pressure information and the application of biomechanical principles. Using the ratio of smallest to largest moment of inertia of the tumor material to quantify the effect of our coupled approach, we have found a tumor shape correction of 20% by coupling biomechanics to the cellular simulator as compared to a cellular simulation without preferred growth directions. We conclude that the integration of the two models provides additional morphological insight into realistic tumor growth behavior. Therefore, it might be used for the development of an advanced oncosimulator focusing on tumor types for which morphology plays an important role in surgical and/or radio-therapeutic treatment planning.
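The shape measure quoted above, the ratio of smallest to largest principal moment of inertia, can be computed directly from a voxel or cell mass distribution. This is a generic sketch, not the oncosimulator's code:

```python
import numpy as np

# Shape anisotropy measure: ratio of smallest to largest principal moment
# of inertia of a mass distribution (1.0 means sphere-like, << 1 elongated).
def inertia_ratio(coords, masses):
    r = coords - np.average(coords, axis=0, weights=masses)
    # Inertia tensor: I = sum_k m_k (|r_k|^2 E - r_k r_k^T)
    I = np.einsum("k,kij->ij",
                  masses,
                  (r ** 2).sum(1)[:, None, None] * np.eye(3)
                  - np.einsum("ki,kj->kij", r, r))
    moments = np.linalg.eigvalsh(I)   # ascending principal moments
    return moments[0] / moments[-1]

# Synthetic example: an ellipsoidal cloud of unit-mass "tumor cells",
# stretched 3x along one axis.
rng = np.random.default_rng(1)
cells = rng.normal(size=(5000, 3)) * np.array([3.0, 1.0, 1.0])
print(f"inertia ratio: {inertia_ratio(cells, np.ones(len(cells))):.2f}")
```

For the stretched cloud the ratio is well below one, while an isotropic cloud yields a value near one, which is what makes the measure suitable for quantifying biomechanically induced shape corrections.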
Abstract:
BACKGROUND: To develop risk-adapted prevention of psychosis, an accurate estimation of the individual risk of psychosis at a given time is needed. Inclusion of biological parameters into multilevel prediction models is thought to improve the predictive accuracy of models based on clinical variables. To this aim, mismatch negativity (MMN) was investigated in a sample clinically at high risk, comparing individuals with and without subsequent conversion to psychosis. METHODS: At baseline, an auditory oddball paradigm was used in 62 subjects meeting criteria of a late at-risk state who remained antipsychotic-naive throughout the study. The median follow-up period was 32 months (minimum of 24 months in nonconverters, n = 37). Repeated-measures analysis of covariance was employed to analyze the MMN recorded at frontocentral electrodes; additional comparisons with healthy controls (HC, n = 67) and first-episode schizophrenia patients (FES, n = 33) were performed. Predictive value was evaluated by a Cox regression model. RESULTS: Compared with nonconverters, the duration MMN in converters (n = 25) showed significantly reduced amplitudes across the six frontocentral electrodes; the same applied in comparison with HC, but not FES, whereas the duration MMN in nonconverters was comparable to HC and larger than in FES. A prognostic score was calculated based on a Cox regression model and stratified into two risk classes, which showed significantly different survival curves. CONCLUSIONS: Our findings demonstrate that the duration MMN is significantly reduced in at-risk subjects converting to first-episode psychosis compared with nonconverters, and may contribute not only to the prediction of conversion but also to a more individualized risk estimation and thus risk-adapted prevention.
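The prognostic-score stratification described in the results can be sketched generically: compute the Cox linear predictor, split at the median, and compare survival curves. Coefficients, covariates, and data below are hypothetical:

```python
import numpy as np

# Kaplan-Meier survival estimate (no ties assumed, for simplicity).
def kaplan_meier(time, event):
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk = np.arange(len(time), 0, -1)
    surv = np.cumprod(1.0 - event / at_risk)
    return time, surv

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=(n, 2))          # e.g., MMN amplitude, symptom score
beta = np.array([0.8, 0.5])          # hypothetical Cox coefficients
score = x @ beta                     # prognostic score (log hazard ratio)
time = rng.exponential(1.0 / np.exp(score))   # simulated conversion times
event = np.ones(n, dtype=int)                 # no censoring, for simplicity

# Stratify into two risk classes at the median prognostic score.
high = score > np.median(score)
_, surv_hi = kaplan_meier(time[high], event[high])
_, surv_lo = kaplan_meier(time[~high], event[~high])
print(f"median time  high-risk: {np.median(time[high]):.2f}, "
      f"low-risk: {np.median(time[~high]):.2f}")
```

Because the hazard scales with exp(score), the high-risk class converts earlier, producing the separated survival curves described above (a log-rank test would formalize the comparison).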
Abstract:
The supermolecule approach has been used to model the hydration of cyclic 3′,5′-adenosine monophosphate, cAMP. Model building combined with PM3 optimizations predicts that the anti conformer of cAMP is capable of hydrogen bonding to one additional solvent water molecule compared to the syn conformer. The addition of one water to the syn superstructure, with concurrent rotation of the base about the glycosyl bond to form the anti superstructure, leads to an additional enthalpy of stabilization of approximately −6 kcal/mol at the PM3 level. This specific solute-solvent interaction is an example of a large solvent effect, as the method predicts that cAMP has a conformational preference for the anti isomer in solution. This conformational preference results from a change in the number of specific solute-solvent interactions in this system. This prediction could be tested by NMR techniques. The number of waters predicted to be in the first hydration sphere around cAMP is in agreement with the results of hydration studies of nucleotides in DNA. In addition, the detailed picture of solvation about this cyclic nucleotide is in agreement with infrared experimental results.
Abstract:
Using path analysis, the present investigation was conducted to clarify possible causal linkages among general scholastic aptitude, academic achievement in mathematics, self-concept of ability, and performance on a mathematics examination. Subjects were 122 eighth-grade students who completed a mathematics examination as well as a measure of self-concept of ability. Aptitude and achievement measures were obtained from school records. The analysis showed sex differences in the prediction of performance on the mathematics examination. For boys, this performance could be predicted from scholastic aptitude and previous achievement in mathematics. For girls, performance could be predicted only from previous achievement in mathematics. These results indicate that the direction, strength, and magnitude of relations among these variables differed for boys and girls, while mean levels of performance did not.
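The core computation in such a path analysis is a set of standardized regression (path) coefficients. A minimal sketch on synthetic data follows; the coefficients and sample are illustrative, not the study's:

```python
import numpy as np

# Standardized regression (path) coefficients via least squares.
def path_coefficients(X, y):
    Xs = (X - X.mean(0)) / X.std(0)   # standardize predictors
    ys = (y - y.mean()) / y.std()     # standardize outcome
    coef, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return coef

# Synthetic sample of 122 "students": exam performance depends on both
# aptitude and prior achievement (true effects are made up).
rng = np.random.default_rng(5)
n = 122
aptitude = rng.normal(size=n)
achievement = 0.6 * aptitude + rng.normal(scale=0.8, size=n)
exam = 0.4 * aptitude + 0.5 * achievement + rng.normal(scale=0.7, size=n)

paths = path_coefficients(np.column_stack([aptitude, achievement]), exam)
print(f"path coefficients (aptitude, achievement): {np.round(paths, 2)}")
```

Fitting the same model separately for each group (e.g., boys and girls) and comparing the coefficients is what reveals the kind of sex difference reported above.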
Abstract:
Substantial variation exists in response to standard doses of codeine ranging from poor analgesia to life-threatening central nervous system (CNS) depression. We aimed to discover the genetic markers predictive of codeine toxicity by evaluating the associations between polymorphisms in cytochrome P450 2D6 (CYP2D6), UDP-glucuronosyltransferase 2B7 (UGT2B7), P-glycoprotein (ABCB1), mu-opioid receptor (OPRM1), and catechol O-methyltransferase (COMT) genes, which are involved in the codeine pathway, and the symptoms of CNS depression in 111 breastfeeding mothers using codeine and their infants. A genetic model combining the maternal risk genotypes in CYP2D6 and ABCB1 was significantly associated with the adverse outcomes in infants (odds ratio (OR) 2.68; 95% confidence interval (CI) 1.61-4.48; P(trend) = 0.0002) and their mothers (OR 2.74; 95% CI 1.55-4.84; P(trend) = 0.0005). A novel combination of the genetic and clinical factors predicted 87% of the infant and maternal CNS depression cases with a sensitivity of 80% and a specificity of 87%. Genetic markers can be used to improve the outcome of codeine therapy and are also probably important for other opioids sharing common biotransformation pathways.
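The odds ratio with its Wald 95% CI, together with sensitivity and specificity, are standard 2x2-table quantities; the counts below are invented for illustration:

```python
import numpy as np

# Odds ratio with a Wald (log-scale) 95% confidence interval.
def odds_ratio_ci(a, b, c, d):
    """a: exposed cases, b: exposed controls, c: unexposed cases, d: unexposed controls."""
    or_ = (a * d) / (b * c)
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se)
    return or_, lo, hi

# Hypothetical counts: risk genotype vs. CNS depression outcome.
a, b, c, d = 24, 20, 6, 61
or_, lo, hi = odds_ratio_ci(a, b, c, d)
sens = a / (a + c)    # fraction of cases carrying the risk genotype
spec = d / (b + d)    # fraction of controls without it
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f}), "
      f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```

A CI excluding 1.0, as here, indicates an association between the combined risk genotypes and the adverse outcome at the 5% level.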
Abstract:
Dimensional modeling, GT-Power in particular, has been used for two related purposes: to quantify and understand the inaccuracies of transient engine flow estimates that cause transient smoke spikes, and to improve empirical models of opacity or particulate matter used for engine calibration. Dimensional modeling suggested that the exhaust gas recirculation flow rate was significantly underestimated and the volumetric efficiency overestimated by the electronic control module during the turbocharger lag period of an electronically controlled heavy-duty diesel engine. Factoring in cylinder-to-cylinder variation, it has been shown that the fuel-oxygen ratio estimated by the electronic control module was lower than actual by up to 35% during the turbocharger lag period but within 2% of actual elsewhere, thus hindering fuel-oxygen-ratio-limit-based smoke control. The dimensional modeling of transient flow was enabled by a new method of simulating transient data in which the manifold pressures and the exhaust gas recirculation system flow resistance, characterized as a function of exhaust gas recirculation valve position at each measured transient data point, were replicated by quasi-static or transient simulation to predict engine flows. Dimensional modeling was also used to transform the engine-operating-parameter model input space to a more fundamental, lower-dimensional space so that a nearest neighbor approach could be used to predict smoke emissions. This new approach, intended for engine calibration and control modeling, was termed the "nonparametric reduced dimensionality" approach. It was used to predict federal test procedure cumulative particulate matter within 7% of the measured value, based solely on steady-state training data. Very little correlation between the model inputs in the transformed space was observed, as compared to the engine operating parameter space.
This more uniform, more compact model input space might explain how the nonparametric reduced dimensionality model could successfully predict federal test procedure emissions even though roughly 40% of all transient points were classified as outliers with respect to the steady-state training data.
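A generic sketch of the nearest-neighbor-in-reduced-space idea follows; the two-dimensional transform below is a hypothetical stand-in for the paper's physically derived mapping:

```python
import numpy as np
from scipy.spatial import cKDTree

# Map engine operating parameters to a lower-dimensional, more fundamental
# space; these two proxies are illustrative assumptions, not the paper's.
def to_reduced_space(speed, fueling, egr_frac):
    load_proxy = fueling * speed             # crude proxy for power
    o2_proxy = (1.0 - egr_frac) / fueling    # crude proxy for fuel-oxygen ratio
    return np.column_stack([load_proxy, o2_proxy])

# Synthetic steady-state "training" data with a made-up PM response.
rng = np.random.default_rng(11)
n = 500
speed = rng.uniform(1, 2, n)
fueling = rng.uniform(0.5, 1.5, n)
egr = rng.uniform(0, 0.3, n)
Z = to_reduced_space(speed, fueling, egr)
pm = 0.1 * Z[:, 0] / Z[:, 1] + rng.normal(scale=0.01, size=n)

# Standardize the reduced space, then predict by nearest-neighbor lookup.
mu, sd = Z.mean(0), Z.std(0)
tree = cKDTree((Z - mu) / sd)
query = to_reduced_space(np.array([1.5]), np.array([1.0]), np.array([0.1]))
_, idx = tree.query((query - mu) / sd, k=3)
print(f"predicted PM: {pm[idx].mean():.3f}")
```

Predicting transient emissions from steady-state neighbors works only to the extent that the reduced coordinates really are the "fundamental" drivers, which is the paper's central claim.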