902 results for Physiologically-based pharmacokinetic modeling
Abstract:
The aim of the present study was to develop a physiologically compatible inhalation solution of delta-9-tetrahydrocannabinol (THC) and to compare the pharmacokinetic and analgesic properties of pulmonary THC versus pulmonary placebo and intravenous (iv) THC, respectively. Eight healthy volunteers were included in this randomized, double-blind, crossover study. The aqueous THC formulations were prepared using a solubilization technique. iv THC (0.053 mg/kg body weight), pulmonary THC (0.053 mg/kg), or a placebo inhalation solution was administered as a single dose. At defined time points, blood samples were collected, and somatic and psychotropic side effects as well as vital functions were monitored. An ice-water immersion test was performed to measure analgesia. Using a pressure-driven nebulizer, pulmonary administration of the THC liquid aerosol produced high THC peak plasma levels within minutes. The bioavailability of pulmonary THC was 28.7 +/- 8.2% (mean +/- SEM). The side effects observed after pulmonary THC were coughing, slight irritation of the upper respiratory tract, very mild psychotropic symptoms, and headache. The side effects after iv THC were much more prominent. Neither pulmonary nor iv THC significantly reduced experimentally induced pain.
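As a quick illustration of how an absolute bioavailability figure like 28.7% is typically derived, the sketch below computes areas under the concentration-time curve (AUC) for both routes by the trapezoidal rule; the concentration values are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical plasma concentration-time profiles (ng/mL vs. minutes);
# values are invented for illustration, not the study's measurements.
t = np.array([0.0, 2, 5, 10, 20, 45, 90, 180])        # sampling times (min)
c_iv = np.array([0.0, 95, 60, 38, 22, 10, 4, 1])      # after iv THC
c_pulm = np.array([0.0, 30, 21, 13, 8, 4, 1.5, 0.4])  # after pulmonary THC

def auc_trapezoid(c, t):
    """Area under the concentration-time curve by the linear trapezoidal rule."""
    return float(np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t)))

# Both routes used the same dose (0.053 mg/kg), so the dose terms cancel:
# F = (AUC_pulm / D_pulm) / (AUC_iv / D_iv) = AUC_pulm / AUC_iv
f = auc_trapezoid(c_pulm, t) / auc_trapezoid(c_iv, t)
print(f"absolute pulmonary bioavailability: {f:.1%}")
```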
Abstract:
P450 oxidoreductase (POR) is the obligate electron donor for microsomal cytochrome P450s, and mutations in POR cause several metabolic disorders. We have modeled the structure of human P450 oxidoreductase by in silico amino acid replacements in the rat POR crystal structure. Rat POR has 94% homology with human POR, and 38 amino acids were replaced to make its sequence identical to human POR. Several rounds of molecular dynamics simulations refined the model and removed structural clashes arising from the side chain alterations of the replaced amino acids. This approach has the advantage of keeping the cofactor contacts and structural features of the core enzyme intact, which could not be achieved by homology-based approaches. The final model from our approach was of high quality and compared well with experimentally determined structures of other PORs. This model will be used for analyzing the structural implications of mutations and polymorphisms in human POR.
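The template-editing step can be pictured as in-place residue substitution on the rat sequence, with MD refinement handled downstream. A toy sketch of that substitution step; the sequence fragment and positions are invented for illustration and are not the actual 38 rat-to-human POR differences.

```python
# Toy sketch of in-place residue substitution on a template sequence.
# The fragment and substitution positions are invented for illustration;
# they are not the actual 38 rat-to-human POR differences.
rat_fragment = "MGDSHVDTSSTVSEAVAEEVSLFS"
substitutions = {3: "E", 10: "A", 17: "Q"}   # 0-based position -> human residue

edited = list(rat_fragment)
for pos, residue in substitutions.items():
    edited[pos] = residue
humanized = "".join(edited)

# Percent identity between the template and the edited sequence
matches = sum(a == b for a, b in zip(rat_fragment, humanized))
print(f"identity after editing: {matches / len(rat_fragment):.0%}")
```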
Abstract:
OBJECTIVE: Hierarchical modeling has been proposed as a solution to the multiple exposure problem. We estimate associations between metabolic syndrome and different components of antiretroviral therapy using both conventional and hierarchical models. STUDY DESIGN AND SETTING: We use discrete time survival analysis to estimate the association between metabolic syndrome and cumulative exposure to 16 antiretrovirals from four drug classes. We fit a hierarchical model in which the drug class provides a prior model of the association between metabolic syndrome and exposure to each antiretroviral. RESULTS: One thousand two hundred and eighteen patients were followed for a median of 27 months, with 242 cases of metabolic syndrome (20%) at a rate of 7.5 cases per 100 patient-years. Metabolic syndrome was more likely to develop in patients exposed to stavudine, but less likely in those exposed to atazanavir. The estimate for exposure to atazanavir increased from a hazard ratio of 0.06 per 6 months' use in the conventional model to 0.37 in the hierarchical model (or from 0.57 to 0.81 when using spline-based covariate adjustment). CONCLUSION: These results are consistent with trials that show the disadvantage of stavudine and the advantage of atazanavir relative to other drugs in their respective classes. The hierarchical model gave more plausible results than the equivalent conventional model.
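The mechanism that pulls an extreme estimate like 0.06 toward 0.37 is partial pooling: each drug's estimate is shrunk toward its class mean in proportion to its uncertainty. A minimal sketch of that idea, with invented estimates, standard errors, and between-drug variance rather than the study's fitted values:

```python
import numpy as np

# Invented drug-level log hazard ratios and standard errors for one drug class;
# tau is an assumed between-drug standard deviation within the class.
log_hr = np.log(np.array([0.06, 0.90, 1.10, 1.20]))  # conventional estimates
se = np.array([1.2, 0.3, 0.3, 0.4])                  # big SE -> strong shrinkage
tau = 0.5

# Precision-weighted class mean
class_mean = np.average(log_hr, weights=1.0 / (se**2 + tau**2))

# Each drug's compromise between its own estimate and the class mean
# (the classic normal-normal shrinkage weight)
w = tau**2 / (tau**2 + se**2)
shrunk = w * log_hr + (1.0 - w) * class_mean

for raw, post in zip(np.exp(log_hr), np.exp(shrunk)):
    print(f"conventional HR {raw:.2f} -> hierarchical HR {post:.2f}")
```

The imprecisely estimated outlier (HR 0.06) is pulled strongly toward the class mean, while well-supported estimates barely move, which is the qualitative behavior the abstract reports.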
Abstract:
High-resolution and highly precise age models for recent lake sediments (last 100–150 years) are essential for quantitative paleoclimate research. These are particularly important for sedimentological and geochemical proxies, where transfer functions cannot be established and calibration must be based upon the relation of sedimentary records to instrumental data. High-precision dating for the calibration period is most critical, as it directly determines the quality of the calibration statistics. Here, as an example, we compare radionuclide age models obtained on two high-elevation glacial lakes in the Central Chilean Andes (Laguna Negra: 33°38′S/70°08′W, 2,680 m a.s.l. and Laguna El Ocho: 34°02′S/70°19′W, 3,250 m a.s.l.). We show the different numerical models that produce accurate age-depth chronologies based on ²¹⁰Pb profiles, and we explain how to obtain reduced age-error bars at the bottom part of the profiles, i.e., typically around the end of the 19th century. In order to constrain the age models, we propose a method with four steps: (i) sampling at irregularly spaced intervals for ²²⁶Ra, ²¹⁰Pb and ¹³⁷Cs depending on the stratigraphy and microfacies; (ii) a systematic comparison of numerical models for the calculation of ²¹⁰Pb-based age models: constant flux constant sedimentation (CFCS), constant initial concentration (CIC), constant rate of supply (CRS) and sediment isotope tomography (SIT); (iii) numerical constraining of the CRS and SIT models with the ¹³⁷Cs chronomarker of AD 1964; and (iv) step-wise cross-validation with independent diagnostic environmental stratigraphic markers of known age (e.g., volcanic ash layers, historical floods and earthquakes). In both examples, we also use airborne pollutants such as spheroidal carbonaceous particles (reflecting the history of fossil fuel emissions), excess atmospheric Cu deposition (reflecting the production history of a large local Cu mine), and turbidites related to historical earthquakes. Our results show that the SIT model constrained with the ¹³⁷Cs AD 1964 peak performs best over the entire chronological profile (last 100–150 years) and yields the smallest standard deviations for the sediment ages. Such precision is critical for the calibration statistics and, ultimately, for the quality of the quantitative paleoclimate reconstruction. The systematic comparison of CRS and SIT models also helps to validate the robustness of the chronologies in different sections of the profile. Although surprisingly poorly known and under-explored in paleolimnological research, the SIT model has great potential for paleoclimatological reconstructions based on lake sediments.
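Of the models compared, CRS has the simplest closed form: the age at depth z is t(z) = (1/λ) ln(I(0)/I(z)), where I(z) is the cumulative excess-²¹⁰Pb inventory below z and λ is the ²¹⁰Pb decay constant. A minimal sketch on an invented profile; a real application subtracts ²²⁶Ra-supported ²¹⁰Pb and propagates counting errors.

```python
import numpy as np

# Constant rate of supply (CRS) 210Pb age model on an invented profile.
# Activities, densities and thicknesses are illustrative only.
LAM = np.log(2) / 22.3                      # 210Pb decay constant (1/yr)

thickness = np.array([2, 2, 2, 4, 4, 8, 6])                  # section thickness (cm)
rho = np.array([0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50])   # dry bulk density (g/cm^3)
pb_xs = np.array([120, 80, 55, 30, 14, 5, 1.5])              # excess 210Pb (Bq/kg)

# Inventory per section (Bq/cm^2): activity * mass depth (g/cm^2 -> kg/cm^2)
inv = pb_xs * rho * thickness / 1000.0
total = inv.sum()
below = total - np.cumsum(inv)              # inventory below each section base

# CRS age at each section base: t = (1/lambda) * ln(I_total / I_below).
# The deepest age diverges as I_below -> 0, which is why CRS chronologies
# are constrained with independent markers such as the 137Cs AD 1964 peak.
base_depth = np.cumsum(thickness)
ages = np.log(total / below[:-1]) / LAM
for z, t in zip(base_depth[:-1], ages):
    print(f"{z:2d} cm: {t:5.1f} yr before coring")
```

The blow-up of CRS error at the bottom of the profile is exactly the region (late 19th century) where the abstract argues constrained models are needed.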
Abstract:
The marine aragonite cycle has been included in the global biogeochemical model PISCES to study the role of aragonite in shallow-water CaCO₃ dissolution. Aragonite production is parameterized as a function of mesozooplankton biomass and the aragonite saturation state of ambient waters. Observation-based estimates of marine carbonate production and dissolution are well reproduced by the model, and about 60% of the combined CaCO₃ water-column dissolution from aragonite and calcite is simulated above 2000 m. In contrast, a calcite-only version yields a much smaller fraction. This suggests that the aragonite cycle should be included in models for a realistic representation of CaCO₃ dissolution and alkalinity. For the SRES A2 CO₂ scenario, production rates of aragonite are projected to decrease notably after 2050. By the end of this century, global aragonite production is reduced by 29% and total CaCO₃ production by 19% relative to pre-industrial levels. Geographically, the effect of increasing atmospheric CO₂, and the subsequent reduction in saturation state, is largest in the subpolar and polar areas, where the modeled aragonite production is projected to decrease by 65% by 2100.
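The key quantity in such a parameterization is the aragonite saturation state, Ω_arag = [Ca²⁺][CO₃²⁻]/K_sp. The sketch below uses a generic production term that scales with mesozooplankton biomass and shuts off toward undersaturation; the functional form and all constants are illustrative assumptions, not PISCES's actual code.

```python
# Generic saturation-state-dependent aragonite production term; the
# functional form and all constants are illustrative, not PISCES's code.

def omega_aragonite(ca, co3, ksp=6.65e-7):
    """Omega = [Ca2+][CO3 2-] / Ksp (concentrations in mol/kg)."""
    return ca * co3 / ksp

def aragonite_production(meso_biomass, omega, rate=0.02, omega_crit=1.0):
    """Scales with mesozooplankton biomass; zero at or below saturation."""
    return rate * meso_biomass * max(0.0, omega - omega_crit)

# Illustrative surface carbonate chemistry: present day vs. a high-CO2 ocean
# (rising CO2 lowers [CO3 2-], hence Omega, hence aragonite production).
omega_now = omega_aragonite(ca=1.03e-2, co3=2.0e-4)     # ~3.1
omega_future = omega_aragonite(ca=1.03e-2, co3=1.2e-4)  # ~1.9
for label, om in [("present", omega_now), ("high-CO2", omega_future)]:
    print(f"{label}: Omega = {om:.2f}, production = "
          f"{aragonite_production(1.0, om):.4f} (arbitrary units)")
```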
Abstract:
The study is based on experimental work conducted in alpine snow. We made microwave radiometric and near-infrared reflectance measurements of snow slabs under different experimental conditions. We used an empirical relation to link the near-infrared reflectance of snow to the specific surface area (SSA), and converted the SSA into the correlation length. From measurements of snow radiances at 21 and 35 GHz, we derived the microwave scattering coefficient by inverting two coupled radiative transfer models (the sandwich and six-flux models). The correlation lengths found are in the same range as those determined in the literature using cold-laboratory work. The technique shows great potential for determining the snow correlation length under field conditions.
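A common way to carry out the SSA-to-correlation-length step is the Debye relation for a two-phase medium, l_c = 4(1 − v)/(ρ_ice · SSA), with v the ice volume fraction; whether this exact relation was used here is an assumption, and the numbers below are illustrative.

```python
# SSA -> correlation length via the Debye relation for a two-phase medium:
#   l_c = 4 * (1 - v) / (rho_ice * SSA),  v = rho_snow / rho_ice.
# Whether this exact relation was used in the study is an assumption;
# the SSA and density values below are illustrative.
RHO_ICE = 917.0  # kg/m^3

def correlation_length(ssa_m2_per_kg, rho_snow):
    v = rho_snow / RHO_ICE                   # ice volume fraction
    return 4.0 * (1.0 - v) / (RHO_ICE * ssa_m2_per_kg)

l_c = correlation_length(ssa_m2_per_kg=20.0, rho_snow=300.0)
print(f"correlation length ~ {l_c * 1e3:.2f} mm")  # ~0.15 mm, a typical value
```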
Abstract:
In the past few years, multimodal interaction has been gaining importance in virtual environments. Although multimodality renders interacting with an environment more natural and intuitive, the development cycle of such an application is often long and expensive. In our overall field of research, we investigate how model-based design can facilitate the development process by designing environments through the use of high-level diagrams. In this scope, we present ‘NiMMiT’, a graphical notation for expressing and evaluating multimodal user interaction; we elaborate on the NiMMiT primitives and demonstrate its use by means of a comprehensive example.
Abstract:
PDP++ is a freely available, open source software package designed to support the development, simulation, and analysis of research-grade connectionist models of cognitive processes. It supports most popular parallel distributed processing paradigms and artificial neural network architectures, and it also provides an implementation of the LEABRA computational cognitive neuroscience framework. Models are typically constructed and examined using the PDP++ graphical user interface, but the system may also be extended through the incorporation of user-written C++ code. This article briefly reviews the features of PDP++, focusing on its utility for teaching cognitive modeling concepts and skills to university undergraduate and graduate students. An informal evaluation of the software as a pedagogical tool is provided, based on the author’s classroom experiences at three research universities and several conference-hosted tutorials.
Abstract:
With a steady increase of regulatory requirements for business processes, automation support for compliance management is a field garnering increasing attention in Information Systems research. Several approaches have been developed to support compliance checking of process models. One major challenge for such approaches is their ability to handle different modeling techniques and compliance rules in order to enable widespread adoption and application. Applying a structured literature search strategy, we reflect on and discuss compliance-checking approaches in order to provide insight into their generalizability and evaluation. The results imply that current approaches mainly focus on specific modeling techniques and/or a restricted set of types of compliance rules. Most approaches abstain from real-world evaluation, which raises the question of their practical applicability. Based on the search results, we propose a roadmap for further research in model-based business process compliance checking.
Abstract:
Mixed Reality (MR) aims to link virtual entities with the real world and has many applications, for example in the military and medical domains [JBL+00, NFB07]. In many MR systems, and more precisely in augmented scenes, the application needs to render the virtual part accurately at the right time. To achieve this, such systems acquire data related to the real world from a set of sensors before rendering the virtual entities. A suitable system architecture should minimize the delays to keep the overall system delay (also called end-to-end latency) within the requirements for real-time performance. In this context, we propose a compositional modeling framework for MR software architectures in order to formally specify, simulate, and validate the time constraints of such systems. Our approach is first based on a functional decomposition of such systems into generic components. The obtained elements, as well as their typical interactions, give rise to generic representations in terms of timed automata. A whole system is then obtained as a composition of such defined components. To write specifications, a textual language named MIRELA (MIxed REality LAnguage) is proposed along with the corresponding compilation tools. The generated output contains timed automata in UPPAAL format for the simulation and verification of time constraints. These automata may also be used to generate source code skeletons for an implementation on an MR platform. The approach is illustrated first on a small example. A realistic case study is also developed; it is modeled by several timed automata synchronizing through channels and including a large number of time constraints. Both systems have been simulated in UPPAAL and checked against the required behavioral properties.
Abstract:
Loss to follow-up (LTFU) is a common problem in many epidemiological studies. In antiretroviral treatment (ART) programs for patients with human immunodeficiency virus (HIV), mortality estimates can be biased if the LTFU mechanism is non-ignorable, that is, if mortality differs between lost and retained patients. In this setting, routine procedures for handling missing data may lead to biased estimates. To deal appropriately with non-ignorable LTFU, explicit modeling of the missing-data mechanism is needed. This can be based on additional outcome ascertainment for a sample of patients LTFU, for example, through linkage to national registries or through survey-based methods. In this paper, we demonstrate how this additional information can be used to construct estimators based on inverse probability weights (IPW) or multiple imputation. We use simulations to contrast the performance of the proposed estimators with methods widely used in HIV cohort research for dealing with missing data. The practical implications of our approach are illustrated using South African ART data, which are partially linkable to South African national vital registration data. Our results demonstrate that while IPWs and proper imputation procedures can easily be constructed from additional outcome ascertainment to obtain valid overall estimates, neglecting non-ignorable LTFU can result in substantial bias. We believe the proposed estimators are readily applicable to a growing number of studies where LTFU is appreciable, but additional outcome data are available through linkage or surveys of patients LTFU.
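The IPW construction is simple at its core: each traced patient among those LTFU stands in for 1/p lost patients, where p is the tracing (sampling) probability. A toy simulation of that estimator under non-ignorable LTFU; all sizes and rates are invented, not the South African data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cohort with non-ignorable LTFU: mortality is higher among the lost.
# All sizes, rates and the tracing fraction are invented for illustration.
n_ret, n_ltfu, p_trace = 1000, 400, 0.25
dead_ret = rng.binomial(1, 0.05, n_ret)    # 5% mortality among retained
dead_ltfu = rng.binomial(1, 0.20, n_ltfu)  # 20% mortality among LTFU
traced = rng.random(n_ltfu) < p_trace      # registry-linked subsample

# Naive estimate: analyze retained patients only (biased downward here)
naive = dead_ret.mean()

# IPW estimate: each traced LTFU outcome is weighted by 1 / p_trace
deaths = dead_ret.sum() + (dead_ltfu[traced] / p_trace).sum()
persons = n_ret + (traced / p_trace).sum()
ipw = deaths / persons

truth = (0.05 * n_ret + 0.20 * n_ltfu) / (n_ret + n_ltfu)
print(f"naive {naive:.3f} | IPW {ipw:.3f} | true ~ {truth:.3f}")
```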
Abstract:
BACKGROUND: Moraxella catarrhalis, a major nasopharyngeal pathogen of the human respiratory tract, is exposed to rapid downshifts of environmental temperature when humans breathe cold air. The prevalence of pharyngeal colonization and respiratory tract infections caused by M. catarrhalis is greatest in winter. We investigated how M. catarrhalis uses the physiologic exposure to cold air to regulate pivotal survival systems that may contribute to M. catarrhalis virulence. RESULTS: In this study we used RNA-seq to quantitatively catalogue the transcriptome of M. catarrhalis exposed to a 26 °C cold shock or to continuous growth at 37 °C. Validation of the RNA-seq data using quantitative RT-PCR analysis demonstrated the RNA-seq results to be highly reliable. We observed that a 26 °C cold shock induces the expression of genes that in other bacteria have been related to virulence: a strong induction was observed for genes involved in high-affinity phosphate transport and iron acquisition, indicating that M. catarrhalis makes better use of both phosphate and iron resources after exposure to cold shock. We detected the induction of genes involved in nitrogen metabolism, as well as several outer membrane proteins, including ompA, an M35-like porin and a multidrug efflux pump (acrAB), indicating that M. catarrhalis remodels its membrane components in response to a downshift in temperature. Furthermore, we demonstrate that a 26 °C cold shock enhances the induction of genes encoding the type IV pili that are essential for natural transformation, and increases the genetic competence of M. catarrhalis, which may facilitate the rapid spread and acquisition of novel virulence-associated genes. CONCLUSION: Cold shock at a physiologically relevant temperature of 26 °C induces in M. catarrhalis a complex of adaptive mechanisms that could convey novel pathogenic functions and may contribute to enhanced colonization and virulence.
Abstract:
Our knowledge about the lunar environment is based on a large volume of ground-based, remote, and in situ observations. These observations have been conducted at different times and have sampled different pieces of such a complex system as the surface-bound exosphere of the Moon. Numerical modeling is the tool that can link the results of these separate observations into a single picture. Once validated against previous measurements, models can be used for prediction and for the interpretation of future observations. In this paper we present a kinetic model of the sodium exosphere of the Moon, as well as the results of its validation against a set of ground-based and remote observations. The unique characteristic of the model is that it takes the orbital motion of the Moon and the Earth into consideration and simulates both the exosphere and the sodium tail self-consistently. The extended computational domain covers the part of the Earth's orbit at new Moon, which allows us to study the effect of Earth's gravity on the lunar sodium tail. The model is fitted to a set of ground-based and remote observations by tuning the sodium source rate as well as the values of the sticking and accommodation coefficients. The best agreement of the model results with the observations is reached when all sodium atoms returning from the exosphere stick to the surface and the net sodium escape rate is about 5.3 × 10²² s⁻¹.
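To illustrate where the sticking coefficient enters such a kinetic model, here is a drastically simplified test-particle sketch: launched atoms either exceed escape speed or return to the surface, and returners stick with probability s (the best-fit case above corresponds to s = 1). The speed distribution, the hop cap, and the omission of photoionization and radiation pressure are all simplifications of mine, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Drastically simplified test-particle sketch of a surface-bound exosphere:
# atoms hop ballistically until they escape (v > V_ESC) or stick on return.
# STICK = 1.0 mirrors the best-fit case in which every returning atom sticks.
# The crude speed draw and all numbers are illustrative, not the paper's model.
V_ESC = 2380.0     # lunar escape speed (m/s)
STICK = 1.0        # sticking probability for atoms returning to the surface

def escaping_fraction(n=100_000, v_scale=900.0, max_hops=50):
    v = v_scale * np.sqrt(2.0 * rng.exponential(size=n))  # Rayleigh-like speeds
    alive = np.ones(n, dtype=bool)
    escaped = 0
    for _ in range(max_hops):
        if not alive.any():
            break
        esc = alive & (v > V_ESC)
        escaped += int(esc.sum())
        returning = alive & ~esc
        stuck = returning & (rng.random(n) < STICK)
        alive = returning & ~stuck                 # non-sticking atoms hop again
        v[alive] = v_scale * np.sqrt(2.0 * rng.exponential(size=int(alive.sum())))
    return escaped / n

print(f"fraction escaping per launched atom: {escaping_fraction():.3%}")
```

Multiplying such an escape fraction by a tuned surface source rate is what yields a net escape rate of the kind reported above.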
Abstract:
The spectacular images of Comet 103P/Hartley 2 recorded by the Medium Resolution Instrument (MRI) and High Resolution Instrument (HRI) on board the Extrasolar Planet Observation and Deep Impact Extended Investigation (EPOXI) spacecraft, during the Deep Impact extended mission, revealed that its bi-lobed, very active nucleus outgasses volatiles heterogeneously. Indeed, CO₂ is the primary driver of activity, dragging chunks of pure ice out of the nucleus from the sub-solar lobe; these chunks appear to be the main source of water in Hartley 2's coma, sublimating slowly as they move away from the nucleus. However, water vapor is released by direct sublimation of the nucleus at the waist without any significant amount of either CO₂ or icy grains. The coma structure for a comet with such areas of diverse chemistry differs from the usual models, where gases are produced in a homogeneous way from the surface. We use the fully kinetic Direct Simulation Monte Carlo model of Tenishev et al. (Tenishev, V.M., Combi, M.R., Davidsson, B. [2008]. Astrophys. J. 685, 659-677; Tenishev, V.M., Combi, M.R., Rubin, M. [2011]. Astrophys. J. 732, 104-120) applied to Comet 103P/Hartley 2, including sublimating icy grains, to reproduce the observations made by EPOXI and ground-based measurements. A realistic bi-lobed nucleus with a succession of active areas of different chemistry was included in the model, enabling us to study the coma of Hartley 2 in detail. The different gas production rates from each area were found by fitting spectra computed using a line-by-line non-LTE radiative transfer model to the HRI observations. The presence of icy grains with long lifetimes, which are pushed anti-sunward by radiation pressure, explains the observed OH asymmetry with enhancement on the night side of the coma.
Abstract:
The scaphoid is the most frequently fractured carpal bone. When investigating fixation stability, which may influence healing, knowledge of the forces and moments acting on the scaphoid is essential. The aim of this study was to evaluate the cartilage contact forces acting on the intact scaphoid in various functional wrist positions using finite element modeling. A novel methodology was utilized in an attempt to overcome some limitations of earlier studies, namely the relatively coarse imaging resolution used to assess geometry, the assumption of idealized cartilage thicknesses, and the neglect of cartilage pre-stresses in the unloaded joint. Carpal bone positions and articular cartilage geometry were obtained independently by means of high-resolution CT imaging and incorporated into finite element (FE) models of the human wrist in eight functional positions. Displacement-driven FE analyses were used to resolve the inter-penetration of cartilage layers, and provided contact areas, forces and pressure distributions for the scaphoid bone. The results were in the range reported by previous studies. Novel findings of this study were: (i) cartilage thickness was found to be heterogeneous for each bone and to vary considerably between carpal bones; (ii) this heterogeneity largely influenced the FE results; and (iii) the forces acting on the scaphoid in the unloaded wrist were found to be significant. As major limitations, the accuracy of the method was found to be relatively low, and the results could not be compared to independent experiments. The obtained results will be used in a follow-up study to evaluate existing and recently developed screws used to fix scaphoid fractures.