902 results for Physiologically-based pharmacokinetic modeling


Relevance:

30.00%

Publisher:

Abstract:

The NIMH's new strategic plan, with its emphasis on the "4P's" (Prediction, Pre-emption, Personalization, and Populations) and biomarker-based medicine, requires a radical shift in animal modeling methodology. In particular, 4P's models will be non-deterministic (i.e., disease severity will depend on secondary environmental and genetic factors) and validated by reverse-translation of animal homologues to human biomarkers. A powerful consequence of the biomarker approach is that closely related disorders each have a unique fingerprint of biomarkers. Animals can be validated as a highly specific model of a single disorder by matching this 'fingerprint', or as a model of a symptom seen in multiple disorders by matching common biomarkers. Here we illustrate this approach with two Abnormal Repetitive Behaviors (ARBs) in mice: stereotypies and barbering (hair pulling). We developed animal versions of the neuropsychological biomarkers that distinguish human ARBs and tested the fingerprints of the different mouse ARBs. As predicted, the two mouse ARBs were associated with different biomarkers. Both barbering and stereotypy could be discounted as models of OCD (even though they are widely used as such), due to the absence of the limbic biomarkers that are characteristic of OCD and hence necessary for a valid model. Conversely, barbering matched the fingerprint of trichotillomania (i.e., selective deficits in set-shifting), suggesting it may be a highly specific model of this disorder. In contrast, stereotypies were correlated only with a biomarker (deficits in response shifting) that correlates with stereotypies in multiple disorders, suggesting that animal stereotypies model stereotypies across multiple disorders.

Relevance:

30.00%

Publisher:

Abstract:

As lightweight and slender structural elements are used more frequently in design, large-scale structures become more flexible and susceptible to excessive vibrations. To ensure the functionality of the structure, the dynamic properties of the occupied structure need to be estimated during the design phase. Traditional analysis methods model occupants simply as additional mass; however, research has shown that human occupants are better modeled as an additional degree of freedom. In the United Kingdom, active and passive crowd models have been proposed by the Joint Working Group (JWG) as a result of a series of analytical and experimental studies. These crowd models are expected to yield a more accurate estimate of the dynamic response of the occupied structure. However, experimental testing recently conducted through a graduate student project at Bucknell University indicated that the proposed passive crowd model might not accurately represent the occupants' impact on the structure. The objective of this study is to assess the validity of the crowd models proposed by the JWG by comparing the dynamic properties obtained from experimental testing data with analytical modeling results. The experimental data used in this study were collected by Firman in 2010. The analytical results were obtained by performing a time-history analysis on a finite element model of the occupied structure. The crowd models were created based on the recommendations of the JWG combined with the physical properties of the occupants during the experimental study. SAP2000 was used to create the finite element models and to run the analyses; Matlab and ME'scope were used to obtain the dynamic properties of the structure by processing the time-history analysis results from SAP2000.
The results of this study indicate that the active crowd model can quite accurately represent the impact of occupants standing with bent knees, while the passive crowd model could not properly simulate the dynamic response of the structure when occupants were standing straight or sitting. Future work involves improving the passive crowd model and evaluating the crowd models against full-scale structure models and operating data.
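The occupant-as-extra-degree-of-freedom idea can be illustrated with a minimal two-DOF sketch. All masses and stiffnesses below are hypothetical round numbers, not the Bucknell test structure or JWG model values: attaching an occupant "oscillator" to an SDOF structure splits the response into two coupled modes, shifting the structure's apparent natural frequency.

```python
import numpy as np

# Hypothetical parameters (not from the study): a structure modeled as an
# SDOF system, with an occupant attached as a second degree of freedom.
m_s, k_s = 2000.0, 8.0e6   # structure modal mass (kg) and stiffness (N/m)
m_h, k_h = 75.0, 4.0e4     # occupant mass (kg) and "spring" stiffness (N/m)

# Mass and stiffness matrices of the coupled 2-DOF system
M = np.diag([m_s, m_h])
K = np.array([[k_s + k_h, -k_h],
              [-k_h,       k_h]])

# Natural frequencies from the generalized eigenvalue problem K x = w^2 M x
w2 = np.linalg.eigvals(np.linalg.inv(M) @ K)
freqs = np.sort(np.sqrt(w2.real)) / (2 * np.pi)   # Hz, ascending

f_empty = np.sqrt(k_s / m_s) / (2 * np.pi)        # empty structure, for reference
print(f_empty, freqs)
```

The occupied system shows one mode below and one mode above the empty-structure frequency, which is the qualitative behavior that distinguishes a DOF model of the crowd from a simple added-mass model.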

Relevance:

30.00%

Publisher:

Abstract:

Dimensional modeling, GT-Power in particular, has been used for two related purposes: to quantify and understand the inaccuracies of transient engine flow estimates that cause transient smoke spikes, and to improve empirical models of opacity or particulate matter used for engine calibration. Dimensional modeling indicated that the exhaust gas recirculation flow rate was significantly underestimated and the volumetric efficiency overestimated by the electronic control module during the turbocharger lag period of an electronically controlled heavy-duty diesel engine. Factoring in cylinder-to-cylinder variation, the electronic control module's estimated fuel-oxygen ratio was shown to be lower than actual by up to 35% during the turbocharger lag period but within 2% of actual elsewhere, thus hindering fuel-oxygen-ratio-limit-based smoke control. The dimensional modeling of transient flow was enabled by a new method of simulating transient data in which the manifold pressures and exhaust gas recirculation system flow resistance, characterized as a function of exhaust gas recirculation valve position at each measured transient data point, were replicated by quasi-static or transient simulation to predict engine flows. Dimensional modeling was also used to transform the engine-operating-parameter model input space into a more fundamental, lower-dimensional space so that a nearest-neighbor approach could be used to predict smoke emissions. This new approach, intended for engine calibration and control modeling, was termed the "nonparametric reduced dimensionality" approach. It predicted federal test procedure cumulative particulate matter to within 7% of the measured value, based solely on steady-state training data. Very little correlation between the model inputs was observed in the transformed space compared with the engine-operating-parameter space.
This more uniform, compact model input space may explain how the nonparametric reduced dimensionality model could successfully predict federal test procedure emissions even though roughly 40% of all transient points were classified as outliers relative to the steady-state training data.
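The "nonparametric reduced dimensionality" idea — project the inputs into a lower-dimensional space, then predict by nearest neighbor — can be sketched as follows. This is a generic illustration on synthetic data using a PCA projection; the paper's actual transformation and engine variables are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for steady-state training data: five correlated
# "operating parameters" driven by two fundamental dimensions, and a
# smooth emissions-like response (all hypothetical).
n, d = 200, 5
latent = rng.normal(size=(n, 2))
X_train = latent @ rng.normal(size=(2, d)) + 0.01 * rng.normal(size=(n, d))
y_train = latent[:, 0] ** 2 + latent[:, 1]

def reduce(X, mean, components):
    """Project centered inputs onto the leading principal directions."""
    return (X - mean) @ components.T

# PCA via SVD of the centered training inputs; keep 2 dimensions
mean = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
components = Vt[:2]

Z_train = reduce(X_train, mean, components)

def predict_nn(x_new):
    """1-nearest-neighbor prediction in the reduced space."""
    z = reduce(x_new[None, :], mean, components)
    i = np.argmin(np.linalg.norm(Z_train - z, axis=1))
    return y_train[i]

# Querying at a training point recovers its own response
print(predict_nn(X_train[0]))
```

The reduced space strips the redundancy among correlated inputs, so Euclidean nearest-neighbor lookups become meaningful, which is the property the abstract's transformed space exploits.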

Relevance:

30.00%

Publisher:

Abstract:

MRI-based medical image analysis for brain tumor studies is gaining attention due to the increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods to the analysis of brain tumor images date back almost two decades, current methods are becoming more mature and approaching routine clinical application. This review aims to provide a comprehensive overview, beginning with a brief introduction to brain tumors and their imaging. We then review the state of the art in segmentation, registration, and modeling of tumor-bearing brain images, with a focus on gliomas. The objective of segmentation is to outline the tumor, including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is handling the morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods applicable to standard clinical imaging protocols. Finally, we critically assess the current state and address future developments and trends, giving special attention to recent developments in radiological tumor assessment guidelines.

Relevance:

30.00%

Publisher:

Abstract:

The use of metal chelators is becoming increasingly important in the development of new tracers for molecular imaging. With the rise of nanotechnology, the fusion of the two technologies has shown great potential for clinical applications. The pharmacokinetics of nanoparticles can be monitored via positron emission tomography (PET) after surface modification and radiolabeling with positron-emitting radionuclides. Different metal-ion chelators can be used to facilitate labeling with the radionuclides, and as a prerequisite, an optimized radiolabeling procedure is necessary to prevent nanoparticle aggregation and degradation. However, the effects of chelator modification on nanoparticle pharmacokinetic properties have not been well studied, and no studies to date have compared the biological effects of different chelators used in the surface modification of nanoparticles.

Relevance:

30.00%

Publisher:

Abstract:

11beta-Hydroxysteroid dehydrogenase (11beta-HSD) enzymes catalyze the conversion of biologically inactive 11-ketosteroids into their active 11beta-hydroxy derivatives and vice versa. Inhibition of 11beta-HSD1 has considerable therapeutic potential for glucocorticoid-associated conditions including obesity, diabetes, impaired wound healing, and muscle atrophy. Because inhibition of the related enzymes 11beta-HSD2 and 17beta-HSDs causes sodium retention and hypertension or interferes with sex steroid hormone metabolism, respectively, highly selective 11beta-HSD1 inhibitors are required for successful therapy. Here, we employed the software package Catalyst to develop ligand-based multifeature pharmacophore models for 11beta-HSD1 inhibitors. Virtual screening experiments and subsequent in vitro evaluation of promising hits revealed several selective inhibitors. Efficient inhibition of recombinant human 11beta-HSD1, both in intact transfected cells and of the endogenous enzyme in mouse 3T3-L1 adipocytes and C2C12 myotubes, was demonstrated for compound 27, which was able to block subsequent cortisol-dependent activation of glucocorticoid receptors with only minor direct effects on the receptor itself. Our results suggest that inhibitor-based pharmacophore models for 11beta-HSD1, in combination with suitable cell-based activity assays (including assays for related enzymes), can be used to identify selective and potent inhibitors.

Relevance:

30.00%

Publisher:

Abstract:

Bone research is limited by the methods available for detecting changes in bone metabolism. While dual X-ray absorptiometry is rather insensitive, biochemical markers are subject to significant intra-individual variation. In the study presented here, we evaluated the isotopic labeling of bone using 41Ca, a long-lived radiotracer, as an alternative approach. After successful labeling of the skeleton, changes in the pattern of urinary 41Ca excretion are expected to directly reflect changes in bone Ca metabolism. A minute amount of 41Ca (100 nCi) was administered orally to 22 postmenopausal women. Kinetics of tracer excretion were assessed by monitoring changes in urinary 41Ca/40Ca isotope ratios for up to 700 days post-dosing using accelerator mass spectrometry and resonance ionization mass spectrometry. Isotopic labeling of the skeleton was evaluated by two different approaches: (i) urinary 41Ca data were fitted for each individual to an established function consisting of an exponential term and a power-law term; (ii) 41Ca data were analyzed by population pharmacokinetic (NONMEM) analysis to identify a compartmental model that describes urinary 41Ca tracer kinetics. A linear three-compartment model with a central compartment and two sequential peripheral compartments was found to best fit the 41Ca data. Fits based on the combined exponential/power-law function describing urinary tracer excretion showed substantially higher deviations between predicted and measured values than fits based on the compartmental modeling approach. When the urinary 41Ca excretion pattern was established using data points up to day 500 and the curves extrapolated to day 700, the calculated 41Ca/40Ca isotope ratios in urine were significantly lower than the observed ratios for both techniques. Compartmental analysis can overcome this limitation.
By identifying relative changes in transfer rates between compartments in response to an intervention, inaccuracies in the underlying model cancel out. Changes in tracer distribution between compartments were modeled based on identified kinetic parameters. While changes in bone formation and resorption can, in principle, be assessed by monitoring urinary 41Ca excretion over the first few weeks post-dosing, assessment of an intervention effect is more reliable approximately 150 days post-dosing, when the excreted tracer originates mainly from bone.
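A linear three-compartment catenary model of this kind can be simulated directly with a matrix exponential. The rate constants below are illustrative placeholders, not the fitted NONMEM estimates; the state vector tracks the central pool, two sequential peripheral (bone) pools, and cumulative urinary excretion.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative first-order rate constants (1/day) — hypothetical values,
# not the study's fitted parameters.
k10 = 0.5    # central -> urine (excretion)
k12 = 0.3    # central -> peripheral 1
k21 = 0.05   # peripheral 1 -> central
k23 = 0.02   # peripheral 1 -> peripheral 2 (deep bone pool)
k32 = 0.001  # peripheral 2 -> peripheral 1

# State: [central, peripheral 1, peripheral 2, cumulative urine]
A = np.array([
    [-(k10 + k12),  k21,          0.0,  0.0],
    [ k12,         -(k21 + k23),  k32,  0.0],
    [ 0.0,          k23,         -k32,  0.0],
    [ k10,          0.0,          0.0,  0.0],
])

def amounts(t, dose=100.0):
    """Tracer amount in each pool t days after a single oral dose (nCi)."""
    q0 = np.array([dose, 0.0, 0.0, 0.0])
    return expm(A * t) @ q0

q = amounts(150.0)
print(q)  # at late times most tracer is excreted; the rest sits in bone pools
```

Because the system is linear, the late-time urinary signal is dominated by slow release from the deep pool, which is why the abstract's intervention assessment becomes more reliable around day 150.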

Relevance:

30.00%

Publisher:

Abstract:

Genomic alterations have been linked to the development and progression of cancer. The technique of Comparative Genomic Hybridization (CGH) yields data consisting of fluorescence intensity ratios of test and reference DNA samples. The intensity ratios provide information about DNA copy number. Practical issues such as the contamination of tumor cells in tissue specimens and normalization errors necessitate the use of statistics for learning about the genomic alterations from array-CGH data. As increasing amounts of array-CGH data become available, there is a growing need for automated algorithms for characterizing genomic profiles. Specifically, there is a need for algorithms that can identify copy-number gains and losses based on statistical considerations, rather than merely detect trends in the data. We adopt a Bayesian approach, relying on the hidden Markov model to account for the inherent dependence in the intensity ratios. Posterior inferences are made about gains and losses in copy number. Localized amplifications (associated with oncogene mutations) and deletions (associated with mutations of tumor suppressors) are identified using posterior probabilities. Global trends such as extended regions of altered copy number are detected. Since the posterior distribution is analytically intractable, we implement a Metropolis-within-Gibbs algorithm for efficient simulation-based inference. Publicly available data on pancreatic adenocarcinoma, glioblastoma multiforme and breast cancer are analyzed, and comparisons are made with some widely used algorithms to illustrate the reliability and success of the technique.
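The HMM idea can be shown with a much-simplified sketch: a three-state chain (loss/neutral/gain) with Gaussian emissions for log-ratios, and posterior state probabilities computed by the forward-backward recursion. The paper's fully Bayesian Metropolis-within-Gibbs treatment also estimates the parameters; here the means, transition matrix, and initial distribution are fixed, illustrative values.

```python
import numpy as np
from scipy.stats import norm

# Three hidden copy-number states per probe: 0 = loss, 1 = neutral, 2 = gain.
# Emission means/SD for log2 intensity ratios are illustrative choices.
means, sd = np.array([-0.6, 0.0, 0.6]), 0.2
trans = np.array([[0.90, 0.09, 0.01],
                  [0.05, 0.90, 0.05],
                  [0.01, 0.09, 0.90]])
start = np.array([0.1, 0.8, 0.1])

def posteriors(ratios):
    """Forward-backward posterior state probabilities for each probe."""
    e = norm.pdf(np.asarray(ratios)[:, None], means[None, :], sd)
    n, k = e.shape
    alpha, beta = np.zeros((n, k)), np.ones((n, k))
    alpha[0] = start * e[0]; alpha[0] /= alpha[0].sum()
    for t in range(1, n):                       # forward pass (normalized)
        alpha[t] = (alpha[t - 1] @ trans) * e[t]
        alpha[t] /= alpha[t].sum()
    for t in range(n - 2, -1, -1):              # backward pass (normalized)
        beta[t] = trans @ (beta[t + 1] * e[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# A short segment with an apparent gain in the middle
p = posteriors([0.0, 0.05, 0.55, 0.6, 0.62, 0.0, -0.05])
print(p.argmax(axis=1))  # most probable state per probe
```

The Markov dependence is what lets neighboring probes reinforce a call, identifying an extended gained region rather than flagging isolated noisy ratios.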

Relevance:

30.00%

Publisher:

Abstract:

Generalized linear mixed models (GLMMs) provide an elegant framework for the analysis of correlated data. Because the likelihood has no closed form, GLMMs are often fit by computational procedures like penalized quasi-likelihood (PQL). Special cases of these models are generalized linear models (GLMs), which are often fit using algorithms like iterative weighted least squares (IWLS). High computational costs and memory constraints often make it difficult to apply these iterative procedures to data sets with a very large number of cases. This paper proposes a computationally efficient strategy based on the Gauss-Seidel algorithm that iteratively fits sub-models of the GLMM to subsetted versions of the data. Additional gains in efficiency are achieved for Poisson models, commonly used in disease mapping problems, because their special collapsibility property allows data reduction through summaries. Convergence of the proposed iterative procedure is guaranteed for canonical link functions. The strategy is applied to investigate the relationship between ischemic heart disease, socioeconomic status and age/gender category in New South Wales, Australia, based on outcome data consisting of approximately 33 million records. A simulation study demonstrates the algorithm's reliability in analyzing a data set with 12 million records for a (non-collapsible) logistic regression model.
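The collapsibility property for Poisson models can be demonstrated concretely: fitting by IWLS on full individual-level records, and on per-covariate-pattern summaries with the cell counts entering as exposures, yields identical estimates. The data and model below are synthetic; this is a sketch of the property, not the paper's Gauss-Seidel GLMM algorithm.

```python
import numpy as np

def poisson_iwls(X, y, weights=None, n_iter=25):
    """Poisson regression (log link) via iterative weighted least squares.
    `weights` acts as a multiplicative exposure on the mean."""
    w_prior = np.ones(len(y)) if weights is None else weights
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = w_prior * np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu           # working response
        W = mu                                  # working weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(1)
# Individual-level data: a binary covariate, Poisson outcome
x = rng.integers(0, 2, size=100_000)
y = rng.poisson(np.exp(0.5 + 0.8 * x))
X = np.column_stack([np.ones_like(x, dtype=float), x.astype(float)])
beta_full = poisson_iwls(X, y.astype(float))

# Collapsed data: one row per covariate pattern; outcomes summed,
# cell counts supplied as exposures.
X_c = np.array([[1.0, 0.0], [1.0, 1.0]])
y_c = np.array([y[x == 0].sum(), y[x == 1].sum()], dtype=float)
n_c = np.array([(x == 0).sum(), (x == 1).sum()], dtype=float)
beta_collapsed = poisson_iwls(X_c, y_c, weights=n_c)

print(beta_full, beta_collapsed)  # agree up to numerical precision
```

Here 100,000 rows collapse to two, which is exactly the kind of data reduction that makes the 33-million-record disease-mapping fit feasible.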

Relevance:

30.00%

Publisher:

Abstract:

Many seemingly disparate approaches for marginal modeling have been developed in recent years. We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models proposed herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized models for binary data. A diverse collection of didactic mathematical and numerical examples is given to illustrate these concepts.
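A minimal instance of such a latent-threshold copula model for one correlated binary pair: the marginal probabilities fix the thresholds of latent standard normals, and a Gaussian copula correlation induces the joint distribution. The marginals and correlation below are illustrative values, not any model from the paper.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Marginal success probabilities and latent correlation (illustrative)
p1, p2, rho = 0.30, 0.40, 0.50

# Thresholds reproduce the marginals exactly: P(Z_j > c_j) = p_j
c1, c2 = norm.ppf(1 - p1), norm.ppf(1 - p2)

def joint_probs(c1, c2, rho):
    """Cell probabilities of the 2x2 table implied by the latent model:
    Y_j = 1 iff the latent standard normal Z_j exceeds its threshold."""
    cov = [[1.0, rho], [rho, 1.0]]
    # P(Z1 > c1, Z2 > c2) = P(-Z1 < -c1, -Z2 < -c2), same correlation
    p11 = multivariate_normal(cov=cov).cdf([-c1, -c2])
    p10 = (1 - norm.cdf(c1)) - p11
    p01 = (1 - norm.cdf(c2)) - p11
    p00 = 1 - p11 - p10 - p01
    return p11, p10, p01, p00

p11, p10, p01, p00 = joint_probs(c1, c2, rho)
print(p11, p10, p01, p00)
```

The marginal parameters retain their population-level interpretation no matter what the copula correlation is, which is the appeal of this construction for marginal fixed-effects estimation.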

Relevance:

30.00%

Publisher:

Abstract:

Copper (Cu) and its alloys are used extensively in domestic and industrial applications. Cu is also an essential element in mammalian nutrition. Since both copper deficiency and copper excess produce adverse health effects, the dose-response curve is U-shaped, although its precise form has not yet been well characterized. Many animal and human studies have been conducted on copper, providing a rich database from which data suitable for modeling the copper dose-response relationship may be extracted. Possible dose-response modeling strategies are considered in this review, including those based on the benchmark dose and on categorical regression. The usefulness of biologically based dose-response modeling techniques in understanding copper toxicity is difficult to assess at this time, since the mechanisms underlying copper-induced toxicity have yet to be fully elucidated. A dose-response modeling strategy for copper toxicity associated with both deficiency and excess is proposed. This strategy was applied to multiple studies of copper-induced toxicity, standardized with respect to severity of adverse health outcomes and selected on the basis of criteria reflecting the quality and relevance of individual studies. The use of a comprehensive database on copper-induced toxicity is essential for dose-response modeling, since no single study contains sufficient information to adequately characterize the copper dose-response relationship. The strategy envisioned here is designed to determine whether the existing toxicity data for copper excess or deficiency can be used effectively to define the limits of the homeostatic range in humans and other species. By considering alternative techniques for determining a point of departure and for low-dose extrapolation (including categorical regression, the benchmark dose, and identification of observed no-effect levels), this strategy will identify which techniques are most suitable for this purpose.
This analysis also serves to identify areas in which additional data are needed to better define the characteristics of dose-response relationships for copper-induced toxicity in relation to excess or deficiency.
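The benchmark-dose technique mentioned above can be sketched for a quantal logistic dose-response: the BMD is the dose at which extra risk over background reaches the benchmark response (here 10%). The fitted parameters below are hypothetical placeholders, not values derived from any copper dataset.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical fitted logistic parameters (dose on an arbitrary mg/kg-day scale)
a, b = -3.0, 0.05   # intercept and slope on the logit scale

def p(dose):
    """Probability of the adverse response at a given dose."""
    return 1.0 / (1.0 + np.exp(-(a + b * dose)))

def extra_risk(dose):
    """Risk above background, rescaled to the non-background fraction."""
    return (p(dose) - p(0.0)) / (1.0 - p(0.0))

# Benchmark dose: the dose producing 10% extra risk (BMR = 0.10),
# found by root-finding on the extra-risk curve.
bmd10 = brentq(lambda d: extra_risk(d) - 0.10, 0.0, 200.0)
print(round(bmd10, 2))
```

In regulatory practice the point of departure is usually the lower confidence bound on this dose (the BMDL) rather than the point estimate; computing that bound requires the fitting uncertainty, which this sketch omits.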

Relevance:

30.00%

Publisher:

Abstract:

Correspondence establishment is a key step in statistical shape model building. Several automated methods exist for solving this problem in 3D, but they can usually handle only objects with simple topology, like that of a sphere or a disc. We propose an extension to correspondence establishment over a population, based on optimization of the minimum description length function, that allows objects with arbitrary topology to be considered. Instead of using a fixed structure of kernel placement on a sphere for the systematic manipulation of point landmark positions, we rely on an adaptive, hierarchical organization of surface patches. This hierarchy can be built on surfaces of arbitrary topology, and the resulting patches are used as a basis for a consistent, multi-scale modification of the surfaces' parameterization, based on point distribution models. The feasibility of the approach is demonstrated on synthetic models with different topologies.

Relevance:

30.00%

Publisher:

Abstract:

EPON 862 is an epoxy resin that is cured with the hardening agent DETDA to form a crosslinked epoxy polymer used as a component in modern aircraft structures. These crosslinked polymers are often exposed to prolonged periods at temperatures below the glass-transition range, which causes physical aging to occur. Because physical aging can compromise the performance of epoxies and their composites, and because experimental techniques cannot provide all of the physical insight needed to fully understand physical aging, efficient computational approaches for predicting the effects of physical aging on thermo-mechanical properties are needed. In this study, Molecular Dynamics and Molecular Minimization simulations are used to establish well-equilibrated, validated molecular models of the EPON 862-DETDA epoxy system over a range of crosslink densities using a united-atom force field. These simulations are subsequently used to predict the glass transition temperature, thermal expansion coefficients, and elastic properties of each of the crosslinked systems for validation of the modeling techniques. The results indicate that the glass transition temperature and elastic properties increase with increasing crosslink density, while the thermal expansion coefficient decreases with crosslink density both above and below the glass transition temperature. The results also indicate that there may be an upper limit to the crosslink density that can realistically be achieved in epoxy systems. After evaluation of the thermo-mechanical properties, a method is developed to efficiently establish molecular models of epoxy resins that represent the corresponding real molecular structure at specific aging times. Although this approach does not model the physical aging process itself, it is useful for establishing a molecular model that resembles the physically aged state, for further use in predicting thermo-mechanical properties as a function of aging time.
An equation has been derived from the results that directly correlates aging time with the aged volume of the molecular model. This equation can be helpful for modelers who want to study the properties of epoxy resins at different levels of aging but have little information about the volume shrinkage that occurs during physical aging.

Relevance:

30.00%

Publisher:

Abstract:

The ability of anesthetic agents to provide adequate analgesia and sedation is limited by the ventilatory depression associated with overdosing in spontaneously breathing patients. Quantitation of drug-induced ventilatory depression is therefore a pharmacokinetic-pharmacodynamic problem relevant to the practice of anesthesia. Although several studies describe the effect of respiratory depressant drugs on isolated endpoints, an integrated description of drug-induced respiratory depression with parameters identifiable from clinically available data is not available. This study proposes a physiological model of CO2 disposition, ventilatory regulation, and the effects of anesthetic agents on the control of breathing. The predictive performance of the model is evaluated through simulations aimed at reproducing experimental observations of drug-induced hypercarbia and hypoventilation associated with intravenous administration of a fast-onset, highly potent mu-opioid agonist (including previously unpublished experimental data obtained after administration of a 1 mg alfentanil bolus). The proposed model structure has substantial descriptive capability and can provide clinically relevant predictions of respiratory inhibition in the non-steady state, enhancing the safety of drug delivery in anesthetic practice.
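A drastically simplified closed-loop sketch of the modeled mechanism: a single lumped CO2 store whose elimination depends on ventilation, with ventilation set by a linear CO2 response curve whose gain the drug depresses. All parameter values are illustrative round numbers, not the paper's physiological model.

```python
import numpy as np
from scipy.integrate import odeint

V_store = 15.0   # effective body CO2 storage volume (L) — illustrative
vco2 = 0.2       # metabolic CO2 production (L/min)
gain = 2.0       # chemoreflex gain (L/min per mmHg)
apnea_b = 35.0   # apneic threshold (mmHg)

def ventilation(paco2, depression):
    """Linear CO2 response curve; the drug effect (0..1) scales the gain down."""
    return max(0.5, (1.0 - depression) * gain * (paco2 - apnea_b))

def dpaco2_dt(state, t, depression):
    paco2 = state[0]
    ve = ventilation(paco2, depression)
    elimination = ve * paco2 / 863.0          # alveolar CO2 output (L/min)
    return [(vco2 - elimination) * 863.0 / V_store]

t = np.linspace(0.0, 60.0, 601)
awake = odeint(dpaco2_dt, [40.0], t, args=(0.0,))[-1, 0]
dosed = odeint(dpaco2_dt, [40.0], t, args=(0.7,))[-1, 0]
print(awake, dosed)  # depressing the gain shifts the steady state upward
```

Even this toy loop reproduces the qualitative prediction the study is built on: reducing chemoreflex gain moves the closed-loop PaCO2 steady state toward hypercarbia rather than causing a fixed drop in ventilation.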