5 results for Variability Modeling
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Ocular anatomy and radiation-associated toxicities provide unique challenges for external beam radiation therapy. For treatment planning, precise modeling of the organs at risk and the tumor volume is crucial. Development of a precise eye model and automatic adaptation of this model to patients' anatomy remain problematic because of organ shape variability. This work introduces the application of a 3-dimensional (3D) statistical shape model as a novel method for precise eye modeling for external beam radiation therapy of intraocular tumors.
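A statistical shape model of this kind is commonly built as a mean shape plus weighted principal modes of variation learned from a training set. The sketch below illustrates that idea with random stand-in data and invented dimensions; it is not the authors' eye model, which would be trained on corresponding landmarks from segmented patient anatomy.

```python
import numpy as np

# Minimal sketch of a PCA-based statistical shape model (SSM).
# Training "shapes" here are random stand-ins for illustration only.

rng = np.random.default_rng(0)
n_shapes, n_points = 20, 50                  # training shapes, 3D landmarks each

shapes = rng.normal(size=(n_shapes, n_points * 3))   # each row: flattened shape
mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape

# Principal modes of shape variation via SVD of the centered training set.
_, s, vt = np.linalg.svd(centered, full_matrices=False)
modes = vt                                   # rows: eigenmodes of variation
stddevs = s / np.sqrt(n_shapes - 1)          # per-mode standard deviations

# A new plausible shape: the mean plus a weighted sum of leading modes;
# coefficients b are expressed in units of per-mode standard deviation.
b = np.array([1.0, -0.5])
new_shape = mean_shape + (b * stddevs[:2]) @ modes[:2]
landmarks = new_shape.reshape(n_points, 3)   # back to (n_points, 3) coordinates
```

Fitting such a model to a new patient then amounts to finding the mode coefficients (and a rigid pose) that best match the observed anatomy.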
Abstract:
The goals of the present study were to model the population kinetics of in vivo influx and efflux processes of grepafloxacin at the serum-cerebrospinal fluid (CSF) barrier and to propose a simulation-based approach to optimize the design of dose-finding trials in the meningitis rabbit model. Twenty-nine rabbits with pneumococcal meningitis receiving grepafloxacin at 15 mg/kg of body weight (intravenous administration at 0 h), 30 mg/kg (at 0 h), or 50 mg/kg twice (at 0 and 4 h) were studied. A three-compartment population pharmacokinetic model was fit to the data with the program NONMEM (Nonlinear Mixed Effects Modeling). Passive diffusion clearance (CL(diff)) and active efflux clearance (CL(active)) are transfer kinetic modeling parameters. Influx clearance is assumed to be equal to CL(diff), and efflux clearance is the sum of CL(diff), CL(active), and bulk flow clearance (CL(bulk)). The average influx clearance for the population was 0.0055 ml/min (interindividual variability, 17%). Passive diffusion clearance was greater in rabbits receiving grepafloxacin at 15 mg/kg than in those treated with higher doses (0.0088 versus 0.0034 ml/min). Assuming a CL(bulk) of 0.01 ml/min, CL(active) was estimated to be 0.017 ml/min (11%), and clearance by total efflux was estimated to be 0.032 ml/min. The population kinetic model makes it possible not only to quantify in vivo efflux and influx mechanisms at the serum-CSF barrier but also to analyze the effects of different dose regimens on transfer kinetic parameters in the rabbit meningitis model. The modeling-based approach also provides a tool for simulating and predicting outcomes of interest to researchers, which has great potential for the design of dose-finding trials.
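The clearance bookkeeping stated in the abstract can be checked directly: influx equals passive diffusion, and total efflux is the sum of the diffusion, active-efflux, and bulk-flow terms. The snippet below only reproduces that arithmetic with the reported population estimates; the compartmental fitting itself was done in NONMEM and is not reproduced here.

```python
# Clearance bookkeeping from the reported population estimates.
CL_diff = 0.0055     # passive diffusion clearance (ml/min)
CL_bulk = 0.01       # assumed CSF bulk-flow clearance (ml/min)
CL_active = 0.017    # estimated active efflux clearance (ml/min)

CL_influx = CL_diff                          # influx: passive diffusion only
CL_efflux = CL_diff + CL_active + CL_bulk    # efflux: sum of all three terms

# Total efflux comes to 0.0325 ml/min, consistent (within rounding)
# with the 0.032 ml/min reported in the abstract.
print(CL_influx, CL_efflux)
```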
Abstract:
Argininosuccinic aciduria (ASA) is an autosomal recessive urea cycle disorder caused by deficiency of argininosuccinate lyase (ASL), with a clinical spectrum ranging from asymptomatic courses to severe, life-threatening neonatal-onset hyperammonemia. We investigated the role of ASL transcript variants in the clinical and biochemical variability of ASA. Recombinant proteins for ASL wild type, mutant p.E189G, and the frequently occurring transcript variants with exon 2 or 7 deletions were (co-)expressed in human embryonic kidney 293T cells. We found that exon 2-deleted ASL forms a stable truncated protein with no relevant activity but a dose-dependent dominant negative effect on enzymatic activity after co-expression with wild type or mutant ASL, whereas exon 7-deleted ASL is unstable but nevertheless seems to have a dominant negative effect on mutant ASL. These findings were supported by structural modeling predictions for ASL heterotetramer/homotetramer formation. Illustrating the physiological relevance, the predominant occurrence of exon 7-deleted ASL was found in two patients who were both heterozygous for the ASL mutant p.E189G. Our results suggest that ASL transcripts can contribute to the highly variable phenotype in ASA patients if expressed at high levels. In particular, the exon 2-deleted ASL variant may form a heterotetramer with wild type or mutant ASL, causing markedly reduced ASL activity.
Abstract:
By means of fixed-links modeling, the present study identified different processes of visual short-term memory (VSTM) functioning and investigated how these processes are related to intelligence. We conducted an experiment in which the participants were presented with a color change detection task. Task complexity was manipulated by varying the number of presented stimuli (set size). We collected hit rate and reaction time (RT) as indicators of the amount of information retained in VSTM and the speed of VSTM scanning, respectively. Due to the impurity of these measures, however, the variability in hit rate and RT was assumed to consist not only of genuine variance due to individual differences in VSTM retention and VSTM scanning but also of other, non-experimental portions of variance. Therefore, we identified two qualitatively different types of components for both hit rate and RT: (1) non-experimental components representing processes that remained constant irrespective of set size and (2) experimental components reflecting processes that increased as a function of set size. For RT, intelligence was negatively associated with the non-experimental components, but was unrelated to the experimental components assumed to represent variability in VSTM scanning speed. This finding indicates that individual differences in basic processing speed, rather than in speed of VSTM scanning, differentiate between individuals of higher and lower intelligence. For hit rate, the experimental component constituting individual differences in VSTM retention was positively related to intelligence. The non-experimental components of hit rate, representing variability in basal processes, however, were not associated with intelligence. By decomposing VSTM functioning into non-experimental and experimental components, significant associations with intelligence were revealed that otherwise might have been obscured.
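The core decomposition can be illustrated with a deliberately simplified stand-in: regressing each participant's RT on set size separates a constant term (mimicking the non-experimental component) from a set-size-dependent slope (mimicking the experimental component). The actual study used fixed-links structural equation models, not per-subject regression, and all quantities below are synthetic.

```python
import numpy as np

# Simplified stand-in for the fixed-links decomposition, on synthetic data.
rng = np.random.default_rng(1)
n_subj = 100
set_sizes = np.array([2.0, 4.0, 6.0, 8.0])

basic_speed = rng.normal(600, 80, n_subj)   # constant component (ms)
scan_rate = rng.normal(40, 10, n_subj)      # set-size-dependent (ms/item)
rt = (basic_speed[:, None] + scan_rate[:, None] * set_sizes
      + rng.normal(0, 20, (n_subj, set_sizes.size)))

# Per-subject least squares: rt ~ intercept + slope * set_size.
X = np.column_stack([np.ones_like(set_sizes), set_sizes])
(intercepts, slopes), *_ = np.linalg.lstsq(X, rt.T, rcond=None)

# The recovered intercepts track the constant component and the slopes
# the set-size-dependent one; either could then be correlated with an
# intelligence score, as in the study's analysis.
print(np.corrcoef(intercepts, basic_speed)[0, 1])
```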
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented, as the error model is constructed directly for the quantity of interest rather than for the state of the system. Also, the dimensionality reduction performed by FPCA enables a diagnosis of the error model's quality by assessing the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
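The workflow can be sketched in a toy form: discretized curves stand in for functional data, plain PCA plays the role of FPCA, and a linear map replaces the machine-learning regressor. All data, the proxy's bias, and the dimensions below are synthetic illustrations, not the authors' setup.

```python
import numpy as np

# Toy proxy-to-exact error model: learn a map from proxy PCA scores to
# exact PCA scores on a learning set, then predict exact curves from
# proxy responses alone.

rng = np.random.default_rng(2)
n_learn, n_t, k = 40, 60, 2
t = np.linspace(0.0, 1.0, n_t)

# Learning set: "exact" response curves and a biased "proxy" of each.
params = rng.normal(size=(n_learn, 2))
exact = params[:, :1] * np.sin(2 * np.pi * t) + params[:, 1:] * t
proxy = 0.8 * exact + 0.1 * t                # proxy introduces a bias

def pca(curves, k):
    """Mean curve, leading k basis functions, and scores of the set."""
    mean = curves.mean(axis=0)
    _, _, vt = np.linalg.svd(curves - mean, full_matrices=False)
    return mean, vt[:k], (curves - mean) @ vt[:k].T

mean_p, basis_p, scores_p = pca(proxy, k)
mean_e, basis_e, scores_e = pca(exact, k)

# Error model: linear map from proxy scores to exact scores.
W, *_ = np.linalg.lstsq(scores_p, scores_e, rcond=None)

# New realization: predict its exact curve from its proxy response alone.
a, b = rng.normal(size=2)
new_exact = a * np.sin(2 * np.pi * t) + b * t
new_proxy = 0.8 * new_exact + 0.1 * t
pred_exact = mean_e + ((new_proxy - mean_p) @ basis_p.T @ W) @ basis_e

err = np.abs(pred_exact - new_exact).max()
print(err)  # tiny here, since this toy proxy is exactly linear in the exact model
```

In the real setting the regressor would be trained on solver output, and the retained FPCA variance would serve as the diagnostic of learning-set informativeness mentioned above.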