Abstract:
Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on measured blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities.

Method: A literature and Internet search was performed to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them.

Results: 12 software tools were identified, tested, and ranked, yielding a comprehensive review of the available software's characteristics. The number of drugs handled varies widely, and 8 programs allow the user to add custom drug models. 10 programs can compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 can also suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates such as age, gender, and weight. Among those applying Bayesian analysis, one uses a non-parametric approach. The top 2 programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated (e.g., in terms of storage or report generation) or less user-friendly.

Conclusion: Whereas 2 integrated programs are at the top of the ranked list, such complex tools may not fit all institutions, and each software tool must be considered with respect to the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity of data storage, and report generation.
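As a concrete illustration of the weighted scoring grid described above, the sketch below shows how per-criterion ratings and weights combine into a ranking. The criterion weights, rating scale, and tool names are invented for illustration and are not the study's actual grid.

```python
# Minimal sketch of a weighted scoring grid; weights and ratings are
# illustrative assumptions, not the benchmark's published values.
criteria_weights = {
    "pk_relevance": 0.35,
    "user_friendliness": 0.25,
    "computing": 0.20,
    "interfacing": 0.10,
    "storage": 0.10,
}

def weighted_score(ratings: dict) -> float:
    """Combine per-criterion ratings (0-10) into one weighted score."""
    return sum(criteria_weights[c] * r for c, r in ratings.items())

# Hypothetical ratings for two tools, for illustration only.
tools = {
    "ToolA": {"pk_relevance": 9, "user_friendliness": 7, "computing": 8,
              "interfacing": 5, "storage": 6},
    "ToolB": {"pk_relevance": 7, "user_friendliness": 9, "computing": 7,
              "interfacing": 6, "storage": 8},
}
ranking = sorted(tools, key=lambda t: weighted_score(tools[t]), reverse=True)
print(ranking)
```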
Abstract:
Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities.

Methods: The literature and Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them.

Results: 12 software tools were identified, tested, and ranked, yielding a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and integration of different population types is available in some programs. In addition, 8 programs allow new drug models to be added based on population PK data. 10 tools incorporate Bayesian computation to predict the dosage regimen (individual parameters are calculated based on population PK models). All of them can compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 can also suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top 2 programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly.

Conclusions: Whereas 2 software packages are ranked at the top of the list, such complex tools may not fit all institutions, and each program must be considered with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast to use in routine activities, including for non-experienced users. Although interest in TDM tools is growing and effort has been put into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity of data storage, and automated report generation.
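The Bayesian a posteriori adjustment that most of the benchmarked tools implement amounts to maximum a posteriori (MAP) estimation of individual PK parameters given population priors and one or more measured levels. Below is a minimal sketch in Python, assuming an illustrative one-compartment IV bolus model; all numbers (population means, variabilities, error model) are invented placeholders, not values from any cited software.

```python
# Sketch of Bayesian a posteriori (MAP) individualization for a
# one-compartment IV bolus model. All parameter values are assumptions.
import numpy as np
from scipy.optimize import minimize

# Population priors (log-normal): clearance CL (L/h) and volume V (L).
POP_MEAN = np.log(np.array([5.0, 40.0]))   # ln CL, ln V
POP_SD = np.array([0.3, 0.2])              # between-subject variability
SIGMA = 0.5                                # residual error SD (mg/L)

def conc(theta, dose, times):
    """Predicted concentration after an IV bolus: C = D/V * exp(-CL/V * t)."""
    cl, v = np.exp(theta)
    return dose / v * np.exp(-cl / v * times)

def neg_log_posterior(theta, dose, times, obs):
    pred = conc(theta, dose, times)
    loglik = -0.5 * np.sum(((obs - pred) / SIGMA) ** 2)
    logprior = -0.5 * np.sum(((theta - POP_MEAN) / POP_SD) ** 2)
    return -(loglik + logprior)

# A posteriori adjustment from a single measured level at 12 h.
dose, times, obs = 500.0, np.array([12.0]), np.array([4.1])
fit = minimize(neg_log_posterior, POP_MEAN, args=(dose, times, obs))
cl_i, v_i = np.exp(fit.x)
print(f"Individual CL = {cl_i:.2f} L/h, V = {v_i:.1f} L")
# The individualized parameters then drive the suggested dosage regimen.
```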
Abstract:
Objectives: Acetate brain metabolism has the particularity of occurring specifically in glial cells. Labeling studies using acetate labeled either with 13C (NMR) or 11C (PET) are governed by the same biochemical reactions and thus follow the same mathematical principles. The objective of this study was to adapt an NMR acetate brain metabolism model to analyse [1-11C]acetate infusion in rats.

Methods: Brain acetate infusion experiments were modeled using the two-compartment model approach used in NMR [1-3]. The [1-11C]acetate labeling study was performed using a beta scintillator [4]. The measured radioactive signal represents the time evolution of the sum of all labeled metabolites in the brain. An arterial input curve was measured in parallel using a coincidence counter. The 11C at position C-1 of acetate is metabolized in the first turn of the TCA cycle to position 5 of glutamate (Figure 1A). Through the neurotransmission process, it is further transported to position 5 of glutamine and position 5 of neuronal glutamate. After the second turn of the TCA cycle, tracer from [1-11C]acetate (and also part of the glial [5-11C]glutamate) is transferred to glial [1-11C]glutamate and further to [1-11C]glutamine and neuronal glutamate through the neurotransmission cycle.

Results: The standard acetate two-pool PET model describes the system by a plasma pool and a tissue pool linked by rate constants. The experimental data are not fully described with only one tissue compartment (Figure 1B). The modified NMR model was fitted successfully to tissue time-activity curves from 6 individual animals by varying the glial mitochondrial fluxes and the neurotransmission flux V_nt. A glial composite rate constant K_gtg = V_gtg / [Ace]_plasma was extracted. Considering an average acetate concentration in plasma of 1 mmol/g [5] and the negligible additional amount injected, we found an average V_gtg = 0.08 ± 0.02 (n = 6), in agreement with previous NMR measurements [1]. The tissue time-activity curve is dominated by glial glutamate and later by glutamine (Figure 1B). Labeling of neuronal pools has a low influence, at least over the 20 min of beta-probe acquisition. Owing to the high diffusivity of CO2 across the blood-brain barrier, 11CO2 is not predominant in the total tissue curve, even though the brain CO2 pool is large compared with other metabolites, because of its strong dilution by unlabeled CO2 from neuronal metabolism and diffusion from plasma.

Conclusion: The two-compartment model presented here is also able to fit data from positron emission experiments and to extract specific glial metabolic fluxes. 11C-labeled acetate offers a faster measurement of glial oxidative metabolism than NMR, potentially applicable to human PET imaging. However, quantifying the TCA cycle flux relative to the transmitochondrial flux requires the chemical sensitivity of NMR. PET and NMR are thus complementary.
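To illustrate the kind of compartmental fitting described above, the sketch below fits a generic two-tissue-compartment tracer model to a synthetic time-activity curve. The study's actual model tracks metabolite-specific glial and neuronal pools; the input function, rate constants, and noise level here are assumptions for demonstration only.

```python
# Sketch: fit a generic two-tissue-compartment model to a synthetic
# tissue time-activity curve. Not the study's metabolite-specific model.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def input_fn(t):
    """Synthetic arterial input curve (gamma-variate shape)."""
    return t * np.exp(-t / 2.0)

def tissue_tac(t, K1, k2, k3, k4):
    """Total tissue activity C1(t) + C2(t) for given rate constants."""
    def rhs(tt, c):
        c1, c2 = c
        dc1 = K1 * input_fn(tt) - (k2 + k3) * c1 + k4 * c2
        dc2 = k3 * c1 - k4 * c2
        return [dc1, dc2]
    sol = solve_ivp(rhs, (0, t[-1]), [0.0, 0.0], t_eval=t, rtol=1e-8)
    return sol.y.sum(axis=0)

t = np.linspace(0.01, 20, 60)                   # 20-min beta-probe window
y_true = tissue_tac(t, 0.4, 0.3, 0.15, 0.02)
noisy = y_true + np.random.default_rng(0).normal(0, 0.01, t.size)
popt, _ = curve_fit(tissue_tac, t, noisy, p0=[0.3, 0.2, 0.1, 0.01],
                    bounds=(0, 2))
print("fitted K1, k2, k3, k4:", np.round(popt, 3))
```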
Abstract:
PURPOSE: Ocular anatomy and radiation-associated toxicities provide unique challenges for external beam radiation therapy. For treatment planning, precise modeling of organs at risk and of the tumor volume is crucial. Development of a precise eye model and automatic adaptation of this model to patients' anatomy remain problematic because of organ shape variability. This work introduces the application of a 3-dimensional (3D) statistical shape model as a novel method for precise eye modeling for external beam radiation therapy of intraocular tumors.

METHODS AND MATERIALS: Manual and automatic segmentations were compared for 17 patients, based on head computed tomography (CT) volume scans. A 3D statistical shape model of the cornea, lens, and sclera, as well as of the optic disc position, was developed. Furthermore, an active shape model was built to enable automatic fitting of the eye model to CT slice stacks. Cross-validation was performed with leave-one-out tests over all training shapes, measuring Dice coefficients and mean segmentation errors between the automatic segmentation and manual segmentation by an expert.

RESULTS: Cross-validation revealed a Dice similarity of 95% ± 2% for the sclera and cornea and 91% ± 2% for the lens. The overall mean segmentation error was 0.3 ± 0.1 mm. Average segmentation time was 14 ± 2 s on a standard personal computer.

CONCLUSIONS: Our results show that the presented solution outperforms state-of-the-art methods in terms of accuracy, reliability, and robustness. Moreover, the eye model shape as well as its variability is learned from a training set rather than assumed (eg, as with the spherical or elliptical model). Therefore, the model appears to be capable of modeling nonspherically and nonelliptically shaped eyes.
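For reference, the Dice similarity coefficient reported above is straightforward to compute from binary masks. The following sketch uses synthetic volumes standing in for the CT-based automatic and manual segmentations.

```python
# Sketch of the Dice similarity coefficient on binary masks;
# the volumes here are synthetic, not CT segmentations.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice = 2|A & B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(1)
manual = rng.random((64, 64, 64)) > 0.5   # stand-in for expert mask
auto = manual.copy()
auto[:2] ^= True                          # perturb a slab to mimic error
print(f"Dice similarity: {dice(manual, auto):.3f}")
```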
Abstract:
This research reviews the analysis and modeling of Swiss franc interest rate curves (IRC) using unsupervised (SOM, Gaussian mixtures) and supervised (MLP) machine learning algorithms. IRC are treated as objects embedded in different feature spaces: maturities; maturity-date; and the parameters of the Nelson-Siegel model (NSM). Analysis of the NSM parameters and of their temporal and clustering structures helps in assessing the relevance of the model and its potential use for forecasting. A mapping of IRC in the maturity-date feature space is presented and analyzed for visualization and forecasting purposes.
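The Nelson-Siegel model referred to above compresses each curve into four parameters, y(τ) = β0 + β1·(1 − e^(−τ/λ))/(τ/λ) + β2·[(1 − e^(−τ/λ))/(τ/λ) − e^(−τ/λ)], which can then be clustered or fed to SOM/MLP. Below is a minimal fitting sketch; the sample rates are synthetic, not Swiss franc data.

```python
# Sketch: fit the Nelson-Siegel model to a synthetic yield curve.
import numpy as np
from scipy.optimize import curve_fit

def nelson_siegel(tau, b0, b1, b2, lam):
    """y(tau) = b0 + b1*slope + b2*(slope - exp(-tau/lam)),
    where slope = (1 - exp(-tau/lam)) / (tau/lam)."""
    x = tau / lam
    slope = (1 - np.exp(-x)) / x
    return b0 + b1 * slope + b2 * (slope - np.exp(-x))

maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10])        # years
rates = np.array([0.8, 0.9, 1.1, 1.4, 1.6, 1.9, 2.1, 2.2])   # illustrative %
popt, _ = curve_fit(nelson_siegel, maturities, rates,
                    p0=[2.0, -1.0, 0.5, 1.5])
print("beta0, beta1, beta2, lambda:", np.round(popt, 3))
```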
Abstract:
The overarching goal of this project was to identify and evaluate cognitive and behavioral indices that are sensitive to sleep deprivation, may help identify commercial motor vehicle (CMV) drivers at risk of driving in a sleep-deprived state, and may prove useful in field tests administered by officers. To that end, we evaluated indices of driver physiognomy (e.g., yawning, droopy eyelids) and driver behavioral/cognitive state (e.g., distracted driving), and the sensitivity of these indices to objective measures of sleep deprivation. The measures of sleep deprivation were sampled on repeated occasions over a period of 3.5 months in each of 44 drivers diagnosed with obstructive sleep apnea (OSA) and 22 controls (matched for gender, age within 5 years, education within 2 years, and county of residence for rural vs. urban driving). Comprehensive analyses showed that specific dimensions of driver physiognomy associated with sleepiness in previous research, as well as face-valid composite scores of sleepiness, did not: 1) distinguish participants with OSA from matched controls; 2) distinguish participants before and after PAP treatment, including those who were compliant with their treatment; or 3) predict levels of sleep deprivation obtained objectively from actigraphy watches, not even among those chronically sleep deprived. These findings are consistent with large individual differences in driver physiognomy. In other words, when individuals were sleep deprived, as confirmed by actigraphy watch output, they did not show consistently reliable behavioral markers of being sleep deprived. This finding held whether each driver was compared to him/herself with adequate and inadequate sleep, and even among chronically sleep-deprived drivers. The scientific evidence from this research study does not support the use of driver physiognomy as a valid measure of sleep deprivation or as a basis for judging whether a CMV driver is too fatigued to drive, as on the current Fatigued Driving Evaluation Checklist. Fair and accurate determinations of CMV driver sleepiness in the field will likely require further research on alternative strategies that combine information sources beyond driver physiognomy, including work logs, actigraphy, in-vehicle data recordings, GPS data on vehicle use, and performance tests.
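A hedged sketch of the kind of within-driver comparison implied above: testing whether a composite physiognomy score differs between occasions of adequate and inadequate sleep as confirmed by actigraphy. The data, the scoring, and the effect size are synthetic placeholders, not the study's measures.

```python
# Sketch of a paired within-driver test; all data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_drivers = 44
# Composite sleepiness score (e.g., yawning + droopy-eyelid ratings),
# sampled once per driver under each actigraphy-confirmed condition.
score_rested = rng.normal(3.0, 1.5, n_drivers)
score_deprived = score_rested + rng.normal(0.1, 1.5, n_drivers)  # ~no effect

t, p = stats.ttest_rel(score_deprived, score_rested)
print(f"paired t = {t:.2f}, p = {p:.3f}")
# A non-significant p here mirrors the study's conclusion that
# physiognomy does not reliably track actigraphy-confirmed deprivation.
```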
Abstract:
Debris flows and related landslide processes occur in many regions all over Norway and pose a significant hazard to inhabited areas. Within the framework of developing a national debris flow susceptibility map, we are working on a modeling approach suitable for Norway with nationwide coverage. The discrimination of source areas is based on an index approach that includes topographic parameters and hydrological settings. For the runout modeling, we use the Flow-R model (IGAR, University of Lausanne), which combines probabilistic and energetic algorithms to assess the spreading of the flow and the maximum runout distance. First results for different test areas have shown that runout distances can be modeled reliably. For the selection of source areas, however, additional factors have to be considered, such as the lithological and Quaternary geological setting, in order to accommodate the strong variation in debris flow activity across the different geological, geomorphological, and climatic regions of Norway.
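To make the energetic part of such a runout assessment concrete, the sketch below applies the classic energy-line idea: flow released at a source cell continues downslope until the terrain rises above a line drawn at a fixed travel angle from the source. This illustrates the principle only, not the Flow-R implementation; the elevation profile and travel angle are invented.

```python
# Sketch of energy-line-limited runout on a 1D elevation profile.
# Profile and travel angle are illustrative assumptions.
import numpy as np

profile = np.array([800, 770, 735, 700, 672, 650, 634, 624, 618, 615,
                    613, 612, 612, 613, 615], dtype=float)  # elevation (m)
cell = 100.0                  # horizontal cell spacing (m)
travel_angle_deg = 11.0       # assumed energy-line (travel) angle

def max_runout(profile, cell, angle_deg, src=0):
    """Index of the last cell still below the energy line from src."""
    drop_per_cell = cell * np.tan(np.deg2rad(angle_deg))
    last = src
    for i in range(src + 1, len(profile)):
        energy_line = profile[src] - (i - src) * drop_per_cell
        if profile[i] > energy_line:   # terrain above energy line: stop
            break
        last = i
    return last

stop = max_runout(profile, cell, travel_angle_deg)
print(f"runout stops {stop * cell:.0f} m from the source")
```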
Abstract:
Photopolymerization is commonly used in a broad range of bioapplications, such as drug delivery, tissue engineering, and surgical implants, where liquid materials are injected and then hardened by means of illumination to create a solid polymer network. However, photopolymerization through a probe, e.g., a needle guiding both the liquid and the curing illumination, has not been thoroughly investigated. We present a Monte Carlo model that takes into account the dynamic absorption and scattering parameters, as well as the solid-liquid boundaries of the photopolymer, to yield the shape and volume of minimally invasively injected, photopolymerized hydrogels. In the first part of the article, the model is validated against a set of well-known poly(ethylene glycol) dimethacrylate hydrogels, showing excellent agreement between simulated and experimental volume growth rates. In the second part, in situ experimental results and simulations for photopolymerization in tissue cavities are presented. A cavity with a volume of 152 mm3 could be photopolymerized from the output of a 0.28-mm2 fiber by adding scattering lipid particles, whereas only a volume of 38 mm3 (25%) was achieved without particles. The proposed model provides a simple and robust method for solving complex photopolymerization problems where the dimension of the light source is much smaller than the volume of the photopolymerizable hydrogel.
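The core of such a Monte Carlo photon-transport model is exponential free-path sampling, an absorption-versus-scattering split, and Henyey-Greenstein scattering. Below is a minimal sketch with assumed hydrogel-like optical properties; the actual model described above additionally updates these properties dynamically during curing and handles solid-liquid boundaries.

```python
# Sketch of a Monte Carlo photon-transport core: exponential free paths,
# absorption/scattering split, Henyey-Greenstein phase function.
# Optical properties are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
mu_a, mu_s, g = 0.1, 10.0, 0.9       # 1/mm; assumed hydrogel-like values
mu_t = mu_a + mu_s

def hg_cos_theta():
    """Sample the scattering-angle cosine from Henyey-Greenstein."""
    u = rng.random()
    frac = (1 - g * g) / (1 - g + 2 * g * u)
    return (1 + g * g - frac * frac) / (2 * g)

depths = []
for _ in range(5_000):               # photons launched along +z from a fiber
    z, cz = 0.0, 1.0
    while True:
        z += cz * rng.exponential(1 / mu_t)      # free path to next event
        if z < 0:                                # escaped back out the surface
            break
        if rng.random() < mu_a / mu_t:           # absorbed: record depth
            depths.append(z)
            break
        ct = hg_cos_theta()                      # scattered: update z-cosine
        st = np.sqrt(max(0.0, 1 - ct * ct))
        phi = 2 * np.pi * rng.random()
        cz = cz * ct - np.sqrt(max(0.0, 1 - cz * cz)) * st * np.cos(phi)

print(f"mean absorption depth: {np.mean(depths):.2f} mm")
```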