220 results for 3D computer animation


Relevance: 20.00%

Abstract:

Objectives: Therapeutic drug monitoring (TDM) aims to optimize treatment by individualizing the dosage regimen based on measured blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical expertise. Bayesian calculation is the gold-standard TDM approach but requires computing assistance. The aim of this benchmarking study was to assess and compare computer tools designed to support TDM clinical activities.

Methods: The literature and the Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them.

Results: Twelve software tools were identified, tested, and ranked, yielding a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and some programs integrate different population types. Eight programs allow new drug models to be added based on population PK data. Ten tools incorporate Bayesian computation to predict the dosage regimen (individual parameters are calculated from population PK models). All of them can compute a Bayesian a posteriori dosage adaptation based on a measured blood concentration, while nine can also suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also show good potential but are less sophisticated or less user-friendly.

Conclusions: Although two software packages rank at the top of the list, such complex tools will not necessarily fit all institutions, and each program must be judged against the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Although interest in TDM tools is growing and considerable effort has been invested in recent years, there is still room for improvement, especially in institutional information-system interfacing, user-friendliness, data storage capability, and automated report generation.
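
To make the Bayesian a posteriori step concrete, here is a minimal sketch of maximum a posteriori (MAP) individualization for a hypothetical one-compartment IV-bolus drug, assuming log-normal population priors and a single measured level. All drug parameters and the measurement are invented for illustration and do not correspond to any of the benchmarked programs.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical population PK priors (log-normal) for a 1-compartment IV-bolus drug
POP_CL, OMEGA_CL = 5.0, 0.3   # clearance: median L/h, inter-individual SD (log scale)
POP_V,  OMEGA_V  = 50.0, 0.2  # volume: median L, inter-individual SD (log scale)
SIGMA = 0.5                   # residual error SD (mg/L), additive model

def predict(cl, v, dose, t):
    """Concentration for a 1-compartment IV bolus: C = (D/V) * exp(-(CL/V) * t)."""
    return (dose / v) * np.exp(-(cl / v) * t)

def neg_log_posterior(theta, dose, t_obs, c_obs):
    cl, v = np.exp(theta)  # optimize on the log scale to keep parameters positive
    # Likelihood of the measured concentration(s)
    resid = c_obs - predict(cl, v, dose, t_obs)
    loglik = -0.5 * np.sum((resid / SIGMA) ** 2)
    # Log-normal population priors on CL and V
    logprior = (-0.5 * (np.log(cl / POP_CL) / OMEGA_CL) ** 2
                - 0.5 * (np.log(v / POP_V) / OMEGA_V) ** 2)
    return -(loglik + logprior)

# One observed level: 2.1 mg/L, 8 h after a 500 mg IV bolus (invented numbers)
fit = minimize(neg_log_posterior, x0=np.log([POP_CL, POP_V]),
               args=(500.0, np.array([8.0]), np.array([2.1])))
cl_map, v_map = np.exp(fit.x)
print(f"MAP estimates: CL = {cl_map:.2f} L/h, V = {v_map:.1f} L")
```

The MAP parameters would then be plugged back into the model to simulate candidate dosage regimens, which is what these tools automate behind their user interfaces.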

Relevance: 20.00%

Abstract:

Xenobiotic exposure is a risk factor in the etiology of neurodegenerative disease. It was recently hypothesized that restricted exposure during brain development could predispose to neurodegenerative disease later in life. Because neuroinflammation contributes to progressive neurodegeneration, it is suspected that neurodevelopmental xenobiotic exposure could elicit a neuroinflammatory process that over time assumes a detrimental character. We investigated the neurotoxic effects of paraquat (PQ) in three-dimensional whole rat brain cell cultures exposed during an early differentiation stage, comparing immediate effects (directly after exposure) with long-term effects (20 days after PQ administration was interrupted). Adverse effects and neuroinflammatory responses were assessed by measuring changes in gene and protein expression as well as changes in cell morphology. Differentiating neural cultures were highly susceptible to PQ and showed neuronal damage and strong astrogliosis. After the 20-day washout period, neurons partially recovered, whereas astrogliosis persisted and was accompanied by microglial activation of a neurodegenerative phenotype. Our data show that the immediate and long-term effects of subchronic PQ exposure differ. Moreover, PQ exposure during this window of extensive neuronal differentiation led to delayed microglial activation of a character that could promote further pro-inflammatory signals, enabling prolonged inflammation and thereby fueling further neurodegeneration.

Relevance: 20.00%

Abstract:

3D dose reconstruction is a verification of the delivered absorbed dose. Our aim was to describe and evaluate a 3D dose reconstruction method applied to phantoms in the context of narrow beams. A solid water phantom and a phantom containing a bone-equivalent material were irradiated on a 6 MV linac. The transmitted dose was measured using one array of a 2D ion chamber detector. The dose reconstruction was obtained by an iterative algorithm. A phantom set-up error and interfraction organ motion were simulated to test the algorithm's sensitivity. In all configurations, convergence was obtained within three iterations. A local reconstructed dose agreement of at least 3%/3 mm with respect to the planned dose was obtained, except at a few points in the penumbra. The reconstructed primary fluences were consistent with the planned ones, which validates the whole reconstruction process. These results validate our method in a simple geometry and for narrow beams. The method is sensitive to a set-up error of a heterogeneous phantom and to interfraction motion of heterogeneous organs.
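
The abstract does not spell out the iterative algorithm, so the toy below only illustrates the generic fixed-point idea of iteratively correcting an estimated primary fluence until the forward-modeled transmitted signal matches the measurement. The attenuation value and detector kernel are invented, and the authors' actual algorithm may differ substantially.

```python
import numpy as np

# Toy transmitted-dose model: primary fluence attenuated by the phantom and
# blurred by a detector response kernel (all values invented for illustration)
MU_L = 0.6                              # effective attenuation (mu * thickness)
KERNEL = np.array([0.1, 0.8, 0.1])      # simple detector blurring

def forward(fluence):
    """Predict the transmitted signal for a given primary fluence profile."""
    return np.convolve(fluence * np.exp(-MU_L), KERNEL, mode="same")

# Synthetic "measured" transmission generated from a known narrow-beam fluence
measured = forward(np.array([0., 0., 1., 1., 1., 0., 0.]))

# Iterative multiplicative correction of the fluence estimate
fluence = np.ones_like(measured)
for it in range(3):                     # the paper reports convergence within 3 iterations
    predicted = forward(fluence)
    ratio = np.divide(measured, predicted, out=np.ones_like(measured),
                      where=predicted > 1e-9)
    fluence *= ratio
    print(f"iteration {it + 1}: max |measured - predicted| = "
          f"{np.abs(measured - predicted).max():.4f}")
```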

Relevance: 20.00%

Abstract:

PURPOSE: Ocular anatomy and radiation-associated toxicities pose unique challenges for external beam radiation therapy. For treatment planning, precise modeling of the organs at risk and the tumor volume is crucial. Developing a precise eye model and automatically adapting this model to patients' anatomy remain problematic because of organ shape variability. This work introduces the application of a 3-dimensional (3D) statistical shape model as a novel method for precise eye modeling for external beam radiation therapy of intraocular tumors. METHODS AND MATERIALS: Manual and automatic segmentations were compared for 17 patients, based on head computed tomography (CT) volume scans. A 3D statistical shape model of the cornea, lens, and sclera, as well as of the optic disc position, was developed. Furthermore, an active shape model was built to enable automatic fitting of the eye model to CT slice stacks. Cross-validation was performed with leave-one-out tests for all training shapes by measuring Dice coefficients and mean segmentation errors between automatic segmentation and manual segmentation by an expert. RESULTS: Cross-validation revealed a Dice similarity of 95% ± 2% for the sclera and cornea and 91% ± 2% for the lens. The overall mean segmentation error was 0.3 ± 0.1 mm. Average segmentation time was 14 ± 2 s on a standard personal computer. CONCLUSIONS: Our results show that the presented solution outperforms state-of-the-art methods in terms of accuracy, reliability, and robustness. Moreover, the eye model shape as well as its variability is learned from a training set rather than assumed (e.g., as with spherical or elliptical models). The model therefore appears capable of modeling nonspherically and nonelliptically shaped eyes.
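
A minimal sketch of the Dice similarity coefficient used in the cross-validation, applied to two toy 3D masks that stand in for an automatic and a manual segmentation of the same structure:

```python
import numpy as np

def dice(auto_mask, manual_mask):
    """Dice similarity coefficient between two binary segmentation masks:
    DSC = 2|A intersect B| / (|A| + |B|)."""
    a = auto_mask.astype(bool)
    b = manual_mask.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 3D masks standing in for sclera segmentations on a CT slice stack
auto = np.zeros((4, 8, 8), dtype=bool);  auto[:, 2:6, 2:6] = True
manual = np.zeros_like(auto);            manual[:, 2:6, 3:7] = True
print(f"Dice = {dice(auto, manual):.2f}")  # two offset boxes overlap at 0.75
```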

Relevance: 20.00%

Abstract:

Predictive groundwater modeling requires accurate information about aquifer characteristics. Geophysical imaging is a powerful tool for delineating aquifer properties at an appropriate scale and resolution, but it suffers from problems of ambiguity. One way to overcome such limitations is to adopt a simultaneous multitechnique inversion strategy. We have developed a methodology for aquifer characterization based on structural joint inversion of multiple geophysical data sets, followed by clustering to form zones and subsequent inversion for zonal parameters. Joint inversions based on cross-gradient structural constraints require less restrictive assumptions than, say, applying predefined petrophysical relationships, and generally yield superior results. This approach has, for the first time, been applied to three geophysical data types in three dimensions. A classification scheme using maximum likelihood estimation determines the parameters of a Gaussian mixture model that defines the zonal geometries from the joint-inversion tomograms. The resulting zones are used to estimate representative geophysical parameters for each zone, which are then used for field-scale petrophysical analysis. A synthetic study demonstrated how joint inversion of seismic and radar traveltimes and electrical resistance tomography (ERT) data greatly reduces the misclassification of zones (down from 21.3% to 3.7%) and improves the accuracy of the retrieved zonal parameters (error down from 1.8% to 0.3%) compared with individual inversions. We applied our scheme to a data set collected in northeastern Switzerland to delineate lithologic subunits within a gravel aquifer. The inversion models resolve three principal subhorizontal units along with some important 3D heterogeneity. Petrophysical analysis of the zonal parameters indicated approximately 30% variation in porosity within the gravel aquifer and an increasing fraction of finer sediments with depth.
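
A minimal sketch of the classification step, assuming scikit-learn's expectation-maximization GaussianMixture as the maximum-likelihood estimator. The co-located voxel parameters are synthetic, and the authors' actual implementation may differ.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic co-located voxel parameters from three tomograms:
# columns = [log10 resistivity, seismic velocity (km/s), radar velocity (m/ns)]
zone_a = rng.normal([2.2, 1.6, 0.08], [0.05, 0.05, 0.003], size=(500, 3))
zone_b = rng.normal([2.6, 2.0, 0.07], [0.05, 0.05, 0.003], size=(500, 3))
voxels = np.vstack([zone_a, zone_b])

# Maximum-likelihood Gaussian mixture (fit by EM) defines the zonal geometry
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(voxels)
labels = gmm.predict(voxels)

# Representative (mean) geophysical parameters per zone, the input to
# subsequent petrophysical analysis
for k in range(2):
    print(f"zone {k}: mean parameters = {voxels[labels == k].mean(axis=0).round(3)}")
```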

Relevance: 20.00%

Abstract:

PURPOSE: To compare 3 different flow-targeted magnetization preparation strategies for coronary MR angiography (cMRA) that allow selective visualization of the vessel lumen. MATERIAL AND METHODS: The right coronary artery of 10 healthy subjects was investigated on a 1.5 Tesla MR system (Gyroscan ACS-NT, Philips Healthcare, Best, NL). A navigator-gated and ECG-triggered 3D radial steady-state free precession (SSFP) cMRA sequence was performed with 3 different magnetization preparation schemes, referred to as projection SSFP (selective labeling of the aorta, subtraction of 2 data sets), LoReIn SSFP (double-inversion preparation, selective labeling of the aorta, 1 data set), and inflow SSFP (inversion preparation, selective labeling of the coronary artery, 1 data set). Signal-to-noise ratio (SNR) of the coronary artery and aorta, contrast-to-noise ratio (CNR) between the coronary artery and epicardial fat, vessel length, and vessel sharpness were analyzed. RESULTS: All cMRA sequences were successfully acquired in all subjects. Both projection SSFP and LoReIn SSFP allowed selective visualization of the coronary arteries with excellent background suppression. Scan time was doubled in projection SSFP because of the need to subtract 2 data sets. In inflow SSFP, background suppression was limited to the tissue included in the inversion volume. Projection SSFP (SNR(coro): 25.6 ± 12.1; SNR(ao): 26.1 ± 16.8; CNR(coro-fat): 22.0 ± 11.7) and inflow SSFP (SNR(coro): 27.9 ± 5.4; SNR(ao): 37.4 ± 9.2; CNR(coro-fat): 24.9 ± 4.8) yielded significantly higher SNR and CNR than LoReIn SSFP (SNR(coro): 12.3 ± 5.4; SNR(ao): 11.8 ± 5.8; CNR(coro-fat): 9.8 ± 5.5; P < 0.05 for both). The longest visible vessel length was found with projection SSFP (79.5 ± 18.9 mm; P < 0.05 vs. LoReIn), whereas vessel sharpness was best with inflow SSFP (68.2% ± 4.5%; P < 0.05 vs. LoReIn). Consistently good image quality was achieved with inflow SSFP, likely because of its simple planning procedure and short scanning time. CONCLUSION: Three flow-targeted cMRA approaches are presented that provide selective visualization of the coronary vessel lumen, along with blood flow information, without the need for contrast agent administration. Inflow SSFP yielded the highest SNR, CNR, and vessel sharpness and may prove useful as a fast and efficient approach for assessing proximal and mid-vessel coronary blood flow while requiring less planning skill than projection SSFP or LoReIn SSFP.
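
SNR and CNR figures like those above are conventionally computed from the mean signal of a region of interest (ROI) and the standard deviation of a background noise ROI. A sketch under that assumption, with invented ROI intensities; the paper's exact measurement protocol is not given here.

```python
import numpy as np

def snr(roi, noise_roi):
    """Signal-to-noise ratio: mean ROI signal over the SD of a background ROI."""
    return roi.mean() / noise_roi.std()

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissues, e.g. coronary blood vs. fat."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

rng = np.random.default_rng(1)
coro = rng.normal(260, 10, 200)   # invented coronary lumen ROI intensities
fat = rng.normal(40, 10, 200)     # invented epicardial fat ROI
air = rng.normal(0, 10, 400)      # background (noise) ROI
print(f"SNR(coro) = {snr(coro, air):.1f}, CNR(coro-fat) = {cnr(coro, fat, air):.1f}")
```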

Relevance: 20.00%

Abstract:

A method is proposed for the estimation of the absolute binding free energy of interaction between proteins and ligands. Conformational sampling of the protein-ligand complex is performed by molecular dynamics (MD) in vacuo, and the solvent effect is calculated a posteriori by solving the Poisson or the Poisson-Boltzmann equation for selected frames of the trajectory. The binding free energy is written as a linear combination of the surface buried upon complexation, SASbur; the electrostatic interaction energy between the ligand and the protein, Eelec; and the difference of the solvation free energies of the complex and the isolated ligand and protein, deltaGsolv. The method uses the buried surface to account for the non-polar contribution to the binding free energy because it is less sensitive to the details of the structure than the van der Waals interaction energy. The parameters of the method were developed on a training set of 16 HIV-1 protease-inhibitor complexes of known 3D structure. A correlation coefficient of 0.91 was obtained, with an unsigned mean error of 0.8 kcal/mol. When applied to a set of 25 HIV-1 protease-inhibitor complexes of unknown 3D structure, the method provides a satisfactory correlation between the calculated binding free energy and the experimental pIC50 without reparametrization.
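
A minimal sketch of fitting the stated linear model, dG_bind ~ a*SASbur + b*Eelec + c*dGsolv + d, by least squares over a training set. All training values below are invented; the paper's actual parametrization is not reproduced here.

```python
import numpy as np

# Hypothetical training data for the linear binding free energy model
sas_bur = np.array([850., 920., 780., 1010., 890.])     # buried surface (A^2)
e_elec = np.array([-45., -52., -38., -60., -48.])       # electrostatics (kcal/mol)
dg_solv = np.array([30., 35., 26., 41., 33.])           # solvation term (kcal/mol)
dg_exp = np.array([-12.1, -13.0, -10.9, -14.2, -12.5])  # experimental (kcal/mol)

# Least-squares fit of the coefficients a, b, c and intercept d
X = np.column_stack([sas_bur, e_elec, dg_solv, np.ones_like(sas_bur)])
coef, *_ = np.linalg.lstsq(X, dg_exp, rcond=None)
pred = X @ coef
r = np.corrcoef(pred, dg_exp)[0, 1]
print(f"fitted coefficients = {coef.round(4)}, correlation r = {r:.2f}")
```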

Relevance: 20.00%

Abstract:

This paper describes methods to analyze the brain's electric fields recorded with multichannel electroencephalography (EEG) and demonstrates their implementation in the software CARTOOL. It focuses on the analysis of the spatial properties of these fields and on quantitative assessment of changes in field topography across time, experimental conditions, or populations. Topographic analyses are advantageous because they are reference-independent and thus yield statistically unambiguous results. Neurophysiologically, differences in topography directly indicate changes in the configuration of the active neuronal sources in the brain. We describe global measures of field strength and field similarity, temporal segmentation based on topographic variations, topographic analysis in the frequency domain, topographic statistical analysis, and source imaging based on distributed inverse solutions. All analysis methods are implemented in a freely available academic software package called CARTOOL. Besides providing these analysis tools, CARTOOL is particularly designed to visualize the data and the analysis results using 3-dimensional display routines that allow rapid manipulation and animation of 3D images. CARTOOL is therefore a helpful tool for researchers as well as clinicians to interpret multichannel EEG and evoked potentials in a global, comprehensive, and unambiguous way.
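
Two of the global topographic measures described here have standard definitions in the EEG literature: Global Field Power (the spatial standard deviation of the average-referenced map) and global map dissimilarity (the GFP of the difference of two strength-normalized maps). A sketch using those standard formulas, with random toy maps; this is not CARTOOL's code.

```python
import numpy as np

def gfp(v):
    """Global Field Power: spatial standard deviation of the average-referenced map."""
    v = v - v.mean()                 # recompute against the average reference
    return np.sqrt(np.mean(v ** 2))

def dissimilarity(v1, v2):
    """Global map dissimilarity: GFP of the difference of GFP-normalized maps.
    0 = identical topography, 2 = polarity-inverted topography."""
    n1 = (v1 - v1.mean()) / gfp(v1)
    n2 = (v2 - v2.mean()) / gfp(v2)
    return gfp(n1 - n2)

rng = np.random.default_rng(2)
map1 = rng.normal(size=64)           # toy 64-channel potential map (uV)
map2 = map1 * 3.0                    # same topography, stronger field
print(f"GFP: {gfp(map1):.2f} vs {gfp(map2):.2f}, dissimilarity = "
      f"{dissimilarity(map1, map2):.3f}")  # ~0: strength differs, topography doesn't
```

The dissimilarity being zero for a scaled copy of the same map illustrates why topographic measures are reference-independent indicators of source configuration rather than of field strength.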

Relevance: 20.00%

Abstract:

The authors compared radial steady-state free precession (SSFP) coronary magnetic resonance (MR) angiography, Cartesian k-space sampling SSFP coronary MR angiography, and gradient-echo coronary MR angiography in 16 healthy adults and four pilot-study patients. Standard gradient-echo MR imaging with a T2 preparatory pulse and Cartesian k-space sampling was the reference technique. Image quality was compared by using subjective motion artifact level and objective contrast-to-noise ratio and vessel sharpness. Radial SSFP, compared with Cartesian SSFP and gradient-echo MR angiography, resulted in reduced motion artifacts and superior vessel sharpness. Cartesian SSFP resulted in increased motion artifacts (P < .05). Contrast-to-noise ratio with radial SSFP was lower than that with Cartesian SSFP and similar to that with the reference technique. Radial SSFP coronary MR angiography appears preferable because of its improved definition of vessel borders.

Relevance: 20.00%

Abstract:

During conventional x-ray coronary angiography, multiple projections of the coronary arteries are acquired to define coronary anatomy precisely. Due to time constraints, coronary magnetic resonance angiography (MRA) usually provides only one or two views of the major coronary vessels. A coronary MRA approach that allows reconstruction of arbitrary isotropic orientations might therefore be desirable. The purpose of this study was to develop a three-dimensional (3D) coronary MRA technique with isotropic image resolution and a relatively short scanning time that allows reconstruction of arbitrary views of the coronary arteries without the constraints of anisotropic voxel size. Eight healthy adult subjects were examined using a real-time navigator-gated and corrected free-breathing interleaved echo-planar (TFE-EPI) 3D-MRA sequence. Two 3D data sets were acquired for the left and right coronary systems in each subject, one with anisotropic (1.0 x 1.5 x 3.0 mm, 10 slices) and one with "near" isotropic (1.0 x 1.5 x 1.0 mm, 30 slices) image resolution. All other imaging parameters were kept constant. In all cases, the entire left main (LM) and extensive portions of the left anterior descending (LAD) and right coronary artery (RCA) were visualized. Objective assessment of coronary vessel sharpness was similar (41% ± 5% vs. 42% ± 5%; P = NS) between in-plane and through-plane views with isotropic voxel size but differed (32% ± 7% vs. 23% ± 4%; P < 0.001) with anisotropic voxel size. In reconstructed views oriented in the through-plane direction, the vessel border was 86% better defined (P < 0.01) for isotropic than for anisotropic images. A smaller (30%; P < 0.001) improvement was seen for in-plane reconstructions. Vessel diameter measurements were view-independent (2.81 ± 0.45 mm vs. 2.66 ± 0.52 mm; P = NS) for isotropic images but differed (2.71 ± 0.51 mm vs. 3.30 ± 0.38 mm; P < 0.001) between anisotropic views. Average scanning time was 2:31 ± 0:57 min for anisotropic and 7:11 ± 3:02 min for isotropic image resolution (P < 0.001). We present a new approach for "near" isotropic 3D coronary artery imaging that allows reconstruction of arbitrary views of the coronary arteries. The good delineation of the coronary arteries in all views suggests that isotropic 3D coronary MRA might be a preferred technique for the assessment of coronary disease, albeit at the expense of prolonged scan times. Comparative studies with conventional x-ray angiography are needed to investigate the clinical utility of the isotropic strategy.
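
Vessel sharpness percentages like those reported are typically derived from the edge gradient of an intensity profile drawn across the vessel, normalized by the lumen-to-background contrast. The sketch below uses that common convention; the exact definition used in this and the preceding study may differ in detail.

```python
import numpy as np

def vessel_sharpness(profile):
    """Sharpness of a 1D intensity profile across a vessel, taken here as the
    maximum edge gradient normalized by the lumen-to-background contrast
    (one common convention; published definitions vary)."""
    grad = np.abs(np.diff(profile.astype(float)))
    contrast = profile.max() - profile.min()
    return grad.max() / contrast if contrast else 0.0

# Invented profiles: a motion-blurred vessel edge vs. a well-defined one
blurry = np.array([10, 12, 30, 60, 90, 100, 90, 60, 30, 12, 10])
sharp = np.array([10, 10, 10, 95, 100, 100, 100, 95, 10, 10, 10])
print(f"sharpness: blurry = {vessel_sharpness(blurry):.2f}, "
      f"sharp = {vessel_sharpness(sharp):.2f}")
```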

Relevance: 20.00%

Abstract:

Accurate prediction of transcription factor binding sites is needed to unravel the function and regulation of genes discovered in genome sequencing projects. To evaluate current computer prediction tools, we have begun a systematic study of the sequence-specific DNA binding of a transcription factor belonging to the CTF/NFI family. Using a systematic collection of rationally designed oligonucleotides combined with an in vitro DNA binding assay, we found that the sequence specificity of this protein cannot be represented by a simple consensus sequence or weight matrix. In particular, CTF/NFI uses a flexible DNA binding mode that allows for variations in binding site length. From the experimental data, we derived a novel prediction method using a generalised profile as a binding site predictor. Experimental evaluation of the generalised profile indicated that it accurately predicts the binding affinity of the transcription factor to natural or synthetic DNA sequences. Furthermore, the in vitro measured binding affinities of a subset of oligonucleotides were found to correlate with their transcriptional activities in transfected cells. The combined computational-experimental approach exemplified in this work thus resulted in an accurate prediction method for CTF/NFI binding sites potentially functioning as regulatory regions in vivo.
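
To illustrate what a simple weight matrix does (and hence what it cannot do), here is a sketch of standard weight-matrix log-odds scanning. A generalised profile extends this scheme with position-specific insertion and deletion costs, which is what accommodates the variable binding-site length observed for CTF/NFI. The matrix and sequence below are hypothetical.

```python
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

# Hypothetical position weight matrix (log-odds scores; rows = A, C, G, T)
PWM = np.array([
    [ 1.2, -0.5, -1.0, -0.8,  0.1],
    [-0.7,  1.0, -0.9,  0.9,  0.0],
    [-0.9, -0.8,  1.3, -0.6,  0.2],
    [ 0.2, -0.4, -1.1,  0.4, -0.3],
])

def score_window(seq):
    """Sum of per-position log-odds scores for a window of PWM width."""
    return sum(PWM[BASES[b], i] for i, b in enumerate(seq))

def scan(sequence, width=PWM.shape[1]):
    """Score every window of fixed width. A generalised profile would, in
    addition, allow insertions/deletions at position-specific costs, so the
    match length could vary from window to window."""
    return [(i, score_window(sequence[i:i + width]))
            for i in range(len(sequence) - width + 1)]

hits = scan("TTAGCGAGCCA")
best = max(hits, key=lambda h: h[1])
print(f"best window at position {best[0]} with score {best[1]:.2f}")
```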

Relevance: 20.00%

Abstract:

With the dramatic increase in the volume of experimental results in every domain of the life sciences, assembling pertinent data and combining information from different fields has become a challenge. Information is dispersed over numerous specialized databases and is presented in many different formats. Rapid access to experiment-based information about well-characterized proteins helps predict the function of uncharacterized proteins identified by large-scale sequencing. In this context, universal knowledgebases play an essential role in providing access to data from complementary types of experiments and in serving as hubs with cross-references to many specialized databases. This review outlines how the value of experimental data is optimized by combining high-quality protein sequences with complementary experimental results, including information derived from protein 3D structures, using as an example the UniProt Knowledgebase (UniProtKB) and the tools and links provided on its website (http://www.uniprot.org/). It also discusses the precautions that are necessary for successful predictions and extrapolations.
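
As a minimal sketch of the hub role described above, the snippet below retrieves one UniProtKB entry and lists its PDB cross-references (the route from sequence to 3D-structure information) via the UniProt REST interface. The endpoint and JSON field names reflect the API as documented at the time of writing and should be checked against the current documentation at http://www.uniprot.org/.

```python
import json
from urllib.request import urlopen

# Fetch one UniProtKB entry via the REST interface (endpoint and JSON schema
# as documented on uniprot.org at the time of writing; verify before relying
# on the exact field names used below)
ACCESSION = "P04637"  # human p53, used here only as a familiar example
url = f"https://rest.uniprot.org/uniprotkb/{ACCESSION}.json"

with urlopen(url) as resp:
    entry = json.load(resp)

# Cross-references to specialized databases; PDB entries link to 3D structures
pdb_ids = [x["id"] for x in entry.get("uniProtKBCrossReferences", [])
           if x["database"] == "PDB"]
print(entry["proteinDescription"]["recommendedName"]["fullName"]["value"])
print(f"{len(pdb_ids)} linked PDB structures, e.g. {pdb_ids[:5]}")
```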