70 results for post-processing method
Abstract:
Brain perfusion can be assessed by CT and MR. For CT, two major techniques are used. First, Xenon CT is an equilibrium technique based on a freely diffusible tracer. First pass of iodinated contrast injected intravenously is a second method, more widely available. Both methods are proven to be robust and quantitative, thanks to the linear relationship between contrast concentration and x-ray attenuation. For the CT methods, concerns regarding the x-ray doses delivered to patients need to be addressed. MR is also able to assess brain perfusion using the first pass of a gadolinium-based contrast agent injected intravenously. This method has to be considered semi-quantitative because of the non-linear relationship between contrast concentration and MR signal changes. Arterial spin labeling is another MR method assessing brain perfusion without injection of contrast. In this case, the blood flow in the carotids is magnetically labelled by an external radiofrequency pulse and observed during its first pass through the brain. Each of these CT and MR techniques has advantages and limits that will be illustrated and summarized.
Learning Objectives:
1. To understand and compare the different techniques for brain perfusion imaging.
2. To learn about the methods of acquisition and post-processing of brain perfusion by first pass of contrast agent for CT and MR.
3. To learn about non-contrast MR methods (arterial spin labelling).
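The non-linear relationship between contrast concentration and MR signal noted above is commonly handled by converting the signal drop during bolus passage into a change in the transverse relaxation rate, which is assumed proportional to concentration. A minimal sketch of this standard gradient-echo conversion (the function name and the proportionality constant are illustrative, not from the abstract):

```python
import math

def concentration_proxy(signal, s0, te_s, k=1.0):
    """Convert a DSC-MRI signal sample to a contrast-concentration proxy.

    Assumes the standard gradient-echo model S = S0 * exp(-TE * dR2*),
    so dR2* = -ln(S/S0) / TE, taken to be proportional to contrast agent
    concentration. The constant k is unknown in practice, which is why
    MR first-pass perfusion is only semi-quantitative.
    """
    return -k * math.log(signal / s0) / te_s

# At baseline the proxy is zero; during bolus passage the signal drops
# below baseline and the proxy becomes positive.
baseline = concentration_proxy(100.0, 100.0, 0.030)
peak = concentration_proxy(60.0, 100.0, 0.030)
```

In CT, by contrast, attenuation can be used directly because it scales linearly with iodine concentration, which is what makes the CT methods quantitative.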
Abstract:
Introduction. Development of the fetal brain surface with concomitant gyrification is one of the major maturational processes of the human brain. First delineated by postmortem studies or by ultrasound, MRI has recently become a powerful tool for studying in vivo the structural correlates of brain maturation. However, the quantitative measurement of fetal brain development is a major challenge because of the movement of the fetus inside the amniotic cavity, the poor spatial resolution, the partial volume effect and the changing appearance of the developing brain. Today extensive efforts are made to deal with the "post-acquisition" reconstruction of high-resolution 3D fetal volumes based on several acquisitions with lower resolution (Rousseau, F., 2006; Jiang, S., 2007). We here propose a framework devoted to the segmentation of the basal ganglia, the gray-white tissue segmentation, and in turn the 3D cortical reconstruction of the fetal brain. Method. Prenatal MR imaging was performed with a 1-T system (GE Medical Systems, Milwaukee) using single shot fast spin echo (ssFSE) sequences in fetuses aged from 29 to 32 gestational weeks (slice thickness 5.4 mm, in-plane spatial resolution 1.09 mm). For each fetus, 6 axial volumes shifted by 1 mm were acquired (about 1 min per volume). First, each volume is manually segmented to extract the fetal brain from surrounding fetal and maternal tissues. Intensity inhomogeneity correction and linear intensity normalization are then performed. A high spatial resolution image with an isotropic voxel size of 1.09 mm is created for each fetus as previously published by others (Rousseau, F., 2006). B-splines are used for the scattered data interpolation (Lee, 1997). Then, basal ganglia segmentation is performed on this super-reconstructed volume using an active contour framework with a Level Set implementation (Bach Cuadra, M., 2010). Once the basal ganglia are removed from the image, brain tissue segmentation is performed (Bach Cuadra, M., 2009).
The resulting white matter image is then binarized and further given as an input to the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/) to provide accurate three-dimensional reconstructions of the fetal brain. Results. High-resolution images of the fetal brain, as obtained from the low-resolution acquired MRI, are presented for 4 subjects of age ranging from 29 to 32 GA. An example is depicted in Figure 1. Accuracy of the automated basal ganglia segmentation is compared with manual segmentation using the Dice similarity index (DSI), with values above 0.7 considered to be a very good agreement. In our sample we observed DSI values between 0.785 and 0.856. We further show the results of gray-white matter segmentation overlaid on the high-resolution gray-scale images. The results are visually checked for accuracy using the same principles as commonly accepted in adult neuroimaging. Preliminary 3D cortical reconstructions of the fetal brain are shown in Figure 2. Conclusion. We hereby present a complete pipeline for the automated extraction of an accurate three-dimensional cortical surface of the fetal brain. These results are preliminary but promising, with the ultimate goal to provide a "movie" of normal gyral development. In turn, a precise knowledge of normal fetal brain development will allow the quantification of subtle and early but clinically relevant deviations. Moreover, a precise understanding of the gyral development process may help to build hypotheses to understand the pathogenesis of several neurodevelopmental conditions in which gyrification has been shown to be altered (e.g. schizophrenia, autism…). References. Rousseau, F. (2006), 'Registration-Based Approach for Reconstruction of High-Resolution In Utero Fetal MR Brain Images', IEEE Transactions on Medical Imaging, vol. 13, no. 9, pp. 1072-1081. Jiang, S.
(2007), 'MRI of Moving Subjects Using Multislice Snapshot Images With Volume Reconstruction (SVR): Application to Fetal, Neonatal, and Adult Brain Studies', IEEE Transactions on Medical Imaging, vol. 26, no. 7, pp. 967-980. Lee, S. (1997), 'Scattered data interpolation with multilevel B-splines', IEEE Transactions on Visualization and Computer Graphics, vol. 3, no. 3, pp. 228-244. Bach Cuadra, M. (2010), 'Central and Cortical Gray Matter Segmentation of Magnetic Resonance Images of the Fetal Brain', ISMRM Conference. Bach Cuadra, M. (2009), 'Brain tissue segmentation of fetal MR images', MICCAI.
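The Dice similarity index used above to compare the automated and manual basal ganglia segmentations can be computed directly from two binary label maps represented as voxel sets; a minimal sketch (the voxel coordinates are illustrative, and the 0.7 threshold follows the abstract's criterion):

```python
def dice_similarity(seg_a, seg_b):
    """Dice similarity index between two segmentations given as sets
    of voxel coordinates: DSI = 2|A ∩ B| / (|A| + |B|)."""
    a, b = set(seg_a), set(seg_b)
    if not a and not b:
        return 1.0  # two empty segmentations agree trivially
    return 2.0 * len(a & b) / (len(a) + len(b))

# Toy 2-D example: 3 of 4 voxels overlap between the two masks.
manual = {(0, 0), (0, 1), (1, 0), (1, 1)}
auto = {(0, 1), (1, 0), (1, 1), (2, 1)}
dsi = dice_similarity(manual, auto)  # 2*3 / (4+4) = 0.75
```

Values such as the reported 0.785-0.856 thus indicate overlap well above the 0.7 "very good agreement" threshold.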
Abstract:
In ¹H magnetic resonance spectroscopy, macromolecule signals underlie metabolite signals, and knowing their contribution is necessary for reliable metabolite quantification. When macromolecule signals are measured using an inversion-recovery pulse sequence, special care needs to be taken to correctly remove residual metabolite signals to obtain a pure macromolecule spectrum. Furthermore, since a single spectrum is commonly used for quantification in multiple experiments, the impact on metabolite quantification of potential macromolecule signal variability, caused by regional differences or pathologies, has to be assessed. In this study, we introduced a novel method to post-process measured macromolecule signals that offers a flexible and robust way of removing residual metabolite signals. This method was applied to investigate regional differences in the mouse brain macromolecule signals that may affect metabolite quantification when not taken into account. However, since no significant differences in metabolite quantification were detected, it was concluded that a single macromolecule spectrum can generally be used for the quantification of healthy mouse brain spectra. In contrast, the study of a mouse model of human glioma showed several alterations of the macromolecule spectrum, including, but not limited to, increased mobile lipid signals, which had to be taken into account to avoid significant metabolite quantification errors.
Abstract:
Three-dimensional analysis of the entire sequence in ski jumping is recommended when studying the kinematics or evaluating performance. Camera-based systems which allow three-dimensional kinematics measurement are complex to set up and require extensive post-processing, usually limiting ski jumping analyses to small numbers of jumps. In this study, a simple method using a wearable inertial sensor-based system is described to measure the orientation of the lower-body segments (sacrum, thighs, shanks) and skis during the entire jump sequence. This new method combines the fusion of inertial signals and biomechanical constraints of ski jumping. Its performance was evaluated in terms of validity and sensitivity to different performances based on 22 athletes monitored during daily training. The validity of the method was assessed by comparing the inclination of the ski and the slope at the landing point, showing an error of -0.2 ± 4.8°. The validity was also assessed by comparison of characteristic angles obtained with the proposed system and reference values in the literature; the differences were smaller than 6° for 75% of the angles and smaller than 15° for 90% of the angles. The sensitivity to different performances was evaluated by comparing the angles between two groups of athletes with different jump lengths and by assessing the association between angles and jump lengths. The differences in technique observed between athletes and the associations with jump length agreed with the literature. In conclusion, these results suggest that this system is a promising tool for a generalization of three-dimensional kinematics analysis in ski jumping.
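A core step in such inertial-sensor methods is integrating angular rate to track segment inclination between drift-correction events; a minimal one-axis sketch (the sampling rate and signal are illustrative, and the real system fuses gyroscopes with biomechanical constraints rather than integrating alone):

```python
def integrate_angle(gyro_deg_s, dt, angle0=0.0):
    """Trapezoidal integration of a 1-D angular-rate signal (deg/s)
    into an inclination angle (deg). Pure integration drifts over time,
    which is why wearable systems re-anchor the angle at known events,
    e.g. when the ski inclination must match the landing slope."""
    angle = angle0
    angles = [angle]
    for w0, w1 in zip(gyro_deg_s, gyro_deg_s[1:]):
        angle += 0.5 * (w0 + w1) * dt
        angles.append(angle)
    return angles

# A constant 10 deg/s rotation sampled at 100 Hz for 1 s tilts ~10 deg.
rate = [10.0] * 101
trace = integrate_angle(rate, dt=0.01)
```

Comparing the integrated ski angle against the known slope at landing, as the validation above does, directly bounds the accumulated drift of this kind of integration.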
Abstract:
Two-dimensional (2D) breath-hold coronary magnetic resonance angiography (MRA) has been shown to be a fast and reliable method to depict the proximal coronary arteries. Recent developments, however, allow for free-breathing navigator gated and navigator corrected three-dimensional (3D) coronary MRA. These 3D approaches have potential for improved signal-to-noise ratio (SNR) and allow for the acquisition of adjacent thin slices without the misregistration problems known from 2D approaches. Still, a major impediment of a 3D acquisition is the increased scan time. The purpose of this study was the implementation of a free-breathing navigator gated and corrected ultra-fast 3D coronary MRA technique, which allows for scan times of less than 5 minutes. Twelve healthy adult subjects were examined in the supine position using a navigator gated and corrected ECG-triggered ultra-fast 3D interleaved gradient echo planar imaging sequence (TFE-EPI). A 3D slab, consisting of 20 slices with a reconstructed slice thickness of 1.5 mm, was acquired with free-breathing. The diastolic TFE-EPI acquisition block was preceded by a T2prep pre-pulse, a diaphragmatic navigator pulse, and a fat suppression pre-pulse. With a TR of 19 ms and an effective TE of 5.4 ms, the duration of the data acquisition window was 38 ms. The in-plane spatial resolution was 1.0-1.3 mm × 1.5-1.9 mm. In all cases, the entire left main (LM) and extensive portions of the left anterior descending (LAD) and right coronary artery (RCA) could be visualized with an average scan time for the entire 3D-volume data set of 2:57 +/- 0:51 minutes. Average contiguous vessel length visualized was 53 +/- 11 mm (range: 42 to 75 mm) for the LAD and 84 +/- 14 mm (range: 62 to 112 mm) for the RCA. Contrast-to-noise between coronary blood and myocardium was 5.0 +/- 2.3 for the LM/LAD and 8.0 +/- 2.9 for the RCA, resulting in an excellent suppression of myocardium.
We present a new approach for free-breathing 3D coronary MRA, which allows for scan times superior to corresponding 2D coronary MRA approaches, and which takes advantage of the enhanced SNR of 3D acquisitions and the post-processing benefits of thin adjacent slices. The robust image quality and the short average scanning time suggest that this approach may be useful for screening the major coronary arteries or identification of anomalous coronary arteries. J. Magn. Reson. Imaging 1999;10:821-825.
Abstract:
The global structural connectivity of the brain, the human connectome, is now accessible at millimeter scale with the use of MRI. In this paper, we describe an approach to map the connectome by constructing normalized whole-brain structural connection matrices derived from diffusion MRI tractography at 5 different scales. Using a template-based approach to match cortical landmarks of different subjects, we propose a robust method that allows (a) the selection of identical cortical regions of interest of desired size and location in different subjects with identification of the associated fiber tracts, (b) straightforward construction and interpretation of anatomically organized whole-brain connection matrices, and (c) statistical inter-subject comparison of brain connectivity at various scales. The fully automated post-processing steps necessary to build such matrices are detailed in this paper. Extensive validation tests are performed to assess the reproducibility of the method in a group of 5 healthy subjects, and its reliability is further discussed in a group of 20 healthy subjects.
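At its core, building such a connection matrix amounts to counting, for each pair of cortical regions, the tractography fibers whose endpoints fall in those regions; a minimal sketch with illustrative region labels (the published pipeline additionally normalizes, e.g. by region size, which is omitted here):

```python
def connection_matrix(fibers, regions):
    """Build a symmetric ROI-by-ROI connection matrix from a list of
    fibers, each given as a pair (start_region, end_region)."""
    index = {r: i for i, r in enumerate(regions)}
    n = len(regions)
    m = [[0] * n for _ in range(n)]
    for a, b in fibers:
        i, j = index[a], index[b]
        m[i][j] += 1
        if i != j:
            m[j][i] += 1  # structural connectivity is undirected
    return m

regions = ["precentral", "postcentral", "cuneus"]
fibers = [("precentral", "postcentral"),
          ("precentral", "postcentral"),
          ("postcentral", "cuneus")]
m = connection_matrix(fibers, regions)
```

Because the template-based matching gives every subject the same ROI ordering, matrices built this way can be compared entry by entry across subjects and scales.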
Abstract:
Résumé: Previous developments at the Institute of Geophysics of the University of Lausanne made it possible to develop seismic acquisition techniques and to interpret 2D and 3D seismic data in order to study the geology of the region, in particular the different sedimentary sequences of Lake Geneva. To allow a quantitative interpretation of the seismic data by determining physical parameters of the sediments, the AVO (Amplitude Versus Offset) method was applied. Two lacustrine seismic surveys, 2D and 3D, were acquired to test the AVO method in the Grand Lac on the river deltas. The acquisition geometry was redesigned so that data could be recorded at large offsets. The seismic streamers, deployed end to end, reached incidence angles of about 40°. GPS receivers specially developed for this purpose and placed along the streamer made it possible, after post-processing of the data, to determine the position of the streamer with high precision (± 0.5 m). Calibration of our hydrophones, carried out in an anechoic chamber, provided their amplitude response as a function of frequency. A maximum variation of 10 dB was found between the streamer sensors and the reference signal. Amplitude-preserving seismic processing was applied to the lake data. A surface-consistent algorithm was used to correct the amplitude variations of the air-gun shots. The intercept and gradient sections obtained on the Aubonne and Dranse deltas were used to produce cross-plots. This representation makes it possible to classify amplitude anomalies according to the type of sediments and their potential gas content. One of the attributes that can be extracted from the 3D data is the reflectivity amplitude of a seismic interface, which adds a quantitative component to the geological interpretation of an interface. The water bottom on the Aubonne delta shows amplitude anomalies that characterize the channels. An inversion of the Zoeppritz equation using the Levenberg-Marquardt algorithm was programmed to extract the physical parameters of the sediments on this delta. A statistical study of the inversion results makes it possible to simulate the variation of amplitude as a function of offset. The resulting model has water as its first layer and, as its second, a layer with VP = 1461 m/s, ρ = 1182 kg/m³ and VS = 383 m/s.
Abstract: A system to record very high resolution (VHR) seismic data on lakes in 2D and 3D was developed at the Institute of Geophysics, University of Lausanne. Several seismic surveys carried out on Lake Geneva helped us to better understand the geology of the area and to identify sedimentary sequences. However, more sophisticated analysis of the data, such as the AVO (Amplitude Versus Offset) method, provides a means of deciphering the detailed structure of the complex Quaternary sedimentary fill of the Lake Geneva trough. To study the physical parameters of the sediments we applied the AVO method at selected places, the Aubonne and Dranse River deltas, where the configurations of the strata are relatively smooth and the discontinuities between them easy to pick. A specific layout was developed to acquire large incidence angles. 2D and 3D seismic data were acquired with streamers, deployed end to end, providing incidence angles up to 40°. One or more GPS antennas attached to the streamer enabled us to calculate individual hydrophone positions with an accuracy of 50 cm after post-processing of the navigation data. To ensure that our system provides correct amplitude information, our streamer sensors were calibrated in an anechoic chamber using a loudspeaker as a source. Amplitude variations between the hydrophones were of the order of 10 dB.
An amplitude correction for each hydrophone was computed and applied before processing. Amplitude-preserving processing was then carried out. Intercept vs. gradient cross-plots enabled us to determine that both geological discontinuities (lacustrine sediments/moraine and moraine/molasse) have well-defined trends. A 3D volume collected on the Aubonne river delta was processed in order to obtain AVO attributes. Quantitative interpretation maps were produced, and the amplitude maps revealed high reflectivity in channels. Inversion of the Zoeppritz equation at the water bottom using the Levenberg-Marquardt algorithm was carried out to estimate VP, VS and ρ of the sediments immediately under the lake bottom. Real-data inversion gave, under the water layer, a mud layer with VP = 1461 m/s, ρ = 1182 kg/m³ and VS = 383 m/s.
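The intercept and gradient sections mentioned above come from fitting amplitude-versus-angle data; a common way to do this is the two-term Shuey approximation of the Zoeppritz equations, R(θ) ≈ A + B·sin²θ, fitted by least squares. A minimal sketch on synthetic amplitudes (this linearized fit stands in for the full Zoeppritz inversion described in the abstract):

```python
import math

def intercept_gradient(angles_deg, amplitudes):
    """Least-squares fit of R(θ) = A + B*sin²θ (two-term Shuey
    approximation of the Zoeppritz equations). Returns (A, B)."""
    xs = [math.sin(math.radians(t)) ** 2 for t in angles_deg]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(amplitudes) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, amplitudes))
    b = sxy / sxx       # gradient
    a = my - b * mx     # intercept
    return a, b

# Synthetic data with known intercept 0.10 and gradient -0.25, over the
# 0-40 degree incidence range reached by the end-to-end streamers.
angles = [0, 10, 20, 30, 40]
amps = [0.10 - 0.25 * math.sin(math.radians(t)) ** 2 for t in angles]
a, b = intercept_gradient(angles, amps)
```

Plotting A against B for each reflection point yields exactly the intercept-gradient cross-plots used above to separate sediment types and potential gas anomalies.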
Abstract:
Geophysical tomography captures the spatial distribution of the underlying geophysical property at a relatively high resolution, but the tomographic images tend to be blurred representations of reality and generally fail to reproduce sharp interfaces. Such models may cause significant bias when taken as a basis for predictive flow and transport modeling and are unsuitable for uncertainty assessment. We present a methodology in which tomograms are used to condition multiple-point statistics (MPS) simulations. A large set of geologically reasonable facies realizations and their corresponding synthetically calculated cross-hole radar tomograms are used as a training image. The training image is scanned with a direct sampling algorithm for patterns in the conditioning tomogram, while accounting for the spatially varying resolution of the tomograms. In a post-processing step, only those conditional simulations that predicted the radar traveltimes within the expected data error levels are accepted. The methodology is demonstrated on a two-facies example featuring channels and an aquifer analog of alluvial sedimentary structures with five facies. For both cases, MPS simulations exhibit the sharp interfaces and the geological patterns found in the training image. Compared to unconditioned MPS simulations, the uncertainty in transport predictions is markedly decreased for simulations conditioned to tomograms. As an improvement to other approaches relying on classical smoothness-constrained geophysical tomography, the proposed method allows for: (1) reproduction of sharp interfaces, (2) incorporation of realistic geological constraints and (3) generation of multiple realizations that enables uncertainty assessment.
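The post-processing acceptance step described above keeps only those realizations whose simulated traveltimes fit the observations within the expected data error; the selection logic can be sketched as an RMS-misfit filter (function names and the toy "forward model" are illustrative, not from the paper):

```python
def accept_realizations(realizations, observed, sigma, forward):
    """Keep conditional simulations whose forward-modeled traveltimes
    match the observed traveltimes within the expected data error.

    `forward` maps a realization to a list of predicted traveltimes; a
    realization is accepted when its RMS misfit does not exceed the
    data standard deviation `sigma`."""
    accepted = []
    for r in realizations:
        predicted = forward(r)
        misfit = sum((p - o) ** 2 for p, o in zip(predicted, observed))
        rms = (misfit / len(observed)) ** 0.5
        if rms <= sigma:
            accepted.append(r)
    return accepted

# Toy example: the "forward model" just returns the realization itself.
obs = [1.0, 2.0, 3.0]
good = [1.01, 1.99, 3.02]
bad = [1.5, 2.6, 3.9]
kept = accept_realizations([good, bad], obs, sigma=0.05, forward=lambda r: r)
```

Transport predictions are then made only on the accepted ensemble, which is what narrows the uncertainty relative to unconditioned MPS simulations.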
Abstract:
Synchronization of data coming from different sources is of high importance in biomechanics to ensure reliable analyses. This synchronization can either be performed through hardware to obtain perfect matching of data, or post-processed digitally. Hardware synchronization can be achieved using trigger cables connecting different devices in many situations; however, this is often impractical, and sometimes impossible in outdoors situations. The aim of this paper is to describe a wireless system for outdoor use, allowing synchronization of different types of - potentially embedded and moving - devices. In this system, each synchronization device is composed of: (i) a GPS receiver (used as time reference), (ii) a radio transmitter, and (iii) a microcontroller. These components are used to provide synchronized trigger signals at the desired frequency to the measurement device connected. The synchronization devices communicate wirelessly, are very lightweight, battery-operated and thus very easy to set up. They are adaptable to every measurement device equipped with either trigger input or recording channel. The accuracy of the system was validated using an oscilloscope. The mean synchronization error was found to be 0.39 μs and pulses are generated with an accuracy of <2 μs. The system provides synchronization accuracy about two orders of magnitude better than commonly used post-processing methods, and does not suffer from any drift in trigger generation.
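The device described above derives its trigger pulses from a GPS time reference; the scheduling logic can be sketched as dividing each GPS second into equally spaced trigger instants (a simplification of the microcontroller firmware, whose internals the abstract does not detail):

```python
def trigger_times(pps_seconds, freq_hz):
    """Given GPS pulse-per-second timestamps (s) and a desired trigger
    frequency, return the trigger instants for each full second.
    Re-anchoring every second to GPS time is what prevents the
    long-term drift that post-processing methods suffer from."""
    period = 1.0 / freq_hz
    times = []
    for t0 in pps_seconds:
        times.extend(t0 + k * period for k in range(freq_hz))
    return times

# Two GPS seconds at 4 Hz -> 8 synchronized trigger instants.
t = trigger_times([0.0, 1.0], 4)
```

Because every device generates its triggers from the same GPS timebase, no cables are needed and the residual error reduces to the microsecond-level jitter reported above.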
Abstract:
Introduction: A standardized three-dimensional ultrasonographic (3DUS) protocol is described that allows fetal face reconstruction. Ability to identify cleft lip with 3DUS using this protocol was assessed by operators with minimal 3DUS experience. Material and Methods: 260 stored volumes of fetal face were analyzed using a standardized protocol by operators with different levels of competence in 3DUS. The outcomes studied were: (1) the performance of post-processing 3D face volumes for the detection of facial clefts; (2) the ability of a resident with minimal 3DUS experience to reconstruct the acquired facial volumes, and (3) the time needed to reconstruct each plane to allow proper diagnosis of a cleft. Results: The three orthogonal planes of the fetal face (axial, sagittal and coronal) were adequately reconstructed with similar performance when acquired by a maternal-fetal medicine specialist or by residents with minimal experience (72 vs. 76%, p = 0.629). The learning curve for manipulation of 3DUS volumes of the fetal face corresponds to 30 cases and is independent of the operator's level of experience. Discussion: The learning curve for the standardized protocol we describe is short, even for inexperienced sonographers. This technique might decrease the length of anatomy ultrasounds and improve the ability to visualize fetal face anomalies.
Abstract:
Recent advances in CT technology have significantly improved the clinical utility of cardiac CT. Major efforts have been made to optimize image quality, standardize protocols and limit radiation exposure. Rapid progress in post-processing tools dedicated not only to coronary artery assessment but also to the cardiac cavities, valves and veins has extended the applications of cardiac CT. This potential can, however, only be used optimally by considering the current appropriate indications for use as well as the current technical limitations. Coronary artery disease and related ischemic cardiomyopathy remain the major applications of cardiac CT and at the same time the most complex ones. Integration of specific knowledge is mandatory for optimal use in this area, for asymptomatic as well as symptomatic patients, with specific regard to patients with acute chest pain. This review aims to propose a practical approach to implementing appropriate indications in routine practice. Emerging indications and future directions are also discussed. Adequate preparation of the patient, training of physicians, and multidisciplinary interaction between the actors involved are the keys to successful implementation of cardiac CT in daily practice.
Abstract:
Over the past few decades, Fourier transform infrared (FTIR) spectroscopy coupled to microscopy has been recognized as an emerging and potentially powerful tool in cancer research and diagnosis. For this purpose, histological analyses performed by pathologists are mostly carried out on biopsied tissue that undergoes the formalin-fixation and paraffin-embedding (FFPE) procedure. This processing method ensures an optimal and permanent preservation of the samples, making FFPE-archived tissue an extremely valuable source for retrospective studies. Nevertheless, as highlighted by previous studies, this fixation procedure significantly changes the principal constituents of cells, resulting in important effects on their infrared (IR) spectrum. Despite the chemical and spectral influence of FFPE processing, some studies demonstrate that FTIR imaging allows precise identification of the different cell types present in biopsied tissue, indicating that the FFPE process preserves spectral differences between distinct cell types. In this study, we investigated whether this is also the case for closely related cell lines. We analyzed spectra from 8 cancerous epithelial cell lines: 4 breast cancer cell lines and 4 melanoma cell lines. For each cell line, we harvested cells at subconfluence and divided them into two sets. We first tested the "original" capability of FTIR imaging to identify these closely related cell lines on cells just dried on BaF2 slides. We then repeated the test after submitting the cells to the FFPE procedure. Our results show that the IR spectra of FFPE processed cancerous cell lines undergo small but significant changes due to the treatment. The spectral modifications were interpreted as a potential decrease in the phospholipid content and protein denaturation, in line with the scientific literature on the topic. 
Nevertheless, unsupervised analyses showed that spectral proximities and distances between closely related cell lines were mostly, but not entirely, conserved after FFPE processing. Finally, PLS-DA statistical analyses highlighted that closely related cell lines are still successfully identified and efficiently distinguished by FTIR spectroscopy after FFPE treatment. This last result paves the way towards identification and characterization of cellular subtypes on FFPE tissue sections by FTIR imaging, indicating that this analysis technique could become a potentially useful tool in cancer research.
Abstract:
Introduction: The field of connectomic research is growing rapidly, resulting from methodological advances in structural neuroimaging on many spatial scales. In particular, progress in diffusion MRI data acquisition and processing has made macroscopic structural connectivity maps available in vivo through Connectome Mapping Pipelines (Hagmann et al, 2008) into so-called Connectomes (Hagmann 2005, Sporns et al, 2005). They exhibit both spatial and topological information that constrain functional imaging studies and are relevant to their interpretation. The need has grown for a special-purpose software tool that supports investigations of such connectome data by both clinical researchers and neuroscientists. Methods: We developed the ConnectomeViewer, a powerful, extensible software tool for visualization and analysis in connectomic research. It uses the newly defined container-like Connectome File Format, specifying networks (GraphML), surfaces (Gifti), volumes (Nifti), track data (TrackVis) and metadata. Using Python as the programming language allows it to be cross-platform and to have access to a multitude of scientific libraries. Results: Using a flexible plugin architecture, it is possible to easily enhance functionality for specific purposes. The following features are already implemented:
* Ready usage of libraries, e.g. for complex network analysis (NetworkX) and data plotting (Matplotlib). More brain connectivity measures will be implemented in a future release (Rubinov et al, 2009).
* 3D view of networks with node positioning based on the corresponding ROI surface patch. Other layouts are possible.
* Picking functionality to select nodes, select edges, get more node information (ConnectomeWiki), and toggle surface representations.
* Interactive thresholding and modality selection of edge properties using filters.
* Arbitrary metadata can be stored for networks, thereby allowing e.g. group-based analysis or meta-analysis.
* Python shell for scripting.
Application data is exposed and can be modified or used for further post-processing.
* Visualization pipelines using filters and modules can be composed with Mayavi (Ramachandran et al, 2008).
* Interface to TrackVis to visualize track data. Selected nodes are converted to ROIs for fiber filtering.
The Connectome Mapping Pipeline (Hagmann et al, 2008) was used to process 20 healthy subjects into an average Connectome dataset. The figures show the ConnectomeViewer user interface using this dataset. Connections are shown that occur in all 20 subjects. The dataset is freely available from the homepage (connectomeviewer.org). Conclusions: The ConnectomeViewer is a cross-platform, open-source software tool that provides extensive visualization and analysis capabilities for connectomic research. It has a modular architecture, integrates the relevant data types and is completely scriptable. Visit www.connectomics.org to get involved as a user or developer.
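The interactive thresholding of edge properties mentioned above amounts to filtering a network's edge list by an attribute value; a minimal sketch on a plain edge list (the real tool applies such filters to GraphML networks inside the Connectome File Format, and the node and property names here are illustrative):

```python
def threshold_edges(edges, prop, minimum):
    """Keep only edges whose property `prop` is at least `minimum`.

    `edges` is a list of (node_a, node_b, properties) tuples, mimicking
    how connectome networks carry per-edge measures such as fiber count
    or average fractional anisotropy."""
    return [(a, b, p) for a, b, p in edges if p.get(prop, 0) >= minimum]

edges = [
    ("lh.precentral", "lh.postcentral", {"fiber_count": 120}),
    ("lh.precentral", "lh.cuneus", {"fiber_count": 3}),
    ("rh.precentral", "rh.postcentral", {"fiber_count": 95}),
]
strong = threshold_edges(edges, "fiber_count", 50)
```

Chaining such filters over different edge modalities is what lets the viewer show, for example, only connections present in all 20 subjects of the average dataset.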