Abstract:
Background and aim of the study: Fractures of the triquetrum are the second most frequent carpal bone fractures, after those of the scaphoid. They account for approximately 3.5% of all traumatic wrist injuries and most often result from a fall from standing height onto the wrist in hyperextension. Their pathophysiological mechanism remains debated. The first theory was ligamentous avulsion of a dorsal bone fragment. Later, Levy et al. and Garcia-Elias successively suggested that these fractures result instead from ulnocarpal impaction. Numerous ligaments (intrinsic and extrinsic carpal ligaments) insert on the palmar and dorsal aspects of the triquetrum. These ligaments play an essential role in maintaining carpal stability. Although wrist MR arthrography is the reference examination for assessing these ligaments, Shahabpour et al. recently demonstrated that they can be visualized on three-dimensional (volumetric) MRI after intravenous injection of contrast agent (gadolinium). The ligament injuries associated with dorsal triquetral fractures have never been assessed to date. Such lesions could affect the course and the management of these fractures. The objectives of the study were therefore: first, to determine the full MRI characteristics of dorsal triquetral fractures, with emphasis on the associated extrinsic ligament injuries; second, to discuss the different pathophysiological mechanisms of these fractures (i.e. ligamentous avulsion or ulnocarpal impaction) in light of our MRI findings. Patients and methods: This is a retrospective multicentre study (CHUV, Lausanne; Hôpital Cochin, AP-HP, Paris) of wrist MRI examinations and conventional radiographs. From January 2008 onwards, we searched the institutional databases for patients with a triquetral fracture who had undergone volumetric wrist MRI within six weeks of the trauma. MRI examinations were performed on two high-field (3 Tesla) scanners with a dedicated coil and an acquisition protocol including an isotropic three-dimensional sequence ("3D VIBE") after intravenous injection of contrast agent (gadolinium). The examinations were analysed by two experienced musculoskeletal radiologists. Measurements were performed by a third musculoskeletal radiologist. For the qualitative analysis, the type of triquetral fracture (according to the Garcia-Elias classification), the distribution of post-traumatic bone marrow oedema, and the number and distribution of associated extrinsic ligament injuries were assessed. For the quantitative analysis, the ulnar styloid process index (according to the Garcia-Elias formula), the volume of the bone fragment detached from the triquetrum, and the distance separating this fragment from the triquetrum were measured.
Abstract:
AIM: To use an animal model to study aqueous dynamics and histological findings after deep sclerectomy with (DSCI) and without (DS) collagen implant. METHODS: Deep sclerectomy was performed on rabbit eyes. Eyes were randomly assigned to receive collagen implants. Intraocular pressure (IOP) and aqueous outflow facility were measured using the constant-pressure method through cannulation of the anterior chamber. The system was filled with BSS and cationised ferritin. Histological assessment of the operative site was performed. Sections were stained with haematoxylin and eosin and with Prussian blue. Aqueous drainage vessels were identified by the reaction between ferritin and Prussian blue. All eyes were coded so that the investigator was blind to the type of surgery until the evaluation was completed. RESULTS: A significant decrease in IOP (p<0.05) was observed during the first 6 weeks after DSCI (mean IOP was 13.07 (2.95) mm Hg preoperatively and 9.08 (2.25) mm Hg at 6 weeks); DS without collagen implant showed a significant decrease in IOP at weeks 4 and 8 after surgery (mean IOP 12.57 (3.52) mm Hg preoperatively, 9.45 (3.38) mm Hg at 4 weeks, and 9.22 (3.39) mm Hg at 8 weeks). Outflow facility was significantly increased throughout the 9 months of follow-up in both the DSCI and DS groups (p<0.05). The preoperative outflow facility (OF) was 0.15 (0.02) microl/min/mm Hg. At 9 months, OF was 0.52 (0.28) microl/min/mm Hg and 0.46 (0.07) microl/min/mm Hg for DSCI and DS respectively. Light microscopy showed new aqueous drainage vessels in the sclera adjacent to the dissection site in both DSCI and DS, and the appearance of spindle cells lining the collagen implant in DSCI after 2 months. CONCLUSION: A significant IOP decrease was observed during the first weeks after DSCI and DS. DS with or without collagen implant provided a significant increase in outflow facility throughout the 9 months of follow-up. This might be partly explained by new drainage vessels in the sclera surrounding the operated site. Microscopic studies revealed the appearance of spindle cells lining the collagen implant in DSCI after 2 months.
Abstract:
We present an open-source ITK implementation of a direct Fourier method for tomographic reconstruction, applicable to parallel-beam x-ray images. Direct Fourier reconstruction uses the central-slice theorem to build a polar 2D Fourier space from the 1D-transformed projections of the scanned object, which is then resampled onto a Cartesian grid. An inverse 2D Fourier transform finally yields the reconstructed image. Additionally, we provide a complex wrapper to the BSplineInterpolateImageFunction to overcome ITK's current lack of image interpolators that handle complex data types. A sample application is presented and extensively illustrated on the Shepp-Logan head phantom. We show that appropriate input zero-padding and 2D-DFT oversampling rates, together with radial cubic B-spline interpolation, improve 2D-DFT interpolation quality and are efficient remedies to reduce reconstruction artifacts.
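The paper's implementation is in ITK/C++; as a rough illustration of the pipeline the abstract describes (1D FFT of zero-padded projections, polar-to-Cartesian resampling, inverse 2D FFT), here is a minimal NumPy/SciPy sketch. Function and parameter names are illustrative, linear interpolation stands in for the radial cubic B-spline interpolation used in the paper, and global intensity scaling is ignored.

```python
import numpy as np
from scipy.interpolate import griddata

def direct_fourier_reconstruct(sinogram, angles_deg, pad_factor=4):
    """Direct Fourier reconstruction of a parallel-beam sinogram (sketch).

    sinogram: (n_angles, n_detectors) array of projections.
    angles_deg: projection angles in degrees.
    pad_factor: zero-padding factor applied before the 1D FFT
                (reduces interpolation artifacts).
    """
    n_angles, n_det = sinogram.shape
    n_pad = pad_factor * n_det

    # Zero-pad each projection symmetrically and take its 1D FFT:
    # by the central-slice theorem this gives radial lines of the 2D spectrum.
    pad = (n_pad - n_det) // 2
    padded = np.pad(sinogram, ((0, 0), (pad, n_pad - n_det - pad)))
    proj_ft = np.fft.fftshift(
        np.fft.fft(np.fft.ifftshift(padded, axes=1), axis=1), axes=1)
    freqs = np.fft.fftshift(np.fft.fftfreq(n_pad))

    # Polar coordinates of every Fourier sample.
    theta = np.deg2rad(angles_deg)[:, None]
    kx = (freqs[None, :] * np.cos(theta)).ravel()
    ky = (freqs[None, :] * np.sin(theta)).ravel()

    # Resample the polar samples onto a Cartesian frequency grid
    # (real and imaginary parts interpolated separately).
    k = np.fft.fftshift(np.fft.fftfreq(n_det))
    kxx, kyy = np.meshgrid(k, k)
    re = griddata((kx, ky), proj_ft.real.ravel(), (kxx, kyy),
                  method='linear', fill_value=0.0)
    im = griddata((kx, ky), proj_ft.imag.ravel(), (kxx, kyy),
                  method='linear', fill_value=0.0)
    cart_ft = re + 1j * im

    # Inverse 2D FFT yields the reconstructed image (up to a scale factor).
    img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(cart_ft)))
    return np.real(img)
```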
Abstract:
PURPOSE: To investigate the ability of inversion recovery ON-resonant water suppression (IRON) in conjunction with P904 (superparamagnetic nanoparticles consisting of a maghemite core coated with a low-molecular-weight amino-alcohol derivative of glucose) to perform steady-state equilibrium-phase MR angiography (MRA) over a wide dose range. MATERIALS AND METHODS: Experiments were approved by the institutional animal care committee. Rabbits (n = 12) were imaged at baseline and serially after the administration of 10 incremental doses of 0.57-5.7 mgFe/kg P904. Conventional T1-weighted and IRON MRA were obtained on a clinical 1.5 Tesla (T) scanner to image the thoracic and abdominal aorta and the peripheral vessels. Contrast-to-noise ratios (CNR) and vessel sharpness were quantified. RESULTS: With IRON MRA, CNR and vessel sharpness progressively increased with incremental doses of the contrast agent P904, showing consistently higher contrast values than T1-weighted MRA over a very wide range of contrast agent doses (CNR of 18.8 ± 5.6 for IRON versus 11.1 ± 2.8 for T1-weighted MRA at 1.71 mgFe/kg, P = 0.02, and 19.8 ± 5.9 for IRON versus -0.8 ± 1.4 for T1-weighted MRA at 3.99 mgFe/kg, P = 0.0002). Similar results were obtained for vessel sharpness in peripheral vessels (vessel sharpness of 46.76 ± 6.48% for IRON versus 33.20 ± 3.53% for T1-weighted MRA at 1.71 mgFe/kg, P = 0.002, and 48.66 ± 5.50% for IRON versus 19.00 ± 7.41% for T1-weighted MRA at 3.99 mgFe/kg, P = 0.003). CONCLUSION: Our study suggests that quantitative CNR and vessel sharpness after the injection of P904 are consistently higher for IRON MRA than for conventional T1-weighted MRA. These findings apply over a wide range of contrast agent doses.
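The study quantifies contrast-to-noise ratio in regions of interest; a minimal sketch of one common CNR definition (difference of mean vessel and background signal divided by the standard deviation of the background) is given below. The masks and this exact definition are assumptions for illustration, not necessarily the study's measurement protocol.

```python
import numpy as np

def contrast_to_noise_ratio(image, vessel_mask, background_mask):
    """One common CNR definition: (mean vessel signal - mean background signal)
    divided by the standard deviation of the background signal."""
    vessel = image[vessel_mask]
    background = image[background_mask]
    return (vessel.mean() - background.mean()) / background.std()

# Example usage with hypothetical boolean masks drawn on an MRA slice:
# cnr = contrast_to_noise_ratio(mra_slice, aorta_mask, muscle_mask)
```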
Abstract:
We perceive our environment through multiple sensory channels. Nonetheless, research has traditionally focused on the investigation of sensory processing within single modalities. Investigating how our brain integrates multisensory information is therefore of crucial importance for understanding how organisms cope with a constantly changing and dynamic environment. During my thesis I investigated how multisensory events impact our perception and brain responses, either when auditory-visual stimuli were presented simultaneously or when multisensory events at one point in time impact later unisensory processing. In "Looming signals reveal synergistic principles of multisensory integration" (Cappe, Thelen et al., 2012) we investigated the neuronal substrates involved in motion-in-depth detection under multisensory vs. unisensory conditions. We showed that congruent auditory-visual looming (i.e. approaching) signals are preferentially integrated by the brain. Further, we showed that early effects under these conditions are relevant for behavior, effectively speeding up responses to these combined stimulus presentations. In "Electrical neuroimaging of memory discrimination based on single-trial multisensory learning" (Thelen et al., 2012), we investigated the behavioral impact of single encounters with meaningless auditory-visual object pairings upon subsequent visual object recognition. In addition to showing that these encounters lead to impaired recognition accuracy upon repeated visual presentations, we showed that the brain discriminates images as early as ~100 ms post-stimulus onset according to the initial encounter context. In "Single-trial multisensory memories affect later visual and auditory object recognition" (Thelen et al., in review) we addressed whether auditory object recognition is affected by single-trial multisensory memories, and whether recognition accuracy for sounds was affected by the initial encounter context in the same way as for visual objects. We found that this is in fact the case. Based on our behavioral findings, we propose that a common underlying brain network is differentially involved during encoding and retrieval of images and sounds.
Abstract:
Three-dimensional imaging for the quantification of myocardial motion is a key step in the evaluation of cardiac disease. A tagged magnetic resonance imaging method that automatically tracks myocardial displacement in three dimensions is presented. Unlike other techniques, this method tracks both in-plane and through-plane motion from a single image plane without affecting the duration of image acquisition. A small z-encoding gradient is added to the refocusing lobe of the slice-selection gradient pulse in a slice-following CSPAMM acquisition, and a z-encoding gradient of opposite polarity is added for the orthogonal tag direction. These additional z-gradients encode the instantaneous through-plane position of the slice. The vertical and horizontal tags are used to resolve in-plane motion, while the added z-gradients are used to resolve through-plane motion. Postprocessing automatically decodes the acquired data and tracks the three-dimensional displacement of every material point within the image plane for each cine frame. Experiments include both a phantom and an in vivo human validation. These studies demonstrate that the simultaneous extraction of both in-plane and through-plane displacements and pathlines from tagged images is achievable. This capability should open up new avenues for the automatic quantification of cardiac motion and strain for scientific and clinical purposes.
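The central idea is that a small z-encoding gradient maps through-plane position linearly onto image phase. The sketch below shows only that position-to-phase relation, under the simplifying assumption that the in-plane tag phase has already been separated out (e.g. by combining the two opposite-polarity acquisitions); it is not the paper's full decoding pipeline, and phase wrapping limits the unambiguous displacement range to ±π/κ.

```python
import numpy as np

GAMMA = 2 * np.pi * 42.58e6  # proton gyromagnetic ratio, rad/s/T

def z_encoding_constant(gz_amplitude_t_per_m, duration_s):
    """Phase accrued per metre of through-plane position for a small constant
    z-encoding gradient: kappa = gamma * Gz * tau (rad/m)."""
    return GAMMA * gz_amplitude_t_per_m * duration_s

def through_plane_displacement(phase_frame, phase_reference, kappa):
    """Through-plane displacement (metres) from the z-encoded phase difference
    between a cine frame and the reference frame. np.angle of the complex
    ratio keeps the difference wrapped to (-pi, pi]."""
    dphi = np.angle(np.exp(1j * (phase_frame - phase_reference)))
    return dphi / kappa
```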
Abstract:
In this paper we present a Bayesian image reconstruction algorithm with an entropy prior (FMAPE) that uses a space-variant hyperparameter. The spatial variation of the hyperparameter allows different degrees of resolution in areas with different statistical characteristics, thus avoiding the large residuals produced by algorithms that use a constant hyperparameter. In the first implementation of the algorithm, we begin by segmenting a Maximum Likelihood Estimator (MLE) reconstruction. The segmentation method is based on a wavelet decomposition and a self-organizing neural network. The result is a predetermined number of extended regions plus a small region for each star or bright object. To assign a different value of the hyperparameter to each extended region and star, we use either feasibility tests or cross-validation methods. Once the set of hyperparameters is obtained, we carry out the final Bayesian reconstruction, leading to a result with decreased bias and excellent visual characteristics. The method has been applied to data from the non-refurbished Hubble Space Telescope and can also be applied to ground-based images.
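The FMAPE update itself is not given in the abstract; purely as a generic illustration of the space-variant hyperparameter idea, the sketch below runs a gradient-descent MAP restoration with a cross-entropy prior whose weight varies per pixel (one value per segmented region). The blur operators are hypothetical callables, and a Gaussian likelihood is assumed for simplicity; this is not the FMAPE algorithm.

```python
import numpy as np

def map_entropy_restore(data, psf_convolve, psf_convolve_adj, beta_map,
                        model=None, n_iter=200, step=0.1, eps=1e-8):
    """Gradient-descent MAP restoration with a cross-entropy prior and a
    space-variant hyperparameter map beta_map (same shape as the image).

    Objective (Gaussian noise assumed for simplicity):
        J(f) = 0.5 * ||H f - g||^2 + sum( beta * (f*log(f/m) - f + m) )
    where m is a default/model image and H the blur operator.
    """
    f = np.full(data.shape, float(data.mean()))          # flat positive start
    m = model if model is not None else np.full(data.shape, float(data.mean()))
    for _ in range(n_iter):
        residual = psf_convolve(f) - data                 # H f - g
        grad_like = psf_convolve_adj(residual)            # H^T (H f - g)
        grad_prior = beta_map * np.log((f + eps) / (m + eps))
        f = np.clip(f - step * (grad_like + grad_prior), eps, None)
    return f
```

In this illustration, beta_map would be assembled by assigning one hyperparameter value to each segmented region (extended regions and point sources) obtained from the MLE reconstruction, as the abstract describes.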
Abstract:
We present a novel approach for analyzing single-trial electroencephalography (EEG) data using topographic information. The method allows event-related potentials to be visualized using all recording electrodes, overcoming the limitation of previous approaches that required electrode selection and waveform filtering. We apply this method to EEG data from an auditory object recognition experiment that we previously analyzed at the ERP level. Temporally structured periods during which a given topography predominated were identified statistically, without any prior information about the temporal behavior. In addition to providing novel methods for EEG analysis, the data indicate that ERPs are reliably observable at the single-trial level when examined topographically.
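One common way to exploit topographic information at the single-trial level is to fit template scalp maps to each trial's instantaneous topography by spatial correlation. The sketch below illustrates that general idea and is not claimed to be the authors' exact pipeline; the template maps are assumed to come from a separate analysis.

```python
import numpy as np

def spatial_correlation(trial_map, template_map):
    """Spatial (Pearson) correlation between two scalp topographies,
    computed across electrodes after average-referencing each map."""
    a = trial_map - trial_map.mean()
    b = template_map - template_map.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def label_single_trials(eeg, templates):
    """eeg: (n_trials, n_electrodes, n_times); templates: (n_maps, n_electrodes).
    Returns, for each trial and time point, the index of the template topography
    it resembles most (polarity ignored, as is usual in topographic fitting)."""
    n_trials, n_elec, n_times = eeg.shape
    labels = np.zeros((n_trials, n_times), dtype=int)
    for tr in range(n_trials):
        for t in range(n_times):
            corrs = [abs(spatial_correlation(eeg[tr, :, t], tm)) for tm in templates]
            labels[tr, t] = int(np.argmax(corrs))
    return labels
```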
Abstract:
The standard data fusion methods may not be satisfactory for merging a high-resolution panchromatic image and a low-resolution multispectral image because they can distort the spectral characteristics of the multispectral data. The authors developed a technique, based on multiresolution wavelet decomposition, for the merging and data fusion of such images. The method consists of adding the wavelet coefficients of the high-resolution image to the multispectral (low-resolution) data. They studied several possibilities and concluded that the method producing the best results consists of adding the high-order coefficients of the wavelet transform of the panchromatic image to the intensity component (defined as L=(R+G+B)/3) of the multispectral image. The method is thus an improvement on standard intensity-hue-saturation (IHS or LHS) mergers. They used the "à trous" algorithm, which allows a dyadic wavelet to be used to merge non-dyadic data in a simple and efficient scheme. They used the method to merge SPOT and Landsat TM images. The technique presented is clearly better than the IHS and LHS mergers in preserving both spectral and spatial information.
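A minimal sketch of the described merger: an "à trous" (undecimated) decomposition of the panchromatic image, whose detail planes are added to the intensity L=(R+G+B)/3. The B3-spline kernel and the number of levels are assumptions commonly made with this algorithm, the multispectral bands are assumed already resampled to the panchromatic grid, and the final conversion of the fused intensity back to RGB (the LHS step) is omitted.

```python
import numpy as np
from scipy.ndimage import convolve

# B3-spline kernel commonly used with the "a trous" algorithm (assumption)
_B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0

def atrous_planes(image, n_levels=2):
    """Undecimated ("a trous") wavelet decomposition: returns the list of
    detail (wavelet) planes plus the final smoothed approximation."""
    planes, current = [], image.astype(float)
    for level in range(n_levels):
        # dilate the kernel by inserting 2**level - 1 zeros between taps
        step = 2 ** level
        kernel = np.zeros((len(_B3) - 1) * step + 1)
        kernel[::step] = _B3
        smoothed = convolve(current, kernel[None, :], mode='mirror')
        smoothed = convolve(smoothed, kernel[:, None], mode='mirror')
        planes.append(current - smoothed)   # wavelet plane of this level
        current = smoothed
    return planes, current

def fuse_intensity(pan, r, g, b, n_levels=2):
    """Add the high-resolution wavelet planes of the panchromatic image to the
    intensity component L = (R + G + B) / 3, as in the merger described above."""
    details, _ = atrous_planes(pan, n_levels)
    intensity = (r + g + b) / 3.0
    return intensity + sum(details)
```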
Abstract:
When preparing an article on image restoration in astronomy, some topics obviously have to be dropped to keep the work at a reasonable length. We have decided to concentrate on image and noise models and on the algorithms used to find the restoration. Topics such as parameter estimation and stopping rules are also commented on. We start by describing the Bayesian paradigm and then proceed to study the noise and blur models used by the astronomical community. Next, the prior models used to restore astronomical images are examined. We describe the algorithms used to find the restoration for the most common combinations of degradation and image models. We then comment on important issues such as acceleration of algorithms, stopping rules, and parameter estimation. We also comment on the huge amount of information available to, and made available by, the astronomical community.
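As one representative example of the restoration algorithms such a review covers, the Richardson-Lucy iteration (the EM/maximum-likelihood solution for a Poisson noise model with a known blur) can be sketched as follows. It is offered only as an illustration of the algorithm families discussed, not as a summary of the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    """Richardson-Lucy iteration: multiplicative EM update for restoring an
    image degraded by a known PSF under Poisson noise."""
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]            # adjoint of convolution with the PSF
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_flip, mode='same')
    return estimate
```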
Abstract:
Audit report on the Jackson County Sanitary Disposal Agency for the year ended June 30, 2009
Abstract:
Usual image fusion methods inject features from a high-spatial-resolution panchromatic sensor into every low-spatial-resolution multispectral band, trying to preserve spectral signatures and improve spatial resolution to that of the panchromatic sensor. The objective is to obtain the image that would be observed by a sensor with the same spectral response (i.e., spectral sensitivity and quantum efficiency) as the multispectral sensors and the spatial resolution of the panchromatic sensor. In these methods, however, features from regions of the electromagnetic spectrum not covered by the multispectral sensors are injected into them, and the physical spectral responses of the sensors are not considered during this process. This produces some undesirable effects, such as over-injection of spatial detail and slightly modified spectral signatures in some features. The authors present a technique that takes into account the physical electromagnetic spectrum responses of the sensors during the fusion process, producing images closer to the image obtained by the ideal sensor than those obtained by usual wavelet-based image fusion methods. This technique is used to define a new wavelet-based image fusion method.
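The abstract does not spell out how the sensor responses enter the fusion; purely as a conceptual illustration of the idea, the sketch below weights the injected panchromatic detail for each multispectral band by the overlap between that band's spectral response curve and the panchromatic response. The weighting scheme and function names are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def overlap_weight(band_response, pan_response):
    """Fraction of the panchromatic sensor's spectral response that is covered
    by a given multispectral band (a simple overlap ratio). Both response
    curves are assumed to be sampled on the same uniform wavelength grid."""
    overlap = np.minimum(band_response, pan_response).sum()
    return overlap / pan_response.sum()

def weighted_injection(ms_band, pan_details, weight):
    """Inject the panchromatic wavelet detail planes into one multispectral
    band, scaled by its spectral-overlap weight, so that detail from spectrum
    regions the band does not sense is attenuated rather than fully injected."""
    return ms_band + weight * sum(pan_details)
```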