223 results for Particle image analyser
Abstract:
Iterative image reconstruction algorithms provide significant improvements over traditional filtered back projection in computed tomography (CT). Clinically available through recent advances in modern CT technology, iterative reconstruction enhances image quality through cyclical image calculation, suppressing image noise and artifacts, particularly blooming artifacts. The advantages of iterative reconstruction are apparent in traditionally challenging cases: for example, in obese patients, those with significant artery calcification, or those with coronary artery stents. In addition, as clinical use of CT has grown, so have concerns over ionizing radiation associated with CT examinations. Through noise reduction, iterative reconstruction has been shown to permit radiation dose reduction while preserving diagnostic image quality. This approach is becoming increasingly attractive as the routine use of CT for pediatric and repeated follow-up evaluation grows ever more common. Cardiovascular CT in particular, with its focus on detailed structural and functional analyses, stands to benefit greatly from the promising iterative solutions that are readily available.
Abstract:
Measurement of the arterial input function is a limiting factor for quantitative (18)F-FDG PET studies in rodents because of their small total blood volume and the associated difficulty of withdrawing blood.
Abstract:
BACKGROUND: The yeast Schizosaccharomyces pombe is frequently used as a model for studying the cell cycle. The cells are rod-shaped and divide by medial fission. The process of cell division, or cytokinesis, is controlled by a network of signaling proteins called the Septation Initiation Network (SIN); SIN proteins associate with the spindle pole bodies (SPBs) during nuclear division (mitosis). Some SIN proteins associate with both SPBs early in mitosis, and then display strongly asymmetric signal intensity at the SPBs in late mitosis, just before cytokinesis. This asymmetry is thought to be important for correct regulation of SIN signaling and for the coordination of cytokinesis and mitosis. In order to study the dynamics of organelles or large protein complexes such as the SPB, which have been labeled with a fluorescent protein tag in living cells, a number of image analysis problems must be solved: the cell outline must be detected automatically, and the position and signal intensity associated with the structures of interest within the cell must be determined. RESULTS: We present a new 2D and 3D image analysis system that permits versatile and robust analysis of motile, fluorescently labeled structures in rod-shaped cells. We have implemented this system as a user-friendly software package that allows fast and robust image analysis of large numbers of rod-shaped cells. We have developed new robust algorithms, which we combined with existing methodologies to facilitate fast and accurate analysis. Our software permits the detection and segmentation of rod-shaped cells in either static or dynamic (i.e. time-lapse) multi-channel images. It enables tracking of two structures (for example SPBs) in two different image channels.
For 2D or 3D static images, the locations of the structures are identified, and intensity values are then extracted together with several quantitative parameters, such as length, width, cell orientation, background fluorescence and the distance between the structures of interest. Furthermore, two kinds of kymographs of the tracked structures can be generated: one representing their migration with respect to their relative position, the other representing their individual trajectories inside the cell. This software package, called "RodCellJ", allowed us to analyze a large number of S. pombe cells to understand the rules that govern SIN protein asymmetry. CONCLUSIONS: "RodCellJ" is freely available to the community as a package of several ImageJ plugins for simultaneously analyzing the behavior of a large number of rod-shaped cells in an extensive manner. The integration of different image-processing techniques in a single package, together with the development of novel algorithms, not only speeds up analysis compared with existing tools but also provides higher accuracy. Its utility was demonstrated on both 2D and 3D static and dynamic images to study the Septation Initiation Network of the yeast Schizosaccharomyces pombe. More generally, it can be used in any biological context where fluorescent-protein-labeled structures need to be analyzed in rod-shaped cells. AVAILABILITY: RodCellJ is freely available at http://bigwww.epfl.ch/algorithms.html (after acceptance of the publication).
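The relative-position measurement underlying such a kymograph can be sketched in a few lines. This is our own minimal illustration, not RodCellJ code: given per-frame 2D coordinates of two tracked structures (e.g. the two SPBs), it computes the frame-by-frame distance between them.

```python
import numpy as np

def pairwise_distance(track_a, track_b):
    """track_a, track_b: (T, 2) sequences of (x, y) positions per frame.
    Returns the per-frame Euclidean distance between the two structures."""
    a = np.asarray(track_a, float)
    b = np.asarray(track_b, float)
    return np.linalg.norm(a - b, axis=1)

# Invented coordinates for illustration: two structures over three frames
spb1 = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
spb2 = [(0.0, 3.0), (1.0, 4.0), (2.0, 0.0)]
print(pairwise_distance(spb1, spb2))  # [3. 4. 0.]
```

Plotting this distance against frame number gives the relative-position view; plotting each track's coordinates separately gives the individual-trajectory view.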
Abstract:
In many European countries, image quality for digital x-ray systems used in screening mammography is currently specified using a threshold-detail detectability method. This is a two-part study that proposes an alternative method based on calculated detectability for a model observer: the first part of the work presents a characterization of the systems. Eleven digital mammography systems were included in the study; four computed radiography (CR) systems, and a group of seven digital radiography (DR) detectors, composed of three amorphous selenium-based detectors, three caesium iodide scintillator systems and a silicon wafer-based photon counting system. The technical parameters assessed included the system response curve, detector uniformity error, pre-sampling modulation transfer function (MTF), normalized noise power spectrum (NNPS) and detective quantum efficiency (DQE). Approximate quantum noise limited exposure range was examined using a separation of noise sources based upon standard deviation. Noise separation showed that electronic noise was the dominant noise at low detector air kerma for three systems; the remaining systems showed quantum noise limited behaviour between 12.5 and 380 µGy. Greater variation in detector MTF was found for the DR group compared to the CR systems; MTF at 5 mm(-1) varied from 0.08 to 0.23 for the CR detectors against a range of 0.16-0.64 for the DR units. The needle CR detector had a higher MTF, lower NNPS and higher DQE at 5 mm(-1) than the powder CR phosphors. DQE at 5 mm(-1) ranged from 0.02 to 0.20 for the CR systems, while DQE at 5 mm(-1) for the DR group ranged from 0.04 to 0.41, indicating higher DQE for the DR detectors and needle CR system than for the powder CR phosphor systems. The technical evaluation section of the study showed that the digital mammography systems were well set up and exhibiting typical performance for the detector technology employed in the respective systems.
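The figures of merit above are linked by a standard detector-physics relation: with NNPS normalized by the squared large-area signal, DQE(f) = MTF(f)^2 / (q * NNPS(f)), where q is the incident photon fluence. A minimal sketch with invented numbers (not the study's measurements):

```python
import numpy as np

def dqe(mtf, nnps, q):
    """Spatial-frequency DQE from the presampled MTF, the normalized
    noise power spectrum NNPS (mm^2), and the incident photon fluence
    q (photons/mm^2), using DQE(f) = MTF(f)^2 / (q * NNPS(f))."""
    mtf = np.asarray(mtf, float)
    nnps = np.asarray(nnps, float)
    return mtf**2 / (q * nnps)

# Illustrative values only: MTF = 0.2 at 5 mm^-1, NNPS = 4e-7 mm^2,
# q = 5e5 photons/mm^2 at the chosen detector air kerma
print(round(float(dqe([0.2], [4e-7], 5e5)[0]), 3))  # 0.2
```

With these numbers the detector would transfer 20% of the input signal-to-noise ratio (squared) at 5 mm^-1, in the middle of the CR/DR range reported above.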
Abstract:
Over the past thirty years, representations of the figure of Jesus have multiplied in photography. From the narrative sequence by the American photographer Duane Michals (Christ in New York, 1981) to Wim Delvoye's way of the cross (Viae Crucis, 2006), by way of Bettina Rheims's series I.N.R.I. (1997-98), Olivier Christinat's Photographies apocryphes (1994-98), Rauf Mamedov's Seven Bible Scenes (1998), Elisabeth Ohlson's Ecce homo (1996-98), David LaChapelle's Jesus is my Homeboy (2003) and Vanessa Beecroft's South Soudan (2006), the interest in the figure of Christ in the secular sphere is undeniable. The phenomenon, moreover, crosses geographical, cultural and confessional boundaries. This doctoral thesis brings together a substantial iconographic corpus that demonstrates the current interest in the figure of Christ and the great diversity of the artists' approaches and profiles. This corpus is analyzed from three perspectives. The first part is devoted to the photographic medium and its close ties to the Holy Face, from the 1898 photograph of the Shroud of Turin onward, and from the ambitious photographic project (1898) of Fred Holland Day, who embodied Christ in his many reinterpretations of the life of Jesus. The second part examines the use of archetypal Christian iconographic formulas and questions the Christic references employed by artists through poses, attitudes and gestures borrowed from masterpieces of religious art, in art photography as well as in advertising and press photography.
The use of text and the place of the Scriptures in current projects are also addressed, notably around the question of whether the life of Jesus can be narrated and of the staging strategies used to translate the narratives into images. Finally, the last part focuses more specifically on the uses of the figure of Jesus, often an alter ego of the artist, but above all a spokesperson. In the wake of the socio-political struggles of the 1970s, the figure of Jesus was reclaimed by minorities, and Jesus came to embody the battles of artists who reappropriated the representation of Christ so that it would match the image they see in their own mirror (e.g. Renee Cox, Yo Mama's Last Supper, 1996). This assertive use, often coupled with an undisguised taste for provocation (e.g. Andres Serrano, Piss Christ, 1987), has frequently sparked controversy. The reception of the works is a key point of this research, which sets out to analyze the Christic figure in the mirror of contemporary photography and concludes that Christ is, in fact, a mirror of the artists themselves.
Abstract:
Part I of this series of articles focused on the construction of graphical probabilistic inference procedures, at various levels of detail, for assessing the evidential value of gunshot residue (GSR) particle evidence. The proposed models - in the form of Bayesian networks - address the issues of background presence of GSR particles, analytical performance (i.e., the efficiency of evidence searching and analysis procedures) and contamination. The use and practical implementation of Bayesian networks for case pre-assessment is also discussed. This paper, Part II, concentrates on Bayesian parameter estimation. This topic complements Part I in that it offers means for producing estimates useable for the numerical specification of the proposed probabilistic graphical models. Bayesian estimation procedures are given a primary focus of attention because they allow the scientist to combine (his/her) prior knowledge about the problem of interest with newly acquired experimental data. The present paper also considers further topics such as the sensitivity of the likelihood ratio due to uncertainty in parameters and the study of likelihood ratio values obtained for members of particular populations (e.g., individuals with or without exposure to GSR).
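The combination of prior knowledge with newly acquired data that the paper describes can be illustrated with the simplest conjugate case, a Beta-Binomial update. This is our own hypothetical example (names and numbers invented), not the paper's model:

```python
# Hypothetical illustration: estimating the probability p that a person
# from the general population carries background GSR particles.
def beta_binomial_update(alpha, beta, k, n):
    """Prior Beta(alpha, beta); observe k 'positive' individuals out of n.
    The posterior is Beta(alpha + k, beta + n - k)."""
    return alpha + k, beta + n - k

def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Vague prior Beta(1, 1); an invented survey finds 4 of 100 individuals
# with background particles -> posterior Beta(5, 97)
a, b = beta_binomial_update(1, 1, 4, 100)
print(a, b, round(beta_mean(a, b), 4))  # 5 97 0.049
```

A point estimate such as this posterior mean could then feed the conditional probability tables of the Bayesian networks from Part I, and varying the prior shows how sensitive the resulting likelihood ratio is to the parameter.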
Abstract:
White-light cystoscopy and cytology are the standard tools to diagnose bladder cancer. White-light cystoscopy is excellent for detecting macroscopic exophytic tumors, but its sensitivity is poor for flat tumors such as carcinoma in situ. Use of fluorescence cystoscopy during transurethral bladder resection improves tumor detection, particularly for carcinoma in situ. Fluorescence cystoscopy reduces the residual tumor rate, especially for voluminous and multifocal tumors, with a consequently lower recurrence rate. Fluorescence is now recommended for the diagnosis and treatment of bladder cancer.
Abstract:
BACKGROUND AND STUDY AIMS: The current gold standard in Barrett's esophagus monitoring consists of four-quadrant biopsies every 1-2 cm in accordance with the Seattle protocol. Adding brush cytology processed by digital image cytometry (DICM) may further increase the detection of patients with Barrett's esophagus who are at risk of neoplasia. The aim of the present study was to assess the additional diagnostic value and accuracy of DICM when added to the standard histological analysis in a cross-sectional multicenter study of patients with Barrett's esophagus in Switzerland. METHODS: One hundred sixty-four patients with Barrett's esophagus underwent 239 endoscopies with biopsy and brush cytology. DICM was carried out on 239 cytology specimens. Measures of the test accuracy of DICM (relative risk, sensitivity, specificity, likelihood ratios) were obtained by dichotomizing the histopathology results (high-grade dysplasia or adenocarcinoma vs. all others) and DICM results (aneuploidy/intermediate pattern vs. diploidy). RESULTS: DICM revealed diploidy in 83% of 239 endoscopies, an intermediate pattern in 8.8%, and aneuploidy in 8.4%. An intermediate DICM result carried a relative risk (RR) of 12 and aneuploidy a RR of 27 for high-grade dysplasia/adenocarcinoma. Adding DICM to the standard biopsy protocol, a pathological cytometry result (aneuploid or intermediate) was found in 25 of 239 endoscopies (11%; 18 patients) with low-risk histology (no high-grade dysplasia or adenocarcinoma). During follow-up of 14 of these 18 patients, histological deterioration was seen in 3 (21%). CONCLUSION: DICM from brush cytology may add important information to a standard biopsy protocol by identifying a subgroup of BE-patients with high-risk cellular abnormalities.
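The accuracy measures obtained after dichotomizing histopathology and DICM results come from a standard 2x2 table. A minimal sketch with invented counts (not the study's data):

```python
def accuracy_measures(tp, fp, fn, tn):
    """Sensitivity, specificity and likelihood ratios from a 2x2 table
    (tp/fp: test positive with/without disease; fn/tn: test negative)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)   # LR+ = sensitivity / (1 - specificity)
    lr_neg = (1 - sens) / spec   # LR- = (1 - sensitivity) / specificity
    return sens, spec, lr_pos, lr_neg

# Invented counts for illustration: 8 TP, 17 FP, 2 FN, 212 TN
sens, spec, lrp, lrn = accuracy_measures(8, 17, 2, 212)
print(round(sens, 2), round(spec, 2), round(lrp, 1))  # 0.8 0.93 10.8
```

Here "positive" would mean aneuploid/intermediate DICM and "diseased" would mean high-grade dysplasia or adenocarcinoma, following the dichotomization described above.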
Abstract:
This paper proposes a novel approach for the analysis of illicit tablets based on their visual characteristics. In particular, the paper concentrates on the problem of ecstasy pill seizure profiling and monitoring. The presented method extracts visual information from pill images and builds a representation of it, i.e. a pill profile based on the pill's visual appearance. Different visual features are used to build different image similarity measures, which are the basis for a pill monitoring strategy based on both discriminative and clustering models. The discriminative model makes it possible to infer whether two pills come from the same seizure, while the clustering model groups pills that share similar visual characteristics. The resulting clustering structure allows visual identification of the relationships between different seizures. The proposed approach was evaluated using a data set of 621 ecstasy pill pictures. The results demonstrate that this is a feasible and cost-effective method for performing pill profiling and monitoring.
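An image similarity measure of the kind described can be sketched with a colour histogram as the visual feature; this is our own stand-in for the paper's (unspecified) features, not its actual pipeline:

```python
import numpy as np

def colour_histogram(img, bins=8):
    """img: HxWx3 uint8 array -> concatenated, normalized per-channel
    colour histogram (a simple visual-appearance descriptor)."""
    hist = np.concatenate([
        np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return hist / hist.sum()

def l1_distance(h1, h2):
    """L1 distance between two histograms; 0 means identical appearance."""
    return float(np.abs(h1 - h2).sum())

# Synthetic stand-in images (random noise), just to exercise the code
rng = np.random.default_rng(0)
imgs = [rng.integers(0, 256, (32, 32, 3), dtype=np.uint8) for _ in range(3)]
hists = [colour_histogram(im) for im in imgs]
print(l1_distance(hists[0], hists[0]), l1_distance(hists[0], hists[1]) > 0)
```

Distances like this could feed both models: thresholding them gives a same-seizure/different-seizure decision, and a standard clustering algorithm over the distance matrix groups visually similar seizures.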
Abstract:
Particle physics studies highly complex processes which cannot be directly observed. Scientific realism claims that we are nevertheless warranted in believing that these processes really occur and that the objects involved in them really exist. This dissertation defends a version of scientific realism, called causal realism, in the context of particle physics. I start by introducing the central theses and arguments in the recent philosophical debate on scientific realism (chapter 1), with a special focus on an important presupposition of the debate, namely common sense realism. Chapter 2 then discusses entity realism, which introduces a crucial element into the debate by emphasizing the importance of experiments in defending scientific realism. Most of the chapter is concerned with Ian Hacking's position, but I also argue that Nancy Cartwright's version of entity realism is ultimately preferable as a basis for further development. In chapter 3, I take a step back and consider the question whether the realism debate is worth pursuing at all. Arthur Fine has given a negative answer to that question, proposing his natural ontological attitude as an alternative to both realism and antirealism. I argue that the debate (in particular the realist side of it) is in fact less vicious than Fine presents it. The second part of my work (chapters 4-6) develops, illustrates and defends causal realism. The key idea is that inference to the best explanation is reliable in some cases, but not in others. Chapter 4 characterizes the difference between these two kinds of cases in terms of three criteria which distinguish causal from theoretical warrant. In order to flesh out this distinction, chapter 5 then applies it to a concrete case from the history of particle physics, the discovery of the neutrino. This case study shows that the distinction between causal and theoretical warrant is crucial for understanding what it means to "directly detect" a new particle.
But the distinction is also an effective tool against what I take to be the presently most powerful objection to scientific realism: Kyle Stanford's argument from unconceived alternatives. I respond to this argument in chapter 6, and I illustrate my response with a discussion of Jean Perrin's experimental work concerning the atomic hypothesis. In the final part of the dissertation, I turn to the specific challenges posed to realism by quantum theories. One of these challenges comes from the experimental violations of Bell's inequalities, which indicate a failure of locality in the quantum domain. I show in chapter 7 how causal realism can further our understanding of quantum non-locality by taking account of some recent experimental results. Another challenge to realism in quantum mechanics comes from delayed-choice experiments, which seem to imply that certain aspects of what happens in an experiment can be influenced by later choices of the experimenter. Chapter 8 analyzes these experiments and argues that they do not warrant the antirealist conclusions which some commentators draw from them. It pays particular attention to the case of delayed-choice entanglement swapping and the corresponding question whether entanglement is a real physical relation. In chapter 9, I finally address relativistic quantum theories. It is often claimed that these theories are incompatible with a particle ontology, and this calls into question causal realism's commitment to localizable and countable entities. I defend the commitments of causal realism against these objections, and I conclude with some remarks connecting the interpretation of quantum field theory to more general metaphysical issues confronting causal realism.
Abstract:
STATEMENT OF PROBLEM: Wear of methacrylate artificial teeth resulting in vertical loss is a problem for both dentists and patients. PURPOSE: The purpose of this study was to quantify wear of artificial teeth in vivo and to relate it to subject and tooth variables. MATERIAL AND METHODS: Twenty-eight subjects treated with complete dentures received 2 artificial tooth materials (polymethyl methacrylate (PMMA)/double-cross linked PMMA fillers; 35%/59% (SR Antaris DCL, SR Postaris DCL); experimental 48%/46%). At baseline and after 12 months, impressions of the dentures were poured with improved stone. After laser scanning, the casts were superimposed and matched. Maximal vertical loss (mm) and volumetric loss (mm(3)) were calculated for each tooth and log-transformed to reduce variability. Volumetric loss was related to the occlusally active surface area. Linear mixed models were used to study the influence of the factors jaw, tooth, and material on adjusted (residual) wear values (alpha=.05). RESULTS: Due to drop outs (n=5) and unmatchable casts (n=3), 69% of all teeth were analyzed. Volumetric loss had a strong linear relationship to surface area (P<.001); this was less pronounced for vertical loss (P=.004). The factor showing the highest influence was the subject. Wear was tooth dependent (increasing from incisors to molars). However, these differences diminished once the wear rates were adjusted for occlusal area, and only a few remained significant (anterior versus posterior maxillary teeth). Another influencing factor was the age of the subject. CONCLUSIONS: Clinical wear of artificial teeth is higher than previously measured or expected. The presented method of analyzing wear of artificial teeth using a laser-scanning device seemed suitable.
Abstract:
The number of fluoroscopic examinations performed in pediatric cardiology is constantly increasing. These examinations bring a clear benefit for the diagnosis and treatment of complex cardiac pathologies, but they are also a source of exposure to high radiation doses. Our study therefore sets out to analyze this practice at the Centre Hospitalier Universitaire Vaudois (CHUV), to establish diagnostic reference levels and to identify possible means of dose reduction. The database we analyzed comes from the pediatric cardiology department of the CHUV (Lausanne). It contains 873 fluoroscopic examinations performed between January 1, 2003 and December 31, 2011 and includes, for each examination, demographic data, the fluoroscopy time in minutes and the dose area product (DAP) in Gycm2. The examinations fall into two modalities, diagnostic and interventional, and were performed on the GE installation until July 2010 and thereafter on the Philips installation. The analysis was carried out in Excel and JMP Statistics to establish the demographic distribution of the sample, the means and the 75th percentiles. Diagnostic examinations were studied by age group, and interventional examinations according to an intervention classification (ranking) established in collaboration with the physician responsible for these procedures at the CHUV. Only examination groups with 20 or more cases were analyzed. We thus analyzed 873 examinations, of which 512 were diagnostic and 361 interventional. The mean fluoroscopy time for all diagnostic examinations was 11.91 minutes and the mean DAP 12.04 Gycm2. For interventional examinations, the mean fluoroscopy time and DAP were 17.74 minutes and 9.77 Gycm2 respectively.
In addition to the analyses by age group and by ranking, we studied the examinations according to their demographic data as well as by pathology and by installation. Across all diagnostic examinations, there is a significant decrease (p<0.0001) of 30% in mean fluoroscopy time and 60% in mean DAP when moving from the older installation, GE, to the more recent one, Philips. For interventional examinations, the difference between the two installations is even more marked, with a mean fluoroscopy time 55% lower and a mean DAP 73% lower (p=0.0002) on Philips compared with GE. These differences are mainly explained by the new tools available on the Philips installation, such as digitalization and image processing and the possibility of changing the number of frames per second during an examination, as well as by improvements in the examiners' practice. We were able to define 75th percentiles for diagnostic examinations by age group and by pathology, and for interventional examinations according to the ranking established by Dr Di Bernardo.
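Diagnostic reference levels of this kind are conventionally set at the 75th percentile of the observed dose distribution. A minimal sketch with invented DAP values (not taken from the CHUV database):

```python
import numpy as np

# Hypothetical DAP values (Gycm2) for one age group, for illustration only
dap_values = [2.1, 4.8, 5.3, 7.9, 9.4, 12.0, 15.6, 22.3]

# Diagnostic reference level = 75th percentile of the distribution
drl = np.percentile(dap_values, 75)
print(round(float(drl), 2))  # 12.9
```

Examinations above such a reference level would then be candidates for a review of technique and equipment settings.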
Abstract:
Recent studies at high magnetic fields using the phase of gradient-echo MR images have shown the ability to unveil cortical substructure in the human brain. To investigate the contrast mechanisms in phase imaging, this study extends, for the first time, phase imaging to the rodent brain. Using a 14.1 T horizontal bore animal MRI scanner for in vivo micro-imaging, images with an in-plane resolution of 33 microm were acquired. Phase images revealed, often more clearly than the corresponding magnitude images, hippocampal fields, cortical layers (e.g. layer 4), cerebellar layers (molecular and granule cell layers) and small white matter structures present in the striatum and septal nucleus. The contrast of the phase images depended in part on the orientation of anatomical structures relative to the magnetic field, consistent with bulk susceptibility variations between tissues. This was found not only for vessels, but also for white matter structures, such as the anterior commissure, and cortical layers in the cerebellum. Such susceptibility changes could result from variable blood volume. However, when the deoxyhemoglobin content was reduced by increasing cerebral blood flow (CBF) with a carbogen breathing challenge, contrast between white and gray matter and cortical layers was not affected, suggesting that tissue cerebral blood volume (and therefore deoxyhemoglobin) is not a major source of the tissue phase contrast. We conclude that phase variations in gradient-echo images are likely due to susceptibility shifts of non-vascular origin.
Abstract:
PURPOSE: To compare examination time with radiologist time and to measure radiation dose of computed tomographic (CT) fluoroscopy, conventional CT, and conventional fluoroscopy as guiding modalities for shoulder CT arthrography. MATERIALS AND METHODS: Glenohumeral injection of contrast material for CT arthrography was performed in 64 consecutive patients (mean age, 32 years; age range, 16-74 years) and was guided with CT fluoroscopy (n = 28), conventional CT (n = 14), or conventional fluoroscopy (n = 22). Room times (arthrography, room change, CT, and total examination times) and radiologist times (time the radiologist spent in the fluoroscopy or CT room) were measured. One-way analysis of variance and Bonferroni-Dunn posthoc tests were performed for comparison of mean times. Mean effective radiation dose was calculated for each method with examination data, phantom measurements, and standard software. RESULTS: Mean total examination time was 28.0 minutes for CT fluoroscopy, 28.6 minutes for conventional CT, and 29.4 minutes for conventional fluoroscopy; mean radiologist time was 9.9 minutes, 10.5 minutes, and 9.0 minutes, respectively. These differences were not statistically significant. Mean effective radiation dose was 0.0015 mSv for conventional fluoroscopy (mean, nine sections), 0.22 mSv for CT fluoroscopy (120 kV; 50 mA; mean, 15 sections), and 0.96 mSv for conventional CT (140 kV; 240 mA; mean, six sections). Effective radiation dose can be reduced to 0.18 mSv for conventional CT by changing imaging parameters to 120 kV and 100 mA. Mean effective radiation dose of the diagnostic CT arthrographic examination (140 kV; 240 mA; mean, 25 sections) was 2.4 mSv. CONCLUSION: CT fluoroscopy and conventional CT are valuable alternative modalities for glenohumeral CT arthrography, as examination and radiologist times are not significantly different. 
CT guidance requires a greater radiation dose than does conventional fluoroscopy, but with adequate parameters CT guidance constitutes only approximately 8% of the radiation dose of the diagnostic CT arthrographic examination.
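The approximately 8% figure follows directly from the doses reported above: the optimized conventional-CT guidance dose of 0.18 mSv against the 2.4 mSv of the diagnostic CT arthrographic examination.

```python
# Ratio of the optimized CT-guidance dose to the diagnostic examination
# dose, using the effective doses reported in the abstract (mSv)
guidance_dose_msv = 0.18
diagnostic_exam_msv = 2.4
fraction = guidance_dose_msv / diagnostic_exam_msv
print(f"{fraction:.1%}")  # 7.5%, i.e. roughly 8%
```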