992 results for diagnostic techniques
Abstract:
Although a radiographic unit is not standard equipment for bovine practitioners in hospital or field situations, ultrasound machines with 7.5-MHz linear transducers have been used in bovine reproduction for many years, and are eminently suitable for evaluation of orthopedic disorders. The goal of this article is to encourage veterinarians to use radiology and ultrasonography for the evaluation of bovine orthopedic disorders. These diagnostic imaging techniques improve the likelihood of a definitive diagnosis in every bovine patient but especially in highly valuable cattle, whose owners demand increasingly more diagnostic and surgical interventions that require high-level specialized techniques.
Abstract:
PURPOSE Primary nasal epithelial cells are used for diagnostic purposes in clinical routine and have been shown to be good surrogate models for bronchial epithelial cells in studies of airway inflammation and remodeling. We aimed to compare different instruments for isolating nasal epithelial cells. METHODS Primary airway epithelial cell cultures were established using cells acquired from the inferior surface of the middle turbinate of both nostrils. Three different instruments were used to isolate nasal cells: a homemade cytology brush, a nasal swab, and a curette. Cell count, viability, time until a confluent cell layer was reached, and the success rate in establishing cell cultures were evaluated. A standard numeric pain intensity scale was used to assess the acceptability of each instrument. RESULTS Sixty healthy adults (median [interquartile range, IQR] age of 31 [26-37] years) participated in the study. A higher number of cells (×10^5 cells/ml) was obtained using brushes (9.8 [5.9-33.5]) compared to swabs (2.4 [1.5-3.9], p < 0.0001) and curettes (5.5 [4.4-6.9], p < 0.01). Cell viability was similar between groups. Cells obtained by brushes had the fastest growth rate, and the success rate in establishing primary cell cultures was highest with brushes (90% vs. 65% for swabs and 70% for curettes). Pain was highest with curettes (VAS score 4.0 [3.0-5.0] out of 10). The epithelial phenotype of the cultures was confirmed through cytokeratin and E-cadherin staining. CONCLUSIONS All three types of instruments allow collection and growth of human nasal epithelial cells with good acceptability to study participants. The most efficient instrument is the nasal brush.
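The group comparisons above report medians with IQRs and two-sided p-values; a minimal sketch of such a nonparametric comparison in Python (with made-up cell-count values, not the study data) might look like:

```python
import numpy as np
from scipy import stats

# Hypothetical cell yields (x10^5 cells/ml) per instrument -- illustrative values only,
# not the study data.
brush = np.array([9.8, 5.9, 33.5, 12.0, 8.2, 15.1])
swab  = np.array([2.4, 1.5, 3.9, 2.1, 2.8, 1.9])

# Median and IQR, as reported in the abstract
print("brush median/IQR:", np.median(brush), np.percentile(brush, [25, 75]))

# Two-sided Mann-Whitney U test comparing the two instruments
u, p = stats.mannwhitneyu(brush, swab, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.4f}")
```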
Abstract:
REASONS FOR PERFORMING STUDY: There is limited information on potential diffusion of local anaesthetic solution after various diagnostic analgesic techniques of the proximal metacarpal region. OBJECTIVE: To document potential distribution of local anaesthetic solution following 4 techniques used for diagnostic analgesia of the proximal metacarpal region. METHODS: Radiodense contrast medium was injected around the lateral palmar or medial and lateral palmar metacarpal nerves in 8 mature horses, using 4 different techniques. Radiographs were obtained 0, 10 and 20 min after injection and were analysed subjectively. A mixture of radiodense contrast medium and methylene blue was injected into 4 cadaver limbs; the location of the contrast medium and dye was determined by radiography and dissection. RESULTS: Following perineural injection of the palmar metacarpal nerves, most of the contrast medium was distributed in an elongated pattern axial to the second and fourth metacarpal bones. The carpometacarpal joint was inadvertently penetrated in 4/8 limbs after injections of the palmar metacarpal nerves from medial and lateral approaches, and in 1/8 limbs when both injections were performed from the lateral approach. Following perineural injection of the lateral palmar nerve using a lateral approach, the contrast medium was diffusely distributed in all but one limb, in which the carpal sheath was inadvertently penetrated. In 5/8 limbs, following perineural injection of the lateral palmar nerve using a medial approach, the contrast medium diffused proximally to the distal third of the antebrachium. CONCLUSIONS AND POTENTIAL RELEVANCE: Inadvertent penetration of the carpometacarpal joint is common after perineural injection of the palmar metacarpal nerves, but less so if both palmar metacarpal nerves are injected using a lateral approach. Following injection of the lateral palmar nerve using a medial approach, the entire palmar aspect of the carpus may be desensitised.
Abstract:
Postmortem investigation is increasingly supported by Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). This has led to the idea of implementing a noninvasive or minimally invasive autopsy technique, for which a minimally invasive angiography technique becomes necessary in order to support cross-sectional vascular diagnostics. Preliminary experiments investigating different contrast agents for CT and MRI and their postmortem applicability were performed using an ex-vivo porcine coronary model. MSCT and MRI angiography was performed in the porcine model. Three human corpses were investigated using minimally invasive MSCT angiography. A plastic tube was advanced via the right femoral artery into the aortic arch. Using a flow-adjustable pump, the radiopaque contrast agent meglumine-ioxithalamate was injected. Subsequent MSCT scanning provided an excellent anatomic visualization of the human arterial system, including the intracranial and coronary arteries. Vascular pathologies such as calcification, stenosis and injury were detected. Limitations of the introduced approach are cases of major vessel injury and cases showing an advanced stage of decay.
Abstract:
OBJECTIVE: Virtual autopsy methods, such as postmortem CT and MRI, are increasingly being used in forensic medicine. Forensic investigators with little to no training in diagnostic radiology and medical laypeople such as state's attorneys often find it difficult to understand the anatomic orientation of axial postmortem CT images. We present a computer-assisted system that permits postmortem CT datasets to be quickly and intuitively resliced in real time at the body to narrow the gap between radiologic imaging and autopsy. CONCLUSION: Our system is a potentially valuable tool for planning autopsies, showing findings to medical laypeople, and teaching CT anatomy, thus further closing the gap between radiology and forensic pathology.
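Reslicing an axial CT volume along an arbitrary plane is, conceptually, a resampling of the voxel grid along a rotated coordinate frame. The sketch below (not the authors' system, with hypothetical plane parameters) shows such an oblique reslice using scipy:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def reslice(volume, origin, u, v, out_shape, spacing=1.0):
    """Sample an oblique plane from a 3D volume.

    volume   : (nz, ny, nx) array of CT values
    origin   : (z, y, x) voxel coordinates of the plane centre
    u, v     : orthogonal unit vectors (in voxel space) spanning the plane
    out_shape: (rows, cols) of the resliced image
    """
    u, v = np.asarray(u, float), np.asarray(v, float)
    rows = (np.arange(out_shape[0]) - out_shape[0] / 2) * spacing
    cols = (np.arange(out_shape[1]) - out_shape[1] / 2) * spacing
    rr, cc = np.meshgrid(rows, cols, indexing="ij")
    # Voxel coordinates of every output pixel: origin + r*u + c*v
    coords = (np.asarray(origin, float)[:, None, None]
              + u[:, None, None] * rr + v[:, None, None] * cc)
    # Trilinear interpolation of the volume at those coordinates
    return map_coordinates(volume, coords, order=1, mode="nearest")

# Example: reslice a synthetic volume along a plane tilted 30 degrees about the x-axis
vol = np.random.default_rng(0).normal(size=(64, 256, 256))
t = np.deg2rad(30)
plane = reslice(vol, origin=(32, 128, 128),
                u=(np.sin(t), np.cos(t), 0.0), v=(0.0, 0.0, 1.0),
                out_shape=(256, 256))
```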
Abstract:
With the recognition of the importance of evidence-based medicine, there is an emerging need for methods to systematically synthesize available data. Specifically, methods to provide accurate estimates of test characteristics for diagnostic tests are needed to help physicians make better clinical decisions. To provide more flexible approaches for meta-analysis of diagnostic tests, we developed three Bayesian generalized linear models. Two of these models, a bivariate normal and a binomial model, analyzed pairs of sensitivity and specificity values while incorporating the correlation between these two outcome variables. Noninformative independent uniform priors were used for the variance of sensitivity, specificity and correlation. We also applied an inverse Wishart prior to check the sensitivity of the results. The third model was a multinomial model where the test results were modeled as multinomial random variables. All three models can include specific imaging techniques as covariates in order to compare performance. Vague normal priors were assigned to the coefficients of the covariates. The computations were carried out using the 'Bayesian inference using Gibbs sampling' implementation of Markov chain Monte Carlo techniques. We investigated the properties of the three proposed models through extensive simulation studies. We also applied these models to a previously published meta-analysis dataset on cervical cancer as well as to an unpublished melanoma dataset. In general, our findings show that the point estimates of sensitivity and specificity were consistent among Bayesian and frequentist bivariate normal and binomial models. However, in the simulation studies, the estimates of the correlation coefficient from Bayesian bivariate models are not as good as those obtained from frequentist estimation regardless of which prior distribution was used for the covariance matrix. The Bayesian multinomial model consistently underestimated the sensitivity and specificity regardless of the sample size and correlation coefficient. In conclusion, the Bayesian bivariate binomial model provides the most flexible framework for future applications because of its following strengths: (1) it facilitates direct comparison between different tests; (2) it captures the variability in both sensitivity and specificity simultaneously as well as the intercorrelation between the two; and (3) it can be directly applied to sparse data without ad hoc correction.
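The bivariate binomial model described above places study-level logit-sensitivities and logit-specificities on a correlated bivariate normal, with binomial likelihoods for the observed true-positive and true-negative counts. A minimal sketch of that data-generating structure (hypothetical hyperparameters, simulation only, not the fitted models from the study):

```python
import numpy as np
from scipy.special import expit, logit

rng = np.random.default_rng(1)

# Hypothetical hyperparameters of the bivariate normal on the logit scale
mu = np.array([logit(0.85), logit(0.90)])      # mean logit-sensitivity, logit-specificity
sd = np.array([0.5, 0.4])                      # between-study standard deviations
rho = -0.4                                     # correlation between the two logits
cov = np.array([[sd[0]**2, rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1]**2]])

n_studies = 20
n_diseased = rng.integers(30, 120, n_studies)      # diseased subjects per study
n_healthy  = rng.integers(30, 120, n_studies)      # non-diseased subjects per study

# Study-level (logit-sens, logit-spec) pairs, then binomial counts
eta = rng.multivariate_normal(mu, cov, size=n_studies)
sens, spec = expit(eta[:, 0]), expit(eta[:, 1])
tp = rng.binomial(n_diseased, sens)                # true positives
tn = rng.binomial(n_healthy, spec)                 # true negatives

# Naive per-study estimates recover the induced correlation between the two outcomes
print(np.corrcoef(logit((tp + 0.5) / (n_diseased + 1)),
                  logit((tn + 0.5) / (n_healthy + 1)))[0, 1])
```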
Abstract:
Medical microbiology and virology laboratories use nucleic acid tests (NAT) to detect genomic material of infectious organisms in clinical samples. Laboratories choose to perform assembled (or in-house) NAT if commercial assays are not available or if assembled NAT are more economical or accurate. One reason commercial assays are more expensive is because extensive validation is necessary before the kit is marketed, as manufacturers must accept liability for the performance of their assays, assuming their instructions are followed. On the other hand, it is a particular laboratory's responsibility to validate an assembled NAT prior to using it for testing and reporting results on human samples. There are few published guidelines for the validation of assembled NAT. One procedure that laboratories can use to establish a validation process for an assay is detailed in this document. Before validating a method, laboratories must optimise it and then document the protocol. All instruments must be calibrated and maintained throughout the testing process. The validation process involves a series of steps including: (i) testing of dilution series of positive samples to determine the limits of detection of the assay and their linearity over concentrations to be measured in quantitative NAT; (ii) establishing the day-to-day variation of the assay's performance; (iii) evaluating the sensitivity and specificity of the assay as far as practicable, along with the extent of cross-reactivity with other genomic material; and (iv) assuring the quality of assembled assays using quality control procedures that monitor the performance of reagent batches before introducing new lots of reagent for testing.
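For step (i), the detection limit is commonly estimated by fitting the per-replicate hit rate against log concentration and reading off the concentration detected with, say, 95% probability. A minimal sketch (invented hit-rate data, with the logistic curve as one common choice rather than a prescribed method):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import expit, logit

# Hypothetical dilution series: log10 copies per reaction and replicate results (1 = detected)
log10_conc = np.repeat([3.0, 2.0, 1.5, 1.0, 0.5, 0.0], 8)
detected = np.concatenate([np.ones(8), np.ones(8),                   # all replicates hit
                           [1, 1, 1, 1, 1, 1, 1, 0],                 # 7/8 hit
                           [1, 1, 1, 1, 0, 1, 0, 1],                 # 6/8 hit
                           [1, 0, 0, 1, 0, 1, 0, 0],                 # 3/8 hit
                           [0, 0, 1, 0, 0, 0, 0, 0]]).astype(float)  # 1/8 hit

def hit_prob(x, a, b):
    return expit(a + b * x)        # logistic detection-probability curve

(a, b), _ = curve_fit(hit_prob, log10_conc, detected, p0=(0.0, 2.0))

# Concentration at which 95% of replicates are expected to be detected (LoD95)
lod95 = (logit(0.95) - a) / b
print(f"LoD95 ~ 10^{lod95:.2f} copies/reaction")
```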
Abstract:
The demand for pure allergens is constantly increasing, for diagnostic purposes, as standards for detection and quantification methods, for immunotherapy, and for the molecular-level study of the mechanisms of allergic reactions, in order to facilitate the development of possible treatments. This doctoral thesis describes several strategies for obtaining pure forms of non-specific Lipid Transfer Proteins (nsLTPs), which have been recognized as relevant food allergens in many commonly consumed fruits and vegetables and have been defined as a model of true food allergens. A previously unknown, potentially allergenic LTP was isolated from almonds, while an LTP of known allergenicity contained in walnuts was produced by recombinant DNA techniques. In addition to these classical approaches, methods for the total chemical synthesis of proteins were applied for the first time to the production of an allergen, using Pru p 3, the prototypical LTP and major peach allergen in the Mediterranean area, as a model. Total chemical synthesis of proteins allows complete control over their sequence and the study of their function at the atomic level; its application to allergen production therefore constitutes an important step forward in food allergy research. The Pru p 3 protein was produced in its full length, and only two final deprotection steps are needed to obtain the target in its native form. The experimental conditions for these deprotections were established during the production of the peptides sPru p 3 (1-37) and sPru p 3 (38-91), which together make up the entire protein. Advanced mass spectrometry techniques were used to characterize all of the compounds obtained, while their allergenicity was investigated through immunological assays or in silico approaches.
Abstract:
Separate physiological mechanisms which respond to spatial and temporal stimulation have been identified in the visual system. Some pathological conditions may selectively affect these mechanisms, offering a unique opportunity to investigate how psychophysical and electrophysiological tests reflect these visual processes, and thus enhance the use of the tests in clinical diagnosis. Amblyopia and optical blur were studied, representing spatial visual defects of neural and optical origin, respectively. Selective defects of the visual pathways were also studied: optic neuritis, which affects the optic nerve, and dementia of the Alzheimer type, in which the higher association areas are believed to be affected but the primary projections spared. Seventy control subjects from 10 to 79 years of age were investigated. This provided material for an additional study of the effect of age on the psychophysical and electrophysiological responses. Spatial processing was measured by visual acuity, the contrast sensitivity function, or spatial modulation transfer function (MTF), and the pattern-reversal and pattern onset-offset visual evoked potential (VEP). Temporal, or luminance, processing was measured by the de Lange curve, or temporal MTF, and the flash VEP. The pattern VEP was shown to reflect the integrity of the optic nerve, geniculostriate pathway and primary projections, and was related to high temporal frequency processing. The individual components of the flash VEP differed in their characteristics. The results suggested that the P2 component reflects the function of the higher association areas and is related to low temporal frequency processing, while the P1 component reflects the primary projection areas. The combination of a delayed flash P2 component and a normal-latency pattern VEP appears to be specific to dementia of the Alzheimer type and represents an important diagnostic test for this condition.
Abstract:
One in 3,000 people in the US is born with cystic fibrosis (CF), a genetic disorder affecting the reproductive system, pancreas, and lungs. Lung disease caused by chronic bacterial and fungal infections is the leading cause of morbidity and mortality in CF. The identities of the microbes are traditionally determined by culturing followed by phenotypic and biochemical assays. It was first thought that the bacterial infections were caused by a select handful of bacteria such as S. aureus, H. influenzae, B. cenocepacia, and P. aeruginosa. With the advent of PCR and molecular techniques, the polymicrobial nature of the CF lung became evident. The CF lung contains numerous bacteria, and the communities are diverse and unique to each patient. The total complexity of the bacterial infections is still being determined. In addition, only a few members of the fungal communities have been identified; much of the fungal community composition is still a mystery. This dissertation addresses this gap in knowledge. A snapshot of the CF sputum bacterial community was obtained using the length heterogeneity-PCR community profiling technique. The profiles show that south Florida CF patients have a unique, diverse, and dynamic bacterial community which changes over time. The identities of the bacteria and fungi present were determined using state-of-the-art 454 sequencing. Sequencing results show that the CF lung microbiome contains commonly cultured pathogenic bacteria, organisms considered part of the healthy core biome, and novel organisms. Understanding the dynamic changes of these identified microbes will ultimately lead to better therapeutic interventions. Early detection is key to reducing the lung damage caused by chronic infections, so there is a need for accurate and sensitive diagnostic tests. This issue was addressed by designing a bacterial diagnostic tool targeted towards CF pathogens using surface plasmon resonance (SPR). By identifying the organisms associated with the CF lung and understanding their community interactions, patients can receive better treatment and live longer.
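Community profiles of this kind are often summarized with a diversity index; a minimal sketch of computing Shannon diversity from relative peak abundances (made-up values, not the LH-PCR data) is:

```python
import numpy as np

def shannon_diversity(abundances):
    """Shannon index H' = -sum(p_i * ln p_i) over nonzero relative abundances."""
    p = np.asarray(abundances, float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

# Hypothetical LH-PCR peak areas for two sputum samples (arbitrary units)
sample_a = [120, 80, 40, 30, 10, 5]     # more even community
sample_b = [400, 20, 5, 3]              # dominated by one organism
print(shannon_diversity(sample_a), shannon_diversity(sample_b))
```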
Abstract:
The importance of non-destructive techniques (NDT) in structural health monitoring programmes has become increasingly apparent in recent times. The quality of the measured data, often affected by various environmental conditions, can be a guiding factor for the usefulness and prediction efficiency of the various detection and monitoring methods used in this regard. Often, preprocessing the acquired data in relation to the affecting environmental parameters can improve the information quality and lead to a significantly more efficient and correct prediction process. The improvement can be directly related to the final decision-making policy about a structure or a network of structures and is compatible with general probabilistic frameworks of such assessment and decision-making programmes. This paper considers a preprocessing technique employed in an image-analysis-based structural health monitoring methodology to identify submarine pitting corrosion in the presence of variable luminosity, contrast and noise affecting the quality of images. A preprocessing adjustment of the gray-level threshold of the various images is observed to bring about a significant improvement in damage detection as compared to an automatically computed gray-level threshold. The case-dependent adjustments of the threshold make it possible to obtain the best available information from an existing image. The corresponding improvements are observed in a qualitative manner in the present study.
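As an illustration of the idea, an automatically computed gray-level threshold (here Otsu's method, as one common choice) can be shifted by a case-dependent offset before segmenting candidate corrosion pits. The sketch below uses scikit-image and a synthetic image, not the paper's data or its specific adjustment rule:

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)

# Synthetic gray-level image: bright background with a few dark "pits" and noise
img = np.full((200, 200), 0.75)
yy, xx = np.mgrid[0:200, 0:200]
for cy, cx in [(50, 60), (120, 140), (160, 40)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < 100] = 0.30     # dark pit
img += rng.normal(0, 0.08, img.shape)                     # variable-quality acquisition

t_auto = threshold_otsu(img)           # automatically computed threshold
offset = -0.05                         # case-dependent adjustment (chosen per image)
t_adj = t_auto + offset

mask_auto = img < t_auto               # pixels flagged as corrosion, automatic threshold
mask_adj = img < t_adj                 # pixels flagged with the adjusted threshold
print("flagged pixels:", int(mask_auto.sum()), "->", int(mask_adj.sum()))
```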
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared with other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms, (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection, FBP, vs. Advanced Modeled Iterative Reconstruction, ADMIRE). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
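One common frequency-domain formulation of a detectability index for a non-prewhitening matched-filter observer combines a task function (the Fourier transform of the signal to be detected) with the measured resolution (MTF/TTF) and noise power spectrum. The sketch below shows that computation on a discrete frequency grid, with an idealized Gaussian task and illustrative MTF/NPS forms, and makes no claim that these match the dissertation's measurements:

```python
import numpy as np

def dprime_npw(task_ft, mtf, nps, df):
    """Non-prewhitening matched-filter detectability index on a 2D frequency grid.

    task_ft : magnitude of the Fourier transform of the signal to be detected
    mtf     : system MTF (or TTF) sampled on the same grid
    nps     : noise power spectrum on the same grid
    df      : frequency sample spacing in each direction (e.g., mm^-1)
    """
    w2 = (task_ft * mtf) ** 2
    num = (w2.sum() * df * df) ** 2
    den = (w2 * nps).sum() * df * df
    return float(np.sqrt(num / den))

# Illustrative frequency grid and hypothetical system/task models
n, df = 128, 0.05                                         # grid size, spacing (mm^-1)
f = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / (n * df)))
fx, fy = np.meshgrid(f, f)
fr = np.hypot(fx, fy)

mtf = np.exp(-(fr / 0.6) ** 2)                            # hypothetical Gaussian MTF
nps = 50.0 * np.exp(-(fr / 0.4) ** 2) + 1.0               # hypothetical NPS (HU^2 mm^2)
s, c = 2.0, 30.0                                          # lesion sigma (mm), contrast (HU)
task = c * 2 * np.pi * s**2 * np.exp(-2 * (np.pi * s * fr) ** 2)  # FT of a Gaussian lesion

print(f"d' (NPW) = {dprime_npw(task, mtf, nps, df):.2f}")
```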
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
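A channelized Hotelling observer reduces each ROI to a small vector of channel responses and applies the Hotelling (prewhitened) template in that channel space. The sketch below uses simple difference-of-Gaussians channels as a stand-in for the Gabor or Laguerre-Gauss channels typically used, with synthetic data rather than the study's images:

```python
import numpy as np

def dog_channels(size, sigmas):
    """Difference-of-Gaussians channels (a simplified stand-in for Gabor/LG channels)."""
    y, x = np.mgrid[:size, :size] - size // 2
    r2 = x**2 + y**2
    gauss = [np.exp(-r2 / (2 * s**2)) / (2 * np.pi * s**2) for s in sigmas]
    return np.array([g1 - g2 for g1, g2 in zip(gauss[:-1], gauss[1:])])

def cho_dprime(present, absent, channels):
    """Channelized Hotelling detectability from signal-present/absent ROI stacks."""
    U = channels.reshape(len(channels), -1)                  # channels x pixels
    vp = present.reshape(len(present), -1) @ U.T             # images x channels
    va = absent.reshape(len(absent), -1) @ U.T
    dv = vp.mean(0) - va.mean(0)                             # mean channel-response difference
    S = 0.5 * (np.cov(vp, rowvar=False) + np.cov(va, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))       # d'^2 = dv^T S^-1 dv

# Synthetic demo: a subtle Gaussian lesion in noise vs. noise alone
rng = np.random.default_rng(0)
size = 64
y, x = np.mgrid[:size, :size] - size // 2
lesion = 20.0 * np.exp(-(x**2 + y**2) / (2 * 3.0**2))        # hypothetical lesion (HU)
noise = lambda n: rng.normal(0, 25, (n, size, size))
present, absent = noise(200) + lesion, noise(200)
print(cho_dprime(present, absent, dog_channels(size, [1, 2, 4, 8, 16])))
```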
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
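Image subtraction isolates quantum noise by differencing two repeated scans of the same object, so that the (identical) phantom texture cancels and the remaining variance is twice that of a single image. A minimal sketch, assuming two co-registered repeat images of the same phantom:

```python
import numpy as np

def subtraction_noise(img1, img2, roi):
    """Estimate per-image noise (standard deviation) from two repeated acquisitions.

    img1, img2 : co-registered images of the same object (e.g., repeat phantom scans)
    roi        : boolean mask or slice selecting the measurement region
    """
    diff = img1[roi].astype(float) - img2[roi].astype(float)
    # Var(img1 - img2) = 2 * Var(noise) when the noise realizations are independent
    return diff.std(ddof=1) / np.sqrt(2.0)

# Synthetic demo: same textured background, independent noise realizations
rng = np.random.default_rng(0)
background = rng.normal(50, 20, (256, 256))            # stands in for phantom texture
scan1 = background + rng.normal(0, 12, background.shape)
scan2 = background + rng.normal(0, 12, background.shape)
print(subtraction_noise(scan1, scan2, np.s_[64:192, 64:192]))   # ~12
```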
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
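For reference, the conventional NPS estimate from an ensemble of rectangular, noise-only ROIs is the ensemble-averaged squared magnitude of their Fourier transforms, scaled by pixel area and ROI size (the irregular-ROI method described above is a generalization of this and is not reproduced here):

```python
import numpy as np

def nps_2d(rois, pixel_size_mm):
    """Conventional 2D noise power spectrum estimate from rectangular noise-only ROIs.

    rois          : (n_rois, ny, nx) array, e.g. repeated scans minus the ensemble mean
    pixel_size_mm : pixel spacing in mm (assumed isotropic)
    Returns NPS in units of value^2 * mm^2, with DC at the array center.
    """
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)    # remove residual DC per ROI
    n, ny, nx = rois.shape
    spectra = np.abs(np.fft.fftshift(np.fft.fft2(rois), axes=(1, 2))) ** 2
    return spectra.mean(axis=0) * pixel_size_mm**2 / (nx * ny)

# Usage sketch: white noise of std 10 should give a flat NPS integrating to ~variance
rng = np.random.default_rng(0)
nps = nps_2d(rng.normal(0, 10, (50, 64, 64)), pixel_size_mm=0.5)
df = 1.0 / (64 * 0.5)                                       # frequency bin width (mm^-1)
print("integrated NPS ~", nps.sum() * df**2, "(expect ~100)")
```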
To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized with a genetic algorithm to match the texture in the liver regions of actual patient CT images. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
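In the Clustered Lumpy Background family of textures, Poisson-distributed cluster centers are each populated with a Poisson number of "lumps" whose profiles decay with distance from the lump center. A simplified, isotropic sketch (illustrative parameters, omitting the oriented lumps of the original formulation and the genetic-algorithm optimization mentioned above):

```python
import numpy as np

def clustered_lumpy_background(shape, mean_clusters=30, mean_lumps=20,
                               cluster_spread=8.0, lump_scale=5.0,
                               alpha=2.0, beta=0.8, rng=None):
    """Simplified isotropic Clustered Lumpy Background texture."""
    rng = np.random.default_rng(rng)
    ny, nx = shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    img = np.zeros(shape)
    for _ in range(rng.poisson(mean_clusters)):
        cy, cx = rng.uniform(0, ny), rng.uniform(0, nx)       # cluster center
        for _ in range(rng.poisson(mean_lumps)):
            ly = cy + rng.normal(0, cluster_spread)           # lump position within cluster
            lx = cx + rng.normal(0, cluster_spread)
            r = np.hypot(yy - ly, xx - lx)
            img += np.exp(-alpha * (r / lump_scale) ** beta)  # radially decaying lump profile
    return img

texture = clustered_lumpy_background((128, 128), rng=0)
```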
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
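As a toy version of such a model, a lesion's size, contrast, and edge profile can be described analytically (here a radially symmetric profile with a sigmoid edge, in 2D for brevity) and added to a patient image to form a hybrid image whose ground truth is known exactly. The profile form and parameters below are illustrative, not the dissertation's models:

```python
import numpy as np

def lesion_profile(shape, center, radius_mm, contrast_hu, edge_mm, pixel_mm=0.7):
    """Radially symmetric lesion with a sigmoid edge (2D analog of a voxelized 3D model)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    r = np.hypot(yy - center[0], xx - center[1]) * pixel_mm       # radius in mm
    return contrast_hu / (1.0 + np.exp((r - radius_mm) / edge_mm))

# Hybrid image: insert a subtle (-15 HU, ~8 mm) lesion into a (here synthetic) liver ROI
rng = np.random.default_rng(0)
patient_roi = 60.0 + rng.normal(0, 12, (128, 128))                # stands in for a real ROI
lesion = lesion_profile(patient_roi.shape, center=(64, 64),
                        radius_mm=4.0, contrast_hu=-15.0, edge_mm=0.8)
hybrid = patient_roi + lesion          # ground-truth location and contrast known exactly
```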
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-free images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
This study identifies the representations of the professional role of early childhood educator in psychomotor education. The professional role of educator is defined as a process of decision-making concerning the child, the organization of the environment, and play. Representations are made up of two systems: beliefs and cognitive processes. Three students, enrolled respectively in their first, third, and fifth semester, took part in a semi-structured interview on the professional role of educator in psychomotor education. The maternal, therapeutic, and instructional role models (Katz, 1970) form the basis of the analysis of beliefs, while diagnosis, design, planning, and guidance identify the four cognitive processes of the educator's professional role (Saracho, 1988). A qualitative analysis of the interview transcripts made it possible to isolate the beliefs and cognitive processes. The representations of the professional role of educator were then refined by identifying the presence of the role models within the use of the cognitive processes. According to this study, the four cognitive processes of the professional role appear in each of the subjects. The subjects mainly make decisions concerning the child, whereas decisions about the organization of the environment and play are scarcely present. Moreover, the three subjects apply the maternal, instructional, and therapeutic role models in each cognitive process of the professional role of educator. In general, however, the subjects tend towards the intellectual orientation of the instructional role model in diagnosis, design, and planning. In addition, all subjects show the intellectual orientation of the instructional model in guidance, while the academic orientation of the instructional model appears in the diagnosis, design, and planning of the third- and fifth-semester subjects. This study opens the way to research on the teaching and learning of the cognitive processes and beliefs compatible with the professional role of educator. Furthermore, making these representations explicit allows educator trainers to carry out diagnostic assessment in both initial and continuing training. This research contributes to defining a "corpus" of knowledge specific to educators in Techniques d'éducation en services de garde within a pedagogy based on cognitivism.
Abstract:
ABSTRACT: The aim of this thesis is to describe the implementation of two didactic units for improving the oral communicative competence in French of a group of final-year students at a secondary school in Cali. This work falls within action research, which allowed us to carry out a process of diagnosis, implementation, and evaluation. The techniques used were questionnaires, interviews with the students and the class teacher, classroom observations, and audio and video recordings. The analysis of the implementation phase was based on three broad categories that bring together the meaning of the collaborative approach and oral communicative competence. The results show that the students improved their oral expression in French, notably through richer vocabulary and greater fluency, by working with content close to their own reality. We show the benefits of the collaborative approach, such as group work, the change in the roles of the teacher and the students, and the possibility of co-evaluation of the process in a French as a foreign language course.