244 results for Siemens
Abstract:
Neuroimaging studies in bipolar disorder report gray matter volume (GMV) abnormalities in neural regions implicated in emotion regulation. These include a reduction in ventral/orbital medial prefrontal cortex (OMPFC) GMV and, inconsistently, increases in amygdala GMV. We aimed to examine OMPFC and amygdala GMV in bipolar disorder type 1 patients (BPI) versus healthy control participants (HC), and the potential confounding effects of gender, clinical and illness history variables, and psychotropic medication upon any group differences demonstrated in OMPFC and amygdala GMV. Images were acquired from 27 BPI (17 euthymic, 10 depressed) and 28 age- and gender-matched HC in a 3T Siemens scanner. Data were analyzed with SPM5 using voxel-based morphometry (VBM) to assess main effects of diagnostic group and gender upon whole brain (WB) GMV. Post-hoc analyses were subsequently performed using SPSS to examine the extent to which clinical and illness history variables and psychotropic medication contributed to GMV abnormalities in BPI in a priori and non-a priori regions, as demonstrated by the above VBM analyses. BPI showed reduced GMV in bilateral posteromedial rectal gyrus (PMRG), but no abnormalities in amygdala GMV. BPI also showed reduced GMV in two non-a priori regions: left parahippocampal gyrus and left putamen. For left PMRG GMV, there was a significant group by gender by trait anxiety interaction. GMV was significantly reduced in male low-trait-anxiety BPI versus male low-trait-anxiety HC, and in high- versus low-trait-anxiety male BPI. Our results show that in BPI there were significant effects of gender and trait anxiety, with male BPI and those high in trait anxiety showing reduced left PMRG GMV. PMRG is part of the medial prefrontal network implicated in visceromotor and emotion regulation.
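The post-hoc interaction analysis described above (run in SPSS in the study) can be sketched as a three-way factorial model on region-of-interest GMV values; the data, column names, and effect sizes below are synthetic stand-ins, not the study's values.

```python
# Sketch of a group x gender x trait-anxiety interaction test on
# regional GMV, analogous to the reported SPSS post-hoc analysis.
# All data here are randomly generated; only the model form is real.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 55  # 27 BPI + 28 HC, as in the study
df = pd.DataFrame({
    "gmv": rng.normal(0.45, 0.05, n),           # left PMRG GMV, arbitrary units
    "group": rng.choice(["BPI", "HC"], n),
    "gender": rng.choice(["M", "F"], n),
    "anxiety": rng.choice(["high", "low"], n),  # trait-anxiety split
})

# Full-factorial model; the three-way term corresponds to the reported
# group-by-gender-by-trait-anxiety interaction on left PMRG GMV.
model = smf.ols("gmv ~ C(group) * C(gender) * C(anxiety)", data=df).fit()
print(model.summary().tables[1])
```

In practice each subject's GMV would be extracted from the VBM cluster rather than simulated, and the interaction term's p-value would be read from the coefficient table.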
Abstract:
VSC converters are becoming more prevalent in HVDC applications. Two circuits are commercially available at present: a traditional six-switch PWM inverter implemented using series-connected IGBTs (ABB's HVDC Light), and a modular multi-level converter (MMC) (Siemens HVDC-PLUS). This paper presents an alternative MMC topology that utilises a novel current injection technique and exhibits several desirable characteristics.
Abstract:
Background/aims - To determine which biometric parameters provide optimum predictive power for ocular volume. Methods - Sixty-seven adult subjects were scanned with a Siemens 3-T MRI scanner. Mean spherical error (MSE) (D) was measured with a Shin-Nippon autorefractor, and a Zeiss IOLMaster was used to measure (mm) axial length (AL), anterior chamber depth (ACD) and corneal radius (CR). Total ocular volume (TOV) was calculated from T2-weighted MRIs (voxel size 1.0 mm³) using an automatic voxel counting and shading algorithm. Each MR slice was subsequently edited manually in the axial, sagittal and coronal plane, the latter enabling location of the posterior pole of the crystalline lens and partitioning of TOV into anterior (AV) and posterior volume (PV) regions. Results - Mean values (SD) for MSE (D), AL (mm), ACD (mm) and CR (mm) were 2.62 ± 3.83, 24.51 ± 1.47, 3.55 ± 0.34 and 7.75 ± 0.28, respectively. Mean values (SD) for TOV, AV and PV (mm³) were 8168.21 ± 1141.86, 1099.40 ± 139.24 and 7068.82 ± 1134.05, respectively. TOV showed significant correlation with MSE, AL, PV (all p<0.001), CR (p=0.043) and ACD (p=0.024). Bar CR, the correlations were shown to be wholly attributable to variation in PV. Multiple linear regression indicated that the combination of AL and CR provided optimum R² values of 79.4% for TOV. Conclusion - Clinically useful estimations of ocular volume can be obtained from measurement of AL and CR.
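The regression reported in the conclusion (TOV predicted from AL and CR) can be sketched with an ordinary least-squares fit; the data below are synthetic values drawn to match the abstract's reported means and SDs, and the generating coefficients are illustrative, not the study's.

```python
# Minimal sketch of the multiple linear regression TOV ~ AL + CR.
# Only the model form comes from the abstract; data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
al = rng.normal(24.51, 1.47, 67)   # axial length, mm
cr = rng.normal(7.75, 0.28, 67)    # corneal radius, mm
# Hypothetical linear relation plus noise, in mm^3:
tov = 900 * al + 400 * cr - 17000 + rng.normal(0, 300, 67)

X = np.column_stack([np.ones_like(al), al, cr])      # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, tov, rcond=None)       # least-squares fit
pred = X @ beta
r2 = 1 - np.sum((tov - pred) ** 2) / np.sum((tov - tov.mean()) ** 2)
print(f"intercept={beta[0]:.1f}, b_AL={beta[1]:.1f}, b_CR={beta[2]:.1f}, R^2={r2:.3f}")
```

With the study's actual measurements in place of the synthetic arrays, the same fit would yield the reported R² of 79.4%.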
Abstract:
The use of chemical fertilization in arable areas increases productivity, though it can eventually lead to a qualitative depreciation of groundwater sources, especially if such sources are unconfined. In this context, this thesis presents results from an analysis of the level of natural protection of the Barreiras Aquifer in an area located on the eastern coast of Rio Grande do Norte State, Brazil. The aquifer is clastic in nature and hydraulically unconfined, which makes it susceptible to contamination from surface loads of contaminants associated with the leaching of excess fertilizers not absorbed by ground vegetation. The methodology was based on hydro-geophysical data, particularly inverse models of vertical electrical soundings (VES) and information from well profiles, allowing the production of maps of longitudinal conductance (S), in millisiemens (mS), and of aquifer vulnerability. The maps were prepared with emphasis on the unsaturated overlying zone, highlighting in particular its thickness and the occurrence of clay lithologies. The longitudinal conductance and aquifer vulnerability reveal areas more susceptible to contamination in the northeast and east-central sections of the study area, with values less than or equal to 10 mS and greater than or equal to 0.50, respectively. The southwestern section proved less susceptible to contamination, with longitudinal conductance and vulnerability indices greater than or equal to 30 mS and less than or equal to 0.40, respectively.
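The longitudinal conductance underlying the maps above is conventionally computed from the layered VES inverse model as S = Σ hᵢ/ρᵢ over the layers of the unsaturated cover (thickness h in m, resistivity ρ in Ω·m), giving S in siemens; the layer values below are illustrative, not from the thesis.

```python
# Hedged sketch of the standard longitudinal-conductance computation
# for a layered VES model. Layer thicknesses/resistivities are made up.
def longitudinal_conductance_mS(layers):
    """layers: list of (thickness_m, resistivity_ohm_m) tuples."""
    return 1000.0 * sum(h / rho for h, rho in layers)

# A thick, conductive (clay-rich) cover yields high S, i.e. more
# natural protection; a thin, resistive (sandy) cover yields low S.
clayey = [(5.0, 60.0), (10.0, 40.0)]    # ~333 mS, above the 30 mS threshold
sandy = [(2.0, 500.0), (5.0, 1000.0)]   # ~9 mS, below the 10 mS threshold
print(longitudinal_conductance_mS(clayey))
print(longitudinal_conductance_mS(sandy))
```

Mapping S over the study area and thresholding it (here at the abstract's 10 mS and 30 mS values) is what separates the more and less vulnerable sectors.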
Abstract:
<p>X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination. </p><p>A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. 
The currently established methodologies for assessing CT image quality are not appropriate for modern CT scanners that have implemented the aforementioned dose reduction technologies.</p><p>Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.</p><p>The work in this dissertation used the task-based definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an observer to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer's performance in completing the task at hand (e.g., detection sensitivity/specificity).</p><p>First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. 
It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.</p><p>Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.</p><p>Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. 
Conversely, CNR was found not to correlate strongly with human performance, especially when comparing different reconstruction algorithms.</p><p>The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.</p><p>To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. 
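The image-subtraction noise measurement mentioned above can be sketched simply: subtracting two repeated scans of the same phantom cancels the deterministic background (including texture), and the quantum noise standard deviation is std(difference)/√2. The images below are synthetic stand-ins for CT data.

```python
# Sketch of quantum-noise measurement by image subtraction, under the
# assumption of two statistically independent repeated scans.
import numpy as np

rng = np.random.default_rng(2)
texture = rng.normal(0, 30, (256, 256))           # fixed background texture (HU)
scan1 = texture + rng.normal(0, 12, (256, 256))   # quantum noise sigma = 12 HU
scan2 = texture + rng.normal(0, 12, (256, 256))   # same texture, fresh noise

diff = scan1 - scan2                # texture cancels; noise variances add
noise = diff.std() / np.sqrt(2)     # recover single-scan noise sigma
print(f"estimated noise: {noise:.1f} HU")
```

The division by √2 follows because the variance of the difference of two independent realizations is twice the single-image noise variance.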
A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and that texture should be considered when assessing image quality of iterative algorithms.</p><p>To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called Clustered Lumpy Background texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in textured phantoms.</p><p>The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. 
The mathematical modeling framework is first presented. The models describe a lesion's morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called hybrid images. These hybrid images can then be used to assess detectability or estimability, with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.</p><p>Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). 
A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.</p><p>The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose. </p><p>In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. 
Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.</p>
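The non-prewhitening (NPW) matched-filter observer named in the dissertation above can be sketched for the simplest case: for white noise of variance σ², the NPW detectability index reduces to d' = √(Σ s²)/σ, where s is the expected noise-free lesion signal. The lesion geometry and noise levels below are illustrative, not the study's data.

```python
# Hedged sketch of an NPW matched-filter detectability index under a
# white-noise assumption (the full model uses the measured NPS/TTF).
import numpy as np

def npw_dprime(signal, sigma):
    """signal: 2D noise-free lesion profile (HU); sigma: noise std (HU)."""
    return np.sqrt(np.sum(signal ** 2)) / sigma

# A subtle -15 HU disc lesion, 6 mm diameter, on a 0.5 mm pixel grid.
y, x = np.mgrid[-16:16, -16:16] * 0.5
lesion = np.where(x ** 2 + y ** 2 <= 3.0 ** 2, -15.0, 0.0)

d_fbp = npw_dprime(lesion, sigma=20.0)
d_ir = npw_dprime(lesion, sigma=10.0)   # hypothetical: noise halved by IR
print(f"d' FBP={d_fbp:.1f}, d' IR={d_ir:.1f}")
```

In this white-noise limit, halving the noise exactly doubles d'; the dissertation's point is that with iterative reconstruction the noise is neither white nor stationary, so the full NPS-weighted form is needed.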
Abstract:
<p>Purpose: The purpose of this work was to investigate the breast dose saving potential of a breast positioning technique (BP) for thoracic CT examinations with organ-based tube current modulation (OTCM).</p><p>Methods: The study included 13 female patient models (XCAT, age range: 27-65 y.o., weight range: 52 to 105.8 kg). Each model was modified to simulate three breast sizes in standard supine geometry. The modeled breasts were further deformed, emulating a BP that would constrain the breasts within the 120° anterior tube current (mA) reduction zone. The tube current value of the CT examination was modeled using an attenuation-based program, which reduces the radiation dose to 20% in the anterior region with a corresponding increase to the posterior region. A validated Monte Carlo program was used to estimate organ doses with a typical clinical system (SOMATOM Definition Flash, Siemens Healthcare). The simulated organ doses and organ doses normalized by CTDIvol were compared between attenuation-based tube current modulation (ATCM), OTCM, and OTCM with BP (OTCMBP). </p><p>Results: On average, compared to ATCM, OTCM reduced the breast dose by 19.3 ± 4.5%, whereas OTCMBP reduced breast dose by 36.6 ± 6.9% (an additional 21.3 ± 7.3%). The dose saving of OTCMBP was more significant for larger breasts (on average 32, 38, and 44% reduction for 0.5, 1.5, and 2.5 kg breasts, respectively). Compared to ATCM, OTCMBP also reduced thymus and heart dose by 12.1 ± 6.3% and 13.1 ± 5.4%, respectively. </p><p>Conclusions: In thoracic CT examinations, OTCM with a breast positioning technique can markedly reduce unnecessary exposure to the radiosensitive organs in the anterior chest wall, specifically breast tissue. The breast dose reduction is more notable for women with larger breasts.</p>
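The dose comparison reported above rests on two simple quantities: organ dose normalized by CTDIvol, and percent reduction relative to the ATCM reference. A minimal sketch, with all dose values hypothetical rather than the study's:

```python
# Sketch of the CTDIvol normalization and percent-reduction arithmetic
# behind the reported comparisons. All numbers are illustrative.
def pct_reduction(reference, value):
    """Percent dose reduction of `value` relative to `reference`."""
    return 100.0 * (reference - value) / reference

ctdivol = 10.0                                           # mGy, hypothetical
breast_atcm, breast_otcm, breast_otcmbp = 12.0, 9.7, 7.6  # mGy, hypothetical

for name, dose in [("OTCM", breast_otcm), ("OTCMBP", breast_otcmbp)]:
    print(f"{name}: dose/CTDIvol={dose / ctdivol:.2f}, "
          f"reduction vs ATCM={pct_reduction(breast_atcm, dose):.1f}%")
```

Normalizing by CTDIvol makes organ doses comparable across scan protocols that deliver different overall output.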
Abstract:
We propose a novel method to harmonize diffusion MRI data acquired from multiple sites and scanners, which is imperative for joint analysis of the data to significantly increase sample size and statistical power of neuroimaging studies. Our method incorporates the following main novelties: i) we take into account the scanner-dependent spatial variability of the diffusion signal in different parts of the brain; ii) our method is independent of compartmental modeling of diffusion (e.g., tensor, and intra/extra cellular compartments) and the acquired signal itself is corrected for scanner related differences; and iii) inter-subject variability as measured by the coefficient of variation is maintained at each site. We represent the signal in a basis of spherical harmonics and compute several rotation invariant spherical harmonic features to estimate a region and tissue specific linear mapping between the signal from different sites (and scanners). We validate our method on diffusion data acquired from seven different sites (including two GE, three Philips, and two Siemens scanners) on a group of age-matched healthy subjects. Since the extracted rotation invariant spherical harmonic features depend on the accuracy of the brain parcellation provided by Freesurfer, we propose a feature based refinement of the original parcellation such that it better characterizes the anatomy and provides robust linear mappings to harmonize the dMRI data. We demonstrate the efficacy of our method by statistically comparing diffusion measures such as fractional anisotropy, mean diffusivity and generalized fractional anisotropy across multiple sites before and after data harmonization. We also show results using tract-based spatial statistics before and after harmonization for independent validation of the proposed methodology. 
Our experimental results demonstrate that, for nearly identical acquisition protocol across sites, scanner-specific differences can be accurately removed using the proposed method.
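The rotation-invariant spherical harmonic features at the core of the harmonization above can be sketched directly: for each SH order l, the feature is the total energy Σₘ|c_lm|², which does not change under rotation of the gradient frame. The coefficients below are random stand-ins; the per-region linear maps in the paper are estimated from such features across sites.

```python
# Sketch of rotation-invariant SH (RISH) features and the scale-map
# idea used for harmonization. SH coefficients here are synthetic.
import numpy as np

def rish_features(coeffs_by_order):
    """coeffs_by_order: dict {l: array of 2l+1 SH coefficients c_lm}."""
    return {l: float(np.sum(np.abs(c) ** 2)) for l, c in coeffs_by_order.items()}

rng = np.random.default_rng(4)
coeffs = {l: rng.normal(size=2 * l + 1) for l in (0, 2, 4)}  # even orders only
features = rish_features(coeffs)
print(features)
```

Given region-wise expected features E_ref[l] and E_target[l] at a reference and a target site, the target signal's order-l coefficients can then be rescaled by √(E_ref[l]/E_target[l]), which harmonizes the raw signal without committing to any diffusion compartment model.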
Abstract:
This thesis describes the design principles of an industrial feeding machine. The system is to be installed between two industrial machines. The apparatus must pace and synchronize the incoming products with the downstream machine. The machine orders the objects using a series of variable-speed conveyor belts. The development was carried out at the Liam Laboratory at the request of the company Sitma. Sitma already produced a system of the kind described in this thesis; its wish is therefore to modernize the previous application, since the device that performed product pacing was a Siemens PLC that is no longer on the market. The thesis covers the study of the application and its modelling in Matlab-Simulink, followed by an implementation, albeit not conclusive, in TwinCAT 3.
Abstract:
This essay describes the design of a self-directed learning system intended to improve the development of the competency of continuous knowledge updating among students of the Techniques d'intégration multimédia program. The idea for this system is rooted in concerns about the development of 21st-century skills and the establishment of digital literacy plans around the world to raise the competencies of citizens, who are asked to adapt, learn and master change rapidly and efficiently (OECD, 2000). Digital literacy brings together the competencies associated with the know-how of using technologies, but also the attitudes necessary for their critical and ethical use, as well as the learning-to-learn skills necessary for innovative and creative use of those same technologies. It is this learning-to-learn that particularly interests us, in a context where students of the Techniques d'intégration multimédia program face high and constant demands for continuous updating of their knowledge. The frame of reference of our essay identifies the competencies and skills linked to the development of continuous knowledge updating in four international and national digital literacy plans, including the ICT profile of college students (Le profil TIC des étudiants du collégial) proposed by the Réseau REPTIC (2015). We then develop the definition of continuous knowledge updating through the foundational work of Knowles (1975), Straka (1997a), Carré (1997), Long (1988), Foucher (2000) and Tremblay (2003) on the concepts of self-directed learning and autoformation. From these two concepts, we identify three main dimensions to consider in order to improve the development of continuous knowledge updating: the social dimension, the psychological dimension and the pedagogical dimension. 
First, for the social dimension, we refer to contemporary issues in the development of digital literacy and to the concept of the learning social subject supported by the work of Roger (2010) and Piguet (2013). Second, the psychological dimension refers to motivational aspects grounded in Deci and Ryan's (2000) self-determination theory and to volitional aspects grounded in Zimmerman's (1989) self-regulation theory. Finally, for the pedagogical dimension, we present socioconstructivist theory, the pedagogical perspective of connectivism (Siemens, 2005) and the classification of learning strategies proposed by Boulet, Savoie-Zajc and Chevrier (1996). We pursue our theoretical reflection by considering various modes of learning with Web 2.0 tools, including blogs, communities and networked learning. We conclude our frame of reference with the presentation of Paquette's (2002) learning system and Carré's (1992, 2005) seven-pillars model of self-directed learning, onto which we overlay Debon's (2002) recommendations, and finally with Lebrun's (2007) presentation of the ADDIE instructional design model, all four of which support a systemic process for developing our self-directed learning system. Our development research follows an interpretive paradigm with a qualitative methodology. Data were collected from students of the Techniques d'intégration multimédia program; these volunteer participants took part in a focus group during implementation and completed an electronic questionnaire used to evaluate the self-directed learning system. In light of our results, we believe that our system achieves its objective of improving the development of the competency of continuous knowledge updating among students of the program. 
The interpretation of our results allows us to state that our self-directed learning system, designed through a systemic process faithful to the findings of our frame of reference, covers the three dimensions we identified as essential to self-directed learning (social, psychological and pedagogical) and, above all, confirms their real importance in developing the competency of continuous knowledge updating. As presented in our frame of reference, we observe that the social dimension triggers the motivational and volitional processes that belong to the psychological dimension of self-directed learning. We can also see a link between the social dimension and the theory of self-determined motivation, which attaches importance to the social factors that facilitate motivation by meeting fundamental psychological needs. Moreover, we find that the tools developed in the course of our essay, such as the work plan and the time report, play a crucial self-regulation role for students in their processes of monitoring and cognitive adjustment, through the goal setting, self-evaluation, strategic adjustment of learning methods and time management they enable. We believe our essay offers benefits for the Techniques d'intégration multimédia program, mainly concrete avenues for improving the competency of continuous knowledge updating for its students, and the development of expertise in the rigorous application of instructional design for the future development of various learning systems. We identify two perspectives for future research related to our essay. 
First, we believe it would be interesting to explore the heuristic capacity of networked learning from the social, psychological and pedagogical perspectives of self-directed learning, following the work of Henri and Jeunesse (2013). Second, we believe it would be interesting to improve the development of digital literacy in its creative and innovative aspects, in a context where our program trains our students in expert use of technologies, enabling them to put these competencies to work in creative and innovative uses of those technologies.
Abstract:
Introduo: A cintigrafia ssea um dos exames mais frequentes em Medicina Nuclear. Esta modalidade de imagem mdica requere um balano apropriado entre a qualidade de imagem e a dose de radiao, ou seja, as imagens obtidas devem conter o nmero mnimo de contagem necessrias, para que apresentem qualidade considerada suficiente para fins diagnsticos. Objetivo: Este estudo tem como principal objetivo, a aplicao do software Enhanced Planar Processing (EPP), nos exames de cintigrafia ssea em doentes com carcinoma da mama e prstata que apresentam metstases sseas. Desta forma, pretende-se avaliar a performance do algoritmo EPP na prtica clnica em termos de qualidade e confiana diagnstica quando o tempo de aquisio reduzido em 50%. Material e Mtodos: Esta investigao teve lugar no departamento de Radiologia e Medicina Nuclear do Radboud University Nijmegen Medical Centre. Cinquenta e um doentes com suspeita de metstases sseas foram administrados com 500MBq de metilenodifosfonato marcado com tecncio-99m. Cada doente foi submetido a duas aquisies de imagem, sendo que na primeira foi seguido o protocolo standard do departamento (scan speed=8 cm/min) e na segunda, o tempo de aquisio foi reduzido para metade (scan speed=16 cm/min). As imagens adquiridas com o segundo protocolo foram processadas com o algoritmo EPP. Todas as imagens foram submetidas a uma avaliao objetiva e subjetiva. Relativamente anlise subjetiva, trs mdicos especialistas em Medicina Nuclear avaliaram as imagens em termos da detetabilidade das leses, qualidade de imagem, aceitabilidade diagnstica, localizao das leses e confiana diagnstica. No que respeita avaliao objetiva, foram selecionadas duas regies de interesse, uma localizada no tero mdio do fmur e outra localizada nos tecidos moles adjacentes, de modo a obter os valores de relao sinal-rudo, relao contraste-rudo e coeficiente de variao. 
Results: The results show that images processed with the EPP software give physicians sufficient diagnostic information for the detection of metastases, since no statistically significant differences were found (p>0.05). Moreover, inter-observer agreement between these images and the images acquired with the standard protocol was 95% (k=0.88). Regarding image quality, however, statistically significant differences were found when the imaging modalities were compared with each other (p<0.05). Regarding diagnostic acceptability, no statistically significant differences were found between the images acquired with the standard protocol and the images processed with the EPP software (p>0.05), with inter-observer agreement of 70.6%. However, statistically significant differences were found between the images acquired with the standard protocol and those acquired with the second protocol and not processed with the EPP software (p<0.05). Furthermore, no statistically significant differences (p>0.05) were found in signal-to-noise ratio, contrast-to-noise ratio or coefficient of variation between the images acquired with the standard protocol and the EPP-processed images. Conclusion: From these results it can be concluded that the EPP algorithm, developed by Siemens, offers the possibility of reducing the acquisition time by 50% while maintaining image quality considered sufficient for diagnostic purposes. Besides increasing patient satisfaction, this technology is quite advantageous for the department's workflow.
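The agreement figure reported above (k=0.88) is Cohen's kappa, which corrects raw percentage agreement for the agreement expected by chance. A minimal sketch of the computation, using hypothetical ratings rather than the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion
    of agreement and p_e the agreement expected by chance from each
    rater's marginal label frequencies.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    labels = set(freq_a) | set(freq_b)  # Counter returns 0 for absent labels
    p_e = sum((freq_a[label] / n) * (freq_b[label] / n) for label in labels)
    return (p_o - p_e) / (1 - p_e)
```

A kappa of 0.88, as reported, is conventionally read as almost perfect agreement, which is why 95% raw concordance is presented alongside it.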
Abstract:
Whole-body bone scintigraphy is one of the most frequent imaging examinations performed in nuclear medicine. Among other applications, this procedure can provide the diagnosis of bone metastases. In oncology patients, the presence of bone metastases is a strong prognostic indicator of patient longevity. Moreover, the presence or absence of bone metastases influences treatment planning, which requires an accurate interpretation of the imaging results. Problem: Given that bone metastasis is considered a severe complication associated with increased morbidity and decreased survival, the concept of patient care becomes even more imperative in these situations. Best imaging practices should therefore be implemented so as to obtain the best possible outcome of the procedure with minimal patient discomfort. One plausible way to achieve this in the specific case of whole-body bone scintigraphy is to reduce the acquisition time; on their own, however, the resulting images would be of such reduced quality that the results could be biased. New techniques have recently emerged, namely in image processing, through which it is possible to generate reduced-count scintigraphic images of quality comparable to that obtained with the standard protocol. Even so, some of these methods remain associated with uncertainties, particularly as regards sustaining diagnostic confidence after routine protocols are modified. Objectives: The present work aims to evaluate the performance of the Pixon image-processing algorithm by means of a phantom study. The goal is to compare the image quality and detectability provided by unprocessed images with those of images submitted to this processing technique, and also to evaluate the effect of this algorithm on reducing acquisition time.
To this end, images obtained with the standard protocol are compared with those acquired using faster protocols and subsequently submitted to the processing method described. Material and Methods: This investigation was carried out in the Department of Radiology and Nuclear Medicine of the Radboud University Nijmegen Medical Centre, in the Netherlands. A cylindrical phantom containing a set of six spheres of different sizes, suitable for planar imaging, was used. The phantom was prepared with different sphere-to-background activity ratios (4:1, 8:1, 17:1, 22:1, 32:1 and 71:1). For each experimental test, the phantom was then submitted to several image acquisition protocols at different scan speeds: 8 cm/min, 12 cm/min, 16 cm/min and 20 cm/min. All images were acquired on the same gamma camera - e.cam Signature Dual Detector System (Siemens Medical Solutions USA, Inc.) - using the same technical acquisition parameters, except for the speed. Twenty-four images were acquired, all post-processed with Siemens software (Siemens Medical Solutions USA, Inc.) that includes the tool needed to process whole-body scintigraphic images. The reconstruction parameters were the same for each image series, set to automatic mode. The collected data were analysed by means of an objective evaluation (using physical image-quality parameters) and a subjective one (by two observers). Statistical analysis was performed using SPSS version 22 for Windows. Results: The subjective analysis of each activity ratio showed that, in general, sphere detectability increased after the images were processed. Inter-observer agreement for this analysis was substantial, both for unprocessed and for processed images.
It was also shown that the physical image-quality parameters improved after the processing algorithm was applied. Furthermore, comparing the standard images (acquired at 8 cm/min) with the processed images acquired with faster protocols, it was observed that: images acquired at a scan speed of 12 cm/min can provide improved results, with superior image-quality parameters and detectability; images acquired at 16 cm/min provide results comparable to the standard, with similar image-quality and detectability values; and images acquired at 20 cm/min show decreased image-quality values as well as reduced detectability. Discussion: These results were also established by means of a clinical study in an independent investigation in the same department. Fifty-one patients referred with breast and prostate carcinomas were included, with the aim of studying the impact of this technique in clinical practice. The patients underwent the standard protocol and then an additional acquisition at a scan speed of 16 cm/min. After the images had been blindly evaluated by three specialist physicians, it was concluded that image quality and detectability were comparable between images, corroborating the results of this investigation. Conclusion: With the aim of reducing acquisition time by applying an image-processing algorithm, it was shown that the 16 cm/min protocol is the limit for increasing the scan speed. After the data are processed, this protocol provides the results most equivalent to those obtained with the standard protocol.
Given that this technique has been successfully established in clinical practice, it can be concluded that, at least in patients referred with breast and prostate carcinomas, the acquisition time can be halved by doubling the scan speed from 8 to 16 cm/min.
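The time saving follows directly from the scan geometry: for a planar whole-body sweep, acquisition time is scan length divided by scan speed, so doubling the speed halves the time. A small sketch, assuming a hypothetical 192 cm scan length (the abstract does not state one):

```python
def acquisition_minutes(scan_length_cm, speed_cm_per_min):
    """Planar whole-body acquisition time: length divided by scan speed."""
    return scan_length_cm / speed_cm_per_min

# Hypothetical 192 cm whole-body sweep (illustrative value only):
standard = acquisition_minutes(192, 8)   # standard protocol, 8 cm/min -> 24.0 min
fast = acquisition_minutes(192, 16)      # doubled speed, 16 cm/min -> 12.0 min
```

Whatever the actual scan length, the ratio between the two protocols is fixed at 2:1, which is the 50% reduction the studies above evaluate.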
Abstract:
This dissertation takes a critical look at the theory of Connectivism in light of its principles and their implications for the traditional view of learning and knowledge. The thesis was developed using a literature-review methodology covering the most relevant publications by the main representatives of Connectivism, namely George Siemens and Stephen Downes, with the constant concern of presenting not merely another synthesis of the theory but, at the same time, a critical view of Connectivism. A learning theory for some, a mere epistemological perspective for others, Connectivism has taken on a growing role in the debate about what we understand by networked learning and its implications for the traditional status of knowledge and learning, and even for the roles of educators and students. Recognised by some and criticised by others, Connectivism is still taking its first steps in developing an innovative epistemological vision, particularly as regards networked sharing and learning centred on online communities governed by common interests and goals, where self-directed learning is fundamental. But what consequences does this new way of viewing learning bring? To what extent does Connectivism go beyond previous learning theories? Will we regard knowledge differently from now on? What is the true reach of MOOCs, which are increasingly in vogue?
Abstract:
Massive Open Online Courses (MOOCs) may be considered a new form of virtual, technology-enhanced learning environment. Since their first appearance in 2008, the number of MOOCs has grown dramatically. The hype around MOOCs was accompanied by great expectations: 2012 was named the Year of the MOOC, and MOOCs were expected to revolutionise higher education. Two types of MOOC may be distinguished: cMOOCs, as proposed by Siemens and based on his ideas of connectivism, and xMOOCs, developed at institutions such as Stanford and MIT. Although MOOCs have received a great deal of attention, they have also met with criticism. The time has therefore come to reflect critically upon this phenomenon.
Abstract:
Knowing that many diseases cause lesions that are not always visible to the naked eye, this preliminary study in human palaeopathology uses a complementary approach from medical imaging, the CT scan, to provide more precise diagnoses. The objective is thus to test the effectiveness and the limits of CT analysis in the examination of archaeological specimens. A sample of 55 individuals was selected from the osteological collection from the St. Matthew Protestant cemetery (Quebec City, 1771-1860). A complete macroscopic and CT analysis was then performed on each skeleton. The macroscopic observations consisted of recording some ten criteria standardised in the reference literature, relating to abnormal manifestations on the surface of the skeleton. The CT scans were performed at the Institut National de la Recherche Scientifique in Quebec City with a Siemens Somatom scanner (Definition AS+ 128). The CT data made it possible to record a series of complementary criteria on the internal structure of the bone (cortical thinning/thickening, density variation, etc.). Following the differential diagnosis method, hypotheses or diagnoses were proposed. They are mainly based on the diagnostic criteria given in the palaeopathology reference manuals, but also draw on the clinical literature and the expertise of physicians. The results presented here support the following: 1) In 43% of cases, the CT data provided essential information for the pathological diagnosis. This trend is confirmed for some diseases but not for others, as certain diagnoses cannot be made without the presence of soft tissue. 2) The spatial distribution of most lesions varies by anatomical region, both macroscopically and on CT.
3) Certain types of disease appear to be associated with age and sex, which is supported by the literature. 4) This research also shows that the diagnostic process requires, in 38% of cases, a complementary analysis (e.g. histology, scintigraphy, radiography) to refine the final diagnosis.