244 results for Siemens-Schuckertwerke.


Relevance: 10.00%

Publisher:

Abstract:

Neuroimaging studies in bipolar disorder report gray matter volume (GMV) abnormalities in neural regions implicated in emotion regulation. This includes a reduction in ventral/orbital medial prefrontal cortex (OMPFC) GMV and, inconsistently, increases in amygdala GMV. We aimed to examine OMPFC and amygdala GMV in bipolar disorder type 1 patients (BPI) versus healthy control participants (HC), and the potential confounding effects of gender, clinical and illness history variables and psychotropic medication upon any group differences that were demonstrated in OMPFC and amygdala GMV. Images were acquired from 27 BPI (17 euthymic, 10 depressed) and 28 age- and gender-matched HC in a 3T Siemens scanner. Data were analyzed with SPM5 using voxel-based morphometry (VBM) to assess main effects of diagnostic group and gender upon whole brain (WB) GMV. Post-hoc analyses were subsequently performed using SPSS to examine the extent to which clinical and illness history variables and psychotropic medication contributed to GMV abnormalities in BPI in a priori and non-a priori regions, as demonstrated by the above VBM analyses. BPI showed reduced GMV in bilateral posteromedial rectal gyrus (PMRG), but no abnormalities in amygdala GMV. BPI also showed reduced GMV in two non-a priori regions: left parahippocampal gyrus and left putamen. For left PMRG GMV, there was a significant group by gender by trait anxiety interaction. GMV was significantly reduced in male low-trait anxiety BPI versus male low-trait anxiety HC, and in high- versus low-trait anxiety male BPI. Our results show that in BPI there were significant effects of gender and trait anxiety, with male BPI and those high in trait anxiety showing reduced left PMRG GMV. PMRG is part of a medial prefrontal network implicated in visceromotor and emotion regulation.

Abstract:

VSC converters are becoming more prevalent in HVDC applications. Two circuits are commercially available at present: a traditional six-switch PWM inverter implemented using series-connected IGBTs (ABB's HVDC Light®), and a modular multi-level converter (MMC) (Siemens' HVDC-PLUS). This paper presents an alternative MMC topology, which utilises a novel current injection technique and exhibits several desirable characteristics.

Abstract:

Background/aims - To determine which biometric parameters provide optimum predictive power for ocular volume. Methods - Sixty-seven adult subjects were scanned with a Siemens 3-T MRI scanner. Mean spherical error (MSE) (D) was measured with a Shin-Nippon autorefractor, and a Zeiss IOLMaster was used to measure (in mm) axial length (AL), anterior chamber depth (ACD) and corneal radius (CR). Total ocular volume (TOV) was calculated from T2-weighted MRIs (voxel size 1.0 mm³) using an automatic voxel counting and shading algorithm. Each MR slice was subsequently edited manually in the axial, sagittal and coronal planes, the latter enabling location of the posterior pole of the crystalline lens and partitioning of TOV into anterior (AV) and posterior volume (PV) regions. Results - Mean values (±SD) for MSE (D), AL (mm), ACD (mm) and CR (mm) were −2.62±3.83, 24.51±1.47, 3.55±0.34 and 7.75±0.28, respectively. Mean values (±SD) for TOV, AV and PV (mm³) were 8168.21±1141.86, 1099.40±139.24 and 7068.82±1134.05, respectively. TOV showed significant correlation with MSE, AL, PV (all p<0.001), CR (p=0.043) and ACD (p=0.024). Apart from CR, the correlations were shown to be wholly attributable to variation in PV. Multiple linear regression indicated that the combination of AL and CR provided an optimum R² value of 79.4% for TOV. Conclusion - Clinically useful estimations of ocular volume can be obtained from measurement of AL and CR.
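The multiple linear regression behind the reported R² can be sketched as follows. This is an illustrative sketch only: the data are synthetic, and the regression coefficients used to generate them are hypothetical, not the study's fitted values.

```python
import numpy as np

# Synthetic cohort matching the paper's reported means/SDs for AL and CR.
rng = np.random.default_rng(0)
n = 67
AL = rng.normal(24.51, 1.47, n)   # axial length, mm
CR = rng.normal(7.75, 0.28, n)    # corneal radius, mm
# Hypothetical noiseless linear relation, used only so the fit is checkable.
TOV = 900.0 * AL + 500.0 * CR - 17000.0  # total ocular volume, mm^3

X = np.column_stack([np.ones(n), AL, CR])       # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, TOV, rcond=None)  # least-squares fit

pred = X @ beta
ss_res = np.sum((TOV - pred) ** 2)
ss_tot = np.sum((TOV - TOV.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot  # the paper reports R^2 = 79.4% for AL + CR
```

With real, noisy measurements the same fit would of course yield an R² below 1, as in the study.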

Abstract:

The use of chemical fertilization in arable perimeters increases productivity, though it can eventually lead to a qualitative depreciation of groundwater sources, especially if such sources are unconfined in nature. In this context, this thesis presents results from an analysis of the level of natural protection of the Barreiras Aquifer in an area located on the eastern coast of Rio Grande do Norte State, Brazil. The aquifer is clastic in nature and hydraulically unconfined, which makes it susceptible to contamination from surface loads of contaminants associated with the leaching of excess fertilizers not absorbed by ground vegetation. The methodology was based on hydro-geophysical data, particularly inverse models of vertical electrical soundings (VES) and information from well profiles, allowing maps of longitudinal conductance (S), expressed in millisiemens (mS), and of aquifer vulnerability to be produced. These maps were prepared with emphasis on the unsaturated overlying zone, highlighting in particular its thickness and the occurrence of clay lithologies. The longitudinal conductance and aquifer vulnerability maps reveal areas more susceptible to contamination in the northeast and east-central sections of the study area, with values equal to or less than 10 mS and greater than or equal to 0.50, respectively. On the other hand, the southwestern section proved to be less susceptible to contamination, with longitudinal conductance and vulnerability indices greater than or equal to 30 mS and less than or equal to 0.40, respectively.
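A minimal sketch of the longitudinal conductance parameter behind such protection maps, assuming the standard (Dar Zarrouk) definition S = Σ hᵢ/ρᵢ over the layers of the unsaturated zone; the layer thicknesses and resistivities below are hypothetical, not values from this thesis:

```python
# Longitudinal conductance of a layered column interpreted from VES inversion:
# S = sum(h_i / rho_i), with h_i the layer thickness (m) and rho_i its
# resistivity (ohm*m). Low-resistivity (clayey) layers dominate S, which is
# why a thick clay cover implies better natural protection of the aquifer.

def longitudinal_conductance(layers):
    """layers: list of (thickness_m, resistivity_ohm_m) tuples.
    Returns S in siemens."""
    return sum(h / rho for h, rho in layers)

# Hypothetical column: 5 m of sand (500 ohm*m) over 3 m of clay (30 ohm*m).
layers = [(5.0, 500.0), (3.0, 30.0)]
S = longitudinal_conductance(layers)  # siemens
S_mS = 1000.0 * S                     # millisiemens, as mapped in the text
```

Under the thresholds quoted in the abstract, this hypothetical column (110 mS) would fall in the better-protected class (≥ 30 mS), while a thin sandy cover without clay would fall near or below the vulnerable 10 mS class.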


Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities and, as a result of this fact coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented the aforementioned dose reduction technologies.

Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
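A toy numerical sketch of why a matched-filter metric can behave differently from CNR, assuming white noise for simplicity (the lesion shape, noise level, and simplified formulas here are illustrative, not the dissertation's implementation):

```python
import numpy as np

# For a known signal s(x,y) in white noise of standard deviation sigma, the
# non-prewhitening (NPW) matched-filter detectability reduces to
#   d' = sqrt(sum(s^2)) / sigma,
# whereas CNR uses only the contrast. Unlike CNR, d' accounts for the
# signal's spatial extent, one reason such observer models can track human
# detection performance more closely than CNR.

def npw_dprime_white_noise(signal, sigma):
    return np.sqrt(np.sum(np.asarray(signal, dtype=float) ** 2)) / sigma

def cnr(contrast, sigma):
    return contrast / sigma

# Hypothetical disc lesion: contrast 10 HU, radius 4 px, noise sigma 20 HU.
y, x = np.mgrid[-16:16, -16:16]
lesion = np.where(x**2 + y**2 <= 4**2, 10.0, 0.0)
d = npw_dprime_white_noise(lesion, 20.0)
c = cnr(10.0, 20.0)
# Doubling the lesion's area raises d' but leaves CNR unchanged.
```

This also illustrates the abstract's point: two reconstructions with equal CNR can still differ in d' if they render the lesion's extent or edges differently.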

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
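The image subtraction technique mentioned above can be sketched in a few lines: subtracting two repeated scans of the same phantom cancels the (identical) background, and the difference image has twice the noise variance. The synthetic arrays below stand in for repeated CT scans; the noise level is hypothetical.

```python
import numpy as np

# Two repeated "scans" share a fixed background (texture) and differ only in
# independent quantum noise realizations, so
#   sigma = std(scan1 - scan2) / sqrt(2)
# estimates the noise without any bias from the background structure.

rng = np.random.default_rng(42)
texture = rng.uniform(0.0, 200.0, size=(256, 256))  # fixed phantom "texture"
sigma_true = 15.0
scan1 = texture + rng.normal(0.0, sigma_true, texture.shape)
scan2 = texture + rng.normal(0.0, sigma_true, texture.shape)

diff = scan1 - scan2                   # texture cancels; only noise remains
sigma_est = diff.std() / np.sqrt(2.0)  # should be close to sigma_true
```

For a linear algorithm such as FBP this estimate is background-independent; the abstract's finding is precisely that for SAFIRE it is not, because the (nonlinear) reconstruction leaves different noise in textured and uniform regions.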

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
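For square ROIs, the standard ensemble NPS estimator the dissertation builds on (before extending it to irregular ROIs) can be sketched as follows; the ROI size, pixel pitch, and noise data below are synthetic assumptions:

```python
import numpy as np

# Ensemble NPS estimate from N square noise ROIs of size n x n:
#   NPS(u,v) = (dx*dy / (Nx*Ny)) * <|DFT2(roi - mean(roi))|^2>,
# whose integral over spatial frequency equals the noise variance (Parseval),
# a useful sanity check on the normalization.

def nps_2d(rois, pixel_mm=0.5):
    rois = np.asarray(rois, dtype=float)
    n = rois.shape[-1]
    zero_mean = rois - rois.mean(axis=(-2, -1), keepdims=True)
    dft2 = np.abs(np.fft.fft2(zero_mean)) ** 2     # over the last two axes
    return (pixel_mm ** 2 / (n * n)) * dft2.mean(axis=0)

rng = np.random.default_rng(1)
rois = rng.normal(0.0, 10.0, size=(200, 64, 64))   # synthetic white-noise ROIs
nps = nps_2d(rois, pixel_mm=0.5)

df = 1.0 / (64 * 0.5)                  # frequency bin width, 1/mm
variance = nps.sum() * df * df         # should recover ~sigma^2 = 100
```

White noise gives a flat NPS; the abstract's point is that SAFIRE's NPS is neither flat nor stationary, and depends on the local background.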

To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
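A minimal sketch of what such an analytical lesion model could look like: a radially symmetric lesion whose contrast falls off through a sigmoid edge. The functional form and every parameter value here are illustrative assumptions, not the dissertation's actual liver/lung/kidney models.

```python
import numpy as np

# Analytical lesion morphology as an equation:
#   L(r) = C / (1 + exp((r - R) / w)),
# with contrast C (HU), radius R (px), and edge width w (px). Voxelizing L
# and adding it to a patient image yields a "hybrid" image in which the
# lesion's true size, contrast, and location are known exactly.

def lesion_model(shape, center, contrast_hu, radius_px, edge_w_px):
    yy, xx = np.indices(shape)
    r = np.hypot(yy - center[0], xx - center[1])
    return contrast_hu / (1.0 + np.exp((r - radius_px) / edge_w_px))

lesion = lesion_model((64, 64), center=(32, 32),
                      contrast_hu=-15.0, radius_px=6.0, edge_w_px=1.0)
# Near the center the model sits at full contrast (about -15 HU); far from
# the lesion it decays to ~0 HU, so the insertion is local.
```

A larger edge width w mimics a less conspicuous, more infiltrative margin, which is exactly the kind of morphological parameter the framework lets one control and later estimate against known ground truth.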

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard of care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Abstract:

Purpose: The purpose of this work was to investigate the breast dose saving potential of a breast positioning technique (BP) for thoracic CT examinations with organ-based tube current modulation (OTCM).

Methods: The study included 13 female patient models (XCAT, age range: 27-65 y.o., weight range: 52 to 105.8 kg). Each model was modified to simulate three breast sizes in standard supine geometry. The modeled breasts were further deformed, emulating a BP that would constrain the breasts within the 120° anterior tube current (mA) reduction zone. The tube current value of the CT examination was modeled using an attenuation-based program, which reduces the radiation dose to 20% in the anterior region with a corresponding increase to the posterior region. A validated Monte Carlo program was used to estimate organ doses with a typical clinical system (SOMATOM Definition Flash, Siemens Healthcare). The simulated organ doses and organ doses normalized by CTDIvol were compared between attenuation-based tube current modulation (ATCM), OTCM, and OTCM with BP (OTCMBP).

Results: On average, compared to ATCM, OTCM reduced the breast dose by 19.3±4.5%, whereas OTCMBP reduced breast dose by 36.6±6.9% (an additional 21.3±7.3%). The dose saving of OTCMBP was more significant for larger breasts (on average 32, 38, and 44% reduction for 0.5, 1.5, and 2.5 kg breasts, respectively). Compared to ATCM, OTCMBP also reduced thymus and heart dose by 12.1 ± 6.3% and 13.1 ± 5.4%, respectively.

Conclusions: In thoracic CT examinations, OTCM with a breast positioning technique can markedly reduce unnecessary exposure to the radiosensitive organs in the anterior chest wall, specifically breast tissue. The breast dose reduction is more notable for women with larger breasts.

Abstract:

We propose a novel method to harmonize diffusion MRI data acquired from multiple sites and scanners, which is imperative for joint analysis of the data to significantly increase sample size and statistical power of neuroimaging studies. Our method incorporates the following main novelties: i) we take into account the scanner-dependent spatial variability of the diffusion signal in different parts of the brain; ii) our method is independent of compartmental modeling of diffusion (e.g., tensor, and intra/extra cellular compartments) and the acquired signal itself is corrected for scanner related differences; and iii) inter-subject variability as measured by the coefficient of variation is maintained at each site. We represent the signal in a basis of spherical harmonics and compute several rotation invariant spherical harmonic features to estimate a region and tissue specific linear mapping between the signal from different sites (and scanners). We validate our method on diffusion data acquired from seven different sites (including two GE, three Philips, and two Siemens scanners) on a group of age-matched healthy subjects. Since the extracted rotation invariant spherical harmonic features depend on the accuracy of the brain parcellation provided by Freesurfer, we propose a feature based refinement of the original parcellation such that it better characterizes the anatomy and provides robust linear mappings to harmonize the dMRI data. We demonstrate the efficacy of our method by statistically comparing diffusion measures such as fractional anisotropy, mean diffusivity and generalized fractional anisotropy across multiple sites before and after data harmonization. We also show results using tract-based spatial statistics before and after harmonization for independent validation of the proposed methodology. 
Our experimental results demonstrate that, for nearly identical acquisition protocols across sites, scanner-specific differences can be accurately removed using the proposed method.
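The rotation-invariant spherical-harmonic (RISH) features and the per-order linear mapping underlying such harmonization can be sketched as follows; the coefficient values, data layout, and simple per-order scaling rule are illustrative assumptions, not the paper's region- and tissue-specific implementation:

```python
import numpy as np

# For spherical-harmonic coefficients c_{lm} of the dMRI signal, the
# per-order energy  E_l = sum_m c_{lm}^2  is invariant to rotations of the
# signal. A linear, model-free mapping between sites can then rescale each
# order l by sqrt(E_l^ref / E_l^target), correcting the signal itself
# without fitting tensors or compartment models.

def rish_energy(coeffs_by_order):
    """coeffs_by_order: dict {l: array of the 2l+1 coefficients of order l}."""
    return {l: float(np.sum(np.square(c))) for l, c in coeffs_by_order.items()}

def harmonize(coeffs_by_order, e_ref):
    """Scale each order so its RISH energy matches the reference site's."""
    out = {}
    for l, c in coeffs_by_order.items():
        e = np.sum(np.square(c))
        out[l] = c * np.sqrt(e_ref[l] / e) if e > 0 else c
    return out

# Hypothetical order-0 and order-2 coefficients at a "target" site, and
# hypothetical reference-site energies to map onto.
target = {0: np.array([2.0]), 2: np.array([0.5, -0.3, 0.1, 0.0, 0.2])}
e_ref = {0: 9.0, 2: 0.78}
mapped = harmonize(target, e_ref)
# After mapping, the target's RISH energies equal the reference energies.
```

In practice such scale factors would be estimated per region and tissue class (hence the paper's reliance on an accurate parcellation), not per voxel in isolation.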

Abstract:

This thesis describes the design principle of an industrial feeding machine. The system is to be installed between two industrial machines. The apparatus must phase and synchronize the products arriving at its input with the downstream machine. The machine orders the objects using a series of conveyor belts with adjustable speed.
The development was carried out at the Liam Laboratory at the request of the company Sitma. Sitma already produced a system of the kind described in this thesis. Sitma's wish is therefore to modernize the previous application, since the device that performed the phasing of products was a Siemens PLC that is no longer on the market. The thesis covers the study of the application and its modeling in Matlab-Simulink, followed by an implementation, albeit not conclusive, in TwinCAT 3.

Abstract:

This essay describes the design of a self-directed learning system intended to improve the development of the competency of continuous knowledge updating among students of the Techniques d’intégration multimédia program. The idea for this system is rooted in concerns related to the development of 21st-century skills and the implementation of digital literacy plans around the world, aimed at raising the competencies of citizens who are asked to adapt, learn, and master change quickly and efficiently (OECD, 2000). Digital literacy encompasses the competencies associated with the know-how related to the use of technologies, but also the attitudes necessary for their critical and ethical use, as well as the learning-to-learn skills necessary for an innovative and creative use of those same technologies. It is this learning-to-learn that particularly interests us, in a context where students of the Techniques d’intégration multimédia program face high and constant demands to continuously update their knowledge. The frame of reference of our essay identifies the competencies and skills linked to the development of the competency of continuous knowledge updating in four international and national digital literacy plans, including Le profil TIC des étudiants du collégial proposed by the Réseau REPTIC (2015). We then flesh out the definition of continuous knowledge updating through the founding works of Knowles (1975), Straka (1997a), Carré (1997), Long (1988), Foucher (2000), and Tremblay (2003), which deal with the concepts of "self-directed learning" and "self-training".
From these two concepts, we derive three main dimensions to consider in order to improve the development of continuous knowledge updating: the social dimension, the psychological dimension, and the pedagogical dimension. First, for the social dimension, we refer to the contemporary issues of digital literacy development and to the concept of the social learning subject supported by the work of Roger (2010) and Piguet (2013). Second, the psychological dimension refers to motivational aspects supported by Deci and Ryan's (2000) self-determination theory and to volitional aspects supported by Zimmerman's (1989) self-regulation theory. Finally, for the pedagogical dimension, we present socioconstructivist theory, the pedagogical perspective of connectivism (Siemens, 2005), and the classification of learning strategies proposed by Boulet, Savoie-Zajc, and Chevrier (1996). We pursue our theoretical reflection by considering various modes of learning using Web 2.0 tools, including blogs, communities, and networked learning. We conclude our frame of reference with the presentation of Paquette's (2002) learning system and Carré's (1992, 2005) model of the seven pillars of self-training, onto which we superimpose Debon's (2002) recommendations, and finally the presentation of Lebrun's (2007) ADDIE instructional design model, all four useful for applying a systemic process to the development of our self-directed learning system. Our development research falls within an interpretive paradigm with a qualitative methodology. Data collection was carried out with students of the Techniques d’intégration multimédia program.
These volunteer participants took part in a focus group during implementation and completed an electronic questionnaire used to evaluate the self-directed learning system. In light of our results, we believe that our system achieves its objective of improving the development of the competency of continuous knowledge updating among students of the Techniques d’intégration multimédia program. The interpretation of our results allows us to affirm that our system, designed through the application of a systemic process faithful to the findings of our frame of reference, covers the three dimensions we identified as essential to self-training, namely the social, psychological, and pedagogical dimensions, and above all confirms their genuine importance in the development of the competency of continuous knowledge updating. As presented in our frame of reference, we observe that the social dimension triggers the motivational and volitional processes that belong to the psychological dimension of self-directed learning or self-training. We can see that there is indeed a link between the social dimension and the theory of self-determined motivation, which attaches importance to the social factors that facilitate motivation by meeting fundamental psychological needs. Moreover, we find that the tools developed as part of our essay, such as the work plan and the time report, play a crucial self-regulation role for students in their cognitive monitoring and adjustment processes, through the goal setting, self-evaluation, strategic adjustment of learning methods, and time management they enable.
We believe that our essay offers benefits for the Techniques d’intégration multimédia program, mainly through concrete avenues for improving the competency of continuous knowledge updating for the program's students and through the development of expertise in the rigorous application of instructional engineering for the future development of various learning systems. We identify two avenues for future research related to our essay. First, we think it would be interesting to explore the heuristic capacity of networked learning from the social, psychological, and pedagogical perspectives of self-training, following the work of Henri and Jeunesse (2013). Second, we think it would be interesting to improve the development of digital literacy in its creative and innovative aspects, in a context where our program equips our students for expert use of technologies, thereby allowing them to put these skills to use in a creative and innovative exploitation of technologies.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Introduction: Bone scintigraphy is one of the most frequent examinations in Nuclear Medicine. This medical imaging modality requires an appropriate balance between image quality and radiation dose: the images obtained must contain the minimum number of counts needed to reach quality considered sufficient for diagnostic purposes. Objective: The main objective of this study is to apply the Enhanced Planar Processing (EPP) software to bone scintigraphy examinations of patients with breast and prostate carcinoma presenting bone metastases, in order to evaluate the performance of the EPP algorithm in clinical practice in terms of image quality and diagnostic confidence when the acquisition time is reduced by 50%. Material and Methods: This investigation took place in the department of Radiology and Nuclear Medicine of the Radboud University Nijmegen Medical Centre. Fifty-one patients with suspected bone metastases were administered 500 MBq of technetium-99m-labelled methylene diphosphonate. Each patient underwent two image acquisitions: the first followed the department's standard protocol (scan speed = 8 cm/min), and in the second the acquisition time was halved (scan speed = 16 cm/min). The images acquired with the second protocol were processed with the EPP algorithm. All images underwent objective and subjective evaluation. For the subjective analysis, three Nuclear Medicine physicians assessed the images in terms of lesion detectability, image quality, diagnostic acceptability, lesion localization and diagnostic confidence. For the objective evaluation, two regions of interest were selected, one in the middle third of the femur and the other in the adjacent soft tissue, to obtain signal-to-noise ratio, contrast-to-noise ratio and coefficient of variation values.
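The objective evaluation above rests on three standard ROI metrics. As a minimal sketch (the pixel arrays and the estimator conventions, e.g. population rather than sample standard deviation, are illustrative assumptions, not taken from the study), they can be computed as:

```python
import numpy as np

def roi_metrics(lesion_roi, background_roi):
    """Signal-to-noise ratio, contrast-to-noise ratio and coefficient of
    variation from two arrays of pixel values (lesion ROI vs. adjacent
    background ROI)."""
    lesion = np.asarray(lesion_roi, dtype=float)
    bg = np.asarray(background_roi, dtype=float)
    snr = lesion.mean() / bg.std()                # SNR: mean signal over background noise
    cnr = (lesion.mean() - bg.mean()) / bg.std()  # CNR: lesion-background contrast over noise
    cov = bg.std() / bg.mean()                    # CoV: relative variability of background
    return snr, cnr, cov
```

Comparing these three values between standard-protocol and EPP-processed images is what the objective evaluation amounts to.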
Results: The results show that images processed with the EPP software offer physicians sufficient diagnostic information for metastasis detection, since no statistically significant differences were found (p>0.05). Moreover, inter-observer agreement between these images and the images acquired with the standard protocol was 95% (k=0.88). Regarding image quality, statistically significant differences were found when the imaging modalities were compared with one another (p≤0.05). For diagnostic acceptability, no statistically significant differences were found between the images acquired with the standard protocol and the images processed with the EPP software (p>0.05), with inter-observer agreement of 70.6%. However, statistically significant differences were found between images acquired with the standard protocol and images acquired with the second protocol without EPP processing (p≤0.05). Furthermore, no statistically significant differences (p>0.05) were found in signal-to-noise ratio, contrast-to-noise ratio or coefficient of variation between the images acquired with the standard protocol and the images processed with EPP. Conclusion: These results show that the EPP algorithm, developed by Siemens, makes it possible to reduce the acquisition time by 50% while maintaining image quality considered sufficient for diagnostic purposes. Besides increasing patient satisfaction, this technology is quite advantageous for the department's workflow.
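The agreement figure reported above (95%, k=0.88) is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch of the statistic for two raters (the example labels below are hypothetical):

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa: inter-observer agreement corrected for chance."""
    n = len(rater1)
    # Observed agreement: fraction of items on which the raters concur.
    po = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from each rater's marginal label frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    pe = sum(c1[label] * c2[label] for label in set(c1) | set(c2)) / n**2
    return (po - pe) / (1 - pe)
```

Kappa lies between -1 and 1, with 1 meaning perfect agreement and 0 meaning agreement no better than chance; values near 0.88 are conventionally read as almost perfect agreement.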

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Whole-body bone scintigraphy is one of the most frequent imaging examinations performed in nuclear medicine. Among other applications, this procedure can provide the diagnosis of bone metastases. In oncology patients, the presence of bone metastases is a strong prognostic indicator of patient longevity; moreover, the presence or absence of bone metastases influences treatment planning, which requires an accurate interpretation of the imaging results. Problem: Given that bone metastasis is considered a severe complication associated with increased morbidity and decreased patient survival, the concept of patient care becomes even more imperative in these situations. The best imaging practices should therefore be implemented in order to obtain the best possible result from the procedure with minimal patient discomfort. One candidate technique to achieve this goal in the specific case of whole-body bone scintigraphy is the reduction of acquisition time; however, the resulting images on their own would be of such reduced quality that the findings could be biased. New techniques have recently emerged, notably in image processing, through which it is possible to generate reduced-count scintigraphic images of quality comparable to that obtained with the standard protocol. Even so, some of these methods remain associated with uncertainties, particularly regarding the preservation of diagnostic confidence after the routine protocols are modified. Objectives: The present work aims to evaluate the performance of the Pixon image-processing algorithm by means of a phantom study, comparing the image quality and detectability provided by unprocessed images with those submitted to this processing technique.
It also aims to evaluate the effect of this algorithm on the reduction of acquisition time. To this end, images obtained with the standard protocol are compared with those acquired using faster protocols and subsequently submitted to the processing method. Material and Methods: This investigation was carried out in the department of Radiology and Nuclear Medicine of the Radboud University Nijmegen Medical Centre, in the Netherlands. A cylindrical phantom containing a set of six spheres of different sizes, suitable for planar imaging, was used. The phantom was prepared with different sphere-to-background activity ratios (4:1, 8:1, 17:1, 22:1, 32:1 and 71:1). For each experimental run, the phantom was then submitted to several image-acquisition protocols with different scan speeds: 8 cm/min, 12 cm/min, 16 cm/min and 20 cm/min. All images were acquired on the same gamma camera, the e.cam Signature Dual Detector System (Siemens Medical Solutions USA, Inc.), using the same technical acquisition parameters except for the speed. Twenty-four images were acquired, all post-processed with Siemens software (Siemens Medical Solutions USA, Inc.) that includes the tool needed to process whole-body scintigraphic images. The reconstruction parameters, set to automatic mode, were the same for each image series. The collected data were analysed through an objective evaluation (using physical image-quality parameters) and a subjective one (by two observers). The statistical analysis was performed using SPSS version 22 for Windows.
Results: The subjective analysis of each activity ratio showed that, in general, sphere detectability increased after the images were processed. Inter-observer agreement for this analysis was substantial, both for unprocessed and for processed images. The physical image-quality parameters were likewise shown to improve after the processing algorithm was applied. Furthermore, comparing the standard images (acquired at 8 cm/min) with those acquired with faster protocols and then processed showed that: images acquired at a scan speed of 12 cm/min can provide improved results, with higher image-quality parameters and detectability; images acquired at 16 cm/min provide results comparable to the standard, with approximately the same image quality and detectability; and images acquired at 20 cm/min yield lower image-quality values as well as reduced detectability. Discussion: The results obtained were also confirmed by a clinical study in an independent investigation in the same department. Fifty-one patients referred with breast and prostate carcinomas were included, with the aim of studying the impact of this technique in clinical practice. The patients underwent the standard protocol and then an additional acquisition at a scan speed of 16 cm/min. After the images were blindly evaluated by three specialist physicians, it was concluded that image quality and detectability were comparable between images, corroborating the results of this investigation.
Conclusion: With regard to reducing acquisition time by applying an image-processing algorithm, it was shown that the 16 cm/min protocol is the limit for increasing the scan speed. After processing, this protocol provides the results closest to those obtained with the standard protocol. Given that this technique was successfully established in clinical practice, it can be concluded that, at least in patients referred with breast and prostate carcinomas, the acquisition time can be halved by doubling the scan speed from 8 to 16 cm/min.
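The trade-off the conclusion describes is driven by Poisson counting statistics: at fixed activity and geometry, doubling the scan speed halves the acquired counts, and relative noise grows as 1/sqrt(N). A back-of-envelope sketch (the reference count value is illustrative, not taken from the study):

```python
import math

def counts_at_speed(counts_ref, speed_ref, speed):
    """Acquired counts scale inversely with scan speed (fixed activity/geometry)."""
    return counts_ref * speed_ref / speed

def relative_noise(counts):
    """Poisson statistics: relative noise = sqrt(N) / N = 1 / sqrt(N)."""
    return 1.0 / math.sqrt(counts)

# Doubling the speed from 8 to 16 cm/min halves the counts ...
n_std = 1000                            # illustrative counts at 8 cm/min
n_fast = counts_at_speed(n_std, 8, 16)  # half as many counts at 16 cm/min
# ... and raises relative noise by a factor of sqrt(2), roughly 1.41,
# which is what the post-processing has to compensate for.
```

This is why 16 cm/min, with its sqrt(2) noise penalty, could still be recovered by processing, while 20 cm/min fell below the acceptable limit.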

Relevância:

10.00% 10.00%

Publicador:

Resumo:

This dissertation takes a critical look at the theory of Connectivism in light of its principles and their implications for the traditional view of learning and knowledge. The thesis was developed through a bibliographic review of the most relevant publications by the main representatives of Connectivism, namely George Siemens and Stephen Downes, always guided by the concern not to present merely another synthesis of the theory, but at the same time a critical view of Connectivism. A learning theory for some, a mere epistemological perspective for others, Connectivism has taken on a growing role in the debate about what we understand by networked learning and its implications for the traditional statuses of knowledge and learning, and even for the roles of educators and students. Recognized by some and criticized by others, Connectivism is still taking its first steps in the development of an innovative epistemological vision, especially with regard to networked sharing and to learning centred on online communities governed by common interests and goals, where self-directed learning is fundamental. But what are the consequences of this new way of viewing learning? To what extent does Connectivism go beyond previous learning theories? Will we view knowledge differently from here on? What is the true reach of MOOCs, which are increasingly in vogue?

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Massive Open Online Courses (MOOCs) may be considered a new form of virtual, technology-enhanced learning environment. Since their first appearance in 2008, the growth in the number of MOOCs has been dramatic. The hype around MOOCs was accompanied by great expectations: 2012 was named the Year of the MOOC, and MOOCs were expected to revolutionise higher education. Two types of MOOC may be distinguished: cMOOCs, as proposed by Siemens and based on his ideas of connectivism, and xMOOCs, developed at institutions such as Stanford and MIT. Although MOOCs have received a great deal of attention, they have also met with criticism. The time has therefore come to reflect critically upon this phenomenon.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Knowing that several diseases cause lesions that are not always visible to the naked eye, this preliminary study in human paleopathology uses a complementary approach from medical imaging, the CT scan, to provide more precise diagnoses. The objective is to test the effectiveness and limits of CT analysis in the study of archaeological specimens. A sample of 55 individuals was selected from the osteological collection of the St. Matthew Protestant cemetery (Quebec City, 1771-1860). A complete macroscopic and CT analysis was then performed on each skeleton. The macroscopic observations consisted of recording about ten criteria, standardized by the reference literature, relating to abnormal manifestations on the surface of the skeleton. The CT scans were performed at the Institut National de la Recherche Scientifique in Quebec City with a Siemens Somatom scanner (Definition AS+ 128). The CT data made it possible to record a series of complementary criteria on the internal structure of the bone (thinning/thickening of the cortex, density variation, etc.). Following the differential diagnosis method, hypotheses or diagnoses were proposed, based mainly on the diagnostic criteria given in paleopathology reference manuals, but also on the clinical literature and the expertise of physicians. The results presented here support that: 1) In 43% of cases, the CT data provided essential information for the pathological diagnosis. This trend is confirmed for some diseases but not others, since certain diagnoses cannot be made without the presence of soft tissue. 2) The spatial distribution of most lesions varies by anatomical region, both macroscopically and on CT.
3) Some types of disease appear to be associated with age and sex, which is supported by the literature. 4) This research also shows that the diagnostic process requires, in 38% of cases, a complementary analysis (e.g. histology, scintigraphy, radiography) to refine the final diagnosis.