895 results for computer-aided instruction
Abstract:
The aim of this thesis is to utilize the technology developed at LUT and to provide an easy-to-use tool for the preliminary design of high-speed solid-rotor induction machines. The computer-aided design tool MathCAD was chosen as the environment for implementing the calculation program. Four versions of the design program have been made, depending on the rotor type. The first rotor type is an axially slitted solid rotor with steel end rings. The next is an axially slitted solid rotor with copper end rings. The third is a solid rotor with deep, rectangular copper bars and end rings (squirrel cage), and the last is a solid rotor with round copper bars and end rings (squirrel cage). Each rotor type has its own particularities, but the general thread of the design is common to all. This paper follows the structure of the calculation program and explains selected features and formulas. Attention is concentrated on the differences between laminated and solid-rotor machine design principles. No deep analysis of the calculation methods is presented; references for all solution methods appearing in the design procedure are given for more detailed study. The thesis takes into account the latest innovations in solid-rotor machine theory: the analytical calculation of the rotor ends follows the latest knowledge in this field, and a correction factor for adjusting the rotor impedance is implemented. The purpose of the created design program is to calculate the preliminary dimensions of the machine from the initial data. The obtained results are not intended for exact machine development; further, more detailed design should be done in a finite element method application. Hence, this thesis provides a practical tool for the preliminary evaluation of high-speed machines with different solid-rotor types.
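To illustrate the kind of preliminary sizing step such a calculation program performs, the hedged Python sketch below applies the classical output-coefficient (D²L) sizing relation to derive rough rotor dimensions from initial data. The numerical values for the output coefficient and the aspect ratio, as well as the function name, are illustrative assumptions and are not taken from the thesis's MathCAD program.

```python
# Hypothetical illustration of one preliminary sizing step for a high-speed
# induction machine, using the classical relation S ~ C0 * D^2 * L * n_s.
# The constants below are assumed typical values, not the thesis's data.

def preliminary_rotor_dimensions(apparent_power_va, speed_rpm,
                                 output_coefficient=120e3,  # VA*s/m^3, assumed
                                 aspect_ratio=2.5):         # L/D, assumed for a high-speed rotor
    """Return a rough rotor diameter and active length (m) from the D^2*L product."""
    n_s = speed_rpm / 60.0                                  # synchronous speed in rev/s
    d2l = apparent_power_va / (output_coefficient * n_s)    # D^2 * L in m^3
    diameter = (d2l / aspect_ratio) ** (1.0 / 3.0)
    length = aspect_ratio * diameter
    return diameter, length

if __name__ == "__main__":
    d, l = preliminary_rotor_dimensions(apparent_power_va=120e3, speed_rpm=12000)
    print(f"rotor diameter ~ {d * 1e3:.0f} mm, active length ~ {l * 1e3:.0f} mm")
```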
Abstract:
OBJECTIVES: to evaluate the intra-observer and inter-observer reproducibility of endometrial volume measurement using three-dimensional (3D) ultrasound and the VOCAL® (Virtual Organ Computer-aided AnaLysis) program. METHODS: a 3D ultrasound dataset of the endometrium was obtained from five infertile volunteer patients with different endometrial volumes. For each 3D dataset, the endometrial volume was calculated using the manual mode at four different rotation steps (30°, 15°, 9°, and 6°) by two different observers. Ten measurements were obtained for each rotation step and by each observer. The difference between means was tested with one-way ANOVA and Tukey's post-test, and reproducibility was assessed with intraclass correlation coefficients. RESULTS: measurements performed with the 30° rotation step were associated with significantly lower means in 3 of the 5 patients. There was no difference between the means obtained with the 15°, 9°, or 6° rotation steps. In none of the evaluations was a difference observed between the means obtained by the two observers. The intraclass correlation coefficients were significantly lower with the 30° rotation step (all below 0.984) than with the other rotation steps (all above 0.996). CONCLUSIONS: the use of rotation steps of 15° or less yielded reproducible measurements of endometrial volume: there was no significant difference between the means obtained by the two observers, together with a high intraclass correlation coefficient (>0.996). The 15° rotation step is recommended, since it takes less time to acquire than 6° or 9°.
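As a hedged illustration of the reproducibility analysis described above, the sketch below computes a one-way ANOVA-based intraclass correlation coefficient, ICC(1,1), from a subjects-by-repeats matrix of volume measurements. The data, the choice of ICC form, and the function name are assumptions for demonstration only; they are not the study's dataset or software.

```python
# Minimal sketch: ICC(1,1) from repeated volume measurements, using the
# one-way ANOVA decomposition. The example data below are synthetic.
import numpy as np

def icc_oneway(measurements):
    """measurements: 2D array, rows = subjects, columns = repeated measurements."""
    m = np.asarray(measurements, dtype=float)
    n_subjects, k = m.shape
    grand_mean = m.mean()
    subject_means = m.mean(axis=1)
    ss_between = k * np.sum((subject_means - grand_mean) ** 2)
    ss_within = np.sum((m - subject_means[:, None]) ** 2)
    ms_between = ss_between / (n_subjects - 1)
    ms_within = ss_within / (n_subjects * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical example: 5 patients, 10 repeated measurements each (cm^3).
rng = np.random.default_rng(0)
true_volumes = np.array([2.1, 3.4, 4.8, 6.0, 7.5])
data = true_volumes[:, None] + rng.normal(0, 0.05, size=(5, 10))
print(f"ICC(1,1) ~ {icc_oneway(data):.3f}")
```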
Abstract:
OBJECTIVE: to evaluate the evolution of embryo volume (EV) between the seventh and tenth weeks of gestation using three-dimensional ultrasound. METHODS: a cross-sectional study was carried out with 63 normal pregnant women between the seventh and tenth weeks. The ultrasound examinations were performed with a volumetric endocavitary transducer. The EV was calculated with the VOCAL (Virtual Organ Computer-aided AnaLysis) method using a rotation angle of 12°, delimiting 15 sequential planes. Means, medians, standard deviations, and maximum and minimum values of the EV were calculated for all gestational ages. To assess the correlation between the EV and the crown-rump length (CRL), a scatter plot was created and the fit was evaluated by the coefficient of determination (R²). Reference intervals for the EV as a function of the CRL were determined with the formula percentile = EV ± K × SD, with K = 1.96. RESULTS: the CRL ranged from 9.0 to 39.7 mm, with a mean of 23.9 mm (±7.9 mm), while the EV ranged from 0.1 to 7.6 cm³, with a mean of 2.7 cm³ (±3.2 cm³). The EV was highly correlated with the CRL, and the best fit was obtained with quadratic regression (EV = 0.165 - 0.055 × CRL + 0.005 × CRL²; R² = 0.853). The mean EV ranged from 0.1 cm³ (-0.3 to 0.5 cm³) to 6.7 cm³ (3.8 to 9.7 cm³) over the CRL interval of 9 to 40 mm. In this interval the EV increased 67-fold, while the CRL increased only 4.4-fold. CONCLUSIONS: the EV is a more sensitive parameter than the CRL for evaluating embryonic growth between the seventh and tenth weeks of gestation.
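The sketch below simply evaluates the quadratic regression reported in the abstract for a given CRL and illustrates the ±1.96 × SD reference band. The SD value used is an assumed placeholder, since the abstract does not report the standard-deviation model, and the function names are illustrative.

```python
# Worked example using the regression reported in the abstract:
# EV = 0.165 - 0.055*CRL + 0.005*CRL^2 (EV in cm^3, CRL in mm), R^2 = 0.853.

def embryo_volume(crl_mm: float) -> float:
    return 0.165 - 0.055 * crl_mm + 0.005 * crl_mm ** 2

def reference_interval(crl_mm: float, sd_cm3: float, k: float = 1.96):
    """Percentile bounds EV +/- K*SD; sd_cm3 is an assumed value for illustration."""
    ev = embryo_volume(crl_mm)
    return ev - k * sd_cm3, ev + k * sd_cm3

if __name__ == "__main__":
    crl = 24.0  # mm, close to the reported mean CRL
    print(f"predicted EV at CRL = {crl} mm: {embryo_volume(crl):.2f} cm^3")
    low, high = reference_interval(crl, sd_cm3=0.8)  # SD assumed for illustration
    print(f"approximate reference interval: {low:.2f} to {high:.2f} cm^3")
```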
Abstract:
In this paper a computer program to model and support product design is presented. The product is represented through a hierarchical structure that allows the user to navigate across the product's components, and it aims at facilitating each step of the detailed design process. A graphical interface was also developed, which visually shows the user the contents of the product structure. Features are used as building blocks for the parts that compose the product, and an object-oriented methodology was used to implement the product structure. Finally, an expert system was also implemented, whose knowledge-base rules help the user design a product that meets design and manufacturing requirements.
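A minimal object-oriented sketch of such a hierarchical, feature-based product structure is shown below. The class and attribute names (Assembly, Part, Feature) and the walk method are illustrative assumptions and are not the paper's actual API.

```python
# Hedged sketch of a feature-based, hierarchical product structure.
# Class names and attributes are illustrative, not taken from the paper.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Feature:
    name: str                       # e.g. "hole", "slot", "boss"
    parameters: dict = field(default_factory=dict)

@dataclass
class Part:
    name: str
    features: List[Feature] = field(default_factory=list)

@dataclass
class Assembly:
    name: str
    children: List["Assembly | Part"] = field(default_factory=list)

    def walk(self, depth: int = 0):
        """Navigate the hierarchy, printing one line per node."""
        print("  " * depth + self.name)
        for child in self.children:
            if isinstance(child, Assembly):
                child.walk(depth + 1)
            else:
                print("  " * (depth + 1) + f"{child.name} ({len(child.features)} features)")

shaft = Part("shaft", [Feature("keyway", {"width_mm": 6})])
housing = Part("housing", [Feature("hole", {"diameter_mm": 30})])
gearbox = Assembly("gearbox", [shaft, housing])
gearbox.walk()
```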
Abstract:
Despite the fact that the literature on Business Intelligence and managerial decision-making is extensive, relatively little effort has been made to research the relationship between them. This particular field of study has become important since the amount of data in the world is growing every second. Companies require capabilities and resources in order to utilize structured and unstructured data from internal and external data sources. The present Business Intelligence technologies enable managers to utilize data effectively in decision-making. Based on the prior literature, the empirical part of the thesis identifies the enablers and constraints in the computer-aided managerial decision-making process. The theoretical part of the thesis provides a preliminary understanding of the research area through a literature review. Key concepts such as Business Intelligence and managerial decision-making are explored by reviewing the relevant literature. Additionally, different data sources as well as data forms are analyzed in further detail. All key concepts are taken into account when the empirical part is carried out. The empirical part obtains an understanding of the real-world situation with respect to the themes that were covered in the theoretical part.
Three selected case companies are analyzed through theory-based statements, which are considered critical prerequisites for successful computer-aided managerial decision-making. The case study analysis, which is part of the empirical section, enables the researcher to examine the relationship between Business Intelligence and managerial decision-making. Based on the findings of the case study analysis, the researcher identifies the enablers and constraints through the case study interviews. The findings indicate that the constraints have a highly negative influence on the decision-making process. In addition, managers are aware of the positive implications that Business Intelligence has for decision-making, but not all possibilities are yet utilized. As the main result of this study, a data-driven framework for managerial decision-making is introduced. This framework can be used when managerial decision-making processes are evaluated and analyzed.
Abstract:
This Master's Thesis is dedicated to the simulation of a new p-type pixel strip detector with an enhanced multiplication effect. The work is done for upgrades of high-energy physics experiments such as the Super Large Hadron Collider, especially for the silicon particle-tracking detectors of the Compact Muon Solenoid. These detectors are used in a very harsh radiation environment and should have good radiation hardness. Device-engineering technology for developing more radiation-hard particle detectors is used to minimize radiation degradation. A new detector structure with an enhanced multiplication effect is proposed in this work. The electric field and electric charge distributions of the conventional and the new p-type detector are studied under reverse voltage bias and irradiation. Finally, the dependence of the anode current on the applied cathode reverse bias voltage under irradiation is obtained. The Silvaco Technology Computer Aided Design (TCAD) software was used for the simulations: Athena was used to create the doping profiles and device structures, and Atlas was used to obtain the electrical characteristics of the studied devices. The program codes for this software are presented in the Appendixes.
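As a hedged illustration of how the simulated current-voltage dependence might be post-processed outside the TCAD environment, the sketch below plots anode current against cathode reverse bias from exported log data. The CSV file names ("conventional_detector_iv.csv", "enhanced_multiplication_iv.csv") and column names are placeholders; the thesis's actual Athena/Atlas decks are in its appendixes and are not reproduced here.

```python
# Hypothetical post-processing of exported I-V data from a TCAD simulation.
# File and column names are placeholders, not the thesis's actual outputs.
import pandas as pd
import matplotlib.pyplot as plt

def plot_iv(csv_path: str, label: str):
    data = pd.read_csv(csv_path)  # expected columns: "cathode_bias_V", "anode_current_A"
    plt.plot(-data["cathode_bias_V"], data["anode_current_A"].abs(), label=label)

plot_iv("conventional_detector_iv.csv", "conventional p-type")
plot_iv("enhanced_multiplication_iv.csv", "enhanced multiplication")
plt.xlabel("Reverse bias voltage (V)")
plt.ylabel("|Anode current| (A)")
plt.yscale("log")
plt.legend()
plt.show()
```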
Abstract:
Electric keyboard instruments and computer-aided music-making are generally based on the piano keyboard, which was developed for a tuning system that is no longer used. An alternative keyboard layout offers at least easier playing, faster learning, new ways to play, and better ergonomics. This thesis explores the development of keyboard instruments and tunings, and different keyboard layouts. This work is preliminary research for an electric keyboard instrument to be implemented later on.
Abstract:
Renewable energy investments play a key role in the energy transition. While studies have suggested that social acceptance may form a barrier to renewable energy investments, the ways in which companies perceive and attempt to gain that acceptance have received little attention. This study aims to fill the gap by exploring how large electric utilities justify their strategic investments in their press releases and how the justifications differ between renewable and non-renewable energy investments. The study is based on legitimacy theory and aims to contribute to the research on legitimation in institutional change. As its research method, the study employs an inductive mixed-method content analysis. The study has two parts: a qualitative content analysis that explores and identifies the themes and legitimation strategies of the press releases, and a quantitative computer-aided analysis that compares renewable and non-renewable energy investments. The sample consists of 396 press releases representing the strategic energy investments of 34 electric utilities from the list of the world's 250 largest and financially most successful energy companies. The data were collected from the period 2010–2014. The study reveals that the most important justifications for strategic energy investments are fit with strategy and environmental and social benefits. Justifications address especially the expectations of the market. Investments in non-renewable energy are justified more, and they use more arguments addressing the properties and performance of power plants, whereas renewable energy investments are legitimized by references to past actions and commonly accepted morals and norms. The findings support the notion that validity-addressing and propriety-addressing legitimation strategies are used differently in stable and unstable institutional settings.
Abstract:
The introduction of computer and communications technology, and particularly the internet, into education has opened up new possibilities for teaching and learning. Courses designed and delivered in an online environment offer the possibility of highly interactive and individually focussed teaching and learning experiences. However, online courses also present new challenges for both teachers and students. A qualitative study was conducted to explore teachers' perceptions about the similarities and differences between teaching in the online and face-to-face (F2F) environments. Focus group discussions were held with 5 teachers; 2 teachers were interviewed in depth. The participants, 3 female and 2 male, were full-time teachers from a large College of Applied Arts & Technology in southern Ontario. Each of them had over 10 years of F2F teaching experience, and each had been involved in the development and teaching of at least one online course. The study focussed on how teaching in the online environment compares with teaching in the F2F environment, what roles teachers and students adopt in each setting, what learning communities mean online and F2F and how they are developed, and how institutional policies, procedures, and infrastructure affect teaching and learning F2F and online. The study was emic in nature; that is, the teachers' words determined the themes identified throughout the study. The factors identified as affecting teaching in an online environment included teacher issues such as course design, motivation to teach online, teaching style, role, characteristics or skills, and strategies. Student issues as perceived by the teachers included learning styles, roles, and characteristics or skills. As well, technology issues such as a reliable infrastructure, clear roles and responsibilities for maintaining the infrastructure, support, and multimedia capability affected teaching online. Finally, administrative policies and procedures, including teacher selection and training, registration and scheduling procedures, intellectual property and workload policies, and the development and communication of a comprehensive strategic plan, were found to have an impact on teaching online. The teachers shared some of the benefits they perceived about teaching online, as well as some of the challenges they had faced and the challenges they perceived students had faced online. Overall, the teachers felt that there were more similarities than differences in teaching between the two environments, with the main differences being the change from F2F verbal interactions involving body language to online written interactions without body language cues, and the fundamental reliance on technology in the online environment. These findings support previous research in online teaching and learning, and add teachers' perspectives on the factors that stay the same and the factors that change when moving from a F2F environment to an online environment.
Abstract:
This qualitative study investigated how a team of 7 hospital educators collaborated to develop e-curriculum units to pilot for a newly acquired learning management system at a large, multisite academic health sciences centre. A case study approach was used to examine how the e-Curriculum Team was structured, how the educators worked together to develop strategies to better utilize e-learning in their own practice, what e-curriculum they chose to develop, and how they determined their priorities for e-curriculum development. It also inquired into how they planned to involve other educators in using e-learning. One set of semistructured interviews with the 6 hospital educators involved in the project, as well as minutes of team meetings and the researcher's journal, were analyzed (the researcher was also a hospital educator on the team). Project management structure, educator support, and organizational pressures on the implementation project feature prominently in the case study. This study suggests that implementation of e-learning will be more successful if (a) educators involved in the development of e-learning curriculum are supported in their role as change agents, (b) the pain of unlearning current educational practice is considered, (c) the limitations of the software being implemented are recognized, (d) time is spent learning about best practice, and (e) the project is protected as much as possible from organizational pressures and distractions.
Abstract:
This qualitative narrative inquiry was driven by my desire to further explore my personal discovery that my utilization of educational technologies in teaching and learning environments seemed to heighten a sense of creativity, which in turn increased reflective practice and authenticity in my teaching. A narrative inquiry approach was used as it offered the opportunity to uncover the deeper meanings of authenticity and reflection as participants' personal experiences were co-constructed and reconstructed in relationship with me and in relationship to a social milieu. To gain further insight into this potential phenomenon, I engaged in 2 conversational interviews with 2 other teachers from an Ontario college in a large urban centre who have utilized educational technologies in their teaching and learning communities, and I maintained a research journal, constructed during the interview process, to record my own emerging narrative accounts, reflections, insights, and further questions. The field texts consisted of transcriptions of the interviews and my reflective journal. Research texts were developed as field texts were listened to multiple times and examined for meanings and themes. The educational technologies that both women focused on in the interviews were digital video of children as they play, learn, and develop, and the use of an audible teacher voice in online courses. The invitation given to students to explore and discover meaning in videos of children as they watched them with the teacher seemed to be a catalyst for authenticity and a sense of synergy in the classroom. The power of the audible teacher voice came through as an essential component in online learning environments, offering students a sense of humanness and connection with the teacher. Relationships in both online and face-to-face classrooms emerged as a necessary and central component of all teaching and learning communities. The theme of paradox also emerged as participants recognized that educational technologies can be used in ways that enhance creativity, authenticity, reflection, and relationships, or in ways that hinder these qualities in the teaching and learning community. Knowledge of the common experiences of college educators who utilize educational technologies, specifically digital video of children to educate early childhood educators, might give meaning and insight to inform the practice of other teachers who seek authentic, reflexive practice in the classroom and in online environments.
Abstract:
Although there is a consensus in the literature on the many uses of the Internet in education, as well as the unique features of the Internet for presenting facts and information, there is no consensus on a standardized method for evaluating Internet-based courseware. Educators rarely have the opportunity to participate in the development of Internet-based courseware, yet they are encouraged to use the technology in their learning environments. This creates a need for summative evaluation methods for Internet-based health courseware. The purpose of this study was to assess evaluative measures for Internet-based courseware. Specifically, two entities were evaluated within the study: (a) the outcome of the Internet-based courseware, and (b) the Internet-based courseware itself. To this end, the Web site www.bodymatters.com was evaluated using two different approaches by two different cohorts. The first approach was a performance appraisal by a group of end-users. A positive, statistically significant change in the students' performance was observed due to the intervention of the Web site. The second approach was a product-oriented evaluation of the Web site with the use of a criterion-based checklist and an open-ended comments section. The findings indicate that a summative, criterion-based evaluation is best completed by a multidisciplinary team. The findings also indicated that the two different cohorts reported different product-oriented appraisals of the Web site. The current research confirmed previous research, which found that a poor evaluation of a Web site by experts bore no relationship to whether or not the end-users' performance improved due to the intervention of the Web site.
Abstract:
This study examined the efficacy of providing four Grade 7 and 8 students with reading difficulties with explicit instruction in the use of reading comprehension strategies while using text-reader software. Specifically, the study explored participants' combined use of a text-reader and a question-answering comprehension strategy during a 6-week instructional program. Using a qualitative case study methodology, participants' experiences using text-reader software, with the presence of explicit instruction in evidence-based reading comprehension strategies, were examined. The study involved three phases: (a) the first phase consisted of individual interviews with the participants and their parents; (b) the second phase consisted of a nine-session course; and (c) the third phase consisted of individual exit interviews and a focus group discussion. After the data collection phases were completed, data were analyzed and coded for emerging themes, with quantitative measures of participants' reading performance used as descriptive data. The data suggested that assistive technology can serve as an instructional "hook", motivating students to engage actively in the reading process, especially when accompanied by explicit strategy instruction. Participants' experiences also reflected the development of strategy use, the use of text-reader software, and the importance of social interactions in developing reading comprehension skills. The findings of this study support the view that the integration of instruction using evidence-based practices is an important and vital component of including text-reader software as part of students' educational programming. Also, the findings from this study can be extended to develop in-class programming for students using text-reader software.
Abstract:
The Dudding group is interested in the application of Density Functional Theory (DFT) to developing asymmetric methodologies, and thus the focus of this dissertation is on the integration of these approaches. Several interrelated subsets of computer-aided design and implementation in catalysis have been addressed during the course of these studies. The first aim rested upon the advancement of methodologies for the synthesis of biologically active C(1)-chiral 3-methylene-indan-1-ols, which in practice led to the use of a sequential asymmetric Yamamoto-Sakurai-Hosomi allylation/Mizoroki-Heck reaction sequence. An important aspect of this work was the utilization of ortho-substituted arylaldehyde reagents, which are known to be a problematic class of substrates for existing asymmetric allylation approaches. The second phase of my research program led to the further development of asymmetric allylation methods using o-arylaldehyde substrates for the synthesis of chiral C(3)-substituted phthalides. Apart from the de novo design of these chemistries in silico, which notably utilized water-tolerant, inexpensive, and relatively environmentally benign indium metal, this work represented the first computational study of a stereoselective indium-mediated process. Following from these discoveries was the advent of a related, yet catalytic, Ag(I)-catalyzed approach for preparing C(3)-substituted phthalides that from a practical standpoint was complementary in many ways. Not only did this new methodology build upon my earlier work with the integrated (experimental/computational) use of Ag(I)-catalyzed asymmetric methods in synthesis, it provided fundamental insight, arrived at through DFT calculations, regarding the Yamamoto-Sakurai-Hosomi allylation. The development of ligands for unprecedented asymmetric Lewis base catalysis, especially asymmetric allylations using silver and indium metals, followed as a natural extension of these earlier discoveries. Forthcoming as well was the advancement of a family of disubstituted (N-cyclopropenium guanidine/N-imidazoliumyl substituted cyclopropenylimine) nitrogen adducts that has provided fundamental insight into chemical bonding and offered an unprecedented class of phase-transfer catalysts (PTC) with far-reaching potential. Salient features of these disubstituted nitrogen species are the unprecedented finding of a cyclopropenium-based C-H···π(aryl) interaction and the presence of a highly dissociated anion, which projected them to serve as catalysts promoting fluorination reactions. Attracted by the timely development of these disubstituted nitrogen adducts, my last studies as a PhD scholar addressed the utility of one of the synthesized disubstituted nitrogen adducts as a valuable catalyst for the benzylation of the Schiff base N-(diphenylmethylene)glycine ethyl ester. Additionally, the catalyst was applied to benzylic fluorination; emerging from this exploration was the successful fluorination of benzyl bromide and its derivatives in high yields. A notable feature of this protocol is column-free purification of the product and recovery of the catalyst for use in a further reaction sequence.
Abstract:
Peripheral artery disease (PAD) manifests as a reduction (stenosis) of the lumen of the arteries of the lower limbs. It is caused by atherosclerosis, an accumulation of foam cells, fat, calcium, and cellular debris in the arterial wall, generally at bifurcations and branches. PAD can also be caused by other associated factors such as inflammation, anatomical malformation and, in rare cases at the level of the iliac and femoral arteries, fibromuscular dysplasia. Ultrasound imaging is the first-line means of diagnosing PAD. The clinical literature reports that, at the level of the femoral artery, Doppler ultrasound shows a sensitivity of 80 to 98% and a specificity of 89 to 99% for detecting a stenosis greater than 50%. However, Doppler ultrasound does not allow mapping of all the arteries of the lower limbs. Moreover, 3D reconstruction from 2D ultrasound images of arteries affected by PAD is strongly operator-dependent because of the great variability of the measurements made by clinicians during the examination. To plan a surgical intervention, clinicians use computed tomography angiography (CTA), magnetic resonance angiography (MRA), and digital subtraction angiography (DSA). These modalities do perform very well. CTA shows great accuracy in the detection and evaluation of stenoses greater than 50%, with a sensitivity of 92 to 97% and a specificity between 93 and 97%. On the other hand, it is ionizing (x-rays) and invasive because of the contrast agent, which can cause nephropathy. Contrast-enhanced MRA (CE-MRA) is now the most widely used. It offers a sensitivity of 92 to 99.5% and a specificity between 64 and 99%. However, it underestimates stenoses and can also cause nephropathy in rare cases; in addition, patients with stents or metallic implants, or who are claustrophobic, are excluded from this type of examination. DSA performs very well but is invasive and ionizing. Today, 3D ultrasound imaging (3D US) has become widespread, especially in obstetrics and echocardiography. In angiography, 3D ultrasound imaging makes it possible to compute plaque volume, which allows follow-up of the evolution of atheromatous plaque in the vessels. Intravascular ultrasound (IVUS) is a technique that measures this volume; however, it is invasive, expensive, and risky. In vivo studies have shown that 3D US can quantify plaque at the level of the carotid artery and characterize the 3D geometry of anastomoses in peripheral arteries. However, these systems only work over short distances; consequently, they are not suited to examination of the femoral artery, because of its length and tortuous shape. Interest in medical robotics dates back to the 1970s. Since then, several medical robots have been proposed for surgery, therapy, and diagnosis. In the case of arterial diagnosis, only two prototypes have been proposed, and neither has been commercialized. Hippocrate was the first master/slave robot designed for examinations of short arterial segments (carotid). It is composed of an arm with 6 degrees of freedom (DOF) suspended above the patient on a rigid base.
From this prototype, a controller automating the robot's movements through feedback from the ultrasound images was designed and tested on phantoms. The second is the British Columbia robot, designed for remote examinations of the carotid artery; the movement of the probe is servo-controlled by feedback from the US images. The published work with both robots is limited to the carotid artery. In order to examine a long arterial segment, a robotic US system was designed in our laboratory. The system has two operating modes, the teach/replay mode (see Appendix 3) and the free user-command mode. In the latter mode, the user can implement custom programs, such as those used in this project, to control the robot's movements. The goal of this project was to demonstrate the performance of this robotic system under conditions close to the clinical context using the free user-command mode. Two objectives were pursued: (1) to evaluate in vitro the automatic tracking and real-time 3D reconstruction of an artery using three phantoms with realistic geometries; and (2) to evaluate in vivo the capability of this robotic imaging system for real-time 3D mapping of a normal femoral artery. For the first objective, the 3D US reconstruction was compared with the CAD (computer-aided design) files of the phantoms. In addition, for the third phantom, the 3D US reconstruction was compared with its CTA reconstruction, considered the reference examination for evaluating PAD. This thesis comprises five chapters. In the first chapter, PAD is explained; in the second and third chapters, 3D ultrasound imaging and medical robotics are developed. The fourth chapter is devoted to the presentation of an article entitled "A robotic ultrasound scanner for automatic vessel tracking and three-dimensional reconstruction of B-mode images", which summarizes the results obtained in this master's project. A general discussion concludes this thesis. The article entitled "A 3D ultrasound imaging robotic system to detect and quantify lower limb arterial stenoses: in vivo feasibility" by Marie-Ange Janvier et al. in Appendix 3 will also allow the reader to better understand our robotic system. My contribution to this article was the acquisition of the B-mode images, the 3D reconstruction, and the analysis of the results for the healthy patient.
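As a hedged illustration of how a 3D US reconstruction might be compared with a CAD or CTA reference geometry, the sketch below computes mean and maximum nearest-neighbour surface distances between two point clouds. The synthetic cylinder data, the function name, and the choice of metric are assumptions for illustration only, not the project's actual validation code.

```python
# Minimal sketch: compare a reconstructed vessel surface with a reference
# surface (e.g. from a CAD model or CTA) via nearest-neighbour distances.
# The synthetic point clouds below are placeholders for illustration.
import numpy as np
from scipy.spatial import cKDTree

def surface_distances(reconstruction_pts, reference_pts):
    """Return (mean, max) distance from reconstruction points to the reference surface."""
    tree = cKDTree(reference_pts)
    d, _ = tree.query(reconstruction_pts)
    return d.mean(), d.max()

# Synthetic example: a straight cylindrical "vessel" and a slightly noisy copy.
theta = np.linspace(0, 2 * np.pi, 100)
z = np.linspace(0, 100, 200)          # mm along the vessel axis
tt, zz = np.meshgrid(theta, z)
radius = 4.0                           # mm
reference = np.column_stack([radius * np.cos(tt).ravel(),
                             radius * np.sin(tt).ravel(),
                             zz.ravel()])
rng = np.random.default_rng(1)
reconstruction = reference + rng.normal(0, 0.2, size=reference.shape)

mean_d, max_d = surface_distances(reconstruction, reference)
print(f"mean distance ~ {mean_d:.2f} mm, max distance ~ {max_d:.2f} mm")
```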