965 results for Web-Assisted Error Detection
Abstract:
The SiC optical processor for error detection and correction is realized using a double pin/pin a-SiC:H photodetector with front and back biased optical gating elements. Data show that the background acts as a selector that picks one or more states by splitting portions of the multiple input optical signals across the front and back photodiodes. Boolean operations such as exclusive OR (EXOR) and three-bit addition are demonstrated optically with a combination of such switching devices: when one or all of the inputs are present the output is amplified and the system behaves as an XOR gate representing the SUM, while when two or three inputs are on, the system acts as an AND gate indicating the presence of the CARRY bit. Additional parity logic operations are performed using the four incoming pulsed communication channels, which are transmitted and checked for errors together. As a simple example of this approach, we describe an all-optical processor for error detection and correction and then provide an experimental demonstration of this fault-tolerant reversible system in emerging nanotechnology.
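The logical behavior described above (SUM high when one or all three inputs are on, CARRY high when two or three are on, plus parity over four channels) can be modelled in software as an ordinary full adder and parity check. This is a minimal illustrative sketch of the truth tables the optical devices implement, not a model of the photodetector physics:

```python
def full_add(a, b, c):
    """Three-bit optical addition as described: XOR gives the SUM,
    majority (AND-like) gives the CARRY."""
    s = a ^ b ^ c                          # SUM: high when one or all three inputs are on
    carry = (a & b) | (a & c) | (b & c)    # CARRY: high when two or three inputs are on
    return s, carry

def parity(channels):
    """Parity over the four incoming pulsed channels: XOR of all bits.
    A mismatch against the transmitted parity bit flags an error."""
    p = 0
    for bit in channels:
        p ^= bit
    return p
```

For example, `full_add(1, 1, 0)` yields `(0, 1)`: no SUM, CARRY present, matching the AND-gate regime of the device.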
Abstract:
Optimal behavior relies on flexible adaptation to environmental requirements, notably based on the detection of errors. The impact of error detection on subsequent behavior typically manifests as a slowing down of RTs following errors. Precisely how errors impact the processing of subsequent stimuli and in turn shape behavior remains unresolved. To address these questions, we used an auditory spatial go/no-go task in which continual feedback informed participants of whether they were too slow. We contrasted auditory-evoked potentials to left-lateralized go and right no-go stimuli as a function of performance on the preceding go stimuli, generating a 2 × 2 design with preceding performance (fast hit [FH], slow hit [SH]) and stimulus type (go, no-go) as within-subject factors. SHs were followed by further SHs on subsequent trials more often than FHs were, supporting our assumption that SHs engaged effects similar to errors. Electrophysiologically, auditory-evoked potentials modulated topographically as a function of preceding performance at 80-110 msec post-stimulus onset and then as a function of stimulus type at 110-140 msec, indicative of changes in the underlying brain networks. Source estimations revealed stronger activity of prefrontal regions to stimuli following successful than error trials, followed by a stronger response of parietal areas to no-go than go stimuli. We interpret these results in terms of a shift from a fast automatic to a slow controlled form of inhibitory control induced by the detection of errors, manifesting during low-level integration of task-relevant features of subsequent stimuli, which in turn influences response speed.
Abstract:
Adjusting behavior following the detection of inappropriate actions allows flexible adaptation to task demands and environmental contingencies during goal-directed behaviors. Post-error behavioral adjustments typically consist in adopting a more cautious response mode, which manifests as a slowing down of response speed. Although converging evidence implicates the dorsolateral prefrontal cortex (DLPFC) in post-error behavioral adjustment, whether and when the left or right DLPFC is critical for post-error slowing (PES), as well as the underlying brain mechanisms, remain highly debated. To resolve these issues, we used single-pulse transcranial magnetic stimulation (TMS) in healthy human adults to disrupt the left or right DLPFC selectively at various delays within the 30-180 ms interval following false alarm (FA) commissions, while participants performed a standard visual go/no-go task. PES significantly increased after TMS disruption of the right, but not the left, DLPFC at 150 ms post-FA response. We discuss these results in terms of an involvement of the right DLPFC in reducing the detrimental effects of error detection on subsequent behavioral performance, as opposed to implementing an adaptive error-induced slowing down of response speed.
Abstract:
The general packet radio service (GPRS) has been developed to allow packet data to be transported efficiently over an existing circuit-switched radio network, such as GSM. The main application of GPRS is transporting Internet protocol (IP) datagrams from web servers (for telemetry or for mobile Internet browsers). Four GPRS baseband coding schemes are defined, offering a trade-off between requested data rates and propagation channel conditions. However, data rates of the order of >100 kbit/s are only achievable if the simplest coding scheme (CS-4) is used, which offers little error detection and correction (EDC) and therefore requires excellent SNR, and if the receiver hardware is capable of full duplex, which is not currently available in the consumer market. A simple EDC scheme to improve the GPRS block error rate (BLER) performance is presented, aimed particularly at CS-4, although gains are also seen in the other coding schemes. Every GPRS radio block corrected by the EDC scheme does not need to be retransmitted, releasing bandwidth in the channel and improving the user's application data rate. As GPRS requires intensive processing in the baseband, a viable field programmable gate array (FPGA) solution is presented in this paper.
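The throughput argument above (corrected blocks need no retransmission, so bandwidth is freed) can be captured by a first-order model. This is an illustrative sketch only; the numbers and the function are assumptions, not figures from the paper:

```python
def effective_rate(peak_rate_kbps, bler, fraction_corrected):
    """First-order throughput model for a selective-retransmission link:
    blocks repaired by the EDC scheme no longer count against the
    block error rate, so the residual BLER (and hence the retransmission
    overhead) shrinks proportionally."""
    residual_bler = bler * (1.0 - fraction_corrected)
    return peak_rate_kbps * (1.0 - residual_bler)
```

For instance, if a CS-4 link with a 10% BLER had half of its erroneous blocks repaired by the EDC scheme, `effective_rate(100.0, 0.10, 0.5)` would rise from 90 to 95 kbit/s under this simplified model.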
Abstract:
This Thesis work is developed in the framework of distributed execution of mobile services and contributes to the definition and development of the concept of the prosumer user. The prosumer user is characterized by using his mobile phone to create, provide and execute services.
This new user model contributes to the advancement of the information society, as the prosumer is transformed from a producer of content into a producer of services (the latter consisting of content plus the logic to access, process and represent it). The overall goal of this Thesis work is to provide a model for the creation, distribution and execution of services in the mobile environment that enables non-programmers (prosumer users) who are experts in a given domain to create and execute their own applications and services. For this purpose I define, develop and implement methodologies, processes, algorithms and mechanisms, adapted to specific domains, to build distributed environments for the execution of mobile services for prosumer users. The provision of creation tools adapted to non-expert users is a current trend being developed in different research works. However, no service development methodology has been proposed that involves the prosumer user in the process of design, development, implementation and validation of services. This Thesis work studies innovative methodologies and technologies related to co-creation and relies on this analysis to define and validate a methodological approach that enables the user to be responsible for creating final services. Since mobile prosumer environments are a specific case of environments for distributed execution of mobile services, this Thesis work investigates service adaptation, distribution, coordination and resource access techniques, identifying as requirements the challenges of such environments and the characteristics of the participating users. I contribute to service adaptation by defining a variability model that supports the interdependence of user personalization decisions, incorporating guiding and error detection mechanisms. Service distribution is implemented using decomposition techniques based on SPQR trees, quantifying the impact of separating any service across different domains.
Considering the communication level for the coordination of distributed service executions, I have identified several problems, such as link losses, connections, disconnections and discovery of participants, which I solve using dissemination techniques based on publish-subscribe communication models and gossip algorithms. To achieve flexible distributed service execution in mobile environments, I support adaptation to changes in the availability of resources, while providing a communication infrastructure for uniform and efficient access to resources. Experimental validations have been conducted to assess the feasibility of the proposed solutions, defining relevant application scenarios (the new intelligent universe, service prosumerization in hospitals, and emergency situations in the web of things).
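The gossip-based dissemination mentioned above can be sketched with a small push-gossip simulation: each informed node forwards the message to a few random peers per round, which tolerates individual link losses and disconnections. This is a generic textbook sketch under assumed parameters, not the thesis's protocol:

```python
import random

def gossip_rounds(n_nodes, fanout=3, seed=0):
    """Simulate push-style gossip dissemination: starting from node 0,
    every informed node forwards the message to `fanout` random peers
    each round; returns the number of rounds until all nodes are informed."""
    rng = random.Random(seed)  # fixed seed keeps the simulation reproducible
    informed = {0}
    rounds = 0
    while len(informed) < n_nodes:
        targets = set()
        for _ in informed:
            targets.update(rng.sample(range(n_nodes), fanout))
        informed |= targets
        rounds += 1
    return rounds
```

Because each round roughly multiplies the informed set, full dissemination typically takes on the order of log(n) rounds, which is what makes gossip attractive for mobile environments with churn.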
Abstract:
Master's in Radiotherapy
Abstract:
Controller area network (CAN) is a fieldbus network suitable for small-scale distributed computer controlled systems (DCCS), being appropriate for sending and receiving short real-time messages at speeds up to 1 Mbit/s. Several studies are available on how to guarantee the real-time requirements of CAN messages, providing pre-runtime schedulability conditions to guarantee the real-time communication requirements of DCCS traffic. It is usually considered that CAN guarantees atomic multicast properties by means of its extensive error detection/signaling mechanisms. However, there are some error situations in which messages can be delivered in duplicate, or delivered only by a subset of the receivers, leading to inconsistencies in the supported applications. In order to prevent such inconsistencies, a middleware for reliable communication in CAN is proposed, taking advantage of CAN's synchronous properties to minimize the runtime overhead. This middleware comprises a set of atomic multicast and consolidation protocols upon which the reliable communication properties are guaranteed. The related timing analysis demonstrates that, in spite of the extra stack of protocols, the real-time properties of CAN are preserved, since the predictability of message transfer is guaranteed.
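One of the inconsistencies named above, duplicate delivery after an error-signalled retransmission, is commonly handled at the receiver with a sequence number carried in the payload. The sketch below illustrates that generic idea only; it is not the middleware or protocol proposed in the paper:

```python
class DuplicateFilter:
    """Receiver-side duplicate suppression: a per-stream sequence number
    in the message payload lets a receiver discard the duplicates that
    CAN's error-signalling retransmissions can produce."""

    def __init__(self):
        self.last_seq = {}  # stream id -> sequence number of last accepted message

    def deliver(self, stream_id, seq, payload):
        """Return the payload if the message is new, or None for a duplicate."""
        if self.last_seq.get(stream_id) == seq:
            return None            # same sequence number seen: retransmitted copy
        self.last_seq[stream_id] = seq
        return payload
```

A single toggle bit suffices as `seq` when at most one retransmission of a given message can be outstanding, which keeps the payload overhead minimal on CAN's short frames.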
Abstract:
Task scheduling is one of the key mechanisms to ensure timeliness in embedded real-time systems. Such systems often need to execute not only application tasks but also some urgent routines (e.g. error-detection actions, consistency checkers, interrupt handlers) with minimum latency. Although fixed-priority schedulers such as Rate-Monotonic (RM) are in line with this need, they usually make a low processor utilization available to the system. Moreover, this availability usually decreases with the number of considered tasks. If dynamic-priority schedulers such as Earliest Deadline First (EDF) are applied instead, high system utilization can be guaranteed, but the minimum latency for executing urgent routines may not be ensured. In this paper we describe a scheduling model according to which urgent routines are executed at the highest priority level and all other system tasks are scheduled by EDF. We show that the guaranteed processor utilization for the assumed scheduling model is at least as high as the one provided by RM for two tasks, namely 2(√2 − 1). Seven polynomial-time tests for checking the system timeliness are derived and proved correct. The proposed tests are compared against each other and against an exact but exponential running time test.
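The utilization figure quoted above is the classical Liu-and-Layland RM bound n(2^(1/n) − 1) evaluated at n = 2, giving 2(√2 − 1) ≈ 0.828. A quick check of that arithmetic (the bound only, not the paper's seven tests):

```python
def rm_bound(n):
    """Least upper bound on processor utilization guaranteeing RM
    schedulability of n periodic tasks: n * (2**(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

# rm_bound(1) is 1.0 (a single task may use the whole processor);
# rm_bound(2) is 2*(sqrt(2) - 1), approximately 0.828, the value cited above;
# as n grows the bound decreases toward ln(2), approximately 0.693.
```

This illustrates the trade-off described in the abstract: RM's guaranteed utilization shrinks with the task count, whereas EDF guarantees full utilization but gives no latency bound for the urgent routines.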
Abstract:
Dissertation submitted for the degree of Master in Computer Engineering
Abstract:
This dissertation submits a proposal for an internal quality control program for thyroid hormones, according to best quality practices.
The whole philosophy of quality plays a central role in proving the accuracy and reliability of the analytical results, a basic prerequisite for decision making. In this context the elaboration of an appropriate quality control program is essential. Quality control allows the monitoring of the performance of all materials, equipment, instruments and analytical methods, as well as the creation of warning signals indicating the need for corrective actions, to prevent the release of non-compliant results. It also indicates the need for improvements in processes and operating activities, and makes the staff aware that quality control is a duty to the client and promotes confidence in the results. Internal quality control covers all procedures undertaken by a laboratory for the continuous evaluation of its performance. Its purpose is to ensure the consistency of the daily results and their compliance with defined criteria, assessing the precision of the tests and indicating the moment to take corrective action when a nonconformity appears. Following best quality practices, the values of the internal quality control for the thyroid hormones, and their acceptability limits and criteria (rules), will be calculated based on the relationship between analytical performance and the maximum allowable error. The aim is thus to optimize the performance of the internal quality control, improving its capacity for error detection and identification.
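Acceptability rules of the kind described, control limits derived from the assay's mean and standard deviation, are commonly expressed as Levey-Jennings/Westgard-style checks. The sketch below shows the widely used 1-2s warning and 1-3s rejection rules as a minimal illustration; the dissertation's actual limits are derived from the maximum allowable error and may differ:

```python
def qc_flag(value, mean, sd):
    """Classify a control measurement against common internal-QC limits:
    beyond ±3 SD -> reject the run (1-3s rule), beyond ±2 SD -> warning
    (1-2s rule), otherwise accept."""
    z = (value - mean) / sd    # deviation in units of the assay's SD
    if abs(z) > 3:
        return "reject"
    if abs(z) > 2:
        return "warning"
    return "accept"
```

Plotting each day's control value as its `z` on a Levey-Jennings chart makes trends and systematic shifts visible before the 3 SD rejection limit is ever reached.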
Abstract:
The intensive and prolonged use of high-performance computers to run computationally intensive applications, together with the large number of components they contain, drastically increases the probability of faults occurring during operation. The goal of this work is to address fault tolerance in high-performance interconnection networks, starting from the design of fault-tolerant routing policies. We aim to tolerate a given number of link and node faults, considering their impact factors and probability of occurrence. To do so, we exploit the redundancy of existing communication paths, starting from adaptive routing approaches capable of fulfilling the four phases of fault tolerance: error detection, damage containment, error recovery, and fault treatment with continuity of service. Experiments show a performance degradation below 5%. Future work will address the loss of in-transit information.
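The error-recovery phase relies on the path redundancy named above: when a link fails, traffic is rerouted over the surviving links. A minimal graph-search sketch of that idea follows; it illustrates rerouting around failed links generically and is not the thesis's routing policy:

```python
from collections import deque

def route_avoiding_faults(adj, src, dst, failed_links):
    """Breadth-first search for a path from src to dst that avoids the
    given failed links, exploiting the redundancy of communication paths."""
    failed = {frozenset(e) for e in failed_links}
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:                       # reconstruct the healthy path
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and frozenset((u, v)) not in failed:
                prev[v] = u
                queue.append(v)
    return None  # destination unreachable over the remaining healthy links
```

On a 4-node ring, losing the direct link (0, 1) still leaves the route 0 → 3 → 2 available, which is precisely the redundancy an adaptive policy exploits.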
Abstract:
Executive control refers to a set of abilities enabling us to plan, control and implement our behavior to rapidly and flexibly adapt to environmental requirements. These adaptations notably involve the suppression of intended or ongoing cognitive or motor processes, a skill referred to as "inhibitory control". To implement efficient executive control of behavior, one must monitor one's performance following errors so as to adjust behavior accordingly. Deficits in inhibitory control have been associated with the emergence of a wide range of psychiatric disorders, from drug addiction to attention deficit/hyperactivity disorders. Inhibitory control deficits could, however, be remediated: the brain has the remarkable ability to reorganize following training to allow for behavioral improvements, a mechanism referred to as neural and behavioral plasticity. Here, our aim is to investigate training-induced plasticity in inhibitory control and propose a model of inhibitory control explaining the spatio-temporal brain mechanisms supporting inhibitory control processes and their plasticity. In the two studies entitled "Brain dynamics underlying training-induced improvement in suppressing inappropriate action" (Manuel et al., 2010) and "Training-induced neuroplastic reinforcement of top-down inhibitory control" (Manuel et al., 2012c), we investigated the neurophysiological and behavioral changes induced by inhibitory control training, using two different tasks and populations of healthy participants. We report that different inhibitory control trainings either developed automatic/bottom-up inhibition in parietal areas or reinforced controlled/top-down inhibitory control in frontal brain regions. We discuss the results of both studies in the light of a model of fronto-basal inhibition processes.
In "Spatio-temporal brain dynamics mediating post-error behavioral adjustments" (Manuel et al., 2012a), we investigated how error detection modulates the processing of subsequent stimuli and in turn impacts behavior. We showed that during early integration of stimuli, the activity of prefrontal and parietal areas is modulated according to previous performance and impacts post-error behavioral adjustments. We discuss these results in terms of a shift from an automatic to a controlled form of inhibition induced by the detection of errors, which in turn influences response speed. In "Inter- and intra-hemispheric dissociations in ideomotor apraxia: a large-scale lesion-symptom mapping study in subacute brain-damaged patients" (Manuel et al., 2012b), we investigated ideomotor apraxia, a deficit in performing pantomime gestures of object use, and identified the anatomical correlates of distinct ideomotor apraxia error types in 150 subacute brain-damaged patients. Our results reveal a left intra-hemispheric dissociation for different pantomime error types, with an unspecific role for inferior frontal areas.
Abstract:
In vivo dosimetry is a way to verify the radiation dose delivered to the patient by measuring the dose, generally during the first fraction of the treatment. It is the only dose delivery control based on a measurement performed during the treatment. In today's radiotherapy practice, the dose delivered to the patient is planned using 3D dose calculation algorithms and volumetric images representing the patient. Because of the high accuracy and precision necessary in radiation treatments, national and international organisations such as the ICRU and the AAPM recommend the use of in vivo dosimetry; it is also mandatory in some countries, such as France. Various in vivo dosimetry methods have been developed over the past years. These methods provide point, line, plane or 3D dose controls. 3D in vivo dosimetry provides the most information about the dose delivered to the patient, compared with 1D and 2D methods. However, to our knowledge, it is generally not yet applied routinely to patient treatments. The aim of this PhD thesis was to determine whether it is possible to reconstruct the 3D delivered dose from transmitted beam measurements in the context of narrow beams. An iterative dose reconstruction method has been described and implemented. The iterative algorithm includes a simple 3D dose calculation algorithm based on the convolution/superposition principle. The methodology was applied to narrow beams produced by a conventional 6 MV linac. The transmitted dose was measured using an array of ion chambers, so as to simulate the linear nature of a tomotherapy detector. We showed that the iterative algorithm converges quickly and reconstructs the dose with good agreement (at least 3% / 3 mm locally), which is within the 5% recommended by the ICRU. Moreover, phantom measurements demonstrated that the proposed method allows the detection of some set-up errors and interfraction geometry modifications.
We also discussed the limitations of 3D dose reconstruction for dose delivery error detection. Afterwards, stability tests of the tomotherapy built-in onboard MVCT detector were performed in order to evaluate whether such a detector is suitable for 3D in vivo dosimetry. The detector showed short- and long-term stability comparable to other imaging devices, such as the EPIDs also used for in vivo dosimetry. Subsequently, a methodology for dose reconstruction using the tomotherapy MVCT detector is proposed in the context of static irradiations. This manuscript is composed of two articles and a script providing further information related to this work. In the latter, the first chapter introduces the state of the art of in vivo dosimetry and adaptive radiotherapy, and explains why we are interested in performing 3D dose reconstructions. In chapter 2 the dose calculation algorithm implemented for this work is reviewed, with a detailed description of the physical parameters needed for calculating 3D absorbed dose distributions. The tomotherapy MVCT detector used for transit measurements and its characteristics are described in chapter 3. Chapter 4 contains a first article, entitled '3D dose reconstruction for narrow beams using ion chamber array measurements', which describes the dose reconstruction method and presents tests of the methodology on phantoms irradiated with 6 MV narrow photon beams. Chapter 5 contains a second article, 'Stability of the Helical TomoTherapy HiArt II detector for treatment beam irradiations'. A dose reconstruction process specific to the use of the tomotherapy MVCT detector is presented in chapter 6. A discussion and perspectives of the PhD thesis are presented in chapter 7, followed by a conclusion in chapter 8. The tomotherapy treatment device is described in appendix 1, and an overview of 3D conformal and intensity-modulated radiotherapy is presented in appendix 2.
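The iterative reconstruction idea, repeatedly correcting an estimate until the forward-computed transmitted signal matches the measurement, can be sketched generically as a multiplicative update. This is an illustrative toy only: `forward` stands in for the convolution/superposition transmitted-dose model, and nothing here reproduces the thesis's actual algorithm:

```python
def iterative_reconstruction(measured, forward, x0, n_iter=20):
    """Multiplicative iterative scheme: scale each element of the current
    estimate by the ratio of the measured to the forward-computed signal,
    repeating until the forward model reproduces the measurement."""
    x = list(x0)
    for _ in range(n_iter):
        computed = forward(x)
        x = [xi * (mi / ci) if ci > 0 else xi
             for xi, mi, ci in zip(x, measured, computed)]
    return x
```

With a linear, element-wise forward model the scheme converges in a single step; with scatter-coupled convolution/superposition models several iterations are needed, which is consistent with the fast convergence reported above.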
- La dosimétrie in vivo est une technique utilisée pour vérifier la dose délivrée au patient en faisant une mesure, généralement pendant la première séance du traitement. Il s'agit de la seule technique de contrôle de la dose délivrée basée sur une mesure réalisée durant l'irradiation du patient. La dose au patient est calculée au moyen d'algorithmes 3D utilisant des images volumétriques du patient. En raison de la haute précision nécessaire lors des traitements de radiothérapie, des organismes nationaux et internationaux tels que l'ICRU et l'AAPM recommandent l'utilisation de la dosimétrie in vivo, qui est devenue obligatoire dans certains pays dont la France. Diverses méthodes de dosimétrie in vivo existent. Elles peuvent être classées en dosimétrie ponctuelle, planaire ou tridimensionnelle. La dosimétrie 3D est celle qui fournit le plus d'information sur la dose délivrée. Cependant, à notre connaissance, elle n'est généralement pas appliquée dans la routine clinique. Le but de cette recherche était de déterminer s'il est possible de reconstruire la dose 3D délivrée en se basant sur des mesures de la dose transmise, dans le contexte des faisceaux étroits. Une méthode itérative de reconstruction de la dose a été décrite et implémentée. L'algorithme itératif contient un algorithme simple basé sur le principe de convolution/superposition pour le calcul de la dose. La dose transmise a été mesurée à l'aide d'une série de chambres à ionisations alignées afin de simuler la nature linéaire du détecteur de la tomothérapie. Nous avons montré que l'algorithme itératif converge rapidement et qu'il permet de reconstruire la dose délivrée avec une bonne précision (au moins 3 % localement / 3 mm). De plus, nous avons démontré que cette méthode permet de détecter certaines erreurs de positionnement du patient, ainsi que des modifications géométriques qui peuvent subvenir entre les séances de traitement. 
Nous avons discuté les limites de cette méthode pour la détection de certaines erreurs d'irradiation. Par la suite, des tests de stabilité du détecteur MVCT intégré à la tomothérapie ont été effectués, dans le but de déterminer si ce dernier peut être utilisé pour la dosimétrie in vivo. Ce détecteur a démontré une stabilité à court et à long terme comparable à d'autres détecteurs tels que les EPIDs également utilisés pour l'imagerie et la dosimétrie in vivo. Pour finir, une adaptation de la méthode de reconstruction de la dose a été proposée afin de pouvoir l'implémenter sur une installation de tomothérapie. Ce manuscrit est composé de deux articles et d'un script contenant des informations supplémentaires sur ce travail. Dans ce dernier, le premier chapitre introduit l'état de l'art de la dosimétrie in vivo et de la radiothérapie adaptative, et explique pourquoi nous nous intéressons à la reconstruction 3D de la dose délivrée. Dans le chapitre 2, l'algorithme 3D de calcul de dose implémenté pour ce travail est décrit, ainsi que les paramètres physiques principaux nécessaires pour le calcul de dose. Les caractéristiques du détecteur MVCT de la tomothérapie utilisé pour les mesures de transit sont décrites dans le chapitre 3. Le chapitre 4 contient un premier article intitulé '3D dose reconstruction for narrow beams using ion chamber array measurements', qui décrit la méthode de reconstruction et présente des tests de la méthodologie sur des fantômes irradiés avec des faisceaux étroits. Le chapitre 5 contient un second article intitulé 'Stability of the Helical TomoTherapy HiArt II detector for treatment beam irradiations'. Un procédé de reconstruction de la dose spécifique pour l'utilisation du détecteur MVCT de la tomothérapie est présenté au chapitre 6. Une discussion et les perspectives de la thèse de doctorat sont présentées au chapitre 7, suivies par une conclusion au chapitre 8. Le concept de la tomothérapie est exposé dans l'annexe 1. 
Finally, 3D conformal radiotherapy and intensity-modulated radiotherapy are presented in Appendix 2.
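The iterative reconstruction described above can be illustrated with a one-dimensional toy model. The sketch below is a generic Landweber-type correction loop in which a normalized smoothing kernel stands in for the convolution/superposition dose engine; all names and the update rule are illustrative assumptions, not the thesis's actual 3D implementation.

```python
import numpy as np

def forward_model(fluence, kernel):
    # Stand-in dose engine: the dose is the fluence blurred by a
    # normalized scatter kernel (a crude convolution/superposition model).
    return np.convolve(fluence, kernel, mode="same")

def iterative_reconstruction(measured, kernel, n_iter=200):
    # Landweber-type additive correction: at each pass, the difference
    # between the measured and the predicted dose is fed back into the
    # estimate, so the predicted dose converges toward the measurement.
    estimate = measured.copy()
    for _ in range(n_iter):
        predicted = forward_model(estimate, kernel)
        estimate = estimate + (measured - predicted)
    return estimate
```

In the same spirit as the thesis's error-detection use of transit dosimetry, a persistent residual between the measured and the predicted transmitted dose after convergence would flag a discrepancy such as a positioning error.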
Resumo:
The direct torque control (DTC) has become an accepted vector control method beside the current vector control. The DTC was first applied to asynchronous machines, and has later been applied also to synchronous machines. This thesis analyses the application of the DTC to permanent magnet synchronous machines (PMSM). In order to take full advantage of the DTC, the PMSM has to be properly dimensioned. Therefore, the effect of the motor parameters is analysed taking the control principle into account. Based on the analysis, a parameter selection procedure is presented. The analysis and the selection procedure utilize nonlinear optimization methods. The key element of a direct torque controlled drive is the estimation of the stator flux linkage. Different estimation methods - a combination of current and voltage models and improved integration methods - are analysed. The effect of an incorrectly measured rotor angle in the current model is analysed, and an error detection and compensation method is presented. The dynamic performance of a previously presented sensorless flux estimation method is improved by enhancing the dynamic performance of the low-pass filter used and by adapting the correction of the flux linkage to torque changes. A method for the estimation of the initial angle of the rotor is presented. The method is based on measuring the inductance of the machine in several directions and fitting the measurements to a model. The model is nonlinear with respect to the rotor angle, and therefore a nonlinear least squares optimization method is needed in the procedure. A commonly used current vector control scheme is minimum current control. In the DTC, the stator flux linkage reference is usually kept constant. Achieving the minimum current requires control of this reference. An on-line method to minimize the current by controlling the stator flux linkage reference is presented. 
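Voltage-model flux estimation with an improved integrator, as discussed above, can be sketched in its simplest form: the drifting pure integrator of the back-EMF is replaced by a first-order low-pass filter. The discretization and signature below are illustrative assumptions; the adaptive correction of the thesis is not reproduced here.

```python
import numpy as np

def flux_lpf_estimate(u, i, R, dt, wc):
    # Stator flux estimate from the voltage model: psi = integral(u - R*i).
    # A backward-Euler discretization of 1/(s + wc) replaces the pure
    # integrator, trading low-frequency accuracy for drift immunity.
    psi = np.zeros_like(u)
    for k in range(1, len(u)):
        emf = u[k] - R * i[k]
        psi[k] = (psi[k - 1] + dt * emf) / (1.0 + wc * dt)
    return psi
```

At electrical frequencies well above the cutoff wc, the filter behaves like the ideal integrator; near and below wc, the magnitude and phase errors appear that motivate the compensation schemes analysed in the thesis.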
The control of the reference above the base speed is also considered. A new flux linkage estimator is introduced for the estimation of the parameters of the machine model. In order to utilize the flux linkage estimates in off-line parameter estimation, the integration methods are improved. An adaptive correction is used in the same way as in the estimation of the controller's stator flux linkage. The presented parameter estimation methods are then used in a self-commissioning scheme. The proposed methods are tested with a laboratory drive, which consists of commercial inverter hardware with modified software and several prototype PMSMs.
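The initial rotor angle estimation mentioned above, based on fitting directional inductance measurements to a model, can be sketched as follows. The thesis fits a model that is nonlinear in the rotor angle with a nonlinear least squares method; this simplified sketch instead linearizes an assumed saliency model L(theta) = L0 + L2*cos(2*(theta - theta_r)) and solves it with ordinary least squares, so the recovered angle is only defined modulo pi (the magnet polarity would have to be resolved separately).

```python
import numpy as np

def fit_rotor_angle(angles, inductances):
    # Linearized saliency model: L = a + b*cos(2*theta) + c*sin(2*theta),
    # where b = L2*cos(2*theta_r) and c = L2*sin(2*theta_r).
    A = np.column_stack([np.ones_like(angles),
                         np.cos(2.0 * angles),
                         np.sin(2.0 * angles)])
    a, b, c = np.linalg.lstsq(A, inductances, rcond=None)[0]
    # Recover the rotor angle from the fitted harmonic coefficients.
    return 0.5 * np.arctan2(c, b)
```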
Resumo:
This master's thesis presents the protocol development of a wireless measurement and monitoring system. The thesis examines the issues that must be taken into account in protocol development and presents the implementation of a pilot system based on wireless status monitoring. The pilot system is Ensto Busch-Jaeger Oy's Jussi moisture-guard system, which is converted to a wireless one. The system's data transmission is unidirectional and takes place over a radio link at 433.92 MHz. The goal of the work was to develop a simple but reliable signalling system. The protocol implemented for it encodes the transmitted data in a manner similar to NRZ-L coding. Error detection and correction are performed using a parity bit and the Hamming distance. In addition, redundancy has been added to the communication procedure to secure the unidirectional data transmission. Tests performed on the developed protocol show it to be reliable in the chosen transmission environment.
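Error handling based on parity bits and Hamming distance, as used above, can be illustrated with a textbook Hamming(7,4) code, whose minimum distance of 3 allows any single flipped bit per codeword to be corrected. This is a standard example chosen for illustration, not the exact code of the Jussi pilot protocol.

```python
def hamming74_encode(nibble):
    # Encode 4 data bits as a 7-bit codeword; parity bits sit at
    # positions 1, 2 and 4 (1-based), each covering a distinct subset.
    d = [(nibble >> k) & 1 for k in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]   # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]   # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(bits):
    # Recompute the parity checks; the syndrome gives the 1-based
    # position of a single-bit error (0 means no error detected).
    p1, p2, d1, p3, d2, d3, d4 = bits
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        bits = bits[:]
        bits[syndrome - 1] ^= 1   # flip the erroneous bit back
    _, _, d1, _, d2, d3, d4 = bits
    return d1 | (d2 << 1) | (d3 << 2) | (d4 << 3)
```

In a unidirectional link such as the one described, forward error correction of this kind is attractive precisely because the receiver cannot request a retransmission.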