Abstract:
Genetic diversity is one of the levels of biodiversity that the World Conservation Union (IUCN) has recognized as being important to preserve. This is because genetic diversity is fundamental to the future evolution and adaptive flexibility of a species in responding to the inherently dynamic nature of the natural world. Therefore, the key to maintaining biodiversity and healthy ecosystems is to identify, monitor and maintain locally adapted populations, along with their unique gene pools, upon which future adaptation depends. Thus, conservation genetics deals with the genetic factors that affect extinction risk and the genetic management regimes required to minimize that risk. The conservation of exploited species, such as salmonid fishes, is particularly challenging due to conflicts between different interest groups. In this thesis, I conduct a series of conservation genetic studies on primarily Finnish populations of two salmonid fish species (European grayling, Thymallus thymallus, and lake-run brown trout, Salmo trutta), which are popular recreational game fishes in Finland. The general aim of these studies was to apply and develop population genetic approaches to assist the conservation and sustainable harvest of these populations. The approaches applied included: i) the characterization of population genetic structure at national and local scales; ii) the identification of management units and the prioritization of populations for conservation based on the evolutionary forces shaping indigenous gene pools; iii) the detection of population declines and the testing of the assumptions underlying these tests; and iv) the evaluation of the contribution of natural populations to a mixed stock fishery. Based on microsatellite analyses, clear genetic structuring of exploited Finnish grayling and brown trout populations was detected at both national and local scales. Finnish grayling clustered into three genetically distinct groups, corresponding to the northern, Baltic and south-eastern geographic areas of Finland. The genetic differentiation among and within population groups of grayling ranged from moderate to high levels. Such strong genetic structuring, combined with low genetic diversity, strongly indicates that genetic drift plays a major role in the evolution of grayling populations. Further analyses of European grayling covering the majority of the species' distribution range indicated a strong global footprint of population decline. Using a coalescent approach, the beginning of the population reduction was dated to 1,000-10,000 years ago (ca. 200-2,000 generations). Forward simulations demonstrated that the bottleneck footprints measured using the M ratio can persist within small populations much longer than previously anticipated in the face of low levels of gene flow. In contrast to the M ratio, two alternative methods for genetic bottleneck detection identified recent bottlenecks in six grayling populations that warrant future monitoring. Consistent with the predominant role of random genetic drift, the effective population size (Ne) estimates of all grayling populations were very low, with the majority of Ne estimates below 50. Taken together, the highly structured local populations, limited gene flow and small Ne of grayling populations indicate that grayling populations are vulnerable to overexploitation and, hence, that monitoring and careful management based on the precautionary principle are required not only in Finland but throughout Europe.
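The M ratio mentioned above compares the number of microsatellite alleles at a locus with the range of allele sizes; in a bottlenecked population alleles are lost faster than the size range shrinks, so M drops. Below is a minimal sketch of that computation, assuming allele sizes are expressed in repeat units and averaging over loci; the simulation of critical M values used to call a bottleneck is not reproduced here.

```python
# Minimal sketch of the Garza-Williamson M ratio for microsatellite data.
# Assumes allele sizes are expressed in repeat units; monomorphic loci are skipped.

def m_ratio(locus_alleles):
    """locus_alleles: list of collections of observed allele sizes, one per locus."""
    ratios = []
    for alleles in locus_alleles:
        sizes = sorted(set(alleles))
        if len(sizes) < 2:
            continue  # M is undefined for a monomorphic locus
        k = len(sizes)                    # number of distinct alleles
        r = sizes[-1] - sizes[0] + 1      # number of possible states in the size range
        ratios.append(k / r)
    return sum(ratios) / len(ratios)

# Example: a locus that has lost intermediate alleles gives a low ratio.
print(m_ratio([[10, 11, 12, 13], [20, 24]]))   # (4/4 + 2/5) / 2 = 0.7
```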
Population genetic analyses of lake-run brown trout populations in the Inari basin (northernmost Finland) revealed a hierarchical population structure, where individual populations clustered into three population groups largely corresponding to different geographic regions of the basin. Similar to my earlier work on European grayling, the genetic differentiation among and within population groups of lake-run brown trout was relatively high. Such strong differentiation indicated that the power to determine the relative contributions of populations to mixed fisheries should be relatively high. Consistent with these expectations, high accuracy and precision were observed in mixed stock analysis (MSA) simulations. Application of MSA to indigenous fish caught in the Inari basin identified altogether twelve populations that contributed significantly to the mixed stock fisheries, with the Ivalojoki river system being the major contributor (70%) to the total catch. When the contribution of wild trout populations to the fisheries was evaluated regionally, geographically nearby populations were the main contributors to the local catches. MSA also revealed a clear separation between the lower and upper reaches of the Ivalojoki river system: in contrast to the lower reaches of the Ivalojoki river, which contributed considerably to the catch, populations from the upper reaches of the system (>140 km from the river mouth) did not contribute significantly to the fishery. This could be related to the available habitat size, but may also be associated with a resident-type life history and the increased cost of migration. The studies in my thesis highlight the importance of dense sampling and wide population coverage at the scale being studied, and also demonstrate the importance of critically evaluating the underlying assumptions of the population genetic models and methods used. These results have important implications for the conservation and sustainable fisheries management of Finnish populations of European grayling and of brown trout in the Inari basin.
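Mixed stock analysis estimates the proportion of each baseline population in a mixed catch from the genetic likelihoods of the sampled fish. The sketch below shows the underlying EM iteration only, assuming a precomputed matrix of genotype likelihoods (one row per caught fish, one column per candidate population); the MSA software used for such studies additionally resamples baselines and reports confidence intervals.

```python
import numpy as np

def msa_em(likelihoods, n_iter=200):
    """Estimate mixture proportions by EM.

    likelihoods: (n_fish, n_pops) array, L[i, k] = P(genotype of fish i | population k).
    Returns the estimated proportion of each baseline population in the mixed catch.
    """
    n_fish, n_pops = likelihoods.shape
    p = np.full(n_pops, 1.0 / n_pops)           # start from equal proportions
    for _ in range(n_iter):
        weighted = likelihoods * p               # prior-weighted likelihoods
        posts = weighted / weighted.sum(axis=1, keepdims=True)  # P(pop | fish)
        p = posts.mean(axis=0)                   # M-step: average assignment probabilities
    return p

# Hypothetical example with three fish and two baseline populations.
L = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.2, 0.8]])
print(msa_em(L))  # converges to roughly [0.8, 0.2]
```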
Abstract:
Highly specialized robots are needed in ITER (International Thermonuclear Experimental Reactor), both in the manufacturing and in the maintenance of the reactor, due to its demanding environment. The sectors of the ITER vacuum vessel (VV) require more stringent tolerances than normally expected for the size of the structure involved. The VV, which has a toroidal chamber structure, consists of nine sectors that are to be welded together. The task of the designed robot is to carry the welding apparatus along a path with a stringent tolerance during the assembly operation. In addition to the initial vacuum vessel assembly, sectors need to be replaced for repair after a limited running period. Mechanisms with closed-loop kinematic chains are used in the design of the robots in this work. One version is a purely parallel manipulator and another is a hybrid manipulator in which parallel and serial structures are combined. Traditional industrial robots, which generally have their links actuated in series, are inherently not very rigid and have poor dynamic performance under high-speed and high dynamic loading conditions. Compared with open-chain manipulators, parallel manipulators have high stiffness, high accuracy and a high force/torque capacity in a reduced workspace. Parallel manipulators have a mechanical architecture in which all of the links are connected both to the base and to the end-effector of the robot. The purpose of this thesis is to develop special parallel robots for the assembly, machining and repair of the VV of ITER. The assembly and machining of the vacuum vessel require a special robot. By studying the structure of the vacuum vessel, two novel parallel robots were designed and built; they have six and ten degrees of freedom and are driven by hydraulic cylinders and electrical servo motors. Kinematic models for the proposed robots were defined and two prototypes built. Experiments on machining (cutting) and laser welding with the 6-DOF robot were carried out. It was demonstrated that the parallel robots are capable of holding all the necessary machining tools and welding end-effectors accurately and stably in all positions inside the vacuum vessel sector. The kinematic models proved to be complex, especially in the case of the 10-DOF robot because of its redundant structure. Multibody dynamics simulations were carried out to verify sufficient stiffness during robot motion. The entire design and testing process of the robots proved to be a complex task due to the highly specialized manufacturing technology needed in the ITER reactor, but the results demonstrate the applicability of the proposed solutions well. The results offer not only devices but also a methodology for the assembly and repair of ITER by means of parallel robots.
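For a parallel manipulator of the Stewart-platform type, the inverse kinematics underlying such kinematic models are straightforward: given the pose of the moving platform, each actuator length is the distance between its base and platform attachment points. A minimal sketch follows, using made-up attachment coordinates rather than the actual geometry of the ITER robots.

```python
import numpy as np

def leg_lengths(base_pts, platform_pts, t, rpy):
    """Inverse kinematics of a generic 6-DOF parallel platform.

    base_pts, platform_pts: (6, 3) attachment points in the base and platform frames.
    t: platform position (3,); rpy: roll/pitch/yaw angles in radians.
    Returns the six actuator (cylinder) lengths for the requested pose.
    """
    r, p, y = rpy
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx
    world_platform = (R @ platform_pts.T).T + t   # platform joints in the base frame
    return np.linalg.norm(world_platform - base_pts, axis=1)

# Hypothetical geometry: attachment points on two circles, platform lifted by 1 m.
ang = np.deg2rad(np.arange(0, 360, 60))
base = np.c_[np.cos(ang), np.sin(ang), np.zeros(6)]
plat = 0.5 * np.c_[np.cos(ang), np.sin(ang), np.zeros(6)]
print(leg_lengths(base, plat, t=np.array([0.0, 0.0, 1.0]), rpy=(0.0, 0.0, 0.0)))
```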
Abstract:
This paper aims to assess the effectiveness of ASTER imagery in supporting the mapping of Pittosporum undulatum, an invasive woody species, in Pico da Vara Natural Reserve (S. Miguel Island, Archipelago of the Azores, Portugal). This assessment was done by applying K-Nearest Neighbor (KNN), Support Vector Machine (SVM) and Maximum Likelihood (MLC) pixel-based supervised classifications to 4 different geographic and remote sensing datasets, constituted by the Visible and Near-Infrared (VNIR) and Short Wave Infrared (SWIR) bands of the ASTER sensor and by digital cartography associated with orography (altitude and "distance to water streams"), on which the spatial distribution of Pittosporum undulatum directly depends. Overall, most of the classifications performed showed strong agreement and high accuracy. At the targeted species level, the two highest classification accuracies were obtained when applying MLC and KNN to the VNIR bands coupled with the auxiliary geographic information. Results improved significantly when information on the ecology and occurrence of the species (altitude and distance to water streams) was included in the classification scheme. These results show that the use of the ASTER sensor's VNIR spectral bands, when coupled with relevant ancillary GIS data, can constitute an effective and low-cost approach for the evaluation and continuous assessment of Pittosporum undulatum woodland propagation and distribution within the Protected Areas of the Azores Islands.
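In practice, such pixel-based classifications amount to stacking the spectral bands and ancillary layers (altitude, distance to streams) into one feature vector per pixel and training a supervised classifier on labelled pixels. A minimal sketch with scikit-learn is given below; the array names and random data are placeholders, and MLC is approximated here by a full-covariance Gaussian classifier (QuadraticDiscriminantAnalysis), not the exact implementation used in the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical inputs: 'stack' is (rows, cols, n_features) holding VNIR bands plus
# altitude and distance-to-streams layers; 'labels' holds class codes for
# training pixels and 0 elsewhere.
rng = np.random.default_rng(0)
stack = rng.random((100, 100, 5))
labels = rng.integers(0, 4, size=(100, 100))

X = stack.reshape(-1, stack.shape[-1])
y = labels.ravel()
mask = y > 0                                   # keep only labelled pixels
X_tr, X_te, y_tr, y_te = train_test_split(X[mask], y[mask], test_size=0.3, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf")),
                  ("MLC (Gaussian)", QuadraticDiscriminantAnalysis())]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name, accuracy_score(y_te, pred), cohen_kappa_score(y_te, pred))
```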
Abstract:
Percarboxylic acids are commonly used as disinfection and bleaching agents in the textile, paper, and fine chemical industries. All of these applications are based on the oxidative potential of these compounds. Despite the high interest in these chemicals, they are unstable and explosive, which increases the risks of their synthesis and transportation. Therefore, safety criteria in the production process should be considered. Microreactors represent a technology that efficiently exploits the safety advantages resulting from small scale. Therefore, microreactor technology was used in the synthesis of peracetic acid and performic acid. These percarboxylic acids were produced at different temperatures, residence times and catalyst (i.e., sulfuric acid) concentrations. Both synthesis reactions appeared to be rather fast, because equilibrium was reached in 4 min at 313 K for performic acid and in 10 min at 343 K for peracetic acid. In addition, the experimental results were used to study the kinetics of the formation of performic acid and peracetic acid. The advantages of the microreactors in this study were efficient temperature control, even in a very exothermic reaction, and good mixing due to the short diffusion distances. Therefore, reaction rates could be determined with high accuracy. Three different models were considered in order to estimate kinetic parameters such as reaction rate constants and activation energies. Of these three models, the laminar flow model with a radial velocity distribution gave the most precise parameters. However, sulfuric acid creates many drawbacks in this synthesis process. Therefore, a "greener" route using a heterogeneous catalyst for the synthesis of performic acid in a microreactor was studied. The cation exchange resin Dowex 50Wx8 showed very high activity and a long lifetime in this reaction. In the presence of this catalyst, equilibrium was reached in 120 seconds at 313 K, which indicates a rather fast reaction. In addition, the safety advantages of microreactors were investigated in this study. Four different conventional methods were used. The production of peracetic acid was used as a test case, and the safety of a conventional batch process was compared with an on-site continuous microprocess. It was found that the conventional methods for the analysis of process safety may not be reliable and adequate for a radically novel technology such as microreactors. This is understandable because the conventional methods are partly based on experience, which is very limited in connection with a totally novel technology. Therefore, a checklist-based method was developed to study the safety of intensified and novel processes at an early stage of process development. The checklist was formulated using the concept of layers of protection for a chemical process. The traditional process and three intensified processes of hydrogen peroxide synthesis were selected as test cases. With these real cases, it was shown that process intensification can have several positive and negative effects on safety. The general claim that safety is always improved by process intensification was thus called into question.
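Once rate constants have been estimated at the different reaction temperatures, the activation energy follows from a linearized Arrhenius fit, ln k = ln A − Ea/(RT). A minimal sketch with hypothetical rate constants (not the values measured in the thesis):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical rate constants (L/(mol s)) at the experimental temperatures (K).
T = np.array([303.0, 313.0, 323.0, 333.0, 343.0])
k = np.array([0.8e-3, 1.6e-3, 3.0e-3, 5.5e-3, 9.6e-3])

# Linear regression of ln k against 1/T: slope = -Ea/R, intercept = ln A.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R
A = np.exp(intercept)
print(f"Ea = {Ea/1000:.1f} kJ/mol, A = {A:.3g} L/(mol s)")
```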
Abstract:
Macroalgae are the main primary producers of temperate rocky shores, providing a three-dimensional habitat, food and nursery grounds for many other species. During the past decades, the state of coastal waters has deteriorated due to increasing human pressures, resulting in dramatic changes in coastal ecosystems, including macroalgal communities. To reverse the deterioration of the European seas, the EU has adopted the Water Framework Directive (WFD) and the Marine Strategy Framework Directive (MSFD), aiming at an improved status of coastal waters and the marine environment. Further, the Habitats Directive (HD) calls for the protection of important habitats and species (many of which are marine), and the Maritime Spatial Planning Directive for sustainability in the use of resources and in human activities at sea and along the coasts. To efficiently protect important marine habitats and communities, we need knowledge of their spatial distribution. Ecological knowledge is also needed to assess the status of marine areas using biological indicators, as required by the WFD and the MSFD; knowledge of how biota changes with human-induced pressures is essential, but to reliably assess change, we also need to know how biotic communities vary over natural environmental gradients. This is especially important in sea areas such as the Baltic Sea, where natural environmental gradients create substantial differences in biota between areas. In this thesis, I studied the variation occurring in macroalgal communities across the environmental gradients of the northern Baltic Sea, including eutrophication-induced changes. The aim was to produce knowledge to support the reliable use of macroalgae as indicators of the ecological status of marine areas and to test practical metrics that could potentially be used in status assessments. A further aim was to develop a methodology for mapping the HD Annex I habitat reefs, using the best available data on geology and bathymetry. The results showed that the large-scale variation in the macroalgal community composition of the northern Baltic Sea is largely driven by salinity and exposure. Exposure is important also on smaller spatial scales, affecting species occurrence, community structure and the depth penetration of algae. Consequently, this natural variability complicates the use of macroalgae as indicators of human-induced changes. Of the studied indicators, the number of perennial algal species, the perennial cover, the fraction of annual algae, and the lower limit of occurrence of red and brown perennial algae showed potential as usable indicators of ecological status. However, the cumulative cover of algae, commonly used as an indicator in fully marine environments, responded only weakly to eutrophication in the area. Although the mere occurrence of perennial algae did not show clear indicator potential, a distinct discrepancy in the occurrence of bladderwrack, Fucus vesiculosus, was found between two areas with differing eutrophication histories, the Bothnian Sea and the Archipelago Sea. The absence of Fucus from many potential sites in the outer Archipelago Sea is likely due to its inability to recover from its disappearance from the area 30-40 years ago, highlighting the importance of past events in macroalgal occurrence.
The methodology presented for mapping the potential distribution and ecological value of reefs showed that relatively high mapping accuracy can be achieved by combining existing available data, and the maps produced serve as valuable background information for more detailed surveys. Taken together, the results of this thesis contribute significantly to the knowledge of the macroalgal communities of the northern Baltic Sea and can be directly applied in various management contexts.
Abstract:
In this thesis, the suitability of different trackers for finger tracking in high-speed videos was studied. Tracked finger trajectories from the videos were post-processed and analysed using various filtering and smoothing methods. Position derivatives of the trajectories, i.e. speed and acceleration, were extracted for the purposes of hand motion analysis. Overall, two methods, Kernelized Correlation Filters and Spatio-Temporal Context Learning tracking, performed better than the others in the tests. Both achieved high accuracy on the selected high-speed videos and also allowed real-time processing, being able to process over 500 frames per second. In addition, the results showed that different filtering methods can be applied to produce more appropriate velocity and acceleration curves calculated from the tracking data. Local Regression filtering and the Unscented Kalman Smoother gave the best results in the tests. Furthermore, the results show that the tracking and filtering methods are suitable for high-speed hand tracking and trajectory-data post-processing.
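To obtain speed and acceleration from tracked fingertip positions, the pixel trajectory is converted to physical units and differentiated; smoothing while differentiating avoids amplifying tracker noise. The sketch below uses a Savitzky-Golay filter in place of the Local Regression and Unscented Kalman smoothers evaluated in the thesis, and assumes the per-frame positions have already been produced by a tracker such as OpenCV's KCF; the frame rate, calibration and trajectory are made up.

```python
import numpy as np
from scipy.signal import savgol_filter

fps = 500.0                      # high-speed camera frame rate
dt = 1.0 / fps
mm_per_px = 0.2                  # hypothetical image calibration

# Hypothetical tracked fingertip x/y positions in pixels, one row per frame.
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, dt)
xy_px = np.c_[200 + 80 * np.sin(4 * np.pi * t), 150 + 10 * t] + rng.normal(0, 0.5, (t.size, 2))

xy_mm = xy_px * mm_per_px
# Smoothed derivatives: deriv=1 gives velocity, deriv=2 gives acceleration (per axis).
vel = savgol_filter(xy_mm, window_length=31, polyorder=3, deriv=1, delta=dt, axis=0)
acc = savgol_filter(xy_mm, window_length=31, polyorder=3, deriv=2, delta=dt, axis=0)

speed = np.linalg.norm(vel, axis=1)            # mm/s
accel = np.linalg.norm(acc, axis=1)            # mm/s^2
print(speed.max(), accel.max())
```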
Abstract:
The present study describes an auxiliary tool for the diagnosis of left ventricular (LV) segmental wall motion (WM) abnormalities based on color-coded echocardiographic WM images. An artificial neural network (ANN) was developed and validated for grading LV segmental WM using data from color kinesis (CK) images, a technique developed to display the timing and magnitude of global and regional WM in real time. We evaluated 21 normal subjects and 20 patients with LV WM abnormalities revealed by two-dimensional echocardiography. CK images were obtained in two sets of viewing planes. A method was developed to analyze CK images, providing quantitation of the fractional area change in each of the 16 LV segments. Two experienced observers analyzed LV WM from two-dimensional images and scored them as: 1) normal, 2) mild hypokinesia, 3) moderate hypokinesia, 4) severe hypokinesia, 5) akinesia, and 6) dyskinesia. Based on expert analysis of 10 normal subjects and 10 patients, we trained a multilayer perceptron ANN using a back-propagation algorithm to provide automated grading of LV WM, and this ANN was then tested on the remaining subjects. Excellent concordance between expert and ANN analysis was shown by ROC curve analysis, with a measured area under the curve of 0.975. An excellent correlation was also obtained for the global LV segmental WM index by expert and ANN analysis (R² = 0.99). In conclusion, the ANN showed high accuracy for automated semi-quantitative grading of WM based on CK images. This technique can be an important aid, improving diagnostic accuracy and reducing inter-observer variability in scoring segmental LV WM.
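A comparable grading network can be sketched with an off-the-shelf multilayer perceptron: the input is the vector of fractional area change values for the 16 LV segments and the output is the wall-motion score. This is only an illustration with random stand-in data, not the network architecture or the patient data of the study.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical training data: one row of 16 segmental fractional-area-change
# values per example, with an expert wall-motion score 1-6 as the label.
rng = np.random.default_rng(0)
X = rng.random((400, 16))
y = rng.integers(1, 7, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Small MLP trained by back-propagation (stochastic gradient-based solver).
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("agreement with expert labels:", accuracy_score(y_te, net.predict(X_te)))
```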
Abstract:
Several methods have been described to measure intraocular pressure (IOP) in clinical and research settings. However, the measurement of time-varying IOP with high accuracy, particularly in situations that alter corneal properties, has not been reported until now. The present report describes a computerized system capable of recording the transitory variability of IOP, which is sufficiently sensitive to reliably measure ocular pulse peak-to-peak values. We also describe its characteristics and discuss its applicability to research and clinical studies. The device consists of a pressure transducer, a signal conditioning unit and an analog-to-digital converter coupled to a video acquisition board. A modified Cairns trabeculectomy was performed in 9 Oryctolagus cuniculus rabbits to obtain changes in IOP decay parameters and to evaluate the utility and sensitivity of the recording system. The device was effective for the study of kinetic parameters of IOP, such as the decay pattern and the ocular pulse waves due to the cardiac and respiratory rhythms. In addition, there was a significant increase in the derivative of the IOP-versus-time curve when pre- and post-trabeculectomy recordings were compared. The present procedure excludes corneal thickness effects and errors related to individual operator ability. Clinical complications due to saline infusion and pressure overload were not observed during biomicroscopic evaluation. Among the disadvantages of the procedure are the requirement for anesthesia and its suitability for acute recordings rather than chronic protocols. Finally, the method described may provide a reliable alternative for the study of dynamic alterations of ocular pressure in man and may facilitate the investigation of the pathogenesis of glaucoma.
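Decay parameters of the kind referred to above can be extracted by fitting an exponential to the recorded pressure trace. A minimal sketch with scipy follows, using a simulated trace and made-up parameter values rather than the rabbit recordings of the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def iop_decay(t, baseline, amplitude, tau):
    """Mono-exponential IOP decay toward a baseline pressure (mmHg)."""
    return baseline + amplitude * np.exp(-t / tau)

# Simulated recording: pressure sampled once per second for five minutes.
rng = np.random.default_rng(0)
t = np.arange(0, 300.0, 1.0)
iop = iop_decay(t, 12.0, 18.0, 60.0) + rng.normal(0, 0.3, t.size)

(baseline, amplitude, tau), _ = curve_fit(iop_decay, t, iop, p0=(10, 10, 30))
initial_slope = -amplitude / tau          # derivative of the fitted curve at t = 0
print(baseline, amplitude, tau, initial_slope)
```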
Abstract:
Many industrial applications need object recognition and tracking capabilities. The algorithms developed for these purposes are computationally expensive. Yet real-time performance, high accuracy and low power consumption are essential requirements for such systems. When all these requirements are combined, hardware acceleration of these algorithms becomes a feasible solution. The purpose of this study is to analyze the current state of these hardware acceleration solutions: which algorithms have been implemented in hardware and what modifications have been made in order to adapt these algorithms to hardware.
Abstract:
Intelligence from a human source that is falsely thought to be true is potentially more harmful than a total lack of it. The veracity assessment of gathered intelligence is one of the most important phases of the intelligence process. Lie detection and veracity assessment methods have been studied widely, but a comprehensive analysis of these methods' applicability is lacking. There are some problems related to the efficacy of lie detection and veracity assessment. According to a conventional belief, an almighty lie detection method exists that is almost 100% accurate and suitable for any social encounter. However, scientific studies have shown that this is not the case, and popular approaches are often oversimplified. The main research question of this study was: what is the applicability of veracity assessment methods that are reliable and based on scientific proof, in terms of the following criteria: accuracy, i.e. the probability of detecting deception successfully; ease of use, i.e. how easily the method can be applied correctly; the time required to apply the method reliably; the absence of a need for special equipment; and the unobtrusiveness of the method. In order to answer the main research question, the following supporting research questions were answered first: what kinds of interviewing and interrogation techniques exist and how could they be used in the intelligence interview context; what kinds of lie detection and veracity assessment methods exist that are reliable and based on scientific proof; and what kinds of uncertainty and other limitations are included in these methods? Two major databases, Google Scholar and Science Direct, were used to search and collect existing topic-related studies and other papers. After the search phase, an understanding of the existing lie detection and veracity assessment methods was established through a meta-analysis. A Multi-Criteria Analysis utilizing the Analytic Hierarchy Process was conducted to compare scientifically valid lie detection and veracity assessment methods in terms of the assessment criteria. In addition, a field study was arranged to gain first-hand experience of the applicability of different lie detection and veracity assessment methods. The Studied Features of Discourse and the Studied Features of Nonverbal Communication gained the highest ranking in overall applicability. They were assessed to be the easiest and fastest to apply, and to have the required temporal and contextual sensitivity. The Plausibility and Inner Logic of the Statement, the Method for Assessing the Credibility of Evidence and the Criteria-Based Content Analysis were also found to be useful, but with some limitations. The Discourse Analysis and the Polygraph were assessed to be the least applicable. Results from the field study support these findings. However, it was also discovered that even the most applicable methods are not entirely trouble-free. In addition, this study highlighted that three channels of information, Content, Discourse and Nonverbal Communication, can be subjected to veracity assessment methods that are scientifically defensible. There is at least one reliable and applicable veracity assessment method for each of the three channels. All of the methods require disciplined application and a scientific working approach. There are no quick gains if high accuracy and reliability are desired.
Since most current lie detection studies are built around a scenario where roughly half of the assessed people are totally truthful and the other half are liars presenting a well-prepared cover story, it is proposed that future studies test lie detection and veracity assessment methods against partially truthful human sources. This kind of test setup would highlight new challenges and opportunities for the use of existing and widely studied lie detection methods, as well as for modern ones that are still under development.
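The Analytic Hierarchy Process used for the multi-criteria comparison derives criterion weights from a pairwise comparison matrix: the normalized principal eigenvector gives the priorities and the corresponding eigenvalue gives a consistency check. A minimal sketch with a hypothetical 3x3 comparison matrix (the criteria and judgments of the thesis are not reproduced):

```python
import numpy as np

def ahp_weights(pairwise):
    """Priorities and consistency index from an AHP pairwise comparison matrix."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    i = np.argmax(eigvals.real)                      # principal eigenvalue
    w = np.abs(eigvecs[:, i].real)
    w /= w.sum()                                     # normalized priority vector
    n = pairwise.shape[0]
    ci = (eigvals[i].real - n) / (n - 1)             # consistency index
    return w, ci

# Hypothetical judgments: criterion A is 3x as important as B and 5x as important as C.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, ci = ahp_weights(A)
print(weights, ci)
```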
Abstract:
The Lennard-Jones and Devonshire (LJD) single-particle theory for liquids is extended and applied to the anharmonic solid in the high-temperature limit. The exact free energy of the crystal is expressed as a convergent series of terms involving larger and larger sets of contiguous particles called cell-clusters. The motions of all the particles within a cell-cluster are correlated with each other and lead to non-trivial integrals of dimensions 3, 6, 9, ..., 3N. For the first time, the six-dimensional integral has been calculated to high accuracy, using a Lennard-Jones (6-12) pair interaction between nearest neighbours only, for the f.c.c. lattice. The thermodynamic properties predicted by this model agree well with experimental results for solid xenon.
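As an illustration of the kind of six-dimensional configurational integral involved, the sketch below estimates by plain Monte Carlo the integral of the Boltzmann factor of a Lennard-Jones (6-12) pair interaction over two cubic cells. This is only a schematic simplification: the actual cell-cluster integrand also includes the interactions of each particle with the surrounding static fcc neighbours, and all parameter values here are placeholders in reduced units.

```python
import numpy as np

def lj(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones (6-12) pair potential in reduced units."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def two_cell_integral(beta=1.0, cell=0.3, separation=1.2, n=200_000, seed=0):
    """Monte Carlo estimate of the 6-D integral of exp(-beta * phi(|r1 - r2|)),
    each particle being confined to a cubic cell of side 2*cell around its site."""
    rng = np.random.default_rng(seed)
    c1 = np.array([0.0, 0.0, 0.0])
    c2 = np.array([separation, 0.0, 0.0])
    r1 = c1 + rng.uniform(-cell, cell, (n, 3))
    r2 = c2 + rng.uniform(-cell, cell, (n, 3))
    dist = np.linalg.norm(r1 - r2, axis=1)
    volume = (2 * cell) ** 6                    # total 6-D integration volume
    return volume * np.mean(np.exp(-beta * lj(dist)))

print(two_cell_integral())
```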
Abstract:
It is recognized that benzene, toluene, ethylbenzene and the xylene isomers, volatile organic compounds (VOCs) commonly referred to as BTEX, have harmful effects on human health and on plants, depending on the duration and levels of exposure. Benzene in particular is classified as a carcinogen, and exposure to benzene concentrations above 64 g/m3 can be fatal within 5–10 minutes. Consequently, real-time measurement of BTEX in ambient air is essential to quickly detect a hazard associated with their emission into the air and to estimate the potential risks to living beings and the environment. In this thesis, a method for the real-time analysis of BTEX in ambient air was developed and validated. The method is based on direct air sampling coupled with tandem mass spectrometry using an atmospheric pressure chemical ionization source (direct APCI-MS/MS). Analytical validation demonstrated the sensitivity (method detection limit, MDL, of 1–2 μg/m3), precision (coefficient of variation, CV, < 10%), accuracy (> 95%) and selectivity of the method. Ambient air samples from an industrial waste landfill site and from various automotive repair garages were analyzed with the developed method. Comparison with results obtained by on-line gas chromatography coupled with flame ionization detection (GC-FID) gave similar results. The ability of the method to rapidly assess the potential risks associated with BTEX exposure was demonstrated through a field study including a health risk analysis for workers in three automotive repair garages and through experiments under simulated atmospheres. The concentrations measured in the ambient air of the garages were 8.9–25 µg/m3 for benzene, 119–1156 µg/m3 for toluene, 9–70 µg/m3 for ethylbenzene and 45–347 µg/m3 for the xylenes. A total environmental daily dose between 1.46 × 10-3 and 2.52 × 10-3 mg/kg/day was determined for benzene. The cancer risk associated with total environmental benzene exposure estimated for the workers studied was between 1.1 × 10-5 and 1.8 × 10-5. A new APCI-MS/MS method was also developed and validated for the direct analysis of octamethylcyclotetrasiloxane (D4) and decamethylcyclopentasiloxane (D5) in air and biogas. D4 and D5 are volatile cyclic siloxanes widely used as solvents in industrial processes and consumer products in place of tropospheric-ozone-precursor VOCs such as BTEX. Their ubiquitous presence in ambient air samples, owing to their massive use, calls for toxicity studies, and such studies require qualitative and quantitative trace analysis of these compounds. Moreover, the presence of traces of these substances in biogas hinders its use as a renewable energy source by causing costly damage to equipment. The analysis of siloxanes in biogas is therefore essential to determine whether the biogas requires purification before being used for energy production. The method developed in this study has good sensitivity (MDL of 4–6 μg/m3), good precision (CV < 10%), good accuracy (> 93%) and high selectivity.
It was also demonstrated that, using this method with hexamethyl-d18-disiloxane as an internal standard, the detection and quantification of D4 and D5 in real biogas samples can be accomplished with better sensitivity (MDL of about 2 μg/m3), high precision (CV < 5%) and high accuracy (> 97%). A variety of biogas samples collected at the sanitary landfill of the Complexe Environnemental de Saint-Michel in Montreal were successfully analyzed with this new method. The measured concentrations were 131–1275 µg/m3 for D4 and 250–6226 µg/m3 for D5. These results represent the first data reported in the literature on the concentrations of the siloxanes D4 and D5 in landfill biogas as a function of waste age.
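The exposure figures above follow the usual screening-level risk arithmetic: an inhalation daily dose is obtained from the air concentration, breathing rate, exposure duration and body weight, and the excess lifetime cancer risk is the dose multiplied by a cancer slope factor. A minimal sketch with hypothetical exposure parameters and a placeholder slope factor, not the values actually used in the thesis:

```python
# Screening-level inhalation dose and cancer risk for benzene.
# All parameter values below are illustrative placeholders.

c_air = 0.015          # benzene concentration in air, mg/m3 (15 ug/m3)
inhalation = 0.83      # breathing rate, m3/h
hours_per_day = 8.0    # daily exposure duration, h
days_per_year = 250.0  # exposure frequency, d/yr
years = 25.0           # exposure duration, yr
body_weight = 70.0     # kg
averaging_days = 70.0 * 365.0   # lifetime averaging time, d
slope_factor = 0.027   # placeholder cancer slope factor, (mg/kg/day)^-1

cdi = (c_air * inhalation * hours_per_day * days_per_year * years
       / (body_weight * averaging_days))      # chronic daily intake, mg/kg/day
risk = cdi * slope_factor                     # excess lifetime cancer risk
print(f"CDI = {cdi:.2e} mg/kg/day, risk = {risk:.1e}")
```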
Abstract:
Atherosclerosis is a disease that causes, through the accumulation of lipid plaques, hardening of the arterial wall and narrowing of the lumen. These lesions are generally located in the coronary, carotid, aortic, renal, digestive and peripheral arterial segments. With respect to peripheral involvement, lesions of the lower limbs are particularly frequent. The severity of these arterial lesions is often assessed by the degree of stenosis (reduction > 50% of the lumen diameter) on angiography, magnetic resonance imaging (MRI), computed tomography or ultrasound. However, to plan a surgical intervention, a 3D geometric representation of the artery is preferable. Cross-sectional imaging methods (MRI and computed tomography) are very effective for generating good-quality three-dimensional images, but their use is expensive and invasive for patients. 3D ultrasound can be a very promising imaging avenue for localizing and quantifying stenoses. This imaging modality offers distinct advantages such as convenience, low cost for a non-invasive diagnosis (without irradiation or nephrotoxic contrast agent) and the option of Doppler analysis to quantify blood flow. Since medical robots have already been used successfully in surgery and orthopedics, our team designed a new robotic 3D ultrasound system to detect and quantify stenoses of the lower limbs. With this new technology, a radiologist manually teaches the robot an ultrasound scan of the vessel of interest. The robot then repeats the learned trajectory with very high precision, simultaneously controls the ultrasound image acquisition process at a constant sampling step, and safely maintains the force applied by the probe on the patient's skin. Consequently, reconstructing a 3D arterial geometry of the lower limbs from this system could allow highly reliable localization and quantification of stenoses. The objective of this research project was therefore to validate and optimize this robotic 3D ultrasound imaging system. The reliability of a 3D geometry reconstructed from a robotic reference frame depends greatly on the positioning accuracy and on the calibration procedure. Accordingly, the positioning accuracy of the robotic arm was evaluated throughout its workspace with a phantom specially designed to simulate the configuration of the lower limb arteries (article 1 - chapter 3). In addition, a Z-shaped crossed-wire phantom was designed to ensure accurate calibration of the robotic system (article 2 - chapter 4). These optimized methods were used to validate the system for clinical application and to find the transformation that converts the coordinates of the 2D ultrasound image into the Cartesian frame of the robotic arm. From these results, any object scanned by the robotic system can be characterized for an adequate 3D reconstruction. Vascular phantoms compatible with several imaging modalities were used to simulate different arterial representations of the lower limbs (article 2 - chapter 4, article 3 - chapter 5).
The reconstructed geometries were validated through comparative analyses. The accuracy of localizing and quantifying stenoses with this robotic 3D ultrasound imaging system was also determined. These evaluations were performed in vivo to assess the potential of using such a system clinically (article 3 - chapter 5).
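The calibration transformation referred to above maps a pixel in the 2D ultrasound image to a 3D point in the robot's base frame by chaining homogeneous transforms: pixel scaling in the image plane, the probe-to-end-effector calibration found with the Z-wire phantom, and the end-effector pose reported by the robot. A minimal sketch with made-up transform values:

```python
import numpy as np

def pixel_to_robot(u, v, sx, sy, T_probe_to_ee, T_ee_to_base):
    """Map ultrasound image pixel (u, v) to 3D coordinates in the robot base frame.

    sx, sy: pixel size in mm (from the image calibration).
    T_probe_to_ee: 4x4 probe-to-end-effector transform (from phantom calibration).
    T_ee_to_base: 4x4 end-effector pose reported by the robot at acquisition time.
    """
    p_image = np.array([sx * u, sy * v, 0.0, 1.0])      # point in the image plane
    return (T_ee_to_base @ T_probe_to_ee @ p_image)[:3]

# Hypothetical calibration: probe frame offset 50 mm along the end-effector z axis,
# end-effector located at (300, 100, 200) mm in the base frame, no rotation.
T_probe_to_ee = np.eye(4); T_probe_to_ee[2, 3] = 50.0
T_ee_to_base = np.eye(4); T_ee_to_base[:3, 3] = [300.0, 100.0, 200.0]
print(pixel_to_robot(128, 256, sx=0.2, sy=0.2,
                     T_probe_to_ee=T_probe_to_ee, T_ee_to_base=T_ee_to_base))
```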
Abstract:
Gait analysis has recently emerged as one of the most important medical fields. Marker-based systems are the most favored methods for human movement assessment and gait analysis; however, these systems require specific equipment and expertise, and are cumbersome, costly and difficult to use. Many recent computer-vision approaches have been developed to reduce the cost of motion capture systems while ensuring high-accuracy results. In this thesis, we present our new low-cost gait analysis system, composed of two monocular video cameras placed on the left and right sides of a treadmill. A 2D model of each half of the human skeleton is reconstructed from each view based on dynamic color segmentation, and gait analysis is then performed on these two models. Validation against a state-of-the-art vision-based motion capture system (using the Microsoft Kinect) and against ground truth (with markers) was carried out to demonstrate the robustness and effectiveness of our system. The mean error of the human skeleton model estimate with respect to ground truth for our method versus the Kinect is very promising: the joint angles of the thighs (6.29° versus 9.68°), lower legs (7.68° versus 11.47°) and feet (6.14° versus 13.63°) and the stride length (6.14 cm versus 13.63 cm) are better and more stable than those of the Kinect, while the system maintains accuracy fairly close to the Kinect for the arms (7.29° versus 6.12°), forearms (8.33° versus 8.04°) and torso (8.69° versus 6.47°). Based on the skeleton model obtained by each method, we carried out a symmetry study on different joints (elbow, knee and ankle) using each method on three different subjects to see which method distinguishes the symmetry/asymmetry characteristics of gait more effectively. In our test, our system gave a maximum knee angle of 8.97° and 13.86° for normal and asymmetric walks respectively, while the Kinect gave 10.58° and 11.94°. Compared with the ground truth, 7.64° and 14.34°, our system showed greater accuracy and discriminating power between the two cases.
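From a per-frame 2D skeleton, each joint angle is simply the angle between the two limb segments meeting at that joint, and gait symmetry can be compared by contrasting the left and right angle curves. A minimal sketch of the angle computation, assuming keypoints are available as (x, y) pixel coordinates (the keypoints below are hypothetical):

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle (degrees) at 'joint' between the segments joint->proximal and joint->distal."""
    a = np.asarray(proximal, float) - np.asarray(joint, float)
    b = np.asarray(distal, float) - np.asarray(joint, float)
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical keypoints (pixels) for one frame: hip, knee, ankle on each side.
left_knee = joint_angle((310, 220), (320, 330), (315, 440))
right_knee = joint_angle((410, 220), (400, 330), (430, 440))
print(left_knee, right_knee, abs(left_knee - right_knee))  # per-frame asymmetry
```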
Abstract:
The Global Positioning System (GPS), with its high integrity, continuous availability and reliability, has revolutionized navigation based on radio ranging. With four or more GPS satellites in view, a GPS receiver can find its location anywhere on the globe with an accuracy of a few meters. High accuracy, within centimeters or even millimeters, is achievable by correcting the GPS signal with an external augmentation system. The use of satellites for critical applications like navigation has become a reality through the development of these augmentation systems (such as WAAS, SDCM and EGNOS), whose primary objective is to provide the essential integrity information needed for navigation service in their respective regions. Apart from these, many countries have initiated the development of space-based regional augmentation systems, such as GAGAN and IRNSS of India, MSAS and QZSS of Japan, and COMPASS of China. In future, these regional systems will operate simultaneously and emerge as a Global Navigation Satellite System, or GNSS, to support a broad range of activities in the global navigation sector. Among the different error sources in GPS precise positioning, the propagation delay due to atmospheric refraction is a limiting factor on the accuracy achievable with this system. Although WADGPS, which aims at accurate positioning over a large area, broadcasts corrections for the different errors involved in GPS ranging, including ionospheric and tropospheric errors, the large temporal and spatial variations of atmospheric parameters, especially in the lower atmosphere (troposphere), mean that these broadcast tropospheric corrections are not sufficiently accurate. This necessitates the estimation of the tropospheric error based on realistic values of tropospheric refractivity. Presently available methodologies for the estimation of tropospheric delay are mostly based on atmospheric data and GPS measurements from mid-latitude regions, where the atmospheric conditions are significantly different from those over the tropics; no such attempts had been made over the tropics. In practice, when measured atmospheric parameters are not available, analytical models developed using mid-latitude data are the only option. The major drawback of these existing models is that they neglect the seasonal variation of atmospheric parameters at stations near the equator, and in the tropics they underestimate the delay on quite a few occasions. In this context, the present study is a first and major step towards the development of models for tropospheric delay over the Indian region, which is a prime requisite for future space-based navigation programs (GAGAN and IRNSS). Apart from the models based on measured surface parameters, a region-specific model that does not require any measured atmospheric parameter as input, but depends only on latitude and day of the year, was developed for the tropical region with emphasis on the Indian sector. The large variability of atmospheric water vapor content over short spatial and/or temporal scales makes its measurement rather involved and expensive. A local network of GPS receivers is an effective tool for water vapor remote sensing over land, and this recently developed technique proves to be an effective tool for measuring precipitable water (PW). The potential of using GPS to estimate atmospheric water vapor under all-weather conditions and with high temporal resolution is also explored; this will be useful for retrieving columnar water vapor from ground-based GPS data.
A good GPS network could thus be a major source of water vapor information for numerical weather prediction models and could act as a surrogate for the data gap in microwave remote sensing of water vapor over land.
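A worked example of the kind of tropospheric model discussed above is the Saastamoinen zenith hydrostatic delay, which needs only surface pressure, latitude and station height; the wet part of the measured zenith delay can then be scaled into precipitable water vapor. The sketch below uses the standard Saastamoinen constant and a fixed ZWD-to-PW conversion factor as a simplifying assumption; the thesis develops region-specific models rather than this textbook form, and the input values are hypothetical.

```python
import numpy as np

def zenith_hydrostatic_delay(pressure_hpa, lat_deg, height_m):
    """Saastamoinen zenith hydrostatic delay in meters from surface pressure."""
    lat = np.radians(lat_deg)
    return 0.0022768 * pressure_hpa / (1.0 - 0.00266 * np.cos(2 * lat)
                                       - 0.00028 * height_m / 1000.0)

# Hypothetical GPS-estimated zenith total delay at a tropical station.
ztd = 2.55                      # m, from GPS processing
zhd = zenith_hydrostatic_delay(pressure_hpa=1008.0, lat_deg=10.0, height_m=30.0)
zwd = ztd - zhd                 # zenith wet delay, m
pw_mm = 0.16 * zwd * 1000.0     # precipitable water; 0.16 is a typical conversion factor
print(f"ZHD = {zhd:.3f} m, ZWD = {zwd:.3f} m, PW ~ {pw_mm:.1f} mm")
```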