954 results for decentralised data fusion
Abstract:
Tungsten will be employed as a plasma-facing material in the ITER fusion reactor under construction in Cadarache, France; therefore, there is a significant need for accurate electron-impact excitation and ionization data for the ions of tungsten. We report on the results of extensive calculations of ionization and excitation for W3+ that are intended to provide the atomic data needed for impurity-influx diagnostics of tungsten in several existing tokamak reactors. The electron-impact excitation rate coefficients for this study were determined using the relativistic R-matrix method. The contribution from direct electron-impact ionization was determined using the distorted-wave approximation, the accuracy of which was verified by an R-matrix with pseudo-states calculation. Contributions to total ionization from excitation autoionization were also generated with the relativistic R-matrix method. These results were then employed to calculate values of ionizations per emitted photon, or SXB ratios, for four carefully selected spectral lines; these data will allow the determination of the impurity influx from tungsten plasma-facing surfaces. For the range of densities of importance in the edge region of a tokamak reactor, these SXB ratios are found to be nearly independent of electron density but to vary significantly with electron temperature.
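As an illustration of how such data enter the diagnostic, the following minimal Python sketch applies the standard influx relation Gamma = 4*pi * B * SXB(Te) to a measured line brightness; the SXB table here is a placeholder for illustration, not the computed W3+ data of this work.

import numpy as np

# Hypothetical SXB(Te) table for one spectral line; placeholder values,
# not the computed W3+ data reported in the paper.
te_grid = np.array([5.0, 10.0, 20.0, 40.0, 80.0])    # electron temperature (eV)
sxb_grid = np.array([2.1, 3.4, 4.8, 6.0, 6.9])       # ionizations per photon

def impurity_influx(brightness, te):
    """Impurity influx (particles m^-2 s^-1) from a line-of-sight brightness
    B (photons m^-2 s^-1 sr^-1), via Gamma = 4*pi * B * SXB(Te)."""
    sxb = np.interp(te, te_grid, sxb_grid)
    return 4.0 * np.pi * brightness * sxb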
Abstract:
With the focus of ITER on the transport and emission properties of tungsten, generating atomic data for complex species has received much interest. Focusing on impurity-influx diagnostics, we discuss recent work on heavy species. Perturbative approaches do not work well for near-neutral systems, so non-perturbative data are required, presenting a particular challenge for these influx diagnostics. Recent results on Mo+ are given as an illustration of how diagnostic applications can guide the theoretical calculations for such systems.
Abstract:
Electron-impact excitation collision strengths for transitions between all singly excited levels up to the n = 4 shell of helium-like argon and the n = 4 and 5 shells of helium-like iron have been calculated using a radiation-damped R-matrix approach. The theoretical collision strengths have been examined and associated with their infinite-energy limit values to allow the preparation of Maxwell-averaged effective collision strengths. These are conservatively considered to be accurate to within 20% at all temperatures, 3 x 10^5 - 3 x 10^8 K for Ar16+ and 10^6 - 10^9 K for Fe24+. They have been compared with the results of previous studies, where possible, and we find a broad accord. The corresponding rate coefficients are required for use in the calculation of derived, collisional-radiative, effective emission coefficients for helium-like lines for diagnostic application to fusion and astrophysical plasmas. The uncertainties in the fundamental collision data have been used to provide a critical assessment of the expected resultant uncertainties in such derived data, including redistributive and cascade collisional-radiative effects. The consequential uncertainties in the parts of the effective emission coefficients driven by excitation from the ground levels for the key w, x, y and z lines vary between 5% and 10%. Our results remove an uncertainty in the reaction rates of a key class of atomic processes governing the spectral emission of helium-like ions in plasmas.
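For reference, the Maxwell averaging referred to above takes the standard form below, with E_j the scattered-electron energy and omega_i the statistical weight of the lower level; the rate-coefficient expression is the usual conversion (with T_e in K), not a formula specific to this paper.

\[
\Upsilon_{ij}(T_e) = \int_0^{\infty} \Omega_{ij}(E_j)\, e^{-E_j/kT_e}\, d\!\left(\frac{E_j}{kT_e}\right),
\qquad
q_{i \to j} = \frac{8.63\times10^{-6}}{\omega_i\,\sqrt{T_e}}\; \Upsilon_{ij}\; e^{-\Delta E_{ij}/kT_e}\ \mathrm{cm^3\,s^{-1}}.
\]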
Abstract:
A first-stage collision database is assembled which contains electron-impact excitation, ionization, and recombination rate coefficients for Be, Be+, Be2+, and Be3+. The first-stage database is constructed using the R-matrix with pseudo-states, time-dependent close-coupling, and perturbative, distorted-wave methods. A second-stage collision database is then assembled which contains generalized collisional-radiative and radiated power loss coefficients. The second-stage database is constructed by solution of collisional-radiative equations in the quasi-static equilibrium approximation using the first-stage database. Both collision database stages reside in electronic form at the ORNL Controlled Fusion Atomic Data Center and in the ADAS database, and are easily accessed over the worldwide internet.
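As an illustration of how such stage-resolved coefficients are used, here is a minimal Python sketch of the quasi-static equilibrium ionization balance; the coefficient values are placeholders, not data from the database.

import numpy as np

# Placeholder effective (generalized collisional-radiative) coefficients in
# cm^3 s^-1 for Be -> Be+ -> Be2+ -> Be3+ -> Be4+; not database values.
S = np.array([1e-8, 5e-9, 1e-9, 2e-10])         # ionization, stage z -> z+1
alpha = np.array([1e-11, 5e-12, 2e-12, 1e-12])  # recombination, z+1 -> z

def equilibrium_fractions(S, alpha):
    """Quasi-static equilibrium charge-state fractions from the balance
    n_{z+1} / n_z = S_z / alpha_{z+1}."""
    n = np.ones(len(S) + 1)
    for z in range(len(S)):
        n[z + 1] = n[z] * S[z] / alpha[z]
    return n / n.sum()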
Abstract:
A first stage collision database is assembled which contains electron-impact effective collision strengths, and ionization and recombination rate coefficients for Li, Li+, and Li2+. The first stage database is constructed using the R-matrix with pseudo-states, time-dependent close-coupling, converged close-coupling, and perturbative distorted-wave methods. A second stage collision database is then assembled which contains generalized collisional-radiative and radiated power loss coefficients. The second stage database is constructed by solution of collisional-radiative equations in the quasi-static equilibrium approximation using the first stage database. Both collision database stages reside in electronic form at the ORNL Controlled Fusion Atomic Data Center and in the ADAS database, and are easily accessed over the worldwide internet.
Abstract:
Knowing exactly where a mobile entity is and monitoring its trajectory in real time have recently attracted much interest from both the academic and industrial communities, owing to the large number of applications they enable; nevertheless, this remains one of the most challenging problems from scientific and technological standpoints. In this work we propose a tracking system based on the fusion of position estimates provided by different sources, combined to obtain a final estimate with improved accuracy over that produced by each system individually. In particular, exploiting the availability of a Wireless Sensor Network as an infrastructure, a mobile entity equipped with an inertial system first obtains position estimates using both a Kalman Filter and a fully distributed positioning algorithm (the Enhanced Steepest Descent, which we recently proposed), and then combines the results using the Simple Convex Combination algorithm. Simulation results clearly show good performance in terms of the final accuracy achieved. Finally, the proposed technique is validated against real data taken from an inertial sensor provided by THALES ITALIA.
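A minimal Python sketch of a covariance-weighted convex combination of two position estimates follows; that the Simple Convex Combination algorithm uses exactly these inverse-covariance weights is our assumption, for illustration only.

import numpy as np

def convex_combination(x1, P1, x2, P2):
    """Fuse two position estimates so the one with the smaller
    covariance receives the larger weight; ignores cross-correlation."""
    Pi = np.linalg.inv(P1 + P2)
    x = P2 @ Pi @ x1 + P1 @ Pi @ x2    # fused position
    P = P1 @ Pi @ P2                   # fused covariance
    return x, P

# Made-up example: Kalman Filter fix vs. Enhanced Steepest Descent fix.
x_kf, P_kf = np.array([2.0, 3.1]), np.diag([0.4, 0.4])
x_esd, P_esd = np.array([2.3, 2.8]), np.diag([0.9, 0.9])
x_fused, P_fused = convex_combination(x_kf, P_kf, x_esd, P_esd)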
Abstract:
Nowadays there is an increasing number of location-aware mobile applications. However, these applications retrieve location only from the mobile device's GPS chip, which means that indoors, or in dense environments, they do not work properly. To provide location information everywhere, a pedestrian Inertial Navigation System (INS) is typically used, but such systems can have large estimation errors since, to keep the system wearable, they use low-cost, low-power sensors. In this work a pedestrian INS is proposed in which force sensors are combined with accelerometer data for better detection of the stance phase of the human gait cycle, leading to improvements in location estimation. Besides this sensor fusion, an information fusion architecture is proposed, based on information from GPS and several inertial units placed on the pedestrian's body, that learns the pedestrian's gait behaviour in order to correct the inertial sensors' errors in real time, further improving location estimation.
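A minimal Python sketch of the stance-detection idea follows: flag stance when the accelerometer magnitude is close to gravity and the force sensor confirms ground contact, so a zero-velocity update can bound the inertial drift. The thresholds are illustrative, not tuned values from this work.

import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def stance_detected(acc, force, acc_tol=0.3, force_min=0.8):
    """True during the stance phase: total acceleration near gravity
    (foot roughly still) AND the force sensor confirming ground contact.
    Thresholds are illustrative placeholders."""
    acc_ok = abs(np.linalg.norm(acc) - G) < acc_tol * G
    force_ok = force > force_min    # normalised foot-force reading
    return acc_ok and force_ok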
Abstract:
Target tracking with bearing-only sensors is a challenging problem when the target moves dynamically in complex scenarios. Besides the partial observability of such sensors, they have limited fields of view, occlusions can occur, etc. In those cases, cooperative approaches with multiple tracking robots are attractive, but the different sources of uncertain information need to be handled appropriately in order to achieve better estimates. Even though there exist probabilistic filters that can estimate the position of a target under uncertainty, bearing-only measurements usually bring additional problems with initialization and data association. In this paper, we propose a multi-robot triangulation method with a dynamic baseline that can triangulate bearing-only measurements in a probabilistic manner to produce 3D observations. This method is combined with a decentralized stochastic filter and used to tackle those initialization and data-association issues. The approach is validated with simulations and field experiments in which a team of aerial and ground robots with cameras tracks a dynamic target.
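A minimal Python sketch of turning two bearing-only measurements into a 3D observation follows; the least-squares midpoint construction is a standard choice, not necessarily the exact probabilistic formulation of the paper.

import numpy as np

def triangulate(p1, d1, p2, d2):
    """3D point closest to two bearing rays, each given by a robot
    position p and a unit direction d (the dynamic baseline is p2 - p1)."""
    A = np.column_stack((d1, -d2))
    t, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    q1 = p1 + t[0] * d1     # closest point on ray 1
    q2 = p2 + t[1] * d2     # closest point on ray 2
    return 0.5 * (q1 + q2)  # midpoint used as the 3D observation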
Abstract:
This Master's thesis presents a new unsupervised approach for detecting and segmenting urban regions in hyperspectral images. The proposed method requires three steps. First, in order to reduce the computational cost of our algorithm, a colour image of the spectral content is estimated. To this end, a non-linear dimensionality-reduction step, based on two complementary but contradictory criteria of good visualization, namely accuracy and contrast, is carried out to produce a colour rendering of each hyperspectral image. Then, to discriminate urban from non-urban regions, the second step consists in extracting a few discriminant (and complementary) features from this colour hyperspectral image. To this end, we extracted a series of discriminant parameters describing the characteristics of an urban area, which is mainly composed of man-made objects with simple, geometric, regular shapes. We used textural features based on grey levels, gradient magnitude, or parameters derived from the co-occurrence matrix, combined with structural features based on the local orientation of the image gradient and the local detection of line segments. To further reduce the computational complexity of our approach and to avoid the "curse of dimensionality" that arises when clustering high-dimensional data, we decided, in the last step, to classify each textural or structural feature individually with a simple K-means procedure and then to combine these coarse segmentations, obtained at low cost, with an efficient segmentation-map fusion model. The experiments presented in this report show that this strategy is visually effective and compares favourably with other methods for detecting and segmenting urban areas in hyperspectral images.
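To make the last step concrete, here is a minimal Python sketch (using scikit-learn) of clustering each feature map separately with K-means and fusing the coarse label maps; majority voting is a simple stand-in for the segmentation-map fusion model actually used in the thesis.

import numpy as np
from sklearn.cluster import KMeans

def fuse_segmentations(feature_maps):
    """Cluster each (H, W) feature map into two classes with K-means,
    then fuse the binary label maps by majority vote."""
    h, w = feature_maps[0].shape
    label_maps = []
    for f in feature_maps:
        x = f.reshape(-1, 1)
        lab = KMeans(n_clusters=2, n_init=10).fit_predict(x)
        # orient labels so that 1 = high feature response (candidate urban)
        if x[lab == 1].mean() < x[lab == 0].mean():
            lab = 1 - lab
        label_maps.append(lab)
    votes = np.stack(label_maps).mean(axis=0)  # fraction voting "urban"
    return (votes > 0.5).astype(int).reshape(h, w)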
Abstract:
This Master's thesis addresses motion detection in a sequence of images acquired with a static camera. The difficulty of this problem comes from the fact that recurrent or insignificant movements in the scene, such as the swaying of a branch, the shadow of an object, or ripples on a water surface, must be ignored and classified as belonging to the static regions of the scene. Most motion-detection methods used to date rely on the low-level principle of modelling and then subtracting the background. These methods are simple and fast but limited when the background is complex or noisy (snow, rain, shadows, etc.). This research proposes a technique for improving these algorithms whose main idea is to exploit and mimic two essential characteristics of the human visual system. To obtain a sharp view of an object (whether static or moving) and then analyse and identify it, the eye does not scan the scene continuously, but operates through a series of "sweeps", or saccades, around (characteristic points of) the object in question. During each fixation, while the eye remains relatively still, the image is projected onto the retina and then interpreted in log-polar coordinates centred on the point fixated by the eye. Low-level motion-detection processing must therefore operate on this transformed image, which is centred on one particular point (of view) of the scene. The next step (trans-saccadic integration, as in the Human Visual System (HVS)) then combines the motion detections obtained for the different centres of this transform, fusing the visual interpretations obtained from the different points of view.
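A minimal Python sketch of the retina-like log-polar resampling around one fixation point follows; the grid sizes and nearest-neighbour sampling are illustrative choices. Low-level motion detection would then run on the resampled image of each fixation before the detections are fused.

import numpy as np

def log_polar(img, cx, cy, n_rho=64, n_theta=128):
    """Resample a grayscale image into log-polar coordinates centred on the
    fixation point (cx, cy): resolution is highest near the fixation and
    falls off logarithmically with eccentricity, as in the retina."""
    h, w = img.shape
    r_max = np.hypot(max(cx, w - 1 - cx), max(cy, h - 1 - cy))
    rho = np.exp(np.linspace(0.0, np.log(r_max), n_rho))
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    x = (cx + rho[:, None] * np.cos(theta)).clip(0, w - 1).astype(int)
    y = (cy + rho[:, None] * np.sin(theta)).clip(0, h - 1).astype(int)
    return img[y, x]    # (n_rho, n_theta) log-polar image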
Abstract:
Fingerprint-based authentication systems are among the most cost-effective biometric authentication techniques employed for personal identification. As the database population increases, fast identification/recognition algorithms with high accuracy are required. Accuracy can be increased using multimodal evidence collected from multiple biometric traits. In this work, consecutive fingerprint images are taken, global singularities are located using directional field strength, and their local orientation vector is formulated with respect to the base line of the finger. Feature-level fusion is carried out and a 32-element feature template is obtained. A matching score is formulated for identification, and 100% accuracy was obtained for a database of 300 persons. The polygonal feature vector helps to reduce the size of the feature database from the present 70-100 minutiae features to just 32 features, and a lower matching threshold can be fixed compared with single-finger-based identification.
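As a sketch of how a fixed-length template enables fast matching, the following Python fragment scores two 32-element feature vectors; the metric and threshold are illustrative, not the scoring scheme of the paper.

import numpy as np

def match_score(a, b):
    """Similarity in [0, 1] between two 32-element templates:
    1 for identical vectors, lower as they diverge (illustrative metric)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.linalg.norm(a - b) / (np.linalg.norm(a) + np.linalg.norm(b) + 1e-12)
    return 1.0 - d

THRESHOLD = 0.9  # illustrative decision threshold

def is_same_finger(a, b):
    return match_score(a, b) >= THRESHOLD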
Abstract:
Context awareness, dynamic reconfiguration at runtime and heterogeneity are key characteristics of future distributed systems, particularly in ubiquitous and mobile computing scenarios. The main contributions of this dissertation are theoretical as well as architectural concepts facilitating information exchange and fusion in heterogeneous and dynamic distributed environments. Our main focus is on bridging the heterogeneity issues and, at the same time, considering uncertain, imprecise and unreliable sensor information in information fusion and reasoning approaches. A domain ontology is used to establish a common vocabulary for the exchanged information. We thereby explicitly support different representations for the same kind of information and provide Inter-Representation Operations that convert between them. Special account is taken of the conversion of associated meta-data that express uncertainty and imprecision. The Unscented Transformation, for example, is applied to propagate Gaussian normal distributions across highly non-linear Inter-Representation Operations. Uncertain sensor information is fused using the Dempster-Shafer Theory of Evidence, as it allows explicit modelling of partial and complete ignorance. We also show how to incorporate the Dempster-Shafer Theory of Evidence into probabilistic reasoning schemes such as Hidden Markov Models in order to consider the uncertainty of sensor information when deriving high-level information from low-level data. For all these concepts we provide architectural support as a guideline for developers of innovative information exchange and fusion infrastructures that are particularly targeted at heterogeneous dynamic environments. Two case studies serve as proof of concept. The first focuses on heterogeneous autonomous robots that have to spontaneously form a cooperative team in order to achieve a common goal. The second is concerned with an approach to user activity recognition which serves as the baseline for a context-aware adaptive application. Both case studies demonstrate the viability and strengths of the proposed solution and emphasize that the Dempster-Shafer Theory of Evidence should be preferred to pure probability theory in applications involving non-linear Inter-Representation Operations.
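To make the evidence-fusion step concrete, here is a minimal Python sketch of Dempster's rule of combination; the activity frame in the example is hypothetical.

from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass) with
    Dempster's rule; mass on empty intersections is the conflict K,
    removed by renormalisation."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sensors reporting on the frame {walking, standing}; mass assigned to
# the whole frame expresses partial ignorance.
W, S = frozenset({"walking"}), frozenset({"standing"})
fused = dempster_combine({W: 0.6, W | S: 0.4},
                         {W: 0.5, S: 0.2, W | S: 0.3})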
Abstract:
Flood modelling of urban areas is still at an early stage, partly because until recently topographic data of sufficiently high resolution and accuracy have been lacking in urban areas. However, Digital Surface Models (DSMs) generated from airborne scanning laser altimetry (LiDAR) having sub-metre spatial resolution have now become available, and these are able to represent the complexities of urban topography. The paper describes the development of a LiDAR post-processor for urban flood modelling based on the fusion of LiDAR and digital map data. The map data are used in conjunction with LiDAR data to identify different object types in urban areas, though pattern recognition techniques are also employed. Post-processing produces a Digital Terrain Model (DTM) for use as model bathymetry, and also a friction parameter map for use in estimating spatially-distributed friction coefficients. In vegetated areas, friction is estimated from LiDAR-derived vegetation height, and (unlike most vegetation removal software) the method copes with short vegetation less than ~1m high, which may occupy a substantial fraction of even an urban floodplain. The DTM and friction parameter map may also be used to help to generate an unstructured mesh of a vegetated urban floodplain for use by a 2D finite element model. The mesh is decomposed to reflect floodplain features having different frictional properties to their surroundings, including urban features such as buildings and roads as well as taller vegetation features such as trees and hedges. This allows a more accurate estimation of local friction. The method produces a substantial node density due to the small dimensions of many urban features.
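A minimal Python sketch of the friction-parameter mapping follows: a fixed Manning's n per mapped object class, with a vegetation-height-dependent value in vegetated cells. The class values and the height formula are placeholders, not the paper's calibration.

import numpy as np

# Placeholder Manning's n values per object class (not calibrated values).
N_BUILDING, N_ROAD, N_BARE = 0.30, 0.015, 0.035

def friction_map(classes, veg_height):
    """Per-cell Manning's n from a class map (array of strings) and a
    LiDAR-derived vegetation-height map, including short vegetation."""
    n = np.full(classes.shape, N_BARE)
    n[classes == "road"] = N_ROAD
    n[classes == "building"] = N_BUILDING
    veg = classes == "vegetation"
    # illustrative: friction grows with vegetation height, capped for trees
    n[veg] = np.clip(0.03 + 0.05 * veg_height[veg], 0.03, 0.15)
    return n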
Abstract:
The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited to distributed-memory systems with reliable interconnection networks, such as massively parallel processors and clusters of workstations. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or by high latency in communication paths. The lack of scalable and fault-tolerant global communication and synchronisation methods in large-scale systems has hindered the adoption of the K-Means algorithm for applications in large networked systems such as wireless sensor networks, peer-to-peer systems and mobile ad hoc networks. This work proposes a fully distributed K-Means algorithm (EpidemicK-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art sampling methods and shows that the proposed method overcomes the limitations of sampling-based approaches for skewed cluster distributions. The experimental analysis confirms that the proposed algorithm is very accurate and fault tolerant under unreliable network conditions (message loss and node failures) and is suitable for asynchronous networks of very large and extreme scale.
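A minimal Python sketch of the epidemic averaging principle behind such an algorithm follows: each node repeatedly averages its per-cluster statistics (sum vector and count) with a random neighbour, so every node converges to the global centroid without any global communication or synchronisation. The schedule shown is a simplification for illustration, not the EpidemicK-Means protocol itself.

import random
import numpy as np

def gossip_centroid(local_sums, local_counts, neighbours, rounds=500):
    """Gossip averaging of one cluster's statistics across nodes: pairwise
    averaging preserves the network-wide mean, so sum/count at every node
    approaches the centroid a centralised K-Means step would compute."""
    sums = [np.asarray(s, float) for s in local_sums]
    counts = [float(c) for c in local_counts]
    for _ in range(rounds):
        i = random.randrange(len(sums))
        j = random.choice(neighbours[i])
        sums[i] = sums[j] = 0.5 * (sums[i] + sums[j])
        counts[i] = counts[j] = 0.5 * (counts[i] + counts[j])
    return [s / c for s, c in zip(sums, counts)]  # per-node centroid estimate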