979 results for Correlation matching techniques
Abstract:
This paper focuses on two basic issues: the anxiety-generating nature of the interpreting task and the relevance of interpreter trainees’ academic self-concept. The first has already been acknowledged, although not extensively researched, in several papers, and the second has only been mentioned briefly in the interpreting literature. This study examines the relationship between the anxiety and academic self-concept constructs among interpreter trainees. An adapted version of the Foreign Language Anxiety Scale (Horwitz et al., 1986), the Academic Autoconcept Scale (Schmidt, Messoulam & Molina, 2008) and a background information questionnaire were used to collect data. Student's t-test results indicated that female students reported experiencing significantly higher levels of anxiety than male students. No significant gender difference in self-concept levels was found. Correlation analysis results suggested, on the one hand, that younger would-be interpreters suffered from higher anxiety levels and that students with higher marks tended to have lower anxiety levels; and, on the other hand, that younger students had lower self-concept levels and higher-ability students held higher self-concept levels. In addition, the results revealed that students with higher anxiety levels tended to have lower self-concept levels. Based on these findings, recommendations for interpreting pedagogy are discussed.
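The gender comparison and the anxiety–self-concept relationship above rest on two standard statistics: an independent-samples t-test and a Pearson correlation. A minimal dependency-free sketch in Python, using made-up illustrative scores (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def welch_t(x, y):
    """Welch's t statistic for two independent samples (unequal variances)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Hypothetical anxiety scores for two groups (illustrative only):
female = [68, 72, 75, 70, 74]
male = [60, 63, 65, 61, 66]
t = welch_t(female, male)             # positive t => higher mean anxiety in group 1

# Hypothetical anxiety vs. self-concept scores for the same students:
anxiety = [68, 72, 75, 70, 74, 60, 63, 65, 61, 66]
self_concept = [55, 50, 48, 53, 49, 62, 60, 58, 63, 57]
r = pearson_r(anxiety, self_concept)  # negative r => higher anxiety, lower self-concept
```

The t statistic would then be compared against the critical value for the relevant degrees of freedom to judge significance.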
Abstract:
Users need to be able to address in-air gesture systems, which means finding where to perform gestures and how to direct them towards the intended system. This is necessary for input to be sensed correctly and without unintentionally affecting other systems. This thesis investigates novel interaction techniques which allow users to address gesture systems properly, helping them find where and how to gesture. It also investigates audio, tactile and interactive light displays for multimodal gesture feedback; these can be used by gesture systems with limited output capabilities (like mobile phones and small household controls), allowing the interaction techniques to be used by a variety of device types. Tactile and interactive light displays are investigated in greater detail, as these are not as well understood as audio displays. Experiments 1 and 2 explored tactile feedback for gesture systems, comparing an ultrasound haptic display to wearable tactile displays at different body locations and investigating feedback designs. These experiments found that tactile feedback improves the user experience of gesturing by reassuring users that their movements are being sensed. Experiment 3 investigated interactive light displays for gesture systems, finding this novel display type effective for giving feedback and presenting information. It also found that interactive light feedback is enhanced by audio and tactile feedback. These feedback modalities were then used alongside audio feedback in two interaction techniques for addressing gesture systems: sensor strength feedback and rhythmic gestures. Sensor strength feedback is multimodal feedback that tells users how well they can be sensed, encouraging them to find where to gesture through active exploration. Experiment 4 found that users can do this with 51 mm accuracy, with combinations of audio and interactive light feedback leading to the best performance.
Rhythmic gestures are continuously repeated gesture movements which can be used to direct input. Experiment 5 investigated the usability of this technique, finding that users can match rhythmic gestures well and with ease. Finally, these interaction techniques were combined, resulting in a single new interaction for addressing gesture systems. Using this interaction, users could direct their input with rhythmic gestures while using the sensor strength feedback to find a good location for addressing the system. Experiment 6 studied the effectiveness and usability of this technique, as well as the design space for combining the two types of feedback. It found that this interaction was successful, with users matching 99.9% of rhythmic gestures and gesturing within 80 mm of target points. The findings show that gesture systems could successfully use this interaction technique to allow users to address them. Novel design recommendations for using rhythmic gestures and sensor strength feedback were created, informed by the experiment findings.
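Matching a rhythmic gesture amounts to recovering the repetition period of the user's movement and comparing it with each system's target rhythm. One simple, hypothetical way to do this (a sketch, not necessarily the thesis' implementation) is autocorrelation of a one-dimensional motion trace:

```python
import math

def dominant_period(signal, min_lag, max_lag):
    """Return the lag (in samples) with the highest autocorrelation,
    i.e. the strongest repetition period of the signal."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    best_lag, best_score = min_lag, float("-inf")
    for lag in range(min_lag, max_lag + 1):
        # Unnormalized autocorrelation at this lag.
        score = sum(centered[i] * centered[i + lag] for i in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Simulated hand-position trace: a sinusoid repeating every 40 samples.
trace = [math.sin(2 * math.pi * i / 40) for i in range(400)]
period = dominant_period(trace, 20, 80)

# Address the system whose target rhythm is closest to the observed period
# (device names and target periods are hypothetical):
targets = {"lamp": 40, "thermostat": 60}
addressed = min(targets, key=lambda k: abs(targets[k] - period))
```

A real system would additionally check that the match is strong enough before treating the movement as addressed input.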
Abstract:
Optical mapping of voltage signals has revolutionised the field of cardiac electrophysiology by providing the means to visualise changes in electrical activity at high temporal and spatial resolution, from the cellular to the whole-heart level, under both normal and disease conditions. The aim of this thesis was to develop a novel method of panoramic optical mapping using a single camera and to study myocardial electrophysiology in isolated Langendorff-perfused rabbit hearts. First, proper procedures for selection, filtering and analysis of the optical data recorded from the panoramic optical mapping system were established. This work was followed by extensive characterisation of the electrical activity across the epicardial surface of the preparation, investigating time- and heart-dependent effects. In an initial study, features of epicardial electrophysiology were examined as the temperature of the heart was reduced below physiological values. This manoeuvre was chosen to mimic the temperatures experienced during various levels of hypothermia in vivo, a condition known to promote arrhythmias. The facility for panoramic optical mapping allowed the extent of changes in conduction timing and in the pattern of ventricular activation and repolarisation to be assessed. In the main experimental section, changes in epicardial electrical activity were assessed under various pacing conditions in both normal hearts and in a rabbit model of chronic myocardial infarction (MI). In these experiments, there were significant changes in the pattern of electrical activation corresponding with the changes in pacing regime. These experiments demonstrated a negative correlation between activation time and action potential duration (APD), which was not maintained during ventricular pacing. This suggests that activation pattern is not the sole determinant of APD in intact hearts.
Lastly, a realistic 3D computational model of the rabbit left ventricle was developed to simulate the passive and active mechanical properties of the heart. The aim of this model was to infer further information from the experimental optical mapping studies. In future, it would be feasible to gain insight into the electrical and mechanical performance of the heart by simulating experimental pacing conditions in the model.
Abstract:
This dissertation presents the design of three high-performance successive-approximation-register (SAR) analog-to-digital converters (ADCs) using distinct digital background calibration techniques under the framework of a generalized code-domain linear equalizer. These digital calibration techniques effectively and efficiently remove the static mismatch errors in the analog-to-digital (A/D) conversion. They enable aggressive scaling of the capacitive digital-to-analog converter (DAC), which also serves as the sampling capacitor, to the kT/C limit. As a result, outstanding conversion linearity, high signal-to-noise ratio (SNR), high conversion speed, robustness, superb energy efficiency, and minimal chip area are accomplished simultaneously. The first design is a 12-bit 22.5/45-MS/s SAR ADC in a 0.13-μm CMOS process. It employs a perturbation-based calibration, based on the superposition property of linear systems, to digitally correct the capacitor mismatch error in the weighted DAC. With 3.0-mW power dissipation at a 1.2-V power supply and a 22.5-MS/s sample rate, it achieves a 71.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a 94.6-dB spurious-free dynamic range (SFDR). At the Nyquist frequency, the conversion figure of merit (FoM) is 50.8 fJ/conversion-step, the best FoM reported to date (2010) for 12-bit ADCs. The SAR ADC core occupies 0.06 mm2, while the estimated area of the calibration circuits is 0.03 mm2. The second proposed digital calibration technique is a bit-wise-correlation-based digital calibration. It utilizes the statistical independence of an injected pseudo-random signal and the input signal to correct the DAC mismatch in SAR ADCs. This idea is experimentally verified in a 12-bit 37-MS/s SAR ADC fabricated in 65-nm CMOS implemented by Pingli Huang.
This prototype chip achieves a 70.23-dB peak SNDR and an 81.02-dB peak SFDR, while occupying 0.12 mm2 of silicon area and dissipating 9.14 mW from a 1.2-V supply with the synthesized digital calibration circuits included. The third work is an 8-bit, 600-MS/s, 10-way time-interleaved SAR ADC array fabricated in a 0.13-μm CMOS process. This work employs an adaptive digital equalization approach to calibrate both intra-channel nonlinearities and inter-channel mismatch errors. The prototype chip achieves 47.4-dB SNDR, 63.6-dB SFDR, less than 0.30-LSB differential nonlinearity (DNL), and less than 0.23-LSB integral nonlinearity (INL). The ADC array occupies an active area of 1.35 mm2 and dissipates 30.3 mW, including synthesized digital calibration circuits and an on-chip dual-loop delay-locked loop (DLL) for clock generation and synchronization.
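The FoM figures quoted for these designs follow the standard Walden formula, FoM = P / (2^ENOB · fs) with ENOB = (SNDR − 1.76)/6.02. A quick check against the first design's low-frequency numbers (the quoted 50.8 fJ uses the lower Nyquist-input SNDR, so this comes out slightly better):

```python
def walden_fom(power_w, sndr_db, fs_hz):
    """Walden figure of merit in joules per conversion step."""
    enob = (sndr_db - 1.76) / 6.02          # effective number of bits
    return power_w / (2 ** enob * fs_hz)

# First design: 3.0 mW, 71.1-dB SNDR, 22.5 MS/s.
fom = walden_fom(3.0e-3, 71.1, 22.5e6)
fom_fJ = fom * 1e15                          # roughly 45 fJ/conversion-step
```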
Abstract:
Botnets, which consist of thousands of compromised machines, pose a significant threat to other systems by launching Distributed Denial of Service attacks, keylogging, and installing backdoors. In response to this threat, new effective techniques are needed to detect the presence of botnets. In this paper, we use an interception technique to monitor Windows Application Programming Interface (API) system calls made by communication applications. Existing approaches for botnet detection are based on finding bot traffic patterns. Our approach does not depend on finding patterns but rather monitors changes of behaviour in the system. In addition, we present our idea of detecting botnets based on log correlations from different hosts.
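The behaviour-change idea can be illustrated schematically (this is a sketch, not the paper's monitor): compare the distribution of API calls an application makes against a recorded baseline, and raise an alert when the distribution shifts. The API names and threshold below are hypothetical:

```python
from collections import Counter

def behaviour_shift(baseline_calls, observed_calls):
    """L1 distance between two API-call frequency distributions
    (0 = identical mix, 2 = completely disjoint)."""
    base = Counter(baseline_calls)
    obs = Counter(observed_calls)
    nb, no = sum(base.values()), sum(obs.values())
    apis = set(base) | set(obs)
    return sum(abs(base[a] / nb - obs[a] / no) for a in apis)

# Hypothetical call traces: normal traffic mix vs. a host that starts
# making registry and remote-thread calls typical of bot behaviour.
baseline = ["send", "recv", "connect"] * 50
infected = ["send", "recv", "connect"] * 10 + ["RegSetValue", "CreateRemoteThread"] * 40

assumed_threshold = 0.5                      # hypothetical alert threshold
alert = behaviour_shift(baseline, infected) > assumed_threshold
```

Correlating such per-host shift scores across machines would then point at coordinated (botnet-like) behaviour rather than a single anomalous host.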
Abstract:
Network Intrusion Detection Systems (NIDS) are computer systems which monitor a network with the aim of discerning malicious from benign activity on that network. While a wide range of approaches have met varying levels of success, most IDSs rely on having access to a database of known attack signatures which are written by security experts. Nowadays, in order to solve problems with false positive alerts, correlation algorithms are used to add additional structure to sequences of IDS alerts. However, such techniques are of no help in discovering novel attacks or variations of known attacks, something the human immune system (HIS) is capable of doing in its own specialised domain. This paper presents a novel immune algorithm for application to the IDS problem. The goal is to discover packets containing novel variations of attacks covered by an existing signature base.
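The human-immune-system analogy is commonly realised with negative selection: randomly generated detectors that match "self" (normal) traffic are discarded, so the survivors fire only on anomalous packets. A toy sketch over fixed-length bit strings, using the r-contiguous-bits matching rule (the encoding and rule are assumptions; the paper's algorithm may differ):

```python
import random

def matches(detector, sample, r):
    """r-contiguous-bits rule: match if detector and sample agree on r adjacent positions."""
    run = 0
    for d, s in zip(detector, sample):
        run = run + 1 if d == s else 0
        if run >= r:
            return True
    return False

def negative_selection(self_set, n_detectors, length, r, rng):
    """Keep only randomly generated detectors that match no self sample."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = [rng.randint(0, 1) for _ in range(length)]
        if not any(matches(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors

rng = random.Random(0)
normal = [[0] * 16, [0] * 8 + [1] * 8]       # toy "self" traffic signatures
dets = negative_selection(normal, 5, 16, 6, rng)

anomaly = [1, 0] * 8                          # a packet pattern unlike any self sample
is_anomalous = any(matches(d, anomaly, 6) for d in dets)  # may or may not fire for a small detector set
```

Detection coverage grows with the number of detectors; by construction, none of them can fire on the self set.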
Abstract:
Dissertation (master's degree)—Universidade de Brasília, Faculdade de Tecnologia, 2016.
Abstract:
The premise of automated alert correlation is to accept that false alerts from a low-level intrusion detection system are inevitable and to use attack models to explain the output in an understandable way. Several algorithms exist for this purpose which use attack graphs to model the ways in which attacks can be combined. These algorithms can be classified into two broad categories: scenario-graph approaches, which create an attack model starting from a vulnerability assessment, and type-graph approaches, which rely on an abstract model of the relations between attack types. Some research into improving the efficiency of type-graph correlation has been carried out, but this research has ignored the hypothesizing of missing alerts. This work presents a novel type-graph algorithm which unifies correlation and hypothesizing into a single operation. Our experimental results indicate that the approach is extremely efficient in the face of intensive alert streams and produces compact output graphs comparable to other techniques.
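A minimal illustration of type-graph correlation with one-step hypothesis of missing alerts (a sketch with a made-up attack-type graph, not the authors' algorithm):

```python
# Abstract attack-type graph: an edge A -> B means "type A can prepare type B".
TYPE_GRAPH = {
    "scan": ["exploit"],
    "exploit": ["privilege-escalation"],
    "privilege-escalation": ["exfiltration"],
}

def correlate(alerts):
    """Link each alert to the previous one if the type graph allows it,
    hypothesizing a single missing intermediate alert when it does not."""
    chain, hypothesized = [alerts[0]], []
    for prev, cur in zip(alerts, alerts[1:]):
        if cur in TYPE_GRAPH.get(prev, []):
            chain.append(cur)
            continue
        # Try a one-step hypothesis: prev -> missing -> cur.
        for missing in TYPE_GRAPH.get(prev, []):
            if cur in TYPE_GRAPH.get(missing, []):
                hypothesized.append(missing)
                chain.extend([missing, cur])
                break
    return chain, hypothesized

# The "exploit" alert was lost by the low-level IDS:
chain, hyp = correlate(["scan", "privilege-escalation", "exfiltration"])
```

Here the correlator both reconstructs the full chain and reports which alert it had to hypothesize, which is exactly the unification of the two operations described above.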
Abstract:
Each year, music piracy worldwide costs several billion dollars in economic losses, job losses, and lost worker earnings, as well as millions of dollars in lost tax revenue. Most music piracy stems from the rapid growth and ease of current technologies for copying, sharing, manipulating, and distributing musical data [Domingo, 2015], [Siwek, 2007]. Audio watermarking has been proposed to protect authors' rights and to locate the instants at which an audio signal has been tampered with. In this thesis, we propose using the bio-inspired sparse spike-graph representation (spikegram) to design a new method for locating tampering in audio signals; a new copyright-protection method; and, finally, a new perceptual attack, based on the spikegram, against audio watermarking systems. We first propose a technique for locating tampering in audio signals. To do so, we combine a modified spread spectrum (MSS) method with a sparse representation. We use an adapted perceptual matching pursuit technique (PMP [Hossein Najaf-Zadeh, 2008]) to generate a sparse representation (spikegram) of the input audio signal that is invariant to time shifts [E. C. Smith, 2006] and that accounts for masking phenomena as observed in human hearing. An authentication code is embedded in the coefficients of the spikegram representation, which are then combined with the masking thresholds. The watermarked signal is resynthesized from the modified coefficients, and the resulting signal is transmitted to the decoder.
At the decoder, to identify a tampered segment of the audio signal, the authentication codes of all intact segments are analysed. If the codes cannot be detected correctly, the segment is known to have been tampered with. We propose watermarking on the spread-spectrum principle (the MSS scheme) to obtain a high capacity in embedded watermark bits. In situations where the encoder and decoder are desynchronized, our method can still detect tampered pieces. Compared with the state of the art, our approach has the lowest error rate in detecting tampered pieces. We used the mean opinion score (MOS) test to measure the quality of the watermarked signals, and we evaluate the semi-fragile watermarking method by the bit error rate (number of erroneous bits divided by all bits submitted) under several attacks. The results confirm the superiority of our approach for locating tampered pieces in audio signals while preserving signal quality. Next, we propose a new technique for protecting audio signals. This technique is based on the spikegram representation of audio signals and uses two dictionaries (TDA, Two-Dictionary Approach). The spikegram is used to encode the host signal with a dictionary of gammatone filters. For watermarking, we use two different dictionaries that are selected according to the input bit to be embedded and the content of the signal. Our approach finds the appropriate gammatones (called watermark kernels) based on the value of the bit to be embedded, and embeds the watermark bits in the phase of the watermark gammatones. Moreover, the TDA is shown to be error-free in the absence of any attack.
It is shown that decorrelating the watermark kernels enables the design of a highly robust audio watermarking method. Experiments showed the best robustness for the proposed method, compared with several recent techniques, when the watermarked signal is corrupted by 32-kbps MP3 compression with a payload of 56.5 bps. We also studied the watermark's robustness when the new USAC codec (Unified Speech and Audio Coding) at 24 kbps is used; the payload is then between 5 and 15 bps. Finally, we use spikegrams to propose three new attack methods, which we compare with recent attacks such as 32-kbps MP3 and 24-kbps USAC transcoding. These attacks comprise the PMP attack, the inaudible-noise attack, and the sparse replacement attack. In the PMP attack, the watermarked signal is represented and resynthesized with a spikegram. In the inaudible-noise attack, inaudible noise is generated and added to the spikegram coefficients. In the sparse replacement attack, in each segment of the signal, the spectro-temporal features of the signal (the time spikes) are found using the spikegram, and similar time spikes are replaced with one another. To compare the effectiveness of the proposed attacks, we apply them to a spread-spectrum watermark decoder. It is shown that the sparse replacement attack reduces the normalized correlation of the spread-spectrum decoder by a larger factor than corruption by MP3 (32 kbps) or 24-kbps USAC transcoding.
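The spread-spectrum detection that these attacks target reduces to a normalized correlation between the received signal and a pseudo-random carrier, with the bit read from the correlation's sign. A bare-bones additive spread-spectrum sketch (ignoring the spikegram representation and masking thresholds; all parameters are illustrative):

```python
import random

def embed(host, carrier, bit, alpha=0.05):
    """Additive spread spectrum: add (+/-) alpha * carrier to the host signal."""
    sign = 1 if bit else -1
    return [h + sign * alpha * c for h, c in zip(host, carrier)]

def detect(received, carrier):
    """Normalized correlation with the carrier; positive => bit 1, negative => bit 0."""
    num = sum(x * c for x, c in zip(received, carrier))
    den = (sum(x * x for x in received) * sum(c * c for c in carrier)) ** 0.5
    rho = num / den
    return (1 if rho > 0 else 0), rho

rng = random.Random(42)
host = [rng.gauss(0, 1) for _ in range(4096)]           # stand-in for an audio frame
carrier = [rng.choice((-1.0, 1.0)) for _ in range(4096)]  # pseudo-random +/-1 sequence

marked = embed(host, carrier, bit=1)
bit, rho = detect(marked, carrier)
```

An attack succeeds to the extent that it pushes this normalized correlation toward zero while keeping the audio perceptually intact.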
Abstract:
Respiratory therapists must be able to care for their patients safely, efficiently and competently. They manage critically ill patients on life support systems. As members of the anesthesia team, they are responsible for the vulnerable patient undergoing surgery. Within all areas of the hospital they are called upon to make decisions and judgements concerning patient treatment. The environment found in the modern clinical setting is often stressful and demanding. The respiratory and anesthesia technology program has the responsibility of preparing competent practitioners who graduate not only with a broad knowledge base but with the affective competencies required to meet these challenges. Faculty and clinical instructors in the program of Respiratory and Anesthesia Technology have been troubled by rising attrition rates and weak performance of students. It is apparent that this is not a problem unique to Vanier College. The rationale for this study was multi-fold: to establish a definition of student success, to determine whether pre-admission academic abilities can predict success in the program, and to determine whether scores on a professional behavioural aptitudes tool can predict success in the clinical year of the program. Predictors were sought that could be used either in the pre-program admission policies or during the course of study in order to ensure success throughout the program and beyond. A qualitative analysis involving clinical instructors and faculty (n=5) was carried out to explore what success signified for a student in the respiratory and anesthesia program. While this process revealed that a student who obtained a grade above 77.5% was considered “successful”, the concept of success proved much more complex. Affective as well as cognitive and psychomotor abilities complete the model of the successful student.
Appropriate behaviour and certain character traits in a respiratory therapy student are considered to be significant elements leading to success. Assessment of students in their clinical year of the respiratory & anesthesia technology program currently includes little measurement of abilities in the affective domain, and the resulting grade becomes primarily a measure of academic and procedural skills. A quantitative study of preadmission records and final program grades was carried out on a single cohort of respiratory and anesthesia technology students who began the program in 2005 and graduated in 2008 (n=16). Data were collected and analysed (analysis of variance, Pearson correlation) to determine the relationship between preadmission grades and success. The lack of association between high school grades and grades in the program ran contrary to some of the findings in the literature, and it can be cautiously inferred that preadmission grades do not predict success in the program. To ascertain the predictive significance of evaluating professional behavioural skills for success in clinical internship, a behaviour assessment tool was used by clinical instructors and faculty to score each student during a rotation in their third year of the program, the clinical internship. The results of this analysis showed a moderately strong association between a high score on the behaviour assessment tool and final clinical grades. Therefore this tool may be effective in predicting success in the clinical year of the program. Refining the admissions process to meet the challenge and responsibility of turning out graduates who are capable of meeting the needs of the profession is difficult but essential. The capacity to predict which students possess the affective competencies necessary to cope and succeed in their clinical year is conceivably more important than their academic abilities.
Although these preliminary findings contribute, to some degree, to the literature that exists concerning methods of predicting success in a respiratory and anesthesia technology program, much data is still unknown. Further quantitative and qualitative research is required using a broader population base to substantiate the findings of this small study.
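With only 16 students, the strength of a Pearson correlation has to be weighed against its sampling error; a common check converts r to a t statistic with n − 2 degrees of freedom. A sketch with hypothetical scores (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def r_to_t(r, n):
    """t statistic for H0: rho = 0, with n - 2 degrees of freedom."""
    return r * math.sqrt((n - 2) / (1 - r * r))

# Hypothetical behaviour-tool scores vs. final clinical grades for 16 students:
behaviour = [62, 70, 55, 80, 75, 68, 90, 85, 60, 72, 78, 88, 65, 82, 74, 69]
grades = [70, 75, 66, 84, 80, 73, 92, 88, 68, 74, 82, 90, 71, 85, 79, 72]
r = pearson_r(behaviour, grades)
t = r_to_t(r, len(grades))   # compare with t_crit = 2.145 for df = 14, two-tailed 5%
```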
Abstract:
Interactions with mobile devices normally happen in an explicit manner, meaning they are initiated by the users. Yet, users are typically unaware that they also interact implicitly with their devices. For instance, our hand pose changes naturally when we type text messages. While the touchscreen captures finger touches, the hand movements during this interaction go unused. If this implicit hand movement is observed, it can be used as additional information to support or enhance the users’ text entry experience. This thesis investigates how implicit sensing can be used to improve the quality of existing, standard interaction techniques. In particular, it looks into enhancing front-of-device interaction through implicit sensing of back-of-device and hand movement. We propose to investigate this through machine learning techniques, examining how implicitly sensed data can be used to predict a certain aspect of an interaction. For instance, one of the questions that this thesis attempts to answer is whether hand movement during a touch targeting task correlates with the touch position. This is a complex relationship to understand, but it can be best explained through machine learning. Using machine learning as a tool, such correlation can be measured, quantified, understood and used to make predictions about future touch positions. Furthermore, this thesis also evaluates the predictive power of the sensor data. We show this through a number of studies. In Chapter 5 we show that probabilistic modelling of sensor inputs and recorded touch locations can be used to predict the general area of future touches on the touchscreen. In Chapter 7, using SVM classifiers, we show that data from implicit sensing during general mobile interactions is user-specific, which can be used to identify users implicitly. In Chapter 6, we also show that touch interaction errors can be detected from sensor data.
In our experiment, we show that there are sufficiently distinguishable patterns between normal interaction signals and signals strongly correlated with interaction error. In all studies, we show that performance gains can be achieved by combining sensor inputs.
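The user-identification pipeline amounts to learning a per-user model over sensor feature vectors and classifying new interactions. The thesis uses SVM classifiers; for a dependency-free sketch, a nearest-centroid classifier illustrates the same train/predict pipeline (feature names and values below are hypothetical):

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples_by_user):
    """One centroid per user over that user's sensor feature vectors."""
    return {user: centroid(vs) for user, vs in samples_by_user.items()}

def identify(model, features):
    """Predict the user whose centroid is nearest (Euclidean distance)."""
    def dist(c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(c, features)))
    return min(model, key=lambda u: dist(model[u]))

# Hypothetical 3-D features (e.g. mean grip tilt, touch pressure, hold tremor):
train_data = {
    "alice": [[0.20, 0.80, 0.10], [0.25, 0.75, 0.12]],
    "bob": [[0.70, 0.30, 0.50], [0.65, 0.35, 0.55]],
}
model = train(train_data)
who = identify(model, [0.22, 0.79, 0.11])
```

An SVM replaces the centroid-distance rule with a learned maximum-margin boundary, but the surrounding pipeline (per-user training data in, predicted identity out) is the same.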
Abstract:
Understanding a system as complex as a lithium-ion battery requires a diversity of techniques and knowledge. The particular challenge is to combine different techniques that allow a complete investigation while the battery is operating. Nowadays, research on Li-ion batteries (LIBs) is experiencing exponential growth in the development of new cathode materials. Accordingly, Li-rich and Ni-rich NMCs, which share the layered structure of LiMO2 oxides, have recently been proposed. Despite their promising performance, many issues remain to be resolved, and the materials need more in-depth characterisation for further commercial applications. This study presents LiMO2 materials, in particular with M = Co and Ni. We first focused on the synthesis of pure LiCoO2 and LiNiO2, followed by the mixed LiNi0.5Co0.5O2. Several synthesis routes were investigated for LCO, with the sol-gel (water-based) method showing the best performance. An accurate and systematic structural characterization was carried out, followed by appropriate electrochemical tests. Moreover, in situ techniques (in situ XRD and in situ OEMS) allowed a detailed investigation of the structural changes and gas evolution during the electrochemically driven processes.
Abstract:
The navigation of deep-space spacecraft requires accurate measurement of the probe’s state and attitude with respect to a body whose ephemerides may not be known with good accuracy. The heliocentric state of the spacecraft is estimated through radiometric techniques (ranging, Doppler, and Delta-DOR), while optical observables can be introduced to reduce the uncertainty in the relative position and attitude with respect to the target body. In this study, we analyze how simulated optical observables affect the estimation of parameters in an orbit determination problem, considering the case of ESA’s Hera mission towards the binary asteroid system composed of Didymos and Dimorphos. To this end, a shape model and a photometric function are used to create synthetic onboard camera images. Then, using a stereophotoclinometry technique on some of the simulated images, we create a database of maplets that describe the 3D geometry of the surface around a set of landmarks. The matching of maplets with the simulated images provides the optical observables, expressed as pixel coordinates in the camera frame, which are fed to an orbit determination filter to estimate a certain number of solve-for parameters. The noise introduced into the optical observables by the image processing can be quantified using the quality of the residuals as a metric, which is used to fine-tune the maplet-matching parameters. In particular, the best results are obtained when using small maplets with high correlation coefficients and occupation factors.
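The maplet-to-image matching hinges on a correlation score; zero-mean normalized cross-correlation (NCC) of a rendered maplet patch against image windows is the standard formulation. A sketch on a toy single-channel image (not Hera data, and independent of the mission's actual image processing):

```python
import math

def ncc(patch, window):
    """Zero-mean normalized cross-correlation of two equal-size 2-D patches, in [-1, 1]."""
    pa = [v for row in patch for v in row]
    wa = [v for row in window for v in row]
    mp, mw = sum(pa) / len(pa), sum(wa) / len(wa)
    num = sum((p - mp) * (w - mw) for p, w in zip(pa, wa))
    den = math.sqrt(sum((p - mp) ** 2 for p in pa) * sum((w - mw) ** 2 for w in wa))
    return num / den if den else 0.0

def best_match(image, patch):
    """Slide the patch over the image; return (row, col, score) of the NCC peak."""
    ph, pw = len(patch), len(patch[0])
    best = (0, 0, -2.0)
    for r in range(len(image) - ph + 1):
        for c in range(len(image[0]) - pw + 1):
            window = [row[c:c + pw] for row in image[r:r + ph]]
            score = ncc(patch, window)
            if score > best[2]:
                best = (r, c, score)
    return best

# Toy image with a bright landmark at rows 2-3, cols 3-4:
image = [[0] * 8 for _ in range(8)]
for r in (2, 3):
    for c in (3, 4):
        image[r][c] = 9
patch = [[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]]
row, col, score = best_match(image, patch)   # pixel coordinates of the landmark
```

The peak location plays the role of the optical observable, and the peak score corresponds to the correlation coefficient used above to select good maplets.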
Abstract:
Depth estimation from images has long been regarded as a preferable alternative to expensive and intrusive active sensors, such as LiDAR and ToF. The topic has attracted the attention of an increasingly wide audience thanks to the great number of application domains, such as autonomous driving, robotic navigation and 3D reconstruction. Among the various techniques employed for depth estimation, stereo matching is one of the most widespread, owing to its robustness, speed and simplicity of setup. Recent developments have been aided by the abundance of annotated stereo images, which has given deep learning the opportunity to thrive in a research area where deep networks can reach state-of-the-art sub-pixel precision in most cases. Despite these recent findings, stereo matching still presents many open challenges, two of them being finding pixel correspondences in the presence of objects that exhibit non-Lambertian behaviour and processing high-resolution images. Recently, a novel dataset named Booster, which contains high-resolution stereo pairs featuring a large collection of labeled non-Lambertian objects, has been released. That work showed that training state-of-the-art deep neural networks on such data improves the generalization capabilities of these networks, also in the presence of non-Lambertian surfaces. While a further step towards tackling the aforementioned challenges, Booster includes a rather small number of annotated images, and thus cannot satisfy the intensive training requirements of deep learning. This thesis investigates novel view synthesis techniques to augment the Booster dataset, with the ultimate goal of improving stereo matching reliability on high-resolution images that display non-Lambertian surfaces.
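At its simplest, stereo matching is a 1-D correspondence search: for each pixel in the left image, slide a small window along the same row of the right image and keep the disparity with the best match. A toy sketch using sum of absolute differences (far from the deep networks discussed above, but the same underlying problem):

```python
def sad(left_row, right_row, x, d, half):
    """Sum of absolute differences between windows centred at x (left) and x - d (right)."""
    return sum(abs(left_row[x + k] - right_row[x - d + k]) for k in range(-half, half + 1))

def disparity(left_row, right_row, x, max_d, half=1):
    """Best disparity for pixel x by exhaustive search along the epipolar line."""
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_d + 1):
        if x - d - half < 0:        # window would fall off the right image
            break
        cost = sad(left_row, right_row, x, d, half)
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Toy one-row pair: the right view is the left view shifted 3 pixels (disparity = 3).
left = [0, 0, 0, 0, 0, 10, 80, 10, 0, 0, 0, 0]
right = [0, 0, 10, 80, 10, 0, 0, 0, 0, 0, 0, 0]
d = disparity(left, right, x=6, max_d=5)   # depth is proportional to baseline/d
```

Non-Lambertian surfaces break this scheme precisely because the two views of the same point no longer look alike, so the window costs stop identifying the true disparity.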