872 results for signal detection theory
Abstract:
This paper presents a new approach for damage detection in structural health monitoring systems exploiting the coherence function between the signals from PZT (Lead Zirconate Titanate) transducers bonded to a host structure. The physical configuration of this new approach is similar to the configuration used in Lamb wave based methods, but the analysis and operation are different. A PZT excited by a signal with a wide frequency range acts as an actuator, and other PZTs are used as sensors to receive the signal. The coherences between the signals from the PZT sensors are obtained and the standard deviation of each coherence function is computed. It is demonstrated through experimental results that the standard deviation of the coherence between the signals from the PZTs in healthy and damaged conditions is a very sensitive index for detecting damage. Tests were carried out on an aluminum plate, and the results show that the proposed methodology could be an excellent approach for structural health monitoring (SHM) applications.
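A minimal sketch of the metric described above, assuming hypothetical broadband excitation and sensor signals and using SciPy's magnitude-squared coherence estimate; the sampling rate, segment length and noise model are illustrative, not the paper's setup.

import numpy as np
from scipy.signal import coherence

def coherence_std(x, y, fs, nperseg=1024):
    # Standard deviation of the magnitude-squared coherence between two sensor signals.
    _, cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    return float(np.std(cxy))

# Hypothetical wide-band excitation picked up by two PZT sensors (all values assumed).
fs = 1.0e6
n = 10_000
rng = np.random.default_rng(0)
excitation = rng.standard_normal(n)                  # broadband drive signal

sensor_a = excitation + 0.05 * rng.standard_normal(n)
healthy_b = 0.9 * excitation + 0.05 * rng.standard_normal(n)
damaged_b = 0.7 * excitation + 0.50 * rng.standard_normal(n)   # damage modeled as loss of coherent content

print("healthy index:", coherence_std(sensor_a, healthy_b, fs))
print("damaged index:", coherence_std(sensor_a, damaged_b, fs))
# A clear increase of the index relative to the healthy baseline indicates damage.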
Digital filtering of oscillations intrinsic to transmission line modeling based on lumped parameters
Abstract:
A correction procedure based on digital signal processing theory is proposed to smooth the numeric oscillations in electromagnetic transient simulation results from transmission line modeling based on an equivalent representation by lumped parameters. The proposed improvement to this well-known line representation is carried out with a Finite Impulse Response (FIR) digital filter used to remove the high-frequency components associated with the spurious numeric oscillations. To prove the efficacy of this correction method, a well-established frequency-dependent line representation using state equations is modeled with an FIR filter included in the model. The results obtained from the state-space model with and without the FIR filtering are compared with the results simulated by a line model based on distributed parameters and inverse transforms. Finally, the line model integrated with the FIR filtering is also tested and validated in simulations that include nonlinear and time-varying elements.
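For illustration only (not the authors' actual filter design), a linear-phase low-pass FIR filter of the kind described can be designed and applied with SciPy; the cutoff frequency, filter order and test waveform below are assumptions.

import numpy as np
from scipy.signal import firwin, filtfilt

fs = 1.0e6           # simulation sampling rate in Hz (assumed)
cutoff = 50e3        # cutoff below the spurious oscillation band (assumed)
numtaps = 101        # FIR filter length (assumed)

# Linear-phase low-pass FIR filter.
taps = firwin(numtaps, cutoff, fs=fs)

# Hypothetical transient: a 60 Hz component plus a spurious high-frequency ripple.
t = np.arange(0, 0.05, 1 / fs)
clean = np.sin(2 * np.pi * 60 * t)
spurious = 0.2 * np.sin(2 * np.pi * 120e3 * t)
simulated = clean + spurious

# Zero-phase filtering removes the ripple without delaying the waveform.
smoothed = filtfilt(taps, [1.0], simulated)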
Abstract:
As the methodologies available for the detection of positive selection from genomic data vary in terms of assumptions and execution, weak correlations are expected among them. However, if a given signal is consistently supported across different methodologies, it is strong evidence that the locus has been under past selection. In this paper, a straightforward frequentist approach based on Stouffer's method to combine P-values across different tests for evidence of recent positive selection in common variations, as well as strategies for extracting biological information from the detected signals, were described and applied to high-density single nucleotide polymorphism (SNP) data generated from dairy and beef cattle (taurine and indicine). The ancestral Bovinae allele state of over 440,000 SNPs is also reported. Using this combination of methods, highly significant (P < 3.17×10⁻⁷) population-specific sweeps pointing to candidate genes and pathways that may be involved in beef and dairy production were identified. The most significant signal was found in the Cornichon homolog 3 gene (CNIH3) in Brown Swiss (P = 3.82×10⁻¹²), and may be involved in the regulation of the pre-ovulatory luteinizing hormone surge. Other putative pathways under selection are glycolysis/gluconeogenesis, transcription machinery and chemokine/cytokine activity in Angus; the calpain-calpastatin system and ribosome biogenesis in Brown Swiss; and ganglioside deposition in milk fat globules in Gyr. The composite method, combined with the strategies applied to retrieve functional information, may be a useful tool for surveying genome-wide selective sweeps and providing insights into the source of selection.
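A minimal sketch of Stouffer's method as used above: per-test P-values are converted to Z-scores, summed and renormalized. The P-values here are hypothetical, and scipy.stats.combine_pvalues offers an equivalent built-in with method='stouffer'.

import numpy as np
from scipy.stats import norm

def stouffer_combine(pvalues, weights=None):
    # Combine one-sided P-values from k tests into a single P-value (Stouffer's method).
    p = np.asarray(pvalues, dtype=float)
    z = norm.isf(p)                          # per-test Z-scores
    w = np.ones_like(z) if weights is None else np.asarray(weights, dtype=float)
    z_combined = np.sum(w * z) / np.sqrt(np.sum(w ** 2))
    return norm.sf(z_combined)               # combined P-value

# Hypothetical P-values from different selection tests at one locus.
print(stouffer_combine([0.01, 0.04, 0.20]))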
Abstract:
Spontaneous adverse drug event (ADE) reporting is the main source of data for assessing the risk/benefit of drugs available in the pharmaceutical market. However, its major limitation is underreporting, which hinders and delays signal detection in pharmacovigilance (PhV). The aims were to identify techniques of educational intervention (EI) for the promotion of PhV by health professionals and to assess their impact. A systematic review was performed in the PUBMED, PAHO, LILACS and EMBASE databases from November 2011 to January 2012 and updated in March 2013. The search strategy included health descriptors and a manual search of the references cited by the selected papers. 101 articles were identified, of which 16 met the inclusion criteria. Most of these studies (10) were conducted in European hospitals, physicians were the health professionals most often targeted by EI (12), and the studies lasted from one month to two years. EI using multifaceted techniques increased the absolute number and rate of reports of adverse drug reactions (ADR) and of technical defects of health technologies, and also improved the quality of reports, with increased reporting of ADR classified as serious, unexpected, related to new drugs and with a high degree of causality. Multifaceted educational interventions for multidisciplinary health teams working at all healthcare levels, with sufficient duration to reach all professionals in the institution and covering issues related to medication errors and therapeutic ineffectiveness, must be validated, with the aim of standardizing good PhV practice and improving drug safety indicators.
Abstract:
Background: The Beck Depression Inventory (BDI) is used worldwide for detecting depressive symptoms. This questionnaire has been revised (1996) to match the DSM-IV criteria for a major depressive episode. We assessed the reliability and the validity of the Brazilian Portuguese version of the BDI-II for non-clinical adults. Methods: The questionnaire was applied to 60 college students on two occasions. Afterwards, 182 community-dwelling adults completed the BDI-II, the Self-Report Questionnaire, and the K10 Scale. Trained psychiatrists performed face-to-face interviews with the respondents using the Structured Clinical Interview (SCID-I), the Montgomery-Åsberg Depression Scale, and the Hamilton Anxiety Scale. Descriptive analysis, signal detection analysis (Receiver Operating Characteristics), correlation analysis, and discriminant function analysis were performed to investigate the psychometric properties of the BDI-II. Results: The intraclass correlation coefficient of the BDI-II was 0.89, and the Cronbach's alpha coefficient of internal consistency was 0.93. Taking the SCID as the gold standard, the cut-off point of 10/11 was the best threshold for detecting depression, yielding a sensitivity of 70% and a specificity of 87%. The concurrent validity (a correlation of 0.63-0.93 with scales applied simultaneously) and the predictive ability of the severity level (over 65% correct classification) were acceptable. Conclusion: The BDI-II is reliable and valid for measuring depressive symptomatology among Portuguese-speaking Brazilian non-clinical populations.
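A sketch of the kind of ROC-based cut-off selection reported above, with hypothetical BDI-II scores and SCID diagnoses and a Youden-index criterion; none of this is the study's data or its exact procedure.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: total BDI-II scores and SCID diagnoses (1 = depressed).
scores = np.array([4, 8, 9, 11, 12, 15, 18, 22, 7, 10, 13, 25, 5, 16, 19, 3])
diagnosis = np.array([0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0])

fpr, tpr, thresholds = roc_curve(diagnosis, scores)
youden = tpr - fpr                      # Youden's J for each candidate threshold
best = np.argmax(youden)
print(f"AUC = {roc_auc_score(diagnosis, scores):.2f}")
print(f"best cut-off = {thresholds[best]}, sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")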
Abstract:
Although functional magnetic resonance imaging (fMRI) of interictal spikes with simultaneous EEG recording has been investigated for several years to localize the brain structures involved in patients with focal epilepsy, it remains an experimental method. To obtain reliable results, improving the signal-to-noise ratio in the statistical analysis of the image data is particularly important. Earlier studies on event-related fMRI indicate a relationship between the frequency of single stimuli and the subsequent haemodynamic response in fMRI. To test for a possible influence of the frequency of interictal spikes on the signal response, 20 children with focal epilepsy were examined with EEG-fMRI. The data of 11 of these patients could be evaluated. In a two-fold analysis with the software package SPM99, the image data were first assigned to the "stimulus" or "rest" condition solely according to the occurrence of interictal spikes, regardless of the number of spikes per measurement time point (on/off analysis). In a second step, the "stimulus" conditions were additionally analysed according to the number of individual spikes (frequency-correlated analysis). In 5 of the 11 patients, these analyses showed an increase in the sensitivity and significance of the activations detected by fMRI. A higher specificity, however, could not be demonstrated. These results indicate a positive correlation between stimulus frequency and the subsequent haemodynamic response also for interictal spikes, which can be exploited for EEG-fMRI. In 6 patients no fMRI activation could be detected; possible technical and physiological causes for this are discussed.
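A rough sketch (not the SPM99 pipeline used in the study) of the two regressor types contrasted above: an on/off regressor versus one weighted by the spike count per scan, both convolved with a canonical haemodynamic response. The TR, scan count and spike counts are invented.

import numpy as np
from scipy.stats import gamma

TR = 3.0                                           # repetition time in seconds (assumed)
n_scans = 200
spike_counts = np.random.poisson(0.5, n_scans)     # hypothetical spikes per scan

def canonical_hrf(tr, duration=32.0):
    # Double-gamma haemodynamic response function sampled at the scan rate.
    t = np.arange(0, duration, tr)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

hrf = canonical_hrf(TR)

# On/off analysis: any spike within a scan marks the 'stimulus' condition.
onoff = (spike_counts > 0).astype(float)
reg_onoff = np.convolve(onoff, hrf)[:n_scans]

# Frequency-correlated analysis: the regressor is weighted by the number of spikes.
reg_weighted = np.convolve(spike_counts.astype(float), hrf)[:n_scans]
# Both regressors would then enter the design matrix of the fMRI general linear model.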
Abstract:
Future wireless communications systems are expected to be extremely dynamic, smart and capable of interacting with the surrounding radio environment. To implement such advanced devices, cognitive radio (CR) is a promising paradigm, focusing on strategies for acquiring information and learning. The first task of a cognitive system is spectrum sensing, which has been mainly studied in the context of opportunistic spectrum access, in which cognitive nodes must implement signal detection techniques to identify unused bands for transmission. In the present work, we study different spectrum sensing algorithms, focusing on their statistical description and on the evaluation of their detection performance. Moving from traditional sensing approaches, we consider the presence of practical impairments and analyze algorithm design. Far from the ambition of covering the broad field of spectrum sensing, we aim at providing contributions to its main classes of techniques. In particular, in the context of energy detection we studied the practical design of the test, considering the case in which the noise power is estimated at the receiver. This analysis allows a deeper understanding of the SNR wall phenomenon, providing the conditions for its existence and showing that the presence of the SNR wall is determined by the accuracy of the noise power estimation process. In the context of eigenvalue-based detectors, which can be adopted by multi-sensor systems, we studied the practical situation in which the noise power is unbalanced across the receivers. Then, we shift the focus from single-band detectors to wideband sensing, proposing a new approach based on information theoretic criteria. This technique is blind and, requiring no threshold setting, can be adopted even if the statistical distribution of the observed data is not known exactly. In the last part of the thesis we analyze some simple cooperative localization techniques based on weighted centroid strategies.
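A minimal energy-detection sketch along the lines discussed above: the noise power is estimated from a noise-only record and the threshold is set for a target false-alarm probability. The signal, noise level and Pfa are assumed; any error in the noise power estimate shifts the threshold, which is the root of the SNR-wall issue.

import numpy as np
from scipy.stats import chi2

def energy_detector(samples, noise_power_est, pfa=0.01):
    # Energy detection with an estimated noise power and a target false-alarm probability.
    # Real-valued samples assumed; under H0 the scaled energy follows a chi-square law.
    n = samples.size
    energy = np.sum(samples ** 2)
    threshold = noise_power_est * chi2.isf(pfa, df=n)   # Pfa-based threshold
    return energy > threshold

rng = np.random.default_rng(0)
noise_power_est = np.var(rng.normal(0, 1.0, 10_000))    # estimated from a noise-only record

noise_only = rng.normal(0, 1.0, 1_000)
signal_present = rng.normal(0, 1.0, 1_000) + 0.7 * np.sin(2 * np.pi * 0.01 * np.arange(1_000))
print(energy_detector(noise_only, noise_power_est), energy_detector(signal_present, noise_power_est))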
Abstract:
The most important oxidant for the degradation of volatile organic compounds (VOC) in the atmosphere is the hydroxyl radical (OH), which is in fast chemical equilibrium with the hydroperoxyl radical (HO2). Previous measurements and model comparisons of these radical species in forested regions have revealed significant gaps in the understanding of the underlying processes. Within this doctoral thesis, OH and HO2 radicals were measured by laser-induced fluorescence (LIF) in a coniferous forest in southern Finland during the HUMPPA–COPEC–2010 campaign (Hyytiälä United Measurements of Photochemistry and Particles in Air – Comprehensive Organic Precursor Emission and Concentration study) in summer 2010. Various components of the LIF instrument were improved. A modified method for determining the background signal (InletPreInjector technique) was integrated into the measurement setup and used for the first time to measure atmospheric OH. Intercomparison measurements between two instruments based on different methods for measuring OH radicals, chemical ionization mass spectrometry (CIMS) and the LIF technique, showed good agreement and demonstrate the capability of the modified LIF instrument to measure atmospheric OH concentrations accurately. Subsequently, the LIF instrument was positioned on the top platform of a 20 m tower to study radical chemistry just above the canopy, at the interface between the ecosystem and the atmosphere. Comprehensive measurements, including measurements of the total OH reactivity, were performed and analysed using steady-state calculations and a box model constrained by the measured data. For moderate OH reactivities (k′(OH) ≤ 15 s⁻¹), OH production rates calculated from measured concentrations of OH precursor species are consistent with production rates derived, under the steady-state assumption, from measurements of the total OH loss. Primary photolytic OH sources contribute up to one third of the total OH production. Under conditions of moderate OH reactivity, OH recycling was shown to be governed mainly by the reactions of HO2 with NO or O3. During periods of high OH reactivity (k′(OH) > 15 s⁻¹), additional recycling pathways that form OH directly, not via the reactions of HO2 with NO or O3, were identified. For hydroxyl radicals, box model simulations and measurements agree well (OHmod/OHobs = 1.04 ± 0.16), whereas HO2 mixing ratios are significantly underestimated by the simulation (HO2mod/HO2obs = 0.3 ± 0.2) and the simulated OH reactivity does not match the measured OH reactivity. The simultaneous underestimation of HO2 mixing ratios and OH reactivity, while OH concentrations are well reproduced, suggests that the OH reactivity missing in the simulation represents an as yet unaccounted-for HO2 source. Additional, OH-independent RO2/HO2 sources, such as the thermal decomposition of transported peroxyacetyl nitrate (PAN) and the photolysis of glyoxal, are indicated.
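A toy version of the steady-state budget check described above: the OH loss rate k′(OH)·[OH] is compared with the sum of primary photolytic production and recycling through HO2 + NO and HO2 + O3. All concentrations and rate coefficients below are illustrative, not campaign data.

# Steady-state budget check (all numbers are illustrative).
k_oh = 12.0          # total OH reactivity k'(OH) in s^-1 (assumed)
oh = 5.0e6           # measured OH concentration in molecules cm^-3 (assumed)

# Under steady state, total production must balance total loss:
loss_rate = k_oh * oh                      # molecules cm^-3 s^-1

# Production from primary photolytic sources and recycling via HO2 + NO / HO2 + O3.
p_primary = 1.5e7                          # e.g. photolytic sources (assumed)
ho2, no, o3 = 1.0e8, 2.5e9, 7.5e11         # concentrations in molecules cm^-3 (assumed)
k_ho2_no, k_ho2_o3 = 8.0e-12, 2.0e-15      # rate coefficients in cm^3 s^-1 (approximate)
p_recycling = ho2 * (k_ho2_no * no + k_ho2_o3 * o3)

print(f"loss = {loss_rate:.2e}")
print(f"prod = {p_primary + p_recycling:.2e}")
# A production shortfall relative to the loss rate points to missing OH (or HO2) sources.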
Abstract:
M. oryzae, the most important plant pathogen from a scientific and economic point of view, has evolved both conserved and unique mechanisms of signal transduction. Investigating these mechanisms and processes is essential for understanding differentiation processes during the pathogen-host interaction. In the first part of this work, the signalling pathway for osmoregulation, the High Osmolarity Glycerol (HOG) pathway, was investigated in M. oryzae for the first time by means of physiological experiments in corresponding mutant strains. Clear differences from the HOG pathway of S. cerevisiae could be demonstrated. The gene MoYPD1, which encodes the phosphotransfer protein MoYpd1p and had not previously been described in M. oryzae, was successfully inactivated. This inactivation is lethal in S. cerevisiae and many other fungi; in M. oryzae it resulted in a non-pathogenic albino mutant with impaired conidiogenesis. In particular, the function of the phosphotransfer protein MoYpd1p, both in the phosphorelay system of the HOG pathway and in the mode of action of the fungicide fludioxonil, was clearly demonstrated by Y2H and Western blot analyses. Decisive progress was made towards understanding the architecture and function of the HOG pathway, both as a physiological regulatory system for environmental stimuli and as a fungicide target in crop protection. It was shown that the two-component hybrid histidine kinase (HIK) MoSln1p acts as a signal sensor for salt stress and MoHik1p as a signal sensor for sugar stress. The involvement of the histidine kinases MoHik5p and MoHik9p as sensor proteins for hypoxia in the HOG pathway is plausible and was supported by initial results. The HOG pathway could thus be represented in several models: the models of signal recognition and transduction of osmotic stress and of hypoxia, and the mode of action of fludioxonil, were worked out to this extent for M. oryzae for the first time. The second part of this work represents the first comprehensive study of all ten HIK-encoding gene sequences identified in the genome of M. oryzae. These signalling proteins had not previously been the subject of scientific studies. The study begins with a phylogenetic classification of all investigated protein sequences into the different groups of fungal histidine kinases. A detailed phenotypic characterization of all HIK-encoding genes follows, carried out using mutants in which these genes were individually inactivated. The involvement of MoHik5p and MoHik9p as possible oxygen sensors in the HOG pathway could be documented, and subsequent Western blot analyses confirmed for the first time the activation of the HOG pathway under hypoxia-like conditions. Furthermore, MoHik5p and MoHik8p were identified as two new pathogenicity factors in M. oryzae. The non-pathogenic mutant strains ΔMohik5 and ΔMohik8 are impaired in conidiogenesis and unable to differentiate appressoria. The future use of these proteins as fungicide targets in protective crop protection is therefore conceivable.
Abstract:
There is increasing evidence that strain variation in Mycobacterium tuberculosis complex (MTBC) might influence the outcome of tuberculosis infection and disease. To assess genotype-phenotype associations, phylogenetically robust molecular markers and appropriate genotyping tools are required. Most current genotyping methods for MTBC are based on mobile or repetitive DNA elements. Because these elements are prone to convergent evolution, the corresponding genotyping techniques are suboptimal for phylogenetic studies and strain classification. By contrast, single nucleotide polymorphisms (SNP) are ideal markers for classifying MTBC into phylogenetic lineages, as they exhibit very low degrees of homoplasy. In this study, we developed two complementary SNP-based genotyping methods to classify strains into the six main human-associated lineages of MTBC, the "Beijing" sublineage, and the clade comprising Mycobacterium bovis and Mycobacterium caprae. Phylogenetically informative SNPs were obtained from 22 MTBC whole-genome sequences. The first assay, referred to as MOL-PCR, is a ligation-dependent PCR with signal detection by fluorescent microspheres and a Luminex flow cytometer, which simultaneously interrogates eight SNPs. The second assay is based on six individual TaqMan real-time PCR assays for singleplex SNP-typing. We compared MOL-PCR and TaqMan results in two panels of clinical MTBC isolates. Both methods agreed fully when assigning 36 well-characterized strains into the main phylogenetic lineages. The sensitivity in allele-calling was 98.6% and 98.8% for MOL-PCR and TaqMan, respectively. Typing of an additional panel of 78 unknown clinical isolates revealed 99.2% and 100% sensitivity in allele-calling, respectively, and 100% agreement in lineage assignment between both methods. While MOL-PCR and TaqMan are both highly sensitive and specific, MOL-PCR is ideal for classification of isolates with no previous information, whereas TaqMan is faster for confirmation. Furthermore, both methods are rapid, flexible and comparably inexpensive.
Abstract:
In the present study we introduce a novel task for the quantitative assessment of both originality and speed of individual associations. This 'BAG' (Bridge-the-Associative-Gap) task was used to investigate the relationships between creativity and paranormal belief. Twelve strong 'believers' and 12 strong 'skeptics' in paranormal phenomena were selected from a large student population (n > 350). Subjects were asked to produce single-word associations to word pairs. In 40 trials the two stimulus words were semantically indirectly related and in 40 other trials the words were semantically unrelated. Separately for these two stimulus types, response commonalities and association latencies were calculated. The main finding was that for unrelated stimuli, believers produced associations that were more original (had a lower frequency of occurrence in the group as a whole) than those of the skeptics. For the interpretation of the result we propose a model of association behavior that captures both 'positive' psychological aspects (i.e., verbal creativity) and 'negative' aspects (susceptibility to unfounded inferences), and outline its relevance for psychiatry. This model suggests that believers adopt a looser response criterion than skeptics when confronted with 'semantic noise'. Such a signal detection view of presence/absence judgments for loose semantic relations may help to elucidate the commonalities between creative thinking, paranormal belief and delusional ideation.
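A small sketch of the signal-detection framing suggested above: sensitivity (d′) and response criterion (c) computed from hit and false-alarm rates for 'related' judgments on indirectly related versus unrelated word pairs. The counts are invented and only illustrate what a looser criterion looks like.

from scipy.stats import norm

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    # Sensitivity d' and criterion c from a 2x2 detection table,
    # with a small correction to avoid rates of exactly 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

# Hypothetical counts: 'related' responses to indirectly related pairs (hits)
# and to unrelated pairs (false alarms), for one believer and one skeptic.
print("believer:", dprime_criterion(30, 10, 18, 22))   # looser criterion (lower c)
print("skeptic: ", dprime_criterion(28, 12, 6, 34))    # stricter criterion (higher c)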
Abstract:
In hostile environments such as scientific facilities where ionising radiation is a dominant hazard, reducing human interventions by increasing robotic operations is desirable. CERN, the European Organization for Nuclear Research, has around 50 km of underground scientific facilities, where wireless mobile robots could help in the operation of the accelerator complex, e.g. in conducting remote inspections and radiation surveys in different areas. The main challenges to be considered here are not only that the robots should be able to go over long distances and operate for relatively long periods, but also the underground tunnel environment, the possible presence of electromagnetic fields, radiation effects, and the fact that the robots shall in no way interrupt the operation of the accelerators. Having a reliable and robust wireless communication system is essential for the successful execution of such robotic missions and to avoid situations requiring manual recovery of the robots when a robot runs out of energy or loses its communication link. The goal of this thesis is to provide means to reduce the risk of mission failure and maximise the mission capabilities of wireless mobile robots with finite energy storage capacity working in a radiation environment with non-line-of-sight (NLOS) communications, by employing enhanced wireless communication methods. Towards this goal, the following research objectives are addressed in this thesis: predict the communication range before and during robotic missions, and optimise and enhance the wireless communication qualities of mobile robots by using robot mobility and employing multi-robot networks. This thesis provides introductory information on the infrastructures where mobile robots will need to operate, the tasks to be carried out by mobile robots and the problems encountered in these environments.
The reporting of the research work carried out to improve wireless communication comprises an introduction to the relevant radio signal propagation theory and technology, followed by an explanation of the research in the following stages: an analysis of the wireless communication requirements of mobile robots for different tasks in a selection of CERN facilities; predictions of energy and communication autonomies (in terms of distance and time) to reduce the risk of energy- and communication-related failures during missions; autonomous navigation of a mobile robot to find zone(s) of maximum radio signal strength to improve the communication coverage area; and autonomous navigation of one or more mobile robots acting as mobile wireless relay (repeater) points in order to provide a tethered wireless connection to a teleoperated mobile robot carrying out inspection or radiation monitoring activities in a challenging radio environment. The specific contributions of this thesis are outlined below. The first set of contributions consists of novel methods for predicting the energy autonomy and communication range(s) before and after deployment of the mobile robots in the intended environments. This is important in order to provide situational awareness and avoid mission failures. The energy consumption is predicted using power consumption models of the different components in a mobile robot; this energy prediction model paves the way for choosing energy-efficient wireless communication strategies. The communication range prediction is performed using radio signal propagation models and applies radio signal strength (RSS) filtering and estimation techniques with the help of Kalman filters and Gaussian process models. The second set of contributions consists of methods to optimise the wireless communication quality using novel spatial-sampling-based techniques that are robust to sensing and radio field noise and provide redundancy. Central finite difference (CFD) methods are employed to determine the 2-D RSS gradients, and robot mobility is used to optimise the communication quality and the network throughput. This method is also validated with a case study involving haptic teleoperation of wireless mobile robots, where an operator in a remote location can smoothly navigate a mobile robot in an environment with weak wireless signals. The third contribution is a robust stochastic position optimisation algorithm for multiple autonomous relay robots, which are used for wireless tethering of radio signals and thereby enhance the wireless communication quality. All the proposed methods and algorithms are verified and validated using simulations and field experiments with a variety of mobile robots available at CERN. In summary, this thesis offers novel methods and demonstrates their use to predict energy autonomy and wireless communication range, optimise robot position to improve communication quality, and enhance the communication range and wireless network qualities of mobile robots for use in hostile environments such as scientific facilities emitting ionising radiation. In simpler terms, a set of tools is developed in this thesis for improving, easing and making safer robotic missions in hostile environments.
This thesis validates, both in theory and in experiments, that mobile robots can improve wireless communication quality by exploiting robot mobility to dynamically optimise their positions and maintain connectivity even when the radio environment has non-line-of-sight characteristics. The methods developed in this thesis are well suited for easy integration in mobile robots and can be applied directly at the application layer of the wireless network. The proposed methods have outperformed other comparable state-of-the-art methods.
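A simplified sketch (not the thesis implementation) of the central-finite-difference idea mentioned above: the robot samples the RSS at small offsets around its position, estimates the 2-D gradient and steps along it to improve link quality. The propagation model, step sizes and start position are assumptions.

import numpy as np

def rss_field(p):
    # Hypothetical RSS map in dBm: a log-distance model around an access point at (0, 0).
    d = max(np.hypot(*p), 1.0)
    return -40.0 - 20.0 * np.log10(d)

def cfd_gradient(rss, p, h=0.5):
    # 2-D RSS gradient by central finite differences with sampling step h (metres).
    gx = (rss(p + [h, 0.0]) - rss(p - [h, 0.0])) / (2 * h)
    gy = (rss(p + [0.0, h]) - rss(p - [0.0, h])) / (2 * h)
    return np.array([gx, gy])

position = np.array([30.0, -20.0])          # initial robot position (assumed)
for _ in range(50):                         # gradient-ascent steps towards better coverage
    g = cfd_gradient(rss_field, position)
    position += 2.0 * g / (np.linalg.norm(g) + 1e-9)   # fixed 2 m step along the gradient
print(position, rss_field(position))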
Abstract:
This final-year project develops a set of teaching units aimed at improving the learning of digital signal processing theory through practical application. To this end, a series of laboratory exercises has been designed to allow students to reach an appropriate level of knowledge of the subject, acquire the intended competences and achieve the expected learning outcomes. To develop the project, an appropriate selection of digital signal processing topics was first made in relation to the expected learning outcomes; then a set of exercises based on a MATLAB and DSP work environment was designed and validated; and finally a laboratory manual was written that combines each theoretical part with its corresponding exercise. The objective of these exercises is to achieve a theory/practice balance that gets the most out of the subject in the laboratory, working mainly with the Code Composer Studio IDE together with a DSP-based development kit.
Design of electronic warfare and radar algorithms for implementation in real-time systems
Abstract:
This thesis is focused on the study and development of electronic warfare (EW) and radar algorithms for real-time implementation. The arrival of radar, radio and navigation systems in the military sphere led to the development of technologies to counter them; the objective of EW systems is therefore the control of the electromagnetic spectrum. Signals intelligence (SIGINT) is one of the EW functions, whose mission is to detect, collect, analyze, classify and locate all kinds of electromagnetic emissions. Electronic intelligence (ELINT) is the SIGINT subsystem devoted to radar signals. A real-time system is one whose correctness depends not only on the provided result but also on the time in which that result is obtained. Radar and EW systems must provide information as fast as possible on a continuous basis, so they can be defined as real-time systems. The introduction of real-time constraints implies a feedback process between the design of the algorithms and their hardware implementation, and a real-time constraint consists of two parameters: the latency and the area of the implementation. All the algorithms in this thesis have been implemented on field programmable gate array (FPGA) platforms, which present a good trade-off among performance, cost, power consumption and reconfigurability. The first part of the thesis is devoted to the study of different key subsystems of ELINT equipment: signal detection with channelized receivers, pulse parameter extraction, modulation classification for radar signals and passive location algorithms. The discrete Fourier transform (DFT) is a nearly optimal detector and frequency estimator for narrow-band signals buried in white noise, and the introduction of fast algorithms to calculate the DFT, known as the fast Fourier transform (FFT), reduces the complexity and the processing time of the DFT computation.
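A minimal sketch of FFT-based narrow-band detection in white noise, in the spirit described above: the periodogram bins act as a channelized detector and the threshold is set from a robust estimate of the noise floor. The sampling rate, tone and threshold factor are assumptions.

import numpy as np

rng = np.random.default_rng(1)
fs, n = 1.0e6, 4096                                  # sampling rate and FFT length (assumed)
t = np.arange(n) / fs
tone = 0.25 * np.sin(2 * np.pi * 150e3 * t)          # weak narrow-band emitter (assumed)
x = tone + rng.normal(0, 1.0, n)

spectrum = np.abs(np.fft.rfft(x)) ** 2 / n           # periodogram bins act as frequency channels
noise_floor = np.median(spectrum)                    # robust noise estimate
detections = np.flatnonzero(spectrum > 15 * noise_floor)
print(np.fft.rfftfreq(n, 1 / fs)[detections])        # detected frequencies in Hz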
These detection and efficiency properties have made the FFT one of the most common methods for narrow-band signal detection in real-time applications. An algorithm for real-time spectral analysis with user-defined bandwidth, instantaneous dynamic range and resolution is presented. The most characteristic parameters of a pulsed signal are its time of arrival (TOA) and pulse width (PW), and the estimation of these basic parameters is a fundamental task of ELINT equipment. A basic pulse parameter extractor (PPE) able to estimate these parameters is designed and implemented. The PPE may be used to perform a generic radar recognition process, to support an emitter location technique, or as the preprocessing stage of an automatic modulation classifier (AMC). Modulation classification is a difficult task in a non-cooperative environment. An AMC consists of two parts: signal preprocessing and the classification algorithm itself. Feature-based algorithms obtain different characteristics, or features, of the input signals; once these features are extracted, the classification is carried out by processing them. A feature-based AMC for pulsed radar signals with real-time requirements is studied, designed and implemented. Passive emitter location techniques can be divided into two classes: triangulation systems, in which the emitter location is estimated from the intersection of the lines of bearing created from the estimated directions of arrival, and quadratic position-fixing systems, in which the position is estimated through the intersection of iso-time-difference-of-arrival (TDOA) or iso-frequency-difference-of-arrival (FDOA) quadratic surfaces. Although only TDOA and FDOA estimation from time-of-arrival and frequency differences is implemented, different algorithms for TDOA, FDOA and position estimation are studied and analyzed. The second part is dedicated to FIR filter design and implementation for two different radar applications: wideband phased arrays with true-time-delay (TTD) filters, and the range improvement of an operational radar with no hardware changes in order to minimize costs. Wideband operation of phased arrays with phase shifters is unfeasible because time delays cannot be approximated by phase shifts; the presented solution is based on the substitution of the phase shifters by FIR discrete delay filters. The maximum range of a radar depends on the averaged signal-to-noise ratio (SNR) at the receiver. Among other factors, the SNR depends on the transmitted signal energy, that is, power times pulse width, and any hardware change implies high costs. The proposed solution lies in the use of a signal processing technique known as pulse compression, which consists of introducing an internal modulation within the pulse, decoupling range and resolution.
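A small sketch of the pulse-compression idea in the closing paragraph: a linear FM (chirp) modulation inside the pulse and a matched filter at the receiver, so that range (transmitted energy) and resolution are decoupled. The pulse width, bandwidth and echo delay are assumptions.

import numpy as np

fs = 20e6                     # sampling rate (assumed)
pw, bw = 20e-6, 5e6           # pulse width and chirp bandwidth (assumed)
t = np.arange(0, pw, 1 / fs)

# Linear FM pulse: the instantaneous frequency sweeps bw across the pulse width.
chirp = np.exp(1j * np.pi * (bw / pw) * t ** 2)

# Matched filter: correlate the received signal with the transmitted pulse.
rx = np.concatenate([np.zeros(300), chirp, np.zeros(300)])      # echo with some delay
compressed = np.abs(np.correlate(rx, chirp, mode='same'))

# The compressed peak has width ~1/bw instead of pw: the time-bandwidth product is the gain.
print("peak index:", int(np.argmax(compressed)), "compression gain =", int(pw * bw))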
Abstract:
Active noise control consists of attenuating the noise present in an acoustic environment by emitting a signal equal to, and in phase opposition with, the noise to be attenuated. The sum of both signals in the acoustic medium results in mutual cancellation, so that the residual noise level is much lower than the original.
The operation of these systems is based on the behaviour of wave phenomena described by Augustin-Jean Fresnel, Christiaan Huygens and Thomas Young, among others. Since the 1930s, prototypes of active noise control systems have been developed, though these first ideas were unrealizable in practice or required frequent manual adjustments that made them unusable. In the 1970s, the American researcher Bernard Widrow developed adaptive signal processing theory and the least mean squares (LMS) algorithm, making it possible to implement digital filters whose response adapts dynamically to varying environmental conditions. With the emergence of digital signal processors in the 1980s and their later evolution, active noise cancellation systems based on adaptive digital signal processing became feasible. Nowadays, active noise control systems are implemented in automobiles, aircraft, headphones and racks of professional equipment. Active noise control is based on the FxLMS algorithm, a modified version of the LMS adaptive filtering algorithm that compensates for the acoustic response of the environment; in this way, a noise reference signal can be filtered dynamically to emit the appropriate cancelling signal. As the acoustic cancellation space is limited to dimensions of about one tenth of the wavelength, noise reduction is only viable at low frequencies, with the limit commonly accepted to be around 500 Hz. At mid and high frequencies, passive conditioning and isolation methods must be used, as they offer very good results. The objective of this project is to develop a cancellation system for periodic noise, using consumer electronics and a DSP development kit based on a very low-cost processor. A series of C code modules has been developed for the DSP that apply the appropriate signal processing to the noise reference; this processed signal, once emitted, produces the acoustic cancellation. Using the implemented code, tests were performed in which the noise signal to be cancelled is generated inside the DSP itself. This signal is emitted through a loudspeaker simulating the noise source to be cancelled, and a second loudspeaker emits a version of the same signal filtered with the FxLMS algorithm. Tests with different versions of the algorithm yielded attenuations of 20 to 35 dB measured in narrow frequency bands around the generator frequency, and of 8 to 15 dB measured in broadband.
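A minimal FxLMS sketch in Python rather than the project's C modules for the DSP: the filter weights adapt using the reference filtered through an estimate of the secondary path. The tone, paths and step size below are assumptions.

import numpy as np

n, fs, f0 = 20_000, 8_000, 120             # samples, sample rate, tone frequency (assumed)
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)             # noise reference (periodic disturbance)
d = 0.9 * np.roll(x, 3)                    # disturbance at the error microphone (assumed primary path)

s = np.array([0.0, 0.6, 0.3])              # assumed secondary-path impulse response
s_hat = s.copy()                           # its estimate used by FxLMS

L, mu = 16, 0.01                           # adaptive filter length and step size
w = np.zeros(L)
xbuf = np.zeros(L)                         # reference history (newest first)
ybuf = np.zeros(s.size)                    # anti-noise history for the secondary path
fxbuf = np.zeros(L)                        # filtered-reference history
e = np.zeros(n)

for k in range(n):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[k]
    y = w @ xbuf                                    # anti-noise sample
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e[k] = d[k] + s @ ybuf                          # residual at the error microphone
    fx = s_hat @ xbuf[: s_hat.size]                 # reference filtered by the secondary-path estimate
    fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx
    w -= mu * e[k] * fxbuf                          # FxLMS weight update

print("residual power change (dB):",
      10 * np.log10(np.mean(e[-2000:] ** 2) / np.mean(e[:2000] ** 2)))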