940 results for Time-frequency analysis
Abstract:
The Quaternary history of metastable CaCO3 input and preservation within Antarctic Intermediate Water (AAIW) was examined by studying sediments from ODP Holes 818B (745 mbsl) and 817A (1015 mbsl), drilled in the Townsville Trough on the southern slope of the Queensland Plateau. These sites lie within the core of modern AAIW and near the aragonite saturation depth (~1000 m); thus, they are well positioned to monitor chemical changes that may have occurred within this watermass during the past 1.6 m.y. The percentages of fine aragonite, fine magnesian calcite, and whole pteropods (>355 µm) were used to separate the fine aragonite input signal from the CaCO3 preservation signal. Stable δ18O and δ13C isotopic ratios were determined for the planktonic foraminifer Globigerinoides sacculifer and, in Hole 818B, for the benthic foraminifer Cibicidoides spp. to establish the oxygen isotope stratigraphy and to study the relationship between intermediate- and shallow-water δ13C of ΣCO2, as well as the relationship between benthic foraminiferal δ13C and CaCO3 preservation within intermediate waters of the Townsville Trough. Data were converted from depth to age using oxygen isotope stratigraphy, nannostratigraphy, and foraminiferal biostratigraphy. Several long hiatuses and the absence of magnetostratigraphy did not permit time series analysis.
The principal results of the CaCO3 preservation study include the following: (1) a general increase in CaCO3 preservation between 0.9 and 1.6 Ma; (2) a CaCO3 dissolution maximum near 0.9 Ma, primarily expressed in the Hole 818B fine aragonite record; (3) an abrupt and permanent increase in fine aragonite content between 0.86 and 0.875 Ma in both Holes 818B and 817A, probably reflecting a dramatic increase in fine carbonate sediment production on the Queensland Plateau; (4) an improvement in CaCO3 preservation near 0.87 Ma, accompanying the increase in sediment input, indicated by the first appearance of whole pteropods in the deeper Hole 817A and a "spike" in the percentage of whole pteropods in Hole 818B; (5) a period of strong CaCO3 dissolution during the mid-Brunhes Chron, from 0.36 to 0.41 Ma; and (6) a complex CaCO3 preservation pattern between 0.36 Ma and the present, characterized by a general increase in CaCO3 preservation through time, with good preservation during interglacial stages and poor preservation during glacial stages. The long-term aragonite preservation histories for Holes 818B and 817A appear similar in general shape, although different in detail, to CaCO3 preservation records from the deep Indian and central equatorial Pacific oceans, as well as from intermediate-water sites in the Bahamas and the Maldives. All of these areas experienced CaCO3 dissolution at about 0.9 Ma and during the mid-Brunhes Chron. However, the late Quaternary (0 to 0.36 Ma) glacial-to-interglacial preservation pattern in Holes 818B and 817A is out of phase with CaCO3 preservation records for sediments deposited in Pacific deep and bottom waters.
The sharp increase in bank production and export from the Queensland Plateau and the coincident improvement in CaCO3 preservation between 0.86 and 0.875 Ma may have been synchronous with the initiation of the Great Barrier Reef, and roughly coincides with an increase in carbonate accumulation on the Bahama banks, in the western North Atlantic Ocean, and on Mururoa atoll, in the central South Pacific Ocean. The development of these reef systems during the middle Quaternary may be related to the transition in the frequency and amplitude of global sea-level change from 41-k.y., low-amplitude cycles prior to 0.9 Ma to 100-k.y., high-amplitude cycles after 0.73 Ma. Carbon isotopic analyses show that benthic foraminiferal δ13C values (Cibicidoides spp.) have been heavier than planktonic foraminiferal δ13C values (G. sacculifer) throughout most of the last 0.54 m.y., which may indicate that 13C-enriched intermediate water (AAIW) occupied the Townsville Trough during much of the late Quaternary. Furthermore, both planktonic and benthic foraminiferal δ13C values are often heaviest during interglacial-to-glacial transitions and lightest during glacial-to-interglacial transitions. We suggest that this pattern results from changes in the preformed δ13C of ΣCO2 of AAIW and may reflect changes in nutrient utilization by primary producers in Antarctic surface waters, changes in the δ13C of upwelled Circumpolar Deep Water, or changes in the extent and/or temperature of equilibration between surface water and atmospheric CO2 within the Antarctic Polar Frontal Zone (the source area for AAIW). Finally, the poor correlation between the percentage of whole pteropods (aragonite preservation) and the δ13C of Cibicidoides spp. may result from a decoupling of δ13C from CO2 due to the numerous and complex variables that combine to produce the preformed δ13C of AAIW.
Abstract:
During Leg 127, the formation microscanner (FMS) logging tool was used as part of an Ocean Drilling Program (ODP) logging program for only the second time in the history of the program. Resistivity images, also known as FMS logs, were obtained at Sites 794 and 797 that covered nearly the complete Yamato Basin sedimentary sequence to a depth below 500 mbsf. The FMS images from these two sites, at the northeastern and southwestern corners of the Yamato Basin, were thus amenable to comparison. A strong visual correlation was noticed between the FMS logs taken in Holes 794B and 797C in an upper Miocene interval (350-384 mbsf), although the two sites are approximately 360 km apart. In this interval, the FMS logs showed a series of more resistive thin beds (10-200 cm) alternating with relatively lower-resistivity layers: a pattern manifested as alternating dark (low resistivity) and light (high resistivity) banding in the FMS images. We attribute this layering to interbedding of chert and porcellanite layers, a common lithologic sequence throughout Japan (Tada and Iijima, 1983, doi:10.1306/212F82E7-2B24-11D7-8648000102C1865D). Spatial frequency analysis of this interval of dominant dark-light banding showed spatial cycles with periods of 1.1-1.3 m and 0.6 m. This pronounced layering and the correlation between the two sites terminate at 384 mbsf, coincident with the opal-CT to quartz transition at Site 794. We think the correlation in the FMS logs might well extend further back into the middle Miocene, but the opal-CT to quartz transition obscures this layering below 384 mbsf. Although 34 m is only a small part of the core recovered at these two sites, it is significant because it represents an area of extremely poor core recovery and an interval for which a near-depositional hiatus was postulated for Site 797, but not for Site 794.
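A spatial frequency analysis of this kind can be sketched as a plain FFT of the resistivity-versus-depth series. The cycle lengths below (1.2 m and 0.6 m) echo the periods reported in the abstract, but the sampling interval, window length, and synthetic log are illustrative assumptions, not Leg 127 data.

```python
import numpy as np

def spatial_cycles(log, dz, n_peaks=2):
    """Return the dominant spatial periods (m) of a depth-sampled log.

    log: 1-D array of resistivity values sampled every dz metres.
    """
    x = log - np.mean(log)                    # remove the DC level
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=dz)     # cycles per metre
    order = np.argsort(spec[1:])[::-1] + 1    # rank peaks, skipping bin 0
    return [1.0 / freqs[i] for i in order[:n_peaks]]

# Illustrative stand-in for a banded interval: two superimposed
# chert/porcellanite cycles of 1.2 m and 0.6 m, sampled every 2.5 cm
# over a 36 m window (chosen so both cycles fit the window exactly).
dz = 0.025
z = np.arange(350.0, 386.0, dz)
log = np.sin(2 * np.pi * z / 1.2) + 0.5 * np.sin(2 * np.pi * z / 0.6)
print(sorted(spatial_cycles(log, dz)))        # periods of 0.6 m and 1.2 m
```

In practice the FMS resistivity curve would replace the synthetic `log`, and a window taper would be applied before the FFT to limit spectral leakage.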
Abstract:
At the western continental margin of the Barents Sea, 75°N, hemipelagic sediments provide a record of Holocene climate change with a time resolution of 10-70 years. Planktic foraminifera counts reveal a very early Holocene thermal optimum at 10.7-7.7 kyr BP, with summer sea-surface temperatures (SST) of 8°C and a much enhanced West Spitsbergen Current. There was a short cooling between 8.8 and 8.2 kyr BP. In the middle and late Holocene, summer SST dropped to 2.5°-5.0°C, indicative of reduced Atlantic heat advection, except for two short warmings near 2.2 and 1.6 kyr BP. Distinct quasi-periodic spikes in the coarse sediment fraction (with large portions of lithic grains and benthic and planktic foraminifera) record cascades of cold, dense winter water down the continental slope as a result of enhanced seasonal sea-ice formation and storminess on the Barents shelf over the entire Holocene. The spikes primarily cluster near recurrence intervals of 400-650 and 1000-1350 years when traced over the entire Holocene, but follow significant 885-/840- and 505-/605-year periodicities in the early Holocene. These non-stationary periodicities mimic the Greenland 10Be variability, a tracer of solar forcing. Further significant Holocene periodicities of 230, (145), and 93 years come close to the de Vries and Gleissberg solar cycles.
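Sediment records like this one are unevenly spaced in time, so periodicities of this kind are commonly screened with a Lomb-Scargle periodogram rather than a plain FFT. The sketch below is illustrative only: the 10-70 yr sampling resolution is taken from the abstract, while the ~500-yr spike recurrence and the noise level are assumptions.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
# Unevenly spaced ages (yr BP), mimicking a 10-70 yr sampling resolution.
t = np.cumsum(rng.uniform(10, 70, 300))
# Synthetic coarse-fraction series with an assumed ~500-yr spike recurrence.
y = np.sin(2 * np.pi * t / 500.0) + 0.3 * rng.standard_normal(t.size)

periods = np.linspace(200, 2000, 2000)     # candidate periods (yr)
omega = 2 * np.pi / periods                # angular frequencies for SciPy
power = lombscargle(t, y - y.mean(), omega, normalize=True)
print(periods[np.argmax(power)])           # recovers a period near 500 yr
```

Significance levels for the recovered peaks would normally be estimated against a red-noise background before interpreting them as solar-type cycles.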
Abstract:
We use the oxygen isotopic composition of the planktonic foraminifer Globigerinoides ruber (white) from Ocean Drilling Program Site 1058 in the subtropical northwestern Atlantic to construct a high-resolution (~800 year) climate record spanning the mid-Pleistocene climate transition (~410 ka to 1350 ka). We investigate whether millennial-scale instabilities in the proxy record are associated with the extent of continental glaciation. G. ruber δ18O values display high-frequency fluctuations throughout the record, but the amplitude about mean glacial and interglacial δ18O values increases at marine isotope stage (MIS) 22 (880 ka) and is highest during MIS 12. These observations support the idea that millennial-scale climate instabilities are associated with ice sheet size. Time series analysis shows that these variations have a significant concentration of spectral power centered on periods of ~10-12 ka and ~5 ka. The timing of these fluctuations agrees well with the periodicities of the second and fourth harmonics, respectively, of precessional forcing at the equator. An insolation-based origin of the millennial-scale instabilities would be independent of ice volume and would explain the presence of these fluctuations before the mid-Pleistocene climate transition as well as during interglacial intervals (e.g., MIS 37 and 17). Because the amplitude of the millennial-scale variations increases during the mid-Pleistocene transition, feedback mechanisms associated with the growth of large, 100-ka-paced polar ice sheets may be important amplifiers of regional surface-water hydrographic changes.
Abstract:
Since the seminal work of Hays et al. (1976), a plethora of studies has demonstrated a correlation between orbital variations and climatic change. However, information on how changes in orbital boundary conditions affected the frequency and amplitude of millennial-scale climate variability is still fragmentary. Marine Isotope Stage (MIS) 19, an interglacial centred at around 785 ka, provides an opportunity to pursue this question and to test the hypothesis that long-term processes set up the boundary conditions within which short-term processes operate. Like the current interglacial, MIS 19 is characterised by a minimum of the 400-kyr eccentricity cycle, subdued amplitude of precessional changes, and small-amplitude variations in insolation. Here we examine the record of climatic conditions during MIS 19 using high-resolution stable isotope records from benthic and planktonic foraminifera from a sedimentary sequence in the North Atlantic (Integrated Ocean Drilling Program Expedition 306, Site U1313) in order to assess the stability and duration of this interglacial and to evaluate the climate system's response in the millennial band to known orbitally induced insolation changes. Benthic and planktonic foraminiferal δ18O values indicate relatively stable conditions during the peak warmth of MIS 19, but sea-surface and deep-water reconstructions start to diverge during the transition towards glacial MIS 18, when large, cold excursions disrupt the surface waters, whereas low-amplitude millennial-scale fluctuations persist in the deep waters as recorded by the oxygen isotope signal. The glacial inception occurred at ~779 ka, in agreement with an increased abundance of tetra-unsaturated alkenones reflecting the influence of icebergs, associated meltwater pulses, and high-latitude waters at the study site.
After combining the new results with previous data from the same site, and using a variety of time series analysis techniques, we evaluate the evolution of millennial climate variability in response to changing orbital boundary conditions during the Early-Middle Pleistocene. Suborbital variability in both surface- and deep-water records is mainly concentrated at a period of ~11 kyr and, additionally, at ~5.8 and ~3.9 kyr in the deep ocean; these periods correspond to harmonics of precession-band oscillations. The fact that the response at the 11-kyr period increased over the same interval during which the amplitude of the response to the precessional cycle increased supports the notion that most of the variance in the 11-kyr band in the sedimentary record is nonlinearly transferred from precession-band oscillations. Considering that these periodicities are important features of equatorial and intertropical insolation, these observations are in line with the view that low-latitude regions play an important role in the response of the climate system to astronomical forcing. We conclude that orbitally induced insolation is of fundamental importance in regulating the timing and amplitude of millennial-scale climate variability.
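The nonlinear transfer of variance from the ~23-kyr precession band to its ~11.5-kyr harmonic can be illustrated with a toy model: passing a precession-paced sinusoid through a simple nonlinearity (here half-wave rectification, an assumed stand-in for an asymmetric climate response) creates spectral power at exactly half the forcing period.

```python
import numpy as np

dt = 0.1                                   # time step (kyr)
t = np.arange(0, 2300, dt)                 # 100 precession cycles
precession = np.sin(2 * np.pi * t / 23.0)  # idealized 23-kyr forcing

# A crude nonlinear climate response: the system reacts only to one
# sign of the forcing (half-wave rectification).
response = np.clip(precession, 0.0, None)

spec = np.abs(np.fft.rfft(response - response.mean()))
freq = np.fft.rfftfreq(t.size, d=dt)       # cycles per kyr
peaks = freq[np.argsort(spec)[::-1][:2]]   # two strongest spectral lines
print(sorted(1.0 / peaks))                 # periods of 11.5 and 23 kyr
```

Any even nonlinearity in the recording or climate system produces such harmonics, which is why suborbital power at ~11 kyr alone does not identify the mechanism, only the precessional parentage.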
Abstract:
The recession of coastal cliffs is a widespread phenomenon on rocky shores exposed to the combined action of the marine and meteorological processes at work along the shoreline. It manifests violently and sporadically as gravitational ground movements, and can cause material and human losses. Although knowledge of these erosion hazards is vital for proper coastal management, the development of predictive cliff-erosion models is limited by the complex interactions between environmental processes and material properties over a range of temporal and spatial scales. Published prediction models are scarce and have important drawbacks: extrapolation models, which project historical records into the future; empirical models, which use historical records to study the system response to a change in a single parameter; stochastic models, which determine the timing and magnitude of future events by extrapolating probability distributions drawn from historical catalogues; process-response models, whose stability and error propagation remain unexplored; and PDE-based models, which are computationally expensive and not very accurate. The first part of this thesis describes the main features of the most recent models of each type and, for those most commonly used, gives their ranges of application, advantages, and disadvantages. Finally, as a synthesis of the most relevant processes covered by the reviewed models, a conceptual diagram of coastal recession is presented. This conceptual model gathers the most influential processes that must be taken into account when using or creating a coastal recession model to evaluate the hazard (timing/frequency) of the phenomenon over the short to medium term. A new process-response coastal recession model developed in this thesis is designed to incorporate the geomechanical behaviour of coastal cliffs composed of materials whose compressive strength does not exceed 5 MPa.
The model simulates the spatial and temporal evolution of a 2D cliff profile that can consist of heterogeneous materials. To do so, the marine dynamics (mean sea level, waves, tides, and seasonal lake-level changes) are coupled with the evolution of the land: erosion, cliff-face failure, and formation of the associated protective colluvial wedge. In its different variants, the model can include analysis of the geomechanical stability of the materials, the effect of debris present at the cliff foot, groundwater effects, beach and run-up effects, and changes in mean sea level or (seasonal or inter-annual) changes in mean lake level. The discretization error of the model and its propagation in time are analysed against exact solutions for the first two tidal periods under different numerical approximations in both time and space; the results justify the choices that minimize the error and identify the most suitable approximation methods for subsequent modelling. The model is validated against field data at various locations of coastline retreat on the Holderness Coast, Yorkshire, UK, and on the north coast of Lake Erie, Ontario, Canada. The results represent an important step forward in linking material properties to the processes of cliff recession, in considering the effects of groundwater and profile oversteepening, and in capturing the response to changing conditions caused by climate change (e.g. mean sea level, lake-level changes).
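A process-response recession rule of this family can be sketched, in heavily simplified form, with a forward-Euler step: the cliff toe retreats at a rate set by a lumped erodibility constant whenever the tidally modulated water level reaches it. This is an illustrative toy, not the thesis model; the rate constant, tidal amplitude, and sea-level-rise value are all assumed.

```python
import numpy as np

def recession(years, k=0.02, tidal_amp=1.0, sl_rise=0.003):
    """Forward-Euler sketch of cumulative cliff-toe retreat X(t) in metres.

    Erosion acts only while the water level (semidiurnal tide plus a slow
    mean-sea-level rise, m/yr) is above the toe; k (m/yr of exposure)
    lumps rock strength and wave power into a single rate constant.
    """
    dt = 1.0 / 705.0                       # ~semidiurnal tide steps (yr)
    t = np.arange(0.0, years, dt)
    water = tidal_amp * np.sin(2 * np.pi * 705.0 * t) + sl_rise * t
    wet = water > 0.0                      # water reaches the cliff toe
    return np.cumsum(k * dt * wet)         # retreat: monotone, non-negative

x = recession(100.0)
print(x[-1])   # total retreat after a century, in metres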
Abstract:
Compile-time program analysis techniques can be applied to Web service orchestrations to prove or check various properties. In particular, service orchestrations can be subjected to resource analysis, in which safe approximations of upper and lower resource usage bounds are deduced. A uniform analysis can be performed simultaneously for different generalized resources that can be directly correlated with cost- and performance-related quality attributes, such as invocations of partners, network traffic, number of activities, iterations, and data accesses. The resulting safe upper and lower bounds do not depend on probabilistic assumptions, and are expressed as functions of the size or length of data components from an initiating message, using a fine-grained structured data model that corresponds to the XML style of information structuring. The analysis is performed by transforming a BPEL-like representation of an orchestration into an equivalent program in another programming language for which the appropriate analysis tools already exist.
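The core idea of composing safe lower/upper bounds over orchestration constructs can be sketched with simple interval arithmetic. The `Bounds` class and the `invocations` example below are hypothetical illustrations of the composition rules (sequence adds, branching takes the envelope, loops multiply by iteration-count bounds), not the actual analysis described in the abstract.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Bounds:
    """Safe lower/upper bounds on a resource (e.g. partner invocations)."""
    lo: int
    hi: int

    def seq(self, other):            # sequential composition: costs add
        return Bounds(self.lo + other.lo, self.hi + other.hi)

    def branch(self, other):         # if/else: take the safe envelope
        return Bounds(min(self.lo, other.lo), max(self.hi, other.hi))

    def loop(self, n_lo, n_hi):      # loop bounded by data-size estimates
        return Bounds(self.lo * n_lo, self.hi * n_hi)

# A hypothetical orchestration: one partner invocation per item of an
# incoming list whose length lies in [0, n], plus one of two logging
# activities costing 1 or 2 invocations.
def invocations(n):
    body = Bounds(1, 1).loop(0, n)           # iterate over message items
    logging = Bounds(1, 1).branch(Bounds(2, 2))
    return body.seq(logging)

print(invocations(10))   # Bounds(lo=1, hi=12)
```

As in the abstract, the resulting bounds are functions of the size of the initiating message (`n` here) rather than probabilistic estimates.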
Abstract:
Using spectral analysis and joint time-frequency representations, we present the theoretical basis for designing invariant bandlimited Airy pulses with an arbitrary degree of robustness over an arbitrary range of single-mode fiber chromatic dispersion. Numerically simulated examples confirm the theoretically predicted partial invariance of the pulse as it propagates along the fiber.
Abstract:
Recently, a new recipe for developing and deploying real-time systems has become increasingly adopted in the JET tokamak. Powered by the advent of x86 multi-core technology and the reliability of JET's well-established Real-Time Data Network (RTDN) for handling all real-time I/O, an official vanilla Linux kernel has been demonstrated to provide real-time performance to user-space applications that must meet stringent timing constraints. In particular, a careful rearrangement of Interrupt ReQuest (IRQ) affinities, together with the kernel's CPU-isolation mechanism, makes it possible to obtain either soft or hard real-time behaviour depending on the synchronization mechanism adopted. Finally, the Multithreaded Application Real-Time executor (MARTe) framework is used for building applications particularly optimised for exploiting multi-core architectures. In the past year, four new systems based on this philosophy have been installed and are now part of JET's routine operation. The focus of the present work is on the configuration and interconnection of the ingredients that enable these new systems' real-time capability, and on the impact that JET's distributed real-time architecture has on system engineering requirements such as algorithm testing and plant commissioning. Details are given about the common real-time configuration and development path of these systems, followed by a brief description of each system together with results regarding their real-time performance. A cycle-time jitter analysis of a user-space MARTe-based application synchronising over a network is also presented; the goal is to compare its deterministic performance when running on a vanilla kernel and on a Messaging Real-time Grid (MRG) Linux kernel.
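The kind of cycle-time jitter analysis mentioned above can be sketched generically: run a periodic loop against absolute deadlines and record how far each cycle deviates from the nominal period. This is not MARTe or JET code; the 1 ms period and busy-wait strategy are illustrative assumptions (a hard real-time version would instead pin an isolated CPU and use absolute-deadline kernel timers).

```python
import time
import statistics

def cycle_jitter(period_s=0.001, cycles=200):
    """Run a periodic loop and return (mean cycle time s, worst jitter µs)."""
    deadline = time.monotonic()
    prev = deadline
    deltas = []
    for _ in range(cycles):
        deadline += period_s
        while time.monotonic() < deadline:     # busy-wait on the deadline
            pass
        now = time.monotonic()
        deltas.append(now - prev)              # actual cycle duration
        prev = now
    mean = statistics.mean(deltas)
    worst = max(abs(d - period_s) for d in deltas) * 1e6
    return mean, worst

mean_cycle, worst_jitter_us = cycle_jitter()
print(f"mean cycle {mean_cycle*1e3:.3f} ms, worst jitter {worst_jitter_us:.1f} µs")
```

Comparing the worst-case jitter of the same loop on a vanilla and on an MRG (PREEMPT_RT) kernel is essentially the experiment the abstract describes, scaled down.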
Abstract:
The analysis of the interdependence between time series has become an important field of research in recent years, mainly as a result of advances in the characterization of dynamical systems from the signals they produce, the introduction of concepts such as generalized and phase synchronization, and the application of information theory to time series analysis. In neurophysiology, different analytical tools stemming from these concepts have been added to the 'traditional' set of linear methods, which includes cross-correlation and the coherency function in the time and frequency domains, respectively, as well as more elaborate tools such as Granger causality. This increase in the number of approaches for assessing the existence of functional (FC) or effective connectivity (EC) between two (or among many) neural networks, along with the mathematical complexity of the corresponding time series analysis tools, makes it desirable to arrange them into a unified, easy-to-use software package. The goal is to allow neuroscientists, neurophysiologists, and researchers from related fields to easily access and use these analysis methods from a single integrated toolbox. Here we present HERMES (http://hermes.ctb.upm.es), a toolbox for the Matlab® environment (The MathWorks, Inc.), designed to study functional and effective brain connectivity from neurophysiological data such as multivariate EEG and/or MEG records. It also includes visualization tools and statistical methods to address the problem of multiple comparisons. We believe this toolbox will be very helpful to all researchers working in the emerging field of brain connectivity analysis.
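One of the 'traditional' linear FC measures named above, the coherency function, can be sketched in a few lines. HERMES itself is a Matlab toolbox; the snippet below is a Python illustration of the underlying idea with two synthetic "channels" sharing an assumed 10-Hz alpha-band component, not HERMES code.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs = 250.0                                  # Hz, a typical EEG/MEG rate
t = np.arange(0, 40, 1 / fs)
alpha = np.sin(2 * np.pi * 10 * t)          # shared 10-Hz component
x = alpha + rng.standard_normal(t.size)     # synthetic channel 1
y = alpha + rng.standard_normal(t.size)     # synthetic channel 2

# Magnitude-squared coherence via Welch-averaged cross-spectra.
f, cxy = coherence(x, y, fs=fs, nperseg=512)
print(f[np.argmax(cxy)])                    # coherence peaks near 10 Hz
```

In a real multi-channel analysis, the multiple-comparisons problem the abstract mentions arises because such a measure is computed for every channel pair and frequency bin.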
Abstract:
The main objective of this thesis is the development of numerical tools based on full-wave techniques for the computer-aided design (CAD) of microwave devices. In this context, a numerical technique based on the finite element method (FEM) for the design and analysis of printed antennas using optimization algorithms has been developed. The proposed technique divides the analysis of the antenna into two stages. In the first stage, the regions of the antenna that do not need to be modified during the CAD process are characterized only once, at each frequency point of the operating band, through their corresponding transfer-matrix function (generalized admittance matrix, GAM); the regions that will be modified, namely those containing the conducting surfaces of the printed antenna, are defined as artificial ports. In the second stage, the contour shape of the conducting surfaces of the printed antenna is iteratively modified in order to achieve the desired electromagnetic performance, and a new GAM of the radiating device that accounts for each printed antenna shape is computed after each iteration. The proposed technique can be combined with a genetic algorithm to achieve the design objectives.
This technique is validated experimentally and applied to the design of wideband printed antennas for different applications by optimizing the shape of the radiating device. In addition, a procedure based on the domain decomposition method and the finite element method has been developed for the design of passive microwave devices; in particular, it can be applied to the design and tuning of microwave filters. In the first stage of its application, the structure to be analyzed is divided into subdomains using the domain decomposition method. This process allows each subdomain to be analyzed separately with the most suitable method; since some subdomains can be analyzed by analytical methods, the analysis time is reduced. Subdomains that cannot be treated analytically are analyzed with numerical methods; in this thesis, the FEM is used to carry out this analysis. In addition to the domain decomposition, a frequency-sweep process is applied to reduce the analysis times, using the reduced-basis technique as the reduced-order model. This procedure has been applied to the design and tuning of several example filters in order to check its validity. The results obtained demonstrate the usefulness of this procedure and confirm its rigour, accuracy, and efficiency in the design of microwave filters.
Abstract:
In this work we present a new way to mask the data in a single-user communication system when direct-sequence code-division multiple-access (DS-CDMA) techniques are used. The code is generated by a digital chaotic generator, originally proposed by us and previously reported for a chaotic cryptographic system. It is demonstrated that if the user's data signal is encoded with the binary phase-shift keying (BPSK) technique usual in DS-CDMA, it can easily be recovered from a time-frequency domain representation. To avoid this situation, a new system is presented in which a dispersive stage is first applied to the data signal. A time-frequency domain analysis is performed, and the devices required at the transmitter and receiver ends, both user-independent, are presented for the optical domain.
Design of electronic warfare and radar algorithms for implementation in real-time systems
Resumo:
This thesis is focused on the study and development of electronic warfare (EW) and radar algorithms for real-time implementation. The arrival of radio, radar and navigation systems in the military sphere led to the development of technologies to counter them; the objective of EW systems is therefore the control of the electromagnetic spectrum. Signals intelligence (SIGINT) is one of the EW functions, whose mission is to detect, collect, analyze, classify and locate all kinds of electromagnetic emissions. Electronic intelligence (ELINT) is the SIGINT subsystem devoted to radar signals. 
A real-time system is one whose correctness depends not only on the result provided but also on the time at which that result is obtained. Radar and EW systems must provide information as fast as possible on a continuous basis, so they can be regarded as real-time systems. The introduction of real-time constraints implies a feedback process between the design of the algorithms and their hardware implementation. The real-time constraints are twofold: latency and area of the implementation. All the algorithms in this thesis have been implemented on field-programmable gate array (FPGA) platforms, which offer a good trade-off among performance, cost, power consumption and reconfigurability. The first part of the thesis covers the study of key subsystems of an ELINT equipment: signal detection with channelized receivers, pulse parameter extraction, modulation classification for radar signals, and passive location algorithms. The discrete Fourier transform (DFT) is a nearly optimal detector and frequency estimator for narrow-band signals buried in white noise. The introduction of fast algorithms to compute the DFT, known as the fast Fourier transform (FFT), reduces the complexity and processing time of the DFT computation; these properties have made the FFT one of the most common methods for narrow-band signal detection in real-time applications. An algorithm for real-time spectral analysis with user-defined bandwidth, instantaneous dynamic range and resolution is presented. The most characteristic parameters of a pulsed signal are its time of arrival (TOA) and pulse width (PW), and their estimation is a fundamental task in an ELINT equipment. A basic pulse parameter extractor (PPE) able to estimate these parameters is designed and implemented. 
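The TOA/PW extraction idea can be sketched as a threshold-crossing detector on the signal envelope. This is only a minimal illustration with made-up parameter values; the thesis' PPE is necessarily more elaborate:

```python
import numpy as np

def extract_pulses(x, fs, threshold):
    """Estimate (TOA, PW) pairs, in seconds, for each pulse in x from
    threshold crossings of the envelope |x|."""
    above = np.abs(x) > threshold
    edges = np.diff(above.astype(int))
    rises = np.flatnonzero(edges == 1) + 1    # leading-edge sample indices
    falls = np.flatnonzero(edges == -1) + 1   # trailing-edge sample indices
    pulses = []
    for r in rises:
        later = falls[falls > r]
        if later.size:
            pulses.append((r / fs, (later[0] - r) / fs))
    return pulses

fs = 1e6                     # 1 MHz sampling rate, illustrative only
x = np.zeros(1000)
x[100:300] = 1.0             # a 200-sample pulse starting at sample 100
x[600:650] = 1.0             # a 50-sample pulse starting at sample 600
pulses = extract_pulses(x, fs, threshold=0.5)
```

The two pulses come out with TOAs of 100 µs and 600 µs and widths of 200 µs and 50 µs, respectively.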
The PPE may be used to perform a generic radar recognition process, to support an emitter location technique, or as the preprocessing stage of an automatic modulation classifier (AMC). Modulation classification is a difficult task in a non-cooperative environment. An AMC consists of two parts: signal preprocessing and the classification algorithm itself. Feature-based algorithms extract different characteristics or features of the input signals; once these features are obtained, classification is carried out by processing them. A feature-based AMC for pulsed radar signals with real-time requirements is studied, designed and implemented. Emitter passive location techniques can be divided into two classes: triangulation systems, in which the emitter location is estimated from the intersection of the lines of bearing created from the estimated directions of arrival, and quadratic position-fixing systems, in which the position is estimated through the intersection of iso-time-difference-of-arrival (TDOA) or iso-frequency-difference-of-arrival (FDOA) quadratic surfaces. Although only TDOA and FDOA estimation from time-of-arrival and frequency differences has been implemented, different algorithms for TDOA, FDOA and position estimation are studied and analyzed. The second part is dedicated to FIR filter design and implementation for two radar applications: wideband phased arrays with true-time-delay (TTD) filters, and the range improvement of an operative radar with no hardware changes, to minimize costs. Wideband operation of phased arrays with phase shifters is unfeasible because time delays cannot be approximated by phase shifts; the presented solution replaces the phase shifters with FIR discrete delay filters. The maximum range of a radar depends on the averaged signal-to-noise ratio (SNR) at the receiver. Among other factors, the SNR depends on the transmitted signal energy, that is, power times pulse width. 
Any hardware change implies high costs. The proposed solution lies in the use of a signal processing technique known as pulse compression, which consists of introducing an internal modulation within the pulse, decoupling range and resolution.