16 results for Time-frequency distribution


Relevance:

100.00%

Publisher:

Abstract:

This paper discusses the use of sound waves to illustrate multipath radio propagation concepts. Specifically, a procedure is presented to measure the time-varying frequency response of the channel. This helps demonstrate how a propagation channel can be characterized in time and frequency, and provides visualizations of the concepts of coherence time and coherence bandwidth. The measurements are very simple to carry out, and the required equipment is easily available. The proposed method can be useful for wireless or mobile communication courses.
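The coherence-bandwidth visualization that these measurements support can also be reproduced numerically. The following Python sketch replaces the real acoustic measurement with a toy two-path channel whose echo delay drifts slowly; it builds the time-varying frequency response H(t, f) and reports the frequency separation at which the frequency correlation falls below 0.9. The channel parameters, the probe-tone grid and the 0.9 threshold are illustrative assumptions, not values from the paper.

    import numpy as np

    # Toy two-path channel: a direct path plus an echo whose delay drifts slowly
    # (e.g. a moving reflector), probed at a grid of audio tones.
    freqs = np.arange(200.0, 3000.0, 25.0)       # probe tones, Hz
    times = np.arange(0.0, 2.0, 0.05)            # measurement instants, s
    tau, a = 2.0e-3, 0.7                         # echo delay (s) and relative amplitude
    c_sound, v = 343.0, 0.3                      # sound speed and reflector speed, m/s

    H = np.zeros((len(times), len(freqs)), dtype=complex)
    for i, t in enumerate(times):
        delay = tau + v * t / c_sound            # slowly time-varying echo delay
        H[i] = 1.0 + a * np.exp(-2j * np.pi * freqs * delay)

    def coherence_bandwidth(H, freqs, level=0.9):
        """Frequency separation at which the frequency correlation of H drops
        below `level` (a common, if arbitrary, working definition)."""
        df, nf = freqs[1] - freqs[0], len(freqs)
        corr = []
        for k in range(nf):
            num = np.mean(H[:, :nf - k] * np.conj(H[:, k:]))
            corr.append(abs(num) / np.mean(np.abs(H) ** 2))
        corr = np.array(corr)
        below = np.where(corr < level)[0]
        return below[0] * df if below.size else np.inf

    print("estimated coherence bandwidth ~ %.0f Hz" % coherence_bandwidth(H, freqs))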

Relevance:

100.00%

Publisher:

Abstract:

Improvements in neuroimaging methods have afforded significant advances in our knowledge of the cognitive and neural foundations of aesthetic appreciation. We used magnetoencephalography (MEG) to record brain activity while participants decided on the beauty of visual stimuli. The data were analyzed with event-related field (ERF) and time-frequency (TF) procedures. ERFs revealed no significant differences between brain activity related to stimuli rated as “beautiful” and “not beautiful.” TF analysis showed clear differences between the two conditions 400 ms after stimulus onset. Oscillatory power was greater for stimuli rated as “beautiful” than for those rated as “not beautiful” in the four frequency bands examined (theta, alpha, beta, and gamma). These results are interpreted within the framework of synchronization studies.
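As a rough illustration of the kind of band-limited power comparison described above, the sketch below computes spectrogram-based power in the theta, alpha, beta and gamma bands restricted to the interval after 400 ms, for two sets of epochs. It is written in Python with synthetic data standing in for the MEG epochs; the sampling rate, window parameters and band edges are assumptions, and the statistical testing used in the study is not reproduced.

    import numpy as np
    from scipy.signal import spectrogram

    fs = 600.0                                   # assumed MEG sampling rate, Hz
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 80)}

    def band_power_after(trials, fs, t0=0.4):
        """Mean time-frequency power per band, restricted to t >= t0 s after onset."""
        powers = {b: [] for b in bands}
        for x in trials:                         # x: one epoch, shape (n_samples,)
            f, t, S = spectrogram(x, fs=fs, nperseg=128, noverlap=96)
            late = t >= t0
            for b, (lo, hi) in bands.items():
                sel = (f >= lo) & (f < hi)
                powers[b].append(S[np.ix_(sel, late)].mean())
        return {b: float(np.mean(v)) for b, v in powers.items()}

    # Synthetic stand-ins for the two stimulus conditions (1 s epochs, 20 trials each).
    rng = np.random.default_rng(0)
    t = np.arange(0.0, 1.0, 1.0 / fs)
    beautiful = [rng.normal(size=t.size) + 0.8 * np.sin(2 * np.pi * 10 * t) * (t > 0.4)
                 for _ in range(20)]
    not_beautiful = [rng.normal(size=t.size) for _ in range(20)]

    print("'beautiful'    :", band_power_after(beautiful, fs))
    print("'not beautiful':", band_power_after(not_beautiful, fs))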

Relevance:

100.00%

Publisher:

Abstract:

This research proposes a generic methodology for dimensionality reduction of time-frequency representations applied to the classification of different types of biosignals. The methodology directly deals with the highly redundant and irrelevant data contained in these representations, combining a first stage that removes irrelevant data by variable selection with a second stage that reduces redundancy using methods based on linear transformations. The study addresses two techniques that provided similar performance: the first selects a set of the most relevant time-frequency points, whereas the second selects the most relevant frequency bands. The first technique needs fewer components, leading to a smaller feature space, while the second better captures the time-varying dynamics of the signal and therefore provides a more stable performance. In order to evaluate the generalization capabilities of the proposed methodology, it has been applied to two types of biosignals with different kinds of non-stationary behavior: electroencephalographic and phonocardiographic signals. Even though these two databases contain samples with different degrees of complexity and a wide variety of characterizing patterns, the results demonstrate a good accuracy in the detection of pathologies, above 98%. The results open the possibility of extrapolating the methodology to the study of other biosignals.
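A minimal sketch of the two-stage idea (relevance-based selection of time-frequency points followed by a linear transformation) is given below in Python. The relevance measure (a Fisher ratio), the use of PCA as the linear transformation, and the synthetic data are illustrative assumptions; the paper's actual selection criteria and transformations may differ.

    import numpy as np

    def fisher_score(X, y):
        """Per-feature Fisher ratio: between-class variance over within-class variance."""
        classes = np.unique(y)
        mu = X.mean(axis=0)
        num = np.zeros(X.shape[1])
        den = np.zeros(X.shape[1])
        for c in classes:
            Xc = X[y == c]
            num += Xc.shape[0] * (Xc.mean(axis=0) - mu) ** 2
            den += Xc.shape[0] * Xc.var(axis=0)
        return num / (den + 1e-12)

    def select_then_project(X, y, n_points=500, n_components=10):
        """Stage 1: keep the most relevant TF points. Stage 2: PCA on the kept points."""
        idx = np.argsort(fisher_score(X, y))[::-1][:n_points]
        Xs = X[:, idx] - X[:, idx].mean(axis=0)
        U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
        return Xs @ Vt[:n_components].T, idx, Vt[:n_components]

    # X: one flattened time-frequency representation per signal (rows), y: labels.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(60, 32 * 64))          # e.g. 32 frequency bins x 64 time frames
    y = np.repeat([0, 1], 30)
    X[y == 1, :200] += 0.5                      # make some TF points informative
    Z, kept, components = select_then_project(X, y)
    print(Z.shape, kept[:5])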

Relevance:

100.00%

Publisher:

Abstract:

The tools used for chaos analysis have been conditioned by the type of signals obtained, which in almost every case have analogue characteristics. In certain cases, however, a chaotic digital signal is obtained, and such signals need a different approach than conventional analogue ones. The main objective of this paper is to present some possible approaches to the study of these signals and to show how information about their characteristics may be obtained in the most straightforward way possible. We have obtained digital chaotic signals from an Optical Logic Cell with feedback between the output and one of the possible control gates. This chaos has been reported in several papers, and its characteristics have been employed as a possible method to secure communications and as a means of encryption. In both cases, perturbations in the transmission medium caused problems both for the synchronization of the chaotic generators at emitter and receiver and for the recovery of the information data. A proposed way to analyze the presence of a perturbation is to study the noise content of the transmitted signal and to implement a way to eliminate it. In the present case, the digital signal is converted to a multilevel one by grouping bits into packets of 8 bits, and conventional methods of time-frequency analysis are applied to the result. The results give information about changes in the signal's characteristics and hence about the noise or perturbations present. Equivalent representations to the phase and Feigenbaum diagrams for digital signals are employed in this case.
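The bit-grouping step described above is simple to reproduce. The sketch below (Python) groups a binary stream into 8-bit packets, maps each packet to a level between 0 and 255, and computes an ordinary spectrogram of the resulting multilevel sequence. A thresholded logistic-map orbit is used as a stand-in for the digital chaotic signal produced by the optical logic cell; the map, its parameter and the spectrogram settings are illustrative choices, not the paper's setup.

    import numpy as np
    from scipy.signal import spectrogram

    def bits_to_levels(bits):
        """Group a binary sequence into packets of 8 bits and map each packet
        to an integer level in [0, 255]."""
        bits = np.asarray(bits, dtype=np.uint8)
        n = (len(bits) // 8) * 8
        packets = bits[:n].reshape(-1, 8)
        weights = 2 ** np.arange(7, -1, -1)
        return packets @ weights

    # Stand-in for the digital chaotic signal: a logistic-map orbit thresholded to bits.
    x, bits = 0.4, []
    for _ in range(8 * 4096):
        x = 3.99 * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)

    levels = bits_to_levels(bits).astype(float)
    levels -= levels.mean()
    f, t, S = spectrogram(levels, fs=1.0, nperseg=256, noverlap=192)
    print("spectrogram shape:", S.shape)   # frequency bins x time frames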

Relevance:

90.00%

Publisher:

Abstract:

Accurate design flood estimates associated with high return periods are necessary to design and manage hydraulic structures such as dams. In practice, such quantiles are usually estimated via univariate flood frequency analyses, mostly based on the study of peak flows. Nevertheless, the nature of floods is multivariate, so it is essential to consider representative flood characteristics, such as flood peak, hydrograph volume and hydrograph duration, to carry out an appropriate analysis; especially when the inflow peak is transformed into a different outflow peak during the routing process in a reservoir or floodplain. Multivariate flood frequency analyses have traditionally been performed by using standard bivariate distributions to model correlated variables, yet these entail shortcomings such as the need to use the same kind of marginal distribution for all variables and the assumption of a linear dependence relation between them. Recently, the use of copulas has spread in hydrology because of their advantages in the multivariate context, as they overcome the drawbacks of the traditional approach. A copula is a function that represents the dependence structure of the studied variables and allows their multivariate frequency distribution to be obtained from their marginal distributions, regardless of the kind of marginal distributions considered. The estimation of multivariate return periods, and therefore of multivariate quantiles, is also facilitated by the way in which copulas are formulated. The present doctoral thesis seeks to provide methodologies that improve the traditional techniques used by practitioners, in order to estimate more appropriate flood quantiles for dam design, dam management and flood risk assessment, through bivariate flood frequency analyses based on the copula approach. The flood variables considered for that goal are peak flow and hydrograph volume. In order to accomplish a complete study, the present research addresses: (i) a bivariate local flood frequency analysis focused on examining and comparing theoretical return periods, based on the natural probability of occurrence of a flood, with the return period associated with the risk of dam overtopping, to estimate quantiles at a given gauged site; (ii) the extension of the local approach to the regional one, supplying a complete procedure for performing a bivariate regional flood frequency analysis to either estimate quantiles at ungauged sites or improve at-site estimates at gauged sites; (iii) the use of copulas to investigate bivariate flood trends due to increasing urbanisation levels in a catchment; and (iv) the extension of observed flood series by combining the benefits of a copula-based model and a hydro-meteorological model.
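Copulas make joint return periods straightforward to evaluate. The short Python sketch below, a minimal example that assumes a Gumbel-Hougaard copula with an illustrative dependence parameter and annual-maximum marginals, computes the standard "OR" and "AND" joint return periods for a given pair of marginal non-exceedance probabilities of peak flow and hydrograph volume; it is not the calibrated model of the thesis.

    import numpy as np

    def gumbel_copula(u, v, theta):
        """Gumbel-Hougaard copula C(u, v); theta >= 1 controls the dependence."""
        return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

    def joint_return_periods(u, v, theta, mu=1.0):
        """OR / AND joint return periods for annual-maximum marginals.
        u, v are the non-exceedance probabilities of peak flow and volume;
        mu is the mean interarrival time between events (1 year here)."""
        C = gumbel_copula(u, v, theta)
        t_or = mu / (1.0 - C)                 # peak OR volume exceeded
        t_and = mu / (1.0 - u - v + C)        # peak AND volume exceeded
        return t_or, t_and

    # Example: both marginals at their 100-year level, moderate dependence.
    u = v = 1.0 - 1.0 / 100.0
    t_or, t_and = joint_return_periods(u, v, theta=2.0)
    print("T_or  ~ %.0f years" % t_or)
    print("T_and ~ %.0f years" % t_and)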

Relevance:

80.00%

Publisher:

Abstract:

Although there are numerous accurate methods to measure soil moisture content at a single point, until very recently there were no precise in situ, real-time methods able to measure soil moisture content along a line. The Distributed Fiber Optic Temperature measurement method (DFOT) determines the temperature at 0.12 m intervals over long distances (up to 10,000 m) with high temporal frequency and an accuracy of ±0.2 °C. The principle of temperature measurement along a fiber optic cable is based on the thermal sensitivity of the relative intensities of backscattered photons that arise from collisions with electrons in the core of the glass fiber. A laser pulse generated by the DTS unit and traversing a fiber optic cable results in backscatter at two frequencies. The DTS quantifies the intensity of these backscattered photons and the elapsed time between the pulse and the observed returned light. The intensity of one of the frequencies is strongly dependent on the temperature at the point where the scattering process occurred. The computed temperature is attributed to the position along the cable from which the light was reflected, which is computed from the light's travel time.
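As a rough numerical illustration of the two computations described above (position from the light's travel time, temperature from the ratio of the two backscattered intensities), the Python sketch below uses one commonly quoted single-ended Raman DTS calibration relation. The group index and the calibration parameters (gamma, c_cal, dalpha) are placeholders, not the calibration of the instrument used in the study.

    import numpy as np

    C_LIGHT = 2.998e8           # speed of light in vacuum, m/s
    N_GROUP = 1.468             # group refractive index of the fibre core (assumed)

    def position_from_delay(delay_s):
        """Distance along the fibre, from the round-trip time of the backscatter."""
        return C_LIGHT / N_GROUP * delay_s / 2.0

    def temperature_from_ratio(p_stokes, p_antistokes, z,
                               gamma=482.0, c_cal=0.0, dalpha=0.0):
        """One common single-ended Raman DTS calibration relation:
            T(z) = gamma / ( ln(P_Stokes/P_antiStokes) + c_cal - dalpha * z )
        gamma, c_cal and dalpha are instrument/fibre calibration parameters;
        the values used here are illustrative placeholders."""
        return gamma / (np.log(p_stokes / p_antistokes) + c_cal - dalpha * z)

    # Example: a backscatter sample received 10 microseconds after the pulse.
    z = position_from_delay(10e-6)
    T = temperature_from_ratio(p_stokes=1.00, p_antistokes=0.19, z=z)
    print("position ~ %.0f m, temperature ~ %.1f K" % (z, T))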

Relevance:

80.00%

Publisher:

Abstract:

The recession of coastal cliffs is a widespread phenomenon on rocky shores exposed to the combined action of the marine and meteorological processes that occur along the shoreline. It is revealed violently and sporadically as gravitational movements of the ground, and can cause material and human losses. Although knowledge of these erosion hazards is vital for the proper management of the coast, the development of predictive cliff erosion models is limited by the complex interactions between environmental processes and material properties over a range of temporal and spatial scales. Published prediction models are scarce and present important drawbacks: extrapolation models, which extend historical records into the future; empirical models, which study the system response to a change in one parameter on the basis of historical records; stochastic models, which represent cliff behaviour by extrapolating the magnitude and frequency of events in a probabilistic framework derived from historical records; process-response models, whose stability and error propagation remain unexplored; and PDE-based models, which are computationally expensive and not very accurate. The first part of this thesis describes the main features of the latest models of each type and, for the most commonly used, gives their ranges of application, advantages and disadvantages. Finally, as a synthesis of the most relevant processes included in the reviewed models, a conceptual diagram of coastal recession is presented, gathering the most influential processes that must be taken into account when using or creating a coastal recession model aimed at evaluating the hazard (time/frequency) of the phenomenon in the short to medium term. A new process-response coastal recession model developed in this thesis incorporates the geomechanical behaviour of materials whose compressive strength does not exceed 5 MPa. The model simulates the spatial and temporal evolution of a 2D cliff profile that can consist of heterogeneous materials. To do so, the marine dynamics (mean sea level, changes in mean lake level, tides and waves) are coupled with the evolution of the ground: erosion, rock falls and the formation of a protective colluvial (debris) wedge. In its different variants, the model can include the analysis of the geomechanical stability of the materials, the effect of the debris present at the cliff foot, groundwater effects, beach and run-up effects, changes in mean sea level, and seasonal or inter-annual changes in the mean level of the water body (lakes). The discretization error of the model and its propagation in time have been studied from the exact solutions for the first two tidal periods, for different numerical approximations in both time and space. The results justify the choices that minimize the error and identify the most suitable approximation methods for subsequent use in the modelling. The model has been validated against field data on the Holderness coast, Yorkshire, UK, and on the north coast of Lake Erie, Ontario, Canada. The results represent an important step forward in coastal recession modelling, especially in linking material properties to the processes of cliff recession, in considering the influence of groundwater and the oversteepening of rock profiles, and in representing their response to changing conditions caused by climate change (e.g. mean sea level, changes in lake levels, etc.).
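To make the coupling between marine forcing and profile evolution more concrete, the following is a deliberately simplified Python sketch of the kind of explicit time-stepping loop a process-response recession model involves: a tidal water level plus a crude stochastic wave forcing erode the wetted band of a 2D profile at a rate scaled by material strength, and overhangs collapse. All rules, coefficients and the failure criterion here are illustrative placeholders, not the model developed in the thesis.

    import numpy as np

    # The cliff face is stored as the retreat x(z), in metres, at each elevation z.
    dz = 0.25
    z = np.arange(-5.0, 20.0, dz)                 # elevation relative to mean sea level, m
    x = np.zeros_like(z)                          # initial vertical cliff face
    strength = np.where(z < 5.0, 1.0, 3.0)        # weaker toe material, stronger cap (assumed)

    dt = 3600.0                                   # time step: one hour
    k_erosion = 1e-7                              # erodibility coefficient (assumed)
    tidal_amp, tidal_T = 2.0, 12.42 * 3600.0      # semi-diurnal tide
    rng = np.random.default_rng(0)

    for step in range(24 * 365):                  # one year
        t = step * dt
        water_level = tidal_amp * np.sin(2.0 * np.pi * t / tidal_T)
        wave_height = 1.0 + 0.5 * rng.random()    # crude stochastic wave forcing
        # Marine erosion acts on the band of the face wetted around the water level,
        # at a rate scaled by the forcing and inversely by the material strength.
        wet = np.abs(z - water_level) < wave_height
        x[wet] += k_erosion * dt * wave_height / strength[wet]
        # Crude failure rule: any overhanging material collapses within the step,
        # so the face at each elevation is at least as retreated as the face below it.
        x = np.maximum.accumulate(x)

    print("crest retreat after one year: %.2f m" % x[-1])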

Relevance:

80.00%

Publisher:

Abstract:

Safety assessment of historic masonry structures is an open problem. The material is heterogeneous and anisotropic, the previous state of stress is hard to know and the boundary conditions are uncertain. In the early 1950s it was proven that limit analysis is applicable to this kind of structure, and it has been considered a suitable tool since then. In cases where no sliding occurs, the application of the standard limit analysis theorems constitutes an excellent tool owing to its simplicity and robustness: it is not necessary to know the actual stress state; it is enough to find any equilibrium solution that satisfies the limit conditions of the material, in the certainty that its load will be equal to or less than the actual load at the onset of collapse. Furthermore, this load at the onset of collapse is unique (uniqueness theorem) and can be obtained as the optimum of either of a pair of dual convex mathematical programs. However, if mechanisms at the onset of collapse may involve sliding, any solution must satisfy both static and kinematic constraints, as well as a special kind of disjunctive constraint linking the previous ones, which can be formulated as complementarity constraints. In the latter case the existence of a single solution is not guaranteed, so other methods are needed to treat the uncertainty associated with its multiplicity. In recent years, research has focused on finding an absolute minimum below which collapse is impossible. This method is easy to formulate from a mathematical point of view, but computationally intractable, because the complementarity constraints y ≥ 0, z ≥ 0, y·z = 0 are neither convex nor smooth. The resulting decision problem is NP-complete (non-deterministic polynomial-time complete), and the corresponding global optimization problem is NP-hard. Nevertheless, obtaining a solution (with no guarantee of success) is an affordable problem. This thesis proposes solving that problem through Successive Linear Programming, taking advantage of the special characteristics of the complementarity constraints, which written in bilinear form are y z = 0, y ≥ 0, z ≥ 0, and of the fact that the complementarity error (in bilinear form) is an exact penalty function. But when it comes to finding the worst solution, the equivalent global optimization problem is intractable (NP-hard). Furthermore, until a maximum or minimum principle is demonstrated, it is questionable whether the effort spent in approximating this minimum is justified. In Chapter 5, it is proposed to find the frequency distribution of the load factor, over all possible onset-of-collapse solutions, on a simple example. For this purpose, a Monte Carlo sampling of solutions is performed, using an exact polytope-computation method as a benchmark. The ultimate goal is to determine to what extent the search for the global minimum is justified, and to propose an alternative approach to safety assessment based on probabilities. The frequency distributions of the load factors obtained for the case studied show that both the maximum and the minimum load factors are very infrequent, and the more so the more perfect and continuous the contact is. The results confirm the interest of developing new probabilistic methods. In Chapter 6, a method of this kind is proposed, based on obtaining multiple solutions from random starting points and qualifying the results through Order Statistics. The purpose is to determine the probability of the onset of collapse for each solution. The method is applied (according to the expectation reduction proposed by Ordinal Optimization) to obtain a solution that lies within a given percentage of the worst ones. Finally, in Chapter 7, hybrid methods incorporating metaheuristics are proposed for the cases in which the search for the global minimum is justified.
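The order-statistics bookkeeping behind "obtaining a solution that lies within a given percentage of the worst" from multiple random starting points reduces to a simple calculation, sketched below in Python. It only covers the sampling guarantee (how many independent random starts are needed for a given confidence); the Successive Linear Programming solution of the complementarity problem from each starting point is not shown.

    import numpy as np

    def samples_needed(alpha, confidence):
        """Number of i.i.d. random-start solutions N needed so that, with the given
        confidence, at least one sampled load factor lies in the worst alpha-fraction
        of all onset-of-collapse solutions:  1 - (1 - alpha)**N >= confidence."""
        return int(np.ceil(np.log(1.0 - confidence) / np.log(1.0 - alpha)))

    def selection_probability(alpha, n):
        """Probability that the smallest of n sampled load factors falls within
        the worst alpha-fraction of the population."""
        return 1.0 - (1.0 - alpha) ** n

    print(samples_needed(alpha=0.05, confidence=0.95))   # 59 random starts
    print(selection_probability(alpha=0.05, n=100))      # ~0.994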

Relevance:

80.00%

Publisher:

Abstract:

Images acquired during free breathing using first-pass gadolinium-enhanced myocardial perfusion magnetic resonance imaging (MRI) exhibit a quasiperiodic motion pattern that needs to be compensated for if a further automatic analysis of the perfusion is to be executed. In this work, we present a method to compensate for this movement by combining independent component analysis (ICA) and image registration. First, we use ICA and a time-frequency analysis to identify the motion and separate it from the intensity change induced by the contrast agent. Then, synthetic reference images are created by recombining all the independent components except the one related to the motion. The resulting image series therefore does not exhibit motion, and its images have intensities similar to those of their original counterparts. Motion compensation is then achieved by using a multi-pass image registration procedure. We tested our method on 39 image series acquired from 13 patients, covering the basal, mid and apical areas of the left heart ventricle and consisting of 58 perfusion images each. We validated our method by comparing manually tracked intensity profiles of the myocardial sections to automatically generated ones before and after registration of the 13 patient data sets (39 distinct slices). We compared linear, non-linear and combined ICA-based registration approaches and previously published motion compensation schemes. Considering run-time and accuracy, a two-step ICA-based motion compensation scheme that first optimizes a translation and then a non-linear transformation performed best, registering the whole series in 32 ± 12 s on a recent workstation. The proposed scheme improves the Pearson correlation coefficient between manually and automatically obtained time-intensity curves from 0.84 ± 0.19 before registration to 0.96 ± 0.06 after registration.
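A condensed sketch of the ICA-plus-spectral-identification step (not the registration stage, and not the exact pipeline of the paper) is given below in Python using scikit-learn's FastICA: the temporal independent components are computed, the one whose spectrum peaks inside an assumed breathing band is zeroed, and the synthetic, motion-free series is rebuilt from the remaining components. The frame rate, the breathing band, the number of components and the toy data are assumptions.

    import numpy as np
    from sklearn.decomposition import FastICA
    from scipy.signal import periodogram

    def remove_breathing_component(frames, frame_rate, n_components=6,
                                   breathing_band=(0.15, 0.5)):
        """frames: array (n_frames, ny, nx). Returns a synthetic series in which the
        component whose temporal spectrum peaks in the breathing band is removed."""
        n_frames = frames.shape[0]
        X = frames.reshape(n_frames, -1).astype(float)   # time x pixels
        ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
        S = ica.fit_transform(X)                         # temporal sources, time x comps
        # Locate the component with the strongest spectral peak inside the band.
        scores = []
        for k in range(S.shape[1]):
            f, P = periodogram(S[:, k], fs=frame_rate)
            band = (f >= breathing_band[0]) & (f <= breathing_band[1])
            scores.append(P[band].max() / (P.max() + 1e-12))
        motion = int(np.argmax(scores))
        S_clean = S.copy()
        S_clean[:, motion] = 0.0                         # drop the motion component
        X_clean = S_clean @ ica.mixing_.T + ica.mean_
        return X_clean.reshape(frames.shape), motion

    # Tiny synthetic example: 58 frames of 32x32 'images' with a 0.3 Hz oscillation.
    rng = np.random.default_rng(2)
    t = np.arange(58) / 1.0                              # ~1 frame per second (assumed)
    frames = rng.normal(scale=0.1, size=(58, 32, 32))
    frames[:, 8:24, 8:24] += np.sin(2 * np.pi * 0.3 * t)[:, None, None]
    clean, k = remove_breathing_component(frames, frame_rate=1.0)
    print("motion-like component:", k)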

Relevance:

80.00%

Publisher:

Abstract:

Using spectral analysis and joint time-frequency representations, we present the theoretical basis for designing invariant bandlimited Airy pulses with an arbitrary degree of robustness and for an arbitrary range of single-mode fiber chromatic dispersion. Numerically simulated examples confirm the theoretically predicted partial invariance of the pulse as it propagates along the fiber.
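The partial invariance can be checked with a very small simulation. The Python sketch below builds a finite-energy (exponentially truncated) Airy pulse, applies the quadratic spectral phase of second-order fibre dispersion, and reports how far the main lobe moves. The pulse scale, truncation coefficient and fibre length are illustrative; the beta2 value of about -21 ps^2/km is the commonly quoted figure for standard single-mode fibre near 1550 nm, not necessarily the value used in the paper.

    import numpy as np
    from scipy.special import airy

    # Truncated (finite-energy) Airy pulse and its linear propagation under
    # second-order fibre dispersion, evaluated in the frequency domain.
    N = 4096
    t0 = 10e-12                                  # pulse time scale, s (assumed)
    t = (np.arange(N) - N // 2) * (t0 / 8)       # time grid
    a = 0.05                                     # truncation coefficient (assumed)
    field0 = airy(t / t0)[0] * np.exp(a * t / t0)

    beta2 = -21e-27                              # s^2/m, typical SMF value near 1550 nm
    L = 10e3                                     # propagation distance, m

    w = 2 * np.pi * np.fft.fftfreq(N, d=t[1] - t[0])
    spectrum = np.fft.fft(field0)
    fieldL = np.fft.ifft(spectrum * np.exp(0.5j * beta2 * w ** 2 * L))

    # The intensity envelope keeps (approximately) its shape; its main lobe just
    # shifts in time, which is the 'partial invariance' referred to above.
    shift_samples = np.argmax(np.abs(fieldL)) - np.argmax(np.abs(field0))
    print("main-lobe shift: %.1f ps" % (shift_samples * (t[1] - t[0]) * 1e12))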

Relevance:

80.00%

Publisher:

Abstract:

The quality and reliability of the power generated by large grid-connected photovoltaic (PV) plants are negatively affected by the variability of the source. This paper deals with the smoothing of power fluctuations that results from the geographical dispersion of PV systems. The fluctuation frequency and the maximum fluctuation registered at a PV plant ensemble are analyzed to study these effects. We propose an empirical expression to compare the attenuation of fluctuations due to both the size and the number of PV plants grouped. The convolution of the frequency distribution functions of single PV plants has turned out to be a successful tool to statistically describe the behavior of an ensemble of PV plants and determine their maximum output fluctuation. Our work is based on experimental 1-s data collected throughout 2009 from seven PV plants, 20 MWp in total, separated by distances between 6 and 360 km.
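The convolution idea can be illustrated with a few lines of Python: the empirical distributions of the 1-s fluctuations of the individual plants, expressed on a common grid, are convolved under an independence assumption to approximate the distribution of the ensemble fluctuation. The synthetic Laplace-distributed fluctuations and the bin width are stand-ins; the paper works with measured data and does not necessarily assume full independence.

    import numpy as np

    def ensemble_pmf(fluctuations, bin_width=0.5):
        """Approximate the probability mass function of the summed fluctuation of
        several PV plants by convolving their individual empirical distributions,
        assuming the plants fluctuate independently of each other."""
        lo = min(f.min() for f in fluctuations)
        hi = max(f.max() for f in fluctuations)
        edges = np.arange(lo - bin_width, hi + 2 * bin_width, bin_width)
        pmf = None
        for f in fluctuations:
            h, _ = np.histogram(f, bins=edges)
            p = h / h.sum()
            pmf = p if pmf is None else np.convolve(pmf, p)
        return pmf, bin_width

    def pmf_std(pmf, bin_width):
        x = np.arange(pmf.size) * bin_width
        m = (x * pmf).sum()
        return np.sqrt(((x - m) ** 2 * pmf).sum())

    # Synthetic stand-in for 1-s fluctuation series of 7 plants (% of rated power).
    rng = np.random.default_rng(3)
    plants = [rng.laplace(scale=2.0, size=100_000) for _ in range(7)]
    pmf, bw = ensemble_pmf(plants)
    s7 = pmf_std(pmf, bw)
    print("single plant std ~ %.2f %%" % np.std(plants[0]))
    print("ensemble std, summed: ~ %.2f %%; normalised to total capacity: ~ %.2f %%"
          % (s7, s7 / 7))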

Relevance:

80.00%

Publisher:

Abstract:

In this work we present a new way to mask the data in a single-user communication system when direct-sequence code-division multiple access (DS-CDMA) techniques are used. The code is generated by a digital chaotic generator, originally proposed by us and previously reported for a chaos-based cryptographic system. It is demonstrated that if the user's data signal is encoded with a binary phase-shift keying (BPSK) technique, as is usual in DS-CDMA, it can easily be recovered from a time-frequency domain representation. To avoid this situation, a new system is presented in which a dispersive stage is first applied to the data signal. A time-frequency domain analysis is performed, and the devices required at the transmitter and receiver ends, both user-independent, are presented for the optical domain.
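A discrete-time baseband sketch of the vulnerability, and of the effect of a dispersive pre-stage, is given below in Python. A pseudo-random bipolar sequence stands in for the chaotic spreading code, an all-pass quadratic-phase filter stands in for the optical dispersive element, and the "eavesdropper" simply reads each bit from the phase of a bit-aligned short-time Fourier frame. The parameters are illustrative and the optical implementation of the paper is not modelled; the naive read-out that succeeds on the plain signal degrades once the dispersive stage is applied.

    import numpy as np
    from scipy.signal import stft

    rng = np.random.default_rng(4)
    n_bits, chips_per_bit = 64, 31
    data = rng.integers(0, 2, n_bits) * 2 - 1                  # BPSK data, +/-1
    code = rng.integers(0, 2, chips_per_bit) * 2 - 1           # spreading code (stand-in for the chaotic sequence)
    data_wave = np.repeat(data, chips_per_bit).astype(float)   # NRZ data at chip rate
    code_wave = np.tile(code, n_bits).astype(float)            # periodically repeated code

    def dispersive_stage(x, k=1e-2):
        """All-pass quadratic-phase filter, a discrete stand-in for the optical
        dispersive element applied to the data signal; k is illustrative."""
        m = np.fft.fftfreq(x.size) * x.size                    # integer frequency index
        return np.real(np.fft.ifft(np.fft.fft(x) * np.exp(1j * k * m ** 2)))

    def naive_tf_eavesdrop(x, code):
        """Read each bit from the phase of one bin of a bit-aligned short-time
        Fourier frame; this succeeds on the plain BPSK DS-CDMA signal."""
        _f, _t, Z = stft(x, window='boxcar', nperseg=len(code),
                         noverlap=0, boundary=None, padded=False)
        ref = np.fft.rfft(code.astype(float))
        kmax = int(np.argmax(np.abs(ref)))                     # strongest code bin
        return np.sign(np.real(Z[kmax, :] * np.conj(ref[kmax])))

    plain = data_wave * code_wave                              # conventional DS-CDMA
    masked = dispersive_stage(data_wave) * code_wave           # dispersive pre-stage, then spreading
    for name, sig in [("plain", plain), ("dispersed", masked)]:
        guess = naive_tf_eavesdrop(sig, code)[:n_bits]
        print(name, "naive bit recovery rate:", float(np.mean(guess == data)))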

Relevance:

80.00%

Publisher:

Abstract:

This thesis develops the theoretical foundations of, and designs, an open collection of C++ classes called VBF (Vector Boolean Functions) for analyzing vector Boolean functions (functions that map a Boolean vector to another Boolean vector) from a cryptographic perspective. This new implementation uses Victor Shoup's NTL library, adding new modules that complement the existing ones and make VBF better suited for cryptography. The fundamental class representing a vector Boolean function can be initialized in a flexible way via several alternative data structures, such as the Truth Table, the Trace Representation and the Algebraic Normal Form (ANF), among others. In this way, VBF allows the evaluation of the most relevant cryptographic criteria for block and stream ciphers as well as for hash functions: for instance, it provides the nonlinearity, the linearity distance, the algebraic degree, the linear structures, and the frequency distribution of the absolute values of the Walsh spectrum or of the autocorrelation spectrum, among other criteria. In addition, VBF can perform operations between vector Boolean functions such as equality testing, composition, inversion, sum, direct sum, bricklayering (the parallel application of vector Boolean functions, as employed in the Rijndael cipher) and the addition of the coordinate functions of two vector Boolean functions. The thesis also illustrates the use of the VBF library in two practical applications. On the one hand, the most relevant properties of existing block ciphers have been analysed. On the other hand, by combining VBF with optimization algorithms, Boolean functions have been designed whose cryptographic properties are the best known to date.
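For readers unfamiliar with these criteria, the short Python sketch below computes, from a truth table, the Walsh spectrum of a small vector Boolean function, the frequency distribution of its absolute values and the resulting nonlinearity. It only illustrates the underlying definitions; it is not the VBF C++ interface, and the example lookup table is made up for illustration.

    import numpy as np
    from collections import Counter

    def walsh_value(truth_table, n, a, b):
        """W_F(a, b) = sum over x of (-1)^( b.F(x) XOR a.x ), dot products over GF(2)."""
        total = 0
        for x in range(2 ** n):
            sign = (bin(b & truth_table[x]).count("1") + bin(a & x).count("1")) & 1
            total += -1 if sign else 1
        return total

    def nonlinearity(truth_table, n, m):
        """NL(F) = 2^(n-1) - (1/2) * max over b != 0 and all a of |W_F(a, b)|."""
        best = 0
        for b in range(1, 2 ** m):
            for a in range(2 ** n):
                best = max(best, abs(walsh_value(truth_table, n, a, b)))
        return 2 ** (n - 1) - best // 2

    # Example: a 3-bit S-box given as a lookup table (illustrative, not from the thesis).
    sbox = [0, 1, 3, 6, 7, 4, 5, 2]     # F: F_2^3 -> F_2^3
    dist = Counter(abs(walsh_value(sbox, 3, a, b)) for b in range(1, 8) for a in range(8))
    print("distribution of |Walsh| values:", dict(dist))
    print("nonlinearity:", nonlinearity(sbox, n=3, m=3))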

Relevance:

80.00%

Publisher:

Abstract:

Biomedical engineering emerged in the 1950s as a fascinating interdisciplinary blend in which engineering, biology and medicine pooled their efforts to analyze and understand different diseases. The signals that arise in this area must be analyzed and interpreted beyond the limited capabilities of the naked eye and human experience. This is where digital signal processing comes in as an indispensable tool for extracting the relevant information hidden in these signals. Electrocardiography was one of the first areas in which digital signal processing was applied, more than 50 years ago, and electrocardiographic signals remain, even today, the subject of close study by cardiologists and engineers. In this area, signal processing techniques have helped to find information hidden from the naked eye that has changed the way certain previously diagnosed diseases are treated. Since then, numerous techniques have been developed for processing electrocardiographic signals; they can be grouped into three broad categories: time-frequency analysis, analysis of spatio-temporal organization, and separation of the atrial activity from noise and interference. This project belongs to the first category, time-frequency analysis, and specifically to what is known as dominant frequency analysis, which is applied here to the analysis of atrial fibrillation signals. The project includes a theoretical part, concerning the analysis and development of signal processing algorithms, and a practical part, concerning programming and simulation in Matlab. Matlab is one of the fundamental tools for digital signal processing, offering important functions and utilities for the development of projects in this field, which is why it was chosen as the tool for implementing the project.
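The project itself is implemented in Matlab; as a language-neutral illustration of what dominant frequency analysis computes, the following Python sketch estimates, per analysis window, the frequency of the strongest spectral peak inside an assumed 3-12 Hz atrial band, on a synthetic signal standing in for a real atrial recording. The band limits, window length and test signal are assumptions, not values taken from the project.

    import numpy as np
    from scipy.signal import spectrogram

    def dominant_frequency_track(x, fs, band=(3.0, 12.0), win_s=2.0):
        """Dominant frequency per analysis window: the frequency of the largest
        spectral peak inside the atrial-fibrillation band."""
        nperseg = int(win_s * fs)
        f, t, S = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
        sel = (f >= band[0]) & (f <= band[1])
        return t, f[sel][np.argmax(S[sel, :], axis=0)]

    # Synthetic atrial-like activity: a 6 Hz square-like wave plus noise, 20 s at 1 kHz.
    fs = 1000.0
    t = np.arange(0, 20, 1 / fs)
    rng = np.random.default_rng(5)
    x = np.sign(np.sin(2 * np.pi * 6.0 * t)) + 0.3 * rng.normal(size=t.size)
    times, df = dominant_frequency_track(x, fs)
    print("median dominant frequency: %.2f Hz" % np.median(df))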

Relevance:

40.00%

Publisher:

Abstract:

The evolution of the water content of a sandy soil during the sprinkler irrigation campaign of a sugar beet field located at Valladolid (Spain), in the summer of 2010, was assessed with a capacitive FDR (Frequency Domain Reflectometry) EnviroScan probe. This field is one of the experimental sites of the Spanish research centre for sugar beet development (AIMCRA). The work focuses on monitoring the evolution of soil water content over consecutive irrigations during the second two weeks of July (from the 12th to the 28th); these measurements will be used to simulate water movement with Hydrus-2D. The probe logged water content readings (m3/m3) at 10, 20, 40 and 60 cm depth every 30 minutes, and was placed between two rows in one of the typical 12 x 15 m sprinkler irrigation layouts. Furthermore, a texture analysis of the soil profile was also conducted. The irrigation schedule on this farm was set by the farmer's own criteria: aiming to minimize electricity pumping costs, he used to irrigate at night and during the weekend, i.e. with a longer interval between irrigations than expected. However, the high evapotranspiration rates and the weekly sugar beet water consumption, up to 50 mm/week, clearly indicated the need to irrigate more often. Moreover, the farmer used to irrigate for five or six hours, whereas the EnviroScan results showed the soil profile reaching saturation after the first three hours. It should be noted that AIMCRA provides its members with an SMS service reporting the weekly sugar beet water requirement; from the use of different meteorological stations and evapotranspiration pans, farmers have an idea of the weekly irrigation needs. Nevertheless, it is the farmer's decision how to irrigate. Thus, in order to minimize water stress and pumping costs, a suitable irrigation time and irrigation frequency were modelled with Hydrus-2D. Results for the period mentioned above show water content values ranging from 35 and 30 (m3/m3) at 10 and 20 cm depth (two hours after irrigation) down to minima of 14 and 13 (m3/m3) (two hours before irrigation). At 40 and 60 cm depth the water content varies steadily over the dates: the greater the root activity, the greater the water content variation. According to the results from the EnviroScan probe and the Hydrus-2D modelling, shorter irrigation intervals and shorter irrigation times are suggested.
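As a small illustration of how the 30-minute probe readings can be post-processed to support the "saturation after about three hours" observation, the Python sketch below scans a wetting event and reports the time until the readings at a sensor stop rising. The tolerance, the synthetic series and the event-detection rule are assumptions; the actual EnviroScan data handling and the Hydrus-2D modelling are not reproduced.

    import numpy as np

    def hours_to_plateau(theta, irrigation_start, dt_hours=0.5, tol=0.2):
        """Hours from the start of an irrigation until the water content at a sensor
        stops rising (successive 30-min readings differ by less than `tol`
        percentage points of volumetric water content after an initial rise)."""
        rising = False
        for i in range(irrigation_start + 1, len(theta)):
            d = theta[i] - theta[i - 1]
            if d > tol:
                rising = True
            elif rising and d < tol:
                return (i - irrigation_start) * dt_hours
        return None

    # Synthetic 10 cm-depth series (volumetric water content, %), readings every 30 min:
    # dry soil at ~14 %, irrigation starting at index 4, profile wetting up over ~3 h.
    theta_10cm = np.array([14.0, 14.0, 13.8, 13.7, 15.5, 22.0, 29.0, 33.5, 35.0,
                           35.1, 35.0, 34.6, 34.0, 33.2, 32.5])
    print("time to plateau: %.1f h" % hours_to_plateau(theta_10cm, irrigation_start=4))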