19 results for Variants of FSGS
at Universidad Politécnica de Madrid
Abstract:
Of the many state-of-the-art methods for cooperative localization in wireless sensor networks (WSN), only very few adapt well to mobile networks. The main problems of the well-known algorithms, based on nonparametric belief propagation (NBP), are the high communication cost and inefficient sampling techniques. Moreover, they either do not use smoothing or just apply it offline. Therefore, in this article, we propose more flexible and efficient variants of NBP for cooperative localization in mobile networks. In particular, we provide: i) an optional 1-lag smoothing performed almost in real time, ii) a novel low-cost communication protocol based on package approximation and censoring, iii) higher robustness of the standard mixture importance sampling (MIS) technique, and iv) a higher amount of information in the importance densities by using the population Monte Carlo (PMC) approach, or an auxiliary variable. Through extensive simulations, we confirmed that all the proposed techniques outperform the standard NBP method.
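For readers unfamiliar with the sampling machinery mentioned above, the following is a minimal sketch of population Monte Carlo importance sampling applied to a toy 2-D localization problem; the anchors, measurement model, and parameter values are invented for illustration and are not the paper's algorithm.

```python
# Toy PMC sketch: estimate a 2-D position from noisy range measurements by
# iteratively sampling from a Gaussian proposal, weighting, and adapting it.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # known positions
true_pos = np.array([4.0, 6.0])
sigma = 0.5                                                   # ranging noise std
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, sigma, 3)

def log_likelihood(p):
    # Gaussian likelihood of the observed ranges for each candidate position.
    d = np.linalg.norm(p[:, None, :] - anchors[None, :, :], axis=2)
    return -0.5 * np.sum((d - ranges) ** 2, axis=1) / sigma ** 2

mean, cov, n = np.array([5.0, 5.0]), 4.0 * np.eye(2), 500
for _ in range(5):                      # PMC loop: sample, weight, adapt
    particles = rng.multivariate_normal(mean, cov, n)
    logw = log_likelihood(particles) - multivariate_normal.logpdf(particles, mean, cov)
    w = np.exp(logw - logw.max())       # importance weights (flat prior assumed)
    w /= w.sum()
    mean = w @ particles                # adapted proposal mean
    diff = particles - mean
    cov = diff.T @ (w[:, None] * diff) + 1e-3 * np.eye(2)   # adapted covariance

print("estimate:", mean.round(2), "truth:", true_pos)
```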
Abstract:
The implementation of abstract machines involves complex decisions regarding, e.g., data representation, opcodes, or instruction specialization levels, all of which affect the final performance of the emulator and the size of the bytecode programs in ways that are often difficult to foresee. Besides, studying alternatives by implementing abstract machine variants is a time-consuming and error-prone task because of the level of complexity and optimization of competitive implementations, which makes them generally difficult to understand, maintain, and modify. This also makes it hard to generate specific implementations for particular purposes. To ameliorate those problems, we propose a systematic approach to the automatic generation of implementations of abstract machines. Different parts of their definition (e.g., the instruction set or the internal data and bytecode representation) are kept separate and automatically assembled in the generation process. Alternative versions of the abstract machine are therefore easier to produce, and variants of their implementation can be created mechanically, with specific characteristics for a particular application if necessary. We illustrate the practicality of the approach by reporting on an implementation of a generator of production-quality WAMs which are specialized for executing a particular fixed (set of) program(s). The experimental results show that the approach is effective in reducing emulator size.
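As a toy illustration of the generation idea (keeping the instruction set separate from the dispatch machinery and assembling them mechanically), consider the following sketch; the instruction names and encodings are invented, and the actual WAM generator is, of course, far more elaborate.

```python
# Toy "abstract machine generator": the instruction set is declared as data,
# and the emulator's dispatch loop is assembled from it mechanically.
INSTRUCTIONS = {
    0: ("push", lambda st, arg: st.append(arg)),
    1: ("add",  lambda st, _:   st.append(st.pop() + st.pop())),
    2: ("dup",  lambda st, _:   st.append(st[-1])),
}

def make_emulator(instructions):
    # "Generation" step: freeze the opcode table into a dispatch function.
    table = {op: fn for op, (_, fn) in instructions.items()}
    def run(bytecode):
        stack = []
        for op, arg in bytecode:
            table[op](stack, arg)
        return stack
    return run

run = make_emulator(INSTRUCTIONS)
print(run([(0, 2), (0, 3), (1, None), (2, None)]))  # -> [5, 5]
```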
Abstract:
Networks of Evolutionary Processors (NEPs) are computing mechanisms directly inspired by the behavior of cell populations, more specifically by the point mutations in DNA strands. These mechanisms have been used for solving NP-complete problems under the postulate of parallel computation. This paper describes an implementation of the basic NEP model using Web technologies, including the possibility of designing some of its most common variants through a web page design that eases the configuration of a given problem. The system is intended to be run on a multicore processor in order to benefit from multithreading.
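The following sketch illustrates the basic NEP dynamics such an implementation has to reproduce, alternating an evolutionary step (point mutations) with a communication step (filtered migration of words); the two-node network, rules, and filters are invented for the example.

```python
# Minimal sketch of one evolution + communication step in a basic NEP.
def substitutions(word, a, b):
    # All words obtained by rewriting one occurrence of symbol a into b.
    return {word[:i] + b + word[i+1:] for i, c in enumerate(word) if c == a}

# Each node: current words, substitution rules, and output/input filters
# (here: sets of symbols a word must be built from to pass).
nodes = {
    "N1": {"words": {"aab"}, "rules": [("a", "b")], "out": set("ab"), "in": set("b")},
    "N2": {"words": set(),   "rules": [("b", "c")], "out": set("bc"), "in": set("abc")},
}
edges = [("N1", "N2")]

# Evolution step: every rule applied to every word in every node, in parallel.
for n in nodes.values():
    new = set(n["words"])
    for a, b in n["rules"]:
        for w in n["words"]:
            new |= substitutions(w, a, b)
    n["words"] = new

# Communication step: words passing the sender's output filter and the
# receiver's input filter migrate along the edge.
for u, v in edges:
    moving = {w for w in nodes[u]["words"]
              if set(w) <= nodes[u]["out"] and set(w) <= nodes[v]["in"]}
    nodes[u]["words"] -= moving
    nodes[v]["words"] |= moving

print({k: sorted(n["words"]) for k, n in nodes.items()})
```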
Abstract:
Tissue P systems generalize the membrane structure tree, usual in the original models of P systems, to an arbitrary graph. The basic operations in these systems are communication rules, enriched in some variants with cell division or cell separation. Several variants of tissue P systems were recently studied, together with the concept of uniform families of these systems. Their computational power was shown to range between P and NP ∪ co-NP, thus characterizing some interesting borderlines between tractability and intractability. In this paper we show that the computational power of these uniform families in polynomial time is limited by the class PSPACE, which characterizes the power of many classical parallel computing models.
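For reference, the complexity bounds mentioned above can be written as the following chain of inclusions (a standard statement, not quoted from the paper):

```latex
\[
  \mathbf{P} \;\subseteq\; \mathbf{NP}\cup\mathbf{co\text{-}NP}
     \;\subseteq\; \mathbf{PSPACE}
\]
% The power of the uniform families was previously located between P and
% NP ∪ co-NP; the paper's contribution is the PSPACE upper bound.
```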
Abstract:
The calculation of the effective delayed neutron fraction, βeff, with Monte Carlo codes is a complex task due to the requirement of properly considering the adjoint weighting of delayed neutrons. Nevertheless, several techniques have been proposed to circumvent this difficulty and obtain accurate Monte Carlo results for βeff without the need to explicitly determine the adjoint flux. In this paper we review some of these techniques; namely, we have analyzed two variants of what we call the k-eigenvalue technique and other techniques based on different interpretations of the physical meaning of the adjoint weighting. To test the validity of all these techniques we have implemented them with the MCNPX code and benchmarked them against a range of critical and subcritical systems for which either experimental or deterministic values of βeff are available. Furthermore, several nuclear data libraries have been used in order to assess the impact of nuclear data uncertainty on the calculated value of βeff.
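As background, one widely used variant of the k-eigenvalue technique (sometimes called the prompt method) estimates βeff from two criticality runs; the relation below is added for clarity and is not necessarily the exact estimator analyzed in the paper:

```latex
\[
  \beta_{\mathrm{eff}} \;\approx\; 1 - \frac{k_p}{k}
\]
% k   : multiplication factor computed with prompt and delayed neutrons
% k_p : multiplication factor computed with prompt neutrons only
```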
Abstract:
The objective of this thesis is the design and optimization of optical-fiber-based phase-shift-keying (PSK) demodulators for high-bit-rate optical networks. PSK modulation formats have attracted significant attention in recent years because of their better performance with respect to conventional modulation formats. Principally, PSK signals improve spectral efficiency and tolerate more of the signal degradation caused by chromatic dispersion, polarization-mode dispersion, and nonlinearities in the fiber. In this work, PSK formats are analyzed in detail, including the differential (Differential Phase Shift Keying, DPSK), differential quadrature (Differential Quadrature Phase Shift Keying, DQPSK), and polarization-multiplexed (Polarization Multiplexing Differential Quadrature Phase Shift Keying, PM-DQPSK) variants, in order to design and optimize receivers enabling their demodulation. To this end, novel structures that offer better receiver performance and a cost reduction compared with current structures have been analyzed and developed.
Two novel receivers based on an all-fiber in-line Mach-Zehnder interferometer (MZI) are proposed for DPSK signal demodulation. The operating principle of the all-fiber MZI is the modal interference that occurs in a multimode fiber (MMF) when it is located between two single-mode fibers (SMFs). This single-mode-multimode-single-mode (SMS) configuration provides a good extinction ratio if the incoming power from the SMF is coupled mainly and equally into two dominant modes excited in the MMF. To improve the interference extinction ratio, two novel SMS structures have been studied and demonstrated, both theoretically and experimentally. The first is based on a graded-index MMF with a central dip in its index profile; the second is based on a conventional graded-index MMF spliced with a lateral offset between the two SMFs. Theoretical analysis shows that, in both schemes, 80-90% of the incoming power couples into the two dominant modes excited in the MMF, with a power difference between them below ~10%. Experimental results show that an interference extinction ratio of at least 12 dB can be obtained. To demonstrate the capacity of these two structures as DPSK signal demodulators, numerical simulations of a complete optical transmission system have been carried out, and receiver quality has been analyzed from different perspectives, such as sensitivity, tolerance to severe optical filtering, and tolerance to chromatic and polarization-mode dispersion. In all cases the simulation results show that the two proposed receivers perform comparably to conventional ones.
An alternative design for a DQPSK receiver, based on a polarization-maintaining fiber (PMF), is also presented. Theoretical analysis and numerical simulations show that the proposed DQPSK receiver performs similarly to conventional ones. To complement the work on the PMF-based DQPSK receiver, the demodulation principle is extended to PM-DQPSK signals, resulting in the proposal of a novel demodulation structure: the proposed PM-DQPSK receiver is based on a single delay line together with a polarization rotator. The quality of the proposed DQPSK and PM-DQPSK receivers has been analyzed from different perspectives, such as sensitivity, tolerance to severe optical filtering, tolerance to chromatic and polarization-mode dispersion, and behavior under non-ideal conditions. Compared with conventional receivers, our proposals exhibit similar performance while allowing a simpler design that can potentially reduce cost.
The wavelength-division-multiplexing (WDM) technology used in current optical communication networks requires optical filters with passbands as narrow as possible, as well as a series of devices that incorporate filters in their architecture, such as multiplexers, demultiplexers, switches, reconfigurable add-drop multiplexers (ROADMs), and optical cross-connects (OXCs). All these devices connected together are equivalent to a chain of filters whose bandwidth becomes increasingly narrow, eventually distorting the waveform of the signals. Therefore, in addition to analyzing the impact of optical filtering on 40 Gbps DQPSK and 100 Gbps PM-DQPSK signals, this thesis studies which kind of optical filter minimizes the signal degradation and analyzes the maximum number of concatenated filters that maintains the required system quality. Four types of optical filters have been studied and simulated: Butterworth, Bessel, FBG, and F-P.
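For context, the extinction ratio of the SMS structure follows from standard two-beam interference between the two dominant modes; the relation below, with P1 and P2 denoting the modal powers, is background material rather than a formula quoted from the thesis:

```latex
\[
  \mathrm{ER} \;=\; 10\log_{10}
    \frac{\left(\sqrt{P_1}+\sqrt{P_2}\right)^{2}}
         {\left(\sqrt{P_1}-\sqrt{P_2}\right)^{2}} \;\;\mathrm{dB}
\]
% Nearly equal coupling into the two modes (P_1 ≈ P_2, e.g. the <10% imbalance
% reported above) is what yields a high extinction ratio.
```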
Abstract:
The shelter effect of a windbreak protects aggregate piles and reduces particle emissions in harbours. RANS (Reynolds-averaged Navier–Stokes) simulations using three variants of the k–ε turbulence closure model (standard k–ε, RNG k–ε and realizable k–ε) have been performed to analyse the wind flow characteristics behind an isolated fence located on a flat surface without roughness elements. The performance of the three turbulence models has been assessed against wind tunnel experiments. Fences with different porosities (φ) have been evaluated using wind tunnel experiments as well as numerical simulations, with the aim of determining an optimum porosity for the sheltering effect of an isolated windbreak. A value of φ = 0.35 was found to be the optimum among the studied porosities (φ = 0, 0.1, 0.24, 0.35, 0.4, 0.5).
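As background, a common way to represent a porous fence in RANS simulations (an assumption for context, not a formula quoted from the paper) is a momentum sink giving a pressure drop proportional to the local dynamic pressure:

```latex
\[
  \Delta p \;=\; -\,k_r\,\tfrac{1}{2}\,\rho\,u\,\lvert u\rvert
\]
% The resistance coefficient k_r decreases with increasing porosity phi,
% which is why the sheltering effect depends on phi.
```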
Abstract:
We discuss here different variants of the Sharing abstract domain, including the base domain that captures set-sharing; a variant to capture pair-sharing, in which redundant sharing groups (w.r.t. the pair-sharing property) can be eliminated; and an alternative representation based on cliques. The original proposal for using cliques in the non-redundant version of the domain is reviewed, then extended to the base domain. Variants of all the domains including freeness alone, and freeness together with linearity, are also studied.
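As an illustration of the redundancy elimination mentioned above, the sketch below drops a sharing group when every variable pair it encodes is already covered by another group; this simple formalization is assumed for the example and is not quoted from the paper.

```python
# Pair-sharing redundancy elimination on a set-sharing description (sketch).
from itertools import combinations

def pairs(group):
    return set(combinations(sorted(group), 2))

def eliminate_redundant(sharing):
    result = set(sharing)
    for g in sorted(sharing, key=len, reverse=True):
        others = result - {g}
        covered = set().union(*(pairs(o) for o in others)) if others else set()
        if len(g) > 2 and pairs(g) <= covered:
            result = others   # g adds no pair-sharing information
    return result

sh = {frozenset("xy"), frozenset("yz"), frozenset("xz"), frozenset("xyz")}
print(sorted(map(sorted, eliminate_redundant(sh))))  # {x,y,z} is redundant
```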
Abstract:
In this paper we propose a generalization of the accepting splicing systems introduced in Mitrana et al. (Theor Comput Sci 411:2414–2422, 2010). More precisely, the input word is accepted as soon as a permitting word is obtained, provided that no forbidding word has been obtained so far; otherwise it is rejected. Note that in the new variant of accepting splicing systems the input word is rejected if either no permitting word is ever generated (as in Mitrana et al., Theor Comput Sci 411:2414–2422, 2010) or a forbidding word has been generated and no permitting word had been generated before. We investigate the computational power of the new variants of accepting splicing systems and the interrelationships among them. We show that the new condition strictly increases the computational power of accepting splicing systems. Although there are regular languages that cannot be accepted by any of the splicing systems considered here, the new variants can accept non-regular and even non-context-free languages, a situation that is not very common in the case of (extended) finite splicing systems without additional restrictions. We also show that the smallest class of languages out of the four classes defined by accepting splicing systems is strictly included in the class of context-free languages. Solutions to a few decidability problems are immediately derived from the proof of this result.
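For readers new to splicing, the sketch below shows the classical splicing operation that such systems iterate; the toy words and rule are invented for illustration, under the assumed Paun-style semantics.

```python
# Basic splicing (sketch): a rule (u1, u2, u3, u4) cuts x between u1 and u2,
# cuts y between u3 and u4, and recombines prefix of x with suffix of y.
def splice(x, y, rule):
    u1, u2, u3, u4 = rule
    results = set()
    for i in range(len(x) + 1):
        if x[:i].endswith(u1) and x[i:].startswith(u2):
            for j in range(len(y) + 1):
                if y[:j].endswith(u3) and y[j:].startswith(u4):
                    results.add(x[:i] + y[j:])   # x1 u1 u4 y2
    return results

# With rule (a, b, c, d): "aab" splits as "aa|b", "ccd" as "cc|d" -> "aad".
print(splice("aab", "ccd", ("a", "b", "c", "d")))  # {'aad'}
```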
Abstract:
In this paper we propose a condition, defined by a finite set of forbidding words, for rejecting the input word of an accepting splicing system. We investigate the computational power of the new variants of accepting splicing systems and show that the new condition strictly increases their computational power. Rather surprisingly, the accepting splicing systems considered here can accept non-regular languages, a situation that had never occurred in the case of (extended) finite splicing systems without additional restrictions.
Abstract:
Bread wheat (Triticum aestivum ssp. vulgare L., AABBDD, 2n=6x=42) flour has unique dough viscoelastic properties conferred by the prolamins: glutenins and gliadins. Both types of proteins are cross-linked to form gluten polymers. On the basis of their mobility in SDS-PAGE, glutenins are classified into two groups: high-molecular-weight glutenin subunits (HMW-GS) and low-molecular-weight glutenin subunits (LMW-GS). Genes encoding HMW-GS are located on group 1 chromosomes at three loci, Glu-A1, Glu-B1 and Glu-D1, each encoding one or two polypeptides, named subunits. Allelic variation of HMW-GS is the most important determinant of bread-making quality and has been exhaustively studied at the protein and DNA levels. Knowledge of these proteins has contributed substantially to the genetic improvement of bread quality in breeding programs. Compared with HMW-GS, LMW-GS form a much more complex family. Most LMW-GS genes are located on group 1 chromosomes at the Glu-A3, Glu-B3 and Glu-D3 loci, which are closely linked to the gliadin loci. The total gene copy number has been estimated at 10-40 in hexaploid wheat; the exact number is still unknown, mostly owing to the lack of efficient methods to distinguish members of this multigene family. The nomenclature of LMW-GS alleles by conventional electrophoresis is also unclear, and different authors may assign different alleles to the same variety, adding confusion to the study of this complex family. The use of molecular markers to discriminate LMW-GS genes, although not an easy task, can be very useful in breeding programs.
The objective of this work was to gain insight into the relationship between glutenins and bread-making quality, and to develop molecular markers that help in the allele classification of HMW-GS and LMW-GS. Two populations of advanced F4:6 lines were obtained from the crosses 'Tigre' x 'Gazul' and 'Fiel' x 'Taber'; lines homogeneous for their HMW-GS, LMW-GS and gliadin patterns were selected for quality analysis. The allele classification of HMW-GS was performed by SDS-PAGE and complemented by PCR analysis; a new PCR marker was developed to unambiguously differentiate between the similar subunits Bx7 and Bx7* of the Glu-B1 locus. The allele classification of LMW-GS was initially performed by SDS-PAGE following different established nomenclatures and using standard varieties for each allele. As the results were not fully conclusive for the Glu-B3 locus, a molecular-marker system was applied: DNA from the parental lines and the standard varieties was amplified using primers designed on conserved domains of the LMW-GS genes and analyzed by capillary electrophoresis. The amplification patterns were compared among samples and related to the protein allele classification, making it possible to establish a correspondence between specific amplification products and almost all the LMW-GS alleles analyzed; with this method the allele classification of the four parental lines was clarified.
The flour quality of the F4:6 advanced lines was tested by protein content, sedimentation test (SDSS) and Chopin alveograph (parameters P, L, P/L and W), and the values were analyzed in relation to the prolamin composition of the lines. In the 'Fiel' x 'Taber' population, the Glu-A3 locus influenced the SDSS values: lines carrying the new allele Glu-A3b' showed significantly higher SDSS values than lines with the Glu-A3f allele. In the 'Tigre' x 'Gazul' population, the Glu-B1 and Glu-B3 loci both affected the quality parameters. The results indicated that, for SDSS and P values, lines with the HMW-GS Bx7OE+By8 were significantly better than lines with Bx17+By18, and that lines carrying the Glu-B3ac allele had significantly higher P values and significantly lower L values than lines with the Glu-B3ad allele. The analysis of the quality parameters in relation to the amplified LMW-GS fragments revealed a significant effect of two fragments (2-616 and 2-636) on the P values, the presence of fragment 2-636 being associated with higher P values. These fragments were cloned and sequenced, confirming that they correspond to Glu-B3 genes. Sequence analysis revealed that the difference between them lies in a few SNPs and a deletion of 21 nucleotides that, at the protein level, corresponds to an InDel of a heptapeptide in the repetitive region. In this work, the use of lines differing at the Glu-B3 locus has made it possible to analyze the influence of this locus (the least characterized to date) on bread-making quality: the results show that the Glu-B3 allele composition influences the alveograph tenacity parameter P. The existence of different molecular variants of Glu-B3 alleles has been assessed with a molecular-marker method. This work supports the use of molecular approaches in the study of the very complex LMW-GS family and validates their application in the analysis of advanced recombinant lines for quality studies.
Abstract:
Systems based on OFDM (Orthogonal Frequency Division Multiplexing) are an evolution of the traditional FDM (Frequency Division Multiplexing) systems which achieves a more efficient use of the available bandwidth. Nowadays, OFDM systems and their variants occupy a very important place in communications, being implemented in standards such as DVB-T (the terrestrial digital television standard), ADSL, LTE, WiMAX, DAB (digital radio), and others. For these reasons, this project implements an OFDM system in Matlab on which various simulations can be run to better understand its operation. The main objectives of the simulations are to test the use of turbo codes (comparing them with traditional convolutional codes) and of an equalizer, with the intention of improving the quality of the system (receiving fewer erroneous bits) under increasingly adverse conditions: low signal-to-noise ratios and multipath channels. To this end, the necessary Matlab functions have been implemented, together with a graphical user interface (GUI) that makes the program easier to use and more didactic. The second and third chapters of this project study the foundations of OFDM systems: the second concentrates on a purely theoretical study, while the third focuses on the theory behind the blocks implemented in the OFDM system developed here. The fourth chapter explains the different options that can be carried out through the implemented interface and provides a manual for its correct use. The fifth chapter is divided into two parts: the first shows the representations the program can produce, while the second presents simulations that check how the system responds to different channel configurations and to different configurations of the system itself (one coding scheme or another, use of the equalizer or of the cyclic prefix, etc.). Finally, the last chapter presents the conclusions of this project, as well as possible lines of work for future versions.
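As a minimal illustration of the system the project simulates, the following numpy sketch reproduces the OFDM principle end to end (IFFT at the transmitter, cyclic prefix, multipath channel, FFT plus a one-tap equalizer at the receiver); parameter values are illustrative and unrelated to the project's Matlab code.

```python
# Minimal OFDM link: QPSK -> IFFT -> cyclic prefix -> channel -> FFT -> equalizer.
import numpy as np

rng = np.random.default_rng(1)
N, cp = 64, 16                                  # subcarriers, cyclic prefix length
bits = rng.integers(0, 2, 2 * N)
symbols = (2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)   # QPSK mapping

tx = np.fft.ifft(symbols, N)                    # multiplex onto subcarriers
tx = np.concatenate([tx[-cp:], tx])             # prepend cyclic prefix

h = np.array([1.0, 0.4, 0.2])                   # toy multipath channel
rx = np.convolve(tx, h)[: N + cp]
rx += 0.05 * (rng.standard_normal(N + cp) + 1j * rng.standard_normal(N + cp))

rx = rx[cp:]                                    # drop cyclic prefix
eq = np.fft.fft(rx, N) / np.fft.fft(h, N)       # one-tap zero-forcing equalizer

rx_bits = np.empty_like(bits)
rx_bits[0::2] = (eq.real > 0).astype(bits.dtype)
rx_bits[1::2] = (eq.imag > 0).astype(bits.dtype)
print("bit errors:", int(np.sum(rx_bits != bits)))
```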
Abstract:
Systems used for target localization, such as goods, individuals, or animals, commonly rely on operational means to meet the final application demands. However, what would happen if some means were powered up randomly by harvesting systems? And what if those devices not randomly powered had their duty cycles restricted? Under what conditions would such an operation be tolerable in localization services? What if the references provided by nodes in a tracking problem were distorted? Moreover, there is an underlying topic common to the previous questions regarding the transfer of conceptual models to reality in field tests: what challenges are faced upon deploying a localization network that integrates energy harvesting modules? The application scenario of the system studied is a traditional herding environment of semi-domesticated reindeer (Rangifer tarandus tarandus) in northern Scandinavia. In these conditions, information on approximate locations of reindeer is as important as environmental preservation. Herders also need cost-effective devices capable of operating unattended in, sometimes, extreme weather conditions. The analyses developed are valuable not only for the specific application environment presented, but also because they may serve as an approach to the performance of navigation systems in the absence of reasonably accurate references like those of the Global Positioning System (GPS). A number of energy-harvesting solutions, like thermal and radio-frequency harvesting, do not commonly provide power beyond one milliwatt. When they do, battery buffers may be needed (as happens with solar energy), which may raise costs and make systems more dependent on environmental temperatures. In general, given our problem, a harvesting system is needed that is capable of providing energy bursts of, at least, some milliwatts. Many works on localization problems assume that devices have certain capabilities to determine unknown locations based on range-based techniques or fingerprinting, which cannot be assumed in the approach considered herein. The system presented is akin to range-free techniques, but goes to the extent of considering very low node densities: most range-free techniques are, therefore, not applicable. Animal localization, in particular, is usually supported by accurate devices such as GPS collars, which deplete their batteries in, at most, a few days. Such short-life solutions are not particularly desirable in the framework considered. In tracking, the challenge many times addressed aims at attaining high precision levels from complex reliable hardware and thorough processing techniques. One of the challenges in this Thesis is the use of equipment with just part of its facilities in permanent operation, which may yield high input noise levels in the form of distorted reference points. The solution presented integrates a kinetic harvesting module in some nodes, which are expected to be a majority in the network. These modules are capable of providing power bursts of some milliwatts, which suffice to meet node energy demands. The usage of harvesting modules in the aforementioned conditions makes the system less dependent on environmental temperatures, as no batteries are used in nodes with harvesters; it may also be an advantage in economic terms. There is a second kind of nodes: they are battery powered (without kinetic energy harvesters) and are, therefore, dependent on temperature and battery replacements.
In addition, their operation is constrained by duty cycles in order to extend node lifetime and, consequently, their autonomy. There is, in turn, a third type of nodes (hotspots), which can be static or mobile. They are also battery-powered and are used to retrieve information from the network so that it can be presented to users. The system operational chain starts at the kinetic-powered nodes broadcasting their own identifier. If an identifier is received at a battery-powered node, the latter stores it for its records. Later, as the recording node meets a hotspot, its full record of detections is transferred to the hotspot. Every detection registry comprises, at least, a node identifier and the position read from its GPS module by the battery-operated node prior to the detection. The characteristics of the system presented give the aforementioned operation certain particularities, which are also studied. First, identifier transmissions are random, as they depend on movements at the kinetic modules--reindeer movements in our application. Not every movement suffices, since it must overcome a certain energy threshold. Second, identifier transmissions may not be heard unless there is a battery-powered node in the surroundings. Third, battery-powered nodes do not poll their GPS module continuously, hence localization errors rise even more; let us recall at this point that such behavior is tied to the aforementioned power saving policies to extend node lifetime. Last, some time elapses between the instant an identifier random transmission is detected and the moment the user is aware of such a detection: it takes some time to find a hotspot. Tracking is posed as a problem of a single kinetically-powered target and a population of battery-operated nodes with higher densities than in the localization problem. Since the latter provide their approximate positions as reference locations, the study is again focused on assessing the impact of such distorted references on performance. Unlike in localization, distance-estimation capabilities based on signal parameters are assumed in this problem. Three variants of the Kalman filter family are applied in this context: the regular Kalman filter, the alpha-beta filter, and the unscented Kalman filter. The study enclosed hereafter comprises both field tests and simulations. Field tests were used mainly to assess the challenges related to power supply and operation in extreme conditions, as well as to model nodes and some aspects of their operation in the application scenario. These models are the basis of the simulations developed later. The overall system performance is analyzed according to three metrics: number of detections per kinetic node, accuracy, and latency. The links between these metrics and the operational conditions are also discussed and characterized statistically. Subsequently, such statistical characterization is used to forecast performance figures given specific operational parameters. In tracking, also studied via simulations, nonlinear relationships are found between accuracy and the duty cycles and cluster sizes of battery-operated nodes. The solution presented may be more complex in terms of network structure than existing solutions based on GPS collars. However, its main gain lies in taking advantage of users' error tolerance to reduce costs and become more environmentally friendly by diminishing the potential amount of batteries that can be lost.
Whether it is applicable or not depends ultimately on the conditions and requirements imposed by users' needs and operational environments, which is, as it has been explained, one of the topics of this Thesis.
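Of the three Kalman-family variants mentioned above, the alpha-beta filter is the simplest; the sketch below shows it on an invented 1-D constant-velocity track with distorted (noisy) references, purely as an illustration of the filtering principle.

```python
# Alpha-beta filter sketch: predict with a constant-velocity model, then
# correct position and velocity with fixed gains alpha and beta.
import numpy as np

def alpha_beta_track(measurements, dt=1.0, alpha=0.4, beta=0.05):
    x, v = measurements[0], 0.0          # initial position and velocity
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt              # prediction step
        r = z - x_pred                   # innovation (residual)
        x = x_pred + alpha * r           # position correction
        v = v + (beta / dt) * r          # velocity correction
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(2)
truth = 0.5 * np.arange(50)                       # target moving at 0.5 m/s
noisy = truth + rng.normal(0, 2.0, truth.size)    # distorted references
est = alpha_beta_track(noisy)
print("raw RMSE:", np.sqrt(np.mean((noisy[1:] - truth[1:]) ** 2)).round(2),
      "filtered RMSE:", np.sqrt(np.mean((est - truth[1:]) ** 2)).round(2))
```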
Abstract:
Membrane computing has emerged as an alternative to traditional computing. Within this field lie the so-called Transition P Systems, which are based on the existence of regions ("membranes") containing resources and rules; the rules make those resources evolve, taking each region to a new situation called a configuration, and the succession of configurations constitutes the computation. In this field, the Natural Computing Group of the Universidad Politécnica de Madrid carries out numerous investigations, which have produced many papers and several doctoral theses. The main research lines so far have been the study of the theoretical model on which Transition P Systems are defined, the study of the algorithms used to apply the evolution rules in the regions, the design of new architectures that improve communication among the different membranes (regions) that compose the system, and the implementation of these systems on hardware devices that may define future machines based on this model. Within this last field, that is, within the objective of finally building machines able to carry out the functionality of computing with P Systems, the present doctoral thesis centers on the design of two parallel processors that, applying variants of existing algorithms, increase the level of intra-parallelism in the rule application phase. The design and creation of both processors contribute novelties to Transition P System research, in that they bring into hardware concepts that, although previously defined theoretically, had not been introduced into the circuits designed for these systems. Both processors share the following characteristics:
- They offer high performance in the rule application phase, while keeping a moderate flexibility and scalability that depend on the final technology on which the processors are synthesized.
- They provide a high level of intra-parallelism in the regions by allowing several rules to be applied simultaneously.
- They are universal, in that they do not depend on the nature of the rules that compose the P System.
- They show a non-deterministic behavior that is inherent to the very nature of these systems.
The first processor uses the power set of the set of application rules together with the concept of maximal applicability to improve intra-parallelism; the second one also includes the concept of applicability domain to determine the set of rules that are applicable at each moment with the existing resources. Both processors are designed and tested with Altera electronic design tools and are ready to be synthesized on FPGAs.
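As an illustration of the maximal applicability concept used by the first processor, the sketch below computes how many times a rule can be applied given a region's resources; the multisets are invented, and the hardware representation is, of course, different.

```python
# Maximal applicability (sketch): the number of simultaneous applications of a
# rule is limited by its scarcest reactant in the region's multiset.
from collections import Counter

def maximal_applicability(resources: Counter, rule: Counter) -> int:
    # floor(available / required), minimized over every object the rule consumes
    return min(resources[obj] // need for obj, need in rule.items())

region = Counter({"a": 7, "b": 5, "c": 2})
rule_r1 = Counter({"a": 2, "b": 1})     # consumes 2 a's and 1 b per application
print(maximal_applicability(region, rule_r1))  # -> 3 (limited by a: 7 // 2)
```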
Abstract:
Malware is a serious threat to the security of systems. With the widespread use of the World Wide Web there has been a huge increase in virus attacks, making computer security essential for all computers, and new research areas have grown around the incidents generated, one of them being malware classification. Malware developers use new techniques to generate polymorphic malware by reusing existing malware, so it is necessary to group variants into families in order to study their characteristics and be able to detect new variants. This work, in addition to presenting a detailed survey of the state of the art in the classification of PE executable malware, presents an approach that improves the classification of the MALICIA malware database using the static features Imphash and Pehash of the executable files. Using these features, clustering is performed with an aggressive clustering algorithm, and the result is combined with the current classification through a majority voting algorithm and the icon_label feature, obtaining a precision of 99.15% and a recall of 99.32% and improving the classification of MALICIA with an F-measure of 99.23%.
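The majority-voting step described above can be sketched as follows: each cluster produced by the feature-based clustering inherits the most frequent existing family label among its members; cluster contents here are invented for illustration.

```python
# Majority-voting relabeling of clusters (sketch).
from collections import Counter

def majority_vote(clusters, current_labels):
    # clusters: {cluster_id: [sample_ids]}; current_labels: {sample_id: family}
    relabeled = {}
    for cid, samples in clusters.items():
        votes = Counter(current_labels[s] for s in samples if s in current_labels)
        family = votes.most_common(1)[0][0] if votes else "unknown"
        for s in samples:
            relabeled[s] = family       # the whole cluster takes the majority family
    return relabeled

clusters = {0: ["s1", "s2", "s3"], 1: ["s4", "s5"]}
labels = {"s1": "zbot", "s2": "zbot", "s3": "cleaman", "s4": "winwebsec"}
print(majority_vote(clusters, labels))
# {'s1': 'zbot', 's2': 'zbot', 's3': 'zbot', 's4': 'winwebsec', 's5': 'winwebsec'}
```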