974 results for "Implementation complexity"
Abstract:
This paper presents a new low-complexity multicarrier modulation (MCM) technique based on lattices which achieves a peak-to-average power ratio (PAR) as low as three. The scheme can be viewed as a drop-in replacement for the discrete multitone (DMT) modulation of an asymmetric digital subscriber line (ADSL) modem. We show that the lattice-MCM retains many of the attractive features of sinusoidal MCM, and does so with lower implementation complexity, O(N), compared with DMT, which requires O(N log N) operations. We also present techniques for narrowband interference rejection and power profiling. Simulation studies confirm that the performance of the lattice-MCM is superior, even compared with recent techniques for PAR reduction in DMT.
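As a point of reference for the PAR figure quoted above, the metric is simply peak instantaneous power over mean power of a transmitted block. A minimal sketch follows; the random-phase DMT symbol is an illustrative stand-in, not the paper's lattice construction:

```python
import numpy as np

def par(x: np.ndarray) -> float:
    """Peak-to-average power ratio of a discrete-time signal block."""
    power = np.abs(x) ** 2
    return float(power.max() / power.mean())

# A DMT symbol is an inverse FFT of QAM-loaded tones; its worst-case PAR
# grows with the number of tones N, which is what lattice-MCM avoids.
dmt_symbol = np.fft.ifft(np.exp(2j * np.pi * np.random.rand(512)))
print(par(dmt_symbol))
```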
Abstract:
The presence of high phase noise in addition to additive white Gaussian noise in coherent optical systems affects the performance of forward error correction (FEC) schemes. In this paper, we propose a simple scheme for such systems, using block interleavers and binary Bose–Chaudhuri–Hocquenghem (BCH) codes. The block interleavers are specifically optimized for differential quadrature phase shift keying (DQPSK) modulation. We propose a method for selecting BCH codes that, together with the interleavers, achieve a target post-FEC bit error rate (BER). This combination of interleavers and BCH codes has very low implementation complexity. In addition, our approach is straightforward, requiring only short pre-FEC simulations to parameterize a model, based on which we select codes analytically. We aim to correct a pre-FEC BER of around (Formula presented.). We evaluate the accuracy of our approach using numerical simulations. For a target post-FEC BER of (Formula presented.), codes selected using our method result in BERs around 3× the target and achieve the target with around 0.2 dB extra signal-to-noise ratio.
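To make the interleaving component concrete, below is a generic textbook block interleaver of the kind such a scheme builds on; the dimensions and helper names are illustrative, not the paper's DQPSK-optimized parameters:

```python
import numpy as np

def block_interleave(bits: np.ndarray, rows: int, cols: int) -> np.ndarray:
    """Write row-wise into a rows x cols block, read out column-wise, so a
    burst of up to `rows` consecutive channel errors lands in `rows`
    different rows (i.e., different BCH codewords)."""
    return bits[: rows * cols].reshape(rows, cols).T.reshape(-1)

def block_deinterleave(bits: np.ndarray, rows: int, cols: int) -> np.ndarray:
    """Invert block_interleave at the receiver, ahead of BCH decoding."""
    return bits[: rows * cols].reshape(cols, rows).T.reshape(-1)

# Round-trip check with illustrative dimensions (not the paper's).
data = np.arange(12)
assert np.array_equal(block_deinterleave(block_interleave(data, 3, 4), 3, 4), data)
```

The design intuition is that phase-noise-induced errors after differential demodulation arrive in short bursts, and spreading a burst across codewords keeps each BCH decoder within its correction capability.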
Abstract:
With technology scaling, vulnerability to soft errors in random logic is increasing. There is a need for on-line error detection and protection for logic gates, even at sea level. The error checker is the key element of an on-line detection mechanism. We compare three different checkers for error detection from the point of view of area, power, and false error detection rate. We find that the double sampling checker (used in Razor) is the simplest and most area- and power-efficient, but suffers from a very high false detection rate of 1.15 times the actual error rate. We also find that the alternative approaches, triple sampling and the integrate-and-sample (I&S) method, can be designed to have zero false detection rates, but at increased area, power, and implementation complexity. The triple sampling method has about 1.74 times the area and twice the power of the double sampling method, and also needs a complex clock generation scheme. The I&S method needs about 16% more power with 0.58 times the area of double sampling, but comes with more stringent implementation constraints, as it requires detection of small voltage swings.
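As background on the mechanism being compared, here is a toy behavioral model of double sampling; the delay distribution, clock parameters, and the short-path-race explanation of false detections are assumptions for illustration, not the paper's experimental setup:

```python
import random

def double_sampling_toy(cycles=200_000, period=1.0, window=0.2, seed=1):
    """Toy model of a double-sampling (Razor-style) checker.

    d is the settling time of this cycle's combinational data. A true
    detection: the main flop misses the data (d > period) but the delayed
    shadow sample catches it (d < period + window). A false detection:
    the next cycle's fast data settles inside the shadow window
    (d_next < window), so the shadow disagrees with a correct main
    sample. All parameters here are invented for illustration.
    """
    rng = random.Random(seed)
    delays = [rng.uniform(0.1, 1.15) for _ in range(cycles + 1)]  # fast and slow paths
    true_det = false_det = 0
    for i in range(cycles):
        d, d_next = delays[i], delays[i + 1]
        if period < d < period + window:
            true_det += 1            # real late arrival, caught by the shadow
        elif d <= period and d_next < window:
            false_det += 1           # short-path race: good data flagged
    return true_det, false_det

print(double_sampling_toy())
```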
Abstract:
Distributed hash tables have recently become a useful building block for a variety of distributed applications. However, current schemes based upon consistent hashing require both considerable implementation complexity and substantial storage overhead to achieve desired load balancing goals. We argue in this paper that these goals can be achieved more simply and more cost-effectively. First, we suggest the direct application of the "power of two choices" paradigm, whereby an item is stored at the less loaded of two (or more) random alternatives. We then consider how associating a small constant number of hash values with a key can naturally be extended to support other load balancing methods, including load-stealing or load-shedding schemes, as well as providing natural fault-tolerance mechanisms.
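The "power of two choices" placement itself fits in a few lines. In this sketch the salted-hash candidate derivation and d = 2 are illustrative stand-ins for whatever hashing the DHT actually uses:

```python
import hashlib

def candidate_node(key: str, salt: int, num_nodes: int) -> int:
    """Derive one candidate node for a key via a salted hash (illustrative)."""
    digest = hashlib.sha256(f"{salt}:{key}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_nodes

def place(key: str, load: list[int], d: int = 2) -> int:
    """Store the item at the least loaded of d hash-derived candidates."""
    candidates = [candidate_node(key, salt, len(load)) for salt in range(d)]
    target = min(candidates, key=lambda n: load[n])
    load[target] += 1
    return target

def lookup_candidates(key: str, num_nodes: int, d: int = 2) -> list[int]:
    """A reader probes the same d candidates to find the item."""
    return [candidate_node(key, salt, num_nodes) for salt in range(d)]

load = [0] * 8
for i in range(100):
    place(f"item-{i}", load)
print(load)  # loads stay close to balanced
```

Lookups stay O(d), and the classical balls-into-bins result behind the paradigm is that moving from one choice to two shrinks the maximum load imbalance exponentially.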
Abstract:
Long reach passive optical networks (LR-PONs), which integrate fibre-to-the-home with metro networks, have been the subject of intensive research in recent years and are considered one of the most promising candidates for the next generation of optical access networks. Such systems ideally have reaches greater than 100 km and bit rates of at least 10 Gb/s per wavelength in the downstream and upstream directions. Due to the limited equipment sharing that is possible in access networks, the laser transmitters in the terminal units, which are usually the most expensive components, must be as cheap as possible. However, the requirement for low cost is generally incompatible with the need for a transmitter chirp characteristic that is optimised for such long reaches at 10 Gb/s, and hence dispersion compensation is required. In this thesis electronic dispersion compensation (EDC) techniques are employed to increase the chromatic dispersion tolerance and to enhance the system performance at the expense of moderate additional implementation complexity. In order to use such EDC in LR-PON architectures, a number of challenges associated with the burst-mode nature of the upstream link need to be overcome. In particular, the EDC must be made adaptive from one burst to the next (burst-mode EDC, or BM-EDC) on time scales of tens to hundreds of nanoseconds. Burst-mode operation of EDC has received little attention to date. The main objective of this thesis is to demonstrate the feasibility of such a concept and to identify the key BM-EDC design parameters required for applications in a 10 Gb/s burst-mode link. This is achieved through a combination of simulations and transmission experiments utilising off-line data processing. The research shows that burst-to-burst adaptation can in principle be implemented efficiently, opening the possibility of low overhead, adaptive EDC-enabled burst-mode systems.
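One plausible shape for such burst-to-burst adaptation (not the thesis's actual BM-EDC design) is a short feed-forward equalizer re-converged by LMS on each burst's preamble; the tap count, step size, and known training sequence below are assumed for illustration:

```python
import numpy as np

def equalize_burst(rx: np.ndarray, preamble: np.ndarray,
                   n_taps: int = 7, mu: float = 0.01) -> np.ndarray:
    """Re-train an FFE from scratch on one burst's preamble, then
    equalize the whole burst with the frozen taps (toy model)."""
    w = np.zeros(n_taps)
    w[n_taps // 2] = 1.0                   # start from a pass-through response
    for k in range(n_taps, len(preamble)):
        window = rx[k - n_taps:k][::-1]    # most recent sample first
        err = preamble[k] - w @ window
        w += mu * err * window             # LMS tap update
    return np.convolve(rx, w, mode="same")
```

The hard constraint stated in the abstract is that this re-convergence must complete within tens to hundreds of nanoseconds, i.e., within at most a few thousand bits of preamble at 10 Gb/s.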
Abstract:
We present two physical layer secure transmission schemes for multi-user multi-relay networks, where the communication from M users to the base station is assisted by direct links and by N decode-and-forward relays. In this network, we consider that a passive eavesdropper exists to overhear the transmitted information, which motivates exploiting the advantages of both the direct and relay links for physical layer security enhancement. To fulfill this requirement, we investigate two criteria for user and relay selection and examine the achievable secrecy performance. Criterion I performs a joint user and relay selection, while Criterion II performs separate user and relay selections with a lower implementation complexity. We derive a tight lower bound on the secrecy outage probability for Criterion I and an accurate analytical expression for the secrecy outage probability for Criterion II. We further derive the asymptotic secrecy outage probabilities at high transmit signal-to-noise ratios and high main-to-eavesdropper ratios for both criteria. We demonstrate that the secrecy diversity order is min(MN, M+N) for Criterion I, and N for Criterion II. Finally, we present numerical and simulation results to validate the proposed analysis, and show the occurrence condition of the secrecy outage probability floor.
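The joint-versus-separate tradeoff can be illustrated with a small Monte Carlo sketch. The Rayleigh fading, the omission of the direct links, and both selection rules below are simplifying assumptions, not the paper's exact system model or criteria:

```python
import numpy as np

rng = np.random.default_rng(7)

def secrecy_outage(M, N, snr_m, snr_e, rs, joint, trials=50_000):
    """Monte Carlo secrecy-outage estimate under assumed selection rules."""
    outages = 0
    for _ in range(trials):
        h1 = rng.exponential(snr_m, (M, N))  # user -> relay links
        h2 = rng.exponential(snr_m, N)       # relay -> BS links
        he = rng.exponential(snr_e, (M, N))  # links overheard by the eavesdropper
        main = np.minimum(h1, h2)            # decode-and-forward bottleneck per pair
        if joint:                            # Criterion I style: search all MN pairs
            u, r = np.unravel_index(main.argmax(), main.shape)
        else:                                # Criterion II style: user first, then relay
            u = int(h1.max(axis=1).argmax())
            r = int(main[u].argmax())
        cs = max(np.log2(1 + main[u, r]) - np.log2(1 + he[u, r]), 0.0)
        outages += cs < rs
    return outages / trials

# Joint selection should show a steeper outage decay with main-channel SNR.
for snr_db in (10, 20, 30):
    snr = 10 ** (snr_db / 10)
    print(snr_db, secrecy_outage(2, 3, snr, 1.0, rs=1.0, joint=True),
          secrecy_outage(2, 3, snr, 1.0, rs=1.0, joint=False))
```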
Abstract:
An invisibility cloak is a device that can hide a target by enclosing it from the incident radiation. This intriguing device has attracted a lot of attention since it was first implemented at a microwave frequency in 2006. However, the problems of existing cloak designs prevent them from being widely applied in practice. In this dissertation, we try to remove or alleviate three constraints on practical applications: lossy cloaking media, high implementation complexity, and the small size of hidden objects compared to the incident wavelength. To facilitate cloak design and experimental characterization, several devices and relevant techniques for measuring the complex permittivity of dielectric materials at microwave frequencies are developed. In particular, a unique parallel-plate waveguide chamber has been set up to automatically map the electromagnetic (EM) field distribution for wave propagation through resonator arrays and cloaking structures. The total scattering cross section of the cloaking structures was derived from the measured scattered field using this apparatus. To overcome the adverse effects of lossy cloaking media, microwave cloaks composed of identical dielectric resonators made of low-loss ceramic materials are designed and implemented. The effective permeability dispersion was provided by tailoring the dielectric resonator filling fractions. The cloak performance was verified by full-wave simulation of the true multi-resonator structures and by experimental measurements of the fabricated prototypes. With the aim of reducing the implementation complexity caused by the use of metamaterials for cloaking, we propose 2-D cylindrical and 3-D spherical cloaks made of multi-layer coatings of ordinary dielectric materials (εr > 1). A genetic algorithm was employed to optimize the dielectric profiles of the cloaking shells to minimize the scattering cross sections of the cloaked targets. The designed cloaks can be easily scaled to various operating frequencies. The simulation results show that the multi-layer cylindrical cloak substantially outperforms a similarly sized metamaterial-based cloak designed using transformation-optics reduced parameters. For the designed spherical cloak, the simulated scattering pattern shows that the total scattering cross section is greatly reduced, and the scattering in specific directions can be significantly reduced as well. It is shown that the cloaking efficiency for larger targets can be improved by employing lossy materials in the shell. Finally, we propose to hide a target inside a waveguide structure filled with epsilon-near-zero materials only, which are easy to implement in practice. The cloaking efficiency of this method, which was found to increase for larger targets, has been confirmed both theoretically and by simulations.
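The genetic-algorithm stage is conventional; a minimal sketch follows, with the full-wave scattering evaluation replaced by a dummy placeholder cost (scattering_cost is a hypothetical stand-in; a real design would call a Mie-series or FEM solver there):

```python
import numpy as np

rng = np.random.default_rng(0)

def scattering_cost(eps: np.ndarray) -> float:
    """Placeholder for the total scattering cross section of a target
    coated with concentric layers of these relative permittivities.
    A real evaluation requires a full-wave (e.g., Mie-series) solver."""
    return float(np.sum((eps - 2.0) ** 2))               # dummy stand-in

def ga_optimize(n_layers=8, pop=40, gens=200, lo=1.1, hi=10.0):
    """Minimal genetic algorithm over layer permittivities (eps_r > 1)."""
    population = rng.uniform(lo, hi, (pop, n_layers))
    for _ in range(gens):
        costs = np.array([scattering_cost(p) for p in population])
        parents = population[np.argsort(costs)[: pop // 2]]   # elitist selection
        pairs = parents[rng.integers(0, len(parents), (pop // 2, 2))]
        cut = rng.integers(1, n_layers, pop // 2)[:, None]    # 1-point crossover
        children = np.where(np.arange(n_layers) < cut, pairs[:, 0], pairs[:, 1])
        mask = rng.random(children.shape) < 0.05              # mutation
        children[mask] = rng.uniform(lo, hi, mask.sum())
        population = np.vstack([parents, children])
    return population[np.argmin([scattering_cost(p) for p in population])]

print(ga_optimize())
```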
Abstract:
We experimentally demonstrate, for the first time, an effective multiplier-free blind phase noise estimation technique for CO-OFDM systems, based on the statistical properties of the received symbols' phases. Our technique operates in polar coordinates and has very low implementation complexity.
Abstract:
Dynamically typed programming languages such as JavaScript and Python defer type checking until run time. To optimize the performance of these languages, virtual machine implementations for dynamic languages must try to eliminate redundant dynamic type tests. This is usually done using a type inference analysis. However, such analyses are often costly and involve trade-offs between compilation time and the precision of the results obtained. This has led to the design of increasingly complex VM architectures. We propose lazy basic block versioning, a simple just-in-time compilation technique that effectively eliminates redundant dynamic type tests on critical execution paths. This novel approach lazily generates specialized versions of basic blocks while propagating contextualized type information. Our technique requires no costly program analyses, is not constrained by the precision limitations of traditional type inference analyses, and avoids the complexity of speculative optimization techniques. Three extensions are added to basic block versioning to give it interprocedural optimization capabilities. A first extension gives it the ability to attach type information to object properties and global variables. Entry point specialization then lets it pass type information from calling functions to called functions. Finally, call continuation specialization transmits the types of the called functions' return values back to their callers at no dynamic cost. We demonstrate empirically that these extensions allow basic block versioning to eliminate more dynamic type tests than any static type inference analysis.
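To make the idea concrete, here is a toy model of lazy basic block versioning (in Python for brevity; the thesis targets a JavaScript JIT, and all names here are illustrative): one specialized version of a block is compiled per entry type context, and a context that proves the operand types gets a version with no dynamic type tests.

```python
from typing import Callable

# One compiled version per (block, entry type context), created lazily.
versions: dict[tuple[str, tuple], Callable] = {}

def specialize_add(ctx: dict[str, str]) -> Callable:
    """Emit a version of the toy block `c = a + b` for one type context."""
    if ctx.get("a") == "int" and ctx.get("b") == "int":
        # Both types are proven on this path: no dynamic checks emitted.
        return lambda a, b: a + b
    def generic(a, b):
        # Unknown types: this version keeps the dynamic type tests.
        if isinstance(a, int) and isinstance(b, int):
            return a + b
        return str(a) + str(b)  # toy semantics for the non-int path
    return generic

def enter_block(block_id: str, ctx: dict[str, str]) -> Callable:
    """Lazily create and cache the version of `block_id` for this context."""
    key = (block_id, tuple(sorted(ctx.items())))
    if key not in versions:
        versions[key] = specialize_add(ctx)  # stand-in for a real compiler
    return versions[key]

fast = enter_block("add", {"a": "int", "b": "int"})  # compiled once, reused
print(fast(2, 3))                                    # runs with no type tests
```

The laziness is the point: only contexts that actually occur at run time ever get compiled, so no whole-program inference pass is needed.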
Physical Layer Security with Threshold-Based Multiuser Scheduling in Multi-antenna Wireless Networks
Abstract:
In this paper, we consider a multiuser downlink wiretap network consisting of one base station (BS) equipped with A_A antennas, N_B single-antenna legitimate users, and N_E single-antenna eavesdroppers over Nakagami-m fading channels. In particular, we introduce a joint secure transmission scheme that adopts transmit antenna selection (TAS) at the BS and explores threshold-based selection diversity (tSD) scheduling over the legitimate users to achieve good secrecy performance while maintaining low implementation complexity. More specifically, in an effort to quantify the secrecy performance of the considered system, two practical scenarios are investigated: Scenario I, where the eavesdroppers' channel state information (CSI) is unavailable at the BS, and Scenario II, where the eavesdroppers' CSI is available at the BS. For Scenario I, novel exact closed-form expressions for the secrecy outage probability are derived, which are valid for general networks with an arbitrary number of legitimate users, antenna configuration, number of eavesdroppers, and switched threshold. For Scenario II, we take the ergodic secrecy rate as the principal performance metric and derive novel closed-form expressions for the exact ergodic secrecy rate. Additionally, we provide simple asymptotic expressions for the secrecy outage probability and the ergodic secrecy rate under two distinct cases: Case I, where the legitimate user is located close to the BS, and Case II, where both the legitimate user and the eavesdropper are located close to the BS. Our findings reveal that the secrecy diversity order is A_A m_A and the slope of the secrecy rate is one under Case I, while the secrecy diversity order and the slope of the secrecy rate collapse to zero under Case II, where a secrecy performance floor occurs. Finally, when the switched threshold is carefully selected, the considered scheduling scheme outperforms other well-known existing schemes in terms of the secrecy performance and complexity tradeoff.
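Threshold-based selection diversity is usually a switch-and-examine rule; the sketch below is one plausible reading (probe users in turn, schedule the first acceptable one, fall back to the best), with transmit antenna selection folded in. It is an assumed interpretation, not taken from the paper:

```python
import numpy as np

def tas_tsd_schedule(gains: np.ndarray, threshold: float) -> tuple[int, int]:
    """Threshold-based selection-diversity scheduling with transmit
    antenna selection (illustrative interpretation).

    gains[u, a]: instantaneous channel gain from BS antenna a to user u.
    Users are examined in turn; the first whose best-antenna gain meets
    the threshold is scheduled, with no further probing. If none
    qualifies, fall back to the overall best user.
    Returns (scheduled user index, selected antenna index).
    """
    best_antenna = gains.argmax(axis=1)
    best_gain = gains.max(axis=1)
    for u in range(gains.shape[0]):
        if best_gain[u] >= threshold:
            return u, int(best_antenna[u])
    u = int(best_gain.argmax())
    return u, int(best_antenna[u])
```

The complexity saving relative to always picking the best user is that, on average, only a few users need to be examined before the threshold is met, which is why the choice of threshold drives the performance/complexity tradeoff noted above.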
Abstract:
This thesis focuses on digital equalization of nonlinear fiber impairments in coherent optical transmission systems. Building from well-known physical models of signal propagation in single-mode optical fibers, novel nonlinear equalization techniques are proposed, numerically assessed, and experimentally demonstrated. The structure of the proposed algorithms is strongly driven by optimization of the performance-versus-complexity tradeoff, envisioning near-future practical application in commercial real-time transceivers. The work initially focuses on the mitigation of intra-channel nonlinear impairments, relying on the concept of digital backpropagation (DBP) associated with Volterra-based filtering. After a comprehensive analysis of the third-order Volterra kernel, a set of critical simplifications is identified, culminating in the development of reduced-complexity nonlinear equalization algorithms formulated in both the time and frequency domains. The implementation complexity of the proposed techniques is analytically described in terms of computational effort and processing latency, by determining the number of real multiplications per processed sample and the number of serial multiplications, respectively. The equalization performance is numerically and experimentally assessed through bit error rate (BER) measurements. Finally, the problem of inter-channel nonlinear compensation is addressed within the context of 400 Gb/s (400G) superchannels for long-haul and ultra-long-haul transmission. Different superchannel configurations and nonlinear equalization strategies are experimentally assessed, demonstrating that inter-subcarrier nonlinear equalization can provide enhanced signal reach while requiring only marginal added complexity.
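For orientation, the textbook split-step form of single-channel digital backpropagation that such equalizers build on is sketched below; sign conventions vary with the fiber model, and the thesis's reduced-complexity Volterra variants replace this computation rather than reproduce it:

```python
import numpy as np

def dbp_step(field: np.ndarray, dt: float, dz: float,
             beta2: float, gamma: float) -> np.ndarray:
    """One split step of digital backpropagation: undo dispersion in the
    frequency domain, then undo the Kerr nonlinear phase rotation in the
    time domain (one common sign convention)."""
    w = 2 * np.pi * np.fft.fftfreq(field.size, dt)
    field = np.fft.ifft(np.fft.fft(field) * np.exp(1j * (beta2 / 2) * w**2 * dz))
    return field * np.exp(-1j * gamma * np.abs(field) ** 2 * dz)
```

Per step, the two FFTs dominate the cost, which is where multiplication counts of the O(N log N) kind arise; the reduced time- and frequency-domain kernels described above aim to cut exactly that budget.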
Abstract:
Objectives: The intent of this paper is the examination of health IT implementation processes: the barriers to and facilitators of successful implementation, the identification of a beginning set of implementation best practices, the identification of gaps in the health IT implementation body of knowledge, and recommendations for future study and application. Methods: A literature review resulted in the identification of six health IT-related implementation best practices, which were subsequently debated and clarified by participants attending the NI2012 Research Post-Conference held in Montreal in the summer of 2012. Using the Consolidated Framework for Implementation Research (CFIR) to guide their application, the six best practices were applied to two distinct health IT implementation studies to assess their applicability. Results: Assessing the implementation processes from two markedly diverse settings illustrated both the challenges and the potential of using standardized implementation processes. In support of what was discovered in the review of the literature, "one size fits all" in health IT implementation is a fallacy, particularly when global diversity is added into the mix. At the same time, several frameworks show promise for use as "scaffolding" to begin to assess best practices, their distinct dimensions, and their applicability. Conclusions: Health IT innovations, regardless of the implementation setting, require a close assessment of many dimensions. While there is no "one size fits all", there are commonalities and best practices that can be blended, adapted, and utilized to improve the process of implementation. This paper examines health IT implementation processes and identifies a beginning set of implementation best practices, which could begin to address gaps in the health IT implementation body of knowledge.