936 results for Digital communication systems


Relevância:

100.00%

Publicador:

Resumo:

Over the past few years, the number of wireless network users has been increasing. Until now, Radio-Frequency (RF) has been the dominant technology, but the electromagnetic spectrum in that region is becoming saturated, demanding alternative wireless technologies. Recently, with the growing market for LED lighting, Visible Light Communications (VLC) has been drawing attention from the research community. First, the LED is an efficient device for illumination. Second, it is easy to modulate and offers high bandwidth. Finally, it can combine illumination and communication in the same device; in other words, it allows highly efficient wireless communication systems to be implemented. One of the most important aspects of a communication system is its reliability over noisy channels, where the received data can be affected by errors. To ensure proper operation, a channel encoder is usually employed; its function is to code the data to be transmitted so as to increase system performance. It commonly uses error-correcting codes (ECC), which append redundant information to the original data; at the receiver side, this redundancy is used to recover the erroneous data. This dissertation presents the implementation steps of a channel encoder for VLC. Several techniques were considered, such as Reed-Solomon and convolutional codes, block and convolutional interleaving, CRC and puncturing. A detailed analysis of the characteristics of each technique was made in order to choose the most appropriate ones. Simulink models were created to simulate how different codes behave in different scenarios. The models were then implemented in an FPGA and simulations were performed; hardware co-simulations were also used to speed up the simulations. In the end, different techniques were combined to create a complete channel encoder capable of detecting and correcting both random and burst errors, thanks to the use of an RS(255,213) code with a block interleaver.
Furthermore, after the decoding process, the proposed system can identify uncorrectable errors in the decoded data by means of the CRC-32 algorithm.
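The block-interleaving idea the abstract relies on can be sketched in a few lines of Python (a generic illustration, not the dissertation's FPGA implementation; the dimensions below are arbitrary):

```python
def block_interleave(data, rows, cols):
    """Write symbols row by row into a rows x cols array, read them out
    column by column, spreading any burst of channel errors across rows."""
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(data, rows, cols):
    """The inverse mapping is the same operation with the dimensions swapped."""
    return block_interleave(data, cols, rows)
```

With five rows holding one 4-symbol codeword each, a burst of three consecutive channel errors lands in three different codewords, leaving each within the per-codeword correction capability of a code such as RS(255,213).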


How does an archaeological museum understand its function in a digital environment? Consumer expectations are rapidly shifting from what used to be a passive relationship with exhibition contents towards one in which interaction, individuality and proactivity define the visitor experience. This consumer paradigm is much studied in fast-moving markets, where it provokes immediately measurable impacts. In other fields, such as tourism and regional development, the heterogeneous nature of the product to be branded makes it nearly impossible for a single player to engage successfully. This systemic feature implies that museums, acting as major stakeholders, often anchor a regional brand around which SMEs tend to cluster, and thus assume responsibilities in constructing marketable identities. As such, the archaeological element becomes a very useful trademark. On the other hand, it also emerges erratically on the Internet, in personal blogs, commercial websites, and social networks, which forces museums to step in as mediators, authenticating contents and providing credibility. What might be called the digital pull factor poses specific challenges to museum management: what is to be promoted, and how, in order to create and maintain a coherent presence in social media? The underlying issue this paper tries to address is how museums perceive their current and future role in digital communication.


We derive the Cramer-Rao Lower Bound (CRLB) for the estimation of initial conditions of noise-embedded orbits produced by general one-dimensional maps. We relate this bound's asymptotic behavior to the attractor's Lyapunov number and show numerical examples. These results pave the way for more suitable choices of the chaotic signal generator in some chaotic digital communication systems. (c) 2006 Published by Elsevier Ltd.
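The Lyapunov number referred to here is the exponential of the average log-derivative along an orbit; it can be estimated numerically for a one-dimensional map. The sketch below (a generic illustration, not the paper's derivation — the map and starting point are arbitrary choices) does this for the fully chaotic logistic map, whose exponent is known to equal ln 2, i.e. a Lyapunov number of 2:

```python
import math

def lyapunov_exponent(f, df, x0, n=50000, burn=100):
    """Average of log|f'(x)| along an orbit of x -> f(x); the Lyapunov
    number is exp() of this value."""
    x = x0
    for _ in range(burn):
        x = f(x)
    s = 0.0
    for _ in range(n):
        s += math.log(max(abs(df(x)), 1e-30))  # guard against df(x) == 0
        x = f(x)
    return s / n

f = lambda x: 4.0 * x * (1.0 - x)   # logistic map at r = 4 (fully chaotic)
df = lambda x: 4.0 - 8.0 * x
lam = lyapunov_exponent(f, df, 0.123)
```

Per the abstract, the faster nearby orbits diverge (larger Lyapunov number), the faster the CRLB on the initial condition decays as more orbit samples are observed.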


Telecommunications have been in constant evolution over recent decades. Among these technological innovations, the use of digital technologies is particularly relevant. Digital communication systems have proven their efficiency and introduced a new element into the signal transmission and reception chain: the digital processor. This device gives new radio equipment the flexibility of a programmable system. Nowadays, the behavior of a communication system can be modified simply by changing its software. This gave rise to a new radio model called Software-Defined Radio (SDR). In this model, the task of defining the radio's behavior moves to software, leaving to hardware only the implementation of the RF front-end. The radio is thus no longer static, defined by its circuits; it becomes a dynamic element whose operating characteristics, such as bandwidth, modulation and coding rate, can be changed even at runtime according to the software configuration. This article presents the use of GNU Radio, an open-source SDR framework, as a tool for developing configurable digital radios.
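The "behavior defined by software" idea can be illustrated without GNU Radio itself. In the toy sketch below (plain Python with hypothetical names, not the GNU Radio API), switching the modulation scheme is purely a configuration change; a real SDR would hand the resulting complex baseband samples to the RF front-end:

```python
import math

def modulate(bits, scheme):
    """Map bits to complex baseband symbols; the mapping is selected
    entirely by the 'scheme' configuration string, mimicking how an SDR
    redefines its modulation at runtime without hardware changes."""
    if scheme == "bpsk":
        return [complex(2 * b - 1, 0) for b in bits]           # 1 bit/symbol
    if scheme == "qpsk":
        assert len(bits) % 2 == 0
        syms = []
        for i in range(0, len(bits), 2):
            re = 2 * bits[i] - 1
            im = 2 * bits[i + 1] - 1
            syms.append(complex(re, im) / math.sqrt(2.0))      # 2 bits/symbol
        return syms
    raise ValueError("unknown scheme: " + scheme)
```

In GNU Radio the same reconfiguration is done by rewiring flowgraph blocks in software while the RF hardware stays untouched.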


Ever since Information and Communication Technologies began to gain importance in society, one of the main objectives has been to ensure that the transmitted information reaches the receiver intact. For this reason, it is necessary to develop new digital communication systems capable of offering secure and reliable transmission. Over the years their characteristics have steadily improved, bringing important advances to everyday life. In this context, one of the most successful schemes is Trellis-Coded Modulation (TCM), which brings great advantages to digital communication, especially in narrowband systems. This kind of error-protection code, based on convolutional coding, is characterized by performing modulation and coding in a single function. As a result, a higher data rate is achieved without increasing the bandwidth, at the cost of moving to a larger constellation. This final-year project analyzes the behavior of TCM and the advantages it offers over similar systems. Four simulations are proposed, producing graphs that relate the bit error rate (BER) to the signal-to-noise ratio (SNR); from these graphs, the gain with respect to the theoretical bit error probability can be determined. The systems move from a QPSK modulation to 8PSK, or from 8PSK to 16QAM. Finally, a Matlab graphical environment is developed to provide the user with simple operation and greater interactivity.
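The BER-versus-SNR curves described above can be reproduced in miniature. The sketch below (plain Python rather than the project's Matlab, and uncoded QPSK only) checks one Monte-Carlo BER point against the theoretical Q-function baseline that a TCM coding gain is measured against:

```python
import math, random

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_qpsk_uncoded(ebn0_db, nbits=200000, seed=1):
    """Monte-Carlo BER of Gray-mapped QPSK over AWGN; per bit, each
    quadrature behaves like antipodal signalling with Eb = 1."""
    random.seed(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))   # noise std per real dimension
    errors = 0
    for _ in range(nbits):
        bit = random.getrandbits(1)
        rx = (2 * bit - 1) + random.gauss(0.0, sigma)
        errors += (rx > 0) != bit
    return errors / nbits
```

The coding gain of a TCM system would appear as the gap, at a target BER, between such an uncoded curve and the simulated coded one.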


Modern digital communication systems achieve reliable transmission by employing error-correction techniques based on redundancy. Low-density parity-check (LDPC) codes work along the principles of the Hamming code, but their parity-check matrix is very sparse and multiple errors can be corrected. The sparseness of the matrix allows the decoding process to be carried out by probability-propagation methods similar to those employed in Turbo codes. The relation between spin systems in statistical physics and digital error-correcting codes is based on the existence of a simple isomorphism between the additive Boolean group and the multiplicative binary group. Shannon proved general results on the natural limits of compression and error correction by setting up the framework known as information theory. Error-correction codes are based on mapping the original space of words onto a higher-dimensional space in such a way that the typical distance between encoded words increases.
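The parity-check principle referred to here is easiest to see in the smallest Hamming code. The sketch below (a textbook Hamming(7,4) example, not an LDPC decoder — LDPC codes use a much larger sparse H and iterative message passing) shows how a syndrome locates a single error:

```python
# Hamming(7,4): column j of H is the binary representation of j (1..7),
# so the syndrome of a single-bit error reads off the flipped position.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def syndrome(word):
    """H * word over GF(2)."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def correct(word):
    """Flip the bit indexed by the syndrome, if it is nonzero."""
    s = syndrome(word)
    pos = s[0] * 4 + s[1] * 2 + s[2]
    if pos:
        word = word[:]
        word[pos - 1] ^= 1
    return word
```

An LDPC decoder generalises this: each sparse row of H is a local parity constraint, and belief propagation exchanges probabilities between bits and constraints instead of reading an error position directly.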


The detection of signals in the presence of noise is one of the most basic and important problems encountered by communication engineers. Although the literature abounds with analyses of communications in Gaussian noise, relatively little work has appeared dealing with communications in non-Gaussian noise. In this thesis several digital communication systems disturbed by non-Gaussian noise are analysed. The thesis is divided into two main parts. In the first part, a filtered-Poisson impulse noise model is utilized to calculate the error-probability characteristics of a linear receiver operating in additive impulsive noise. First, the effect that non-Gaussian interference has on the performance of a receiver optimized for Gaussian noise is determined. The factors affecting the choice of modulation scheme so as to minimize the detrimental effects of non-Gaussian noise are then discussed. In the second part, a new theoretical model of impulsive noise is developed that fits well with the observed statistics of noise in radio channels below 100 MHz. This empirical noise model is applied to the detection of known signals in the presence of noise to determine the optimal receiver structure. The performance of such a detector is assessed and found to depend on the signal shape and the time-bandwidth product, as well as the signal-to-noise ratio. The optimal signal to minimize the probability of error of the detector is determined. Attention then turns to the problem of threshold detection: detector structure, large-sample performance, and robustness against errors in the detector parameters are examined. Finally, estimators of parameters such as the occurrence of an impulse and the parameters of the empirical noise model are developed for an adaptive system with slowly varying conditions.
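A filtered-Poisson impulse noise process of the kind used in the first part can be sketched as Poisson impulse arrivals driving a one-pole receive filter, on top of a Gaussian background (a generic illustration with arbitrary parameters, not the thesis's exact model):

```python
import math, random

def filtered_poisson_noise(n, rate, tau, amp_std, bg_std, seed=0):
    """n samples of impulsive noise: Bernoulli-per-sample approximation of
    Poisson impulse arrivals (probability `rate` per sample), each impulse
    exciting an exponential (one-pole) filter with time constant `tau`,
    plus an additive Gaussian background of std `bg_std`."""
    random.seed(seed)
    noise = [0.0] * n
    state = 0.0
    alpha = math.exp(-1.0 / tau)          # per-sample decay of the filter
    for k in range(n):
        state *= alpha
        if random.random() < rate:
            state += random.gauss(0.0, amp_std)
        noise[k] = state + random.gauss(0.0, bg_std)
    return noise
```

Feeding such samples into a matched-filter receiver designed for Gaussian noise is the kind of experiment that exposes the performance loss the first part of the thesis quantifies.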


Plain radiography still accounts for the vast majority of imaging studies performed in a wide range of clinical settings. Digital detectors are now prominent in many imaging facilities and are the main driving force towards filmless environments. The functional separation of acquisition, visualization, and storage has caused a shift in working paradigms, with deep impact on imaging workflows; moreover, with direct digital detectors, images are made available almost immediately. Digital radiology is now completely integrated in Picture Archiving and Communication System (PACS) environments governed by the Digital Imaging and Communications in Medicine (DICOM) standard. In this chapter a brief overview of PACS architectures and components is presented, together with a necessarily brief account of the DICOM standard. Special focus is given to the DICOM digital radiology objects and to how specific attributes may now be used to improve and enlarge the metadata repository associated with image data. Regular scrutiny of this metadata repository may serve as a valuable tool for improved, cost-effective, and multidimensional quality-control procedures.


Digital back-propagation (DBP) has recently been proposed for the comprehensive compensation of channel nonlinearities in optical communication systems. While DBP is attractive for its flexibility and performance, it poses significant challenges in terms of computational complexity. Alternatively, phase conjugation or spectral inversion has previously been employed to mitigate nonlinear fibre impairments. Though spectral inversion is relatively straightforward to implement in the optical or electrical domain, it requires precise positioning and a symmetrised link power profile in order to obtain the full benefit. In this paper, we directly compare ideal and low-precision single-channel DBP with single-channel spectral inversion, both with and without symmetry correction via dispersive chirping. We demonstrate that for all the dispersion maps studied, spectral inversion approaches the performance of ideal DBP with 40 steps per span and exceeds the performance of electronic dispersion compensation by ~3.5 dB in Q-factor, enabling up to a 96% reduction in complexity in terms of required DBP stages, relative to low-precision one-step-per-span DBP. For maps where quasi-phase matching is a significant issue, spectral inversion significantly outperforms ideal DBP by ~3 dB.
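DBP inverts the fibre by running a split-step propagation model with flipped operator signs and reversed operator order. A minimal stdlib-only sketch (toy units, a naive DFT, and no loss term — an illustration of the principle, not a transmission-grade implementation):

```python
import cmath, math

def dft(x, inverse=False):
    """Naive O(N^2) DFT, sufficient for a short illustrative block."""
    n = len(x)
    s = 1.0 if inverse else -1.0
    out = [sum(x[k] * cmath.exp(s * 2j * math.pi * j * k / n) for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def ssfm(field, beta2, gamma, length, steps, backward=False):
    """Split-step Fourier propagation of a complex baseband field.
    With backward=True both operator signs flip and the operator order
    reverses -- which is exactly digital back-propagation (DBP)."""
    n = len(field)
    dz = length / steps
    sign = -1.0 if backward else 1.0
    # discrete angular frequencies in FFT order
    freqs = [2.0 * math.pi * (k if k < n // 2 else k - n) / n for k in range(n)]
    for _ in range(steps):
        if backward:  # undo the nonlinear step first (reverse order)
            field = [e * cmath.exp(sign * 1j * gamma * abs(e) ** 2 * dz)
                     for e in field]
        spec = dft(field)  # linear step: dispersion as a quadratic spectral phase
        spec = [v * cmath.exp(sign * 0.5j * beta2 * w * w * dz)
                for v, w in zip(spec, freqs)]
        field = dft(spec, inverse=True)
        if not backward:  # nonlinear step: Kerr phase rotation by |E|^2
            field = [e * cmath.exp(sign * 1j * gamma * abs(e) ** 2 * dz)
                     for e in field]
    return field
```

In this lossless toy, back-propagating with the same step count inverts the forward model essentially exactly; the paper's complexity argument is about how few (and how coarse) backward steps one can afford in a real transceiver.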


This thesis focuses on digital equalization of nonlinear fiber impairments for coherent optical transmission systems. Building on well-known physical models of signal propagation in single-mode optical fibers, novel nonlinear equalization techniques are proposed, numerically assessed and experimentally demonstrated. The structure of the proposed algorithms is strongly driven by the optimization of the performance-versus-complexity tradeoff, with a view to near-future practical application in commercial real-time transceivers. The work initially focuses on the mitigation of intra-channel nonlinear impairments, relying on the concept of digital backpropagation (DBP) associated with Volterra-based filtering. After a comprehensive analysis of the third-order Volterra kernel, a set of critical simplifications is identified, culminating in the development of reduced-complexity nonlinear equalization algorithms formulated both in the time and frequency domains. The implementation complexity of the proposed techniques is analytically described in terms of computational effort and processing latency, by determining the number of real multiplications per processed sample and the number of serial multiplications, respectively. The equalization performance is numerically and experimentally assessed through bit error rate (BER) measurements. Finally, the problem of inter-channel nonlinear compensation is addressed within the context of 400 Gb/s (400G) superchannels for long-haul and ultra-long-haul transmission. Different superchannel configurations and nonlinear equalization strategies are experimentally assessed, demonstrating that inter-subcarrier nonlinear equalization can provide an enhanced signal reach while requiring only marginal added complexity.
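In the memoryless limit, the leading third-order Volterra term is the first-order truncation of an inverse Kerr phase rotation. The sketch below (a toy memoryless case — the thesis's equalizers have memory and operate in both time and frequency domains) shows why the exact inverse exists: a pure phase rotation leaves |x| unchanged, so rotating back by the same amount restores the input:

```python
import cmath

def kerr_rotate(x, c):
    """Toy memoryless Kerr-type nonlinearity: rotate each sample's phase
    by c * |x|^2 (the self-phase-modulation mechanism, without memory)."""
    return [v * cmath.exp(1j * c * abs(v) ** 2) for v in x]

def kerr_compensate(y, c):
    """Exact inverse: |y| = |x|, so rotating back by -c*|y|^2 restores x.
    Expanding exp(-1j*c*|y|^2) to first order, y - 1j*c*y*|y|^2, gives the
    cubic (third-order Volterra-like) compensation term."""
    return [v * cmath.exp(-1j * c * abs(v) ** 2) for v in y]
```

Counting the real multiplications per sample of such truncated terms, versus the reach they recover, is the performance-versus-complexity tradeoff the thesis formalizes.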


This thesis contributes to the study and analysis of factors related to digital radiographic image acquisition techniques, diagnostic quality, and the management of radiation dose in digital radiology systems. The methodology is organized into two components. The observational component is based on a retrospective, cross-sectional study design. Data collected from CR and DR systems allowed the evaluation of the technical exposure parameters used in digital radiology, of the absorbed dose, and of the detector exposure index. Within this methodological framework it was also possible to carry out studies of diagnostic quality in digital systems: observer studies based on images archived in the PACS. The experimental component of the thesis consisted of phantom experiments to assess the relationship between dose and image quality. These experiments characterized the physical properties of digital radiology systems by manipulating the variables related to the exposure parameters and evaluating their influence on dose and image quality. Using a contrast-detail phantom, anthropomorphic phantoms and an animal-bone phantom, it was possible to obtain objective measures of diagnostic quality and of object detectability. Several conclusions emerged from this investigation. Quantitative measures of detector performance are the basis of the optimization process, allowing the physical parameters of digital radiology systems to be measured and determined. The exposure parameters used in clinical practice show that practice does not conform to the European reference framework.
There is a need to evaluate, improve and implement a reference standard for the optimization process, through new good-practice guidelines adjusted to digital systems. Exposure parameters influence patient dose, but the perceived quality of the digital image does not appear to be affected by variations in exposure. The studies carried out, involving both phantom and patient images, show that overexposure is a potential risk in digital radiology. The assessment of diagnostic image quality showed no substantial degradation when the dose was reduced. The study and implementation of new diagnostic reference levels adjusted to digital radiology systems is proposed. As a contribution of this thesis, a model (STDI) is proposed for the optimization of digital radiology systems.


The assessment of patient dose has gained increased attention and remains an issue of concern arising from the use of digital systems. The development of digital technology offers the possibility of reducing radiation dose by around 50% without loss of image quality when compared to a conventional screen-film system. Digital systems give equivalent or superior diagnostic performance along with several other advantages, but they also carry a risk of overexposure with no adverse effect on image quality. This chapter addresses the management of patient dose and explains dose-related concepts; the influence of exposure on dose and image representation, and the effects of radiation exposure, are also discussed.


Networked control systems (NCSs) are spatially distributed systems in which communication between sensors, actuators and controllers occurs over a shared band-limited digital communication network. The use of a shared network, in contrast to several dedicated independent connections, introduces new challenges, which are even more acute in large-scale, dense networked control systems. In this paper we investigate a recently introduced technique for gathering information from a dense sensor network for use in networked control applications. Efficiently obtaining an approximate interpolation of the sensed data offers a good trade-off between accuracy in measuring the input signals and the delay to actuation, both important aspects for the quality of control. We introduce a variation on the state-of-the-art algorithms which we show performs better because it takes into account the changes of the input signal over time within the process of obtaining the approximate interpolation.
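The abstract does not spell out the gathering algorithm, but the trade-off it describes can be illustrated with a hypothetical inverse-distance interpolator that additionally discounts stale sensor readings, so the estimate tracks changes of the input signal over time (a sketch under those assumptions, not the paper's method):

```python
import math

def idw_estimate(query, samples, now, p=2.0, tau=1.0):
    """Estimate the field value at position `query` from scattered sensor
    samples given as (position, value, timestamp) tuples, weighting each
    sample by inverse distance and discounting it exponentially with age."""
    num = den = 0.0
    for pos, val, t in samples:
        d = abs(query - pos) + 1e-9            # avoid division by zero
        w = (1.0 / d ** p) * math.exp(-(now - t) / tau)
        num += w * val
        den += w
    return num / den
```

The age-discount factor is the point of the sketch: without it, a stale reading from a nearby sensor dominates the estimate even after the input signal has moved on, which is precisely the effect the paper's time-aware variation is meant to avoid.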