943 results for Digital processing
Abstract:
Today, portable devices have become the driving force of the consumer market, and new challenges are emerging to increase their performance while maintaining a reasonable battery life. The digital domain is the best choice for implementing signal-processing functions, thanks to the scaling of CMOS technology, which pushes integration to the sub-micrometre level. Indeed, the reduction of the supply voltage imposes severe limitations on achieving an acceptable dynamic range in the analog domain. Lower cost, lower power consumption, higher yield and greater reconfigurability are the main advantages of signal processing in the digital domain. For more than a decade, several purely analog functions have been moved into the digital domain. This means that analog-to-digital converters (ADCs) are becoming the key components in many electronic systems. They are, in fact, the bridge between the analog and digital worlds and, consequently, their efficiency and accuracy often determine the overall performance of the system. Sigma-Delta converters are the key interface block in high-resolution, low-power mixed-signal circuits. Modeling and simulation tools are effective and essential instruments in the design flow. Although transistor-level simulations give more precise and accurate results, this method is extremely time-consuming because of the oversampling nature of this type of converter. For this reason, high-level behavioral models of the modulator are essential for the designer to run fast simulations that identify the specifications the converter must meet to achieve the required performance. The objective of this thesis is the behavioral modeling of the Sigma-Delta modulator, taking into account several non-idealities such as the integrator dynamics and its thermal noise. Transistor-level simulation results and experimental data demonstrate that the proposed model is precise and accurate with respect to the behavioral simulations.
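As a rough illustration of the kind of behavioral model the thesis describes, the Python sketch below simulates a first-order Sigma-Delta modulator with two of the named non-idealities: integrator leakage due to finite DC gain and input-referred thermal noise. All parameter values are illustrative assumptions, not the thesis's design figures.

```python
import numpy as np

# Behavioral model of a first-order Sigma-Delta modulator.
# Non-idealities: finite integrator DC gain (leakage) and
# input-referred thermal noise. All values are assumptions.
OSR = 256                 # oversampling ratio
N = 65536                 # number of samples
fs = 1.0                  # normalized sampling rate
fin = 0.5 / OSR / 8       # input tone well inside the signal band
A0 = 1000.0               # assumed finite DC gain of the integrator
leak = A0 / (A0 + 1.0)    # leakage factor: ideal integrator has leak = 1
vn_rms = 50e-6            # assumed input-referred thermal noise (V rms)

t = np.arange(N) / fs
x = 0.5 * np.sin(2 * np.pi * fin * t)          # input signal
x_noisy = x + vn_rms * np.random.randn(N)      # add thermal noise

v = 0.0                   # integrator state
y = np.zeros(N)           # 1-bit output stream
for n in range(N):
    v = leak * v + x_noisy[n] - y[n - 1]       # leaky integration minus feedback
    y[n] = 1.0 if v >= 0 else -1.0             # 1-bit quantizer

# In-band SNR estimate from the windowed output spectrum
w = np.hanning(N)
p = np.abs(np.fft.rfft(y * w)) ** 2
band = N // (2 * OSR)                          # number of in-band bins
sig_bin = int(round(fin * N / fs))
signal_p = p[sig_bin - 2:sig_bin + 3].sum()
noise_p = p[1:band].sum() - signal_p
print("in-band SNR ~ %.1f dB" % (10 * np.log10(signal_p / noise_p)))
```

Running such a model takes seconds, whereas a transistor-level simulation of the same oversampled converter would take hours, which is the motivation the abstract gives for behavioral modeling.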
Abstract:
This thesis discusses the need for nondestructive testing and highlights some of the limitations of present-day techniques. Special attention has been given to ultrasonic examination techniques and the problems encountered when they are applied to thick welded plates, and some suggestions are made using signal-processing methods. Chapter 2 treats the need for nondestructive testing in the light of economy and safety; a short review of present-day techniques in nondestructive testing is also given. The special problems of using ultrasonic techniques on welded structures are discussed in Chapter 3, with some examples of elastic wave propagation in welded steel. The limitations in applying sophisticated signal-processing techniques to ultrasonic NDT are mainly found in the transducers generating or receiving the ultrasound; Chapter 4 deals with the different transducers used. One of the difficulties with ultrasonic testing is the interpretation of the signals encountered. Similar problems arise with SONAR/RADAR techniques, and Chapter 5 draws some analogies between SONAR/RADAR and ultrasonic nondestructive testing. This chapter also includes a discussion of some of the techniques used in signal processing in general. A signal-processing technique found especially useful is cross-correlation detection, which is treated in Chapter 6. Electronic digital computers have made signal-processing techniques easier to implement; Chapter 7 discusses the use of digital computers in ultrasonic NDT. The experimental equipment used to test cross-correlation detection of ultrasonic signals is described in Chapter 8. Chapter 9 summarises the conclusions drawn during this investigation.
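To illustrate the cross-correlation detection treated in Chapter 6 (a generic sketch, not the thesis's experimental equipment), the following Python fragment recovers the arrival time of a weak ultrasonic echo buried in noise; the pulse shape, delay and noise level are assumed values.

```python
import numpy as np

# Cross-correlation detection of a weak ultrasonic echo in noise.
# Pulse shape, delay and noise level are illustrative assumptions.
rng = np.random.default_rng(0)
fs = 100e6                                  # sample rate (Hz)
t = np.arange(0, 40e-6, 1 / fs)             # 40 us received trace
f0 = 5e6                                    # transducer centre frequency

# Transmitted pulse: Gaussian-windowed 5 MHz tone burst, 5 us long
tp = np.arange(0, 5e-6, 1 / fs)
pulse = np.sin(2 * np.pi * f0 * tp) * np.exp(-((tp - 2.5e-6) / 1e-6) ** 2)

# Received trace: echo delayed by 20 us at roughly 0 dB SNR
delay = 20e-6
rx = 0.3 * rng.standard_normal(t.size)
i0 = int(delay * fs)
rx[i0:i0 + pulse.size] += 0.3 * pulse

# The correlation peak pinpoints the echo that the raw trace hides
xc = np.correlate(rx, pulse, mode="valid")
est = np.argmax(np.abs(xc)) / fs
print("estimated echo delay: %.2f us" % (est * 1e6))
```

The processing gain of the correlator grows with the time-bandwidth product of the pulse, which is why the technique can detect echoes well below the noise floor of a single trace.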
Abstract:
One of the major problems associated with communication via a loudspeaking telephone (LST) is that, using analogue processing, duplex transmission is limited to low-loss lines and produces a low acoustic output. An architecture for an instrument has been developed and tested which uses digital signal processing to provide duplex transmission between an LST and a telephone handset over most of the B.T. network. Digital adaptive filters are used in the duplex LST to cancel coupling between the loudspeaker and microphone, and across the transmit-to-receive paths of the 2-to-4-wire converter. Normal movement of a person in the acoustic path causes a loss of stability by increasing the level of coupling from the loudspeaker to the microphone, since there is a lag associated with the adaptive filters learning about a non-stationary path. Control of the loop stability and of the level of sidetone heard by the handset user is by a microprocessor, which continually monitors the system and regulates the gain. The result is a system which offers the best compromise available based on a set of measured parameters. A theory has been developed which gives the loop stability requirements based on the error between the parameters of the filter and those of the unknown path. The programme to develop a low-cost adaptive filter for the LST produced a unique architecture which has a number of features not available in any similar system. These include automatic compensation of the rate of adaptation over a 36 dB range of output level, 4 rates of adaptation (with a maximum of 465 dB/s), plus the ability to cascade up to 4 filters without loss of performance. A theory has also been developed to determine the adaptation which can be achieved using finite-precision arithmetic. This enabled the development of an architecture which distributed the normalisation required to achieve the optimum rate of adaptation over the useful input range. Comparison of theory and measurement for the adaptive filter shows very close agreement. A single experimental LST was built and tested on connections to handset telephones over the BT network. The LST demonstrated that duplex transmission was feasible using signal processing and produced a more comfortable means of communication between people than methods employing deep voice-switching to regulate the local-loop gain. However, with the current level of processing power it is not a panacea, and attention must be directed toward the physical acoustic isolation between loudspeaker and microphone.
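The thesis's adaptive filter is a specific hardware architecture, but the underlying principle, a filter that learns the loudspeaker-to-microphone path so its output can be subtracted from the microphone signal, can be sketched with a generic normalised-LMS canceller; the path model, filter length and step size below are assumptions.

```python
import numpy as np

# Generic NLMS echo canceller: the adaptive filter w learns the
# loudspeaker-to-microphone path h so its output can be subtracted
# from the microphone signal. Path, lengths and step size are
# illustrative assumptions, not the thesis's hardware design.
rng = np.random.default_rng(0)
N = 20000
x = rng.standard_normal(N)                 # far-end (loudspeaker) signal
h = rng.standard_normal(64) * np.exp(-0.1 * np.arange(64))  # acoustic path
d = np.convolve(x, h)[:N] + 0.01 * rng.standard_normal(N)   # mic signal

L = 64                                     # adaptive filter length
w = np.zeros(L)
mu = 0.5                                   # NLMS step size
eps = 1e-6                                 # regularisation
e = np.zeros(N)
for n in range(L, N):
    u = x[n - L + 1:n + 1][::-1]           # most recent L input samples
    y = w @ u                              # echo estimate
    e[n] = d[n] - y                        # cancelled (error) signal
    w += mu * e[n] * u / (u @ u + eps)     # normalised LMS update

# Echo return loss enhancement over the last quarter of the run
erle = 10 * np.log10(np.mean(d[-N//4:] ** 2) / np.mean(e[-N//4:] ** 2))
print("ERLE ~ %.1f dB" % erle)
```

The normalisation by the input power (the `u @ u` term) is what keeps the adaptation rate constant over a wide range of signal levels, the same requirement the abstract describes as compensation over a 36 dB range of output level.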
Abstract:
We report the impact of cascaded reconfigurable optical add-drop multiplexer (ROADM) induced penalties on coherently detected 28 Gbaud polarization-multiplexed m-ary quadrature amplitude modulation (PM m-ary QAM) WDM channels. We investigate the interplay between different higher-order modulation channels and the effect of the filter shapes and bandwidths of the (de)multiplexers on transmission performance, in a segment of a pan-European optical network with a maximum optical path of 4,560 km (80 km × 57 spans). We verify that if the link capacities are assigned assuming that digital back-propagation (DBP) is available, 25% of the network connections fail when electronic dispersion compensation alone is used. However, the majority of such links can indeed be restored by employing single-channel digital back-propagation with fewer than 15 steps for the whole link, facilitating practical application of DBP. We report that higher-order channels are the most sensitive to nonlinear fiber impairments and filtering effects; however, these formats are less prone to ROADM-induced penalties due to the reduced maximum number of hops. Furthermore, it is demonstrated that a minimum Gaussian filter order of 3 and a bandwidth of 35 GHz enable negligible excess penalty for any modulation order.
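To illustrate why filter order and bandwidth matter under cascaded ROADMs, the sketch below plugs the reported values (order 3, 35 GHz) into a standard super-Gaussian filter model and computes how the effective passband narrows with the number of hops; the hop counts are assumptions.

```python
import numpy as np

# Power transfer of one super-Gaussian (de)multiplexer filter:
# |H(f)|^2 = exp(-ln2 * (2f/B)^(2n)), so |H(B/2)|^2 = 0.5 (3 dB point).
def filter_power(f_ghz, b3db_ghz, order):
    return np.exp(-np.log(2) * (2 * f_ghz / b3db_ghz) ** (2 * order))

f = np.linspace(0, 25, 2001)               # baseband frequency grid (GHz)
B, n = 35.0, 3                             # bandwidth and order from the paper

for hops in (1, 5, 10, 20):                # assumed numbers of ROADM hops
    cascade = filter_power(f, B, n) ** hops
    b_eff = 2 * f[np.searchsorted(-cascade, -0.5)]  # cascaded 3 dB bandwidth
    print("%2d hops: effective 3 dB bandwidth ~ %.1f GHz" % (hops, b_eff))
```

The steep skirts of a higher-order filter slow this bandwidth narrowing, which is consistent with the abstract's finding that a sufficient Gaussian order keeps the excess penalty negligible.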
Abstract:
Recent advances in our ability to watch the molecular and cellular processes of life in action, such as atomic force microscopy, optical tweezers and Förster fluorescence resonance energy transfer, raise challenges for digital signal processing (DSP) of the resulting experimental data. This article explores the unique properties of such biophysical time series that set them apart from other signals, such as the prevalence of abrupt jumps and steps, multi-modal distributions and autocorrelated noise. It exposes the problems with classical linear DSP algorithms applied to this kind of data, and describes new nonlinear and non-Gaussian algorithms that are able to extract information of direct relevance to biological physicists. It is argued that these new methods, applied in this context, typify the nascent field of biophysical DSP. Practical experimental examples are supplied.
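A minimal illustration (generic Python, not one of the article's algorithms) of why classical linear filtering is a poor match for stepped biophysical traces: a linear moving average smears an abrupt jump into a ramp, while a nonlinear running median of the same length preserves the edge.

```python
import numpy as np

# Piecewise-constant trace (e.g., a molecular motor stepping 8 nm) in
# noise: a nonlinear running median preserves the abrupt jump that a
# linear moving average of the same length smears into a ramp.
rng = np.random.default_rng(1)
n = 400
step_at = 200
x = np.where(np.arange(n) < step_at, 0.0, 8.0)   # ideal stepped signal
y = x + 2.0 * rng.standard_normal(n)             # noisy observation

k = 31                                           # odd window length
pad = np.pad(y, k // 2, mode="edge")
windows = np.lib.stride_tricks.sliding_window_view(pad, k)
linear = windows.mean(axis=1)                    # linear moving average
median = np.median(windows, axis=1)              # nonlinear running median

# Count samples stuck in the 10-90% transition band around the step
def transition_width(sig):
    seg = sig[step_at - k:step_at + k]
    return int(np.sum((seg > 0.8) & (seg < 7.2)))

print("transition samples: mean filter %d, median filter %d"
      % (transition_width(linear), transition_width(median)))
```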
Abstract:
The world is connected by a core network of long-haul optical communication systems that link countries and continents, enabling long-distance phone calls, data-center communications, and the Internet. The demands on information rates have been constantly driven up by applications such as online gaming, high-definition video, and cloud computing. All over the world, end-user connection speeds are being increased by replacing conventional digital subscriber line (DSL) and asymmetric DSL (ADSL) with fiber to the home. Clearly, the capacity of the core network must also increase proportionally.
Abstract:
Photonic signal processing is used to implement common-mode signal cancellation across a very wide bandwidth, utilising phase modulation of radio frequency (RF) signals onto a narrow-linewidth laser carrier. RF spectra were observed using narrow-band, tunable optical filtering with a scanning Fabry-Perot etalon. Thus, functions conventionally performed using digital signal processing techniques in the electronic domain have been replaced by analog techniques in the photonic domain. This technique was able to observe simultaneous cancellation of signals across a bandwidth of 1400 MHz, limited only by the free spectral range of the etalon.
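For context on the etalon-limited bandwidth, the sketch below uses the textbook Airy transmission function of a Fabry-Perot etalon; the mirror reflectivity and cavity length are assumptions, chosen so the free spectral range matches the reported 1400 MHz.

```python
import numpy as np

# Textbook Airy transmission of a Fabry-Perot etalon. Reflectivity and
# cavity length are assumptions chosen to give FSR = 1400 MHz, matching
# the bandwidth limit reported in the abstract.
c = 3e8                                    # speed of light (m/s)
L = c / (2 * 1400e6)                       # cavity length for FSR = 1400 MHz
R = 0.97                                   # assumed mirror reflectivity
fsr = c / (2 * L)                          # free spectral range (Hz)
finesse = np.pi * np.sqrt(R) / (1 - R)     # etalon finesse
fwhm = fsr / finesse                       # resolution bandwidth (Hz)

def transmission(f_hz):
    F = 4 * R / (1 - R) ** 2               # coefficient of finesse
    return 1.0 / (1.0 + F * np.sin(np.pi * f_hz / fsr) ** 2)

print("FSR = %.0f MHz, finesse = %.0f, resolution ~ %.1f MHz"
      % (fsr / 1e6, finesse, fwhm / 1e6))
```

Because the transmission is periodic in frequency with period equal to the FSR, any spectral feature further than one FSR from the carrier aliases onto a nearer one, which is why the FSR bounds the observable bandwidth.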
Abstract:
As a discipline, supply chain management (SCM) has traditionally been concerned primarily with the procurement, processing, movement and sale of physical goods. However, an important class of products has emerged, digital products, which cannot be described as physical as they do not obey commonly understood physical laws. They possess neither mass nor volume, and they require no energy in their manufacture or distribution. With the Internet, they can be distributed at speeds unimaginable in the physical world, and every copy produced is a 100% perfect duplicate of the original version. Furthermore, the ease with which digital products can be replicated has few analogues in the physical world. This paper assesses the effect of non-physicality on one such product, software, in relation to the practice of SCM. It explores the challenges that arise when managing the software supply chain and how practitioners are addressing these challenges. Using a two-pronged exploratory approach that examines the literature on software management as well as direct interviews with software distribution practitioners, a number of key challenges associated with software supply chains are uncovered, along with responses to these challenges. This paper proposes a new model for software supply chains that takes into account the non-physicality of the product being delivered. Central to this model are the replacement of physical flows with flows of intellectual property, the growing importance of innovation over duplication, and the increased centrality of the customer in the entire process. Hybrid physical/digital supply chains are discussed and a framework for practitioners concerned with software supply chains is presented.
Abstract:
All-optical signal processing is a powerful tool for the processing of communication signals, and optical network applications have been routinely considered since the inception of optical communication. There are many successful optical devices deployed in today's communication networks, including optical amplifiers, dispersion compensation, optical cross-connects and reconfigurable add-drop multiplexers. However, despite record-breaking performance, all-optical signal processing devices have struggled to find a viable market niche. This has been mainly due to competition from electro-optic alternatives, whether from detailed performance analysis or, more usually, from the limited market opportunity for a mid-link device. For example, a wavelength converter would compete with a reconfigured transponder, which has an additional market as an actual transponder, enabling significantly more economical development. Nevertheless, the potential performance of all-optical devices is enticing. Motivated by their prospects of eventual deployment, in this chapter we analyse the performance and energy consumption of digital coherent transponders, linear coherent repeaters and modulator-based pulse shaping/frequency conversion, setting a benchmark for the proposed all-optical implementations.
Abstract:
During the MEMORIAL project, an international consortium developed a software solution called DDW (Digital Document Workbench). It provides a set of tools to support the digitisation of documents, from scanning up to the retrievable presentation of the content. Attention is focused on machine-typed archival documents. One of the important features is the evaluation of quality at each step of the process. The workbench consists of automatic parts as well as parts which require human activity. A measurable improvement of 20% shows that the approach is successful.
Abstract:
After many years of scholarly study, manuscript collections continue to be an important source of novel information for scholars, concerning both the history of earlier times and the development of cultural documentation over the centuries. The D-SCRIBE project aims to support and facilitate current and future efforts in manuscript digitization and processing. It strives toward the creation of a comprehensive software product which can assist content holders in turning an archive of manuscripts into a digital collection using automated methods. In this paper, we focus on the problem of recognizing early Christian Greek manuscripts. We propose a novel digital image binarization scheme for low-quality historical documents, allowing further content exploitation in an efficient way. Based on the existence of closed cavity regions in the majority of characters and character ligatures in these scripts, we propose a novel, segmentation-free, fast and efficient technique that assists the recognition procedure by tracing and recognizing the most frequently appearing characters or character ligatures.
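A rough sketch of the two ingredients described above, adaptive binarization of a degraded page followed by detection of the closed cavities enclosed by ink strokes, using standard OpenCV calls; the threshold parameters, area cut-off and input filename are illustrative assumptions, not the paper's actual scheme.

```python
import cv2

# Generic sketch: adaptive binarization of a degraded manuscript page,
# then detection of closed cavities (holes enclosed by ink strokes).
# Parameters and the input path are illustrative assumptions.
page = cv2.imread("manuscript_page.png", cv2.IMREAD_GRAYSCALE)

# Local adaptive thresholding copes with uneven illumination and stains
# better than a single global threshold on low-quality documents.
binary = cv2.adaptiveThreshold(
    page, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
    cv2.THRESH_BINARY_INV, blockSize=35, C=10)   # ink -> white (255)

# With RETR_CCOMP, contours form a two-level hierarchy: outer stroke
# boundaries and, one level down, the holes they enclose (the cavities).
contours, hierarchy = cv2.findContours(
    binary, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)

cavities = [c for c, h in zip(contours, hierarchy[0])
            if h[3] != -1 and cv2.contourArea(c) > 5]  # has a parent => hole
print("found %d closed cavity regions" % len(cavities))
```

Counting and locating such cavities gives a segmentation-free handle on the most frequent characters and ligatures, in the spirit of the technique the abstract outlines.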
Abstract:
The paper reports on the history of manuscript digitization at the National Library of the Czech Republic, as well as on other issues concerning the processing of manuscripts. The main consequence of massive digitization and record and/or full-text processing is a paradigm shift leading to digital history.
Abstract:
Digital devices are profoundly changing the way individuals consume media entertainment, in particular television (TV). Our research contributes to prior work on narrative processing by advancing a more comprehensive theorization of narrative pace when control is in the hands of story receivers as opposed to storytellers. Using the empirical context of TV series viewing, we draw on in-depth interviews to uncover 1) consumer narrative pace control practices, such as multi-episode viewing (sometimes colloquially called "binge-watching") and replaying specific scenes, and 2) the factors that drive the adoption of such practices, including the countervailing forces of narrative satiation and need for closure, as well as curiosity and enjoyment of mystery.