948 results for signal processing program


Relevance: 80.00%

Abstract:

The development of next-generation microwave technology for backhauling systems is driven by an increasing capacity demand. In order to provide higher data rates and throughputs over a point-to-point link, a cost-effective performance improvement is enabled by enhanced energy efficiency of the transmit power amplification stage, whereas a combination of spectrally efficient modulation formats and wider bandwidths must be supported by amplifiers that fulfil strict linearity constraints. An optimal trade-off between these conflicting requirements can be achieved by resorting to flexible digital signal processing techniques at baseband. In such a scenario, adaptive digital pre-distortion is a well-known linearization method and a potentially widespread solution, since it can be easily integrated into base stations. Its operation can effectively compensate for the inter-modulation distortion introduced by the power amplifier, keeping up with the frequency-dependent, time-varying behaviour of its nonlinear characteristic. In particular, the impact of memory effects becomes more relevant, and their equalisation more challenging, as the discrete input signal features a wider bandwidth and a faster envelope to pre-distort. This thesis project involves the research, design and simulation of a pre-distorter implementation at RTL based on a novel polyphase architecture, which makes it capable of operating on very wideband signals at a sampling rate that complies with the clock speeds actually available in current digital devices. The motivation behind this structure is to make pre-distortion feasible for the multi-band, spectrally efficient complex signals carrying multiple channels that will be transmitted in future high-capacity, high-reliability microwave backhaul links.
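The thesis's polyphase RTL architecture is not reproduced here, but the underlying idea of digital pre-distortion can be sketched at baseband. The following is a minimal, illustrative memory-polynomial pre-distorter trained by indirect learning against a toy third-order power-amplifier model; the basis orders, memory depth, and PA model are all assumptions for the sketch, not the thesis's design.

```python
import numpy as np

# Toy memory-polynomial DPD via indirect learning: fit a post-inverse
# from the PA output back to its input, then reuse those coefficients
# as the pre-distorter. NOT the polyphase RTL architecture of the thesis.

def mp_basis(x, K=3, M=2):
    """Odd-order memory-polynomial regressors x[n-m] * |x[n-m]|^(2k)."""
    N = len(x)
    cols = []
    for m in range(M):
        xm = np.concatenate([np.zeros(m, dtype=complex), x[:N - m]])
        for k in range(K):
            cols.append(xm * np.abs(xm) ** (2 * k))
    return np.column_stack(cols)

def pa(x):
    """Mildly compressive memoryless PA model (3rd-order nonlinearity)."""
    return x - 0.1 * x * np.abs(x) ** 2

rng = np.random.default_rng(0)
x = (rng.standard_normal(2000) + 1j * rng.standard_normal(2000)) * 0.3
y = pa(x)

A = mp_basis(y)                              # regressors of the PA output
w = np.linalg.lstsq(A, x, rcond=None)[0]     # post-inverse coefficients
x_pd = mp_basis(x) @ w                       # pre-distorted drive signal

err_raw = np.mean(np.abs(pa(x) - x) ** 2)    # distortion without DPD
err_dpd = np.mean(np.abs(pa(x_pd) - x) ** 2) # distortion with DPD
```

For this mild nonlinearity the linearized output error drops well below the uncompensated distortion, which is the effect the adaptive pre-distorter exploits.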

Relevance: 80.00%

Abstract:

This thesis showed how the potential of nanoparticulate systems, prepared predominantly via the miniemulsion process, could be exploited for drug delivery by releasing a model drug intracellularly in different ways. This was analyzed mainly by confocal laser scanning microscopy (CLSM) in combination with the image-analysis software Volocity®. PBCA nanocapsules proved particularly suitable for encapsulating hydrophilic substances such as oligonucleotides, protecting them from degradation on their way into the cells. A release of the oligonucleotides inside the cells, driven by the electrostatic attraction of the mitochondrial membrane potential, could be demonstrated. The combination of the oligonucleotide and a cyanine dye (Cy5) attached at the 5' position of the oligonucleotide sequence was decisive. Quantitative analysis with Volocity® proved the complete colocalization of the released oligonucleotides with mitochondria, which was discussed in terms of the Manders' colocalization coefficients M1 and M2. FRET studies of doubly labeled oligonucleotides further showed that the oligonucleotides were degraded neither during transport nor upon release. It was also established that only the capsule payload, i.e. the oligonucleotides, accumulated at mitochondria, while the capsule material itself was found in other intracellular regions. A combination of cyanine dyes such as Cy5 with a nucleotide sequence or a drug could thus provide the basis for targeted drug transport to mitochondria, or lay the groundwork for ensuring release from capsules into the cytoplasm.

The versatility of the miniemulsion process allowed the preparation not only of capsules but also of nanoparticles in which hydrophobic substances could be enclosed in the particle core. This "encapsulation" based on hydrophobic interactions was exploited for a model drug, in this case PMI, in PDLLA and PS nanoparticles stabilized by an HPMA-based block copolymer. The hydrophobic model drug PMI was shown to be released into the cells within a very short time and to accumulate in so-called lipid droplets, without the nanoparticles themselves having to be taken up. In addition, an intracellular detachment of the stabilizing block copolymer was observed, occurring 8 h after particle uptake and likewise supported by Volocity® analyses. This, however, had no influence on particle uptake itself or on the release of the model drug. A major advantage of the HPMA-based block copolymer is that time-consuming purification steps such as dialysis after particle preparation could be omitted, since P(HPMA) is a biocompatible polymer. On the other hand, the synthetic route to this block copolymer offers many possibilities for introducing functionalities such as fluorescent markers. Covalent attachment of a drug is also conceivable, which could then be released slowly inside the cell, e.g. through enzymatic degradation. Nanoparticles stabilized by HPMA-based block copolymers thus offer the possibility of delivering two different drugs into the cells simultaneously, one released quickly and the second released (in a controlled manner) over a longer period.

Besides nanocapsules and nanoparticles prepared by inverse or direct miniemulsion, nanohydrogel particles formed by self-assembly of an amphiphilic block copolymer were also investigated. These nanohydrogel particles served to complex siRNA and were studied with respect to their accumulation in lysosomes. Knockdown studies by Lutz Nuhn had revealed a difference in knockdown efficiency depending on whether 100 nm or 40 nm nanohydrogel particles were used. The aim was to determine whether these two particle sizes accumulated in lysosomes at different rates, which could explain the difference in knockdown efficiency. CLSM studies and quantitative colocalization analyses gave a first indication of this size dependence. For all nanoparticulate systems investigated, release of their payload could be demonstrated. They therefore offer great potential as drug carriers for biomedical applications.

Relevance: 80.00%

Abstract:

Compressed sensing is an innovative data-acquisition technique that aims to extract only the intrinsic informative content of a signal. This translates into the ability to acquire information directly in compressed form, reducing the resources required for the operation. In this thesis, a hardware architecture for compressed-sensing acquisition of analog signals is developed, targeting low-power sampling of low-frequency biomedical signals. The study is carried out at the system level by integrating the modulation required by compressed sensing into a successive-approximation analog-to-digital converter, modifying its control logic. The resulting performance is assessed through numerical and circuit simulations, which confirm that the hardware complexity of the acquisition system can be reduced with respect to the state of the art without degrading its performance.
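The acquisition principle can be illustrated at the system level, independently of the SAR converter implementation: random ±1 modulation compresses a sparse signal into fewer measurements, and a greedy solver recovers it. The sketch below uses Orthogonal Matching Pursuit; the dimensions and the Bernoulli sensing matrix are illustrative assumptions, not the thesis's circuit.

```python
import numpy as np

# Compressed-sensing sketch: +/-1 modulation (the kind of operation that
# can be folded into an ADC's control logic) plus OMP recovery.

rng = np.random.default_rng(1)
n, m, k = 128, 64, 4             # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # k-sparse

Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)      # sensing matrix
y = Phi @ x                       # m < n compressed measurements

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedy recovery of a k-sparse vector."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ r))))  # best new atom
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef                     # new residual
    xhat = np.zeros(Phi.shape[1])
    xhat[support] = coef
    return xhat

x_hat = omp(Phi, y, k)
```

With m well above the sparsity level, recovery from the compressed measurements is essentially exact, which is what lets the hardware acquire fewer samples in the first place.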

Relevance: 80.00%

Abstract:

We present a new method for the enhancement of speech. The method is designed for scenarios in which targeted speaker enrollment, as well as system training within the typical noise environment, are feasible. The proposed procedure is fundamentally different from most conventional and state-of-the-art denoising approaches: instead of filtering a distorted signal, we resynthesize a new "clean" signal based on its likely characteristics, which are estimated from the distorted signal. A successful implementation of the proposed method is presented. Experiments were performed in a scenario with roughly one hour of clean speech training data. Our results show that the proposed method compares very favorably to other state-of-the-art systems in both objective and subjective speech quality assessments. Potential applications include jet cockpit communication systems and offline methods for the restoration of audio recordings.

Relevance: 80.00%

Abstract:

This letter presents a new recursive method for computing discrete polynomial transforms. The method is shown for the forward and inverse Hermite, binomial, and Laguerre transforms. The recursive flow diagrams require only 2N additions, 2(N+1) memory units, and N+1 multipliers for the (N+1)-point Hermite and binomial transforms. The recursive flow diagram for the (N+1)-point Laguerre transform requires 2N additions, 2(N+1) memory units, and 2(N+1) multipliers. The transform computation time for all of these transforms is O(N).
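For reference, one of the transforms the letter computes recursively can be written down directly. Below is a plain (non-recursive) implementation of the binomial transform, which also demonstrates its well-known involution property; the letter's O(N) recursive flow diagram is not reproduced here.

```python
import numpy as np
from math import comb

# Direct reference implementation of the (N+1)-point binomial transform:
# b[m] = sum_{k=0..m} (-1)^k C(m, k) a[k].

def binomial_transform(a):
    n = len(a)
    return np.array(
        [sum((-1) ** k * comb(m, k) * a[k] for k in range(m + 1))
         for m in range(n)],
        dtype=float,
    )

a = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
b = binomial_transform(a)
# This form of the binomial transform is an involution:
# applying it twice recovers the original sequence.
a_back = binomial_transform(b)
```

The involution property gives a convenient correctness check for any faster (e.g. recursive) implementation of the same transform.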

Relevance: 80.00%

Abstract:

The main objective of this paper is to discuss various aspects of implementing a specific intrusion-detection scheme on a micro-computer system using fixed-point arithmetic. The proposed scheme is suitable for detecting intruder stimuli which are in the form of transient signals. It consists of two stages: an adaptive digital predictor and an adaptive threshold detection algorithm. Experimental results involving data acquired via field experiments are also included.
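The two-stage structure described above, an adaptive digital predictor followed by adaptive threshold detection, can be sketched in floating point before any fixed-point considerations. The predictor below is a normalized LMS filter and the threshold is a running scale estimate of the prediction error; the signal, step size, and threshold factor are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Stage 1: adaptive (NLMS) linear predictor; a transient the predictor
# has not adapted to produces a large prediction error.
# Stage 2: adaptive threshold on the error magnitude.

rng = np.random.default_rng(2)
n = 2000
x = np.sin(2 * np.pi * 0.01 * np.arange(n)) + 0.05 * rng.standard_normal(n)
x[1500:1510] += 2.0                    # injected transient "intruder stimulus"

p, mu = 8, 0.05                        # predictor order, NLMS step size
w = np.zeros(p)
err = np.zeros(n)
for t in range(p, n):
    past = x[t - p:t][::-1]
    e = x[t] - w @ past                # prediction error
    w += mu * e * past / (past @ past + 1e-8)
    err[t] = e

# Adaptive threshold: multiple of a running average of the error scale.
thresh = 5.0 * np.convolve(np.abs(err), np.ones(200) / 200, mode="same")
alarms = np.where(np.abs(err) > thresh)[0]
```

The predictor tracks the slowly varying background, so only the transient drives the error past the adaptive threshold.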

Relevance: 80.00%

Abstract:

Analog filters and direct digital filters are implemented using digital signal processing techniques. Specifically, Butterworth, Elliptic, and Chebyshev filters are implemented on the Motorola 56001 Digital Signal Processor by the integration of three software packages: MATLAB, C++, and Motorola's Application Development System. The integrated environment allows a novice user to design a filter automatically by specifying the filter order and critical frequencies, while permitting more experienced designers to take advantage of MATLAB's advanced design capabilities. This project bridges the gap between the theoretical results produced by MATLAB and the practicalities of implementing digital filters on the Motorola 56001 Digital Signal Processor. While these results are specific to the Motorola 56001, they may be extended to other digital signal processors. MATLAB handles the filter calculations, a C++ routine handles the conversion to assembly code, and the Motorola software compiles and transmits the code to the processor.
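The design step the MATLAB stage performs can be sketched with SciPy's equivalent functions. The order and cutoff below are illustrative; in the described workflow, the resulting coefficients are what would be quantized and handed to the DSP56001 toolchain.

```python
import numpy as np
from scipy.signal import butter, freqz

# Design a 4th-order low-pass Butterworth filter with normalized
# cutoff 0.25 (fraction of Nyquist), then inspect its frequency response.
b, a = butter(4, 0.25)

w, h = freqz(b, a, worN=512)           # 512 points from DC to Nyquist
gain_db = 20 * np.log10(np.abs(h) + 1e-12)

passband_edge = int(0.25 * 512)        # index of the cutoff frequency
```

A Butterworth design is maximally flat in the passband and hits its -3 dB point exactly at the specified cutoff, which is easy to verify from `gain_db`.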

Relevance: 80.00%

Abstract:

Digital signal processing (DSP) techniques for biological sequence analysis continue to grow in popularity due to the inherent digital nature of these sequences. DSP methods demonstrated early success for detection of coding regions in a gene. More recently, these methods have been used to establish DNA gene similarity. We present the inter-coefficient difference (ICD) transformation, a novel extension of the discrete Fourier transformation, which can be applied to any DNA sequence. The ICD method is a mathematical, alignment-free DNA comparison method that generates a genetic signature for any DNA sequence, which is then used to produce relative measures of similarity among DNA sequences. We demonstrate our method on a set of insulin genes obtained from an evolutionarily wide range of species, and on a set of avian influenza viral sequences, which represents a set of highly similar sequences. We compare phylogenetic trees generated using our technique against trees generated using traditional alignment techniques and demonstrate that the ICD method produces a highly accurate tree without requiring an alignment prior to establishing sequence similarity.
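The general pipeline behind DFT-based, alignment-free DNA comparison can be sketched as follows. The Voss binary-indicator mapping and the DFT step are standard practice; the final differencing of adjacent DFT magnitudes is only our guess at the idea behind the name "inter-coefficient difference", not the paper's exact definition, and the short sequences are toy examples, not real insulin genes.

```python
import numpy as np

# Alignment-free DNA signature sketch: Voss indicator sequences -> DFT
# magnitudes -> differences between adjacent coefficients (an assumed
# stand-in for the paper's ICD step).

def signature(seq, n_coeffs=32, n_fft=128):
    seq = seq.upper()
    parts = []
    for base in "ACGT":
        u = np.array([1.0 if c == base else 0.0 for c in seq])
        mag = np.abs(np.fft.fft(u, n_fft))[:n_coeffs + 1]
        parts.append(np.diff(mag))       # inter-coefficient differences
    return np.concatenate(parts)

def distance(s1, s2):
    return float(np.linalg.norm(signature(s1) - signature(s2)))

# Toy sequences: two near-identical strings and one unrelated repeat.
seq_a = "ATGGCCCTGTGGATGCGCCTGCTGCCCCTGCTGGCGCTGCTGGCCCTC"
seq_b = "ATGGCCCTGTTGATGCGCCTGCTGACCCTGCTGGCGCTGTTGGCCCTC"
seq_c = "GGGGTTTTAAAACCCCGGGGTTTTAAAACCCCGGGGTTTTAAAACCCC"

d_close = distance(seq_a, seq_b)
d_far = distance(seq_a, seq_c)
```

Because no alignment is performed, the comparison cost depends only on the FFT length, not on sequence similarity.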

Relevance: 80.00%

Abstract:

The task considered in this paper is performance evaluation of region segmentation algorithms in the ground-truth-based paradigm. Given a machine segmentation and a ground-truth segmentation, performance measures are needed. We propose to consider the image segmentation problem as one of data clustering and, as a consequence, to use measures for comparing clusterings developed in statistics and machine learning. By doing so, we obtain a variety of performance measures which have not been used before in image processing. In particular, some of these measures have the highly desired property of being a metric. Experimental results are reported on both synthetic and real data to validate the measures and compare them with others.
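The segmentation-as-clustering view can be made concrete with one of the classical clustering-comparison measures. The sketch below computes the Rand index between a machine segmentation and a ground truth on tiny label images; the paper evaluates a broader family of measures, so this is just one representative.

```python
import numpy as np
from itertools import combinations

# Rand index: fraction of pixel pairs on which two segmentations agree
# about "same segment" vs "different segment". Label values themselves
# do not matter, only the induced partitions.

def rand_index(seg_a, seg_b):
    a, b = seg_a.ravel(), seg_b.ravel()
    agree = total = 0
    for i, j in combinations(range(len(a)), 2):
        total += 1
        if (a[i] == a[j]) == (b[i] == b[j]):
            agree += 1
    return agree / total

machine = np.array([[0, 0, 1, 1],
                    [0, 0, 1, 1],
                    [2, 2, 3, 3],
                    [2, 2, 3, 3]])
truth = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 1, 1],     # ground truth merges two regions
                  [2, 2, 1, 1]])

score_same = rand_index(machine, machine)   # identical partitions -> 1.0
score_diff = rand_index(machine, truth)     # penalized for the split/merge
```

Pair-counting measures like this are insensitive to label permutations, which is exactly the property needed when comparing independently produced segmentations.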

Relevance: 80.00%

Abstract:

At every level of organization of the central nervous system, from ion channels to neuronal networks, most processes occur in a closed loop, where the input to the system depends on its output. In contrast, most in vitro preparations and experimental protocols operate autonomously and do not depend on the output of the studied system. Thanks to progress in digital signal processing and real-time computing, it is now possible to close the loop artificially and investigate biophysical processes and mechanisms under increased realism. In this contribution, we review some of the most relevant examples of this new trend in in vitro electrophysiology, ranging from the use of dynamic clamp to multi-electrode distributed feedback stimulation. We are convinced these represent the beginning of new frontiers for the in vitro investigation of the brain, promising to remove the remaining borders between theoretical and experimental approaches while taking advantage of cutting-edge technologies.
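The dynamic-clamp idea mentioned above reduces to one computation per time step: the injected current is a function of the membrane voltage just measured, so the stimulus depends on the system's output. A minimal sketch on a passive membrane model follows; all constants are illustrative.

```python
import numpy as np

# Dynamic clamp sketch: an artificial conductance g_dyn toward reversal
# potential E_dyn is "injected" into a passive membrane. At each step the
# current depends on the most recent voltage, closing the loop.

dt, T = 0.01, 10000                # time step (ms), number of steps
C, g_leak, E_leak = 1.0, 0.1, -70.0
g_dyn, E_dyn = 0.3, -90.0          # artificial conductance parameters

def simulate(closed_loop):
    v = np.empty(T)
    v[0] = -55.0
    for t in range(1, T):
        i_clamp = g_dyn * (E_dyn - v[t - 1]) if closed_loop else 0.0
        dv = (g_leak * (E_leak - v[t - 1]) + i_clamp) / C
        v[t] = v[t - 1] + dt * dv  # forward-Euler integration
    return v

v_open = simulate(False)     # settles to the leak reversal, -70 mV
v_closed = simulate(True)    # settles to the conductance-weighted mixture
```

With the loop closed, the steady state is the conductance-weighted average (g_leak*E_leak + g_dyn*E_dyn)/(g_leak + g_dyn) = -85 mV, exactly as if the cell expressed the extra conductance.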

Relevance: 80.00%

Abstract:

Electroencephalograms (EEG) are often contaminated with high-amplitude artifacts, limiting the usability of the data. Methods that reduce these artifacts are often restricted to certain artifact types, require manual interaction, or need large training data sets. In this paper we introduce a novel method that is able to eliminate many different types of artifacts without manual intervention. The algorithm first decomposes the signal into sub-band signals in order to isolate different artifact types into specific frequency bands. After signal decomposition with principal component analysis (PCA), an adaptive threshold is applied to eliminate components with high variance corresponding to the dominant artifact activity. Our results show that the algorithm significantly reduces artifacts while preserving the EEG activity. The algorithm's parameters do not have to be identified for every patient individually, making the method a good candidate for preprocessing in automatic seizure detection and prediction algorithms.
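The PCA stage of such a method can be sketched in isolation: project multichannel data onto principal components, zero out the components whose variance exceeds an adaptive threshold, and project back. The sub-band filter bank of the full method is omitted here, and the synthetic data and threshold factor are assumptions.

```python
import numpy as np

# PCA-based artifact suppression sketch on synthetic multichannel "EEG".

rng = np.random.default_rng(3)
n_ch, n_s = 8, 5000
eeg = rng.standard_normal((n_ch, n_s))          # background activity
artifact = 20.0 * rng.standard_normal(n_s)      # high-amplitude artifact
mix = rng.standard_normal(n_ch)                 # artifact topography
data = eeg + np.outer(mix, artifact)

# PCA via eigendecomposition of the channel covariance matrix.
cov = np.cov(data)
vals, vecs = np.linalg.eigh(cov)
scores = vecs.T @ data                          # component time courses

# Adaptive variance threshold: drop components far above the typical scale.
var = scores.var(axis=1)
keep = var < 10.0 * np.median(var)
cleaned = vecs[:, keep] @ scores[keep]

err_before = np.mean((data - eeg) ** 2)         # artifact power, raw
err_after = np.mean((cleaned - eeg) ** 2)       # artifact power, cleaned
```

Because the artifact dominates a single principal component, removing only the high-variance components strips it out while leaving most of the underlying activity intact.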

Relevance: 80.00%

Abstract:

Users of cochlear implants (auditory aids which stimulate the auditory nerve electrically at the inner ear) often suffer from poor speech understanding in noise. We evaluate a small (intermicrophone distance 7 mm) and computationally inexpensive adaptive noise reduction system suitable for behind-the-ear cochlear implant speech processors. The system is evaluated in simulated and real, anechoic and reverberant environments. Results from simulations show improvements of 3.4 to 9.3 dB in signal-to-noise ratio for rooms with realistic reverberation, and more than 18 dB under anechoic conditions. Speech understanding in noise is measured in 6 adult cochlear implant users in a reverberant room, showing average improvements of 7.9–9.6 dB compared to a single omnidirectional microphone, or 1.3–5.6 dB compared to a simple directional two-microphone device. Subjective evaluation in a cafeteria at lunchtime shows a preference of the cochlear implant users for the evaluated device in terms of speech understanding and sound quality.

Relevance: 80.00%

Abstract:

In a statistical inference scenario, a target signal or its parameters are estimated by processing data from informative measurements. Estimation performance can be enhanced if the measurements are chosen according to criteria that direct the sensing resources so that the measurements are more informative about the parameter to be estimated. When taking multiple measurements, they can be chosen online so that more information is extracted from the data at each measurement step. This approach fits naturally within the Bayesian inference model, which produces successive posterior distributions of the associated parameter. We explore the sensor array processing scenario for adaptive sensing of a target parameter. The measurement choice is described by a measurement matrix that multiplies the data vector normally associated with array signal processing. Adaptive sensing of both static and dynamic system models is performed by online selection of a proper measurement matrix over time. In the dynamic system model, the target is assumed to move with some distribution, and the prior distribution is updated at each time step; information gained through adaptive sensing of the moving target is lost due to the relative shift of the target. The adaptive sensing paradigm has many similarities with compressive sensing. We attempt to reconcile the two approaches by modifying the observation model of adaptive sensing to match the compressive sensing model for the estimation of a sparse vector.
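The adaptive-sensing loop for a static model can be sketched with a linear-Gaussian toy problem: maintain a Gaussian posterior over the parameter and, at each step, choose the measurement vector along the direction of largest posterior uncertainty. The model and the greedy selection rule are illustrative assumptions, not the thesis's array model.

```python
import numpy as np

# Sequential Bayesian estimation with adaptive measurement selection:
# y = a^T theta + noise, with a chosen online from the current posterior.

rng = np.random.default_rng(5)
theta = np.array([1.5, -0.7])      # unknown parameter (ground truth)
sigma2 = 0.1                       # measurement noise variance

mu = np.zeros(2)                   # prior mean
P = np.eye(2) * 10.0               # prior covariance

for _ in range(30):
    # Adaptive choice: measure along the eigenvector of P with the
    # largest eigenvalue, i.e. the most uncertain direction.
    vals, vecs = np.linalg.eigh(P)
    a = vecs[:, -1]

    y = a @ theta + np.sqrt(sigma2) * rng.standard_normal()

    # Conjugate Gaussian (scalar Kalman) posterior update.
    s = a @ P @ a + sigma2
    K = P @ a / s
    mu = mu + K * (y - a @ mu)
    P = P - np.outer(K, a @ P)

err = np.linalg.norm(mu - theta)
```

Each measurement shrinks the posterior most where it is widest, so the uncertainty (trace of P) and the estimation error both fall quickly.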

Relevance: 80.00%

Abstract:

Transformer protection is one of the most challenging applications within the power system protective relay field. Transformers with a capacity rating exceeding 10 MVA are usually protected using differential current relays. Transformers are an aging and vulnerable bottleneck in the present power grid; therefore, quick fault detection and corresponding transformer de-energization are the key elements in minimizing transformer damage. Present differential current relays are based on digital signal processing (DSP), combining DSP phasor estimation with protective-logic-based decision making. The limitations of existing DSP-based differential current relays must be identified to determine the best protection options for sensitive and quick fault detection. The development, implementation, and evaluation of a DSP differential current relay is detailed. The overall goal is to make fault detection faster without compromising secure and safe transformer operation. A detailed background on the DSP differential current relay is provided. Different DSP phasor estimation filters are then implemented and evaluated based on their ability to extract desired frequency components from the measured current signal quickly and accurately; the main focus of this evaluation is the difference between non-recursive and recursive filtering methods. The protective logic of the DSP differential current relay is then implemented, and the required settings are made in accordance with the transformer application. Finally, the DSP differential current relay is evaluated using available transformer models within the ATP simulation environment. Recursive filtering methods were found to have a significant advantage over non-recursive filtering methods, both when evaluated individually and when applied in the DSP differential relay. Recursive filtering methods can be up to 50% faster than non-recursive methods, but can cause false trips due to overshoot if speed is the only objective. Relay sensitivity, however, is independent of the filtering method and depends on the settings of the relay's differential characteristic (pickup threshold and percent slope).
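The non-recursive versus recursive distinction can be made concrete with the full-cycle DFT phasor estimator commonly used in numerical relays: the non-recursive form correlates a whole cycle of samples at every step, while the recursive form updates the previous phasor using only the newest and oldest samples. The sampling rate and waveform below are illustrative, not the thesis's test cases.

```python
import numpy as np

# Full-cycle DFT phasor estimation, non-recursive vs recursive forms.

N = 16                                   # samples per power-system cycle
n = np.arange(10 * N)
i_sig = 100.0 * np.cos(2 * np.pi * n / N + 0.3)   # steady current waveform

c = np.exp(-2j * np.pi * np.arange(N) / N)        # correlation weights

def nonrecursive(x, t):
    """Phasor from the full N-sample window ending at sample t (t >= N-1)."""
    return (2 / N) * np.sum(x[t - N + 1:t + 1] * c)

# Recursive form: drop the oldest sample, add the newest, rotate by one
# sample angle -- O(1) work per sample instead of O(N).
rot = np.exp(2j * np.pi / N)
ph = nonrecursive(i_sig, N - 1)
rec, direct = [], []
for t in range(N, len(n)):
    ph = rot * (ph + (2 / N) * (i_sig[t] - i_sig[t - N]))
    rec.append(ph)
    direct.append(nonrecursive(i_sig, t))

rec, direct = np.array(rec), np.array(direct)
```

Both forms produce identical phasors in steady state (magnitude equal to the 100 A peak here); the differences the thesis studies appear in their transient response when the waveform changes at fault inception.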