982 results for "Narrow-band frequency filters"
Abstract:
A major challenge in the transmission of narrow pulses is the radiation characteristics of the antenna. Designing the front ends for UWB systems poses challenges compared to their narrow- and wide-band counterparts because, in addition to having electrically small size, high efficiency, and wide bandwidth, the antenna must have an excellent transient response. The present work deals with four novel antenna designs (Square Monopole, Semi-Elliptic Slot, and Step and Linear Tapered Slot) and an assessment of their suitability for UWB systems. Multiple resonances in the geometry are matched to the UWB band by redesigning the ground-patch interfaces. Techniques to avoid narrow-band interference are proposed at the antenna level, and their effect on a nanosecond pulse is also investigated. The thesis proposes guidelines for designing the antenna on laminates of any permittivity, and the analyses are completed with results in both the frequency and time domains.
Abstract:
Optical spectroscopy is a very important measurement technique with high potential for numerous applications in industry and science. Low-cost, miniaturized spectrometers, for example, are needed especially for modern sensor systems in "smart personal environments", which are used above all in energy technology, metrology, safety and security, IT, and medical technology. Among all miniaturized spectrometers, one of the most attractive miniaturization approaches is the Fabry-Pérot filter. In this approach, the combination of a Fabry-Pérot (FP) filter array and a detector array can function as a microspectrometer. Each detector corresponds to a single filter and detects the very narrow band of wavelengths transmitted by that filter. An array of FP filters is used in which each filter selects a different spectral filter line. The spectral position of each wavelength band is defined by the individual cavity height of the filter. Arrays have been developed with filter sizes limited only by the array dimensions of the individual detectors. However, existing Fabry-Pérot filter microspectrometers require complicated fabrication steps for structuring the 3D filter cavities with different heights, which are not cost-efficient for industrial production. To reduce costs while retaining the outstanding advantages of the FP filter structure, a new method for fabricating miniaturized FP filters by means of nanoimprint technology is developed and presented here. In this case, the multiple cavity-fabrication steps are replaced by a single step that exploits the high vertical resolution of 3D nanoimprint technology. Since nanoimprint technology is used, the FP-filter-based miniaturized spectrometer is called a nanospectrometer.
A static nanospectrometer consists of a static FP filter array on a detector array (see Fig. 1). Each FP filter in the array consists of a lower distributed Bragg reflector (DBR), a resonance cavity, and an upper DBR. The upper and lower DBRs are identical and consist of periodically alternating thin dielectric layers of materials with high and low refractive indices. The optical thickness of each dielectric thin-film layer contained in the DBR corresponds to one quarter of the design wavelength. Each FP filter is assigned to a defined area of the detector array. This area can comprise individual detector elements or groups of them. Accordingly, the lateral geometries of the cavities are built to match the corresponding detectors. The lateral and vertical dimensions of the cavities are precisely defined by 3D nanoimprint technology. The cavities differ by only a few nanometers in the vertical direction. The precision of the cavity in the vertical direction is an important factor, since it determines the accuracy of the spectral position and the transmittance of the filter's transmission line.
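The relation between cavity height and filter line described above can be sketched numerically. Below is a minimal, idealized calculation (lossless mirrors, normal incidence); the refractive indices and the 550 nm design wavelength are illustrative assumptions, not values from the thesis:

```python
# Idealized Fabry-Perot filter arithmetic (normal incidence, lossless mirrors).
# Indices and the 550 nm design wavelength are illustrative assumptions.

def quarter_wave_thickness(wavelength_nm, n):
    """Physical thickness of a quarter-wave DBR layer: n * t = wavelength / 4."""
    return wavelength_nm / (4.0 * n)

def resonance_wavelengths(cavity_nm, n_cavity, lo=400.0, hi=800.0):
    """Transmission peaks of an ideal FP cavity: m * wavelength = 2 * n * d."""
    peaks = []
    m = 1
    while True:
        lam = 2.0 * n_cavity * cavity_nm / m
        if lam < lo:
            break
        if lam <= hi:
            peaks.append((m, lam))
        m += 1
    return peaks

print(quarter_wave_thickness(550.0, 1.46))  # e.g. a low-index (SiO2-like) layer
print(resonance_wavelengths(275.0, 1.0))    # 275 nm air cavity -> peak at 550 nm
print(resonance_wavelengths(280.0, 1.0))    # a few nm taller -> peak at 560 nm
```

A change of a few nanometers in cavity height shifts the transmission line by 2·n·Δd/m, which is why the nanometer-scale vertical precision of the imprint step directly sets the spectral accuracy of each filter.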
Abstract:
Introduction: Vitiligo is a prevalent disease in our setting, affecting about 2% of the world population. The symptoms of this disease are mainly aesthetic, manifesting as achromic macules, symmetric on the limbs and on the face, where they cause the greatest stigmatization of patients. Currently no treatment provides prompt and permanent improvement of the symptoms. Objective: To determine the effectiveness of the 308 nm excimer laser in the treatment of vitiligo by means of a systematic review of the literature. Methods: Systematic search of clinical trials and quasi-experimental studies in the major databases on the effectiveness of the 308 nm excimer laser for repigmentation in adult patients with vitiligo. Their methodological quality was assessed. Results: Of 862 articles found, 40 potential articles were selected, of which two were included in this review. The 308 nm excimer laser as monotherapy achieves effective pigmentation (≥50%) in 28.03% of the treated areas, of which 72.9% were located in areas sensitive to ultraviolet radiation and 27.02% in non-sensitive areas. Onset of pigmentation occurred at session 13 (one month after the start of treatment). The laser was safe and well tolerated. Conclusion: The evidence suggests that treatment with the 308 nm excimer laser as monotherapy is a therapeutic alternative to achieve prompt repigmentation of the achromic macules of vitiligo in areas sensitive to ultraviolet radiation. Studies evaluating combinations of drugs and laser in the treatment of vitiligo should be considered.
Abstract:
The design and manufacture of the band-defining filters and their associated dichroic beam splitter for the 11- and the 12-µm infrared channels of the advanced along-track scanning radiometer are described. The filter requirements that have led to the choice of coating designs, coating materials, disposition of coatings, and effects of polarization are discussed. Overall spectral throughputs of the filter and dichroic interaction for the two channels are also presented.
Abstract:
Clouds and associated precipitation are the largest source of uncertainty in current weather and future climate simulations. Observations of the microphysical, dynamical and radiative processes that act at cloud scales are needed to improve our understanding of clouds. The rapid expansion of ground-based super-sites and the availability of continuous profiling and scanning multi-frequency radar observations at 35 and 94 GHz have significantly improved our ability to probe the internal structure of clouds in high temporal-spatial resolution, and to retrieve quantitative cloud and precipitation properties. However, there are still gaps in our ability to probe clouds due to large uncertainties in the retrievals. The present work discusses the potential of G band (frequency between 110 and 300 GHz) Doppler radars in combination with lower frequencies to further improve the retrievals of microphysical properties. Our results show that, thanks to a larger dynamic range in dual-wavelength reflectivity, dual-wavelength attenuation and dual-wavelength Doppler velocity (with respect to a Rayleigh reference), the inclusion of frequencies in the G band can significantly improve current profiling capabilities in three key areas: boundary layer clouds, cirrus and mid-level ice clouds, and precipitating snow.
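The dual-wavelength quantities mentioned above are simply differences between bands. As a minimal illustration (the reflectivity values below are invented, not from the study), the dual-wavelength ratio in dB is:

```python
import numpy as np

def dwr_db(z_low_dbz, z_high_dbz):
    """Dual-wavelength ratio: reflectivity difference in dB between a lower and
    a higher radar frequency. Near 0 dB while both bands scatter in the
    Rayleigh regime; it grows with non-Rayleigh scattering and differential
    attenuation, which is the signal exploited by multi-frequency retrievals."""
    return np.asarray(z_low_dbz, dtype=float) - np.asarray(z_high_dbz, dtype=float)

# Invented three-gate profile: drops small enough to be Rayleigh at both bands
# in the first gate, progressively larger particles aloft.
z35 = [10.0, 15.0, 20.0]   # dBZ at 35 GHz
z94 = [10.0, 13.0, 14.0]   # dBZ at 94 GHz
print(dwr_db(z35, z94))    # [0. 2. 6.]
```

Adding a G-band channel widens the spread of such ratios, which is the "larger dynamic range" the abstract refers to.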
Abstract:
This paper proposes an improved voice activity detection (VAD) algorithm using wavelets and a support vector machine (SVM) for the European Telecommunications Standards Institute (ETSI) adaptive multi-rate (AMR) narrow-band (NB) and wide-band (WB) speech codecs. First, based on the wavelet transform, the original IIR filter bank and pitch/tone detector are implemented, respectively, via a wavelet filter bank and a wavelet-based pitch/tone detection algorithm. The wavelet filter bank divides the input speech signal into several frequency bands so that the signal power level in each sub-band can be calculated. In addition, the background noise level can be estimated in each sub-band by using the wavelet de-noising method. The wavelet filter bank is also used to detect correlated complex signals such as music. The proposed algorithm then applies an SVM to train an optimized non-linear VAD decision rule involving the sub-band power, noise level, pitch period, tone flag, and complex-signal warning flag of the input speech signal. By using the trained SVM, the proposed VAD algorithm produces more accurate detection results. Various experiments carried out on the Aurora speech database under different noise conditions show that the proposed algorithm delivers VAD performance considerably superior to that of the AMR-NB VAD Options 1 and 2 and the AMR-WB VAD.
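The sub-band power computation at the heart of such a VAD can be sketched as follows. Note this stand-in uses a plain FFT band split rather than the paper's wavelet filter bank, and all numbers are illustrative:

```python
import numpy as np

def subband_powers(x, fs, edges):
    """Average power per frequency band via an FFT split. This is a stand-in
    for the wavelet filter bank of the paper: both divide the input speech
    into sub-bands whose power levels feed the VAD decision rule."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return [float(spec[(freqs >= lo) & (freqs < hi)].sum())
            for lo, hi in zip(edges[:-1], edges[1:])]

fs = 8000                              # telephone-band sampling rate
t = np.arange(fs) / fs                 # one second of signal
voiced = np.sin(2 * np.pi * 300 * t)   # tone standing in for voiced speech
powers = subband_powers(voiced, fs, [0, 500, 1000, 2000, 4000])
print(powers.index(max(powers)))       # 0: energy concentrated in the lowest band
```

A decision rule like the paper's SVM would consume these per-band powers (plus noise level, pitch, and tone features) as its input vector.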
Abstract:
A frequency selective surface (FSS) consists of a two-dimensional periodic structure mounted on a dielectric substrate, capable of selecting signals in one or more frequency bands of interest. In search of better performance, more compact dimensions, and lower manufacturing cost, among other characteristics, these periodic structures have been continually optimized over time. Because of their spectral characteristics, similar to those of band-stop or band-pass filters, FSSs have been studied and used in several applications for more than four decades. The design of an FSS with a periodic structure composed of pre-fractal elements facilitates the tuning of these spatial filters and the adjustment of their electromagnetic parameters, enabling a compact design that generally has a stable frequency response and superior performance relative to its Euclidean counterpart. The unique properties of geometric fractals have proven useful, mainly in the production of antennas and frequency selective surfaces, enabling innovative solutions and commercial applications in the microwave range. In recent applications, FSSs modify indoor propagation environments (an emerging concept called "wireless building"). In this context, the use of pre-fractal elements has also shown promising results, allowing more effective filtering of more than one frequency band with a single-layer structure. This thesis addresses the design of FSSs using pre-fractal elements based on Vicsek, Peano, and teragon geometries, which act as band-stop spatial filters. The transmission properties of the periodic surfaces are analyzed in order to design compact and efficient devices with stable frequency responses, applicable to the microwave range and suitable for use in indoor communications.
The results are discussed in terms of the electromagnetic effect of varying parameters such as the fractal iteration number (or fractal level), scale factor, fractal dimension, and periodicity of the FSS, according to the pre-fractal element applied to the surface. The analysis of the fractal dimension's influence on the resonant properties of an FSS is a new contribution to research on microwave devices that use fractal geometry. Owing to the particular characteristics and geometric shape of the Peano pre-fractal elements, the reconfiguration possibility of these structures is also investigated and discussed. The thesis further addresses the construction of efficient selective filters with new configurations of teragon pre-fractal patches, proposed to control WLAN coverage in indoor environments by rejecting signals in the 2.4-2.5 GHz (IEEE 802.11b) and 5.0-6.0 GHz (IEEE 802.11a) bands. The FSSs are initially analyzed through simulations performed with the commercial software packages Ansoft Designer and HFSS. The fractal design methodology is validated by experimental characterization of the built prototypes using, alternatively, different measurement setups with commercial horn antennas and with microstrip monopoles fabricated for low-cost measurements.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Electromagnetic interference caused by the power line negatively affects the signals of electronic instruments, especially those with low amplitude levels. This type of interference is known as common-mode interference. There are many methods and architectures used to minimize the influence of this interference phenomenon in electronic instruments, the most common of which is the use of band-reject filters. This work presents the analysis, development, prototyping, and testing of a new filter architecture with reconfigurable characteristics for biomedical instruments and for flow measurement in high-complexity fluids, with the goal of reducing common-mode interference while preserving the useful signal components that lie in the same frequency band as the noise, using the dynamic impedance balancing technique. Moreover, this work can be applied to any measurement system that also suffers interference at the power-line frequency (50/60 Hz: 50 Hz in France, 60 Hz in Brazil and the United States). The circuit blocks were modeled mathematically and the overall closed-loop transfer function was derived. The design was then described and simulated in VHDL-AMS and also in electronic simulation software, using discrete component blocks, with and without feedback. After theoretical analysis of the simulation results, a prototype circuit was built and tested using as input a signal obtained from ECG electrodes and from electroresistive electrodes. The experimental results of the circuit agree with the simulations: a noise reduction of 98.7% was obtained in simulations using a sinusoidal signal, and a reduction of 92% was achieved using ECG electrodes in experimental tests. The same tests with electroresistive electrodes yielded a maximum reduction of 80.3% (across the three cases analyzed). In both cases, the useful signal was preserved.
The method and its architecture can be applied to attenuate interference that occurs in the same frequency band as the useful signal components, while preserving those signals.
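For context, the conventional band-reject approach that the dynamic-impedance-balancing architecture improves upon can be sketched as a 2nd-order IIR notch at the power-line frequency. All parameter values below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def notch(x, f0, fs, r=0.98):
    """2nd-order IIR band-reject (notch) at f0: zeros on the unit circle at
    +/- 2*pi*f0/fs, poles just inside at radius r, normalized to unit DC gain.
    This is the conventional filter the thesis' architecture improves on: a
    plain notch also removes useful signal components that share the band."""
    w0 = 2 * np.pi * f0 / fs
    b = np.array([1.0, -2 * np.cos(w0), 1.0])
    a = np.array([1.0, -2 * r * np.cos(w0), r * r])
    b *= a.sum() / b.sum()              # unity gain at DC (z = 1)
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = b[0] * x[n]
        if n >= 1:
            y[n] += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            y[n] += b[2] * x[n - 2] - a[2] * y[n - 2]
    return y

fs = 1000.0
t = np.arange(2000) / fs
useful = np.sin(2 * np.pi * 10 * t)      # signal component to preserve
hum = np.sin(2 * np.pi * 60 * t)         # power-line interference
y = notch(useful + hum, 60.0, fs)

spec = np.abs(np.fft.rfft(y[1000:]))     # steady-state second half, 1 Hz bins
print(spec[60] < 5.0, spec[10] > 400.0)  # hum removed, 10 Hz component kept
```

The notch preserves the 10 Hz component only because it is far from 60 Hz; when the useful signal itself contains 50/60 Hz content, this approach fails, which is precisely the case the dynamic impedance balancing technique targets.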
Abstract:
Piezoelectric ceramics, such as PZT, can generate subnanometric displacements, but in order to generate multi-micrometric displacements they must either be driven by high electric voltages (hundreds of volts), operate at a mechanical resonance frequency (in a narrow band), or have large dimensions (tens of centimeters). A piezoelectric flextensional actuator (PFA) is a device with small dimensions that can be driven by reduced voltages and can operate at the nano- and micro-scales. Interferometric techniques are very well suited to the characterization of these devices, because there is no mechanical contact in the measurement process and they offer high sensitivity, bandwidth, and dynamic range. A low-cost open-loop homodyne Michelson interferometer is used in this work to experimentally detect the nanovibrations of PFAs, based on spectral analysis of the interferometric signal. Building on the well-known J1...J4 phase demodulation method, a new and improved version is proposed, which is direct, self-consistent, immune to fading, and free of phase-ambiguity problems. The proposed method has a resolution similar to that of the modified J1...J4 method (0.18 rad); however, its dynamic range is 20% larger, it does not demand algorithms to correct the algebraic signs of the Bessel functions, and there are no singularities when the static phase shift between the interferometer arms is equal to an integer multiple of π/2 rad. Electronic noise and random phase drifts due to ambient perturbations are taken into account in the analysis of the method. The characterization of the PFA nanopositioner was based on the analysis of the linearity between the applied voltage and the resulting displacement, on the displacement frequency response, and on the determination of the main resonance frequencies.
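The classical J1...J4 method that the thesis builds on recovers the phase amplitude x from the first four harmonics of the interferometric signal, whose amplitudes are proportional to Bessel functions Jn(x). A minimal numerical sketch of that classical formula (not the improved variant proposed in the thesis; x_true and phi0 are illustrative values):

```python
import numpy as np

# Interferometric signal I(t) = cos(phi0 + x*sin(w*t)): the n-th harmonic of w
# has amplitude 2*|Jn(x)| (times |cos(phi0)| for even n, |sin(phi0)| for odd n).
fs, f_mod = 100_000, 100        # samples per second, modulation frequency (Hz)
x_true, phi0 = 1.5, 0.7         # illustrative phase amplitude and static phase
t = np.arange(fs) / fs
signal = np.cos(phi0 + x_true * np.sin(2 * np.pi * f_mod * t))

spec = np.abs(np.fft.rfft(signal)) / (fs / 2)    # harmonic amplitudes
V1, V2, V3, V4 = (spec[n * f_mod] for n in (1, 2, 3, 4))

# Classical J1...J4 estimate. From the recurrence J(n-1) + J(n+1) = (2n/x)*Jn,
# (J1+J3)*(J2+J4) = 24*J2*J3/x^2, and the sin(phi0)/cos(phi0) factors cancel
# in the ratio, so x can be recovered without knowing the static phase.
x_est = np.sqrt(24 * V2 * V3 / ((V1 + V3) * (V2 + V4)))
print(abs(x_est - x_true) < 1e-3)
```

The singularity the thesis removes is visible here: when phi0 is a multiple of π/2, either the odd or the even harmonics vanish and the classical ratio becomes 0/0.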
Abstract:
The heating of the solar corona has been investigated for four decades, and several mechanisms able to produce heating have been proposed. Until now it has not been possible to produce quantitative estimates that would establish any of these heating mechanisms as the most important in the solar corona. In order to investigate which heating mechanism is the most important, a more detailed approach is needed. In this thesis, the heating problem is approached "ab initio", using well-observed facts and including realistic physics in a 3D magneto-hydrodynamic simulation of a small part of the solar atmosphere. The "engine" of the heating mechanism is the solar photospheric velocity field, which braids the magnetic field into a configuration where energy has to be dissipated. The initial magnetic field is taken from an observation of a typical magnetic active region, scaled down to fit inside the computational domain. The driving velocity field is generated by an algorithm that reproduces the statistical and geometrical fingerprints of solar granulation. Using a standard model atmosphere as the thermal initial condition, the simulation goes through a short start-up phase, in which the initial thermal stratification is quickly forgotten, after which it settles into statistical equilibrium. In this state, the magnetic field dissipates the same amount of energy as is estimated to be lost through radiation, the main energy-loss mechanism in the solar corona. The simulation produces heating that is intermittent on the smallest resolved scales, and hot loops similar to those observed through narrow-band filters in the ultraviolet. Other observed characteristics of the heating are reproduced, as well as a coronal temperature of roughly one million K. Because of the ab initio approach, the amount of heating produced in these simulations represents a lower limit to coronal heating, and the conclusion is that such heating of the corona is unavoidable.
Abstract:
In this work, a narrow-band continuous-wave coherent Lyman-α source based on solid-state laser systems, intended for the future cooling of antihydrogen, is presented. The fundamental solid-state laser systems make it possible, in the four-wave mixing process used to generate the Lyman-α radiation, to optimally exploit not only the 6^1S – 7^1S two-photon resonance of mercury but, for the first time, also the 6^1S – 6^3P one-photon resonance to increase the conversion efficiency. In first measurements, 0.063 nW of power at Lyman-α was generated. With this Lyman-α source it was possible for the first time, thanks to the proximity of the first fundamental laser to the one-photon resonance, to record the complete phase-matching curve of the four-wave mixing process. In addition to the fundamental laser systems and the Lyman-α generation itself, this work presents the detection of the produced Lyman-α radiation with a photomultiplier, which was optimized to the point that a reliable estimate of the generated power is possible. For this purpose, a test stand was also set up with which the transmissivity at 121.56 nm of the optics used in the Lyman-α apparatus was measured. Furthermore, a versatile calculation is presented with which the generated power at Lyman-α is determined as a function of, among other things, the temperature, the absorption of the first fundamental laser beam, the density profile of the mercury vapor, and the influence of a buffer gas.
Abstract:
To quantify the evolution of genuine zero-lag cross-correlations of focal-onset seizures, we apply a recently introduced multivariate measure to broadband and narrow-band EEG data. For frequency components below 12.5 Hz, the strength of genuine cross-correlations decreases significantly during the seizure and the immediate post-seizure period, while higher frequency bands show a tendency toward elevated cross-correlations during the same period. We conclude that, in terms of genuine zero-lag cross-correlations, the electrical brain activity as assessed by scalp electrodes shows a significant spatial fragmentation, which might promote seizure offset.
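A plain zero-lag cross-correlation of band-limited channels (not the bias-corrected "genuine" measure used in the study) can be sketched as follows, with synthetic data standing in for EEG:

```python
import numpy as np

def lowpass_fft(x, fs, cutoff):
    """Crude brick-wall low-pass: zero all FFT components above cutoff."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[f > cutoff] = 0.0
    return np.fft.irfft(X, n=len(x))

def zero_lag_corr(channels, fs, cutoff=12.5):
    """Zero-lag cross-correlation matrix of band-limited channels (plain
    correlation coefficients, not the bias-corrected multivariate measure)."""
    filtered = np.array([lowpass_fft(c, fs, cutoff) for c in channels])
    return np.corrcoef(filtered)

rng = np.random.default_rng(0)
fs, n = 250, 2500                        # 10 s of EEG-like data at 250 Hz
shared = rng.standard_normal(n)          # common source seen by all electrodes
chans = [shared + 0.5 * rng.standard_normal(n) for _ in range(3)]
C = zero_lag_corr(chans, fs)
print(C.shape, C[0, 1] > 0.5)            # strong sub-12.5-Hz coupling
```

The multivariate measure of the paper additionally corrects for correlations expected by chance, which plain `corrcoef` does not.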
Abstract:
In this project, we study and analyze digital signal processing techniques applied to accelerometers, and use a DSP-based prototyping board to run the different tests. The project consists mainly of digital filtering of signals coming from a specific accelerometer, the 1201F, whose main field of application is the automotive industry. After studying digital processing theory and the characteristics of real filters, we design an application based above all on the environment in which such an application would be deployed. Throughout the design, the different phases are described: computer design (Matlab), filter design on the DSP (C language), tests on the DSP without the accelerometer, accelerometer calibration, and final tests with the accelerometer. The hardware and software tools used are: the Analog Devices 21-161N evaluation kit (equipped with the VisualDSP++ 4.5 development environment), the 1201F accelerometer, the Spektra CS-18-LF accelerometer calibration system, and the software packages MATLAB 7.5 and CoolEdit Pro 2.0. Only 2nd-order IIR filters are implemented, of all types (Butterworth, Chebyshev I and II, and elliptic). We implement narrow-band band-pass and band-stop filters of several kinds, within the full scale allowed by the accelerometer. Once all the tests, both simulated and physical, are finished, the filters with the best performance are selected and analyzed to draw conclusions. Since a suitable environment is available, the filters are combined with each other in several ways to obtain higher-order filters (parallel structure). In this way, starting from band-pass filters, we can obtain other configurations that give us greater flexibility. The purpose of this project is not only to obtain good filtering results, but also to exploit the facilities of the environment and the available tools to make the design as efficient as possible.
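The parallel structure described above, in which band-pass sections are summed to obtain more flexible responses, can be sketched with simple 2nd-order resonator biquads. These are a stand-in for the project's Butterworth/Chebyshev designs, and all frequencies are illustrative:

```python
import numpy as np

def bandpass_biquad(f0, fs, r=0.98):
    """2nd-order resonator band-pass centred at f0: zeros at DC and Nyquist,
    poles at radius r. A stand-in for the project's Butterworth/Chebyshev
    designs; the point here is the parallel structure, not the approximation."""
    w0 = 2 * np.pi * f0 / fs
    b = np.array([1.0, 0.0, -1.0])
    a = np.array([1.0, -2 * r * np.cos(w0), r * r])
    z = np.exp(1j * w0)
    b = b / abs(np.polyval(b, z) / np.polyval(a, z))  # unit gain at f0
    return b, a

def iir_filter(b, a, x):
    """Direct-form difference equation for one biquad section."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = b[0] * x[n]
        if n >= 1:
            y[n] += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            y[n] += b[2] * x[n - 2] - a[2] * y[n - 2]
    return y

# Parallel structure: summing two narrow band-pass sections yields a dual-band
# filter that passes 500 Hz and 1500 Hz while rejecting 1000 Hz.
fs = 8000
b1, a1 = bandpass_biquad(500.0, fs)
b2, a2 = bandpass_biquad(1500.0, fs)
t = np.arange(fs) / fs
x = sum(np.sin(2 * np.pi * f * t) for f in (500, 1000, 1500))
y = iir_filter(b1, a1, x) + iir_filter(b2, a2, x)

amp = np.abs(np.fft.rfft(y[fs // 2 :])) / (fs / 4)   # steady-state, 2 Hz bins
print(amp[250] > 0.7, amp[500] < 0.2, amp[750] > 0.7)
```

Each section stays 2nd order, as in the project, and the higher-order overall response comes entirely from summing the branch outputs.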
Abstract:
An increasing number of neuroimaging studies are concerned with the identification of interactions or statistical dependencies between brain areas. Dependencies between the activities of different brain regions can be quantified with functional connectivity measures such as the cross-correlation coefficient. An important factor limiting the accuracy of such measures is the amount of empirical data available. For event-related protocols, the amount of data also affects the temporal resolution of the analysis. We use analytical expressions to calculate the amount of empirical data needed to establish whether a certain level of dependency is significant when the time series are autocorrelated, as is the case for biological signals. These analytical results are then contrasted with estimates from simulations based on real data recorded with magnetoencephalography during a resting-state paradigm and during the presentation of visual stimuli. Results indicate that, for broadband signals, 50–100 s of data is required to detect a true underlying cross-correlation coefficient of 0.05. This corresponds to a resolution of a few hundred milliseconds for typical event-related recordings. The required time window increases for narrow-band signals as frequency decreases. For instance, approximately 3 times as much data is necessary for signals in the alpha band. Important implications can be derived for the design and interpretation of experiments to characterize weak interactions, which are potentially important for brain processing.
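The order of magnitude of the quoted 50–100 s figure can be recovered with the usual large-sample significance threshold |r| > z/√N, where N is the number of independent samples. The ~20 Hz effective rate below is an illustrative assumption, not a value from the paper:

```python
import math

def samples_needed(r, z=1.96):
    """Independent samples needed before a true correlation r clears the
    large-sample significance threshold |r| > z / sqrt(N) at the 5% level."""
    return math.ceil((z / r) ** 2)

n = samples_needed(0.05)
print(n)   # independent samples needed for r = 0.05

# With autocorrelated signals, recording length = N divided by the effective
# number of independent samples per second. An assumed ~20 Hz effective rate
# for broadband MEG gives a duration consistent with the 50-100 s quoted
# above; a narrow-band signal with ~3x fewer independent samples per second
# needs ~3x more data, as the abstract notes for the alpha band.
print(round(n / 20.0))
```

This is only the back-of-envelope version of the paper's analytical treatment; the paper derives the effective sample count from the signals' autocorrelation rather than assuming it.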