993 results for Sampling rate


Relevance:

60.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

60.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

60.00%

Publisher:

Abstract:

The main objective of this dissertation is to propose a wireless sensor node for use in wireless sensor networks, in data acquisition systems for strain gauges. The acquisition system for the strain gauges is based on the Wheatstone bridge, arranged so as to allow several strain-gauge configurations. Processing and wireless communication are handled by the ATmega128RFA1, which combines a microcontroller and a radio-frequency transceiver compliant with the ZigBee standard. The node was designed to ensure reliable data acquisition and to be fully remotely controllable. Among the controllable parameters are the signal gain and the sampling rate. In addition, the node has resources to balance the Wheatstone bridge automatically. Its components were chosen according to criteria of power consumption and cost. A printed circuit board (PCB) was designed for the node, and on it estimates were made of the prototype's power consumption and added cost, in order to analyze its feasibility. Besides the design of the sensor node, this work proposes its integration into a wireless sensor network (WSN), including suggestions for the complementary hardware and the software development. To test the sensor node, a force transducer was built experimentally.
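
For reference, the bridge relation behind the acquisition front end described above is the standard Wheatstone output equation (a textbook relation, not taken from the dissertation itself); for a quarter-bridge strain-gauge configuration with gauge factor GF and strain ε it reduces to the usual small-signal approximation:

$$ V_{\text{out}} = V_{\text{ex}}\left(\frac{R_3}{R_3+R_4}-\frac{R_2}{R_1+R_2}\right), \qquad \frac{V_{\text{out}}}{V_{\text{ex}}} \approx \frac{GF\,\varepsilon}{4} $$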

Relevance:

60.00%

Publisher:

Abstract:

The use of multimodal neuroimaging techniques has been helpful in the investigation of the epileptogenic zone in patients with refractory epilepsies. This work aims to describe an ictal event recorded during simultaneous EEG-fMRI in a 39-year-old man with refractory epilepsy. The EEG data were recorded at a sampling rate of 5 kHz, using a BrainAmp (BrainProducts, München, Germany) amplifier with 64 MR (magnetic resonance) compatible Ag/AgCl electrodes. MR images were acquired on a 3T scanner in three 6-minute sequences of echo-planar images (EPIs) with TR = 2 s; the last sequence was stopped after the ictal event. The EEG was corrected for gradient and pulse artifacts using the Brain Vision Analyzer 2 software (BrainProducts), and the functional images were realigned, slice-timing corrected, normalized and smoothed. The onset of the ictal changes was used to evaluate the BOLD response in the MR images, using a t-test with a minimum cluster of 5 voxels, p < 0.005 (T > 2.5). The patient had a complex partial seizure, as noted by a neurologist. The fMRI data showed positive BOLD responses (activation) in dysplastic areas, but the most significant activation lay outside the lesion, in areas compatible with secondary spread from the epileptic focus, probably caused by the motor reaction also observed during the seizure. In conclusion, we note that the EEG-fMRI technique can detect the epileptogenic zone in patients with refractory epilepsy, but areas of spread from the primary epileptogenic focus may show significant activation, introducing additional difficulties into the interpretation of the results.
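
For illustration, the gradient-artifact correction step mentioned above is commonly implemented as average-artifact subtraction (in the spirit of Allen et al., 2000); the sketch below is a minimal single-channel stand-in, not the Brain Vision Analyzer implementation, and all names are hypothetical:

```python
import numpy as np

def gradient_artifact_correction(eeg, tr_onsets, tr_len):
    """Average-artifact subtraction sketch for a single EEG channel.

    eeg       : (n_samples,) EEG recorded inside the scanner
    tr_onsets : sample indices where each MR volume (TR) starts
    tr_len    : EEG samples per TR (e.g. 2 s * 5000 Hz = 10000 here)
    """
    # Template = average of the EEG across all TR epochs; the MR gradient
    # artifact repeats per TR while the brain signal averages out.
    epochs = np.stack([eeg[i:i + tr_len] for i in tr_onsets])
    template = epochs.mean(axis=0)
    corrected = eeg.copy()
    for i in tr_onsets:
        corrected[i:i + tr_len] -= template  # subtract template per epoch
    return corrected
```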

Relevance:

60.00%

Publisher:

Abstract:

AC Biosusceptometry (BAC) is a research tool that has been extensively explored by the Biomagnetism group at IBB-UNESP for monitoring the gastrointestinal tract, its response to a known drug, and the in vivo performance of solid dosage forms. Over this period BAC, which combines high sensitivity and low cost, has been developed primarily for recording the contractile activity and transit signals of the human gastrointestinal tract. With the possibility of producing images with this instrumentation, it became possible to evaluate different in vitro and in vivo situations in physiological and pharmaceutical studies. Given the good performance of this system in producing planar images, the first aim of the BAC tomography system (TBAC) was to evaluate the ability of a single-channel BAC system to produce tomographic images of ferromagnetic phantoms. All these applications were only possible because of its sensitivity to materials of high magnetic susceptibility, such as ferrite, which produce an electrical signal proportional to the variation of the magnetic flux generated by the presence of a magnetic marker near a first-order gradiometer. By measuring this variation at various points it was possible to generate planar images, which have recently also been produced with multi-detector (multi-channel) systems. From planar images, BAC tomographic images of bar phantoms were also produced in a 13-channel system using only the center channel, with good results when applied to simple objects such as one and two bars. When the resolution of the system was tested with more elaborate shapes, the quality and resolution of the reconstructed images were not satisfactory; this could be solved by increasing the spatial sampling rate and hence the acquisition time. The present system works with an acquisition time of about five hours. Since this system is intended for in vivo experiments, the acquisition time became a ...
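
The detection principle sketched above is ordinary electromagnetic induction: by Faraday's law (a textbook relation, not quoted from the abstract) the voltage induced in the gradiometer's pickup coils follows

$$ \varepsilon = -\frac{d\Phi_B}{dt} $$

so a nearby high-susceptibility marker modulates the magnetic flux Φ_B and hence the recorded signal.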

Relevance:

60.00%

Publisher:

Abstract:

This monograph proposes the implementation of a low-cost PID controller using a PIC microcontroller, and its application in a positioning system previously controlled by a dedicated integrated circuit. With closed-loop PID control, the system's instability was reduced and its response became smoother, eliminating the vibrations and mechanical wear seen with the dedicated integrated circuit, whose control action is very limited. The actuator of the system is a DC motor whose speed is controlled by the Pulse Width Modulation (PWM) technique through a full-bridge circuit, allowing the direction of rotation to be reversed. The microcontroller used was the PIC16F684, which has an enhanced PWM module, with its analog converters used for the reference and the position feedback. The position sensor is a multiturn potentiometer coupled to the motor axis by gears. The possibility of programming the PID coefficients in the microcontroller, as well as adjusting the sampling rate, allows the implemented system to achieve a high level of versatility.
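
As a concrete illustration of the control law described above, here is a minimal discrete PID step of the kind that runs once per sampling period; this is a generic sketch (names and the 0-100 % PWM clamp are illustrative assumptions), not the monograph's PIC16F684 firmware:

```python
def pid_step(setpoint, measured, state, kp, ki, kd, ts):
    """One PID iteration, executed every sampling period ts (seconds).

    state keeps the integral and the previous error between calls,
    e.g. state = {"integral": 0.0, "prev_error": 0.0}.
    """
    error = setpoint - measured
    state["integral"] += error * ts                  # integral term
    derivative = (error - state["prev_error"]) / ts  # derivative term
    state["prev_error"] = error
    out = kp * error + ki * state["integral"] + kd * derivative
    # Saturate to the PWM duty-cycle range (0-100 %).
    return max(0.0, min(100.0, out))
```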

Relevance:

60.00%

Publisher:

Abstract:

In the work underlying this thesis, solid-phase microextraction (SPME) was evaluated as a passive sampling technique for organophosphate triesters in indoor air. These compounds are used on a large scale as flame-retarding and plasticizing additives in a variety of materials and products, and have proven to be common pollutants in indoor air. The main objective of this work was to develop an accurate method for measuring the volatile fraction. Such a method can be used in combination with active sampling to obtain information regarding the vapour/particulate distribution in different indoor environments. SPME was investigated under both equilibrium and non-equilibrium conditions, and parameters associated with these different conditions were estimated. In Paper I, time-weighted average (TWA) SPME under dynamic conditions was investigated in order to obtain a fast air sampling method for organophosphate triesters. Among the investigated SPME coatings, the absorptive PDMS polymer had the highest affinity for the organophosphate triesters and was consequently used in all further work. Since the sampling rate depends on the agitation conditions, the linear airflow rates had to be carefully considered. Sampling periods as short as 1 hour were shown to be sufficient for measurements in the ng-μg m-3 range when using a PDMS 100-μm fibre and a linear flow rate above 7 cm s-1 over the fibre. SPME under equilibrium conditions is rather time-consuming, even under dynamic conditions, for slowly partitioning compounds such as organophosphate triesters. Nevertheless, this method has some significant advantages. For instance, the limit of detection is much lower than for 1 h TWA sampling. Furthermore, the sampling time can be ignored as long as equilibrium has been attained. In Paper II, SPME under equilibrium conditions was investigated and evaluated for organophosphate triester vapours. Since temperature and humidity are closely associated with the distribution constant, a simple study of the effect of these parameters was performed. The obtained distribution constants were used to determine the air levels in a common indoor environment. SPME and parallel active sampling on filters yielded similar results, indicating that the detected compounds were almost entirely associated with the vapour phase. To apply the dynamic SPME method in the field, a sampler device that enables controlled linear airflow rates to be applied was constructed and evaluated (Paper III). This device was developed for applying SPME and active sampling in parallel. A GC/PICI-MS/MS method was developed and used in combination with active sampling of organophosphate triesters in indoor air (Paper IV). The combination of MS/MS and the soft ionization achieved with methanol as reagent gas yielded high selectivity and detection limits comparable to those provided by GC with nitrogen-phosphorus detection (NPD). The method limit of detection, when sampling 1.5 m3 of air, was in the range 0.1-1.4 ng m-3. In Paper V, the developed MS method was used in combination with SPME for indoor air measurements. The levels detected in the investigated indoor environments ranged from a few ng m-3 up to μg m-3 levels. Tris(2-chloropropyl) phosphate was detected at a concentration as high as 7 μg m-3 in a newly rebuilt lecture room.
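
The TWA quantity underlying this kind of passive sampling is conventionally computed from the mass collected on the fibre via the sampler's sampling rate (a standard passive-sampling relation, not quoted from the thesis):

$$ \bar{C} = \frac{n}{R_s\,t} $$

where n is the analyte mass absorbed, R_s the sampling rate, and t the exposure time.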

Relevance:

60.00%

Publisher:

Abstract:

Array seismology is a useful tool for performing a detailed investigation of the Earth's interior. By using the coherence properties of the wavefield, seismic arrays are able to extract directivity information and to increase the amplitude of the coherent signal relative to that of incoherent noise. The Double Beam Method (DBM), developed by Krüger et al. (1993, 1996), is one possible application of seismic arrays to a refined seismic investigation of the crust and mantle. The DBM is based on a combination of source and receiver arrays, leading to a further improvement of the signal-to-noise ratio by reducing the error in the location of coherent phases. Previous DBM work has addressed mantle and core/mantle resolution (Krüger et al., 1993; Scherbaum et al., 1997; Krüger et al., 2001). Here, an implementation of the DBM is presented at large 2D scale (Italian data set for the Mw = 9.3 Sumatra earthquake) and at 3D crustal scale, as proposed by Rietbrock & Scherbaum (1999), by applying a revised version of the Source Scanning Algorithm (SSA; Kao & Shan, 2004). In the 2D application, the propagation of the rupture front in time has been computed. In the 3D application, the study area (20 x 20 x 33 km3), the data set and the source-receiver configurations are those of the KTB-1994 seismic experiment (Jost et al., 1998). We used 60 short-period seismic stations (200-Hz sampling rate, 1-Hz sensors) arranged in 9 small arrays deployed in 2 concentric rings of about 1 km (A-arrays) and 5 km (B-array) radius. The coherence values of the scattering points have been computed in the crustal volume, for a finite time window along all array stations, given the hypothesized origin time and source location. The resulting images can be seen as a (relative) joint log-likelihood that any point in the subsurface has contributed to the full set of observed seismograms.
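
For orientation, the stacking at the heart of such array methods can be sketched as plain delay-and-sum beamforming; the Double Beam Method applies the same idea on both the source and the receiver side. The plane-wave delay model and all names below are illustrative assumptions:

```python
import numpy as np

def beam(traces, station_xy, slowness, dt):
    """Delay-and-sum beam for one horizontal slowness vector.

    traces     : (n_sta, n_samp) array of seismograms
    station_xy : (n_sta, 2) station offsets from the array center, in km
    slowness   : (2,) horizontal slowness vector, in s/km
    dt         : sampling interval in s (e.g. 1/200 s for 200-Hz data)
    """
    n_sta = traces.shape[0]
    stacked = np.zeros(traces.shape[1])
    for k in range(n_sta):
        # Plane-wave delay at station k; wrap-around edges are ignored here.
        delay = int(round((station_xy[k] @ slowness) / dt))
        stacked += np.roll(traces[k], -delay)
    # Coherent phases add constructively; incoherent noise averages out.
    return stacked / n_sta
```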

Relevance:

60.00%

Publisher:

Abstract:

An extensive study of the morphology and dynamics of the equatorial ionosphere over South America is presented here. A multi-parametric approach is used to describe the physical characteristics of the ionosphere in the regions where the combination of the thermospheric electric field and the horizontal geomagnetic field creates the so-called Equatorial Ionization Anomalies. Ground-based measurements from GNSS receivers are used to link the Total Electron Content (TEC), its spatial gradients, and the phenomenon known as scintillation, which can lead to GNSS signal degradation or even to a GNSS signal 'loss of lock'. A new algorithm to highlight the features characterizing the TEC distribution is developed in the framework of this thesis, and the results obtained are validated and used to improve the performance of a GNSS positioning technique (long-baseline RTK). In addition, the correlation between scintillation and the dynamics of the ionospheric irregularities is investigated. By means of software implemented here, the velocity of the ionospheric irregularities is evaluated using high-sampling-rate GNSS measurements. The results highlight the parallel behaviour of the amplitude scintillation index (S4) occurrence and the zonal velocity of the ionospheric irregularities, at least under severe scintillation conditions (post-sunset hours). This suggests that scintillations are driven by TEC gradients as well as by the dynamics of the ionospheric plasma. Finally, given the importance of such studies for technological applications (e.g. GNSS high-precision applications), a validation of the NeQuick model (the model used in the new GALILEO satellites for TEC modelling) is performed. The NeQuick performance improves dramatically when data from HF radar sounding (ionograms) are ingested. A custom-designed algorithm, based on image recognition techniques, is developed to properly select the ingested data, leading to further improvement of the NeQuick performance.
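
The amplitude scintillation index cited above has the standard definition (normalized variance of the detrended signal intensity I; a textbook formula, not quoted from the thesis):

$$ S_4 = \sqrt{\frac{\langle I^2\rangle - \langle I\rangle^2}{\langle I\rangle^2}} $$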

Relevance:

60.00%

Publisher:

Abstract:

The development of next-generation microwave technology for backhauling systems is driven by an increasing capacity demand. In order to provide higher data rates and throughputs over a point-to-point link, a cost-effective performance improvement is enabled by enhanced energy efficiency of the transmit power-amplification stage, whereas a combination of spectrally efficient modulation formats and wider bandwidths is supported by amplifiers that fulfil strict linearity constraints. An optimal trade-off between these conflicting requirements can be achieved by resorting to flexible digital signal processing techniques at baseband. In such a scenario, adaptive digital pre-distortion is a well-known linearization method and a potentially widely used solution, since it can be easily integrated into base stations. Its operation can effectively compensate for the inter-modulation distortion introduced by the power amplifier, keeping up with the frequency-dependent, time-varying behaviour of the amplifier's nonlinear characteristic. In particular, the impact of memory effects becomes more relevant, and their equalisation more challenging, as the input discrete signal features a wider bandwidth and a faster envelope to pre-distort. This thesis project involves the research, design and simulation of an RTL pre-distorter implementation based on a novel polyphase architecture, which makes it capable of operating on very wideband signals at a sampling rate that complies with the clock speeds actually available in current digital devices. The motivation behind this structure is to carry out feasible pre-distortion for the multi-band, spectrally efficient complex signals carrying multiple channels that will be transmitted in near-future high-capacity, high-reliability microwave backhaul links.
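
As a point of reference, a widely used baseband pre-distortion model is the memory polynomial; the sketch below illustrates that generic model only (the coefficient array and all names are assumptions) and does not reproduce the thesis's polyphase RTL architecture:

```python
import numpy as np

def predistort(x, a):
    """Memory-polynomial pre-distorter.

    x : (n,) complex baseband samples
    a : (K, M) complex coefficients, K nonlinearity orders x M memory taps
    """
    K, M = a.shape
    y = np.zeros_like(x)
    for m in range(M):
        xm = np.roll(x, m)  # delayed samples x[n-m] (edge wrap ignored)
        for k in range(K):
            # Envelope terms |x[n-m]|^k weighted per memory tap.
            y += a[k, m] * xm * np.abs(xm) ** k
    return y
```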

Relevance:

60.00%

Publisher:

Abstract:

In this dissertation, the problem of creating effective control algorithms for large-scale Adaptive Optics (AO) systems for the new generation of giant optical telescopes is addressed. The effectiveness of AO control algorithms is evaluated in several respects, such as computational complexity, compensation-error rejection and robustness, i.e. reasonable insensitivity to system imperfections. The results of this research are summarized as follows:

1. Robustness study of the Sparse Minimum Variance Pseudo Open Loop Controller (POLC) for multi-conjugate adaptive optics (MCAO). An AO system model that accounts for various system errors has been developed and applied to check the stability and performance of the POLC algorithm, which is one of the most promising approaches for future AO system control. Numerous simulations have shown that, despite the initial assumption that exact system knowledge is necessary for the POLC algorithm to work, it is highly robust against various system errors.

2. Predictive Kalman Filter (KF) and Minimum Variance (MV) control algorithms for MCAO. The limiting performance of the non-dynamic Minimum Variance and dynamic KF-based phase estimation algorithms for MCAO has been evaluated via Monte Carlo simulations. The validity of a simple near-Markov autoregressive phase dynamics model has been tested, and its adequate ability to predict the turbulence phase has been demonstrated for both single- and multi-conjugate AO. It has also been shown that the more complicated KF approach yields no performance improvement over the much simpler MV algorithm in the case of MCAO.

3. Sparse predictive Minimum Variance control algorithm for MCAO. A temporal prediction stage has been added to the non-dynamic MV control algorithm in such a way that no additional computational burden is introduced. Simulations confirm that phase prediction makes it possible to significantly reduce the system sampling rate, and thus the overall computational complexity, while keeping the system stable and effectively compensating for the measurement and control latencies.
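
A minimal sketch of the near-Markov autoregressive phase model referred to in item 2, under the simplifying assumption of a scalar one-step AR(1) predictor (the dissertation's multi-conjugate formulation is richer, and all names are illustrative):

```python
import numpy as np

def fit_ar1(phi_series):
    """Estimate the AR(1) coefficient from a recorded phase time series
    via the lag-1 autocorrelation; phi_series is a (T,) array."""
    phi = phi_series - phi_series.mean()
    return (phi[1:] @ phi[:-1]) / (phi[:-1] @ phi[:-1])

def predict_phase(phi_now, alpha):
    """One-step prediction phi[t+1] ~ alpha * phi[t], used to compensate
    for measurement and control latencies before applying the correction."""
    return alpha * phi_now
```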

Relevance:

60.00%

Publisher:

Abstract:

Spectrum sensing is currently one of the most challenging design problems in cognitive radio. A robust spectrum sensing technique is important for implementing practical dynamic spectrum access in noisy and interference-uncertain environments. In addition, it is desirable to minimize the sensing time while meeting the stringent requirements of cognitive radio applications. To cope with this challenge, cyclic spectrum sensing techniques have been proposed. However, such techniques require very high sampling rates in the wideband regime and are thus costly in hardware implementation and power consumption. In this thesis the concept of compressed sensing is applied to circumvent this problem by utilizing the sparsity of the two-dimensional cyclic spectrum. Compressive sampling is used to reduce the sampling rate, and a recovery method is developed for reconstructing the sparse cyclic spectrum from the compressed samples. The reconstruction solution exploits the sparsity structure in the two-dimensional cyclic-spectrum domain, which differs from conventional compressed sensing techniques for vector-form sparse signals. The entire wideband cyclic spectrum is reconstructed from sub-Nyquist-rate samples for simultaneous detection of multiple signal sources. After the cyclic spectrum recovery, two methods are proposed to make spectral occupancy decisions from the recovered cyclic spectrum: a band-by-band multi-cycle detector that works for all modulation schemes, and a fast and simple thresholding method that works for Binary Phase Shift Keying (BPSK) signals only. In addition, a method for recovering the power spectrum of stationary signals is developed as a special case. Simulation results demonstrate that the proposed spectrum sensing algorithms can significantly reduce the sampling rate without sacrificing performance. The robustness of the algorithms to the noise uncertainty of the wireless channel is also shown.
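
To make the recovery step concrete, a minimal orthogonal matching pursuit (OMP) for the vector-form problem y = A x is sketched below; the thesis reconstructs the two-dimensional cyclic spectrum with a structure-aware method, so this is a simplified stand-in with assumed names:

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from sub-Nyquist measurements y = A @ x.

    A : (M, N) measurement matrix with M << N
    y : (M,) compressed samples
    """
    residual, support = y.copy(), []
    for _ in range(k):
        # Greedily pick the column most correlated with the residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Least-squares fit on the current support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat
```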

Relevance:

60.00%

Publisher:

Abstract:

One of the scarcest resources in wireless communication systems is the limited frequency spectrum. Many wireless communication systems are hindered by this bandwidth limitation and are not able to provide high-speed communication. However, Ultra-wideband (UWB) communication promises high-speed communication because of its very wide bandwidth of 7.5 GHz (3.1 GHz-10.6 GHz). This unprecedented bandwidth promises many advantages for 21st-century wireless communication systems. However, UWB poses many hardware challenges, such as a very high sampling-rate requirement for analog-to-digital conversion, channel estimation, and implementation challenges. In this thesis, a new method is proposed using compressed sensing (CS), a mathematical framework for sub-Nyquist-rate sampling, to reduce the hardware complexity of the system. The method takes advantage of the unique signal structure of the UWB symbol. A new digital implementation method for CS-based UWB is also proposed. Lastly, a comparative study of CS-UWB hardware implementation methods is presented. Simulation results show that applying compressed sensing with the proposed method significantly reduces the hardware complexity compared to the conventional CS-based UWB receiver.
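
For scale, conventional digitization of the full UWB band would require Nyquist-rate conversion at at least twice the highest signal frequency (a standard Nyquist argument, not a figure from the thesis):

$$ f_s \ge 2 f_{\max} = 2 \times 10.6\ \text{GHz} = 21.2\ \text{GS/s} $$

which is precisely the hardware burden that sub-Nyquist compressed sensing is meant to avoid.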

Relevance:

60.00%

Publisher:

Abstract:

High-density spatial and temporal sampling of EEG data enhances the quality of results of electrophysiological experiments. Because EEG sources typically produce widespread electric fields (see Chapter 3) and operate at frequencies well below the sampling rate, increasing the number of electrodes and time samples will not necessarily increase the number of observed processes, but will mainly increase the accuracy of the representation of these processes. This is notably the case when inverse solutions are computed. As a consequence, increasing the sampling in space and time increases the redundancy of the data (in space because electrodes are correlated due to volume conduction, and in time because neighboring time points are correlated), while the degrees of freedom of the data change only little. This has to be taken into account when statistical inferences are to be made from the data. However, in many ERP studies the intrinsic correlation structure of the data has been disregarded. Often, some electrodes or groups of electrodes are selected a priori as the analysis entity and treated as repeated (within-subject) measures that are analyzed using standard univariate statistics. The increased spatial resolution obtained with more electrodes is thus poorly represented by the resulting statistics. In addition, the assumptions made (e.g. in terms of what constitutes a repeated measure) are not supported by what we know about the properties of EEG data. From the point of view of physics (see Chapter 3), the natural "atomic" analysis entity of EEG and ERP data is the scalp electric field.
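
The redundancy argument above can be made tangible with a quick spectral check of the channel covariance: with dense arrays, a handful of eigenvalues carries almost all the variance, so extra electrodes add accuracy rather than new degrees of freedom. The sketch below is illustrative only (the recording array `eeg` and the variance criterion are assumptions):

```python
import numpy as np

def effective_dof(eeg, var_kept=0.99):
    """Rough count of spatial degrees of freedom in an EEG recording.

    eeg : (n_channels, n_samples) array; returns the number of principal
    components needed to explain `var_kept` of the total variance.
    """
    cov = np.cov(eeg)                                 # channel covariance
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending spectrum
    cum = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(cum, var_kept) + 1)
```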

Relevance:

60.00%

Publisher:

Abstract:

In this thesis, we develop an adaptive framework for Monte Carlo rendering, and more specifically for Monte Carlo Path Tracing (MCPT) and its derivatives. MCPT is attractive because it can handle a wide variety of light transport effects, such as depth of field, motion blur, indirect illumination, participating media, and others, in an elegant and unified framework. However, MCPT is a sampling-based approach and is only guaranteed to converge in the limit, as the sampling rate grows to infinity. At finite sampling rates, MCPT renderings are often plagued by noise artifacts that can be visually distracting. The adaptive framework developed in this thesis leverages two core strategies to address noise artifacts in renderings: adaptive sampling and adaptive reconstruction. Adaptive sampling consists in increasing the sampling rate on a per-pixel basis, to ensure that each pixel value is below a predefined error threshold. Adaptive reconstruction leverages the available samples on a per-pixel basis, in an attempt to achieve an optimal trade-off between minimizing the residual noise artifacts and preserving the edges in the image. In our framework, we greedily minimize the relative Mean Squared Error (rMSE) of the rendering by iterating over sampling and reconstruction steps. Given an initial set of samples, the reconstruction step aims at producing the rendering with the lowest rMSE on a per-pixel basis, and the next sampling step then further reduces the rMSE by distributing additional samples according to the magnitude of the residual rMSE of the reconstruction. This iterative approach tightly couples the adaptive sampling and adaptive reconstruction strategies by ensuring that we sample densely only those regions of the image where adaptive reconstruction cannot properly resolve the noise. In a first implementation of our framework, we demonstrate the usefulness of our greedy error minimization using a simple reconstruction scheme leveraging a filterbank of isotropic Gaussian filters. In a second implementation, we integrate a powerful edge-aware filter that can adapt to the anisotropy of the image. Finally, in a third implementation, we leverage auxiliary feature buffers that encode scene information (such as surface normals, position, or texture) to improve the robustness of the reconstruction in the presence of strong noise.
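
The iterative structure described above can be summarized in a short skeleton; the callables `sample`, `reconstruct` and `estimate_rmse` stand in for the path tracer, the filterbank reconstruction and the per-pixel rMSE estimate, and the budgeting scheme is an illustrative assumption rather than the thesis's actual interface:

```python
import numpy as np

def adaptive_render(sample, reconstruct, estimate_rmse, shape, budget, batch):
    """Greedy rMSE minimization by alternating sampling and reconstruction."""
    n_new = np.full(shape, batch)          # uniform initial sampling pass
    image = None
    while budget > 0:
        sample(n_new)                      # trace n_new[p] paths per pixel
        budget -= int(n_new.sum())
        image = reconstruct()              # pick lowest-rMSE filter per pixel
        rmse = estimate_rmse(image)
        # Distribute the next batch proportionally to the residual error, so
        # only regions the reconstruction cannot resolve are sampled densely.
        n_new = np.maximum(1, np.round(batch * rmse / rmse.mean())).astype(int)
    return image
```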