967 results for plane wave method
Abstract:
AIRES, Kelson R. T.; ARAÚJO, Hélder J.; MEDEIROS, Adelardo A. D. Plane Detection Using Affine Homography. In: CONGRESSO BRASILEIRO DE AUTOMÁTICA, 2008, Juiz de Fora, MG: Anais... do CBA 2008.
Abstract:
This work presents an analysis of wave and turbulence measurements collected at a tidal energy site. A new method is introduced to produce more consistent and rigorous estimates of the power spectral densities of the velocity fluctuations. An analytical function is further proposed to fit the observed spectra; it can serve as input to numerical models predicting power production and structural loading on tidal turbines. Another new approach is developed to correct for the effect of Doppler noise on the high-frequency power spectral densities. The analysis of velocity time series combining wave and turbulent contributions demonstrates that the turbulent motions are coherent throughout the water column, rendering wave coherence-based methods inapplicable to our dataset. To avoid this problem, an alternative approach relying on the pressure data collected by the ADCP is introduced; it shows appreciable improvement in wave-turbulence separation.
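The Doppler-noise correction rests on the fact that Doppler noise is white, so it adds a flat floor to the velocity spectrum that can be estimated at high frequencies and subtracted. A minimal sketch of that general idea (not the thesis's actual method; all signal parameters below are illustrative):

```python
import numpy as np

fs, n = 8.0, 4096                              # sampling rate (Hz), samples
rng = np.random.default_rng(2)
turb = np.cumsum(rng.normal(0, 0.05, n))       # red-spectrum "turbulent" series
turb -= turb.mean()
u = turb + rng.normal(0, 0.2, n)               # add white "Doppler" noise

# one-sided periodogram
freqs = np.fft.rfftfreq(n, 1 / fs)
psd = 2 * np.abs(np.fft.rfft(u)) ** 2 / (fs * n)

# estimate the flat noise floor from the top 20% of frequencies, then subtract
floor = np.median(psd[freqs > 0.8 * freqs[-1]])
psd_corrected = np.clip(psd - floor, 0.0, None)
print(floor)
```

The median is used rather than the mean so that any residual turbulent energy in the highest bins does not bias the floor estimate upward.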
Abstract:
The change in the carbonaceous skeleton of nanoporous carbons during their activation has received limited attention, unlike the counterpart process in an inert atmosphere. Here we adopt a multi-method approach to elucidate this change in a poly(furfuryl alcohol)-derived carbon activated by cyclic application of oxygen saturation at 250 °C followed by its removal (with carbon) at 800 °C in argon. The methods used include helium pycnometry, synchrotron-based X-ray diffraction (XRD) and associated radial distribution function (RDF) analysis, transmission electron microscopy (TEM) and, uniquely, electron energy-loss spectroscopy spectrum-imaging (EELS-SI), electron nanodiffraction and fluctuation electron microscopy (FEM). Helium pycnometry indicates the solid skeleton of the carbon densifies during activation from 78% to 93% of the density of graphite. RDF analysis, EELS-SI, and FEM all suggest this densification comes through in-plane growth of sp2 carbon out to the medium range without a commensurate increase in order normal to the plane. This process could be termed ‘graphenization’. Exactly how this process occurs is not clear, but TEM images of the carbon before and after activation suggest it may come through removal of the more reactive carbon, breaking constraining cross-links and creating space that allows the remaining carbon material to migrate in an annealing-like process.
Abstract:
Direct sampling methods are increasingly being used to solve the inverse medium scattering problem of estimating the shape of the scattering object. A simple direct method using one incident wave and multiple measurements was proposed by Ito, Jin and Zou. In this report, we performed analytic and numerical studies of the direct sampling method. The method was found to be effective in general, although the investigation exposed a few exceptions. Analytic solutions in different situations were studied to verify the viability of the method, while numerical tests were used to validate its effectiveness.
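A direct sampling indicator of the Ito-Jin-Zou kind essentially correlates the measured scattered field with the free-space Green's function at each sampling point, and peaks near the scatterer. A minimal 2-D sketch under simplifying assumptions (far-field form of the Green's function up to constants, a single synthetic point scatterer, illustrative wavenumber and geometry):

```python
import numpy as np

k = 10.0                                        # wavenumber (illustrative)
z_true = np.array([0.3, -0.2])                  # hypothetical scatterer location

# 64 receivers on a circle of radius 5
angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
receivers = 5.0 * np.column_stack([np.cos(angles), np.sin(angles)])

def green_far(x, z):
    """Far-field form of the 2-D Helmholtz Green's function (up to constants)."""
    r = np.linalg.norm(x - z, axis=-1)
    return np.exp(1j * k * r) / np.sqrt(r)

u_s = green_far(receivers, z_true)              # synthetic scattered field

# indicator: normalized inner product of the data with G(., z) on a grid
grid = np.linspace(-1.0, 1.0, 81)
indicator = np.zeros((grid.size, grid.size))
for i, zx in enumerate(grid):
    for j, zy in enumerate(grid):
        g = green_far(receivers, np.array([zx, zy]))
        indicator[i, j] = abs(np.vdot(g, u_s)) / (
            np.linalg.norm(g) * np.linalg.norm(u_s))

ix, iy = np.unravel_index(np.argmax(indicator), indicator.shape)
print(grid[ix], grid[iy])                       # peak sits at the scatterer
```

By Cauchy-Schwarz the normalized correlation is maximal where the sampled Green's function is proportional to the data, which here is exactly the scatterer location.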
Abstract:
This work presents the development of an in-plane vertical micro-coaxial probe using a bulk micromachining technique for high-frequency material characterization. The coaxial probe was fabricated in a silicon substrate by standard photolithography and a deep reactive ion etching (DRIE) technique. The through-hole structure in the form of a coaxial probe was etched and metalized with a diluted silver paste. A coplanar waveguide configuration was integrated with the design to characterize the probe. The electrical and RF characteristics of the coaxial probe were determined by simulating the probe design in Ansoft’s High Frequency Structure Simulator (HFSS). The reflection coefficient and transducer gain performance of the probe were measured up to 65 GHz using a vector network analyzer (VNA). The probe demonstrated excellent results over a wide frequency band, indicating its ability to integrate with millimeter-wave packaging systems as well as to characterize unknown materials at high frequencies. The probe was then placed in contact with three materials whose unknown permittivities were determined. To accomplish this, the coaxial probe was placed in contact with the material under test and electromagnetic waves were directed to the surface using the VNA; the reflection coefficient was then determined over a wide frequency band from DC to 65 GHz. Next, the permittivity of each material was deduced from its measured reflection coefficients using a cross-ratio invariance coding technique. The permittivities obtained from the measured reflection coefficient data agreed well with simulated permittivity results. These results validate the use of the micro-coaxial probe to characterize the permittivity of unknown materials at frequencies up to 65 GHz.
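The abstract does not detail the cross-ratio invariance technique, but the basic inversion step — recovering permittivity from a measured reflection coefficient — can be illustrated with the textbook normal-incidence relation for a lossless, non-magnetic half-space. This stand-in formula is our assumption, not the paper's method:

```python
import numpy as np

# Normal-incidence reflection from a lossless, non-magnetic dielectric:
#   Gamma = (1 - sqrt(eps_r)) / (1 + sqrt(eps_r))
# which inverts to eps_r = ((1 - Gamma) / (1 + Gamma))**2.

def gamma_from_eps(eps_r):
    return (1 - np.sqrt(eps_r)) / (1 + np.sqrt(eps_r))

def eps_from_gamma(gamma):
    return ((1 - gamma) / (1 + gamma)) ** 2

eps_r = 4.0                         # e.g. a glass-like material (illustrative)
g = gamma_from_eps(eps_r)
print(g, eps_from_gamma(g))         # round-trips: Gamma = -1/3, eps back to 4
```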
Abstract:
Oscillometric blood pressure (BP) monitors are currently used to diagnose hypertension in both home and clinical settings. These monitors take BP measurements once every 15 minutes over a 24-hour period and provide a reliable and accurate system that is minimally invasive. Although intermittent cuff measurements have proven to be a good indicator of BP, a continuous BP monitor is highly desirable for the diagnosis of hypertension and other cardiac diseases. However, no such devices currently exist. A novel algorithm has been developed based on the Pulse Transit Time (PTT) method, which would allow non-invasive and continuous BP measurement. PTT is defined as the time it takes the BP wave to propagate from the heart to a specified point on the body. After an initial BP measurement, PTT algorithms can track BP over short periods of time, known as calibration intervals. After this time has elapsed, a new BP measurement is required to recalibrate the algorithm. Using the PhysioNet database as a basis, the new algorithm was developed and tested on 15 patients, each tested three times over a period of 30 minutes. The predicted BP of the algorithm was compared to the arterial BP of each patient. It was established that this new algorithm is capable of tracking BP over 12 minutes without recalibration under the BHS standard, a 100% improvement over previously reported intervals. The algorithm was incorporated into a new system based on its requirements and was tested using three volunteers. The results mirrored those previously observed, providing accurate BP measurements when a 12-minute calibration interval was used. This new system provides a significant improvement over the existing method, allowing BP to be monitored continuously and non-invasively on a beat-to-beat basis over 24 hours, adding major clinical and diagnostic value.
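The calibrate-then-track idea behind PTT methods can be sketched with a simple (hypothetical) inverse PTT-BP model; the model form and constants below are illustrative assumptions, not the algorithm developed in the thesis:

```python
import numpy as np

# Hypothetical inverse model: BP ≈ a / PTT + b. One cuff reading fixes the
# scale 'a'; the offset 'b' (mmHg) is an assumed constant for this sketch.

def calibrate(ptt0, bp0, b=40.0):
    """Fit 'a' from a single cuff measurement taken at PTT = ptt0."""
    return (bp0 - b) * ptt0, b

def estimate_bp(ptt, a, b):
    """Track BP from beat-to-beat PTT until the next recalibration."""
    return a / ptt + b

# simulated beat-averaged PTT (seconds) drifting over a 12-minute interval
ptt = 0.25 - 0.0005 * np.arange(12)
a, b = calibrate(ptt[0], 120.0)      # initial cuff BP of 120 mmHg
bp = estimate_bp(ptt, a, b)
print(round(bp[0], 1))               # → 120.0 at the calibration point
```

As PTT shortens (a stiffer, higher-pressure artery conducts the pulse wave faster), the estimated BP rises; a fresh cuff reading at the end of the interval would re-fit `a`.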
Abstract:
The current approach to data analysis for the Laser Interferometry Space Antenna (LISA) depends on the time delay interferometry (TDI) observables, which have to be generated before any weak signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data. Originally, these observables were manually generated, starting with LISA as a simple stationary array and then adjusted to incorporate the antenna's motions. However, none of the observables survived the flexing of the arms, in that they no longer led to cancellation with the same structure. The principal component approach, presented by Romano and Woan, is another way of handling these noises; it simplified the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which occurs in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that performing an eigendecomposition of this matrix produces two distinct sets of eigenvalues that can be distinguished by the absence of laser frequency noise from one set. The transformation of the raw data using the corresponding eigenvectors also produced data that were free from the laser frequency noises. This result led to the idea that the principal components may actually be time delay interferometry observables, since they produced the same outcome, that is, data that are free from laser frequency noise.
The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that the data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10×10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. Results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables; therefore, analysis using principal components should give the same results as analysis using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables. This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, arm lengths and noise variances.
Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which will appear in the covariance matrix; from our toy model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which affects any computation methods that take advantage of this structure. Separating the two sets of data for the analysis was not necessary, because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction in the data containing them after the matrix inversion. In the frequency domain, the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and non-stationarity do not show up, because of the summation in the Fourier transform.
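The core mechanism — a common, very large laser noise producing one set of large covariance eigenvalues, with laser-noise-free data recovered from the remaining eigenvectors — can be sketched in a toy model (the channel count, couplings and noise levels below are illustrative, not LISA's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: 4 phasemeter channels all see the SAME large laser noise
# (with different couplings) plus small independent photodetector noise.
n_samples, n_chan = 20000, 4
laser = rng.normal(0, 100.0, n_samples)           # large common noise
coupling = np.array([1.0, -0.5, 0.8, 1.2])        # arbitrary couplings
data = np.outer(laser, coupling) + rng.normal(0, 1.0, (n_samples, n_chan))

cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues ascending

# one dominant eigenvalue carries the laser noise; the rest sit at the
# photodetector-noise level
print(eigvals)

# projecting onto the small-eigenvalue eigenvectors removes the laser noise
clean = data @ eigvecs[:, :-1]
print(clean.std(axis=0))                          # ≈ photodetector noise only
```

With one common noise source there is a single dominant eigenvalue; in the LISA problem several laser noises give one large eigenvalue per independent laser, but the separation into "large" and "noise-level" sets works the same way.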
Abstract:
Numerical techniques such as the Boundary Element Method, the Finite Element Method and the Finite Difference Time Domain method have been used widely to investigate plane and curved wave-front scattering by rough surfaces. For certain shapes of roughness elements (cylinders, semi-cylinders and ellipsoids) there are semi-analytical alternatives. Here, we present a theory for multiple scattering by cylinders on a hard surface to investigate the effects of different roughness shapes, of vacancies, and of variation in roughness element size on the excess attenuation due to a periodically rough surface.
Abstract:
In the last few years, mobile wireless technology has gone through a revolutionary change. Web-enabled devices have evolved into essential tools for communication, information, and entertainment. The fifth generation (5G) of mobile communication networks is envisioned to be a key enabler of the upcoming wireless revolution. Millimeter wave (mmWave) spectrum and the evolution of Cloud Radio Access Networks (C-RANs) are two of the main technological innovations of 5G wireless systems and beyond. Because of the current spectrum shortage, mmWaves have been proposed for next-generation systems, providing larger bandwidths and higher data rates. Consequently, new radio channel models are being developed. Recently, deterministic ray-based models such as Ray-Tracing (RT) have become more attractive thanks to their frequency agility and reliable predictions. A modern RT software tool has been calibrated and used to analyze the mmWave channel. Knowledge of the electromagnetic properties of materials is therefore essential. Hence, an item-level electromagnetic characterization of common construction materials has been successfully carried out to obtain information about their complex relative permittivity. A complete tuning of the RT tool has been performed against indoor and outdoor measurement campaigns at 27 and 38 GHz, setting the basis for the future development of advanced beamforming techniques that rely on deterministic propagation models (such as RT). C-RAN is a novel mobile network architecture which can address a number of challenges that network operators face in meeting continuously growing customer demands. C-RANs have already been adopted in advanced 4G deployments; however, there are still some issues to deal with, especially considering the bandwidth requirements set by the forthcoming 5G systems. Open RAN specifications have been proposed to overcome the new 5G challenges placed on C-RAN architectures, including synchronization aspects.
This work describes an FPGA implementation of the Synchronization Plane for an O-RAN-compliant radio system.
Abstract:
In recent years, developed countries have turned their attention to clean and renewable energy, such as wind energy and wave energy, which can be converted into electrical power. Companies and academic groups worldwide are investigating several wave energy concepts today. Accordingly, this thesis studies the numerical simulation of the dynamic response of wave energy converters (WECs) subjected to ocean waves. This study considers a two-body point absorber (2BPA) and an oscillating surge wave energy converter (OSWEC). The first aim is to mesh the bodies of the aforementioned WECs to calculate their hydrostatic properties using the axiMesh.m and Mesh.m functions provided by NEMOH. The second aim is to calculate the first-order hydrodynamic coefficients of the WECs using the NEMOH BEM solver and to study the ability of this method to eliminate irregular frequencies. The third aim is to generate a *.h5 file for the 2BPA and OSWEC devices, in which all the hydrodynamic data are included. BEMIO, a pre- and post-processing tool developed by WEC-Sim, is used in this study to create the *.h5 files. The final goal is to run the Wave Energy Converter Simulator (WEC-Sim) to simulate the dynamic responses of the WECs studied in this thesis and estimate their power performance at different sites located in the Mediterranean Sea and the North Sea. The hydrodynamic data obtained by the NEMOH BEM solver for the 2BPA and OSWEC devices are imported into WEC-Sim using BEMIO. Lastly, the power matrices and annual energy production (AEP) of the WECs are estimated for different sites located in the Sea of Sicily, Sea of Sardinia, Adriatic Sea, Tyrrhenian Sea, and the North Sea. To this end, NEMOH and WEC-Sim remain among the most practical tools for numerically estimating the power generation of WECs.
Abstract:
The present paper describes a novel, simple and reliable differential pulse voltammetric method for determining amitriptyline (AMT) in pharmaceutical formulations. It has been reported by many authors that this antidepressant is electrochemically inactive at carbon electrodes. However, the procedure proposed herein consists of electrochemically oxidizing AMT at an unmodified carbon nanotube paste electrode in the presence of 0.1 mol L(-1) sulfuric acid used as the electrolyte. At this concentration, the acid facilitated AMT electroxidation through a one-electron transfer at 1.33 V vs. Ag/AgCl, as observed by the increase in peak current. Under optimized conditions (modulation time 5 ms, scan rate 90 mV s(-1), and pulse amplitude 120 mV), a linear calibration curve was constructed in the range of 0.0-30.0 μmol L(-1), with a correlation coefficient of 0.9991 and a limit of detection of 1.61 μmol L(-1). The procedure was successfully validated for intra- and inter-day precision and accuracy. Moreover, its feasibility was assessed through the analysis of commercial pharmaceutical formulations, and it was compared to the UV-vis spectrophotometric method used as the standard analytical technique recommended by the Brazilian Pharmacopoeia.
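The calibration-curve step can be sketched generically; the slope, noise level and LOD convention (3 × residual SD / slope) below are illustrative assumptions, not the paper's figures:

```python
import numpy as np

# Synthetic calibration: peak current (µA) vs AMT concentration (µmol/L).
conc = np.linspace(0.0, 30.0, 7)
rng = np.random.default_rng(1)
current = 0.8 * conc + 0.5 + rng.normal(0, 0.1, conc.size)

# least-squares line and correlation coefficient
slope, intercept = np.polyfit(conc, current, 1)
pred = slope * conc + intercept
r = np.corrcoef(conc, current)[0, 1]

# a common LOD estimate: 3 × SD of the calibration residuals / slope
lod = 3 * np.std(current - pred, ddof=2) / slope
print(round(r, 4), round(lod, 2))
```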
Abstract:
Obstructive sleep apnea syndrome has a high prevalence among adults. Cephalometric variables can be a valuable tool for evaluating patients with this syndrome. The aim was to correlate cephalometric data with the apnea-hypopnea sleep index. We performed a retrospective, cross-sectional study that analyzed the cephalometric data of patients followed in the Sleep Disorders Outpatient Clinic of the Discipline of Otorhinolaryngology of a university hospital from June 2007 to May 2012. Ninety-six patients were included, 45 men and 51 women, with a mean age of 50.3 years. A total of 11 patients had snoring, 20 had mild apnea, 26 had moderate apnea, and 39 had severe apnea. The distance from the hyoid bone to the mandibular plane was the only variable that showed a statistically significant correlation with the apnea-hypopnea index. Cephalometric variables are useful tools for understanding obstructive sleep apnea syndrome, and the distance from the hyoid bone to the mandibular plane showed a statistically significant correlation with the apnea-hypopnea index.
Abstract:
An unfavorable denture-bearing area could compromise denture retention and stability, limit mastication, and possibly alter masticatory motion. The purpose of this study was to evaluate the masticatory movements of denture wearers with normal and resorbed denture-bearing areas. Completely edentulous participants who received new complete dentures were selected and divided into 2 groups (n=15) according to the condition of their denture-bearing areas as classified by the Kapur method: a normal group (control) (mean age, 65.9 ± 7.8 years) and a resorbed group (mean age, 70.2 ± 7.6 years). Masticatory motion was recorded and analyzed with a kinesiographic device. The patients masticated peanuts and Optocal. The masticatory movements evaluated were the durations of opening, closing, and occlusion; the duration of the masticatory cycle; the maximum velocities and angles of opening and closing; the total masticatory area; and the amplitudes of the masticatory cycle. The data were analyzed by 2-way ANOVA and the Tukey honestly significant difference post hoc test (α=.05). The group with a resorbed denture-bearing area had a smaller total masticatory area in the frontal plane and shorter horizontal masticatory amplitude than the group with a normal denture-bearing area (P<.05). Denture wearers with resorbed denture-bearing areas showed reduced jaw motion during mastication.
Abstract:
The present work compared the local injection of mononuclear cells into the lateral funiculus of the spinal cord with the alternative approach of local delivery with fibrin sealant after ventral root avulsion (VRA) and reimplantation. For this, female adult Lewis rats were divided into the following groups: avulsion only; reimplantation with fibrin sealant; root repair with fibrin sealant associated with mononuclear cells; and repair with fibrin sealant and injected mononuclear cells. Cell therapy resulted in greater survival of spinal motoneurons up to four weeks post-surgery, especially when mononuclear cells were added to the fibrin glue. Injection of mononuclear cells into the lateral funiculus yielded results similar to reimplantation alone. Additionally, mononuclear cells added to the fibrin glue increased neurotrophic factor gene transcript levels in the spinal cord ventral horn. Regarding motor recovery, evaluated by the functional peroneal index as well as paw print pressure, cell-treated rats performed as well as reimplanted-only animals and significantly better than avulsion-only subjects. The results herein demonstrate that mononuclear cell therapy is neuroprotective, increasing levels of brain-derived neurotrophic factor (BDNF) and glial cell line-derived neurotrophic factor (GDNF). Moreover, delivering the mononuclear cells in the fibrin sealant gave the best and longest-lasting results.
Abstract:
It is well known that long-term use of shampoo causes damage to human hair. Although the Lowry method has been widely used to quantify hair damage, it is unsuitable for determining this in the presence of some surfactants, and no other method has been proposed in the literature. In this work, a different method is used to investigate and compare the hair damage induced by four types of surfactants (including three commercial-grade surfactants) and water. Hair samples were immersed in aqueous solutions of the surfactants under conditions that resemble a shower (38 °C, constant shaking). These solutions become colored with time of contact with the hair, and their UV-vis spectra were recorded. For comparison, the amounts of protein extracted from hair by sodium dodecyl sulfate (SDS) and by water were estimated by the Lowry method. Additionally, non-pigmented vs. pigmented hair, and also sepia melanin, were used to understand the color of the washing solutions and their spectra. The results presented herein show that hair degradation is mostly caused by the extraction of proteins, cuticle fragments and melanin granules from the hair fiber. It was found that the intensity of the solution color varies with the charge density of the surfactants. Furthermore, the intensity of the solution color can be correlated with the amount of protein quantified by the Lowry method, as well as with the degree of hair damage. The UV-vis spectrum of hair washing solutions is thus a simple and straightforward method to quantify and compare hair damage induced by different commercial surfactants.
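The claimed correlation between washing-solution absorbance and Lowry-quantified protein can be illustrated with synthetic paired data (the values below are invented for the sketch, not the paper's measurements):

```python
import numpy as np

# Hypothetical paired measurements: UV-vis absorbance of washing solutions
# vs protein extracted from hair as quantified by the Lowry method.
absorbance = np.array([0.05, 0.12, 0.20, 0.31, 0.45, 0.58])
protein_mg = np.array([0.8, 1.9, 3.2, 4.9, 7.1, 9.0])

# Pearson correlation and a least-squares line relating the two measures
r = np.corrcoef(absorbance, protein_mg)[0, 1]
slope, intercept = np.polyfit(absorbance, protein_mg, 1)
print(round(r, 3))
```

A strong linear correlation of this kind is what would justify reading the solution's absorbance as a proxy for protein loss, and hence for hair damage.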