993 results for "Sampling rate"
Abstract:
In image synthesis, reproducing the complex effects of light on translucent materials such as wax, marble or skin contributes greatly to the realism of an image. Unfortunately, this additional realism comes at a high computational cost. Models based on diffusion theory aim to reduce this cost by simulating the physical behaviour of subsurface light transport while imposing constraints on the variation of the incident and outgoing light. An important component of these models is their use in hierarchically evaluating the numerical integral of the illumination over the surface of an object. This thesis first reviews the current literature on realistic rendering of translucency, and then investigates in greater depth the application and extensions of diffusion models in image synthesis. We propose and evaluate a new hierarchical numerical integration technique that uses a new frequency analysis of the outgoing and incident light to efficiently adapt the sampling rate during integration. We apply this theory to several state-of-the-art diffusion models, offering a potential improvement to their efficiency and accuracy.
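The core idea, adapting the integration sampling rate to the local frequency content of the illumination, can be illustrated with a small sketch. This is a generic adaptive-subdivision integrator written for the occasion, not the thesis's actual algorithm; the variation estimate, refinement threshold and test integrand are assumptions made purely for illustration.

import numpy as np

def integrate_adaptive(f, a, b, max_bandwidth=4.0, depth=0, max_depth=12):
    """Recursively integrate f over [a, b], refining only where f varies quickly."""
    xs = np.linspace(a, b, 5)
    ys = f(xs)
    coarse = np.trapz(ys, xs)
    # Crude proxy for local frequency content: normalised second difference.
    variation = np.abs(np.diff(ys, 2)).max() / (np.abs(ys).max() + 1e-12)
    if depth >= max_depth or variation * (b - a) < 1.0 / max_bandwidth:
        return coarse                      # smooth region: coarse sampling suffices
    mid = 0.5 * (a + b)                    # rapidly varying region: split and recurse
    return (integrate_adaptive(f, a, mid, max_bandwidth, depth + 1, max_depth) +
            integrate_adaptive(f, mid, b, max_bandwidth, depth + 1, max_depth))

# Example: a diffusion-profile-like integrand with a sharp peak near x = 0.3.
profile = lambda x: np.exp(-40.0 * np.abs(x - 0.3)) / (np.abs(x - 0.3) + 1e-2)
print(integrate_adaptive(profile, 0.0, 1.0))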
Abstract:
In a sigma-delta analog-to-digital (A/D) converter, the most computationally intensive block is the decimation filter, and its hardware implementation may require millions of transistors. Since these converters are now targeted at portable applications, a hardware-efficient design is an implicit requirement. To this effect, this paper presents a computationally efficient polyphase implementation of non-recursive cascaded integrator comb (CIC) decimators for Sigma-Delta Converters (SDCs). The SDCs operate at high oversampling frequencies and hence require large sampling rate conversions. The filtering and rate reduction are performed in several stages to reduce hardware complexity and power dissipation. CIC filters are widely adopted as the first stage of decimation due to their multiplier-free structure. In this research, the performance of the polyphase structure is compared with that of CICs using recursive and non-recursive algorithms in terms of power, speed and area. The polyphase implementation offers high-speed operation and low power consumption. The polyphase implementation of a 4th-order CIC filter with a decimation factor of 64 and an input word length of 4 bits offers about 70% and 37% power savings compared to the corresponding recursive and non-recursive implementations, respectively. The same polyphase CIC filter can operate about 7 times faster than the recursive and about 3.7 times faster than the non-recursive CIC filters.
As most sigma-delta ADC applications require decimation filters with linear phase characteristics, symmetric Finite Impulse Response (FIR) filters are widely used for implementation. But the number of FIR filter coefficients will be quite large for implementing a narrow-band decimation filter. Implementing the decimation filter in several stages reduces the total number of filter coefficients, and hence reduces the hardware complexity and power consumption [2]. The first stage of the decimation filter can be implemented very efficiently using a cascade of integrators and comb filters, which do not require multiplication or coefficient storage. The remaining filtering is performed either in a single stage or in two stages with more complex FIR or infinite impulse response (IIR) filters according to the requirements. The amount of passband aliasing or imaging error can be brought within prescribed bounds by increasing the number of stages in the CIC filter. The width of the passband and the frequency characteristics outside the passband are severely limited, so CIC filters are used to make the transition between high and low sampling rates. Conventional filters operating at the low sampling rate are used to attain the required transition bandwidth and stopband attenuation. Several papers are available in the literature that deal with different implementations of decimation filter architectures for sigma-delta ADCs. Hogenauer has described the design procedures for decimation and …
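For orientation, the sketch below models a classic recursive (Hogenauer-style) CIC decimator in Python: N integrators running at the input rate, decimation by R, then N comb stages with differential delay M at the output rate. It is a behavioural illustration only; the paper's contribution, a polyphase non-recursive realisation, restructures this computation for hardware and is not reproduced here.

import numpy as np

def cic_decimate(x, R=64, N=4, M=1):
    """N-stage CIC decimator: N integrators at the input rate, decimate by R,
    then N comb (differentiator) stages with differential delay M at the output rate."""
    y = np.asarray(x, dtype=np.int64)        # integer arithmetic, as in hardware
    for _ in range(N):                       # integrator cascade (high rate)
        y = np.cumsum(y)
    y = y[::R]                               # sample-rate reduction by R
    for _ in range(N):                       # comb cascade (low rate)
        y = y - np.concatenate((np.zeros(M, dtype=np.int64), y[:-M]))
    return y

# Example: decimate a random 1-bit input stream (a stand-in for a sigma-delta
# bitstream) by 64 with a 4th-order CIC; the first few outputs are start-up transient.
bits = np.random.randint(0, 2, 64 * 256)
print(cic_decimate(bits, R=64, N=4)[:8])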
Abstract:
This paper analyzes the convergence behavior of the least mean square (LMS) filter when used in an adaptive code division multiple access (CDMA) detector consisting of a tapped delay line with adjustable tap weights. The sampling rate may be equal to or higher than the chip rate, corresponding to chip-spaced (CS) and fractionally spaced (FS) detection, respectively. It is shown that CS and FS detectors with the same time-span exhibit identical convergence behavior if the baseband received signal is strictly bandlimited to half the chip rate. Even in the practical case when this condition is not met, deviations from this observation are imperceptible unless the initial tap-weight vector gives an extremely large mean squared error (MSE). This phenomenon is carefully explained with reference to the eigenvalues of the correlation matrix when the input signal is not perfectly bandlimited. The inadequacy of the eigenvalue spread of the tap-input correlation matrix as an indicator of the transient behavior, and the influence of the initial tap-weight vector on convergence speed, are highlighted. Specifically, initialization within the signal subspace or at the origin leads to much faster convergence compared with initialization in the noise subspace.
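As a point of reference, the LMS recursion analysed here is the standard stochastic-gradient update of the tap weights. The sketch below is a minimal baseband illustration with a toy channel; the training signal, step size and channel taps are arbitrary placeholders, and chip-spaced versus fractionally spaced operation would differ only in how many samples per chip enter the delay line.

import numpy as np

def lms(received, desired, taps, mu=1e-3, w0=None):
    """Run the LMS recursion w <- w + mu * e * x and return (weights, squared-error history)."""
    w = np.zeros(taps) if w0 is None else w0.astype(float).copy()
    sq_err = []
    for n in range(taps, len(received)):
        x = received[n - taps:n][::-1]       # current delay-line contents
        e = desired[n] - w @ x               # error between training symbol and filter output
        w += mu * e * x                      # stochastic-gradient update
        sq_err.append(e ** 2)
    return w, np.array(sq_err)

# Toy usage: recover a known training sequence from a noisy, filtered version of it.
rng = np.random.default_rng(0)
d = rng.choice([-1.0, 1.0], size=5000)
r = np.convolve(d, [1.0, 0.4, 0.1], mode="same") + 0.1 * rng.standard_normal(5000)
w, sq_err = lms(r, d, taps=8)
print(sq_err[:50].mean(), sq_err[-50:].mean())   # MSE should drop as the taps converge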
Abstract:
Urban boundary layers (UBLs) can be highly complex due to the heterogeneous roughness and heating of the surface, particularly at night. Due to a general lack of observations, it is not clear whether canonical models of boundary layer mixing are appropriate in modelling air quality in urban areas. This paper reports Doppler lidar observations of turbulence profiles in the centre of London, UK, as part of the second REPARTEE campaign in autumn 2007. Lidar-measured standard deviation of vertical velocity averaged over 30 min intervals generally compared well with in situ sonic anemometer measurements at 190 m on the BT telecommunications Tower. During calm, nocturnal periods, the lidar underestimated turbulent mixing due mainly to limited sampling rate. Mixing height derived from the turbulence, and aerosol layer height from the backscatter profiles, showed similar diurnal cycles ranging from c. 300 to 800 m, increasing to c. 200 to 850 m under clear skies. The aerosol layer height was sometimes significantly different to the mixing height, particularly at night under clear skies. For convective and neutral cases, the scaled turbulence profiles resembled canonical results; this was less clear for the stable case. Lidar observations clearly showed enhanced mixing beneath stratocumulus clouds reaching down on occasion to approximately half daytime boundary layer depth. On one occasion the nocturnal turbulent structure was consistent with a nocturnal jet, suggesting a stable layer. Given the general agreement between observations and canonical turbulence profiles, mixing timescales were calculated for passive scalars released at street level to reach the BT Tower using existing models of turbulent mixing. It was estimated to take c. 10 min to diffuse up to 190 m, rising to between 20 and 50 min at night, depending on stability. Determination of mixing timescales is important when comparing to physico-chemical processes acting on pollutant species measured simultaneously at both the ground and at the BT Tower during the campaign. From the 3 week autumnal data-set there is evidence for occasional stable layers in central London, effectively decoupling surface emissions from air aloft.
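A minimal sketch of the kind of processing described, 30-min standard deviations of vertical velocity per range gate and a turbulence-derived mixing height, is given below. The sigma_w threshold, array layout and synthetic data are assumptions for illustration only and do not reproduce the paper's retrieval.

import numpy as np

def mixing_height(w, heights, samples_per_30min, sigma_threshold=0.1):
    """w: 2-D array (time, height) of vertical velocity [m s-1]; returns one mixing
    height estimate per 30-min block, taken as the lowest gate where sigma_w is weak."""
    n_blocks = w.shape[0] // samples_per_30min
    zi = []
    for b in range(n_blocks):
        block = w[b * samples_per_30min:(b + 1) * samples_per_30min, :]
        sigma_w = block.std(axis=0)                     # per-gate 30-min sigma_w
        below = np.where(sigma_w < sigma_threshold)[0]  # gates with weak turbulence
        zi.append(heights[below[0]] if below.size else heights[-1])
    return np.array(zi)

# Toy data: vertical-velocity noise whose amplitude decays with height, one hour at 1 Hz.
heights = np.arange(50, 1500, 30.0)
w = np.random.randn(3600, heights.size) * np.exp(-heights / 300.0)
print(mixing_height(w, heights, samples_per_30min=1800))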
Abstract:
There has been considerable interest in the climate impact of trends in stratospheric water vapor (SWV). However, the representation of the radiative properties of water vapor under stratospheric conditions remains poorly constrained across different radiation codes. This study examines the sensitivity of a detailed line-by-line (LBL) code, a Malkmus narrow-band model and two broadband GCM radiation codes to a uniform perturbation in SWV in the longwave spectral region. The choice of sampling rate in wave number space (Δν) in the LBL code is shown to be important for calculations of the instantaneous change in heating rate (ΔQ) and the instantaneous longwave radiative forcing (ΔFtrop). ΔQ varies by up to 50% for values of Δν spanning 5 orders of magnitude, and ΔFtrop varies by up to 10%. In the three less detailed codes, ΔQ differs by up to 45% at 100 hPa and 50% at 1 hPa compared to a LBL calculation. This causes differences of up to 70% in the equilibrium fixed dynamical heating temperature change due to the SWV perturbation. The stratosphere-adjusted radiative forcing differs by up to 96% across the less detailed codes. The results highlight an important source of uncertainty in quantifying and modeling the links between SWV trends and climate.
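The sensitivity to the wavenumber sampling interval can be illustrated with a toy calculation: integrating a synthetic spectrum of narrow lines on grids of progressively finer spacing Δν. The line positions, widths and band limits below are arbitrary assumptions, not those of the LBL code discussed in the study.

import numpy as np

def lorentzian(nu, nu0, gamma):
    return (gamma / np.pi) / ((nu - nu0) ** 2 + gamma ** 2)

def integrated_absorption(delta_nu, lines, nu_min=0.0, nu_max=100.0, gamma=0.05):
    nu = np.arange(nu_min, nu_max, delta_nu)              # wavenumber grid [cm-1]
    spectrum = sum(lorentzian(nu, nu0, gamma) for nu0 in lines)
    return np.trapz(spectrum, nu)                          # crude band integral

lines = np.random.default_rng(1).uniform(0.0, 100.0, 50)   # 50 narrow synthetic lines
for dnu in (1.0, 0.1, 0.01, 0.001):                        # grids spanning three decades
    # A grid much coarser than the line width badly misestimates the integral.
    print(f"delta_nu = {dnu:6.3f} cm-1 -> integral = {integrated_absorption(dnu, lines):.2f}")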
Abstract:
We employ a numerical model of cusp ion precipitation and proton aurora emission to fit variations of the peak Doppler-shifted Lyman-α intensity observed on 26 November 2000 by the SI-12 channel of the FUV instrument on the IMAGE satellite. The major features of this event appeared in response to two brief swings of the interplanetary magnetic field (IMF) toward a southward orientation. We reproduce the observed spatial distributions of this emission on newly opened field lines by combining the proton emission model with a model of the response of ionospheric convection. The simulations are based on the observed variations of the solar wind proton temperature and concentration and the interplanetary magnetic field clock angle. They also allow for the efficiency, sampling rate, integration time and spatial resolution of the FUV instrument. The good match (correlation coefficient 0.91, significant at the 98% level) between observed and modeled variations confirms the time constant (about 4 min) for the rise and decay of the proton emissions predicted by the model for southward IMF conditions. The implications for the detection of pulsed magnetopause reconnection using proton aurora are discussed for a range of interplanetary conditions.
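As a rough illustration of a roughly 4 min rise-and-decay time constant (and only that; this is not the authors' emission model), the sketch below drives a simple first-order response with two step-like southward-IMF intervals. The drive timing and amplitude are invented for the example.

import numpy as np

def first_order_response(drive, dt, tau=240.0):
    """Integrate dI/dt = (drive - I) / tau with a forward-Euler step."""
    intensity = np.zeros_like(drive)
    for k in range(1, drive.size):
        intensity[k] = intensity[k - 1] + dt * (drive[k - 1] - intensity[k - 1]) / tau
    return intensity

dt = 10.0                                     # 10 s time step
t = np.arange(0.0, 3600.0, dt)                # one hour
drive = ((t > 600) & (t < 1200)) | ((t > 2100) & (t < 2700))   # two southward swings
response = first_order_response(drive.astype(float), dt, tau=240.0)
print(response.max())                         # approaches the drive level after a few time constants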
Abstract:
A sensitive and robust analytical method for the spectrophotometric determination of ethyl xanthate, CH₃CH₂OCS₂⁻, at trace concentrations in pulp solutions from the froth flotation process is proposed. The analytical method is based on the decomposition of ethyl xanthate, EtX⁻, with 2.0 mol L⁻¹ HCl, generating ethanol and carbon disulfide, CS₂. A gas diffusion cell assures that only the volatile compounds diffuse through a PTFE membrane towards an acceptor stream of deionized water, thus avoiding the interferences of non-volatile compounds and suspended particles. The CS₂ is selectively detected by UV absorbance at 206 nm (ε = 65,000 L mol⁻¹ cm⁻¹). The measured absorbance is directly proportional to the EtX⁻ concentration present in the sample solutions. Beer's law is obeyed in the 1 × 10⁻⁶ to 2 × 10⁻⁴ mol L⁻¹ concentration range of ethyl xanthate in the pulp, with an excellent correlation coefficient (r = 0.999) and a detection limit of 3.1 × 10⁻⁷ mol L⁻¹, corresponding to 38 μg L⁻¹. At flow rates of 200 μL min⁻¹ of the donor stream and 100 μL min⁻¹ of the acceptor channel, a sampling rate of 15 injections per hour could be achieved with RSD < 2.3% (n = 10, 300 μL injections of 1 × 10⁻⁵ mol L⁻¹ EtX⁻). Two practical applications demonstrate the versatility of the FIA method: (i) evaluation of the free EtX⁻ concentration during a laboratory study of the EtX⁻ adsorption capacity on pulverized sulfide ore (pyrite) and (ii) monitoring of EtX⁻ at different stages (from starting load to washing effluents) of a flotation pilot plant processing a Cu-Zn sulfide ore. (C) 2010 Elsevier B.V. All rights reserved.
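The calibration arithmetic behind these figures follows Beer's law. The sketch below uses the molar absorptivity quoted in the abstract; the optical path length and blank-noise standard deviation are assumptions chosen only to show how a detection limit of this order arises.

EPSILON = 65_000.0        # L mol-1 cm-1, molar absorptivity of CS2 at 206 nm (from the abstract)
PATH_CM = 1.0             # assumed 1 cm optical path

def concentration_from_absorbance(absorbance):
    """Invert Beer's law A = eps * l * c, returning c in mol L-1."""
    return absorbance / (EPSILON * PATH_CM)

def detection_limit(blank_sd):
    """IUPAC-style estimate: LOD = 3 * s_blank / slope, with slope = eps * l."""
    return 3.0 * blank_sd / (EPSILON * PATH_CM)

print(concentration_from_absorbance(0.65))       # ~1e-5 mol L-1 for an absorbance of 0.65
print(detection_limit(blank_sd=0.0067))          # ~3e-7 mol L-1 for an assumed blank noise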
Abstract:
A novel approach was developed for nitrate analysis in a FIA configuration with amperometric detection (E = -0.48 V). Sensitive and reproducible current measurements were achieved by using a copper electrode activated with a controlled-potential protocol. The response of the FIA amperometric method was linear over the range from 0.1 to 2.5 mmol L⁻¹ nitrate with a detection limit of 4.2 μmol L⁻¹ (S/N = 3). The repeatability of measurements was determined as 4.7% (n = 9) at the best conditions (flow rate: 3.0 mL min⁻¹, sample volume: 150 μL and nitrate concentration: 0.5 mmol L⁻¹) with a sampling rate of 60 samples h⁻¹. The method was employed for the determination of nitrate in mineral water and soft drink samples and the results were in agreement with those obtained by using a recommended procedure. Studies towards a selective monitoring of nitrite were also performed in samples containing nitrate by carrying out measurements at a less negative potential (-0.20 V). (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Hydrogen peroxide was determined in oral antiseptic and bleach samples using a flow-injection system with amperometric detection. A glassy carbon electrode modified by electrochemical deposition of ruthenium oxide hexacyanoferrate was used as working electrode and a homemade Ag/AgCl (saturated KCl) electrode and a platinum wire were used as reference and counter electrodes, respectively. The electrocatalytic reduction process allowed the determination of hydrogen peroxide at 0.0 V. A linear relationship between the cathodic peak current and concentration of hydrogen peroxide was obtained in the range 10-5000 μmol L⁻¹ with detection and quantification limits of 1.7 (S/N = 3) and 5.9 (S/N = 10) μmol L⁻¹, respectively. The repeatability of the method was evaluated using a 500 μmol L⁻¹ hydrogen peroxide solution, the value obtained being 1.6% (n = 14). A sampling rate of 112 samples h⁻¹ was achieved at optimised conditions. The method was employed for the quantification of hydrogen peroxide in two commercial samples and the results were in agreement with those obtained by using a recommended procedure.
Abstract:
This paper describes the development of a sequential injection method to automate the fluorimetric determination of glyphosate, based on a first step of oxidation to glycine by hypochlorite at 48 °C, followed by reaction with the fluorogenic reagent o-phthaldialdehyde in the presence of 2-mercaptoethanol in borate buffer (pH > 9) to produce a fluorescent 1-(2′-hydroxyethylthio)-2-N-alkylisoindole. The proposed method has a linear response for glyphosate concentrations between 0.25 and 25.0 μmol L⁻¹, with limits of detection and quantification of 0.08 and 0.25 μmol L⁻¹, respectively. The sampling rate of the method is 18 samples per hour, consuming only a fraction of the reagents consumed by the chromatographic method based on the same chemistry. The method was applied to study adsorption/desorption properties in a soil and in a sediment sample. Adsorption and desorption isotherms were properly fitted by Freundlich and Langmuir equations, leading to adsorption capacities of 1384 ± 26 and 295 ± 30 mg kg⁻¹ for the soil and sediment samples, respectively. These values are consistent with the literature, with the larger adsorption capacity of the soil being explained by its larger content of clay minerals, while the sediment was predominantly sandy. (C) 2011 Elsevier B.V. All rights reserved.
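The isotherm fitting mentioned above can be sketched as follows with synthetic data; the equilibrium concentrations, noise level and starting parameters are placeholders, not the paper's measurements.

import numpy as np
from scipy.optimize import curve_fit

def freundlich(c, kf, n):
    return kf * c ** (1.0 / n)               # q = Kf * C^(1/n)

def langmuir(c, qmax, kl):
    return qmax * kl * c / (1.0 + kl * c)    # q = qmax * Kl * C / (1 + Kl * C)

# Synthetic equilibrium data: concentration C (mg L-1) and adsorbed amount q (mg kg-1).
c = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
q = langmuir(c, qmax=1380.0, kl=0.15) + np.random.default_rng(2).normal(0, 20, c.size)

popt_f, _ = curve_fit(freundlich, c, q, p0=[100.0, 2.0])
popt_l, _ = curve_fit(langmuir, c, q, p0=[1000.0, 0.1])
print("Freundlich Kf, n   :", popt_f)
print("Langmuir  qmax, Kl :", popt_l)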
Abstract:
A flow system designed with solenoid valves is proposed for the determination of weak acid dissociable cyanide, based on the reaction with o-phthalaldehyde (OPA) and glycine yielding a highly fluorescent isoindole derivative. The proposed procedure minimizes the main drawbacks of the reference batch procedure, based on reaction with barbituric acid and pyridine followed by spectrophotometric detection, i.e., use of toxic reagents, high reagent consumption and waste generation, low sampling rate, and poor sensitivity. Retention of the sample zone was exploited to increase the conversion rate of the analyte with minimized sample dispersion. Linear response (r = 0.999) was observed for cyanide concentrations in the range 1-200 μg L⁻¹, with a detection limit (99.7% confidence level) of 0.5 μg L⁻¹ (19 nmol L⁻¹). The sampling rate and coefficient of variation (n = 10) were estimated as 22 measurements per hour and 1.4%, respectively. The results of the determination of weak acid dissociable cyanide in natural water samples were in agreement with those achieved by the batch reference procedure at the 95% confidence level. In addition to the improvement in the analytical features in comparison with those of the flow system with continuous reagent addition (sensitivity and sampling rate 90% and 83% higher, respectively), the consumption of OPA was 230-fold lower.
Abstract:
A flow system exploiting the multicommutation approach is proposed for the spectrophotometric determination of tannin in beverages. The procedure is based on the reduction of Cu(II) in the presence of 4,4′-dicarboxy-2,2′-biquinoline, yielding a complex with maximum absorption at 558 nm. The calibration graph was linear (r = 0.999) for tannic acid concentrations up to 5.00 μmol L⁻¹. The detection limit and coefficient of variation were estimated as 10 nmol L⁻¹ (99.7% confidence level) and 1% (1.78 μmol L⁻¹ tannic acid, n = 10), respectively. The sampling rate was 50 determinations per hour. The proposed procedure is more sensitive and selective than the official Folin-Denis method, and also drastically minimizes waste generation. Recoveries between 91.8% and 115% were obtained for total tannin determination in tea and wine samples. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
Objective: To develop a method for objective quantification of PD motor symptoms related to Off episodes and peak-dose dyskinesias, using spiral data gathered with a touch screen telemetry device. The aim was to objectively characterize predominant motor phenotypes (bradykinesia and dyskinesia), to help in automating the process of visual interpretation of movement anomalies in spirals as rated by movement disorder specialists. Background: A retrospective analysis was conducted on recordings from 65 patients with advanced idiopathic PD from nine different clinics in Sweden, recruited from January 2006 until August 2010. In addition to the patient group, 10 healthy elderly subjects were recruited. Upper limb movement data were collected using a touch screen telemetry device in the home environments of the subjects. Measurements with the device were performed four times per day during week-long test periods. On each test occasion, the subjects were asked to trace pre-drawn Archimedean spirals, using the dominant hand. The pre-drawn spiral was shown on the screen of the device. The spiral test was repeated three times per test occasion and the subjects were instructed to complete it within 10 seconds. The device had a sampling rate of 10 Hz and measured both position and time-stamps (in milliseconds) of the pen tip. Methods: Four independent raters (FB, DH, AJ and DN) used a web interface that animated the spiral drawings and allowed them to observe different kinematic features during the drawing process and to rate task performance. Initially, a number of kinematic features were assessed, including ‘impairment’, ‘speed’, ‘irregularity’ and ‘hesitation’, followed by marking the predominant motor phenotype on a 3-category scale: tremor, bradykinesia and/or choreatic dyskinesia. There were only 2 test occasions for which all four raters either classified them as tremor or could not identify the motor phenotype. Therefore, the two main motor phenotype categories were bradykinesia and dyskinesia. ‘Impairment’ was rated on a scale from 0 (no impairment) to 10 (extremely severe), whereas ‘speed’, ‘irregularity’ and ‘hesitation’ were rated on a scale from 0 (normal) to 4 (extremely severe). The proposed data-driven method consisted of the following steps. Initially, 28 spatiotemporal features were extracted from the time series signals before being presented to a Multilayer Perceptron (MLP) classifier. The features were based on different kinematic quantities of the spirals, including radius, angle, speed and velocity, with the aim of measuring the severity of involuntary symptoms and discriminating between PD-specific (bradykinesia) and/or treatment-induced symptoms (dyskinesia). A Principal Component Analysis was applied to the features to reduce their dimensions, and 4 relevant principal components (PCs) were retained and used as inputs to the MLP classifier. Finally, the MLP classifier mapped these components to the corresponding visually assessed motor phenotype scores, automating the process of scoring bradykinesia and dyskinesia in PD patients while they draw spirals using the touch screen device. For motor phenotype (bradykinesia vs. dyskinesia) classification, the stratified 10-fold cross-validation technique was employed.
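A compact sketch of the described pipeline (standardisation, PCA retaining 4 components, MLP classification under stratified 10-fold cross-validation) is given below. The feature matrix, labels and MLP hyper-parameters are placeholders, not those of the study.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 28))            # placeholder: 300 test occasions x 28 spiral features
y = rng.integers(0, 2, 300)                   # placeholder labels: 0 = bradykinesia, 1 = dyskinesia

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=4),                      # retain 4 principal components, as in the study
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)
scores = cross_val_score(model, X, y, cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0))
print(f"mean CV accuracy: {scores.mean():.2f}")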
Results: There were good agreements between the four raters when rating the individual kinematic features with intra-class correlation coefficient (ICC) of 0.88 for ‘impairment’, 0.74 for ‘speed’, 0.70 for ‘irregularity’, and moderate agreements when rating ‘hesitation’ with an ICC of 0.49. When assessing the two main motor phenotype categories (bradykinesia or dyskinesia) in animated spirals the agreements between the four raters ranged from fair to moderate. There were good correlations between mean ratings of the four raters on individual kinematic features and computed scores. The MLP classifier classified the motor phenotype that is bradykinesia or dyskinesia with an accuracy of 85% in relation to visual classifications of the four movement disorder specialists. The test-retest reliability of the four PCs across the three spiral test trials was good with Cronbach’s Alpha coefficients of 0.80, 0.82, 0.54 and 0.49, respectively. These results indicate that the computed scores are stable and consistent over time. Significant differences were found between the two groups (patients and healthy elderly subjects) in all the PCs, except for the PC3. Conclusions: The proposed method automatically assessed the severity of unwanted symptoms and could reasonably well discriminate between PD-specific and/or treatment-induced motor symptoms, in relation to visual assessments of movement disorder specialists. The objective assessments could provide a time-effect summary score that could be useful for improving decision-making during symptom evaluation of individualized treatment when the goal is to maximize functional On time for patients while minimizing their Off episodes and troublesome dyskinesias.
Abstract:
This work aims to investigate the efficiency of digital signal processing tools applied to acoustic emission signals in order to detect thermal damage in the grinding process. To accomplish this goal, an experimental study was carried out over 15 runs in a surface grinding machine operating with an aluminum oxide grinding wheel and ABNT 1045 steel. The acoustic emission signals were acquired from a fixed sensor placed on the workpiece holder. A high-sampling-rate data acquisition system at 2.5 MHz was used to collect the raw acoustic emission signal instead of the root mean square value usually employed. Several statistics proved effective for detecting burn, such as the root mean square (RMS), the correlation of the AE, the constant false alarm rate (CFAR), the ratio of power (ROP) and the mean-value deviance (MVD). However, the CFAR, ROP, kurtosis and correlation of the AE proved more sensitive than the RMS.
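A minimal sketch of the kind of per-window statistics mentioned (RMS, kurtosis and a ratio of power between two frequency bands) is given below; the band limits, window length and test signal are assumptions for illustration, and the CFAR and MVD statistics are not reproduced.

import numpy as np
from scipy.stats import kurtosis

FS = 2.5e6                                    # sampling rate of the AE acquisition [Hz]

def window_stats(x, low_band=(50e3, 300e3), high_band=(300e3, 1000e3)):
    rms = np.sqrt(np.mean(x ** 2))            # root mean square of the raw AE window
    kurt = kurtosis(x)                        # excess kurtosis, an impulsiveness indicator
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / FS)
    p_low = spec[(freqs >= low_band[0]) & (freqs < low_band[1])].sum()
    p_high = spec[(freqs >= high_band[0]) & (freqs < high_band[1])].sum()
    rop = p_high / (p_low + 1e-12)            # ratio of power between the two bands
    return rms, kurt, rop

x = np.random.randn(int(FS * 0.01))           # placeholder: 10 ms of AE-like noise
print(window_stats(x))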
Abstract:
In this paper, a new algorithmic Analog-to-Digital Converter (ADC) is presented. This new topology uses the current-mode technique, which allows a large dynamic range and can be implemented in a digital CMOS process. The proposed ADC is very small and can handle high sampling rates. Simulation results using a 1.2 μm CMOS process show that an 8-bit ADC can support a sampling rate of 50 MHz.
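For context, the sketch below is an idealised behavioural model of a generic algorithmic (cyclic) ADC that resolves one bit per cycle; it illustrates the conversion principle only and is not the current-mode circuit proposed in the paper.

def algorithmic_adc(sample, vref=1.0, bits=8):
    """Resolve one bit per cycle: compare the residue with Vref/2, subtract Vref/2 if
    above, then double the residue before the next comparison."""
    residue, code = sample, 0
    for _ in range(bits):
        code <<= 1
        if residue >= vref / 2.0:
            code |= 1
            residue -= vref / 2.0
        residue *= 2.0                         # gain of 2 ahead of the next cycle
    return code

for v in (0.0, 0.25, 0.5, 0.75, 0.999):
    print(v, algorithmic_adc(v))               # 8-bit codes spanning 0..255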