991 results for RATE SIGNAL
Abstract:
We studied the global and local ℳ-Z relation based on the first data available from the CALIFA survey (150 galaxies). This survey provides integral field spectroscopy of the complete optical extent of each galaxy (up to 2−3 effective radii), with a resolution high enough to separate individual H II regions and/or aggregations. About 3000 individual H II regions have been detected. The spectra cover the wavelength range between [OII]3727 and [SII]6731, with a sufficient signal-to-noise ratio to derive the oxygen abundance and star-formation rate associated with each region. In addition, we computed the integrated and spatially resolved stellar masses (and surface densities) based on SDSS photometric data. We explore the relations between the stellar mass, oxygen abundance and star-formation rate using this dataset. We derive a tight relation between the integrated stellar mass and the gas-phase abundance, with a dispersion lower than the one already reported in the literature (σ_Δlog (O/H) = 0.07 dex). Indeed, this dispersion is only slightly higher than the typical error derived for our oxygen abundances. However, we found no secondary relation with the star-formation rate other than the one induced by the primary relation of this quantity with the stellar mass. The analysis for our sample of ~3000 individual H II regions confirms (i) a local mass-metallicity relation and (ii) the lack of a secondary relation with the star-formation rate. The same analysis was performed with similar results for the specific star-formation rate. Our results agree with the scenario in which gas recycling in galaxies, both locally and globally, is much faster than other typical timescales, such as that of gas accretion by inflow and/or metal loss due to outflows. In essence, late-type/disk-dominated galaxies seem to be in a quasi-steady situation, with a behavior similar to the one expected from an instantaneous recycling/closed-box model.
Abstract:
The Federal Reserve left rates unchanged at its closely watched meeting on September 17th, although many had argued that the real economy data, especially on the labour market, would have justified an exit (from the zero interest policy). In this CEPS Commentary, Daniel Gros observes that no similar decision on exit is in sight in the euro area, despite the fact that some have argued that the ECB should consider further easing measures (pushing the deposit rate deeper into negative territory or increasing the size of its asset purchase programme). He asks, in fact, whether further easing measures should even be discussed at this point.
Abstract:
We sought to improve the feasibility of strain rate imaging (SRI) during dobutamine stress echocardiography (DSE) in 56 subjects at low risk of coronary disease. The impact of several SRI changes during acquisition was studied, including: (1) changing from fundamental to harmonic imaging; (2) parallel beam-forming; (3) alteration of spatial resolution; and (4) narrow-sector acquisition. We assessed SR signal quality, a quantitative measure of signal noise and measurements of SRI. Of 1462 segments evaluated, 6% were uninterpretable at rest and 8% at peak stress. Signal quality was optimised by increasing temporal (p = 0.01) and spatial resolution (p<0.0001 vs. baseline imaging) at rest and peak. Increasing spatial resolution also minimised signal noise (p<0.0001). Inter-observer variability of time to peak SR and peak SR was lower with high temporal and spatial resolution. SRI quality can be improved with harmonic imaging and higher temporal resolution, but optimisation of spatial resolution is critical. (C) 2004 World Federation for Ultrasound in Medicine and Biology.
Abstract:
Purpose: Tissue Doppler strain rate imaging (SRI) has been validated and applied in various clinical settings, but the clinical use of this modality is still limited due to time-consuming postprocessing, an unfavorable signal-to-noise ratio and the major angle dependency of image acquisition. 2D Strain (2DS) measures strain parameters through automated tissue tracking (Lagrangian strain) rather than tissue velocity regression. We sought to compare the accuracy of this technique with SRI and evaluate whether it overcomes the above limitations. Methods: We assessed 26 patients (13 female, age 60±5 yrs) at low risk of CAD and with normal DSE at both baseline and peak stress. End-systolic strain (ESS), peak systolic strain rate (SR) and timing parameters were measured by two independent observers using SRI and 2D Strain. Myocardial segments were excluded from the analyses if the insonation angle exceeded 30 degrees or if the segments were not visualized; 417 segments were evaluated. Results: Normal ranges for TVI and CEB approaches were comparable for SR (-0.99 ± 0.39 vs -0.88 ± 0.36, p=NS), ESS (-15.1 ± 6.5 vs -14.9 ± 6.3, p=NS), time to end of systole (174 ± 47 vs 174 ± 53, p=NS) and time to peak SR (TTP; 340 ± 34 vs 375 ± 57). The best correlations between the techniques were for time to end systole (rest r=0.6, p
Abstract:
Since its introduction, pulse oximetry has become a conventional clinical measure. Besides measuring arterial blood oxygen saturation (SpO2), pulse oximeters can be used for other cardiovascular measurements, such as heart rate (HR) estimation derived from their photoplethysmographic (PPG) signals. The temporal coherence of the PPG signals, and thereby of the HR estimates, depends heavily on minimal phase variability. A Masimo SET Rad-9TM, a Novametrix Oxypleth and a custom-designed PPG system were investigated for their relative phase variation. R-R intervals from the electrocardiogram (ECG) were recorded concurrently as a reference. The PPG signals obtained from the three systems were evaluated by comparing their respective beat-to-beat (B-B) intervals with the corresponding R-R estimates during a static test. In their relative B-B comparison to the ECG, the Novametrix system differed by 0.68±0.52% (p
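The beat-to-beat interval extraction described in this abstract can be illustrated with a minimal sketch; the peak detector, threshold and synthetic waveform below are assumptions for the example and do not reproduce any of the systems under test:

```python
import math

def beat_intervals(ppg, fs, thresh=0.5):
    """Naive beat-to-beat (B-B) interval extraction from a PPG waveform:
    a sample counts as a beat peak when it exceeds `thresh` and both of
    its neighbours. Commercial oximeters use far more robust detectors."""
    peaks = [i for i in range(1, len(ppg) - 1)
             if ppg[i] > thresh and ppg[i] > ppg[i - 1] and ppg[i] >= ppg[i + 1]]
    return [(b - a) / fs for a, b in zip(peaks, peaks[1:])]

# Synthetic 72 beat/min (1.2 Hz) pulse waveform sampled at 100 Hz
fs = 100.0
ppg = [math.sin(2 * math.pi * 1.2 * n / fs) for n in range(int(10 * fs))]
intervals = beat_intervals(ppg, fs)
```

The resulting intervals cluster around the true 0.833 s beat period; comparing such intervals against concurrent ECG R-R intervals is the essence of the evaluation in the abstract.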
Abstract:
In this paper, a new method for characterizing newborn heart rate variability (HRV) is proposed. Central to the method is a newly proposed technique for instantaneous frequency (IF) estimation, specifically designed for nonstationary multicomponent signals such as HRV. The method characterizes newborn HRV using features extracted from the time-frequency (TF) domain of the signal. These features comprise the IF, the instantaneous bandwidth (IB) and the instantaneous energy (IE) of the different TF components of the HRV. Applied to the HRV of both normal newborns and newborns suffering seizures, the method clearly reveals the locations of the spectral peaks and their time-varying nature. The total energy of the HRV components, E_T, and the ratio of the energy concentrated in the low-frequency (LF) components to that in the high-frequency (HF) components have been shown to be significant features for identifying the HRV of newborns with seizures.
Abstract:
In this paper, we propose features extracted from the heart rate variability (HRV), based on the first and second conditional moments of a time-frequency distribution (TFD), as an additional guide for seizure detection in newborns. The features of HRV in the low-frequency band (LF: 0-0.07 Hz), mid-frequency band (MF: 0.07-0.15 Hz) and high-frequency band (HF: 0.15-0.6 Hz) have been obtained by means of time-frequency analysis using the modified-B distribution (MBD). Results of ongoing time-frequency research are presented. Based on our preliminary results, the first conditional moment of HRV (also known as the mean/central frequency) in the LF band and the second conditional moment of HRV (also known as the variance/instantaneous bandwidth, IB) in the HF band can be used as good features to discriminate newborn seizure from non-seizure states.
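The band-energy features these two abstracts build on can be sketched as follows. The papers use the modified-B time-frequency distribution; as a simplification this sketch substitutes a plain periodogram, so the time-varying detail is discarded, but the LF/MF/HF band split (edges taken from the abstract) and the LF-to-HF energy ratio are formed the same way:

```python
import cmath, math

def band_energies(x, fs, bands):
    """Energy per frequency band from a plain periodogram. A stand-in for
    the modified-B distribution analysis in the abstract: a direct DFT is
    used here for illustration, losing the time-varying structure."""
    n = len(x)
    spectrum = [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))) ** 2 / n
                for k in range(n // 2)]
    freqs = [k * fs / n for k in range(n // 2)]
    return {name: sum(p for f, p in zip(freqs, spectrum) if lo <= f < hi)
            for name, (lo, hi) in bands.items()}

# Band edges (Hz) from the abstract; the test signal oscillates at
# 0.25 Hz, i.e. inside the HF band, on an evenly resampled HRV series.
bands = {"LF": (0.0, 0.07), "MF": (0.07, 0.15), "HF": (0.15, 0.6)}
fs = 2.0
hrv = [math.sin(2 * math.pi * 0.25 * t / fs) for t in range(200)]
e = band_energies(hrv, fs, bands)
lf_hf_ratio = e["LF"] / (e["HF"] + 1e-12)
```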
Abstract:
The concept of entropy rate is well defined in dynamical systems theory, but it is impossible to apply directly to finite real-world data sets. With this in mind, Pincus developed Approximate Entropy (ApEn), which uses ideas from Eckmann and Ruelle to create a regularity measure based on entropy rate that can be used to determine the influence of chaotic behaviour in a real-world signal. However, this measure was found not to be robust, and so an improved formulation, known as Sample Entropy (SampEn), was created by Richman and Moorman to address these issues. We have developed a new, related regularity measure which is not based on the theory provided by Eckmann and Ruelle and which proves to be a better-behaved measure of complexity than the previous measures while still retaining a low computational cost.
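The abstract does not give its new measure, but the SampEn baseline it improves on is standard and can be sketched directly (the parameter choices m=2, r=0.2 are the conventional defaults, not values from this paper):

```python
import math, random

def sample_entropy(x, m=2, r=0.2):
    """Sample Entropy (Richman & Moorman): -ln(A/B), where B counts pairs
    of length-m templates within tolerance r*std (Chebyshev distance) and
    A the corresponding length-(m+1) pairs; requiring j > i excludes
    self-matches, the main flaw of ApEn that SampEn fixes."""
    n = len(x)
    mean = sum(x) / n
    tol = r * (sum((v - mean) ** 2 for v in x) / n) ** 0.5

    def matches(length):
        return sum(
            1
            for i in range(n - m)
            for j in range(i + 1, n - m)
            if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= tol
        )

    b, a = matches(m), matches(m + 1)
    return math.log(b / a) if a and b else float("inf")

regular = sample_entropy([0.0, 1.0] * 50)   # perfectly predictable signal
random.seed(1)
noisy = sample_entropy([random.random() for _ in range(100)])
```

A perfectly regular alternating signal scores zero (every template match extends), while an irregular signal scores higher, which is the "regularity measure" behaviour the abstract refers to.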
Abstract:
Wireless Mesh Networks (WMNs) have emerged as a key technology for the next generation of wireless networking. Instead of being another type of ad-hoc networking, WMNs diversify the capabilities of ad-hoc networks. There are many kinds of protocols that work over WMNs, such as IEEE 802.11a/b/g, 802.15 and 802.16. To bring about a high throughput under varying conditions, these protocols have to adapt their transmission rate. Although rate adaptation is a significant component, only a few algorithms, such as Auto Rate Fallback (ARF) and Receiver Based Auto Rate (RBAR), have been published. In this paper we show that MAC, packet-loss and physical-layer conditions play an important role in achieving a good channel condition. We also perform rate adaptation along with multiple packet transmission for better throughput. Improvements in performance can be obtained through dynamic monitoring, multiple packet transmission and adaptation to changes in channel quality, adjusting the packet transmission rates according to certain optimization criteria. The proposed method detects channel congestion by measuring the fluctuation of the noise-to-signal ratio via its standard deviation, and detects packet loss before channel performance diminishes. We show that the use of such techniques in a WMN can significantly improve performance. The effectiveness of the proposed method is presented in an experimental wireless network testbed via packet-level simulation. Our simulation results show that we were able to improve throughput regardless of the channel condition.
Abstract:
One of the major problems associated with communication via a loudspeaking telephone (LST) is that, using analogue processing, duplex transmission is limited to low-loss lines and produces a low acoustic output. An architecture for an instrument has been developed and tested which uses digital signal processing to provide duplex transmission between an LST and a telephone handset over most of the B.T. network. Digital adaptive filters are used in the duplex LST to cancel coupling between the loudspeaker and microphone, and across the transmit-to-receive paths of the 2-to-4-wire converter. Normal movement of a person in the acoustic path causes a loss of stability by increasing the level of coupling from the loudspeaker to the microphone, since there is a lag associated with the adaptive filters learning about a non-stationary path. Control of the loop stability and of the level of sidetone heard by the handset user is provided by a microprocessor, which continually monitors the system and regulates the gain. The result is a system which offers the best compromise available based on a set of measured parameters. A theory has been developed which gives the loop-stability requirements based on the error between the parameters of the filter and those of the unknown path. The programme to develop a low-cost adaptive filter for the LST produced a unique architecture which has a number of features not available in any similar system. These include automatic compensation for the rate of adaptation over a 36 dB range of output level, four rates of adaptation (with a maximum of 465 dB/s), and the ability to cascade up to four filters without loss of performance. A theory has also been developed to determine the adaptation which can be achieved using finite-precision arithmetic. This enabled the development of an architecture which distributed the normalisation required to achieve the optimum rate of adaptation over the useful input range.
Comparison of theory and measurement for the adaptive filter shows very close agreement. A single experimental LST was built and tested on connections to handset telephones over the BT network. The LST demonstrated that duplex transmission was feasible using signal processing and produced a more comfortable means of communication between people than methods employing deep voice-switching to regulate the local-loop gain. However, with the current level of processing power it is not a panacea, and attention must be directed toward the physical acoustic isolation between loudspeaker and microphone.
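The echo-cancellation role of the adaptive filters, and the normalisation the abstract mentions for keeping the adaptation rate even across input levels, can be illustrated with a normalized-LMS sketch; the tap count, step size and echo path below are invented for the example and are not the instrument's values:

```python
import random

def nlms_cancel(x, d, taps=8, mu=0.5, eps=1e-8):
    """Normalized LMS echo canceller: x is the loudspeaker (reference)
    signal, d the microphone signal containing an echo of x. Dividing the
    update by the input power normalises the adaptation rate, the role of
    the distributed normalisation described in the abstract."""
    w = [0.0] * taps
    residual = []
    for n in range(len(d)):
        frame = [x[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wk * xk for wk, xk in zip(w, frame))   # echo estimate
        e = d[n] - y                                   # cancellation error
        norm = eps + sum(xk * xk for xk in frame)
        w = [wk + mu * e * xk / norm for wk, xk in zip(w, frame)]
        residual.append(e)
    return residual

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(2000)]
h = [0.5, -0.3, 0.2]                                   # toy echo path
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]
residual = nlms_cancel(x, d)
```

As the filter converges on the echo path, the residual heard at the far end collapses, which is what keeps the duplex loop stable.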
Abstract:
The need for low bit-rate speech coding is the result of growing demand on the available radio bandwidth for mobile communications, both for military purposes and for the public sector. To meet this growing demand it is required that the available bandwidth be utilized in the most economic way to accommodate more services. Two low bit-rate speech coders have been built and tested in this project. The two coders combine predictive coding with delta modulation, a property which enables them to achieve simultaneously the low bit-rate and good speech-quality requirements. To enhance their efficiency, the predictor coefficients and the quantizer step size are updated periodically in each coder. This enables the coders to keep up with changes in the characteristics of the speech signal with time and with changes in the dynamic range of the speech waveform. However, the two coders differ in the method of updating their predictor coefficients. One updates the coefficients once every one hundred sampling periods and extracts the coefficients from input speech samples. This is known in this project as the Forward Adaptive Coder. Since the coefficients are extracted from input speech samples, these must be transmitted to the receiver to reconstruct the transmitted speech sample, thus adding to the transmission bit rate. The other updates its coefficients every sampling period, based on information of output data. This coder is known as the Backward Adaptive Coder. Results of subjective tests showed both coders to be reasonably robust to quantization noise. Both were graded quite good, with the Forward Adaptive performing slightly better, but with a slightly higher transmission bit rate for the same speech quality, than its Backward counterpart. The coders yielded acceptable speech quality at 9.6 kbit/s for the Forward Adaptive and 8 kbit/s for the Backward Adaptive.
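The backward-adaptive idea, where the decoder recovers the adaptation from the bitstream alone so no side information is transmitted, can be sketched with a one-bit adaptive delta modulator. This is a generic CVSD-style toy, not the project's coders; the step-size constants are assumptions:

```python
import math

STEP_MIN, STEP_MAX = 1e-3, 0.5

def adapt(step, b, last):
    """Backward step-size rule: grow on runs of equal bits (slope
    overload), shrink on alternation (granular noise). Both ends run the
    same rule, so the decoder needs only the bitstream."""
    step *= 1.5 if b == last else 0.66
    return min(STEP_MAX, max(STEP_MIN, step))

def adm_encode(x, step0=0.1):
    bits, est, step, last = [], 0.0, step0, 0
    for s in x:
        b = 1 if s >= est else 0        # one bit per sample
        step = adapt(step, b, last)
        est += step if b else -step
        bits.append(b)
        last = b
    return bits

def adm_decode(bits, step0=0.1):
    out, est, step, last = [], 0.0, step0, 0
    for b in bits:
        step = adapt(step, b, last)     # mirrors the encoder exactly
        est += step if b else -step
        out.append(est)
        last = b
    return out

tone = [math.sin(2 * math.pi * n / 100) for n in range(400)]
decoded = adm_decode(adm_encode(tone))
```

A forward-adaptive coder would instead analyse a block of input to choose its parameters and transmit them, trading extra bit rate for slightly better quality, as the subjective tests in the abstract found.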
Abstract:
Nonlinear phenomena occurring in optical fibres have many attractive features and great, but not yet fully explored, potential in signal processing. Here, we review recent progress on the use of fibre nonlinearities for the generation and shaping of optical pulses, and on the applications of advanced pulse waveforms in all-optical signal processing. Among other topics, we will discuss ultrahigh repetition-rate pulse sources, the generation of parabolic-shaped pulses in active and passive fibres, the generation of pulses with triangular temporal profiles, and coherent supercontinuum sources. The signal processing applications will span optical regeneration, linear distortion compensation, optical decision at the receiver in optical communication systems, spectral and temporal signal doubling, and frequency conversion. © 2012 IEEE.
Abstract:
Wireless Mesh Networks (WMNs) have emerged as a key technology for the next generation of wireless networking. Instead of being another type of ad-hoc networking, WMNs diversify the capabilities of ad-hoc networks. Several protocols that work over WMNs include IEEE 802.11a/b/g, 802.15, 802.16 and LTE-Advanced. To bring about a high throughput under varying conditions, these protocols have to adapt their transmission rate. In this paper, we have proposed a scheme to improve channel conditions by performing rate adaptation along with multiple packet transmission using packet loss and physical layer condition. Dynamic monitoring, multiple packet transmission and adaptation to changes in channel quality by adjusting the packet transmission rates according to certain optimization criteria provided greater throughput. The key feature of the proposed method is the combination of the following two factors: 1) detection of intrinsic channel conditions by measuring the fluctuation of noise to signal ratio via the standard deviation, and 2) the detection of packet loss induced through congestion. We have shown that the use of such techniques in a WMN can significantly improve performance in terms of the packet sending rate. The effectiveness of the proposed method was demonstrated in a simulated wireless network testbed via packet-level simulation.
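The two-factor rule this abstract describes, backing off when the standard deviation of recent SNR samples signals a fluctuating channel or when packet loss signals congestion, and probing upward otherwise, can be sketched as a small controller. The rate set, window length and thresholds are illustrative assumptions, not values from the paper:

```python
from collections import deque
from statistics import pstdev

RATES = [6, 12, 24, 48, 54]  # Mb/s, an illustrative 802.11a-like rate set

class RateAdapter:
    """Sketch of the abstract's scheme: channel fluctuation is judged by
    the standard deviation of recent SNR samples, congestion by the
    recent packet-loss fraction; either condition steps the rate down,
    otherwise the controller probes one rate up."""
    def __init__(self, window=16, snr_std_max=3.0, loss_max=0.1):
        self.snr = deque(maxlen=window)
        self.losses = deque(maxlen=window)
        self.idx = len(RATES) - 1
        self.snr_std_max = snr_std_max
        self.loss_max = loss_max

    def update(self, snr_db, lost):
        self.snr.append(snr_db)
        self.losses.append(1 if lost else 0)
        fluctuating = len(self.snr) > 1 and pstdev(self.snr) > self.snr_std_max
        congested = sum(self.losses) / len(self.losses) > self.loss_max
        if fluctuating or congested:
            self.idx = max(0, self.idx - 1)            # back off
        else:
            self.idx = min(len(RATES) - 1, self.idx + 1)  # probe upward
        return RATES[self.idx]
```

On a stable, loss-free channel the controller stays at the top rate; a wildly fluctuating SNR drives it down to the most robust rate.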
Abstract:
We investigate the use of different direct-detection modulation formats in a wavelength-switched optical network. We find the minimum time it takes a tunable sampled-grating distributed Bragg reflector laser to recover after switching from one wavelength channel to another for different modulation formats. The recovery time is investigated utilizing a field-programmable gate array which operates as a time-resolved bit error rate detector. The detector offers 93 ps resolution operating at 10.7 Gb/s and allows all the data received to contribute to the measurement, allowing low bit error rates to be measured at high speed. The recovery times for 10.7 Gb/s non-return-to-zero on-off keyed, 10.7 Gb/s differential phase-shift keyed and 21.4 Gb/s differential quadrature phase-shift keyed formats can be as low as 4 ns, 7 ns and 40 ns, respectively. The time-resolved phase noise associated with laser settling is simultaneously measured for 21.4 Gb/s differential quadrature phase-shift keyed data and shows that the phase noise, coupled with frequency error, is the primary limitation on transmitting immediately after a laser switching event.
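The idea behind a time-resolved BER detector, folding bit-error positions onto the switching cycle so that each delay bin gets its own error rate, can be sketched offline. The FPGA in the abstract does this in real time at line rate; the function and parameters here are illustrative assumptions:

```python
def time_resolved_ber(error_positions, switch_period_bits, bin_bits):
    """Offline sketch of time-resolved BER: bit-error positions are
    folded onto the laser-switching cycle and binned, so each bin gives
    the BER at a given delay after the switch event."""
    n_bins = switch_period_bits // bin_bits
    n_cycles = (max(error_positions) // switch_period_bits + 1
                if error_positions else 1)
    counts = [0] * n_bins
    for p in error_positions:
        counts[(p % switch_period_bits) // bin_bits] += 1
    return [c / (bin_bits * n_cycles) for c in counts]

# Synthetic data: errors clustered just after each switch event, as a
# still-settling laser would produce, giving a high BER in the first bin.
period, bin_bits = 1000, 100
errs = [c * period + o for c in range(50) for o in range(0, 20, 2)]
ber = time_resolved_ber(errs, period, bin_bits)
```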
Abstract:
We experimentally demonstrate the use of full-field electronic dispersion compensation (EDC) to achieve a bit error rate of 5 × 10^-5 at 22.3 dB optical signal-to-noise ratio for a single-channel 10 Gbit/s on-off keyed signal after transmission over 496 km field-installed single-mode fibre with an amplifier spacing of 124 km. This performance is achieved by designing the EDC so as to avoid electronic amplification of the noise content of the signal during full-field reconstruction. We also investigate the tolerance of the system to key signal processing parameters, and numerically demonstrate that single-channel 2160 km single-mode fibre transmission without in-line optical dispersion compensation can be achieved using this technique with 80 km amplifier spacing and optimized system parameters.