904 results for Glomerular filtration rate estimation


Relevance:

30.00%

Publisher:

Abstract:

This study analyzes the validity of different Q-factor models for BER estimation in RZ-DPSK transmission at a 40 Gb/s channel rate. The impact of the carrier-pulse duty cycle on the accuracy of the BER estimates obtained through the different models is also studied.
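
A commonly used reference point for such models (a standard Gaussian approximation, not a result specific to this study) maps the Q-factor of the sampled decision variable to the bit-error rate as

$$ \mathrm{BER} \approx \tfrac{1}{2}\,\operatorname{erfc}\!\left(\frac{Q}{\sqrt{2}}\right), \qquad Q = \frac{\mu_1 - \mu_0}{\sigma_1 + \sigma_0}, $$

where $\mu_{0,1}$ and $\sigma_{0,1}$ are the means and standard deviations of the received '0' and '1' levels. For DPSK the decision statistics after the delay interferometer are not strictly Gaussian, which is why the accuracy of differently defined Q-factors has to be checked against direct error counting or more refined models.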

Relevance:

30.00%

Publisher:

Abstract:

Applying direct error counting, we compare the accuracy and evaluate the validity of the different available numerical approaches to estimating the bit-error rate (BER) in 40-Gb/s return-to-zero differential phase-shift-keying transmission. As a particular example, we consider a system with in-line semiconductor optical amplifiers. We demonstrate that none of the existing models is consistently superior to the others, and we also show how the duty cycle affects the accuracy of the BER estimates obtained through the differently defined Q-factors.
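
As a minimal sketch of the kind of comparison being made (a toy binary decision variable with Gaussian noise, not the paper's RZ-DPSK simulation; all parameter values are illustrative), direct error counting can be set against the Gaussian Q-factor BER estimate as follows:

    import numpy as np
    from scipy.special import erfc

    rng = np.random.default_rng(0)

    n_bits = 2_000_000          # number of simulated bits (illustrative)
    mu0, mu1 = 0.0, 1.0         # mean levels of '0' and '1' symbols
    sigma = 0.17                # Gaussian noise std (illustrative)

    bits = rng.integers(0, 2, n_bits)
    samples = np.where(bits == 1, mu1, mu0) + sigma * rng.normal(size=n_bits)

    # Direct error counting with a mid-level decision threshold
    threshold = 0.5 * (mu0 + mu1)
    ber_counted = np.mean((samples > threshold) != (bits == 1))

    # Q-factor estimate from the sampled level statistics
    q = (samples[bits == 1].mean() - samples[bits == 0].mean()) / (
        samples[bits == 1].std() + samples[bits == 0].std())
    ber_q = 0.5 * erfc(q / np.sqrt(2))

    print(f"counted BER = {ber_counted:.2e}, Q-factor BER = {ber_q:.2e}")

In this toy case the noise really is Gaussian, so the two estimates agree; the papers above study exactly how far they diverge for DPSK decision statistics.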

Relevance:

30.00%

Publisher:

Abstract:

Applying direct error counting, we assess the performance of 20 Gbit/s wavelength-division-multiplexed return-to-zero differential phase-shift keying (RZ-DPSK) transmission at 0.4 bit/(s·Hz) spectral efficiency for application on installed transoceanic submarine systems based on non-zero dispersion-shifted fibre. The impact of the pulse duty cycle on the system performance is investigated, and the reliability of the existing theoretical approaches to BER estimation for the RZ-DPSK format is discussed.
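
The quoted spectral efficiency fixes the implied channel spacing: with a per-channel bit rate $R_b = 20$ Gbit/s,

$$ \Delta f = \frac{R_b}{\mathrm{SE}} = \frac{20\ \text{Gbit/s}}{0.4\ \text{bit/(s·Hz)}} = 50\ \text{GHz}, $$

i.e. the WDM channels sit on a 50 GHz grid (the grid spacing is inferred from the stated numbers rather than quoted from the abstract).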

Relevance:

30.00%

Publisher:

Abstract:

Cardiotocographic data provide physicians with information about foetal development and permit the assessment of conditions such as foetal distress; an incorrect evaluation of the foetal status can of course be very dangerous. To improve the interpretation of cardiotocographic recordings, great interest has been devoted to spectral analysis of foetal heart rate variability. It is worth noting, however, that the foetal heart rate is intrinsically an unevenly sampled series, so a zero-order, linear or cubic-spline interpolation is usually employed to produce an evenly sampled series. This is problematic for frequency analysis because interpolation alters the foetal heart rate power spectrum. In particular, the interpolation process can distort the power spectral density and thus, for example, affect the estimation of the sympatho-vagal balance (computed as the low-frequency/high-frequency ratio), which is an important clinical parameter. In order to estimate the spectral alterations of the foetal heart rate variability signal due to interpolation and to the cardiotocographic storage rate, in this work we simulated uneven foetal heart rate series with set characteristics and their evenly spaced versions (with different orders of interpolation and storage rates), and computed the sympatho-vagal balance values from the power spectral density. For power spectral density estimation we chose the Lomb method, as suggested by other authors for the study of uneven heart rate series in adults. In summary, the obtained results show that evaluating SVB on the evenly spaced FHR series leads to its overestimation, due to the interpolation process and to the storage rate; cubic-spline interpolation, however, produces more robust and accurate results. © 2010 Elsevier Ltd. All rights reserved.
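
A minimal sketch of the kind of computation described, assuming illustrative LF (0.03-0.15 Hz) and HF (0.15-1.0 Hz) bands rather than the exact bands used in the study; the Lomb periodogram is evaluated directly on the unevenly spaced beat times, so no interpolation is needed:

    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(1)

    # Simulate an unevenly sampled FHR series (beat-to-beat, illustrative)
    beat_times = np.cumsum(rng.uniform(0.38, 0.48, size=1500))   # s, ~140 bpm
    fhr = (140
           + 3.0 * np.sin(2 * np.pi * 0.10 * beat_times)    # LF component
           + 1.5 * np.sin(2 * np.pi * 0.40 * beat_times)    # HF component
           + rng.normal(0, 0.5, beat_times.size))            # bpm

    # Lomb periodogram on the uneven series (angular frequencies expected)
    freqs = np.linspace(0.01, 1.0, 400)                       # Hz
    pgram = lombscargle(beat_times, fhr - fhr.mean(), 2 * np.pi * freqs)

    # Sympatho-vagal balance as the LF/HF power ratio (band edges are assumptions)
    lf_band = (freqs >= 0.03) & (freqs < 0.15)
    hf_band = (freqs >= 0.15) & (freqs <= 1.0)
    svb = pgram[lf_band].sum() / pgram[hf_band].sum()
    print(f"SVB (LF/HF) on the uneven series: {svb:.2f}")

The study's point is that repeating this on interpolated, evenly resampled versions of the same series shifts the resulting SVB upward.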

Relevance:

30.00%

Publisher:

Abstract:

Cardiotocography provides significant information on foetal oxygenation, linked to characteristics of the foetal heart rate signal. Among the most important of these is foetal heart rate variability, whose spectral analysis is recognised as useful for improving the diagnosis of pathological conditions. Despite its importance, however, a standardised definition and estimation of foetal heart rate variability is still lacking. Some guidelines state that variability refers to fluctuations in the baseline free from accelerations and decelerations. This is an important limitation in clinical routine, since variability during these FHR alterations has always been regarded as particularly significant in terms of prognostic value. In this work we compute foetal heart rate variability as the difference between the foetal heart rate and a floatingline, and we propose a method for extracting the floatingline which takes accelerations and decelerations into account. © 2011 Springer-Verlag Berlin Heidelberg.
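
The paper's specific floatingline-extraction algorithm is not reproduced here; the sketch below only illustrates the general idea of subtracting a slowly varying floatingline from the FHR to obtain a variability signal, using a simple moving-median smoother as a stand-in, with illustrative parameter values:

    import numpy as np
    from scipy.ndimage import median_filter

    def fhr_variability(fhr, fs=4.0, window_s=30.0):
        """Return (floatingline, variability) for an evenly sampled FHR trace.

        fhr:       FHR samples in bpm
        fs:        sampling rate in Hz (4 Hz is a common CTG storage rate)
        window_s:  smoother length in seconds (illustrative choice)
        """
        win = max(1, int(window_s * fs))
        floatingline = median_filter(fhr, size=win, mode="nearest")  # slow trend
        variability = fhr - floatingline                             # FHRV signal
        return floatingline, variability

    # Toy usage: 10 minutes of synthetic FHR at 4 Hz containing one acceleration
    t = np.arange(0, 600, 1 / 4.0)
    fhr = (140
           + 2.0 * np.sin(2 * np.pi * 0.05 * t)
           + 15.0 * np.exp(-((t - 300) / 20) ** 2))   # acceleration-like bump
    baseline, fhrv = fhr_variability(fhr)
    print(f"FHRV std over the trace: {fhrv.std():.2f} bpm")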

Relevance:

30.00%

Publisher:

Abstract:

Cardiotocographic data provide physicians with information about foetal development and, through the assessment of specific parameters (such as accelerations, uterine contractions, ...), permit the evaluation of conditions such as foetal distress; an incorrect evaluation of the foetal status can of course be very dangerous. In recent decades, to improve the interpretation of cardiotocographic recordings, great interest has been devoted to FHRV spectral analysis. It is worth noting that the FHR is intrinsically an unevenly sampled series and that, to obtain an evenly sampled series, many commercial cardiotocographs use a zero-order interpolation (with a CTG data storage rate of 4 Hz). This is problematic for frequency analysis because interpolation alters the FHR power spectrum. In particular, this interpolation process can produce artifacts and attenuate the high-frequency components of the PSD, which, for example, affects the estimation of the sympatho-vagal balance (SVB, computed as the low-frequency/high-frequency ratio), an important clinical parameter. In order to estimate the spectral alterations due to zero-order interpolation and other CTG storage rates, in this work we simulated uneven FHR series with set characteristics and their evenly spaced versions (with different storage rates), and computed SVB values from the PSD. For PSD estimation we chose the Lomb method, as suggested by other authors for uneven HR series. ©2009 IEEE.
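
As a minimal sketch of the resampling step in question (illustrative values only, not the study's simulated series), zero-order interpolation at a 4 Hz storage rate simply holds the most recent beat value at each output sample:

    import numpy as np

    def zero_order_hold(beat_times, fhr_values, storage_rate=4.0):
        """Resample an uneven FHR series onto a uniform grid by holding
        the most recent beat value (zero-order interpolation)."""
        t_grid = np.arange(beat_times[0], beat_times[-1], 1.0 / storage_rate)
        # index of the last beat occurring at or before each grid time
        idx = np.searchsorted(beat_times, t_grid, side="right") - 1
        return t_grid, fhr_values[idx]

    # Toy uneven series: ~140 bpm beats with jittered inter-beat intervals
    rng = np.random.default_rng(2)
    beat_times = np.cumsum(rng.uniform(0.38, 0.48, size=600))
    fhr_values = 140 + rng.normal(0, 2, beat_times.size)

    t_even, fhr_even = zero_order_hold(beat_times, fhr_values)   # 4 Hz CTG-like trace
    print(f"{fhr_even.size} evenly spaced samples from {fhr_values.size} beats")

The staircase shape of the held signal is what introduces the high-frequency attenuation and artifacts discussed in the abstract.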

Relevance:

30.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 60J60, 62M99.

Relevance:

30.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 62G07, 62L20.

Relevance:

30.00%

Publisher:

Abstract:

In this talk we investigate the use of spectrally shaped amplified spontaneous emission (ASE) to emulate highly dispersed wavelength-division-multiplexed (WDM) signals in an optical transmission system. Such a technique offers several simplifications for large-scale WDM experiments: it reduces transmitter complexity by removing the need for multiple source lasers, and it potentially reduces test and measurement complexity, since only the centre channel of a WDM system needs to be measured in order to estimate the WDM worst-case performance. The use of ASE as a test and measurement tool is well established in optical communication systems, and several measurement techniques will be discussed [1, 2]. One of the most prevalent uses of ASE is in the measurement of receiver sensitivity, where ASE is introduced to degrade the optical signal-to-noise ratio (OSNR) and the resulting bit error rate (BER) is measured at the receiver. From an analytical point of view, noise has also been used to emulate system performance: the Gaussian Noise model treats highly dispersed signals as noise-like and has attracted considerable interest [3]. The work presented here extends the use of ASE by employing it to emulate highly dispersed WDM signals and, in the process, to reduce WDM transmitter complexity and receiver measurement time in a laboratory environment. Results thus far have indicated [2] that such a transmitter configuration is consistent with an AWGN model for transmission, with modulation format complexity and nonlinearities playing a key role in estimating the performance of systems utilising the ASE channel emulation technique. We conclude by investigating techniques capable of characterising the nonlinear and damage limits of optical fibres and the resulting information capacity limits.

References:
[1] McCarthy, M. E., N. Mac Suibhne, S. T. Le, P. Harper, and A. D. Ellis, "High spectral efficiency transmission emulation for non-linear transmission performance estimation for high order modulation formats," 2014 European Conference on Optical Communication (ECOC), IEEE, 2014.
[2] Ellis, A., N. Mac Suibhne, F. Gunning, and S. Sygletos, "Expressions for the nonlinear transmission performance of multi-mode optical fiber," Opt. Express, Vol. 21, 22834-22846, 2013.
[3] Vacondio, F., O. Rival, C. Simonneau, E. Grellier, A. Bononi, L. Lorcy, J. Antona, and S. Bigo, "On nonlinear distortions of highly dispersive optical coherent systems," Opt. Express, Vol. 20, 1022-1032, 2012.
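
As background for the Gaussian Noise (GN) model mentioned above (a standard relation in the GN-model literature, not a result reported in this talk), the effective SNR of a highly dispersed channel launched at power $P$ is commonly written as

$$ \mathrm{SNR}_{\mathrm{eff}} = \frac{P}{P_{\mathrm{ASE}} + \eta P^{3}}, \qquad P_{\mathrm{opt}} = \left(\frac{P_{\mathrm{ASE}}}{2\eta}\right)^{1/3}, $$

where $P_{\mathrm{ASE}}$ is the accumulated ASE power in the receiver bandwidth and $\eta$ is a system-dependent nonlinear interference coefficient; $P_{\mathrm{opt}}$ is the launch power that maximises $\mathrm{SNR}_{\mathrm{eff}}$. The ASE channel-emulation technique discussed here probes how well real modulated channels conform to this noise-like description.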

Relevance:

30.00%

Publisher:

Abstract:

This paper investigates whether Hungarian monetary policy took country risk into account in its interest-rate decisions and, if so, how. The question is addressed with the most common tool for analysing a country's monetary policy: Taylor rules describing its interest-rate setting are estimated. The estimation was carried out with several risk indicators and with several different types of Taylor rule. As a sensitivity analysis, measures of inflation and of the output gap other than those in the baseline specification were also used. The results show that the interest-rate decisions of the National Bank of Hungary (Magyar Nemzeti Bank) are well described by a flexible inflation-targeting regime: in the Taylor rules, the deviation of inflation from its target plays a significant role, and the output gap is also significant in part of the rules. The decision-makers also took country risk into account and responded to an increase in it by raising interest rates. Inserting country risk into the Taylor rule can, when an appropriate risk measure is chosen, improve the fit of the rule to an important degree.
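
One illustrative form of the kind of rule being estimated (the paper's exact specification, smoothing structure and choice of risk measure are not reproduced here) is an interest-rate-smoothing Taylor rule augmented with a country-risk term:

$$ i_t = \rho\, i_{t-1} + (1-\rho)\left[\,\bar{\imath} + \alpha\,(\pi_t - \pi^{*}) + \beta\, y_t + \gamma\, \mathrm{risk}_t\,\right] + \varepsilon_t, $$

where $i_t$ is the policy rate, $\pi_t - \pi^{*}$ the deviation of inflation from its target, $y_t$ the output gap, $\mathrm{risk}_t$ a country-risk indicator and $\rho$ the smoothing parameter; a significantly positive $\gamma$ corresponds to the finding that rates were raised in response to higher country risk.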

Relevance:

30.00%

Publisher:

Abstract:

This dissertation examines the behavior of the exchange rate under two different scenarios. The first is characterized by relatively low inflation, a situation in which prices adjust sluggishly. The second is a high-inflation economy in which prices respond very rapidly, even to unanticipated shocks. In the first scenario, following a monetary expansion, the exchange rate overshoots, i.e. the nominal exchange rate depreciates at a faster pace than the price level rises. Under high inflation, prices change faster than the exchange rate, so the exchange rate undershoots its long-run equilibrium value.

The standard work in this area, Dornbusch (1976), explains the overshooting process in the context of perfect capital mobility and sluggish adjustment in the goods market: a monetary expansion makes the exchange rate rise beyond its long-run equilibrium value. This dissertation extends Dornbusch's model and analyzes the exchange rate under conditions of currency substitution and price flexibility, characteristics of the Peruvian economy during the hyperinflation of the late 1980s. The results of the modified Dornbusch model show that, given a monetary expansion, the change in the price level will be larger than the change in the exchange rate if prices react more than proportionally to the monetary shock.

Such an over-reaction is to be expected under high inflation, when the velocity of money is increasing very rapidly. Increasing velocity of money gives rise to higher relative price variability, which in turn contributes to the appearance of new financial (and also non-financial) instruments that yield a higher return than the exchange rate, causing people to switch their demand for foreign exchange to these new assets. In a context of currency substitution, economic agents hoard and use foreign exchange as a store of value. The large decline in output caused by hyperinflation induces people to sell this hoarded money to finance current expenses, increasing the supply of foreign exchange in the market. Both the decrease in demand and the increase in supply reduce the price of foreign exchange, i.e. the real exchange rate. The findings above are tested using Peruvian data for the period January 1985-July 1990; the results of the econometric estimation confirm the findings of the theoretical model.
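
For reference, the textbook Dornbusch (1976) setup on which the dissertation builds (stated here in its standard form, not the dissertation's modified currency-substitution model) combines money-market equilibrium, uncovered interest parity and sticky goods prices:

$$ m - p = \phi y - \lambda i, \qquad i = i^{*} + \dot{e}^{\,e}, \qquad \dot{p} = \theta\,(d - y), $$

where $m$, $p$, $e$ and $y$ are (logs of) money, the price level, the exchange rate and output, $i$ and $i^{*}$ the domestic and foreign interest rates, $\dot{e}^{\,e}$ the expected depreciation and $d$ aggregate demand. Because $p$ is fixed on impact, an unanticipated rise in $m$ must be absorbed by a fall in $i$ and hence, through interest parity, by an expected appreciation, which requires the spot rate $e$ to jump beyond its new long-run level, i.e. to overshoot.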

Relevance:

30.00%

Publisher:

Abstract:

Crash reduction factors (CRFs) are used to estimate the number of traffic crashes expected to be prevented by investment in safety improvement projects. The method used to develop CRFs in Florida has been the commonly used before-and-after approach, which suffers from a widely recognized problem known as regression to the mean (RTM). The Empirical Bayes (EB) method has been introduced as a means of addressing the RTM problem. It requires information from both the treatment and reference sites in order to predict the number of crashes that would have been expected at the treatment sites had the safety improvement projects not been implemented. The information from the reference sites is estimated from a safety performance function (SPF), a mathematical relationship linking crashes to traffic exposure.

The objective of this dissertation was to develop SPFs for the different functional classes of the Florida State Highway System. Crash data from 2001 through 2003, along with traffic and geometric data, were used in the SPF model development. SPFs were developed for both rural and urban roadway categories. The modeling data were based on one-mile segments with homogeneous traffic and geometric conditions within each segment; segments involving intersections were excluded. Scatter plots of the data show that the relationship between crashes and traffic exposure is nonlinear, with crashes increasing with exposure at an increasing rate. Four regression models, namely Poisson (PRM), Negative Binomial (NBRM), zero-inflated Poisson (ZIP) and zero-inflated Negative Binomial (ZINB), were fitted to the one-mile segment records for the individual roadway categories, and the best model for each category was selected using a combination of the likelihood ratio test, the Vuong test and Akaike's Information Criterion (AIC).

The NBRM was found to be appropriate for only one category, while the ZINB model was more appropriate for six other categories. Overall, the Negative Binomial distribution generally provides a better fit for the data than the Poisson distribution, and the ZINB model gives the best fit when the count data exhibit excess zeros and over-dispersion, as they do for most of the roadway categories. While model validation shows that most data points fall within the 95% prediction intervals of the developed models, the Pearson goodness-of-fit measure is not statistically significant. This is expected, since traffic volume is only one of many factors contributing to the overall crash experience, and the SPFs are to be applied in conjunction with accident modification factors (AMFs) to account for the safety impacts of major geometric features before arriving at the final crash prediction. With improved traffic and crash data quality, however, the crash prediction power of the SPF models may be further improved.
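
A minimal sketch of fitting one such SPF, assuming the conventional form crashes = exp(b0 + b1*ln(AADT)) and using a Negative Binomial GLM from statsmodels on synthetic data; the variable names, coefficients and dispersion value are illustrative rather than the dissertation's estimates, and the dispersion parameter is held fixed for brevity instead of being estimated:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 500                                        # number of one-mile segments (synthetic)

    aadt = rng.uniform(2_000, 60_000, n)           # annual average daily traffic
    true_mu = np.exp(-7.0 + 0.85 * np.log(aadt))   # "true" SPF used to simulate crash counts

    # Negative Binomial crash counts with dispersion alpha (illustrative value)
    alpha = 0.8
    crashes = rng.negative_binomial(1 / alpha, 1 / (1 + alpha * true_mu))

    # SPF of the usual form log(mu) = b0 + b1 * log(AADT), fitted as an NB GLM
    X = sm.add_constant(np.log(aadt))
    spf = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=alpha)).fit()

    print(spf.params)    # intercept and log-AADT coefficient
    print(spf.aic)       # AIC, one of the model-selection criteria mentioned above

A zero-inflated variant of the same count model (e.g. statsmodels' ZeroInflatedNegativeBinomialP) would be the analogue of the ZINB specification preferred for most roadway categories.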