988 results for Sequential error ratio


Relevance:

80.00%

Publisher:

Abstract:

To implement appropriate management of the cotton leafworm, it is necessary to build a sampling plan that allows the population density of the pest to be estimated quickly and accurately. This research aimed to determine a sequential sampling plan for Alabama argillacea (Hübner) in cotton, cultivar CNPA ITA-90. Data were collected in the 1998/99 growing season at Fazenda Itamarati Sul S/A, in the municipality of Ponta Porã, MS, in three areas of 10,000 m² each. The sampling areas comprised 100 plots of 100 m². The number of small, medium, and large larvae was determined weekly on five plants taken at random per plot. After verifying that all larval instars followed an aggregated spatial distribution, fitting the negative binomial distribution throughout the infestation period, a sequential sampling plan was constructed according to the Sequential Probability Ratio Test (SPRT). A control threshold of two larvae per plant was adopted for the construction of the plan. Data analysis indicated two decision lines: the upper line, representing the condition under which adoption of a control method is recommended, defined by S1 = 4.8784 + 1.4227n; and the lower line, representing that adoption of a control method is not necessary, defined by S0 = -4.8784 + 1.4227n. Sequential sampling estimated a maximum of 16 sample units to decide whether or not control is needed.
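The two decision lines S1 and S0 can be applied directly during scouting. A minimal sketch in Python, using only the coefficients reported in the abstract (the function and label names are illustrative):

```python
# Sequential sampling decision for A. argillacea: compare the cumulative
# larva count after n sample units against the two decision lines.

def decide(cumulative_larvae, n):
    """Classify the scouting status after n sample units."""
    upper = 4.8784 + 1.4227 * n    # S1: control is recommended
    lower = -4.8784 + 1.4227 * n   # S0: control is not needed
    if cumulative_larvae >= upper:
        return "treat"
    if cumulative_larvae <= lower:
        return "no_treat"
    return "continue"              # keep sampling (at most ~16 units)

print(decide(20, 10))  # -> treat
```

With 20 larvae accumulated over 10 units the count exceeds S1 = 19.11, so control is recommended; counts between the lines mean sampling continues.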

Relevance:

80.00%

Publisher:

Abstract:

Long-term electrocardiogram (ECG) signals may suffer from substantial baseline disturbances during physical activity. Motion artifacts are particularly pronounced with dry-surface or esophageal electrodes, which are designed for prolonged ECG recording. In this paper we present a method called baseline wander tracking (BWT) that tracks and rejects strong baseline disturbances and avoids concurrent saturation of the analog front end. The proposed algorithm shifts the baseline level of the ECG signal to the middle of the dynamic input range. Because the fast offset shifts produce much steeper signal portions than the normal ECG waves, the true ECG signal can be reconstructed offline and filtered using computationally intensive algorithms. Based on Monte Carlo simulations, we observed reconstruction errors mainly caused by the non-linearity inaccuracies of the DAC. However, for a synthetic ECG signal, the signal-to-error ratio of the BWT is higher than that of an analog front end with a dynamic input range above 15 mV. The BWT is additionally able to suppress (electrode) offset potentials without introducing long transients. Owing to its structural simplicity, memory efficiency and DC-coupling capability, the BWT is well suited to the high integration required in long-term, low-power ECG recording systems.
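The offset-shifting idea can be sketched as follows. This is a hypothetical illustration (made-up full-scale and step values, not the authors' implementation): when the signal nears saturation, a discrete offset step recentres it, and the steps are undone afterwards. In the real system the steps are recovered offline from their steepness; here the offset track is simply kept alongside.

```python
# Sketch of baseline wander tracking: recentre with discrete offset
# steps before saturation, then undo the steps to reconstruct.

def bwt_record(signal, full_scale=1.0, step=0.4):
    recorded, offsets, offset = [], [], 0.0
    for x in signal:
        y = x - offset
        while y > 0.9 * full_scale:    # nearing positive saturation
            offset += step
            y -= step
        while y < -0.9 * full_scale:   # nearing negative saturation
            offset -= step
            y += step
        recorded.append(y)
        offsets.append(offset)
    return recorded, offsets

def bwt_reconstruct(recorded, offsets):
    # add the tracked offsets back to recover the true signal
    return [y + o for y, o in zip(recorded, offsets)]
```

A slow drift far beyond the input range is thus recorded without clipping and restored afterwards.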

Relevance:

80.00%

Publisher:

Abstract:

The reduction in sea ice along the SE Greenland coast during the last century has severely impacted ice-rafting to this area. In order to reconstruct ice-rafting and oceanographic conditions in the area of Denmark Strait during the last ~150 years, we conducted a multiproxy study on three short (20 cm) sediment cores from outer Kangerdlugssuaq Trough (~300 m water depth). The proxy-based data obtained have been compared with historical and instrumental data to gain a better understanding of the ice sheet-ocean interactions in the area. A robust chronology has been developed based on 210Pb and 137Cs measurements on core PO175GKC#9 (~66.2°N, 32°W) and expanded to the two adjacent cores based on correlations between calcite weight percent records. Our proxy records include sea-ice and phytoplankton biomarkers, and a variety of mineralogical determinations based on the <2 mm sediment fraction, including identification with quantitative X-ray diffraction, ice-rafted debris counts on the 63-150 µm sand fraction, and source identifications based on the composition of Fe oxides in the 45-250 µm fraction. A multivariate statistical analysis indicated significant correlations between our proxy records and historical data, especially with the mean annual temperature data from Stykkishólmur (Iceland) and the storis index (historical observations of sea-ice export via the East Greenland Current). In particular, the biological proxies (calcite weight percent, IP25, and total organic carbon %) showed significant linkage with the storis index. Our records show two distinct intervals in the recent history of the SE Greenland coast. The first of these (AD 1850-1910) shows predominantly perennial sea-ice conditions in the area, while the second (AD 1910-1990) shows more seasonally open water conditions.

Relevance:

80.00%

Publisher:

Abstract:

Since information and communication technologies began to gain importance in society, one of the main objectives has been to ensure that transmitted information reaches the receiver intact. For this reason, it is necessary to develop new digital communication systems able to offer secure and reliable transmission. Their characteristics have improved over the years, bringing important advances to everyday life. In this context, one of the most successful schemes is Trellis Coded Modulation (TCM), which offers great advantages for digital communication, especially in narrowband systems. This type of error-protection code, based on convolutional coding, is characterized by performing modulation and coding in a single function. As a result, a higher data transmission rate is achieved without increasing the bandwidth, at the cost of moving to a larger constellation. This final-year project analyzes the behaviour of TCM and the advantages it offers over similar systems. Four simulations are proposed that produce graphs relating the bit error ratio (BER) to the signal-to-noise ratio (SNR); from these graphs, the coding gain with respect to the theoretical bit error probability can be determined. The simulated systems move from a QPSK modulation to 8PSK, or from 8PSK to 16QAM. Finally, a Matlab graphical environment is developed to provide the user with simple, interactive operation.
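The uncoded reference curve for such BER-versus-SNR comparisons can be sketched in a few lines. The QPSK bit error probability and the asymptotic-gain formula below are standard textbook results; the specific squared-distance values passed in at the end are illustrative, not taken from the project:

```python
import math

def qpsk_ber(ebn0_db):
    """Theoretical QPSK bit error probability, Pb = Q(sqrt(2*Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0))

def asymptotic_coding_gain(dfree2_coded, dmin2_uncoded):
    """Asymptotic TCM gain in dB from squared Euclidean distances."""
    return 10 * math.log10(dfree2_coded / dmin2_uncoded)

print(round(qpsk_ber(0.0), 5))  # uncoded QPSK at Eb/N0 = 0 dB -> 0.07865
```

Sweeping `qpsk_ber` over a range of Eb/N0 values reproduces the theoretical curve against which the simulated TCM curves are compared.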

Relevance:

80.00%

Publisher:

Abstract:

Polarization-switched quadrature phase-shift keying has been demonstrated experimentally at 40.5 Gb/s with a coherent receiver and digital signal processing. Compared to polarization-multiplexed QPSK at the same bit rate, its back-to-back sensitivity at a bit error ratio of 10^-3 shows a 0.9 dB improvement, and it tolerates about 1.6 dB higher launch power for 10 × 100 km, 50 GHz-spaced WDM transmission, allowing a 1 dB penalty in the required optical signal-to-noise ratio relative to back-to-back.

Relevance:

80.00%

Publisher:

Abstract:

In this letter, an energy-efficient adaptive code position modulation scheme is proposed for wireless sensor networks to provide the relatively stable bit error ratio (BER) performance expected by the upper layers. The system is designed with a focus on the adaptive control of transmission power, which is adjusted based on the measured power density of the background noise. Interfaces among the modulation module, the packet scheduling module and the upper layer are provided for flexible adjustments that adapt to the background noise and deliver the expected application quality. Simulations with the Signal Processing Worksystem (SPW) validate the effectiveness of the scheme. © 2005 IEEE.
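The power-adaptation step can be sketched as follows. All parameter names, limits, and the simple link-budget control law are assumptions for illustration; the letter does not give its control law:

```python
import math

def tx_power_dbm(noise_density_dbm_hz, bandwidth_hz, target_snr_db,
                 path_loss_db, p_min=-10.0, p_max=20.0):
    """Pick a transmit power that holds the receiver SNR near a target,
    given the measured background-noise power density."""
    # total noise power over the signal bandwidth
    noise_power_dbm = noise_density_dbm_hz + 10 * math.log10(bandwidth_hz)
    # link budget: received power must exceed noise by the target SNR
    p = noise_power_dbm + target_snr_db + path_loss_db
    return max(p_min, min(p_max, p))  # clamp to the amplifier's range
```

For example, thermal noise at -174 dBm/Hz over 1 MHz with a 10 dB SNR target and 100 dB path loss gives -4 dBm; as the measured noise density rises, the commanded power rises with it, keeping the BER roughly stable until the amplifier limit is reached.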

Relevance:

80.00%

Publisher:

Abstract:

We develop a framework for estimating the quality of transmission (QoT) of a new lightpath before it is established, as well as for calculating the expected degradation it will cause to existing lightpaths. The framework correlates the QoT metrics of established lightpaths, which are readily available from coherent optical receivers that can be extended to serve as optical performance monitors. Past similar studies used only space (routing) information and thus neglected spectrum, while they focused on old-generation noncoherent networks. The proposed framework accounts for correlation in both the space and spectrum domains and can be applied to both fixed-grid wavelength division multiplexing (WDM) and elastic optical networks. It is based on a graph transformation that exposes and models the interference between spectrum-neighboring channels. Our results indicate that our QoT estimates are very close to the actual performance data, that is, to having perfect knowledge of the physical layer. The proposed estimation framework is shown to provide up to 4 × 10^-2 lower pre-forward-error-correction bit error ratio (BER) compared to the worst-case interference scenario, which overestimates the BER. The higher accuracy can be harvested when lightpaths are provisioned with low margins; our results showed up to a 47% reduction in required regenerators, a substantial saving in equipment cost.

Relevance:

40.00%

Publisher:

Abstract:

The serological detection of antibodies against human papillomavirus (HPV) antigens is a useful tool for determining exposure to genital HPV infection and for predicting the risk of infection persistence and associated lesions. Enzyme-linked immunosorbent assays (ELISAs) are commonly used for seroepidemiological studies of HPV infection but are not standardized. Intra- and inter-assay performance variation is difficult to control, especially in cohort studies that require the testing of specimens over extended periods. We propose the use of normalized absorbance ratios (NARs) as a standardization procedure to control for such variations and minimize measurement error. We compared NAR and ELISA optical density (OD) values for the strength of the correlation between serological results for paired visits 4 months apart and HPV-16 DNA positivity in cervical specimens from a cohort investigation of 2,048 women tested with an ELISA using HPV-16 virus-like particles. NARs were calculated by dividing the mean blank-subtracted (net) ODs by the equivalent values of a control serum pool included in triplicate on the same plate, using different dilutions. Stronger correlations were observed with NAR values than with net ODs at every dilution, with an overall reduction of 39% in unexplained regression variability. Using logistic regression, the ranges of odds ratios of HPV-16 DNA positivity contrasting upper and lower quintiles at different dilutions and their averages were 4.73 to 5.47 for NARs and 2.78 to 3.28 for net ODs, with corresponding significant improvements in seroreactivity-risk trends across quintiles when NARs were used. NAR standardization is a simple procedure to reduce measurement error in seroepidemiological studies of HPV infection.
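The NAR calculation itself is simple. A sketch with made-up OD values (the function name is illustrative):

```python
def nar(net_od, control_net_ods):
    """Normalized absorbance ratio: a specimen's net OD divided by the
    mean net OD of the control serum pool run on the same plate."""
    control_mean = sum(control_net_ods) / len(control_net_ods)
    return net_od / control_mean

plate_controls = [0.52, 0.48, 0.50]   # triplicate control pool (made up)
print(nar(0.75, plate_controls))      # specimen NAR, approximately 1.5
```

Because the control pool is run on every plate, plate-to-plate drift in absolute OD cancels out of the ratio, which is what makes NARs comparable across plates and over time.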

Relevance:

30.00%

Publisher:

Abstract:

Information fusion in biometrics has received considerable attention. The architecture proposed here is based on the sequential integration of multi-instance and multi-sample fusion schemes. This method is analytically shown to improve performance and to allow a controlled trade-off between false alarms and false rejects when the classifier decisions are statistically independent. Equations developed for the detection error rates are experimentally evaluated by applying the proposed architecture to text-dependent speaker verification using HMM-based digit-dependent speaker models. The tuning of the parameters, n classifiers and m attempts/samples, is investigated, and the resulting detection error trade-off performance is evaluated on individual digits. Results show that performance improvement can be achieved even for weaker classifiers (FRR = 19.6%, FAR = 16.7%). The architectures investigated apply to speaker verification from spoken digit strings, such as credit card numbers, in telephone, VoIP or internet-based applications.
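Under the statistical-independence assumption stated above, the fused error rates can be sketched for one plausible reading of the architecture (an 'OR' over up to m attempts per classifier, then an 'AND' over n classifiers; this is an illustration, not the authors' code):

```python
def fused_rates(far, frr, n, m):
    """Fused FAR/FRR for independent decisions: OR over m attempts per
    classifier, then AND over n classifiers."""
    far_i = 1 - (1 - far) ** m   # extra attempts raise per-classifier FAR
    frr_i = frr ** m             # ...and lower per-classifier FRR
    fused_far = far_i ** n       # requiring all n accepts lowers FAR
    fused_frr = 1 - (1 - frr_i) ** n
    return fused_far, fused_frr

# with the single-classifier rates quoted in the abstract:
far2, frr2 = fused_rates(0.167, 0.196, n=2, m=2)
```

With n = m = 2 both fused rates fall below the single-classifier rates, illustrating how n and m give a controlled trade-off between false alarms and false rejects.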

Relevance:

30.00%

Publisher:

Abstract:

Parallel combinatory orthogonal frequency division multiplexing (PC-OFDM) yields a lower maximum peak-to-average power ratio (PAR), higher bandwidth efficiency and a lower bit error rate (BER) on Gaussian channels compared to OFDM systems. However, PC-OFDM does not improve the statistics of the PAR significantly. In this chapter, the use of a set of fixed permutations to improve the statistics of the PAR of a PC-OFDM signal is presented. For this technique, interleavers are used to produce K-1 permuted sequences from the same information sequence. The sequence with the lowest PAR among the K sequences is chosen for transmission. The PAR of a PC-OFDM signal can be further reduced by 3-4 dB with this technique. Mathematical expressions for the complementary cumulative distribution function (CCDF) of the PAR of a PC-OFDM signal and an interleaved PC-OFDM signal are also presented.
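The selection step can be sketched as follows. The toy inverse-DFT modulator, the particular permutations, and all names are illustrative, not the chapter's system model:

```python
import cmath
import math

def ofdm_time_signal(symbols):
    """Toy OFDM modulator: inverse DFT of the subcarrier symbols."""
    n = len(symbols)
    return [sum(s * cmath.exp(2j * cmath.pi * k * t / n)
                for k, s in enumerate(symbols)) / n
            for t in range(n)]

def par_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    powers = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def select_min_par(symbols, permutations):
    """Among the original sequence and its K-1 interleaved copies,
    transmit the candidate whose time-domain signal has the lowest PAR."""
    candidates = [symbols] + [[symbols[i] for i in perm]
                              for perm in permutations]
    return min(candidates, key=lambda c: par_db(ofdm_time_signal(c)))
```

The receiver only needs to know which of the K fixed interleavers was applied in order to undo the permutation, which is why a small set of fixed permutations suffices.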

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes the use of eigenvoice modeling techniques with the Cross Likelihood Ratio (CLR) as a criterion for speaker clustering within a speaker diarization system. The CLR has previously been shown to be a robust decision criterion for speaker clustering using Gaussian Mixture Models. Recently, eigenvoice modeling techniques have become increasingly popular, owing to their ability to adequately represent a speaker based on sparse training data, as well as their improved capture of differences in speaker characteristics. This paper hence proposes that it would be beneficial to capitalize on the advantages of eigenvoice modeling in a CLR framework. Results obtained on the 2002 Rich Transcription (RT-02) Evaluation dataset show improved clustering performance, resulting in a 35.1% relative improvement in the overall Diarization Error Rate (DER) compared to the baseline system.
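One common formulation of the CLR merge criterion (a sketch; the exact normalization used in the paper may differ) compares how well each cluster's data is explained by the other cluster's model versus a universal background model (UBM):

```python
def clr(ll_a_given_b, ll_a_given_ubm, ll_b_given_a, ll_b_given_ubm):
    """Cross likelihood ratio from mean per-frame log-likelihoods:
    CLR = log[L(Xa|Mb)/L(Xa|UBM)] + log[L(Xb|Ma)/L(Xb|UBM)]."""
    return (ll_a_given_b - ll_a_given_ubm) + (ll_b_given_a - ll_b_given_ubm)

def should_merge(clr_value, threshold=0.0):
    # a high CLR means each cluster's data is modeled better by the other
    # cluster than by the UBM, i.e. both likely contain the same speaker
    return clr_value >= threshold
```

In agglomerative clustering, the pair of clusters with the highest CLR is merged repeatedly until no pair exceeds the threshold; swapping GMMs for eigenvoice models changes how the per-cluster likelihoods are computed, not this criterion.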

Relevance:

30.00%

Publisher:

Abstract:

Fusion techniques have received considerable attention for achieving lower error rates with biometrics. A fused classifier architecture based on sequential integration of multi-instance and multi-sample fusion schemes allows a controlled trade-off between false alarms and false rejects. Expressions for each type of error for the fused system have previously been derived for the case of statistically independent classifier decisions. It is shown in this paper that the performance of this architecture can be improved by modelling the correlation between classifier decisions. Correlation modelling also enables better tuning of the fusion model parameters, 'N', the number of classifiers, and 'M', the number of attempts/samples, and facilitates the determination of error bounds for false rejects and false accepts for each specific user. The error trade-off performance of the architecture is evaluated using HMM-based speaker verification on utterances of individual digits. Results show that performance is improved for the case of favourably correlated decisions. The architecture investigated here is directly applicable to speaker verification from spoken digit strings, such as credit card numbers, in telephone or voice over internet protocol based applications. It is also applicable to other biometric modalities such as fingerprints and handwriting samples.

Relevance:

30.00%

Publisher:

Abstract:

Fusion techniques have received considerable attention for achieving performance improvement with biometrics. While a multi-sample fusion architecture reduces false rejects, it also increases false accepts. This impact on performance also depends on the nature of the subsequent attempts, i.e., random or adaptive. Expressions for the error rates are presented and experimentally evaluated in this work by considering the multi-sample fusion architecture for text-dependent speaker verification using HMM-based digit-dependent speaker models. Analysis incorporating correlation modeling demonstrates that the use of adaptive samples improves overall fusion performance compared to randomly repeated samples. For a text-dependent speaker verification system using digit strings, sequential decision fusion of seven instances with three random samples is shown to reduce the overall error of the verification system by 26%, which can be further reduced by 6% with adaptive samples. This analysis, novel in its treatment of random and adaptive multiple presentations within a sequential fused decision architecture, is also applicable to other biometric modalities such as fingerprints and handwriting samples.

Relevance:

30.00%

Publisher:

Abstract:

Statistical dependence between classifier decisions is often shown to improve performance over statistically independent decisions. Though the solution for favourable dependence between two classifier decisions has been derived, the theoretical analysis for the general case of 'n' client and impostor decision fusion has not been presented before. This paper presents the expressions developed for favourable dependence of multi-instance and multi-sample fusion schemes that employ 'AND' and 'OR' rules. The expressions are experimentally evaluated by applying the proposed architecture to text-dependent speaker verification using HMM-based digit-dependent speaker models. The improvement in fusion performance is found to be higher when digit combinations with favourable client and impostor decisions are used for speaker verification. The total error rate of 20% for fusion of independent decisions is reduced to 2.1% for fusion of decisions that are favourable for both clients and impostors. The expressions developed here are also applicable to other biometric modalities, such as fingerprints and handwriting samples, for reliable identity verification.
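For the two-decision case (which the 'n'-decision expressions generalize), the effect of decision correlation can be sketched as follows. Under an 'AND' accept rule the fused false accept rate is the joint accept probability, so favourable (negative) correlation of impostor accepts pushes it below the independent product. The formula for correlated Bernoulli indicators is a standard identity; the clamping and example rates are illustrative:

```python
import math

def joint_error(p1, p2, rho=0.0):
    """Joint probability that two correlated Bernoulli indicators with
    marginals p1, p2 and correlation rho both fire, clamped to the
    Frechet bounds so it remains a valid probability."""
    p12 = p1 * p2 + rho * math.sqrt(p1 * (1 - p1) * p2 * (1 - p2))
    return min(max(p12, max(0.0, p1 + p2 - 1)), min(p1, p2))

# independent vs favourably correlated impostor accepts under 'AND':
far_indep = joint_error(0.1, 0.1, 0.0)    # product rule, 0.01
far_favour = joint_error(0.1, 0.1, -0.1)  # below the product, 0.001
```

The same identity with positive rho shows why unfavourably correlated decisions erode the fusion gain, which is the trade-off the paper's favourable-dependence expressions are designed to exploit.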