948 results for ERROR rates


Relevance:

60.00%

Publisher:

Abstract:

Species distribution modelling (SDM) typically analyses species’ presence together with some form of absence information. Ideally absences comprise observations or are inferred from comprehensive sampling. When such information is not available, pseudo-absences are often generated from the background locations within the study region of interest containing the presences, or else absence is implied through the comparison of presences to the whole study region, as is the case in Maximum Entropy (MaxEnt) or Poisson point process modelling. However, the choice of which absence information to include can be both challenging and highly influential on SDM predictions (e.g. Oksanen and Minchin, 2002). In practice, the use of pseudo- or implied absences often leads to an imbalance where absences far outnumber presences. This leaves the analysis highly susceptible to ‘naughty noughts’: absences that occur beyond the envelope of the species, which can exert strong influence on the model and its predictions (Austin and Meyers, 1996). Also known as ‘excess zeros’, naughty noughts can be estimated via an overall proportion in simple hurdle or mixture models (Martin et al., 2005). However, absences, especially those that occur beyond the species envelope, can often be more diverse than presences. Here we consider an extension to excess zero models. The two-stage approach first exploits the compartmentalisation provided by classification trees (CTs) (as in O’Leary, 2008) to identify multiple sources of naughty noughts and simultaneously delineate several species envelopes. SDMs can then be fit separately within each envelope; for this stage, we examine both CTs (as in Falk et al., 2014) and the popular MaxEnt (Elith et al., 2006). We introduce a wider range of model performance measures to improve the treatment of naughty noughts in SDM. We retain an overall measure of model performance, the area under the curve (AUC) of the receiver operating characteristic (ROC) curve, but focus on its constituent measures, the false negative rate (FNR) and false positive rate (FPR), and how these relate to the threshold in the predicted probability of presence that delimits predicted presence from absence. We also propose error rates more relevant to users of predictions: the false omission rate (FOR), the chance that a predicted absence corresponds to (and hence wastes) an observed presence, and the false discovery rate (FDR), reflecting those predicted (or potential) presences that correspond to absence. A high FDR may be desirable since it could help target future search efforts, whereas a zero or low FOR is desirable since it indicates that none of the (often valuable) presences have been ignored in the SDM. For illustration, we chose Bradypus variegatus, a species previously published as an exemplar species for MaxEnt, proposed by Phillips et al. (2006). We used CTs to increasingly refine the species envelope, starting with the whole study region (E0) and eliminating more and more potential naughty noughts (E1–E3). When combined with an SDM fit within the species envelope, the best CT SDM had similar AUC and FPR to the best MaxEnt SDM, but otherwise performed better. The FNR and FOR were greatly reduced, suggesting that CTs handle absences better. Interestingly, MaxEnt predictions showed low discriminatory performance, with the most common predicted probability of presence lying in the same range (0.00-0.20) for both true absences and presences.
In summary, this example shows that SDMs can be improved by introducing an initial hurdle to identify naughty noughts and partition the envelope before applying SDMs. This improvement was barely detectable via AUC and FPR, yet clearly visible in FOR, FNR, and the comparison of the distributions of predicted probability of presence for presences and absences.
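
For reference, the four error rates contrasted above follow directly from the 2x2 confusion matrix of predicted versus observed presence; a minimal Python sketch (the counts in the usage line are invented):

    # Error rates from a 2x2 confusion matrix of an SDM prediction.
    # tp: observed presence predicted present;  fn: observed presence predicted absent
    # fp: observed absence predicted present;   tn: observed absence predicted absent
    def sdm_error_rates(tp, fn, fp, tn):
        return {
            "FPR": fp / (fp + tn),  # false positive rate (1 - specificity)
            "FNR": fn / (fn + tp),  # false negative rate (1 - sensitivity)
            "FOR": fn / (fn + tn),  # false omission rate: share of predicted absences that are presences
            "FDR": fp / (fp + tp),  # false discovery rate: share of predicted presences that are absences
        }

    # e.g. sdm_error_rates(tp=40, fn=10, fp=200, tn=750)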

Relevance:

60.00%

Publisher:

Abstract:

Delay and disruption tolerant networks (DTNs) are computer networks where round-trip delays and error rates are high and disconnections frequent. Examples of such extreme networks are space communications, sensor networks, connecting rural villages to the Internet, and even interconnecting commodity portable wireless devices and mobile phones. The basic elements of delay tolerant networks are store-and-forward message transfer resembling traditional mail delivery, opportunistic and intermittent routing, and an extensible cross-region resource naming service. Individual nodes of the network take an active part in routing the traffic and provide in-network data storage for application data that flows through the network. Application architecture for delay tolerant networks also differs from that used in traditional networks. It has become feasible to design applications that are network-aware and opportunistic, taking advantage of different network connection speeds and capabilities. This might change some of the basic paradigms of network application design. DTN protocols also support the design of applications that depend on processes persisting over reboots and power failures. DTN protocols could also be applicable to traditional networks in cases where high tolerance to delays or errors is desired. It is apparent that challenged networks also challenge the traditional strictly layered model of network application design. This thesis provides an extensive introduction to delay tolerant networking concepts and applications. Most attention is given to the challenging problems of routing and application architecture. Finally, future prospects of DTN applications and implementations are envisioned through recent research results and an interview with an active DTN researcher.
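
As a purely illustrative sketch of the store-and-forward and opportunistic-routing elements described above (the class and method names are invented here, not taken from any DTN implementation):

    from collections import deque

    # Minimal store-and-forward node sketch: bundles are held in in-network
    # storage and forwarded opportunistically whenever a contact appears.
    class DtnNode:
        def __init__(self, name):
            self.name = name
            self.store = deque()          # in-network storage for application data

        def receive(self, bundle):
            self.store.append(bundle)     # keep the bundle until a forwarding chance

        def contact(self, neighbour):
            # Opportunistic, intermittent routing: drain stored bundles to the
            # neighbour while the (possibly short-lived) contact lasts.
            while self.store:
                neighbour.receive(self.store.popleft())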

Relevance:

60.00%

Publisher:

Abstract:

Detect and Avoid (DAA) technology is widely acknowledged as a critical enabler for unsegregated Remote Piloted Aircraft (RPA) operations, particularly Beyond Visual Line of Sight (BVLOS). Image-based DAA, in the visible spectrum, is a promising technological option for addressing the challenges DAA presents. Two impediments to progress for this approach are the scarcity of available video footage to train and test algorithms, together with the lack of testing regimes and specifications that facilitate repeatable, statistically valid performance assessment. This paper includes three key contributions undertaken to address these impediments. First, we detail our progress towards the creation of a large hybrid collision and near-collision encounter database. Second, we explore the suitability of techniques employed by the biometric research community (Speaker Verification and Language Identification) for DAA performance optimisation and assessment. These techniques include Detection Error Trade-off (DET) curves, Equal Error Rates (EER), and the Detection Cost Function (DCF). Finally, the hybrid database and the speech-based techniques are combined and employed in the assessment of a contemporary, image-based DAA system. This system includes stabilisation, morphological filtering and a Hidden Markov Model (HMM) temporal filter.
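
A minimal sketch of how the EER named above is obtained from detection scores, with the standard DCF formula noted for reference (the score arrays and their distributions are placeholders, not data from the paper):

    import numpy as np

    # EER: the operating point on a DET curve where the miss rate (FNR)
    # equals the false-alarm rate (FPR).
    def equal_error_rate(target_scores, nontarget_scores):
        thresholds = np.sort(np.concatenate([target_scores, nontarget_scores]))
        fnr = np.array([(target_scores < t).mean() for t in thresholds])      # misses
        fpr = np.array([(nontarget_scores >= t).mean() for t in thresholds])  # false alarms
        i = np.argmin(np.abs(fnr - fpr))      # threshold where the two curves cross
        return (fnr[i] + fpr[i]) / 2

    # e.g. equal_error_rate(np.random.normal(2, 1, 500), np.random.normal(0, 1, 5000))
    # The Detection Cost Function weights the two error types:
    # DCF(t) = C_miss * P_miss(t) * P_target + C_fa * P_fa(t) * (1 - P_target)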

Relevance:

60.00%

Publisher:

Abstract:

This paper addresses the problem of resolving ambiguities in frequently confused online Tamil character pairs by employing script-specific algorithms as a post-classification step. Robust structural cues and temporal information of the preprocessed character are extensively utilized in the design of these algorithms. The methods are quite robust in automatically extracting the discriminative sub-strokes of confused characters for further analysis. Experimental validation on the IWFHR database indicates error rates of less than 3% for the confused characters. Thus, these post-processing steps have good potential to improve the performance of online Tamil handwritten character recognition.
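
Schematically, the post-classification step described above can be pictured as a dispatch from known confused pairs to pair-specific discriminators; everything named in this sketch is hypothetical, since the paper's actual sub-stroke analysis is not reproduced here:

    # Hypothetical post-classification disambiguation for confused character pairs.
    # A pair-specific routine re-examines structural cues and temporal information
    # of the preprocessed strokes and returns the resolved class label.
    def disambiguate_by_substroke(strokes, pair):
        # ... pair-specific analysis of discriminative sub-strokes goes here ...
        return pair[0]

    CONFUSED_PAIRS = {
        frozenset({"class_A", "class_B"}): disambiguate_by_substroke,  # invented pair
    }

    def postprocess(label, runner_up, strokes):
        handler = CONFUSED_PAIRS.get(frozenset({label, runner_up}))
        if handler is None:
            return label                 # not a known confusion: keep classifier output
        return handler(strokes, (label, runner_up))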

Relevance:

60.00%

Publisher:

Abstract:

In a mobile ad-hoc network scenario, where communication nodes are mounted on moving platforms (jeeps, trucks, tanks, etc.), the use of V-BLAST requires that the number of receive antennas in a given node be greater than or equal to the sum of the numbers of transmit antennas of all its neighbor nodes. This limits the achievable spatial multiplexing gain (data rate) for a given node. In such a scenario, we propose to achieve high data rates per node through multicode direct-sequence spread-spectrum techniques in conjunction with V-BLAST. In the considered multicode V-BLAST system, the receiver experiences code-domain interference (CDI) in frequency-selective fading, in addition to the space-domain interference (SDI) experienced in conventional V-BLAST systems. We propose two interference-cancelling receivers that employ a linear parallel interference cancellation approach to handle the CDI, followed by a conventional V-BLAST detector to handle the SDI, and then evaluate their bit error rates.
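
For reference, a minimal sketch of the conventional V-BLAST detection stage that follows the proposed CDI cancellation (zero-forcing with ordered successive interference cancellation); the paper's code-domain PIC stage is not shown, and H, y and the slicer are placeholders:

    import numpy as np

    # ZF-SIC V-BLAST: detect the stream with the highest post-detection SNR,
    # cancel it from the received vector, and repeat on the remaining streams.
    def vblast_zf_sic(H, y, slicer):
        y = y.astype(complex)
        remaining = list(range(H.shape[1]))
        x_hat = np.zeros(H.shape[1], dtype=complex)
        while remaining:
            W = np.linalg.pinv(H[:, remaining])
            k = int(np.argmin(np.sum(np.abs(W) ** 2, axis=1)))  # strongest stream first
            idx = remaining[k]
            x_hat[idx] = slicer(W[k] @ y)        # detect the chosen stream
            y = y - H[:, idx] * x_hat[idx]       # cancel its contribution
            remaining.pop(k)
        return x_hat

    # e.g. with a QPSK slicer:
    # qpsk = lambda z: (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)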

Relevance:

60.00%

Publisher:

Abstract:

The performance of space-time block codes can be improved by coordinate interleaving of the input symbols from rotated M-ary phase shift keying (MPSK) and M-ary quadrature amplitude modulation (MQAM) constellations. This paper presents a performance analysis of coordinate-interleaved space-time codes, which are a subset of single-symbol maximum-likelihood decodable linear space-time block codes, for wireless multiple-antenna terminals. The analytical and simulation results show that full diversity is achievable. Using the equivalent single-input single-output model, simple expressions for the average bit error rates are derived over flat uncorrelated Rayleigh fading channels. Optimum rotation angles are found by locating the minimum of the average bit error rate curves.
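
As a minimal sketch of the coordinate interleaving described above (the rotation angle in the usage line is arbitrary; per the abstract, the optimum angle comes from minimizing the average BER expressions):

    import numpy as np

    # After rotation, the in-phase coordinate of one symbol is transmitted with
    # the quadrature coordinate of the other, so the two coordinates of each
    # symbol experience independent fades.
    def coordinate_interleave(s1, s2, angle):
        r1 = s1 * np.exp(1j * angle)
        r2 = s2 * np.exp(1j * angle)
        x1 = r1.real + 1j * r2.imag      # swap quadrature components
        x2 = r2.real + 1j * r1.imag
        return x1, x2

    # e.g. coordinate_interleave((1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2), 0.5)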

Relevance:

60.00%

Publisher:

Abstract:

Background: Sensitive remote homology detection and accurate alignments, especially in the midnight zone of sequence similarity, are needed for better function annotation and structural modeling of proteins. An algorithm, AlignHUSH, for HMM-HMM alignment has been developed which is capable of recognizing distantly related domain families. The method uses structural information, in the form of predicted secondary structure probabilities, and the hydrophobicity of amino acids to align HMMs of two sets of aligned sequences. The effect of using adjoining column(s) information has also been investigated and is found to increase the sensitivity of HMM-HMM alignments and remote homology detection. Results: We have assessed the performance of AlignHUSH using known evolutionary relationships available in SCOP. AlignHUSH performs better than the best HMM-HMM alignment methods and is observed to be even more sensitive at higher error rates. The accuracy of the alignments obtained using AlignHUSH has been assessed using the structure-based alignments available in BaliBASE. The alignment length and the alignment quality are found to be appropriate for homology modeling and function annotation. The alignment accuracy is found to be comparable to existing methods for profile-profile alignments. Conclusions: A new method to align HMMs has been developed and is shown to have better sensitivity at error rates of 10% and above when compared to other available programs. The proposed method could effectively aid in obtaining clues to the functions of proteins of yet unknown function. A web server incorporating the AlignHUSH method is available at http://crick.mbu.iisc.ernet.in/~alignhush/
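
A purely illustrative column-to-column score in the spirit of the description above, combining amino-acid co-emission with secondary-structure and hydrophobicity terms; the weights and functional form here are assumptions, not the published AlignHUSH scoring function:

    import numpy as np

    # p, q: emission probability vectors (length 20) of the two HMM columns;
    # bg: background amino-acid frequencies; ss_*: predicted H/E/C probabilities;
    # hyd_*: scalar hydrophobicity values for the two columns.
    def column_score(p, q, ss_p, ss_q, hyd_p, hyd_q, bg, w_ss=0.3, w_hyd=0.1):
        co_emission = np.log(np.sum(p * q / bg))   # profile-profile log-odds term
        ss_agreement = np.sum(ss_p * ss_q)         # secondary-structure overlap
        hyd_similarity = -abs(hyd_p - hyd_q)       # closer hydrophobicity scores higher
        return co_emission + w_ss * ss_agreement + w_hyd * hyd_similarity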

Relevance:

60.00%

Publisher:

Abstract:

For an n_t transmit, n_r receive antenna system (n_t x n_r system), a full-rate space-time block code (STBC) transmits at least n_min = min(n_t, n_r) complex symbols per channel use. The well-known Golden code is an example of a full-rate, full-diversity STBC for two transmit antennas. Its ML-decoding complexity is of the order of M^2.5 for square M-QAM. The Silver code for two transmit antennas has all the desirable properties of the Golden code except its coding gain, but offers a lower ML-decoding complexity of the order of M^2. Importantly, the slight loss in coding gain is negligible compared to the advantage it offers in terms of lowering the ML-decoding complexity. For a higher number of transmit antennas, the best known codes are the Perfect codes, which are full-rate, full-diversity, information-lossless codes (for n_r >= n_t) but have a high ML-decoding complexity of the order of M^(n_t * n_min) (for n_r < n_t, the punctured Perfect codes are considered). In this paper, a scheme to obtain full-rate STBCs for 2^a transmit antennas and any n_r, with reduced ML-decoding complexity of the order of M^(n_t(n_min - 3/4) - 0.5), is presented. The codes constructed are also information lossless for n_r >= n_t, like the Perfect codes, and allow higher mutual information than the comparable punctured Perfect codes for n_r < n_t. These codes are referred to as the generalized Silver codes, since they enjoy the same desirable properties as the comparable Perfect codes (except possibly the coding gain) with lower ML-decoding complexity, analogous to the Silver code and the Golden code for two transmit antennas. Simulation results of the symbol error rates for four and eight transmit antennas show that the generalized Silver codes match the punctured Perfect codes in error performance while offering lower ML-decoding complexity.
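
For concreteness, one standard form of the Golden code codeword referred to above (the Belfiore-Rekaya-Viterbo construction), as a minimal Python sketch; the symbol names a, b, c, d for the four QAM information symbols are ours, with rows as channel uses and columns as the two transmit antennas:

    import numpy as np

    # Golden code codeword for two transmit antennas.
    def golden_codeword(a, b, c, d):
        theta = (1 + np.sqrt(5)) / 2            # golden ratio
        theta_bar = (1 - np.sqrt(5)) / 2        # its algebraic conjugate
        alpha = 1 + 1j * (1 - theta)
        alpha_bar = 1 + 1j * (1 - theta_bar)
        return np.array([
            [alpha * (a + b * theta),              alpha * (c + d * theta)],
            [1j * alpha_bar * (c + d * theta_bar), alpha_bar * (a + b * theta_bar)],
        ]) / np.sqrt(5)

    # e.g. golden_codeword(1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j) for 4-QAM symbols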

Relevance:

60.00%

Publisher:

Abstract:

With no Channel State Information (CSI) at the users, transmission over the two-user Gaussian Multiple Access Channel with fading and a finite constellation at the input will have high error rates due to multiple access interference (MAI). However, perfect CSI at the users is an unrealistic assumption in the wireless scenario, as it would involve extremely large feedback overheads. In this paper we propose a scheme which removes the adverse effect of MAI using only quantized knowledge of the fade state at the transmitters, such that the associated overhead is nominal. One of the users rotates its constellation relative to the other, without varying the transmit power, to adapt to the existing channel conditions, in order to meet a predetermined minimum Euclidean distance requirement in the equivalent constellation at the destination. The optimal rotation scheme is described for the case when both users use symmetric M-PSK constellations at the input, where M = 2^λ, λ being a positive integer. The strategy is illustrated by considering the example where both users use QPSK signal sets at the input. The case when the users use PSK constellations of different sizes is also considered. It is shown that the proposed scheme has considerably better error performance compared to the conventional non-adaptive scheme, at the cost of a feedback overhead of just ceil(log2(M^2/8 - M/4 + 2)) + 1 bits for the M-PSK case.
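
A brute-force sketch of the rotation idea described above: searching for the angle that maximizes the minimum Euclidean distance of the equivalent sum constellation at the destination. The fade is set to 1 for clarity and the search is a simple grid, so this illustrates the criterion rather than the paper's optimal quantized scheme:

    import numpy as np

    # Sum constellation {x1 + exp(j*theta)*x2} over all pairs of M-PSK points;
    # a zero minimum distance means two distinct symbol pairs collide.
    def best_rotation(M, steps=2000):
        psk = np.exp(2j * np.pi * np.arange(M) / M)        # unit-energy M-PSK points
        best_theta, best_dmin = 0.0, -1.0
        for theta in np.linspace(0.0, 2 * np.pi / M, steps, endpoint=False):
            s = (psk[:, None] + np.exp(1j * theta) * psk[None, :]).ravel()
            i, j = np.triu_indices(len(s), k=1)
            dmin = np.abs(s[i] - s[j]).min()
            if dmin > best_dmin:
                best_theta, best_dmin = theta, dmin
        return best_theta, best_dmin

    # e.g. best_rotation(4) for the QPSK example considered in the paper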

Relevance:

60.00%

Publisher:

Abstract:

Space time cube representation is an information visualization technique where spatiotemporal data points are mapped into a cube. Information visualization researchers have previously argued that space time cube representation is beneficial in revealing complex spatiotemporal patterns in a data set to users. The argument is based on the fact that both time and spatial information are displayed simultaneously to users, an effect difficult to achieve in other representations. However, to our knowledge the actual usefulness of space time cube representation in conveying complex spatiotemporal patterns to users has not been empirically validated. To fill this gap, we report on a between-subjects experiment comparing novice users' error rates and response times when answering a set of questions using either a space time cube or a baseline 2D representation. For some simple questions, the error rates were lower when using the baseline representation. For complex questions where the participants needed an overall understanding of the spatiotemporal structure of the data set, the space time cube representation resulted in response times that were on average twice as fast, with no difference in error rates compared to the baseline. These results provide an empirical foundation for the hypothesis that space time cube representation benefits users analyzing complex spatiotemporal patterns.

Relevance:

60.00%

Publisher:

Abstract:

Space time cube representation is an information visualization technique where spatiotemporal data points are mapped into a cube. Fast and correct analysis of such information is important in, for instance, geospatial and social visualization applications. Information visualization researchers have previously argued that space time cube representation is beneficial in revealing complex spatiotemporal patterns in a dataset to users. The argument is based on the fact that both time and spatial information are displayed simultaneously to users, an effect difficult to achieve in other representations. However, to our knowledge the actual usefulness of space time cube representation in conveying complex spatiotemporal patterns to users has not been empirically validated. To fill this gap, we report on a between-subjects experiment comparing novice users' error rates and response times when answering a set of questions using either a space time cube or a baseline 2D representation. For some simple questions, the error rates were lower when using the baseline representation. For complex questions where the participants needed an overall understanding of the spatiotemporal structure of the dataset, the space time cube representation resulted in response times that were on average twice as fast, with no difference in error rates compared to the baseline. These results provide an empirical foundation for the hypothesis that space time cube representation benefits users when analyzing complex spatiotemporal patterns.

Relevance:

60.00%

Publisher:

Abstract:

The Chinese language is based on characters, which are syllabic in nature. Since languages have syllabotactic rules governing the construction of syllables and their allowed sequences, Chinese character sequence models can be used as a first-level approximation of allowed syllable sequences. N-gram character sequence models were trained on 4.3 billion characters. Characters are used as a first-level recognition unit, with multiple pronunciations per character. For comparison, the CU-HTK Mandarin word-based system was used to recognize words, which were then converted to character sequences. The character-only system's error rates for one-best recognition were slightly worse than word-based character recognition. However, combining the two systems using log-linear combination gives better results than either system separately. An equally weighted combination gave consistent CER gains of 0.1-0.2% absolute over the word-based standard system. Copyright © 2009 ISCA.
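
The log-linear combination described above can be sketched as a weighted sum of the two systems' log-probabilities per hypothesis; the hypothesis fields in the usage line are illustrative, not names from the CU-HTK system:

    # Log-linear combination of a character-only and a word-based system score;
    # w = 0.5 is the equally weighted combination reported above.
    def combined_score(logp_char_system, logp_word_system, w=0.5):
        return w * logp_char_system + (1 - w) * logp_word_system

    # Rescoring picks the hypothesis with the highest combined score:
    # best = max(hypotheses, key=lambda h: combined_score(h.char_logp, h.word_logp))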

Relevance:

60.00%

Publisher:

Abstract:

The object of this study was the preparation and administration of medications through catheters by nursing staff in patients receiving enteral nutrition. The general objective was to investigate the pattern of preparation and administration of medications through catheters in patients receiving concomitant enteral nutrition. The specific objectives were to profile the medications prepared and administered according to their suitability for administration through an enteral catheter, and to evaluate the type and frequency of errors occurring in the preparation and administration of medications through catheters. This was a cross-sectional, observational study with no intervention model. It was carried out in a hospital in Rio de Janeiro, where nursing technicians were observed preparing and administering medications through catheters in the Intensive Care Unit. A total of 350 medication doses were observed being prepared and administered. The most prevalent medication groups were those acting on the cardiovascular-renal system, with 164 doses (46.80%), followed by those acting on the respiratory system and on the blood, with 12.85% and 12.56% respectively. Nineteen different medications were found in the first group, two in the second and five in the third. The error categories in preparation were crushing, dilution and mixing. A mean error rate of 67.71% was found in medication preparation. Plain tablets were prepared incorrectly in 72.54% of doses, and all coated and extended-release tablets were improperly crushed. Among solid forms, the most prevalent error category was crushing, at 45.47%; preparation by mixing medications was an error found in almost 40% of the doses of solid medications. Insufficient crushing occurred in 73.33% of folic acid doses, in amiodarone hydrochloride (58.97%) and in bromopride (50.00%). Mixing with other medications occurred in 66.66% of bromopride doses, amlodipine besylate (53.33%), bamifylline (43.47%), folic acid (40.00%) and acetylsalicylic acid (33.33%). The administration errors were failure to pause the enteral feeding and improper handling of the catheter. The mean administration error rate was 32.64%, distributed between 17.14% for pausing the feeding and 48.14% for catheter handling. Failure to flush the catheter beforehand was the most common error, and the least common was failure to flush the catheter after administration. The medications most involved in administration errors were: amiodarone hydrochloride (n=39), captopril (n=33), hydralazine hydrochloride (n=7) and levothyroxine sodium (n=7). As for flushing the catheters beforehand, it did not occur in 330 medication doses. Inadequate preparation and administration of medications can lead to losses in bioavailability, reduced serum levels and risks of intoxication for the patient. Preparing and administering medications are common procedures, yet they showed high error rates, which may reflect these professionals' limited knowledge of good medication therapy practices. There is a clear need for greater commitment from all the professionals involved (physicians, nurses and pharmacists) to medication safety issues, as well as for rethinking the nursing work process.

Relevance:

60.00%

Publisher:

Abstract:

The principles of a band-limited free-space optical communication system employing spatial diversity and time-domain Rake receiver diversity are comprehensively simulated and analysed. A joint channel equalizer method integrating diversity reception and equalization techniques is proposed for the first time in the field of free-space laser communication. Through computer simulation, the bit error rates of different spatial diversity methods under uncorrelated spatial optical on-off keying (OOK) signals are studied, as well as the bit error rates of Rake reception under different inter-symbol interference conditions, and the error rates of joint diversity equalization at different signal-to-noise ratios and numbers of channels. The results confirm that the joint diversity-equalization method can significantly improve the performance of free-space optical communication systems.
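
A hedged Monte Carlo sketch of the kind of BER comparison reported above, for on-off keying with N-branch spatial diversity and equal-gain combining; the log-normal fading model, fixed threshold and all parameter values are assumptions for illustration, not the paper's channel model:

    import numpy as np

    # BER of OOK (levels 0/1) with N receive branches averaged before a fixed
    # threshold; more branches should smooth the fading and lower the BER.
    def ook_ber(snr_db, n_branches=4, bits=200_000, sigma=0.3,
                rng=np.random.default_rng(0)):
        data = rng.integers(0, 2, bits)
        fades = rng.lognormal(-sigma**2 / 2, sigma, (n_branches, bits))  # unit-mean fading
        noise_std = 1.0 / np.sqrt(2 * 10 ** (snr_db / 10))
        rx = fades * data + noise_std * rng.standard_normal((n_branches, bits))
        decision = rx.mean(axis=0) > 0.5        # equal-gain combining + threshold
        return np.mean(decision != data)

    # e.g. [ook_ber(10, n) for n in (1, 2, 4)] shows BER falling with diversity order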

Relevance:

60.00%

Publisher:

Abstract:

This document explains the procedure followed to develop the final stage of a DVB-T2 decoder, which consists of extracting a video file from the binary file produced by the rest of the decoder. The decoder is the software of a receiver developed by the TSR (Tratamiento de Señal y Radiocomunicaciones, Signal Processing and Radiocommunications) department of the Escuela de Ingenieros de Bilbao in 2010. That software is able to analyse the received DVB-T2 signal to compute the error rate and to obtain other relevant parameters, such as the type of modulation used. However, to assess the improvements of DVB-T2 subjectively, and even to determine how errors affect picture quality, it is necessary to view the transmitted video. For this reason a project was started whose objective is to program new software that delivers a file containing the video in question. This software has been programmed in the Matlab language; it takes the file produced by the receiver as input and processes it to obtain a new one containing the video. Once programmed and tested for correctness, it is applied after the TSR department's receiver. With the video available, it is possible to compare picture quality under different communication error rates, simulating transmissions in different environments, each with its corresponding noise. In this way, the behaviour of a real transmission can be estimated with very high accuracy as a function of the weather and other factors that affect the signal-to-noise ratio.
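
Assuming the decoder's binary output carries a standard MPEG transport stream (188-byte packets with a 0x47 sync byte), the extraction step might look like the following sketch; the real software described above is written in Matlab, so this Python equivalent, with invented file names and PID, is only illustrative:

    # Pull the elementary-stream payload for a chosen PID out of a transport
    # stream, concatenating packet payloads into an output file.
    def extract_pid(ts_path, out_path, pid):
        with open(ts_path, "rb") as f, open(out_path, "wb") as out:
            while (pkt := f.read(188)):
                if len(pkt) < 188 or pkt[0] != 0x47:   # lost sync: stop (real code would resync)
                    break
                pkt_pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
                afc = (pkt[3] >> 4) & 0x3              # adaptation field control bits
                if pkt_pid != pid or not (afc & 0x1):  # wrong PID or no payload
                    continue
                offset = 4 + 1 + pkt[4] if afc & 0x2 else 4   # skip adaptation field
                out.write(pkt[offset:])

    # e.g. extract_pid("decoder_output.bin", "video.es", pid=0x0100)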