974 results for Bit Error Rate


Relevance:

80.00%

Publisher:

Abstract:

A systolic array to implement lattice-reduction-aided linear detection is proposed for a MIMO receiver. The lattice reduction algorithm and the ensuing linear detection are operated in the same array, which can be hardware-efficient. The all-swap lattice reduction (ASLR) algorithm is considered for the systolic design. ASLR is a variant of the LLL algorithm that processes all lattice basis vectors within one iteration. Lattice-reduction-aided linear detection based on the ASLR and LLL algorithms has very similar bit-error-rate performance, while ASLR is more time-efficient in the systolic array, especially for systems with a large number of antennas.
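As a rough software illustration of the technique the abstract builds on (a conventional, sequential LLL reduction followed by zero-forcing detection in the reduced basis; this is not the paper's systolic ASLR design, and all names below are our own):

```python
import numpy as np

def lll_reduce(H, delta=0.75):
    """Sequential LLL reduction of the columns of H.
    Returns (Hr, T) with Hr = H @ T and T unimodular."""
    Hr = H.astype(float).copy()
    n = Hr.shape[1]
    T = np.eye(n, dtype=int)

    def gso(B):
        # Gram-Schmidt orthogonalization with coefficients mu
        Bs = B.astype(float).copy()
        mu = np.eye(n)
        for i in range(n):
            for j in range(i):
                mu[i, j] = B[:, i] @ Bs[:, j] / (Bs[:, j] @ Bs[:, j])
                Bs[:, i] = Bs[:, i] - mu[i, j] * Bs[:, j]
        return Bs, mu

    k = 1
    while k < n:
        Bs, mu = gso(Hr)
        for j in range(k - 1, -1, -1):
            q = int(round(mu[k, j]))
            if q != 0:                      # size-reduce column k against column j
                Hr[:, k] -= q * Hr[:, j]
                T[:, k] -= q * T[:, j]
                Bs, mu = gso(Hr)
        if Bs[:, k] @ Bs[:, k] >= (delta - mu[k, k - 1] ** 2) * (Bs[:, k - 1] @ Bs[:, k - 1]):
            k += 1                          # Lovász condition holds, move on
        else:                               # swap columns k-1 and k, step back
            Hr[:, [k - 1, k]] = Hr[:, [k, k - 1]]
            T[:, [k - 1, k]] = T[:, [k, k - 1]]
            k = max(k - 1, 1)
    return Hr, T

def lr_zf_detect(H, y):
    """Lattice-reduction-aided zero forcing: quantize in the reduced basis,
    then map back through the unimodular matrix T."""
    Hr, T = lll_reduce(H)
    z = np.rint(np.linalg.pinv(Hr) @ y)
    return (T @ z).astype(int)
```

Because T is unimodular, rounding in the better-conditioned reduced basis and mapping back through T is exact for integer symbol vectors, which is what gives lattice-reduction-aided detection its bit-error-rate advantage over plain zero forcing.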


It is common in econometric applications that several hypothesis tests are carried out at the same time. The problem then becomes how to decide which hypotheses to reject, accounting for the multitude of tests. In this paper, we suggest a stepwise multiple testing procedure which asymptotically controls the familywise error rate at a desired level. Compared to related single-step methods, our procedure is more powerful in the sense that it often will reject more false hypotheses. In addition, we advocate the use of studentization when it is feasible. Unlike some stepwise methods, our method implicitly captures the joint dependence structure of the test statistics, which results in increased ability to detect alternative hypotheses. We prove our method asymptotically controls the familywise error rate under minimal assumptions. We present our methodology in the context of comparing several strategies to a common benchmark and deciding which strategies actually beat the benchmark. However, our ideas can easily be extended and/or modified to other contexts, such as making inference for the individual regression coefficients in a multiple regression framework. Some simulation studies show the improvements of our methods over previous proposals. We also provide an application to a set of real data.
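The classical stepwise improvement over single-step Bonferroni testing (not the authors' resampling-based studentized procedure, but the textbook baseline in this literature) is Holm's stepdown, which controls the familywise error rate at the same level while rejecting at least as many hypotheses. A minimal sketch:

```python
def holm_stepdown(pvals, alpha=0.05):
    """Holm's stepdown procedure: controls the familywise error rate at
    level alpha, rejecting at least as much as single-step Bonferroni."""
    k = len(pvals)
    order = sorted(range(k), key=lambda i: pvals[i])   # most significant first
    reject = [False] * k
    for step, i in enumerate(order):
        if pvals[i] <= alpha / (k - step):             # threshold grows stepwise
            reject[i] = True
        else:
            break                                      # stop at first acceptance
    return reject
```

With p-values (0.001, 0.01, 0.02, 0.5) at alpha = 0.05, single-step Bonferroni (threshold 0.0125) rejects two hypotheses while Holm rejects three, which is the kind of extra power the abstract refers to.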


This paper explores three aspects of strategic uncertainty: its relation to risk, predictability of behavior, and subjective beliefs of players. In a laboratory experiment we measure subjects' certainty equivalents for three coordination games and one lottery. Behavior in coordination games is related to risk aversion, experience seeking, and age. From the distribution of certainty equivalents we estimate probabilities for successful coordination in a wide range of games. For many games, success of coordination is predictable with a reasonable error rate. The best response to observed behavior is close to the global-game solution. Comparing choices in coordination games with revealed risk aversion, we estimate subjective probabilities for successful coordination. In games with a low coordination requirement, most subjects underestimate the probability of success. In games with a high coordination requirement, most subjects overestimate this probability. Estimating probabilistic decision models, we show that the quality of predictions can be improved when individual characteristics are taken into account. Subjects' behavior is consistent with probabilistic beliefs about the aggregate outcome, but inconsistent with probabilistic beliefs about individual behavior.


Consider the problem of testing k hypotheses simultaneously. In this paper, we discuss finite and large sample theory of stepdown methods that provide control of the familywise error rate (FWE). In order to improve upon the Bonferroni method or Holm's (1979) stepdown method, Westfall and Young (1993) make effective use of resampling to construct stepdown methods that implicitly estimate the dependence structure of the test statistics. However, their methods depend on an assumption called subset pivotality. The goal of this paper is to construct general stepdown methods that do not require such an assumption. In order to accomplish this, we take a close look at what makes stepdown procedures work, and a key component is a monotonicity requirement of critical values. By imposing such monotonicity on estimated critical values (which is not an assumption on the model but an assumption on the method), it is demonstrated that the problem of constructing a valid multiple test procedure which controls the FWE can be reduced to the problem of constructing a single test which controls the usual probability of a Type 1 error. This reduction allows us to draw upon an enormous resampling literature as a general means of test construction.
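A simplified maxT-style stepdown in the Westfall-Young spirit (not this paper's construction) makes the monotonicity idea concrete. Here `null_draws` stands for a hypothetical (B x k) matrix of resampled test statistics under the null, however obtained; the critical values are monotone by construction, because each one is a quantile of a maximum over a shrinking set of not-yet-rejected hypotheses:

```python
import numpy as np

def maxT_stepdown(stats, null_draws, alpha=0.05):
    """Stepdown testing with resampled max statistics.
    stats: length-k array of observed test statistics (large = significant).
    null_draws: (B, k) array of resampled statistics under the null."""
    k = len(stats)
    order = np.argsort(stats)[::-1]          # most significant first
    reject = np.zeros(k, dtype=bool)
    active = list(order)                     # hypotheses not yet rejected
    for idx in order:
        # critical value: (1 - alpha) quantile of the max over active hypotheses
        c = np.quantile(null_draws[:, active].max(axis=1), 1 - alpha)
        if stats[idx] > c:
            reject[idx] = True
            active.remove(idx)               # shrinking max => monotone critical values
        else:
            break                            # stepdown stops at first acceptance
    return reject
```

Shrinking the set over which the maximum is taken can only lower subsequent critical values, which is exactly the monotonicity property the paper identifies as what makes stepdown procedures valid.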


The objective of this paper is to compare the performance of two predictive radiological models, logistic regression (LR) and neural network (NN), with five different resampling methods. One hundred and sixty-seven patients with proven calvarial lesions as the only known disease were enrolled. Clinical and CT data were used for the LR and NN models. Both models were developed with cross-validation, leave-one-out and three different bootstrap algorithms. The final results of each model were compared with the error rate and the area under the receiver operating characteristic curve (Az). The neural network obtained a statistically higher Az than LR with cross-validation. The remaining resampling validation methods did not reveal statistically significant differences between the LR and NN rules. The neural network classifier performs better than the one based on logistic regression. This advantage is well detected by three-fold cross-validation, but remains unnoticed when leave-one-out or bootstrap algorithms are used.
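The comparison above hinges on how the misclassification (error) rate is estimated by resampling. A minimal sketch of k-fold cross-validation, with a toy nearest-centroid classifier standing in for the paper's LR and NN models (all names here are our own):

```python
import numpy as np

def cv_error_rate(X, y, classify_fit, k=3, seed=0):
    """k-fold cross-validated misclassification rate.
    classify_fit(Xtr, ytr) must return a prediction function."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errors = 0
    for f in range(k):
        test = folds[f]
        train = np.concatenate([folds[g] for g in range(k) if g != f])
        predict = classify_fit(X[train], y[train])      # fit on k-1 folds
        errors += int(np.sum(predict(X[test]) != y[test]))
    return errors / len(y)

def nearest_centroid_fit(Xtr, ytr):
    """Toy classifier standing in for the paper's LR / NN models."""
    labels = np.array(sorted(np.unique(ytr)))
    C = np.stack([Xtr[ytr == c].mean(axis=0) for c in labels])
    return lambda X: labels[np.argmin(((X[:, None, :] - C) ** 2).sum(-1), axis=1)]
```

Leave-one-out is the special case k = len(y); the paper's finding is that the choice among such resampling schemes can change whether a real performance difference between models is detected at all.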


Detecting local differences between groups of connectomes is a great challenge in neuroimaging, because of the large number of tests that have to be performed and the impact of multiplicity correction. Any available information should be exploited to increase the power of detecting true between-group effects. We present an adaptive strategy that exploits the data structure and the prior information concerning positive dependence between nodes and connections, without relying on strong assumptions. As a first step, we decompose the brain network, i.e., the connectome, into subnetworks and apply a screening at the subnetwork level. The subnetworks are defined either according to prior knowledge or by applying a data-driven algorithm. Given the results of the screening step, a filtering is performed to seek real differences at the node/connection level. The proposed strategy can be used to strongly control either the family-wise error rate or the false discovery rate. We show by means of different simulations the benefit of the proposed strategy, and we present a real application comparing connectomes of preschool children and adolescents.
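A toy sketch of the two-step shape of such a strategy (screen subnetworks first, then filter individual connections), using a Bonferroni-style min-p screen and Benjamini-Hochberg for the FDR branch; this illustrates the idea, not the authors' actual procedure:

```python
def bh_reject(pvals, q=0.05):
    """Benjamini-Hochberg step-up: boolean rejections at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    thresh = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:      # largest rank passing the BH line
            thresh = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= thresh:
            reject[i] = True
    return reject

def screen_then_filter(subnetworks, q=0.05):
    """Two-step sketch: keep subnetworks whose Bonferroni-adjusted minimum
    p-value passes the screen, then run BH only on surviving connections."""
    survivors = [ps for ps in subnetworks if min(ps) * len(ps) <= q]
    flat = [p for ps in survivors for p in ps]
    return bh_reject(flat, q) if flat else []
```

Because whole subnetworks are discarded at the screening step, the filtering step corrects over far fewer connection-level tests, which is where the power gain comes from.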


The aim of this study was to calibrate the CENTURY, APSIM and NDICEA simulation models for estimating decomposition and N mineralization rates of plant organic materials (Arachis pintoi, Calopogonium mucunoides, Stizolobium aterrimum, Stylosanthes guyanensis) over 360 days in the Atlantic rainforest biome of Brazil. The models' default settings overestimated the decomposition and N mineralization of plant residues, underlining the fact that the models must be calibrated for use under tropical conditions. For example, the APSIM model simulated the decomposition of the Stizolobium aterrimum and Calopogonium mucunoides residues with error rates of 37.62 and 48.23%, respectively, by comparison with the observed data, and was the least accurate model in the absence of calibration. At the default settings, the NDICEA model produced error rates of 10.46 and 14.46%, and the CENTURY model of 21.42 and 31.84%, respectively, for Stizolobium aterrimum and Calopogonium mucunoides residue decomposition. After calibration, the models showed a high level of accuracy in estimating decomposition and N mineralization, with an error rate of less than 20%. The calibrated NDICEA model showed the highest level of accuracy, followed by APSIM and CENTURY. All models performed poorly in the first few months of decomposition and N mineralization, indicating the need for an additional parameter for initial microorganism growth on the residues that would take the effect of leaching due to rainfall into account.
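The abstract does not define how its percentage error rates are computed; a common choice for comparing a simulated decomposition series with observed data is the mean absolute percentage error, sketched here as an assumption rather than the study's actual metric:

```python
def mean_abs_pct_error(observed, simulated):
    """Mean absolute percentage error (%) between paired observed and
    simulated values, e.g. residue mass remaining at each sampling date."""
    pairs = list(zip(observed, simulated))
    return 100.0 * sum(abs(o - s) / abs(o) for o, s in pairs) / len(pairs)
```

Under this metric, "an error rate of less than 20%" means the simulated values deviate from the observations by under 20% on average across sampling dates.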


The value of earmarks as an efficient means of personal identification is still subject to debate. It has been argued that the field lacks a firm, systematic and structured data basis to help practitioners form their conclusions. Typically, there is a paucity of research guiding practitioners as to the selectivity of the features used in the comparison process between an earmark and reference earprints taken from an individual. This study proposes a system for the automatic comparison of earprints and earmarks, operating without any manual extraction of key points or manual annotations. For each donor, a model is created using multiple reference prints, hence capturing the donor's within-source variability. For each comparison between a mark and a model, the images are automatically aligned and a proximity score, based on a normalized 2D correlation coefficient, is calculated. Appropriate use of this score allows deriving a likelihood ratio that can be explored under known states of affairs (both in cases where it is known that the mark was left by the donor that gave the model, and conversely in cases where it is established that the mark originates from a different source). To assess the system performance, a first dataset containing 1229 donors, elaborated during the FearID research project, was used. Based on these data, for mark-to-print comparisons the system performed with an equal error rate (EER) of 2.3%, and about 88% of marks were found in the first 3 positions of a hit list. For print-to-print transactions, the results show an equal error rate of 0.5%. The system was then tested using real-case data obtained from police forces.
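The proximity score above is a normalized 2D correlation coefficient between the aligned images. For two equally sized arrays this is simply the Pearson correlation of their pixel values, which a short NumPy sketch makes concrete (the metric only, not the paper's alignment or likelihood-ratio machinery):

```python
import numpy as np

def ncc(a, b):
    """Normalized 2D correlation coefficient between two aligned,
    equally sized images; ranges from -1 to 1, with 1 = identical
    up to brightness and contrast."""
    a = a - a.mean()               # remove mean brightness
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))
```

Thresholding such scores over many same-source and different-source comparisons is what yields the reported equal error rate: the operating point where false matches and false non-matches are equally frequent.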


BACKGROUND: Therapy of chronic hepatitis C (CHC) with pegIFNα/ribavirin achieves a sustained virologic response (SVR) in ∼55% of patients. Pre-activation of the endogenous interferon system in the liver is associated with non-response (NR). Recently, genome-wide association studies described associations of allelic variants near the IL28B (IFNλ3) gene with treatment response and with spontaneous clearance of the virus. We investigated whether the IL28B genotype determines the constitutive expression of IFN-stimulated genes (ISGs) in the liver of patients with CHC. METHODS: We genotyped 93 patients with CHC for 3 IL28B single nucleotide polymorphisms (SNPs: rs12979860, rs8099917, rs12980275), extracted RNA from their liver biopsies and quantified the expression of IL28B and of 8 previously identified classifier genes which discriminate between SVR and NR (IFI44L, RSAD2, ISG15, IFI22, LAMP3, OAS3, LGALS3BP and HTATIP2). Decision tree ensembles in the form of a random forest classifier were used to calculate the relative predictive power of these different variables in a multivariate analysis. RESULTS: The minor IL28B allele (bad risk for treatment response) was significantly associated with increased expression of ISGs and, unexpectedly, with decreased expression of IL28B. Stratification of the patients into SVR and NR revealed that ISG expression was conditionally independent of the IL28B genotype, i.e. there was an increased expression of ISGs in NR compared to SVR irrespective of the IL28B genotype. The random forest feature score (RFFS) identified IFI27 (RFFS = 2.93), RSAD2 (1.88) and HTATIP2 (1.50) expression and the HCV genotype (1.62) as the strongest predictors of treatment response. ROC curves of the IL28B SNPs showed an AUC of 0.66 with an error rate (ERR) of 0.38. A classifier with the 3 best classifying genes showed an excellent test performance with an AUC of 0.94 and an ERR of 0.15.
The addition of IL28B genotype information did not improve the predictive power of the 3-gene classifier. CONCLUSIONS: IL28B genotype and hepatic ISG expression are conditionally independent predictors of treatment response in CHC. There is no direct link between altered IFNλ3 expression and pre-activation of the endogenous interferon system in the liver. Hepatic ISG expression is a far better predictor of treatment response than the IL28B genotype.
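The AUC values reported above can be computed directly from classifier scores without drawing an explicit ROC curve, via the rank (Mann-Whitney) formulation: the AUC is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the fraction of positive/negative pairs ranked correctly
    (ties count half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.66 (the IL28B SNPs) means a responder outranks a non-responder only about two-thirds of the time, while the 3-gene expression classifier's 0.94 does so almost always, which is the gap the conclusions rest on.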


This article describes the development of an open-source shallow-transfer machine translation system from Czech to Polish on the Apertium platform. It gives details of the methods and resources used in constructing the system. Although the resulting system has quite a high error rate, it is still competitive with other systems.
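For machine translation output, the error rate is typically a word error rate: the word-level Levenshtein (edit) distance between the system output and a reference translation, divided by the reference length. A self-contained sketch of the metric (not Apertium's own evaluation code):

```python
def word_error_rate(reference, hypothesis):
    """WER: word-level edit distance (substitutions, insertions,
    deletions) divided by the number of reference words."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                      # delete all reference words
    for j in range(len(h) + 1):
        d[0][j] = j                      # insert all hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)
```

A WER can exceed 1.0 for very bad output, which is why even a system with "quite a high error rate" can still compare favourably against alternatives scored the same way.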


Localization, the ability of a mobile robot to estimate its position within its environment, is a key capability for the autonomous operation of any mobile robot. This thesis presents a system for indoor coarse and global localization of a mobile robot based on visual information. The system is based on image matching and uses SIFT features as natural landmarks. Features extracted from training images are stored in a database for later use in localization. During localization, an image of the scene is captured using the on-board camera of the robot, features are extracted from the image, and the best match is searched for in the database. Feature matching is done using the k-d tree algorithm. Experimental results showed that localization accuracy increases with the number of features in the training database, while, on the other hand, an increasing number of features tended to have a negative impact on computation time. For some parts of the environment the error rate was relatively high, due to a strong correlation of features taken from those places across the environment.
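Feature matching with a k-d tree works by recursively splitting the descriptor space on alternating axes and, at query time, pruning subtrees that cannot contain a closer neighbour. A minimal pure-Python sketch of the data structure (2-D points stand in for high-dimensional SIFT descriptors):

```python
def build_kdtree(points, depth=0):
    """Recursively build a k-d tree over a list of feature vectors,
    splitting on the median along a cycling axis."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    """Nearest-neighbour search; returns (squared_distance, point)."""
    if node is None:
        return best
    d = sum((a - b) ** 2 for a, b in zip(node["point"], query))
    if best is None or d < best[0]:
        best = (d, node["point"])
    axis = node["axis"]
    diff = query[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    if diff ** 2 < best[0]:            # other side could still hold a closer point
        best = nearest(far, query, best)
    return best
```

The pruning test explains the thesis's timing observation: with more (and more correlated) features in the database, fewer subtrees can be pruned and query time grows.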


The aim of this Master's thesis was to study noise removal from spectral images using soft morphological filters, with an emphasis on filtering impulse-type noise. Filter performance was evaluated numerically by mean absolute error, mean squared error and signal-to-noise ratio, and visually by inspecting the filtered images and their individual spectral planes. The filtering methods used were pixelwise filtering along the spectral dimension, filtering over the full spectrum, a cube-based method, and componentwise filtering. The test images contained either salt-and-pepper or bit-error noise. The best filtering results, by both the numerical error criteria and visual inspection, were obtained with the componentwise and pixelwise methods. The methods used are presented in algorithmic form. The filter implementations and the filtering experiments were carried out in Matlab.
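The simplest relative of the soft morphological filters evaluated in the thesis, and the classic baseline for salt-and-pepper (impulse) noise, is a plain median filter applied per spectral band; a NumPy sketch on a single band:

```python
import numpy as np

def median_filter3(band):
    """3x3 median filter on one spectral band: each pixel is replaced
    by the median of its neighbourhood, which discards isolated
    impulse (salt-and-pepper) values."""
    padded = np.pad(band, 1, mode="edge")    # replicate edges so output size matches
    out = np.empty_like(band)
    h, w = band.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out
```

Soft morphological filters generalize this idea by weighting the structuring element's core more heavily than its boundary, trading some impulse rejection for better preservation of fine detail.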


In the future, mobile devices such as mobile phones and PDAs will be able to establish network connections using different access methods in different situations. These access methods have differing communication characteristics with respect to latency, bandwidth, error rate and so on. Wireless access methods are also characterized by strong variation of the connection's properties with the environment. To achieve the best performance and usability, a mobile device must be able to adapt to the communication method in use and to changes in the communication environment. An essential part of data communication are protocol stacks, which enable communication between systems by offering network services to the end device's user applications. For protocol stacks to adapt to the characteristics of a particular communication environment, it must be possible to change the behavior of the stack at run time. Traditionally, however, protocol stacks have been built to be immutable, so that adaptation on this scale is very difficult, if not impossible, to implement. This Master's thesis discusses building adaptive protocol stacks using a component-based software framework that allows protocol stacks to be modified at run time. By implementing an example system and measuring its performance in a varying communication environment, we show that adaptive protocol stacks are feasible to build and that they offer significant advantages, especially in future mobile devices.


In this paper we propose the use of the independent component analysis (ICA) [1] technique for improving the classification rate of decision trees and multilayer perceptrons [2], [3]. Using ICA in the preprocessing stage makes the structure of both classifiers simpler and therefore improves their generalization properties. The hypothesis behind the proposed preprocessing is that an ICA analysis will transform the feature space into a space where the components are independent and aligned to the axes, and therefore better adapted to the way a decision tree is constructed. The inference of the weights of a multilayer perceptron will also be much easier, because the gradient search in the weight space will follow independent trajectories. The result is that the classifiers are less complex, and on some databases the error rate is lower. The idea is also applicable to regression.
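Full ICA is beyond a short sketch, but its standard first step, whitening, already illustrates the preprocessing idea: rotate and rescale the features so they are decorrelated with unit variance, after which axis-aligned tree splits (and gradient steps) act on far less entangled coordinates. A hedged NumPy sketch (whitening only; ICA additionally rotates to maximize statistical independence):

```python
import numpy as np

def whiten(X):
    """ZCA-style whitening: center the data, then multiply by the
    inverse square root of the covariance so that the transformed
    features are uncorrelated with unit variance."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(X)
    vals, vecs = np.linalg.eigh(cov)             # cov = V diag(vals) V^T
    W = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return Xc @ W
```

On correlated inputs, a decision tree needs staircases of axis-aligned splits to trace an oblique class boundary; after a decorrelating transform the same boundary can often be captured with far fewer splits, which is the simplification the paper attributes to ICA preprocessing.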


Biomedical research is currently facing a new type of challenge: an excess of information, both in terms of raw data from experiments and in the number of scientific publications describing their results. Mirroring the focus on data mining techniques to address the issues of structured data, there has recently been great interest in the development and application of text mining techniques to make more effective use of the knowledge contained in biomedical scientific publications, accessible only in the form of natural human language. This thesis describes research done in the broader scope of projects aiming to develop methods, tools and techniques for text mining tasks in general and for the biomedical domain in particular. The work described here involves more specifically the goal of extracting information from statements concerning relations of biomedical entities, such as protein-protein interactions. The approach taken is one using full parsing (syntactic analysis of the entire structure of sentences) and machine learning, aiming to develop reliable methods that can further be generalized to apply also to other domains. The five papers at the core of this thesis describe research on a number of distinct but related topics in text mining. In the first of these studies, we assessed the applicability of two popular general English parsers to biomedical text mining and, finding their performance limited, identified several specific challenges to accurate parsing of domain text. In a follow-up study focusing on parsing issues related to specialized domain terminology, we evaluated three lexical adaptation methods. We found that the accurate resolution of unknown words can considerably improve parsing performance and introduced a domain-adapted parser that reduced the error rate of the original by 10% while also roughly halving parsing time.
To establish the relative merits of parsers that differ in the applied formalisms and the representation given to their syntactic analyses, we have also developed evaluation methodology, considering different approaches to establishing comparable dependency-based evaluation results. We introduced a methodology for creating highly accurate conversions between different parse representations, demonstrating the feasibility of unifying diverse syntactic schemes under a shared, application-oriented representation. In addition to allowing formalism-neutral evaluation, we argue that such unification can also increase the value of parsers for domain text mining. As a further step in this direction, we analysed the characteristics of publicly available biomedical corpora annotated for protein-protein interactions and created tools for converting them into a shared form, thus contributing also to the unification of text mining resources. The introduced unified corpora allowed us to perform a task-oriented comparative evaluation of biomedical text mining corpora. This evaluation established clear limits on the comparability of results for text mining methods evaluated on different resources, prompting further efforts toward standardization. To support this and other research, we have also designed and annotated BioInfer, the first domain corpus of its size combining annotation of syntax and biomedical entities with a detailed annotation of their relationships. The corpus represents a major design and development effort of the research group, with manual annotation that identifies over 6000 entities, 2500 relationships and 28,000 syntactic dependencies in 1100 sentences. In addition to combining these key annotations for a single set of sentences, BioInfer was also the first domain resource to introduce a representation of entity relations that is supported by ontologies and able to capture complex, structured relationships.
Part I of this thesis presents a summary of this research in the broader context of a text mining system, and Part II contains reprints of the five included publications.