992 results for Non-binary arithmetic


Relevance: 80.00%

Abstract:

A variation of low-density parity check (LDPC) error-correcting codes defined over Galois fields (GF(q)) is investigated using statistical physics. A code of this type is characterised by a sparse random parity check matrix with C non-zero elements per column. We examine the dependence of code performance on the value of q, for finite and infinite C, both in terms of the thermodynamical transition point and of the practical decoding phase characterised by the existence of a unique (ferromagnetic) solution. We find a different q-dependence in the cases C = 2 and C ≥ 3; the analytical solutions agree with simulation results, providing a quantitative measure of the improvement in performance obtained using non-binary alphabets.
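As a toy illustration of the non-binary setting described above (not the paper's statistical-physics analysis): for prime q, GF(q) arithmetic is simply arithmetic mod q, and a word x is a codeword exactly when its syndrome Hx vanishes. The matrix below is hypothetical and, for brevity, has one non-zero entry per column rather than C.

```python
def syndrome(H, x, q):
    # s = H·x over GF(q), q prime; x is a codeword iff every entry of s is 0
    return [sum(h * xi for h, xi in zip(row, x)) % q for row in H]

q = 5
H = [[0, 1, 2, 0],      # toy sparse parity-check matrix over GF(5)
     [3, 0, 0, 1]]
x = [1, 0, 0, 2]        # 3*1 + 1*2 = 5 ≡ 0 (mod 5), so both checks pass
print(syndrome(H, x, q))  # [0, 0]
```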

Relevance: 80.00%

Abstract:

We study the performance of Low Density Parity Check (LDPC) error-correcting codes using the methods of statistical physics. LDPC codes are based on the generation of codewords using Boolean sums of the original message bits by employing two randomly-constructed sparse matrices. These codes can be mapped onto Ising spin models and studied using common methods of statistical physics. We examine various regular constructions and obtain insight into their theoretical and practical limitations. We also briefly report on results obtained for irregular code constructions, for codes with non-binary alphabets, and on how finite system size affects the error probability.

Relevance: 80.00%

Abstract:

A multi-chromosome GA (Multi-GA) was developed, based upon concepts from the natural world, allowing improved flexibility in a number of areas including representation, genetic operators, their parameter rates and real world multi-dimensional applications. A series of experiments were conducted, comparing the performance of the Multi-GA to a traditional GA on a number of recognised and increasingly complex test optimisation surfaces, with promising results. Further experiments demonstrated the Multi-GA's flexibility through the use of non-binary chromosome representations and its applicability to dynamic parameterisation. A number of alternative and new methods of dynamic parameterisation were investigated, in addition to a new non-binary 'Quotient crossover' mechanism. Finally, the Multi-GA was applied to two real world problems, demonstrating its ability to handle mixed-type chromosomes within an individual, the limited use of a chromosome level fitness function, the introduction of new genetic operators for structural self-adaptation and its viability as a serious real world analysis tool. The first problem involved optimum placement of computers within a building, allowing the Multi-GA to use multiple chromosomes with different type representations and different operators in a single individual. The second problem, commonly associated with Geographical Information Systems (GIS), required spatial analysis to locate the optimum number and distribution of retail sites over two different population grids. In applying the Multi-GA, two new genetic operators (addition and deletion) were developed and explored, resulting in the definition of a mechanism for self-modification of genetic material within the Multi-GA structure and a study of this behaviour.
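The multi-chromosome idea can be sketched as follows; the chromosome types, operators and rates here are purely illustrative and are not the thesis's actual Multi-GA design.

```python
import random

def mutate_real(genes, rate=0.1):
    # Gaussian perturbation for a real-valued chromosome
    return [g + random.gauss(0, 1) if random.random() < rate else g for g in genes]

def mutate_int(genes, rate=0.1):
    # resampling for a non-binary integer chromosome
    return [random.randint(0, 9) if random.random() < rate else g for g in genes]

# one individual = several chromosomes, each with its own representation and
# its own operator (and, in the Multi-GA, its own parameter rates)
individual = [
    {"genes": [0.5, 1.2, -0.3], "mutate": mutate_real},
    {"genes": [3, 7, 1],        "mutate": mutate_int},
]

def mutate(ind):
    # apply each chromosome's own operator independently
    return [{"genes": c["mutate"](c["genes"]), "mutate": c["mutate"]} for c in ind]

child = mutate(individual)
```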

Relevance: 80.00%

Abstract:

Genes, which encode the biological functions of living organisms, are the basic molecular unit of heredity. To explain the diversity of species observed today, it is essential to understand how genes evolve. To do so, we must reconstruct the past by inferring their phylogeny, that is, a gene tree representing the kinship relationships among the coding regions of living organisms. Classical phylogenetic inference methods were designed mainly to build species trees and rely only on DNA sequences. Genes, however, are rich in information, and reconstruction methods that exploit their specific properties are only beginning to appear. In particular, the history of a gene family in terms of duplications and losses, obtained by reconciling a gene tree with a species tree, can help us detect weaknesses in a tree and improve it. In this thesis, reconciliation is applied to the construction and correction of gene trees from three different angles: 1) We address the problem of resolving a non-binary gene tree. In particular, we present a linear-time algorithm that resolves a polytomy based on reconciliation. 2) We propose a new approach to correcting gene trees using orthology and paralogy relations. Polynomial-time algorithms are presented for the following problems: correcting a gene tree so that it contains a given set of orthologs, and validating a set of partial orthology and paralogy relations. 3) We show how reconciliation can be used to "combine" several gene trees. More precisely, we study the problem of choosing a gene supertree according to its reconciliation cost.
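The duplication counting that underlies reconciliation can be sketched with the classical LCA mapping (a generic textbook construction, not the thesis's algorithms; the nested-tuple tree encoding and the gene_species leaf-naming convention are assumptions made for this sketch):

```python
def sp_leaves(t):
    # species appearing under a species-tree node
    return {t} if isinstance(t, str) else sp_leaves(t[0]) | sp_leaves(t[1])

def lca(sp_tree, species):
    # lowest species-tree node whose leaf set contains `species`
    if isinstance(sp_tree, str):
        return sp_tree
    for child in sp_tree:
        if species <= sp_leaves(child):
            return lca(child, species)
    return sp_tree

def leaf_species(g):
    # species of gene leaf "g1_A" is the part after the underscore
    if isinstance(g, str):
        return {g.split("_")[1]}
    return leaf_species(g[0]) | leaf_species(g[1])

def count_duplications(g, s):
    # a gene node mapped to the same species node as one of its
    # children is a duplication; everything else is a speciation
    if isinstance(g, str):
        return 0
    m = lca(s, leaf_species(g))
    dup = any(lca(s, leaf_species(c)) == m for c in g)
    return int(dup) + sum(count_duplications(c, s) for c in g)

g_tree = (("g1_A", "g2_A"), "g3_B")   # hypothetical gene tree
s_tree = (("A", "B"), "C")            # hypothetical species tree
print(count_duplications(g_tree, s_tree))  # 1: the (g1_A, g2_A) node and its children all map to A
```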

Relevance: 80.00%

Abstract:

This paper revisits strongly-MDS convolutional codes with a maximum distance profile (MDP). These are (non-binary) convolutional codes that have an optimum sequence of column distances and attain the generalized Singleton bound at the earliest possible time frame. These properties make such codes well suited to the erasure channel, since they can correct a large number of erasures per time interval. The existence of these codes had previously been shown only in some specific cases. This paper shows by construction the existence of convolutional codes that are both strongly-MDS and MDP for all choices of parameters.
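The column distances mentioned above can be computed by brute force for a tiny code: d_j is the minimum weight of the first j+1 output blocks over inputs whose first symbol is non-zero. The rate-1/2, memory-1 encoder over GF(3) below is an illustrative example, not one of the paper's constructions.

```python
from itertools import product

def column_distance(j, q, g0, g1):
    # v_t = u_t*g0 + u_(t-1)*g1 over GF(q), q prime; minimise the Hamming
    # weight of blocks v_0..v_j over all inputs with u_0 != 0
    best = None
    for u in product(range(q), repeat=j + 1):
        if u[0] == 0:
            continue
        w = 0
        for t in range(j + 1):
            prev = u[t - 1] if t > 0 else 0
            block = [(u[t] * a + prev * b) % q for a, b in zip(g0, g1)]
            w += sum(1 for s in block if s != 0)
        best = w if best is None else min(best, w)
    return best

# toy rate-1/2 encoder over GF(3) with g0 = (1, 1), g1 = (1, 2)
print([column_distance(j, 3, (1, 1), (1, 2)) for j in range(3)])  # [2, 3, 4]
```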


Relevance: 80.00%

Abstract:

The main objective of the framework we propose is to help the physician obtain information about the patient's condition in order to reach the correct diagnosis as soon as possible. In our proposal, the number of interactions between physician and patient is reduced to a strict minimum on the one hand while, on the other hand, the number of questions asked can be increased if uncertainty about the diagnosis persists. These advantages stem from the fact that (i) we implement a reasoning component that can predict a symptom from another symptom without explicitly asking the patient, (ii) we consider non-binary values for the weights associated with the symptoms, (iii) we introduce a dataset filtering process in order to choose which partition should be used with respect to particular characteristics of the patient, and (iv) new functionality was added to the framework: the ability to detect future risks for a patient whose pathology is already known. The experimental results we obtained are very encouraging.

Relevance: 40.00%

Abstract:

In this brief, a structure without read-only memory for binary-to-residue number system (RNS) conversion modulo {2^n ± k} is proposed. This structure is based only on adders and constant multipliers. The brief is motivated by the existing {2^n ± k} binary-to-RNS converters, which are particularly inefficient for larger values of n. The experimental results obtained for dynamic ranges of 4n and 8n bits suggest that the proposed conversion structures significantly improve forward conversion efficiency, with an AT-metric improvement above 100% over the related state of the art. Delay improvements of 2.17 times with only a 5% area increase can be achieved if a proper selection of the {2^n ± k} moduli is performed.
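The adders-and-constant-multipliers principle behind {2^n − k} forward conversion can be sketched in software (a behavioural model only, not the brief's actual hardware structure). It relies on 2^n ≡ k (mod 2^n − k), so the high part of the input folds into the low part through one multiplication by k per iteration:

```python
def mod_2n_minus_k(x, n, k):
    # reduce x modulo m = 2^n - k using only splits, a constant
    # multiplication by k, and additions
    m = (1 << n) - k
    while x >= (1 << n):
        lo = x & ((1 << n) - 1)   # low n bits
        hi = x >> n               # remaining high bits
        x = hi * k + lo           # valid because 2^n ≡ k (mod m)
    return x % m                  # final small correction for x in [m, 2^n)

print(mod_2n_minus_k(123456789, 8, 3))  # equals 123456789 % 253
```

For {2^n + k} moduli the same folding works with 2^n ≡ −k, i.e. the high part is subtracted after the multiplication by k.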

Relevance: 40.00%

Abstract:

When a new treatment is compared to an established one in a randomized clinical trial, it is standard practice to test statistically for non-inferiority rather than for superiority. When the endpoint is binary, the two treatments are usually compared using either an odds-ratio or a difference of proportions. In this paper, we propose a mixed approach that uses both concepts. One first defines the non-inferiority margin using an odds-ratio and ultimately proves non-inferiority statistically using a difference of proportions. The mixed approach is shown to be more powerful than the conventional odds-ratio approach when the efficacy of the established treatment is known (with good precision) and high (e.g. more than 56% success). The gain in power achieved may in turn lead to a substantial reduction in the sample size needed to prove non-inferiority. The mixed approach can be generalized to ordinal endpoints.
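The mixed idea can be sketched numerically: define the margin as an odds-ratio, translate it at an assumed control success rate into a difference of proportions, and test that difference with an ordinary z statistic. The values below (60% control success, OR margin 0.5, made-up trial counts) are illustrative, not the paper's.

```python
import math

def or_margin_to_diff(p0, psi):
    # success rate of a treatment whose odds are psi times the control odds
    odds = psi * p0 / (1 - p0)
    p1 = odds / (1 + odds)
    return p0 - p1                 # proportion-difference margin

def z_noninferiority(x1, n1, x0, n0, delta):
    # H0: p_control - p_treat >= delta vs H1: < delta; non-inferiority is
    # claimed when the statistic is sufficiently negative (e.g. below -1.96)
    p1, p0h = x1 / n1, x0 / n0
    se = math.sqrt(p1 * (1 - p1) / n1 + p0h * (1 - p0h) / n0)
    return ((p0h - p1) - delta) / se

delta = or_margin_to_diff(0.60, 0.5)          # ~0.17 at 60% control success
z = z_noninferiority(112, 200, 120, 200, delta)
```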

Relevance: 40.00%

Abstract:

A score test is developed for binary clinical trial data, which incorporates patient non-compliance while respecting randomization. It is assumed in this paper that compliance is all-or-nothing, in the sense that a patient either accepts all of the treatment assigned as specified in the protocol, or none of it. Direct analytic comparisons of the adjusted test statistic for both the score test and the likelihood ratio test are made with the corresponding test statistics that adhere to the intention-to-treat principle. It is shown that no gain in power is possible over the intention-to-treat analysis, by adjusting for patient non-compliance. Sample size formulae are derived and simulation studies are used to demonstrate that the sample size approximation holds. Copyright © 2003 John Wiley & Sons, Ltd.

Relevance: 40.00%

Abstract:

This paper considers methods for testing for superiority or non-inferiority in active-control trials with binary data, when the relative treatment effect is expressed as an odds ratio. Three asymptotic tests for the log-odds ratio based on the unconditional binary likelihood are presented, namely the likelihood ratio, Wald and score tests. All three tests can be implemented straightforwardly in standard statistical software packages, as can the corresponding confidence intervals. Simulations indicate that the three alternatives are similar in terms of the Type I error, with values close to the nominal level. However, when the non-inferiority margin becomes large, the score test slightly exceeds the nominal level. In general, the highest power is obtained from the score test, although all three tests are similar and the observed differences in power are not of practical importance. Copyright © 2007 John Wiley & Sons, Ltd.
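Of the three tests, the Wald version is the easiest to state directly from a 2x2 table (the score and likelihood-ratio statistics require the maximum likelihood estimate constrained to the margin and are omitted here); the counts below are made up for illustration.

```python
import math

def wald_log_or(a, b, c, d, log_psi0=0.0):
    # a, b: treatment successes/failures; c, d: control successes/failures;
    # log_psi0 is the log of the null odds-ratio (0 for superiority testing,
    # the log non-inferiority margin otherwise)
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return (log_or - log_psi0) / se      # compare to a standard normal quantile

z = wald_log_or(45, 15, 40, 20)
```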

Relevance: 40.00%

Abstract:

This paper presents an approximate closed form sample size formula for determining non-inferiority in active-control trials with binary data. We use the odds-ratio as the measure of the relative treatment effect, derive the sample size formula based on the score test and compare it with a second, well-known formula based on the Wald test. Both closed form formulae are compared with simulations based on the likelihood ratio test. Within the range of parameter values investigated, the score test closed form formula is reasonably accurate when non-inferiority margins are based on odds-ratios of about 0.5 or above and when the magnitude of the odds ratio under the alternative hypothesis lies between about 1 and 2.5. The accuracy generally decreases as the odds ratio under the alternative hypothesis moves upwards from 1. As the non-inferiority margin odds ratio decreases from 0.5, the score test closed form formula increasingly overestimates the sample size irrespective of the magnitude of the odds ratio under the alternative hypothesis. The Wald test closed form formula is also reasonably accurate in the cases where the score test closed form formula works well. Outside these scenarios, the Wald test closed form formula can either underestimate or overestimate the sample size, depending on the magnitude of the non-inferiority margin odds ratio and the odds ratio under the alternative hypothesis. Although neither approximation is accurate for all cases, both approaches lead to satisfactory sample size calculation for non-inferiority trials with binary data where the odds ratio is the parameter of interest.
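A Wald-type closed form of the kind compared above can be sketched as follows; this is the standard textbook shape (per-group sample size, 1:1 allocation), and the paper's exact expression may differ in detail.

```python
import math

Z_975, Z_80 = 1.959964, 0.841621   # normal quantiles: alpha = 0.025, power = 0.8

def n_per_group(p_c, or_alt, or_margin, z_a=Z_975, z_b=Z_80):
    # expected treatment success rate under the alternative odds-ratio
    odds_t = or_alt * p_c / (1 - p_c)
    p_t = odds_t / (1 + odds_t)
    # Wald variance of the log-odds ratio, per subject in each group
    v = 1 / (p_t * (1 - p_t)) + 1 / (p_c * (1 - p_c))
    return math.ceil((z_a + z_b) ** 2 * v
                     / (math.log(or_alt) - math.log(or_margin)) ** 2)

# e.g. 60% control success, no true difference, non-inferiority margin OR = 0.5
n = n_per_group(0.60, 1.0, 0.5)
```

Consistent with the text above, shrinking the margin odds-ratio towards 1 (a stricter margin) inflates the required sample size.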

Relevance: 40.00%

Abstract:

In the past few decades, detailed observations of radio and X-ray emission from massive binary systems revealed a whole new physics present in such systems. Both thermal and non-thermal components of this emission indicate that most of the radiation at these bands originates in shocks. O and B-type stars and Wolf-Rayet (WR) stars present supersonic and massive winds that, when colliding, emit largely due to free-free radiation. The non-thermal radio and X-ray emissions are due to synchrotron and inverse Compton processes, respectively. In this case, magnetic fields are expected to play an important role in the emission distribution. In the past few years the modelling of the free-free and synchrotron emissions from massive binary systems has been based on purely hydrodynamical simulations and ad hoc assumptions regarding the distribution of magnetic energy and the field geometry. In this work we provide the first full magnetohydrodynamic numerical simulations of wind-wind collision in massive binary systems. We study the free-free emission, characterizing its dependence on the stellar and orbital parameters. We also study self-consistently the evolution of the magnetic field at the shock region, obtaining the synchrotron energy distribution integrated along different lines of sight. We show that the magnetic field in the shocks is larger than that obtained when proportionality between B and the plasma density is assumed. Also, we show that the role of the synchrotron emission relative to the total radio emission has been underestimated.

Relevance: 40.00%

Abstract:

DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT

Relevance: 30.00%

Abstract:

OBJECTIVE: The nutritional, immunological and psychological benefits of exclusive breastfeeding for the first 6 months of life are unequivocally recognized. However, mothers should also be aware of the importance of breastfeeding for promoting adequate oral development. This study evaluated the association between breastfeeding and non-nutritive sucking patterns and the prevalence of anterior open bite in the primary dentition. MATERIAL AND METHODS: Infant feeding and non-nutritive sucking were investigated in a sample of 1,377 children aged 3-6 years from the city of São Paulo, Brazil. Children were grouped according to breastfeeding duration: G1 - non-breastfed, G2 - shorter than 6 months, G3 - interruption between 6 and 12 months, and G4 - longer than 12 months. Three calibrated dentists performed clinical examinations and classified overbite into 3 categories: normal, anterior open bite and deep bite. Chi-square tests (p<0.05) with odds ratio (OR) calculation were used for intergroup comparisons. The impact of breastfeeding and non-nutritive sucking on the prevalence of anterior open bite was analyzed using binary logistic regression. RESULTS: The prevalence estimates of anterior open bite were: 31.9% (G1), 26.1% (G2), 22.1% (G3), and 6.2% (G4). G1 had significantly greater chances of having anterior open bite compared with G4, both in the total sample (OR=7.1) and in the subgroup without a history of non-nutritive sucking (OR=9.3). Prolonging breastfeeding to 12 months was associated with a 3.7 times lower chance of having anterior open bite. With each year of persistence of non-nutritive sucking habits, the chance of developing this malocclusion increased by 2.38 times. CONCLUSIONS: Breastfeeding and non-nutritive sucking durations demonstrated opposite effects on the prediction of anterior open bite. Non-breastfed children presented significantly greater chances of having anterior open bite compared with those breastfed for longer than 12 months, demonstrating the beneficial influence of breastfeeding on dental occlusion.
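The intergroup comparisons described above boil down to 2x2-table statistics. The sketch below uses hypothetical counts chosen only to land near the reported OR of 7.1; they are not the study's actual data.

```python
def odds_ratio(a, b, c, d):
    # a, b: outcome present/absent in the exposed group;
    # c, d: the same in the reference group
    return (a * d) / (b * c)

def chi_square(a, b, c, d):
    # Pearson chi-square for a 2x2 table, no continuity correction
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# hypothetical: 32/100 non-breastfed vs 6/97 breastfed >12 months with open bite
print(round(odds_ratio(32, 68, 6, 91), 1))   # 7.1
```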