811 results for Decoding algorithm
Abstract:
In this paper, we introduce new construction techniques for BCH, alternant, Goppa, and Srivastava codes through the semigroup ring $B[X;\frac{1}{3}\mathbb{Z}_{0}]$ instead of the polynomial ring $B[X;\mathbb{Z}_{0}]$, where $B$ is a finite commutative ring with identity; for these constructions we improve several results of [1]. We then present a decoding principle for BCH, alternant, and Goppa codes based on a modified Berlekamp-Massey algorithm. This algorithm corrects all error patterns of Hamming weight t ≤ r/2 in codes whose minimum Hamming distance is r + 1.
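In BCH-type decoding, the Berlekamp-Massey recursion is applied to the syndrome sequence to obtain the error-locator polynomial. As a point of reference only, the sketch below shows the classical binary Berlekamp-Massey algorithm over GF(2), not the modified semigroup-ring version proposed in the paper; the function name and the example sequence are illustrative.

```python
def berlekamp_massey_gf2(s):
    """Shortest LFSR (connection polynomial C(x), length L) generating the
    binary sequence s -- the classical Berlekamp-Massey recursion over GF(2)."""
    c = [1] + [0] * len(s)   # current connection polynomial C(x)
    b = [1] + [0] * len(s)   # connection polynomial before the last length change
    L, m = 0, 1              # current LFSR length, shift since the last length change
    for n in range(len(s)):
        # discrepancy d = s[n] + sum_{i=1..L} c[i]*s[n-i] over GF(2)
        d = s[n]
        for i in range(1, L + 1):
            d ^= c[i] & s[n - i]
        if d == 0:
            m += 1
        elif 2 * L <= n:                      # length change: C(x) <- C(x) + x^m B(x)
            t = c[:]
            for i in range(len(s) - m + 1):
                c[i + m] ^= b[i]
            L, b, m = n + 1 - L, t, 1
        else:                                 # no length change, same update of C(x)
            for i in range(len(s) - m + 1):
                c[i + m] ^= b[i]
            m += 1
    return c[:L + 1], L

# illustrative m-sequence segment: returns ([1, 1, 0, 1], 3), i.e. C(x) = 1 + x + x^3
print(berlekamp_massey_gf2([1, 0, 0, 1, 1, 1, 0, 1]))
```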
Abstract:
This paper describes a new methodology for urban traffic flow optimization. By using Petri net analysis as the fitness function of a Genetic Algorithm, an entire urban road network is controlled in real time. Given recently developed technologies for communication between vehicles and road infrastructure, we assume that vehicles can report their positions and destinations to a central server, which can then compute the best route for each of them. Our tests compare the proposed approach with other algorithms currently used for the same purpose, and they show that our algorithm optimizes traffic significantly.
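As a rough illustration of this structure, the sketch below shows a basic genetic-algorithm loop over per-intersection green-time plans. The Petri net analysis of the road network is replaced by a placeholder fitness function, and every name and parameter here is an illustrative assumption, not the authors' implementation.

```python
import random

def evaluate(plan):
    """Placeholder fitness: stands in for the Petri net simulation of the road
    network; here it merely rewards balanced green-time splits (illustrative)."""
    mean = sum(plan) / len(plan)
    return -sum((g - mean) ** 2 for g in plan)

def genetic_algorithm(n_intersections=8, pop_size=30, generations=50, mutation_rate=0.1):
    # each individual = one green-time (seconds) per intersection
    pop = [[random.randint(10, 90) for _ in range(n_intersections)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)          # best plans first
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_intersections)
            child = a[:cut] + b[cut:]                  # one-point crossover
            if random.random() < mutation_rate:        # random mutation
                child[random.randrange(n_intersections)] = random.randint(10, 90)
            children.append(child)
        pop = survivors + children
    return max(pop, key=evaluate)

print(genetic_algorithm())
```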
Abstract:
Corresponding to $C_{0}[n,n-r]$, a binary cyclic code generated by a primitive irreducible polynomial $p(X)\in \mathbb{F}_{2}[X]$ of degree $r=2b$, where $b\in \mathbb{Z}^{+}$, we can construct a binary cyclic code $C[(n+1)^{3^{k}}-1,(n+1)^{3^{k}}-1-3^{k}r]$, generated by the primitive irreducible generalized polynomial $p(X^{\frac{1}{3^{k}}})\in \mathbb{F}_{2}[X;\frac{1}{3^{k}}\mathbb{Z}_{0}]$ of degree $3^{k}r$, where $k\in \mathbb{Z}^{+}$. This new code $C$ improves the code rate and has a higher error-correction capability than $C_{0}$. The purpose of this study is to establish a decoding procedure for $C_{0}$ by using $C$ in such a way that one obtains an improved code rate and error-correcting capability for $C_{0}$.
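As a quick numerical check of the rate improvement, the snippet below computes the parameters of $C_{0}$ and $C$ under the assumption that the natural length $n=2^{r}-1$ is used (the order of a primitive polynomial of degree $r$); the function name and example values are illustrative.

```python
def extended_code_params(r, k):
    """Parameters (length, dimension, rate) of C_0[n, n-r] with n = 2**r - 1,
    and of the derived code C[(n+1)**(3**k) - 1, (n+1)**(3**k) - 1 - (3**k)*r]."""
    n = 2 ** r - 1
    N = (n + 1) ** (3 ** k) - 1
    K = N - (3 ** k) * r
    return (n, n - r, (n - r) / n), (N, K, K / N)

# e.g. r = 4 (b = 2), k = 1: the rate improves from 11/15 ~ 0.733 to 4083/4095 ~ 0.997
print(extended_code_params(4, 1))
```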
Abstract:
Background: The multi-relational approach has emerged as an alternative for analyzing structured data such as relational databases, since it allows data mining to be applied to multiple tables directly, thus avoiding expensive join operations and semantic losses; this work proposes an algorithm following the multi-relational approach. Methods: To compare the performance of the traditional and multi-relational approaches for mining association rules, this paper presents an empirical study of PatriciaMine, a traditional algorithm, and its proposed multi-relational counterpart, MR-Radix. Results: This work shows the performance advantages of the multi-relational approach over several tables, which avoids the high cost of join operations across multiple tables and semantic losses. MR-Radix is faster than PatriciaMine, despite handling complex multi-relational patterns. Memory usage follows a more conservative growth curve for MR-Radix than for PatriciaMine, showing that an increase in the number of frequent items does not cause a significant growth in memory usage for MR-Radix, unlike for PatriciaMine. Conclusion: The comparative study between PatriciaMine and MR-Radix confirmed the efficacy of the multi-relational approach in the data mining process, both in execution time and in memory usage. Moreover, unlike other algorithms of this approach, the proposed multi-relational algorithm is efficient for use in large relational databases.
Abstract:
In this paper, a scatter search, a metaheuristic-based algorithm, is proposed to solve the reconfiguration problem of radial distribution systems. In the encoding process of this algorithm a structure called the node-depth representation is used; through its operators, and from the electrical power system point of view, only radial topologies are generated. To show the effectiveness, usefulness, and efficiency of the proposed method, tests are conducted on a commonly used 135-bus test system and on a practical system, part of the São Paulo state distribution network with 7052 buses. The results confirm the efficiency of the proposed algorithm, which finds high-quality solutions satisfying all the physical and operational constraints of the problem.
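For intuition only, the sketch below builds a node-depth representation of a radial (tree-shaped) feeder by depth-first traversal; encoding configurations this way is what guarantees that only radial topologies are described. The feeder data and root node are hypothetical examples, not the paper's test systems.

```python
def node_depth_representation(tree, root):
    """Return the list of (node, depth) pairs visited in DFS order for a
    tree given as an adjacency dict {parent: [children, ...]}."""
    ndr, stack, seen = [], [(root, 0)], set()
    while stack:
        node, depth = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        ndr.append((node, depth))
        for child in reversed(tree.get(node, [])):   # keep left-to-right order
            if child not in seen:
                stack.append((child, depth + 1))
    return ndr

# hypothetical 6-bus radial feeder: substation 0 feeds two laterals
feeder = {0: [1, 4], 1: [2, 3], 4: [5]}
print(node_depth_representation(feeder, 0))
# [(0, 0), (1, 1), (2, 2), (3, 2), (4, 1), (5, 2)]
```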
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
A self-learning simulated annealing algorithm is developed by combining the characteristics of simulated annealing and domain elimination methods. The algorithm is validated by using a standard mathematical test function and by optimizing the end region of a practical power transformer. The numerical results show that the CPU time required by the proposed method is about one third of that required by the conventional simulated annealing algorithm.
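A minimal sketch follows, assuming a one-dimensional search in which "domain elimination" is approximated by simply shrinking the search interval around the best point found so far; the test function, cooling schedule, and parameters are illustrative, not those of the paper.

```python
import math, random

def simulated_annealing(f, lo, hi, iters=5000, t0=1.0, alpha=0.999, shrink=0.97):
    x = random.uniform(lo, hi)
    best, t = x, t0
    for _ in range(iters):
        # candidate move, clamped to the current (possibly shrunken) domain
        cand = min(hi, max(lo, x + random.uniform(-1, 1) * (hi - lo) * 0.1))
        if f(cand) < f(x) or random.random() < math.exp((f(x) - f(cand)) / t):
            x = cand
            if f(x) < f(best):
                best = x
                # crude stand-in for domain elimination: shrink the interval around best
                half = (hi - lo) * shrink / 2
                lo, hi = best - half, best + half
        t *= alpha                      # geometric cooling
    return best

# standard test function (illustrative): should converge near x ~ -1.3
print(simulated_annealing(lambda x: x * x + 10 * math.sin(x), -10, 10))
```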
Abstract:
Doreen Barrie should have subtitled this book "Advocating a Different Identity" because this is its basic thrust. In Barrie's view, today's wealthy, modern, and expansive Alberta should abandon its historic grievances and hostility towards Ottawa. Instead, it should embrace a new narrative emphasizing "the positive qualities Albertans possess . . . the contributions the province has made to the country . . . and that Albertans share fundamental Canadian values with people in other parts of Canada and are eager to play a larger role on the national stage."
Abstract:
In this action research study of my 7th grade mathematics classroom, I investigated whether the use of decoding would increase the students' ability to problem solve. I discovered that knowing how to decode a word problem is only one facet of being a successful problem solver. I also discovered that confidence, effective instruction, and practice have an impact on improving problem-solving skills. Because of this research, I plan to revise my problem-solving guide so that it can be used by any classroom teacher. I also plan to keep adding to my list of math problem-solving clue words and to share it with others. My hope is that I will be able to explain my project to math teachers in my district to make them aware of the importance of knowing the steps to solve a word problem.
Abstract:
Active machine learning algorithms are used when large numbers of unlabeled examples are available and getting labels for them is costly (e.g. requiring consulting a human expert). Many conventional active learning algorithms focus on refining the decision boundary, at the expense of exploring new regions that the current hypothesis misclassifies. We propose a new active learning algorithm that balances such exploration with refining of the decision boundary by dynamically adjusting the probability to explore at each step. Our experimental results demonstrate improved performance on data sets that require extensive exploration while remaining competitive on data sets that do not. Our algorithm also shows significant tolerance of noise.
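A minimal sketch of this idea follows: pool-based active learning that mixes uncertainty sampling (refining the decision boundary) with random exploration, raising or lowering the exploration probability according to whether exploration recently uncovered misclassified points. The classifier, update rule, and all parameters are illustrative assumptions, not the authors' exact method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def active_learn(X_pool, y_pool, n_queries=50, p_explore=0.5):
    rng = np.random.default_rng(0)
    # seed the labeled set with a few examples of each class
    labeled = list(np.where(y_pool == 0)[0][:5]) + list(np.where(y_pool == 1)[0][:5])
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
    clf = LogisticRegression(max_iter=1000)
    for _ in range(n_queries):
        clf.fit(X_pool[labeled], y_pool[labeled])
        if rng.random() < p_explore:                      # explore a random region
            idx = int(rng.choice(unlabeled))
            wrong = clf.predict(X_pool[[idx]])[0] != y_pool[idx]
            # raise exploration if it finds mistakes, lower it otherwise
            p_explore = min(0.9, p_explore + 0.05) if wrong else max(0.05, p_explore - 0.05)
        else:                                             # refine the decision boundary
            probs = clf.predict_proba(X_pool[unlabeled])[:, 1]
            idx = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]
        labeled.append(idx)
        unlabeled.remove(idx)
    return clf.fit(X_pool[labeled], y_pool[labeled])

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = active_learn(X, y)
```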
Abstract:
This paper addresses the irregular shape packing problem, in which the container has a fixed width and an open dimension to be minimized. The proposed algorithm constructs the solution from an ordered list of items and a placement heuristic, with simulated annealing as the metaheuristic adopted to solve the optimization problem. A two-level algorithm is used to minimize the open dimension of the container. To ensure feasible layouts, the concept of the collision-free region is used: a collision-free region represents all possible translations for an item to be placed and may be degenerate. For a moving item, the proposed placement heuristic detects the presence of exact fits (when the item is fully constrained by its surroundings) and exact slides (when the item position is constrained in all but one direction). The relevance of these positions is analyzed and a new placement heuristic is proposed. Computational comparisons on benchmark problems show that the proposed algorithm generates highly competitive solutions; moreover, our algorithm improved some best known results.
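The sketch below is a greatly simplified illustration of the "ordered list + placement heuristic + simulated annealing" structure: rectangles instead of irregular shapes, a shelf placement heuristic instead of collision-free regions, and the used strip length as the open dimension to minimize. All data and parameters are illustrative.

```python
import math, random

def shelf_place(order, items, width):
    """Place (w, h) rectangles in the given order on shelves of a strip of
    fixed width; return the total used length (the open dimension)."""
    x = y = shelf_h = 0
    for i in order:
        w, h = items[i]
        if x + w > width:            # current shelf is full: start a new one
            y += shelf_h
            x = shelf_h = 0
        x += w
        shelf_h = max(shelf_h, h)
    return y + shelf_h

def anneal_order(items, width, iters=20000, t0=10.0, alpha=0.9995):
    order = list(range(len(items)))
    best = cur = shelf_place(order, items, width)
    best_order, t = order[:], t0
    for _ in range(iters):
        cand = order[:]
        i, j = random.sample(range(len(items)), 2)
        cand[i], cand[j] = cand[j], cand[i]       # swap two items in the sequence
        val = shelf_place(cand, items, width)
        if val < cur or random.random() < math.exp((cur - val) / t):
            order, cur = cand, val
            if cur < best:
                best, best_order = cur, order[:]
        t *= alpha
    return best, best_order

items = [(random.randint(1, 5), random.randint(1, 5)) for _ in range(30)]
print(anneal_order(items, width=10))
```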
Abstract:
OBJECTIVE: Differentiation between benign and malignant ovarian neoplasms is essential for creating a system for patient referrals. Therefore, the contributions of the tumor markers CA125 and human epididymis protein 4 (HE4), as well as the risk of ovarian malignancy algorithm (ROMA) and risk of malignancy index (RMI) values, were considered individually and in combination to evaluate their utility for establishing this type of patient referral system. METHODS: Patients who had been diagnosed with ovarian masses through imaging analyses (n = 128) were assessed for their expression of the tumor markers CA125 and HE4. The ROMA and RMI values were also determined. The sensitivity and specificity of each parameter were calculated using receiver operating characteristic curves according to the area under the curve (AUC) for each method. RESULTS: The sensitivities associated with the ability of CA125, HE4, ROMA, or RMI to distinguish between malignant and benign ovarian masses were 70.4%, 79.6%, 74.1%, and 63%, respectively. Among carcinomas, the sensitivities of CA125, HE4, ROMA (pre- and post-menopausal), and RMI were 93.5%, 87.1%, 80%, 95.2%, and 87.1%, respectively. The most accurate numerical values were obtained with RMI, although the four parameters were shown to be statistically equivalent. CONCLUSION: There were no differences in accuracy between CA125, HE4, ROMA, and RMI for differentiating between types of ovarian masses. RMI had the lowest sensitivity but was the most numerically accurate method. HE4 demonstrated the best overall sensitivity for the evaluation of malignant ovarian tumors and the differential diagnosis of endometriosis. All of the parameters demonstrated increased sensitivity when tumors with low malignant potential were considered low-risk, which may be used as an acceptable assessment method for referring patients to reference centers.
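The sensitivity, specificity, and AUC figures above come from standard ROC analysis. The snippet below is a generic, illustrative computation on synthetic marker values (not the study's data), using scikit-learn for the AUC and a hypothetical cutoff for sensitivity and specificity.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y = np.r_[np.zeros(80), np.ones(48)]                            # 0 = benign, 1 = malignant
marker = np.r_[rng.normal(20, 10, 80), rng.normal(60, 30, 48)]  # synthetic marker values

auc = roc_auc_score(y, marker)                                  # area under the ROC curve

cutoff = 35.0                                                   # hypothetical cutoff
pred = marker >= cutoff
sensitivity = (pred & (y == 1)).sum() / (y == 1).sum()
specificity = (~pred & (y == 0)).sum() / (y == 0).sum()
print(f"AUC={auc:.3f}  sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")
```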
Abstract:
A direct reconstruction algorithm for complex conductivities in $W^{2,\infty}(\Omega)$, where $\Omega$ is a bounded, simply connected Lipschitz domain in $\mathbb{R}^{2}$, is presented. The framework is based on the uniqueness proof by Francini (2000 Inverse Problems 6 107-19), but the equations relating the Dirichlet-to-Neumann map to the scattering transform and to the exponentially growing solutions are not present in that work and are derived here. The algorithm constitutes the first D-bar method for the reconstruction of conductivities and permittivities in two dimensions. Reconstructions of numerically simulated chest phantoms with discontinuities at the organ boundaries are included.