953 results for Linear Codes over Finite Fields


Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Cognitive radio is a growing area in wireless communication that offers an opportunity for full utilization of inefficiently used frequency spectrum: the secondary user is permitted to use the frequency band provided it creates no interference for the primary (licensed) user. However, designing a model in which the secondary user causes the least interference to the primary user is a challenging task. In this study we propose a transmission model based on error-correcting codes that handles a countable number of pairs of primary and secondary users. We obtain effective utilization of the spectrum by transmitting the data of the primary and secondary user pairs through linear codes of different given lengths. Using techniques from error-correcting codes, we develop a number of schemes for appropriate bandwidth distribution in cognitive radio.
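The transmission scheme above rests on encoding each user's data with a binary linear block code. As a toy illustration only (the generator matrices below are hypothetical examples, not the paper's construction), encoding a message m with a code of length n is the product m·G over GF(2):

```python
# Toy illustration: encoding messages with binary linear block codes
# of different lengths (hypothetical generator matrices, not the
# construction from the paper).

def encode(message, G):
    """Encode a message as m * G over GF(2)."""
    n = len(G[0])
    return [sum(m * G[i][j] for i, m in enumerate(message)) % 2
            for j in range(n)]

# A [7,4] code in systematic form [I | P] for one user ...
G_primary = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
# ... and a shorter [3,1] repetition code for another.
G_secondary = [[1, 1, 1]]

print(encode([1, 0, 1, 1], G_primary))
print(encode([1], G_secondary))
```

Different code lengths per pair are what the scheme exploits to divide bandwidth among users.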

Graduate Studies in Mathematics in a National Network - IBILCE

The Hasse-Minkowski theorem concerns the classification of quadratic forms over global fields (i.e., finite extensions of Q and rational function fields with a finite constant field). Hasse proved the theorem over the rational numbers in his Ph.D. thesis in 1921. He extended the research of his thesis to quadratic forms over all number fields in 1924. Historically, the Hasse-Minkowski theorem was the first notable application of p-adic fields that caught the attention of a wide mathematical audience. The goal of this thesis is to discuss the Hasse-Minkowski theorem over the rational numbers and over the rational function fields with a finite constant field of odd characteristic. Our treatments of quadratic forms and local fields, though, are more general than what is strictly necessary for our proofs of the Hasse-Minkowski theorem over Q and its analogue over rational function fields (of odd characteristic). Our discussion concludes with some applications of the Hasse-Minkowski theorem.
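A classic instance of the local conditions at work (a standard textbook example, not a computation from the thesis): the form x^2 + y^2 - 3z^2 represents 0 over the reals but fails locally at p = 3, which by Hasse-Minkowski rules out a nontrivial rational zero. A brute-force check mod 9 exhibits the obstruction:

```python
# Local obstruction for x^2 + y^2 = 3 z^2 at p = 3: a standard
# textbook illustration of the Hasse-Minkowski local conditions
# (not a computation from the thesis itself).

def primitive_solutions_mod9():
    """Primitive solutions of x^2 + y^2 - 3 z^2 == 0 (mod 9)."""
    sols = []
    for x in range(9):
        for y in range(9):
            for z in range(9):
                if x % 3 == 0 and y % 3 == 0 and z % 3 == 0:
                    continue  # not primitive at 3
                if (x * x + y * y - 3 * z * z) % 9 == 0:
                    sols.append((x, y, z))
    return sols

# No primitive solution mod 9 means no 3-adic (hence no rational)
# nontrivial zero, even though real solutions clearly exist.
print(primitive_solutions_mod9())  # []
```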

This paper provides concordance procedures for product-level trade and production data in the EU and examines the implications of changing product classifications on measured product adding and dropping at Belgian firms. Using the algorithms developed by Pierce and Schott (2012a, 2012b), the paper develops concordance procedures that allow researchers to trace changes in coding systems over time and to translate product-level production and trade data into a common classification that is consistent both within a single year and over time. Separate procedures are created for the eight-digit Combined Nomenclature system used to classify international trade activities at the product level within the European Union as well as for the eight-digit Prodcom categories used to classify products in European domestic production data. The paper further highlights important differences in coverage between the Prodcom and Combined Nomenclature classifications which need to be taken into account when generating combined domestic production and international trade data at the product level. The use of consistent product codes over time results in less product adding and dropping at continuing firms in the Belgian export and production data.
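The idea of tracing code changes into a time-consistent classification can be sketched as a union-find pass over concordance change pairs (this is only the connected-components core of the Pierce-Schott approach, not their full procedure, and the eight-digit codes below are made-up placeholders):

```python
# Sketch of the connected-components core of a concordance procedure:
# codes linked by a reclassification at any point in time are grouped
# into one time-consistent "synthetic" code. The eight-digit codes
# below are made-up placeholders, not real CN/Prodcom codes.

def build_families(change_pairs):
    """Union-find over (old_code, new_code) concordance pairs."""
    parent = {}

    def find(c):
        parent.setdefault(c, c)
        while parent[c] != c:
            parent[c] = parent[parent[c]]  # path halving
            c = parent[c]
        return c

    for old, new in change_pairs:
        parent[find(old)] = find(new)

    families = {}
    for code in parent:
        families.setdefault(find(code), set()).add(code)
    return sorted(map(sorted, families.values()))

# Code 11111111 was split into 22222222, which later merged with
# 33333333; 44444444 was simply renamed.
changes = [("11111111", "22222222"), ("22222222", "33333333"),
           ("44444444", "55555555")]
print(build_families(changes))
```

Aggregating data to these families is what keeps product counts comparable across classification revisions.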

We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time-sharing methods when algebraic codes are used. The statistical-physics-based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC-based time-sharing codes, while the best performance, when received transmissions are optimally decoded, is bounded by the time-sharing limit.
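As background for how parity-check codes detect and correct errors, here is a toy [7,4] Hamming example (the paper's LDPC matrices are much larger and sparser and are decoded by belief propagation, not table lookup; this only shows the syndrome mechanics shared by all linear codes):

```python
# Toy illustration of parity-check decoding with the [7,4] Hamming
# code. The paper's LDPC codes use much larger sparse parity-check
# matrices and belief-propagation decoding; this only shows the
# basic syndrome mechanics shared by all linear codes.

# Parity-check matrix H: column j is the binary expansion of j+1,
# so the syndrome of a single error directly names its position.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(r):
    return [sum(h * b for h, b in zip(row, r)) % 2 for row in H]

def correct(r):
    """Correct a single bit error using the syndrome."""
    s = syndrome(r)
    pos = s[0] + 2 * s[1] + 4 * s[2]  # 1-indexed error position
    if pos:
        r = r.copy()
        r[pos - 1] ^= 1
    return r

codeword = [0, 0, 1, 0, 1, 1, 0]        # satisfies H c = 0
received = [0, 0, 1, 0, 0, 1, 0]        # bit 5 flipped by the channel
print(correct(received) == codeword)    # True
```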

A method has been constructed for the solution of a wide range of chemical plant simulation models including differential equations and optimization. Double orthogonal collocation on finite elements is applied to convert the model into an NLP problem that is solved either by the VF13AD package, based on successive quadratic programming, or by the GRG2 package, based on the generalized reduced gradient method. This approach is termed the simultaneous optimization and solution strategy. The objective functional can contain integral terms. The state and control variables can have time delays. Equalities and inequalities containing state and control variables can be included in the model, as well as algebraic equations and inequalities. The maximum number of independent variables is 2. Problems containing 3 independent variables can be transformed into problems having 2 independent variables using finite differencing. The maximum number of NLP variables and constraints is 1500. The method is also suitable for solving ordinary and partial differential equations. The state functions are approximated by a linear combination of Lagrange interpolation polynomials. The control function can either be approximated by a linear combination of Lagrange interpolation polynomials or by a piecewise constant function over finite elements. The number of internal collocation points can vary by finite elements. The residual error is evaluated at arbitrarily chosen equidistant grid-points, thus enabling the user to check the accuracy of the solution between collocation points, where the solution is exact. The solution functions can be tabulated. There is an option to use control vector parameterization to solve optimization problems containing initial value ordinary differential equations. When there are many differential equations or the upper integration limit should be selected optimally, this approach should be used.
The portability of the package has been addressed by converting the package from VAX FORTRAN 77 into IBM PC FORTRAN 77 and into SUN SPARC 2000 FORTRAN 77. Computer runs have shown that the method can reproduce optimization problems published in the literature. The GRG2 and VF13AD packages, integrated into the optimization package, proved to be robust and reliable. The package contains an executive module, a module performing control vector parameterization and 2 nonlinear problem solver modules, GRG2 and VF13AD. There is a stand-alone module that converts the differential-algebraic optimization problem into a nonlinear programming problem.
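The core idea of collocation with Lagrange polynomials can be sketched in a few lines (a minimal illustration in the spirit of the package described above, not its actual code): solve y' = -y, y(0) = 1 on [0, 1] and compare with exp(-1).

```python
# Minimal sketch of collocation with Lagrange polynomials: solve
# y' = -y, y(0) = 1 on [0, 1]. Illustrative only, not the package's
# actual algorithm (no finite elements, no NLP solver).
from math import exp

def diff_matrix(x):
    """D[i][j] = L_j'(x[i]) for the Lagrange basis on nodes x."""
    n = len(x)
    w = [1.0] * n                      # barycentric weights
    for j in range(n):
        for k in range(n):
            if k != j:
                w[j] /= (x[j] - x[k])
    D = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i][j] = (w[j] / w[i]) / (x[i] - x[j])
        D[i][i] = -sum(D[i][j] for j in range(n) if j != i)
    return D

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * m for a, m in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Collocate y'(t_i) + y(t_i) = 0 at the non-initial nodes, with the
# initial condition y(0) = 1 as the first equation of the system.
nodes = [i / 6 for i in range(7)]
D = diff_matrix(nodes)
A = [[1.0 if j == 0 else 0.0 for j in range(7)]]   # y(0) = 1
A += [[D[i][j] + (1.0 if i == j else 0.0) for j in range(7)]
      for i in range(1, 7)]
b = [1.0] + [0.0] * 6
y = solve(A, b)
print(abs(y[-1] - exp(-1)))  # small approximation error
```

The differentiation matrix plays the same role as the collocation equations the package assembles before handing the NLP to VF13AD or GRG2.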

We prove the nonexistence of [g3(6, d), 6, d]3 codes for d = 86, 87, 88, where g3(k, d) = Σ_{i=0}^{k−1} ⌈d/3^i⌉. This determines n3(6, d) for d = 86, 87, 88, where nq(k, d) is the minimum length n for which an [n, k, d]q code exists.
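The Griesmer-type lower bound used here is easy to evaluate (a direct transcription of the formula above):

```python
# Evaluate the Griesmer bound g_q(k, d) = sum_{i=0}^{k-1} ceil(d / q^i),
# a direct transcription of the formula in the abstract.
from math import ceil

def griesmer(q, k, d):
    return sum(ceil(d / q ** i) for i in range(k))

for d in (86, 87, 88):
    # The paper shows that no code of this length exists for these d.
    print(d, griesmer(3, 6, d))
```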

A correlation scheme (leading to a special equilibrium called “soft” correlated equilibrium) is applied for two-person finite games in extensive form with perfect information. Randomization by an umpire takes place over the leaves of the game tree. At every decision point players have the choice either to follow the recommendation of the umpire blindly or freely choose any other action except the one suggested. This scheme can lead to Pareto-improved outcomes of other correlated equilibria. Computational issues of maximizing a linear function over the set of soft correlated equilibria are considered and a linear-time algorithm in terms of the number of edges in the game tree is given for a special procedure called “subgame perfect optimization”.

This research aims to determine whether it is possible to build spatial patterns over oil fields using DFA (Detrended Fluctuation Analysis) of the following well logs: sonic, density, porosity, resistivity and gamma ray. The analysis employed a set of 54 well logs from the Campos dos Namorados oil field, RJ, Brazil. To check for spatial correlation, we applied the Mantel test between the matrix of geographic distances and the matrix of differences between the DFA exponents of the well logs. The null hypothesis assumes the absence of spatial structure, that is, no correlation between the matrix of Euclidean distances and the matrix of DFA differences. Our analysis indicates that the sonic (p=0.18) and density (p=0.26) logs were the profiles showing a tendency to correlation, or weak correlation. A complementary analysis using contour plots also suggested that the sonic and density logs are the geophysical quantities most suitable for the construction of spatial structures, corroborating the results of the Mantel test.
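The DFA exponent named above is computed by integrating the series, detrending it linearly within windows of each size n, and reading the scaling exponent off the slope of log F(n) versus log n. A minimal sketch (illustrative only; applied here to synthetic white noise, not well-log data):

```python
# Minimal sketch of first-order DFA (detrended fluctuation analysis):
# integrate the series, detrend linearly in windows of size n, and
# read the scaling exponent alpha off log F(n) vs log n.
import math
import random

def dfa_exponent(series, window_sizes):
    mean = sum(series) / len(series)
    profile, total = [], 0.0
    for x in series:
        total += x - mean
        profile.append(total)             # integrated profile
    points = []
    for n in window_sizes:
        sq, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            t = list(range(n))
            # least-squares line fit within the window
            tm, sm = sum(t) / n, sum(seg) / n
            slope = (sum(ti * si for ti, si in zip(t, seg)) - n * tm * sm) \
                / (sum(ti * ti for ti in t) - n * tm * tm)
            inter = sm - slope * tm
            sq += sum((s - (slope * ti + inter)) ** 2
                      for ti, s in zip(t, seg))
            count += n
        points.append((math.log(n), math.log(math.sqrt(sq / count))))
    # slope of log F(n) vs log n is the DFA exponent alpha
    xs, ys = zip(*points)
    xm, ym = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - xm) * (y - ym) for x, y in zip(xs, ys)) \
        / sum((x - xm) ** 2 for x in xs)

random.seed(0)
white = [random.gauss(0, 1) for _ in range(4096)]
print(round(dfa_exponent(white, [8, 16, 32, 64, 128]), 2))  # near 0.5
```

White noise gives alpha near 0.5; persistent (long-range correlated) logs give larger exponents, and it is differences between these exponents that enter the Mantel test.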

'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption.

This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications.

Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level.
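The coded-snapshot idea can be sketched as a forward model (illustrative only: the sizes and mask values below are made up, the mask motion is a simple cyclic shift, and real CACTI reconstruction requires an inversion algorithm not shown here):

```python
# Illustrative forward model of coded temporal compression: T video
# frames are multiplied by per-frame binary masks (a translated coded
# aperture) and summed into a single snapshot. Sizes and mask values
# are made up; real CACTI reconstruction (not shown) inverts this.
import random

random.seed(1)
H, W, T = 4, 4, 8                      # tiny video volume (x, y, t)
video = [[[random.random() for _ in range(W)] for _ in range(H)]
         for _ in range(T)]
mask = [[random.randint(0, 1) for _ in range(W)] for _ in range(H)]

def shift(m, t):
    """Cyclically translate the mask by t rows (the moving aperture)."""
    return m[t % len(m):] + m[:t % len(m)]

# One coded snapshot encodes all T frames: y = sum_t C_t * x_t
snapshot = [[sum(shift(mask, t)[i][j] * video[t][i][j] for t in range(T))
             for j in range(W)] for i in range(H)]

print(len(snapshot), len(snapshot[0]))  # one H x W image from T frames
```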

Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,\lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration of other information within that video; namely, focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works successfully show the technique's extension of temporal coding to improve sensing performance in these other dimensions.

Geometrical-optics-related tradeoffs, such as the classic challenge of combining a wide field of view with high resolution in photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research- and engineering-related challenges. One significant challenge is that of managing the focal volume (x,y,z) over wide fields of view and resolutions. The fourth chapter shows advances on focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke.

Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding in the form of physical translation of the image plane from its nominal focal position. We demonstrate this technique's capability to generate arbitrary point spread functions.

In many areas of simulation, a crucial component for efficient numerical computations is the use of solution-driven adaptive features: locally adapted meshing or re-meshing; dynamically changing computational tasks. The full advantages of high-performance computing (HPC) technology can thus be exploited only when efficient parallel adaptive solvers are realised. The resulting requirement for HPC software is dynamic load balancing, which for many mesh-based applications means dynamic mesh re-partitioning. The DRAMA project has been initiated to address this issue, with a particular focus on the requirements of industrial Finite Element codes, though codes using Finite Volume formulations will also be able to make use of the project results.

The decomposition of Feynman integrals into a basis of independent master integrals is an essential ingredient of high-precision theoretical predictions, and it often represents a major bottleneck when processes with a high number of loops and legs are involved. In this thesis we present a new algorithm for the decomposition of Feynman integrals into master integrals within the formalism of intersection theory. Intersection theory is a novel approach that allows one to decompose Feynman integrals into master integrals via projections, based on a scalar product between Feynman integrals called the intersection number. We propose a new, purely rational algorithm for the calculation of intersection numbers of differential $n$-forms that avoids the presence of algebraic extensions. We show how expansions around non-rational poles, which are a bottleneck of existing algorithms for intersection numbers, can be avoided by performing a series expansion around a rational polynomial irreducible over $\mathbb{Q}$, which we refer to as a $p(z)$-adic expansion. The algorithm we developed has been implemented and tested on several diagrams, both at one and two loops.

Irreducible nonzero-level modules with finite-dimensional weight spaces are discussed for nontwisted affine Lie superalgebras. A complete classification of such modules is obtained for superalgebras of type A(m, n)^ and C(n)^ using Mathieu's classification of cuspidal modules over simple Lie algebras. In the other cases the classification problem is reduced to the classification of cuspidal modules over finite-dimensional cuspidal Lie superalgebras, described by Dimitrov, Mathieu and Penkov. Based on these results, a complete classification of irreducible integrable (in the sense of Kac and Wakimoto) modules is obtained by showing that any such module is of highest weight, in which case the problem was solved by Kac and Wakimoto.

A three-phase liquid-phase microextraction (LPME) method using a porous polypropylene hollow fibre membrane with a sealed end was developed for the extraction of mirtazapine (MRT) and its two major metabolites, 8-hydroxymirtazapine (8-OHM) and demethylmirtazapine (DMR), from human plasma. The analytes were extracted from 1.0 mL of plasma, previously diluted and alkalinized with 3.0 mL of 0.5 mol L-1 pH 8 phosphate buffer solution and supplemented with 15% sodium chloride (NaCl), using n-hexyl ether as the organic solvent and 0.01 mol L-1 acetic acid solution as the acceptor phase. Haloperidol was used as the internal standard. The chromatographic analyses were carried out on a chiral column, using acetonitrile-methanol-ethanol (98:1:1, v/v/v) plus 0.2% diethylamine as the mobile phase, at a flow rate of 1.0 mL min(-1). Multi-reaction monitoring (MRM) detection was performed by mass spectrometry (MS-MS) using a triple-stage quadrupole and an electrospray ionization interface operating in positive ion mode. The mean recoveries were in the 18.3-45.5% range, with linear responses over the 1.25-125 ng mL(-1) concentration range for all enantiomers evaluated. The limit of quantification (LOQ) was 1.25 ng mL(-1). Within-day and between-day assay precision and accuracy (2.5, 50 and 100 ng mL(-1)) showed relative standard deviations and relative errors lower than 11.9% for all enantiomers evaluated. Finally, the method was successfully used for the determination of mirtazapine and its metabolite enantiomers in plasma samples obtained after a single administration of mirtazapine to a healthy volunteer. (c) 2007 Elsevier B.V. All rights reserved.