953 results for Linear Codes over Finite Fields


Relevance: 40.00%

Abstract:

The main results of this paper are twofold: the first is a matrix-theoretic result. We say that a matrix is superregular if all of its minors that are not trivially zero are nonzero. Given an a × b (a ≥ b) superregular matrix over a field, we show that if all of its rows are nonzero, then any linear combination of its columns with nonzero coefficients has at least a − b + 1 nonzero entries. Secondly, we make use of this result to construct convolutional codes that attain the maximum possible distance for some fixed parameters of the code, namely, the rate and the Forney indices. These results answer some open questions on distances and constructions of convolutional codes posed in the literature.
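The two properties above can be checked by brute force on a tiny example. The sketch below uses a hypothetical 3×2 matrix over GF(7) (not from the paper): it verifies that every minor is nonzero and that every nonzero-coefficient combination of the columns has at least a − b + 1 = 2 nonzero entries.

```python
from itertools import combinations, product

P = 7  # work over the prime field GF(7)

def det_mod(M, p):
    """Determinant of a small square matrix over GF(p) by cofactor expansion."""
    n = len(M)
    if n == 1:
        return M[0][0] % p
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det_mod(minor, p)
    return total % p

def is_superregular(A, p):
    """For a full matrix, superregular means every minor of every size is nonzero."""
    a, b = len(A), len(A[0])
    for k in range(1, min(a, b) + 1):
        for rows in combinations(range(a), k):
            for cols in combinations(range(b), k):
                sub = [[A[i][j] for j in cols] for i in rows]
                if det_mod(sub, p) == 0:
                    return False
    return True

# Hypothetical 3x2 example over GF(7); here a - b + 1 = 2.
A = [[1, 1],
     [1, 2],
     [1, 4]]
assert is_superregular(A, P)

# Minimum weight over all column combinations with nonzero coefficients.
a = 3
min_weight = min(
    sum((c0 * A[i][0] + c1 * A[i][1]) % P != 0 for i in range(a))
    for c0, c1 in product(range(1, P), repeat=2)
)
print(min_weight)  # → 2, matching the a - b + 1 bound
```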

Relevance: 40.00%

Abstract:

A classical approach to handling two- and multi-stage optimization problems under uncertainty is scenario analysis. Here, the uncertainty in some of the problem data is modeled by random vectors with finite, stage-specific supports; each realization represents a scenario. Using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario-decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multistage stochastic programming problems. Despite the complete decomposition by scenario, the efficiency of the progressive hedging method is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to track the progress of the algorithm more closely. Numerical experiments on instances of multistage stochastic linear problems suggest that most existing techniques may either converge prematurely to a suboptimal solution or converge to the optimal solution at a very slow rate. By contrast, the new strategy appears robust and efficient: it converged to optimality in all of our experiments and was the fastest in most cases. Regarding the handling of the quadratic term, we review existing techniques and propose replacing the quadratic term with a linear one. Although our method remains to be tested, we expect it to reduce some of the numerical and theoretical difficulties of the progressive hedging method.
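The progressive hedging iteration discussed above can be sketched on a toy problem: minimize E[(x − ξ)²] over equiprobable scenarios subject to nonanticipativity. The scenario values and the fixed penalty ρ below are assumptions for illustration, not the thesis's instances; the scenario subproblem here has a closed-form minimizer.

```python
# Progressive hedging on: minimize E[(x - xi_s)^2] with all scenario
# copies of x forced equal.  The solution is the scenario mean.
xis = [1.0, 2.0, 6.0]          # equiprobable scenario data (assumed)
rho = 1.0                       # fixed penalty parameter (assumed)
w = [0.0] * len(xis)            # scenario multipliers
xbar = 0.0                      # nonanticipative estimate

for _ in range(200):
    # Scenario subproblems: argmin_x (x - xi)^2 + w*x + (rho/2)*(x - xbar)^2,
    # solved in closed form by setting the derivative to zero.
    xs = [(2 * xi - wi + rho * xbar) / (2 + rho) for xi, wi in zip(xis, w)]
    xbar = sum(xs) / len(xs)                              # implementable average
    w = [wi + rho * (x - xbar) for x, wi in zip(xs, w)]   # multiplier update

print(round(xbar, 6))  # → 3.0, the scenario mean
```

An adaptive penalty strategy of the kind the thesis proposes would adjust `rho` between iterations instead of keeping it fixed.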

Relevance: 40.00%

Abstract:

This study develops multiple linear regression models relating six climate indices (temperature-humidity index, THI; environmental stress index, ESI; equivalent temperature index, ETI; heat load index, HLI; modified HLI, HLInew; and respiratory rate predictor, RRP) to three main components of cow's milk (yield, fat, and protein) for cows in Iran. The least absolute shrinkage and selection operator (LASSO) and the Akaike information criterion (AIC) are applied to select the best model for each milk predictand with the smallest number of climate predictors. Uncertainty is estimated by bootstrap resampling, and cross-validation is used to avoid overfitting. Climatic parameters are calculated from the NASA-MERRA global atmospheric reanalysis. Milk data for the months April through September, 2002 to 2010, are used. The best linear regression models are found in spring between milk yield as the predictand and THI, ESI, ETI, HLI, and RRP as predictors (p < 0.001; R² of 0.50 and 0.49, respectively). In summer, milk yield with THI, ETI, and ESI as independent variables shows the strongest relation (p < 0.001; R² = 0.69). For fat and protein the results are only marginal. The method is suggested for impact studies of climate variability and change in agriculture and food science when only short time series or data with large uncertainty are available.
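As an illustration of the LASSO selection step, here is a minimal pure-NumPy sketch using iterative soft-thresholding (ISTA). The three predictors are synthetic stand-ins for climate indices and the coefficients are invented, not the study's estimates; the point is only that LASSO zeroes out an inactive predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: predictor 1 has no true effect on the response.
n = 200
X = rng.normal(size=(n, 3))
y = X @ np.array([1.5, 0.0, -0.8]) + 0.1 * rng.normal(size=n)

def lasso_ista(X, y, lam, steps=5000):
    """LASSO by iterative soft-thresholding (ISTA)."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n      # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(steps):
        grad = X.T @ (X @ beta - y) / n
        z = beta - grad / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return beta

beta = lasso_ista(X, y, lam=0.05)
selected = list(np.nonzero(np.abs(beta) > 1e-3)[0])
print(selected)  # the inactive predictor (index 1) is dropped
```

In the study's setting, `lam` would be chosen by cross-validation or AIC, and the fit repeated on bootstrap resamples to quantify uncertainty.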

Relevance: 40.00%

Abstract:

Following the seminal work of Zhuang, connected Hopf algebras of finite GK-dimension over algebraically closed fields of characteristic zero have been the subject of several recent papers. This thesis is concerned with continuing this line of research and promoting connected Hopf algebras as a natural, intricate and interesting class of algebras. We begin by discussing the theory of connected Hopf algebras which are either commutative or cocommutative, and then proceed to review the modern theory of arbitrary connected Hopf algebras of finite GK-dimension initiated by Zhuang. We next focus on the (left) coideal subalgebras of connected Hopf algebras of finite GK-dimension. They are shown to be deformations of commutative polynomial algebras. A number of homological properties follow immediately from this fact. Further properties are described, examples are considered and invariants are constructed. A connected Hopf algebra is said to be "primitively thick" if the difference between its GK-dimension and the vector-space dimension of its primitive space is precisely one. Building on the results of Wang, Zhang and Zhuang, we describe a method of constructing such a Hopf algebra, and as a result obtain a host of new examples of such objects. Moreover, we prove that such a Hopf algebra can never be isomorphic to the enveloping algebra of a semisimple Lie algebra, nor can a semisimple Lie algebra appear as its primitive space. It has been asked in the literature whether connected Hopf algebras of finite GK-dimension are always isomorphic as algebras to enveloping algebras of Lie algebras. We provide a negative answer to this question by constructing a counterexample of GK-dimension 5. Substantial progress was made in determining the order of the antipode of a finite-dimensional pointed Hopf algebra by Taft and Wilson in the 1970s. Our final main result is to show that the proof of their result can be generalised to give an analogous result for arbitrary pointed Hopf algebras.
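For readers less familiar with the terminology, the "primitively thick" condition used above can be written out explicitly; this restates the abstract's own definition in symbols, where P(H) denotes the primitive space of H.

```latex
\[
  P(H) = \{\, x \in H : \Delta(x) = x \otimes 1 + 1 \otimes x \,\},
  \qquad
  \operatorname{GKdim} H \;-\; \dim_k P(H) \;=\; 1 .
\]
```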

Relevance: 30.00%

Abstract:

Although various abutment connections and materials have recently been introduced, insufficient data exist regarding the effect of stress distribution on their mechanical performance. The purpose of this study was to investigate the effect of different abutment materials and platform connections on stress distribution in single anterior implant-supported restorations with the finite element method. Nine experimental groups were modeled from the combination of 3 platform connections (external hexagon, internal hexagon, and Morse tapered) and 3 abutment materials (titanium, zirconia, and hybrid) as follows: external hexagon-titanium, external hexagon-zirconia, external hexagon-hybrid, internal hexagon-titanium, internal hexagon-zirconia, internal hexagon-hybrid, Morse tapered-titanium, Morse tapered-zirconia, and Morse tapered-hybrid. Finite element models consisted of a 4×13-mm implant, an anatomic abutment, and a lithium disilicate central incisor crown cemented over the abutment. A 49 N occlusal load was applied in 6 steps to simulate the incisal guidance. The equivalent von Mises stress (σvM) was used for both the qualitative and quantitative evaluation of the implant and abutment in all the groups, and the maximum (σmax) and minimum (σmin) principal stresses were used for the numerical comparison of the zirconia parts. The highest abutment σvM occurred in the Morse-tapered groups and the lowest in the external hexagon-hybrid, internal hexagon-titanium, and internal hexagon-hybrid groups. The σmax and σmin values were lower in the hybrid groups than in the zirconia groups. The stress distribution concentrated at the abutment-implant interface in all the groups, regardless of the platform connection or abutment material. The platform connection influenced the stress on abutments more than the abutment material did. The stress values for implants were similar among different platform connections, but greater stress concentrations were observed in internal connections.
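The three stress measures used above (σvM, σmax, σmin) are all derived from the local stress tensor at each node. As a minimal sketch, the stress values below are illustrative, not from the study's models:

```python
import numpy as np

def von_mises(sigma):
    """Equivalent von Mises stress from a symmetric 3x3 stress tensor (MPa)."""
    s1, s2, s3 = np.linalg.eigvalsh(sigma)   # principal stresses, ascending
    return np.sqrt(((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 2.0)

# Hypothetical stress state at an abutment-implant interface node (MPa).
sigma = np.array([[120.0, 30.0,  0.0],
                  [ 30.0, 80.0, 10.0],
                  [  0.0, 10.0, 40.0]])

principal = np.linalg.eigvalsh(sigma)
svm, smax, smin = von_mises(sigma), principal[-1], principal[0]
print(svm, smax, smin)  # sigma_vM, maximum and minimum principal stress
```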

Relevance: 30.00%

Abstract:

In acquired immunodeficiency syndrome (AIDS) studies it is quite common to observe viral load measurements collected irregularly over time. Moreover, these measurements can be subject to upper and/or lower detection limits depending on the quantification assays. A complication arises when these continuous repeated measures have heavy-tailed behavior. For such data structures, we propose a robust structure for a censored linear model based on the multivariate Student's t-distribution. To accommodate the autocorrelation among irregularly observed measures, a damped exponential correlation structure is employed. An efficient expectation-maximization (EM) type algorithm is developed for computing the maximum likelihood estimates, obtaining as by-products the standard errors of the fixed effects and the log-likelihood function. The proposed algorithm uses closed-form expressions at the E-step that rely on formulas for the mean and variance of a truncated multivariate Student's t-distribution. The methodology is illustrated through an application to a Human Immunodeficiency Virus-AIDS (HIV-AIDS) study and several simulation studies.
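The damped exponential correlation (DEC) structure mentioned above sets the correlation between measurements at times t_i and t_j to φ^(|t_i − t_j|^θ). A minimal sketch, with illustrative φ and θ and hypothetical measurement times:

```python
import numpy as np

def dec_corr(times, phi, theta):
    """Damped exponential correlation: R[i, j] = phi ** (|t_i - t_j| ** theta)."""
    t = np.asarray(times, dtype=float)
    d = np.abs(t[:, None] - t[None, :])
    return phi ** (d ** theta)

# Irregular measurement times (e.g. weeks); phi, theta are illustrative.
times = [0.0, 1.0, 2.5, 6.0]
R = dec_corr(times, phi=0.8, theta=0.5)
print(np.round(R, 3))
# theta = 1 recovers continuous AR(1)-type decay phi ** |t_i - t_j|;
# smaller theta damps the decay for widely separated measurements.
```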

Relevance: 30.00%

Abstract:

Fluid flow over bodies with complex geometry has been the subject of research by many scientists and has been widely explored experimentally and numerically. The present study proposes an Eulerian immersed boundary method for flow simulations over stationary or moving rigid bodies. The proposed method allows the use of Cartesian meshes. Here, two-dimensional simulations of flow over stationary and oscillating circular cylinders were used for verification and validation. Four cases were explored: flow over a stationary cylinder, flow over a cylinder oscillating in the flow direction, flow over a cylinder oscillating normal to the flow direction, and a cylinder with angular oscillation. Time integration was carried out by a classical 4th-order Runge-Kutta scheme, with a time step of the same order as the distance between two consecutive points in the x direction. High-order compact finite difference schemes were used to calculate spatial derivatives. The drag and lift coefficients, the lock-in phenomenon, and vorticity contour plots were used for the verification and validation of the proposed method. Extending the current method to bodies of different geometry and to three-dimensional simulations is straightforward. The results obtained show good agreement with both numerical and experimental results, encouraging the use of the proposed method.
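The classical 4th-order Runge-Kutta scheme used for the time integration above has a standard form. A self-contained sketch (the test equation dy/dt = y is an illustration, not the study's flow equations):

```python
import math

def rk4_step(f, t, y, dt):
    """One classical 4th-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Sanity check on dy/dt = y (exact solution e^t), integrating t = 0 to 1.
y, t, dt = 1.0, 0.0, 0.01
while t < 1.0 - 1e-12:
    y = rk4_step(lambda t, y: y, t, y, dt)
    t += dt
print(abs(y - math.e))  # global error is O(dt^4)
```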

Relevance: 30.00%

Abstract:

Oscillatory and resonant phenomena are explored in many experimental physics courses. In general, the experiments are interpreted in the limit of small oscillations and uniform fields. In this article we describe a low-cost experiment for studying the resonance of a compass needle in a magnetic field outside these limits. In this regime, nonlinear terms in the differential equation are responsible for phenomena that are interesting to explore in teaching laboratories.
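Outside the small-oscillation limit, the needle obeys a pendulum-type equation, I θ'' = −μB sin θ, whose period grows with amplitude. The sketch below illustrates this nonlinear effect with an assumed ω₀² = μB/I (the values are not from the article):

```python
import math

# Compass needle in a uniform field: theta'' = -omega0^2 * sin(theta),
# with omega0^2 = mu*B/I.  omega0 below is an assumed value.
omega0 = 2 * math.pi   # small-oscillation angular frequency (rad/s), T0 = 1 s

def period(theta0, dt=1e-4):
    """Quarter-period from rest at theta0 to the first zero crossing, times 4."""
    theta, omega, t = theta0, 0.0, 0.0
    while theta > 0:
        omega -= omega0 ** 2 * math.sin(theta) * dt  # semi-implicit Euler
        theta += omega * dt
        t += dt
    return 4 * t

T_small = period(0.1)   # close to the linear period 2*pi/omega0 = 1 s
T_large = period(2.5)   # markedly longer: the nonlinear regime
print(T_small, T_large)
```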

Relevance: 30.00%

Abstract:

The aim of this study was to evaluate the stress distribution in the cervical region of a sound upper central incisor in two clinical situations, standard and maximum masticatory forces, by means of a 3D model with the highest possible fidelity to the anatomic dimensions. Two models with 331,887 linear tetrahedral elements representing a sound upper central incisor with periodontal ligament, cortical and trabecular bone were loaded at 45° to the tooth's long axis. All structures were considered homogeneous and isotropic, with the exception of the enamel (anisotropic). A standard masticatory force (100 N) was simulated on one model, and a maximum masticatory force (235.9 N) on the other. PATRAN was used for pre- and post-processing and NASTRAN for processing. In the cementoenamel junction area, tensile stresses reached 14.7 MPa in the 100 N model and 40.2 MPa in the 235.9 N model, exceeding the enamel's tensile strength (16.7 MPa). The fact that the stress concentration in the amelodentinal junction exceeded the enamel's tensile strength under simulated conditions of maximum masticatory force suggests the possibility of the occurrence of non-carious cervical lesions such as abfractions.

Relevance: 30.00%

Abstract:

Colloidal particles have been used to template the electrosynthesis of several materials, such as semiconductors, metals and alloys. The method allows good control over the thickness of the resulting material by choosing the appropriate charge applied to the system, and it is able to produce high-density deposited materials without shrinkage. These materials are a true negative of the template structure and, due to the high surface areas obtained, are very promising for electrochemical applications. In the present work, the assembly of monodisperse polystyrene templates was conducted over gold, platinum and glassy carbon substrates in order to demonstrate the electrodeposition of an oxide, a conducting polymer and a hybrid inorganic-organic material with applications in the supercapacitor and sensor fields. The performance of the resulting nanostructured films has been compared with that of the analogous bulk material, and the results achieved are presented in this paper.
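The charge-to-thickness control mentioned above follows from Faraday's law of electrolysis, h = Q·M / (z·F·ρ·A), assuming 100% faradaic efficiency. All the numerical values below are illustrative assumptions for a hypothetical deposit, not data from the paper:

```python
# Deposit thickness from the applied charge via Faraday's law.
F = 96485.0      # C/mol, Faraday constant
Q = 0.50         # C, charge passed (assumed)
M = 74.7e-3      # kg/mol, molar mass of the deposited material (assumed)
z = 2            # electrons transferred per formula unit (assumed)
rho = 6.67e3     # kg/m^3, density of the deposit (assumed)
A = 1.0e-4       # m^2, electrode area (1 cm^2, assumed)

h = Q * M / (z * F * rho * A)   # thickness in meters
print(h * 1e9)                   # ~290 nm for these values
```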

Relevance: 30.00%

Abstract:

This paper presents a positional FEM formulation for the geometrically nonlinear dynamics of shells. The main objective is to develop a new FEM methodology based on the minimum potential energy theorem written in terms of nodal positions and generalized unconstrained vectors, rather than displacements and rotations. These characteristics are the novelty of the present work and avoid the use of large-rotation approximations. A nondimensional auxiliary coordinate system is created, and the change-of-configuration function is written as the composition of two independent mappings, from which the strain energy function is derived. This methodology is called positional and, as far as the authors are aware, is a new procedure for approximating geometrically nonlinear structures. In this paper a proof of the linear and angular momentum conservation property of the Newmark β algorithm is provided for the total Lagrangian description. The proposed shell element is locking-free for elastic stress-strain relations due to the presence of linear strain variation along the shell thickness. The curved, high-order element, together with an implicit procedure for solving the nonlinear equations, guarantees precision in the calculations. The momentum conservation, the locking-free behavior, and the frame invariance of the adopted mapping are numerically confirmed by examples. Copyright (C) 2009 H. B. Coda and R. R. Paccola.
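The conservation property of the Newmark β scheme cited above is easiest to see on a single-DOF linear oscillator. A sketch with the average-acceleration parameters β = 1/4, γ = 1/2, for which the scheme conserves the mechanical energy of a linear undamped system exactly (the oscillator data are illustrative):

```python
import math

# Newmark-beta stepping for m*u'' + k*u = 0 (unit mass, 1 Hz oscillator).
m, k = 1.0, (2 * math.pi) ** 2
beta, gamma = 0.25, 0.5      # average acceleration: energy-conserving
dt = 0.01
u, v = 1.0, 0.0              # initial displacement and velocity
a = -k * u / m               # initial acceleration from the equation of motion

for _ in range(100):         # advance one period (1 s)
    u_pred = u + dt * v + dt ** 2 * (0.5 - beta) * a
    v_pred = v + dt * (1 - gamma) * a
    a = -k * u_pred / (m + k * beta * dt ** 2)   # solve m*a + k*u_new = 0
    u = u_pred + beta * dt ** 2 * a
    v = v_pred + gamma * dt * a

energy = 0.5 * m * v ** 2 + 0.5 * k * u ** 2
print(u, energy)  # u returns near 1.0; energy equals its initial value 0.5*k
```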

Relevance: 30.00%

Abstract:

This work presents a fully nonlinear finite element formulation for shell analysis comprising linear strain variation along the thickness of the shell and a geometrically exact description for curved triangular elements. The developed formulation assumes positions and generalized unconstrained vectors as the variables of the problem, rather than displacements and finite rotations. The full 3D Saint-Venant-Kirchhoff constitutive relation is adopted and, to avoid locking, an enhancement of the rate of thickness variation is introduced. Accordingly, the second Piola-Kirchhoff stress tensor and the Green strain measure are employed to derive the specific strain energy potential. Curved triangular elements with cubic approximation are adopted using simple notation. Selected numerical simulations illustrate and confirm the objectivity, accuracy, path independence and applicability of the proposed technique.
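The Green strain and second Piola-Kirchhoff stress used above follow directly from the deformation gradient under the Saint-Venant-Kirchhoff relation, E = (FᵀF − I)/2 and S = λ tr(E) I + 2μ E. A sketch with an illustrative deformation gradient and Lamé constants (not values from the paper):

```python
import numpy as np

lam, mu = 1.0e3, 5.0e2            # illustrative Lame constants

F = np.array([[1.10, 0.05, 0.0],   # hypothetical deformation gradient
              [0.00, 0.98, 0.0],
              [0.00, 0.00, 1.0]])

E = 0.5 * (F.T @ F - np.eye(3))                   # Green-Lagrange strain
S = lam * np.trace(E) * np.eye(3) + 2 * mu * E    # 2nd Piola-Kirchhoff stress

print(np.round(E, 4))
print(np.round(S, 2))
```

Both tensors are symmetric by construction, which is one reason this stress-strain pair is convenient for deriving the strain energy potential.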

Relevance: 30.00%

Abstract:

Magnetic fields of intensities similar to those in our Galaxy are also observed in high-redshift galaxies, where a mean-field dynamo would not have had time to produce them; a primordial origin is therefore indicated. It has been suggested that magnetic fields were created in various primordial eras: during inflation, the electroweak phase transition, the quark-hadron phase transition (QHPT), the formation of the first objects, and reionization. We suggest here that the large-scale ~μG fields observed in galaxies at both high and low redshifts by Faraday rotation measurements (FRMs) have their origin in the electromagnetic fluctuations that naturally occurred in the dense hot plasma that existed just after the QHPT. We evolve the predicted fields to the present time. The size of the region containing a coherent magnetic field increases due to the fusion of smaller regions. Magnetic fields (MFs) of ~10 μG over a comoving ~1 pc region are predicted at redshift z ~ 10. These fields are orders of magnitude greater than those predicted in previous scenarios for creating primordial magnetic fields. Line-of-sight average MFs of ~10⁻² μG, relevant for FRMs, are obtained over a 1 Mpc comoving region at redshift z ~ 10. In the collapse to a galaxy (comoving size ~30 kpc) at z ~ 10, the fields are amplified to ~10 μG. This indicates that the MFs created immediately after the QHPT (10⁻⁴ s), predicted by the fluctuation-dissipation theorem, could be the origin of the ~μG fields observed by FRMs in galaxies at both high and low redshifts. Our predicted MFs are shown to be consistent with present observations. We discuss the possibility that the predicted MFs could cause non-negligible deflections of ultrahigh-energy cosmic rays and help create the observed isotropic distribution of their incoming directions. We also discuss the importance of the volume-average magnetic field predicted by our model in producing the first stars and in reionizing the Universe.
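The quoted collapse amplification can be checked by a back-of-the-envelope flux-freezing estimate: conserving the magnetic flux B·L² through a comoving region implies B scales as L⁻². Assuming this scaling (the abstract does not state the mechanism explicitly), the numbers are consistent:

```python
# Flux-freezing estimate: B ~ L^-2 from conservation of B * L^2.
B0_muG = 1e-2        # line-of-sight average field over ~1 Mpc (from the text)
L0_kpc = 1000.0      # initial comoving size, 1 Mpc
L1_kpc = 30.0        # galactic comoving size, 30 kpc (from the text)

B1_muG = B0_muG * (L0_kpc / L1_kpc) ** 2
print(round(B1_muG, 1))  # ~11 microgauss, of order the ~10 muG quoted above
```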

Relevance: 30.00%

Abstract:

Microgauss magnetic fields are observed in all galaxies at low and high redshifts. The origin of these intense magnetic fields is a challenging question in astrophysics. We show here that the natural plasma fluctuations in the primordial Universe (assumed to be random), as predicted by the fluctuation-dissipation theorem, produce ~0.034 μG fields over ~0.3 kpc regions in galaxies. If the dipole magnetic fields predicted by the fluctuation-dissipation theorem are not completely random, microgauss fields over regions ≳ 0.34 kpc are easily obtained. The model is thus a strong candidate for resolving the problem of the origin of magnetic fields within ≲ 10⁹ years in high-redshift galaxies.