953 results for Linear Codes over Finite Fields
Abstract:
The accurate prediction of stress histories for fatigue analysis is of utmost importance in the design process of wind turbine rotor blades. As detailed, transient, and geometrically non-linear three-dimensional finite element analyses are computationally far too expensive, it is commonly regarded as sufficient to calculate the stresses with a geometrically linear analysis and to superimpose different stress states in order to obtain the complete stress histories. In order to quantify the error of geometrically linear simulations in the calculation of stress histories, and to verify the practical applicability of the superposition principle in fatigue analyses, this paper studies the influence of geometric non-linearity using the example of a trailing-edge bond line, as this subcomponent suffers from high strains in the span-wise direction. The blade under consideration is that of the IWES IWT-7.5-164 reference wind turbine. From turbine simulations, the highest edgewise loading scenario among the fatigue load cases is used as the reference. A 3D finite element model of the blade is created, and the bond-line fatigue assessment is performed according to the GL certification guidelines in their 2010 edition and, for comparison, according to the latest DNV GL standard from the end of 2015. The results show a significant difference between the geometrically linear and non-linear stress analyses when the bending moments are approximated via a corresponding external loading, especially in the case of the 2010 GL certification guidelines. This finding emphasizes the need to reconsider the application of the superposition principle in fatigue analyses of modern flexible rotor blades, where geometric non-linearities become significant. In addition, a new load application methodology is introduced that reduces the geometrically non-linear behaviour of the blade in the finite element analysis.
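As a hedged illustration of the superposition principle the paper puts to the test (a minimal sketch with invented names and values, not the authors' code): unit-load stress states from a geometrically linear analysis are scaled by the load time series and summed to assemble a stress history.

```python
# Hedged sketch: assembling a stress history by linear superposition.
# All arrays and names are illustrative assumptions, not the paper's data.
import numpy as np

# Unit stress states (stress per unit load) from geometrically linear FE runs,
# e.g. for flapwise and edgewise unit bending moments at one bond-line element.
sigma_unit = np.array([1.8, 4.2])  # MPa per unit load component (assumed)

# Load time series for the two load components (assumed shapes, not turbine data).
t = np.linspace(0.0, 10.0, 1001)
loads = np.vstack([np.sin(2 * np.pi * 0.3 * t),        # flapwise moment history
                   0.5 * np.cos(2 * np.pi * 1.1 * t)])  # edgewise moment history

# Superposition: sigma(t) = sum_i L_i(t) * sigma_unit_i.
# Valid only while the structural response stays geometrically linear,
# which is exactly the assumption the paper examines.
sigma_t = sigma_unit @ loads
print(sigma_t.shape)  # (1001,) stress history at the element
```

This is precisely the step that breaks down when the blade's response becomes geometrically non-linear, since the stress then no longer scales linearly with the applied loads.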
Abstract:
Purpose: To develop and validate a simple, efficient, and reliable liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for the quantitative determination of two dermatological drugs, Lamisil® (terbinafine, TBN) and Proscar® (finasteride, FIN), in split tablet dosage form. Methods: Thirty tablets of each of the two studied medications were randomly selected. Tablets were weighed and divided into three groups. Ten tablets of each drug were kept intact; another group of 10 tablets was manually split into halves using a tablet cutter and weighed with an analytical balance; a third group was split into quarters and weighed. All intact and split tablets were individually dissolved in a water:methanol mixture (4:1), sonicated, filtered, and further diluted with mobile phase. Optimal chromatographic separation and mass spectrometric detection were achieved using an Agilent 1200 HPLC system coupled with an Agilent 6410 triple quadrupole mass spectrometer. Analytes were eluted through an Agilent Eclipse Plus C8 analytical column (150 mm × 4.6 mm, 5 μm) with a mobile phase composed of solvent A (water containing 0.1% formic acid and 5 mM ammonium formate, pH 7.5) and solvent B (acetonitrile mixed with water in an A:B ratio of 55:45) at a flow rate of 0.8 mL min-1 with a total run time of 12 min. Mass spectrometric detection was carried out in positive ionization mode, with analyte quantitation monitored in multiple reaction monitoring (MRM) mode. Results: The proposed analytical method proved to be specific, robust, and adequately sensitive. The results showed a good linear fit over the concentration range of 20-100 ng mL-1 for both analytes, with correlation coefficients (r²) of 0.999 and 0.998 for finasteride and terbinafine, respectively. Following tablet splitting, the drug content of the split tablets fell outside the proxy USP specification for at least 14 halves (70%) and 34 quarters (85%) of FIN, as well as 16 halves (80%) and 37 quarters (92.5%) of TBN. Mean weight loss after splitting was 0.58 and 2.22% for FIN half- and quarter-tablets, respectively, and 3.96 and 4.09% for TBN half- and quarter-tablets, respectively. Conclusion: The proposed LC-MS/MS method has successfully been used to determine the drug content uniformity of split FIN and TBN tablets. Unequal distribution of drug between the split tablets is indicated by standard deviations beyond the accepted value. Hence, it is recommended not to split non-scored tablets, especially for medications with significant toxicity.
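Given the proxy USP specification referenced above, a hedged sketch of the kind of content-uniformity check involved follows; the 85-115% acceptance window, the drug amounts, and the measurements are all illustrative assumptions, not the study's specification or data.

```python
# Hedged sketch of a split-tablet content-uniformity check.
# The 85-115% window is an assumed proxy specification; data are invented.
import statistics

expected_half_content_mg = 2.5          # assumed: half of a 5 mg tablet
measured_mg = [2.44, 2.61, 2.05, 2.90, 2.48, 2.31, 2.76, 2.12, 2.55, 2.39]

percent_of_expected = [100 * m / expected_half_content_mg for m in measured_mg]
outside = [p for p in percent_of_expected if not 85.0 <= p <= 115.0]

print(f"mean  = {statistics.mean(percent_of_expected):.1f}%")
print(f"stdev = {statistics.stdev(percent_of_expected):.1f}%")
print(f"units outside 85-115%: {len(outside)} of {len(measured_mg)}")
```

A high standard deviation in such percentages is what the study reads as unequal drug distribution between split halves and quarters.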
Abstract:
According to the philosophy of Katz and Sarnak, the distribution of the zeros of $L$-functions is predicted by the behaviour of the eigenvalues of random matrices. In particular, the behaviour of the zeros near the central point reveals the symmetry type of the family of $L$-functions. Once the symmetry is identified, the Katz-Sarnak philosophy conjectures that several statistics associated with the zeros will be modelled by the eigenvalues of random matrices from the corresponding group. This thesis studies the distribution of the zeros near the central point of the family of elliptic curves over $\mathbb{Q}[i]$. Brumer carried out these computations in 1992 for the family of elliptic curves over $\mathbb{Q}$. The new issues that arise in generalizing his work to a number field are highlighted.
Abstract:
Bilinear pairings can be used to construct cryptographic systems with very desirable properties. A pairing maps members of groups on elliptic and genus 2 hyperelliptic curves to an extension of the finite field over which the curves are defined. The finite fields must, however, be large to ensure adequate security. The complicated group structure of the curves and the expensive field operations result in time-consuming computations that are an impediment to the practicality of pairing-based systems. The Tate pairing can be computed efficiently using the η_T method. Hardware architectures can be used to accelerate the required operations by exploiting the parallelism inherent in the algorithmic and finite field calculations. The Tate pairing can be performed on elliptic curves of characteristic 2 and 3 and on genus 2 hyperelliptic curves of characteristic 2. Curve selection depends on several factors, including the desired computational speed, the area constraints of the target device, and the required security level. In this thesis, custom hardware processors for the acceleration of the Tate pairing are presented and implemented on an FPGA. The underlying hardware architectures are designed with care to exploit the available parallelism while ensuring resource efficiency. The characteristic 2 elliptic curve processor contains novel units that return a pairing result in a very low number of clock cycles. Despite the more complicated computational algorithm, the speed of the genus 2 processor is comparable. Pairing computation on each of these curves can be appealing in applications with different requirements. A flexible processor that can perform pairing computation on elliptic curves of characteristic 2 and 3 has also been designed. An integrated hardware/software design and verification environment has been developed. This system automates the procedures required for robust processor creation and enables the rapid provision of solutions for a wide range of cryptographic applications.
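As a hedged, minimal illustration of the characteristic-2 field arithmetic such processors accelerate (a software toy, not the thesis's hardware design; the field size and reduction polynomial are assumptions), the sketch below multiplies elements of GF(2^m) represented as bit masks:

```python
# Hedged sketch: multiplication in GF(2^m) with a polynomial basis.
# Field and reduction polynomial are illustrative; real pairing parameters differ.

M = 7                      # assumed small field GF(2^7) for demonstration
REDUCTION = 0b10000011     # assumed irreducible polynomial x^7 + x + 1

def gf2m_mul(a: int, b: int) -> int:
    """Multiply two GF(2^m) elements stored as bit masks."""
    result = 0
    while b:
        if b & 1:
            result ^= a    # add (XOR) the current shifted copy of a
        b >>= 1
        a <<= 1
        if a >> M:         # degree reached m: reduce modulo the field polynomial
            a ^= REDUCTION
    return result

# Quick check: x * x = x^2 in GF(2^7)
assert gf2m_mul(0b10, 0b10) == 0b100
print(bin(gf2m_mul(0b1011, 0b1101)))
```

The shift-and-XOR structure of this loop is the kind of carry-free, bit-level operation that maps naturally onto parallel hardware.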
Abstract:
We describe a one-time signature scheme based on the hardness of the syndrome decoding problem, and prove it secure in the random oracle model. Our proposal can be instantiated on general linear error-correcting codes, rather than restricted families like alternant codes for which a decoding trapdoor is known to exist. (C) 2010 Elsevier Inc. All rights reserved.
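For orientation, the syndrome decoding problem asks for a low-weight error vector e with a prescribed syndrome H·e^T = s; the following is a hedged toy sketch of this map (invented parameters, not the paper's scheme):

```python
# Hedged sketch: the syndrome map underlying syndrome-decoding signatures.
# H and e are small illustrative values, not parameters from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 6                              # assumed tiny code: length 12, dimension 6
H = rng.integers(0, 2, size=(n - k, n))   # assumed random parity-check matrix

e = np.zeros(n, dtype=int)
e[[1, 4, 9]] = 1                          # low-weight error vector (weight 3)

s = H @ e % 2                             # syndrome over GF(2)
print("syndrome:", s)
# Inverting this map for low-weight e (syndrome decoding) is NP-hard in general,
# which is the hardness assumption such signature schemes rest on.
```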
Abstract:
In the literature on tests of normality, much concern has been expressed over the problems associated with residual-based procedures. Indeed, the specialized tables of critical points needed to perform the tests have been derived for the location-scale model; hence, reliance on the available significance points in the context of regression models may cause size distortions. We propose a general solution to the problem of controlling the size of normality tests for the disturbances of the standard linear regression model, based on the technique of Monte Carlo tests.
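A hedged sketch of the Monte Carlo test technique follows (illustrative statistic and data, not the paper's implementation): when the test statistic is pivotal under the null hypothesis, simulating it N times under the null and ranking the observed value yields an exact p-value regardless of the sample size.

```python
# Hedged sketch of a Monte Carlo test for normality.
# Illustrative only: a simple skewness+kurtosis statistic stands in for the
# residual-based statistics discussed in the paper.
import numpy as np

def stat(res):
    z = (res - res.mean()) / res.std()
    skew, kurt = (z**3).mean(), (z**4).mean() - 3.0
    return skew**2 + kurt**2 / 4.0   # assumed Jarque-Bera-style statistic

rng = np.random.default_rng(1)
n, N = 50, 999                       # sample size; number of MC replications
observed = stat(rng.standard_t(df=3, size=n))  # data violating the Gaussian null

# Simulate the statistic under the Gaussian null (assumed pivotal here).
null_stats = np.array([stat(rng.standard_normal(n)) for _ in range(N)])

# Exact MC p-value: rank of the observed statistic among the simulated ones.
p_mc = (1 + np.sum(null_stats >= observed)) / (N + 1)
print(f"Monte Carlo p-value: {p_mc:.3f}")
```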
Abstract:
In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR), with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at the univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH, and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist of combining univariate specification tests. Specifically, we combine tests across equations using the Monte Carlo (MC) test procedure to avoid Bonferroni-type bounds. Since non-Gaussian based tests are not pivotal, we apply the "maximized MC" (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized with respect to these nuisance parameters to control the test's significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries, and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
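A hedged sketch of the MMC idea follows (invented statistic, grid, and data, not the paper's procedure): when the null distribution of the statistic depends on a nuisance parameter, computing the MC p-value over a grid of nuisance values and reporting the maximum controls the significance level.

```python
# Hedged sketch of the maximized Monte Carlo (MMC) test method.
# Nuisance parameter: degrees of freedom of Student-t errors (grid is assumed).
import numpy as np

rng = np.random.default_rng(2)

def stat(e):
    z = (e - e.mean()) / e.std()
    return abs((z**3).mean())        # assumed asymmetry (sign-type) statistic

n, N = 60, 499
observed = stat(rng.standard_t(df=5, size=n))

def mc_pvalue(df):
    sims = np.array([stat(rng.standard_t(df=df, size=n)) for _ in range(N)])
    return (1 + np.sum(sims >= observed)) / (N + 1)

# Maximize the MC p-value over a grid of nuisance degrees of freedom.
grid = [3, 5, 8, 12, 20, 50]
p_mmc = max(mc_pvalue(df) for df in grid)
print(f"MMC p-value: {p_mmc:.3f}")   # reject only if this maximum is small
```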
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
In this talk we introduce a new methodology for wind field simulation or forecasting over complex terrain. The idea is to use wind measurements or predictions of the HARMONIE mesoscale model as the input data for an adaptive finite element mass-consistent wind model [1,2]. The method has recently been implemented in the freely available Wind3D code [3]. A description of the HARMONIE Non-Hydrostatic Dynamics can be found in [4]. The results of HARMONIE (obtained with a maximum resolution of about 1 km) are refined by the finite element model at a local scale (about a few meters). An interface between both models is implemented such that the initial wind field approximation is obtained by a suitable interpolation of the HARMONIE results…
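For orientation, mass-consistent models of this family are commonly formulated as a least-squares adjustment of an interpolated field subject to mass conservation; the functional below is a standard form of that model class, stated here as an assumption rather than quoted from [1,2]:

```latex
% Least-squares functional typical of mass-consistent wind models:
% adjust the interpolated field (u0, v0, w0) minimally, subject to mass
% conservation; alpha1, alpha2 are horizontal/vertical weighting parameters.
\begin{aligned}
\min_{\mathbf{u}} \quad & \int_{\Omega} \Big( \alpha_1^2 \big[ (u-u_0)^2 + (v-v_0)^2 \big]
  + \alpha_2^2 \, (w-w_0)^2 \Big) \, d\Omega \\
\text{subject to} \quad & \nabla \cdot \mathbf{u} = 0 \ \text{in } \Omega, \qquad
  \mathbf{u} \cdot \mathbf{n} = 0 \ \text{on impermeable boundaries.}
\end{aligned}
```

Enforcing the divergence-free constraint through a Lagrange multiplier leads to an elliptic (Poisson-type) problem, which is what an adaptive finite element solver can refine near the terrain.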
Abstract:
What is the computational power of a quantum computer? We show that determining the output of a quantum computation is equivalent to counting the number of solutions to an easily computed set of polynomials defined over the finite field $\mathbb{Z}_2$. This connection allows simple proofs to be given for two known relationships between quantum and classical complexity classes, namely $\mathrm{BQP} \subseteq \mathrm{P}^{\#\mathrm{P}}$ and $\mathrm{BQP} \subseteq \mathrm{PP}$.
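A hedged toy illustration of the counting problem in question (an invented system, not the paper's circuit encoding): count the common solutions of a set of polynomials over $\mathbb{Z}_2$ by brute force.

```python
# Hedged toy sketch: count solutions of a system of polynomials over Z_2.
# The system below is an invented example, not the paper's construction,
# which encodes a quantum circuit's amplitudes in such counts.
from itertools import product

# Each polynomial is a function of the bit vector x; arithmetic is mod 2.
polys = [
    lambda x: (x[0] * x[1] + x[2]) % 2,    # x0*x1 + x2 = 0
    lambda x: (x[1] + x[2] + 1) % 2,       # x1 + x2 + 1 = 0
]

count = sum(all(p(x) == 0 for p in polys) for x in product((0, 1), repeat=3))
print("number of common solutions over Z_2:", count)
# Counting such solutions is #P-hard in general, which is where the
# P^{#P} upper bound on BQP comes from.
```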
Abstract:
We obtain phase diagrams of regular and irregular finite-connectivity spin glasses. Contact is first established between properties of the phase diagram and the performance of low-density parity check (LDPC) codes within the replica symmetric (RS) ansatz. We then study the location of the dynamical and critical transition points of these systems within the one-step replica symmetry breaking (RSB) theory, extending similar calculations that have been performed in the past for the Bethe spin-glass problem. We observe that the location of the dynamical transition line does change within the RSB theory, in comparison with the results obtained in the RS case. For LDPC decoding of messages transmitted over the binary erasure channel we find, at zero temperature and rate $R=1/4$, an RS critical transition point at $p_c \approx 0.67$, while the critical RSB transition point is located at $p_c \approx 0.7450 \pm 0.0050$, to be compared with the corresponding Shannon bound $1-R$. For the binary symmetric channel we show that the low-temperature reentrant behavior of the dynamical transition line, observed within the RS ansatz, changes its location when the RSB ansatz is employed; the dynamical transition point occurs at higher values of the channel noise. Possible practical implications to improve the performance of state-of-the-art error correcting codes are discussed. © 2006 The American Physical Society.
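As a hedged illustration of binary-erasure-channel decoding, the setting of the zero-temperature results above (a tiny invented code, not an ensemble from the paper): the standard peeling decoder repeatedly resolves any parity check that contains exactly one erased bit, and stalls on a stopping set.

```python
# Hedged sketch: peeling decoder for a linear code over the binary erasure channel.
# Tiny invented parity-check matrix; real LDPC ensembles are large and sparse.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])         # assumed 3x6 parity-check matrix

codeword = np.array([0, 1, 1, 1, 0, 0])    # satisfies H @ c = 0 (mod 2)
received = codeword.astype(float)
received[[1, 4]] = np.nan                  # erase two positions

while np.isnan(received).any():
    progress = False
    for row in H:
        erased = row.astype(bool) & np.isnan(received)
        if erased.sum() == 1:              # check with exactly one erasure: solve it
            known = row.astype(bool) & ~np.isnan(received)
            received[np.where(erased)[0][0]] = np.sum(received[known]) % 2
            progress = True
    if not progress:
        break                              # stopping set reached; decoding stalls

print("recovered:", received, "ok:", np.array_equal(received, codeword))
```

Thresholds such as the $p_c$ values above describe the erasure fractions at which decoding of large random ensembles succeeds or fails.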
Abstract:
The present dissertation is concerned with the determination of the magnetic field distribution in magnetic electron lenses by means of the finite element method. In the differential form of this method a Poisson-type equation is solved by numerical methods over a finite boundary. Previous methods of adapting this procedure to the requirements of digital computers have restricted its use to computers of extremely large core size. It is shown that by reformulating the boundary conditions, a considerable reduction in core store can be achieved for a given accuracy of field distribution. The magnetic field distribution of a lens may also be calculated by the integral form of the finite element method. This eliminates the boundary problems mentioned but introduces other difficulties. After a careful analysis of both methods it has proved possible to combine the advantages of both in a new approach to the problem which may be called the 'differential-integral' finite element method. The application of this method to the determination of the magnetic field distribution of some new types of magnetic lenses is described. In the course of the work considerable re-programming of standard programs was necessary in order to reduce the core store requirements to a minimum.
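As a hedged, minimal illustration of the differential finite element approach described above (a one-dimensional Poisson analogue with invented data, not the dissertation's axisymmetric lens formulation):

```python
# Hedged sketch: 1D finite element solve of a Poisson-type equation -u'' = f
# on (0,1) with u(0)=u(1)=0, linear elements on a uniform mesh.
# A toy analogue of the field problems discussed, not the lens formulation.
import numpy as np

n = 20                          # number of elements
h = 1.0 / n
nodes = np.linspace(0.0, 1.0, n + 1)

K = np.zeros((n + 1, n + 1))    # global stiffness matrix
F = np.zeros(n + 1)             # load vector
f = lambda x: np.pi**2 * np.sin(np.pi * x)   # source giving u = sin(pi x)

for e in range(n):              # assemble element contributions
    i, j = e, e + 1
    K[np.ix_([i, j], [i, j])] += np.array([[1, -1], [-1, 1]]) / h
    mid = 0.5 * (nodes[i] + nodes[j])
    F[[i, j]] += f(mid) * h / 2  # midpoint quadrature

# Dirichlet boundary conditions u(0)=u(1)=0: solve on interior nodes only.
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])

print("max error vs sin(pi x):", np.abs(u - np.sin(np.pi * nodes)).max())
```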
Abstract:
We investigate two numerical procedures for the Cauchy problem in linear elasticity, involving the relaxation of either the given boundary displacements (Dirichlet data) or the prescribed boundary tractions (Neumann data) on the over-specified boundary, in the alternating iterative algorithm of Kozlov et al. (1991). The two mixed direct (well-posed) problems associated with each iteration are solved using the method of fundamental solutions (MFS), in conjunction with the Tikhonov regularization method, while the optimal value of the regularization parameter is chosen via the generalized cross-validation (GCV) criterion. An efficient regularizing stopping criterion, which terminates the iterative procedure at the point where the accumulation of noise becomes dominant and the errors in predicting the exact solution increase, is also presented. The MFS-based iterative algorithms with relaxation are tested on Cauchy problems for isotropic linear elastic materials in various geometries to confirm the numerical convergence, stability, accuracy, and computational efficiency of the proposed method.
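A hedged, generic illustration of the Tikhonov-plus-GCV machinery used within each direct solve follows (an invented ill-conditioned system stands in for the MFS collocation matrix; not the paper's elasticity implementation):

```python
# Hedged sketch: Tikhonov regularization with the regularization parameter
# chosen by generalized cross-validation (GCV). Generic ill-posed demo system;
# in the paper this role is played by the MFS collocation matrix.
import numpy as np

rng = np.random.default_rng(3)
n = 40
t = np.linspace(0, 1, n)
A = np.exp(-5 * np.abs(t[:, None] - t[None, :]))   # smooth, ill-conditioned kernel
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-3 * rng.standard_normal(n)     # noisy data

def gcv(lam):
    # GCV score: n * ||A x_lam - b||^2 / (n - trace(influence matrix))^2
    inv = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)
    residual = A @ inv @ b - b
    trace = np.trace(A @ inv)
    return n * (residual @ residual) / (n - trace) ** 2

lams = np.logspace(-12, 0, 60)
lam_opt = lams[np.argmin([gcv(l) for l in lams])]
x_reg = np.linalg.solve(A.T @ A + lam_opt * np.eye(n), A.T @ b)
print(f"lambda* = {lam_opt:.2e}, rel. error = "
      f"{np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true):.3f}")
```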
Abstract:
AMS subject classification: 90C05, 90A14.