55 results for Irreducible polynomial
Abstract:
A new type of advanced encryption standard (AES) implementation using a normal basis is presented. The method is based on a lookup technique that makes use of inversion and shift registers, which leads to a smaller S-box lookup table than corresponding implementations. The reduction in the lookup size comes from grouping inverses into conjugate sets, which in turn reduces the number of lookup values. The technique is implemented in a regular AES architecture using register files, which requires less interconnect and area and is suitable for security applications. The results of the implementation are competitive in throughput and area with corresponding solutions in a polynomial basis.
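As a rough illustration of the grouping idea (not the paper's hardware design), the sketch below partitions GF(2^8), built with the standard AES reduction polynomial x^8 + x^4 + x^3 + x + 1, into Frobenius conjugate classes {x, x^2, x^4, ...}. Since inv(x^2) = inv(x)^2 is only a cyclic shift in a normal basis, one stored inverse per class suffices, which is where the reduction in lookup values comes from.

```python
def gf_mul(a, b, poly=0x11B):
    """Carry-less multiplication in GF(2^8), reduced by the AES polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_inv(a):
    """Inverse by Fermat: a^(2^8 - 2) = a^254, with the AES convention inv(0) = 0."""
    r, base, e = 1, a, 254
    while e:
        if e & 1:
            r = gf_mul(r, base)
        base = gf_mul(base, base)
        e >>= 1
    return r if a else 0

# Partition GF(2^8) into conjugate classes under the Frobenius map x -> x^2.
seen, classes = set(), []
for x in range(256):
    if x in seen:
        continue
    orbit, y = [], x
    while y not in seen:
        seen.add(y)
        orbit.append(y)
        y = gf_mul(y, y)
    classes.append(orbit)

# One stored inverse per class: inv(x^2) = inv(x)^2 is just a shift in a normal basis.
reduced_table = {orbit[0]: gf_inv(orbit[0]) for orbit in classes}
print(f"{len(reduced_table)} stored inverses instead of 256")
```

Running this reports 36 conjugate classes for the 256 field elements, i.e. roughly an eight-fold reduction in table entries.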
Abstract:
We consider non-standard totalisation functors for double complexes, involving left or right truncated products. We show how properties of these imply that the algebraic mapping torus of a self map h of a cochain complex of finitely presented modules has trivial negative Novikov cohomology, and has trivial positive Novikov cohomology provided h is a quasi-isomorphism. As an application we obtain a new and transparent proof that a finitely dominated cochain complex over a Laurent polynomial ring has trivial (positive and negative) Novikov cohomology.
Abstract:
We restate the notion of orthogonal calculus in terms of model categories. This provides a cleaner set of results and makes the role of O(n)-equivariance clearer. Thus we develop model structures for the category of n-polynomial and n-homogeneous functors, along with Quillen pairs relating them. We then classify n-homogeneous functors, via a zig-zag of Quillen equivalences, in terms of spectra with an O(n)-action. This improves upon the classification theorem of Weiss. As an application, we develop a variant of orthogonal calculus by replacing topological spaces with orthogonal spectra.
Abstract:
This paper investigates the distribution of the condition number of complex Wishart matrices. Two closely related measures are considered: the standard condition number (SCN) and the Demmel condition number (DCN), both of which have important applications in the context of multiple-input multiple-output (MIMO) communication systems, as well as in various branches of mathematics. We first present a novel generic framework for the SCN distribution which accounts for both central and non-central Wishart matrices of arbitrary dimension. This result is a simple unified expression which involves only a single scalar integral, and therefore allows for fast and efficient computation. For the case of dual Wishart matrices, we derive new exact polynomial expressions for both the SCN and DCN distributions. We also formulate a new closed-form expression for the tail SCN distribution which applies for correlated central Wishart matrices of arbitrary dimension and demonstrates an interesting connection to the maximum eigenvalue moments of Wishart matrices of smaller dimension. Based on our analytical results, we gain valuable insights into the statistical behavior of the channel conditioning for various MIMO fading scenarios, such as uncorrelated/semi-correlated Rayleigh fading and Ricean fading. © 2010 IEEE.
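For intuition about the two measures only (the paper's closed-form expressions are not reproduced here), the Monte Carlo sketch below draws complex central Wishart matrices W = HH^H and estimates empirical tail probabilities of the standard condition number SCN = λ_max/λ_min and of the Demmel condition number, taken here in the form DCN = tr(W)/λ_min commonly used for Wishart matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, trials = 2, 4, 20_000          # dual (2x2) Wishart, the case with exact results

scn = np.empty(trials)
dcn = np.empty(trials)
for t in range(trials):
    H = (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))) / np.sqrt(2)
    W = H @ H.conj().T               # complex central Wishart matrix
    eig = np.linalg.eigvalsh(W)      # real eigenvalues, ascending
    scn[t] = eig[-1] / eig[0]
    dcn[t] = eig.sum() / eig[0]

# Empirical tail probabilities P(SCN > x) and P(DCN > x)
for x in (5, 10, 20):
    print(f"P(SCN > {x}) ≈ {np.mean(scn > x):.3f}   P(DCN > {x}) ≈ {np.mean(dcn > x):.3f}")
```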
Abstract:
This paper proposes a method to assess the small-signal stability of a power system network by selective determination of the modal eigenvalues. It uses an accelerating polynomial transform, designed using approximate eigenvalues obtained from a wavelet approximation. Application to the IEEE 14-bus network model produced computational savings of 20% over the QR algorithm.
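The following is only a generic illustration of how a polynomial transform built from rough eigenvalue estimates can steer an iterative solver towards a selected mode; it is not the paper's wavelet-designed transform, and the matrix and eigenvalues are hypothetical rather than the IEEE 14-bus model.

```python
import numpy as np

rng = np.random.default_rng(0)
D = np.diag([-0.1, -0.5, -2.0, -5.0])           # hypothetical modal eigenvalues
N = 0.01 * rng.standard_normal((4, 4))
A = D + (N + N.T) / 2                           # small symmetric perturbation

# Rough estimates of the modes to suppress (e.g. from a coarse approximation).
unwanted = [-0.5, -2.0, -5.0]

def apply_p(M, v):
    """Apply p(M) v with p(z) = prod(z - mu): damps the unwanted modes."""
    for mu in unwanted:
        v = M @ v - mu * v
    return v

v = np.ones(4)
for _ in range(30):                             # power iteration on p(A)
    v = apply_p(A, v)
    v /= np.linalg.norm(v)

lam = v @ A @ v                                 # Rayleigh quotient (v has unit norm)
print("selected eigenvalue ≈", lam)             # close to -0.1
```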
Abstract:
Microwave heating reduces the preparation time and improves the adsorption quality of activated carbon. In this study, activated carbon was prepared by impregnation of palm kernel fiber with phosphoric acid followed by microwave activation. Three different types of activated carbon were prepared, having high surface areas of 872 m² g⁻¹, 1256 m² g⁻¹, and 952 m² g⁻¹ and pore volumes of 0.598 cc g⁻¹, 1.010 cc g⁻¹, and 0.778 cc g⁻¹, respectively. The combined effects of the different process parameters, such as the initial adsorbate concentration, pH, and temperature, on adsorption efficiency were explored with the help of a Box-Behnken design for response surface methodology (RSM). The adsorption rate could be expressed by a polynomial equation as a function of the independent variables. The hexavalent chromium adsorption rate was found to be 19.1 mg g⁻¹ at the optimized process parameters, i.e., an initial concentration of 60 mg L⁻¹, pH of 3, and operating temperature of 50 °C. Adsorption of Cr(VI) by the prepared activated carbon was spontaneous and followed second-order kinetics. The adsorption mechanism can be described by the Freundlich isotherm model. The prepared activated carbon has demonstrated comparable performance to other available activated carbons for the adsorption of Cr(VI).
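The sketch below shows the kind of second-order response-surface polynomial that a Box-Behnken design yields, fitted by least squares; the factor settings and responses are synthetic placeholders, not the paper's adsorption measurements.

```python
import numpy as np
from itertools import combinations

# Factors: initial concentration, pH, temperature — coded to [-1, 1].
# Synthetic placeholder responses; the paper's actual data are not reproduced here.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(15, 3))            # 15 runs, 3 coded factors
y = (15 + 3*X[:, 0] - 2*X[:, 1] + X[:, 2]
     - 1.5*X[:, 0]**2 + 0.5*X[:, 0]*X[:, 1]
     + 0.2*rng.standard_normal(15))

def design_matrix(X):
    """Full quadratic RSM model: intercept, linear, squared and interaction terms."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(3)]
    cols += [X[:, i]**2 for i in range(3)]
    cols += [X[:, i]*X[:, j] for i, j in combinations(range(3), 2)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print("fitted RSM coefficients:", np.round(beta, 2))
```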
Abstract:
This paper investigates the construction of linear-in-the-parameters (LITP) models for multi-output regression problems. Most existing stepwise forward algorithms choose the regressor terms one by one, each time maximizing the model error reduction ratio. The drawback is that such procedures cannot guarantee a sparse model, especially under highly noisy learning conditions. The main objective of this paper is to improve the sparsity and generalization capability of a model for multi-output regression problems, while reducing the computational complexity. This is achieved by proposing a novel multi-output two-stage locally regularized model construction (MTLRMC) method using the extreme learning machine (ELM). In this new algorithm, the nonlinear parameters in each term, such as the width of the Gaussian function and the power of a polynomial term, are firstly determined by the ELM. An initial multi-output LITP model is then generated according to the termination criteria in the first stage. The significance of each selected regressor is checked and the insignificant ones are replaced at the second stage. The proposed method can produce an optimized compact model by using the regularized parameters. Further, to reduce the computational complexity, a proper regression context is used to allow fast implementation of the proposed method. Simulation results confirm the effectiveness of the proposed technique. © 2013 Elsevier B.V.
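A minimal sketch of the underlying idea, assuming Gaussian RBF regressor terms whose centres and widths are assigned at random in ELM fashion, with linear output weights obtained by ridge-regularized least squares shared across the outputs; the paper's two-stage regularized term selection and replacement is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy multi-output regression data (placeholders, not from the paper).
X = rng.uniform(-1, 1, size=(200, 2))
Y = np.column_stack([np.sin(3*X[:, 0]) + X[:, 1],
                     X[:, 0]*X[:, 1]]) + 0.05*rng.standard_normal((200, 2))

# ELM step: the nonlinear parameters of each candidate term (RBF centre and width)
# are drawn at random instead of being optimised.
n_terms = 40
centres = rng.uniform(-1, 1, size=(n_terms, 2))
widths  = rng.uniform(0.2, 1.0, size=n_terms)

def regressors(X):
    d2 = ((X[:, None, :] - centres[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * widths**2))        # one Gaussian term per column

# Linear-in-the-parameters output weights: ridge-regularised least squares,
# with the same regressor set shared by both outputs.
P = regressors(X)
lam = 1e-3
W = np.linalg.solve(P.T @ P + lam*np.eye(n_terms), P.T @ Y)

print("training RMSE per output:", np.sqrt(((P @ W - Y)**2).mean(0)))
```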
Abstract:
Modal analysis is a popular approach used in structural dynamics and aeroelastic problems due to its efficiency. The response of a structure is composed of the sum of orthogonal eigenvectors, or modeshapes, and corresponding modal frequencies. This paper investigates the importance of modeshapes for the aeroelastic response of the Goland wing subject to structural uncertainties. The wing undergoes limit cycle oscillations (LCO) as a result of the inclusion of polynomial stiffness nonlinearities. The LCO computations are performed using a Harmonic Balance approach for speed; the modal properties of the system are extracted from MSC NASTRAN. Variability in both the wing's structure and the store centre of gravity location is investigated in two cases: supercritical and subcritical LCOs. Results show that the LCO behaviour is only sensitive to changes in the modeshapes when the nature of the modes changes significantly.
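For reference, a minimal sketch of the modal decomposition the abstract relies on, using a hypothetical 3-DOF mass/stiffness pair rather than the Goland wing model: the generalized eigenproblem K φ = ω² M φ yields the modeshapes and modal frequencies, and the structural response is expanded as a sum over those modeshapes.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 3-DOF mass/stiffness matrices (not the Goland wing model).
M = np.diag([2.0, 1.0, 1.0])
K = np.array([[ 4., -2.,  0.],
              [-2.,  4., -2.],
              [ 0., -2.,  2.]])

# Generalized eigenproblem K phi = w^2 M phi: modeshapes and modal frequencies.
w2, Phi = eigh(K, M)                 # eigenvalues ascending, columns are modeshapes
freqs = np.sqrt(w2)

# The physical response is a sum over modeshapes: x(t) = sum_i Phi[:, i] * q_i(t),
# and mass-normalised modeshapes decouple the equations of motion.
print("modal frequencies [rad/s]:", np.round(freqs, 3))
print("Phi^T M Phi ≈ I:", np.allclose(Phi.T @ M @ Phi, np.eye(3)))
```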
Abstract:
For the computation of limit cycle oscillations (LCO) at transonic speeds, CFD is required to capture the nonlinear flow features present. The Harmonic Balance method provides an effective means for the computation of LCOs, and this paper exploits its efficiency to investigate the impact of variability (both structural and aerodynamic) on the aeroelastic behaviour of a two degree-of-freedom aerofoil. A Harmonic Balance inviscid CFD solver is coupled with the structural equations and is validated against time marching analyses. Polynomial chaos expansions are employed for the stochastic investigation as a faster alternative to Monte Carlo analysis. Adaptive sampling is employed when discontinuities are present. Uncertainties in aerodynamic parameters are considered first, followed by the inclusion of structural variability. Results show the nonlinear effect of Mach number and its interaction with the structural parameters on supercritical LCOs. The bifurcation boundaries are well captured by the polynomial chaos.
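As a sketch of the non-intrusive polynomial chaos step only, the code below propagates a single Gaussian-uncertain parameter through a placeholder response function standing in for the coupled Harmonic Balance/CFD solver; the chaos coefficients are obtained by Gauss-Hermite projection rather than the paper's adaptive sampling.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

# Placeholder response standing in for the coupled solver: LCO amplitude as a smooth
# function of one uncertain parameter xi ~ N(0, 1) (e.g. a coded Mach number).
def lco_amplitude(xi):
    return 1.0 + 0.3*xi + 0.05*xi**2

order, nquad = 4, 8
nodes, weights = hermegauss(nquad)               # probabilists' Gauss-Hermite rule

# Non-intrusive projection onto Hermite polynomials: c_k = E[f(xi) He_k(xi)] / k!
coeffs = []
for k in range(order + 1):
    ek = np.zeros(k + 1); ek[k] = 1.0            # series coefficients of He_k
    num = np.sum(weights * lco_amplitude(nodes) * hermeval(nodes, ek)) / sqrt(2*pi)
    coeffs.append(num / factorial(k))

mean = coeffs[0]
var  = sum(factorial(k) * coeffs[k]**2 for k in range(1, order + 1))
print("PCE mean ≈", round(mean, 4), " PCE variance ≈", round(var, 4))
```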
Abstract:
We present an algebro-geometric approach to a theorem on finite domination of chain complexes over a Laurent polynomial ring. The approach uses extension of chain complexes to sheaves on the projective line, which is governed by a K-theoretical obstruction.
Abstract:
This work investigates limit cycle oscillations in the transonic regime. A novel approach to predict limit cycle oscillations using high fidelity analysis is exploited to accelerate the calculations. The method used is an aeroelastic Harmonic Balance approach, which has been proven to be efficient and able to predict periodic phenomena. The behaviour of the limit cycle oscillations is analysed using uncertainty quantification tools based on polynomial chaos expansions. To improve the efficiency of the sampling process for the polynomial chaos expansions, an adaptive sampling procedure is used. These methods are exercised using two problems: a pitch/plunge aerofoil and a delta wing. Results indicate that Mach number variability is the dominant driver of LCO amplitude for the 2D test case, whereas for the wing case analysed here, Mach number variability has an almost negligible influence on the amplitude variation and the LCO frequency varies almost linearly with Mach number. Further test cases are required to understand the generality of these results.
Abstract:
The Harmonic Balance method is an attractive solution for computing periodic responses and can be an alternative to time domain methods, at a reduced computational cost. The current paper investigates using a Harmonic Balance method for simulating limit cycle oscillations under uncertainty. The Harmonic Balance method is used in conjunction with a non-intrusive polynomial-chaos approach to propagate variability and is validated against Monte Carlo analysis. Results show the potential of the approach for a range of nonlinear dynamical systems, including a full wing configuration exhibiting supercritical and subcritical bifurcations, at a fraction of the cost of performing time domain simulations.
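Below is a minimal one-harmonic Harmonic Balance sketch for a forced Duffing oscillator, a stand-in nonlinear system rather than the paper's aeroelastic models: assuming x(t) ≈ a cos(ωt) + b sin(ωt) and balancing the cos and sin components turns the ODE into two algebraic equations for a and b, cross-checked here against time marching.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.integrate import solve_ivp

# Forced Duffing oscillator:  x'' + c x' + k x + beta x^3 = F cos(w t)
c, k, beta, F, w = 0.2, 1.0, 0.5, 0.3, 1.2       # placeholder parameters

def hb_residual(ab):
    """Balance the cos(wt) and sin(wt) components with x ≈ a cos(wt) + b sin(wt)."""
    a, b = ab
    r2 = a*a + b*b                               # squared amplitude
    res_cos = (k - w*w + 0.75*beta*r2)*a + c*w*b - F
    res_sin = (k - w*w + 0.75*beta*r2)*b - c*w*a
    return [res_cos, res_sin]

a, b = fsolve(hb_residual, [0.1, 0.1])
print("HB amplitude:", np.hypot(a, b))

# Cross-check against time marching once transients have decayed.
def rhs(t, y):
    x, v = y
    return [v, F*np.cos(w*t) - c*v - k*x - beta*x**3]

sol = solve_ivp(rhs, (0, 400), [0.0, 0.0], max_step=0.05)
steady = sol.y[0][sol.t > 300]
print("time-marching amplitude:", 0.5*(steady.max() - steady.min()))
```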
Abstract:
We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to an S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of 'quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N.
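A minimal sketch of the signal model as described, a half-Gaussian rise joined to an exponential decay on top of a low-order polynomial background; the parameter values are placeholders, not fitted results from the paper.

```python
import numpy as np

def flare_model(t, t0, amp, tau_g, tau_e, poly_coeffs):
    """Flare light-curve model: half-Gaussian rise before t0, exponential decay after,
    sitting on a low-order polynomial background."""
    rise  = amp * np.exp(-0.5 * ((t - t0) / tau_g)**2)   # half-Gaussian rise
    decay = amp * np.exp(-(t - t0) / tau_e)              # exponential decay
    flare = np.where(t < t0, rise, decay)
    background = np.polyval(poly_coeffs, t - t0)         # underlying variability
    return flare + background

# Placeholder parameters (not values from the paper).
t = np.linspace(0, 10, 500)
model = flare_model(t, t0=4.0, amp=1.0, tau_g=0.1, tau_e=0.8,
                    poly_coeffs=[0.002, -0.01, 1.0])     # quadratic background
```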
Abstract:
Credal networks are graph-based statistical models whose parameters take values in a set, instead of being sharply specified as in traditional statistical models (e.g., Bayesian networks). The computational complexity of inferences on such models depends on the irrelevance/independence concept adopted. In this paper, we study inferential complexity under the concepts of epistemic irrelevance and strong independence. We show that inferences under strong independence are NP-hard even in trees with binary variables except for a single ternary one. We prove that under epistemic irrelevance the polynomial-time complexity of inferences in credal trees is not likely to extend to more general models (e.g., singly connected topologies). These results clearly distinguish networks that admit efficient inference from those where inference is most likely hard, and settle several open questions regarding their computational complexity. We show that these results remain valid even if we disallow the use of zero probabilities. We also show that the computation of bounds on the probability of the future state in a hidden Markov model is the same whether we assume epistemic irrelevance or strong independence, and we prove an analogous result for inference in Naive Bayes structures. These inferential equivalences are important for practitioners, as hidden Markov models and Naive Bayes networks are used in real applications of imprecise probability.
On the complexity of solving polytree-shaped limited memory influence diagrams with binary variables
Abstract:
Influence diagrams are intuitive and concise representations of structured decision problems. When the problem is non-Markovian, an optimal strategy can be exponentially large in the size of the diagram. We can avoid the inherent intractability by constraining the size of admissible strategies, giving rise to limited memory influence diagrams. A valuable question is then how small strategies need to be to enable efficient optimal planning. Arguably, the smallest strategies one can conceive simply prescribe an action for each time step, without considering past decisions or observations. Previous work has shown that finding such optimal strategies even for polytree-shaped diagrams with ternary variables and a single value node is NP-hard, but the case of binary variables was left open. In this paper we address that case, by first noting that optimal strategies can be obtained in polynomial time for polytree-shaped diagrams with binary variables and a single value node. We then show that the same problem is NP-hard if the diagram has multiple value nodes. These two results close the fixed-parameter complexity analysis of optimal strategy selection in influence diagrams parametrized by the shape of the diagram, the number of value nodes and the maximum variable cardinality.