996 results for Algebraic solution
Abstract:
We show that an anisotropic nonquadratic potential, for which a path integral treatment has recently been discussed in the literature, possesses the SO(2,1) ⊗ SO(2,1) ⊗ SO(2,1) dynamical symmetry, and construct its Green function algebraically. A particular case which generates new eigenvalues and eigenfunctions is also discussed. © 1990.
Abstract:
Using an algebraic technique related to the SO(2,1) group we construct the Green function for the potential ar² + b(r sin θ)⁻² + c(r cos θ)⁻² + dr² sin²θ + er² cos²θ. The energy spectrum and the normalized wave functions are also obtained. © 1990.
Abstract:
Non-pressure-compensating drip hose is widely used for irrigation of vegetables and orchards. One limitation is that the lateral line must be kept short to maintain uniformity, due to head loss and slope. Any procedure that increases the length is desirable because it lowers the initial cost of the irrigation system. The hypothesis of this research is that it is possible to increase the lateral line length by combining two measures: using a larger spacing between emitters at the beginning of the lateral line and a smaller one after a certain distance; and allowing a higher pressure variation along the lateral line while keeping distribution uniformity at an acceptable value. To evaluate this hypothesis, a nonlinear programming (NLP) model was developed. The input data are: diameter, roughness coefficient, pressure variation, emitter operational pressure, and the relationship between emitter discharge and pressure. The output data are: line length, discharge and length of each section with different spacing between drippers, total discharge in the lateral line, multiple-outlet adjustment coefficient, head losses, localized head loss, pressure variation, number of emitters, spacing between emitters, discharge of each emitter, and discharge per linear meter. The mathematical model developed was compared with the lateral line length obtained from the algebraic solution generated by the Darcy-Weisbach equation. The NLP model showed the best results, since it produced the greatest gain in lateral line length while keeping uniformity and flow variation within acceptable standards. It also had the lowest flow variation, so its adoption is feasible and recommended.
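As context for the hydraulic comparison above, here is a minimal sketch of a Darcy-Weisbach head-loss calculation for a drip lateral, corrected by Christiansen's multiple-outlet factor. The hose dimensions, friction factor and emitter discharge are invented for illustration and are not taken from the study:

```python
import math

def darcy_weisbach_hf(Q, D, L, f=0.03, g=9.81):
    """Head loss (m) by Darcy-Weisbach for flow Q (m^3/s) in a pipe
    of inner diameter D (m) and length L (m); f is the friction factor."""
    A = math.pi * D**2 / 4.0          # cross-sectional area (m^2)
    v = Q / A                          # mean velocity (m/s)
    return f * (L / D) * v**2 / (2.0 * g)

def christiansen_F(n, m=2.0):
    """Christiansen multiple-outlet adjustment factor for n equally
    spaced outlets (velocity exponent m = 2 for Darcy-Weisbach)."""
    return 1.0 / (m + 1) + 1.0 / (2 * n) + math.sqrt(m - 1) / (6 * n**2)

# toy lateral: 100 m of 16 mm hose carrying 50 emitters of 4 L/h each
n = 50
Q_total = n * 4.0 / 1000.0 / 3600.0    # total inflow in m^3/s
hf = darcy_weisbach_hf(Q_total, 0.016, 100.0) * christiansen_F(n)
```

A longer line or smaller diameter raises `hf` rapidly (it scales with v² and L), which is why the length must normally be limited to hold pressure, and hence emission, uniform.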
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data streams. Originally, these observables were generated manually, starting with LISA as a simple stationary array and then adjusting to incorporate the antenna's motions. However, none of the observables survived the flexing of the arms, in that they no longer led to cancellation with the same structure. The principal component approach, presented by Romano and Woan, is another way of handling these noises; it simplifies the data analysis by removing the need to construct such observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which occurs in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that performing an eigendecomposition of this matrix produces two distinct sets of eigenvalues that can be distinguished by the absence of laser frequency noise from one set. The transformation of the raw data using the corresponding eigenvectors also produces data that are free from the laser frequency noises. This result led to the idea that the principal components may actually be time delay interferometry observables, since they produce the same outcome, that is, data that are free from laser frequency noise.
The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that the data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. For testing the connection between the principal components and the TDI observables, a 10×10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. Results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables; therefore analysis using principal components should give the same results as analysis using the traditional observables. This was confirmed by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables. This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, arm lengths and noise variances.
Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which will appear in the covariance matrix; from our toy model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix will be destroyed, which will affect any computation methods that take advantage of this structure. In terms of separating the two sets of data for the analysis, this was not necessary because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction of the data containing them after the matrix inversion. In the frequency domain the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and non-stationarity do not show up, because of the summation in the Fourier transform.
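As a toy illustration of the eigendecomposition step described above (not LISA's actual noise model: the channel count, noise levels and 2×2 covariance below are invented for the sketch), the following shows how a noise common to two data streams produces one large and one small eigenvalue, and how the eigenvector of the small eigenvalue yields a data combination free of that common noise:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
laser = 100.0 * rng.standard_normal(N)   # large common "laser" noise
d1 = laser + rng.standard_normal(N)      # two raw readings that both
d2 = laser + rng.standard_normal(N)      # contain the same laser noise
X = np.vstack([d1, d2])

C = np.cov(X)                 # 2x2 noise (data) covariance matrix
vals, vecs = np.linalg.eigh(C)  # eigenvalues in ascending order

# The small eigenvalue corresponds (up to sign) to the combination
# d1 - d2, which cancels the common noise -- a TDI-like observable.
pc = vecs[:, 0] @ X           # laser-noise-free principal component
```

Here the ratio of the two eigenvalues mirrors the large laser-to-photodetector noise ratio mentioned in the abstract, which is what makes the two sets of eigenvalues easy to distinguish.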
Abstract:
This paper examines the algebraic cryptanalysis of small-scale variants of LEX-BES. LEX-BES is a stream cipher based on the Advanced Encryption Standard (AES) block cipher. LEX is a generic method for constructing a stream cipher from a block cipher, initially introduced by Biryukov at eSTREAM, the ECRYPT Stream Cipher project, in 2005. The Big Encryption System (BES) is a block cipher introduced at CRYPTO 2002 which facilitates the algebraic analysis of the AES block cipher. In this paper, experiments were conducted to find solutions of the equation systems describing small-scale LEX-BES using Gröbner Basis computations. This follows an approach similar to the work by Cid, Murphy and Robshaw at FSE 2005 that investigated algebraic cryptanalysis of small-scale variants of the BES. The difference between LEX-BES and BES is that, due to the way the keystream is extracted, the number of unknowns in the LEX-BES equations is smaller than the number in BES. As far as the author knows, this is the first attempt at creating solvable equation systems for stream ciphers based on the LEX method using Gröbner Basis computations.
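A minimal sketch of the Gröbner-basis solution step, using SymPy on a toy polynomial system (the system below is invented for illustration; the LEX-BES equation systems are far larger and defined over GF(2⁸)):

```python
from sympy import symbols, groebner

x, y = symbols('x y')
# a toy polynomial system (not the LEX-BES equations themselves)
polys = [x**2 + y**2 - 1, x - y]

# a lex-order Groebner basis "triangularizes" the system: it contains
# an element univariate in the last variable, so the system can be
# solved by back-substitution, as in algebraic cryptanalysis
G = groebner(polys, x, y, order='lex')
```

For this input the basis contains an element in `y` alone (here 2y² = 1), after which `x` follows from `x - y = 0`; cipher equation systems are attacked the same way, only at vastly greater cost.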
Abstract:
We present a novel approach for preprocessing systems of polynomial equations via graph partitioning. The variable-sharing graph of a system of polynomial equations is defined. If such a graph is disconnected, then the corresponding system of equations can be split into smaller ones that can be solved individually. This can provide a tremendous speed-up in computing the solution to the system, but it is unlikely to occur either randomly or in applications. However, by deleting certain vertices of the graph, the variable-sharing graph can be disconnected in a balanced fashion, and in turn the system of polynomial equations is separated into smaller systems of near-equal sizes. In graph-theoretic terms, this process is equivalent to finding balanced vertex partitions with minimum-weight vertex separators. Techniques for finding these vertex partitions are discussed, and experiments are performed to evaluate their practicality for general graphs and systems of polynomial equations. Applications of this approach to algebraic cryptanalysis of symmetric ciphers are presented: for the QUAD family of stream ciphers, we show how a malicious party can manufacture conforming systems that can be easily broken; for the stream ciphers Bivium and Trivium, we achieve significant speedups in algebraic attacks against them, mainly in a partial key guess scenario. In each of these cases, the systems of polynomial equations involved are well-suited to our graph partitioning method. These results may open a new avenue for evaluating the security of symmetric ciphers against algebraic attacks.
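The variable-sharing idea can be sketched as follows: treating each equation as the set of variables it mentions, connected components of the sharing relation give independent subsystems that can be solved separately. This is a pure-Python union-find toy with invented placeholder equations, not the paper's partitioning algorithm:

```python
from collections import defaultdict

# each equation is represented by the set of variables it contains
equations = [
    {"x1", "x2"}, {"x2", "x3"},   # share variables -> one component
    {"y1", "y2"}, {"y2", "y3"},   # a second, disjoint component
]

parent = {}

def find(v):
    """Union-find root with path halving."""
    parent.setdefault(v, v)
    while parent[v] != v:
        parent[v] = parent[parent[v]]
        v = parent[v]
    return v

def union(a, b):
    parent[find(a)] = find(b)

# variables occurring in the same equation belong to the same component
for eq in equations:
    vs = sorted(eq)
    for v in vs[1:]:
        union(vs[0], v)

# group equations by component: each group is an independent subsystem
subsystems = defaultdict(list)
for eq in equations:
    subsystems[find(next(iter(eq)))].append(eq)
```

Deleting a vertex, as in the paper's balanced-separator step, corresponds here to removing a variable from every equation before the grouping, which can split one component into several.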
Abstract:
This is an update of an earlier paper, and is written for Excel 2007. A series of Excel 2007 models is described. The more advanced versions allow the solution of f(x) = 0 by examining changes of sign of the function values. The function is graphed, and a change of sign is easily detected by a change of colour. The relevant features of Excel 2007 used are Names, Scatter Chart and Conditional Formatting. Several sample Excel 2007 models are available for download, and the paper is intended to be used as a lesson plan for students having some familiarity with derivatives. For comparison and reference purposes, the paper also presents a brief outline of several common equation-solving strategies as an Appendix.
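The change-of-sign strategy the spreadsheet models implement can be sketched outside Excel as well. The Python analogue below is an illustration, not the paper's models; the sample function is arbitrary:

```python
def sign_change_intervals(f, a, b, n=100):
    """Scan [a, b] on n subintervals and report those where f changes
    sign -- the spreadsheet analogue is a column of function values with
    conditional formatting colouring the sign change."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return [(xs[i], xs[i + 1]) for i in range(n)
            if f(xs[i]) * f(xs[i + 1]) < 0]

def bisect(f, lo, hi, tol=1e-10):
    """Refine a bracketed root by repeated halving."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

f = lambda x: x**3 - 2*x - 5          # arbitrary sample function
intervals = sign_change_intervals(f, 0, 3)
root = bisect(f, *intervals[0])       # refine the bracketed root
```

The scan-then-refine split mirrors the lesson-plan structure: the chart and colouring locate a bracket, and any standard strategy from the Appendix can then polish it.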
Abstract:
Computation of the dependency basis is the fundamental step in solving the membership problem for functional dependencies (FDs) and multivalued dependencies (MVDs) in relational database theory. We examine this problem from an algebraic perspective. We introduce the notion of the inference basis of a set M of MVDs and show that it contains the maximum information about the logical consequences of M. We propose the notion of a dependency lattice and develop an algebraic characterization of the inference basis using simple notions from lattice theory. We also establish several interesting properties of dependency lattices related to the implication problem. Based on our characterization, we synthesize efficient algorithms for (a) computing the inference basis of a given set M of MVDs; (b) computing the dependency basis of a given attribute set w.r.t. M; and (c) solving the membership problem for MVDs. We also show that our results naturally extend to FDs as well, in a way that enables the solution of the membership problem for FDs and MVDs taken together. We finally show that our algorithms are more efficient than existing ones when used to solve what we term the ‘generalized membership problem’.
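For context on the membership problem, here is a sketch of the classical attribute-closure test for FDs. This is the standard textbook algorithm, not the paper's lattice-based MVD algorithms; it decides whether an FD X → Y follows from a given set of FDs:

```python
def closure(attrs, fds):
    """Closure of an attribute set under a list of FDs, where each FD
    is a (lhs, rhs) pair of frozensets. Repeatedly fire any FD whose
    left side is already contained in the running closure."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

fds = [(frozenset("A"), frozenset("B")),   # A -> B
       (frozenset("B"), frozenset("C"))]   # B -> C

# membership test: does A -> C follow from fds?
holds = frozenset("C") <= closure({"A"}, fds)
```

The MVD membership problem needs the dependency basis rather than a simple closure, which is where the paper's algebraic characterization comes in.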
Abstract:
Exact traveling-wave solutions of time-dependent nonlinear inhomogeneous PDEs, describing several model systems in geophysical fluid dynamics, are found. The reduced nonlinear ODEs are treated as systems of linear algebraic equations in the derivatives. A variety of solutions are found, depending on the rank of the algebraic systems. The geophysical systems include acoustic gravity waves, inertial waves, and Rossby waves. The solutions describe waves which are, in general, either periodic or monoclinic. The present approach is compared with the earlier one due to Grundland (1974) for finding exact solutions of inhomogeneous systems of nonlinear PDEs.
Abstract:
An iterative procedure is described for solving nonlinear optimal control problems subject to differential algebraic equations. The procedure iterates on an integrated, modified simplified-model-based problem with parameter updating in such a manner that the correct solution of the original nonlinear problem is achieved.
Abstract:
A novel iterative procedure is described for solving nonlinear optimal control problems subject to differential algebraic equations. The procedure iterates on an integrated, modified linear-quadratic model-based problem with parameter updating in such a manner that the correct solution of the original nonlinear problem is achieved. The resulting algorithm has the particular advantage that the solution is achieved without the need to solve the differential algebraic equations. Convergence aspects are discussed and a simulation example is described which illustrates the performance of the technique. When modelling industrial processes, the resulting equations often consist of coupled differential and algebraic equations (DAEs); in many situations these equations are nonlinear and cannot readily be reduced to ordinary differential equations.
Abstract:
This letter presents an approach for a geometrical solution of an optimal power flow (OPF) problem for a two-bus system (slack and PV buses). The algebraic equations for the calculation of the Lagrange multipliers and for the minimum-losses value are obtained. These equations are used to validate the results obtained using an OPF program.
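As an illustration of obtaining Lagrange multipliers algebraically, the SymPy sketch below solves a toy quadratic-loss dispatch problem with invented coefficients; it is not the letter's two-bus OPF formulation, only the same stationarity-condition mechanics:

```python
from sympy import symbols, diff, solve, Rational

# toy problem: minimize losses R1*P1**2 + R2*P2**2
# subject to the balance constraint P1 + P2 = Pd
P1, P2, lam = symbols('P1 P2 lam')
R1, R2, Pd = Rational(1, 10), Rational(1, 5), 10   # invented values

# Lagrangian and its stationarity conditions
L = R1 * P1**2 + R2 * P2**2 + lam * (Pd - P1 - P2)
sol = solve([diff(L, P1), diff(L, P2), diff(L, lam)], [P1, P2, lam])
```

Setting the three partial derivatives to zero gives closed-form values for `P1`, `P2` and the multiplier `lam`, the same kind of algebraic expressions the letter derives and then checks against an OPF program.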