19 results for Iterative power methods
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
In numerical linear algebra, students encounter early the iterative power method, which finds eigenvectors of a matrix from an arbitrary starting point through repeated normalization and multiplications by the matrix itself. In practice, more sophisticated methods are used nowadays, threatening to make the power method a historical and pedagogic footnote. However, in the context of communication over a time-division duplex (TDD) multiple-input multiple-output (MIMO) channel, the power method takes a special position. It can be viewed as an intrinsic part of the uplink and downlink communication switching, enabling estimation of the eigenmodes of the channel without extra overhead. Generalizing the method to vector subspaces, communication in the subspaces with the best receive and transmit signal-to-noise ratio (SNR) is made possible. In exploring this intrinsic subspace convergence (ISC), we show that several published and new schemes can be cast into a common framework where all members benefit from the ISC.
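As a reminder of the basic algorithm this abstract builds on, here is a minimal sketch of the iterative power method: repeated multiplication by the matrix followed by normalization, from an arbitrary starting point. Function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def power_method(A, num_iters=100, tol=1e-10, rng=None):
    """Estimate the dominant eigenpair of A by repeated multiplication
    and normalization (the classic iterative power method)."""
    rng = np.random.default_rng(rng)
    v = rng.standard_normal(A.shape[0])  # arbitrary starting point
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        w = A @ v                        # multiply by the matrix
        norm = np.linalg.norm(w)
        if norm == 0:
            break
        w /= norm                        # renormalize
        if np.linalg.norm(w - v) < tol:  # stop once the iterate settles
            v = w
            break
        v = w
    eigenvalue = v @ A @ v               # Rayleigh quotient estimate
    return eigenvalue, v
```

For a matrix with a well-separated dominant eigenvalue, the iterate converges geometrically at the ratio of the second-largest to largest eigenvalue magnitudes.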
Abstract:
Recently there has been a great deal of work on noncommutative algebraic cryptography. This involves the use of noncommutative algebraic objects as the platforms for encryption systems. Most of this work, such as the Anshel-Anshel-Goldfeld scheme, the Ko-Lee scheme and the Baumslag-Fine-Xu Modular group scheme, uses nonabelian groups as the basic algebraic object. Some of these encryption methods have been successful and some have been broken. It has been suggested that at this point further pure group theoretic research, with an eye towards cryptographic applications, is necessary. In the present study we attempt to extend the class of noncommutative algebraic objects to be used in cryptography. In particular we explore several different methods to use a formal power series ring R⟨⟨x1, ..., xn⟩⟩ in noncommuting variables x1, ..., xn as a base to develop cryptosystems. Although R can be any ring, we have in mind formal power series rings over the rationals Q. We use in particular a result of Magnus that a finitely generated free group F has a faithful representation in a quotient of the formal power series ring in noncommuting variables.
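The Magnus representation mentioned above can be sketched computationally with truncated noncommutative power series over Q: each free-group generator maps to 1 + x_i, whose inverse is a geometric series. This is an illustrative toy under assumed conventions (dict-of-words representation, truncation degree 4), not the paper's construction.

```python
from fractions import Fraction

DEG = 4  # truncation degree of the quotient ring

def mul(a, b):
    """Multiply two truncated noncommutative power series.
    A series is a dict mapping words (tuples of generator indices)
    to rational coefficients; words longer than DEG are dropped,
    which implements working in the quotient ring."""
    out = {}
    for wa, ca in a.items():
        for wb, cb in b.items():
            w = wa + wb  # concatenation: variables do not commute
            if len(w) <= DEG:
                out[w] = out.get(w, Fraction(0)) + ca * cb
    return {w: c for w, c in out.items() if c != 0}

def gen(i):
    """Magnus image of the i-th free-group generator: 1 + x_i."""
    return {(): Fraction(1), (i,): Fraction(1)}

def inv_gen(i):
    """Its inverse: (1 + x_i)^(-1) = 1 - x_i + x_i^2 - ... (truncated)."""
    return {tuple([i] * k): Fraction((-1) ** k) for k in range(DEG + 1)}
```

Multiplying `gen(0)` by `inv_gen(0)` recovers the identity series `{(): 1}` in the quotient, while `gen(0)` and `gen(1)` do not commute, mirroring the free group.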
Abstract:
This article investigates the history of land and water transformations in Matadepera, a wealthy suburb of metropolitan Barcelona. The analysis is informed by theories of political ecology and methods of environmental history; although very relevant, these have received relatively little attention within ecological economics. Empirical material includes communications from the City Archives of Matadepera (1919-1979), 17 interviews with locals born between 1913 and 1958, and an exhaustive review of grey historical literature. Existing water histories of Barcelona and its outskirts portray a battle against natural water scarcity, hard won by heroic engineers and politicians acting for the good of the community. Our research in Matadepera tells a very different story. We reveal the production of a highly uneven landscape and waterscape through fierce political and power struggles. The evolution of Matadepera from a small rural village to an elite suburb was anything but spontaneous or peaceful. It was a socio-environmental project intended by landowning elites and fiercely resisted by others. The struggle for the control of water went hand in hand with the land and political struggles that culminated, and were violently resolved, in the Spanish Civil War. The displacement of the economic and environmental costs of water use from the few to the many continues to this day and is constitutive of Matadepera's uneven and unsustainable landscape. By unravelling the relations of power that are inscribed in the urbanization of nature (Swyngedouw, 2004), we question the received wisdoms of contemporary water policy debates, particularly the notion of a natural scarcity that merits a technical or economic response. We argue that the water question is fundamentally a political question of environmental justice; it is about negotiating alternative visions of the future and deciding whose visions will be produced.
Abstract:
When using a polynomial approximating function, the most contentious aspect of the Heat Balance Integral Method is the choice of power for the highest order term. In this paper we employ a method recently developed for thermal problems, in which the exponent is determined during the solution process, to analyse Stefan problems. This is achieved by minimising an error function. The solution requires no knowledge of an exact solution and generally produces significantly better results than all previous HBI models. The method is illustrated by first applying it to standard thermal problems. A Stefan problem with an analytical solution is then discussed and results compared to the approximate solution. An ablation problem is also analysed and results compared against a numerical solution. In both examples the agreement is excellent. A Stefan problem where the boundary temperature increases exponentially is analysed. This highlights the difficulties that can be encountered with a time-dependent boundary condition. Finally, melting with a time-dependent flux is briefly analysed, without analytical or numerical results against which to assess the accuracy.
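As a sketch of the underlying heat balance integral idea (the standard formulation, not the paper's specific error-minimisation scheme), consider the one-dimensional heat equation with a polynomial approximating profile of exponent n:

```latex
% One-dimensional heat equation on 0 < x < \delta(t), with
% u(0,t) = 1 and u(\delta,t) = u_x(\delta,t) = 0:
\[ u_t = \alpha\, u_{xx}, \qquad u(x,t) \approx \Big(1 - \tfrac{x}{\delta(t)}\Big)^{n}. \]
% Integrating the heat equation over [0, \delta] (the heat balance):
\[ \frac{d}{dt}\int_0^{\delta} u\,dx \;=\; \alpha\big[u_x(\delta,t) - u_x(0,t)\big] \;=\; \frac{\alpha\, n}{\delta}. \]
% Since \int_0^{\delta} u\,dx = \delta/(n+1), the penetration depth obeys
\[ \delta\,\frac{d\delta}{dt} = \alpha\, n(n+1)
   \quad\Longrightarrow\quad
   \delta(t) = \sqrt{2\,\alpha\, n(n+1)\, t}. \]
```

The exponent n is the free parameter of the profile; the error-minimisation procedure described in the abstract is what fixes its value during the solution process.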
Abstract:
In this paper the two main drawbacks of the heat balance integral methods are examined. Firstly we investigate the choice of approximating function. For a standard polynomial form it is shown that combining the Heat Balance and Refined Integral methods to determine the power of the highest order term will lead to the same, or more often greatly improved, accuracy compared with standard methods. Secondly we examine thermal problems with a time-dependent boundary condition. In doing so we develop a logarithmic approximating function. This new function allows us to model moving peaks in the temperature profile, a feature that previous heat balance methods cannot capture. If the boundary temperature varies so that at some time t > 0 it equals the far-field temperature, then standard methods predict that the temperature is everywhere at this constant value. The new method predicts the correct behaviour. It is also shown that this function provides even more accurate results, when coupled with the new CIM, than the polynomial profile. Analysis primarily focuses on a specified constant boundary temperature and is then extended to constant flux, Newton cooling and time-dependent boundary conditions.
Abstract:
The Great Tohoku-Kanto earthquake and resulting tsunami have brought considerable attention to the issue of the construction of new power plants. We argue in this paper that nuclear power is not a sustainable solution to energy problems. First, we explore the stock of uranium-235 and the different schemes developed by the nuclear power industry to exploit this resource. Second, we show that these methods, fast breeder and MOX fuel reactors, are not feasible. Third, we show that the argument that nuclear energy can be used to reduce CO2 emissions is false: the emissions from the increased water evaporation from nuclear power generation must be accounted for. In the case of Japan, water from nuclear power plants is drained into the surrounding sea, raising the water temperature, which has an adverse effect on the immediate ecosystem, as well as increasing CO2 emissions from increased water evaporation from the sea. Next, a short exercise is used to show that nuclear power is not even needed to meet consumer demand in Japan. Such an exercise should be performed for any country considering the construction of additional nuclear power plants. Lastly, the paper concludes with a discussion of the implications of our findings.
Abstract:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made where the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
Abstract:
This paper presents and compares two approaches to estimate the origin (upstream or downstream) of voltage sags registered in distribution substations. The first approach is based on the application of a single rule dealing with features extracted from the impedances during the fault, whereas the second method exploits the variability of the waveforms from a statistical point of view. Both approaches have been tested with voltage sags registered in distribution substations, and advantages, drawbacks and comparative results are presented.
Abstract:
This paper aims to survey the techniques and methods described in the literature to analyse and characterise voltage sags, together with the corresponding objectives of these works. The study has been performed from a data mining point of view.
Abstract:
The work presented in this paper belongs to the power quality knowledge area and deals with voltage sags in power transmission and distribution systems. Propagating throughout the power network, voltage sags can cause many problems for domestic and industrial loads, at considerable financial cost. To impose penalties on the responsible party and to improve monitoring and mitigation strategies, sags must be located in the power network. With this objective, this paper proposes a new method for associating a sag waveform with its origin in transmission and distribution networks. It solves this problem by developing hybrid methods which employ multiway principal component analysis (MPCA) as a dimension reduction tool. MPCA re-expresses sag waveforms in a new subspace using just a few scores. We train some well-known classifiers with these scores and use them for the classification of future sags. The capabilities of the proposed method for dimension reduction and classification are examined using real data gathered from three substations in Catalonia, Spain. The obtained classification rates confirm the effectiveness of the developed hybrid methods as new tools for sag classification.
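A minimal sketch of the dimension-reduction-plus-classifier pipeline described above, using ordinary PCA on unfolded waveforms and a nearest-centroid classifier as stand-ins for the paper's MPCA and "well-known classifiers". All function names and parameters are illustrative assumptions.

```python
import numpy as np

def pca_scores(X, k):
    """Project the rows of X onto the top-k principal components.
    X is an (n_samples, n_features) matrix of unfolded (flattened)
    waveforms, mimicking the unfolding step of multiway PCA."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data yields the principal directions in Vt.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

def nearest_centroid_fit(scores, labels):
    """Store one centroid per class in score space."""
    return {c: scores[labels == c].mean(axis=0) for c in set(labels)}

def nearest_centroid_predict(model, scores):
    """Assign each score vector to the class of the nearest centroid."""
    classes = list(model)
    dists = np.array([[np.linalg.norm(s - model[c]) for c in classes]
                      for s in scores])
    return [classes[i] for i in dists.argmin(axis=1)]
```

Each waveform is thus reduced to just a few scores before classification, which is the dimension-reduction role the abstract assigns to MPCA.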
Abstract:
Background: With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows. Results: In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that with the proper selection of the coefficients, the variance of the extended τ-leap can be well-behaved, leading to significantly larger step sizes. Conclusions: The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original τ-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
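As a minimal illustration of the SSA and the original Poisson τ-leap for a single decay channel A → ∅ (not the paper's Runge-Kutta extension; names and the test system are assumptions):

```python
import numpy as np

def ssa_decay(n0, k, t_end, rng=None):
    """Gillespie stochastic simulation algorithm (direct method) for the
    single channel A -> 0 with rate constant k. The waiting time tau is
    exponential with rate equal to the total propensity, and it is
    re-evaluated after every firing event."""
    rng = np.random.default_rng(rng)
    t, n = 0.0, n0
    while n > 0:
        a = k * n                          # propensity of the channel
        tau = rng.exponential(1.0 / a)     # waiting time to next firing
        if t + tau > t_end:
            break
        t += tau
        n -= 1                             # one molecule decays
    return n

def tau_leap_decay(n0, k, t_end, tau, rng=None):
    """Poisson tau-leap: take a fixed step tau and let the channel fire
    a Poisson-distributed number of times within that step."""
    rng = np.random.default_rng(rng)
    t, n = 0.0, n0
    while t < t_end and n > 0:
        firings = rng.poisson(k * n * tau)
        n = max(n - firings, 0)            # clip to keep counts valid
        t += tau
    return n
```

Both simulators reproduce the expected exponential decay n0·exp(-k·t) on average; the τ-leap trades the per-event bookkeeping of the SSA for one Poisson draw per fixed step, which is the speed-up the abstract builds on.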
Abstract:
Power transformations of positive data tables, prior to applying the correspondence analysis algorithm, are shown to open up a family of methods with direct connections to the analysis of log-ratios. Two variations of this idea are illustrated. The first approach is simply to power the original data and perform a correspondence analysis; this method is shown to converge to unweighted log-ratio analysis as the power parameter tends to zero. The second approach is to apply the power transformation to the contingency ratios, that is, the values in the table relative to expected values based on the marginals; this method converges to weighted log-ratio analysis, or the spectral map. Two applications are described: first, a matrix of population genetic data which is inherently two-dimensional, and second, a larger cross-tabulation with higher dimensionality, from a linguistic analysis of several books.
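The convergence to log-ratio analysis as the power parameter tends to zero rests on the elementary limit (x^α − 1)/α → ln x as α → 0, which can be checked numerically. This is a sketch of that limit only, not the full correspondence analysis machinery; the function name is illustrative.

```python
import numpy as np

def power_transform(x, alpha):
    """Box-Cox-style power transformation. As alpha -> 0 it tends to
    the natural logarithm, which is the mechanism behind the convergence
    of powered correspondence analysis to log-ratio analysis."""
    return (x ** alpha - 1.0) / alpha

# The transformed values approach log(x) as the power parameter shrinks.
x = np.array([0.5, 1.0, 2.0, 4.0])
for alpha in (1.0, 0.1, 0.001):
    gap = np.max(np.abs(power_transform(x, alpha) - np.log(x)))
    print(f"alpha = {alpha:7.3f}   max |transform - log| = {gap:.2e}")
```

The gap shrinks roughly linearly in α, so the "movie" of results indexed by the power parameter changes smoothly into the log-ratio picture.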
Abstract:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made where the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
Abstract:
The analysis of multiexponential decays is challenging because of their complex nature. When analyzing these signals, not only the parameters, but also the orders of the models, have to be estimated. We present an improved spectroscopic technique specially suited for this purpose. The proposed algorithm combines an iterative linear filter with an iterative deconvolution method. A thorough analysis of the noise effect is presented. The performance is tested with synthetic and experimental data.
Abstract:
Whereas numerical modeling using finite-element methods (FEM) can provide the transient temperature distribution in a component with sufficient accuracy, the development of compact dynamic thermal models that can be used for electrothermal simulation is of the utmost importance. While in most cases single power sources are considered, here we focus on the simultaneous presence of multiple sources. The thermal model takes the form of a thermal impedance matrix containing the thermal impedance transfer functions between two arbitrary ports. Each individual transfer function element is obtained from the analysis of the temperature transient at one node after a power step at another node. Different options for multiexponential transient analysis are detailed and compared. Among the options explored, small thermal models can be obtained by constrained nonlinear least squares (NLSQ) methods if the order is selected properly using validation signals. The methods are applied to the extraction of dynamic compact thermal models for a new ultrathin chip stack technology (UTCS).