72 results for Eigenvalue Bounds
Abstract:
We discuss the properties of the lifetime or time-delay matrix Q(E) for multichannel scattering, which is related to the scattering matrix S(E) by Q = iℏS(dS†/dE). For two overlapping resonances occurring at energies E_ν with widths Γ_ν (ν = 1, 2), with an energy-independent background, only two eigenvalues of Q(E) are proved to be different from zero and to show typical avoided-crossing behaviour. These eigenvalues are expressible in terms of the four resonance parameters (E_ν, Γ_ν) and a parameter representing the strength of the interaction of the resonances. An example of the strong and weak interaction in an overlapping double resonance is presented for the positronium negative ion. When more than two resonances overlap (ν = 1, ..., N), no simple representation of each eigenvalue has been found. However, the formula for the trace of the Q-matrix leads to the expression δ(E) = −Σ_ν arctan[(Γ_ν/2)/(E − E_ν)] + δ_bg(E) for the eigenphase sum δ(E) and the background eigenphase sum δ_bg(E), in agreement with the known form of the state density. The formulae presented in this paper are useful in a parameter fitting of overlapping resonances. © 2006 IOP Publishing Ltd.
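The eigenphase-sum formula quoted above is straightforward to evaluate numerically. A minimal sketch (the function name, units, and background value are illustrative, not from the paper):

```python
import numpy as np

def eigenphase_sum(E, resonances, delta_bg=0.0):
    """Eigenphase sum for N overlapping Breit-Wigner resonances:
    delta(E) = -sum_nu arctan[(Gamma_nu/2) / (E - E_nu)] + delta_bg.
    `resonances` is an iterable of (E_nu, Gamma_nu) pairs (arbitrary units)."""
    E = np.asarray(E, dtype=float)
    total = np.full_like(E, delta_bg, dtype=float)
    for E_nu, Gamma in resonances:
        total -= np.arctan((Gamma / 2.0) / (E - E_nu))
    return total

# One resonance at E_nu = 1.0 with width Gamma = 1.0: half a width above
# the resonance the term contributes -arctan(1) = -pi/4.
val = eigenphase_sum(1.5, [(1.0, 1.0)])
```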
Abstract:
Flutter prediction as currently practiced is almost always deterministic in nature, based on a single structural model that is assumed to represent a fleet of aircraft. However, it is also recognized that there can be significant structural variability, even for different flights of the same aircraft. The safety factor used for flutter clearance is in part meant to account for this variability. Simulation tools can, however, represent the consequences of structural variability in the flutter predictions, providing extra information that could be useful in planning physical tests and assessing risk. The main problem arising for this type of calculation when using high-fidelity tools based on computational fluid dynamics is the computational cost. The current paper uses an eigenvalue-based stability method together with Euler-level aerodynamics and different methods for propagating structural variability to stability predictions. The propagation methods are Monte Carlo, perturbation, and interval analysis. The feasibility of this type of analysis is demonstrated. Results are presented for the Goland wing and a generic fighter configuration.
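The propagation methods named above (Monte Carlo and interval analysis) can be illustrated on a toy monotone surrogate for the flutter boundary; this is a hedged sketch with hypothetical parameter values, not the paper's Euler-level model:

```python
import numpy as np

def flutter_speed(k, m=50.0):
    """Toy monotone surrogate for flutter speed vs. structural stiffness k.
    (An illustrative stand-in for the eigenvalue-based stability solver.)"""
    return np.sqrt(k / m)

rng = np.random.default_rng(0)

# Monte Carlo propagation: sample stiffness, push samples through the model.
k_samples = rng.normal(loc=1.0e6, scale=5.0e4, size=10_000)
v_mc = flutter_speed(k_samples)

# Interval propagation: for a monotone model the output interval follows
# from evaluating the input interval's endpoints (here mean +/- 3 sigma).
k_lo, k_hi = 1.0e6 - 3 * 5.0e4, 1.0e6 + 3 * 5.0e4
v_lo, v_hi = flutter_speed(k_lo), flutter_speed(k_hi)
```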
Abstract:
A method is described to allow searches for transonic aeroelastic instability of realistically sized aircraft models in multidimensional parameter spaces when computational fluid dynamics are used to model the aerodynamics. Aeroelastic instability is predicted from a small nonlinear eigenvalue problem. The approximation of the computationally expensive interaction term modeling the fluid response is formulated to allow the automated and blind search for aeroelastic instability. The approximation uses a kriging interpolation of exact numerical samples covering the parameter space. The approach, demonstrated for the Goland wing and the multidisciplinary optimization transport wing, results in stability analyses over whole flight envelopes at an equivalent cost of several steady-state simulations.
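The kriging interpolation of exact samples described above can be sketched with a squared-exponential kernel; noise-free kriging reproduces the samples exactly. All names and sample values here are illustrative:

```python
import numpy as np

def kriging_fit(X, y, length_scale=1.0, nugget=1e-10):
    """Simple kriging (Gaussian-process) interpolator with a
    squared-exponential kernel; interpolates the exact samples (X, y)."""
    X = np.atleast_2d(X).astype(float)
    K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
               / length_scale**2)
    w = np.linalg.solve(K + nugget * np.eye(len(X)), y)

    def predict(Xq):
        Xq = np.atleast_2d(Xq).astype(float)
        k = np.exp(-0.5 * ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
                   / length_scale**2)
        return k @ w

    return predict

# Hypothetical samples of a damping-vs-parameter response surface:
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.5, 0.1, -0.2, -0.6])
predict = kriging_fit(X, y)
```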
Abstract:
In this paper the use of eigenvalue stability analysis of very large dimension aeroelastic numerical models arising from the exploitation of computational fluid dynamics is reviewed. A formulation based on a block reduction of the system Jacobian proves powerful, allowing various numerical algorithms to be exploited, including frequency-domain solvers, reconstruction of a term describing the fluid–structure interaction from the sparse data which incurs the main computational cost, and sampling to place the expensive samples where they are most needed. The stability formulation also allows non-deterministic analysis to be carried out very efficiently through the use of an approximate Newton solver. Finally, the system eigenvectors are exploited to produce nonlinear and parameterised reduced-order models for computing limit-cycle responses. The performance of the methods is illustrated with results from a number of academic and large dimension aircraft test cases.
Abstract:
Flutter prediction as currently practiced is usually deterministic, with a single structural model used to represent an aircraft. By using interval analysis to take into account structural variability, recent work has demonstrated that small changes in the structure can lead to very large changes in the altitude at which flutter occurs (Marques, Badcock, et al., J. Aircraft, 2010). In this follow-up work we examine the same phenomenon using probabilistic collocation (PC), an uncertainty quantification technique which can efficiently propagate multivariate stochastic input through a simulation code, in this case an eigenvalue-based fluid-structure stability code. The resulting analysis predicts the consequences of an uncertain structure on the incidence of flutter in probabilistic terms: information that could be useful in planning flight tests and assessing the risk of structural failure. The uncertainty in flutter altitude is confirmed to be substantial. Assuming that the structural uncertainty represents an epistemic uncertainty regarding the structure, it may be reduced with the availability of additional information, for example aeroelastic response data from a flight test. Such data are used to update the structural uncertainty using Bayes' theorem. The consequent flutter uncertainty is significantly reduced across the entire Mach number range.
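The Bayesian updating step described above can be sketched on a one-dimensional grid: a prior over an uncertain structural parameter is narrowed by a single hypothetical flight-test measurement via Bayes' theorem. All numbers here are illustrative:

```python
import numpy as np

# Uncertain structural parameter theta (e.g. a normalised stiffness),
# updated with one noisy aeroelastic measurement.
theta = np.linspace(0.5, 1.5, 2001)                 # parameter grid
prior = np.exp(-0.5 * ((theta - 1.0) / 0.1) ** 2)   # N(1.0, 0.1) prior
prior /= prior.sum()

measured, noise_sd = 1.05, 0.05                     # assumed flight-test datum
likelihood = np.exp(-0.5 * ((measured - theta) / noise_sd) ** 2)

posterior = prior * likelihood                      # Bayes' theorem on the grid
posterior /= posterior.sum()

def grid_sd(p):
    """Standard deviation of a discrete distribution p on the theta grid."""
    mean = (theta * p).sum()
    return np.sqrt(((theta - mean) ** 2 * p).sum())
```

The posterior standard deviation is smaller than the prior's, mirroring the reduction in flutter uncertainty reported in the abstract.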
Abstract:
This paper studies the Demmel condition number of Wishart matrices, a quantity which has numerous applications to wireless communications, such as adaptive switching between beamforming and diversity coding, link adaptation, and spectrum sensing. For complex Wishart matrices, we give an exact analytical expression for the probability density function (p.d.f.) of the Demmel condition number, and also derive simplified expressions for the high-tail regime. These results indicate that the conditioning of complex Wishart matrices is determined chiefly by the difference between the matrix dimension and the degrees of freedom (DoF); i.e., the probability of drawing a highly ill-conditioned matrix decreases considerably as the difference between the matrix dimension and DoF increases. We further investigate real Wishart matrices, and derive new expressions for the p.d.f. of the smallest eigenvalue when the difference between the matrix dimension and DoF is odd. Based on these results, we obtain an exact p.d.f. expression for the Demmel condition number, and simplified expressions for the high-tail regime.
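The quantity under study can be probed empirically; a minimal sketch of sampling a complex Wishart matrix and computing its Demmel condition number κ_D = tr(W)/λ_min(W) (this reproduces the definition, not the paper's analytical p.d.f.; dimensions below are illustrative):

```python
import numpy as np

def demmel_condition(W):
    """Demmel condition number of a Hermitian positive-definite matrix:
    kappa_D = trace(W) / lambda_min(W)."""
    eig = np.linalg.eigvalsh(W)          # ascending eigenvalues
    return np.trace(W).real / eig[0]

def complex_wishart(n, dof, rng):
    """Sample an n x n complex Wishart matrix W = H H^dagger, where H is
    an n x dof standard complex Gaussian matrix (dof degrees of freedom)."""
    H = (rng.standard_normal((n, dof))
         + 1j * rng.standard_normal((n, dof))) / np.sqrt(2)
    return H @ H.conj().T

rng = np.random.default_rng(1)
W = complex_wishart(2, 4, rng)
```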
Abstract:
Simultaneous multithreading processors dynamically share processor resources between multiple threads. In general, shared SMT resources may be managed explicitly, for instance, by dynamically setting queue occupation bounds for each thread as in the DCRA and Hill-Climbing policies. Alternatively, resources may be managed implicitly; that is, resource usage is controlled by placing the desired instruction mix in the resources. In this case, the main resource management tool is the instruction fetch policy, which must predict the behavior of each thread (branch mispredictions, long-latency loads, etc.) as it fetches instructions.
Abstract:
Morphometric study of modern ice masses is useful because many reconstructions of glaciers traditionally draw on their shape for guidance. Here we analyse data derived from the surface profiles of 200 modern ice masses (valley glaciers, icefields, ice caps and ice sheets with length scales from 10⁰ to 10³ km) from different parts of the world. Four profile attributes are investigated: relief, span, and two parameters, C* and C, that result from using Nye's (1952) theoretical parabola as a profile descriptor. C* and C respectively measure each profile's aspect ratio and steepness, and are found to decrease in size and variability with span. This dependence quantifies the competing influences of the unconstrained spreading behaviour of ice flow and of bed topography on the profile shape of ice masses, which becomes more parabolic as span increases (with C* and C tending to low values of 2.5-3.3 m^(1/2)). The same data reveal coherent minimum bounds in C* and C for modern ice masses, which we develop into two new methods of palaeo-glacier reconstruction. In the first method, glacial limits are known from moraines and the bounds are used to constrain the lowest palaeo ice surface consistent with modern profiles. We give an example of applying this method over a three-dimensional glacial landscape in Kamchatka. In the second method, we test the plausibility of existing reconstructions by comparing their C* and C against the modern minimum bounds. Of the 86 published palaeo ice masses that we put to this test, 88% are found to be plausible. The search for other morphometric constraints will help us formalise glacier reconstructions and reduce their uncertainty and subjectiveness.
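The use of Nye's parabola as a profile descriptor can be sketched as a one-parameter least-squares fit h = C·√x; the paper's exact definitions of C* and C may differ, so this is illustrative only:

```python
import numpy as np

def parabola_coefficient(x, h):
    """Least-squares fit of Nye's (1952) parabolic profile h = C * sqrt(x)
    to surface elevations h at distances x from the margin; C has units
    of m^(1/2) when x and h are in metres."""
    s = np.sqrt(np.asarray(x, dtype=float))
    return (s @ np.asarray(h, dtype=float)) / (s @ s)

# A synthetic profile that is exactly parabolic with C = 3.0 m^(1/2):
x = np.linspace(1.0, 10_000.0, 200)
C = parabola_coefficient(x, 3.0 * np.sqrt(x))
```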
Abstract:
Local computation in join trees or acyclic hypertrees has been shown to be linked to a particular algebraic structure, called a valuation algebra. There are many models of this algebraic structure, ranging from probability theory to numerical analysis, relational databases and various classical and non-classical logics. It turns out that many interesting models of valuation algebras may be derived from semiring-valued mappings. In this paper we study how valuation algebras are induced by semirings and how the structure of the valuation algebra is related to the algebraic structure of the semiring. In particular, c-semirings with idempotent multiplication induce idempotent valuation algebras and therefore permit particularly efficient architectures for local computation. Also important are semirings whose multiplicative semigroup is embedded in a union of groups. They induce valuation algebras with a partially defined division. For these valuation algebras, the well-known architectures for Bayesian networks apply. We also extend the general computational framework to allow derivation of bounds and approximations, for when exact computation is not feasible.
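The semiring-to-valuation-algebra construction can be sketched with the tropical (min, +) semiring: combination is pointwise semiring multiplication (+) and marginalisation is semiring addition (min) over the eliminated variable. The domains and costs below are illustrative:

```python
from itertools import product

DOM = (0, 1)  # illustrative binary domain for variables x, y, z

def combine(f, g):
    """Combine valuations f(x, y) and g(y, z) into a valuation on (x, y, z)
    using the (min,+) semiring's multiplication, i.e. ordinary addition."""
    return {(x, y, z): f[(x, y)] + g[(y, z)]
            for x, y, z in product(DOM, DOM, DOM)}

def marginalize(h, keep):
    """Project h(x, y, z) onto the variable positions in `keep` using the
    semiring's addition, i.e. min over the eliminated configurations."""
    out = {}
    for cfg, val in h.items():
        key = tuple(cfg[i] for i in keep)
        out[key] = min(out.get(key, float("inf")), val)
    return out

f = {(x, y): abs(x - y) for x, y in product(DOM, DOM)}      # cost on (x, y)
g = {(y, z): 2 * abs(y - z) for y, z in product(DOM, DOM)}  # cost on (y, z)
h = marginalize(combine(f, g), keep=(0, 2))                 # eliminate y
```

Eliminating y this way is exactly the local-computation step a join-tree architecture performs on each separator.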
Abstract:
An appreciation of the quantity of streamflow derived from the main hydrological pathways involved in transporting diffuse contaminants is critical when addressing a wide range of water resource management issues. In order to assess hydrological pathway contributions to streams, it is necessary to provide feasible upper and lower bounds for flows in each pathway. An important first step in this process is to provide reliable estimates of the slower-responding groundwater pathways and subsequently the quicker overland and interflow pathways. This paper investigates the effectiveness of a multi-faceted approach applying different hydrograph separation techniques, supplemented by lumped hydrological modelling, for calculating the Baseflow Index (BFI) and for the development of an integrated approach to hydrograph separation. A semi-distributed, lumped and deterministic rainfall-runoff model known as NAM has been applied to ten catchments (ranging from 5 to 699 km²). While this modelling approach is useful as a validation method, NAM itself is also an important tool for investigation. The separation techniques produce a large variation in BFI: a difference of 0.741 in the BFI predicted for one catchment when the less reliable fixed-interval, sliding-interval and local-minima turning-point methods are included. This variation is reduced to 0.167 when these methods are omitted. The Boughton and Eckhardt algorithms, while quite subjective in their use, provide quick and easily implemented approaches for obtaining physically realistic hydrograph separations. It is observed that while the different separation techniques give varying BFI values for each of the catchments, a recharge coefficient approach developed in Ireland, when applied in conjunction with the Master Recession Curve Tabulation method, predicts estimates in agreement with those obtained using the NAM model, and these estimates are also consistent with the study catchments' geology.
These two separation methods, in conjunction with the NAM model, were selected to form an integrated approach to assessing BFI in catchments.
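The Eckhardt algorithm mentioned above is a recursive digital filter; a minimal sketch with illustrative (not calibrated) parameter values:

```python
def eckhardt_baseflow(q, alpha=0.98, bfi_max=0.8):
    """Eckhardt (2005) recursive digital baseflow filter for a streamflow
    series q. alpha: recession constant; bfi_max: maximum baseflow index.
    Both parameter defaults here are illustrative."""
    b = [min(q[0] * bfi_max, q[0])]
    for t in range(1, len(q)):
        bt = ((1 - bfi_max) * alpha * b[-1]
              + (1 - alpha) * bfi_max * q[t]) / (1 - alpha * bfi_max)
        b.append(min(bt, q[t]))          # baseflow cannot exceed streamflow
    return b

def baseflow_index(q, **kw):
    """BFI = total baseflow / total streamflow."""
    b = eckhardt_baseflow(q, **kw)
    return sum(b) / sum(q)

# For a constant flow the filter settles at BFI = bfi_max:
bfi = baseflow_index([10.0] * 500)
```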
Abstract:
Primary care in the United States is undergoing many changes. Reliable and valid instruments are needed to assess the effects of these changes. The Primary Care Organizational Questionnaire (PCOQ), a 56-item 5-point Likert scale survey that evaluates interactions among members of the clinic/practice and job-related attributes, was administered to clinicians and staff in 36 primary care practices serving paediatric populations in Connecticut. A priori scales were reliable (Cronbach's alpha = 0.7). Analysis of variance (ANOVA) showed greater heterogeneity across clinics than within clinics for 13 of 15 a priori scales, which were then included in a principal component factor analysis with varimax rotation. Eigenvalue analysis showed nine significant factors, largely similar to the a priori scales, indicating concurrent construct validity. Further research will ascertain the utility of the PCOQ in predicting the effectiveness of primary care practices in implementing disease management programmes. © 2006 Royal Society of Medicine Press.
Abstract:
A web-service is a remote computational facility which is made available for general use by means of the internet. An orchestration is a multi-threaded computation which invokes remote services. In this paper game theory is used to analyse the behaviour of orchestration evaluations when the underlying web-services are unreliable. Uncertainty profiles are proposed as a means of defining bounds on the number of service failures that can be expected during an orchestration evaluation. An uncertainty profile describes a strategic situation that can be analysed using a zero-sum angel-daemon game with two competing players: an angel a, whose objective is to minimize damage to an orchestration, and a daemon d, who acts in a destructive fashion. An uncertainty profile is assessed using the value of its angel-daemon game. It is shown that uncertainty profiles form a partial order which is monotonic with respect to assessment.
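The assessment of an uncertainty profile by the value of its angel-daemon game can be sketched for the special case where the payoff matrix has a pure saddle point (the general case requires mixed strategies); the damage values below are hypothetical:

```python
def saddle_value(payoff):
    """Value of a zero-sum matrix game when a pure saddle point exists:
    the row player's minimax equals the column player's maximin.
    Here rows are the angel (minimises damage), columns the daemon
    (maximises damage)."""
    minimax = min(max(row) for row in payoff)          # angel's guarantee
    maximin = max(min(col) for col in zip(*payoff))    # daemon's guarantee
    if minimax != maximin:
        raise ValueError("no pure saddle point; mixed strategies needed")
    return minimax

# Rows: angel's choice of which services to protect; columns: daemon's
# choice of which services to attack (damage units are illustrative).
damage = [[2, 3],
          [1, 0]]
value = saddle_value(damage)
```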
Abstract:
This paper investigates the distribution of the condition number of complex Wishart matrices. Two closely related measures are considered: the standard condition number (SCN) and the Demmel condition number (DCN), both of which have important applications in the context of multiple-input multiple-output (MIMO) communication systems, as well as in various branches of mathematics. We first present a novel generic framework for the SCN distribution which accounts for both central and non-central Wishart matrices of arbitrary dimension. This result is a simple unified expression which involves only a single scalar integral, and therefore allows for fast and efficient computation. For the case of dual Wishart matrices, we derive new exact polynomial expressions for both the SCN and DCN distributions. We also formulate a new closed-form expression for the tail SCN distribution which applies for correlated central Wishart matrices of arbitrary dimension and demonstrates an interesting connection to the maximum eigenvalue moments of Wishart matrices of smaller dimension. Based on our analytical results, we gain valuable insights into the statistical behavior of the channel conditioning for various MIMO fading scenarios, such as uncorrelated/semi-correlated Rayleigh fading and Ricean fading. © 2010 IEEE.
Abstract:
We present pollen records from three sites in south Westland, New Zealand, that document past vegetation and inferred climate change between approximately 30,000 and 15,000 cal. yr BP. Detailed radiocarbon dating of the enclosing sediments at one of those sites, Galway tarn, provides a more robust chronology for the structure and timing of climate-induced vegetation change than has previously been possible in this region. The Kawakawa/Oruanui tephra, a key isochronous marker, affords a precise stratigraphic link across all three pollen records, while other tie points are provided by key pollen-stratigraphic changes which appear to be synchronous across all three sites. Collectively, the records show three episodes in which grassland, interpreted as indicating mostly cold subalpine to alpine conditions, was prevalent in lowland south Westland, separated by phases dominated by subalpine shrubs and montane-lowland trees, indicating milder interstadial conditions. Dating, expressed as a Bayesian-estimated single 'best' age followed in parentheses by younger/older bounds of the 95% confidence modelled age range, indicates that a cold stadial episode, whose onset was marked by replacement of woodland by grassland, occurred between 28,730 (29,390-28,500) and 25,470 (26,090-25,270) cal. yr BP (years before AD 1950), prior to the deposition of the Kawakawa/Oruanui tephra. Milder interstadial conditions prevailed between 25,470 (26,090-25,270) and 24,400 (24,840-24,120) cal. yr BP and between 22,630 (22,930-22,340) and 21,980 (22,210-21,580) cal. yr BP, separated by a return to cold stadial conditions between 24,400 and 22,630 cal. yr BP. A final episode of grass-dominated vegetation, indicating cold stadial conditions, occurred from 21,980 (22,210-21,580) to 18,490 (18,670-17,950) cal. yr BP. The decline in grass pollen, indicating progressive climate amelioration, was well advanced by 17,370 (17,730-17,110) cal. yr BP, indicating that the onset of the termination in south Westland occurred sometime between ca 18,490 and ca 17,370 cal. yr BP. A similar general pattern of stadials and interstadials is seen, to varying degrees of resolution but generally with lesser chronological control, in many other paleoclimate proxy records from the New Zealand region. This highly resolved chronology of vegetation changes from southwestern New Zealand contributes to the examination of past climate variations in the southwest Pacific region. The stadial and interstadial episodes defined by south Westland pollen records represent notable climate variability during the latter part of the Last Glaciation. Similar climatic patterns recorded farther afield, for example from Antarctica and the Southern Ocean, imply that climate variations during the latter part of the Last Glaciation and the transition to the Holocene interglacial were inter-regionally extensive in the Southern Hemisphere and thus important to understand in detail and to place into a global context. © 2013 Elsevier Ltd. All rights reserved.
Abstract:
Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who is the elected leader. This paper focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. The "obvious" lower bounds of Ω(m) messages (m is the number of edges in the network) and Ω(D) time (D is the network diameter) are non-trivial to show for randomized (Monte Carlo) algorithms. (Recent results showing that even Ω(n) (n is the number of nodes in the network) is not a lower bound on the messages in complete networks make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms (except for the limited case of comparison algorithms, where it was also required that some nodes may not wake up spontaneously, and that D and n were not known).
We establish these fundamental lower bounds in this paper for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (such algorithms should work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of the nodes' identities. To show that these bounds are tight, we present an O(m)-message algorithm. An O(D)-time algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.
An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. (The answer is known to be negative in the deterministic setting). We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade-off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
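The flavour of an O(m)-message, flooding-based election can be conveyed by a small synchronous simulation; this is an illustrative sketch, not the paper's algorithm:

```python
import random

def elect(adj, rng):
    """Synchronous flooding of the maximum random rank on graph `adj`
    (dict: node -> list of neighbours). Each round every node reports the
    best rank it has seen to all neighbours (2m messages per round); the
    election is implicit, since only the node whose own rank equals the
    final best rank can know it is the leader. Returns (leader, messages)."""
    rank = {v: rng.random() for v in adj}   # random ranks break symmetry
    best = dict(rank)
    messages = 0
    changed = True
    while changed:
        changed = False
        inbox = {v: [] for v in adj}
        for v in adj:
            for u in adj[v]:
                inbox[u].append(best[v])
                messages += 1
        for v in adj:
            top = max(inbox[v])
            if top > best[v]:
                best[v] = top
                changed = True
    leader = max(rank, key=rank.get)        # node holding the maximum rank
    return leader, messages

# A 6-node ring: m = 6 edges, so each round costs 2m = 12 messages.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
leader, messages = elect(ring, random.Random(42))
```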