Abstract:
The Silent Aircraft Initiative aims to provide a conceptual design for a large passenger aircraft whose noise would be imperceptible above the background level outside an urban airfield. Landing gear noise presents a significant challenge to such an aircraft. 1/10th-scale models have been examined with the aim of establishing a lower noise limit for large aircraft landing gear. Additionally, the landing gear has been included in an integrated design concept for the 'Silent' Aircraft. This work demonstrates the capabilities of the closed-section Markham wind tunnel and the installed phased microphone arrays for aerodynamic and acoustic measurements. Interpretation of acoustic data has been enhanced by use of the CLEAN algorithm to quantify noise levels in a repeatable way and to eliminate side lobes which result from the microphone array geometry. Results suggest that highly simplified landing gears containing only the main struts offer a 12 dBA reduction relative to modern landing gear noise. Noise treatment of the simplified landing gear with fairings offers a further reduction, which appears to be limited by noise from the lower parts of the wheels. The importance of fine details and surface discontinuities for low-noise design is also underlined.
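As context for the deconvolution step mentioned above, the sketch below shows a Högbom-style CLEAN loop applied to a beamforming ("dirty") map. The function name, loop gain, and stopping rule are illustrative assumptions; the specific CLEAN variant used with the Markham tunnel arrays may differ.

```python
import numpy as np

def clean_deconvolve(dirty_map, psf, loop_gain=0.1, n_iter=200, threshold=0.0):
    """Hogbom-style CLEAN: iteratively remove scaled copies of the array
    point spread function (PSF) from the beamforming (dirty) map, so that
    side lobes introduced by the microphone array geometry are suppressed.

    dirty_map : 2-D beamforming output (source power per grid point)
    psf       : 2-D PSF of the array, centred and same shape as dirty_map
    """
    residual = dirty_map.copy()
    clean_components = np.zeros_like(dirty_map)
    centre = np.array(psf.shape) // 2

    for _ in range(n_iter):
        # Locate the strongest remaining peak in the residual map.
        peak_idx = np.unravel_index(np.argmax(residual), residual.shape)
        peak_val = residual[peak_idx]
        if peak_val <= threshold:
            break
        # Record a fraction of the peak as a clean (point-source) component.
        clean_components[peak_idx] += loop_gain * peak_val
        # Subtract the correspondingly scaled, shifted PSF from the residual
        # (np.roll uses a periodic shift; kept for brevity in this sketch).
        shift = np.array(peak_idx) - centre
        shifted_psf = np.roll(np.roll(psf, shift[0], axis=0), shift[1], axis=1)
        residual -= loop_gain * peak_val * shifted_psf

    # The clean map is the recovered sources plus whatever residual remains.
    return clean_components + residual
```

Summing the clean components over a region of interest then gives a repeatable source level, which is the sense in which CLEAN quantifies noise levels free of the array's side lobes.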
Abstract:
The interaction of wakes shed by a moving bladerow with a downstream bladerow causes unsteady flow. The meaning of the freestream stagnation pressure and stagnation enthalpy in these circumstances has been examined using simple analyses, measurements and CFD. The unsteady flow in question arises from the behaviour of the wakes as so-called negative jets. The interactions of the negative jets with the downstream blades lead to fluctuations in static pressure, which in turn generate fluctuations in the stagnation pressure and stagnation enthalpy. It is shown that the fluctuations of the stagnation quantities created by unsteady effects within the bladerow are far greater than those within the incoming wake. The time-mean exit profiles of the stagnation pressure and stagnation enthalpy are affected by these large fluctuations. This phenomenon of energy separation is much more significant than the distortion of the time-mean exit profiles caused directly by the cross-passage transport associated with the negative jet, as described by Kerrebrock and Mikolajczak. Finally, it is shown that, despite these fluctuations, the time-mean exit stagnation pressure is sufficient to determine the time-averaged loss across a bladerow.
Abstract:
This paper provides a physical interpretation of the mechanism of stagnation enthalpy and stagnation pressure changes in turbomachines due to unsteady flow, the agency for all work transfer between a turbomachine and an inviscid fluid. Examples are first given to illustrate the direct link between the time variation of static pressure seen by a given fluid particle and the rate of change of stagnation enthalpy for that particle. These include absolute stagnation temperature rises in turbine rotor tip leakage flow, wake transport through downstream blade rows, and effects of wake phasing on compressor work input. Fluid dynamic situations are then constructed to explain the effect of unsteadiness, including a physical interpretation of how stagnation pressure variations are created by temporal variations in static pressure; it is shown that the unsteady static pressure plays the role of a time-dependent body force potential. It is further shown that when the unsteadiness is due to a spatial nonuniformity translating at constant speed, as in a turbomachine, the unsteady pressure variation can be viewed as a local power input per unit mass from this body force to the fluid particle instantaneously at that point.
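The mechanism described here is conventionally summarised by the energy equation for inviscid, adiabatic flow; the following is the standard textbook form of that relation rather than a quotation from the paper:

```latex
\frac{D h_0}{D t} \;=\; \frac{1}{\rho}\,\frac{\partial p}{\partial t},
\qquad
\frac{\partial p}{\partial t} \;=\; -U\,\frac{\partial p}{\partial y}
\;\;\Longrightarrow\;\;
\frac{D h_0}{D t} \;=\; -\frac{U}{\rho}\,\frac{\partial p}{\partial y},
```

where the second form holds for a pressure field translating rigidly at blade speed U in the pitchwise direction y. A fluid particle therefore gains stagnation enthalpy only where the static pressure it experiences rises in time, and the pitchwise pressure gradient of the translating nonuniformity acts like a body force whose power input per unit mass is U times that force.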
Abstract:
The brain extracts useful features from a maelstrom of sensory information, and a fundamental goal of theoretical neuroscience is to work out how it does so. One proposed feature extraction strategy is motivated by the observation that the meaning of sensory data, such as the identity of a moving visual object, is often more persistent than the activation of any single sensory receptor. This notion is embodied in the slow feature analysis (SFA) algorithm, which uses “slowness” as an heuristic by which to extract semantic information from multi-dimensional time-series. Here, we develop a probabilistic interpretation of this algorithm showing that inference and learning in the limiting case of a suitable probabilistic model yield exactly the results of SFA. Similar equivalences have proved useful in interpreting and extending comparable algorithms such as independent component analysis. For SFA, we use the equivalent probabilistic model as a conceptual spring-board, with which to motivate several novel extensions to the algorithm.
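For reference, below is a minimal sketch of the classical linear SFA algorithm that the probabilistic model recovers in its limiting case (whiten the signal, then take the minor components of the derivative covariance). Variable names and the toy signal are illustrative; this is the standard formulation, not the paper's probabilistic extensions.

```python
import numpy as np

def linear_sfa(x, n_features=2):
    """Linear slow feature analysis on a multivariate time series.

    x : array of shape (T, D) -- T time steps, D input dimensions
    Returns a weight matrix W (D, n_features) whose projections x @ W
    vary as slowly as possible subject to unit variance and decorrelation.
    """
    # 1. Centre and whiten the data so the output constraints
    #    (unit variance, decorrelation) become an orthogonality condition.
    x = x - x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    whitener = eigvec / np.sqrt(eigval)          # D x D whitening matrix
    z = x @ whitener

    # 2. Slowness: minimise the variance of the temporal derivative.
    dz = np.diff(z, axis=0)
    dcov = np.cov(dz, rowvar=False)

    # 3. The slowest features are the eigenvectors of the derivative
    #    covariance with the *smallest* eigenvalues.
    dval, dvec = np.linalg.eigh(dcov)
    order = np.argsort(dval)[:n_features]
    return whitener @ dvec[:, order]

# Toy usage: a slow sinusoid hidden in a fast, mixed signal.
t = np.linspace(0, 2 * np.pi, 1000)
slow, fast = np.sin(t), np.sin(25 * t)
mixed = np.stack([slow + 0.5 * fast, 0.5 * slow - fast], axis=1)
W = linear_sfa(mixed, n_features=1)
recovered = mixed @ W        # approximates the slow source up to sign/scale
```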
Abstract:
Change detection is a classic paradigm that has been used for decades to argue that working memory can hold no more than a fixed number of items ("item-limit models"). Recent findings force us to consider the alternative view that working memory is limited by the precision in stimulus encoding, with mean precision decreasing with increasing set size ("continuous-resource models"). Most previous studies that used the change detection paradigm have ignored effects of limited encoding precision by using highly discriminable stimuli and only large changes. We conducted two change detection experiments (orientation and color) in which change magnitudes were drawn from a wide range, including small changes. In a rigorous comparison of five models, we found no evidence of an item limit. Instead, human change detection performance was best explained by a continuous-resource model in which encoding precision is variable across items and trials even at a given set size. This model accounts for comparison errors in a principled, probabilistic manner. Our findings sharply challenge the theoretical basis for most neural studies of working memory capacity.
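A rough illustration of a variable-precision (continuous-resource) observer is sketched below, assuming gamma-distributed precision whose mean falls with set size and a simple criterion rule on the largest measured change. The paper's model comparison uses a principled Bayesian decision rule, so the rule, parameters, and function names here are placeholders only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_vp_observer(set_size, change_mag, n_trials=10_000,
                         mean_precision=20.0, alpha=1.0, scale=2.0,
                         criterion=0.35):
    """Variable-precision observer in a change detection task (sketch).

    Precision (1/variance) of each item's memory is drawn independently from
    a gamma distribution whose mean falls with set size as N**(-alpha).
    Half of the trials contain a change of magnitude `change_mag` (radians)
    in one random item; the observer reports 'change' when the largest
    measured difference between the two displays exceeds `criterion`.
    Circular wrap-around of orientation/colour is ignored for brevity.
    """
    mean_J = mean_precision * set_size ** (-alpha)
    # Gamma-distributed precision, one draw per item per trial per display.
    J = rng.gamma(shape=mean_J / scale, scale=scale,
                  size=(2, n_trials, set_size))
    noise = rng.normal(0.0, 1.0 / np.sqrt(J))   # encoding noise per display

    change_present = rng.random(n_trials) < 0.5
    true_change = np.zeros((n_trials, set_size))
    changed_item = rng.integers(set_size, size=n_trials)
    true_change[np.arange(n_trials), changed_item] = change_mag * change_present

    measured_diff = np.abs(true_change + noise[1] - noise[0])
    respond_change = measured_diff.max(axis=1) > criterion
    return np.mean(respond_change == change_present)   # proportion correct

# Proportion correct falls with set size and rises with change magnitude.
for N in (2, 4, 8):
    print(N, simulate_vp_observer(N, change_mag=np.pi / 8))
```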
Abstract:
Periodic feedback stabilization is a very natural solution to overcome the topological obstructions which may occur when one tries to asymptotically (locally) stabilize a (locally) controllable nonlinear system around an equilibrium point. The object of this paper is to give a simple geometric interpretation of this fact, to show that one obtains a weakened form of those obstructions when periodic feedback is used, and to illustrate the success of periodic feedback stabilization on a representative system which contains a drift.
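The classical illustration of the topological obstruction in question is Brockett's necessary condition; a standard drift-free example (the paper's representative system additionally contains a drift term) is:

```latex
\dot{x}_1 = u_1, \qquad \dot{x}_2 = u_2, \qquad \dot{x}_3 = x_1 u_2 - x_2 u_1 .
```

This system is locally controllable at the origin, yet no continuous time-invariant feedback u(x) can make the origin asymptotically stable, because states of the form (0, 0, ε) lie outside the image of the dynamics near the origin; a time-periodic feedback u(x, t), by contrast, can stabilize it.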
Abstract:
The notion of coupling within a design, particularly within the context of Multidisciplinary Design Optimization (MDO), is much used but ill-defined. There are many different ways of measuring design coupling, but these measures vary in both their conceptions of what design coupling is and how such coupling may be calculated. Within the differential geometry framework which we have previously developed for MDO systems, we put forth our own design coupling metric for consideration. Our metric is not commensurate with similar types of coupling metrics, but we show that it both provides a helpful geometric interpretation of coupling (and uncoupledness in particular) and exhibits greater generality and potential for analysis than those similar metrics. Furthermore, we discuss how the metric might be profitably extended to time-varying problems and show how the metric's measure of coupling can be applied to multi-objective optimization problems (in unconstrained optimization and in MDO).
Abstract:
Laminated glass units are traditionally used to provide a degree of post-fracture strength, but the residual strength is often limited to relatively low levels, sufficient for holding the glass fragments together for a predetermined amount of time. It is possible to achieve a higher level of residual strength, but this requires specific boundary conditions and/or opaque reinforcing materials. This paper describes experimental investigations of laminated glass units that can provide a significant degree of post-fracture resistance without the need for boundary restraints or opaque reinforcing materials. The glass units are composed entirely of combinations of conventional transparent interlayers and commercially available glass (annealed, heat treated and chemically strengthened). The paper also describes an empirical, energy-based interpretation of the mechanical response of the laminated units.
Abstract:
One of the main causes of failure in historic buildings is differential settlement of the foundations. Finite element analysis provides a useful tool for predicting the consequences of given ground displacements in terms of structural damage and for assessing the need for strengthening techniques. The current damage classification for buildings subject to settlement bases the assessment of potential damage on the expected crack pattern of the structure. In this paper, the correlation between the physical description of the damage in terms of crack width and the interpretation of the finite element analysis output is analyzed. Different discrete and continuum crack models are applied to simulate an experiment carried out on a scale model of a historic masonry building, the Loggia Palace in Brescia (Italy). Results are discussed, and a modified version of the fixed total strain smeared crack model is evaluated in order to address the problem of calculating the exact crack width.
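For orientation, in crack-band / smeared-crack formulations the physical crack width is commonly recovered from the smeared crack strain via a relation of the form below; this is the standard conversion, not necessarily the modified expression evaluated in the paper:

```latex
w \;\approx\; h \, \varepsilon^{\mathrm{cr}}_{nn},
```

where \(\varepsilon^{\mathrm{cr}}_{nn}\) is the crack-normal (inelastic) strain of the smeared crack and \(h\) is the crack bandwidth associated with the finite element size.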