Abstract:
The contributions of full-wake dynamics in trim analysis are demonstrated for finding the control inputs and periodic responses simultaneously, as well as in Floquet eigenanalysis for finding the damping levels. The equations of flap bending, lag bending, and torsion are coupled with a three-dimensional, finite-state wake, and low-frequency (<1/rev) to high-frequency (>1/rev) multiblade modes are considered. Full blade-wake dynamics is used in both the trim analysis and the Floquet eigenanalysis. A uniform cantilever blade in trimmed flight is investigated over a range of thrust levels, advance ratios, numbers of blades, and blade torsional frequencies. The investigation includes the convergence characteristics of control inputs, periodic responses, and damping levels with respect to the number of spatial azimuthal harmonics and radial shape functions in the wake representation. It also includes correlation with the measured lag damping of a three-bladed untrimmed rotor. The parametric study shows the dominant influence of wake dynamics on control inputs, periodic responses, and damping levels, and wake theory generally improves the correlation.
Abstract:
The problem of determining optimal power spectral density models for earthquake excitation that satisfy constraints on total average power and zero-crossing rate, and that produce the highest response variance in a given linear system, is considered. The solution to this problem is obtained using linear programming methods. The resulting solutions are shown to display a highly deterministic structure and, therefore, fail to capture the stochastic nature of the input. A modification to the definition of critical excitation is proposed which takes into account the entropy rate as a measure of uncertainty in the earthquake loads. The resulting problem is solved using calculus of variations and also within a linear programming framework. Illustrative examples of specifying seismic inputs for a nuclear power plant and a tall earth dam are considered, and the resulting solutions are shown to be realistic.
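As a rough illustration of the first formulation above, the following sketch discretizes the input PSD and maximizes the response variance of a single-degree-of-freedom oscillator by linear programming; the oscillator parameters, frequency grid, power budget, and per-bin cap are assumed values, not those of the paper.

```python
# Minimal sketch of the critical-excitation LP: choose a discretized one-sided
# PSD S(w_k) that maximizes the response variance of a SDOF oscillator subject
# to a total-average-power constraint. All parameter values are illustrative.
import numpy as np
from scipy.optimize import linprog

wn, zeta = 2 * np.pi * 2.0, 0.05           # oscillator: 2 Hz, 5% damping (assumed)
w = np.linspace(0.1, 2 * np.pi * 10, 400)  # frequency grid [rad/s]
dw = w[1] - w[0]

H2 = 1.0 / ((wn**2 - w**2)**2 + (2 * zeta * wn * w)**2)  # |H(iw)|^2

P_total = 1.0   # constraint: total average power of the input (assumed value)
S_max = 0.05    # per-bin cap, without which all power piles into one bin

# maximize sum(H2 * S * dw)  <=>  minimize -(H2 * dw) @ S
res = linprog(c=-H2 * dw,
              A_eq=np.ones((1, w.size)) * dw, b_eq=[P_total],
              bounds=[(0.0, S_max)] * w.size)
print("max response variance:", float(-res.fun))
# The optimal S concentrates near the resonant frequency -- the "highly
# deterministic structure" the abstract refers to; an entropy-rate constraint
# would spread it out.
```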
Abstract:
Floquet analysis is widely used for small-order systems (say, order M < 100) to find trim results of control inputs and periodic responses, and stability results of damping levels and frequencies. Presently, however, it is practical neither for design applications nor for comprehensive analysis models that lead to large systems (M > 100); the run time on a sequential computer is simply prohibitive. Accordingly, a massively parallel Floquet analysis is developed with emphasis on large systems, and it is implemented on two SIMD or single-instruction, multiple-data computers with 4096 and 8192 processors. The focus of this development is a parallel shooting method with damped Newton iteration to generate trim results; the Floquet transition matrix (FTM) comes out as a byproduct. The eigenvalues and eigenvectors of the FTM are computed by a parallel QR method, and thereby stability results are generated. For illustration, flap and flap-lag stability of isolated rotors are treated by the parallel analysis and by a corresponding sequential analysis with the conventional shooting and QR methods; linear quasisteady airfoil aerodynamics and a finite-state three-dimensional wake model are used. Computational reliability is quantified by the condition numbers of the Jacobian matrices in Newton iteration, the condition numbers of the eigenvalues, and the residual errors of the eigenpairs; reliability figures are comparable in both the parallel and sequential analyses. Compared to the sequential analysis, the parallel analysis reduces the run time of large systems dramatically, and the reduction increases with increasing system order; this finding offers considerable promise for design and comprehensive-analysis applications.
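A minimal sequential sketch of the core idea — shooting for the periodic response, with the FTM as a byproduct and damping levels from its eigenvalues — is given below for a forced, damped Mathieu oscillator standing in for a single rotor mode; the equation coefficients are illustrative assumptions, and no parallelism is shown.

```python
# Periodic shooting plus Floquet eigenanalysis for a toy periodic system;
# the model and coefficients are assumptions, not the paper's rotor equations.
import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi                     # period (1/rev)
def f(t, x):
    # x = [q, qdot]; periodic stiffness plus an external 1/rev forcing
    q, qd = x
    return [qd, -0.2 * qd - (1.5 + 0.4 * np.cos(t)) * q + 0.1 * np.sin(t)]

def propagate(x0):
    return solve_ivp(f, (0.0, T), x0, rtol=1e-10, atol=1e-12).y[:, -1]

n, eps = 2, 1e-6
x0 = np.zeros(n)
for _ in range(20):               # Newton on g(x0) = x(T; x0) - x0
    base = propagate(x0)
    FTM = np.column_stack([(propagate(x0 + eps * e) - base) / eps
                           for e in np.eye(n)])   # Floquet transition matrix
    g = base - x0
    if np.linalg.norm(g) < 1e-10:
        break
    x0 = x0 + np.linalg.solve(FTM - np.eye(n), -g)  # full step; damp if needed

lam = np.linalg.eigvals(FTM)        # Floquet multipliers
damping = np.log(np.abs(lam)) / T   # real parts of characteristic exponents
print("periodic IC:", x0, "damping levels:", damping)
```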
Abstract:
In this paper we consider an N × N non-blocking, space-division ATM switch with input cell queueing. At each input, the cell arrival process comprises geometrically distributed bursts of consecutive cells for the various outputs. Motivated by the fact that some input links may be connected to metropolitan area networks, and others directly to B-ISDN terminals, we study the situation where there are two classes of inputs with different values of mean burst length. We show that when inputs contend for an output, giving priority to an input with the smaller expected burst length yields a larger saturation throughput than if the reverse priority is given. Further, giving priority to less bursty traffic can give better throughput than if all the inputs carried this less bursty traffic. We derive the asymptotic (as N → ∞) saturation throughputs for each priority class.
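The sketch below is a toy Monte Carlo of the saturation setting described: every input always has a head-of-line cell, bursts are geometric, and output contention is resolved in favor of the less bursty class; the switch size, burst lengths, and tie-breaking rule are assumptions rather than the paper's exact model.

```python
# Toy saturation-throughput simulation of an N x N input-queued switch with
# two classes of geometrically bursty inputs; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, slots = 32, 20000
mean_burst = np.where(np.arange(N) < N // 2, 2.0, 16.0)  # class 0 / class 1
dest = rng.integers(0, N, size=N)   # HOL destination per (saturated) input

served = 0
for _ in range(slots):
    winners = {}
    order = np.argsort(mean_burst + rng.random(N) * 1e-3)  # less bursty first
    for i in order:
        winners.setdefault(dest[i], i)   # first claimant of each output wins
    served += len(winners)
    for i in winners.values():           # winner delivers one cell; with prob
        if rng.random() < 1.0 / mean_burst[i]:   # 1/b its burst ends
            dest[i] = rng.integers(0, N)         # -> new destination

print("saturation throughput per input:", served / (slots * N))
```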
Abstract:
Dynamics of the aircraft configuration considered in this paper show a unique characteristic in that there are no stable attractors in the entire high angle-of-attack flight envelope. As a result, once the aircraft has departed from the normal flight regime, no standard technique can be applied to recover it. In this paper, using the feedback linearization technique, a nonlinear controller is designed at high angles of attack, which is engaged after the aircraft departs from the normal flight regime. This controller stabilizes the aircraft into a stable spin. Then a set of synthetic pilot inputs is applied to cause an automatic transition from the spin equilibrium to low angles of attack, where the second controller is engaged. This controller is a normal gain-scheduled controller designed to have a large domain of attraction at low angles of attack. It traps the aircraft into low angle-of-attack level flight. This entire recovery concept has been verified using a six-degree-of-freedom nonlinear simulation. The feedback linearization technique used to design a controller ensures internal stability only if the nonlinear plant has stable zero dynamics. Because the zero dynamics depend on the selection of outputs, a new method of choosing outputs is described to obtain a plant that has stable zero dynamics. Certain important aspects pertaining to the implementation of a feedback linearization-based controller are also discussed.
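A minimal sketch of the feedback linearization step on a toy pendulum plant (standing in for the aircraft dynamics) is shown below; the plant parameters and gains are assumed, and the point is only the exact cancellation of the nonlinearity followed by linear pole placement.

```python
# Feedback linearization on a pendulum: u cancels the nonlinearity, and an
# outer linear law places the closed-loop poles. All values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

a, b = 9.81, 0.5            # plant parameters (assumed)
k1, k2 = 4.0, 4.0           # gains placing the linearized poles at -2, -2

def closed_loop(t, x):
    th, thd = x
    v = -k1 * th - k2 * thd             # outer linear control law
    u = a * np.sin(th) + b * thd + v    # cancels the nonlinearity exactly
    return [thd, -a * np.sin(th) - b * thd + u]

sol = solve_ivp(closed_loop, (0, 10), [2.5, 0.0])
print("final state:", sol.y[:, -1])     # -> approaches [0, 0]
```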
Abstract:
A fuzzy logic system is developed for helicopter rotor system fault isolation. Inputs to the fuzzy logic system are measurement deviations of blade bending and torsion response and vibration from a "good" undamaged helicopter rotor. The rotor system measurements used are flap and lag bending tip deflections, elastic twist deflection at the tip, and three forces and three moments at the rotor hub. The fuzzy logic system uses rules developed from an aeroelastic model of the helicopter rotor with implanted faults to isolate the fault while accounting for uncertainty in the measurements. The faults modeled include moisture absorption, loss of trim mass, damaged lag damper, damaged pitch control system, misadjusted pitch link, and damaged flap. Tests with simulated data show that the fuzzy system isolates rotor system faults with an accuracy of about 90-100%. Furthermore, the fuzzy system is robust and gives excellent results, even when some measurements are not available. A rule-based expert system based on similar rules from the aeroelastic model performs much more poorly than the fuzzy system in the presence of high levels of uncertainty.
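The following sketch illustrates the flavor of such a fuzzy isolation scheme: measurement deviations are fuzzified into negative/zero/positive levels, and rules vote for faults with min for AND and max for aggregation; the membership shapes, rule patterns, measurement names, and thresholds are all illustrative assumptions, not the paper's rule base.

```python
# Toy fuzzy fault isolation: fuzzify measurement deviations and fire rules.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership on [a, c] peaking at b."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def fuzzify(d):
    """Deviation -> membership in (negative, zero, positive)."""
    return tri(d, -2, -1, 0), tri(d, -0.5, 0, 0.5), tri(d, 0, 1, 2)

# rules: (fault, required level index per measurement); 0=neg, 1=zero, 2=pos.
# AND across antecedents = min; aggregation across rules = max.
RULES = [("misadjusted pitch link", {"flap_tip": 2, "hub_moment": 2}),
         ("damaged lag damper",     {"lag_tip": 2, "hub_force": 0})]

def isolate(deviations):            # deviations: name -> normalized value
    scores = {}
    for fault, pattern in RULES:
        firing = min(fuzzify(deviations[m])[lvl] for m, lvl in pattern.items())
        scores[fault] = max(scores.get(fault, 0.0), firing)
    return scores

print(isolate({"flap_tip": 1.1, "hub_moment": 0.9,
               "lag_tip": -0.1, "hub_force": 0.0}))
```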
Abstract:
This paper brings out the existence of a maximum in the curvature of the vapour pressure curve. It occurs in the reduced temperature range of 0.6–0.7 for all liquids and has a value of 3.8–4.8. A set of 17 working fluids consisting of several refrigerants, carbon dioxide, cryogenic liquids and water is taken as test fluids. There also exists a minimum close to the critical point, which can be observed only when a thermodynamically consistent functional form of the vapour pressure equation is chosen. This feature, in addition to throwing some light on the behaviour of the vapour pressure curve, could provide some useful inputs to the choice of working fluids for vapour pressure thermometers and thermostatic expansion valves.
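The numerical procedure for locating such a curvature extremum can be sketched as below; the choice of reduced coordinates (here ln p_r versus T_r) and the Antoine-type constants for water are assumptions, so the printed numbers need not reproduce the 3.8–4.8 range reported above.

```python
# Locate the curvature maximum of a reduced vapour-pressure curve numerically.
# Functional form and constants are illustrative assumptions.
import numpy as np

Tc, pc = 647.096, 220.64e5                 # water critical point [K, Pa]
A, B, C = 23.1964, 3816.44, -46.13         # Antoine-type: ln p[Pa] = A - B/(T+C)

Tr = np.linspace(0.45, 0.95, 2000)
y = (A - B / (Tr * Tc + C)) - np.log(pc)   # ln p_r as a function of T_r

yp = np.gradient(y, Tr)                    # numerical derivatives
ypp = np.gradient(yp, Tr)
kappa = np.abs(ypp) / (1 + yp**2) ** 1.5   # plane-curve curvature

i = np.argmax(kappa)
print(f"curvature max ~ {kappa[i]:.3f} at Tr ~ {Tr[i]:.2f}")
```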
Abstract:
A (k, K) circuit is one which can be decomposed into nonintersecting blocks of gates, where each block has no more than K external inputs, such that the graph formed by letting each block be a node and inserting edges between blocks that share a signal line is a partial k-tree. (k, K) circuits are special in that they have been shown to be testable in time polynomial in the number of gates in the circuit, and are useful if the constants k and K are small. We demonstrate a procedure to synthesise (k, K) circuits from a special class of Boolean expressions.
Abstract:
Automated synthesis of mechanical designs is an important step towards the development of an intelligent CAD system. Research into methods for supporting conceptual design using automated synthesis has attracted much attention in the past decades. The research work presented here is based on the processes of synthesizing multiple-state mechanical devices carried out individually by ten engineering designers. The designers are asked to think aloud while carrying out the synthesis. The ten design synthesis processes are video recorded, and the records are transcribed and coded to identify the activities occurring in the synthesis processes, as well as the inputs to and outputs from those activities. A mathematical representation for specifying multi-state design tasks is proposed. Further, a descriptive model capturing all ten synthesis processes is developed and presented in this paper. This model will be used to identify the outstanding issues that must be resolved before a system for supporting design synthesis of multiple-state mechanical devices, capable of creating a comprehensive variety of solution alternatives, can be developed.
Abstract:
Background: Temporal analysis of gene expression data has been limited to identifying genes whose expression varies with time and/or correlations between genes that have similar temporal profiles. Often, these methods do not consider the underlying network constraints that connect the genes. It is becoming increasingly evident that interactions change substantially with time. Thus far, there is no systematic method to relate the temporal changes in gene expression to the dynamics of the interactions between them. Information on interaction dynamics would open up possibilities for discovering new mechanisms of regulation, by providing valuable insight into identifying time-sensitive interactions and by permitting studies on the effect of a genetic perturbation. Results: We present NETGEM, a tractable model rooted in Markov dynamics, for analyzing the dynamics of the interactions between proteins based on the dynamics of the expression changes of the genes that encode them. The model treats the interaction strengths as random variables which are modulated by suitable priors. This approach is necessitated by the extremely small sample size of the datasets relative to the number of interactions. The model is amenable to a linear-time algorithm for efficient inference. Using temporal gene expression data, NETGEM was successful in identifying (i) temporal interactions and determining their strength, (ii) functional categories of the actively interacting partners, and (iii) dynamics of interactions in perturbed networks. Conclusions: NETGEM represents an optimal trade-off between model complexity and data requirements. It was able to deduce actively interacting genes and functional categories from temporal gene expression data, and it permits inference by incorporating the information available in perturbed networks. Given that the inputs to NETGEM are only the network and the temporal variation of the nodes, the algorithm promises to have widespread applications beyond biological systems. The source code for NETGEM is available from https://github.com/vjethava/NETGEM
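In that spirit, a toy per-edge decoder is sketched below: the interaction weight is a sticky Markov chain over a few discrete strengths, observed through the product of the two genes' expression changes, and the most likely weight trajectory is recovered in linear time by Viterbi decoding; the states, transition matrix, and Gaussian emission model are illustrative assumptions, not NETGEM's exact formulation.

```python
# Toy Markov decoding of one interaction's strength over time.
import numpy as np

states = np.array([-1.0, 0.0, 1.0])        # repressing / off / activating
Ptrans = np.array([[0.90, 0.08, 0.02],
                   [0.05, 0.90, 0.05],
                   [0.02, 0.08, 0.90]])     # sticky dynamics (prior, assumed)

def viterbi(xy, sigma=1.0):
    """xy[t] = x_i(t) * x_j(t); emission: xy[t] ~ N(state, sigma^2)."""
    T = len(xy)
    loge = -0.5 * ((xy[:, None] - states[None, :]) / sigma) ** 2
    logP = np.log(Ptrans)
    delta = np.full((T, 3), -np.inf)
    psi = np.zeros((T, 3), int)
    delta[0] = np.log(1 / 3) + loge[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logP   # [from, to]
        psi[t] = scores.argmax(0)               # best predecessor per state
        delta[t] = scores.max(0) + loge[t]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):               # backtrack
        path.append(int(psi[t][path[-1]]))
    return states[np.array(path[::-1])]

print(viterbi(np.array([0.9, 1.1, 1.0, 0.1, -0.2, -1.2, -0.9])))
```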
Abstract:
Estimation of creep and shrinkage is critical for computing the loss of prestress with time, which in turn is needed to assess leak tightness and the safety margins available in containment structures of nuclear power plants. Short-term creep and shrinkage experiments have been conducted using in-house test facilities developed specifically for the present research program on 35 and 45 MPa normal concrete and 25 MPa heavy density concrete. In the extensive creep program, cylinders are subjected to sustained load levels, typically for several days (until negligible strain increase with time is observed in the creep specimen), to provide total creep strain versus time curves for the two normal density concrete grades and the one heavy density concrete grade at different load levels, different ages at loading, and different relative humidities. Shrinkage is also being studied on prism specimens of the same mix grades. In the first instance, creep and shrinkage prediction models reported in the literature have been used to predict the creep and shrinkage levels in subsequent experimental data with acceptable accuracy. One part of the study comprises macro-scale short-term experiments and analytical model development to estimate time-dependent deformation under sustained long-term loads, accounting for the composite rheology through parameters such as characteristic strength, age of concrete at loading, relative humidity, temperature, mix proportion (cement : fine aggregate : coarse aggregate : water), and volume-to-surface ratio, together with the associated uncertainties in these variables. At the same time, it is widely believed that strength, early-age rheology, creep, and shrinkage are affected by material properties at the nano-scale that are not well established. In order to understand and improve cement and concrete properties, an investigation of the nanostructure of the composite and how it relates to the local mechanical properties is being undertaken. While the results of creep and shrinkage obtained at the macro-scale and their prediction through rheological modeling are satisfactory, the nano- and micro-indentation experimental and analytical studies are presently underway. Computational mechanics based models for creep and shrinkage in concrete must necessarily account for the numerous parameters that affect the short- and long-term response. A Kelvin-type model with several elements representing the influence of the various contributing factors is under development. The immediate short-term (elastic) deformation, the effects of relative humidity and temperature, volume-to-surface ratio, water-cement ratio and aggregate-cement ratio, load levels, and age of concrete at loading are the parameters accounted for in this model. Inputs to this model, such as the pore structure and the mechanical properties at the micro/nano scale, have been taken from scanning electron microscopy and micro/nano-indentation of the sample specimens.
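A minimal sketch of the kind of Kelvin-type model mentioned at the end is given below: creep compliance as an elastic term plus a chain of Kelvin elements whose moduli and retardation times would be calibrated to the test data; all numbers are illustrative assumptions, not fitted values from these experiments.

```python
# Kelvin-chain creep compliance: J(t) = 1/E0 + sum_i (1 - exp(-t/tau_i)) / E_i.
import numpy as np

E0 = 30e3                             # instantaneous modulus [MPa] (assumed)
E = np.array([90e3, 60e3, 40e3])      # Kelvin spring moduli [MPa] (assumed)
tau = np.array([1.0, 10.0, 100.0])    # retardation times [days] (assumed)

def compliance(t):
    """J(t) in 1/MPa: elastic term plus the sum of Kelvin elements."""
    t = np.atleast_1d(np.asarray(t, float))
    return 1.0 / E0 + ((1.0 - np.exp(-t[:, None] / tau)) / E).sum(axis=1)

sigma = 10.0                          # sustained stress [MPa]
t = np.array([1.0, 7.0, 28.0, 365.0]) # days under load
print("creep strain:", sigma * compliance(t))
```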
Abstract:
A robust aeroelastic optimization is performed to minimize helicopter vibration with uncertainties in the design variables. Polynomial response surfaces and space-filling experimental designs are used to generate a surrogate model of the aeroelastic analysis code. Aeroelastic simulations are performed at sample inputs generated by Latin hypercube sampling. Response values that do not satisfy the frequency constraints are eliminated from the data used for model fitting. This step increased the accuracy of the response surface models in the feasible design space. It is found that the response surface models are able to capture the robust optimal regions of the design space. The optimal designs show a reduction of 10 percent in the objective function comprising six vibratory hub loads and a 1.5 to 80 percent reduction in the individual vibratory forces and moments. This study demonstrates that second-order response surface models with space-filling designs can be a favorable choice for computationally intensive robust aeroelastic optimization.
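The surrogate-modeling step can be sketched as follows: Latin hypercube samples of the design variables, an expensive-analysis stub standing in for the aeroelastic code, and a fitted full-quadratic response surface searched cheaply for its optimum; everything here (dimension, sample size, stub objective) is an assumption for illustration.

```python
# Latin hypercube sampling + second-order polynomial response surface.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(1)
d, n = 3, 60
X = qmc.LatinHypercube(d=d, seed=1).random(n)          # samples in [0, 1]^d

def aeroelastic_stub(x):                               # placeholder objective
    return np.sum((x - 0.4) ** 2) + 0.05 * rng.normal()

y = np.array([aeroelastic_stub(x) for x in X])

def basis(X):
    """Full quadratic basis: 1, x_i, x_i * x_j (i <= j)."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(basis(X), y, rcond=None)    # least-squares fit

Xc = rng.random((20000, d))                            # cheap surrogate search
print("surrogate optimum near:", Xc[np.argmin(basis(Xc) @ beta)])
```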
Abstract:
The theory of phase formation is generalised for an arbitrary time dependence of the nucleation and growth rates. Some sources of this time dependence are time-dependent potential inputs, ohmic drop and the ingestion effect. Particular cases, such as the potentiostatic and, especially, the linear potential sweep case, are worked out for the two limiting cases of nucleation, namely instantaneous and progressive. The ohmic drop is discussed and a procedure for its correction is indicated. Recent results of Angerstein-Kozlowska, Conway and Klinger are critically investigated. Several earlier results are deduced as special cases. Evans' overlap formula is generalised for the time-dependent case, and the equivalence between Avrami's and Evans' equations is established.
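The generalized Avrami/Evans relation lends itself to a direct numerical sketch: for assumed time-dependent nucleation and growth rates, integrate the extended coverage and apply the overlap correction theta = 1 - exp(-theta_ext); the rates below are illustrative, and two-dimensional circular nuclei are assumed.

```python
# Generalized Avrami/Evans coverage for time-dependent nucleation and growth.
import numpy as np
from scipy.integrate import trapezoid

t = np.linspace(0.0, 5.0, 501)
I_rate = 2.0 * np.exp(-0.5 * t)        # nucleation rate I(t) (assumed)
v = 0.3 * (1.0 + 0.2 * np.sin(t))      # radial growth rate v(t) (assumed)

dt = t[1] - t[0]
# R(t) = running integral of v: radius grown since birth is R(t) - R(u)
R = np.concatenate([[0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * dt)])

theta_ext = np.empty_like(t)
for k in range(t.size):
    r = R[k] - R[:k + 1]               # radius at t_k of a nucleus born at u
    theta_ext[k] = trapezoid(I_rate[:k + 1] * np.pi * r**2, t[:k + 1])

theta = 1.0 - np.exp(-theta_ext)       # Evans'/Avrami overlap correction
print("covered fraction at t = 5:", theta[-1])
```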
Abstract:
Technology scaling has caused Negative Bias Temperature Instability (NBTI) to emerge as a major circuit reliability concern. Simultaneously, leakage power is becoming a greater fraction of the total power dissipated by logic circuits. As both NBTI and leakage power are highly dependent on the vectors applied at the circuit's inputs, they can be minimized by applying carefully chosen input vectors during periods when the circuit is in standby or idle mode. Unfortunately, the input vectors that minimize leakage power are not the ones that minimize NBTI degradation, so there is a need for a methodology to generate input vectors that minimize both. This paper proposes such a systematic methodology for generating input vectors that minimize leakage power under the constraint that NBTI degradation does not exceed a specified limit. These input vectors can be applied at the primary inputs of a circuit when it is in standby/idle mode; they ensure that the gates dissipate only a small amount of leakage power while allowing a large majority of the transistors on critical paths to be in the "recovery" phase of NBTI degradation. The advantage of this methodology is that allowing circuit designers to constrain NBTI degradation to below a specified limit enables tighter guardbanding, increasing performance. Our methodology guarantees that the generated input vector dissipates the least leakage power among all input vectors that satisfy the degradation constraint. We formulate the problem as a zero-one integer linear program and show that this formulation produces input vectors whose leakage power is within 1% of a minimum leakage vector selected by a search algorithm, while simultaneously reducing NBTI by about 5.75% of maximum circuit delay compared to the worst-case NBTI degradation. The paper also proposes two new algorithms for identifying the circuit paths that are affected most by NBTI degradation. The number of such paths identified by our algorithms is an order of magnitude smaller than with previously proposed heuristics.
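For a circuit small enough to enumerate, the vector-selection problem can be illustrated by brute force, as sketched below: pick the minimum-leakage input vector among those whose NBTI degradation stays below the limit; the leakage and degradation stubs are stand-ins, whereas the paper formulates the problem as a zero-one ILP over real gate-level models.

```python
# Brute-force toy of constrained standby-vector selection.
from itertools import product

N_INPUTS = 8
DELTA_MAX = 0.045          # allowed NBTI delay degradation (assumed units)

def leakage(v):            # stand-in per-vector leakage model
    return sum(0.8 + 0.4 * bit for bit in v) + 0.3 * v[0] * v[3]

def nbti_degradation(v):   # stand-in: inputs held at 0 stress PMOS devices
    stressed = sum(1 - bit for bit in v)
    return 0.01 * stressed

best = min((v for v in product((0, 1), repeat=N_INPUTS)
            if nbti_degradation(v) <= DELTA_MAX),
           key=leakage)
print("best standby vector:", best, "leakage:", leakage(best))
```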
INTACTE: An Interconnect Area, Delay, and Energy Estimation Tool for Microarchitectural Explorations
Abstract:
Prior work on modeling interconnects has focused on optimizing wire and repeater design to trade off energy and delay, and is largely based on low-level circuit parameters. Hence these models are hard to use directly for high-level microarchitectural trade-offs in the initial exploration phase of a design. In this paper, we propose INTACTE, a tool that architects can use to get reasonably accurate interconnect area, delay, and power estimates from a few architecture-level parameters, such as length, width (in number of bits), frequency, and latency, for a specified technology and voltage. The tool uses well-known models of interconnect delay and energy that take into account the wire pitch, repeater size, and repeater spacing over a range of voltages and technologies. It then solves the optimization problem of finding the lowest-energy interconnect design, in terms of the low-level circuit parameters, that meets the architectural constraints given as inputs. In addition, the tool provides the area, energy, and delay for a range of supply voltages and degrees of pipelining, which can be used for microarchitectural exploration of a chip. The delay and energy models used by the tool have been validated against low-level circuit simulations. We discuss several potential applications of the tool and present an example of optimizing interconnect design in the context of clustered VLIW architectures.
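The kind of optimization the tool performs can be sketched with the standard repeated-wire Elmore model: scan repeater count and size for a fixed wire length and keep the lowest-energy design meeting an architectural latency constraint; all device and wire constants below are assumed values, not INTACTE's calibrated models.

```python
# Scan (repeater count, repeater size) for a fixed wire; keep the lowest-energy
# design that meets the delay constraint. Constants are illustrative.
import numpy as np

R0, C0 = 10e3, 0.1e-15      # unit repeater resistance [ohm], capacitance [F]
Rw, Cw = 200e3, 200e-12     # wire resistance/capacitance per meter (assumed)
L, V = 5e-3, 1.0            # 5 mm wire, 1.0 V supply
T_max = 300e-12             # architectural delay constraint: 300 ps

best = None
for k in range(1, 40):                  # number of repeater stages
    for h in np.linspace(1, 200, 200):  # repeater sizing
        seg = L / k
        delay = k * (0.7 * (R0 / h) * (C0 * h + Cw * seg)
                     + (Rw * seg) * (0.4 * Cw * seg + 0.7 * C0 * h))
        energy = (k * h * C0 + Cw * L) * V**2   # switched-capacitance energy
        if delay <= T_max and (best is None or energy < best[0]):
            best = (energy, k, h, delay)

print("min-energy design (E, k, h, delay):", best)
```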