Abstract:
The paper describes the sensitivity of the simulated precipitation to changes in the convective relaxation time scale (TAU) of the Zhang and McFarlane (ZM) cumulus parameterization in the NCAR Community Atmosphere Model version 3 (CAM3). In the default configuration of the model, the prescribed value of TAU, a characteristic time scale with which convective available potential energy (CAPE) is removed at an exponential rate by convection, is assumed to be 1 h. However, some recent observational findings suggest that it is larger by around one order of magnitude. In order to explore the sensitivity of the model simulation to TAU, two model frameworks have been used, namely, aqua-planet and actual-planet configurations. Numerical integrations have been carried out using different values of TAU, and the effect on the simulated precipitation has been analyzed. The aqua-planet simulations reveal that when TAU increases, the rate of deep convective precipitation (DCP) decreases, and this leads to an accumulation of convective instability in the atmosphere. Consequently, the moisture content in the lower- and mid-troposphere increases. On the other hand, the shallow convective precipitation (SCP) and large-scale precipitation (LSP) intensify, predominantly the SCP, thus capping the accumulation of convective instability in the atmosphere. The total precipitation (TP) remains approximately constant, but the proportion of the three components changes significantly, which in turn alters the vertical distribution of total precipitation production. The vertical structure of moist heating changes from a vertically extended profile to a bottom-heavy profile with increasing TAU. The altitude of the maximum vertical velocity shifts from the upper troposphere to the lower troposphere. A similar response was seen in the actual-planet simulations. With an increase in TAU from 1 h to 8 h, there was a significant improvement in the simulation of the seasonal mean precipitation. The fraction of deep convective precipitation was in much better agreement with satellite observations.
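For reference, the closure described here relaxes CAPE toward zero over the prescribed time scale; a schematic form (standard for such closures, not quoted from the paper) is

\[ \frac{d\,\mathrm{CAPE}}{dt} = -\frac{\mathrm{CAPE}}{\tau}, \qquad \tau \equiv \mathrm{TAU}, \]

so that a larger TAU removes instability more slowly, consistent with the accumulation of convective instability reported above.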
Abstract:
A simple method is described to combine a modern function generator and a digital oscilloscope into a setup that can directly measure the amplitude frequency response of a system. This is achieved by synchronously triggering both instruments, with the function generator operated in the "Linear-Sweep" frequency mode while the oscilloscope is operated in the "Envelope" acquisition mode. Under these conditions, the acquired envelopes directly correspond to the (input and output signal) spectra, whose ratio yields the amplitude frequency response. The method is easy to configure, automatic, time-efficient, and does not require any external control, interface, or programming. This method is ideally suited to impart hands-on experience in sweep frequency response measurements, demonstrate resonance phenomena in transformer windings, explain the working principle of an impedance analyzer, practically exhibit properties of network functions, and so on. The proposed method is an inexpensive alternative to existing commercial equipment meant for this job and is also an effective teaching aid. Details of its implementation, along with some practical measurements on an actual transformer, are presented.
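In essence, if E_in(f) and E_out(f) denote the acquired input and output envelopes over the swept frequency range (notation assumed here, not the paper's), the amplitude response follows directly as

\[ |H(f)| = \frac{E_{\mathrm{out}}(f)}{E_{\mathrm{in}}(f)}. \]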
Abstract:
Random Access Scan, which addresses individual flip-flops in a design using a memory-array-like row and column decoder architecture, has recently attracted widespread attention due to its potential for lower test application time, test data volume, and test power dissipation when compared to traditional Serial Scan. This is because typically only a very limited number of random "care" bits in a test response need to be modified to create the next test vector. Unlike traditional scan, most flip-flops need not be updated. Test application efficiency can be further improved by organizing the access by word instead of by bit. In this paper we present a new decoder structure that takes advantage of basis vectors and linear algebra to further significantly optimize test application in RAS by performing the write operations on multiple bits consecutively. Simulations performed on benchmark circuits show an average 2-3 times speedup in test write time compared to conventional RAS.
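The underlying idea can be sketched as follows (notation assumed, not taken from the paper): if R is the captured test response and T the next test vector, the bits that must change are D = R XOR T, and a decoder built around a set of basis vectors b_1, ..., b_k can express D as a combination over GF(2),

\[ D = R \oplus T = \bigoplus_{i} c_i\, b_i, \qquad c_i \in \{0,1\}, \]

so that several differing bits can be written consecutively in fewer write operations.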
Abstract:
Nonlinear static and dynamic response analyses of a clamped, rectangular composite plate resting on a two-parameter elastic foundation have been studied using von Karman's relations. Incorporating the material damping, the governing coupled, nonlinear partial differential equations are obtained for the plate under step pressure pulse load excitation. These equations have been solved by a one-term solution and by applying Galerkin's technique to the deflection equation. This yields an ordinary nonlinear differential equation in time. The nonlinear static solution is obtained by neglecting the time-dependent variables. The nonlinear dynamic damped response is obtained by applying the ultraspherical polynomial approximation (UPA) technique. The influences of foundation modulus, shear modulus, orthotropy, etc. upon the nonlinear static and dynamic responses have been presented.
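A one-term Galerkin reduction of a von Karman plate problem of this kind typically produces a damped Duffing-type equation in the modal amplitude W(t); a schematic form (coefficients assumed, not taken from the paper) is

\[ \ddot{W} + 2\zeta\,\dot{W} + \alpha\,W + \beta\,W^{3} = F(t), \]

with the static solution recovered by dropping the time derivatives.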
Abstract:
We present the details of a formalism for calculating spatially varying zero-frequency response functions and equal-time correlation functions in models of magnetic and mixed-valence impurities in metals. The method is based on a combination of perturbative, thermodynamic scaling theory [H. R. Krishna-murthy and C. Jayaprakash, Phys. Rev. B 30, 2806 (1984)] and a nonperturbative technique such as the Wilson renormalization group. We illustrate the formalism for the spin-1/2 Kondo problem and present results for the conduction-spin-density-impurity-spin correlation function and the conduction-electron charge density near the impurity. We also discuss qualitative features that emerge from our calculations and discuss how they can be carried over to the case of realistic models for transition-metal impurities.
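For orientation, the equal-time correlation function referred to can be written (notation assumed here) as the expectation of the conduction-electron spin density at position r dotted into the impurity spin,

\[ \chi_{\mathrm{s}}(\mathbf{r}) = \langle\, \mathbf{s}_c(\mathbf{r}) \cdot \mathbf{S}_{\mathrm{imp}} \,\rangle . \]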
Abstract:
We study the transient response of a colloidal bead which is released from different heights and allowed to relax in the potential well of an optical trap. Depending on the initial potential energy, the system's time evolution shows dramatically different behaviors. Starting from the short-time reversible to long-time irreversible transition, a stationary reversible state with zero net dissipation can be achieved as the release point energy is decreased. If the system starts with even lower energy, it progressively extracts useful work from thermal noise and exhibits an anomalous irreversibility. In addition, we have verified the Transient Fluctuation Theorem and the Integrated Transient Fluctuation Theorem even for the non-ergodic descriptions of our system. Copyright (C) EPLA, 2011
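For context, the transient fluctuation theorem verified here relates the probabilities of observing positive and negative entropy production over a time interval t; in its standard form (with Sigma_t the dimensionless entropy production; statement assumed, not quoted from the paper),

\[ \frac{P(\Sigma_t = A)}{P(\Sigma_t = -A)} = e^{A}. \]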
Abstract:
A trajectory optimization approach is applied to the design of a sequence of open-die forging operations in order to control the transient thermal response of a large titanium alloy billet. The amount of time the billet is soaked in the furnace prior to each successive forging operation is optimized to minimize the total process time while simultaneously satisfying constraints on the maximum and minimum values of the billet's temperature distribution to avoid microstructural defects during forging. The results indicate that a "differential" heating profile is the most effective at meeting these design goals.
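Schematically, and with notation assumed here rather than taken from the paper, the problem can be posed with the soak times t_1, ..., t_n as design variables:

\[ \min_{t_1,\dots,t_n} \; \sum_{i=1}^{n} t_i \quad \text{subject to} \quad T_{\min} \le T(\mathbf{x}, t) \le T_{\max} \ \text{throughout the billet}, \]

where T(x, t) is the billet's transient temperature distribution.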
Abstract:
An aeroelastic analysis based on finite elements in space and time is used to model the helicopter rotor in forward flight. The rotor blade is represented as an elastic cantilever beam undergoing flap and lag bending, elastic torsion, and axial deformations. The objective of the improved design is to reduce vibratory loads at the rotor hub that are the main source of helicopter vibration. Constraints are imposed on aeroelastic stability, and move limits are imposed on the blade elastic stiffness design variables. Using the aeroelastic analysis, response surface approximations are constructed for the objective function (vibratory hub loads). It is found that second-order polynomial response surfaces constructed using the central composite design of the theory of design of experiments adequately represent the aeroelastic model in the vicinity of the baseline design. Optimization results show a reduction in the objective function of about 30 per cent. A key accomplishment of this paper is the decoupling of the analysis problem and the optimization problem using response surface methods, which should encourage the use of optimization methods by the helicopter industry. (C) 2002 Elsevier Science Ltd. All rights reserved.
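A second-order polynomial response surface of the kind described takes the standard form (symbols assumed here)

\[ \hat{y}(\mathbf{x}) = \beta_0 + \sum_{i} \beta_i x_i + \sum_{i} \beta_{ii} x_i^{2} + \sum_{i<j} \beta_{ij} x_i x_j, \]

where the x_i are the blade stiffness design variables and the coefficients are fitted to aeroelastic analyses run at the central composite design points.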
Abstract:
Estimation of creep and shrinkage is critical in order to compute the loss of prestress with time, and hence the leak tightness and safety margins available in containment structures of nuclear power plants. Short-term creep and shrinkage experiments have been conducted using in-house test facilities developed specifically for the present research program on 35 and 45 MPa normal concrete and 25 MPa heavy density concrete. The extensive experimental program for creep subjects cylinders to sustained levels of load, typically for several days (until negligible strain increase with time is observed in the creep specimen), to provide total creep strain versus time curves for the two normal density concrete grades and the one heavy density concrete grade at different load levels, different ages at loading, and different relative humidities. Shrinkage studies on prism specimens of concrete of the same mix grades are also being carried out. In the first instance, creep and shrinkage prediction models reported in the literature have been used to predict the creep and shrinkage levels in the subsequent experimental data with acceptable accuracy. While macro-scale short-term experiments and analytical model development, to estimate time-dependent deformation under sustained loads over the long term, accounting for the composite rheology through the influence of parameters such as the characteristic strength, age of concrete at loading, relative humidity, temperature, mix proportion (cement : fine aggregate : coarse aggregate : water) and volume-to-surface ratio, and the associated uncertainties in these variables, form one part of the study, it is widely believed that strength, early-age rheology, creep and shrinkage are affected by the material properties at the nano-scale, which are not well established. In order to understand and improve cement and concrete properties, an investigation of the nanostructure of the composite and how it relates to the local mechanical properties is being undertaken. While the results of creep and shrinkage obtained at the macro-scale and their predictions through rheological modeling are satisfactory, the nano- and micro-indentation experimental and analytical studies are presently underway. Computational mechanics based models for creep and shrinkage in concrete must necessarily account for numerous parameters that impact their short and long term response. A Kelvin-type model with several elements representing the influence of the various factors that impact the behaviour is under development. The immediate short-term deformation (elastic response), the effects of relative humidity and temperature, volume-to-surface ratio, water-cement ratio and aggregate-cement ratio, load levels, and age of concrete at loading are parameters accounted for in this model. Inputs to this model, such as the pore structure and mechanical properties at the micro/nano scale, have been taken from scanning electron microscopy and micro/nano-indentation of the sample specimens.
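A Kelvin-type model of the kind under development represents creep as a chain of spring-dashpot (Kelvin) elements; a standard form of the resulting creep compliance (parameters assumed, not taken from the study) is

\[ J(t) = \frac{1}{E_0} + \sum_{i=1}^{N} \frac{1}{E_i}\left(1 - e^{-t/\tau_i}\right), \qquad \tau_i = \frac{\eta_i}{E_i}, \]

where each element's stiffness E_i and retardation time tau_i can be tied to factors such as relative humidity, load level, and age at loading.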
Abstract:
A comparative study of the strain response and mechanical properties of rammed earth prisms has been made using Fiber Bragg Grating (FBG) sensors (optical) and a clip-on extensometer (electro-mechanical). The aim of this study is to address the merits and demerits of the traditional extensometer vis-à-vis the FBG sensor; a uni-axial compression test has been performed on a rammed earth prism to validate its structural properties from the stress-strain curves obtained by the two different methods of measurement. An array of FBG sensors on a single fiber with varying Bragg wavelengths (λB) has been used to spatially resolve the strains along the height of the specimen. It is interesting to note from the obtained stress-strain curves that the initial tangent modulus obtained using the FBG sensor is lower compared to that obtained using the clip-on extensometer. The results also indicate that the strains measured by both the FBG and extensometer sensors follow the same trend, and both sensors register the maximum strain value at the same time.
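For reference, the strain sensed by an FBG follows from the shift in its Bragg wavelength; at constant temperature the standard relation (with p_e the effective photo-elastic coefficient; form assumed, not quoted from the paper) is

\[ \frac{\Delta\lambda_B}{\lambda_B} = (1 - p_e)\,\varepsilon . \]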
Abstract:
Combustion instability events in lean premixed combustion systems can cause spatio-temporal variations in the unburnt mixture fuel/air ratio. These provide a driving mechanism for heat-release oscillations when they interact with the flame. Several Reduced Order Modelling (ROM) approaches to predict the characteristics of these oscillations have been developed in the past. The present paper compares results for flame describing function characteristics determined from a ROM approach based on the level-set method with corresponding results from detailed, fully compressible reacting flow computations for the same two-dimensional slot flame configuration. The comparison between these results is seen to be sensitive to small geometric differences in the shape of the nominally steady flame used in the two computations. When the results are corrected to account for these differences, describing function magnitudes are well predicted only for frequencies less than a lower cutoff and greater than an upper cutoff, respectively, owing to amplification of flame surface wrinkling by the convective Darrieus-Landau (DL) instability at the intermediate frequencies. However, good agreement in describing function phase predictions is seen, as the ROM captures the transit time of wrinkles through the flame correctly. Also, good agreement is seen for both magnitude and phase of the flame response, for large forcing amplitudes, at frequencies where the DL instability has a minimal influence. Thus, the present ROM can predict the flame response as long as the DL instability, caused by gas expansion at the flame front, does not significantly alter flame front perturbation amplitudes as they traverse the flame. (C) 2012 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
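The flame describing function compared here is conventionally defined as the normalized heat-release response to a finite-amplitude velocity forcing of amplitude A and angular frequency omega (standard definition; symbols assumed):

\[ F(\omega, A) = \frac{\hat{Q}'(\omega, A)/\bar{Q}}{\hat{u}'(\omega)/\bar{u}} . \]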
Abstract:
Our everyday visual experience frequently involves searching for objects in clutter. Why are some searches easy and others hard? It is generally believed that the time taken to find a target increases as it becomes more similar to its surrounding distractors. Here, I show that while this is qualitatively true, the exact relationship is in fact not linear. In a simple search experiment, when subjects searched for a bar differing in orientation from its distractors, search time was inversely proportional to the angular difference in orientation. Thus, rather than taking search reaction time (RT) to be a measure of target-distractor similarity, we can literally turn search time on its head (i.e. take its reciprocal 1/RT) to obtain a measure of search dissimilarity that varies linearly over a large range of target-distractor differences. I show that this dissimilarity measure has the properties of a distance metric, and report two interesting insights that come from this measure: First, for a large number of searches, search asymmetries are relatively rare, and when they do occur, they differ by a fixed distance. Second, search distances can be used to elucidate object representations that underlie search - for example, these representations are roughly invariant to three-dimensional view. Finally, search distance has a straightforward interpretation in the context of accumulator models of search, where it is proportional to the discriminative signal that is integrated to produce a response. This is consistent with recent studies that have linked this distance to neuronal discriminability in visual cortex. Thus, while search time remains the more direct measure of visual search, its reciprocal also has the potential for interesting and novel insights. (C) 2012 Elsevier Ltd. All rights reserved.
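In the orientation search reported here, the relation can be summarized schematically as

\[ d(\text{target}, \text{distractor}) \equiv \frac{1}{\mathrm{RT}} \propto \Delta\theta, \]

where Delta-theta is the target-distractor difference in orientation and d is the proposed search dissimilarity.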
Abstract:
Urea-based molecular constructs are shown for the first time to be nonlinear optically (NLO) active in solution. We demonstrate self-assembly-triggered large amplification and specific anion-recognition-driven attenuation of the NLO activity. This orthogonal modulation, along with an excellent nonlinearity-transparency trade-off, makes them attractive NLO probes for studies related to weak self-assembly and anion transportation by second harmonic microscopy.
Abstract:
The problem of updating the reliability of instrumented structures based on measured response under random dynamic loading is considered. A solution strategy within the framework of a Monte Carlo simulation based dynamic state estimation method and Girsanov's transformation for variance reduction is developed. For linear Gaussian state space models, the solution is developed based on the continuous version of the Kalman filter, while for non-linear and/or non-Gaussian state space models, bootstrap particle filters are adopted. The controls to implement the Girsanov transformation are developed by solving a constrained non-linear optimization problem. Numerical illustrations include studies on a multi-degree-of-freedom linear system and non-linear systems with geometric and/or hereditary non-linearities and non-stationary random excitations.
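The dynamic state estimation setting referred to can be sketched with a generic process-measurement model (a standard form; symbols assumed, not taken from the paper):

\[ \dot{X}(t) = f\big(X(t), t\big) + W(t), \qquad Y_k = h\big(X(t_k)\big) + V_k, \]

where X is the structural state, W a process noise accounting for the random loading, and Y_k the noisy measurements assimilated by the Kalman or particle filter.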
Abstract:
We present here an experimental set-up, developed for the first time in India, for the determination of the mixing ratio and carbon isotopic ratio of air-CO2. The set-up includes traps for collection and extraction of CO2 from air samples using cryogenic procedures, followed by the measurement of the CO2 mixing ratio using an MKS Baratron gauge and analysis of isotopic ratios using the dual inlet peripheral of a high sensitivity isotope ratio mass spectrometer (IRMS) MAT 253. The internal reproducibility (precision) for the carbon isotopic ratio measurement, established based on repeat analyses of CO2, is ±0.03 parts per thousand. The set-up is calibrated with international carbonate and air-CO2 standards. An in-house air-CO2 mixture, 'OASIS AIRMIX', is prepared by mixing CO2 from a high purity cylinder with O2 and N2, and an aliquot of this mixture is routinely analyzed together with the air samples. The external reproducibilities for the measurement of the CO2 mixing ratio and the carbon isotopic ratio are ±7 μmol/mol (n = 169) and ±0.05 parts per thousand (n = 169), respectively, based on the mean of the difference between two aliquots of the reference air mixture analyzed during daily operation carried out during November 2009-December 2011. The correction due to the isobaric interference of N2O on air-CO2 samples is determined separately by analyzing mixtures of CO2 (of known isotopic composition) and N2O in varying proportions. A +0.2 parts per thousand correction in the delta C-13 value for a N2O concentration of 329 ppb is determined. As an application, we present results from an experiment conducted during the solar eclipse of 2010. The isotopic ratio of CO2 and the carbon dioxide mixing ratio in the air samples collected during the event are different from those in neighbouring samples, suggesting the role of atmospheric inversion in trapping CO2 emitted from the urban atmosphere during the eclipse.
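For reference, the carbon isotopic ratio mentioned here is conventionally reported in delta notation relative to a reference standard (VPDB is the usual choice; definition standard, not quoted from the paper):

\[ \delta^{13}\mathrm{C} = \left( \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{reference}}} - 1 \right) \times 1000 \ \text{per mil}. \]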