809 results for time-varying delays
Abstract:
The interaction between unsteady heat release and acoustic pressure oscillations in gas turbines results in self-excited combustion oscillations which can potentially be strong enough to cause significant structural damage to the combustor. Correctly predicting the interaction of these processes, and anticipating the onset of these oscillations, can be difficult. In recent years much research effort has focused on the response of premixed flames to velocity and equivalence ratio perturbations. In this paper, we develop a flame model based on the so-called G-equation, which captures the kinematic evolution of the flame surfaces under the assumption of axisymmetry and neglecting vorticity and compressibility. This builds on previous work by Dowling [1], Schuller et al. [2], Cho & Lieuwen [3], among many others, and extends the model to a realistic geometry, with two intersecting flame surfaces within a non-uniform velocity field. The inputs to the model are the free-stream velocity perturbations and the associated equivalence ratio perturbations. The model also proposes a time-delay calculation wherein the time delay for the fuel convection varies both spatially and temporally. The flame response from this model was compared with experiments conducted by Balachandran [4, 5], and found to show promising agreement with the experimental forced cases. To address the primary industrial interest of predicting self-excited limit cycles, the model has then been linked with an acoustic network model to simulate the closed-loop interaction between the combustion and acoustic processes. This has been done both linearly and nonlinearly. The nonlinear analysis is achieved by applying a describing function analysis in the frequency domain to predict the limit cycle, and also through a time domain simulation. In the latter case, the acoustic field is assumed to remain linear, with the nonlinearity in the response of the combustion to flow and equivalence ratio perturbations. A transfer function from unsteady heat release to unsteady pressure is obtained from a linear acoustic network model, and the corresponding Green function is used to provide the input to the flame model as it evolves in the time domain. The predicted unstable frequency and limit cycle are in good agreement with experiment, demonstrating the potential of this approach to predict instabilities, and as a test bench for developing control strategies. Copyright © 2011 by ASME.
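For reference, the kinematic flame evolution referred to here is commonly written as a level-set transport equation. The form below is the standard textbook G-equation, not necessarily the exact variant used in the paper.

```latex
% Standard level-set (G-equation) description of a premixed flame front:
% the surface G(\mathbf{x},t) = 0 marks the flame, advected by the flow and
% propagating normal to itself at the laminar flame speed s_L.
\[
  \frac{\partial G}{\partial t} + \mathbf{u}\cdot\nabla G = s_L\,\lvert\nabla G\rvert ,
\]
% with \mathbf{u} the local flow velocity and s_L = s_L(\phi) a function of the
% local equivalence ratio \phi, which is how equivalence-ratio perturbations
% enter the flame kinematics.
```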
Abstract:
After committing to an action, a decision-maker can change their mind to revise the action. Such changes of mind can even occur when the stream of information that led to the action is curtailed at movement onset. This is explained by the time delays in sensory processing and motor planning, which lead to a component at the end of the sensory stream that can only be processed after initiation. Such post-initiation processing can explain the pattern of changes of mind by asserting an accumulation of additional evidence to a criterion level, termed the change-of-mind bound. Here we test the hypothesis that the physical effort associated with the movement required to change one's mind affects the level of the change-of-mind bound and the time for post-initiation deliberation. We varied the effort required to change from one choice target to another in a reaching movement by varying the geometry of the choice targets or by applying a force field between the targets. We show that there is a reduction in the frequency of changes of mind when the separation of the choice targets would require a larger excursion of the hand from the initial to the opposite choice. The reduction is best explained by an increase in the evidence required for changes of mind and a reduced time period of integration after the initial decision. Thus the criterion to revise an initial choice is sensitive to energetic costs.
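As a purely illustrative sketch of the accumulation-to-bound account described above (the evidence model, parameter values, and function names are assumptions, not taken from the study), the following simulation shows how raising the change-of-mind bound or shortening the post-initiation integration window reduces the frequency of changes of mind.

```python
import numpy as np

rng = np.random.default_rng(0)

def change_of_mind_rate(bound, window_ms, drift=-0.01, noise=0.35,
                        dt_ms=1.0, n_trials=20_000):
    """Fraction of trials in which post-initiation evidence in favour of the
    opposite choice reaches the change-of-mind bound within the deliberation
    window.  drift < 0 means the residual evidence, on average, still supports
    the initial choice, so bound crossings are noise-driven."""
    n_steps = int(window_ms / dt_ms)
    increments = (drift * dt_ms
                  + noise * np.sqrt(dt_ms) * rng.standard_normal((n_trials, n_steps)))
    evidence = np.cumsum(increments, axis=1)
    return float(np.mean(evidence.max(axis=1) >= bound))

# Higher effort, modelled here as a higher bound and a shorter integration
# window, produces fewer changes of mind.
print(change_of_mind_rate(bound=3.0, window_ms=300))   # lower-effort condition
print(change_of_mind_rate(bound=5.0, window_ms=200))   # higher-effort condition
```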
Abstract:
The vehicle navigation problem studied in Bell (2009) is revisited and a time-dependent reverse Hyperstar algorithm is presented. This minimises the expected time of arrival at the destination, and all intermediate nodes, where expectation is based on a pessimistic (or risk-averse) view of unknown link delays. This may also be regarded as a hyperpath version of the Chabini and Lan (2002) algorithm, which itself is a time-dependent A* algorithm. Links are assigned undelayed travel times and maximum delays, both of which are potentially functions of the time of arrival at the respective link. The driver seeks probabilities for link use that minimise his/her maximum exposure to delay on the approach to each node, leading to the determination of the pessimistic expected time of arrival. Since the context considered is vehicle navigation where the driver is not making repeated trips, the probability of link use may be interpreted as a measure of link attractiveness, so a link with a zero probability of use is unattractive while a link with a probability of use equal to one will have no attractive alternatives. A solution algorithm is presented and proven to solve the problem provided the node potentials are feasible and a FIFO condition applies for undelayed link travel times. The paper concludes with a numerical example.
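The full algorithm assigns link-use probabilities and works backwards over time-dependent hyperpaths; the sketch below is only a much-simplified, single-path analogue (forward label-setting with each link priced pessimistically at its undelayed time plus its maximum delay, both evaluated at the arrival time). It illustrates the FIFO time-dependent setting rather than the Hyperstar procedure itself, and the function names and graph encoding are assumptions.

```python
import heapq

def pessimistic_arrival(graph, origin, destination, depart):
    """graph[u] = list of (v, undelayed, maxdelay), where `undelayed` and
    `maxdelay` are callables t -> travel time / worst-case delay for a vehicle
    entering the link at time t (assumed FIFO).  Returns the pessimistic
    (worst-case) earliest arrival time at `destination`."""
    best = {origin: depart}
    heap = [(depart, origin)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == destination:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale queue entry
        for v, undelayed, maxdelay in graph.get(u, []):
            t_arr = t + undelayed(t) + maxdelay(t)  # price the link pessimistically
            if t_arr < best.get(v, float("inf")):
                best[v] = t_arr
                heapq.heappush(heap, (t_arr, v))
    return float("inf")

# Tiny example: constant undelayed times, one time-varying maximum delay.
g = {
    "A": [("B", lambda t: 5, lambda t: 2), ("C", lambda t: 3, lambda t: 6)],
    "B": [("D", lambda t: 4, lambda t: 1)],
    "C": [("D", lambda t: 4, lambda t: 0 if t < 10 else 5)],
}
print(pessimistic_arrival(g, "A", "D", depart=0))  # -> 12, via A-B-D
```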
Abstract:
In this paper, a Lyapunov function candidate is introduced for multivariable systems with inner delays, without assuming a priori stability for the nondelayed subsystem. By using this Lyapunov function, a controller is deduced. Such a controller utilizes an input-output description of the original system, a circumstance that facilitates practical applications of the proposed approach.
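For context, a hedged illustration of the general family of constructions involved: for a linear system with an internal (state) delay, a standard Lyapunov–Krasovskii candidate and the delay-independent stability condition it yields are shown below. The specific functional and controller derived in the paper may differ.

```latex
% Standard Lyapunov-Krasovskii functional for \dot{x}(t) = A x(t) + A_d x(t-h):
\[
  V(x_t) = x(t)^{\top} P\, x(t) + \int_{t-h}^{t} x(s)^{\top} Q\, x(s)\, \mathrm{d}s,
  \qquad P \succ 0,\; Q \succ 0 .
\]
% Requiring \dot{V} < 0 along trajectories gives the (delay-independent) LMI
\[
  \begin{bmatrix} A^{\top} P + P A + Q & P A_d \\ A_d^{\top} P & -Q \end{bmatrix} \prec 0 .
\]
```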
Abstract:
In this paper, we propose a new class of Concurrency Control Algorithms that is especially suited for real-time database applications. Our approach relies on the use of (potentially) redundant computations to ensure that serializable schedules are found and executed as early as possible, thus increasing the chances of a timely commitment of transactions with strict timing constraints. Due to its nature, we term our concurrency control algorithms Speculative. The aforementioned description encompasses many algorithms that we collectively call Speculative Concurrency Control (SCC) algorithms. SCC algorithms combine the advantages of both Pessimistic and Optimistic Concurrency Control (PCC and OCC) algorithms, while avoiding their disadvantages. On the one hand, SCC resembles PCC in that conflicts are detected as early as possible, thus making alternative schedules available in a timely fashion in case they are needed. On the other hand, SCC resembles OCC in that it allows conflicting transactions to proceed concurrently, thus avoiding unnecessary delays that may jeopardize their timely commitment.
Abstract:
A novel approach for real-time skin segmentation in video sequences is described. The approach enables reliable skin segmentation despite wide variation in illumination during tracking. An explicit second-order Markov model is used to predict evolution of the skin color (HSV) histogram over time. Histograms are dynamically updated based on feedback from the current segmentation and based on predictions of the Markov model. The evolution of the skin color distribution at each frame is parameterized by translation, scaling and rotation in color space. Consequent changes in geometric parameterization of the distribution are propagated by warping and re-sampling the histogram. The parameters of the discrete-time dynamic Markov model are estimated using Maximum Likelihood Estimation, and also evolve over time. Quantitative evaluation of the method was conducted on labeled ground-truth video sequences taken from popular movies.
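A minimal sketch of the histogram-based part of such a pipeline is given below, using OpenCV back-projection; it omits the paper's second-order Markov prediction and the warping/re-sampling of the histogram, and the bin counts, thresholds, and blending factor are assumptions.

```python
import cv2
import numpy as np

BINS = (32, 32)              # hue and saturation bins (assumed)
RANGES = [0, 180, 0, 256]    # OpenCV hue/saturation value ranges

def skin_histogram(frame_bgr, mask):
    """Hue-saturation histogram of the pixels currently labelled as skin."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], mask, list(BINS), RANGES)
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def segment(frame_bgr, hist, thresh=40):
    """Back-project the current skin histogram onto the frame and threshold it."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0, 1], hist, RANGES, scale=1)
    _, mask = cv2.threshold(back, thresh, 255, cv2.THRESH_BINARY)
    return mask

def update_histogram(hist, frame_bgr, mask, alpha=0.1):
    """Dynamic update: blend the previous (or predicted) histogram with the
    histogram of the pixels the current segmentation labels as skin."""
    new_hist = skin_histogram(frame_bgr, mask)
    return (1.0 - alpha) * hist + alpha * new_hist
```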
Abstract:
SomeCast is a novel paradigm for the reliable multicast of real-time data to a large set of receivers over the Internet. SomeCast is receiver-initiated and thus scalable in the number of receivers, the diverse characteristics of paths between senders and receivers (e.g. maximum bandwidth and round-trip-time), and the dynamic conditions of such paths (e.g. congestion-induced delays and losses). SomeCast enables receivers to dynamically adjust the rate at which they receive multicast information to enable the satisfaction of real-time QoS constraints (e.g. rate, deadlines, or jitter). This is done by enabling a receiver to join SOME number of concurrent multiCAST sessions, whereby each session delivers a portion of an encoding of the real-time data. By adjusting the number of such sessions dynamically, client-specific QoS constraints can be met independently. The SomeCast paradigm can be thought of as a generalization of the AnyCast (e.g. Dynamic Server Selection) and ManyCast (e.g. Digital Fountain) paradigms, which have been proposed in the literature to address issues of scalability of UniCast and MultiCast environments, respectively. In this paper we overview the SomeCast paradigm, describe an instance of a SomeCast protocol, and present simulation results that quantify the significant advantages gained from adopting such a protocol for the reliable multicast of data to a diverse set of receivers subject to real-time QoS constraints.
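As a rough sketch of the receiver-side adaptation described above (the session callbacks and rate parameters below are hypothetical, not taken from the protocol description), a receiver can periodically compare the target rate implied by its QoS constraints with the goodput it observes per session and join or leave sessions accordingly.

```python
import math

def sessions_needed(target_rate_kbps, per_session_rate_kbps):
    """Number of concurrent multicast sessions needed to meet the target rate,
    assuming each session delivers an independent portion of the encoding."""
    return math.ceil(target_rate_kbps / per_session_rate_kbps)

def adapt(current_sessions, target_rate_kbps, observed_goodput_kbps,
          join_session, leave_session):
    """One adaptation step: join or leave sessions so that aggregate goodput
    tracks the receiver's QoS target.  join_session/leave_session are
    hypothetical callbacks into the multicast layer."""
    per_session = observed_goodput_kbps / max(current_sessions, 1)
    wanted = sessions_needed(target_rate_kbps, max(per_session, 1e-6))
    while current_sessions < wanted:
        join_session()
        current_sessions += 1
    while current_sessions > wanted:
        leave_session()
        current_sessions -= 1
    return current_sessions
```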
Abstract:
A human-computer interface (HCI) system designed for use by people with severe disabilities is presented. People that are severely paralyzed or afflicted with diseases such as ALS (Lou Gehrig's disease) or multiple sclerosis are unable to move or control any parts of their bodies except for their eyes. The system presented here detects the user's eye blinks and analyzes the pattern and duration of the blinks, using them to provide input to the computer in the form of a mouse click. After the automatic initialization of the system occurs from the processing of the user's involuntary eye blinks in the first few seconds of use, the eye is tracked in real time using correlation with an online template. If the user's depth changes significantly or rapid head movement occurs, the system is automatically reinitialized. There are no lighting requirements nor offline templates needed for the proper functioning of the system. The system works with inexpensive USB cameras and runs at a frame rate of 30 frames per second. Extensive experiments were conducted to determine both the system's accuracy in classifying voluntary and involuntary blinks, as well as the system's fitness in varying environment conditions, such as alternative camera placements and different lighting conditions. These experiments on eight test subjects yielded an overall detection accuracy of 95.3%.
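A minimal sketch of the correlation-based tracking and blink-detection core is shown below. OpenCV's normalized cross-correlation stands in for the paper's online correlation tracker, and the score and duration thresholds are assumptions; the idea is that the match score against an open-eye template drops while the eye is closed, and the length of that drop separates voluntary from involuntary blinks.

```python
import cv2

MATCH_DROP = 0.55        # correlation below this => eye considered closed (assumed)
VOLUNTARY_FRAMES = 8     # closed for >= 8 frames (~0.27 s at 30 fps) => voluntary (assumed)

def track_eye(gray_frame, open_eye_template):
    """Return the best normalized-correlation score and location of the eye template."""
    result = cv2.matchTemplate(gray_frame, open_eye_template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_val, max_loc

def classify_blinks(scores):
    """Turn a per-frame correlation-score sequence into blink events,
    labelling each as voluntary (long) or involuntary (short)."""
    events, run = [], 0
    for s in scores:
        if s < MATCH_DROP:
            run += 1
        elif run:
            events.append("voluntary" if run >= VOLUNTARY_FRAMES else "involuntary")
            run = 0
    return events
```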
Abstract:
The hippocampus participates in multiple functions, including spatial navigation, adaptive timing, and declarative (notably, episodic) memory. How does it carry out these particular functions? The present article proposes that hippocampal spatial and temporal processing are carried out by parallel circuits within entorhinal cortex, dentate gyrus, and CA3 that are variations of the same circuit design. In particular, interactions between these brain regions transform fine spatial and temporal scales into population codes that are capable of representing the much larger spatial and temporal scales that are needed to control adaptive behaviors. Previous models of adaptively timed learning propose how a spectrum of cells tuned to brief but different delays are combined and modulated by learning to create a population code for controlling goal-oriented behaviors that span hundreds of milliseconds or even seconds. Here it is proposed how projections from entorhinal grid cells can undergo a similar learning process to create hippocampal place cells that can cover the spaces of many meters that are needed to control navigational behaviors. The suggested homology between spatial and temporal processing may clarify how spatial and temporal information may be integrated into an episodic memory.
Abstract:
In order to determine the size-resolved chemical composition of single particles in real time, an ATOFMS was deployed at urban background sites in Paris and Barcelona during the MEGAPOLI and SAPUSS monitoring campaigns, respectively. The particle types detected during MEGAPOLI included several carbonaceous species, metal-containing types and sea-salt. Elemental carbon particle types were highly abundant, with 86% due to fossil fuel combustion and 14% attributed to biomass burning. Furthermore, 79% of the EC was apportioned to local emissions and 21% to continental transport. The carbonaceous particle types were compared with quantitative measurements from other instruments, and while direct correlations using particle counts were poor, scaling of the ATOFMS counts greatly improved the relationship. During SAPUSS, carbonaceous species, sea-salt, dust, vegetative debris and various metal-containing particle types were identified. Throughout the campaign the site was influenced by air masses altering the composition of particles detected. During North African air masses the city was heavily influenced by Saharan dust. A regional stagnation was also observed, leading to a large increase in carbonaceous particle counts. While the ATOFMS provides a list of particle types present during the measurement campaigns, the data presented are not directly quantitative. The quantitative response of the ATOFMS to metals was examined by comparing the ion signals within particle mass spectra to hourly mass concentrations of Na, K, Ca, Ti, V, Cr, Mn, Fe, Zn and Pb. The ATOFMS was found to have varying correlations with these metals depending on sampling issues such as matrix effects. The strongest correlations were observed for Al, Fe, Zn, Mn and Pb. Overall, the results of this work highlight the excellent ability of the ATOFMS in providing composition and mixing state information on atmospheric particles at high time resolution. However, they also show its limitations in delivering quantitative information directly.
Abstract:
The three-dimensional, time-dependent electromagnetic field arising from the precession of the arc centre in a vacuum arc remelting furnace is shown (in a numerical simulation) to affect the fluid flow and heat transfer conditions near the solidification front in the upper part of the ingot.
Abstract:
The characterization of thermocouple sensors for temperature measurement in varying-flow environments is a challenging problem. Recently, the authors introduced novel difference-equation-based algorithms that allow in situ characterization of temperature measurement probes consisting of two-thermocouple sensors with differing time constants. In particular, a linear least squares (LS) lambda formulation of the characterization problem, which yields unbiased estimates when identified using generalized total LS, was introduced. These algorithms assume that time constants do not change during operation and are, therefore, appropriate for temperature measurement in homogeneous constant-velocity liquid or gas flows. This paper develops an alternative β-formulation of the characterization problem that has the major advantage of allowing exploitation of a priori knowledge of the ratio of the sensor time constants, thereby facilitating the implementation of computationally efficient algorithms that are less sensitive to measurement noise. A number of variants of the β-formulation are developed, and appropriate unbiased estimators are identified. Monte Carlo simulation results are used to support the analysis.
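For context, two-thermocouple characterization methods of this kind start from the usual first-order model of each sensor; a hedged summary of that starting point is given below (the exact discrete-time λ- and β-parameterisations in the paper are not reproduced here).

```latex
% First-order model of each thermocouple (time constants \tau_1, \tau_2,
% common gas temperature T_g):
\[
  \tau_i \frac{\mathrm{d}T_i}{\mathrm{d}t} + T_i(t) = T_g(t), \qquad i = 1, 2 .
\]
% Eliminating the unmeasured T_g couples the two sensor outputs,
\[
  T_1(t) + \tau_1 \dot{T}_1(t) = T_2(t) + \tau_2 \dot{T}_2(t),
\]
% so if the ratio \alpha = \tau_1/\tau_2 is known a priori, only a single time
% constant remains to be estimated, which is the kind of prior knowledge the
% \beta-formulation is designed to exploit.
```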
Abstract:
We have conducted a series of radiocarbon measurements on decadal samples of dendrochronologically dated wood from both hemispheres, spanning 1000 years (McCormac et al. 1998; Hogg et al. this issue). Using the data presented in Hogg et al., we show that during the period AD 950-1850 the 14C offset between the hemispheres is not constant, but varies periodically (~130 yr periodicity) with amplitudes varying between 1 and 10‰ (i.e. 8-80 yr), with a consequent effect on the 14C calibration of material from the Southern Hemisphere. A large increase in the offset occurs between AD 1245 and 1355. In this paper, we present a Southern Hemisphere high-precision calibration data set (SHCal02) that comprises measurements from New Zealand, Chile, and South Africa. This data, and a new value of 41 ± 14 yr for correction of the IntCal98 data for the period outside the range given here, is proposed for use in calibrating Southern Hemisphere 14C dates.
Abstract:
Few-cycle laser pulses are used to "pump and probe" image the vibrational wavepacket dynamics of an HD+ molecular ion. The quantum dephasing and revival structure of the wavepacket are mapped experimentally with time-resolved photodissociation imaging. The motion of the molecule is simulated using a quantum-mechanical model that predicts the observed structure. The coherence of the wavepacket is controlled by varying the duration of the intense laser pulses. By means of a Fourier transform analysis, both the periodicity and the relative population of the vibrational states of the excited molecular ion have been characterized.
Abstract:
Explicit finite difference (FD) schemes can realise highly realistic physical models of musical instruments but are computationally complex. A design methodology is presented for the creation of FPGA-based micro-architectures for FD schemes which can be applied to a range of applications with varying computational requirements, excitation and output patterns, and boundary conditions. It has been applied to membrane- and plate-based sound producing models, resulting in faster than real-time performance on a Xilinx XC2VP50 device which is 10 to 35 times faster than general purpose and DSP processors. The models have been developed in such a way as to allow a wide range of interaction (by a musician), thereby leading to the possibility of creating a highly realistic digital musical instrument.
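A minimal sketch of the kind of explicit FD scheme such architectures implement is shown below, here a lossy 2-D wave-equation (membrane) update written in Python/NumPy rather than as an FPGA description; the grid size, loss factor, excitation, and read-out point are assumptions.

```python
import numpy as np

N = 128                      # grid points per side (assumed)
c, dx = 200.0, 1.0 / N       # wave speed (m/s) and grid spacing (assumed)
fs = 44_100                  # audio sample rate
dt = 1.0 / fs
lam = c * dt / dx            # Courant number; explicit scheme stable for lam <= 1/sqrt(2)
assert lam <= 1 / np.sqrt(2)
loss = 0.9999                # simple frequency-independent damping (assumed)

u_prev = np.zeros((N, N))
u = np.zeros((N, N))
u[N // 3, N // 3] = 1.0      # impulsive excitation of the membrane

out = []
for _ in range(fs // 10):    # 0.1 s of output
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
           - 4.0 * u[1:-1, 1:-1])
    u_next = np.zeros_like(u)                        # clamped (fixed) boundaries
    u_next[1:-1, 1:-1] = loss * (2.0 * u[1:-1, 1:-1] - u_prev[1:-1, 1:-1]
                                 + lam**2 * lap)     # leapfrog update
    u_prev, u = u, u_next
    out.append(u[2 * N // 3, 2 * N // 3])            # read output at a pickup point
```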