869 results for Just-in-time
Abstract:
Most behavioral tasks have time constraints for successful completion, such as catching a ball in flight. Many of these tasks require trading off the time allocated to perception and action, especially when only one of the two is possible at any time. In general, the longer we perceive, the smaller the uncertainty in perceptual estimates. However, a longer perception phase leaves less time for action, which results in less precise movements. Here we examine subjects catching a virtual ball. Critically, as soon as subjects began to move, the ball became invisible. We study how subjects trade off sensory and movement uncertainty by deciding when to initiate their actions. We formulate this task in a probabilistic framework and show that subjects' decisions about when to start moving are statistically near optimal given their individual sensory and motor uncertainties. Moreover, we accurately predict individual subjects' task performance. Thus we show that subjects in a natural task are quantitatively aware of how sensory and motor variability depend on time and act so as to minimize overall task variability.
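As an illustration of the trade-off this abstract describes, the minimal sketch below (not the authors' model) picks the movement-onset time that minimises combined sensory and motor variance; the functional forms and constants are assumptions chosen only to show the shape of the optimisation.

```python
import numpy as np

# Illustrative sketch (not the authors' model): choose when to stop viewing
# and start moving so that combined sensory + motor variance is minimised.
# Functional forms and constants below are assumptions for illustration only.

T = 1.0                              # total time available (s), assumed
t_view = np.linspace(0.05, 0.95, 500) * T
t_move = T - t_view                  # time left for the movement

sigma_s = 0.02 / np.sqrt(t_view)     # assumed: sensory SD shrinks with viewing time
sigma_m = 0.01 / t_move              # assumed: motor SD grows as movement time shrinks

total_var = sigma_s**2 + sigma_m**2
t_star = t_view[np.argmin(total_var)]
print(f"variance-minimising movement onset: {t_star:.3f} s after ball appears")
```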
Abstract:
An implementation of the inverse vector Jiles-Atherton model for the solution of non-linear hysteretic finite element problems is presented. The implementation applies the fixed-point method with differential reluctivity values obtained from the Jiles-Atherton model. Differential reluctivities are usually computed using numerical differentiation, which is ill-posed: it amplifies small perturbations, causing large sudden increases or decreases of the differential reluctivity values that may lead to numerical problems. A rule-based algorithm for conditioning differential reluctivity values is presented. Unwanted perturbations in the computed differential reluctivity values are eliminated or reduced with the aim of guaranteeing convergence. Details of the algorithm are presented together with an evaluation on a numerical example. The algorithm is shown to guarantee convergence, although the rate of convergence depends on the choice of algorithm parameters.
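A minimal sketch of the kind of rule-based conditioning the abstract refers to is given below; the specific rules, bounds, and thresholds are assumptions for illustration and are not the authors' algorithm. In a fixed-point loop, conditioned values of this sort would replace the raw numerical derivatives before the next system assembly.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7          # permeability of free space
NU0 = 1.0 / MU0                 # reluctivity of free space (upper physical bound)

def condition_reluctivity(nu_new, nu_prev, nu_min=10.0, max_rel_change=0.2):
    """Condition numerically differentiated reluctivity values (illustrative rules only).

    nu_new  : raw differential reluctivities from numerical differentiation
    nu_prev : conditioned values from the previous fixed-point iteration
    """
    nu = np.clip(nu_new, nu_min, NU0)                 # rule 1 (assumed): keep values in a physical range
    jump = nu - nu_prev
    limit = max_rel_change * np.abs(nu_prev)
    return nu_prev + np.clip(jump, -limit, limit)     # rule 2 (assumed): damp sudden jumps between iterations
```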
Laser induced photoelectron impact ionization in time-of-flight mass spectrometer
Abstract:
The vehicle navigation problem studied in Bell (2009) is revisited and a time-dependent reverse Hyperstar algorithm is presented. This minimises the expected time of arrival at the destination, and at all intermediate nodes, where the expectation is based on a pessimistic (or risk-averse) view of unknown link delays. It may also be regarded as a hyperpath version of the Chabini and Lan (2002) algorithm, which is itself a time-dependent A* algorithm. Links are assigned undelayed travel times and maximum delays, both of which are potentially functions of the time of arrival at the respective link. The driver seeks probabilities for link use that minimise his or her maximum exposure to delay on the approach to each node, leading to the determination of the pessimistic expected time of arrival. Since the context considered is vehicle navigation where the driver is not making repeated trips, the probability of link use may be interpreted as a measure of link attractiveness: a link with a zero probability of use is unattractive, while a link with a probability of use equal to one has no attractive alternatives. A solution algorithm is presented and proven to solve the problem provided the node potentials are feasible and a FIFO condition applies to the undelayed link travel times. The paper concludes with a numerical example.
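For readers unfamiliar with hyperpath methods, the sketch below shows a single-node "attractive set" combination of the kind such algorithms build on, with the reciprocal of the maximum delay playing the role of a line frequency in the spirit of the risk-averse analogy used by Bell (2009). It is a static, single-node simplification using the classic transit-assignment combination formula, not the time-dependent reverse algorithm of the paper; the numbers in the usage line are invented.

```python
# Illustrative single-node hyperpath combination (static simplification).
# Each candidate is (undelayed cost + downstream label, maximum delay);
# 1/max_delay is treated like a frequency, so links are added to the
# attractive set, cheapest first, while they improve the expected cost.

def combine_links(candidates):
    """Return the pessimistic expected cost at the node and link-use probabilities."""
    candidates = sorted(candidates)              # cheapest undelayed option first
    u = float("inf")                             # combined expected cost at the node
    F, C = 0.0, 0.0                              # summed frequencies / weighted costs
    attractive = []
    for cost, max_delay in candidates:
        if cost >= u:                            # adding this link cannot improve u
            break
        f = 1.0 / max_delay
        F += f
        C += f * cost
        u = (1.0 + C) / F                        # classic frequency-based combination
        attractive.append(f)
    probs = [f / F for f in attractive]          # link-use probabilities (attractive set)
    return u, probs

u, probs = combine_links([(10.0, 2.0), (11.0, 1.0), (30.0, 5.0)])
print(u, probs)                                  # third link stays unattractive
```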
Abstract:
Rates of population increase in early spring and the sizes of overwintering stocks were calculated for the planktonic copepods Pseudocalanus elongatus and Acartia clausi for a set of areas covering the open waters of the north-east Atlantic Ocean and the North Sea for the period 1948 to 1979. For both species, the rates of population increase were higher in the open ocean than in the North Sea and appear to be related to temperature. The overwintering stocks in the North Sea were larger than those in the open ocean and are probably related to phytoplankton concentration. P. elongatus shows higher overwintering stocks and lower rates of population increase than A. clausi, resulting in different levels of persistence in the stocks of the two species. It is suggested that this difference in persistence is responsible for differences between the two species with respect to geographical distribution in summer and different patterns of year-to-year fluctuations in abundance.
Continuous Plankton Records: Persistence in Time-Series of Annual Means of Abundance of Zooplankton
Abstract:
Time-series of annual means of abundance of zooplankton of the north-east Atlantic Ocean and the North Sea, for the period 1948 to 1977, show considerable associations between successive years. The seasonal dynamics of the stocks appear to be consistent with at least a proportion of this being due to inherent persistence from year to year. Experiments with a simple model suggest that the observed properties of the time-series cannot be reproduced as a response to simple random forcing. The extent of trends and long-wavelength variations can be simulated by introducing fairly extensive persistence into the perturbations, but this underestimates the extent of shorter-wavelength variability in the observed time-series. The effect of persistence is to increase the proportion of trend and long-wavelength variability in time-series of annual means, but stocks can respond to short-wavelength perturbations provided these have a clearly defined frequency.
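A generic simulation (not the paper's model) of the argument is sketched below: a stock with year-to-year persistence is driven either by white-noise or by persistent (AR(1)) perturbations, and the ratio of long- to short-wavelength spectral power is compared; all coefficients are assumed values.

```python
import numpy as np

# Generic illustration (not the paper's model): persistence in the forcing
# shifts variance toward trends and long wavelengths in the annual-mean series.

rng = np.random.default_rng(0)
n, years = 1000, 30
phi_stock, phi_forcing = 0.6, 0.7            # assumed persistence coefficients

def simulate(pf):
    eps = rng.standard_normal((n, years))
    forcing = np.zeros_like(eps)
    stock = np.zeros_like(eps)
    for t in range(1, years):
        forcing[:, t] = pf * forcing[:, t - 1] + eps[:, t]
        stock[:, t] = phi_stock * stock[:, t - 1] + forcing[:, t]
    return stock

for label, pf in [("white-noise forcing", 0.0), ("persistent forcing", phi_forcing)]:
    s = simulate(pf)
    spec = np.abs(np.fft.rfft(s - s.mean(axis=1, keepdims=True), axis=1)) ** 2
    low = spec[:, 1:4].mean()                # long-wavelength band
    high = spec[:, -4:].mean()               # short-wavelength band
    print(f"{label}: low/high spectral power ratio = {low / high:.2f}")
```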
Abstract:
Historical GIS has the potential to reinvigorate our use of statistics from historical censuses and related sources. In particular, areal interpolation can be used to create long-run time-series of spatially detailed data that will enable us to enhance significantly our understanding of geographical change over periods of a century or more. The difficulty with areal interpolation, however, is that the data it generates are estimates that will inevitably contain some error. This paper describes a technique that allows the automated identification of possible errors at the level of individual data values.
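For context, the sketch below shows plain area-weighted areal interpolation, the kind of estimation whose errors the paper sets out to detect; it is a generic illustration, not the paper's error-identification technique, and the zone names and overlap areas are invented.

```python
# Generic area-weighted areal interpolation sketch (not the paper's error-detection
# technique): redistribute a source-zone count to target zones in proportion to
# the area of overlap. Zone names and overlap areas are hypothetical.

def areal_interpolate(source_value, overlap_areas):
    """overlap_areas: {target_zone: area of intersection with the source zone}."""
    total = sum(overlap_areas.values())
    return {zone: source_value * area / total for zone, area in overlap_areas.items()}

# e.g. a historical parish population of 1200 split across two modern districts
print(areal_interpolate(1200, {"district_A": 3.0, "district_B": 1.0}))
```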