976 results for Temporal models
Abstract:
Different interceptive tasks and modes of interception (hitting or capturing) do not necessarily involve similar control processes. Control based on preprogramming of movement parameters is possible for actions with brief movement times but is now widely rejected; continuous perceptuomotor control models are preferred for all types of interception. The rejection of preprogrammed control and acceptance of continuous control is evaluated for the timing of rapidly executed, manual hitting actions. It is shown that a preprogrammed control model is capable of providing a convincing account of observed behavior patterns that avoids many of the arguments that have been raised against it. Prominent continuous perceptual control models are analyzed within a common framework and are shown to be interpretable as feedback control strategies. Although these models can explain observations of on-line adjustments to movement, they offer only post hoc explanations for observed behavior patterns in hitting tasks and are not directly supported by data. It is proposed that rapid manual hitting tasks make up a class of interceptions for which a preprogrammed strategy is adopted, a strategy that minimizes the role of visual feedback. Such a strategy is effective when the task demands a high degree of temporal accuracy.
Abstract:
The effects of temporal precision constraints and movement amplitude on performance of an interceptive aiming task were examined. Participants were required to strike a moving target object with a 'bat' by moving the bat along a straight path (constrained by a linear slide) perpendicular to the path of the target. Temporal precision constraints were defined in terms of the time period (or window) within which contact with the target was possible. Three time windows were used (approx. 35, 50 and 65 ms), and these were achieved by manipulating the size of the bat (experiment 1a), the size of the target (experiment 1b) or the speed of the target (experiment 2). In all experiments, movement time (MT) increased in proportion to movement amplitude but was only affected by differences in the temporal precision constraint if this was achieved by variation in the target's speed. In this case the MT was approximately inversely proportional to target speed. Peak movement speed was affected by temporal accuracy constraints in all three experiments: participants reached higher speeds when the temporal precision required was greater. These results are discussed with reference to the speed-accuracy trade-off observed for temporally constrained aiming movements. It is suggested that the MT and speed of interceptive aiming movements may be understood as responses to the spatiotemporal constraints of the task.
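The geometry behind the time-window manipulation can be sketched as follows. This is an illustration only, not the authors' apparatus: under the assumption that the bat travels perpendicular to the target's path, the window during which contact is possible is roughly (bat width + target width) / target speed, so enlarging either object or slowing the target widens the window. All numbers below are hypothetical.

```python
# Hypothetical sketch of the contact time window described in the abstract:
#   window = (bat_width + target_width) / target_speed
# Widening the bat (experiment 1a), widening the target (experiment 1b),
# or slowing the target (experiment 2) all enlarge the window.

def time_window_ms(bat_width_m, target_width_m, target_speed_mps):
    """Approximate period (ms) during which bat-target contact is possible."""
    return 1000.0 * (bat_width_m + target_width_m) / target_speed_mps

print(time_window_ms(0.05, 0.02, 2.0))  # wider bat or slower target -> larger window
print(time_window_ms(0.08, 0.02, 2.0))
print(time_window_ms(0.05, 0.02, 1.0))
```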
Abstract:
The influence of temporal association on the representation and recognition of objects was investigated. Observers were shown sequences of novel faces in which the identity of the face changed as the head rotated. As a result, observers showed a tendency to treat the views as if they were of the same person. Additional experiments revealed that this was only true if the training sequences depicted head rotations rather than jumbled views; in other words, the sequence had to be spatially as well as temporally smooth. Results suggest that we are continuously associating views of objects to support later recognition, and that we do so not only on the basis of the physical similarity, but also the correlated appearance in time of the objects.
Abstract:
While some recent frameworks on cognitive agents have addressed the combination of mental attitudes with deontic concepts, they commonly ignore the representation of time. An exception is [1], which also manages some temporal aspects with respect to both cognition and normative provisions. In this paper we propose an extension of the logic presented in [1] with temporal intervals.
Abstract:
This paper proposes some variants of Temporal Defeasible Logic (TDL) to reason about normative modifications. These variants make it possible to distinguish cases in which, for example, a modification at some time changes a legal rule while its conclusions persist afterwards from cases in which the conclusions are also blocked.
Abstract:
The relative importance of factors that may promote genetic differentiation in marine organisms is largely unknown. Here, contributions to population structure from biogeography, habitat distribution, and isolation by distance were investigated in Axoclinus nigricaudus, a small subtidal rock reef fish, throughout its range in the Gulf of California. A 408 basepair fragment of the mitochondrial control region was sequenced from 105 individuals. Variation was significantly partitioned between many pairs of populations. Phylogenetic analyses, hierarchical analyses of variance, and general linear models substantiated a major break between two putative biogeographic regions. This genetic discontinuity coincides with an abrupt change in ecological characteristics (including temperature and salinity) but does not coincide with known oceanographic circulation patterns. Geographic distance and the nature of habitat separating populations (continuous habitat along a shoreline, discontinuous habitat along a shoreline, and open water) also contributed to population structure in general linear model analyses. To verify that local populations are genetically stable over time, one population was resampled on four occasions over eighteen months; it showed no evidence of a temporal component to diversity. These results indicate that having a planktonic life stage does not preclude geographically partitioned genetic variation over relatively small geographic distances in marine environments. Moreover, levels of genetic differentiation among populations of Axoclinus nigricaudus cannot be explained by a single factor, but are due to the combined influences of a biogeographic boundary, habitat, and geographic distance.
Abstract:
The Gaudin models based on the face-type elliptic quantum groups and the XYZ Gaudin models are studied. The Gaudin model Hamiltonians are constructed and are diagonalized by using the algebraic Bethe ansatz method. The corresponding face-type Knizhnik–Zamolodchikov equations and their solutions are given.
Abstract:
In this review we demonstrate how the algebraic Bethe ansatz is used for the calculation of the energy spectra and form factors (operator matrix elements in the basis of Hamiltonian eigenstates) in exactly solvable quantum systems. As examples we apply the theory to several models of current interest in the study of Bose-Einstein condensates, which have been successfully created using ultracold dilute atomic gases. The first model we introduce describes Josephson tunnelling between two coupled Bose-Einstein condensates. It can be used not only for the study of tunnelling between condensates of atomic gases, but also for solid-state Josephson junctions and coupled Cooper pair boxes. The theory is also applicable to models of atomic-molecular Bose-Einstein condensates, with two examples given and analysed. Additionally, these same two models are relevant to studies in quantum optics. Finally, we discuss the model of Bardeen, Cooper and Schrieffer in this framework, which is appropriate for systems of ultracold fermionic atomic gases, as well as being applicable to the description of superconducting correlations in metallic grains with nanoscale dimensions. In applying all the above models to physical situations, the need for an exact analysis of small-scale systems is established due to large quantum fluctuations which render mean-field approaches inaccurate.
Abstract:
Arriving in Brisbane some six years ago, I could not help being impressed by what may be prosaically described as its atmospheric amenity resources. Perhaps this in part was due to my recent experiences in major urban centres in North America, but since that time, that sparkling quality and the blue skies seem to have progressively diminished. Unfortunately, there is also objective evidence available to suggest that this apparent deterioration is not merely the result of habituation of the senses. Air pollution data for the city show trends of increasing concentrations of those very substances that have destroyed the attractiveness of major population centres elsewhere, with climates initially as salubrious. Indeed, present figures indicate that photochemical smog in unacceptably high concentrations is rapidly becoming endemic also over Brisbane. These regrettable developments should come as no surprise. The society at large has not been inclined to respond purposefully to warnings of impending environmental problems, despite the experiences and publicity from overseas and even from other cities within Australia. Nor, up to the present, have certain politicians and government officials displayed stances beyond those necessary for the maintenance of a decorum of concern. At this stage, there still exists the possibility for meaningful government action without the embarrassment of losing political favour with the electorate. To the contrary, there is every chance that such action may be turned to advantage with increased public enlightenment. It would be more than a pity to miss perhaps the final remaining opportunity: Queensland is one of the few remaining places in the world with sufficient resources to permit both rational development and high environmental quality. 
The choice appears to be one of making a relatively minor investment now for a large financial and social gain in the near future, or of permitting Brisbane to degenerate gradually into just another stagnated Los Angeles or Sydney. The present monograph attempts to introduce the problem by reviewing the available research on air quality in the Brisbane area. It also tries to elucidate some seemingly obvious, but so far unapplied, management approaches. By necessity, such a broad treatment needs to make inroads into extensive ranges of subject areas, from political and legal practices to public perceptions, and from scientific measurement and statistical analysis to the dynamics of air flow. Clearly, it does not pretend to be definitive in any of these fields, but it does try to emphasize those adjustable facets of the human use system of natural resources, too often neglected in favour of air pollution control technology. The crossing of disciplinary boundaries, however, needs no apology: air quality problems are ubiquitous, touching upon space, time and human interaction.
Abstract:
Many images consist of two or more 'phases', where a phase is a collection of homogeneous zones. For example, the phases may represent the presence of different sulphides in an ore sample. Frequently, these phases exhibit very little structure, though all connected components of a given phase may be similar in some sense. As a consequence, random set models are commonly used to model such images. The Boolean model and models derived from the Boolean model are often chosen. An alternative approach to modelling such images is to use the excursion sets of random fields to model each phase. In this paper, the properties of excursion sets will first be discussed in terms of modelling binary images. Ways of extending these models to multi-phase images will then be explored. A desirable feature of any model is to be able to fit it to data reasonably well. Different methods for fitting random set models based on excursion sets will be presented and some of the difficulties with these methods will be discussed.
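The excursion-set idea can be sketched in a few lines. This is a minimal illustration, not the paper's models or fitting procedures: a binary "phase" image is built as the excursion set {x : Z(x) >= u} of a smooth random field Z, here crudely approximated by neighbourhood-averaged white noise; the grid size, smoothing radius, and threshold u are all assumed for the demo.

```python
import random

# Sketch of a binary phase image as an excursion set {x : Z(x) >= u}.
# The smooth field Z is approximated by moving-average smoothing of
# white noise; a proper construction would use a Gaussian random field.
random.seed(0)
n = 64
noise = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]

def smooth(field, radius=3):
    """Periodic moving-average smoothing over a (2r+1) x (2r+1) window."""
    m = len(field)
    out = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            vals = [field[(i + di) % m][(j + dj) % m]
                    for di in range(-radius, radius + 1)
                    for dj in range(-radius, radius + 1)]
            out[i][j] = sum(vals) / len(vals)
    return out

u = 0.05  # threshold level: raising u shrinks the phase
z = smooth(noise)
phase = [[1 if z[i][j] >= u else 0 for j in range(n)] for i in range(n)]
coverage = sum(map(sum, phase)) / (n * n)
print(f"phase fraction at u={u}: {coverage:.2f}")
```

Because the field is smoothed, the resulting phase forms connected blobs rather than salt-and-pepper noise, which is the qualitative behaviour that makes excursion sets attractive for homogeneous-zone imagery.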
Abstract:
Except for a few large scale projects, language planners have tended to talk and argue among themselves rather than to see language policy development as an inherently political process. A comparison with a social policy example, taken from the United States, suggests that it is important to understand the problem and to develop solutions in the context of the political process, as this is where decisions will ultimately be made.
Abstract:
Polytomous Item Response Theory Models provides a unified, comprehensive introduction to the range of polytomous models available within item response theory (IRT). It begins by outlining the primary structural distinction between the two major types of polytomous IRT models. This focuses on the two types of response probability that are unique to polytomous models and their associated response functions, which are modeled differently by the different types of IRT model. It describes, both conceptually and mathematically, the major specific polytomous models, including the Nominal Response Model, the Partial Credit Model, the Rating Scale Model, and the Graded Response Model. Important variations, such as the Generalized Partial Credit Model, are also described, as are less common variations, such as the Rating Scale version of the Graded Response Model. Relationships among the models are also investigated and the operation of measurement information is described for each major model. Practical examples of major models using real data are provided, as is a chapter on choosing an appropriate model. Figures are used throughout to illustrate important elements as they are described.
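One of the models the book covers, the Partial Credit Model, can be sketched concretely. This is a standard textbook formulation, not taken from the book's own notation: for an item with step difficulties delta_1..delta_m, the probability of responding in category x (0..m) is proportional to exp(sum over j <= x of (theta - delta_j)), with the empty sum for x = 0 taken as zero; the theta and delta values below are illustrative.

```python
import math

# Partial Credit Model category probabilities:
#   P(x | theta) ∝ exp( sum_{j=1..x} (theta - delta_j) ),  x = 0..m,
# where the empty sum for x = 0 is 0.

def pcm_probs(theta, deltas):
    """Category response probabilities under the PCM for ability theta."""
    cum = [0.0]  # cumulative exponents; category 0 contributes exp(0)
    for d in deltas:
        cum.append(cum[-1] + (theta - d))
    numers = [math.exp(c) for c in cum]
    total = sum(numers)
    return [v / total for v in numers]

# Illustrative item with three steps (four categories 0..3).
probs = pcm_probs(theta=0.5, deltas=[-1.0, 0.0, 1.0])
print([round(p, 3) for p in probs])  # probabilities sum to (approximately) 1
```

For a respondent with theta = 0.5 and evenly spaced steps, the middle categories carry most of the probability, which is the characteristic shape of polytomous category response functions.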
Abstract:
This paper discusses a multi-layer feedforward (MLF) neural network incident detection model that was developed and evaluated using field data. In contrast to published neural network incident detection models which relied on simulated or limited field data for model development and testing, the model described in this paper was trained and tested on a real-world data set of 100 incidents. The model uses speed, flow and occupancy data measured at dual stations, averaged across all lanes and only from time interval t. The off-line performance of the model is reported under both incident and non-incident conditions. The incident detection performance of the model is reported based on a validation-test data set of 40 incidents that were independent of the 60 incidents used for training. The false alarm rates of the model are evaluated based on non-incident data that were collected from a freeway section which was video-taped for a period of 33 days. A comparative evaluation between the neural network model and the incident detection model in operation on Melbourne's freeways is also presented. The results of the comparative performance evaluation clearly demonstrate the substantial improvement in incident detection performance obtained by the neural network model. The paper also presents additional results that demonstrate how improvements in model performance can be achieved using variable decision thresholds. Finally, the model's fault-tolerance under conditions of corrupt or missing data is investigated and the impact of loop detector failure/malfunction on the performance of the trained model is evaluated and discussed. The results presented in this paper provide a comprehensive evaluation of the developed model and confirm that neural network models can provide fast and reliable incident detection on freeways. (C) 1997 Elsevier Science Ltd. All rights reserved.
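The structure of such a detector can be sketched in miniature. This is an illustration only, not the trained Melbourne model: a one-hidden-layer feedforward pass over station-averaged speed, flow and occupancy, followed by the variable decision threshold the abstract mentions; all weights and inputs below are made up.

```python
import math

# Hypothetical multi-layer feedforward (MLF) sketch in the spirit of the
# incident-detection model: inputs are normalised [speed, flow, occupancy],
# output is a sigmoid "incident likelihood" compared against a threshold.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mlf_forward(inputs, w_hidden, b_hidden, w_out, b_out):
    """One hidden layer of sigmoid units, one sigmoid output."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

# Made-up weights; a real model would learn these from incident data.
w_hidden = [[-1.5, 0.8, 2.0], [0.5, -1.2, 1.0]]
b_hidden = [0.1, -0.3]
w_out, b_out = [1.8, 1.1], -1.0

score = mlf_forward([0.3, 0.4, 0.9], w_hidden, b_hidden, w_out, b_out)

# A variable decision threshold trades detection rate against false alarms:
# lowering it flags more intervals as incidents.
for threshold in (0.4, 0.6, 0.8):
    print(threshold, "incident" if score >= threshold else "clear")
```

Sweeping the threshold over held-out data is one simple way to trace the detection-rate/false-alarm trade-off the paper evaluates.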
Abstract:
The conventional analysis for the estimation of the tortuosity factor for transport in porous media is modified here to account for the effect of pore aspect ratio. Structural models of the porous medium are also constructed for calculating the aspect ratio as a function of porosity. Comparison of the model predictions with the extensive data of Currie (1960) for the effective diffusivity of hydrogen in packed beds shows good agreement with a network model of randomly oriented intersecting pores for porosities up to about 50 percent, which is the region of practical interest. The predictions based on this network model are also found to be in better agreement with the data of Currie than earlier expressions developed for unconsolidated and grainy media.
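The quantity being modelled can be put in back-of-envelope form. This uses only the classic relation D_eff = eps * D / tau (porosity eps, tortuosity factor tau), not the paper's aspect-ratio-dependent network estimate of tau; the fixed tau and the bulk diffusivity below are assumed for illustration.

```python
# Classic effective-diffusivity relation for a porous medium:
#   D_eff = eps * D / tau
# The paper's contribution is a pore-network estimate of tau that depends
# on pore aspect ratio; the constant tau = 3.0 here is purely illustrative.

def effective_diffusivity(d_bulk, porosity, tortuosity):
    """D_eff = eps * D / tau."""
    return porosity * d_bulk / tortuosity

d_h2 = 7.0e-5  # m^2/s, rough bulk diffusivity of H2 in air (assumed)
for eps in (0.2, 0.35, 0.5):
    d_eff = effective_diffusivity(d_h2, eps, tortuosity=3.0)
    print(f"porosity {eps:.2f}: D_eff/D = {d_eff / d_h2:.3f}")
```

Fitting tau as a function of porosity (via the aspect-ratio structural models) rather than holding it constant is what lets the network model track data like Currie's across the practically relevant porosity range.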