908 results for Task-to-core mapping
Abstract:
The fascinating idea that tools become extensions of our body appears in artistic, literary, philosophical, and scientific works alike. In the last fifteen years, this idea has been re-framed into several related hypotheses, one of which states that tool use extends the neural representation of the multisensory space immediately surrounding the hands (variously termed peripersonal space, peri-hand space, peri-cutaneous space, action space, or near space). This and related hypotheses have been tested extensively in the cognitive neurosciences, with evidence from molecular, neurophysiological, neuroimaging, neuropsychological, and behavioural fields. Here, I briefly review the evidence for and against the hypothesis that tool use extends a neural representation of the space surrounding the hand, concentrating on neurophysiological, neuropsychological, and behavioural evidence. I then provide a re-analysis of data from six published experiments and one unpublished experiment using the crossmodal congruency task to test this hypothesis. While the re-analysis broadly confirms the previously reported finding that tool use does not literally extend peripersonal space, the overall effect sizes are small and statistical power is low. I conclude by questioning whether the crossmodal congruency task can indeed be used to test the hypothesis that tool use modifies peripersonal space.
Abstract:
Although tactile representations of the two body sides are initially segregated into opposite hemispheres of the brain, behavioural interactions between body sides exist and can be revealed under conditions of tactile double simultaneous stimulation (DSS) at the hands. Here we examined to what extent vision can affect body-side segregation in touch. To this aim, we changed hand-related visual input while participants performed a go/no-go task to detect a tactile stimulus delivered to one target finger (e.g., right index), stimulated alone or with a concurrent non-target finger either on the same hand (e.g., right middle finger) or on the other hand (e.g., left index finger = homologous; left middle finger = non-homologous). Across experiments, the two hands were either visible or occluded from view (Experiment 1), images of the two hands were merged using a morphing technique (Experiment 2), or the hands were shown in a position compatible vs. incompatible with the actual posture (Experiment 3). Overall, the results showed reliable interference effects of DSS, as compared to target-only stimulation. This interference varied as a function of which non-target finger was stimulated, and emerged both within and between hands. These results imply that the competition between tactile events is not clearly segregated across body sides. Crucially, non-informative vision of the hand affected overall tactile performance only when a visual/proprioceptive conflict was present, while neither congruent nor morphed hand vision affected tactile DSS interference. This suggests that DSS operates at a tactile processing stage in which interactions between body sides can occur regardless of the available visual input from the body.
Abstract:
We studied the effect of tactile double simultaneous stimulation (DSS) within and between hands to examine spatial coding of touch at the fingers. Participants performed a go/no-go task to detect a tactile stimulus delivered to one target finger (e.g., right index), stimulated alone or with a concurrent non-target finger, either on the same hand (e.g., right middle finger) or on the other hand (e.g., left index finger = homologous; left middle finger = non-homologous). Across blocks we also changed the unseen hands' posture (both hands palm down, or one hand rotated palm up). When both hands were palm down, DSS interference effects emerged both within and between hands, but only when the non-homologous finger served as non-target. This suggests a clear segregation between the fingers of each hand, regardless of finger side. By contrast, when one hand was palm up, interference effects emerged only within hand, whereas between-hands DSS interference was considerably reduced or absent. Thus, between-hands interference was clearly affected by changes in hand posture. Taken together, these findings provide behavioral evidence in humans for multiple spatial coding of touch during tactile DSS at the fingers. In particular, they confirm the existence of representational stages of touch that distinguish between body regions more than body sides. Moreover, they show that information about the side of tactile stimulation becomes prominent when a postural update is required.
Abstract:
The detection of long-range dependence in time series analysis is an important task, to which this paper contributes by showing that, whilst the theoretical definition of a long-memory (or long-range dependent) process is based on the autocorrelation function, it is not possible for long memory to be identified using the sum of the sample autocorrelations, as usually defined. The reason is that the sample sum is a predetermined constant for any stationary time series, a result that is independent of the sample size. Diagnostic or estimation procedures, such as those in the frequency domain, that embed this sum are equally open to this criticism. We develop this result in the context of long memory, extending it to the implications for the spectral density function and the variance of partial sums of a stationary stochastic process. The results are further extended to higher-order sample autocorrelations and the bispectral density. The corresponding result is that the sum of the third-order sample (auto)bicorrelations at lags h, k ≥ 1 is also a predetermined constant, different from that in the second-order case, for any stationary time series of arbitrary length.
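To make the claim concrete, here is the standard identity behind it, written with the usual definition of the sample autocorrelation (a hedged sketch; the paper's own notation and generalisations may differ). With sample mean $\bar{x} = \tfrac{1}{n}\sum_{t=1}^{n} x_t$ and
\[
\hat{\rho}(h) = \frac{\sum_{t=1}^{n-h}(x_t-\bar{x})(x_{t+h}-\bar{x})}{\sum_{t=1}^{n}(x_t-\bar{x})^{2}},
\]
summing over all positive lags gives
\[
\sum_{h=1}^{n-1}\hat{\rho}(h) = -\tfrac{1}{2}
\]
for every realisation of length $n$, because $\sum_{t=1}^{n}(x_t-\bar{x}) = 0$ forces the cross-products to cancel exactly half of the squared deviations. Since the sum is the same constant for any data, no diagnostic built on it can distinguish long memory from short memory.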
Abstract:
The planning of semi-autonomous vehicles in traffic scenarios is a relatively new problem that contributes towards the goal of making road travel by vehicles free of human drivers. An algorithm needs to ensure optimal real-time planning of multiple vehicles (moving in either direction along a road) in the presence of a complex obstacle network. Unlike other approaches, here we assume that speed lanes are not present and that different lanes do not need to be maintained for inbound and outbound traffic. Our basic hypothesis is to carry forward the planning task so that each vehicle maintains a sufficient distance from all other vehicles, obstacles and road boundaries. We present a 4-layer planning algorithm consisting of road selection (selecting the individual roads of traversal to reach the goal), pathway selection (a strategy to avoid and/or overtake obstacles, road diversions and other blockages), pathway distribution (selecting the position of a vehicle at every instant of time within a pathway), and trajectory generation (generating a curve smooth enough to allow the maximum possible speed). Cooperation between vehicles is handled separately at the different levels, the aim being to maximize the separation between vehicles. Simulated results exhibit smooth, efficient and safe driving of vehicles in multiple scenarios, along with typical vehicle behaviours including following and overtaking.
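As a rough illustration of the four-layer decomposition described in this abstract, here is a minimal Python sketch. All names, data structures and the placeholder algorithms (breadth-first road search, uniform position spacing, moving-average smoothing) are assumptions made for illustration only, not the authors' implementation, which additionally handles obstacles, cooperation between vehicles and curvature constraints.

# Hypothetical sketch of the 4-layer planner; names and the simple
# placeholder algorithms are illustrative assumptions, not the paper's method.
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def select_roads(start: str, goal: str, road_graph: Dict[str, List[str]]) -> List[str]:
    """Layer 1: choose the sequence of roads leading to the goal (plain BFS here)."""
    frontier, seen = [[start]], {start}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == goal:
            return path
        for nxt in road_graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return []

def select_pathway(roads: List[str], centrelines: Dict[str, List[Point]]) -> List[Point]:
    """Layer 2: pick a corridor along the chosen roads (obstacle handling omitted)."""
    return [pt for road in roads for pt in centrelines[road]]

def distribute_pathway(pathway: List[Point], n_steps: int) -> List[Point]:
    """Layer 3: assign the vehicle a position at every time instant (uniform spacing)."""
    last = len(pathway) - 1
    return [pathway[round(i * last / max(n_steps - 1, 1))] for i in range(n_steps)]

def generate_trajectory(positions: List[Point]) -> List[Point]:
    """Layer 4: smooth the positions (a moving average stands in for a spline)."""
    smoothed = []
    for i in range(len(positions)):
        xs, ys = zip(*positions[max(0, i - 1):i + 2])
        smoothed.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return smoothed

if __name__ == "__main__":
    road_graph = {"A": ["B"], "B": ["C"], "C": []}
    centrelines = {"A": [(0.0, 0.0), (1.0, 0.0)],
                   "B": [(1.0, 0.0), (2.0, 1.0)],
                   "C": [(2.0, 1.0), (3.0, 1.0)]}
    roads = select_roads("A", "C", road_graph)
    trajectory = generate_trajectory(
        distribute_pathway(select_pathway(roads, centrelines), 8))
    print(roads, trajectory)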
Abstract:
This paper assesses the performance of a vocabulary test designed to measure second language productive vocabulary knowledge. The test, Lex30, uses a word association task to elicit vocabulary, and uses word frequency data to measure the vocabulary produced. Here we report firstly on the reliability of the test as measured by a test-retest study, a parallel test forms experiment and an internal consistency measure. We then investigate the construct validity of the test by looking at changes in test performance over time, analyses of correlations with scores on similar tests, and a comparison of spoken and written test performance. Last, we examine the theoretical bases of the two main test components: eliciting vocabulary and measuring vocabulary. Interpretations of our findings are discussed in the context of the test validation research literature. We conclude that the findings reported here present a robust argument for the validity of the test as a research tool, and encourage further investigation of its validity in an instructional context.
Abstract:
Perception and action are tightly linked: objects may be perceived not only in terms of visual features, but also in terms of possibilities for action. Previous studies showed that when a centrally located object has a salient graspable feature (e.g., a handle), it facilitates motor responses corresponding with the feature's position. However, such so-called affordance effects have been criticized as resulting from spatial compatibility effects, due to the visual asymmetry created by the graspable feature, irrespective of any affordances. In order to dissociate affordance from spatial compatibility effects, we asked participants to perform a simple reaction-time task to typically graspable and non-graspable objects with similar visual features (e.g., lollipop and stop sign). Responses were measured using either electromyography (EMG) on proximal arm muscles during reaching-like movements, or finger key-presses. In both the EMG and button-press measurements, participants responded faster when the object was either presented in the same location as the responding hand or was graspable, resulting in significant and independent spatial compatibility and affordance effects, but no interaction. Furthermore, while the spatial compatibility effect was present from the earliest stages of movement preparation and throughout the different stages of movement execution, the affordance effect was restricted to the early stages of movement execution. Finally, we tested a small group of unilateral arm amputees using EMG, and found residual spatial compatibility effects but no affordance effect, suggesting that spatial compatibility effects do not necessarily rely on individuals’ available affordances. Our results show a dissociation between affordance and spatial compatibility effects, and suggest that rather than evoking the specific motor action most suitable for interaction with the viewed object, graspable objects prompt the motor system in a general, body-part-independent fashion.
Abstract:
OBJECTIVE: This study modeled win and lose trials in a simple gambling task to examine the effect of the overall win-lose situation (WIN, LOSS, or TIE) on single win/lose trials and the related neural underpinnings. METHODS: The behavioral responses and brain activity of 17 participants were recorded with an MRI scanner while they performed a gambling task. Different conditions were compared to determine the effect of the task on the behavior and brain activity of the participants. Correlations between brain activity and behavior were calculated to support the imaging results. RESULTS: In win trials, LOSS caused less intense posterior cingulate activity than TIE. In lose trials, LOSS caused more intense activity than WIN and TIE in the right superior temporal gyrus, bilateral superior frontal gyrus, bilateral anterior cingulate, bilateral insula cortex, and left orbitofrontal cortex. CONCLUSIONS: The experiences of the participants in win trials were highly similar across win-lose situations. However, the brain activity and behavioral responses of the participants in lose trials indicated that they experienced stronger negative emotion in LOSS. The participants also showed a greater desire to win in LOSS than in the WIN or TIE conditions.
Abstract:
Research on child bilingualism accounts for differences in the course and the outcomes of monolingual and different types of bilingual language acquisition primarily from two perspectives: age of onset of exposure to the language(s) and the role of the input (Genesee, Paradis, & Crago, 2004; Meisel, 2009; Unsworth et al., 2014). Some findings suggest that early successive bilingual children may pattern similarly to simultaneous bilingual children, passing through different trajectories from child L2 learners due to a later age of onset in the latter group. Studies on bilingual development have also shown that input quantity in bilingual acquisition is considerably reduced, i.e., in each of their two languages, bilingual children are likely exposed to much less input than their monolingual peers (Paradis & Genesee, 1996; Unsworth, 2013b). At the same time, simultaneous bilingual children develop and attain competence in the two languages, sometimes without even an attested age delay compared to monolingual children (Paradis, Genesee, & Crago, 2011). The implication is that even half of the input suffices for early language development, at least with respect to ‘core’ aspects of language, in whatever way ‘core’ is defined. My aim in this article is to consider how an additional, linguistic variable interacts with age of onset and input in bilingual development, namely the timing in L1 development of the phenomena examined in bilingual children’s performance. Specifically, I will consider timing differences attested in the monolingual development of features and structures, distinguishing between early, late, and ‘very late’ acquired phenomena. I will then argue that this three-way distinction reflects differences in the role of narrow syntax: early phenomena are core, parametric and narrowly syntactic, in contrast to late and very late phenomena, which also involve syntax-external or even language-external resources. I explore the consequences of these timing differences in monolingual development for bilingual development. I will review some findings from early (V2 in Germanic, grammatical gender in Greek), late (passives) and very late (grammatical gender in Dutch) phenomena in the bilingual literature and argue that early phenomena can differentiate between simultaneous and (early) successive bilingualism, with an advantage for the former group, while the other two reveal similarly (high or low) performance across bilingual groups, differentiating them from monolinguals. The paper proposes that questions about the role of age of onset and language input in early bilingual development can only be meaningfully addressed when the properties and timing of the phenomena under investigation are taken into account.
Abstract:
Exploration of how neighbourhoods and others have responded to the UK government’s localism agenda in England, and specifically towards Neighbourhood Planning (NP), is important given that NP is a prominent part of that policy agenda. It is also of interest as the ramifications for planning practice emerge from the formal introduction of statutory plans which are ostensibly led by communities (Parker et al., 2015; Gallent, 2013). There is therefore a need to provide critical commentary on the socio-economic impact of localist policy. The paper explores the issues arising from experience thus far and highlights the take-up of Neighbourhood Planning since 2011. This assessment shows that the vast majority of active neighbourhoods have been in parished and less-deprived areas. This indicates that government needs to do more to ensure that NP is accessible and worthwhile for a wider range of communities.
Abstract:
An equation of Monge-Ampère type has, for the first time, been solved numerically on the surface of the sphere in order to generate optimally transported (OT) meshes, equidistributed with respect to a monitor function. Optimal transport generates meshes that keep the same connectivity as the original mesh, making them suitable for r-adaptive simulations, in which the equations of motion can be solved in a moving frame of reference in order to avoid mapping the solution between old and new meshes and to avoid load-balancing problems on parallel computers. The semi-implicit solution of the Monge-Ampère type equation involves a new linearisation of the Hessian term, and exponential maps are used to map from old to new meshes on the sphere. The determinant of the Hessian is evaluated as the change in volume between old and new mesh cells, rather than using numerical approximations to the gradients. OT meshes are generated to compare with centroidal Voronoi tessellations on the sphere and are found to have advantages and disadvantages: OT equidistribution is more accurate, the number of iterations to convergence is independent of the mesh size, face skewness is reduced and the connectivity does not change; however, anisotropy is higher and the OT meshes are non-orthogonal. It is shown that optimal transport on the sphere leads to meshes that do not tangle. However, tangling can be introduced by numerical errors in calculating the gradient of the mesh potential. Methods for alleviating this problem are explored. Finally, OT meshes are generated using observed precipitation as a monitor function, in order to demonstrate the potential power of the technique.
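For orientation, the flat-space prototype of the equation being solved is shown below; the spherical formulation in the paper replaces the gradient map by exponential maps and evaluates the Hessian determinant as a cell-volume ratio, so this is only a hedged sketch of the general form. Writing the new mesh points as images of the old ones under $\mathbf{x} \mapsto \mathbf{x} + \nabla\phi(\mathbf{x})$, equidistribution with respect to a monitor function $m$ requires
\[
m\bigl(\mathbf{x} + \nabla\phi(\mathbf{x})\bigr)\,\det\bigl(I + \nabla\nabla\phi(\mathbf{x})\bigr) = c,
\]
where the constant $c$ is fixed by requiring the total monitor-weighted area of the new mesh to match the area of the original domain.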
Abstract:
The impact of extreme sea ice initial conditions on modelled climate is analysed for a fully coupled atmosphere-ocean-sea ice general circulation model, the Hadley Centre climate model HadCM3. A control run with greenhouse gas concentrations fixed at preindustrial levels is chosen as the reference experiment. Sensitivity experiments show an almost complete recovery from total removal, or a strong increase, of sea ice after four years. Thus, uncertainties in initial sea ice conditions seem to be unimportant for climate modelling on decadal or longer time scales. When the initial conditions of the ocean mixed layer were adjusted to ice-free conditions, a few substantial differences remained for more than 15 model years, but these differences are clearly smaller than the uncertainty of the HadCM3 run and of all the other 19 IPCC Fourth Assessment Report climate model preindustrial runs. Improving the ability of climate models to simulate past sea ice variability remains an important task for making reliable projections for the 21st century.
Abstract:
Objectives: The current study examined younger and older adults’ error detection accuracy, prediction calibration, and postdiction calibration on a proofreading task, to determine whether age-related differences would be present in this type of common error detection task. Method: Participants were given text passages and were first asked to predict the percentage of errors they would detect in the passage. They then read the passage and circled errors (which varied in complexity and locality), and made postdictions regarding their performance, before repeating this with another passage and answering a comprehension test on both passages. Results: There were no age-related differences in error detection accuracy, text comprehension, or metacognitive calibration, though participants in both age groups were overconfident overall in their metacognitive judgments. Both groups gave similar ratings of motivation to complete the task. The older adults rated the passages as more interesting than the younger adults did, although this level of interest did not appear to influence error detection performance. Discussion: The age equivalence in both proofreading ability and calibration suggests that the ability to proofread text passages, and the associated metacognitive monitoring used in judging one’s own performance, are maintained in aging. These age-related similarities persisted when younger adults completed the proofreading tasks on a computer screen rather than with paper and pencil. The findings provide novel insights regarding the influence that cognitive aging may have on metacognitive accuracy and text processing in an everyday task.
Abstract:
The aim of this work was to encapsulate casein hydrolysate by complex coacervation with soybean protein isolate (SPI)/pectin. Three treatments were studied, with wall material to core ratios of 1:1, 1:2 and 1:3. The samples were evaluated for morphological characteristics, moisture, hygroscopicity, solubility, hydrophobicity, surface tension, encapsulation efficiency and bitter taste, the latter assessed by a trained sensory panel using a paired comparison test. The samples were very stable in cold water. The hydrophobicity decreased inversely with the hydrolysate content of the microcapsules. Encapsulated samples had lower hygroscopicity values than the free hydrolysate. The encapsulation efficiency varied from 91.62% to 78.8%. Encapsulated samples had similar surface tensions, with higher values than the free hydrolysate. The sensory panel judged the encapsulated samples to be less bitter (P < 0.05) than the free hydrolysate, showing that complex coacervation with SPI/pectin as wall material is an efficient method for microencapsulation and for attenuating the bitter taste of the hydrolysate.
Abstract:
Object selection refers to the mechanism of extracting objects of interest while ignoring other objects and the background in a given visual scene. It is a fundamental issue for many computer vision and image analysis techniques, and it is still a challenging task for artificial visual systems. Chaotic phase synchronization takes place in cases involving almost identical dynamical systems and means that the phase difference between the systems is kept bounded over time, while their amplitudes remain chaotic and may be uncorrelated. Rather than complete synchronization, phase synchronization is believed to be a mechanism for neural integration in the brain. In this paper, an object selection model is proposed. Oscillators in the network representing the salient object in a given scene are phase synchronized, while no phase synchronization occurs for background objects. In this way, the salient object can be extracted. The model also includes a shift mechanism to move attention from one object to another. Computer simulations show that the model produces results similar to those observed in natural vision systems.
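To give a flavour of the selection principle (phase locking within the salient group, none elsewhere), here is a minimal Python sketch. It deliberately substitutes simple Kuramoto phase oscillators for the paper's chaotic oscillators, and the coupling scheme, parameters and order-parameter readout are assumptions made purely for illustration, not the authors' network.

# Illustrative only: Kuramoto phase oscillators stand in for the paper's chaotic
# oscillators, showing how a strongly coupled group ("salient object") phase-locks
# while weakly coupled "background" oscillators stay desynchronised.
import math
import random

def simulate(n_object=5, n_background=5, k_object=2.0, k_background=0.05,
             dt=0.01, steps=5000, seed=0):
    random.seed(seed)
    n = n_object + n_background
    phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]
    freqs = [1.0 + random.gauss(0, 0.1) for _ in range(n)]

    def coupling_strength(i, j):
        # Strong coupling within the "salient object" group, weak everywhere else.
        return k_object if i < n_object and j < n_object else k_background

    for _ in range(steps):
        new_phases = []
        for i in range(n):
            drive = sum(coupling_strength(i, j) * math.sin(phases[j] - phases[i])
                        for j in range(n) if j != i) / (n - 1)
            new_phases.append(phases[i] + dt * (freqs[i] + drive))
        phases = new_phases
    return phases

def order_parameter(phases):
    """Magnitude of the mean phase vector: values near 1 mean phase synchronisation."""
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

if __name__ == "__main__":
    phases = simulate()
    print("salient-object oscillators: r =", round(order_parameter(phases[:5]), 3))
    print("background oscillators:     r =", round(order_parameter(phases[5:]), 3))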