909 results for Simultaneous Tasks
Abstract:
This study investigated the influence of top-down and bottom-up information on speech perception in complex listening environments. Specifically, the effects of listening to different types of processed speech on intelligibility and on simultaneous visual-motor performance were examined. The goal was to extend the generalizability of speech perception results to environments outside the laboratory. The effect of bottom-up information was evaluated with natural, cell phone, and synthetic speech. The effect of simultaneous tasks was evaluated with concurrent visual-motor and memory tasks. Earlier work on the perception of speech during simultaneous visual-motor tasks has shown inconsistent results (Choi, 2004; Strayer & Johnston, 2001). In the present experiments, two dual-task paradigms were constructed to mimic non-laboratory listening environments. In the first two experiments, an auditory word repetition task was the primary task and a visual-motor task was the secondary task. Participants were presented with different kinds of speech in a background of multi-speaker babble and were asked to repeat the last word of every sentence while performing the simultaneous tracking task. Word accuracy and visual-motor task performance were measured. Taken together, the results of Experiments 1 and 2 showed that the intelligibility of natural speech was better than that of synthetic speech, and that synthetic speech was better perceived than cell phone speech. The visual-motor methodology provided independent and supplementary information and a better understanding of the entire speech perception process. Experiment 3 was conducted to determine whether the automaticity of the tasks (Schneider & Shiffrin, 1977) helped to explain the results of the first two experiments. It was found that cell phone speech allowed better simultaneous pursuit rotor performance only at low intelligibility levels, when participants ignored the listening task. Also, simultaneous task performance improved dramatically for natural speech when intelligibility was good. Overall, it could be concluded that knowledge of intelligibility alone is insufficient to characterize the processing of different speech sources. Additional measures such as attentional demands and performance of simultaneous tasks were also important in characterizing the perception of different kinds of speech in complex listening environments.
Abstract:
When stimuli presented to the two eyes differ considerably, stable binocular fusion fails, and the subjective percept alternates between the two monocular images, a phenomenon known as binocular rivalry. The influence of attention over this perceptual switching has long been studied, and although there is evidence that attention can affect the alternation rate, its role in the overall dynamics of the rivalry process remains unclear. The present study investigated the relationship between the attention paid to the rivalry stimulus and the dynamics of the perceptual alternations. Specifically, the temporal course of binocular rivalry was studied as the subjects performed difficult nonvisual and visual concurrent tasks, directing their attention away from the rivalry stimulus. Periods of complete perceptual dominance were compared for the attended condition, where the subjects reported perceptual changes, and the unattended condition, where one of the simultaneous tasks was performed. During both the attended and unattended conditions, phases of rivalry dominance were obtained by analyzing the subject's optokinetic nystagmus recorded by an electrooculogram, where the polarity of the nystagmus served as an objective indicator of the perceived direction of motion. In all cases, the presence of a difficult concurrent task had little or no effect on the statistics of the alternations, as judged by two classic tests of rivalry, although the overall alternation rate showed a small but significant increase with the concurrent task. It is concluded that the statistical patterns of rivalry alternations are not governed by attentional shifts or decision-making on the part of the subject.
Abstract:
In software development organizations there is sometimes a need for change. In order to meet continuously increasing demands from their customers, Sandvik IT Services (SITS), at Sandvik in Sweden, needed to improve the way they worked with software development. Due to issues such as a large amount of work in progress and many simultaneous tasks for individuals in the teams, which caused stress, it was almost impossible to address the question of working with improvements. In order to enable the improvement process, Kanban was introduced in the software development teams. Kanban for software development is a change method created by David J. Anderson. The purpose of this thesis is twofold. One part is to assess what effects Kanban has had on the software development teams. The other part is to document the Kanban implementation process at SITS. The documentation has been produced on the basis of both company-internal resources and observations of the Kanban implementation process. The effects of Kanban have been researched with an interview survey of the teams that have gone through the kick start of the Kanban process. The result of the thesis is also twofold. One part of the result is an extensive documentation of the implementation process of Kanban at SITS. The other part is an assessment of the effects that Kanban has had at SITS. The major effects have been that the teams are experiencing less stress, more focus on quality and better customer collaboration. It is also evident that it takes time for some effects to evolve when implementing Kanban.
Abstract:
The control of stances such as the upright stance seems not to have a purpose in itself; this control could facilitate the execution of other simultaneous tasks, the so-called suprapostural tasks. The goal of this study was to determine the effects of saccadic eye movements on the control of posture. Twelve adult participants had their body oscillations analyzed while standing upright for 70 s, in the postural conditions of feet apart and feet together, performing either fixation on a central target or horizontal saccadic movements at slow (0.5 Hz) and fast (1.1 Hz) frequencies. The results showed that saccadic movements, independently of their frequency, strongly reduced trunk and head oscillations in the anterior-posterior (AP) axis. In this axis, there was an effect of feet position only on head oscillation. In the medio-lateral (ML) axis, the results showed a strong effect of feet position, with body oscillation decreased in the feet-apart condition. The effect of the visual task in the ML axis occurred only for trunk oscillation, not reaching the significance level in the pairwise comparisons. In the AP axis, the data corroborate a facilitatory explanation of the control of posture: the reduction in body oscillation limited the variations of the stimulus image projected on the retina, facilitating the execution of saccadic movements as compared to fixation. In the ML axis, the effect of reducing the base of support was more evident than the effect of saccadic movements, suggesting that the available resources were used primarily for the postural task to the detriment of the visual task. Additionally, aspects such as attentional focus and sensory information pick-up are discussed as mechanisms involved in this task.
Abstract:
This work discusses the Social Representations constructed by Pedagogical Coordinators about their own work. Given the profession's current agenda of reflection, this is an important discussion, especially when the role is understood as a multifaceted professional activity that encompasses several simultaneous functions and duties. To establish the theoretical and methodological basis for analyzing the theme, reference authors were studied, such as Serge Moscovici (1971), with his theory of Social Representations, and António Nóvoa, who discusses a theory of personhood inscribed within a theory of professionality in order to capture the meaning of a profession. The research was grounded in the Brazilian National Education Guidelines and Framework Law (LDB-9394/96), as well as in the 2002 Classificação Brasileira de Ocupações (Brazilian Classification of Occupations), which describes and delimits the matrices of responsibility of the position and/or function of the Pedagogical Coordinator. Regarding methodology, the study articulates bibliographic research with field research, conducting interviews with pedagogical coordinators from several educational institutions based on an open-ended script. The results revealed new relationships and new ways of understanding the reality of the Pedagogical Coordinator's work, their professional role, and the difficulties faced in everyday practice, offering some reflections on the policies and practices related to their role in the organization of the work of and in the school.
Abstract:
Stand-alone virtual environments (VEs) using haptic devices have proved useful for assembly/disassembly simulation of mechanical components. Nowadays, collaborative haptic virtual environments (CHVEs) are also emerging. A new peer-to-peer collaborative haptic assembly simulator (CHAS) has been developed whereby two users can simultaneously carry out assembly tasks using haptic devices. Two major challenges have been addressed: virtual scene synchronization (consistency) and the provision of reliable and effective haptic feedback. A consistency-maintenance scheme has been designed to solve the challenge of achieving consistency. Results show that consistency is guaranteed. Furthermore, a force-smoothing algorithm has been developed which is shown to improve the quality of force feedback under adverse network conditions. A range of laboratory experiments and several real trials between Labein (Spain) and Queen’s University Belfast (Northern Ireland) have verified that CHAS can provide adequate haptic interaction when both users perform remote assemblies (assembly of one user's object with an object grasped by the other user). Moreover, when collisions between grasped objects occur (dependent collisions), the haptic feedback usually provides satisfactory haptic perception. Based on a qualitative study, it is shown that the haptic feedback obtained during remote assemblies with dependent collisions further improves the sense of co-presence between users relative to visual feedback alone.
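The abstract does not specify the force-smoothing algorithm itself. As a point of reference only, the sketch below shows a first-order exponential smoothing filter of the kind commonly applied to force samples before haptic rendering; the function name and the smoothing factor `alpha` are illustrative assumptions, not details of CHAS.

```python
import numpy as np

def smooth_force(raw_force, prev_smoothed, alpha=0.2):
    """Exponentially smooth a 3-D force vector before haptic rendering.

    alpha close to 0 gives heavy smoothing (laggier, but less jitter under
    poor network conditions); alpha close to 1 passes the raw force through.
    This is an illustrative filter, not the CHAS algorithm itself.
    """
    raw_force = np.asarray(raw_force, dtype=float)
    return alpha * raw_force + (1.0 - alpha) * prev_smoothed

# Example: jittery force samples arriving from the network.
smoothed = np.zeros(3)
for sample in ([0.0, 0.0, 1.2], [0.0, 0.0, 0.3], [0.0, 0.0, 1.5]):
    smoothed = smooth_force(sample, smoothed)
```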
Abstract:
The work presents a new approach to the problem of simultaneous localization and mapping - SLAM - inspired by computational models of the hippocampus of rodents. The rodent hippocampus has been extensively studied with respect to navigation tasks, and displays many of the properties of a desirable SLAM solution. RatSLAM is an implementation of a hippocampal model that can perform SLAM in real time on a real robot. It uses a competitive attractor network to integrate odometric information with landmark sensing to form a consistent representation of the environment. Experimental results show that RatSLAM can operate with ambiguous landmark information and recover from both minor and major path integration errors.
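The abstract describes RatSLAM's use of a competitive attractor network but not its implementation. The following is a minimal, generic sketch of one competitive attractor update (local excitation, global inhibition, normalization) to illustrate the mechanism; the kernel width, inhibition level, and landmark-injection step are assumptions for illustration, not the published model.

```python
import numpy as np

def attractor_step(activity, excite_kernel, inhibition=0.02):
    """One update of a 1-D competitive attractor network (illustrative only).

    Each cell excites its neighbours through a wrapped Gaussian kernel, all
    cells receive uniform global inhibition, negative activity is clipped,
    and the packet is renormalised so total activity stays at 1.
    """
    # Local excitation: circular convolution with the excitation kernel.
    excited = np.real(np.fft.ifft(np.fft.fft(activity) * np.fft.fft(excite_kernel)))
    activity = np.clip(excited - inhibition, 0.0, None)
    total = activity.sum()
    # If inhibition wipes out all activity, fall back to a uniform packet.
    return activity / total if total > 0 else np.full_like(activity, 1.0 / len(activity))

n = 100
kernel = np.roll(np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 4.0), n // 2)
kernel /= kernel.sum()
activity = np.ones(n) / n
activity[30] += 0.5            # e.g. energy injected by a landmark observation
activity /= activity.sum()
for _ in range(20):
    activity = attractor_step(activity, kernel)   # packet settles around cell 30
```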
Abstract:
This paper presents a novel method to rank map hypotheses by the quality of localization they afford. The highest-ranked hypothesis at any moment becomes the active representation that is used to guide the robot to its goal location. A single static representation is insufficient for navigation in dynamic environments where paths can be blocked periodically, a common scenario that poses significant challenges for typical planners. In our approach we simultaneously rank multiple map hypotheses by the influence that localization in each of them has on locally accurate odometry. This is done online for the current locally accurate window by formulating a factor graph of odometry relaxed by localization constraints. Comparison of the resulting perturbed odometry of each hypothesis with the original odometry yields a score that can be used to rank map hypotheses by their utility. We deploy the proposed approach on a real robot navigating a structurally noisy office environment. The configuration of the environment is physically altered outside the robot's sensory horizon during navigation tasks to demonstrate the proposed approach of hypothesis selection.
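A minimal sketch of the ranking idea, under the assumption that each hypothesis' perturbed odometry has already been obtained by re-optimizing the odometry factor graph with that hypothesis' localization factors added: the score below is simply the squared deviation from the original odometry window, with lower scores indicating more consistent hypotheses. The pose representation and the metric are illustrative, not the paper's exact formulation.

```python
import numpy as np

def hypothesis_score(original_poses, perturbed_poses):
    """Score a map hypothesis by how much its localization constraints
    perturb the locally accurate odometry window (lower is better).

    Both inputs are (N, 3) arrays of [x, y, theta] poses over the current
    window; the perturbed poses would come from re-optimizing the odometry
    factor graph with that hypothesis' localization factors.
    """
    diff = original_poses - perturbed_poses
    # Wrap heading differences into [-pi, pi] before squaring.
    diff[:, 2] = (diff[:, 2] + np.pi) % (2 * np.pi) - np.pi
    return float(np.sum(diff ** 2))

def rank_hypotheses(original_poses, perturbed_by_hypothesis):
    """Return hypothesis ids sorted from most to least consistent."""
    scores = {h: hypothesis_score(original_poses, p)
              for h, p in perturbed_by_hypothesis.items()}
    return sorted(scores, key=scores.get)
```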
Abstract:
Facet-based sentiment analysis involves discovering the latent facets, sentiments and their associations. Traditional facet-based sentiment analysis algorithms typically perform the various tasks in sequence, and fail to take advantage of the mutual reinforcement of the tasks. Additionally, inferring sentiment levels typically requires domain knowledge or human intervention. In this paper, we propose a series of probabilistic models that jointly discover latent facets and sentiment topics, and also order the sentiment topics with respect to a multi-point scale, in a language- and domain-independent manner. This is achieved by simultaneously capturing both short-range syntactic structure and long-range semantic dependencies between the sentiment and facet words. The models further incorporate coherence in reviews, where reviewers dwell on one facet or sentiment level before moving on, for more accurate facet and sentiment discovery. For reviews which are supplemented with ratings, our models automatically order the latent sentiment topics, without requiring seed words or domain knowledge. To the best of our knowledge, our work is the first attempt to combine the notions of syntactic and semantic dependencies in the domain of review mining. Further, the concept of facet and sentiment coherence has not been explored earlier either. Extensive experimental results on real-world review data show that the proposed models outperform various state-of-the-art baselines for facet-based sentiment analysis.
Abstract:
This paper addresses the problem of localizing the sources of contaminants spread in the environment, and mapping the boundary of the affected region, using an innovative swarm-intelligence-based technique. Unlike most work in this area, the algorithm is capable of localizing multiple sources simultaneously while also mapping the boundary of the contaminant spread. At the same time, the algorithm is suitable for implementation on a mobile robotic sensor network. Two types of agents, called source localization agents (or S-agents) and boundary mapping agents (or B-agents), are used for this purpose. The paper uses the basic glowworm swarm optimization (GSO) algorithm, which has previously been used only for multiple signal source localization, and modifies it considerably to make it suitable for both these tasks. This requires the definition of new behaviour patterns for the agents, based on their terminal performance, as well as interactions between them that help the swarm to split into subgroups easily, identify contaminant sources, and spread along the boundary to map its full length. Simulation results are given to demonstrate the efficacy of the algorithm.
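For context, the basic GSO update that the paper starts from (luciferin update, probabilistic movement toward brighter neighbours, and adaptive decision range) can be sketched as below; the parameter values are arbitrary, and the paper's modified S-agent/B-agent behaviours are not reproduced.

```python
import numpy as np

def gso_step(pos, luciferin, r_d, signal,
             rho=0.4, gamma=0.6, step=0.03, r_s=3.0, beta=0.08, n_t=5):
    """One iteration of basic glowworm swarm optimization (illustrative sketch).

    pos: (N, 2) agent positions, luciferin: (N,) glow levels,
    r_d: (N,) local decision ranges, signal: callable mapping a position
    to the sensed source intensity.
    """
    # Luciferin update from the locally sensed signal.
    luciferin = (1 - rho) * luciferin + gamma * np.array([signal(p) for p in pos])
    new_pos = pos.copy()
    for i in range(len(pos)):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = np.where((dist < r_d[i]) & (luciferin > luciferin[i]))[0]
        if len(nbrs) > 0:
            # Move toward a brighter neighbour, chosen with probability
            # proportional to the luciferin difference.
            probs = luciferin[nbrs] - luciferin[i]
            j = np.random.choice(nbrs, p=probs / probs.sum())
            direction = pos[j] - pos[i]
            new_pos[i] = pos[i] + step * direction / np.linalg.norm(direction)
        # Adapt the local decision range toward the desired neighbour count.
        r_d[i] = min(r_s, max(0.0, r_d[i] + beta * (n_t - len(nbrs))))
    return new_pos, luciferin, r_d
```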
Abstract:
This thesis presents a novel framework for state estimation in the context of robotic grasping and manipulation. The overall estimation approach is based on fusing various visual cues for manipulator tracking, namely appearance and feature-based, shape-based, and silhouette-based visual cues. Similarly, a framework is developed to fuse the above visual cues, but also kinesthetic cues such as force-torque and tactile measurements, for in-hand object pose estimation. The cues are extracted from multiple sensor modalities and are fused in a variety of Kalman filters.
A hybrid estimator is developed to estimate both a continuous state (robot and object states) and discrete states, called contact modes, which specify how each finger contacts a particular object surface. A static multiple model estimator is used to compute and maintain this mode probability. The thesis also develops an estimation framework for estimating model parameters associated with object grasping. Dual and joint state-parameter estimation is explored for parameter estimation of a grasped object's mass and center of mass. Experimental results demonstrate simultaneous object localization and center of mass estimation.
Dual-arm estimation is developed for two-arm robotic manipulation tasks. Two types of filters are explored; the first is an augmented filter that contains both arms in the state vector, while the second runs two filters in parallel, one for each arm. These two frameworks and their performance are compared in a dual-arm task of removing a wheel from a hub.
This thesis also presents a new method for action selection involving touch. This next best touch method selects an available action for interacting with an object that will gain the most information. The algorithm employs information theory to compute an information gain metric that is based on a probabilistic belief suitable for the task (a minimal sketch of this selection criterion is given after this abstract). An estimation framework is used to maintain this belief over time. Kinesthetic measurements such as contact and tactile measurements are used to update the state belief after every interactive action. Simulation and experimental results are demonstrated using next best touch for object localization, specifically a door handle on a door. The next best touch theory is extended for model parameter determination. Since many objects within a particular object category share the same rough shape, principal component analysis may be used to parametrize the object mesh models. These parameters can be estimated using the action selection technique, which selects the touching action that best both localizes the object and estimates these parameters. Simulation results are then presented involving localizing and determining a parameter of a screwdriver.
Lastly, the next best touch theory is further extended to model classes. Instead of estimating parameters, object class determination is incorporated into the information gain metric calculation. The best touching action is selected in order to best discern between the possible model classes. Simulation results are presented to validate the theory.
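A minimal sketch of the next-best-touch selection criterion described above, assuming a discrete belief over hypotheses (e.g. candidate object poses or model classes) and a known per-action measurement model; the discretization and the measurement model are illustrative assumptions, not the thesis' estimator.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def next_best_touch(belief, likelihoods):
    """Pick the touch action with the largest expected information gain.

    belief: (H,) probability over discrete hypotheses.
    likelihoods: dict action -> (Z, H) array of p(observation z | hypothesis h)
                 for that action.
    """
    prior_h = entropy(belief)
    best_action, best_gain = None, -np.inf
    for action, like in likelihoods.items():
        p_z = like @ belief                        # predictive p(z | action)
        expected_posterior_h = 0.0
        for z in range(like.shape[0]):
            if p_z[z] <= 0:
                continue
            posterior = like[z] * belief / p_z[z]  # Bayes update for outcome z
            expected_posterior_h += p_z[z] * entropy(posterior)
        gain = prior_h - expected_posterior_h      # expected entropy reduction
        if gain > best_gain:
            best_action, best_gain = action, gain
    return best_action, best_gain

# Example with four candidate poses and two hypothetical touch actions.
belief = np.array([0.25, 0.25, 0.25, 0.25])
likelihoods = {
    "touch_left": np.array([[0.9, 0.1, 0.5, 0.5],
                            [0.1, 0.9, 0.5, 0.5]]),   # informative about poses 0/1
    "touch_top":  np.array([[0.5, 0.5, 0.5, 0.5],
                            [0.5, 0.5, 0.5, 0.5]]),   # uninformative
}
action, gain = next_best_touch(belief, likelihoods)    # -> "touch_left"
```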
Abstract:
A common approach to visualise multidimensional data sets is to map every data dimension to a separate visual feature. It is generally assumed that such visual features can be judged independently from each other. However, we have recently shown that interactions between features do exist [Hannus et al. 2004; van den Berg et al. 2005]. In those studies, we first determined the individual colour and size contrast or colour and orientation contrast necessary to achieve a fixed level of discrimination performance in single-feature search tasks. These contrasts were then used in a conjunction search task in which the target was defined by a combination of a colour and a size or a colour and an orientation. We found that in conjunction search, despite the matched feature discriminability, subjects significantly more often chose an item with the correct colour than one with the correct size or orientation. This finding may have consequences for visualisation: the saliency of information coded by objects' size or orientation may change when there is a need to simultaneously search for colour that codes another aspect of the information. In the present experiment, we studied whether a colour bias can also be found in a more complex and continuous task. Subjects had to search for a target in a node-link diagram consisting of 50 nodes, while their eye movements were being tracked. Each node was assigned a random colour and size (from a range of 10 possible values with fixed perceptual distances). We found that when we base the distances on the mean threshold contrasts that were determined in our previous experiments, the fixated nodes tend to resemble the target colour more than the target size (Figure 1a). This indicates that despite the perceptual matching, colour is judged with greater precision than size during conjunction search. We also found that when we double the size contrast (i.e. the distances between the 10 possible node sizes), this effect disappears (Figure 1b). Our findings confirm that the previously found decrease in salience of other features during colour conjunction search is also present in more complex (more 'visualisation-realistic') visual search tasks. The asymmetry in visual search behaviour can be compensated for by manipulating step sizes (perceptual distances) within feature dimensions. Our results therefore also imply that feature hierarchies are not completely fixed and may be adapted to the requirements of a particular visualisation.
Abstract:
The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. These two tasks have a complementary relationship as the temporal constraints provide valuable neighborhood information for dimensionality reduction and conversely, the low-dimensional space allows dynamics to be learnt efficiently. Solving these two tasks simultaneously allows important information to be exchanged mutually. If nonlinear models are required to capture the rich complexity of time series, then the learning problem becomes harder as the nonlinearities in both tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models. The interactions among the linear models are captured in a graphical model. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. Evaluation of the proposed framework with competing approaches is conducted in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.
Abstract:
The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. These two tasks have a complementary relationship as the temporal constraints provide valuable neighborhood information for dimensionality reduction and conversely, the low-dimensional space allows dynamics to be learnt efficiently. Solving these two tasks simultaneously allows important information to be exchanged mutually. If nonlinear models are required to capture the rich complexity of time series, then the learning problem becomes harder as the nonlinearities in both tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models. The interactions among the linear models are captured in a graphical model. The model structure setup and parameter learning are done using a variational Bayesian approach, which enables automatic Bayesian model structure selection, hence solving the problem of over-fitting. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. Evaluation of the proposed framework with competing approaches is conducted in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.
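One common way to realize the piecewise linear manifold-plus-dynamics idea is a switching linear dynamical system; the sketch below samples from such a model to illustrate the structure (discrete mode, per-mode linear latent dynamics, linear emission to the high-dimensional observation). It is a generic illustration, not the authors' variational Bayesian formulation or their learning algorithm.

```python
import numpy as np

def sample_slds_step(z, x, Pi, A, Q, C, R, rng):
    """Sample one step of a switching linear dynamical system (illustrative).

    z: current discrete mode, x: (d,) latent state.
    Pi: (K, K) mode transition matrix, A: (K, d, d) per-mode dynamics,
    Q: (K, d, d) process noise, C: (D, d) emission matrix, R: (D, D) obs noise.
    """
    z_next = rng.choice(len(Pi), p=Pi[z])                       # switch mode
    x_next = rng.multivariate_normal(A[z_next] @ x, Q[z_next])  # linear dynamics
    y_next = rng.multivariate_normal(C @ x_next, R)             # high-dim observation
    return z_next, x_next, y_next

# Toy example: 2 modes, 2-D latent manifold, 5-D observed time series.
rng = np.random.default_rng(0)
K, d, D = 2, 2, 5
Pi = np.array([[0.95, 0.05], [0.05, 0.95]])
A = np.stack([0.99 * np.eye(d), 0.9 * np.eye(d)])
Q = np.stack([0.01 * np.eye(d)] * K)
C = rng.standard_normal((D, d))
R = 0.05 * np.eye(D)
z, x = 0, np.zeros(d)
for _ in range(10):
    z, x, y = sample_slds_step(z, x, Pi, A, Q, C, R, rng)
```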