868 results for Concurrent Task
Abstract:
Abstract Background The present study examined absolute alpha power using quantitative electroencephalography (qEEG) in bilateral temporal and parietal cortices in novice soldiers under the influence of methylphenidate (MPH) during the preparatory aiming period of a practical pistol-shooting task. We anticipated higher bi-hemispheric cortical activation in the preparatory period relative to pre-shot baseline in the methylphenidate group when compared with the control group, because methylphenidate has been shown to enhance task-related cognitive functions. Methods Twenty healthy, novice soldiers were equally distributed into a control group (CG; n = 10) and an MPH 10 mg group (MG; n = 10) using a randomized, double-blind design. Subjects performed a pistol-shooting task while electroencephalographic activity was acquired. Results We found main effects for group and practice blocks on behavioral measures, and interactions between group and phases on electroencephalographic measures for electrodes T3, T4, P3 and P4. Regarding the behavioral measures, the MPH group demonstrated significantly poorer shooting performance than the control group; in addition, significant increases in scores over practice blocks were found in both groups. Regarding the electroencephalographic data, we observed a significant increase in alpha power over practice blocks, but alpha power was significantly lower for the MPH group when compared with the placebo group. Moreover, we observed a significant decrease in alpha power at electrodes T4 and P4 during PTM. Conclusion Although we found no correlation between behavioral and EEG data, our findings show that MPH did not prevent the learning of the task in healthy subjects. However, during the practice blocks (PBs) it also did not favor performance when compared with the control group.
It seems that the CNS effects of MPH demanded an initial readjustment period of integrated operations relative to the sensorimotor system. In other words, MPH seems to provoke a period of initial instability due to a possible modulation of neural activity, which can be explained by lower levels of alpha power (i.e., higher cortical activity). However, after the end of PB1 a new stabilization was established in the neural circuits, due to repetition of the task, resulting in higher cortical activity during the task. In conclusion, the MPH group's performance was not initially superior to that of the control group, but eventually exceeded it, albeit without achieving statistical significance.
Abstract:
Background: The Maternal-Child Pastoral is a volunteer-based community organization in the Dominican Republic that works with families to improve child survival and development. A program was designed and implemented to promote key practices of maternal and child care through meetings with pregnant women and home visits promoting child growth and development. This study aims to evaluate the impact of the program on nutritional status indicators of children in their first two years of life. Methods: A quasi-experimental design was used, with groups paired according to a socioeconomic index, comparing eight geographical areas of intervention with eight control areas. The intervention was carried out by lay health volunteers. Mothers in the intervention areas received home visits each month and participated in a group activity held biweekly during pregnancy and monthly after birth. The primary outcomes were length and body mass index for age. Statistical analyses were based on linear and logistic regression models. Results: 196 children in the intervention group and 263 in the control group were evaluated. The intervention did not show statistically significant effects on length, but the point estimates found were in the desired direction: mean difference 0.21 (95%CI −0.02; 0.44) for length-for-age Z-score and OR 0.50 (95%CI 0.22; 1.10) for stunting. Significant reductions in BMI-for-age Z-score (−0.31, 95%CI −0.49; −0.12) and in BMI-for-age > 85th percentile (OR 0.43, 95%CI 0.23; 0.77) were observed. The intervention showed positive effects on some indicators of intermediary factors such as growth monitoring, health promotion activities, micronutrient supplementation, exclusive breastfeeding and complementary feeding. Conclusions: Despite finding effect measures pointing in the desired direction with respect to malnutrition, we could only detect a reduction in the risk of overweight attributable to the intervention.
The findings related to obesity prevention may be of interest in the context of the nutritional transition. Given the size of this study, the results are encouraging and we believe a larger study is warranted.
Abstract:
Abstract Background Catching an object is a complex movement that involves not only programming but also effective motor coordination. Such behavior is related to the activation and recruitment of cortical regions that participate in the sensorimotor integration process. This study aimed to elucidate the cortical mechanisms involved in anticipatory actions when performing a task of catching an object in free fall. Methods Quantitative electroencephalography (qEEG) was recorded using a 20-channel EEG system while 20 healthy right-handed participants performed the ball-catching task. We used EEG coherence analysis to investigate subdivisions of the alpha (8-12 Hz) and beta (12-30 Hz) bands, which are related to cognitive processing and sensorimotor integration. Results We found main effects for the factor block: for alpha-1, coherence decreased from the first to the sixth block, and the opposite occurred for alpha-2 and beta-2, with coherence increasing across blocks. Conclusion We conclude that to perform our task successfully, which involved anticipatory processes (i.e. feedback mechanisms), subjects exhibited substantial involvement of sensorimotor and associative areas, possibly due to the organization of information to process visuospatial parameters and then catch the falling object.
Abstract:
Abstract Background Timing synchronization is an important ability for the acquisition and performance of motor skills, generating the need to adapt the actions of body segments to external events of the environment that change their position in space. Individuals with Down syndrome (DS) may present deficits when performing tasks with a synchronization demand. We aimed to investigate the performance of individuals with DS in a simple coincident timing task. Method 32 individuals were divided into 2 groups: a Down syndrome group (DSG) comprising 16 individuals with a mean age of 20 (+/− 5) years, and a control group (CG) comprising 16 individuals of the same age. All individuals performed the Simple Timing (ST) task and their performance was measured in milliseconds. The study was conducted in a single phase with the execution of 20 consecutive trials per participant. Results There was a significant difference in the intergroup analysis for the accuracy adjustment - absolute error (Z = 3.656, p = 0.001) - and for performance consistency - variable error (Z = 2.939, p = 0.003). Conclusion Individuals with DS have more difficulty integrating motor action with an external stimulus and also present greater inconsistency in performance. Both groups presented the same tendency to delay their motor responses.
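For reference, the two performance measures reported here (absolute error for accuracy, variable error for consistency) are commonly computed from the signed per-trial timing errors. A minimal Python sketch with illustrative trial values follows; whether the study used the population or sample standard deviation is not stated, so `pstdev` is an assumption:

```python
import statistics

def absolute_error(errors_ms):
    """Accuracy: mean of the absolute per-trial timing errors."""
    return sum(abs(e) for e in errors_ms) / len(errors_ms)

def variable_error(errors_ms):
    """Consistency: standard deviation of the signed errors
    (population SD here; an assumption, not from the abstract)."""
    return statistics.pstdev(errors_ms)

# Illustrative signed timing errors in ms (negative = response too early):
trials = [-20, 35, -10, 50, 5]
ae = absolute_error(trials)   # -> 24.0
ve = variable_error(trials)   # ≈ 26.57
```

A larger absolute error indicates poorer accuracy; a larger variable error indicates less consistent responses across the 20 trials.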
Abstract:
The occurrence of a weak auditory warning stimulus increases the speed of the response to a subsequent visual target stimulus that must be identified. This facilitatory effect has been attributed to the temporal expectancy automatically induced by the warning stimulus. It has not been determined whether this results from a modulation of the stimulus identification process, the response selection process or both. The present study examined these possibilities. A group of 12 young adults performed a reaction time location identification task and another group of 12 young adults performed a reaction time shape identification task. A visual target stimulus was presented 1850 to 2350 ms plus a fixed interval (50, 100, 200, 400, 800, or 1600 ms, depending on the block) after the appearance of a fixation point, on its left or right side, above or below a virtual horizontal line passing through it. In half of the trials, a weak auditory warning stimulus (S1) appeared 50, 100, 200, 400, 800, or 1600 ms (according to the block) before the target stimulus (S2). Twelve trials were run for each condition. The S1 produced a facilitatory effect for the 200, 400, 800, and 1600 ms stimulus onset asynchronies (SOA) in the case of the side stimulus-response (S-R) corresponding condition, and for the 100 and 400 ms SOA in the case of the side S-R non-corresponding condition. Since these two conditions differ mainly by their response selection requirements, it is reasonable to conclude that automatic temporal expectancy influences the response selection process.
Abstract:
This work investigated the effects of the frequency and precision of feedback on the learning of a dual-motor task. One hundred and twenty adults were randomly assigned to six groups differing in knowledge of results (KR) frequency (100%, 66% or 33%) and precision (specific or general). In the stabilization phase, participants performed the dual task (a combination of linear positioning and manual force control) with the provision of KR. Ten non-KR adaptation trials were then performed on the same task, but with the introduction of an opposing electromagnetic traction force. The analysis showed a significant main effect for frequency of KR: participants who received KR in 66% of the stabilization trials showed better adaptation performance than those who received it in 100% or 33% of trials. This finding reinforces the idea that there is an optimal level of information, neither too high nor too low, for motor learning to be effective.
Abstract:
The effect produced by a warning stimulus (WS) in reaction time (RT) tasks is commonly attributed to a facilitation of sensorimotor mechanisms by alertness. Recently, evidence was presented that this effect is also related to a proactive inhibition of motor control mechanisms. This inhibition would hinder responding to the WS instead of the target stimulus (TS). Some studies have shown that auditory WS produce a stronger facilitatory effect than visual WS. The present study investigated whether the former also produce a stronger inhibitory effect than the latter. In a first session, the RTs to a visual target were evaluated in two groups of volunteers. In a second session, subjects reacted to the visual target both with (50% of the trials) and without (50% of the trials) a WS. In the trials with a WS, one group received a visual WS and the other group an auditory WS. In the first session, the mean RTs of the two groups did not differ significantly. In the second session, the mean RT of both groups was shorter in the presence of the WS than in its absence. The mean RT in the absence of the auditory WS was significantly longer than the mean RT in the absence of the visual WS. Mean RTs did not differ significantly between the WS-present conditions of the visual and auditory groups. The longer RTs of the auditory WS group compared with the visual WS group in the WS-absent trials suggest that auditory WS exert a stronger inhibitory influence on responsivity than visual WS.
Abstract:
Objectives The current study investigated to what extent task-specific practice can help reduce the adverse effects of high pressure on performance in a simulated penalty kick task. Based on the assumption that practice attenuates the required attentional resources, it was hypothesized that task-specific practice would enhance resilience against high pressure. Method Participants practiced a simulated penalty kick in which they had to move a lever to the side opposite to the goalkeeper's dive. The goalkeeper moved at different times before ball-contact. Design Before and after task-specific practice, participants were tested on the same task under both low- and high-pressure conditions. Results Before practice, the performance of all participants worsened under high pressure; however, whereas one group of participants merely required more time to respond correctly to the goalkeeper's movement and showed a typical logistic relation between the percentage of correct responses and the time available to respond, a second group of participants showed a linear relationship between the percentage of correct responses and the time available to respond. This implies that they tended to make systematic errors for the shortest times available. Practice eliminated the debilitating effects of high pressure in the former group, whereas in the latter group high pressure continued to affect performance negatively. Conclusions Task-specific practice increased resilience to high pressure. However, the effect was a function of how participants initially responded to high pressure, that is, prior to practice. The results are discussed within the framework of attentional control theory (ACT).
Abstract:
Service Oriented Computing is a new programming paradigm for addressing distributed system design issues. Services are autonomous computational entities which can be dynamically discovered and composed in order to form more complex systems able to achieve different kinds of tasks. E-government, e-business and e-science are some examples of the IT areas where Service Oriented Computing will be exploited in the coming years. At present, the most prominent Service Oriented Computing technology is Web Services, whose specifications are enriched day by day by industrial consortia without following a precise and rigorous approach. This PhD thesis aims, on the one hand, at modelling Service Oriented Computing in a formal way in order to precisely define the main concepts it is based upon and, on the other hand, at defining a new approach, called the bipolar approach, for addressing system design issues by synergistically exploiting choreography and orchestration languages related by means of a mathematical relation called conformance. Choreography allows us to describe systems of services from a global viewpoint, whereas orchestration supplies a means for addressing such an issue from a local perspective. In this work we present SOCK, a process-algebra-based language inspired by the Web Service orchestration language WS-BPEL which captures the essentials of Service Oriented Computing. From the definition of SOCK we are able to define a general model for dealing with Service Oriented Computing where services and systems of services are related to the design of finite state automata and of process algebra concurrent systems, respectively. Furthermore, we introduce a formal language for dealing with choreography. Such a language is equipped with a formal semantics and forms, together with a subset of the SOCK calculus, the bipolar framework.
Finally, we present JOLIE, a Java implementation of a subset of the SOCK calculus and part of the bipolar framework we intend to promote.
Abstract:
Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons:
• Portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared to desktop and laptop systems.
• On the other hand, multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.
This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency in this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are more and more programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and industry that system configuration and management cannot be controlled efficiently relying only on low-level firmware and hardware drivers. In fact, at this level there is a lack of information about user application activity and, consequently, about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system.
A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) in order to improve the programmability and performance efficiency of such platforms.
Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs)
Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform. This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. It is a well-known problem in the literature: optimization problems of this kind are very complex even in much simplified variants, therefore most authors propose simplified models and heuristic approaches to solve them in reasonable time.
Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic or, more generally, incomplete search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and optimality gaps, formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms.
Energy Efficient LCD Backlight Autoregulation on a Real-Life Multimedia Application Processor
Despite the ever increasing advances in Liquid Crystal Display (LCD) technology, its power consumption is still one of the major limitations to the battery life of mobile appliances such as smartphones, portable media players, and gaming and navigation devices. There is a clear trend towards increasing LCD size to exploit the multimedia capabilities of portable devices that can receive and render high definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and pixel matrix driving circuits and is typically proportional to the panel area. As a result, its contribution is also likely to be considerable in future mobile appliances.
To address this issue, companies are proposing low power technologies suitable for mobile applications, supporting low power states and image control techniques. On the research side, several power saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques that change the image content to reduce the power associated with the crystal polarization; others aim at decreasing the backlight level while compensating for the luminance reduction with pixel-by-pixel image processing algorithms that limit the perceived quality degradation. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS. The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modification.
Thesis Overview
The remainder of the thesis is organized as follows. The first part is focused on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs. The methodology is based on functional simulation and full system power estimation. Chapter 4 targets allocation and scheduling of pipelined stream-oriented applications on top of distributed memory architectures with messaging support.
We tackled the complexity of the problem by means of decomposition and no-good generation, and proved the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers with efficient software implementation on a real architecture, the Cell Broadband Engine processor. The second part is focused on energy efficient software techniques for LCD displays. Chapter 8 gives an overview of portable device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy efficient software techniques present in the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions that have been discussed throughout this dissertation.
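As a baseline intuition for the allocation-and-scheduling problem discussed in this abstract: the thesis pursues exact, optimal methods via decomposition, whereas the sketch below is only a common greedy heuristic (list scheduling) over a made-up task graph, included to illustrate what "allocating and scheduling precedence-constrained tasks on processors" means:

```python
def list_schedule(durations, deps, n_procs):
    """Greedy list scheduling of precedence-constrained tasks.

    durations: {task: duration}; deps: {task: set of predecessor tasks}.
    Assumes the task graph is acyclic.
    Returns {task: (processor, start, end)}.
    """
    finish = {}                    # task -> finish time
    proc_free = [0.0] * n_procs    # next free instant of each processor
    schedule = {}
    remaining = set(durations)
    while remaining:
        # A task is ready once all of its predecessors have been scheduled.
        ready = [t for t in remaining if deps.get(t, set()).issubset(finish)]
        # Greedy priority: longest task first among the ready ones.
        task = max(ready, key=lambda t: durations[t])
        earliest = max((finish[p] for p in deps.get(task, set())), default=0.0)
        # Place the task on the processor where it can start soonest.
        proc = min(range(n_procs), key=lambda p: max(proc_free[p], earliest))
        start = max(proc_free[proc], earliest)
        proc_free[proc] = finish[task] = start + durations[task]
        schedule[task] = (proc, start, start + durations[task])
        remaining.remove(task)
    return schedule

# Illustrative 4-task graph on 2 processors (names and numbers are made up):
example = list_schedule(
    {"a": 2, "b": 3, "c": 1, "d": 2},   # durations
    {"c": {"a"}, "d": {"a", "b"}},      # precedence constraints
    2,
)
```

Unlike the exact approaches developed in the thesis, a heuristic like this gives no bound on how far its makespan is from the optimum, which is precisely the "optimality gap" the dissertation sets out to close.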
Abstract:
A prevalent claim is that we are in a knowledge economy. When we talk about the knowledge economy, we generally mean the concept of a "knowledge-based economy", indicating the use of knowledge and technologies to produce economic benefits. Hence knowledge is both a tool and a raw material (people's skills) for producing some kind of product or service. In this kind of environment, economic organization is undergoing several changes. For example, authority relations are less important, legal and ownership-based definitions of the boundaries of the firm are becoming irrelevant, and there are only few constraints on the set of coordination mechanisms. Hence what characterises a knowledge economy is the growing importance of human capital in productive processes (Foss, 2005) and the increasing knowledge intensity of jobs (Hodgson, 1999). Economic processes are also highly intertwined with social processes: they are likely to be informal and reciprocal rather than formal and negotiated. Another important point is the problem of the division of labor: as economic activity becomes mainly intellectual and requires the integration of specific and idiosyncratic skills, the task of dividing the job and assigning it to the most appropriate individuals becomes arduous, a "supervisory problem" (Hodgson, 1999) emerges, and traditional hierarchical control may become increasingly ineffective. Not only does the specificity of know-how make it awkward to monitor the execution of tasks; more importantly, top-down integration of skills may be difficult because 'the nominal supervisors will not know the best way of doing the job - or even the precise purpose of the specialist job itself - and the worker will know better' (Hodgson, 1999). We therefore expect that the organization of the economic activity of specialists should be, at least partially, self-organized. The aim of this thesis is to bridge studies from computer science, and in particular from Peer-to-Peer (P2P) networks, to organization theories.
We think that the P2P paradigm fits well with organization problems related to all those situations in which a central authority is not possible. We believe that P2P networks show a number of characteristics similar to firms working in a knowledge-based economy, and hence that the methodology used for studying P2P networks can be applied to organization studies. There are three main characteristics we think P2P networks have in common with firms involved in the knowledge economy:
- Decentralization: in a pure P2P system every peer is an equal participant; there is no central authority governing the actions of the single peers;
- Cost of ownership: P2P computing implies shared ownership, reducing the cost of owning the systems and the content, and the cost of maintaining them;
- Self-organization: the process in a system leading to the emergence of global order within the system without the presence of another system dictating this order.
These characteristics are also present in the kind of firm that we try to address, and that is why we have shifted the techniques we adopted for studies in computer science (Marcozzi et al., 2005; Hales et al., 2007 [39]) to management science.
Abstract:
The aim of this thesis is to examine different approaches for proving expressiveness properties in several concurrent languages. We analyse four different calculi, exploiting a different technique for each one.
We begin with the analysis of a synchronous language: we explore the expressiveness of a fragment of CCS! (a variant of Milner's CCS where replication is considered instead of recursion) w.r.t. the existence of faithful encodings (i.e. encodings that respect the behaviour of the encoded model without introducing unnecessary computations) of models of computability strictly less expressive than Turing machines, namely grammars of types 1, 2 and 3 in the Chomsky hierarchy.
We then move to asynchronous languages and study full abstraction for two Linda-like languages. Linda can be considered the asynchronous version of CCS plus a shared memory (a multiset of elements) that is used for storing messages. After defining a denotational semantics based on traces, we obtain fully abstract semantics for both languages by using suitable abstractions to identify different traces which do not correspond to different behaviours.
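The shared-store model that Linda adds to asynchronous CCS can be pictured as a multiset of messages with emit/consume/test primitives. A toy Python sketch follows; the method names `out`/`inp`/`rd` follow the usual Linda convention, and this is an illustration, not the calculus studied in the thesis:

```python
from collections import Counter

class TupleSpace:
    """A toy Linda-style shared store: a multiset of messages."""

    def __init__(self):
        self.store = Counter()

    def out(self, msg):
        """Emit a message into the store (asynchronous send)."""
        self.store[msg] += 1

    def inp(self, msg):
        """Consume one occurrence of msg; return whether it was present."""
        if self.store[msg] > 0:
            self.store[msg] -= 1
            return True
        return False

    def rd(self, msg):
        """Test for the presence of msg without consuming it."""
        return self.store[msg] > 0

    def count(self, msg):
        """Multiplicity of msg: the capability that distinguishes the two
        variants (one can observe counts, the other only presence)."""
        return self.store[msg]
```

The expressiveness difference mentioned above hinges on whether a process can observe `count` (multiple occurrences) or only the boolean `rd` (presence/absence).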
Since the ability of one of the two variants considered to recognise multiple occurrences of messages in the store (which accounts for an increase in expressiveness) is reflected in a less complex abstraction, we then study other languages where multiplicity plays a fundamental role. We consider CHR (Constraint Handling Rules), a language which uses multi-headed (guarded) rules. We prove that multiple heads augment the expressive power of the language. Indeed, we show that if we restrict to rules whose head contains at most n atoms we can generate a hierarchy of languages with increasing expressiveness (i.e. the CHR language allowing at most n atoms in the heads is more expressive than the language allowing at most m atoms, with m < n).
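A multi-headed CHR rule fires only when all of its head atoms are simultaneously present in the store, which is where the extra expressive power of larger heads comes from. The toy single-rule-step matcher below, over a multiset store, is an illustration of that firing condition only, not an actual CHR engine:

```python
from collections import Counter

def apply_rule(store, heads, body):
    """Fire a multi-headed simplification rule once.

    If every head atom occurs in the multiset `store` (respecting
    multiplicity), replace the heads by the body and return the new
    store; otherwise return None (the rule cannot fire).
    """
    store = Counter(store)
    needed = Counter(heads)
    if any(store[atom] < n for atom, n in needed.items()):
        return None       # some head atom is missing: no simultaneous match
    store -= needed       # consume the matched head atoms
    store += Counter(body)  # add the body atoms
    return store

# A 2-headed rule "a, a <=> b" needs two simultaneous copies of "a":
after = apply_rule(["a", "a", "c"], heads=["a", "a"], body=["b"])
```

A 1-headed language can only test atoms one at a time, so it cannot express this kind of atomic joint consumption directly; that is the intuition behind the hierarchy on the maximum number of head atoms.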
Abstract:
In the collective imagination a robot is a human-like machine, like the androids of science fiction. However, the robots you will encounter most frequently are machines that do work that is too dangerous, boring or onerous. Most of the robots in the world are of this type. They can be found in the automotive, medical, manufacturing and space industries. A robot, therefore, is a system that contains sensors, control systems, manipulators, power supplies and software, all working together to perform a task. The development and use of such systems is an active area of research, and one of the main problems is the development of interaction skills with the surrounding environment, which include the ability to grasp objects. To perform this task the robot needs to sense the environment and acquire information about the object: the physical attributes that may influence a grasp. Humans solve this grasping problem easily thanks to their past experience, which is why many researchers approach it from a machine learning perspective, finding a grasp for an object using information about already known objects. But humans can select the best grasp from a vast repertoire not only by considering the physical attributes of the object to grasp but also in order to obtain a certain effect. This is why, in our case, the study in the area of robot manipulation focuses on grasping and on integrating symbolic tasks with data gained through sensors. The learning model is based on a Bayesian network to encode the statistical dependencies between the data collected by the sensors and the symbolic task. This data representation has several advantages: it takes into account the uncertainty of the real world, allowing us to deal with sensor noise; it encodes a notion of causality; and it provides a unified network for learning.
Since the network currently implemented is based on human expert knowledge, it is very interesting to implement an automated method to learn its structure, as in the future more tasks and object features may be introduced, and a complex network design based only on human expert knowledge can become unreliable. Since structure learning algorithms present some weaknesses, the goal of this thesis is to analyze the real data used in the network modeled by the human expert, implement a feasible structure learning approach, and compare the results with the network designed by the expert in order to possibly enhance it.
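A Bayesian network of the kind described factors the joint distribution over task and sensor variables into conditional probability tables and supports inference by Bayes' rule. The minimal hand-coded sketch below uses hypothetical variables (`task`, `grasp`) and made-up probabilities; none of the numbers or names come from the thesis:

```python
# Toy two-node network: task -> grasp, with illustrative (made-up) CPTs.
p_task = {"pour": 0.6, "handover": 0.4}
p_grasp_given_task = {
    ("pour", "top"): 0.7, ("pour", "side"): 0.3,
    ("handover", "top"): 0.2, ("handover", "side"): 0.8,
}

def joint(task, grasp):
    """P(task, grasp) via the chain-rule factorization of the network."""
    return p_task[task] * p_grasp_given_task[(task, grasp)]

def posterior_task(grasp):
    """P(task | grasp) by Bayes' rule: normalize the joint over tasks."""
    scores = {t: joint(t, grasp) for t in p_task}
    z = sum(scores.values())
    return {t: s / z for t, s in scores.items()}
```

Structure learning, the subject of the thesis, asks the complementary question: given only data, which arcs (here the single `task -> grasp` arc, fixed by hand) should the network contain.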