908 results for Task-to-core mapping
Abstract:
This Minor Field Study was carried out during November and December 2011 in the Mount Elgon District in Western Kenya. The objective was to examine nine small-scale farming households' land use and socioeconomic situation after they had joined a non-governmental organization (NGO) project, which specifically targets small-scale farming households to improve their land use systems and socioeconomic situation through the extension of soil and water conservation measures. The survey employed three complementary methods: mapping and processing data using GIS, semi-structured interviews, and literature studies. The study adopted a theoretical approach referred to as political ecology, in which landesque capital is a central concept. The results show that all farmers except one have issues with land degradation. However, the extent of the problem, as well as the sustainable soil and water conservation measures implemented, varied among the farmers. The main causes of this can be linked both to how the farmers themselves utilized their farmland and to how impacts of climate change have altered the farmers' working conditions. These factors have consequently affected the informants' socioeconomic conditions. Furthermore, it was also observed that social and economic factors were, in some cases, the causes of how the farmers managed their farmland. The farmer who had no significant problem with soil erosion had invested in trees and in opportunities to irrigate the farmland. In addition, it was also recorded that certain farmers had invested in particular soil and water conservation measures without any significant result. This was probably due to the time span such measures require before they start to generate revenue. The outcome of this study traces how global, national and local elements interact in shaping the conditions of the farmers' land use and their socioeconomic situation.
The farmers at Mt. Elgon are thereby part of a wider context: they both contribute to their socioeconomic situation, mainly through their land management, and are exposed to core-periphery relationships over which they themselves have no influence.
Abstract:
Minisatellite core sequences were used as single primers in the polymerase chain reaction (PCR) to amplify genomic DNA in a way similar to the random amplified polymorphic DNA methodology. This technique, known as Directed Amplification of Minisatellite-region DNA, was applied in order to differentiate three neotropical fish species (Brycon orbignyanus, B. microlepis and B. lundii) and to detect possible genetic variations among samples of the threatened species, B. lundii, collected in two regions with distinct environmental conditions in the area of influence of a hydroelectric dam. Most primers generated species-specific banding patterns and high levels of intraspecific polymorphism. The genetic variation observed between the two sampling regions of B. lundii was also high enough to suggest the presence of distinct stocks of this species along the same river basin. The results demonstrated that minisatellite core sequences are potentially useful as single primers in PCR to assist in species and population identification. The observed genetic stock differentiation in B. lundii, associated with ecological and demographic data, constitutes a crucial step toward developing efficient conservation strategies to preserve the genetic diversity of this endangered fish species.
Abstract:
The efficient emulation of a many-core architecture is a challenging task: each core could be emulated through a dedicated thread, and such threads would be interleaved on either a single-core or a multi-core processor, but the high number of context switches would result in unacceptable performance. To support this kind of application, the computational power of the GPU is exploited in order to schedule the emulation threads on the GPU cores. This presents a non-trivial divergence issue, since GPU computational power is offered through SIMD processing elements, which are forced to synchronously execute the same instruction on different memory portions. Thus, a new emulation technique is introduced to overcome this limitation: instead of providing a routine for each ISA opcode, the emulator mimics the behavior of the micro-architecture level, where instructions are data that a unique routine takes as input. Our new technique has been implemented and compared with the classic emulation approach, in order to investigate the viability of a hybrid solution.
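The divergence-avoiding idea described above can be illustrated with a toy sketch (this is not the paper's implementation; the ISA, opcodes and register layout below are invented for illustration). A single `step` routine handles every instruction: all candidate ALU results are computed unconditionally and the opcode merely indexes into them, so SIMD lanes emulating different instructions still execute identical code paths.

```python
def step(regs, pc, program):
    """One emulation step of a toy 4-opcode ISA. The same unique routine
    handles every opcode: the instruction is plain data, and selection
    happens by indexing rather than by per-opcode branching."""
    op, dst, a, b = program[pc]            # instruction word as data
    x, y = regs[a], regs[b]
    # all candidate results computed uniformly; opcode picks one
    candidates = (x + y, x - y, x * y, x ^ y)   # ADD, SUB, MUL, XOR
    regs[dst] = candidates[op]
    return pc + 1

# tiny program: r0 = r1 + r2 ; r3 = r0 * r1
regs = [0, 3, 5, 0]
program = [(0, 0, 1, 2),
           (2, 3, 0, 1)]
pc = 0
while pc < len(program):
    pc = step(regs, pc, program)
# regs is now [8, 3, 5, 24]
```

On a GPU, each SIMD lane would run this same `step` loop for its emulated core; because there is no opcode-dependent control flow, lanes never diverge on the decode path.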
Abstract:
We developed a novel delay discounting task to investigate outcome impulsivity in pigs. As impulsivity can affect aggression, and might also relate to proactive and reactive coping styles, eight proactive (HR) and eight reactive (LR) pigs identified in a manual restraint test ("Backtest", after Bolhuis et al., 2003) were weaned and mixed in four pens of four unfamiliar pigs, so that each pen had two HR and two LR pigs, and aggression was scored in the 9h after mixing. In the delay discounting task, each pig chose between two levers, one always delivering a small immediate reward, the other a large delayed reward with daily increasing delays, impulsive individuals being the ones discounting the value of the large reward quicker. Two novel strategies emerged: some pigs gradually switched their preference towards the small reward ('Switchers') as predicted, but others persistently preferred the large reward until they stopped making choices ('Omitters'). Outcome impulsivity itself was unrelated to these strategies, to urinary serotonin metabolite (5-HIAA) or dopamine metabolite (HVA) levels, aggression at weaning, or coping style. However, HVA was relatively higher in Omitters than Switchers, and positively correlated with behavioural measures of indecisiveness and frustration during choosing. The delay discounting task thus revealed two response strategies that seemed to be related to the activity of the dopamine system and might indicate a difference in execution, rather than outcome, impulsivity.
Abstract:
Observing the global coverage of middle-atmospheric trace gases such as water vapor or ozone is an important task that is usually accomplished by satellites. Climate and atmospheric studies rely upon knowledge of trace gas distributions throughout the stratosphere and mesosphere. Many of these gases are currently measured from satellites, but it is not clear whether this capability will be maintained in the future; losing it could lead to a significant knowledge gap about the state of the atmosphere. We explore the possibilities of mapping middle-atmospheric water vapor in the Northern Hemisphere by using Lagrangian trajectory calculations and water vapor profile data from a small network of five ground-based microwave radiometers. Four of them are operated within the frame of NDACC (Network for the Detection of Atmospheric Composition Change). Since the instruments are based on different hardware and calibration setups, a height-dependent bias of the retrieved water vapor profiles has to be expected among the microwave radiometers. In order to correct and harmonize the different data sets, the Microwave Limb Sounder (MLS) on the Aura satellite is used as a kind of traveling standard. A domain-averaging TM (trajectory mapping) method is applied, which simplifies the subsequent validation of the quality of the trajectory-mapped water vapor distribution against direct satellite observations. Trajectories are calculated forwards and backwards in time for up to 10 days using 6-hourly meteorological wind analysis fields. Overall, a total of four case studies of trajectory mapping in different meteorological regimes are discussed. One of the case studies takes place during a major sudden stratospheric warming (SSW) accompanied by the polar vortex breakdown; a second takes place after the re-formation of a stable circulation system.
TM cases close to the fall equinox and the June solstice of 2012 complete the study, showing the high potential of a network of ground-based remote sensing instruments to synthesize hemispheric maps of water vapor.
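The core of trajectory mapping is advecting each measured profile along the wind field, forwards and backwards in time, so that a sparse station network fills a hemispheric map. A minimal sketch of the forward step, assuming an idealized solid-body-rotation wind (real TM uses 6-hourly analysed wind fields, which are not reproduced here):

```python
def advect(lon, lat, hours, dt=6.0, omega_deg_per_h=0.5):
    """Carry an air parcel (and the water vapor value measured in it)
    around the pole at a constant angular rate, in 6-hour Euler steps.
    Latitude is conserved under solid-body rotation."""
    steps = int(hours / dt)
    for _ in range(steps):
        lon = (lon + omega_deg_per_h * dt) % 360.0
    return lon, lat

# a profile measured at a hypothetical station (7°E, 47°N),
# advected 10 days (240 h) forward:
lon, lat = advect(7.0, 47.0, hours=240)
```

In the real method each trajectory endpoint keeps the water vapor profile of its origin, and a domain average over all endpoints in a grid cell yields the mapped field that is then validated against MLS.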
Abstract:
Low self-referential thoughts are associated with better concentration, which leads to deeper encoding and increases learning and subsequent retrieval. There is evidence that being engaged in externally rather than internally focused tasks is related to low neural activity in the default mode network (DMN), promoting an open mind and the deep elaboration of new information. Thus, reduced DMN activity should lead to enhanced concentration, comprehensive stimulus evaluation including emotional categorization, deeper stimulus processing, and better long-term retention over one whole week. In this fMRI study, we investigated brain activation preceding and during incidental encoding of emotional pictures and its relation to subsequent recognition performance. During fMRI, 24 subjects were exposed to 80 pictures of different emotional valence and were subsequently asked to complete an online recognition task one week later. Results indicate that neural activity within the medial temporal lobes during encoding predicts subsequent memory performance. Moreover, low activity of the default mode network preceding incidental encoding leads to slightly better recognition performance independent of the emotional perception of a picture. The findings indicate that the suppression of internally oriented thoughts leads to a more comprehensive and thorough evaluation of a stimulus and its emotional valence. Reduced activation of the DMN prior to stimulus onset is associated with deeper encoding and enhanced consolidation and retrieval performance even one week later. Even small prestimulus lapses of attention influence consolidation and subsequent recognition performance. Hum Brain Mapp, 2015. © 2015 Wiley Periodicals, Inc.
Abstract:
The nucleus accumbens, a site within the ventral striatum, is best known for its prominent role in mediating the reinforcing effects of drugs of abuse such as cocaine, alcohol, and nicotine. Indeed, it is generally believed that this structure subserves motivated behaviors, such as feeding, drinking, sexual behavior, and exploratory locomotion, which are elicited by natural rewards or incentive stimuli. A basic rule of positive reinforcement is that motor responses will increase in magnitude and vigor if followed by a rewarding event. It is likely, therefore, that the nucleus accumbens may serve as a substrate for reinforcement learning. However, there is surprisingly little information concerning the neural mechanisms by which appetitive responses are learned. In the present study, we report that treatment of the nucleus accumbens core with the selective competitive N-methyl-d-aspartate (NMDA) antagonist 2-amino-5-phosphonopentanoic acid (AP-5; 5 nmol/0.5 μl bilaterally) impairs response-reinforcement learning in the acquisition of a simple lever-press task to obtain food. Once the rats learned the task, AP-5 had no effect, demonstrating the requirement of NMDA receptor-dependent plasticity in the early stages of learning. Infusion of AP-5 into the accumbens shell produced a much smaller impairment of learning. Additional experiments showed that AP-5 core-treated rats had normal feeding and locomotor responses and were capable of acquiring stimulus-reward associations. We hypothesize that stimulation of NMDA receptors within the accumbens core is a key process through which motor responses become established in response to reinforcing stimuli. Further, this mechanism may also play a critical role in the motivational and addictive properties of drugs of abuse.
Abstract:
Crop monitoring and, more generally, land use change detection are of primary importance for analyzing spatio-temporal dynamics and their impacts on the environment. This is especially true in a region such as the State of Mato Grosso (south of the Brazilian Amazon Basin), which hosts an intensive pioneer front. Deforestation in this region has often been explained by soybean expansion over the last three decades. Remote sensing techniques now offer an efficient and objective way to quantify, through crop mapping studies, the extent to which crop expansion really is a factor of deforestation. Given the particular characteristics of soybean production farms in Mato Grosso (areas varying between 1000 and 40000 hectares, with individual fields often bigger than 100 hectares), Moderate Resolution Imaging Spectroradiometer (MODIS) data, with near-daily temporal resolution and 250 m spatial resolution, can be considered adequate for crop mapping. In particular, multitemporal vegetation index (VI) studies have commonly been used for this task [1] [2]. In this study, 16-day composites of EVI (MOD13Q1 product) data are used. However, although these data are already processed, multitemporal VI profiles remain noisy due to cloudiness (which is extremely frequent in a tropical region such as the southern Amazon Basin), sensor problems, errors in atmospheric corrections, or BRDF effects. Thus, many works have tried to develop algorithms that smooth the multitemporal VI profiles in order to improve subsequent classification. The goal of this study is to compare and test different smoothing algorithms in order to select the one that best serves the task of classifying crop classes. Those classes correspond to 6 different agricultural managements observed in Mato Grosso through intensive field work, which resulted in mapping more than 1000 individual fields.
The agricultural managements mentioned above are based on combinations of soy, cotton, corn, millet and sorghum crops sown in single- or double-crop systems. Due to the difficulty of separating certain classes because of overly similar agricultural calendars, the classification is reduced to 3 classes: cotton (single crop), soy and cotton (double crop), and soy (single or double crop with corn, millet or sorghum). The classification uses training data obtained in the 2005-2006 harvest and is then tested on the 2006-2007 harvest. In a first step, four smoothing techniques are presented and criticized: Best Index Slope Extraction (BISE) [3], Mean Value Iteration (MVI) [4], Weighted Least Squares (WLS) [5] and the Savitzky-Golay filter (SG) [6] [7]. These techniques are then implemented and visually compared on a few individual pixels, allowing a first selection among the four studied techniques. The WLS and SG techniques are selected according to criteria proposed by [8]: the ability to eliminate frequent noise, to conserve the upper values of the VI profiles, and to keep the temporality of the profiles. The selected algorithms are then programmed and applied to the MODIS/TERRA EVI data (16-day composite periods). Separability tests based on the Jeffries-Matusita distance are carried out to see whether the algorithms improve the potential for differentiation between the classes. These tests are performed on the overall profile (comprising 23 MODIS images) as well as on each MODIS sub-period of the profile [1]. This last test serves a double purpose: it allows comparing the smoothing techniques, and it enables selecting a set of images that carries more information on the separability between the classes. The selected dates can then be used for a supervised classification.
Here, three different classifiers are tested to evaluate whether the smoothing techniques have a particular effect on the classification depending on the classifier used: the Maximum Likelihood classifier, the Spectral Angle Mapper (SAM) classifier, and a CHAID improved decision tree. The separability tests on the overall profile show that the smoothed profiles do not substantially improve the potential for discrimination between classes compared with the original data. However, the same tests on the MODIS sub-periods show better results with the smoothing algorithms. The classification results confirm this first analysis. The Kappa coefficients are always better with the smoothing techniques, and the results obtained with the WLS and SG smoothed profiles are nearly equal. However, the results differ depending on the classifier used. The impact of the smoothing algorithms is greatest with the decision tree model, which gains 0.1 in the Kappa coefficient. With the Maximum Likelihood and SAM models, the gain remains positive but is much lower (Kappa improved by only 0.02). Thus, this work demonstrates the utility of smoothing the VI profiles in order to improve the final results. However, the choice of smoothing algorithm has to be made considering the original data and the classifier model used; in this case, the Savitzky-Golay filter gave the best results.
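The Savitzky-Golay idea that wins here can be sketched in a few lines: fit a low-order polynomial to each sliding window by least squares and keep its centre value, which damps cloud-induced dropouts while largely preserving the shape and timing of the VI profile. This is a minimal generic implementation, not the study's code; the window, order, and the EVI values below are invented for illustration.

```python
import numpy as np

def savitzky_golay(y, window, order):
    """Minimal Savitzky-Golay smoother: least-squares polynomial fit in
    each sliding window, evaluated at the window centre. Edges are
    handled by reflecting the series."""
    half = window // 2
    ypad = np.concatenate([y[half:0:-1], y, y[-2:-half - 2:-1]])
    x = np.arange(-half, half + 1)
    out = np.empty(len(y))
    for i in range(len(y)):
        coeffs = np.polyfit(x, ypad[i:i + window], order)
        out[i] = np.polyval(coeffs, 0)   # fitted value at the centre
    return out

# 23 EVI composites (one year) for a hypothetical pixel, with a noisy
# cloud dropout at position 9:
evi = np.array([0.2] * 8 + [0.6, 0.1, 0.7, 0.8, 0.7] + [0.3] * 10)
smooth = savitzky_golay(evi, window=5, order=2)
```

The smoothed profile pulls the dropout back toward its neighbours, which is exactly the property ("eliminating frequent noises while conserving the upper values") used to select WLS and SG in the study.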
Abstract:
Despite its importance to agriculture, the genetic basis of heterosis is still not well understood. The main competing hypotheses include dominance, overdominance, and epistasis. NC design III is an experimental design that has been used for estimating the average degree of dominance of quantitative trait loci (QTL) and also for studying heterosis. In this study, we first develop a multiple-interval mapping (MIM) model for design III that provides a platform to estimate the number, genomic positions, augmented additive and dominance effects, and epistatic interactions of QTL. The model can be used for parents with any generation of selfing. We apply the method to two data sets, one for maize and one for rice. Our results show that heterosis in maize is mainly due to dominant gene action, although overdominance of individual QTL could not be completely ruled out due to the mapping resolution and limitations of NC design III. For rice, the estimated QTL dominance effects could not explain the observed heterosis. There is evidence that additive × additive epistatic effects of QTL could be the main cause of the heterosis in rice. The difference in the genetic basis of heterosis seems to be related to the open or self pollination of the two species. The MIM model for NC design III is implemented in Windows QTL Cartographer, a freely distributed software package.
Abstract:
Participants in Experiments 1 and 2 performed a discrimination and counting task to assess the effect of lead stimulus modality on attentional modification of the acoustic startle reflex. Modality of the discrimination stimuli was changed across subjects. Electrodermal responses were larger during task-relevant stimuli than during task-irrelevant stimuli in all conditions. Larger blink magnitude facilitation was found during auditory and visual task-relevant stimuli, but not for tactile stimuli. Experiment 3 used acoustic, visual, and tactile conditioned stimuli (CSs) in differential conditioning with an aversive unconditioned stimulus (US). Startle magnitude facilitation and electrodermal responses were larger during a CS that preceded the US than during a CS that was presented alone regardless of lead stimulus modality. Although not unequivocal, the present data pose problems for attentional accounts of blink modification that emphasize the importance of lead stimulus modality.
Abstract:
The effect that the difficulty of the discrimination between task-relevant and task-irrelevant stimuli has on the relationship between skin conductance orienting and secondary task reaction time (RT) was examined. Participants (N = 72) counted the number of longer-than-usual presentations of one shape (task-relevant) and ignored presentations of another shape (task-irrelevant). The difficulty of discriminating between the two shapes varied across three groups (low, medium, and high difficulty). Simultaneous with the primary counting task, participants performed a secondary RT task to acoustic probes presented 50, 150, and 2000 ms following shape onset. Skin conductance orienting was larger, and secondary RT at the 2000 ms probe position was slower, during task-relevant shapes than during task-irrelevant shapes in the low-difficulty group. This difference declined as the discrimination difficulty was increased, such that there was no difference in the high-difficulty group. Secondary RT was slower during task-irrelevant shapes than during task-relevant shapes only in the medium-difficulty group, and only at the 150 ms probe position in the first half of the experiment. The close relationship between autonomic orienting and secondary RT at the 2000 ms probe position suggests that orienting reflects the resource allocation that results from the number of matching features between a stimulus input and a mental representation primed as significant.
Abstract:
This study aimed to quantify the efficiency and smoothness of voluntary movement in Huntington's disease (HD) by the use of a graphics tablet that permits analysis of movement profiles. In particular, we aimed to ascertain whether a concurrent task (digit span) would affect the kinematics of goal-directed movements. Twelve patients with HD and their matched controls performed 12 vertical zig-zag movements, with both left and right hands (with and without the concurrent task), to large or small circular targets over long or short extents. The concurrent task was associated with shorter movement times and reduced right-hand superiority. Patients with HD were overall slower, especially with long strokes, and had similar peak velocities for both small and large targets, whereas controls could better accommodate differences in target size. Patients with HD spent more time decelerating, especially with small targets, whereas controls allocated more nearly equal proportions of time to the acceleration and deceleration phases of movement, especially with large targets. Short strokes were generally more force efficient than long strokes, especially so for either hand in either group in the absence of the concurrent task, and for the right hand in its presence. With the concurrent task, however, the left hand's behavior changed differentially for the two groups: for patients with HD, it became more force efficient with short strokes and even less efficient with long strokes, whereas for controls, it became more efficient with long strokes. Controls may be able to divert attention away from the inferior left hand, increasing its automaticity, whereas patients with HD, because of disease, may be forced to engage even further online visual control under the demands of a concurrent task.
Patients with HD may perhaps become increasingly reliant on terminal visual guidance, which indicates an impairment in constructing and refining an internal representation of the movement necessary for its effective execution. Basal ganglia dysfunction may impair the ability to use internally generated cues to guide movement.
Abstract:
Consider the problem of assigning real-time tasks on a heterogeneous multiprocessor platform comprising two different types of processors — such a platform is referred to as a two-type platform. We present two linearithmic time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a given task set, if there exists a feasible task-to-processor-type assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type, then (i) SA is guaranteed to find such a feasible task-to-processor-type assignment, where the same restriction on task migration applies, given a platform in which processors are 1+α/2 times faster, and (ii) SA-P succeeds in finding a feasible task-to-processor assignment where tasks are not allowed to migrate between processors, given a platform in which processors are 1+α times faster, where 0 < α ≤ 1. The parameter α is a property of the task set — it is the maximum utilization of any task, which is less than or equal to 1.
Abstract:
Consider scheduling of real-time tasks on a multiprocessor where migration is forbidden. Specifically, consider the problem of determining a task-to-processor assignment for a given collection of implicit-deadline sporadic tasks upon a multiprocessor platform in which there are two distinct types of processors. For this problem, we propose a new algorithm, LPC (task assignment based on solving a Linear Program with Cutting planes). The algorithm offers the following guarantee: for a given task set and a platform, if there exists a feasible task-to-processor assignment, then LPC succeeds in finding such a feasible task-to-processor assignment as well, but on a platform in which each processor is 1.5× faster and which has three additional processors. For systems with a large number of processors, LPC has a better approximation ratio than state-of-the-art algorithms. To the best of our knowledge, this is the first work that develops a provably good real-time task assignment algorithm using cutting planes.
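To make the problem in the last two abstracts concrete: each implicit-deadline task has a utilization on each processor type, and a non-migrative assignment is feasible if every processor's total utilization stays at or below 1. The sketch below is NOT the SA, SA-P, or LPC algorithm from these papers — just a simple first-fit baseline, with invented utilizations, that illustrates what a "task-to-processor assignment" on a two-type platform means.

```python
def first_fit(tasks, n_type1, n_type2):
    """tasks: list of (utilization_on_type1, utilization_on_type2).
    Tries the cheaper processor type first, then first-fit within that
    type. Returns a list of (processor_type, processor_index), or None
    if the heuristic fails (which does not prove infeasibility)."""
    load = {1: [0.0] * n_type1, 2: [0.0] * n_type2}
    assignment = []
    for u1, u2 in tasks:
        order = [(1, u1), (2, u2)] if u1 <= u2 else [(2, u2), (1, u1)]
        placed = None
        for ptype, u in order:
            for i, l in enumerate(load[ptype]):
                if l + u <= 1.0:          # processor keeps utilization <= 1
                    load[ptype][i] = l + u
                    placed = (ptype, i)
                    break
            if placed:
                break
        if placed is None:
            return None
        assignment.append(placed)
    return assignment

# four hypothetical tasks on a platform with two type-1 and one type-2 processor
tasks = [(0.6, 0.9), (0.5, 0.2), (0.9, 0.4), (0.3, 0.8)]
result = first_fit(tasks, n_type1=2, n_type2=1)
```

Algorithms such as LPC improve on this kind of heuristic by solving a linear-programming relaxation of the same assignment problem and tightening it with cutting planes, which is what yields the provable speedup/extra-processor guarantee.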