908 results for Task-to-core mapping
Abstract:
Heterogeneous multicore platforms are becoming an interesting alternative for embedded computing systems with limited power supply as they can execute specific tasks in an efficient manner. Nonetheless, one of the main challenges of such platforms consists of optimising the energy consumption in the presence of temporal constraints. This paper addresses the problem of task-to-core allocation onto heterogeneous multicore platforms such that the overall energy consumption of the system is minimised. To this end, we propose a two-phase approach that considers both dynamic and leakage energy consumption: (i) the first phase allocates tasks to the cores such that the dynamic energy consumption is reduced; (ii) the second phase refines the allocation performed in the first phase in order to achieve better sleep states by trading off the dynamic energy consumption against the reduction in leakage energy consumption. This hybrid approach considers core frequency set-points, task energy consumption and sleep states of the cores to reduce the energy consumption of the system. Major value has been placed on a realistic power model, which increases the practical relevance of the proposed approach. Finally, extensive simulations have been carried out to demonstrate the effectiveness of the proposed algorithm. In the best case, energy savings of up to 18% are achieved over the first-fit algorithm, which has been shown in previous works to perform better than other bin-packing heuristics for the target heterogeneous multicore platform.
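The two-phase idea in this abstract can be sketched in a few lines. The sketch below is a loose illustration, not the paper's algorithm: the energy model (leakage paid per active core, plus load times a per-core dynamic-energy factor), the per-core capacity bound of 1.0, and all function names are assumptions made for the example.

```python
def total_energy(alloc, dyn, leak):
    """Leakage for every active (non-empty) core plus dynamic energy
    proportional to its load -- an illustrative model, not the paper's."""
    return sum(leak[c] + sum(ts) * dyn[c] for c, ts in alloc.items() if ts)

def two_phase_allocate(utils, dyn, leak):
    """utils: task utilisations; dyn/leak: per-core energy parameters."""
    order = sorted(dyn, key=dyn.get)          # most efficient cores first
    alloc = {c: [] for c in dyn}
    # Phase 1: first-fit allocation to reduce dynamic energy.
    for u in utils:
        for c in order:
            if sum(alloc[c]) + u <= 1.0:
                alloc[c].append(u)
                break
        else:
            raise ValueError("task set does not fit")
    # Phase 2: try to vacate each active core so it can sleep, trading
    # extra dynamic energy against the leakage saved on the vacated core.
    for src in sorted((c for c in dyn if alloc[c]), key=lambda c: sum(alloc[c])):
        trial = {c: list(ts) for c, ts in alloc.items()}
        moved, trial[src] = trial[src], []
        for u in sorted(moved, reverse=True):
            dest = next((c for c in order
                         if c != src and sum(trial[c]) + u <= 1.0), None)
            if dest is None:
                break
            trial[dest].append(u)
        else:  # every task re-placed: keep the move only if it saves energy
            if total_energy(trial, dyn, leak) < total_energy(alloc, dyn, leak):
                alloc = trial
    return alloc
```

Phase 1 mirrors the first-fit baseline the paper compares against; phase 2 only commits a migration when vacating a core genuinely lowers the total under the assumed model.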
Abstract:
13th IEEE/IFIP International Conference on Embedded and Ubiquitous Computing (EUC 2015), 21-23 Oct 2015, Session W1-A: Multiprocessing and Multicore Architectures. Porto, Portugal.
Abstract:
Presented at the 21st IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA 2015), 19-21 Aug 2015, pp. 122-131. Hong Kong, China.
Abstract:
In the future, robots will enter our everyday lives to help us with various tasks. For complete integration and cooperation with humans, these robots need to be able to acquire new skills. Sensor capabilities for navigation in real human environments and intelligent interaction with humans are some of the key challenges. Learning-by-demonstration systems focus on the problem of human-robot interaction, and let the human teach the robot by demonstrating the task using his own hands. In this thesis, we present a solution to a subproblem within the learning-by-demonstration field, namely human-robot grasp mapping. Robot grasping of objects in a home or office environment is a challenging problem. Programming-by-demonstration systems can provide important skills for aiding the robot in the grasping task. The thesis presents two techniques for human-robot grasp mapping: direct robot imitation from a human demonstrator and intelligent grasp imitation. In intelligent grasp mapping, the robot takes the size and shape of the object into consideration, while for direct mapping, only the pose of the human hand is available. These are evaluated in a simulated environment on several robot platforms. The results show that knowing the object shape and size for a grasping task improves the robot's precision and performance.
Abstract:
Taxonomic free sorting (TFS) is a fast, reliable and new technique in sensory science. The method extends the typical free sorting task where stimuli are grouped according to similarities, by asking respondents to combine their groups two at a time to produce a hierarchy. Previously, TFS has been used for the visual assessment of packaging whereas this study extends the range of potential uses of the technique to incorporate full sensory analysis by the target consumer, which, when combined with hedonic liking scores, was used to generate a novel preference map. Furthermore, to fully evaluate the efficacy of using the sorting method, the technique was evaluated with a healthy older adult consumer group. Participants sorted eight products into groups and described their reason at each stage as they combined those groups, producing a consumer-specific vocabulary. This vocabulary was combined with hedonic data from a separate group of older adults, to give the external preference map. Taxonomic sorting is a simple, fast and effective method for use with older adults, and its combination with liking data can yield a preference map constructed entirely from target consumer data.
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating-point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on each core's location within the system. Heterogeneity further increases with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend for shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Finding this out, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo-exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, with interpolation between results as necessary.
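The benchmark-driven methodology described above (separate models for the compute and halo-exchange parts, with interpolation between measured results) might be sketched roughly as follows; the sample data, function names and the use of plain linear interpolation are assumptions for illustration, not details of the actual model.

```python
def interpolate(samples, n):
    """Predict a time for problem size n from sorted (size, time)
    benchmark samples, interpolating linearly between measurements."""
    if n <= samples[0][0]:
        return samples[0][1]
    for (s0, t0), (s1, t1) in zip(samples, samples[1:]):
        if s0 <= n <= s1:
            return t0 + (t1 - t0) * (n - s0) / (s1 - s0)
    return samples[-1][1]          # beyond the last measurement

def predict_runtime(compute_samples, halo_samples, n):
    """Total = loop-based array updates + halo-exchange cost, each
    modelled separately from its own benchmarks and then summed."""
    return interpolate(compute_samples, n) + interpolate(halo_samples, n)
```

In the real model each deployment scenario (decomposition, affinity, node population) would have its own set of benchmark samples; the sketch shows only the interpolation step for one scenario.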
Abstract:
In recent years, geotechnologies such as remote and proximal sensing, together with attributes derived from digital terrain elevation models, have proved very useful for describing soil variability. However, these information sources are rarely used together. Therefore, a methodology for assessing and spatializing soil classes using information obtained from remote/proximal sensing, GIS and expert knowledge was applied and evaluated. Two study areas in the State of São Paulo, Brazil, totalling approximately 28,000 ha, were used for this work. First, in one area (area 1), conventional pedological mapping was carried out and, from the soil classes found, patterns were obtained from the following information: a) spectral information (shape of features and absorption intensity of spectral curves in the 350-2,500 nm wavelength range) of soil samples collected at specific points in the area (according to each soil type); b) equations for determining chemical and physical soil properties, derived from the relationship between the levels of chemical and physical attributes obtained in the laboratory by the conventional method and the spectral data; c) supervised classification of Landsat 5 TM images, in order to detect changes in soil particle size (soil texture); d) the relationship between soil classes and relief attributes. Subsequently, the obtained patterns were applied in area 2 to derive a pedological classification of its soils, but within a GIS (ArcGIS). Finally, a conventional pedological map was produced for area 2 and compared with the digital map, i.e., the one obtained using only the previously determined patterns. The proposed methodology achieved 79% accuracy at the first categorical level of the Soil Classification System, 60% accuracy at the second categorical level, and became less useful at the third categorical level (37% accuracy).
Abstract:
Very high-resolution Synthetic Aperture Radar sensors represent an alternative to aerial photography for delineating floods in built-up environments where flood risk is highest. However, even with currently available SAR image resolutions of 3 m and higher, signal returns from man-made structures hamper the accurate mapping of flooded areas. Enhanced image processing algorithms and a better exploitation of image archives are required to facilitate the use of microwave remote sensing data for monitoring flood dynamics in urban areas. In this study a hybrid methodology combining radiometric thresholding, region growing and change detection is introduced as an approach enabling the automated, objective and reliable flood extent extraction from very high-resolution urban SAR images. The method is based on the calibration of a statistical distribution of “open water” backscatter values inferred from SAR images of floods. SAR images acquired during dry conditions enable the identification of areas i) that are not “visible” to the sensor (i.e. regions affected by ‘layover’ and ‘shadow’) and ii) that systematically behave as specular reflectors (e.g. smooth tarmac, permanent water bodies). Change detection with respect to a pre- or post-flood reference image thereby reduces over-detection of inundated areas. A case study of the July 2007 Severn River flood (UK) observed by the very high-resolution SAR sensor on board TerraSAR-X as well as airborne photography highlights advantages and limitations of the proposed method. We conclude that even though the fully automated SAR-based flood mapping technique overcomes some limitations of previous methods, further technological and methodological improvements are necessary for SAR-based flood detection in urban areas to match the flood mapping capability of high quality aerial photography.
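The combination of radiometric thresholding, change detection against a dry reference image and a layover/shadow exclusion mask can be sketched as below (the region-growing step is omitted for brevity, and the array values, threshold and function name are invented for illustration):

```python
import numpy as np

def flood_mask(flood_db, dry_db, threshold_db, exclusion):
    """Candidate flood pixels: darker than the calibrated open-water
    threshold in the flood image, not systematically dark in the dry
    reference image (change detection), and not in layover/shadow."""
    open_water = flood_db < threshold_db     # radiometric thresholding
    always_dark = dry_db < threshold_db      # specular even when dry
    return open_water & ~always_dark & ~exclusion
```

In the actual method the threshold comes from a calibrated statistical distribution of open-water backscatter, and the thresholded seed pixels are then expanded by region growing.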
Abstract:
The problem of projecting multidimensional data into lower dimensions has been pursued by many researchers due to its potential application to data analyses of various kinds. This paper presents a novel multidimensional projection technique based on least square approximations. The approximations compute the coordinates of a set of projected points based on the coordinates of a reduced number of control points with defined geometry. We name the technique Least Square Projections (LSP). From an initial projection of the control points, LSP defines the positioning of their neighboring points through a numerical solution that aims at preserving a similarity relationship between the points given by a metric in mD. In order to perform the projection, a small number of distance calculations are necessary, and no repositioning of the points is required to obtain a final solution with satisfactory precision. The results show the capability of the technique to form groups of points by degree of similarity in 2D. We illustrate that capability through its application to mapping collections of textual documents from varied sources, a strategic yet difficult application. LSP is faster and more accurate than other existing high-quality methods, particularly where it was mostly tested, that is, for mapping text sets.
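The core numerical step described here, placing each point near the average of its neighbours while pinning the pre-projected control points, can be illustrated with a small least-squares system. This is a sketch of the idea only, not the authors' code; the function name, the uniform neighbour weights and the toy input are assumptions.

```python
import numpy as np

def lsp_project(n, neighbours, control):
    """n points; neighbours[i] = indices similar to point i (from the mD
    metric); control = {index: (x, y)} with pre-projected control points.
    Solves, in the least-squares sense, x_i = mean of its neighbours,
    subject to soft constraints pinning the control points."""
    rows, rhs = [], []
    for i in range(n):                     # Laplacian-style neighbourhood rows
        r = np.zeros(n)
        r[i] = 1.0
        for j in neighbours[i]:
            r[j] -= 1.0 / len(neighbours[i])
        rows.append(r)
        rhs.append((0.0, 0.0))
    for i, (x, y) in control.items():      # rows pinning control points
        r = np.zeros(n)
        r[i] = 1.0
        rows.append(r)
        rhs.append((x, y))
    A, b = np.array(rows), np.array(rhs)
    return np.linalg.lstsq(A, b, rcond=None)[0]    # (n, 2) coordinates
```

Because one linear solve positions all points at once, no iterative repositioning is needed, which matches the abstract's claim of a direct numerical solution.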
Abstract:
A new managerial task arises in today’s working life: to provide conditions for and influence interaction between actors and thus to enable the emergence of organizing structure in tune with a changing environment. We call this the enabling managerial task. The goal of this paper is to study whether training first-line managers in the enabling managerial task could lead to changes in the work of their subordinates. This paper presents results from questionnaires answered by the subordinates of the managers before and after the training. The training was organized as a learning network and consisted of eight workshops carried out over a period of one year (September 2009–June 2010), where the managers met with each other and the researchers once a month. Each workshop consisted of three parts and lasted three and a half hours. The first hour was devoted to joint reflection on a task that had been undertaken since the last workshop; then some results from the employee pre-assessments were presented, followed by relevant theory and illuminating practices; finally, the managers created new tasks for themselves to undertake during the following month. The subordinates’ answers show positive change in all seventeen scales used in the assessment. The improvements are significant in scales measuring the relationship between the manager and the employees, as well as in those measuring interaction between employees. It is concluded that the training was a success for all managers who were able to apply it in their management work.
Abstract:
Research has mainly focussed on the perceptual nature of synaesthesia. However, synaesthetic experiences are also semantically represented. It was our aim to develop a task to investigate the semantic representation of the concurrent and its relation to the inducer in grapheme-colour synaesthesia. Non-synaesthetes were either tested with a lexical-decision (i.e., word / non-word) or a semantic-classification (i.e., edibility decision) task. Targets consisted of words which were strongly associated with a specific colour (e.g., banana - yellow) and words which were neutral and not associated with a specific colour (e.g., aunt). Target words were primed with colours: the prime target relationship was either intramodal (i.e., word - word) or crossmodal (colour patch - word). Each of the four task versions consisted of three conditions: congruent (same colour for prime and target), incongruent (different colour), and unrelated (neutral target). For both tasks (i.e., lexical and semantic) and both versions of the task (i.e., intramodal and crossmodal), we expected faster reaction times (RTs) in the congruent condition than in the neutral condition and slower RTs in the incongruent condition than the neutral condition. Stronger effects were expected in the intramodal condition due to the overlap in the prime target modality. The results suggest that the hypotheses were partly confirmed. We conclude that the tasks and hypotheses can be readily adopted to investigate the nature of the representation of the synaesthetic experiences.