135 results for Software-reconfigurable array processing architectures


Relevance: 30.00%

Abstract:

Fragmentation on dynamically reconfigurable FPGAs is a major obstacle to the efficient management of the logic space in reconfigurable systems. When resource allocation decisions have to be made at run-time, a rearrangement may be necessary to release enough contiguous resources to implement incoming functions. The feasibility of run-time relocation depends on the processing time required to set up rearrangements. Moreover, the performance of the relocated functions should not be affected by this process; otherwise the whole system performance, and even its operation, may be at risk. Relocation should take into account not only specific functional issues but also the FPGA architecture, since these two aspects are normally intertwined. This paper proposes a simple and fast method, based on prior function labelling and on the application of the Euclidean distance concept, to assess the performance degradation of a function during relocation and to speed up the defragmentation process.
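As a hedged illustration of the Euclidean-distance idea (the labels, coordinates and cost model below are invented, not the paper's actual method or data), one can estimate the impact of a candidate rearrangement by measuring how far each labelled block would travel on the CLB grid:

```python
# Hedged sketch: score a candidate relocation by the Euclidean distances the
# labelled CLBs travel on the grid. All labels and coordinates are invented;
# longer moves suggest stretched routes and thus more timing degradation.
import math

def relocation_cost(placement: dict, candidate: dict) -> float:
    """Sum of Euclidean distances each labelled CLB travels."""
    return sum(
        math.dist(placement[label], candidate[label])
        for label in placement
    )

# Current placement and a candidate rearrangement: label -> (column, row).
current = {"A": (2, 3), "B": (3, 3), "C": (2, 4)}
candidate = {"A": (10, 3), "B": (11, 3), "C": (10, 4)}

cost = relocation_cost(current, candidate)
print(f"total displacement: {cost:.2f} CLB units")
# A small total displacement hints that the relocated nets stay short, so the
# function's performance should degrade little; a large one flags a risky move.
```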

Relevance: 30.00%

Abstract:

The World Business Council for Sustainable Development (WBCSD) defines Eco-Efficiency as follows: 'Eco-Efficiency is achieved by the delivery of competitively priced goods and services that satisfy human needs and bring quality of life, while progressively reducing ecological impacts and resource intensity throughout the life-cycle to a level at least in line with the earth's estimated carrying capacity'. From this point of view, Eco-Efficiency is a key concept for sustainable development, bringing together economic and ecological progress. Measuring the Eco-Efficiency of a company, factory or business is a complex process that involves the measurement and control of several relevant parameters or indicators, either applied globally to all companies in general or specific to the nature and particularities of the business itself.

In this study, an attempt was made to measure and evaluate the eco-efficiency of a pultruded composite processing company. For this purpose, the recommendations of the WBCSD [1] and the directives of the ISO 14031 standard [2] were followed and applied. The analysis was restricted to the main business branch of the company: the production and sale of standard GFRP pultrusion profiles. The main general indicators of eco-efficiency, as well as the specific indicators, were defined and determined according to ISO 14031 recommendations. Based on the indicators' figures, the value profile, the environmental profile and the pertinent eco-efficiency ratios were established and analyzed.

In order to evaluate potential improvements in the company's eco-performance, new indicator values and eco-efficiency ratios were estimated taking into account the implementation of new procedures, both upstream and downstream of the production process, namely: a) adoption of a new, more effective heating system for the pultrusion die in the manufacturing process, with lower heat losses; b) implementation of new software for stock management (raw materials and final products) that minimizes production failures and delivery delays to the final consumer; and c) a recycling approach, with partial reuse of scrap material derived from the manufacturing, cutting and assembly processes of GFRP profiles.

The last approach, in particular, seems to improve the eco-efficiency performance of the company significantly. Currently, by-products and wastes generated in the manufacturing process of GFRP profiles are landfilled, with supplementary costs to the company arising from scrap transport, landfill taxes and the test analyses required for waste materials. However, mechanical recycling of GFRP waste materials, with reduction to powdered and fibrous particulates, is a recycling process that can easily be carried out in heavy-duty cutting mills. The subsequent reuse of the obtained recyclates, either in a closed-loop process, as a filler replacement in the resin matrix of GFRP profiles, or as reinforcement of other composite materials produced by the company, will lead both to cost reductions in raw materials and landfilling and to the minimization of landfilled waste. These features lead to significant improvements in the subsequently assessed eco-efficiency ratios of the present case study, yielding a more sustainable product and manufacturing process for pultruded GFRP profiles.
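As a minimal, hedged sketch of the ratio computation described above: the WBCSD general formula expresses eco-efficiency as product or service value divided by environmental influence, so each indicator pair yields one ratio. All figures below are invented for illustration and are not the case study's data.

```python
# Minimal sketch of eco-efficiency ratios: value indicator / environmental
# indicator, following the WBCSD general formula. All numbers are invented
# placeholders, not the pultrusion company's actual indicators.
value_profile = {"net_sales_eur": 1_200_000, "profiles_sold_tonnes": 480}
environmental_profile = {
    "energy_GJ": 5_400,
    "water_m3": 2_100,
    "waste_landfilled_tonnes": 36,
}

def eco_efficiency(value: float, influence: float) -> float:
    """WBCSD ratio: value created per unit of environmental influence."""
    return value / influence

for name, influence in environmental_profile.items():
    ratio = eco_efficiency(value_profile["net_sales_eur"], influence)
    print(f"net sales per {name}: {ratio:,.1f}")
# The improvement scenarios (new die heating, stock software, waste recycling)
# would re-run this with updated indicator values and compare the ratios.
```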

Relevance: 30.00%

Abstract:

Reconfigurable computing has experienced considerable expansion in the last few years, due in part to the fast run-time partial reconfiguration features offered by recent SRAM-based Field Programmable Gate Arrays (FPGAs), which allow the real-time implementation of dynamic resource allocation strategies, with multiple independent functions from different applications sharing the same logic resources in the spatial and temporal domains. However, when the sequence of reconfigurations to be performed is not predictable, the efficient management of the available logic space becomes the greatest challenge posed to these systems. Resource allocation decisions have to be made concurrently with system operation, taking into account function priorities and optimizing the space currently available. As a consequence of the unpredictability of this allocation procedure, the logic space becomes fragmented, with many small areas of free resources failing to satisfy most requests and thus remaining unused. A rearrangement of the currently running functions is therefore necessary to obtain enough contiguous space to implement incoming functions, avoiding the spreading of their components and the resulting degradation of system performance. A novel active relocation procedure for Configurable Logic Blocks (CLBs) is presented herein, able to carry out online rearrangements, defragmenting the available FPGA resources without disturbing the functions currently running.
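A much-simplified, hedged sketch of the defragmentation problem (a 1-D column model rather than a real FPGA fabric; function names and sizes are invented): compacting the running functions toward one edge coalesces the scattered free columns into one contiguous region for an incoming function.

```python
# Simplified 1-D model of FPGA column allocation (illustrative only): running
# functions occupy column ranges; compaction slides them left, preserving
# order, so the fragmented free columns merge into one contiguous region.
functions = {"f1": (0, 3), "f2": (6, 8), "f3": (12, 15)}   # name -> (first, last)
TOTAL_COLUMNS = 20

def compact(funcs: dict) -> dict:
    """Relocate each function to the lowest free column, keeping order."""
    cursor, moved = 0, {}
    for name, (first, last) in sorted(funcs.items(), key=lambda kv: kv[1]):
        width = last - first + 1
        moved[name] = (cursor, cursor + width - 1)   # relocation target
        cursor += width
    return moved

free = TOTAL_COLUMNS - sum(l - f + 1 for f, l in functions.values())
compacted = compact(functions)
largest_gap = TOTAL_COLUMNS - max(l for _, l in compacted.values()) - 1
print(f"free columns: {free}, contiguous after compaction: {largest_gap}")
# A real implementation must move the CLB configurations while the functions
# keep running, which is the hard part the paper's active relocation addresses.
```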

Relevance: 30.00%

Abstract:

Coarse-Grained Reconfigurable Architectures (CGRAs) are emerging as enabling platforms to meet the high performance demanded by modern applications (e.g. 4G, CDMA). Recently proposed CGRAs offer time-multiplexing and dynamic application parallelism to enhance device utilization and reduce energy consumption, at the cost of additional memory (up to 50% of the overall platform area). To reduce the memory overheads, novel CGRAs employ either statistical compression, intermediate compact representation, or multicasting. Each compaction technique has different properties (i.e. compression ratio, decompression time and decompression energy) and is best suited to a particular class of applications. However, existing research only deals with these methods separately. Moreover, it only analyzes the compression ratio and does not evaluate the associated energy overheads. To tackle these issues, we propose a polymorphic compression architecture that interleaves these techniques in a single platform. The proposed architecture allows each application to take advantage of a separate compression/decompression hierarchy (consisting of various types and implementations of hardware/software decoders) tailored to its needs. Simulation results, using different applications (FFT, matrix multiplication and WLAN), reveal that the choice of compression hierarchy has a significant impact on compression ratio (up to 52%), decompression energy (up to 4 orders of magnitude), and configuration time (from 33 ns to 1.5 s) for the tested applications. Synthesis results reveal that introducing this adaptivity incurs negligible additional overhead (1%) compared to the overall platform area.
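A hedged sketch of the per-application selection idea (the three technique names come from the abstract, but the cost figures and weighting scheme are invented): choose the decoder hierarchy that minimizes a weighted cost over compression ratio, decompression energy and configuration time.

```python
# Illustrative selector for a per-application compression hierarchy. The
# technique names come from the abstract; the property figures and weights
# are invented. Entry: (compression_ratio, decomp_energy_nJ, config_time_us)
techniques = {
    "statistical": (0.48, 900.0, 1500.0),   # best ratio, costly decode
    "compact_ir":  (0.60,  40.0,   90.0),
    "multicast":   (0.75,   2.0,    0.033), # light decode, weakest ratio
}

def pick(weights: tuple) -> str:
    """Return the technique with the lowest weighted cost for this app."""
    w_ratio, w_energy, w_time = weights
    def cost(props):
        ratio, energy, time = props
        return w_ratio * ratio + w_energy * energy + w_time * time
    return min(techniques, key=lambda name: cost(techniques[name]))

# An energy-bound application weighs decompression energy heavily; a
# memory-bound one weighs the compression ratio instead.
print(pick((1.0, 0.01, 0.0)))   # energy-sensitive -> "multicast"
print(pick((100.0, 0.0, 0.0)))  # footprint-sensitive -> "statistical"
```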

Relevance: 30.00%

Abstract:

As of today, AUTOSAR is the de facto standard in the automotive industry, providing a common software architecture and development process for automotive applications. While this standard was originally written for single-core Electronic Control Units (ECUs), new guidelines and recommendations have been added recently to provide support for multicore architectures. This update came as a response to the steady increase in the number and complexity of the software functions embedded in modern vehicles, which call for the computing power of multicore execution environments. In this paper, we enumerate and analyze the design options and the challenges of porting AUTOSAR-based automotive applications onto multicore platforms. In particular, we investigate those options when considering the emerging many-core architectures, which provide a more scalable environment than traditional multicore systems. Such platforms are suitable for enabling massively parallel execution, and their design is better suited to partitioning and isolating software components.

Relevance: 30.00%

Abstract:

Distributed real-time systems, such as automotive applications, are becoming larger and more complex, thus requiring the use of more powerful hardware and software architectures. Furthermore, these distributed applications commonly have stringent real-time constraints, which implies that such applications would gain in flexibility if they were parallelized and distributed over the system. In this paper, we consider the problem of allocating fixed-priority fork-join parallel/distributed real-time tasks onto distributed multi-core nodes connected through a Flexible Time-Triggered Switched Ethernet network. We analyze the system requirements and present a set of formulations based on a constraint programming approach. Constraint programming allows us to express the relations between variables in the form of constraints. In contrast to approaches based on heuristics, our approach is guaranteed to find a feasible solution, if one exists. Furthermore, approaches based on constraint programming have been shown to obtain solutions for this type of formulation in reasonable time.
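A minimal, hedged sketch of a constraint programming formulation for this kind of allocation, using Google OR-tools CP-SAT. The task utilizations, node budgets and the single capacity constraint are invented stand-ins; the paper's formulation also covers fork-join structure, priorities and network scheduling.

```python
# Hedged sketch (not the paper's formulation): allocate fixed-priority tasks
# to multi-core nodes with a CP solver, here Google OR-tools CP-SAT.
from ortools.sat.python import cp_model

tasks = {"t1": 20, "t2": 35, "t3": 10, "t4": 25}   # utilization in % (invented)
nodes = {"n1": 70, "n2": 70}                       # per-node budget in % (invented)

model = cp_model.CpModel()
# x[t, n] == 1 iff task t is allocated to node n.
x = {(t, n): model.NewBoolVar(f"{t}_on_{n}") for t in tasks for n in nodes}

for t in tasks:                                    # each task runs on one node
    model.Add(sum(x[t, n] for n in nodes) == 1)
for n, cap in nodes.items():                       # per-node capacity constraint
    model.Add(sum(tasks[t] * x[t, n] for t in tasks) <= cap)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for (t, n), var in x.items():
        if solver.Value(var):
            print(f"{t} -> {n}")
else:
    # Unlike heuristics, the solver proves infeasibility when no solution exists.
    print("no feasible allocation")
```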

Relevance: 30.00%

Abstract:

3rd Workshop on High-performance and Real-time Embedded Systems (HIRES 2015), 21 January 2015, Amsterdam, Netherlands.

Relevance: 30.00%

Abstract:

Article in Press, Corrected Proof

Relevance: 30.00%

Abstract:

Recent embedded processor architectures containing multiple heterogeneous cores and non-coherent caches have renewed attention to the use of Software Transactional Memory (STM) as a building block for developing parallel applications. STM promises to ease concurrent and parallel software development, but relies on the possibility of aborting conflicting transactions to maintain data consistency, which in turn affects the execution time of tasks carrying transactions. As a result, the timing behaviour of the task set may not be predictable, so it is crucial to limit the execution time overheads resulting from aborts. In this paper we formalise a FIFO-based algorithm to order the sequence of commits of concurrent transactions. We then propose and evaluate two non-preemptive and one SRP-based fully-preemptive scheduling strategies in order to avoid transaction starvation.
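As an illustration of FIFO commit ordering (a minimal sketch, not the paper's formalised algorithm; class and method names are invented): each transaction takes a ticket when it starts, and a transaction may commit only when its ticket reaches the head of the queue, so commits retire in start order and no transaction starves behind newer ones.

```python
# Hedged sketch of FIFO-ordered commits for STM transactions. Names are
# invented; real STMs also handle read/write-set validation and aborts.
import threading
from collections import deque

class FifoCommitOrder:
    def __init__(self):
        self._cond = threading.Condition()
        self._queue = deque()     # tickets of live transactions, oldest first
        self._next_ticket = 0

    def begin(self) -> int:
        """Register a new transaction and return its FIFO ticket."""
        with self._cond:
            ticket = self._next_ticket
            self._next_ticket += 1
            self._queue.append(ticket)
            return ticket

    def commit(self, ticket: int, apply_writes):
        """Block until this transaction is the oldest, then commit."""
        with self._cond:
            while self._queue[0] != ticket:
                self._cond.wait()
            apply_writes()            # publish the transaction's write set
            self._queue.popleft()
            self._cond.notify_all()   # wake the next transaction in FIFO order
```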

Relevance: 30.00%

Abstract:

Recent technological advancements and market trends are causing an interesting phenomenon: the convergence of the High-Performance Computing (HPC) and Embedded Computing (EC) domains. On one side, new kinds of HPC applications are being required by markets that need huge amounts of information to be processed within a bounded amount of time. On the other side, EC systems are increasingly concerned with providing higher performance in real time, challenging the performance capabilities of current architectures. The advent of next-generation many-core embedded platforms offers the chance to intercept this converging need for predictable high performance, allowing HPC and EC applications to be executed on efficient and powerful heterogeneous architectures that integrate general-purpose processors with many-core computing fabrics. To this end, it is of paramount importance to develop new techniques for exploiting the massively parallel computation capabilities of such platforms in a predictable way. P-SOCRATES will tackle this important challenge by merging leading research groups from the HPC and EC communities. The time-criticality and parallelisation challenges common to both areas will be addressed by proposing an integrated framework for executing workload-intensive applications with real-time requirements on top of next-generation commercial off-the-shelf (COTS) platforms based on many-core accelerated architectures. The project will investigate new HPC techniques that fulfil real-time requirements. The main sources of indeterminism will be identified, and efficient mapping and scheduling algorithms will be proposed, along with the associated timing and schedulability analysis, to guarantee the real-time and performance requirements of the applications.

Relevance: 30.00%

Abstract:

The study of chemical diffusion in biological tissues is a research field of high importance, with applications in many clinical, research and industrial areas. Evaluating the diffusion and viscosity properties of chemicals in tissues is necessary to characterize treatments or the inclusion of preservatives in tissues or organs for low-temperature conservation. Recently, we demonstrated experimentally that the diffusion properties and dynamic viscosity of sugars and alcohols can be evaluated from optical measurements. Our studies were performed on skeletal muscle, but our results revealed that the same methodology can be used with other tissues and different chemicals. Considering the significant number of studies that can be carried out with this method, it becomes necessary to make the data processing and calculations easier. With this objective, we developed a software application that integrates all the processing and calculations, making the researcher's work easier and faster. Using the same experimental data previously used to estimate the diffusion and viscosity of glucose in skeletal muscle, we repeated the calculations with the new application. A comparison between the results obtained with the new application and those from the previous independent routines showed great similarity, thereby validating the application. This new tool is now available for use in similar research to obtain the diffusion properties of other chemicals in different tissues or organs.
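A hedged sketch of the kind of calculation such a tool might automate: fitting a saturating-exponential model to optical transmittance measurements and deriving a diffusion coefficient from the fitted time constant. The model form, the slab relation D ≈ d²/(π²τ), the sample thickness and all data points below are assumptions for illustration, not the paper's actual routines or measurements.

```python
# Illustrative fit: T(t) = T_inf * (1 - exp(-t/tau)) to transmittance data,
# then D ~ d^2 / (pi^2 * tau) (first-order slab-diffusion approximation).
# All numbers, the model and the thickness d are invented assumptions.
import numpy as np
from scipy.optimize import curve_fit

def model(t, t_inf, tau):
    return t_inf * (1.0 - np.exp(-t / tau))

t = np.array([0, 30, 60, 120, 240, 480])            # s, invented sampling times
T = np.array([0.0, 0.18, 0.31, 0.47, 0.58, 0.62])   # normalized transmittance (invented)

popt, _ = curve_fit(model, t, T, p0=(0.65, 120.0))
t_inf, tau = popt
d = 0.5e-3                                          # m, assumed sample thickness
D = d**2 / (np.pi**2 * tau)
print(f"tau = {tau:.1f} s, D = {D:.3e} m^2/s")
```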

Relevance: 20.00%

Abstract:

Over the last decade, Breast Magnetic Resonance imaging (RMM, from the Portuguese "Ressonância Magnética Mamária") has shown marked development in the diagnosis and characterization of breast carcinoma. The aim of this scientific work is to demonstrate, through a literature review, the advances of this modality in the evaluation of breast lesions, taking into account the following characteristics: elasticity (elastography), biochemistry (spectroscopy), cellularity (diffusion) and vascularization (perfusion). Evaluating these together with the morphological and kinetic characteristics (RMM) increases the specificity of breast MRI, thereby reducing the number of unnecessary biopsies. However, these technical developments must go hand in hand with innovation in the image-processing software and hardware of Magnetic Resonance equipment.

Relevance: 20.00%

Abstract:

Over the last decade, research in the Group Decision Making area has focused on building meeting rooms that can support the decision-making task and improve the quality of the resulting decisions. However, the emergence of the Ambient Intelligence concept contributes a new perspective, a different way of viewing traditional decision rooms. In this paper we present an overview of Smart Decision Rooms that provide intelligence to the meeting environment, and we also present LAID, an Ambient Intelligence environment oriented towards supporting Group Decision Making, together with some of the software tools that we have already installed in this environment.

Relevance: 20.00%

Abstract:

Over time, the XML markup language has acquired considerable importance in application development, standards definition and the representation of large volumes of data, such as databases. Today, processing XML documents within a short period of time is a critical activity in a wide range of applications, which makes it essential to choose the most appropriate mechanism to parse XML documents quickly and efficiently. When using a programming language such as Java for XML processing, it becomes necessary to use effective mechanisms, e.g. APIs, that allow large documents to be read and processed appropriately. This paper presents a performance study of the main existing Java APIs that deal with XML documents, in order to identify the most suitable one for processing large XML files.
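The paper benchmarks Java APIs, which typically divide into tree-building (DOM-style) and streaming (SAX/StAX-style) parsers. As a language-neutral sketch of that same trade-off, here is the tree-versus-streaming contrast using Python's standard library; the input file name "records.xml" and its element structure are hypothetical.

```python
# The DOM-vs-streaming trade-off the paper studies for Java, sketched in
# Python: a tree parse holds the whole document in memory, while an
# incremental parse visits elements and discards them as it goes.
import xml.etree.ElementTree as ET

def count_tree(path: str) -> int:
    """DOM-style: build the full tree first (memory grows with file size)."""
    root = ET.parse(path).getroot()
    return len(root.findall(".//record"))

def count_streaming(path: str) -> int:
    """SAX/StAX-style: stream events and free each element immediately."""
    count = 0
    for _, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == "record":
            count += 1
        elem.clear()          # keep memory flat on large files
    return count

if __name__ == "__main__":
    path = "records.xml"      # hypothetical large XML file of <record> elements
    print(count_tree(path), count_streaming(path))
```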

Relevance: 20.00%

Abstract:

Master's in Electrical and Computer Engineering (Mestrado em Engenharia Electrotécnica e de Computadores)