995 results for hardware design
Abstract:
We describe a compositional framework, together with its supporting toolset, for hardware/software co-design. Our framework integrates a formal approach into a traditional design flow. The formal approach is based on Interval Temporal Logic and its executable subset, Tempura. Refinement is the key element of our framework: it derives the software and hardware parts of the implementation from a single formal specification of the system, while preserving all properties of the system specification. During refinement, simulation is used to choose the appropriate refinement rules, which are then applied automatically in the HOL system. The framework is illustrated with two case studies. The work presented is part of a UK collaborative research project between the Software Technology Research Laboratory at De Montfort University and the Oxford University Computing Laboratory.
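As a hedged, generic illustration of the property-preservation argument (the notation here is ours, not taken from the paper): if each refinement step establishes the ITL implication between implementation and specification, then any property guaranteed by the specification carries over to the derived hardware/software implementation by transitivity:

$$ \vdash \mathit{Impl} \supset \mathit{Spec} \quad\text{and}\quad \vdash \mathit{Spec} \supset \varphi \quad\Longrightarrow\quad \vdash \mathit{Impl} \supset \varphi . $$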
Abstract:
We have recently proposed an extension to Petri nets that can directly capture all aspects of embedded digital systems. This extension is meant to be used as the internal model of our co-design environment. After analyzing relevant related work and giving a short introduction to our extension as background material, we describe the details of the timing model used in our approach, which is mainly based on Merlin's time model. We conclude the paper by discussing an example of its usage. © 2004 IEEE.
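For readers unfamiliar with Merlin-style timing, the sketch below illustrates the core idea only (it is our own simplified illustration, not the internal model of the paper): each transition carries a static interval [eft, lft], measured from the instant it last became enabled, and may fire only inside that relative window (the full semantics also forces firing before lft elapses).

```python
# Minimal, hypothetical sketch of Merlin-style time Petri net transitions.
class Transition:
    def __init__(self, name, pre, post, eft, lft):
        self.name = name
        self.pre = pre              # dict: place -> tokens required
        self.post = post            # dict: place -> tokens produced
        self.eft, self.lft = eft, lft
        self.enabled_since = None   # clock value when it became enabled

def enabled(marking, t):
    return all(marking.get(p, 0) >= n for p, n in t.pre.items())

def update_clocks(marking, transitions, now):
    # Record (or reset) the enabling instant of each transition.
    for t in transitions:
        if enabled(marking, t):
            if t.enabled_since is None:
                t.enabled_since = now
        else:
            t.enabled_since = None

def may_fire(t, now):
    # Firing allowed only inside the relative window [eft, lft].
    return (t.enabled_since is not None
            and t.enabled_since + t.eft <= now <= t.enabled_since + t.lft)

def fire(marking, t):
    for p, n in t.pre.items():
        marking[p] -= n
    for p, n in t.post.items():
        marking[p] = marking.get(p, 0) + n
    t.enabled_since = None
```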
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical time constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with the implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework, which is capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems by exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution with other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been further extended first to asymmetric ones, which are subject to major restrictions such as the lack of support for task migrations, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most of the scheduling operations to the hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework raised many interesting software challenges, one of which was timekeeping. In this regard, a further contribution is a novel data structure, called addressable binary heap (ABH). The ABH, which is conceptually a pointer-based implementation of a binary heap, shows very interesting average-case and worst-case performance when addressing the problem of tick-less timekeeping with high-resolution timers.
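The sketch below is a hedged illustration in the spirit of the ABH described above (the names and API are ours, not the dissertation's): a pointer-based binary min-heap in which every stored item keeps a back-reference to its node, so an armed high-resolution timer can be cancelled in O(log n) without any search, which is the property that matters for tick-less timekeeping.

```python
class _Node:
    __slots__ = ("key", "item", "parent", "left", "right")
    def __init__(self, key, item):
        self.key, self.item = key, item
        self.parent = self.left = self.right = None

class AddressableBinaryHeap:
    def __init__(self):
        self.root, self.size = None, 0

    def _node_at(self, index):
        # Walk from the root following the binary digits of the 1-based
        # index (leading bit skipped): 0 = go left, 1 = go right.
        node = self.root
        for bit in bin(index)[3:]:
            node = node.left if bit == "0" else node.right
        return node

    def _swap(self, a, b):
        a.key, b.key = b.key, a.key
        a.item, b.item = b.item, a.item
        a.item.node, b.item.node = a, b       # keep handles valid

    def _sift_up(self, node):
        while node.parent and node.key < node.parent.key:
            self._swap(node, node.parent)
            node = node.parent

    def _sift_down(self, node):
        while True:
            smallest = node
            for child in (node.left, node.right):
                if child and child.key < smallest.key:
                    smallest = child
            if smallest is node:
                return
            self._swap(node, smallest)
            node = smallest

    def push(self, key, item):
        """Insert `item` (e.g. a timer) with expiry `key`; O(log n)."""
        self.size += 1
        node = _Node(key, item)
        item.node = node                      # the item's "address"
        if self.size == 1:
            self.root = node
        else:
            parent = self._node_at(self.size // 2)
            node.parent = parent
            if self.size % 2 == 0:
                parent.left = node
            else:
                parent.right = node
            self._sift_up(node)

    def _detach_last(self):
        last = self._node_at(self.size)
        if last.parent is None:
            self.root = None
        elif last.parent.left is last:
            last.parent.left = None
        else:
            last.parent.right = None
        self.size -= 1
        return last

    def pop_min(self):
        """Remove and return (key, item) with the earliest expiry."""
        key, item = self.root.key, self.root.item
        last = self._detach_last()
        if self.root is not None:
            self.root.key, self.root.item = last.key, last.item
            self.root.item.node = self.root
            self._sift_down(self.root)
        return key, item

    def cancel(self, item):
        """Remove `item` through its handle without searching; O(log n)."""
        node = item.node
        last = self._detach_last()
        if last is not node:
            node.key, node.item = last.key, last.item
            node.item.node = node
            self._sift_up(node)
            self._sift_down(node)
```

Any object with a writable `node` attribute (a timer descriptor, say) can serve as the handle passed to `push` and `cancel`.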
Abstract:
Nuclear magnetic resonance (NMR) is a versatile technique that relies on spin-bearing nuclei. Since its discovery, NMR has become an indispensable tool in countless applications in physics, chemistry, biology, and medicine. The main problem of NMR is its low sensitivity, which is due to the very small energy splitting at room temperature. For proton spins, which have the largest gyromagnetic ratio, the degree of polarization is only ~7*10^(-5) even in the largest available magnetic fields (24 T). Owing to this low inherent polarization, a theoretical sensitivity enhancement of more than 10^4 is therefore possible. In this work, various technical aspects and different polarizing agents for Dynamic Nuclear Polarization (DNP) were investigated. The technical development of the mobile setup comprises the use of a new Halbach magnet, the construction of new probe heads, and the automation of the experiments by means of a LabVIEW-based program. Furthermore, two new polarizing agents with special characteristics were tested for Overhauser and low-temperature DNP. In addition, the feasibility of NMR experiments on heteronuclei (19F and 13C) in the mobile setup at 0.35 T was demonstrated. These results illustrate the potential of the DNP polarization technique when heteronuclei with a small gyromagnetic ratio have to be polarized. The gain in sensitivity should enable many new applications, especially in medicine.
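For context (a hedged back-of-the-envelope relation, not part of the original abstract), the thermal equilibrium polarization follows from the Boltzmann populations of the two Zeeman levels in the high-temperature limit:

$$ P = \tanh\!\left(\frac{\gamma_H \hbar B_0}{2 k_B T}\right) \;\approx\; \frac{\gamma_H \hbar B_0}{2 k_B T}, $$

which, for protons at $B_0 = 24\,\mathrm{T}$ and room temperature, is of the same order of magnitude as the ~7*10^(-5) quoted above; DNP aims to close this gap by transferring the far larger electron spin polarization to the nuclei.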
Abstract:
The ASSERT project defined new software engineering methods and tools for the development of critical embedded real-time systems in the space domain. The ASSERT model-driven engineering process was one of the achievements of the project and is based on the concept of property-preserving model transformations. The key element of this process is that the non-functional properties of the software system must be preserved during model transformations. Property preservation is carried out through model transformations compliant with the Ravenscar Profile and provides a formal basis for the process. In this way, the so-called Ravenscar Computational Model is central to the whole ASSERT process. This paper describes the work done in the HWSWCO study, whose main objective has been to address the integration of the hardware/software co-design phase into the ASSERT process. To that end, the non-functional properties of the software system must also be preserved during hardware synthesis. Keywords: Ada 2005, Ravenscar profile, hardware/software co-design, real-time systems, high-integrity systems, ORK
Abstract:
In this work, a unified algorithm-architecture-circuit co-design environment for complex FPGA system development is presented. The main objective is to find an efficient methodology for designing a configurable, optimized FPGA system with as little verification effort as possible, so as to shorten the development period. The design process of a proposed high-performance FFT/iFFT processor for a Multiband Orthogonal Frequency Division Multiplexing Ultra Wideband (MB-OFDM UWB) system is given as an example to demonstrate the proposed methodology. The methodology has been tested and found suitable for a wide range of complex FPGA system designs and verification tasks.
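One common way to keep the verification effort low in such a flow is to check the hardware FFT output against a software golden model; the sketch below is our own illustration of that idea under the assumption of a NumPy environment (the 128-point length matches the MB-OFDM UWB symbol size, but none of this code is taken from the paper).

```python
import numpy as np

N = 128  # MB-OFDM UWB uses a 128-point FFT per OFDM symbol

def golden_fft(samples):
    """Floating-point reference model for the FFT core."""
    return np.fft.fft(samples, n=N)

def check_against_golden(stimulus, dut_output, tol=1e-2):
    """Compare the (e.g. fixed-point) FPGA/simulation output of the FFT
    core against the golden model; `tol` absorbs quantisation error."""
    ref = golden_fft(stimulus)
    error = np.max(np.abs(ref - np.asarray(dut_output)))
    return error <= tol, error

# Hypothetical self-test: feeding the golden model's own output through
# the checker must pass with zero error.
x = np.random.randn(N) + 1j * np.random.randn(N)
ok, err = check_against_golden(x, golden_fft(x))
assert ok and err == 0.0
```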
Abstract:
Federal Highway Administration, Safety Design Division, McLean, Va.
Abstract:
Federal Highway Administration, Safety Design Division, McLean, Va.
Abstract:
Federal Highway Administration, Safety Design Division, McLean, Va.
Abstract:
Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while keeping the size of the hardware circuit roughly constant. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified by using profiling tools. Hardware acceleration can yield significant performance improvements for heavily mathematical calculations or frequently repeated functions. The performance of an SoC system can therefore be improved if hardware acceleration is applied to the element that incurs the performance overhead. The concepts presented in this study can easily be applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware/software co-design platform are the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core; the hotspot function of the target application is identified by using critical attributes such as cycles per loop and loop counts. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance; the identified hotspot function is converted into a hardware accelerator and mapped onto the hardware platform, and two types of hardware acceleration methods, a central-bus design and a co-processor design, are implemented for comparison in the proposed architecture. (3) System characteristics such as performance, energy consumption, and resource costs are measured and analyzed, the trade-off among these three factors is compared and balanced, and different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, and hardware optimization techniques are used to achieve higher performance at lower resource cost. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves a 7.9X performance improvement and saves 75.85% of energy consumption.
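As an illustrative aside (the numbers below are hypothetical placeholders, not measurements from the thesis), the performance side of this trade-off is commonly estimated with Amdahl's law: only the profiled hotspot benefits from the accelerator, so the achievable system-level speedup is bounded by the fraction of execution time the hotspot accounts for.

```python
def overall_speedup(hotspot_fraction, accelerator_speedup):
    """Amdahl's law: only the profiled hotspot benefits from the FPGA
    accelerator; the remaining software fraction runs at original speed."""
    return 1.0 / ((1.0 - hotspot_fraction)
                  + hotspot_fraction / accelerator_speedup)

# Hypothetical example: a profiled hotspot taking 90% of the runtime,
# accelerated 20x in hardware, yields roughly a 6.9x system-level speedup.
print(round(overall_speedup(0.90, 20.0), 1))
```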
Abstract:
International audience
Abstract:
This thesis highlights the importance of ad-hoc designed and developed embedded systems in the implementation of intelligent sensor networks. Four application areas are presented as evidence: Precision Agriculture, Bioengineering, Automotive, and Structural Health Monitoring. For each field, the design and development of one or more smart devices is reported, together with the on-board processing, experimental validation, and field tests. In particular, the design and development of a fruit meter is presented. In the bioengineering field, three different projects are reported, detailing the architectures implemented and the validation tests conducted. Two prototype realizations of an internal temperature measurement system for electric motors in an automotive application are then discussed. Lastly, the HW/SW design of a Smart Sensor Network is analyzed: the network features on-board data management and processing, integration into an IoT toolchain, Wireless Sensor Network developments, and an AI framework for vibration-based structural assessment.
Abstract:
This master's thesis investigates different aspects of the Dual-Active-Bridge (DAB) converter and extends them to Multi-Active Bridges (MAB). The thesis starts with an overview of the applications of the DAB and MAB and their importance. The analytical part of the thesis includes the derivation of the peak and RMS currents, which are required for finding the losses present in the system. The power converters considered in this thesis are the DAB, the Triple-Active Bridge (TAB), and the Quad-Active Bridge (QAB). All theoretical calculations are compared with simulation results from the PLECS software to verify the correctness of the reviewed and developed theory. A Hardware-in-the-Loop (HIL) simulation is conducted to check the control operation in real time with the help of the RT Box from Plexim. Additionally, since in real systems a digital signal processor (DSP), a system-on-chip, or a field-programmable gate array is employed for the control of power electronic systems, the control in the conducted real-time simulation (RTS) is executed by a DSP.
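For orientation (a standard textbook relation, included here as a hedged sketch rather than a result of the thesis), the power transferred by a single-phase-shift-modulated DAB with phase shift $\varphi$ (in radians), transformer turns ratio $n$, switching frequency $f_s$ and total series inductance $L$ is

$$ P = \frac{n\, V_1 V_2\, \varphi \left(\pi - |\varphi|\right)}{2 \pi^2 f_s L}, \qquad |\varphi| \le \frac{\pi}{2}, $$

from which the piecewise-linear inductor current waveform, and hence the peak and RMS currents used in the loss analysis, are typically derived.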