843 results for Hardware and software
Abstract:
Security-critical communications devices must be evaluated to the highest possible standards before they can be deployed. This process includes tracing potential information flow through the device's electronic circuitry, for each of the device's operating modes. Increasingly, however, security functionality is being entrusted to embedded software running on microprocessors within such devices, so new strategies are needed for integrating information flow analyses of embedded program code with hardware analyses. Here we show how standard compiler principles can augment high-integrity security evaluations to allow seamless tracing of information flow through both the hardware and software of embedded systems. This is done by unifying input/output statements in embedded program execution paths with the hardware pins they access, and by associating significant software states with corresponding operating modes of the surrounding electronic circuitry.
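A minimal sketch of this unification idea, in Python (illustrative only: the pin names, source locations, and graph structure below are our own assumptions, not taken from the paper):

    # Illustrative sketch (not the paper's tool): unify I/O statements in an
    # embedded program with the hardware pins they access, then trace flow.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Pin:
        name: str          # e.g. "UART0_RX"

    @dataclass(frozen=True)
    class IoStatement:
        location: str      # source location of the read/write
        pin: Pin           # hardware pin the statement accesses
        direction: str     # "in" or "out"

    @dataclass
    class FlowGraph:
        edges: dict = field(default_factory=dict)   # node -> set of successors

        def add_flow(self, src, dst):
            self.edges.setdefault(src, set()).add(dst)

        def reachable(self, src):
            seen, stack = set(), [src]
            while stack:
                n = stack.pop()
                for m in self.edges.get(n, ()):
                    if m not in seen:
                        seen.add(m)
                        stack.append(m)
            return seen

    # Hypothetical path: data read on UART0_RX flows through a buffer to SPI0_TX.
    rx = IoStatement("uart.c:42", Pin("UART0_RX"), "in")
    tx = IoStatement("spi.c:17", Pin("SPI0_TX"), "out")
    g = FlowGraph()
    g.add_flow(rx, "buf")
    g.add_flow("buf", tx)
    assert tx in g.reachable(rx)   # an input pin can influence an output pin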
Abstract:
NanoStreams is a consortium project funded by the European Commission under its FP7 programme and is a major effort to address the challenges of processing vast amounts of data in real time, with a markedly lower carbon footprint than the state of the art. The project addresses both the energy challenge and the high performance required by emerging applications in real-time streaming data analytics. NanoStreams achieves this goal by designing and building disruptive micro-server solutions incorporating real-silicon prototype micro-servers based on System-on-Chip and reconfigurable hardware technologies.
Abstract:
The main motivation for the work presented here began with previously conducted experiments with a programming concept at the time named "Macro". These experiments led to the conviction that it would be possible to build an engine control system from scratch that could eliminate many of the current problems of engine management systems in a direct and intrinsic way. It was also hoped that it would minimize the full range of software and hardware needed to produce a final, fully functional system. Initially, this paper makes a comprehensive survey of the state of the art in the software and corresponding hardware of automotive tools and automotive ECUs. Problems arising from such software are identified, and it becomes clear that practically all of them stem, directly or indirectly, from the continued and comprehensive use of extremely long and complex "tool chains". Similarly, on the hardware side, it is argued that the problems stem from the extreme complexity and interdependency inside processor architectures. The conclusions are presented as an extensive list of "pitfalls", thoroughly enumerated, identified and characterized. Solutions are also proposed for the various current issues, together with a path for implementing them. All of this final work is part of a proof-of-concept system called "ECU2010". The central element of this system is the aforementioned "Macro" concept: a graphical block representing one of the many operations required in an automotive system, with arithmetic, logic, filtering, integration and multiplexing functions, among others. The end result of the proposed work is a single, fully integrated tool enabling the development and management of the entire system through one simple visual interface. Part of the presented result relies on a hardware platform fully adapted to the software, enabling high flexibility and scalability while using exactly the same technology for the ECU, data logger and peripherals alike. Current systems follow a mostly evolutionary path, allowing only the online calibration of parameters but never the online alteration of the automotive functionality algorithms themselves. By contrast, the system developed and described in this thesis had the advantage of following a "clean-slate" approach, whereby everything could be rethought globally. In the end, of all the system's characteristics, "LIVE-Prototyping" is the most relevant feature, allowing automotive algorithms (e.g. injection, ignition, lambda control) to be adjusted 100% online while the engine keeps running, without ever having to stop or reboot to make such changes. This consequently eliminates the "turnaround delay" typically present in current automotive systems, thereby enhancing their efficiency and handling.
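As a rough sketch of what such a "Macro" block might look like as a data structure (the interface below is hypothetical, invented for illustration; ECU2010's actual representation is not given in this abstract):

    # Illustrative sketch only: a "Macro"-style block as a composable operation
    # node; names and structure are hypothetical, not taken from ECU2010.
    class Macro:
        def __init__(self, name, fn, *inputs):
            self.name = name       # block label shown in the visual editor
            self.fn = fn           # the operation (arithmetic, logic, filter...)
            self.inputs = inputs   # upstream Macro blocks or constants

        def evaluate(self):
            args = [i.evaluate() if isinstance(i, Macro) else i
                    for i in self.inputs]
            return self.fn(*args)

    # Hypothetical wiring: rpm and load feed a base injection calculation;
    # blocks reference only their upstream blocks, so they can be rewired.
    rpm  = Macro("rpm", lambda: 3000.0)
    load = Macro("load", lambda: 0.6)
    base = Macro("base_injection", lambda r, l: 0.002 * l * r / 1000.0, rpm, load)
    print(base.evaluate())

Because each block only references its upstream blocks, replacing or rewiring a block while the system keeps running is, conceptually, what a feature like "LIVE-Prototyping" exploits.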
Abstract:
One of the main issues of the current education system is the lack of student motivation. This aspect, together with the permanent change that Information and Communications Technologies involve, represents a major challenge for the teacher: to continuously update contents and to keep the student's interest awake. A tremendously useful tool in classrooms is the integration of projects with participative and collaborative dynamics, where the teacher acts mainly as a guide for the students' activity instead of being a mere transmitter of knowledge and evaluation. As a specific example of project-based learning, the EDUROVs project consists of building an economical underwater robot from low-cost materials while still allowing the integration and programming of many accessories and sensors on a minimal budget, using open-source hardware and software.
Abstract:
ALICE, an experiment at CERN using the LHC, specializes in analyzing lead-ion collisions. ALICE will study the properties of quark-gluon plasma, a state of matter in which quarks and gluons, under conditions of very high temperature and density, are no longer confined inside hadrons. Such a state of matter probably existed just after the Big Bang, before particles such as protons and neutrons were formed. The SDD detector, one of the ALICE subdetectors, is part of the ITS, which is composed of 6 cylindrical layers with the innermost one attached to the beam pipe. The ITS tracks and identifies particles near the interaction point, and it also aligns the tracks of the particles detected by the more external detectors. The two middle ITS layers contain all 260 SDD detectors. A multichannel readout board, called CARLOSrx, simultaneously receives the data coming from 12 SDD detectors. In total, 24 CARLOSrx boards are needed to read the data coming from all the SDD modules (detector plus front-end electronics). CARLOSrx packs the data arriving from the front-end electronics over optical link connections, stores them in a large data FIFO, and then sends them to the DAQ system. Each CARLOSrx is composed of two boards: one, called CARLOSrx data, reads the data coming from the SDD detectors and configures the FEE; the other, called CARLOSrx clock, sends the clock signal to all the FEE. This thesis describes the hardware design and firmware features of both the CARLOSrx data and CARLOSrx clock boards, which handle the whole SDD readout chain. A description of the software tools needed to test and configure the front-end electronics is presented at the end of the thesis.
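A schematic sketch, in Python, of the pack/buffer/forward pattern described above (12 channels multiplexed into one FIFO toward the DAQ); the framing and function names are invented for illustration and are not taken from the CARLOSrx firmware:

    # Schematic sketch (not CARLOSrx firmware): merge 12 detector channels,
    # buffer the framed events in a FIFO, and drain them toward the DAQ.
    from collections import deque

    N_CHANNELS = 12          # SDD detectors feeding one readout board
    fifo = deque()           # stands in for the large on-board data FIFO

    def pack(channel, payload):
        # Invented framing: tag each event with its source channel.
        return bytes([channel]) + payload

    def receive_cycle(frontend_data):
        # frontend_data: {channel: payload} arriving over the optical links
        for ch in range(N_CHANNELS):
            if ch in frontend_data:
                fifo.append(pack(ch, frontend_data[ch]))

    def drain_to_daq(send):
        while fifo:
            send(fifo.popleft())

    receive_cycle({0: b"\x01\x02", 5: b"\x0a"})
    drain_to_daq(lambda frame: print(frame.hex()))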
Abstract:
"The Fourth ROLM MIL-SPEC Users Group Conference"--Pref.
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
The objective of this work is to upgrade an existing database management environment to version 11.2 of the Oracle database software and to a state-of-the-art hardware platform. Several databases scattered across different servers are migrated, with zero downtime, to a consolidated environment of two nodes arranged in "active-active" high availability via Oracle RAC, backed by a fully independent contingency environment synchronized in real time via Oracle GoldenGate. A study of the current environment is carried out and, based on a growth estimate, a minimum hardware and software configuration is proposed to implement the requirements of the database management environment in the short and medium term with guarantees of success. Once the hardware has been acquired, the Operating System is installed, updated and configured, together with the servers' redundant access to the storage array. The Oracle cluster software and the database software are then installed, and an instance is created to host the required schemas of the databases to be consolidated. The schemas are then migrated to the consolidated environment and their real-time replication to the contingency machine is established, using Oracle GoldenGate in both cases. Finally, a backup scheme is created and tested, including logical and physical copies of the database itself and of the cluster configuration files, from which it will be possible to restore the environment completely.
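As a hedged illustration of the kind of sanity check one might run after establishing GoldenGate replication (the connection strings and table names below are placeholders, not from this work, and cx_Oracle is assumed as the client library):

    # Hedged sketch: compare row counts between the consolidated primary and
    # the GoldenGate-replicated contingency database. All DSNs and table
    # names are hypothetical placeholders.
    import cx_Oracle

    PRIMARY = "app_user/secret@primary-scan/app"       # hypothetical DSN
    STANDBY = "app_user/secret@contingency-host/app"   # hypothetical DSN

    def row_count(dsn, table):
        with cx_Oracle.connect(dsn) as conn:
            cur = conn.cursor()
            cur.execute(f"SELECT COUNT(*) FROM {table}")
            return cur.fetchone()[0]

    for table in ("ORDERS", "CUSTOMERS"):              # illustrative tables
        p, s = row_count(PRIMARY, table), row_count(STANDBY, table)
        print(f"{table}: primary={p} standby={s} {'OK' if p == s else 'DRIFT'}")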
Abstract:
In large flexible software systems, bloat occurs in many forms, causing excess resource utilization and resource bottlenecks. This results in lost throughput and wasted joules. However, mitigating bloat is not easy; efforts are best applied where the savings would be substantial. To aid this, we develop an analytical model establishing the relation between resource bottlenecks, bloat, performance and power. Analyses with the model place into perspective results from the first experimental study of the power-performance implications of bloat. In the experiments we find that while bloat reduction can provide as much as 40% energy savings, the degree of impact depends on hardware and software characteristics. We confirm predictions from our model with selected results from our experimental study. Our findings show that a software-only view is inadequate when assessing the effects of bloat. The impact of bloat on physical resource usage and power must be understood from a full-systems perspective in order to properly deploy bloat reduction solutions and reap their power-performance benefits.
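The abstract does not reproduce the model's equations; the following toy sketch, whose formulas and numbers are entirely our own assumptions, only illustrates how bloat on a bottleneck resource can couple throughput, power, and energy per request:

    # Toy illustration only; the paper's actual model is not reproduced here.
    # Bloat inflates per-request demand on the bottleneck resource, which cuts
    # throughput; energy per request then follows from power / throughput.
    def energy_per_request(capacity, base_demand, bloat_factor,
                           idle_power, dynamic_power):
        demand = base_demand * (1.0 + bloat_factor)   # bloat inflates demand
        throughput = capacity / demand                # bottleneck-limited rate
        utilization = 1.0                             # assume saturation
        power = idle_power + dynamic_power * utilization
        return power / throughput                     # joules per request

    before = energy_per_request(1e9, 1e5, 0.67, 100.0, 50.0)
    after  = energy_per_request(1e9, 1e5, 0.0,  100.0, 50.0)
    print(f"energy saved by removing bloat: {1 - after / before:.0%}")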
Abstract:
During the last few decades, an unprecedented technological growth has been at the center of embedded systems design, with Moore's Law being the leading factor of this trend. Today, in fact, an ever-increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-a-Chip (SoC), the design space exploration needed to find the best design has exploded, and hardware designers now face the problem of a huge design space. Virtual Platforms have long been used to enable hardware-software co-design, but today they must cope with the huge complexity of both hardware and software systems. In this thesis, two different research works on Virtual Platforms are presented: the first is intended for the hardware developer, to easily allow complex cycle-accurate simulations of many-core SoCs. The second work exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs), with the goal of increased simulation speed. The term Virtualization can be used in the context of many-core systems not only to refer to the aforementioned hardware emulation tools (Virtual Platforms), but also for two other main purposes: 1) to help the programmer achieve the maximum possible performance of an application, by hiding the complexity of the underlying hardware; and 2) to efficiently exploit the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis focuses on virtualization techniques that aim to mitigate, and where possible overcome, some of the challenges introduced by the many-core design paradigm.
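As a generic illustration of the core loop of a cycle-accurate virtual platform (textbook structure, not the simulator developed in this thesis):

    # Generic sketch of a cycle-stepped simulation loop, as used by virtual
    # platforms; this is textbook structure, not the thesis's simulator.
    class Core:
        def __init__(self, core_id):
            self.core_id = core_id
            self.retired = 0

        def step(self, cycle):
            # Model one clock cycle of this core (fetch/decode/execute...).
            self.retired += 1

    class VirtualPlatform:
        def __init__(self, n_cores):
            self.cores = [Core(i) for i in range(n_cores)]
            self.cycle = 0

        def run(self, cycles):
            for _ in range(cycles):
                for core in self.cores:      # every core advances each cycle
                    core.step(self.cycle)
                self.cycle += 1              # shared notion of time

    vp = VirtualPlatform(n_cores=64)         # a many-core SoC model
    vp.run(1000)
    print(vp.cycle, vp.cores[0].retired)

The per-cycle loop over all cores is the kind of regular data parallelism that the second research work presumably maps onto GPGPU threads to increase simulation speed.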
Abstract:
The main requirements for DRM platforms, namely implementing an effective user experience and strong security measures to prevent unauthorized use of content, are discussed. A comparison of hardware-based and software-based platforms is made, showing the general inherent advantages of hardware DRM solutions. An analysis and evaluation of the main flaws of hardware platforms is conducted, pointing out possibilities for overcoming them. An overview of the existing concepts for the practical realization of hardware DRM protection reveals their advantages and disadvantages, as well as the increasing demand for a multi-core architecture that could assure effective DRM protection without decreasing the user's freedom or introducing risks to end-system security.