917 results for Distributed embedded systems


Relevance:

80.00%

Publisher:

Abstract:

The difficulties encountered in implementing large scale CM codes on multiprocessor systems are now fairly well understood. Despite the claims of shared memory architecture manufacturers to provide effective parallelizing compilers, these have not proved adequate for large or complex programs, and significant programmer effort is usually required to achieve reasonable parallel efficiencies on significant numbers of processors. The paradigm of Single Program Multiple Data (SPMD) domain decomposition with message passing, in which each processor runs the same code on a subdomain of the problem and communicates through the exchange of messages, has for some time been demonstrated to provide the required level of efficiency, scalability, and portability across both shared and distributed memory systems, without the need to re-author the code in a new language or even to support differing message passing implementations. Extension of the methods into three dimensions has been enabled through the engineering of PHYSICA, a framework supporting 3D, unstructured mesh, continuum mechanics modeling. PHYSICA uses six inspectors; part of the challenge in automating parallelization is proving the equivalence of inspectors so that they can be merged into as few as possible.
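For illustration, the following is a minimal sketch of the SPMD domain-decomposition pattern the abstract describes: each process owns a subdomain and exchanges boundary ("halo") data with its neighbours by message passing. It uses MPI in C as one possible message passing layer; the subdomain size and data values are illustrative assumptions, not details of PHYSICA.

/* Minimal SPMD halo-exchange sketch: each rank owns a 1-D subdomain
 * and exchanges its boundary cells with its neighbours. */
#include <mpi.h>
#include <stdio.h>

#define LOCAL_N 100                 /* interior cells per rank (assumed) */

int main(int argc, char **argv)
{
    int rank, size;
    double u[LOCAL_N + 2];          /* +2 halo cells */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int i = 0; i < LOCAL_N + 2; i++)
        u[i] = (double)rank;        /* dummy initial data */

    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* Send my first/last interior cell; receive the neighbour's
     * boundary cell into my halo slots. */
    MPI_Sendrecv(&u[1],           1, MPI_DOUBLE, left,  0,
                 &u[LOCAL_N + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[LOCAL_N],     1, MPI_DOUBLE, right, 1,
                 &u[0],           1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d: halo left=%g right=%g\n", rank, u[0], u[LOCAL_N + 1]);
    MPI_Finalize();
    return 0;
}

The same source runs on every processor, so the program is portable across shared and distributed memory machines; only the number of ranks and the decomposition change.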

Relevance:

80.00%

Publisher:

Abstract:

The traditional process of filling the medicine trays and dispensing medicines to patients in hospitals is done manually by reading the printed paper medicine chart. This process can be very strenuous and error-prone, given the number of sub-tasks involved in the entire workflow and the dynamic nature of the work environment. Therefore, efforts are being made to digitise the medication dispensation process by introducing a mobile application called the Smart Dosing application. The introduction of the Smart Dosing application into the hospital workflow raises security concerns and calls for security requirement analysis. This thesis is written as part of the smart medication management project at the Embedded Systems Laboratory, Åbo Akademi University. The project aims at digitising the medicine dispensation process by integrating information from various health systems and making it available through the Smart Dosing application. The application is intended to be used on a tablet computer incorporated into the medicine tray. The smart medication management system includes the medicine tray, the tablet device, and the medicine cups with their cup holders. Introducing the Smart Dosing application should not interfere with the existing process carried out by the nurses, and it should require minimal modifications to the tray design and the workflow. Re-designing the tray involves integrating the device running the application into the tray in a manner that users find convenient and that helps them make fewer errors. The main objective of this thesis is to enhance the security of the hospital medicine dispensation process by ensuring the security of the Smart Dosing application at various levels. The method used in this thesis was to analyse how the tray design and the application user interface design can help prevent errors, and which secure technology choices have to be made before starting the development of the next prototype of the Smart Dosing application. The thesis first establishes the context of use of the application, the end-users and their needs, and the errors made in the everyday medication dispensation workflow, through continuous discussions with the nursing researchers. The thesis then gains insight into the vulnerabilities, threats, and risks of using a mobile application in the hospital medication dispensation process. The resulting list of security requirements was produced by analysing the previously built prototype of the Smart Dosing application, through continuous interactive discussions with the nursing researchers, and through an exhaustive state-of-the-art study of the security risks of using mobile applications in a hospital context. The thesis also uses the OCTAVE Allegro method to help the reader understand the likelihood and impact of threats, and what steps should be taken to prevent or fix them. The security requirements obtained as a result are a starting point for the developers of the next iteration of the Smart Dosing prototype.
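As an aside on the scoring step of OCTAVE Allegro mentioned above, the method ranks threats by a relative risk score built from prioritised impact areas. The sketch below shows that idea in C; the impact areas, priorities, and impact values are purely illustrative assumptions, not figures from the thesis.

/* Illustrative relative-risk scoring in the spirit of OCTAVE Allegro:
 * each impact area has a priority rank and an impact value (1..3),
 * and a threat's relative risk score is the sum of priority * impact. */
#include <stdio.h>

struct impact_area {
    const char *name;
    int priority;   /* higher = more important to the organisation */
    int impact;     /* 1 = low, 2 = medium, 3 = high */
};

int main(void)
{
    struct impact_area areas[] = {       /* assumed example values */
        { "patient safety",      5, 3 },
        { "privacy of records",  4, 2 },
        { "workflow disruption", 3, 2 },
        { "reputation",          2, 1 },
    };
    int score = 0;
    for (unsigned i = 0; i < sizeof areas / sizeof areas[0]; i++)
        score += areas[i].priority * areas[i].impact;
    printf("relative risk score: %d\n", score);   /* used to rank threats */
    return 0;
}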

Relevance:

80.00%

Publisher:

Abstract:

Advances in FPGA technology and growing processing requirements have led to the emergence of All Programmable Systems-on-Chip, which incorporate a hard processing system and programmable logic, enabling the development of specialized computer systems for a wide range of practical applications, including data and signal processing, high performance computing, and embedded systems, among many others. To provide an infrastructure capable of exploiting the benefits of such a reconfigurable system, the main goal of the thesis is to implement an infrastructure composed of hardware, software, and network resources that incorporates the necessary services for the operation, management, and interfacing of peripherals, which constitute the basic building blocks for the execution of applications. The project will be developed using a chip from the Zynq-7000 All Programmable System-on-Chip family.
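To make the hard-processor/programmable-logic split concrete, the sketch below shows one common way the ARM processing system of a Zynq-7000 running Linux can talk to a custom peripheral in the programmable logic: memory-mapping its AXI registers through /dev/mem. The base address and register layout are illustrative assumptions that depend on the actual block design, not details of the thesis infrastructure.

/* Sketch: the processing system (PS) reading and writing registers of a
 * custom peripheral in the programmable logic (PL) of a Zynq-7000 device. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define PL_BASE_ADDR  0x43C00000u   /* assumed AXI-Lite base of the PL core */
#define MAP_SIZE      0x1000u

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *regs = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, PL_BASE_ADDR);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    regs[0] = 0x1;                          /* hypothetical control register */
    printf("status = 0x%08x\n", regs[1]);   /* hypothetical status register  */

    munmap((void *)regs, MAP_SIZE);
    close(fd);
    return 0;
}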

Relevance:

80.00%

Publisher:

Abstract:

This thesis covers the challenges of creating and maintaining an introductory engineering laboratory. The history of the University of Illinois Electrical and Computer Engineering department's introductory course, ECE 110, is recounted. The current state of the course, as of Fall 2008, is discussed along with the challenges arising from the use of a hand-wired prototyping board with logic gates. A plan for overcoming these issues using a new microcontroller-based board with a pseudo hardware description language is presented. The implementation of the new microcontroller-based system is described in detail, along with its accompanying description language. The new system was trialled in several sections during the Fall 2008 semester alongside the old system, and the students' final performance with the two approaches is compared in terms of design, performance, complexity, and enjoyment. In its first run, the system shows great promise, increasing the students' enjoyment and improving the performance of their designs.

Relevance:

80.00%

Publisher:

Abstract:

Distributed generation systems must comply with standards that limit the current harmonics injected into the grid. In order to satisfy these grid requirements, passive filters are connected between the inverter and the grid. This work compares the characteristic response of the traditional inductive (L) filter with that of the inductive-capacitive-inductive (LCL) filter. It is shown that increasing the inductance L gives good suppression of the ripple current around the inverter switching frequency, while the LCL filter provides better harmonic attenuation with a smaller filter size. The main drawback of the LCL filter is its impedance, characterized by a resonance peak that must be damped to avoid instability; passive or active techniques can be used to damp this resonance. To address this issue, this dissertation presents a comparison of current control for PV grid-tied inverters with L and LCL filters and also discusses the use of active and passive damping for different regions of the resonance frequency. From the mathematical models, a design methodology for the controllers was developed and the dynamic behavior of the closed-loop system was investigated. To validate the studies developed in this work, experimental results are presented using a three-phase 5 kW experimental platform, whose main components and their functions are also discussed. The experimental results support the theoretical analysis and illustrate the performance of the grid-connected PV inverter system. It is shown that the resonance frequency of the system and the sampling frequency can be combined to calculate a critical frequency, below which damping of the LCL filter is essential. The experimental results also show that active damping via a virtual resistor, although simple to implement, effectively damps the resonance of the LCL filter and allows the system to operate stably within the predetermined parameters.
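A small worked example of the resonance/critical frequency comparison described above is sketched below: it evaluates the standard LCL resonance formula f_res = (1/2π)·sqrt((L1+L2)/(L1·L2·Cf)) and compares it with the critical frequency commonly taken as fs/6. The component values and sampling frequency are illustrative assumptions, not the parameters of the 5 kW platform.

/* Sketch: LCL resonance frequency versus the fs/6 critical frequency. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979;
    const double L1 = 2.0e-3;     /* inverter-side inductance [H] (assumed) */
    const double L2 = 0.5e-3;     /* grid-side inductance [H] (assumed)     */
    const double Cf = 10.0e-6;    /* filter capacitance [F] (assumed)       */
    const double fs = 12000.0;    /* sampling frequency [Hz] (assumed)      */

    double f_res  = (1.0 / (2.0 * PI)) * sqrt((L1 + L2) / (L1 * L2 * Cf));
    double f_crit = fs / 6.0;

    printf("resonance frequency: %.0f Hz\n", f_res);
    printf("critical frequency : %.0f Hz\n", f_crit);
    puts(f_res < f_crit ? "resonance below critical frequency: damping is essential"
                        : "resonance above critical frequency: inherent damping may suffice");
    return 0;
}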

Relevance:

80.00%

Publisher:

Abstract:

This document presents GEmSysC, a unified cryptographic API for embedded systems. Software layers implementing this API can be built over existing libraries, allowing embedded software to access cryptographic functions in a consistent way that does not depend on the underlying library. The API complies with good practices for API design and for embedded software development, and it took its inspiration from other cryptographic libraries and standards. The main inspiration for creating GEmSysC was the CMSIS-RTOS standard, which defines a unified API for embedded software in an implementation-independent way, but targets operating systems instead of cryptographic functions. GEmSysC is made of a generic core and attachable modules, one for each cryptographic algorithm. This document contains the specification of the core of GEmSysC and of three of its modules: AES, RSA and SHA-256. GEmSysC was built targeting embedded systems, but this does not restrict its use to such systems; after all, embedded systems are simply very limited computing devices. As a proof of concept, two implementations of GEmSysC were made. One was built over wolfSSL, an open source library for embedded systems; the other was built over OpenSSL, which is open source and a de facto standard but does not specifically target embedded systems. The implementation built over wolfSSL was evaluated on a Cortex-M3 processor with no operating system, while the implementation built over OpenSSL was evaluated on a personal computer running the Windows 10 operating system. This document presents test results showing GEmSysC to be simpler than other libraries in some aspects. The results show that both implementations incur little computation-time overhead compared to the underlying cryptographic libraries themselves: measured per cryptographic algorithm, the overhead is between roughly 0% and 0.17% for the implementation over wolfSSL and between 0.03% and 1.40% for the one over OpenSSL. This document also presents the memory costs of each implementation.
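The layering idea described above (a backend-neutral interface with thin adapters onto existing libraries) can be sketched as follows. The gem_* names are hypothetical illustrations, not the actual GEmSysC specification; the backend here uses OpenSSL's one-shot SHA256() purely for brevity.

/* Sketch: a unified hashing call implemented as a thin adapter over one
 * concrete library; swapping the adapter changes the backend, not the
 * application code. */
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

#define GEM_SHA256_LEN 32

/* Backend-neutral interface the application sees (hypothetical name). */
int gem_sha256(const unsigned char *msg, size_t len,
               unsigned char out[GEM_SHA256_LEN]);

/* Adapter for one particular library (here: OpenSSL). */
int gem_sha256(const unsigned char *msg, size_t len,
               unsigned char out[GEM_SHA256_LEN])
{
    return SHA256(msg, len, out) != NULL ? 0 : -1;
}

int main(void)
{
    const char *msg = "hello";
    unsigned char digest[GEM_SHA256_LEN];

    if (gem_sha256((const unsigned char *)msg, strlen(msg), digest) != 0)
        return 1;
    for (int i = 0; i < GEM_SHA256_LEN; i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}

An equivalent adapter over an embedded-oriented library such as wolfSSL would implement the same gem_sha256 signature, which is the property that keeps application code independent of the underlying library.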

Relevance:

80.00%

Publisher:

Abstract:

Work carried out at the company ULMA Embedded Solutions

Relevance:

80.00%

Publisher:

Abstract:

Work carried out at the company ULMA Embedded Solutions

Relevance:

80.00%

Publisher:

Abstract:

Wireless Sensor Network (WSN) methods applied to oil lifting constitute an area of growing technical and scientific demand, in view of the optimizations they make possible in existing processes. The main objective of this dissertation is to present the development of embedded systems dedicated to a wireless sensor network based on IEEE 802.15.4 and the ZigBee protocol, connecting sensors, actuators, and the PLC (Programmable Logic Controller), with the aim of solving the problems present in the deployment and maintenance of the physical communication of current oil lifting units based on the Plunger-Lift method. The embedded systems developed for this application are responsible for acquiring information from the sensors and controlling the actuators of the devices present at the well, and also for using the Modbus protocol so that this network becomes transparent to the PLC responsible for controlling production and delivering information to the SISAL supervisory system.
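One way the "transparent to the PLC" requirement is commonly met is by mapping the sensor readings received over ZigBee into a Modbus holding-register table that the PLC polls as if the devices were wired directly. The sketch below illustrates that mapping; the register addresses, variable names, and scaling are assumptions for illustration, not the register map used in the project.

/* Sketch: exposing well-head sensor readings to the PLC as Modbus
 * holding registers; a real implementation would hand this table to a
 * Modbus slave stack. */
#include <stdint.h>
#include <stdio.h>

#define REG_CASING_PRESSURE  0   /* holding register 40001 (assumed) */
#define REG_TUBING_PRESSURE  1   /* holding register 40002 (assumed) */
#define REG_VALVE_STATE      2   /* holding register 40003 (assumed) */
#define NUM_REGS             3

static uint16_t holding_regs[NUM_REGS];

/* Called whenever a ZigBee frame with fresh sensor data arrives. */
void update_registers(float casing_psi, float tubing_psi, int valve_open)
{
    holding_regs[REG_CASING_PRESSURE] = (uint16_t)(casing_psi * 10.0f); /* 0.1 psi units */
    holding_regs[REG_TUBING_PRESSURE] = (uint16_t)(tubing_psi * 10.0f);
    holding_regs[REG_VALVE_STATE]     = valve_open ? 1 : 0;
}

int main(void)
{
    update_registers(312.4f, 155.0f, 1);   /* dummy readings */
    for (int i = 0; i < NUM_REGS; i++)
        printf("reg %d = %u\n", i, holding_regs[i]);
    return 0;
}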

Relevance:

80.00%

Publisher:

Abstract:

This thesis presents a study of Grid data access patterns in distributed analysis in the CMS experiment at the LHC accelerator. The study ranges from a deep analysis of the historical access patterns of the most relevant data types in CMS to the exploitation of a supervised Machine Learning classification system, set up as machinery able to predict future data access patterns - i.e. the so-called dataset "popularity" of CMS datasets on the Grid - with a focus on specific data types. All CMS workflows run on the computing centers (Tiers) of the Worldwide LHC Computing Grid (WLCG), and in particular the distributed analysis system sustains hundreds of users and the applications (or "jobs") they submit every day. These jobs access different data types hosted on disk storage systems at a large set of WLCG Tiers. The detailed study of how this data is accessed, in terms of data types, hosting Tiers, and time periods, yields valuable insight into storage occupancy over time and into the different access patterns, and ultimately allows suggested actions to be extracted from this information (e.g. targeted disk clean-up and/or data replication). In this sense, the application of Machine Learning techniques makes it possible to learn from past data and to gain predictive power over future CMS data access patterns. Chapter 1 provides an introduction to High Energy Physics at the LHC. Chapter 2 describes the CMS Computing Model, with special focus on the data management sector, also discussing the concept of dataset popularity. Chapter 3 describes the study of CMS data access patterns at different levels of depth. Chapter 4 offers a brief introduction to basic machine learning concepts, describes their application in CMS, and discusses the results obtained with this approach in the context of this thesis.
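For illustration only, a binary "popularity" classifier of the kind described above can be reduced, at inference time, to scoring a few access features and thresholding the result. The sketch below shows this with a logistic model in C; the features (past accesses, distinct users, dataset age) and the weights are purely illustrative assumptions, not the model trained in the thesis.

/* Sketch: classifying one dataset as popular or not from historical
 * access features, using a fixed logistic model. */
#include <stdio.h>
#include <math.h>

static double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

int main(void)
{
    /* illustrative features for one dataset over the past weeks */
    double accesses = 1250.0;   /* number of job accesses */
    double users    = 42.0;     /* distinct users          */
    double age_wks  = 8.0;      /* weeks since creation    */

    /* assumed trained weights */
    double w_acc = 0.002, w_usr = 0.05, w_age = -0.10, bias = -2.0;

    double p = sigmoid(w_acc * accesses + w_usr * users + w_age * age_wks + bias);
    printf("P(popular) = %.2f -> %s\n",
           p, p > 0.5 ? "keep replicated on disk" : "candidate for clean-up");
    return 0;
}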

Relevance:

80.00%

Publisher:

Abstract:

Catering to society's demand for high performance computing, billions of transistors are now integrated on IC chips to deliver unprecedented performance. With increasing transistor density, power consumption and power density are growing exponentially. The increasing power consumption translates directly into high chip temperature, which not only raises packaging/cooling costs but also degrades the performance, reliability, and life span of computing systems. Moreover, high chip temperature also greatly increases the leakage power consumption, which is becoming more and more significant with the continuous scaling of transistor size. As the semiconductor industry continues to evolve, power and thermal challenges have become the most critical challenges in the design of new generations of computing systems. In this dissertation, we addressed the power/thermal issues from the system-level perspective. Specifically, we sought to employ real-time scheduling methods to optimize the power/thermal efficiency of real-time computing systems, with the leakage/temperature dependency taken into consideration. In our research, we first explored the fundamental principles of how to employ dynamic voltage scaling (DVS) techniques to reduce the peak operating temperature when running a real-time application on a single-core platform. We further proposed a novel real-time scheduling method, "M-Oscillations", to reduce the peak temperature when scheduling a hard real-time periodic task set, and we developed three checking methods to guarantee the feasibility of a periodic real-time schedule under a peak temperature constraint. We then extended our research from single-core to multi-core platforms, investigating the energy estimation problem and developing a lightweight and accurate method to calculate the energy consumption of a given voltage schedule on a multi-core platform. Finally, we concluded the dissertation with a discussion of future extensions of our research.
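To make the peak-temperature question concrete, the sketch below estimates the temperature trajectory of a single-core DVS schedule with the first-order thermal model dT/dt = a*P - b*T that is common in this line of work, alternating between a high and a low processor speed (an "oscillating" pattern). The power model P = s^3, the speed levels, and the constants a and b are illustrative assumptions, not the parameters or the actual M-Oscillations algorithm from the dissertation.

/* Sketch: forward-Euler simulation of chip temperature under an
 * alternating-speed DVS schedule, tracking the peak value. */
#include <stdio.h>

int main(void)
{
    const double a = 0.8, b = 0.05;   /* assumed thermal constants */
    const double dt = 0.01;           /* simulation step [s]       */
    double T = 0.0, T_peak = 0.0;     /* temperature above ambient */

    for (int step = 0; step < 200000; step++) {
        double t = step * dt;
        /* alternate between high and low speed every 5 s */
        double s = ((int)(t / 5.0) % 2 == 0) ? 1.0 : 0.6;
        double P = s * s * s;                 /* dynamic power ~ s^3   */
        T += (a * P - b * T) * dt;            /* forward-Euler update  */
        if (T > T_peak) T_peak = T;
    }
    printf("estimated peak temperature rise: %.2f (model units)\n", T_peak);
    return 0;
}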

Relevance:

40.00%

Publisher:

Abstract:

The Dynamic Data eXchange (DDX) is our third-generation platform for building distributed robot controllers. DDX allows a coalition of programs to share data at run time through an efficient shared memory mechanism managed by a store. Further, stores on multiple machines can be linked by means of a global catalog, and data is moved between the stores on an as-needed basis by multicasting. Heterogeneous computer systems are handled. We describe the architecture of DDX and the standard clients we have developed, which let us rapidly build complex control systems with minimal coding.
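The store concept described above can be illustrated with plain POSIX shared memory: one process publishes a named variable into a shared segment and other processes on the same machine map the same name and read it without copying. This is only a sketch of the mechanism, not the actual DDX client API; the segment name and the pose variable are illustrative assumptions.

/* Sketch: sharing a named variable between processes via POSIX shared
 * memory, in the spirit of a DDX store (local, single-machine case). */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

struct robot_pose { double x, y, theta; };

int main(void)
{
    /* the "store": a named shared-memory segment holding one variable */
    int fd = shm_open("/ddx_demo_pose", O_CREAT | O_RDWR, 0666);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, sizeof(struct robot_pose)) < 0) { perror("ftruncate"); return 1; }

    struct robot_pose *pose = mmap(NULL, sizeof *pose,
                                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (pose == MAP_FAILED) { perror("mmap"); return 1; }

    /* a "writer" client updates the variable ... */
    pose->x = 1.5; pose->y = -0.2; pose->theta = 0.78;

    /* ... and any "reader" client mapping the same name sees it */
    printf("pose = (%.2f, %.2f, %.2f)\n", pose->x, pose->y, pose->theta);

    munmap(pose, sizeof *pose);
    close(fd);
    shm_unlink("/ddx_demo_pose");
    return 0;
}

In DDX itself, a global catalog and multicasting extend this idea across machines, moving data between per-machine stores only when a remote client actually needs it.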