956 results for embedded, system, entropy, pool, TRNG, random, ADC
Abstract:
Hardware/software partitioning is a key stage in the co-design process of embedded systems. At this stage it is decided which components will be implemented as hardware co-processors and which will be implemented on a general-purpose processor. The decision is made by exploring the design space, evaluating a set of candidate solutions to establish which of them achieves the best balance among all the design metrics. To explore the solution space, most proposals use metaheuristic algorithms, most notably Genetic Algorithms and Simulated Annealing. In many cases this choice is not based on comparative analyses involving several algorithms on the same problem. This work presents the application of two algorithms, Stochastic Hill Climbing and Stochastic Hill Climbing with Restart, to the hardware/software partitioning problem. To validate the use of these algorithms, they are applied to a case study, namely the hardware/software partitioning of a JPEG encoder. In all experiments both algorithms reach solutions comparable to those obtained by the most frequently used algorithms.
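As a rough illustration of the search strategy this abstract describes, the sketch below implements stochastic hill climbing with random restarts over a binary hardware/software assignment vector. The cost model (a hardware area budget plus total software execution time) and all parameter values are illustrative assumptions, not the cost function used in the paper.

```python
import random

def cost(assignment, hw_area, sw_time, hw_area_limit):
    """Toy cost model: penalize partitions that exceed the hardware area budget,
    otherwise score by total software execution time (lower is better)."""
    area = sum(hw_area[i] for i, in_hw in enumerate(assignment) if in_hw)
    time = sum(sw_time[i] for i, in_hw in enumerate(assignment) if not in_hw)
    penalty = 1e6 if area > hw_area_limit else 0.0
    return time + penalty

def stochastic_hill_climb(n_tasks, cost_fn, iters=1000):
    """Start from a random partition and repeatedly try a random one-task move,
    keeping it whenever it does not worsen the cost."""
    current = [random.random() < 0.5 for _ in range(n_tasks)]
    best_cost = cost_fn(current)
    for _ in range(iters):
        neighbor = current[:]
        flip = random.randrange(n_tasks)      # random neighbor: move one task
        neighbor[flip] = not neighbor[flip]
        c = cost_fn(neighbor)
        if c <= best_cost:                    # accept improving (or equal) moves
            current, best_cost = neighbor, c
    return current, best_cost

def hill_climb_with_restart(n_tasks, cost_fn, restarts=20, iters=500):
    """Re-seed the search from a fresh random partition several times and keep
    the best result, which lets the search escape local optima."""
    best = None
    for _ in range(restarts):
        sol, c = stochastic_hill_climb(n_tasks, cost_fn, iters)
        if best is None or c < best[1]:
            best = (sol, c)
    return best

# Example with made-up numbers: 8 tasks, area budget of 25 units
# hw_area = [random.randint(1, 10) for _ in range(8)]
# sw_time = [random.randint(5, 20) for _ in range(8)]
# fn = lambda a: cost(a, hw_area, sw_time, hw_area_limit=25)
# partition, value = hill_climb_with_restart(8, fn)
```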
Abstract:
With the fast-changing global business landscape, manufacturing companies face increasing pressure to reduce production cost, increase equipment utilization and deliver innovative products in order to compete with countries with low labour and production costs. One of the methods is zero downtime. Unfortunately, current research and industrial solutions do not provide a user-friendly development environment for creating “Adaptive microprocessor size with supercomputer performance” solutions to reduce downtime. Most solutions are PC-based computers running off-the-shelf research software tools, which are inadequate for the space-constrained manufacturing environments of developed countries. On the other hand, developing solutions for the various manufacturing domains takes too much time, and there is a lack of tools for creating such solutions in a rapid or adaptive way. Therefore, this research aims to understand the needs, trends and gaps of manufacturing prognostics and to define the research potential related to a rapid embedded system framework for prognostics.
Abstract:
The design of real-time embedded systems requires precise control of the passage of time in the computation performed by the modules and in the communication between them. Generally, these systems consist of several modules, each designed for a specific task and with restricted communication with the other modules in order to obtain the required timing. This strategy, called federated architecture, is becoming unviable in the face of current demands for cost, performance and quality in embedded systems. To address this problem, the use of integrated architectures has been proposed, consisting of one or a few circuits performing multiple tasks in parallel in a more efficient manner and at reduced cost. However, one has to ensure that the integrated architecture has temporal composability, i.e. the ability to design each task temporally isolated from the others in order to maintain the individual characteristics of each task. Precision Timed Machines are an integrated-architecture approach that uses multithreaded processors to ensure temporal composability. This work presents the implementation of a Precision Timed Machine named Hivek-RT. This processor, a VLIW supporting Simultaneous Multithreading, is capable of executing real-time tasks efficiently when compared to a traditional processor. In addition to the efficient implementation, the proposed architecture facilitates the implementation of real-time tasks from a programming point of view.
Abstract:
In this work a system of autonomous agents engaged in cyclic pursuit (under a constant bearing (CB) strategy) is considered, in which one informed agent (the leader) also senses and responds to a stationary beacon. Building on the framework proposed in previous work on beacon-referenced cyclic pursuit, necessary and sufficient conditions for the existence of circling equilibria in a system with one informed agent are derived, with a discussion of stability and performance. In a physical testbed, the leader (robot) is equipped with a sound-sensing apparatus composed of a real-time embedded system that estimates the direction of arrival of sound with an Interaural Level and Phase Difference algorithm, using empirically determined phase and level signatures and breaking front-back ambiguity with appropriate sensor placement. Furthermore, a simple framework for implementing and evaluating the performance of control laws with the Robot Operating System (ROS) is proposed, demonstrated, and discussed.
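To make the direction-of-arrival step concrete, here is a minimal sketch of the kind of signature matching such an algorithm could perform: a measured interaural level difference and phase difference are compared against a table of per-bearing signatures and the closest match wins. The signature values, angular grid and weights below are made up for illustration; the paper relies on empirically determined signatures for its specific sensor arrangement.

```python
import numpy as np

# Hypothetical per-bearing signatures: expected interaural level difference (dB)
# and phase difference (rad) for each candidate bearing in degrees.
SIGNATURES = {
    angle: (np.interp(angle, [-90, 0, 90], [-6.0, 0.0, 6.0]),   # level difference
            np.interp(angle, [-90, 0, 90], [-1.2, 0.0, 1.2]))   # phase difference
    for angle in range(-90, 91, 5)
}

def estimate_doa(level_diff_db, phase_diff_rad, w_level=1.0, w_phase=4.0):
    """Return the bearing whose stored signature is closest to the measurement
    (weighted squared distance in the level/phase feature space)."""
    best_angle, best_err = None, float("inf")
    for angle, (ld, pd) in SIGNATURES.items():
        err = w_level * (level_diff_db - ld) ** 2 + w_phase * (phase_diff_rad - pd) ** 2
        if err < best_err:
            best_angle, best_err = angle, err
    return best_angle
```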
Abstract:
Reconfigurable platforms are a promising technology that offers an interesting trade-off between flexibility and performance, which many recent embedded system applications demand, especially in fields such as multimedia processing. These applications typically involve multiple ad-hoc tasks for hardware acceleration, which are usually represented using formalisms such as Data Flow Diagrams (DFDs), Data Flow Graphs (DFGs), Control and Data Flow Graphs (CDFGs) or Petri Nets. However, none of these models is able to capture at the same time the pipeline behavior between tasks (which can therefore coexist in order to minimize the application execution time), their communication patterns, and their data dependencies. This paper shows that knowledge of all this information can be effectively exploited to reduce the resource requirements and improve the timing performance of modern reconfigurable systems, where a set of hardware accelerators is used to support the computation. For this purpose, this paper proposes a novel task representation model, named Temporal Constrained Data Flow Diagram (TCDFD), which includes all this information. This paper also presents a mapping-scheduling algorithm that is able to take advantage of the new TCDFD model. It aims at minimizing the dynamic reconfiguration overhead while meeting the communication requirements among the tasks. Experimental results show that the presented approach achieves up to 75% resource savings and up to 89% reduction in reconfiguration overhead with respect to other state-of-the-art techniques for reconfigurable platforms.
Abstract:
In this report, we develop an intelligent adaptive neuro-fuzzy controller using adaptive neuro-fuzzy inference system (ANFIS) techniques. We start with a standard proportional-derivative (PD) controller and use the PD controller data to train the ANFIS system to develop a fuzzy controller. We then propose and validate a method to implement this control strategy on commercial off-the-shelf (COTS) hardware. An analysis is made of the choice of filters for attitude estimation. These choices are limited by the complexity of the filter and by the computing ability and memory constraints of the microcontroller. Simplified Kalman filters are found to be good at estimating attitude under the above constraints. Using model-based design techniques, the models are implemented on an embedded system. This enables the deployment of fuzzy controllers on enthusiast-grade controllers. We evaluate the feasibility of the proposed control strategy in a model-in-the-loop simulation. We then propose a rapid prototyping strategy, allowing us to deploy these control algorithms on a system consisting of a combination of an ARM-based microcontroller and two Arduino-based controllers. We then use the code generation capabilities of MATLAB/Simulink in combination with multiple open-source projects to deploy code to an ARM Cortex-M4 based controller board. We also evaluate this strategy on an ARM A8-based board and on a much less powerful Arduino-based flight controller. We conclude by demonstrating the feasibility of fuzzy controllers on COTS hardware, pointing out the limitations of the current hardware and making suggestions for hardware that we think would be better suited for memory-heavy controllers.
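As a rough sketch of the kind of "simplified Kalman filter" the report refers to for attitude estimation, the snippet below fuses a gyro rate with an accelerometer-derived angle in a one-axis angle/bias filter. The state layout, noise values and tuning are generic assumptions commonly used on small microcontrollers, not the filter actually deployed in the report.

```python
import numpy as np

class SimpleAttitudeKalman:
    """One-axis angle/bias Kalman filter fusing a gyro rate with an
    accelerometer-derived angle; a common 'simplified' attitude estimator."""

    def __init__(self, q_angle=0.001, q_bias=0.003, r_measure=0.03):
        self.x = np.zeros(2)                  # state: [angle, gyro bias]
        self.P = np.eye(2) * 0.1              # state covariance
        self.Q = np.diag([q_angle, q_bias])   # process noise
        self.R = r_measure                    # measurement noise

    def update(self, gyro_rate, accel_angle, dt):
        # Predict: integrate the bias-corrected gyro rate
        self.x[0] += dt * (gyro_rate - self.x[1])
        F = np.array([[1.0, -dt], [0.0, 1.0]])
        self.P = F @ self.P @ F.T + self.Q * dt
        # Correct with the accelerometer angle (direct measurement of the angle state)
        H = np.array([1.0, 0.0])
        S = H @ self.P @ H + self.R
        K = self.P @ H / S
        self.x += K * (accel_angle - self.x[0])
        self.P -= np.outer(K, H) @ self.P
        return self.x[0]                      # current angle estimate
```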
Abstract:
Spectral sensors are a wide class of devices that are extremely useful for detecting essential information about the environment and materials with a high degree of selectivity. Recently, they have reached high levels of integration and low implementation cost, making them suited for fast, small, and non-invasive monitoring systems. However, the useful information is hidden in the spectra and is difficult to decode, so mathematical algorithms are needed to infer the values of the variables of interest from the acquired data. Among the different families of predictive modeling, Principal Component Analysis and the techniques stemming from it can provide very good performance as well as small computational and memory requirements. For these reasons, they allow the prediction to be implemented even in embedded and autonomous devices. In this thesis, I present four practical applications of these algorithms to the prediction of different variables: moisture of soil, moisture of concrete, freshness of anchovies/sardines, and concentration of gases. In all of these cases, the workflow is the same. Initially, an acquisition campaign was performed to acquire both spectra and the variables of interest from samples. These data are then used as input for the creation of the prediction models, to solve both classification and regression problems. From these models, an array of calibration coefficients is derived and used to implement the prediction in an embedded system. The presented results show that this workflow was successfully applied to very different scientific fields, obtaining autonomous and non-invasive devices able to predict the value of physical parameters of choice from new spectral acquisitions.
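The step of folding a PCA-plus-regression model into a flat array of calibration coefficients can be illustrated as below. The component count, the least-squares regression in score space and the function names are assumptions for the sketch, not the thesis' exact modeling pipeline; the point is that the on-device prediction collapses to a single dot product plus an offset.

```python
import numpy as np

def fit_calibration(spectra, targets, n_components=3):
    """Offline step (illustrative): PCA on the spectra followed by linear
    regression in the score space, folded into one coefficient vector."""
    mean = spectra.mean(axis=0)
    X = spectra - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    P = Vt[:n_components].T                      # loadings (n_wavelengths x k)
    scores = X @ P                               # sample scores
    beta, *_ = np.linalg.lstsq(
        np.column_stack([np.ones(len(scores)), scores]), targets, rcond=None)
    b0, b = beta[0], beta[1:]
    coeffs = P @ b                               # fold PCA + regression together
    intercept = b0 - mean @ coeffs
    return coeffs, intercept

def predict_on_device(spectrum, coeffs, intercept):
    """Online step: what the embedded firmware would evaluate per acquisition."""
    return float(spectrum @ coeffs + intercept)
```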
Abstract:
Miniaturized flying robotic platforms, called nano-drones, have the potential to revolutionize the autonomous robot industry thanks to their very small form factor. The nano-drones' limited payload only allows for a sub-100 mW microcontroller unit for the on-board computations. Therefore, traditional computer vision and control algorithms are too computationally expensive to be executed on board these palm-sized robots, and we are forced to rely on artificial intelligence to trade off accuracy in favor of lightweight pipelines for autonomous tasks. However, relying on deep learning exposes us to the problem of generalization, since the deployment scenario of a convolutional neural network (CNN) often contains visual cues and features different from those learned during training, leading to poor inference performance. Our objective is to develop and deploy an adaptation algorithm, based on the concept of latent replays, that allows us to fine-tune a CNN to work in new and diverse deployment scenarios. To do so, we start from an existing model for visual human pose estimation, called PULP-Frontnet, which identifies the pose of a human subject in space through its four output variables, and we present the design of our novel adaptation algorithm, which features automatic data gathering and labeling and on-device deployment. We showcase the ability of our algorithm to adapt PULP-Frontnet to new deployment scenarios, improving the R2 scores of the four network outputs, with respect to an unknown environment, from approximately [−0.2, 0.4, 0.0, −0.7] to [0.25, 0.45, 0.2, 0.1]. Finally, we demonstrate that it is possible to fine-tune our neural network in real time (i.e., under 76 seconds) using the target parallel ultra-low-power GAP8 System-on-Chip on board the nano-drone, and we show that all adaptation operations can take place using less than 2 mWh of energy, a small fraction of the available battery capacity.
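For reference, the per-output R2 score quoted above is the standard coefficient of determination and can be computed as in this short sketch; the pose variable names in the comment are only an assumed example of the four outputs.

```python
import numpy as np

def r2_scores(y_true, y_pred):
    """Per-output coefficient of determination R^2 = 1 - SS_res / SS_tot,
    computed independently for each output column (can be negative when the
    prediction is worse than predicting the mean)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2, axis=0)
    ss_tot = np.sum((y_true - y_true.mean(axis=0)) ** 2, axis=0)
    return 1.0 - ss_res / ss_tot

# Example: arrays of shape (n_samples, 4), e.g. x, y, z, yaw of the estimated pose
# r2_scores(ground_truth_poses, predicted_poses) -> four R^2 values
```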
Abstract:
Meniscal injuries can occur secondary to trauma or be instigated by the changes in knee-joint function that are associated with aging, osteo- and rheumatoid arthritis, disturbances in gait and obesity. Sixty per cent of persons over 50 years of age manifest signs of meniscal pathology. The surgical and arthroscopic measures that are currently implemented to treat meniscal deficiencies bring only transient relief from pain and effect but a temporary improvement in joint function. Although tissue-engineering-based approaches to meniscal repair are now being pursued, an appropriate in-vitro model has not been conceived. The aim of this study was to develop an organ-slice culturing system to simulate the repair of human meniscal lesions in vitro. The model consists of a ring of bovine meniscus enclosing a chamber that represents the defect and reproduces its sequestered physiological microenvironment. The defect, which is closed with a porous membrane, is filled with fragments of synovial tissue, as a source of meniscoprogenitor cells, and a fibrin-embedded, calcium-phosphate-entrapped depot of the meniscogenic agents BMP-2 and TGF-β1. After culturing for 2 to 6 weeks, the constructs were evaluated histochemically and histomorphometrically, as well as immunohistochemically for the apoptotic marker caspase 3 and collagen types I and II. Under the defined conditions, the fragments of synovium underwent differentiation into meniscal tissue, which bonded with the parent meniscal wall. Both the parent and the neoformed meniscal tissue survived the duration of the culturing period without significant cell losses. The concept on which the in-vitro system is based was thus validated.
Abstract:
A photoluminescence (PL) study of the individual electron states localized in a random potential is performed in artificially disordered superlattices embedded in a wide parabolic well. The valence-band bowing of the parabolic potential provides a variation of the emission energies which splits the optical transitions corresponding to different wells within the random potential. The blueshift of the PL lines emitted by individual random wells, observed with increasing disorder strength, is demonstrated. Varying the temperature and magnetic field allowed the behavior of the electrons localized in individual wells of the random potential to be distinguished.
Abstract:
This paper presents a compact embedded fuzzy system for three-phase induction-motor scalar speed control. The control strategy consists in keeping the voltage-to-frequency ratio of the induction-motor supply source constant. A fuzzy control system is built on a digital signal processor, which uses the speed error and the speed-error variation to change both the fundamental voltage amplitude and the frequency of a sinusoidal pulse-width modulation inverter. An alternative optimized method for embedded fuzzy-system design is also proposed. The controller performance with respect to reference and load-torque variations is evaluated through experimental results. A comparative analysis with a conventional proportional-integral controller is also carried out.
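To illustrate the scalar (constant V/f) strategy the abstract describes, the sketch below derives the voltage amplitude from the commanded frequency and lets a pluggable rule block adjust the frequency command from the speed error and its variation. The rated values, the low-speed boost term and the fuzzy_rule placeholder are assumptions for illustration, not the paper's DSP implementation.

```python
def vf_command(freq_cmd_hz, v_rated=220.0, f_rated=60.0, v_boost=5.0):
    """Constant V/f scalar control: the stator voltage amplitude tracks the
    commanded frequency so the ratio stays near rated V / rated f, with a small
    low-speed boost to compensate the stator resistance drop."""
    v = v_boost + (v_rated - v_boost) * (freq_cmd_hz / f_rated)
    return min(max(v, 0.0), v_rated)

def speed_loop_step(speed_ref, speed_meas, prev_error, freq_cmd, fuzzy_rule, dt):
    """One control step: a fuzzy (or PI-like) block maps the speed error and its
    variation to a frequency-command increment; the voltage follows from V/f."""
    error = speed_ref - speed_meas
    d_error = (error - prev_error) / dt
    freq_cmd += fuzzy_rule(error, d_error) * dt   # placeholder inference step
    return freq_cmd, vf_command(freq_cmd), error
```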
Abstract:
Gamma-ray tomography experiments have been carried out to detect spatial patterns in the porosity of a 0.27 m diameter column packed with steel Raschig rings of different sizes (12.6, 37.9, and 76 mm), using a first-generation CT system (Chen et al., 1998). A fast Fourier transform tomographic reconstruction algorithm has been used to calculate the spatial variation over the column cross section. Cross-sectional gas porosity and solid holdup distributions were determined. The values of the cross-sectional average gas porosity were ε = 0.849, 0.938 and 0.966 for the 12.6, 37.9, and 76 mm rings, respectively. The radial holdup variation within the packed bed has also been determined. The variation of the circumferentially averaged gas holdup in the radial direction indicates that the porosity in the column wall region is somewhat higher than that in the bulk region, due to the effect of the column wall.
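Given a reconstructed 2-D porosity map of the cross section, the circumferential averaging and the cross-sectional average quoted above can be computed along these lines; the grid assumptions (square pixel array centered on the column axis) are mine, not details of the cited reconstruction.

```python
import numpy as np

def radial_profile(porosity_map, n_bins=30):
    """Circumferentially average a reconstructed 2-D porosity map: bin every
    pixel by its distance from the column axis and average within each bin."""
    ny, nx = porosity_map.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - (nx - 1) / 2.0, y - (ny - 1) / 2.0)
    r_max = min(nx, ny) / 2.0
    inside = r <= r_max                               # keep pixels inside the column
    bins = np.minimum((r[inside] / r_max * n_bins).astype(int), n_bins - 1)
    sums = np.bincount(bins, weights=porosity_map[inside], minlength=n_bins)
    counts = np.bincount(bins, minlength=n_bins)
    return sums / np.maximum(counts, 1)               # mean porosity per radial bin

def cross_sectional_average(porosity_map):
    """Area-averaged gas porosity over the column cross section."""
    ny, nx = porosity_map.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - (nx - 1) / 2.0, y - (ny - 1) / 2.0)
    mask = r <= min(nx, ny) / 2.0
    return float(porosity_map[mask].mean())
```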
Abstract:
Wireless sensor networks (WSNs) have attracted growing interest in the last decade as an infrastructure to support a diversity of ubiquitous computing and cyber-physical systems. However, most research work has focused on protocols or on specific applications. As a result, there remains a clear lack of effective and usable WSN system architectures that address both functional and non-functional requirements in an integrated fashion. This poster outlines the EMMON system architecture for large-scale, dense, real-time embedded monitoring. It provides a hierarchical communication architecture together with integrated middleware and command-and-control software. It has been designed to maintain as much flexibility as possible while meeting specific application requirements. EMMON has been validated through extensive analytical, simulation and experimental evaluations, including a 300+ node test-bed, the largest single-site WSN test-bed in Europe.
Abstract:
Wireless sensor networks (WSNs) have attracted growing interest in the last decade as an infrastructure to support a diversity of ubiquitous computing and cyber-physical systems. However, most research work has focused on protocols or on specific applications. As a result, there remains a clear lack of effective, feasible and usable system architectures that address both functional and non-functional requirements in an integrated fashion. In this paper, we outline the EMMON system architecture for large-scale, dense, real-time embedded monitoring. EMMON provides a hierarchical communication architecture together with integrated middleware and command-and-control software. It has been designed to use standard commercially available technologies, while maintaining as much flexibility as possible to meet specific application requirements. The EMMON architecture has been validated through extensive simulation and experimental evaluation, including a 300+ node test-bed, which is, to the best of our knowledge, the largest single-site WSN test-bed in Europe to date.
Abstract:
When considering time-series data of variables describing agent interactions in social neurobiological systems, measures of regularity can provide a global understanding of such system behaviors. Approximate entropy (ApEn) was introduced as a nonlinear measure to assess the complexity of a system's behavior by quantifying the regularity of the generated time series. However, ApEn is not reliable when assessing and comparing the regularity of data series with short or inconsistent lengths, which often occur in studies of social neurobiological systems, particularly in dyadic human movement systems. Here, the authors present two normalized, non-modified measures of regularity derived from the original ApEn, which are less dependent on time-series length. The validity of the suggested measures was tested on well-established series (random and sine) prior to their empirical application describing the dyadic behavior of athletes in team games. The authors use one of the normalized ApEn measures to generate 95th-percentile envelopes that can be used to test whether a particular social neurobiological system is highly complex (i.e., generates highly unpredictable time series). Results demonstrated that the suggested measures may be considered valid instruments for measuring and comparing complexity in systems that produce time series with inconsistent lengths.
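For reference, the original (unnormalized) ApEn that the proposed measures build on can be computed as in the sketch below; the tolerance choice r = 0.2·SD and the default m = 2 are common conventions, and the normalization introduced by the authors is deliberately not reproduced here.

```python
import numpy as np

def apen(series, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D series: phi(m) - phi(m + 1),
    where phi measures the log-likelihood that template vectors of length m
    that are close (within tolerance r, Chebyshev distance) stay close at
    length m + 1. Lower values indicate a more regular series."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()                     # common tolerance choice

    def phi(m):
        # All overlapping templates of length m
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        count = np.zeros(len(templates))
        for i, t in enumerate(templates):
            d = np.max(np.abs(templates - t), axis=1)
            count[i] = np.mean(d <= r)        # self-match included, as in ApEn
        return np.mean(np.log(count))

    return phi(m) - phi(m + 1)
```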