977 results for Embedded systems, real-time control, Scilab, Linux, development
Abstract:
As embedded systems evolve, problems inherent to the technology become important limitations. In less than ten years, chips will exceed the maximum allowed power consumption, limiting performance: even though the resources available per chip keep increasing, operating frequency has stalled. Moreover, as the level of integration increases, it is difficult to keep defect density under control, so new fault-tolerant techniques are required. In this demo work, a new dynamically adaptable virtual architecture (ARTICo3), which allows dynamic and context-aware use of resources, is implemented on a high-performance wireless sensor node (HiReCookie) to run an image processing application.
Abstract:
Federal Highway Administration, Office of Research, Washington, D.C.
Abstract:
Real-time software systems are rarely developed once and left to run. They are subject to changing requirements as the applications they support expand, and they commonly outlive the platforms they were designed to run on. A successful real-time system is duplicated and adapted to a variety of applications: it becomes a product line. Current methods for real-time software development are commonly based on low-level programming languages and involve considerable duplication of effort when a similar system is to be developed or the hardware platform changes. To provide more dependable, flexible and maintainable real-time systems at a lower cost, what is needed is a platform-independent approach to real-time systems development. The development process is composed of two phases: a platform-independent phase, which defines the desired system behaviour and develops a platform-independent design and implementation, and a platform-dependent phase, which maps the implementation onto the target platform. The latter phase should be highly automated. For critical systems, assessing dependability is crucial. The partitioning into platform-dependent and platform-independent phases has to support verification of system properties through both phases.
Abstract:
This thesis makes a contribution to the Change Data Capture (CDC) field by providing an empirical evaluation of the performance of CDC architectures in the context of real-time data warehousing. CDC is a mechanism for providing data warehouse architectures with fresh data from Online Transaction Processing (OLTP) databases. There are two types of CDC architectures: pull architectures and push architectures. There is exiguous data on the performance of CDC architectures in a real-time environment, and performance data is required to determine the real-time viability of the two architectures. We propose that push CDC architectures are optimal for real-time CDC. However, push CDC architectures are seldom implemented because they are highly intrusive towards existing systems and arduous to maintain. As part of our contribution, we pragmatically develop a service-based push CDC solution which addresses the issues of intrusiveness and maintainability. Our solution uses Data Access Services (DAS) to decouple CDC logic from the applications. A requirement for the DAS is to place minimal overhead on a transaction in an OLTP environment. We synthesize the DAS literature and pragmatically develop DAS that efficiently execute transactions in an OLTP environment. Essentially, we develop efficient RESTful DAS, which expose Transactions As A Resource (TAAR). We evaluate the TAAR solution and three pull CDC mechanisms in a real-time environment, using the industry-recognised TPC-C benchmark. The optimal CDC mechanism in a real-time environment will capture change data with minimal latency and will have a negligible effect on the database's transactional throughput. Capture latency is the time it takes a CDC mechanism to capture a data change that has been applied to an OLTP database. A standard definition for capture latency, and how to measure it, does not exist in the field; we create this definition and extend the TPC-C benchmark to make the capture latency measurement. The results from our evaluation show that pull CDC is capable of real-time CDC at low levels of user concurrency. However, as the level of user concurrency scales upwards, pull CDC has a significant impact on the database's transaction rate, which affirms the theory that pull CDC architectures are not viable in a real-time architecture. TAAR CDC, on the other hand, is capable of real-time CDC and places a minimal overhead on the transaction rate, although this performance comes at the expense of CPU resources.
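As a concrete reading of the capture-latency definition above, the sketch below instruments the measurement in Python: a change is committed to an OLTP table along with its commit timestamp, and a pull-style capture loop records when the change becomes visible. The schema, the in-memory database, and the polling approach are illustrative assumptions, not the thesis's actual TPC-C extension.

```python
import time
import sqlite3  # stand-in for an OLTP database; schema is hypothetical

# Illustration of the capture-latency definition:
# latency = (time the CDC mechanism captures a change)
#         - (time the change was committed to the OLTP database).

def apply_change(conn: sqlite3.Connection, order_id: int) -> float:
    """Commit a change and return its commit timestamp."""
    t_commit = time.monotonic()
    conn.execute(
        "INSERT INTO orders (id, committed_at) VALUES (?, ?)",
        (order_id, t_commit),
    )
    conn.commit()
    return t_commit

def capture_change(conn: sqlite3.Connection, order_id: int) -> float:
    """Poll until the change is visible to the capture side (pull-style CDC)."""
    while True:
        row = conn.execute(
            "SELECT committed_at FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        if row is not None:
            return time.monotonic()  # capture timestamp

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, committed_at REAL)")
t_commit = apply_change(conn, order_id=1)
t_capture = capture_change(conn, order_id=1)
print(f"capture latency: {(t_capture - t_commit) * 1000:.3f} ms")
```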
Abstract:
Over the past few decades, we have been enjoying tremendous benefits thanks to the revolutionary advancement of computing systems, driven mainly by remarkable semiconductor technology scaling and increasingly complicated processor architectures. However, the exponentially increased transistor density has directly led to exponentially increased power consumption and dramatically elevated system temperature, which not only adversely impacts the system's cost, performance and reliability, but also increases the leakage and thus the overall power consumption. Today, power and thermal issues pose enormous challenges and threaten to slow down the continuous evolution of computer technology. Effective power/thermal-aware design techniques are urgently demanded at all design abstraction levels, from the circuit level and the logic level to the architectural level and the system level.

In this dissertation, we present our research efforts to employ real-time scheduling techniques to solve resource-constrained, power/thermal-aware design-optimization problems. In our research, we developed a set of simple yet accurate system-level models to capture the processor's thermal dynamics as well as the interdependency of leakage power consumption, temperature, and supply voltage. Based on these models, we investigated the fundamental principles in power/thermal-aware scheduling, and developed real-time scheduling techniques targeting a variety of design objectives, including peak temperature minimization, overall energy reduction, and performance maximization.

The novelty of this work is that we integrate cutting-edge research on power and thermal behaviour at the circuit and architectural levels into a set of accurate yet simplified system-level models, and are able to conduct system-level analysis and design based on these models. The theoretical study in this work serves as a solid foundation to guide the development of power/thermal-aware scheduling algorithms in practical computing systems.
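As a point of reference, system-level thermal models in this line of work commonly take a lumped-RC form coupled with a leakage term linearized in temperature. The equations below sketch that general form; the coefficients and the linearization are standard modeling assumptions, not the dissertation's exact equations.

```latex
% Lumped-RC thermal dynamics: heat in from power, heat out to ambient.
C\,\frac{dT(t)}{dt} = P_{\mathrm{dyn}}(t) + P_{\mathrm{leak}}\bigl(T(t), V_{dd}\bigr) - \frac{T(t) - T_{\mathrm{amb}}}{R}

% Leakage power, linearized in temperature around the operating range:
P_{\mathrm{leak}}(T, V_{dd}) \approx V_{dd}\,(\alpha + \beta\,T)
```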
Abstract:
For the past several decades, we have experienced tremendous growth, in both scale and scope, of real-time embedded systems, thanks largely to advances in IC technology. However, the traditional approach of boosting performance by increasing CPU frequency is a thing of the past. Researchers from both industry and academia are turning their focus to multi-core architectures for continuous improvement of computing performance. In our research, we seek to develop efficient scheduling algorithms and analysis methods for the design of real-time embedded systems on multi-core platforms. Real-time systems are those for which response time is as critical as the logical correctness of computational results. In addition, a variety of stringent constraints, such as power/energy consumption, peak temperature and reliability, are imposed on these systems. Therefore, real-time scheduling plays a critical role in the system-level design of such computing systems. We started our research by addressing timing constraints for real-time applications on multi-core platforms, and developed both partitioned and semi-partitioned scheduling algorithms to schedule fixed-priority, periodic, hard real-time tasks on multi-core platforms. We then extended our research by taking temperature constraints into consideration: we developed a closed-form solution to capture temperature dynamics for a given periodic voltage schedule on multi-core platforms, and also developed three methods to check the feasibility of a periodic real-time schedule under a peak temperature constraint. We further extended our research by incorporating the power/energy constraint, with thermal awareness, into our research problem. We investigated the energy estimation problem on multi-core platforms, and developed a computationally efficient method to calculate the energy consumption of a given voltage schedule on a multi-core platform. In this dissertation, we present our research in detail and demonstrate the effectiveness and efficiency of our approaches with extensive experimental results.
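The energy-estimation step can be illustrated with a small numeric sketch: given a periodic voltage schedule, integrate a conventional power model over each interval. The cubic dynamic-power term (frequency proportional to voltage) and the linearized leakage coefficients below are common modeling conventions used for illustration, not the dissertation's calibrated model.

```python
# Hypothetical energy estimate for a periodic voltage schedule on one core.
# Assumes f proportional to v (so P_dyn ~ c * v**3) and a linearized
# leakage term; both are illustrative conventions, not calibrated values.

C_EFF = 1.0      # effective switched capacitance (illustrative)
LEAK_A = 0.1     # leakage coefficients (illustrative)
LEAK_B = 0.05

def interval_power(v: float) -> float:
    p_dyn = C_EFF * v ** 3          # dynamic power, with f proportional to v
    p_leak = v * (LEAK_A + LEAK_B)  # linearized leakage at fixed temperature
    return p_dyn + p_leak

def schedule_energy(schedule: list[tuple[float, float]]) -> float:
    """schedule: list of (voltage, duration) pairs covering one period."""
    return sum(interval_power(v) * dt for v, dt in schedule)

one_period = [(1.0, 2.0), (0.8, 3.0), (0.6, 5.0)]  # (volts, seconds)
print(f"energy per period: {schedule_energy(one_period):.3f} J (model units)")
```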
Abstract:
The purpose of this research is to develop an optimal kernel for use in a real-time engineering and communications system. Since the application is a real-time system, relevant real-time issues are studied in conjunction with kernel-related issues. The emphasis of the research is the development of a kernel that would not only adhere to the criteria of a real-time environment, namely determinism and performance, but also provide the flexibility and portability associated with non-real-time environments. The essence of the research is to study how features found in non-real-time systems can be applied to real-time systems in order to generate an optimal kernel that provides flexibility and architecture independence while maintaining the performance needed by most engineering applications. Traditionally, development of real-time kernels has been done in assembly language. By utilizing the powerful constructs of the C language, a real-time kernel was developed that addresses the goals of flexibility and portability while still meeting the real-time criteria. The kernel is implemented on 68010/20/30/40 microprocessor-based systems.
Abstract:
Image processing offers unparalleled potential for traffic monitoring and control. For many years engineers have attempted to perfect the art of automatic data abstraction from sequences of video images. This paper outlines a research project undertaken at Napier University by the authors in the field of image processing for automatic traffic analysis. A software-based system implementing TRIP algorithms to count cars and measure vehicle speed has been developed by members of the Transport Engineering Research Unit (TERU) at the University. The TRIP algorithm has been ported and evaluated on an IBM PC platform with a view to hardware implementation of the pre-processing routines required for vehicle detection. Results show that a software-based traffic counting system is realisable for single-window processing. Due to the high volume of data that must be processed for full frames or multiple lanes, real-time operation of the software system is limited, so dedicated hardware must be designed. The paper outlines a hardware design for the implementation of inter-frame and background differencing, background updating and shadow removal techniques. Preliminary results showing the processing time and counting accuracy of the routines implemented in software are presented, and a real-time hardware pre-processing architecture is described.
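The pre-processing routines named here (background differencing with background updating) follow a well-known pattern; the NumPy sketch below is a generic rendition of that pattern, not TERU's TRIP implementation, and its threshold and learning rate are illustrative.

```python
import numpy as np

# Generic sketch of background differencing with a running-average
# background update. Parameters are illustrative, not TRIP's.

ALPHA = 0.05      # background learning rate
THRESHOLD = 25.0  # per-pixel difference threshold (grey levels)

def detect_and_update(frame: np.ndarray, background: np.ndarray):
    """Return a binary vehicle mask and the updated background estimate."""
    diff = np.abs(frame.astype(np.float32) - background)
    mask = diff > THRESHOLD  # candidate vehicle pixels
    # Update the background only where no motion was detected,
    # so slow scene changes are absorbed without erasing vehicles.
    background = np.where(mask, background,
                          (1 - ALPHA) * background + ALPHA * frame)
    return mask, background

# Usage on synthetic frames:
bg = np.full((240, 320), 128, dtype=np.float32)
frame = bg.copy()
frame[100:120, 150:200] += 60  # a bright "vehicle" region
mask, bg = detect_and_update(frame, bg)
print("vehicle pixels detected:", int(mask.sum()))
```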
Abstract:
With the construction of operational oceanography systems, the need for real-time data has become more and more important. A lot of work has been done in the past, within National Oceanographic Data Centres (NODCs) and the International Oceanographic Data and Information Exchange (IODE), to standardise delayed-mode quality control procedures. For quality control procedures applicable in real time (within hours to at most a week from acquisition), which must therefore run automatically, some recommendations were set up for physical parameters, but mainly within individual projects and without consolidation across initiatives. During the past ten years the EuroGOOS community has been working on such procedures within international programmes such as Argo, OceanSITES and GOSUD, and within EC projects such as Mersea, MFSTEP, FerryBox, ECOOP and MyOcean. In collaboration with the FP7 SeaDataNet project, which is standardizing delayed-mode quality control procedures in NODCs, and the MyOcean GMES FP7 project, which is standardizing near real-time quality control procedures for operational oceanography, the DATA-MEQ working group put together this document to summarize the recommendations for near real-time QC procedures that it judged mature enough to be advertised and recommended to EuroGOOS.
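Automatic near real-time QC procedures of the kind recommended here typically include range and spike tests. The sketch below shows a generic spike test of the shape used in Argo-style real-time QC manuals; the threshold value is an illustrative assumption, not one of the DATA-MEQ recommendations.

```python
# Generic spike test: flag point v2 if it departs from the mean of its
# neighbours by more than the local gradient can explain. The threshold
# is illustrative, not a DATA-MEQ recommended value.

def spike_test(values: list[float], threshold: float) -> list[bool]:
    """Return one flag per point; True marks a suspected spike."""
    flags = [False] * len(values)
    for i in range(1, len(values) - 1):
        v1, v2, v3 = values[i - 1], values[i], values[i + 1]
        test = abs(v2 - (v3 + v1) / 2.0) - abs((v3 - v1) / 2.0)
        flags[i] = test > threshold
    return flags

temperatures = [12.1, 12.0, 18.7, 12.2, 12.3]  # degrees C, one obvious spike
print(spike_test(temperatures, threshold=2.0))
# -> [False, False, True, False, False]
```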
Abstract:
For various reasons, many Algol 68 compilers do not directly implement the parallel processing operations defined in the Revised Algol 68 Report. It is still possible, however, to perform parallel processing, multitasking and simulation, provided that the implementation permits the creation of a master routine for the coordination and initiation of processes under its control. The package described here is intended for real-time applications and runs in conjunction with the Algol 68R system; it extends and develops the original Algol 68RT package, which was designed for use with multiplexers at the Royal Radar Establishment, Malvern. The facilities provided, in addition to the synchronising operations, include an interface to an ICL Communications Processor, enabling the abstract processes to be realised as the interaction of several teletypes or visual display units with a real-time program providing a useful service.
Abstract:
Wireless sensor networks (WSNs) are the key enablers of the internet of things (IoT) paradigm. Traditionally, sensor network research has treated such networks as unlike the internet, motivated by power and device constraints. The IETF 6LoWPAN draft standard changes this, defining how IPv6 packets can be efficiently transmitted over IEEE 802.15.4 radio links. Thanks to 6LoWPAN, low-power, low-cost microcontrollers can be connected to the internet, forming what is known as the wireless embedded internet. Another IETF recommendation, CoAP, allows these devices to communicate interactively over the internet. The integration of such tiny, ubiquitous electronic devices with the internet enables interesting real-time applications. This thesis evaluates the performance of a stack consisting of CoAP and 6LoWPAN over the IEEE 802.15.4 radio link using the Contiki OS and the Cooja simulator, along with the CoAP framework Californium (Cf). The stack is then implemented on real hardware, using a Raspberry Pi as a border router with Tmote Sky sensors acting as SLIP radios and as CoAP servers relaying temperature and humidity data. The reliability of the stack was also demonstrated during a scalability analysis conducted on the physical deployment. Interoperability is ensured by connecting the WSN to the global internet using different hardware platforms supported by Contiki, without the specialized gateways commonly found in non-IP-based networks. This work therefore developed and demonstrated a heterogeneous, IP-based wireless sensor network stack, and its performance was analysed both in simulation and on real hardware.
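For a sense of what "communicating interactively over the internet" looks like at the client end, the snippet below queries a CoAP sensor resource from Python using the aiocoap library; this is a compact stand-in (the thesis uses the Java-based Californium framework), and the IPv6 address and resource path are hypothetical.

```python
# Hedged illustration of querying a 6LoWPAN CoAP sensor resource.
# aiocoap is used only as a compact stand-in for the thesis's
# Californium client; address and resource path are hypothetical.
import asyncio
import aiocoap

async def read_temperature():
    context = await aiocoap.Context.create_client_context()
    request = aiocoap.Message(
        code=aiocoap.GET,
        uri="coap://[fd00::212:7401:1:101]/sensors/temperature",  # hypothetical
    )
    response = await context.request(request).response
    print(f"CoAP {response.code}: {response.payload.decode()}")

asyncio.run(read_temperature())
```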
Abstract:
Biofilms are the primary cause of clinical bacterial infections and are impervious to typical amounts of antibiotics, necessitating very high doses for treatment. Therefore, it is highly desirable to develop new alternative methods of treatment that can complement or replace existing approaches using significantly lower doses of antibiotics. Current standards for studying biofilms are based on end-point studies that are invasive and destroy the biofilm during characterization. This dissertation presents the development of a novel real-time sensing and treatment technology to aid in the non-invasive characterization, monitoring and treatment of bacterial biofilms. The technology is demonstrated through the use of a high-throughput bifurcation-based microfluidic reactor that enables simulation of flow conditions similar to indwelling medical devices. The integrated microsystem developed in this work incorporates the advantages of previous in vitro platforms while attempting to overcome some of their limitations. Biofilm formation is extremely sensitive to various growth parameters that cause large variability in biofilms between repeated experiments. In this work, we investigate the use of microfluidic bifurcations for the reduction of biofilm growth variance. The microfluidic flow cell designed here spatially sections a single biofilm into multiple channels using microfluidic flow bifurcation. Biofilms grown in the bifurcated device were evaluated and verified for reduced growth variance using standard techniques such as confocal microscopy. This uniformity in biofilm growth allows for reliable comparison and evaluation of new treatments with integrated controls on a single device. Biofilm partitioning was demonstrated using the bifurcation device by exposing three of the four channels to various treatments. We studied a novel bacterial biofilm treatment, independent of traditional antibiotics, using only small-molecule inhibitors of bacterial quorum sensing (analogs) in combination with low electric fields. Studies using the bifurcation-based microfluidic flow cell integrated with real-time transduction methods, together with macro-scale end-point testing of the combination treatment, showed a significant decrease in biomass compared to the untreated controls and well-known treatments such as antibiotics. To understand the possible mechanism of action of electric field-based treatments, fundamental treatment efficacy studies focusing on the effect of the energy of the applied electrical signal were performed. It was shown that the total energy, and not the type, of the applied electrical signal affects the effectiveness of the treatment. The linear dependence of the treatment efficacy on the applied electrical energy was also demonstrated. The integrated bifurcation-based microfluidic platform is the first microsystem that enables biofilm growth with reduced variance, as well as continuous real-time threshold-activated feedback monitoring and treatment using low electric fields. The sensors detect biofilm growth by monitoring the change in impedance across the interdigitated electrodes. Using the measured impedance change and user inputs provided through a convenient and simple graphical interface, a custom-built MATLAB control module intelligently switches the system into and out of treatment mode. Using this self-governing microsystem, in situ biofilm treatment based on the principles of the bioelectric effect was demonstrated by exposing two of the channels of the integrated bifurcation device to low doses of antibiotics.
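The threshold-activated feedback described above reduces to a simple control loop: monitor the impedance across the interdigitated electrodes and switch treatment on once the relative change crosses a user-set threshold. The Python sketch below is a schematic of that switching logic only; the dissertation's control module is MATLAB-based, and the sensor read-out, baseline, and threshold here are hypothetical placeholders.

```python
# Schematic of threshold-activated feedback monitoring/treatment.
# read_impedance() and set_treatment() are hypothetical placeholders
# for the electrode measurement and the low-electric-field actuation.
import random
import time

BASELINE_OHMS = 10_000.0
THRESHOLD = 0.10  # treat when impedance drops 10% below baseline

def read_impedance() -> float:
    """Placeholder for the interdigitated-electrode measurement."""
    return BASELINE_OHMS * (1.0 - random.uniform(0.0, 0.2))

def set_treatment(on: bool) -> None:
    """Placeholder for enabling/disabling the treatment field."""
    print("treatment", "ON" if on else "OFF")

treating = False
for _ in range(10):  # stand-in for a continuous monitoring loop
    z = read_impedance()
    growth = (BASELINE_OHMS - z) / BASELINE_OHMS  # relative impedance drop
    if growth > THRESHOLD and not treating:
        treating = True
        set_treatment(True)
    elif growth <= THRESHOLD and treating:
        treating = False
        set_treatment(False)
    time.sleep(0.01)
```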
Abstract:
In this report, we develop an intelligent adaptive neuro-fuzzy controller using adaptive neuro-fuzzy inference system (ANFIS) techniques. We begin with a standard proportional-derivative (PD) controller and use the PD controller data to train the ANFIS system to develop a fuzzy controller. We then propose and validate a method to implement this control strategy on commercial off-the-shelf (COTS) hardware. We analyse the choice of filters for attitude estimation; these choices are limited by the complexity of the filter and the computing ability and memory constraints of the microcontroller. Simplified Kalman filters are found to estimate attitude well given these constraints. Using model-based design techniques, the models are implemented on an embedded system. This enables the deployment of fuzzy controllers on enthusiast-grade controllers. We evaluate the feasibility of the proposed control strategy in a model-in-the-loop simulation. We then propose a rapid prototyping strategy, allowing us to deploy these control algorithms on a system consisting of a combination of an ARM-based microcontroller and two Arduino-based controllers. We use the code generation capabilities of MATLAB/Simulink in combination with multiple open-source projects to deploy code to an ARM Cortex-M4 based controller board. We also evaluate this strategy on an ARM A8-based board and a much less powerful Arduino-based flight controller. We conclude by proving the feasibility of fuzzy controllers on COTS hardware, pointing out the limitations of the current hardware, and making suggestions for hardware that we think would be better suited to memory-heavy controllers.
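The PD-to-ANFIS step starts from ordinary PD control data: run the PD controller on the plant and log (error, error rate, command) triples as training pairs. The sketch below shows that data-generation step under stated assumptions; the gains, sampling period, and toy double-integrator plant are illustrative, not the report's values.

```python
# Hypothetical generation of (error, error-rate) -> command training
# pairs from a PD attitude controller, of the kind one might feed to
# ANFIS training. Gains and the toy plant are illustrative.

KP, KD = 2.0, 0.5   # PD gains (illustrative)
DT = 0.01           # sampling period, seconds (illustrative)

def pd_command(error: float, error_rate: float) -> float:
    return KP * error + KD * error_rate

angle, rate, target = 0.3, 0.0, 0.0  # radians, rad/s
training_pairs = []
prev_error = target - angle
for _ in range(500):
    error = target - angle
    error_rate = (error - prev_error) / DT
    u = pd_command(error, error_rate)
    training_pairs.append((error, error_rate, u))
    # Toy double-integrator plant standing in for the vehicle dynamics.
    rate += u * DT
    angle += rate * DT
    prev_error = error

print(f"collected {len(training_pairs)} training samples")
```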
Abstract:
Engine developers have been putting more and more emphasis on the pursuit of maximum thermal and mechanical efficiency in recent years. Research advances have proven the effectiveness of downsized, turbocharged, direct-injection concepts, applied to gasoline combustion systems, in reducing overall fuel consumption while respecting exhaust emissions limits. These new technologies require more complex engine control units. The sound emitted by a mechanical system carries a great deal of information about its operating condition, and it can be used for control and diagnostic purposes. This thesis shows how the functions carried out by the different dedicated sensors usually present on board can be executed, at the same time, using only one multifunction sensor based on low-cost microphone technology. A theoretical background on sound and signal processing is provided in chapter 1. In modern turbocharged downsized GDI engines, the achievement of maximum thermal efficiency is precluded by the occurrence of knock. Knock emits an unmistakable sound perceived by the human ear as a clink. Chapter 2 shows the possibility of using this characteristic sound for knock control purposes, from first experimental assessment tests to the implementation in a real, production-type engine control unit. Chapter 3 focuses on misfire detection. By putting emphasis on the low-frequency domain of the engine sound spectrum, features related to each combustion cycle of each cylinder can be identified and isolated. An innovative approach to misfire detection, which has the advantage of not being affected by road and driveline conditions, is introduced. A preliminary study of air-path leak detection techniques based on acoustic emission analysis has been developed, and the first experimental results are shown in chapter 4. Finally, in chapter 5, an innovative detection methodology based on engine vibration analysis, which can provide useful information about combustion phase, is reported.
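Microphone-based knock detection typically amounts to band-pass filtering the signal around a knock resonance band and thresholding the in-window energy. The sketch below shows that generic scheme only; the resonance band, sample rate, and signals are illustrative assumptions, not the thesis's calibration.

```python
# Generic microphone-based knock metric: energy of the band-passed
# signal inside the knock window. Band and parameters are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44_100            # sample rate, Hz (illustrative)
BAND = (5_000, 9_000)  # assumed knock resonance band, Hz

def knock_intensity(mic: np.ndarray) -> float:
    """Energy of the band-passed microphone signal."""
    sos = butter(4, BAND, btype="bandpass", fs=FS, output="sos")
    filtered = sosfilt(sos, mic)
    return float(np.sum(filtered ** 2))

# Synthetic comparison: background noise vs. a decaying 7 kHz "clink".
t = np.arange(0, 0.02, 1 / FS)
quiet = 0.01 * np.random.randn(t.size)
knock = quiet + 0.5 * np.sin(2 * np.pi * 7_000 * t) * np.exp(-t / 0.004)
print("quiet :", knock_intensity(quiet))
print("knock :", knock_intensity(knock))
```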
Abstract:
Background: Hepatitis C virus (HCV) genotyping is the most significant predictor of the response to antiviral therapy. The aim of this study was to develop and evaluate a novel real-time PCR method for HCV genotyping based on the NS5B region. Methodology/Principal Findings: Two triplex reaction sets were designed, one to detect genotypes 1a, 1b and 3a, and another to detect genotypes 2a, 2b and 2c. This approach had an overall sensitivity of 97.0%, detecting 295 of the 304 tested samples. All samples genotyped by real-time PCR had the same type as that assigned using LiPA version 1 (Line Probe Assay). Although LiPA v. 1 was not able to subtype 68 of the 295 samples (23.0%) and rendered subtype results different from those assigned by real-time PCR for 12/295 samples (4.0%), NS5B sequencing and real-time PCR results agreed in all 146 tested cases. The analytical sensitivity of the real-time PCR assay was determined by end-point dilution of the 5000 IU/ml member of the OptiQuant HCV RNA panel. The lower limit of detection was estimated to be 125 IU/ml for genotype 3a, 250 IU/ml for genotypes 1b and 2b, and 500 IU/ml for genotype 1a. Conclusions/Significance: The total time required to perform this assay was two hours, compared to the four hours required for LiPA v. 1 after PCR amplification. Furthermore, the estimated reaction cost was nine times lower than that of commercial methods available in Brazil. Thus, we have developed an efficient, feasible and affordable method for HCV genotype identification.