995 results for Embedded Software
Abstract:
At present, there is a variety of formalisms for modeling and analyzing the communication behavior of components. Owing to the tremendous increase in the size and complexity of embedded systems, accompanied by shorter time-to-market cycles and cost reduction, so-called behavioral type systems are becoming more and more important. This chapter presents an overview and a taxonomy of behavioral types. The intent of this taxonomy is to provide guidance for software engineers and to form the basis for future research.
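To give a flavor of what a behavioral type expresses, here is a minimal sketch in which a type is a finite automaton over message labels and a component's message trace is checked for conformance; the formalism and all names are illustrative, not drawn from any particular system in the survey.

```python
# A behavioral type as a finite automaton over message labels; a trace of
# messages conforms if it follows the transitions and ends in an accepting
# state. Purely illustrative, not any specific formalism from the taxonomy.

class BehavioralType:
    def __init__(self, transitions, start, accepting):
        self.transitions = transitions  # {(state, message): next_state}
        self.start = start
        self.accepting = accepting      # states where a trace may end

    def accepts(self, trace):
        """Return True if the message trace conforms to this type."""
        state = self.start
        for message in trace:
            key = (state, message)
            if key not in self.transitions:
                return False            # illegal message in this state
            state = self.transitions[key]
        return state in self.accepting

# A request/response protocol: every 'req' must be answered by 'ack'
# before the next 'req' may be sent.
req_ack = BehavioralType(
    transitions={("idle", "req"): "wait", ("wait", "ack"): "idle"},
    start="idle",
    accepting={"idle"},
)

assert req_ack.accepts(["req", "ack", "req", "ack"])
assert not req_ack.accepts(["req", "req"])  # protocol violation
```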
Abstract:
Feature modeling, embedded software, software product lines, tool support
Abstract:
Agile software development has grown in popularity since the agile manifesto was declared in 2001. However, there is a strong belief that agile methods are not suitable for embedded, critical, or real-time software development, even though multiple studies and cases show otherwise. This thesis presents a custom agile process that can be used in embedded software development. The presumed unfitness of agile methods for embedded software development has mainly been based on the perception that these methods provide no real control, no strict discipline, and less rigorous engineering practices. One starting point is therefore to provide a light process with a disciplined approach to embedded software development. Agile software development has gained popularity because there are still big issues in software development as a whole: projects fail due to schedule slips, budget overruns, or failure to meet business needs. This does not change when talking about embedded software development; these issues remain valid, with multiple new ones arising from the complex and demanding domain in which embedded software developers work. These issues are another starting point for this thesis. The thesis is based heavily on Feature-Driven Development (FDD), a software development methodology that can be seen as a runner-up to the most popular agile methodologies. FDD as such is quite process-oriented and lacks a few practices commonly considered extremely important in agile development methodologies; for FDD to gain acceptance in the software development community, it needs to be modified and enhanced. This thesis presents an improved custom agile process that can be used in embedded software development projects ranging in size from 10 to 500 persons. The process is based on Feature-Driven Development, complemented where suitable by parts of Extreme Programming, Scrum, and Agile Modeling. Finally, the thesis presents how the new process responds to the common issues in embedded software development. The work of creating the new process is evaluated in a retrospective, and guidelines for such process-creation work are introduced. These emphasize agility in the process development itself, through early and frequent deliveries, and the teamwork needed to create a suitable process.
Abstract:
The goal of this thesis is to make a case study of the profitability of test automation in the development of embedded software in a real industrial setting. The cost-benefit analysis considers the costs and benefits that test automation brings to software development before the software is released to customers; the potential benefits of test automation for software quality after customer release were not estimated. Test automation is a significant investment which often requires dedicated resources. When done properly, the investment in test automation can produce major cost savings by reducing the need for manual testing effort, especially if the software is developed with an agile development framework. It can reduce the cost of avoidable rework in software development, as test automation enables the detection of construction-time defects at the earliest possible moment. Test automation also has many pitfalls, such as test maintainability and the testability of the software; if those areas are neglected, the investment in test automation may become worthless or may even produce negative results. The results of this thesis suggest that test automation is very profitable at the company under study.
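The core of such a cost-benefit analysis can be captured by a simple break-even calculation; the sketch below illustrates the reasoning with placeholder figures that are not drawn from the case study.

```python
# Break-even reasoning for a test-automation investment: automation pays off
# once cumulative manual-testing savings exceed the up-front investment plus
# ongoing maintenance. All figures are illustrative placeholders, not the
# case study's data.

def break_even_iteration(setup_cost, maintenance_per_iter,
                         manual_cost_per_iter, automated_cost_per_iter):
    """First iteration at which cumulative automation cost drops below
    cumulative manual-testing cost, or None within 100 iterations."""
    for n in range(1, 101):
        automated = setup_cost + n * (maintenance_per_iter + automated_cost_per_iter)
        manual = n * manual_cost_per_iter
        if automated < manual:
            return n
    return None

# e.g. 400 h setup, 5 h/iteration upkeep, 40 h manual vs. 10 h automated runs
print(break_even_iteration(400, 5, 40, 10))  # -> 17
```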
Abstract:
This portfolio thesis describes work undertaken by the author under the Engineering Doctorate program of the Institute for System Level Integration. It was carried out in conjunction with the sponsor company Teledyne Defence Limited. A radar warning receiver is a device used to detect and identify the emissions of radars. Such receivers were originally developed during the Second World War and are found today on a variety of military platforms as part of the platform's defensive systems. Teledyne Defence has designed and built components and electronic subsystems for the defence industry since the 1970s. This thesis documents part of the work carried out to create Phobos, Teledyne Defence's first complete radar warning receiver. Phobos was designed to be the first low-cost radar warning receiver. This was made possible by the reuse of existing Teledyne Defence products, commercial off-the-shelf hardware, and advanced UK government algorithms. The challenges of this integration are described and discussed, with detail given of the software architecture and the development of the embedded application. The performance of the embedded system as a whole is described and qualified within the context of a low-cost system.
Abstract:
Recent trends in embedded system development open new prospects for applications that were not possible in the past. Eye tracking for sleep and fatigue detection has become an important and useful application in industrial and automotive scenarios, since fatigue is one of the most prevalent causes of earth-moving equipment accidents. Typical solutions such as cameras, accelerometers, and dermal analyzers are present on the market but have some drawbacks. This thesis project used the EEG signal, particularly alpha waves, to overcome them, with an embedded software/hardware implementation that detects these signals in real time.
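As a rough illustration of the signal-processing core such a detector needs, the sketch below estimates the relative power in the alpha band (8-12 Hz) over a window of EEG samples and flags fatigue above a threshold; the sampling rate, window length, and threshold are assumptions made for illustration, not the thesis's actual parameters.

```python
# Alpha-band (8-12 Hz) detection on a window of EEG samples via the FFT.
# Sampling rate, window length, and threshold are illustrative assumptions.
import numpy as np

FS = 256             # assumed sampling rate in Hz
ALPHA = (8.0, 12.0)  # alpha band in Hz

def relative_alpha_power(window):
    """Fraction of spectral power in the alpha band for one EEG window."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window)))) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    band = (freqs >= ALPHA[0]) & (freqs <= ALPHA[1])
    total = spectrum[freqs > 0.5].sum()  # ignore DC / slow drift
    return spectrum[band].sum() / total if total > 0 else 0.0

def drowsy(window, threshold=0.4):
    """Flag fatigue when alpha dominates; the threshold is a placeholder."""
    return relative_alpha_power(window) > threshold

# Synthetic 2-second window: a 10 Hz alpha rhythm plus broadband noise.
t = np.arange(2 * FS) / FS
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
print(drowsy(eeg))  # True for this alpha-dominated test signal
```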
Abstract:
Performance optimization of a complex computer system requires an understanding of the system's run-time behavior. As software grows in size and complexity, performance optimization becomes an increasingly important part of the product development process. With the use of more powerful processors, energy consumption and heat generation have also become growing problems, especially in small, portable devices. To limit heat and energy problems, performance-scaling methods have been developed, which further increase system complexity and the need for performance optimization. In this work, a visualization and analysis tool was developed to facilitate the understanding of run-time behavior. In addition, a performance metric was developed that enables different scaling methods to be compared and evaluated independently of the execution environment, based either on an execution trace or on theoretical analysis. The tool presents a trace collected at run time in an easily understandable way. Using three-dimensional graphics, it shows, among other things, the processes, the processor load, the operation of the scaling methods, and the energy consumption. The tool also produces numerical data for a user-selected part of the execution trace, including several relevant performance figures and statistics. The applicability of the tool was examined by analyzing an execution trace obtained from a real device and a simulation of performance scaling. The effect of the scaling mechanism's parameters on the performance of the simulated device was analyzed.
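To make the platform-independent comparison concrete, here is a minimal sketch of a trace-based metric of this general kind: given samples of duration, clock frequency, and CPU utilization, it estimates work done and energy used so that two scaling policies can be compared. The cubic frequency term is a common textbook simplification (dynamic power with voltage scaling roughly proportional to f³), not the metric actually defined in the thesis.

```python
# Compare scaling policies from trace samples of (duration, frequency,
# utilization). Work is executed cycles; energy assumes dynamic power ~ f**3
# (voltage scaling with frequency), a simplifying assumption for illustration.

def trace_metrics(samples):
    """samples: list of (seconds, freq_hz, utilization in [0, 1])."""
    work = sum(dt * f * u for dt, f, u in samples)                 # cycles
    energy = sum(dt * u * (f / 1e9) ** 3 for dt, f, u in samples)  # arb. units
    return work, energy

# Two policies running the same workload: fixed high clock vs. scaled clock.
fixed = [(1.0, 1.0e9, 0.5), (1.0, 1.0e9, 0.5)]
scaled = [(2.0, 0.5e9, 1.0)]
for name, trace in [("fixed", fixed), ("scaled", scaled)]:
    work, energy = trace_metrics(trace)
    print(f"{name}: work={work:.2e} cycles, energy={energy:.2f} units")
# Same work (1.00e+09 cycles) but the scaled policy uses a quarter the energy.
```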
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints, and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. Such analysis can significantly improve software quality and is still a challenging field. This dissertation contributes an architecture-oriented code validation, error localization, and optimization technique that assists the embedded system designer in software debugging, making it more effective at early detection of software bugs that are otherwise hard to detect, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, thus improving both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs.
An incorrect sequence of machine-code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on optimum data allocation to banked memory, resulting in a minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active memory bank state transitions corresponding to each bank-selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler and assembler, and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces state-space creation, contributing to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture-oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices toward correct use of difficult microcontroller features in developing embedded systems.
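The redundancy-detection idea can be illustrated on a straight-line fragment: track the active memory bank implied by each bank-select instruction and flag selections that leave the state unchanged. The sketch below uses simplified PIC16-style mnemonics (BSF/BCF on the STATUS register's RP0 bit) with a single bank bit; the actual tool works over the control flow graph with a relation matrix and state transition diagram, so this linear version only shows the underlying state-transition principle.

```python
# Detect redundant bank-switch instructions in a straight-line fragment by
# tracking the active bank state. Simplified to one bank bit (RP0); purely
# illustrative of the state-transition principle, not the dissertation's tool.

def redundant_bank_switches(instructions):
    """instructions: list of (address, mnemonic); returns flagged addresses."""
    bank = None          # unknown bank state at entry
    redundant = []
    for addr, insn in instructions:
        if insn == "BSF STATUS,RP0":
            if bank == 1:
                redundant.append(addr)  # bank 1 already active
            bank = 1
        elif insn == "BCF STATUS,RP0":
            if bank == 0:
                redundant.append(addr)  # bank 0 already active
            bank = 0
    return redundant

code = [
    (0x00, "BSF STATUS,RP0"),  # select bank 1
    (0x01, "MOVWF TRISB"),     # TRISB lives in bank 1
    (0x02, "BSF STATUS,RP0"),  # redundant: bank 1 still active
    (0x03, "BCF STATUS,RP0"),  # back to bank 0
]
print(redundant_bank_switches(code))  # -> [2]
```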
Abstract:
Software visualization can be of great use for understanding and exploring a software system in an intuitive manner. Spatial representation of software is a promising approach of increasing interest. However, little is known about how developers interact with spatial visualizations that are embedded in the IDE. In this paper, we present a pilot study that explores the use of Software Cartography for program comprehension of an unknown system. We investigated whether developers establish a spatial memory of the system, whether clustering by topic offers a sound base layout, and how developers interact with maps. We report our results in the form of observations, hypotheses, and implications. Key findings are (a) that developers made good use of the map to inspect search results and call graphs, and (b) that developers found the base layout surprising and often confusing. We conclude with concrete advice for the design of embedded software maps.
Abstract:
This Final Degree Project (TFG) arises from the need for security in software construction, which is one of the biggest problems facing the industry today due to the low quality of operating system, embedded, and application software alike. The increasing reliance on software for critical jobs means that the value of software no longer resides solely in its capacity to improve or maintain productivity and efficiency. Instead, its value also stems from its ability to continue operating reliably even in the face of events that threaten it. The ability to trust that software will remain reliable in all circumstances, with a justified level of confidence, is the goal of software security. Software security is important because many critical functions are completely dependent on software, which makes it a very high-value target for attackers, whose motives may be malicious, criminal, litigious, competitive, or terrorist in nature. There are very important sources of best practices, methods, and tools for improvement, covering non-functional requirements, the secure software life cycle, and project management through to development, testing, and deployment, which should be taken into account by developers. This work focuses primarily on producing a best-practice guide from existing information from CERT, CMMI, Mitre, Cigital, HP, and other sources. It also develops a case study on a dynamic or static application in order to exploit its vulnerabilities.
Abstract:
The importance of embedded software is growing, as it is required for a large number of systems. Devising cheap, efficient, and reliable development processes for embedded systems is thus a notable challenge nowadays. Computer processing power is continuously increasing, and as a result it is currently possible to integrate complex systems in a single processor, which was not feasible a few years ago. Embedded systems may have safety-critical requirements, and their failure may result in loss of life or substantial economic loss. The development of these systems requires stringent development processes that are usually defined by suitable standards; in some cases their certification is also necessary. This scenario fosters the use of mixed-criticality systems, in which applications of different criticality levels must coexist in a single system. In these cases it is usually necessary to certify the whole system, including non-critical applications, which is costly. Virtualization emerges as an enabling technology for dealing with this problem: the system is structured as a set of partitions, or virtual machines, that execute with temporal and spatial isolation, so applications can be developed and certified independently.
The development of MCPS (Mixed-Criticality Partitioned Systems) requires additional roles and activities that traditional systems do not. The system integrator has to define the system partitions, and application development has to consider the characteristics of the partition to which each application is allocated. In addition, traditional software process models have to be adapted to this scenario. The V-model, commonly used in embedded systems development, can be adapted to the development of MCPS by enabling the parallel development of applications or the addition of a new partition to an existing system. The objective of this PhD is to improve the available technology for MCPS development by providing a framework tailored to the development of this type of system and by defining a flexible and efficient algorithm for automatically generating system partitionings. The goal of the framework is to integrate all the activities required for developing MCPS and to support the different roles involved in this process. The framework is based on MDE (Model-Driven Engineering), which emphasizes the use of models in the development process, and it provides the basic means for modeling the system, generating system partitions, validating the system, and generating the final artifacts. The framework has been designed to facilitate its extension and its integration with external validation tools; in particular, it can be extended with support for additional non-functional requirements and for new final artifacts, such as additional programming languages or documentation. The framework includes a novel partitioning algorithm, designed to be independent of the types of application requirements and to enable the system integrator to tailor the partitioning to the specific requirements of a system. This independence is achieved by defining partitioning constraints that must be met by the resulting partitioning. The constraints have sufficient expressive capacity to state the most common non-functional requirements, and they can be defined manually by the system integrator or generated automatically from the functional and non-functional requirements of the applications. The partitioning algorithm takes the system models and the partitioning constraints as its inputs and generates a deployment model composed of a set of partitions, each of which is in turn composed of its allocated applications and assigned resources. The partitioning problem, including the applications and constraints, is modeled as a colored graph, in which a proper vertex coloring represents a valid partitioning; a specially designed algorithm generates this coloring and can provide alternative partitionings if required. The framework, including the partitioning algorithm, has been successfully used in the development of two industrial use cases: the UPMSat-2 satellite and the control system of a wind-power turbine. The partitioning algorithm has also been validated on a large number of synthetic scenarios, including complex ones with more than 500 applications.
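The graph-coloring formulation can be made concrete with a small sketch: applications are vertices, each separation constraint is an edge, and a proper vertex coloring assigns applications to partitions. The greedy heuristic and the example application names below are illustrative; the thesis's algorithm additionally handles resource allocation and can produce alternative partitionings.

```python
# Partitioning as graph coloring: vertices are applications, edges are
# separation constraints (e.g., differing criticality levels), and a proper
# coloring maps each application to a partition index. Greedy heuristic and
# names are illustrative only.

def partition(apps, separated):
    """apps: list of names; separated: set of frozenset pairs that must not
    share a partition. Returns {app: partition_index}."""
    conflicts = {a: set() for a in apps}
    for pair in separated:
        a, b = tuple(pair)
        conflicts[a].add(b)
        conflicts[b].add(a)
    coloring = {}
    # Color the most-constrained applications first (a common heuristic).
    for app in sorted(conflicts, key=lambda a: -len(conflicts[a])):
        used = {coloring[n] for n in conflicts[app] if n in coloring}
        coloring[app] = next(c for c in range(len(apps)) if c not in used)
    return coloring

apps = ["attitude_ctrl", "telemetry", "payload", "logging"]
separated = {frozenset(p) for p in [
    ("attitude_ctrl", "payload"),  # critical vs. non-critical
    ("attitude_ctrl", "logging"),
    ("telemetry", "payload"),
]}
print(partition(apps, separated))
# e.g. {'attitude_ctrl': 0, 'payload': 1, 'telemetry': 0, 'logging': 1}
```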
Abstract:
Software Configuration Management is the discipline of managing large collections of software development artefacts from which software products are built. Software configuration management tools typically deal with artefacts at fine levels of granularity, such as individual source code files, and assist with the coordination of changes to such artefacts. This paper describes a lightweight tool designed to be used on top of a traditional file-based configuration management system. The add-on tool support enables users to flexibly define new hierarchical views of product structure, independent of the underlying artefact-repository structure. The tool extracts configuration and change data with respect to the user-defined hierarchy, leading to improved visibility of how individual subsystems have changed. The approach yields a range of new capabilities for build managers and verification and validation teams. The paper includes a description of our experience using the tool in an organization that builds large embedded software systems.
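A minimal sketch of the central idea follows: a user-defined view maps subsystem names to patterns over repository paths, and flat change records from the underlying file-based system are rolled up per subsystem. The view definition and change-log format here are invented for illustration, not the tool's own.

```python
# Roll flat change records up into a user-defined hierarchical view that is
# independent of the repository layout. View and change-log are illustrative.
from fnmatch import fnmatch

view = {
    "PowerTrain/Control": ["src/pt/ctrl/*.c", "src/pt/ctrl/*.h"],
    "PowerTrain/Diagnostics": ["src/pt/diag/*"],
    "HMI": ["src/hmi/*"],
}

changes = [  # (changed file, change-set id) from the artefact repository
    ("src/pt/ctrl/governor.c", "cs-101"),
    ("src/pt/ctrl/governor.h", "cs-101"),
    ("src/pt/diag/obd.c", "cs-102"),
    ("src/hmi/menu.c", "cs-103"),
]

def changes_per_subsystem(view, changes):
    """Report which change sets touched each user-defined subsystem."""
    rollup = {name: set() for name in view}
    for path, cs in changes:
        for name, patterns in view.items():
            if any(fnmatch(path, p) for p in patterns):
                rollup[name].add(cs)
    return {name: sorted(ids) for name, ids in rollup.items()}

print(changes_per_subsystem(view, changes))
# {'PowerTrain/Control': ['cs-101'], 'PowerTrain/Diagnostics': ['cs-102'],
#  'HMI': ['cs-103']}
```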
Abstract:
Embedded software systems in vehicles are of rapidly increasing commercial importance for the automotive industry. Current systems employ a static run-time environment, owing to the difficulty and cost involved in the development of dynamic systems in a high-integrity embedded control context. A dynamic system, in the sense of a dynamic system configuration, would greatly increase the flexibility of the offered functionality and enable customised software configuration for individual vehicles, adding customer value through plug-and-play capability and increased quality due to its inherent ability to adjust to changes in hardware and software. We envisage an automotive system containing a variety of components, from a multitude of organizations, not necessarily known at development time, that dynamically adapts its configuration to suit the run-time system constraints. This paper presents our vision for future automotive control systems, to be investigated in the EU research project DySCAS (Dynamically Self-Configuring Automotive Systems). We propose a self-configuring vehicular control system architecture with capabilities that include automatic discovery and inclusion of new devices, self-optimisation to make best use of the available processing, storage, and communication resources, self-diagnostics, and ultimately self-healing. Such an architecture has benefits extending to reduced development and maintenance costs, improved passenger safety and comfort, and flexible owner customisation. Specifically, this paper addresses the following issues: the state of the art of embedded software systems in vehicles, emphasising the current limitations arising from fixed run-time configurations; and the benefits and challenges of dynamic configuration, giving rise to opportunities for self-healing, self-optimisation, and the automatic inclusion of users' Consumer Electronic (CE) devices. Our proposal for a dynamically reconfigurable automotive software system platform is outlined, and a typical use case is presented to exemplify the benefits of the envisioned dynamic capabilities.
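As a toy illustration of the plug-and-play capability described above, the sketch below shows a registry that matches newly discovered devices against available software components and activates whatever the current configuration can support; the component names and the matching rule are purely hypothetical, not part of the DySCAS architecture.

```python
# A registry that activates software components when a newly discovered
# device offers a matching service. Names and matching rule are hypothetical
# illustrations of plug-and-play, not the DySCAS design.

class Registry:
    def __init__(self):
        self.components = {}  # service name -> component callable
        self.active = {}

    def provide(self, service, component):
        """Register a software component that requires the given service."""
        self.components[service] = component

    def on_device_discovered(self, device, services):
        """Activate every registered component the new device can serve."""
        for service in services:
            if service in self.components and service not in self.active:
                self.active[service] = self.components[service]
                print(f"activated {service} on {device}")

registry = Registry()
registry.provide("media_playback", lambda: "play")
registry.provide("navigation", lambda: "route")

# A passenger's phone joins the vehicle network and offers two services.
registry.on_device_discovered("phone-7f3a", ["media_playback", "telephony"])
# -> activated media_playback on phone-7f3a
```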