990 results for software reliability


Relevance:

30.00%

Publisher:

Abstract:

Managing software maintenance is rarely a precise task due to uncertainties concerning resource and service descriptions. Even when a well-established maintenance process is followed, the risk of delaying tasks remains if the new services are not precisely described or if resources change during process execution. Moreover, the delay of a task at an early process stage may translate into a different delay at the end of the process, depending on complexity or service reliability requirements. This paper presents a knowledge-based representation (Bayesian Networks) of maintenance project delays, built on specialists' experience, and a corresponding tool to help manage software maintenance projects. (c) 2006 Elsevier Ltd. All rights reserved.
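
As a rough illustration of how such a knowledge-based representation can be queried, the sketch below builds a tiny Bayesian network with pgmpy relating an early task delay and service complexity to the final project delay; the node names, states and probabilities are invented for the example and are not the paper's actual model.

# Minimal sketch: a small Bayesian network relating an early task delay and
# service complexity to the final project delay. Names and probabilities are
# illustrative only, not the paper's model.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("EarlyDelay", "FinalDelay"), ("Complexity", "FinalDelay")])

cpd_early = TabularCPD("EarlyDelay", 2, [[0.7], [0.3]])    # states: no / yes
cpd_complex = TabularCPD("Complexity", 2, [[0.6], [0.4]])  # states: low / high
cpd_final = TabularCPD(
    "FinalDelay", 2,
    # P(FinalDelay | EarlyDelay, Complexity), one column per parent combination
    [[0.9, 0.7, 0.5, 0.2],   # no final delay
     [0.1, 0.3, 0.5, 0.8]],  # final delay
    evidence=["EarlyDelay", "Complexity"], evidence_card=[2, 2],
)
model.add_cpds(cpd_early, cpd_complex, cpd_final)
model.check_model()

# Given that an early task slipped and the service is complex, how likely is a
# late delivery?
print(VariableElimination(model).query(["FinalDelay"],
                                       evidence={"EarlyDelay": 1, "Complexity": 1}))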

Relevance:

30.00%

Publisher:

Abstract:

Relevant results for (sub-)distribution functions related to parallel systems are discussed. The reverse hazard rate is defined using the product integral. Consequently, the restriction of absolute continuity for the involved distributions can be relaxed; the only remaining restriction is that the sets of discontinuity points of the parallel distributions must be disjoint. Nonparametric Bayesian estimators of all survival (sub-)distribution functions are derived. Dual to series systems, which use minimum lifetimes as observations, parallel systems record the maximum lifetimes. Dirichlet multivariate processes, forming a class of prior distributions, are considered for the nonparametric Bayesian estimation of the component distribution functions and the system reliability. For illustration, two striking numerical examples are presented.
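
For orientation, the standard definitions behind this construction are sketched below in generic notation (which need not match the paper's): in the absolutely continuous case the reverse hazard rate is the density over the distribution function, and the product-integral form lets the same object be defined when F has jumps.

% Generic formulation (illustrative notation, not necessarily the paper's):
% reverse hazard rate, cumulative reverse hazard, and the product-integral
% representation of F; \prod below denotes the product integral over (t, \infty).
\[
  \tilde r(t) = \frac{f(t)}{F(t)} \quad\text{(absolutely continuous case)},
  \qquad
  \tilde\Lambda(t) = \int_{(t,\infty)} \frac{dF(s)}{F(s)},
  \qquad
  F(t) = \prod_{s > t}\bigl(1 - d\tilde\Lambda(s)\bigr).
\]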

Relevance:

30.00%

Publisher:

Abstract:

The main idea of this research is to solve the problem of inventory management for the paper industry SPM PVT Limited. The aim was to find a methodology by which the inventory of raw material could be kept at a minimum level by means of a buffer stock level. The main objective then lies in finding the minimum buffer stock level according to the daily consumption of raw material, finding the Economic Order Quantity (EOQ) and reorder point, and determining how many orders should be placed in a year to control shortages of raw material. In this project, we discuss a continuous review model (deterministic EOQ model) that includes probabilistic demand directly in the formulation, from which we obtain the reorder point and the order-up-to level. The problem was tackled mathematically, and simulation modeling was used where a mathematically tractable solution was not possible. The simulation network was developed with the Awesim software; it is able to predict the buffer stock level based on variable raw material consumption and lead time. The data for this simulation network were collected from industrial engineering personnel and departmental studies of the factory concerned. At the end, we find the optimum order quantity, reorder point and order days.
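
As a quick reminder of the quantities involved, the snippet below computes the textbook EOQ, a reorder point with safety stock, and the resulting number of orders per year under a continuous-review policy; all input figures are illustrative and are not data from the SPM PVT case study.

import math

# Textbook continuous-review (Q, r) quantities; the numbers are illustrative,
# not data from the SPM PVT case study.
annual_demand = 12_000               # units of raw material per year (D)
ordering_cost = 150.0                # cost per order (S)
holding_cost = 2.5                   # holding cost per unit per year (H)
daily_demand = annual_demand / 300   # assumed 300 working days per year
lead_time_days = 7
safety_stock = 1.65 * 15 * math.sqrt(lead_time_days)  # z * sigma_daily * sqrt(L)

eoq = math.sqrt(2 * annual_demand * ordering_cost / holding_cost)
reorder_point = daily_demand * lead_time_days + safety_stock
orders_per_year = annual_demand / eoq

print(f"EOQ = {eoq:.0f} units, reorder point = {reorder_point:.0f} units, "
      f"{orders_per_year:.1f} orders/year")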

Relevance:

30.00%

Publisher:

Abstract:

This dissertation aims to develop software for a communication system based on a wireless sensor network (WSN) to track analog and digital variables and to control the gas-flow valve in artificial oil-lift units of the Plunger Lift type. The motivation is that, in the studied plant configuration, the sensors communicate with the PLC (Programmable Logic Controller) through cables and pipelines, which makes any change to that system, such as altering its layout, difficult, besides the inconveniences arising from the nature of the site, such as nearby animals that tend to destroy the cables interconnecting the sensors and the PLC. For the software development, a polling communication method was used over the SMAC protocol (Simple Medium Access Control, IEEE 802.15.4 standard) in the CodeWarrior environment, which generated the firmware loaded into the WSN transceivers present in the MC13193-EVK kit (all items described above are products of Freescale Semiconductor Inc.). The network monitoring and parameterization application was developed in LabVIEW software from National Instruments. The results were obtained by observing the behavior of the proposed sensor network, focusing on aspects such as the number of packets received and lost indoors and outdoors, the general reliability of data transmission, coexistence with other types of wireless networks, and power consumption under different operating conditions. The results were considered satisfactory, showing the software's efficiency in this communication system.
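
To illustrate the polling scheme and the received/lost-packet bookkeeping mentioned above, the Python sketch below has a coordinator poll each node in turn and tally replies; the node names and loss rate are hypothetical, and the real implementation is C firmware on the MC13193 transceivers (SMAC / IEEE 802.15.4), not Python.

import random
import time
from typing import Optional

# Illustrative sketch of polling-based data collection with received/lost
# counters; node names and the simulated loss rate are hypothetical.
NODES = ["sensor_pressure", "sensor_temperature", "valve_controller"]

def request_sample(node: str) -> Optional[bytes]:
    """Stand-in for a polled data request; randomly drops replies to mimic RF loss."""
    time.sleep(0.001)
    return b"\x01\x02" if random.random() > 0.1 else None

received = {n: 0 for n in NODES}
lost = {n: 0 for n in NODES}

for _ in range(100):                 # 100 polling cycles
    for node in NODES:
        reply = request_sample(node)
        if reply is None:
            lost[node] += 1          # no answer within the polling window
        else:
            received[node] += 1

for node in NODES:
    total = received[node] + lost[node]
    print(f"{node}: {received[node]}/{total} packets received")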

Relevance:

30.00%

Publisher:

Abstract:

This work presents a design method to build software components, from the functional software model down to the assembly-code level, in a rigorous fashion. The method is based on the B method, which was developed with the support and interest of British Petroleum (BP). One goal of this methodology is to contribute to solving an important problem known as the Verifying Compiler. In addition, this work describes a formal model of the Z80 microcontroller and of a real system from the petroleum area. To achieve this goal, the formal model of the Z80 was developed and documented, as it is a key component for verification down to the assembly level. To improve the methodology, it was applied to a petroleum production test system, which is presented in this work. Part of the technique is performed manually; however, most of these activities can be automated by a specific compiler, and the formal modelling of the microcontroller and of the production test system should provide relevant knowledge and experience for the design of such a compiler. In summary, this work should improve the viability of meeting one of the most stringent demands of formal verification: speeding up the verification process, reducing design time, and increasing the quality and reliability of the final software product. All these qualities are very important for systems that involve serious risks or need high confidence, which is very common in the petroleum industry.

Relevance:

30.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to investigate the reliability of visual and digital methods to assess marginal microleakage in vitro. Materials and Methods: Typical Class V preparations were made in bovine teeth and filled with composite resin. After dye penetration (0.5% basic fuchsin), the teeth were sectioned and the 53 obtained fragments were assessed by a visual method (stereomicroscope) and a digital method (Image Tool Software®, ITS; University of Texas Health Science Center at San Antonio Dental School, USA). Two calibrated examiners (A and B) evaluated dye penetration by means of a stereomicroscope at ×20 magnification (scores) and by the ITS (millimeters). Intra- and inter-examiner agreement was estimated with Kappa statistics (κ) and the intraclass correlation coefficient (ρ). Results: For the visual method, intra-examiner agreement was almost perfect (κA = 0.87) and substantial (κB = 0.76) for examiners A and B, respectively. Inter-examiner agreement showed almost perfect reliability (κ = 0.84). For the digital method, intra-examiner agreement was almost perfect for both examiners and equal to ρ = 0.99, as was the inter-examiner agreement. Conclusion: The visual (stereomicroscope) and digital (ITS) methods showed high levels of intra- and inter-examiner reproducibility when marginal microleakage was assessed.
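
For readers who want to reproduce this kind of agreement analysis, the snippet below shows how Cohen's kappa between two examiners can be computed with scikit-learn; the score vectors are made-up stand-ins for the 53 fragments, not the study's data (the digital measurements in millimeters would instead be compared with an intraclass correlation coefficient).

import numpy as np
from sklearn.metrics import cohen_kappa_score

# Illustrative inter-examiner agreement on ordinal microleakage scores (0-3);
# the data are invented, not the study's measurements.
rng = np.random.default_rng(0)
scores_a = rng.integers(0, 4, size=53)         # examiner A
scores_b = scores_a.copy()                     # examiner B mostly agrees...
scores_b[:5] = (scores_b[:5] + 1) % 4          # ...with a few disagreements

kappa = cohen_kappa_score(scores_a, scores_b)
print(f"Cohen's kappa (examiner A vs. B): {kappa:.2f}")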

Relevance:

30.00%

Publisher:

Abstract:

Next-generation electronic devices have to guarantee high performance while being less power-consuming and highly reliable, for application domains ranging from entertainment to business. In this context, multicore platforms have proven the most efficient design choice, but new challenges have to be faced. The ever-increasing miniaturization of components produces unexpected variations in technological parameters and wear-out, characterized by soft and hard errors. Even though hardware techniques, which lend themselves to being applied at design time, have been studied with the objective of mitigating these effects, they are not sufficient; thus, software adaptive techniques are necessary. In this thesis we focus on multicore task allocation strategies to minimize energy consumption while meeting performance constraints. We first devise a technique based on an Integer Linear Programming formulation, which provides the optimal solution but cannot be applied on-line since the algorithm it needs is too time-demanding; we then propose a sub-optimal two-step technique that can be applied on-line. We demonstrate the effectiveness of the latter solution through an exhaustive comparison against the optimal solution, state-of-the-art policies, and variability-agnostic task allocations, by running multimedia applications on the virtual prototype of a next-generation industrial multicore platform. We also face the problem of performance and lifetime degradation. We first focus on embedded multicore platforms and propose an idleness distribution policy that increases core expected lifetimes by duty-cycling their activity; then, we investigate the use of micro thermoelectric coolers in general-purpose multicore processors to control the temperature of the cores at runtime, with the objective of meeting lifetime constraints without performance loss.
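
A toy version of the kind of ILP allocation step described here is sketched below with PuLP: binary variables assign tasks to cores, the objective is total energy, and a per-core utilization budget stands in for the performance constraint; the numbers and the constraint form are illustrative, not the thesis' actual formulation.

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

# Toy ILP: assign each task to exactly one core, minimize total energy,
# keep each core within a utilization budget. All figures are illustrative.
tasks = ["t0", "t1", "t2", "t3"]
cores = ["c0", "c1"]
energy = {("t0", "c0"): 3, ("t0", "c1"): 5, ("t1", "c0"): 4, ("t1", "c1"): 2,
          ("t2", "c0"): 6, ("t2", "c1"): 4, ("t3", "c0"): 2, ("t3", "c1"): 3}
util = {("t0", "c0"): 0.4, ("t0", "c1"): 0.3, ("t1", "c0"): 0.5, ("t1", "c1"): 0.6,
        ("t2", "c0"): 0.3, ("t2", "c1"): 0.2, ("t3", "c0"): 0.2, ("t3", "c1"): 0.3}

x = LpVariable.dicts("assign", (tasks, cores), cat=LpBinary)
prob = LpProblem("task_allocation", LpMinimize)
prob += lpSum(energy[t, c] * x[t][c] for t in tasks for c in cores)   # total energy
for t in tasks:                                                       # each task on one core
    prob += lpSum(x[t][c] for c in cores) == 1
for c in cores:                                                       # performance budget
    prob += lpSum(util[t, c] * x[t][c] for t in tasks) <= 1.0

prob.solve(PULP_CBC_CMD(msg=False))
print({t: next(c for c in cores if x[t][c].value() >= 0.5) for t in tasks})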

Relevance:

30.00%

Publisher:

Abstract:

Both autonomous mobile robots and remotely operated mobile robots are currently used successfully in a large number of fields, some as disparate as home cleaning, movement of goods in warehouses, or space exploration. However, as in other computing sectors, it is difficult to guarantee the absence of defects in the programs that control these devices. There are different ways of measuring the quality of a system in performing the functions for which it was designed, one of them being reliability. In most physical systems, reliability degrades as the system ages, generally because of wear effects. This does not usually happen in software systems, since their defects are generally not acquired over time but introduced during development. If, within the process of building a software system, attention is focused on the coding stage, a study can be set up to determine the reliability of different algorithms, all valid for the same task, according to the defects that programmers might introduce. Such a basic study could have several applications, for example choosing the algorithm least sensitive to defects for the development of a critical system, or establishing more demanding verification and validation procedures when an algorithm with high sensitivity to defects must be used. This research studies the influence that certain types of software defects have on the reliability of three multivariable speed controllers (PID, Fuzzy and LQR) acting on a specific mobile robot. The hypothesis is that the studied controllers offer different reliability when affected by similar defect patterns, and this has been confirmed by the results obtained. From the viewpoint of experimental planning, the tests needed to determine whether controllers of the same family (PID, Fuzzy or LQR) offered similar reliability under the same experimental conditions were carried out first. Once this was confirmed, a class representative of each controller family was chosen at random to run a more exhaustive battery of tests, in order to obtain data allowing a more complete comparison of the reliability of the controllers under study. Given the impossibility of running a large number of tests on a real robot, and to avoid damaging a device that usually has a significant cost, it was necessary to build a multicomputer simulator of the robot. This simulator was used both to obtain well-tuned controllers and to run the different trials required for the reliability experiment. ABSTRACT Autonomous mobile robots and remotely operated robots are used successfully in very diverse scenarios, such as home cleaning, movement of goods in warehouses or space exploration. However, it is difficult to ensure the absence of defects in programs controlling these devices, as happens in most computer sectors.
There exist different quality measures of a system when performing the functions for which it was designed, among them, reliability. For most physical systems, a degradation occurs as the system ages. This is generally due to the wear effect. In software systems, this does not usually happen, and defects often come from system development and not from use. Let us assume that we focus on the coding stage in the software development process. We could consider a study to find out the reliability of different and equally valid algorithms, taking into account any flaws that programmers may introduce. This basic study may have several applications, such as choosing the algorithm less sensitive to programming defects for the development of a critical system. We could also establish more demanding procedures for verification and validation if we need an algorithm with high sensitivity to programming defects. In this thesis, we studied the influence of certain types of software defects in the reliability of three multivariable speed controllers (PID, Fuzzy and LQR) designed to work in a specific mobile robot. The hypothesis is that similar defect patterns affect differently the reliability of controllers, and it has been confirmed by the results. From the viewpoint of experimental planning, we followed these steps. First, we conducted the necessary tests to determine if controllers of the same family (PID, Fuzzy or LQR) offered a similar reliability under the same experimental conditions. Then, a class representative was chosen at random within each controller family to perform a more comprehensive test set, with the purpose of getting data to compare more extensively the reliability of the controllers under study. The impossibility of performing a large number of tests with a real robot and the need to prevent damage to a device with a significant cost led us to construct a multicomputer robot simulator. This simulator has been used to obtain well-adjusted controllers and to carry out the required reliability experiments.
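
As a concrete (and deliberately tiny) illustration of the kind of defect studied, the sketch below implements a discrete PID speed-control step and a mutant of it with a typical coding slip, a sign error in the integral term; the gains, input error signal and injected defect are invented for the example and are not those of the thesis experiments.

# A healthy discrete PID step and a "mutant" with an injected coding defect
# (wrong sign in the integral accumulation). All values are illustrative.
def pid_step(error, state, kp=1.2, ki=0.4, kd=0.05, dt=0.01):
    state["integral"] += error * dt
    derivative = (error - state["last_error"]) / dt
    state["last_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

def pid_step_faulty(error, state, kp=1.2, ki=0.4, kd=0.05, dt=0.01):
    state["integral"] -= error * dt          # injected defect: sign error
    derivative = (error - state["last_error"]) / dt
    state["last_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

state_ok = {"integral": 0.0, "last_error": 0.0}
state_bad = {"integral": 0.0, "last_error": 0.0}
for _ in range(200):                         # constant speed error of 1.0
    u_ok = pid_step(1.0, state_ok)
    u_bad = pid_step_faulty(1.0, state_bad)
print(f"healthy control output: {u_ok:.2f}, faulty control output: {u_bad:.2f}")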

Relevance:

30.00%

Publisher:

Abstract:

Computing is becoming the fifth utility (along with gas, water, electricity and telephony), partly because of the impact of Cloud Computing on most organizations. This kind of computing is used by an ever-growing range of systems, including Critical Systems. This has an impact on the internal complexity and reliability of the organization's systems and of those offered to clients. This work investigates the use of Cloud Computing by critical systems, focusing on the dependencies and especially on the reliability of these systems. Some examples of its use are presented and, although its adoption in critical systems is not widespread, its potential impact is outlined. The objective of this work is first to define a model that can quantitatively represent the reliability interdependencies for organizations using these systems, and then to apply this model to a critical system from the healthcare field and show its results. The concepts of "macro-dependability" and "micro-dependability" are introduced in the model to define interdependency and to analyse the reliability of systems that depend on other systems. ABSTRACT With the increasing utilization of Internet services and cloud computing by most organizations (both private and public), it is clear that computing is becoming the 5th utility (along with water, electricity, telephony and gas). These technologies are used for almost all types of systems, and their number is increasing, including Critical Infrastructure systems. Even if Critical Infrastructure systems appear not to rely directly on cloud services, there may be hidden inter-dependencies. This is true even for private cloud computing, which seems more secure and reliable. Critical systems may have begun in some cases with a clear and simple design, but evolved, as described by Egan, into "rafted" networks. Because they are usually controlled by one or a few organizations, their dependencies can be understood even when they are complex systems. The organization oversees and manages changes. These CI systems have been affected by the introduction of new ICT models such as global communications, PCs and the Internet. Virtualization took more time to be adopted by critical systems, due to their strategic nature, but once these technologies had been proven in other areas, they were eventually adopted as well, for reasons such as cost. A new technology model, based on several earlier technologies (virtualization, distributed and utility computing, web and software services) offered in new ways, is now emerging and is called cloud computing. Organizations are migrating more services to the cloud; this will have an impact on their internal complexity and on the reliability of the systems they offer to the organization itself and to their clients. This added complexity and the associated reliability risks are not always recognized. Furthermore, when two or more CI systems interact, the risks of one can affect the rest, so the risks are shared. This work investigates the use of cloud computing by critical systems, focusing on the dependencies and reliability of these systems. Some examples are presented together with the associated risks. A framework is introduced for analysing the dependability and resilience of a system that relies on cloud services and how to improve them.
As part of the framework, the concepts of micro- and macro-dependability are introduced to explain the internal and external dependability on services supplied by an external cloud. A pharmacovigilance model system has been used for framework validation.
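
To make the macro-dependability idea tangible, the back-of-the-envelope sketch below composes the availability of a critical service that needs its internal application and two external cloud services simultaneously (series composition); the availability figures are hypothetical.

# Series composition of availabilities: the service is up only if every
# dependency (internal and external) is up. The figures are hypothetical.
deps = {
    "internal_app": 0.9995,
    "cloud_database": 0.9990,
    "cloud_messaging": 0.9995,
}
system_availability = 1.0
for name, availability in deps.items():
    system_availability *= availability      # all dependencies must be up

downtime_hours = (1 - system_availability) * 365 * 24
print(f"availability = {system_availability:.4f} "
      f"(~{downtime_hours:.1f} h expected downtime per year)")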

Relevance:

30.00%

Publisher:

Abstract:

The aim of the paper is to discuss the use of knowledge models to formulate general applications. First, the paper presents the recent evolution of the software field, where increasing attention is paid to conceptual modeling. Then, the current state of knowledge modeling techniques is described, where modern knowledge acquisition techniques and supporting tools provide increased reliability. The KSM (Knowledge Structure Manager) tool is described next. First, the concept of knowledge area is introduced as a building block that groups methods to perform a collection of tasks together with the bodies of knowledge providing the basic methods to perform the basic tasks. Then, the CONCEL language to define vocabularies of domains and the LINK language for method formulation are introduced. Finally, the object-oriented implementation of a knowledge area is described and a general methodology for application design and maintenance supported by KSM is proposed. To illustrate the concepts and methods, an example of a system for intelligent traffic management in a road network is described. This example is followed by a proposal of generalization for reuse of the resulting architecture. Finally, some concluding comments are offered about the feasibility of using the knowledge modeling tools and methods for general application design.
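
The sketch below is a generic object-oriented rendering of the knowledge-area idea (a domain vocabulary plus the methods that perform its tasks, composable from sub-areas), with a toy usage inspired by the traffic-management example; it is not KSM's actual API nor the CONCEL/LINK languages, just an illustration.

# Generic sketch of a "knowledge area": vocabulary + task methods + sub-areas.
# Not KSM's real classes; names and the example are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class KnowledgeArea:
    name: str
    vocabulary: Dict[str, str] = field(default_factory=dict)       # term -> definition
    methods: Dict[str, Callable[..., object]] = field(default_factory=dict)
    sub_areas: List["KnowledgeArea"] = field(default_factory=list)

    def perform(self, task: str, **kwargs):
        """Run a task with this area's method, or delegate to a sub-area."""
        if task in self.methods:
            return self.methods[task](**kwargs)
        for sub in self.sub_areas:
            try:
                return sub.perform(task, **kwargs)
            except KeyError:
                continue
        raise KeyError(f"no method for task {task!r}")

# Toy usage inspired by the traffic-management example.
congestion = KnowledgeArea(
    "congestion_diagnosis",
    vocabulary={"occupancy": "fraction of time a loop detector is covered"},
    methods={"diagnose": lambda occupancy: "congested" if occupancy > 0.35 else "free-flow"},
)
traffic = KnowledgeArea("traffic_management", sub_areas=[congestion])
print(traffic.perform("diagnose", occupancy=0.5))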

Relevance:

30.00%

Publisher:

Abstract:

Agile development methodologies have experienced a great boom in industrial environments in recent years because of the speed and reliability of the development processes they propose. The DevOps philosophy, and specifically the methodologies derived from it such as Continuous Delivery or Continuous Deployment, promote fully automated management of the application lifecycle, from the source code to the applications running in production environments. Automation is seen as a means to produce repeatable, reliable and fast processes. However, not all parts of the Continuous methodologies are completely automated. In particular, managing the configuration of runtime parameters is a problem whose impact has been increased by the elasticity and scalability provided by cloud computing technologies. Most current deployment tools can automate the deployment of runtime parameter configuration, but they offer no support for setting those parameters or validating the files they deploy, mainly because of the wide range of configuration options and the fact that the value of many of those parameters is set according to preferences expressed by the user. This makes it seem that any solution to the problem must be tailored to a specific application instead of offering a general solution. With the aim of solving this problem, I propose a configuration model that can be inferred from existing configuration instances, can reflect user preferences, and can be used to ease configuration processes. The configuration model can be used as the basis of an interactive configuration process capable of guiding a human operator through the configuration of an application for its deployment in a given environment, or of automatically detecting configuration changes and producing a valid configuration that accommodates those changes. Moreover, the configuration model should be managed like any other software artifact and incorporated into the usual management practices. For this reason I also propose a service management model that includes runtime parameter configuration information and is able to describe and manage current architectural proposals such as microservices architectures. ABSTRACT Agile development methodologies have risen in popularity within the industry in recent years due to the speed and reliability of the processes they propose. The DevOps philosophy, and specifically the methodologies derived from it such as Continuous Delivery and Continuous Deployment, push for a totally automated management of the application lifecycle, from the source code to the software running in production environments. Automation in this regard is used as a means to produce repeatable, reliable and fast processes. However, not all parts of the Continuous methodologies are completely automated. In particular, management of runtime parameter configuration is a problem whose impact on the deployment process has increased due to the scalability and elasticity provided by cloud technologies.
Most deployment tools nowadays can automate the deployment of runtime parameter configuration, but they offer no support for parameter setting or configuration validation, as the range of different configuration options and the fact that the value of many of those parameters is based on user preference seem to imply that any solution to the problem will have to be tailored to a specific application. With the aim of solving this problem, I propose a configuration model that can be inferred from existing configurations and reflect user preferences in order to ease the configuration process. The configuration model can be used as the basis of an interactive configuration process capable of guiding a human operator through the configuration of an application for its deployment in a specific environment, or of automatically detecting configuration changes and producing valid runtime parameter configurations that take those changes into account. Additionally, the configuration model should be managed as any other software artefact and should be incorporated into current management practices. I also propose a service management model that includes the configuration information and that is able to describe and manage current architectural practices such as the microservices architecture.
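
To give a feel for what "inferring a configuration model from existing instances" can mean in practice, the sketch below derives parameter names, types and observed values from a few example configurations and uses them to validate a new one; the parameters and inference rules are illustrative, not the model proposed in the thesis.

from collections import defaultdict

# Infer a very simple configuration model (parameter name -> type and observed
# values) from example instances, then validate a new configuration against it.
# The parameters and rules are illustrative only.
examples = [
    {"port": 8080, "workers": 4, "log_level": "info"},
    {"port": 9090, "workers": 8, "log_level": "debug"},
    {"port": 8080, "workers": 4, "log_level": "warn"},
]

model = defaultdict(lambda: {"type": None, "seen": set()})
for cfg in examples:
    for key, value in cfg.items():
        model[key]["type"] = type(value)
        model[key]["seen"].add(value)

def validate(cfg):
    """Return a list of problems found when checking cfg against the inferred model."""
    problems = [f"unknown parameter {k!r}" for k in cfg if k not in model]
    for key, spec in model.items():
        if key not in cfg:
            problems.append(f"missing parameter {key!r}")
        elif not isinstance(cfg[key], spec["type"]):
            problems.append(f"{key!r} should be {spec['type'].__name__}")
    return problems

print(validate({"port": "8080", "workers": 4}))   # type error + missing log_level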

Relevance:

30.00%

Publisher:

Abstract:

Commercial off-the-shelf microprocessors are the core of low-cost embedded systems due to their programmability and cost-effectiveness. Recent advances in electronic technologies have allowed remarkable improvements in their performance. However, they have also made microprocessors more susceptible to transient faults induced by radiation. These non-destructive events (soft errors) may cause a microprocessor to produce a wrong computation result or lose control of a system, with catastrophic consequences. Therefore, soft error mitigation has become a compulsory requirement for an increasing number of applications, which operate from space down to ground level. In this context, this paper uses the concept of selective hardening, which aims at designing reduced-overhead and flexible mitigation techniques. Following this concept, a novel flexible version of the software-based fault recovery technique known as SWIFT-R is proposed. Our approach makes it possible to select different register subsets from the microprocessor register file to be protected in software. Thus, the design space is enriched with a wide spectrum of new partially protected versions, which offer more flexibility to designers. This makes it possible to find the best trade-offs between performance, code size, and fault coverage. Three case studies have been developed to show the applicability and flexibility of the proposal.
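
The toy sketch below illustrates the general idea behind SWIFT-R-style selective hardening: keep three software copies of each value in a selected register subset and recover by majority voting before use. The "register" names and the fault injection are hypothetical, and real implementations work at the compiler/assembly level rather than in Python.

# Toy selective hardening: triplicate writes to a protected register subset and
# majority-vote on reads (single-fault assumption). Illustrative only.
PROTECTED = {"r1", "r2"}             # selected subset of the register file
copies = {r: [0, 0, 0] for r in PROTECTED}

def write(reg, value):
    if reg in PROTECTED:
        copies[reg] = [value, value, value]   # triplicate on write

def read(reg):
    a, b, c = copies[reg]
    voted = a if a == b or a == c else b      # 2-out-of-3 majority vote
    copies[reg] = [voted, voted, voted]       # recover the corrupted copy
    return voted

write("r1", 42)
copies["r1"][0] ^= 0x4                        # simulate a soft-error bit flip
print(read("r1"))                             # still 42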

Relevance:

30.00%

Publisher:

Abstract:

Software-based techniques offer several advantages for increasing the reliability of processor-based systems at very low cost, but they cause performance degradation and an increase in code size. To meet constraints on performance and memory, we propose SETA, a new control-flow software-only technique that uses assertions to detect errors affecting the program flow. SETA is an independent technique, but it was conceived to work together with previously proposed data-flow techniques that aim at reducing performance and memory overheads. Thus, SETA is combined with such data-flow techniques and submitted to a fault-injection campaign. Simulation and neutron-induced SEE tests show high fault coverage with performance and memory overheads lower than the state of the art.
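
For intuition about assertion-based control-flow checking in general (not SETA's specific scheme), the sketch below keeps a runtime signature that each basic block updates and asserts on entry, so an illegal jump between blocks trips the assertion; the block identifiers and the signature update rule are invented for the example.

# Generic signature-based control-flow checking: each basic block asserts the
# signature it expects and then folds its own id into the running signature.
sig = 0

def enter_block(expected, block_id):
    global sig
    assert sig == expected, f"control-flow error before block {block_id}"
    sig ^= block_id          # fold the block id into the runtime signature

enter_block(0, 0b001)        # block A, expected signature 0
enter_block(0b001, 0b010)    # block B, expected 0b001 (came from A)
enter_block(0b011, 0b100)    # block C, expected 0b011 (came from B)
print("control flow consistent")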