908 results for Safety Culture, Safety Leadership, Safety Critical Tasks
Abstract:
Formal specification is vital to the development of distributed real-time systems, as these systems are inherently complex and safety-critical. It is widely acknowledged that formal specification and automatic analysis of specifications can significantly increase system reliability. Although a number of specification techniques for real-time systems have been reported in the literature, most of these formalisms do not adequately address the constraints that distribution and real-time operation impose on specifications. Further, an automatic verification tool is necessary to reduce human errors in the reasoning process. In this regard, this paper is an attempt towards the development of a novel executable specification language, DL, for distributed real-time systems. First, we give a precise characterization of the syntax and semantics of DL. Subsequently, we discuss the problems of model checking, automatic verification of satisfiability of DL specifications, and testing conformance of event traces with DL specifications. Effective solutions to these problems are presented as extensions to the classical first-order tableau algorithm. The use of the proposed framework is illustrated by specifying a sample problem.
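The abstract names only the classical first-order tableau algorithm as the basis for the verification procedures. As a rough, simplified illustration of how tableau expansion decides satisfiability (propositional rather than first-order, with invented names; not the paper's extended algorithm), consider the following sketch:

    # Minimal propositional tableau: formulas are nested tuples, e.g.
    # ("and", "p", "q"), ("or", "p", "q"), ("not", "p"), or an atom "p".
    # Illustrative sketch only; the paper extends the first-order
    # tableau algorithm, which is considerably richer.

    def negate(f):
        return f[1] if isinstance(f, tuple) and f[0] == "not" else ("not", f)

    def satisfiable(branch):
        # Expand one compound formula per call, if any remains.
        for f in branch:
            if isinstance(f, tuple) and f[0] == "and":
                rest = [g for g in branch if g is not f]
                return satisfiable(rest + [f[1], f[2]])
            if isinstance(f, tuple) and f[0] == "or":
                rest = [g for g in branch if g is not f]
                return satisfiable(rest + [f[1]]) or satisfiable(rest + [f[2]])
            if isinstance(f, tuple) and f[0] == "not" and isinstance(f[1], tuple):
                inner = f[1]
                rest = [g for g in branch if g is not f]
                if inner[0] == "not":      # double negation
                    return satisfiable(rest + [inner[1]])
                if inner[0] == "and":      # De Morgan
                    return satisfiable(rest + [("or", negate(inner[1]), negate(inner[2]))])
                if inner[0] == "or":
                    return satisfiable(rest + [("and", negate(inner[1]), negate(inner[2]))])
        # Only literals remain: branch is open unless it has p and not-p.
        atoms = {f for f in branch if isinstance(f, str)}
        return not any(("not", a) in branch for a in atoms)

    assert not satisfiable([("and", "p", ("not", "p"))])   # contradiction
    assert satisfiable([("or", "p", "q")])                 # open branch

The extensions reported in the paper handle first-order and real-time constructs, which this sketch deliberately omits.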
Abstract:
Owing to developments in semiconductor technology, fault tolerance has become important not only for safety-critical systems but also for general-purpose (non-safety-critical) systems. For general-purpose systems, however, rather than guaranteeing that deadlines are always met, it is important to minimize the average execution time (AET) while ensuring fault tolerance. For a given job and a given soft (transient) error probability, we define mathematical formulas for AET that include bus communication overhead, both for voting (active replication) and for rollback-recovery with checkpointing (RRC). For a given multi-processor system-on-chip (MPSoC), we define integer linear programming (ILP) models that minimize AET, including bus communication overhead, when: (1) selecting the number of checkpoints when using RRC, (2) finding the number of processors and the job-to-processor assignment when using voting, and (3) selecting the fault-tolerance scheme (voting or RRC) for each job and defining its usage. Experiments demonstrate significant savings in AET.
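The abstract does not reproduce the AET formulas themselves. Purely for orientation, a standard first-order model of AET under RRC (our notation; the paper's formulas model bus communication overhead in more detail) splits a job of fault-free length $\tau$ into $n$ segments, each ended by a checkpoint of cost $c$; if a segment's execution is hit by a soft error with probability $p_n$, it is re-executed from the last checkpoint, giving

    \[
      \mathrm{AET}(n) \;\approx\; \frac{\tau + n\,c}{1 - p_n},
    \]

so the optimal number of checkpoints trades the checkpointing overhead $n\,c$ against the expected re-execution factor $1/(1-p_n)$, which is exactly the kind of trade-off the ILP models optimize.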
Abstract:
The basic framework and conceptual understanding of the metallurgy of Ti alloys are strong, and this has enabled the use of titanium and its alloys in safety-critical structures such as those in aircraft and aircraft engines. Nevertheless, a focus on cost-effectiveness and the compression of product development time by effectively integrating design with manufacturing in these applications, as well as those emerging in bioengineering, has driven research in recent decades towards a greater predictive capability through the use of computational materials engineering tools. Therefore, this paper examines the complexity and variety of fundamental phenomena in this material system, focusing on phase transformations and mechanical behaviour, in order to delineate the challenges that lie ahead in achieving these goals. (C) 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Abstract:
In recent times, crowdsourcing over social networks has emerged as an active tool for complex task execution. In this paper, we address the problem faced by a planner who must incentivize agents in the network to execute a task and also help in recruiting other agents for this purpose. We study this mechanism design problem under two natural resource optimization settings: (1) cost-critical tasks, where the planner's goal is to minimize the total cost, and (2) time-critical tasks, where the goal is to minimize the total time elapsed before the task is executed. We define a set of fairness properties that should ideally be satisfied by a crowdsourcing mechanism. We prove that no mechanism can satisfy all these properties simultaneously. We relax some of these properties and define their approximate counterparts. Under appropriate approximate fairness criteria, we obtain a non-trivial family of payment mechanisms. Moreover, we provide precise characterizations of cost-critical and time-critical mechanisms.
Abstract:
The two most important digital-system design goals today are to reduce power consumption and to increase reliability. Reductions in power consumption improve battery life in the mobile space, and reductions in energy lower operating costs in the datacenter. Increased robustness and reliability shorten downtime, improve yield, and are invaluable in the context of safety-critical systems. While optimizing towards these two goals is important at all design levels, optimizations at the circuit level have the furthest-reaching effects; they apply to all digital systems. This dissertation presents a study of robust minimum-energy digital circuit design and analysis. It introduces new device models, metrics, and methods of calculation, all necessary first steps towards building better systems, and demonstrates how to apply these techniques. It analyzes a fabricated chip (a full-custom QDI microcontroller designed at Caltech and taped out in 40-nm silicon) by calculating the minimum-energy operating point and quantifying the chip's robustness in the face of both timing and functional failures.
Abstract:
We survey several of the research efforts pursued by the iBench and snBench projects in the CS Department at Boston University over the last half dozen years. These activities use ideas and methodologies inspired by recent developments in other parts of computer science -- particularly in formal methods and in the foundations of programming languages -- but now specifically applied to the certification of safety-critical networking systems. This is research jointly led by Azer Bestavros and Assaf Kfoury with the participation of Adam Bradley, Andrei Lapets, and Michael Ocean.
Abstract:
Insulated gate bipolar transistor (IGBT) modules are important safety-critical components in electrical power systems. Bond wire lift-off, a plastic deformation between a wire bond and the adjacent layers of a device caused by repeated power/thermal cycles, is the most common failure mechanism in IGBT modules. For the early detection and characterization of such failures, it is important to constantly detect or monitor the health state of IGBT modules, and the state of bond wires in particular. This paper introduces eddy current pulsed thermography (ECPT), a nondestructive evaluation technique, for the state detection and characterization of bond wire lift-off in IGBT modules. After the introduction of the experimental ECPT system, numerical simulation work is reported. The presented simulations are based on the 3-D electromagnetic-thermal coupling finite-element method and analyze the transient temperature distribution within the bond wires. This paper illustrates the thermal patterns of bond wires under inductive heating with different wire statuses (lifted-off or well bonded) under two excitation conditions: nonuniform and uniform magnetic field excitation. Experimental results show that uniform excitation of healthy bond wires, using a Helmholtz coil, produces the same eddy currents in each, while different eddy currents are seen in faulty wires. Both experimental and numerical results show that ECPT can be used for the detection and characterization of bond wires in power semiconductors through analysis of the transient heating patterns of the wires. The main impact of this paper is that it is the first time electromagnetic induction thermography, so-called ECPT, has been employed on power/electronic devices. Because of its capability for contactless inspection of multiple wires in a single pass, it opens a wide field of investigation in power/electronic devices for failure detection, performance characterization, and health monitoring.
Abstract:
Citizens' expectations of Information Technologies (ITs) keep increasing, as ITs have become an integral part of our society, serving all kinds of activities, whether professional, leisure, business, or safety-critical. Hence, the limitations of traditional network designs in providing innovative and enhanced services and applications motivated a consensus to integrate all services over packet-switching infrastructures, using the Internet Protocol, so as to leverage flexible control and economic benefits in Next Generation Networks (NGNs). However, the Internet is not capable of treating services differently, even though each service has its own requirements (e.g., Quality of Service - QoS). Therefore, the need for more evolved forms of communication has driven radical changes in architectural and layering designs, which demand appropriate solutions for service admission and network resource control. This Thesis addresses QoS and network control issues, aiming to improve overall control performance in current and future networks which classify services into classes. The Thesis is divided into three parts.

In the first part, we propose two resource over-reservation algorithms: Class-based bandwidth Over-Reservation (COR) and an Enhanced COR (ECOR). Over-reservation means reserving more bandwidth than a Class of Service (CoS) currently needs, so that the QoS reservation signalling rate is reduced. COR and ECOR dynamically define over-reservation parameters for CoSs based on the resource conditions of network interfaces; they aim to reduce QoS signalling and the related overhead without incurring CoS starvation or waste of bandwidth. ECOR differs from COR in further optimizing the minimization of control overhead. We then propose a centralized control mechanism called the Advanced Centralization Architecture (ACA), which uses a single stateful Control Decision Point (CDP) that maintains an up-to-date view of the underlying network topology and the related link resource statistics in real time in order to control the overall network. It is important to mention that, in this Thesis, we use multicast trees as the basis for session transport, not only for group communication purposes, but mainly to pin the packets of a session mapped to a tree so that they follow the desired tree. Our simulation results show a drastic reduction in QoS control signalling and the related overhead without QoS violation or waste of resources. In addition, we provide a generic-purpose analytical model to assess the impact of various parameters (e.g., link capacity, session dynamics, etc.) that generally challenge resource overprovisioning control.

In the second part of this Thesis, we propose a decentralized control mechanism called Advanced Class-based resource OverpRovisioning (ACOR), which aims to achieve better scalability than the ACA approach. ACOR enables multiple CDPs, distributed at the network edge, to cooperate and exchange appropriate control data (e.g., trees and bandwidth usage information) such that each CDP is able to maintain good knowledge of the network topology and the related link resource statistics in real time. From a scalability perspective, ACOR cooperation is selective, meaning that control information is exchanged dynamically only among the CDPs which are concerned (correlated). Moreover, synchronization is carried out through our proposed concept of the Virtual Over-Provisioned Resource (VOPR), which is a share of the over-reservation of each interface assigned to each tree that uses the interface. Thus, each CDP can process several session requests over a tree without requiring synchronization between the correlated CDPs, as long as the VOPR of the tree is not exhausted. Analytical and simulation results demonstrate that aggregate over-reservation control in decentralized scenarios keeps signalling low without QoS violations or waste of resources. We also introduce a control signalling protocol called the ACOR Protocol (ACOR-P) to support the centralized and decentralized designs in this Thesis. Further, we propose an Extended ACOR (E-ACOR), which aggregates the VOPRs of all trees originating at the same CDP, so that more session requests can be processed without synchronization than with ACOR. In addition, E-ACOR introduces a mechanism to efficiently track network congestion information, preventing unnecessary synchronization during congestion periods when VOPRs would be exhausted by every session request. Performance evaluation through analytical and simulation results proves the superiority of E-ACOR in minimizing overall control signalling overhead while keeping all the advantages of ACOR, that is, without incurring QoS violations or waste of resources.

The last part of this Thesis presents the Survivable ACOR (SACOR) proposal, which supports stable operation of the QoS and network control mechanisms in the case of failures and recoveries (e.g., of links and nodes). The performance results show flexible survivability, characterized by fast convergence times and differentiated traffic re-routing under efficient resource utilization, i.e., without wasting bandwidth. In summary, the QoS and architectural control mechanisms proposed in this Thesis provide efficient and scalable support for the key sub-systems of network control (e.g., QoS and resource control, traffic engineering, multicasting, etc.), and thus allow the overall control performance of the network to be optimized.
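As a rough illustration of the over-reservation idea (reserve headroom per class so that most session requests are admitted locally, without new signalling), here is a minimal sketch; the class names and the multiplicative headroom policy are ours, not COR/ECOR's actual algorithms:

    # Illustrative class-based over-reservation on a single link.
    # When a class's reservation is exhausted, we signal once and
    # reserve extra headroom, so subsequent requests need no signalling.

    class OverReservedLink:
        def __init__(self, capacity_mbps, headroom_factor=1.5):
            self.capacity = capacity_mbps
            self.headroom = headroom_factor   # >1 means over-reserve
            self.reserved = {}                # CoS -> reserved bandwidth
            self.used = {}                    # CoS -> admitted bandwidth
            self.signalling_events = 0

        def admit(self, cos, bw):
            used = self.used.get(cos, 0.0)
            reserved = self.reserved.get(cos, 0.0)
            if used + bw > reserved:
                # Extend the reservation: one signalling event,
                # over-reserving beyond the immediate need.
                want = (used + bw) * self.headroom
                others = sum(r for c, r in self.reserved.items() if c != cos)
                new_reserved = min(want, self.capacity - others)
                if used + bw > new_reserved:
                    return False              # admission rejected
                self.reserved[cos] = new_reserved
                self.signalling_events += 1
            self.used[cos] = used + bw
            return True

    link = OverReservedLink(capacity_mbps=1000)
    admitted = sum(link.admit("voice", 1.0) for _ in range(200))
    print(admitted, "sessions admitted with",
          link.signalling_events, "signalling events")

With a multiplicative headroom policy like this one, the number of signalling events grows roughly logarithmically in the number of admitted sessions, which is the effect COR and ECOR aim for; the real algorithms additionally tune the headroom per CoS from interface resource conditions.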
Abstract:
In safety-critical software, failure can have a high price, so such software should be free of errors before it is put into operation. Applying formal methods in the Software Development Life Cycle helps ensure that software for safety-critical missions is ultra-reliable. The PVS theorem prover, a formal-methods tool, can be used for the formal verification of software written in the ADA Language for Flight Software Application (ALFA). This paper describes the modeling of ALFA programs for the PVS theorem prover. An ALFA2PVS translator is developed which automatically converts software in ALFA to a PVS specification. With this approach the software can be formally verified with respect to underflow/overflow errors and divide-by-zero conditions without actually executing the code.
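The abstract does not detail the translation itself. As a purely illustrative sketch of the kind of proof obligations such a tool emits (the expression shape and names here are invented, not ALFA2PVS's; the real tool emits PVS specifications rather than printed strings):

    # Toy generator of verification conditions (VCs) for divide-by-zero
    # and overflow, in the spirit of translators like ALFA2PVS.

    INT_MIN, INT_MAX = -2**31, 2**31 - 1   # assumed 32-bit integer range

    def show(expr):
        if isinstance(expr, str):
            return expr
        op, a, b = expr
        sym = {"div": "/", "add": "+"}[op]
        return f"({show(a)} {sym} {show(b)})"

    def vcs(expr):
        """expr is a nested tuple: ('div', a, b), ('add', a, b), or a name."""
        if isinstance(expr, str):
            return []
        op, a, b = expr
        obligations = vcs(a) + vcs(b)
        if op == "div":
            obligations.append(f"{show(b)} /= 0")
        if op in ("add", "div"):
            obligations.append(
                f"{INT_MIN} <= {show(expr)} AND {show(expr)} <= {INT_MAX}")
        return obligations

    # For x / (y + z): (y + z) must stay in range, (y + z) /= 0,
    # and the quotient must stay in range.
    for vc in vcs(("div", "x", ("add", "y", "z"))):
        print(vc)

Discharging such obligations in a theorem prover is what establishes freedom from underflow/overflow and divide-by-zero without executing the code.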
Abstract:
Real-time computing systems are typically built from synchronization primitives that provide a notion of time in order to coordinate the execution of multiple instruction streams on a processor. When processing is centralized, the time base for these primitives is drawn from the platform's local oscillator, allowing the system's actions to be properly ordered while respecting timing and causality constraints. In distributed systems, however, the problem cannot be solved this way, because of imperfections in the physical devices. Tiny differences in oscillator frequencies make the components' time bases diverge ever further over time, hindering or even preventing a consistent ordering of events. For this reason, clock synchronization is a service of fundamental importance, above all in critical applications, where the required levels of reliability are higher. The present work proposes and implements a communication platform optimized for distributed control systems, which are characterized by highly regular communication behaviour. The goal is to propose a low-level solution that supports the design of distributed systems in the domain of critical applications. The proposed platform, named CASCA, short for "Communication Architecture for Safety-Critical Applications", is in essence a time-triggered extension of the CAN protocol. On top of the original protocol's data-link layer, mechanisms for clock synchronization and initial time-base creation were designed and implemented as a combination of hardware and software. The platform's main characteristics are minimal jitter, an essentially distributed global time base, and temporal partitioning. Different design alternatives were considered, with particular attention to the feasibility of prototyping on FPGA devices for validation purposes and immediate application on reconfigurable platforms. To validate the platform, an elementary three-node system was successfully synthesized on the bench, yielding an essentially distributed time base with a precision better than one microsecond.
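The abstract does not specify the synchronization algorithm. As a generic illustration of why periodic resynchronization is needed and of how a simple distributed correction bounds the divergence (the averaging rule below is a textbook convergence scheme, not necessarily CASCA's; drift values are invented):

    # Simulate clock drift on three nodes and a periodic correction in
    # which each node adjusts to the average of all observed clocks.
    # CASCA's actual mechanism is a hardware/software design over CAN.

    drift_ppm = [+40.0, -25.0, +10.0]       # oscillator drift per node
    clocks = [0.0, 0.0, 0.0]                # local time in microseconds

    def tick(real_us):
        for i, d in enumerate(drift_ppm):
            clocks[i] += real_us * (1 + d * 1e-6)

    def resync():
        avg = sum(clocks) / len(clocks)
        for i in range(len(clocks)):
            clocks[i] = avg                 # idealized, zero-error correction

    for round_ in range(10):
        tick(10_000)                        # 10 ms between resyncs
        spread = max(clocks) - min(clocks)
        print(f"round {round_}: spread before resync = {spread:.3f} us")
        resync()

Without the resync() call the spread grows without bound; with it, the achievable precision is set by the drift accumulated within one resynchronization interval, which is why sub-microsecond precision requires tight, low-jitter rounds.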
Abstract:
Online geographic databases have been growing steadily, as they have become a crucial source of information for both social networks and safety-critical systems. Since the quality of such applications is largely related to the richness and completeness of their data, it becomes imperative to develop adaptable and persistent storage systems, able to make use of several sources of information as well as enabling the fastest possible response from them. This work creates a shared and extensible geographic model, able to retrieve and store information from the major spatial sources available. A geographic-based system also has very high requirements in terms of scalability, computational power, and domain complexity, causing several difficulties for a traditional relational database as the number of results increases. NoSQL systems provide valuable advantages for this scenario, in particular graph databases, which are capable of modeling vast amounts of interconnected data while providing a very substantial performance increase for several spatial requests, such as finding shortest-path routes and performing relationship lookups with high concurrency. In this work, we analyze the current state of geographic information systems and develop a unified geographic model, named GeoPlace Explorer (GE). GE is able to import and store spatial data from several online sources at a symbolic level in both a relational and a graph database, and several stress tests were performed on each in order to identify the advantages and disadvantages of each database paradigm.
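As a toy illustration of the kind of traversal that favours graph stores over relational joins (networkx stands in for a graph database here; the places and distances are invented):

    # Shortest-path routing over a small spatial graph, the kind of
    # request for which graph databases outperform relational joins.
    import networkx as nx

    g = nx.Graph()
    g.add_weighted_edges_from([
        ("station", "square", 400),   # edge weights: distance in metres
        ("square",  "museum", 250),
        ("station", "park",   700),
        ("park",    "museum", 300),
    ])

    route = nx.shortest_path(g, "station", "museum", weight="weight")
    length = nx.shortest_path_length(g, "station", "museum", weight="weight")
    print(route, length)    # ['station', 'square', 'museum'] 650

In a graph database the same lookup is issued as a native path query over stored adjacency, avoiding the recursive self-joins a relational schema would need as the network grows.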
Abstract:
PLCs (Programmable Logic Controllers) perform control operations, receiving information from the environment, processing it, and modifying that same environment according to the results produced. They are commonly used in industry in several applications, from mass transport to the petroleum industry. As the complexity of these applications increases, and as many of them are safety-critical, the need arises to ensure that they are reliable. Testing and simulation are the de facto methods used in industry to do so, but they can leave flaws undiscovered. Formal methods can provide more confidence in an application's safety, since they permit its mathematical verification. We make use of the B Method, which has been successfully applied in the formal verification of industrial systems, is supported by several tools, and can handle decomposition, refinement, and verification of correctness with respect to the specification. The method we developed and present in this work automatically generates B models from PLC programs and verifies them against safety constraints manually derived from the system requirements. The scope of our method is the PLC programming languages presented in the IEC 61131-3 standard, although we are also able to verify programs not fully compliant with the standard. Our approach aims to ease the integration of formal methods in industry by reducing the effort required to perform formal verification of PLCs.
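To make concrete what "verifying a PLC program against a safety constraint" means, here is a minimal sketch that exhaustively checks an invented interlock over all inputs of one scan cycle; a B model discharges the same kind of obligation by proof rather than enumeration, and the logic and constraint below are our own examples, not from the paper:

    # Exhaustive check of a safety constraint over a toy PLC scan cycle.
    from itertools import product

    def scan(start_button, stop_button, guard_closed, motor_was_on):
        """One PLC scan: motor runs only if commanded and guard is closed."""
        run_request = (start_button or motor_was_on) and not stop_button
        return run_request and guard_closed

    # Safety constraint: the motor must never run with the guard open.
    violations = [
        inputs
        for inputs in product([False, True], repeat=4)
        if scan(*inputs) and not inputs[2]    # inputs[2] = guard_closed
    ]
    print("constraint holds" if not violations else violations)

Enumeration stops scaling once timers, counters, and numeric variables enter the program, which is precisely where proof-based approaches such as the B Method earn their keep.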
Abstract:
The effect of the ionosphere on the signals of Global Navigation Satellite Systems (GNSS), such as the Global Positioning System (GPS) and the proposed European Galileo, is dependent on the ionospheric electron density, given by its Total Electron Content (TEC). Time-varying ionospheric density irregularities may cause scintillations, which are fluctuations in the phase and amplitude of the signals. Scintillations occur more often at equatorial and high latitudes. They can degrade navigation and positioning accuracy and may cause loss of signal tracking, disrupting safety-critical applications such as marine navigation and civil aviation. This paper addresses the results of initial research carried out on two fronts that are relevant to GNSS users if they are to counter ionospheric scintillations, i.e., forecasting them and mitigating their effects. On the forecasting front, the dynamics of scintillation occurrence were analysed during the severe ionospheric storm that took place on the evening of 30 October 2003, using data from a network of GPS Ionospheric Scintillation and TEC Monitor (GISTM) receivers set up in Northern Europe. Previous results [1] indicated that GPS scintillations in that region can originate from ionospheric plasma structures from the American sector. In this paper we describe experiments that enabled confirmation of those findings. On the mitigation front, we used the variance of the output error of the GPS receiver DLL (Delay Locked Loop) to modify the least-squares stochastic model applied by an ordinary receiver to compute position. This error was modelled, according to [2], as a function of the S4 amplitude scintillation index measured by the GISTM receivers. An improvement of up to 21% in relative positioning accuracy was achieved with this technique.
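The abstract only names the technique. In generic terms (our notation, not necessarily the paper's), modifying the least-squares stochastic model means weighting each satellite's observation by the inverse of its tracking-error variance, made scintillation-dependent through S4:

    \[
      \hat{\mathbf{x}} = \left(\mathbf{A}^{\top}\mathbf{W}\mathbf{A}\right)^{-1}
                         \mathbf{A}^{\top}\mathbf{W}\,\mathbf{y},
      \qquad
      \mathbf{W} = \operatorname{diag}\!\left(\sigma_1^{-2},\dots,\sigma_m^{-2}\right),
    \]

where $\mathbf{A}$ is the design matrix, $\mathbf{y}$ the observation vector, and $\sigma_i^2$ the DLL tracking-error variance for satellite $i$, modelled (following [2]) as an increasing function of the measured amplitude-scintillation index $S4_i$, so that strongly scintillating signals are de-weighted in the position solution.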